Department of Physics, Royal Holloway, University of London, Egham, Surrey, UK, TW20 0EX
Département de physique, Institut quantique, and Regroupement Québécois sur les matériaux de Pointe, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1
Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973-5000, USA
Department of Physics, Royal Holloway, University of London, Egham, Surrey, UK, TW20 0EX
Département de physique, Institut quantique, and Regroupement Québécois sur les matériaux de Pointe, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1
Canadian Institute for Advanced Research, Toronto, Ontario, Canada, M5G 1Z8

Recent quantum-gas microscopy of ultracold atoms and scanning tunneling microscopy of the cuprates reveal new detailed information about doped Mott antiferromagnets, which can be compared with calculations. Using cellular dynamical mean-field theory, we map out the antiferromagnetic (AF) phase of the two-dimensional Hubbard model as a function of interaction strength U, hole doping δ and temperature T. The Néel phase boundary is non-monotonic as a function of U and δ. Frustration induced by second-neighbor hopping reduces Néel order more effectively at small U. The doped AF is stabilized at large U by kinetic energy and at small U by potential energy. The transition between the AF insulator and the doped metallic AF is continuous. At large U, we find in-gap states similar to those observed in scanning tunneling microscopy. We predict that, contrary to the Hubbard bands, these states are only slightly spin polarized.

Effects of interaction strength, doping, and frustration on the antiferromagnetic phase of the two-dimensional Hubbard model
L. Fratino, M. Charlebois, P. Sémon, G. Sordi, and A.-M. S. Tremblay
December 30, 2023

The quantum mechanics of interacting electrons on a lattice can lead to complex many-body phase diagrams. For example, doping a layered Mott insulator can give rise to antiferromagnetism, pseudogap, unconventional superconductivity and multiple exotic phases <cit.>. The Hubbard model is the simplest model of interacting electrons on a lattice. It can be used for both natural (e.g. cuprates) and artificial (e.g. ultracold atoms) systems <cit.>. Therefore, understanding the phases that appear in this model and the transitions between them is a central programme in condensed matter physics. Here we study the regimes where antiferromagnetic (AF) correlations set in within the two-dimensional (2D) Hubbard model on a square lattice as a function of interaction U, doping δ and temperature T, within cellular dynamical mean-field theory (CDMFT) <cit.>. The motivation for our work is threefold. First, recent advances in ultracold atom experiments can now reach temperatures low enough to detect AF correlations for repulsively interacting Fermi gases <cit.>. Hence, a theoretical characterisation of the AF phase in the whole U-δ-T space might guide ultracold atom experiments that are exploring this uncharted territory. Second, recent tunneling spectroscopy studies <cit.> reveal new details on the evolution of the AF Mott insulator upon doping, thus calling for theoretical explanations. Third, on the theory side we still know little about the detailed boundaries of the AF phase in the whole U-δ-T space of the 2D Hubbard model and the mechanism by which AF is stabilized. Most previous studies with this and other methods focused on T=0 <cit.>.
The negative sign problem hampers the study of finite T, large U and finite doping <cit.>. Our results might serve as a stepping stone for new approaches directed towards including Mott physics and long wavelength fluctuations <cit.>.

Model and method. – We consider the 2D Hubbard model, H = -∑_ijσ t_ij c^†_iσ c_jσ + U∑_i n_i↑ n_i↓ - μ∑_iσ n_iσ, where t_ij=t (t') is the (next) nearest-neighbor hopping, U is the onsite Coulomb repulsion and μ is the chemical potential. Here c^†_iσ (c_iσ) is the creation (destruction) operator on lattice site i and spin σ, and n_iσ is the number operator. We set t=1 as our energy unit. Within the cellular extension <cit.> of dynamical mean-field theory <cit.>, a 2×2 plaquette is embedded in a self-consistent bath. We have successfully benchmarked this approach <cit.> at δ=0, where reliable results are available. We solve the cluster impurity problem using continuous-time quantum Monte Carlo based on the expansion of the hybridization between impurity and bath <cit.>. Symmetry breaking is allowed only in the bath. It is efficient to make use of the C_2v group symmetry with mirrors along the plaquette diagonals <cit.>.

U-T-δ map of the AF phase. – Long-wavelength spin fluctuations lead, in two dimensions, to a vanishing staggered magnetization m_z at finite temperature <cit.>. Nevertheless, m_z = (2/N_c)∑_i (-1)^i (n_i↑ - n_i↓) is non-zero in cold-atom experiments because of finite-size effects. For cuprates, the m_z that we compute becomes non-vanishing at a dynamical mean-field Néel temperature T^d_N where the antiferromagnetic correlation length of the infinite system would start to grow exponentially <cit.>. Coupling in the third dimension leads to true long-range order at a lower temperature. As a first step, m_z is used to map out the AF phase in the U-T-δ space for t'=-0.1. We consider hole doping only (δ=1-n>0) and perform various cuts, i.e. (i) at δ=0 (T-U plane in Fig. <ref>b), (ii) at fixed values of U (T-δ planes in Figs. <ref>c,d), and (iii) at fixed temperature T (δ-U plane at T=1/10 in Fig. <ref>e). These cuts are reported in the U-T-δ space in Fig. <ref>a, where one sees that T_N^d(U,δ) has a global maximum at U ≈ 7 for δ=0. The sign problem prevents convergence below T ≈ 1/20. The value of m_z ≠ 0 is color coded in Figs. <ref>b-e and shown in the Supplemental Material (SM) [See Supplemental Material for the staggered magnetization curves as a function of U and δ for a few temperatures; complementary data for the local DOS.]. The staggered magnetization m_z(U,δ,T) is largest in the δ=0 plane and saturates for large U and low T, as in mean field <cit.>. Our analysis of the AF region in the U-T-δ space highlights two points. First, the overall behavior of m_z differs from that of T_N^d: for example, m_z(U,T=0)|_δ=0 does not scale with either T_N^d(U)|_δ=0 (phase boundary in Fig. <ref>b) or with δ_N^d(U)|_T (phase boundary in Fig. <ref>e). Physically, even if large U creates local moments, T_N^d decreases with U since it is the superexchange J=4t^2/U that aligns these moments at finite temperature. Second, the maxima <cit.> of both T_N^d(U)|_δ=0 and δ_N^d(U)|_T are correlated with the Mott transition that exists at δ=0 in the unstable normal state below T_N^d, suggesting that the hidden Mott transition [see Mott endpoint in Fig. <ref>b,e] drives the qualitative changes in the AF state.
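To make the cluster quantities concrete, the short sketch below diagonalizes an isolated 2×2 Hubbard plaquette and evaluates the staggered-magnetization operator defined above. It is only an illustration of the building blocks: the actual CDMFT calculation embeds the plaquette in a self-consistent bath, allows symmetry breaking only in the bath, and uses a hybridization-expansion continuous-time quantum Monte Carlo solver. The values t=1, U=8 and μ=U/2 below are illustrative choices, not parameters taken from the figures.

import numpy as np

# Isolated 2x2 Hubbard plaquette, exact diagonalization in the full Fock space.
# Modes 0..3: spin-up on sites 0..3; modes 4..7: spin-down on sites 0..3.
L = 4
t, U = 1.0, 8.0          # illustrative values; t is the energy unit as in the text
mu = U / 2.0             # particle-hole symmetric point (half filling)
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]   # perimeter of the plaquette
dim = 2 ** (2 * L)

def sign(state, mode):
    """Fermionic sign (-1)^(number of occupied modes below `mode`)."""
    return -1 if bin(state & ((1 << mode) - 1)).count("1") % 2 else 1

def hop(state, i, j):
    """Apply c_i^dagger c_j; return (new_state, sign) or None if it annihilates."""
    if not (state >> j) & 1:
        return None
    s = sign(state, j); state ^= 1 << j
    if (state >> i) & 1:
        return None
    s *= sign(state, i); state ^= 1 << i
    return state, s

H = np.zeros((dim, dim))
for st in range(dim):
    n_up = [(st >> i) & 1 for i in range(L)]
    n_dn = [(st >> (i + L)) & 1 for i in range(L)]
    H[st, st] = U * sum(u * d for u, d in zip(n_up, n_dn)) - mu * (sum(n_up) + sum(n_dn))
    for (i, j) in bonds:
        for off in (0, L):                       # spin up / spin down
            for a, b in ((i, j), (j, i)):        # both hopping directions
                res = hop(st, a + off, b + off)
                if res is not None:
                    H[res[0], st] += -t * res[1]

E, V = np.linalg.eigh(H)
gs = V[:, 0]

# Staggered magnetization of the text, m_z = (2/N_c) sum_i (-1)^i (n_iup - n_idn);
# it is diagonal in the occupation basis.  On a finite cluster <m_z> = 0 by symmetry,
# so sqrt(<m_z^2>) is quoted as a measure of local AF correlations.
mz = np.array([(2.0 / L) * sum((-1) ** i * (((st >> i) & 1) - ((st >> (i + L)) & 1))
               for i in range(L)) for st in range(dim)])
print("E0 =", E[0], "  sqrt(<m_z^2>) =", np.sqrt(np.sum(gs ** 2 * mz ** 2)))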
It is well known that the increase of T_N^d(U) at small U is explained by the Slater physics of nesting and that the decrease of T_N^d(U) at large U is explained by the Heisenberg physics of superexchange. Hence, the fact that the position of the maximum of T_N^d(U) at δ=0 is controlled by the underlying Mott transition <cit.> reflects the underlying physics. As we saw above (cf. green line in Fig. <ref>e), this difference between small and large U persists upon doping since the range of δ where AF exists first increases with U and then decreases, with the crossover again controlled by the Mott transition at δ=0. In contrast, regardless of the strength of U, T_N^d(δ) monotonically decreases with increasing δ [phase boundaries in Figs. <ref>c,d].

Effect of frustration on T_N^d(U,δ). – We can gain further insight by varying the next-nearest-neighbor hopping t', which frustrates AF order to a degree that depends on the value of U, as we shall see. Having in mind the physics of hole-doped cuprates, here we consider only negative values of t', in the range t' ∈ [-0.5, 0]. Figure <ref>a shows T_N^d(U) at δ=0 for different values of t'. AF now appears at a critical U_c that shifts to higher values of U upon increasing |t'|, in agreement with expectations from the physics of nesting and also with the T=0, DMFT (d=∞) <cit.> and Hartree-Fock (HF) results <cit.>. The T→0 transition at U_c is consistent with first order <cit.> for finite t' (see Fig. 2 in SM <cit.>). U_c is larger than the HF result <cit.> because the vertex is renormalized downward compared to the bare U by fluctuations in other channels <cit.>. We find, once again, that the position of the maximum of T_N^d(U) is correlated with the Mott transition in the underlying normal state. Although frustration reduces T_N^d(U) as expected, the reduction of T_N^d(U) upon increasing |t'| at δ=0 is stronger at small U than at large U, as shown in Fig. <ref>b. Indeed, although at small U deviations from perfect nesting are first order in |t'/t|, at large U the AF arises from localized spins and the correct quantities to compare are J'=4t'^2/U and J=4t^2/U, whose ratio scales as |t'/t|^2. Figure <ref>c shows the doping-dependent T_N^d(δ) at U=16 for different values of t': at our lowest temperature, a fivefold increase of |t'| only approximately halves the critical doping δ_N^d at which the AF phase ends. The robustness of the AF phase at finite δ seems to reflect the robustness at δ=0, since we observe a rigid downward shift of the whole T_N^d(δ) line. The transition at the critical δ is consistent with second order (SM <cit.> Fig. 1d).

AF insulator to AF metal transition. – Having mapped out the Néel state, we next explore its nature by analyzing the local density of states (DOS) N(ω) and the occupation n(μ)=1-δ(μ). First, consider the δ=0 case. For t'=0 we know that CDMFT recovers the AF insulating behavior <cit.>. In principle, at small U and large t', the AF state can have both hole and electron pockets at the Fermi surface. Then the AF state would be metallic even at δ=0 <cit.>. Here we find that the δ=0 solution is insulating for all t' and U we considered. This can be checked from the local DOS and from the occupation shown at t'=-0.1 for U=5 and U=12 in Fig. <ref>. More specifically, the plateau in the occupation at n(μ)=1 in Figs. <ref>c,g signals an incompressible insulator, i.e. the charge compressibility κ=n^-2 dn/dμ vanishes. Second, consider the AF state at finite doping δ≠ 0.
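As an aside, the compressibility criterion κ=n^-2 dn/dμ introduced above can be evaluated directly from any computed or measured n(μ) curve by numerical differentiation. The sketch below does this on made-up data; the toy curve and its numbers are purely hypothetical and are not results of this work.

import numpy as np

# Hypothetical occupation n(mu), e.g. from CDMFT or a quantum-gas experiment.
mu = np.linspace(-1.0, 1.0, 21)
n = 1.0 + 0.15 * np.tanh(3.0 * mu)      # toy curve without a plateau (compressible)

dn_dmu = np.gradient(n, mu)             # numerical derivative dn/dmu
kappa = dn_dmu / n**2                   # charge compressibility kappa = n^-2 dn/dmu

# An incompressible (Mott/AF) insulator shows up as a plateau in n(mu): dn/dmu -> 0,
# hence kappa -> 0; a finite slope signals a compressible metal (kappa > 0).
for m, k in zip(mu[::5], kappa[::5]):
    print(f"mu = {m:+.2f}   kappa = {k:.3f}")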
In this case, n(μ) has a finite slope, signalling a compressible metal, i.e. κ>0. In addition, Figs. <ref>a,e show that the local DOS has a small but finite spectral weight at the Fermi level, indicating a metallic state. Therefore, at δ=0 there is an AF insulator (AF-I), whereas at δ>0 there is an AF metal (AF-M). This also holds in the d→∞ limit <cit.>. What is the nature of the AF-I to AF-M transition driven by doping? Close to n(μ)=1 (δ=1-n=0), the occupation n(μ) is continuous for all T we have explored. As T decreases, the curvature at the transition becomes sharper, suggesting a discontinuous change in slope in the T=0 limit, as expected for a second-order AF-I to AF-M transition. The transition is a crossover at finite T.

Density of states and “in-gap” states. – There are striking differences between the DOS N(ω) of a doped Slater AF (U=5) and a doped Mott AF (U=12). For U=5, the N(ω) spectrum for δ=0 in Fig. <ref>a shows two Bogoliubov peaks along with high-frequency precursors of the Hubbard bands <cit.>. When μ reaches the edge of the lower Bogoliubov peak, metallic behavior is recovered since doped holes appear at ω=0. The rearrangement of the spectral weight is not expected from the HF Slater solution. Upon doping, spectral weight transfers from high to low frequencies: the lower Bogoliubov peak decreases in intensity and moves closer to the Fermi energy ω=0 and, correspondingly, the upper Bogoliubov peak broadens. Fig. <ref>b shows that the upper and lower Bogoliubov peaks have sizeable spin polarization, as in the t'=0 case <cit.>. By contrast, for U=12, the spectra for δ=0 in Fig. <ref>c,d have a clear four-peak structure: two Bogoliubov peaks surrounded by Hubbard bands <cit.>. In the doped case, there is a dramatic redistribution of spectral weight over a large frequency range across the AF gap, reminiscent of the Eskes-Meinders-Sawatzky picture <cit.>: the lower Bogoliubov peak sharpens and a new spectral feature, the “in-gap state”, appears between the upper Hubbard band and the Fermi level located at ω=0. In that picture, the lower Hubbard band comes mostly from removing electrons from singly-occupied sites, while the upper Hubbard band comes mostly from adding electrons to singly-occupied sites. Given the large local moment at this value of U in the AF, this is consistent with the fact that these Hubbard bands are strongly spin polarized, as seen in Fig. <ref>f. On the other hand, the “in-gap states” come, in that same picture <cit.>, from adding electrons to empty sites, which explains the near absence of spin polarization observed in Fig. <ref>f. Finally, with further doping, the lower Bogoliubov peak and the Hubbard bands decrease in intensity in favor of the in-gap state above ω=0. These results are compatible with the variational approach in Ref. wuAFevolution.

Stability of the Néel phase. – To assess the origin of the stability of the AF phase, we compare the kinetic, potential and total energy differences between the AF phase and the underlying normal phase <cit.> as a function of doping δ at low T. Figs. <ref>d,h show that the crossover in the source of stability of the AF phase that we identified earlier <cit.> at δ=0 persists for all doping levels. Therefore the hidden Mott transition at δ=0 reorganises the energetics both of the AF-I at δ=0 and of the AF-M away from δ=0.

Discussion. – Our results are relevant for experiments with ultracold atoms and with cuprates. The first observation of the AF phase in a 2D square optical lattice appeared recently <cit.>.
Persistence of AF correlations was found up to δ≈ 0.15 for U=7.2 and t'=0. Consistency of this finding with our results is promising. Our U-δ-T map of the AF phase can be explored further with ultracold atom systems since U, δ and T can be tuned. Specifically, the nonmonotonic behavior of δ_N^d(U) and the stronger reduction of T^d_N(U) with increasing |t'| at small U are testable predictions. When comparing our U-T-δ map with experiments on hole-doped cuprates, one should focus on the multilayer case where interlayer magnetic exchange favors mean-field-like behavior. In the n=5 CuO_2 layer cuprates, AF persists up to δ≈ 0.10 and it decreases with decreasing n <cit.>. Our predictions for m_z could be compared with neutron scattering and muon spin rotation experiments in this regime. Other effects that we did not take into account and that can decrease δ_N^d(U) are the development of incommensurate spin-density waves and competition with other phases. The n(μ) curve and the resulting charge compressibility κ, which describe the continuous transition between an AF insulator and an AF metal as a function of doping, are another prediction that can be tested with ultracold atoms. In principle, such measurements are also possible in cuprates <cit.>. The in-gap state feature that we found in the DOS has a position and a width that are compatible with recent scanning tunnelling microscopy experiments on lightly doped AF Mott insulators <cit.>. In addition, the observed transfer of spectral weight from high energy to low energy as a function of doping is consistent with our results. We predict that a spin-polarized STM probe will find that these states are essentially unpolarized, by contrast with the lower and upper Hubbard bands [In these experiments, the DOS at ω=0 vanishes while in our case it is small. The inhomogeneity of the samples suggests that disorder can induce localisation effects at ω=0, as pointed out in Ref. wuAFevolution. The AF phase of cuprates at finite doping is generally considered metallic <cit.>.]. Our prediction that the doped AF state is stabilized by a gain in kinetic energy for large U and by a gain in potential energy for small U can in principle be tested by optical spectroscopy in cuprates <cit.>. If the correlation strength U is lower in electron- than in hole-doped cuprates, as has been proposed <cit.>, our data suggest a potential-energy-driven AF in electron-doped cuprates and a kinetic-energy-driven AF in hole-doped cuprates, similar to earlier findings on the Emery model <cit.>. Moving forwards, it will be important to study the interplay between antiferromagnetism, pseudogap and superconductivity <cit.>. We acknowledge discussions with D. Sénéchal and K. Miyake. This work has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant RGPIN-2014-04584, the Canada First Research Excellence Fund and by the Research Chair in the Theory of Quantum Materials. Simulations were performed on computers provided by the Canadian Foundation for Innovation, the Ministère de l'Éducation des Loisirs et du Sport (Québec), Calcul Québec, and Compute Canada.

[Keimer et al.(2015)Keimer, Kivelson, Norman, Uchida,and Zaanen]keimerRev author author B. Keimer, author S. A. Kivelson, author M. R.
Norman, author S. Uchida, and author J. Zaanen, 10.1038/nature14165 journal journal Nature volume 518, pages 179 (year 2015)NoStop [Anderson(1987)]Anderson:1987 author author P. W. Anderson, 10.1126/science.235.4793.1196 journal journal Science volume 235, pages 1196 (year 1987)NoStop [Jaksch and Zoller(2005)]jzHM author author D. Jaksch and author P. Zoller, @noopjournal journal Ann. Phys. volume 315, eid 52 (year 2005)NoStop [Tremblay(2013)]AMJulich author author A.-M. S.Tremblay, in http://juser.fz-juelich.de/record/137827/files/FZJ-2013-04137.pdf?version=1 booktitle Emergent Phenomena in Correlated Matter Modeling and Simulation, Vol. volume 3, editor edited by editor E. Pavarini, editor E. Koch,and editor U. Schollwöck (publisher Verlag des Forschungszentrum, address Jülich,year 2013) Chap. chapter 10NoStop [Georges and Giamarchi(2010)]AntoineLesHouches author author A. Georges and author T. Giamarchi, in @noopbooktitle Many-body Physics with Ultracold Gases, Vol. volume 94, editor edited by editor C. Salomon, editor G. Shlyapnikov,and editor L. Cugliandolo (publisher Oxford University Press, address Les Houches, year 2010) Chap. chapter 1NoStop [Maier et al.(2005)Maier, Jarrell, Pruschke, and Hettler]maier author author T. Maier, author M. Jarrell, author T. Pruschke,andauthor M. H. Hettler, 10.1103/RevModPhys.77.1027 journal journal Rev. Mod. Phys. volume 77, pages 1027 (year 2005)NoStop [Kotliar et al.(2006)Kotliar, Savrasov, Haule, Oudovenko, Parcollet, and Marianetti]kotliarRMP author author G. Kotliar, author S. Y. Savrasov, author K. Haule, author V. S. Oudovenko, author O. Parcollet,and author C. A. Marianetti, 10.1103/RevModPhys.78.865 journal journal Rev. Mod. Phys. volume 78, eid 865 (year 2006)NoStop [Tremblay et al.(2006)Tremblay, Kyung, and Sénéchal]tremblayR author author A.-M. S.Tremblay, author B. Kyung,and author D. Sénéchal, 10.1063/1.2199446 journal journal Low Temp. Phys. volume 32, pages 424 (year 2006)NoStop [Greif et al.(2013)Greif, Uehlinger, Jotzu, Tarruell,and Esslinger]Greif:2013 author author D. Greif, author T. Uehlinger, author G. Jotzu, author L. Tarruell,and author T. Esslinger, 10.1126/science.1236362 journal journal Science volume 340, pages 1307 (year 2013)NoStop [Hart et al.(2015)Hart, Duarte, Yang, Liu, Paiva, Khatami, Scalettar, Trivedi, Huse, andHulet]Hart:2015 author author R. A. Hart, author P. M. Duarte, author T.-L. Yang, author X. Liu, author T. Paiva, author E. Khatami, author R. T. Scalettar, author N. Trivedi, author D. A. Huse,and author R. G. Hulet, 10.1038/nature14223 journal journal Naturevolume 519, pages 211 (year 2015)NoStop [Parsons et al.(2016)Parsons, Mazurenko, Chiu, Ji, Greif, and Greiner]Parsons:2016 author author M. F. Parsons, author A. Mazurenko, author C. S. Chiu, author G. Ji, author D. Greif,and author M. Greiner, 10.1126/science.aag1430 journal journal Science volume 353, pages 1253 (year 2016)NoStop [Boll et al.(2016)Boll, Hilker, Salomon, Omran, Nespolo, Pollet, Bloch, andGross]Boll:2016 author author M. Boll, author T. A. Hilker, author G. Salomon, author A. Omran, author J. Nespolo, author L. Pollet, author I. Bloch,and author C. Gross, 10.1126/science.aag1635 journal journal Science volume 353, pages 1257 (year 2016)NoStop [Cheuk et al.(2016)Cheuk, Nichols, Lawrence, Okan, Zhang, Khatami, Trivedi, Paiva, Rigol, and Zwierlein]Cheuk:2016 author author L. W. Cheuk, author M. A. Nichols, author K. R. Lawrence, author M. Okan, author H. Zhang, author E. Khatami, author N. Trivedi, author T. Paiva, author M. 
Rigol,and author M. W.Zwierlein, 10.1126/science.aag3349 journal journal Science volume 353, pages 1260 (year 2016)NoStop [Mazurenko et al.(2017)Mazurenko, Chiu, Ji, Parsons, Kanász-Nagy, Schmidt, Grusdt, Demler, Greif, andGreiner]mazurenko2017cold author author A. Mazurenko, author C. S. Chiu, author G. Ji, author M. F. Parsons, author M. Kanász-Nagy, author R. Schmidt, author F. Grusdt, author E. Demler, author D. Greif,and author M. Greiner, @noopjournal journal Nature volume 545, pages 462 (year 2017)NoStop [Drewes et al.(2017)Drewes, Miller, Cocchi, Chan, Wurz, Gall, Pertot, Brennecke, and Köhl]drewesPRL2017 author author J. H. Drewes, author L. A. Miller, author E. Cocchi, author C. F. Chan, author N. Wurz, author M. Gall, author D. Pertot, author F. Brennecke, and author M. Köhl, 10.1103/PhysRevLett.118.170401 journal journal Phys. Rev. Lett. volume 118, pages 170401 (year 2017)NoStop [Cai et al.(2016)Cai, Ruan, Peng, Ye, Li, Hao, Zhou, Lee, and Wang]CaiSTM author author P. Cai, author W. Ruan, author Y. Peng, author C. Ye, author X. Li, author Z. Hao, author X. Zhou, author D.-H.Lee,and author Y. Wang, 10.1038/nphys3840 journal journal Nature Physics volume 12, pages 1047 (year 2016), http://arxiv.org/abs/1508.05586 arXiv:1508.05586 [cond-mat.supr-con] NoStop [Ye et al.(2013)Ye, Cai, Yu, Zhou, Ruan, Liu, Jin, andWang]YeSTM author author C. Ye, author P. Cai, author R. Yu, author X. Zhou, author W. Ruan, author Q. Liu, author C. Jin,and author Y. Wang, 10.1038/ncomms2369 journal journal Nature Communications volume 4, eid 1365 (year 2013), http://arxiv.org/abs/1201.0342 arXiv:1201.0342 [cond-mat.str-el] NoStop [Borejsza and Dupuis(2004)]Dupuis2004 author author K. Borejsza and author N. Dupuis, 10.1103/PhysRevB.69.085119 journal journal Phys. Rev. B volume 69, pages 085119 (year 2004)NoStop [Sénéchal et al.(2005)Sénéchal, Lavertu, Marois, andTremblay]senechalAFSC2005 author author D. Sénéchal, author P.-L. Lavertu, author M.-A. Marois,and author A.-M. S. Tremblay, 10.1103/PhysRevLett.94.156404 journal journal Phys. Rev. Lett. volume 94, pages 156404 (year 2005)NoStop [Aichhorn, M. and Arrigoni, E.(2005)]markus2005 author author Aichhorn, M. andauthor Arrigoni, E., 10.1209/epl/i2005-10192-1 journal journal Europhys. Lett. volume 72, pages 117 (year 2005)NoStop [Aichhorn et al.(2007)Aichhorn, Arrigoni, Potthoff, andHanke]markus author author M. Aichhorn, author E. Arrigoni, author M. Potthoff,andauthor W. Hanke, 10.1103/PhysRevB.76.224509 journal journal Phys. Rev. B volume 76, pages 224509 (year 2007)NoStop [Tocchio et al.(2016)Tocchio, Becca, and Sorella]Tocchio2016 author author L. F. Tocchio, author F. Becca, and author S. Sorella, 10.1103/PhysRevB.94.195126 journal journal Phys. Rev. B volume 94, pages 195126 (year 2016)NoStop [Zheng and Chan(2016)]ZhengDMET author author B.-X. Zheng and author G. K.-L. Chan, 10.1103/PhysRevB.93.035126 journal journal Phys. Rev. B volume 93,pages 035126 (year 2016)NoStop [Hirsch(1985)]Hirsch:1985 author author J. E. Hirsch, 10.1103/PhysRevB.31.4403 journal journal Phys. Rev. B volume 31,pages 4403 (year 1985)NoStop [White et al.(1989)White, Scalapino, Sugar, Loh, Gubernatis, and Scalettar]White:1989 author author S. R. White, author D. J. Scalapino, author R. L. Sugar, author E. Y. Loh, author J. E. Gubernatis,andauthor R. T. Scalettar, 10.1103/PhysRevB.40.506 journal journal Phys. Rev. B volume 40, pages 506 (year 1989)NoStop [Lichtenstein and Katsnelson(2000)]lkAF author author A. I. Lichtenstein and author M. I. 
Katsnelson, 10.1103/PhysRevB.62.R9283 journal journal Phys. Rev. B volume 62, pages R9283 (year 2000)NoStop [Paiva et al.(2001)Paiva, Scalettar, Huscroft, and McMahan]Paiva:2001 author author T. Paiva, author R. T. Scalettar, author C. Huscroft,and author A. K. McMahan,10.1103/PhysRevB.63.125116 journal journal Phys. Rev. B volume 63, pages 125116 (year 2001)NoStop [Kyung et al.(2006)Kyung, Kancharla, Sénéchal, Tremblay, Civelli, and Kotliar]kyung author author B. Kyung, author S. S. Kancharla, author D. Sénéchal, author A.-M. S.Tremblay, author M. Civelli,and author G. Kotliar, 10.1103/PhysRevB.73.165114 journal journal Phys. Rev. B volume 73, eid 165114 (year 2006)NoStop [Paiva et al.(2010)Paiva, Scalettar, Randeria, and Trivedi]Paiva2010 author author T. Paiva, author R. Scalettar, author M. Randeria,andauthor N. Trivedi, 10.1103/PhysRevLett.104.066406 journal journal Phys. Rev. Lett. volume 104, pages 066406 (year 2010)NoStop [Sato and Tsunetsugu(2016)]Sato2016 author author T. Sato and author H. Tsunetsugu, 10.1103/PhysRevB.94.085110 journal journal Phys. Rev. B volume 94, pages 085110 (year 2016)NoStop [Ayral and Parcollet(2015)]trilex1 author author T. Ayral and author O. Parcollet, 10.1103/PhysRevB.92.115109 journal journal Phys. Rev. B volume 92, pages 115109 (year 2015)NoStop [Ayral and Parcollet(2016a)]trilex2 author author T. Ayral and author O. Parcollet, 10.1103/PhysRevB.93.235124 journal journal Phys. Rev. B volume 93, pages 235124 (year 2016a)NoStop [Ayral and Parcollet(2016b)]quadrilex author author T. Ayral and author O. Parcollet, 10.1103/PhysRevB.94.075159 journal journal Phys. Rev. B volume 94, pages 075159 (year 2016b)NoStop [Rohringer et al.(2017)Rohringer, Hafermann, Toschi, Katanin, Antipov, Katsnelson, Lichtenstein, Rubtsov,and Held]Rohringer2017 author author G. Rohringer, author H. Hafermann, author A. Toschi, author A. A. Katanin, author A. E. Antipov, author M. I. Katsnelson, author A. I. Lichtenstein, author A. N. Rubtsov,and author K. Held, @noopjournal journal ArXiv e-prints(year 2017), http://arxiv.org/abs/1705.00024 arXiv:1705.00024 [cond-mat.str-el] NoStop [Note1()]Note1 note See Supplemental Material for the staggered magnetization curves as a function of U and δ for few temperatures; complementary data for the local DOS.Stop [Georges et al.(1996)Georges, Kotliar, Krauth, andRozenberg]rmp author author A. Georges, author G. Kotliar, author W. Krauth,and author M. J. Rozenberg, 10.1103/RevModPhys.68.13 journal journal Rev. Mod. Phys. volume 68, pages 13 (year 1996)NoStop [Fratino et al.(2017)Fratino, Sémon, Charlebois, Sordi, and Tremblay]LorenzoAF2017 author author L. Fratino, author P. Sémon, author M. Charlebois, author G. Sordi,and author A.-M. S. Tremblay, 10.1103/PhysRevB.95.235109 journal journal Phys. Rev. B volume 95, pages 235109 (year 2017)NoStop [Gull et al.(2011)Gull, Millis, Lichtenstein, Rubtsov, Troyer, and Werner]millisRMP author author E. Gull, author A. J. Millis, author A. I. Lichtenstein, author A. N. Rubtsov, author M. Troyer,and author P. Werner, 10.1103/RevModPhys.83.349 journal journal Rev. Mod. Phys. volume 83, pages 349 (year 2011)NoStop [Sémon et al.(2014a)Sémon, Yee, Haule, and Tremblay]patrickSkipList author author P. Sémon, author C.-H. Yee, author K. Haule,and author A.-M. S. Tremblay, 10.1103/PhysRevB.90.075149 journal journal Phys. Rev. B volume 90, pages 075149 (year 2014a)NoStop [Sémon and Tremblay(2012)]patrickCritical author author P. Sémon and author A.-M. S. 
Tremblay, 10.1103/PhysRevB.85.201101 journal journal Phys. Rev. B volume 85, pages 201101 (year 2012)NoStop [Sémon et al.(2014b)Sémon, Sordi,and Tremblay]patrickERG author author P. Sémon, author G. Sordi, and author A.-M. S. Tremblay,10.1103/PhysRevB.89.165113 journal journal Phys. Rev. B volume 89, pages 165113 (year 2014b)NoStop [Mermin and Wagner(1966)]MWtheorem author author N. D. Mermin and author H. Wagner, 10.1103/PhysRevLett.17.1133 journal journal Phys. Rev. Lett. volume 17, pages 1133 (year 1966)NoStop [Hohenberg(1967)]Hohenberg:1967 author author P. C. Hohenberg, 10.1103/PhysRev.158.383 journal journal Phys. Rev. volume 158, pages 383 (year 1967)NoStop [Schrieffer et al.(1989)Schrieffer, Wen, and Zhang]Zhang:1989 author author J. R. Schrieffer, author X. G. Wen,and author S. C. Zhang,10.1103/PhysRevB.39.11663 journal journal Phys. Rev. B volume 39, pages 11663 (year 1989)NoStop [Georges and Krauth(1993)]GeorgesKrauthAFM:1993 author author A. Georges and author W. Krauth, 10.1103/PhysRevB.48.7167 journal journal Phys. Rev. B volume 48,pages 7167 (year 1993)NoStop [Freericks and Jarrell(1995)]FreericksJarrelAFM:1995 author author J. K. Freericks and author M. Jarrell, 10.1103/PhysRevLett.74.186 journal journal Phys. Rev. Lett. volume 74, pages 186 (year 1995)NoStop [Hofstetter and Vollhardt(1998)]hofstetter1998 author author W. Hofstetter and author D. Vollhardt, 10.1002/andp.2060070105 journal journal Annalen der Physik volume 7, pages 48 (year 1998)NoStop [Chitra and Kotliar(1999)]ChitraAFM:1999 author author R. Chitra and author G. Kotliar, 10.1103/PhysRevLett.83.2386 journal journal Phys. Rev. Lett. volume 83, pages 2386 (year 1999)NoStop [Zitzler et al.(2004)Zitzler, Tong, Pruschke, andBulla]ZitzlerPruschkeAFM:2004 author author R. Zitzler, author N.-H. Tong, author T. Pruschke,andauthor R. Bulla, 10.1103/PhysRevLett.93.016406 journal journal Phys. Rev. Lett. volume 93, pages 016406 (year 2004)NoStop [Kanamori(1963)]kanamori_electron_1963 author author J. Kanamori, 10.1143/PTP.30.275 journal journal Progress of Theoretical Physics volume 30, pages 275 (year 1963)NoStop [Brueckner et al.(1960)Brueckner, Soda, Anderson, andMorel]Brueckner:1960 author author K. A. Brueckner, author T. Soda, author P. W. Anderson,andauthor P. Morel, 10.1103/PhysRev.118.1442 journal journal Phys. Rev. volume 118, pages 1442 (year 1960)NoStop [Vilk and Tremblay(1997)]Vilk:1997 author author Y. M. Vilk and author A.-M. S. Tremblay, @noopjournal journal J. Phys I (France) volume 7, pages 1309(year 1997)NoStop [Yang et al.(2000)Yang, Lange, and Kotliar]kotliarSBaf author author I. Yang, author E. Lange,andauthor G. Kotliar, 10.1103/PhysRevB.61.2521 journal journal Phys. Rev. B volume 61, pages 2521 (year 2000)NoStop [Camjayi et al.(2006)Camjayi, Chitra, and Rozenberg]albertoAF author author A. Camjayi, author R. Chitra, and author M. J. Rozenberg,10.1103/PhysRevB.73.041103 journal journal Phys. Rev. B volume 73, pages 041103 (year 2006)NoStop [Bergeron and Tremblay(2016)]DominicMEM author author D. Bergeron and author A.-M. S. Tremblay, 10.1103/PhysRevE.94.023303 journal journal Phys. Rev. E volume 94, pages 023303 (year 2016)NoStop [Moreo et al.(1995)Moreo, Haas, Sandvik, and Dagotto]Moreo1995 author author A. Moreo, author S. Haas, author A. W. Sandvik,andauthor E. Dagotto, 10.1103/PhysRevB.51.12045 journal journal Phys. Rev. B volume 51, pages 12045 (year 1995)NoStop [Preuss et al.(1995)Preuss, Hanke, and von der Linden]Preuss1995 author author R. Preuss, author W. Hanke, and author W. 
von der Linden,10.1103/PhysRevLett.75.1344 journal journal Phys. Rev. Lett. volume 75, pages 1344 (year 1995)NoStop [Eskes et al.(1991)Eskes, Meinders, and Sawatzky]eskes1991 author author H. Eskes, author M. B. J. Meinders,and author G. A. Sawatzky, 10.1103/PhysRevLett.67.1035 journal journal Phys. Rev. Lett. volume 67, pages 1035 (year 1991)NoStop [Wu and Lee(2017)]wuAFevolution author author H.-K. Wu and author T.-K. Lee,10.1103/PhysRevB.95.035133 journal journal Phys. Rev. B volume 95, pages 035133 (year 2017)NoStop [Taranto et al.(2012)Taranto, Sangiovanni, Held, Capone, Georges, and Toschi]TarantoPRB2012 author author C. Taranto, author G. Sangiovanni, author K. Held, author M. Capone, author A. Georges,and author A. Toschi, 10.1103/PhysRevB.85.085124 journal journal Phys. Rev. B volume 85, pages 085124 (year 2012)NoStop [Mukuda et al.(2008)Mukuda, Yamaguchi, Shimizu, Kitaoka, Shirage, and Iyo]mukuda2008 author author H. Mukuda, author Y. Yamaguchi, author S. Shimizu, author Y. Kitaoka, author P. Shirage,and author A. Iyo, 10.1143/JPSJ.77.124706 journal journal Journal of the Physical Society of Japan volume 77,pages 124706 (year 2008), http://arxiv.org/abs/http://dx.doi.org/10.1143/JPSJ.77.124706 http://dx.doi.org/10.1143/JPSJ.77.124706 NoStop [Mukuda et al.(2012)Mukuda, Shimizu, Iyo, and Kitaoka]mukuda2012 author author H. Mukuda, author S. Shimizu, author A. Iyo,and author Y. Kitaoka, 10.1143/JPSJ.81.011008 journal journal Journal of the Physical Society of Japan volume 81,pages 011008 (year 2012)NoStop [Ino et al.(1997)Ino, Mizokawa, Fujimori, Tamasaku, Eisaki, Uchida, Kimura, Sasagawa, and Kishio]InoMU author author A. Ino, author T. Mizokawa, author A. Fujimori, author K. Tamasaku, author H. Eisaki, author S. Uchida, author T. Kimura, author T. Sasagawa,and author K. Kishio, 10.1103/PhysRevLett.79.2101 journal journal Phys. Rev. Lett. volume 79, pages 2101 (year 1997)NoStop [Harima et al.(2003)Harima, Fujimori, Sugaya, and Terasaki]HarimaMU author author N. Harima, author A. Fujimori, author T. Sugaya,and author I. Terasaki, 10.1103/PhysRevB.67.172501 journal journal Phys. Rev. B volume 67, pages 172501 (year 2003)NoStop [Rietveld et al.(1992)Rietveld, Chen, and van der Marel]RietveldMU author author G. Rietveld, author N. Y. Chen,and author D. van der Marel,10.1103/PhysRevLett.69.2578 journal journal Phys. Rev. Lett. volume 69, pages 2578 (year 1992)NoStop [van der Marel and Rietveld(1992)]dirkMU author author D. van der Marel and author G. Rietveld, 10.1103/PhysRevLett.69.2575 journal journal Phys. Rev. Lett. volume 69, pages 2575 (year 1992)NoStop [Note2()]Note2 note In these experiments, the DOS at ω =0 vanishes while it in our case it is small. The inhomogeneity of the samples suggests that disorder can induce localisation effects at ω =0, as pointed out in Ref. @citealpnum wuAFevolution. The AF phase of cuprates at finite doping is generally considered metallic <cit.>.Stop [Molegraaf et al.(2002)Molegraaf, Presura, van der Marel, Kes, and Li]Molegraaf2002 author author H. J. A.Molegraaf, author C. Presura, author D. van der Marel, author P. H. Kes, and author M. Li, 10.1126/science.1069947 journal journal Science volume 295, pages 2239 (year 2002)NoStop [Deutscher et al.(2005)Deutscher, Santander-Syro, and Bontemps]deutscher2005 author author G. Deutscher, author A. F. Santander-Syro,and author N. Bontemps, 10.1103/PhysRevB.72.092504 journal journal Phys. Rev. 
B volume 72, pages 092504 (year 2005)NoStop [Carbone et al.(2006)Carbone, Kuzmenko, Molegraaf, van Heumen, Lukovac, Marsiglio, van der Marel, Haule, Kotliar, Berger, Courjault, Kes, and Li]PhysRevB.74.064510 author author F. Carbone, author A. B. Kuzmenko, author H. J. A. Molegraaf, author E. van Heumen, author V. Lukovac, author F. Marsiglio, author D. van der Marel, author K. Haule, author G. Kotliar, author H. Berger, author S. Courjault, author P. H.Kes,and author M. Li, 10.1103/PhysRevB.74.064510 journal journal Phys. Rev. B volume 74, pages 064510 (year 2006)NoStop [Sénéchal and Tremblay(2004)]st author author D. Sénéchal and author A.-M. S.Tremblay, 10.1103/PhysRevLett.92.126401 journal journal Phys. Rev. Lett. volume 92, pages 126401 (year 2004)NoStop [Weber et al.(2010a)Weber, Haule,and Kotliar]Weber:2010 author author C. Weber, author K. Haule,andauthor G. Kotliar, 10.1038/nphys1706 journal journal Nature Physics volume 6, pages 574 (year 2010a)NoStop [Weber et al.(2010b)Weber, Haule,and Kotliar]cedricApical author author C. Weber, author K. Haule,andauthor G. Kotliar, 10.1103/PhysRevB.82.125107 journal journal Phys. Rev. B volume 82, pages 125107 (year 2010b)NoStop [Sordi et al.(2010)Sordi, Haule, and Tremblay]sht author author G. Sordi, author K. Haule,andauthor A.-M. S. Tremblay,10.1103/PhysRevLett.104.226402 journal journal Phys. Rev. Lett. volume 104,pages 226402 (year 2010)NoStop [Sordi et al.(2011)Sordi, Haule, and Tremblay]sht2 author author G. Sordi, author K. Haule,andauthor A.-M. S. Tremblay,10.1103/PhysRevB.84.075161 journal journal Phys. Rev. B volume 84, pages 075161 (year 2011)NoStop [Sordi et al.(2012)Sordi, Sémon, Haule, and Tremblay]ssht author author G. Sordi, author P. Sémon, author K. Haule,and author A.-M. S. Tremblay, doi:10.1038/srep00547 journal journal Sci. Rep. volume 2, pages 547 (year 2012)NoStop [Sordi et al.(2013)Sordi, Sémon, Haule, and Tremblay]sshtRHO author author G. Sordi, author P. Sémon, author K. Haule,and author A.-M. S. Tremblay, 10.1103/PhysRevB.87.041101 journal journal Phys. Rev. B volume 87, pages 041101 (year 2013)NoStop [Fratino et al.(2016a)Fratino, Sémon, Sordi, and Tremblay]LorenzoSC author author L. Fratino, author P. Sémon, author G. Sordi,and author A.-M. S. Tremblay, 10.1038/srep22715 journal journal Sci. Rep. volume 6, pages 22715 (year 2016a)NoStop [Fratino et al.(2016b)Fratino, Sémon, Sordi, and Tremblay]Lorenzo3band author author L. Fratino, author P. Sémon, author G. Sordi,and author A.-M. S. Tremblay, 10.1103/PhysRevB.93.245147 journal journal Phys. Rev. B volume 93, pages 245147 (year 2016b)NoStop [Ando et al.(2001)Ando, Lavrov, Komiya, Segawa, andSun]ando2001 author author Y. Ando, author A. N. Lavrov, author S. Komiya, author K. Segawa,and author X. F. Sun, 10.1103/PhysRevLett.87.017001 journal journal Phys. Rev. Lett. volume 87, pages 017001 (year 2001)NoStop Supplemental information:Effects of interaction strength, doping, and frustration on the antiferromagnetic phase of the two-dimensional Hubbard modelL. Fratino, M. Charlebois, P. Sémon, G. Sordi, and A.-M. S. Tremblay In this supplemental information, we first present in Fig. <ref> the raw data of the staggered magnetization m_z, to complement the colormaps in Figure 1 of the main text. We also show in Fig. <ref> the raw data of m_z to complement the antiferromagnetic boundaries of the Figure 2 of the main text. Finally, Fig. <ref> and Fig. 
<ref> display the density of states for different values of hole doping δ, to extend data shown in Figures 3(a), 3(b), 3(e), 3(f) of the main text.
Walther Meissner Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany
Fakultät für Physik E23, Technische Universität München, 85748 Garching, Germany
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA
Department of Physics, Stanford University, California 94305, USA
Center for Solid State Physics and New Materials, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA
Department of Applied Physics, Stanford University, California 94305, USA
Present address: TNG Technology Consulting GmbH, Beta-Straße, 85774 Unterföhring, Germany
Walther Meissner Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany
Fakultät für Physik E23, Technische Universität München, 85748 Garching, Germany
Present address: School of Solar and Advanced Renewable Energy, Department of Physics and Astronomy, University of Toledo, Toledo, Ohio 43606, USA
Walther Meissner Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany
Fakultät für Physik E23, Technische Universität München, 85748 Garching, Germany
Karlsruher Institut für Technologie, Institut für Festkörperphysik, 76021 Karlsruhe, Germany
Karlsruher Institut für Technologie, Institut für Festkörperphysik, 76021 Karlsruhe, Germany
Center for Solid State Physics and New Materials, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
Serbian Academy of Sciences and Arts, Knez Mihailova 35, 11000 Belgrade, Serbia
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, USA
Geballe Laboratory for Advanced Materials, Stanford University, California 94305, [email protected]
Walther Meissner Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany

The charge and spin dynamics of the structurally simplest iron-based superconductor, FeSe, may hold the key to understanding the physics of high temperature superconductors in general. Unlike the iron pnictides, FeSe lacks long-range magnetic order in spite of a similar structural transition around 90 K. Here, we report results of Raman scattering experiments as a function of temperature and polarization and simulations based on exact diagonalization of a frustrated spin model. Both experiment and theory find a persistent low energy peak close to 500 cm^-1 in B_1g symmetry, which softens slightly around 100 K, that we assign to spin excitations. By comparing with results from neutron scattering, this study provides evidence for nearly frustrated stripe order in FeSe.

74.70.Xa, 75.10.Jm, 74.20.Mn, 74.25.nd

Frustrated spin order and stripe fluctuations in FeSe
R. Hackl
December 30, 2023

§ INTRODUCTION
Fe-based pnictides and chalcogenides, similar to cuprates, manganites or some heavy fermion compounds, are characterized by the proximity and competition of various phases including magnetism, charge order and superconductivity.
Specifically, the magnetism of Fe-based systems has various puzzling aspects which do not straightforwardly follow from the Fe valence or changes in the Fermi surface topology <cit.>. Some systems have a nearly ordered localized moment close to 2 μ_B <cit.>, such as FeTe or rare-earth iron selenides, whereas the moments of AFe_2As_2-based compounds (A = Ba, Sr, Eu or Ca) are slightly below 1 μ_B <cit.> and display aspects of itinerant spin-density-wave (SDW) magnetism with a gap in the electronic excitation spectrum <cit.>. In contrast, others do not order down to the lowest temperatures, such as FeSe <cit.> or LaFePO <cit.>. The material-specific differences are a matter of intense discussion, and low- as well as high-energy electronic and structural properties determine the behaviour <cit.>. At the Fermi energy E_F, the main fraction of the electronic density of states originates from t_2g Fe orbitals, but a substantial part of the Fe-Fe hopping occurs via the pnictogen or chalcogen atoms, hence via the xz, yz, and p_z orbitals. For geometrical reasons, the resulting exchange coupling energies between nearest (J_1) and next-nearest neighbour (J_2) iron atoms have the same order of magnitude, and small changes in the pnictogen (chalcogen) height above the Fe plane influence the ratio J_2/J_1, such that various orders are energetically very close <cit.>. The reduced overlap of the in-plane xy orbitals decreases the hopping integral t and increases the influence of the Hund's rule interactions and the correlation energy U, even though they are only in the range of 1-2 eV. Thus the electrons in the xy orbitals have a considerably higher effective mass m^∗ and smaller quasiparticle weight Z than those of the xz and yz orbitals. This effect was coined orbital-selective Mottness <cit.> and observed by photoemission spectroscopy (ARPES) in Fe-based chalcogenides <cit.>. It is similar in spirit to what was found by Raman scattering in the cuprates as a function of momentum <cit.>. In either case some of the electron wave functions are more localized than others. This paradigm may explain why the description remains difficult and controversial in all cases. Therefore we address the question as to whether systematic trends can be found across the families of the Fe-based superconductors, how the spin excitations are related to other highly correlated systems, and how they can be described appropriately. As an experimental tool we use Raman scattering since the differences expected theoretically <cit.> and indicated experimentally in the electronic structure <cit.> can be tracked in both the charge and the spin channel. Another advantage is the large energy range of approximately 1 meV to 1 eV (8 to 8,000 cm^-1) accessible by light scattering <cit.>. Early theoretical work on Fe-based systems considered the Heisenberg model the most appropriate approach <cit.>, and the high-energy maxima observed by Raman scattering in BaFe_2As_2 were interpreted in terms of localized spins <cit.>. On the other hand, the low-energy spectra are reminiscent of charge density wave (CDW) or SDW formation <cit.>. In principle, both effects can coexist if the strength of the correlations varies for electrons from different orbitals, where itinerant electrons form an SDW, while those on localized orbitals give rise to a Heisenberg-like response. In contrast to the AFe_2As_2-based compounds (A = Ba, Sr, Ca), FeSe seems to be closer to localized order with a larger mass renormalization than in the iron pnictides <cit.>.
Apart from low-lying charge excitations, the remaining, presumably spin, degrees of freedom in FeSe may be adequately described by a spin-1 J_1-J_2-J_3-K Heisenberg model <cit.>, which also provides a consistent description of the results shown in this work and allows for the presence of different spin orders. Since various types of spin order are energetically in close proximity <cit.>, frustration may quench long-range order down to the lowest temperatures <cit.>, even though neutron scattering experiments in FeSe find large values for the exchange energies <cit.>. Recent experiments on FeSe focused on low energies and (x^2-y^2) symmetry, and the response was associated with particle-hole excitations and critical fluctuations <cit.>. Here, we obtain similar experimental results below 1,500 cm^-1. Those in the range 50-200 cm^-1 show similarities with the other Fe-based systems while those above 200 cm^-1 are distinctly different but display similarities with the cuprates <cit.>. In addition to previous work, we analyze all symmetries at higher energies up to 3,500 cm^-1, to uncover crucial information about the behaviour of the spin degrees of freedom. By comparing experimental and simulated Raman data we find a persistent low-energy peak at roughly 500 cm^-1 in B_1g symmetry, which softens slightly around 100 K. We assign the B_1g maximum and the related structures in A_1g and B_2g symmetry to spin excitations. The theoretical simulations also aim at establishing a link between light and neutron scattering data with respect to the spin degrees of freedom and to furnish evidence for nearly frustrated stripe order at low temperature. We arrive at the conclusion that frustrated order of localized spins dominates the physics in FeSe, while critical spin and/or charge fluctuations are not the main focus of the paper.

§ RESULTS
§.§ Experiments
Symmetry-resolved Raman spectra of single-crystalline FeSe (see Methods) in the energy range up to 0.45 eV (3,600 cm^-1) are shown in Fig. <ref>. The spectra are linear combinations of the polarization-dependent raw data (see Methods and Supplementary Fig. 1 in Supplementary Note 1). For B_1g symmetry (Fig. <ref>a) we plot only two temperatures, 40 and 300 K, to highlight the persistence of the peak at approximately 500 cm^-1. The full temperature dependence will be shown below. For the remaining three symmetries we show spectra at 40, 90 and 300 K (Fig. <ref>b-d). Out of the four symmetries, the A_1g, B_1g, and B_2g spectra display Raman-active phonons, magnons or electron-hole excitations, while the A_2g spectra are weak and vanish below 500-1,000 cm^-1. As intensity in A_2g symmetry appears only under certain conditions not satisfied in the present study, we ignore it here. In the high-energy limit the intensities are smaller in all symmetries than those in other Fe-based systems such as BaFe_2As_2 (see Supplementary Fig. 2 in Supplementary Note 2). However, in the energy range up to approximately 3,000 cm^-1 there is a huge additional contribution to the cross section in FeSe (Fig. <ref>a). The response is strongly temperature dependent and peaks at 530 cm^-1 in the low-temperature limit. Between 90 and 40 K the spectra in Fig. <ref>b and d increase slightly in the ranges around 700 and 3,000 cm^-1, respectively (indicated as blue shaded areas). The overall intensity gain in these spectra in the shaded ranges amounts to approximately 5% of that in B_1g symmetry.
The spectra exhibit a reduction in spectral weight in the range from 600 to 1,900 cm^-1 (shaded red), which is already fully developed at the structural transition at T_s = 89.1 K, in agreement with earlier work <cit.>. In contrast to the other symmetries, the temperature dependence of the intensity is strong, whereas the peak energy changes only weakly, displaying some similarity with the cuprates <cit.>. This similarity, along with the considerations of Glasbrenner et al. <cit.>, motivated us to explore a spin-only, Heisenberg-like model for describing the temperature evolution of the Raman scattering data.

§.§ Simulations at zero temperature
We performed numerical simulations at zero temperature for a frustrated spin-1 system on the basis of a J_1-J_2-J_3-K Heisenberg model <cit.> on a 16-site cluster as shown in Fig. <ref>a and described in the Methods section. Fig. <ref>b shows the resulting phase diagram as a function of J_2 and J_3. K was set at 0.1 J_1 (repulsive) in order to suppress ordering tendencies on the small cluster. The parameter set for the simulations of the Raman and neutron data at finite temperature is indicated as a black dot. In Fig. <ref> we show the low-temperature data (Fig. <ref>a) along with the simulations (Fig. <ref>b). The energy scale for the simulations is given in units of J_1, which has been derived to be 123 meV or 990 cm^-1 <cit.>, allowing a semi-quantitative comparison with the experiment. As already mentioned, the experimental A_1g and B_2g spectra are not dominated by spin excitations and we do not attempt to further analyze the continua extending to energies in excess of 1 eV, considering them a background. The opposite is true for B_1g symmetry, also borne out in the simulations. For the selected values of J_1=123 meV, J_2=0.528 J_1, J_3=0, and K=0.1 J_1, the positions of the spin excitations in the three symmetries and the relative intensities are qualitatively reproduced. The choice of parameters is motivated by the previous use of the J_1-J_2 Heisenberg model, with J_1=J_2, to describe the stripe phase of iron pnictides <cit.>. Here we use a value of J_2 smaller than J_1 to enhance competition between Néel and stripe orders when describing FeSe. This approach and choice of parameters are strongly supported by a recent neutron scattering study <cit.>. The comparison of the different scattering symmetries, the temperature dependence, and our simulations indicate that the excitation at 500 cm^-1 is an additional scattering channel superimposed on the particle-hole continuum and fluctuation response, as shown in Supplementary Note 3 with Supplementary Figures 3 and 4. Here we focus on the peak centered at approximately 500 cm^-1 which, in agreement with the simulations, originates from two-magnon excitations in a highly frustrated spin system, although the features below 500 cm^-1 are also interesting and were interpreted in terms of quadrupolar orbital fluctuations <cit.>.

§.§ Temperature dependence
It is enlightening to look at the spectra across the whole temperature range as plotted in Fig. <ref>.
The well-defined two-magnon peak centered at approximately 500 cm^-1 in the low-temperature limit loses intensity, and becomes less well defined with increasing temperature up to the structural transition T_s = 89.1 K. Above the structural transition, the spectral weight continues to decrease and the width of the two-magnon feature grows, while the peak again becomes well-defined and the energy hardens slightly approaching the high-temperature limit of the study. What may appear as a gap opening at low temperature is presumably just the reduction of spectral weight in a low-energy feature at approximately 22 cm^-1. The intensity of this lower-energy response increases with temperature, leading to a well-formed peak at an energy around 50 cm^-1 near the structural transition. Above the structural transition this feature rapidly loses spectral weight, hardens, and becomes indistinguishable from the two-magnon response in the high-temperature limit. This low-energy feature develops in a fashion very similar to that found in Ba(Fe_1-xCo_x)_2As_2 for x>0 <cit.>. Now we compare the measurements with numerical simulations for the temperature dependence of the Raman susceptibility in Fig. <ref>a and b, respectively. For the simulations (Fig. <ref>b) we use the same parameters as at T=0 (black dot in Fig. <ref>). At zero temperature the simulations show a single low-energy peak around 0.3 J_1. As temperature increases, a weak shoulder forms on the low-energy side of the peak, and the whole peak softens slightly and broadens over the simulated temperature range. Except for the additional intensity at low energies, Ω < 200 cm^-1 (Fig. <ref>a), there is good qualitative agreement between theory and experiment. As shown in Supplementary Fig. 5 in Supplementary Note 4, a similar agreement between experiment and simulations is obtained for the temperature dependence in A_1g and B_2g symmetries, indicating that both the gain in intensity (blue shaded areas in Fig. <ref>) as well as the reduction in spectral weight from 600 to 1,900 cm^-1 (shaded red in Fig. <ref>d) can be attributed to the frustrated localized magnetism.

§.§ Connection to the spin structure factor
To support our explanation of the Raman data, we simulated the dynamical spin structure factor S(q,ω) and compared the findings to results of neutron scattering experiments <cit.>. While clearly not observing long-range order, above the structural transition neutron scattering finds similar intensity at finite energy for several wave vectors along the line (π,0)-(π,π). Upon cooling, the spectral weight at these wave vectors shifts away from (π,π) to directions along (π,0), although the respective peaks remain relatively broad. In Fig. <ref>a and b we show the results of the simulations for two characteristic temperatures. As temperature decreases, spectral weight shifts from (π,π) towards (π,0), in agreement with the experiment <cit.>. In Fig. <ref>c we show the evolution of the spectral weights around (π,π) and (π,0) in an energy window of (0.4±0.1) J_1 as a function of temperature, similar to the results shown in Ref. WangQS:2016. In the experiment, the temperature where the integrated dynamical spin structure factor changes most dramatically is close to the structural transition. From our simulations, the temperature where similar changes occur in comparison to neutron scattering corresponds to the temperature at which the simulated response (Fig. <ref>) shows the most pronounced shoulder, and the overall intensity begins to decrease.
Not surprisingly, the low-energy peak in the Raman scattering experiment is also strongest near the structural transition.

§ DISCUSSION
The agreement of experiment with theory in both neutron and Raman scattering suggests that a dominant contribution to the FeSe spectra comes from frustrated magnetism of essentially local spins. The differences between the classes of ferro-pnictides and -chalcogenides, in particular the different degrees of itinerancy, may then originate in a subtle orbital differentiation across the families <cit.>. If FeSe were frustrated, lying near such a phase boundary between magnetic states, then its behaviour would be consistent with the observed sensitivity to intercalation <cit.>, layer thickness <cit.>, and pressure <cit.>, which could affect the exchange interactions through the hopping. Relative to the theoretical results below 200 cm^-1, critical fluctuations of any origin, which are characterized by a diverging correlation length close to the transition, can neither be described nor distinguished in such a small cluster calculation. Here, only experimental arguments can be applied, similar to those in Ref. Kretzschmar:2016, but they will not be discussed further since they are not the primary focus of the analysis. A brief summary may be found in Supplementary Note 3. It is remarkable how clearly the Raman spectra of an SDW state originating from a Fermi surface instability and of a magnet with local moments can be distinguished. For comparison, Fig. <ref> shows Raman spectra for La_2CuO_4 and BaFe_2As_2 at characteristic temperatures. La_2CuO_4 (Fig. <ref>a) is an example of a material with local moments on the Cu sites <cit.> having a Néel temperature of T_N=325 K. The well-defined peak at approximately 2.84 J_1 <cit.> possesses a weak and continuous temperature dependence across T_N <cit.>. The origin of the scattering in La_2CuO_4 and other insulating cuprates <cit.> can thus be traced back to Heisenberg-type physics of local moments <cit.>, which, for simplicity, need only include the nearest-neighbour exchange interaction J_1. In contrast, most iron-based superconductors are metallic antiferromagnets in the parent state, exhibiting rather different Raman signatures. In BaFe_2As_2 (Fig. <ref>b) abrupt changes are observed in B_1g symmetry upon entering the SDW state: the fluctuation peak below 100 cm^-1 vanishes, a gap develops below some 500-600 cm^-1, and intensity piles up in the range 600-1,500 cm^-1 <cit.>, the typical behaviour of an SDW or CDW <cit.> in weak coupling, resulting from Fermi surface nesting. Yet, even for itinerant systems such as these, longer-range exchange interactions can become relevant and lead to magnetic frustration <cit.>. In summary, the Raman response of FeSe was measured in all symmetries and compared to simulations of a frustrated spin-1 system. The experimental data were decomposed in order to determine which parts of the spectra originate from particle-hole excitations, fluctuations of local spins, and low-energy critical fluctuations. Comparison of the decomposed experimental data with the simulations gives evidence that the dominant contribution to the Raman spectra comes from magnetic competition between (π,0) and (π,π) ordering vectors. These features of the Raman spectra, which agree qualitatively with a spin-only model, consist of a dominant peak in B_1g symmetry around 500 cm^-1, along with a peak at similar energy but lower intensity in one of the remaining symmetries and at higher energy in the other.
These results will likely help to unravel the mechanism behind the superconducting phase found in FeSe. § METHODS §.§ Experiment The FeSe crystals were prepared by the vapor transport technique. Details of the crystal growth and characterization are described elsewhere <cit.>. Before the experiment the samples were cleaved in air and the exposure time was minimized. The surfaces obtained in this way have several atomically flat regions allowing us to measure spectra down to 5 cm^-1. At the tetragonal-to-orthorhombic transition T_ s twin boundaries appear and become clearly visible in the observation optics. As described in detail by Kretzschmar et al. <cit.> the appearance of stripes can be used to determine the laser heating Δ T_ L and T_ s to be (0.5±0.1) K mW^-1 and (89.1±0.2) K, respectively.Calibrated Raman scattering equipment was used for the experiment. The samples were attached to the cold finger of a He-flow cryostat having a vacuum of approximately 5·10^-5 Pa (5·10^-7 mbar). For excitation we used a diode-pumped solid state laser emitting at 575 nm (Coherent GENESIS MX-SLM 577-500) and various lines of an Ar ion laser (Coherent Innova 304). The angle of incidence was close to 66^∘ for reducing the elastic stray light entering the collection optics. Polarization and power of the incoming light were adjusted in a way that the light inside the sample had the proper polarization state and, respectively, a power of typically P_a=4 mW independent of polarization. For the symmetry assignment we use the 1 Fe unit cell (axes x and y parallel to the Fe-Fe bonds) which has the same orientation as the magnetic unit cell in the cases of Néel or single-stripe order (4 Fe cell). The orthorhombic distortion is along these axes whereas the crystallographic cell assumes a diamond shape with the length of the tetragonal axes preserved. Because of the rotated axes in the 1 Fe unit cell the Fe B_1g phonon appears in the B_2g spectra. Spectra at low to medium energies were measured with a resolution σ≈ 5 in steps of ΔΩ = 2.5 or 5 below 250 and steps of 10 above where no sharp peaks need to be resolved. Spectra covering the energy range up to 0.5-1eV were measured with a resolution σ≈ 20 in steps of ΔΩ = 50 .§.§ SimulationsWe use exact diagonalization to study a Heisenberg-like model on a 16 site square lattice, which contains the necessary momentum points and is small enough that exact diagonalization can reach high enough temperatures to find agreement with the temperature dependence in the experiment. This was solved using the parallel Arnoldi method <cit.>. The Hamiltonian is given byℋ=∑_ nn[J_1𝐒_i·𝐒_j + K(𝐒_i·𝐒_j)^2]+ ∑_ 2nn J_2𝐒_i·𝐒_j + ∑_ 3nn J_3𝐒_i·𝐒_jwhere 𝐒_i is a spin-1 operator reflecting the observation that the local moments of iron chalcogenides close to 2 μ_B <cit.>. The sum over nn is over nearest neighbours, the sum over 2nn is over next nearest neighbours, and the sum over 3nn is over next next nearest neighbours.We determine the dominant order according to the largest static spin structure factor, given byS(𝐪) = 1/N∑_l e^i𝐪·𝐑_l∑_i⟨𝐒_𝐑_i+𝐑_l·𝐒_𝐑_i⟩.Due to the possible spontaneous symmetry breaking we adjust the structure factor by the degeneracy of the momentum. 
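To make the simulation procedure concrete, the following self-contained sketch (not the authors' production code) builds the spin-1 Hamiltonian defined above on a small periodic square cluster with scipy.sparse, obtains the lowest eigenstates with a Lanczos/Arnoldi-type routine, and evaluates the static structure factor S(q). The cluster size (3x3 instead of the 16-site cluster used here) and the coupling values are illustrative placeholders only.

```python
# Minimal exact-diagonalization sketch of the spin-1 J1-J2-J3-K model of Eq. (1)
# on a small periodic square cluster; couplings and cluster size are placeholders.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

Lx, Ly = 3, 3                                  # small cluster for the sketch (paper: 4x4)
N = Lx * Ly
J1, J2, J3, K = 1.0, 0.5, 0.0, 0.2             # placeholder couplings in units of J1

# spin-1 operators in the basis {|1>, |0>, |-1>}
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # S+
Sx = 0.5 * (Sp + Sp.T)
Sy = -0.5j * (Sp - Sp.T)

def site_op(op, i):
    """Embed a single-site operator at site i into the full 3^N-dimensional space."""
    out = sp.identity(1, format='csr')
    for j in range(N):
        out = sp.kron(out, sp.csr_matrix(op) if j == i else sp.identity(3, format='csr'),
                      format='csr')
    return out

Sops = [[site_op(S, i) for S in (Sx, Sy, Sz)] for i in range(N)]

def heisenberg(i, j):
    """S_i . S_j as a sparse operator."""
    return Sops[i][0] @ Sops[j][0] + Sops[i][1] @ Sops[j][1] + Sops[i][2] @ Sops[j][2]

def bonds(displacements):
    """Bond list (i, j) for the given displacement vectors on the periodic cluster."""
    out = []
    for x in range(Lx):
        for y in range(Ly):
            for dx, dy in displacements:
                out.append((x + Lx * y, (x + dx) % Lx + Lx * ((y + dy) % Ly)))
    return out

H = sp.csr_matrix((3**N, 3**N), dtype=complex)
for i, j in bonds([(1, 0), (0, 1)]):           # nearest neighbours: J1 and biquadratic K
    SS = heisenberg(i, j)
    H = H + J1 * SS + K * (SS @ SS)
for i, j in bonds([(1, 1), (1, -1)]):          # next-nearest neighbours: J2
    H = H + J2 * heisenberg(i, j)
for i, j in bonds([(2, 0), (0, 2)]):           # next-next-nearest neighbours: J3
    H = H + J3 * heisenberg(i, j)

energies, states = eigsh(H, k=4, which='SA')   # lowest eigenpairs
psi0 = states[:, 0]

def static_structure_factor(psi, qx, qy):
    """Static S(q) of Eq. (2), evaluated in the state psi."""
    Sq = 0.0
    for l in range(N):
        xl, yl = l % Lx, l // Lx
        for i in range(N):
            xi, yi = i % Lx, i // Lx
            j = (xi + xl) % Lx + Lx * ((yi + yl) % Ly)      # site at R_i + R_l
            Sq += np.exp(1j * (qx * xl + qy * yl)) * np.vdot(psi, heisenberg(i, j) @ psi)
    return Sq.real / N

print(static_structure_factor(psi0, np.pi, 0.0), static_structure_factor(psi0, np.pi, np.pi))
```

Within such a sketch, the dominant order is identified by scanning S(q) over the allowed cluster momenta and picking the largest value, as described above.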
To characterize the relative strength of the dominant fluctuations we project the relative intensity of the dominant static structure factor onto the range [0,1] using the followingintensity = 1 - d_𝐪_subS(𝐪_sub)/d_𝐪_max S(𝐪_max)where d_𝐪 is the degeneracy of momentum 𝐪, 𝐪_ max is the momentum with the largest d_𝐪S_𝐪, and 𝐪_ sub is the momentum with the second largest (subdominant) d_𝐪S_𝐪.The Raman susceptibilities for , , and symmetries for non-zero temperatures were calculated using the Fleury-Loudon scattering operator <cit.> given by 𝒪=∑_i,j J_ij (𝐞̂_in·𝐝̂_ij) (𝐞̂_out·𝐝̂_ij) 𝐒_i·𝐒_jwhere J_ij are the exchange interaction values used in the Hamiltonian, 𝐝̂_ij is a unit vector connecting sites i and j and 𝐞̂_in/out are the polarization vectors. For the symmetries calculated we use the polarization vectors𝐞̂_in=1/√(2)(𝐱̂+𝐲̂), 𝐞̂_out=1/√(2)(𝐱̂+𝐲̂)for ⊕, 𝐞̂_in=𝐱̂, 𝐞̂_out=𝐲̂ for , 𝐞̂_in=1/√(2)(𝐱̂+𝐲̂), 𝐞̂_out=1/√(2)(𝐱̂-𝐲̂)for ,(where 𝐱̂ and 𝐲̂ point along the Fe-Fe directions). We use this operator to calculate the Raman response R(ω) using the continued fraction expansion <cit.>, where R(ω) is given byR(ω) = -1/π Z∑_n e^-β E_nIm(⟨Ψ_n|𝒪^†1/ω + E_n + iϵ - ℋ𝒪|Ψ_n⟩)with Z the partition function. The sum traverses over all eigenstates Ψ_n of the Hamiltonian ℋ having eigenenergies E_n < E_0+2J_1 where E_0 is the ground state energy. The Raman susceptibility is given by χ^''(ω) = 1/2[R(ω) - R(-ω)]. The dynamical spin structure factor was calculated using the same method with 𝒪 replaced with S_𝐪^z=1/√(N)∑_le^i𝐪·𝐑_l S_l^z. § ACKNOWLEDGEMENTThe work was supported by the German Research Foundation (DFG) via the Priority Program SPP 1458 (grant-no. Ha2071/7) and the Transregional Collaborative Research Center TRR80 and by the Serbian Ministry of Education, Science and Technological Development under Project III45018. We acknowledge support by the DAAD through the bilateral project between Serbia and Germany (grant numbers 57142964 and 57335339). The collaboration with Stanford University was supported by the Bavaria California Technology Center BaCaTeC (grant-no. A5 [2012-2]). Work in the SIMES at Stanford University and SLAC was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract No. DE-AC02-76SF00515. Computational work was performed using the resources of the National Energy Research Scientific Computing Center supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-05CH11231. § AUTHOR CONTRIBUTIONS A.B., T.B., and R.H. conceived the experiment. B.M. and T.P.D. conceived the ED analysis. P.A. and T.W. synthesized and characterized the samples. A.B., N.L., T.B., and R.H.A. performed the Raman scattering experiment. H.N.R. and Y.W. coded and performed the ED calculations. A.B., H.N.R., N.L., B.M., and R.H. analyzed and discussed the data. A.B., H.N.R., N.L., Z.P., B.M., T.P.D., and R.H. wrote the paper. All authors commented on the manuscript.§ COMPETING INTERESTSThe authors declare that there are no competing interests.§ DATA AVAILABILITYData are available upon reasonable request from the corresponding author.           § SUPPLEMENTARY NOTE 1: POLARIZATION DEPENDENCE OF THE RAMAN SPECTRA OF FESESupplementary Fig. <ref>a shows the complete set of polarization resolved Raman spectra we measured for FeSe at T=40 K up to a maximum energy of 0.45 eV. The measured spectra have been corrected for the sensitivity of the instrument and divided by the Bose factor { 1-exp(-ħΩ/k_B T)}. 
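As a small illustration of this correction, the division by the thermal Bose factor can be sketched as follows; the array names are placeholders and the value of k_B in cm^-1 per kelvin is the only constant added here.

```python
# Sketch of the Bose-factor correction: the measured intensity is divided by
# {1 - exp(-hbar*Omega / k_B T)} to obtain the response function chi''(Omega, T).
import numpy as np

def bose_correct(omega_cm, counts, T):
    """omega_cm: Raman shift in cm^-1, counts: raw intensity, T: temperature in K."""
    kB_cm = 0.695  # Boltzmann constant in cm^-1 per kelvin
    return counts / (1.0 - np.exp(-omega_cm / (kB_cm * T)))
```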
In Supplementary Fig. <ref>b sums of corresponding pairs of spectra are shown. Each sum contains the full set of all four symmetries (+++) accessible with the light polarizations in the Fe plane. All three sets exhibit the same spectral shape. The spectra measured with linear light polarizations at 45^∘ with respect to the Fe-Fe bonds (x^' x^' and x^' y^') were multiplied by a factor of 0.65 to fit the other configurations. The same factor was applied when calculating the sums for extracting the pure symmetries. The reason for this deviation from the expected x^' x^' and x^' y^' intensities lies in small inaccuracies in determining the optical constants. Since we never observed polarization leakages the main effect pertains obviously on the power absorption and transmission rather than phase shifts between the parallel and perpendicular light polarizations.§ SUPPLEMENTARY NOTE 2: RAMAN SPECTRA OF BAFE2AS2 Supplementary Fig. <ref> shows the Raman spectra of BaFe_2As_2 as a function of symmetry and temperature. Towards high energies the spectra increase almost monotonically over an energy range of approximately 0.7 eV. We could not observe the pronounced nearly polarization-independent maxima in the range 2,000 - 3,000 cm^-1 reported in Ref. Sugai:2010. At high energies our spectra are temperature independent. At low energies pronounced changes are observed in A_1g and B_1g symmetry upon entering the striped spin density wave (SDW) state below T_ SDW = 135 K as described by various authors <cit.>. In A_2g and B_2g symmetry the changes are small but probably significant in that polarization leakage is unlikely to be the reason for the weak low-temperature peaks in the range 2,000 cm^-1 and the gap-like behaviour below approximately 1,000 cm^-1. The changes are particularly pronounced in B_1g symmetry. As shown in Supplementary Fig. <ref>c, in Fig. 1b of the main text and in more detail elsewhere <cit.> the fluctuation peak vanishes very rapidly and the redistribution of spectral weight from low to high energies sets in instantaneously at T_ SDW. All these observations show that the polarization and temperature dependences here are fundamentally different from those of FeSe (Fig. 1 of the main text).§ SUPPLEMENTARY NOTE 3: DELINEATION OF THE CONTRIBUTIONS TO THE SPECTRASupplementary Fig. <ref> shows Raman spectra of the FeSe sample at temperatures below (blue line) and above (red line) the superconducting transition temperature , which was determined to be = 8.8 K by measuring the third harmonic of the magnetic susceptibility <cit.>. Both spectra show a sharp increase towards the laser line which can be attributed to increased elastic scattering due to an accumulation of surface layers at low temperatures. Below a broad peak emerges centred around approximately 28 cm^-1 which we identify as pair breaking peak at 2Δ≈ (4.5±0.5) k_B. Above 50 cm^-1 the spectra at T< and T ≥ are identical. We could not resolve the second peak close to 40 cm^-1 as observed in Ref. Massat:2016. The gap ratio of (4.5±0.5)k_B is comparable to what was found for Ba(Fe_0.939Co_0.061)_2As_2 <cit.> but smaller than that found for Ba_1-xK_xFe_2As_2 <cit.>. The existence of a superconducting gap and a pair-breaking peak in the Raman spectra shows that the magnetic features are superposed on an electronic continuum.The temperature and symmetry dependence of theRaman response (Figs. 1 and 4 of the main text) indicate that the spectra are a superposition of various scattering channels as shown in Supplementary Fig. 
<ref>: (i) particle-hole excitations and presumably also a weak contribution from luminescence in the range up to 1 eV and beyond, (ii) critical fluctuations of either spin or charge in the range below 250 cm^-1, and (iii) excitations of neighboring spins with the response centered at 500 cm^-1 in and symmetry and at 3,000 cm^-1 in symmetry.(i) An estimate of electron-hole excitations may be obtained by comparing the with the and spectra at various temperatures including T<T_c. In a first approximation we assume that luminescence has a weak symmetry and temperature dependence and find that the intensities in all channels have the same order of magnitude. We use the continuum for deriving an analytical approximation for modeling the particle-hole spectrum (blue in Supplementary Fig. <ref>).(ii) There are various ways to derive the Raman response of critical fluctuations with finite wave vector Q_c. Caprara and coworkers considered the clean limit and, consequently, calculated the response and the selection rules for a pair of fluctuations having Q_c and - Q_c thus maintaining the q=0 selection rule for light scattering <cit.>. Alternatively the the collision-limited regime was considered where the momentum of the fluctuation can be carried away by an impurity <cit.>. Finally, quadrupolar fluctuations in the unit cell can give rise to Raman scattering <cit.>. In either case the response diverges at or slightly below the structural transition where the correlation length diverges. We used the approach of Ref. Caprara:2005 for modeling the response since we believe that FeSe is in the clean limit and that spin fluctuations are a possible candidate for the response <cit.>. Yet, the decision about the type of fluctuations relevant here is not a subject of this publication, and we are predominantly interested in excitations of neighboring spins.(iii) For isolating the response of neighboring spins in the total Raman response we subtract the particle-hole continuum (i) and the response of fluctuations (ii) from the spectra. The resulting difference is shown in green in Supplementary Fig. <ref> and can be considered the best possible approximation to the two-magnon response. At temperatures much smaller or larger than the critical fluctuations do not contribute substantially to the total response and can be ignored. The particle-hole continuum is generally weak. Therefore the simulations can be best compared to the Raman data at temperatures sufficiently far away from as shown in Figs. 3 and 5. Since the simulations were performed on a 4×4 cluster critical fluctuations cannot be described close to where the correlation length is much larger than the cluster size. § SUPPLEMENTARY NOTE 4: TEMPERATURE DEPENDENCE IN A1G AND B2G SYMMETRIESSupplementary Figure <ref> compares experimental and simulated Raman spectra in and symmetry up to high energies at room temperature (red), slightly above (green) and below (blue). The choice of temperatures for the simulated spectra corresponds to Fig. 5 of the main text. Sharp phonon peaks (labelled ph) appear in the experimental spectra at 200 cm^-1 the shape of which is not reproduced properly since resolution and sampling width are reduced. With J_1 ≈ 123 meV (990 cm^-1) as found in Ref. Glasbrenner:2015 the experimental and simulated spectra can be compared semi-quantitatively. 
Both theory and experiment consistently show a gain in intensity for at medium energies and for at high energies (blue shaded areas in the respective spectra) as well as a reduction of spectral weight in in the range from 600 to 1,900 (shaded red). The changes appear to be more continuous in the simulations than in the experiment where the gain in intensity in both symmetries only occurs at T<. The reduction in spectral weight in symmetry has already taken place at (green spectra). 56 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Yin et al.(2011)Yin, Haule, and Kotliar]Yin:2011 author author Z. P. Yin, author K. Haule,andauthor G. Kotliar, title title Kinetic frustration and the nature of the magnetic and paramagnetic states in iron pnictides and iron chalcogenides, 10.1038/nmat3120 journal journal Nature Mater. volume 10,pages 932–935 (year 2011)NoStop [Georges et al.(2013)Georges, de' Medici, and Mravlje]Georges:2013 author author Antoine Georges, author Luca de' Medici,and author Jernej Mravlje, title title Strong Correlations from Hund's Coupling, 10.1146/annurev-conmatphys-020911-125045 journal journal Annu. Rev. Cond. Mat. Phys. volume 4,pages 137–178 (year 2013)NoStop [Si et al.(2016)Si, Yu, and Abrahams]Si:2016 author author Qimiao Si, author Rong Yu,andauthor Elihu Abrahams,title title High-temperature superconductivity in iron pnictides and chalcogenides, 10.1038/natrevmats.2016.17 journal journal Nat. Rev. Mater. volume 1, pages 16017 (year 2016)NoStop [Skornyakov et al.(2017)Skornyakov, Anisimov, Vollhardt, andLeonov]Skornyakov:2017 author author S. L. Skornyakov, author V. I. Anisimov, author D. Vollhardt,and author I. Leonov,title title Effect of electron correlations on the electronic structure and phase stability of FeSe upon lattice expansion, 10.1103/PhysRevB.96.035137 journal journal Phys. Rev. B volume 96, pages 035137 (year 2017)NoStop [Li et al.(2009)Li, de la Cruz, Huang, Chen, Lynn, Hu, Huang, Hsu, Yeh, Wu, andDai]LiSL:2009a author author ShiliangLi, author Clarina de la Cruz, author Q. Huang, author Y. Chen, author J. W. Lynn, author Jiangping Hu, author Yi-Lin Huang, author Fong-Chi Hsu, author Kuo-Wei Yeh, author Maw-Kuen Wu,and author Pengcheng Dai, title title First-order magnetic and structural phase transitions in Fe_1+ySe_xTe_1x, 10.1103/PhysRevB.79.054503 journal journal Phys. Rev. B volume 79, pages 054503 (year 2009)NoStop [Johnston(2010)]Johnston:2010 author author D. C. Johnston, title title The Puzzle of High Temperature Superconductivity in Layered Iron Pnictides and Chalcogenides , 10.1080/00018732.2010.513480 journal journal Adv. Phys. volume 59, pages 803 (year 2010)NoStop [Yi et al.(2017)Yi, Zhang, Shen, and Lu]YiM:2017 author author Ming Yi, author Yan Zhang, author Zhi-Xun Shen,andauthor Donghui Lu, title title Role of the orbital degree of freedom in iron-based superconductors, 10.1038/s41535-017-0059-y journal journal npj Quantum Materials volume 2, pages 57 (year 2017)NoStop [Baek et al.(2014)Baek, Efremov, Ok, Kim, van den Brink, and Büchner]Baek:2014 author author S.-H. Baek, author D. V. Efremov, author J. M. Ok, author J. S. Kim, author Jeroen van den Brink,and author B. Büchner, title title Orbital-driven nematicity in FeSe, 10.1038/nmat4138 journal journal Nature Mater. 
volume 14, pages 210–214 (year 2014)NoStop [Taylor et al.(2013)Taylor, Ewing, Perring, Parker, Ollivier, Clarke, and Boothroyd]Taylor:2013 author author A. E. Taylor, author R. A. Ewing, author T. G. Perring, author R. D. Parker, author J. Ollivier, author S. J. Clarke,and author A. T. Boothroyd, title title Absence of strong magnetic fluctuations in FeP-based systems LaFePO and Sr_2ScO_3FeP, http://stacks.iop.org/0953-8984/25/i=42/a=425701 journal journal J. Phys.: Condens. Matter volume 25, pages 425701 (year 2013)NoStop [Mazin and Johannes(2009)]Mazin:2009 author author I. I. Mazin and author M. D. Johannes, title title A key role for unusual spin dynamics in ferropnictides, 10.1038/nphys1160 journal journal Nature Phys.volume 5, pages 141 (year 2009)NoStop [Stadler et al.(2015)Stadler, Yin, von Delft, Kotliar, and Weichselbaum]Stadler:2015 author author K. M. Stadler, author Z. P. Yin, author J. von Delft, author G. Kotliar,and author A. Weichselbaum, title title Dynamical mean-field theory plus numerical renormalization-group study of spin-orbital separation in a three-band Hund metal, 10.1103/PhysRevLett.115.136401 journal journal Phys. Rev. Lett. volume 115, pages 136401 (year 2015)NoStop [Glasbrenner et al.(2015)Glasbrenner, Mazin, Jeschke, Hirschfeld, Fernandes, and Valentí]Glasbrenner:2015 author author J. K. Glasbrenner, author I. I. Mazin, author Harald O. Jeschke, author P. J. Hirschfeld, author R. M. Fernandes,and author Roser Valentí, title title Effect of magnetic frustration on nematicity and superconductivity in iron chalcogenides, 10.1038/nphys3434 journal journal Nature Phys. volume 11,pages 953–958 (year 2015)NoStop [Baum et al.(2018)Baum, Li, Tomi ćć, Lazarevi ćć, Jost, Löffler, Muschler, Böhm, Chu, Fisher, Valentí, Mazin, and Hackl]Baum:2018a author author A. Baum, author Ying Li, author M. Tomi ćć, author N. Lazarevi ćć, author D. Jost, author F. Löffler, author B. Muschler, author T. Böhm, author J.-H. Chu, author I. R.Fisher, author R. Valentí, author I. I. Mazin,and author R. Hackl, title title Interplay of lattice, electronic, and spin degrees of freedom in detwinned BaFe_2As_2: A Raman scattering study, 10.1103/PhysRevB.98.075113 journal journal Phys. Rev. B volume 98, pages 075113 (year 2018)NoStop [Anisimov, V. I. et al.(2002)Anisimov, V. I., Nekrasov, I. A., Kondakov, D. E., Rice, T. M., and Sigrist, M.]Anisimov:2002 author author Anisimov, V. I., author Nekrasov, I. A., author Kondakov, D. E., author Rice, T. M.,and author Sigrist, M.,title title Orbital-selective Mott-insulator transition in Ca_2-xSr_xRuO_4, 10.1140/epjb/e20020021 journal journal Eur. Phys. J. B volume 25, pages 191–201 (year 2002)NoStop [de' Medici et al.(2009)de' Medici, Hassan, Capone, and Dai]deMedici:2009 author author Luca de' Medici, author S. R. Hassan, author Massimo Capone,and author Xi Dai,title title Orbital-Selective Mott Transition out of Band Degeneracy Lifting, 10.1103/PhysRevLett.102.126401 journal journal Phys. Rev. Lett. volume 102, pages 126401 (year 2009)NoStop [de' Medici(2017)]deMedici:2017 author author Luca de' Medici, title title Hund's Induced Fermi-Liquid Instabilities and Enhanced Quasiparticle Interactions, 10.1103/PhysRevLett.118.167003 journal journal Phys. Rev. Lett. volume 118, pages 167003 (year 2017)NoStop [Yi et al.(2015)Yi, Liu, Zhang, Yu, Zhu, Lee, Moore, Schmitt, Li, Riggs, Chu, Lv, Hu, Hashimoto, Mo, Hussain, Mao, Chu, Fisher, Si, Shen, andLu]YiM:2015 author author M. Yi, author Z-K Liu, author Y. Zhang, author R. Yu, author J.-X. 
Zhu, author J.J.Lee, author R.G. Moore, author F.T. Schmitt, author W. Li, author S.C. Riggs, author J.-H. Chu, author B. Lv, author J. Hu, author M. Hashimoto, author S.-K. Mo, author Z. Hussain, author Z.Q. Mao, author C.W. Chu, author I.R.Fisher, author Q. Si, author Z.-X. Shen,andauthor D.H. Lu, title title Observation of universal strong orbital-dependent correlation effects in iron chalcogenides, 10.1038/ncomms8777 journal journal Nature Commun. volume 6, pages 7777 (year 2015)NoStop [Venturini et al.(2002)Venturini, Opel, Devereaux, Freericks, Tütt ő, Revaz, Walker, Berger, Forró, and Hackl]Venturini:2002b author author F. Venturini, author M. Opel, author T. P. Devereaux, author J. K. Freericks, author I. Tütt ő, author B. Revaz, author E. Walker, author H. Berger, author L. Forró,and author R. Hackl, title title Observation of an Unconventional Metal-Insulator Transition in Overdoped CuO_2 Compounds, 10.1103/PhysRevLett.89.107003 journal journal Phys. Rev. Lett. volume 89, pages 107003 (year 2002)NoStop [Devereaux and Hackl(2007)]Devereaux:2007 author author Thomas P.Devereaux and author RudiHackl, title title Inelastic light scattering from correlated electrons, 10.1103/RevModPhys.79.175 journal journal Rev. Mod. Phys. volume 79, pages 175 (year 2007)NoStop [Chen et al.(2011)Chen, Jia, Kemper, Singh, andDevereaux]Chen:2011b author author C.-C. Chen, author C. J. Jia, author A. F. Kemper, author R. R. P. Singh,and author T. P. Devereaux, title title Theory of Two-Magnon Raman Scattering in Iron Pnictides and Chalcogenides, 10.1103/PhysRevLett.106.067002 journal journal Phys. Rev. Lett. volume 106, pages 067002 (year 2011)NoStop [Okazaki et al.(2011)Okazaki, Sugai, Niitaka, andTakagi]Sugai:2011 author author K. Okazaki, author S. Sugai, author S. Niitaka,andauthor H. Takagi, title title Phonon, two-magnon, and electronic Raman scattering of Fe_1+yTe_1-xSe_x, 10.1103/PhysRevB.83.035103 journal journal Phys. Rev. B volume 83, pages 035103 (year 2011)NoStop [Sugai et al.(2012)Sugai, Mizuno, Watanabe, Kawaguchi, Takenaka, Ikuta, Takayanagi, Hayamizu, and Sone]Sugai:2012 author author Shunji Sugai, author Yuki Mizuno, author Ryoutarou Watanabe, author Takahiko Kawaguchi, author Koshi Takenaka, author Hiroshi Ikuta, author Yasumasa Takayanagi, author Naoki Hayamizu,and author Yasuhiro Sone, title title Spin-Density-Wave Gap with Dirac Nodes and Two-Magnon Raman Scattering in BaFe_2As_2, 10.1143/JPSJ.81.024718 journal journal J. Phys. Soc. Japan volume 81, pages 024718 (year 2012)NoStop [Chauvière et al.(2011)Chauvière, Gallais, Cazayous, Méasson, Sacuto, Colson,and Forget]Chauviere:2011 author author L. Chauvière, author Y. Gallais, author M. Cazayous, author M. A. Méasson, author A. Sacuto, author D. Colson,and author A. Forget, title title Raman scattering study of spin-density-wave order and electron-phonon coupling in Ba(Fe_1-xCo_x)_2As_2, 10.1103/PhysRevB.84.104508 journal journal Phys. Rev. B volume 84, pages 104508 (year 2011)NoStop [Eiter et al.(2013)Eiter, Lavagnini, Hackl, Nowadnick, Kemper, Devereaux, Chu, Analytis, Fisher, and Degiorgi]Eiter:2013 author author Hans-MartinEiter, author MichelaLavagnini, author RudiHackl, author Elizabeth A.Nowadnick, author Alexander F.Kemper, author Thomas P.Devereaux, author Jiun-HawChu, author James G.Analytis, author Ian R.Fisher,and author LeonardoDegiorgi, title title Alternative route to charge density wave formation in multiband systems, 10.1073/pnas.1214745110 journal journal Proc. Nat. Acad. 
Sciences volume 110, pages 64–69 (year 2013), http://arxiv.org/abs/http://www.pnas.org/content/110/1/64.full.pdf+html http://www.pnas.org/content/110/1/64.full.pdf+html NoStop [Yang et al.(2014)Yang, Gallais, Rullier-Albenque, Méasson, Cazayous, Sacuto, Shi, Colson, and Forget]YangYX:2014 author author Y.-X. Yang, author Y. Gallais, author F. Rullier-Albenque, author M.-A. Méasson, author M. Cazayous, author A. Sacuto, author J. Shi, author D. Colson,and author A. Forget, title title Temperature-induced change in the Fermi surface topology in the spin density wave phase of Sr(Fe_1xCo_x)_2As_2, 10.1103/PhysRevB.89.125130 journal journal Phys. Rev. B volume 89, pages 125130 (year 2014)NoStop [Wang et al.(2015)Wang, Kivelson, and Lee]Wang:2015 author author Fa Wang, author Steven A. Kivelson,and author Dung-Hai Lee, title title Nematicity and quantum paramagnetism in FeSe, 10.1038/nphys3456 journal journal Nature Phys. volume 11, pages 959–963 (year 2015)NoStop [Wang et al.(2016)Wang, Shen, Pan, Zhang, Ikeuchi, Iida, Christianson, Walker, Adroja, Abdel-Hafiez, Chen, Chareev, Vasiliev,and Zhao]WangQS:2016 author author Qisi Wang, author Yao Shen, author Bingying Pan, author Xiaowen Zhang, author K. Ikeuchi, author K. Iida, author A. D.Christianson, author H. C.Walker, author D. T.Adroja, author M. Abdel-Hafiez, author Xiaojia Chen, author D. A. Chareev, author A. N. Vasiliev,andauthor Jun Zhao, title title Magnetic ground state of FeSe,10.1038/ncomms12182 journal journal Nature Commun. volume 7, pages 12182 (year 2016)NoStop [Rahn et al.(2015)Rahn, Ewings, Sedlmaier, Clarke,and Boothroyd]Rahn:2015 author author M. C. Rahn, author R. A. Ewings, author S. J. Sedlmaier, author S. J. Clarke,and author A. T. Boothroyd, title title Strong (,0) spin fluctuations in -FeSe observed by neutron spectroscopy, 10.1103/PhysRevB.91.180501 journal journal Phys. Rev. B volume 91, pages 180501 (year 2015)NoStop [Massat et al.(2016)Massat, Farina, Paul, Karlsson, Strobel, Toulemonde, Méasson, Cazayous, Sacuto, Kasahara, Shibauchi, Matsuda, andGallais]Massat:2016 author author Pierre Massat, author Donato Farina, author Indranil Paul, author Sandra Karlsson, author Pierre Strobel, author Pierre Toulemonde, author Marie-Aude Méasson, author Maximilien Cazayous, author Alain Sacuto, author Shigeru Kasahara, author Takasada Shibauchi, author Yuji Matsuda,and author Yann Gallais, title title Charge-induced nematicity in FeSe, 10.1073/pnas.1606562113 journal journal Proc. Nat. Acad. Sciences volume 113, pages 9177–9181 (year 2016)NoStop [Sulewski et al.(1991)Sulewski, Fleury, Lyons, andCheong]Sulewski:1991 author author P. E. Sulewski, author P. A. Fleury, author K. B. Lyons, and author S-W. Cheong,title title Observation of chiral spin fluctuations in insulating planar cuprates, 10.1103/PhysRevLett.67.3864 journal journal Phys. Rev. Lett. volume 67, pages 3864 (year 1991)NoStop [Muschler et al.(2010)Muschler, Prestel, Tassini, Hackl, Lambacher, Erb, Komiya, Ando, Peets, Hardy, Liang, and Bonn]Muschler:2010a author author B. Muschler, author W. Prestel, author L. Tassini, author R. Hackl, author M. Lambacher, author A. Erb, author Seiki Komiya, author YoichiAndo, author D.C. Peets, author W.N. Hardy, author R. Liang,and author D.A. Bonn, title title Electron interactions and charge ordering in CuO_2 compounds, 10.1140/epjst/e2010-01302-4 journal journal Eur. Phys. J. Special Topicsvolume 188, pages 131 (year 2010)NoStop [Knoll et al.(1990)Knoll, Thomsen, Cardona, and Murugaraj]Knoll:1990 author author P. 
Knoll, author C. Thomsen, author M. Cardona,andauthor P. Murugaraj, title title Temperature-dependent lifetime of spin excitations in RBa_2Cu_3O_6 ( R = Eu, Y),10.1103/PhysRevB.42.4842 journal journal Phys. Rev. B volume 42, pages 4842–4845 (year 1990)NoStop [Choi et al.(2008)Choi, Wulferding, Lemmens, Ni, Bud'ko, and Canfield]Choi:2008 author author K.-Y. Choi, author D. Wulferding, author P. Lemmens, author N. Ni, author S. L. Bud'ko,and author P. C. Canfield, title title Lattice and electronic anomalies of CaFe_2As_2 studied by Raman spectroscopy, 10.1103/PhysRevB.78.212503 journal journal Phys. Rev. B volume 78, pages 212503 (year 2008)NoStop [Gallais et al.(2013)Gallais, Fernandes, Paul, Chauvière, Yang, Méasson, Cazayous, Sacuto, Colson, andForget]Gallais:2013 author author Y. Gallais, author R. M. Fernandes, author I. Paul, author L. Chauvière, author Y.-X. Yang, author M.-A. Méasson, author M. Cazayous, author A. Sacuto, author D. Colson,and author A. Forget, title title Observation of Incipient Charge Nematicity in Ba(Fe_1-xCo_x)_2As_2, 10.1103/PhysRevLett.111.267001 journal journal Phys. Rev. Lett. volume 111, pages 267001 (year 2013)NoStop [Kretzschmar et al.(2016)Kretzschmar, Böhm, Karahasanović, Muschler, Baum, Jost, Schmalian, Caprara, Grilli, Di Castro, Analytis, Chu, Fisher,and Hackl]Kretzschmar:2016 author author F. Kretzschmar, author T. Böhm, author U. Karahasanović, author B. Muschler, author A. Baum, author D. Jost, author J. Schmalian, author S. Caprara, author M. Grilli, author C. Di Castro, author J. H. Analytis, author J.-H. Chu, author I. R. Fisher,and author R. Hackl, title title Critical spin fluctuations and the origin of nematic order in Ba(Fe_1-xCo_x)_2As_2, 10.1038/NPHYS3634 journal journal Nature Phys.volume 12, pages 560–563 (year 2016)NoStop [Burrard-Lucas et al.(2013)Burrard-Lucas, Free, Sedlmaier, Wright, Cassidy, Hara, Corkett, Lancaster, Baker, Blundell, and Clarke]Burrard:2013 author author Matthew Burrard-Lucas, author David G. Free, author Stefan J. Sedlmaier, author Jack D. Wright, author Simon J. Cassidy, author Yoshiaki Hara, author Alex J. Corkett, author Tom Lancaster, author Peter J. Baker, author Stephen J. Blundell,and author Simon J. Clarke, title title Enhancement of the superconducting transition temperature of fese by intercalation of a molecular spacer layer, 10.1038/nmat3464 journal journal Nature Mater. volume 12, pages 15–19 (year 2013)NoStop [Zhang et al.(2013)Zhang, Xia, Liu, Tong, Yang, and Zhang]ZhangA:2013 author author An-min Zhang, author Tian-long Xia, author Kai Liu, author Wei Tong, author Zhao-rong Yang,and author Qing-ming Zhang, title title Superconductivity at 44 K in K intercalated FeSe system with excess Fe, 10.1038/srep01216 journal journal Sci. Rep. volume 3, pages 1216 (year 2013)NoStop [Ge et al.(2015)Ge, Liu, Liu, Gao, Qian, Xue, Liu, and Jia]GeJF:2015 author author Jian-FengGe, author Zhi-Long Liu, author Canhua Liu, author Chun-Lei Gao, author Dong Qian, author Qi-Kun Xue, author Ying Liu,and author Jin-Feng Jia, title title Superconductivity above 100 K in single-layer FeSe films on doped SrTiO_3, 10.1038/nmat4153 journal journal Nature Mater. volume 14, pages 285–289 (year 2015)NoStop [Medvedev et al.(2009)Medvedev, McQueen, Troyan, Palasyuk, Eremets, Cava, Naghavi, Casper, Ksenofontov, Wortmann, and Felser]Medvedev:2009 author author S. Medvedev, author T. M. McQueen, author I. A. Troyan, author T. Palasyuk, author M. I. Eremets, author R. J. Cava, author S. Naghavi, author F. Casper, author V. 
Ksenofontov, author G. Wortmann,and author C. Felser, title title Electronic and magnetic phase diagram of β-Fe_1.01Se with superconductivity at 36.7 K under pressure, 10.1038/nmat2491 journal journal Nature Mater.volume 8, pages 630 (year 2009)NoStop [Canali and Girvin(1992)]Canali:1992 author author C. M. Canali and author S. M. Girvin, title title Theory of Raman scattering in layered cuprate materials, 10.1103/PhysRevB.45.7127 journal journal Phys. Rev. B volume 45, pages 7127–7160 (year 1992)NoStop [Weidinger and Zwerger(2015)]Weidinger:2015 author author Simon AdrianWeidinger and author WilhelmZwerger, title title Higgs mode and magnon interactions in 2D quantum antiferromagnets from Raman scattering, 10.1140/epjb/e2015-60438-1 journal journal Eur. Phys. B volume 88, pages 237 (year 2015)NoStop [Chelwani et al.(2018)Chelwani, Baum, Böhm, Opel, Venturini, Tassini, Erb, Berger, Forró, and Hackl]Chelwani:2018 author author N. Chelwani, author A. Baum, author T. Böhm, author M. Opel, author F. Venturini, author L. Tassini, author A. Erb, author H. Berger, author L. Forró, and author R. Hackl, title title Magnetic excitations and amplitude fluctuations in insulating cuprates, 10.1103/PhysRevB.97.024407 journal journal Phys. Rev. B volume 97, pages 024407 (year 2018)NoStop [Fleury and Loudon(1968)]Fleury:1968 author author P. A. Fleury and author R. Loudon, title title Scattering of Light by One- and Two-Magnon Excitations, 10.1103/PhysRev.166.514 journal journal Phys. Rev. volume 166, pages 514 (year 1968)NoStop [Chauvière et al.(2010)Chauvière, Gallais, Cazayous, Méasson, Sacuto, Colson,and Forget]Chauviere:2010 author author L. Chauvière, author Y. Gallais, author M. Cazayous, author M. A. Méasson, author A. Sacuto, author D. Colson,and author A. Forget, title title Impact of the spin-density-wave order on the superconducting gap of Ba(Fe_1-xCo_x)_2As_2, 10.1103/PhysRevB.82.180521 journal journal Phys. Rev. B volume 82, pages 180521 (year 2010)NoStop [Yildirim(2009)]Yildirim:2009b author author Taner Yildirim, title title Frustrated magnetic interactions, giant magneto-elastic coupling, and magnetic phonons in iron-pnictides, DOI: 10.1016/j.physc.2009.03.038 journal journal Physica C volume 469, pages 425 (year 2009)NoStop [Böhmer et al.(2013)Böhmer, Hardy, Eilers, Ernst, Adelmann, Schweiss, Wolf, and Meingast]Bohmer:2013 author author A. E. Böhmer, author F. Hardy, author F. Eilers, author D. Ernst, author P. Adelmann, author P. Schweiss, author T. Wolf,and author C. Meingast, title title Lack of coupling between superconductivity and orthorhombic distortion in stoichiometric single-crystalline FeSe, 10.1103/PhysRevB.87.180505 journal journal Phys. Rev. B volume 87, pages 180505 (year 2013)NoStop [Sorensen et al.(1998)Sorensen, Lehoucq, and Yang]Sorensen:1998 author author D.C. Sorensen, author R.B. Lehoucq,and author C. Yang, @nooptitle ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Prob­lems with Implicitly Restarted Arnoldi Methods (publisher Siam, address Philadelphia, year 1998)NoStop [Gretarsson et al.(2011)Gretarsson, Lupascu, Kim, Casa, Gog, Wu, Julian, Xu, Wen, Gu, Yuan, Chen, Wang, Khim, Kim, Ishikado, Jarrige, Shamoto, Chu, Fisher, andKim]Gretarsson:2011 author author H. Gretarsson, author A. Lupascu, author Jungho Kim, author D. Casa, author T. Gog, author W. Wu, author S. R.Julian, author Z. J.Xu, author J. S. Wen, author G. D. Gu, author R. H. Yuan, author Z. G. Chen, author N.-L. Wang, author S. Khim, author K. H. Kim, author M. Ishikado, author I. 
Jarrige, author S. Shamoto, author J.-H.Chu, author I. R. Fisher,and author Young-June Kim, title title Revealing the dual nature of magnetism in iron pnictides and iron chalcogenides using x-ray emission spectroscopy, 10.1103/PhysRevB.84.100509 journal journal Phys. Rev. B volume 84, pages 100509 (year 2011)NoStop [Dagotto(1994)]Dagotto:1994 author author Elbio Dagotto, title title Correlated electrons in high-temperature superconductors, 10.1103/RevModPhys.66.763 journal journal Rev. Mod. Phys. volume 66, pages 763 (year 1994)NoStop [Sugai et al.(2010)Sugai, Mizuno, Kiho, Nakajima, Lee, Iyo, Eisaki, andUchida]Sugai:2010 author author S. Sugai, author Y. Mizuno, author K. Kiho, author M. Nakajima, author C. H. Lee, author A. Iyo, author H. Eisaki,and author S. Uchida, title title Pairing symmetry of the multiorbital pnictide superconductor BaFe_1.84Co_0.16As_2 from Raman scattering, 10.1103/PhysRevB.82.140504 journal journal Phys. Rev. B volume 82, pages 140504 (year 2010)NoStop [Venturini(2003)]Venturini:2002d author author F. Venturini, title Raman Scattering Study of Electronic Correlations in Cuprates: Observation of an Unconventional Metal-Insulator Transition, @noopPh.D. thesis, school TU-München (year 2003)NoStop [Muschler et al.(2009)Muschler, Prestel, Hackl, Devereaux, Analytis, Chu, andFisher]Muschler:2009 author author B. Muschler, author W. Prestel, author R. Hackl, author T. P. Devereaux, author J. G. Analytis, author Jiun-Haw Chu,and author I. R. Fisher, title title Band- and momentum-dependent electron dynamics in superconducting Ba(Fe_1 - xCo_x)_2As_2 as seen via electronic Raman scattering, 10.1103/PhysRevB.80.180510 journal journal Phys. Rev. B volume 80, pages 180510 (year 2009)NoStop [Böhm et al.(2017)Böhm, Kretzschmar, Baum, Rehm, Jost, Hosseinian Ahangharnejhad, Thomale, Platt, Maier, Hanke, Moritz, Devereaux, Scalapino, Maiti, Hirschfeld, Adelmann, Wolf, Wen, and Hackl]Bohm:2017 author author T. Böhm, author F. Kretzschmar, author A. Baum, author M. Rehm, author D. Jost, author R. Hosseinian Ahangharnejhad, author R. Thomale, author C. Platt, author T. A. Maier, author W. Hanke, author B. Moritz, author T. P.Devereaux, author D. J.Scalapino, author S. Maiti, author P. J.Hirschfeld, author P. Adelmann, author T. Wolf, author H.-H.Wen,and author R. Hackl, title title Microscopic pairing fingerprint of the iron-based superconductor Ba_1-xK_xFe_2As_2, @noopjournal journal ArXiv e-prints(year 2017), http://arxiv.org/abs/1703.07749 arXiv:1703.07749 [cond-mat.supr-con] NoStop [Caprara et al.(2005)Caprara, Di Castro, Grilli, andSuppa]Caprara:2005 author author S. Caprara, author C. Di Castro, author M. Grilli,and author D. Suppa, title title Charge-Fluctuation Contribution to the Raman Response in Superconducting Cuprates, 10.1103/PhysRevLett.95.117004 journal journal Phys. Rev. Lett. volume 95, pages 117004 (year 2005)NoStop [Gallais and Paul(2016)]Gallais:2016a author author Yann Gallais and author Indranil Paul, title title Charge nematicity and electronic Raman scattering in iron-based superconductors, http://dx.doi.org/10.1016/j.crhy.2015.10.001 journal journal C. R. Physique volume 17,pages 113 – 139 (year 2016)NoStop [Thorsmølle et al.(2016)Thorsmølle, Khodas, Yin, Zhang, Carr, Dai, and Blumberg]Thorsmolle:2016 author author V. K. Thorsmølle, author M. Khodas, author Z. P. Yin, author Chenglin Zhang, author S. V. Carr, author Pengcheng Dai,and author G. 
Blumberg, title title Critical quadrupole fluctuations and collective modes in iron pnictide superconductors, 10.1103/PhysRevB.93.054515 journal journal Phys. Rev. B volume 93, pages 054515 (year 2016)NoStop
http://arxiv.org/abs/1709.08998v3
{ "authors": [ "Andreas Baum", "Harrison N. Ruiz", "Nenad Lazarević", "Yao Wang", "Thomas Böhm", "Ramez Hosseinian Ahangharnejhad", "Peter Adelmann", "Thomas Wolf", "Zoran V. Popović", "Brian Moritz", "Thomas P. Devereaux", "Rudi Hackl" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170926131831", "title": "Frustrated spin order and stripe fluctuations in FeSe" }
Accurate and Efficient Evaluation of Characteristic Modes

Doruk Tayli, Student Member, IEEE, Miloslav Capek, Senior Member, IEEE, Lamyae Akrou, Vit Losenicky, Lukas Jelinek, and Mats Gustafsson, Senior Member, IEEE

Manuscript received December 30, 2023; revised December 30, 2023. This work was supported by the Swedish Foundation for Strategic Research (SSF) under the program Applied Mathematics and the project Complex analysis and convex optimization for EM design, and by the Czech Science Foundation under project No. 15-10280Y. D. Tayli and M. Gustafsson are with the Department of Electrical and Information Technology, Lund University, 221 00 Lund, Sweden (e-mail: {doruk.tayli,mats.gustafsson}@eit.lth.se). M. Capek, V. Losenicky and L. Jelinek are with the Department of Electromagnetic Field, Faculty of Electrical Engineering, Czech Technical University in Prague, Technicka 2, 166 27 Prague, Czech Republic (e-mail: {miloslav.capek,losenvit,lukas.jelinek}@fel.cvut.cz). L. Akrou is with the Department of Electrical and Computer Engineering, Faculty of Sciences and Technology, University of Coimbra, Polo II, Pinhal de Marrocos, 3030-290 Coimbra, Portugal (e-mail: [email protected]).

Abstract: A new method to improve the accuracy and efficiency of CM decomposition for perfectly conducting bodies is presented. The method uses the expansion of the Green dyadic in spherical vector waves. This expansion is utilized in the MoM solution of the EFIE to factorize the real part of the impedance matrix. The factorization is then employed in the computation of CM, which improves the accuracy as well as the computational speed. An additional benefit is a rapid computation of far fields. The method can easily be integrated into existing MoM solvers. Several structures are investigated illustrating the improved accuracy and performance of the new method.

Index Terms: Antenna theory, numerical analysis, eigenvalues and eigenfunctions, electromagnetic theory, convergence of numerical methods.

§ INTRODUCTION

The MoM solution to electromagnetic field integral equations was introduced by Harrington <cit.> and has prevailed as a standard in solving open (radiating) electromagnetic problems <cit.>.
While memory-demanding, MoM represents operators as matrices (notably the impedance matrix <cit.>) allowing for direct inversion and modal decompositions <cit.>. The latter option is becoming increasingly popular, mainly due to CM decomposition <cit.>, a leading formalism in antenna shape and feeding synthesis  <cit.>, determination of optimal currents  <cit.>, and performance evaluation <cit.>. Utilization of CM decomposition is especially efficient when dealing with electrically small antennas <cit.>, particularly if they are made solely of PEC, for which only a small number of modes are needed to describe their radiation behavior. Yet, the real part of the impedance matrix is indefinite as it is computed with finite precision . The aforementioned deficiency is resolved in this paper by a two-step procedure. First, the real part of the impedance matrix is constructed using spherical wave expansion of the dyadic Green function <cit.>. This makes it possible to decompose the real part of the impedance matrix as a product of a spherical modes projection matrix with its hermitian conjugate. The second step consists of reformulating the modal decomposition so that only the standalone spherical modes projection matrix is involved preserving the numerical dynamics[The numerical dynamic is defined as the largest characteristic eigenvalue.]. The proposed method significantly accelerates the computation of CM as well as of the real part of the impedance matrix. Moreover, it is possible to recover CM using lower precision floating point arithmetic, which reduces memory use and speeds up arithmetic operations if hardware vectorization is exploited <cit.>. An added benefit is the efficient computation of far field patterns using spherical vector harmonics.The projection on spherical waves in the proposed method introduces several appealing properties. First is an easy monitoring of the numerical dynamics of the matrix, since the different spherical waves occupy separate rows in the projection matrix. Second is the possibility to compute a positive semidefinite impedance matrix which plays important role in an optimal design . A final benefit is the superposition of modes. .The paper is organized as follows. The construction of the impedance matrix using classical procedure is briefly reviewed in Section <ref> and the proposed procedure is presented in Section <ref>. Numerical aspects of evaluating the impedance matrix are discussed in Section <ref>. In Section <ref>, the spherical modes projection matrix is utilized to reformulate modal decomposition techniques, namely the evaluation of radiation modes in Section <ref> and CM in Section <ref>. These two applications cover both the standard and generalized eigenvalue problems. The advantages of the proposed procedure are demonstrated on a series of practical examples in this section. Various aspects of the proposed method are discussed in Section <ref> and the paper is concluded in Section <ref>. § EVALUATION OF IMPEDANCE MATRIXThis paper investigates mode decompositions for PEC structures in free space. The time-harmonic quantities under the convention , with ω being the angular frequency, are used throughout the paper.§.§ Method of Moments Implementation of the EFIELet us consider the EFIE <cit.> for PEC bodies, defined as 𝒵(J)=R(J)+X(J) = n×( n×E),with 𝒵(J) being the impedance operator, E the incident electric field <cit.>, J the current density,the imaginary unit, and n the unit normal vector to the PEC surface. 
The EFIE (<ref>) is explicitly written asn×E(r_2)= k n×∫_G(r_1, r_2)·J(r_1) A_1,where , k is the wave number,the free space impedance, and G the dyadic Green function for the electric field in free-space defined as <cit.>G(r_1,r_2) = ( + 1/k^2∇∇) e^- k |r_1 - r_2|/4 π|r_1 - r_2|.Here,is the identity dyadic, and r_1, r_2 are the source and observation points. The EFIE (<ref>) is solved with the MoM by expanding the current density J(r) into real-valued basis functions {_p (r)} asJ(r) ≈∑_p=1^ I_p_p (r)and applying Galerkin testing procedure <cit.>. The impedance operatoris expressed as the impedance matrix , where R is the resistance matrix, and X the reactance matrix. The elements of the impedance matrix areZ_pq =k ∫_∫__p (r_1) ·G(r_1,r_2) ·_q (r_2) A_1 A_2. §.§ Spherical Wave Expansion of the Green DyadicThe Green dyadic (<ref>) that is used to compute the impedance matrix Z can be expanded in spherical vector waves asG(r_1,r_2) = - k ∑_αα1kr_<α4kr_>,whereandif , andandif . The regular and outgoing spherical vector waves <cit.> areand , see Appendix <ref>. The mode index α forvector spherical harmonics is <cit.>α( τ, σ,m, l ) = 2 (l^2+l-1+(-1)^sm ) + τwith , , , s=0 foreven azimuth functions (σ=e), and s=1 for odd azimuth functions (σ=o). Inserting the expansion of the Green dyadic (<ref>) into (<ref>), the impedance matrix Z becomesZ_pq = k^2 ∑_α∫_∫__p (r_1) · α1kr_< α4kr_> · _q (r_2) A_1 A_2.For a PEC structure the resistive part of (<ref>) can be factorized asR_pq = k^2 ∑_α∫__p (r_1) · α1kr_1A_1 ∫_α1kr_2 · _q (r_2) A_2, where α1kr=Re{α4kr} is used. Reactance matrix, X, cannot be factorized in a similar way as two separate spherical waves occur. Resistance matrix can be written in matrix form asR = ^,where ^ is the matrix transpose. Individual elements of the matrixareS_α p =k √()∫__p (r)· α1krAand the size of the matrixis , where Ν = 2 ( + 2)is the number of spherical modes andthe highest order of spherical mode, see Appendix <ref>.Forvector spherical harmonics <cit.> the transpose ^ in (<ref>) is replaced with the hermitian transpose ^.The individual integrals in (<ref>) are in fact related to the  <cit.>, where the incident and scattered electric fields are expanded using regular and outgoing spherical vector waves, respectively. The factorization (<ref>) is also used in vector fast multipole algorithm <cit.>.The radiatedF(r) can conveniently be computed using spherical vector harmonicsF(r) = 1/k∑_α^l-τ+2 f_ααr,where αr are the spherical vector harmonics, see Appendix <ref>. The expansion coefficients f_α are given by[ f_α] = ,where the column matrix  contains the current density coefficients I_p.The totalradiated power of a lossless antenna can be expressed as a sum of expansion coefficients≈1/2^R = 1/2||^2 = 1/2∑_α| f_α|^2. §.§ Numerical ConsiderationsThe spectrum of the matrices R and X differ considerably <cit.>. The eigenvalues of the R matrix decrease exponentially and the number of eigenvalues are corrupted by numerical noise, while this is not the case for the matrix X. As a result, if the matrix R is used in an eigenvalue problem, only a few modes can be extracted. This major limitation can be overcome with the use of the matrix  in (<ref>), whose elements vary several order of magnitude, as the result of the increased order of spherical modes with increasing row number. 
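The bookkeeping introduced above (the index map α(τ,σ,m,l), the mode count Ν = 2L(L+2), and the radiated power obtained from S) can be summarized in a short numpy sketch; the matrix S and the current vector I below are random placeholders standing in for the quantities delivered by the MoM code.

```python
# Sketch: spherical-wave bookkeeping and radiated power from the factorization R = S^T S.
import numpy as np

def alpha_index(tau, sigma, m, l):
    """Single index alpha(tau, sigma, m, l) = 2(l^2 + l - 1 + (-1)^s m) + tau."""
    s = 0 if sigma == 'e' else 1           # even / odd azimuth functions
    return 2 * (l**2 + l - 1 + (-1)**s * m) + tau

def num_modes(L):
    """Number of spherical modes up to order L: N = 2 L (L + 2)."""
    return 2 * L * (L + 2)

print(num_modes(3))                        # 30 modes for L = 3
print(alpha_index(2, 'e', 0, 1))           # index of the (tau=2, sigma=e, m=0, l=1) wave

# radiated power and far-field expansion coefficients from S and a current vector I
Nsph, Ndof = num_modes(3), 500             # placeholder sizes
S = np.random.randn(Nsph, Ndof)            # placeholder projection matrix
I = np.random.randn(Ndof)                  # placeholder current coefficients
f = S @ I                                  # spherical expansion coefficients f_alpha
P_rad = 0.5 * np.linalg.norm(f)**2         # equals 0.5 * I^T R I because R = S^T S
```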
If the matrix R is directly computed with the matrix product (<ref>) or equivalently from matrix produced by (<ref>) small values are truncated due toarithmetic[As an example to the loss of significance in double precision arithmetic consider the sum .] <cit.>. Subsequently, the spectrum of the matrix R should be computed from the matrix  as presented in Section <ref>.The matrix  also provides a low-rank approximation of the matrix R, which is the result of the rapid convergence of regular spherical waves. In this paper, the number of used modes in (<ref>) is truncated using a modified version of the expression in <cit.>=ka+7√(ka)+3,whereis the highest order of spherical mode, a is the radius of the sphere enclosing the scatterer, and . is the ceiling function. The resulting accuracy in all treated cases is satisfactory. The order of spherical modes can be modified to trade between accuracy and computational efficiency, where increasingimproves the accuracy.Fig. <ref> shows the convergence of the matrix R for Example [tab:Examples]R2.Substitution of the spherical vector waves, introduced in Section <ref>, separates (<ref>) into two separate surface integrals reducing computational complexity. Table <ref> presents computation times[Computations are done on a workstation with i7-3770 CPU @ 3.4 GHz and 32 GB RAM, operating under Windows 7.] of different matrices[Computation time for the matrix X is omitted as it takes longer than the matrix R, due to Green function singularity.] Z, R, , and ^ for the examples given in Table <ref>. As expected, the matrix Z requires the most computational resources, as it includes both the matrix R and X. The computation of the matrix R using MoM is faster than the matrix Z since the underlying integrals are regular. The computation of the matrix R using (<ref>) takes the least amount of time for most of the examples. The computational gain is notable for structures with more dof, .§ MODAL DECOMPOSITION WITH THE MATRIX Modal decomposition using the matrix  is applied to two structures; a spherical shell of radius a, and a rectangular plate of length L and width W=L/2 ARXIV (App. <ref>), <cit.>,are presented in Table <ref>. Both structures are investigated for different number of dof, RWG functions <cit.> are used as the basis functions _p. The matrices used in modal decomposition have been computed using in-house solvers AToM <cit.> and IDA <cit.>, see Appendix <ref> for details. Results from the commercial electromagnetic solver FEKO <cit.> are also presented for comparison. Computations that require a higher precision than the double precision arithmetic are performed using the mpmath Python library <cit.>, and the Advanpix Matlab toolbox <cit.>.§.§ Radiation Modes The eigenvalues for the radiation modes <cit.> are easily found using the eigenvalue problemR_n = ξ_n _n,where ξ_n are the eigenvalues of the matrix R, and _n are the eigencurrents. The indefiniteness of the matrix R poses a problem in the eigenvalue decomposition (<ref>) as illustrated in <cit.>. In this paper we show that the indefiniteness caused by the numerical noise can be bypassed using the matrix . We start with the SVD of the matrix S = UΛV^,where U and V are unitary matrices, and Λ is a diagonal matrix containing singular values of matrix . Inserting (<ref>), (<ref>) into (<ref>) and multiplying from the left with V^ yieldsΛ^Λ_n = ξ_n _n,where the eigenvectors are rewritten as , and the eigenvalues are . The number of radiation modes is shown in Table <ref> for all the examples. 
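In practice the whole radiation-mode computation reduces to a singular value decomposition of S. The numpy-based sketch below uses a toy S with a wide dynamic range to contrast the SVD route with a direct eigenvalue decomposition of R = S^T S; the truncation helper follows the rule for L quoted above, and the placeholder data are not taken from the examples in Table <ref>.

```python
# Sketch: radiation modes from the SVD of S; the eigenvalues xi_n are the squared
# singular values and are non-negative by construction, so the exponentially small
# modes are not lost to the cancellation that occurs when R = S^T S is formed first.
import numpy as np

def L_max(ka):
    """Truncation rule for the highest spherical-wave order, L = ceil(ka + 7 sqrt(ka)) + 3."""
    return int(np.ceil(ka + 7.0 * np.sqrt(ka))) + 3

def radiation_modes(S):
    U, svals, Vt = np.linalg.svd(S, full_matrices=False)
    return svals**2, Vt.T                  # eigenvalues xi_n and eigencurrents I_n (columns)

rng = np.random.default_rng(0)
S = rng.standard_normal((30, 400)) * np.logspace(0, -10, 30)[:, None]   # toy S
xi_svd, _ = radiation_modes(S)
xi_dir = np.sort(np.linalg.eigvalsh(S.T @ S))[::-1][:30]                # via R directly
print(xi_svd[-1], xi_dir[-1])              # ~1e-20 versus round-off-level noise
```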
The proper modes are defined to have less than a 5 % deviation from the computation with quadruple precision. For the characteristic eigenvalues of the spherical shell, the correct modes are compared with the analytical values, and a 5 % threshold is selected for the error. A comparison of procedures (<ref>) and (<ref>) is shown in Table <ref>. For high order n, the classical procedure (<ref>) with double numerical precision yields unphysical modes with negative eigenvalues ξ_n (negative radiated power) or with incorrect current profiles (as compared to the use of quadruple precision). Using double precision, the number of modes which resemble physical reality (called "properly calculated modes" in Table <ref>) is much higher for the new procedure (<ref>). It is also worth mentioning that the new procedure, by design, always gives positive eigenvalues ξ_n.

§.§ Characteristic Modes (CMs)

The GEP with the matrix R on the right-hand side serving as a weighting operator <cit.> is much more involved, as the problem cannot be completely substituted by the SVD. Yet, the SVD of the matrix S in (<ref>) plays an important role in the CM decomposition. The CM decomposition is defined here with a GEP as X𝐈_n = λ_n R𝐈_n, which is known to suffer from the indefiniteness of the matrix R <cit.>, therefore delivering only a limited number of modes. The first step is to represent the solution in a basis of singular vectors V by substituting the matrix R in (<ref>) as (<ref>), with (<ref>), and multiplying (<ref>) from the left by the matrix V^H, V^HXV V^H𝐈_n = λ_n Λ^HΛ V^H𝐈_n. Formulation (<ref>) can formally be expressed as a GEP with an already diagonalized right-hand side <cit.>, X̃𝐈̃_n = λ_n R̃𝐈̃_n, where X̃ ≡ V^HXV, R̃ ≡ Λ^HΛ, and 𝐈̃_n ≡ V^H𝐈_n. Since the matrix S is in general rectangular, it is crucial to take into account cases where the number of spherical modes Ν is smaller than the number of basis functions, see (<ref>). This is equivalent to a situation in which there is only a limited number of spherical projections available to recover the CM. Consequently, only a limited number of singular values Λ_nn exist. In such a case, a procedure similar to the one used in <cit.> should be undertaken by partitioning (<ref>) into two linear systems, X̃𝐈̃_n = [ X̃_11 X̃_12; X̃_21 X̃_22 ][ 𝐈̃_1n; 𝐈̃_2n ] = [ λ_1nR̃_11𝐈̃_1n; 0 ], where the blocks X̃_11 and R̃_11 are of size Ν×Ν. The Schur complement is obtained by substituting the second row of (<ref>) into the first row, (X̃_11 - X̃_12X̃_22^-1X̃_21) 𝐈̃_1n = λ_1nR̃_11𝐈̃_1n, with expansion coefficients of the CM defined as 𝐈̃_n = [ 𝐈̃_1n; -X̃_22^-1X̃_21𝐈̃_1n ]. As far as the matrices U and V in (<ref>) are unitary, the decomposition (<ref>) yields CM implicitly normalized to 𝐈_n^HR𝐈_m = δ_nm, which is crucial since the standard normalization cannot be used without decreasing the number of significant digits. In order to demonstrate the use of (<ref>), various examples from Table <ref> are calculated and compared with the conventional approach (<ref>). The CM of the spherical shell from Example [tab:Examples]S2 are calculated and shown as absolute values in logarithmic scale in Fig. <ref>. It is shown that the number of the CM calculated by the classical procedure (FEKO, AToM) is limited to the lower modes, especially considering the degeneracy of the CM on the spherical shell <cit.>. The number of properly found CM is significantly higher when using (<ref>) than with the conventional approach (<ref>), and the numerical dynamic is doubled.
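A compact dense-algebra sketch of the procedure above is given below. It assumes a real symmetric reactance matrix X, a real projection matrix S with fewer rows than columns, and an invertible block X̃_22; it is meant as an illustration of the algebra, not as the implementation used for the examples in this paper.

```python
# Sketch: CM via the SVD of S and the Schur complement, with implicit R-normalization.
import numpy as np
from scipy.linalg import eig, solve

def characteristic_modes_svd(X, S):
    U, svals, Vt = np.linalg.svd(S)            # full SVD, Vt has as many rows as X
    V = Vt.T
    Nsph = svals.size                          # number of spherical modes (rows of S)
    Xt = V.T @ X @ V
    X11, X12 = Xt[:Nsph, :Nsph], Xt[:Nsph, Nsph:]
    X21, X22 = Xt[Nsph:, :Nsph], Xt[Nsph:, Nsph:]
    R11 = np.diag(svals**2)
    B = solve(X22, X21)                        # X22^{-1} X21
    lam, I1 = eig(X11 - X12 @ B, R11)          # GEP with an already diagonal right-hand side
    lam, I1 = lam.real, I1.real
    order = np.argsort(np.abs(lam))
    lam, I1 = lam[order], I1[:, order]
    I_tilde = np.vstack([I1, -B @ I1])         # back-substitute the second block row
    norm = np.sqrt(np.einsum('ij,ij->j', I1, R11 @ I1))   # I_n^T R I_n = I1^T R11 I1
    return lam, (V @ I_tilde) / norm           # eigenvalues and R-normalized eigencurrents
```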
Notice that, even (<ref>) where the matrix R calculated from (<ref>) yields slightly better results than the conventional procedure. This fact is confirmed in Fig. <ref> dealing with Example [tab:Examples][tab:Examples]R2, where the multiprecision package Advanpix is used as a reference. The same calculation illustrates that the matrix R contains all information to recover the same number of modes as (<ref>), but this can be done only at the expense of higher computation time[For Example [tab:Examples]S2 the computation time of CM with quadruple precision is approximately 15 hours.].While  (<ref>) preserves the numerical dynamics, the computational efficiency is not improved due to the matrix multiplications to calculate the X term in (<ref>). An alternative formulation that improves the computational speed is derived by replacing the matrix R with (<ref>) in (<ref>) X_n = λ_n ^_n,and multiplying from the left with X^-1_n = λ_n X^-1^_n.The formulation (<ref>) is a standard eigenvalue problem and can be written asX^-1^_n= X_n = ξ_n _n,whereX = X^-1^, _n =, and ξ_n = 1/λ_n. As an intermediary step, the matrix X_S=X^-1^ is computed, which is later used to calculate the characteristic eigenvectors _n=λ_nX_S_n. The eigenvalue problem (<ref>) is solved in the basis of spherical vector waves, _n =, that results in a matrix X∈ℂ^Ν×Ν. For problems with Ν≪ the eigenvalue problem is solved rapidly compared with (<ref>) and (<ref>). The computation times for various examples are presented in Table <ref> for all three formulations where a different number of CM are compared. For Example [tab:Examples]H1 the computation time is investigated for the first 20 and 100 modes. The acceleration using (<ref>) is approximately 4.7 and 14 times when compared with the conventional method (<ref>). The first characteristic mode of Example [tab:Examples]H1 is illustrated in Fig. <ref>.Two tests proposed in <cit.> are performed to validate the conformity of characteristic current densities and the characteristic far fields with the analytically known values. The results of the former test are depicted in Fig. <ref> for Example [tab:Examples]S2 and [tab:Examples]S5 that are spherical shells with two different dof. Similarity coefficients χ_τ n are depicted both for the CM using the matrix R (<ref>) and for the CM calculated by (<ref>). The number of valid modes correlates well with Table <ref> and the same dependence on the quality and size of the mesh grid as in <cit.> is observed.Qualitatively the same behavior is also observed in the latter test, depicted in Fig. <ref>, where similarity of characteristic far fields is expressed by coefficient ζ_τ n . These coefficients read ζ_τ n = max_l∑_σ m| f̃_τσ mln|^2,where f̃_τσ mln has been evaluated using (<ref>). withbeing the characteristic far fields evaluated for a spherical shell using (<ref>) with [f_α] = _n.The results for characteristic far fields computed from the conventional procedure (<ref>) and the procedure presented in this paper (<ref>) are illustrated in Fig. <ref>. Lastly, the improved accuracy of using (<ref>) over (<ref>), is demonstrated in the Fig. <ref> , for the 17^th inductive CM of the rectangular plate (Example [tab:Examples]R2). The surface current density in the left panel, calculated using 𝐈_41 in (<ref>) is, in fact, only the numerical noise. However, in the right panel, the current density calculated using (<ref>) is the correct higher-order mode. 
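The accelerated route described above also fits in a few lines; the sketch below again assumes a real symmetric X and the matrix S from the MoM code, and the sorting and normalization conventions are choices of the sketch rather than of the paper.

```python
# Sketch: CM from the small standard EVP (S X^{-1} S^T) i~ = (1/lambda) i~ solved in the
# spherical-wave basis; eigencurrents are recovered through X_S = X^{-1} S^T.
import numpy as np

def characteristic_modes_fast(X, S):
    XS = np.linalg.solve(X, S.T)               # X_S = X^{-1} S^T, computed once
    Xt = S @ XS                                # small symmetric matrix, Nsph x Nsph
    xi, It = np.linalg.eigh(Xt)                # standard eigenvalue problem, xi = 1/lambda
    lam = 1.0 / xi
    I_n = (XS @ It) * lam                      # I_n = lambda_n X_S i~_n, back in the RWG basis
    I_n = I_n / np.linalg.norm(S @ I_n, axis=0)   # enforce I_n^T R I_n = |S I_n|^2 = 1
    order = np.argsort(np.abs(lam))
    return lam[order], I_n[:, order]
```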
§.§ Restriction to TM/TE modes

The matrix S, described in Section <ref>, contains projections onto TE and TM spherical waves in its odd (τ=1) and even (τ=2) rows, respectively. The separation of TE and TM spherical waves can be used to construct resistance matrices R^TE and R^TM, where only the odd and even rows of S, respectively, are used to evaluate (<ref>). The matrices R^TM and R^TE can be used in optimization, e.g., when the antenna has to radiate TM modes only <cit.>. With this feature, characteristic modes consisting of only TM (or TE) modes can easily be found. This is shown in Fig. <ref>, in which the spherical shell (Example S2) and the rectangular plate (Example R2) are used to find only TM (capacitive) and TE (inductive) modes, respectively. In the case of a spherical shell, this separation could also have been done during post-processing. For a generally shaped body, however, this separation represents a unique feature of the proposed method.

§ DISCUSSION

Important aspects of the utilization of the matrix S are discussed under the headings of implementation aspects, computational aspects, and potential improvements.

§.§ Implementation Aspects

Unlike the reactance matrix X, the resistance matrix R suffers from a high condition number. Therefore, the combined approach to evaluating the impedance matrix (the matrix R using the matrix S, the matrix X using the conventional Green's function technique with double integration) takes advantage of both methods and is optimal for, e.g., modal decomposition techniques dealing with the matrix R (radiation modes, CMs, energy modes, and the solution of optimization problems). The evaluation and the SVD of the matrix S are also used to estimate the number of modes, i.e., the number of modes found by (<ref>) and the number of CMs found by (<ref>) in Table <ref>.

§.§ Computational Aspects

Computational gains of the proposed method are seen in Table <ref> for the matrix R and Table <ref> for the CMs. The formulation (<ref>) significantly accelerates the CM computation when compared with the classical GEP formulation (<ref>). Moreover, it is possible to employ lower-precision floating point arithmetic, float, to compute as many modes as the conventional method that employs higher-precision floating point arithmetic, double. On modern hardware, this can provide additional performance boosts if vectorization is used. An advantage of the proposed method is that the matrix S is rectangular for Ν smaller than the number of basis functions, allowing an independent selection of the two parameters. While the number of basis functions controls the details of the model, the parameter Ν (or, alternatively, L) controls the convergence of the matrix S and the number of modes to be found. In this paper, (<ref>) is used to determine the highest spherical wave order L for a given electrical size ka. The parameter L can be increased for improved accuracy or decreased for computational gain, depending on the requirements of the problem.
Notice that the parameter Ν is limited from below by the convergence and the number of desired modes, but also from above, since the spherical Bessel function appearing in the regular spherical waves decays rapidly with l as
j_l(ka) ≈ [2^l l!/(2l+1)!] (ka)^l, ka ≪ l.
The rapid decay can be observed in Fig. <ref>, where the convergence of the matrix R to double precision for ka=3 requires only L=12, while (<ref>) gives a conservative value of L=17.

§.§ Potential Improvements

Even though the numerical dynamics is increased, it is still strictly limited and presents an inevitable, and thus fundamental, bottleneck of all modal methods involving radiation properties. The true technical limitation is, in fact, the SVD of the matrix S. A possible remedy is the use of high-precision packages, which come at the expense of markedly longer computation times and the necessity of performing all subsequent operations in the same package to preserve the high numerical precision. The second potential improvement relies on higher-order basis functions, which can compensate for a poor meshing scheme (sometimes unavoidable for complex or electrically large models). They can also reduce the number of basis functions so that the evaluation of the CMs is further accelerated.

§ CONCLUSION

Evaluation of the discretized form of the EFIE impedance operator, the impedance matrix, has been reformulated using the projection of vector spherical harmonics onto a set of basis functions. The key feature of the proposed method is the fact that the real part of the impedance matrix can be written as a multiplication of the spherical modes projection matrix with itself. This feature accelerates modal decomposition techniques and doubles the achievable numerical dynamics. The results obtained by the method can also be used as a reference for validation and benchmarking. It has been shown that the method has notable advantages, namely that the number of available modes can be estimated prior to the decomposition and that the convergence can be controlled via the number of basis functions and the number of projections. The normalization of generalized eigenvalue problems with respect to the product of the spherical modes projection matrix on the right-hand side is done implicitly. The presented procedure finds its use in various optimization techniques as well. For example, it allows one to prescribe the radiation pattern of the optimized current by restricting the set of spherical harmonics used for the construction of the matrix. The method can be straightforwardly implemented into both in-house and commercial solvers, thus improving their performance and providing antenna designers with more accurate and larger sets of modes.

§ USED COMPUTATIONAL ELECTROMAGNETICS PACKAGES

§.§ FEKO

FEKO (ver. 14.0-273612, <cit.>) has been used with a mesh structure that was imported in the NASTRAN file format <cit.>: CMs and far fields were chosen from the model tree under requests for the FEKO solver. Data from FEKO were acquired using *.out, *.os, *.mat and *.ffe files. The impedance matrices were imported using an in-house wrapper <cit.>. Double precision was enabled for data storage in the solver settings.

§.§ AToM

AToM (pre-product ver., CTU in Prague, <cit.>) has been used with a mesh grid that was imported in the NASTRAN file format <cit.>, and the simulation parameters were set to comply with the data in Table <ref>. AToM uses RWG basis functions with the Galerkin procedure <cit.>. The Gaussian quadrature is implemented according to <cit.> and the singularity treatment is implemented from <cit.>.
Built-in Matlab functions are utilized for matrix inversion and decomposition. The multiprecision package Advanpix <cit.> is used for comparison purposes.

§.§ IDA

IDA (in-house, Lund University, <cit.>) has been used with the NASTRAN mesh, processed with the IDA geometry interpreter. The IDA solver is a Galerkin-type MoM implementation. RWG basis functions are used for the current densities. Numerical integrals are performed using Gaussian quadrature <cit.> for the non-singular terms and the DEMCEM library <cit.> for the singular terms. The Intel MKL library <cit.> is used for the linear algebra routines. The matrix computation routines are parallelized using OpenMP 2.0 <cit.>. Multiprecision computations were done with the mpmath Python library <cit.>.

§ SPHERICAL VECTOR WAVES

The general expression of the (scalar) spherical modes is <cit.>
u^(p)_σ ml(kr) = z_l^(p)(kr) Y_σ ml(r̂),
with r̂ the unit radial vector and k the wavenumber. The indices σ, m, and l label the parity, the azimuthal order, and the degree, respectively <cit.>. For regular waves (p = 1), z_l^(p) is a spherical Bessel function of order l, for irregular waves (p = 2) it is a spherical Neumann function, and for p = 3, 4 it is a spherical Hankel function, corresponding to the ingoing and outgoing waves, respectively. The spherical harmonics are defined as <cit.>
Y_σ ml(r̂) = √(ε_m/2π) P̃_l^m(cosϑ) · {cos mφ for σ = e; sin mφ for σ = o},
with ε_m = 2 - δ_m0 the Neumann factor, δ_ij the Kronecker delta function, and P̃_l^m(cosϑ) the normalized associated Legendre functions <cit.>. The spherical vector waves are <cit.>
u^(p)_1σ ml(kr) = R^(p)_1l(kr) A_1σ ml(r̂),
u^(p)_2σ ml(kr) = R^(p)_2l(kr) A_2σ ml(r̂) + R^(p)_3l(kr) Y_σ ml(r̂) r̂,
where R^(p)_τ l(κ) are the radial functions of order l, defined as
R^(p)_τ l(κ) = z_l^(p)(κ) for τ=1, (1/κ) ∂/∂κ (κ z_l^(p)(κ)) for τ=2, (√(l(l+1))/κ) z_l^(p)(κ) for τ=3,
and A_τσ ml denotes the vector spherical harmonics, defined as
A_1σ ml(r̂) = (1/√(l(l+1))) ∇×(r Y_σ ml(r̂)),
A_2σ ml(r̂) = r̂ × A_1σ ml(r̂),
where Y_σ ml denotes the ordinary spherical harmonics <cit.>. The radial functions can be separated into real and imaginary parts as
R^(3)_τ l(κ) = R^(1)_τ l(κ) + i R^(2)_τ l(κ), R^(4)_τ l(κ) = R^(1)_τ l(κ) - i R^(2)_τ l(κ).

§ ASSOCIATED LEGENDRE POLYNOMIALS

The associated Legendre functions are defined <cit.> as
P_l^m(x) = (1-x^2)^{m/2} d^m/dx^m P_l(x), l ≥ m ≥ 0,
with
P_l(x) = 1/(2^l l!) d^l/dx^l (x^2 - 1)^l
being the Legendre polynomial of degree l. One useful limit when computing the vector spherical harmonics is <cit.>
lim_x→1 P_l^m(x)/√(1-x^2) = δ_m1 l(l+1)/2.
The normalized associated Legendre function P̃_l^m is defined as
P̃_l^m(x) = √((2l+1)/2 · (l-m)!/(l+m)!) P_l^m(x).
The derivative of the normalized associated Legendre function is required when computing the spherical harmonics and is given by the recursion relation
∂/∂ϑ P̃_l^m(cosϑ) = (1/2)√((l+m)(l-m+1)) P̃_l^{m-1}(cosϑ) - (1/2)√((l-m)(l+m+1)) P̃_l^{m+1}(cosϑ),
where x ≡ cosϑ.
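The appendix formulas above can be cross-checked numerically. The sketch below — our own helper names; note that scipy's lpmv includes the Condon–Shortley phase, which the definition above does not — verifies the derivative recursion for the normalized associated Legendre functions against a finite difference. It is valid for 1 ≤ m ≤ l (the m = 0 case requires a separate relation).

import numpy as np
from scipy.special import lpmv, factorial

def P_norm(m, l, x):
    """Normalized associated Legendre function P~_l^m(x), as defined above."""
    if m < 0 or m > l:
        return np.zeros_like(np.asarray(x, dtype=float))
    cs = (-1.0) ** m                 # undo the Condon-Shortley phase used by lpmv
    scale = np.sqrt((2 * l + 1) / 2 * factorial(l - m) / factorial(l + m))
    return cs * scale * lpmv(m, l, x)

def dP_norm_dtheta(m, l, theta):
    """d/dtheta of P~_l^m(cos(theta)) via the recursion above (1 <= m <= l)."""
    x = np.cos(theta)
    return (0.5 * np.sqrt((l + m) * (l - m + 1)) * P_norm(m - 1, l, x)
            - 0.5 * np.sqrt((l - m) * (l + m + 1)) * P_norm(m + 1, l, x))

theta = np.linspace(0.3, 2.8, 7)
l, m, h = 5, 2, 1e-6
fd = (P_norm(m, l, np.cos(theta + h)) - P_norm(m, l, np.cos(theta - h))) / (2 * h)
print(np.allclose(fd, dP_norm_dtheta(m, l, theta), atol=1e-6))   # expected: True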
§ SPHERICAL SHELL AND RECTANGULAR PLATE

Meshes for the spherical shell of radius a = 1 m with 750 and 3330 dof are depicted in Fig. <ref>. The meshes for the rectangular plate of aspect ratio L/W = 2 with 199, 655, and 2657 dof are presented in Fig. <ref>.

§ RADIATION MODES

Eigenvalues of the radiation modes for Examples S2 and R2 are presented in Fig. <ref> and Fig. <ref>. The eigenvalues are computed using both the conventional (<ref>) and the proposed (<ref>) method. It can be seen that the number of modes computed using (<ref>) is significantly higher compared to (<ref>) for both examples. Eigenvalues calculated using a quadruple-precision SVD of the matrix S are also included. The number of correct radiation modes is shown in Table <ref>. If the eigenvalues ξ_n of different mesh grids are to be compared, the MoM matrices must be normalized. The normalized matrices are obtained using the diagonal matrix L of the basis functions' reciprocal edge lengths.

Doruk Tayli (S'13) received his B.Sc. degree in Electronics Engineering from Istanbul Technical University and his M.Sc. degree in Communications Systems from Lund University, in 2010 and 2013, respectively. He is currently a Ph.D. student in the Electromagnetic Theory Group, Department of Electrical and Information Technology, Lund University. His research interests are physical bounds, small antennas, and computational electromagnetics.

Miloslav Capek (SM'17) received his Ph.D. degree from the Czech Technical University in Prague, Czech Republic, in 2014. In 2017 he was appointed Associate Professor at the Department of Electromagnetic Field at the same university. He leads the development of the AToM (Antenna Toolbox for Matlab) package. His research interests are in the area of electromagnetic theory, electrically small antennas, numerical techniques, fractal geometry, and optimization. He has authored or co-authored over 70 journal and conference papers. Dr. Capek is a member of the Radioengineering Society, a regional delegate of EurAAP, and an Associate Editor of Radioengineering.

Vit Losenicky received the M.Sc. degree in electrical engineering from the Czech Technical University in Prague, Czech Republic, in 2016. He is now working towards his Ph.D. degree in the area of electrically small antennas.

Lamyae Akrou received the Dipl.-Ing. degree in networks and telecommunications from the National School of Applied Sciences of Tetouan in 2012. Since 2014 she has been working towards her Ph.D. degree in Electrical and Computer Engineering at the University of Coimbra.

Lukas Jelinek received his Ph.D. degree from the Czech Technical University in Prague, Czech Republic, in 2006. In 2015 he was appointed Associate Professor at the Department of Electromagnetic Field at the same university. His research interests include wave propagation in complex media, general field theory, numerical techniques, and optimization.

Mats Gustafsson (SM'17) received the M.Sc. degree in Engineering Physics in 1994 and the Ph.D. degree in Electromagnetic Theory in 2000, was appointed Docent in 2005 and Professor of Electromagnetic Theory in 2011, all at Lund University, Sweden. He co-founded the company Phase Holographic Imaging AB in 2004. His research interests are in scattering and antenna theory and inverse scattering and imaging. He has written over 90 peer-reviewed journal papers and over 100 conference papers. Prof. Gustafsson received the IEEE Schelkunoff Transactions Prize Paper Award 2010 and Best Paper Awards at EuCAP 2007 and 2013. He served as an IEEE AP-S Distinguished Lecturer for 2013–15.
http://arxiv.org/abs/1709.09976v4
{ "authors": [ "Doruk Tayli", "Miloslav Capek", "Lamyae Akrou", "Vit Losenicky", "Lukas Jelinek", "Mats Gustafsson" ], "categories": [ "physics.comp-ph", "physics.class-ph" ], "primary_category": "physics.comp-ph", "published": "20170926145541", "title": "Accurate and Efficient Evaluation of Characteristic Modes" }
http://arxiv.org/abs/1709.09571v1
{ "authors": [ "Hao-Yang Jing", "Xin Liu", "Zhen-Jun Xiao" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170927151110", "title": "Hadronic decays of $B \\to a_1(1260) b_1(1235)$ in the perturbative QCD approach" }
http://arxiv.org/abs/1709.09256v2
{ "authors": [ "Igor Dolgachev", "Benson Farb", "Eduard looijenga" ], "categories": [ "math.AG" ], "primary_category": "math.AG", "published": "20170926204633", "title": "Geometry of the Wiman Pencil, I: Algebro-Geometric Aspects" }
Department of Machine Intelligence and Systems Engineering, Akita Prefectural University, Akita 015-0055, Japan
Corresponding author.
Institute for Integrated Cell-Material Sciences (WPI-iCeMS), Kyoto University, Kyoto 606-8501, Japan
JST PRESTO, Tokyo 102-0075, Japan

The wings in different insect species are morphologically distinct with regard to their size, outer contour (margin) shape, venation, and pigmentation. The basis of the diversity of wing margin shapes remains unknown, despite the fact that the gene networks governing Drosophila wing development have been well characterised. Among the different types of wing margin shapes, a smoothly curved contour is the most frequently found and implies the existence of a highly organised, multicellular mechanical structure. Here, we developed a mechanical model for diversified insect wing margin shapes, in which a non-uniform bending stiffness of the wing margin is considered. We showed that a variety of spatial distributions of the bending stiffness could reproduce diverse wing margin shapes. Moreover, the inference of the distribution of the bending stiffness from experimental images indicates a common spatial profile among the insects tested. We further studied the effect of the intrinsic tension of the wing blade on the margin shape and on the inferred bending stiffness. Finally, we implemented the bending stiffness of the wing margin in the cell vertex model of the wing blade, and confirmed that the hybrid model retains the essential feature of the margin model. We propose that, in addition to morphogenetic processes in the wing blade, the spatial profile of the bending stiffness in the wing margin can play a pivotal role in shaping insect wings.

Keywords: Mechanics, Morphogenesis, Insect, Wing

§ INTRODUCTION

Insects have acquired the ability to fly with special appendages, i.e. wings, which confer upon them adaptive fitness <cit.>. The insect wing consists of the blade, veins, margin, sensory organs, and trachea. The base of the wing is connected to the body via the hinge. Insect adult structures, including wings, develop from imaginal discs, which arise as epithelial folds in the embryonic ectoderm and grow inside the larval body <cit.>. In Drosophila[The insect wing development has been mostly studied using D. melanogaster as a model organism. The mechanical model formulated in this study corresponds to developmental events at 15 to 27 hr APF (after puparium formation) in D. melanogaster. Nomenclature and spatiotemporal dynamics of the wing development are different among insects. For instance, in many insects, the wing disc everts inside the larval body well before the puparium formation.], upon evagination of the disc at the end of larval stages, the wing tissue forms a flat, epithelial bilayer (dorsal and ventral). Under the control of biochemical and mechanical interactions between cells, the larval wing disc and pupal wing epithelium undergo extensive morphogenetic processes: cells differentiate, grow, proliferate, move, and die to determine the final size, shape, and structure of the tissue [4–16]. Immediately after eclosion, epithelial cells die by programmed cell death and are absorbed into the thoracic cavity through the veins, leaving the exoskeleton <cit.>. The wings in different insect species are morphologically distinct with regard to their size, outer contour (margin) shape, venation, and pigmentation (Fig.
<ref>) [19–23].For instance, dragonfly has a smoothly curved, elongated wing, whereas butterflies develop a fan-shaped wing. Although gene networks governing the Drosophila wing development have been well characterised, very little work has been done on the basis of the diversity of wing margin shapes. During evolution, organisms have tuned the unified mechanism of development, in particular gene network, to generate diverse morphologies [20–25].One of the candidates for such unified basis of wing margin shape determination might be a highly organised, multicellular mechanical structure, as suggested by the observation that many insect wings have a smoothly curved shape.In this study, we formulate a mechanical model for simple yet diversified insect wing margin shapes. We adopt the basic notion of Euler's elastica to wing development, where the stiff margin is pinched by the hinge along the proximal boundary of the wing blade. We then introduce non-uniform bending stiffness that depends on the position of the wing margin. By using the model, we show that the spatial distribution of the bending stiffness could generate a variety of shapes that resemble the smooth outer contours of natural insect wings. We also infer the bending stiffness of the wing margin from experimental images.The inferred profiles of the bending stiffness of different insects are distinct, but all shared a common spatial domain structure. These data imply that the conserved, mechanical machineries have been tuned to give rise to diverse wing margin shapes during evolution.§ MODEL §.§ The biological basis for model formulationIn this section, we explain why we focus on the mechanics of the wing margin. Smoothly curved shapes of the wing margin, which can be found in different orders of insects (Fig. <ref>), are reminiscent of a flexible, elastic rod under load such as largely deformed beam or semiflexible polymer chainincluding DNA, F-actin, and collagen [28–31].In addition, studies reporting that the external force acting from the hinge stretches the wing along the proximal-distal axis in Drosophila melanogaster [10–12, 32]are consistent with an idea that pinching forces act at the crossing points of the margin, hinge, and blade (yellow arrowheads in Fig. <ref>(a)). Because the crossing points are tied with the hinge, either the pinned or fixed boundary condition should be employed. The observation that the angles of the margin at the crossing points relative to the hinge are not fixed during the wing development may suggest the pinned boundary condition, whereas the attachment to the overlying cuticle via the Dumpy protein <cit.> may offer the fixed boundary condition. Together, these suggest that an elastic, stiff margin is pinched by the extrinsic force and bends like the Euler's elastica (Fig. <ref>(a)) <cit.>. As explained below, in our attempt to explain the morphological diversity of wings, we consider the spatial distribution of the bending stiffness of the wing margin. In the case of a beam, a mechanically uniform material, the bending stiffness can be estimated by the Young's modulus multiplied by the second moment of area. Actin cytoskeleton, molecular motors, extracellular matrix, and many other components would be involved in the regulation of such a modulus of single cells and tissues [35-38].Since the wing margin may not be mechanically uniform like the beam,the second moment of area should be changed according to cell morphogenetic processessuch as cell proliferation and cell rearrangement <cit.>. 
Changing these biochemical, mechanical, and cellular parameters differentially affect the bending stiffness of tissue. Although to our knowledge, non-uniform distribution of the bending stiffness of the wing margin at developmental stages has not been experimentally shown,the fact that the wing margin has three (i.e. proximal anterior, distal anterior, and posterior) domains, each of which contains a unique set of differentiated cells in D. melanogaster<cit.>,implies a spatially patterned mechanical structure. Indeed, similar spatial patterns of cell differentiation along the margin have been reported in other insects <cit.>.When the extrinsic stretching force acts on the wing, the area of the wing blade is kept nearly constant <cit.>.Below we will mention that our model can be applied to such a case. Because a veinless mutant wing develops a largely normal morphology in D. melanogaster<cit.>, we defer a study on the effect of wing veins to future work. §.§ The simplest elastic model of the wing margin shape – Euler's elasticaA naive and one of the simplest mechanical models of smooth margin shapes is the Euler's elastica: Euler's mathematical and mechanical model of a thin strip or a thin rod and its resulting curves[ Note that Euler is not the founder of the problem, but the model has been historically called so <cit.>. ].Namely, a thin stiff rod bends or buckles in response to applied force, adapting a smooth, curved shape. Mathematicians have idealised the rod and used two distinct formulations to solve the curves. One formulates a force balance as equilibrium of moments along the rod, and the other formulates the bending energy and finds its minimum. Both formulations give the same deterministic differential equation. For a thin rod with length L and bending stiffness κ, when the forces F and -F are acting on the two ends of the rod, the equation can be written as κd^2 φ(s) /ds^2 = -F sinφ(s), where F ≡|F|, s parameterises the position along the rod,i.e., s∈[0,L], and φ(s) denotes the tangential angle of the rod at s (Fig. <ref>(a)).Eq. (<ref>) is exactly the same as the equation of pendulum motion and can be solved analytically and exactly with the boundary conditions at the two ends(<ref>). Corresponding to the initial conditions of the pendulum, such as being at rest or with some velocity at a given height, we have pinned and fixed boundary conditions. The pinned boundary condition means that the position is attached to its support by a freely rotating joint so that the angle is free. The fixed boundary condition means that the end point and its angle are totally fixed and supported as a beam is clamped to a wall. Mathematically speaking, the main difference between them is the value of dφ(s)/ds: it vanishes under the pinned condition, and it can be nonzero under the fixed boundary condition. In addition, there is the free boundary condition which means free to such constraints. In the case of an insect wing, the end points of the margin are attached to its support – the hinge – so either the pinned or fixed boundary condition is implied. With the pinned conditions, the solutions of Eq. (<ref>) for different F are shown in Fig. <ref>(b).For later convenience, we further introduce other variables and parameters for the wing margin shape as shown in Fig. <ref>(a). 
The tangential vector u(s) along the margin and the two-dimensional position r(s) of the margin segment at s areu(s)= ( cosφ(s), sinφ(s) )^T ,r(s)= ∫_0^s dt u(t),where the upper suffix T stands for the transposition; we set r(0)≡0 without loss of generality. The area A of the wing blade can then be given by A = 1/2∫_0^L ds ( u(s) ×r(s) ).Note that we can redefine s in a unit segment, s∈ [0,1], and delete F from the equation by nondimensionalisation (<ref>).In the next section, we develop a mechanical model with a non-uniform bending stiffness κ along the concept of Euler's elastica.§.§ A mechanical model for insect wing margin shapesThe deterministic equation with the non-uniform bending stiffness κ(s) leads toκ(s) d^2 φ(s) /ds^2 + d κ(s)/dsd φ(s) /ds= -F sinφ(s), where κ(s) becomes a distribution along s. Its derivation is included in <ref>. When κ(s) is constant, the equation reduces to that of Euler's elastica. In general, in the presence of the non-uniform distribution κ(s), one cannot expect to obtain an exact, analytic solution. Instead, we invoke a numerical calculation to find a wing margin shape for a given distribution of the bending stiffness. We use the pinned condition, which is consistent with the system considered (Sec. <ref> and Sec. <ref>).By changing the magnitude of the external force F,or by changing the overall factor of κ(s) in the nondimensionalised version of Eq. (<ref>), the different margin shapes can be obtained (Fig. <ref>(c) and Sec. <ref>).There are a few notes on computing the margin shape by using our model. First, the external forces and moments must be balanced among themselves; otherwise, the wing moves and rotates by the external residual force and torque. The forces are cancelled by definition while the torques must satisfy ( r(L) - r(0) ) ×F = 0.Second, when the fixed boundary conditions are imposed such as d/dsφ(0)0, one must be aware of another natural constraint, which in the case of Euler's elastica corresponds to the energy conservation law of the pendulum.By the virtue of the nondimensionalisation in <ref>, one can apply this model to the case with a fixed area of the wing blade, which has been reported in D. melanogaster [10]: calculate the wing shapes and areas for a common κ(s) with different multiplicities in the nondimensionalised equation and then rescale or dimensionalise the quantities such as κ(s), F, and L so that the wing blade area is kept constant.This can be directly implemented in the current modelling framework.§.§ The homogeneous pressure model Motivated by the observation that Drosophila pupal wing develops tensile tissue stress upon tissue stretching by some external force [10–12, 32],we next implement the homogeneous tension of the wing blade as additional mechanical ingredient of the model (Fig. <ref>(a)). We call such a derivative of our model the homogeneous pressure model in what follows. Given an internal homogeneous pressure p of the wing blade, which is negative in the case of tensile force, the equation can be derived asd/ds( κ(s) d φ(s)/ds)= -F sinφ(s) + p u(s) ·( r(s) - r(L)/2).The homogeneous hydrostatic pressure p appears in the second term of the right hand side.Note that when we deal with the energy of the wing while keeping the pressure p constant, the internal area A, which is mechanically conjugate to p, is not a conserved quantity. Its detailed derivation is presented in <ref>. 
Another important note regards the balance equation of the external forces.In the system considered, the tensile force is acting through the proximal boundary of the wing; thus the tensile force must be balanced with the non-parallel forces F_i and F_f acting on the ends of the margin.Introducing a unit vector n normal to the vector r(L)-r(0) pointing inwards to the wing, this condition can be expressed as:p |r(L)-r(0)| n + F_i + F_f = 0.The difference between this and afore-mentioned force balance equations is only the forces that are normal to r(L)-r(0). The components of F_i and F_f parallel to the proximal boundary are the same as before, F and -F, and F is given by |F|. Therefore, Eq. (<ref>) is automatically satisfied.By changing the internal tension, or the negative pressure p, for fixed κ(s), one can draw a variety of curves depicted in Fig. <ref>(b). The boundary conditions are the pinned condition at s=0 and the fixed condition at s=L. Such asymmetry in the boundary conditions may be realisedby differential attachment strength to the cuticle via the Dumpy protein <cit.>. Indeed, Dumpy accumulates more at the anterior end than at the posterior end (Fig. 2H of <cit.>),which may correspond to the fixed boundary condition at the anterior end and the pinned boundary condition at the posterior end.Note that the choice of pinned/fixed boundary conditions is opposite in our simulation due to a technical reason. § NUMERICAL RESULTS§.§ Simulated wing margin shapesLet us draw wing margin shapes by using our model for given distributions of the bending stiffness (Fig. <ref> and Fig. <ref>). We solve the nondimensionalised version of Eq. (<ref>) with the pinned condition at s=0 step by step using Euler's method. The minimum mathematical requirement for κ(s) is that it is at least once differentiable to fit to Eq. (<ref>). Another physical requirement is that κ(s) takes only a nonnegative value.Note that, if we abandon this physical requirement, one can create any smooth shape by unbounding and tuning κ(s), which is unrealistic in our current setup. Since we do not know the initial condition of u(0),i.e.φ(0), which is consistent with r_y(0)=r_y(L=1)=0 and the pinned condition at s=1, its iterative search is also implemented in our simulations.We first present a simple case study on a relationship between κ(s) and the wing margin shape. By using the symmetric distribution of κ(s) along the AP axis (Fig. <ref>(a2)), we obtain a shape mirrored about the AP axis (Fig. <ref>(a1)). We found that the smaller value of κ(s) in the distal region is necessary to obtain such a wing-like shape.By shifting the κ(s) curve towards the anterior end (Fig. <ref>(b2)), the position of the distal tip of the wing follows(Fig. <ref>(b1)). Moreover, a bulge in the posterior region is formed (arrow in Fig. <ref>(b1)), which could not be produced by simulations in previous studies (<cit.>). The position of the bulge corresponds to a soft region posterior to the second peak of κ(s). In the first two examples, we have used a rather stepwise form of κ(s). By giving a longer and smoother tail to the posterior peak of κ(s) (Fig. <ref>(c2)), the posterior bulge becomes smaller (arrow in Fig. <ref>(c1)), and the resultant curve is reminiscent of a veinless mutant of D. melanogaster<cit.>. As a whole, this case study highlights characteristics of κ(s) critical to shaping the wing margin.We next attempt to create diverse margin shapes by elaborately constructing κ(s). 
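The numerical scheme just described — forward-Euler integration of the nondimensionalised Eq. (<ref>) for a prescribed κ(s) with the pinned condition φ̇(0) = 0, plus an iterative search for the unknown φ(0) such that r_y(1) = 0 — can be sketched as follows. The particular κ(s) profile, the bisection-based search, and all names are our illustrative assumptions, not the authors' code.

import numpy as np
from scipy.optimize import brentq

def solve_margin(kappa, phi0, n=4000):
    """Forward-Euler integration of kappa*phi'' + kappa'*phi' = -sin(phi) on s in [0, 1].

    Pinned end at s = 0: the angle phi(0) = phi0 is free, dphi/ds(0) = 0.
    Returns s, phi(s) and the margin coordinates r(s) = (x, y).
    """
    s = np.linspace(0.0, 1.0, n + 1)
    ds = 1.0 / n
    k, dk = kappa(s), np.gradient(kappa(s), ds)
    phi = np.empty(n + 1)
    phi[0], dphi = phi0, 0.0
    for i in range(n):
        ddphi = -(np.sin(phi[i]) + dk[i] * dphi) / k[i]
        dphi += ds * ddphi
        phi[i + 1] = phi[i] + ds * dphi
    x = np.concatenate(([0.0], np.cumsum(np.cos(phi[:-1])) * ds))
    y = np.concatenate(([0.0], np.cumsum(np.sin(phi[:-1])) * ds))
    return s, phi, x, y

def shoot_phi0(kappa):
    """Find phi(0) such that the far end returns to the chord line, r_y(1) = 0."""
    g = lambda p0: solve_margin(kappa, p0)[3][-1]
    grid = np.linspace(0.05, np.pi - 0.05, 60)
    vals = np.array([g(p) for p in grid])
    i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]  # assumes a sign change exists
    return brentq(g, grid[i], grid[i + 1])

# Illustrative nondimensional stiffness: stiff anterior/posterior peaks, soft distal region.
kappa = lambda s: (0.02 + 0.10 * np.exp(-((s - 0.15) / 0.10) ** 2)
                        + 0.08 * np.exp(-((s - 0.85) / 0.10) ** 2))
phi0 = shoot_phi0(kappa)
s, phi, x, y = solve_margin(kappa, phi0)

Rescaling the overall factor of κ(s) in this nondimensionalised form plays the role of changing the pinching force F.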
Because even a simple distribution of the bending stiffness gives a shape reminiscent of an insect wing (Fig. <ref>), we manually assign some smoothly connected functional forms to κ(s), as was done in Fig. <ref>, referring to the bending stiffness inferred from actual images of various insects. Note that the former is the intuitive construction of the distribution by hand, and the latter is the mathematically derived inference of the distribution, which will be given in Sec. <ref>. The simulated margin shapes with manually constructed distributions of κ(s) are clearly similar to those of real insects (Fig. <ref>). The first example (Fig. <ref>(a1, a2)) shows an image of a veinless mutant of D. melanogaster <cit.> and our simulation result. For the simulation, we have used asymmetric but similar curves of κ(s) for the anterior and posterior regions, while a much softer, concave shape is assigned to the distal and posterior-end regions. The following three sets of examples in Fig. <ref>(b1–d1, b2–d2) show the drawings of Zygaenidae, Tortricidae, and Crambidae in <cit.> and their corresponding simulation results. The results agree reasonably well with the images, although the details of κ(s) had to be fine-tuned in the distal regions in the cases of Fig. <ref>(c2, d2). A slightly concave shape in the distal region of Fig. <ref>(c1) is outside the scope of the model without the internal pressure; we will mention the reason in regard to the pressure shortly. The last example shows the elongated shape of the wings in Eustheniidae <cit.>, which can also be found in dragonflies. The wing margin shapes of those insects appear more or less anterior-posteriorly symmetric. However, we have used significantly different values of κ(s) in those regions, which will be inferred in the next section. In our simulations, the shapes were sufficiently robust against small perturbations in κ(s) in most parts of the margin, while some regions seemed sensitive to such perturbations, giving a potential source of the morphological diversity. We speculate that the sensitivity comes from characteristic values of the curvature, from comparatively small values of the bending stiffness, or from positions where the connection to the veins might be influential. The last possibility will be discussed in Sec. <ref>. By using the homogeneous pressure model with the internal pressure p, the wing margin can be buckled (Fig. <ref>). Such buckled shapes cannot be obtained by using the original model, simply because the original model describes a pinched form of a straight line whose curvature along s should be positive everywhere when the pinned-pinned condition is imposed.

§.§ Inference of the bending stiffness from experimental images and analysis on the relevance of the internal tension

For a given distribution of κ(s), we have calculated from Eq. (<ref>) the form of φ(s) as the margin shape. Conversely, for a given φ(s), the distribution of κ(s) can be calculated up to its initial value. Therefore, rewriting Eq. (<ref>) for κ(s) as below and using the nondimensionalisation described in <ref>, we can infer κ(s) from an experimental image given a value of κ(0):
κ̇(s) = - [κ(s) φ̈(s) + sinφ(s)] / φ̇(s),
where the dot (·) stands for the s-derivative. Let us translate the above differential equation into a difference equation and infer the value of κ(s) from images of insect wings. Eq.
(<ref>) can be reduced to the difference equation
κ̇(s) = - [ {κ(s-Δs) + (Δs/2) κ̇(s-Δs)} φ̈(s) + sinφ(s) ] / [ φ̇(s) + (Δs/2) φ̈(s) ],
κ(s) = κ(s-Δs) + (κ̇(s-Δs) + κ̇(s)) Δs/2,
where Δs is the segment length between nearest data points and Δs ≪ 1 is assumed (refer to <ref> for the detailed derivation). The equation indicates that one can infer the values of {κ(s), κ̇(s)} from a data set of {φ(s), φ̇(s), φ̈(s)} and κ(0), κ̇(0), step by step. To obtain such a data set, we first extract a margin contour from an experimental image (Fig. <ref>(a)). We then smooth the contour, resample from the smoothed one, and calculate the curvature from which the bending stiffness κ(s) is inferred. The initial values are set to κ(0)=1 and κ̇(0)=0 for simplicity. By simple reverse engineering, we can draw a shape very similar to the original wing image (data not shown). From the images shown in Fig. <ref>(a1–e1), we inferred the profiles of the bending stiffness κ(s) (Fig. <ref>(b–f)). Interestingly, in all cases, the bending stiffness exhibits asymmetric peak profiles in the anterior and posterior domains and takes small values in the distal domain, suggesting the existence of a conserved spatial structure among the insects tested (discussed in Sec. <ref> and Sec. <ref>). By further extending the difference equation in Eq. (<ref>) to the case with internal tension p<0, we can infer the bending stiffness in the presence of tension. By varying the total pressure, p|r(L)|, from 0 to -0.2 F, we found that the κ(s) values in some regions fall below 0 when p|r(L)| is as large as -0.01 F (Fig. <ref>). Therefore, only p|r(L)| > -T F with T ≃ 0.01 is allowed in the examined case.
This suggests that the internal tension is, at most, two orders of magnitude smaller than the pinching force at the ends of the wing margin (discussed in Sec. <ref>).

§ AN INTEGRATED MECHANICAL MODEL OF THE WING BLADE AND MARGIN

During wing development, cells in the wing blade and margin undergo extensive morphogenetic processes, which are regulated by biochemical and mechanical interactions between cells [4–16]. In this section, we address whether the spatial profile of the bending stiffness of the wing margin can significantly contribute to the wing shape determination in the presence of other mechanical ingredients in the wing blade. For this purpose, we formulate an integrated mechanical model of the wing margin and blade. To simulate Drosophila pupal wing morphogenesis, the cell vertex model, which was formulated by Honda for modelling epithelial mechanics [46–49], has been extensively used and has proven its validity for analysing morphogenetic cell processes, such as cell rearrangements, proliferations, cell shape changes due to applied forces, and so on <cit.>. We therefore construct a simple hybrid model of the stiff wing margin and the cell vertex model, and perform simulations with different distributions of the bending stiffness of the wing margin. The cell vertex model is a cell-based discrete model, so our mechanical model of the wing margin should also be discretised in the same fashion to merge the models. Following convention, we use the energy formulation of the cell vertex model, and the following energy function of the margin is added to it:
E_margin = ∑_i∈margin 2 κ_i sin^2(θ_i/2),
where i stands for the vertex number along the margin, and κ_i and θ_i are the bending stiffness and the angle associated with it. The angle θ_i is defined by the angle difference between neighbouring orientation vectors analogous to u(s) in Fig.
<ref>: the vector connecting the (i-1)-th and i-th vertices, and that of the i-th and (i+1)-th vertices along the margin.In a sufficiently relaxed state of the cell vertex model, each cell junction at the tissue boundary tends to take a similar length; thus, we assign the values of the bending stiffness according to the margin vertex numbering counted from one end of the margin. If the discretised input of the bending stiffness mismatches with the numbering, linear interpolation is applied. To compare with the results in the preceding sections, we neglect the hinge region andmake a straight boundary at the proximal side of the wing blade.Implementing the pinned boundary conditions on the two ends of the margin,we ran the simulations for, at least, a few times longer than the relaxation time of the cell vertex model.We used the symmetric and asymmetric distributions of the bending stiffness as in Fig. <ref>,and the uniform distribution of the bending stiffness to compare with.The results in Fig. <ref> show distinct shapes for different distributions of the bending stiffness. This supports the idea that even in the presence of other mechanical processes in the wing blade,the stiffness of the margin may provide a key role in shaping the wing margin.§ DISCUSSIONS §.§ Implication on the evolutionarily conserved mechanism for the determination of insect wing margin shapesMotivated by the beauty of smooth curves of insect wings, we have proposed and constructed a mechanical model for the diversified insect wing margin shapes by generalising the Euler's elastica. It is assumed that the wing margin has bending stiffness as its multicellular mechanical properties, and our numerical simulations showed that the spatial distribution of bending stiffness of the wing margin was sufficient to reproduce diverse wing margin shapes found in natural insects. Although identification of such non-uniform distribution of the bending stiffness in a developing wing tissue awaits future studies, the observation that proximal anterior, distal anterior, and posterior domains of the wing margin contain a unique set of differentiated cells <cit.>suggest that the domains might have distinctive mechanical properties and genes encoding mechanical structural/signaling components might be differentially expressed along the margin contour. Interestingly, the inferred profiles of the bending stiffness were distinct for different insect images, but all shared spatial (proximal anterior, distal anterior, and posterior domain) domain structure. Thus, similar to the margin cell differentiation <cit.>, the margin shape determination might also be under the control of global patterning of the wing. Taken all the above into consideration, we speculate that diverse wing margin shapes in different species might have evolved by modifyingthe conserved, mechanical machineries at the downstream of patterning information. §.§ Notes on the spatial profile of the bending stiffness of the wing marginOne might argue that the complexity of the inferred bending stiffness function exceeds by far the complexity of the gene patterning along the margin. However, several lines of evidence imply that this may not be the case. First, inferred bending stiffness takes collective values around the margin, absorbing relevant surrounding mechanical properties. For instance, we found that the complex profile of inferred κ(s) was often associated with the position of veins, which may pull the margin towards the hinge as suggested in a previous study <cit.>. 
Second, the inferred bending stiffness in Fig. <ref> appearsto be complex because it contains noise generated by image acquisition and processing.Third, as shown in Fig. <ref> in Sec. <ref>, a simple distribution of the bending stiffness is sufficient to reproduce a wing-like shape in a numerical simulation of our model. From these arguments and results, we speculate that the spatial distribution of the bending stiffness in real insects may be sufficiently simple and smooth such that it can be coded by global patterning of the wing.A measurement of the spatio-temporal profiles of bending stiffness of the wing margin is expected to directly prove the hypothesis of our model. Because, in the case of a beam, the bending stiffness can be estimated by the Young's modulus multiplied by the second moment of area, corresponding quantities of the wing margin are to be measured. Once the bending stiffness of the wing margin is measured, one can quantitatively assess the relationship betweenthe spatial variation of the bending stiffness and the wing shape by genetically or pharmacologically manipulating the bending stiffness. §.§ Future extensions of the current modelling frameworkThere are several possibilities to extend the current framework of modelling.First, additional material properties of the wing margin such as extensibility can be considered. Second, future work will study how the wing acquires its margin shape during development. For this, one needs to consider temporal changes in the bending stiffness profiles and/or morphogenetic dynamics of the margin cells (i.e., cell shape change, cell rearrangement, cell division, and apoptosis) <cit.>, which can possibly couple to the extensibility of the wing margin as well. Third, the mechanical interaction between the wing margin and other structural components of the wing is definitely a research direction to pursue. For instance, it has been shown that patterned linkage between the wing epithelial cells and the overlying chitinous cuticle is essential to give rise to the tissue tension that shapes the Drosophila wing <cit.>. Also, the inferred profiles in Fig. <ref>(f) may be influenced to some extent by the positions of the margin connected to the veins, as some characteristic points in the profiles correspond to such positions.Though the above observations have not clearly been described in the language of mechanics, the next direction of study is to merge our model of the margin with other mechanical structures either in a discrete or in a continuous way <cit.>. One of the discrete candidates for the replacement of our static wing blade is the cell vertex model [46–49],as we presented in our first attempt in this direction in Sec. <ref> (Fig. <ref>). The existence of the veins can be realised on the hybrid model by introducing specific cell types such as vein cells or some specific mechanical properties to the borders between intervein and margin cells. For instance, a recent study has shown that increasing the line tension at the vein-intervein boundaries is required to reproduce the wing shape <cit.>. A continuous candidate is a continuum body equipped with some appropriate constitutive equations to be constructed and to be consistent with the existing experimental observations <cit.>. Both approaches would provide a better understanding of the wing shaping and more refined tools to compare or support experimental observations with computational modelling. §.§ Mathematical notes and comments on the modelsFor the simulations in Fig. 
<ref>,we have always found an appropriate φ(0) for the pinned-pinned boundary condition for any tested form of κ(s).Although we have not shown this rigorously, it suggests another unravelled law of conservation analogous to the energy conservation law in the Euler's elastica.This is a mathematically interesting statement to prove.As for the stiffness inference, the dependence of the inference on the initial value of κ(0) has not been investigated. The value cannot be scaled out of Eq. (<ref>), so that its investigation might be necessary in some cases, particularly when the wing margin buckles due to the internal pressure. In such a case, there would exist singular points such as inflection points of φ(s), and their positions may strongly depend on the initial value. In addition, in the search for buckled shapes in Fig. <ref>, we have sometimes found singularities at the end points. This suggests that if the wing margin contains a buckled part, for example in its earlier developmental stages, special care is required for the stiffness inference. This is out of the scope of the present study, but is surely an interesting point to investigate further.We have shown that internal pressure/tension affects both on the shape and inferred bending stiffness of the wing margin. While the internal pressure/tension provides additional types of wing margin shapes obtained by simulation as shown in Fig. <ref>, the allowed value of the negative pressure (i.e., tension) was found to be very restrictive because of the physical requirement, κ(s)≥ 0.This restriction may be relaxed by considering additional force generators, such as veins, which can restore the stiffness back to the physical region.This is to be investigated further in conjunction with the above mentioned hybrid models. §.§ ConclusionThe present study provides the simple yet insightful approach for understanding the mechanical control of the insect wing margin shape.We expect that it will serve as a basic building block of an integrated model of wing development in the future. AcknowledgmentsThe authors are grateful to Yoshihiro Morishita, Alexis Matamoro-Vidal, François Graner, Philippe Marcq, Daiki Umetsu, Tsuyoshi Hirashima, Frank Jülicher,and Osamu Shimmi for their stimulating discussions and suggestions. We would like to thank Stephanie Nix and the other members of the laboratory for Life-Integrated Fluid Engineeringin Akita Prefectural University for their discussions. We would also like to acknowledge the current and past members of the laboratoryfor Developmental Morphogeometry in RIKEN QBiC for their continuous support and stimulating discussions. The present work was commenced while YI was in the laboratory.This work was supported by JSPS KAKENHI (Grants-in-Aid for Scientific Research) Grant Number 26540158 and 17K00410 to YI andby JST PRESTO (JPMJPR13A4) to KS.Appendix § DERIVATION OF THE GENERAL SOLUTION OF THE EULER'S ELASTICA In this section, we show the derivation of the general solution of the Euler's elastica (Eq. (<ref>)).κφ̈(s)= -F sinφ(s)Here, we use the shorthand notation of the s-derivative: φ̇≡dφ/ds. Multiplying both sides of the equation by φ̇(s) and integrating over s from 0 to L leads κ/F∫_0^L ds' φ̇(s') φ̈(s')=- ∫_0^L ds' φ̇(s')sinφ(s') κ/2F( φ̇(s) )^2= κ/2Fφ̇(0)^2 + cosφ(s) - cosφ(0) φ̇(s)= ±√(2F/κ( cosφ(s) - cosφ(0) ) + φ̇(0)^2) ds= ±dφ(s)/A √(cosφ(s)-B),where A≡√(2F/κ) and B≡cosφ(0)-κ/2Fφ̇(0)^2. The sign on the right hand side depends on the direction in which φ(s) changes. 
When φ(s) decreases as s increases, the sign is negative.In the context of pendulum, B is the energy transferred to the kinetic energy at s. By the symmetry argument, φ(s) reaches zero at s=L/2. Integrating the above again over s from 0 to L/2, one obtains the following solution L/2 = ∫_0^φ(0)dφ/A √(cosφ-B) = 1/A√(1-B)∫_0^φ(0)dφ/√( 1 - 2/1-Bsin^2φ/2) = 2/A√(1-B)F ( . φ(0)/2|2/1-B), where F(z|m) is the incomplete elliptic integral of the first kind. This can be further simplified when φ(0) is a multiple of π: F(kπ/2|m) = k K(m), where K(m) is the complete elliptic integral of the first kind. The above solution is the relation between L, κ, and F. Thus if the initial conditions and L are given, the ratio κ/F is also given.§ UNITS AND NONDIMENSIONALISATION The units of the relevant quantities are [ s ]= [ L ] , [ κ(s) ]=[M][L]^3 [S]^-2 , [ F ]=[M][L][S]^-2 ,[ p ]=[M][S]^-2 ,[ φ(s) ]=[M]^0 ,where [M], [L], and [S] stand for the dimensions of mass, length, and time, respectively. Thus κ(s)/F and p/F have the dimensions [L]^2 and [L]^-1.Redefining κ(s)/F and p/F by κ(s) and p, respectively, and normalising s by the total length of the margin L, we have dimensionless combinations of κ(s)/L^2 and p L.With the dimensionless variable and parameters s ≡ s/L, κ( s) ≡κ(s)/L^2, p ≡ p L, and φ( s)≡φ(s), Eqs. (<ref>,<ref>,<ref>) can be nondimensionalised as κd^2 φ( s)/ds^2 =- sinφ( s), κ( s) d^2 φ( s)/ds^2 + d κ( s)/dsd φ( s)/ds =- sinφ( s), d/ds( κ( s) d φ( s)/ds)=-sinφ( s) + p u( s) ·( r( s) - r(1)/2).These tilded quantities are implied in the manuscript when the above nondimensionalisation is mentioned. § DERIVATION OF THE DETERMINISTIC EQUATION OF THE HOMOGENEOUS PRESSURE MODEL In this section, we show how to derive Eq. (<ref>) of the homogeneous pressure model in Sec. <ref>. There are two ways to formulate the model. Of the two, we employed the principle of virtual work, or of the least action, because the force balance formulation cannot be given in a naive way with the internal pressure.Following the standard way of formulating the elastic medium <cit.>, one can write the energy function of the wing margin asE_bend = ∫_0^L ds κ(s)/2( dφ(s)/ds)^2.Then, one can equate the variation of the energy with the variation of the virtual work done by the external forces as δ E_bend = δ W, where W=F ( L-|r(L)| ) + p A ≃- F∫_0^L ds cosφ(s) + p A.Here, F is the pinching force acting on the ends of the wing margin. Going from the first line to the second, r_y(L)=0 is implicated and the term FL is omitted since it does not contribute to the variation of the work. p is the internal pressure, or the tension when it is negative, while A= 1/2∫_0^L ds (u(s)×r(s)) is the area of the wing blade. In other words, the hinge plays a role of the reservoir for the pressure and keeps supplying the pressure p through the proximal boundary of the wing blade. 
The corresponding action can be expressed by S≡ E-W:S= ∫_0^L ds [ κ(s)/2( dφ(s)/ds)^2 + F cosφ(s) - p/2 u(s)×r(s)].The variation of the last term by δφ can be given by δ( ∫_0^L ds u(s)×r(s) )= δ{∫_0^L ds u(s) ×( ∫_0^s dt cosφ(t), ∫_0^s dt sinφ(t) )^T } = ∫_0^L ds δφ(s) { (-sinφ(s),cosφ(s))^T ×r(s) } + ∫_0^L dt δφ(t) {( ∫_t^L ds u(s) ) ×( -sinφ(t), cosφ(t) )^T } = ∫_0^L ds δφ(s) { - u(s)·r(s)+ (r(L)-r(s) ) ·u(s) } =- 2 ∫_0^L ds δφ(s)u(s) ·( r(s) - r(L)/2).Note, for the derivation of the second term in the second line, we have changed the regions of integration over s and t by the identity ∫_0^L dx ∫_0^x dyf(x,y) = ∫_0^L dy ∫_y^L dxf(x,y) .By applying the least action principle δ S=0, one finds Eq. (<ref>) as0=-d/ds( κ(s) dφ(s)/ds)- F sinφ(s) + p u(s)·( r(s)-r(L)/2) .§ DERIVATION OF THE DIFFERENCE EQUATION FOR THE STIFFNESS INFERENCE Let us start with nondimensionalised Eq. (<ref>) with the dots κ(s) φ̈(s) + κ̇(s) φ̇(s) = - sinφ(s).Given φ̈(s_n), φ̇(s_n), φ(s_n), κ(s_n-1), and κ̇(s_n-1), where {s_n} is the set of discrete points from an image, we sought to infer the values of κ̇(s_n) and κ(s_n). If the two-dimensional position is defined at s_n by s_n and Δ s_n ≡ |s_n+1 - s_n|, then κ(s_n) can be given exactly by the form of the forward difference at s_n-1 by κ(s_n)= κ(s_n-1) + Δ s_n-1δ_fκ(s_n-1), δ_fκ(s_n-1)= κ(s_n) - κ(s_n-1)/Δ s_n-1.This forward difference δ_f at s_n-1 is equivalent to the central difference δ_c at s_n-1/2, which is the middle point between s_n-1 and s_n. Assuming Δ s_n-1≪ 1, we approximate this central difference by the average of the first derivatives at s_n-1 and s_n as δ_cκ(s_n-1/2)= κ(s_n) - κ(s_n-1)/Δ s_n-1 ≃ κ̇(s_n-1) + κ̇(s_n)/2.This approximation leads to the second line of Eq. (<ref>). Plugging this into the expression (<ref>) and to Eq. (<ref>), one gets κ̇(s_n) ( φ̇(s_n) + Δ s_n-1/2φ̈(s_n) ) + ( κ(s_n-1) + Δ s_n-1/2κ̇(s_n-1) )φ̈(s) = - sinφ(s).By replacing s_n-1, s_n and Δ s_n-1 by s-Δ s, s and Δ s for simplicity, the above can be expressed by Eq. (<ref>). Similarly, the difference equation for the homogeneous pressure model can be given trivially .We have derived and used the difference equation (<ref>) for the stiffness inference.There could be a variety of ways to make the differential equations the difference ones, which are out of the scope of the current manuscript.References99Grodnitsky1962 Grodnitsky, D. L., 1962. Form and Function of Insect Wings: The Evolution of Biological Structures, The Johns Hopkins University Press,Baltimore and London.Dudley2002 Dudley, R., 2002. The Biomechanics of Insect Flight: Form, Function, Evolution, Princeton University Press, New Jersey.Held2002 Held, L. I., 2002. Imaginal discs, Cambridge University Press, Cambridge.Wartlick2011 Wartlick, O. et al., 2011. Understanding morphogenetic growth control – lessons from flies. Nat Rev Mol Cell Biol 12, 594-604.Goodrich2011 Goodrich, L. V., Strutt, D., 2011. Principles of planar polarity in animal development. Development 138, 1877-92.Heisenberg2013 Heisenberg, C. P., Bellaïche, Y., 2013. Forces in tissue morphogenesis and patterning. Cell 153, 948-62.Resino2002 Resino, J. et al., 2002.Determining the role of patterned cell proliferation in the shape and size of the Drosophila wing. Proc Natl Acad Sci U S A 99, 7502-7.LeGoff2013 Le Goff, L. et al., 2013. A global pattern of mechanical stress polarizes cell divisions and cell shape in the growing Drosophila wing disc. Development 140, 4051-9.Taylor2008 Taylor, J., Adler, P. N., 2008. 
Cell rearrangement and cell division during the tissue level morphogenesis of evaginating Drosophila imaginal discs. Dev Biol 313, 739-51.Aigouy2010 Aigouy, B. et al., 2010. Cell flow reorients the axis of planar polarity in the wing epithelium of Drosophila. Cell 142, 773-86.Sugimura2013 Sugimura, K., Ishihara, S., 2013. The mechanical anisotropy in a tissue promotes ordering in hexagonal cell packing. Development 140, 4091-101.Matamoro-Vidal2015 Matamoro-Vidal, A. et al., 2015. Making quantitative morphological variation from basic developmental processes: Where are we? The case of the Drosophila wing. Dev Dyn 244, 1058-73.Etournay2015 Etournay, R. et al., 2015. Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing. eLife 4, e07090.Guirao2015 Guirao, B. et al., 2015. Unified quantitative characterization of epithelial tissue development. eLife 4, e08519.Baonza2000Baonza, A., Garcia-Bellido, A., 2000.Notch signaling directly controls cell proliferation in the Drosophila wing disc. Proc Natl Acad Sci U S A 97, 2609-14.Takemura2011Takemura, M., Adachi-Yamada, T., 2011.Cell death and selective adhesion reorganize the dorsoventral boundary for zigzag patterning of Drosophila wing margin hairs. Dev Biol 357, 336-46.Johnson1987 Johnson, S. A., Milner, M. J., 1987. The final stages of wing development in Drosophila melanogaster. Tissue Cell 19, 505-13.Kimura2004 Kimura, K. et al., 2004.Activation of the cAMP/PKA signaling pathway is required for post-ecdysial cell death in wing epidermal cells of Drosophila melanogaster. Development 131, 1597-606.Comstock1918 Comstock, J. H., 1918. The wings of insects, The Comstock Publishing Company, Ithaca, New York.Loehlin2012 Loehlin, D. W., Werren, J. H., 2012. Evolution of shape by multiple regulatory changes to a growth gene. Science 335, 943-7.Shimmi2014 Shimmi, O. et al., 2014. Insights into the molecular mechanisms underlying diversified wing venation among insects. Proc Biol Sci 281, 20140264.Prudhomme2007 Prud'homme, B., Gompel, N., Carroll, S. B., 2007. Emerging principles of regulatory evolution. Proc Natl Acad Sci U S A 104, 8605-12. Wittkopp2009 Wittkopp, P. J., Beldade, P., 2009. Development and evolution of insect pigmentation: genetic mechanisms and the potential consequences of pleiotropy. Semin Cell Dev Biol 20, 65-71.Carroll2008 Carroll, S. B., 2008. Evo-devo and an expanding evolutionary synthesis: a genetic theory of morphological evolution. Cell 134, 25-36.Levine2005 Levine, M., Davidson, E. H., 2005. Gene regulatory networks for development. Proc Natl Acad Sci U S A 102, 4936-42.Huang2009 Huang, J. et al., 2009.Directed, efficient, and versatile modifications of the Drosophila genome by genomic engineering. Proc Natl Acad Sci U S A 106, 8284-9.McAlpine1981 McAlpine, J. F. 1981.Morphology and terminology - adults. in: McAlpine J.F. (Ed.), Manual of Nearctic Diptera. Agriculture Canada, Ottawa, pp. 9-63.Ishimoto2006Ishimoto, Y., Kikuchi, N., 2006. Low energy states of a semiflexible polymer chain with attraction and the whip-toroid transitions. J. Chem. Phys. 125, 074905.Ishimoto2008Ishimoto, Y., Kikuchi, N., 2008. Effect of interaction shapeon the condensed DNA toroid. J. Chem. Phys. 128, 134906.Freed1972Freed, K. F., 1972. Functional Integrals and Polymer Statistics. Adv. Chem. Phys. 22, 1.Doi1986Doi, M., Edwards, S. F., 1986. The theory of polymer dynamics. Clarendon Press, Oxford.Ishihara2012 Ishihara, S., Sugimura, K., 2012. Bayesian inference of force dynamics during morphogenesis. 
J Theor Biol 313, 201-11.Ray2015 Ray, R. P. et al., 2015. Patterned Anchorage to the Apical Extracellular Matrix Defines Tissue Shape in the Developing Appendages of Drosophila. Dev Cell 34, 310-22.EulerEuler, L., 1744. Methodus inveniendi lineas curvas maximi minimive proprietate gaudentes, sivo solutio problematis isoperimitrici latissimo sensu accepti. Lausanne and Geneva: Marc-Michel Bousquet.Schillers2011 Schillers, H. et al., 2011. Real-time monitoring of cell elasticity reveals oscillating myosin activity.Biophys J 99, 3639-46.Schaefer2014 Schaefer, A. et al., 2014. Actin-binding proteins differentially regulate endothelial cell stiffness, ICAM-1 function and neutrophil transmigration.J Cell Sci 127, 4470-82.Zhou2009 Zhou, J. et al., 2013. Actomyosin stiffens the vertebrate embryo during crucial stages of elongation and neural tube closure. Development 136, 677-88.Marturano2013 Marturano, J. E. et al., 2013. Characterization of mechanical and biochemical properties of developing embryonic tendon. Proc Natl Acad Sci U S A 106, 6370-5.Garcia-Bellido1972 Garcia-Bellido, A., Santamaria, P., 1972. Developmental analysis of the wing disc in the mutant engrailed of Drosophila melanogaster. Genetics 72, 87-104.Couso1994 Couso, J. P. et al., 1994.The wingless signalling pathway and the patterning of the wing margin in Drosophila. Development 120, 621-36.vanBreugel1980 Vanbreugel, F. M. A., Grond, C., 1980. Bristle Patterns and Clones Along a Compartment Border in the Anterior Wing Margin of Drosophila-Hydei. Wilhelm Rouxs Archives of Developmental Biology 188, 195-200.Yoshida2011 Yoshida, A., Emoto, J., 2011. Variations in the arrangement of sensory bristles along butterfly wing margins. Zoolog Sci 28, 430-7.deCelis2003 de Celis, J. F., 2003. Pattern formation in the Drosophila wing: The development of the veins. Bioessays 25, 443-51, doi:10.1002/bies.10258.history_elasticaLevien, R., 2008. The Elastica: A Mathematical History. UCB/EECS-2008-103. Bethoux2005 Béthoux, O., 2005. Wing venation pattern of Plecoptera (Insecta: Neoptera). Illiesia 1, 52-81.Honda1983 Honda, H., 1983. Geometrical Models for Cells in Tissues. International Review of Cytology-a Survey of Cell Biology 81, 191-248.Honda01Nagai, T., Honda, H., 2001. Philos. Mag. B 81, 699. Ishimoto14Ishimoto, Y., Morishita, Y., 2014. Bubbly vertex dynamics: A dynamical and geometrical model for epithelial tissues with curved cell shapes. Phys. Rev. E 90, 052711.Fletcher2014 Fletcher, A. G. et al., 2014. Vertex models of epithelial morphogenesis. Biophys. J. 106, 2291-304.Khalilgharibi2016 Khalilgharibi, N. et al., 2016. The dynamic mechanical properties of cellularised aggregates. Curr Opin Cell Biol 42, 113-120.Sugimura2016Sugimura, K., Lenne, P. F., Graner, F., 2016. Measuring forces and stresses in situ in living tissues. Development 143, 186-96.landauLandau, L., Lifshitz, L., 1959. The theory of elasticity, Pergamon Press, New York.
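As an illustration of how the stiffness-inference recursion derived above could be evaluated in practice, the following Python sketch marches the difference equation forward along the discretized arc length. It is not part of the original analysis: the function name, the assumed initial values κ(s_0) and κ̇(s_0), and the expectation that φ, φ̇ and φ̈ have already been extracted from image data are all our own illustrative choices.

```python
import numpy as np

def infer_stiffness(s, phi, phidot, phiddot, kappa0, kappadot0):
    """Forward-march the nondimensionalised equation
           kappa(s) * phi''(s) + kappa'(s) * phi'(s) = -sin(phi(s))
    using the forward/central-difference closure described above.

    s, phi, phidot, phiddot : arrays sampled at the discrete points s_n
    kappa0, kappadot0       : assumed initial values kappa(s_0), dkappa/ds(s_0)
    """
    npts = len(s)
    kappa = np.empty(npts)
    kappadot = np.empty(npts)
    kappa[0], kappadot[0] = kappa0, kappadot0
    for k in range(1, npts):
        ds = s[k] - s[k - 1]
        # solve the difference equation for kappadot(s_k)
        denom = phidot[k] + 0.5 * ds * phiddot[k]          # assumed non-zero here
        numer = -np.sin(phi[k]) - (kappa[k - 1] + 0.5 * ds * kappadot[k - 1]) * phiddot[k]
        kappadot[k] = numer / denom
        # trapezoidal update: kappa(s_k) = kappa(s_{k-1}) + ds*(kappadot_{k-1}+kappadot_k)/2
        kappa[k] = kappa[k - 1] + 0.5 * ds * (kappadot[k - 1] + kappadot[k])
    return kappa, kappadot
```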
http://arxiv.org/abs/1709.09526v1
{ "authors": [ "Yukitaka Ishimoto", "Kaoru Sugimura" ], "categories": [ "q-bio.TO", "cond-mat.soft", "physics.bio-ph" ], "primary_category": "q-bio.TO", "published": "20170927135850", "title": "A mechanical model for diversified insect wing margin shapes" }
Introducing machine learningfor power system operation support Benjamin DONNOT^ ^+†, Isabelle GUYON^*+, Marc SCHOENAUER^+,Patrick PANCIATICI^†, Antoine MAROT^†*UPSud Paris-Saclay, +INRIA ^LRI, Laboratoire de Recherche en Informatique ^†RTE R&D January 2017 ============================================================================================================================================================================================ For an unknown continuous distribution on a real line,we consider the approximate estimation by the discretization. There are two methods for the discretization. First method is to divide the real line into several intervals before taking samples ("fixed interval method") . Second methodis dividing the real line using the estimated percentiles after taking samples ("moving interval method"). In either way, we settle down to the estimation problem of a multinomial distribution. We use (symmetrized) f-divergence in order to measure the discrepancy of the true distribution and the estimated one. Our main result is the asymptotic expansion of the risk (i.e. expected divergence)up to the second-order term in the sample size. We prove theoretically that the moving interval method is asymptotically superior to the fixed interval method.We also observe how the presupposed intervals (fixed interval method) or percentiles (moving interval method) affect the asymptotic risk.MSC(2010) Subject Classification: Primary 60F99; Secondary 62F12Key words and phrases: f-divergence, alpha-divergence, asymptotic risk, asymptotic expansion, multinomial distribution.§ INTRODUCTIONOne of the useful methods dealing with a continuous distribution is the discretization of the continuous distribution, namely the approximation by the finite-dimensional discrete distribution. Consider a probability distribution on the real line that is absolutely continuous with respect to Lebesgue measure. We call this distribution "mother distribution".It is not necessarily required to have full support (-∞, ∞). Let P(a,b) denote the probability of the mother distribution for the interval (a, b). We descretize the mother distribution and get the corresponding multinomial distribution as follows; Let-∞(≜ a_0)< a_1 < a_2 < … < a_p < ∞(≜ a_p+1).Consider the multinomial distribution with possible results C_i (i=0,…,p) each of which has a probability P(a_i, a_i+1). This multinomial distribution is an approximation of the mother distribution and coveys a certain amount of information on the mother distribution. In many practical cases, this information could be enough for a statistical analysis with an appropriate selection of a_i's. (See e.g. Drezner and Zerom <cit.> and the cited paper therein for this approximation. )In this paper, we consider the estimation of the unknown mother distributionthrough thisapproximation. Needless to say, the discretized model has a finite number of parameters and much easier to be estimated than the infinite dimensional model for the mother distribution.There are two methods on how to decide a_i's. One is the "fixed interval method".The a_i'sare given before collecting the sample. In other words, we choose the intervals independently of the sample from the mother distribution.The other method is the "moving interval method". First choose the percentiles to be estimated ξ_1 < … < ξ_p and estimate them from the sample of the mother distribution. The estimated percentiles ξ̂_i (i=1,…,p) are used as the end points of the intervals, that is, a_i=ξ̂_i (i=1, …,p). 
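As a concrete illustration of the two discretization methods just introduced, the following Python sketch computes the fixed-interval estimates and the order-statistic percentile estimates for a simulated sample. It is only a sketch: the function names, the toy sample, and the naive index choice n_i = ⌈nλ_i⌉ (the randomized choice discussed later is not used here) are our own assumptions.

```python
import numpy as np

def fixed_interval_mle(x, a):
    """Fixed-interval method: MLE hat m_i of m_i = P(a_i, a_{i+1}) for prefixed
    endpoints a_1 < ... < a_p (a_0 = -inf and a_{p+1} = +inf are implicit)."""
    cells = np.searchsorted(a, x)                     # which cell each X_j falls into
    return np.bincount(cells, minlength=len(a) + 1) / len(x)

def moving_interval_endpoints(x, lam):
    """Moving-interval method: estimate xi_i = F^{-1}(lambda_i) by the order
    statistics X_(n_i), with the naive choice n_i = ceil(n * lambda_i)."""
    xs = np.sort(x)
    n_i = np.ceil(len(x) * np.asarray(lam)).astype(int)
    return xs[n_i - 1]

# toy usage on a standard normal sample
rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print(fixed_interval_mle(x, a=np.linspace(-2.0, 2.0, 9)))   # estimated cell probabilities
lam = np.arange(1, 10) / 10
print(moving_interval_endpoints(x, lam))                    # estimated deciles
# the moving-interval estimate of the discretized distribution is simply
# np.diff(np.concatenate(([0.0], lam, [1.0]))), i.e. the prescribed lambda-increments
```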
The difference between the two methods lies"intervals first" or "percentiles first".Once the intervals a_i's are given, we have the estimation problem of the parameters of the multinomial distribution. If we use the fixed interval method, the true (unknown)parameters are P(a_i, a_i+1) (i=0, …, p) and we need to estimate these parameters based on the sample. On the other hand, for the moving interval method, the true parameter is P(ξ̂_i, ξ̂_i+1) (ξ̂_0≜ -∞ and ξ̂_p+1≜∞), while the estimand is the probability given by the presupposed percentiles; if ξ_i is the lower 100λ_i% percentile for 1≤ i ≤ p, then the estimated probability for each result is given by λ_i+1-λ_i (i=0,…,p) with λ_0≜ 0, λ_p+1≜ 1.For the measurement of the performance of the estimators, we use f-divergence. f-divergence between the two multinomial distributions (say M_1 and M_2) is defined asD_f[M_1 : M_2] ≜∑_i=0^p p1_if(p2_i/p1_i),where p1_i, p2_i, i=0,…,p are the probabilities of each result respectively for M_1 and M_2, and f is a smooth convexfunction such that f(1)=0, f'(1)=0, f”(1)=1.f-divergence is natural in view of the sufficiency of the sample information.If we use the dual function of f defined by f^*(x)=xf(1/x), we haveD_f^*[M_1 : M_2] = D_f[M_2 : M_1].(See Amari <cit.> and Vajda <cit.> for the property of f-divergence.)When the f-divergence is too abstract for us to gain some concrete result, we use α-divergence.It is a one-parameter (α) family given by (<ref>) with f_α(x) such asf_α(x)≜4/1-α^2(1-x^(1+α)/2)+2/1-α(x-1) if α± 1,xlog x +1-x if α=1,-log x+x-1 if α=-1.We will use the notation αD[M_1 : M_2] instead of D_f_α[M_1 : M_2]. α-divergence is the subclass of f-divergence, but still a broad class which contains the frequently used divergence such asKullback-Leibler divergence (α =-1), Hellinger distance (α=0), χ^2-divergence (α=3). Note that the conjugate of (f_α)^* equals f_-α, hence -αD[M_1 : M_2] = αD[M_2 : M_1] In general, divergence D[M_1:M_2] satisfies the conditionD[M_1:M_2]≥ 0, D[M_1:M_2]=0if and only if M_1d=M_2But the triangle inequality and symmetricity do not hold true.In this paper, we adopt the mean of the dual divergences in order to satisfy the symmetricity (see Amari and Cichocki <cit.>);|α|D[M_1 : M_2]≜1/2{αD[M_1 : M_2]+-αD[M_1 : M_2] } We take the expectation of the divergence between the estimated multinomial distribution M̂ and the true one M;ED ≜ E[D_f[M : M̂]]This is the risk of M̂ and we use it to describe the goodness of the estimation. In this paper, we only consider the basic estimators, that is, the most likelihood estimator for the fixed interval and the ordered sample for the moving interval.It is not easy to analyzethe risk theoretically under small sample, hence we focus ourselves on the asymptotic risk under large sample. In Section <ref>, as the main result, we show the asymptotic expansion of the risk for the both methods, the fixed interval and the moving interval (Theorem 1 and 2 ) . Using this result, first we observe how the asymptotic risk is affected by the the presupposed intervals (the fixed intervals) or percentiles (the moving intervals). Secondwe compare the asymptotic risk between the two methods and report the superiority of the moving interval methods when the percentiles are given with equi-probable intervals.§ MAIN RESULT We state the asymptotic expansion of the risk (<ref>) up to the second order with respect to the sample size, n, for the both methods, that is, the fixed interval method (Section <ref>) and the moving interval method (Section <ref>). 
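For concreteness, the function f_α and the symmetrized |α|-divergence defined above can be transcribed directly into code. The following Python sketch is a plain implementation of those formulas for finite-dimensional multinomial distributions (assuming strictly positive probabilities); it is meant only as an illustration and the function names are ours.

```python
import numpy as np

def f_alpha(x, alpha):
    """The convex function f_alpha with f(1) = f'(1) = 0 and f''(1) = 1."""
    x = np.asarray(x, dtype=float)
    if alpha == 1.0:
        return x * np.log(x) + 1.0 - x
    if alpha == -1.0:
        return -np.log(x) + x - 1.0
    return (4.0 / (1.0 - alpha**2)) * (1.0 - x**((1.0 + alpha) / 2.0)) \
        + (2.0 / (1.0 - alpha)) * (x - 1.0)

def alpha_divergence(p1, p2, alpha):
    """alpha-divergence D_alpha[M1 : M2] = sum_i p1_i * f_alpha(p2_i / p1_i)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return float(np.sum(p1 * f_alpha(p2 / p1, alpha)))

def sym_alpha_divergence(p1, p2, alpha):
    """|alpha|-divergence: the mean of the two dual alpha-divergences."""
    return 0.5 * (alpha_divergence(p1, p2, alpha) + alpha_divergence(p1, p2, -alpha))
```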
In each subsection, we analyzehow the asymptotic risk is determined with respect to the sample size, the dimension of the multinomial distribution and the prefixed intervals (fixed intervals) or percentiles (moving intervals). In Section <ref>, we compare the both methods and show the superiority of the moving intervals when the percentiles are given with equi-probable intervals.§.§ Fixed Intervals We prefix the intervals with the endpoints (<ref>) before taking the sample from the mother distribution. In other words, we choose the endpoints (<ref>) independently of the sample.We consider the multinomial distribution with the possible results C_i, i=0,…,p. If a sample from the mother distribution take the value within the interval (a_i,a_i+1) for i=0,…,p, we count it as the sample with the result C_i. Then this multinomial distribution is an approximation of the mother distribution by a discretizaion..The probability for C_i is given bym_i ≜ P(a_i, a_i+1), i=0,…,p,where P(a_i, a_i+1) is the probability of the mother distribution for the interval (a_i, a_i+1).We estimate this multinomial distribution through the m.l.e.. Let X_i, i=1,…,n be the i.i.d. sample from the mother distribution. Then the m.l.e. of m≜ (m_0, …, m_p) is given by m̂≜ (m̂_0, …, m̂_p), where m̂_i≜#{X_i | X_i ∈ (a_i, a_i+1)}/n, i=0,…,p. dive, that is, D_f[m: m̂] ≜∑_i=0^p m_if(m̂_̂î/m_i).The performance of m̂ is measured by the risk, ED_I ≜ E[D_f[m : m̂]].For a general multinomial distribution, which is not necessarily given by a mother distribution as above,the following result holds. For a multinomial distribution with the probability m≜(m_0, …, m_p) and its m.l.e. m̂, the risk of m.l.e. (<ref>)based on i.i.d. sample of size n is given as follows;ED_I=p/2n+1/24n^2[ 4f^(3)(1)(-3p-1+M )+3f^(4)(1)(-2p-1+M)],where f^(3) and f^(4) are respectively the third and forth derivative of f in (<ref>),and M≜∑_i=0^p m_i^-1. –Proof–LetR_i ≜m̂_i-m_i/m_i.Note that (√(n)(m̂_1-m_1),…, √(n)(m̂_p-m_p)) d⟶ N_p(0, Σ),whereΣ≜ (σ_ij),σ_ij≜ p_i(1-p_i) if i=j, -p_i p_jif ij.(See e.g. (5.4.13) of <cit.>.)Using this fact and f(1)=0, f'(1)=0, f”(1)=1, we have the following expansion D_f[m: m̂] with respect to n. D_f[m: m̂]=∑_i=0^p m_i f(1+R_i)=∑_i=0^p m_i (f(1)+f'(1)R_i+1/2f”(1)R_i^2+1/6f^(3)(1)R_i^3+1/24f^(4)(1)R_i^4)+o_p(n^-2)=1/2∑_i=0^p m_i R_i^2 +1/6f^(3)(1)∑_i=0^p m_i R_i^3+1/24f^(4)(1)∑_i=0^p m_i R_i^4+o_p(n^-2).=1/2∑_i=0^p m_i^-1(m̂_i-m_i)^2+1/6f^(3)(1)∑_i=0^p m_i^-2(m̂_i-m_i)^3+1/24f^(4)(1)∑_i=0^p m_i^-3(m̂_i-m_i)^4+o_p(n^-2).From the central moments of the standardized multinomial distribution,E[m̂_i-m_i]=0, E[(m̂_i-m_i)^2]=n^-1(m_i-m_i^2), E[(m̂_i-m_i)^3]=n^-2(m_i-3m_i^2+2m_i^3), E[(m̂_i-m_i)^4]=3n^-2(m_i-m_i^2)^2+o(n^-2),we haveED_I=1/2n∑_i=0^p(1-m_i)+1/6n^2f^(3)(1)∑_i=0^p(m_i^-1-3+2m_i)+1/8n^2f^(4)(1)∑_i=0^p(m_i^-1-2+m_i),which is equivalent to the result (<ref>) since ∑_i=0^p m_i=1. Q.E.D.Especially for the α-divergence,αD[m : m̂]≜ D_f_α[m : m̂] ,|α|D[m : m̂]≜1/2{ D_f_α[m : m̂]+D_f_-α[m : m̂]},where f_α is given by (<ref>),the following results hold. (Sheena <cit.> gained this result as an example ofthe asymptotic risk of m.l.e. for a general parametric model.) αED_I ≜ E[αD[m: m̂]]=p/2n+1/96n^2{(α-3)(3α-7)(M-1)-6(α-3)(α-1)p}+o(n^-2), |α|ED_I ≜ E[|α|D[m: m̂]]=p/2n+1/32n^2{(α^2+7)(M-1)-2(α^2+3)p}+o(n^-2). –Proof– The results are straightforward from Theorem <ref> and the fact f^(3)_α(1)=(α-3)/2f^(4)_α(1)=(α-3)(α-5)/4. Q.E.D.We observe the following points from (<ref>), (<ref>) and (<ref>). * The main term, i.e. 
n^-1-order term, is determined by p/n, that is the ratio of the dimension of the multinomial distribution model (the number of the free parameters) to the sample size. We call this"p-n ratio" hereafter. p-n ratio showsthecomplexity of the model to be estimated relative to thesample size. The main term is independent off or α, and m_i (i=0, …, p).* The second term, i.e. n^-2-order term, depends on the parameter of the multinomial distribution through M≜∑_i=0^p m_i^-1.M attains the minimum value (p+1)^2 when m_0=m_1=⋯=m_p. It increases rapidly if one ofm_i's is near to zero. The effect of M on the risk depends on the choice of f or α. If you choose f such that 4f^(3)(1)+3f^(4)(1) is non-positive or α such that 7/3 ≤α≤ 3, (<ref>) and (<ref>) respectively decreases or are constant as M increases. This is rather unnatural since it contradicts to our belief that the existence of result with a small probability makes estimation harder for a multinomial distribution. In this sense, χ^2-distance with α=3 seems inappropriate, since it is asymptotically insensitive to the difference in the parameters m_i (i=0, …, p).(See Sheena <cit.>, which reports that the α-divergence seems statistically unnatural when |α| is large for a regression model.) α-divergence is a distance if and only if α=0, and the pair of α- and -α- divergences work dually like a distance. (For "generalized Pythagorean theorem", see <cit.> or <cit.>.) In this respect, the divergence |α|D seems natural. Actually (<ref>) shows that the risk is a monotonically increasing function of M for any α. * The n^-2 term of (<ref>) or (<ref>) can be negative for some f(or α), p , while that of (<ref>) is always positive as(α^2+7)(M-1)-2(α^2+3)p ≥(α^2+7)((p+1)^2-1)-2(α^2+3)p=p^2α^2+7p^2+8p>0. §.§ Moving Intervals First we choose points λ_i (1≤ i ≤ p) in the interval (0, 1);λ_0(≜ 0)< λ_1 < λ_2 < ⋯ < λ_p <λ_p+1(≜ 1).Let ξ_i ≜ F^-1(λ_i), 1≤ i ≤ p, ξ_0≡ -∞,ξ_p+1≡∞,where F^-1 is the inverse function of the cumulative distribution function, F, of the mother distribution.We call ξ's the percentiles of the mother distribution. In the moving intervals method, we estimate the percentiles of the mother distribution from the sample of the mother distribution, and use them as the endpoints of (<ref>);a_i = ξ̂_i, 1≤ i ≤ p,where ξ̂_i is the estimator of ξ_i for i=1,… p and ξ̂_0≡ -∞ and ξ̂_p+1≡∞.In this case,the multinomial distribution that approximates the mother distribution has unknown parameters m̂≜(m̂_0,…,m̂_p),m̂_i≜ P(a_i, a_i+1) ≡ P(ξ̂_i, ξ̂_i+1)0≤ i ≤ p,while it is estimated as m≜(m_0, …, m_p), m_i≜λ_i+1-λ_i 0≤ i ≤ p. Although there are several ways to estimate the percentile ξ, we focus here on the simple estimator using the order statistic itself. Take i.i.d sample of size n from the mother distribution and let the ordered sample be denoted byX_(1)≤ X_(2)≤⋯≤ X_(n).We estimate ξ _i by ξ̂_i≜ X_(n_i) 1≤ i ≤ p,where n_i is a function of n with the values in {1, 2, …, n}. Let r_i denote the gap between n_i and nλ_i, namelyr_i≜ n_i-nλ_i1≤ i ≤ p,r_0≜ 0,r_p+1≜ 1. We measure the discrepancy between m and m̂ by f-divergence, D_f[m : m̂] ≜∑_i=0^p m_if(m̂_i/m_i).If one might think it is natural to consider D_f[m̂ : m] in the sense that the true parameter should come first, it is satisfied by using the dual function f^* (see (<ref>)). 
Hence we willproceed with (<ref>).The risk for the moving interval method is given byED_P ≜ E[D_f[m : m̂]], and the following result holds.Suppose that r_i/n=o(n_i^-1/2), then ED_P =p/2n+1/24 n^2[-24-36p+12∑_i=0^p(r_i+1-r_i)(r_i+1-r_i+1)m_i^-1+4f^(3)(1){-5-9p+∑_i=0^p(3(r_i+1-r_i)+2)m_i^-1}+f^(4)(1){-3-6p+3∑_i=0^p m_i^-1}]+o(n^-2). –Proof– The whole process of proof is lengthy, hence we only state the outline of the proof here. All the details are found in Appendix.LetU_(n_i)≜ F(X_(n_i)), Δ_i ≜√(n)(U_(n_i)-λ_i)1≤ i ≤ pand Δ_0≜ 0, Δ_p+1≜ 0. The following relationship holds for 0 ≤ i ≤ p.m̂_i =F(ξ̂_i+1)-F(ξ̂_i)=F(X_(n_i+1))-F(X_(n_i))=U_(n_i+1)-U_(n_i)=λ_i+1-λ_i+n^-1/2(Δ_i+1-Δ_i)=m_i+n^-1/2(Δ_i+1-Δ_i).Note that(Δ_1, …, Δ_p) d⟶ N_p(0, Σ),where Σ=(σ_ij)=λ_i (1-λ_j) 1≤ i ≤ j ≤ p(see e.g. Theorem 5.4.5 of <cit.>). Similarly to (<ref>), the following equation holds. D_f[m: m̂]=1/2∑_i=0^p m_i R_i^2 +1/6f^(3)(1)∑_i=0^p m_i R_i^3+1/24f^(4)(1)∑_i=0^p m_i R_i^4+o_p(n^-2).Therefore we haveED_P=1/2∑_i=0^p m_i E[R_i^2] +1/6f^(3)(1)∑_i=0^p m_i E[R_i^3]+1/24f^(4)(1)∑_i=0^p m_i E[R_i^4]+o(n^-2).After long but straightforward calculation (see Appendix), we have∑_i=0^p m_i E[R_i^2] =n^-1p+n^-2[-2-3p+∑_i=0^p (r_i+1-r_i)(r_i+1-r_i+1)m_i^-1], ∑_i=0^p m_i E[R_i^3] =n^-2[-5-9p+∑_i=0^p(3(r_i+1-r_i)+2)m_i^-1], ∑_i=0^p m_i E[R_i^4] =n^-2[-3-6p+3∑_i=0^pm_i^-1].If we insert these results into (<ref>), we have the result.Q.E.D.We also have the following formulas for the α-divergence. αED_P =p/2n+1/96n^2[-α^2(3+6p)-α(16+24p)-18p-21+∑_i=0^p{48(r_i+1-r_i)^2+24(α-1)(r_i+1-r_i)+3α^2-8α-3}m_i^-1]+o(n^-2), |α|ED_P =p/2n+1/96n^2[-α^2(3+6p)-18p-21+∑_i=0^p{48(r_i+1-r_i)^2-24(r_i+1-r_i)+3α^2-3}m_i^-1]+o(n^-2) –Proof– The results are straightforward from (<ref>) and (<ref>).Q.E.D.We give some cements on ED_P, αED_P and |α|ED_P. * The main term is half the p-n ratio just like ED_I. It is independent off or α, and m_i (i=0, …, p).* The risk is independent of the mother distribution (it is due to the fact (<ref>)).It is determined by our choice of m_i's or equivalently λ_i's in (<ref>). * The choice of n_i's, or equivalently r_i's (i=1, …, p) effects the n^-2-order term. It is possible that the coefficient of m_i^-1 could be negative for some r_i's and f(or α). In this case small m_i could reduce the risk.§.§ Comparison of two methods We compare the risks between the fixed interval method and the moving interval method. For the both methods, the main term (n^-1-order term) are common, but we can see some difference in the second term (n^-2-order term). The biggest difference between the two methods lies in m_i's.In the fixed interval method, m_i's depend on the unknown mother distribution, hence we are unable to control them. As we observed in Section <ref>, if they include even one small m_i near to zero, then the (asymptotic) risk gets extremely high through M.Themore intervals (endpoints)we use for discretization, more likely we are to have small m_i's. Even if we have a large set of sample, we have to be cautions to raise the dimension of the multinomial distribution. On the contrary, for the moving interval method, m_i's are controllable. We can choose m_i's so that the risk does not take a large value.In order to make more specific comparison,first we will specify n_i's or equivalently r_i's for the moving interval method. The most naive selection of n_i is [nλ_i] or [nλ_i]+1, where [ · ] is Gauss symbol. 
Let≜ [nλ_i]-nλ_i.In this paper, we adopt the following randomized choice of r_i's;P(r_i=)(=P(n_i=[nλ_i]))=1+, P(r_i=1+)(=P(n_i=[nλ_i]+1))=-for 1≤ i ≤ p, while r_0≡ 0 and r_p+1≡1 as in (<ref>). This is natural in that n_i is chosen to be [nλ_i] and [nλ_i]+1 respectively with the probabilities proportional to the closeness to the both points. (To locate ξ̂_i between X_([nλ_i]) and X_([nλ_i]+1) according to r_i is another appealing idea. But if we adopt this estimation of ξ_i, then the risk depends on the mother distribution.)Let ED_P^*≜ E[ED_P],αED^*_P≜ E[αED_P],|α|ED_P^*≜ E[|α|ED_P],where all the expectation is taken with respect to the distribution (<ref>). The following results hold for the randomized choice of r_i's (<ref>). ED_P^* =p/2n+1/48n^2[-48-72p+24{-r̅_1(1+r̅_1)m_0^-1+(2-r̅_p(1+r̅_p))m_p^-1-∑_i=1^p-1((1+)+(1+))m_i^-1}+8f^(3)(1){-5-9p+2∑_i=0^p m_i^-1+3m_p^-1}+2f^(4)(1){-3-6p+3∑_i=0^p m_i^-1}]+o(n^-2), αED_P^* =p/2n+1/96n^2[-α^2(3+6p)-α(16+24p)-18p-21-48r̅_1(1+r̅_1)m_0^-1+(-48r̅_p(1+r̅_p)+24(α+1))m_p^-1-48∑_i=1^p-1((1+)+(1+))m_i^-1+(3α^2-8α-3)∑_i=0^p m_i^-1]+o(n^-2), |α|ED_P^* =p/2n+1/96n^2[-α^2(3+6p)-18p-21-48r̅_1(1+r̅_1)m_0^-1+(24-48r̅_p(1+r̅_p))m_p^-1-48∑_i=1^p-1((1+)+(1+))m_i^-1+(3α^2-3)∑_i=0^p m_i^-1]+o(n^-2). –Proof– As proved in Appendix, the following results hold.E[]=0 for 0 ≤ i ≤ p,E[r_p+1]=1 E[^2]=-(1+) for 1≤ i ≤ p,E[r_0^2]=0,E[r_p+1^2]=1 E[]=0for 0≤ i ≤ p.Applying these results to E[(-)^2]=E[^2]+E[^2]-2E[] and E[]-E[] in (<ref>), (<ref>) and (<ref>), we have the results.Q.E.D.Note that for 1 ≤ i ≤ p, -1 ≤≤ 0 and 0 ≤ -(1+) ≤1/4.Therefore we haveED_P^* ≤p/2n+1/48n^2[-48-72p+6{m_0^-1+9m_p^-1+2∑_i=1^p-1m_i^-1}+8f^(3)(1){-5-9p+2∑_i=0^p m_i^-1+3m_p^-1}+2f^(4)(1){-3-6p+3∑_i=0^p m_i^-1}]+o(n^-2) (say ED_P^*), αED_P^* ≤p/2n+1/96n^2[-α^2(3+6p)-α(16+24p)-18p-21+12m_0^-1+(24α+36)m_p^-1+24∑_i=1^p-1m_i^-1+(3α^2-8α-3)∑_i=0^p m_i^-1]+o(n^-2)=p/2n+1/96n^2[-α^2(3+6p)-α(16+24p)-18p-21+(3α^2-8α+9)m_0^-1+(3α^2+16α+33)m_p^-1+(3α^2-8α+21)∑_i=1^p-1m_i^-1]+o(n^-2) (say αED_P^*),|α|ED_P^* ≤p/2n+1/32n^2[-α^2(1+2p)-6p-7+(α^2+3)m_0^-1+(α^2+11)m_p^-1+(α^2+7)∑_i=1^p-1m_i^-1]+o(n^-2) (say |α|ED_P^*).If we choose the equal right-end and left-end probabilities, i.e. m_0=m_p, |α|ED_P^*=p/2n+1/32n^2[-α^2(1+2p)-6p-7+(α^2+7)M]+o(n^-2).This upper bound for |α|ED_P^*is affected by m_i's through M just like (<ref>). This indicates that the choice of equally-valued m_i's, that is, m_i=1/(p+1), i=1, …, p are reasonable for the estimation of the mother distribution. It is needles to say that the percentiles with a common increment ("quantiles") are most often used in a practical situation.If we choose "quantiles" for the moving interval method, we have the following result.Set λ_i's in (<ref>) so that m_i=1/(p+1),i=0,…,p, then asymptotically (exactly speaking, as for the comparison up to the n^-2-order term) , the following inequality holds.|α|ED_I ≥|α|ED_P^*. –Proof– Since M ≥ (p+1)^2, from (<ref>), we have|α|ED_I≥p/2n+1/32n^2{(α^2+7)(p^2+2p)-2(α^2+3)p}+o(n^-2) ( say |α|ED_I ),while when m_i=1/(p+1), i=0, …, p, |α|ED_P^* equalsp/2n+1/32n^2[-α^2(1+2p)-6p-7+(α^2+7)(p^2+2p+1)]+o(n^-2).Up to the n^-2-order term, we have|α|ED_I-|α|ED_P^*≥|α|ED_I - |α|ED_P^* =0. Q.E.D.The above theorem says that even if we are lucky enough to choose the best intervals (that is, equi-probable intervals) for the fixed interval method, it is asymptotically dominated by the moving interval method with "quantiles". 
We can conclude that if we estimate an unknown continuous distribution by the approximation method of discretization, it is better, at least asymptotically, to use the moving interval method.We will also present a numerical comparison between the both methods. Suppose that a_i's in (<ref>) for the fixed interval method is given by(-2.0, -1.5, -1.0, -0.5, 0,0.5,1.0, 1.5,2.0).with p=9.We consider the two cases where the mother distribution are respectively N(0,1) and st(0.8), where st(0.8) is the skew t-distribution with the zero mean, the unit variance and the skewness parameter of 0.8.For the intervals with the endpoints (<ref>), the corresponding probabilities of N(0,1) are(m_0, m_1, …, m_9) ≑ (0.023, 0.044, 0.092, 0.150, 0.191, 0.191, 0.150, 0.092, 0.044, 0.023),while those of st(0.8) are given by(m_0, m_1, …, m_9) ≑ (6.496*10^-8, 0.003, 0.153, 0.219, 0.194, 0.155, 0.113, 0.074, 0.044, 0.044) The density function of N(0,1)and the histogram of 10^4 samples with the above endpoints (<ref>) are drawn in Figure <ref>. The similar figures for st(0.8) are drawn in Figure <ref>. For the moving interval method, we use "quantiles". Namelyλ's in (<ref>) are given by λ_i=i/10 (1 ≤ i ≤ 9), or equivalently m_i (0≤ i ≤ 9) in (<ref>) are all 1/10. We put α=1. Let's skip the o(n^-2) part of |α|ED_I (|α|ED_P^*), and call it the approximated|α|ED_I (|α|ED_P^*). The graphs of the approximated |α|ED_I and |α|ED_P^* as n varies are drawn in Figure <ref> for N(0,1) and in Figure <ref> for st(0.8). (Note that |α| are skipped in the legend.)In Figure <ref>, though the graph of the approximated |α|ED_P^* is slightly lower than that of |α|ED_I, the two curves are quite close to each other. In Figure <ref>, we see that the curve of |α|ED_I is located at much higher position than that of |α|ED_P^*.Let'sconsider the approximated |α|ED_I and |α|ED_P^* as the functions of n and put the equation(The approximated |α|ED_I)(n)= (The approximated |α|ED_P^*)(100)The solution of this equation indicates how large sample is required for the approximated |α|ED_I to attain the same risk as that of the approximated |α|ED_P^* with n=100. For the case of N(0,1) the solution is given by n≑ 109, while n≑ 9298 for st(0.8). Consequently we notice that the fixed interval method could be extremely inefficient to the moving interval method if the unknown mother distribution assigns very small probability for one of the chosen intervals.This could happen if the mother distribution has a finite support.Suppose that we have prior knowledge that the mother distribution has the support [0, 1], and set a_i's as a_i=i/10 (1≤i ≤ 9) for the fixed intervals. The m_i's for the moving interval method with "quantiles" are again m_i=1/10 (0 ≤ i ≤ 9). If the mother distribution is Beta(2, 5), the corresponding probabilities for the fixed intervals are given by(m_0, m_1, …, m_9) ≑ (0.114, 0.230, 0.235, 0.187, 0.124, 0.068, 0.030, 0.009, 0.002, 5.5*10^-5).The graph of the density function and the histogram of 10^4 samples with above a_i's as the endpoints are given in Figure <ref> . The graphs of the approximated risks for the both methods are shown in Figure <ref>.The solution for the equation (<ref>) is given by n≑ 379. 
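The sample sizes quoted above can be reproduced, at least approximately, by evaluating the second-order expansions for |α|ED_I and |α|ED_P^* with α = 1, p = 9, m_i = 1/10 and r̅_i = 0 (i.e. taking n a multiple of 10). The following Python sketch plugs the rounded probabilities printed above into those expansions; the function names are ours, and the Beta(2,5) case comes out near 378 rather than 379 because of the rounding of the printed probabilities.

```python
import numpy as np

def ed_I_approx(n, m, p, alpha=1.0):
    """Approximated |alpha|ED_I: n^-1 and n^-2 terms of the fixed-interval risk."""
    M = np.sum(1.0 / np.asarray(m))
    return p / (2 * n) + ((alpha**2 + 7) * (M - 1) - 2 * (alpha**2 + 3) * p) / (32 * n**2)

def ed_P_star_approx(n, p, alpha=1.0):
    """Approximated |alpha|ED_P^* for quantiles m_i = 1/(p+1) and rbar_i = 0;
    the n^-2 coefficient then vanishes at alpha = 1."""
    mp_inv = p + 1.0
    M = (p + 1.0) ** 2
    coeff = (-alpha**2 * (3 + 6 * p) - 18 * p - 21 + 24 * mp_inv
             + (3 * alpha**2 - 3) * M)
    return p / (2 * n) + coeff / (96 * n**2)

def matching_n(m, p, alpha=1.0, n_ref=100):
    """Sample size at which the approximated fixed-interval risk equals the
    approximated moving-interval risk at n_ref (quadratic equation in n)."""
    target = ed_P_star_approx(n_ref, p, alpha)
    M = np.sum(1.0 / np.asarray(m))
    c2 = ((alpha**2 + 7) * (M - 1) - 2 * (alpha**2 + 3) * p) / 32.0
    # target = p/(2n) + c2/n^2  ->  target*n^2 - (p/2)*n - c2 = 0
    return (p / 2 + np.sqrt(p**2 / 4 + 4 * target * c2)) / (2 * target)

p = 9
m_norm = [0.023, 0.044, 0.092, 0.150, 0.191, 0.191, 0.150, 0.092, 0.044, 0.023]
m_skewt = [6.496e-8, 0.003, 0.153, 0.219, 0.194, 0.155, 0.113, 0.074, 0.044, 0.044]
m_beta = [0.114, 0.230, 0.235, 0.187, 0.124, 0.068, 0.030, 0.009, 0.002, 5.5e-5]
for m in (m_norm, m_skewt, m_beta):
    print(round(matching_n(m, p)))   # roughly 109, 9298 and about 378 (text quotes 379)
```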
Even if we are lucky enough to know the finite support of the mother distribution, the fixed interval method is still quite inefficient to the moving interval method.We saw that the moving interval method is superior theoretically and numerically to the fixed interval method as estimation of the mother distribution.Needless to say, we often need to know the probability of some fixed intervals for a certain practical purpose. In that case, it might be preferable that the moving interval method is also subsidiarily used, since it could give some information on M in (<ref>) for the fixed interval method. Lastly we mention that the histogram (as estimation of the unknown distribution) falls between the both methods. In a conventional way, the intervals for the histogram are chosen after the sample is taken, taking into the consideration the frequency of each interval, especially being careful not to create the interval of null frequency.§ APPENDIX –Proof of (<ref>), (<ref>), (<ref>)– From (<ref>), we notice thatR_i=m̂_i/m_i-1=1/√(n) m_i(-),hencem_i R_i^2=n^-1 m_i^-1 (^2+^2-2), m_i R_i^3=n^-3/2 m_i^-2(^3-3^2+3^2-^3), m_i R_i^4=n^-2 m_i^-3(^4-4^3+6^2^2-4^3+^4).From the formula on the moments of the ordered statistics U_(n_i) (see (3.1.6) of <cit.>)E[∏_i=1^k U_(n_i)^a_i]= n!/(n+∑_i=1^k a_i)!∏_i=1^k (n_i-1+∑_j=1^i a_j )!/(n_i-1+∑_j=1^i-1 a_j )!,n_1 ≤⋯≤ n_k,we have the following results.E[U_()]=n!/(n+1)!n_i !/(n_i-1)!=n_i/n+1=n/n+1(+/n) =(1-1/n+1/n^2+O(n^-3))(+/n)=+1/n(-+)+1/n^2(-+)+O(n^-3),where the forth equation is due to the factn/n+1=1-1/n+1=1-1/n+(1/n-1/n+1)=1-1/n+1/n(n+1)1-1/n+1/n^2+(-1/n^2+1/n(n+1))=1-1/n+1/n^2-1/n^2(n+1)=1-1/n+1/n^2+O(n^-3). E[U_()^2]=n!/(n+2)!(n_i+1)!/(n_i-1)!=1/(n+1)(n+2)n_i(n_i+1)=n^2/(n+1)(n+2)(+/n)(++1/n)=(1-3/n+7/n^2+O(n^-3))(+/n)(++1/n)=^2+1/n(-3^2++(+1))+1/n^2(-3-3(+1)+(+1)+7^2)+O(n^-3)=^2+1/n(-3^2+(2+1))+1/n^2(-3(2+1)+(+1)+7^2)+O(n^-3),where the forth equation is due to the factn^2/(n+1)(n+2)=n^2/n^2+3n+2=1-3n+2/n^2+3n+2=1-3/n+(3/n-3n+2/n^2+3n+2)=1-3/n+7n+6/n^3+3n^2+2n=1-3/n+7/n^2+(-7/n^2+7n+6/n^3+3n^2+2n)= 1-3/n+7/n^2+-7(n^2+3n+2)+7n^2+6n/n^4+3n^3+2n^2= 1-3/n+7/n^2+O(n^-3). E[U_()U_()]=n!/(n+2)!(-1+1)!(-1+2)!/(-1)!(-1+1)!=(+1)/(n+1)(n+2)=n^2/(n+1)(n+2)(+/n)(++1/n)=(1-3/n+7/n^2+O(n^-3))(+/n)(++1/n)=+1/n(-3++(+1))+1/n^2(-3-3(+1)+(+1)+7)+O(n^-3). E[U_()^3]=n!/(n+3)!(-1+3)!/(-1)!=(+1)(+2)/(n+1)(n+2)(n+3)=n^3/(n+1)(n+2)(n+3)(+/n)(++1/n)(++2/n)=(1-6/n+25/n^2+O(n^-3))(+/n)(++1/n)(++2/n)=^3+1/n(-6^3+^2 +^2(+1)+^2(+2))+1/n^2(-6^2-6(+1)^2-6(+2)^2+(+1)+(+2)+(+1)(+2)+25^3)+O(n^-3)=^3+1/n(-6^3+3^2+3^2)+1/n^2(25^3+(-18-18)^2+(3^2+6+2))+O(n^-3),where the forth equation is due to the following relation;n^3/(n+1)(n+2)(n+3)-1=n^3-(n^2+3n+2)(n+3)/(n+1)(n+2)(n+3)=-6n^2-11n-6/(n+1)(n+2)(n+3)=-6/n+6/n-6n^2+11n+6/(n+1)(n+2)(n+3)=-6/n+25n^2+60n+36/n(n^3+6n^2+11n+6)=-6/n+25/n^2+(-25/n^2+25n^2+60n+36/n(n^3+6n^2+11n+6))=-6/n+25/n^2+-25(n^3+6n^2+11n+6)+25n^3+60n^2+36n/n^2(n^3+6n^2+11n+6)=-6/n+25/n^2+O(n^-3). E[U_()^2U_()]=n!/(n+3)!(+1)! (+2)!/(-1)! (+1)!=(+1)(+2)/(n+1)(n+2)(n+3)=n^3/(n+1)(n+2)(n+3)(+/n)(++1/n)(++2/n)=(1-6/n+25/n^2)(+/n)(++1/n)(++2/n)=^2+1/n(-6^2++(+1)+(+2)^2)+1/n^2(-6-6(+1)-6(+2)^2+(+1)+(+2)+(+1)(+2)+25^2)+O(n^-3)=^2+1/n(-6^2+(2+1)+(+2)^2)+1/n^2(25^2-(6+12)^2-(12+6)+(2+4++2)+(^2+))+O(n^-3). E[U_()U_()^2]=n!/(n+3)!! (+2)!/(-1)! !=(1-6/n+25/n^2+O(n^-3))(+/n)(++1/n)(++2/n)=^2+1/n(-6^2+^2+(+1)+(+2))+1/n^2(-6^2-6(+1)-6(+2)+(+1)+(+2)+(+1)(+2)+25^2)+O(n^-3)=^2+1/n(-6^2+^2+(2+3))+1/n^2(25^2-6^2-(12+18)+(^2+3+2)+(2+3))+O(n^-3). 
E[U_()^4]=n!/(n+4)!(+3)!/(-1)!=n^4/(n+1)(n+2)(n+3)(n+4)(+1)(+2)(+3)/n^4=(1-10/n+65/n^2+O(n^-3))(+/n)(++1/n)(++2/n)(++3/n)=^4+1/n(-10^4+^3+(+1)^3+(+2)^3+(+3)^3)+1/n^2(-10^3-10(+1)^3-10(+2)^3-10(+3)^3+(+1)^2+(+2)^2+(+3)^2+(+1)(+2)^2+(+1)(+3)^2+(+2)(+3)^2+65^4)+O(n^-3)=^4+1/n(-10^4+(4+6)^3)+1/n^2( 65^4-(40+60)^3+(6^2+18+11)^2)+O(n^-3),where the third equation is due to the following relation;n^4/(n+1)(n+2)(n+3)(n+4)-1=n^4-(n^4+10n^3+35n^2+50n+24)/(n+1)(n+2)(n+3)(n+4)=-(10n^3+35n^2+50n+24)/n^4+10n^3+35n^2+50n+24=-10/n+10/n-10n^3+35n^2+50n+24/n^4+10n^3+35n^2+50n+24=-10/n+10n^4+100n^3+350n^2+500n+240-10n^4-35n^3-50n^2-24n/n^5+10n^4+35n^3+50n^2+24n=-10/n+65n^3+300n^2+476n+240/n^5+10n^4+35n^3+50n^2+24n=-10/n+65/n^2+O(n^-3). E[U_()^3U_()]=n!/(n+4)!(+2)! (+3)!/(-1)! (+2)!=n^4/(n+1)(n+2)(n+3)(n+4)(+1)(+2)(+3)/n^4=(1-10/n+65/n^2+O(n^-3))(+/n)(++1/n)(++2/n)×(++3/n)=^3+1/n(-10^3+^2+(+1)^2+(+2)^2+(+3)^3)+1/n^2(-10^2-10(+1)^2-10(+2)^2-10(+3)^3+(+1)+(+2)+(+3)^2+(+1)(+2)+(+1)(+3)^2+(+2)(+3)^2+65^3)+O(n^-3)=^3+1/n(-10^3+(3+3)^2+(+3)^3)+1/n^2(65^3-10(+3)^3-(30+30)^2+(3+9+3+9)^2+(3^2+6+2))+O(n^-3). E[U_()U_()^3]=n!/(n+4)!! (+3)!/(-1)! !=n^4/(n+1)(n+2)(n+3)(n+4)(+1)(+2)(+3)/n^4=(1-10/n+65/n^2+O(n^-3))(+/n)(++1/n)(++2/n)×(++3/n)=^3+1/n(-10^3+^3+(+1)^2+(+2)^2+(+3)^2)+1/n^2(-10^3-10(+1)^2-10(+2)^2-10(+3)^2+(+1)^2+(+2)^2+(+3)^2+(+1)(+2)+(+1)(+3)+(+2)(+3)+65^3)+O(n^-3)=^3+1/n(-10^3+^3+(3+6)^2)+1/n^2(65^3-10^3-(30+60)^2+(3+6)^2+(3^2+12+11))+O(n^-3). E[U_()^2U_()^2]=n!/(n+4)!(+1)! (+3)!/(-1)! (+1)!=n^4/(n+1)(n+2)(n+3)(n+4)(+1)(+2)(+3)/n^4=(1-10/n+65/n^2+O(n^-3))(+/n)(++1/n)(++2/n)×(++3/n)=^2^2+1/n(-10^2^2+^2+(+1)^2+(+2)^2+(+3)^2)+1/n^2(-10^2-10(+1)^2-10(+2)^2-10(+3)^2+(+1)^2+(+2)+(+3)+(+1)(+2)+(+1)(+3)+(+2)(+3)^2+65^2^2)+O(n^-3)=^2^2+1/n(-10^2^2+(2+5)^2+(2+1)^2)+1/n^2(65^2^2-(20+10)^2-(20+50)^2+(^2+5+6)^2+(^2+)^2+(4+10+2+5))+O(n^-3).From the moments of 's, we can calculate the moments of 's as follows.n^-1E[^2]=E[(-)^2]=E[^2-2+^2]=^2-2^2+^2+n^-1(-3^2+(2+1)+2^2-2)+n^-2(-3(2+1)+(+1)+7^2+2-2^2)+O(n^-3)=1/n(1-)+1/n^2(5^2-3-4+(+1))+O(n^-3). n^-1E[]=E[(-)(-)]=E[--+]=+n^-1(-3+++)+n^-2(-3-3(+1)+7+(+1))--n^-1(-+)-n^-2(-+)--n^-1(-+)-n^-2(-+)++O(n^-3)=n^-1(-+)+n^-2(5-2-2-3+(+1))+O(n^-3). n^-3/2E[^3]=E[(-)^3]=E[^3-3^2+3^2-^3]=^3+n^-1(-6^3+3^2+3^2)+n^-2(25^3-(18+18)^2+(3^2+6+2))-3^3+n^-1(9^3-6^2-3^2)+n^-2(18^2+9^2-3(+1)-21^3)+3^3+n^-1(-3^3+3^2)+n^-2(-3^2+3^3)-^3 +O(n^-3)=n^-2(7^3-3^2(+3)+(3+2))+O(n^-3). n^-3/2E[^2]=E[(-)^2(-)]=E[^2-^2-2+2+^2-^2]=^2+n^-1(-6^2+(2+1)+(+2)^2)+n^-2(25^2-(6+12)^2-(12+6) +(2+4++2)+(^2+))-^2+n^-1(3^2-(2+1))+n^-2(3(2+1)-(+1)-7^2)-2^2+n^-1(6^2-2-2(+1)^2)+n^-2(6+6(+1)^2-2(+1)-14^2)+2^2+n^-1(-2^2+2)+n^-2(-2+2^2)+^2+n^-1(-^2+^2)+n^-2(-^2+^2)-^2+O(n^-3)=n^-2(7^2-(+6)^2-(2+3)+(2++2))+O(n^-3). n^-3/2E[^2]=E[(-)(-)^2]=E[^2-^2-2+2+^2-^2]=^2+n^-1(-6^2+^2+(2+3))+n^-2(-6^2-(12+18)+(^2+3+2)+(2+3)+25^2)-^2+n^-1(3^2-(2+1))+n^-2(6+3-(+1)-7^2)-2^2+n^-1(6^2-2^2-2(+1))+n^-2(6^2+6(+1)-2(+1)-14^2)+2^2+n^-1(-2^2+2)+n^-2(-2+2^2)+^2+n^-1(-^2+^2)+n^-2(-^2+^2)-^2+O(n^-3)=n^-2(^2(25-7-14+2+1)+^2(-6+6-)+(-12-18+6+3+6+6-2)+(2+3-2-2)+(^2+3+2-^2-))=n^-2(7^2-^2-(2+9)++(2+2))+O(n^-3). n^-2E[^4]=E(-)^4]=E[^4-4^3+6^2^2-4^3+^4]=^4+n^-1(-10^4+(4+6)^3)+n^-2(65^4+(-40-60)^3+(6^2+18+11)^2)-4^4+n^-1(24^4-12^3-12^3)+n^-2(-100^4+(72+72)^3+(-12^2-24-8)^2)+6^4+n^-1(-18^4+(12+6)^3)+n^-2(42^4+(-36-18)^3+6(^2+)^2)-4^4+n^-1(4^4-4^3)+n^-2(4^3-4^4)+^4+O(n^-3)=n^-2(3^4-6^3+3^2)+O(n^-3). 
n^-2E[^3]=E[(-)^3(-)]=E[(^3-3^2+3^2-^3)(-)]=E[^3-^3-3^2+3^2+3^2-3^2-^3+^3]=^3+n^-1( -10^3+(+3)^3+(3+3)^2)+n^-2(65^3+(-10-30)^3+(-30-30)^2+(3+9+3+9)^2+(3^2+6+2))-^3+n^-1(6^3+(-3-3)^2)+n^-2(-25^3+(18+18)^2+(-3^2-6-2))-3^3+n^-1(18^3+(-3-6)^3+(-6-3)^2)+n^-2(-75^3+(18+36)^3+(-6-12-3-6)^2+(36+18)^2+(-3^2-3))+3^3+n^-1(-9^3+(6+3)^2)+n^-2(21^3+(-18-9)^2+(3^2+3))+3^3+n^-1(-9^3+(3+3)^3+3^2)+n^-2(21^3+(-9-9)^3+(3+3)^2-9^2)-3^3+n^-1(3^3-3^2)+n^-2(-3^3+3^2)-^3+n^-1(^3-^3)+n^-2(-^3+^3)+^3+O(n^-3)=n^-2(3^3-3^3-3^2+3^2)+O(n^-3). n^-2E[^3]=E[(-)^3(-)]=E[(^3-3^2+3^2-^3)(-)]=E[^3-^3-3^2+3^2+3^2-3^2-^3+^3]=^3+n^-1( -10^3+^3+(3+6)^2)+n^-2(65^3-10^3-(30+60)^2+(3+6)^2+(3^2+12+11))-^3+n^-1(6^3-(3+3)^2)+n^-2((18+18)^2-(3^2+6+2)-25^3)-3^3+n^-1(18^3-3^3-3(2+3)^2)+n^-2(-75^3+18^3+(36+54)^2-(3^2+9+6)-(6+9)^2)+3^3+n^-1(-9^3+(6+3)^2)+n^-2(21^3-(18+9)^2+(3^2+3))+3^3+n^-1(-9^3+3^3+(3+3)^2)+n^-2(21^3-9^3-(9+9)^2+(3+3)^2)-3^3+n^-1(3^3-3^2)+n^-2(3^2-3^3)-^3+n^-1(^3-^3)+n^-2(^3-^3)+^3+O(n^-3)=n^-2(3^3-6^2+3)+O(n^-3). n^-2E[^2^2]=E[(-)^2(-)^2]=E[(^2-2+^2)(^2-2+^2)]=E[^2^2-2^2+^2^2-2^2+4-2^2+^2^2-2^2+^2^2]=^2^2+n^-1(-10^2^2+(2+5)^2+(2+1)^2)+n^-2(65^2^2-(20+10)^2-(20+50)^2+(^2+5+6)^2+(^2+)^2+(4+10+2+5))-2^2^2+n^-1(12^2^2-(4+2)^2-(2+4)^2)+n^-2(-50^2^2+(12+24)^2+(24+12)^2-(4+8+2+4)-(2^2+2)^2)+^2^2+n^-1(-3^2^2+(2+1)^2)+n^-2(7^2^2-(6+3)^2+(^2+)^2)-2^2^2+n^-1(12^2^2-2^2-(4+6)^2)+n^-2(-50^2^2+12^2+(24+36)^2-(2^2+6+4)^2-(4+6))+4^2^2+n^-1(-12^2^2+4^2+(4+4)^2)+n^-2(-12^2-(12+12)^2+(4+4)+28^2^2)-2^2^2+n^-1(2^2^2-2^2)+n^-2(2^2-2^2^2)+^2^2+n^-1(-3^2^2+(2+1)^2)+n^-2((-6-3)^2+(^2+)^2+7^2^2)-2^2^2+n^-1(2^2^2-2^2)+n^-2(2^2-2^2^2)+^2^2+O(n^-3)=n^-2(3^2^2-5^2-^2+2^2+)+O(n^-3).Now we are ready to calculate (<ref>),(<ref>) and (<ref>). From (<ref>) and n^-1(E[^2]+E[^2]-2E[])=n^-1((1-)+(1-)-2(1-)) +n^-2(5^2-3-4+(+1)+5^2-3-4+(+1)-10+6+4+4-2(1+))+O(n^-3)=n^-1(-+-(^2+^2-2))+n^-2(5(-)^2-3(-)-4(-)(-)+(+1)+(+1)-2(1+))+O(n^-3)=n^-1(m_i-m_i^2)+n^-2(5m_i^2-3m_i-4(-)m_i+(-)(-+1))+O(n^-3),wehave the result (<ref>) as follows;∑_i=0^p m_i E[R_i^2]=∑_i=0^p{n^-1(1-m_i)+n^-2(5m_i-3-4(-)+(-)(-+1)m_i^-1)}+O(n^-3)=n^-1∑_i=0^p(1-m_i)+n^-2(∑_i=0^p(5m_i-3)-4∑_i=0^p(-)+∑_i=0^p(-)(-+1)m_i^-1)+O(n^-3)=n^-1∑_i=0^p(1-m_i)+n^-2(5-3(p+1)-4+∑_i=0^p(-)(-+1)m_i^-1)+O(n^-3)=n^-1p+n^-2(-2-3p+∑_i=0^p(-)(-+1)m_i^-1)+O(n^-3),where we used the fact ∑_i=0^p m_i=1, r_0=0 and r_p+1=1.From (<ref>) andn^-3/2(E[^3]-3E[^2]+3E[^2]-E[^3])=n^-2(7^3-3^2(+3)+(3+2)-21^2+3^2+(6+27)-3-(6+6)+21^2-(3+18)^2-(6+9)+(6+3+6)-7^3+3^2(+3)-(3,+2))+O(n^-3)=n^-2[7(-)^3+3(^2+^2)(--3)+6(-+3)+(3(-)+2)(-)]+O(n^-3)=n^-2[7(-)^3+3(--3)(-)^2+(3(-)+2)(-)]+O(n^-3),we have the result (<ref>) as follows;∑_i=0^p m_i E[R_i^3]=n^-2[∑_i=0^p{7m_i+3(--3)+(3(-)+2)m_i^-1}]+O(n^-3)=n^-2[∑_i=0^p(7m_i-9)+3∑_i=0^p(-)+∑_i=0^p(3(-)+2))m_i^-1]+O(n^-3)=n^-2[7-9(p+1)-3+∑_i=0^p(3(-)+2))m_i^-1]+O(n^-3)=n^-2[-5-9p+∑_i=0^p(3(-)+2)m_i^-1]+O(n^-3). From (<ref>) andn^-2(E[^4]-4E[^3]+6E[^2^2]-4E[^3]+E[^4])=n^-2(3^4-6^3+3^2-12^3+24^2-12+18^2^2-30^2-6^2+12^2+6-12^3+12^3+12^2-12^2+3^4-6^3+3^2)+O(n^-3)=n^-2(3(^4-4^3+6^2^2-4^3+^4)-6(^3-3^2+3^2-^3)+3(^2-2+^2))+O(n^-3)=n^-2(3(-)^4-6(-)^3+3(-)^2)+O(n^-3),we have the result (<ref>) as follows;∑_i=0^p m_i E[R_i^4]=n^-2[∑_i=0^p(3m_i-6+3m_i^-1)]+O(n^-3)=n^-2[3-6(p+1)+3∑_i=0^p m_i^-1]+O(n^-3)=n^-2[-3-6p+3∑_i=0^p m_i^-1]+O(n^-3).–Proof of (<ref>), (<ref>), (<ref>)– For 1≤ i ≤ p, E[] =r̅_i(1+r̅_i)+(1+r̅_i)(-r̅_i)=0,E[^2] =r̅_i^2(1+r̅_i)-(1+r̅_i)^2r̅_i=(1+r̅_i)r̅_i(r̅_i-(1+r̅_i))=-r̅_i(1+r̅_i),while E[r_0]=0, E[r_p+1]=1, E[r_0^2]=0,E[r_p+1^2]=1 is obvious from r_i≡ 0, r_p+1≡ 1. 
(<ref>) is proved from (<ref>) and the following equation; for 1≤ i ≤ p-1, E[(1-)] =(1-)(1+)(1+)+(-)(1+)(-)+(1+)(1-)(-)(1+)+(1+)(-)(-)(-)=0.

§ REFERENCES

[Amari4] S. Amari. Information Geometry and Its Applications. Springer, 2016.
[Amari Cichocki] S. Amari and A. Cichocki. Information geometry of divergence functions. Bulletin of the Polish Academy of Sciences: Technical Sciences, 58:183-195, 2010.
[Amari Nagaoka] S. Amari and H. Nagaoka. Methods of Information Geometry. Translations of Mathematical Monographs 191. American Mathematical Society, 2000.
[David Nagaraja] H. A. David and H. N. Nagaraja. Order Statistics, 3rd ed. Wiley, 2003.
[Drezner Zerom] Z. Drezner and D. Zerom. A simple and effective discretization of a continuous random variable. Communications in Statistics – Simulation and Computation, 45:3798-3810, 2016.
[Lehmann] E. L. Lehmann. Elements of Large Sample Theory. Springer, 1999.
[Sheena] Y. Sheena. Asymptotic expansion of the risk of maximum likelihood estimator with respect to α-divergence as a measure of the difficulty of specifying a parametric model. To be published in Communications in Statistics – Theory and Methods.
[Sheena_2] Y. Sheena. Asymptotic Expansion of Risk for a Regression Model with respect to α-Divergence with an Application to the Sample Size Problem. Far East Journal of Theoretical Statistics, 53:187-230, 2017.
[Vajda] I. Vajda. Theory of Statistical Inference and Information. Kluwer Academic Publishers, 1989.
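The moment identities for the randomized index choice used above, E[r_i] = 0 and E[r_i^2] = -r̅_i(1+r̅_i) with r̅_i = [nλ_i] - nλ_i, can also be checked by simulation. The following Python sketch is only an illustration; the particular n, λ and number of Monte Carlo draws are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 997, 0.3                        # n*lam is non-integer, so rbar != 0
rbar = np.floor(n * lam) - n * lam       # rbar lies in [-1, 0]

# randomized choice: r = rbar with prob. 1+rbar,  r = 1+rbar with prob. -rbar
u = rng.random(200_000)
r = np.where(u < 1 + rbar, rbar, 1 + rbar)

print(r.mean())                              # close to 0
print((r**2).mean(), -rbar * (1 + rbar))     # the two values should agree
```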
http://arxiv.org/abs/1709.09520v2
{ "authors": [ "Yo Sheena" ], "categories": [ "math.ST", "stat.TH" ], "primary_category": "math.ST", "published": "20170927135127", "title": "Estimation of a Continuous Distribution on a Real Line by Discretization Methods -- Complete Version--" }
[][email protected]; the corresponding author Laboratory of Artificial Quantum Systems, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Russia [][email protected] Physics Department, Royal Holloway, University of London, Egham, Surrey TW20 0EX, United Kingdom Laboratory of Artificial Quantum Systems, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Russia [][email protected] Physics Department, Royal Holloway, University of London, Egham, Surrey TW20 0EX, United Kingdom Laboratory of Artificial Quantum Systems, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, Russia [][email protected] National Physical Laboratory, Teddington, TW11 0LW, United Kingdom Physics Department, Royal Holloway, University of London, Egham, Surrey TW20 0EX, United Kingdom [][email protected]; the corresponding author Physics Department, Royal Holloway, University of London, Egham, Surrey TW20 0EX, United Kingdom National Physical Laboratory, Teddington, TW11 0LW, United Kingdom Laboratory of Artificial Quantum Systems, Moscow Institute of Physics and Technology, 141700 Dolgoprudny, RussiaSuperconducting quantum systems (artificial atoms) have been recently successfully used to demonstrate on-chip effects of quantum optics with single atoms in the microwave range.In particular, a well-known effect of four-wave mixing could reveal a series of features beyond classical physics, when a non-linear medium is scaled down to a single quantum scatterer.Here we demonstrate a phenomenon of the quantum wave mixing (QWM) on a single superconducting artificial atom. In the QWM, the spectrum of elastically scattered radiation is a direct map of the interacting superposed and coherent photonic states. Moreover, the artificial atom visualises photon-state statistics, distinguishing coherent, one- and two-photon superposed states with the finite (quantized) number of peaks in the quantum regime. Our results may give a new insight into nonlinear quantum effects in microwaveoptics with artificial atoms. Quantum wave mixing and visualisation of coherent and superposed photonic states in a waveguide O.V.Astafiev December 30, 2023 ===============================================================================================Introduction In systems with superconducting quantum circuits – artificial atoms – strongly coupled to harmonic oscillators, many amazing phenomena of on-chip quantum optics have been recently demonstrated establishing the direction of circuit quantum electrodynamics <cit.>. Particularly, in such systems one is able to resolve photon number states in harmonic oscillators <cit.>, manipulate with individual photons <cit.>, generate photon (Fock) states <cit.> and arbitrary quantum states of light <cit.>, demonstrate the lasing effect from a single artificial atom <cit.>, study nonlinear effects<cit.>. The artificial atoms can also be coupled to open space <cit.>(microwave transmission lines) and also reveal many interesting effects such as resonance fluorescence of continuous waves<cit.>, elastic and inelastic scattering of single-frequency electromagnetic waves<cit.>, amplification <cit.>, single-photon reflection and routing <cit.>, non-reciprocal transport of microwaves <cit.>, coupling of distant artificial atoms by exchanging virtual photons <cit.>, superradiance of coupled artificial atoms <cit.>. 
All these effects require strong coupling to propagating waves and therefore are hard to demonstrate in quantum optics with natural atoms due to low spatial mode matching of propagating light.In our work, we focus on the effect of wave mixing. Particularly, the four wave mixing is a textbook optical effect manifesting itself in a pair of frequency side peaks from two driving tones on a classical Kerr-nonlinearity<cit.>. Ultimate scaling down of the nonlinear medium to a single artificial atom, strongly interacting with the incident waves, results in time-resolution of instant multi-photon interactions and reveals effects beyond classical physics.Here, we demonstrate a physical phenomenon of Quantum Wave Mixing (QWM) on a superconducting artificial atom in the open 1D space (coplanar transmission line on-chip). We show two regimes of QWM comprising different degrees of "quantumness": The first and most remarkable one is QWM with nonclassical superposed states, which are mapped into a finite number of frequency peaks. In another regime, we investigate the different orders of wave mixing of classical coherent waves on the artificial atom.The dynamics of the peaks exhibits a series of Bessel-function Rabi oscillations, different from the usually observed harmonic ones, with orders determined by the number of interacting photons. Therefore, the device utilising QWM visualises photon-state statistics of classical and non-classical photonic states in the open space.The spectra are fingerprints of interacting photonic states, where the number of peaks due to the atomic emission always exceeds by one the number of absorption peaks.Below, we summarise several specific findings of this work: (1) Demonstration of the wave mixing on a single quantum system. (2) In the quantum regime of mixing, the peak pattern and the number of the observed peaks is a map of coherentand superposed photonic states, where the number of peaks N_ peaks is related to the number of interacting photons N_ ph as N_ peaks = 2 N_ ph+1. Namely, the one-photon state (in two-level atoms) results in precisely three emission peaks; the two-photon state (in three-level atoms) results in five emission peaks; and the classical coherent states, consisting of infinite number of photons, produce a spectrum with an infinite number of peaks.(3) Bessel function Rabi oscillations are observed and the order of the Bessel functions depends on the peak position and is determined by the number of interacting photons. ResultsCoherent and zero-one photon superposed state To evaluate the system we consider electromagnetic waves propagating in a 1D transmission line with an embedded two-level artificial atom <cit.> (see also Supplementary Methods, Supplementary Figure 1)shown in Fig. <ref>a.In this work, we are interested in photon statistics, which will be revealed by QWM, therefore, we will consider our system in the photon basis. The coherent wave in the photon (Fock) basis |N⟩ is presented as |α⟩ = e^-|α|^2/2(|0⟩ + α |1⟩ + α^2/√(2!) |2⟩ + α^3/√(3!) |3⟩ +...)and consists of an infinite number of photonic states. 
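As a purely numerical illustration (not taken from the experiment), the Fock-basis weights of the coherent state |α⟩ follow a Poisson distribution, so arbitrarily many photon-number states contribute; this contrasts with the zero-one superposed state introduced in the next paragraph, whose weight is confined to n = 0 and n = 1. The following Python sketch compares the two; the truncation, the mixing angle θ = π/2 and the function names are our own assumptions.

```python
import numpy as np
from math import factorial

def coherent_probs(alpha, nmax=10):
    """Photon-number distribution of |alpha>: Poissonian, infinitely many Fock terms."""
    n = np.arange(nmax + 1)
    amps = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(int(k)) for k in n])
    return np.abs(amps)**2

def zero_one_probs(theta):
    """Photon-number distribution of the zero-one state emitted by a two-level atom
    prepared with mixing angle theta (weights cos^2(theta/2) and sin^2(theta/2))."""
    p = np.array([np.cos(theta / 2)**2, np.sin(theta / 2)**2])
    return p / p.sum()

print(coherent_probs(1.0)[:4])    # non-negligible weight on n = 0, 1, 2, 3, ...
print(zero_one_probs(np.pi / 2))  # weight only on n = 0 and n = 1
```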
A two-level atom with ground and excited states |g⟩ and |e⟩ driven by the field can be prepared in superposed state Ψ = cosθ/2 |g⟩ + sinθ/2 |e⟩ and, if coupled to the external photonic modes, transfers the excitation to the mode, creating zero-one photon superposed state |β⟩ = |cosθ/2| (|0⟩ + β |1⟩),where β = tanθ/2 (see Supplementary Note 1) .The superposed state comprises coherence, however |β⟩ state is different from classical coherent state |α⟩, consisting of an infinite number of Fock states.The energy exchange process is described by the operatorb^- b^+ |g⟩⟨ g| + b^+ |g⟩⟨ e|, which maps the atomic to photonic states, where b^+ = |1⟩⟨ 0| and b^- = |0⟩⟨ 1| are creation/annihilation operators of the zero-one photon state. The operator is a result of a half-period oscillation in the evolution of the atom coupled to the quantised photonic mode and we keep only relevant for the discussed case (an excited atom and an empty photonic mode) terms (see Supplementary Note 1).We discuss and demonstrate experimentally an elastic scattering of two waves with frequencies ω_- = ω_0 - δω and ω_+ = ω_0 + δω, where δω is a small detuning, on a two-level artificial atom with energy splitting ħω_0. The scattering, taking place on a single artificial atom, allows us to resolve instant multi-photon interactions and statistics of the processes.Dealing with the final photonic states, the system Hamiltonian is convenient to present as the one, which couples the input and output fields H = i ħ g (b^+_- a_- - b^-_- a_-^† + b^+_+ a_+ - b^-_+ a_+^†),using creation and annihilation operators a_±^† (a_±) of photon states |N⟩_± (N is an integer number) and b^+_± and b^-_± are creation/annihilation operators of single-photon output statesat frequencies ω_±. Here ħ g is the field-atom coupling energy. Operators b^+_± and b^-_± also describe the atomic excitation/relaxation, using substitutions b^+_±↔ e^∓ i φ |e⟩⟨ g| and b^-_±↔ e^± i φ |g⟩⟨ e|, where φ = δω t is a slowly varying phase (see Supplementary Note 2). The phase rotation results in the frequency shift according to ω_± t = ω_0 t ±δω t and more generally for b_m^± (with integer m) the varied phase mδφ results in the frequency shift ω_m = ω_0 + mδω. The system evolution over the time interval [t, t'] (t' = t + Δ t and δωΔ t ≪ 1) described by the operator U(t,t') = exp(-i/ħ HΔ t) can be presented as a series expansion of different order atom-photon interaction processes a^†_± b^-_± and a_± b^+_± – sequential absorption-emission accompanied by atomic excitations/relaxations (see Supplementary Note 2).Operators b describe the atomic states (instant interaction of the photons in the atom) and, therefore, satisfy the following identities: b^-_p b^+_m = |0⟩_m-p⟨ 0|, b^±_j b^∓_p b^±_m = b^±_j-p+m, b^±_p b^±_m = 0.The excited atom eventually relaxes producing zero-one superposied photon field |β⟩_m at frequency ω_m = ω_0+mδω according to b^+_m |0⟩ = |1⟩_m. We repeat the evolution and average the emission on the time interval t > δω^-1and observe narrow emission lines.In the general case, the atom in a superposed state generates coherent electromagnetic waves of amplitude V_m = -ħΓ_1/μ⟨ b^+_m⟩at frequency ω_m, where Γ_1 is the atomic relaxation rate and μ is the atomic dipole moment <cit.>. Elastic scattering and Bessel function Rabi oscillationsTo study QWM we couple the single artificial atom (a superconducting loop with four Josephson junctions) to a transmission line via a capacitance (see Supplementary Methods). 
The atom relaxes with the photon emission rate found to be Γ_1/2π≈ 20 MHz. The coupling is strong, which means that any non-radiative atom relaxation is suppressed and almost all photons from the atom are emitted into the line.The sample is held in a dilution refrigerator with base temperature 15 mK. We apply periodically two simultaneous microwave pulses with equal amplitudes at frequencies ω_- and ω_+, length Δ t = 2 ns and period T_ r = 100 ns (much longer than the atomic relaxation time Γ_1^-1≈ 8 ns).A typical emission power spectrum integrated over many periods (bandwidth is 1 kHz) is shown in Fig. <ref>a. The pattern is symmetric with many narrow peaks (as narrow as the excitation microwaves), which appeared at frequencies ω_0 ± (2k+1)δω, where k ≥ 0 is an integer number. We linearly change driving amplitude (Rabi frequency) Ω, which is defined from the measurement of harmonic Rabi oscillations under single-frequency excitation. The dynamics of several side peaks versus linearly changed ΩΔ t (here we vary Ω, however, equivalently Δ t can be varied) is shown on plots of Fig. <ref>b. Note that the peaks exhibit anharmonic oscillations well fitted by the corresponding 2k+1-order Bessel functions of the first kind. The first maxima are delayed with the peak order, appearing at ΩΔ t ∝ k+1. Note also that detuning δω should be within tens of megahertz (≤Γ_1). However, in this work we use δω/2π = 10 kHz to be able to quickly span over several δω of the SA with the narrow bandwidth. Figure <ref>b examplifies the third-order process (known as the four-wave mixing in the case of two side peaks), resulting in the creation of the right hand-side peak at ω_3 = 2ω_+ - ω_-. The process consists of the absorption of two photons of frequency ω_+ and the emission of one photon at ω_-. More generally, the 2k+1-order peak at frequency ω_2k+1 = (k+1)ω_+ -kω_- (≡ω_0 + (2k+1)δω)is described by the multi-photon process (a_+ a^†_-)^k a_+ b^+_2k+1, which involves the absorption of k+1 photons from ω_+ and the emission of k photons at ω_-; and the excited atom eventually generates a photon at ω_2k+1. The symmetric left hand-side peaks at ω_0 - (2k+1)δω are described by a similar processes with swapped indexes (+ ↔ -). The peak amplitudes from Eq. (<ref>) are described by expectation values of b-operators, which at frequency ω_2k+1 can be written in the form of ⟨ b_2k+1^+⟩ = D_2k+1⟨ (a_+ a^†_- )^k a_+ ⟩. The prefactor D_2k+1 depends on the driving conditions and can be calculated summing up all virtual photon processes (e.g. a^†_+ a_+, a^†_- a_-, etc.) not changing frequencies (Supplementary Note 2).For instance, the creation of a photon at 2ω_+ - ω_- is described by ⟨ b^+_3 ⟩ = D_3 ⟨ a_+ a^†_- a_+⟩.As the number of required photons increases with k, the emission maximum takes longer time to appear (Fig. <ref>b).To derive the dependence observed in our experiment, we consider the case with initial state Ψ = |0⟩⊗ (|α⟩_- + |α⟩_+) and α≫ 1. We find then that the peaks exhibit Rabi oscillations described by ⟨ b_2k+1⟩ = (-1)^k/2× J_2k+1(2ΩΔ t)(Supplementary Note 2, Eq. (29)) and the mean number of generated photons per cycle in 2k+1-mode is ⟨ N_±(2k+1)⟩ = J^2_±(2k+1)(2ΩΔ t)/4. The symmetric multi-peak pattern in the spectrum is a map of an infinite number of interacting classical coherent states. The dependence from the parameter 2ΩΔ t observed in our experiment can also be derived using a semiclassical approach, where the driving field is given by Ω e^iδω t + Ω e^-iδω t = 2Ωcosδω t. 
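The Bessel-function dependence quoted above can be tabulated directly. The following Python sketch (assuming SciPy is available; the names and the scan range are ours) evaluates ⟨N_(2k+1)⟩ = J_{2k+1}(2ΩΔt)^2/4 and locates the first maxima, which shift to larger pulse area with the peak order k, consistent with the delayed maxima of the higher-order side peaks.

```python
import numpy as np
from scipy.special import jv

def side_peak_photons(k, omega_rabi, dt):
    """Mean photon number emitted per cycle into the peak at omega_0 + (2k+1)*delta_omega,
    <N_(2k+1)> = J_{2k+1}(2*Omega*dt)^2 / 4, for two equal-amplitude coherent drives."""
    return jv(2 * k + 1, 2 * omega_rabi * dt)**2 / 4

x = np.linspace(0.0, 12.0, 601)          # x = 2*Omega*dt (dimensionless pulse area)
for k in range(3):
    peak = x[np.argmax(jv(2 * k + 1, x)**2)]
    print(f"order 2k+1 = {2 * k + 1}: first maximum near 2*Omega*dt = {peak:.1f}")
# expected output: maxima near 1.8, 4.2 and 6.4, i.e. the first maximum moves to
# larger pulse area roughly linearly with k (more photons are needed for higher orders)
```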
As shown in Supplementary Note 2, a classical description can be mathematically more straightforward and leads to the same result, but fails to provide a qualitative picture of QWM discussed below. The Bessel function dependencies have been earlier observed in multi-photon processes, however in frequency domain <cit.>. QWM and dynamics of non-classical photon statesNext, we demonstrate one of the most interesting results: QWM with non-classical photonic states.We further develop the two-pulse technique separating the excitation pulses in time. Breaking time-symmetry in the evolution of the quantum system should result in asymmetric spectra and the observation of series of spectacular quantum phenomena.The upper panel in Fig. <ref>a demonstrates such a spectrum, when the pulse at frequency ω_+ is applied after a pulse at ω_-. Notably, the spectrum is asymmetric and contains only one side peak at frequency 2ω_+ - ω_-. There is no any signature of other peaks, which is in striking contrast with Fig. <ref>a. Reversing the pulse sequence mirror-reflects the pattern revealing the single side peak at 2ω_- - ω_+ (not shown here). The quantitative explanation of the process is provided on the left panel of Fig. <ref>c. The first pulse prepares superposed zero-one photon state |β⟩_- in the atom, which contains not more than one photon(N_ ph = 1). Therefore, only a single positive side peak 2ω_+ - ω_- due to the emission of the ω_--photon, described by a_+ a^†_- a_+, is allowed. See Supplementary Note 3 for details. To prove that there are no signatures of other peaks, except for the observed three peaks, we vary the peak amplitudes and compare the classical and quantum wave mixing regimes with the same conditions.Figure <ref>b demonstrates the side peak power dependencies in different mixing regimes: classical (two simultaneous pulses) (left panels) and quantum (two consecutive pulses) (right panels). The two cases reveal a very similar behaviour of the right hand-side four-wave mixing peak at 2ω_+ - ω_-, however the other peaks appear only in the classical wave mixing, proving the absence of other peaks in the mixing with the quantum state. The asymmetry of the output mixed signals, in principle, can be demonstrated in purely classical systems. It can be achieved in several ways, e.g. with destructive interference, phase-sensitive detection/amplification<cit.>, filtering. All these effects are not applicable to our system of two mixed waves on a single point-like scatterer in the open (wide frequency band) space. What is more important than the asymmetry is that the whole pattern consists of only three peaks without any signature of others. This demonstrates another remarkable property of our device: it probes photonic states, distinguishing the coherent, |α⟩, and superposed states with the finite number of the photon states.Moreover, the single peak at ω_3 shows that the probed state was |β⟩ with N_ ph = 1. This statement can be generalised for an arbitrary state. According to the picture in Fig. 
<ref>c, adding a photon increases the number of peaks from the left and right-hand side by one, resulting in the total number of peaks N_ peaks = 2N_ ph+1.Probing the two-photon superposed stateTo have a deeper insight into the state-sensing properties and to demonstrate QWM with different photon statistics, we extended our experiment to deal with two-photon states (N_ ph = 2).The two lowest transitions in our system can be tuned by adjusting external magnetic fields to be equal to ħω_0, though higher transitions are off-resonant (≠ħω_0, See Supplementary Figure 2). In the three-level atom, the microwave pulse at ω_- creates the superposed two-photon state |γ⟩_- = C(|0⟩_- + γ_1 |1⟩_- + γ_2 |2⟩_-), where C = √(1+|γ_1|^2+|γ_2|^2). The plot in Fig. <ref> shows the modified spectrum. As expected, the spectrum reveals only peaks at frequencies consisting of one or two photons of ω_-.The frequencies are ω_3 = 2ω_+ - ω_-, ω_-3 = 2ω_- - ω_+,and ω_5 = 3ω_+ - 2ω_- corresponding, for instance, to processes a_+ a_-^† a_+ c^+_3, a_- a_- a_+^† c^+_-3 and a_+ a_-^† a_-^† a_+a_+ c^+_5, where c^+_m and c^-_m are creation and annihilation operators defined on the two-photon space (|n⟩, where n takes 0, 1 or 2). The intuitive picture of the two-photon state mixing is shown on the central and right-hand side panels of Fig. <ref>c. The two photon state (N_ ph=2) results in the five peaks.This additionally confirms that the atom resolves the two-photon state. See Supplementary Note 4 for the details.The QWM can be also understood as a transformation of the quantum states into quantised frequencies similar to the Fourier transformation. The summarised 2D plots with N_ ph are presented in Fig. <ref>. The mixing with quantum states is particularly revealed in the asymmetry.Note that for arbitrary N_ ph coherent states, the spectrum asymmetry will remain, giving N_ ph and N_ ph-1 peaks at the emission and absorption sides. According to our understanding, QWM has not been demonstrated in systems other than superconducting quantum ones due to the following reasons.First, the effect requires a single quantum system because individual interaction processes have to be separated in time <cit.> and it will be washed out in multiple scattering on an atomic ensemble in matter.Next, although photon counters easily detect single photons, in the visible optical range, it might be more difficult to detect amplitudes and phases of weak power waves <cit.>. On the other hand, microwave techniques allow one to amplify and measure weak coherent emission from a single quantum system <cit.> due to strong coupling of the single artificial atom; the confinement of the radiation in the transmission line; and due to an extremely high phase stability of microwave sources. The radiation can be selectively detected by either spectrum analysers (SA) or vector network analysers (VNA) with narrow frequency bandwidths, efficiently rejecting the background noise.In summary, we have demonstrated quantum wave mixing – an interesting phenomenon of quantum optics.We explore different regimes of QWM and prove that the superposed and coherent states of light are mapped into a quantised spectrum of narrow peaks. The number of peaks is determined by the number of interacting photons. QWM could serve as a powerful tool for building new types of on-chip quantum electronics.Data availability. Relevant data is available from A.Yu.D. upon request.Author contributions. O.V.A. planned and designed the experiment, R.Sh., A.Yu.D. and T.H-D. 
fabricated the sample and built the measurement set-up. A.Yu.D., R.Sh. and T.H-D. measured the raw data. A.Yu.D., V.N.A. and O.V.A. performed the calculations, analysed and processed the data, and wrote the manuscript.

Acknowledgements. We acknowledge the Russian Science Foundation (grant No. 16-12-00070) for supporting this work. We thank A. Semenov and E. Ilichev for useful discussions.

Competing interests. The authors declare no competing financial interests.

§ REFERENCES

[1] Clarke, J. & Wilhelm, F. K. Superconducting quantum bits. Nature 453, 1031-1042 (2008).
[2] Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162-167 (2004).
[3] You, J. Q. & Nori, F. Atomic physics and quantum optics using superconducting circuits. Nature 474, 589-597 (2011).
[4] Schuster, D. I. et al. Resolving photon number states in a superconducting circuit. Nature 445, 515-518 (2007).
[5] Peng, Z., De Graaf, S., Tsai, J. & Astafiev, O. Tuneable on-demand single-photon source in the microwave range. Nature Communications 7, 12588 (2016).
[6] Houck, A. A. et al. Generating single microwave photons in a circuit. Nature 449, 328-331 (2007).
[7] Lang, C. et al. Correlations, indistinguishability and entanglement in Hong-Ou-Mandel experiments at microwave frequencies. Nature Physics 9, 345-348 (2013).
[8] Hofheinz, M. et al. Generation of Fock states in a superconducting quantum circuit. Nature 454, 310-314 (2008).
[9] Hofheinz, M. et al. Synthesizing arbitrary quantum states in a superconducting resonator. Nature 459, 546-549 (2009).
[10] Astafiev, O. et al. Single artificial-atom lasing. Nature 449, 588-590 (2007).
[11] Hoi, I.-C. et al. Giant cross-Kerr effect for propagating microwaves induced by an artificial atom. Phys. Rev. Lett. 111, 053601 (2013).
[12] Kirchmair, G. et al. Observation of quantum state collapse and revival due to the single-photon Kerr effect. Nature 495, 205 (2013).
[13] Roy, D., Wilson, C. M. & Firstenberg, O. Colloquium: strongly interacting photons in one-dimensional continuum. Rev. Mod. Phys. 89, 021001 (2017).
[14] Hoi, I.-C. et al. Microwave quantum optics with an artificial atom in one-dimensional open space. New Journal of Physics 15, 025011 (2013).
[15] Astafiev, O. et al. Resonance fluorescence of a single artificial atom. Science 327, 840-843 (2010).
[16] Toyli, D. M. et al. Resonance fluorescence from an artificial atom in squeezed vacuum. Phys. Rev. X 6, 031004 (2016).
[17] Abdumalikov Jr, A. A., Astafiev, O. V., Pashkin, Y. A., Nakamura, Y. & Tsai, J. Dynamics of coherent and incoherent emission from an artificial atom in a 1D space. Phys. Rev. Lett. 107, 043604 (2011).
[18] Astafiev, O. V. et al. Ultimate on-chip quantum amplifier. Phys. Rev. Lett. 104, 183603 (2010).
[19] Hoi, I.-C., Wilson, C. M., Johansson, G., Palomaki, T., Peropadre, B. & Delsing, P. Demonstration of a single-photon router in the microwave regime. Phys. Rev. Lett. 107, 073601 (2011).
[20] Fang, Y.-L. L. & Baranger, H. U. Multiple emitters in a waveguide: nonreciprocity and correlated photons at perfect elastic transmission. Phys. Rev. A 96, 013842 (2017).
[21] van Loo, A. F., Fedorov, A., Lalumière, K., Sanders, B. C., Blais, A. & Wallraff, A. Photon-mediated interactions between distant artificial atoms. Science 342, 1494-1496 (2013).
[22] Mlynek, J., Abdumalikov, A. A., Eichler, C. & Wallraff, A. Observation of Dicke superradiance for two artificial atoms in a cavity with high decay rate. Nature Communications 5, 5186 (2014).
[23] Boyd, R. W. Nonlinear Optics (Academic Press, 2003).
[24] Scully, M. O. & Zubairy, M. Quantum Optics (Cambridge University Press, Cambridge, 1997).
[25] Oliver, W. D. et al. Mach-Zehnder interferometry in a strongly driven superconducting qubit. Science 310, 1653-1657 (2005).
[26] Sillanpää, M., Lehtinen, T., Paila, A., Makhlin, Y. & Hakonen, P. Continuous-time monitoring of Landau-Zener interference in a Cooper-pair box. Phys. Rev. Lett. 96, 187002 (2006).
[27] Neilinger, P. et al. Landau-Zener-Stückelberg-Majorana lasing in circuit quantum electrodynamics. Phys. Rev. B 94, 094519 (2016).
[28] Schackert, F., Roy, A., Hatridge, M., Devoret, M. H. & Stone, A. D. Three-wave mixing with three incoming waves: signal-idler coherent attenuation and gain enhancement in a parametric amplifier. Phys. Rev. Lett. 111, 073903 (2013).
[29] Maser, A., Gmeiner, B., Utikal, T., Götzinger, S. & Sandoghdar, V. Few-photon coherent nonlinear optics with a single molecule. Nature Photonics 10, 450-453 (2016).
[30] Lvovsky, A. I. & Raymer, M. G. Continuous-variable optical quantum-state tomography. Rev. Mod. Phys. 81, 299-332 (2009).
[31] Ip, E., Lau, A. P. T., Barros, D. J. & Kahn, J. M. Coherent detection in optical fiber systems. Optics Express 16, 753-791 (2008).
[32] Shen, J.-T. & Fan, S. Coherent single photon transport in a one-dimensional waveguide coupled with superconducting quantum bits. Phys. Rev. Lett. 95, 213001 (2005).
arXiv:1709.09588v1 [quant-ph] — A. Yu. Dmitriev, R. Shaikhaidarov, V. N. Antonov, T. Hönigl-Decrinis and O. V. Astafiev, "Quantum wave mixing and visualisation of coherent and superposed photonic states in a waveguide", http://arxiv.org/abs/1709.09588v1 (2017).
Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge^*

Ryota Hinami^1,2, Tao Mei^3, and Shin'ichi Satoh^2,1

^1The University of Tokyo, ^2National Institute of Informatics, ^3Microsoft Research Asia

[email protected], [email protected], [email protected]

This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge whether they are false alarms or not. To describe events in a human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., objects and actions) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question how to use CNNs effectively for abnormal event detection, mainly due to the environment-dependent nature of anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns a CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state-of-the-art on the Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results for abnormal event recounting.

^*This work was conducted when the first author was a research intern at Microsoft Research Asia.

§ INTRODUCTION

Detecting abnormal events in videos is crucial for video surveillance. While automatic anomaly detection can free people from having to monitor videos, we still have to check videos when the systems raise alerts, and this still involves immense costs. If systems can explain what is happening and assess why events are abnormal, we can quickly identify unimportant events without having to check videos. Such a process of explaining the evidence for detected events is called event recounting, which was attempted as a multimedia event recounting (MER) task in TRECVid (http://www.nist.gov/itl/iad/mig/mer12.cfm) but has not been explored in the field of abnormal event detection. Recounting abnormal events is also useful in understanding the behavior of algorithms. Analyzing the evidence used in detecting abnormal events should disclose potential problems with current algorithms and indicate possible future directions. The main goal of the research presented in this paper is to develop a framework that can jointly detect and recount abnormal events, as shown in Fig. <ref>. Abnormal events are generally defined as irregular events that deviate from normal ones. Since normal behavior differs according to the environment, the target of detection in abnormal event detection depends on the environment (e.g., `riding a bike' is abnormal indoors while it is normal on cycling roads). In other words, the notion of a positive example in anomaly detection is environment-dependent: only negative samples are given as training data, and what counts as positive in the environment
is defined by these negative samples. This is different from most other computer vision tasks (e.g., `pedestrian' is always positive in a pedestrian detection task). Since positive samples are not given in anomaly detection, detectors of abnormal events cannot be learned in a supervised way. Instead, the standard approach to anomaly detection is 1) learning an environment-dependent normal model using training samples, and 2) detecting outliers from the learned model. However, learning knowledge about basic visual concepts is essential for event recounting. The event in the example in Fig. <ref> is explained as `person', `bending', and `young' because the system has knowledge of these concepts. We call such knowledge generic knowledge. We consider generic knowledge to be essential for recounting and also to contribute to accurately detecting abnormal events. Since people also detect anomalies after recognizing objects and actions, employing generic knowledge in abnormal event detection fits in well with our intuition.

Convolutional neural networks (CNNs) have proven successful in learning visual concepts such as object categories and actions. CNNs classify or detect target concepts with a high degree of accuracy by learning them from numerous positive samples. However, positive samples are not given in anomaly detection due to its environment-dependent nature. This is the main reason that CNNs have still not been successful in anomaly detection, and most approaches still rely on low-level hand-crafted features. If we can fully exploit the representation power of CNNs, the performance of anomaly detection will be significantly improved, as it has been in other tasks. Moreover, the learned generic knowledge will help to recount abnormal events. This paper presents a framework that jointly detects and recounts abnormal events by integrating generic and environment-specific knowledge into a unified framework. A model based on Fast R-CNN <cit.> is trained on large supervised datasets to learn generic knowledge. Multi-task learning is incorporated into Fast R-CNN to learn three types of concepts, actions, objects, and attributes, in one model. Then, environment-specific knowledge is learned using anomaly detectors. Unlike previous approaches that have trained anomaly detectors on low-level features, our anomaly detector is trained on more semantic spaces by using CNN outputs (i.e., deep features and classification scores) as features. Our main contributions are:

* We address a new problem, i.e., joint abnormal event detection and recounting, which is important for practical surveillance applications as well as for understanding the behavior of abnormal event detection algorithms.
* We incorporate the learning of basic visual concepts into the abnormal event detection framework. Our concept-aware model opens up interesting directions for higher-level abnormal event detection.
* Our approach based on multi-task Fast R-CNN achieves superior performance over other methods on several benchmarks and demonstrates the effectiveness of deep CNN features in abnormal event detection.

§ RELATED WORK

The approach of anomaly detection first involves modeling normal behavior and then detecting samples that deviate from it. Modeling normal patterns of object trajectories is one standard approach <cit.> to anomaly detection in videos. While it can capture long-term object-level semantics, tracking fails in crowded or cluttered scenes.
An alternative approach is modeling appearance and activity patterns using low-level features extracted from local regions, which is the current standard approach, especially in crowded scenes. This approach can be divided into two stages: local anomaly detection, which assigns an anomaly score to each local region independently, and globally consistent inference, which integrates local anomaly scores into a globally consistent anomaly map with statistical inference. Local anomaly detection can be seen as a simple novelty detection problem <cit.>: a model of normality is inferred using a set of normal features X as training data and is used to assign novelty scores (anomaly scores) z(x) to a test sample x. Novelty detectors used in video anomaly detection include distance-based <cit.>, reconstruction-based (e.g., autoencoders <cit.>, sparse coding <cit.>), domain-based (one-class SVM <cit.>), and probabilistic methods (e.g., mixture of dynamic textures <cit.>, mixture of probabilistic PCA <cit.>), following the categories in the review by Pimentel et al. <cit.>. These models are generally built on low-level features (e.g., HOG and HOF) extracted from densely sampled local patches. Several recent approaches have investigated learning-based features using autoencoders <cit.>, which minimize reconstruction errors without using any labeled data for training. Antic and Ommer <cit.> detected object hypotheses by video parsing instead of dense sampling, although they relied on background subtraction, which is not robust to illumination changes.

Globally consistent inference was introduced in several approaches to guarantee the consistency of local anomaly scores. Kratz et al. <cit.> enforced temporal consistency by modeling temporal sequences with hidden Markov models (HMM). Spatial consistency has also been introduced in several studies using Markov random fields (MRF) <cit.> to capture spatial interdependencies between local regions. While recent approaches have placed emphasis on global consistency <cit.>, it is defined on top of the local anomaly scores as explained above. Besides, several critical issues remain in local anomaly detection. Despite the great success of CNN approaches in many visual tasks, the application of CNNs to abnormal event detection is yet to be explored. Normally, a CNN requires rich supervised information (positive/negative, ranking, etc.) and abundant training data. However, supervised information is unavailable for anomaly detection by definition. Hasan et al. <cit.> learned temporal regularity in videos with a convolutional autoencoder and used it for anomaly detection. However, we consider that autoencoders learned only with unlabeled data do not fully leverage the expressive power of CNNs. Besides, recounting of abnormal events is yet to be considered, while several approaches have been proposed for multimedia event recounting by localizing key evidence <cit.> or summarizing the evidence of detected events by text <cit.>.

§ ABNORMAL EVENT DETECTION AND RECOUNTING

We propose a method of detecting and recounting abnormal events. As shown in Fig. <ref> (a), we learn generic knowledge about visual concepts in addition to learning environment-specific anomaly detectors. Because most existing approaches use only environment-specific models, they cannot extract semantic information and are thus not sufficient to recount abnormal events.
Therefore, we learn the generic knowledge that is required for abnormal event recounting by using large-scale supervised image datasets. Since we learn the model with object, action, and attribute detection tasks that are highly related to abnormal event detection, this generic model can also be used to improve anomaly detection performance, as shown in <cit.>. First, a multi-task Fast R-CNN is learned with large supervised datasets; this corresponds to the generic model that can be used in common, irrespective of the environment. It is used to extract deep features (which we call semantic features) and visual concept classification scores from multiple local regions. Second, anomaly detectors are learned on these features and scores for each environment; they model the normal behavior of the target environment and predict anomaly scores for test samples. The anomaly detectors on features and on classification scores are used for abnormal event detection and recounting, respectively.

Our abnormal event detection and recounting are performed using a combination of the two learned models. Figure <ref> (b) outlines the four steps in the pipeline.

* Detect object proposals: Object proposals are detected for each frame by geodesic object proposals <cit.> and moving object proposals <cit.>.
* Extract features: Semantic features and classification scores are simultaneously extracted from all object proposals by the multi-task Fast R-CNN.
* Classify normal/abnormal: The anomaly score of each proposal is computed by applying the anomaly detector to the semantic features of the proposal. Object proposals with anomaly scores above a threshold are determined to be source regions of abnormal events.
* Recount abnormal events: Visual concepts of three types (objects, actions, and attributes) of abnormal events are predicted from the classification scores. The anomaly score of each predicted concept is computed by the anomaly detector for classification scores to recount the evidence of anomaly detection. This phase is explained in more detail in Sec. <ref>.

§.§ Learning of Generic Knowledge

We learn generic knowledge about visual concepts to use it for event recounting and to improve the performance of abnormal event detection. To exploit semantic information that is effective in these tasks, we learn three types of concepts, i.e., objects, actions, and attributes, that are important for describing events. Since these concepts are jointly learned by multi-task learning, features that are useful for detecting any type of abnormality (abnormal objects, actions, or attributes) can be extracted. Our model is based on Fast R-CNN because it can efficiently predict categories and output features of multiple regions of interest (RoIs) by sharing computation at the convolutional layers.

Network architecture. Figure <ref> (b) illustrates the architecture of the proposed multi-task Fast R-CNN (shaded in red), which is the same as that of Fast R-CNN except for the last classification layers. It takes an image and RoIs as inputs. The whole image is first processed by convolutional layers, and the outputs are then processed by the RoI pooling layer and two fully-connected layers to extract fixed-length features from each RoI. We used the feature at the last fully-connected layer (the fc7 feature) as the semantic feature for learning the abnormal event detector. The features were fed into three classification layers, i.e., object, action, and attribute classification layers, each of which consisted of fully-connected layers and an activation.
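A minimal sketch of this three-head structure is given below for illustration (the paper's implementation uses Chainer on an AlexNet backbone; the PyTorch-style code, the 4096-dimensional fc7 size, and the extra background class in the object head are assumptions made here for concreteness):

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Three classification heads sharing the fc7 RoI feature (logits only)."""
    def __init__(self, fc7_dim=4096, n_obj=81, n_act=25, n_att=45):
        super().__init__()
        self.obj_head = nn.Linear(fc7_dim, n_obj)   # 80 COCO objects + background
        self.act_head = nn.Linear(fc7_dim, n_act)   # 25 Visual Genome actions
        self.att_head = nn.Linear(fc7_dim, n_att)   # 45 Visual Genome attributes

    def forward(self, fc7):  # fc7: (num_rois, fc7_dim)
        return self.obj_head(fc7), self.act_head(fc7), self.att_head(fc7)

# At inference, a softmax is applied to the object logits and a sigmoid to the
# action/attribute logits; during training the corresponding losses are
# softmax cross-entropy and element-wise binary cross-entropy:
obj_criterion = nn.CrossEntropyLoss()
act_criterion = nn.BCEWithLogitsLoss()
att_criterion = nn.BCEWithLogitsLoss()
```

In the alternating multi-task training described below, only the loss of the task sampled at each iteration is applied to its own head and the shared layers.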
A sigmoid was used as the activation in attribute and action classification to optimize multi-label classification, while a softmax was used in object classification, as in Girshick <cit.>. Bounding box regression was not used because it depends on the class to be detected, which is not determined in abnormal event detection. We used AlexNet <cit.> as the base network, which is commonly used as a feature extraction network and is computationally more efficient than the VGG model <cit.>.

Training datasets. We used the Microsoft COCO <cit.> training set to learn objects and the Visual Genome dataset <cit.> to learn attributes and actions, because both datasets contain sufficiently large variations in objects with bounding box annotations. Visual Genome was also used for the evaluation, as will be explained later in Sec. <ref>, and for fairness the intersection of Visual Genome and the COCO validation (COCO-val) set was excluded. We used all 80 object categories in COCO, while the 45 attributes and 25 actions that appeared most frequently were selected from the Visual Genome dataset. Our model learned only static image information using image datasets instead of video datasets, because motion information (e.g., optical flow) from a static camera is significantly different from that from a moving camera, and large datasets from static cameras with rich annotations are unavailable.

Learning details. We used almost the same learning strategy and parameters as those for Fast R-CNN <cit.>. Here, we only describe the differences from Fast R-CNN. First, since we removed bounding box regression, our model was trained only with classification losses. Second, our model was trained to predict multiple tasks, viz., object, action, and attribute detection. A task was first randomly selected out of the three tasks for each iteration, and a mini-batch was sampled from the dataset of the selected task following the same strategy as that for Fast R-CNN. The loss of each task was applied to its classification layer and the shared layers. Since the multi-task model converged more slowly than the single-task model in <cit.>, we set the learning rate of SGD to 0.001 for the first 200K iterations and 0.0001 for the next 100K, which are larger numbers of iterations for each step of the learning rate than those for the single-task model. All models are trained and tested with Chainer <cit.>.

§.§ Abnormal Event Recounting

Abnormal event recounting is expected to predict concepts and also to provide evidence as to why the event was detected as an anomaly, which is not a simple classification task. In the case of Fig. <ref> above, predicting the categories (object=`person', attribute=`young', and action=`bending') is not enough. It is important to predict which concept is the anomaly (here, bending) to recount the evidence of abnormal events. Therefore, as shown in Fig. <ref>, the proposed abnormal event recounting system predicts:

* the categories of the three types of visual concepts (object, action, and attribute) of the detected event, and
* the anomaly score of each predicted concept, to determine whether it constitutes evidence for detecting the event as an anomaly.

The approach to these predictions is straightforward. We first predict categories by simply selecting the category with the highest classification score for each concept. The anomaly score of each predicted category is then computed.
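A rough sketch of this recounting step is shown below; the per-category anomaly-scoring functions are treated as black boxes here (the next paragraph describes how they are built with kernel density estimation), and all names are illustrative:

```python
import numpy as np

def recount_event(scores, anomaly_score_fns):
    """Pick the top-scoring category per concept type and attach its anomaly score.

    scores: dict like {"object": np.ndarray, "action": ..., "attribute": ...}
            holding the classification scores of one detected event region.
    anomaly_score_fns: dict of per-category scoring functions, e.g.
            anomaly_score_fns["action"][k](s) -> anomaly score for score s.
    """
    recounting = {}
    for concept, s in scores.items():
        k = int(np.argmax(s))                       # predicted category index
        recounting[concept] = {
            "category": k,
            "classification_score": float(s[k]),
            "anomaly_score": float(anomaly_score_fns[concept][k](s[k])),
        }
    return recounting
```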
At training time, the distribution of classification scores under the target environment is modeled for each category by using kernel density estimation (KDE) with a Gaussian kernel and a bandwidth calculated with Scott's rule <cit.>. At test time, the density at the predicted classification score is estimated by KDE for each predicted concept, and the reciprocal of the density is used as the anomaly score.

§ EXPERIMENTS

§.§ Datasets

The UCSD Ped2 <cit.> and Avenue <cit.> datasets were used to evaluate the performance of our method. The UCSD pedestrian dataset is the standard benchmark for abnormal event detection, where only pedestrians appear in normal events, while bikes, trucks, etc., appear in abnormal events. The UCSD dataset consists of two subsets, i.e., Ped1 and Ped2. We selected Ped2 because Ped1 has a significantly lower frame resolution of 158 × 240, which would have made it difficult to capture objects in our framework based on object proposals + CNN. Since inexpensive higher-resolution cameras have recently become commercially available, we consider that this is not a critical drawback of our framework. The Avenue dataset <cit.> is a challenging dataset that contains various types of abnormal events such as `throwing bag', `pushing bike', and `wrong direction'. Since the pixel-level annotation of some complex events is subjective (e.g., only the bag is annotated in a throwing-bag event), we evaluated Avenue with only frame-level metrics. In addition, while the Avenue dataset focuses on moving objects as abnormal events, our focus also includes static objects. Therefore, we evaluated the subset excluding five clips out of the 22 clips that contain static but abnormal objects, viz., a red bag on the grass and a person standing in front of a camera, which are regarded as normal in the Avenue dataset. We call this subset Avenue17 and describe it in more detail in the supplemental material. We used standard metrics in abnormal event detection, the ROC curve, area under the curve (AUC), and equal error rate (EER), as explained in Li et al. <cit.>, for both frame-level and pixel-level detection.

§.§ Implementation details

Abnormal event detection procedure. Given the input video, we first detected object proposals in each frame using GOP <cit.> and MOP <cit.>, as in <cit.> (around 2500 proposals per frame). The frame images and detected object proposals were input into Fast R-CNN to obtain semantic features and classification scores for all proposals. The semantic features were fed into the trained anomaly detector (described below) to classify each proposal as normal or abnormal, computing an anomaly score for each proposal. Object proposals with anomaly scores above the threshold were detected as abnormal events. The threshold parameter was varied to plot the ROC curve in our evaluation. Each detected event was finally processed for recounting, as explained in Sec. <ref>.

Anomaly detectors for semantic features. Given a training set extracted from training samples, anomaly detectors were learned to model `normal' behavior. At test time, an anomaly detector takes the semantic features as input and outputs an anomaly score. Three anomaly detectors were used. 1) Nearest neighbor-based method (NN): the anomaly score is the distance between the test sample and its nearest training sample.
2) One-class SVM (OC-SVM): the anomaly score of a test sample is the distance from the decision boundary of an OC-SVM <cit.> with an RBF kernel. Since we did not have validation data, we tested several parameter combinations and used the parameters that performed best (σ=0.001 and ν=0.1). 3) Kernel density estimation (KDE): the anomaly score is computed as the reciprocal of the density of the test sample estimated by KDE with a Gaussian kernel and a bandwidth calculated with Scott's rule <cit.>. To reduce the computational cost, we separated frames into a 3×4 grid of equal-sized cells and learned an anomaly detector for each location (12 detectors in total). The coordinates of the bounding box center determined the cell that each object proposal belonged to. In addition, features were compressed using product quantization (PQ) <cit.> with a code length of 128 bits for NN, and features were reduced to 16 dimensions using PCA for OC-SVM and KDE.

§.§ Comparison of Appearance Features

We compare our framework using the following appearance features to demonstrate the effectiveness of Fast R-CNN (FRCN) features in abnormal event detection:

* HOG: HOG <cit.> extracted from a 32×32 resized patch.
* SDAE: features of a stacked denoising autoencoder with the same architecture and training procedure as in <cit.>.
* FRCN objects, attributes, and actions: the fc7 feature of a single-task FRCN trained on one dataset.
* MT-FRCN: the fc7 feature of the multi-task FRCN.

We used the same settings for the other components, including object proposal generation and anomaly detectors, to evaluate the effect of the appearance features alone.

ROC curves. Figure <ref> plots the ROC curves on the Avenue17 and UCSD Ped2 datasets. These experiments used NN as the novelty detector. The curves indicate that FRCN features significantly outperformed HOG and SDAE on all benchmarks. The main reason is that FRCN features can discriminate different visual concepts, while HOG and SDAE features cannot. In the supplemental material, the t-SNE map <cit.> of the feature space qualitatively illustrates the discriminability of each feature. FRCN action features perform slightly better than the others because the most challenging abnormal events in the benchmarks are related to actions.

Compatibility with different anomaly detectors. We measured performance using the three anomaly detectors explained in Sec. <ref> to verify that FRCN features are compatible with various anomaly detectors. Figure <ref> compares the AUC obtained with the different anomaly detectors on the Avenue17 and Ped2 datasets. The results indicate that FRCN features always outperformed HOG and SDAE features and that our performance is insensitive to the choice of anomaly detector. Since FRCN features are compatible with various anomaly detectors, they can replace conventional appearance features in any framework for abnormal event detection.

§.§ Comparison with State-of-the-art Methods

We compared our entire abnormal event detection pipeline with state-of-the-art methods, viz., local motion histogram (LMH) <cit.>, MPPCA <cit.>, social force model <cit.>, MDT <cit.>, AMDN <cit.>, and video parsing <cit.>, on the Ped2 dataset. We also made comparisons with Lu et al. (sparse 150 fps) <cit.> and Hasan et al. (Conv-AE) <cit.> on the Avenue17 dataset. We measured their performance on Avenue17 using the code provided by the authors.

Results. Table <ref> summarizes the AUC/EER on the Avenue17 and UCSD Ped2 datasets, which demonstrates that our framework outperformed all other methods on all benchmarks.
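For reference, the frame-level AUC and EER used in these comparisons can be computed from per-frame anomaly scores roughly as follows (a sketch assuming scikit-learn's roc_curve; the paper does not specify its exact evaluation code):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def frame_level_auc_eer(frame_labels, frame_scores):
    """frame_labels: 1 for abnormal frames, 0 for normal; frame_scores: anomaly scores."""
    fpr, tpr, _ = roc_curve(frame_labels, frame_scores)
    roc_auc = auc(fpr, tpr)
    # EER: the operating point where the false positive rate equals the miss rate (1 - TPR)
    idx = int(np.nanargmin(np.abs(fpr - (1.0 - tpr))))
    eer = (fpr[idx] + (1.0 - tpr[idx])) / 2.0
    return roc_auc, eer
```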
In particular, the AUC on Ped2 was 89.2%, which significantly outperforms the state-of-the-art method (66.5% in <cit.>). Since our method is based on object proposals and captures object-level semantics by using FRCN features, we accurately localized abnormal objects. Moreover, the Avenue17 dataset contains objects and actions that were not included in Fast R-CNN's training data (e.g., white paper and throwing a bag). This indicates that FRCN features generalize to the detection of untrained categories. Note that our method performed best without using any motion features, while the other methods used motion features based on optical flow. Learning motion features with a two-stream CNN <cit.> or 3D-CNN <cit.> remains to be undertaken in future work. Also, our performance on Ped1 is much worse than the state-of-the-art (69.9/35.9 in AUC/EER) because of the low-resolution issue stated above, which should be solved in the future.

§.§ Qualitative Evaluation of Evidence Recounting

Figure <ref> shows examples of recounting results obtained with our framework, in which the predicted categories and the anomaly scores of each category (red bars) are presented. Figures <ref> (a)–(e) present successfully recounted results. Our method could predict abnormal concepts such as `riding', `truck', and `bending' while assigning lower anomaly scores to normal concepts such as `person' and `black'. The anomaly score of `young' in (e) is much higher than that in (d) because a high classification score for `young' was assigned to the child in (e), which is rare. Figures <ref> (f)–(j) reveal the limitations of our approach. The event in (f) is a false positive detection. Since we only used appearance information, a person walking in a direction different from the other people is predicted as standing. The events in (g) and (h), viz., scattered papers and a person moving in the wrong direction, could not be recounted with our approach because they are outside the knowledge we learned. Nevertheless, the results provide some clues to understanding the events; the event in (g) is something `white', and the anomaly in (h) is not due to basic visual concepts. The events in (i) and (j), which correspond to `throwing a bag' and `pushing a bicycle', involve interactions between objects, which could not be captured with our approach. Since large datasets for object interactions are available <cit.>, our framework could be extended to learn such knowledge, and this could be another direction for future work.

§ EVALUATION WITH ARTIFICIAL DATASETS

§.§ Settings

The current benchmarks in abnormal event detection have three main drawbacks for evaluating our method. 1) The dataset sizes are too small and the variations in abnormalities are limited, because collecting data on abnormal events is difficult due to privacy issues in surveillance videos and the scarcity of abnormal events. 2) The definition of abnormal is subjective because it depends on the application. 3) Since ground truth labels for the categories of abnormal events are not annotated, it is difficult to evaluate recounting. The experiments described in this section were designed to evaluate the performance of unseen (novel) visual concept detection, i.e., detecting basic visual concepts that did not appear in the training data, which represents an important portion of abnormal event detection. Although most events in the UCSD pedestrian dataset belong to this category, the variations in concepts are limited (e.g., person, bikes, and trucks).
We artificially generated datasets for unseen visual concept detection with large variations based on image datasets. Their evaluation scheme is more objective than that of abnormal event detection benchmarks.

Task settings. This task is evaluated on datasets with bounding box annotations of objects, actions, or attributes. n_seen categories are selected from all n annotated categories, and the dataset is split into training and test sets so that the training set contains only the n_seen categories. The main objective of this task is to find the n_unseen = n - n_seen categories in the test set. In other words, we detect unseen categories that did not appear in the training set, which is a setting similar to abnormal event detection benchmarks. We specifically propose two tasks to evaluate our method. Task 1 (Sec. <ref>): detect objects that have annotations of unseen categories using our abnormal event detection framework (anomaly detector + fc7 features). Task 2 (Sec. <ref>): detect and classify unseen objects with our method of abnormal event recounting (kernel density estimation + classification scores).

§.§ Evaluation of Unseen Concept Detection

We evaluated this task on datasets based on the COCO and PASCAL datasets. We used the COCO-val set for objects, and the intersection of COCO-val and Visual Genome for actions and attributes. We used the same categories that were used to train Fast R-CNN. As for PASCAL, the official PASCAL VOC 2012 dataset was used for object and action detection, while the a-PASCAL dataset <cit.> was used for attribute detection. Each dataset was split into training and test sets by the following procedure: 1) randomly select the unseen categories (n_unseen is set to be around n/4), 2) assign images containing unseen-category objects to the test set, 3) assign randomly sampled images to the test set as distractors (so that the number of test images equals the number of training images), and 4) assign the remaining images to the training set. We repeated this to create five sets of training–test pairs for each dataset. We used the same detection method as in the experiments in Sec. <ref>; unseen categories were detected as regions with high anomaly scores computed by a nearest neighbor-based anomaly detector trained on each training set. We used the ground truth bounding boxes as input RoIs instead of object proposals, because some proposals contained unannotated but unseen categories, which made it difficult to evaluate our framework. To evaluate performance, detection results were ranked by anomaly score, and average precision (AP) was calculated similarly to PASCAL detection (objects with annotations of unseen categories are positive in our evaluation). The final performance values were computed as the mean average precision (mAP) over the five sets of unseen categories. Table <ref> summarizes the mAP of our framework with different appearance features. The training data used to train each feature are also indicated by check marks. We trained two SDAEs: a generic model trained on the datasets used in Fast R-CNN learning, and a specific model trained on the training data of each set (which contains only `seen' categories).
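A sketch of the dataset split procedure described above is given below (function and variable names are illustrative assumptions; the number of distractors is chosen so that the test and training sets are balanced):

```python
import random

def make_unseen_split(image_ids, image_categories, all_categories,
                      unseen_ratio=0.25, seed=0):
    """Split images so that the training set contains no unseen-category objects.

    image_categories: dict image_id -> set of annotated category ids.
    Returns (unseen_categories, train_ids, test_ids).
    """
    rng = random.Random(seed)
    n_unseen = max(1, int(round(len(all_categories) * unseen_ratio)))
    unseen = set(rng.sample(sorted(all_categories), n_unseen))

    # images containing any unseen category go to the test set
    test_ids = [i for i in image_ids if image_categories[i] & unseen]
    rest = [i for i in image_ids if not (image_categories[i] & unseen)]

    # add random distractors to the test set until |test| == |train|
    rng.shuffle(rest)
    n_distractors = max(0, (len(image_ids) - 2 * len(test_ids)) // 2)
    test_ids += rest[:n_distractors]

    # remaining images form the training set (only 'seen' categories)
    train_ids = rest[n_distractors:]
    return unseen, train_ids, test_ids
```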
The results demonstrate that Fast R-CNN significantly outperformed HOG and SDAE, which indicates that unseen visual concept detection is a difficult task without learning from labeled data. The single-task Fast R-CNN trained on the same task as the evaluation task performed best on all tasks, while the proposed multi-task Fast R-CNN obtained the second-highest mAP on all tasks, which is significantly better than models trained on different tasks. Since the types of abnormal concepts to be detected are not fixed in practice, the multi-task Fast R-CNN is an excellent choice for abnormal event detection.

§.§ Evaluation of Unseen Concept Recounting

We quantitatively evaluated our recounting method by using the COCO-based unseen concept detection datasets in Sec. <ref>. For each candidate region of a test sample, our framework outputs the classification scores and the anomaly scores computed by KDE learned from the training set. The performance values were computed as the AUC of TPR versus FPR. For a given threshold on the anomaly scores, unseen categories were predicted for each region, i.e., categories with anomaly scores above the threshold and classification scores above 0.1. Unlike the experiments described in Sec. <ref>, multiple categories were sometimes predicted for each concept in this evaluation. An object was a true positive if 1) ground truth unseen categories were annotated (it was positive) and 2) the predicted unseen categories agreed with the ground truth. An object was a false positive if 1) ground truth unseen categories were not annotated (it was negative) and 2) any category was predicted as being unseen. The threshold was varied to compute the AUC. We compared our method with HOG and SDAE features combined with a linear SVM classifier. The SVM classification scores were used as the input to the anomaly detector in these methods. The SVMs were trained on the COCO training set that was used in Fast R-CNN training. Table <ref> compares the AUC on the COCO-based unseen concept detection datasets. We can see that the multi-task Fast R-CNN performed best with all types of concepts, while HOG and SDAE could hardly recount unseen concepts. This demonstrates that deeply learned generic knowledge is essential for concept-level recounting of abnormal events.

§ CONCLUSION

We addressed the problem of joint abnormal event detection and recounting. To solve this problem, we incorporated the learning of generic knowledge, which is required for recounting, and environment-specific knowledge, which is required for anomaly detection, into a unified framework. A multi-task Fast R-CNN is first trained on richly annotated image datasets to learn generic knowledge about visual concepts. Anomaly detectors are then trained on the outputs of this model to learn environment-specific knowledge. Our experiments demonstrated the effectiveness of our method for abnormal event detection and recounting by improving the state-of-the-art performance on challenging benchmarks and providing successful examples of recounting. Although this paper investigated basic concepts such as actions, our approach could be extended further to complex concepts such as object interactions. This work is the first step in abnormal event detection using generic knowledge of visual concepts and sheds light on future directions for such higher-level abnormal event detection.

Acknowledgements: We thank Cewu Lu, Allison Del Giorno, and Mahmudul Hasan for sharing their code and data. This work was supported by JST CREST JPMJCR1686 and JSPS KAKENHI 17J08378.
arXiv:1709.09121v1 [cs.CV] — Ryota Hinami, Tao Mei and Shin'ichi Satoh, "Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge", http://arxiv.org/abs/1709.09121v1 (2017).
Accepted Manuscript in IEEE International Conference on Machine Learning and Applications (ICMLA 2017)

SUBIC: A Supervised Bi-Clustering Approach for Precision Medicine

Milad Zafar Nezhad^a, Dongxiao Zhu^b,*, Najibesadat Sadati^a, Kai Yang^a, Phillip Levy^c

^a Department of Industrial and Systems Engineering, Wayne State University; ^b Department of Computer Science, Wayne State University; ^c Department of Emergency Medicine and Cardiovascular Research Institute, Medical School, Wayne State University; ^* Corresponding author, e-mail address: [email protected]

Traditional medicine typically applies one-size-fits-all treatment to the entire patient population, whereas precision medicine develops tailored treatment schemes for different patient subgroups. The fact that some factors may be more significant for a specific patient subgroup motivates clinicians and medical researchers to develop new approaches to subgroup detection and analysis, which is an effective strategy to personalize treatment.
In this study, we propose a novel patient subgroup detection method, called Supervised Biclustering (SUBIC), based on convex optimization, and apply our approach to detect patient subgroups and prioritize risk factors for hypertension (HTN) in a vulnerable demographic subgroup (African-Americans). Our approach not only finds patient subgroups with the guidance of a clinically relevant target variable but also identifies and prioritizes risk factors by pursuing sparsity of the input variables and encouraging similarity among the input variables and between the input and target variables.

Keywords—precision medicine; subgroup identification; biclustering; regularized regression; cardiovascular disease.

§ INTRODUCTION

The explosive increase of Electronic Medical Records (EMR) and the emergence of precision (personalized) medicine in recent years hold great promise for improving the quality of healthcare <cit.>. In fact, the paradigm in medicine and healthcare is shifting from disease-centered (empirical) to patient-centered care; the latter is called personalized medicine. The extensive and rich patient-centered data enable data scientists and medical researchers to carry out research in the field of personalized medicine <cit.>. Personalized medicine is defined as <cit.>: "use of combined knowledge (genetic or otherwise) about an individual to predict disease susceptibility, disease prognosis, or treatment response and thereby improve that individual's health." In other words, the goal of personalized medicine is to provide the right treatment policy to the right patient at the right time. A crucial step in personalized medicine is to discover the most important input variables (disease risk factors) related to each patient <cit.>. Since the identification of risk factors requires multi-disciplinary knowledge, including data science tools, statistical techniques and medical knowledge, many machine learning and data mining methods have been proposed to identify, select and prioritize risk factors <cit.><cit.><cit.>. Popular methods such as linear models with shrinkage <cit.> and random forests <cit.> effectively select significant risk factors for the entire patient population. However, these approaches are not capable of detecting risk factors for each patient subgroup because they are developed based on the assumption that the patient population is homogeneous with a common set of risk factors. While the point of input variable selection is well taken, the association with small subgroups, a key notion in personalized medicine, is often neglected. As mentioned, personalized healthcare aims to identify subgroups of patients who are similar to each other according to both target variables and input variables. Discovering potential subgroups plays a significant role in designing personalized treatment schemes for each subgroup. Therefore, it is essential to develop a core systematic approach for patient subgroup detection based on both input and target variables <cit.>. A number of data-driven approaches have been developed for subgroup identification. The more popular methods can be divided into two categories: 1) tree-based approaches <cit.> (so-called recursive partitioning), and 2) biclustering approaches <cit.>.
Tree-based methods for subgroup analysis have been greatly developed in recent years; examples include model-based recursive partitioning <cit.>, Interaction Trees <cit.>, the Simultaneous Threshold Interaction Modeling Algorithm (STIMA) <cit.>, Subgroup Identification based on Differential Effect Search (SIDES) <cit.>, Virtual Twins <cit.>, the Qualitative Interaction Tree (QUINT) <cit.>, and the Subgroup Detection Tree <cit.>. The second category of approaches (biclustering) has been extensively developed and applied to analyze gene expression data. Most of the biclustering algorithms developed to date are based on optimization procedures as the search heuristics to find subgroups of genes or patients.

Tree-based methods detect patient subgroups using the relationship between input and target variables, whereas biclustering methods focus only on clustering the rows and columns of the input variables simultaneously to identify different subgroups with specific risk factors (prioritized input variables). The former employs a target variable to guide subgroup detection while selecting a common set of input variables. The latter selects subgroups of specific input variables without the guidance of a target variable. Moreover, both approaches are heuristic in nature, so subgroup detection and risk factor identification are sensitive to the choice of data sets and initializations and hence have poor generalization performance. Our proposed method combines the strengths of both approaches by using a target variable to guide subgroup detection while selecting subgroups of specific risk factors. Meanwhile, our systematic approach overcomes the stability limitation of both approaches by casting the problem into a stable and mature convex optimization framework. Figure <ref> demonstrates the consecutive steps of our approach.

In this study, we propose a new supervised biclustering approach, called SUBIC, for solving the patient subgroup detection problem. Our approach is a generalized (supervised) version of convex biclustering <cit.>, which enables prediction of the target variable for new input variables. Moreover, we employ the elastic-net penalty <cit.> (both l_1 and l_2 regularization terms), which encourages sparsity of the correlated input variable groups (X) under the guidance of a target variable (Y). Our model is specifically designed for patient subgroup detection and target variable prediction from high-dimensional data. To the best of our knowledge, our model is the first supervised biclustering approach that can be applied in many domains such as personalized medicine. To demonstrate the performance of the SUBIC approach, we apply it to detect subgroups among hypertension (HTN) patients with the guidance of left ventricular mass indexed to body surface area (LVMI), a clinically important target variable.

The rest of this paper is organized as follows. Section II reviews related work on unsupervised biclustering approaches. Section III explains our proposed supervised biclustering (SUBIC) approach. Section IV describes experimental studies and model evaluation using simulation studies. Section V reports the results of applying our method to patients at high risk of cardiovascular disease, and finally we conclude this study in Section VI.

§ RELATED WORKS

Biclustering is defined as the simultaneous clustering of both rows and columns in the input data matrix. Such clusters are important since they not only discover correlated rows, but also identify groups of rows that do not behave similarly in all columns <cit.>.
In the context of precision medicine, rows correspond to patients and columns correspond to input variables measured for each patient. Biclustering was originally introduced in 1972 <cit.>, and Cheng and Church <cit.> were the first to develop a biclustering algorithm and apply it to gene expression data analysis. A wide range of biclustering methods has been developed using different mathematical and algorithmic approaches. Tanay et al. <cit.> proved that biclustering is an NP-hard problem, much more complicated than the clustering problem <cit.>. Therefore, most methods are developed based on heuristic optimization procedures <cit.>. Madeira and Oliveira <cit.>, Busygin et al. <cit.>, Eren et al. <cit.> and Pontes et al. <cit.> provided four comprehensive reviews of biclustering methods in 2004, 2008, 2012 and 2015, respectively. Based on the most recent review <cit.>, biclustering approaches can be divided into two main groups. The first group consists of methods based on evaluation measures, i.e., heuristic methods developed using a measure of quality to reduce the solution space and the complexity of the biclustering problem. Table <ref> shows the different algorithmic categories within this group.

The second group is called non-metric-based biclustering methods; these do not use any measure of quality (evaluation measure) to guide the search. These methods use graph-based or probabilistic algorithms to identify the patterns of biclusters in the data matrix. Table <ref> summarizes the different algorithms of the non-metric-based group.

One of the important aspects of bicluster structure is overlapping, which means that several biclusters share rows and columns with each other. Depending on the characteristics of the search strategy of a biclustering method, overlapping may or may not be allowed among the biclusters. Most of the algorithms mentioned in Table <ref> and Table <ref> allow overlapping biclusters <cit.>. Since these algorithms use a heuristic approach for guiding the search, the final biclusters may vary depending on how the algorithm is initialized. Therefore, they neither guarantee a global optimum nor are robust against even small perturbations <cit.>. Recently, Chi et al. <cit.> formulated the biclustering problem as a convex optimization problem and solved it with an iterative algorithm. Their convex biclustering model corresponds to the checkerboard mean model, which means that each data matrix component is assigned to one bicluster. They used the concept of the fused lasso <cit.> and generalized it with a new sparsity penalty term corresponding to the convex biclustering problem. This method has some important advantages over previous heuristic-based methods: it has a unique global minimizer for the biclustering problem, which maps the data to one biclustering structure, so the solution is stable and unique. Also, it uses a single tuning parameter to control the number of biclusters. The authors performed simulation studies to compare their algorithm with two other biclustering algorithms, the dynamic tree cutting algorithm <cit.> and the sparse biclustering algorithm <cit.>, which assume the checkerboard mean structure. The results showed that convex biclustering outperforms the competing approaches in terms of the Rand index <cit.>. Despite the improved performance, the convex biclustering method, like other biclustering methods, does not exploit a target variable in subgroup detection and risk factor selection.
As a result, the detected biclusters are not linked to target variables of interest. Hence, the method cannot predict the target variable for new input data. Clearly, a target variable such as LVMI provides critical guidance for detecting and selecting meaningful biclusters (patient subgroups). Moreover, the l_1 penalty term alone in convex biclustering encourages sparsity of individual input variables but overlooks the fact that the variables are also correlated within groups. To overcome both limitations, we introduce a new elastic-net regularization term that seeks sparsity of correlated variable groups and employs a target variable to supervise the biclustering optimization process. Consequently, our model is truly predictive and capable of predicting the value of the target variable for new patients. In the next section, we describe our method in detail. § METHOD §.§ The objective function of the SUBIC method Assume the input data matrix X_n × p represents n instances with p input variables, and Y_n is the continuous target variable (e.g., LVMI) associated with the n instances (patients). Following the checkerboard mean structure, let R and C be the sets of rows and columns of a bicluster B, respectively. For an element x_i,j belonging to the bicluster B, the observed value can be modeled as <cit.>: x_i,j = μ_0 + μ_RC + ε_i,j, where μ_0 is a baseline mean shared by all elements, μ_RC is the mean of the bicluster defined by R and C, and ε_i,j is an i.i.d. error term distributed as N(0, σ). With non-overlapping biclusters, this structure corresponds to a checkerboard mean model <cit.>. Without loss of generality, we drop μ_0 from all elements. The goal of biclustering is to find the partition indices with respect to R and C and then estimate the mean of each corresponding bicluster B. To achieve this goal, we minimize the following convex objective function: F_λ_1,λ_2 = (1/2)‖X-T‖_F^2 + P(T), where the matrix T ∈ R^n× p contains our optimization parameters, i.e., the estimated means. The first term, the squared Frobenius norm of X-T, is the error term, and P(T) = P_1(T)+ P_2(T) is the elastic-net regularization penalty, formulated as follows: P_1(T)= λ_1 [Σ_i<j w_i,j‖T_.i - T_.j‖_2^2 + Σ_i<j h_i,j‖T_i. - T_j.‖_2^2 ], and P_2(T)= λ_2 [Σ_i<j w_i,j‖T_.i - T_.j‖_1 + Σ_i<j h_i,j‖T_i. - T_j.‖_1 ]. This objective function is similar to the subset selection problem in regularized regression <cit.>. In the penalty function, λ_1 and λ_2 are tuning parameters: the first term, penalized by λ_1, is an l_2-norm regularization term, and the second term, penalized by λ_2, is an l_1-norm regularization term, so the penalty P(T) acts as the elastic-net penalty of regression <cit.>. T_i. and T_.i denote the ith row and column of the matrix T, which can be regarded as the cluster center (centroid) of the ith row and column, respectively. By minimizing the objective function defined in Eq. <ref> with this sparsity-based regularization, the cluster centroids are shrunk together as the tuning parameters increase; that is, the sparse optimization simultaneously drives similar rows and columns toward common centroids. Finding the similarity between rows and columns is guided by the weights (w_i,j, h_i,j) included in the objective function. These weights are defined based on the distance between input variables (X_.i - X_.j and X_i.
- X_j.), the distance between target variables (Y_i - Y_j), and the correlation between the input variables and the target variable (X_.i , Y_.j). Therefore, both the input variables and the target variable play a significant role in guiding the sparsity toward the best centroids. The first kind of weight (w_i,j) drives the convergence of columns and the second (h_i,j) drives the convergence of rows. The weights are constructed from unsupervised and supervised parts, where: w_i,j = w_i,j^1 + w_i,j^2 and h_i,j = h_i,j^1 + h_i,j^2. The unsupervised parts (w_i,j^1, h_i,j^1) drive the convergence of columns and rows based on the similarity among the input variables alone, while the supervised parts (w_i,j^2, h_i,j^2) drive the convergence according to the similarity of both the input and target variables. Since the rows and columns lie in spaces of different dimensions, the weights need to be normalized (we recommend that the row weights and column weights sum to 1/√(n) and 1/√(p), respectively). We use the idea of sparse Gaussian kernel weights <cit.> to define w_i,j^1, w_i,j^2, h_i,j^1 and h_i,j^2; Table <ref> gives their mathematical definitions. The way the weights are defined has a substantial impact on the quality of the biclustering. The weights described above guarantee the sparsity of the problem and exploit the similarity of all input and target variables in both a supervised and an unsupervised manner. With these weights, two columns (rows) that are more similar to each other receive a larger weight in the convex penalty function; during minimization those columns (rows) therefore have higher priority, so the convex minimizer tends to cluster similar columns (rows) together. The choice of the elastic-net penalty overcomes limitations of the lasso: while the l_1-norm generates a sparse model, the quadratic part of the penalty encourages a grouping effect and stabilizes the l_1 regularization path. The elastic-net regularization term is also well suited to high-dimensional data with correlated input variables <cit.>. §.§ The algorithm to train the SUBIC model It can easily be shown that the objective function in Eq. <ref> is convex, so we need an appropriate algorithm to solve this unconstrained convex optimization problem. Since the second part of the penalty function, P_2(T), is non-differentiable, we use the Split Bregman method <cit.> developed for the large-scale fused lasso. It can be shown that this method is equivalent to the alternating direction method of multipliers (ADMM) <cit.>; readers may refer to the Split Bregman method <cit.> or the ADMM algorithm <cit.> for a more comprehensive explanation. In both methods, we introduce splitting variables and Lagrange multipliers and then apply the augmented Lagrangian to the non-differentiable part P_2(T) of the objective function. First, we transform the problem into an equality-constrained convex optimization problem by defining two new variables (V, S) and adding two sets of constraints corresponding to P_2(T), and then use Lagrange multipliers: min F_λ_1,λ_2 = (1/2)‖X-T‖_F^2 + λ_1 [Σ_i<j w_i,j‖T_.i - T_.j‖_2^2 + Σ_i<j h_i,j‖T_i. - T_j.‖_2^2 ] + λ_2 [Σ_i<j w_i,j‖T_.i - T_.j‖_1 + Σ_i<j h_i,j‖T_i. - T_j.‖_1 ], subject to: w_i,j (T_.i - T_.j) = V_i,j ∀ i, j; i<j, h_i,j (T_i. - T_j.) = S_i,j ∀ i, j; i<j, where V and S are matrices in R^n× p.
Denoting the differentiable part of the objective function in Eq. <ref> by F^'_λ_1,λ_2, the Lagrangian of the above problem is: L̃ (T, M, N, V, S) = F^'_λ_1,λ_2 + λ_2 [Σ_i<j w_i,j‖V_i,j‖_1 + Σ_i<j h_i,j‖S_i,j‖_1 ] + Σ_i<j⟨ M_i,j , w_i,j (T_.i - T_.j) - V_i,j⟩ + Σ_i<j⟨ N_i,j , h_i,j (T_i. - T_j.) - S_i,j⟩, where M and N are the dual variables (Lagrange multipliers) corresponding to the constraints in Eq. <ref> (in total there are n(n-1)/2 + p(p-1)/2 constraints). Finally, the augmented Lagrangian function of Eq. <ref> is: L (T, M, N, V, S) = F^'_λ_1,λ_2 + λ_2 [Σ_i<j w_i,j‖V_i,j‖_1 + Σ_i<j h_i,j‖S_i,j‖_1 ] + Σ_i<j⟨ M_i,j , w_i,j (T_.i - T_.j) - V_i,j⟩ + Σ_i<j⟨ N_i,j , h_i,j (T_i. - T_j.) - S_i,j⟩ + (μ_1/2) Σ_i<j ‖w_i,j (T_.i - T_.j) - V_i,j‖_2^2 + (μ_2/2) Σ_i<j ‖h_i,j (T_i. - T_j.) - S_i,j‖_2^2, where μ_1>0 and μ_2>0 are two parameters. The Split Bregman algorithm for the supervised convex biclustering problem is described below. τ acts as a soft-thresholding operator defined on a vector space and satisfying: τ_λ(w)= [t_λ(w_1), t_λ(w_2), ...]^T, where t_λ(w_i) = sgn(w_i) max{0, |w_i| - λ}. §.§ The SUBIC-based prediction approach To predict the target variable within the supervised biclustering framework, we introduce a simple yet effective approach based on the generalized additive model (GAM) <cit.>. Assuming that K biclusters {BC_1, BC_2, ..., BC_K} are detected by training the SUBIC model, we consider K classifiers, one for each bicluster, i.e., f_k (y|x_bc_k, x_new) = ȳ_bc_k; that is, each classifier predicts the target value as the average of the target variable over the corresponding bicluster. The proposed GAM model is: g(E(y)) = R_1(x_bc_1) + R_2(x_bc_2) + ... + R_K(x_bc_K), where R_k(x_bc_k) = q_k f_k(y|x_bc_k, x_new) and q_k is a normalized weight based on posterior probabilities. Assuming that each bicluster follows a Gaussian distribution N(μ_k, σ) and that P(bc_k|x_new) is the posterior probability of bicluster k given a new instance x_new, we define q_k as: q_k = P(bc_k|x_new)/∑_i=1^K P(bc_i|x_new), where P(bc_k|x_new) ∝ P(x_new|bc_k) × P(bc_k). P(x_new|bc_k) is conveniently calculated from the Gaussian distribution, assuming equal variance and zero covariance, and P(bc_k) is the prior, which can be calculated by counting the number of instances in each bicluster. § EXPERIMENTAL STUDY AND MODEL EVALUATION To assess the performance of our approach, we carry out simulation studies and use the Rand index (RI) <cit.> and the adjusted Rand index (ARI) <cit.>, two popular measures for evaluating the quality of clustering. Since our biclustering method is supervised, we simulate data for both the input and target variables based on a checkerboard mean structure, using normal distributions with different means to generate the simulated data. Figure <ref> illustrates an example simulation study. As shown below, the data were simulated as a 20×20 matrix. The segments have different sizes and were generated from different normal distributions, all with low noise (σ=1.5). Input data in segments 2, 3, 4 and 5 are highly positively correlated with the target variable, and input data in segments 6, 7, 8 and 9 are highly negatively correlated with it. Segments 1 and 10 are, in general, similar and only weakly correlated with the target variable. Segments 1 and 3 of the target variable are positive, and the other two sections have negative values.
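As a rough illustration of this kind of setup, checkerboard-structured data with a target variable tied to some of the column segments can be generated along the following lines. This is a minimal sketch only; the segment boundaries, block means and target construction below are illustrative assumptions and not the exact design used in our simulation study.

import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 20, 20, 1.5

# Illustrative 2x2 checkerboard of row/column segments with distinct block means.
row_groups = np.repeat([0, 1], [10, 10])      # two row segments (hypothetical sizes)
col_groups = np.repeat([0, 1], [10, 10])      # two column segments (hypothetical sizes)
block_means = np.array([[3.0, -3.0],
                        [-3.0, 3.0]])         # mean of each (row segment, column segment) block

X = rng.normal(loc=block_means[np.ix_(row_groups, col_groups)], scale=sigma)

# Target positively correlated with one column segment and negatively with the other.
signs = np.where(col_groups == 0, 1.0, -1.0)
y = X @ signs / p + rng.normal(scale=0.1, size=n)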
Given these assumptions and taking the effect of the target variable into account, it is clear that the true number of biclusters should be 16 (not 10); segments 1 and 10 each contain 4 biclusters. The results of the SUBIC implementation for different tuning parameters are displayed in Figure <ref>. As depicted in Figure <ref>, the tuning parameters provide a flexible mechanism for analyzing data with both high and low variances. As λ_1 and λ_2 increase, rows and columns are unified toward the mean of each bicluster, but when λ_1 and λ_2 take very large values such as 10000, the bicluster patterns are “smoothed out" and the number of biclusters decreases. We consider different scenarios in Figure <ref> to show the flexibility and generality of our method. Panel a shows our supervised biclustering approach, SUBIC, with the elastic-net penalty (l_1 and l_2), as the most general case. By zeroing out λ_1, the l_2 penalty (special case 1), SUBIC becomes the extended (supervised) version of the convex biclustering approach <cit.> (Panel b). If we instead zero out the supervised weight components w_i,j^2 and h_i,j^2 (special case 2), SUBIC becomes extended unsupervised convex biclustering with the elastic-net penalty (Panel c). Finally, if we zero out both the l_2 penalty and the supervised weight components w_i,j^2 and h_i,j^2 (special case 3), SUBIC becomes the bona fide convex biclustering method reported in <cit.>. Therefore, our SUBIC approach is general and flexible: it employs a target variable to guide subgroup detection while encouraging sparsity in the number of variable groups and in the variables within each group. Correspondingly, our SUBIC approach detects the ground-truth biclusters most accurately. Panels a and b in Figure <ref> confirm the impact of the supervised weights (target-variable guidance) in identifying the true biclusters, in comparison with the convex biclustering approach <cit.> (Panels c and d). In both cases, the elastic-net regularization also appears more accurate in detecting the true biclusters. We extend the above simulation to an 80×80 matrix and consider different designs (true biclusters) with two variance levels (low and high) to assess the performance of our model. We use different tuning parameters in each design and evaluate the SUBIC method in terms of the Rand index and adjusted Rand index. The average RI and ARI over 10 replicates are reported in Tables <ref> and <ref> for the low-variance and high-variance data, respectively. As shown there, the performance of SUBIC is fully tunable through the pair of tuning parameters in response to data with different levels of variance. Tables <ref> and <ref> show that SUBIC's superior performance is very stable for both low- and high-variance data; in particular, robust performance on high-variance data is achieved by setting larger values of the tuning parameters. § APPLICATION IN PERSONALIZED MEDICINE In this section we demonstrate how the SUBIC method identifies patient subgroups under the guidance of the target variable LVMI. We study a population of African-Americans with hypertension and poor blood pressure control who are at high risk of cardiovascular disease. Data were obtained from patients enrolled in the emergency department of Detroit Receiving Hospital. After the preprocessing step, our data consist of 107 features, including demographic characteristics, previous medical history, patient medical condition, laboratory test results, and CMR results, for 90 patients.
To achieve a checkerboard pattern, we first reorder the rows and columns of the original data using hierarchical clustering <cit.> and then apply the SUBIC method. The results are shown in the top panel of Figure <ref>. In addition, for comparison with our SUBIC method we ran the convex biclustering method (COBRA) developed by Chi et al. <cit.> using the R package “cvxbiclustr". Results obtained using different tuning parameters (λ) are shown in the bottom panel of Figure <ref>. In Figure <ref>, our SUBIC method detects 4 subgroups using 15 features for λ_1 = λ_2 = 10^4. These 15 features belong to 3 major groups: 1) Waist Circumference Levels (mm); 2) Average Weight (kg); and 3) Calculated BMI. The statistics of these risk factors for the 4 patient subgroups are summarized in Table <ref>. It is worth mentioning that other potential risk factors such as “Troponin Level" or “Plasma Aldosterone" may also be significant, but these three groups of features are sufficient to describe the disparity among patients under the guidance of the target variable LVMI. In contrast, the COBRA method fails to find any patient subgroups in this data set for the various tuning parameters tried. § DISCUSSION AND CONCLUSION In this paper, we have developed a novel supervised subgroup detection method, called SUBIC, based on convex optimization. SUBIC is a predictive model that combines the strengths of biclustering and tree-based methods. We introduced a new elastic-net penalty term into our model and defined two new weights in the objective function to enable supervised training, under the guidance of a clinically relevant target variable, for detecting biclusters. We further presented a generalized additive model for predicting the target variable for new patients. We evaluated our SUBIC approach using simulation studies and applied it to identify disparities among African-American patients who are at high risk of cardiovascular disease. Future directions include extending our SUBIC approach to predict categorical target variables, such as stages and subtypes of heart disease.
Scaling Author Name Disambiguation with CNF Blocking Kunho Kim^†, Athar Sefid^†, C. Lee Giles^† ^†Computer Science and Engineering ^Information Sciences and Technology The Pennsylvania State University University Park, PA 16802, USA [email protected], [email protected], [email protected] =============================================================================================================================== An author name disambiguation (AND) algorithm identifies a unique author entity record from all similar or identical publication records in scholarly or similar databases. Typically, a clustering method is used, which requires calculating similarities between each possible record pair. However, the total number of pairs grows quadratically with the size of the author database, making such clustering difficult for millions of records. One remedy for this is a blocking function that reduces the number of pairwise similarity calculations. Here, we introduce a new way of learning blocking schemes by using a conjunctive normal form (CNF), in contrast to the disjunctive normal form (DNF). We demonstrate on PubMed author records that CNF blocking removes more pairs while preserving high pairs completeness compared to previous methods that use a DNF, with significantly reduced computation time. Thus, these concepts in scholarly data can be better represented with CNFs. Moreover, we show how to ensure that the method produces disjoint blocks so that the rest of the AND algorithm can be easily parallelized. Our CNF blocking, tested on the entire PubMed database of 80 million author mentions, efficiently removes 82.17% of all author record pairs in 10 minutes. § INTRODUCTION Author name disambiguation (AND) is the problem of identifying each unique author entity record from all publication records in scholarly databases <cit.>. It can be thought of as a special case of named entity recognition <cit.> and named entity linking <cit.>, recognizing the same entities from structured data rather than free text. It is an important pre-processing step for a variety of problems. One example is properly processing author-related queries (e.g., identifying all of a particular author's publications) in a digital library search engine. Another is calculating author-related statistics, such as the h-index, and studying collaboration relationships between authors. Typically, a clustering method is used to perform AND. Clustering requires calculating a pairwise similarity for each possible pair of records to determine whether the pair should be in the same cluster. Since the number of possible pairs in a database is n(n-1)/2, it grows as O(n^2) with the number of records n. Since n can be millions of authors in databases such as PubMed, the AND algorithm needs a blocking method to scale <cit.>. A list of candidate pairs is generated by blocking, and only the pairs on the list are considered for clustering. A good blocking method should balance efficiency and completeness: efficiency means minimizing the number of pairs to be considered, while completeness means ensuring that coreferent pairs remain after blocking. Blocking is made up of blocking predicates. Each predicate is a logical binary function that selects a set of records based on a combination of an attribute (blocking key) and a similarity criterion.
One example is “exact match of the last name". A simple but effective way of blocking is to select predicates manually, based on the characteristics of the data. Most recent work on large-scale AND uses a heuristic combining “initial match of first name" and “exact match of last name" <cit.>. Although this gives reasonable completeness, it can be problematic when the database is extremely large, such as CiteSeerX (10.1M publications, 32.0M authors), PubMed (24.4M publications, 88.0M authors), and Web of Science (45.3M publications, 162.6M authors)[Those numbers are measured at the end of 2016.]. Table <ref> shows the blocking result on PubMed using this heuristic. The results show that most blocks contain fewer than 100 names, but a few blocks are extremely large. Since the number of pairs grows quadratically, those few blocks can dominate the calculation time. This imbalance in block size is due to the popularity of certain surnames, especially Asian surnames. To make matters worse, the problem grows over time, since the number of publication records is increasing rapidly. Figure <ref> shows PubMed's cumulative number of publication records. To improve blocking, several works <cit.> have proposed learning it. These approaches can be categorized into two types. One is disjoint blocking, where the blocks are separated so that no record belongs to multiple blocks; <cit.> belong to this category. The other is non-disjoint blocking, where some blocks share records; <cit.> are examples. Each has advantages: disjoint blocking makes the clustering step easy to parallelize, since each block produced can be clustered independently, while non-disjoint blocking produces smaller blocks, since it uses both disjunction and conjunction and has more degrees of freedom in selecting the similarity criteria. Here, we first propose to learn a non-disjoint blocking function in conjunctive normal form (CNF). We then extend the method to produce disjoint blocks. Our main contributions can be summarized as follows: * We propose CNF blocking, inspired by CNF learning <cit.>. We show that, in the domain of scholarly data, CNF blocking removes more pairs than DNF blocking while achieving high pairs completeness. Furthermore, since pairs can be rejected early, processing is faster. * To take advantage of disjoint blocking, we extend the method to produce disjoint blocks, so that the clustering step of the AND process can be easily parallelized. * A gain function is used to find the best term to add at each step of learning the blocking function. We compare different gain functions introduced in previous work. The paper is organized as follows. In the next section, we discuss previous work. This is followed by the problem definition. Next, we describe the learning of CNF blocking and how to make use of CNF blocking while ensuring the production of disjoint blocks. After that, we evaluate our methods on a PubMed evaluation dataset. Finally, the last section summarizes our work and proposes future directions. § RELATED WORK Blocking has been widely studied for record linkage and disambiguation. Standard blocking is the simplest but most widely used method <cit.>. It considers only pairs that satisfy all blocking predicates, where each blocking predicate corresponds to a choice of blocking key and similarity criterion. Another approach is the sorted neighborhood method <cit.>.
It sorts the data by a certain blocking predicate and then, for each record, forms pairs with the records within a window for further processing. Yan et al. <cit.> further improved this method by selecting the size of the window adaptively. Aizawa and Oyama <cit.> introduced suffix array-based indexing, which uses an inverted index of suffixes to generate candidate pairs; each record forms pairs with the records sharing at least one suffix. There is also canopy clustering <cit.>, which generates blocks by clustering with a simple similarity measure and two thresholds, namely a loose similarity and a tight similarity. While generating a cluster, all records within the loose similarity are inserted into the cluster, but only those within the tight similarity are removed from the set of candidate records, so the algorithm generates overlapping clusters. Most of these methods were compared in two recent surveys by Christen <cit.> and Papadakis et al. <cit.>. These surveys show that there is no clear winner among the methods and that proper parameter tuning is required for each specific task. Thus, in this paper we focus mainly on optimizing the blocking function for standard blocking, because of its simplicity and the small computational overhead of applying it. Another benefit is that the blocks produced by standard blocking are easy to process in parallel in subsequent steps, such as clustering, as long as the blocks are mutually exclusive. Much work has optimized the blocking function for standard blocking. The blocking function is typically represented as a logical formula over blocking predicates. Two works on learning disjunctive normal form (DNF) blocking <cit.> were published in the same year. Making use of manually labeled record pairs, they applied a sequential covering algorithm to find optimal blocking predicates in a greedy manner. Additional unlabeled data was used to estimate the reduction ratio in the cost function of <cit.>, while in <cit.> an unsupervised algorithm with rule-based heuristics was used to automatically generate labeled pairs for learning DNF blocking. All the work above proposed to learn DNF blocking, which is a disjunction (OR) of conjunctions (AND); it produces non-disjoint blocks because of the logical OR terms. Other work learns the blocking function as a pure conjunction to ensure that disjoint blocks are generated. Das et al. <cit.> learn a conjunctive blocking tree, which has different blocking predicates for each branch of the tree. Fisher et al. <cit.> produce blocks subject to a size restriction by generating candidate blocks from a list of predefined blocking predicates and then performing merge and split steps to obtain blocks of the desired size. Our work proposes a method for learning a non-disjoint blocking function in conjunctive normal form (CNF) and later extends it to guarantee disjoint blocks, which has not previously been studied in this field. Our method is inspired by the CNF learner proposed in <cit.>, which exploits the idea that a CNF is the logical dual of a DNF. § PROBLEM DEFINITION Our work tackles the same problem as DNF blocking <cit.>. Let R={r_1, r_2, ⋯, r_n} be the set of records in the database, where n is the number of records. Each record r has k attributes, and let A={a_1, a_2, ⋯, a_k} be the attribute set. A blocking predicate p is a combination of an attribute a and a similarity function s defined on a. An example of s is exact string match on a.
A blocking predicate is a logical binary function applied to each pair of records, so p(r_x, r_y) ∈ {0,1}, where r_x, r_y ∈ R. A blocking function f is a Boolean formula over blocking predicates p_1, p_2, ⋯, p_n, in which the predicates are connected by conjunction ∧ or disjunction ∨. An example is f_example = (p_1 ∧ p_2) ∨ p_3. Since it is made up of blocking predicates, f(r_x,r_y) ∈ {0,1} for all r_x, r_y ∈ R. The goal is to find an optimal blocking function f^* that covers the minimum number of record pairs while missing at most a fraction ε of the matching record pairs. Formally, f^* = argmin_f ∑_(r_x, r_y)∈ R f(r_x, r_y), s.t. ∑_(r_i, r_j)∈ R^+ f(r_i, r_j) ≥ (1-ε) ×| R^+ |, where R^+ is the set of matching record pairs. § LEARNING BLOCKING FUNCTION In this section, we first review DNF blocking <cit.>. Then we introduce our CNF blocking, which can be implemented with a small modification of the DNF blocking algorithm. We then describe several gain functions, which are used to select an optimal predicate or term at each step of learning the CNF and DNF. Finally, we discuss an extension that ensures the production of disjoint blocks for easy parallelization. §.§ DNF blocking DNF blocking was originally proposed in two parallel works <cit.> in the same year. Although some details differ, the main idea is similar. Given labeled pairs, they attempt to learn the blocking function in the form of a DNF, i.e., a disjunction (logical OR) of conjunction (logical AND) terms. Since learning DNFs is known to be an NP-hard problem <cit.>, they proposed an approximation algorithm that learns k-DNF blocking using a sequential covering algorithm, where k-DNF means each conjunction term has at most k predicates. Algorithm <ref> shows the process of DNF blocking. The function LearnDNF in lines 16-39 is the main function of the algorithm. It takes three inputs: L, the labeled sample pairs; P, the blocking predicates; and k, the maximum number of predicates considered in each conjunction term. First, to reduce the computation, the algorithm selects a set of candidate conjunction terms with at most k predicates; it is not practical to use all possible combinations, since the time complexity would be exponential in k. For each predicate p, it generates k candidate conjunction terms with the function LearnConjTerms in lines 1-14, which iteratively selects the predicate p_i that has the best gain value when added to the conjunction term selected in the previous step (line 8). The gain value is calculated by the function CalcGain; there are several different metrics for calculating it, which we discuss in detail in a later section. Using the candidate terms, the algorithm learns a DNF blocking function by running a sequential covering algorithm (lines 26-35). In each iteration, it selects the conjunction term Term with the maximum gain value from the set of candidate conjunction terms Terms and attaches it to the DNF with a logical OR; the gain is again calculated with the function CalcGain. Then all positive and negative samples covered by Term are removed. This process repeats until the desired minimum number of positive samples is covered or no additional candidate term produces a positive gain value. §.§ CNF blocking CNF blocking can be learned with a small modification of the DNF blocking algorithm. First, we review some basic Boolean algebra to understand the relation between CNF and DNF. A CNF can be expressed as the negation of a corresponding DNF, and vice versa, using De Morgan's laws.
De Morgan's laws are as follows: ¬(A ∧ B) ↔ (¬ A) ∨ (¬ B) and ¬(A ∨ B) ↔ (¬ A) ∧ (¬ B). For example, assume we have a DNF formula A ∨ (B ∧ C). The negation of this formula is a CNF formula, by (<ref>) and (<ref>): ¬(A ∨ (B ∧ C)) = ¬ A ∧ ¬(B ∧ C) = ¬ A ∧ (¬ B ∨ ¬ C). Using this fact, Mooney proposed an approximate CNF learner <cit.> that is a logical dual of DNF learning. Inspired by it, we present our CNF blocking method, which is a logical dual of DNF blocking. Algorithm <ref> shows the proposed CNF blocking. The algorithm has a structure similar to Algorithm <ref>. Instead of running a sequential covering algorithm to cover all positive samples, CNF blocking first tries to cover all negative samples using negated blocking predicates. In other words, we learn a DNF formula composed of negated predicates, which we call a negated DNF (NegDNF in Algorithm <ref>); NegP is the set of negations of the predicates p in P. The main function LearnCNF takes the same three inputs as LearnDNF in Algorithm <ref>: L, the labeled sample pairs; P, the blocking predicates; and k, the maximum number of predicates in each term. The algorithm runs as follows. First, it generates a set of negated candidate conjunction terms, Terms, from all p in NegP (line 16), using a dual of the original gain function, CalcNegGain, to select the predicates for each negated candidate conjunction. Then, as in DNF blocking, it runs the sequential covering algorithm to learn the negated DNF formula (lines 27-38), iteratively adding a negated conjunction term until the desired number of samples is covered; the negated conjunction term is again selected with the dual gain function CalcNegGain. Note that the termination condition of the loop (line 27) is reached when a fraction ε of the total positive samples is covered by the learned NegDNF, which ensures that we miss at most ε of the total positive samples in the final CNF formula. After obtaining the final NegDNF, we negate it to get the desired CNF. §.§ Gain Function The gain function estimates the benefit of adding a specific term to the currently learned formula in DNF/CNF blocking. It is used in two places in the algorithms: when choosing the candidate conjunctions (line 8 in both Algorithm <ref> and <ref>) and when choosing a term from the candidates in each iteration (line 27 in Algorithm <ref>, line 28 in Algorithm <ref>). Previous methods <cit.> each proposed a different gain function; here we describe the original functions for DNF blocking and their duals used for our CNF blocking, and we compare the results in the experiments section. P and N denote the total numbers of positive and negative samples, and p and n denote the numbers of remaining positive and negative samples covered by the term. §.§.§ Information Gain This originates from Mooney's DNF and CNF learners <cit.>. The gain function for the DNF learner is gain_DNF = p × [log(p/(p+n)) - log(P/(P+N))]. The gain function for the CNF learner is calculated in the same way; in this case, since we cover negative samples, gain_CNF = n × [log(n/(n+p)) - log(N/(N+P))]. §.§.§ Ratio Between Positive and Negative Samples Covered Bilenko et al. <cit.> used this for their DNF blocking. It is the ratio between the numbers of positive and negative samples covered: gain_DNF = p/n, gain_CNF = n/p. §.§.§ Reduction Ratio Michelson and Knoblock <cit.> picked terms with the maximum reduction ratio (RR) and, in addition, filtered out all terms with pairwise completeness (PC) below a threshold t.
This can be expressed as: gain_DNF = (p+n)/(P+N) if p/P > t, and 0 otherwise; gain_CNF = (p+n)/(P+N) if n/N > t, and 0 otherwise. §.§ Learning Disjoint Blocks Blocking methods can be categorized into non-disjoint and disjoint. A blocking method is disjoint if applying it separates all records into mutually exclusive blocks. A blocking function is disjoint if and only if it satisfies the following conditions: 1) it consists only of a pure conjunction (logical AND), and 2) all predicates use non-relative similarity measures, i.e., measures that compare the absolute value of the blocking key, e.g., exact match of the first n characters. DNF and CNF blocking are both forms of non-disjoint blocking because of the first condition. Disjoint blocking has the advantage that parallelization can be performed easily after applying the blocking method, by running a separate process for each block. A weakness of this form is that an ordinary disjoint method tends to produce larger blocks, because its formula cannot use logical OR and only non-relative similarity measures can be used for the candidate predicates. We introduce a simple extension that ensures our CNF blocking produces disjoint blocks (Algorithm <ref>). This is done by producing two blocking functions. The first is a blocking function with only conjunctions; it is learned by running our CNF blocking method with k=1 on a set of predicates with non-relative similarity measures, P_disjoint, since a 1-CNF is a pure conjunction. Then we learn a CNF blocking with our ordinary k-CNF method on the whole set of predicates P_full, for the pairs remaining after applying the 1-CNF. After learning both, we first apply the 1-CNF to the whole database to produce disjoint blocks. Then, for each block, we apply the second k-CNF blocking function to identify pairs for further processing (clustering); pairs that are not consistent with the k-CNF are filtered out and treated as non-matched pairs. This is similar to the filtering methods of Gu and Baxter <cit.> and Khabsa et al. <cit.>; the difference is that we learn the filters instead of choosing them heuristically. The combined method still produces a CNF, since it combines the conjunction terms and the k-CNF with a logical AND, and the first part, consisting of a pure conjunction, ensures the production of disjoint blocks. § EXPERIMENTS §.§ Benchmark Dataset We use PubMed for the evaluation. PubMed is a public large-scale scholarly database maintained by the National Center for Biotechnology Information (NCBI) at the National Library of Medicine (NLM). We use NIH principal investigator (PI) data for evaluation, which include PI IDs and the corresponding publications. We randomly picked 10 names from the most frequent ones in the dataset and verified that all publications belong to each PI. The set of names includes C* Lee, J* Chen, J* Smith, M* Johnson, M* Miller, R* Jones, S* Kim, X* Yang, Y* Li, Y* Wang, where C* denotes any name starting with C. Table <ref> shows the statistics of the dataset. Experiments are done with 2-fold cross-validation. §.§ Methodology §.§.§ Evaluation Metrics We evaluate our CNF blocking with the reduction ratio (RR), pairs completeness (PC), and F-measure, metrics often used to evaluate blocking methods. They are calculated as follows: RR = 1 - (p+n)/(P+N), PC = p/P, F = 2 × RR × PC/(RR + PC), where P, N are the numbers of positive/negative samples, and p, n are the numbers of positive/negative samples covered by the blocking function.
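Expressed in code, these definitions can be computed directly from the pair counts; the following minimal Python sketch is purely illustrative (the example counts are hypothetical) and is not part of the original implementation:

def blocking_metrics(p, n, P, N):
    # Reduction ratio, pairs completeness and their harmonic mean (F),
    # from covered (p, n) and total (P, N) positive/negative pair counts.
    rr = 1.0 - (p + n) / (P + N)      # fraction of all pairs removed by blocking
    pc = p / P                        # fraction of matching pairs that survive blocking
    f = 2.0 * rr * pc / (rr + pc) if (rr + pc) > 0 else 0.0
    return rr, pc, f

# Example: 990 of 1000 true pairs kept while only 5% of all pairs are covered.
print(blocking_metrics(p=990, n=49_010, P=1_000, N=999_000))  # ~(0.95, 0.99, 0.97)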
RR measures the efficiency of the blocking function, and PC measures its quality; F is the harmonic mean of RR and PC. §.§.§ Blocking Predicates Used We define two different sets of blocking predicates. As discussed in the previous section, disjoint blocking requires predicates with non-relative similarity measures (e.g., exact match) to ensure that the blocks are mutually exclusive, whereas non-disjoint blocking has more degrees of freedom and can include relative similarity measures (e.g., TF-IDF cosine distance). The blocking predicate sets for disjoint and non-disjoint blocking are described in Table <ref> and Table <ref>, respectively. We observed an important characteristic of the data: some attributes are empty. For example, 92.2% of the author mentions include a year, 19.9% have an affiliation, and 54.5% have only an initial for the first name. To deal with this, we add a compatible criterion for those blocking keys. Below is a brief explanation of each similarity criterion. * exact: Exact match. * first(n), last(n): Match of the first/last n characters, where n is an integer. We check {1, 3, 5, 7} for name attributes. * order: True if both records are first authors, last authors, or non-first and non-last authors. * digit(n): Match of the first n digits. We check {1, 2, 3} for year. * compatible: True if at least one of the records is empty (Eq. <ref>). If the key is a name, it also checks whether the initial matches when one of the records has only an initial (Eq. <ref>): compatible(A,B) = True if at least one is empty, and exact(A,B) otherwise; for names, compatible(A,B) = True if at least one is empty, exact(A,B) if both are full names, and first1(A,B) otherwise. * cos: Cosine distance of TF-IDF bag-of-words vectors. We check thresholds {0.2, 0.4, 0.6, 0.8}. * diff: Year difference. We use thresholds {2, 5, 10}. §.§.§ Parameter Setting ε is used to vary the PC; we tested values in [0,1] to obtain the PC–RR curve. k is selected experimentally based on the maximum reachable F-measure, which was 0.9458, 0.9531, 0.9540 and 0.9535 for k=1, 2, 3, 4, respectively. For k>5, the result was the same as for k=4, which means that no selected term contained more than four predicates. Since k=3 had the highest F, we use it for further experiments. §.§ Experimental Results §.§.§ Gain Function We tested the three different gain functions introduced in the previous section. Figure <ref> shows the PC–RR curves generated by testing various ε values. Blocking usually requires high PC so that matched pairs are not lost after it is applied, so we focused on high PC values. As can be seen from the results, information gain has the highest RR overall; we therefore use it as the gain function in the rest of the experiments. §.§.§ Non-disjoint CNF Blocking We compare the non-disjoint CNF blocking method with DNF blocking and canopy clustering <cit.>. We use a supervised method to train the DNF <cit.> to take advantage of the training data, and we use the Jaro–Winkler distances of the attributes for canopy clustering. Figure <ref> shows the PC–RR curve for each method. Both CNF and DNF are better than canopy clustering, as was shown in Bilenko et al. <cit.>. The CNF and DNF results are comparable except at high PC (0.9) values. From these results, concepts in the domain of scholarly data can be better represented with CNFs. For the AND problem, blocking requires high PC so that positive pairs are not excluded from the clustering process, so CNF is preferred.
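To make the pair-filtering mechanics concrete, the following minimal Python sketch shows how a CNF of blocking predicates can be evaluated with early rejection: a pair is discarded as soon as any clause fails. The predicate helpers and record fields here are hypothetical stand-ins for entries such as (ln, exact) or (fn, first(1)); this is not the implementation used in this work.

def last_exact(a, b):
    # hypothetical predicate corresponding to (ln, exact)
    return a["last_name"] == b["last_name"]

def first_initial(a, b):
    # hypothetical predicate corresponding to (fn, first(1))
    return a["first_name"][:1] == b["first_name"][:1]

# A CNF is a conjunction of clauses; each clause is a disjunction of predicates.
blocking_cnf = [[last_exact], [first_initial]]

def cnf_keep_pair(cnf, a, b):
    for clause in cnf:                            # every clause must be satisfied ...
        if not any(pred(a, b) for pred in clause):
            return False                          # ... so a failing clause rejects the pair early
    return True

x = {"first_name": "Jane", "last_name": "Smith"}
y = {"first_name": "John", "last_name": "Smith"}
print(cnf_keep_pair(blocking_cnf, x, y))          # True: same last name and same first initial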
Another advantage of using a CNF is the processing time. Table <ref> compares the training time and blocking time at PC=0.99, excluding the calculation of string distances. The computation time of CNF is significantly reduced compared to DNF. This is because a CNF is a conjunction of disjunction terms, so it can quickly reject pairs that fail any of its clauses, whereas a DNF is a disjunction of conjunction terms, so every term must be checked for each pair before a final decision can be made. Canopy clustering is the fastest, but it degrades RR significantly in the high-PC setting. The learned CNF is also simpler than the DNF. The CNF learned at this level is given below (fn, mn, ln denote first, middle, and last name, respectively): {(fn,first(5))∨(fn,compatible)∨(coauth,cos(0.8))} ∧ {(ln,exact)} ∧ {(mn,compatible)} ∧ {((fn,first(3))∨(fn,compatible))} and the learned DNF is: {((coauth,cos(0.8))∧(ln,exact)∧(mn,compatible))} ∨ {((venue,cos(0.4))∧(mn,first(1))∧(fn,compatible))} ∨ {((fn,compatible)∧(mn,first(1))∧(ln,exact))} ∨ {((venue,cos(0.8))∧(fn,exact))} In addition, we observed that the proposed compatible predicate is used frequently in the learned formulas, which shows its effectiveness in dealing with empty values. §.§.§ Disjoint CNF Blocking We evaluate our extension for making disjoint blocks with CNF blocking. We compare the blocking learned with a pure conjunction, our proposed method, and the method of Fisher et al. <cit.>. Figure <ref> shows the RR–PC curve for each method; we also plot the original non-disjoint CNF blocking for comparison. Our proposed disjoint CNF blocking is the winner among the disjoint methods. Fisher's method produces nearly uniform-sized blocks, but it has difficulty reaching high PC, and its RR is generally similar to that of the pure conjunction at the same PC level. Disjoint CNF shows some degradation compared with the non-disjoint CNF because it is forced to use a pure conjunction and a limited set of predicates in the first part. However, this simple extension makes it easy to parallelize the clustering process, and parallelization is important for scaling the disambiguation algorithm to PubMed-scale scholarly databases <cit.>. The processing time of disjoint CNF blocking is comparable to that of the original non-disjoint CNF blocking: it takes 1.57 s at PC=0.99. The learned disjoint CNF is: {(fn,first(1))} ∧ {(ln,exact)} ∧ {(fn,compatible)∨(coauth,cos(0.8))} ∧ {(mn,compatible)} The first two terms come from the 1-CNF learner and the others from the 3-CNF learner. We also applied this function to the whole of PubMed: 82.17% of the pairs are removed by the learned blocking function, and the running time is 10.5 min with 24 threads. § CONCLUSION We have shown how to learn an efficient blocking function as a conjunctive normal form (CNF) of blocking predicates. Using the CNF as the negation of a corresponding disjunctive normal form (DNF) of predicates <cit.>, our method is the logical dual of existing DNF blocking methods <cit.>. We find that a learned CNF blocking function removes more pairs at high target pairs completeness, with a faster run time. We also devise an extension that ensures our CNF blocking produces disjoint blocks, so that the clustering process can be easily parallelized. Future work could improve the CNF method so that different blocking functions are used at different levels, i.e., learned separately for each block produced by a pure conjunction blocking function <cit.>. Instead of using a sequential covering algorithm, the feasibility of using linear programming to find an optimal CNF <cit.> could also be explored.
§ ACKNOWLEDGMENTS We gratefully acknowledge partial support from the National Science Foundation.
Corresponding author. E-mail: [email protected] School of Mechanical and Material Engineering, Xi'an University of Arts and Science, Xi'an 710065, China Department of Physics and Astronomy, Texas A&M University-Commerce, Commerce, TX 75429-3011, USA Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China College of Physics and Technology, Guangxi Normal University, Guilin 541004, China Guangxi Key Laboratory Breeding Base of Nuclear Physics and Technology, Guilin 541004, China School of Mechanical and Material Engineering, Xi'an University of Arts and Science, Xi'an 710065, China School of Mathematics and Physics, Bohai University, Jinzhou 121013, China Within the isospin- and momentum-dependent transport model IBUU11, we examine the relativistic retardation effects of electrical fields on the π^-/π^+ ratio and the neutron-proton differential transverse flow in heavy-ion collisions at intermediate energies. Compared to the static Coulomb fields, the retarded electric fields of fast-moving charges are known to be anisotropic, and the associated relativistic corrections can be significant. They are found to increase the number of energetic protons in the participant region at the maximum compression by as much as 25%, but that of energetic neutrons by less than 10%, in ^197Au+^197Au reactions at a beam energy of 400 MeV/nucleon. Consequently, more π^+ and relatively fewer π^- mesons are produced, leading to an appreciable reduction of the π^-/π^+ ratio compared to calculations with the static Coulomb fields. Also, the neutron-proton differential transverse flow, another sensitive probe of the high-density symmetry energy, is decreased appreciably, because the retarded electrical fields are stronger in the directions perpendicular to the velocities of fast-moving charges than the isotropic static electrical fields used in standard calculations. Moreover, the retardation effects on these observables are found to be approximately independent of the reaction impact parameter. Effects of retarded electrical fields on observables sensitive to the high-density behavior of nuclear symmetry energy in heavy-ion collisions at intermediate energies Xu-Yang Liu December 30, 2023 ======================================================================================================================================================================= § INTRODUCTION Nuclear symmetry energy E_sym(ρ) at supra-saturation densities is currently the most uncertain part of the Equation of State (EOS) of dense neutron-rich nucleonic matter, which can be found in central heavy-ion collisions with rare isotopes in terrestrial laboratories, in the interiors of neutron stars, and in the overlapping regions of cosmic collisions involving neutron stars and/or black holes. Much effort has been devoted to extracting information about E_sym(ρ) from experimental observables and astrophysical messengers; see, e.g., refs. <cit.> for comprehensive reviews. Central heavy-ion reactions play a special role in this endeavor, as they are the unique tools available in terrestrial laboratories for forming dense neutron-rich matter. The heavy-ion reaction community has identified several promising probes of the high-density behavior of E_sym(ρ). In particular, the π^-/π^+ ratio of charged pions and the neutron-proton differential transverse flow have been found consistently, using several transport models, e.g., refs. <cit.>, to be among the observables most sensitive to the high-density behavior of nuclear symmetry energy.
For example, these different models all agree qualitatively that a larger (lower) value of E_sym(ρ) at supra-saturation densities leads to a more neutron-poor (neutron-rich) participant region and subsequently a lower (higher) π^-/π^+ ratio. Quantitatively, however, several studies have also shown that current predictions are still too model- and interaction-dependent <cit.> to allow a strong conclusion about the high-density E_sym(ρ) from comparing calculations with existing data <cit.>. While new experiments are being carried out <cit.>, more theoretical efforts have recently been devoted to investigating various uncertain aspects of pion production in heavy-ion collisions. These include the in-medium pion potential <cit.>, the isovector potential of Δ(1232) resonances <cit.>, the neutron skins of the colliding nuclei <cit.>, and tensor-force-induced short-range correlations <cit.>. Moreover, the transport reaction theory community has been making efforts to compare codes in order to better understand model dependence, identify best practices, and develop new strategies to more reliably extract information about the high-density symmetry energy from heavy-ion reactions at intermediate energies <cit.>. It is well known that the Coulomb field significantly affects the spectrum ratio of charged pions in heavy-ion reactions; see, e.g., refs. <cit.>. To the best of our knowledge, in most dynamical simulations of heavy-ion reactions so far, only the static Coulomb field E_static has been used. However, it is also well known that, according to the Liénard-Wiechert formula, the first-order relativistic correction to the electric field created by a charge moving with velocity v⃗ is E_static· [1+(3cos(θ)-1)· v/c], where θ is the angle between v⃗ and the field position vector. The correction is angle dependent and significant for fast-moving particles. For field points along the direction of motion of the charged particle, the correction is 2· v/c, which may have significant effects on charged pions or even protons in heavy-ion collisions at intermediate energies. Moreover, the retarded electric field is strongest in the direction perpendicular to the velocity of the charged particle, instead of being isotropic like the static Coulomb field. It is thus useful to examine how the relativistically retarded electrical field may affect experimental observables known to be sensitive to the high-density behavior of nuclear symmetry energy. In this work, the effects of relativistically retarded electric fields on the π^-/π^+ ratio and the neutron-proton differential transverse flow are studied in heavy-ion reactions at intermediate energies. We find that the retarded electric fields increase the number of energetic protons (neutrons) in the participant region by as much as 25% (less than 10%, as a secondary effect), leading to relatively more π^+ production and thus a reduction of the π^-/π^+ ratio by about 8% compared with calculations using the static Coulomb field, as is normally done in transport model simulations of heavy-ion collisions at intermediate energies. Appreciable effects on the neutron-proton differential transverse flow are also found. Moreover, these retardation effects are found to be approximately independent of the impact parameter of the reaction.
Thus, as an intrinsic feature of the electrical interactions of high-speed charged particles, relativistic retardation effects should be considered in order to predict more precisely the π^-/π^+ ratio and the neutron-proton differential transverse flow in heavy-ion collisions at intermediate energies. In the following, we first outline the major ingredients of the isospin- and momentum-dependent Boltzmann-Uehling-Uhlenbeck transport model (IBUU) <cit.> and recall the Liénard-Wiechert formalism in Section II. We then discuss our results in Section III. A summary is given in Section IV. § THE IBUU TRANSPORT MODEL INCORPORATING LIÉNARD-WIECHERT POTENTIALS The present study is carried out within the IBUU transport model <cit.>. In the IBUU11 version of this model, the nuclear mean-field potential is expressed as <cit.> U(ρ,δ ,p⃗,τ ) = A_u(x)ρ_-τ/ρ_0 + A_l(x)ρ_τ/ρ_0 + B/2(2ρ_τ/ρ_0)^σ(1-x) + 2B/(σ+1)(ρ/ρ_0)^σ(1+x)ρ_-τ/ρ[1+(σ-1)ρ_τ/ρ] + 2C_τ,τ/ρ_0∫ d^3p^' f_τ(p⃗^')/[1+(p⃗-p⃗^')^2/Λ^2] + 2C_τ,-τ/ρ_0∫ d^3p^' f_-τ(p⃗^')/[1+(p⃗-p⃗^')^2/Λ^2]. In the above expression, ρ=ρ_n+ρ_p is the nucleon number density and δ=(ρ_n-ρ_p)/ρ is the isospin asymmetry of the nuclear medium; ρ_n(p) denotes the neutron (proton) density, the isospin τ is 1/2 for neutrons and -1/2 for protons, and f_τ(p⃗) is the local phase-space distribution function. The parameters A_l(x) and A_u(x) take the forms <cit.> A_l(x) = A_l0 - 2B/(σ+1)[(1-x)/4σ(σ+1) - (1+x)/2], A_u(x) = A_u0 + 2B/(σ+1)[(1-x)/4σ(σ+1) - (1+x)/2]. Compared to the IBUU04 version <cit.> of the model, in which the modified Gogny MDI (momentum-dependent interaction) is used, the adjusted parameters A_l(x), A_u(x), C_τ,τ and C_τ,-τ used in IBUU11 take into account more accurately the spin-isospin dependence of the in-medium effective many-body force by distinguishing the density dependences of the nn, pp and np interactions in the effective three-body force term <cit.>. They also better fit the high-momentum behaviors of the isoscalar and isovector nucleon optical potentials extracted from nucleon-nucleus scattering experiments <cit.>. Using empirical constraints and properties of symmetric nuclear matter at normal density, the values of these parameters are determined to be A_l0 = -76.963 MeV, A_u0 = -56.963 MeV, B = 141.963 MeV, C_τ,τ = -57.209 MeV, C_τ,-τ = -102.979 MeV, σ = 1.2652 and Λ = 2.424 p_f0, where p_f0 is the nucleon Fermi momentum in symmetric nuclear matter at normal density. They lead to a binding energy of -16 MeV and an incompressibility of 230 MeV for symmetric nuclear matter, and a symmetry energy E_sym(ρ_0)=30.0 MeV at the saturation density ρ_0=0.16 fm^-3. The parameter x is introduced to mimic the different forms of the symmetry energy predicted by various many-body theories without changing any property of symmetric nuclear matter or the value of the symmetry energy at saturation density. The density dependences of the nuclear symmetry energy for different x parameters are shown in Fig. <ref>. In one of our previous works <cit.>, the complete Liénard-Wiechert potentials for both the electrical and magnetic fields were consistently incorporated into the IBUU11 code. As shown both analytically and numerically in detail in ref. <cit.>, the ratio of the Lorentz force to the Coulomb force is approximately (v/c)^2. The Lorentz force was found to have a negligible effect on the π^-/π^+ ratio of charged pions except at extremely forward/backward rapidities.
Thus, unless one considers the second-order relativistic correction to the electrical fields, the Lorentz force can be safely neglected, which speeds up the code dramatically by turning off the calculation of the magnetic fields. In this work, therefore, only the electric fields are calculated, according to the well-known Liénard-Wiechert expression eE⃗(r⃗,t) = e^2/4πε_0 ∑_n Z_n (c^2-v^2_n)(cR⃗_n-R_nv⃗_n)/(cR_n-R⃗_n·v⃗_n)^3, where Z_n is the charge number of the nth particle and R⃗_n=r⃗-r⃗_n is the position of the field point r⃗ relative to the source point r⃗_n, where the nth particle is moving with velocity v⃗_n at the retarded time t_n=t-|r⃗-r⃗_n|/c. Naturally, in the nonrelativistic limit v_n≪c, Eq. (<ref>) reduces to the static Coulomb field eE⃗(r⃗,t)=e^2/4πε_0∑_n Z_n R⃗_n/R_n^3. Obviously, the most important differences between the two formulas in Eqs. (<ref>) and (<ref>) are the relativistic retardation effects and the anisotropic nature of the retarded electrical fields of fast-moving charges. In the relativistic case, all charged particles contribute to eE⃗(r⃗,t) at the instant t and location r⃗ with their positions and velocities v⃗_n at the retarded times t_n, whereas in the nonrelativistic case the charged particles contribute to eE⃗(r⃗,t) only at the same instant t. Of course, the retardation effect depends on the reduced velocity β=v/c. In a typical reaction at a beam energy of 400 MeV/nucleon, available at several laboratories, the velocities of protons reach about 0.7c. This is high enough to warrant an investigation of the effects of the retarded electrical fields on observables useful for studying the high-density behavior of nuclear symmetry energy. It is well known that for a charge moving with a constant velocity v⃗ with respect to a rest frame S, its electrical field seen by an observer at rest in S is asymmetric, i.e., longitudinally reduced and transversely enhanced, such that it may look like a pancake at ultra-relativistic energies. More quantitatively, while the electrical field is enhanced by a factor γ=1/(1-β^2)^1/2 perpendicular to v⃗, it is weakened in the direction of motion by a factor 1/γ^2. In heavy-ion collisions at intermediate and higher beam energies, the two Lorentz-contracted nuclei (in terms of both their electrical fields and their matter distributions), moving in opposite directions, collide with each other under the influence of both strong and Coulomb forces as well as frequent nucleon-nucleon collisions. The complicated dynamics of such reactions is simulated using the IBUU11 transport model in this work. § RESULTS AND DISCUSSIONS First of all, it is necessary to mention that, to calculate the retarded electric fields eE⃗(r⃗,t), the phase-space histories of all charged particles before the moment t have to be saved in the transport model simulations. Moreover, a pre-collision phase-space history for all nucleons is constructed by assuming that they are frozen in the projectile and target, which move along their Coulomb trajectories. More technical details about calculating eE⃗(r⃗,t), as well as numerical checks against analytical solutions in idealized cases, can be found in ref. <cit.>. In the following illustrations, we present results for the ^197Au+^197Au reaction at a beam energy of 400 MeV/nucleon. The calculations are done in the CMS frame of the colliding nuclei; the beam is in the Z direction and the reaction plane is the X-o-Z plane.
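Schematically, the bookkeeping described above amounts to scanning each charged particle's saved phase-space history for the time slice that best satisfies the retarded condition t_n = t - |r⃗-r⃗_n|/c and evaluating Eq. (<ref>) there. The following single-particle Python sketch is purely illustrative (natural units and a hypothetical stored trajectory); it is not the IBUU11 implementation.

import numpy as np

C = 1.0  # speed of light in the natural units of this sketch

def retarded_E(r, t, history, Z=1.0):
    # Retarded electric field (up to the overall factor e^2/(4*pi*eps0)) at field point r
    # and time t, from one charge Z whose saved history is a list of (t_n, position, velocity).
    best, best_miss = None, np.inf
    for t_n, x_n, v_n in history:
        if t_n > t:
            continue
        R_vec = np.asarray(r) - np.asarray(x_n)
        R = np.linalg.norm(R_vec)
        miss = abs((t - t_n) - R / C)            # how well the retarded condition is met
        if miss < best_miss:
            best, best_miss = (R_vec, R, np.asarray(v_n)), miss
    if best is None:
        return np.zeros(3)
    R_vec, R, v = best
    denom = (C * R - R_vec @ v) ** 3
    return Z * (C**2 - v @ v) * (C * R_vec - R * v) / denom

# Hypothetical example: a single charge moving along +z at 0.7c, field at a transverse point.
dt = 0.1
history = [(k * dt, np.array([0.0, 0.0, 0.7 * C * k * dt]), np.array([0.0, 0.0, 0.7 * C]))
           for k in range(400)]
print(retarded_E(r=np.array([5.0, 0.0, 10.0]), t=30.0, history=history))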
In the following discussions, we refer the electrical fields calculated using the Liénard-Wiechert formula as the retarded fields while those from using the normal Coulomb formula in each time step as the static electrical fields. But we do generally refer all electrical forces as the Coulomb force.§.§ Evolutions of the anisotropic retarded electric fields in comparison with the static Coulomb fieldsTo help understand effects of the retarded electric fields on the reaction dynamics and experimental observables sensitive to the symmetry energy in heavy-ion collisions, we show and discuss in this subsection the time evolution and space distribution of retarded electric fields in comparison with the static Coulomb fields normally used in simulating heavy-ion reactions. We present here results for central reactions with impact parameters b/b_max≤ 0.15.Shown in the upper window of Fig. <ref> are the strength |eE| contours of the static and retarded electric fields in the reaction plane (X-o-Z) at three instants representing the initial compression, maximum compression and the expanding stage, respectively. Firstly, at the initial compression stage, both the static and retarded electric fields show two strong regions around the centers of the target and projectile above and below the origin of the coordinate. When the two electrical pancakes of projectile and target passing each other, the total electrical field is zero around the origin of the coordinate. Secondly, the retarded electric field is weaker (stronger) than the static electric field at the initial compression (expanding) stage as one expects due to the time delay in the relativistic calculations. Thirdly, at the maximum compression when the system has been sufficiently stoped and thermalized, the retarded electric field is obviously anisotropic in the reaction plane compared with the isotropic static electric field. The fields are appreciably stronger in the X direction but weaker in the Z direction. This feature is qualitatively what the Liénard-Wiechert formula predicts for moving charges. Of course, as we discussed earlier the dynamics of heavy-ion collisions are much more complicated than two approaching charges.Because of symmetries in the plane perpendicular to the beam direction, the strengths |eE| of the electrical fields in the X-o-Y plane are approximately spherically symmetric as shown in the lower window of Fig. <ref>. While overall the nuclei are moving in the ± Z direction, because of the Fermi motion in the initial state, nucleon-nucleon collisions, before reaching the maximum compressions, some particles have obtained significant velocity components in the X and Y directions although they are still less than the velocity component in the Z direction before a complete thermalization is realized. Thus, at the stage of maximum compression the electrical fields are still asymmetric in the X-o-Z plane. In more detail and quantitatively, shown in the upper window of Fig. <ref> are comparisons of the eE_x and eE_z in the reaction plane at the instant of 20 fm/c. For the retarded fields, vertically it is seen that the eE_x in the ± X directions is significantly higher and covers a larger area than theeE_z in the ± Z direction. While for the static fields the eE_x and eE_z are very close to each other as shown in the left panels of Fig. <ref>.In the X-o-Y plane, however, as shown in the lower window of Fig. 
<ref> the eE_x and eE_y are very close to each other for both the retarded and static fields.Because the Coulomb force is much smaller than the nuclear force, we do not expect the overall dynamics and global properties of nuclear reactions are affected by whether one uses the static or retarded electrical fields. Indeed, as shown in the density contours in the X-o-Z and X-o-Y planes in Fig. <ref>, respectively, their evolutions and distributions are very similar. However, it is worth noting that at 20 fm/c, as shown in the middle of the upper-right window of Fig. <ref> the retarded electric field leads to a slightly larger high density region due to the weakened repulsive force in the Z direction compared to the static calculation as we discussed above.§.§ Effects of the retarded electrical fields on the ratio We now turn to the relativistic retardation effects of electric fields on the ratio. In heavy-ion collisions at intermediate energies, pions are mostly produced from the decay of Δ(1232) resonances. To examine the dynamics of pion production in these reactions, one may use the dynamic pion ratio ()_ like defined as <cit.>(π^-/π^+)_ like≡π^-+Δ^-+1/3Δ^0/π^++Δ^+++1/3Δ^+.Because all Δ resonances will eventually decay into nucleons and pions, the ()_ like ratio will naturally become the free ratio at the end of the reaction. Shown in Fig. <ref> is the time evolution of the ()_ like ratio in central Au+Au collisions at a beam energy of 400 MeV/nucleon with retarded and static electric fields, respectively. The corresponding final ratio is shown in the upper window of Fig. <ref> as a function of the symmetry energy parameter x. Consistent with previous observations using most transport models, it is seen that the ratio is sensitive to the density dependence of nuclear symmetry energy E_sym(ρ) regardless how the electrical fields are calculated. A softer E_sym(ρ) leads to a higher ratio, reflecting a more neutron-rich participant region formed in the reaction.It has been a major challenge for the transport model community to predict accurately the final ratio and agree within 20%. Very often, the predicted effects on even the most sensitive observables, when the E_sym(ρ) is modified from being soft to stiff within the known limits using the same model, are on the order 10-50%.This is mainly because the nucleon isovector potential is much weaker than the isoscalar potential. Of course, the exact sensitivity depends on the reaction system and conditions used. Therefore, better understanding various factors affecting appreciably the proposed probes of the high-density behavior of nuclear symmetry energy has been a major goal of many recent works. In this context, it is interesting to see in both Figs. <ref> and <ref> that the ratio at the final stage is about 8% smaller in calculations with the retarded electric fields than those with the static ones approximately independent of the x parameter used. Moreover, as shown in the lower window of Fig. <ref> the multiplicities of both π^+ and π^- get increased by the retarded electrical fields. More quantitatively, the multiplicity of π^+ is increased by about 14% while that of π^- by less than 5%. These results are surprising as one normally expects the Coulomb field mainly affects the spectrum ratio of charged pions but not much their individual multiplicities. 
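Two small arithmetic checks may help here; they are done in plain Python with made-up multiplicities that are illustrative placeholders, not output of the calculations reported in this work. The first evaluates the (π^-/π^+)_like definition given above, and the second confirms that an increase of about 14% in the π^+ multiplicity together with an increase of about 5% in the π^- multiplicity corresponds to roughly the 8% reduction of the π^-/π^+ ratio quoted in the text.

```python
from fractions import Fraction

def pion_like_ratio(n_pim, n_pip, n_dm, n_d0, n_dp, n_dpp):
    """(pi-/pi+)_like = (pi- + Delta- + Delta0/3) / (pi+ + Delta++ + Delta+/3)."""
    return (Fraction(n_pim) + n_dm + Fraction(n_d0, 3)) / \
           (Fraction(n_pip) + n_dpp + Fraction(n_dp, 3))

# illustrative multiplicities only: pi-, pi+, Delta-, Delta0, Delta+, Delta++
print("like ratio:", pion_like_ratio(30, 10, 8, 24, 18, 6))   # -> 23/11

# consistency of the quoted numbers: +14% pi+ and +5% pi- lower pi-/pi+ by about 8%
change = 1.05 / 1.14 - 1.0
print(f"relative change of the pi-/pi+ ratio: {100 * change:.1f}%")
```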
It is also surprising to see that there is a small increase in the multiplicity of π^- which is mainly from nn inelastic scatterings that are not directly affected by the variation of the electrical fields.To understand the above observations, we investigate the relative change in nucleon kinetic energy distributions due to using the retarded electrical fields compared to the static ones. For this purpose, we examine the ratioR^i= Number(i)_R/ Number(i)_S,   i≡neutron or proton of nucleons with local densities higher than ρ_0 at the maximum compression stage (20 fm/c) in the Au+Au reactions with the retarded (R) and static (S) electrical fields. Shown in Fig. <ref> are the R^n and R^p as a function of nucleon kinetic energy. Interestingly, it is seen clearly that both the R^n and R^p are larger than 1 for energetic nucleons above about 120 MeV, indicating that the retarded electric fields indeed increase (decrease) the number of high (low) energy nucleons, especially protons. More quantitatively, the number of energetic protons (neutrons) is increased by as much as 25% (10%). As the system approaches the maximum compression where the thermalization is the highest, more energetic particles are being shifted continuously to lower energies. Thus, before reaching the maximum compression there are even more energetic nucleons than indicated by Fig. <ref> with the retarded electrical fields. These increased numbers of energetic nucleons are responsible for the increased production of pions. As we discussed earlier, one of the major features of the retarded electrical field is its asymmetries, namely its longitudinal component is reduced by 1/γ^2 while its transverse component gets enhanced by γ compared to the static fields. The stronger transverse electrical field can accelerate more charged particles to higher energies. Some protons can gain enough kinetic energies to bring certain pp collisions above the pion production threshold, leading to more π^+ mesons. While neutrons are not affected directly by the electrical fields, secondary collisions between neutrons and energetic protons can increase the kinetic energies of neutrons. In addition, neutrons couple to charged Δ resonances through Δ^-↔ n+π^- and Δ^+↔ n+π^+ reaction channels which are affected directly by the electrical fields. Thus, the kinetic energy of neutrons, consequently the π^- multiplicity, can also be increased by the retarded electrical fields albeit at a lower level than protons and the π^+ multiplicity.Next, we investigate the impact parameter dependence of relativistic retardation effects of electrical fields. Shown in Fig. <ref> are the ratios in Au+Au collisions at 400 MeV/nucleon obtained using the static and retarded electric fields, respectively, as functions of centrality. It is seen that the reduction of the ratio due to the retardation effects is approximately independent of the impact parameter. Overall, since the retardation effect is an intrinsic feature of electrical interactions of charged particles moving at high speeds, given its appreciable effects on the ratio shown above, it should be considered when the ratio in heavy-ion collisions is used as a probe of high-density symmetry energy. To this end, some discussions about comparing with available experimental data especially the ones from the FOPI collaboration <cit.> are in order. In principle, the results shown in Figs. <ref> and <ref> can be compared to the data after considering possible detector filterings. 
A direct and rough comparison with the FOPI data indicates that the π^+ multiplicity is underpredicted by about 25% approximately x-independently even with the retardation effect. While the π^- multiplicity is close to the data only with the super-soft of x=2 but is underpredicted by about 25% with the super-stiff of x=-2. Within the approximately 10% uncertainty of the data, the calculated ratios with or without the retardation effect can all reasonably reproduce the data with the x parameter from about 1 to -1. Since we are not considering in the present work several effects, such as the pion potential, the uncertainty of the Δ isovector potential, the isospin-dependent high-momentum tails of nucleons due to the short-range correlations in both the initial state and during the reaction, that have been shown recently to affect appreciably the ratio as we discussed in the introduction, we are unable to make a solid conclusion regarding the from these comparisons. Obviously, a more comprehensive comparison of the pion data with calculations considering all of the aforementioned effects are necessary before making a final conclusion regrading the high-density behavior of nuclear symmetry energy. Nevertheless, we are confident that the relative effects of the relativistically retarded electrical fields observed in this work are physically sound and should be considered together with the other effects mentioned above in extracting eventually the from analyzing the pion data. §.§ Effects of retarded electrical fields on neutron-proton differential transverse flowFrom Fig. <ref> we have already seen that the retarded electrical fields affect neutrons and protons differently. Especially, the energetic nucleons are affected differently depending on the stiffness parameter x of nuclear symmetry energy. Moreover, as we discussed in detail, because of the asymmetry of the retarded electrical fields the motions of charged particles are influenced differently in transverse and longitudinal directions. It is known that the neutron-proton differential transverse flow probes sensitively the nuclear isovector potential without much interference by the isoscalar potential <cit.>. Depending on the isospin asymmetry of the system, the isovector potential proportional to δ can be very small. While the electrical force is much weaker than nuclear force, the difference between the retarded electrical field and the static one might be as big as the nuclear isovector potential even in the most neutron-rich nuclei. It is thus interesting to study if and how the relativistically retarded electrical fields may affect the neutron-proton differential transverse flow. The neutron-proton differential transverse flow was defined as <cit.>p^np_x(y)=1/N(y)∑^N(y)_i=1p_x_iτ_iwhere N(y) is the number of free nucleons with local densities less than ρ_0/8 at rapidity y, and τ_i is 1 for neutrons and -1 for protons. It was proposed as a sensitive probe of the high-density behavior of nuclear symmetry energy. It has the advantage of enhancing the signal strength of the symmetry energy by: (1) combining constructively effects of the symmetry potential on the isospin fractionation and nucleon transverse collective flow, and (2) maximizing effects of the symmetry potential while minimizing those of the isoscalar potential <cit.>. Shown in Fig. 
<ref> are the neutron-proton differential transverse flows in the central (upper window) and peripheral (lower window) Au+Au collisions with symmetry energies from being super-hard of x=-2 to super-soft of x=2. Obviously, the sensitivities of neutron-proton differential flow to the stiffness of symmetry energy are clearly visible irrespective of the electric fields used. Moreover, since the retarded electric fields are stronger in directions perpendicular to the velocities of charged particles, the neutron-proton differential transverse flow is reduced appreciably as a whole compared to the calculations using the isotropic static electric fields. Furthermore, the relativistic retardation effects of electric fields on the neutron-proton differential flow are approximately independent of the impact parameter of the reaction. It is worth noting here that we have also investigated effects of the retarded electrical fields on the transverse flows of neutrons and protons themselves. The effects are negligible as expected because the nuclear force overwhelms the Coulomb force.§ SUMMARY In summary, we investigated effects of relativistically retarded electrical fields on the ratio and neutron-proton differential transverse flow in Au+Au collisions at a beam energy of 400 MeV/nucleon. Compared to the isotropic static Coulomb fields, the retarded electrical fields are anisotropic and strongest (weakest) in directions perpendicular (parallel) to the velocities of charged particles. As a result, some charged particles get accelerated by the enhanced electrical fields in some directions. These more energetic particles help produce more π^+ than π^- mesons, leading to an appreciable reduction of the ratio.Also, the neutron-proton differential transverse flow is also decreased appreciably due to the stronger retarded electrical fields in directions perpendicular to the velocities of charged particles compared to calculations using the static Coulomb fields. Moreover, these features are approximately independence of the impact parameter of the reaction. As the next step, how these effects may depend on the beam energy and N/Z ratio of the reaction system are being investigated systematically. In conclusion, the relativistic retardation effects of electrical fields of fast-moving charges should be considered in simulating heavy-ion collisions at intermediate energies to more precisely constrain the high-density behavior of nuclear symmetry energy using the ratio and/or neutron-proton differential transverse flow as probes.§ ACKNOWLEDGEMENTSG.F. Wei would like to thank Profs. Zhao-Qing Feng and Yuan Gao for helpful discussions and Prof. Shan-Gui Zhou for providing us the computing resources at the HPC Cluster of SKLTP/ITP-CAS where some of the calculations for this work were done. This work is supported in part by the National Natural Science Foundation of China under grant Nos.11405128, 11375239, 11365004, and the Natural Science Foundation of Guangxi province under grant No.2016GXNSFFA380001. B.A. Li acknowledges the U.S. Department of Energy, Office of Science, under Award Number DE-SC0013702, the CUSTIPEN (China-U.S. Theory Institute for Physics with Exotic Nuclei) under the US Department of Energy Grant No. DE-SC0009971, the National Natural Science Foundation of China under Grant No. 11320101004 and the Texas Advanced Computing Center.99Steiner05 A. W. Steiner, M. Prakash, J.M. Lattimer and P.J. Ellis, Phys. Rep. 411, 325 (2005).ditoro V. Baran, M. Colonna, V. Greco and M. Di Toro, Phys. Rep. 
410, 335 (2005).LCK08 B.A. Li, L.W. Chen and C.M. Ko, Phys. Rep. 464, 113 (2008).lynch09W. G. Lynch, M. B. Tsang, Y. Zhang, P. Danielewicz, M. Famiano, Z. Li, A. W. Steiner, Prog. Part. Nucl. Phys. 62, 427 (2009).DiToro10 M. Di Toro, V. Baran, M. Colonna and V. Greco, J. Phys. G: Nucl. Part. Phys. 37, 083101 (2010).Lat13J.M. Lattimer, Annu. Rev. Nucl. Part. Sci. 62, 485 (2012).Trau12 W. Trautmann and H. H. Wolter, Int. J. Mod. Phys. E 21, 1230003 (2012).Tsang12 M. B. Tsang, J. R. Stone, F. Camera, P. Danielewicz, S. Gandolfi, K. Hebeler, C. J. Horowitz, Jenny Lee, W. G. Lynch, Z. Kohley, R. Lemmon, P. Möller, T. Murakami, S. Riordan, X. Roca-Maza, F. Sammarruca, A. W. Steiner, I. Vidaña, and S. J. Yennello, Phys. Rev. C 86, 015803 (2012).Hor14 C.J. Horowitz, E.F. Brown, Y. Kim, W.G. Lynch, R. Michaels, A. Ono, J. Piekarewicz, M.B. Tsang, and H.H. Wolter, J. Phys. G:Nucl. Part. Phys. 41, 093001 (2014).LiBA14 B.A. Li, A. Ramos, G. Verde, and I. Vidana (Eds.), Topical Issue on Nuclear Symmetry Energy, Eur. Phys. J. A 50, 9 (2014).Heb15 K. Hebeler, J.D. Holt, J. Menéndez, and A. Schwenk, Ann. Rev. Nucl. Part. Sci. 65, 457 (2015).Bal16 M. Baldo and G.F. Burgio, Prog. Part. Nucl. Phys. 91, 203 (2016).Wolfgang16 Wolfgang Trautmann, Mircea Dan Cozma and Paolo Russotto, PoS (Bormio2016), 036 (2016).Ran16 P.-G. Reinhard, A.S. Umar, P.D. Stevenson, J. Piekarewicz, V.E. Oberacker and J.A. Maruhn, Phys. Rev. C 93, 044618 (2016).Oer17 M. Oertel, M. Hempel, T. Klähn, and S. Typel, Rev. Mod. Phys. 89, 015007 (2017).LiNews B.A. Li, Nuclear Physics News, Vol. 27, No. 4, 7-11 (2017).Bao00B. A. Li, Phys. Rev. Lett. 85, 4221 (2000); ibid, 88, 192701 (2002).Yong06B.A. Li, G.C. Yong and W. Zuo, Phys. Rev. C 71, 014608 (2005); G. C. Yong, B. A. Li, L. W. Chen, W. Zuo, Phys. Rev. C 73, 034603 (2006).Xie13W. J. Xie, J. Su, L. Zhu, and F. S. Zhang, Phys. Lett. B 718, 1510 (2013).AMD N. Ikeno, A. Ono, Y. Nara, and A. Ohnishi, Phys. Rev. C 93, 044612 (2016).Cozma16 M.D. Cozma, Phys. Lett. B 753, 166 (2016);Phys.Rev. C 95, 014601 (2017).Tsang17 M.B. Tsang, J. Estee, H. Setiawan, W. G. Lynch, J. Barney, M. B. Chen, G. Cerizza, P. Danielewicz, J. Hong, P. Morfouace, R. Shane, S. Tangwancharoen, K. Zhu, T. Isobe, M. Kurata-Nishimura, J. Lukasik, T. Murakami, Z. Chajecki, and SπRIT Collaboration, Phys.Rev. C 95, 044614 (2017).Li02NPAB.A. Li, Nucl. Phys. A 708, 365 (2002).Ditoro05 G. Ferini, M. Colonna, T. Gaitanos, and M. Di Toro, Nucl. Phys. A 762, 147 (2005).Xiao09 Z. G. Xiao, B. A. Li, L. W. Chen, G. C. Yong, and M. Zhang, Phys. Rev. Lett. 102, 062502 (2009).Feng10 Z. Q. Feng, G. M. Jin, Phys. Lett. B 683, 140 (2010).XuKo10 J. Xu, Che Ming Ko and Yongseok Oh, Phys. Rev. C 81, 024910 (2010).guo13W. M. Guo, G. C. Yong, Y. Wang, Q. Li, H. Zhang, W. Zuo, Phys. Lett. B 726, 211 (2013).Hong14J. Hong, P. Danielewicz, Phys. Rev. C 90, 024605 (2014).Song15T. Song, C. M. Ko, Phys. Rev. C 91, 014901 (2015).Wei16 G. F. Wei, S. H. Dong, X. W. Cao, and Y. L. Zhang, Phys. Rev. C 94, 014605 (2016).FOPI W. Reisdorf et al., Nucl. Phys. A 781, 459 2007; Nucl. Phys. A 848, 366 (2010).ASY-EOS P. Russotto et al. (ASY-EOS Collaboration), Phys. Rev. C 94, 034608 (2016).Shane R. Shane et al., Nucl. Instr. Methods A 784, 513 (2015).Guo15aW. M. Guo, G. C. Yong, H. Liu, and W. Zuo, Phys. Rev. C 91, 054616 (2015).Zhang17 Z. Zhang and C.M. Ko, Phys. Rev. C 95, 064604 (2017).Feng17 Z.Q. Feng, Eur. Phys. J. A 53, 30 (2017).Uma98 V.S. Uma Maheswari, C. Fuchs, Amand Faessler, Z.S. Wang, D.S. Kosov, Phys. Rev. C 57, 922 (1998).Bao15aB. A. Li, Phys. Rev. 
C 92, 034603 (2015).Guo15bW. M. Guo, G. C. Yong, W. Zuo, Phys. Rev. C 92, 054619 (2015).Wei14G. F. Wei, B. A. Li, J. Xu, and L. W. Chen, Phys. Rev. C 90, 014610 (2014); G.F. Wei, Phys. Rev. C 92, 014614 (2015).Bao15bB. A. Li, W. J. Guo, Z. Z. Shi, Phys. Rev. C 91, 044601 (2015).Yong16G. C. Yong, Phys. Rev. C 93, 044610 (2016).kolo05E. E. Kolomeitsev, C. Hartnack, H. W. Barz, M. Bleicher, E. Bratkovskaya, W. Cassing, L. W. Chen, P. Danielewicz, C. Fuchs, T. Gaitanos, C. M. Ko, A. Larionov, M. Reiter, Gy. Wolf and J. Aichelin, J. Phys. G:Nucl. Part. Phys. 31, S741 (2005).trans1 J. Xu, L. W. Chen, M. B. Tsang, H. Wolter, Y. X. Zhang, J. Aichelin, M. Colonna, D. Cozma, P. Danielewicz, Z. Q. Feng, A. L. Fevre, T. Gaitanos, C. Hartnack, K. Kim, Y. Kim, C. M. Ko, B. A. Li, Q. F. Li, Z. X. Li, P. Napolitani, A. Ono, M. Papa, T. Song, J. Su, J. L. Tian, N. Wang, Y. J. Wang, J. Weil, W. J. Xie, F. S. Zhang, G. Q. Zhang, Phys. Rev. C 93, 044609 (2016).trans2 Ying-Xun Zhang, Yong-Jia Wang, Maria Colonna, Pawel Danielewicz, Akira Ono, Betty Tsang, Hermann Wolter, Jun Xu, Lie-Wen Chen, Dan Cozma, Zhao-Qing Feng, Subal Das Gupta, Natsumi Ikeno, Che-Ming Ko, Bao-An Li, Qing-Feng Li, Zhu-Xia Li, Swagata Mallik, Yasushi Nara, Tatsuhiko Ogawa, Akira Ohnishi, Dmytro Oliinychenko, Massimo Papa, Hannah Petersen, Jun Su, Taesoo Song, Janus Weil, Ning Wang, Feng-Shou Zhang, Zhen Zhang, arXiv:1711.05950Bertsch80G. F. Bertsch, Nature 283, 280 (1980).Li95 B. A. Li, Phys. Lett. B 346, 5 (1995).OSA96 T. Osada, S. Sano, M. Biyajima, and G. Wilk, Phys. Rev. C 54, R2167(R) (1996); Takeshi Osada, Minoru Biyajima, and Grzegorz Wilk Phys. Rev. C 55, 2615 (1997).NA44 H. Boggilda et al. (NA44 Collaboration), Phys. Lett. B 372, 339 (1996).Teis S. Teis, W. Cassing, M. Effenberger, A. Hombach, U. Mosel, Gy. Wolf, Z. Phys. A 359, 297 (1997); ibid, Z. Phys. A 356, 421 (1997).Gor97 M. I. Gorenstein and H. G. Miller, Phys. Rev. C 55, 2002 (1997).Fuchs98 V.S. Uma Maheswari, C. Fuchs, Amand Faessler, L. Sehn, D.S. Kosov and Z. Wang, Nucl. Phys. A 628, 669 (1998).Wagner98 A. Wagner et al., Phys. Lett. B 420, 20 (1998).Barz98 H.W. Barz, J.P. Bondorf, J.J. Gaardhoje, and H. Heiselberg, Phys. Rev. C 57, 2536 (1998).Ryb07 A. Rybicki and A. Szczurek, Phys. Rev. C 75, 054903 (2007); ibid, Phys. Rev. C 87, 054909 (2013).IBUUB. A. Li, C. B. Das, S. Das Gupta, and C. Gale, Phys. Rev. C 69, 011603(R) (2004); Nucl. Phys. A 735, 563 (2004).Das03 C. B. Das, S. Das Gupta, C. Gale, and B. A. Li, Phys. Rev. C 67, 034611 (2003).Chen14bL. W. Chen, C. M. Ko, B. A. Li, C. Xu, and J. Xu, Eur. Phys. J. A 50, (2014) 29.CXu10 C. Xu and B.A. Li, Phys. Rev. C 81, 044603 (2010).LXH13X. H. Li et al., Phys. Lett. B 721, (2013) 101; ibid 743, 408 (2015).Ou11L. Ou, B. A. Li, Phys. Rev. C 84, 064605 (2011).
http://arxiv.org/abs/1709.09127v2
{ "authors": [ "Gao-Feng Wei", "Bao-An Li", "Gao-Chan Yong", "Li Ou", "Xin-Wei Cao", "Xu-Yang Liu" ], "categories": [ "nucl-th", "hep-ph" ], "primary_category": "nucl-th", "published": "20170926165251", "title": "Effects of retarded electrical fields on observables sensitive to the high-density behavior of nuclear symmetry energy in heavy-ion collisions at intermediate energies" }
It is experimentally known that achiral hyperbolic 3-manifolds are quite sporadic at least among those with small volume,while we can find plenty of them as amphicheiral knot complements in the 3-sphere.In this paper, we show that there exist infinitely many achiral 1-cusped hyperbolic 3-manifoldsnot homeomorphic to any amphicheiral null-homologous knot complement in any closed achiral 3-manifold. Leveraging Weakly Annotated Data for Fashion Image Retrieval and Label Prediction [===================================================================================§ INTRODUCTION An oriented 3-manifold is said to be achiral[The term “amphicheiral”is also used for the same notion on a 3-manifold.In this paper, we use the term “achiral” for a 3-manifoldto clearly distinguish from an “amphicheiral knot”.]if it admits an orientation-reversing self-homeomorphism.As pointed out in <cit.>, it is experimentally known that achiral hyperbolic 3-manifolds are quite sporadic at least among those with small volume.Similarly, the achiral cusped hyperbolic 3-manifolds obtained by Dehn fillings on the “magic manifold” are quite rare;just the figure-eight knot complement, its sibling, and the complement of the two-bridge link S(10,3) <cit.>.For the definition of a Dehn filling, see Section <ref>. Among them, the figure-eight sibling[It is a unique complete orientable hyperbolic 3-manifold constructed by gluing together two ideal regular tetrahedra such that it is not homeomorphic to the figure-eight knot complement <cit.>.]is much more special. Precisely, the figure-eight sibling admits an orientation-reversing self-homeomorphism hfor which there exists a slope γ on a horotorus such thatthe distance between γ and its image under h is just one.Here a slope is an isotopy class of a non-trivial simple closed curve on a torus,and the distance of two slopes is defined as the minimal intersection numberbetween their representatives. We note that such a phenomenon does not occur for any amphicheiral null-homologous knot complement.Here a knot K in an achiral 3-manifold Mwith an orientation-reversing self-homeomorphism φis said to be amphicheiral (with respect to φ)if K is isotopic to φ(K) in M.For an amphicheiral null-homologous knot K,with respect to the meridian-preferred longitude system for K,the slope p/q changed to -p/q by the restriction of φto the complement of Ksince φ preserves the meridian-preferred longitude system for K.Note that the distance between the slopes p/q and -p/q is 2|pq|,in particular, it is even.In addition, if |p|, |q|0, then the distance is greater than one. In view of this, it is natural to ask if there exist achiral 3-manifolds not coming from any amphicheiral null-homologous knot complements (other than the figure-eight sibling).As the main result in this paper, the following ensures that such 3-manifolds surely exist.There exist infinitely many achiral 1-cusped hyperbolic 3-manifoldseach of which is not homeomorphic to any amphicheiral null-homologous knot complement in any closed achiral 3-manifold. 
One of our motivations to study such 3-manifolds comes from the researchabout cosmetic surgeries on knots in <cit.>.There the achirality of the figure-eight sibling can be observed,and the 3-manifolds in Theorem <ref> are considered as extensions to it.In fact, the first two authors asked the following in <cit.>.Can we find chirally cosmetic Dehn fillings on an achiral cusped hyperbolic 3-manifoldalong distance one slopes (other than the figure-eight sibling)?The 3-manifolds in Theorem <ref> givean affirmative answer to the above question as follows.There exist infinitely many achiral 1-cusped hyperbolic 3-manifoldseach of which admits chirally cosmetic Dehn fillings along distance one slopes. We construct our 3-manifolds in Section <ref>,which are given as the double branched cover of certain 2-string tangles.Then, after recalling the definitions of a Dehn surgery, a Dehn filling, a banding, and the Montesinos trick in Section <ref>,we show in Section <ref> that the interior of the obtained 3-manifolds are hyperbolic by using the Montesinos trick. In the last section, we show that each of our 3-manifolds is realized asthe exterior of an amphicheiral knot in some achiral 3-manifold. § EXAMPLES In this section, we construct achiral 3-manifoldsby taking the double branched covers over certain “amphicheiral” 2-string tangles.Here, by a 2-string tangle or simply a tangle,we mean a pair consisting of a 3-ball and two properly embedded arcs in the 3-ball. Let n be an integer with n0,1.Unless otherwise noted, we assume that n0, 1 throughout this paper.Let us consider the tangle T_n depicted in Figure <ref>.Here the box labelled n (resp. -n) indicates n times right-handed (resp. left-handed) half twists (see Figure <ref>).The construction of T_n is based onthe two-component linkproposed by Nikkuni and the third author in <cit.>. Let M_n be the compact oriented 3-manifold with a torus boundaryobtained asthe double branched cover over T_n.Then we have the following. The 3-manifold M_n is achiral,and is not homeomorphic to any amphicheiral null-homologous knot exterior in any closed achiral 3-manifold. Here for a knot K in a 3-manifold M,we denote by N(K) an open tubular neighborhood of K,and the exterior, denoted by E(K), of K is defined as M ∖ N(K). Note that the interior of E(K) is homeomorphic to the complement, M ∖ K, of K.Let m be the map mirroring T_n along the plane of the paper, and let T_n! = m(T_n).Notice that T_n! is the tangle obtained from T_n by performing crossingchanges at all the crossings.Let M_n! be the oriented 3-manifold obtained asthe double branched cover over T_n!.Then the lift m M_n → M_n!is an orientation-reversing homeomorphism.On the other hand, since T_n is obtained by a π/2-rotation from T_n!, M_n! is orientation-preservingly homeomorphic to M_n.ThereforeM_n admits an orientation-reversing self-homeomorphism h = r∘m,where r M_n! → M_n is the lift of the π/2-rotationrT_n! → T_n.This implies that M_n is achiral. On the torus boundary ∂ M_n of M_n,we consider the two slopes μ and λappearing as the preimage of the two loops μ and λdepicted in Figure <ref> respectively.Then the distance between μ and λ is just one.Since r ∘ m (μ) = λ,we have h(μ) = λ.Then, as explained in Section <ref>, M_n is not homeomorphic to any amphicheiral null-homologous knot exteriorin any closed achiral 3-manifold. 
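The distance-one property of the pair (μ, λ), and the contrast with the mirrored slope pairs p/q and -p/q on an amphicheiral null-homologous knot exterior, can be checked with the standard intersection-number formula for slopes on a torus, Δ(p/q, r/s) = |ps - qr|. This formula is a well-known fact rather than something proved in the text; the sketch below only illustrates the arithmetic.

```python
from math import gcd

def slope_distance(p, q, r, s):
    """Minimal geometric intersection number of the slopes p/q and r/s on a torus,
    computed by the standard formula |ps - qr| (slopes given in lowest terms)."""
    assert gcd(p, q) == 1 and gcd(r, s) == 1
    return abs(p * s - q * r)

print(slope_distance(1, 0, 0, 1))      # meridian 1/0 vs longitude 0/1: distance 1
print(slope_distance(3, 5, -3, 5))     # p/q vs -p/q: 2|pq| = 30, in particular even
for p, q in [(1, 2), (2, 3), (5, 7)]:
    assert slope_distance(p, q, -p, q) == 2 * p * q
```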
§ DEHN SURGERY, BANDING, AND MONTESINOS TRICK In this section, we give a review on a Dehn surgery, a Dehn filling, a banding, and the Montesinos trick, to show that the interior of M_n is hyperbolic for n0,1.§.§ Dehn surgery Let K be a knot in a closed oriented 3-manifold M with the exterior E(K).Let γ be a slope on the boundary torus ∂ E(K).Then, the Dehn surgery on K along γis defined as the following operation:Glue a solid torus V to E(K) such that a simple closed curve representing γ bounds a meridian disk in V.We denote by K(γ) the obtained 3-manifold.It is said that the Dehn surgery along the meridional slopeis the trivial Dehn surgery.Also it is said that a Dehn surgery alonga slope which is represented by a simple closed curveintersecting the meridian at a single pointis an integral Dehn surgery. In the case where K is null-homologous in M,we have the well-known bijective correspondencebetween ℚ∪{ 1/0 } and the set of slopes on ∂ E(K),which is given by using the meridian-preferred longitude system for K.When the slope γ corresponds to r ∈ℚ∪{1/0}, then the Dehn surgery on K along γ is said to be the r-surgery on K,and the obtained 3-manifold is denoted by K(r). In this case, an integral Dehn surgery corresponds to an n-surgery with an integer n. Two Dehn surgeries on a knot K alongtwo slopes are said to be chirally cosmetic[This terminology was used in Kirby's problem list <cit.>.In <cit.>, it is called reflectively cosmetic.]if two obtained 3-manifolds are orientation-reversingly homeomorphic.§.§ Dehn filling While there are some overlaps with notions on Dehn surgery,here we briefly review a Dehn filling.Let M be a compact connected oriented 3-manifold with a torus boundary ∂ M,and γ a slope on ∂ M.The Dehn filling on M along γis the operation gluing a solid torus V to M so thata simple closed curve representing γ bounds a meridian disk in V.As in the case for a Dehn surgery,if we have the correspondencebetween the slope γ and r ∈ℚ∪{1/0}using the meridian-preferred longitude system,then the Dehn filling on M along γ is said to be the r-Dehn filling on M. Two Dehn fillings on M alongtwo slopes are said to be chirally cosmetic ifthe resultants are orientation-reversingly homeomorphic.For recent studies on chirally cosmetic Dehn fillings,we refer the reader to <cit.>.§.§ BandingWe call the following operation on a link a banding[The operation is sometimes called a band surgery, a bund sum (operation), or a hyperbolic transformation in a variety of contexts.In this paper, referring to <cit.>, we use the term banding to clearly distinguish it from a Dehn surgery on a knot.] on the link.For a given link L in S^3 and an embedding bI × I → S^3 such that b ( I × I ) ∩ L = b ( I ×∂ I ), where I denotes a closed interval, we obtain a (new) link as ( L - b ( I ×∂ I ) ) ∪ b ( ∂ I × I ). On performing a banding, it is often assumed the compatibility of orientations of the original link and the obtained link, but in this paper, we do not assume that.Also note that this operation for a knot yielding a knot appears as the n=2 case of the H(n)-move on a knot, which was introduced in <cit.>. It is well-known that a rational tangle is determined by the meridional disk in the tangle.The boundary curve of the meridional disk isparameterized by an element of ℚ∪{1/0},called a slope of the rational tangle.A rational tangle is said to be integral if the slope is an integer or 1/0.For brevity, we call an integral tangle with a slope n an n-tangle. 
A banding can be regarded as an operation replacing a 1/0-tangle into an n-tangle.Then we call this banding an n-banding. A banding on a link L is said to be chirally cosmeticif the link obtained from L by the banding is ambient isotopic tothe mirror image L! of L in S^3.§.§ Montesinos trick We here recall the Montesinos trick originally introduced in <cit.>.Let Σ be the double branched cover of S^3 branched along a link L ⊂ S^3.Let K be a knot in Σ, which is strongly invertible with respect to the preimage L̅ of L, that is, there is an orientation preserving involution of Σ with the quotient S^3 and the fixed point set L̅ which induces an involution of K with two fixed points.Then the 3-manifold K(γ) obtained by an integral Dehn surgery on K is homeomorphic to the double branched cover Σ' along the link L' obtained from L by a banding along the band appearing as the quotient of K.That is, we have the following commutative diagram: @!CL̅⊂Σ[d]_double branched covering[r]^Dehn surgery on K Σ' = K(γ) [d]^double branched coveringL ⊂ S^3 [r]^banding on LL' ⊂ S^3 § HYPERBOLICITY In this section,we show that the interior of our 3-manifolds M_n given in Section <ref> are hyperbolic,and prove our main theorem.Let K_n be the link in S^3 obtained by closing the tangle T_nas shown in Figure <ref>. The knot K_n is the two-bridge knot (or link) with Schubert's normal form S(n^4 - 2n^3 + 2n^2 - 2n +1, n^3 - 2n^2 + n - 1) .For the definitions of Schubert's normal form and Conway's normal formfor a two-bridge link, see for example <cit.>. One can diagrammatically check that K_n is the two-bridge knot (or link)with Conway's normal form C(n, n, -1, n, n).Calculating the continued fraction, we have1n + 1n + 1 -1 +1 n + 1n= n^3 - 2n^2 + n - 1n^4 - 2n^3 + 2n^2 - 2n +1.We note that K_n is a two-bridge knot when n is even,and a two-bridge link when n is odd.Also note that K_2 = S(5,1) is the (2,5)-torus knot. As shown in Figure <ref>,each of K_n admits a chirally cosmetic banding.In particular, for K_2, this chirally cosmetic bandingwas essentially discovered by Zeković <cit.>,and pointed out by the first two authors that the upstairs of this banding corresponds tothe chirally cosmetic filling on the figure-eight sibling, see <cit.>. Let Σ_n be the lens space of type(n^4 - 2n^3 + 2n^2 - 2n +1, n^3 - 2n^2 + n - 1)which is obtained as the double branched cover of S^3 branched along K_n.Since n0,1, Σ_n is not homeomorphic to S^3 or S^2× S^1. Then we obtain the following. There exist infinitely many achiral 1-cusped hyperbolic 3-manifoldseach of which is not homeomorphic to any amphicheiral null-homologous knot complement in any closed achiral 3-manifold. Applying the isotopic deformation and the Montesinos trick converselyto the chirally cosmetic banding on K_n shown in Figure <ref>,by Proposition <ref>,we obtain a surgery descriptions of Σ_n and -Σ_nas in Figure <ref>.That is, the trivial Dehn surgery and the 0-surgery on the red component in Figure <ref>yield Σ_n and -Σ_n respectively. 
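The continued-fraction computation used above for Conway's normal form C(n, n, -1, n, n) is easy to confirm with exact rational arithmetic. The following sketch folds the continued fraction from the innermost term outward and checks the stated closed form for several admissible values of n; the sampled values are an arbitrary illustrative choice.

```python
from fractions import Fraction

def conway_fraction(word):
    """Fold the continued fraction 1/(a1 + 1/(a2 + ... + 1/ak)) from the inside out."""
    x = Fraction(0)
    for a in reversed(word):
        x = 1 / (a + x)
    return x

for n in list(range(2, 9)) + [-1, -2, -5]:
    lhs = conway_fraction((n, n, -1, n, n))
    rhs = Fraction(n**3 - 2 * n**2 + n - 1, n**4 - 2 * n**3 + 2 * n**2 - 2 * n + 1)
    assert lhs == rhs
print("C(n, n, -1, n, n) agrees with the closed form for all sampled n")
```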
Drilling along the red component,we obtain the surgery description of the 3-manifold with a torus boundary M_nas in Figure <ref>,which is already constructed in Section <ref>.Then M_n is the exterior of a knot in Σ_n,which admits chirally cosmetic Dehn fillings along distance one slopes.By the classification of cosmetic surgeries on a non-hyperbolic knot in a lens spacedue to Matignon <cit.>, we see that the interior of M_n is hyperbolic.In fact, for a non-hyperbolic knot J in a lens space other than S^3 or S^2× S^1,if the trivial Dehn surgery and the r-surgery on J are chirally cosmetic,then r0, see <cit.>. It is enough to show that if nn', then M_n is not homeomorphic to M_n'.Then the following lemma completes the proof. We have H_1(M_n; ) ≅⊕_n^3 - n^2 + n -1.Let X be the exterior of the 6-components link in S^3 as in Figure <ref>.Taking representatives of meridians x, a, b, c, d, e as in Figure <ref>,we have H_1(X; ) ≅^6 = ⟨ a ⟩⊕⟨ b ⟩⊕⟨ c ⟩⊕⟨ d ⟩⊕⟨ e ⟩⊕⟨ x ⟩.Reading off the slopes from Figure <ref>,we see that suitable Dehn fillings on the five boundaries supply the following relations.n a + x - b = 0 , -na + c - a = 0 , -c + b - x - d = 0 , -nd -c + e = 0 , ne + d + x = 0 .Then we have [n -10001; -1n1000;01 -1 -10 -1;00 -1 -n10;0001n1;] as the presentation matrix of H_1(M_n ; ).Reducing the matrix by the elementary operations on presentation matrices of modules(for example see <cit.>),we see that the previous matrix is equivalent to [ n^3 - n^2 + n -10 ]. Thus we have H_1(M_n ; ) ≅⊕_n^3 - n^2 + n -1.By the classification of lens spaces, for example see <cit.>,Σ_n is chiral (i.e. not achiral) since we have (n^3 - 2n^2 + n -1)^2 ≡ 1 ≢-1 (n^4 - 2n^3 + 2n^2 - 2n +1). For any null-homologous knot J in an oriented closed 3-manifold Y,we haveH_1(Y ∖ J ; ) ≅⊕ H_1(Y ; ).Each of our 3-manifolds M_n is the exterior of a knot,say J_n, in the lens space Σ_n.By Lemma <ref>,one can also see that J_n is not null-homologous in Σ_nsince H_1(Σ_n ; ) = _n^4 - 2n^3 + 2n^2 - 2n +1.In the case where n = 2,we have n^4 - 2n^3 + 2n^2 - 2n +1 = n^3 - n^2 + n -1 = 5.This case corresponds to the chirally cosmetic fillings on the figure-eight siblingintroduced in Section <ref>.Thus, the knot J_2 is also not null-homologous in Σ_2. As shown in Figure <ref>,we can see directly that the 0-surgery on the red component in Figure <ref>changes Σ_n to -Σ_n.As in the proof of Theorem <ref>,M_n admits chirally cosmetic Dehn fillings along distance one slopes.Thus, we have the following.There exist infinitely many achiral 1-cusped hyperbolic 3-manifolds each of which admits chirally cosmetic Dehn fillings along distance one slopes.§ REALIZING M_N AS AN AMPHICHEIRAL KNOT COMPLEMENT In this section, we show the following.Each of our 3-manifolds M_n can be realized asthe exterior of an amphicheiral knot in some achiral 3-manifold. Note that such an amphicheiral knot is not null-homologous. As in Sections <ref> and <ref>,the meridian on ∂ M_n is μ,and we can choose the longitude on ∂ M_n as λcorresponding to 0/1.With respect to this meridian-longitude system,we see that the two slopes corresponding to 1/1 and -1/1 are invariantvia the orientation-reversing self-homeomorphism h = r∘m on M_n,see Figure <ref>. 
Let N_n^± be the closed oriented 3-manifoldobtained from M_n by ± 1/1-Dehn filling respectively.Since the core curve c of the attached solid torus Vintersects a meridian disk of V once,h extends to V, and thus, h extends to N^±_n.This implies that N^±_n is achiral.Further, for the knot k in N^±_n, which corresponds to the core c,k is an amphicheiral knot in N^±_n andthe exterior of k is homeomorphic to M_n. §.§ AcknowledgementThe authors would like to thank Professor Kimihiko Motegi and Professor Kai Ishiharafor useful comments on Section <ref>.99 Bleiler S. A. Bleiler,Banding, twisted ribbon knots, and producing reducible manifolds via Dehn surgery,Math. Ann. 286 (1990), no. 4, 679–696. BleilerHodgsonWeeks S. A. Bleiler, C. D. Hodgson and J. R. Weeks,Cosmetic surgery on knots,in Proceedings of the Kirbyfest (Berkeley, CA, 1998), 23–34, Geom. Topol. Monogr., 2, Geom. Topol. Publ., Coventry. HosteNakanishiTaniyama J. Hoste, Y. Nakanishi and K. Taniyama,Unknotting operations involving trivial tangles,Osaka J. Math. 27 (1990), no. 3, 555–566. IJM K. Ichihara, I. D. Jong, and H. Masai,Cosmetic banding on knots and links,to appear in Osaka J. Math.,arXiv:1602.01542.IchiharaSaito K. Ichihara and T. Saito,Cosmetic surgery and the SL(2,) Casson invariant for two-bridge knots,to appear in Hiroshima Math. J.,arXiv:1602.02371.IchiharaWu K. Ichihara and Z. Wu,A note on Jones polynomial and cosmetic surgery,to appear in Comm. Anal. Geom.,arXiv:1606.03372. KawauchiSurvey A. Kawauchi,A survey of knot theory,translated and revised from the 1990 Japanese original by the author, Birkhäuser Verlag, Basel, 1996. KirbyProblems in low-dimensional topology,in Geometric topology (Athens, GA, 1993), 35–473,AMS/IP Stud. Adv. Math., 2.2, Amer. Math. Soc., Providence, RI. MartelliPetronio B. Martelli and C. Petronio,Dehn filling of the “magic” 3-manifold,Comm. Anal. Geom. 14 (2006), no. 5, 969–1026. Matignon D. Matignon,On the knot complement problem for non-hyperbolic knots,Topology Appl. 157 (2010), no. 12, 1900–1925. Montesinos J. M. Montesinos,Surgery on links and double branched covers of S3,in Knots, groups, and 3-manifolds (Papers dedicated to the memory of R. H. Fox), 227–259.Ann. of Math. Studies, 84, Princeton Univ. Press, Princeton, NJ. NiWu Y. Ni and Z. Wu,Cosmetic surgeries on knots in S^3,J. Reine Angew. Math. 706 (2015), 1–17. NikkuniTaniyama R. Nikkuni and K. Taniyama,Symmetries of spatial graphs and Simon invariants,Fund. Math., 205 (2009), no. 3, 219–236.Weeks J. Weeks,Hyperbolic structures on three-manifolds,PhD thesis, Princeton University, 1985.Zekovic A. Zeković,Computation of Gordian distances and H_2-Gordian distances of knots,Yugosl. J. Oper. Res. 25 (2015), no. 1, 133–152.
http://arxiv.org/abs/1709.09418v1
{ "authors": [ "Kazuhiro Ichihara", "In Dae Jong", "Kouki Taniyama" ], "categories": [ "math.GT", "57M25 (Primary), 57M50, 57N10 (Secondary)" ], "primary_category": "math.GT", "published": "20170927094608", "title": "Achiral 1-cusped hyperbolic 3-manifolds not coming from amphicheiral null-homologous knot complements" }
M. F. Barnsley]M. F. Barnsley Australian National UniversityCanberra, ACT, Australia University of FloridaGainesville, FL, USANew tilings of certain subsets of ℝ^M are studied, tilings associated with fractal blow-ups of certain similitude iterated function systems (IFS). For each such IFS with attractor satisfying the open set condition, our construction produces a usually infinite family of tilings that satisfy the following properties: (1) the prototile set is finite; (2) the tilings are repetitive (quasiperiodic); (3) each family contains self-similar tilings, usually infinitely many; and (4) when the IFS is rigid in an appropriate sense, the tiling has no non-trivial symmetry; in particular the tiling is non-periodic. Self-Similar Tilings of Fractal Blow-Ups A. Vince 2017-06-14 ======================================== § INTRODUCTION The subject of this paper is a new type of tiling of certain subsets D of ℝ^M. Such a domain D is a fractal blow-up (as defined in Section <ref>) of certain similitude iterated function systems (IFSs); see also <cit.>. For an important class of such tilings it is the case that D=ℝ^M, as exemplified by the tiling of Figure <ref> (on the right ) that is based on the “golden b" tile (on the left). We are also interested, however, in situations where D has non-integer Hausdorff dimension. The left panel in Figure <ref> shows the domain D, the right panel a tiling of D. These examples are explored in Section <ref>. In this work, tiles may be fractals; pairs of distinct tiles in a tiling are required to be non-overlapping, i.e., they intersect on a set whose Hausdorff dimension is lower than that of the individual tiles. These tilings come in families, one family for each similitude IFS whose functions f_1,f_2…,f_N have scaling ratios that are integer powers s^a_1,s^a_2,…,s^a_N of a single real number s and whose attractor is non-overlapping. Each such family contains, in general, an uncountable number of tilings. Each family has a finite set of prototiles.The paper is organized as follows. Sections <ref> and <ref> provide background and definitions relevant to tilings and to iterated function systems. The construction of our tilings is given in Section <ref>. The main theorems are stated precisely in Section <ref> and proved in subsequent sections. Results appear in Section <ref> that define and discuss the relative and absolute addresses of tiles. These concepts, useful towards understanding the relationships between different tilings, are illustrated in Section <ref>. Also in Section <ref> are examples of tilings of ℝ^2 and of a quadrant of ℝ^2. The Ammann (the golden b) tilings and related fractal tilings are also discussed in that section, as is a blow-up of a Cantor set.A subset P of a tiling T is called a patch of T if it is contained in a ball of finite radius. A tiling T is quasiperiodic (also called repetitive) if, for any patch P, there is a number R>0 such that any disk of radius R centered at a point contained in a tile of T contains an isometric copy of P. Two tilings are locally isomorphic if any patch in either tiling also appears in the other tiling. A tiling T is self-similar if there is a similitude ψ such that ψ(t) is a union of tiles in T for all t∈ T. Such a map ψ is called a self-similarity.Let ℱ be a similitude IFS whose functions have scaling ratios s^a_1,s^a_2,…,s^a_N as defined above. Let [N]^∗ be the set of finite words over the alphabet [N]:={1,2,…,N} and [N]^∞ be the set of infinite words over the alphabet [N]. 
For a fixed IFS ℱ, our results show that:* For each θ∈[N]^*, our construction yields a bounded tiling, and for each θ∈[N]^∞, our construction yields an unbounded tiling. In the latter case, the tiling, denoted π(θ), almost always covers ℝ^M when the attractor of the IFS has nonempty interior. * The mapping θ↦π(θ) is continuous with respect to the standard topologies on the domain and range of π. * Under quite general conditions, the mapping θ↦π(θ) is injective. * For each such tiling, the prototile set is {sA, s^2A,…, s^a_max A}, where A is the attractor of the IFS and a_max = max{a_1, a_2, …, a_N}. * The constructed tilings, in the unbounded case, are repetitive (quasiperiodic) and any two such tilings are locally isomorphic. * For all θ∈[N]^∞, if θ is eventually periodic, then π(θ) is self-similar. * If ℱ is strongly rigid, then how isometric copies of a pair bounded tilings can overlap is extremely restricted: if the two tilings are such that their overlap is a subset of each, then one tiling must be contained in the other. * If ℱ is strongly rigid, then the constructed tilings have no non-identity symmetry. In particular, they are non-periodic. The concept of a rigid and a strongly rigid IFS is discussed in Sections <ref>.A special case of our construction (polygonal tilings, no fractals) appears in <cit.>, in which we took a more recreational approach, devoid of proofs. Other references to related material are <cit.>. This work extends, but is markedly different from <cit.>.§ TILINGS, SIMILITUDES AND TILING SPACES Given a natural number M, this paper is concerned with certain tilings of strict subsets of Euclidean space ℝ^M and of ℝ^M itself. A tile is a perfect (i.e. no isolated points) compact nonempty subset of ℝ^M. Fix a Hausdorff dimension 0<D_H≤ M. A tiling in ℝ^M is a set of tiles, each of Hausdorff dimension D_H, such that every distinct pair is non-overlapping. Two tiles are non-overlapping if their intersection is of Hausdorff dimension strictly less than D_H. The support of a tiling is the union of its tiles. We say that a tiling tiles its support. Some examples are presented in Section <ref>.A similitude is an affine transformation f:ℝ ^M→ℝ^M of the form f(x)=s O(x)+q, where O is an orthogonal transformation and q∈ℝ^M is the translational part of f(x). The real number s>0, a measure of the expansion or contraction of the similitude, is called its scaling ratio. An isometry is a similitude of unit scaling ratio and we say that two sets are isometric if they are related by an isometry. We write ℰ to denote the group of isometries on ℝ^M.The prototile set 𝒫 of a tiling T is a minimal set of tiles such that every tile in T is an isometric copy of a tile in 𝒫. The tilings constructed in this paper have a finite prototile set.Given a tiling T we define ∂ T to be the union of the set of boundaries of all of the tiles in T and we let ρ:ℝ ^M→𝕊^M be the usual M-dimensional stereographic projection to the M-sphere, obtained by positioning 𝕊^M tangent to ℝ^M at the origin. We define the distance between tilings T and T^' to bed_τ(T,T^')=h(ρ(∂ T),ρ(∂ T^'))where the bar denotes closure and h is the Hausdorff distance with respect to the round metric on 𝕊^M. Let 𝕂(ℝ^M) be the set of nonempty compact subsets of ℝ^M. It is well known that d_τ provides a metric on the space 𝕂(ℝ^M) and that (𝕂(ℝ^M),d_τ) is a compact metric space.This paper examines spaces consisting, for example, of π(θ) indexed by θ∈[N]^∗ with metric d_τ. 
Although we are aware of the large literature on tiling spaces, we do not explore the larger spaces obtained by taking the closure of orbits of our tilings under groups of isometries as in, for example, <cit.>. We focus on the relationship between the addressing structures associated with IFS theory and the particular families of tilings constructed here.§DEFINITION AND PROPERTIES OF IFS TILINGS Let ℕ={1,2,⋯} and ℕ_0={0,1,2,⋯}. For N∈ℕ, let [N]={1,2,⋯,N}. Let [N]^∗=∪ _k∈ℕ_0[N]^k, where [N]^0 is the empty string, denoted ∅.See <cit.> for formal background on iterated function systems (IFSs). Here we are concerned with IFSs of a special form: let ℱ ={ℝ^M;f_1,f_2,⋯,f_N}, with N≥2, be an IFS of contractive similitudes where the scaling factor of f_n is s^a_n with 0<s<1 where a_n∈ℕ. There is no loss of generality in assuming that the greatest common divisor is one: {a_1,a_2 ,⋯,a_N}=1. That is, for x∈ℝ^M, the function f_n:ℝ^M→ℝ^M is defined byf_n(x)=s^a_nO_n(x)+q_nwhere O_n is an orthogonal transformation and q_n∈ℝ^M. It is convenient to definea_max=max{a_i:i=1,2,…,N}.The attractor A of ℱ is the unique solution in 𝕂(ℝ^M) to the equationA=⋃_i∈ N]f_i(A).It is assumed throughout that A obeys the open set condition (OSC) with respect to ℱ. As a consequence, the intersection of each pair of distinct tiles in the tilings that we construct either have empty intersection or intersect on a relatively small set. More precisely, the OSC implies that the Hausdorff dimension of A is strictly greater than the Hausdorff dimension of the set of overlap 𝒪=∪_i≠ j f_i(A)∩ f_j(A). Similitudes applied to subsets of the set of overlap comprise the sets of points at which tiles may meet. See <cit.> for a discussion concerning measures of attractors compared to measures of the set of overlap. In what follows, the space [N]^∗∪ N]^∞ is equipped with a metric d_[N]^∗∪ N]^∞ such that it becomes compact. First, define the “length" |θ| of θ∈ N]^∗∪ N]^∞ as follows. For θ=θ_1θ_2⋯θ_k∈ N]^∗ define |θ| =k, and for θ∈ N]^∞ define |θ| =∞. Now define d_[N]^∗∪ N]^∞(θ,ω)=0 if θ=ω, andd_[N]^∗∪ N]^∞(θ,ω)=2^-𝒩( θ,ω)if θ≠ω, where 𝒩(θ,ω) is the index of the first disagreement between θ and ω (and θ and ω are understood to disagree at index k if either |θ|<k or |ω|<k ). It is routine to prove that ([N]^∗∪ N]^∞,d_[N]^∗∪ N]^∞) is a compact metric space.A point θ∈ N]^∞ is eventually periodic if there exists m∈ℕ_0 and n∈ℕ such that θ _m+i=θ_m+n+i for all i≥1. In this case we write θ =θ_1θ_2⋯θ_mθ_m+1θ_m+2 ⋯θ_m+n.For θ=θ_1θ_2⋯θ_k∈ N]^∗, the following simplifying notation will be used:f_θ = f_θ_1 f_θ_2⋯ f_θ_kf_-θ =f_θ_1^-1f_θ_2^-1⋯ f_θ_k^-1=(f_θ_kθ_k-1⋯θ_1)^-1,with the convention that f_θ and f_-θ are the identity function id if θ=∅. Likewise, for all θ∈ N]^∞ and k∈ℕ_0 define θ|k=θ_1θ _2⋯θ_k, andf_-θ|k=f_θ_1^-1f_θ_2^-1⋯ f_θ_k ^-1=(f_θ_kθ_k-1⋯θ_1)^-1,with the convention that f_-θ|0=id.For σ=σ_1σ_2⋯σ_k∈ N]^∗ and with {a_1,…,a_N} the scaling powers defined above, lete(σ)=a_σ_1+a_σ_2+⋯+a_σ_kand e^-(σ)=a_σ_1+a_σ_2+⋯ +a_σ_k-1,with the conventions e(∅)=e^-(∅)=0. LetΩ_k:={σ∈ N]^∗:e(σ)>k≥ e^-(σ)}for all k∈ℕ_0, and note that Ω_0=[N]. We also write, in some places, σ^-=σ_1σ_2⋯σ_k-1 so thate^-(σ)=e(σ^-).A mapping π from [N]^∗∪ N]^∞ to collections of subsets of ℝ^M is defined as follows. For θ∈ N]^∗π(θ):={f__-θf_σ(A):σ∈Ω _e(θ)},and for θ∈ N]^∞π(θ):=⋃_k∈ℕ_0π(θ|k). Let 𝕋 be the image of π, i.e.𝕋={π(θ):θ∈ N]^∗∪ N]^∞}. It is consequence of Theorem <ref>, stated below, that the elements of 𝕋 are tilings. 
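To make the definitions of e(σ), Ω_k and π(θ) concrete, the following self-contained Python sketch may be useful. The one-dimensional IFS it uses, f_1(x) = sx and f_2(x) = s^2 x + s with s = (√5 - 1)/2 (so s + s^2 = 1, the attractor is A = [0,1], and the two pieces meet only at the point s), is an illustrative choice of ours and does not come from the examples in this paper; the routines, however, follow the formulas above literally. The assertions check that the tiles of π(θ|k) are non-overlapping intervals, that their lengths lie in {s, s^2}, and that every tile of π(θ|k-1) reappears in π(θ|k).

```python
from math import isclose, sqrt

# Illustrative 1-D IFS (our choice, not from the paper):
#   f1(x) = s*x        with scaling s^1  (a_1 = 1)
#   f2(x) = s^2*x + s  with scaling s^2  (a_2 = 2)
# where s = (sqrt(5)-1)/2, so s + s^2 = 1, the attractor is A = [0,1],
# and f1(A) = [0,s], f2(A) = [s,1] meet only in the point {s} (OSC).
s = (sqrt(5) - 1) / 2
MAPS = {1: (s, 0.0), 2: (s * s, s)}        # affine maps stored as (scale, translation)
POWERS = {1: 1, 2: 2}                      # the exponents a_i, with gcd 1
A = (0.0, 1.0)

def compose(f, g):                         # (f o g) for affine maps of the line
    return (f[0] * g[0], f[0] * g[1] + f[1])

def inverse(f):
    return (1.0 / f[0], -f[1] / f[0])

def f_word(word):                          # f_sigma = f_{sigma_1} o ... o f_{sigma_k}
    m = (1.0, 0.0)
    for i in word:
        m = compose(m, MAPS[i])
    return m

def e(word):
    return sum(POWERS[i] for i in word)

def omega(k):                              # Omega_k = {sigma : e(sigma) > k >= e(sigma^-)}
    out, stack = [], [()]
    while stack:
        w = stack.pop()
        for i in MAPS:
            w2 = w + (i,)
            if e(w2) > k:
                out.append(w2)
            else:
                stack.append(w2)
    return out

def tiling(theta):                         # pi(theta) for a finite word theta, as intervals
    f_minus = (1.0, 0.0)
    for i in theta:
        f_minus = compose(f_minus, inverse(MAPS[i]))
    tiles = []
    for sigma in omega(e(theta)):
        g = compose(f_minus, f_word(sigma))
        ends = (g[0] * A[0] + g[1], g[0] * A[1] + g[1])
        tiles.append((min(ends), max(ends)))
    return sorted(tiles)

theta = (1, 2, 1)
T_prev, T_next = tiling(theta[:-1]), tiling(theta)
# consecutive tiles meet at most in a point, tile lengths lie in {s, s^2},
# and every tile of pi(theta|2) is again a tile of pi(theta|3) (nesting)
assert all(a[1] <= b[0] + 1e-9 for a, b in zip(T_next, T_next[1:]))
assert all(any(isclose(hi - lo, s ** i, rel_tol=1e-9) for i in (1, 2)) for lo, hi in T_next)
assert all(any(isclose(t[0], u[0], abs_tol=1e-9) and isclose(t[1], u[1], abs_tol=1e-9)
               for u in T_next) for t in T_prev)
print(len(T_prev), "tiles of pi(theta|2) refine to", len(T_next), "tiles of pi(theta|3)")
```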
We refer to π(θ) as an IFS tiling, but usually drop the term “IFS". It is a consequence of the proof of Theorem <ref>, given in Section <ref>, that the support of π(θ) is what is sometimes referred to as a fractal blow-up <cit.>. More exactly, if F_k:=f__-θ|k(A), thensupport (π(θ))=⋃_k∈ℕ_0 F_k.Thus the support of π(θ) is the limit of an increasing union of sets F_0⊆ F_1⊆ F_2⊆⋯, each similar to A.The theorems of this paper are summarized in the rest of this section. The first two theorems, as well as a proposition in Section <ref>, reveal general information about the tilings in 𝕋 without the rigidity condition that is assumed in the second two theorems. The proof of the following theorem appears in Section <ref>.Each set π(θ) in 𝕋 is a tiling of a subset of ℝ^M, the subset being bounded when θ∈[N]^* and unbounded when θ∈ N]^∞. For all θ∈ N]^∞ the sequence of tilings {π(θ|k)} _k=0^∞ is nested according to{f_i(A):i∈ N]}=π(∅)⊂π(θ|1)⊂π(θ|2)⊂π(θ|3)⊂⋯ .For all θ∈ N]^∞, the prototile set for π(θ) is {s^iA:i=1,2,⋯,a_max}. Furthermoreπ:[N]^∗∪ N]^∞→𝕋is a continuous map from the compact metric space [N]^∗∪ N]^∞ into the space (𝕂(ℝ^M),d_τ). The proof of the following theorem is given in Section <ref>. * Each tiling in 𝕋 is quasiperiodic and each pair of such tilings in 𝕋 are locally isomorphic. * If θ is eventually periodic, then π(θ) is self-similar. In fact, if θ=αβ for some α,β∈[ N]^∗ then f_-αf_-β(f_-α) ^-1 is a self-similarity of π(θ).In Section <ref> the concept of rigidity of an IFS is defined. We postpone the definition because additional notation is required. There are numerous examples of rigid ℱ, including the golden b IFS in Section <ref>. The following theorem is proved in Section <ref>.Let ℱ be strongly rigid. If θ ,θ^'∈ N]^∗and E∈ℰ are such that π(θ)∩ Eπ(θ^') is a nonempty common tiling, then either π(θ)⊂ Eπ(θ^') or Eπ(θ^')⊂π(θ). If e(θ)=e(θ^'), then Eπ (θ^')=π(θ). A symmetry of a tiling is an isometry that takes tiles to tiles. A tiling is periodic if there exists a translational symmetry; otherwise the tiling is non-periodic. For example, any tiling of a quadrant of ℝ^2 by congruent squares is periodic. The proof of the following theorem is given in Section <ref>. If ℱ is strongly rigid, then there does not exist any non-identity isometry E∈ℰ and θ∈ N]^∞ such that Eπ(θ)⊂π(θ). The following theorem is proved in Section <ref>. If π(i)∩π(j) does not tile (support π(i))∩(supportπ(j)) for all i≠ j, then π:[N]^∗∪ N]^∞→𝕋 is one-to-one. § STRUCTURE OF {Ω_K} AND SYMBOLIC IFS TILINGS The results in this section, which will be applied later, relate to a symbolic version of the theory in this paper. The next two lemmas provide recursions for the sequence Ω_k:={σ∈ N]^∗:e(σ)>k≥ e^-(σ)}. In this section the square union symboldenotes a disjoint union.For all k≥ a_maxΩ_k=_i=1^Ni Ω_k-a_i.For all k∈ℕ_0 we havei Ω_k ={iσ:σ∈ N]^∗,e(σ) > k ≥ e^-(σ)}={ω:ω∈ N]^∗,e(ω) > k+a_i≥ e^- (ω),ω_1=i}=Ω_k+a_i∩ i[N]^∗.It follows thati Ω_k-a_i=Ω_k∩ i[N]^∗for all k≥ a_i, from which it follows that Ω_k= _i=1^NiΩ_k-a_i for all k≥ a_max. With Ω_k^^' :={ω∈ N]^∗:e(ω)=k+1}, we have Ω_k^^'⊂Ω_k andΩ_k+1={Ω_k\Ω_k^'}{_i=1^NΩ_k^^'i}.(i) We first show that {Ω_k\Ω_k^'}{_i=1^NΩ_k^^'i}⊂Ω_k+1.Suppose θ∈Ω_k\Ω_k^'. Then e^- (θ)≤ k<e(θ) and e(θ)≠ k+1. Hence e^-(θ)≤ k+1<e(θ) and so θ∈Ω_k+1.Suppose θ∈Ω_k^^'i for some i∈ N]. Then θ=θ^-i where θ^-∈Ω_k^^', e^-(θ)=e(θ^-)=k+1 and e(θ)=e(θ^- i)=k+1+a_i. Hence e(θ)>k+1=e^-(θ). Hence e^-(θ)≤ k+1<e(θ). Hence θ∈Ω _k+1.(ii) We next show that Ω_k+1⊂{Ω_k\Ω _k^'}{_i=1^NΩ _k^^'i}.Let θ∈Ω_k+1. 
Then e^-(θ)=e(θ^-)≤ k+1<e(θ).If e(θ^-)=k+1, then θ∈Ω_k^^'θ _|θ|⊂{Ω_k\Ω _k^'}{_i=1^NΩ _k^^'i}.If e(θ^-)≠ k+1, then e(θ^-)<k+1. So e(θ^-)≤ k<k+1<e(θ); so θ∈Ω_k\Ω_k^' ⊂{Ω_k\Ω_k^'}{_i=1^NΩ_k^^'i}. For all θ∈ N]^∗, define c(θ)={ω∈ N]^∞:ω_1ω_2⋯ω_|θ|=θ}. (Such sets are sometimes called cylinder sets.) With the metric on [N]^∞ defined to be d_0(θ,ω)=2^-min{k:θ_k≠ω_k} for θ≠ω, the diameter of c(θ) is 2^-(|θ| +1). The following lemma tells us how {c(θ):θ∈Ω_k} may be considered as a tiling of the symbolic space [N]^∞.For each k∈ℕ_0 the collection of sets {c(θ):θ∈Ω_k} form a partition of [N]^∞, each part of which has diameter belonging to {s^k+1,s^k+2,… s^k+a_max} where s=1/2. That is,[N]^∞=_θ∈Ω_kc(θ)for all k∈ℕ_0.Assume that ω∈[N]^∞. There is a unique j such that ω|j ∈Ω_k. Letting θ= w|j we have ω∈ c(θ) ⊂[N]^∞. Therefore [N]^∞ = ⋃_θ∈Ω_k c(θ).Assume that θ,θ^'∈Ω_k. If ω∈ c(θ)∩ c(θ^'), then by the definition of cylinder set either θ=θ^' or |θ|≠|θ^'|. However, if |θ|≠|θ^'|, then ω ||θ|=θ∈Ω_k and ω ||θ^'|=θ^'∈Ω_k, which would contradict the uniqueness of j. Therefore [N]^∞=_θ∈Ω_kc(θ).§ A CANONICAL SEQUENCE OF SELF-SIMILAR TILINGS To facilitate the proofs of the theorems stated in Section <ref>, another family of tilings is introduced, tilings isometric to those that are the subject of this paper. LetA_k=s^-kAfor all k∈ℕ∪{-1,-2,…,-a_max}, and define, for all k∈ℕ, a sequence of tilings T_k of A_k byT_k={ s^-k f_σ(A):σ∈Ω_k}.The following lemma says, in particular, that T_k is a non-overlapping union of copies of T_k-a_i for i∈ N] when k≥ a_max, and T_k may be expressed as a non-overlapping union of copies of T_k-e(ω) for ω∈Ω_l when k is somewhat larger than l∈ℕ_0. In this section the square union notationdenotes a non-overlapping union.For all k∈ℕ_0 the support of T_k is A_k. For all θ∈ N]^∗,π(θ)=E_θT_e(θ)where E_θ is the isometry f_-θs^e(θ). AlsoT_k=_i=1^NE_k,iT_k-a_ifor all k≥ a_max, where each of the mappings E_k,i=s^-k∘ f_i∘ s^k-a_i is an isometry. More generally,T_k=_ω∈Ω_lE_k,ωT_k-e(ω),for all k≥ l+a_max and for all l∈ℕ_0, where each of the mappings E_k,ω =s^-k∘ f_ω∘ s^k-e(ω) is an isometry.It is well-known that if 𝒫 is a partition of [N]^∞, then A= ⋃_ω∈𝒫 ϕ(ω) where ϕ:[N]^∞→ A is the usual (continuous) coding map defined by ϕ(ω)=lim_k→∞f_ω|k(x) for any fixed x∈ A. By Lemma <ref> we can choose 𝒫= {c(θ):θ∈Ω_k}. Hence, the support of T_k iss^-k{ ⋃ {f_σ(A) :σ∈Ω_k}} =s^-k{ ⋃ {ϕ(ω):ω∈{c(θ):θ∈Ω_k}}}=s^-kA.The expression π(θ)=E_θT_e(θ) where E_θ is the isometry f_-θs^e(θ) follows from the definitions of π(θ) and T_k on taking k=e(θ).Equation (<ref>) follows from Lemma <ref> according to these steps.T_k = { s^-k f_σ(A):σ∈Ω_k} (by definition)=s^-k{f_σ(A):σ∈_i=1^NiΩ_k-a_i } (by Lemma <ref>)=s^-k_i=1^N{f_iσ(A):σ∈Ω_k-a_i } (identity)=s^-k_i=1^Nf_i({f_σ(A):σ∈Ω_k-a_i}) (identity)=_i=1^NE_k,iT_k-a_i (by definition)The function E_k,i=s^-k∘ f_i∘ s^k-a_i is an isometry because it is a composition of three similitudes, of scaling ratios s^-k, s^a_i, and s^k-a_i. The proof of the last assertion is immediate: tiles meet at images under similitudes of the set of overlap 𝒪=∪_i≠ jf_i(A)∩ f_j(A).Equation (<ref>) can be proved by induction on l, starting from Equation (<ref>) and using Lemma <ref>. The following definition, formalizing the notion of an “isometric combination of tilings", will be used later, but it is convenient to place it here. Let {U_i:i∈ℐ} be a collection of tilings. 
An isometric combination of the set of tilings {U_i:i∈ℐ} is a tiling V that can be written in the formV=_i=1^KE^(i)U^(i)for some K∈ℕ, where E^(i)∈ℰ, U^(i)∈{U_i:i∈ℐ}, for all i∈{1,2,…,K}. For example, Lemma <ref> tells us that any T_k can be written as an isometric combination of any set of tilings of the form {T_j, T_j+1,…,T_j+a_max-1} when k⩾ j.The sequence {T_k} of tilings is self-similar in the following sense. Each of the sets in the magnified tiling s^-1T_k is a union of tiles in T_k+1.This follows at once from Lemma <ref>. The tiling T_k+1 is obtained from T_k by applying the similitude s^-1 and then splitting those resulting sets that are isometric to A. By splitting we mean we replace EA by {Ef_1(A), Ef_2(A),…, Ef_N(A)}, see Section <ref>. § THEOREM <REF>: EXISTENCE AND CONTINUITY OF TILINGS LetA_-θ|k:=f_-θ|kAfor all θ∈ N]^∞. It is immediate from Definition <ref> that the support of the tiling π(θ|k) is A_-θ|k and that π(θ|k) is isometric to the tiling T_e(k) of A_e(k). We use this fact repeatedly in the rest of this paper.Theorem <ref>. Each set π(θ) in 𝕋 is a tiling of a subset of ℝ^M, the subset being bounded when θ∈[N]^* and unbounded when θ∈ N]^∞. For all θ∈ N]^∞ the sequence of tilings {π (θ|k)}_k=0^∞ is nested according to{f_i(A):i∈ N]}=π(∅)⊂π(θ|1)⊂π(θ|2)⊂π(θ|3)⊂⋯ .For all θ∈ N]^∞, the prototile set for π(θ) is {s^iA:i=1,2,⋯,a_max}. Furthermoreπ:[N]^∗∪ N]^∞→𝕋is a continuous map from the compact metric space [N]^∗∪ N]^∞ into the space (𝕂(ℝ^M),d_τ). Using Lemma <ref>, for θ=θ_1θ_2⋯θ_l ∈ N]^∗ and θ^-=θ_1θ_2⋯θ_l-1,π(θ) =E_θT_e(θ)=_i=1^N E_θE_e(θ),iT_k-a_i⊃ E_θE_e(θ),θ_lT_k-a_θ_l =E_θ^-T_e(θ^-)=π(θ^-).It follows that {π(θ|k)} is an increasing sequence of tilings for all θ∈ N]^∞, as in Equation (<ref>), and so converges to a well-defined limit. Since the maps in the IFS are strict contractions, their inverses are expansive, whence π(θ) is unbounded for all θ∈ N]^∞.The fact that the tiles here are indeed tiles as we defined them at the start of this paper follows from three readily checked observations. (i) The tiles are nonempty perfect compact sets because they are isometric to the attractor, that is not a singleton, of an IFS of similitudes. (ii) There are only finitely many tiles that intersect any ball of finite radius. (iii) Any two tiles can meet only on a set that is contained in the image under a similitude of the set of overlap.Next we prove that there are exactly a_max distinct tiles, up to isometry, in any tiling π(θ) for θ∈ N]^∞. The tiles of π(θ) take the form {f__-θ|kf_σ (A):σ∈Ω_e(θ|k)} for some k∈ℕ. The mappings here are similitudes whose scaling factors are {s^e(σ)-e(θ |k):e(σ)-e(θ|k)>0≥ e(σ)-e(θ|k)-a_|σ|}, namely {s^m:m>0≥ m-a_|σ|} for which the possible values are at most all of {1,2,…,a_max}. That all of these values occur for large enough k follows from {a_i:i=1,2,…, N}=1.Next we prove that π:[N]^∗∪ N]^∞→𝕋 is a continuous map from the compact metric space [N]^∗ ∪ N]^∞ onto the space (𝕋,d_T). The map π|_[N]^∗:[N]^∗→𝕋 is continuous on the discrete part of the space ([N]^∗,d_[N]^∗∪ N]^∞ ) because each point θ∈ N]^∗ possesses an open neighborhood that contains no other points of [N]^∗∪ N]^∞. To show that π is continuous at points of [N]^∞ we follow a similar method to the one in <cit.>. Let ε>0 be given and let B(R) be the open ball of radius R centered at the origin. Choose R so large that h(ρ(B(R)),𝕊^M)<ε. This implies that if two tilings differ only where they intersect the complement of B(R), then their distance d_τ apart is less than ε. 
But geometrical consideration of the way in which support(π(θ_1θ_2θ_3..θ_k)) grows with increasing k shows that we can choose K so large that support( π(θ_1θ_2θ_3..θ_k))∩B(R) is constant for all k≥ K. It follows thath(ρ(π(θ_1θ_2..θ_k)),ρ(π(θ_1θ _2..θ_l)))≤εand as a consequenceh(ρ(∂π(θ_1θ_2..θ_k)),ρ(∂π (θ_1θ_2..θ_l)))≤εfor all k,l≥ K. It follows that h(ρ(π(θ)),ρ(π (ω)))≤ε) whenever θ_1θ_2..θ_K =ω_1ω_2..ω_K. It follows that π is continuous. § THEOREM <REF>: WHEN DO ALL TILINGS REPEAT THE SAME PATTERNS? Theorem 2.* Each unbounded tiling in 𝕋 is quasiperiodic and all tilings in 𝕋 have the local isomorphism property. * If θ is eventually periodic, then π(θ) is self-similar. In fact, if θ=αβ for some α,β∈[ N]^∗, then f_-αf_-β(f_-α) ^-1 is a self-similarity of π(θ).(1) First we prove quasiperiodicity. This is related to the self-similarity of the sequence of tilings {T_k} mentioned in Proposition <ref>.Let θ∈[N]^∞ be given and let P be a patch in π(θ). There is a K_1∈ℕ such that P is contained in π(θ|K_1). Hence an isometric copy of P is contained in T_K_2 where K_2=e(θ|K_1). Now choose K_3∈ℕ so that an isometric copy of T_K_2 is contained in each T_k with k≥ K_3. That this is possible follows from the recursion (<ref>) of Lemma <ref> and gcd{a_i}=1. In particular, T_K_2⊂ T_K_3+i for all i∈{1,2,...,a_max}.Now let K_4=K_3+a_max. Then, for all k≥ K_4, the tiling T_k is an isometric combination of {T_K_3+i:i=1,2,...,a_max}, and each of these tilings contains a copy of T_K_2 and in particular a copy of P.Let D=max{‖ x-y‖ :x,y∈ A} be the diameter of A. The support of T_k is s^-kA which has diameter s^-kD. Hence support(T_k)⊂ B(x,2s^-kD), the ball centered at x of radius 2s^-kD, for all x∈ support(T_k). It follows that if x∈supportπ(θ^') for any θ^' ∈[N]^∞, then B(x,2s^-K_4D) contains a copy of support(T_K_2) and hence a copy of P. Therefore all unbounded tilings in 𝕋 are quasiperiodic.In <cit.> Radin and Wolff define a tiling to have the local ismorphism property if for every patch P in the tiling there is some distance d(P) such that every sphere of diameter d(P) in the tiling contains an isometric copy of P. Above, we have proved a stronger property of tilings, as defined here, of fractal blow-ups. Given P, there is a distance d(P) such that each sphere of diameter d(P), centered at any point belonging to the support of any unbounded tiling in 𝕋, contains a copy of P.(2) Let θ=αβ=α_1α_2⋯α _lβ_1β_2⋯β_mβ_1β_2⋯β_m β_1β_2⋯β_m⋯. We have the equivalent increasing unionsπ(θ)= ⋃_k∈ℕ E_θ|kT_e(θ|k)= ⋃_j∈ℕ E_θ|(l+jm)T_e(θ|(l+jm))= ⋃_j∈ℕ E_θ|(l+jm+m)T_e(θ|(l+jm+m))where, for all k,E_θ|k=f_-θ|ks^e(θ|k).We can writeπ(θ)= ⋃_j∈ℕ E_θ|(l+jm)T_e(θ|(l+jm))=f_-α ⋃_j∈ℕ f_-β^js^e(θ|(l+jm))T_e(θ|(l+jm)),and alsoπ(θ)= ⋃_j∈ℕ E_θ|(l+jm+m)T_e(θ|(l+jm+m))=f_-αf_-β ⋃_j∈ℕ f_-β^js^e(θ|(l+jm+m))T_e(θ|(l+jm+m)).Here f_-β^js^e(θ|(l+jm+m))T_e(θ|(l+jm+m)) is a refinement of f_-β^js^e(θ|(l+jm))T_e(θ|(l+jm)). It follows that (f_-αf_-β)^-1π(θ) is a refinement of (f_-α)^-1π(θ), from which it follows that (f_-α)(f_-αf_-β)^-1π(θ) is a refinement of π(θ). Therefore, every set in (f_-αf_-β)(f_-α) ^-1π(θ) is a union of tiles in π(θ). §RELATIVE AND ABSOLUTE ADDRESSES In order to understand how different tilings relate to one another, the notions of relative and absolute addresses of tiles are introduced. Given an IFS ℱ, the set of absolute addresses is defined to be:𝔸:={θ.ω:θ∈ N]^∗, ω∈Ω_e(θ), θ_|θ|≠ω_1}.Define π:𝔸→{t∈ T:T∈𝕋} byπ(θ.ω)=f_-θ.f_ω(A).We say that θ.ω is an absolute address of the tile f_-θ.f_ω(A). 
It follows from Definition <ref> that the map π is surjective: every tile of {t∈ T:T∈𝕋} possesses at least one address. The condition θ_|θ|≠ω_1 is imposed to make cancellation unnecessary.The set of relative addresses is associated with the tiling T_k of A_k=s^-kA and is defined to be {.ω:ω∈Ω_k}. There is a bijection between the set of relative addresses {.ω:ω∈Ω_k} and the tiles of T_k, for all k∈ℕ_0.This follows from the non-overlapping unionA= _ω∈Ω_k f_ω(A).This expression follows immediately from Lemma <ref>; see the start of the proof of Lemma <ref>. Accordingly, we say that .ω, or equivalently ∅.ω, where ω∈Ω_k, is the relative address of the tile s^-kf_ω(A) in the tiling T_k of A_k. Note that a tile of T_k may share the same relative address as a different tile of T_l for l≠ k.Define the set of labelled tiles of T_k to be𝒜_k={(.ω,s^-kf_ω(A)):ω∈Ω_k}for all k∈ℕ_0. A key point about relative addresses is that the set of labelled tiles of T_k for k∈ℕ can be computed recursively. Define𝒜_k^^'={(ω,s^-kf_ω(A))∈𝒜 _k:e(ω)=k+1}⊂𝒜_k.An example of the following inductive construction is illustrated in Figure <ref>, and some corresponding tilings π(θ) labelled by absolute addresses are illustrated in Figure <ref>. For all k∈ℕ_0 we have𝒜_k+1=ℒ(𝒜_k\𝒜_k^^' )∪ℳ(𝒜_k^^')whereℒ(ω,s^-kf_ω(A)) =(ω,s^-k-1f_ω(A)), ℳ(ω,s^-kf_ω(A)) ={(ω i,s^-k-1 f_ω i(A)):i∈ N]}.This follows immediately from Lemma <ref>. § STRONG RIGIDITY, DEFINITION OF “AMALGAMATION AND SHRINKING" OPERATION Α ON TILINGS, AND PROOF OF THEOREM <REF>. We begin this key section by introducing an operation, called “amalgamation and shrinking", that maps certain tilings into tilings. This leads to the main result of this section, Theorem <ref>, which, in turn, leads to Theorem <ref>.Let T_0={f_i(A):i∈[N]}. The IFS ℱ is said to be rigid if (i) there exists no non-identity isometry E∈ℰ such that T_0∩ ET_0 is non-empty and tiles A∩ ET, and (ii) there exists no non-identity isometry E∈ℰ such that A=EA.Define 𝕋^' to be the set of all tilings using the set of prototiles {s^iA:i=1,2,...,a_max}. Any tile that is isometric to s^a_maxA is called a small tile, and any tile that is isometric to sA is called a large tile. We say that a tiling P∈𝕋^' comprises a set of partners if P=ET_0 for some E∈ℰ. Define 𝕋^''⊂ 𝕋^' to be the set of all tilings in 𝕋^' such that, given any Q∈𝕋^'' and any small tile t∈ Q, there is a set of partners of t, call it P(t), such that P(t)⊂ Q. Given any Q∈𝕋^'' we define Q^' to be the union of all sets of partners in Q.Let ℱ be a rigid IFS. The amalgamation and shrinking operation α:𝕋^''→ 𝕋^' is defined byα Q={st:t∈ Q\ Q^'}∪ _{E∈ℰ:ET_0⊂ Q^'} sEA. If ℱ is rigid, the function α :𝕋^''→ 𝕋^'is well-defined and bijective; in particular, α^-1:𝕋^'→𝕋^'' is well defined byα^-1(Q)={α_Q^-1(q):q∈ Q}whereα_Q^-1(q)={[s^-1q if q∈ Q is not a large tile; s^-1ET_0 if Eq is a large tile, some E∈ℰ ].Because ℱ is rigid, there can be no ambiguity with regard to which sets of tiles in a tiling are partners, nor with regard to which tiles are the partners of a given small tile. Hence α:𝕋^''→ 𝕋^' is well defined. Given any T^'∈𝕋^' we can find a unique Q∈𝕋^'' such that α(Q)=T^', namely α^-1(Q) as defined in the lemma.Let ℱ be rigid and k∈ℕ. Then(i) T_k∈𝕋^'';(ii) α T_k=T_k-1 and α^-1T_k-1=T_k.As described in Lemma <ref>, T_k can constructed in a well-defined manner, starting from from T_k-1, by scaling and splitting, that is, by applying α^-1. Conversely T_k-1 can be constructed from T_k by applying α. Statements (i) and (ii) are consequences. 
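The scaling-and-splitting step just described is easy to run symbolically. The following Python sketch is an illustration only (again with hypothetical exponents a_1 = 1, a_2 = 2): it tracks nothing but the relative addresses of the tiles of T_k, splitting an address ω into its N children exactly when e(ω) = k + 1 and keeping it otherwise, as in the labelled-tile recursion of the preceding section; the geometry (rescaling by s^{-1} and applying the maps f_i) is omitted.

```python
# Hypothetical exponents: a_1 = 1, a_2 = 2.  Relative addresses are tuples
# over {1, 2}; only the address bookkeeping is shown.
a = {1: 1, 2: 2}

def e(omega):
    return sum(a[i] for i in omega)

def split_step(addresses, k):
    """Relative addresses of T_{k+1} from those of T_k: addresses with
    e(omega) = k + 1 are split into their N children, the rest are kept."""
    out = []
    for omega in addresses:
        if e(omega) == k + 1:            # this tile is split (the map M)
            out.extend(omega + (i,) for i in sorted(a))
        else:                            # this tile is only rescaled (the map L)
            out.append(omega)
    return out

addresses = [(i,) for i in sorted(a)]    # T_0 = {f_1(A), f_2(A)}
for k in range(4):
    print(k, addresses)
    addresses = split_step(addresses, k)
# Each printed list coincides with Omega_k computed directly from its
# definition, so the recursion reproduces, symbolically, the partition of
# [N]^infinity into the cylinder sets c(omega), omega in Omega_k.
```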
If ℱ is rigid, L,M∈𝕋^'', and L∩ M tiles support(L) ∩support(M), then L ∩M∈𝕋^''. Moreover,α(L∩ M)=α(L)∩α(M),and α(L∩ M) tiles supportα(L) ∩supportα(M).Since L,M∈𝕋^''⊂ 𝕋^' lie in the range of α^-1, we can find unique L^',M^'∈𝕋^' such thatL=α^-1L^' and M=α^-1M^'.Note that α^-1(T^')={α^-1(t):t∈ T^'} for all T^'∈𝕋^', which implies that α^-1 commutes both with unions of disjoint tilings and also with intersections of tilings whose intersections tile the intersections of their supports. It follows that L∩ M∈𝕋^'',α(L∩ M) =α(α^-1L^'∩α^-1M^')=α(α^-1(L^'∩ M^'))=L^'∩ M^'=α(L)∩α(M),and support α(L∩ M)=support α(L)∩support α(M).ℱ is strongly rigid if ℱ is rigid and whenever i,j∈{0,1,2,…,a_max-1},E∈ℰ, and T_i∩ ET_j tiles A_i∩ EA_j, either T_i⊂ ET_j or T_i⊃ ET_j. Section <ref> contain a few examples of strongly rigid IFSs.Let ℱ be strongly rigid, k,l∈ℕ_0, and E∈ℰ.(i) If ET_k∩ T_k is nonempty and tiles EA_k∩ A_k, then E=id.(ii) If EA_k∩ A_k+l is nonempty and ET_k∩ T_k+l tiles EA_k∩ A_k+l, then ET_k⊂ T_k+l.Suppose ET_k∩ T_l≠∅ and t.i.s. (tiles intersection of supports). Without loss of generality assume k≤ l, for if not, then apply E^-1, then redefine E^-1 as E.Both ET_k and T_l lie in the domain of α^k, so we can apply Lemma <ref> k times, yieldingα^k(ET_k∩ T_l) =s^kEs^-kT_0∩ T_l-k :=ET_0∩ T_l-k≠∅,where ET_0∩ T_l-k t.i.s. Now observe that by Lemma <ref> we can write, for all k^'≥ l^'+a_max,T_k^'=_ω∈Ω_l^'E_k^',ωT_k^'-e(ω)(={E_k^',ωT_k^'-e(ω):ω∈Ω_l^'}),where E_k^',ω∈ℰ for all k^',ω. Choosing l^'=k^'-a_max and noting that, for ω∈Ω_l^', we have e(ω)∈{l^'+1,…,l^'+a_max}, and for ω∈Ω_k^'-a_max we have e(ω)∈{k^'-a_max+1,…,k^'}. Therefore k^'-e(ω)∈{0,1,…,a_max-1} and we obtain the explicit representationT_k^'=_ω∈Ω_k^'-a_max E_k^',ωT_k^'-e(ω)which is an isometric combination of {T_0,T_1,…,T_a_max-1}. In particular, we can always reexpress T_l-k in (<ref>) as isometric combination of {T_0,T_1,…,T_a_max-1} and so there is some E^' and some T_m∈{T_0,T_1,…,T_a_max-1} such thatET_0∩ E^'T_m≠∅ and t.i.s.By the strong rigidity assumption, this implies ET_0⊂ E^'T_m, which in turn impliesET_0⊂ T_l-kand t.i.s. Now apply α^-k to both sides of this last equation to obtain the conclusions of the lemma. Theorem <ref>. Let ℱ be strongly rigid. If θ,θ^'∈ N]^∗and E∈ℰ are such that π(θ)∩ Eπ(θ^') is not empty and tiles A_-θ∩ EA_-θ^', then either π(θ)⊂ Eπ(θ^') or Eπ(θ^')⊂π(θ). In this situation, if e(θ)=e(θ^'), then Eπ(θ^' )=π(θ). This follows from Lemma <ref>. If θ,θ^' ∈ N]^∗and E∈ℰ are such that π(θ)∩ Eπ(θ^') is not empty and tiles A_-θ∩ EA_-θ ^', then θ,θ^'∈ N]^∗and E∈ℰ are such that E_θT_e(θ)∩ EE_θ ^'T_e(θ^') is not empty and tiles E_θA_e(θ)∩ EE_θ^'A_e(θ^'), where E_θ=f_-θs^e(θ) and E_θ^'=f_-θ ^'s^e(θ^') are isometries. Assume, without loss of generality, that e(θ)≤ e(θ^') and apply E_θ ^'^-1 E^-1 to obtain that θ,θ^'∈ N]^∗ and E^'=E_θ^'^-1E^-1 E_θ ∈ℰ are such that E^'T_e(θ)∩ T_e(θ ^') is not empty and tiles E^'A_e(θ)∩ A_e(θ^'). By Lemma <ref> it follows that E^'T_e(θ)⊂ T_e(θ^'), i.e. E_θ ^'^-1E^-1E_θT_e(θ)⊂ T_e(θ^'), i.e. π(θ)⊂ Eπ(θ^'). If also e(θ^')≤ e(θ) (i.e. e(θ^')=e(θ)), then also Eπ(θ^')⊂π(θ). Therefore Eπ(θ^')=π(θ). § THEOREM <REF>: WHEN IS A TILING NON-PERIODIC? Theorem <ref>. If F is strongly rigid, then there does not exist any non-identity isometry E∈ℰ and θ∈ N]^∞ such that Eπ(θ)⊂π(θ). Suppose there exists an isometry E such that Eπ(θ)=π(θ). Then we can choose K∈ℕ_0 so large that Eπ(θ|K)∩π(θ|K)≠∅ and Eπ(θ|K)∩π(θ|K) tiles EA_-θ|K∩ A_-θ|K. 
By Theorem <ref> it follows thatEπ(θ|K)=π(θ|K)This impliesEE_θT_e(θ|K)=E_θT_e(θ |K)whence, because E_θT_e(θ|K) is in the domain of α^e(θ|K) and α^e(θ|K) T_e(θ|K)=T_0, we have by Lemma <ref>α^e(θ|K)E E_θT_e(θ |K) =α^e(θ|K)E_θT_e( θ|K)⟹ s^e(θ|K)EE_θs^-e( θ|K)α^e(θ|K)T_e( θ|K)=s^e(θ|K)E_θs^-e( θ|K)α^e(θ|K)T_e( θ|K)⟹ s^e(θ|K)EE_θs^-e( θ|K)T_0=s^e(θ|K)E_θs^-e( θ|K)T_0⟹ s^e(θ|K)EE_θs^-e( θ|K)=s^e(θ|K)E_θs^-e( θ|K) (using rigidity)⟹ E=id. It follows that if ℱ is strongly rigid, then π(θ) is non-periodic for all θ.§ WHEN IS Π:[N]^∗∪ N]^∞→𝕋 INVERTIBLE?For all ℱ the restricted mapping π|_[ N]^∗.:[N]^∗→𝕋 is injective.To simplify notation, write π=π|_[N]^∗. We show how to calculate θ given π(θ) for θ∈[N]^∗. By Lemma <ref> we have π(θ)=E_θT_e(θ), where E is the isometry f_-θs^e(θ). Given π(θ), we can calculatee(θ)=ln| A| -ln|π(θ )|/ln s,where | U| denotes the diameter of the set U.We next show that if E_θ=E_θ^' for some θ≠θ^' with e(θ)=e(θ^'), then π (θ)≠π(θ^'). To do this, suppose that E_θ=E_θ^'. This implies that f_-θ=f_-θ^' which implies(f_-θ^')^-1f_-θ=id,which is not possible when θ≠θ^', as we prove next. The similitude (f_-θ^')^-1f_-θ maps (f_-θ)^-1(A)⊂ A to (f_-θ ^')^-1(A)⊂ A, and these two subsets of A are distinct for all θ,θ^'∈[N]^∗with θ≠θ^', as we prove next.Let ω,ω^' denote the two strings θ,θ^' written in inverse order, so that θ≠θ^' is equivalent to ω≠ω^'. First suppose |ω| =|ω^'| =m for some m∈ℕ. Then useA= _ω∈[N]^m f_ω(A),which tells us that f_ω(A) and f_ω^'(A) are disjoint. Since (f_-θ^')^-1f_-θ maps (f_-θ)^-1(A)=f_ω(A) to the distinct set (f_-θ^')^-1(A)=f_ω^'(A), we must have (f_-θ^')^-1f_-θ≠ id.Now suppose |ω| =m<|ω^'| =m^'. If both strings ω and ω^' agree through the first m places, then f_ω(A) is a strict subset of f_ω^'^-1(A) and again we cannot have (f_-θ ^')^-1f_-θ=id. If both strings ω and ω^' do not agree through the first m places, then let p<m be the index of their first disagreement. Then we find that f_ω(A)is a subset of f_ω|p(A), while f_ω^'(A) is a subset of the set f_ω^'|p(A), which is disjoint from f_ω|p(A). Since (f_-θ^')^-1f_-θ maps f_ω(A) to f_ω^'(A), we again have that (f_-θ ^')^-1f_-θ≠ id. We are going to need a key property of certain shifts maps on tilings, defined in the next lemma. The mappings S_i:{π(θ):θ∈ N]^l∪ N]^∞,l≥ a_i}→𝕋^' for i∈ N] are well-defined byS_i=f_is^-a_iα^a_i.It is true thatS_θ_1π(θ)=π(Sθ)for all θ∈ N]^l∪ N]^∞ where l≥ a_θ_1.We only consider the case θ∈ N]^∞. The case θ∈ N]^l is treated similarly. A detailed calculation, outlined next, is needed. The key idea is that π(θ) is broken up into a countable union of disjoint tilings, each of which belongs to the domain of α^k for all k≤ K for any K∈ℕ. For all K∈ℕ we have:π(θ)=E_θ|KT_e(θ|K)_k=K^∞ E_θ|k+1T_e(θ|k+1)\ E_θ |kT_e(θ|k).The tilings on the r.h.s. are indeed disjoint, and each set belongs to the domain of α^e(θ|K), so we can use Lemma <ref> applied countably many times to yieldS_θ_1π(θ)=S_θ_1(E_θ |KT_e(θ|K)) _k=K^∞ S_θ_1(E_θ|k+1T_e(θ|k+1)) \ S_θ_1(E_θ|kT_e(θ|k) ).Evaluating, we obtain successivelyS_θ_1π(θ)=f_θ_1s^-a_θ_1 α^a_θ_1(E_θ|KT_e(θ|K) ) _k=K^∞ f_θ_1s^-a_θ_1α^a_θ_1(E_θ |k+1T_e(θ|k+1))\ f_θ_1 s^-a_θ_1α^a_θ_1(E_θ|kT_e( θ|k)),S_θ_1π(θ)=f_θ_1E_θ |Ks^-a_θ_1α^a_θ_1T_e(θ|K) _k=K^∞ f_θ_1E_θ|k+1s^-a_θ_1α^a_θ_1 T_e(θ|k+1)\ f_θ_1E_θ |k+1s^-a_θ_1α^a_θ_1T_e(θ|k) ,S_θ_1π(θ)=f_θ_1E_θ |Ks^-a_θ_1T_e(Sθ|K-1) _k=K^∞ f_θ_1E_θ|k+1s^-a_θ_1T_e(Sθ|k) \ f_θ_1E_θ|ks^-a_θ_1T_e( Sθ|k-1),S_θ_1π(θ)=E_Sθ|(K-1) T_e(Sθ|K-1) _k=K^∞ E_Sθ|kT_e(Sθ|k-1)\ E_Sθ |k-1T_e(Sθ|k-1)=π(Sθ). Theorem <ref>. 
If π(i)∩π(j) does not tile ( supportπ(i))∩(supportπ(j)) for all i≠ j, then π:[N]^∗∪ N]^∞→𝕋 is one-to-one. The map π is one-to-one on [N]^∗ by Lemma <ref>, so we restrict attention to points in [N]^∞. If θ and θ ^' are such that θ_1=i and θ_1^'=j, then the result is immediate because π(θ) contains π(i) and π (θ^') contains π(j). If θ and θ^' agree through their first K terms with K≥1 and θ_K+1≠ θ_K+1^', then π(S^Kθ)≠π(S^Kθ^'). Now apply S_θ_1^-1S_θ_2^-1...S_θ_K^-1 to obtain π(θ)≠π(θ^'). (We can do this last step because S_i^-1=(f_is^-a_iα^a_i)^-1 =α^-a_is^a_if_i^-1 has as its domain all of 𝕋 ^' and maps 𝕋^' into 𝕋^'.) § EXAMPLES§.§ Golden b tilings A golden b G⊂ℝ^2 is illustrated in Figure <ref>. This hexagon is the only rectilinear polygon that can be tiled by a pair of differently scaled copies of itself <cit.>. Throughout this subsection the IFS isℱ={ℝ^2;f_1,f_2}wheref_1(x,y)= [0s; -s0 ] [ x; y ] + [ 0; s ] , f_2(x,y)= [ -s^20;0s^2 ] [ x; y ] + [ 1; 0 ] ,where the scaling ratios s and s^2 obey s^4+s^2=1, which tells us that s^-2=α^-2 is the golden mean. The attractor of ℱ is A=G. It is the union of two prototiles f_1(G) and f_2(G). Copies of these prototiles are labelled L and S. In this example, note that e(θ)=θ_1+θ_2+⋯+θ_|θ| for θ∈2]^∗.The figures in this section illustrate some earlier concepts in the context of the golden b. Using some of these figures, it is easy to check that ℱ is strongly rigid, so the tilings π(θ) have all of the properties ascribed to them by the theorems in the earlier sections.The relationships between A_θ_1θ_2⋯θ_k1 and A_θ_1θ_2⋯θ_k2 relative to A_θ_1 θ_2⋯θ_k are illustrated in Figure <ref>. Figure <ref> illustrates some of the sets A_θ_1θ _2θ_3..θ_k and the corresponding tilings π(θ _1θ_2θ_3..θ_k).In Section <ref>, procedures were described by which the relative addresses of tiles in T(θ|k) and the absolute addresses of tiles in π(θ|k) may be calculated recursively. Relative addresses for some golden b tilings are illustrated in Figure <ref>. Figure <ref> illustrates absolute addresses for some golden b tilings.The map π:[2]^∗∪2]^∞→𝕋 is 1-1 by Theorem <ref>, because π(1)∪π(2) does not tile the interesection of the supports of π(1) and π(2), as illustrated in Figure <ref>.We note that π(12) and π(21) are aperiodic tilings of the upper right quadrant of ℝ^2. §.§ Fractal tilings with non-integer dimension The left hand image in Figure <ref>, shows the attractor of the IFS represented by the different coloured regions, there being 8 maps, and provides an example of a strongly rigid IFS. The right hand image represents the attractor of the same IFS minus one of the maps, also strongly rigid, but in this case the dimensions of the tiles is less than two and greater than one. Figure <ref> (in Section <ref>) illustrates a part of a fractal blow up of a different but related 7 map IFS, also strongly rigid, and the corresponding tiling.Figure <ref> left shows a tiling associated with the IFS ℱ represented on the left in Figure <ref>, while the tiling on the right is another example of a fractal tiling, obtained by dropping one of the maps of ℱ.§.§ Tilings derived from Cantor sets Our results apply to the case where ℱ={ℝ^M ;f_i(x)=s^a_iO_i+q_i,i∈ N]} where {O_i,q_i :i∈ N]} is fixed in a general position, the a_is are positive integers, and s is chosen small enough to ensure that the attractor is a Cantor set. 
In this situation the set of overlap is empty and it is to be expected that ℱ is strongly rigid, in which case all tilings (by a finite set of prototiles, each a Cantor set) will be non-periodic. We can then take s to be the supremum of value such that the set of overlap is nonempty, to yield interesting “just touching" tilings.99andersonJ. E. Anderson and I. F. Putnam, Topological invariants for substitution tilings and their associated C^∗-algebras, Ergod. Th. & Dynam. Sys. 18 (1998) 509-537.bandtC. Bandt, M. F. Barnsley, M. Hegland, A. Vince, Old wine in fractal bottles I: Orthogonal expansions on self-referential spaces via fractal transformations, Chaos, Solitons and Fractals, 91 (2016) 478-489.manifoldM. F. Barnsley, A. Vince, Fast basins and branched fractal manifolds of attractors of iterated function systems, SIGMA 11 (2015), 084, 21 pages.tilingsM. F. Barnsley, A. Vince, Fractal tilings from iterated function systems, Discrete and Computational Geometry, 51 (2014) 729-752.polygonM. F. Barnsley, A. Vince, Self-similar polygonal tilings, Amer. Math. Monthly, to appear (2017).GB. Grünbaum and G. S. Shephard, Tilings and Patterns, Freeman, New York (1987).hutchinsonJ. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J. 30 (1981) 713-747.KenR. Kenyon, The construction of self-similar tilings, Geom. Funct. Anal. 6 (1996) 471-488.PenR. Penrose, Pentaplexity, Math Intelligencer 12 (1965) 247-248.RadC. Radin, M. Wolff, Space tilings and local isomorphism, Geometrica Dedicata 42, 355-360, 1991. SK. Scherer, A Puzzling Journey To The Reptiles And Related Animals, privately published, Auckland, New Zealand, 1987.SchJ. H. Schmerl, Dividing a polygon into two similar polygons, Discrete Math., 311 (2011) 220-231.sadunL. Sadun, Tiling spaces are inverse limits, J. Math. Phys., 44 (2003) 5410-5414.strichartzR. S. Strichartz, Fractals in the large, Canad. J. Math., 50 (1998) 638-657.
Solar activity forcing of terrestrial hydrological phenomena
P. J. D. Mauas, A. P. Buccino, E. Flamenco
======================================================================
Recently, the study of the influence of solar activity on the Earth's climate has received strong attention, mainly due to the possibility, proposed by several authors, that global warming is not anthropogenic but is due to an increase in solar activity. Although this possibility has been ruled out, there is strong evidence that solar variability has an influence on Earth's climate on regional scales. Here we review some of this evidence, focusing on a particular aspect of climate: atmospheric moisture and related quantities like precipitation. In particular, we studied the influence of activity on South American precipitations over centuries. First, we analyzed the stream flow of the Paraná and other rivers of the region, and found a very strong correlation with Sunspot Number on decadal time scales. We found a similar correlation between Sunspot Number and tree-ring chronologies, which allows us to extend our study to cover the last two centuries.

§ INTRODUCTION
In the last decades, several authors proposed that global warming is not anthropogenic but is due instead to an increase in solar activity, a proposition which resulted in a strong interest in studying the influence of solar activity on the Earth's climate. This discussion was, of course, of great political interest, and had a strong repercussion in the media. For example, on December 4, 1997, an article on the subject entitled "Science Has Spoken: Global Warming Is a Myth" appeared in the Wall Street Journal (see Fig. <ref>). This article, together with a copy of a scientific-looking paper (of which there are three versions, e.g. Soon et al. 1999), was massively sent to North American scientists, accompanied by a petition to be presented to the Congress of the United States opposing the ratification of the Kyoto protocol.

This article was based on the results obtained by <cit.> and <cit.>, who found a similarity between the length of the solar cycle (LSC), smoothed with a 1-2-2-2-1 filter, and the 11-yr running mean of the Northern Hemisphere temperature anomalies. However, these studies were seriously objected to by <cit.> and <cit.>. In particular, these results were obtained using the actual, non-smoothed, LSCs for the last 4 cycles. Using the right values, already available 10 years later, it can be seen that the solar cycles had approximately the same length as in the 1970s, while the temperature continued to increase (e.g. see <cit.>).

Several years later, <cit.> and <cit.> found that total cloud cover changed in phase with the flux of galactic cosmic rays (GCR), which are modulated by the interplanetary magnetic field associated with the solar wind and, therefore, with solar activity. They proposed a mechanism for the influence of solar activity on climate, in which GCR would affect cloud formation on Earth through ionization of the atmosphere. Therefore, during periods of higher solar activity, when the interplanetary magnetic field is larger and therefore fewer GCR hit the Earth, the cloud cover would be smaller. Later on the observed agreement was lost, although <cit.> proposed that it was still visible with low clouds. This theory was criticized for different reasons (e.g. Laut 2003), in particular because GCR should affect high clouds more strongly than low ones.
Furthermore, <cit.> studied ground-based observations obtained at 90 meteorological stations in the US during more than 90 years, and found the opposite correlation. At present, the correlation found by Svensmark and collaborators cannot be seen in the data. In fact, <cit.> found that all possible solar forcings of climate had trends opposite to those needed to account for the rise in temperatures measured in the last century.

Moreover, the idea that the Sun has played a significant role in modern climate warming was mainly based on a general consensus that solar activity has been increasing during the last 300 years, after the Maunder Minimum, with a maximum in the late 20th century, which some researchers called the Modern Grand Maximum. However, this increase in solar activity has been identified as an error in the calibration of the Group Sunspot Number. When this error is corrected, solar activity appears to have been relatively stable since the end of the Maunder Minimum (see e.g. <cit.>, and the official IAU release [https://www.iau.org/news/pressreleases/detail/iau1508/]). However, even if global warming cannot be attributed to an increase in solar activity, there is strong evidence that activity can influence terrestrial climate on local scales. In what follows we review some of that evidence, in particular the evidence related to hydrological phenomena, and review our recent work on the subject.

§ SOLAR ACTIVITY AND HYDROLOGICAL PHENOMENA
Usually, studies focusing on the influence of solar activity on climate have concentrated on Northern Hemisphere temperature or sea surface temperature. However, climate is a very complex system, involving many other important variables. Recently, several studies have focused on a different aspect of climate: atmospheric moisture and related quantities like, for example, precipitation.

Perhaps the most studied case is the Asian monsoon, where correlations between precipitation and solar activity have been found on several time scales. For example, <cit.> studied the monsoon in Oman between 9 and 6 kyr ago, and found strong coherence with solar variability. <cit.> found that the monsoon intensity in India followed the variations of the solar irradiance on centennial time scales during the last millennium. <cit.> studied the Indian monsoon during the Holocene, and found that intervals of weak solar activity correlate with periods of low monsoon precipitation, and vice versa. On shorter time scales, <cit.> found that, at multidecadal time scales, when solar irradiance is above normal there is a stronger correlation between the El Niño 3 index and the monsoon rainfall, and vice versa. <cit.> and <cit.>, among others, also found correlations between solar activity and the Indian monsoon on decadal time scales.

The monsoon in southern China over the past 9000 years was studied by <cit.>, who found that higher solar irradiance corresponds to a stronger monsoon. They proposed that the monsoon responds almost immediately to the solar forcing through rapid atmospheric responses to solar changes. <cit.> studied groundwater recharge rates in the Chinese region of Mongolia. Groundwater recharge is the hydrologic process by which water moves downward from surface water to an aquifer.
They found strong stationary power at 200-220 years, significant at more than the 95% confidence level, with wet periods coincident with strong solar activity periods. All these studies found a positive correlation, with periods of higher solar activity corresponding to periods of larger precipitation. In contrast, <cit.> studied a 6000-year record of precipitation and drought in northeastern China, and found that most of the dry periods agree with stronger solar activity and vice versa.

In the American continent, droughts in the Yucatan Peninsula have been associated with periods of strong solar activity and have even been proposed to have caused the decline of the Mayan civilization (<cit.>). In the same sense, studies of the water level of the East African Lakes Naivasha (<cit.>) and Victoria (<cit.>) found that severe droughts were coincident with phases of high solar activity and that rains increased during periods of low solar irradiation. To explain these differences it has been proposed that in equatorial regions enhanced solar irradiation causes more evaporation, increasing the net transport of moisture flux to the Indian region via monsoon winds (Agnihotri 2002).

However, these relationships seem to have changed sign around 200 years ago, when strong droughts took place over much of tropical Africa during the Dalton minimum, around 1800-1820 (<cit.>). Furthermore, recent water levels in Lake Victoria were studied by <cit.>, who found that during the 20th century maxima of the ∼11-year sunspot cycle were coincident with water level peaks caused by positive rainfall anomalies ∼1 year before solar maxima. These same patterns were also observed in at least five other East African lakes, hinting that these relationships between sunspot number and rainfall were regional in scale.

In <cit.> we took a different approach, and proposed to use the stream flow of a large river, the Paraná in southern South America, to study precipitation over a large area (see below). In this direction, <cit.> found signals of solar activity in the river Nile using spectral analysis techniques. They reported an 88-year variation present both in solar variability and in the Nile data. <cit.> studied the stream flow of the Po river, and found a correlation with variations in solar activity on decadal time scales.

§ STREAM FLOW OF THE PARANÁ RIVER
River stream flows are excellent climatic indicators, and those with continental-scale basins smooth out local variations and can be particularly useful to study global forcing mechanisms. In particular, the Paraná River originates in the southernmost part of the Amazon forest, and it flows south collecting water from the countries of Brazil, Paraguay, Bolivia, Uruguay, and Argentina (see Fig. <ref>). It has a basin area of over 3,100,000 km^2 and a mean stream flow of 20,600 m^3/s, which makes it the fifth river of the world according to drainage area and the fourth according to stream flow.

Understanding the different factors that have an impact on the flow of these rivers is fundamental for different social and economic reasons, from planning of agricultural or hydroenergetic conditions to the prediction of floods and droughts. In particular, floods of the Paraná can cover very large areas, as can be seen in Fig. <ref>. During the last flood, in 1997, 180 000 km^2 of land were covered with water, 125 000 people had to be evacuated, and 25 people died.
Together, the three largest floods of the Paraná during the 20th century caused economic losses of five billion dollars.

In <cit.> we studied the stream flow data measured at a gauging station located in the city of Corrientes, 900 km north of the outlet of the Paraná. It has been measured continuously since 1904, on a daily basis. The yearly data are shown in Fig. <ref> together with the yearly sunspot number (SN), which we use as a solar-activity indicator. Also shown in the figure are the trends, obtained with a low-pass Fourier filter with a 50-year cutoff. In Fig. <ref> we show the stream flow and the SN together. In both cases we have subtracted the secular trend shown in Fig. <ref> from the annual data, and we have performed an 11-yr running mean to smooth out the solar cycle. We have also normalized both quantities by subtracting the mean and dividing by the standard deviation of each series. These last steps have been done to avoid introducing two free parameters, the relative scale and the offset between both quantities (a computational sketch of this processing is given below). It can be seen that there is a remarkable visual agreement between the Paraná's stream flow and the sunspot number. In fact, the Pearson correlation coefficient is r=0.78, with a significance level, obtained through a Student's t-test, higher than 99.99%. It can also be noted that in this area wetter conditions coincide with periods of higher solar activity. A few years later, in <cit.> we found that the correlation still held when more years of data were added. In particular, between 1995 and 2003 the Paraná's stream flow and the mean Sunspot Number both decreased by similar proportions. This is of particular interest, since Solar Cycle 23 was the weakest since the 1970s: the SN values for 2008 (2.9) and 2009 (3.1) were the lowest since 1913, and the beginning of Solar Cycle 24 was delayed by a minimum with the largest number of spotless days since the 1910s. At the same time, the mean levels of the Paraná discharge were also the lowest since the 1970s (see Fig. <ref>).

§ OTHER SOUTH-AMERICAN RIVERS
In <cit.> we followed up on the study of the influence of solar activity on the flow of South American rivers. In that paper we studied the stream flow of the Colorado river and two of its tributaries, the San Juan and the Atuel rivers. We also analyzed snow levels, measured near the sources of the Colorado (see Fig. <ref>).

The Colorado river marks the northern boundary of the Argentine Patagonia, separating it from the Pampas, to the northeast, and the Andean region of Cuyo, to the northwest. Its origin is on the eastern slopes of the Andes Mountains, from where it flows southeast until it discharges into the Atlantic Ocean. The Atuel, which originates in the glacial Atuel Lake, at 3250 m above sea level in the Andes range, and the 500 km long San Juan river join the Colorado downstream of its gauging station. Therefore, the data given by the three series are not directly related. Unlike the Paraná, whose stream flow is directly related to precipitation, the regime of all these rivers is dominated by snow melting, and their stream flows reflect precipitation accumulated during the winter and melted during spring and summer. To directly study the snow precipitation, we complement our data with measurements of the height of snow accumulated in the Andes at 2250 m above sea level, close to the origin of the Colorado (see Fig. <ref>), which have been measured in situ at the end of the winter since 1952.
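The processing just described (detrending, an 11-yr running mean, and standardization before computing Pearson correlations) can be sketched as follows. This is a minimal illustrative sketch, not the actual analysis code: it assumes annual series covering the same years, and a long centered moving average stands in for the 50-yr low-pass Fourier filter with which the trends were actually obtained.

```python
import numpy as np

def running_mean(x, window):
    """Centered running mean, keeping only fully covered samples."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def preprocess(x, trend_window=51, smooth_window=11):
    """Detrend, smooth with an 11-yr running mean, and standardize."""
    x = np.asarray(x, dtype=float)
    trend = running_mean(x, trend_window)
    pad = (len(x) - len(trend)) // 2          # align x with the trend estimate
    detrended = x[pad:pad + len(trend)] - trend
    smoothed = running_mean(detrended, smooth_window)
    return (smoothed - smoothed.mean()) / smoothed.std()

def decadal_correlation(flow, sunspot_number):
    """Pearson r between the processed series (same years, same length)."""
    a = preprocess(flow)
    b = preprocess(sunspot_number)
    return np.corrcoef(a, b)[0, 1]
```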
In fact, the correlation between the stream flow of the Colorado and the snow height is very good, with a correlation coefficient r=0.87, significant at the 99% level.

In Fig. <ref> we compare the multidecadal component of the stream flows with the corresponding series for the sunspot number. In all cases we proceed as with the data in Fig. <ref>: we smoothed out the solar cycle with an 11-year running mean, we detrended the series by subtracting the long-term component, and we standardized the data by subtracting the mean and dividing by the standard deviation. In the panel corresponding to the Colorado, we also include the snow height. It can be seen that in all cases the agreement is remarkable. The correlation coefficients are 0.59, 0.47, 0.67 and 0.69 for the Colorado, the snow level, the San Juan and the Atuel, respectively, all significant at the 96-97% level.

Although all these rivers have maximum stream flow during summer, there is a big difference between the regimes of the Paraná and the remaining rivers: for the latter, the important factor is the intensity of the precipitation occurring as snow during the winter months, from June to August. For the Paraná, what is most important is the level of the precipitation during the summer months. It should also be noticed that, here again, stronger activity coincides with larger precipitation.

§ TREE RINGS
Tree rings are the most numerous and widely distributed high-resolution climate archives in South America. During the last decades, variations of temperature, stream flow, rainfall and snow were reconstructed using tree-ring chronologies from subtropical and temperate forests, which are based on ring width, density and stable isotopes (see <cit.> and references therein).

<cit.> studied the spatial patterns of climate and tree-growth anomalies in the forests of northwestern Argentina. The tree-ring data set consisted of seven chronologies developed from Juglans and Cedrela (see Fig. <ref>). They show that tree-ring widths in subtropical Argentina are affected by weather conditions from late winter to early summer. Tree-ring patterns mainly reflect the direct effects of the principal types of rainfall patterns observed. One of these patterns is related to precipitation anomalies concentrated in the northeastern part of the region.

To extend the results obtained previously back in time and to a larger geographical area, we study the relation between the Sunspot Number and the tree-ring chronologies studied by <cit.>. These data sets are shown in Fig. <ref>. It can be seen that the shortest series starts in 1797, while the longest one goes back to the XVI century. Here we study only the data from 1797, where all the series overlap. The individual sets respond to local conditions at the particular location of the studied tree. To obtain an indication of global conditions in the region, we built an index in the following way. First, we shifted in time each tree-ring series to obtain the best correlation with the Paraná's stream flow. In particular, in 1982 and 1997 there were two very large annual discharges of the Paraná that are associated with two exceptional El Niño events (see Fig. <ref>). These two events, although weaker, can be seen in the individual tree-ring series, with a small delay, different in each case. We therefore built a composite series as the average of the individual chronologies, each shifted to match the Paraná's discharge.
Finally, we took the 11-year running mean, and normalized the series as in the previous cases. The resulting index is shown in Fig. <ref>, together with the Sunspot Number. It can be seen that also here the agreement is quite good. The Pearson correlation coefficient between both series is R=0.69 (a computational sketch of this composite construction is given below).

§ DISCUSSION
Although the theory that global warming is caused by an increase in solar activity has been dismissed, particularly because activity and temperature no longer have similar trends, it gave a strong impulse to the studies of the relation between climate and activity. In particular, there is strong evidence that the Sun could have an influence on different climatic variables, in different regions of the globe, and not always in the same sense. In particular, we reviewed different studies which concentrate on different aspects of atmospheric moisture; in some regions they report positive correlations, with stronger activity related to stronger precipitations, and in others the opposite correlation, with strong droughts coincident with solar activity maxima. There are also regions of the world where this relation changed sign over time.

In particular, we studied different climatic indicators in southern South America. First, we concentrated on the stream flow of one of the largest rivers of the world, the Paraná. We found a strong correlation on decadal time scales between the river's discharge and Sunspot Number. We later found that this correlation was still present during the large solar minimum between Cycles 23 and 24, which corresponded to a period of very low flows in the Paraná. We can also find in historical records this coincidence between periods of smaller solar activity and low Paraná discharge. In particular, during the period known as the Little Ice Age there are different reports pointing to low discharges. For example, a traveler of that period mentions in his diary that in 1752 the level of the river was so small that the small ships of that time could not navigate it. At present, the river can be navigated as far north as Asunción in Paraguay by ships 4 times larger (<cit.>). There are other climatic records which point to reduced precipitations in this region during the Little Ice Age (see <cit.> and references therein). It is well known that the Little Ice Age was coincident with the Maunder Minimum, and was perhaps caused by low solar activity (e.g. <cit.>).

To check if the solar influence is also present in other areas of South America, we studied the flow of three other rivers of the region, and the snow level from a high-mountain station in the same area. Also in these cases we found a strong correlation between the Sunspot Number and the stream flows, after removing the secular trends and smoothing out the solar cycle.

Finally, to extend both the area coverage and the temporal baseline, we studied a composite of seven tree-ring chronologies affected by precipitations, starting at the end of the XVIII century. Also in this case we found the same correlation with Sunspot Number.

We point out that, in all cases, we found a correlation on the intermediate time scale. We removed the secular trends when present (e.g., for the Paraná and the Sunspot Number), which are not correlated. We also smoothed out the solar cycle, since on the yearly timescale the dominant factor influencing precipitations is El Niño.
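For concreteness, the composite tree-ring index mentioned above can be sketched as follows. This is an illustrative sketch only, not the actual code: the 5-year maximum lag and the assumption that all series are annual and share a common start year with the reference discharge are made only for this example.

```python
import numpy as np

def best_lag(series, reference, max_lag=5):
    """Delay (in years) after which the chronology best tracks the reference flow."""
    scores = []
    for lag in range(max_lag + 1):
        a = series[lag:]                          # chronology, shifted back by `lag`
        b = reference[:len(reference) - lag] if lag else reference
        n = min(len(a), len(b))
        scores.append((np.corrcoef(a[:n], b[:n])[0, 1], lag))
    return max(scores)[1]

def composite_index(chronologies, reference, smooth=11):
    """Average of the lag-aligned chronologies, smoothed and standardized."""
    chronologies = [np.asarray(c, dtype=float) for c in chronologies]
    reference = np.asarray(reference, dtype=float)
    lags = [best_lag(c, reference) for c in chronologies]
    n = min(len(c) - lag for c, lag in zip(chronologies, lags))
    aligned = np.array([c[lag:lag + n] for c, lag in zip(chronologies, lags)])
    index = aligned.mean(axis=0)
    kernel = np.ones(smooth) / smooth             # 11-yr running mean
    index = np.convolve(index, kernel, mode="valid")
    return (index - index.mean()) / index.std()
```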
The results we found show that decades of larger precipitations correspond todecades of higher activity, with these variations overimposed on the corresponding secular trends.In all cases, the correlation we found is positive, i.e., higher precipitations correspond to larger solar activity, in a very large area. Since another mechanism that has been proposed to explain the Sun-Earth connection involve the modulation of Galactic Cosmic Rays, we also studied the correlation between the Paraná's discharge and two other solar-activity indexes: the neutron count at Climax, Colorado, available since 1953, and the aa index, which is an indication of the disturbance level of the magnetic field of the Earth based on magnetometer observations of two stations inEngland and Australia, which is available since 1868. Both indexes can be used to test the GCR hypothesis.We found that the Paraná's stream flow is correlated with both neutron count and the aa index. This was expected, since all activity indexes are correlated among them.However, the correlation with Sunspot Number was strongest, suggesting a direct link between solar irradiance and precipitations. It has been shown that variations in solar insolation affect the position of the Inter Tropical Convergence Zone (ITCZ) (<cit.>, <cit.>).<cit.> proposed that a displacement southwards of the ITCZ would enhance precipitations in the tropical regions of southern South America. We found that the increase in precipitations are seen both in the Southern Hemisphere's summer when the ITCZ is over the equator, close to where the Paraná has its origin, and during winter, when the ITCZ moves north, and precipitations increase further South.[Agnihotri(2002)] 2002E PSL.198..521A Agnihotri, R., K. Dutta, R. Bhushan, & B. L. K. Somayajulu 2002, Earth and Planetary Science Letters 198, 521. Berri, G. J., & E. A. Flamenco 1999. Water Resources Research 35, 3803.[Bhattacharyya and Narasimha (2005)]2005GeoRL..3205813B Bhattacharyya, S., & R. Narasimha 2005. Geophysical Review Letters 32, 5813.[Boninsegna2009]Boninsegna2009210 Boninsegna, J. A., Argollo, J., Aravena, J. C., 2009, Palaeogeography, Palaeoclimatology, Palaeoecology 281, 210.[Damon and Peristykh 2005]DP05 Damon, P.E. and Peristykh, A.N. 2005, Clim.Change 68, 101[Eddy 1976]1976Sci...192.1189E Eddy, J. A. 1976. Science 192, 1189.[Fleitmann et al. (2003)]2003Sci...300.1737F Fleitmann, D., S. J. Burns, M. Mudelsee, U. Neff, J. Kramers, A. Mangini, & A. Matter 2003. Science 300, 1737.[Friis-Christensen and Lassen (1991)]FCL91 Friis-Christensen E. and Lassen K. 1991, Science 254, 698–700.[Friis-Christensen and Svensmark (1997)]FCS97 Friis-Christensen, E. and Svensmark, H. 1997, Ad.Spa. Res 20, 913[Haug et al. 2001]2001Sci...293.1304H Haug, G. H., K. A. Hughen, D. M. Sigman, L. C. Peterson, & U. Röhl 2001. Science 293, 1304.[Hodell et al. 2001]2001E PSL.192..109H Hodell, D. A., C. D. Charles, & F. J. Sierro 2001. Earth and Planetary Science Letters 192, 109.[Hong et al. (2001)]2001E PSL.185..111H Hong, Y. T., Z. G. Wang, H. B. Jiang, Q. H. Lin, B. Hong, Y. X. Zhu, Y. Wang, L. S. Xu, X. T. Leng, & H. D. Li 2001. Earth and Planetary Science Letters 185, 111.[Iriondo 1999]Iri99 Iriondo, M. 1999. Quat. Int. 57-58, 112.[Kodera (2004)]2004GeoRL..3124209K Kodera, K. 2004. Geophysical Review Letters 31, 24209.[Lassen and Friis-Christensen (1995)]LFC95 Lassen K. and Friis-Christensen E. 1995 JATP 57, 835[Laut (2003)]Laut03 Laut, P. 2003, JASTP 65, 801[Laut and Gunderman (2000)]LG00Laut P.and Gunderman J. 
2000 SOLSPA I, p. 189[Lockwood and Fröhlich (2007)]LF07 Lockwood, M., Fröhlich, C., 2007, Proc. R. Soc. A 463, 2447[Marsh and Svensmark (2000)]MS00 Marsh, N.D. and Svensmark, H. 2000, Phys. Rev. Lett. 85, 5004[Mauas and Flamenco (2005)]2005MmSAI..76.1002M Mauas, P., & E. Flamenco 2005. Memorie della Societa Astronomica Italiana 76, 1002.[Mauas(2008)]MFB08 Mauas, P.J.D., Flamenco, E., & Buccino, A.P. 2008, Phys. Rev. Let. 101, 168501[Mauas(2011)]MFB11 Mauas, P.J.D., Flamenco, E., & Buccino, A.P. 2011, JASTP 73, 377[Mehta and Lau (1997)]1997GeoRL..24..159M Mehta, V. M., & K.-M. Lau 1997. Geophysical Review Letters 24, 159.[Neff et al. (2001)]2001Natur.411..290N Neff, U., S. J. Burns, A. Mangini, M. Mudelsee, D. Fleitmann, & A. Matter 2001. Nature 411, 290.[Newton et al. (2006)]2006GeoRL..3319710N Newton, A., R. Thunell, & L. Stott 2006. Geophysical Review Letters 33, 19710.[Piovano et al. 2009]piovano09 Piovano, E., D. Ariztegui, F. Córdoba, M. Cioccale, & F. Sylvestre 2009. Past Climate Variability in South America and Surrounding Regions From the Last Glacial Maximum to the Holocene, Chapter 14. Hydrological Variability in South America Below the Tropic of Capricorn (Pampas and Patagonia, Argentina) During the Last 13.0 Ka, pp.323–351. Springer Netherlands.[Poore et al. 2004]2004GeoRL..3112214P Poore, R. Z., T. M. Quinn, & S. Verardo 2004. Geophysical Review Letters  31, 12214.[Ruzmaikin et al. (2006)]Ruzmaikin2006 Ruzmaikin, A., J. Feynman, & Y. L. Yung 2006. Journal of Geophysical Research (Atmospheres) 111, 21114.[Soon et al. 1999]soon99W. Soon, W.Baliunas, S. L.,Robinson, A. B. &Robinson, Z. W. 1999, Climate Research. 13, 149. [Stager et al. (2007)]2007JGRD..11215106S Stager, J. C., A. Ruzmaikin, D. Conway, P. Verburg, & P. J. Mason 2007. Journal of Geophysical Research (Atmospheres) 112, 15106.[Stager et al. 2005]stager05 Stager, J. C., D. Ryves, B. Cumming, L. Meeker, & J. Beer 2005. J. Paleolimnol. 33, 243.[Svalgaard 2012]sval12 Svalgaard, L, 2012, in Proc. Iau Symp. 286, 27[Svensmark (1998)]sven98Svensmark, H. 1998, Phys. Rev. Let. 81, 5027[Udelhofen and Cess (2001)]UC01Udelhofen, P. and Cess, R. 2001, 28, 2617[Tiwari and Rajesh (2014)]TiRa14 Tiwari, R. K.,Rajesh, R. 2014, Geophys. Res. Lett., 41, 3103[Verschuren et al. 2000]2000Natur.403..410V Verschuren, D., K. R. Laird, & B. F. Cumming 2000. Nature 403, 410.[Villalba(1992)]villalba92Villalba, R. Holmes, R. L., and Boninsegna, J. A. 1992, Journal of Biogeography 19, 631[Wang et al. (2005)]2005Sci...308..854W Wang, Y., H. Cheng, R. L. Edwards, Y. He, X. Kong, Z. An, J. Wu, M. J. Kelly, C. A. Dykoski, & X. Li 2005. Science 308, 854. Wang, Y.-M., J. L. Lean, & N. R. Sheeley, Jr. 2005. ApJ 625, 522.[Zanchettin et al. (2008)]2008JGRD..11312102Z Zanchettin, D., A. Rubino, P. Traverso, & M. Tomasino 2008. Journal of Geophysical Research (Atmospheres) 113, 12102.
A Centralized Power Control and Management Method for Grid-Connected Photovoltaic (PV)-Battery Systems Zhehan Yi Department of Electrical and Computer Engineering The George Washington University Washington, DC 20052 Email: [email protected] Wanxin Dong Department of Electrical and Computer Engineering The George Washington University Washington, DC 20052 Email: [email protected] Amir H. Etemadi Department of Electrical and Computer Engineering The George Washington University Washington, DC 20052 Email: [email protected]: Jul 6, 2017 / Accepted: Sep 11, 2017 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================== Distributed Generation (DG) is an effective way of integrating renewable energy sources to conventional power grid, which improves the reliability and efficiency of power systems. Photovoltaic (PV) systems are ideal DGs thanks to their attractive benefits, such as availability of solar energy and low installation costs. Battery groups are used in PV systems to balance the power flows and eliminate power fluctuations due to change of operating condition, e.g., irradiance and temperature variation. In an attempt to effectively manage the power flows, this paper presents a novel power control and management system for grid-connected PV-Battery systems. The proposed system realizes the maximum power point tracking (MPPT) of the PV panels, stabilization of the DC bus voltage for load plug-and-play access, balance among the power flows, and quick response of both active and reactive power demands.§ INTRODUCTION Contributions of renewable energy to power generation have been increasing exponentially in the past decades, due largely to the fossil energy crisis, increasing electricity demands, and environment degradation. Power grids in the near future is expected to be equipped with great penetration of renewable energy with effective supply-demand management systems for highly reliable and economical operations <cit.>. Distributed generation (DG), through which electric power is generated on-site instead of centrally, is providing a powerful solution of integrating renewable power generators to the conventional utility grids. Among numerous renewable generations, solar photovoltaic (PV) system is one of the most attractive renewable power system because of its various benefits, such as flexibility of scales and low installation costs. Moreover, as the price continues to decline, global PV installation capacity is expected to increase in recent years <cit.>. As the penetration of solar power expands, future grid-connected PV systems are required to provide more reliable power. Otherwise, power fluctuations in PV systems will bring certain reliability issues to the utility grids and to electricity users.PV output power oscillates frequently during a day as the operating environment (temperature or solar irradiance) changes <cit.>. Therefore, to maintain a stable output, battery storages are necessary on the DC side of a PV system to compensate the differences caused by change of operating condition, for example, cloud shading over the PV panels. 
Excess power can also be stored in the batteries for later demands, or traded back to the utility in the future. Additionally, there are usually loads on the DC bus, making the system a DC microgrid. Therefore, the control schemes for PV-battery systems should be able to stabilize the power supply to the DC loads, balance the power flows on the DC side, and effectively manage the power exchange with the grid. A number of control schemes for PV-battery units have been proposed in the literature. An autonomous droop-based control strategy is presented in <cit.> for PV-battery systems. The battery group is charged through the AC bus, which increases the cost of the charger inverter. Furthermore, this strategy only works for islanded, but not for grid-connected, PV systems. Other islanded PV control systems are also proposed in references <cit.> and <cit.>. Again, these methods are not applicable to grid-connected PV systems, which are widely used in the industry. A hierarchical control scheme is designed in <cit.> for a grid-connected multi-source PV system, which is primarily intended for self-feeding buildings equipped with PV arrays and battery storage. This system requires a complicated supervision algorithm using Petri nets (PNs), and the DC bus voltage oscillations are not well eliminated. Reference <cit.> presents an optimal charging/discharging method for PV-battery systems that reduces the line losses of distribution systems. However, this method only schedules the battery charging and discharging but does not comprehensively manage the power flows of the entire system. There are also other control strategies introduced in <cit.>, which mainly focus on the optimization of operational costs. Nevertheless, these papers do not present the detailed control methods. In an attempt to address the aforementioned issues, this paper proposes a power management and control system for grid-connected PV-battery power systems, which balances the power flows flexibly and maintains a reliable power supply to the demands in different circumstances. The control methods in the system are designed to achieve a flexible but reliable power output that fulfills the demands from the utility grid and from the loads on the DC bus, by intelligently managing the charging/discharging processes of the battery and switching the power generation control modes of the PV array. The remainder of this paper is organized as follows: Section II briefly introduces a typical grid-connected PV-battery system, followed by the proposed power control and management system with detailed control schemes for each part; case studies are carried out in Section III to verify its performance; Section IV presents the conclusion of this paper. § GRID-CONNECTED PV-BATTERY SYSTEMS AND THE PROPOSED POWER CONTROL AND MANAGEMENT SYSTEM§.§ Typical Configuration of Grid-Connected PV-Battery System A typical PV-battery system is illustrated in Fig. <ref>; it consists of a PV array, battery storage, DC/DC converters that connect the PV array and the battery to the DC bus, a DC load, a DC/AC inverter, and a transformer that bridges the DC microgrid to the utility grid <cit.>. The power generated by the PV array is a function of the irradiance during a day, and, due to the non-linear characteristics of PV, for each irradiance level there is an operating voltage V_MPP at which the maximum power is extracted from the PV array. Therefore, maximum power point tracking (MPPT) algorithms are implemented in PV systems, usually through a DC/DC converter, to optimize the power generation <cit.>. 
The charging or discharging process of the battery storage is controlled by another DC/DC converter. DC loads, on the other hand, can be connected to the DC microgrid directly or through converters. Therefore, if the control strategies are able to stabilize the DC bus voltage while balancing the power flows, converters can be omitted for load access, making the system more convenient and economical. §.§ The Power Control and Management System While some power flows in PV-battery systems have to be unidirectional, e.g., the power flowing from the PV array to the DC bus (P_PV) and the power consumed by the DC load (P_DC-load), others should be bidirectional, e.g., the power charging or discharging the battery (P_bat) and the power exchanged with the utility grid (P_grid). Therefore, to keep the power balanced in the system, the following equation should always be fulfilled: P_PV+P_bat=P_grid+P_DC-load+P_loss where P_loss is the power loss in the power converters, the transformer, and the transmission lines, which is usually negligible. P_bat>0 and P_bat<0 indicate the discharging and charging modes of the battery, respectively; P_grid>0 means that power is being transferred from the DC microgrid to the utility grid, and vice versa. A power control and management system is designed (Fig. <ref>), which supervises the status of each generation unit and load in the system and, depending on the situation, determines the references for the PV power, the DC bus voltage, the battery charging/discharging power, and the active and reactive power through the inverter. Detailed control schemes are elaborated as follows.§.§.§ PV Generation Control The power control of the PV array can be switched between the MPPT control mode and the power reference control mode, depending on the SOC of the battery, the DC load demand P_DC-load, and the grid-requested power P_grid. When the battery is not fully charged (SOC < 95%), P_DC-load is met, and the utility grid requests the DC microgrid system to provide as much power as it can, the PV array is controlled in MPPT mode. On the other hand, if the battery is fully charged (SOC ≥ 95%), the DC load demand (P_DC-load) is fulfilled, and the grid is not able to consume the excess PV power, MPPT is turned off and the PV array is switched to the power reference control mode, where the reference is given by the following equation.P_PV-ref= P_grid+P_DC-loadWhen working in MPPT mode, the voltage and current of the PV array, V_PV and I_PV, are measured and fed into the management system to obtain a voltage reference V_MPP and to generate a gating signal T_PV for the switching control of the DC/DC (boost) converter. Incremental conductance (IncCond) MPPT, one of the most well-known MPPT algorithms <cit.>, is employed in this research. The control scheme is presented in Fig. <ref>.§.§.§ Battery Charging/Discharging ControlThe operating mode of the battery, i.e., charging/discharging or the sign of P_bat, is not only subject to equation (<ref>), but also depends on the state of charge (SOC) of the battery. Namely, there are maximum and minimum limits for the SOC, which are set to 95% and 20%, respectively, in this scheme to mitigate degradation and extend the life cycle of the battery. A bidirectional DC/DC converter is used to control the charging and discharging of the battery (Fig. <ref> (a)), and the control scheme is illustrated in Fig. <ref> (b), where P_bat-mes and P_bat-ref are the measured and desired power flowing in the converter, respectively. 
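To make the interplay between the PV-mode selection and the battery charging/discharging rules concrete, the following minimal sketch reproduces the supervisory decision logic described above (power balance of equation (<ref>) with losses neglected, SOC window of 20-95%, and MPPT versus power-reference operation). The function name, the argument structure, and the example numbers are illustrative assumptions and not part of the original controller; grid power purchases and the inner converter loops are deliberately left out.

def dispatch(p_mpp, p_dc_load, p_grid_request, soc):
    """Sketch of the supervisory logic: returns the PV control mode,
    the PV power reference and the battery power (P_bat > 0 discharging,
    P_bat < 0 charging), such that P_PV + P_bat = P_grid + P_DC-load.
    All powers in kW, soc in percent."""
    demand = p_dc_load + p_grid_request
    if p_mpp >= demand:
        if soc < 95.0:                       # room left in the battery
            mode, p_pv = "MPPT", p_mpp
            p_bat = demand - p_mpp           # negative -> charging
        else:                                # battery full, curtail PV
            mode, p_pv = "reference", demand
            p_bat = 0.0
    else:                                    # PV alone cannot cover the demand
        mode, p_pv = "MPPT", p_mpp
        p_bat = demand - p_mpp if soc > 20.0 else 0.0   # discharge only if allowed
    return mode, p_pv, p_bat

# Case-1-like conditions: 165 kW available, 50 kW DC load, 105 kW grid request
print(dispatch(165.0, 50.0, 105.0, soc=60.0))   # ('MPPT', 165.0, -10.0)

The printed result corresponds to the Case 1 situation discussed below: the PV array stays in MPPT mode and the 10 kW surplus charges the battery.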
§.§.§ Inverter Control An inverter is necessary to convert the DC power to AC power and to connect the DC microgrid to the utility grid via the point of common coupling (PCC). Fig. <ref> illustrates the control scheme for the inverter, which aims at stabilizing the DC bus voltage, V_DC, and controlling the reactive power, Q, flowing through the inverter. I_d and I_q are obtained from the three-phase AC current I_AC (Fig. <ref>) by the Clarke transformation. The active power flow through the inverter is controlled according to equation (<ref>), i.e., by controlling P_PV and P_bat, P_grid is controlled. § CASE STUDIES In order to verify the proposed control strategies discussed in Section II, case studies are carried out in this section using the PSCAD software package. A grid-connected PV-battery system is set up using the configuration in Fig. <ref>, with the parameters listed in Table <ref>. According to the various states of the battery SOC, the PV generation (P_PV), the DC load demand, and the utility demand (P_grid), multiple cases are simulated and the results are presented as follows. §.§.§ Case 1 When there is excess power from the PV array after fulfilling the DC load and the AC demand, while the battery is not fully charged (P_PV>P_DC-load+P_grid, SOC < 95%), the excess PV power is transferred to and stored in the battery. The power flows in the system are shown in Fig. <ref> - Case 1, which illustrates the power balance in the system: the PV array works in MPPT mode and provides P_PV (165 kW in Fig. <ref> - Case 1), while the DC load is constant at 50 kW, P_grid is around 105 kW, and the battery is charged with approximately 10 kW. §.§.§ Case 2 When the maximum power from the PV array is greater than the sum of the DC load and the AC demand, and the battery is fully charged (P_PV>P_DC-load+P_grid, SOC ≥ 95%), the PV array switches from the MPPT control mode to the reference power control mode, where the reference is P_PV-ref = P_DC-load+P_grid, and the battery is disconnected. As shown in Fig. <ref> - Case 2, P_PV is controlled at 150 kW, the battery is neither charging nor discharging (P_bat = 0), the DC load consumes 50 kW, and the remaining PV power feeds the grid (100 kW). §.§.§ Case 3 For the case where the demand from the DC load and the grid is greater than the maximum PV power, and the battery is not over-discharged (P_PV<P_DC-load+P_grid<P_PV+P_bat, SOC ≥ 20%), the PV array works under MPPT control, providing 165 kW, and the battery discharges 10 kW to compensate the difference between supply and demand (Fig. <ref> - Case 3). §.§.§ Case 4 If the total demand from the DC load and the grid is greater than the maximum PV power plus the battery power (P_PV+P_bat<P_DC-load+P_grid, SOC ≥ 20%), then, similar to Case 3, the PV array works in MPPT mode and the battery discharges, providing as much power to the grid as the system can (Fig. <ref> - Case 4).§.§.§ Case 5 In Fig. <ref> - Case 5, where the PV array cannot fulfill the DC load (increased from 50 kW to 190 kW), the grid does not request power from the DC microgrid, and the battery has no excess power (P_PV<P_DC-load, SOC ≤ 20%), MPPT generation is maintained in the PV array (165 kW), while additional electric power is purchased from the grid (35 kW) to meet the demand on the DC side and, when necessary, to charge the battery (10 kW) for later needs. The case studies successfully demonstrate the satisfactory performance of the proposed power control and management system. The power flows in all circumstances mentioned above are properly balanced. 
In all the cases, whether the PV array is controlled in power reference or MPPT mode, the demands from both the DC load and the utility grid are reliably supplied by controlling the DC/DC converters and the DC/AC inverter, and equation (<ref>) is always maintained. Although the waveforms are not shown, the DC bus voltage V_DC is stabilized around 450 V to ensure a stable DC power supply, and the reactive power Q through the inverter is controlled at 0 Var in all these cases, regardless of any change in the active power flows. Nevertheless, V_DC can easily be changed to any reasonable value by simply modifying the reference V_DC-ref in Fig. <ref>. When requested, the system can quickly provide reactive power Q to the grid by setting the reference Q_ref (Fig. <ref>).§ CONCLUSION Power management in grid-connected PV-battery systems is critical to maintain a reliable power supply to the load and to the utility grid. This paper proposes a power control and management system, which is able to effectively manage the power flows in grid-connected PV-battery microgrid systems. Power demands and supplies are successfully balanced by the control of the power converters, and the reactive power is also under full monitoring and control. The proposed system regulates the DC bus voltage by controlling the inverter, such that power can be provided to feed the load reliably in spite of other changes. The DC bus voltage value is under full control. Additionally, the system is more convenient and economical for DC load access, as DC/DC converters can be omitted if the rated voltage of the load matches the DC bus voltage. Case studies are carried out, and the performance of the proposed system is successfully verified. yi_tsg2 Z. Yi, W. Dong, and A. H. Etemadi, “A unified control and power management scheme for pv-battery-based hybrid microgrids for both grid-connected and islanded modes,” IEEE Transactions on Smart Grid, vol. PP, no. 99, pp. 1–1, 2017. yi_tsg Z. Yi and A. H. Etemadi, “Fault detection for photovoltaic systems based on multi-resolution signal decomposition and fuzzy inference systems,” IEEE Transactions on Smart Grid, vol. 8, no. 3, pp. 1274–1283, May 2017. yi_tie Z. Yi and A. Etemadi, “Line-to-line fault detection for photovoltaic arrays based on multi-resolution signal decomposition and two-stage support vector machine,” IEEE Transactions on Industrial Electronics, vol. PP, no. 99, pp. 1–1, 2017. yi_pes2016 Z. Yi and A. H. Etemadi, “A novel detection algorithm for line-to-line faults in photovoltaic (pv) arrays based on support vector machine (svm),” in 2016 IEEE Power and Energy Society General Meeting (PESGM), July 2016, pp. 1–4. literature5 H. Mahmood, D. Michaelson, and J. Jiang, “Strategies for independent deployment and autonomous control of pv and battery units in islanded microgrids,” IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. 3, no. 3, pp. 742–755, Sept 2015. literature7 F. Locment, M. Sechilariu, and I. Houssamo, “DC load and batteries control limitations for photovoltaic systems. experimental validation,” IEEE Transactions on Power Electronics, vol. 27, no. 9, pp. 4030–4038, Sept 2012. literature11 H. Mahmood, D. Michaelson, and J. Jiang, “A power management strategy for pv/battery hybrid systems in islanded microgrids,” IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. 2, no. 4, pp. 870–882, Dec 2014. literature6 M. Sechilariu, B. Wang, and F. 
Locment, “Building integrated photovoltaic system with energy storage and smart grid communication,” IEEE Transactions on Industrial Electronics, vol. 60, no. 4, pp. 1607–1618, April 2013. literature8 J. H. Teng, S. W. Luan, D. J. Lee, and Y. Q. Huang, “Optimal charging/discharging scheduling of battery storage systems for distribution systems interconnected with sizeable pv generation systems,” IEEE Transactions on Power Systems, vol. 28, no. 2, pp. 1425–1433, May 2013. literature9 Y. Riffonneau, S. Bacha, F. Barruel, and S. Ploix, “Optimal power flow management for grid connected PV systems with batteries,” IEEE Transactions on Sustainable Energy, vol. 2, no. 3, pp. 309–320, July 2011. literature10 J. Li, Z. Wu, S. Zhou, H. Fu, and X. P. Zhang, “Aggregator service for pv and battery energy storage systems of residential building,” CSEE Journal of Power and Energy Systems, vol. 1, no. 4, pp. 3–11, Dec 2015. yi_phd Z. Yi, “Solar photovoltaic (PV) distributed generation systems - control and protection,” Ph.D. dissertation, 2017. [Online]. Available: <http://proxygw.wrlc.org/login?url=https://search-proquest-com.proxygw.wrlc.org/docview/1924969622?accountid=11243> ref12 M. M. R. Singaravel and S. A. Daniel, “Mppt with single dc-dc converter and inverter for grid-connected hybrid wind-driven PMSG-PV system,” IEEE Transactions on Industrial Electronics, vol. 62, no. 8, pp. 4849–4857, Aug 2015. ref13 E. Roman, R. Alonso, P. Ibanez, S. Elorduizapatarietxe, and D. Goitia, “Intelligent pv module for grid-connected pv systems,” IEEE Transactions on Industrial Electronics, vol. 53, no. 4, pp. 1066–1073, June 2006. ref14 M. S. Agamy, S. Chi, A. Elasser, M. Harfman-Todorovic, Y. Jiang, F. Mueller, and F. Tao, “A high-power-density dc-dc converter for distributed pv architectures,” IEEE Journal of Photovoltaics, vol. 3, no. 2, pp. 791–798, April 2013. ref16 D. Sera, L. Mathe, T. Kerekes, S. V. Spataru, and R. Teodorescu, “On the perturb-and-observe and incremental conductance mppt methods for pv systems,” IEEE Journal of Photovoltaics, vol. 3, no. 3, pp. 1070–1078, July 2013.
{ "authors": [ "Zhehan Yi", "Wanxin Dong", "Amir H. Etemadi" ], "categories": [ "math.OC", "eess.SP" ], "primary_category": "math.OC", "published": "20170926185406", "title": "A Centralized Power Control and Management Method for Grid-Connected Photovoltaic (PV)-Battery Systems" }
Higher-order QED effects in hadronic processes In this presentation, we describe the computation of higher-order QED effects relevant in hadronic collisions. In particular, we discuss the calculation of mixed QCD-QED one-loop contributions to the Altarelli-Parisi splitting functions, as well as the pure two-loop QED corrections. We explain how to extend the DGLAP equations to deal with new parton distributions, emphasizing the consequences of the novel corrections for the determination (and evolution) of the photon distributions. EPS-HEP 2017, European Physical Society conference on High Energy Physics 5-12 July 2017 Venice, Italy § INTRODUCTION Due to the improved accuracy and precision of the experimental measurements, the corresponding theoretical calculations must start including effects previously neglected. In the context of hadron colliders, it is a well-known fact that QCD is the dominating interaction, and in consequence the most important corrections are related to the strong force. However, since O(α_S^2)≈ O(α) for typical collision energies at the LHC, it becomes necessary to take into account also electroweak (EW) corrections. This is one of the reasons behind the recent explosion in the number of processes that have been computed including EW corrections beyond the leading order (LO).The idea of this article is to summarize the recent developments carried out by our group. We start by presenting, in Sec. <ref>, the extension of the DGLAP equations <cit.> to include QED effects, as well as a description of the new PDFs associated with leptons and photons. After that, we focus on the calculation of the evolution kernels of these equations, namely the splitting functions. In Sec. <ref>, we present an algorithm that allowed us to compute one-loop mixed QCD-QED and two-loop QED corrections to the splitting functions by properly transforming the well-known NLO QCD expressions available in the literature <cit.>. Then, we motivate a generalization of the Abelianization algorithm and apply it to a physical process. The selected process is diphoton production; in Sec. <ref> we describe the computation of the fully consistent NLO QED corrections obtained through the Abelianization of the code of Ref. <cit.>. Finally, we present the conclusions and briefly discuss some open questions in Sec. <ref>.§ EXTENDED DGLAP EQUATIONS AND SPLITTING FUNCTIONSThe Altarelli-Parisi equations were originally formulated to describe the perturbative evolution of parton distribution functions (PDF) in the context of QCD interactions <cit.>. Our purpose is to deal with QCD partons (i.e. gluons and quarks) as well as photons. Moreover, since photons couple to charged leptons, we must also consider their presence inside hadrons and define the associated PDFs. Explicitly, given the canonical basis of PDFs, B_c = {q_i,l_i,q̅_i,l̅_i,g,γ}, the extended DGLAP equations read dF_i/dt= ∑_f P_F_i f⊗ f + ∑_f P_F_i f̅⊗f̅ +P_F_i g⊗ g +P_F_i γ⊗γ, dg/dt= ∑_f P_g f⊗ f + ∑_f P_g f̅⊗f̅ +P_g g⊗ g +P_g γ⊗γ, dγ/dt= ∑_f P_γ f⊗ f + ∑_f P_γf̅⊗f̅ +P_γ g⊗ g +P_γγ⊗γ, where t=lnμ^2 is the evolution variable (with μ the factorization scale), ⊗ denotes the convolution operator and P_ij are the extended splitting functions. 
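As a minimal numerical illustration of the structure of these evolution equations, the sketch below evaluates the convolution (P ⊗ f)(x) = ∫_x^1 (dz/z) P(z) f(x/z) that drives the photon evolution, using the standard leading-order QED kernel P_γq(x) = e_q^2 [1+(1-x)^2]/x and a toy quark distribution. The grid, the toy PDF shape, the quadrature, and the overall α/2π normalization convention are illustrative choices made here for the example and may differ from the conventions used in the original analysis.

import numpy as np

def P_gamma_q(z, eq2):
    """LO QED kernel for photon emission off a quark (real-emission part only)."""
    return eq2 * (1.0 + (1.0 - z) ** 2) / z

def convolution(P, f, x, n=2000):
    """(P (x) f)(x) = int_x^1 dz/z P(z) f(x/z), simple midpoint quadrature."""
    z = np.linspace(x, 1.0, n + 1)
    zm = 0.5 * (z[1:] + z[:-1])
    return np.sum(P(zm) * f(x / zm) / zm * np.diff(z))

# toy up-quark PDF (illustrative shape only) and one evolution step for the photon
eq2 = (2.0 / 3.0) ** 2
toy_q = lambda x: x ** 0.5 * (1.0 - x) ** 3
alpha = 1.0 / 137.0
x = 0.1
dgamma_dt = alpha / (2.0 * np.pi) * convolution(lambda z: P_gamma_q(z, eq2), toy_q, x)
print(dgamma_dt)

The same convolution routine can be reused for any of the kernels appearing in the system above once they are tabulated.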
The sum over fermions f runs over all the active flavours of quarks (n_F) and leptons (n_L). In order to simplify the previous equations, it is convenient to change the PDF basis as suggested in Ref. <cit.>. While the equations for the photon and gluon distributions remain the same, we find dF_v_i/dt = P_F_i^- ⊗ F_v_i+∑_j=1^n_FΔ P_F_i q_j^S⊗ q_v_j + Δ P_F_i l^S ⊗(∑_j=1^n_L l_v_j), d Δ_2^l /dt =P_l^+ ⊗Δ_2^l , d {Δ_uc , Δ_ct}/dt =P_u^+ ⊗{Δ_uc , Δ_ct}, d {Δ_ds ,Δ_sb}/dt = P_d^+ ⊗{Δ_ds ,Δ_sb}, for the valence (F_v=F-F̅, with F any fermion) and Δ distributions. The evolution equations for the other elements of the optimized basis are a bit cumbersome, and they can be found in Refs. <cit.>. It is worth appreciating that further simplifications take place when analyzing the O(α) and O(α^2) contributions, since many of the modified splitting kernels are trivially vanishing. On the other hand, the sum rules are also extended to include the presence of QED interactions. As usual, we have to impose the conservation of the fermion number inside the proton, as well as the conservation of the total momentum, which must be carried by each possible constituent. Explicitly, this means that the splitting kernels must fulfill ∫_0^1 dx P^-_f =0 , ∫_0^1 dx x(dg/dt+dγ/dt+∑_f df/dt)=0 , where we are summing over all the possible flavours of fermions and anti-fermions. These conditions fix the behaviour of the regularized splitting kernels in the end-point region, i.e. x=1.To conclude this section, let's make some general remarks. The most prominent consequence of introducing QED interactions is the need to take into account a photon PDF. Moreover, the presence of QED interactions introduces charge separation effects, which were absent in the pure QCD model. Thus, the evolution of PDFs associated with different flavours might differ, and this could have a phenomenological impact. In the same direction, the extended QCD-QED model includes quark-lepton mixing, although it is expected to be highly suppressed.§ RECOVERING QED CORRECTIONS: ABELIANIZATION ALGORITHM AND SPLITTING FUNCTIONSAn interesting property of EW corrections is that we can exploit the previous knowledge of QCD calculations to partially recover them. Explicitly, it is possible to obtain QED and mixed QCD-QED contributions by replacing gluons with photons: this is what we call the Abelianization technique. In Refs. <cit.> we apply this idea to compute O(αα_S) and O(α^2) corrections to the Altarelli-Parisi kernels. Some of these contributions have been computed in Refs. <cit.>, focusing on the amplitude-level results and the multiple collinear behaviour with photon emissions. In the context of mixed QCD-QED corrections, the first step consists in proposing a complete perturbative expansion of the splitting kernels in both couplings, i.e. P_ij = a_S P_ij^(1,0) + a P_ij^(0,1) + a_S^2 P_ij^(2,0) + a a_S P_ij^(1,1) + a^2 P_ij^(0,2) + …, with a_S and a the strong and electromagnetic coupling parameters, and in calculating P_ij^(1,1) and P_ij^(0,2) by considering P_ij^(2,0) and replacing one and two gluons by photons, respectively. Of course, this replacement involves some subtleties <cit.>. For instance, the Abelianization of P^(2,0)_gg leads to P^(1,1)_gγ and P^(1,1)_γ g; there are two possibilities for replacing one gluon and two different diagrams are obtained. However, starting from P^(2,0)_qq and implementing the mentioned replacement, we end up with two diagrams contributing only to P^(1,1)_qq.Another subtlety is related to the treatment of the factor n_F. In the context of QCD, it is related to the presence of quark loops. 
However, when we replace gluons by photons, we distinguish the particles according to their electric charge. Moreover, we can also include leptons inside the loop. So, we have the replacement n_F →∑_f e_f^2 , where the sum is restricted to quarks at O(αα_S), but all fermions are allowed at O(α^2). In the latter case, it is necessary to explicitly include the colour degeneracy, i.e. we need to sum over each possible quark colour. More details about the implementation of the Abelianization algorithm and the replacements implemented can be found in Refs. <cit.>. Finally, after obtaining the P^(1,1)_ab and P^(0,2)_ab terms in the perturbative expansion, we define the ratio K_ab^(i,j)=a_S^i a^j P_ab^(i,j)/P_ab^LO with the leading-order kernel P_ab^LO=a_S P_ab^(1,0)+ a P_ab^(0,1). Some illustrative plots are shown in Fig. <ref>. In particular, we consider the corrections to P_γ q (left panel) and P_q γ (right panel), both at O(αα_S) and O(α^2). For splittings involving at least one photon, the lowest order is P_ab^LO= a P_ab^(0,1), and the mixed QCD-QED corrections are dominant when compared with the two-loop QED terms (by a factor 10). The charge separation effect becomes more noticeable in P_q γ, although it is still present in P_γ q. § BENCHMARK EXAMPLE: DIPHOTON PRODUCTIONFinally, let's present a practical application of the Abelianization algorithm to a complete physical process. We consider diphoton production in hadron colliders, and we rely on the code of Ref. <cit.>. This code makes use of the q_T-subtraction method <cit.> to implement a fully differential cross-section calculation including NNLO QCD corrections.As a proof of concept, we focus on the NLO QCD part of the code and apply the Abelianization technique to obtain the corresponding NLO QED corrections. We transformed both the hard coefficients (and their finite contributions), as well as the universal coefficients used to build the counter-terms. We check the consistency of this approach by studying the collinear limits and comparing the behaviour of the counter-terms with the previously known QED splitting functions.After implementing the corresponding experimental cuts, we study the invariant mass and p_T distributions of the diphoton system. The results are shown in Fig. <ref>, where we include the NLO QED corrections for each partonic channel (i.e. qq̅ and q γ, coloured lines), as well as the total NLO and NNLO QCD contributions (black solid and dotted lines). We used <cit.> for the QCD calculation, and we varied the PDF set of the QED contribution in order to explore the effects of changing the photon PDF. From the plot, we can appreciate that the sets of Refs. <cit.> and <cit.> produce similar outputs in the q γ channel, but the set of Ref. <cit.> exhibits a completely different behaviour. In particular, the last set strongly enhances the QED corrections in the high-energy region; this behaviour is nevertheless compatible with the other sets within the large uncertainties in the determination of the photon distributions. On the other hand, all the distributions almost agree for the q q̅ channel, which shows that the quark PDFs are well constrained by the available experimental data.As a final comment, we would like to emphasize that there are some additional non-trivial features of the higher-order QED corrections. For instance, it is mandatory to properly deal with the EM running coupling, since it could introduce O(10%) deviations in the high-energy distributions. 
Also, the presence of QED radiation forces the introduction of additional cuts and clustering algorithms, whose phenomenological impact should not be underestimated. A more detailed discussion of these topics can be found in Ref. <cit.>. § CONCLUSIONSIn this work, we discussed some features of the computation of higher-order QED corrections. In particular, we focused on the extension of the DGLAP equations to deal with the novel lepton and photon distributions, as well as on the computation of the corresponding splitting kernels. We used an Abelianization technique to recover the O(αα_S) and O(α^2) corrections by making use of the well-known NLO QCD corrections to the AP kernels.After that, we extended the application of the Abelianization technique to the q_T-subtraction method and obtained the corresponding algorithm to compute NLO QED corrections. We applied this framework to the process pp→γγ + X; in particular, we modified the code, keeping only the NLO contributions, and verified the consistent cancellation of IR singularities. Moreover, we used this implementation to explore some phenomenological aspects of the process. For instance, we studied the dependence on the PDF set (focusing on the photon distribution), the high-energy behaviour of the QED corrections and the implementation of experimental cuts when QED radiation is included. From this analysis, we conclude that QED corrections must be seriously taken into account in the context of high-precision physics. In fact, recent studies for diphoton plus jets <cit.> confirm the relevance of the previous assertion and the necessity of a proper understanding of the EW contributions in the high-energy region.§ ACKNOWLEDGMENTSI would like to thank F. Driencourt-Mangin for carefully reading this article and suggesting modifications. This work has been done in collaboration with D. de Florian and G. Rodrigo (splitting functions), and with L. Cieri and G. Ferrera (diphoton corrections). The research project was partially supported by CONICET, ANPCyT, the Spanish Government, EU ERDF funds (grants FPA2014-53631-C2-1-P and SEV-2014-0398) and Fondazione Cariplo under the grant number 2015-0761. Altarelli:1977zs G. Altarelli and G. Parisi, Nucl. Phys. B126 (1977) 298. Curci:1980uw G. Curci, W. Furmanski and R. Petronzio, Nucl. Phys. B175 (1980) 27. Furmanski:1980cm W. Furmanski and R. Petronzio, Phys. Lett. 97B (1980) 437. Ellis:1996nn R. K. Ellis and W. Vogelsang, hep-ph/9602356. Catani:2011qz S. Catani, L. Cieri, D. de Florian, G. Ferrera and M. Grazzini, Phys. Rev. Lett. 108 (2012) 072001; Erratum: [Phys. Rev. Lett. 117 (2016) no.8, 089901]. Roth:2004ti M. Roth and S. Weinzierl, Phys. Lett. B590 (2004) 190. deFlorian:2015ujt D. de Florian, G. F. R. Sborlini and G. Rodrigo, Eur. Phys. J. C76 (2016) no.5, 282. deFlorian:2016gvk D. de Florian, G. F. R. Sborlini and G. Rodrigo, JHEP 1610 (2016) 056. Sborlini:2016dfn G. F. R. Sborlini, D. de Florian and G. Rodrigo, PoS ICHEP2016 (2016) 793. SPLITTINGS G. F. R. Sborlini, D. de Florian and G. Rodrigo, JHEP 1401 (2014) 018; JHEP 1410 (2014) 161; JHEP 1503 (2015) 021. Sborlini:2015jda G. F. R. Sborlini, D. de Florian and G. Rodrigo, PoS EPS-HEP2015 (2015) 492. Catani:2007vq S. Catani and M. Grazzini, Phys. Rev. Lett. 98 (2007) 222002. Rojo:2015acz J. Rojo et al., J. Phys. G42 (2015) 103103. Schmidt:2015zda C. Schmidt, J. Pumplin, D. Stump and C. P. Yuan, Phys. Rev. D93 (2016) no.11, 114015. Manohar:2016nzj A. Manohar, P. Nason, G. P. Salam and G. Zanderighi, Phys. Rev. Lett. 117 (2016) no.24, 242002. Manohar:2017eqh A. V. Manohar, P. Nason, G. P. 
Salam and G. Zanderighi, arXiv:1708.01256 [hep-ph]. Ball:2014uwa R. D. Ball et al. [NNPDF Collaboration], JHEP 1504 (2015) 040. INPREP L. Cieri, G. Ferrera and G. Sborlini, in preparation. Chiesa:2017gqx M. Chiesa, N. Greiner, M. Schoenherr and F. Tramontano, arXiv:1706.09022 [hep-ph].
{ "authors": [ "German F. R. Sborlini" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170927155657", "title": "Higher-order QED effects in hadronic processes" }
Department of Physics and Material Sciences Center, Philipps University Marburg, Renthof 5, D-35032 Marburg, GermanyDepartment of Physics and Material Sciences Center, Philipps University Marburg, Renthof 5, D-35032 Marburg, GermanyDepartment of Physics and Material Sciences Center, Philipps University Marburg, Renthof 5, D-35032 Marburg, GermanyA self-consistent scheme for the calculation of the interacting groundstate and the near bandgap optical spectra of mono- and multilayer transition-metal-dichalcogenide systems is presented. The approach combines a dielectric model for the Coulomb interaction potential in a multilayer environment, gap equations for the renormalized groundstate, and the Dirac-Wannier equation to determine the excitonic properties. To account for the extension of the individual monolayers perpendicular to their basic plane, an effective thickness parameter in the Coulomb interaction potential is introduced. Numerical evaluations for the example of MoS_2 show that the resulting finite size effects lead to significant modifications in the optical spectra, reproducing the experimentally observed non-hydrogenic features of the excitonic resonance series. Applying the theory to multi-layer configurations, a consistent description of the near bandgap optical properties is obtained all the way from monolayer to bulk. In addition to the well-known in-plane excitons, interlayer excitons also occur in multilayer systems, suggesting a reinterpretation of experimental results obtained for bulk material. Effects of interaction strength, doping, and frustration on the antiferromagnetic phase of the two-dimensional Hubbard model A.-M. S. Tremblay December 30, 2023 ============================================================================================================================= Influence of the effective layer thickness on the groundstate and excitonic properties of transition-metal dichalcogenide systems S.W. Koch December 30, 2023 ================================================================================================================================= § INTRODUCTION The optical and electronic properties of bulk transition-metal dichalcogenide systems (TMDCs) have been investigated intensively already in the 1970s<cit.>. The excitonic series observed in the optical absorption spectra could be attributed to transitions at the K-points of the Brillouin zone, which nowadays are often referred to as Dirac points<cit.>. However, as bulk TMDCs are indirect bandgap semiconductors, these materials have only played a minor role in the field of semiconductor optics in the following decades. More recently, the interest in TMDCs and their optical properties has been revived with the ability to fabricate them as monolayers. Unlike their bulk counterparts, monolayers of several semiconducting TMDCs display a direct gap at the K-points of their respective Brillouin zone with a transition energy in the visible range<cit.>. These systems exhibit a pronounced light-matter coupling and strong excitonic effects<cit.>. The availability of different materials with a similar lattice structure but different bandgaps renders this material class extremely interesting as building blocks for heterostructures<cit.>, and allows for the engineering of the overall electronic and optical properties to a wide extent. For the systematic design and engineering of the electronic and optical properties of TMDC systems, it is highly desirable to have a predictive microscopic theory that includes the fundamental structural properties as well as the strong Coulomb interaction effects among the electronic excitations. In this article, we present a theoretical framework that allows us to determine both the Coulombic renormalization of the K-point bandgap and the excitonic states. 
Our approach combines a dielectric model to determine the Coulomb interaction potential in a multilayer environment, the gap equations for the renormalized ground state, and the Dirac-Wannier equation, a generalization of the Mott-Wannier equation, for the calculation of the excitonic states. The starting point of our theory is an effective two-band Hamiltonian, for which we use the massive Dirac-Fermion model (MDF)<cit.>. Within the MDF, the gap equations and the Dirac-Wannier equation can be derived as the static and linear parts of the Dirac-Bloch equations, i.e., the coupled equations of motion for the interband polarization and the electron-hole populations<cit.>. As our approach is based on the equations of motion, it can easily be extended to describe the nonlinear and dynamical optical properties.In order to account for the finite out-of-plane monolayer extension, we introduce a thickness parameter d in the effective Coulomb potential governing the interaction between the electronic excitations. The precise value of d is determined by fitting a single spectral feature, e.g., the exact value of the energetically lowest excitonic resonance, to available experimental data. As all other parameters are extracted from first-principles density functional theory (DFT) calculations, d is the only adjustable parameter in our theory. Once d is fixed for a given material system, we are able to predict the bandgap and all the excitonic resonances for arbitrary dielectric environments and numbers of layers. Furthermore, we are able to study the optical properties of multi-layered structures and, in particular, the transition from a monolayer to bulk.The paper is organized as follows: In Sec. <ref>, we present the model system used for the calculations of the K-point groundstate and the optical properties of a multilayer structure. In Sec. <ref>, we derive the Wannier equation for the Dirac excitons and the gap equations that determine the renormalized groundstate properties. In Sec. <ref>, we investigate finite size effects and the scaling properties of the coupled gap and Wannier equations for the simplified case of a constant background screening. The results show that finite size effects lead to drastic modifications of the excitonic spectra. Finally, we analyze in Sec. <ref> the bandgap renormalization and near bandgap optical properties for mono- and multilayer configurations for the example of MoS_2, before we present a brief summary and discussion of our approach. In the appendix, we summarize important aspects of the electrostatic ingredients of our model, including the determination of the effective Coulomb interaction and screening properties. § MODEL SYSTEM Our model system is a stack of N identical van-der-Waals bonded TMDC monolayers. Systematic studies of the bandstructure as a function of the number of layers<cit.> show that the transition from direct to indirect occurs already when going from a monolayer to a bilayer configuration. This feature has been confirmed experimentally by layer-number dependent PL measurements<cit.>. At the same time, the DFT bandstructure investigations show that the bandstructure details around the K-points, which govern the optical absorption properties, are largely preserved when increasing the number of layers from monolayer to bulk<cit.>. At the K points, the out-of-plane effective masses of the valence and conduction bands are typically much larger than those of the in-plane directions<cit.>. 
Consequently, the out-of-plane component of the kinetic energy can be neglected and the quasi-particles at the K-points can be considered as quasi-two-dimensional particles well confined within the layers. Based on this observation, we treat the K-point dynamics in a multilayer stack as that of N electronically independent layers that are coupled via the Coulomb potential within the respective dielectric environment: H=∑_n ( H_0^n+H_I^n) +1/2∑_nm,𝐪 V_𝐪^nmρ̂_𝐪^nρ̂^m_-𝐪. Here H_0^n describes the Hamiltonian of the n^th layer, H_I^n contains the light-matter interaction, and H_C the Coulomb interaction, respectively. We assume that ρ̂_𝐪^n, the charge density of the n-th layer, is strongly localized within that layer. Treating the Hamiltonian of the isolated monolayer within an effective two-band model, the screening of the bands under consideration is included dynamically, whereas the Coulomb matrix element V_𝐪^nm contains the screening of all the other bands and the dielectric environment. §.§ The Massive Dirac Fermion Hamiltonian According to ab initio methods based on DFT, the highest conduction and the lowest valence band are predominantly composed of d-type atomic orbitals of the metal atom<cit.>. Combining the relevant atomic orbitals that contribute to the valence and conduction bands into a two-component pseudo-spinor, the minimal two-band Hamiltonian describing the near K-point properties in lowest order 𝐤·𝐩-theory can be written as<cit.> Ĥ_0^n = ∑_sτ,𝐤Ψ̂^†_nsτ𝐤(at𝐤·σ̂_τ+Δ/2 σ̂_z -sτλ(σ̂_z-1)/2)Ψ̂_nsτ𝐤. Here, τ=± 1 is the so-called valley index, whereas Δ, 2λ, t and a denote the energy gap, the effective spin splitting of the valence bands, the effective hopping matrix element, and the lattice constant, respectively. The operator Ψ̂_nsτ𝐤 is the tensor product of the electron spin state and the two-component quasi-spinor in the n-th layer. The Pauli matrices σ̂_τ=(τσ̂_x,σ̂_y) and σ̂_z act in the pseudo-spin space and s is the z-component of the real spin, respectively.The eigenstates of Ĥ_0 have the relativistic dispersion ϵ_sτ k = ±1/2√(Δ_sτ^2+(2ħ v_F k)^2), where Δ_sτ=Δ-sτλ denotes the spin and valley dependent energy gap at the K^± points and v_F=at/ħ is the Fermi velocity. Employing the minimal substitution principle, the light-matter (LM) Hamiltonian is obtained as H_I^n =-e v_F/c∑_sτ𝐤Ψ̂^†_nsτ𝐤 𝐀^n ·σ̂_τ Ψ̂_nsτ𝐤. Expanding the charge density in terms of the pseudo-spinors, we find for the Coulomb interaction H_C=1/2∑_nm∑_ss'ττ'∑_𝐤𝐤'𝐪 :Ψ̂^†_nsτ𝐤-𝐪Ψ̂_nsτ𝐤 V_𝐪^nm Ψ̂^†_ms'τ'𝐤'+𝐪Ψ̂_ms'τ'𝐤': where :· : denotes normal ordering.§.§ Coulomb Potential in a Multilayer Environment The Coulomb interaction potential in our two-band Hamiltonian contains screening contributions from the system's environment, such as substrate screening etc., and possible non-resonant intrinsic contributions arising from all other bands. To avoid double counting, it is important to separate the contributions of the explicitly treated bands from the rest. Since the DFT dielectric tensor contains all the ingredients, the separation of resonant and non-resonant contributions is a nontrivial task. Here, we develop a scheme that combines bulk DFT calculations of the dielectric tensor with analytical results obtained within the MDF model and that allows us to determine the fully screened and non-resonantly screened ('bare') Coulomb potential for various dielectric environments. 
To derive the Coulomb interaction potential in the multilayer environment, we start from Maxwell's equations ∇·𝐃 = 4πρ_ext, ∇·𝐁 = 0, ∇×𝐇-1/c ∂_t𝐃 = 4π/c 𝐣_ext, ∇×𝐄+1/c ∂_t𝐁 = 0. For the layered material, we make the ansatz 𝐁=𝐇, 𝐃=ε_∥𝐄_∥+ε_⊥E_z 𝐞_z+4π𝐏, where ε_∥≡ε_∥(z) and ε_⊥≡ε_⊥(z) represent the non-resonant contributions to the anisotropic dielectric tensor and 𝐏 contains all nonlocal, time and frequency dependent resonant contributions. The non-resonant contributions are assumed to be local in space and time and constant within a slab of thickness L=ND, where N is the number of layers and D the natural layer-to-layer distance in the bulk parent material (see Fig. <ref>). As the considered structure is homogeneous with respect to the in-plane coordinates but inhomogeneous with respect to the out-of-plane coordinate, we use a mixed (𝐪,z) representation in the following, where 𝐪 is the in-plane wave vector. With 𝐁=∇×𝐀, 𝐄=-∂_t𝐀/c-∇ϕ and the generalized Coulomb gauge ε_∥∇_∥·𝐀_∥+ε_⊥∂_z A_z=0, a division into in-plane transverse and longitudinal contributions yields Poisson's equation for the scalar potential (-ε_⊥∂^2_z+ε_∥𝐪^2)ϕ =4π(ρ_ext- i 𝐪·𝐏_∥^L-∂_zP_z). The solution of this equation for a δ-inhomogeneity ρ_ext=δ(z-z') and vanishing resonant polarization determines the 'bare' Coulomb potential V_𝐪(z,z'). Correspondingly, the screened Coulomb potential is obtained as the solution of Poisson's equation with the resonant contributions. Provided the non-resonant contributions to the dielectric tensor are known, the bare Coulomb interaction can be obtained analytically from Eq. (<ref>). For the resonant contributions to the longitudinal polarization, we assume that these are composed of a sum of localized (2D) parts, that are treated within linear response. In the strict 2D limit, these can be expressed as 𝐏_∥^L=-i𝐪 e^2∑_n=1^N χ_L(𝐪,ω)ϕ(𝐪,z_n,ω)δ(z-z_n), where z_n=(n-1/2)D is the central position of the n^th layer and χ_L(𝐪,ω) is the longitudinal susceptibility, respectively. The longitudinal susceptibility is related to the polarization function of the 2D layer via χ_L(𝐪,ω)=-Π(𝐪,ω)/q^2. Within the MDF model, for each spin and valley combination, the long-wavelength limit of the static RPA polarization function gives <cit.> Π(𝐪,0)=-q^2/(6πΔ_sτ), where Δ_sτ is the spin and valley dependent gap at the Dirac points. Summing over the spin and valley indices, one finds r_0=lim_q→ 0 2π e^2χ_L(q,0)=2 e^2(Δ_A+Δ_B)/(3Δ_AΔ_B), which is of the order of 10 Å for a typical MX_2 monolayer, independent of the dielectric environment. Inserting Eq. (<ref>) into Eq. (<ref>), we obtain for the screened Coulomb interaction V_S,𝐪^nm(ω)=∑_l=1^N(δ_nl+e^2q^2χ_L(𝐪,ω)V_𝐪^nl)^-1 V_𝐪^lm. Eq. (<ref>) expresses the screened Coulomb interaction in terms of the bare potential and an inverse nonlocal dielectric function. With the aid of the screened and unscreened interaction, we can define the local dielectric functions ϵ^n(𝐪,ω)=V_Vac,𝐪^nn/V_S,𝐪^nn(ω), where V_Vac,𝐪^nn=2π e^2/|𝐪| is the 2D Coulomb potential in vacuum. Similarly, we introduce the resonant and nonresonant contributions of the local dielectric functions as ϵ^n_res(𝐪,ω)=V_𝐪^nn/V_S,𝐪^nn(ω) and ϵ^n_nr(𝐪,ω)=V_Vac,𝐪^nn/V_𝐪^nn, respectively. In general, each layer within the multilayer environment has a different local dielectric function reflecting its respective dielectric environment. For a bulk material consisting of N≫ 1 regularly spaced layers, Eq. (<ref>) can be solved by a Fourier transformation, giving V_S(𝐪,q_z)=4π e^2/[ε_⊥ q_z^2+ q^2(ε_∥+4π e^2χ_L(𝐪,ω)/D)], where D is the layer-to-layer distance. Comparison with the 3D anisotropic Coulomb interaction suggests that the bulk in-plane dielectric constant is given by ε_∥^B=ε_∥+lim_q→ 0 4π e^2χ_L(𝐪,ω)/D. 
We use this relation and the bulk values for the macroscopic background dielectric constants obtained by DFT <cit.> to determine the required values of ε_∥ and ε_⊥.§.§ Quasi-2D Coulomb Potential Computing the Coulomb potential for a strictly 2D layer ignores the fact that the spatial carrier distribution in the out-of-plane direction has a finite extension and is not a sharp δ-function at the central layer position. Hence, instead of solving Poisson's equation with a δ-singularity, we have to compute the scalar potential for a charge distribution ρ_𝐪(z-z_n) induced by the charge density in the n^th layer and replace Eq. (<ref>) by (see Appendix <ref>) 𝐏_∥^L= -i𝐪 e^2∑_n=1^N χ_L(𝐪,ω)ρ_𝐪(z-z_n) ∫_-D/2^D/2 dz'ϕ(𝐪,z',ω)ρ_-𝐪(z'-z_n). Defining the quasi-2D Coulomb potential between different layers as V̅_𝐪^nm=∫_-D/2^D/2dz∫_-D/2^D/2dz'ρ_-𝐪(z'-z_n)V_𝐪(z,z')ρ_𝐪(z-z_m) and similarly for the screened interaction potential, Eq. (<ref>) remains valid with all matrix elements replaced by the quasi-2D ones. In order to have a simple expression, we use in the following the 2D Ohno potential V̅_𝐪^nm≈ V_𝐪^nm e^-qd as approximation for the bare quasi-2D potential. Here, d denotes the effective thickness parameter accounting for finite out-of-plane size effects.§ METHODSThe Coulomb interaction leads to renormalizations of the single-particle bandstructure and to excitonic effects in the optical properties of a semiconductor. In this section, we follow the derivation in Ref. stroucken2017 to show how both of these features are obtained within the equations of motion (EOM) approach. Here, one derives the equations of motion for the interband polarization and the valence and conduction band occupation probabilities to obtain the semiconductor Bloch equations (SBE)<cit.>, which describe excitonic effects as well as the excitation dependent energy renormalizations. As input for the SBE, one needs the single-particle bandstructure and the system's groundstate properties. Since DFT-based bandstructure calculations usually underestimate the unexcited bandgap, one often uses the experimental values instead of the DFT results. Whereas this approach works well for the typical GaAs-type bulk or mesoscopic semiconductor structures, the fundamental gap of mono- or few-layer TMDCs is experimentally difficult to access and depends strongly on the dielectric environment. Therefore, it is desirable to compute the gap renormalization self-consistently from first principles. §.§ Gap Equations As shown in Ref. stroucken2017, the combination of the EOM with a variational approach yields a set of coupled integral equations, the gap equations, for the renormalized bandgap and the Fermi velocity. The gap equations are non-perturbative and can be derived on the same level of approximation as the EOM for the excitation dynamics. We define the dynamical variables Γ_sτ𝐤 =f^b_sτ𝐤 - f^a_sτ𝐤 = ⟨b̂^†_sτ𝐤b̂_sτ𝐤⟩ - ⟨â^†_sτ𝐤â_sτ𝐤⟩, Π_sτ𝐤 = ⟨b̂^†_sτ𝐤â_sτ𝐤⟩, where â^†_sτ𝐤 and b̂^†_sτ𝐤 create a particle in the basis states spanning the pseudo-spinor Ψ̂^†_sτ𝐤. Since the groundstate should be static, we search for the stationary solutions of Heisenberg's equations of motion, iħ d/dt Π_sτ𝐤 = (Δ_sτ + V̂[Γ_sτ])Π_sτ𝐤+ (τħ v_F k e^-iτθ_𝐤-V̂[Π_sτ])Γ_sτ𝐤, iħ d/dt Γ_sτ𝐤 =2Π_sτ𝐤(τħ v_F k e^iτθ_𝐤-V̂[Π^*_sτ])-2Π^*_sτ𝐤(τħ v_F k e^-iτθ_𝐤-V̂[Π_sτ]) in the absence of an externally applied optical field. To simplify the notation, we introduced the functional relation V̂[f]≡∑_𝐤' V_|𝐤-𝐤'| f_𝐤'. 
Demanding a stationary solution, we find 0= Δ̃_sτ𝐤Π_sτ𝐤 + τħṽ_sτ𝐤 k e^-iτθ_𝐤Γ_sτ𝐤, 0= Im[Π_sτ𝐤 τħṽ_sτ𝐤 k e^iτθ_𝐤], where Δ̃_sτ𝐤 = Δ_sτ + V̂[Γ_sτ], τħṽ_sτ𝐤 k e^-iτθ_𝐤 = τħ v_F k e^-iτθ_𝐤 - V̂[Π_sτ] are the renormalized bandgap energy and Fermi velocity, respectively. Together with the relation 1=Γ^2_sτ𝐤+4 |Π_sτ𝐤|^2, which holds for any coherent state, we obtain the algebraic equations Π_sτ𝐤 =-τħṽ_sτ𝐤k/2ϵ̃_sτ𝐤 e^-iτθ_𝐤, Γ_sτ𝐤 = Δ̃_sτ𝐤/2ϵ̃_sτ𝐤 with ϵ̃_sτ𝐤=1/2√(Δ̃_sτ𝐤^2+(2ħṽ_sτ𝐤 k)^2). Inserting Eqs. (<ref>) and (<ref>) into Eqs. (<ref>) and (<ref>) yields the closed set of integral equations, the gap equations, as Δ̃_sτ𝐤 = Δ_sτ+1/2∑_𝐤' V_|𝐤-𝐤'| Δ̃_sτ𝐤'/ϵ̃_sτ𝐤', ṽ_sτ𝐤 =v_F + 1/2∑_𝐤' V_|𝐤-𝐤'| k'/k ṽ_sτ𝐤'/ϵ̃_sτ𝐤' e^iτ(θ_𝐤-θ_𝐤'). It is easily verified that Δ̃_sτ𝐤 and ṽ_sτ𝐤 define the mean-field Hamiltonian Ĥ^MF = ∑_s,τ,𝐤Ψ̂^†_sτ𝐤(ħṽ_sτ𝐤𝐤·σ̂_τ +Δ̃_sτ𝐤/2 σ̂_z) Ψ̂_sτ𝐤 with the eigenvalues ±ϵ̃_sτ𝐤. The corresponding eigenstates are given by Ψ_𝐤^c=([ u_sτ k; v_sτ k e^iτθ_𝐤 ]), Ψ_𝐤^ν=([ v_sτ k e^-iτθ_𝐤; -u_sτ k ]), where u_sτ k=√((ϵ̃_sτ k+Δ̃_sτ k/2)/2ϵ̃_sτ k) and v_sτ k=√((ϵ̃_sτ k-Δ̃_sτ k/2)/2ϵ̃_sτ k). As usual in intrinsic semiconductors, the groundstate is characterized by a completely filled valence and an empty conduction band, respectively. Since ϵ̃_sτ𝐤>ϵ_sτ𝐤, the total energy lies below the energy of the non-interacting groundstate.§.§ Dirac-Bloch and Dirac-Wannier Equations To determine the excitation dynamics of our model system, we transform the Hamiltonian into the electron-hole picture using the renormalized bandstructure and eigenstates. Furthermore, we use the interband transition amplitudes and occupation numbers of the renormalized bands as dynamical variables, P_sτ𝐤 = ⟨ν^†_sτ𝐤 c_sτ𝐤⟩, f_sτ𝐤 = 1- ⟨ν^†_sτ𝐤ν_sτ𝐤⟩ =⟨ c^†_sτ𝐤 c_sτ𝐤⟩. It is easily verified that, using the renormalized bands, the groundstate expectation values are given by P_sτ𝐤=f_sτ𝐤=0 (note: this is not true for the transition amplitudes and occupation numbers within the unrenormalized bands!).At the Hartree-Fock level, the resulting Heisenberg EOM for the dynamical variables are given by<cit.>: iħ∂/∂ t P_sτ𝐤 =2 (Σ_sτ𝐤-1/c 𝐀·𝐣_sτ𝐤) P_sτ𝐤-(1-2f_sτ𝐤)Ω_sτ𝐤- iħ∂/∂ t P_sτ𝐤|_coll, ħ∂/∂ t f_sτ𝐤 =-2 Im[ P^*_sτ𝐤Ω_sτ𝐤] - ħ∂/∂ t f_sτ𝐤|_coll.
Apart from the dispersion, the DWE differs from the standard Mott-Wannier equation by the last term on the l.h.s. of Eq. (<ref>), that describes a coupling of the ϕ and ϕ^* by spontaneous pair creation and annihilation. In view of the large gap in semiconducting TMDCs, these contributions are frequently neglected. However, the validity of this approximaton is not a priori clear since is actually depends on the strength of the Coulomb interaction. In our evaluations in this paper, we therefore avoid the wide-gap approximation (WGA). § FINITE THICKNESS EFFECTS In the strict 2D limit, the exciton binding and wavefunctions at the origin become singular in the regime of strong Coulomb interactions<cit.> leading to an excitonic collapse of the interacting groundstate. In this case, the system undergoes a transition into an excitonic insulator state, where the bright optical resonances correspond to intra-excitonic transitions of a BCS-like excitonic condensate<cit.>. A similar divergence of the binding energy and wavefunctions is known in QED for hydrogen-like atoms with Z>137. In QED, this "catastrophe" is treated via a regularization of the Coulomb-potential accounting for a small but finite extension of the nucleus, i.e., by replacing the 1/r potential by the Ohno potential 1/√(r^2+d^2). In this section, we apply a similar procedure and investigate the influence of finite size effects on the gap and exciton equations for a monolayer with a constant background screening κ, i.e. V̅_=2πe^2e^-qd/κ q. This potential is appropriate for both, a monolayer embedded in bulk with κ=√() and for the long wavelength limit qD→ 0 of a monolayer on a substrate with κ=(ϵ_S+1)/2 (see Appendix).In order to unify the description of different material systems and to identify the general aspects of the obtained results, it is often advantageous to introduce scaled units. For the problem under investigation here, one can either choose relativistic orexcitonic units. As the only absolute energy value entering into the DWE, one can use the single-particle gap Δ as energy unit. The single-particle dispersion is then found as ϵ_k/Δ=±1/2√(1+(kλ_C)^2),where λ_C=2ħ v_F/Δ is the Compton wavelength of the electrons and holes. Using the Compton wavelength as length scale, the scaled quasi-2D Coulomb potential is given byV̅_𝐪 = V̅_𝐪/Δ = πα/q̅ e^qd,which is characterized by theparameter combination α=e^2/κħ v_F.The Compton wavelength allows one to distinguish between the relativistic and the non-relativistic regimes, wherethe latter one is found on a length scale large compared to the Compton wavelength. Using scaled units, it is easily shown that the total Hamiltonian is characterized by two parameters, namely the effective fine structure constant α and the effective thickness parameter d. Consequently,both the gap equations and the exciton equation are characterized by the same parameters. The long-wavelength limit of the resonant part of the RPA dielectric function in scaled units is obtained as ϵ_ res()=1+2/3α qλ̅_C e^-qd,where λ̅_C=(λ^A_C+λ^B_C)/2 is the avarage of the respective Compton wavelengths associated with the gap of the A and B excitons. This dielectric functionis of a similar form as thepotential first introduced by Keldysh<cit.> for a thin sheet with constant sheet polarizability and has been used by several authors<cit.> to model theexcitonic properties of TMDCs. 
As a consequence of the Ohno potential, the dielectric function does not increase to infinity with increasing q but approaches its maximum value at q=1/d. A similar behavior has been found by first-principles calculations including finite size effects<cit.> or using a truncated Coulomb potential<cit.>. Furthermore, the screening length 2/3 αλ̅_C contains resonant contributions only. When discussing excitonic properties, it is sometimes useful to resort to excitonic units. The Compton wavelength and the (3D) exciton Bohr radius a_0=ħ^2κ/m_r e^2 are related via a_B=2λ_C/α, and the exciton Rydberg Ry=m_re^4/2ħ^2κ^2 is related to the gap via Ry=α^2Δ/8, respectively. In the following, we will use both unit systems in order to emphasize systematic dependencies and the essential underlying physics. §.§ Numerical Solution of the Gap Equations Examples of our numerical solutions of the gap equations (<ref>) are shown in Fig. <ref>. Here, we plot Δ̃_𝐤 and ṽ_𝐤 as well as the resulting renormalized single-particle dispersion ε̃_𝐤 for various values of α and a fixed thickness parameter d=1.0λ_C. Both Δ̃_𝐤 (left) and ṽ_𝐤 (inset) have their maxima at k=0 and converge to their respective non-interacting groundstate values Δ and v_F (respective black dotted lines) for large k. Within a good approximation, the renormalization of the band gap energy and the Fermi velocity leads to a rigid shift of the non-interacting single-particle dispersion (right panel in Fig. <ref>), in agreement with reported predictions based on the GW approximation<cit.>. Since the renormalization does not lead to a deformation of the single-particle bandstructure, it only shifts the energetic position of the excitonic resonances in the respective optical spectra but does not influence their binding energies. Hence, it suffices to study the overall gap shift as a function of the system parameters α and d. For this purpose, we plot in the left panel of Fig. <ref> the computed dependence of the renormalized gap on α for three different values of the effective thickness parameter. As we can see, the gap increases linearly with α for small values of the coupling strength, switching over to a logarithmic increase for large coupling strengths. In the right panel of Fig. <ref>, we show the computed values of the renormalized gap as a function of the effective thickness parameter d for three different values of α. We notice a sensitive d dependence of the gap in the region where d≲αλ_C, which is typically realized in TMDC structures.§.§ Numerical Solution of the Dirac-Wannier Equation Often <cit.>, the excitonic properties of TMDCs are treated in the WGA, where the relativistic quasi-particle dispersion can be approximated by parabolic bands and all contributions ∝ v_k v_k' in the Coulomb matrix elements can be neglected. As a result, the excitonic states become independent of the Compton wavelength and the only remaining length scales are the effective sheet thickness d and the exciton Bohr radius a_B. Moreover, states with m=±|m| are degenerate. Since the only energy scale other than the gap is the exciton Rydberg energy, the WGA is actually equivalent to the nonrelativistic approximation α≪ 1. For typical TMDC parameters, the effective coupling constant is in the range of α≈ 3/κ-5/κ, clearly questioning the WGA. Corrections to the WGA result both from the full relativistic dispersion and from the lifting of the degeneracy between states with opposite orbital angular momentum<cit.>. Numerically solving the full DWE (<ref>), we obtain the results shown in Fig. <ref>. 
Here, we plot the binding energies of the 1s-(solid lines)and 2s-exciton (dashed lines) as functions of the effective thickness parameter in excitonic units for α = 1.0 and α = 3.0. For reference, the arrows mark the binding of theexciton states with main quantum number n=0, n=1 , and n=2within the 2D hydrogen model. For finite values for the effective sheet thickness d, the Coulomb interaction close to the origin is weakened relative to the strict 2D case, affecting particularly the strongest bound s-type excitons with large probability density at the origin. Fig. <ref> clearly shows that the binding energies of the 1s and 2s excitons vary strongly with the sheet thickness in the regime where d ≈ a_B and become pretty much d independent for d ≫ a_B.In that limit, the binding energy of the1s- exciton drops below the value of the n=1 2D-exciton state. At the same time, the 2s binding energy seems to converge toward the n=2 value of the 2D limitleadingto an overall strongly non-hydrogenic behavior of the exciton series similar to the experimental observations<cit.>. This behavior is quite different from what is known for semiconductor quantum wells, where the exciton series changes from a 2D to 3D Rydberg series if the sample dimensions exceed the exciton Bohr radius. The combined solution of the gap equations (<ref>) together with the DWE (<ref>) allows us to determine the energetic positions of the excitonic resonances in an optical spectrum. In Fig. <ref>, we show the results for the five lowest s-type excitonic states for a fixed thickness d=λ_C as function of coupling strength α.For reference, we also plot the variation of the renormalized bandgap at one of the Dirac points (black dotted line). As expected,thebinding energies increase with increasing Coulomb coupling strength. However, the increased binding is overcompensated by the bandgap renormalization, leading to an overall blue shift of theexcitonic resonance spectrum. In the limit of strong Coulomb coupling, the increase of the 1s-exciton binding energy is almost canceled by the renormalization of the bandgap, such that the lowest exciton resonance depends only weakly on the coupling strength.The coupling between ϕ and ϕ^* in the DWE leads to a fine structure in the exciton spectrum lifting the degeneracy between states with opposite orbital angular momentum. In Fig. <ref>, we show the splitting of the lowest p-states for a fixedeffective thickness d=1.0λ_C. In the limit of small valuesfor the Coulomb coupling, the splitting increases quadratically switching over to a linear increase for large values of α, respectively. For a suspended monolayer(α≈ 3.0-5.0), the splitting of the 2p states can be as high as 5-6% of the noninteracting energy gap.For supported monolayers, e.g. on a SiO_2 substrate(α≈ 1.2-2.0), our calculations predict a splitting on the order of 10-15 meV, depending on the noninteracting gap of the specific material and on the screening. This valueshould be in the experimentally accessible range. § MULTILAYER STRUCTURES So far, we investigated the excitonic scaling properties and the influence of finite layer thickness within a simplified model for the dielectric environment. In this section, we extend this approch and numerically study the properties of a multilayer TMDC system using the full solution of Poisson's equation within the anisotropic dielectric environment for the example of MoS_2.For the MDF material parameters, we use the values given in Ref. 
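The d-dependence of the 1s/2s binding energies can be illustrated with a rough sketch of the wide-gap (non-relativistic) limit only; this is NOT the full Dirac-Wannier equation solved above, but the 2D Schrödinger problem with the Ohno-regularized potential in excitonic units. Grid parameters and the selection of states below are our own choices.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def s_state_energies(d, r_max=60.0, n_grid=6000, n_states=2):
    """m = 0 states of -u'' - u/(4 r^2) - 2 u/sqrt(r^2+d^2) = E u (lengths in a_B, energies in Ry)."""
    h = r_max / n_grid
    r = h * np.arange(1, n_grid + 1)          # radial grid; u(0) = u(r_max) = 0
    v_eff = -0.25 / r**2 - 2.0 / np.sqrt(r**2 + d**2)
    diag = 2.0 / h**2 + v_eff                 # finite-difference kinetic term + potential
    off = -np.ones(n_grid - 1) / h**2
    energies = eigh_tridiagonal(diag, off, select='i',
                                select_range=(0, n_states - 1))[0]
    return energies                           # negative values are bound-state energies

for d in [0.1, 0.5, 1.0, 2.0]:
    e1s, e2s = s_state_energies(d)
    print(f"d = {d:4.1f} a_B :  E_1s = {e1s:7.3f} Ry,  E_2s = {e2s:7.3f} Ry")
```

For d → 0 the lowest eigenvalue approaches the strict-2D hydrogen value of -4 Ry, while finite d weakens the binding of the s states most strongly, in line with the qualitative trend discussed above.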
§ MULTILAYER STRUCTURES So far, we investigated the excitonic scaling properties and the influence of finite layer thickness within a simplified model for the dielectric environment. In this section, we extend this approach and numerically study the properties of a multilayer TMDC system, using the full solution of Poisson's equation within the anisotropic dielectric environment, for the example of MoS_2. For the MDF material parameters, we use the values given in Ref. xiao2012, Δ_A=1.585 eV, Δ_B=1.735 eV, α^[0]=e^2/(ħ v_F)=e^2/(ta)=4.11, from which we obtain the Compton wavelengths λ_A=4.432 Å, λ_B=4.049 Å, and the screening length r_0=11.62 Å. To determine the Coulomb potential, we take the bulk in-plane and out-of-plane dielectric constants from Ref. ghosh2013, ϵ_∥^B=8.29 and ϵ_⊥=3.92. Using a layer-to-layer distance D=6.2 Å, we find a background contribution to the in-plane dielectric constant ϵ_∥=4.54. In a first step, we fix the only undetermined parameter in our theory, namely the effective thickness parameter d. To this end, we plot the renormalized gap and exciton resonances as a function of d and compare the resulting predictions with experimentally available data. In Fig. <ref>, we show the result of this procedure for the example of MoS_2 on SiO_2, where we use a constant dielectric constant ϵ_S=3.9 for the SiO_2 substrate. We fit the effective thickness parameter such that we obtain E=1.92 eV as the energy of the lowest exciton resonance, which is in the range of measured values <cit.>. As can be recognized, the best agreement is obtained for an effective thickness parameter d=4.47 Å, which is smaller than the layer separation D. The corresponding values for the bandgap and the first excited exciton resonance are then E_G=2.244 eV and E_2s=2.136 eV, giving binding energies of E^B_1s=324 meV and E_2s^B=108 meV for MoS_2 on SiO_2, respectively. Unfortunately, as the value of the gap is difficult to determine experimentally, we cannot directly compare the findings for the bandgap and exciton binding energy with experiment. However, we can use the optimized value of the effective thickness to predict the bandgap and exciton resonances for a suspended monolayer, yielding E_G=2.55 eV and E_1s=1.96 eV, and a binding energy of the 1s-exciton of E_1s^B=0.599 eV. These values are in good agreement with the values of E_G=2.54 eV and E_1s^B=0.63 eV reported in Ref. qiu2016. Once the thickness parameter is fixed, we are able to compute the renormalized bands and resonance positions for samples with arbitrary layer numbers and substrates. If we increase the number of layers, the number of bands within the first 2D Brillouin zone increases accordingly. For the effective 2D quasi-particles that are localized well within a given layer, we can use the layer number n within the stack as a good quantum number. In the following, we introduce the notation E_G(n,m)=E^c_n-E^v_m for the transition energy between the bottom of the n-th conduction band and the top of the m-th valence band at the K-points, and a similar notation for the exciton resonances. In Fig. <ref>, we show the variation of the renormalized valence-to-conduction band transition energies E_G(n,n) and of the corresponding lowest exciton resonances E_1s(n,n) with increasing number of layers. Since the effective local dielectric functions differ for different layers in the sample, both the transition energies and the excitonic resonances between bands associated with different layers are non-degenerate, leading to additional resonances in the optical spectra of the multilayer structure. For each value of N, the dots denote the transition energies E_G(n,n) and E_1s(n,n) for n=1,…,N, and the lines represent their weighted average. In the bulk limit N→∞, we find E_G^∞=2.03 eV and E_1s^∞=1.88 eV, giving a binding energy of 150 meV for the lowest lying bulk exciton. These values are in good agreement with GW-BSE based ab initio results reported in Ref. komsa2012, where a binding energy of 130 meV was found for the bulk A-exciton.
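The derived length and screening scales quoted at the start of this section can be checked directly from the listed inputs. In the sketch below, e^2 = 14.40 eV·Å is the Gaussian-units value of e^2/(4πϵ_0), and the relation ϵ_∥ ≈ ϵ_∥^B − 2r_0/D is our own reading of how the background in-plane constant was obtained; neither is stated explicitly in the text.

```python
e2 = 14.40                        # e^2/(4*pi*eps0) in eV*Angstrom (assumed constant)
alpha0 = 4.11                     # e^2/(hbar v_F), quoted above
delta_A, delta_B = 1.585, 1.735   # gaps in eV
hbar_vF = e2 / alpha0             # eV*Angstrom

lam_A = 2 * hbar_vF / delta_A                      # ~4.42 A  (quoted: 4.432 A)
lam_B = 2 * hbar_vF / delta_B                      # ~4.04 A  (quoted: 4.049 A)
r0 = (2.0 / 3.0) * alpha0 * 0.5 * (lam_A + lam_B)  # ~11.6 A  (quoted: 11.62 A)

eps_par_bulk, eps_perp, D = 8.29, 3.92, 6.2
eps_par_bg = eps_par_bulk - 2 * r0 / D             # ~4.5   (quoted: 4.54)  -- our reading
kappa_bg = (eps_par_bg * eps_perp) ** 0.5          # background kappa = sqrt(eps_par * eps_perp)
print(lam_A, lam_B, r0, eps_par_bg, kappa_bg)
```

Small residual differences to the quoted numbers come from the precise values of the physical constants used.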
For reference, the respective bulk limits for the band gap and lowest exciton are indicated in Fig. <ref> by the dashed lines. Besides their intralayer interaction, the electrons and holes in a multilayer structure also interact with carriers in neighboring layers, with the possibility to form bound interlayer excitons. To illustrate these features, we plot in Fig. <ref> the free-particle transition energies E_G(n,n) and resonance energies E_1s(n,n) for intralayer excitons, where the electron-hole pair resides within the same layer, as well as the interlayer transition energies E_G(1,n) and E_G(n,25), and the energies of interlayer excitons E_1s(1,n) and E_1s(n,25), where an electron is confined in the n-th layer and the hole in the top or middle layer, respectively. We see that the interlayer excitons form a whole spectral series with decreasing binding energy for increasing spatial electron-hole separation. Due to our model assumption of electronically independent layers, the interlayer excitons are optically dark and cannot be observed in optical spectra. However, if we relax the assumption of electronically fully independent layers and allow for a finite overlap of the electron and hole wave functions in different layers, these interlayer excitons gain a finite oscillator strength. Assuming Gaussian distributions for the electron and hole densities, we can estimate the electron-hole overlap between different layers from the integral |∫ dz ϕ_e(z)ϕ_h(z-nD)|^2, which determines the oscillator strength of the respective interlayer excitons. Using these model assumptions, we can compute optical absorption spectra for different multilayer systems. In Fig. <ref>, we show the results for suspended mono- and bilayer MoS_2, using the screened Coulomb potential and thickness d=4.47 Å. The signature in the spectral range between the lowest A and B excitons, which are red shifted by roughly 30 meV, is the lowest interlayer exciton. Furthermore, we see a clear red shift of the intralayer excitons in the bilayer relative to the monolayer. In Fig. <ref>, we show the spectrum for a multilayer sample in the limit N→∞ in the spectral region of the A-exciton resonance series. The dominant peak at E=1.87 eV and the absorption features slightly below the gap (at 2.03 eV) correspond to the A-intralayer exciton series. The pronounced feature around E=1.93 eV results from the nearest-neighbor interlayer exciton, where electrons and holes are confined in neighboring layers. It is interesting to compare these predictions with experimental findings on bulk MoS_2, for which the absorption spectrum was measured already in the 1970s<cit.>. Transitions that were associated with the A-exciton at the K-points of the Brillouin zone have been observed around 1.92, 1.96 and 1.99 eV. In the original publication, the resonance features were interpreted as groundstate and excited-state transitions of a single exciton series. However, neither the resonance positions nor the oscillator strengths agree with the expectations based on an anisotropic 3D Rydberg series. These deviations have been discussed in the literature and have been explained by so-called "central-cell corrections". The remarkable agreement of the spectral signatures in Fig. <ref> with the measured resonances suggests a reinterpretation of the bulk exciton series as 2D intra- and interlayer excitons, despite some small deviations in the absolute positions of the dominant absorption peaks.
This interpretation is further supported by recent measurements on bulk MoS_2<cit.>, where a bias-dependent relative oscillator strength between the two dominant features has been observed, indicating a distinct z-dependence of both signatures. § DISCUSSION In conclusion, we present a theoretical framework that allows us to compute the bandgap renormalization and K-point excitonic resonances of TMDC mono- and multilayer structures. Our method contains the effective monolayer thickness as the only undetermined parameter. For the example of MoS_2, we show that by fitting this single parameter to obtain agreement for the lowest exciton resonance of a supported monolayer, we are able to compute the bandgap and excitonic spectra of samples with arbitrary layer numbers and substrates. In particular, we are able to predict the evolution of the bandgap and near-bandgap excitonic spectra over the whole range from monolayer to bulk. Our predictions for the bulk limit are in excellent agreement with experimental observations, suggesting a reinterpretation of the bulk A and B excitonic series in terms of effectively 2D intra- and interlayer excitons. It is interesting to compare our method with the well-established GW-BSE approach. In the GW-BSE approach, the quasi-particle bandgap is computed from many-body perturbation theory on top of the DFT band structure. Subsequently, excitonic states are obtained as solutions of the Bethe-Salpeter equation (BSE). The major strength of the GW-BSE approach is that it is fully ab initio and, as such, free of any undetermined parameters. However, this comes at the price of being numerically very demanding. The treatment of quasi-2D structures within GW-BSE is computationally even more challenging, as it requires large supercells to avoid spurious interactions between adjacent layers. The numerical complexity of the GW-BSE approach has not only led to a wide range of reported predictions for the bandgap and exciton bindings, it also limits its practical application to the description of groundstate and linear optical properties. Methodically, our approach displays several similarities to the GW-BSE approach. Similarly to GW, the gap equations provide a correction to the DFT band structure, and a subsequent solution of the Dirac-Wannier equation within the renormalized bands gives access to the excitonic states. However, whereas the GW-BSE equations involve many bands, our approach is explicitly based on a two-band Hamiltonian, thus reducing the numerical cost enormously. Though an effective two-band Hamiltonian restricts the applicability of our method to the simulation of the near-bandgap optical properties, our method is extremely flexible in modeling different dielectric environments and can easily be extended to describe nonlinear optical experiments. Both qualitatively and quantitatively, our predictions are in very good agreement with well-converged GW-BSE based results<cit.>. This, in addition to the excellent agreement with experimental observations, can be taken as a strong indication that our model system captures the essential physics around the K-points of the Brillouin zone. In particular, we identify finite size effects as essentially responsible for the observed non-hydrogenicity not only of monolayer spectra, but also of multilayer spectra in the bulk limit. This work is a project of the Collaborative Research Center SFB 1083 funded by the Deutsche Forschungsgemeinschaft. We thank M.
Rohlfing for stimulating discussions and for sharing his results on interlayer excitons in TMDCs prior to publication. § SOLUTION OF POISSON'S EQUATION §.§ Bare Coulomb Interaction The 'bare' Coulomb interaction corresponds to the Green function of Poisson's equation, i.e., it is obtained as the solution of Eq. <ref> for the scalar potential with δ-inhomogeneity ρ(𝐫_∥,z)=δ(z-z') in the absence of a resonant polarization, but in the presence of the inhomogeneous, anisotropic background. For a slab of thickness L=ND on a substrate with dielectric constant ϵ_S, the spatial profile of the background dielectric tensor is ϵ_∥(z) = 1 for z<0, ϵ_∥ for 0<z<L, ϵ_S for L<z, and ϵ_⊥(z) = 1 for z<0, ϵ_⊥ for 0<z<L, ϵ_S for L<z. Within the slab, the resulting Coulomb potential of a point charge located at 0<z'<L is given by V_q(z,z') = (2π/(κ q_∥)) [ e^{-γ q_∥ |z-z'|} + c_1 e^{-γ q_∥ (z+z')} + c_2 e^{-γ q_∥ (2L-z-z')} + c_3 e^{-γ q_∥ (2L-z+z')} + c_3 e^{-γ q_∥ (2L+z-z')} ], with κ = √(ϵ_∥ϵ_⊥), γ = √(ϵ_∥/ϵ_⊥), c_1 = (κ+ϵ_S)(κ-1)/N, c_2 = (κ-ϵ_S)(κ+1)/N, c_3 = (κ-ϵ_S)(κ-1)/N, and N = (κ+ϵ_S)(κ+1)-(κ-ϵ_S)(κ-1) e^{-2γ q_∥ L}. In Eq. (<ref>), the first term describes the direct interaction between the two point charges, the second term the interaction of the point charge at z with the image charge of z' from the vacuum/multilayer interface, the third term correspondingly from the multilayer/substrate interface, and the last terms the interaction with image charges from both interfaces. Interactions with higher order image charges are contained in the denominator N. Relevant for the intralayer exciton and band gap renormalization is the intralayer Coulomb potential V_q(z_n,z_n) with z_n=(n-1/2)D: V_q(z_n,z_n)=(2π/(κ q_∥))(1 + c_1 e^{-γ q_∥ (2n-1)D}+c_2 e^{-2γ q_∥ (N-n+1/2)D}+2 c_3 e^{-2γ q_∥ L}). For √(ϵ_∥/ϵ_⊥) q_∥ L≪ 1, the intralayer Coulomb potential reduces to V=4π/((ϵ_S+1)q_∥), i.e., to the vacuum 2D Coulomb interaction screened by substrate screening only, while for √(ϵ_∥/ϵ_⊥) q_∥ L≫1, it reduces to (2π/(κ q_∥))(1+((κ-1)/(κ+1)) e^{-2γ q_∥ (n-1/2)D}+ ((κ-ϵ_S)/(κ+ϵ_S)) e^{-2γ q_∥ (N-n+1/2)D}). In the left part of Fig. <ref>, we show the local dielectric functions for the middle layer of a suspended MoS_2 sample consisting of 1, 3, and 49 layers. At small wavenumbers, the dielectric function of the middle layer can be approximated by a first order Taylor expansion, giving ϵ(q)≈ (ϵ_S+1)/2 + N(2ϵ_∥ϵ_⊥-ϵ_S^2-1)/(4ϵ_⊥) qD. The linear approximation corresponds to a Keldysh potential<cit.> with background screening (ϵ_S+1)/2 and screening length r=N(2ϵ_∥ϵ_⊥-ϵ_S^2-1)D/(2(ϵ_S+1)ϵ_⊥). However, the linear approximation breaks down if qND>1, where the dielectric function approaches its bulk value. Estimating the relevant q-values by the inverse exciton radius r_X (note that the exciton radius should not be confused with the exciton Bohr radius; only for hydrogen-like excitons do these values coincide), this means that the total sample dimensions should not exceed the in-plane exciton radius. While this condition may hold for a monolayer, it is clearly invalid for a multilayer structure with large layer numbers. As can be recognized in Fig. <ref>, in a sample with 49 layers the nonresonant dielectric function jumps to its bulk background value at infinitesimal q-values. §.§ Screening Within linear response theory, the polarization in an inhomogeneous medium induced by an external perturbation field ϕ can be expressed in terms of a nonlocal susceptibility, P_L(𝐪,z,ω)=-ie^2∫ dz'χ_L(𝐪,z,z',ω)ϕ(𝐪,z',ω), where the z-dependence of the susceptibility reflects the spatial profile of the induced carrier density.
For the multilayer system, we assume charge distributions well localized within the layers, such that the integration region can be restricted to a region of thickness D around the layer centers: P_L(𝐪,z,ω)=-ie^2∑_{n=1}^{N}ρ(z-z_n)χ_L(𝐪,ω)ϕ̅^n(𝐪,ω), where ϕ̅^n(𝐪,ω)=∫_{-D/2}^{D/2} dz'ρ(z'-z_n)ϕ(𝐪,z',ω). In the strict 2D limit, this corresponds to Ansatz <ref> of the main text. The formal solution of equation <ref> with charge distribution ρ_ext(𝐪,z) is then given by ϕ(𝐪,z,ω) = ϕ_ext(𝐪,z,ω)-e^2q^2∑_nχ_L(𝐪,ω)∫ dz' V_q(z,z')ρ(z'-z_n)ϕ̅^n(𝐪,ω) ≈ ϕ_ext(𝐪,z,ω)-e^2q^2∑_nχ_L(𝐪,ω)∫_{-D/2}^{D/2} dz' V_q(z,z')ρ(z'-z_n)ϕ̅^n(𝐪,ω), where V_q(z,z') is the Coulomb interaction screened by the anisotropic background given in Eq. <ref> and ϕ_ext(𝐪,z,ω)=∫ dz'V_q(z,z')ρ_ext(𝐪,z') is the potential of the external charge distribution. Multiplication of Eq. (<ref>) with ρ(z-z_m) and integration over z gives ϕ̅^m(𝐪,ω) = ϕ̅_ext^m(𝐪,ω)-e^2q^2∑_nχ_L(𝐪,ω)V̅_q^{mn}ϕ̅^n(𝐪,ω), with the quasi-2D bare Coulomb potential V̅_q^{mn}=∫_{-D/2}^{D/2}dz ∫_{-D/2}^{D/2}dz'ρ(z-z_m)V_q(z,z')ρ(z'-z_n). The solution of Eq. <ref> can be obtained by a matrix inversion: ϕ̅^m(𝐪,ω)=∑_l(δ_{ml}+e^2q^2χ_L(𝐪,ω)V̅_q^{ml})^{-1}ϕ̅_ext^l(𝐪,ω), and the screened Coulomb interaction given in Eq. (<ref>) in the main text is obtained by choosing ρ_ext(𝐪,z)=δ(z-z_n). For a monolayer in the strict 2D limit, the solution simplifies to ϕ^{2D}(𝐪) = ϕ_ext(𝐪,z=D/2)/(1+ e^2q^2χ_L(𝐪,ω)V_q(D/2,D/2)), where ϕ^{2D} is the screened external potential. This result generally depends on the slab thickness D and becomes independent of D only in the two limiting cases D→ 0 and D→∞. The limit D→ 0 corresponds to a monolayer on a substrate, whereas the limit D→∞ corresponds to a monolayer embedded in a homogeneous anisotropic medium. Defining ϵ_eff by ϵ_eff=(ϵ_S+1)/2 and ϵ_eff=√(ϵ_∥ϵ_⊥), respectively, the localized 2D polarization contributes to the longitudinal dielectric function according to ϵ_RES=1+2π e^2 q_∥χ_L(𝐪,ω)/ϵ_eff. If the 2D susceptibility is independent of 𝐪 and ω, this part again corresponds to the Keldysh potential, with a resonant contribution to the anti-screening length r_0=2π e^2χ_L/ϵ_eff. In the middle part of Fig. <ref>, we show the resulting total effective dielectric function of the middle layer of a suspended multilayer sample, where we treat the resonant contributions to the dielectric function in the long wavelength limit. As can be recognized, if N is increased, the long-wavelength limit of the total dielectric function ϵ(q=0) approaches the bulk value √(ϵ_∥^B ϵ_⊥), with an in-plane component corresponding to the (fully screened) DFT bulk value, whereas the monolayer dielectric function in the small-q regime can again be approximated by a Keldysh potential with a total linear coefficient r_tot=r+2r_0/(ϵ_S+1). However, whereas the nonresonant contribution does not exceed the bulk background value √(ϵ_∥ϵ_⊥), the total dielectric function increases linearly, exceeding the DFT fully screened bulk value by far. This unphysical result stems from the strict 2D treatment of the carriers and from the invalidity of the long-wavelength limit for the polarization function in this regime.
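The appendix construction above can be condensed into a short numerical sketch: build the image-charge slab potential V_q(z,z') and the layer-resolved matrix V̄^{mn}, then obtain the screened layer potentials by the matrix inversion quoted above. This is our own simplified implementation: the layer charge profiles ρ(z−z_n) are approximated by delta functions at the layer centers, the susceptibility χ is treated as a plain input parameter, and all quantities must be supplied in mutually consistent units (the value of χ below is purely illustrative).

```python
import numpy as np

def slab_potential(q, z, zp, eps_par, eps_perp, eps_S, L):
    """Image-charge Coulomb potential V_q(z, z') inside an anisotropic slab of thickness L."""
    kappa = np.sqrt(eps_par * eps_perp)
    g = np.sqrt(eps_par / eps_perp) * q            # anisotropic decay constant
    Nden = (kappa + eps_S) * (kappa + 1) - (kappa - eps_S) * (kappa - 1) * np.exp(-2 * g * L)
    c1 = (kappa + eps_S) * (kappa - 1) / Nden
    c2 = (kappa - eps_S) * (kappa + 1) / Nden
    c3 = (kappa - eps_S) * (kappa - 1) / Nden
    return (2 * np.pi / (kappa * q)) * (
        np.exp(-g * abs(z - zp))
        + c1 * np.exp(-g * (z + zp))
        + c2 * np.exp(-g * (2 * L - z - zp))
        + c3 * np.exp(-g * (2 * L - z + zp))
        + c3 * np.exp(-g * (2 * L + z - zp)))

def screened_layer_potentials(q, n_layers, D, eps_par, eps_perp, eps_S, chi, e2=14.40):
    """Screened potentials of point charges in each layer via (1 + e^2 q^2 chi Vbar)^(-1) Vbar."""
    L = n_layers * D
    z = (np.arange(1, n_layers + 1) - 0.5) * D     # layer centers z_n
    Vbar = np.array([[slab_potential(q, zi, zj, eps_par, eps_perp, eps_S, L)
                      for zj in z] for zi in z])
    eps_matrix = np.eye(n_layers) + e2 * q**2 * chi * Vbar
    return np.linalg.solve(eps_matrix, Vbar)       # column n: screened potential of a charge in layer n

W = screened_layer_potentials(q=0.05, n_layers=5, D=6.2,
                              eps_par=4.54, eps_perp=3.92, eps_S=1.0, chi=0.02)
print(W.shape)
```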
In the right part of Fig. <ref>, we show the total effective dielectric function including finite size effects via the Ohno potential. As can be recognized, the effective dielectric function of the middle layer increases linearly for small q-values, starting at ϵ(q=0)=1. However, due to finite size effects, the dielectric function does not exceed the fully screened bulk limit √(ϵ_∥^B ϵ_⊥), but reaches a maximum value between the bulk background value √(ϵ_∥ϵ_⊥) and the fully screened DFT bulk limit √(ϵ_∥^B ϵ_⊥). The q-value at which the maximum is reached decreases with increasing number of layers, nicely reproducing the bulk long-wavelength limit for large layer numbers. Finally, we compare the effective dielectric function of the monolayer with two recent publications in which the effective 2D dielectric function of a monolayer TMDC has been extracted from first-principles supercell calculations, once using a dielectric model similar to ours<cit.> that accounts for finite size effects, and once using a truncated Coulomb potential<cit.>. Both approaches find a dielectric function starting at ϵ(q=0)=1 and a maximum value in the region q≈ 0.3 Å^{-1}. At large q-values, the dielectric function decreases to ϵ(q→∞)=1 again, reflecting the lack of dielectric screening at small distances. Apparently, our model overestimates the effect of screening in the limit q≫ 1/d. This is a consequence of using a constant background dielectric constant, which is inappropriate for large q-values. Indeed, choosing background dielectric constants ϵ_∥=ϵ_⊥=1 in our model and lumping the background contributions into a linear coefficient r_tot=r+r_0 instead, the monolayer dielectric function is in good agreement with both Refs. latini2015 and qiu2016. On the other hand, our model system is in good agreement with the findings in Refs. <cit.> in the region q≲ 1/d, relevant for excitons in the Wannier limit, and produces the correct bulk limit if the number of layers is increased. In contrast, lumping the background contributions into the linear increase in the small-q region produces a wrong bulk limit κ^L(q→ 0)=√(1+2r/L), which applies to bulk (L=D) as well as to a supercell calculation with supercell period L.

§ REFERENCES

[beal1972] A. R. Beal, J. C. Knights, and W. Y. Liang, J. Phys. C: Solid State Phys. 5, 3540 (1972).
[bordas1973] J. Bordas and E. A. Davis, Phys. Status Solidi B 60, 505 (1973).
[neville1976] R. A. Neville and B. L. Evans, Phys. Status Solidi B 73, 597 (1976).
[fortin1975] E. Fortin and F. Raga, Phys. Rev. B 11, 905 (1975).
[anedda1979] A. Anedda, E. Fortin, and F. Raga, Can. J. Phys. 57, 368 (1979).
[anedda1980] A. Anedda and E. Fortin, J. Phys. Chem. Solids 41, 865 (1980).
[kuc2011] A. Kuc, N. Zibouche, and T. Heine, Phys. Rev. B 83, 245213 (2011).
[yun2012] W. S. Yun, S. W. Han, S. C. Hong, I. G. Kim, and J. D. Lee, Phys. Rev. B 85, 033305 (2012).
[lambrecht2012] T. Cheiwchanchamnangij and W. R. L. Lambrecht, Phys. Rev. B 85, 205302 (2012).
[cappelluti2013] E. Cappelluti, R. Roldán, J. A. Silva-Guillén, P. Ordejón, and F. Guinea, Phys. Rev. B 88, 075409 (2013).
[mak2010] K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010).
[zeng2013] H. Zeng, G.-B. Liu, J. Dai, Y. Yan, B. Zhu, R. He, L. Xie, S. Xu, X. Chen, W. Yao, and X. Cui, Sci. Rep. 3, 1608 (2013).
[chernikov2014] A. Chernikov, T. C. Berkelbach, H. M. Hill, A. Rigosi, Y. Li, and O. B. Aslan, Phys. Rev. Lett. 113, 076802 (2014).
[he2014] K. He, N. Kumar, L. Zhao, Z. Wang, K. F. Mak, H. Zhao, and J. Shan, Phys. Rev. Lett. 113, 026803 (2014).
[ye2014] Z. Ye, T. Cao, K. O'Brien, H. Zhu, X. Yin, Y. Wang, S. G. Louie, and X. Zhang, Nature 513, 214 (2014).
[zhu2014] B. Zhu, X. Chen, and X. Cui, Sci. Rep. 5, 9218 (2015).
[novoselov2016] K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. Castro Neto, Science 353, aac9439 (2016).
[dong2017] R. Dong and I. Kuljanishvili, J. Vac. Sci. Technol. B 35, 030803 (2017).
[xiao2012] D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Phys. Rev. Lett. 108, 196802 (2012).
[stroucken2017] T. Stroucken and S. W. Koch, in Optical Properties of Graphene, edited by R. Binder (World Scientific, Singapore, 2017), Chap. 2, pp. 43-84.
[ye2015] M. Ye, D. Winslow, D. Zhang, R. Pandey, and Y. K. Yap, Photonics 2, 288 (2015).
[mattheiss1973] L. F. Mattheiss, Phys. Rev. B 8, 3719 (1973).
[rodin2013] A. S. Rodin and A. H. Castro Neto, Phys. Rev. B 88, 195437 (2013).
[ghosh2013] R. K. Ghosh and S. Mahapatra, IEEE J. Electron Devices Soc. 1, 175 (2013).
[haugkoch2009] H. Haug and S. W. Koch, Quantum Theory of the Optical and Electronic Properties of Semiconductors, 5th ed. (World Scientific, Singapore, 2009).
[stroucken2015] T. Stroucken and S. W. Koch, J. Phys.: Condens. Matter 27, 345003 (2015).
[keldysh1979] L. V. Keldysh, JETP Lett. 29, 658 (1979).
[cudazzo2011] P. Cudazzo, I. V. Tokatly, and A. Rubio, Phys. Rev. B 84, 085406 (2011).
[pulci2012] O. Pulci, P. Gori, M. Marsili, V. Garbuio, R. Del Sole, and F. Bechstedt, EPL 98, 37004 (2012).
[berkelbach2013] T. C. Berkelbach, M. S. Hybertsen, and D. R. Reichman, Phys. Rev. B 88, 045318 (2013).
[wu2015] F. Wu, F. Qu, and A. H. MacDonald, Phys. Rev. B 91, 075310 (2015).
[latini2015] S. Latini, T. Olsen, and K. S. Thygesen, Phys. Rev. B 92, 245123 (2015).
[qiu2016] D. Y. Qiu, F. H. da Jornada, and S. G. Louie, Phys. Rev. B 93, 235435 (2016).
[komsa2012] H. P. Komsa and A. V. Krasheninnikov, Phys. Rev. B 86, 241201 (2012).
[shi2013] H. Shi, H. Pan, Y. W. Zhang, and B. I. Yakobson, Phys. Rev. B 87, 155304 (2013).
[rasmussen2015] F. A. Rasmussen and K. S. Thygesen, J. Phys. Chem. C 119, 13169 (2015).
[zhou2015] J. Zhou, W.-Y. Shan, W. Yao, and D. Xiao, Phys. Rev. Lett. 115, 166803 (2015).
[ugeda2014] M. M. Ugeda, A. J. Bradley, S.-F. Shi, F. H. da Jornada, Y. Zhang, D. Y. Qiu, W. Ruan, S.-K. Mo, Z. Hussain, Z.-X. Shen, F. Wang, S. G. Louie, and M. F. Crommie, Nat. Mater. 13, 1091 (2014).
[wang2015] G. Wang, X. Marie, I. Gerber, T. Amand, D. Lagarde, L. Bouet, M. Vidal, A. Balocchi, and B. Urbaszek, Phys. Rev. Lett. 114, 097403 (2015).
[kioseoglou2012] G. Kioseoglou, A. T. Hanbicki, M. Currie, A. L. Friedman, D. Gunlycke, and B. T. Jonker, Appl. Phys. Lett. 101, 221907 (2012).
[mak2013] K. F. Mak, K. He, C. Lee, G. H. Lee, J. Hone, T. F. Heinz, and J. Shan, Nat. Mater. 12, 207 (2013).
[mitioglu2016] A. A. Mitioglu, K. Galkowski, A. Surrente, L. Klopotowski, D. Dumcenco, A. Kis, D. K. Maude, and P. Plochocka, Phys. Rev. B 93, 165412 (2016).
[saigal2016] N. Saigal, V. Sugunakar, and S. Ghosh, Appl. Phys. Lett. 108, 132105 (2016).
http://arxiv.org/abs/1709.09056v1
{ "authors": [ "Lars Meckbach", "Tineke Stroucken", "Stephan W. Koch" ], "categories": [ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mes-hall", "published": "20170926143555", "title": "Influence of the effective layer thickness on the groundstate and excitonic properties of transition-metal dichalcogenide systems" }
Dept. of Physics, Pohang University of Science and Technology, Pohang 790-784, Korea E-mail: [email protected] September 26, 2017 § ABSTRACT This paper focuses on the motion of a test particle moving around the Reissner-Nordström black hole. It deals with circular motion and radial motion of neutral massive test particles, and briefly treats circular motion of charged massive test particles. Both neutral and charged particles are affected by the black hole's charge, but this is because the charge of the black hole bends the spacetime more strongly; this effect has nothing to do with electromagnetic interactions, which are only considered for charged test particles. However, the paper only treats mathematically simple, approximate situations; general and more complex motions will not be discussed. This paper tries to obtain physical information using only the simplest mathematical tools and without the more difficult concepts that general relativity contains. Its contents would be suitable for those who want to know something about the Reissner-Nordström black hole but do not have much knowledge in this field. They can begin their intellectual journey with this paper. Motion of a Test Particle in the Reissner-Nordström Spacetime Moonju Hong December 30, 2023 ============================================================= § INTRODUCTION The Reissner-Nordström spacetime is one of the vacuum solutions of the Einstein equation, although it is generally considered unphysical since it is static and charged. On the scale of stars this is hardly probable, because the particles accumulated to create the black hole have their own angular momentum, and charges will attract opposite charges until the black hole becomes neutral. This spacetime, however, can be treated with a somewhat simpler mathematical approach than the Kerr or Kerr-Newman spacetimes and may still offer some physical insight. In this paper we will see how test particles moving in the Reissner-Nordström spacetime behave in the simplest cases: circular motion and radial motion. To begin with, the Reissner-Nordström metric has the form ds^2 = -(1-r_S/r+r_Q^2/r^2)c^2 dt^2 + (1-r_S/r+r_Q^2/r^2)^{-1}dr^2 + r^2 dΩ^2, (1) where r_S is proportional to the mass of the black hole and r_Q is proportional to the net charge of the black hole.<cit.> We will ignore the additional term given by magnetic charges throughout this paper (letting P=0). This metric form ensures spherical symmetry and tells us that the spacetime depends on the net charge and mass of the star. Because of its peculiar properties caused by electromagnetic interactions, the Reissner-Nordström black hole shows quite different features from Schwarzschild or Kerr black holes. One of them will be discussed here and is depicted in FIG. 3. There have been some approaches to gain physical results from this kind of black hole, since its static property makes it an easy choice to start an intellectual journey; see, for example, Ruffini (2005)<cit.>. § I. NEUTRAL PARTICLE WITH CIRCULAR ORBIT ON EQUATORIAL PLANE In this case, since the particle is neutral, there is no electromagnetic interaction between the test particle and the Reissner-Nordström black hole. Spherical symmetry of the Reissner-Nordström metric ensures that the motion remains on the equatorial plane, described by (t, r, π/2, ϕ) with θ̇ = 0.
With these initial conditions, the geodesic equation for μ = 0 yields d^2 t/dτ^2 + 2 Γ^0_{10}(dt/dτ)(dr/dτ) = 0, which simplifies to d/dτ[(1-r_S/r+r_Q^2/r^2)ṫ] = 0. (2) This expresses the conserved energy for massless particles, or the conserved energy per unit mass for massive particles:<cit.> (1-r_S/r+r_Q^2/r^2)ṫ = E = const. (3) For μ = 3, the equation gives ϕ̈ + (2/r)ṙϕ̇ = 0, which becomes d/dτ[r^2 ϕ̇] = 0. (4) By the same reasoning, this expresses the conserved angular momentum (for massless particles) or the conserved angular momentum per unit mass (for massive particles): r^2 ϕ̇ = L = const. (5) Instead of working out the μ = 1 equation, we use the Reissner-Nordström metric (1). Dividing both sides by dτ^2 and substituting Eqs. (3) and (5), we obtain for massive test particles 1 = E^2/(1-r_S/r+r_Q^2/r^2) - (L^2/(c^2 r^4))(1-r_S/r+r_Q^2/r^2)^{-1}(dr/dϕ)^2 - L^2/(c^2 r^2). Rearranging terms, following a procedure similar to that presented by L. Ryder<cit.>, we finally arrive at the equation d^2/dϕ^2(1/r) + 1/r = r_S c^2/(2L^2) - r_Q^2 c^2/(rL^2) + 3r_S/(2r^2) - 2r_Q^2/r^3. (6) This is the equation that describes the motion of a massive, neutral test particle on the equatorial plane of the Reissner-Nordström black hole. §.§ Stable, Circular Orbit Motion For further discussion, it would be difficult to handle all possible motions of these test particles on the equatorial plane. Thus, we focus on a specific case, the circular orbit of a neutral, massive test particle. To find a stable circular orbit radius we start from the metric equation, Eq. (1). Rearranging terms so that the conserved energy E appears on the RHS and the other terms on the LHS, it becomes 1/2 ṙ^2 + 1/2 (1-r_S/r+r_Q^2/r^2)(L^2/r^2+1) = E^2/2 ≡ ϵ. (7) This has the form of an energy conservation equation. In this analogy, the effective potential governing the motion is V(r) = 1/2 (1-r_S/r+r_Q^2/r^2)(L^2/r^2+1). Let the circular orbit radius be r_C. For the orbit to be circular, this potential must satisfy dV/dr|_{r=r_C} = 0. Satisfying this condition yields a polynomial equation for r_C: r_S - 2(r_Q^2 + L^2)/r_C + 3r_S L^2/r_C^2 - 4r_Q^2 L^2/r_C^3 = 0. (8) If the test particle were massless, the terms of Eq. (8) that do not carry a factor of L^2 would be absent, resulting in the radii r_C = (3r_S ± √(9r_S^2 - 32r_Q^2))/4. Especially for the extremal Reissner-Nordström black hole, where r_S = 2r_Q, we have the simple radii r_C = 2r_Q or r_Q, with r_Q being the horizon. Unlike this massless case, Eq. (8) for massive particles is a cubic equation. To make things simple, we first try the special case r_C ≫ r_Q and check afterwards whether this was a reasonable approach. In this limiting case, the last term in Eq. (8) vanishes, so we solve the quadratic equation r_S r_C^2 - 2(r_Q^2 + L^2)r_C + 3r_S L^2 = 0. Solving this quadratic equation gives one inner, unstable orbit and another farther, stable orbit. The innermost stable orbit is therefore reached when the two orbits coincide, i.e. when the discriminant of this quadratic equation is zero. Again supposing r_S ≫ r_Q, the innermost circular orbit radius is given by r_C ≈ 3r_S - r_Q^2/r_S. (9) This result is compatible with the earlier restriction r_C > r_S ≫ r_Q. Thus, we have found the radius of the innermost stable circular orbit of a massive particle moving around a weakly charged Reissner-Nordström black hole.
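The estimate of Eq. (9) can be checked numerically. The sketch below (our own, using units with c = 1 and lengths in units of r_S) exploits the fact that the innermost stable circular orbit is a double root of the circular-orbit condition, i.e. Eq. (8) and its derivative with respect to r vanish simultaneously.

```python
import numpy as np
from scipy.optimize import fsolve

def isco_conditions(x, rS, rQ):
    r, L = x
    # Eq. (8) multiplied by r^3, and its derivative with respect to r
    g = rS * r**3 - 2 * (rQ**2 + L**2) * r**2 + 3 * rS * L**2 * r - 4 * rQ**2 * L**2
    dg = 3 * rS * r**2 - 4 * (rQ**2 + L**2) * r + 3 * rS * L**2
    return [g, dg]

rS = 1.0
for rQ in [0.0, 0.1, 0.2, 0.3]:
    r_isco, L_isco = fsolve(isco_conditions, [3.0, np.sqrt(3.0)], args=(rS, rQ))
    print(f"r_Q = {rQ:3.1f} r_S:  r_ISCO = {r_isco:6.4f} r_S   (Eq. (9): {3*rS - rQ**2/rS:6.4f} r_S)")
```

For r_Q = 0 this reproduces the Schwarzschild value r_ISCO = 3 r_S, and for small r_Q the numerical root tracks the approximation of Eq. (9).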
Compared with the Schwarzschild black hole, the stable circular orbit around the Reissner-Nordström black hole moves a little inwards, towards the horizon. This result also reproduces the Schwarzschild stable orbit, r = 3r_S, in the limit r_Q → 0, or more correctly, when Q → 0, as desired. It complies with the more precisely calculated radius provided by Das, Sk, and Ghosh<cit.>, r_C = 3r_S - 3r_Q^2/r_S, up to a coefficient that may depend on the approximation procedure. In the case of a strongly charged Reissner-Nordström black hole, we cannot apply the same approximation, so one has to solve Eq. (8) directly. Since the potential has the general shape shown in FIG. 1, there is one stable circular orbit for the given potential, and the minimum radius is given by Eq. (9). In the regime used to obtain Eq. (9) there is another solution; however, it lies inside the horizon. Such additional solutions are not physical. §.§ Precession Motion We finish this section by discussing the precession of an orbit. In general the orbit is not circular but, more probably, elliptical. Before treating the precession in a general relativistic way, we first consider the familiar Newtonian elliptical orbit. It is well known that in Newtonian language, d^2/dϕ^2(1/r) + 1/r = 1/p = r_S c^2/(2L^2). Here, the quantity p is the semi-latus rectum of the ellipse, given by p = a_0(1-e^2), with the semi-major axis a_0 and the eccentricity e. This Newtonian equation has the solution 1/r = (1/p)(1+e cosϕ). (10) Now, we go back to the relativistic equation of motion on the equatorial plane, d^2/dϕ^2(1/r) + 1/r = r_S c^2/(2L^2) - r_Q^2 c^2/(rL^2) + 3r_S/(2r^2) - 2r_Q^2/r^3. (6) The first term on the RHS is the Newtonian term. To solve this equation, we substitute Eq. (10) into Eq. (6) and neglect terms of order higher than e^2 (assuming that e ≪ 1). Further assuming that r_S and r_Q are of the same order and that [(r_S c)/L]^2 ≪ 1, the equation turns into d^2/dϕ^2(1/r) + 1/r ≈ (r_S c^2/2L^2)[1 - (r_Q^2 c^2/L^2)e cosϕ + (3r_S^2 c^2/2L^2)e cosϕ - (3r_Q^2 r_S^2 c^4/2L^4)e cosϕ]. In this approximation, the last term can be disregarded. This is a differential equation whose first part we have already solved (it is the Newtonian solution), while the remaining resonant terms produce a contribution containing ϕ sinϕ. Thus, the total solution of Eq. (6), within these approximations, is 1/r = (r_S c^2/2L^2)(1+e cosϕ) + (r_S c^4/4L^4)(3r_S^2/2-r_Q^2)eϕ sinϕ. Because of the cosmic censorship hypothesis, the condition r_S ≥ 2r_Q must hold in order not to produce a naked singularity, so the term (3r_S^2/2-r_Q^2) in the above solution is always positive. The approximation [(r_S c)/L]^2 ≪ 1 enables us to combine the two trigonometric terms into one: 1/r = (r_S c^2/2L^2)[1+e cos{ϕ(1- (c^2/2L^2)(3r_S^2/2-r_Q^2))}], (11) with minor terms of order higher than (r_S/r)^2 ignored. This solution describes the precession of an elliptical orbit with δϕ = (π c^2/L^2)(3r_S^2/2-r_Q^2) = (3π r_S/(a_0(1-e^2)))[1-(2/3)(r_Q/r_S)^2]. (12) To see its effect, let us imagine an imaginary star which has the same properties as the Sun, except that this imaginary star carries a net charge Q. For the Sun and Mercury, the parameters are r_S = 2.95325008×10^3 m, a_0 = 5.7909175×10^{10} m, and e = 0.20563069.<cit.> The cumulative effect over 100 Earth years then becomes δϕ_100 ≈ 43.03″[1-(2/3)(r_Q/r_S)^2] = 43.03″[1-(2/3)(Q^2/5.9×10^{10})].
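The number quoted above can be reproduced with a short evaluation of Eq. (12). This is a sketch only; Mercury's orbital period of 87.969 days is an extra input that is not quoted in the text.

```python
import numpy as np

r_S = 2.95325008e3        # m, Schwarzschild radius of the Sun
a0 = 5.7909175e10         # m, semi-major axis of Mercury
e = 0.20563069            # eccentricity
period_days = 87.969      # Mercury's orbital period (assumed value)
orbits_per_century = 100 * 365.25 / period_days

def precession_per_century(rQ_over_rS):
    dphi = 3 * np.pi * r_S / (a0 * (1 - e**2)) * (1 - (2.0 / 3.0) * rQ_over_rS**2)
    return dphi * orbits_per_century * (180 / np.pi) * 3600   # arcseconds per century

for x in [0.0, 0.25, 0.5]:         # r_Q / r_S, with 0.5 the extremal case
    print(f"r_Q/r_S = {x:4.2f}:  {precession_per_century(x):6.2f} arcsec per century")
```

For r_Q = 0 this gives the familiar value of about 43 arcseconds per century, and even the extremal case only lowers it by one sixth.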
Just as FIG. 2 shows, the effect of the net charge becomes macroscopic only after the planet has revolved around the star countless times. * * * What these two results, the change of the circular orbit and the change of the precession, tell us is this: even neutral test particles are affected by the net charge of the black hole, thus giving different consequences for the Schwarzschild and Reissner-Nordström spacetimes. The presence of charge affects the spacetime and curves it more strongly than in the neutral case of equal mass. The more charge it acquires, the more the spacetime is curved, just as with mass. As expected, not only mass and angular momentum but also charge is an obvious hair for black holes. § II. NEUTRAL MASSIVE PARTICLE WITH RADIAL MOTION Without loss of generality, radial motion of the test particle can be chosen to satisfy the conditions r_0 = R, dr/dt|_{t=0} = 0, θ = π/2, ϕ = 0, θ̇ = ϕ̇ = 0. Eq. (3) still holds, but the angular terms in Eq. (1) vanish: ds^2 = -(1-r_S/r+r_Q^2/r^2)c^2 dt^2 + (1-r_S/r+r_Q^2/r^2)^{-1}dr^2. (13) Under these conditions, we first obtain t, the time of an observer far away from the origin, as a function of r. Substituting ṙ = (dr/dt)ṫ into Eq. (13) and rearranging terms, we have [c^2(1-r_S/r+r_Q^2/r^2) - (1-r_S/r+r_Q^2/r^2)^{-1}(dr/dt)^2]ṫ^2 = c^2. Applying the initial conditions to this equation gives dt/dτ|_R = 1/√(1-r_S/R+r_Q^2/R^2). Since Eq. (3) is still valid, we obtain E = √(1-r_S/R+r_Q^2/R^2) and dt/dτ = √(1-r_S/R+r_Q^2/R^2)/(1-r_S/r+r_Q^2/r^2). With these results, we finally arrive at the relation between the time t and the radial position r, ct = -√(1-r_S/R+r_Q^2/R^2) ∫_R^r dx/[(1-r_S/x+r_Q^2/x^2)√(r_S/x-r_Q^2/x^2-r_S/R+r_Q^2/R^2)]. (14) This integral cannot be solved easily, so we again treat a special case. Taking the limits R →∞ (one could keep R finite, but this simplifies things) and r_S → 2r_Q (the extremal approximation), the integral reduces to ct = √(2r/r_Q-1)/(r-r_Q). (15) When the Reissner-Nordström black hole is extremal, r_S = 2r_Q, the outer event horizon and the inner Cauchy horizon coincide at r=r_Q. Therefore t →∞ as r → r_Q, as desired, since for the far away observer an object falling into the black hole never crosses the horizon. This does not create any physical problem, however, because from the point of view of the infalling observer it takes a finite time to cross the horizon. That is, the proper time integral cτ = -∫_R^r dx/√(E^2 - (1-r_S/x+r_Q^2/x^2)) (16) remains finite even when r is inside the horizon, although it is hard to obtain a general closed formula. In calculating the proper time, one should note that the denominator of Eq. (16) vanishes when r = (-r_S + √(r_S^2 + 4r_Q^2(E^2-1)))/(2(E^2-1)); at this radius the radial motion turns around. What happens here is well described in Carroll's textbook<cit.>: after the infalling observer passes the outer horizon r_+, he must also pass the inner horizon r_-. When he crosses the inner horizon, however, the r coordinate becomes spacelike again, so he can go back to the inner horizon and cross it from the inside to the outside. The r coordinate then becomes timelike once more, but with its orientation reversed, so he is forced to move along the increasing-r path, thereby passing the outer horizon. Finally, he is released from the black hole, and then again starts to feel the attraction towards the black hole. In this way, the observer can oscillate back and forth around the Reissner-Nordström black hole's outer horizon. A paradox arises, however, because the far away observer never sees the infalling observer crossing the outer horizon.
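Before continuing with this paradox, the contrast between Eqs. (14) and (16) can be made concrete numerically. The sketch below (our own, with c = 1, lengths in units of r_S, and a release point at a finite radius R rather than the R → ∞ extremal limit of Eq. (15)) shows the coordinate time t diverging as r approaches the outer horizon while the proper time τ stays finite.

```python
import numpy as np
from scipy.integrate import quad

r_S, r_Q = 1.0, 0.3
R = 10.0                                                # release radius, at rest
f = lambda r: 1 - r_S / r + r_Q**2 / r**2
E2 = f(R)                                               # E^2 for a particle at rest at R
r_plus = 0.5 * (r_S + np.sqrt(r_S**2 - 4 * r_Q**2))     # outer horizon

def coordinate_time(r):
    integrand = lambda x: 1.0 / (f(x) * np.sqrt(E2 - f(x)))
    return quad(integrand, r, R - 1e-9)[0]              # Eq. (14)

def proper_time(r):
    integrand = lambda x: 1.0 / np.sqrt(E2 - f(x))
    return quad(integrand, r, R - 1e-9)[0]              # Eq. (16)

for r in [2.0, 1.2, r_plus + 1e-2, r_plus + 1e-4]:
    print(f"r = {r:8.4f} r_S :  t = {coordinate_time(r):10.3f},  tau = {proper_time(r):7.3f}")
```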
When the infalling observer crosses the outer horizon from the inside to the outside, the far away observer would find two copies of the same person: one still falling in, the other coming out. To remedy this paradox, it was suggested that when the infalling observer is released from the outer horizon, he is no longer in the same universe he lived in while he was falling. In the case of the Schwarzschild black hole, the infalling observer cannot escape but falls into and collides with the singularity. On the other hand, an observer diving into the Reissner-Nordström black hole can go to other universes as well as escape the black hole. This is another main difference between the two types of black holes, and it can be inferred from the radial motion of the massive particle. § III. CHARGED PARTICLE WITH CIRCULAR ORBIT ON EQUATORIAL PLANE We finally deal with the motion of a charged test particle. First, when the particle is far away from the Reissner-Nordström black hole, the black hole appears as a point charge with mass. This can be shown easily from the Reissner-Nordström black hole's properties: F_{tr} = -F_{rt} = -r_Q/r^2, with all other components vanishing.<cit.> The electromagnetic tensor F^{μν} is antisymmetric, with independent (upper-triangular) components (F^{01}, F^{02}, F^{03}; F^{12}, F^{13}; F^{23}) = (E_r, rE_θ, r sinθ E_ϕ; -rB_ϕ, r sinθ B_θ; -r^2 sinθ B_r), which here reduces to the single nonvanishing pair F^{tr} = -F^{rt} = r_Q/r^2. Noting that r_Q ∼ Q, we recover the expected classical electric field produced by a static charge Q. In the neutral particle case, however, the innermost stable circular orbit was located at r_C ≈ 3r_S. This is close enough to the black hole that one cannot expect the classical result to apply. We again follow the procedure of Section I. For a stable circular orbit, since the radius remains constant, its derivative automatically vanishes. Then d^2(1/r_C)/dϕ^2 = 0 results in 0 = (1-r_Q qE/r_S) - (2r_Q^2/(r_S r_C))(1-q^2+L^2/(r_Q^2 c^2)) + 3L^2/(c^2 r_C^2) - 4r_Q^2 L^2/(r_S c^2 r_C^3). (17) Also, the fact that the stable orbit occurs at the inflection point of the orbital equation requires the derivative of Eq. (17) to be zero. These two conditions, written in powers of r_C, are 0 = (1-Λ)r_C^3 - (2r_Q^2/r_S)(1-q^2+L^2/(r_Q^2 c^2))r_C^2 + (3L^2/c^2)r_C - 4r_Q^2 L^2/(r_S c^2), (18) 0 = 3(1-Λ)r_C^2 - (4r_Q^2/r_S)(1-q^2)r_C - (4L^2/(r_S c^2))r_C + 3L^2/c^2, (19) where Λ ≡ r_Q qE/r_S. We now solve these two equations, supposing that (r_Q/r_S)^2 ≪ 1, so that the last term in Eq. (18) vanishes. The approximate answer is r_C ≈ (3/2)r_S[1+ √(1- (4r_Q^2/r_S^2)(1-(2/3)Λ - (1/3)q^2)/(1-Λ))]. When the charge of the particle is small enough, this can be roughly stated as r_C ≈ 3r_S - (3r_Q^2/r_S)(1+r_Q E/(r_S q)). (20) This is the solution presented, in a more rigorous way, by Das et al.<cit.> Note that the coefficient of q is vanishingly small, since we assumed that (r_Q/r_S)^2 ≪ 1. What is interesting here is that when q=0, the neutral test particle solution is retrieved. Moreover, this result also tells us that the net charge of the particle can be translated into an increased or decreased charge of the Reissner-Nordström black hole. Depending on the sign of the test charge, the innermost circular orbit becomes closer or farther. Not content with the circular case only, one can solve the equation d^2/dϕ^2(1/r) + 1/r = r_S c^2/(2L^2) - r_Q^2 c^2/(rL^2) + 3r_S/(2r^2) - 2r_Q^2/r^3 + (r_Q qc^2/L^2)(r_Q q/r - (1/2)E) (21) to see the full picture. This is Eq. (6) plus additional terms due to the electromagnetic interaction between the black hole and the test particle.
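For completeness, Eqs. (18) and (19) can also be solved simultaneously without the small-r_Q expansion. The sketch below takes the two equations exactly as printed above, treats the particle charge q and the energy E (and hence Λ = r_Q qE/r_S) as given inputs, and uses units with c = 1 and lengths in units of r_S; whether these are the intended units for q is an assumption on our part.

```python
import numpy as np
from scipy.optimize import fsolve

def charged_orbit_conditions(x, rS, rQ, q, E):
    rC, L = x
    lam = rQ * q * E / rS
    eq18 = ((1 - lam) * rC**3
            - (2 * rQ**2 / rS) * (1 - q**2 + L**2 / rQ**2) * rC**2
            + 3 * L**2 * rC - 4 * rQ**2 * L**2 / rS)
    eq19 = (3 * (1 - lam) * rC**2
            - (4 * rQ**2 / rS) * (1 - q**2) * rC
            - (4 * L**2 / rS) * rC + 3 * L**2)
    return [eq18, eq19]

rS, rQ, E = 1.0, 0.1, 0.95
for q in [-0.1, 0.0, 0.1]:
    rC, L = fsolve(charged_orbit_conditions, [3 * rS, np.sqrt(3) * rS], args=(rS, rQ, q, E))
    print(f"q = {q:+.1f}:  r_C = {rC:6.4f} r_S,  L = {L:6.4f} r_S")
```

At q = 0 the two conditions collapse to the neutral-particle equations of Section I, so the q = 0 row reproduces the neutral innermost stable orbit, while positive and negative q shift it in opposite directions.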
§ CONCLUSION This paper has concentrated on the motion of a massive test particle moving near the Reissner-Nordström black hole. First, we treated neutral test particles. Although they are neutral, so that there should be no electromagnetic interaction, it turns out that even neutral test particles move differently than in the Schwarzschild case. This is because the presence of charge curves the spacetime just as mass and energy do; it should be noted, however, that the electromagnetic interaction itself has nothing to do with curvature, just as gravity is the curved spacetime itself. When the Schwarzschild black hole starts to acquire net charge, it becomes a Reissner-Nordström black hole and begins to curve spacetime more strongly than before. A neutral test particle can follow a circular orbit around the black hole. The innermost stable circular orbit it can have differs between the two black holes. For the Schwarzschild black hole, the radius is determined only by r_S, because that is the only hair it has, while the orbit radius around the Reissner-Nordström black hole depends on both r_S and r_Q and moves closer in, as expected. Second, we paid attention to the fact that the Schwarzschild metric is used to explain the precession of the perihelion of Mercury. From this, we imagined a star which has the same properties as the Sun except for its net charge, and calculated how it affects Mercury. This is depicted in FIG. 2, and we found that the effect is quite small, so one can notice the difference only after Mercury has revolved a tremendous number of times, even for the extremal case, which has the most drastic effect. After that, the radial motion of a neutral test particle was considered. As for all other kinds of black holes, a person falling into the Reissner-Nordström black hole never crosses the event horizon in the view of the far away observer. However, another observer falling together with the infalling person surely sees him take only a finite time to pass the horizon. The radial motion also implies a weird characteristic of the Reissner-Nordström black hole: even though it carries the name 'black hole', a person who has already fallen in can go back outside the event horizon; however, he will then be in another universe. This is quite an interesting story, although Reissner-Nordström black holes are not realistic objects in the universe. Finally, the charged test particle was handled, but it was not discussed in depth, since it is difficult to insert electromagnetic interactions into the Reissner-Nordström metric and develop the theory. It was shown that the existence of a net charge of the test particle can be translated into an increased or decreased charge of the black hole. Also, the innermost circular orbit radius approaches that of the neutral particle when we let the charge of the particle go to zero. Since Reissner-Nordström black holes distinguish themselves from other kinds of black holes by their charge, the most dramatic and amusing effects come from their interaction with the charge of the test particle. Because of the mathematical difficulty, however, this paper has been satisfied with only the simplest cases. Whoever wants to go further and see what happens could solve Eq. (21). For example, this equation can teach us how electron beams produced by supernovae behave when they pass near the Reissner-Nordström black hole. Gravity only works as a convex lens for all kinds of matter, but the Reissner-Nordström black hole might be able to scatter charged particles, working as a concave lens.
Also, because charged particles emit electromagnetic waves when they are accelerated, their orbits will not remain stable; in this way one can study how such a particle falls into the Reissner-Nordström black hole. Although this paper has aimed only at some simple physical results in the simplest cases, general orbits of charged test particles are well described in the paper of Grunau and Kagramanova.<cit.> That paper is strongly recommended for those who want to see particles' exact behavior not in equations, but in figures. Also, the work of Das et al., which has been cited throughout this paper, shows how neutral and charged test particles move in a much more rigorous way. § REFERENCES
[metric] C. Misner, K. Thorne, and J. Wheeler, Gravitation, 1st ed. (W. H. Freeman and Company, 2000), p. 921.
[rufin] R. Ruffini, Charges in gravitational fields: from Fermi, via Hanni-Ruffini-Wheeler, to the "electric Meissner effect" (unpublished, 2005), arXiv:gr-qc/0503439.
[Killing] S. Carroll, Spacetime and Geometry: An Introduction to General Relativity, new international ed. (Pearson, 2014), pp. 207-208.
[ryder] L. Ryder, Introduction to General Relativity, 1st ed. (Cambridge University Press, 2009), pp. 158-160.
[stable] P. Das, R. Sk, and S. Ghosh, Motion of charged particle in Reissner-Nordström spacetime: A Jacobi metric approach (unpublished, 2017), p. 7, arXiv:1609.04577 [gr-qc].
[Allen] C. Allen, Astrophysical Quantities, 4th ed., edited by A. N. Cox (Springer-Verlag, New York, 2000).
[carroll] S. Carroll, Spacetime and Geometry: An Introduction to General Relativity, new international ed. (Pearson, 2014), pp. 257-259.
[wheeler] C. Misner, K. Thorne, and J. Wheeler, Gravitation, 1st ed. (W. H. Freeman and Company, 2000), p. 877.
[ans] P. Das, R. Sk, and S. Ghosh, Motion of charged particle in Reissner-Nordström spacetime: A Jacobi metric approach (unpublished, 2017), p. 12, arXiv:1609.04577 [gr-qc].
[orbit] S. Grunau and V. Kagramanova, Geodesics of electrically and magnetically charged test particles in the Reissner-Nordström spacetime: analytical solutions, Phys. Rev. D 83, 044009 (2011).
http://arxiv.org/abs/1709.08978v2
{ "authors": [ "Moonju Hong" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170926123924", "title": "Motion of a Test Particle in the Reissner-Nordstrom Spacetime" }
§ INTRODUCTION The term "Surgery" comes from the Greek "Kheirourgia", which means handiwork. Despite numerous technological improvements over the last centuries, surgery remains manual work in which surgeons perform complex tasks using their hands and surgical instrumentation. As it is not yet possible to retrieve the view as seen directly by the surgeon, numerous works use video cameras to record the entire surgical scene. Such a solution is applicable to training medical students using "first-person" view cameras <cit.>, or more commonly to Medical Augmented Reality, where another modality (intraoperative or preoperative) is overlaid on the video to give context to the medical data. Having the hands and instrumentation positioned in the field of action inherently implies occlusion of the surgical scene and of the anatomy being treated. This is true both from the surgeon's viewpoint and from the viewpoint of any imaging modality. It would be advantageous to have a solution that displays to the surgeon any occluded region of interest without losing the information about the action that is given by the positions of the hands and surgical instruments. Introducing transparency links the problem to the Diminished Reality field of study. Such an application would then combine Diminished Reality with Augmented Reality, providing a Mixed Reality visualization. §.§ Related Works Medical Augmented Reality can be classified into 2 main categories: preoperative data (CT, MRI) overlaid on intraoperative data (video, X-ray images), or intraoperative data overlaid on intraoperative data stemming from another modality. The first category uses preoperative data to segment 3D models of organs <cit.> or to plan paths/entry points <cit.> that can then be rendered during surgery using video coming from an external camera or an endoscope. The second category uses intraoperative data acquired during surgery to display over another type of intraoperative data, most of the time video. The overlaid intraoperative data can be 3D, such as 3D Freehand SPECT images <cit.>, or 2D, such as X-ray images <cit.>, OCT images <cit.> or ultrasound <cit.>. The Camera Augmented Mobile C-arm by Navab et al. <cit.> was the first Augmented Reality device to enter an Operating Room and has been used on over 40 patients <cit.>. A video camera is placed next to the C-arm source, and a mirror construction fixed under the X-ray source allows the alignment of the optical axes and centers of both modalities such that an exact overlay of X-ray and video is possible. The main drawback of this work is its mirror construction, which restricts the surgical workspace available to the surgeon and requires invasive engineering on the C-arm. Habert et al. <cit.> proposed to augment a C-arm with 2 RGBD cameras placed on the side of the X-ray source. Using the RGBD data, the video image from the X-ray source viewpoint can be synthesized and the X-ray image can be overlaid in a similar fashion to Navab et al. <cit.>. A volumetric reconstruction of the scene is computed using the RGBD data from the 2 cameras, following the principle of the Truncated Signed Distance Field (TSDF) used, for example, by Kinect Fusion <cit.>. Then, the image is synthesized using raytracing from the X-ray source viewpoint.
Knowing that the reconstruction is volumetric and that the 2 RGBD cameras are positioned on the sides of the X-ray source, the cameras provide more information than is actually used during raytracing. Indeed, the raytracing stops at the first voxel representing the surface (where the field is equal to zero). If, instead of stopping at this voxel, the raytracing went further and searched for the second voxel where the field is zero along the ray, a second layer could be synthesized beyond the first layer. Thus, using depth-augmented C-arm technology, this method would allow visualization of several layers. These include the front and back layers, which correspond to any instrument and clinician hand above the patient anatomy, and to the X-ray image plane, respectively. Making the front layer transparent, or even making it disappear, in order to visualize what lies beyond has been studied in Diminished Reality (DR). In contrast to Augmented Reality, where graphics are overlaid on a real scene, DR withdraws or attenuates real elements from a scene. The works in DR can be divided into 3 categories according to the background recovery method: multi-viewpoint, temporal, and inpainting. The temporal methods <cit.> suppose that the camera has seen the scene without the occluder (or at another position) and use this previous information to recover the currently occluded pixels. The inpainting methods recover the occluded part of an image with information from its non-occluded part using patch-based methods <cit.> or combined-pixel methods <cit.>. The multi-viewpoint techniques use additional cameras that can observe the occluded background totally or partially in order to recover it from the occluded viewpoint. Jarusirisawad and Saito <cit.> use perspective warping from the non-occluded cameras to the occluded camera to recover background pixels. More recently, using RGBD cameras, several works <cit.> have generated surface mesh models of the background from one or multiple side cameras. Observing the mesh from the occluded viewpoint requires only a rigid transformation, avoiding distortions due to warping. Sugimoto et al. <cit.> use the 3D geometry to backproject the occluded pixels to the side views and thereby recover them. By design, multi-viewpoint recovery can be used for the stereo-RGBD augmented C-arm, in which 2 RGBD cameras are placed on the side of the X-ray source viewpoint. Instead of using a mesh, the volumetric field can be used. However, to the best of our knowledge, no work in the literature has used a volumetric field such as a TSDF to recover background information. Concerning the visualization of the foreground front layer in combination with the back layer, the most used technique is transparency <cit.>. As explained by Livingston et al. in their review of depth cues for "X-ray" vision augmented reality <cit.>, transparency is indeed the most natural depth cue, as it can be experienced in the real world with transparent objects. §.§ Contribution In this paper, we propose a mixed reality multi-layer visualization of the surgeon hands and surgical instruments using a stereo-RGBD augmented C-arm fluoroscope. This visualization consists of multiple layers which can be blended into one single view along the line of sight of the surgeon, offering different outputs as the blending values are chosen differently.
The front layer, synthesized from the X-ray source viewpoint by the stereo-RGBD augmented C-arm, contains the surgeon hands and surgical instruments; the second layer is the background containing the surgical target (i.e., also synthesized by our algorithm), while the last layer is the X-ray image displaying the anatomy. As any layer can be blended with the others, our visualization proposes, for example, to display transparent hands on the background, on which the X-ray can also be blended. The blending parameters can be chosen on the fly and according to preferences or workflow steps. In summary, this work has the potential to positively impact the following areas:* User-adjustable multiple-layer visualization for Medical Mixed Reality* Improved training of medical students and residents by visualizing multiple layers to better understand surgical instrument positioning and alignment, as opposed to visualizing the global scene using traditional augmented reality methods.* First work in Medical Mixed Reality combining Diminished and Augmented Reality* First use of a volumetric field (TSDF) for Diminished/Mixed Reality§ METHODOLOGY The setup, calibration methods, and image synthesization used in this paper have been previously published by <cit.>. In the interest of brevity, we will not describe the calibration steps, but we will thoroughly describe the synthesization process since it is vital to our Mixed Reality multi-layer visualization contribution. §.§ Setup The setup comprises 2 RGBD cameras (Kinect v2) placed on the side of an X-ray source <ref>. Each RGBD camera outputs a depth image, an infrared image and a wide-angle video image. Their fields of view overlap over the C-arm detector. The Kinect v2 has been chosen because its depth measurement does not interfere with that of a second identical sensor. The depth and video images are recorded using the libfreenect2 library <cit.>. The mapping from depth to video image is provided by the library. The synchronization between images from the two cameras has been performed manually, because two Kinect v2 cameras cannot be used on a single standard computer and are therefore run on two separate computers. As a consequence, every sequence is recorded at a lower framerate than a standard 30 fps video. §.§ Image synthesization Once the system has been calibrated following the steps from <cit.>, the video image from the X-ray viewpoint can be synthesized. First, the origin of the 3D world coordinate space Ω_R ⊂ℝ^3 is positioned at the center of the volumetric grid, around the C-arm intensifier. Knowing the poses of the two RGBD cameras relative to the X-ray source, the projection matrices Π^1 and Π^2 for the 2 RGBD sensors can be computed. The notations relative to the cameras are defined as follows: optical center of the first camera C^1, its depth image I_d^1 and color image I_c^1 (respectively, for the second camera, C^2, I_d^2 and I_c^2). To render the color image from the X-ray source viewpoint, a volumetric TSDF field f_v:Ω_R ⟼ [-1,1] is created, which maps a 3D point x∈Ω_R to a truncated signed distance value. This value is the weighted mean of the truncated signed distance values v^1(x) and v^2(x) computed respectively in the 2 RGBD sensor cameras. Therefore, the field f_v follows Equation <ref>. f_v(x)=w^1(x)v^1(x)+w^2(x)v^2(x)/w^1(x)+w^2(x) where w^1 and w^2 are the weights for each camera. The weights are used to reject truncated signed values according to specific conditions (described in Equation <ref>).
For each camera i∈{1,2}, the weights w^i(x) for each truncated signed value are computed as: w^i(x)={[ 1 if I_d^i(Π^i(x))-||x-C^i|| > -η; 0 else ]. where η is a tolerance on the visibility of x (we use η=6 mm). For each view i∈{1,2}, v^i(x) geometrically represents the difference between the depth value obtained by projecting x into camera i and the distance from x to the optical center C^i of camera i, to which a scaled truncation to the interval [-1,1] is applied. The truncated signed distances v^i(x) are computed according to Equation <ref>. v^i(x)=ϕ(I_d^i(Π^i(x))-||x-C^i||) with ϕ(s)= {[ sgn(s) if |s|/δ>1; s/δ else ]. with δ being a tolerance parameter to handle noise in the depth measurements (δ=2 mm in our method). Alongside the TSDF f_v, we also create a volumetric color field f_c: Ω_R ⟼ [0..255]^3 following Equation <ref>. f_c(x)=w^1(x)I_c^1(Π^1(x))+w^2(x)I_c^2(Π^2(x))/w^1(x)+w^2(x) The scene to synthesize is represented in the volumetric grid by the voxels whose TSDF value is equal to 0. The color image I_c from the X-ray viewpoint is therefore generated by performing raytracing from the X-ray viewpoint on the TSDF field f_v. For every pixel in the image to be synthesized, a ray is traced passing through the X-ray source and the pixel. Raytracing consists of searching, along this ray, for the voxel y closest to the X-ray source that satisfies the condition f_v(y)=0. To speed up this step, the search for this voxel is performed by binary search. Once y has been found, the color f_c(y) is applied to the pixel in the synthesized image I_c. A depth image I_d can be synthesized by calculating the distance between y and the X-ray source. §.§ Multi-Layer Image Generation After the first raytracing step, the video image I_c as seen from the X-ray source viewpoint, as well as its corresponding depth image I_d, are generated. The volumetric TSDF field is a dense representation which contains information about the full 3D space around the C-arm detector, whereas the raytracing stops at the first surface voxel found. Therefore, the TSDF field contains more information than has actually been used so far. Beyond the hands synthesized by the first raytracing, more surface voxels can be present along the ray. This is true especially since the 2 RGBD cameras are placed on the sides of the C-arm, giving additional information from other viewpoints. This situation is illustrated in Figure <ref>, where the background occluded by a hand from the X-ray source viewpoint (the blue point) can be seen by at least one of the 2 cameras. In a TSDF representation, this means that those occluded background voxels also have a TSDF value of zero. To find those additional surface voxels, a modified "second run" raytracing must be performed on the foreground (e.g. surgeon hands or surgical tools). §.§.§ Hand segmentation As a first step, the foreground needs to be segmented from the synthesized video image I_c and depth image I_d. A background model is computed from an initialization sequence of depth images in which no hands or surgical instruments have been introduced yet. An average depth image is created by averaging the depth at every pixel along the initialization sequence. Then, for every new image (with potential hands or surgical instruments present), the depth image I_d is compared to the mean image in order to create a binary mask image I_m. Every pixel whose depth is lower than the average depth minus a margin (3 cm) is classified as foreground and set to white in I_m. If the pixel is classified as background, it is set to black in I_m.
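To make the per-pixel ray search described above concrete, the following Python sketch mimics the binary search for the surface point y along a single ray. It is only an illustration and not the implementation of <cit.>: the continuous toy field f_v_toy (a sphere-shaped TSDF), the tolerance values and all function names are our own assumptions, and a real system would sample the discretized voxel grid instead.

```python
import numpy as np

def raycast_first_zero(f_v, origin, direction, t_min, t_max, n_coarse=256, n_bisect=20):
    """Return the first point along the ray origin + t*direction (t_min <= t <= t_max)
    where the TSDF changes sign from positive (free space) to non-positive (surface)."""
    direction = direction / np.linalg.norm(direction)
    ts = np.linspace(t_min, t_max, n_coarse)
    vals = np.array([f_v(origin + t * direction) for t in ts])
    idx = np.where((vals[:-1] > 0) & (vals[1:] <= 0))[0]   # bracket the first crossing
    if idx.size == 0:
        return None                                        # ray hits no surface
    lo, hi = ts[idx[0]], ts[idx[0] + 1]
    for _ in range(n_bisect):                              # binary-search refinement
        mid = 0.5 * (lo + hi)
        if f_v(origin + mid * direction) > 0:
            lo = mid
        else:
            hi = mid
    return origin + 0.5 * (lo + hi) * direction

def f_v_toy(x, centre=np.array([0.0, 0.0, 400.0]), radius=50.0, delta=2.0):
    """Toy continuous TSDF of a sphere (positive outside, negative inside), in mm."""
    return np.clip((np.linalg.norm(x - centre) - radius) / delta, -1.0, 1.0)

y = raycast_first_zero(f_v_toy, origin=np.zeros(3),
                       direction=np.array([0.0, 0.0, 1.0]), t_min=0.0, t_max=1000.0)
print(y)   # approximately [0, 0, 350]: the surface point closest to the source
```

The coarse sampling brackets the first sign change of the field along the ray, and the bisection loop then refines the crossing, which is what makes the per-pixel search fast.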
This segmentation method is rudimentary compared to dedicated background-subtraction methods; however, the margin allows the background to change shape (within the limit of the margin). A noise removal step is added using morphological opening on the mask image. An example of a scaled depth image and its corresponding mask is shown in Figure <ref>. §.§.§ Second-run raytracing Once the foreground has been segmented, a second raytracing can be performed on the pixels classified as hands or surgical instruments. Instead of beginning the raytracing from the X-ray source viewpoint, the ray search starts at the voxel y found in the first raytracing run plus a margin of 4 cm. This margin ensures that the search does not find a voxel still related to the foreground. The raytracing is then performed forward using binary search, in a similar fashion to the first run of raytracing. As a result, a color image of the background can be synthesized and combined with the color image from the first raytracing run (excluding the segmented foreground pixels), creating a complete background image I_b. §.§.§ Multi-Layer Visualization On top of the background image I_b, the foreground layer extracted from I_c can be overlaid with transparency, as well as the X-ray image I_xray. A multi-layer image I_layers can then be created by blending all the layers according to Equation <ref>. I_layers(p)={[ α I_c(p) +β I_b(p) +γ I_xray(p) if p ∈foreground; (1-δ) I_b(p) +δ I_xray(p) else ]. where (α,β,γ,δ) ∈ [0,1]^4 with α+β+γ=1 are the blending parameters associated with each layer. They can also be seen as weights which emphasize a specific layer during the blending process. The visualization scheme we propose then allows three layers of structures to be observed (displayed in Figure <ref>) according to those parameters. The furthest layer is the X-ray, which can be observed in its totality in the image I_layers with (α,β,γ,δ)=(0,0,1,1). As we get closer to the camera, the next layer is the background structure recovered using the volumetric field. It can be observed with (α,β,γ,δ)=(0,1,0,0). Finally, the front layer comprising the hands and instruments can be observed in the image I_layers using (α,β,γ,δ)=(1,0,0,0). Our visualization scheme allows the different layers (anatomy from the X-ray, background, front layer) to be seen in transparency by choosing blending parameters (α,β,γ,δ) not equal to 0 or 1. The choice of blending values depends on multiple factors such as surgeon preferences, the step in the surgical workflow, and the type of instrument used. It can be changed on the fly during surgery according to such factors. For example, once an instrument has already penetrated the skin, the background no longer needs to be visualized. The transparent hands can be overlaid directly on the X-ray image, skipping the background layer. This scenario corresponds to blending parameters (β,δ)=(0,1), α=1-γ with 0 <γ < 1. With the configuration (α,β,γ,δ)=(1,0,0,1), the visualization consists of fully opaque hands or surgical tools on the X-ray image, giving an output similar to <cit.>, which aimed at obtaining a natural ordering of hands over the X-ray image. As every layer is known at any point in a sequence, the multi-layer visualization can be replayed to medical students and residents, for example with blending parameters other than the ones used in surgery.
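As an illustration of the blending formula above, the sketch below applies it to toy grayscale images. It is not the released code of this work; the array values, the foreground mask and the chosen blending parameters are placeholders.

```python
import numpy as np

def blend_layers(I_c, I_b, I_xray, mask_fg, alpha, beta, gamma, delta):
    """Blend the front layer I_c, recovered background I_b and X-ray image I_xray.
    Foreground pixels use alpha*I_c + beta*I_b + gamma*I_xray (alpha+beta+gamma = 1);
    the remaining pixels use (1-delta)*I_b + delta*I_xray."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    fg = mask_fg.astype(bool)
    I_layers = np.empty_like(I_c, dtype=float)
    I_layers[fg] = alpha * I_c[fg] + beta * I_b[fg] + gamma * I_xray[fg]
    I_layers[~fg] = (1.0 - delta) * I_b[~fg] + delta * I_xray[~fg]
    return I_layers

# toy 2x2 grayscale example with a single foreground pixel (top left)
I_c    = np.array([[200.0, 10.0], [10.0, 10.0]])
I_b    = np.array([[ 80.0, 80.0], [80.0, 80.0]])
I_xray = np.array([[ 30.0, 30.0], [30.0, 30.0]])
mask   = np.array([[1, 0], [0, 0]])

# semi-transparent hand over the recovered background, full X-ray elsewhere
print(blend_layers(I_c, I_b, I_xray, mask, alpha=0.5, beta=0.5, gamma=0.0, delta=1.0))
```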
The students and residents then have full control over the observation of the layers, with the choice to emphasize particular layers of interest for their learning. § RESULTS §.§ Experimental protocol Six sequences have been recorded depicting example scenarios which include both surgeon hands and surgical tools. Both a realistic hand model phantom and a real patient hand are used and positioned on a surgical table. A clinician wearing purple examination gloves introduces partial occlusions randomly into the scene. Sequences 1 and 3 contain the motion of the clinician's open hand above the hand model phantom at 20 cm and 30 cm, respectively. Sequences 2 and 4 contain the motion of the clinician's closed hand above the hand model phantom at 20 cm and 30 cm, respectively. Sequences 3 and 4 also contain incision lines drawn with a marker on the hand model phantom. Finally, Sequences 5 and 6 are recorded with surgical tools above a real patient hand. Sequence 5 includes actions using a surgical hammer aiming for a cross target drawn on the patient hand. Sequence 6 includes a scalpel targeting the same cross. The heights of the surgical instruments above the patient hand vary from 5 cm to 30 cm. §.§ Background recovery For every sequence, the mean percentage of recovered pixels is calculated and indicated in Table <ref>. The natural observation in Table <ref> is that the closer the surgeon hand and surgical tools are to the anatomy, the larger the occlusion in both side cameras will be. This implies a lower percentage of pixels recovered by our algorithm, which is indeed what the results show. Sequences 1 and 2 were recorded with the surgeon hand open (69.3%) and closed (65.2%). Fewer pixels are recovered in the closed-hand scenario, as mainly the fist is present in the scene. The fist is not recovered in the open-hand scenario either, but there the fingers, which are easier to recover from (due to their thin shape), also occlude; in percentage terms, the open-hand scenario therefore recovers more, even though it occludes more. Sequences 3 and 4 resulted in larger recovery percentages (88.2% and 97.4%, respectively) because the surgeon hand was farther away from the hand model. This implies that there is a greater probability for the background voxels to be seen by the RGBD sensors. Sequence 6, with a scalpel, confirms that the height strongly influences the recovery. The scalpel scenario, which includes numerous images with hands and instruments close to the background (less than 10 cm), shows a low recovery result, as expected. Due to the hammer's shape, Sequence 5 however shows a higher recovery percentage. §.§ Visualization results In Figure <ref>, for each scenario, one selected image I_layers from the sequence can be observed with different values of α, β, γ and δ. Each row i corresponds to Sequence i. From left to right, the layer visualized in I_layers gets closer to the X-ray source viewpoint. In column (a), the furthest layer (the X-ray image) is displayed; in column (b), the second layer (the background); in column (c), the blending of the front layer with the background; in column (d), the blending of the three layers; and finally, in column (e), the closest layer is shown. Additional images from the sequences can be seen in the supplementary video, where the interaction between the layers obtained by changing the blending values can be observed. Since the background cannot be ideally recovered, a manual post-processing step involving inpainting is applied and displayed in column (f) of Figure <ref>.
We believe that the multi-layer visualization concept is an interesting and profound solution offering numerous possibilities to the surgical as well as the mixed reality communities. Similar to the results of Habert et al. <cit.>, the images resulting from synthesization are not as sharp as a real video image. The area synthesized by our algorithm is approximately 20 cm × 20 cm (the C-arm detector size), which is small compared to the wide-angle field of view of the Kinect v2. Restricted to the synthesized area, the video and depth data from the Kinect are not of high enough resolution for sharper results. More specialized hardware with a smaller field of view and higher-resolution RGBD data would solve this problem. Moreover, several artifacts can be seen around the hand and surgical instruments in the synthesized image due to large depth differences and depth noise in the RGBD data from the 2 cameras. However, our results demonstrate that our method works well, since the incision line and cross drawn on the hand model and patient hand are perfectly visible in the recovered background image and can be seen in transparency through the hands and surgical tools in the images of Figure <ref>, columns (c) and (d). In the scalpel sequence (Sequence 6) in Figure <ref>, column (b), it can be seen that the tip of the scalpel is considered as background; this is due to the margin of a few centimeters used for background segmentation. In this image, the scalpel is actually touching the skin. § DISCUSSION Inferring temporal priors can help alleviate occlusion. Methods involving volumetric fields <cit.> use temporal information, as the field is sequentially updated with new information instead of being fully reinitialized as in our method. The percentage of pixels recovered is also dependent on the side-camera configuration. In our clinical case, the camera setup is constrained by the C-arm design, and the disparity between the X-ray source and the two RGBD cameras is low. A higher disparity would lead to less occlusion in at least one of the cameras. Even with our constrained and difficult clinical setup, the results are extremely promising, and we are convinced the work could easily be extended to less restrictive settings. A potential application is Industrial Diminished/Mediated Reality, where workers wearing an HMD with two cameras placed on its sides (with a higher disparity than our setup) could see their viewpoint synthesized with their hands in transparency. § CONCLUSION In this paper, we have presented the first work combining Diminished and Augmented Reality in the medical domain. Our visualization scheme proposes a user-adjustable multiple-layer visualization where each layer can be blended with the others. The layers comprise the anatomy in the X-ray image, the patient background, and the surgeon hands and surgical instruments. Our visualization scheme lets the clinician choose which layer(s) are to become transparent depending on the surgical scenario or workflow step. Beyond the medical domain, this work is the first use of a volumetric field for background recovery in Diminished Reality and Mixed Reality. Future work should involve adding further layers, by dissociating the surgeon hand layer from the surgical instrument layer, in order to adjust the visualization further to the user's preferences.
http://arxiv.org/abs/1709.08962v1
{ "authors": [ "Séverine Habert", "Ma Meng", "Pascal Fallavollita", "Nassir Navab" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170926121301", "title": "Multi-layer Visualization for Medical Mixed Reality" }
GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt D-64291, Germany [][email protected] National Institute of Science and Technology, Ulsan 44919, Republic of Korea [][email protected] In 1926, H. Busch formulated a theorem for one single charged particle moving along a region with a longitudinal magnetic field [H. Busch, Berechnung der Bahn von Kathodenstrahlen in axial symmetrischen electromagnetischen Felde, Z. Phys. 81 (5) p. 974, (1926)]. The theorem relates particle angular momentum to the amount of field lines being enclosed by the particle cyclotron motion. This paper extends the theorem to many particles forming a beam without cylindrical symmetry. A quantity being preserved is derived, which represents the sum of difference of eigen-emittances, magnetic flux through the beam area, and beam rms-vorticity multiplied by the magnetic flux. Tracking simulations and analytical calculations using the generalized Courant–Snyder formalism confirm the validity of the extended theorem. The new theorem has been applied for fast modelling of experiments with electron and ion beams on transverse emittance re-partitioning conducted at FERMILAB and at GSI. Extension of Busch's Theorem to Particle Beams M. Chung December 30, 2023 ==============================================In 1926, H. Busch applied the preservation of angular momentum for systems with cylindrical symmetry to a charged particle moving inside a region with magnetic field B⃗ <cit.>. Using conjugated momenta, the magnetic field strength is intrinsically included into the equations of motion. In linear systems, the normalized conjugated momenta p_x and p_y are related to the derivatives of the particle position coordinates (x,y) w.r.t. the main longitudinal direction of motion s⃗ through p_x := x'+𝒜_x/(Bρ) = x'-yB_s/2(Bρ) , p_y := y'+𝒜_y/(Bρ) = y'+xB_s/2(Bρ) ,where 𝒜⃗ is the magnetic vector potential with B⃗=∇⃗×𝒜⃗, B_s is the longitudinal component of the magnetic field, and (Bρ) is the particle rigidity, i.e., its momentum per charge p/(qe), with p as total momentum, q as charge number, and e as elementary charge.Busch's Theorem <cit.> states that the canonical angular momentum l̃=xp_y-yp_x is a constant of motion that is written in cylindrical coordinates asmγ r^2θ̇ + eq/2πψ = const. ,where γ is the relativistic factor, r is the radius of transverse cyclotron motion around the beam axis, θ̇ is the corresponding angular velocity, and ψ is the magnetic flux enclosed by this motion. Busch's Theorem for axially symmetric systems is on an invariant of motion of a single particle.A general formulation of Eq. (<ref>) has been derived in <cit.>, which is regarded as the generalized Busch's Theorem∮_𝒞v⃗· dC⃗ + eq/mψ = const. ,i.e., the path integral of the stream of possible particle velocities v⃗ along a closed contour 𝒞 confining a fixed set of possible particle trajectories plus the magnetic flux through the area enclosed by 𝒞 is an invariant of the motion. Busch's Theorem of Eq. (<ref>) is the special case of this generalized form for 𝒞 being a circle of radius r. This paper expresses an invariant through a sum of meaningful beam properties by re-formulating the invariance of the two eigen-emittances introduced in 1992 by A.J. Dragt <cit.>. 
This invariance holds strictly for the paraxial approximation and for mono-energetic beams as pointed out in <cit.>.The two eigen-emittances ε̃_1/2 are equal to the two projected transverse beam rms-emittances ε̃_x/y, if and only if there are no correlations between the two transverse degrees of freedom (planes). Eigen-emittances can be obtained by solving the complex equationdet(JC̃-iε̃_1/2I) = 0 ,where I is the identity matrix andC̃= [ ⟨ x^2 ⟩ ⟨ xp_x⟩ ⟨ xy⟩ ⟨ xp_y⟩; ⟨ xp_x⟩⟨ p_x^2⟩ ⟨ yp_x⟩ ⟨ p_xp_y⟩; ⟨ xy⟩ ⟨ yp_x⟩⟨ y^2⟩ ⟨ yp_y⟩; ⟨ xp_y⟩ ⟨ p_xp_y⟩ ⟨ yp_y⟩⟨ p_y^2⟩ ] , J= [0100; -1000;0001;00 -10 ].Second moments ⟨ uv⟩ are defined through a normalized distribution function f_b as⟨ uv⟩ = ∫∫∫∫ f_b(x,p_x,y,p_y)· uv· dx dp_x dy dp_yand projected rms-emittances by <cit.>ε̃_u^2:= ⟨ u^2⟩⟨ p_u^2⟩ - ⟨ up_u⟩ ^2 .For two degrees of freedom, the two eigen-emittances can be calculated from <cit.>ε̃_1/2=1/2√(-tr[(C̃J)^2] ±√(tr^2[(C̃J)^2]-16 det(C̃) )) .As the two eigen-emittances are preserved for the symplectic transformation <cit.>, the sum of their squares is preserved as well, i.e.,ε̃_1^2 + ε̃_2^2= -1/2tr[(C̃J)^2]= ε̃_x^2 + ε̃_y^2 + 2 (⟨ xy⟩⟨ p_xp_y⟩-⟨ yp_x⟩⟨ xp_y⟩)= const.Using the definitions of p_x and p_y in Eq. (<ref>) together with Eq. (<ref>) and finally expanding Eq. (<ref>) leads to(ε_1-ε_2)^2 + [AB_s/(Bρ)] ^2 + 2B_s/(Bρ)[⟨ y^2⟩⟨ xy'⟩ - ⟨ x^2⟩⟨ yx'⟩ + ⟨ xy⟩ (⟨ xx'⟩ - ⟨ yy'⟩)]=const.whereA := √(⟨ x^2⟩⟨ y^2⟩ -⟨ xy⟩ ^2) is the rms-area of the beam divided by π. Quantities written as Q̃ are calculated from conjugated coordinates (x,p_x,y,p_y) and those written as Q are calculated from laboratory coordinates (x,x',y,y'); hence, Q is obtained from Q̃ by substituting (p_x,p_y)→ (x',y') in the expression defining Q̃. In the following, only laboratory coordinates are used, as the extended theorem will be applied to experiments that used these coordinates.Equation (<ref>) shows that changing both transverse eigen-emittances can be achieved through longitudinal magnetic fields as was proposed first in <cit.>, where the beam is created inside a region of longitudinal field being emerged afterwards into a region without a field. Successful experimental demonstration of this concept was reported in <cit.>. The method has been applied to create very flat electron beams with aspect ratios of up to 100 <cit.>. It was also proposed for ions being emerged from the solenoid field of an electron-cyclotron-resonance source to create beams of very low horizontal emittances that will allow for high-resolution spectrometers <cit.>. By placing a charge state stripper, i.e., changing (Bρ ) inside a solenoid, transverse emittance was adjustably transferred from one plane into the other one <cit.>.The first term of the left-hand side of Eq. (<ref>) is the squared difference of the beam eigen-emittances. The second term is basically the square of the magnetic flux through the beam rms-area A⃗ as illustrated in Fig. <ref>. In the following, it is shown that the essential part of the third term𝒲_A := ⟨ y^2⟩⟨ xy'⟩ - ⟨ x^2⟩⟨ yx'⟩ + ⟨ xy⟩ (⟨ xx'⟩ - ⟨ yy'⟩)is the rms-averaged beam vorticity multiplied by the twofold beam rms-area. We choose the ansatz assigning 𝒲_A to the rotation (∇⃗× ) of the mean, i.e., averaged over (x',y') space, beam angle r⃗̅⃗'⃗̅⃗(x,y,s) being integrated over the beam rms-area, and finally multiplied by the twofold beam rms-area:𝒲_A =2A∫_A[∇⃗×r⃗̅⃗'⃗̅⃗(x,y,s)] · dA⃗being equivalent to𝒲_A =2A ∮_𝒞r⃗̅⃗'⃗̅⃗(x,y,s) · dC⃗ ,where r⃗̅⃗'⃗̅⃗(x,y,s) :=[x̅'̅(x,y,s),y̅'̅(x,y,s),1]. 
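For concreteness, the following Python sketch evaluates the ingredients introduced above (the eigen-emittances, the rms-area A, and the rms-vorticity term 𝒲_A) for a given laboratory-frame second-moment matrix ordered as (x, x', y, y'), and returns the three summands of the invariant. It is only an illustrative evaluation under the stated definitions; the example matrix and the value of B_s/(Bρ) are arbitrary placeholders and are not taken from this paper.

```python
import numpy as np

J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]], dtype=float)

def eigen_emittances(C):
    """Eigen-emittances of a 4x4 second-moment matrix C (rows/columns ordered
    as (x, x', y, y') or (x, p_x, y, p_y)), via the trace/determinant formula."""
    t = -np.trace(C @ J @ C @ J)                  # equals 2*(eps1^2 + eps2^2)
    s = np.sqrt(t ** 2 - 16.0 * np.linalg.det(C))
    return 0.5 * np.sqrt(t + s), 0.5 * np.sqrt(t - s)

def busch_summands(C_lab, k):
    """The three summands of the invariant for laboratory-frame moments,
    with k = B_s/(B rho): ((eps1-eps2)^2, (A*k)^2, 2*k*W_A)."""
    e1, e2 = eigen_emittances(C_lab)
    A = np.sqrt(C_lab[0, 0] * C_lab[2, 2] - C_lab[0, 2] ** 2)     # rms-area / pi
    W_A = (C_lab[2, 2] * C_lab[0, 3] - C_lab[0, 0] * C_lab[2, 1]
           + C_lab[0, 2] * (C_lab[0, 1] - C_lab[2, 3]))           # rms-vorticity term
    return (e1 - e2) ** 2, (A * k) ** 2, 2.0 * k * W_A

# uncorrelated example (units of mm and mrad): eps_x = 2, eps_y = 1, W_A = 0
C = np.diag([4.0, 1.0, 1.0, 1.0])
print(eigen_emittances(C))              # -> (2.0, 1.0)
print(busch_summands(C, k=1.0e-3))      # -> ((eps1-eps2)^2, (A*k)^2, 0.0)
```

For this uncorrelated example the eigen-emittances reduce to the projected emittances and the vorticity term vanishes, as expected.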
The ansatz above is supported by the similarity of 𝒲_A to the first term of Eq. (<ref>). In continuum mechanics, the rotation of a medium's velocity (∇⃗×v⃗) is called the vorticity. As 𝒲_A is by construction invariant under rotation by any angle in the (x,y) plane, Eq. (<ref>) may be worked out for a beam with ⟨ xy⟩ =0 without loss of generality (imagine that prior to the calculation of 𝒲_A the beam is rotated around the beam axis by an angle that puts ⟨ xy⟩ to zero). For the following procedure, the resulting beam rms-area (divided by π) is treated as being infinitesimally small in the paraxial approximation. Accordingly, the transverse components of r⃗̅⃗'⃗̅⃗ are expressed through the first terms of the Taylor series x̅'̅(x,y) := x̅'̅(0,0) + ∂x̅'̅/∂ x· x + ∂x̅'̅/∂ y· y, y̅'̅(x,y) := y̅'̅(0,0) + ∂y̅'̅/∂ x· x + ∂y̅'̅/∂ y· y, which turns into x̅'̅(x,y) := ⟨ x'x⟩/⟨ x^2⟩x + ⟨ x'y⟩/⟨ y^2⟩y, y̅'̅(x,y) := ⟨ y'x⟩/⟨ x^2⟩x + ⟨ y'y⟩/⟨ y^2⟩y. Figure <ref> illustrates as an example the constant slope (∂y̅'̅/∂ x) of y̅'̅ in the projection of the four-dimensional rms-ellipsoid onto the (x,y') plane. The path integral around the rms ellipse x^2/⟨ x^2⟩ + y^2/⟨ y^2⟩ = 1 can be carried out by the following change of variables: x = √(⟨ x^2⟩)cosθ, y = √(⟨ y^2⟩)sinθ, and d C⃗ = ( dx/dθ,dy/dθ) dθ = ( - √(⟨ x^2⟩)sinθ, √(⟨ y^2⟩)cosθ) d θ . Therefore, 2A ∮_𝒞r⃗̅⃗'⃗̅⃗(x,y,s) · dC⃗ = 2A ∫_0^2π(⟨ x'x⟩/⟨ x^2⟩x + ⟨ x'y⟩/⟨ y^2⟩y ) ( -√(⟨ x^2⟩)sinθ) d θ + 2A ∫_0^2π(⟨ y'x⟩/⟨ x^2⟩x + ⟨ y'y⟩/⟨ y^2⟩y ) (√(⟨ y^2⟩)cosθ)d θ = ⟨ y'x⟩⟨ y^2⟩- ⟨ x'y ⟩⟨ x^2⟩ = 𝒲_A , which proves that the ansatz is correct. For the time being, acceleration has not been included in the treatment. This can be done simply by multiplying Eqs. (<ref>) and (<ref>) by the appropriate powers of βγ, where β is the longitudinal particle velocity normalized to the velocity of light c. The extension of Busch's Theorem to beams including acceleration is (ε_n1-ε_n2)^2 + [eqψ/mcπ] ^2 + 4eqψβγ/mcπ ∮_𝒞r⃗̅⃗'⃗̅⃗· dC⃗=const. , where ψ is the magnetic flux through the beam rms-area A. Analogously to the normalized emittance ε_n:=βγε, the normalized beam rms-vorticity is introduced as 𝒲_An := βγ𝒲_A . Tracking simulations using the BEAMPATH <cit.> code have been performed in order to verify Eq. (<ref>). The probe beam line (Fig. <ref>) comprises a solenoid with an extended fringe field, a skewed quadrupole magnet quartet, and another extended solenoid. Figure <ref> plots the beam widths, the rms-area, the three summands of Eq. (<ref>), and their sum along the beam line. Additionally, the results from the application of the generalized Courant–Snyder (C–S) formalism for coupled lattices <cit.> are plotted. In the latter, hard-edge solenoids with infinitely short fringe-field lengths have been assumed. The three summands change exclusively along regions with a longitudinal magnetic field. Behind these regions, each of them returns to the value it had prior to entering the region. The sum of the three beam properties remains constant, in accordance with Eq. (<ref>). At FERMILAB's NICADD photoinjector, flat electron beams were formed by first producing the beams at the surface of a photo cathode placed inside an rf-gun on which a longitudinal magnetic field B_s=B_0 was imposed <cit.>. Along the subsequent region with B_s=0, the beam was accelerated to 16 MeV. Finally, correlations initially imposed by the magnetic exit fringe field of the rf-gun were removed by three skew quadrupole magnets.
Equation (<ref>) equates the situation at the cathode surface on the left-hand side to the situation of the final flat beam on the right-hand side (q=1): 0 + [eB_0A_0/mc]^2 + 0 = (ε_nf1-ε_nf2)^2 + 0 + 0 , where A_0 is the beam rms-area at the cathode surface. The authors of <cit.> used the definitions <cit.> (ε^u_n)^2:= ε_nf1·ε_nf2 ℒ:= (eB_0A_0)/(2mγβ c) resulting in ε_nf1 = ℒβγ±√((ℒβγ)^2+(ε^u_n)^2) , of which only the upper sign gives a meaningful positive result. Re-plugging this expression for ε_nf1 into Eq. (<ref>) leads to ε_nf1/2 = ±ℒβγ + √((ℒβγ)^2+(ε^u_n)^2) , which is identical to their original expression (Eq. (1) of <cit.>). At GSI, the EMittance Transfer EXperiment (EMTEX) transferred emittance from one transverse plane into the other by passing the beam through a short solenoid <cit.>. In the solenoid center, the ions' charge state, i.e., their rigidity, was changed from 3+ to 7+ by placing a thin carbon foil there. Charge state stripping is a standard procedure used at several laboratories that deliver heavy or intermediate-mass ions <cit.>. In front of the solenoid, the beam had no inter-plane correlations, and thus the difference of rms-emittances was equal to the difference of eigen-emittances (mod. sign). Since the solenoid was short, the beam area at the foil A:=A_f can be approximated as constant during the beam transit through the solenoid. Equation (<ref>) relates the beam parameters in front of the solenoid (B_s=0, no correlations →𝒲_A =0, ε_10=ε_x,3+, ε_20=ε_y,3+) to those in front of the foil in the center of the short solenoid: (ε _10-ε _20)^2 + 0 + 0= (ε _1f-ε _2f)^2 + [A_f B_s/(Bρ )_3+]^2 + 2B_s/(Bρ )_3+𝒲_Af , where the index f refers to the location of the foil. The entrance fringe field of the solenoid causes the rms-vorticity 𝒲_A_f = Δ𝒲_A = -2B_s/2(Bρ)_3+A_f^2 leading to (ε _x,3+-ε _y,3+)^2 + 0 + 0 = (ε _1f-ε _2f)^2 - [A_f B_s/(Bρ )_3+]^2 . Using the initial beam parameters of the experiment <cit.>, A_f=√(ε_xβ_xε_yβ_y)=4.166 mm^2, and the identity 1 mm mrad = 1 μm gives (ε _1f-ε _2f)^2 = (ε _x,3+-ε _y,3+)^2+ 2.709 μm^2= 2.755 μm^2 . Equation (<ref>) is re-used to relate the beam parameters just behind the foil, but still at the center of the solenoid, to those at the exit of the beam line, where B_s=0 and the beam correlations have been removed again. Angular scattering in the foil is neglected. As the beam changed rigidity in the foil, (Bρ )_3+ must be properly replaced by (Bρ )_7+. However, the second beam moments are not changed by the foil, i.e., 𝒲_A = 𝒲_A_f right in front of and right behind the foil. Accordingly, (ε _1f-ε _2f)^2 + [A_f B_s/(Bρ )_7+]^2 + 2B_s/(Bρ )_7+𝒲_Af= (ε_x,7+-ε_y,7+)^2 + 0 + 0 , which by using Eq. (<ref>) and plugging in the values delivers |ε_x,7+-ε_y,7+| = 2.2 mm mrad , which fits well the measured value of 2.0 mm mrad (see Fig. 2 of <cit.>).
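The flat-beam relation just recovered can also be evaluated numerically. The short sketch below does so for an electron beam; the cathode field B_0, the rms-area A_0 and the uncorrelated emittance ε^u_n used here are hypothetical round numbers, not the NICADD parameters, and serve only to illustrate that the product of the final eigen-emittances equals (ε^u_n)^2 while their difference equals the magnetic-flux term of Eq. (<ref>).

```python
import numpy as np

e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8      # SI units

def flat_beam_emittances(B0, A0, eps_u_n):
    """Normalized eigen-emittances of the final flat beam,
    eps_nf1/2 = +/- L*beta*gamma + sqrt((L*beta*gamma)^2 + eps_u_n^2),
    with L*beta*gamma = e*B0*A0/(2*m*c)."""
    Lbg = e * B0 * A0 / (2.0 * m_e * c)
    root = np.sqrt(Lbg ** 2 + eps_u_n ** 2)
    return Lbg + root, -Lbg + root

# hypothetical cathode parameters (illustrative only, not the NICADD values)
B0      = 0.05            # longitudinal field on the cathode [T]
A0      = (0.5e-3) ** 2   # beam rms-area divided by pi at the cathode [m^2]
eps_u_n = 1.0e-6          # uncorrelated normalized emittance [m rad]

e1, e2 = flat_beam_emittances(B0, A0, eps_u_n)
print(e1 / e2)                                          # flat-beam emittance ratio
print(np.isclose(e1 * e2, eps_u_n ** 2))                # product is preserved
print(np.isclose(e1 - e2, e * B0 * A0 / (m_e * c)))     # difference = flux term
```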
It was successfully used for quick and precise modelling of emittance re-partitioning experiments conducted at FERMILAB and at GSI, hence it is a powerful tool easily applicable to both electron and heavy ion beam lines or accelerators. The extended theorem significantly facilitates modelling and designing of devices for advanced emittance manipulations.This research was partly supported by the National Research Foundation of Korea (Grants No. NRF-2015R1D1A1A01061074 and No. NRF-2017M1A7A1A02016413).20Busch H. Busch, Berechnung der Bahn von Kathodenstrahlen in axial symmetrischen electromagnetischen Felde, Z. Phys. 81 (5) p. 974, (1926).Reiser M. Reiser, Theory and Design of Charged Particle Beams, Wiley-VCH, Weinheim, 2008, 2nd ed., Chapter 2.Tsimring S.E. Tsimring, Electron Beams and Microwave Vacuum Electronics, John Wiley & Sons, Inc., Hoboken, 2007,Chapters 1 and 3.SCF P.T. Kirstein, G.S. Kino, W.E. Waters, Space Charge Flow, McGraw-Hill Inc., New York, U.S.A., 1967, p. 14.Dragt A.J. Dragt, General moment invariants for linear Hamiltonian systems, Phys. Rev. A 45, 4 (1992).Floettmann K. Floettmann, Some basic features of the beam emittance, Phys. Rev. ST Accel. Beams 6, 034202 (2013).pu_uprime Emittance definitions assume mono-energetic beams and refer to fixed position s_0 rather to fixed time t_0. The particle angle u' and its transverse mechanical momentumare related through P_u=p· u', where p is the longitudinal mechanical momentum, which is the same for each particle.Xiao_prstab2013 C. Xiao, L. Groening, O. Kester, H. Leibrock, M. Maier, and C. Mühle, Single-knob beam line for transverse emittance partitioning, Phys. Rev. ST Accel. Beams 16, 044201 (2013).Brinkmann_rep R. Brinkmann, Y. Derbenev, K. Flöttman, A low emittance, flat-beam electron source for linear colliders, DESY TESLA-99-09, (1999).Brinkmann_prstab R. Brinkmann, Y. Derbenev, and K. Flöttmann, A low emittance, flat-beam electron source for linear colliders, Phys. Rev. ST Accel. Beams 4, 053501 (2001).Piot_prstab2006 P. Piot, Y.-E Sun, and K.-J. Kim, Photoinjector generation of a flat electron beam with transverse emittance ratio of 100, Phys. Rev. ST Accel. Beams 9, 031001 (2006).Bertrand P. Bertrand, J.P. Biarrotte, and D. Uriot, Flat Beams and application to the mass separation of radioactive beams, in Proceedings of the 10th European Particle Accelerator Conference, Edinburgh, Scotland, edited by J. Poole and C. Petit-Jean-Genaz (Institute of Physics, Edinburgh, Scotland, 2006).Groening_prstab2011 L. Groening, Concept for controlled transverse emittance transfer within a linac ion beam, Phys. Rev. ST Accel. Beams 14, 064201 (2011).Groening_prl2014 L. Groening, M. Maier, C. Xiao, L. Dahl, P. Gerhard, O.K. Kester, S. Mickat, H. Vormann, and M. Vossberg, Experimental Proof of Adjustable Single-Knob Ion Beam Emittance Partitioning, Phys. Rev. Lett. 113, 264802 (2014).Groening_IPAC15 L. Groening, S. Appel, L. Bozyk, Y. El-Hayek, M. Maier, C. Xiao, Demonstration of flat ion beam-creation andinto a synchrotron, in Proceedings of the 6th International Particle Accelerator Conference, Richmond, VA, U.S.A., edited by S. Henderson (ANL, Richmond, 2015).BEAMPATH Y.K. Batygin, Particle-in-cell code BEAMPATH for beam dynamics simulations in linear accelerators and beamlines, Nucl. Instrum. & Methods in Phys. Res. A 539, 455 (2005).Chung_prl2016 M. Chung, H. Qin, R.C. Davidson, L. Groening, and C. Xiao, Generalized Kapchinskij-Vladimirskij Distribution and Beam Matrix for Phase-Space Manipulations of High-Intensity Beams, Phys. 
Rev. Lett. 117, 224801 (2016).Kim K.-J. Kim, Phys. Rev. ST Accel. Beams 6, 104002 (2003). Okuno_prstab H. Okuno, N. Fukunishi, A. Goto, H. Hasabe, H. Imao, O. Kamigaito, M. Kase, H. Kuboki, Y. Yano, and S. Yokouchi, Low-Z gas stripper as an alternative to carbon foils for the acceleration of high-power uranium beams, Phys. Rev. ST Accel. Beams 14, 003503 (2011).Scharrer_prab P. Scharrer, Ch.E. Düllmann, W. Barth, J. Khuyagbaatar, A. Yakushev, M. Bevcic, P. Gerhard, L. Groening, K.P. Horn, E. Jäger, J. Krier, and H. Vormann, Measurements of charge state distributions of 0.74 and 1.4 MeV/u heavy ions passing through dilute gases, Phys. Rev. Accel. Beams 20, 043503 (2017).
http://arxiv.org/abs/1709.09538v1
{ "authors": [ "L. Groening", "M. Chung", "C. Xiao" ], "categories": [ "physics.acc-ph" ], "primary_category": "physics.acc-ph", "published": "20170927140845", "title": "Extension of Busch's Theorem to Particle Beams" }
Universality in the PBH merger distribution Kocsis, Suyama, Tanaka, Yokoyama ^1Institute of Physics, Eötvös University, Pázmány P. s. 1/A, Budapest, 1117, Hungary; ^2 Research Center for the Early Universe (RESCEU), Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan ^3 Department of Physics, Kyoto University, Kyoto 606-8502, Japan ^4 Center for Gravitational Physics, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan ^5 Department of Physics, Rikkyo University, Tokyo 171-8501, Japan ^6 Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan It has been proposed that primordial black holes (PBHs) form binaries in the radiation dominated era. Once formed, some fraction of them may merge within the age of the Universe by gravitational radiation reaction. We investigate the merger rate of the PBH binaries when the PBHs have a distribution of masses around O(10) solar masses, which is a generalization of the previous studies where the PBHs are assumed to have the same mass. After deriving a formula for the merger time probability distribution in the PBH mass plane, we evaluate it under two different approximations. We identify a quantity constructed from the mass-distribution of the merger rate density per unit cosmic time and comoving volume ℛ(m_1,m_2), α = -(m_1+m_2)^2∂^2 lnℛ/∂ m_1∂ m_2, which universally satisfies 0.97 ≲α≲ 1.05 for all binary masses independently of the PBH mass function. This result suggests that the measurement of this quantity is useful for testing the PBH scenario. Hidden universality in the merger rate distribution in the primordial black hole scenario Bence Kocsis1, Teruaki Suyama2, Takahiro Tanaka3,4, and Shuichiro Yokoyama5,6 22 September 2017 =========================================================================================== § INTRODUCTION Recent detections of gravitational wave events (GW150914, LVT151012, GW151226, GW170104, GW170608, and GW170814) by the LIGO-Virgo collaboration <cit.> revealed the existence of binary black holes (BHs) in the mass range 8–35 solar masses. These observations clearly demonstrate that there are numerous BH-BH binaries in the Universe that have previously eluded the scrutiny of astronomers. The origin of such heavy BHs and the formation of close binary BHs which merge within the age of the Universe are widely debated. Various astrophysical scenarios for the explanation of the gravitational wave events are summarized, for instance, in <cit.> and <cit.>. Although only five robustly identified BH-BH binary mergers with GW detections have been reported so far, merger rates are constrained to within 12–240 Gpc^-3 yr^-1 <cit.>. With the further improvement of GW detectors, we will soon enter the era of a black hole rush, where a large number of BH-BH binaries are detected with their masses, spins, and locations determined. Those data will provide us with important clues to clarify the origin of binary BHs as well as the formation mechanism of the binaries.
Clearly, investigations of how various astrophysical scenario producing merging BH binaries can be distinguished byobservations will become a fundamentally important topic.Recently, a collaboration including three of the authors, <cit.> pointed out that the GW event GW150914 could be merger events of two primordial black holes (PBHs)based on earlier studies <cit.>.In <cit.> and <cit.>, the formation mechanism of the PBH binaries was proposed and a connection between the PBH binaries andthe gravitational wave events from the merger of binary PBHs was given[ There are other papers in which potential detection of PBHs by LIGO was claimed <cit.>.The binary formation path is different from that in <cit.>.]. PBHs stand for BHs that formed in the very early Universe much before the epoch of the matter radiation equality <cit.>. For instance, in the well-studied scenario, PBHs form from rare high peaksof the primordial density inhomogeneities whose amplitudes are much larger than the standard deviation. In this case, the PBH mass is given by the total energy contained in the Hubble horizon at the formation time,m_ BH =γ4π/3ρ H^-3≈ 30  ( γ/0.2) ( T/30  MeV)^-2,where T is the temperature of radiation and γ= O(1) depends on the details of the BH formation. Analytic estimates give γ=3^-3/2≈ 0.2<cit.>. Other mechanisms of the PBH production are summarized by <cit.>.After having formed in the very early Universe, PBHs stay on the expansion flow of the Universe. Even when PBHs are randomly distributed in space without being clustered, there is a small but non-vanishing probability that two neighboring PBHs happen to be much closer than the mean distance. Such PBHs, being initially on the cosmic expansion flow, eventually start to come closer influenced by their mutual gravity when the cosmic expansion rate becomes too low to separate them apart. As was shown by <cit.>, a direct collision is avoided by the tidal effect of other PBHs in their vicinity, which leads to the formation ofa PBH binary with a large eccentricity.Further <cit.> have recently shown that the tidal field of halos and interactions with other PBHs,as well as dynamical friction by unbound dark matter particles, do not affect PBH binaries significantly. Highly eccentric PBH binaries radiate GWs efficiently anda fraction of them can merge within 14 billion years.In <cit.>, under the approximation that all PBHs have the same mass of 30, it was shown that the expected event rate of the PBH binary mergers is consistent with the one determined by the LIGO-Virgo collaboration after the announcement of GW150914 <cit.>, if the fraction of cold dark matter in PBHs is about 10^-3. This fraction is consistent with existing observational upper limits<cit.>. So far, the PBH scenario proposed by <cit.> is successful in explaining the LIGO event GW150914.In the next decades, many more BH binaries will be detected, which will deliverfruitful statistical information on the merger rates in the two-dimensional BH mass plane (m_1,m_2) (see ). Purpose of the present paper is to examine if the mass distribution can be used observationally to test the PBH scenario.The currently announced five robust merger events show some scatter in the BH mass as (m_1,m_2)=(36^+5_-4,29^+4_-4) for GW150914,(14.2^+8.3_-3.7,7.5^+2.3_-2.3) for GW151226, (31.2^+8.4_-6.0,19.4^+5.3_-5.9) for GW170104, (12^+7_-2,7^+2_-2) for GW170608, and (30.5^+5.7_-3.0,25.3^+2.8_-4.2) for GW170814 in units of solar mass (90% credible intervals) <cit.>. 
In this paper, we estimate the merger rate density in the m_1-m_2 plane predicted by the PBH scenario. We extend the formalism of previous studies <cit.> to compute the merger event rate to the case in which the PBH mass function is not restricted to a single mass but extends over a mass range between m_min and m_max with m_max/m_min≲ 10[ Recently, such an extension has also been done in <cit.>. Our study differs from <cit.> in that our primary purpose is to investigate the universal feature of the merger-rate distribution that is insensitive to the PBH mass function. ]. We assume that the PBH mass function does not extend over many orders of magnitude, since in that case the dynamics may not be accurately captured by the simple physical processes adopted by <cit.>. Quite interestingly, we find that the merger rate distribution in this case depends on the mass of the BH binary in a specific way, and a quantity constructed from the mass-distribution of the merger rate density per unit time and volume ℛ(m_1,m_2), α = -(m_1+m_2)^2 ∂^2 lnℛ/∂ m_1∂ m_2, is insensitive to the PBH mass function. This distinct feature is advantageous since there is no theoretically tight constraint on the shape of the PBH mass function. Identifying the information in the merger rate density which is insensitive to the BH mass function may be used to discriminate different formation channels <cit.>. This information may be used to obtain the probability of mergers for given BH masses, P_ intr(m_1,m_2) (defined by Eq. (<ref>) below), which is essential in measuring the underlying BH mass function f(m) itself. Before closing this section, in Table <ref> we list definitions of important symbols that are used in this paper. The paper is organized as follows. We first develop a formalism to compute the event rate in the PBH scenario which can be applied to the case of a non-monochromatic[By "monochromatic mass function" we refer to a population in which all PBHs have the same mass.] mass function. Then, we apply the derived formula to evaluate the mass-dependence of the merger rate in the (m_1,m_2) BH mass plane and show that the special quantity constructed out of the event rate density becomes almost independent of the PBH mass function. § FORMATION OF BINARY PBHS In this section, we derive a formula for the merger rate density as a function of the masses of the two BHs comprising the binary. §.§ Formation and mass function of PBHs There are several mechanisms to form PBHs <cit.>. Among them, the most natural and widely investigated mechanism is the direct gravitational collapse of the primordial density perturbation in the radiation dominated Universe. In this scenario, when an overdense region containing an extremely high density peak, in which the perturbation amplitude is greater than δ_ th = O(1), reenters the Hubble horizon, that region directly collapses to a BH (for the estimation of δ_ th, see <cit.>). Crudely speaking, all the energy inside the Hubble horizon at the time of BH formation turns into the BH. This picture enables us to relate the BH mass to the comoving wavenumber k of the primordial density perturbation as m_ PBH∼ 20  ( k/1  pc^-1)^-2. There are no direct observational constraints on the probability distribution of density perturbations on such small scales. Although Eq. (<ref>) gives us a simple and approximate estimate of the PBH mass in terms of k, the relation (<ref>) is not precisely correct, since the PBH mass also depends on the amplitude of the density perturbation.
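Returning briefly to the quantity α introduced above: once a merger rate density ℛ(m_1,m_2) is available (from theory or, eventually, from observations), α can be estimated by finite differences. The sketch below is only an illustration of such an estimator; the toy rate R_toy is a hypothetical separable form, not the rate derived later in this paper, and is chosen such that α=1 holds exactly for any mass function f.

```python
import numpy as np

def alpha(R, m1, m2, h=1.0e-3):
    """alpha = -(m1+m2)^2 * d^2 ln R / (dm1 dm2), estimated with a central
    finite difference; h is the relative step size in each mass."""
    h1, h2 = h * m1, h * m2
    d2 = (np.log(R(m1 + h1, m2 + h2)) - np.log(R(m1 + h1, m2 - h2))
          - np.log(R(m1 - h1, m2 + h2)) + np.log(R(m1 - h1, m2 - h2))) / (4.0 * h1 * h2)
    return -(m1 + m2) ** 2 * d2

def R_toy(m1, m2, mc=20.0, sigma=0.3):
    """Hypothetical separable rate R = f(m1) f(m2) (m1+m2), for which alpha = 1
    exactly, whatever the mass function f."""
    f = lambda m: np.exp(-np.log(m / mc) ** 2 / (2.0 * sigma ** 2)) / m
    return f(m1) * f(m2) * (m1 + m2)

print(alpha(R_toy, 30.0, 10.0))   # ~ 1.0
```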
Deviation of the actual PBH mass from the horizon mass becomes significant as the amplitude of the density perturbation approaches δ_ th<cit.>. Thus, even if the spectrum of the primordial density perturbation is monochromatic, the resulting PBH mass function is not monochromatic <cit.>. Furthermore, the power spectrum of the primordial density perturbations needs not be monochromatic. In the paradigm of the standard inflationary cosmology, the primordial density perturbations are produced in the inflationary era preceding the radiation dominated era. Several inflationary models have been proposed to date with different predictions for the power spectral shape of the primordial density perturbationwhich lead to different PBH numbers and mass functions (see <cit.> and references therein). To a varying degree, these models predict a non-monochromatic power spectrum. Thus, the PBH mass function is generally not concentrated on a single mass.The PBH mass function is determined once the inflation model is fixed and the power spectrum of the primordial density perturbation is computed[In addition,non-Gaussianity of the primordial density perturbation also affects the PBH mass function <cit.>.]. Since there is no fiducial inflation model producing PBHs and different models predict different PBH mass functions, we do not restrict our analysis to any particular PBH mass function. As mentioned earlier, our only requirement is that it is confined to the mass rangem_max/m_min≲ 10. The case where the PBH mass function is extended over many orders of magnitude requires a separate analysis, which is beyond the scope of this paper.In addition to the mass function, the spatial distribution of PBHs also affects the probability of binary formation. In this study, for simplicity we assume that the distribution of PBHs at their birth isstatistically uniform and random in space. However, we also have to keep in mind that primordialclustering of PBHs is also possible and could bean important factor to enhance the merger event rate for a fixed mass fraction of PBHs.We define the PBH mass function f(m) such that f(m) dm is theprobability that a randomly chosen PBH has mass in (m,m+dm). Thus, f(m) is normalized as ∫_m_ min^m_ max f(m)dm=1.We denote the comoving PBH number density as n_ BH. The mean comoving separation between two neighboring BHs is thus given by n_ BH^-1/3.Before closing this subsection, it is important to mention that we do not consider the mass growth of the PBHs following their initial formation. The mass change due to accretion is negligible when PBH is in environments similar to the cosmic average density <cit.>. This may not be true for PBHs residing in high density regions of galaxies such as molecular clouds, accretion disks, or stellar interiors. However, since the majority of PBHs are expected to remain mostly in low density regionssuch as dark matter halos, we ignore the mass growth of PBHs. §.§ Major axis and eccentricity of a binaryJust after PBHs are formed in the early Universe,they are typically separated by super-Hubble distances. Apart from a possible peculiar velocity,each PBH is attached to the flow of the cosmic expansion. Let us denote the mass of a randomly selected PBH by m_1, and the mass of and the comoving distance to the closest PBH by m_2 and x, respectively. Denoting the physical distance between the two BHs by D (see Fig. <ref>), the gravitational force is given by Gm_1 m_2/D^2. 
Ignoring for the moment the subdominant effects of the other remote BHs and the initial peculiar velocity and assuming that the above gravitational force is the only dynamical effect acting on eachBH[In particular, we neglect the gravitational pull of the background density inhomogeneities and the forcesthat arise due to anisotropic accretion from the background density. We will discuss these assumptions below.],the BHs attract each other and collide within the free-fall time given byt_ ff = D^3/2/√(Gm_ t),     m_ t≡ m_1+m_2.In reality, the space is expanding, and the BHs will be distanced if the space expandsby O(1) or more within the free-fall time. Conversely, if the free-fall time is shorter than the Hubble time 1/H, then the two BHs become gravitationally bound and eventually collide. Since the free-fall time and the Hubble time respectively scale as ( scale  factor)^3/2 and (scale factor)^2 during the radiation dominated era, the Hubble time may eventually exceed the free-fall time in the radiation dominated era even if the BHs are initially on the cosmic expansion flow <cit.>. The condition for forming the bound system can be written as 1/√(Gm_ t)( x/1+z)^3/2 < 1/H(z),where z is the cosmological redshift. Using the Friedmann equation for a flat cosmology and neglecting factors of order unity, this condition can be rewritten asm_ t > ρ (z) x^3/(1+z)^3,where ρ (z) is the background energy density. From this expression, we can give another but equivalent physical interpretationto the criterion for forming the gravitationally bound state. The left hand side is the total mass of the two BHs, and the right hand side is the total mass of whatever matter component that dominates the background Universe. Thus, the condition for two BHs to become gravitationally boundis equivalent to the condition for the total energy m_ t to exceed the background energy contained in the comoving volume to the nearest PBH x^3.In the radiation dominated era, the energy density of radiation can be written as ρ (z) ≈ρ_c,0(1+z)^4/1+z_ eqΩ_m,where z_ eq is the redshift at the time of matter-radiation equality, ρ_c,0 and Ω_m respectively represent a critical density and a density parameter of the non-relativistic matter at the present, and the right-hand side in Eq. (<ref>) decreases in time. Then, if x is smaller than x_ max given byx_ max=( m_ t/ρ_c,0Ω_m)^1/3,Eq. (<ref>) becomes satisfied at z=z_ dec > z_ eq, where z_ dec is given by1+z_ dec= (1+z_ eq) ( x_ max/x)^3.The physical distance of the BH pair at the time of decoupling time, which becomes the semimajor axis of the resultant binary, is given bya=1/1+z_ decx = Ax^4,     A ≡1/1+z_ eq1/x_ max^3 = 1/1+z_ eqρ_c,0Ω_m/m_ t.Since the BH pair forms only for x < x_ max, there is an upper bound on a as a < a_ max =x_ max/(1+z_ eq).If there is no force other than the gravitational force from the neighboring BHs,and the initial peculiar velocities vanish, such two BHs come closer by moving on the same straight line and end up with a head-on collision. However, in reality, there are other remote BHs surrounding the BHs in pair, and theyexert a torque during the infall motion of the BHs in pair. As a result, the BH pair acquires an angular momentum, and the head-on collision is circumvented. 
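Before turning to the torque, the decoupling condition and the resulting semimajor axis derived above can be evaluated directly. The sketch below does this for illustrative inputs; the cosmological parameter values and the chosen comoving separation are assumptions on our part and are not taken from this paper.

```python
import numpy as np

# assumed (Planck-like) cosmological parameters; not taken from this paper
h, Omega_m, z_eq = 0.67, 0.31, 3400.0
rho_m0 = 2.775e11 * h ** 2 * Omega_m        # rho_c,0 * Omega_m  [Msun / Mpc^3]

def binary_parameters(m1, m2, x):
    """Decoupling scale x_max, decoupling redshift z_dec and physical semimajor
    axis a = x/(1+z_dec) for a PBH pair of masses m1, m2 [Msun] with comoving
    separation x [Mpc].  Returns (x_max, None, None) if the pair never decouples."""
    m_t = m1 + m2
    x_max = (m_t / rho_m0) ** (1.0 / 3.0)
    if x >= x_max:
        return x_max, None, None
    z_dec = (1.0 + z_eq) * (x_max / x) ** 3 - 1.0
    a = x / (1.0 + z_dec)
    return x_max, z_dec, a

pc = 1.0e-6                                 # 1 pc in Mpc
x_max, z_dec, a = binary_parameters(30.0, 30.0, x=20.0 * pc)
print(x_max / pc, z_dec, a / pc)            # decoupling scale, redshift, semimajor axis
```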
The torque exerted by the i-th distant BH to the lowest order in the distance D_i to the i-th BHis given byN_i =3GM_i/2D_i^3sin (2θ_i) m_1 m_2/m_ t D^2,where D is the physical distance between BH1 and BH2 (see Fig.<ref>),M_i is the mass of the i-th perturber BH, and θ_i is the angle between a line connecting two BHs in pairand a line connecting i-th BH and a center of mass of the BH pair (see Fig. <ref>). Thus, the angular momentum generated by this torque throughout the free fall becomesJ_i ≃ N_i t_ ff.Taking the direction of the torque exerted by each BH into account,the total angular momentum that the BH pair acquires is given byJ⃗ =3/2 t_ ffGm_1 m_2/m_ t D^2 ∑_i=1^N M_i/D_i^3sin (2θ_i) (e⃗_z×e⃗_i)/ |e⃗_z×e⃗_i| ,where we have chosen the line of the major-axis to be parallel to z-axis ande⃗_i=(cosϕ_i sinθ_i, sinϕ_i sinθ_i, cosθ_i),is the unit vector pointing to the i-th BH (see Fig. <ref>). For the Keplarian motion, there is a relation between the orbital angular momentum and the eccentricity e as|J⃗|=m_1 m_2 √(G D/m_ t)√(1-e^2).Using this formula, we obtain 1-e^2= 9/4ζ⃗^2,     ζ⃗=∑_i=1^N x^3/y_i^3M_i/m_ tsin (2θ_i ) (e⃗_z×e⃗_i)/ |e⃗_z×e⃗_i| ,where x is the comoving distance between BH1 and BH2 and y_i is the comoving distance to thei-th BH. Eqs. (<ref>) and (<ref>) are the main results of this subsection. They are the major axis and the eccentricity of the BH binary at the time of formation. Our analysis in the next subsection is based on these formulae.Let us now estimate the value of N, namely the number of the surrounding BHs that are inside the Hubble horizon at the time of the PBH binary formation. For simplicity, only in this paragraph we assume all the PBHs have the same mass m_ BH and constitute a fraction f_ PBH of all the cold dark matter (for instance, f_ PBH≃ 10^-3 is required to explain the LIGO observation ). First of all, we notice that N depends on the initial comoving separation of the PBHs that form a pair. For instance, if the initial comoving separation of the BHs that form a binary is sufficiently small, they form a binary at very early time.In such a case, most likely few BHs exist inside the Hubble horizon and N=0 or N=1 will be the typical value. Thus, what we have to estimate is the typical value of N of PBH binaries that are relevant to observations. According to <cit.>, the probability dP that a given BH pair forms a binary, and then undergoes a merger at short cosmic time interval (t,t+dt) is given by dP=3/16( t/T)^3/8 e (1-e^2)^-(45/16)dt/t de,where T is defined byT≡3/170f_ PBH^-16/3(Gm_ BH)^-5/3/(1+z_ eq)^4( 8π/3H_0^2 Ω_m)^4/3.For distinction between the lifetime and merger time of binaries,see discussion around Eq. (<ref>). The merger probability for fixed t is dominated by the binaries having eccentricity near its upper limite_ upper given by Eq. (11) in <cit.>,e_ upper=√(1-( t/T)^6/37)            for t< f_ PBH^37/3 T √(1-f_ PBH^2 ( t/f_ PBH^37/3 T)^2/7)      for t≥f_ PBH^37/3 T.We only consider the first case t<f_ PBH^37/3Twhich is shown to be relevant to LIGO observations <cit.>. For PBH mass m_ PBH=30, this condition becomes f_ PBH≳ 10^-3. Analysis in the second case is straightforward. PBH binaries we are interested in are those that merge on the order of the age of the Universe t=t_0∼ 1/H_0. Then, when we fix the merger time and the eccentricity to t_0 and e_ upper, respectively, the major-axis a at the time of the binary formation is uniquely determined (see Eq. (<ref>)). 
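The time scale T and the corresponding e_upper are easy to evaluate numerically. The sketch below does so for the fiducial case m_BH = 30 M_sun and f_PBH = 10^-3; the expressions in the text appear to be written in units with c = 1, so a factor c^5 is restored here purely by dimensional analysis, and the values of H_0, z_eq and t_0 are assumptions of this sketch.

```python
import numpy as np

# Physical constants (SI) and assumed cosmological parameters.
G = 6.674e-11                 # m^3 kg^-1 s^-2
c = 2.998e8                   # m / s
M_sun = 1.989e30              # kg
H0 = 67.8e3 / 3.086e22        # s^-1 (67.8 km/s/Mpc, assumed)
Omega_m = 0.31
z_eq = 3400.0
t0 = 13.8e9 * 3.156e7         # age of the Universe [s], assumed

def T_scale(m_bh_msun, f_pbh):
    """Time scale T defined above, with the factor c^5 restored (the text uses c = 1)."""
    m = m_bh_msun * M_sun
    return (3.0 / 170.0) * f_pbh ** (-16.0 / 3.0) * c ** 5 \
        * (G * m) ** (-5.0 / 3.0) / (1.0 + z_eq) ** 4 \
        * (8.0 * np.pi / (3.0 * H0 ** 2 * Omega_m)) ** (4.0 / 3.0)

def e_upper(t, m_bh_msun, f_pbh):
    """First branch of the e_upper expression, valid for t < f_PBH^(37/3) T."""
    T = T_scale(m_bh_msun, f_pbh)
    assert t < f_pbh ** (37.0 / 3.0) * T, "second branch applies"
    return np.sqrt(1.0 - (t / T) ** (6.0 / 37.0))

if __name__ == "__main__":
    m_bh, f_pbh = 30.0, 1e-3
    T = T_scale(m_bh, f_pbh)
    print(f"T              = {T:.2e} s")
    print(f"f^(37/3) T     = {f_pbh**(37/3) * T:.2e} s   vs   t_0 = {t0:.2e} s")
    print(f"e_upper(t_0)   = {e_upper(t0, m_bh, f_pbh):.8f}")   # extremely close to 1
```

For these fiducial values f_PBH^(37/3) T indeed exceeds t_0, consistent with the condition f_PBH ≳ 10^-3 quoted above, and e_upper(t_0) differs from unity only at the level of 10^-7, i.e. the relevant binaries are born highly eccentric.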
Once the typical major-axis is determined in this way, we can convert it to the typical redshift of the PBH binary formation by usingEqs. (<ref>) and (<ref>), from which we can evaluate the number of PBHs inside the Hubble horizon at that redshift, namely N. The result is given byN ∼ 3× 10^10 ( t/t_0)^9/37 f_ PBH^-26/37( m_ BH/10 )^-22/37.Thus, for the typical PBH binary with m_ PBH= O(10 ) which we are interested in,there are in general more than ∼ 3× 10^10 PBHs in the Hubble horizon at the time of the binary formation if t≃ t_0. Because of the weak dependence of the PBH number N on the merger time t, N is much bigger than unity for merger times relevant to observations. In what follows, we take N →∞.One may wonder if the subsequent torque exerted on the BH binary by the surrounding BHs changes significantly the orbital parameters from the ones given by Eqs. (<ref>) and (<ref>). Considering the contribution only from the closest BH (i=1) for simplicity,the angular momentum that the BH pair acquires during one period T of the orbital motion is given by Δ J= 3/2GM_1 D^2/2D_1^3m_1m_2/m_ tsin (2θ_1) T.While D does not increase with the scale factor because the BH pair is gravitationally bound, the distance D_1 grows in proportion to the scale factor which scales as ∝ t^1/2 in the radiation dominated epoch. Then, denoting by D_1^(0) the initial value of D_1 at the time of binary formation, D_1 when the BH pair is in the n-th cycle of the orbital motion becomes n^1/2 D_1^(0). The accumulated angular momentum becomesJ<Δ J ∑_n=1^∞ n^-3/2≈ 2.6 Δ J.Thus, the subsequent change of the angular momentum of the BH binary after its formation is at most a factor of ∼ 2. This factor is not important for our main result, and we do not consider this effect in the following analysis. On the other hand, note that if a distant third black hole with mass M_1 is captured on a bound orbit around the binary in a hierarchical configuration with some orbital period T_1≫ T and eccentricity e_1, it can cause significant changes in the eccentricity of the binary due to the Lidov-Kozai effect on a timescale t_ Kozai=[(m_ t+M_1)/M_1](1-e_1^2)^3/2T_1^2/T<cit.>. However, we neglect this possibility in this paper for simplicity.There are also other effects that have been ignored in deriving Eqs. (<ref>) and (<ref>). They include peculiar velocity of the individual BH seeded in at the time of BH formation, the radiation drag, the tidal interaction with the other PBHs in the matter dominated epoch, subsequent infall of the surrounding BHs to the BH binary, tidal force from the perturbations of non-PBH dark matter, and baryon accretion onto the PBH binaries. The first three effects are investigated in <cit.> and was found to be subdominant. Recent study by <cit.> also confirms that the tidal forces from outer PBHs do not significantly affect the late-time evolution of PBH binaries. The subsequent infall of the surrounding BHs is also studied in <cit.>.<cit.> assumed that the dark matter consists of a single-mass PBH population. In this case, the surrounding BH that caused the angular momentum of the BH binary at early times is eventually trapped by the BH binary if the outer BHs are within the mean distance of PBHs, which can be also understood from the expression of x_ max given by Eq. (<ref>). 
Since the dynamics of three-body problem is difficult to solve, such a case was not considered, and only the opposite case where the nearest BH is more distant than the mean distancewas included in the derivation of the merger event rate in <cit.>. Even under this restriction, it was found that the event rate is reduced by at most by 40%. On the other hand, in the present case where PBHs constitute only a fraction f_ PBH of all the cold dark matter, the mean distance is enhanced by a factor f_ PBH^-1/3 compared with the case where PBHs provide all of the dark matter. Thus the probability that the surrounding BHs are trapped by the BH binary in the latter case is smaller than the former by a factor f_ PBH. Because of this consideration, we make an assumption that the surrounding BHsare not gravitationally bound to the BH pair. Then, the subsequent interaction by the surrounding BH in the BH binary is not significant, and we ignore the late-time effect of the surrounding BHs in the following analysis.The tidal force from the surrounding density perturbations of cold dark matter not in the form of PBHs,exists when PBHs constitute only a fraction of entire dark matter.This issue was addressed by <cit.> and <cit.>who showed that the tidal effect is not significant by extrapolating the primordial perturbations on CMB scales down to the PBH scales (see also ). Due to the random nature of the density perturbations, they yield additional statistically independent random contribution to ζ⃗ in Eq.(14).Since the power of the dark matter perturbation on small scales is not well understood, we do not consider this effect in this paper.Finally, baryon accretion onto PBHs was claimed to significantly affect the PBH binaries and accelerate mergers in . But, recent study by <cit.>, based on the simple analytic calculation, suggests that the baryon mass accumulated on PBHs inis likely to be an overestimation and the baryonic effect is much weaker although it may still be significant with respect to angular momentum exchange. For simplicity we do not account for baryon accretion in this work.§ DISTRIBUTION OF THE MERGER RATEIn the previous section, we have derived the expressions for the major axis and the eccentricity of the PBH binary in terms of the initial comoving positions and masses of PBHs.They are the basic ingredients for the evaluation of the merger rate, which is the purpose of this section.Let us denote by R (m_1,m_2,t) a merger event density per unit cosmic time t and unit comoving volume in the m_1-m_2 plane. In other words,R (m_1,m_2,t) dm_1 dm_2 dt dV,represents the number of merger events of PBH binaries in the mass intervals (m_1,m_1+dm_1), (m_2,m_2+dm_2)that happen during (t,t+dt) and in the comoving volume dV. Since the merger time t can be inferred from the luminosity distance(depending on the cosmological parameters), and the source frame BH masses (m_1,m_2) can be also estimated from the GW waveform,R is the quantity that can be in principle determined observationally. Our strategy to derive R(m_1,m_2,t) is described as follows.What we have to evaluate is the probability P_ intr (m_1,m_2,t)dtthat a given BH pair consisting of two BHs with m_1 and m_2, respectively, forms a binary,and then undergoes a merger during the short cosmic time interval (t,t+dt).Once the quantity P_ intr is obtained, using the PBH mass function given by Eq. 
(<ref>) and assuming that the masses of the two PBHs in the binary are independent,the merger rate density R is given by ℛ(m_1,m_2,t)=n_ BH/2 f(m_1)f(m_2)P_ intr(m_1,m_2,t).The major-axis and the eccentricity of the BH binary at the formation time are given by Eqs. (<ref>) and (<ref>), respectively. From these equations, we see that the initial semimajor axis is a function of the random variable x as a≡ a(x) and the initial eccentricity is a function of the length of the random vector ζ⃗ as e≡ e(ζ) , where ζ=|ζ⃗|. Denoting by F the probability distribution for x and ζ, the probability that the BH binary takes the values of the parameters in the range (x,x+dx) and (ζ,ζ+dζ) is given byF(x, ζ) dx dζ.We can then convert this probability into the one expressed in terms of a and e asF(x(a), ζ (e)) dx/dadζ/de da de.This gives the probability that the BH binary at the formation time has the major-axisand the eccentricity in the range (a,a+da), (e,e+de).PBH binaries shrink by emitting GWs until they finally merge. The lifetime τ of the BH binary with parameters (m_1,m_2,a,e) until it merges due to GW emissionis given by[We assume that e is typically close to 1 initially,which is a good approximation in the present case.]<cit.>τ=Q (1-e^2)^7/2 a^4,     Q= 3/851/G^3 m_1 m_2 m_ t.Denoting by t_ dec the cosmic time corresponding to z_ dec, namely the time of binary formation, we have τ=t-t_ dec. Since PBH binaries that are relevant to GW observations merge at late time t ≫ t_ dec (t_ dec < 4× 10^5  yr), it is a good approximation to identify τ with t. Thus, in what follows, we replace τ in all of the expressions with t. Under this approximation, we can express a as a function of { t, e, m_1, m_2} as a=a(t,e,m_1,m_2). Using this relation, Eq. (<ref>) becomesF(x(a), ζ(e)) dx/dadζ/de∂ a/∂ t de dt,where it should be understood that a is replaced by { t,e,m_1,m_2 }. Initial eccentricity of the BH binary is not a quantity that can be measured directlyby the GW interferometers for primordial binaries and must be integrated. There is an upper bound e_ m for the initial eccentricity for fixed t because of the existence of the maximum value of the major axisa_ max=x_ max/(1+z_ eq) (see Sec.<ref>). It is determined by the equationt=Q (1-e_ m^2)^7/2 a_ max^4. Notice that in the case of the monochromatic mass functione_ m coincides with e_ upper in the second case in Eq. (<ref>). Finally, the intrinsic probability distribution is given byP_ intr(m_1,m_2,t) = ∫_0^e_ m de  F(x(a), ζ(e)) dx/dadζ/de∂ a/∂ t. Having established the general framework to compute the merger rate density, let us implement this methodology in practice. It is straightforward to derive the last three factors in the integrand of Eq. (<ref>), and they are given bydx/da =1/4(Aa^3)^-1/4,   |dζ/de|=2e/3√(1-e^2),    ∂ a/∂ t =1/4t( t/Q)^1/4(1-e^2)^-7/8.The highly non-trivial part is the evaluation of F (x(a),ζ(e)) since ζ⃗ depends on many random variables (in fact, infinite number of variables) in a complicated manner. Formally, it can be written as F (x(a),ζ(e))= Θ (a_ max-a) 4π x^2 (a)/n_ BH^-1×∫lim_N→∞∏_i=1^N dV_i/n_ BH^-1f(M_i)dM_i/n_ BHsinθ_i dθ_i dϕ_i/4π×Θ (y_i-y_i-1 )e^ -4π/3 n_ BH y_N^3δ(ζ-g (x,y_i,M_i,θ_i,ϕ_i ) ),where Θ(·) is the Heaviside step function and δ (·) is the Dirac's delta function. Here, we have used the parametrization Eq. (<ref>) for e⃗_i, and introduced the notation as y_0=x,  dV_i=4π y_i^2 dy_i andg (x,y_i,M_i,θ_i,ϕ_i) ≡| ∑_i=1^N x^3/y_i^3M_i/m_ tsin (2θ_i ) (e⃗_z×e⃗_i)/| e⃗_z×e⃗_i ||.The derivation of Eq. 
(<ref>) is given in the appendix <ref>.We evaluate F(x(a),ζ(e)) using two approximations. The first case is that only the nearest BH (i=1) is incorporated in the calculation of ζ⃗. This approximation was adopted in the previous studies <cit.> for single-mass PBH mass functions. In that case, all the PBHs have the same mass and the nearest BH (i=1) exerts the strongest torque on the BH binary. Given that the torque by an outer BH is suppressed by the inverse cube of the distance, the approximation of taking only the nearest BH into account is physically natural as the zero-th order approximation[ The cumulative torque from all objects in a logarithmic radius bin of width Δln y(e.g. here we may set Δln y ∼Δ y / y ∼ n_ BH^-1/3 / y) follows from the central limit theorem and is described by a normal distribution with zero mean and root-mean-square that corresponds to Δ N^1/2g_1, RMS,where Δ N is the number of objects in that logarithmic radius bin andg_1, RMS = 2^-1/2 (x/y)^-3 (M_ RMS/m_ t)sin(2θ)_ RMS.This may be estimated roughly as Δ N = 4π n_ BH y^3 Δln y.Therefore, the relative cumulative contribution of distant objects to the torque scales with y^-3/2,and so the smallest y dominates the integral where the number of objects is ∼ 1.].On the other hand, if the mass function is multimass, a massive outer BH may exert a stronger torque than a low-mass inner one. The wider the mass function, the more likely it is that this possibility may arise. To take into account the effect of outer perturbers, in our second estimatewe consider a flat mass function up to a certain BH mass m_ max andinclude the outer BHs to evaluate the torque.In what follows, we evaluate F (x(a),ζ(e)) and the intrinsic probability distribution for these two cases, separately. §.§ Case 1: torque only from the nearest BHIn this subsection, we make an approximation that the torque is exerted only by the nearest BH. Accordingly, the function g defined by Eq. (<ref>) becomesg= x^3/y_1^3M_1/m_ tsin (2θ_1).Even after this simplification,it is hard to evaluate the integral (<ref>) analytically. For an analytic estimate, we carry out the calculation for an arbitrary but fixed value β=sin (2θ_1). Our result is insensitive to the value of β as long as it is not extremely close to zero. Since the probability of realizing β≪ 1 is suppressed (see discussion after Eq. (<ref>) for the estimation of this probability),we think that this simplification does not lose the essential feature of the merger-rate density. The integral over y_1 simplifies toF(x(a),ζ (e))= Θ (a_ max-a)12π^2 n_ BH/1-e^2β( a/A)^5/4×∫ dM_1 f(M_1) M_1/m_ texp( -2π n_ BH M_1/√(1-e^2)m_ t( a/A)^3/4β)×Θ( M_1/m_ tβ-2/3√(1-e^2)).The PBH binaries at the time of their formation are highly eccentric (e ≈ 1). Since the PBH mass function is implicitly assumed to be narrow in the present case, M_1 does not differ from m_ t significantly, and the argument of the last Heaviside function is positive unless β is smaller than 2/3m_ t/M_1√(1-e^2). Now, let us estimate the probability that β becomes smaller than the critical value β_c for which the argument of the Heaviside function becomes zero. To this end, we again consider the monochromatic mass function and use the eccentricity given bythe first case of Eq. (<ref>). Then, β_c becomes β_c ≃ 0.01 × f_ PBH^16/37( t/t_0)^3/37( m_ BH/10 )^5/37.For β_c ≪ 1, the probability that β happens to be smaller than β_c is approximately given byP(β < β_c )≈β_c^2/16≃ 6× 10^-6,for the fiducial values used in Eq. (<ref>). 
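Since the argument that follows rests on this probability being tiny, the two expressions just quoted can be packaged into a two-line sketch. It simply re-evaluates β_c and the small-β estimate P(β < β_c) ≈ β_c^2/16 for a few values of f_PBH; masses are in units of 10 M_sun as in the quoted scaling, and no new physics is introduced.

```python
def beta_c(f_pbh, m_bh_msun, t_over_t0=1.0):
    """Critical value of beta = sin(2*theta_1) below which the Heaviside factor switches off."""
    return 0.01 * f_pbh ** (16.0 / 37.0) * t_over_t0 ** (3.0 / 37.0) \
        * (m_bh_msun / 10.0) ** (5.0 / 37.0)

def prob_beta_below(bc):
    """Quoted small-beta estimate P(beta < beta_c) ~ beta_c^2 / 16."""
    return bc ** 2 / 16.0

if __name__ == "__main__":
    for f in (1.0, 1e-1, 1e-3):
        bc = beta_c(f, 30.0)
        print(f"f_PBH = {f:>6g}:  beta_c = {bc:.2e},  P(beta < beta_c) = {prob_beta_below(bc):.1e}")
```

Even in the least favourable case the probability stays at the 10^-6 level or below, which is why the Heaviside factor can safely be replaced by unity in what follows.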
This probability is much smaller than unity,and we replace the last Heaviside function by 1 in the following analysis. Then, the intrinsic probability distribution (<ref>) becomesP_ intr(m_1,m_2,t)=1/8t∫ dM_1 1/βm_ t/M_1 K^2 f(M_1)/n_ BH×∫_0^e_ m de e (1-e^2)^-45/16exp[ -K (1-e^2)^-37/32],where we have introduced a dimensionless parameter K byK ≡ 2π n_ BHM_1/m_ t A^-3/4( t/Q)^3/16β.This is a small parameter. For instance, for a single-mass PBH mass function with mass m_ BH and the Hubble time t=1/H_0, we haveK= ( 170/3)^3/16( 3/π)^1/4(1+z_ eq)^3/4πΩ_m^1/4 f_ PBH(G m_ BH H_0 )^5/16β∼3 × 10^-4 f_ PBHβ( m_ BH/10 )^5/16,where f_ PBH is the mass fraction of the PBHs to the entire cold dark matterThe integration over e can be expressed in terms of the incomplete gamma function. Then, Eq (<ref>) becomesP_ intr(m_1,m_2,t) = 2/37 t∫ dM_1 1/βm_ t/M_1f(M_1)/n_ BH K^16/37×[ G(K)-G ( M_1/m_c) ],where m_c and G(x) are defined bym_c ≡m_ t/2 π n_ BH1/β( t/Q)^1/7(1+z_ eq)^4/7( ρ_c,0Ω_m/m_ t)^25/21, G(x)=Γ( 58/37, x ).For the monochromatic mass function, m_c is given bym_c ∼ 7× 10^-4 (f_ PBHβ)^-1( m_BH/10 )^26/21.Eq. (<ref>) for arbitrary f(M) mass function is the final expression of the intrinsic merger probability distribution in the present case. §.§ Case 2: torque from the outer BHsLet us next consider the case in which the PBH mass function is flat fromm_ min=ϵ m_ max to m_ max and vanishes outside of it.As mentioned earlier, we implicitly assume that ϵ≳ 0.1. Then the PBH mass function is given byf(m)=1/m_ max (1-ϵ)Θ (m_ max-m ) Θ (m-m_ min ). We include not only the nearest BH but also outer BHs.It is extremely difficult to perform the integration of Eq. (<ref>)analytically [Analytic expression of the probability distributionfor the eccentricity was derived for the monochromatic mass function in <cit.>.]. However, we can estimate the approximate behavior of F(x,ζ) in the domain n_ BH x^3 ≪ 1 where the PBH binaries with lifetime comparable to the age of the Universe form[PBH binaries with n_ BHx^3 ∼ 1 have larger semimajor axis and more circular orbit than those with n_ BHx^3 ≪ 1. These two factors make the lifetime of the binaries much longer than the age of the Universe.]. To this end, let us first write F(x,ζ) asF(x,ζ)=4π x^2/n_ BH^-1 e^-4π/3n_ BH x^3 P(x,ζ),where P(x,ζ_0)dζ is a probability that ζ takes value in the interval (ζ_0,ζ_0+dζ) for given x. For later convenience, let us define ζ̃ by (m_ t/m_ max) ζ. Thus, we haveF(x,ζ) ≈ 4π n_ BHx^2 P̃(x,ζ̃) m_ t/m_ max,where P̃(x,ζ̃_0)dζ̃ is the probability that ζ̃ takes a valuein the interval (ζ̃_0, ζ̃_0+dζ̃) for given x.Looking at the definition of ζ⃗, we expect that the typical value of ζ̃ for given x is around n_ BHx^3 since y_i (i= O(1)) is typically about n_ BH^-1/3 and the contribution from y_i with higher i is suppressed (see footnote <ref>). Noting that y_i > x, the case in which ζ≪ n_ BHx^3 is realized by either if y_1 ≫ n_ BH^-1/3 or if accidental cancellation takes place among terms with different i. Since the former is suppressed exponentially as ∼ e^-4π/3n_ BHy_1^3, the latter, which is stochastic, dominates. Recalling that ζ⃗ is essentially a two-dimensional vector, the probability that ζ̃ is in the thin ring (ζ̃,ζ̃+dζ̃) by the random choice is proportional to the ring area, namely ζ̃ dζ̃. Thus, we expect P̃(x,ζ̃) ∝ζ̃,for ζ̃≪ n_ BHx^3. On the other hand, the case ζ̃≫ n_ BHx^3 is realized mainly when y_1 is accidentally much smaller than the typical value n_ BH^-1/3. 
The probability of such a situation is controlled by the volume element y_1^2 dy_1, and the relation ζ̃∝ y_1^-3 leads to y_1^2 dy_1 ∝ζ̃^-2 dζ̃. Thus, we expect P̃(x,ζ̃) ∝ζ̃^-2,for ζ̃≫ n_ BHx^3. From the definition of ζ⃗ given by Eq. (<ref>), we have σ^2 ≡⟨ζ⃗^2 ⟩= 32 π/135( m_ max/m_ t)^2n_ BH x^3 (1+ϵ+ϵ^2).The derivation of this result is given in appendix <ref>. One simple function that interpolates Eqs. (<ref>) and (<ref>) is given by P̃ (x,ζ̃) = 3√(3)/2πξ^1/3σ̃^2ζ̃/ζ̃^3+ξσ̃^6,where σ̃= (m_ t/m_ max) σ, ξ= O(1) is a fitting parameter, and the normalization condition is imposed.In order to check the validity of the approximation (<ref>), we evaluate P̃(x,ζ̃) numerically by the Monte Carlo method. For this purpose, we first fix N and x.Then, we randomly generate a set of random variables { M_i, y_i, θ_i, ϕ_i } and compute ζ̃. By repeating this process many times, we obtain the distribution of ζ̃ for a given N and x up to the statistical uncertainty.Figure <ref> shows the distribution of ten thousand realizations of ζ̃for N=5 for ϵ=0.1, 4π/3 n_ BH x^3=(10^-2,5× 10^-3,2× 10^-3, 10^-3). The red curve represents the distribution obtained by the Monte Carlo calculations, and blue one represents the analytic approximation (<ref>) with ξ =5.5. We find that this simple ansatz of P̃(x,ζ̃) fairly recovers the numerically obtained probability distribution. Although we consider the flat mass function, we expect that the ansatz should work qualitatively for other mass functions since the asymptotic behaviors (<ref>) and (<ref>) are determined independently of the mass function.In what follows, we adopt Eq. (<ref>). Then, F(x,ζ) becomesF(x,ζ)=6√(3)ξ^1/3 n_ BHσ̃^2 ζ x^2 ( m_ t/m_ max)^2[ ( m_ t/m_ max)^3 ζ^3+ξσ̃^6 ]^-1. Substituting F(x,ζ) given by Eq. (<ref>) into Eq. (<ref>), after some algebra, we obtain P_ intr =135 √(3)/256π t1/ξ^1/3 (1+ϵ+ϵ^2)ν^16/37m_ t/m_ max∫_w_ m^∞w^21/32/w^111/32+1 dw.Here we have defined a dimensionless quantity ν by ν =16π/45ξ^1/3 (1+ϵ+ϵ^2) n_ BHm_ max/m_ t A^-3/4( t/Q)^3/16,and we have changed the integration variable as w=ν^-32/37 (1-e^2), and w_ m=ν^-32/37 (1-e_ m^2). Using a relation n_ BH=2ρ_ BH/(m_ max (1+ϵ)), which is valid for a flat mass function,we havew_ m = ( 32π/45γ^1/31+ϵ+ϵ^2/1+ϵ)^-32/37(1+z_ eq)^128/259 f_ PBH^-32/37( ρ_c,0Ω_m/m_ t)^128/777×( G^3 m_1m_2m_ t/3t )^32/259.To estimate typical magnitude of w_ m, for equal mass binary (m_1=m_2=m_ BH), w_ m is given byw_ m≈ 2× 10^-4 f_ PBH^-32/37( m_ BH/)^160/777.This shows that w_ m can be bigger or smaller than unity within the range of the feasible values of f_ PBH and m_ BH. Although the integration over w in Eq. (<ref>) can be expressed in terms of the hypergeometric function, we do not write it explicitly here since it gives no useful information. Thus, Eq. (<ref>) is the final expression of the intrinsic merger rate and the main result of this subsection.§ HIDDEN UNIVERSALITY IN THE MERGER RATE DENSITYIn the previous section, we have derived the analytic expression of P_ intrin the m_1-m_2 plane for the two different limiting casescorresponding to the different approximations. According to Eq. (<ref>), the observable merger rate density is not P_ intr, but P_ intr weighted by the PBH mass function. The observable merger event density is highly dependent on the PBH mass function, and it appears at first glance that no definite prediction can be extractedfor the PBH scenario without choosing the specific mass function. 
Contrary to this naive guess, there is a unique feature expressed as a mathematicalrelation for the differentiated merger rate density specific to the PBH scenario as we will show below. Such a relation could be quite useful as a powerful method for testingthe PBH scenario when the sufficient number of merger events have been accumulated.Let us first consider the case where P_ intr is given by Eq. (<ref>). This expression of P_ intr still containsthe integration over the PBH massnearest to the BH binary. Although this integration cannot be done explicitly without choosing the specific PBH mass function, carrying out the explicit integration is not needed for our present purpose. The function G(x) appearing in the integrand is monotonically decreasing andits asymptotic behavior is given asG(x)= 21/37Γ( 21/37) -37/58 x^58/37+ O( x^95/37),     (x ≪ 1)x^21/37e^-x( 1+ O(x^-1) ).     (x ≫ 1).Using this formula and noting that K, which is much smaller than unity according to Eq. (<ref>),is always less than M_1/m_c,we find that the integrand of Eq. (<ref>) becomesm_ t/M_1f(M_1)/n_ BH K^16/37[ G(K)-G ( M_1/m_c) ]= 37/58m_ t/M_1f(M_1)/n_ BH K^16/37( M_1/m_c)^58/37,    M_1/m_c<121/37Γ( 21/37)m_ t/M_1f(M_1)/n_ BH K^16/37,    M_1/m_c>1A crucial consequence of these approximate expression is that the integrand has a simple scaling property with m_1 and m_2. Using the scalings,K ∝ m_ t^-1/16(m_1m_2)^3/16,      m_c ∝ m_ t^-1/21(m_1m_2)^1/7,we find that the above integrand scales asm_ t/M_1f(M_1)/n_ BH K^16/37 [ G(K)-G ( M_1/m_c) ] ∝m_ t^22/21(m_1 m_2)^-1/7,     M_1/m_c<1 m_ t^36/37(m_1 m_2)^3/37,     M_1/m_c>1.Because of this factorization, the same scaling for m_1 m_2 and m_ t remains for P_ intr. Assuming one of the branches (M_1 < m_c or M_1 > m_c) dominates the integral, P_ intr scales asP_ intr (m_1, m_2,t) ∝m_ t^22/21(m_1 m_2)^-1/7,     (M_1 < m_c  dominates) m_ t^36/37(m_1 m_2)^3/37,     (M_1 > m_c  dominates).Then, the observable merger rate density ℛ per unit time and unit volume defined by Eq. (<ref>) can be written as ℛ(m_1,m_2,t)=C_A m_ t^22/21 h_A(m_1) h_A(m_2),   (M_1 <m_c  dominates) C_B m_ t^36/37 h_B(m_1) h_B(m_2),   (M_1 > m_c  dominates)where h_A(m) ≡ m^-1/7 f(m), h_B(m) ≡ m^3/37 f(m) andC_A, C_B are quantities that are independent of m_1 and m_2, but contain information of f(m). An interesting point of Eq. (<ref>) is that the dependence of themerger rate density on the total mass m_ t is independent of the model-dependent functions h_A(m) or h_B(m) (namely, mass function) and is completely determined as∝m_ t^36/37 for the former case and ∝m_ t^22/21 forthe latter case. The mass function enters the game only through the total normalization constant(represented as C_A and C_B) and the factorizable part h_A(m_1) h_A(m_2) or h_B(m_1) h_B(m_2). Thus, by focusing on the total mass part of merger rate density and picking it up, we can provide a definite prediction for the merger rate density which is insensitive to the shape and amplitude of the PBH mass function. Indeed, we can pick up the total mass part by taking the logarithm of ℛ and then differentiating it by m_1 and m_2, namelyα (m_1,m_2,t)≡ -m_ t^2 ∂^2/∂ m_1 ∂ m_2lnℛ(m_1,m_2,t) = 36/37,     (M_1 < m_c  dominates)22/21,     (M_1 > m_c  dominates)for any (m_1, m_2). As discussed at the beginning of Sec. <ref>, the merger rate density R can be determined in principle by observations if a sufficient number of BH merger events are detected and the potential detection bias can be appropriately eliminated. 
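As an aside on the factorization just derived: because ln ℛ separates into a function of m_t plus single-mass terms, the mixed derivative in the definition of α removes the h(m_1)h(m_2) part entirely, whatever the mass function is. The sketch below verifies this numerically with a deliberately arbitrary, made-up single-mass factor h(m), recovering the input exponent of m_t via a central finite-difference stencil.

```python
import numpy as np

def alpha(logR, m1, m2, h=1e-3):
    """alpha = -(m1+m2)^2 d^2 ln R / (dm1 dm2), central finite differences."""
    mixed = (logR(m1 + h, m2 + h) - logR(m1 + h, m2 - h)
             - logR(m1 - h, m2 + h) + logR(m1 - h, m2 - h)) / (4.0 * h * h)
    return -(m1 + m2) ** 2 * mixed

def make_logR(gamma):
    """ln R for a factorized rate R = m_t^gamma h(m1) h(m2); the single-mass
    factor h(m) below is an arbitrary smooth stand-in for the mass function."""
    def log_h(m):
        return -0.14 * np.log(m) + 0.3 * np.sin(m)   # any smooth function works
    return lambda m1, m2: gamma * np.log(m1 + m2) + log_h(m1) + log_h(m2)

if __name__ == "__main__":
    for gamma in (36.0 / 37.0, 22.0 / 21.0):
        a = alpha(make_logR(gamma), m1=12.0, m2=27.0)
        print(f"input m_t exponent {gamma:.4f}  ->  recovered alpha = {a:.4f}")
```

The recovered α equals the input exponent to numerical precision and is completely insensitive to the chosen h(m), which is the content of the universality claim.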
Thus, the quantity α on the left-hand side can be also determined observationally. In this sense, the left hand side can be determined by observations. Our PBH merger scenario predicts that this quantity is equal to 36/37 for the upper case and 22/21 for the lower case. In reality, what is realized lies between the above two cases, and the left hand side of Eq. (<ref>) may take a valuebetween the two values corresponding to the upper case and the lower case respectively. Given that the numerical values on the right hand side for both cases are close to 1 (within less than 5%), the left hand side of Eq. (<ref>) in the mixture case would be alsoclose to 1. Taking into account this possibility, we conclude that under the assumption of the uniform spatial distribution of PBHs the merger rate density satisfies the following relation 36/37≤α (m_1,m_2,t) ≤22/21.This relation is robust in the sense that it is independent of the underlying mass function.Similar conclusion can be drawn to the second case whereP_ intr is given by Eq. (<ref>). In this case, the observable merger rate density (Equation <ref>) is given by ℛ =135 √(3)/512π tν^16/37/ξ^1/3 (1+ϵ+ϵ^2) (1-ϵ)^2 n_ BH/m_ max^2m_ t/m_ max∫_w_ m^∞w^21/32/w^111/32+1 dw.As we have done in the case 1, let us evaluate the integral for two limiting cases (w_ m≪ 1 and w_ m≫ 1), separately.First, when w_ m≪ 1, we can extend the lower limit of the integral to 0. As a result, we obtain ℛ =C_1/tν^16/37( n_ BH/m_ max)^2 m_ t/m_ max,where C_1 is a constant of order unity. Using the scaling for ν as (see Eq. (<ref>)) ν∝ m_ t^-1/16(m_1m_2)^3/16, ℛ can be written as ℛ(m_1,m_2,t)=C̃_1 m_ t^36/37 h_1(m_1) h_1(m_2),where h_1(m) ≡ m^3/37 f(m) andC̃_1 is a quantity that is independent of m_1, m_2, but contains information of f(m). As with the above discussion for the case 1, ℛ has a unique dependence on m_ t. This dependence can be again extracted by considering the quantity α as α (m_1,m_2,t)=36/37. This value precisely coincides with the lower end of Eq. (<ref>).Let us next investigate the case w_ m≫ 1. In this case, we obtain ℛ≈C_2/tν^16/37( n_ BH/m_ max)^2 m_ t/m_ max w_ m^-29/16,where C_2 is a constant of order unity. Using the scaling for w_ m as (see Eq. (<ref>))w_ m∝ m_ t^-32/777(m_1m_2)^32/259,as well as that for ν, we find ℛ(m_1,m_2,t)=C̃_2 m_ t^22/21 h_2(m_1) h_2(m_2),where h_2(m) ≡ m^-1/7 f(m) andC̃_2 is a quantity that is independent of m_1, m_2, but contains information of f(m). Then, we find α (m_1,m_2,t)=22/21. This value precisely coincides with the upper end of Eq. (<ref>). Thus, the range of α in the present case is also given by Eq. (<ref>).To summarize, our study demonstrates that 0.97 ≲α≲ 1.05 holds in the considered PBH scenario in which PBHs form binaries in the early universe. The uncertainty in α is small enough to distinguish the PBH scenario from different scenarios for explaining the origin of the merging BH binaries once a sufficiently large number of merger events are measured. For instance, <cit.> considered the formation of PBH binaries due to close encounters in dark matter halos at low redshifts. This PBH scenario gives a different merger rate density, i.e. <cit.> R(m_1,m_2,t)=C m_1^2/7 f(m_1)m_2^2/7 f(m_2)m_ t^10/7,where C is a quantity independent of m_1 and m_2. 
For this process, Equation (<ref>) gives α=10/7≈ 1.43.Thus, this scenario predicts a unique and different value from the one studied in this paper.<cit.> has recently extended this analysis to systems in collisional equilibrium where mass segregation takes places such as in galactic nuclei. In this case α is a unique function of the total binary mass. Another example is the astrophysical scenario in which the BH binaries form and evolve due to dynamical encounters in dense stellar environments. In this scenario, <cit.> found that approximately P_ intr=ℛ(m_1,m_2)/[f(m_1)f(m_2)]∝ m_ t^4. In this case the higher mass mergers are much more probable mainly due to the mass dependence of binary formation during chance triple encounters, exchange interactions, mass segregation and dynamical hardening effects.If the intrinsic merger probability does not depend on the symmetric mass ratio η=m_1m_2/(m_1+m_2)^2,then we get α=4 for this process. Clearly, a α∼ 4 value is largely outside of the region obtained for both PBH scenarios mentioned above. When a sufficient number of mergers accumulates to determine α, it may be possible to exclude several formation scenarios and pin down the most likely scenario. In order to crudely estimate the necessary sample size to measure α from future GW detections,we generate a mock Monte Carlo sample of BHs drawn from a fiducial flat mass function between a range of masses 5 and 30,and generate a random merger sample by randomly drawing objects with probability proportional to (m_1+m_2)^α. For this order-of-magnitude estimate we neglect the measurement error of mass, since the mass measurement accuracy is expected to be much smaller than the range of BH masses, i.e. Δ m_1,2/m_1,2∼ 25% for half of the sources for the design sensitivity of second generation GW instruments including Advanced LIGO, Advanced VIRGO, and KAGRA <cit.>. [If heavy BHs exist with mass 30 <m_1,2<50, the median mass measurement errors are expected to be of order 40%<cit.>.] We generate a 2D histogram of events and fit the value of α. Repeating this analysis 1000 times for fixed fiducial α gives an approximate posterior distribution function of the measured α. This analysis shows that a sample of 100 events is necessary to measure α tointeger accuracy and 1000 events would allow to measure it with an error of 0.15 if the fiducial value of α is between 1 and 3.The current rate estimates predict ℛ = 12–240Gpc^-3 yr^-1.Assuming a maximum detection distance of z=0.5 for the design sensitivity of second generation instruments,a sample of ∼ 100 events (1000 events) will accumulate in between 6 and 120 days (60 days and 3.3 years). § SUMMARYThere is a growing interest in the possibility that the merging BHs detected by LIGO are primordial.Previous study <cit.> showed that the BH binary merger event rate estimated by LIGO can be explained by the PBHs which constitute only a tiny fraction of the entire dark matter. While the estimated masses of the individual BHs show some spread 10 ∼ 30, it was assumed in the previous study that all the PBHs have the same mass of 30. Although this is a reasonable approximation when only the first event for which masses of two BHs in the binary are almost the same is observationally known, it hugely compresses the valuable information aboutthe event rate distribution in the BH mass plane.In this paper, we extended the formalism to compute the merger event rate to the case where the PBH mass function is not monochromatic. 
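As a complement to the sample-size estimate described at the end of the previous section, the sketch below re-implements a simplified version of that mock Monte Carlo: masses are drawn from an assumed flat mass function on 5–30 M_sun, merger pairs are selected with probability proportional to (m_1+m_2)^α, and α is then re-fitted. Unlike the procedure in the text, the fit here is a one-parameter maximum-likelihood fit and mass-measurement errors are ignored, so the resulting scatter is only indicative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
M_MIN, M_MAX = 5.0, 30.0          # assumed flat mass function range [M_sun]

def draw_mergers(alpha_true, n_events):
    """Draw (m1, m2) with probability ~ f(m1) f(m2) (m1+m2)^alpha_true (rejection sampling)."""
    out = np.empty((0, 2))
    w_max = (2.0 * M_MAX) ** alpha_true
    while len(out) < n_events:
        m = rng.uniform(M_MIN, M_MAX, size=(4 * n_events, 2))
        keep = rng.uniform(size=len(m)) < m.sum(axis=1) ** alpha_true / w_max
        out = np.vstack([out, m[keep]])
    return out[:n_events]

def fit_alpha(mtot):
    """Maximum-likelihood fit of alpha from the observed total masses."""
    g = np.linspace(M_MIN, M_MAX, 200)
    log_mt_grid = np.log(g[:, None] + g[None, :])    # flat f(m): plain average below
    mean_log_mt = np.log(mtot).mean()
    def neg_loglike(a):
        return np.log(np.mean(np.exp(a * log_mt_grid))) - a * mean_log_mt
    return minimize_scalar(neg_loglike, bounds=(-2.0, 8.0), method="bounded").x

if __name__ == "__main__":
    for n_events in (100, 1000):
        est = [fit_alpha(draw_mergers(1.0, n_events).sum(axis=1)) for _ in range(100)]
        print(f"N = {n_events:5d}:  alpha_hat = {np.mean(est):.2f} +/- {np.std(est):.2f}")
```

The qualitative conclusion is the same as in the text: of order a hundred well-measured mergers suffice for an integer-level determination of α, while samples of order a thousand begin to resolve the fine differences between the scenarios discussed above.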
Our basic assumption on the mass function made throughout this paper is that it is not widely extended over many orders of magnitude in the BH mass range but is confined to the mass range ∼ 10.The derived formula (<ref>) contains multiple integrations over many random variables (Eq. (<ref>)) andis complicated enough to defeat the exact analytic computation. Based on the physical expectation that among remote BHs, the closest one gives the largest torque on average, we evaluated the simplified version of Eq. (<ref>) in which only the closest BH is taken into account. In this case, the computation becomes much more feasible. We found that the quantity α constructed from the merger rate density ℛ in the BH mass plane as α (m_1,m_2,t) ≡ -(m_1+m_2)^2 ∂^2/∂ m_1 ∂ m_2lnℛ(m_1,m_2,t),becomes almost independent of the PBH mass function and takes a value close to unity (0.97 ≲α≲ 1.05). Since it is possible that several distant BHs generate the dominant torque instead of the closest one during binary formation in the early universe, we have also considered the case in which the remote BHs are taken into account for a flat PBH mass function. Even in this case, we found that the quantity α exactly coincides with the one derived for the case of the closest perturbing BH. This suggests that the determined value of α is robust to observationally test the PBH scenario once a large sample of mergers becomes available with accurately determined masses.Other astrophysical mechanisms leading to BH mergers are generally expected to yield different α values. Recently, <cit.> has shown that the probability of merger is proportional to m_ t^4 for binary BH mergers in dense star clusters, which implies α∼ 4 if the merger rates are nearly independent to mass ratio.PBH binaries formed in the low redshift Universe by GW emission during close encounters leads to α≈ 1.43<cit.>. BH binaries formed by GW emission in mass-segregated environments such as galactic nuclei lead to α values that vary with the total binary mass m_ t<cit.>.The mass distribution is not the onlyGW observable which allows one to distinguish between different mechanisms leading to binary BH mergers. For instance, it was shown recently that PBHs are unlikely to possess large spins <cit.>. When the statistics of BH spins is accumulated in the future,this will also become a powerful discriminator.Further, the eccentricity distribution will be useful to distinguish binaries formed by GW capture in high velocity dispersion environments at low redshifts <cit.>. The observable PBH binaries that formed at high redshifts are expected to have close to zero eccentricity due to circularization by GW emission <cit.>.LISA will be able to determine the eccentricity for mergers with e ≳ 10^-6<cit.>. Detection of BHs with masses less than ∼ 1, which may be possible with the advanced LIGO, VIRGO, and KAGRA at design sensitivity, would provide strong evidence of the existence of PBHs <cit.>. Finally, future GW detectors will allow us to map out the cosmological luminosity distance (or redshift) distributionfor BH mergers to high redshifts <cit.>. Examining the multidimensional GW event rate distribution will be essential to prove or disprove the PBH scenario.This work was supported by MEXT KAKENHI Nos. 17H06357 (T.T. and T.S.), 17H06358 (T.T.), 17H06359 (T.S.), 15H05888 (T.S and S.Y.), 15H02087 (T.T.),and 15K21733 (T.S. and S.Y.), JSPS Grant-in-Aid for Young Scientists (B) No.15K17632 (T.S.) and No.15K17659 (S.Y.),the Grant-in-Aid for Scientific Research No. 
26287044 (T.T.). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 638435 (GalNUC) and by the Hungarian National Research, Development, and Innovation Office grant NKFIH KH-125675 (B.K.). This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607761.

§ DERIVATION OF THE PROBABILITY DISTRIBUTION

The non-trivial part of Eq. (<ref>) is the probability distribution for x and y_i (i=1,⋯,N), and we focus on this part only. Let P(N,V) be the probability that there are N BHs in the volume V. For BHs that are uniformly and randomly distributed, we have P(N,V)=1/N!( V/V_0)^N e^-V/V_0, where V_0 is the volume for which the expected particle number is 1. Thus, V_0=n_ BH^-1. Then, the probability that the situation shown in Fig. <ref> is realized is given by dP =P( 0,4π/3 x^3 ) d( 4π/3 x^3 )/V_0 P( 0, 4π/3 y_1^3-4π/3 x^3 )d( 4π/3 y_1^3 )/V_0 ⋯ P( 0, 4π/3 y_N^3-4π/3 y_N-1^3 ) d( 4π/3 y_N^3 )/V_0 =4π x^2 dx/V_0 4π y_1^2 dy_1/V_0 ⋯ 4π y_N^2 dy_N/V_0 exp( -4π y_N^3/3 V_0).

From the definition of ζ⃗ in Eq. (<ref>), we have ⟨ζ⃗^2 ⟩=x^6/m_ t^2∑_i=1^N ∑_j=1^N ⟨1/y_i^3 1/y_j^3⟩⟨ M_i M_j ⟩⟨sin (2θ_i) sin (2θ_j) (e⃗_z×e⃗_i)/| e⃗_z×e⃗_i |·(e⃗_z×e⃗_j)/| e⃗_z×e⃗_j |⟩. Using Eq. (<ref>) for e⃗_i, we obtain ⟨sin (2θ_i) sin (2θ_j) (e⃗_z×e⃗_i)/| e⃗_z×e⃗_i |·(e⃗_z×e⃗_j)/| e⃗_z×e⃗_j |⟩= 8/15 δ_ij. By the assumption that M_i obeys the uniform distribution in the interval (ϵ m_ max,m_ max), we have ⟨ M_i^2 ⟩ = 1/3 m_ max^2 (1+ϵ+ϵ^2). Thus, we obtain ⟨ζ⃗^2 ⟩=8/45 x^6/m_ t^2 m_ max^2 (1+ϵ+ϵ^2) ∑_i=1^N ⟨1/y_i^6⟩. The calculation of ∑_i=1^N ⟨ 1/y_i^6 ⟩ in the limit N→∞ can be done by noting that it is the expectation value of 1/y^6, where y is the distance to particles randomly distributed in the region y > x <cit.>: lim_N→∞∑_i=1^N ⟨1/y_i^6⟩ = ∫_x^∞ 4π y^2 dy/n_ BH^-1 1/y^6 = 4π/3 n_ BH/x^3. Plugging this result into Eq. (<ref>) finally yields ⟨ζ⃗^2 ⟩=32π/135 n_ BH x^3 ( m_ max/m_ t)^2 (1+ϵ+ϵ^2).
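This final expression can be checked end-to-end with a short Monte Carlo that samples the same random variables used in the derivation (radii drawn from the uniform random process outside x, masses flat on (ϵ m_max, m_max), isotropic directions) and compares the sample second moment of ζ̃ = (m_t/m_max) ζ with the analytic result. All numerical choices below (ϵ = 0.1, (4π/3) n_BH x^3 = 10^-2, five perturbers) are illustrative assumptions; the comparison agrees up to Monte Carlo noise of a few per cent, the slow convergence being due to the heavy tail of the ζ̃ distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_zeta_tilde(n_bh_x3, eps=0.1, n_pert=5, n_samples=200_000):
    """Monte Carlo draws of zeta_tilde = (m_t/m_max) * |zeta_vec| for N perturbers.

    n_bh_x3 = (4*pi/3) n_BH x^3.  The radii y_i are generated as the ordered
    points of the uniform random process outside x, exactly as in dP above.
    """
    dV = rng.exponential(1.0, size=(n_samples, n_pert))   # volume gaps in units of 1/n_BH
    V = n_bh_x3 + np.cumsum(dV, axis=1)                   # (4 pi / 3) n_BH y_i^3
    ratio3 = n_bh_x3 / V                                  # (x / y_i)^3
    M = rng.uniform(eps, 1.0, size=(n_samples, n_pert))   # M_i / m_max
    cos_t = rng.uniform(-1.0, 1.0, size=(n_samples, n_pert))
    sin2t = 2.0 * cos_t * np.sqrt(1.0 - cos_t**2)         # sin(2 theta_i)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_pert))
    # (e_z x e_i)/|e_z x e_i| = (-sin phi_i, cos phi_i, 0):
    zx = np.sum(ratio3 * M * sin2t * (-np.sin(phi)), axis=1)
    zy = np.sum(ratio3 * M * sin2t * np.cos(phi), axis=1)
    return np.hypot(zx, zy)

if __name__ == "__main__":
    eps, n_bh_x3 = 0.1, 1e-2
    z = sample_zeta_tilde(n_bh_x3, eps)
    # Analytic result: (32 pi/135) n_BH x^3 (1+eps+eps^2) = (8/45) n_bh_x3 (1+eps+eps^2).
    sigma2 = (8.0 / 45.0) * n_bh_x3 * (1.0 + eps + eps**2)
    print(f"<zeta_tilde^2>:  Monte Carlo = {np.mean(z**2):.3e}   analytic = {sigma2:.3e}")
```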
]Giannis Bekouliscor1 [email protected] ]Johannes Deleu [email protected] ]Thomas [email protected] ]Chris Develder [email protected] [cor1]Corresponding author Ghent University – imec, IDLab, Department of Information Technology,Technologiepark Zwijnaarde 15, 9052 Ghent, Belgium In processing human produced text using natural language processing (NLP) techniques, two fundamental subtasks that arise are[label=(*)]* segmentation of the plain text into meaningful subunits (entities), and* dependency parsing, to establish relations between subunits.Such structural interpretation of text provides essential building blocks for upstream expert system tasks: from interpreting textual real estate ads, one may want to provide an accurate price estimate and/or provide selection filters for end users looking for a particular property — which all could rely on knowing the types and number of rooms, etc. In this paper we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We will focus in particular on the real estate ad setting, aiming to convert an ad to a structured description, which we name property tree, comprising the tasks of [label=(*)]* identifying important entities of a property (rooms) from classifieds and* structuring them into atree format. In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by [label=(*)]* avoiding the error propagation that would arise from the subtasks one after the other in a pipelined fashion, and* exploiting the interactions between the subtasks.For this purpose, we perform an extensive comparative study of the pipeline methods and the new proposed joint model, reporting an improvement of over three percentage points in the overall edge F_1 score of the property tree.Also, we propose attention methods, to encourage our model to focus on salient tokens during the construction of the property tree. Thus we experimentally demonstrate the usefulness of attentive neural architectures for the proposed joint model, showcasing a further improvement of two percentage points in edge F_1 score for our application. While the results demonstrated are for the particular real estate setting, the model is generic in nature, and thus could be equally applied to other expert system scenarios requiring the general tasks of both [label=(*)]* detecting entities (segmentation) and* establishing relations among them (dependency parsing). neural networks, joint model, relation extraction, entity recognition, dependency parsing § INTRODUCTIONMany consumer-oriented digital applications rely on input data provided by their target audience. For instance, real estate websites gather property descriptions for the offered classifieds, either from realtors or from individual sellers. In such cases, it is hard to strike the right balance between structured and unstructured information: enforcing restrictions or structure upon the data format (predefined form) may reduce the amount or diversity of the data, while unstructured data (raw text) may require non-trivial (hard to automate) transformation to a more structured form to be useful/practical for the intended application. In the real estate domain, textual advertisements are an extremely useful but highly unstructured way of representing real estate properties. 
However, structured descriptions of the advertisements are very helpful, for real estate agencies to suggest the most appropriate sales/rentals for their customers, while keeping human reading effort limited. For example, special search filters, which are usually used by clients, cannot be directly applied to textual advertisements. On the contrary, a structured representation of the property (a tree format of the property) enables the simplification of the unstructured textual information by applying specific filters (based on the number of bedrooms, number of floors, or the requirement of having a bathroom with a toilet on the first floor), and it also benefits other related applications such as automated price prediction <cit.>.The new real estate structured prediction problem as defined by <cit.> has as main goal to construct the tree-like representation of the property (the property tree) based on its natural language description. This can be approached as a relation extraction task by a pipeline of separate subtasks, comprising [label=(*)]* named entity recognition (NER) <cit.> and* relation extraction <cit.>.Unlike previous studies <cit.> on relation extraction, in the work of <cit.>, the relation extraction module is replaced by dependency parsing. Indeed, the relations that together define the structure of the house should form a tree, where entities are part-of one another (a floor is part-of a house, a room is part-of a floor). This property tree is structurally similar to a parse tree. Although the work of <cit.> is a step towards the construction of the property tree, it follows a pipeline setting, which suffers from two serious problems: [label=(*)]* error propagation between the subtasks, NER and dependency parsing, and* cross-task dependencies are not taken into account, terms indicating relations (includes, contains, etc.) between entities that can help the NER module are neglected.Due to the unidirectional nature of stacking the two modules (NER and dependency parsing) in the pipeline model, there is no information flowing from the dependency parsing to the NER subtask. This way, the parser is not able to influence the predictions of the NER. Other studies on similar tasks <cit.> have considered the two subtasks jointly. They simultaneously extract entity mentions and relations between them usually by implementing a beam-search on top of the first module (NER), but these methods require the manual extraction of hand-crafted features. Recently, deep learning with neural networks has received much attention and several approaches <cit.> apply long short-term memory (LSTM) recurrent neural networks and convolutional neural networks (CNNs) to achieve state-of-the-art performance on similar problems. Those models rely on shared parameters between the NER and relation extraction components, whereby the NER module is typically pre-trained separately, to improve the training effectiveness of the joint model. In this work, we propose a new joint model to solve the real estate structured prediction problem. Our model is able to learn the structured prediction task without complicated feature engineering. Whereas previous studies <cit.> on joint methods focus on the relation extraction problem, we construct the property tree which comes down to solving a dependency parsing problem, which is more constrained and hence more difficult. Therefore, previous methods are not directly comparable to our model and cannot be applied to our real estate task out-of-the-box. 
In this work, we treat the two subtasks as one by reformulating them into a head selection problem <cit.>.This paper is a follow-up work of <cit.>. Compared to the conference paper that introduced the real estate extraction task and applied some basic state-of-the-art techniques as a first baseline solution, we now introduce:[label=(*)]* advanced neural models that consider the two subtasks jointly and* modifications to the dataset annotation representations as detailed below.More specifically, the main contributions of this work are the following: * We propose a new joint model that encodes the two tasks of identifying entities as well as dependencies between them, as a single head selection problem, without the need of parameter sharing or pre-training of the first entity recognition module separately. Moreover, instead of[label=(*)]* predicting unlabeled dependencies and * training an additional classifier to predict labels for the identified heads <cit.>, our model already incorporates the dependency label predictions in its scoring formula. * We compare the proposed joint model against established pipeline approaches and report an F_1 improvement of 1.4% in the NER and 6.2% in the dependency parsing subtask,corresponding to an overall edge F_1 improvement of 3.4% in the property tree. * Compared to our original dataset <cit.>, we introduce two extensions to the data: [label=(*)]* we consistently assign the first mention of a particular entity in order of appearance in the advertisement as the main mention of the entity. This results in an F_1 score increase of about 3% and 4% for the joint and pipeline models, respectively.* We add the equivalent relation to our annotated dataset to explicitly express that several mentions across the ad may refer to the same entity.* We perform extensive analysis of several attention mechanisms that enable our LSTM-based model to focus on informative words and phrases, reporting an improved F_1 performance of about 2.1%.The rest of the paper is structured as follows. In sec:related, we review the related work. problem_definition defines the problem and in sec:methodology, we describe the methodology followed throughout the paper and the proposed joint model. The experimental results are reported in sec:results_discussion. Finally, sec:conclusions concludes our work.§ RELATED WORKThe real estate structured prediction problem from textual advertisements can be broken down into the sub-problems of [label=(*)]* sequence labeling (identifying the core parts of the property) and* non-projective dependency parsing (connecting the identified parts into a tree-like structure) <cit.>. One can address these two steps either one by one in a pipelined approach, or simultaneously in a joint model.The pipeline approach is the most commonly used approach <cit.>, treatingthe two steps independently and propagating the output of the sequence labeling subtask (named entity recognition) <cit.> to the relation classification module <cit.>. Joint models are able to simultaneously extract entity mentions and relations between them <cit.>. In this work, we propose a new joint model that is able to recover the tree-like structure of the property andframe it as a dependency parsing problem,given the non-projective tree structure we aim to output. We now present related works for the sequence labeling and dependency parsing subtasks, as well as for the joint models. 
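The first contribution above, casting both subtasks as a single head-and-label selection problem, can be illustrated schematically before turning to related work. The sketch below is only an illustration of the decoding step, not the authors' model: the token list and the scores are invented (random numbers stand in for the outputs of a trained LSTM encoder), and the four relation labels are the ones defined later in the problem-definition section. For every token it jointly picks the most probable (head, label) pair, instead of first choosing a head and then classifying its label.

```python
import numpy as np

rng = np.random.default_rng(0)

tokens = ["spacious", "apartment", "includes", "living", "room"]   # toy example
labels = ["part-of", "segment", "equivalent", "skip"]              # relation labels

n, L = len(tokens), len(labels)
# Placeholder scores s[i, j, l] for "token i has head j with label l".
# In the real model these would come from the neural encoder; here they are random.
scores = rng.normal(size=(n, n, L))

# Joint head-and-label selection: one softmax over all (head, label) pairs per token.
flat = scores.reshape(n, n * L)
probs = np.exp(flat - flat.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
best = probs.argmax(axis=1)

for i, tok in enumerate(tokens):
    j, l = divmod(best[i], L)   # flat index = head_index * L + label_index
    print(f"{tok:>10s}  ->  head = {tokens[j]:>10s}   label = {labels[l]}")
```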
§.§ Sequence labeling Structured prediction problems become challenging due to the large output space.Specifically in NLP, sequence labeling (NER) is the task of identifying the entity mention boundaries and assigning a categorical label (POS tags) for each identified entity in the sentence. A number of different methods have been proposed, namely Hidden Markov Models (HMMs) <cit.>, Conditional Random Fields (CRFs) <cit.>, Maximum Margin Markov Network (M^3N) <cit.>, generalized support vector machines for structured output (SVM^struct) <cit.> and Search-based Structured Prediction (SEARN) <cit.>. Those methods heavily rely on hand-crafted features and an in-depth review can be found in <cit.>. Several variations of these models that also require manual feature engineering have been used in different application settings (biology, social media context) and languages (Turkish) <cit.>. Recently, deep learning with neural networks has been succesfully applied to NER. <cit.> proposed to use a convolutional neural network (CNN) followed by a CRF layer over a sequence of word embeddings.Recurrent Neural Networks (RNNs) constitute another neural network architecture that has attracted attention, due to the state-of-the-art performance in a series of NLP tasks (sequence-to-sequence <cit.>, parsing <cit.>). In this context, <cit.> use a sequence-to-sequence approach for modeling the sequence labeling task. In addition, several variants of combinations between LSTM and CRF models have been proposed <cit.> achieving state-of-the-art performance on publicly available datasets. §.§ Dependency parsingDependency parsing is a well studied task in the NLP community, which aims to analyze the grammatical structure of a sentence. We approach the problem of the property tree construction as a dependency parsing task to learn the dependency arcs of the classified. There are two well-established ways to address the dependency parsing problem, via graph-based and transition-based parsers. Graph-based: In the work of <cit.> dependency parsing requires the search of the highest scoring maximum spanning tree in graphs for both projective (dependencies are not allowed to cross) and non-projective (crossing dependencies are allowed) trees with the Eisner algorithm <cit.> and the Chu-Liu-Edmonds algorithm <cit.> respectively.It was shown that exploiting higher-order information (siblings, grand-parental relation) in the graph, instead of just using first-order information (parent relations) <cit.> may yield significant improvements of the parsing accuracy but comes at the cost of an increased model complexity. <cit.> made an important step towards globally normalized models with hand-crafted features, by adapting the Matrix-Tree Theorem (MTT) <cit.> to train over all non-projective dependency trees. We explore an MTT approach as one of the pipeline baselines. Similar to recent advances in neural graph-based parsing <cit.>, we use LSTMs to capture richer contextual information compared to hand-crafted feature based methods. Our work is conceptually related to <cit.>, who formulated the dependency parsing problem as a head selection problem. We go a step further in that direction, in formulating the joint parsing and labeling problem in terms of selecting the most likely combination of head and label. Transition-based: Transition-based parsers <cit.> replace the exact inference of the graph-based parsers by an approximate but faster inference method. 
The dependency parsing problem is now solved by an abstract state machine that gradually builds up the dependency tree token by token.The goal of this kind of parsers is to find the most probable transition sequence from an initial to some terminal configuration (a dependency parse tree, or in our case a property tree) given a permissible set of actions (LEFT-ARC, RIGHT-ARC, SHIFT) and they are able to handle both projective and non-projective dependencies <cit.>. In the simplest case (greedy inference), a classifier predicts the next transition based on the current configuration. Compared to graph-based dependency parsers, transition-based parsers are able to scale better due to the linear time complexity while graph-based complexity rises to O(n^2) in the non-projective case. <cit.> proposed a way of learning a neural network classifier for use in a greedy, transition-based dependency parser while using low-dimensional, dense word embeddings, without the need of manually extracting features.Globally normalized transition-based parsers <cit.> can be considered an extension of <cit.>, as they perform beam search for maintaining multiple hypotheses and introduce global normalization with a CRF objective. <cit.> introduced the stack-LSTM model with push and pop operations which is able to learn the parser transition states while maintaining a summary embedding of its contents.Although transition-based systems are well-known for their speed and state-of-the-art performance, we do not include them in our study due to their already reported poor performance in the real estate task <cit.> compared to graph-based parsers. §.§ Joint learningAdopting a pipeline strategy for the considered type of problems has two main drawbacks: [label=(*)]* sequence labeling errors propagate to the dependency parsing step, an incorrectly identified part of the house (entity) could get connected to a truly existing entity, and* interactions between the components are not taken into account (feedback between the subtasks), modeling the relation between two potential entities may help in deciding on the nature of the entities themselves.In more general relation extraction settings, a substantial amount of work <cit.> jointly considered the two subtasks of entity recognition and relation extraction. However, all of these models make use of hand-crafted features that: [label=(*)]* require manual feature engineering,* generalize poorly between various applications and* may require a substantial computational cost . Recent advances on joint models for general relation extraction consider the joint task using neural network architectures like LSTMs and CNNs <cit.>. Our work is however different from a typical relation extraction setup in that we aim to model directed spanning trees,or, equivalently, non-projective dependency structures. In particular, the entities involved in a relation are not necessarily adjacent in the text since other entities may be mentioned in between, which complicates parsing. Indeed, in this work we focus on dependency parsing due to the difficulty of establishing the tree-like structure instead of only relation extraction (where each entity can have arbitrary relation arcs, regardless of other entities and their relations), which isthe case for previously cited joint models. 
Moreover, unlike most of these works, which frame the problem as a stacking of the two components, or at least first train the NER module to recognize the entities and then train it further together with the relation classification module, we include the NER directly inside the dependency parsing component.

In summary, the conceptual strengths of our joint segmentation and dependency parsing approach (described in detail in sec:methodology) are the following: compared to state-of-the-art joint models in relation extraction, it (i) is generic in nature, without requiring any manual feature engineering, and (ii) extracts a complete tree structure rather than a single binary relation instance.

§ PROBLEM DEFINITION

In this section, we define the specific terms that are used in our real estate structured prediction problem. We define an entity as an unambiguous, unique part of a property with independent existence (e.g., bedroom, kitchen, attic). An entity mention is defined as one or more sequential tokens (“large apartment”) that can potentially be linked to one or more entities. An entity mention has a unique semantic meaning and refers to a specific entity, or to a set of similar entities (“six bedrooms”). An entity itself can be part-of another entity and can be mentioned in the text more than once with different entity mentions. For instance, a “house” entity could occur in the text with entity mentions “large villa” and “a newly built house”. For the pipeline setting as presented in <cit.>, we further classify entities into types (i.e., we assign a named entity type to every word in the ad). The task is transformed into a sequence labeling problem using BIO (Beginning, Inside, Outside) encoding. The entity types are listed in tab:entitytypes. For instance, in the sequence of tokens “large apartment”, B-PROPERTY is assigned to the token “large” (the beginning of the entity), I-PROPERTY to the token “apartment” (inside the entity, but not its first token), and O to all other tokens that are not part of an entity. Unlike previous studies <cit.>, for our joint model there is no need for this type of categorical classification into labels, since the two subtasks are treated in a unified way, as a single dependency parsing problem.

The goal of the real estate structured prediction task is to map the textual property classified into a tree-like structured representation, the so-called property tree, as illustrated in fig:plain_ad (original ad and its structured representation). In the pipeline setting, this conversion implies the detection of (i) entities of various types and (ii) the part-of dependencies between them. For instance, the entity “living room” is part-of the entity “large apartment”. In the joint model, each token (“apartment”, “living”, “bathroom”, “includes”, “with”, “3”) is examined separately and 4 different types of relations are defined, namely part-of, segment, skip and equivalent. The part-of relation is similar to the way it was defined in the pipeline setting, but instead of examining entities, i.e., sequences of tokens (“living room”), we examine whether an (individual) token is part-of another (individual) token (“room” is part-of the “apartment”). We encode the entity identification task with the segment label and we follow the same approach as in the part-of relationships for the joint model.
Specifically, we examine if a token is a segment of another token (the token “room” is attached as a segment to the token “living”, “3” is attached as a segment to the token “bedrooms” and “spacious” is also attached as a segment to the token “bedrooms” — this way we are able to encode the segment “3 spacious bedrooms”). By doing so, we cast the sequence labeling subtask to a dependency parsing problem. The tokens that are referring to the same entity belong to the equivalent relation (“home” is equivalent to “apartment”). For each entity, we define the first mention in order of appearance in the text as main mention and the rest as equivalent to this main mention. Finally, each token that does not have any of the aforementioned types of relations has a skip relation with itself (“includes” has a skip relation with “includes”), such that each token has a uniquely defined head.Thus, we cast the structured prediction task of extracting the property tree from the ad as a dependency parsing problem,where [label=(*)]* an entity can be part-of only one (other) entity, because the decisions are taken simultaneously for all part-of relations (a certain room can only be part-of a single floor), and* there are a priori no restrictions on the type of entities or tokens that can be part-of others (a room can be either part-of a floor, or the property itself, like an apartment).It is worth mentioning that dependency annotations for our problem exhibit a significant number ofnon-projective arcs (26%) where part-of dependencies are allowed to cross (see fig:non_projective), meaning that entities involved in the part-of relation are non-adjacent (interleaved by other entities). For instance, all the entities or the tokens for the pipeline and the joint models, that are attached to the entity “garage” are overlapping with the entities that are attached to the entity “apartment”, making parsing even more complicated: handling only projective dependencies as illustrated in fig:projective is an easier task. We note that the segment dependencies do not suffer from non-projectivity, since the tokens are always adjacent and sequential (“3 spacious bedrooms”). § METHODOLOGY We now describe the two approaches, the pipeline model and the joint model to construct the property tree of the textual advertisements, as illustrated in fig:system_setup. For the pipeline system (subsect:two-step_pipeline), we [label=(*)]*identify the entity mentions (subsect:sequence_labeling), then*predict the part-of dependencies between them (subsect:relation_extraction),and finally*construct the tree representation (property tree) of the textual classified (as in fig:plain_ad).In step <ref>, we apply locally or globally trained graph-based models. We represent the result of step <ref> as a graph model, and then solve step <ref> by applying the maximum spanning tree algorithm <cit.> for the directed case (see <cit.>). We do not apply the well-known and fast transition-based systems with hand-crafted features for non-projective dependency structures <cit.>, given the previously established poor performance thereof in <cit.>. In subsect:joint_learning, we describe the joint model where we perform steps <ref> and <ref> jointly. For step <ref>, we apply the maximum spanning tree algorithm <cit.> similarly as in the pipeline setting (subsect:two-step_pipeline).§.§ Two-step pipeline Below we revisit the pipeline approach presented in <cit.>, which serves as the baseline which we compare the neural models against. 
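Before detailing the two pipeline components, the two target encodings introduced in the problem definition can be made concrete on a toy fragment. The sketch below is purely illustrative: the tokens, the entity types and the head indices are hypothetical and do not come from the annotated dataset, and the attachment of the top-level entity to the dummy root is an assumption of the sketch rather than a rule stated above.

```python
# Toy fragment "apartment with 3 spacious bedrooms"; all annotations are illustrative.
tokens = ["apartment", "with", "3", "spacious", "bedrooms"]

# Pipeline encoding: one BIO tag per token (entity types are hypothetical here).
bio_tags = ["B-PROPERTY", "O", "B-SPACE", "I-SPACE", "I-SPACE"]

# Joint encoding: every token gets exactly one head and one label; index 0 is the
# dummy root, tokens are indexed from 1.  "bedrooms" is part-of "apartment",
# "3" and "spacious" are segments of "bedrooms", "with" is a skip onto itself.
heads_and_labels = {
    1: (0, "part-of"),   # apartment -> root   (assumed label for the root arc)
    2: (2, "skip"),      # with      -> with
    3: (5, "segment"),   # 3         -> bedrooms
    4: (5, "segment"),   # spacious  -> bedrooms
    5: (1, "part-of"),   # bedrooms  -> apartment
}

for i, tok in enumerate(tokens, start=1):
    head, label = heads_and_labels[i]
    print(f"{tok:10s} BIO={bio_tags[i - 1]:11s} head={head:2d} label={label}")
```

Both the pipeline baseline revisited below and the joint model of subsect:joint_learning are trained to predict exactly these two kinds of annotations.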
As mentioned before, the pipeline model comprises two subtasks: <ref> the sequence labeling and the <ref> part-of tree construction. In the following subsections, we describe the methods applied for both. §.§.§ Sequence labelingThe first step in our pipeline approach is the sequence labeling subtask which is similar to NER. Assuming a textual real estate classified, we [label=(*)]* identify the entity mention boundaries and * map each identified entity mention to a categorical label, entity type.In general, in the sequence labeling tasks, it is beneficial to take into account the correlations between labels in adjacent tokens, consider the neighborhood, and jointly find the most probable chain of labels for the given input sentence (Viterbi algorithm for the most probable assignment). For instance, in our problem where we follow the NER standard BIO encoding <cit.>, the I-PROPERTY cannot be followed by I-SPACE without first opening the type by B-SPACE. We use a special case of the CRF algorithm <cit.>, namely linear chain CRFs, which is commonly applied in the problem of sequence labeling to learn a direct mapping from the feature space to the output space (types) where we model label sequences jointly, instead of decoding each label independently. A linear-chain CRF with parameters w defines a conditional probability P_w(y|x) for the sequence of labels y = y_1,...,y_N given the tokens of the text advertisement x =x_1,...,x_N to be P_w(y|x)=1/Z(x)exp(w^Tϕ(x,y)),where Z is the normalization constant and ϕ is the feature function that computes a feature vector given the advertisement and the sequence of labels. §.§.§ Part-of tree constructionThe aim of the part-of tree construction subtask is to link each entity to its parent. We approach the task as a dependency parsing problem but instead of connecting each token to its syntactical parent, we map only the entity set I (“large villa”, “3 spacious bedrooms”) that has already been extracted by the sequence labeling subtask to a dependency structure y.Assuming the entity set I={e_0,e_1,...,e_t} where t is the number of identified entities, a dependency is a pair (p,c) where p∈ I is the parent entity and c∈ I is the child entity. The entity e_0 is the dummy root-symbol that only appears as parent. We will compare two approaches to predict the part-of relations: a locally trained model (LTM) scoring all candidate edges independently, versus a global model (MTT) which jointly scores all edges as a whole.§.§.§ Locally trained model (LTM) In the locally trained model (LTM), we adopt a traditional local discriminative method and apply a binary classification framework <cit.> to learn the part-of relation model (step <ref>), based on standard relation extraction features such as the parent and child tokens and their types, the tokens in between, etc. For each candidate parent-child pair, the classifier gives a score that indicates whether it is probable for the part-of relation to hold between them. The output scores are then used for step <ref>, to construct the final property tree. Following <cit.>, we view the entity set I as afully connected directed graph G={V,E} with the entities e_1,..., e_t as vertices (V) in the graph G, and edges E representing the part-of relations with the respective classifier scores as weights. One way to approach the problem is the greedy inference method where the predictions are made independently for each parent-child pair, thus neglecting that the global target output should form a property tree. 
We could adopt a threshold-based approach, keep all edges exceeding a threshold, which obviously is not guaranteed to end up with arc dependencies that form a tree structure (could even contain cycles). On the other hand, we can enforce the tree structure inside the (directed) graph by finding the maximum spanning tree. To this end, similar to <cit.>, we apply the Edmonds' algorithm to search for the most probable non-projective tree structure in the weighted fully connected graph G.§.§.§ Globally trained model (MTT)The Matrix-Tree theorem (MTT) <cit.> is a globally normalized statistical method that involves the learning of directed spanning trees. Unlike the locally trained models, MTT is able to learn tree dependency structures, scoring parse trees for a given sentence. We use D(I) to refer to all possible dependencies of the identified entity set I, in which each dependency is represented as a tuple (h,m) in which h is the head (or parent) and m the modifier (or child). The set of all possible dependency structures for a given entity set I is written T(I).The conditional distribution over all dependency structures y∈ T(I) can then be defined as:P(y | I;θ ) = 1/Z(I;θ)exp(∑_h,m ∈ yθ_h,m) in which the coefficients θ_h,m∈ℝ for each dependency (h,m) form the real-valued weight vector θ.The partition function Z(I;θ)is a normalization factorthat alas cannot be computed by brute-force, since it requires a summation overall y∈ T(I), containing an exponential number of possible dependency structures. However, an adaptation of the MTT allows us the direct and efficient computation of the partition function Z(I;θ) as the determinant det(L(θ))where L(θ) is the Laplacian matrix of the graph. It is worth mentioning that althoughMTT learns spanning tree structures during training, at the prediction phase, it is still required to use the maximum spanning tree algorithm (step <ref>) <cit.> as in the locally trained models.§.§ Joint model In this section, we present the new joint model sketched in fig:joint_model, which simultaneously predicts the entities in the sentence and the dependencies between them,with the final goal of obtaining a tree structure, i.e., the property tree.We pose the problem of the identification of the entity mentions and the dependency arcs between them as a head selection problem <cit.>. Specifically, given as input a sentence of length N, the model outputs the predicted parent of each token of the advertisement and the most likely dependency label between them.We begin by describing how the tokens are represented in the model, with fixed pre-trained embeddings (subsec:embeddings), which form the input to an LSTM layer (subsec:lstms). The LSTM outputs are used as input to the entity and dependency scoring layer (subsec:head_selection). As an extension of this model, we propose the use of various attention layers in between the LSTM and scoring layer, to encourage the model to focus on salient information, as described in subsec:attention. The final output of the joint model still is not guaranteed to form a tree structure. Therefore, we still apply Edmonds' algorithm (i.e., step <ref> from the pipeline approach), described in subsec:edmond. §.§.§ Embedding Layer The embedding layer maps each token of the input sequence x_1,...,x_N of the considered advertisement to a low-dimensional vector space. 
We obtain the word-level embeddings by training the Skip-Gram word2vec model <cit.> on a large collection of property advertisements.We add a symbol x_0 in front of the N-length input sequence, which will act as the root of the property tree, and is represented with an all-zeros vector in the embedding layer. §.§.§ Bidirectional LSTM encoding layerMany neural network architectures have been proposed in literature: LSTMs <cit.>, CNNs <cit.>, Echo State Networks <cit.>, or Stochastic Configuration Networks <cit.>, to name only a few. Many others can be found in reference works on the topic <cit.>.In this work, we use RNNs which have been proven to be particularly effective in a number of NLP tasks <cit.>. Indeed, RNNs are a common and reasonable choice to model sequential data and inherently able to cope with varying sequence lengths. Yet, plain vanilla RNNs tend to suffer from vanishing/exploding gradient problems and are hence not successful in capturing long-term dependencies <cit.>. LSTMs are a more advanced kind of RNNs, which have been successfully applied in several tasks to capture long-term dependencies, as they are able to effectively overcome the vanishing gradient problem. For many NLP tasks, it is crucial to represent each word in its own context, to consider both past (left) and future (right) neighboring information. An effective solution to achieve this is using a bidirectional LSTM (BiLSTM). The basic idea is to encode each sequence from left to right (forward) and from right to left (backward). This way, there is one hidden state which represents the past information and another one for the future information. The high-level formulation of an LSTM is:h_i,c_i= LSTM(w_i,h_i-1,c_i-1), i=0,...,Nwhere in our setup w_i ∈ℝ^d̃ is the word embedding for token x_i, and with theinput and states for the root symbol x_0 initialized as zero vectors. Further, h_i ∈ℝ^d and c_i ∈ℝ^d respectively are the output and cell state for the ith position, where d is the hidden state size of the LSTM. Note that we chose the word embedding size the same as the LSTM hidden state size, or d̃=d. The outputs from left to right (forward) are written as h⃗_⃗i⃗ and the outputs from the backwards direction as h_i. The two LSTMs' outputs at position i are concatenated to form the output h_i at that position of the BiLSTM:h_i= [h⃗_⃗i⃗;h_i], i=0,...,N §.§.§ Joint learning as head selectionIn this subsection, we describe the joint learning task (identifying entities and predicting dependencies between them), which we formulate as a head selection problem <cit.>. Indeed, each word x_i should have a unique head (parent) — while it can have multiple dependent words — since the final output should form the property tree. Unlike the standard head selection dependency parsing framework <cit.>, we predict the head y_i of each word x_i and the relation c_i between themjointly, instead of first obtaining binary predictions for unlabeled dependencies, followed by an additional classifier to predict the labels. Given a text advertisement as a token sequence x = x_0, x_1,..., x_N where x_0 is the dummy root symbol, and a set 𝒞 = {part-of, segment, equivalent, skip}of predefined labels (as defined in problem_definition), we aim to find for each token x_i, i∈{0,...,N} the most probable head x_j, j∈{0,...,N} and the most probable corresponding label c∈𝒞.For convenience, we order the labels c∈𝒞 and identify them as c_k, k∈{0,...,3}. 
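A minimal sketch of the encoder described so far is given below, using TensorFlow/Keras-style layers; the vocabulary size and the random initialisation of the embedding weights are placeholders for the word2vec vectors trained on the large collection of ads. Its outputs h_i are exactly the vectors that feed the scoring layer defined next.

```python
import tensorflow as tf

d = 128              # hidden size of each LSTM direction (equal to the embedding size)
vocab_size = 10000   # hypothetical vocabulary size

# Embedding layer; in the paper these weights come from Skip-Gram word2vec trained on
# ~887k ads, here they are randomly initialised for illustration.  Index 0 stands in
# for the dummy root symbol x_0 (represented by an all-zeros vector in the paper).
emb = tf.keras.layers.Embedding(vocab_size, d)

# Bidirectional LSTM: concatenating the forward and backward outputs gives the
# 2d-dimensional representation h_i = [h_i(forward); h_i(backward)] of each position.
bilstm = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(d, return_sequences=True), merge_mode="concat")

token_ids = tf.constant([[0, 17, 53, 204, 9, 411]])   # root + a toy ad of N = 5 tokens
h = bilstm(emb(token_ids))
print(h.shape)   # (1, N + 1, 2d)
```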
We model the joint probability of token x_j to be the head of x_i with c_k the relation between them, using a softmax:P(head=x_j,label=c_k |x_i)=exp(score(h_j , h_i,c_k))/∑_j̃,k̃exp(score(h_j̃ , h_i,c_k̃)where h_i and h_j are the BiLSTM encodings for words x_i and x_j, respectively. For the scoring formula score(h_j , h_i,c_k) we use a neural network layer that computes the relative score between position i and j for a specific label c_k as follows:score(h_j , h_i,c_k)= V_k ^T tanh (U_kh_j + W_kh_i+b_k)with trainable parameters V_k ∈ℝ^l, U_k ∈ℝ^l × 2d, W_k ∈ℝ^l × 2d, b_k ∈ℝ^l, and l the layer width.As detailed in sec:setup, we set l to be smaller than 2d, similar to <cit.> due to the fact that training on superfluous information reduces the parsing speed and increases tendency towards overfitting.We train our model by minimizing the cross-entropy loss ℒ, written for the considered training instance as:ℒ=∑_i=0^N -log P (head=y_i,label=c_i|x_i)where y_i∈ x and c_i∈𝒞 are the ground truth head and label of x_i, respectively. After training, we follow a greedy inference approach and for each token, we simultaneously keep the highest scoring head ŷ_̂î and label ĉ_̂î for x_i based on their estimated joint probability:(ŷ_̂î,ĉ_̂î)=_x_j ∈ x,c_k ∈𝒞 P(head=x_j,label=c_k|x_i)The predictions (ŷ_̂î,ĉ_̂î) are made independently for each position i, neglecting that the final structure should be a tree. Nonetheless, as demonstrated in sec:comparison_pipeline_joint, the highest scoring neural models are still able to come up with a tree structure for 78% of the ads. In order to ensure a tree output in all cases, however, we apply Edmonds' algorithm on the output. §.§.§ Attention LayerThe attention mechanism in our structured prediction problem aims to improve the model performance by focusing on information that is relevant to the prediction of the most probable head for each token. As attention vector, we construct the new context vector h_i^* as a weighted average of the BiLSTM outputsh_j^* = ∑_i=0^N a(h_j,h_i)h_iin which the coefficients a(h_j, h_i), also called the attention weights, are obtained as follows:a(h_j,h_i)=exp(att(h_j,h_i))/∑_ĩ=0^Nexp(att(h_j,h_ĩ)). The attention function att(h_j, h_i) is designed to measure some form of compatibility between the representation h_i for x_i and h_j for x_j, and the attention weights a(h_j, h_i) are obtained from these scores by normalization using a softmax function.In the following, we will describe in detail the various attention models that we tested with our joint model. §.§.§ Commonly used attention mechanismsThree commonly used attention mechanisms are listed in <ref>: the additive <cit.>, bilinear, and multiplicative attention models <cit.>, which have been extensively used in machine translation. Given the representations h_i and h_j for tokens x_i and x_j, we compute the attention scores as follows:att_additive(h_j , h_i) = V_a tanh (U_ah_j + W_ah_i+b_a) att_bilinear(h_j , h_i) = h_j^T W_bil h_i att_multiplicative(h_j , h_i) = h_j^Th_iwhere V_a ∈ℝ^l, U_a, W_a ∈ℝ^l × 2d, W_bil∈ℝ^2d × 2d and b_a ∈ℝ^l are learnable parameters of the model.§.§.§ Biaffine attentionWe use the biaffine attention model <cit.> which has been recently applied to dependency parsing and is a modification of the neural graph-based approach that was proposed by <cit.>. In this model, <cit.> tried to reduce the dimensionality of the recurrent state of the LSTMs by applying a such neural network layer on top of them. 
This idea is based on the fact that there is redundant information in every hidden state that (i) reduces parsing speed and (ii) increases the risk of overfitting. To address these issues, they reduce the dimensionality and apply a nonlinearity afterwards. The deep bilinear attention mechanism is defined as follows:h_i^dep=V_deptanh (U_deph_i+b_dep) h_j^head=V_headtanh (U_headh_j+b_head) att_biaffine(h_j^head , h_i^dep) =(h_j^head)^T W_bil h_i^dep +Bh_j^headwhere U_dep, U_head∈ℝ^l × 2d, V_dep, V_head∈ℝ^p × l, W_bil∈ℝ^p × p, B ∈ℝ^p and b_dep, b_head∈ℝ^l. §.§.§ Tensor attentionThis section introduces the Neural Tensor Network <cit.> that has been used as a scoring formula applied for relation classification between entities. The task can be described as link prediction between entities in an existing network of relationships. We apply the tensor scoring formula as if tokens are entities, by the following function:att_tensor(h_j,h_i)=U_ttanh(h_j^TW_th_i+V_t( h_j + h_i)+b_t)where W_t ∈ℝ^2d × l × 2d, V_t ∈ℝ^l × 2d, U_t ∈ℝ^l and b_t∈ℝ^l.§.§.§ Edge attentionIn the edge attention model, we are inspired by <cit.>, which applies neural message passing in chemical structures. Assuming that words are nodes inside the graph and the message flows from node x_i to x_j, we define the edge representation to be:edge(h_j,h_i)=tanh (U_eh_j + W_eh_i+b_e)The edge attention formula is computed as:att_edge(h_j,h_i)=1/N(A_src∑_ĩ=0^Nedge(h_j,h_ĩ) +A_dst∑_j̃=0^Nedge(h_j̃,h_i))where U_e, W_e ∈ℝ^l × 2d, A_src, A_dst∈ℝ^2d × l and b_e ∈ℝ^l. The source and destination matrices respectively encode information for the start to the end node, in the directed edge. Running the edge attention model for several times can be achieved by stacking the edge attention layer multiple times. This is known as message passing phase and we can run it for several (T > 1)time steps to obtain more informative edge representations.§.§.§ Tree construction step: Edmonds' algorithmAt decoding time, greedy inference is not guaranteed to end up with arc dependencies that form a tree structure and the classification decision might contain cycles. In this case, the output can be post-processed with a maximum spanning tree algorithm (as the third step in fig:system_setup). We construct the fully connected directed graph G = (V, E) where the vertices V are the tokens of the advertisement (that are not predicted as skips) and the dummy root symbol, E contains the edges representing the highest scoring relation (e.g., part-of, segment, equivalent) with the respective cross entropy scores serving as weights. Since G is a directed graph, s(x_i, x_j) is not necessarily equal to s(x_j, x_i). Similar to <cit.>, we employ Edmonds' maximum spanning tree algorithm for directed graphs <cit.> to build a non-projective parser. Indeed, in our setting, we have a significant number (26% in the dataset used for experiments, see further) of non-adjacent part-of and equivalent relations (non-projective). It is worth noting that in the case of segment relations, the words involved are not interleaved by other tokens and are always adjacent. We apply Edmonds' algorithm to every graph which is constructed to get the highest scoring graph structure, even in the cases where a tree is already formed by greedy inference. For skips, we consider the predictions as obtained from the greedy approach and we do not include them in the fully connected weighted graph, since Edmonds' complexity is O(n^2) for dense graphs and might lead to slow decoding time. 
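To summarise the decoding stage before moving on to the experiments, the sketch below implements the per-label scoring function score(h_j, h_i, c_k) and the greedy joint selection of the most probable (head, label) pair for every token. The tiny dimensions and the random parameters are purely illustrative; in the full system the resulting non-skip arcs are post-processed with Edmonds' algorithm as described above whenever they do not already form a tree. Note that, since the softmax normaliser is shared by all candidate pairs of a fixed token x_i, the argmax can be taken directly over the raw scores.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, l, K = 4, 8, 4, 4                 # tokens (excl. root), hidden size, layer width, |C|
H = rng.normal(size=(N + 1, 2 * d))     # BiLSTM outputs h_0 (root), ..., h_N

# Per-label parameters V_k, U_k, W_k, b_k of the scoring layer.
V = rng.normal(size=(K, l))
U = rng.normal(size=(K, l, 2 * d))
W = rng.normal(size=(K, l, 2 * d))
b = rng.normal(size=(K, l))

def score(j, i, k):
    """score(h_j, h_i, c_k) = V_k^T tanh(U_k h_j + W_k h_i + b_k)."""
    return V[k] @ np.tanh(U[k] @ H[j] + W[k] @ H[i] + b[k])

# Greedy joint inference: for every token i, pick the (head j, label k) with maximal score.
S = np.array([[[score(j, i, k) for k in range(K)]
               for j in range(N + 1)] for i in range(N + 1)])
for i in range(1, N + 1):
    j, k = np.unravel_index(np.argmax(S[i]), S[i].shape)
    print(f"token {i}: head {j}, label {k}")
```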
§ RESULTS AND DISCUSSION In this section, we present the experimental results of our study. We describe the dataset, the setup of the experiments and we compare the results of the methods analysed in the previous sections. §.§ Experimental setup Our dataset consists of a large collection (887,599) of Dutch property advertisements from real estate agency websites. From this large dataset, a sub-collection of 2,318 classifieds have been manually annotated by 3 trained human annotators (1 annotation per ad, 773 ads per annotator). The annotations follow the format of the property tree that is described in detail in problem_definition and is illustrated in fig:plain_ad. The dataset is available for research purposes, see our github codebase.[<https://github.com/bekou/ad_data>] In the experiments, we use only the annotated text advertisements for the pipeline setting, LTM (locally trained model), MTT (globally trained model). In the case of the neural network approach, we train the embeddings on the large collection by using the word2vec model <cit.> whereas in the joint learning, we use only the annotated documents, similar to the pipeline approach.The code of the LTM and the MTT hand-crafted systems is available on github.<ref>We also use our own CRF implementation. The code for the joint model has been developed in Python with the Tensorflow machine learning library <cit.> and will be made public as well. For the evaluation, we use 70% for training, 15% for validation and 15% as test set. We measure the performance by computing the F_1 score on the test set. The accuracy metric can be misleading in our case since we have to deal with imbalanced data (the skip label is over-represented). We only report numbers on the structured classes, segment and part-of since the other dependencies (skip, equivalent) are auxiliary in the joint models and do not directly contribute to the construction of the actual property tree. For the overall F_1, we are again only considering the structured classes. Finally, we report the number of property trees (which shows how likely our model is to produce trees without applying Edmonds' algorithm, by greedy inference alone) for all the models before applying Edmonds' algorithm that guarantees the tree structure of the predictions.For the pipeline models, we train the CRF with regularization parameter λ_CRF=10 and the LTM and MTT with C=1 based on the best hyperparameters on the validation set. As binary classifier, we use logistic regression. For the joint model, we train 128-dimensional word2vec embeddings on a collection of 887k advertisements. In general, using larger embeddings dimensions (300), does not affect the performance of our models. We consistently used single-layer LSTMs through our experiments to keep our model relatively simple and to evaluate the various attention methods on top of that. We have also reported results on the joint model using a two-layer stacked LSTM joint model,although it needs a higher computation time compared to a single-layer LSTM with an attention layer on top.The hidden size of the LSTMs is d=128 and the size of the neural network used in the scoring and the attention layer is fixed to l=32. The optimization algorithm used is Adam <cit.> with a learning rate of 10^-3. To reduce the effect of overfitting, we regularize our model using the dropout method <cit.>. We fix the dropout rate on the input of the LSTM layer to 0.5 to obtain significant improvements (∼1%-2% F_1 score increase, depending on the model). 
For the two-layer LSTM, we fix the dropout rate to 0.3 in each of the input layers since this leads to largest performance increase on the validation set. We have also explored gradient clipping without further improvement on our results. In the joint model setting, we follow the evaluation strategy of early stopping <cit.> based on the performance of the validation set. In most of the experiments, we obtain the best hyperparameters after ∼60 epochs.§.§ Comparison of the pipeline and the joint modelOne of the main contributions of our study is the comparison of the pipeline approach and the proposed joint model. We formulated the problem of identifying the entities (segments) and predicting the dependencies between them (construction of the property tree) as a joint model. Our neural model, unlike recent studies <cit.> on joint models that use LSTMs to handle similar tasks, does not need two components to model the problem (NER and dependency parsing). To the best of our knowledge, our study is the first that formulates the task in an actual joint setting without the need to pre-train the sequence labeling component or for parameter sharing between them, since we use only one component for both subtasks. In tab:part_of_segments, we present the results of the pipeline model (hand-crafted) and the proposed joint model (LSTM). The improvement of the joint model over the pipeline is unambiguous, 3.42% overall F_1 score difference between MTT (highest scoring pipeline model) and LSTM+E (LSTM model with Edmonds' algorithm). An additional increase of ∼2.3% is achieved when we consider two-layer LSTMs (2xLSTM+E) for our joint model. All results in tab:part_of_segments, except for the LSTM, are presented using Edmonds' algorithm on top, to construct the property tree. Examining each label separately, we observe that the original LSTM+E model (73.78%) performs better by 1.43% in entity segmentation than the CRF (72.35%). The LSTM model achieves better performance in the entity recognition task since it has to learn the two subtasks simultaneously resulting in interactions between the components (NER and dependency parser). This way, the decisions for the entity recognition can benefit from predictions that are made for the part-of relations. Concerning the part-of dependencies, we note that the LSTMs outperform the hand-crafted approaches by 6.23%. Also, the number of valid trees that are constructed before applying Edmonds' algorithm is almost twice as high for the LSTM models. Stacking two-layer LSTMs results in an additional ∼1% improvement in the segmentation task and ∼3% in the part-of relations. The greedy inference for the hand-crafted methods does not produce well-formed trees, meaning that post-processing with Edmonds' algorithm (enforce tree structure) is expected to increase the performance of the hand-crafted models compared to the LSTM model performance. Indeed, the performance of the feature based hand-crafted models (LTM and MTT) without the Edmonds' on top is not reported in tab:part_of_segments due to their poor performance in our task (∼60% overall F_1 and ∼51% for part-of), but after post-processing with Edmonds' the performance significantly increases (∼65%). 
On the other hand, applying the Edmonds' algorithm on the LSTM model leads to marginally decreased performance (∼0.2%) compared to the original LSTM model, probably indicating that enforcing structural constraints is not beneficial for a model that clearly has the ability to form valid tree structures during greedy inference. Although one might be tempted not to enforce the tree structure (post-process with Edmonds'), due to the nature of our problem, we have to enforce tree constraints in all of the models.§.§ Comparison of the joint and the attention modelAfter having established the superior performance of neural approach using LSTMs over the more traditional (LTM and MTT) methods based on hand-crafted features, we now discuss further improvements using attentive models. The attention mechanisms are designed to encourage the joint model to focus on informative tokens. We exploited several attention mechanisms as presented in subsec:attention. tab:part_of_segments shows the performance of the various models. Overall, the attention models are performing better in terms of overall F_1 score compared to the original joint model with the Edmonds' on top. Although the performance of the Biaffine and the Tensor models is limited compared to the improvement of the other attentive models, we focus on:[label=(*)]* the Biaffine model since it achieved state-of-the-art performance on the dependency parsing task and * the Tensor model because we were expecting that it would perform similarly to the Bilinear model (it has a bilinear tensor layer). Despite its simplicity, the Bilinear model is the second best performing attentive model in tab:part_of_segments in terms of overall F_1 score. Edge_3 (70.70% overall F_1 score) achieves better results than the other attention mechanisms in the entity recognition and in the dependency parsing tasks. We observe that running the message passing stepmultiple times in the Edge model, gives an increasing trend in the number of valid trees that were constructed before applying the maximum spanning tree algorithm. This is not surprising since we expect that running the message passing phase multiple times leads toimproved edge representations. The maximum number of trees without post-processing by Edmonds' is attained when we run the message passing for 3 times whereas further increasing the number beyond 3 (4) appears no longer beneficial. Stacking a second LSTM layer on top of the joint model (2xLSTM+E) marginally improves the performance by 0.2% compared to the Edge_3 attention model. But adding a second LSTM layer comes with the additional cost of an increased computation time compared to the joint models with the attention layers on top. This illustrates that: [label=(*)]* there might be some room for marginally improving the attention models even further, and* we do not have to worry about the quadratic nature of our approach since in terms of speed the attentive models are able to surpass the two-layer LSTMs.The sequential processing of the LSTMs might be the reason that slows down the computation time for the 2xLSTM over the rest of the attentive models. Specifically, on an Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz processor, the best performing model (Edge_3) takes ∼2 minutes per epoch while in the 2xLSTM case, it takes ∼2.5 minutes leading to a slowdown of ∼25%. The percentage of the ads that are valid trees is 1% better in the Edge_3 over the two-layer LSTM showcasing the ability of the Edge model to form more valid trees during greedy inference. 
§.§ Discussion

In this section, we discuss some additional aspects of our problem and the approaches that we follow to handle them. As we mentioned before, a single entity can be present in the text with multiple mentions, which brings an extra difficulty to our task. For instance, in the example of fig:plain_ad, the entity “large apartment” is expressed in the ad with the mentions “large apartment” and “home”. Hence it is ambiguous to which mention the other entities should be attached. One way would be to attach them to both and then eliminate one of the connections using Edmonds' spanning tree algorithm, which is the approach adopted in <cit.>. The problem with this approach is that the spanning tree algorithm removes all but one of these connections essentially at random, possibly resulting in inconsistent outcomes. To avoid this problem, we now use as the main mention for an entity the first mention in order of appearance in the text (“large apartment” in our example), and the remaining mentions (“home”) are attached as equivalent mentions to the main one. Usually, the most informative mention of an entity is the one that appears first, since later mentions typically refer back to an entity introduced before, often with a shorter description. Following this intuition, the neural model increases its overall performance by ∼3% (from 66% to 69%, and by more than 5% on the part-of relation) and the pipeline approaches by almost 4% (from 61%, reported in <cit.>, to 65%, and by more than 5% on the part-of relation).

We also experimented with introducing the equivalent relations. Although it is a strongly under-represented class in the dataset and the model performs poorly for this label (an equivalent edge F_1 score of 10%), introducing the equivalent label is the natural way of modeling our problem (assigning each additional mention as equivalent to the main mention). We find that introducing this type of relation leads to a slight decrease (∼1%) in the part-of relation and a marginal increase (∼0.3%) in the segment relations, which are the main relations, while retaining the nature of our problem. In the pipeline approach, it results in a 9% drop in the F_1 score of the part-of relation. This is why the results presented in tab:part_of_segments do not consider the equivalent relation for the hand-crafted models, so as to make a fair comparison on the structured classes.

We believe our experimental comparison of the various architectural model variations provides useful findings for practitioners. Specifically, for applications requiring both segmentation (entity recognition) and dependency parsing (structured prediction), our findings can be qualitatively summarized as follows: (i) joint modeling is the most appropriate approach since it reduces error propagation between the components, (ii) the LSTM model is much more effective than models relying on hand-crafted features because it automatically extracts informative features from the raw text, (iii) attentive models prove effective because they encourage the model to focus on salient tokens, (iv) the edge attention model leads to improved performance since it better encodes the information flow between the entities by using graph representations, and (v) stacking a second LSTM marginally increases the performance, suggesting that there might be some room for slight improvement of the attention models by adding LSTM layers. Finally, we point out how exactly our model relates to the state-of-the-art in the field.
Our joint model is able to both extract entity mentions (perform segmentation) and do dependency parsing, which we demonstrate on the real estate problem. Previous studies <cit.> that jointly considered the two subtasks (segmentation and relation extraction): [label=(*)]* require manual feature engineering and* generalize poorly between various applications.On the other hand, in our work, we rely on neural network methods (LSTMs) to automatically extract features from the real estate textual descriptions and perform the two tasks jointly. Although there are other methods which use neural network architectures <cit.> that focus on the relation extraction problem, our work is different in that we aim to model directed spanning trees and thus to solve the dependency parsing problem which is more constrained and difficult (than extracting single instances of binary relations). Moreover, the cited methods require either parameter sharing or pre-training of the segmentation module, which complicates learning. Therefore, cited methods are not directly comparable to our model and cannot be applied to our real estate task out-of-the-box. However, our model's main limitation is the quadratic scoring layer that increases the time complexity of the segmentation task from linear (which is the complexity of a conditional random field, CRF) to O(n^2). As a result, it sacrifices standard linear complexity of the segmentation task, in order to reduce the error propagation between the subtasks and thus perform learning in a joint, end-to-end differentiable, setting.§ CONCLUSIONS In this paper, we proposed an LSTM-based neural model to jointly perform segmentation and dependency parsing. We apply it to a real estate use case processing textual ads, thus [label=(*)]* identifying important entities of the property (rooms) and* structuring them into a tree format based on the natural language description of the property. We compared our model with the traditional pipeline approaches that have been adapted to our task and we reported an improvement of 3.4% overall edge F_1 score. Moreover, we experimented with different attentive architectures and stacking of a second LSTM layer over our basic joint model. The results indicate that exploiting attention mechanisms that encourage our model to focus on informative tokens, improves the model performance (increase of overall edge F_1 score with ∼2.1%) and increases the ability to form valid trees in the prediction phase (4% to 10% more valid trees for the two best scoring attention mechanisms) before applying the maximum spanning tree algorithm. 
The contribution of this study to the research in expert and intelligent systems is three-fold: [label=(*)] * we introduce a generic joint model, simultaneously solving both subtasks of segmentation (entity extraction) and dependency parsing (extracting relationships among entities), that unlike previous work in the field does not rely on manually engineered features,* in particular for the real estate domain, extracting a structured property tree from a textual ad, we refine the annotations and additionally propose attention models, compared to initial work on this application, and finally * we demonstrate the effectiveness of our proposed generic joint model with extensive experiments (see aforementioned F_1 improvement of 2.1%).Despite the experimental focus on the real estate domain, we stress that the model is generic in nature, and could be equally applied to other expert system scenarios requiring the general tasks of bothdetecting entities (segmentation) and establishing relations among them (dependency parsing).We furthermore note that our model, rather than focusing on extracting a single binary relation from a sentence (as in traditional relation extraction settings), produces a complete tree structure.Future work can evaluate the value of our joint model we introduced in other specific application domains (biology, medicine, news) for expert and intelligent systems.For example, the method can be evaluated for entity recognition and binary relation extraction (the ACE 04 and ACE 05 datasets; see <cit.>) or in adverse drug effects from biomedical texts (see <cit.>).In terms of model extensions and improvements, one research issue is to address the time complexity of the NER part by modifying the quadratic scoring layer for this component. An additional research direction is to investigate different loss functions for the NER component (adopting a conditional random field (CRF) approach), since this has been proven effective in the NER task on its own <cit.>.A final extension we envision is to enable multi-label classification of relations among entity pairs.§ ACKNOWLEDGMENTS The presented research was partly performed within the MALIBU project, funded by Flanders Innovation & Entrepreneurship (VLAIO) contract number IWT 150630.
http://arxiv.org/abs/1709.09590v2
{ "authors": [ "Giannis Bekoulis", "Johannes Deleu", "Thomas Demeester", "Chris Develder" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170927155053", "title": "An attentive neural architecture for joint segmentation and parsing and its application to real estate ads" }
http://arxiv.org/abs/1709.09720v1
{ "authors": [ "J. T. Peltonen", "P. C. J. J. Coumou", "Z. H. Peng", "T. M. Klapwijk", "J. S. Tsai", "O. V. Astafiev" ], "categories": [ "cond-mat.mes-hall", "cond-mat.supr-con", "quant-ph" ], "primary_category": "cond-mat.mes-hall", "published": "20170927201219", "title": "Hybrid rf SQUID qubit based on high kinetic inductance" }
Convergence analysis of upwind type schemes for the aggregation equation with pointy potential F. DelarueLaboratoire J.-A. Dieudonné,UMR CNRS 7351,Univ. Nice, Parc Valrose, 06108 Nice Cedex 02, France. Email: ,F. LagoutièreUniv Lyon,Université Claude Bernard Lyon 1,CNRS UMR 5208,Institut Camille Jordan,43 blvd. du 11 novembre 1918, F-69622 Villeurbanne cedex, France, Email: , N. VaucheletUniversité Paris 13, Sorbonne Paris Cité, CNRS UMR 7539, Laboratoire Analyse Géométrie et Applications, 93430 Villetaneuse, France, Email:December 30, 2023 =====================================================================================================================================================================================================================================================================================================================================================================================================================================================================§ INTRODUCTION In this paper, the deformation quantization approach to the representation theory of Lie groups is discussed via the example of the Euclidean motion group M(2), which is the group of rigid motions of the plane. Representations of Lie groups are subsumed under the general theory of group representations. The development of quantum theory in the mid-1920s, with the appearance of von Neumann's book on the mathematical foundations of quantum mechanics, greatly influenced the theory of unitary representations of groups in infinite dimensional Hilbert space <cit.>. Succeeding early works in this field were due to Bargmann, Wigner, Gelfand-Naimark, and others <cit.>. The basic idea is as follows. Suppose a Lie group G acts on a set X, denoted by (g, x)→ g· x. Let V be a vector space of complex-valued functions on X which is invariant under the action of G, that is, the function x→ f(g· x) is in V, whenever f is in V. Thus, the mapping T_g:f→ T_gf, where (T_gf)(x)=f(g· x) is a linear transformation on V and is invertible. The mapping g→ T_g from G into the group GL(V) of invertible linear transformations of V is called a linear representation of G in V.Mechanics provides basic examples of group representations. In classical mechanics, an observable is a function on phase space M, which is a Poisson manifold, while in quantum mechanics, observables are self-adjoint operators on a Hilbert space. Quantization, as generally understood, is a mapping from classical observables to the space of quantum observables, where this mapping satisfies certain conditions first laid out formally by von Neumann. In the simplest case of the free particle, Dirac's canonical quantization of phase space variables turns out to be a representation of the Heisenberg Lie algebra, and the exponentiation of this representation gives the representation of the Heisenberg Lie group. This basic example already illustrates the deep and beautiful connections between quantization and representations of Lie groups. More generally, in the above definition of a linear representation, the classical observables are the functions f on M and G acts on M. This induces an action of the Lie algebra of G on the classical observables via vector fields. 
Modulo many technical difficulties, resolved in many general cases by the orbit method or geometric quantization of Kirillov <cit.>, Kostant <cit.> and Souriau <cit.>, the exponentiation of the Lie algebra representations give the quantum observables.There are, currently, three accepted quantization procedures in quantum theory <cit.>. There is the canonical quantization developed earliest by Heisenberg, Schrodinger and others in the 1920s, the path integral method by Dirac and Feynman, and the phase space formulation of quantum mechanics or deformation quantization, which this work focuses on.Phase space quantum mechanics is based on Wigner's quasiprobability distribution <cit.> and the Weyl correspondence <cit.> between self-adjoint operators in Hilbert space and ordinary functions, called the symbols of the operators. It turns out that the Weyl symbol of the projection onto a state is the Wigner function corresponding to the state. The Wigner function, which is a function on phase space, allows for the computation of quantum averages by classical like formulas. Moreover, its marginal distributions produce the correct probability distributions for the position and momentum of the system <cit.>. Not least of its utility is that it is the approach that gives most insight into the connection between classical mechanics and quantum mechanics. It was Groenewold <cit.> and Moyal <cit.> who first gave the formulas for the symbols of the composition and commutators of two quantum observables, now known as the Moyal star-product. In the early 1970s, Bayen et al. <cit.> elevated this formula as a definition of deformation of functions on Poisson manifolds and proposed deformation quantization as an autonomous quantum theory.The central idea of deformation quantization is the deformation of the usual pointwise commutative product of functions on Poisson manifolds into a noncommutative and associative star-product or ⋆-product, and the deformation of the Poisson bracket arising from the associativity of the ⋆-product. In their seminal work, Bayen et al. suggested that quantization should be "a deformation of the structure of the algebra of classical observables and not as a radical change in the nature of the observables" <cit.>. Deformation quantization is a synthesis of works due to Weyl, Wigner, Moyal, Groenewold, Gerstenhaber, and others. In 1997, Kontsevich <cit.> proved the existence of deformation quantization of regular Poisson manifolds. Previous to this, Fedosov, in the early 1980s, gave a very nice geometric proof of the existence of deformation quantization of symplectic manifolds (originally found in <cit.>, but later extended in <cit.>) and started the great interest on deformation quantization among mathematicians.As a quantization theory, it is inevitable that deformation quantization found use into the representation theory of Lie groups. This has already been strongly hinted at in <cit.>. Subsequent developments in the works <cit.> have shown that deformation theory, together with the orbit method, is very useful in representation theory. As the beautiful papers <cit.>, from which we copied our title, and <cit.> have the aim of introducing deformation quantization and phase space methods in physics instruction, in particular in quantum mechanics, we also deemed it worthwhile to teach Lie group representations via the method of deformation quantization. 
Inasmuch as <cit.> have already attempted to use star-products in the representation theory of various classes of Lie groups, these papers assume many deep mathematical results, and large gaps in the computations make them very difficult reading for beginning graduate students. In this article, we present fairly complete and concrete computations obtaining the irreducible unitary representations of a particular Lie group using deformation quantization. Works similar to our own are <cit.>. We refer readers to Berndt's introductory text on symplectic manifolds <cit.>, or to Abraham and Marsden's work <cit.> for a more advanced approach. Kirillov's orbit method <cit.> and the introductory books on unitary representations by Sugiura <cit.>, Berndt <cit.> and Mackey <cit.> are highly recommended.

In section <ref>, important concepts about unitary representations will be discussed, in particular their construction by the method of induced representations, and we also present the unitary representations of the Euclidean motion group M(2). We formally discuss quantization in section <ref>. Deformation quantization, a quantization not based on Hilbert spaces, the concept of the ⋆-product, and its connection to unitary representation theory will be discussed in section <ref>. In section <ref>, our main contribution is the concrete computation of the unitary representations of M(2) via deformation quantization. Finally, we summarize our results in section <ref>.

§ UNITARY REPRESENTATIONS

A representation of a group G on a vector space V over a field K is a homomorphism U:G⟶ GL(V) of G into the group GL(V) of invertible linear transformations on V. The dimension of V is called the degree of the representation U. If G is a topological group and 𝕌(H) is the group of unitary operators on the Hilbert space H, it is required that the homomorphism U:G⟶𝕌(H) be strongly continuous, and differentiable in the case where G is a Lie group. We call U a unitary representation. A subspace ℋ_0 of ℋ is said to be invariant under the unitary representation 𝒰 if 𝒰_gℋ_0⊂ℋ_0 for all g∈ G. If the trivial subspace {0} and ℋ are the only invariant closed subspaces of ℋ under 𝒰, then 𝒰 is irreducible. It is the irreducible unitary representations that are the "atoms" of the unitary representations of G. Two unitary representations of G, say 𝒰:G→𝕌(ℋ) and 𝒰':G→𝕌(ℋ'), are equivalent when there is an isometry A:ℋ→ℋ' satisfying A∘𝒰_g=𝒰_g'∘ A for all g∈ G. So the set of all unitary irreducible representations (UIRs) of G can be partitioned into disjoint classes of UIRs.

A basic problem of the representation theory of Lie groups is the construction and classification of all UIRs, up to equivalence. In many cases the UIRs are sufficient to decompose L^2-functions on G into their Fourier series or Fourier integral. In the compact group case, for example, the Peter-Weyl Theorem states that the matrix elements of the UIRs form a complete orthonormal set in L^2(G). A good resource for a comprehensive list of representations of Lie groups is the 3-volume survey work of Vilenkin and Klimyk in <cit.>. For the Euclidean motion group M(2), we recommend the earlier work of Vilenkin in <cit.>, but in our discussion of its unitary representations we compare ours with that of Sugiura in <cit.>.

A more or less procedural way of constructing representations is the method of induced representations of Frobenius and Mackey (for general groups, in <cit.>, and for locally compact groups, in <cit.>). This is a method of constructing representations of a group from representations of a subgroup.
Let 𝒮 be a representation of the subgroup H on V and let 𝒯 be the desired representation of G induced by 𝒮, that is, 𝒯=Ind^G_H𝒮. Let L(G,H,V) be the space of functions f:G→ V satisfying f(gh)=𝒮^-1_hf(g) for any g∈ G and h∈ H. Since L(G,H,V) is invariant with respect to left translation, the representation 𝒯 of G on L(G,H,V) is defined by (𝒯_gf)(g_0)=f(g^-1g_0).

Specifically, we outline the construction of the representation of a semidirect product G induced by its commutative subgroup B (see <cit.>). Suppose G=A⋉ B and A is a group of automorphisms of B. The collection X of 1-dimensional representations χ of B is partitioned into disjoint orbits via the action a·χ(b)=χ(a^-1b), where a∈ A, b∈ B. If Φ is one of these orbits, we define the collection of functions ℋ={f:Φ→ℋ_0}, where ℋ_0 is the representation space of χ. Let ϕ∈Φ and let χ represent the class Φ. Then the map 𝒰:G→Aut(ℋ) defined by (𝒰_gf)(ϕ)=χ(b)f(a·ϕ), where g=(a,b), is a representation of G induced by the representation χ of B.

Let M(2) be the Euclidean motion group in 2 dimensions. It is the semidirect product of SO(2) and ℝ^2. Its unitary representation <cit.> is defined by (𝒰^a_gf)(R_θ)=e^i(r,R_θ a)f(R^-1_ϕ R_θ), where g=(R_ϕ,r)∈ M(2), f∈ L^2(SO(2)) and a∈ℂ. This representation is induced by the 1-dimensional unitary representation χ_a:r↦ e^i(r,a) of the commutative subgroup ℝ^2. Since 𝒰^a is equivalent to 𝒰^b if and only if |a|=|b| <cit.>, an equivalence class of UIRs of M(2) can be represented by 𝒰^a with a>0. Since SO(2)≃ S^1∋(cosθ,sinθ), letting r=(r_1,r_2), expression (<ref>) becomes (𝒰^a_gf)(θ)=e^ia(r_1cosθ+r_2sinθ)f(θ-ϕ). The set P={𝒰^a:a>0} of infinite-dimensional UIRs is called the principal series of UIRs of M(2). There is another set of UIRs besides the set P. These representations are the 1-dimensional unitary representations χ_n, n∈ℤ, of SO(2) composed with the natural projection p:M(2)→ SO(2), defining the operators (χ_n∘ p)(R_ϕ,r)=e^inϕ. Hence, the complete set of representatives of the classes of UIRs of M(2) <cit.> is the unitary dual M(2)^∧={𝒰^a:a>0}∪{χ_n∘ p:n∈ℤ}.

At this point, consider the infinite-dimensional UIR 𝒰^a. Let U be an element of the Lie algebra 𝔪(2)=span{X,E_1,E_2} of the Euclidean motion group M(2), where X spans the Lie algebra of SO(2), E_1,E_2 are the canonical base elements that span ℝ^2, and the Lie brackets of these spanning elements are [X,E_1]=-E_2, [X,E_2]=E_1 and [E_1,E_2]=0. For U=c_1X+c_2E_1+c_3E_2, the corresponding 1-parameter subgroup of M(2) is exp tU=(R_-tc_1,(c_2/c_1 sin tc_1+c_3/c_1(1-cos tc_1), c_2/c_1(-1+cos tc_1)+c_3/c_1 sin tc_1)) if c_1≠0, and exp tU=(1,(tc_2,tc_3)) if c_1=0. Accordingly, expression (<ref>) becomes (𝒰^a_exp tUf)(θ)=e^ia[c_2/c_1(sin(tc_1+θ)-sinθ)-c_3/c_1(cos(tc_1+θ)-cosθ)]f(tc_1+θ) if c_1≠0, and (𝒰^a_exp tUf)(θ)=e^iat(c_2cosθ+c_3sinθ)f(θ) if c_1=0. Its derivative with respect to t is d/dt(𝒰^a_exp tUf)(θ)=e^ia[c_2/c_1(sin(tc_1+θ)-sinθ)-c_3/c_1(cos(tc_1+θ)-cosθ)][ia(c_2cos(tc_1+θ)+c_3sin(tc_1+θ))f(tc_1+θ)+c_1f'(tc_1+θ)] if c_1≠0, and d/dt(𝒰^a_exp tUf)(θ)=ia(c_2cosθ+c_3sinθ)(𝒰^a_exp tUf)(θ) if c_1=0. Setting t=0 gives (d𝒰^a(U)f)(θ)=ia(c_2cosθ+c_3sinθ)f(θ)+c_1f'(θ), where d𝒰^a(U)=d/dt𝒰^a_exp tU|_t=0.
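As a quick consistency check, the c_1≠0 branch of the expression above can be differentiated symbolically at t=0. The short sketch below (a verification aid, not part of the original derivation) reproduces the formula (d𝒰^a(U)f)(θ)=ia(c_2cosθ+c_3sinθ)f(θ)+c_1f'(θ).

```python
import sympy as sp

t, theta, a, c1, c2, c3 = sp.symbols('t theta a c1 c2 c3', real=True)
f = sp.Function('f')

# (U^a_{exp tU} f)(theta) for c1 != 0, as written above.
expr = (sp.exp(sp.I * a * ((c2 / c1) * (sp.sin(c1 * t + theta) - sp.sin(theta))
                           - (c3 / c1) * (sp.cos(c1 * t + theta) - sp.cos(theta))))
        * f(c1 * t + theta))

# Differentiate with respect to t and evaluate at t = 0.
dU0 = sp.diff(expr, t).subs(t, 0).doit()

expected = (sp.I * a * (c2 * sp.cos(theta) + c3 * sp.sin(theta)) * f(theta)
            + c1 * f(theta).diff(theta))

print(sp.simplify(dU0 - expected))   # prints 0
```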
§ QUANTIZATION Quantization is the process of forming a quantum mechanical system from a given classical system where these two systems, classical mechanics (in the Hamiltonian formalism) and quantum mechanics (in the Heisenberg picture), are modeled by the space of C^∞-functions on a symplectic manifold M and the set of self-adjoint operators on a Hilbert space ℋ, respectively.This is done by associating a classical observable f on M to a self-adjoint operator Q(f) on ℋ, where Q is a linear map, Q(1) is the identity operator and satisfies the correspondence Q({f,g})=-i/ħ[Q(f),Q(g)]where the expression above was the result of Dirac's analogy of Heisenberg commutator bracket [·,·] with the Poisson bracket {·,·}, which endow the two respective mechanical systems their Lie algebra structures.When M=T^*N, where N is an n-dimensional smooth manifold and ℋ=L^2(N), the quantization is said to be full if the operators Q(q^i) and Q(p_j) act irreducibly on ℋ.That is, the operators above are the position and momentum operators: Q(q^i) is the multiplication of q^i and Q(p_j)=-iħ∂_q_j. By the theorem of Stone and von Neumann, it is unitarily equivalent to the Schrödinger representation.It is known that the algebra of inhomogenous quadratic polynomials on ℝ^2n is a maximal Lie subalgebra of the space of polynomials under the Poisson bracket.This subalgebra is identified with the Lie algebra of the Jacobi group.A representation of this group, known as the Schrödinger-Weil representation, gives rise to a quantization map.However, by the Groenewold-van Hove theorem, it is impossible to extend this map to the whole C^∞(ℝ^2n) (see <cit.>).Independently, the geometric quantization of Konstant and Souriau is another Hilbert space-based quantization where the goal is the construction of quantum objects from the geometry of the classical ones <cit.>.This quantization procedure is the physical counterpart of Kirillov's orbit method. An orbit of a Lie group G in the coadjoint representation, also known as coadjoint orbit, is the orbit of the coadjoint action of G on the dual 𝔤^* of its Lie algebra 𝔤, through the point F∈𝔤^*.It is given by the set Ω={K(g)F:g∈ G} where <K(g)F,U>=<F,Ad_g^-1U>and <·,·> is the dual pairing of the Lie algebra with its dual.It is known that the coadjoint orbit Ω is a homogeneous symplectic G-manifold <cit.> and its symplectic form ω is called the Kirillov symplectic form.This method's particular interest is the correspondence between the finite-dimensional coadjoint orbits and the infinite-dimensional unitary representations of G.The method first appeared in its application to nilpotent Lie groups <cit.> and further extended to other classes of Lie groups (see <cit.> and <cit.>).In both of the methods above, classical mechanics is a limiting case (that is, ħ→ 0 in Dirac's correspondence principle) of quantum mechanics <cit.>. Moreover, in these definitions of quantization, the association of a C^∞-function to a self-adjoint operator is quite a radical transition. In the next section, we define a quantization method free from the Hilbert space-based formulation of quantum mechanics.§ DEFORMATION QUANTIZATION Earlier, we have briefly introduced deformation quantization or phase-space quantum mechanics. 
The model of quantum mechanics is described as a deformed structure of the space of classical observables.In this deformed structure, a noncommutative but associative product is introduced, called the ⋆-product.Let f,g∈ C^∞(M) where M is a Poisson manifold.This formal associative ⋆-product <cit.>, here we denote this as ⋆_λ, is a bilinear map C^∞(M)× C^∞(M)→ C^∞(M)[[λ]]defined by f⋆_λ g=∑_r=0^∞λ^r C^r(f,g)where λ is a formal parameter, C^r is a bidifferential operator with C^r(f,g)=(-1)^rC^r(g,f) for all f,g∈ C^∞(M) and satisfies the following properties: 1. C^0(f,g)=fg 2. C^1(f,g)={f,g} and 3. C^r(1,f)=C^r(f,1)=0 for r≥ 1. Property 1 shows that the noncommutative product ⋆_λ is a deformation of the commutative pointwise multiplication of functions in C^∞(M).Property 2 satisfies the correspondence principle f⋆_λ g-g⋆_λ f=2λ{f,g}+⋯where the dots mean higher-order terms with respect to λ and if we let [f,g]_λ=1/2λ(f⋆_λ g-g⋆_λ f),the bracket [·,·]_λ is the deformed Poisson bracket in C^∞(M).Property 3 implies 1⋆_λ f=f⋆_λ 1=f.Hence, the algebra (C^∞(M)[[λ]],⋆_λ,[·,·]_λ) is the quantum analogue of the classical model (C^∞(M),·,{·,·}).The questions of existence and classification of these ⋆-products have already been settled (see the review in <cit.>).The ⋆-product for the symplectic flatmanifold M=ℝ^2n has long been known <cit.> and is the most important. We discuss it at length.Suppose ω is the canonical symplectic form of M in the (q,p) coordinates on some open set O⊂ M, the Moyal ⋆-product of the algebra (C^∞(M)[[λ]],⋆) with λ=1/2i is the product f⋆ g=fg+∑_r=1^∞1/r!(1/2i)^rP^r(f,g) where P^r(f,g)=Λ^i_1j_1Λ^i_2j_2⋯Λ^i_rj_r∂_i_1i_2⋯ i_rf∂_j_1j_2⋯ j_rgwith the multi-index notation ∂_i_1i_2⋯ i_r=∂/∂ x_i_1∂ x_i_2⋯∂ x_i_r,x:=(q^1,...,q^n,p_1,...,p_n)and Λ^ij are constant value entries of the matrix associated to the symplectic form ω. This ⋆-product has an integral formula <cit.>, from which many of its important properties follow directly.Let f,g be functions in the Schwartz space 𝒮(ℝ^2n).By defining the symplectic Fourier transform F:𝒮(ℝ^2n)→𝒮(ℝ^2n) by (Ff)(x)=∫_ℝ^2nf(ξ)e^iω(x,ξ)dξ/(2π)^nand the symplectic convolution ×_ω as (f×_ω g)(x)=∫_ℝ^2nf(t)g(x-t)e^iω(t,x)dt/(2π)^n,the product f⋆ g=F(Ff×_ω Fg)admits the development of the Moyal ⋆-product defined in (<ref>) which converge to a function in 𝒮(ℝ^2n) and has the following properties: 1. (𝒮(ℝ^2n),⋆) is a generalized Hilbert algebra in L^2(ℝ^2n); 2. ∫ (f⋆ g)(ξ)dξ=∫ (fg)(ξ) dξ; 3. f⋆ g=g̅⋆f̅; and 4. the operator l_f:𝒮(ℝ^2n)→𝒮(ℝ^2n) defined by l_f(g)=f⋆ g, can be extended to a bounded operator on L^2(ℝ^2n). Bayen et al. <cit.> predicted that deformation quantization has a promising future in representation theory.Motivated by Kirillov's orbit method via Konstant and Souriau's geometric quantization and Fronsdal's initial investigation in <cit.>, D. Arnal together with J.C. Cortet, J. Ludwig, M. Cahen and S. 
Gutt, wrote a series of articles about the application of deformation theory on representations of general classes of Lie groups: nilpotent Lie groups <cit.>, compact Lie groups <cit.>, exponential Lie groups <cit.>, and solvable Lie groups <cit.>.These computations were made possible due to the covariance property of the Moyal ⋆-product <cit.>.For a unitary representation of a connected Lie group G corresponding to an orbit Ω≃ G/G_F, where G_F is the stabilizer subgroup of G, the Lie algebra 𝔤 is identified with the Lie subalgebra of C^∞(Ω) 𝔤_Ω={Ũ∈ C^∞(Ω):U∈𝔤}where the function Ũ:Ω→ℝ is defined by Ũ(F)=<F,U>for all F∈Ω and one has to show that the Moyal ⋆-product satisfies 1/2λ(Ũ⋆T̃-T̃⋆Ũ)=[U,T]for any U,T∈𝔤.A ⋆-product that satisfies expression (<ref>) is a 𝔤_Ω-relative quantization.The main result of the paper <cit.> is that each quantization relative to a Lie algebra 𝔤 is a G-covariant ⋆-product, and a G-covariant ⋆-product gives rise to a representation τ of G on C^∞(Ω)[[λ]] by automorphisms, which also gives rise to a differential representation dτ of τ, defined by dτ(U)=.d/dtτ(exp tU)|_t=0. That is, we obtain a representation of 𝔤 on C^∞(Ω)[[λ]] by endomorphisms.The function Ũ on Ω is called the Hamiltonian function associated to the Hamiltonian vector field ξ_U, defined by ξ_Uf={Ũ,f}. We remark that the computations above depend on the parameterization of the orbit Ω.The computational techniques that were outlined in the construction of representations of nilpotent <cit.> and exponential <cit.> Lie groups have led to concrete computations of representations for particular Lie groups, some of which were neither nilpotent nor exponential.Among these are the works of Diep and his students: the group of affine transformation of the real and complex plane <cit.>, the real rotation groups <cit.> and the MD_4-groups <cit.>.The orbits generated by the group of affine transformation of the complex plane and the real rotation groups were parameterized by local charts, while the others have global charts.These papers have provided us an outline to construct and classify unitary representations of concrete Lie groups.As in the method of obtaining representations via induction, we have a more or less procedural way of the construction.Our main contribution is the development of the UIRs of M(2) via deformation quantization, hence an alternative to the method of induced representation.The construction in the next section is outlined as follows: 1. compute the coadjoint orbit Ω_F of M(2) through the point F∈𝔪(2)^*; 2. define a chart on Ω_F and consider the Hamiltonian system (Ω_F,ω,ξ_U) where the Hamiltonian function Ũ is defined in (<ref>), ξ_U is its associated vector field and ω is the Kirillov symplectic form; 3. the Moyal ⋆-product is M(2)-covariant which will give rise to a representation l of 𝔪(2) on C^∞(Ω_F)[[λ]]; 4. the representation l̂, defined by the operators l̂_U=ℱ_p∘ l_U∘ℱ^-1_p, is a differential representation of the UIR of M(2) where the operator ℱ_p is a partial Fourier transform; and 5. classify these constructed representations via the coadjoint orbits. 
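Before carrying out these steps for M(2), it is instructive to see the Moyal ⋆-product of the previous section at work on the flat phase space ℝ². The short SymPy sketch below (our own illustration, truncating the formal series at a finite order) checks properties 1 and 2 for two polynomial observables.

import sympy as sp

q, p, lam = sp.symbols('q p lambda')

def poisson(f, g):
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

def P(r, f, g):
    """Bidifferential operator P^r for the canonical Lambda on R^2."""
    total = sp.S(0)
    for k in range(r + 1):
        df, dg = f, g
        for _ in range(r - k):
            df, dg = sp.diff(df, q), sp.diff(dg, p)
        for _ in range(k):
            df, dg = sp.diff(df, p), sp.diff(dg, q)
        total += sp.binomial(r, k) * (-1)**k * df * dg
    return total

def moyal(f, g, order=4):
    """f * g = f g + sum_{r>=1} (1/r!) lambda^r P^r(f, g), truncated at 'order'."""
    return sp.expand(sum(lam**r / sp.factorial(r) * P(r, f, g)
                         for r in range(order + 1)))

f, g = q**2*p, q*p**2
# property 1: the lambda^0 term is the pointwise product
print(sp.expand(moyal(f, g) - f*g - lam*poisson(f, g)))            # only terms of order lambda^2 and higher remain
# property 2: the star commutator reproduces the Poisson bracket to first order
print(sp.expand(moyal(f, g) - moyal(g, f) - 2*lam*poisson(f, g)))  # only terms of order lambda^3 and higher remain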
We remark that these steps are quite straightforward to implement and provide concrete computations suitable for the learning by graduate students in Physics and Mathematics of many important mathematical concepts and objects.§ THE UIRS OF M(2)§.§ Coadjoint orbitsIn matrix form, the Lie algebra 𝔪(2) of M(2) is spanned by the matrices X=([010; -100;000 ]), E_1=([ 0 0 1; 0 0 0; 0 0 0 ]), E_2=([ 0 0 0; 0 0 1; 0 0 0 ])and these matrices satisfy the Lie brackets [X,E_1]=-E_2,[X,E_2]=E_1 and [E_1,E_2]=0.Hence, 𝔪(2) is identified with ℝ×ℝ^2 and the elements are written as U=c_1X+c_2E_1+c_3E_2.The dual 𝔪(2)^* is also identified with ℝ×ℝ^2. Let g=exp U∈ M(2) and fix F=(μ,α)=μ X^*+α_1E_1^*+α_2E_2^*∈𝔪(2)^*.The coadjoint orbitΩ_F of M(2) through F, given by expression (<ref>), is the set Ω_F={K(exp U)F: U∈𝔪(2)}⊂𝔪(2)^*satisfying <K(exp U)F,T>=<F,Ad(-exp U)T>.We write K(exp U)F = <F,exp(-ad_U)X>X^*+<F,exp(-ad_U)E_1>E_1^* +<F,exp(-ad_U)E_2>E_2^*. But exp(-ad_U)=∑_r≥ 01/r!([000;c_30 -c_1; -c_2c_10 ])^r= ([ 1 0; 1-R_c_1/c_1([ c_2; c_3 ]) R_c_1 ]).So we have K(exp U)F=(μ+α·1-R_c_1/c_1([ c_2; c_3 ]))X^*+α R_c_1([ E_1^*; E_2^* ]).The coadjoint orbit of M(2) through F is Ω_F={(μ+α·1-R_c_1/c_1([ c_2; c_3 ]),α R_c_1): U∈𝔪(2)}. There are two types of orbits.If α=0, the orbit Ω_F={(μ,0)}) is a point- the trivial orbit.If α≠0, the orbit Ω_F is the 2-dimensional infinite cylinder of radius α which we denote Ω_F=T^*S^1_α. We first work on the nontrivial orbits, then later the trivial ones. §.§ Hamiltonian system on the cylinder Fix F where α≠0.The map ψ:ℝ^2→Ω_F=T^*S^1_αwhere ψ(x,θ)=xX^*+αcosθ E_1^*+αsinθ E_2^* defines a global chart on Ω_F. So each function f in C^∞(Ω_F) is written as f∘ψ and we describe the Hamiltonian system (Ω_F,ω,ξ_U) with respect to the chart (<ref>) as follows: 1. the Hamiltonian function associated to U∈𝔪(2) is Ũ=c_1x+α(c_2+ic_3,e^iθ) where (·,·) isthe inner product and the associated Hamiltonian vector field is ξ_U=c_1∂_θ-α(c_2+ic_3,ie^iθ)∂_x; 2. the map ψ gives rise to a symplectomorphism where the Kirillov symplectic form is the canonical form ω=dx∧ dθ. Since U=c_1X+c_2E_1+c_3E_2∈𝔪(2), the value of the functional Ũ at the point F'=xX^*+αcosθ E_1^*+αsinθ E_2^*∈Ω_F is the value of the dual pairing <F',U>=c_1x+c_2(αcosθ)+c_3(αsinθ),and since ξ_Uf=∂_xŨ∂_θ f-∂_θŨ∂_x f in (x,θ)-coordinates, it follows that ξ_U=c_1∂_θ-α(-c_2sinθ+c_3cosθ)∂_x.The restriction of ψ to the domain ℝ×𝕋 gives rise to a diffeomorphism.Let U=c_1X+c_2E_1+c_3E_2 and T=c'_1X+c'_2E_1+c'_3E_2. Since [U,T]=(c_1c'_3-c'_1c_3)E_1+(c'_1c_2-c_1c'_2)E_2, so for any F'∈Ω_F <F',[U,T]>=αcosθ(c_1c'_3-c'_1c_3)+αsinθ(c'_1c_2-c_1c'_2).But ω(ξ_U,ξ_T) = det([ dx(ξ_U) dx(ξ_T); dθ(ξ_U) dθ(ξ_T) ]) = αcosθ(c_1c'_3-c'_1c_3)+αsinθ(c'_1c_2-c_1c'_2),when ω=dx∧ dθ.Hence, ψ|_ℝ×𝕋 is a symplectomorphism. §.§ Covariance of the Moyal ⋆-product Let Λ be the matrix associated to the canonical form ω=dx∧ dθ, that is, Λ=([01; -10 ]).The Moyal ⋆-product is defined by expression (<ref>) where λ=1/2i.Since P^0(Ũ,T̃)=ŨT̃, P^1(f,g)=∂_xŨ∂_θT̃-∂_θŨ∂_xT̃=αcosθ(c_1c'_3-c'_1c_3)+αsinθ(c'_1c_2-c_1c'_2) and P^r(Ũ,T̃)=0 for r≥ 2, we have Ũ⋆T̃=ŨT̃+1/2i(αcosθ(c_1c'_3-c'_1c_3)+αsinθ(c'_1c_2-c_1c'_2)).So from (<ref>), we can easily compute iŨ⋆ iT̃-iT̃⋆ iŨ=i[U,T]where [U,T] is expression (<ref>).Expression (<ref>) is exactly (<ref>) when λ=1/2i.Thus, the Moyal ⋆-product is M(2)-covariant.Hence, it gives rise to a representation of 𝔪(2) on C^∞(Ω_F)[[λ]] by endomorphism of the Moyal ⋆-product. 
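The covariance computation above can also be verified symbolically. The following sketch (our own addition, using SymPy) encodes the Hamiltonian functions of two algebra elements in the chart (x, θ), uses the fact noted above that only P^0 and P^1 contribute on these functions, and confirms that iŨ⋆iT̃ − iT̃⋆iŨ equals i times the Hamiltonian function of [U,T].

import sympy as sp

x, th, alpha = sp.symbols('x theta alpha', real=True)
c1, c2, c3, d1, d2, d3 = sp.symbols('c1 c2 c3 d1 d2 d3', real=True)

# Hamiltonian functions of U = c1 X + c2 E1 + c3 E2 and T = d1 X + d2 E1 + d3 E2
Ut = c1*x + alpha*(c2*sp.cos(th) + c3*sp.sin(th))
Tt = d1*x + alpha*(d2*sp.cos(th) + d3*sp.sin(th))

def pb(f, g):                      # Poisson bracket for omega = dx ^ dtheta
    return sp.diff(f, x)*sp.diff(g, th) - sp.diff(f, th)*sp.diff(g, x)

def star(f, g):                    # Moyal product; P^r = 0 for r >= 2 on these functions
    return f*g + (1/(2*sp.I))*pb(f, g)

lhs = star(sp.I*Ut, sp.I*Tt) - star(sp.I*Tt, sp.I*Ut)
rhs = sp.I*(alpha*sp.cos(th)*(c1*d3 - d1*c3) + alpha*sp.sin(th)*(d1*c2 - c1*d2))
print(sp.simplify(lhs - rhs))      # should print 0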
This representation of 𝔪(2) is defined by the operators l_U:C^∞(Ω_F)[[λ]]→ C^∞(Ω_F)[[λ]]given by the left ⋆-product multiplication l_Uf=1/2λŨ⋆ f.But as we have earlier explained, the Moyal ⋆-product converges in the space 𝒮(Ω_F) and that the operator l_U extends to L^2(Ω_F). We still denote this extension as l_U for all U∈𝔪(2). §.§ Convergence of the operators l̂_U Instead of l_U, we will compute for the convergence of l̂_U=ℱ_x∘ l_U∘ℱ^-1_x, for all U∈𝔪(2) as suggested in <cit.>.In the case of exponential Lie groups, l̂ is the differential of the UIR of the said group associated to the orbit method of Kostant-Kirillov <cit.>.Both the exponential Lie groups and M(2) are solvable, but the latter is non-exponential (since the exponential map exp:𝔪(2)→ M(2) is not injective).However, we will show in section <ref> that l̂ is the differential of the UIR of M(2). Let f∈𝒮(Ω_F).The partial Fourier transform ℱ_x of the function f on Ω_F is defined by (ℱ_xf)(η,θ)=∫_ℝe^-iη xf(x,θ)dx/√(2π) and its inverse transform ℱ^-1_x by (ℱ^-1_xf)(x,θ)=∫_ℝe^iη xf(η,θ)dη/√(2π). The derivatives∂_xℱ^-1_x(f)=iℱ^-1_x(η f)and ℱ_x(xf)=i ∂_ηℱ_x(f)are easily computed while the derivative (<ref>) can be generalized as∂^r/∂x^rℱ^-1_x(f)=i^rℱ^-1_x(η^rf) On the other hand, the partial derivative of Ũ with respect to x of order r≥ 2 or with respect to a mixture of variables x and θ is zero. So, the bidifferential P^r(Ũ,ℱ^-1_xf) will always have the nonzero term Λ^21Λ^21⋯Λ^21∂_θ^rŨ∂_x^rℱ^-1_x(f)where Λ^21Λ^21⋯Λ^21 r-times.The rth partial derivative of expression (<ref>), together with generalized derivative (<ref>) applied in (<ref>), we have P^r(Ũ,ℱ^-1_xf)=(-1)^rα(c_2+ic_3,i^re^iθ)(i^rℱ^-1_x)(η^rf) for r ≥ 2 for all functions f on Ω_F.Now l̂_U(f)=iℱ_x(Ũ⋆ℱ^-1_x(f)).Applying (<ref>) and (<ref>), we have Ũ⋆ℱ^-1_x(f) =c_1x ℱ^-1_x(f)+c_1/2i∂_θℱ^-1_x(f)+∑_r=0^∞1/r!(-1/2)^r α(c_2+ic_3,i^re^iθ)ℱ^-1_x(η^r · f).Together with (<ref>), l̂_U(f) = -c_1∂_η f+c_1/2∂_θ f+ iα∑_r=0^∞1/r!(-η/2)^r (c_2+ic_3,i^re^iθ)f = c_1(1/2∂_θ-∂_η)f+iα(c_2+ic_3,e^iθ∑_r=0^∞1/r!(-iη/2)^r )f = c_1(1/2∂_θ-∂_η)f+iα(c_2+ic_3,e^i(θ-η/2))fLet s=θ-η/2.By the change of variables, the above expression will become l̂_U=c_1∂/∂ s+i α(c_2cos s+c_3 sin s). 
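The key step in passing from the infinite series to a multiplication operator is the elementary resummation ∑_r (1/r!)(−iη/2)^r = e^{−iη/2}, which the change of variables s = θ − η/2 then absorbs into the arguments of cos and sin. A two-line symbolic check (our own addition) is:

import sympy as sp

eta, r = sp.symbols('eta r')
s = sp.summation((-sp.I*eta/2)**r / sp.factorial(r), (r, 0, sp.oo))
print(sp.simplify(s))   # should give exp(-I*eta/2), a pure phase shifting theta by -eta/2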
§.§ Representations associated to the nontrivial orbits The representation l̂ of 𝔪(2) on L^2(Ω_F) is defined by the operators l̂_U in (<ref>).But the representation space L^2(Ω_F) is too big.We choose 𝔥=ℝ^2 as the real algebraic polarization of 𝔪(2).By Remark 6 in <cit.>, the leaves of the M(2)-invariant foliation of Ω_F are the disjoint tangent lines passing through each point in S^1_α.This means that the subalgebra of functions on Ω_F which are constant along these leaves is a maximal abelian subalgebra of C^∞(Ω_F).Hence, we reduce L^2(Ω_F) into L^2(S^1_α).Furthermore, L^2(S^1_α) is isomorphic to L^2(S^1) given by the map f↦ f|_S^1, where (s_1,s_2)∈ S^1_α is identified with (s_1/α,s_2/α)∈ S^1.So, l̂ is a representation of 𝔪(2) in L^2(S^1).We are left to show that l̂ is the differential of the unitary representation of M(2) defined in (<ref>).Set α=a and s=θ in (<ref>).But this is exactly (<ref>).To show uniqueness, we apply the differential operator l̂_U to expression (<ref>) where c_1≠0, so l̂_U(𝒰^a_exp tUf)(θ) = ia(c_2cosθ+c_3sinθ)(𝒰^a_exp tUf)(θ)+c_1∂/∂θ(𝒰^a_exp tUf)(θ).The second term in (<ref>) is computed as c_1∂/∂θ(𝒰^a_exp tUf)(θ) = e^ia[c_2/c_1(sin(tc_1+θ)-sinθ)-c_3/c_1(cos(tc_1+θ)-cosθ)] (ia[c_2(cos(tc_1+θ)-cosθ)+c_3(sin(tc_1+θ)-sinθ)]f(tc_1+θ) +c_1f'(tc_1+θ)).When (<ref>) replaces the second term in (<ref>), d/dt(𝒰^a_exp tUf)(θ)=l̂_U(𝒰^a_exp tUf)(θ)where the left-hand side is the derivative of 𝒰^a expressed in (<ref>).Consider c_1=0.The left-hand side in (<ref>) together with the application of the operator l̂_U=ia(c_2cosθ+c_3sinθ) to the expression (<ref>), will result to the equality d/dt(𝒰^a_exp tUf)(θ) = ia(c_2cosθ+c_3sinθ)(𝒰^a_exp tUf)(θ)= l̂_U(𝒰^a_exp tUf)(θ). For both cases, the derivative with respect tot and the application l̂_U to (𝒰^a_exp tUf)(θ) are equal, for all f∈ L^2(S^1).Moreover, (𝒰^a_exp tUf)(θ)=f(θ) when t=0.Hence, (𝒰^a_exp tUf)(θ) is the unique solution to the Cauchy problem {[ d/dtS(t,θ)= l̂_US(t,θ); S(0,θ)=Id. ]. This means that exp(l̂_U)f(θ)=(𝒰^a_exp Uf)(θ). §.§ Representations associated to the trivial orbits When F=(μ,0), the coadjoint orbit of M(2) is the 0-dimensional Ω_F={(μ,0)}which is a point.The set of C^∞-functions on this orbit can be described as C^∞(Ω_F)={f:Ω_F→ℂ: f(μ,0)=z}≃ℂ.The Hamiltonian function Ũ:Ω_F→ℝ is the constant function Ũ(F)=c_1μ.Obviously, the vector field ξ_U associated to this function is the zero vector field. 
The Kirillov form is computed as <F,[U,T]>=0 for any U,T∈𝔪(2).The Moyal ⋆-product on the space C^∞(Ω_F) is f⋆ g=fgfor any functions f,g∈ C^∞(Ω_F).Hence, this ⋆-product is trivially covariant satisfying iŨ⋆ iT̃-iT̃⋆ iŨ=i[U,T]=0for any U,T∈𝔪(2).So, there exists a 1-dimensional representation l of 𝔪(2) on C^∞(Ω_F))[[λ]] defined by (l_U)(f)=iŨ⋆ f=(ic_1μ)f.The operator l_U=0 when U∈span{E_1,E_2}.The 1-parameter subgroup U=tX,t∈ℝ is identified with 𝔰𝔬(2)≃ℝ.So, the unitary operator χ_μ(exp tX)=e^itμ is the unique solution to the Cauchy problem {[ d/dtS(t,x)=l_XS(t,x); S(0,x)= Id ].satisfying exp(tl_X)=χ_μ(exp tX).Since the set {χ_n∘ p:n∈ℤ}are the 1-dimensional UIRs of M(2), the set of orbits {Ω_F={(μ,0)}:μ∈ℤ}corresponds with these 1-dimensional UIRs and the rest of the non-integer orbits correspond with {χ_μ∘ p:μ∈ℝ/ℤ}.§ CONCLUSIONThis article has aimed to introduce deformation quantization as a powerful tool in constructing and classifying Lie group representations, an alternative to the traditional method of induced representations.The covariance property of the Moyal ⋆-product and its convergence in the Schwartz space are the key properties that made these constructions and classifications possible.The main result of this work is the unitary representation 𝒰^a of M(2).We have tested Arnal and Cortet's program in <cit.>, despite the original design for nilpotent and exponential Lie groups.The results in section <ref> are summarized as follows. 1. The representation l̂ of 𝔪(2) defined by (<ref>) is the differential representation of the infinite-dimensional UIR of M(2).Moreover, there is a one-to-one correspondence between the nontrivial orbits and the principal series of UIRs of M(2) and this correspondence is defined by the radius of the cylinder. 2. The representation l associated to the orbit {(μ,0)} is the differential representation of the 1-dimensional unitary irreducible representation of M(2) if μ∈ℤ.Though the computations in <cit.> has provided a better understanding of the implementation of the program, this paper implemented it on a cylinder, different from the computations presented in <cit.>, and on trivial orbits which was neglected in <cit.>.While the program has been effectively implemented on a flat orbit generated by the coadjoint action of a solvable Lie group, it is interesting to extend the said program to an orbit with nonzero curvature generated by a nonsolvable Lie group, for example, the spheres and tangent spheres- these nontrivial orbits are generated by the coadjoint action of M(3).This work was supported by the Commission on Higher Education Faculty Development Program (CHED-FDP) II of the Philippines.99Neumann J. von Neumann, Mathematical Foundations of Quantum Mechanics,translated from the German edition by R. Beyer, Princeton University Press, New Jersey U.S.A. (1955).Mackey G. Mackey, Harmonic analysis as the exploitation of symmetry- a historical survey, Bull. Amer. Math. Soc. 3 (1980) 543.Kirillov2 A.A. Kirillov, Lectures on the Orbit Method, American Mathematical Society, Rhode Island U.S.A. (2004).Kostant B. Kostant, Quantization and Unitary Representation, in Lectures in Modern Analysis and Applications III, R.M. Dudley, J. Feldman, B. Kostant, R.P. Langlands and E.M. Stein, Springer-Verlag, Berlin-Heidelberg (1970), pp. 87-208. Souriau J. -M. Souriau, Structure des Systémes Dynamiques,Dunod, Paris France (1970).Zachos C. Zachos, D. Fairlie and T. Curtright, Quantum Mechanics in Phase Space, World Scientific Publishing Co., Singapore (2005).Wigner E. 
Wigner, On the quantum correction for thermodynamic equilibrium, Phys. Rev. 40 (1932) 749.Weyl H. Weyl, The Theory of Groups and Quantum Mechanics, Dover Publication, Inc., New York U.S.A. (1931).Hug M. Hug, C. Menke and W.P. Schleich, Modified spectral method in phase space: calculation of the Wigner function. I. Fundamentals, Phys. Rev. A 57 (1998) 3188.Groenewold H. Groenewold, On the principles of elementary quantum mechanics, Physica 45 (1949) 99.Moyal J.E. Moyal, Quantum mechanics as a statistical theory, Proc. Camb. Phil. Soc. 45 (1949) 99.Bayen F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, Deformation theory and quantization. I. Deformations of symplectic structures, Ann. Phys. 111 (1978) 61. Bayen1 F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, Deformation theory and quantization. II. Physical applications, Ann. Phys. 111 (1978) 111. Kontsevich M. Kontsevich, Deformation quantization of Poisson manifolds, Lett. Math. Phys.66 (2003) 157.Fedosov B. Fedosov, Formal Quantization, in Some Topics of Modern Mathematics and Their Application to Problems of Mathematical Physics, Moscow (1985), pp. 129-136.Fedosov1 B. Fedosov, A simple geometrical construction of deformation quantization, J. Differ. Geom. 40 (1994) 213. Arnal1D. Arnal, ⋆ products and representations of nilpotent groups, Pac. J. Math. 114 (1984) 285.Arnal2D. Arnal and J.C. Cortet, ⋆-products in the method of orbits for nilpotent groups, J. Geom. Phys. 2 (1985) 83.Arnal3 D. Arnal and J.C. Cortet, Représentations ⋆ des groupes exponentiels, J. Funct. Anal. 82 (1990) 103. Arnal4D. Arnal, M. Cahen and S. Gutt, Representations of compact Lie groups and quantization by deformation, Bull. Acad. Royale Belg. 74 (1988) 123.Arnal7D. Arnal, J.C. Cortet and J. Ludwig, Moyal product and representations of solvable Lie groups, J. Funct. Anal. 133 (1995) 402.Arnal5 D. Arnal, J.C. Cortet, P. Molin and G. Pinczon, Covariance and geometrical invariance in quantization, Lett. Math. Phys. 24 (1983) 276.Fronsdal C. Fronsdal, Some ideas about quantization, Rep. Math. Phys. 15 (1978) 111.Moreno C. Moreno, Invariant star products and representations of compact semisimple Lie groups, Lett. Math. Phys. 12 (1986) 217.Hirshfeld A. Hirshfeld and P. Henselder, Deformation quantization in the teaching of quantum mechanics, Am. J. Phys. 70 (2002) 537.Case W. Case, Wigner's functions and Weyl transforms for pedestrians, Am. J. Phys. 76 (2008) 937.Diep1 Do Ngoc Diep and Nguyen Viet Hai, Quantum half-planes via deformation quantization, Beitr. Algebra Geom. 42 (2001) 407.Diep2 Do Ngoc Diep and Nguyen Viet Hai, Quantum co-adjoint orbits of the group of affine transformation of the complex line, Beitr. Algebra Geom. 42 (2001) 419.Nable J. Nable, Deformation quantization and representations of the real rotation group, Science Diliman 13 (2001) 41.Nguyen Nguyen Viet Hai, Quantum co-adjoint orbits of MD_4-groups, Vietnam J. Math. 29 (2001) 131.Berndt R. Berndt, An Introduction to Symplectic Geometry, American Mathematical Society, Rhode Island U.S.A. (2000).Abraham R. Abraham and J. Marsden, Foundations of Mechanics 2nd ed., Addison-Wesley Publishing, Inc., Canada (1978).Sugiura M. Sugiura, Unitary Representations and Harmonic Analysis- An Introduction, North-Holland, Amsterdam-Oxford-New York-Tokyo (1990).Berndt1 R. Berndt, Representations of Linear Groups, Friedr. Vieweg & Sohn Verlag, Berlin, Germany (2007).Mackey1 G. Mackey, Theory of Unitary Group Representations, The University of Chicago Press, London U.K. 
(1976).Vilenkin2 N. Ja. Vilenkin and A.U. Klimyk, Representations of Lie Groups and Special Functions, Volume 1: Simplest Lie groups, special functions and integral transforms,Kluwer Academic Publishers, Dordrecht The Netherlands (1991).Vilenkin3 N. Ja. Vilenkin and A.U. Klimyk, Representations of Lie groups and special functions, Volume 2: Class I representations, special functions, and integral transforms,Kluwer Academic Publishers, Dordrecht The Netherlands (1991).Vilenkin4 N. Ja. Vilenkin and A.U. Klimyk, Representations of Lie groups and special functions, Volume 3: Classical and quantum groups and special functions,Kluwer Academic Publishers, Dordrecht The Netherlands (1991).Vilenkin N. Ja. Vilenkin, Special Functions and the Theory of Group Representations, translated from Russian by V.N. Singh, American Mathematical Society, Rhode Island U.S.A. (1968).Mackey3 G. Mackey, On induced representations of groups, Am. J. Math. 73 (1951) 576.Mackey2 G. Mackey, Induced representations of locally compact groups I, Ann. Math. 55 (1952) 101. Kirillov A.A. Kirillov Geometric quantization, in Dynamical Systems IV, V.I. Arnol'd and S.P Novikov, Springer-Verlag, New York, Berlin, Heidelberg (1985) pp. 137-172.Kirillov1 A.A. Kirillov, Unitary representations of nilpotent Lie groups (Russian),Uspekhi Mat. Nauk17 (1962) 57.Kirillov3 A.A. Kirillov, Introduction to the theory of representations and noncommutative harmonic analysis, in Representation Theory and Noncommutative Harmonic Analysis I, A.A. Kirillov, Springer-VerlagNew York, Berlin, Heidelberg (1991) pp. 1-156.Dirac P. Dirac, The Principles of Quantum Mechanics, 4th ed. Clarendon Press, Oxford (1957).Gutt S. Gutt, Deformation quantization, in Workshop on Representation Theory of Lie Groups, International Center for Theoretical Physics, SMR.686/14.Bordemann M. Bordemann, Deformation quantization: a survey, J. Phys. 103 (2008) 1. Hansen F. Hansen, Quantum mechanics in phase space,Rep. Math. Phys. 19 (1984) 361.Arnal8 D. Arnal and J.C. Cortet, Star representations of E(2), Lett. Math. Phys. 20 (1990) 141.
http://arxiv.org/abs/1709.09394v1
{ "authors": [ "Alexander J. Balsomo", "Job A. Nable" ], "categories": [ "math-ph", "math.MP" ], "primary_category": "math-ph", "published": "20170927090439", "title": "Deformation quantization in the teaching of Lie group representations" }
http://arxiv.org/abs/1709.09371v1
{ "authors": [ "Rampei Kimura", "Teruaki Suyama", "Masahide Yamaguchi", "Daisuke Yamauchi", "Shuichiro Yokoyama" ], "categories": [ "astro-ph.CO", "gr-qc", "hep-th" ], "primary_category": "astro-ph.CO", "published": "20170927074701", "title": "Are redshift-space distortions actually a probe of growth of structure?" }
Accepted 2017 September 24. Received 2017 September 8; in original form 2017 June 30

We present the results of deep optical imaging of the radio/γ-ray pulsar PSR J2043+2740, obtained with the Large Binocular Telescope (LBT). With a characteristic age of 1.2 Myr, PSR J2043+2740 is one of the oldest (non-recycled) pulsars detected in γ-rays, although with still a quite high rotational energy reservoir (Ė_rot = 5.6 × 10^34 erg s^-1). The presumably close distance (a few hundred pc), suggested by the hydrogen column density (N_H ≲ 3.6 × 10^20 cm^-2), would make it a viable target for deep optical observations, never attempted until now. We observed the pulsar with the Large Binocular Camera of the LBT. The only object (V=25.44±0.05) detected within ∼3 arcsec from the pulsar radio coordinates is unrelated to it. PSR J2043+2740 is, thus, undetected down to V∼26.6 (3σ), the deepest limit on its optical emission. We discuss the implications of this result on the pulsar emission properties.

stars: neutron – pulsars: individual: PSR J2043+2740

§ INTRODUCTION The launch of the Fermi Gamma-ray Space Telescope has spurred on the search for pulsars in γ-rays (Grenier & Harding 2015), yielding over 200 [https://confluence.slac.stanford.edu/display/GLAMCOG/] detections and triggering multi-wavelength observations. While pulsars are common targets in the X-rays, they are very challenging targets in the optical and very few of them have been identified (see Mignani et al. 2016 and references therein). Here we report on Large Binocular Telescope (LBT) observations of an isolated pulsar, PSR J2043+2740, detected by both AGILE (Pellizzoni et al. 2009) and Fermi (Abdo et al. 2010; Noutsos et al. 2011). It was discovered as a radio pulsar (Ray et al. 1996) and later on as an X-ray source by XMM-Newton (Becker et al. 2004), although X-ray pulsations have not yet been found. PSR J2043+2740 is one of the very few non-recycled pulsars older than 1 Myr detected in γ-rays, with a characteristic age τ_c = 1.2 Myr, inferred from the values of its spin period P_s = 0.096 s and its derivative Ṗ_s = 1.27 × 10^-15 s s^-1 (Ray et al. 1996). This also yields a rotational energy loss rate Ė_rot = 5.6 × 10^34 erg s^-1 and a surface dipolar magnetic field B_s = 3.54 × 10^11 G [derived from the magnetic dipole model, e.g. Kaspi & Kramer (2016)]. Although PSR J2043+2740 does not have a very large spin-down power compared to young (∼1–10 kyr) pulsars (∼10^36–10^38 erg s^-1), it is still a factor of two larger than that of middle-aged γ-ray pulsars (∼0.1–0.5 Myr), such as Geminga, PSR B0656+14, and PSR B1055-52, all detected in the optical thanks to their distances ≲ 500 pc (Abdo et al. 2013). The distance to PSR J2043+2740 is uncertain owing to the lack of a radio parallax measurement. The radio dispersion measure (DM=21.0±0.1 pc cm^-3; Ray et al. 1996) gives a distance of 1.8±0.3 kpc from the NE2001 model of the Galactic free electron density (Cordes & Lazio 2002). A slightly smaller distance (1.48 kpc) is inferred from the model of Yao et al. (2017). The hydrogen column density towards the pulsar obtained from the X-ray spectral fits (N_H ≲ 3.6 × 10^20 cm^-2; Abdo et al. 2013) suggests a distance of a few hundred pc (He et al. 2013), although these estimates depend on the model X-ray spectrum. Such a distance would make the pulsar a viable target for deep optical observations, never carried out until now, and might be compatible with the debated association (Noutsos et al.
2011) with the Cygnus Loop supernova remnant(SNR) at 540^+100_-80 pc (Blair et al. 2005). The structure of this manuscript is as follows: observations and data reduction are described in Sectn. 2, whereasthe results are presented and discussed in Sectn. 3 and 4, respectively.§ OBSERVATIONS AND DATA ANALYSIS Theobservations were carried out onJuly 5th, 2016 with the LBT at the Mount Graham International Observatory (Arizona, USA) and theLarge Binocular Camera (LBC; Giallongo et al. 2008). The Camera's field of view is 23×25, with a pixel scale of02255.The images were takenthrough the filters SDT-Uspec, V-BESSEL, and i-SLOAN, closely matchingthe Sloan filters u and i (Fukugita et al. 1996), and the Johnson V filter. For each filter, three sets of exposures were acquired with exposure times of 20s, 60s and 120s, for a total integration of 5887 s (Uspec and i-SLOAN) and 5376 s (V-BESSEL).Sky conditions were non-photometric owing to the presence of cirri and the average seeing was around 12. The target was observed with an average airmass around 1.01 and 1.09, and with a lunar illumination of ∼ 1%.Images were reduced with the LBC data reduction pipeline, correcting raw science frames for bias, dark andflat fields. A further low-order flat-field correction was obtained from the night sky flats to remove large-scale effects. We, then, corrected the imagesfor geometrical distortions,applying a linear pixel scale resampling. Finally, we stacked all images taken with the same filterand used themaster frames to compute the astrometric solution (∼ 01 rms).Since the night was non-photometric, we performed the photometric calibration directly on the science frames by matching stars in public source catalogues.In particular, for the SDT-Uspec filter we used a source listextracted from theSerendipitous Ultra-violet Source Survey Catalogue version 3.0(XMM-SUSS3[ttps://www.cosmos.esa.int/web/xmm-newton/xsa]), built from observations with theOptical Monitor (OM; Mason et al. 2001), whereas for both the V-BESSEL and i-SLOAN filters we useda source listfrom the American Association of Variable Stars Observers (AAVSO) Photometric All-Sky Survey[ttp://www.aavso.org/apass] (APASS).All magnitudes are in the AB system (Oke 1974). For all filters, we computed object photometry with the DAOPHOT II software package (Stetson 1994) following a standard procedure for source detection, modelling of the image point spread function (PSF), and multi-band source catalogue generation (see, e.g., Testa et al. 2015).After accounting for photometric errors, the fit residuals turned out to be ∼ 0.01 magnitudes in all filters, to which we must add the average absolute photometry accuracy ofSUSS3 and APASS, which is ∼ 0.05 magnitudes.§ RESULTSFig. <ref>shows a zoom of thei-band image centred around the pulsar position. The J2000 coordinates of used by Noutsos et al. (2011) are: α =20^ h43^ m 4352; δ= +27^∘ 40 5606 but with no quoted error.The ATNF pulsar catalogue (Manchester et al. 2005) reports the same coordinates, with an uncertainty of 01 and 1 in right ascension and declination, respectively,at a reference epoch MJD=49773.Owing to timing noise, no updated pulsar coordinates could be computed using the timing model of Kerr et al. (2015). has not been observed by , so that we cannot rely on an accurate, model-independent position.No proper motion has been measured for . 
Therefore, to account for its unknown angular displacement between the epoch of the reference radio position and that of our LBT observations (MJD=57574), we looked for candidate counterparts within a conservative search region of 3 radius. This is three times as large as the formal radio position uncertainty and roughly equal to the angular displacement expected for a pulsar moving with an average transverse velocity of 400 km s^-1 (Hobbs et al. 2005) at a distance as close as the Cygnus Loop SNR (540^+100_-80 pc; Blair et al. 2005).Only one object is detected within the search region (3 radius) defined above (Fig. <ref>).The object is barely visible in the V band and not in the U band, whereas it is clearly detected in the i band.Its magnitudes have been computed following the same procedure as described in Sectn. 2 and are V=25.44±0.05, i=25.08±0.08, U>26.5 (AB). To investigate the characteristics of the object, we built a U-V vs V-i colour-colour diagram (CCD) of all objects within 5 from the pulsar position and compared its colourswith respect to the main sequence (Fig. <ref>). Since the field stars are, presumably, at different distances with respect tothe pulsar, the diagram is uncorrected for the reddening.The object's colours are V-i = 0.36 ± 0.09, U-V > 1.06 and are close to those of the main sequence. This means that it does not stand out for having peculiar colours,as one would expect for a pulsar, which is usuallycharacterised by blue colours (e.g., Mignani & Caraveo 2001; Mignani et al. 2010).We compared the observed CCD to a synthetic one computed withthe Besançon Model of Stellar Population Synthesis to simulate the Galactic stellar population within a 5 radius around the direction of . As shown in Fig.<ref>, the main sequence of the observed CCD is consistent with the model Galactic stellar population, supporting the conclusion that the objectis a field star rather than the pulsar. Estimated 3σ-level limiting magnitudes are 26.5, 26.6, and 26.2 in the U, V and i bands, respectively, which we assume as upper limits on the pulsar fluxes.§ DISCUSSION Our observations ofare much deeper than those obtained by Beronya et al. (2015) with the BTA(Bolshoi Teleskop Alt-azimutalnyi) 6m telescope, which only yielded a3σ limit of R∼ 21.7 (Vega) on the pulsar flux. The pulsar is obviously too faint to have been detected in theOM images (Becker et al. 2004),with 3σlimits of B≈21.5 and U≈20.9 (Vega). We checked whether our limits on the pulsar flux could help to prove or disprove the association with the Cygnus Loop SNR. In general, the non-thermal optical luminosity L_ opt of rotation-powered pulsars scales with a power ofthe rotationalenergy loss rate (see, e.g. Mignani et al. 2012) as L_ opt∝Ė_ rot^1.70±0.03 (1σ statistical error). From this relation, weestimate a luminosity of ∼ 3.16× 10^27 erg s^-1 for , corresponding to amagnitude V∼ 26.2–26.9at the distance of the Cygnus Loop SNR, afteraccounting for the interstellar reddening E(B-V) 0.06,inferred from the N_ H (Predehl & Schmitt 1995). Therefore, our detection limit (V∼ 26.6) does not determine whether the pulsar is at the distance of the Cygnus Loop SNR, andtheir association remains uncertain.Pushing the limit on the pulsar brightness down to V∼28 would imply a distance largerthan ∼ 1 kpc for the same predicted opticalluminosity and woulddisprove this association.Given the lack of a counter-evidence, we assume the pulsar DM-based distance (Yao et al. 2017) as a reference. 
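The adopted search radius can be verified with a back-of-the-envelope estimate. The snippet below (added here for illustration, using only the numbers quoted above) converts the assumed transverse velocity and distance into an angular displacement over the ∼21-year baseline between the radio timing epoch and the LBT observations.

# Cross-check of the 3 arcsec search radius (simple arithmetic; inputs as quoted in the text)
v_t = 400.0                               # km/s, mean pulsar transverse velocity (Hobbs et al. 2005)
d_pc = 540.0                              # pc, distance of the Cygnus Loop SNR
baseline_yr = (57574 - 49773) / 365.25    # ~21.4 yr between MJD 49773 and MJD 57574

mu_arcsec_yr = v_t / (4.74 * d_pc)        # proper motion in arcsec/yr (1 AU/yr = 4.74 km/s)
print(mu_arcsec_yr * baseline_yr)         # ~3.3 arcsec, comparable to the adopted radius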
We compared our constraints on the optical emission ofwith the properties of otherpulsars of comparable ageidentified in the optical (Table <ref>). Among them, lies somewhere in between the middle-aged pulsars, withthe oldest being PSRB1055-52(τ_ c∼ 0.5 Myr), and the oldish ones, such as PSRB1929+10 (τ_ c∼3 Myr).The V-band optical luminosity of is L_ opt 2.2 × 10^28 d_1.48^2 erg s^-1, where d_1.48is its distance in units of 1.48 kpc (Yao et al. 2017). If one assumes that the V-band optical emission is entirelynon-thermal and rotation powered, its emission efficiency would be η_ opt 3.93× 10^-7 d_1.48^2.For comparison, for a distance of 0.35 kpc (Mignani et al. 2010), the V-band optical luminosity of PSRB1055-52would be 3.74 × 10^27 erg s^-1 and its emission efficiency 1.26× 10^-7. We note that the opticalspectrum of PSRB1055-52 brings the contribution of both non-thermal emission from the magnetosphere and thermal emissionfrom the neutron star surface and is the combination of a power-law (PL) and a Rayleigh-Jeans (R-J) (Mignani et al. 2010),as observed in other middle-aged pulsars. However, the contribution of the R-J in the V band is about an order of magnitude smallerthan that of the PL, so that its V-band luminosity is essentially non thermal. The γ-ray energy flux above 100 MeV forisF_γ=(1.18±0.12) × 10^-11 erg cm^-2 s^-1 (Acero et al. 2015), whereas its unabsorbed non-thermal X-ray flux (0.3–10 keV) isF_ X=0.22^+0.03_-0.11× 10^-13 erg cm^-2 s^-1 (Abdo et al. 2013), which gives an optical–to–γ-ray flux ratio F_ opt/F_γ 7.14 × 10^-6 and anoptical–to–X-ray fluxratioF_ opt/F_ X 5.57 × 10^-4, where the optical flux F_ opt has been corrected for the extinction.ForPSRB1055-52,F_γ=(2.90±0.03) × 10^-10 erg cm^-2 s^-1andF_ X=1.51^+0.02_-0.13× 10^-13 erg cm^-2 s^-1 (0.3–10 keV), yieldingF_ opt/F_γ∼ 0.88 × 10^-6 and F_ opt/F_ X∼ 16.9 × 10^-4, whereas for PSRB1929+10the V-band optical luminosity would be1.04 × 10^27 erg s^-1 for a 0.31 kpc distance (Verbiest et al. 2012) andits emission efficiency∼ 2.7 × 10^-7. However, PSRB1929+10 has not been observed in the optical but in the near-UV (Mignani et al. 2002) where the spectrum is modelled by a PL with spectral index α∼ 0.5. Therefore, its extrapolationto the optical gives uncertain predictions on theunabsorbed V-band flux, and it is not possible to determine whether it decouples into a PL plus a R-J, like in PSRB1055-52 (Mignani et al. 2010).In this case,both the non-thermal optical luminosity and emission efficiency would be overestimated.PSRB1929+10 has been detected in the X-rays with an unabsorbed non-thermal 0.3–10 keV flux F_ X=2.64^+0.12_-0.16× 10^-13 erg cm^-2 s^-1(Becker et al. 2006), but not in γ-rays down to 2.9 × 10^-12 erg cm^-2 s^-1 (Romani et al. 2011),yielding F_ opt/F_ X ∼ 3.4 × 10^-4 and F_ opt/F_γ 3.11 × 10^-5.The γ-ray spectrum ofis described by a PL with an exponential cut off, where photon indexΓ_γ=1.44±0.25 and cutoff energy E_ c= 1.34 ±0.37 GeV (Acero et al. 2015). Thespectrum can be fit by a PLwith Γ_ X = 2.98^+0.44_-0.29 (Abdo et al. 2013). The addition of a blackbody component is compatible with the counting statistics, but an f-test (Bevington 1969)shows no improvement in the fit significance. We compared our optical flux measurements with the extrapolations of the high-energy spectra, after correcting forthereddening using the extinction coefficients of Fitzpatrick (1999).The spectral energy distribution(SED) ofis shown in Fig. <ref>. 
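For readers who wish to reproduce the order of magnitude of these numbers, the sketch below converts the V-band limiting magnitude into a luminosity and an emission efficiency. The AB zero point, the effective V-band width and the extinction coefficient adopted here are our own assumptions for illustration, not values taken from this work.

import numpy as np

V_lim = 26.6                         # 3-sigma AB limit
A_V   = 3.1 * 0.06                   # assumed extinction for E(B-V) ~ 0.06
d_cm  = 1.48e3 * 3.086e18            # 1.48 kpc (DM-based distance) in cm
Edot  = 5.6e34                       # erg/s, spin-down luminosity

f_nu  = 3631e-23 * 10**(-0.4 * (V_lim - A_V))   # erg/s/cm^2/Hz, dereddened
dnu   = 3.0e18 * 890.0 / 5500.0**2              # Hz, approximate V bandwidth (~890 A at 5500 A)
L_opt = f_nu * dnu * 4.0 * np.pi * d_cm**2      # erg/s
print(L_opt, L_opt / Edot)           # ~2e28 erg/s and ~4e-7, close to the quoted values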
As seen in other cases, the extrapolations of the two PL spectra are not compatible witheach other, implyinga turnover inthe γ-ray PL at low energies. This is also observed in, e.g.themiddle-aged pulsarPSRB1055-52 (Mignani et al. 2010), although there is no apparent correlation between the presence of aturnover and the pulsar characteristic age.The optical flux upper limits are below the extrapolation of the assumed X-ray PL spectrum but are not deep enough to rule out that the optical emissionmightbe compatible with the γ-ray PL extrapolation. This could be a rare case where the γ-ray and optical spectra are related to each other.§ ACKNOWLEDGMENTSWe thank the anonymous referee for his/her considerate review. While writing this manuscript, we commemorated the fifth anniversaryof the death of renown Italian astrophysicist Franco Pacini,who passed away on January 26th 2012. Francoauthored many seminal publications on neutron stars since right before their discovery and was an active promoter of the LBT project. We dedicate our manuscript to his memory. RPM acknowledges financial support froman Occhialini Fellowship.This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS),funded by the Robert Martin Ayers Sciences Fund. 99[Abdo et al.2010]abdo10 Abdo A.A., et al., 2010, ApJ, 713, 154[Abdo et al.2013]2pc Abdo A.A., et al., 2013, ApJS, 298, 17[Acero et al. (2015)]ace15 Acero F., et al., 2015, ApJS,218, 23 [Becker et al. (2004)]beck04Becker W., Weisskopf M. C., Tennant A. F., Jessner A., Dyks J., Harding A. K., Zhang S. N., 2004, ApJ, 615, 908[Becker et al. (2006)]beck06 Becker W. et al. 2006, ApJ, 645, 1421 [Beronya et al. (2015)]ber15Beronya D. M., Shibanov Yu A.,Zyuzin D. A., Komarova V. N., 2015, 17th Russian Youth Conference on Physics and Astronomy, Journal of Physics: Conference Series 661[Bevington (1969)]bev69Bevington P. R., 1969, Data reduction and error analysis for the physical sciences, McGraw-Hill [Blair et al. (2005)]bla05 Blair W. P., Sankrit R., Raymond J. C., 2005, AJ, 129, 2268 [Cordes & Lazio2002]2002astro.ph..7156C Cordes J. M., & Lazio T. J. W., 2002, arXiv:astro-ph/0207156 [Fitzpatrick1999]1999PASP..111...63F Fitzpatrick E. L., 1999, PASP, 111, 63[Fukugita et al.1996]fuk96 Fukugita M., Ichikawa T.,Gunn J. E.,Doi M., Shimasaku K.,Schneider D. P., 1996, AJ, 111, 1748 [Giallongo et al.2008]2008A A...482..349G Giallongo E., et al., 2008, A&A, 482, 349[Grenier & Harding2015]gre15Grenier I. A. & Harding A. K., 2015, Comptes rendus - Physique, Vol. 16, Issue 6-7, p. 641[He et al.2013]he13 He C., Ng C.-Y., Kaspi V. M., 2013, ApJ, 768, 64[Hobbs et al.2005]ho05 Hobbs G., Lorimer D. R, Lyne A. G., Kramer M., 2005, MNRAS, 360, 974[Kaspi & Kramer2016]kas16 Kaspi V. M. & Kramer M.,2016, Proc. of the 26th Solvay Conference on Physics on Astrophysics and Cosmology, R. Blandford & A. Sevrin eds., arXiv:1602.07738[Kerr et al.2015]kerr15 Kerr M., Ray P. S., Shannon R. M., Camilo F.,2015, ApJ, 814, 128[Manchester et al.2005]man05 Manchester R. N., Hobbs G. B., Teoh A. & Hobbs M., 2005, AJ, 129, 1993 [Mason et al.2001]mas01Mason K. O., et al., 2001, A&A, 365, 36 [Mignani & Caraveo2001]mig01Mignani R. P. & Caraveo P. A., 2001, A&A, 376, 213[Mignani et al.2002]mig02 Mignani R. P., De LucaA., Caraveo P.A., Becker W., 2002, ApJ, 580, L47[Mignani et al.2010]mig10 Mignani R. P., Pavlov G.G., Kargaltsev O., 2010, ApJ, 720, 1635[Mignani et al.2012]mig12 Mignani R. 
P.,De LucaA., Hummel W., Zajczyk A., Rudak B., Kanbach G., Słowikowska A., 2012, A&A, 544, 100[Mignani et al.2016a]mig16 Mignani R. P., et al., 2016, MNRAS, 461, 4317[Oke1974]oke74Oke J.B., 1974, ApJS, 27, 21[Pellizzoni et al.2009]pel09 Pellizzoni A., et al. 2009, ApJ, 695, L115 [Predehl & Schmitt1995]ps95 Predehl P. & Schmitt J.H.M.M., 1995, A&A, 293, 889[Ray et al.1996]ray96 Ray P. S., et al., 1996, ApJ, 470, 1103[Robin et al.2004]rob04 Robin A.C.,Reylé C., Derriére S., Picaud S., 2004, A&A 416, 157[Romani et al.2011]rom11Romani R. W., Kerr M., Craig H. A., Johnston S., Cognard I., Smith D. A., 2011, ApJ, 738, 114 [Stetson1994]ste94 Stetson P.B., 1994, PASP, 106, 250 [Testa et al.2015]tes15Testa V., Mignani R. P., Pallanca C., Corongiu A., Ferraro F. R.,2015, MNRAS, 453, 4159[Verbiest et al.2012]ver12Verbiest J.P.W., et al., 2012, ApJ, 755, 39[Yao et al.2017]yao17 Yao J. N., Manchester R. N., Wang N., 2017, ApJ, 835, 29
http://arxiv.org/abs/1709.09169v1
{ "authors": [ "V. Testa", "R. P. Mignani", "N. Rea", "M. Marelli", "D. Salvetti", "A. A. Breeveld", "F. Cusano", "R. Carini" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170926164218", "title": "Large Binocular Telescope observations of PSR J2043+2740" }
Quenching Histories of Fast and Slow Rotators]SDSS-IV MaNGA: The Different Quenching Histories of Fast and Slow Rotators Smethurst et al. 2017]R.  J.  Smethurst,^1 K. L. Masters,^2C.  J.  Lintott,^3 A. Weijmans,^4M. Merrifield,^1 S. J. Penny,^2 A. Aragón-Salamanca,^1J. Brownstein,^5 K. Bundy,^6N. Drory,^7 D. R. Law,^8 R.  C.  Nichol ^2^1 School of Physics and Astronomy, The University of Nottingham, University Park, Nottingham, NG7 2RD, UK ^2 Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Barnaby Road, Portsmouth, PO13FX, UK^3 Oxford Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX13RH, UK ^4 School of Physics and Astrononomy, University of St Andrews, North Haugh, St Andrews, Fife, KY169RJ, UK ^5 Department of Physics and Astronomy, University of Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA ^6 University of California, Santa Cruz, 1156 High St. Santa Cruz, CA 95064, USA ^7 McDonald Observatory, The University of Texas at Austin, 1 University Station, Austin, TX 78712, USA ^8 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USAAccepted 2017 September 25. Received 2017 September 25; in original form 2017 August 25. [ [ December 30, 2023 ===================== Do the theorised different formation mechanisms of fast and slow rotators produce an observable difference in their star formation histories? To study this we identify quenching slow rotators in the MaNGA sample by selecting those which lie below the star forming sequence and identify a sample of quenching fast rotators which were matched in stellar mass. This results in a total sample of 194 kinematically classified galaxies, which is agnostic to visual morphology. We use u-r and NUV-u colours from SDSS and GALEX and an existing inference package, starpy, to conduct a first look at the onset time and exponentially declining rate of quenching of these galaxies. An Anderson-Darling test on the distribution of the inferred quenching rates across the two kinematic populations reveals they are statistically distinguishable (3.2σ). We find that fast rotators quench at a much wider range of rates than slow rotators, consistent with a wide variety of physical processes such as secular evolution, minor mergers, gas accretion and environmentally driven mechanisms. Quenching is more likely to occur at rapid rates (τ≲ 1 Gyr) for slow rotators, in agreement with theories suggesting slow rotators are formed in dynamically fast processes, such as major mergers. Interestingly, we also find that a subset of the fast rotators quench at these same rapid rates as the bulk of the slow rotator sample. We therefore discuss how the total gas mass of a merger, rather than the merger mass ratio, may decide a galaxy's ultimate kinematic fate. galaxies – photometry, galaxies – statistics, galaxies – morphology § INTRODUCTION Recent work studying the early-type (i.e. elliptical and lenticular) galaxy population has revealed that it is actually composed of two kinematically distinct populations. 
The majority of early-types are rotationally supported <cit.> with ∼7 times the number of galaxies with kinematic discs (`fast' rotators), than those with either dispersion dominated kinematics (`slow' rotators) or kinematically decoupled cores <cit.>.This has led to the proposal of a revision of Hubble's morphological classification scheme in the form of a `comb' <cit.>, whereby the evolution of a galaxy, fromdisc to bulge-dominated, takes place along a `tine' of the comb as a fast rotator, always retaining an underlying disc. If the discs of these regular rotators are destroyed, they then evolve along the `handle' of the comb to become slow rotators. Dry major mergers are considered the most likely process to produce high stellar mass slow rotators <cit.> as they can rapidly destroy the disc dominated nature of a galaxy <cit.>. Low stellar mass slow rotators (i.e. dwarf ellipticals with M_* ≲ 10^9 M_⊙) are thought to be formed via harassment mechanisms in the group and cluster environment <cit.>.Fast rotators are thought to evolve from the slow build up of a galaxy's bulge over time, eventually overwhelming the disc. This growth is thought to occur via gas-rich major or minor mergers <cit.> and by gas accretion <cit.> which can produce a bulge dominated but rotationally supported galaxy (which would be visually classified as an early-type in the Hubble classification scheme). The possible formation mechanisms listed above are also often proposed as external quenching mechanisms of star formation in a galaxy. However, these mechanisms are not thought to quench a galaxy at the same rate. Dynamically faster processes, such as mergers, are thought to quench star formation at rapid rates <cit.>, with major mergers thought to cause a much faster quench of the remnant galaxy than a minor merger <cit.>. Similarly, environmental processes, such as harassment, are also thought to cause quenching through repeated high speed interactions with neighbouring galaxies. Over time these interactions can strip both stars and gas from a galaxy and heat the gas needed for star formation <cit.>, quenching the galaxy at a slower rate than a merger. Slow quenching by an external process is also possible through gas accretion due to the large gravitational potential of the bulge which builds as the accreted gas sinks to the centre of the galaxy. This prevents the disc from collapsing and forming stars in an internal process which is categorised as morphological quenching <cit.>. Similarly, there are internal processes which are theorised to cause quenching in galaxies, including AGN feedback <cit.>, mass quenching <cit.> and morphological quenching <cit.> at rapid, intermediate and slow quenching rates respectively. Crucially, external quenching processes are the only mechanisms theorised to be able to change the morphology of a galaxy <cit.>. These quenching mechanisms and their theorised rates are summarised in Table <ref>.If fast and slow rotators form via different mechanisms, we should therefore also expect to find a difference in the star formation histories of quenching or quenched fast and slow rotators. This paper presents a first look at this problem by using an existing Bayesian star formation inference package, starpy, to determine the quenching histories of a sample of quenching or quenched fast and slow rotators identified in the MaNGA sample, irrespective of visual morphology. 
We use broadband optical, u-r, and near-ultraviolet, NUV-u, colours from SDSS and GALEX to infer both the onset time and exponential rate of quenching for each galaxy. We aim to determine whether kinematically distinct galaxies have different quenching histories. This paper proceeds as follows. In Section <ref> we describe our data sources and our Bayesian inference method for determining the quenching histories. We present our results in Section <ref> and discuss the implications of these results in Section <ref>. The zero points of all magnitudes are in the AB system. We adopt the WMAP Seven-Year Cosmology <cit.> with (Ω_m ,  Ω_Λ ,  h) = (0.26, 0.73, 0.71).§ DATA AND METHODS§.§ SDSS & GALEX Photometry We use optical photometry from the Sloan Digital Sky Survey Data Release 7 (SDSS; ). We use the Petrosian magnitude, petroMag, values for the u (3543 Å) and r (6231 Å) wavebands provided by the SDSS DR7 pipeline <cit.>. Further to this, we also required NUV (2267 Å) photometry from the GALEX survey <cit.>. Observed fluxes are corrected for galactic extinction <cit.> by applying the <cit.> law. We also adopt k-corrections to z = 0.0 and obtain absolute magnitudes from the NYU-VAGC <cit.>. §.§ MaNGA Survey & Data Reduction PipelineMaNGA is a multi-object IFU survey conducted with the 2.5 m Sloan Foundation Telescope <cit.> at Apache Point Observatory (APO) as part of SDSS-IV <cit.>. By 2020 MaNGA will have acquired IFU spectroscopy for ∼10000 galaxies with M_* > 10^9 M_⊙ and an approximately flat mass selection <cit.>. The target selection is agnostic to morphology, colour and environment. MaNGA makes use of the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph <cit.>. The BOSS spectrograph provides continuous coverage between 3600 Å and 10300 Å at a spectral resolution R ∼ 2000 (σ_instrument∼ 77 km s^−1 for the majority of the wavelength range[Instrument resolution as a function of wavelength in shown in Figure 20 of <cit.>]).Complete spectral coverage to 1.5 R_e, a galaxy's effective radius, is obtained for the majority of targets; a subset have coverage to 2.5 R_e. See <cit.> for an overview of the MaNGA survey. For a further description of the instrumentation used by MaNGA see <cit.>. For a detailed description of the observing strategy see <cit.> and for a description of the survey design see <cit.>. The raw data was processed by the MaNGA data reduction pipeline (DRP version 2.0.1), which is discussed in detail in <cit.>. The MaNGA DRP extracts, wavelength calibrates and flux calibrates all fibre spectra obtained in every exposure. The individual fibre spectra are then used to form a regular gridded datacube of 0.5” ‘spaxels’ and spectral channels. The spectra are logarithmically sampled with bin widths of logλ = 10^-4. These datacubes are then analysed using the MaNGA data analysis pipeline (DAP version 2.0.2); the development of which is ongoing and will be described in detail in Westfall et al. (in prep). Briefly, the spectral emission lines are masked, and the stellar continuum is modelled using the kinematic and stellar population fitting package ppxf <cit.>. The stellar continuum model is then constructed using a thinned version of the MILES spectral library (wavelength range 3525 < λ [Å] < 7500). The model is broadened to match the stellar velocity dispersion of the galaxy in order to cleanly subtract the absorption lines from the spectrum. The residual emission lines are then modelled using Gaussian profiles, with 21 different lines fit in total. 
The primary output from the DAP are therefore 2D “maps" (i.e., images) of these measured properties, including flux, stellar and gas kinematics, spectral index measurements, and absorption- and emission-line properties. The effective radius of a galaxy and the ellipticity within it, ϵ_e, are provided for MaNGA galaxies in the NASA Sloan Atlas; we use the values measured with elliptical Petrosian apertures in v1_0_1 of the catalogue provided in the SDSS Data Release 13 <cit.>. §.§ Data sample Our galaxy sample is drawn from the 2,777 SDSS galaxies which make up the MaNGA DR14 data release <cit.>. We cross-matched these galaxies with a radius of 3” to the GALEX survey in order to obtain NUV photometry (see Section <ref>), resulting in 1,413 galaxies.In this study we wish to investigate the quenching histories of galaxies, therefore we sub-select those galaxies which are below the star forming sequence (SFS). Here we use the global average star formation rates (SFR) quoted in the MPA-JHU catalogue[<http://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/>] <cit.>. We do not use the MaNGA spectra to calculate SFRs; since the bundles only extend to 1.5 R_e we might miss star formation occurring in the outer regions of galaxies which would result in an underestimate of the global SFR of a galaxy.We select galaxies with a SFR more than 1σ below the SFS of <cit.>. Since we wish to test whether slow rotators quench at rapid rates, consistent with major mergers, we wish to include those galaxies which have just left the SFS (rather than only selecting those that are fully quenched, for example, 3σ below the SFS).This selection on SFR when applied to the manga-galex sample results in a sample of 826 quenching or quenched galaxies, which we will refer to as the q-manga-galex sample. This sample is shown in Figure <ref>. §.§ Identifying Slow and Fast Rotators In order to classify the galaxies in the q-manga-galex sample as slow rotators or otherwise, we first calculate the specific stellar angular momentum as defined by <cit.>;λ_R_e = ∑_i=1^N F_i R_i |V_i|/∑_i=1^N F_i R_i (V_i^2 + σ_i^2)^1/2, where F_i is the flux in the ith spaxel, R_i the spaxel's distance from the galaxy centre (where R_i < R_e, the effective radius of a galaxy), V_i the mean stellar velocity in that spaxel, σ_i the stellar velocity dispersion in that spaxel and N the total number of spaxels. In this work we use the Python function provided in the MaNGA DAP to calculate λ_R_e using the values of mean flux, radius, stellar velocity and stellar velocity dispersion (corrected for instrumental resolution effects) in each bin of the MaNGA data cubes binned with a signal-to-noise ratio of 10 using a Voronoi binning algorithm <cit.>, as calculated by the MaNGA DAP (see Section <ref>). Velocity dispersion measurements in each bin of a galaxy data cube were confirmed to be above the instrument resolution of 77 km s^-1.We then classify galaxies in the q-manga-galex sample as non-regular rotators, or otherwise, using the definition from <cit.>:λ_R_e < 0.08 + ϵ_e/4     with     ϵ_e < 0.4. Both slow rotators and kinematically disturbed galaxies will satisfy this inequality, hence why this selection results in a sample of non-regular rotators.Using this definition reveals 168 (20%) non-regular rotators and 658 (80%) regular rotators in the q-manga-galex sample. Figure <ref> shows the velocity maps of these galaxies plotted at their values of λ_R_e and ϵ_e, along with the definition of a non-regular rotator from <cit.> shown by the solid black line. 
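In practice, the computation of λ_R_e from the binned maps amounts to a few array operations. A minimal sketch of the two expressions above (our own illustration, applied to toy data) is:

import numpy as np

def lambda_R(flux, radius, vel, sigma):
    """Specific stellar angular momentum lambda_Re as defined above, summed over
    the Voronoi bins with R < R_e (inputs are 1-D arrays, one entry per bin)."""
    num = np.sum(flux * radius * np.abs(vel))
    den = np.sum(flux * radius * np.sqrt(vel**2 + sigma**2))
    return num / den

def is_non_regular(lam_re, eps_e):
    """Non-regular rotator criterion used above: lambda_Re < 0.08 + eps_e/4 with eps_e < 0.4."""
    return (lam_re < 0.08 + eps_e / 4.0) and (eps_e < 0.4)

# toy dispersion-dominated example
rng = np.random.default_rng(0)
flux, radius = np.ones(100), np.linspace(0.1, 1.0, 100)
vel, sigma = rng.normal(0.0, 10.0, 100), np.full(100, 150.0)    # km/s
lam = lambda_R(flux, radius, vel, sigma)
print(lam, is_non_regular(lam, 0.2))    # small lambda_Re -> flagged as non-regular

A real application would, of course, take the flux, radius, velocity and dispersion arrays from the DAP maps rather than from toy data.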
Note the q-manga-galex sample is agnostic to visual morphology, so our sample of regular rotators will contain both rotationally supported early-types and late-type galaxies. The fraction of non-regular rotators found in the q-manga-galex sample (20%) is slightly higher than that found by previous works <cit.>. However, we must be wary with this comparison since the ATLAS^3D sample is volume limited, whereas the MaNGA sample is selected to have a flat stellar mass distribution, prior to our selection on GALEX cross-matches and those galaxies below the SFS. Therefore although a direct comparison is not possible, we can at least determine if the fraction of non-regular rotators in theq-manga-galex sample is a sensible figure given previous estimates. Considering our sample is agnostic to visual morphology, we wouldexpect this selection effect to dominate resulting in a smaller fraction of non-regular rotators than previous works which specifically derived the fraction of non-regular rotators in a sample of early-types only. However, many other studies have also shown that the non-regular rotator fraction increases with stellar mass <cit.>, up to ∼90% at 10^12 M_⊙ <cit.>. The median stellar mass of the q-manga-galex sample is 10^10.8 M_⊙, which is higher than the median stellar mass of the ATLAS^3D sample at 10^10.5 M_⊙, likely accounting for this apparent discrepancy. In order to obtain a sample of slow rotators, one author (RJS) inspected the velocity maps of the 168 non-regular rotators identified in the q-manga-galex sample to remove those galaxies which showed rotation in their kinematic map (i.e. counter rotation or decoupled cores). 71 galaxies exhibiting rotation were identified, example velocity maps for which are shown in the top row of Figure <ref>. This resulted in a sample of 97 slow rotators, example velocity maps for which are shown in the middle row of Figure <ref>.In order to control for the degeneracies between mass, metallicity and dust (all of which can redden a galaxy's optical colour and mimic the effects of quenching) we selected a sub-sample of fast rotators from those identified as regular rotators in the q-manga-galex sample. We matched to within ± 2.5 % of the stellar mass of each slow rotator to give 97 fast rotators, example velocity maps for which are shown in the bottom row of Figure <ref>. We shall refer to this combined sample of 194 fast and slow rotators as the mm-q-manga-galex sample. An Anderson-Darling <cit.> test reveals that the distribution of stellar masses of the fast rotators and slow rotators within this sample are statistically indistinguishable (p=0.22). Similarly their redshift distributions are also statistically indistinguishable (p=0.19).The optical and NUV colours from SDSS and GALEX (see Section <ref>) for the mm-q-manga-galex sample are shown in Figure <ref>. Performing AD tests on the distributions of the colours of the slow and fast rotators in the mm-q-manga-galex sample reveals that both the u-r (AD= 5.9, p = 0.002) and NUV-u (AD= 19.1, p = 1×10^-5) colours of the two kinematic classifications are statistically distinguishable. These colours will be used to infer the SFHs of the mm-q-manga-galex sample (see Section <ref>).§.§ Environmental Densities We also consider the environmental densities of the fast and slow rotators by using estimates of the projected 5th nearest neighbour density,logΣ_5, from <cit.>. 
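The Anderson–Darling comparisons quoted here (for colours and stellar masses) and below (for the environmental densities) can be reproduced with the k-sample implementation in scipy. A minimal sketch, in which the two arrays are stand-ins for any matched property of the fast- and slow-rotator samples:

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(0)
# stand-ins for a matched property (e.g. NUV-u colour) of the 97 fast and 97 slow rotators
fast = rng.normal(loc=2.6, scale=0.5, size=97)
slow = rng.normal(loc=3.0, scale=0.5, size=97)

res = anderson_ksamp([fast, slow])
# res.statistic is the AD statistic; res.significance_level approximates the p-value
# (scipy caps it to the range 0.001-0.25, so very small p-values are reported as 0.001)
print(res.statistic, res.significance_level)
```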
An AD test reveals that the distribution of environment densities of the 72 slow rotators and 80 fast rotators of the mm-q-manga-galex sample with logΣ_5 measurements from <cit.> are statistically indistinguishable (p=0.28). This is surprising since the current theory is that slow rotators are more likely to be the central galaxy of a group or cluster, whereas fast rotators are more likely to be satellite galaxies <cit.>. However, the MaNGA sample was chosen to be agnostic to galaxy environment, giving rise to a representative distribution of galaxy environments. Most galaxies in the sample will therefore reside in groups, a more common environment for a galaxy than the relatively rare environments of rich clusters <cit.> or voids <cit.>. We must therefore probe the positions of the two samples within the group environment itself. Cross-matching the mm-q-manga-galex sample with the <cit.> SDSS group catalogue gives us group information for 94 of the slow rotators and 96 of the fast rotators. Similar fractions of these slow, 75/94 (80%), and fast rotators, 70/96 (73%), are classified as their brightest group galaxy (BGG). However, these fractions include those galaxies which are isolated in their halos (due to the theoretical definition of a BGG used in thecatalogue). These isolated galaxies could be the remains of a fossil group <cit.> or could be truly isolated, at the opposite end of the evolutionary spectrum which we are trying to probe. We must therefore remove these single galaxy `groups' in order to properly test whether the slow rotators are preferentially found at the centre of the groups in the mm-q-manga-galex sample. Testing the distributions of the total group stellar mass for the fast and slow rotators we find they are statistically distinguishable (AD test p=0.03), with slow rotators residing in more massive groups. If we then consider only those galaxies in groups with a total stellar mass greater than 10^11±M_⊙ (under the simplifying assumption that this will remove the majority of single galaxy `groups') we find the fraction of slow rotators classified as a BGG is 44/61 (72%), whereas for fast rotators this drops to 30/52 (58%), a statistically distinguishable difference (p=0.04). Therefore, although the projected local environment densities of the two kinematic classes of galaxies are statistically indistinguishable, their positions within that given environment density do differ, as expected. Given the above statistical tests, the only differences between the fast and slow rotators of the mm-q-manga-galex sample is their kinematics, their colours and their position within their group halo. §.§ Star Formation History Inference starpy[Publicly available: <http://github.com/zooniverse/starpy>] is a python code which allows the inference of the exponentially declining star formation history (SFH) of a single galaxy usingBayesian Markov Chain Monte Carlo techniques <cit.>[<http://dan.iel.fm/emcee/>]. The code uses the solar metallicity stellar population models of <cit.>, assumes a Chabrier IMF <cit.> and requires the input of the observed u-r and NUV-u colours and redshift. No attempt is made to model for intrinsic dust. The SFH is described by an exponentially declining SFR described by two parameters; the time at the onset of quenching, t_q [Gyr], and the exponential rate at which quenching occurs, τ [Gyr]. 
Under the simplifying assumption that all galaxies formed at t=0 Gyr with an initial burst of star formation, the SFH can be described as:SFR =i_sfr(t_q)ift < t_q i_sfr(t_q) × exp( -(t-t_q)/τ) ift > t_qwhere i_sfr is an initial constant star formation rate dependent on t_q <cit.>. The simplifying assumption that all galaxies formed at t = 0 Gyr means that the age of each galaxy, t_age, corresponds to the age of the Universe at its observed redshift, t_obs. A smaller τ value corresponds to a rapid quench, whereas a larger τ value corresponds to a slower quench. A galaxy undergoing a slow quench is not necessarily quiescent by the time of observation. This SFH model has previously been shown to appropriately characterise quenching galaxies <cit.>. The probabilistic fitting methods to these star formation histories for an observed galaxy are described in full detail in Section 3.2 of <cit.>, wherein the starpy code was used to characterise the morphologically dependence of the SFHs of ∼126,000 galaxies. Similarly, in <cit.>, starpy was used to show the prevalence of rapid, recent quenching within a population of AGN host galaxies and in <cit.> to investigate the quenching histories of group galaxies.Briefly, we assume a flat prior on all the model parameters and model the difference between the observed and predicted u-r and NUV-u colours as independent realisations of a double Gaussian likelihood function (Equation 2 in ). An example posterior probability distribution output by starpy is shown for a single galaxy in Figure 5 of <cit.>, wherein the degeneracies of the SFH model between recent, rapid quenching and earlier, slower quenching can be seen.To study the SFH across a sample of many galaxies, these individual posterior probability distributions are stacked in [t_q, τ] space to give one distribution across each quenching parameter for the sample. This is no longer inference but merely a method to visualise the results for a population of galaxies (see appendix section C infor a discussion on alternative methods which may be used to determine the parent population SFH). These distributions will be referred to as the population SFH densities.§ RESULTS We determine the population SFH densities for both the fast and slow rotators of the mm-q-manga-galex sample. This is shown in Figure <ref> for both the onset time (left panel) and exponential rate (right panel) of quenching for the fast (black solid line) and slow (red dashed line) rotators. Uncertainties on the population densities (shown by the shaded regions) are determined from the maximum and minimum values spanned by N = 1000 bootstrap iterations, each sampling 90% of either the fast (black shaded region) or slow (red shaded region) rotators. To statistically test the significance of our results, we estimate the `best fit' [t_q, τ] values for each galaxy with the median value of an individual galaxy's posterior probability distribution from starpy (i.e. the 50th percentile position of the MCMC chain). We test the distribution of these values of the fast and slow rotators in the mm-q-manga-galex sample with AD-tests. Firstly, an AD-test on the distributions of t_q values in the fast and slow rotator samples, revealed that we cannot reject the null hypothesis that the fast and slow rotators quench at the same time (AD= 0.65, p = 0.69). Finally, an AD-test on the distributions of τ values, revealed that we can reject the null hypothesis that the fast and slow rotators quench at the same rate (AD= 6.3, p = 0.001). 
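The σ-equivalent quoted in the next sentence follows from the Gaussian quantile of the p-value; a one-line check with scipy (whether one quotes ≈3.1σ or ≈3.3σ depends on the one- versus two-tailed convention and on rounding of the p-value):

```python
from scipy.stats import norm

p = 0.001
print(norm.isf(p))        # one-tailed:  ~3.09 sigma
print(norm.isf(p / 2.0))  # two-tailed:  ~3.29 sigma
```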
This is a 3.2σ result which suggests that slow rotators quench faster than fast rotators of the same mass. § DISCUSSION The results presented in Section <ref> suggest that fast and slow rotators are indeed separate populations quenched, and therefore formed, by different mechanisms. However, these quenching mechanisms occur at statistically indistinguishable onset times for fast and slow rotators. <cit.> find in their simulations that the last major merger interaction for slow rotators was at z ≳ 1.5 (i.e. t_q ≲ 4.5 Gyr). However, <cit.>find in the Illustris simulation that slow rotators only form after z < 1 (i.e. t_q ≳ 6 Gyr). We note that starpy is not very sensitive to the time of quenching, particularly at early times (t_q ≲ 6 Gyr when z ≳ 1), due to the degeneracies between the optical and NUV colours currently used to infer the quenching parameters. Therefore, we cannot currently conclude which scenario our results favour. Future work altering our inference code to take spatial spectral information provided by MaNGA may help us to address this issue by breaking the degeneracies inherent in the photometric colours.However, starpy in its current form is sensitive to the rate of quenching in a galaxy. In the right panel of Figure <ref> we see that there is a wide range of quenching rates occurring within the fast rotator sample. Previous works using starpy have shown how the intermediate quenching rates (1 ≲τ [Gyr]≲ 2) prevalent in the distribution of the fast rotator sample can be attributed to environmental processes such as harassment and galaxy interactions <cit.>, or minor mergers <cit.>. This is unsurprising given that the fast rotators are less likely to be the brightest group galaxy than the slow rotators of the mm-q-manga-galex sample, as discussed in Section <ref>.In particular we find evidence for galaxies in the fast rotator sample to quench at slow rates (τ≥ 2 Gyr). Since the q-manga-galex sample is agnostic to visual morphology, it will contain fast rotators which are disc dominated (i.e. late-type galaxies). This preference for slow quenching rates is therefore likely to be caused by the effects of secular evolution through gas accretion and morphological quenching, slowly moving these disc galaxies off the SFS to produce the red spiral population of <cit.>. Using the morphological classifications of Galaxy Zoo 2 <cit.> we find that 20/97 (21%) of the fast rotators of the mm-q-manga-galex sample are disc dominated with a disc or featured debiased vote fraction, p_d ≥ 0.8 (i.e. 80% of classifiers marked the galaxy as having either a disc or features). This is consistent with the fact that 23±^2_11% of the fast rotator quenching rate population density (black line in the right panel of Figure <ref>) is found at quenching rates τ > 2 Gyr. Conversely only 1 of the slow rotators was classified as having a disc or features by GZ2[Upon visual inspection this galaxy has a large disc with spiral structure lying outside of the MaNGA fibre bundle at >1.5 R_e]. It is not surprising therefore, that there is much less preference for slow quenching rates, with τ≥ 2 Gyr, for slow rotators than fast rotators in the right panel of Figure <ref>. However, <cit.> found for galaxies in the red sequence visually classified as `smooth' in GZ2 (i.e. quenching or quenched early-types) that a significant fraction, 26.1%, of the quenching rate population density was found at these slow quenching rates (see left panel of their Figure 8). 
However, a sample ofvisually classified `smooth' galaxies in GZ2 may include both fast and slow rotators. It is only in this work that we have been able to investigate the difference in the SFHs of galaxies which are rotationally supported from those which are not, revealing that the stellar kinematics are driving the morphologically dependant star formation histories seen in <cit.>.The slow rotators in the mm-q-manga-galex sample instead show a preference for rapid quenching rates (τ≲1 Gyr) in the right panel of Figure <ref>. Assuming that major mergers are the only mechanism able to destroy rotation in a galaxy, this result supports the theory that these galaxies are formed by major mergers which, along with destroying the disc of a galaxy, are thought to cause quenching at such rapid rates  <cit.>. Surprisingly, we also find evidence that some of the fast rotators are quenching at these same rapid rates (τ≲ 1 Gyr) in the right panel of Figure <ref>. This suggests that in a fraction of fast rotators a dynamically fast process, such as a major merger, may be the cause of quenching. Simulations have recently shown that although major mergers (2:1 or 1:1 mergers) can cause rapid quenching of a galaxy, they do not necessarily destroy the disc dominated nature of a galaxy  <cit.> and can actually form a fast rotator remnant <cit.>. This is thought to mainly occur in gas rich major mergers <cit.> and is likely the explanation for the presence of rapid rates in the fast rotator sample seen in the right panel of Figure <ref>. We therefore predict that the fast rotators in the mm-q-manga-galex sample will be more gas rich than the slow rotators they are stellar mass matched to. We will be able to test this hypothesis with currently ongoing follow-up observations using the Green Bank Telescope (GBT16A-095 and GBT17A-012; Masters et al. in prep.) which will obtain HI profiles for galaxies in the MaNGA target sample. With these observations we will be able to determine whether gas mass has an impact on the formation mechanisms of these kinematically distinct galaxies. § CONCLUSIONS We have investigated the star formation histories of quenching or quenched fast and slow rotators identified in the MaNGA galaxy sample, irrespective of their visual morphology. We used the u-r and NUV-u colours with an existing piece of inference software, starpy, to determine the onset time and exponential rate of quenching in each of these galaxies. An Anderson-Darling test revealed that the distribution of the inferred quenching rates of fast and slow rotators are statistically distinguishable (p=0.001, 3.2σ). We find that rapid quenching rates (τ≲ 1 Gyr) are dominant for slow rotators, supporting the theory that slow rotators form in dynamically fast processes, such as major mergers <cit.>. Conversely, we find that fast rotators quench at a wide range of rates, consistent with dynamically slow processes such as secular evolution, minor mergers, gas accretion and environmentally driven mechanisms. However we also find evidence that some of the fast rotators are quenching at the same rapid rates dominant across the slow rotator sample.This finding of rapid quenching rates occurring for both slow rotators and a subset of the fast rotators suggests that although their kinematics are different in nature, both classes of galaxy may be able to quench, and therefore form, via major mergers. 
This result combined with the findings of recent simulations showing disc survival in gas-rich major mergers  <cit.>, suggests that the total gas mass fraction within a pair of merging galaxies, is what will ultimately decide the kinematic fate of a galaxy. § ACKNOWLEDGEMENTS RJS gratefully acknowledges research funding from the Ogden Trust. AW acknowledges support of a Leverhulme Trust Early Career Fellowship.Based on observations made with the NASA Galaxy Evolution Explorer.GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034.Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is <www.sdss.org>.SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University and Yale University.mn2e
Centre for Quantum Computer Science, Faculty of Computing, University of Latvia, Raiņa 19, Riga, Latvia, LV-1586.Frustrated spin order and stripe fluctuations in FeSe Aleksejs Zajakins December 30, 2023 ===================================================== We show that all known classical adversary lower bounds on randomized query complexity are equivalent for total functions, and are equal to the fractional block sensitivity (f). That includes the Kolmogorov complexity bound of Laplante and Magniez and the earlier relational adversary bound of Aaronson. This equivalence also implies that for total functions, the relational adversary is equivalent to a simpler lower bound, which we call rank-1 relational adversary. For partial functions, we show unbounded separations between (f) and other adversary bounds, as well as between the adversary bounds themselves.We also show that, for partial functions, fractional block sensitivity cannot give lower bounds larger than √(n ·(f)), where n is the number of variables and (f) is the block sensitivity. Then we exhibit a partial function f that matches this upper bound, (f) = Ω(√(n ·(f))). § INTRODUCTION Query complexity of functions is one of the simplest and most useful models of computation. It is used to show lower bounds on the amount of time required to solve a computational task, and to compare the capabilities of the quantum, randomized and deterministic models of computation. Thus providing lower bounds in the query model is essential in understanding the complexity of computational problems.In the query model, an algorithm has to compute a function f : S → H, given a string x from S ⊆ G^n, where G and H are finite alphabets. With a single query, it can provide the oracle with an index i ∈ [n] and receive back the value x_i. After a number of queries (possibly, adaptive), the algorithm must compute f(x). The cost of the computation is the number of queries made by the algorithm.The query complexity of a function f in the deterministic setting is denoted by (f) and is also called the decision tree complexity. The two-sided bounded-error randomized and quantum query complexities are denoted by (f) and (f), respectively (which means that given any input, the algorithm must produce a correct answer with probability at least 2/3). For a comprehensive survey on the power of these models, see <cit.>, and for the state-of-the-art relationships between them, see <cit.>.In this work, we investigate the relation among a certain set of lower bound techniques on (f), called the classical adversary methods, and how they connect to other well-known lower bounds on the randomized query complexity. §.§ Known Lower BoundsOne of the first general lower bound methods on randomized query complexity is Yao's minimax principle, which states that it is sufficient to exhibit a hard distribution on the inputs and lower bound the complexity of any deterministic algorithm under such distribution <cit.>. Yao's minimax principle is known to be optimal for any function but involves a hard-to-describe and hard-to-compute quantity (the complexity of the best deterministic algorithm under some distribution).More concrete randomized lower bounds are block sensitivity (f) <cit.> and the approximate degree of the polynomial representing the function (f) <cit.>introduced by Nisan and Szegedy. Afterwards, Aaronson extended the notion of the certificate complexity (f) (a deterministic lower bound) to the randomized setting by introducing randomized certificate complexity (f) <cit.>. 
Following this result, both Tal and Gilmer, Saks and Srinivasan independently discovered the fractional block sensitivity (f) lower bound <cit.>, which is equal to the fractional certificate complexity (f) measure, as respective dual linear programs. Since these measures are relaxations of block sensitivity and certificate complexity if written as integer programs, they satisfy the following hierarchy:(f) ≤(f) = (f) ≤(f).Perhaps surprisingly, fractional block sensitivity turned out to be equivalent to randomized certificate complexity, (f) = Θ((f)). Approximate degree and fractional block sensitivity are incomparable in general, but it has been shown that (f) ≤(f)^2 <cit.> and (f) ≤(f)^3 ≤(f)^3 <cit.>.Currently one of the strongest lower bounds is the partition bound (f) of Jain and Klauck <cit.>, which is larger than all of the above mentioned randomized lower bounds (even the approximate degree), and the classical adversary methods listed below. Its power is illustrated by the _n function (anof √(n) s on √(n) variables), where it gives a tight Ω(n) lower bound, while all of the other lower bounds give only O(√(n)). The quantum query complexity (f) is also a powerful lower bound on (f), as it is incomparable with (f) <cit.>. Recently, Ben-David and Kothari introduced the randomized sabotage complexity (f) lower bound, which can be even larger than (f) and (f) for some functions <cit.>, and so far no examples are known where it is smaller.In a separate line of research, Ambainis gave a versatile quantum adversary lower bound method with a wide range of applications <cit.>. Since then, many generalizations of the quantum adversary method have been introduced (see <cit.> for a list of known quantum adversary bounds). Several of these formulations have been lifted back to the randomized setting. Aaronson proved a classical analogue of Ambainis' relational adversary bound and used it to provide a lower bound for the local search problem <cit.>. Laplante and Magniez introduced the Kolmogorov complexity adversary bound for both quantum and classical settings and showed that it subsumes many other adversary techniques. <cit.>. They also gave a classical variation of Ambainis' adversary bound in a different way than Aaronson. Some of the other adversary methods like spectral adversary have not been generalized back to the randomized setting.While some relations between the adversary bounds had been known before, Špalek and Szegedy proved that practically all known quantum adversary methods are in fact equivalent <cit.> (this excludes the general quantum adversary bound, which gives an exact estimate on quantum query complexity for all Boolean functions <cit.>). This result cannot be immediately generalized to the classical setting, as the equivalence follows through the spectral adversary which has no classical analogue. They also showed that the quantum adversary cannot give lower bounds better than a certain “certificate complexity barrier”. Recently, Kulkarni and Tal strenghtened the barrier using fractional certificate complexity. Specifically, for any Boolean function f the quantum adversary is at most √(^0(f)^1(f)), if f is total, and at most 2√(n·min{^0(f),^1(f)}), if f is partial <cit.>.[Here, ^0(f) and ^1(f) stand for the maximum fractional certificate complexity over negative and positive inputs, respectively.]With the advances on the quantum adversary front, one could hope for a similar equivalence result to also hold for the classical adversary bounds. 
Some relations are known: Laplante and Magniez have shown that the Kolmogorov complexity lower bound is at least as strong as Aaronson's relational and Ambainis' weighted adversary bounds <cit.>. Jain and Klauck have noted that the minimax over probability distributions adversary bound is at most (f) for total functions <cit.>. In general, the relationships among the classical adversary bounds until this point remained unclear. §.§ Our Results Our main result shows that the known classical adversary bounds are all equivalent for total functions. That includes Aaronson's relational adversary bound (f), Ambainis' weighted adversary bound (f), the Kolmogorov complexity adversary bound (f) and the minimax over probability distributions adversary bound (f). Surprisingly, they are equivalent to the fractional block sensitivity (f).We also add to this list a certain restricted version of the relational adversary bound. More specifically, we require that the relation matrix between the inputs has rank 1, and denote this (seemingly weaker) lower bound by _1(f). Thus for total functions (f) = Θ(_1(f)), where the latter is much easier to calculate for Boolean functions.All this shows that (f) is a fundamental lower bound measure for total functions with many different formulations, including the previously known (f) and (f). Another interesting corollary is that since the quantum certificate complexity (f) = Θ(√((f))) is a lower bound on the quantum query complexity <cit.>, we have that by taking the square root of any of the adversary bounds above, we obtain a quantum lower bound for total functions.Along the way, for partial functions we show the equivalence between (f) and (f), and also between (f) and (f). In the case of partial functions, (f) becomes weaker than all these adversary methods. In particular, we show an example of a function where each of these adversary methods gives an Ω(n) lower bound, while fractional block sensitivity is O(1). We also show that (f) and (f) are not equivalent for partial functions, as there exists an example where (f) is constant, but (f) = Θ(log n). Finally, we show a function such that _1(f) = O(√(n)), but (f) = Ω(n).We also show a “block sensitivity” barrier for fractional block sensitivity. Namely, for any partial function f, the fractional block sensitivity is at most √(n·(f)). Note that the adversary bounds do not bear this limitation, as witnessed by the aforementioned example. This result is tight, as we exhibit a partial function that matches this upper bound.Even though our results are similar to the quantum case in <cit.> in spirit, the proof methods are different. § PRELIMINARIES In this section we define the complexity measures we are going to work with in the paper. In the following definitions and the rest of the paper consider f to be a partial function f : S → H with domain S ⊆ G^n, where G, H are some finite alphabets and n is the length of the input string. Throughout the paper we assume that f is not constant. Block Sensitivity.For x ∈ S, a subset of indices B ⊆ [n] is a sensitive block of x if there exists a y such that f(x) ≠ f(y) and B={ i | x_i ≠ y_i }. The block sensitivity (f, x) of f on x is the maximum number k of disjoint subsets B_1, …, B_k ⊆ [n] such that B_i is a sensitive block of x for each i ∈ [k]. The block sensitivity of f is defined as (f) = max_x ∈ S(f, x).Let ={B |∃ y : f(x) ≠ f(y)and B={ i | x_i ≠ y_i }} be the set of sensitive blocks of x. 
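For small toy functions these definitions (and the fractional relaxation introduced in the linear program that follows) can be checked directly by brute force. The sketch below assumes the partial function is given as a Python dict from input tuples to output values; it is an illustration only, exponential in n:

```python
import numpy as np
from scipy.optimize import linprog

def sensitive_blocks(f, x):
    # all blocks B = {i : x_i != y_i} over inputs y in the domain with f(y) != f(x)
    blocks = set()
    for y, fy in f.items():
        if fy != f[x]:
            blocks.add(frozenset(i for i in range(len(x)) if x[i] != y[i]))
    return [sorted(b) for b in blocks]

def fbs(f, x):
    # maximise the total weight of sensitive blocks, subject to the constraint that,
    # for every index i, the blocks containing i carry total weight at most 1
    blocks = sensitive_blocks(f, x)
    if not blocks:
        return 0.0
    n = len(x)
    A = np.array([[1.0 if i in b else 0.0 for b in blocks] for i in range(n)])
    res = linprog(c=-np.ones(len(blocks)), A_ub=A, b_ub=np.ones(n), bounds=(0.0, 1.0))
    return -res.fun

# toy example: XOR of two bits; both single-bit flips are disjoint sensitive blocks of (0, 0)
f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(sensitive_blocks(f, (0, 0)))   # [[0], [1]] (in some order)
print(fbs(f, (0, 0)))                # 2.0
```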
The fractional block sensitivity (f, x) of f on x is defined as the optimal value of the following linear program:maximize ∑_B ∈ w_x(B)subject to∀ i ∈ [n]: ∑_B ∈ i ∈ B w_x(B) ≤ 1.Here w_x ∈ [0;1]^||. The fractional block sensitivity of f is defined as (f) = max_x ∈ S(f, x).When the weights are taken as either 0 or 1, the optimal solution to the corresponding integer program is equal to (f, x). Hence (f,x) is a relaxation of (f,x), and we have (f,x) ≤(f,x). Certificate complexity.An assignment is a map A:{1, …, n}→ G ∪{*}. Informally, the elements of G are the values fixed by the assignment and * is a wildcard symbol that can be any letter of G. A string x ∈ S is said to be consistent with A if for all i ∈ [n] such that A(i) ≠ *, we have x_i = A(i). The length of A is the number of positions that A fixes to a letter of G.For an h ∈ H, an h-certificate for f is an assignment A such that for all strings x ∈ A we have f(x) = h. The certificate complexity (f, x) of f on x is the size of the shortest f(x)-certificate that x is consistent with. The certificate complexity of f is defined as (f) = max_x ∈ S(f, x).The fractional certificate complexity (f, x) of f on x ∈ S is defined as the optimal value of the following linear program:minimize ∑_i ∈ [n] v_x(i)subject to ∀ y ∈ Ss.t.f(x) ≠ f(y): ∑_i : x_i ≠ y_i v_x(i) ≥ 1.Here v_x ∈ [0;1]^n for each x ∈ S. The fractional certificate complexity of f is defined as (f) = max_x ∈ S(f, x).When the weights are taken as either 0 or 1, the optimal solution to the corresponding integer program is equal to (f, x). Hence (f,x) is a relaxation of (f,x), and we have (f,x) ≤(f,x).It has been shown that (f,x) and (f,x) are dual linear programs, hence their optimal values are equal, (f,x) = (f,x). As an immediate corollary, (f) = (f). One-sided measures.For Boolean functions with H= {0,1}, for each measure M from (f), (f), (f), (f) and a Boolean value b ∈{0, 1}, define the corresponding one-sided measure asM^b(f) = max_x ∈ f^-1(b) M(f,x).According to the earlier definitions, we then have M(f) = max{M^0(f), M^1(f)}. These one-sided measures are useful when, for example, working with compositions ofwith some Boolean function. Kolmogorov complexity.A set of strings 𝒮⊂{0, 1}^* is called prefix-free if there are no two strings in 𝒮 such that one is a proper prefix of the other. Equivalently we can think of the strings as programs for the Turing machine. Let M be a universal Turing machine and fix a prefix-free set 𝒮. The prefix-free Kolmogorov complexity of x given y, is defined as the length of the shortest program from 𝒮 that prints x when given y:K(x | y) = min{|P| | P ∈𝒮, M(P, y) = x}.For a detailed introduction on Kolmogorov complexity, we refer the reader to <cit.>.§ CLASSICAL ADVERSARY BOUNDS Let f : S → H be a function, where S ⊆ G^n. The following are all known to be lower bounds on bounded-error randomized query complexity. Relational adversary bound <cit.>. Let R : S × S →ℝ_≥ 0 be a real-valued function such that R(x,y)=R(y,x) for all x,y ∈ S and R(x,y)=0 whenever f(x)=f(y).Then for x ∈ S and an index i, let[We take the reciprocals of the expressions, compared to Aaronson's definition.]θ(x, i) = ∑_y ∈ S R(x, y)/∑_y ∈ S : x_i ≠ y_i R(x, y),where θ(x,i) is undefined if the denominator is 0. Denote[One can show that there exist optimal solutions for R, thus we can maximize over R instead of taking the supremum.](f) = max_R min_x, y ∈ S, i ∈ [n] :R(x, y) > 0, x_i ≠ y_imax{θ(x, i), θ(y, i)}.Rank-1 relational adversary bound. 
We introduce the following restriction of the relational adversary bound.Let R' be any |S| × |S| matrix of rank 1, such that: * There exist u, v : S →ℝ_≥ 0 such that R'(x,y) = u(x)v(y) for all x, y ∈ S.* R'(x,y) = 0 whenever f(x) = f(y).Then set R(x,y)=max{R'(x,y),R'(y,x)}.Let X={x | u(x)>0} and Y={y | v(y)>0 }. Note that for every x∈ S, either u(x) or v(x) must be 0, as R(x,x) must be 0, therefore X ∩ Y = ∅. Then denote_1(f) = max_u,vmin_x ∈ X, y ∈ Y, i ∈ [n] :u(x)v(y) > 0, x_i ≠ y_imax{θ(x, i), θ(y, i)}.where θ(x,i) can be simplified toθ(x, i) = ∑_y ∈ Y v(y)/∑_y ∈ Y : x_i ≠ y_i v(y)andθ(y, i) = ∑_x ∈ X u(x)/∑_x ∈ X : x_i ≠ y_i u(x).Naturally, _1(f) ≤(f).AsR(x,y)=0 whenever f(x)=f(y), we have that for every output h ∈ H either f^-1(h)∩ X = ∅ or f^-1(h)∩ Y = ∅. Therefore, _1(f) effectively bounds the complexity of differentiating between two non-overlapping sets of outputs. This leads to the following equivalent definition for_1(f): Let A ∪ B = H be a partition of the output alphabet, i.e., A ∩ B = ∅. Let p and q be probability distributions over X:=f^-1(A) and Y:=f^-1(B), respectively. Then_1(f) = max_A, Bp, qmin_i ∈ [n], g_1, g_2 ∈ G : g_1 ≠ g_2∃ x ∈ X, y ∈ Y: p(x)q(y) > 01/min{_x ∼ p[x_i ≠ g_1], _y ∼ q[y_i ≠ g_2]}.For the proof of this proposition see Appendix <ref>. Weighted adversary bound <cit.>. Let w, w' be weight schemes as follows. * Every pair (x, y) ∈ S^2 is assigned a non-negative weight w(x, y) = w(y, x) such that w(x, y) = 0 whenever f(x) = f(y).* Every triple (x, y, i) is assigned a non-negative weight w'(x, y, i) such that w'(x, y, i) = 0 whenever x_i = y_i or f(x) = f(y), and w'(x, y, i), w'(y, x, i) ≥ w(x, y) for all x, y, i such that x_i ≠ y_i.For all x, i, let wt(x) = ∑_y ∈ S w(x, y) and v(x, i) = ∑_y ∈ S w'(x, y, i). Denote(f) = max_w,w'min_x, y ∈ S, i ∈ [n]w(x,y) ≠ 0, x_i ≠ y_imax{wt(x)/v(x,i), wt(y)/v(y,i)}.Kolmogorov complexity <cit.>. Let σ∈{0, 1}^* be any finite string.[By the argument of <cit.>, we take the minimum over the strings instead of the algorithms computing f.] Denote(f) = min_σmax_x, y ∈ Sf(x) ≠ f(y)1/∑_i : x_i ≠ y_imin{2^-K(i|x, σ),2^-K(i|y, σ)}.Minimax over probability distributions <cit.>. Let {p_x}_x ∈ S be a set of probability distributions over [n]. Denote(f) = min_p max_x, y ∈ Sf(x) ≠ f(y)1/∑_i : x_i ≠ y_imin{p_x(i), p_y(i)}. § EQUIVALENCE OF THE ADVERSARY BOUNDSIn this section we prove the main theorem: Let f : S → H be a partial Boolean function, where S ⊆ G^n. Then * (f) ≤_1(f) ≤(f) = (f),* (f) = O((f)),* (f) = Θ((f)).Moreover, for total functions f : G^n → H, we have(f) = (f).The part (f) = O((f)) has been already proven in <cit.>. §.§ Fractional Block Sensitivity and the Weighted Adversary Method First, we prove that fractional block sensitivity lower bounds the relational adversary bound for any partial function. Let f : S → H be a partial Boolean function, where S ⊆ G^n. Then(f) ≤_1(f). Let x ∈ S be such that (f, x) = (f) and denote h = f(x). Let H' = H ∖{h} and S' = f^-1(H').Letbe the set of sensitive blocks of x. Let w : → [0, 1] be an optimal solution to the (f,x) linear program, that is, ∑_B ∈ w(B) = (f, x). For each B ∈, pick a single y_B ∈ S' such that B = {i | x_i ≠ y_i}. Then define R(x, y_B) := w(B) for all B ∈. It is clear that R has a corresponding rank 1 matrix R', as it has only one row (corresponding to x) that is not all zeros.Let y ∈ S' be any input such that R(x, y) > 0. Then for any i ∈ [n] such that x_i ≠ y_i,θ(x, i) = ∑_B ∈ w(B)/∑_B ∈ : i ∈ B w(B) = (f, x)/∑_B ∈ : i ∈ B w(B)≥(f),as 0 < ∑_B ∈ : i ∈ B w(B) ≤ 1. 
On the other hand, note thatθ(y, i) = w(B)/w(B) = 1,where B = {i | x_i ≠ y_i}. Therefore, for this R,min_x, y ∈ S, i ∈ [n] :R(x, y) > 0, x_i ≠ y_imax{θ(x, i), θ(y, i)}≥min_y ∈ S', i ∈ [n] :R(x, y) > 0, x_i ≠ y_imax{(f), 1} = (f),and the claim follows. As mentioned in <cit.>, (f) is a weaker version of (f). We show that in fact they are exactly equal to each other: Let f : S → H be a partial Boolean function, where S ⊆ G^n. Then(f) = (f). * First we show that (f) ≤(f).Suppose that R is the function for which the relational bound achieves maximum value. Let w(x,y) = w(y,x) = w(x,y,i) = w(y,x,i) = R(x,y) for any x, y, i such that f(x) ≠ f(y) and x_i ≠ y_i. This pair of weight schemes satisfies the conditions of the weighted adversary bound. The value of the latter with w, w' is equal to (f). As the weighted adversary bound is a maximization measure, (f) ≤(f). * Now we show that (f) ≥(f).Let w, w' be optimal weight schemes for the weighted adversary bound. Let R(x,y) = w(x,y) for any x, y ∈ S such that f(x) ≠ f(y). Let S' = f^-1(H ∖ f(x)). Thenθ(x,i) = ∑_y ∈ S' R(x, y)/∑_y ∈ S' : x_i ≠ y_i R(x, y) = ∑_y ∈ S' w(x,y)/∑_y ∈ S' : x_i ≠ y_i w(x,y)≥∑_y ∈ S' w(x,y)/∑_y ∈ S' : x_i ≠ y_i w'(x,y,i) = wt(x)/v(x,i),as w'(x,y,i) ≥ w(x,y) by the properties of w, w'. Similarly, θ(y,i) ≥wt(y)/v(y,i). Therefore, for any x, y ∈ S and i ∈ [n] such that f(x) ≠ f(y) and x_i ≠ y_i, we havemax{θ(x,i),θ(y,i)}≥max{wt(x)/v(x,i), wt(y)/v(y,i)}.As the relational adversary bound is also a maximization measure, (f) ≥(f).The proof of this proposition also shows why (f) and (f) are equivalent — the weight function w' is redundant in the classical case (in contrast to the quantum setting). §.§ Kolmogorov Complexity and Minimax over Distributions In this section we prove the equivalence between the mimimax over probability distributions and Kolmogorov complexity adversary bound. It has been shown in the proof of the main theorem of<cit.> that (f) = Ω((f)). Here we show the other direction using a well-known result from coding theory. Let S be any prefix-free set of finite strings. Then∑_x ∈ S 2^-|x|≤ 1. Let f : S → H be a partial Boolean function, where S ⊆ G^n. Then(f) ≥(f). Let σ be the binary string for which (f) achieves the smallest value. Define the set of probability distributions {p_x}_x ∈ S on [n] as follows. Let s_x = ∑_i ∈ [n] 2^-K(i | x, σ) and p_x(i) = 2^-K(i | x, σ)/s_x. The set of programs that print out i ∈ [n], given x and σ, is prefix-free (by the definition of 𝒮), as the information given to all programs is the same. Thus by Kraft's inequality, we have s_x ≤ 1.Examine the value of the minimax bound with this set of probability distributions. For any x, y ∈ S and i ∈ [n], we havemin{p_x(i), p_y(i)} = min{2^-K(i | x, σ)/s_x, 2^-K(i | y, σ)/s_y}≥min{2^-K(i|x, σ),2^-K(i|y, σ)}.Therefore, (f) = Θ((f)).§.§ Fractional Block Sensitivity and Minimax over Distributions Now we proceed to prove that for total functions, fractional block sensitivity is equal to the minimax over probability distributions. The latter has an equivalent form of the following program. For any partial Boolean function f : S → H, where S ⊆ G^n,(f) = min_v max_x ∈ S∑_i ∈ [n] v_x(i)s.t. ∀ y ∈ Ss.t.f(x) ≠ f(y): ∑_i : x_i ≠ y_imin{v_x(i), v_y(i)}≥ 1,where {v_x}_x ∈ S is any set of weight functions v_x : [n] →ℝ_≥ 0.Denote by μ the optimal value of the given program. 
* First we prove that μ≤(f).Construct a set of weight functions {v_x}_x ∈ S by v_x(i) := p_x(i) ·(f), where {p_x}_x ∈ S is an optimal set of probability distributions for the minimax bound. Then for any x, y such that f(x) ≠ f(y),∑_i : x_i ≠ y_imin{v_x(i), v_y(i)} = (f) ·∑_i : x_i ≠ y_imin{p_x(i), p_y(i)}≥(f) ·1/(f) = 1.On the other hand, the value of this solution is given bymax_x ∈ S∑_i ∈ [n] v_x(i) = max_x ∈ S(f) ·∑_i ∈ [n] p_x(i) = (f).* Now we prove that μ≥(f).Let {v_x}_x∈ S be an optimal solution for the given program. Set s_x = ∑_i ∈ [n] v_x(i). Construct a set of probability distributions {p_x}_x∈ S by p_x(i) = v_x(i)/s_x. Then for any x, y such that f(x) ≠ f(y), we have∑_i : x_i ≠ y_imin{p_x(i), p_y(i)} = ∑_i : x_i ≠ y_imin{v_x(i)/s_x, v_y(i)/s_y}≥1/μ·∑_i : x_i ≠ y_imin{v_x(i), v_y(i)}≥1/μ.Therefore, (f) ≤μ.In this case we prove that for total functions the minimax over probability distributions is equal to the fractional certificate complexity (f). The result follows since (f) = (f). The proof of this claim is almost immediate in light of the following “fractional certificate intersection” lemma by Kulkarni and Tal: Let f : G^n → H be a total function[Kulkarni and Tal prove the lemma for Boolean functions, but it is straightforward to check that their proof also works for functions with arbitrary input and output alphabets.] and {v_x}_x ∈ G^n be a feasible solution for the (f) linear program. Then for any two inputs x, y ∈ G^n such that f(x) ≠ f(y), we have∑_i : x_i ≠ y_imin{v_x(i), v_y(i)}≥ 1.Let f be a total function. Suppose that {v_x}_x ∈ G^n is a feasible solution for the (f) program. Then for any x, y ∈ G^n such that f(x) ≠ f(y),∑_i : x_i ≠ y_i v_x(i) ≥∑_i : x_i ≠ y_imin{v_x(i), v_y(i)}≥ 1.Hence this is also a feasible solution for the (f) linear program. On the other hand, if {v_x}_x ∈ G^n is a feasible solution for (f) linear program, then it is also a feasible solution for the (f) program by Proposition <ref>. Therefore, (f) = (f). § SEPARATIONS FOR PARTIAL FUNCTIONS§.§ Fractional Block Sensitivity vs. Adversary Bounds Here we show an example of a partial function that provides an unbounded separation between the adversary measures and fractional block sensitivity. There exists a partial Boolean function f : S →{0, 1}, where S ⊆{0, 1}^n, such that (f) = O(1) and _1(f), (f), (f), (f), (f) = Ω(n).Let n be an even number and S = {x ∈{0, 1}^n | |x| = 1} be the set of bit strings of Hamming weight 1. Define the “greater than half” function _n : S →{0, 1} to be 1 iff x_i = 1 for i > n/2.For the first part, the certificate complexity is constant (_n) = 1. To certify the value of greater than half, it is enough to certify the position of the unique i such that x_i = 1. The claim follows, as (f) ≥(f) for any f.For the second part, by Theorem <ref>, it suffices to show that _1(_n) = Ω(n). Let X = f^-1(0) and Y = f^-1(1). Let R(x,y) = 1 for all x ∈ X, y ∈ Y. Suppose that x ∈ X, y ∈ Y, i ∈ [n] are such that x_i = 1 (and thus y_i = 0). Thenθ(x, i)= ∑_y^* ∈ Y R(x, y^*)/∑_y^* ∈ Y : x_i ≠ y^*_i R(x, y^*) = n/2/n/2 = 1, θ(y, i)= ∑_x^* ∈ X R(x^*, y)/∑_x^* ∈ X : x^*_i ≠ y_i R(x^*, y) = n/2/1 = n/2.Therefore, max{θ(x,i), θ(y,i)} = n/2. Similarly, if i is such an index that y_i = 1 and x_i = 0, we also have max{θ(x,i), θ(y,i)} = n/2. Also note that R has a corresponding rank 1 matrix R', hence _1(f) ≥ n/2 = Ω(n). We note that a similar function was used to prove lower bounds on the problem of inverting a permutation <cit.>. 
More specifically, we are given a permutation σ(1), …, σ(n), and the function is 0 if σ^-1(1) ≤ n/2 and 1 otherwise. With a single query, one can find the value of σ(i) for any i. By construction, a lower bound on _n also gives a lower bound on computing this function. §.§ Relational Adversary vs. Kolmogorov Complexity Bound Here we show that, for a variant of the ordered search problem, the Kolmogorov complexity bound gives a tight logarithmic lower bound, while the relational adversary gives only a constant value lower bound. There exists a partial Boolean function f : S →{0, 1}, where S ⊆{0, 1}^n, such that _1(f), (f), (f) = O(1) and (f), (f) = Ω(log n).Let S = {x ∈{0, 1}^n |∃ i ∈ [0;n]: x_1 = … x_i = 0andx_i+1 = … = x_n = 1}. In other words, x is any string starting with some number of 0s followed by all 1s. Define the “ordered search parity” function _n : S →{0, 1} to be (x)2, where (x) is the last index i such that x_i = 0 (in the special case x = 1^n, assume that i = 0).For simplicity, further assume that n is even. First, we prove that (f) = Ω(log n). We use the argument of Laplante and Magniez and the distance scheme method they have adapted from <cit.>: Let f : S →{0, 1} be a Boolean function, where S ⊆{0, 1}^n. Let D be a non-negative integer function on S^2 such that D(x, y) = 0 whenever f(x) = f(y). Let W = ∑_x,y:D(x,y)≠ 01/D(x,y). Define the right load (x,i) to be the maximum over all values d, of the number of y such that D(x,y) = d and x_i ≠ y_i. The left load (y,i) is defined similarly, inverting x and y. Then(f) = Ω( W/|S|min_x,y,iD(x,y)≠ 0, x_i ≠ y_imax{1/(x,i), 1/(y,i)}).For each pair x, y such that f(x) ≠ f(y) and (x) > (y), let D(x,y) = (x)-(y). Then we haveW = ∑_k = 1^n/2 ((n+1)- (2k-1)) 1/2k-1 = (n+1) ∑_k=1^n/21/2k-1 - n/2.Since ∑_k=1^n/2 1/(2k-1) > ∑_k=1^n/2 1/2k = 1/2·∑_k = 1^n/2 1/k = H_n/2 = Θ(log n) as a harmonic number, we have that W > (n+1)H_n/2 - n/2= Θ(n log n).On the other hand, since for every x ∈ S and positive integer d there is at most one y such that D(x,y) = d, we have that (x,i) = (y,i) = 1 for any x, y such that f(x) ≠ f(y) and x_i ≠ y_i. Since |S| = n+1, by Proposition <ref>,(_n) = Ω( n log n/n) = Ω(log n).Now we prove that (_n)≤ 2. Let N=n/2; we start by fixing an enumeration of S. By x^(i), i ∈ [N+1], we denote the unique element of S satisfying (x^(i))= 2i-2 (it is a negative input for _n);by y^(j), j ∈ [N], we denote the unique element of S satisfying (y^(j))= 2j-1 (it is a positive input for _n). We claim that for every R = (r_ij),i ∈ [N+1], j∈ [N], with nonnegative entries we havemin_(i,j) ∈ [N+1] × [N] : r_ij>0 min_t ∈ [n] : x^(i)_t ≠y^(j) _tmax{θ( x^(i) ,t), θ(y^(j),t)}≤ 2,unless r_ij =0 for all i,j.Since (_n)is defined only forR which are not identically zero, we conclude that (f) ≤ 2. For all i ∈ [N+1], j=[N] we set t_ij = min{t:x^(i)_t≠y^(j) _t} =1+ min{( x^(i)),( y^(j))} =2i-1,i≤ j, 2j,i>j. .We shall show that, unless R ≡ 0, there is a pair (i,j) satisfyingr_ij > 0andmax{θ( x^(i) , t_ij), θ(y^(j),t_ij)}≤ 2.Consider i ∈{2,3,…,N+1} and j ∈ [i-1]. Then we have t_ij= 2jandθ( x^(i) , t_ij)= ∑_k=1^Nr_ik/∑_k=1^jr_ik,θ(y^(j),t_ij) = ∑_l=1^N+1r_lj/∑_l=j+1^N+1r_lj.Now consideri ∈[N] and j ∈{i,i+1,…,N}. Then we have t_ij= 2i-1andθ( x^(i) , t_ij)= ∑_k=1^Nr_ik/∑_k=i^Nr_ik, θ(y^(j),t_ij) =∑_l=1^N+1r_lj/∑_l=1^ir_lj.We introduce the following notation: * α_ij = ∑_k = j+1^N r_ik and β_ij = ∑_k=1^j r_ik, for i ∈ [N+1]and j ∈{0,1,…,i-1}; * γ_ij = ∑_l=i+1^N+1r_lj andδ_ij = ∑_l=1^i r_ljfor i ∈ [N],j ∈{i,i+1,…,N}. By convention, β_10 = α_N+1,N = 0. 
Then (<ref>)–(<ref>) can be rewritten as follows:θ( x^(i) , t_ij) = 1 + α_ij/ β_ij, j < i , 1 +β_i,i-1 / α_i, i-1, j ≥i , θ(y^(j),t_ij) = 1 + δ_jj/ γ_jj, j < i , 1 +γ_ij / δ_ij, j ≥i .Consequently, (<ref>) holds ifthere is a pair (i,j) ∈ [N+1] × [N] such that r_ij >0 and(α_ij≤β_ij) (δ_jj≤γ_jj), j <i, (β_i,i-1≤α_i, i-1)( γ_ij≤δ_ij) ,j ≥ i. Suppose the contrary: for all(i,j) ∈ [N+1] × [N]we have C1i> j ⇒( r_ij = 0) ( α_ij > β_ij)( δ_jj> γ_jj) and C2i ≤j ⇒( r_ij = 0) (β_i,i-1 > α_i, i-1)(γ_ij >δ_ij). We shall show by induction that for all i ∈{0,1,…,N}, j∈ [N] the following holds: α_i +1,i ≥β_i+1,iandγ_jj≥δ_jj. When that is established, it follows that all r_ij must be zero. To see that, recall α_N+1,N =0. Since (<ref>)implies β_N+1,N ≤α_N+1,N =0, we obtain β_N+1,N =∑_k=1^N r_N+1,k≤ 0. However, all r_lk are nonnegative, hence r_N+1,k=0 for all k ∈ [N]. That, in turn, implies ∑_l=1^N r_lN = δ_NN≤γ_NN =r_N+1,N = 0, where wehave used(<ref>) again. Now r_lN = 0 for all l ∈ [N] (and also for l=N+1), thus α_N,N -1 =r_NN = 0. Continue inductively to obtain thatα_i+1,i =β_i+1,i =0 and γ_jj = δ_jj =0 (and r_ij = 0) for all i,j. It remains to show (<ref>).The base case: we already have α_10≥ 0 = β_10.For the inductive step, suppose that (<ref>) holds for all i ∈{0,1,…,p-1} and j ∈ [p-1], for some p ∈ [N] (for p=1, the inequalityγ_jj≥δ_jj remainsunproven for all j). We shall show that both inequalities hold also with i=j=p. The proof is bycontradiction.Suppose that γ_pp < δ_pp. From (<ref>) it follows that either r_pp = 0 or β_p,p-1 > α_p,p-1. The latter is false by the inductive hypothesis, thusr_pp = 0. But thenγ_p-1,p = ∑_l=p^N+1r_lp = γ_ppandδ_p-1,p = ∑_l=1^p-1 r_lp= δ _pp .Thus we have γ_p-1,p < δ_p-1,p. Again, from (<ref>) it follows that either r_p-1,p = 0 or β_p-1,p-2 > α_p-1,p-2. The latter is false, thus r_p-1,p = 0, which implies γ_p-2,p= γ_p-1,p < δ_p-1,p = δ_p-2,p. Continuing similarly, we obtain r_1p=r_2p = … = r_pp =0. However, then δ_pp =0 and the inequalityγ_pp < δ_pp is impossible, a contradiction. Suppose that β_p+1,p >α_p+1,p. From (<ref>) it followsthat either r_p+1,p = 0 or δ_pp > γ_pp. As shown previously, the latter is false, thus r_p+1,p=0. But then we haveα_p+1,p-1 = ∑_k = p^N r_p+1,k =α_p+1,pandβ_p+1,p-1= ∑_k=1^p-1 r_p+1,k = β_p+1,p.Hence we also haveβ_p+1,p-1 >α_p+1,p-1. Then again from (<ref>) we either haveδ_p-1,p-1 > γ_p-1,p-1, or r_p+1,p-1 = 0. The former is false by the inductive hypothesis, the latter implies β_p+1,p-2= β_p+1,p-1 >α_p+1,p-1 =α_p+1,p-2. Continuing similarly, we obtain r_p+1,1= … = r_p+1,p =0. But thenβ_p+1,p = 0≤α_p+1,p, a contradiction. This completes the inductive step.§.§ Rank-1 Adversary vs. Relational Adversary In this section we show a function such that the relational adversary bound (f) is quadratically larger than the rank-1 relational adversary _1(f). First we give an example of a non-Boolean function, and then convert it to a Boolean function with the same separation.There exists a function f : S →ℕ, where S ⊆{0,1}^n, such that (f) = Ω(n) and _1(f) = O(√(n)).Let n be a perfect square and N^2 = n. For an input x ∈{0, 1}^n, split it into N blocks of N consecutive bits, and denote the j-th bit in the i-th block by x_ij. Then define S to be the set of all inputs x such that the Hamming weight of each block is exactly 1. Let f be any injection on S.First, we prove that (f) = Ω(N^2). Let R(x,y) = 1 iff x and y differ in exactly 2 bits. Pick any two such inputs x and y, and a position i such that x_i ≠ y_i. W.l.o.g. assume that x_i = 0. 
Examine θ(x,i) = ∑_z ∈ SR(x,z)/∑_z ∈ S,z_i ≠ x_iR(x,z). * The number of z such that x and z differ in 2 bits is N(N-1), since we can pick any of the N blocks of x and change the position of the single 1 in that block to any of N-1 other positions. Hence, ∑_z ∈ S R(x,z) = N(N-1).* There is only one z such that z_i ≠ x_i and x and z differ in exactly two bits, as z_i = 1.Thus, ∑_z ∈ S,z_i ≠ x_i R(x,z) = 1 and θ(x,i) = N(N-1)/1. Therefore, for any x,y,i such that R(x,y) > 0 and x_i ≠ y_i, we have max(θ(x,i),θ(y,i)) = N(N-1), and (f) = Ω(N^2).Now we prove that _1(f) ≤ N. By Proposition <ref>, let X, Y be the partition of S and u : X →ℝ, v : Y →ℝ be the probability distributions that achieve _1(f)(e.g., ∑_x ∈ X u(x) = ∑_y ∈ Y v(y) = 1). For g : S →ℝ, i ∈ [n], b ∈{0, 1}, defines(g,i,b) = ∑_x ∈ g^-1 x_i = b g(x).Then θ(x,i) = 1/s(v,i,1-x_i) and θ(y,i) = 1/s(u,i,1-y_i). We prove the following lemma: For all i ∈ [n], there is a value b ∈{0, 1} such thats(u,i,b) ≤1/_1(f) and s(v,i,b) ≤1/_1(f).Let p := 1/_1(f). Assume on the contrary that for each b ∈{0, 1}, either s(u,i,b) > p or s(v,i,b) > p. We distinguish two cases: * For some b, we have s(v,i,b) > p and s(u,i,1-b) > p. Then we can pick x ∈ X, y ∈ Y such that x_i = b, y_i = 1-b and u(x)v(y) > 0. We havemax{θ(x,i),θ(y,i)} = max{1/s(v,i,b), 1/s(u,i,1-b)} < 1/p = _1(f),a contradiction.* W.l.o.g., s(u,i,0) > p, s(u,i,1) > p, s(v,i,0) ≤ p and s(v,i,1) ≤ p. In that case2p < s(u,i,0) + s(u,i,1) = 1 = s(v,i,0) + s(v,i,1) ≤ 2p,a contradiction. Now assume on the contrary that _1(f) > N. For b ∈{0, 1}, let b := 1-b. Suppose that b_i is the value that satisfies the conditions of Lemma <ref> for i ∈ [n]. Define z := b_1b_2…b_n.First, we prove that z ∈ S. Pick any i ∈ [N] (any block). Let B = {(i-1)N+1,…,iN} be the set of variables of the i-th block. Then∑_j ∈ B s(u,j,z_j) = ∑_j ∈ B s(u,j,b_j) ≤ N ·1/_1(f) < 1by the lemma and the assumption. Since ∑_x ∈ X u(x) = 1, there is an x ∈ X such that x_ij = z_ij for all j ∈ [N], thus the i-th block of z is a correct Hamming weight 1 block. Since we picked i arbitrarily, each block of z is correct and z ∈ S.Now, we prove that z ∈ X. Examine any x ∈ X that is not z. The inputs x and z differ in at least one block, hence they have 1s in different positions in that block. Thus there is a position i such that z_i = 1 and x_i = 0. Therefore, we have∑_x ∈ Xx ≠ z u(x) ≤∑_i : z_i = 1 s(u,i,z_i) ≤ N ·1/_1(f) < 1by the lemma and the assumption. Since ∑_x ∈ X u(x) = 1, it follows that u(z) > 0, thus z ∈ X. Similarly, we prove that z ∈ Y and we get a contradiction. We can extend this result to Boolean functions:There exists a Boolean function f : S →{0,1}, where S ⊆{0,1}^n, such that (f) = Ω(n) and _1(f) = O(√(n)).Let S be the same as in Theorem <ref>. Define f asf(x) = (∑_i ∈ [n] i · x_i)2. For (f), now define R(x,y) = 1 iff f(x) ≠ f(y) and x and y differ in exactly 2 bits. For any x, we can change the position of any 1 in any block to a position of a different parity in that block in either ⌊ N/2 ⌋ or ⌈ N/2 ⌉ ways. Therefore, ∑_y ∈ S R(x,y) ≥ N·⌊ N/2 ⌋ = Ω(N^2). By the same argument as in the previous proof, we have (f) = Ω(1).On the other hand, the argument for the rank-1 adversary from the previous proof works for any X, Y (in this case, X = f^-1(0), Y = f^-1(1)). Hence, we still have _1(f) = O(N). § LIMITATION OF FRACTIONAL BLOCK SENSITIVITY In this section we show that there is a certain barrier that the fractional block sensitivity cannot overcome for partial functions. 
§.§ Upper Bound in Terms of Block SensitivityFor any partial function f : S → H, where S ⊆ G^n, and any x ∈ S,(f) ≤√(n ·(f)). We will prove that (f,x) ≤√(n ·(f,x)) for any x ∈ S. First we introduce a parametrized version of the fractional block sensitivity. Let x ∈ S be any input,the set of sensitive blocks of x and N ≤ n a positive real number. Define_N(f, x) = max_w ∑_B ∈ w(B) s.t. ∀ i ∈ [n]: ∑_B ∈ : i ∈ B w(B) ≤ 1,∑_B ∈ |B|· w(B) ≤ N.where w : → [0; 1]. If we let N = n, then the second condition becomes redundant and _n(f,x) = (f,x).For simplicity, let k = (f, x). We will prove by induction on k that _N(f, x) ≤√(N k). If k= 0, the claim obviously holds, so assume k > 0. Let ℓ be the length of the shortest block in . Then∑_B ∈ℓ· w(B) ≤∑_B ∈ |B|· w(B) ≤ Nand _N(f, x) = ∑_B ∈ w(B) ≤ N/ℓ.On the other hand, let D be any shortest sensitive block. Let f' be the restriction of f where the variables with indices in D are fixed to the values of x_i for all i ∈ D. Note that (f', x) ≤ k-1, as we have removed all sensitive blocks that overlap with D. Let ' be the set of sensitive blocks of x on f' and let = {B ∈| B ∩ D ≠∅}, the set of sensitive blocks that overlap with D (including D itself). Then no T ∈ is a member of ', therefore∑_B' ∈' |B'|· w(B') ≤ N - ∑_T ∈ |T| · w(T) ≤ N - ℓ·∑_T ∈ w(T). Denote t = ∑_T ∈ w(T). We have that t ≤ |D| = ℓ, as any T ∈ overlaps with D. By combining the two inequalities we get_N(f, x)≤max_ℓ∈ [0; n]min{N/ℓ, max_t ∈ [0; ℓ]{ t + _N-ℓ t(f', x)}}≤max_ℓ∈ [0; n]min{N/ℓ, max_t ∈ [0; ℓ]{ t + √((N-ℓ t)(k-1))}}.If N/ℓ≤√(N k), we are done. Thus further assume that ℓ < √(N/k).Denote g(t) = t + √((N-ℓ t)(k-1)). We need to find the maximum of this function on the interval [0;ℓ] for a given ℓ. Its derivative,g'(t) = 1 - ℓ/2√(k-1/N-ℓ t),is a monotone function in t. Thus it has exactly one root, t_0 = N/ℓ - (k-1) ·ℓ/4. Therefore, g(t) attains its maximum value on [0;ℓ] at one of the points {0, t_0, ℓ}. * If t = 0, then g(0) = √(N(k-1))≤√(Nk).* If t = t_0, then, as t ≤ℓ < √(N/k),√(Nk) - k-1/4·√(N/k) < N/ℓ-(k-1) ℓ/4 < √(N/k) √(k) - k-1/4 √(k) < √(1/k)3k< 0.The last inequality has no solutions in natural numbers for k, so this case is not possible.* If t = ℓ, then g(t) = ℓ + √((N-ℓ^2)(k-1)).Now it remains to find the maximum value of h(k) = ℓ + √((N-ℓ^2)(k-1)) on the interval [0;√(N/k)]. The derivative is equal toh'(ℓ) = 1-ℓ·√(k-1/N-ℓ^2).The only non-negative root of h'(ℓ) is equal to ℓ_0 = √(N/k). Then h(ℓ) is monotone on the interval [0; √(N/k)]. Thus h(ℓ) attains its maximal value at one of the points {0, √(N/k)}. * If ℓ = 0, then h(ℓ) = √(N(k-1)) < √(Nk).* If ℓ = ℓ_0 = √(N/k), thenh(ℓ) = √(N/k) + √((N-N/k)(k -1)) = √(N)(√(1/k) + (k-1) √(1/k)) = √(Nk). Thus, h(ℓ) ≤√(Nk) and that concludes the induction.Therefore, (f,x) = _n(f,x) ≤√(n ·(f,x)), hence also (f) ≤√(n ·(f)) and we are done. We also give a simpler proof of the same (asymptotically) upper bound: For any partial function f : S → H, where S ⊆ G^n, and any x ∈ S,(f) = O(√(n ·(f))). We show that for all x ∈ S, we have (f,x) = O(√(n ·(f,x))). The claim then follows as (f,x) = (f,x). Since (f,x) is a minimization linear program, it suffices to show a fractional certificate v of size at most O(√(n·(f,x))). Let k be a parameter between 1 and n. Let = {B ⊆ [n] | f(x) ≠ f(x^B), |B| ≤ k} be a maximum set of non-overlapping sensitive blocks of x of size at most k. Then || ≤(f). Let S = ⋃_B ∈ B be the set of all positions in blocks of . 
We construct the fractional certificate v by setting v(i) = 1 for all i ∈ S, and v(i) = 1/k for all i ∉ S.Let B be any sensitive block of x of size at most k. Asis a maximum set of non-overlapping sensitive blocks, there must exist a B' ∈ such that B ∩ B' ≠∅. Therefore, ∑_i ∈ B v(i) ≥ |B ∩ B'| ≥ 1. On the other hand, if |B| ≥ k, then ∑_i ∈ B v(i) ≥ |B|/k ≥ 1. Hence v is a feasible fractional certificate. The size of v is ∑_i ∈ [n] v(i) ≤ ||· k + n/k ≤(f) · k + n/k. The last expression asymptotically reaches the minimum when (f) · k = n/k, which happens if k = √(n / (f)). Then (f,x) = O(√(n·(f))).§.§ A Matching ConstructionFor any k ∈ℕ, there exists a partial Boolean function f : S →{0, 1}, where S ⊆{0, 1}^n, such that (f) = k and (f) = Ω(√(n ·(f))).Take any finite projective plane of order t, then it has ℓ = t^2+t+1 many points. Let n = kℓ and enumerate the points with integers from 1 to ℓ. Let X = {0^ℓ} and Y = {y | there exists a line L such that y_i = 1 iff i ∈ L}. Define the (partial) finite projective plane function _t : X ∪ Y →{0, 1} as _t(y) = 1y ∈ Y.We can calculate the 1-sided block sensitivity measures for this function: * ^0(_t) ≥ (t^2+t+1) ·1/t+1 = Ω(t), as each line gives a sensitive block for 0^n; since each point belongs to t+1 lines, we can assign weight 1/(t+1) for each sensitive block and that is a feasible solution for the fractional block sensitivity linear program.* ^0(_t) = 1, as any two lines intersect, so any two sensitive blocks of 0^n overlap.* ^1(_t) = 1, as there is only one negative input. Next, define f : S^× k→{0, 1} as the composition ofwith the finite projective plane function, f = _k (_t(x^(1)), …, _t(x^(k))). By the properties of composition with(see Proposition 31 in <cit.> for details), we have * (f) = max{^0(f), ^1(f)}≥^0(f) = ^0(_t)· k = Θ(t) · k = Θ(t· n/t^2) = Θ(n/t),* (f) = max{^0(f), ^1(f)} = ^0(_t) · k = k = Θ(n/t^2).As √(n · n/t^2) = n/t, we have (f) = Ω(√(n·(f))) and hence the result. Note that our example is also tight in regards to the multiplicative constant, since t can be unboundedly large (and the constant arbitrarily close to 1). § OPEN ENDSLimitation of the Adversary Bounds. In the quantum setting, the certificate barrier shows a limitation on the quantum adversary bounds. In the classical setting, by our results, fractional block sensitivity characterizes the classical adversary bounds for total functions and thus is of course an upper bound. Is there a general limitation on the classical adversary methods for partial functions? Block Sensitivity vs. Fractional Block Sensitivity. We have exhibited an example with the largest separation between the two measures for partial functions, (f) = O(√(n·(f))). For total functions, one can show that (f) ≤(f)^2, but the best known separations achieve (f) = Ω((f)^3/2) <cit.>. Can our results be somehow extended for total functions to close the gap? § ACKNOWLEDGEMENTS We are grateful to Rahul Jain for igniting our interest in the classical adversary bounds and Srijita Kundu and Swagato Sanyal for helpful discussions. We also thank Jānis Iraids for helpful discussions on block sensitivity versus fractional block sensitivity problem. § RANK-1 RELATIONAL ADVERSARY DEFINITION Let u, v be vectors that maximize _1(f). Let h ∈ H be any letter and S_h = f^-1(h). Since for every x, y, such that f(x) = f(y), we have u(x)v(y) = 0, it follows that either u(x) = 0 for all x ∈ S_h or v(x) = 0 for all x ∈ S_h. 
Therefore, we can find a partition A ∪ B = H such that: * if u(x) > 0, then f(x) ∈ A;* if v(x) > 0, then f(y) ∈ B;* for every h ∈ H, either h ∈ A or h ∈ B.This partition therefore also defines a partition of the inputs, X ∪ Y = S, where X = f^-1(A) and Y = f^-1(B).Now, notice that θ(x,i) does not depend on the particular choice of x if x_i:=g_1 ∈ G is fixed.Similarly, let y_i := g_2∈ G be fixed, then θ(y,i) does not depend on the particular choice of y.This allows to simplify the expression for _1(f), since for each i we can fix values g_1 ≠ g_2 (such that there existx ∈ X, y ∈ Y with u(x) v(y)>0 and x_i=g_1 and y=g_2) and ignore the remaining components ofx, y, i.e.,_1(f) = max_A,B: A∪ B = Hmax_u, vmin_i ∈ [n], g_1, g_2 ∈ G, g_1 ≠ g_2:∃ x ∈ X, y ∈ Y: x_i = g_1, y_i = g_2, u(x)v(y) > 0max{∑_y ∈ Y v(y)/∑_y ∈ Y : y_i ≠ g_1 v(y), ∑_x ∈ X u(x)/∑_x ∈ X : x_i ≠ g_2 u(x)}. Further assume that both X and Y are non-empty, because otherwise the value of _1 would not be defined. Notice that multiplying either u or v with any scalar does not affect the value of _1. Hence, we can scale u and v to probability distributions p and q over X and Y, respectively. More specifically, we can further simplify _1:_1(f)= max_A,B: A∪ B = Hmax_p, qmin_i ∈ [n], g_1, g_2 ∈ G: g_1 ≠ g_2:∃ x ∈ X, y ∈ Y: x_i = g_1, y_i = g_2, p(x)q(y) > 01/min{∑_y ∈ Yy_i ≠ g_1 q(y), ∑_x ∈ Xx_i ≠ g_2 p(x)}= max_A,B: A∪ B = Hmax_p, qmin_i ∈ [n], g_1, g_2 ∈ G: g_1 ≠ g_2:∃ x ∈ X, y ∈ Y: x_i = g_1, y_i = g_2, p(x)q(y) > 01/min{_y ∼ q[y_i ≠ g_1], _x ∼ p[x_i ≠ g_2]}.We can further simplify this definition if the inputs are Boolean: Let f : S → H, where S ⊆{0, 1}^n. Let A ∪ B = H be a partition of the output alphabet, i.e., A ∩ B = ∅. Let p and q be probability distributions over X:=f^-1(A) and Y:=f^-1(B), respectively. Then_1(f) = max_A,B,p,qmin_i ∈ [n],b ∈{0, 1}1/min{_y ∼ q[y_i ≠ b], _x ∼ p[x_i = b]}. For g_1, g_2 ∈{0, 1}, g_1 ≠ g_2 implies g_2 = g_1 ⊕ 1. It follows that_1 (f) =max_A, B,p, qmin_i ∈ [n], b∈{0,1}: ∃ x ∈ X, y ∈ Y:x_i = b,y_i ≠b,p(x)q(y) > 01/min{_y ∼ q[y_i ≠ b], _x ∼ p[x_i = b]} . Moreover, we can drop the requirement ∃ x ∈ X, y ∈ Y: x_i = b,y_i ≠b, p(x)q(y) > 0. To see that, fix any p, q,and consider the quantities α =max_i ∈ [n]max_b∈{0,1}: ∃ x ∈ X, y ∈ Y:x_i = b,y_i ≠b,p(x)q(y) > 0min{_x ∼ p[x_i = b], _y ∼ q[y_i ≠ b]}β =max_i ∈ [n]b∈{0,1}min{_x ∼ p[x_i = b], _y ∼ q[y_i ≠ b]}. Clearly, α≤β. To show the converse inequality, consider any i∈ [n] and (if such exists) b ∈{0,1} satisfying u(x) v(y) = 0 for any x ∈ X, y ∈ Y with x_i = b, y_i ≠ b (to deal with the possibility no such x, y exist, we consider the empty sum to be zero). Then also0= ∑_x ∈ X, y ∈ Y x_i = b, y_i ≠ b p(x) q(y)= (∑_x ∈ X :x_i = bp(x) )(∑_y ∈ Y :y_i ≠bq(y)) = _x ∼ p[x_i = b] ·_y ∼ q[y_i ≠ b]. Therefore,min{_x ∼ p[x_i = b] , _y ∼ q[y_i ≠ b]} =0≤α. Thus α = β. Thus the claim follows.We also note that _1(f) can be found the following way.Let A ∪ B = H be anysuitable partition of H and denote_1(f,A,B) = max_p, qmin_i ∈ [n], g_1, g_2 ∈ G: g_1 ≠ g_2:∃ x ∈ X, y ∈ Y: x_i = g_1, y_i = g_2, p(x)q(y) > 01/min{_y ∼ q[y_i ≠ g_1], _x ∼ p[x_i ≠ g_2]}.Then _1(f) = max_A,B_1(f,A,B). On the other hand, for each fixed partition A,B the value _1(f,A,B) can be found from the following program:Let f : S → H, where S ⊆ G^n. Let A ∪ B = H be any partition of H such that A, B ≠∅. Let X = f^-1(A) and Y = f^-1(B). The value of _1(f,A,B) is equal to the optimal solution of the following program:maximize∑_x ∈ X w_x s.t. 
∑_x ∈ X w_x = ∑_y ∈ Y w_y,
min{∑_x ∈ X : x_i ≠ g_2 w_x , ∑_y ∈ Y : y_i ≠ g_1 w_y}≤ 1 for all i ∈ [n] and g_1, g_2 ∈ G with g_1 ≠ g_2 such that there exist x ∈ X, y ∈ Y with x_i = g_1, y_i = g_2 and w_x w_y > 0,
w_x ≥ 0 for all x ∈ S.
The proof is analogous to that of Lemma <ref>. Denote the optimal value of this program by μ. Then μ≤_1(f,A,B), since we can take p(x) = w_x / μ, q(y) = w_y/μ (where {w_x}_x ∈ S is an optimal solution of the program). This way we obtain a feasible solution for _1(f,A,B), which gives
min{∑_x ∈ X : x_i ≠ g_2 p(x) , ∑_y ∈ Y : y_i ≠ g_1 q(y) } = 1/μ·min{∑_x ∈ X : x_i ≠ g_2 w_x , ∑_y ∈ Y : y_i ≠ g_1 w_y }≤ 1/μ
for each i ∈ [n], g_1, g_2 ∈ G such that g_1 ≠ g_2 and there exist x ∈ X, y ∈ Y with x_i = g_1 and y_i = g_2, thus _1(f,A,B) ≥μ.
Let us show the converse inequality. If the probability distributions p, q provide an optimal solution for _1(f,A,B), then w_x = p(x) ·_1(f,A,B) and w_y = q(y) ·_1(f,A,B) give a feasible solution for the program, and the value of this solution is ∑_x ∈ X w_x = _1(f,A,B). Hence, also _1(f,A,B) ≤μ.
For Boolean outputs, the partition of H can be fixed to A = {0}, B = {1}, giving a single program. For Boolean inputs, the condition on g_1, g_2 ∈ G with g_1 ≠ g_2 and w_x w_y > 0 can be replaced simply by a choice of b ∈{0, 1} by Proposition <ref>. Therefore, for Boolean functions this program can be recast as a mixed-integer linear program, providing an algorithm for finding _1(f).
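As a complementary, hedged illustration: for Boolean inputs and outputs, any fixed pair of distributions p on X = f^-1(0) and q on Y = f^-1(1) certifies a lower bound on the rank-1 relational adversary through the expression of Proposition <ref>, namely the minimum over i and b of 1/min{Pr_{y∼q}[y_i ≠ b], Pr_{x∼p}[x_i = b]}. The small Python sketch below simply evaluates that expression; it does not solve the mixed-integer program mentioned above, and the function name and the example distributions are our own.

# Minimal sketch (not from the paper): evaluate the rank-1 adversary objective for
# given distributions p on X = f^-1(0) and q on Y = f^-1(1), Boolean inputs/outputs.
# The returned value is a certified lower bound on the rank-1 relational adversary;
# maximising it over (p, q) would give the exact quantity.

def adv1_lower_bound(p, q, n):
    # p, q: dicts mapping length-n bit strings to probabilities summing to 1.
    def prob(dist, i, b):
        return sum(w for x, w in dist.items() if x[i] == b)
    best = float('inf')
    for i in range(n):
        for b in '01':
            not_b = '1' if b == '0' else '0'
            denom = min(prob(q, i, not_b), prob(p, i, b))
            if denom > 0.0:              # pairs with denom = 0 contribute +infinity
                best = min(best, 1.0 / denom)
    return best

# Example: the block-parity function with N = 2 from the previous section,
# with uniform distributions on the two sides.
X = ['1010', '0101']                     # inputs with f = 0
Y = ['1001', '0110']                     # inputs with f = 1
p = {x: 1.0 / len(X) for x in X}
q = {y: 1.0 / len(Y) for y in Y}
print(adv1_lower_bound(p, q, n=4))       # prints 2.0

For this toy instance the certified bound equals 2 = N, which is consistent with the O(N) upper bound on the rank-1 adversary established for this family above.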
http://arxiv.org/abs/1709.08985v3
{ "authors": [ "Andris Ambainis", "Martins Kokainis", "Krišjānis Prūsis", "Jevgēnijs Vihrovs", "Aleksejs Zajakins" ], "categories": [ "cs.CC" ], "primary_category": "cs.CC", "published": "20170926125350", "title": "All Classical Adversary Methods are Equivalent for Total Functions" }
http://arxiv.org/abs/1709.10352v1
{ "authors": [ "Fattaneh Bayatbabolghani", "Kourosh Parand" ], "categories": [ "cs.NA" ], "primary_category": "cs.NA", "published": "20170927150511", "title": "A Comparison Between Laguerre, Hermite, and Sinc Orthogonal Functions" }
Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography Timo Lähivaara^a, Leo Kärkkäinen^b, Janne M.J. Huttunen^b, and Jan S. Hesthaven^c ^aDepartment of Applied Physics, University of Eastern Finland, Kuopio, Finland ^bNokia Technologies, Espoo,Finland Present address: Nokia Bell Labs, Espoo, Finland ^cComputational Mathematics and Simulation Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland ==================================================================================================================================================================================================================================================================================================gobble * Monash Centre for Astrophysics, School of Physics and Astronomy, MonashUniversity, VIC 3800, Australia* DepartmentofPhysics,AstronomyandGeosciences,Towson University, Towson, MD, 21252, USA* Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, University of Pittsburgh, 3941 O'Hara Street, Pittsburgh, PA, 15260, USA* Max-Planck Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany* Space Research Institute, Profsoyuznaya 84/32, 117997 Moscow, RussiaType Ia supernovae have proven vital to our understanding of cosmology, both as standard candles and for their role in galactic chemical evolution; however, their origin remains uncertain. The canonical accretion model implies a hot and luminous progenitor which would ionize the surrounding gas out to a radius of ∼10–100 parsecs for ∼100,000 years after the explosion. Here we report stringent upper limits on the temperature and luminosity of the progenitor of Tycho's supernova (SN 1572), determined using the remnant itself as a probe of its environment. Hot, luminous progenitors that would have produced a greater hydrogen ionization fraction than that measured at the radius of the present remnant (∼3 parsecs) can thus be excluded. This conclusively rules out steadily nuclear-burning white dwarfs (supersoft X-ray sources), as well as disk emission from a Chandrasekhar-mass white dwarf accreting ≳ 10^-8M_⊙yr^-1 (recurrent novae).The lack of a surrounding Strömgren sphere is consistent with the merger of a double white dwarf binary, although other more exotic scenarios may be possible. Four hundred and forty-five years ago, the explosion of SN 1572 in the constellation of Cassiopeia demonstrated definitively that the night sky was not permanent – the Universe evolves. The nature of this supernova, however, was not conclusively determined until recently, when modern analysis of the historical light curve,<cit.> the X-ray spectrum of the supernova remnant<cit.> and the spectrum of light echoes from the explosion scattered off of interstellar dust<cit.> showed that it belonged to the majority class of Type Ia supernovae (SNe Ia). In spite of these advances, however, the nature of the star which gave rise to this explosion, as well as all others like it, remains unknown. Theoretical models fall into two broad categories: the accretion scenario,<cit.> wherein a white dwarf grows slowly in mass through accretion and nuclear-burning of material from a binary companion prior to explosion; and alternatively, the merger scenario,<cit.> wherein a binary pair of white dwarfs merge after shedding angular momentum through gravitational-wave radiation. 
Efforts to constrain the nature of the progenitor and any surviving companion star in the vicinity of Tycho's supernova<cit.> have remained inconclusive.<cit.> In particular, there remains significant disagreement on the location of the centre of Tycho's supernova, and therefore the viability of any candidate surviving donor star.<cit.> The extent to which the most commonly cited candidate, Tycho g, stands out from other stars along the same line of sight also remains in question.<cit.>Attempts to detect emission from an individual SN Ia progenitor system have thus far relied largely on searching pre-explosion images of the host galaxies of a few nearby and very recent events.<cit.> These searches have provided some constraints on the presence of hot luminous progenitors immediately prior to explosion, as well as optically-luminous companions, for these few SNe Ia – in varying degrees of tension with the accretion scenario. Using observations from the Kepler spacecraft, red giant donors were also ruled out for three likely type Ia supernovae in red, passive galaxies,<cit.> based on the lack of observed shock interaction between the expanding supernova and a Roche-lobe filling companion. However,SNe Ia in passive galaxies are not representative of the “typical” population.<cit.> None of these observations can make any statement about the earlier evolution of the progenitor. Nor can these constraints be applied to the large sample of resolved supernova remnants in the Local Group. Conversely, searches for evidence of any luminous progenitor population in X-ray<cit.> or UV<cit.> emission have placed strong constraints on the total contribution of such progenitors to the observed SN Ia rate in nearby galaxies, but this cannot be inverted to yield information on the origin of individual objects.Here we propose an alternative test: to search for a “fossil” or “relic” nebula around individual supernovae photoionized by their progenitors. In the accretion scenario, the process which leads to the growth of the white dwarf mass is fusion of hydrogen to helium and further to carbon and oxygen. These accreting white dwarfs must go through a long-lived (≳ 100,000 years), hot luminous phase of steady nuclear burning on the surface, with effective temperatures of 10^5–10^6K and luminosities of 10^37–10^38erg/s at some point prior to explosion.<cit.> Such objects are expected to be significant sources of ionizing radiation.<cit.> For accretion rates ≲few× 10^-7M_⊙ yr^-1, hydrogen fusion is subject to thermal instability and the nuclear energy is mostly released in theform of classical and recurrent nova explosions.<cit.> Recent measurements of the WD mass in a few nearby recurrent novae yielded values close to the Chandrasekhar mass limit,<cit.> suggesting that these objects may contribute to the production of Type Ia supernovae (if they are carbon-oxygen white dwarfs). However, any accretion onto a white dwarf must also release gravitational potential energy. In the parameter range expected for Type Ia supernova progenitors, accretion proceeds through an optically-thick, geometrically-thin (Shakura-Sunyaev)<cit.> disk, which has a well-understood luminosity, temperature, and emitted spectrum (see Methods). 
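To make the order-of-magnitude figures quoted in the next sentence concrete, the following Python sketch (ours, not part of the original text) evaluates the accretion luminosity L = GM_WD Ṁ/(2R_WD), with the zero-temperature white-dwarf mass-radius fit quoted in the Methods, together with the peak effective temperature of a standard Shakura-Sunyaev disk with a zero-torque inner boundary, T_peak ≈ 0.488 (3GM_WD Ṁ/8πσR_WD^3)^1/4. The 0.488 prefactor is standard thin-disk theory (the same zero-torque convention as the ezDiskbb model used in the Methods), not a number quoted in this paper.

# Minimal sketch (assumptions as stated above): accretion luminosity and peak
# disk temperature for a massive white dwarf accreting at ~1e-8 Msun/yr.
import math

G      = 6.674e-11    # m^3 kg^-1 s^-2
SIGMA  = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
M_SUN  = 1.989e30     # kg
R_SUN  = 6.957e8      # m
YEAR   = 3.156e7      # s

def wd_radius(m_wd):
    # Zero-temperature WD radius (m) from the fit quoted in the Methods; m_wd in Msun.
    return 0.0126 * m_wd ** (-1.0 / 3.0) * math.sqrt(1.0 - (m_wd / 1.456) ** (4.0 / 3.0)) * R_SUN

def disk_luminosity(m_wd, mdot):
    # Half of the accretion power, L = G M Mdot / (2 R), in watts; mdot in Msun/yr.
    return 0.5 * G * (m_wd * M_SUN) * (mdot * M_SUN / YEAR) / wd_radius(m_wd)

def disk_peak_temperature(m_wd, mdot):
    # Peak effective temperature (K) of a zero-torque Shakura-Sunyaev disk, ~0.488 T_*.
    t_star4 = 3.0 * G * (m_wd * M_SUN) * (mdot * M_SUN / YEAR) / (8.0 * math.pi * SIGMA * wd_radius(m_wd) ** 3)
    return 0.488 * t_star4 ** 0.25

m_wd, mdot = 1.35, 1.0e-8     # Msun and Msun/yr
print(f"R_WD   = {wd_radius(m_wd):.2e} m")
print(f"L_disk = {disk_luminosity(m_wd, mdot) * 1.0e7:.1e} erg/s")   # W -> erg/s
print(f"T_peak = {disk_peak_temperature(m_wd, mdot):.1e} K")

For M_WD = 1.35 M_⊙ and Ṁ = 10^-8 M_⊙ yr^-1 this returns roughly R_WD ≈ 2×10^6 m, L ≈ 2×10^35 erg s^-1 and T_peak ≈ 2×10^5 K, in line with the scales quoted below.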
From this, one finds that for massive white dwarfs undergoing accretion rates as low as ∼10^-8 M_⊙ yr^-1, the typical inner disk temperatures andluminosities are on the order of 10^5K and 10^35erg/s, rendering them much dimmer (though not insignificant) sources of ionizing radiation.Any nebula created by the supernova progenitor will persist until sufficient time has passed for the majority of the gas to recombine.<cit.> This can be estimated from the hydrogen recombination rate α_B (H^0,T), and the density n_ ISM of the surrounding interstellar medium, assuming the gas is initially nearly wholly ionized (such that the electron density n_ e∼ n_ ISM): τ_rec = (n_eα_B (H^0,T ≈ 10^4K))^-1≈ (100,000) ×(n_ISM/1 cm^-3)^-1 years where we have assumed Case B recombination (i.e., ionizing photons produced by recombinations to the ground state are immediately absorbed). If the medium is initially only partially-ionized, it will have a longer characteristic recombination timescale. Typically n_ISM ∼1 cm^-3,so τ_rec ∼ 100,000 years. This allows one to constrain the ionization history of supernova progenitors by searching for evidence of their lingering impact on their environment, thus enabling “Type Ia supernova archaeology.”<cit.> This requires knowledge of the density and ionization state of the ISM in the ∼ 1 – 100 pc vicinity of known SNe Ia.<cit.>One way to obtain this information is to use the expanding supernova shock itself as a probe of the surrounding ISM.Tycho's supernova remnant (SNR) is one of a number of known SNe Ia remnants whose forward shock is traced in part by filaments of Balmer line optical emission. This arises due to collisional excitation of neutral hydrogen immediately behind the advancing shock, where excitation of cold neutral hydrogen produces a narrow Balmer emission line, while excitation of hot neutral hydrogen formed by charge exchange gives rise to a broad Balmer emission line.<cit.> The very existence of this emission along the eastern and northern periphery of Tycho's SNR demonstrates that the ambient environment around the remnant (and thus by extension its progenitor) is at least partially neutral.<cit.> Modelling of both the Hα and Hβ broad-to-narrow flux ratios from the forward shock as well as the [O III]/Hβ ratio from the photoionization precursor ahead of the forward shock indicates that the ambient hydrogen must be at least 80% neutral in this region<cit.> (see also Supplementary Information).This strongly constrains the ionizing luminosity from the progenitor prior to, and during, the explosion.<cit.>Recently, it has been suggested based on CO observations<cit.> that Tycho's SNR is associated with dense clumps of molecular gas in the same region of the Milky Way, suggesting a thick molecular shell possibly excavated by a fast, continued outflow from the progenitor. 
We note, however, that the known age (445 yr), physical size (∼3 pc)<cit.> and ionization timescale (log (n_et/(cm^-3s)) ∼10.5 for the Si ejecta)<cit.> of Tycho confidently rule out an expansion into any kind of low-density cavity or dense, massive shell excavated by a progenitor outflow.<cit.>These properties of the SNR are fully consistent with an expansion into an undisturbed ISM with average density n_ISM ∼0.5 to 1 cm^-3 since the time of the explosion.This is very close to the density that the outer shock is running into today (although the remnant is also encountering denser gas on the eastern edge, this is not characteristic of the mean density of the environment).<cit.> Any large-scale modification of the ambient medium around the SN is in direct conflict with the bulk properties of the SNR (for more details, see Figure 3 and discussion in Patnaude & Badenes 2017). Additional arguments against the existence of any molecular bubble associated with the remnant of Tycho's supernova include the spatial extent of the observed photoionization precursor, and the marked discrepancy between the velocities of the CO and H Balmer emission lines measured relative to the local standard of rest. These arguments are summarised in the Supplementary Information. We note however, that even in the event such a molecular bubble were associated with Tycho, with an inner radius just outside the present radius of the shock, the low density and ionization state of the gas interior to this point which is presently being overrun would still provide the same constraint on the nature of the progenitor prior to explosion.The characteristic size of thenebula ionised by the hypothetical hot supernova progenitor is determined by the “Strömgren” radius (R_S), which scales as<cit.>:R_S≈ 35pc(Ṅ_ph/10^48 s^-1)^1/3(n_ISM/1 cm^-3)^-2/3where Ṅ_ph is the ionizing luminosity (in photons per second). Note that for variable sources, a weighted average of the ionizing photon luminosity over the recombination timescale is the quantity of interest.<cit.> The number of ionizing photons emitted per unit energy depends on the shape of the emitter's spectrum, but for photospheric temperatures in the range 2×10^4 K ≲ T ≲ 10^6 K it is ∼10^9–10^10 ionizing photons/erg.For relatively cooler ionizing sources (T ≲10^5 K), the outer boundary of the ionized nebula is very sharply defined (see blue lines in Fig. <ref>), owing to the high photoionization cross-section for hydrogen.Therefore, given the ambient density of the surrounding ISM inferred above, the presence of any neutral hydrogen at the forward shock radius of Tycho's SNR (R_s≲ 3 pc) places a strict upper limit on the size of the ionised nebula for Tycho's supernova. For an average surrounding ISM density of n_ISM ≲ 1 cm^-3, this translates to an upper limit on the ionising photon luminosity of Ṅ_ph ≲ 6× 10^44 s^-1. From eq. <ref>, the upper limit on the ionising source luminosity scales as the ∝ n_ISM^2. However, as explained above, the density of the ISM surrounding Tycho's supernova is fairly well constrained.Hotter sources (T_eff ≳ few × 10^5 K) produce higher energy photons with longer mean free paths, which broaden the boundary between ionized and neutral media.This necessitates using a detailed photoionization simulation in order to determine the fraction of ionized hydrogen as a function of radius (see Methods). This is illustrated in Fig. 
<ref>, from which it is clear that any luminous (≳ 10^36erg/s), high temperature source would still have produced a greater ionized hydrogen fraction (≈20%) than observed at the present radius of the remnant. Here we have assumed n_ISM ≈1 cm^-3, approximately the observed upper limit. Lower densities would yield larger ionized regions (cf. eq. <ref>).We summarize our constraints on hot, luminous progenitors in Fig. <ref>, which compares our upper limits on the luminosity as a function of effective temperature with theoretical models of white dwarfs accreting at rates capable of sustaining steady nuclear-burning of hydrogen.<cit.>For comparison, we include parameters for several observed sources.<cit.> All are confidently excluded. Note that for putative progenitors with complex accretion histories, any arbitrary trajectory in the HR diagram given in Fig. <ref> can be excluded using the same constraint i.e., if it produces too great a time-averaged photoionizing flux. From these results it is clear the progenitor of Tycho's supernova cannot be described by the classic nuclear-burning accretion scenario. A white dwarf accreting at a much higher rate, such that it ejected sufficient mass in a fast wind that might have masked the ionizing flux,<cit.> would have strongly modified the surrounding environment, in conflict with the apparent evolution of the shock into an undisturbed, constant density medium.<cit.> Slow (∼ 100km/s) winds could in principle obscure the lowest luminosity sources we otherwise exclude in Fig. <ref> (∼ 10^36erg/s) for mass loss rates greater than 10^-8 M_⊙/yr (e.g., from a companion on the first giant branch)<cit.>; however, radio<cit.> and X-ray<cit.> observations have ruled out such winds in the environments of normal SNe Ia,<cit.> and no surviving giant donor consistent with this scenario has been found for Tycho (see discussion above).We can perform a similar experiment using our photoionisation simulations, given emission spectra from an accretion disk around a white dwarf. For a Chandrasekhar-mass white dwarf, the threshold for an accretion rate capable of steady nuclear-burning is ∼ 4× 10^-7 M_⊙ yr^-1.<cit.> We find that any Shakura-Sunyaev<cit.> disk with accretion rates exceeding ≳ 10^-8M_⊙ yr^-1 onto a Chandrasekhar-mass white dwarf can be confidently excluded. For hydrogen-accreting white dwarfs, this is approximately the threshold accretion rate below which the mass ejected in novae is theoretically expected to exceed that accumulated between outbursts – i.e., the white dwarf could not have been growing in mass.<cit.> Note that, more generally, our constraint on the accretion rate is independent of whether the white dwarf is accreting hydrogen or helium. Thus, the detection of neutral matter in the vicinity of Tycho's supernova is strongly constraining also for accreting scenarios without surface nuclear burning. In particular, it excludes any nova progenitor with recurrence time shorter than ∼ 50 years.<cit.>To conclude, we rule out steadily nuclear-burning white dwarfs or recurrent novae as the progenitors of Tycho's supernova. This is consistent with recent theoretical work indicating sigificant mass accumulation in steadily hydrogen-burning accreting white dwarfs may not be feasible.<cit.> Models which do not predict a hot, luminous phase prior to explosion, such as the merger or “double-degenerate” scenario, remain consistent with our result. 
This includes so-called “violent” mergers,<cit.> although such explosions may be expected to be too asymmetric to explain typical SNe Ia,<cit.> including Tycho's supernova.<cit.> Notably, it has been suggested that some white dwarf mergers may actually produce a short-lived soft X-ray source; this too is excluded by our constraint, although the same theoretical models suggest in these instances the object may not explode as a SN Ia.<cit.> We also cannot exclude a `spin-up-spin down' single-degenerate progenitor model with a spin-down timescale longer than ∼10^5 years for the origin of Tycho's supernova,<cit.> although there remain other theoretical and observational challenges for this scenario.<cit.>Given that the light echo spectrum of Tycho's supernova has revealed it to be a typical Type Ia,<cit.> any plausible model for the origin of the majority of SNe Ia must remain consistent with the constraint outlined here. Similarly strong constraints – or detections – can be obtained for other nearby SN Ia remnants with sufficiently deep observations,<cit.> using the expanding shock to probe the progenitor's environment. This opens a new path to reveal at last the progenitors of SNe Ia.§ BIBLIOGRAPHYnaturemag 10 url<#>1urlprefixURLRuiz_Lapuente04 authorRuiz-Lapuente, P. titleTycho Brahe's Supernova: Light from Centuries Past. journal volume612, pages357–363 (year2004). astro-ph/0309009.Badenes06 authorBadenes, C., authorBorkowski, K. J., authorHughes, J. P., authorHwang, U. & authorBravo, E. titleConstraints on the Physics of Type Ia Supernovae from the X-Ray Spectrum of the Tycho Supernova Remnant. journal volume645, pages1373–1391 (year2006). astro-ph/0511140.Krause08 authorKrause, O. et al. titleTycho Brahe's 1572 supernova as a standard typeIa as revealed by its light-echo spectrum. journal volume456, pages617–619 (year2008). 0810.5106.WI73 authorWhelan, J. & authorIben, I., Jr. titleBinaries and Supernovae of Type I. journal volume186, pages1007–1014 (year1973).Webbink84 authorWebbink, R. F. titleDouble white dwarfs as progenitors of R Coronae Borealis stars and Type I supernovae. journal volume277, pages355–360 (year1984).RuizLapuente04 authorRuiz-Lapuente, P. et al. titleThe binary progenitor of Tycho Brahe's 1572 supernova. journal volume431, pages1069–1072 (year2004). astro-ph/0410673.Zhou16 authorZhou, P. et al. titleExpanding Molecular Bubble Surrounding Tycho's Supernova Remnant (SN 1572) Observed with the IRAM 30 m Telescope: Evidence for a Single-degenerate Progenitor. journal volume826, pages34 (year2016). 1605.01284.Kerzendorf09 authorKerzendorf, W. E. et al. titleSubaru High-Resolution Spectroscopy of Star G in the Tycho Supernova Remnant. journal volume701, pages1665–1672 (year2009). 0906.0982.MMN14 authorMaoz, D., authorMannucci, F. & authorNelemans, G. titleObservational Clues to the Progenitors of Type Ia Supernovae. journal volume52, pages107–170 (year2014). 1312.0628.Bedin14 authorBedin, L. R. et al. titleImproved Hubble Space Telescope proper motions for Tycho-G and other stars in the remnant of Tycho's Supernova 1572. journal volume439, pages354–371 (year2014). 1312.5640.Williams16 authorWilliams, B. J. et al. titleAn X-Ray and Radio Study of the Varying Expansion Velocities in Tycho's Supernova Remnant. journal volume823, pagesL32 (year2016). 1604.01779.GH09 authorGonzález Hernández, J. I. et al. titleThe Chemical Abundances of Tycho G in Supernova Remnant 1572. journal volume691, pages1–15 (year2009). 0809.0601.Kerzendorf13 authorKerzendorf, W. E. et al. 
titleA High-resolution Spectroscopic Search for the Remaining Donor for Tycho's Supernova. journal volume774, pages99 (year2013). 1210.2713.Nielsen12 authorNielsen, M. T. B., authorVoss, R. & authorNelemans, G. titleUpper limits on bolometric luminosities of 10 Type Ia supernova progenitors from Chandra observations. journal volume426, pages2668–2676 (year2012). 1109.6605.Graham15 authorGraham, M. L. et al. titleConstraining the progenitor companion of the nearby Type Ia SN 2011fe with a nebular spectrum at +981 d. journal volume454, pages1948–1957 (year2015). 1502.00646.Olling15 authorOlling, R. P. et al. titleNo signature of ejecta interaction with a stellar companion in three type Ia supernovae. journal volume521, pages332–335 (year2015).GB10 authorGilfanov, M. & authorBogdán, Á. titleAn upper limit on the contribution of accreting white dwarfs to the typeIa supernova rate. journal volume463, pages924–925 (year2010). 1002.3359.DiStefano10 authorDi Stefano, R. titleThe Progenitors of Type Ia Supernovae. I. Are they Supersoft Sources? journal volume712, pages728–733 (year2010). 0912.0757.WG13 authorWoods, T. E. & authorGilfanov, M. titleHe II recombination lines as a test of the nature of SN Ia progenitors in elliptical galaxies. journal volume432, pages1640–1650 (year2013). 1302.5911.Johansson16 authorJohansson, J. et al. titleDiffuse gas in retired galaxies: nebular emission templates and constraints on the sources of ionization. journal volume461, pages4505–4516 (year2016). 1607.02243.Rappaport94 authorRappaport, S., authorChiang, E., authorKallman, T. & authorMalina, R. titleIonization nebulae surrounding supersoft X-ray sources. journal volume431, pages237–246 (year1994).WG16 authorWoods, T. E. & authorGilfanov, M. titleWhere are all of the nebulae ionized by supersoft X-ray sources? journal volume455, pages1770–1781 (year2016). 1510.05768.Prialnik95 authorPrialnik, D. & authorKovetz, A. titleAn extended grid of multicycle nova evolution models. journal volume445, pages789–810 (year1995).Nomoto07 authorNomoto, K., authorSaio, H., authorKato, M. & authorHachisu, I. titleThermal Stability of White Dwarfs Accreting Hydrogen-rich Matter and Progenitors of Type Ia Supernovae. journal volume663, pages1269–1276 (year2007). astro-ph/0603351.Wolf13 authorWolf, W. M., authorBildsten, L., authorBrooks, J. & authorPaxton, B. titleHydrogen Burning on Accreting White Dwarfs: Stability, Recurrent Novae, and the Post-nova Supersoft Phase. journal volume777, pages136 (year2013). 1309.3375.Thoroughgood01 authorThoroughgood, T. D., authorDhillon, V. S., authorLittlefair, S. P., authorMarsh, T. R. & authorSmith, D. A. titleThe mass of the white dwarf in the recurrent nova U Scorpii. journal volume327, pages1323–1333 (year2001). astro-ph/0107477.Darnley15 authorDarnley, M. J. et al. titleA remarkable recurrent nova in M31: Discovery and optical/UV observations of the predicted 2014 eruption. journal volume580, pagesA45 (year2015). 1506.04202.SS73 authorShakura, N. I. & authorSunyaev, R. A. titleBlack holes in binary systems. Observational appearance. journal volume24, pages337–355 (year1973).BKS70 authorBisnovatyi-Kogan, G. S. & authorSyunyaev, R. A. titleThe Evolution of Massive Stars and Strömgren Zones. journal volume47, pages441 (year1970).Graur14 authorGraur, O., authorMaoz, D. & authorShara, M. M. titleProgenitor constraints on the Type-Ia supernova SN2011fe from pre-explosion Hubble Space Telescope He II narrow-band observations. journal volume442, pagesL28–L32 (year2014). 
1403.1878.CKR80 authorChevalier, R. A., authorKirshner, R. P. & authorRaymond, J. C. titleThe optical emission from a fast shock wave with application to supernova remnants. journal volume235, pages186–195 (year1980).Ghavamian00 authorGhavamian, P., authorRaymond, J., authorHartigan, P. & authorBlair, W. P. titleEvidence for Shock Precursors in Tycho's Supernova Remnant. journal volume535, pages266–274 (year2000).Ghavamian01 authorGhavamian, P., authorRaymond, J., authorSmith, R. C. & authorHartigan, P. titleBalmer-dominated Spectra of Nonradiative Shocks in the Cygnus Loop, RCW 86, and Tycho Supernova Remnants. journal volume547, pages995–1009 (year2001). astro-ph/0010496.CR78 authorChevalier, R. A. & authorRaymond, J. C. titleOptical emission from a fast shock wave - The remnants of Tycho's supernova and SN 1006. journal volume225, pagesL27–L30 (year1978).Ghavamian03 authorGhavamian, P., authorRakowski, C. E., authorHughes, J. P. & authorWilliams, T. B. titleThe Physics of Supernova Blast Waves. I. Kinematics of DEM L71 in the Large Magellanic Cloud. journal volume590, pages833–845 (year2003). astro-ph/0303091.Vink12 authorVink, J. titleSupernova remnants: the X-ray perspective. journal volume20, pages49 (year2012). 1112.0576.Yamaguchi14 authorYamaguchi, H. et al. titleDiscriminating the Progenitor Type of Supernova Remnants with Iron K-shell Emission. journal volume785, pagesL27 (year2014). 1403.5154.Badenes07 authorBadenes, C., authorHughes, J. P., authorBravo, E. & authorLanger, N. titleAre the Models for Type Ia Supernova Progenitors Consistent with the Properties of Supernova Remnants? journal volume662, pages472–486 (year2007). astro-ph/0703321.PB17 authorPatnaude, D. & authorBadenes, C. titleSupernova Remnants as Clues to Their Progenitors. journalArXiv e-prints(year2017). 1702.03228.Williams13 authorWilliams, B. J. et al. titleAzimuthal Density Variations around the Rim of Tycho's Supernova Remnant. journal volume770, pages129 (year2013). 1305.0567.CR96 authorChiang, E. & authorRappaport, S. titleTime-dependent Calculations of Ionization Nebulae Surrounding Supersoft X-Ray Sources. journal volume469, pages255 (year1996).Greiner00 authorGreiner, J. titleCatalog of supersoft X-ray sources. journalNew Astronomy volume5, pages137–141 (year2000).HKN96 authorHachisu, I., authorKato, M. & authorNomoto, K. titleA New Model for Progenitor Systems of Type IA Supernovae. journal volume470, pagesL97 (year1996).NG15 authorNielsen, M. T. B. & authorGilfanov, M. titleAttenuation of supersoft X-ray sources by circumstellar material. journal volume453, pages2927–2936 (year2015). 1507.04547.Yaron05 authorYaron, O., authorPrialnik, D., authorShara, M. M. & authorKovetz, A. titleAn Extended Grid of Nova Models. II. The Parameter Space of Nova Outbursts. journal volume623, pages398–410 (year2005). astro-ph/0503143.Denissenkov17 authorDenissenkov, P. A. et al. titlei-process Nucleosynthesis and Mass Retention Efficiency in He-shell Flash Evolution of Rapidly Accreting White Dwarfs. journal volume834, pagesL10 (year2017). 1610.08541.Pakmor13 authorPakmor, R., authorKromer, M., authorTaubenberger, S. & authorSpringel, V. titleHelium-ignited Violent Mergers as a Unified Model for Normal and Rapidly Declining Type Ia Supernovae. journal volume770, pagesL8 (year2013). 1302.2913.Bulla16 authorBulla, M. et al. titleType Ia supernovae from violent mergers of carbon-oxygen white dwarfs: polarization signatures. journal volume455, pages1060–1070 (year2016). 1510.04128.Williams17 authorWilliams, B. J. et al. 
titleThe Three-dimensional Expansion of the Ejecta from Tycho's Supernova Remnant. journal volume842, pages28 (year2017). 1705.05405.Shen12 authorShen, K. J., authorBildsten, L., authorKasen, D. & authorQuataert, E. titleThe Long-term Evolution of Double White Dwarf Mergers. journal volume748, pages35 (year2012). 1108.4036.Schwab16 authorSchwab, J., authorQuataert, E. & authorKasen, D. titleThe evolution and fate of super-Chandrasekhar mass white dwarf merger remnants. journal volume463, pages3461–3475 (year2016). 1606.02300.Justham11 authorJustham, S. titleSingle-degenerate Type Ia Supernovae Without Hydrogen Contamination. journal volume730, pagesL34 (year2011). 1102.4913.DiStefano11 authorDi Stefano, R., authorVoss, R. & authorClaeys, J. S. W. titleSpin-up/Spin-down Models for Type Ia Supernovae. journal volume738, pagesL1 (year2011). 1102.4342.Starrfield04 authorStarrfield, S. et al. titleSurface Hydrogen-burning Modeling of Supersoft X-Ray Binaries: Are They Type Ia Supernova Progenitors? journal volume612, pagesL53–L56 (year2004). astro-ph/0407466.Ness13 authorNess, J.-U. et al. titleObscuration effects in super-soft-source X-ray spectra. journal volume559, pagesA50 (year2013). 1309.2604.Benvenuto15 authorBenvenuto, O. G., authorPanei, J. A., authorNomoto, K., authorKitamura, H. & authorHachisu, I. titleFinal Evolution and Delayed Explosions of Spinning White Dwarfs in Single Degenerate Models for Type Ia Supernovae. journal volume809, pagesL6 (year2015). 1508.01921.Cumming96 authorCumming, R. J., authorLundqvist, P., authorSmith, L. J., authorPettini, M. & authorKing, D. L. titleCircumstellar Hα from SN 1994D and future Type IA supernovae: an observational test of progenitor models. journal volume283, pages1355–1360 (year1996). astro-ph/9610020.PerezTorres14 authorPérez-Torres, M. A. et al. titleConstraints on the Progenitor System and the Environs of SN 2014J from Deep Radio Observations. journal volume792, pages38 (year2014). 1405.4702.Chomiuk16 authorChomiuk, L. et al. titleA Deep Search for Prompt Radio Emission from Thermonuclear Supernovae with the Very Large Array. journal volume821, pages119 (year2016). 1510.07662.Margutti12 authorMargutti, R. et al. titleInverse Compton X-Ray Emission from Supernovae with Compact Progenitors: Application to SN2011fe. journal volume751, pages134 (year2012). 1202.0741.Margutti14 authorMargutti, R. et al. titleNo X-Rays from the Very Nearby Type Ia SN 2014J: Constraints on Its Environment. journal volume790, pages52 (year2014). 1405.1488.Kundu17 authorKundu, E., authorLundqvist, P., authorPérez-Torres, M. A., authorHerrero-Illana, R. & authorAlberdi, A. titleConstraining Magnetic Field Amplification in SN Shocks Using Radio Observations of SNe 2011fe and 2014J. journal volume842, pages17 (year2017). 1705.04204.§ CORRESPONDING AUTHOR Correspondence to Tyrone E. Woods ([email protected]).§ ACKNOWLEDGEMENTS The work of P. G. was supported by grants HST-GO-12545.08 and HST-GO-14359.011. C. B. acknowledges support from grants NASA ADAP NNX15AM03G S01 and NSF/AST-1412980. M. G. acknowledges partial support by Russian Scientific Foundation (RNF) project 14-22-00271.§ CONTRIBUTIONS T. E.W. lead the Cloudy simulations and analysis of their results, and was the primary author of the main text and methods. P.G. wrote the supplementary section of the paper, and wrote portions of the main manuscript summarizing the constraints on preshock conditions from the Balmer-dominated shocks. C.B. 
first suggested this project during the conference ‘Supernova Remnants: An Odyssey In Space After Stellar Death’ in Crete, and contributed to the text and the interpretation of the analysis. M.G. contributed to defining the simulations setup, analysis and interpretation of Cloudy results and to the writing of the manuscript. § METHODS§.§ Photoionization models: Given that we are considering relatively high-temperature (>10^5K) ionizing sources, with correspondingly broader transition regions between ionized and neutral media than given analytically by the classical Strömgren boundary, we model the size of any putative photoionized region using the plasma simulation and spectral synthesis code Cloudy v13.03.<cit.>Cloudy determines the gas temperature, ionization state, chemical structure, and emission spectrum of a photoionized nebula by solving the equations of statistical and thermal equilibrium in 1-D. The code relies on a number of critical databases for its calculations; notably tables of recombination coefficients,<cit.> and ionic emission data taken from the CHIANTI collaboration database version 7.0.<cit.>We assume spherical symmetry in our models with a surrounding ISM having density n_ISM =1 cm^-3, unless otherwise stated.Lower densities would only result in larger nebulae for fixed source temperature and luminosity, and n_ISM ≈ 1 cm^-3 is the approximate upper bound inferred for the pre-shock intercloud ISM in the vicinity of Tycho's SN remnant. We assume solar metallicity for the ISM in the vicinity of Tycho; the default solar values as defined in Cloudy are taken from Grevesse & Sauval (1998),<cit.> with updates to the oxygen and carbon abundances<cit.> as well as those of nitrogen, neon, magnesium, silicon, and iron.<cit.>Modest variations in the metallicity will not significantly effect the radius of any photoionized nebula. Given that the age of Tycho's supernova remnant is << τ _rec, we assume steady-state models throughout.We approximate the spectra of nuclear-burning white dwarfs as blackbodies. This provides a reasonable fit to their ionizing emission,<cit.> with significant deviations only arising far into the Wien tail. For white dwarf accretion disks, we use the ezDiskbb<cit.> model from the X-ray spectral modelling software package xspec<cit.> to produce Shakura-Sunyaev<cit.> disk spectra for any desired white dwarf mass and accretion rate. This sets the shape of the spectrum in cloudy. We normalize the luminosity of the accretion disk to the rate of gravitational potential energy release: L = 1/2GM_WDṀ/R_WD for a given white dwarf mass (M_WD), radius (R_WD), and accretion rate (Ṁ). For the white dwarf radius, we approximate numerical results for a zero-temperature white dwarf<cit.> radius with the relation<cit.>: R_WD= 0.0126(M_WD/M_⊙)^-1/3(1.0 - (M_WD/1.456M_⊙)^4/3)^1/2R_⊙ We conservatively adopt the radius of an approximately 1.35 M_⊙ carbon-oxygen white dwarf in producing our limit on accreting objects. Any more massive white dwarf would have a smaller radius and thus larger disk luminosity. Note that we do not include emission from the boundary layer. For slowly rotating white dwarfs, the luminosity in the boundary layer should be comparable to that of the disk. If the boundary layer is optically-thick, this should roughly double the total ionizing luminosity. 
This is expected on theoretical grounds.However, given the difficulty in matching this to observed cataclysmic variables, we do not include an optically-thick boundary layer in our estimates.Data availability: The photoionization and spectral synthesis code Cloudy used in this work is open-source, and may be downloaded from <www.nublado.org>. The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.§ BIBLIOGRAPHYnaturemag 10 listctr62 url<#>1urlprefixURLFerland13 authorFerland, G. J. et al. titleThe 2013 Release of Cloudy. journal volume49, pages137–163 (year2013). 1302.4485.Badnell03 authorBadnell, N. R. et al. titleDielectronic recombination data for dynamic finite-density plasmas. I. Goals and methodology. journal volume406, pages1151–1165 (year2003). astro-ph/0304273.Badnell06 authorBadnell, N. R. titleRadiative Recombination Data for Modeling Dynamic Finite-Density Plasmas. journal volume167, pages334–342 (year2006). astro-ph/0604144.Dere97 authorDere, K. P., authorLandi, E., authorMason, H. E., authorMonsignori Fossi, B. C. & authorYoung, P. R. titleCHIANTI - an atomic database for emission lines. journal volume125 (year1997).Landi12 authorLandi, E., authorDel Zanna, G., authorYoung, P. R., authorDere, K. P. & authorMason, H. E. titleCHIANTI—An Atomic Database for Emission Lines. XII. Version 7 of the Database. journal volume744, pages99 (year2012).GS98 authorGrevesse, N. & authorSauval, A. J. titleStandard Solar Composition. journal volume85, pages161–174 (year1998).AP01 authorAllende Prieto, C., authorLambert, D. L. & authorAsplund, M. titleThe Forbidden Abundance of Oxygen in the Sun. journal volume556, pagesL63–L66 (year2001). astro-ph/0106360.AP02 authorAllende Prieto, C., authorLambert, D. L. & authorAsplund, M. titleA Reappraisal of the Solar Photospheric C/O Ratio. journal volume573, pagesL137–L140 (year2002). astro-ph/0206089.Holweger01 authorHolweger, H. titlePhotospheric abundances: Problems, updates, implications. In editorWimmer-Schweingruber, R. F. (ed.) booktitleJoint SOHO/ACE workshop “Solar and Galactic Composition”, vol. volume598 of seriesAmerican Institute of Physics Conference Series, pages23–30 (year2001). astro-ph/0107426.WG14 authorWoods, T. E. & authorGilfanov, M. titleEmission-line diagnostics to constrain high-temperature populations in early-type galaxies. journal volume439, pages2351–2363 (year2014). 1311.1693.Zimmerman04 authorZimmerman, E. R., authorNarayan, R., authorMcClintock, J. E. & authorMiller, J. M. titleMultitemperature Blackbody Spectra of Thin Accretion Disks with and without a Zero-Torque Inner Boundary Condition. journal volume618, pages832–844 (year2005). astro-ph/0408209.XSPEC authorArnaud, K. A. titleXSPEC: The First Ten Years. In editorJacoby, G. H. & editorBarnes, J. (eds.) booktitleAstronomical Data Analysis Software and Systems V, vol. volume101 of seriesAstronomical Society of the Pacific Conference Series, pages17 (year1996).Panei00 authorPanei, J. A., authorAlthaus, L. G. & authorBenvenuto, O. G. titleMass-radius relations for white dwarf stars of different internal compositions. journal volume353, pages970–977 (year2000). 
astro-ph/9909499.§ SUPPLEMENTARY INFORMATION §.§ The interstellar environment surrounding Tycho's supernovaThe optically emitting shocks in Tycho's supernova remnant are located along its eastern and northeastern edges, and appear as Balmer-dominated filaments.At these locations, the forward shock ispropagating into a strong density gradient with a density nearly 10 times larger than the rest of the (non-optically emitting) SNR.<cit.>The brightest of the optical filamentsin Tycho's SNR is known as“Knot g”.<cit.>A thick shell of diffuse optical emission is observed extending ahead of the Balmer-dominated shocks.<cit.>Models of the expansion of the remnant<cit.> and both X-ray and infrared observations,<cit.> combined with the mass swept up by the forward shock, indicate that for most of itsexistence Tycho's SNR must have propagated through a low density (≲ 0.5cm^-3) environment.This is typical of the warm ionized/warm neutral interstellar medium.The diffuse emission extending ahead of the Balmer filaments in Tycho has been identified as a photoionization precursor produced by He II 304 Åphotons (He II Lyman α) from the Balmer-dominated shocks.<cit.> The He II emission is collisionally excited behind the Balmer-dominated shocks, which moveat v_sh ∼ 2000 km/s. At 40.8 eV per photon, the He II 304 Åradiation field from these shocks is both energetic and dilute, causing the precursor to remain simultaneously under-ionized (as evidenced by a low [O III]/Hβ ratio, ∼ 1)<cit.> and hot (Hα line width∼30 km/s, measured from high resolution spectroscopy).<cit.>Due to its low density, the precursor gas fails to achieve ionization equilibrium before being overrun by the forward shock.<cit.> Further evidence of a high neutral fraction was found by Ghavamian et al. (2001), whomodeled the ratio of broad to narrow Balmer line emission from Knot g and found that the observed ratio of broad to narrow flux in Hα and Hβ required an initial ionized hydrogen fraction f_H II < 0.2 (f_N > 0.8).<cit.>The spatial extent of the photoionization precursor is expected to be on the order of one mean free path for photoionizationof hydrogen by He II λ304 Åphotons, or ℓ_mfp ∼ (n_HIσ_i(304))^-1.Williams et al. (2013)<cit.> estimated the postshock density along the full circumference of Tycho's SNR by modeling its dust emission from mid-infrared imagery with the Spitzer Space Telescope.Assuming a factor of 4 compression by the Balmer-dominated shocks, their estimates yield a preshock density in the range 1.0 - 5.0 cm^-3 for the Balmer filaments (recall that these larger values are consistent with a density gradient along the eastern side of Tycho, and are not representative of the much lower mean preshock density of 0.5-1 cm^-3 averaged along the rim).Combining this estimate with the neutral fractions from above, this yields a spatial scale ∼ 0.3-1.4 pc for the photoionization precursor.This corresponds to a size ∼0.3^' - 1.6^' for an assumed remnant distance of 3 kpc.This is in good agreement with the observed scale length of the diffuse Hα emission.<cit.> Together with the density constraints described from evolutionary models in the text, conditions in the interstellar medium surrounding Tycho's SNR have been well constrained.§.§ Association (or lack thereof) between Tycho's supernova and molecular clouds In their CO line maps of the environs of Tycho's SNR, Zhou et al. 
(2016)<cit.> found enhanced ^12CO J = 2-1 emission relativeto ^12CO J = 1-0, indicating a molecular structure at V_LSR = -61 km/s and possible line broadening from -64 km/s to -60 km/s. They suggested that this structure surrounded Tycho's SNR, and may have been the relic of a bubble excavated by winds from a single-degenerate progenitor. However, in their high resolution spectra of Tycho's SNR, Lee et al. (2007)<cit.> found the Hα emission from the gas ahead of the Balmer-dominated shocks was centered around a completely different radial velocity, V_LSR = -35.8±0.6 km/s.As mentioned above, Ghavamian et al. (2000) and Lee et al. (2007) determined that this gas was heated by a photoionization precursor, thereby placing it at the same kinematic distanceas Tycho's SNR and by extension a completely different kinematic distance than the CO clouds observed by Zhou et al. (2016).This leaves geometric projection as the most likely explanation for the apparent association between Tycho's SNR and any dense molecular material. Note that Tian & Leahy (2011)<cit.> also found no compelling evidence of interaction betweenTycho and dense molecular cloud material, based on their more recent H I observations of the SNR.Although it is clearthat the eastern side of Tycho is encountering a higher density, more neutral gas than the western side, there is no compelling evidence that the overdense regionis a molecular cloud, either from H I observations<cit.> or CO observations.<cit.>As a final argument, we note that Lee et al. (2007) measured an Hα centroid of V_LSR = -30.3±0.2 km/s for the narrow component line in Knot g,very similar to that of the photoionization precursor. The centroid for Knot g measured byGhavamian et al. (2000) was -45.6±1.3 km/s (note the earlier published value of -53.9±1.3 km/swas incorrect; see the Erratum to that paper). Although this value differs from that of Lee et al. (2007), we note that the emission measured by Ghavamian et al. (2000) was summed over a slit oriented parallel to the Knot g filament, whereas Lee et al. (2007) obtained their spectrum from a slit oriented perpendicular to the filament, from a localized segment. Considering the noticeable variations in shock viewing angle along the Knot g filament (e.g., see the HST images)<cit.> and that the narrow componentHα line can acquire a bulk Doppler shift up to 10 km/s from the back pressure of cosmic rays immediately ahead of the shock (Wagner et al. 2009),<cit.> a cumulative offset ∼15 km/s is plausible between the narrow component centroids by Lee et al. (2007) and Ghavamian et al. (2000). § BIBLIOGRAPHYnaturemag 10 listctr74 url<#>1urlprefixURLKvdB78 authorKamper, K. W. & authorvan den Bergh, S. titleExpansion of the optical remnant of Tycho's supernova. journal volume224, pages851–853 (year1978).Lee07 authorLee, J.-J. et al. titleSubaru HDS Observations of a Balmer-dominated Shock in Tycho's Supernova Remnant. journal volume659, pagesL133–L136 (year2007). 0704.1094.TL11 authorTian, W. W. & authorLeahy, D. A. titleTycho SN 1572: A Naked Ia Supernova Remnant Without an Associated Ambient Molecular Cloud. journal volume729, pagesL15 (year2011). 1012.5673.Reynoso99 authorReynoso, E. M., authorVelázquez, P. F., authorDubner, G. M. & authorGoss, W. M. titleThe Environs of Tycho's Supernova Remnant Explored through the H I 21 Centimeter Line. journal volume117, pages1827–1833 (year1999).Lee10 authorLee, J.-J. et al. 
titleResolved Shock Structure of the Balmer-dominated Filaments in Tycho's Supernova Remnant: Cosmic-ray Precursor? journal volume715, pagesL146–L149 (year2010). 1005.3296.Wagner09 authorWagner, A. Y., authorLee, J.-J., authorRaymond, J. C., authorHartquist, T. W. & authorFalle, S. A. E. G. titleA Cosmic-Ray Precursor Model for a Balmer-Dominated Shock in Tycho's Supernova Remnant. journal volume690, pages1412–1423 (year2009). 0809.2504.
http://arxiv.org/abs/1709.09190v1
{ "authors": [ "T. E. Woods", "P. Ghavamian", "C. Badenes", "M. Gilfanov" ], "categories": [ "astro-ph.SR", "astro-ph.HE" ], "primary_category": "astro-ph.SR", "published": "20170926180110", "title": "No hot and luminous progenitor for Tycho's supernova" }
Convergence analysis of upwind type schemes for the aggregation equation with pointy potential F. Delarue Laboratoire J.-A. Dieudonné, UMR CNRS 7351, Univ. Nice, Parc Valrose, 06108 Nice Cedex 02, France. Email: , F. Lagoutière Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 43 blvd. du 11 novembre 1918, F-69622 Villeurbanne cedex, France, Email: , N. Vauchelet Université Paris 13, Sorbonne Paris Cité, CNRS UMR 7539, Laboratoire Analyse Géométrie et Applications, 93430 Villetaneuse, France, Email: December 30, 2023
==========================================================================================================
§ INTRODUCTION
In the past decade, the problem concerning the intrinsic angular momentum (IAM) has revived as that of an intrinsic magnetic moment (IMM) in a spin-triplet chiral superconductor Sr_2RuO_4, <cit.> in which the orbital part of the superconducting gap is identified as
Δ_k=Δ[sin(k_xa)+ isin(k_ya)],
where a is the lattice constant in the two-dimensional ab-plane <cit.>. This state is consistent with the temperature dependence of the specific heat (under the magnetic field), <cit.> and theoretical investigations that suggest the importance of short-range ferromagnetic correlations among quasiparticles <cit.>. This chiral state [Eq. (<ref>)] breaks the time-reversal symmetry (TRS), which is consistent with the report of a μSR measurement of a tiny but finite spontaneous magnetic field (∼ 0.5G) around μ^+ without an external magnetic field <cit.>. However, the size of this spontaneous magnetic field is far smaller than that expected from the IMM in the bulk system with the surface, as discussed below.
If the IAM L_in is of the order of N_sħ/2 and the gyro-magnetic ratio is given by (-e/2m), with e(>0) being the elementary charge, as in the classical case, the intrinsic magnetic moment (IMM) density M_in is estimated as M_in ≃ -(n_s/2)μ_0(m/m_band^occ)μ_B, where n_s≡ N_s/V, μ_0=4π× 10^-7H·m^-1 is the vacuum permeability, μ_B=eħ/2m is the Bohr magneton, and m_band^occ is the harmonic average of the band mass of electrons over occupied states in the Brillouin zone. <cit.> Then, the magnetic flux density B_in, without the external magnetic field H, is given by M_in, because the relation B=M+μ_0H holds by definition. <cit.>
The electron number density n of the γ-band in Sr_2RuO_4, which is an electron-like band, is roughly estimated as n=1/(abc), where a=b=3.9× 10^-10m and c=(12.7/2)× 10^-10m are the edge lengths of the primitive cell of Sr_2RuO_4 along the a (b) and c directions, respectively <cit.>. The magnetization density M_in is given by the relation
M_in=-μ_0(e/2m_band^occ)L_in= -(ħ/2)nμ_0(e/2m_band^occ),
where m_band≃ 2.9 m is the effective mass of the γ-band of Sr_2RuO_4 <cit.>. Therefore, the intrinsic magnetic flux density B_in is estimated as
B_in=-(10^-30m^3/abc)(m/m_band^occ)× 5.8T ≃ -2.1× 10^-2T=-2.1× 10^2G.
This value is larger than the “observed” lower critical field B^obs_c1=5.0×10^-3T of Sr_2RuO_4 <cit.>.
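As a numerical restatement of the single-band estimate just made (our own cross-check, not part of the original text; the physical constants are standard values), the following Python sketch reproduces the quoted B_in ≃ -2×10^-2 T before the multi-band cancellation discussed next is taken into account.

# Minimal sketch: single-gamma-band estimate of the intrinsic magnetic flux density
# B_in ~ M_in in Sr2RuO4, following the expressions above.
import math

MU0   = 4.0e-7 * math.pi        # vacuum permeability, H/m
E     = 1.602e-19               # elementary charge, C
HBAR  = 1.055e-34               # reduced Planck constant, J s
M_E   = 9.109e-31               # electron mass, kg
MU_B  = E * HBAR / (2.0 * M_E)  # Bohr magneton, J/T

a = b = 3.9e-10                 # m, in-plane lattice constants
c = 12.7e-10 / 2.0              # m, primitive-cell edge along c (half of 12.7 Angstrom)
n = 1.0 / (a * b * c)           # gamma-band electron density, ~1e28 m^-3

mass_ratio = 2.9                # m_band^occ / m for the gamma band
M_in = -(n / 2.0) * MU0 * (1.0 / mass_ratio) * MU_B    # magnetization density, tesla

print(f"n = {n:.2e} m^-3,  B_in ~ M_in = {M_in:.2e} T = {M_in * 1.0e4:.0f} G")
# Output is roughly -2.1e-2 T, i.e. about -2e2 G, matching the estimate in the text.

This is the single-band figure; the paragraph that follows explains why the actual value is expected to be considerably smaller once the α- and β-bands are included.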
However, since Sr_2RuO_4 has other two bands, hole-like α-band andelectron-like β-band, a considerable cancellation in the IMM is expected amongelectron-like β- and γ-band and hole-like α-band.Indeed,the size of B_ in decreases to |B_ in| 5.0× 10^-3T <cit.> which is comparable to the “observed” lower critical fieldB^ obs_ c1=5.0×10^-3T. Therefore, the actual B_ in inSr_2RuO_4 is expected to be almost screened out by the Meissner effect.Then, it is reasonable to consider that the spontaneous magnetic field (∼ 0.5G) measured byμSR <cit.> is not related to the bulk IMM but to other physical mechanism. One of possible ideas for this is that the positive charge of μ^+ attracts electrons onthe Ru site adjacent to stopping μ^+, which acts as a non-magnetic impurity potentialdestroying the superconductivity gap given by Eq. (<ref>) there, <cit.>resulting in the local electric current surrounding μ^+.Namely, the cancellation of relative rotation of Cooper pairs becomes incomplete there, giving riseto a circulating current around the position of the impurity, i.e., the stopping site of μ^+,and local magnetic flux density (magnetic field) B_ loc which causes the μ spinrotation (μSR). However, it is a nontrivial problem whether this induced B_ loccan be smaller than the B_ ininduced by the surface current of the system if the impurity potential is strong enough to suppressthe superconducting gap adjacent to the impurity, while theB_ loc is expected to be smaller thantheB_ in if the impurity potential is moderate comparable to the pairing interaction. The purpose of the present paper is to clarify this problem by solving theBogoliubov-de Gennes equation on the two-dimensional square lattice modelwith the inter-site attractive interaction causing the chiral superconductivity given by Eq. (<ref>) andthe effect of μ^+ on the electrons at surrounding sites. Organization of this paper is as follows.In Sect. 2, we introduce the model on the square latticewith attractive interaction between nearest neighbor sites and the effect of the repulsive impurity potentialat the sites adjacent to stopping μ^+.In Sect. 3, we discuss a formalism for explicit calculations. In Sect. 4, we present the results of magnetic flux density at μ^+ site andthe pattern of electric current induced around the μ^+ site.Finally, in Sect. 5, the relation between the numerical results and the spontaneous magnetic field≃ 0.5G observed by μSR in Sr_2RuO_4 is discussed, and perspective of the presentresults is discussed in relation to the fact that spontaneous magnetic field is observed in a series ofsuperconductors with crystal structures without inversion center.§ EFFECT OF Μ^+ IN CHIRAL SUPERCONDUCTOR ON SQUARE LATTICE §.§ Model HamiltonianIn order to study the effect of a μ^+ stopping in the chiral superconductor on two-dimensional lattice,a model of Sr_2RuO_4, we start with the following Hamiltonian H=-μ∑_iσc^†_iσc_iσ -t∑_⟨ i,j⟩σc^†_iσc_jσ-V 2∑_⟨ i,j⟩σ c^†_jσc^†_iσ̅c_iσ̅c_jσ +U∑_σc^†_ Oσc_ Oσ,where μ, t, and V are the chemical potential, the transfer integral between nearest neighbor(n.n.) sites of the square lattice, and the attractive interaction between electrons at n.n. sites,respectively, and c^†_iσ (c_iσ) is the creation(annihilation) operator of electron at i-th site with spin component σ (=↑or ↓).The symbol ⟨ i,j⟩ indicates the summation is taken over the n.n. sites. The last term in Eq. 
(<ref>) represents the repulsive impurity potential U atthe origin of the lattice (i= O) which simulatesthe effect of electrons attracted on Ru site near the μ^+ stopping at interstitial position inSr_2RuO_4, as shown in Fig. <ref>(a). Here, we have simplified the effect of μ^+as Eq. (<ref>) in which the position of mu^+ is shifted on the Ru site, as shown in Fig. <ref>(b), for the sake of simplicity of numerical calculations. Hereafter, we consider the spin triplet paring with S_z=0, and introduce a superconducting gapΔ_ij in the spin-triplet manifold as Δ_ij=V 2⟨ c_i↑c_j↓+ c_i↓c_j↑⟩, where ⟨⋯⟩ means the average by the mean-field HamiltonianH_ mf given as H_ mf=-μ∑_iσc^†_iσc_iσ -t∑_⟨ i,j⟩σc^†_iσc_jσ+∑_⟨ i,j⟩{[Δ_ij (c^†_j↑c^†_i↓+c^†_j↓c^†_i↑) + h.c.]-|Δ_ij|^2/V}+U∑_σc^†_ Oσc_ Oσ.Here the gap Δ_ij depends on lattice sites i and j in general, and its dependenceis determined self-consistently by solving the Bogoliubov-de Gennes equation (of lattice version)together with the relation (<ref>) <cit.>. The gap Δ_ij is odd with respect to the interchange of i⇌ j: Δ_ij=-Δ_ji,which manifests the odd-parity pairing. Note that, in the case of uniform system without boundary, the stablest gap of those givenby Eq. (<ref>) is expressed in a wave-vectorrepresentation as Eq. (<ref>).§.§ Magnetic field B_z at μ^+ site in band pictureSimilar approximation is adopted for the integral along the y-direction. As shown in Ref. Tsuruta,the magnetization operator M̂_z due to orbital motion is given by M̂_z=μ_0(-e) 2m_ b∑_i( r_i× p_i)_z,where the “momentum" operator p_i at the i-th site is defined by p_xi≡- i 2ħ a∑_σ[ (c^†_(i_x+1,i_y)σ-c^†_(i_x-1,i_y)σ)c_(i_x,i_y)σ..-c^†_(i_x,i_y)σ(c_(i_x+1,i_y)σ-c_(i_x-1,i_y)σ)] p_yi≡- i 2ħ a∑_σ[ (c^†_(i_x,i_y+1)σ-c^†_(i_x,i_y-1)σ)c_(i_x,i_y)σ..-c^†_(i_x,i_y)σ(c_(i_x,i_y+1)σ-c_(i_x,i_y-1)σ)].The relation (<ref>) is a band-version of conventional form with gyro-magnetic ratio(-e/2m_ b), wherem_ b≡ħ^2/2ta^2 is the band mass atΓ-point. The above definition of m_ b corresponds to the free electron likedispersion of tight binding dispersion around the Γ-point, (k_x,k_y)=(0,0). Namely, ϵ_k=-2t(cos k_xa+cos k_ya) ≃ -4t+ta^2(k_x^2+k_y^2)+⋯ .Corresponding to the relation (<ref>), B̂_z(0,0),the operator for the z-component of the local magnetic flux density vector at the center of the crystallattice, r_ O≡(0,0), is given by a lattice version ofthe Biot-Savart law <cit.>as follows:B̂_z(0,0)=μ_0/4π(-e) m_ b∑_i( r_i× p_i)_z/| r_i|^3.§ FORMALISM OF NUMERICAL CALCULATIONSAn explicit form of the Bogoliubov-de Gennes equation for the mean-field Hamiltonian (<ref>)with the superconducting gap of S_z=0, Eq. (<ref>), is given by <cit.>εu_i=-μ u_i-t u_j+∑_⟨ j,i⟩Δ_ijv_j+Uu_iδ_i O,εv_i=μ v_i+t v_j+∑_⟨ j,i⟩Δ_ij^*u_j+Uu_iδ_i O,where δ_ij is the Kronecker delta. By solving these equations and the superconducting gap[Eq. (<ref>)] self-consistently, the average of the spontaneous magnetic field at μ^+ site[Eq. (<ref>)] is obtained. An actual calculation isperformed as follows. Hereafter, we focus our discussion in the half-filled case. Equations(<ref>) and (<ref>) are diagonalized by means of a unitary transformation Uto give the mean-field Hamiltonian H_ mf=∑_m=1^N_ Lε_mγ^†_m↑γ_m↑ +∑_m=1^N_ L(-ε_m)γ^†_m↓γ_m↓,where N_ L is the number of lattice sites, 0≤ε_1≤ε_1…≤ε_N_ L, and the fermion operators γ describing quasiparticlesare related to the electron operators a by [c^†_1↑, ⋯, c^†_N_ L↑,c_1↓, ⋯, c_N_ L↓] =[γ^†_1↑,⋯, γ^†_N_ L↑, γ_1↓,⋯, γ_N_ L↓] U^†.Substituting Eq. (<ref>) into Eq. 
(<ref>), we obtain the self-consistent equation for the gap Δ_ij as Δ_ij = (V/2) ∑_m=1^N_L [ (U)^*_j+N_L,m (U)_i,m - (U)^*_i+N_L,m (U)_j,m ] × [1-f(ε_m)] + (V/2) ∑_m=1^N_L [ (U)^*_j+N_L,m+N_L (U)_i,m+N_L - (U)^*_i+N_L,m+N_L (U)_j,m+N_L ] f(ε_m), where U depends on the Δ_ij's and ε_m (m=1,⋯,N_L), and f(x) is the Fermi distribution function f(x) ≡ 1/(e^x/T+1). We have solved Eqs. (<ref>), (<ref>), and (<ref>) ∼ (<ref>) self-consistently using the numerical diagonalization method and obtained the gaps Δ_ij and the energy levels ε_m (m=1,⋯,N_L). Numerical calculations have been performed for square lattices of sizes N_L=20×20 and N_L=30×30 with the periodic boundary condition because we are considering the case without the effect of the boundary surface of the system. In the pure system with periodic boundary condition, the phase of the superconducting gap Δ_ij can be chosen as shown in Fig. <ref> and the Δ_i (i=1∼4) are independent of the site index i. However, in the system with an impurity, the gap functions Δ_ij do not have such a simple form and should be determined self-consistently.
§ MAGNETIC FLUX DENSITY AND CURRENT PATTERN
Figure <ref> shows the dependence of the spontaneous magnetic flux density B_z at the origin (μ^+ site) on the impurity potential U/t^* for the case that the pair interaction is given by V=4t^*, where t^* is the effective hopping of quasiparticles renormalized by the correlation effect and m/m^* is the ratio of the free electron mass to the effective mass renormalized by the correlation effect. The lattice size is taken as N_L=30×30. There exist two solutions, I and II, for which the self-consistency is reached with an accuracy of O(10^-3), depending on the value of U/t^*. At U/t^* ≲ 2.75, the solution with highest accuracy is of type I, while that at U/t^* ≳ 2.75 is of type II. These two solutions exhibit a first-order-like transition at U/t^* ≃ 2.75, shown by the vertical dashed line, and there exist metastable solutions around U/t^* ≃ 2.75. Figure <ref> shows the current pattern for U/t^*=2.7 (shown by the vertical solid line in Fig. <ref>) for type I and type II. The spontaneous magnetic field of type I is B_z>0, and that of type II is B_z<0. This is understood from the direction of the current. Namely, it is clockwise around the impurity (μ^+) for type I, so that the magnetic field points in the positive direction of the z-axis, while it is counterclockwise for type II, so that the direction of the magnetic field is opposite. The important point is that, in both cases, the magnitudes of the magnetic field induced at the μ^+ site are given by |B_z(0,0)| ∼ 10×(m/m^*) G. Since m^*/m ∼ 10 in Sr_2RuO_4, <cit.> the induced magnetic field is expected to be of the order of 1 G. This value of B_z(0,0) is of the same order as the spontaneous magnetic field observed by μSR <cit.>, explaining the extremely small magnetic field observed by the μSR measurement. Note that this spontaneous magnetic field at the μ^+ site is not screened by the Meissner effect, because it is the magnetic field in the region farther from the μ^+ site than the penetration depth λ (∼13 nm in Sr_2RuO_4 <cit.>) that is screened by the Meissner effect. Figure <ref> shows the results corresponding to Fig. <ref> for the lattice size N_L=20×20. A general tendency is fundamentally the same as that shown in Fig.
<ref> for N_L=30×30. However, the critical value U_cr/t^* giving the transition between the two types I and II shifts from U_cr/t^* ≃ 2.75 to the lower value U_cr/t^* ≃ 2.10. This may be interpreted as an interference effect between two impurities, which inevitably appears due to adopting the periodic boundary condition. In this sense, calculations with a much larger lattice size are desired, which are left for future study. Concluding this section, let us briefly discuss how the results on the size of the spontaneous magnetization depend on the strength of the intersite attractive interaction V. According to Ref. Tsuruta, the extent ξ^* of the Cooper pair in the low temperature limit (T≪ T_c) is estimated as ξ^*/a ≃ 2.6. On the other hand, ξ^*=πξ_0 of Sr_2RuO_4 is estimated as ξ^*/a ≃ 5.3×10^2. <cit.> As shown in the Appendix, the factor ∑_i (r_i× p_i)_z/|r_i|^3 in Eq. (<ref>) is estimated as ∑_i |(r_i× p_i)_z|/|r_i|^3 ≈ (2π p_0/a^2)(ln(ξ^*/a)+γ) e^-a/ξ^*, where p_0 is the size of the momentum at the nearest neighbor site around the origin (impurity site) and γ ≃ 0.577⋯ is the Euler constant. Namely, this factor has only a weak logarithmic dependence on ξ^*/a in the region ξ^* ≫ a, so that the huge ratio of ξ^* between the present model and Sr_2RuO_4, 5.3×10^2/2.6 ≃ 2.0×10^2, gives a difference of only a factor of about 5.
§ SUMMARY AND PERSPECTIVE
We have clarified the origin of the extremely small spontaneous magnetic field of B ≃ 0.5 G observed in the p-wave chiral superconductor Sr_2RuO_4 by the μSR measurement <cit.>, on the basis of a numerical analysis of the model Hamiltonian on the square lattice with nearest-neighbor attraction, including the effect of the excess electrons at the lattice point that are attracted by the μ^+ itself stopped at an interstitial position of the lattice. The crucial point was that the excess electrons attracted around the μ^+ work to destroy the chiral superconducting order around them and in turn manifest circulating currents around the μ^+. This is in marked contrast with the case without μ^+, in which the currents associated with the chiral motion of the Cooper pairs cancel each other in the bulk system except near the system boundary. <cit.> The time-reversal-symmetry breaking mechanism discussed in the present paper is also different from that caused by the effect of spin space in the equal-spin-pairing state of spin-triplet pairing, <cit.> which was discussed in relation to the excess Knight shift increase below the superconducting transition temperature observed in Sr_2RuO_4. <cit.> The model and theory developed in the present paper are possibly related to the origins of phenomena of spontaneous time-reversal-symmetry breaking with small intrinsic magnetic fields of the order of 1 G, which are systematically observed by the μSR measurement in a series of exotic superconductors, (U;Th)Be_13, <cit.> UPt_3, <cit.> (Pr;La)(Os;Ru)_4Sb_12, <cit.> LaNiC_2, <cit.> PrPt_4Ge_12, <cit.> LaNiGa_2, <cit.> Re_6Zr, <cit.> and Lu_5Rh_6Sn_18, <cit.> and so on.
§ ACKNOWLEDGMENTS
This work is supported by Grants-in-Aid for Scientific Research (No. 17K05555) from the Japan Society for the Promotion of Science. One of us (K.M.)
is grateful to Jorge Quintanilla for directing our attention to thespontaneous magnetic field observed by μSR experiments, especially ina series of superconductors with and without inversion center of the crystal, which was crucialfor us to think seriously the case of Sr_2RuO_4, and for the hospitality at the University of Kentwhere the final stage of this work has been performed through the EPSRC project hUnconventionalsuperconductors: New paradigms for new materialsh(grant references EP/P00749X/1 and EP/P007392/1).§ COOPER-PAIR SIZE DEPENDENCE OF BIOT-SAVART CONTRIBUTIONIn this appendix, we estimate the size of∑_i( r_i× p_i)_z/| r_i|^3 in Eq. (<ref>). In the case of 2-dimensionallattice with the lattice constant a, the summation with respect to sites is approximated byintegration in the 2-dimensional space as follows: ∑_i r_i× p_i/| r_i|^3≃1/a^2∫ d r r× p( r)/r^3 ∼2π/a^2∫_b^∞ dr p_0 e^-r/ξ^*/r=2π p_0/a^2∫_a/ξ^*^∞ dx e^-x/x ≈2π p_0/a^2( lnξ^*/a+γ)e^-a/ξ^*,where p_0 is the size of momentum at the nearest neighbor site of the origin which is assumed tobe the impurity (μ^+) site, and γ≃ 0.577⋯ is the Euler constant. 99 Maeno Y. Maeno, S. Kittaka, T. Nomura, S. Yonezawa, and K. Ishida:J. Phys. Soc. Jpn. 81, 011009 (2012); and references therein.MiyakeNarikiyo K. Miyake and O. Narikiyo, Phys. Rev. Lett. 83, 1423 (1999). Hoshihara K. Hoshihara and K. Miyake, J. Phys. Soc. Jpn. 74, 2679 (2005). Yoshioka Y. Yoshioka and K. Miyake, J. Phys. Soc. Jpn. 78, 074701 (2009).muSR G. M. Luke, Y. Fudamoto, K. M. Kojima, M. I. Larkin, J. Merrin, B. Nachumi, Y. J. Uemura,Y. Maeno, Z. Q. Mao, Y. Mori, H. Nakamura, and M. Sigrist, Nature 394, 558 (1998). Tsuruta A. Tsuruta, S. Yukawa, and K. Miyake, J. Phys. Soc. Jpn. 84, 094712 (2015). Purcell E. M. Purcell, Electricity and Magnetism, 2nd ed. (McGraw-Hill, New York, 1984).Mackenzie A. P. Mackenzie, S. R. Julian, A. J. Diver, G. J. McMullan, M. P. Ray, G. G. Lonzarich, Y. Maeno, S. Nishizaki, and T. Fujita: Phys. Rev. Lett. 76, 3786 (1996). Akima T. Akima, S. Nishizaki, andY. Maeno: J. Phys. Soc. Jpn. 68, 694 (1999). comment The estimation of B_ in given in Ref. Tsuruta contains an error in numerics and thecharacter of compensated metal of Sr_2RuO_4 should have been taken into account.However, this is not an essential point for the discussions below. Onishi Y. Onishi, Y. Ohashi, Y. Shingaki, and K. Miyake, J. Phys. Soc. Jpn. 65, 675 (1996). SchmittRink S. Schmitt-Rink, K. Miyake, and C. M. Varma, Phys. Rev. Lett. 57, 2575 (1986). Hirschfeld P. Hirschfeld, D. Vollhardt, and P. W'́olfle, Solid State Commun. 59, 111 (1986).deGennes2 P. G. de Gennes: Superconductivity of Metals and Alloys(W. A. Benjamin, New York and Amsterdam, 1966), Chap. 5. Mackenzie2 A. P. Mackenzie and Y. Maeno: Rev. Mod. Phys. 75, 657 (2003). Miyake K. Miyake, J. Phys. Soc. Jpn. 83, 053701 (2014). Ishida K. Ishida, M. Manago, T. Yamanaka, H. Fukazawa, Z. Q. Mao, Y. Maeno, and K. Miyake,Phys. Rev. B 92, 100502(R) (2015)(U;Th)Be13 R. H. Heffner, J. L. Smith, J. O.Willis, P. Birrer, C. Baines, F. N. Gygax, B. Hitti, E. Lippelt, H. R. Ott,A. Schenck, E. A. Knetsch, J. A. Mydosh, and D. E. MacLaughlin, Phys. Rev. Lett. 65, 2816 (1990).UPt3 G. M. Luke, A. Keren, L. P. Le, W. D. Wu, Y. J. Uemura, D. A. Bonn, L. Taillefer, and J. D. Garrett,Phys. Rev. Lett. 71, 1466 (1993).(Pr;La)(Os;Ru)4Sb12 Y. Aoki, A. Tsuchiya, T. Kanayama, S. R. Saha, H. Sugawara, H. Sato,W. Higemoto, A. Koda, K. Ohishi,K. Nishiyama, and R. Kadono, Phys. Rev. Lett. 91, 067003 (2003).Hillier2 A. D. Hillier, J. Quintanilla, and R. 
Cywinski,Phys. Rev. Lett. 102, 117007 (2009).PrPt4Ge12 A. Maisuradze, W. Schnelle, R. Khasanov, R. Gumeniuk, M. Nicklas, H. Rosner, A. Leithe-Jasper,Y. Grin, A. Amato, and P. Thalmeier, Phys. Rev. B 82, 024524 (2010).Hillier A. D. Hillier, J. Quintanilla, B. Mazidian, J. F. Annett, and R. Cywinski,Phys. Rev. Lett. 109, 097001 (2012).Re6Zr R. P. Singh, A. D. Hillier, B. Mazidian, J. Quintanilla, J. F. Annett, D. M. Paul, G. Balakrishnan, andM. R. Lees, Phys. Rev. Lett. 112, 107002 (2014).Bhattacharyya A. Bhattacharyya, D. T. Adroja, J. Quintanilla, A. D. Hillier, N. Kase, A. M. Strydom, and J. Akimitsu,Phys. Rev. B 91, 060503(R) (2015).
http://arxiv.org/abs/1709.09388v1
{ "authors": [ "Kazumasa Miyake", "Atsushi Tsuruta" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170927084459", "title": "Theory for Intrinsic Magnetic Field in Chiral Superconductor Measured by \\muSR: Case of Sr_2RuO_4" }
Aggregated unfitted finite element method]The aggregated unfitted finite element method for elliptic problemsS. Badia]Santiago BadiaF. Verdugo]Francesc VerdugoA. F. Martín]Alberto F. MartínDepartment of Civil and Environmental Engineering. Universitat Politècnica de Catalunya, Jordi Girona 1-3, Edifici C1, 08034 Barcelona, Spain. CIMNE – Centre Internacional de Mètodes Numèrics enEnginyeria, Parc Mediterrani de la Tecnologia, UPC, Esteve Terradas 5, 08860Castelldefels, Spain.SB gratefully acknowledges the support received from the Catalan Government through the ICREA Acadèmia Research Program.E-mails: [email protected] (SB), [email protected] (FV), [email protected] (AM)Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinders the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called aggregated unfitted finite element method,is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments. [ [ December 30, 2023 =====================Keywords: unfitted finite elements; embedded boundary methods; ill-conditioning. § INTRODUCTIONUnfitted fe techniques are specially appealing when the generation of body-fitted meshes is difficult.They are helpful in a number of contexts includingmulti-phase and multi-physics applications with moving interfaces (e.g., fracture mechanics, fluid-structure interaction <cit.>, or free surface flows), or in situations in which one wants to avoid the generation of body-fitted meshes to simplify as far as possible the pre-processing steps (e.g., shape or topology optimization frameworks, medical simulations based on CT-scan data, or parallel large-scale simulations).In addition, the huge success of isogeometrical analysis (spline-based discretization) and the severe limitations of this approach in complex 3D geometries will probably increase the interest of unfitted methods in the near future <cit.>.Unfitted fe methods have been named in different ways. When designed for capturing interfaces, they are usually denoted as eXtended fe methods (XFEM) <cit.>, whereas they are usually denoted as embedded (or immersed) boundary methods, when the motivation is to simulate a problem using a (usually simple Cartesian) background mesh (see, e.g., <cit.>).Yet useful, unfitted fe methods have known drawbacks. They pose problems to numerical integration, imposition of Dirichlet boundary conditions, and lead to ill conditioning problems. 
Whereas different techniques have been proposed in the literature to address the issues related with numerical integration (see, e.g., <cit.>) and the imposition of Dirichlet boundary conditions (see, e.g., <cit.>), the conditioning problems are one of the main showstoppers still today for the successful use of this type of methods in realistic large scale applications. For most of the unfitted fe techniques, the condition number of the discrete linear system does not only depend on the characteristic element size of the background mesh, but also on the characteristic size of the cut cells, which can be arbitrary small and have arbitrarily high aspect ratios. This is an important problem. At large scales, linear systems are solved with iterative Krylov sub-space methods <cit.> in combination with scalable preconditioners. Unfortunately, the well known scalable preconditioners based on (algebraic) multigrid <cit.> or multi-level domain decomposition <cit.> are mainly designed for body-fitted meshes and cannot readily deal with cut cells. Different preconditioners for unfitted fe methods have been recently proposed, but they are mainly serial non-scalable algorithms(see, e.g., <cit.>). Recently, a robust domain decomposition preconditioner able to deal with cut cells has been proposed in <cit.>. Even though this method has proven to be scalable in some complex 3D examples, it is based on heuristic considerations without a complete mathematical analysis and its application to second (and higher) order fe is involved. This lack of preconditioners for unfitted fe can be addressed with enhanced formulations that provide well-posed discrete systems independently of the size of the cut cells. Once the conditioning problems related to cut cells are addressed, the application of standard preconditioners for body-fitted meshes to the unfitted case is strongly simplified, opening the door to large-scale computations.The main goal of this work is to develop such an enhanced unfitted fe formulation that fixes the problems associated with cut cells. The goal is to achieve condition numbers that scale only with the element size of the background mesh in the same way as in standard fe methods for body-fitted meshes. Our purpose is to implement it in FEMPAR, our in-house large scale fe code <cit.>. Since FEMPAR is a parallel multi-physics multi-scale code that includes different continuous and discontinuous fe formulations and several element types, it is crucial for us that the novel formulation fulfills the following additional properties: 1) It should be general enough to be applied to several problem types, 2) it should deal with both continuous and discontinuous fe formulations, 3)it should deal with high order interpolations, and 4) it should be easily implemented in an existing parallel fe package.To our best knowledge, none of the existing unfitted fe formulations fulfill these requirements simultaneously. For instance, one can consider the ghost penalty formulation used in the CutFEM method <cit.> However, it leads to a weakly non-consistent algorithm, and it requires to compute high order derivatives on faces for high order fe, which are not at our disposal in general fe codes and are expensive to compute, certainly complicating the implementation of the methods and harming code performance. Alternatively, for finite volume and dg formulations, one can consider the so-called cell aggregation (or agglomeration) techniques <cit.>. 
E.g., for dg formulations, the idea is simple: cells with the small cut cell problem, i.e., the ratio between the volume of the cell inside the physical domain and the total cell volume is close to zero, are merged with neighbor full cells forming aggregates. A new polynomial space is defined in each aggregate that replaces the local fe spaces of all cells merged in it. This process fixes the conditioning problems, since the support of the newly defined shape functions is at least the volume of a full cell. Even though this idea is simple and general enough to deal with different problem types and high order interpolations, the resulting discrete spaces are such that the enforcement of continuity through appropriate local-to-global dof numbering, as in standard fe codes (see, e.g., <cit.>), is not possible, limiting their usage to discontinuous Galerkin or finite volume formulations. Up to our best knowledge, there is no variant of cell agglomeration currently proposed in the literature producing conforming fe spaces, which could be used for classical continuous Galerkin formulations. It is the purpose of this work.In this article, we present an alternative cell aggregation technique that can be used for both continuous and discontinuous formulations, the aggregated unfitted fe method. We start with the usual (conforming) Lagrangian fe space that includes cut cells, which is known to lead to conditioning problems. The main idea is toeliminate from this space all the potentially problematic dof by introducing a set of judiciously defined constraints. These constraints are introduced using information provided by the cell aggregates, without altering the conformity of the original fe space. Alternatively, the method can be understood as an extension operator from the interior (well-posed) fe space that only involves interior cells to a larger fe space that includes cut cells and covers the whole physical domain. Discontinuous spaces can also be generated as a particular case of this procedure, which makes the method compatible also with dg formulations. In contrast to previous works, we also include a detailed mathematical analysis of the method, in terms of well-posedness, condition number estimates, and a priori error estimates.For elliptic problems, we mathematically prove that 1) the method leads to condition numbers that are independent from small cut cells, 2) the condition numbers scale with the size of the background mesh as in the standard fe method, 3) the penalty parameter of Nitsche's method required for stability purposes is bounded above, and 4) the optimal fe convergence order is recovered. These theoretical results are confirmed with 2D and 3D numerical experiments using the Poisson equation as a model problem. The outline of the article is as follows. In Section <ref>, we introduce our embedded boundary setup and the strategy to build the cell aggregates. In Section <ref>, we describe the construction of the novel fe spaces based on the cell aggregates. In Section <ref>, we introduce our elliptic model problem. The numerical analysis of the method is carried out in Section <ref>. Finally, we present a complete set of numerical experiments in Section <ref> and draw some conclusions in Section <ref>. § EMBEDDED BOUNDARY SETUP AND CELL AGGREGATION Let ⊂ℝ^d be an open bounded polygonal domain, with d∈{2,3} the number of spatial dimensions. 
For the sake of simplicity and without loss of generality, we consider in the numerical experiments below that the domain boundary is defined as the zero level-set of a given scalar function ^ls, namely ∂Ω≐{x∈ℝ^d:^ls(x)=0}.[Analogous assumption have to be made for body-fitted methods.] We note that the problem geometry could be described using 3D CAD data instead of level-set functions, by providing techniques to compute the intersection between cell edges and surfaces (see, e.g., <cit.>). In any case, the way the geometry is handled does not affect the following exposition. Like in any other embedded boundary method, we build the computational mesh by introducing an artificial domainsuch that it has a simple geometry that is easy to mesh using Cartesian grids and it includes the physical domain Ω⊂(see Fig.  <ref>).Let us construct a partition ofinto cells, represented by , with characteristic cell size h. We are interested inbeing a Cartesian mesh into hexahedra for d=3 or quadrilaterals for d = 2, even though unstructured n-simplex background meshes can also be considered. Cells incan be classified as follows: a cell ∈ such that ⊂Ω is an internal cell; if ∩Ω = ∅,is an external cell; otherwise,is a cut cell (see Fig. <ref>). The set of interior (resp., external and cut) cells is represented withand its union ⊂Ω (resp., (,) and (, )). Furthermore, we define the set of active cells as ≐∪ and its union . In the numerical analysis, we assume that the background mesh is quasi-uniform (see, e.g., <cit.>) to reduce technicalities, and define a characteristic mesh size . The maximum element size is denoted with _ max.We can also consider non-overlapping cell aggregatescomposed of cut cells and one interior cellsuch that the aggregate is connected, using, e.g., the strategy described in Algorithm <ref>. It leads to another partitiondefined by the aggregations of cells in ; interior cells that do not belong to any aggregate remain the same. By construction of Algorithm <ref>, there is only one interior cell per aggregate, denoted as the root cell of the aggregate, and every cut cell belongs to one and only one aggregate. For a cut cell, we define its root cell as the root of the only aggregate that contains the cut cell. The root of an interior cell is the cell itself. Thus, there is a one-to-one mapping between aggregates (including interior cells) ∈ and the root cut cell K ∈. As a result, we can use the same index for the aggregate and the root cell. We build the aggregates inwith Algorithm <ref>. In any case, other aggregation algorithms could be considered, e.g., touching in the first step of the algorithm not only the interior cells, but also cut cells without the small cut cell problem. It can be implemented by defining the quantity η_≐| ∩Ω|/ || and touch in the first step not only the interior cells but also any cut cell with η_ > η_0 > 0 for a fixed value η_0.[Cell aggregation scheme] * Mark all interior cells as touched and all cut cells as untouched.* For each untouched cell,if there is at least one touched cell connected to it through a facetF such that F ∩Ω≠∅, we aggregate the cell to the touched cell belonging to the aggregate containing the closest interior cell. If more than one touched cell fulfills this requirement, we choose one arbitrarily, e.g., the one with smaller global id.* Mark as touched all the cells aggregated in 2. * Repeat 2. and 3. until all cells are aggregated. Fig. <ref> shows an illustration of each step in Algorithm <ref>. 
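The aggregation procedure can be written down compactly. The sketch below is our own illustration of the algorithm just stated (it is not taken from any existing code); the facet connectivity restricted to Ω, the interior/cut classification, and a distance function between a cell and an interior root cell are assumed to be provided by the mesh data structures.

```python
def build_aggregates(interior_cells, cut_cells, facet_neighbors, dist):
    """Sketch of the cell aggregation scheme (Algorithm 1).

    interior_cells, cut_cells : iterables of cell ids
    facet_neighbors(k) : cells sharing with k a facet F such that F intersects Omega
    dist(k, r)         : distance between cell k and interior cell r (e.g., centroid distance)

    Returns a dict mapping every active cell to the root (interior) cell of its aggregate.
    """
    # Step 1: interior cells are touched; each one is the root of its own aggregate.
    root = {k: k for k in interior_cells}
    untouched = set(cut_cells)

    while untouched:
        newly_aggregated = {}
        # Step 2: each untouched cut cell joins, among its touched facet neighbors,
        # the aggregate whose interior root is closest (ties broken by smallest root id).
        for k in untouched:
            touched_neighbors = [n for n in facet_neighbors(k) if n in root]
            if touched_neighbors:
                best = min(touched_neighbors, key=lambda n: (dist(k, root[n]), root[n]))
                newly_aggregated[k] = root[best]
        if not newly_aggregated:
            raise RuntimeError("remaining cut cells are not reachable from interior cells")
        # Step 3: mark the cells aggregated in this sweep as touched.
        root.update(newly_aggregated)
        untouched -= newly_aggregated.keys()
        # Step 4: repeat until every cut cell belongs to an aggregate.
    return root
```

Note that within each sweep only cells touched in previous sweeps are considered, so the aggregates grow by at most one layer of cells per iteration, which is exactly the mechanism used in the size bound proved below.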
The black thin lines represent the boundaries of the aggregates. Note that from step 1 to step 2, some of the lines between adjacent cells are removed, meaning that the two adjacent cells have been merged in the same aggregate. The aggregation schemes can be easily applied to arbitrary spatial dimensions. As an illustrative example,Fig. <ref> shows some of the aggregates obtained for a complex 3D domain. In the forthcoming sections, we need an upper bound of the size of the aggregates generated with Algorithm <ref>. To this end, let us consider the next lemma. Assume that from any cut cell _0 ∈ there is a cell path {_0, _1, …, _n } that satisfies: 1) two consecutive cells share a facet F such that F ∩Ω≠∅; 2) _n is an interior cell; 3) n ≤γ_ max, where γ_ max is a fixed integer. Then, the maximum aggregate size is at most (2γ_ max+1)_ max. By construction, an aggregate can grow at most at a rate of one layer of elements per each iteration.Thus, after n iterations the aggregate size will be at most (2n+1)h_ maxconsidering that the aggregate can potentially grow in all spatial directions. It is obvious to see that the aggregation scheme finishes at most after γ_max iterations. Thus, the aggregate size will be less or equal than (2γ_ max+1) _ max. From Lemma <ref>, it follows that the aggregate size will be bounded if so is the value of γ_ max. In what follows, we assume that γ_ max is fixed, e.g., eliminating any cut cell that would violate property 3) in Lemma <ref>. One shall assume that each cut cell shares at least one corner with an interior cell (this is usually true if the grid is fine enough to capture the geometry). In this situation, we can easily see that γ_ max=2 for 2D and γ_ max=3 for 3D. Then, by Lemma <ref>, the aggregate size is at most 5h_ max in 2D and 7h_ max in 3D.Even though it is not used in the proof of Lemma <ref>, the fact that we aggregate cut cells to the touched cells belonging to the aggregate containing the closest interior cell (see step 2 in Algorithm <ref>) contributes to further reduce the aggregate size. Indeed, the actual size of the aggregates generated in the numerical examples (cf. Section <ref>) is much lower than the predicted by these theoretical bounds. In 2D, the aggregate size tends to 2h_ max as the mesh is refined, whereas it tends to 3h_ max in the 3D case. This shows that the aggregation scheme produces relative small aggregates in the numerical experiments.§ AGGREGATED UNFITTED LAGRANGIAN FINITE ELEMENT SPACESOur goal is to define a fe space using the cell aggregates introduced above. To this end, we need to introduce some notation. In the case of n-simplex meshes, we define the local fe space V() ≐𝒫_q(), i.e., the space of polynomials of order less or equal to q in the variables x_1,…,x_d. For n-cube meshes, V() ≐𝒬_q(), i.e.,the space of polynomials that are of degree less or equal to k with respect to each variable x_1, …, x_d.In this work, we consider that the polynomial order q is the same for all the cells in the mesh. We restrict ourselves to Lagrangian fe methods. Thus, the basis for V() is the Lagrangian basis (of order q) on .We denote bythe set of Lagrangian nodes of order q of cell .There is a one-to-one mapping between nodes a ∈and shape functions a();it holds a(^b) = δ_ab, where ^b are the space coordinates of node b.We assume that there is a local-to-global dof map such that the resulting global system is 𝒞^0 continuous. This process can be elaborated for hp-adaptivity as well, but it is not the purpose of this work. 
With this notation, we can introduce the active fe space associated with the active portion of the background mesh≐{ v ∈𝒞^0() : v|_K ∈ V(K),for anyK ∈}.We could analogously define the interior fe space ≐{ v ∈𝒞^0() : v|_K ∈ V(K),for anyK ∈}.The active fe space(see Fig. <ref>) is the functional space typically used in unfitted fe methods (see, e.g., <cit.>).It is well known thatleads to arbitrary ill conditioned systems when integrating the fe weak form on the physical domainonly (if no extra technique is used to remedy it). It is obvious that the interior fe space(see Fig. <ref>) is not affected by this problem, but it is not usable since it is not defined on the complete physical domain . Instead, we propose an alternative spacethat is defined onbut does not present the problems related to . We can define the set of nodes ofandasand , respectively (see Fig. <ref>). We define the set of outer nodes as ≐∖ (marked with red crosses in Fig. <ref>). The outer nodes are the ones that can lead to conditioning problems due to the small cut cell problem (see finite-cell-estimate). The space is defined taking as starting point , and adding judiciously defined constraints for the nodes in . In order to definewe observe that, in nodal Lagrangian fe spaces, there is a one-to-one map between dof and nodes (points) of the fe mesh (for vector spaces, the same is true for every component of the vector field). On the other hand, we can define the owner vef of a node as the lowest-dimensional vef that contains the node. Furthermore, we can construct a map that for every vef F such that F ⊄,gives a cell owner among all the cells that contain it. This map can be arbitrarily built. E.g., we can consider as cell owner the one in the smallest aggregate. As a result, we have a map between dof and (active) cells. Every active cell belongs to an aggregate, which has its own root (interior) cell.So, wealso have a map between dof and interior cells. This map between b∈ and the corresponding interior cell is represented with (b) (see Fig. <ref>).The space of global shape functions ofandcan be represented as {b: b ∈} and {b: b ∈}, respectively. Functions in these fe spaces are uniquely represented by their nodal values. We represent the nodal values of ∈ as ∈ℝ^||, whereas the nodal values of ∈ as ∈ℝ^||. Considering, without loss of generality, that the interior nodal values are labeled the same way for both fe spaces, we have that = [ ,]^T, where ∈ℝ^||. rangeNow, we consider the following extension operator. Given ∈ and the corresponding nodal values , we compute the outer nodal values as follows:_b = ∑_a ∈(b)a(_b) _a, for b ∈.That is, the value at an outer node b∈ is computed by extrapolating the nodal values of theinterior cell K(b) associated with it. In compact form, we can write it as =, whereis the global matrix of constraints. We define the global extension matrix : ℝ^||→ℝ^|| as = [ , ]^T.Let us also define the extension operatorℰ: →, such that, given ∈ represented by its nodal values , provides the fe function ∈ with nodal values . We define the range of this operator as ≐() ⊂. This fe space is called the aggregated fe space since the map K(·) between outer nodes and interior cells is defined using the aggregates in . The motivation behind the construction of such space is to have a fe space covering(and thus ) with optimal approximability properties and without the ill-conditioning problems of . 
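The constraints defining the new space are purely algebraic and cell-local: the value at an outer node is obtained by extrapolating the polynomial of its root interior cell to the node coordinates. The sketch below is our own illustration for bilinear (Q1) elements on an axis-aligned quadrilateral root cell; the function and variable names are ours and the node ordering is an assumption of the sketch.

```python
import numpy as np

def q1_shape_functions(xi):
    """Bilinear (Q1) Lagrangian shape functions on the reference square [0,1]^2.
    Node ordering: (0,0), (1,0), (0,1), (1,1)."""
    x, y = xi
    return np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])

def constrain_outer_node(x_b, root_vertices, u_root):
    """Value at an outer node located at x_b, extrapolated from its root interior cell.

    root_vertices : (4,2) array with the vertices of the root cell K(b) (axis-aligned quad)
    u_root        : nodal values on the root cell (Q1, same node ordering)

    Implements u_b = sum_a phi_a^{K(b)}(x_b) * u_a.  Note that x_b lies outside K(b) in
    general, so the Q1 polynomials are *extrapolated* beyond the cell.
    """
    x0, y0 = root_vertices[0]
    hx = root_vertices[1, 0] - x0
    hy = root_vertices[2, 1] - y0
    xi = np.array([(x_b[0] - x0) / hx, (x_b[1] - y0) / hy])  # may fall outside [0,1]^2
    return q1_shape_functions(xi) @ u_root

# Example: root cell [0,1]x[0,1] carrying the linear field u(x,y) = 1 + 2x + 3y.
# The extrapolated value at the outer node (2.0, 1.5) is exact: 1 + 4 + 4.5 = 9.5.
K = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
u = np.array([1., 3., 4., 6.])
print(constrain_outer_node(np.array([2.0, 1.5]), K, u))   # -> 9.5
```

Collecting these extrapolation weights row by row for all outer nodes yields the constraint matrix introduced above, so the construction can be performed cell-wise inside the usual assembly loop.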
As one can observe, the new space is defined only by interior nodal values, whereas the conflictive outer nodes are eliminated via theconstraints in (<ref>). These constraints are cell-wise local. Thus, they can be readily applied at the assembly level in the cell loop, making its implementation very simple, even for non-adaptive codes that cannot deal with non-conforming meshes.We consider as basis forthe extension of the shape functions of , i.e., { a }_ a ∈. The fact that it is a basis foris straightforward, due to the fact that the extension operator is linear. The extension of a shape function is easily computed as follows:a= a + ∑_b ∈𝒞(a)_bab, for a ∈,where 𝒞(a) represents the set of outer nodes inthat are constrained by a.We note that one could consider an alternative aggregated space,= { v ∈𝒞^0(Ω) : v|_A ∈ V(A),for anyA ∈},where V(A) denotes the space of q order Lagrangian polynomials on n-simplices or n-cubes. It is obvious to check that in fact ⊂, but it is not possible to implement the inter-element continuity for this space using standard fe techniques. On the other hand, the fe space has the same size as the interior problem and the implementation in existing fe codes requires minimal modifications. Furthermore, it is also easy to check that the two approaches coincide for dg formulations, where all dof belong to the cells. In fact, a dg method withhas been proposed in <cit.>.§ APPROXIMATION OF ELLIPTIC PROBLEMSFor the sake of simplicity, we consider the Poisson equation with constant physical diffusion as a model problem, even though the proposed ideas apply to any elliptic problem with H^1-stability, e.g., the linear elasticity problem and heterogeneous problems. The Poisson equation with Dirichlet and Neumann boundary conditions reads as (after scaling with the diffusion term): find u ∈ H^1(Ω) such that -Δ u = finΩ, u=g^DonΓ_D,u · = g^NonΓ_N,where (Γ_D,Γ_N) is a partition of the domain boundary (the Dirichlet and Neumann boundaries, respectively), f∈ H^-1(Ω), g^D∈ H^1/2(Γ_D), and g^N∈ H^-1/2(Γ_N). For the space discretization, we consider H^1-conforming fe spaces on the conforming meshthat are not necessary aligned with the the physical boundary ∂. For simplicity, we assume that, for any cut cell ∈, either ∩Γ⊂Γ_ D or ∩Γ⊂Γ_ N. We consider both the usual fe space as well as the new aggregated spacein order to compare their properties. We will simply usewhen it is not necessary to distinguish betweenand .For unfitted grids, it is not clear to include Dirichlet conditions in the approximation space in a strong manner. Thus, we consider Nitsche's method <cit.> to impose Dirichlet boundary conditions weakly on Γ_D. It provides a consistent numerical scheme with optimal converge rates (also for high-order elements) that is commonly used in the embedded boundary community <cit.>. We define the fe-wise operators:_(u,v) ≐∫_∩Ω u · vdV + ∫_Γ_D∩( τ_ u v- v (· u) -u (· v) ) dS, ℓ_(v) ≐∫_Γ_D∩( τ_vg^D-(· v)g^D) dS,defined for a generic cell ∈. Vectordenotes the outwards normal to ∂Ω. The bilinear form _(·,·) includes the usual form resulting from the integration by parts of (<ref>) and the additional term associated with the weak imposition of Dirichlet boundary conditions with Nitsche's method. The right-hand side operator ℓ_(·) includes additional terms related to Nitsche's method. 
The coefficient τ_>0 is a mesh-dependent parameter that has to be large enough to ensure the coercivity of _(·,·).The global fe operator : →'and right-hand side term ℓ∈' are stated as the sum of the element contributions, i.e.,(u,v) ≐∑_∈_(u,v),ℓ (v)≐∑_∈ℓ_ (v), foru, v ∈. We will make abuse of notation, using the same symbol for a bilinear form, e.g., : →', and its corresponding linear operator, i.e., ⟨ u , v ⟩≐( u , v). Furthermore, we define b: ' → as b(v) ≐ f(v) + g^ N(v) + ℓ(v), for v ∈. With this, the global problem can be stated as: find ∈ such that(, )= b(), for any ∈.By definition, this problem can analogously be stated as: find ∈ such that ( , )= b() for any ∈.After the definition of the fe basis (of shape functions) that spans , or alternatively the extension operator ·,the previous problem leads to a linear system to be solved. A sufficient (even though not necessary) condition forto be coercive is to enforce the element-wise constant coefficient τ_ to satisfyτ_≥ C_≐sup_v∈ V()ℬ_ (v,v)𝒟_(v,v) ,for all the mesh elements ∈ intersecting the boundary Γ_D. In the previous formula,𝒟_(·,·) andℬ_(·,·) are the forms defined as𝒟_(u,v)≐∫_∩Ω u · vdV, and ℬ_ (u,v) ≐∫_Γ_D∩(· u) (· v)dS. Sinceis finite dimensional, and 𝒟_(·,·) andℬ_(·,·) are symmetric and bilinear forms, the valueC_ (i.e., the minimum admissible coefficient τ_) can be computed numerically as C_≐λ̃_max, being λ̃_maxthe largest eigenvalue of the generalized eigenvalue problem(see <cit.> for details): find u_∈|_ and λ̃∈ℝ such thatℬ_(u_,v_) = λ̃𝒟_(u_,v_)for allv_∈|_. For standard fe for body-fitted meshes, it is enough to compute coefficient τ_ as τ_=β/h to satisfy condition (<ref>), where β is a sufficiently large (mesh independent) positive constant (see, e.g., <cit.>).However, for standard unfitted fe methods using the usual spacewithout any additional stabilization, coefficient τ_ cannot be computed a priori; in fact, the minimum cell-wise value that assures coercivity is not bounded above. In this case, a value for τ_ ensuring coercivity has to be computed for each particular setup usingthe cell-wise eigenvalue problem (<ref>). The introduction of the new spacesolves this problem andτ_ is bounded again in terms of the element size as expected in the body-fitted case (see Section <ref> for more details). In this case, we have taken τ_=100/ in the numerical experiments below. The linear system matrix that arises from nummet can be defined as _ab≐( a , b ), for a, b ∈.The mass matrix related to the aggregated fe spaceis analogously defined as _ab≐∫_Ωab, for a, b ∈.It is well known that the usual fe spaceis associated with conditioning problems due to cut cells. The condition number of the discrete system without the aggregation, i.e., consideringinstead ofin nummet, scales as∼min_∈η_^-(2q+1-2/d),whereis the 2-norm condition number of(see <cit.>for details). Thus, arbitrarily high condition numbers are expected in practice since the position of the interfacecannot be controlled and the valueη_ can be arbitrarily close to zero. This problem is solved if the new aggregated spaceis used instead of(cf. Corollary <ref>).§ NUMERICAL ANALYSISIn this section, we analyze the well-posedness of the agregated unfitted fe method nummet, the condition number of the arising linear system, and a priori error estimates. As commented above, we assume that the background mesh is quasi-uniform. Therefore, the number of neighboring cells of a given cell is bounded above by a constant n_ cell independently of . 
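Before proceeding with the analysis, it is instructive to make the eigenvalue problem for the minimum admissible Nitsche coefficient discussed above concrete. The following toy 1D illustration of ours (not the implementation used in this work) considers a single linear element (0,h) whose physical part is (x_Γ,h), with cut fraction η = |K∩Ω|/|K| and the Dirichlet point at x_Γ; it assembles 𝒟_K and ℬ_K and computes C_K as the largest finite generalized eigenvalue, showing that C_K = 1/(ηh) is unbounded as η→0 for the standard cut space, whereas a bound of the form β/h only holds if η stays away from zero, which is what the aggregated space guarantees.

```python
import numpy as np
from scipy.linalg import eig

def nitsche_constant_1d(h, eta):
    """Minimum admissible Nitsche coefficient C_K for one 1D linear (P1) element (0,h),
    whose physical part is (x_G, h) with x_G = (1-eta)*h, i.e. eta = |K ∩ Omega| / |K|.

    D_K(u,v) = ∫_{K∩Omega} u' v' dx            ->  D = (eta/h)   * [[1,-1],[-1,1]]
    B_K(u,v) = (n·∇u)(n·∇v) at the point x_G   ->  B = (1/h**2)  * [[1,-1],[-1,1]]
    C_K is the largest finite eigenvalue of the generalized problem  B u = lam D u.
    """
    S = np.array([[1.0, -1.0], [-1.0, 1.0]])
    D = (eta / h) * S
    B = (1.0 / h**2) * S
    lam = eig(B, D, right=False)
    lam = np.real(lam[np.isfinite(lam)])  # the constant mode gives 0/0; discard it
    return lam.max()

h = 0.1
for eta in (1.0, 0.1, 0.01, 1e-4):
    C_K = nitsche_constant_1d(h, eta)
    print(f"eta = {eta:8.0e}   C_K = {C_K:10.3e}   1/(eta*h) = {1/(eta*h):10.3e}")
# C_K = 1/(eta*h): it blows up as the cut fraction eta -> 0, i.e. the admissible tau_K
# cannot be bounded a priori for the standard space with arbitrarily small cut cells.
```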
In a mesh refinement analysis, we also assume that the coarser mesh level-set function already represents the domain boundary.In the following analysis, all constants being used are independent ofand the location of the cuts in cells, i.e., η_. They may also depend on the threshold η_0 in the aggregation algorithm if considered; we have considered η_0 = 1 for simplicity. The constants can depend on the shape/size of Ω and Γ_ D, the order of the fe space, and the maximum aggregation distance γ_ max, which are assumed to be fixed in this work. In turn, due to Lemma <ref>, the maximum size of an aggregate is bounded by a constant times . As a result, the following results are robust with respect to the so-called small cut cell problem. When we have that A ≤ c B for a positive constant c, we may use the notation A ≲ B; analogously for ≳. For the analysis below, we need to introduce some extra notation. Given a function ∈ (or ), the nodal vectorwill be used without any superscript, as soon as it is clear from the context. For a given cell , the cell-wise coordinate vector is represented with . On the other hand, given a fe function ∈, for every interior cell ∈, let us define define the cell-wise extension operator _ = [, ],where is the cell-wise constraint matrix, whose entries can be computed in the reference space (see def-constraints), such that · = ∑_∈·.We denote with ·_2 the Euclidean norm of a vector and the induced matrix norm. Standard notation is used to define Sobolev spaces (see, e.g., <cit.>). Given a Sobolev space X, its corresponding norm is represented with ·_X. §.§ Stability of the coordinate vector extension matrix We start the analysis of the scheme by proving bounds for the norm of the global and cell-wise coordinate vector extension matrix. Therefore, their norms can be bounded independently of the cut location and the size of the aggregate.The cell-wise and global coordinate vector extension matrices hold the following bounds:1 ≤__2^2 ≤ 1 + _2^2, for every∈, and1 ≤_2^2 ≤ 1 + _2^2 ≤,for a positive constant .Using the definition of the extension operator in Section <ref>, we have that _2^2 =_2^2 + _2^2.We proceed analogously for the cell-wise result, to get __2^2 =_2^2 + _2^2. It proves the first result. On the other hand, we have,_2^2=∑_∈_2^2 ≤∑_∈_2^2 _2^2 ≤n_ cellsup_∈_2^2 _2^2,where we have used the fact that the constraint matrix is aggregate-wise and that the maximum number of cell neighbors of a vertex/edge/face is bounded above by a constant n_ cell. The value sup_∈_2^2 (or an upper bound) can explicitly be computed prior to the numerical integration and its entries are independent of the aggregate cut and the geometrical mapping, i.e., . In fact, given a polynomial order and γ_ max, one can precompute the maximum value of _2^2 among all possible aggregate configurations and explicitly obtain an upper boundof the global extension matrix norm. It proves the lemma.§.§ Mass matrix condition number In order to provide a bound for the condition number of the mass matrix, we rely on the maximum and minimum eigenvalues of the local mass matrix in the reference cell :_2^2 ≤_L^2()^2 ≤_2^2, for∈ V().The values ofandonly depend on the order of the fe space and can be computed for different orders on n-cubes or n-simplices (see <cit.>). 
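For reference, these extreme eigenvalues can be computed once per element type and order. A minimal sketch of ours for Q1 elements: the reference mass matrix on the n-cube is the Kronecker product of the 1D linear mass matrix, so both constants follow from a direct eigenvalue computation and are manifestly independent of the mesh size and of the cuts.

```python
import numpy as np

# 1D linear (P1) mass matrix on the reference interval [0,1]
M1 = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0

# Q1 mass matrices on the reference square/cube via Kronecker (tensor) products
M_Q1_2d = np.kron(M1, M1)
M_Q1_3d = np.kron(M1, np.kron(M1, M1))

for label, M in (("1D P1", M1), ("2D Q1", M_Q1_2d), ("3D Q1", M_Q1_3d)):
    w = np.linalg.eigvalsh(M)
    print(f"{label}: min eigenvalue = {w.min():.6f}, max eigenvalue = {w.max():.6f}")
# 1D: 1/6 and 1/2; in d dimensions the extreme eigenvalues are (1/6)^d and (1/2)^d,
# i.e. they depend only on the element type and order.
```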
Using typical scaling arguments, one has the following bound for the local mass matrix of the physical cell:^d _2^2 ≤^2_L^2()≤^d _2^2.In the next lemma, we prove the equivalence between the L^2() norm and the interior dof Euclidean norm, for functions in .The following bounds hold:h^d_2^2 ≲^2_L^2()≲ h^d_2^2, for any∈. By definition, every function incan be expressed asfor some ∈. Using eig_mass, the fact that ⊂, and the quasi-uniformity of the background mesh, we obtain the lower bound in l2stab1 as follows:_L^2()^2 ≥^2_L^2() = ∑_K ∈^2_L^2()≥∑_K ∈^d _2^2 ≳^d _2^2.On the other hand, using ⊂, Lemma <ref>, eig_mass, and the fact that the number of surrounding cells of a node is bounded above by a positive constant, we get:_L^2()^2 = ∑_K ∈^2_L^2()≤∑_K ∈^d__2^2≲^d_2^2 ≲^d_2^2.It proves the lemma. The upper and lower bounds in l2stab1 lead to the continuity of the extension operator and a bound for the condition number of the mass matrix of the aggregated fe space. The extension operator satisfies the following bound:^2_L^2()≲^2_L^2(), for any∈.The mass matrixin massmatdef, related to the aggregated fe space , is boundedby ≤ C, for a positive constant C.§.§ Inverse inequality In order to prove the condition number bound for the system matrix arising from nummet, we need to prove first an extended inverse inequality. We rely on the fact that an inverse inequality holds for the fe space , i.e., _L^2()≲ h^-1_L^2(), for any∈.This standard result for conforming meshes can be found, e.g., in <cit.>. The following inverse inequality holds:_L^2()≲ h^-1_L^2(), for any ∈. Using the fact that ⊆⊆,∈, the standard inverse inequality inv_act, and the stability of the extension operator inLemma <ref>, we get:_L^2()≲h^-1_L^2()≲ h^-1_L^2().It proves the lemma. §.§ Coercivity and Nitsche's coefficient ξ 𝐃 ξ_h _cut In this section, we consider a trace inequality that is needed to prove the coercivity of the bilinear form in bil-form. Given acell ∈, let us consider the set of constraining interior cells _1, …, _m_,m_≥ 1, i.e., the interior cells that constraint at least one dof of the cut cell. Let us also define ≐∩ and Ω_≐∪⋃_i=1^m_ K_i ⊂.For any ∈ and ∈, the following bound holds·_L^2(Γ_ D∩)≤ h_^-1/2_L^2(_),for a positive constant . For interior cells, the left-hand side is zero and the bound trivially holds. Let us consider a cut cell . Let us also consider a fe function ∈ and its gradient ≐. Assuming that all the cells have the same order, we have thatbelongs to the discontinuous Lagrangian fe space of order q-1, and we represent the corresponding coordinate vector with _.First, we use the equivalence of norms in finite dimension and a scaling argument to get:_L^2(Γ_ D∩)^2 ≲ |Γ_ D∩| ^2_L^∞(),where the constant can only depend on the fe space order. Analogously, we have _L^∞()^2 ≲__2^2. Following the same ideas as above,can be expressed as an extension of the corresponding nodal values of the q-1 order fe spaces on top of the interior cells _i, represented with __i; we represent this extension with the matrix _, i.e., _ = _ [ _1, …, _m]^T. Using an analogous reasoning as above for matrix , the norm of this matrix cannot depend on the cut or h. Thus, we have that __2^2 ≲∑_i=1^m___i_2^2. On the other hand, using again the equivalence of norms in finite dimension, we get __i_2≲_L^2(_i). As a result, using typical scaling arguments, and using the fact that || ≲ |_i| ≲ | | for constants independent of mesh size or order, we get:_L^∞()^2 ≲ ||^-1∑_i=1^m_^2_L^2(_i). 
Combining these results, we get:·_L^2(Γ∩)^2≤ |Γ∩| | | ^2_L^∞()≲^-1∑_i=1^m_^2_L^2(_i),where we have used the fact that |Γ∩| | |^-1≲^-1 holds for a quasi-uniform mesh. It proves the lemma.§.§ Well-posedness of the unfitted fe problemIn this section, we prove coercivity and continuity of the bilinear form bil-form. First, we prove coercivity with respect to the following mesh dependent norm in :^2 ≐_L^2()^2 + ∑_∈β_^-1^2_L^2(Γ_D∩), for∈, which is next proved to bound the L^2() norm.The aggregated unfitted fe problem in nummet satisfies the following bounds: i) Coercivity: (,) ≳^2, for any ∈, ii) Continuity:(,) ≲, for ,∈,if β_K > C, for some positive constant C. In this case, there exists one and only one solution of nummet. For cut cells, we use2 ∫_Γ_ D∩(·)≤α_ h_^-1^2_L^2(Γ_ D∩) +α_^-1^-1 h_·^2_L^2(Γ_ D∩)≤α_ h_^-1^2_L^2(Γ_ D∩) +α_^-1^2_L^2(_).Using the fact that the mesh is quasi-uniform and that the number of neighboring cells and γ_ maxis bounded, one can take a value for α_ large enough (but uniform with respect toand the cut location) such that:2 ∫_Γ_ D(·) ≤∑_∈α__^-1_L^2(Γ_ D∩)^2 +1/2_L^2()^2.As a result, we get:(,) ≥1/2_L^2()^2 + ∑_∈( β_ - α_) _^-1_L^2(Γ_ D∩)^2.For, e.g., β_ > 2 α_, (,) is a norm. By construction, this lower bound for β_ is independent of the mesh sizeand the intersection of Γ and . It proves the coercivity property in th-bounds. Thus, the bilinear form is non-singular.The continuity in th-bounds-2 can readily be proved by repeated use of the Cauchy-Schwarz inequality and inequality stabA1-proof. Since the problem is finite-dimensional and the corresponding linear system matrix is non-singular, there exists one and only one solution of this problem. C_sIfhas smoothing properties, the following bound holds:_L^2()≲ , for any∈.Let us consider ∈ and let ψ∈ H^1_0() solve the problem -Δψ = with the boundary conditions ψ = 0 on Γ_ D and ·ψ = 0 on Γ_ N. Using the fact that the domain Ω has smoothing properties, it holds ψ_H^2()≲_L^2(). We have, after integration by parts:_L^2() = - ∫_Δψ = ∫_·ψ - ∫_Γ_ D·ψ.The first term in the right-hand side of stab-eq1 is easily bounded using the Cauchy-Schwarz inequality:∫_·ψ≤_L^2()ψ_L^2()≲_L^2()_L^2().On the other hand, the following trace inequalityholds ·ψ_L^2(Γ_ D)^2 ≲ | ψ |_H^2(Ω)^2 for a constant that depends on the size of Γ_ D (see <cit.>). Using the Cauchy-Schwarz inequality and the previous trace inequality, we readily get:- ∫_Γ_ D·ψ ≤( ∑_∈^-1_L^2(Γ_ D∩)^2 )^1/2( ∑_∈·ψ_L^2(Γ_ D∩)^2 )^1/2≤( ∑_∈^-1_L^2(Γ_ D∩)^2 )^1/2·ψ_L^2(Γ_ D)≲( ∑_∈^-1_L^2(Γ_ D∩)^2 )^1/2_L^2().Combining these bounds, we prove the lemma.C_A The condition number of the linear system matrixin stifmatdefis boundedby ≲^-2.To prove the corollary, we have to bound · = (,) above and below by ^2_2 times some constant. The lower bound follows from the coercivity property in Th. <ref>, Lemma <ref>, the lower bounds in Lemmas <ref> and <ref>, which lead to (,) ≳^2_L^2()≳h^d^2_2. The upper bound is readily obtained from the continuity property in Lemma<ref> and the upper bound in Lemma <ref>, i.e., · = (,) ≲^2. Using scaling arguments and the equivalence of norms for finite-dimensional spaces, we get ^2_L^2(Γ_D∩)≲^d-1_2^2. 
Adding up for all cells, invoking the fact that the number of neighbour cells is bounded, and using the upper bound of the coordinate vector extension operator in <ref>, we obtain:∑_∈β_^-1^2_L^2(Γ_D∩ )≲^d-2_2^2.Using the inverse inequality in Lemma <ref> and the upper bound in Lemma <ref>, we obtain:_L^2()^2 ≲h^-2_L^2()^2 ≲ h^d-2_2^2.Combining aux1-sma-aux2-sma, we get ·≤ c h^d-2_2^2. It proves the corollary.§.§ Error estimates∫#1ℐ_h(#1)#1σ(#1)In this section, we get a priori error estimates for the aggregated fe scheme nummet. In order to do that, we prove first approximability properties of the corresponding spaces. Let us consider an aggregated fe space of order q, m ≤ q, 1 ≤ s ≤ m ≤ q+1, 1 ≤ p ≤∞, andm > d/p. Given a function u ∈ H^m(Ω), it holds:inf_∈ u - _W_p^s()≲^m-s | u |_W_p^m(). Under the assumptions of the lemma, we have that the following embedding W_p^m() ⊂𝒞^0() is continuous (see, e.g.,<cit.>).Thus, given a function u ∈ W_p^m(), let represent with u the vector of nodal values in , i.e., u_a = (^a) for a ∈. We define the interpolation operator ∫u≐∑_a ∈a [u]_a.Given a cut cell K ∈, the fact that its dof values only depend on interior dof in Ω̅_, and since each shape function a belongs to W_∞^m(K) ⊆ W_p^m(K), it follows from the upper bound of the norm of the nodal extension operator in Lemma <ref> that∫u_W_p^m()≤ Cu _𝒞^0(Ω̅_) (see also <cit.>). On the other hand, we consider an arbitrary function π(u) ∈ W_p^m() such that π(u)|_∈𝒫_q(_). We note that, by construction, π(u)|_ = ∫π(u)|_. Thus, we have: u - ∫u_W_p^m() ≤ u - π(u)_W_p^m()+∫π(u) - u_W_p^m()≲ u - π(u)_W_p^m()+ π(u) - u_𝒞^0(_)≲ u - π(u)_W_p^m(_),where we have used in the last inequality the previous continuous embedding. Since _ is an open bounded domain with Lipschitz boundary by definition, one can use the Deny-Lions lemma (see, e.g., <cit.>). As a result, the π(u) that minimizes the right-hand side holds:u - ∫u_W_p^m(_)≲ | u |_W_p^m(_).Using standard scaling arguments, we prove the lemma. Ifhas smoothing properties and the solution u of the continuous problem PoissonEq belongs toW_p^m() for d/p < m ≤ q, the solution ∈ of nummetsatisfies the following a priori error estimate:u - _H^1()≤^m-1 | u |_H^m().w_h v_h Combining the consistency of the numerical method, i.e., (u,) = ℓ(), and the continuityand coercivity of the bilinear form in Th. <ref>, we readily get, using standard fe analysis arguments: -^2 ≲(-,-) = ( - u,- ) ≲-u-,for any ∈. On the other hand, we use the trace inequality (see <cit.>)ψ^2_L^2(∂ T)≲ |∂ T|^-1ψ^2_L^2(T) + |∂ T| ψ^2_H^1(T), for anyψ∈ H^1(T).Using this trace inequality, we get:^-1 u- ^2_L^2(Γ_D∩)≤^-1 u- ^2_L^2(∂_)≲^-2 u- ^2_L^2( _)+u- ^2_H^1( _).Combining the previous bound with the approximability property in Lemma <ref>, we readily get-u≲^m-1 | u -|_H^m().It proves the theorem. § NUMERICAL EXPERIMENTS§.§ SetupThe numerical examples below consider as a model problem the Poisson equation with non-homogeneous Dirichlet boundaryconditions. The value of thesource term and the Dirichlet function are defined such that the PDE has the following manufactured exact solution:u(x,y,z) = sin(4π((x-2.3)^2 + y^2 + z^2)^1/2),(x,y)∈Ω⊂ℝ^2, z=0in 2D,(x,y,z)∈Ω⊂ℝ^3in 3D.We consider two different geometries, a 2D circle and a 3D complex domain with the shape of a popcorn flake (see Fig. <ref>). These geometries are often used in the literature to study the performance of unfitted fe methods(see, e.g., <cit.>, where the definition of the popcorn flake is found). 
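For the setup just described, the source term and Dirichlet datum consistent with the manufactured solution can be generated symbolically. The following sketch is our own illustration (unit diffusion, so that f = -Δu, in agreement with the scaled model problem above) and covers both the 2D and the 3D cases.

```python
import sympy as sp

def manufactured(dim):
    """Manufactured solution u = sin(4*pi*sqrt((x-2.3)^2 + y^2 + z^2)) and f = -Laplacian(u)."""
    coords = sp.symbols("x y z")[:dim]
    x = coords[0]
    r2 = (x - sp.Rational(23, 10))**2 + sum(c**2 for c in coords[1:])
    u = sp.sin(4 * sp.pi * sp.sqrt(r2))
    f = sp.simplify(-sum(sp.diff(u, c, 2) for c in coords))
    return coords, u, f

coords, u, f = manufactured(3)          # use manufactured(2) for the 2D circle case
u_num = sp.lambdify(coords, u, "numpy")
f_num = sp.lambdify(coords, f, "numpy")
print(u_num(0.5, 0.5, 0.5), f_num(0.5, 0.5, 0.5))
# The Dirichlet datum g_D on the unfitted boundary is simply the trace of u.
```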
In all cases, we use the cuboid [0,1]^d, d=2,3 as the bounding box on top of which thebackground Cartesian grid is created. For the sake of illustration, Fig. <ref> displays both the considered geometries, numerical solution and bounding box. The main goal of the following tests is to evaluate the (positive) effect of using the aggregation-based fe spaceinstead of the usual one .In the next plots, the results for the usual (un-aggregated) fe space are labeled as standard, whereas the cases with the aggregation are labeled as aggregated (or aggr. in its short form).In all the examples, we use Lagrangian reference fe withbi-linear and bi-quadratic shape functions in 2D, and tri-linear and tri-quadratic ones in 3D.Both the standard and the aggregated formulations have been implemented in the object-oriented HPC code FEMPAR <cit.>. The system oflinear equations resulting from the problem discretization are solved within FEMPAR with a sparse direct solver from the MKL PARDISOpackage <cit.>. Condition number estimates are computed outside FEMPAR using the MATLAB functioncondest.[MATLAB is a trademark of THE MATHWORKS INC.]For the standard unfitted fe space , we expect very high condition numbers that can hinder the solution of the discrete system using standard double precision arithmetic. To address this effect and avoid the breakdown of sparse direct solvers, we bound from below the minimum distance between the mesh nodes and the intersection of edges with the boundary Γ to a small numerical threshold D_min proportional to the cell size, namely D_min=ε h, where ε is a (mesh independent) user defined tolerance. If the edge cut-node distance is below this threshold , the edge cut is collapsed with the node, perturbing the geometry. In the numerical experiments, we take ε=10^-6 and ε=10^-3 in 2D and 3D respectively. Using the fact that η_∼ε^d, we can rewrite the condition number estimate (<ref>) in terms of the user-defined tolerance ε as∼ε^-d(2q +1 -2/d). For instance, we have ∼ε^-7 and ∼ε^-13 for first and second order interpolations, respectively, in 3D. This illustrates that the condition numbers expected for second order interpolation are extremely high as it is confirmed below unless very large values of ε are considered. However, the value of ε cannot be increased without affecting the numerical error, since it perturbs the geometry, and destroys at some point the order of convergence of the numerical method. Similar perturbation-based techniques with analogous problems have been used in the frame of the finite cell method in <cit.>. We note that the tolerance ε is not needed at all when using the aggregated fe space. §.§ Moving domain experimentIn the first numerical experiment, we study the robustness of the unfitted fe formulations with respect to the relative position between the unfitted boundaryand the background mesh. To this end, we consider two moving domains that can travel along one of the diagonals of the bounding box(see Fig. <ref>). The considered geometries are obtained by scaling down the circle and the popcorn flake depicted in Fig. <ref> by a factor of 0.25. In both cases,the position of the bodies is controlled by the value of the parameter ℓ (i.e., the distance between the center of the body and a selected vertex of the box). As the value of ℓ varies, the objects move and their relative position with respect to the backgroundmesh changes. In this process, arbitrary small cut cells can show up, leading to potential conditioning problems. 
In this experiment, we consider a background mesh with element size h=2^-5.Fig. <ref> shows the condition number estimate of the underlying linear systems varying the position of the physical domain . The plot is generated using a sample of 200 different values of ℓ. It is observedthat the condition numbers are very sensitive to the positionof the domain for the standard unfitted fe formulation, whereas the condition numbers are nearly independent of the position whenusing the aggregation-based fe spaces. Note that the standard formulation leads to very high condition numbers specially for secondorder interpolations and the 3D case. Moving from 1st order to 2nd order leads to a rise in the condition number between 10 and15 orders of magnitude. The same disastrous effect is observed when moving from 2D to 3D.In contrast, the condition number is nearly insensitive to the number of space dimensions, and mildlydepends on the interpolation order (as for body-fitted methods) when using aggregation-based fe spaces. From the results shown in Fig. <ref>, it is clear that the aggregation-based fe spaces are able to dramatically improve the condition numbers associated with the standard unfitted fe formulation. The next question is how cell aggregation impacts on the accuracy of the numerical solution. In order to quantify this effect, Fig. <ref> shows the computed energy norm of the discretization error. It is observed that the error is slightly increased when using the aggregation-based fespaces. This is because the considered meshes in this moving domain experiment are rather coarse. The error increments become negligible for finer meshes (see Section <ref> below). In this example, we cannot compute a solution for all the values of ℓ for3Dand2nd order interpolation without using cell aggregation (see the discontinuous fine red curve in Fig. <ref>). The condition numbers are so high (order 10^30) thatthe system is intractable,even with a sparse direct solver, using standard double precision floating point arithmetic.§.§ Convergence test The second experiment is devoted to study the asymptotic behavior of the methods as the mesh is refined. To this end, we consider the geometries and bounding boxes displayed in Fig. <ref>, which are discretized with uniform Cartesian meshes with element sizes h=2^-m,m=3,4,…,9 in 2D, and m=3,4,5,6 in 3D. First, we study howthe size of the aggregates scales when the mesh is refined. Fig. <ref> shows that the aggregate size is 2h in 2D, whereas it tends to 3h in the 3D case. These results agrees with the theoretical bounds for the aggregate size discussed in Section <ref>.Then, we study the scaling of the condition numbers with respect to the mesh size (see Fig. <ref>).For the aggregation-based fe spaces, the condition numbers of the stiffness matrix scales as h^-2, like in standard fe methods for body fitted meshes. This confirms the theoretical result of Corollary <ref>. Conversely, the condition number has an erratic behavior if cell aggregation is not considered. The reason is that, as shown in the previous experiment (cf. Section <ref>), the standard unfitted fe formulation leads to condition numbers very sensitive to the position of the unfitted boundary. Several configurations of cut cells can show up when the mesh is refined, leading to very different condition numbers. As in the previous experiment, the condition number is very sensitive to the interpolation order and number of space dimensions for the standard unfitted fe formulations. 
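The orders of magnitude quoted in this experiment can be checked with elementary arithmetic. The snippet below (a back-of-the-envelope check of ours, not a conditioning analysis) evaluates the estimate κ ∼ ε^{-d(2q+1-2/d)} recalled in the setup for the tolerances used here, and shows why a condition number of order 10^30 is hopeless in double precision, since κ times the machine epsilon far exceeds one.

import numpy as np

for d, eps in ((2, 1e-6), (3, 1e-3)):
    for q in (1, 2):
        expo = d * (2 * q + 1 - 2.0 / d)   # exponent in kappa ~ eps^(-expo)
        print(f"d={d}, q={q}: kappa ~ eps^(-{expo:g}) = {eps ** (-expo):.1e}")

eps_machine = np.finfo(float).eps          # about 2.2e-16
print("kappa = 1e30  =>  kappa * eps_machine =", 1e30 * eps_machine)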
This effect is reverted when using cell aggregates. Finally, we study the convergence of the discretization error. To this end, Figs. <ref> and <ref> report the discretization errors measured both in the energy norm and in the L^2 norm. Like in the previous experiment (cf. Section <ref>), the discrete system could not be solved when using the finest meshes in 3D for 2nd order interpolation without using cell-aggregation due to extremely large condition numbers (see the incomplete curve in Fig. <ref>). The results show that the error increment associated with the aggregation-based fe space becomes negligible when the mesh is refined. Moreover, the theoretical results of Section <ref> are confirmed: optimal order of convergence is (asymptotically) achieved in all cases when using aggregation-based fe spaces both for the energy and the L^2 norm, for 1st and 2nd order interpolations, and for 2 and 3 spatial dimensions.§ CONCLUSIONSWe have proposed a novel technique to construct fe spaces designed to improve the conditioning problems associated with unfitted fe methods. The spaces are defined using cell aggregates obtained by merging the cut cells to interior cells. In contrast to related methods in the literature, the proposed technique is easy to implement in existing fe codes (it only involves cell-wise constraints) and it is general enough to deal with both continuous and dg formulations. Another novelty with respect to previous works is that we include the mathematical analysis of the method. For elliptic problems, we have proved that 1) the novel fe space leads to condition numbers that are independent from small cut cells, 2) the condition number of the resulting system matrix scales with the inverse of the square of the size of the background mesh as in standard fe methods, 3) the penalty parameter of Nitsche's method is bounded from above, and 4) the optimal fe convergence order is recovered. These theoretical results are confirmed with 2D and 3D numerical experiments using both first and second order interpolations.
http://arxiv.org/abs/1709.09122v1
{ "authors": [ "Santiago Badia", "Francesc Verdugo", "Alberto F. Martín" ], "categories": [ "cs.CE", "math.NA" ], "primary_category": "cs.CE", "published": "20170926163926", "title": "The aggregated unfitted finite element method for elliptic problems" }
Rao-Blackwellization to Give Improved Estimates inMulti-List StudiesKyle Vincent[Independent Researcher and Consultant, Ottawa, Ontario, CANADA,email: [email protected]] December 30, 2023 ======================================================================================================================gobble Sufficient statistics are derived for the population size and capture-effect parameters of commonly used closed population mark-recapture models. Rao-Blackwellization procedures for improving on estimators that are not functions of such statistics are presented. As Rao-Blackwellization entails enumerating all sample reorderings consistent with the sufficient statistic, Markov chain Monte Carlo resampling procedures are provided to approximate the computationally intensive estimators. Simulation studies and empirical applications demonstrate that significant improvements for such estimators can be made with the strategy. Supplementary materials for this article are available online. The code will be made publicly available to facilitate further research. Keywords: Mark-recapture; Markov chain Monte Carlo; Rao-Blackwell theorem; Resampling; Sufficient statistic; Unit labels.arabic § INTRODUCTIONThe field of mark-recapture is a well-studied topic. Amongst the comprehensive sources of information on the subject are <cit.> and <cit.>. Mark-recapture has numerous applications to wildlife studies, and more recently for multiple systems estimation which is based on administrative lists of hidden populations like human-trafficking victims and drug-users <cit.>. In this paper, sufficient statistics for the population size and parameters of closed population mark-recapture models are derived. Some mark-recapture estimators, which may or may not already be functions of these sufficient statistics, can result in extreme or unstable estimates of the population size. This is especially the case when the number of captures and/or overlap between capture occasions is nil or small, which is commonly seen in the context of multiple systems estimation <cit.>. However, some commonly used mark-recapture estimators which are not functions of these sufficient statistics may result in stable estimates when Rao-Blackwellized, in particular by dampening extreme estimates so they are closer to their expectation, thereby providing a significant increase in their precision.In mark-recapture studies the model is usually taken to be multinomial so that the population units are distributed among all possible capture histories and estimation methods are commonly carried out with respect to such a model; see <cit.>. In this paper, preliminary estimators are based on a variety of strategies that may or may not be directly based on the model. Such estimators do not typically depend on the unit labels. However, Rao-Blackwellization/improved estimation is based on the unit labels and the resulting improved estimators are obtained via weighing over estimates corresponding to all possible capture histories (sample reorderings) consistent with the sufficient statistic, which in turn is based on the assumed multinomial mark-recapture model.In the event there are large sample sizes and/or a large number of sampling occasions, evaluating the improved estimators may be computationally difficult as there will likely be a prohibitively large number of reorderings to tabulate. 
A practical method to approximate the estimators with a Markov chain Monte Carlo (MCMC) resampling procedure is therefore provided for each mark-recapture model investigated in this paper.The paper is organized as follows. Section 2 discusses the mark-recapture estimators used in the simulations along with the simulation study details. Section 3 outlines the estimation and variance estimation procedure for the Rao-Blackwellized estimators and their corresponding approximations with the use of a resampling algorithm. Section 4 introduces some of the nomenclature used in the paper. Results corresponding to the null, heterogeneity, behavioural, and time-effects mark-recapture models are then presented in Sections 5, 6, 7, and 8, respectively. Within each section, the notation is first introduced, followed by the sufficiency result, resampling algorithm, and results from a simulation study. The paper concludes with a discussion in Section 9. The approach outlined in this paper has been developed for the stratified mark-recapture setup, that is, where capture probabilities are assumed to be homogeneous within each stratum. The theoretical details and simulation study results can be found in the supplementary materials. Empirical applications of the Rao-Blackwell inferential strategy can also be found in the supplementary materials. § MARK-RECAPTURE ESTIMATORS AND SIMULATION STUDY DETAILS All simulation studies are performed in the R programming language <cit.>.Several classes of estimators are explored and listed as follows. * The bias-adjusted Lincoln-Petersen estimator <cit.> and corresponding variance estimator presented in <cit.>.* The maximum likelihood estimates based directly on the likelihood corresponding to the model parameters and capture histories; such estimates are detailed in <cit.> and obtained with the RMark package <cit.>.* The loglinear mark-recapture model estimates based on fitting a Poisson regression model, where the population size estimate is derived from the maximum likelihood estimates of the loglinear parameters <cit.>; these estimates are obtained with the Rcapture package <cit.>.* Bayes estimates based on a computationally efficient semi-complete data likelihood approach, which is composed of the product of a complete likelihood which corresponds with the captured units and a marginal likelihood which corresponds with the uncaptured units, and a hybrid approach based on a data augmentation and numerical integration approximation technique applied to the semi-complete likelihood <cit.>; these estimates are obtained with the multimark package <cit.>.* The jackknife estimator detailed by <cit.>; this estimator is obtained with the SPECIES package <cit.>.* The sample coverage estimates, which are based on measures of overlap and dependence between sample capture probabilities <cit.>; these estimates are obtained with the CARE1 package <cit.> and SPECIES package <cit.>.All simulation studies are based on runs of three samples. A total of 2500 simulation runs are obtained for each study as these are found to sufficiently approximate the mean and variance of the preliminary estimators. The number of resamples used to approximate the mean and variance of the set of estimators corresponding to the sample reorderings consistent with the sufficient statistic for each set of three selected samples is based on Gelman-Rubin statistics and examination of visual plots of the chains <cit.>. 
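As a concrete example of the first estimator class listed above, a common choice for the bias-adjusted Lincoln-Petersen estimator is the Chapman form together with the Seber variance estimator; the sketch below implements these standard two-occasion formulas (this is our reading of the cited estimators, not code taken from the references, and the counts are hypothetical).

def chapman_lincoln_petersen(n1, n2, m):
    """Bias-adjusted (Chapman) two-sample estimator and Seber variance estimator.
    n1, n2: numbers captured on the two occasions; m: number captured on both."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var_hat = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var_hat

# hypothetical counts: 80 and 75 captures with an overlap of 12
print(chapman_lincoln_petersen(80, 75, 12))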
The number of resamples used in each simulation is noted in the corresponding simulation study subsections. Nominal 95% confidence/probability intervals are based on the corresponding likelihood-, loglinear Poisson-, posterior-, or bootstrap estimation-based standard errors and central limit theorem. Alternative log-transformation-based intervals corresponding to the maximum likelihood and sample coverage approach are based on the method outlined in <cit.>, and alternative equal-tailed 95% probability intervals corresponding to the Bayes estimators are based on the posterior distributions. The alternative intervals corresponding to the improved estimators are based on averaging over the percentiles of the confidence limits or posterior distribution corresponding to the sampled reorderings, as explained in the following section. § RAO-BLACKWELL/IMPROVED ESTIMATION Traditional/preliminary inference for population size and mark-recapture model parameters are typically based on ignoring unit labels and then basing inference directly on statistics of sample sizes, frequency counts corresponding to each possible capture history, and overlap measures between the samples; see, for example, <cit.> for further details. Improved inference, as presented in this paper, is based on sample reorderings which in turn are based on the unit labels and full set of all possible individual capture histories; sufficiency results based on the labels-based likelihoods are detailed in full in the following sections for the mark-recapture models explored in this paper. The purpose of using the unit labels is to aid in and simplify evaluation of the improved estimators, since combinatorial calculations corresponding to the capture histories when ignoring unit labels are not required, and hence computational issues such as machine zeros can be avoided (cf. the labels-based and traditional likelihoods in the following sections).Suppose a set of samples are obtained for a mark-recapture study under a particular capture model. A sample reordering of the original samples is a hypothetical re-assigning of the sample units to the samples, so the number of captures over the study is equal to that originally observed and hence which gives rise to capture histories different than those originally observed; Table <ref> provides an example for a sample selected under the null M_0 model (that is, where capture probabilities are equal and constant for all units across sampling occasions). Define d_R to be a sufficient statistic for a population quantity or model parameter γ, such as the population size or a capture probability corresponding to the mark-recapture model,. For the setup considered in this paper the sample reorderings that are mapped to the same d_R as that which corresponds with the original sample orderings are consistent with the sufficient statistic and therefore are those which contribute to improved estimation. Let ℛ be the set of all such sample reorderings. Index the sample reorderings as 1,...,|ℛ|. Define d_0^[r] to be the hypothetical samples corresponding to reordering r; for example, for some r the set d_0^[r] would be the samples displayed in the right side of the example presented in Table <ref>. Define γ̂_0^[r] to be the estimate of γ based on the set of samples in d_0^[r]. Finally, define p(d_0^[r]) to be the probability of observing the reordered samples under the assumed capture model based on the observed unit labels and in the order the samples are selected. 
It can be seen that with the sufficiency results based on the labels-based likelihoods given in this paper, under any of the models p(d_0^[r]) is uniform amongst all reorderings which are consistent with the corresponding sufficient statistic; details and examples are provided in the following sections. Hence, the improved estimate is the arithmetic mean of the estimates that correspond to sample reorderings that could have been observed under the assumed mark-recapture model and which are consistent with the sufficient statistic; the improved estimate of γ isγ̂_RB =E[γ̂_0|d_r] =∑_iϵℛ(γ̂_0^[r]p(d_0^[r]|d_r)) =∑_iϵℛ(γ̂_0^[r]p(d_0^[r]))/∑_iϵℛp(d_0^[r]) =∑_iϵℛγ̂_0^[r]/|ℛ|. To estimate the variance of the improved estimator, the decomposition of variances givesvar(γ̂_RB)=var(γ̂_0)-E[var(γ̂_0|d_r)].If v̂âr̂(γ̂_0) is an estimator of var(γ̂_0) then an estimator of var(γ̂_RB) isv̂âr̂(γ̂_RB)=E[v̂âr̂(γ̂_0)|d_r]-var(γ̂_0|d_r).This estimator is the difference of the expectation of the estimated variance of the preliminary estimator over all consistent reorderings and the variance of the preliminary estimator over all consistent reorderings. Although these estimates are unbiased, they can result in negative estimates of the variance. For such a case, a conservative approach is to set the estimate of var(γ̂_RB) equal to E[v̂âr̂(γ̂_0)|d_r]. This approach is utilized in the simulation studies.With the Markov chain Monte Carlo (MCMC) resampling procedures outlined in this paper, approximations to the Rao-Blackwellized estimators can be obtained as follows. Suppose the number of resamples/length of the MCMC chain is M. Define q(d_0^[r]) to be the probability of selecting sample reordering r under the proposal distribution. Suppose that at step m of the chain the most recently accepted sample reordering drawn from the proposal distribution is d_0^[r^*] for some r^*. Draw a candidate sample reordering, d_0^[r] say, from the proposal distribution. With probability α=min{p(d_0^[r])/p(d_0^[r^*])q(d_0^[r^*])/q(d_0^[r]),1}=min{q(d_0^[r^*])/q(d_0^[r]),1} accept the candidate sample reordering d_0^[r] for step m, and with probability 1-α reject the sample reordering and retain d_0^[r^*] for step m.Let γ̂^(m)_0 be the preliminary estimate of γ obtained with the most recently accepted sample reordering selected at step m. An enumerative estimate of γ̂_RB is thenγ̃_RB=∑_m=1^Mγ̂^(m)_0/M.Similarly, let v̂âr̂(γ̂^(m)_0) be the estimate of the variance of γ̂_0 obtained with the most recently accepted sample reordering selected at step m. An enumerative estimate of v̂âr̂(γ̂_RB) is thenṽãr̃(γ̂_RB)=Ẽ[v̂âr̂(γ̂_0)| d_r]-ṽãr̃(γ̂_0| d_r)=1/M∑_m=1^Mv̂âr̂(γ̂_0^(m))-1/M∑_m=1^M (γ̂_0^(m)-γ̃_RB)^2. § NOMENCLATURE The following notation is used in this paper. Define N to be the population size and K the number of sampling occasions for the mark-recapture study. Define U={1,2,...,N} to be the set of population unit labels, s_k to be the set of units captured on sampling occasion k, n_k=|s_k|, s=∪_k=1^K s_k, and n=|s|. Define p_ik to be the capture probability of unit i on sampling occasion k, C_i,k=1 if unit i is captured on sampling occasion k and zero otherwise, and C_i=∑_k=1^KC_i,k the total number of times unit i is captured over the study. For the purposes of deriving the sufficiency results based on the labels-based likelihoods, and without loss of generality, the unit labels of s are taken to be 1,2,...,n. Define I[s⊆ U]=1 if s⊆ U and 0 otherwise, where in this setup U is considered to be a function of N. 
Finally, define {x_ω} to be the number of individuals exhibiting each capture history ω. For example, if K=3 and ω=(1,0,1) then x_ω is the number of individuals captured on occasions 1 and 3, but not on occasion 2.§ NULL MODEL Under the M_0 model, p_ik=p for all i=1,2,...,N and k=1,2,...,K. Define C to be the total number of captures in the study. §.§ Sufficiency Results When considering unit labels, define the original data to be d_0={s_k:k=1,...,K} and the reduced data to be d_R={s,C}.Theorem: The reduced data D_R is sufficient for (N,p).Proof: For any (d_0, p), the labels-based likelihood isP_N(D_0=d_0)=P(s_1,s_2,...,s_K)I[s⊆{1,2,...,N}]=∏_k=1^K[p^∑_i ϵ sC_i,k (1-p)^∑_iϵ s(1-C_i,k)(1-p)^(N-n)]I[s⊆{1,2,...,N}]=p^∑_k=1^K∑_i ϵ sC_i,k (1-p)^∑_k=1^K∑_iϵ s (1-C_i,k)(1-p)^(KN-Kn) I[s⊆{1,2,...,N}]=[p^C(1-p)^(KN-C)]I[s⊆{1,2,...,N}]=g(N,p,d_R)h(d_0)where h(d_0)=1. Therefore, by the Neyman-Factorization Theorem D_R is sufficient for (N,p).As presented in <cit.>, when ignoring unit labels the likelihood for (N,p) based on the capture histories isP({x_ω}|N,p) =N!/[∏_ωx_ω!](N-n)!p^C(1-p)^(KN-C)=N!/(N-n)!p^C(1-p)^(KN-C)1/∏_ωx_ω!=g(N,p,T({x_ω}))h({x_ω})where h({x_ω})=1/∏_ωx_ω!. Therefore, by the Neyman Factorization theorem, T({x_ω})=(n,C) is the analogous sufficient for (N,p). Table <ref> in Section <ref> depicts a sample selected under the M_0 model (where A corresponds with unit 1, B with unit 2, and C with unit 3), where the probability of observing the original data is p^6(1-p)^(3N-6), and a sample reordering that is consistent with the reduced data. §.§ Resampling Procedure A Metropolis-Hastings MCMC chain <cit.> is used to approximate the improved estimators with the following proposal distribution.Step 1: For each of the n individuals in s, assign them a capture to a randomly chosen sample and tabulate the capture history matrix.Step 2: Distribute the remaining ∑_k=1^Kn_k - n captures at random to the zero entries in the capture history matrix.In the first step and for each captured unit i, there are C_i possible ways of first assigning this unit to one of the C_i samples in which they are captured in the sample reordering. Therefore, the total possible number of ways of selecting the proposed sample reordering is ∏_i ϵ s C_i. For the accept/reject portion of the chain, the sample reordering is accepted with probability min{q(d_0^[r^*])/q(d_0^[r]),1}=min{∏_i ϵ s C_i^*/∏_i ϵ s C_i, 1} where d_0^[r^*] and d_0^[r^] respectively correspond to the most recently accepted sample reordering and proposed sample reordering.§.§ Simulation Study The population size is set to N=500 and capture probability to p=0.15. The following estimators, which are already functions of the sufficient statistic, are used in the simulation study: the maximum likelihood estimator based directly on the capture histories, maximum-likelihood estimator based on a Poisson log-likelihood, and Bayes estimator, all three of which are based on the assumption of no capture effects. The following estimators, which are not functions of the sufficient statistic and can therefore benefit from the Rao-Blackwellization procedure detailed in this section, are also used in the simulation study: the bias-adjusted Lincoln-Petersen estimator, the jackknife estimator, and the Chao-Tsay sample coverage approach estimator. Table <ref> presents the results from the simulation study. Improved estimates are based on 2500 resamples. 
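To make the M_0 procedure concrete, the following sketch (our own minimal implementation of the two proposal steps and the accept/reject rule described above, using the two-occasion Chapman estimator as an arbitrary preliminary estimator that is not a function of {s,C}) approximates a Rao-Blackwellized estimate by averaging the preliminary estimates along the chain; the toy data are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def propose_m0(n, K, C):
    """Propose a reordering consistent with d_R = {s, C} under M_0:
    step 1 gives each of the n captured units one capture in a random occasion,
    step 2 scatters the remaining C - n captures over the zero entries."""
    H = np.zeros((n, K), dtype=int)
    H[np.arange(n), rng.integers(K, size=n)] = 1
    zeros = np.flatnonzero(H == 0)
    H.flat[rng.choice(zeros, size=C - n, replace=False)] = 1
    return H, np.prod(H.sum(axis=1))          # proposal weight prod_i C_i

def chapman(H):
    """Preliminary estimator: Chapman estimate from the first two occasions."""
    n1, n2 = H[:, 0].sum(), H[:, 1].sum()
    m = int((H[:, 0] * H[:, 1]).sum())
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def rao_blackwell_m0(H_obs, M=2500):
    n, K = H_obs.shape
    C = int(H_obs.sum())
    H_cur, w_cur = H_obs, np.prod(H_obs.sum(axis=1))
    estimates = []
    for _ in range(M):
        H_new, w_new = propose_m0(n, K, C)
        # alpha = min{prod_i C_i (current) / prod_i C_i (proposed), 1}
        if rng.random() < min(w_cur / w_new, 1.0):
            H_cur, w_cur = H_new, w_new
        estimates.append(chapman(H_cur))
    return np.mean(estimates)

# hypothetical observed data: 60 distinct units, 90 captures over K = 3 occasions
H_obs, _ = propose_m0(60, 3, 90)
print(rao_blackwell_m0(H_obs))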
The acceptance rate of the MCMC chain is 83.14%.l*10rr Mean, variance, mean-squared error, coverage rates, average length of confidence intervals, coverage rates of alternative confidence intervals, average length of alternative confidence intervals, and percentage of variance estimates which are negative, for preliminary and improved estimators. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est.EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est. MLE509 5,972 6,052 0.943293 0.951299NAMLE LL 501 5,390 5,391 0.933283 NA NA NABayes511 5,735 5,856 0.950296 0.950295NA LP 501 19,74719,7480.896489 NA NA NAJK 464 3,696 5,016 0.751113 NA NA NASC CT515 9,895 10,1240.950399 0.963414NA LP RB500 5,297 5,297 0.931283 NA NA 0% JK RB464 865 2,152 0.708109 NA NA 90.56% SC CT RB 514 6,136 6,338 0.964343 NA NA 0%§ HETEROGENEITY MODEL Under the M_h model, p_ik=p_i for all i=1,2,...,N and k=1,2,...,K. Define C=(C_1,C_2,...,C_n).§.§ Sufficiency Results When ignoring unit labels, <cit.> present the likelihood for (N,p) via conceptualizing the capture probabilities as a random sample from a probability distribution. For the setup presented in this paper, the model is parameterized with N capture probabilities p_1, ..., p_N to facilitate a Rao-Blackwellization improvement procedure.When considering unit labels, define the original data to be d_0={s_k:k=1,...,K} and the reduced data to be d_R={s,C}.Theorem: The reduced data D_R is sufficient for (N, p), where p=(p_1, p_2, ..., p_N).Proof: For any (d_0,p), the labels-based likelihood isP_N(D_0=d_0)=P(s_1,s_2,...,s_K)I[s⊆ U]=∏_k=1^K[∏_i ϵ s[p_i^C_i,k(1-p_i)^(1-C_i,k)]] ∏_k=1^K[∏_i=n+1^N(1-p_i)]I[s⊆ U]=∏_i ϵ s[p_i^C_i(1-p_i)^(K-C_i)] [∏_i=n+1^N(1-p_i)^K]I[s⊆ U]=g(N,p,d_R)h(d_0)where h(d_0)=1. Therefore, by the Neyman-Factorization Theorem D_R is sufficient for (N,p). Table <ref> depicts a sample selected under the M_h model (where A corresponds with unit 1, B with unit 2, and C with unit 3), where the probability of observing the original data is p_1^2(1-p_1) × p_2^2(1-p_2) × p_3^2(1-p_3) ×∏_i=4^N(1-p_i)^3, and a sample reordering that is consistent with the reduced data. §.§ Resampling ProcedureSample reorderings are selected completely at random with the following algorithm to approximate the improved estimators. Essentially, for each captured unit their corresponding capture history is permuted amongst the K samples to give rise to a sampled reordering. Under an MCMC setup, all sample reorderings have equal probability of being selected under the proposal distribution. Therefore, an accept/reject step is avoided. §.§ Simulation StudyThe population size is set to N=500 and capture probabilities are generated according to a Beta (3,10) distribution. The following estimators, which are already functions of the sufficient statistic, are used in the simulation study:Chao's lower bound and Poisson estimator based on a Poisson log-likelihood, both of which are based on the assumption of heterogeneity effects, the jackknife estimator, and the Chao-Bunge sample coverage approach estimator. The following estimators, which are not functions of the sufficient statistic and can therefore benefit from Rao-Blackwellization, are also used in the simulation study: the Bayes estimator, based on the assumption of heterogeneity effects, and the Chao-Tsay sample coverage approach estimator. 
Table <ref> presents results from the simulation study. Improved estimates are based on 1000 resamples.l*10rr Mean, variance, mean-squared error, coverage rates, average length of confidence intervals, coverage rates of alternative confidence intervals, average length of alternative confidence intervals, and percentage of variance estimates which are negative, for preliminary and improved estimators. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est.EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est. Chao 428 1,275 6,501 0.421134 NA NA NAPoisson455 5,112 7,096 0.779266 NA NA NAJK 560 1,018 4,668 0.553130 NA NA NASC CB444 2,641 5,785 0.762190 NA NA NA Bayes549 11,26013,6170.962474 0.953466NASC CT440 1,970 5,561 0.654172 0.775176NA Bayes RB 547 8,703 10,9340.985463 0.9874650%SC CT RB 440 1,959 5,554 0.643171 0.7821760% § BEHAVIOURAL MODEL Under the M_b model, p_ik=p until first capture and p_ik=ϕ p for any recapture for all i=1,...,N and k=1,...,K, where ϕ is the behavioural effect parameter. Define m_i,k=1 if unit i is captured at least once before sampling occasion k and 0 otherwise (by definition, m_i,1=0), and F_k and R_k respectively to be the number of first time captures and recaptures on sampling occasion k. Define F=∑_k=1^K∑_k'<kF_k', the sum over all sampling occasions of the number of previously captured individuals, and R=∑_k=1^KR_k, the total number of recaptures.§.§ Sufficiency Results When considering unit labels, define the original data to be d_0={s_k:k=1,...,K} and the reduced data to be d_R={s,F,R}. Theorem: The reduced data D_R is sufficient for (N,ϕ,p).Proof: For any (d_0,p,ϕ), the labels-based likelihood isP_N(D_0=d_0)=P(s_1,...,s_K)I[s⊆ U]=∏_k=1^K[∏_i ϵ s( (ϕ^m_i,kp)^C_i,k(1-ϕ^m_i,kp)^(1-C_i,k))(1-p)^(N-n)] I[s⊆ U]=∏_k=1^K[ϕ^∑_i ϵ s(m_i,kC_i,k)p^∑_i ϵ sC_i,k∏_i ϵ s[(1-ϕ^m_i,kp)^(1-C_i,k)](1-p)^(N-n)]I[s⊆ U]=∏_k=1^K[ϕ^R_kp^F_kp^R_k] ∏_k=1^K[(1-p)^∑_k' > kF_k'(1-p)^N-n(1-ϕ p)^(∑_k' < kF_k'-R_k)]I[s⊆ U]=(ϕ p)^Rp^n(1-ϕ p)^(F-R)(1-p)^(KN-n-F)I[s⊆ U]=g(N,p,ϕ,d_R)h(d_0)where h(d_0)=1. Therefore, by the Neyman-Factorization Theorem D_R is sufficient for (N,ϕ,p). As presented in <cit.>, when ignoring the unit labels the likelihood for (N,ϕ,p) based on the capture histories isP({x_ω}|N,p,ϕ)=N!/[∏_ωx_ω!](N-n)!(ϕ p)^Rp^n (1-p)^(KN-n-F)(1-ϕ p)^(F-R)=N!/(N-n)!(ϕ p)^Rp^n (1-p)^(KN-n-F)(1-ϕ p)^(F-R)1/∏_ωx_ω!=g(N,p,ϕ,T({x_ω}))h({x_ω})where h({x_ω})=1/∏_ωx_ω!. It can be seen that by the Neyman Factorization theorem, the analogous sufficient statistic for (N,ϕ,p) is T({x_ω})={n,F,R}.Rao-Blackwellization based on the sufficient statistic {s,F,R} poses challenges, primarily due to the cumulative sum of the first time captures that defines the statistic F. Therefore, for the purposes of Rao-Blackwellization, a finer and easier sufficient statistic to work with for evaluating/proposing consistent sample reorderings is used to obtain improved estimates, namely {s,F_k,R_k:k=1,...,K}.Table <ref> depicts a sample selected under the M_b capture model (where A corresponds with unit 1, B with unit 2, and C with unit 3), where the probability of observing the original data is ϕ^3p^6(1-ϕ p)^2(1-p)^(3N-8), and a sample reordering that is consistent with the reduced data. 
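The statistics entering this reduced data are easy to tabulate from a 0/1 capture-history matrix; the helper below (our own illustration, with a hypothetical three-unit example in the spirit of the table referred to above) returns the per-occasion first captures F_k and recaptures R_k together with the summaries F and R that appear in the likelihood.

import numpy as np

def mb_statistics(H):
    """First captures F_k and recaptures R_k per occasion, plus
    F = sum_k sum_{k'<k} F_{k'} and R = sum_k R_k, from a 0/1 matrix H (units x occasions)."""
    n_units, K = H.shape
    seen = np.zeros(n_units, dtype=bool)
    F_k = np.zeros(K, dtype=int)
    R_k = np.zeros(K, dtype=int)
    for k in range(K):
        caught = H[:, k] == 1
        F_k[k] = np.count_nonzero(caught & ~seen)
        R_k[k] = np.count_nonzero(caught & seen)
        seen |= caught
    F = sum(int(F_k[:k].sum()) for k in range(K))
    R = int(R_k.sum())
    return F_k, R_k, F, R

H = np.array([[1, 1, 0],   # hypothetical unit A: first capture, then recaptured
              [0, 1, 1],   # hypothetical unit B
              [1, 0, 1]])  # hypothetical unit C
print(mb_statistics(H))    # here F_k=[2,1,0], R_k=[0,1,2], F=5, R=3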
§.§ Resampling ProcedureSample reorderings are selected completely at random with the following algorithm to approximate the improved estimators.Step 1: For k=1, sample F_1 units at random from s and assign them to sample 1. Denote the units selected for sample 1 at this step as s_1'.For each k=2,...,K, sample F_k units at random from s∖∪_j=1^k-1 s_j' and assign them to sample k.Denote the units selected for sample k at this step as s_k'.Step 2: For each k=2,...,K, sample R_k units at random from ∪_j=1^k-1 s_j ∖ s_k' and assign them to sample k.Under an MCMC setup, all sample reorderings have equal probability of being selected under the proposal distribution. Therefore, an accept/reject step is avoided. §.§ Simulation StudyThe population size is set to N=500, capture probability to p=0.3, and behavioural effect to ϕ=1.15. The following estimators, which are already functions of the sufficient statistic, are used in the simulation study: the maximum likelihood estimator based directly on the capture histories, maximum-likelihood estimator based on a Poisson log-likelihood, and Bayes estimator, all three of which are based on the assumption of a behavioural effect. The following estimators, which are not functions of the sufficient statistic and can therefore benefit from Rao-Blackwellization, are also used in the simulation study: the Chao-Bunge sample coverage approach estimator and the Chao-Tsay sample coverage approach estimator. Table <ref> presents results from the simulation study. Improved estimates are based on 1000 resamples.l*10rr Mean, variance, mean-squared error, coverage rates, average length of confidence intervals, coverage rates of alternative confidence intervals, average length of alternative confidence intervals, and percentage of variance estimates which are negative, for preliminary and improved estimators. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est.EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est. MLE508 4,430 4,495 0.914247 0.950258NAMLE LL 502 4,035 4,040 0.902236 NA NA NABayes527 5,159 5,909 0.961295 0.950289NA SC CB468 987 2,001 0.810119 NA NA NASC CT467 1,108 2,165 0.767129 0.874132NA SC CB RB 469 607 1,596 0.69895 NANA 0%SC CT RB 468 597 1,627 0.69496 0.939 1320% § TIME-EFFECTS MODEL Under the M_t model, p_ik=pe_k for all i=1,...,N and k=1,...,K, where e=(e_1,e_2,...,e_K) are the time-effect parameters. §.§ Sufficiency Results When considering unit labels, define the original data to be d_0={s_k:k=1,...,K} and the reduced data to be d_R={s,n_k:k=1,...,K}.Theorem: The reduced data D_R is sufficient for (N, p, e).Proof: For any (d_0, p, e), the labels-based likelihood isP_N(D_0=d_0)=P(s_1,...,s_K)I[s⊆{1,...,N}]=∏_k=1^K[∏_i ϵ s((pe_k)^C_i,k(1-pe_k)^(1-C_i,k))(1-pe_k)^(N-n)]I[s⊆{1,...,N}]=∏_k=1^K[(pe_k)^n_k(1-pe_k)^(n-n_k)(1-pe_k)^(N-n)]I[s⊆{1,...,N}]=∏_k=1^K[(pe_k)^n_k(1-pe_k)^(N-n_k)]I[s⊆{1,...,N}]=g(N,p,e,d_R)h(d_0)where h(d_0)=1. Therefore, by the Neyman-Factorization Theorem D_R is sufficient for (N,p,e). As presented in <cit.>, the labels-based likelihood for (N,p,e) based on the capture histories isP({x_ω}|N,p,e)=N!/[∏_ωx_ω!](N-n)!∏_k=1^K[(pe_k)^n_k(1-pe_k)^(N-n_k)]=N!/(N-n)!∏_k=1^K[(pe_k)^n_k(1-pe_k)^(N-n_k)]1/∏_ωx_ω!=g(N,p,e,T({x_ω}))h({x_ω})where h({x_ω})=1/∏_ωx_ω!. 
Therefore, by the Neyman Factorization theorem, T({x_ω})=(n_1,...,n_K) is the analogous sufficient statistic for (N,p,e). Table <ref> depicts a sample selected under the M_t model (where A corresponds with unit 1, B with unit 2, and C with unit 3), where the probability of observing the original data is (pe_1)^2(1-pe_1)^(N-2)× (pe_2)^2 (1-pe_2)^(N-2)× (pe_3)(1-pe_3)^(N-1), and a sample reordering that is consistent with the reduced data. §.§ Resampling Procedure A Metropolis-Hastings MCMC chain <cit.> is used to approximate the improved estimators with the following symmetric proposal distribution.Step 1: Sample an entry in the capture matrix which corresponds with a capture (i.e. a one) and assign it a miss (i.e. a zero).Step 2: For those units with a corresponding miss for this column (sample), choose one at random and assign it a capture.Step 3: Check that all units are captured at least once in the study.For the accept/reject portion of the chain, the sample reordering is only rejected if any units are missing from the final sample.§.§ Simulation Study The population size is set to N=500 and capture probabilities to pe_1=0.30, pe_2=0.20, and pe_3=0.10. The following estimators, which are already functions of the sufficient statistic, are used in the simulation study: the maximum likelihood estimator based directly on the capture histories, maximum-likelihood estimator based on a Poisson log-likelihood, and Bayes estimator, all three of which are based on the assumption of time effects. The following estimators, which are not functions of the sufficient statistic and can therefore benefit from the Rao-Blackwellization procedure detailed in this section, are also used in the simulation study: the bias-adjusted Lincoln-Petersen estimator, the jackknife estimator, the Chao-Bunge sample coverage approach estimator, and the Chao-Tsay sample coverage approach estimator. Table <ref> presents the results from the simulation study. Improved estimates are based on 15,000 resamples.The acceptance rate of the MCMC chain is 33.96%.l*10rr Mean, variance, mean-squared error, coverage rates, average length of confidence intervals, coverage rates of alternative confidence intervals, average length of alternative confidence intervals, and percentage of variance estimates which are negative, for preliminary and improved estimators. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est.EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est. MLE504 2,676 2,689 0.959205 0.951208NA MLE LL 504 2,591 2,603 0.960203 NA NA NA Bayes508 2,712 2,769 0.966209 0.950208NALP 502 5,047 5,050 0.940268 NA NA NA JK 592 2,026 10,4900.163133 NA NA NA SC CB541 15,92817,6410.973383 NA NA NASC CT508 5,418 5,482 0.954290 0.956299NALP RB501 2,618 2,619 0.950201 NA NA 0%JK RB592 884 9,291 0.117109 NA NA 0%SC CB RB 542 3,676 5,472 0.991331 NA NA 41.10% SC CT RB 510 2,760 2,856 0.978228 0.9943000% § DISCUSSION In this paper the mathematical details for Rao-Blackwellizing estimates of population size and mark-recapture model parameters are presented for several closed mark-recapture models. 
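The symmetric proposal used for the M_t model above amounts to only a few lines of code; the sketch below is our own illustration (not the authors' implementation): it moves one capture within a column, so the per-occasion counts n_k are preserved, and the move is rejected whenever some observed unit would be left with no captures.

import numpy as np

def propose_mt_swap(H, rng):
    """One symmetric M_t move on a 0/1 history matrix H (units x occasions):
    turn a randomly chosen capture into a miss, then give that occasion's capture
    to a randomly chosen unit currently missed on that occasion. Returns None if
    the move would leave a unit with no captures (i.e. the move is rejected)."""
    H = H.copy()
    ones = np.argwhere(H == 1)
    i, k = ones[rng.integers(len(ones))]
    H[i, k] = 0
    zeros = np.flatnonzero(H[:, k] == 0)
    H[zeros[rng.integers(len(zeros))], k] = 1
    return None if (H.sum(axis=1) == 0).any() else H

rng = np.random.default_rng(0)
H = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 1]])   # hypothetical capture histories
print(propose_mt_swap(H, rng))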
The simulation studies demonstrate that the improved estimators serve as competitive alternatives to estimators which are already functions of the sufficient statistics and hence are naturally an average of estimates corresponding to all the consistent sample reorderings. Further, the empirical study results presented in the supplementary materials demonstrate that, under each mark-recapture model, in most cases significant reductions in the standard errors can be expected for the improved counterparts of such estimators. Approximation of the improved estimates is aided by the likelihood based on the unit labels. If one were to use the likelihood that is not based on the unit labels, then the combinatorial calculations corresponding to the capture histories of sample reorderings would need to be carried out in order to evaluate the improved estimators. Further, the probability of observing sample reorderings would not necessarily be homogeneous, and a more complicated resampling algorithm and Markov chain Monte Carlo procedure may be needed to approximate the improved estimators. These computational burdens motivate the use of the sufficiency result based on the likelihood which makes use of the unit labels. As with all statistical modeling, it is important to base model choice on goodness-of-fit statistics and visual inspection of the residuals. For the mark-recapture models explored in this paper there is a wealth of such model selection aids in <cit.> and <cit.>, and it is suggested to base improved inference on the model that best fits the data. For cases where the chosen model may be incorrect, future work on quantifying the sensitivity of the Rao-Blackwellized estimators to departures from the chosen model would be invaluable. Recall the expression used to approximate the variance of improved estimates; see Expression <ref>. In the simulation studies presented in this paper, negative estimates of the variance of improved estimates are obtained for some estimators. Future work is required to address this issue. Three potential approaches are: 1) use a conservative estimator of the variance of the preliminary estimator, so that E[v̂âr̂(γ̂_0)|d_r] is likely to be larger than var(γ̂_0|d_r) and hence the estimate is more likely to be positive; 2) base a jackknife-type variance estimator solely on a series of Rao-Blackwellized estimates that correspond to a series of subsets of elements removed from the original samples, although this approach will require more computational resources relative to current approaches, since approximations must be evaluated for each of the improved estimators; 3) for Bayesian- and bootstrap-based estimates, where cutoff points of the posterior and resampling distributions can be used to form alternative probability/confidence intervals, base such intervals on these values, as they appear to behave well in the simulation studies. Developments extending the theoretical results presented in this paper to more elaborate mark-recapture models would be useful. Such models may consist of those based on a two- or three-way combination of heterogeneity, behavioural, and time effects, as well as the assumption that the population is open. Another valuable contribution would be to extend the strategy to mark-recapture models that allow for interaction effects between the sampling occasions <cit.>.
Future work is being prepared for such models.biom Supplementary Materials The supplementary materials accompany the main paper “Rao-Blackwellization to Give Improved Estimates in Multi-List Studies". The stratified mark-recapture model and corresponding inference procedure is detailed. Results from applying the Rao-Blackwell inference strategy to empirical data sets are also presented. § STRATIFIED SETUP Suppose there are G strata the population is partitioned into; typically, such partitions would be based on covariate classes like a crossing of age category with gender. Further suppose unit i belongs to some stratum j that can be observed upon capturing unit i, wherej=1,2,...,G. Define p_ik to be the capture probability of unit i on sampling occasion k. Under the stratified setup for the this model, p_ik=p_j for all i=1,2,...,N and k=1,2,...,K, where p_j is the capture probability corresponding with units from stratum j. §.§ Sufficiency Result When considering unit labels, define the original data to be d_0={s_k:k=1,...,K} and the reduced data to be d_R={s,C'}, where s=∪_k=1^K s_k, C'=(C_1',C_2',...,C_G'), C_j'=∑_k=1^K C_j,k, C_j,k is the number of units captured from stratum j on sampling occasion k, and C_i,j,k=1 if unit i from stratum j is captured on sampling occasion k and 0 otherwise. Define n=|s|. Define s_(j) to be the set of units captured from stratum j at least once in the study, and n_(j)=|s_(j)|.Theorem: The reduced data D_R is sufficient for (N,p) where N=(N_1,N_2,...,N_G) (that is, the corresponding sizes of each stratum) and p=(p_1,p_2,...,p_G).Proof: For any (d_0,p), the labels-based likelihood isP_N(D_0=d_0)=P(s_1,s_2,...,s_K)I[s⊆ U]=∏_k=1^K[∏_j=1^Gp_j^∑_i ϵ s_(j)C_i,j,k (1-p_j)^∑_iϵ s_(j) (1-C_i,j,k)(1-p_j)^(N_j-n_(j))]I[s⊆{1,2,...,N}]=∏_j=1^G[p_j^∑_k=1^K∑_i ϵ s_(j)C_i,j,k (1-p_j)^∑_k=1^K∑_iϵ s_(j) (1-C_i,j,k)(1-p_j)^(KN_j-Kn_(j))]I[s⊆{1,2,...,N}]=∏_j=1^G[p_j^∑_k=1^KC_j,k (1-p_j)^∑_k=1^K (n_(j)-C_j,k)(1-p_j)^(KN_j-Kn_(j))]I[s⊆{1,2,...,N}]=∏_j=1^G[p_j^C_j'(1-p_j)^(KN_j-C_j')]I[s⊆{1,2,...,N}]=g(N,p,d_R)h(d_0)where h(d_0)=1. Therefore, by the Neyman-Factorization Theorem D_R is sufficient for (N,p). Table <ref> depicts a sample selected under a stratified setup (where A corresponds with unit 1, B with unit 2, and C with unit 3), where the probability of observing the original data is p_1^4(1-p_1)^(3N_1-4)× p_2^2(1-p_2)^(3N_2-2), and a sample reordering that is consistent with the reduced data. §.§ Resampling Procedure A Metropolis-Hastings MCMC chain <cit.> is used to approximate the improved estimators with the following proposal distribution. Essentially, the proposal distribution based on the null model is adopted and applied to each stratum as follows.Step 1: For each of the n_(j) individuals in s_(j), assign them a capture to a randomly chosen sample and tabulate the capture history matrix.Step 2: Distribute the remaining ∑_k=1^Kn_(j,k) - n_(j) captures at random to the zero entries in the capture history matrix corresponding to the entries from stratum j, where n_(j,k) is the number of units from stratum j captured on sampling occasion k.In the first step and for each captured unit i, there are C_i possible ways of first assigning this unit to one of the C_i samples in which they are captured. Therefore, the total possible number of ways of selecting the proposed sample reordering is ∏_i ϵ s C_i. 
For the accept/reject portion of the chain, the sample reordering is accepted with probability min{q(d_0^[r^*])/q(d_0^[r]),1}=min{∏_i ϵ s C_i^*/∏_i ϵ s C_i, 1} where d_0^[r^*] and d_0^[r] respectively correspond to the most recently accepted sample reordering and proposed sample reordering.§.§ Simulation Study The population size is set to N=500. The total number of sampling occasions is five. The population is partitioned into three strata/covariate classes of size 200, 200, and 100.The respective capture probabilities for individuals within the covariate classes are set to 0.25, 0.275, and 0.30. The following estimator, which is already a function of the sufficient statistic, is used in the simulation study: the Huggins estimator <cit.> based on the assumption of capture effects by covariate class. The following estimators, which are not functions of the sufficient statistic and can therefore benefit from the Rao-Blackwellization procedure detailed in this section, are also used in the simulation study: the Pledger mixture model maximum likelihood estimator based on two mixtures <cit.> and the assumption of capture effects by covariate class, and the Chao-Tsay and Chao-Bunge sample coverage approach estimators. Table <ref> presents results from the simulation study. Improved estimates are based on 1000 resamples. The acceptance rate of the MCMC chain is 22.85%.l*10rr Mean, variance, mean-squared error, coverage rates, average length of confidence intervals, coverage rates of alternative confidence intervals, average length of alternative confidence intervals, and percentage of variance estimates which are negative, for preliminary and improved estimators. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est.EstimatorMeanVar.MSE CR LengthAlt. CRAlt. LengthNeg. Est. Huggins503 279 284 0.949650.994111NA Pledger497 332 343 0.927730.95074 NASC CT501 501 502 0.94387NA NA NASC CB501 572 573 0.948930.94695 NA Pledger RB 496 288 300 0.943720.964 740% SC CT RB 501 303 304 0.94568NANA0.2% SC CB RB 501 309 310 0.943690.987 950.68%§ EMPIRICAL APPLICATIONS The new inference strategy is applied to several empirical data sets. Results based on each data set are presented in the following subsections.§.§ M_0 and M_h Model Application: Diabetes Data Set The new strategy is applied to a diabetes data set which is based on four administrative records from a community in Italy <cit.>. Capture histories can be found in <cit.>.The data set has been analysed by <cit.> with the use of a mark-recapture model that assumes heterogeneity is present in the captures. <cit.> analyse this data set and with the sample coverage approach their proposed estimate is 2,609 with variance estimate based on bootstrap replications of 6,561 and corresponding confidence interval of (2,472; 2,792).Tables <ref> and <ref> provide estimates respectively based on the M_0 and M_h model. For the M_0 estimators, 250,000 resamples are used to approximate the improved estimators and the acceptance rate is 0.01% For the M_h model, 5000 resamples are used to approximate the improved estimators. Most of the resulting confidence intervals corresponding to the Rao-Blackwellized estimators capture the population size estimate suggested by <cit.>, and most provide substantial increases in the estimated precision. 
l*10rr Population size estimates corresponding to diabetes data set based on new strategy, M_0 model assumption. Point estimate, variance estimate, confidence intervals and alternative confidence intervals for preliminary and improved estimators, and if a negative estimate was initially obtained for the variance estimate of the improved estimator. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est.EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est. MLE2,525 1,054 (2,462; 2,589)(2,466; 2,593)NAMLE LL 2,526 1,055 (2,462; 2,590)NANA Bayes2,525 1,048 (2,462; 2,589)(2,464; 2,591)NALP 2,351 3,345 (2,238; 2,464)NANAJK 3,218 5,850 (3,068; 3,368)NANA SC CT2,458 2,467 (2,361; 2,555)(2,372; 2,568)NA LP RB2,515 8,314 (2,336; 2,693)NAYesJK RB3,231 4,397 (3,101; 3,361)NANoSC CT RB 2,512 1,107 (2,447; 2,578)(2,428; 2,616)No l*10rr Population size estimates corresponding to diabetes data set based on new strategy, M_h model assumption. Point estimate, variance estimate, confidence intervals and alternative confidence intervals for preliminary and improved estimators, and if a negative estimate was initially obtained for the variance estimate of the improved estimator. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est.EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est. Pledger2,559 1,264 (2,489; 2,629)(2,494; 2,634)NAChao 2,513 1,497 (2,437; 2,589)NANA Poisson2,591 2,398 (2,495; 2,687)NANA JK 3,218 5,850 (3,068; 3,368)NANA SC CB2,546 1,891 (2,468; 2,639)NANA Bayes2,632 5,293 (2,489; 2,774)(2,514; 2,778)NASC CT2,458 2,493 (2,360; 2,556)(2,372; 2,569)NA Bayes RB 2,641 2,663 (2,539; 2,742)(2,540; 2,773)NoSC CT RB 2,556 2,536 (2,457; 2,654)(2,467; 2,665)No§.§ M_t Model Application: Hare Data Set The new strategy is applied to a population of snowhshoe hare data with six capture occasions <cit.>. Capture histories can be found in the `Rcapture' package <cit.>.<cit.> analyse this data set and find that a large degree of heterogeneity is introduced by two hares which are captured for all six sampling occasions. Consequently, they suggest removing them from the data set for estimation purposes. Based on goodness-of-fit criteria, they suggest using the M_t model. The resulting estimate of the population size is 76.78 with variance estimate 15.30 and confidence interval based on the profile likelihood method of (70.09; 85.41).Table <ref> provides the estimates based on the M_t model. A total of 2000 resamples are used to approximate the improved estimators and the acceptance rate is 83.65%. Most of the resulting confidence intervals corresponding to Rao-Blackwellized estimators capture the population size estimate suggested by <cit.>, and each provides a substantial increase in the estimated precision.l*10lrr Population size estimates corresponding to hare data set based on new strategy, M_t model assumption. Point estimate, variance estimate, confidence intervals and alternative confidence intervals for preliminary and improved estimators, and if a negative estimate was initially obtained for the variance estimate of the improved estimator. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. 
Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est. EstimatorEstimateVariance Estimate CIAlt. CI Neg. Est. MLE74.05 14.78 (66.51; 81.58)(69.31; 85.57)NAMLE LL 75.89 17.70 (67.65; 84.14)NANABayes74.85 15.66 (67.09; 82.60)(68.00; 84.00)NA LP 134.003,240.00(22.43; 245.57) NANAJK 91.00 50.00 (77.00; 105.00) NANASC CB78.00 40.38 (70.00; 98.00)NANASC CT80.06 54.10 (65.64; 94.47)(71.36; 102.87) NA LP RB74.50 116.85(53.32; 95.69)NANoJK RB88.69 29.93 (78.00; 99.41)NANo SC CB RB 75.10 15.21 (67.45; 82.74)NANo SC CT RB 75.32 17.31 (67.17; 83.48)(69.17; 94.64)No §.§ M_t Model Application: HIV Data Set The new strategy is applied to an epidemiological four list/sample study based on an HIV population in Rome, Italy <cit.>. Capture histories can be found in the `Rcapture' package <cit.>.<cit.> analyse this data set and based on goodness-of-fit criteria suggest using the M_t model with interaction terms between the first two lists. The resulting estimate of the population size is 12,319, with variance estimate 1,413,060 and confidence interval based on the profile likelihood method of (10,287; 14,978).Table <ref> provides the estimates based on the M_t model. A total of 15,000 resamples are used to approximate the improved estimators and the acceptance rate is 83.65%. Most of the resulting confidence intervals corresponding to Rao-Blackwellized estimators capture the population size estimate suggested by <cit.>, and most provide substantial increases in the estimated precision.l*10lrr Population size estimates corresponding to hare data set based on new strategy, stratified model assumption. Point estimate, variance estimate, confidence intervals and alternative confidence intervals for preliminary and improved estimators, and if a negative estimate was initially obtained for the variance estimate of the improved estimator. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators. 10l Table <ref> continued from previous page EstimatorEstimateVariance Estimate CIAlt CI. Neg. Est. EstimatorEstimateVariance Estimate CIAlt. CI Neg. Est. MLE11,117817,396 (9,345; 12,889) (9,508; 13,066) NA MLE LL 11,069803,667 (9,312; 12,826) NANA Bayes11,187805,991 (9,427; 12,946) (9,549; 13,081) NALP 7,754 1,331,148 (5,492; 10,015) NANA JK 5,329 10,644(5,127; 5,531)NANA SC CB19,430105,352,673 (7,947; 52,705) NANA SC CT12,4641,668,197 (9,932; 14,995) (10,220; 15,312)NALP RB12,7271,348,014(10,451; 15,002) NANo JK RB5,184 10,065 (4,988; 5,831) NAYes SC CB RB 11,0565,286,812(6,550; 15,563)NANoSC CT RB 11,042812,532(9,275; 12,808)(9,153; 13,422) No§.§ Stratified Model Application: Dipper Data SetThe new strategy is applied to a seven sample study based on European dippers from France and the capture histories can be found in the `RMark' package <cit.>. For each captured unit, the gender is recorded and this serves as a stratification variable.Table <ref> provides the estimates based on the stratified model. A total of 2,500 resamples are used to approximate the improved estimators and the acceptance rate is 18.72%. Significant improvements in the (estimated) standard error are found with the new inference strategy.l*10lrr Population size estimates corresponding to dipper data set based on new strategy. 
Point estimate, variance estimate, confidence intervals and alternative confidence intervals for preliminary and improved estimators, and whether a negative estimate was initially obtained for the variance estimate of the improved estimator. Top: Estimators which are already functions of the sufficient statistic. Middle: Preliminary estimators. Bottom: Improved estimators.

Estimator     Estimate   Variance Estimate   CI            Alt. CI       Neg. Est.
Huggins       373        190                 (346; 400)    (344; 421)    NA
Pledger       447        1,917               (362; 533)    (383; 560)    NA
SC CB         527        5,005               (424; 711)    NA            NA
SC CT         553        2,155               (462; 644)    (477; 661)    NA
Pledger RB    376        521                 (331; 421)    (346; 429)    No
SC CB RB      377        253                 (345; 408)    NA            No
SC CT RB      377        236                 (347; 407)    (345; 428)    No
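The "RB" rows in the tables above are the improved (Rao-Blackwellized) versions of the preliminary estimators, which the text describes as being approximated by averaging over resampled data configurations (e.g. 2,500 resamples with an 18.72% acceptance rate for the dipper data). The fragment below is a purely schematic Python/NumPy illustration of that averaging step; the `resample_data` and `accept` helpers are hypothetical placeholders, and the sketch is not the authors' actual resampling procedure, whose variance estimation is also more involved (it can even produce an initially negative estimate, see the "Neg. Est." column).

```python
import numpy as np

def rao_blackwellize(prelim_estimator, resample_data, accept, data,
                     n_resamples=2500, seed=0):
    """Schematic Monte-Carlo approximation of an improved estimator,
    E[preliminary estimator | sufficient statistic], obtained by averaging
    the preliminary estimator over accepted resamples of the data."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resamples):
        candidate = resample_data(data, rng)    # hypothetical proposal step
        if accept(candidate, data):             # consistency with the sufficient statistic
            estimates.append(prelim_estimator(candidate))
    acceptance_rate = len(estimates) / n_resamples
    # the paper's variance estimator for the improved estimator is more
    # involved and is not reproduced in this sketch
    return float(np.mean(estimates)), acceptance_rate
```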
Backtracking strategies for accelerated descent methods with smooth composite objectives Luca Calatroni†, Antonin Chambolle †[ † Centre de Mathématiques Appliquées (CMAP), École Polytechnique CNRS, 91128, Palaiseau Cedex, France. ] [e-mail: mailto:[email protected]@polytechnique.edu,mailto: [email protected]@cmap.polytechnique.fr] We present and analyse a backtracking strategy for a general Fast Iterative Shrinkage/Thresholding Algorithm which has been proposed in <cit.> for strongly convex composite objective functions. Differently from classical Armijo-type line searching, our backtracking rule allows for local increasing and decreasing of the descent step size (i.e. proximal parameter) along the iterations. We prove accelerated convergence rates and show numerical results for some exemplar imaging problems.Keywords: Composite optimisation, forward-backward splitting, acceleration, backtracking, image denoising. § INTRODUCTION The concept of acceleration of first-order optimisation methods dates back to the seminal work of Nesterov <cit.>. For a proper, convex, l.s.c. function F:𝒳→∪{∞} defined on a Hilbert space 𝒳 with Lipschitz gradient with constant L>0, solving the abstract optimisation problem min_x∈𝒳 F(x)by means of an accelerated iterative method means improving the convergence rate O(1/k) achieved after k≥ 1 iterations of standard gradient descent methods in order to (almost) match the universal lower bound of O(1/k^2) holding for any function such as F. In the smoother case, i.e. when F is a strongly convex function with parameter μ>0, Nesterov showed in <cit.> that a lower bound for first-order optimisation methods of the orderO((√(q)-1/√(q)+1)^2k) can be shown, with q:=L/μ≥ 1 being the condition number ofF. In this case, improved linear convergence rates of the order O((√(q)-1/√(q))^k) are proved. Similar results for implicit gradient descent have been studied by Güler<cit.>. We also refer the reader to <cit.>, where a general framework for inexact accelerated methods is presented.If the objective function in (<ref>) can be further decomposed into the sum of a convex function f with Lipschitz gradient ∇ f and a convex, l.s.c. and non-smooth function g, i.e. if the problem (<ref>) can be rewritten asmin_x∈𝒳 { F(x)=f(x) + g(x)} ,different descent methods taking into account the non-differentiability of F need to be considered. Such approaches go under the name of composite optimisation methods, after the work of Nesterov <cit.>. A typical optimisation strategy for solving composite optimisation problems consists in alternating along the iterations a `forward' (i.e. explicit) gradient descent step taken in correspondence with the differentiable component f and a `backward' (implicit)gradient descent step in correspondence with the non-smooth part g. Due to this alternation, such optimisation technique is known as forward-backward (FB) splitting. The literature on FB splitting methods is extremely vast. Historically, such strategy has firstly been used in <cit.> for projected gradient descent, and subsequently popularised within the imaging community after the work of Combettes and Wajs <cit.>. Acceleration methods for FB splitting has firstly been considered by Nesterov in <cit.> for projected gradient descent, and later extended by Beck and Teboulle<cit.> to more general `simple' non-smooth functions g under the name of Fast Iterative Shrinkage/Thresholding Algorithm (FISTA). 
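To make the forward-backward alternation and the FISTA extrapolation concrete, the following Python/NumPy fragment sketches a plain FISTA-type iteration; it is an illustrative sketch only, and the step-size choice, the toy LASSO problem and all variable names are our assumptions rather than the specific algorithms analysed in this paper.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=200):
    """Illustrative forward-backward iteration with Nesterov/FISTA extrapolation.

    grad_f : gradient of the smooth part f
    prox_g : (z, tau) -> prox_{tau g}(z), the implicit step on the non-smooth part g
    step   : gradient step size, e.g. 1/L_f when the Lipschitz constant is known
    """
    x_old, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x = prox_g(y - step * grad_f(y), step)              # forward then backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # standard FISTA update
        y = x + ((t - 1.0) / t_new) * (x - x_old)           # extrapolation
        x_old, t = x, t_new
    return x_old

# toy usage on a LASSO-type problem: f(x) = 0.5*||Ax-b||^2, g(x) = lam*||x||_1
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((40, 80)), rng.standard_normal(40), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda z, tau: np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)
    x_hat = fista(grad_f, prox_g, np.zeros(80), step=1.0 / np.linalg.norm(A, 2) ** 2)
```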
Several variants of FISTA have been considered in a number of work such as <cit.>, just to mention a few, and properties such as convergence of the iterates under specific assumptions (<cit.>) and monotone variants (M-FISTA) <cit.> have also been studied. In the case when only an approximate evaluation of the FB operators up to some error can be provided, accelerated convergence rates can also be shown. We refer the reader to<cit.> for these studies In its original formulation,FISTA requires an estimate on the Lipschitz constant L_f>0 of ∇ f. Whenever such estimate is not easily computable, an Armijo-type backtracking rule <cit.> canalternatively be used <cit.>. By construction, this backtracking strategyrequires such estimate to be non-decreasing along the iterations. From a practical point of view, thisconditions implies that if a large value of this constant is computed in the early iterations, a corresponding small (or even smaller!) gradient step size will be used in the later iterations. As a consequence, convergence speed may suffer if an inaccurate estimate of L_f is computed. To avoid this drawback,Scheinberg, Goldfarb and Bai have proposed in <cit.> a backtracking strategy for FISTA where and adaptive increasing and decreasing of the estimated Lipschitz constant along the iterations is allowed. In particular, a Lipschitz constant estimate is computed locally at each iterate k≥ 1 in terms of a suitable average of the k-1 local estimates of the L_f computed in the previous iterations. The proposed strategy is shown to guarantee acceleration and to outperform the standard Armijo-type backtracking in several numerical examples. Compared to the similar full backtracking strategy proposed by Nesterov in <cit.>, the criterion used in <cit.> renders cheaper since it does not require the extra calculation of the term ∇ f in correspondence with the proximal step at each iteration. In the case of strongly convex objective functionals, improved linear convergence rates are expected. Recalling the composite problem (<ref>), the case of a strongly convex component f has firstly been considered for projected gradient descent in <cit.> and, more recently, extended by Chambolle and Pock <cit.> to the case ofstrongly convexf and g.In this work, we will denote this general FISTA algorithm by GFISTA. For GFISTA, linear convergence rates have rigorously been shown, encompassing the quadratic ones of plain FISTA in the non-strongly convex case.For its practical application, GFISTA requires an estimate of the Lipschitz constant L_f, which paves the way for the design of robust and fast backtracking strategies similar to the ones described above. We address this problem in this work. §.§ ContributionIn this work we analyse a full backtracking strategy for thestrongly convex version of FISTA (GFISTA)proposed in <cit.>. Differently from the standard backtracking rule proposed in the original paper by Beck and Teboulle <cit.> and based on an Armijo line-searching <cit.>, the strategy considered here allows for both increasing and decreasing of the Lipschitz constant estimate, i.e. for both decreasing and increasing of the gradient descent step size. Compared to the full backtracking strategy already presented by Nesterov in <cit.>, the one we consider here does not require the evaluation of the gradient of the smooth component in correspondence with the proximal step at each iteration, thus it renders cheaper. 
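The next fragment sketches, again in Python/NumPy, the kind of adaptive step-size rule just described and analysed in detail later in the paper: the step is shortened until a descent test involving only f(y), ∇f(y) and f(x̂) holds (so no gradient evaluation is needed at the proximal point), and it is allowed to grow again whenever the local curvature estimate 2D_f(x̂,y)/‖x̂−y‖² stays safely below 1/τ. The shrink/grow factors and all names are illustrative assumptions, not the exact constants of the backtracked GFISTA algorithm presented later.

```python
import numpy as np

def backtracked_fb_step(f, grad_f, prox_g, y, tau, rho=0.9, shrink=0.5, grow=2.0):
    """One forward-backward step with an adaptive (decrease/increase) step size.

    The step is accepted when D_f(x_hat, y) <= ||x_hat - y||^2 / (2*tau),
    a Bregman-distance test using only f(y), grad_f(y) and f(x_hat).
    """
    f_y, g_y = f(y), grad_f(y)
    while True:
        x_hat = prox_g(y - tau * g_y, tau)
        d = x_hat - y
        bregman = f(x_hat) - f_y - g_y @ d        # D_f(x_hat, y)
        half_sq = 0.5 * (d @ d)                   # ||x_hat - y||^2 / 2
        if bregman <= half_sq / tau:              # descent test holds: accept
            break
        tau *= shrink                             # otherwise shorten the step
    # local Lipschitz estimate 2*D_f/||d||^2; enlarge the step on "flat" regions
    local_L = bregman / half_sq if half_sq > 0 else 0.0
    if local_L <= rho / tau:
        tau *= grow
    return x_hat, tau
```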
A similar backtracking strategy has been considered by Scheinberg, Goldfarb and Bai in <cit.> for plain FISTA, but its generalisation to the strongly convex case is not straightforward. We address this in this work, presenting a unified framework in which the standard FISTA algorithm (with and without backtracking) can be derived as a particular case. In the case of strongly convex objectives, we prove linear convergence results, studying in detail the decay speed of the corresponding convergence factors. We validate our theoretical results on some exemplar problems with strongly convex objective functions which can be encountered in imaging or in data analysis. To relax the dependence on the strong convexity parameters appearing in the algorithm, we finally combine the backtracking strategy with classical restarting methods <cit.>, which show empirical convergence properties.

§.§ Organisation of the paper

In Section <ref> we recall some definitions and standard assumptions used in the modelling of composite optimisation problems. In Section <ref> we present the GFISTA strongly convex variant of FISTA studied in <cit.>. Next, in Section <ref> we analyse an adaptive backtracking strategy for GFISTA and prove accelerated convergence results by means of technical tools inspired by <cit.>. Numerical examples confirming our theoretical results are reported in Section <ref>. In the final Section <ref> we summarise the main results of this work and give an outlook on some challenging questions to be addressed in future work.

§.§ Remark

In their recent preprint <cit.>, Florea and Vorobyov propose an algorithm similar to the one described in this work, as an extension of their previous work <cit.>. The convergence result <cit.> obtained by the authors is similar to the one presented in our work (see Theorem <ref>), but less accurate, since it is based on a worst-case analysis, while ours depends on average quantities estimated along the iterations. Furthermore, the arguments used in <cit.> are completely different from the ones used here. To show the main convergence result, the authors considered generalised estimate sequences, a notion which, starting from the original paper by Nesterov <cit.>, has indeed become very popular in the field of optimisation (see, e.g., <cit.>, just to mention a few) due to its easy geometrical interpretation. However, the use of this technique leaves the technical difficulties related to the precise study of the decay speed of the convergence factors somewhat hidden. Inspired by <cit.> and <cit.>, we follow here a different path, defining appropriate decay factors and extrapolation rules along the iterations which, eventually, result in accelerated (linear) convergence rates.

§ PRELIMINARIES AND NOTATION

We are interested in the solution of the composite minimisation problem

min_x∈𝒳 { F(x)= f(x) + g(x)} ,

where 𝒳 is a (possibly infinite-dimensional) Hilbert space endowed with the norm ‖·‖ = ⟨·,·⟩^1/2 and F:𝒳→ℝ∪{+∞} is a convex, l.s.c. and proper functional to minimise. We denote by x^*∈𝒳 a minimiser of F.
We assume that f:𝒳→ℝ is a differentiable convex functionwith Lipschitz gradient and g:𝒳→∪{+∞} is non-smooth, convex and l.s.c .We further denote by L_f the Lipschitz constant of ∇ f, so that∇ f(y) - ∇ f(x) ≤ L_f y-x,for any x,y∈𝒳.The strong convexity parameter of f will be denoted by μ_f≥ 0 so that for any t∈[0,1], by definition, there holdsf(tx+(1-t)y) ≤ tf(x) + (1-t)f(y)-μ_f/2t(1-t)x-y^2,for any x,y∈𝒳.Similarly, by μ_g≥ 0 we will denote the strong convexity parameter of g.The strong convexity parameter of the composite functional F in (<ref>) will be then the sum μ=μ_f + μ_g. In this work we are particularly interested in the case when at least one of the two parameters μ_f and μ_g is strictly positive, so that μ>0. §.§ RemarkNote that the case μ=0 reduces (<ref>) to the classical FISTA-type optimisation problem. In the case of projected gradient descent, i.e. when solvingmin_x∈ℬ⊂𝒳 f(x),the case μ_f>0 has already been studied by Nesterov in <cit.>. The problem can formulated in the form (<ref>) with g being the indicator function of the subset ℬ (with μ_g=0) as:min_x∈𝒳 f(x)+δ_ℬ(x),withδ_ℬ= 0, if x∈ℬ+∞, if x∉ℬ.Note, however, that the proof in <cit.> works actually for any function g, see <cit.> for more details. In order to write the FB optimisation step, a standard descent step in the differentiable component f is combined with an implicit gradient descent step for g. For any τ >0 and for x̅∈𝒳 we then introduce the corresponding FB operator T_τ: 𝒳→𝒳:x̅↦x̂ = T_τx̅ := prox_τ g(x̅-τ∇ f(x̅)),where prox_τ g denotes the proximal mapping operator defined by:prox_τ g(z):= _y∈𝒳( g(y) + 1/2τ z-y^2), z∈𝒳.Note that in order to exploit some properties of the proximal mapping operator above, for η>0 we will also make use of the notation:prox_g^η(z)= _y∈𝒳( g(y) + 1/2 z-y_η^-1^2), z∈𝒳,where the weighted norm is defined by w_η^-1^2 = ⟨η^-1 w, w⟩.§ A GENERAL FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHMThe FISTA algorithm proposed in <cit.> is a very popular optimisation strategy to minimise composite functionals F like (<ref>) with convergence guarantees of order O(1/k^2). Originally proposed by Nesterov in <cit.> in the case of smooth constrained minimisation, FISTA extends Nesterov's approach for more general non-smooth functions g. In the strongly-convex case μ>0 linear convergence rates have been shown in <cit.> by means of a careful study of the decay of the composite functional towards is optimal value . In the following, wewill refer to this extension as GFISTA. For the sake of conciseness, we unify in Algorithm <ref> the FISTA and GFISTA algorithms followed by the convergence result<cit.>. Its proof is rather technical and can be found in <cit.>: the key idea consists in finding a useful recursion starting from the following descent rule for F holding for every x∈𝒳 and for x̂=T_τx̅, with x̅∈𝒳:F(x̂) + (1+τμ_g)x-x̂^2/2τ≤ F(x) + (1-τμ_f)x-x̅^2/2τ, τ>0.Inequality (<ref>) is in fact classically used as a starting point to study convergence rates. Its proof is a trivial consequence of a general property holding for strongly convex functions. We report its proof in Lemma <ref> in the Appendix.Starting from (<ref>), the general technique to perform a convergence analysis consists in taking as element x∈𝒳 the convex combination of the k-th iterate x_k of the algorithm considered and a generic point (such as x^*) and, by means of (strong) convexity assumptions, in defining an appropriate decay factor by which a recurrence relation for the algorithm starting from the initial guess x_0 can be derived. 
To show acceleration, a detailed study of such factor needs then to be done by means of technical properties of the iterates of the algorithm and of its extrapolation parameters. We refer to the work of Nesterov <cit.> for a review of these techniques applied to standard cases and to <cit.> to a survey on their applications in the context of Imaging.The result reported in Theorem <ref> generalises the ones proved for FISTAin <cit.>. In particular, the standard FISTA convergence rate of O(1/k^2) proved in <cit.> in the non-strongly convex case (μ=q=0 and t_0=0) turns out to be a particular case, while improved linear convergence is shown whenever the composite functional F is μ-strongly convex (μ>0) and an estimate on the Lipschitz constant L_f is available and used as an input to find admissible gradient parameters τ>0. We refer the reader to <cit.> for similar results proved for variants of FISTA. §.§ Remark (FISTA updates) Note that in the case μ=0 the update rules for t_k+1 and β_k in (<ref>) simplify to:t_k+1 = 1+ √(1+4 t^2_k)/2, β_k = t_k-1/t_k+1,which are the standard FISTA updates considered by Beck and Teboulle in <cit.>. convergence_resultsTheorem[section][<cit.> and Theorem B.1 <cit.>]Let τ>0 with τ≤ 1/L_fand let q:=μτ/1+τμ_g and x^* be a minimiser of F. If √(q)t_0≤ 1 with t_0≥ 0, then the sequence ( x^k) produced by the Algorithm <ref> in (<ref>) satisfiesF(x^k)-F(x^*) ≤ r_k(q) ( t^2_0(F(x^0) - F(x^*)) + 1+τμ_g/2 x-x^*^2),and r_k(q) is defined by:r_k(q) = min{4/(k+1)^2,(1+√(q))(1-√(q))^k, (1-√(q))^k/t^2_0}.§.§ BacktrackingWhenever an estimate of L_f is not available, backtracking techniques can be used.For FISTA, an Armijo-type backtracking rule has been proposed in the original paper of Beck and Teboulle <cit.>. For that, similar convergence rates as above can be proved. Furthermore, in order to improve the speed of the algorithm allowing also the increasing of the step size τ in the neighbourhoods of `flat' points of the function f (i.e. where L_f is small), a full backtracking strategy for FISTA has been considered by Scheinberg, Goldfarb and Bai in <cit.>. The typical inequality to check in the design of any backtracking strategy can be derived from (<ref>) (see Lemma <ref> in the Appendix) and reads: F(x̂) + (1+τμ_g)x-x̂^2/2τ + (x̂-x̅^2/2τ - D_f(x̂,x̅) ) ≤ F(x) + (1-τμ_f)x-x̅^2/2τ, where D_f(x̂,x̅):=f(x̂)-f(x̅)-⟨∇ f(x̅),x̂-x̅⟩≤L_f/2x̂-x̅^2 is the Bregman distance of f between x̂ and x̅. Note that in the case when no backtracking is performed, condition (<ref>) is satisfied as long as:D_f(x̂,x̅)≤x̂-x̅^2/2τ,CBwhich is clearly true for constant τ whenever 0<τ≤ 1/L_f with L_f known. However, by letting τ vary, one can alternatively check condition (<ref>) along the iterations of the algorithm and redefine τ_k at each iteration k≥ 1 so as to compute a local Lipschitz constant estimate. In the following, we will indeed use this rule for the design of a backtracking strategy for Algorithm <ref> with μ>0. In order to allow robust backtracking, we will allow the step size τ_k to either decrease (as it is classically done) or increase depending on the validity of the following inequality:2D_f(x̂,x̅)/x̂-x̅^2>ρ(1/τ_k),CB2where the constant ρ∈ (0,1) is chosen in advance. Note that this inequality entails that at any iteration the following inequality holds:τ_k ≥ρ/L_f.Heuristically, condition (<ref>) favours the step size τ_k to be decreased at iteration k≥ 1 whenever the estimate of the Lipschitz constant given by the left hand side in the inequality above is `too close' to 1/τ_k, i.e. 
whenever (<ref>) is verified, and increased otherwise. § A BACKTRACKING STRATEGY FOR GFISTA ALGORITHM <REF>Following the analysis performed in <cit.>, we prove that the backtracking strategy described above and applied to the GFISTA algorithm <ref> enjoys accelerated convergence rates, which turn out to be linear in the case μ>0.For an arbitrary t≥ 1, k≥ 0 and τ>0 we start from inequality (<ref>) and choose the point x to be the convex combination x=((t-1)x^k + x^*)/t wherex^k is an iterate of the algorithm we are going to define and x^* is a minimiser of F. For the other points, we set x̅=y^k+1 and x̂=x^k+1=T_τ y^k+1. The formula for y^k+1 will be specified in the following. After multiplication by t^2 and using the strong convexity of F we get:t^2( F(x^k+1)-F(x^*)) + 1+τμ_g/2τx^*-x^k+1-(t-1)(x^k+1-x^k)^2+  t^2 (t-1)μ(1-τμ_f)/1+τμ_g-tτμx^k-y^k+1^2/2≤t(t-1)( F(x^k) - F(x^*) ) + 1+τμ_g-tτμ/2τ x^*-x^k - t1-τμ_f/1+τμ_g-tτμ(y^k+1-x^k) ^2.We now set t=t_k+1, let τ = τ_k+1 and define the following quantities:τ'_k+1 := τ_k+1/1+τ_k+1μ_g >0 q_k+1 := μτ'_k+1 = 1 - 1-τ_k+1μ_f/1+τ_k+1μ_g∈ [0,1),ω_k+1:= 1+τ_k+1μ_g-t_k+1τ_k+1μ/1+τ_k+1μ_g = 1 - t_k+1q_k+1∈ (0,1], β_k+1:= t_k-1/t_k+11+τ_k+1μ_g - t_k+1τ_k+1μ/1-τ_k+1μ_f = ω_k+1t_k -1/t_k+11+τ_k+1μ_g/1-τ_k+1μ_f ,where we can assume μ_f<L_f, so that τ<1/L_f. We now define the following update for y^k+1:y^k+1 = x^k + β_k+1(x^k-x^k-1),for any k≥ 0.After further multiplying(<ref>) by τ'_k+1, we thusdeduce: τ_k+1't_k+1^2( F(x^k+1)-F(x^*)) + 1/2x^*-x^k+1-(t_k+1-1)(x^k+1-x^k)^2 ≤τ_k+1't_k+1(t_k+1-1)( F(x^k) - F(x^*) )+ ω_k+1/2 x^*-x^k-(t_k-1)(x^k-x^k-1)^2.Let us now assume that for every k≥ 1 the following inequality holds:τ'_k+1t_k+1(t_k+1-1)≤ω_k+1τ'_k t^2_k,and that the same holds for the iteration k=0 by defining T_0^2:=τ_0' t_0^2 implicitly byT_0^2 = τ'_1 t_1(t_1 -1)/ω_1 =τ_1 t_1(t_1 -1)/1+τ_1μ_g - t_1τ_1μ,which is positive whenever1≤ t_1 < 1+τ_1μ_g/τ_1μ = 1/q_1.Then, we get from (<ref>) that for any k≥ 0:τ_k+1't_k+1^2( F(x^k+1)-F(x^*)) + 1/2x^*-x^k+1-(t_k+1-1)(x^k+1-x^k)^2 ≤ω_k+1(τ_k't^2_k( F(x^k) - F(x^*) ) + 1/2 x^*-x^k-(t_k-1)(x^k-x^k-1)^2).By now applying (<ref>) recursively and if we let x^0 = x^-1∈𝒳, wefind the following convergence inequality F(x^k)-F(x^*) ≤θ_k (T_0^2( F(x^0) - F(x^*) ) + 1/2 x^*-x^0 ^2),where the decay rate of the factor θ_k : = ∏_i=1^kω_i/τ'_kt^2_kneeds to be studied to determine the speed of convergence of F(x^k) to the optimal value F(x^*). We will do this in the following sections using some technical properties of the sequences defined above. §.§ Update ruleAssuming that (<ref>) holds with an equality sign, i.e. ifτ'_k+1t_k+1(t_k+1-1) = ω_k+1τ'_k t^2_k,and after recalling the definition of ω_k+1 in (<ref>),we find the following update rule for the elements of sequence (t_k), k≥ 1:t_k+1= 1-q_k+1τ'_k/τ'_k+1t^2_k + √((1-q_k+1τ'_k/τ'_k+1t^2_k )^2 + 4τ'_k/τ'_k+1t^2_k)/2 = 1-q_k t^2_k + √((1-q_k t^2_k )^2 + 4q_k/q_k+1t^2_k)/2≥ 0,by (<ref>) and (<ref>).We can now present theGFISTA algorithm with backtracking. We remark that compared to the algorithm studied in <cit.>, Algorithm <ref> has a lower per-iteration cost. The reason for that is that the backtracking criterion considered in <cit.> requires at any iteration k the computation of the quantity ∇ f (T_τ_k+1 y^k), whereas our backtracking condition (<ref>) is based on the calculation of D_f, and the sole computation of ∇ f(y^k) is required, thus avoiding the calculation of ∇ f in the proximal step. In many applications (e.g. 
compressed sensing), this difference can be quite crucial: the extra-evaluation of ∇ f in one point requires in fact two matrix-vector multiplications compared to a single one required for functional evaluation. Similar considerations have already been made for the FISTA algorithm with full backtracking in <cit.> since the stopping criterion for the backtracking procedure considered therein is in fact similar to the one used in our Algorithm <ref>. §.§ Remark (No backtracking) When no backtracking is performed along the iterations τ_k = τ_k+1 for any k and the ratio q_k/q_k+1 in (<ref>) is constantly equal to one. In this case, the update rule (<ref>) is the same as the one used in (<ref>) for GFISTA without backtracking, compare <cit.> . In the non-strongly convex case (q_k=0 for every k), the update rule (<ref>) is exactly the same (<ref>) for the original FISTA algorithm<cit.>.§.§ Remark (FISTA with backtracking)In the non-strongly convex case (μ_f=μ_g=q_k=0 for every k), (<ref>) reduces tot_k+1 = 1 + √(1 + 4τ_k/τ_k+1 t^2_k)/2,which is exactly the same update rule considered by Goldfarb et al. in <cit.> for adaptive backtracking of plain FISTA.We now prove a fundamental property of the sequence (t_k) defined by (<ref>).tkgreaterLemma[section] Let the sequence (t_k) be defined by the update rule (<ref>). Then: t_k≥ 1for any k≥ 1. We simply observe that since q_k≤ 1 for every k there holds:t_k = 1-q_k-1t^2_k-1 + √((1-q_k-1t^2_k-1)^2 + 4q_k-1/q_kt^2_k-1)/2 . ≥1-q_k-1t^2_k-1 + √((1-q_k-1t^2_k-1)^2 + 4 q_k-1t^2_k-1)/2 .=1-q_k-1t^2_k-1 + √((1+q_k-1t^2_k-1)^2)/2=1.For the following convergence proofs, the following technical lemma will be crucial.propertyqktkLemma[section] Let √(q_1)t_1≤ 1. Then, there holds:√(q_k)t_k ≤ 1 . We proceed by induction. By assumption, the initial step k=1 holds. Let us assume that (<ref>) holds for some k≥ 1. By (<ref>), we get:q_k+1 t^2_k+1 = q_k+1 t_k+1 + ω_k+1q_kt^2_k = 1+ ω_k+1(q_kt^2_k -1 ) ≤ 1by simply applying the induction assumption. Note that the condition t_1 ≤ 1/√(q_1) combined with t_1≥ 1results in the following bound:1≤ t_1 ≤√(1 + 1-τ_1μ_f/τ_1μ).Furthermore, since 1/√(q_1) < 1/q_1, such condition also guarantees(<ref>). In particular, t_1=1 is an admissible choice.§.§ Convergence rates In this section, we follow <cit.> to derive a precise estimate of the factor θ_k in (<ref>). The following convergence result shows that the backtracking strategy applied to the GFISTA algorithm guarantees accelerated linear convergence rates given in terms of averaging quantities defined in terms of the Lipschitz constant estimates along the iterations. Comments on our result in comparison to the ones studied in analogous works <cit.> are given in the following remarks.convergence_ratesTheorem[section] [Convergence rates]Let T_0 be defined as in (<ref>). If 1≤ t_1 ≤ 1/√(q_1), then the sequence (x^k) produced by theAlgorithm <ref> with (<ref>), (<ref>), (<ref>) and (<ref>) satisfies: F(x^k)-F(x^*)≤ r_k ( T_0^2 (F(x^0)-F(x^*)) + 1/2x^0-x^*^2),where r_k is defined by:r_k := min{4 L̅_k/k^2, (L_1 - μ_f)(1-√(q̅_k))^k-1},and the average quantities L̅_k and √(q̅_k) aredefined by:√(L̅_k) := 1/1/k∑_i=1^k1/√(L_i - μ_f),√(q̅_k ):= 1/k-1∑_i=2^k√(μ/L_i + μ_g),with L_i:=1/τ_i.We recall the definition of θ_k given in (<ref>) and start computing the O(1/k^2) factor in (<ref>) following <cit.>.We first notice that from (<ref>) we can deduce1-1/t_k+1 = ω_k+1τ'_kt^2_k/τ'_k+1t^2_k+1 = θ_k+1/θ_k≤ 1,which also shows that θ_k is non-increasing. 
Thus, we have:1/√(θ_k+1) - 1/√(θ_k) = θ_k-θ_k+1/√(θ_kθk+1)(√(θ_k) + √(θ_k+1))≥θ_k - θ_k+1/2θ_k√(θ_k+1).By now applying (<ref>), we get1/√(θ_k+1) - 1/√(θ_k)≥1/2 t_k+1√(θ_k+1).We now recall definitions (<ref>), (<ref>), and use Lemma <ref> to find:t_k+1√(θ_k+1)= 1/√(τ'_k+1)∏_i=1^k+1√(ω_i)≤√(ω_k+1/τ'_k+1) =√(1/τ'_k+1-μ t_k+1)≤√(1/τ'_k+1-μ) = √(1/τ_k+1 -μ_f),whence:1/√(θ_k+1) - 1/√(θ_k)≥1/2√(1/τ_k+1- μ_f ).Applying this recursively we get that for any k≥ 11/√(θ_k)≥1/2∑_i=1^k 1/√(1/τ_i -μ_f).Note that indeed for i=1 we have:θ_1 = 1-μ t_1τ_1'/τ_1't^2_1 = 1-μ_g(t_1 -1)τ_1-μ_f t_1τ_1/τ_1 t_1^2≤1/τ_1-μ_f ,since t_1≥ 1 by (<ref>). We then deduce:1/√(θ_1)≥1/2√(1/τ_1-μ_f).After setting L_i = 1/τ_i in (<ref>), we get:√(θ_k)≤2/k√(L̅_k)where√(L̅_k) is defined in (<ref>). To get the linear rates, we notice that by Lemma <ref>, relation (<ref>) and definition(<ref>), we have:θ_k = θ_1 ∏_i=2^k(1-1/t_i) ≤θ_1∏_i=2^k (1-√(q_i)) ≤θ_1 ∏_i=2^k (1-√(μ/L_i+μ_g) )≤ (θ_1 (1-√(q̅))^k-1,where √(q̅_k) is defined as in(<ref>). and by the concavity of the function logarithm. We then get from (<ref>) that:θ_k ≤θ_1 (1-√(q̅_k))^k-1≤ (L_1 -μ_f) (1-√(q̅_k))^k-1by (<ref>).Combining this with (<ref>) we finally get the final rate (<ref>).Note that the averaging term L̅_k appearing above is always smaller than the actual average of the terms (L_i-μ_f), since: √(L̅_k)≤1/k∑_i=1^k √(L_i - μ_f)≤√(1/k∑_i=1^k (L_i - μ_f)).Furthermore, whenever L_f is known and recalling (<ref>), we can deduce the following bounds for the terms defined in (<ref>):√(L̅_k)≤√(L_f -ρμ_f/ρ),√(q̅_k)≥√(ρμ/L_f + ρμ_g).Hence, the convergence rate r_k in(<ref>) can be estimated as:r_k ≤1/ρmin{4(L_f-ρμ_f)/ k^2,(L_f-ρμ_f)(1-√(ρμ/L_f + ρμ_g))^k-1}.Finally, as far as the choice of T_0 is concerned, note that by(<ref>) when t_1=1, then T_0=0.§.§ Remark (FISTA with backtracking)Note that in the non-strongly convex case (μ=q_k=0 for all k), the global convergence rate (<ref>)-(<ref>) is analogous to <cit.>, which reads:F(x^k)-F(x^*)≤2 L̃_k x^0-x^*^2/ρ k^2,and where the term L̃_k is defined byL̃_k:= (∑_i=1^k √(L_i))^2/k^2.Note in fact that whenever μ_f=0 our definition (<ref>) relates with the one above via Remark (<ref>).§.§ RemarkThe worst-case convergence result <cit.> is obtained via the analysis of generalised estimate sequences. In <cit.> some comments on the extrapolated form of their algorithm and its relation with the strongly-convex variant of the FISTA algorithm <ref> are given. Although the expression of the sequence {ω_k} and the update rule for the elements {t_k} is similar (but not equal) to our definitions (<ref>) and (<ref>), respectively, the arguments used by the authors are different from the ones used here. More importantly, compared to a worst-case analysis, the convergence result <ref> is more precise, since it provides quantitative convergence estimates in terms of the average quantities √(L̅_k) and √(q̅_k) estimated along the iterations.§.§ Monotone algorithmsAs already noticed for standard FISTA <cit.> and for GFISTA without backtracking <cit.>, the convergence of the composite energy F to the optimal value x^* is not guaranteed to be monotone non-increasing. A straightforward modification of the GFISTA Algorithm <ref> enforcing such property and used in several papers <cit.> consists in taking as x^k any point such that F(x^k)≤ F(T_τ_k y^k). 
Recalling the definition of ω_k+1 in (<ref>), the update rule (<ref>) for extrapolation can then be changed as:y^k+1=x^k + β_k+1(x^k - x^k-1) + ω_k+1t_k/t_k+11+τ_k+1μ_g/1-τ_k+1μ_f( T_τ_k y^k - x^k)C2_m = x^k + β_k+1( (x^k - x^k-1) + t_k/t_k-1( T_τ_k y^k - x^k)).One can easily check that starting from (<ref>) and replacing in (<ref>)x^k+1 by T_τ y^k+1 with the update rule abovethe same computations of the previous sections carry on and the same convergence rates are obtained. Condition (<ref>) suggests also a natural choice for x^k. In fact, one can simply set:x^k =T_τ_k (y^k)ifF(T_τ_k y^k)≤ F(x^k-1),x^k-1 otherwise,so that in either case one of the two terms in (<ref>) vanishes. Whenever the evaluation of the composite functional F is cheap, this choice seems to be the most sensible. Another monotone implementation of FISTA has been recently considered in <cit.> where despite the further computational costs required to compute the value x^k, an empirical linear convergence rate is observed also for standard FISTA applied to strongly convex objectives. A rigorous proof of such convergence property is an interesting question of future research. § NUMERICAL EXAMPLES In this section we report some numerical experiments to confirm numerically the convergence result <ref> of Algorithm <ref>. We also discuss some heuristic restarting strategies <cit.> in the case when the strong convexity parameters are unknown. §.§ TV-Huber ROF denoisingWe start considering a strongly convex variant of the well-know Rudin, Osher and Fatemi image denoising model <cit.>based on the use of Total Variation (TV) regularisation. In its discretised form and for a given noisy image u^0∈^m× n corrupted by Gaussian noise with zero mean and variance σ^2, the original ROF model reads:min_u λ Du _p,1 + 1/2u-u^0_2^2.Here, Du = ( (Du)_1, (Du)_2) is the gradient operator discretised using forward finite differences (see, e.g., <cit.>) and the discrete TV regularisation is defined by:Du _p,1 = ∑_i=1^m ∑_j=1^n |(Du)_i,j|_p =∑_i=1^m ∑_j=1^n ( (Du)_i,j,1^p + (Du)_i,j,2^p )^1/p,where the value of the parameter p allows for both anisotropic (p=1) and isotropic (p=2) TV, which is generally preferred to reduce grid bias. The regularisation parameter λ>0 in (<ref>) weights the action of TV-regularisation against the fitting with the Gaussian data given by the ℓ^2 squared term.Taking p=2 in (<ref>), we now follow <cit.> and consider a similar denoising model where a strongly convex variant of TV is employed. This can be obtained, for instance, using the C^1-Huber smoothing function h_ε: → defined for a parameter ε>0 by:h_ε(t):=t^2/2ε for |t|≤ε, |t|-ε/2 for |t|>ε.Applying such smoothing to the TV energy (<ref>) removes the singularity in a neighbourhood zero by means of a quadratic term and leaves the TV term almost unchanged otherwise. The resulting Huber-ROF image denoising model then reads:min_u  λ H_ε(u)+1/2u-u^0_2^2,withH_ε(u) :=∑_i=1^m ∑_j=1^n h_ε( √((Du)_i,j,1^2 + (Du)_i,j,2^2 )). The dual problem of (<ref>) reads:min_p 1/2D^*p - u^0 ^2_2 + ε/2λp_2^2 + δ_{·_2,∞≤λ}(p),where p is the dual variable, D^* is the adjoint operator of D (i.e. 
the discretised negative finite-difference divergence operator) and δ_{·_2,∞≤λ} is the indicator function defined by:δ_{·_2,∞≤λ}(p) =0 if |p_i,j|_2≤λ for any i,j,+∞otherwise.Note that (<ref>) is the sum of a function f with Lipschitz gradient and a non-smooth function g which are respectively given by:f(p)=1/2D^*p - u^0^2_2,g(p)= ε/2λp_2^2 + δ_{·_2,∞≤λ}(p).The gradient of the differentiable component f reads:∇ f(p) = D(D^*p-u^0),and it is easy to show that its Lipschitz constant L_f can be estimated as L_f≤ 8, see, e.g. <cit.>. Note also that μ_f=0.The function g is strongly convex with parameter μ_g=μ=ε/λ and its proximal map p̂ = prox_τ g(p̃) can be easily computed pixel-wise as:p̂_i,j=(1+τμ_g)^-1p̃_i,j/max{1,(λ(1+τμ_g))^-1|p̃_i,j|_2},for any i, j,since, due general properties of proximal maps with added squared ℓ^2 terms (see Lemma <ref> in the Appendix), there holds:prox_τ g(p̃) = prox_τ/1+τμ_gδ_{·_2,∞≤λ}(p̃/1+τμ_g)= Π_{·_2,∞≤λ}(p̃/1+τμ_g). Note that the same example has also been considered for similar verifications in <cit.>: our results are in fact in good agreement with the ones reported therein. §.§ ParametersIn the following experiments we consider an image u^0∈^m× n with m=n=256 corrupted by Gaussian noise with zero mean and σ^2=0.005, see Figure <ref>-<ref>. We set the Huber parameter ε=0.01 and the regularisation parameter λ=0.1, so that μ_g=μ=0.1. In our comparisons we use the GFISTA algorithms <ref> and <ref> with and without backtracking using the prior knowledge of L_f given by the estimate L_f=8 and an initial L_0, respectively. To ensure monotone decay we use the modified version described in Section (<ref>), i.e. we use the modified update rules (<ref>)-(<ref>). For comparison, we report numerical results where the backtracking strategy is used `classically', i.e. it allows only for increasing of the Lipschitz constant estimate L_k and used `adaptively' i.e. it allows for both its increasing and decreasingalong the iterations. The backtracking factor ρ is set ρ=0.9. The initial value t_1 is set t_1=1.The algorithm is initialised by the gradient of the noisy image u^0, i.e. p_0=Du^0.To compute an approximation of the optimal solution u^*, we let the plain GFISTA algorithm run beforehand for 5000 iterations and store the result for comparison, see Figure <ref>. We then compute the results running the algorithms <ref> and <ref> for =100 iterations. We report the results computed for two different choices of L_0 which underestimate and overestimate the actual value of L_f, respectively, see Figure <ref> and <ref>.For comparison, we further report theO(1/k^2) convergence rate of standard FISTA with no strongly convex parameter (μ=0) encoded. §.§ Strongly convex TV Poisson denoising In this second example we consider a different denoising model for images corrupted by Poisson noise, which is commonly observed in microscopy and astronomy imaging applications. Standard Poisson denoising models using Total Variation regularisation are typically combined with a convex,non-differentiable Kullback-Leibler data fitting term, which can be consistently derived from the Bayesian formulation of the problem via MAP estimation (see, e.g., <cit.>). 
Here, we follow <cit.> and consider a differentiable version of the Kullback-Leibler data term which, for a given positive noisy image u^0∈^m× n corrupted byPoisson noise reads:f(u) =K̃L̃(u_0,u):=∑_i=1^m∑_j=1^nu_i,j + b_i,j - u^0_i,j + u^0_i,jlog( u^0_i,j/u_i,j+b_i,j) if u_i,j≥ 0, u^0_i,j/2b_i,j^2u_i,j^2 + (1-u^0_i,j/b_i,j)u_i,j + b_i,j - u^0_i,j + u^0_i,jlog( u^0_i,j/b_i,j) otherwise,where b∈^m× n stands for the background image which can be typically estimated from the data at hand. It is easy to verify the Lipschitz constant ∇K̃L̃(u_0,u)can be very roughly estimated as L_f= max_i,ju^0_i,j/b_i,j^2,which it is well-defined, positive and finite as long as u^0 and b are positive. As a regularisation term, we will consider the following ε-strongly convex variant of isotropic TV in (<ref>):g(u)=λ Du_2,1+ ε/2u_2^2,where λ>0 stands again for the regularisation parameter. Differently from the Huber-TV ROF example, we aim here to apply the GFISTA algorithm <ref> to solve composite problem:min_u λ Du_2,1+ ε/2u_2^2 + K̃L̃(u_0,u)in primal form.The gradient of the KL term (<ref>) can be easily computed andthe proximal map of g in (<ref>) can be computed using the proximal map of the TV functional due to a general property reported in Lemma <ref> in the appendix, so that, recalling the definition (<ref>), for any z there holds:prox_τ g(z) = prox^λτ/1+ετ_·_2,1(z/1+ετ).Thus, for any τ>0, computing the right hand side of the equality above corresponds simply to solve the classical ROF problem with regularisation parameter σ:=λτ/1+τε. We do that using standard FISTA as an iterative inner solver. §.§ ParametersWe consider an image u^0∈^m× n with m=n=256 corrupted artificially by Poisson noise, see Figure <ref>-<ref>. For simplicity, we consider a constant background with b_i,j=1 for all i,j. We set the strong convexity parameter ε=0.15 and the regularisation parameter λ=0.1. Clearly μ=μ_g=ε. In order to compute the proximal map (<ref>) we use 10 iterations of standard FISTA. In the following example the Lipschitz constant of the gradient of the K̃L̃ term can be estimated via (<ref>) as L_f=45. We report in the following the results computed using the monotone variant of GFISTA algorithm <ref> without backtracking and with classical and full backtracking (Algorithm <ref> with monotone updates (<ref>)-(<ref>)), for which the factor ρ=0.8 is chosen. The initial value t_1 is set t_1=1. The algorithm is initialised using the given noisy image u^0.An approximation of the solution u^* is computed beforehand by letting the plain FISTA algorithm run for 5000 iterations and then stored for comparison, see Figure <ref>. Results are then computed letting the monotone version of the GFISTA algorithms run for =200 iterations. In Figure <ref> we report the results computed for a value of L_0 overestimating the actual one given by L_f and in comparison with standard FISTA with no strongly convex modification. Once again we can observe that by incorporating the strongly convex modification of GFISTA linear convergence is achieved, in comparison with slower convergence of standard FISTA. Furthermore, the local estimate of the Lipschitz constant provided by the full backtracking strategydecreases along the iterations, thus allowing for larger gradient steps and convergence in fewer iterations. 
In Figure <ref>, we plot the monotone decay of the energy along the GFISTA iterates (with and without backtracking) after the monotone modification described in Section (<ref>).§.§ Restarting strategies applied to the elastic net In this final example we test the performance of the GFISTA algorithm with backtracking <ref>in the case when a prior estimate of the strong convexity parameters μ_f and/or μ_g is either misspecified or not available. As a test problem we consider the Elastic Net regularisation model, which, for a given matrix A∈^m× m, data y∈^m and positive parameters λ_1 and λ_2 reads:min_u {F(u):=1/2 Au-y_2^2 + λ_1u_1 + λ_2/2u_2^2},The Elastic Net is commonly used in the study of logistic regression models as a regularised version of the LASSO estimator by means of a ridge-type quadratic term and it is employed for several parameter identification <cit.> and support vector machine problems <cit.>. In order to apply the the GFISTA algorithm <ref>, we split the functional F above into the sum:f(u):= 1/2 Au-y_2^2 + λ_2/2u_2^2 ,g(u):= λ_1u_1.Under this choice, we note that f is differentiable with Lipschitz-continuous gradient given by∇ f(u)= A^*(Au-y) + λ_2 u whose Lipschitz constant can be calculated as L_f = λ_max(A^*A + λ_2Id), where by λ_max(M) we denote the largest eigenvalue of the matrix M. Note that in case of large-size problems (m≫ 1), such computation of L_f may render prohibitively expensive. The non-smooth function g is convex and for τ>0 its proximal map can be calculated component-wise by the soft-thresholding operator as:(prox_τ g(z))_i= sign(z_i)max(|z_i|-τλ_1,0 ), i=1,…,m.Finally, note that f is λ_2-strongly convex, so that μ=μ_f=λ_2.§.§ ParametersIn the following experiments we solve the problem (<ref>) in correspondence of a normalised randomly generated operator A∈^3600× 3600 and for parameters λ_1, λ_2 set as λ_1=0.01 and λ_2=1e^-5, so that μ=μ_f=λ_2. The Lipschitz constant L_f of ∇ f can be estimated in this example as L_f=0.0657. For the backtracking routine, we set the backtracking factor ρ=0.95. The GFISTA algorithm <ref> is initialised by t_1=1, L_0=1 and x_0=0.The plain GFISTA algorithm (<ref>) without backtracking is run for 5000 iterations and its solution x^* is stored for comparisons. The following results are computed by running the algorithm for =100 iterations. In the first test, we compare once again the performance of the GFISTA algorithm <ref> when the prior estimate of L_f is available and when it is not, using both standard Armijo-type backtracking and the adaptive one proposed in this work, see Figure <ref>. Compared to the examples considered above, note that in this case the strong convexity constant of the problem is encoded in the term f defined in (<ref>), which is accommodated by our strategy. Note, however, that it renders typically more efficient to encode strong convexity in the non-smooth component g which is treated implicitly rather than in f which is treated explicitly. This latter choice would require in fact more restrictive time-steps τ≤ 1/(L_f + μ_f). In addition, we also report the results obtained when a “wrong" value of μ_f is used. Given its quadratic behaviour, one may in fact suppose that in addition to the λ_2-strongly convexity, some further strong convexity could be hidden in the quadratic data fitting term. 
In the following, we then report the results obtained by applying the GFISTA algorithm <ref> with full backtracking for a perturbed value of μ_f given by μ_f= λ_2 + δ, for a small perturbation 0< δ≪ 1.Note that under such modification the natural condition μ_f< L_k may be violated along the iterations, thus preventing the algorithm from converging. Whenever this happens, we decrease the value μ_f of a factor ρ, redefine the term q_k appearing in Algorithm <ref> in correspondence of this new value and carry on with the algorithm. In this way convergence is always guaranteed and also large misspecifications of μ_f can be treated.Provided such verification is performed along the iterations, these tests suggest that encoding further, hidden, strong convexity information in the model (<ref>) can improve the convergence rates of the GFISTA algorithm <ref>. Motivated by these considerations, we perform in the following a further numerical test where we assume that the values of the strong convexity parameters μ_f and μ_g (and, consequently, μ) are unknown. In several applications, it is actually very hard to provide an explicit estimation of such parameters.Moreover, as we have seen in the examples above, some hidden strong convexity can be still not detected explicitly only looking at the structure of the functions f and g. An indirect way to estimate strong convexity consists in restarting the algorithm depending on a certain criterion, see, e.g., <cit.>. In <cit.> two heuristic restarting procedures based either onthe evaluation of the composite functional or of a (generalised) gradient are studied. These two restarting approaches have become very popular since then and, more recently, some others have been proposed, for instance in <cit.> and <cit.>. Here, we follow <cit.> and apply the two function- and gradient-based restarting procedures to the GFISTA algorithm <ref> with full backtracking to solve the Elastic Net problem above under the same choice of parameters as above. As discussed in <cit.> the two restarting criteria to consider for FISTA-type algorithms are the following: * Function adaptive restart: restart the algorithm whenever F(u^k+1)>F(u^k). * Gradient adaptive restart: restart the algorithm whenever (y^k-u^k+1)^T(u^k+1-u^k)>0. Compared to the function-based restarting scheme, the gradient adaptive restart isobserved to be more stable around x^*. Furthermore, there is no extra computational cost in applying such restarting to GFISTA<ref> since all the quantities appearing in (<ref>) have already been calculated during the backtracking phase. We remark that this second approach goes under the name of `gradient' restart since one can interpret for each k≥ 0 the FB step (<ref>) in Algorithm <ref> as a generalised gradient step in defined byx^k+1 =prox_τ_k+1 g (y^k - τ_k+1∇ f(y^k))=: y^k - τ_k+1 G(y^k).The restarting condition (<ref>) would then actually read in this case G(y^k)^T(u^k+1-u^k)>0. In Figure <ref>, we report the convergence plots and the Lipschitz constant variations for the solution of the Elastic Net problem (<ref>) via the GFISTA algorithm <ref> with full backtracking combined with the two restarting strategies above. We observe a faster linear convergence compared to the fully backtracked GFISTA algorithm which, heuristically, can therefore be adapted and efficiently employed also to strongly convex problem with no prior estimate on the strong convexity constant μ. A rigorous proof of these convergence results is left for future research. 
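The two restarting criteria above are straightforward to implement. The fragment below shows, in schematic Python/NumPy form, how they could be wired into a FISTA-type loop; the surrounding loop, the variable names and the reset convention are our assumptions for illustration, and only the two tests themselves mirror the function and gradient criteria stated above. As remarked above, the gradient test reuses quantities that are already available from the backtracking step, so it adds essentially no cost per iteration.

```python
import numpy as np

def function_restart(F, u_new, u_old):
    """Function adaptive restart: trigger when F(u^{k+1}) > F(u^k)."""
    return F(u_new) > F(u_old)

def gradient_restart(y, u_new, u_old):
    """Gradient adaptive restart: trigger when (y^k - u^{k+1})^T (u^{k+1} - u^k) > 0."""
    return (y - u_new) @ (u_new - u_old) > 0.0

# schematic use inside a FISTA-type loop:
#     u_new = prox_g(y - tau * grad_f(y), tau)
#     if gradient_restart(y, u_new, u_old):        # or function_restart(F, u_new, u_old)
#         t, y = 1.0, u_new.copy()                 # reset the extrapolation ("restart")
#     else:
#         t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
#         y = u_new + ((t - 1.0) / t_new) * (u_new - u_old)
#         t = t_new
#     u_old = u_new
```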
§ CONCLUSIONS AND OUTLOOK

We have studied a fast backtracking strategy for the strongly convex variant of the FISTA algorithm proposed in <cit.>, based on an inequality condition expressed in terms of the Bregman distance, see Section <ref>. Using standard properties of strongly convex functions and upon multiplication by appropriate terms, we have derived in Section <ref> the convergence estimate (<ref>), whose decay factor (<ref>) has then been studied carefully to estimate the convergence speed of Algorithm <ref>. Our analysis is essentially based on classical technical tools similar to the ones used by Nesterov in <cit.> and on general properties of the extrapolation sequences defined. Our main result is reported in Theorem <ref>, where accelerated linear convergence rates are proved in terms of average quantities depending on the estimated values along the iterations. Our theoretical results are verified numerically in Section <ref> on some exemplar problems.

The backtracking strategy proposed is fast and robust since it allows for adaptive adjustment of the gradient step size (i.e. the proximal map parameter) depending on the local `flatness' of the gradient of the component f in the objective functional, i.e. on the local estimate L_k of L_f. In other words, in flat regions (small L_f) larger step sizes are promoted, whereas where large variations of ∇ f occur (large L_f), smaller steps are preferred for a more accurate descent. From an algorithmic point of view, extrapolation is performed using suitable parameters providing strict decay in the convergence inequality (<ref>), defined not only in terms of the step sizes, but also in terms of the strong convexity parameters of f and g, resulting in more refined convergence rate estimates. Finally, in terms of computational cost, our approach has a lower per-iteration cost than the one studied by Nesterov in <cit.>, since it avoids the calculation of the gradient of the smooth component in the proximal step. Accelerated convergence rates are proved and expressed in terms of average quantities depending on the estimates performed along the iterations.

Further research could address the rigorous analysis of the backtracking approach combined with the restarting procedures à la Candès used in Section <ref> for situations when the strong convexity parameters μ_f and μ_g are unknown. In this work we have heuristically shown good performance only for the function- and gradient-based restarting procedures, but it would also be of great interest to explore the approaches recently proposed by Fercoq and Qu <cit.>, where restarting does not require any condition but instead combines past iterates of the algorithm in an appropriate way. A rigorous analysis of a combined backtracking-restarting procedure would be very interesting for the sake of designing an algorithm fully adaptive to the local convexity and smoothness of its functions. It would also be interesting to test the robustness and the performance of our algorithm on other strongly convex, possibly large-scale problems coming from the fields of image and data analysis with various condition numbers.

§ ACKNOWLEDGEMENTS

The authors would like to thank the anonymous referees for their valuable comments, which significantly improved the quality of the manuscript.

§ SOME USEFUL LEMMAS

In this appendix we prove some general results which have been used in our work. We start with a general inequality used to derive the descent rule (<ref>).
Its proof is a consequence of a trivial property of strongly convex functions.notationLemma[section] If h:𝒳→∪{∞} is strongly convex with parameter μ_h>0 and x̂∈𝒳 is a minimiser of h, the following property holds:h(x)≥ h(x̂) + μ_h/2x-x̂^2,for any x∈𝒳.By definition of μ_h-strong convexity, for any x,y∈𝒳 there holds:h(x) ≥ h(y) + ⟨ p, y-x⟩ + μ_h/2x-y^2,where p∈∂ h(y), the subdifferential of h evaluated in y. Taking y=x̂, since 0∈∂ h(x̂), we get (<ref>).An immediate consequence of this general property is the proof of the descent rule (<ref>) used in Section <ref> as a starting point of our convergence estimates. We follow <cit.>.lemma:descent[notation]Lemma\beginlemma:descentLet f:𝒳→ be a μ_f-strongly convex function with Lipschitz gradient with constant L_f and g:𝒳→∪{∞} be a l.s.c., μ_g-strongly convex function. Then, defining for any x̅∈𝒳 and any 0<τ <1/L_f the forward-backward map: T_τ: x̅↦prox_τ g(x̅-τ∇ f(x̅))=:x̂, the following inequality holds for the composite functional F=f+g:F(x)+(1-τμ_f)x-x̅^2/2τ≥ F(x̂) + (1+τμ_g)x-x̂^2/2τ,for anyx∈𝒳.\endlemma:descent By definition, x̂ is the minimiser of the function h:𝒳→∪{∞} defined by:h:x↦ g(x) + f(x̅) + ⟨ f(x̅,x-x̅⟩ + x-x̅^2/2τ.The function h is strongly convex with parameter μ_h:=(τμ_g+1)/τ. Hence, for any x∈𝒳:F(x)+(1-τμ_f)x-x̅^2/2τ≥ g(x) + f(x̅) + ⟨∇ f(x̅),x-x̅⟩ + x-x̅^2/2τ≥ g(x̂) + f(x̅) + ⟨∇ f(x̅),x̂-x̅⟩ + x̂-x̅^2/2τ + (1+τμ_g)x-x̂^2/2τ≥ g(x̂) + f(x̂) + 1-τ L_f/2τx̂-x̅^2 + (1+τμ_g)x-x̂^2/2τ,= F(x̂) + 1-τ L_f/2τx̂-x̅^2 + (1+τμ_g)x-x̂^2/2τ,where the first inequality holds by strong convexity of f, the second one is a simple application of Lemma <ref> and the last one follows from the Lipschitz continuity of ∇ f. Since τ L_f<1 by assumption, we can neglect the third term in (<ref>) and get (<ref>). We finally report a general properties of proximal mappings which we used in our numerical experiments in Section <ref>. For a general convex function h it essentially allows a straightforward calculation of the proximal map of the composite ε-strongly convex function g:=α h + ε/2·_2^2 in terms of the proximal map of h itself. We recall the notation (<ref>).lemma:proximal:map[notation]Lemma\beginlemma:proximal:map Let h:𝒳→∪{+∞} a convex, proper and l.s.c. function. For α, ε>0 let g be defined as:g(x):=α h(x) + ε/2x^2, x∈𝒳.Then, there holds:prox_τ g(z) = prox_h^ατ/1+ετ(z/1+ετ),for any τ>0 and z∈𝒳.\endlemma:proximal:map Let τ>0 and z∈𝒳. We have the following chain of equalities:prox_τ g(z)=prox^τ_g(z) = _y∈𝒳 g(y) + 1/2τy-z^2 =_y∈𝒳  h(y) + 1+τε/2ατy^2 + 1/2ατz^2 - 1/ατ⟨ y, z⟩ =_y∈𝒳  h(y) + 1/2ατ/1+τεy^2 + (1/2ατ(1+ετ)- ε/2α(1+ετ))z^2 - 1/ατ⟨ y, z⟩ =_y∈𝒳  h(y) + 1/2ατ/1+τεy^2 +1/2ατ/1+τεz/1+ετ^2 - 1+ετ/ατ⟨ y, z/1+ετ⟩ = _y∈𝒳  h(y) +1/2ατ/1+τε y - z/1+ετ^2= prox^ατ/1+ετ_h(z/1+ετ). amsplain
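The last lemma above, the prox-rescaling identity used in the numerical examples for the Huber-ROF dual prox and for the strongly convex TV prox, is easy to check numerically. Assuming the special case h = ℓ1-norm, whose prox with parameter σ is componentwise soft-thresholding, the Python/NumPy snippet below compares the rescaled formula prox_h^{ατ/(1+ετ)}(z/(1+ετ)) with a brute-force componentwise minimisation of α|y| + (ε/2)y² + (y−z)²/(2τ); the grid search is only a sanity check and all names and parameter values are ours.

```python
import numpy as np

def soft_threshold(v, sigma):
    """Componentwise prox of sigma*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - sigma, 0.0)

def prox_brute_force(z, tau, alpha, eps, grid=np.linspace(-20.0, 20.0, 400001)):
    """Componentwise grid minimisation of alpha*|y| + (eps/2)*y^2 + (y-z)^2/(2*tau)."""
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        obj = alpha * np.abs(grid) + 0.5 * eps * grid**2 + (grid - zi) ** 2 / (2.0 * tau)
        out[i] = grid[np.argmin(obj)]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    z, tau, alpha, eps = 5.0 * rng.standard_normal(8), 0.7, 1.3, 0.4
    rescaled = soft_threshold(z / (1.0 + eps * tau), alpha * tau / (1.0 + eps * tau))
    brute = prox_brute_force(z, tau, alpha, eps)
    print(np.max(np.abs(rescaled - brute)))   # agreement up to the grid spacing (1e-4)
```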
Readable and Editable Ontologies]User and Developer Interaction with Editable and Readable OntologiesA. Blfgeh et al]Aisha Blfgeh ^1,2[To whom correspondence should be addressed: mailto:[email protected]@newcastle.ac.uk or mailto:[email protected]@kau.edu.sa] and Phillip Lord ^1 ^1School of Computing Science, Newcastle University, UK^2Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia [ [ December 30, 2023 =====================The process of building ontologies is a difficult task that involves collaboration between ontology developers and domain experts and requires an ongoing interaction between then.This collaboration is made more difficult, because they tend to use different tool sets, which can hamper this interaction. In this paper, we propose to decrease this distance between domain experts and ontology developers by creating more readable forms of ontologies, and further to enable editing in normal office environments.Building on a programmatic ontology development environment, such as , we are now able to generate these readable/editable from the raw ontological source and its embedded comments.We have this translation to HTML for reading; this environment provides rich hyperlinking as well as active features such as hiding the source code in favour of comments. We are now working on translation to a Word document that also enables editing.Taken together this should provide a significant new route for collaboration between the ontologist and domain specialist.§ INTRODUCTIONOntologies are wide-spread in the field of biology and biomedicine, as they facilitate the management of knowledge and the integration of information, as in the Semantic Web <cit.>. Additionally, biological data are not only heterogeneous but also require complex domain knowledge to be dealt with <cit.>. Therefore, ontologies are useful models for representing this complex knowledge that is potentially changing and are also widely used in biomedicine, examples being the GO (Gene Ontology) <cit.>, SNOMED (Systematized Nomenclature of Medicine) <cit.>.However, building an ontology is a challenging task due to the use of languages with a sophisticated formalism (such as OWL), especially when combined with a complex domain such as biology or medicine. Normally ontologies are built as a collaboration between domain specialists who have the knowledge of the domain and ontology developers who know how to structure and represent the knowledge; they have to work together to construct a robust and accurate ontology. Often, community involvement during the process of building ontologies using meetings, focus groups and the like is very important <cit.>, as in GO where biological community involvement is important for successful uptake <cit.>. In addition,  <cit.> state that the development of Protein Ontology requires wider range of involvement to include other users and developers of the associated ontologies (such as GO) to ensure consistent architecture of the ontology.Biologists represent, manipulate and share their data in a wide-variety of tools such as Microsoft Excel spreadsheets and Word documents. Unfortunately, these environments are far removed from the formal structured representation of the ontology development environments with which the ontologists work to build ontologies. 
As a result of this difference in tools it is unclear how we can bridge the gap between the the two groups; this would be useful to facilitate the interaction between domain specialists and ontologists and help to make more convenient for both sides to read and/or manipulate the ontology.Ontology development environments are designed to produce formal structured representation of any domain. Either using GUI software such as  [<http://protege.stanford.edu/>] or a textual programmatic environment such in  <cit.>. The next section describes these tools in more details.§ BUILDING ONTOLOGIESThere are various tools for constructing and developing ontologies with a variety of user interfaces and environments. The most popular is which is an open-source tool that provides a user interface to develop and construct ontologies of any domain. It has been widely used for developing ontologies due to the variety of plug-ins and frameworks <cit.>. provides an easy interface for editing, visualisation and validation of ontologies as well as a useful tool for managing large ontologies <cit.>.Conversely, Tawny-OWL is a textual interface for developing ontologies in a fully programmatic manner <cit.>. This provides a convienient and readable syntax which can be edited directly using an IDE or text editor; in this style of ontology development, the ontologist ceases to mainpulate an OWL representation directly, and instead develops the ontology as programmatic source code. In contrast to developing ontologies in OWL, the ontologist can introduce new abstractions and syntax as they choose, whether for general use or specifically for a single ontology. An OWL version of the ontology can then be generated as required. It has been implemented in Clojure, which is a dialect of lisp and runs on the Java Virtual Machine <cit.>. Like , it also wraps the OWL-API <cit.> which performs much of the actual work, including interaction with reasoners, serialisation and so forth.Recently, we have developed tolAPC ontology using a new document-centric approach by including an Excel spreadsheet directly in the development pipeline. The spreadsheet contains all knowledge for the ontology which has been created and maintained by a biologist. Meanwhile, we design the ontology patterns using , then generate the axioms by extracting data from the spreadsheet using Clojure. Thus, contains the spreadsheet as a part of the source code; which can be freely updated and the ontology regenerated when needed. Hence, it remains as a part of the ontology development process <cit.>.In this approach, the Excel spreadsheet is totally developed by biologists; this has a significant advantage because it is a tool which they are familiar with and find convenient. However, we cannot ensure that the programmatic transformation of the values in the spreadsheet to the final ontology conforms with the domain specialists understanding, without the biologists reading and interacting with source code. Therefore, next we will discuss the probabilities of making this ontological source more readable by the specialists. § MULTILINGUAL ONTOLOGIESThe first and most obvious mechanism for increasing ontology readability is to enable users to read and write the ontology using their native language. 
Internationalisation technologies are widespread and enable support for multiple languages in applications with a graphical user interface. We next consider how we can enable support for multiple languages in a textual user interface such as Tawny-OWL, giving the ontologist the ability to use their own native language for all parts of the development process.

The first option is to use a polyglot library. This mimics a fairly standard technique for the internationalisation of programmatic code: the ontology is developed with a set of programmatic labels which are then referenced in a language, or locale, bundle with an appropriate translation. In the case of Tawny-OWL, this translation appears as |rdfs:label| annotations on the ontology entities (classes, properties, etc.). This overall process is shown in Figure <ref>, which places Italian and Arabic translations onto the pizza ontology.

While this may enable internationalisation for users of the ontology, it does not change the English-centric editing environment. We would wish, instead, to internationalise the entire source code of the ontology. This makes the entire ontology more comprehensible and readable for all developers who communicate in Italian and/or Arabic. This is fully supported with a full conversion of the environment using the multilingual feature of Tawny-OWL, as in Figure <ref>, which shows the English, Italian and Arabic versions of the pizza ontology <cit.>. The latter of these uses a right-to-left alphabet, and we can use the IDE to change the direction in which the code is rendered. This demonstrates the capability of Tawny-OWL to adapt to any language. The next language to be implemented will be French.

These multilingual environments are advantageous in being readable and comprehensible by users working in their own language. This still leaves us in a programming environment, however, which is unlikely to be familiar or comfortable for most domain users. Moreover, the ontology lacks a narrative structure, which means that it cannot be read in a literate fashion. We consider how to enable this in the next section.

§ LITERATE ONTOLOGIES

The term literate programming was introduced by <cit.>; in this paradigm the program is treated as a piece of literature rather than simply a program. The main idea is to interleave text with code so that the program is also its own documentation. The intention is that the program should become easier to understand and, conversely, that the documentation is less likely to become out of date, as it is maintained in the same place. As Tawny-OWL is a fully programmatic environment, we can add comments freely, along with any additional mark-up that we wish. This enables us to produce different representations of the ontology.

We have previously discussed two examples of literate ontologies: the first is the Amino Acid Ontology, taken from a previous ICBO 2015 tutorial about Tawny-OWL [<http://homepages.cs.ncl.ac.uk/phillip.lord/take-wing/take_wing.html>], while the second is a version of the Karyotype Ontology <cit.>. In both cases, they have been produced from the source code, with markup in the comments being interpreted using a markup processing tool. Figure <ref> shows a snippet from the literate Amino Acid Ontology rendered as a webpage. The result appears as a normal web page, with syntax highlighting for the source code. Literate ontologies can be represented in different forms, using various techniques for converting the markup text into different formalisms, webpages for example.
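To make the general idea concrete, the following is a minimal Python sketch of the comment-extraction step, not the actual markup processing tool used here: it splits a Lisp-style ontology source into commentary and code blocks and emits simple HTML, with the code wrapped so that a style sheet or a little JavaScript could later hide it in favour of the commentary. The file content and the choice of `;;` as the comment marker are assumptions made for the example.

```python
import html

def literate_to_html(source_text):
    """Split Lisp-style source into comment/code blocks and emit simple HTML.

    Lines starting with ';;' are treated as narrative commentary; everything
    else is treated as ontology source code and wrapped in <pre> blocks.
    """
    chunks, mode, buffer = [], None, []

    def flush():
        if not buffer:
            return
        text = "\n".join(buffer)
        if mode == "comment":
            chunks.append("<p>{}</p>".format(html.escape(text)))
        else:
            chunks.append('<pre class="source">{}</pre>'.format(html.escape(text)))
        buffer.clear()

    for line in source_text.splitlines():
        line_mode = "comment" if line.lstrip().startswith(";;") else "code"
        if line_mode != mode:
            flush()
            mode = line_mode
        # Strip the comment marker so only the narrative text is shown.
        buffer.append(line.lstrip()[2:].strip() if line_mode == "comment" else line)
    flush()
    return "<html><body>\n" + "\n".join(chunks) + "\n</body></html>"

if __name__ == "__main__":
    example = ";; The twenty standard amino acids.\n(defclass AminoAcid)"
    print(literate_to_html(example))
```

A real pipeline would of course use a proper markup processor for the comments; the sketch is only meant to show how little machinery the comment/code separation itself requires.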
Representing the ontology as an HTML webpage gives us the ability to navigate and browse the documentation either in order (section by section) or with a navigation facility (jumping between sections). It is also possible to hide or expose the “source” sections, leaving the reader to see just the documentation as appropriate. From the developer's perspective, while the reader may still not be able to see the axiomatisation in this way, the comments that they have checked are embedded directly next to the code which is an interpretation of them.

Enabling specialists to read and navigate through the ontology and its documentation is valuable. However, with HTML there are no editing facilities to modify and update the ontology. Therefore, rather than relying on HTML alone, we have also investigated the possibility of turning the whole ontology into a Word document, an environment in which the content can also be modified, changed or updated. Biologists and domain specialists are then placed in an environment in which they can freely provide feedback on an existing ontology simply by interacting with Word documents.

§ DISCUSSION

In this paper, we have described our approach to the translation of ontologies into a form with which domain users can interact more naturally. We have shown that it is possible to translate a textual environment like Tawny-OWL into another human language, or indeed a different script, including right-to-left text. To our knowledge, this is the first ontology editing environment with such textual and syntactic flexibility. Although the multilingual approach may be less relevant for scientific ontologies, something similar is already applied in some terminologies; for example, the use of natural-language keywords in place of the universal and existential quantifier notation already implies an agreement to use an alternative language for ontology development.

Further than this, however, we also translate the ontological source code into alternative visualisations such as HTML and Word documents which map directly back to the source, but which can differ from it: for instance, by enabling hyperlinks, adding section links and hiding source code in favour of commentary. Especially with a Word document, this should enable a novel mechanism for interacting with an ontology: users can see and edit comments, with change tracking switched on, and use this as a mechanism for feeding back to the ontology developer.

This approach, of course, only enables us to visualise ontologies developed using Tawny-OWL. While a migration path is provided <cit.>, a wholesale switch to Tawny-OWL is not effort-free. We note, however, that many ontologies are developed partly in Protégé and partly using OWL generated from other sources; a secondary migration path would be to use Tawny-OWL for these generated sections.

We still need to evaluate this kind of interaction rigorously. For this, we are proposing a focus-group study in which specialist participants will read the ontology document, provide their opinions about it, and indicate whether they would prefer to update any terminology according to their expertise. We are not proposing that Word documents will be directly used by domain specialists for editing ontologies. We expect that an ontologist will be involved in incorporating the suggested changes and feeding them back to the domain user; in this sense, we are using a Word document as an intermediate representation <cit.>. Our hope is that the reviewing features of Word should, however, enable us to provide a rich environment to support the ontologist in this process.
Taken together, these should provide a significantly enhanced process for knowledge capture, ontology development and refinement compared with the process that we currently have.

§ ACKNOWLEDGEMENTS

Thanks to Newcastle University for supporting this research. Also, thanks to King Abdulaziz University, Jeddah, Saudi Arabia for funding and supporting the study.
Obliquity and Eccentricity Constraints For Terrestrial Exoplanets

Stephen R. Kane^1 and Stephanie M. Torres^2
^1 Department of Earth Sciences, University of California, Riverside, CA 92521, USA
^2 Department of Physics & Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132, USA
[email protected]

Discoveries over recent years have shown that terrestrial planets are exceptionally common. Many of these planets are in compact systems that result in complex orbital dynamics. A key step toward determining the surface conditions of these planets is understanding the latitudinally dependent flux incident at the top of the atmosphere as a function of orbital phase. The two main properties of a planet that influence the time-dependent nature of the flux are the obliquity and orbital eccentricity of the planet. We derive the criterion for which the flux variation due to obliquity is equivalent to the flux variation due to orbital eccentricity. This equivalence is computed for both the maximum and average flux scenarios, the latter of which includes the effects of the diurnal cycle. We apply these calculations to four known multi-planet systems (GJ 163, K2-3, Kepler-186, and Proxima Centauri), where we constrain the eccentricity of terrestrial planets using orbital dynamics considerations and model the effect of obliquity on incident flux. We discuss the implications of these simulations on climate models for terrestrial planets and outline detectable signatures of planetary obliquity.

§ INTRODUCTION

Exoplanetary science increasingly requires characterization techniques for terrestrial planets as their discovery rate continues to increase. The Kepler mission has demonstrated that planet frequency increases with smaller size <cit.>, implying that the Transiting Exoplanet Survey Satellite will discover numerous examples of terrestrial planets around bright host stars <cit.>. Significant attention is given to those planets that lie within the Habitable Zone (HZ) of their host stars <cit.>, although the HZ is primarily a target selection tool for future atmospheric studies <cit.>. In the meantime, General Circulation Models (GCMs) are used to provide our best estimate of the surface conditions for discovered HZ planets <cit.>.

A primary driving force in GCMs affecting surface conditions, climate dynamics, and seasonal variations, is the instellation flux on the planet <cit.>. Two primary factors affect the variability of the instellation flux: orbital eccentricity and obliquity. Tidal effects can occasionally play a significant role for planets in eccentric orbits and/or involved in planet-planet interactions <cit.> and may even push the planet into a runaway greenhouse scenario <cit.>. The effect of eccentricity on planetary atmospheres and subsequent climate variations follows a Keplerian pattern of long winters interrupted by brief periods of “flash heating” during periastron passage <cit.>. The obliquity of a planet's rotational axis undergoes short and long term oscillations due to perturbations from other planetary bodies in the system <cit.>. Fluctuations in planetary obliquity can have large effects on climates <cit.> and extreme obliquities can move the outer edge of the HZ <cit.>. Of the two primary factors, orbital eccentricity is currently a far more accessible measurable than obliquity.
However, for systems in which we have constraints on eccentricity, we can determine the range of obliquities that drive the variation of instellation flux. Here, we describe the latitudinal flux incident on an exoplanet as a function of obliquity, eccentricity, and orbital phase. We further show how eccentricity constraints from radial velocity (RV) measurements or dynamical constraints can be used to model obliquity-dependent flux variations and locate regions where the changes in flux due to obliquity are equivalent to those due to eccentricity. In Section <ref> we formulate the time variable flux and equate regions of flux change in obliquity and eccentricity parameter space, both for maximum and average flux scenarios. In Section <ref> we provide stability criteria for known terrestrial planets in the GJ 163, K2-3, Kepler-186, and Proxima Centauri systems and model their potential flux maps as a function of obliquity. In Section <ref> we discuss the implications of the flux variations for surface temperatures and atmospheric conditions in so far as they affect habitability. We provide concluding remarks and suggestions for future work in Section <ref>.

§ THE TIME VARIABLE FLUX

Here, we consider the orbital eccentricity and the obliquity of the planetary rotation axis as sources of variable flux as a function of latitude. For a given eccentricity, e, semi-major axis, a, and star-planet separation, r, the maximum flux occurs at periastron, r = a(1-e), and the minimum flux occurs at apastron, r = a(1+e). In Section <ref>, we calculate the maximum incident flux (when the star crosses the local meridian) for a given latitude. This maximum flux can be considered the instantaneous flux, or the latitudinal flux received as a planet nears synchronous rotation (tidal locking). In Section <ref>, we calculate the average flux over the diurnal cycle of the planet, which applies to planets whose rotation period is significantly smaller than the orbital period.

§.§ Maximum Flux Variation

The maximum flux at latitude β is given by

F = L_⋆/(4π r^2) (sinδ sinβ + cosδ cosβ) = L_⋆/(4π r^2) cos|β - δ|

where L_⋆ is the stellar luminosity. The solar declination, δ, is given by

δ = θ cos[2π(ϕ - Δϕ)]

for which ϕ is the orbital phase, Δϕ is the offset in phase between periastron and the highest solar declination in the northern hemisphere, and θ is the obliquity. For the Earth, Δϕ = 0.46 and θ = 23.5. Figure <ref> is an incident flux map for an Earth–Sun analog as a function of latitude with contours of constant flux throughout a complete orbital phase. The phase of ϕ = 0.0 corresponds to the planet's periastron passage.

The aim of the calculations here is to determine the values of e and θ for which the maximum changes in flux during an orbit, ΔF, are equivalent at a given latitude, β. For the change in flux due to e, we assume θ = 0, and likewise for the change in flux due to θ, we assume e = 0. For eccentricity, the maximum change in flux is the difference in flux between periastron and apastron:

ΔF_e = L_⋆/(4π a^2 (1-e)) cosβ - L_⋆/(4π a^2 (1+e)) cosβ
     = L_⋆/(4π a^2) cosβ [1/(1-e) - 1/(1+e)]
     = L_⋆/(2π a^2) · e/(1-e^2) · cosβ

For obliquity, the maximum change in flux is set by the difference between the minimum and maximum solar declination.
When θ ≤ 45 and θ ≤ β ≤ 90 - θ, this can be expressed as

ΔF_θ = L_⋆/(4π a^2) cos(β-θ) - L_⋆/(4π a^2) cos(β+θ)
     = L_⋆/(4π a^2) [cos(β-θ) - cos(β+θ)]
     = L_⋆/(2π a^2) sinβ sinθ

For β < θ, the following applies:

ΔF_θ = L_⋆/(4π a^2) [1 - cos(β+θ)]

and for β > 90 - θ, the following applies:

ΔF_θ = L_⋆/(4π a^2) cos(β-θ)

Thus, the maximum flux changes due to eccentricity and obliquity are equivalent where ΔF_e = ΔF_θ. Solving for obliquity in the regime θ ≤ β ≤ 90 - θ, we combine Equations <ref> and <ref>:

L_⋆/(2π a^2) · e/(1-e^2) · cosβ = L_⋆/(2π a^2) sinβ sinθ
e/(1-e^2) = tanβ sinθ
θ = arcsin[ e / ((1-e^2) tanβ) ]

For β < θ, we combine Equations <ref> and <ref>:

L_⋆/(2π a^2) · e/(1-e^2) · cosβ = L_⋆/(4π a^2) [1 - cos(β+θ)]
e/(1-e^2) = [1 - cos(β+θ)] / (2 cosβ)
θ = arccos[ 1 - 2 cosβ · e/(1-e^2) ] - β

For β > 90 - θ, we combine Equations <ref> and <ref>:

L_⋆/(2π a^2) · e/(1-e^2) · cosβ = L_⋆/(4π a^2) cos(β-θ)
e/(1-e^2) = cos(β-θ) / (2 cosβ)
θ = β - arccos[ 2 cosβ · e/(1-e^2) ]

Solving for eccentricity results in

e = [ √(1 + f(θ,β)^2) - 1 ] / f(θ,β)

where the function f(θ,β) for θ ≤ 45 is given by

f(θ,β) = 2 tanβ sinθ                 for θ ≤ β ≤ 90 - θ
       = [1 - cos(β+θ)] / cosβ       for β < θ
       = cos(β-θ) / cosβ             for β > 90 - θ

Equations <ref> and <ref> allow the calculation of orbital eccentricities for which the total change in incident flux is the same as for obliquities with θ ≤ 45. Using the same methodology for θ > 45, the function f(θ,β) is given by

f(θ,β) = 1 / cosβ
       = [1 - cos(β+θ)] / cosβ
       = cos(β-θ) / cosβ

for the corresponding latitude regimes. Shown in Figure <ref> are the locations of eccentricity and obliquity where the flux variations during a complete orbital phase are equivalent to each other. We plot this for latitudes ranging from β = 0 to β = 90 in steps of 10. At latitudes close to the poles, the variation between winter and summer incident flux increases as the pole is tilted toward the ecliptic plane. The minimum flux for an eccentric orbit with e < 1 will never reach zero, even at apastron. Thus, an obliquity of θ = 90 approaches a boundary condition where the flux difference is equivalent to that of a hyperbolic orbit.

There is a difference that should be noted between the changes in flux due to eccentricity and obliquity. Although the flux variation due to obliquity at a given latitude varies sinusoidally, the flux variation due to eccentricity varies based on the star-planet separation produced by a Keplerian orbit. Therefore, though the total change in flux is the same, the rate at which the flux varies between minimum and maximum is different for the eccentricity and obliquity scenarios, likely resulting in a different atmospheric response over the orbital phase time scale.

§.§ Diurnal Cycle Effects

For planets where the rotation period is significantly less than the orbital period, the average incident flux as a function of latitude may be used. For this purpose, Equation <ref> is modified as follows:

F = L_⋆/(4π r^2) (sinδ sinβ + cosδ cosβ cos h)

where h is the hour angle of the star with respect to the local meridian. The fraction of the planetary rotation period that experiences daylight for a given latitude is

Δt_dl = 2 arccos(-tanδ tanβ) / 360

For obliquities of θ > 0, there will be latitudes that experience constant day/night during the course of an orbital period. These situations are defined by the criteria that if β + δ > 90 or β + δ < -90 then Δt_dl = 1.0, and if β - δ > 90 or β - δ < -90 then Δt_dl = 0.0. The average flux at a given latitude can then be calculated by accounting for the change in flux as a function of h and the fractional daylight time. Figure <ref> is an incident flux map averaged over the diurnal cycle for an Earth–Sun analog as a function of latitude.
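As a concrete illustration of this averaging step, the following is a small Python sketch (an illustrative re-implementation, not the authors' code) that evaluates the diurnally averaged flux at a given latitude by integrating the hour-angle dependence over the daylight fraction, with the polar-day and polar-night cases handled explicitly:

```python
import numpy as np

def diurnal_mean_flux(L_star, r, beta_deg, delta_deg):
    """Diurnally averaged flux at latitude beta for solar declination delta.

    L_star : stellar luminosity [W]
    r      : star-planet separation [m]
    Averages F = L/(4 pi r^2) (sin d sin b + cos d cos b cos h) over a full
    rotation, with night-time (negative) contributions set to zero.
    """
    beta, delta = np.radians(beta_deg), np.radians(delta_deg)
    x = -np.tan(delta) * np.tan(beta)
    if x <= -1.0:        # constant daylight (polar day)
        h0 = np.pi
    elif x >= 1.0:       # constant night (polar night)
        return 0.0
    else:
        h0 = np.arccos(x)  # hour angle of "sunset"
    # Analytic mean over the full rotation period (standard daily-mean insolation)
    return (L_star / (4.0 * np.pi * r**2)) * (
        h0 * np.sin(delta) * np.sin(beta)
        + np.cos(delta) * np.cos(beta) * np.sin(h0)
    ) / np.pi

# Example: Earth-like case at 45 degrees latitude near a solstice
L_sun, au = 3.828e26, 1.496e11
print(diurnal_mean_flux(L_sun, au, 45.0, 23.5))   # ~500 W m^-2
```

The closed-form expression inside the function is simply the hour-angle integral written out; a purely numerical average over h gives the same result.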
The comparison with Figure <ref> shows the impact of including the effect of constant daylight periods on the polar regions.

As in Section <ref>, we now calculate the values of e and θ for which the changes in the average flux during an orbit, ΔF, are equivalent at a given latitude, β. For the eccentricity case with θ = 0, the average flux is equivalent to the amplitude shown in Equation <ref> multiplied by the average of a sine function including the effect of a day/night cycle. This leads to an additional 1/π factor, as follows:

ΔF_e = L_⋆/(2π^2 a^2) · e/(1-e^2) · cosβ

For obliquity, the introduction of the hour angle and the fractional daylight in Equations <ref> and <ref> produces a non-trivial calculation of ΔF_θ for the various obliquity and latitude ranges. We solve this by numerically calculating the regions where ΔF_e = ΔF_θ. The results of these calculations are summarized in Figure <ref> where, as for Figure <ref>, we plot lines of constant latitude from β = 0 to β = 90 in steps of 10. The main effect of including the diurnal cycle is to smooth the relationship with eccentricity due to the averaging of the flux received at a given latitude. Additionally, the diurnal cycle increases the equivalent eccentricity at high latitudes, as the change in average flux is larger than for the maximum flux case described in Section <ref>. The combination of the two factors, eccentricity and obliquity, is investigated for specific planets in the case studies that follow.

§ CASE STUDIES

Here, we apply eccentricity constraints through stability considerations to four of the known exoplanets: GJ 163 c, K2-3 d, Kepler-186 f, and Proxima Centauri b. These are then used to determine latitudinal flux maps of the planets as a function of orbital phase for fixed obliquities, including diurnal effects. The four exoplanets were carefully chosen from the known terrestrial exoplanets considering their proximity to the HZ and the diversity of the system architectures. System parameters were extracted from the NASA Exoplanet Archive <cit.> and relevant publications (see Table <ref>).

Stellar and Planetary Parameters
Parameter | GJ 163 c^a | K2-3 d^b | Kepler-186 f^c | Proxima Centauri b^d
Star:
Spectral Type | M3.5 V | M0.0 V | M1 V | M5.5 V
V | 11.811±0.012 | 12.17±0.01 | 15.65 | 11.13
Distance (pc) | 15.0±0.4 | 45±3 | 151±18 | 1.295
T_eff (K) | 3500±100 | 3896±189 | 3788±54 | 3050±100
M_⋆ (M_⊙) | 0.40±0.02 | 0.60±0.09 | 0.478±0.055 | 0.120±0.015
R_⋆ (R_⊙) | – | 0.56±0.07 | 0.472±0.052 | 0.141±0.021
L_⋆ (L_⊙) | 0.0196±0.001 | 0.065^e | 0.0412±0.069 | 0.00155±0.00006
CHZ (AU) | 0.145–0.282^e | 0.262–0.500^e | 0.21–0.40^e | 0.041–0.081^e
OHZ (AU) | 0.115–0.297^e | 0.207–0.527^e | 0.17–0.42^e | 0.032–0.086^e
Planet:
P (days) | 25.63±0.03 | 44.5631^+0.0063_-0.0055 | 129.9459±0.0012 | 11.186±0.002
e | 0.099±0.086 | <0.162^e | <0.628^e | <0.35
ω (deg) | 227±80 | – | – | 310
M_p (M_⊕) | 6.8±0.9 | 3.97^e | 1.54^e | 1.27^+0.19_-0.17
R_p (R_⊕) | – | 1.52^+0.21_-0.20 | 1.11^+0.14_-0.13 | –
a (AU) | 0.1254±0.0001 | 0.2076^+0.0098_-0.0108 | 0.356±0.048 | 0.0485^+0.0051_-0.0041
R_H (AU) | – | 0.004^e | 0.005^e | –
Notes: ^a <cit.>; ^b <cit.>; ^c <cit.>; ^d <cit.>; ^e calculated in this work.

§.§ Stability Criteria

Of the four systems considered here, two were discoveries using the RV technique (GJ 163 and Proxima Centauri). The planets in these systems have measurements and subsequent constraints placed upon their orbital eccentricities from the Keplerian fit to the RV data. The remaining two systems, K2-3 and Kepler-186, were detected using the transit method with scant RV data obtained.
These two systems thus have limited information available for the planetary orbital eccentricities. Observations of compact Kepler systems indicate that such planets are likely to be in circular orbits <cit.>. However, here we use stability considerations to determine the maximum eccentricities allowed for planets in those systems.

We use a similar methodology for orbital stability to that used by <cit.> and <cit.>. The masses of the transiting planets (M_p) were calculated using the mass-radius relationships of <cit.>. For two-planet systems, a criterion for stability was numerically estimated by <cit.>, requiring that the separation of the planets exceed about 3.5 mutual Hill radii (R_H,M_p), given by

R_H,M_p = [ (M_p,in + M_p,out) / (3 M_⋆) ]^1/3 · (a_in + a_out)/2

where M_⋆ is the mass of the host star and the “in/out” subscripts refer to the inner and outer planets in the system. For multi-planet systems, a long-term stability criterion established by <cit.> requires that Δ > 9 for adjacent planets, where Δ = (a_out - a_in)/R_H. For three adjacent planets, the criterion becomes Δ_in + Δ_out > 18, where Δ_in and Δ_out are the Δ calculations for the inner and outer adjacent planet pairs, respectively. By modifying Equation <ref> with a (1 - e) multiplicative factor to account for eccentricity, we are able to determine eccentricities that satisfy the above stability criteria. The results of these calculations for individual systems are described in the sections specific to those systems below.

§.§ Habitable Zone

To calculate the HZ boundaries of the four planetary systems studied here, we use the methodology described by <cit.>. There are two inner and two outer boundaries calculated, the extent of which depends on assumptions regarding how long Venus and Mars were able to retain liquid water at their surfaces. These are referred to as the Conservative Habitable Zone (CHZ) and the Optimistic Habitable Zone (OHZ), for which a detailed description can be found in <cit.>. Our calculations for the CHZ and OHZ boundaries for each of the systems are shown in Table <ref>.

Figure <ref> shows a top-down view of the systems, including the planetary orbits and the CHZ (light-green) and OHZ (dark-green) regions. The size along a panel side (scale) in the figure is indicated in the top-right corner of each panel. The parameters used to plot the planetary orbits are those from Table <ref> and the associated references. For K2-3 d and Kepler-186 f, we have used the maximum eccentricities for those planets from the calculations of Section <ref>, described further in Sections <ref> and <ref>. The percentage of a complete orbital period spent within the OHZ is 86% (GJ 163 c), 56% (K2-3 d), 33% (Kepler-186 f), and 94% (Proxima Centauri b).

§.§ GJ 163

The known planets orbiting the low-mass star GJ 163 were discovered by <cit.>. Their analysis of the RV data indicated the presence of five periodic signals, two of which were attributed to possible stellar activity sources. The three-planet solution includes a 6.8 M_⊕ planet (planet c) in a ∼25 day period orbit. We adopt this three-planet solution and use the stellar parameters of <cit.> and <cit.>, as shown in Table <ref>.

The Keplerian orbit of planet c reveals an orbital eccentricity of e ∼ 0.1, which we utilize in our models. According to Figure <ref>, an obliquity of θ = 16 produces an equivalent flux variation to that produced by the e = 0.1 eccentricity at a latitude of β = 20.
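For reference, a small Python sketch of the maximum-flux equivalence relation of Section 2.1 (the branch with θ ≤ β ≤ 90 - θ) is given below. It is an illustrative re-implementation rather than the authors' code, and it approximately reproduces the equivalence quoted here for GJ 163 c (e ≈ 0.1 at β = 20 corresponding to θ ≈ 16); the diurnally averaged case of Section 2.2 requires the numerical treatment described there.

```python
import numpy as np

def equivalent_obliquity_max_flux(e, beta_deg):
    """Obliquity giving the same maximum (meridian-crossing) flux swing as
    eccentricity e at latitude beta, in the regime theta <= beta <= 90-theta:
        sin(theta) = e / [(1 - e^2) tan(beta)]
    Returns np.nan if no solution exists in this branch.
    """
    beta = np.radians(beta_deg)
    s = e / ((1.0 - e**2) * np.tan(beta))
    return np.degrees(np.arcsin(s)) if 0.0 <= s <= 1.0 else np.nan

# GJ 163 c: e ~ 0.099 at beta = 20 deg gives roughly 16 deg of obliquity
print(equivalent_obliquity_max_flux(0.099, 20.0))
```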
For a circular orbit (e = 0.0), the maximum flux received by planet c would be 1705 W m^-2 (1.25 F_⊕). Using the measured eccentricity, the maximum flux (during periastron passage) is 2100 W m^-2 (1.54 F_⊕).Shown in Figure <ref> are three incident flux maps for the planet GJ-163 c. As with Figure <ref>, the flux maps are a function of latitude and orbital phase with contours of constant flux. The phase of ϕ = 0.0 corresponds to the planet's periastron passage. All three panels use the known eccentricity of e = 0.099. The top panel assumes an obliquity of θ = 20 and a phase offset between periastron and highest stellar declination in the northern hemisphere of Δϕ = 0.0. The lower two panels assume Δϕ = 0.25 and obliquities of θ = 50 (middle) and θ = 80 (bottom). Choosing Δϕ = 0.25 demonstrates the effect of decoupling the incident flux effects of periastron and maximum stellar declination in a particular hemisphere. Using the methodology of Section <ref>, the eccentricity of e ∼ 0.1 and obliquity of θ = 20 have approximately equivalent effects on the seasonal variations in flux at low latitudes, and thus supply similar driving energy for the climate variations in those low latitude regions. The middle and bottom panels of Figure <ref> show that the obliquity becomes the dominant source of variable energy for θ > 20, with a mean incident flux of 0.98 F_⊕ in the latitude range of -30 > β > +30 for θ = 50. §.§ K2-3An early result from the K2 mission <cit.> was the discovery of the planetary system K2-3 by <cit.>. The system parameters were subsequently refined further by the work of <cit.> and <cit.>. The stellar and planetary properties shown in Table <ref> are those from <cit.>, as they provide a self-consistent model of the system. Using the stability criteria described in Section <ref>, we calculated the estimated planet mass and subsequent limits on the orbital eccentricity of the outermost planet known in the system, planet d. For a circular orbit, the semi-major axis of planet d corresponds to the inner edge of the OHZ and the Hill radius is 0.004 AU (see Table <ref>). By adjusting the eccentricity of the planet, the limit of Δ∼ 9 is reached at an eccentricity of e = 0.162 where the mutual Hill radius for the outer two planets is R_H,M_p = 0.004 AU. Adopting this eccentricity for the outer planet results in an orbital architecture that is depicted in the top-right panel of Figure <ref>, where planet d enters the OHZ during apastron. Although we have selected an argument of periastron of ω = 90, the value of ω has no impact on our flux calculations and the transit duration will provide limits on the allowed periastron values for a given eccentricity <cit.>.From Figure <ref>, it can be seen that the eccentricity of e = 0.162 results in an equivalent flux variation to an obliquity of θ = 17 at a latitude of β = 20. If planet d is in a circular orbit, the flux received by the planet during the entire orbit is 2055 W m^-2, (1.50 F_⊕). For an eccentricity of e = 0.162, the maximum flux received is 2925 W m^-2 (2.14 F_⊕). Shown in Figure <ref> are the flux intensity maps for K2-3 d as a function of latitude and orbital phase where, once again, we include the diurnal effects. The top panel assumes a circular orbit, an obliquity of θ = 20, and Δϕ = 0.0. 
The bottom two panels assume a maximum eccentricity of e = 0.162, Δϕ = 0.25, and obliquities of 50 and 80 for the middle and bottom panels respectively.The amplitude of the flux variation effects in the top panel are below those predicted by the maximum eccentricity and would thus result in a more temperate climate than the eccentric cases in the bottom two panels. The mean incident flux in the latitude range of -30 > β > +30 for θ = 50 (Figure <ref>, middle panel) is 1.20 F_⊕. §.§ Kepler-186The multi-planet system, Kepler-186, was confirmed by <cit.> and <cit.> and later confirmed to have a fifth planet by <cit.>. The new outer planet, designated Kepler-186 f, was a particularly important discovery due to its relatively small size and location within the HZ of the host star <cit.>. Our adopted stellar and planetary properties for the Kepler-186 system are from <cit.> and are shown in Table <ref>. Combining these parameters with the methodology of Section <ref> results in a maximum eccentricity of the outer planet of e = 0.628 where the mutual Hill radius for the outer two planets is R_H,M_p = 0.0025 AU. Adopting this eccentricity for Kepler-186 f results in the orbital architecture depicted in the bottom-left panel of Figure <ref>. We will explore the effect of this extreme eccentricity limit on flux variations noting that, as for K2-3 d, the periastron argument for an eccentric orbit may be constrained from the transit duration.As can be seen in Figure <ref> and Figure <ref>, such an extreme eccentricity has no obliquity equivalent at latitude β = 20 regardless of diurnal effects. Such obliquity-induced flux variations are only possible for θ > 60 for the maximum flux case and θ > 30 for the diurnal case. Planet f is toward the outer edge of the system HZ and, for a circular orbit, receives a maximum flux of 445 W m^-2 (0.33 F_⊕). Adopting the extreme eccentricity of e = 0.628 creates a significant change in this result, with a maximum flux during periastron passage of 3212 W m^-2 (2.35 F_⊕). The flux intensity maps for Kepler-186 f as a function of latitude and orbital phase using the diurnal model are shown in Figure <ref>. The top panel shows the flux map for the scenario of a circular orbit, an obliquity of θ = 20, and an alignment of maximum stellar declination and periastron passage (Δϕ = 0.0). The bottom two panels assume a maximum stellar declination phase offset from periastron passage of Δϕ = 0.25. The middle panel represents the extreme eccentricity scenario with e = 0.628 and an obliquity of θ = 50 and further demonstrates how the eccentricity dominates the flux variations for even relatively high obliquities. The scenario shown in the bottom panel assumes a more moderate eccentricity of e = 0.3 where the obliquity of θ = 80 is more readily able to drive the seasonal flux variations. §.§ Proxima CentauriThe terrestrial planet orbiting Proxima Centauri was discovered by <cit.>. This is a naturally high-value planet as it is, by definition, the closest exoplanet to our planetary system. The value is increased by its orbit lying within the HZ of the host star, leading to the exploration of potentially habitable conditions and detectable biosignatures <cit.>. Although they are not entirely ruled out, no evidence of planetary transits have been found at this time <cit.>. The orbital solution provided by <cit.> has a maximum orbital eccentricity of e = 0.35. 
<cit.> utilized this eccentricity to calculate observable signatures as a function of planet mass, and they also performed stability simulations that exclude the presence of additional terrestrial planets in the HZ of the system. The orbit of the planet in relation to the system HZ is shown in the bottom-right panel of Figure <ref>, where the maximum eccentricity has been adopted.According to Figure <ref>, the maximum eccentricity of e = 0.35 produces a flux variation equivalent to an obliquity of θ = 44 at a latitude of β = 20. For the circular orbit scenario, the maximum flux received by the planet is 901 W m^-2 (0.66 F_⊕), whereas the eccentric scenario results in a maximum incident flux of 2133 W m^-2 (1.56 F_⊕). The flux intensity maps for Proxima Centauri b as a function of latitude and orbital phase are shown in Figure <ref>. The top panel represents the circular orbit case along with an obliquity of θ = 20 and an alignment of maximum stellar declination and periastron passage (Δϕ = 0.0). The bottom two panels of Figure <ref> represent the maximum eccentricity case and assume a maximum stellar declination phase offset from periastron passage of Δϕ = 0.25. The middle panel shows the flux map for an obliquity of θ = 50 and thus represents the case where the flux variations match those of the eccentricity at latitude β = 20. The bottom panel assumes an obliquity of θ = 80. Fully constraining the eccentricity of this planet (and, indeed, of all planets) is clearly critical for developing the needed flux maps to determine climate cycles and potential impacts on surface temperatures.§ IMPLICATIONS FOR HABITABILITYThe construction of detailed GCMs relies heavily upon many factors, such as the atmospheric composition, temperature-pressure profile, and orbital properties (see references provided in Section <ref>). With relatively few exceptions, measurements of exoplanet parameters are currently restricted to the mass, radius, and Keplerian orbital properties. Parameters that are inaccessible, at least for terrestrial planets, include the planetary rotation rate and the obliquity of the rotation axis. The influence of rotation rate on atmospheric dynamics for HZ planets has been considered in detail <cit.>, and it has been shown that the evolution of cloud layers at the substellar point that influence habitable surface conditions is highly sensitive to the rotation period <cit.>. It is therefore important to include the diurnal effects that we have incorporated into our flux map models, as described in Section <ref>.The effect of obliquity on habitable climates is substantial, such as the possibility for HZ planets with large obliquities to experience regular global snowball transitions <cit.>. For the Earth, the obliquity is stabilized by the Earth's moon <cit.>, without which the obliquity variations would likely have been much more extreme <cit.>. Additional simulations for a retrograde-rotating Venus by <cit.> indicate that obliquity variations may have been as low as ± 7 over Gyr timescales, implying that massive moons are not necessarily required for obliquity stability. In either case, the obliquity of a particular exoplanet is one that must float as a free parameter in the GCMs that predict surface conditions. A direct measurement of obliquity from seasonal variations in directly detected light will be possible from future missions capable of such measurements. 
Modeling of these data using current Earth-based observations shows that planetary rotational and obliquity parameters may be inferred from exoplanet imaging photometry <cit.>.A planetary parameter that can be presently measured is the orbital eccentricity. This parameter is most often extracted from the Keplerian orbital solution to RV observations of a bright host star and can also be inferred to a lesser extent from the duration of a planetary transit <cit.>. The eccentricities for most of the Kepler HZ planets are largely unknown due to the faintness of the host stars <cit.>. In addition, variable eccentricities due to dynamical interactions with other planets can induce Milankovitch cycles with significantly shorter periods than those measured from the Earth <cit.>. The primary purpose of the study described in this work could then be seen as placing constraints on the variable flux from the measurable parameter of eccentricity as a proxy for the presently unknown obliquity of the exoplanet.§ CONCLUSIONSDespite the rapid progress in our understanding of terrestrial exoplanets frequency in the HZ, there are many planetary parameters crucial to calculating habitability models that remain beyond our reach. The seasonal variations in incident flux are driven by the orbital eccentricity and the obliquity of the planet's rotational angular momentum. Within the solar system, Mars is an example of a planet where the obliquity and eccentricity play similar roles in driving the seasonal climate variations. Of the two, eccentricity is currently our only accessible parameter and so it is useful to determine the limits on seasonal variations imposed by the eccentricity that would be matched by a particular obliquity.In this work, we have calculated the effects of eccentricity and obliquity on incident flux as a function of latitude, and where the flux variations are equivalent for a complete orbital cycle. The two effects largely differ in the Keplerian nature of the eccentricity variations as opposed to the sinusoidal changes in obliquity-induced flux at a given latitude. We selected four case studies of terrestrial planets in the HZ of their host stars where the eccentricity is either measured or we were able to calculate a maximum dynamical eccentricity. These case studies demonstrate where extreme eccentricities and obliquities can dominate the incident flux map and is particularly important in demonstrating the contrast to either the zero eccentricity and/or zero obliquity models. This in turn establishes the importance of constraining eccentricity, as even a relatively small eccentricity (e ∼ 0.2) can have a large influence on the flux map and climate cycles. Until such time as direct measurements of obliquity can be made, the models presented here will find their utility in constraining obliquity for a given eccentricity and flux map of the planet.§ ACKNOWLEDGEMENTS The authors would like to thank Fred Adams, Colin Chandler, and Ravi Kopparapu for useful discussions regarding this work. The authors would also like to thank the anonymous referee, whose comments greatly improved the quality of the paper. This research has made use of the Habitable Zone Gallery at hzgallery.org. This research has also made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. 
The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate.[Akeson et al.(2013)]ake13 Akeson, R.A., Chen, X., Ciardi, D., et al. 2013, PASP, 125, 989 [Almenara et al.(2015)]alm15 Almenara, J.M., Astudillo-Defru, N., Bonfils, X., et al. 2015, A&A, 581, L7 [Anglada-Escudé et al.(2016)]ang16 Anglada-Escudé, G., Amado, P.J., Barnes, J., et al. 2016, Nature, 536, 437 [Armstrong et al.(2014)]arm14 Armstrong, J.C., Barnes, R., Domagal-Goldman, S., et al. 2014, AsBio, 14, 277 [Barnes et al.(2008)]bar08 Barnes, R., Raymond, S.N., Jackson, B., Greenberg, R. 2008, AsBio, 8, 557 [Barnes et al.(2009)]bar09 Barnes, R., Jackson, B., Greenberg, R., Raymond, S.N. 2009, ApJ, 700, L30 [Barnes et al.(2013)]bar13 Barnes, R., Mullins, K., Goldblatt, C., et al. 2013, AsBio, 13, 225 [Barnes et al.(2016)]bar16 Barnes, J.W., Quarles, B., Lissauer, J.J., Chambers, J., Hedman, M.M. 2016, AsBio, 16, 487 [Barnes et al.(2017)]bar17 Barnes, R., Deitrick, R., Luger, R., et al. 2017, AsBio, submitted (arXiv:1608.06919) [Bolmont et al.(2014)]bol14 Bolmont, E., Raymond, S.N., von Paris, P., et al. 2014, ApJ, 793, 3 [Bolmont et al.(2016)]bol16 Bolmont, E., Libert, A.-S., Leconte, J., Selsis, F. 2016, A&A, 591, A106 [Bonfils et al.(2013)]bon13 Bonfils, X., Lo Curto, G., Correia, A.C.M., et al. 2013, A&A, 556, A110 [Burke(2008)]bur08 Burke, C.J. 2008, ApJ, 679, 1566 [Crossfield et al.(2015)]cro15 Crossfield, I.J.M., Petigura, E., Schlieder, J.E., et al. ApJ, 804, 10 [Davenport et al.(2016)]dav16 Davenport, J.R.A., Kipping, D.M., Sasselov, D., Matthews, J.M., Cameron, C. 2016, ApJ, 829, L31 [Dressing et al.(2010)]dre10 Dressing, C.D., Spiegel, D.S., Scharf, C.A., Menou, K., Raymond, S.N. 2010, ApJ, 721, 1295 [Driscoll & Barnes(2015)]dri15 Driscoll, P.E., Barnes, R. 2015, AsBio, 15, 739 [Fressin et al.(2013)]fre13 Fressin, F., Torres, G., Charbonneau, D., et al. 2013, ApJ, 766, 81 [Gladman(1993)]gla93 Gladman, B. 1993, Icarus, 106, 247 [Howard(2013)]how13 Howard, A.W. 2013, Science, 340, 572 [Howell et al.(2014)]how14 Howell, S.B., Sobeck, C., Haas, M., et al. 2014, PASP, 126, 398 [Kane & von Braun(2008)]kan08 Kane, S.R., von Braun, K. 2008, ApJ, 689, 492 [Kane & Gelino(2012a)]kan12a Kane, S.R., Gelino, D.M. 2012a, PASP, 124, 323 [Kane & Gelino(2012b)]kan12b Kane, S.R., Gelino, D.M. 2012b, AsBio, 12, 940 [Kane et al.(2012)]kan12c Kane, S.R., Ciardi, D.R., Gelino, D.M., von Braun, K. 2012, MNRAS, 425, 757 [Kane et al.(2015)]kan15 Kane, S.R., Domagal-Goldman, S.D., Herman, J.R., Robinson, T.D., Stine, A.R. 2015, Proceedings of the Comparative Climates of Terrestrial Planets II (arXiv:1511.03779) [Kane et al.(2017)]kan16 Kane, S.R., Hill, M.L., Kasting, J.F., et al. 2016, ApJ, 830, 1 [Kane et al.(2017)]kan17 Kane, S.R., Gelino, D.M., Turnbull, M.C. 2017, AJ, 153, 52 [Kasting et al.(1993)]kas93 Kasting, J.F., Whitmire, D.P., Reynolds, R.T. 1993, Icarus, 101, 108 [Kaspi & Showman(2015)]kas15 Kaspi, Y., Showman, A.P. 2015, ApJ, 804, 60 [Kawahara(2016)]kaw16 Kawahara, H. 2016, ApJ, 822, 112 [Kopparapu et al.(2013)]kop13 Kopparapu, R.K., Ramirez, R., Kasting, J.F., et al. 2013, ApJ, 765, 131 [Kopparapu et al.(2014)]kop14 Kopparapu, R.K., Ramirez, R.M., SchottelKotte, J., et al. 2014, ApJ, 787, L29 [Kopparapu et al.(2016)]kop16 Kopparapu, R.K., Wolf, E.T., Haqq-Misra, J., et al. 2016, ApJ, 819, 84 [Laskar(1986)]las86 Laskar, J. 
1986, A&A, 157, 59 [Laskar & Robutel(1993)]las93a Laskar, J., Robutel, P. 1993, Nature, 361, 608 [Laskar et al.(1993)]las93b Laskar, J., Joutel, F., Robutel, P. 1993, Nature, 361, 615 [Leconte et al.(2013)]lec13 Leconte, J., Forget, F., Charnay, B., et al. 2013, A&A, 554, A69 [Leconte et al.(2015)]lec15 Leconte, J., Wu, H., Menou, K., Murray, N. 2015, Science, 10, 1126 [Li & Batygin(2014)]li14 Li, G., Batygin, K. 2014, ApJ, 790, 69 [Linsenmeier et al.(2015)]lin15 Linsenmeier, M., Pascale, S., Lucarini, V. 2015, P&SS, 105, 43 [Lissauer et al.(2014)]lis14 Lissauer, J.J., Marcy, G.W., Bryson, S.T., et al. 2014, ApJ, 784, 44 [Meadows et al.(2017)]mea17 Meadows, V.S., Arney, G.N., Schwieterman, E.W., et al. 2017, Astrobiology, submitted (arXiv:1608.08620) [Petigura et al.(2013)]pet13 Petigura, E.A., Marcy, G.W., Howard, A.W. 2013, ApJ, 770, 69 [Quintana et al.(2014)]qui14 Quintana, E.V., Barclay, T., Raymond, S.N., et al. 2014, Science, 344, 277 [Ribas et al.(2016)]rib16 Ribas, I., Bolmont, E., Selsis, F., et al. 2016, A&A, 596, A111 [Ricker et al.(2015)]ric15 Ricker, G.R., Winn, J.N., Vanderspek, R., et al. 2015, JATIS, 1, 014003 [Rowe et al.(2014)]row14 Rowe, J.F., Bryson, S.T., Marcy, G.W., et al. 2014, ApJ, 784, 45 [Schwartz et al.(2016)]sch16 Schwartz, J.C., Sekowski, C., Haggard, H.M., Pallé, E., Cowan, N.B. 2016, MNRAS, 457, 926 [Sinukoff et al.(2016)]sin16 Sinukoff, E., Howard, A.W., Petigura, E.A., et al. 2016, ApJ, 827, 78 [Smith & Lissauer(2009)]smi09 Smith, A.W., Lissauer, J.J. 2009, Icarus, 201, 381 [Spiegel et al.(2016)]spi09 Spiegel, D.S., Menou, K., Scharf, C.A. 2009, ApJ, 691, 596 [Sullivan et al.(2015)]sul15 Sullivan, P.W., Winn, J.N., Berta-Thompson, Z.K., et al. 2015, ApJ, 809, 77 [Tuomi & Anglada-Escudé(2013)]tuo13 Tuomi, M., Anglada-Escudé, G. 2013, A&A, 556, A111 [Turbet et al.(2016)]tur16 Turbet, M., Leconte, J., Selsis, F., et al. 2016, A&A, 596, A112 [Van Eylen & Albrecht(2015)]van15 Van Eylen, V., Albrecht, S. 2015, ApJ, 808, 126 [Way & Georgakarakos(2017)]way17 Way, M.J., Georgakarakos, N. 2017, ApJ, 835, L1 [Weiss & Marcy(2014)]wei14 Weiss, L.M., Marcy, G.W. 2014, ApJ, 783, L6 [Williams & Kasting(1997)]wil97 Williams, D.M., Kasting, J.F. 1997, Icarus, 129, 254 [Williams & Pollard(2002)]wil02 Williams, D.M., Pollard, D. 2002, IJAsB, 1, 61 [Williams & Pollard(2003)]wil03 Williams, D.M., Pollard, D. 2003, IJAsB, 2, 1 [Wolf & Toon(2013)]wol13 Wolf, E., Toon, O.B. 2013, Astrobiology, 13, 656 [Wolf & Toon(2014)]wol14 Wolf, E., Toon, O.B. 2014, Astrobiology, 14, 241 [Wordsworth et al.(2010)]wor10 Wordsworth, R., Forget, F., Selsis, F., et al. 2010, A&A 522, A22 [Wordsworth et al.(2011)]wor11 Wordsworth, R., Forget, F., Selsis, F., et al. 2011, ApJ, 733, L48 [Yang et al.(2013)]yan13 Yang, J., Cowan, N.B., Abbot, D.S. 2013, ApJ, 771, L45 [Yang et al.(2014)]yan14 Yang, J., Boué, G., Fabrycky, D., Abbot, D.S. 2014, ApJ, 787, L2
Asteroseismology is a unique tool to explore the internal structure of stars through both observational and theoretical research. The internal structure of pulsating hydrogen-shell white dwarfs (ZZ Ceti stars), as probed by asteroseismology, is regarded as representative of all DA white dwarfs. Observations of KUV 08368+4026, which lies in the middle of the ZZ Ceti instability strip, were carried out in 1999 and from 2009 to 2012 with either single-site runs or multisite campaigns. Time-series photometric data of about 300 hours were collected in total. Through data reduction and analysis, 30 frequencies were extracted, including four triplets, two doublets, one single mode and further signals. The independent modes are identified as either l=1 or l=2 modes. Hence, a rotation period of 5.52±0.22 days was deduced from the frequency splitting within the multiplets. Theoretical static models were built and a best fit model for KUV 08368+4026 was obtained with 0.692±0.002 solar masses, (2.92±0.02)×10^-3 solar luminosities and a hydrogen mass fraction of 10^-4 of the stellar mass.

stars: white dwarfs – stars: oscillations – stars: individual: KUV 08368+4026

§ INTRODUCTION

White dwarfs are the final remains of almost all moderate- and low-mass stars and are the oldest kind of stars in the Galaxy. Measurement of the ages of white dwarfs can put constraints on the ages of the Galaxy and the Universe (Winget et al. 1987). The accurate determination of age requires precise measurements of stellar parameters such as the total mass, luminosity, radius, effective temperature, hydrogen mass fraction, helium mass fraction and so on. Asteroseismology provides a tool to estimate these parameters through modeling the internal structure of pulsating white dwarfs.

Since the discovery of the first member by Landolt (1968), pulsating white dwarfs have been divided into four classes (DAV, DBV, DQV and GW Vir). Among them, the DAVs, or ZZ Ceti stars, the pulsating white dwarfs with hydrogen surfaces, have the lowest effective temperatures and the largest number of members. The ZZ Ceti instability strip lies at the crossing of the Cepheid instability strip and the evolutionary tracks of white dwarfs. It is regarded as a “pure” instability strip, which indicates that every DA white dwarf located in the instability strip does show pulsation. Thus the internal structure of ZZ Ceti stars can be regarded as representative of all DA white dwarfs.

For ZZ Ceti stars, the pulsation properties vary depending on their location in the instability strip. The hot ZZ Ceti stars close to the blue edge of the instability strip typically exhibit short periods, low amplitudes and small amplitude modulations, while the cool members usually show long periods, high amplitudes and large amplitude modulations.
Theoretical models have been used to explore the internal structure of DA white dwarfs either for individual ZZ Ceti stars (cf. HL Tau 76, Pech, Vauclair & Dolez 2006; HS 0507, Fu et al. 2013) or globally for a sample of ZZ Ceti stars (Romero et al. 2012).

KUV 08368+4026 was discovered to be a ZZ Ceti star by Vauclair et al. (1997). A three-site campaign was carried out in 1998 by Dolez et al. (1999). Fontaine et al. (2003) gave stellar parameters including an effective temperature of 11490 K, a surface gravity log g of 8.05, a mass of 0.64 solar masses and an absolute magnitude of 11.85. However, a different set of parameters was given by Gianninas et al. (2011), providing a hotter model with T_eff of 12280±192 K, log g of 8.17±0.05, a mass of 0.71±0.03 M_⊙ and an absolute magnitude of 11.88. In order to study the oscillations of KUV 08368+4026 and hence constrain the stellar parameters, one week of observations was made from Haute-Provence Observatory in 1999 and four observation runs were carried out from 2009 to 2012 from multiple sites.

The observations and data reduction are described in Section 2. We present the period and seismology analyses in Sections 3 and 4, respectively. Section 5 discusses the variations of the pulsation amplitudes on different time scales. In Section 6, we present the stellar modeling and its result. Finally, we give the discussion and conclusions in Section 7.

§ OBSERVATIONS AND DATA REDUCTION

Time-series photometric data (dataset 1) were collected for KUV 08368+4026 with the Chevreton photoelectric photometer in 1999 from Haute-Provence Observatory in France. Four runs were taken for this star from 2009 to 2012, during which CCD cameras and Johnson B filters were used. The run of dataset 2 was a single-site observation effort carried out from Lijiang Observatory in China in February 2009 with the 2.4-m telescope. From December 2009 to January 2010, data (dataset 3) were obtained with the 2.4-m telescope in Lijiang and the 2.16-m telescope in Xinglong, China, and the 1.5-m telescope at the Observatorio de San Pedro Mártir (SPM) in Mexico. In the run of dataset 4, two more telescopes in Xinglong (the 80-cm and 85-cm telescopes) were used, together with the 2.16-m telescope in Xinglong and the 2.1-m telescope at the Observatorio Astrofísico Guillermo Haro (OAGH) in Mexico. The run of dataset 5 was arranged as a two-site campaign with the 2.16-m telescope in Xinglong and the 1.5-m telescope at SPM. Unfortunately, the observations in Mexico were not carried out due to technical problems.

Table 1 lists the observation log. All data were reduced with the IRAF DAOPHOT package except the photoelectric photometer data, which were reduced with the standard method for such data. Figure 1 shows the reduced light curves of KUV 08368+4026.

§ PERIOD ANALYSIS

Period analysis was carried out for the light curves using the software Period04 (Lenz & Breger 2005). For all five datasets, we computed Fourier transformations. Figure 2 shows the Fourier spectra. Note that the amplitudes of the same frequencies vary among the observation runs; this will be discussed in Section 5.

The data analysis followed these steps: 1) extract the highest peak from the Fourier spectrum, take its frequency and amplitude, and derive its phase through fitting; 2) prewhiten the corresponding sine function and compute the Fourier transformation of the residuals in order to find the next frequency. We repeated these steps and finally obtained a list of frequencies with signal-to-noise ratios larger than 4.
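The iterative prewhitening loop can be sketched as follows. This is an illustrative Python implementation of the general procedure (using a plain discrete Fourier amplitude spectrum and linear least-squares fits), not the Period04 code actually used, and the noise estimate via the mean residual amplitude is a simplifying assumption.

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Amplitude spectrum of unevenly sampled data at trial frequencies (Hz)."""
    return np.array([
        2.0 / len(t) * np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))
        for f in freqs
    ])

def prewhiten(t, y, freqs, snr_limit=4.0, max_modes=50):
    """Iteratively extract frequencies until the highest peak drops below S/N = 4.

    t, y  : numpy arrays of times (s) and magnitudes/intensities.
    freqs : numpy array of trial frequencies (Hz).
    At each step the highest peak is located, a sinusoid at that fixed frequency
    is fitted by linear least squares (cosine and sine terms) and subtracted.
    """
    results, resid = [], y - np.mean(y)
    for _ in range(max_modes):
        amp = amplitude_spectrum(t, resid, freqs)
        i = np.argmax(amp)
        noise = np.mean(amp)                      # crude noise estimate
        if noise == 0 or amp[i] / noise < snr_limit:
            break
        f = freqs[i]
        design = np.column_stack([np.cos(2 * np.pi * f * t),
                                  np.sin(2 * np.pi * f * t)])
        (a, b), *_ = np.linalg.lstsq(design, resid, rcond=None)
        resid = resid - design @ np.array([a, b])
        results.append((f, np.hypot(a, b), np.arctan2(b, a)))
    return results
```

A production analysis would refine the frequencies nonlinearly and estimate the noise locally around each peak, which is what Period04 provides.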
The extracted frequencies are listed in Table 2. Monte Carlo simulations were used to derive the uncertainties (for more details about the Monte Carlo simulations, see Fu et al. 2013). For dataset 3, we found a number of extremely close frequencies with low amplitudes, in addition to the major frequencies, during the prewhitening process. After checking the data carefully, we realized that this is due to the changing amplitudes of the same frequencies on a time scale of weeks; thus the prewhitening of a frequency could not be done completely with a fixed amplitude and phase. The subroutine “Calculate amplitude/phase variations” of Period04 was therefore used to solve this problem. For each frequency, individual amplitudes and phases were used to prewhiten the light curves of the individual runs. The residuals were then combined to calculate the next Fourier transformation.

We compare the frequency lists of the five datasets in Table 2 to each other in order to identify the same frequencies in different datasets. The result is presented in Table 3.

§ ASTEROSEISMOLOGY

§.§ Linear combinations

Since most of the runs are single-site observations, the aliasing effect is strongly visible in some spectral windows in Figure 2, especially in the window spectra of January 1999 (dataset 1), February 2009 (dataset 2) and February 2012 (dataset 5). The analysis of linear combinations and aliasing frequencies was made and the result is listed in the column "Note" of Table 2.

§.§ Mode identification

After removing the linear-combination and aliasing frequencies, we list the frequencies with their amplitudes in Table 3. The frequencies which are components of multiplets, or present large amplitudes, or are detected in multiple seasons are identified as independent signals. The single frequencies with low amplitudes or detected in only one observing season are grouped as further signals. Please note the frequency of 3249.32 μHz: although it is close to a triplet, we assign it as a further signal since it is detected only once, in a single-site observing run with a photoelectric photometer, and its amplitude is small. We summarize the independent frequencies and further signals in Table 4. For the frequencies detected in multiple seasons, we take the average values of the frequencies.

Four triplets are identified from the independent signals in Table 4. f1-f3 is an unequal triplet. Since the spacing of f3 to f2 is almost twice the spacing of f1 to f2, we suppose they are l=2 modes. The other three triplets show nearly equal spacings of about 1 μHz inside the triplets, which is a property of l=1 modes. We also notice that the two doublets have frequency spacings of about 2 μHz. Thus we identify them as l=1 modes with the central m=0 components undetected.

From the equation

σ_k,l,m = σ_k,l + m (1 - C_k,l) Ω,

where C_k,l = 1/[l(l+1)] in the asymptotic regime (Brickhill 1975), one may derive that the splittings of the l=2 modes are 1.67 times those of the l=1 modes. As for the triplet around 1280.51 μHz, the two splittings are about 1.6 and 3.2 μHz, which agrees with the earlier identification of this mode as an l=2 mode.

§.§ Rotation splitting

From the five multiplets of the l=1 modes, we derived an average rotational splitting of 1.07±0.05 μHz. Thus we estimate the rotation period of KUV 08368+4026 as 5.4±0.3 days.
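The step from the observed splitting to the rotation period follows directly from the splitting relation above; a short sketch of the arithmetic, assuming the asymptotic value C_k,l = 0.5 for l=1, is:

```python
# Rotation period from the mean l=1 rotational splitting,
# using delta_nu = (1 - C_kl) / P_rot with C_kl = 1/(l(l+1)) = 0.5 for l = 1.
delta_nu = 1.07e-6               # mean splitting in Hz
C_kl = 0.5                       # asymptotic value for l = 1
P_rot_seconds = (1.0 - C_kl) / delta_nu
print(P_rot_seconds / 86400.0)   # about 5.4 days
```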
§.§ Period spacing

In Table 3, there are some frequencies which belong to neither doublets nor triplets, with amplitudes close to the detection limit and appearing only once among the five observation seasons. We identify these frequencies as further signals in Table 4, except the one at 2159.20 μHz, which has been detected in four seasons and is hence listed as an independent signal.

Table 5 lists the identified l=1 modes and l=2 modes in parts (a) and (b), respectively. With the three l=1, m=0 modes in the triplets, we made a linear fit to the periods of the three modes, which gives an average period spacing of 49.7 s. Figure 3 shows the fit. We also plot the missing m=0 modes, taking for them the central values of the frequencies of the m=±1 components. The single mode at 2159.20 μHz is plotted in the figure as well.

§.§ Mode trapping

Figure 4 presents the residuals of the linear fit for the three m=0 modes, together with the residuals corresponding to the undetected m=0 modes in Table 5. The two doublets are also plotted, where the single-mode points are shown with open circles. From Figure 4, a possible trapped mode may be visible at the period around 400 s.

§ AMPLITUDE VARIATIONS

As mentioned before, KUV 08368+4026 shows varying amplitude spectra in different observation seasons. The variations occur on time scales of not only years but also weeks. Table 6 lists the amplitudes of the 25 frequencies resolved in dataset 3 in the three individual weeks from December 2009 to January 2010. Figure 5 shows the amplitude changes of each frequency in the three individual weeks. The amplitudes of these frequencies change considerably over a duration of around one month. We also calculate the total power for each week using the following equation:

Total Power = ∑_i A_i^2(F_i),

which is listed at the bottom of Table 6. As one may see, the total oscillation power changes from week to week.

§ MODELING EXPLORATION

We use theoretical static models calculated with the Toulouse white dwarf code (Pech, Vauclair & Dolez 2006) to constrain the stellar parameters of KUV 08368+4026. The models have four input parameters: total mass, luminosity, hydrogen mass fraction and helium mass fraction. First we built a grid covering a large parameter range to select potential good-fit models. The ranges and steps of the grid are listed in Table 7. A χ^2 estimate was used to find the best-fit models with the five l=1 modes and the l=2 mode:

χ^2 = ∑_n (P^the_n - P^obs_n)^2,

where P^the_n are the periods of the theoretical models and P^obs_n are the observed periods. More than 8000 models were calculated for the large grid. We constrain the parameters with the effective temperature and the surface gravity from the Gianninas et al. (2011) catalog and from Fontaine et al. (2003). We selected the models whose parameters lie within three times the uncertainties of those parameters, and one minimum of χ^2 was found among these models. Around it we built a detailed grid to obtain more precise parameters. Table 8 lists the ranges of the detailed grid. Figure 6 displays the distribution of the χ^2 values between the eigenmodes calculated from the models and the observed modes. The χ^2 values are represented by different gray scales. One minimum was found and the corresponding model was identified as the best fit model.
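The grid search itself is straightforward; the sketch below is an illustrative Python version (not the Toulouse code), the pairing of each observed period with the closest model period is an assumption about how the χ^2 above is evaluated, and the numbers in the usage example are placeholders rather than the star's actual periods or real model output.

```python
import numpy as np

def chi2(model_periods, observed_periods):
    """Sum of squared differences between each observed period (s) and the
    closest theoretical period of a given model (one assumed matching scheme)."""
    model_periods = np.asarray(model_periods)
    return sum(np.min((model_periods - p_obs) ** 2) for p_obs in observed_periods)

def best_fit(grid, observed_periods):
    """grid: iterable of (parameters, model_periods) pairs.
    Returns the parameter set with the smallest chi^2."""
    return min(grid, key=lambda item: chi2(item[1], observed_periods))[0]

# Toy usage with made-up placeholder numbers (illustration only):
observed = [303.0, 352.5, 402.5, 453.0, 502.0]
grid = [
    ({"M": 0.690, "L": 2.90e-3}, [300.1, 351.0, 401.2, 455.9, 505.0]),
    ({"M": 0.692, "L": 2.92e-3}, [302.5, 352.0, 402.0, 453.8, 501.5]),
]
print(best_fit(grid, observed))
```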
Table 9 presents the parameters of the best-fit model. We also analyzed the trapped modes of the best-fit model and the result is presented in Figure 7. We suggest taking this model as the best-fit model for the following reasons: 1) it has the pulsation modes closest to the observed modes; 2) it shows a trapped mode around the 400 s period, which agrees well with the observational result; 3) its effective temperature and surface gravity lie between the values given by the two spectroscopic studies. Therefore we take it as the best-fit model under the current constraints. § DISCUSSION AND CONCLUSIONS A three-site observation campaign was carried out in 1998 (Dolez et al. 1999), when photoelectric photometers were used. With the collected data, six independent modes were extracted, while no multiplets were detected. The frequencies of the six modes agree with the central frequencies of the six multiplets detected in this work. We hence summarize our work as follows. * We obtained time-series photometric data for the ZZ Ceti star KUV 08368+4026 in 1999 and from 2009 to 2012. 17 independent modes were extracted, including six multiplets and one single mode, together with 13 (f18-f30) further signals. We identified the independent signals as either l=1 or l=2 modes with rotational splitting. A number of linear combinations and low-amplitude modes were also resolved, but we were not able to identify them. * From the six multiplets, an average rotation splitting of 1.049±0.041 μ Hz was determined, which yields a rotation period of 5.52±0.22 days. * An average period spacing of 49.2 s was obtained from the l=1, m=0 modes. * All six multiplets were found in the 1998 campaign, although the rotational splitting was not detected due to the observing conditions. The two periods of 619 s and 494.5 s found in the discovery observations were detected in neither the 1998 campaign nor the following observations. * We found evidence of amplitude variations of KUV 08368+4026 on time scales of both years and weeks. The total pulsation power also changed over the three weeks from December 2009 to January 2010. * The theoretical modeling suggests a thick hydrogen layer for KUV 08368+4026. We estimate a best-fit model with a mass of 0.692±0.002 solar masses, a luminosity of (2.92±0.02)×10^-3 solar luminosities, a hydrogen mass fraction of 10^-4 of the stellar mass and a helium mass fraction of 10^-2 of the stellar mass. * Romero et al. (2012) gave a set of stellar parameters from theoretical modeling, suggesting log g of 8.02±0.03, a mass of 0.609±0.012 solar masses, an effective temperature of 11230±95 K, M_H/M_* of (1.42±0.52)×10^-5 and M_He/M_* of 2.45×10^-2, based on the two periods of the discovery data. Our constraints, which are based on more data from multisite campaigns, should therefore be considered more reliable. § ACKNOWLEDGMENTS CL and JNF acknowledge the support from the Joint Fund of Astronomy of the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences through Grant U1231202, and the support from the National Basic Research Program of China (973 Programs 2014CB845701 and 2013CB834904). [Brickhill 1975] Brickhill A. J., 1975, MNRAS, 170, 404 [Dolez et al. 1999] Dolez N., Vauclair G., Zhang X. B., Chevreton M., Handler G., 1999, ASPC, 169, 129 [Fontaine et al. 2003] Fontaine G., Bergeron P., Billères M., Charpinet S., 2003, ApJ, 591, 1184 [Fu et al. 2013] Fu J.-N. et al., 2013, MNRAS, 429, 1585 [Gianninas, Bergeron & Ruiz 2011] Gianninas A., Bergeron P., Ruiz M. T., 2011, ApJ, 743, 138 [Landolt 1968] Landolt A. U., 1968, ApJ, 153, 151 [Lenz & Breger 2005] Lenz P., Breger M., 2005, Comm. in Asteroseismology, 146, 53 [Mukadam et al. 2004] Mukadam A. S., Winget D. E., von Hippel T., Montgomery M. H., Kepler S. O., Costa A. F., 2004, ApJ, 612, 1052 [Pech, Vauclair & Dolez 2006] Pech D., Vauclair G., Dolez N., 2006, A&A, 446, 223 [Romero et al. 2012] Romero A. D., Córsico A. H., Althaus L. G., Kepler S. O., Castanheira B. G., Miller Bertolami M. M., 2012, MNRAS, 420, 1462 [Vauclair et al. 1997] Vauclair G., Dolez N., Fu J.-N., Chevreton M., 1997, A&A, 322, 155 [Winget et al. 1991] Winget D. E. et al., 1991, ApJ, 379, 326 [Winget et al. 1987] Winget D. E. et al., 1987, ApJ, 315, 77
http://arxiv.org/abs/1709.09206v1
{ "authors": [ "C. Li", "J. -N. Fu", "G. Vauclair", "N. Dolez", "L. Fox-Machedo", "R. Michel", "M. Chavez", "E. Bertone" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170926181711", "title": "Asteroseismology of the ZZ Ceti star KUV 08368+4026" }
1 Estimating a Separably-Markov Random Field (SMuRF) from Binary Observations Yingzhuo Zhang^ 1, Noa Malem-Shinitski^ 2, Stephen A Allsop^ 3, Kay Tye^ 3 and Demba Ba^ 1^ 1 Harvard University, John A. Paulson School of Engineering and Applied Sciences.^ 2 Technische Universität Berlin.^ 3 Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences. Keywords: Spike rasters, Dynamics, Random field, Bayesian Estimation, associative learning.empty NC instructionsAbstract A fundamental problem in neuroscience is to characterize the dynamics of spiking from the neurons in a circuit that is involved in learning about a stimulus or a contingency. A key limitation of current methods to analyze neural spiking data is the need to collapse neural activity over time or trials, which may cause the loss of information pertinent to understanding the function of a neuron or circuit. We introduce a new method that can determine not only the trial-to-trial dynamics that accompany the learning of a contingency by a neuron, but also the latency of this learning with respect to the onset of a conditioned stimulus. The backbone of the method is a separable two-dimensional (2D) random field (RF) model of neural spike rasters, in which the joint conditional intensity function of a neuron over time and trials depends on two latent Markovian state sequences that evolve separately but in parallel.Classical tools to estimate state-space models cannot be applied readily to our 2D separable RF model. We develop efficient statistical and computational tools to estimate the parameters of the separable 2D RF model. We apply these to data collected from neurons in the pre-frontal cortex (PFC) in an experiment designed to characterize the neural underpinnings of the associative learning of fear in mice.Overall, the separable 2D RF model provides a detailed, interpretable, characterization of the dynamics of neural spiking that accompany the learning of a contingency.§ INTRODUCTION A fundamental problem in the analysis of electrophysiological data from neuroscience experiments is to determine the trial, and time within said trial, when a neuron or circuit first exhibits a conditioned response to a stimulus. This is a challenging problem because neural spike rasters resulting from such experiments can exhibit variability both within a given trial and across trials <cit.>. Fear conditioning experiments <cit.> are a prime example of a scenario when this situation arises: a neutral stimulus, present across all trials of an experiment, gives rises to stereotypical within-trial spiking dynamics, while the associated aversive stimulus leads to changes in spiking dynamics across a subset of the trials. State-of-the-art methods for analyzing neural spike rasters fall primarily within two classes. The most pervasive class of such methods neglect the inherent two-dimensional nature of neural spike rasters by aggregating the raster data either across time or trials, and subsequently applying techniques applicable to one-dimensional signals <cit.>. In contrast to these one-dimensional methods, two-dimensional methods model both the within and cross-trial dynamics of neural spiking <cit.>. Within the class of one-dimensional methods, the past decade has seen a growing interest in approaches based on state-space models of neural spiking activity. 
These approaches treat neural spiking data as realizations of a stochastic point process whose conditional intensity function obeys a stochastic smoothness constraint in the form of a Markov process followed by a nonlinearity. The main challenge is to estimate the parameters of the model, and various solutions have been proposed towards this end <cit.>. The main drawback of one-dimensional approaches applied to the analysis of neural spike rasters is the need, preceding analysis, for aggregation across one of the dimensions. Among one-dimensional methods, non-parametric methods based on rank tests (e.g. Wilcoxon rank sum test) have been the most popular, primarily due to their ease of application. In addition to the need to collapse neural activity of time or trials, two common pitfalls of non-parametric methods are their reliance on large sample assumptions to justify comparing neural spiking rates, and the need to correct for multiple comparisons. For instance, tests that rely on estimates of the neural spiking rate based on empirical averages are hard to justify when it is of interest to characterize the dynamics of neural spiking at the millisecond time scale. Consider a neural spike raster for which it is of interest to assess differences in instantaneous spiking rates between distinct time/trial pair. At the millisecond time scale, there would only be one observation per time/trial pair, violating the large sample assumptions that such non-parametric methods rely upon. To the best of our knowledge, the work of <cit.> remains the most successful attempt to characterize simultaneously the within and cross-trial dynamics of neural spiking. This approach uses a state-space model of the cross-trial dynamics, in conjunction with a parametric model of the within-trial dynamics. The use of a parametric model for the within-trial dynamics is convenient because it enables the estimation of the model parameters by Expectation-Maximization (EM), using a combination of point-process filtering and smoothing in the E-step (to fill-in the missing cross-trial effect), and an M-step for the within-trial parameters that resembles a GLM <cit.>. The main drawbacks of this approach are, on the one hand, the high-dimensionality of the state-space model that captures the cross-trial dynamics, and on the other hand the lack of a simple interpretation,as in the one-dimensional models <cit.>, for the state sequence.Lastly, a two-dimensional approach based on Gaussian processes was proposed in <cit.>. One advantage of this approach, which is based on a Gaussian process prior of the neural spiking rate surface, is its ability to model the interaction between the two dimensions through the use of a two dimensional kernel. As is common with kernel methods, it does not scale well to multiple dimensions.We propose a two-dimensional (2D) random field (RF) model of neural spike rasters–termed Separably-Markov Random Field (SMuRF)–in which the joint conditional intensity function of a neuron over time and trials depends on two latent Markovian state sequences that evolve separately but in parallel. Conventional methods for estimating state-space models from binary observations <cit.> are not applicable to SMuRF. We derive a Monte Carlo Expectation-Maximization algorithm to maximize the marginal likelihood of observed data under the SMuRF model. 
In the E-step, we leverage the Polya-Gamma <cit.> representation of Bernoulli random variables to generate samples from the joint posterior distribution of the state sequences by Gibbs sampling. A similar strategy was adopted in <cit.> for a one-dimensional state-space model. The sampler uses a highly efficient forward-filtering backward-sampling algorithm for which the forward step can be implemented exactly and elegantly as Kalman filter, while the backward step uses Bayes' rule to correct the filter samples. The SMuRF model obviates the need for aggregation across either time or trials, and yields a low-dimensional 2D characterization of neural spike rasters that is interpretablein the sense that the posterior of the two state sequences capture the variability within and across trials respectively. Moreover, being model-based, the SMuRF model, unlike non-parametric methods, yields a characterization of the joint posterior (over all trials and time within a trial) distribution of the instantaneous rate of spiking, thus allowing us to precisely determine the dynamics of neural spiking that accompany the learning of a contingency. To demonstrate this, weapply the model to data collected from neurons in the pre-frontal cortex (PFC) in an experiment designed to characterize the neural underpinnings of the associative learning of fear in mice. We find that the trial at which the cortical neurons begin to exhibit a conditioned response to the auditory conditioned stimulus is robust across cells, occurring 3 to 4 trials into the conditioning period. We also find that the time with respect to conditioned stimulus onset when we observe a significant change in neural spiking compared to baseline activity varies significantly from cell to cell, occurring between 20 to 600 ms after conditioned stimulus onset. These findings are likely reflective of the variability in synaptic strength and connectivity that accompany learning, as well as the location of the neurons in the population.The rest of our treatment begins in Section <ref> where we motivate the SMuRF model, define it and introduce our notation. In Section <ref>, we present the Monte-Carlo EM algorithm for parameter estimation in the SMuRF model, as well as our process for inferring the dynamics of neural spiking that accompany the learning of a contingency by a neuron. The reader may find derivations relevant to this section in the Appendix. We present an application to the cortical data in Section <ref>, and conclude in Section <ref>.§ NOTATION AND SMURF MODELWe begin this section with a continuous-time point-process formalism of a neural spike raster, characterized by a trial-dependent conditional intensity function (CIF). Then, we introduce the SMuRF model, a model for the discrete-time version of the CIF. §.§ Continuous-time point-process observation model We consider an experiment that consists of R successive trials. During each trial, we record the activity of a neuronal spiking unit. We assume, without loss of generality, that the duration of the observation interval during each trial is (0,T]. For trial r, r=1,⋯,R, let the sequence 0 < t_r,s < ⋯ < t_r,S_r < T correspond to the times of occurrence of events from the neuronal unit, that is to say the times when the membrane potential of the neuron crosses a given threshold. We assume that {t_r,s}_s=1^S_r is the realization in (0,T] of a stochastic point-process with counting process N_r(t) = ∫_0^t dN_r(u), where dN_r(t) is the indicator function in (0,T] of {t_r,s}_s=1^S_r. 
A point-process is fully characterized by its CIF. Let λ_r(t|H_t) denote the trial-dependent CIF of dN_r(t) defined asλ_r(t|H_t) = lim_Δ→ 0P[N_r(t+Δ)-N_r(t)=1|H_t]/Δ,where H_t is the history of the point process up to time t.We denote by {Δ N_k,r}_k=1,r=1^K,R, the discrete-time process obtained by sampling dN_r(t) at a resolution of Δ, K = ⌊T/Δ⌋. Let {λ_k,r}_k=1,r=1^K,R denote the discrete-time, trial-dependent, CIF of the neuron. §.§ Separably-Markov Random Field (SMuRF) model of within and cross-trial neural spiking dynamics Let {y_k,r}_k=1,r=1^K,R∈^K × R be a collection of random variables. We say that this collection is a separable random field if ∃ 𝐱∈^K, 𝐳∈^R s.t. ∀ k,r ∃ unique (x_k,z_r) ∈𝐱×𝐳 s.t. y_k,r|(𝐱,𝐳) ∼ f(x_k,z_r). If in addition 𝐱 and 𝐳 are Markov processes, we say that{y_k,r}_k=1,r=1^K,R is a separably-Markov random field or “SMuRF". A 2D random field {y_k,r}_k=1,r=1^K,R∈^K × R is a collection of random variables indexed over a subset of ℕ^+ ×ℕ^+. We call this collection a separable field if there exists latent random vectorsand(each indexed over a subset of ℕ^+) such that {y_k,r}_k=1,r=1^K,R are independent conditioned onandand only a function of the outer product betweenand . If, in addition,andare Markov, we say that the field is a SMuRF. Intuitively, a separable random field is a random field that admits a stochastic rank-one decomposition.We propose the following SMuRF model of the discrete-time, trial-dependent, CIF {λ_k,r}_k=1,r=1^K,R of a neuronal spiking unit{[ x_k = ρ_x x_k-1 + α_x u_x,k + ϵ_k , ϵ_k ∼𝒩(0, σ^2_ϵ);z_r = ρ_z z_r-1 + α_z u_z,k+δ_r , δ_r ∼𝒩(0,σ^2_δ);log λ_k,rΔ/1-λ_k,rΔ = x_k + z_r;Δ N_k,r | x_k, z_r ∼Bernoulli(λ_k,rΔ);].By construction, this is a SMuRF of the trial-dependent CIF of a neuron. u_x,k and u_z,k are indicator functions of presence of cue. To provide some intuition, if we assume x_k + z_r is small, then the SMuRF model approximates the trial-dependent CIF as λ_k,rΔ≈e^z_r·e^x_k, that is, as the product of a within-trial component e^x_k in units of Hz (spikes/s) and a unitless quantity e^z_r. For a given trial r, e^z_r represents the excess spiking rate above what can be expected from the within-trial component at that trial, which we call the cross-trial component of the CIF. The within and cross-trial components from the SMuRF model are functions of two independent state sequences, (x_k)_k=1^K and (z_r)_r=1^R, that evolve smoothly according to a first-order stochastic difference equation.The parameters ρ_x, α_x, σ^2_ϵ, ρ_z, α_z and σ^2_δ, which govern the smoothness of (x_k)_k=1^K and (z_r)_r=1^R, must be estimated from the raster data. Remark 1: We note that, in its generality, our model does not assume that λ_k,rΔ = e^z_r·e^x_k. In our model, λ_k,rΔ = e^x_k + z_r/1+e^x_k + z_r. The approximation λ_k,rΔ≈e^z_r·e^x_k holds for a neuron with small neural spiking rate <cit.>. Figure <ref> shows a graphical representation of the SMuRF model as a Bayesian network. It is not mathematically possible to rewrite the state equations from the SMuRF model in standard state-space form without increasing significantly the dimension of the state space. We give a sketch of an argument as to why in the Appendix. Therefore, in an Expectation-Maximization (EM) algorithm for parameter estimation, one cannot simply apply classical (approximate) binary filtering and smoothing in the E-step <cit.>. 
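To make the generative structure of the SMuRF model above concrete, the following Python sketch draws a spike raster from the simplified model with ρ_x = ρ_z = 1 and no exogenous input (α_x = α_z = 0). All parameter values are illustrative assumptions rather than fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_smurf(K=200, R=45, sigma_eps=0.05, sigma_delta=0.1, x0=-3.0):
    """Draw a K x R binary raster from the SMuRF generative model:
    random-walk within-trial state x, random-walk cross-trial state z,
    and Bernoulli observations through a logistic link."""
    x = x0 + np.cumsum(rng.normal(0.0, sigma_eps, K))   # within-trial state
    z = np.cumsum(rng.normal(0.0, sigma_delta, R))      # cross-trial state
    logit = x[:, None] + z[None, :]                     # x_k + z_r for every (k, r)
    rate = 1.0 / (1.0 + np.exp(-logit))                 # lambda_{k,r} * Delta
    return (rng.random((K, R)) < rate).astype(int), x, z

raster, x, z = simulate_smurf()
print(raster.shape, raster.mean())
```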
We derive a Monte-Carlo Expectation-Maximization algorithm to maximize the likelihood of observed data under the SMuRF model, with respect to the parameter vector θ = (ρ_x,α_x,σ^2_ϵ,ρ_z,α_z,σ^2_δ). § PARAMETER ESTIMATION IN THE SMURF BY MAXIMUM LIKELIHOOD §.§ Maximum Likelihood Estimation by Expectation-Maximization Let = (x_1,⋯,x_K)^T, 𝐳 = (z_1,⋯,z_R)^T, and Δ 𝐍 = {Δ N_k,r}_k=1,r=1^K,R. The goal is to maximize, with respect to θ, the likelihood L(θ|Δ 𝐍) of the SMuRF modelL(θ|Δ 𝐍) = logp(Δ 𝐍; θ) = log ∫_, p(Δ 𝐍,,; θ) d d. This is a challenging problem because of the high-dimensional integral that must be carried out in Equation <ref>. We propose to maximize the likelihood by EM. Remark 2: For the moment, we treatandas missing data; in the sequel, we will augment the model with additional missing data that will simplify the EM algorithm. Given a candidate solution θ^(ℓ), EM <cit.> maximizes L(θ|Δ 𝐍) by building a sequence of successive approximations 𝒬(θ|θ^(ℓ)) of L(θ|Δ 𝐍) (the so-called E-step) such that maximizing these approximations, which in general is simpler than directly maximizing L(θ|Δ 𝐍), is guaranteed to not decrease L(θ|Δ 𝐍). That is, each iteration of EM generates a new candidate solution θ^(ℓ+1) such that L(θ^(ℓ+1)|Δ 𝐍) ≥ L(θ^(ℓ)|Δ 𝐍). By iterating this process, EM generates a sequence of iterates {θ^(ℓ)}_ℓ=1^∞ that, under regularity conditions, converge to a local optimum of L(θ|Δ 𝐍) <cit.>.In the context of the SMuRF model, the key challenge of EM is to compute𝒬(θ|θ^(ℓ)) defined as𝒬(θ|θ^(ℓ)) = 𝔼_𝐱,𝐳[logp(Δ 𝐍,𝐱,𝐳;θ)| Δ 𝐍,θ^(ℓ)],the expected value of the complete-data likelihood with respect to the joint posterior distribution of the missing data (,) conditioned on the observed data Δ 𝐍 and the candidate solution θ^(ℓ). This expectation is not tractable, i.e. it cannot be computed in closed-form. The intractability stems not only from the lack of conjugacy between the Bernoulli observation model and our Gaussian priors–also an issue for one-dimensional models <cit.>–but also because, as mentioned previously, the SMuRF model cannot be reduced to a standard state-space model.We propose to approximate the required expectations using Markov-Chain Monte-Carlo (MCMC) samples from p(,|Δ 𝐍;θ^(ℓ)). In particular, we will use Gibbs sampling <cit.>, a Monte-Carlo technique, to generate samples from a distribution by sampling from its so called full conditionals (conditional distribution of one variable given all others), thus generating a Markov chain that, under regularity conditions, can be shown to converge to a sample from the desired distribution. Gibbs sampling is attractive in cases where sampling from the full-conditionals is simple. However, it is prone to the drawbacks of MCMC methods, such as poor mixing and slow convergence, particularly if one is not careful in selecting the full-conditionals from which to generate samples from. Two observations are in order, that will lead to the derivation of an elegant block Gibbs sampler with attractive properties * Conditioned on , the joint distribution, p(Δ 𝐍,|;θ), ofand Δ 𝐍 is equivalent to the joint distribution from a one-dimensional state-space model with binary observations <cit.>. By symmetry, this is also true for p(Δ 𝐍,|;θ). This readily motivates a block Gibbs sampler that alternates between sampling from |Δ 𝐍;θ and | Δ 𝐍;θ. 
This leaves us with one challenge: how to obtain samples from the posterior distribution of the state in a one-dimensional state-space model with Bernoulli (more generally binomial) observations?* We introduce a new collection of i.i.d., Polya-Gamma distributed <cit.> random variables 𝐰 = {w_k,r}_k=1,r=1^K,R, such that sampling from |Δ 𝐍,;θ is equivalent to sampling from the posterior of the state in a linear Gaussian state-space model (we will prove this in the Appendix) using a forward-filtering backward-sampling algorithm <cit.>. Moreover, it has been shown that the Gibbs sampler based on this Polya-Gamma augmentation scheme <cit.> is uniformly ergodic and possesses superior mixing properties to alternate data-augmentation scheme for logit-based models <cit.>. The intuition behind the introduction of the Polya-Gamma random variables is the following: they are missing data that, if we could observe, would make the Bernoulli observations Gaussian. Stated otherwise, the Polya-Gamma random variables are scale variables in a Gaussian scale mixture <cit.> representation of Bernoulli random variables.Remark 3: The random vectorin the preceding bullet point is the vector additional missing data alluded to in Remark 2. Together, these two observations form the basis of an efficient block-Gibbs sampler we use for maximum-likelihood estimation of the parameters from the SMuRF model by Monte-Carlo EM, also referred to as empirical Bayes <cit.>. We introduce the basic ideas behind PG augmentation and its utility in Bayesian estimation for logit-based models. In the Appendix, we provide detailed derivations for the PG sampler adapted to the SMuRF model. §.§ Polya-Gamma augmentation and sampling in one dimension Let Δ N ∈{0,1} and X ∈ and suppose that, conditioned on X = x, Δ N is Bernoulli with mean e^x/1+e^x, i.e.p(Δ N|x) = (e^x)^Δ N/1+e^x We begin with a definition of Polya-Gamma (PG) random variables, followed by a PG augmentation scheme for the Bernoulli/binomial likelihood. We will see that the augmentation scheme leads to an attractive form for the posterior of x given the observation Δ N and the augmented variable. Finally, we will see that the posterior of the augmented variable itself follows a PG distribution. Our treatment follows closely that of <cit.>.Definition of Polya-Gamma random variables: Let {E_m}_m=1^∞ be a sequence of i.i.d. exponential random variable with parameter equal to 1. The random variableW d=2/π^2∑_m=1^∞E_m/(2m-1)^2follows a PG(1,0) distribution, where d= denotes equality in distribution. The moment generating function of W is𝔼[e^-tW] = cosh^-1(√(t)/2).An expression for its density p_W(w), expressed as an infinite sum, can be found in <cit.> and <cit.>. The PG(1,c) random variable is obtained by exponential tiling of the density of a PG(1,0) random variable. Letting p_W(w|c) denote the density of a PG(1,c) random variable,p_W(w|c) = cosh(c/2)e^-c^2w/2p_W(w). PG augmentation preserves the Bernoulli likelihood: Following the treatment of <cit.>, conditioned on X = x, let W be a PG(1,|x|) random variable. Further suppose that, conditioned on X = x, Δ N and W are independent. Thenp(Δ N,w|x) = p(Δ N|x)p_W(w|x).Integrating out W, we see that the augmentation scheme does not alter p(Δ N|x). One may then ask, what is the utility of the augmentation scheme? 
The answer lies in the following identity, discussed in detail in <cit.>, and which is the key ideal behind PG augmentationp(Δ N|x)p_W(w|x) = (e^x)^Δ N/1+e^x·PG(1,|x|)∝e^-1/2(ỹ-x)^2/1/w∝𝒩(ỹ;x,1/w),where ỹ = y-1/2/w, and ∝ indicates that we are dropping terms independent of x. Equation <ref> states that, given X = x and a logit model, a Bernoulli random variable is, up to a constant independent of x, a scale mixture of Gaussian <cit.>, i.e. a Gaussian random variable with random variance 1/w, where W = w follows a PG distribution <cit.>. If we assume X ∼ p_X(x), then <cit.>p(Δ N,x,w) = p(Δ N|x,w)p_W(w|x)p_X(x) ∝𝒩(ỹ;x,1/w) p_X(x).Implications of augmentation on p(x|Δ N,w) and p(w|Δ N,x): p(x|Δ N,w) = p(Δ N,x,w)/p(Δ N,w)∝ p(Δ N|x,w)p_W(w|x)p_X(x)∝𝒩(ỹ;x,1/w) p_X(x),where we make use of Equation <ref>. If X is Gaussian, then p(x|Δ N,w) is Gaussian and available in closed-form! (Appendix).p_W(w|Δ N,x) = p(Δ N,x,w)/p(Δ N,x) = p(Δ N|x)p_W(w|x)p_X(x)/∫_w p(Δ N|x)p_W(w|x)p_X(x) = p_W(w|x),i.e. p(w|Δ N,x) = p_W(w|x) = PG(1,|x|). Together,Equations <ref> and <ref> form the basis of a uniformly ergodic <cit.> Gibbs sampler to obtain sample from p(x,w|Δ N). §.§ Block Gibbs sampler for PG-augmented SMuRF model Consider the following version of the SMuRF model with PG augmentation:{[ x_k = ρ_x x_k-1 + α_x u_x,k+ϵ_k, ϵ_k ∼ N(0,σ^2_ϵ);z_r = ρ_z z_r-1 + α_z u_z,k +δ_r, δ_r ∼ N(0,σ^2_δ);λ_k,rΔ = e^x_k+z_r/1+e^x_k+z_r;Δ N_k,r | x_k,z_r ∼Bernoulli(λ_k,rΔ); w_k,r | x_k, z_r ∼PG(1,|x_r + z_r|), k = 1,⋯,K; r=1,⋯, R. ]. We can apply the basic results from the previous subsection to derive the following result (proof in Appendix): Suppose , ,and Δ 𝐍 come from the PG-augmented SMuRF model (equation ), then p(Δ N,|,;θ) is equivalent in distribution to the following linear-Gaussian state-space model{[x_k = ρ_x x_k-1 + α_xu_x,k +ϵ_k, ϵ_k ∼𝒩(0,σ^2_ϵ); ΔÑ_k,r = x_k + z_r + ṽ_k,r, ṽ_k,r∼𝒩(0, w_k,r^-1),i.i.d. , r = 1,⋯,R; ΔÑ_k,r = Δ N_k,r - 1/2/w_k,r. ] .Following the discussion from the previous subsection, it is not hard to see that such a result would hold. The proof of this result is in the appendix, as well as the derivation of an elegant forward-filtering backward-sampling algorithm <cit.> for drawing samples from p(Δ N,|,;θ). By symmetry, it is not hard to see that a similar result holds for p(Δ N,|,;θ). Block Gibbs sampling from PG-augmented SMuRF model:The E-step of the Monte-Carlo EM algorithm consists in sampling from p(,|Δ 𝐍;θ^(ℓ)) by drawing from p(,,|Δ N;θ^(ℓ)) using a block Gibbs sampler that uses the following full-conditionals ∙ p(|Δ 𝐍,,;θ^(ℓ)), which according to the theorem above is equivalent to the posterior distribution of the state sequence in a linear-Gaussian state-space model. ∙ p(|Δ 𝐍,,;θ^(ℓ)), which obeys properties similar to the previous full-conditional (by symmetry). ∙ p(w_k,r|,) = p(w_k,r|x_k,z_r) = PG(1,|x_k + z_r|), k = 1,⋯, K, r = 1,⋯, R <cit.>. In the Appendix, we detail how we initialize the algorithm and monitor convergence.In practice, we found that estimating ρ_x and ρ_z is difficult. We hypothesize that including those parameters yields an unwieldy likelihood function. In the results we report, we assume ρ_x = ρ_z = 1, α_x = α_z = 0 and focus on estimating a simple model with two parameters σ^2_ϵ and σ^2_δ. The assumption ρ_x = ρ_z = 1 gives the random walk priors more freedom, thus allowing us to be capture the variability of the within and cross-trial processes. 
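As an illustration of the Polya-Gamma variables appearing in the third full conditional above, the sketch below draws approximate PG(1, c) variates by truncating the infinite Gamma-weighted sum representation of the PG distribution. This is for intuition only; in practice an exact PG sampler would be used, and the truncation length here is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pg(c, n_terms=200):
    """Approximate draw from PG(1, c) via a truncated Gamma-sum representation;
    adequate only for illustration, not for production inference."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=1.0, scale=1.0, size=n_terms)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2.0 * np.pi)) ** 2)) / (2.0 * np.pi ** 2)

# Sanity check: the sample mean should be close to tanh(c/2)/(2c) for c = 1.5
draws = np.array([sample_pg(1.5) for _ in range(20000)])
print(draws.mean(), np.tanh(0.75) / 3.0)
```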
We have run simulations, not reported here, that show that the joint estimation of σ^2_δ, σ^2_ϵ, α_x and α_z is stable and that our EM algorithm converges. This demonstrates the ability of the SMuRF model (Equation (<ref>)) to incorporate exogenous input stimuli.Assuming ρ_x = ρ_z = 1, in the M-step, the update equations for the parameters σ^2_ϵ, σ^2_δ, α_x and α_z follow standard formulas <cit.>α^(ℓ+1)_x=∑_k=1^K(𝔼_𝐱[x_k - x_k-1|Δ 𝐍,θ^(ℓ)])u_x,k/∑_k=1^K u_x,k^2, α^(ℓ+1)_z=∑_r=1^R(𝔼_𝐳[z_r - z_r-1|Δ 𝐍,θ^(ℓ)])u_z,r/∑_r=1^R u_z,r^2, σ^2(ℓ+1)_ϵ=𝔼_𝐱[1/K∑_k=1^K(x_k-x_k-1-α_x^(ℓ+1) u_z,k)^2|Δ 𝐍,θ^(ℓ)], σ^2(ℓ+1)_δ=𝔼_𝐳[1/R∑_r=1^R(z_r-z_r-1-α_z^(ℓ+1) u_z,k)^2|Δ 𝐍,θ^(ℓ)],where we set x_0 = z_0 = 0, and we approximate the expectations with respect to p(|Δ N,θ^(ℓ)) and p(|Δ N,θ^(ℓ)) using Gibbs samples from the E-step. §.§ Assessment of within-trial and cross-trial spiking dynamics Bayesian estimation of the SMuRF model (Equation (<ref>)) enables us to infer detailed changes in neural dynamics, in particular to extract the within-trial and cross-trial components of the neural spiking dynamics that accompany the learning of a contingency by a neuron. This is because, following estimation, inference in the SMuRF model yields the joint posterior distribution of the instantaneous spiking rate of a neuron as a function of trials, and time within a trial, conditioned on the observed data. We can use this posterior distribution, in turn, to assess instantaneous changes in neural spiking dynamics, and without the need to correct for multiple comparisons as with non-parametric methods. In what follows, we let p(,|Δ N;θ̂_ML) denote the posterior distribution ofand , given the raster data Δ 𝐍 and the maximum likelihood estimate θ̂_ML of θ. In what follows, it is understood that we use Gibbs samples (_i,_i)_i=1^n from p(,|Δ N;θ̂_ML) to obtain an empirical estimate of the distribution.Posterior distribution of the joint CIF over time and trials:We can use these posterior samples to approximate the posterior distribution, at θ̂_ML, of any quantities of interest. Indeed, it is well known from basic probability that if (_i,_i) is a sample from p(,|Δ N;θ̂_ML), then f(_i,_i) is a sample from p(f(,)|Δ N;θ̂_ML). In particular, if the instantaneous spiking rate of a neuron a time k and trial r λ_k,rΔ = e^x_k+z_r/1+e^x_k+z_r, we can use the Gibbs samples to approximate the joint posterior distribution of {λ_k,rΔ}_k=1,r=1^K,R given Δ 𝐍 and θ̂_ML.Let {λ^p_k,rΔ}_k=1,r=1^K,R be the random variable that represents the a posteriori instantaneous spiking rate of the neuron at time trial r and time k within that trial. The superscript `p' highlights the conditioning on the data Δ 𝐍 and θ̂_ML, and the fact that this quantity is a function of (,) distributed according to p(,|Δ N;θ̂_ML).Within-trial effect: We define the within-trial effect as the a posteriori instantaneous spiking rate at time k, average over all trialse^WT_k = 1/R∑_r=1^R λ^p_k,r (x_k,z_r), k = 1,⋯,K.It is important to note that the averaging is performed after characterization of the joint CIF as a function of time and trials, which is not the same as first aggregating the data across trials and applying one of the one-dimensional methods for analyzing neural data <cit.>. In practice, every Gibbs sample pair (_i,_i), i=1,⋯,n leads to a scalar quantityê^WT_i,k = 1/R∑_r=1^R λ_k,r (_i,k,_i,r), k = 1,⋯,K. 
Performing this computation over all Gibbs samples and times k=1,⋯,K leads to a joint empirical distribution for the within-trial effect {e^WT_k}_k=1^K.Cross-trial effect: We define the cross-trial effect as the a posteriori excess instantaneous spiking at trial r and time k (above the within-trial effect effect e^WT_k) averaged across all times ke^CT_r = 1/K∑_k=1^K λ^p_k,r (x_k,z_r)/e^WT_k, r = 1,⋯,R. In practice, every Gibbs sample pair (_i,_i), i=1,⋯,n leads to a scalar quantityê^CT_i,r = 1/K∑_k=1^K λ_k,r (_i,k,_i,r)/ê^WT_i,k, r = 1,⋯,R. Performing this computation over all Gibbs samples and R_c trials of interest, r=R-R_c+1,⋯,R, leads to a joint empirical distribution for the cross-trial effect {e^CT_r}_r=R-R_c^R. Remark 4: The following paragraph explains the meaning of R_c in the context of an associative learning experiment.§.§ Assessment of neural spiking dynamics across time and trials Consider an associative learning (conditioning) experiment characterized by the pairing of a conditioned stimulus (e.g. auditory) to an aversive stimulus (e.g. a shock). Let R_c be the number of conditioning trials and K_h the length of the habituation period. Gibbs samples from the SMuRF model (Equation (<ref>)) paramaterized by θ̂_ML let us approximate the a posteriori probability that the spiking rate at a given point (Point C in Figure <ref>) during one of the conditioning trials (trials 16 through 45 in this example) is bigger than the baseline spiking rate at that trial (Region A in Figure <ref>) and the average spiking rate at the same time during the habituation period (Region B in Figure <ref>). This yields a probabilistic description of the the intricate dynamics of neural spiking that accompany the learning of the contingency by a neuron. LetEvent U ={λ^p_k,r (x_k,z_r) > 1/R_c∑_m=1^R_cλ^p_k,m (x_k,z_m)^Average rate in Region A} Event V = {λ^p_k,r (x_k,z_r) > 1/K_h∑_s=1^K_hλ^p_s,r (x_s,z_r)^Average rate in Region B}For a given pair (k,r) s.t. k ≥ K_h, r ≥ R_c, this probability isℙ[Event U∩Event V] ≈ 1/n∑_i=1^n 𝕀_{λ^p_k,r (_i,k,_i,r) > 1/R_c∑_m=1^R_cλ^p_k,m (_i,k,_i,m) ∩λ^p_k,r (_i,k,_i,r) > 1/K_h∑_s=1^K_hλ^p_s,r (_i,s,_i,r)},where the second line approximates the probability of the event of interest using its frequency of occurrence in the n posterior samples. As we demonstrate in the following section, we thus obtain an detailed characterization of the dynamics of neural spiking that accompany learning. In the following section, we use simulated and real data examples to demonstrate the utility of the SMuRF model (Equation (<ref>)) for the characterization of detailed neural spiking dynamics.§ APPLICATIONS§.§ Simulation studies We simulated neural spike raster data from a neuron that exhibits a conditioned response to the conditioned stimulus (Figure <ref>) in an associative learning experiment. The experiment consists of 45 trials, each of which lasts 2 s. The conditioned stimulus becomes active 1 s into a trial, while the aversive stimulus becomes active after trial 15. We obtain the simulated data by dividing the raster into two pre-defined regions as shown in Figure <ref>. Region A consists of all trials before trial 16, along with the period from all trials before the conditioned stimulus is presented. We assume that the rate of spiking of the neuron is λ_A = 60 Hz. Region B consists of the period from trials following trial 15 after the conditioned stimulus is presented. The rate of spiking of the neuron in this region is λ_B = 20 Hz. 
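Returning to the probability of the joint event U ∩ V defined above, in code it is simply the fraction of posterior draws for which both inequalities hold. The sketch below is a hedged illustration: the array names, shapes, and the indexing of the comparison regions A and B are assumptions, not the authors' implementation.

```python
import numpy as np

def learning_probability(lam, k, r, ref_trials, ref_bins):
    """Monte-Carlo estimate of P[Event U and Event V] from posterior draws.
    `lam` has shape (n_draws, K, R) and holds lambda_{k,r}*Delta for each Gibbs
    sample; `ref_trials` / `ref_bins` index the comparison regions (A and B)."""
    point_c = lam[:, k, r]
    region_a = lam[:, k, :][:, ref_trials].mean(axis=1)   # same time bin, reference trials
    region_b = lam[:, ref_bins, r].mean(axis=1)           # same trial, reference bins
    return np.mean((point_c > region_a) & (point_c > region_b))

# Hypothetical usage with fake posterior draws (500 samples, 200 bins, 45 trials):
lam = np.random.default_rng(3).uniform(0.0, 0.5, size=(500, 200, 45))
print(learning_probability(lam, k=150, r=30,
                           ref_trials=np.arange(15), ref_bins=np.arange(100)))
```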
We applied the SMuRF model (Equation (<ref>)) to the analysis of this simulated neural spike raster. Figure <ref>(a) shows that, during conditioning, learning is accompanied by a doubling of the spiking rate above the within-trial spiking rate of the neuron. Indeed, the left hand panel of the figure shows the cross-trial effect which, following conditioning, increases above its average initial value of ≈ 1 to ≈ 2. Figure <ref>(b) provides a more detailed characterization of the neural spiking dynamics. With probability close to 1, the spiking rate at a given time/trial pair–following the conditioned stimulus and during conditioning (Figure <ref> C)–is bigger than the average rate at the same trial (Figure <ref> A) and the average rate at the same time (Figure <ref> B). We conclude that, with high probability, the simulated neuron exhibits a conditioned response to the conditioned stimulus.§.§ Neural dynamics during associative learning of fearBasic experimental paradigm: The ability to learn through observation is a powerful means of learning about aversive stimuli without direct experience of said stimuli. We use the SMuRF model (Equation (<ref>)) to analyze data from a fear conditioning paradigm designed to elucidate the nature of the circuits that facilitate the associative learning of fear. The experimental paradigm is described in detail in <cit.>. Briefly, an observer mouse observes a demonstrator receive conditioned stimulus-shock pairings through a perforated transparent divider. The experiment consists of 45 to 50 trials, divided into two phases. During the first 15 trials of the experiment, termed the habituation period, both the observer and the demonstrator simply hear an auditory conditioned stimulus. From the 16^th trial onwards, the auditory conditioned stimulus is followed by the delivery of a shock to the demonstrator. The data are recorded from the pre-frontal cortex (PFC) of the observer mouse. Results:Figure <ref>(a) shows the within and cross-trial effects estimated using the SMuRF model applied to a cortical neuron from the experiment described above. The estimated within-trial (bottom) and cross-trial (left) components indicate significant changes respectively in response to the conditioned stimulus and to conditioning. By definition (Equation <ref>), the cross-trial effects takes into account the increase in spiking rate due to the presentation of the conditioned stimulus. The bottom panel suggests that this neuron exhibits a delayed response to the conditioned stimulus, beginning at ≈ 400 ms following conditioned stimulus presentation. Accounting for this increase in within-trial spiking rate due to the conditioned stimulus, the left panel shows a multiplicative increase in spiking rate due to conditioning from an average initial value of ≈ < 1 (indicative of suppression, as can be seen through the sparseness of the raster during trials 1 through 5) to a peak average value of ≈ 4 at trial 23. This increase, however, does not persist as in the case of the simulated data (Figure <ref>), suggesting that conditioning is accompanied by intricate dynamics in neural modulation.figure-1 Figure <ref> (b) provides a more detailed characterization of the neural spiking dynamics of this neuron.The figure shows the evolution, as a function of time and trials, of the probability that the spiking rate at a given time/trial pair (Figure <ref> C) is bigger than the average rate at the same trial (Figure <ref> A) and the average rate at the same time (Figure <ref> B). 
The figure indicates that this neuron exhibit a delayed conditioning to the conditioned stimulus (beginning ≈ 400 ms following conditioned stimulus presentation) and that the extent of the condition is highest first between trials 18 and 24 and then between trials 31 and 41. Figure <ref> shows an application of the SMuRF model (Equation (<ref>)) to a cortical neuron that does not exhibit a conditioned response to the conditioned stimulus. The bottom of panel (a) indicates no significant increase in the within-trial spiking rate in response to the conditioned stimulus, while the left panel shows that the cross-trial effect remains constant throughout the experiment an average value of ≈ 1. This indicates that conditioning does not result in a significant increase in spiking rate. Panel (b) corroborates these findings: for all points C following the conditioned stimulus and during conditioning, there is a small probability that the instantaneous spiking rate is significantly different from the average spiking rates in Regions A and B.figure-1 Figures <ref> and <ref>show results for two additional cortical neurons that exhibit a transient conditioned response to the conditioned stimulus.Using SMuRF inference to determine a neuron's learning time and trialThe power of the Bayesian approach, and the SMuRF model (Equation (<ref>)) in particular, lies in the fact that it lets us approximate the a posteriori probability that the spiking rate at a given point (point C in Figure <ref>) during one of the conditioning trials (trials 16 through 45 in this example) is bigger than the baseline spiking rate at that trial (Region A in Figure <ref>) and the average spiking rate at the same time during the habituation period (Region B in Figure <ref>) (Equation <ref>). This yields an instantaneous probabilistic quantification of the extent of learning for any given time and trial pair.Panel (b) of Figures <ref>, <ref>, <ref> and <ref> provide a detailed characterizations of the dynamics of learning and its extent for all times following the onset of the conditioned stimulus, all conditioning trials.Here, we provide some guidance for practitioners to summarize the results of our inference to a single learning time/trial pair. We would like to stress, however, that the power of our methods lies in the detail provided by panel (b) of Figures <ref>, <ref>, <ref> and <ref>. Since the SMuRF model enables us to compute a empirical probability that the spiking rate at a given time/trial pair (Figure <ref>, Point C) is bigger than the average rate at the same trial (Figure <ref>, Region A) and the average rate at the same time (Figure <ref>, Region B), we can identify learning time and learning trial for each neuron by finding the first time after cue and after conditioning that this probability exceeds a certain threshold. Table  <ref> reports the learning time and trial computed using a threshold of of 95%. Note that the learning time is computed with respect to the onset of the conditioned stimulus (time =0 ms). The learning times and trials reported in Table <ref> are consistent with the detailed inference provided by the respective Figures for these neurons. Indeed, the cortical unit from Figure <ref> shows a delayed response, significant 617 ms after conditioned stimulus onset and at trial 16. The cortical unit from Figure <ref> only exhibits a significant change in neural spiking 1316 ms following the conditioned stimulus and at trial 34. 
This is consistent with our previous observation from Figure <ref> that this neuron does not exhibit a conditioned response to the stimulus. In the Appendix, we perform a simulation that demonstrates the ability of SMuRF inference to identify the learning time and trial when learning of a contingency is accompanied by sustained changes in neural spiking following conditioned stimulus onset and during conditioning. We also demonstrate through simulation that the SMuRF model is robust to the presence of error trials. §.§ Application of SMuRF model to a non-separable example We demonstrate the limitations of the separability assumption in the SMuRF model (Equation (<ref>)) by applying it to the neural spike raster data from <cit.>. We briefly describe the experiment here and refer the reader to <cit.> for a more detailed description. Panel (a) of Figure <ref> shows neural spiking activity from a hippocampal neuron recorded during an experiment designed for a location-scene association learning task. The same scene was shown to a macaque monkey across 55 trials, and each trial lasted 1700 ms. The first 300 ms of every trial is a fixation period, and the scene is presented to the monkey from 300 to 500 ms. A delay period takes place from 800 to 1500 ms, followed by a response period from 1500 to 1700 ms. The data from the experiment are shown in the center of panel (a) of Figure <ref>. The raster suggests that the time- and trial-dependent CIF of this neuron is not separable. Intuitively, this can be seen from the fact that the region in which there are significant changes in neural spiking does not follow the rectangular form of Figure <ref>. Nevertheless, the CIF could be well approximated by a separable model. We apply the SMuRF model to these data to uncover some of its limitations in non-separable settings. The bottom panel of Figure <ref>(a) shows the estimate of the within-trial effect from the SMuRF model, while the left panel shows the cross-trial effect. These two panels indicate that the SMuRF model is able to capture within- and cross-trial dynamic changes in the spiking activity of the neuron. Figure <ref>(b) shows the estimate of the a posteriori mean instantaneous spiking rate {λ̂^p_k,rΔ}_k=1,r=1^K,R (in Hz) of the neuron at trial r and time k within that trial. This figure shows that, while the SMuRF model is able to characterize the detailed changes in spiking dynamics, it does not fully capture the non-separable nature of the raster data. Remark 5: Unlike for the cortical neurons, this experiment does not have a conditioning period. Hence it does not make sense to generate plots such as Figure <ref>(b). § CONCLUSION We proposed a 2D separably-Markov random field (SMuRF) for the analysis of neural spike rasters that obviates the need to aggregate data across time or trials, as in classical one-dimensional methods <cit.>, while retaining their interpretability. The SMuRF model approximates the trial-dependent conditional intensity function (CIF) of a neuron as the product of a within-trial component, in units of Hz (spikes/s), and a unitless quantity, which we call the cross-trial effect, that represents the excess spiking rate above what can be expected from the within-trial component at that trial.
One key advantage of our 2D model-based approach over non-parametric methods stems from the fact that it yields a characterization of the joint posterior (over all trials and times within a trial) distribution of the instantaneous rate of spiking of as a function of both time and trials given the data. This not only obviates the need to correct for multiple comparisons, but also enables us to compare the instantaneous rate of any two trial time pairs at the millisecond resolution, where non-parametric methods break down because the sample size is 1.We applied the SMuRF model to data collected from neurons in the pre-frontal cortex (PFC) in an experiment designed to characterize the neural underpinnings of the associative learning of fear in mice. We found that, as a group, the recorded cortical neurons exhibit a conditioned response to the auditory conditioned stimulus, occurring 3 to 4 trials into conditioning. We also found intricate and varied dynamics of the extent to which the cortical neurons exhibit a conditioned response (e.g. delays, short-term conditioning). This is likely reflective of the variability in synaptic strength, connectivity and location of the neurons in the population.In future work, we plan to investigate non-separable random field models of neural spike rasters, such as Markov random fields <cit.> (MRFs). Compared to the SMuRF model, MRFs are 2D models for which the dimensionality of the putative state-space is as large as the dimensionality of the raster, suggesting that MRFs may provide a more detailed characterizations of neural spike rasters. Indeed, the SMuRF model makes the strong assumption that the neural spiking dynamics are decomposable into two time scales, with the additional simplifying assumption that there is only one component per time scale. This simplifying assumption is motivated by one-dimensional state-space models of neural data <cit.> in which a neuron's time-dependent CIF is only a function of one hidden state sequence. We will investigate the inclusion of additional components in future work. We also plan to investigate analogues of the SMuRF model for population level data. MRFs, multi-component and population-level SMuRF models, naturally lead to model selection problems, and to the investigation of tools, based on sequential Monte-Carlo methods <cit.> (aka particle filters), to compare state-space models of neural spike rasters (such as one-dimensional models <cit.>, the SMuRF model, and MRFs). The development of such tools for model comparisons is, in our opinion, the ultimate measure of the ability of different models to capture the intricate dynamics present in neural spike rasters. Lastly, as previously mentioned, the SMuRF model can be interpreted as a two-dimensional Gaussian process prior on the neural spiking rate surface <cit.>, with a separable kernel that is the Kronecker product of kernels from Gauss-Markov processes (one process for each dimension).The choice of kernels in the SMuRF model leads to the very efficient algorithms for estimation and inference derived in this article. Moreover, these algorithms scale well to more than two dimensions unlike classical kernel methods. We plan to explore this connection to Gaussian process inference in future work. §.§ AcknowledgementsWe would like to thank Dr. Anne C. Smith for her generous feedback on this manuscript and extensive discussions regarding the SMuRF model. Demba thanks the Alfred P. Sloan Foundation. K.M.T. 
is a New York Stem Cell Foundation–Robertson Investigator and McKnight Scholar and this work was supported by funding from the JPB Foundation, the PIIF and PIIF Engineering Award, PNDRF, JFDP, Alfred P Sloan Foundation, New York Stem Cell Foundation, McKnight Foundation, R01-MH102441-01 (NIMH), RF1-AG047661-01 (NIA), R01-AA023305-01 (NIAAA) and NIH Director’s New Innovator Award DP2-DK-102256-01 (NIDDK).§ APPENDIX§.§ The SMuRF model cannot be converted easily to a standard state-space modelWe focus on the simple case when ρ_x = ρ_z = 1, and α_x = α_z = 0{[ x_k =x_k-1 + ϵ_k , ϵ_k ∼𝒩(0, σ^2_ϵ), k = 1,⋯,K;z_r =z_r-1 + δ_r , δ_r ∼𝒩(0,σ^2_δ), r = 1,⋯,R ].Let t = (r-1)× K + k, r = 1, ⋯,R, k=1,⋯,K. The index t is obtained by “unstacking" the raster trials and serializing them.The question we ask is whether the state equations from the SMuRF model can be turned into ones of the form_t = _t-1 + _t,where _t ∈^2. Let _t,1 and _t,2 denote the first and second components of _t respectively. We ask that _t ∈^2 because of the two dimensions present in the SMuRF model. Allowing the dimensionality of _t to increase up to K would allow a representation of the form of Equation <ref>. However, this would become a very high-dimensional, unwieldy state-space model.Intuitively, this cannot be done for the following reason: the dimensionality of the latent states in the SMuRF model is K + R, while the dimensionality of the state sequence in Equation <ref> is 2 × (K × R). For there to be an equivalence, the sequence _t must necessarily be redundant, i.e. some of the states must be copies of previous states. Storing these copies, would necessarily mean having to increase the dimensionality of the state space!Let _t = [ x_t - (⌈t/K⌉ - 1) × K; z_⌈t/K⌉ ]∈^2. The quantity ⌈t/K⌉ gives the trial index r corresponding time index t. The within-trial index corresponding to index t is then obtained by substracting (r-1) × K from t.Note, for instance, that _1 = [ x_1; r_1 ] and _K+1 = [ x_1; r_2 ]. In general, _t,1 = _t',1 = x_k_0 for some 1 ≤ k_0 ≤ K if and only if t > t's.t. t - t' = p× K for some integer p, where we assume without loss of generality that t > t'. That is, two different indices t and t' share the same within-trial component if and only if they are apart by an integer multiple of K. Stated otherwise, the first component of _t exhibit circular symmetry! 
Therefore, for Equation <ref> to hold, _t-1,1 must equal _t,1, which is not possible because t and t-1 are not apart by an integer multiple of p!The argument above shows that, in order to write the SMuRF state equations in the form of Equation <ref>, one would need to augment the state _t to dimension K + 1, which would lead to a very high dimensional standard state-space model, thus increasing the complexity of performing inference.§.§ Derivation of Gibbs sampler for PG-augmented SMuRF model We first derive Theorem <ref>, which leads to the forward-filter backward-sampling algorithm from the full-conditionals forandin the Gibbs sampler.Since w_k,r| x_k,z_r is drawn from a PG distribution, we can write the log pdf of w_k,r| x_k,z_r as, log p(w_k,r| x_k,z_r)= log(cosh(x_k+z_r/2)) + log(∑_i=1^∞(-1)^i (2i+1)/√(2πw_k,r^2) e^-(2i+1)^2/8w_k,r-x_k^2 w_k,r/2) The complete data likelihood of the SMuRF model is,p(Δ 𝐍, , ; θ)= p(Δ 𝐍|,)p(|)p()= ∏_k=1^K∏_r=1^R{p(Δ N_k^r|x_k,z_r)p(w_k,r|x_k,z_r)}∏_k=1^K p(x_k|x_k-1; σ_ϵ^2)∏_r=1^Rp(z_r|z_r-1; σ_δ^2)The log of the complete data likelihood is therefore,log p(Δ 𝐍, , ; θ) =∑_k=1^K∑_r=1^R{log p(Δ N_k,r|x_k,z_r) + log p(w_k,r|x_k,z_r)} + π(,)= ∑_k=1^K∑_r=1^R[Δ N_k,rlog(e^x_k+z_r/1+e^x_k+z_r) + (1-Δ N_k,r)log(1/1+e^x_k+z_r)+ log(cosh(x_k+z_r/2)).. + log(∑_i=1^∞(-1)^i (2i+1)/√(2πw_k,r^2) e^-(2i+1)^2/8w_k,r-(x_k+z_r)^2w_k,r/2)]+π(,)= ∑_k=1^K∑_r=1^R[ Δ N_k,r(x_k+z_r) - log(1+e^x_k+z_r) + log(1+e^x_k+z_r/2e^x_k+z_r/2) ..+ log(e^-(x_k+z_r)^2w_k,r/2∑_i=1^∞(-1)^i (2i+1)/√(2πw_k,r^2) e^-(2i+1)^2/8w_k,r)]+π(,)= ∑_k=1^K∑_r=1^R[ Δ N_k,r(x_k+z_r) - log(2)-x_k+z_r/2 -(x_k+z_r)^2w_k,r/2..+ log(∑_i=1^∞(-1)^i (2i+1)/√(2πw_k,r^2) e^-(2i+1)^2/8w_k,r)]+∑_k=1^K[1/2log(2πσ_ϵ^2)-(x_k-x_k-1)^2/2σ_ϵ^2]+π(,)=Klog(2)+ π(,) + ∑_k=1^K∑_r=1^R{Δ N_k^r(x_k+z_r)-x_k+z_r/2-(x_k+z_r)^2w_k,r/2}+∑_k=1^K∑_r=1^R{log(∑_i=1^∞(-1)^i (2i+1)/√(2πw_k,r^2) e^-(2i+1)^2/8w_k,r)}whereπ(,)= ∑_k=1^K[1/2log(2πσ_ϵ^2)-(x_k-ρ_x x_k-1 - α_x u_x,k)^2/2σ_ϵ^2] +∑_r=1^R[1/2log(2πσ_δ^2)-(z_r-ρ_z z_r-1-α_z u_z,k)^2/2σ_δ^2] From the complete data log likelihood, we see thatlog p( | Δ 𝐍, ,;θ)∝∑_k=1^K∑_r=1^R{Δ N_k^r(x_k+z_r)-x_k+z_r/2-(x_k+z_r)^2w_k,r/2} + π(,)∝-∑_k=1^K∑_r=1^R1/2(ΔÑ_k,r-(x_k+x_r))^2/1/w_k,r+π(,),where ΔÑ_k,r = Δ N_k,r - 1/2/w_k,r.Therefore, we can rewrite the augmented model as a linear Gaussian state space model as stated in Theorem <ref>.{[ x_k = ρ_x x_k-1 + α_x u_x,k + ϵ_k, ϵ_k ∼𝒩(0,σ^2_ϵ) ; ỹ_k = x_k + ṽ_̃k̃, ṽ_̃k̃∼𝒩(0,(∑_r=1^R w_k,r)^-1); ỹ_k = ΔÑ_k = x_k-K/2-∑_r=1^R x_r w_k,r ;] .Let H_k,r = ΔÑ_1^r,…,ΔÑ_k-1^r denote the history of the observed process up-to and including k-1. We can now writep(|Δ 𝐍, , ; θ) ∝∏_k=1^K p(Δ N|x_k)p(x_k|H_k)The forward filtering equations for this linear Gaussian state space model are as follows.x_k|k-1 = ρ_x x_k-1|k-1 + α_x u_x,k σ^2_k|k-1 = ρ_x^2σ^2_k-1|k-1+σ^2_ϵ x_k|k =ρ_xx_k|k-1 +(∑_r=1^Rw_k,r)σ^2_k|k-1/1+(∑_r=1^Rw_k,r)σ^2_k|k-1(α_x u_x,k/(∑_r=1^Rw_k,r)σ^2_k|k-1+∑_r=1^R Δ N_k^r -R/2-∑_r=1^R z_r w_k,r/(∑_r=1^Rw_k,r)-ρ_x x_k|k-1) σ^2_k|k = σ^2_k|k-1/1+(∑_r=1^Rw_k,r)σ^2_k|k-1After running the forward filtering algorithm, we obtain x_K|K and σ^2_K|K from the final iteration of the filter. We can then draw x_K ∼ N(x_K|K, σ^2_K|K). Now we can treat x_K as the new observations and use the Kalman filter again to draw samples for x_K-1, and repeat this process iteratively for x_K-1,..., x_1. 
The new observation equation reads,{[ x_k = ρ_x x_k-1 + ϵ_k, ϵ_k ∼ N(0,σ^2_ϵ); x_k+1 = x_k+ϵ_k ].From Bayes Rule we have,p(x_k|x_k+1,H_k) = p(x_k+1|x_k)p(x_k|H_k)/p(x_k+1|H_k)Denote the densities of x_k|x_k+1,H_k asx_k|x_k+1,H_k∼ N(x_k|k^*, σ^2_k|k^*)Then the update equations are,log p(x_k|H_k)∝log p(x_k+1|x_k) + log p(x_k|H_k-1) x_k|k^* = ρ_x x_k|k-1+ σ^2_k|k/σ^2_ϵ(x_k+1-ρ_x x_k|k-1) σ^2_k|k^* = σ^2_ϵσ^2_k|k-1/σ^2_ϵ+σ^2_k|k-1With this backward-sampling algorithm, we can draw x_k∼ N(x_k|k^*,σ^2_k|k^*), where i = K-1, ..., 1. The forward-filtering and backward-sampling algorithm are symmetric for x_k and z_r. §.§ Initialization of the EM algorithm and the Gibbs samplerWe initialize the Monte-Carlo EM algorithm with values for σ^2_ϵ and σ^2_δ obtained by applying the one-dimensional state space model from <cit.> to the raster data aggregated across either trials or time. We initialize the Gibbs sampler using trajectories drawn from posterior distribution of the state in the one-dimensional state-space models <cit.> used to initialize σ^2_ϵ and σ^2_δ. The Gibbs sampler draws 5000 samples for , , andat every iteration. The algorithm reaches convergence when the absolute change in σ^2_ϵ and σ^2_δ is less than a certain threshold (10^-5). §.§ Ability of SMuRF model to identify learning time and trial in simulated data The results of our analysis of the cortical data in Section <ref> demonstrate that learning of a contingency by a neuron is a dynamic process that cannot be easily quantified in terms of a static time and trial of learning. We also demonstrated (Table <ref>) how to use inferences from the SMuRF model to identify a learning time and a trial.Here, we use simulated data to determine the ability of the SMuRF model to identify learning time and trial when learning is accompanied by sustained changes in neural spiking following conditioned stimulus onset and during conditioning. In particular, we assess the sensitivity of our method to the extent of the change in neural spiking rate following conditioned stimulus onset and during conditioning.We simulated neural spike raster data in the same manner as described in the Simulation Studies component of our Applications section (Section <ref>).As in said section, the raster is divided into two regions (Figure <ref>). We assume that the rate of spiking of the neuron in Region B is fixed and equal λ_B = 20 Hz. We vary the rate of spiking λ_A of the neuron in Region A from 20 to 45 Hz in 5 Hz increments. For each value of λ_A, we simulated 10 independent rasters and determine the learning time and trial as in Table <ref>. We use the average over the 10 rasters as the learning time/trial pair. When our method detects no change, we declare the learning time and trial as the last time and trail pair in the simulated data, i.e. 1000 ms and trial. Figures <ref>(a) and <ref>(b) show the averages of the identified learning times and trials as a function of the ratio λ_A/λ_B. The true learning time is at 0 ms with respect to conditioned stimulus onset, and true learning trial is trial 16. The figures demonstrate that the inference performed from the SMuRF model is able to detect the true learning time and trial when the rate in Region A is 1.8 and 2 times larger than that in Region B. Moreover, the lower the ratio λ_A/λ_B, the larger the delay. The intuitive reason why it is easier to determine the learning trial is that, for a given trial, there are many more observations, compared to the number of trials for a give time instant. 
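For completeness, the forward-filtering backward-sampling recursions derived above can be written compactly in code. The sketch below draws one sample of the within-trial state x given z and the Polya-Gamma variables w, for the simplified case ρ_x = 1 and α_x = 0 used in our results; it is an illustrative reimplementation of the Appendix equations, not the authors' code, and variable names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def ffbs_x(dN, z, w, sigma_eps2, x0_mean=0.0, x0_var=1.0):
    """One forward-filtering backward-sampling draw of x | dN, z, w
    for the random-walk state equation x_k = x_{k-1} + eps_k."""
    K, R = dN.shape
    m = np.empty(K)               # filtered means
    P = np.empty(K)               # filtered variances
    mean, var = x0_mean, x0_var
    for k in range(K):
        var += sigma_eps2                                     # predict step
        Wk = w[k].sum()                                       # combined PG precision
        yk = (dN[k].sum() - R / 2.0 - (w[k] * z).sum()) / Wk  # pseudo-observation
        gain = var / (var + 1.0 / Wk)
        mean = mean + gain * (yk - mean)                      # update mean
        var = var * (1.0 / Wk) / (var + 1.0 / Wk)             # update variance
        m[k], P[k] = mean, var
    x = np.empty(K)
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for k in range(K - 2, -1, -1):                            # backward sampling
        g = P[k] / (P[k] + sigma_eps2)
        x[k] = rng.normal(m[k] + g * (x[k + 1] - m[k]),
                          np.sqrt(P[k] * sigma_eps2 / (P[k] + sigma_eps2)))
    return x

# Hypothetical usage inside one Gibbs sweep:
# x = ffbs_x(dN, z, w, sigma_eps2=0.05**2)
```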
§.§ Robustness of SMuRF model to the presence of error trials We simulated neural spike raster data in the same manner as described in the Simulation Studies component of our Applications section (Section <ref>). We picked three consecutive trials, starting from trial 21, to be error trials in which all of the observations were 0. Note that the data were simulated in the same manner as in Figure <ref>, except for the presence of the error trials. Figure <ref> shows the result of applying the SMuRF model to these simulated raster data. The presence of the error trials does not affect our remarks for Figure <ref>.
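For reference, a raster of the kind used in these simulation studies can be generated in a few lines. The sketch below uses 1-ms Bernoulli bins, the Region A/B rates, the learning trial and the error-trial positions described above; the number of trials, the trial length and the stimulus-onset time are illustrative values, not the exact settings of the figures.

import numpy as np

def simulate_raster(n_trials=47, n_ms=1000, onset_ms=500, learn_trial=16,
                    lam_A=40.0, lam_B=20.0, error_trials=(21, 22, 23), seed=0):
    """Simulate a binary spike raster (trials x ms): baseline rate lam_B (Hz)
    everywhere, rate lam_A after stimulus onset on and after the learning trial,
    and error trials containing no spikes at all."""
    rng = np.random.default_rng(seed)
    p_B, p_A = lam_B / 1000.0, lam_A / 1000.0          # per-bin spike probabilities
    raster = rng.random((n_trials, n_ms)) < p_B
    raster[learn_trial - 1:, onset_ms:] = rng.random((n_trials - learn_trial + 1,
                                                      n_ms - onset_ms)) < p_A
    for t in error_trials:                              # error trials: all observations 0
        raster[t - 1, :] = False
    return raster.astype(int)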
http://arxiv.org/abs/1709.09723v1
{ "authors": [ "Yingzhuo Zhang", "Noa Malem-Shinitski", "Stephen A Allsop", "Kay Tye", "Demba Ba" ], "categories": [ "stat.ME", "cs.CE" ], "primary_category": "stat.ME", "published": "20170927201848", "title": "Estimating a Separably-Markov Random Field (SMuRF) from Binary Observations" }
1,2]Eva Kaslik 1]Mihaela Neamţu*EVA KASLIK & MIHAELA NEAMŢU[1]West University of Timişoara, Romania[2]Institute e-Austria Timişoara, Romania*Mihaela Neamţu, [email protected][Abstract] A four-dimensional mathematical model of the hypothalamus-pituitary-adrenal (HPA) axis is investigated, incorporating the influence of the GR concentration and general feedback functions. The inclusion of distributed time delays provides a more realistic modeling approach, since the whole past history of the variables is taken into account. The positivity of the solutions and the existence of a positively invariant bounded region are proved. It is shown that the considered four-dimensional system has at least one equilibrium state and a detailed local stability and Hopf bifurcation analysis is given. Numerical results reveal the fact that an appropriate choice of the system's parameters leads to the coexistence of two asymptotically stable equilibria in the non-delayed case. When the total average time delay of the system is large enough, the coexistence of two stable limit cycles is revealed, which successfully model the ultradian rhythm of the HPA axis both in a normal disease-free situation and in a diseased hypocortisolim state, respectively. Numerical simulations reflect the importance of the theoretical results. Stability and Hopf bifurcation analysis of a four-dimensional hypothalamic-pituitary-adrenal axis model with distributed delays This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project no. PN-II-RU-TE-2014-4-0270. [ December 30, 2023 =================================================================================================================================================================================================================================================================================================§ INTRODUCTIONThe hypothalamus-pituitary-adrenal (HPA) axis is a neuroendocrine system which regulates a number of physiological processes <cit.>, playing an important role in stress response. It consists of the hypothalamus, pituitary and adrenal glands, as well direct influences and positive and negative feedback interactions. Different types of stressors (e.g. infection, dehydration, anticipation, fear) activate the secretion of corticotropin-releasing hormone (CRH)in the hypothalamus, which induces the corticotropin (ACTH) production in the pituitary. ACTH travels by the bloodstream to the adrenal cortex, where it activates the release of cortisol (CORT), which in turn down-regulates the production of both CRH and ACTH.Dynamical systems have previously proved to be successful in studying metabolic and endocrine processes. Different types of mathematical models of the HPA axis have been recently explored. Three dimensional systems of differential equations with or without time delays, with the state variables given by the hormone concentrations CRH, ACTH and CORT, have been used to model the HPA axis in <cit.>. The influence of the circadian rhythm in the mathematical model has been analyzed in <cit.>. A more general three-dimensional model has been developed in <cit.>, possessing a unique equilibrium state. If time delays are not taken into consideration, no oscillatory behavior has been observed <cit.>. Oscillatory solutions should be a feature of mathematical models of the HPA axis, as they correspond to the circadian / ultradian rhythm of hormone levels <cit.>. 
A generalization of the "minimal model" <cit.>has been obtained in <cit.>, including memory terms in the form of distributed delays and fractional-order derivatives, which are shown to generate oscillatory solutions. Due to the transportation of the hormones throughout the HPA axis, time delays should mandatorily be incorporated in the considered mathematical models. With the aim of reflecting the whole past history of the variables, general distributed delays are considered, proving to be more realistic and more accurate in real world applications than discrete time delays <cit.>. Distributed delay models appear in a wide range of applications such as hematopoiesis <cit.>, population biology <cit.> or neural networks <cit.>. Four-dimensional models which incorporate the positive self-regulation of glucocorticoid receptors (GR) in the pituitary have been investigated in <cit.>. In particular, in <cit.> we constructed a four-dimensional general model with distributed time delays, which represents an extension of the minimal model of <cit.>. In <cit.>, it has been suggested that positive self-regulation of GR may trigger bistability in the dynamical structure of the HPA model, i.e. there exist two asymptotically stable equilibrium states: one corresponding to the normal disease-free state with higher cortisol levels, and a second one with lower cortisol levels related to a diseased state associated with hypocortisolism. In this paper, an in-depth analysis is provided for the distributed-delay model introduced in <cit.>, proving the positivity of the solutions and the existence of a positively invariant bounded region. It is shown that the considered four-dimensional system has at least one equilibrium state and a local stability and bifurcation analysis is provided. Numerical results reveal the fact that an appropriate choice of the system's parameters leads to the coexistence of two asymptotically stable equilibria in the non-delayed case. Moreover, when the total average time delay is large enough, it is shown that two stable limit cycles coexist, which appear due to Hopf bifurcations, extending the results presented in <cit.>. § MATHEMATICAL MODEL OF HPA WITH DISTRIBUTED DELAYS With the aim of formulating a mathematical model of the HPA axis, the following sequence of events is considered. Cognitive and physical stressors stimulate CRH neurons in the paraventricular nucleus (PVN) of the hypothalamus to trigger the secretion of corticotropin-releasing hormone (CRH), which is released into the portal blood vessel of the hypophyseal stalk. CRH is transported to the anterior pituitary, where it stimulates the secretion of adrenocorticotropin hormone (ACTH), with an average time delay τ_1. ACTH then activates a complex signaling cascade in the adrenal cortex, stimulating the secretion of the stress hormone cortisol (CORT) with the average time delay τ_2. CORT exerts a negative feedback on the hypothalamus and the pituitary, suppressing the synthesis and release of CRH and ACTH, in an effort to return them to the baseline levels. On one hand, cortisol inhibits the secretion of CRH in the hypothalamus <cit.>, with an average time delay τ_31. On the other hand, CORT binds to glucocorticoid receptors (GR) in the pituitary and performs a negative feedback on the secretion of ACTH, with an average time delay τ_32. 
Moreover, the CORT-GR complex self-upregulates the GR production in the anterior pituitary, with an average time delay τ_34 .Denoting the plasma concentrations of hormones CRH, ACTH and CORT by x_1(t), x_2(t), and x_3(t) respectively, and the availability of the glucocorticoid receptor GR in the anterior pituitaryby x_4(t), the following system of differential equations with general distributed delays is considered:{[ ẋ_1(t)=k_1f_1(∫_-∞^t x_3(s)h_31(t-s)ds)-w_1x_1(t),; ẋ_2(t)=k_2f_2(x_4(t)∫_-∞^t x_3(s)h_32(t-s)ds)∫_-∞^tx_1(s)h_1(t-s)ds-w_2x_2(t),;ẋ_3(t)=k_3∫_-∞^tx_2(s)h_2(t-s)ds-w_3x_3(t),; ẋ_4(t)=k_4(ξ+f_3(x_4(t)∫_-∞^t x_3(s)h_34(t-s)ds))-w_4x_4(t).;].Here, the positive constants k_i, i=1,4, relate the production rate of each variable to specific factors that regulate the rate of release/synthesis <cit.>. The basal production rate ξ and elimination constants w_1,w_2,w_3,w_4 are positive.The function f_1 represents the negative feedback of CORT on CRH levels in the paraventricular nucleus of the hypothalamus while the function f_2 describes the negative feedback of the CORT-GR complex (at concentration x_3(t)x_4(t)) in the pituitary. The positive feedback function f_3, describes the self-upregulation effect of the CORT-GR complex on GR production in the anterior pituitary. The following general assumptions will be considered:* f_1,f_2:[0,∞)→ (0,1] are strictly decreasing, smooth and bounded on [0,∞);* f_3:[0,∞)→ [0,1) is strictly increasing, smooth and bounded on [0,∞);* f_1(0)=f_2(0)=1; f_3(0)=0.As a special case, the feedback functions can be chosen as Hill functions, such as in <cit.>, which verify the conditions given above:f_1(u)=1-ηu^α_1c_1^α_1+u^α_1,f_2(u)=1-μu^α_2c_2^α_2+u^α_2, f_3(u)=u^α_3c_3^α_3+u^α_3with Hill coefficients α_1,α_2,α_3≥ 1, η,μ∈(0,1], and microscopic dissociation constants c_1,c_2,c_3>0. In system (<ref>), the delay kernels h_1,h_2,h_31,h_32,h_34:[0,∞)→[0,∞) are probability density functions representing the probability of occurrence of a particular time delay. These functions are bounded, piecewise continuous and satisfy∫_0^∞h(s)ds=1.The average time delay of a kernel h(t) isτ=∫_0^∞sh(s)ds<∞.In this paper, we focus our attention on two types of delay kernels: * Dirac kernels: h(s)=δ(s-τ), where τ≥ 0, equivalent to a discrete time delay:∫_-∞^t x(s)h(t-s)ds=∫_0^∞ x(t-s)δ(s-τ)ds=x(t-τ). * Gamma kernels: h(s)= s^p-1e^-s/θθ^pΓ(p), where p,θ>0, with the average delay τ=pθ.In the mathematical modeling of real world phenomena, the exact distribution of time delays is generally unavailable, and hence, general kernels may provide better results <cit.>. The analysis of models which include particular classes of delay kernels (e.g. weak Gamma kernels with p=1 or strong Gamma kernels with p=2) may reveal the more realistic effect of distributed delays on the system's dynamics, compared to discrete delays. Initial conditions associated with system (<ref>) are of the form:x_i(s)=φ_i(s),∀s∈(-∞,0], i=1,2,3,4,where φ_iare bounded continuous functions defined on (-∞,0], with values in [0,∞).§ POSITIVELY INVARIANT SETS AND EQUILIBRIUM STATESAssume that g:[0,∞)→[0,∞) is a continuously differentiable function such that there exist m_1,m_2>0such that g(0)≤m_1/m_2 andg'(t)≤ m_1-m_2g(t),∀ t≥ 0.Then, g(t)≤m_1/m_2 for any t≥ 0.From the hypothesis we easily obtain that the function G(t)=e^m_2t(g(t)-m_1/m_2) is decreasing on [0,∞).Therefore, as G(t)≤ G(0) for any t≥ 0, it follows thatg(t)≤m_1/m_2+e^-m_2t(g(0)-m_1/m_2)≤m_1/m_2,∀ t≥ 0.This completes the proof. 
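For concreteness, the Hill-type feedback functions, a Gamma delay kernel and the right-hand side of system (<ref>) in the limit of vanishing delays can be written down directly. The sketch below uses the Hill parameters adopted later in the numerical section; the kernel parameters (p = 2, θ = 10 min) and the rate constants passed to the right-hand side are illustrative inputs.

import numpy as np
from scipy.special import gamma as gamma_fn

def f1(u, c1=2.0, alpha=4, eta=1.0):    # negative feedback of CORT on CRH
    return 1.0 - eta * u**alpha / (c1**alpha + u**alpha)

def f2(u, c2=0.8, alpha=4, mu=1.0):     # negative feedback of the CORT-GR complex on ACTH
    return 1.0 - mu * u**alpha / (c2**alpha + u**alpha)

def f3(u, c3=0.8, beta=5):              # positive feedback of the CORT-GR complex on GR
    return u**beta / (c3**beta + u**beta)

def gamma_kernel(s, p=2, theta=10.0):   # Gamma delay kernel with average delay tau = p*theta
    return s**(p - 1) * np.exp(-s / theta) / (theta**p * gamma_fn(p))

def hpa_rhs_no_delay(x, k, w, xi=0.1):
    """Right-hand side of system (<ref>) when all delay kernels are Dirac deltas at 0."""
    x1, x2, x3, x4 = x
    k1, k2, k3, k4 = k
    w1, w2, w3, w4 = w
    return np.array([
        k1 * f1(x3) - w1 * x1,
        k2 * f2(x4 * x3) * x1 - w2 * x2,
        k3 * x2 - w3 * x3,
        k4 * (xi + f3(x4 * x3)) - w4 * x4,
    ])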
In the following, we denote: k_1w_1=L_1,k_1k_2w_1w_2=L_2,k_1k_2k_3w_1w_2w_3=L_3,k_4w_4=L_4.The compact setΩ=[0,L_1]×[0,L_2]×[0,L_3]×[0,(ξ+1)L_4]⊂ℝ^4_+and ℝ^4_+ are positively invariant sets for system (<ref>). Assume that (x_1(t),x_2(t),x_3(t),x_4(t)) denotes the solution of system (<ref>) with the initial condition x_i(s)=φ_i(s), s∈(-∞,0], with i=1,4, where φ_i are bounded positive continuous functions defined on (-∞,0]. From the positivity of the feedback functions it easily follows thatẋ_i(t)≥ -w_i x_i(t),∀ t>0, i=1,4and hence, the functions x_i(t)e^w_it are increasing on (0,∞). Therefore:x_i(t)≥φ_i(0)e^-w_it≥ 0,∀ t>0, i=1,4.Therefore, all positive initial conditions lead to positive solutions, i.e.ℝ^4_+ is positively invariant for system (<ref>).Moreover, assume (φ_1(s),φ_2(s),φ_3(s),φ_4(s))∈Ω for any s∈(-∞,0].From the first equation of(<ref>) and the boundedness of f_1, it follows that ẋ_1(t)≤ k_1-w_1x_1(t),∀ t>0.Using Lemma <ref>, as x_1(0)≤ L_1, we have that x_1(t)≤ L_1 for any t≥ 0. The second equation of (<ref>), the boundedness of f_2 and (<ref>) provides ẋ_̇2̇(t)≤ k_2L_1-w_2x_2(t),∀ t> 0.From Lemma <ref> it follows that x_2(t)≤ L_2 for any t≥ 0.From the third equation of (<ref>) and (<ref>) it follows that ẋ_̇3̇(t)≤ k_3L_2-w_3x_3(t),∀ t≥ 0.Lemma <ref> leads to x_3(t)≤ L_3 for any t≥ 0.The last equation of (<ref>), the boundedness of f_3 leads toẋ_̇4̇(t)≤ k_4(ξ+1)-w_4x_4(t),∀ t≥ 0,which, based on Lemma <ref>, provides the desired conclusion.Due to the fact that x_4(t) in the mathematical model (<ref>) is a non-dimensional variablerepresenting the availability of glucocorticoid receptors<cit.>, it is reasonable to demand that x_4(t)∈[0,1] for any t∈ℝ. Based on Proposition <ref>, this is guaranteed if the following inequality is satisfied: (ξ+1)L_4≤ 1.The existence of an equilibrium point of system (<ref>) is provided by the following: The equilibrium states of system (<ref>) belong to the invariant set Ω and are of the formE=(L_1f_1(x_0), w_3x_0k_3,x_0,1x_0f_2^-1(x_0L_3f_1(x_0))).where x_0∈[0,L_3] is a solution of the equationL_4(ξ+(f_3∘ f_2^-1)(xL_3f_1(x)))=1xf_2^-1(xL_3f_1(x)). From Proposition <ref> it follows that any equilibrium state of system (<ref>) belongs to the set Ω. Moreover, An equilibrium point of system (<ref>) is a solution of the following algebraic system:{[k_1f_1(x_3)=w_1x_1,;k_2f_2(x_3x_4)x_1=w_2x_2,; k_3x_2=w_3x_3,; k_4(ξ+f_3(x_3x_4))=w_4x_4, ].which is equivalent to{[x_1=L_1f_1(x_3),;x_2=w_3x_3k_3,; L_3f_2(x_3x_4)f_1(x_3)=x_3,; L_4(ξ+f_3(x_3x_4))=x_4. ].From the first two equations of (<ref>) it follows that the first two components of an equilibrium state are uniquely determined by the third component. The last two components of an equilibrium state represent a fixed point for the continuous function F:ℝ^2→ℝ^2 defined by (u,v)↦ F(u,v)=(L_3f_1(u)f_2(uv),L_4(ξ+f_3(uv)))From the boundedness properties of the functions f_i, i∈{1,2,3} it easily follows that the function F maps the convex compact set [0,L_3]×[0,(ξ+1)L_4] into itself. By Brouwer's fixed-point theorem we obtain the existence of at least one fixed point of the function F in the set [0,L_3]×[0,(ξ+1)L_4]. Therefore, system (<ref>) has at least one equilibrium state.From system (<ref>) we easily deduce (<ref>), and hence we obtain the form of the equilibrium states given by (<ref>).In the case of the minimal model of the HPA-axis, it has been shown <cit.> that there exists a unique equilibrium state. 
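The fixed-point argument in the proof translates into a simple numerical search: equilibria correspond to fixed points of the map F on the compact set [0, L_3] × [0, (ξ+1)L_4]. A minimal illustrative sketch is given below (f1, f2, f3 are feedback functions such as those defined earlier; how many fixed points are recovered depends on the chosen parameter values).

import numpy as np
from scipy.optimize import fsolve

def find_equilibria(f1, f2, f3, L3, L4, xi, n_grid=8):
    """Locate fixed points (x3, x4) of F(u, v) = (L3 f1(u) f2(u v), L4 (xi + f3(u v)))
    by root-finding from a grid of starting points; nearby duplicates are merged."""
    def G(p):
        u, v = p
        return [L3 * f1(u) * f2(u * v) - u, L4 * (xi + f3(u * v)) - v]
    found = []
    for u0 in np.linspace(1e-3, L3, n_grid):
        for v0 in np.linspace(1e-3, (xi + 1) * L4, n_grid):
            sol, info, ier, _ = fsolve(G, [u0, v0], full_output=True)
            if ier == 1 and sol[0] >= 0 and sol[1] >= 0:
                if not any(np.allclose(sol, q, atol=1e-6) for q in found):
                    found.append(sol)
    return found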
For the extended four-dimensional model (<ref>), Proposition <ref> only shows the existence of at least one equilibrium state. The presence of the positive feedback function is often associated with the coexistence of several equilibrium states <cit.>.§ LOCAL STABILITY ANALYSIS In this section, necessary and sufficient conditions for the local asymptotic stability of an equilibrium point E are provided, choosing general delay kernels. Delay independent sufficient conditions are explored for the local asymptotic stability of the equilibrium point E, which may prove to be useful if the time delays in system (<ref>) cannot be accurately estimated.By linearizing the system (<ref>) at an equilibrium point E, we obtain:{[ ẏ_1(t)=k_1f_1'(x_0)∫_-∞^ty_3(s)h_31(t-s)ds-w_1y_1(t),; ẏ_2(t)=k_2f_2(x_0 r_0)∫_-∞^ty_1(s)h_1(t-s)ds+k_1k_2w_1f_1(x_0)r_0 f_2'(x_0 r_0)∫_-∞^ty_3(s)h_32(t-s)ds+;+k_1k_2w_1f_1(x_0)x_0 f_2'(x_0 r_0)y_4(t)-w_2y_2(t),; ẏ_3(t)=k_3∫_-∞^ty_2(s)h_2(t-s)ds-w_3y_3(t),; ẏ_4(t)= k_4r_0 f_3'(x_0 r_0)∫_-∞^ty_3(s)h_34(t-s)ds+k_4x_0 f_3'(x_0 r_0)y_4(t)-w_4y_4(t). ].where r_0=1x_0f_2^-1(x_0L_3f_1(x_0)).The characteristic equation of the linearized system at the equilibrium point E is:( z+w_1)( z+w_2)( z+w_3)(z+w̃_̃4̃)+a(w_4-w̃_̃4̃)( z+w_1)H_2( z)H_34( z)++b(z+w̃_̃4̃)H_1( z)H_2( z)H_31( z)+a(z+w_1)(z+w̃_̃4̃)H_2(z)H_32(z)=0,where H_i( z)=∫_0^∞ e^- z sh_i(s)ds are the Laplace transforms of the kernels h_i, i∈{1,2,31,32,34} anda =-k_1k_2k_3w_1f_1(x_0)f_2'(x_0 r_0)r_0=-w_2w_3x_0r_0f_2'(x_0r_0)f_2(x_0r_0)>0,b =-k_1k_2k_3f_1'(x_0)f_2(x_0 r_0)=-w_1w_2w_3x_0f_1'(x_0)f_1(x_0)>0, w̃_̃4̃ =w_4-k_4x_0 f_3'(x_0 r_0)<w_4. For the theoretical analysis, we introduce the following set of inequalities:(I_0) w̃_̃4̃>0;(I_1)(w_1+w̃_̃4̃)(w_2+w̃_̃4̃)(w_3+w̃_̃4̃)≥ (w̃_̃4̃-w_1)(w̃_̃4̃-w_4)(w_1+w_2+w_3+w̃_̃4̃);(I_2)a(w_1+w_4)+b≤ (w_1+w_2)(w_2+w_3)(w_1+w_3);(I_3) aw_4w̃_̃4̃+bw_1<w_2w_3; (I_3) aw_4w̃_̃4̃+bw_1≥ w_2w_3. * If there is no time-delay and (I_0), (I_1) and (I_2) are satisfied, the equilibrium point E of system (<ref>) is locally asymptotically stable.* For any delay kernels h_i(t), i∈{1,2,31,32,34}, if (I_0) and (I_3) hold, then the equilibrium point E of system (<ref>) is locally asymptotically stable.1. In the absence of delays, the characteristic equation (<ref>) is given by:z^4+c_1z^3+c_2z^2+c_3z+c_4=0,wherec_1 =w_1+w_2+w_3+w̃_̃4̃>0, c_2 =w_1w_2+w_2w_3+w_1w_3+(w_1+w_2+w_3)w̃_̃4̃+a>0,c_3 =w_1w_2w_3+(w_1w_2+w_2w_3+w_1w_3)w̃_̃4̃+a(w_1+w_4)+b>0,c_4 =(w_1w_2w_3+b)w̃_̃4̃+aw_1w_4>0.Based on the Routh-Hurwitz stability test, it suffices to prove thatc_1c_2c_3-c_3^2-c_1^2c_4>0.From this inequality it clearly follows that c_1c_2-c_3>0.DenotingS =(w_1+w_2)(w_1+w_3)(w_2+w_3)T =(w_1+w̃_̃4̃)(w_2+w̃_̃4̃)(w_3+w̃_̃4̃)we obtainc_1c_2c_3-c_3^2-c_1^2c_4= (S-b-a(w_1+w_4))(T+b+a(w_1+w_4))++a(w_1+w_2+w_3+w̃_̃4̃)(T-(w̃_̃4̃-w_1)(w̃_̃4̃-w_4)(w_1+w_2+w_3+w̃_̃4̃)))Using inequalities (I_0), (I_1) and (I_2) it is easy to see that c_1c_2c_3-c_3^2-c_1^2c_4>0. The Routh-Hurwitz stability criterion implies that the equilibrium point E is asymptotically stable.2. 
In the presence of delays, the characteristic equation (<ref>) can be expressed asφ(z)=ψ(z),where φ and ψ are φ(z) =-( z+w_1)( z+w_2)( z+w_3)(z+w̃_̃4̃), ψ(z) =a(w_4-w̃_̃4̃)( z+w_1)H_2( z)H_34( z)+b(z+w̃_̃4̃)H_1( z)H_2( z)H_31( z)+a(z+w_1)(z+w̃_̃4̃)H_2(z)H_32(z).The functionsφ and ψ are holomorphic in the right half-plane.Considering z∈ℂ with (z)≥ 0, the properties of the delay kernels (<ref>) imply:|H_i(z)|=|∫_0^∞ e^-z sh_i(s)ds|≤∫_0^∞ |e^-z s|h_i(s)ds=∫_0^∞ e^-(z) sh_i(s)ds≤∫_0^∞ h_i(s)ds=1,for any i∈{1,2,31,32,34}. Therefore, based on inequalities (I_0) and (I_3), we have:|ψ(z)| ≤ a(w_4-w̃_̃4̃)|z+w_1||H_2(z)||H_34(z)|+b|z+w̃_̃4̃||H_1(z)||H_2(z)||H_31(z)|++ a|z+w_1||z+w̃_̃4̃||H_2(z)||H_32(z)|≤ a(w_4-w̃_̃4̃)|z+w_1|+b|z+w̃_̃4̃|+a|z+w_1||z+w̃_̃4̃|=|z+w_1||z+w̃_̃4̃|(a(w_4-w̃_̃4̃)/|z+w̃_̃4̃|+b/|z+w_1|+a)≤|z+w_1||z+w̃_̃4̃|(a(w_4-w̃_̃4̃)/w̃_̃4̃ +b/w_1+a)<|z+w_1||z+w̃_̃4̃| w_2w_3 =|z+w_1||z+w_2||z+w_3||z+w̃_̃4̃|=|φ(z)|.where the inequality |z+w|≥ w, for anyz∈ℂ with (z)≥ 0 and w>0, has been repeatedly used.Hence, the inequality |ψ(z)|<|φ(z)| is true for any z in the right half plane, and Rouché's theorem implies that the characteristic equation (<ref>) does not have any root in the right half-plane (or on the imaginary axis). Therefore, all the roots of (<ref>) are in the open left half plane, and it follows that the equilibrium E is asymptotically stable.Assume that(I_0) holds and that the delay kernels h_i(t), i∈{1,2,31,32,34} are chosen. If the equilibrium point E of system (<ref>) is unstable, Theorem <ref> implies that inequality (I_3) holds.§ BIFURCATION ANALYSISIn this section, we explore the possibility of the occurrence of limit cycles in a neighborhood of E, due to Hopf bifurcations, that reflect the ultradian rhythm of the HPA axis.For simplicity, we further assume thatH_32(z)=H_34(z)=H_1(z)H_31(z),and we denoteH(z)=H_2(z)H_32(z)=H_2(z)H_34(z)=H_1(z)H_2(z)H_31(z).We emphasize that H(z) is the Laplace transform of the convolution of h_2 and h_32:h(t)=∫_0^t h_2(s)h_32(t-s)ds,with the average time-delayτ=∫_0^∞sh(s)ds=τ_2+τ_32=τ_2+τ_34=τ_1+τ_2+τ_31,where τ_i represent the average delays of the kernels h_i, for any i∈{1,2,31,32,34}. The characteristic equation (<ref>) is( z+w_1)( z+w_2)( z+w_3)(z+w̃_̃4̃)+[a(z+w_1)(z+w_4)+b(z+w̃_̃4̃)]H(z)=0,which can be rewritten as:H(z)^-1=Q(z),whereQ(z)=-a(z+w_1)(z+w_4)+b(z+w̃_̃4̃)/( z+w_1)( z+w_2)( z+w_3)(z+w̃_̃4̃).The properties of the function Q(z) are given in the following Lemma. Assume that (I_0) holds.a. The function↦|Q(i)|=√((bw̃_̃4̃+aw_1w_4-a^2)^2+^2(a(w_1+w_4)+b)^2/(^2+w_1^2)(^2+w_2^2)(^2+w_3^2)(^2+w̃_̃4̃^2))defined on [0,∞) is strictly decreasing. b. A uniquepositive real root _0 exists for the equation |Q(i)|=1 if and only if inequality (I3) holds. c. The function Q satisfies the following inequality:(Q'(i)Q(i))>0∀ >0. To prove, a. it is easy to see that|Q(i)|^2=1/(^2+w_2^2)(^2+w_3^2)[a^2+d_1/(^2+w_1^2)+d_2/(^2+w̃_̃4̃^2)]whered_1=b^2+2abw_1(w_1+w_4)/w_1+w̃_̃4̃>0d_2=2abw̃_̃4̃(w_4-w̃_̃4̃)/w_1+w̃_̃4̃>0Therefore, ↦|Q(i)| is strictly decreasing on [0,∞), and tends to 0 as →∞. Therefore, the equation |Q(i)|=1 admits a unique positive solution if and only if |Q(0)|>1. This impliesw_1w_2w_3w̃_̃4̃<aw_1w_4+bw̃_̃4̃, which in turn, is equivalent to (I3), and b. is proved.Point c. follows from <cit.>.For the bifurcation analysis, due to the complexity of the problem, we restrict our attention to Dirac kernels and Gamma kernels. 
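The lemma reduces the question of delay-induced instability to the scalar equation |Q(iω)| = 1, which has at most one positive root since ω ↦ |Q(iω)| is strictly decreasing, and exactly one precisely when |Q(0)| > 1. A minimal sketch of the corresponding root search is given below; Q is evaluated directly from its definition, with a, b, w_i and w̃_4 supplied as inputs.

import numpy as np
from scipy.optimize import brentq

def make_Q(a, b, w1, w2, w3, w4, w4t):
    """Q(z) from the characteristic equation H(z)^{-1} = Q(z)."""
    def Q(z):
        num = -(a * (z + w1) * (z + w4) + b * (z + w4t))
        den = (z + w1) * (z + w2) * (z + w3) * (z + w4t)
        return num / den
    return Q

def omega0(Q, om_max=1.0):
    """Unique positive root of |Q(i om)| = 1, or None when |Q(0)| <= 1."""
    g = lambda om: abs(Q(1j * om)) - 1.0
    if g(0.0) <= 0.0:
        return None
    while g(om_max) > 0.0:     # |Q(i om)| is strictly decreasing, so bracket the root
        om_max *= 2.0
    return brentq(g, 1e-12, om_max)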
§.§ Dirac kernels If all delay kernels are of Dirac type: h_1(t)=δ(t-τ_1), h_2(t)=δ(t-τ_2), h_31(t)=δ(t-τ_31), h_32(t)=δ(t-τ_32), h_34(t)=δ(t-τ_34) where τ_1,τ_2,τ_31,τ_32,τ_34≥ 0 satisfy the propertyτ_2+τ_32=τ_2+τ_34=τ_1+τ_2+τ_31=τ>0,then, the characteristic equation (<ref>) becomes:e^τ z=Q(z).Choosing τ as bifurcation parameter and following the same proof as in <cit.>, we have: If inequalities(I_0), (I_1), (I_2) and (I_3)hold, considering _0>0 given by Lemma <ref> andτ_p=arccos[(Q(i_0))]+2pπ/_0, p∈ℤ^+,the equilibrium point E is asymptotically stable if any only if τ∈[0,τ_0). For any p∈ℤ^+, at τ=τ_p, a Hopf bifurcation takes place in a neighborhood of the equilibrium point E of system (<ref>).§.§ Gamma kernels If all delay kernels are of Gamma type: h_1(t)=t^p_1-1e^-t/θθ^p_1(p_1-1)!, h_2(t)=t^p_2-1e^-t/θθ^p_2(p_2-1)!, h_31(t)=t^p_31-1e^- t/θθ^p_31 (p_31-1)!,h_32(t)= t^p_32-1e^-t/θθ^p_32(p_32-1)!,h_34(t)= t^p_34-1e^-t/θθ^p_34(p_34-1)!, where θ>0 and p_1,p_2,p_31,p_32,p_34∈ℤ^+∖{0} satisfy:p_2+p_32=p_2+p_34=p_1+p_2+p_31=p≥ 2,the characteristic equation (<ref>) is:(θ z+1)^p=Q(z).Choosing θ as bifurcation parameter, as in <cit.>, the following result holds: If inequalities (I_0), (I_1), (I_2) and (I_3) hold and _p is the largest real root from the interval (0,_0) of the equationT_p(1|Q(i)|^1/p)=(Q(i))/|Q(i)|where T_p denotes the Chebyshev polynomial of the first kind of order p, consideringθ_p=1/_p√(|Q(i_p)|^2/p-1).the equilibrium point E is asymptotically stable if θ∈(0,θ_p). At θ=θ_p, system (<ref>) undergoes a Hopf bifurcation at the equilibrium point E.§ NUMERICAL SIMULATIONS The literature values of the elimination constants w_i, i∈{1,2,3} are given by w_i=ln(2)/T_i, where T_i is the plasma half-life of hormones: T_1≈ 4 min, T_2≈ 19.9 min, T_3≈ 76.4 min <cit.>. We choose w_4=0.001 min^-1 as in <cit.>.For simplicity, let η=μ=1 and hence, the considered feedback functions are: f_1(x)=c_1^α/c_1^α+x^α, f_2(x)=c_2^α/c_2^α+x^α, f_3(x)=x^β/c_3^β+x^βwith α=4 and β=5 as in <cit.>, c_1=2 ng/ml as in <cit.> and c_2=c_3=0.8 ng/ml.The normal equilibrium state E should reflect the normal mean values of the hormones: x̅^n_1=7.659 pg/ml (24-h mean value of CRH), x̅^n_2=21 pg/ml (24-h mean value of ACTH) and x̅^n_3=3.055 ng/ml (24-h mean value of free CORT) <cit.>. In accordance with <cit.>, we assume x̅^n_4=0.1. Choosing ξ=0.1, from system (<ref>) we deduce:k_1 =w_1 x̅^n_1f_1(x̅^n_3)=8.55261 pg/ml·min;k_2 =w_2x̅^n_2x̅^n_1f_2(x̅^n_3x̅^n_4)=0.09753 min^-1;k_3 =w_3x̅_3^n/x̅^n_2= 1.31985 min^-1;k_4 =w_4x̅^n_4/ξ+f_3(x̅^n_3x̅_4)=0.00092545 min^-1. For these values of the system parameters, the following equilibrium states exist:E^n=(7.659 pg/ml, 21pg/ml, 3.055ng/ml,0.1)normal state E^d=(38.425 pg/ml, 10.04pg/ml, 1.4606ng/ml,0.967)diseased state E^u=(8.3097 pg/ml, 20.495pg/ml, 2.981ng/ml,0.16)unstable stateThe low level of cortisol in the case of the equilibrium state E^d can be associated with hypocortisolism, and hence, E^d is regarded as the "diseased" state. In the non-delayed case, the normal equilibrium state E^n and the diseased equilibrium state E^d are both asymptotically stable, as inequalities (I_0), (I_1) and (I_2) are satisfied (see Theorem <ref>). 
On the hand, the equilibrium state E^u is unstable, therefore, it is not significant from the biological point of view.It is important to emphasize that for both equilibria E^n and E^d, inequality (I_3) is satisfied, which implies that when delays are introduced in the mathematical model, for sufficiently high average time delays bifurcations will occur, causing the loss of stability the E^n and E^d.As for the choice of mean time delays, firstly, as CRH travels from the hypothalamus to the pituitary through the hypophyseal portal blood vessels in an extremely short time <cit.>, we assume τ_1=0. Moreover, the human inhibitory time course for the negative feedback of cortisol on the secretion of ACTH has been described as anything between 15 and 60 min <cit.>, therefore we consider a mean delay τ_32∈(0,60]. In our numerical simulations, we additionally assume that τ_31=τ_32=τ_34. In <cit.>, a 30-min delay has been given for the positive-feedforward effect of ACTH on plasma cortisol levels, therefore, we assume τ_2∈(0,30].§.§ Dirac kernels In the case of discrete time delays, choosing the bifurcation parameter τ=τ_2+τ_32, we find the following critical values corresponding to Hopf bifurcations, based on Theorem <ref> and equation (<ref>): τ_0^n=49.8505 (min) for E^n and τ_0^d=37.8362 (min) for E^d, respectively. For τ<τ_0^d, both equilibria E^n and E^d are asymptotically stable. When τ crosses the critical value τ_0^d, a Hopf bifurcation occurs in a neighborhood of the equilibrium E^d, which causes this equilibrium to become unstable and generates an asymptotically stable limit cycle in its neighborhood. The equilibrium state E^n remains asymptotically stable whenever τ<τ_0^n. However, when the bifurcation parameter τ passes through the critical value τ_0^n, a supercritical Hopf bifurcation takes place at E^n. Numerical simulations show that for τ>τ_0^n two asymptotically stable limit cycles coexist, one corresponding to the normal ultradian rythm of the HPA axis and the other one reflecting a diseased hypocortisolic ultradian rythm. Considering τ=50 (min), the coexisting limit cycles are presented in Figures <ref>, <ref> and <ref>. §.§ Strong Gamma kernels We now consider system(<ref>) with strong Gamma kernels with the same parameter θ and p_2=p_31=p_32=p_34=2 and p_1=0. Choosing the bifurcation parameter θ, we find the following critical values corresponding to Hopf bifurcations, based on Theorem <ref> and equation (<ref>): θ_4^d=12.625 (min) for E^d and θ_4^n=18.9 (min) for E^n, respectively. As in the previous case, when θ passes one of the critical values θ_4^d or θ_4^n, a supercritical Hopf bifurcation takes place in a neighborhood of the corresponding equilibrium E^d or E^n. For θ>θ_4^n, numerical simulations show the coexistence of two asymptotically stable limit cycles, one corresponding to the normal ultradian rythm of the HPA axis and the other one reflecting a diseased hypocortisolic ultradian rythm. Considering θ=19 (min) (i.e. a total average time delay τ=76 (min)), the coexisting limit cycles are presented in Figures <ref>, <ref> and <ref>. 
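For the parameter values above, the coefficients a, b and w̃_4 of the characteristic equation follow from the equilibrium levels, and the Dirac-kernel critical delay is τ_0 = arccos[Re Q(iω_0)]/ω_0. The sketch below carries out this computation; the printed values can be compared with the critical delays τ_0^n and τ_0^d quoted above, with small differences possibly arising from rounding of the quoted inputs.

import numpy as np
from scipy.optimize import brentq

ln2 = np.log(2.0)
w1, w2, w3, w4 = ln2 / 4.0, ln2 / 19.9, ln2 / 76.4, 0.001   # elimination rates (1/min)
c1, c2, c3, alpha, beta = 2.0, 0.8, 0.8, 4, 5
k4 = 0.00092545

f1  = lambda u: c1**alpha / (c1**alpha + u**alpha)
df1 = lambda u: -alpha * c1**alpha * u**(alpha - 1) / (c1**alpha + u**alpha)**2
f2  = lambda u: c2**alpha / (c2**alpha + u**alpha)
df2 = lambda u: -alpha * c2**alpha * u**(alpha - 1) / (c2**alpha + u**alpha)**2
df3 = lambda u: beta * c3**beta * u**(beta - 1) / (c3**beta + u**beta)**2

def critical_dirac_delay(x0, r0):
    """tau_0 = arccos[Re Q(i omega_0)]/omega_0 at an equilibrium with CORT level x0
    (ng/ml) and GR availability r0, for Dirac (discrete) delay kernels."""
    a = -w2 * w3 * x0 * r0 * df2(x0 * r0) / f2(x0 * r0)
    b = -w1 * w2 * w3 * x0 * df1(x0) / f1(x0)
    w4t = w4 - k4 * x0 * df3(x0 * r0)
    Q = lambda z: -(a * (z + w1) * (z + w4) + b * (z + w4t)) / (
        (z + w1) * (z + w2) * (z + w3) * (z + w4t))
    g = lambda om: abs(Q(1j * om)) - 1.0
    if g(0.0) <= 0.0:
        return None                      # no positive root: no delay-induced Hopf bifurcation
    om_hi = 1.0
    while g(om_hi) > 0.0:
        om_hi *= 2.0
    om0 = brentq(g, 1e-12, om_hi)
    return np.arccos(np.clip(np.real(Q(1j * om0)), -1.0, 1.0)) / om0

print(critical_dirac_delay(3.055, 0.1))     # normal state E^n,   compare with tau_0^n
print(critical_dirac_delay(1.4606, 0.967))  # diseased state E^d, compare with tau_0^d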
§ CONCLUSIONS This paper presents an analysis of a four-dimensional mathematical model describing the hypothalamus-pituitary-adrenal axis with the influence of the GR concentration, considering general feedback functions (which include as a special case the Hill-type functions frequently used in the literature) to account for the interactions within the HPA axis. Due to the fact that the involved processes are not instantaneous, general distributed delays have been included. This is a more realistic approach to the modeling of the biological processes, as it takes into account the whole past history of the variables, efficiently capturing the vital mechanisms of the HPA system. The positivity of the solutions and the existence of a positively invariant bounded region are proved. It is shown that the considered four-dimensional system has at least one equilibrium state, and a detailed local stability and Hopf bifurcation analysis is given. Sufficient conditions, expressed in terms of inequalities involving the system's parameters, are found which guarantee the local asymptotic stability of an equilibrium. On the other hand, a necessary condition has also been obtained for the occurrence of bifurcations in a neighborhood of an equilibrium when time delays are present. For the Hopf bifurcation analysis, two particular types of delays have been considered, given by Dirac and Gamma kernels, respectively. Numerical simulations reflect the importance of the theoretical results. They exemplify the fact that an appropriate choice of the system's parameters leads to the coexistence of two asymptotically stable equilibria in the non-delayed case. When the total average time delay of the system passes through critical values which are computed according to the theoretical findings, the asymptotically stable equilibria lose their stability due to Hopf bifurcations and stable limit cycles are born in their neighborhoods. The coexistence of two stable limit cycles is revealed for a sufficiently large average time delay; these cycles successfully model the ultradian rhythm of the HPA axis in a normal disease-free situation and in a diseased hypocortisolism state, respectively. As a direction for future research, a fractional-order formulation of the mathematical model will be analyzed.
http://arxiv.org/abs/1709.08936v1
{ "authors": [ "Eva Kaslik", "Mihaela Neamtu" ], "categories": [ "math.DS", "q-bio.TO" ], "primary_category": "math.DS", "published": "20170926105016", "title": "Stability and Hopf bifurcation analysis of a four-dimensional hypothalamic-pituitary-adrenal axis model with distributed delays" }
Towards a new proposal for the time delay in gravitational lensing Nicola Alchera[[email protected] ], Marco Bonici[[email protected] ] and Nicola Maggiore[[email protected] ] Dipartimento di Fisica, Università di Genova,via Dodecaneso 33, I-16146, Genova, Italyand I.N.F.N. - Sezione di Genova § ABSTRACT One application of the Cosmological Gravitational Lensing in General Relativity is the measurement of the Hubble constant H_0 using the time delay Δ t between multiple images of lensed quasars. This method has already been applied, obtaining a value of H_0 compatible with that obtained from the SNe 1A,but non compatible with that obtained studying the anisotropies of the CMB. This difference could be a statistical fluctuation or an indication of new physics beyond the Standard Model of Cosmology, so it desirable to improve the precision of the measurements. At the current technological capabilities it is possible to obtain H_0 to a percent level uncertainty, so a more accurate theoretical model could be necessary in order to increase the precision about the determination of H_0. The actual formula which relates Δ t with H_0 is approximated; in this paper we expose a proposal to go beyond the previous analysis and, within the context of a newmodel, we obtain a more precise formula than that present in the Literature. Keywords: classical general relativity; gravitational lenses § INTRODUCTIONOne of the nicest consequences of the existence of symmetries in nature is General Relativity. In fact,the Einstein equationsR_μν-1/2Rg_μν+Λ g_μν=0,where R_μν and R are the Ricci tensor and the Ricci scalar, respectively, g_μν is the metric and Λ is the cosmological constant, are the equations of motion for g_μν, seen as dynamical tensor field, naturally derived from the Hilbert actionS_H=∫ d^4x√(-g)(R-2Λ),where g is the determinant of g_μν. The Hilbert action (<ref>), in turn, is the most general scalar functional, including up to second order derivatives of g_μν, invariant under diffeomorphisms of the metric g_μν δ g_μν= L_Vg_μν=∇_μ V_ν + ∇_ν V_μ,where ∇_μ V_ν is the covariant derivative of a vector field V_ν generating the diffeomorphisms.The transformations (<ref>) represent gauge transformations, whose geometrical setup is commonly exploited to obtain nontrivial results in several branch of theoretical physics, from gravity to condensed matter and AdS/CFT <cit.> As it is well known, General Relativity is, under any respect, a gauge field theory, for the gauge invariance (<ref>), with all the subtleties which this implies <cit.>. It is therefore perfectly legitimate to include General Relativity as a majestic consequence of the Symmetry Principle governing our Universe.One of the first tests of General Relativity was the effect called Gravitational Lensing (GL): the presence of a massive object, which could be a star, a black hole or a galaxy cluster (we will refer to them as lenses), deforms the spacetime in its neighborhood, causing the deflection of light. Although in this paper we will consider the deformation induced by massive objects, this is not the only possibility to deform the spacetime.This deflection generates multiple images of the source: according to the equations of General Relativity the photonsfollow different paths from the source to the observer.The deflection of light is not the only consequence of GL because if we consider two photons, emitted at the same time but following different paths, they will be observed at different times: we will call this difference time delay. 
This delay is important because it is directly related to the value of the Hubble constant, providing us a method to determine its value. As pointed out in <cit.>, there is a certain degeneracy in the determination of the cosmological parameters from the CMB <cit.> and independent measurements are important because they could break this degeneracy. In particular, the value of H_0 can be determined using the GL <cit.><cit.><cit.><cit.><cit.> , following <cit.>, orStandard Candles <cit.>; these measurements are compatible with each other but not with the one in <cit.>.In order to face this problem, there have been different proposal involving, for example, dynamical dark energy <cit.>.In order to evaluate the delay between the detection of this two photons, we should compare the flight time needed to travel the different paths from the emitting source (S) to the observer on Earth (E). To do this, we should solve the geodesic of the photons, which in general is a tough task. We will instead adopt a perturbative approach. The paper is organized as follows: * In section <ref>, in order to face the task of solving the geodesics, the delay will be split in two contributions in order to get an approximate expression, following the standard analysis. * In section <ref> we extend in an easy way the standard analysis. * In section <ref> we propose an alternative method to calculate the time delay, possibly in a more precise way. This is important because, if we will obtain an expression of the delay which refines and contains the standard one, we will strengthen the result in <cit.>.§ STANDARD ANALYSIS§.§ Basics of Gravitational Lensing We have to solve the Einstein equations (<ref>) where the role of matter is covered by the gravitational lens L. In order to do that, we will adopt a perturbative approach decomposing the metric g_μν as followsg_μν=g̅_μν+h_μνwhere g̅_μν is the background metric and h_μν the perturbation induced by the massive object. In Cosmology, the commonly used energy-momentum tensor corresponding to gravitational lenses is that of non-relativistic matter, which is parametrized as a perfect fluid T_μν=(ρ+P)U_μ U_ν+P g_μν,where the pressure P is negligible with respect to the density ρP≪ρ.hence the energy-momentum tensor in the Einstein equations for GL isT_μν=ρ U_μ U_νwhere U_μ is the4-velocity of the lens.The details of calculations can be found in <cit.>,here we will simply sketch the method and expose the main results.We are interested in the Cosmological Lensing and so we should use as background metric the Robertson-Walker (RW) metric; however we will use the Minkowski metric ds^2=-dt^2+dx^idx^jδ_ijbecause the calculations are simpler and we will be able to insert in the result the information of the cosmological expansion. In any case, as we will see later, the same results can be rigorously obtained perturbing the (flat) Robertson-Walker metric, as it should be. Using as background metric the Minkowski metric the result isds^2=-( 1+2Φ) dt^2+( 1-2Φ) dx^idx^jδ_ijwith Φ satisfying the Poisson equation∇^2Φ=4π Gρthus we can interpret Φ as the Newtonian potential associated to the lens.This result explains why we observe only two images of the source if we consider a spherically symmetric lens. 
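The statement that a spherically symmetric lens produces two images can be made concrete with the textbook point-mass lens, for which the lens equation (only cited, not written out, in this paper) reduces to the quadratic β = θ − θ_E²/θ in the image angle θ, with θ_E the Einstein angle. A minimal sketch, with purely illustrative angular values:

import numpy as np

def point_lens_images(beta, theta_E):
    """Image positions for a point-mass lens: theta^2 - beta*theta - theta_E^2 = 0
    always has two real roots, one on each side of the lens."""
    disc = np.sqrt(beta**2 + 4.0 * theta_E**2)
    return (beta + disc) / 2.0, (beta - disc) / 2.0

theta_plus, theta_minus = point_lens_images(beta=0.3, theta_E=1.0)   # in arcseconds
print(theta_plus, theta_minus)   # two images, one on each side of the lens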
In this case, the potential will be of the formΦ=Φ(r)thus the metric (<ref>) has a rotational invariance, so the angular momentum of the photon is conserved and this means that the motion of the photon is restricted to the plane individuated by the S, L and the momentum of the photon, as in the case of theSchwarzschild's geodesics. Furthermore, the equation which determines the position of the images, the lens equation which can be found in <cit.>, is a quadratic equation and thus there will be two solutions. As already anticipated in the introduction, the delay will be split in two different parts * The Shapiro delay, or potential time delay, caused directly by the motion of the light through the gravitational potential of the lens * The geometric delay, caused by the increased length of the total light path from the source to the earth. §.§ The Shapiro time delay in Minkowski metric We want to study the geodesic of a photon moving in the metric (<ref>). Following a perturbative approach, we will divide the geodesic in two parts, the background term x̅^μ and a perturbative term x^'μ[From now on we will indicate with a bar all the background quantities and with a prime the perturbed quantities.]. Then we havex^μ(λ)=x̅^μ (λ)+x^'μ (λ) where λ parametrizes the geodesic. From now on we will perform all the integrals along the background paths; this is a good approximation, as long as it is satisfied x^' i∂_iΦ≪ΦThis condition ensures that the potential along the background path does not sensibly differ from that of the real path.The equation for null geodesic isg_μνdx^μ dλdx^ν dλ = 0We will solve Eq. (<ref>) perturbatively order by order. It will be useful to define the following quantitiesk^μ≡d x̅^μ dλ l^μ≡d x^'μ dλAt zeroth order we haveη_μνdx̅^μ dλ dx̅^ν dλ = 0 which gives us the constraint-(k^0)^2 +|k⃗|^2 =0 From now on we will use the following notation|k⃗|^2=k^2At first order we have2η_μν k^μ l^ν + h_μν k^μ k^ν = 0which, using (<ref>), (<ref>) and (<ref>), becomes- k l^0 +l⃗·k⃗ = 2 k^2 ΦNow, let us consider the geodesic equationd^2 x^μ dλ^2 + Γ^μ _ρσd x^ρ dλdx^σ dλ =0where Γ^μ_ρσ are the Christoffel symbols corresponding to the metric (<ref>), which can be foundin Appendix <ref>. At order zero we haved k^μ dλ=0This means that the background trajectories are straight lines, as we expected.At first order we haved l^μ dλ= - Γ^μ _ρσ k^ρ k^σLet us consider the μ=0 componentd l^0dλ=-2 k (k⃗·∇⃗Φ)and the spatial componentsd l⃗ dλ=-2 k^2 ∇ _⊥Φ where we have introducedthe transverse gradient ∇ _⊥Φ, defined as the total gradient less the gradient along the path∇_⊥Φ≡∇Φ - ∇_∥Φ= ∇Φ- 1k^2(k⃗·∇Φ) k⃗It is worth emphasizing that evaluating the following indefinite integrall^0 = ∫dl^0dλ dλ = - 2k∫ (∇⃗Φ·k⃗)dλ == -2k ∫d x⃗̅⃗ dλ·∇⃗Φ dλ = -2k ∫∇⃗Φ· d x⃗̅⃗ = -2kΦthe integration constant is fixed demanding that l_0=0 when Φ=0. Plugging this expression in (<ref>) we obtainl⃗·k⃗ = 0which means that the two vectors are orthogonal one to each other. We can now evaluate the time delay between a photon moving in the unperturbed Minkowski metric (<ref>) and one moving in the perturbed metric (<ref>).Following <cit.>, let us consider a photon emitted in S, which is detected in E after being deflected by L (see Figure 1), in the perturbed metric (<ref>). 
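The relation Δt_1 = −2∫Φ dl can be checked for the simplest lens, a point mass with Φ = −GM/r, by integrating along the straight unperturbed path. In units G = c = 1 the integral has a closed form in terms of inverse hyperbolic sines; the following sketch compares the numerical and analytic results, with the impact parameter and path lengths chosen as arbitrary illustrative numbers.

import numpy as np
from scipy.integrate import quad

def shapiro_delay_point_mass(M, b, l1, l2):
    """Delay -2*int(Phi dl) for Phi = -M/r along a straight path with impact
    parameter b, extending l1 before and l2 after closest approach (G = c = 1)."""
    f = lambda l: 2.0 * M / np.hypot(l, b)
    numeric = quad(f, 0.0, l1)[0] + quad(f, 0.0, l2)[0]
    analytic = 2.0 * M * (np.arcsinh(l1 / b) + np.arcsinh(l2 / b))
    return numeric, analytic

print(shapiro_delay_point_mass(M=1.0, b=1e3, l1=1e6, l2=1e6))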
Having in mind that the approximate path travelled by the photon is SPE, where P is the deflection point closest to the lens L,the flight time of the photon moving in the perturbed metric ist = ∫dx^0dλ d λ = ∫(d x̅^0dλ+dx^' 0 dλ) d λ= ∫(k^0 +l^0)dλwhile the flight time of the photon moving in the unperturbed metric ist̅ = ∫d x̅^0dλ dλ = ∫ k^0 dλThe time delay between the two paths is Δ t_1= t-t̅ = ∫ l^0 dλUsing the expression already obtained for l^0 given by (<ref>) we obtainΔ t_1= - 2 k ∫Φ dλ Using the infinitesimal line element dl=kdλ we can write Δ t_1 = -2 ∫_SPEΦ dlWe stress again that the integral is done over the path SPE <cit.>. Notice that this time delay depends on the gravitational potential Φ of the lens, which therefore has the effect of reducing the effective speed of lightrelative to propagation in vacuum. In presence of two images S_1 and S_2, we have to deal with two photons travelling two distinct paths, namely SP_1E and SP_2E. Correspondingly, the total Shapiro time delay is given by <cit.>Δ t_S=Δ t_2-Δ t_1=-2( ∫_SP_2EΦ dl-∫_SP_1EΦ dl) In order to put (<ref>) in a more compact form we must introduce the angular diameter distance and the gravitational lensing potential.If we observe from a point P an object in Q of proper length l, perpendicular to PQ and with angular size θ, then we define the angular diameter distance d_A(PQ)d_A(PQ)=l/θin particular, it can be showed that in flat spacetime we haved_A(PQ)=r_PQ/1+z_Qwhere r_PQ is the radial coordinate from P to Q in a coordinate system centered in P and z_Q is the redshift of Q with respect to P; the details about the angular diameter distance can be found in <cit.>.Moreover, the gravitational lensing potential ψ is given byψ(θ⃗)≡2d_A(LS)/d_A(EL) d_A(ES)∫Φ(d_Lθ⃗,l) dlwhere we inserted the dependence on θ⃗ because the value of the angle determines the integration path, which is taken to be the spatial background geodesic in figure <ref>; it is worth emphasizing that this angles are vectors because, in general, we will not consider only planar angles but also angles in the space. Using this two quantities we can write the equation (<ref>) asΔ t_S =-2d_A(LS)/d_A(EL) d_A(ES)d_A(EL) d_A(ES)/d_A(LS)( ∫_SP_2EΦ dl-∫_SP_1EΦ dl)==-d_A(EL) d_A(ES)/d_A(LS)( ψ(θ⃗_2)-ψ(θ⃗_1)).We have not yet considered the contribution arising from the expansion of the universe. However, this can be taken into account as follows. As we can see from (<ref>) the main contribution to the integral is originated near the lens, so we can say that the Shapiro delay is originated near the lens. This means that when photons leave the region of space perturbed by the lens they have already acquired the delay given by (<ref>), then we simply have to redshift the result by (1+z_L) and we can conclude that the Shapiro time delay Δ t_S observed from the Earth is Δ t_S=-(1+z_L)d_A(EL) d_A(ES)/d_A(LS)( ψ(θ⃗_2)-ψ(θ⃗_1))where we have used the definition of redshift z a(t)=1/1+z,and a(t) is the scale factor at time t. More details about redshift can be found in <cit.>. As we will see, the same result (<ref>) can be obtained perturbing the flat RW metric, with the advantage that the redshift scaling (1+z_L) will be obtained naturally. and not put by hand as we just did here.§.§ Geometric time delay Let us calculate the geometric time delay Δ t_G. 
Using the lightlike interval and the unperturbed RW flat metricds^2=-dt^2+a^2(t) dx^idx^jδ_ijwe have∫_t_S^t_E_0dt/a(t)≡σ_SEwhere σ_SE is the proper length between Earth and the light Source, t_S is the emission time and t_E_0 is the arrival time of the photon running along the straight path. We perturbed the flat RW metric because it is compatible with the experimental result |Ω_c|<0.1 <cit.>.Now, let us calculate the flight time of the photon running along the lengthened path in the perturbed metric: we can parametrize the trajectory with two segments, one from the source to the minimum distance point P and one fromP to the Earth (see figure <ref>). Thus∫_t_S^t_Edt/a(t)=σ_SP+σ_PEWe can calculate the delay Δ t' between the two paths subtracting (<ref>) from (<ref>)∫_t_S^t_Edt/a(t)-∫_t_S^t_E_0dt/a(t)=σ_SP+σ_PE-σ_SEWe can evaluate the left hand side of (<ref>)∫_t_S^t_Edt/a(t)-∫_t_S^t_E_0dt/a(t)= ∫_t_E_0^t_Edt/a(t)≈Δt̃/a(t_E)=Δt̃where we used the observation that time delay is small compared to Hubble time, so we can consider a(t) constant, the usual normalization a(t_E)=1 and we have introduced the delay between the two photons Δt̃. In order to evaluate the proper distance it is convenient to use radial coordinates with the origin positioned on the Earth, so we can immediately writeσ_SE=∫_0^r_ESdr=r_ESσ_PE=∫_0^r_EPdr=r_EPσ_SP is not purely radial; from the geometry in figure <ref> we haveσ_SP=√(r_ES^2+r_EP^2-2r_ESr_EPcosα)We are interested in small angles, so we can perform an expansionσ_SP ≈√(r_ES^2+r_EP2-2r_ESr_EP+r_ESr_EPα^2)==(r_ES-r_EP)√(1+r_EPr_ESα^2/(r_ES-r_EP)^2)=≈ r_ES-r_EP+r_EPr_ESα^2/2(r_ES-r_EP)from which it followsΔt̃=r_ESr_EPα^2/2(r_ES-r_EP)We can use r_ES-r_EP≈ r_LS because a more precise treatment would introduce higher order corrections. Thus, we haveΔt̃=r_ESr_EPα^2/2r_LS=(1+z_L)d_A(ES)d_A(EL)α^2/2d_A(LS)where we have used (<ref>).As in the previous case, we are not interested in the delay given by (<ref>) since it is not observable, but in the delay between two photons running along different geometric paths, so we obtainΔ t_G=Δt̃_2-Δt̃_1=(1+z_L)d_A(ES)d_A(EL)/2d_A(LS)(α_2^2-α_1^2)Adding (<ref>) to (<ref>) we obtain the total delay Δ t Δ t=Δ t_S+Δ t_G=(1+z_L)d_A(ES)d_A(EL)/d_A(LS)[ (α_2^2-α_1^2)/2-( ψ(θ⃗_2)-ψ(θ⃗_1)) ]which is the same formula that can be found in <cit.>; however we want an expression which involves H_0. If we use (<ref>) we obtainΔ t =r_ESr_EL/r_LS[ (α_2^2-α_1^2)/2-( ψ(θ⃗_2)-ψ(θ⃗_1)) ]==r_ESr_EL/r_ES-r_EL[ (α_2^2-α_1^2)/2-( ψ(θ⃗_2)-ψ(θ⃗_1)) ]We will use the following relation,which can be derived using the lightlike interval and the first Friedmann equation; a complete derivation can be found in <cit.>,r_ES=1/H_0∫_0^z_Sdz'/E(z')≡ℛ(z_S)/H_0whereE(z)=[ ∑_iΩ_i0(1+z)^n_i] ^1/2Notice that ℛ(z) is written in terms of the cosmological parameters Ω_i0. If we use (<ref>), then (<ref>) becomesΔ t=1/H_0ℛ(z_S)ℛ(z_L)/ℛ(z_S)-ℛ(z_L)[ (α_2^2-α_1^2)/2-( ψ(θ⃗_2)-ψ(θ⃗_1)) ]§ AN EASY EXTENSION Studying delay we have obtained two different contributions: the Shapiro time delay, given by equation (<ref>), and the geometric time delay, given by (<ref>).When we calculated Δ t_G we made an approximation expanding (<ref>) because we neglected contributes of order 𝒪(α^3).When we calculated Δ t_S we perturbed Minkowski rather than RW metric, so we had to add manually the redshift in order to account for the expansion of the universe. 
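Equation (<ref>) can be read as a direct estimator of H_0: apart from the overall factor 1/H_0, everything on the right-hand side is fixed by the measured delay, the image angles, the reconstructed lensing potentials and the density parameters entering E(z). The sketch below inverts the formula for flat ΛCDM; all numerical inputs are invented for illustration and are not meant to reproduce any real lens system.

import numpy as np
from scipy.integrate import quad

def R(z, Om=0.3, OL=0.7):
    """Dimensionless comoving distance R(z) = int_0^z dz'/E(z'), flat LCDM, c = 1."""
    E = lambda zp: np.sqrt(Om * (1.0 + zp)**3 + OL)
    return quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def H0_from_delay(dt, zL, zS, alpha1, alpha2, psi1, psi2):
    """Invert Delta t = (1/H0) R(zS)R(zL)/(R(zS)-R(zL)) [ (a2^2-a1^2)/2 - (psi2-psi1) ].
    dt in seconds, angles in radians, psi dimensionless; returns H0 in 1/s."""
    geom = R(zS) * R(zL) / (R(zS) - R(zL))
    phase = 0.5 * (alpha2**2 - alpha1**2) - (psi2 - psi1)
    return geom * phase / dt

dt = 35.0 * 24 * 3600.0                        # a 35-day delay, in seconds (illustrative)
H0 = H0_from_delay(dt, zL=0.5, zS=2.0,
                   alpha1=2e-6, alpha2=6e-6,   # image angles of roughly 0.4" and 1.2"
                   psi1=0.0, psi2=6e-12)
print(H0 * 3.0857e19, "km/s/Mpc")              # about 70 for these made-up inputs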
In the next subsections we will show a more precise result for Δ t_G and a more rigorous calculation for the Shapiro time delay Δ t_S.§.§ The extension of Δ t_GLet us consider equation (<ref>)σ_SP =√(r_ES^2+r_EP^2-2r_ESr_EPcosα)expand the RHS we obtainσ_SP=r_ES-r_EP+r_ESr_EP/2(r_ES-r_EP)∑_k=1^+∞c_kα^2kwhere the first coefficients are reported in appendix <ref>. If we repeat the analysis of section <ref> using (<ref>) instead of (<ref>) we obtainΔ t=r_ESr_EP/2(r_ES-r_EP)∑_k=1^+∞c_kα^2kr_ES and r_EP are not observable, but we can use(<ref>) we haveΔ t=ℛ(z_S)ℛ(z_P)/2H_0(ℛ(z_S)-ℛ(z_P))∑_k=1^+∞c_kα^2kThus, the geometric time delay isΔ t_G=ℛ(z_S)/2H_0∑_k=1^+∞c_k( ℛ(z_P_2)/(ℛ(z_S)-ℛ(z_P_2))α_2^2k-ℛ(z_P_1)/(ℛ(z_S)-ℛ(z_P_1))α_1^2k)The distance between P_1 and L and between P_2 and L are small compared to cosmological scales, thus we can make the following approximationz_P_2≃ z_P_1≃ z_Lobtaining a generalization for the geometric time delay (<ref>)Δ t_G=ℛ(z_S)ℛ(z_L)/2H_0(ℛ(z_S)-ℛ(z_L))∑_k=1^+∞c_k( α_2^2k-α_1^2k)Using (<ref>) instead of (<ref>) we obtain the following expression for the total time delayΔ t=1/H_0ℛ(z_S)ℛ(z_L)/ℛ(z_S)-ℛ(z_L)[ ∑_k=1^+∞c_k( α_2^2k-α_1^2k)/2-( ψ(θ⃗_2)-ψ(θ⃗_1)) ]It is easy to check that (<ref>) includes (<ref>), which trivially coincides with the first term of the expansion.Evaluating numerically the second coefficient of the expansion in (<ref>), in the case of the quasar Q0957+561, it has been obtained that c_2 is of the order of the unity, which is good for the convergence of the series, while α is of the order of the arcsecond, i.e. 10^-5 rad, which is a typical value for quasars. Indeed, the second order contribution is smaller than the first one by a factor of 10^10; using the lenses in the CASTLES catalogue <cit.> it is not possible to detect this contribution. This shows that, in order to solve the tension about H_0, we must follow another way. §.§ The Shapiro time delay in RW metricIn <ref> we obtained the value of the Shapiro delay Δ t_S on Cosmological Scales perturbing Minkowski spacetime and adding at the result the value of the redshift of the lens. In this section we want to show a derivation of Δ t_S considering the flat RW metric (<ref>) and the RW metric perturbed by a massive object.The perturbed metric can be obtained in a similar manner to (<ref>), following the same steps (more details can be found in <cit.>)ds^2=-( 1+2Ψ(x)) dt^2+a^2(t)( 1-2Ψ(x)) dx^idx^jδ_ijwith Ψ satisfying∇^2Ψ(x)=4π Ga^2(t)ρ(x)where ρ is the energy density of the massive object. The energy density of the non-relativistic matter behaves as <cit.>ρ(x)=ρ_0(x⃗)a(t)^-3It can be useful to introduceΦ(x)≡Ψ(x)a(t)Using (<ref>) and (<ref>) we obtain thatΦ=Φ(x⃗)Plugging (<ref>) in (<ref>) we obtainds^2=-( 1+2Φ(x⃗)/a(t)) dt^2+a^2(t)( 1-2Φ(x⃗)/a(t)) dx^idx^jδ_ijwith Φ satisfying the Poisson equation (<ref>). We perturbed the flat RW metric because it is compatible with the observations (|Ω_c|<0.1).Now we will calculate the delay between a photon moving in (<ref>) and one moving in (<ref>) evaluating the integral along the path γ_1, which is the RW deformation of the minkowskian SP_1E,then we will calculate the observable delay. 
Using the lightlike interval and (<ref>) we have∫_t_S^t_E_0dt/a(t)=∫_γ_1 dlInstead, using the lightlike interval and the perturbed flat RW metric (<ref>) we have∫_t_S^t_Edt/a(t)=∫_γ_1√(1-2Φ a^-1/1+2Φ a^-1)dl≃∫_γ_1( 1-2Φ/a(t)) d lwhere in the last step we have performed an expansion in Φ/a becausein situation of cosmological interest it has a small value.Subtracting (<ref>) from (<ref>) we obtain∫_t_S^t_Edt/a(t)- ∫_t_S^t_E_0dt/a(t)=∫_γ_1( 1-2Φ/a(t)) d l-∫_γ_1 dlThe LHS of (<ref>) gives the delay between the two photons∫_t_S^t_Edt/a(t)-∫_t_S^t_E_0dt/a(t)= ∫_t_E_0^t_Edt/a(t)≈Δ t_1/a(t_E)=Δ t_1wherewe used the observation that time delay is small compared to Hubble time, so we can consider a(t) constant, and the usual normalization a(t_E)=1. Thus we obtainΔ t_1=-2∫_γ_1Φ/a(t)dlThe potential delay between two photons moving in the perturbed metric isΔ t_S=Δ t_2-Δ t_1=-2∫_γ_2Φ/a(t)dl+2∫_γ_1Φ/a(t)dlWe are not able of evaluating this integrals analytically; however we can avoid this difficulty. Let us consider two scalar functions f(x) and g(x) that have the same value on a interval Ω, except for a interval Δ x_0 around a value x_0, and a scalar function a(x) that is nearly constant in the interval Δ x_0; then, we can make the following approximation∫_Ω a(x)( f(x)-g(x))dx ≃ a(x_0)∫_Ω( f(x)-g(x))dxLet us come back to (<ref>): the Newtonian potential evaluated along two different paths will be sensibly different only in the neighborhood of the lens; in analogy with the previous example we can writeΔ t_S≃-2/a(t_L)( ∫_γ_2Φ dl-∫_γ_1Φ dl)where t_L is the time when the photon pass near the lens. Usingthe expression for the lensing gravitational potential (<ref>) andthe redshift (<ref>), Eq. (<ref>) becomesΔ t_S=-(1+z_L)d_A(EL) d_A(ES)/d_A(LS)( ψ(θ⃗_2)-ψ(θ⃗_1))which is exactly the result of (<ref>); the main advantage of this method is that we obtained the Shapiro delay Δ t_S considering the expansion of the universe ab initio because we have perturbed RW instead of Minkowski metric. In other words, the scale factor (1+z_L) comes naturally, without need of introducing it by hand as it has been done in (<ref>). § COSMOLOGICAL BORN-OPPENHEIMER APPROXIMATION FOR TIME DELAYIn section <ref> we calculated an extension of the geometric delay, showing that it does not solve the tension about H_0. This leads us to develop a different approach: we will not calculate Δ t_S and Δ t_G separately, we will calculate directly the total delay in one shot using an alternative approximation for the geodesics of the photon.§.§ The ideaOur idea is to divide the space into a region where the gravitational potential originated by the lens is negligible and another with a non vanishing gravitational potential, in close analogy with the Born-Oppenheimer approximation in non-relativistic Quantum Mechanics. It is worth emphasizing that the potential, in general, does not have to possess any symmetry because in the following we will not make any assumptions about Φ. We will approximate the photon spatial geodesicwith SQPE, as shown in fig <ref>. In particular SQ and PE are straight lines in the region with vanishing potential and QP is a curve in the region with non vanishing potential. We will calculate the flight time of the photon moving along the curve SQPE using the unperturbedflat RW metric (<ref>) only along QP, while elsewhere the perturbing effect of the lens L is taken into account by (<ref>).Let us start from the photon moving in the unperturbed metric. 
The proper length between the Earth E and the Source S is∫_t_S^t_E_0dt/a(t)=σ_SELet us consider the SQPE path, that we can divide into threeparts; using the perturbed metric (<ref>) we have∫_t_S^t_Qdt/a(t)+∫_t_P^t_Edt/a(t)+∫_t_Q^t_Pdt/a(t)=σ_SQ+σ_PE+∫_Q^P( 1-2Φ a^-1(t))dlNotice that the path from Q to P is calculated along the curved line and not along the straight line, as shown in Figure <ref>.Let us evaluate the left hand side of (<ref>);∫_t_S^t_Qdt/a(t)+∫_t_P^t_Edt/a(t)+∫_t_Q^t_Pdt/a(t)=∫_t_S^t_Edt/a(t)Instead, for the RHS of (<ref>)σ_SQ+σ_PE+∫_Q^P( 1-2Φ a^-1(t))dl=σ_SQ+σ_PE+σ_QP-2/a(t_L)∫_Q^PΦ dlSo, (<ref>) becomes∫_t_S^t_Edt/a(t)=σ_SQ+σ_PE+σ_QP-2/a(t_L)∫_Q^PΦ dlWe want to calculate the time delay between the photon moving in the perturbed RW metric and the photon moving in the background RW metric; in order to obtain this result let us subtract (<ref>) from (<ref>)∫_t_E_0^t_Edt/a(t)=σ_SQ+σ_PE+σ_PQ-σ_SE-2/a(t_L)∫_Q^PΦ dlLet us evaluate the LHS of the (<ref>): the difference between t_E and t_E0 is small compared to Hubble time, thus we can consider a(t) constant, and considering the usual normalization a(t_E)=1 we obtain∫_t_E_0^t_Edt/a(t)=t_E-t_E_0We need to evaluate the RHS of (<ref>)σ_PE=r_PEσ_ES=r_ESIn order to have an explicit expression of σ_PQ we can approximate it with an arcσ_PQ=bμwhere the angle μ and the distanceb are defined in Figure <ref>. We can obtain an expression for σ_SQ using the geometry in figure (<ref>)σ_SQ=√(r_EQ^2+r_ES^2-2r_ESr_EQcosγ)We can use Eq. (<ref>) to calculate σ_SQ, obtainingσ_SQ=r_ES-r_EQ+r_ESr_EQ/2(r_ES-r_EQ)∑_k=1^+∞c_kγ^2kPlugging all together we obtaint_E-t_E_0=r_ES-r_EQ+r_ESr_EQ/2(r_ES-r_EQ)∑_k=1^+∞c_kγ^2k+r_EP+bμ-r_ES-2/a(t_L)∫_Q^PΦ dlThe delay between the photon moving in the perturbed metric and the photon moving in the background metric ist_E-t_E_0=-r_EQ+r_ESr_EQ/2(r_ES-r_EQ)∑_k=1^+∞c_kγ^2k+r_Ep+bμ -2/a(t_L)∫_Q^PΦ dlAs in the previous cases we should consider the delay between photons running along different perturbed paths; if we defineψ_1[ Q_1P_1] ≡2d_A(LS)/d_A(EL) d_A(ES)∫_Q_1P_1Φ dlandψ_2[ Q_2P_2] ≡2d_A(LS)/d_A(EL) d_A(ES)∫_Q_2P_2Φ dlwe obtainΔ t =[ b_2μ_2-b_1μ_1] -[ (r_EQ_2-r_EP_2)-(r_EQ_1-r_EP_1) ]+-(1+z_L)d_A(EL) d_A(ES)/d_A(LS)( ψ_2-ψ_1)++[ r_ESr_EQ_2/2(r_ES-r_EQ_2)∑_k=1^+∞c_kγ_2^2k-r_ESr_EQ_1/2(r_ES-r_EQ_1)∑_k=1^+∞c_kγ_1^2k] using (<ref>) and (<ref>) we can concludeΔ t =[ b_2μ_2-b_1μ_1] +1/H_0[(ℛ(z_P_2)-ℛ(z_Q_2)) -(ℛ(z_P_1)-ℛ(z_Q_1)) ]++1/H_0∑_k=1^+∞[ ℛ(z_S)ℛ(z_Q_2)/ℛ(z_S)-ℛ(z_Q_2)( c_kγ_2^2k/2-ψ_2) -ℛ(z_S)ℛ(z_Q_1)/ℛ(z_S)-ℛ(z_Q_1)( c_kγ_1^2k/2-ψ_1) ].The expression for the time delay (<ref>) is more precise then the one obtained in (<ref>). In fact, in a certain limit, the former reduces to the latter. In order to see this, let us consider the following approximationsb_1μ_1≃ r_EQ_1-r_EP_1 b_2μ_2≃ r_EQ_2-r_EP_2 γ_1≃α_1 γ_2≃α_2 z_Q_1≃ z_Q_2≃ z_LThese approximations have a precise meaning: our proposal for the time delay (<ref>) is more accurate than the previous one (<ref>), which in turn contains the “standard” time delay formula (<ref>)because we considered a more complicated geometry, but with the previous approximations we can reduce (<ref>) to (<ref>). In fact, Plugging (<ref>), (<ref>) and (<ref>) in (<ref>) we findΔ t=1/H_0ℛ(z_S)ℛ(z_L)/ℛ(z_S)-ℛ(z_L)[ ∑_k=1^+∞( c_kα_2^2k/2-ψ_2) -∑_k=1^+∞( c_kα_1^2k/2-ψ_1) ]There is only a small difference between(<ref>) and (<ref>): ψ_1 and ψ_2 have not the same value of ψ(θ⃗_1) and ψ(θ⃗_2) due to the longer integration path of the latter. 
However, the difference is negligible because the integrand decays quickly. Therefore, we can conclude that (<ref>) is an extension of (<ref>).A remark is in order concerning the points P and Q in figure 2: the angles in figure <ref> are uniquely identified unlike the angles in figure <ref>. In other words, we could set the position of Q and P in different ways. Only after the determination of μ and γ we will be able to use (<ref>). Nevertheless, we already have some constraints: γ must be smaller than θ, while μ must be small. However, the two points P and Q in figure 2 can be determined by imposing a smooth connection (for instance a tangency condition) between the straight linesPE and SQ and the curve QP <cit.>. § CONCLUSIONSIn this paper we have studied one of the main tests of GR, the Gravitational Lensing: massive objects can modify the structure of spacetime, with the consequence that photons will not follow straight paths. This effect has a remarkable consequence: we will detect multiple images of lensed light-source, which will not be synchronized due to the different paths followed by light. In section <ref> we have divided this delay in two contributions, the Shapiro, or potential, delay and the geometric delay, which we calculated following the standard analysis <cit.>, obtaining an approximate expression, (<ref>), known in the Literature <cit.>. This formula is importantbecause it is directly related to the value of the Hubble constant H_0, so we can obtain a direct measurement of its value studying the time delay of lensed images. However, the results of the H0LiCOW collaboration <cit.> are not compatible with the measurement obtained by the PLANCK collaboration <cit.>; this tension is a strong motivation to improve the expression of time delay (<ref>). In section <ref> we studied two slightly different approaches: we developed a more rigorous treatment for the Shapiro delay and a more precise value for the geometric delay, obtainingthe time delay formula (<ref>) involving higher orders in the angles α_1,2, which identify the images of the source S. The crucial fact to notice is that it can be traced back to the Taylor series of the cosine, hence it goes like even powers of the angles.Now, it has been possible to give a preliminary estimate of the second order correction of the time delay formula (<ref>), applied to a typical source like the twin quasar Q0957+561. For this lensing phenomenon, the angular separations are of the order of one arcsecond, i.e. 10^-5 rad. Using the lens parameters, the coefficient c_2 in (<ref>) is of the order of unity.Hence, the second order correction is of the order 10^-10 which is far too small to be detected with the lenses at our disposal. For lenses with bigger angular separation (around 22 arcseconds), the second order correction reaches 10^-8, which is still too little. The important conclusion is that, at least for the lenses appearing in the CASTLES catalogue <cit.>, the standard formula (2.54) for the time delay seems to be acceptable within the actual instrumental capabilities. This even more motivates the search for an alternative formula for time delay, which goes beyond the simple expansion in powers of the angles. 
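For concreteness, the order-of-magnitude estimate quoted above can be reproduced in a few lines. The short Python sketch below uses the rounded figures of the text (|c_2| of order unity, one arcsecond taken as 10^-5 rad) and evaluates the fractional size of the k=2 term of the expansion relative to the leading k=1 term, which scales as |c_2| α^2; the two separations are illustrative values, not fitted lens parameters.

import numpy as np

c2 = 1.0                                            # |c_2| of order unity, as quoted above
for name, alpha in [("~1 arcsec", 1e-5), ("~22 arcsec", 22e-5)]:   # separations in radians
    rel = c2 * alpha**2                             # k=2 term relative to the leading k=1 term
    print(f"{name}: relative 2nd-order correction ~ {rel:.0e}")
# prints ~1e-10 and ~5e-08, consistent with the orders of magnitude discussed above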
In section <ref> we proposed a new approach: in analogy with the first Born-Oppenheimer approximation for the scattering amplitude in non-relativistic Quantum Mechanics,we considered the lens as a kind of cosmological scattering target, and consequently we divided the space in two regions: one where the gravitational potential originated by the lens is negligible, and another one, closer to the lens, where the gravitational potential is different from zero. This led to consider a more complicated geometry, which gave us the possibility to calculate the total delay in a single shot. We believe that our result represent an important improvement, because it allows to avoid the inaccuracies of the standard analysis. We also checked that the expression we have obtained for the time delay (<ref>) can be reduced to, hence includes, the known result (<ref>).In order to test the accuracy of our formula we should apply it in a real situation, obtaining an estimate of H_0; in particular, it would be of great interest the recognition of a situation where the difference between (<ref>) and (<ref>) is not negligible.AcknowledgementsIt is a pleasure to thankMarco Anghinolfi, Daniele Barducci, Gianangelo Bracco, Lorenzo Cabona, Roberta Cardinale,Alba Domi, Andrea La Camera, Davide Ricci,Chiara Righi, and Silvano Tosifor collaboration with us on this topic: most of what we have presented in this article has been motivated by illuminating discussions with them. In particular, we are indebted with Gianangelo Bracco,Alba Domi, Luca Panizzi and Silvano Tosi for applying the formula (<ref>) to the experimental data coming from the CASTLE database and finally again to Luca Panizzi for a critical and careful reading of the manuscript. Nicola Maggiore thanks the support of INFN Scientific Initiative SFT: “Statistical Field Theory, Low-Dimensional Systems, Integrable Models and Applications” . § APPENDIX §.§ Christoffel symbols The Christoffel coefficients used in <ref> areΓ^0_i0=Γ^i_00=∂_iΦ Γ^i_jk=δ_jk∂_iΦ-δ_ik∂_jΦ-δ_ij∂_kΦ §.§ Coefficients of the expansion The first coefficients appearing in the expansion present in (<ref>) arec_1=1 c_2=-(r^2_ES+r_ESr_EP+r^2_EP)/12(r_ES-r_EP)^2 c_3=r_ES^4+11r_ES^3r_EP+21r_ES^2r_EP^2+11r_ESr_EP^3+r_EP^4/360(r_ES-r_EP)^4 c_4=-r_ES^6+57r_ES^5r_EP+393r_ES^4r_EP^2+673r_ES^3r_EP^3+393r_ES^2r_EP^4+57r_ESr_EP^5+r_EP^6/20160(r_ES-r_EP)^6 9Blasi:2015lrgA. Blasi and N. Maggiore,Class. Quant. Grav.34, no. 1, 015005 (2017) doi:10.1088/1361-6382/34/1/015005 [arXiv:1512.01025 [hep-th]]. Blasi:2017pkkA. Blasi and N. Maggiore,Eur. Phys. J. C 77, no. 9, 614 (2017) doi:10.1140/epjc/s10052-017-5205-y [arXiv:1706.08140 [hep-th]]. Blasi:2011pfA. Blasi, A. Braggio, M. Carrega, D. Ferraro, N. Maggiore and N. Magnoli,New J. Phys.14, 013060 (2012) doi:10.1088/1367-2630/14/1/013060 [arXiv:1106.4641 [cond-mat.mes-hall]]. Blasi:2008gtA. Blasi, D. Ferraro, N. Maggiore, N. Magnoli and M. Sassetti,Annalen Phys.17, 885 (2008) doi:10.1002/andp.200810323 [arXiv:0804.0164 [hep-th]]. Amoretti:2013nv A. Amoretti, A. Blasi, G. Caruso, N. Maggiore and N. Magnoli,Eur. Phys. J. C 73 (2013) no.6,2461 doi:10.1140/epjc/s10052-013-2461-3 [arXiv:1301.3688 [hep-th]]. Amoretti:2017xtoA. Amoretti, A. Braggio, N. Maggiore and N. Magnoli,Adv. Phys. X 2, no. 2, 409 (2017). doi:10.1080/23746149.2017.1300509 Amoretti:2014kba A. Amoretti, A. Braggio, G. Caruso, N. Maggiore and N. Magnoli,JHEP 1404 (2014) 142 doi:10.1007/JHEP04(2014)142 [arXiv:1401.7101 [hep-th]]. Carroll:2004stS. M. 
Carroll, “Spacetime and geometry: An introduction to general relativity,” San Francisco, USA: Addison-Wesley (2004) 513 p. Efstathiou:1998xxG. Efstathiou and J. R. Bond, Mon. Not. Roy. Astron. Soc.304, 75 (1999) doi:10.1046/j.1365-8711.1999.02274.x [astro-ph/9807103]. Ade:2015xuaP. A. R. Ade et al. [Planck Collaboration], “Planck 2015 results. XIII. Cosmological parameters,” Astron. Astrophys.594, A13 (2016) doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]].Suyu:2016qxx S. H. Suyu et al.,Mon. Not. Roy. Astron. Soc.468 (2017) no.3,2590 doi:10.1093/mnras/stx483 [arXiv:1607.00017 [astro-ph.CO]]. Sluse:2016owq D. Sluse et al.,Mon. Not. Roy. Astron. Soc.470 (2017) 4838 doi:10.1093/mnras/stx1484 [arXiv:1607.00382 [astro-ph.CO]]. h0licow3 Cristian E. Rusu et al., Mon. Not. Roy. Astron. Soc.467 (2017) no.4, doi:10.1093/mnras/stx285 arXiv:1607.01047 [astro-ph.GA]. Wong:2016dpo K. C. Wong et al.,Mon. Not. Roy. Astron. Soc.465 (2017) no.4,4895 doi:10.1093/mnras/stw3077 [arXiv:1607.01403 [astro-ph.CO]]. Bonvin:2016crtV. Bonvin et al.,Mon. Not. Roy. Astron. Soc.465, no. 4, 4914 (2017) doi:10.1093/mnras/stw3006 [arXiv:1607.01790 [astro-ph.CO]]. Refsdal:1964nwS. Refsdal, “On the possibility of determining Hubble's parameter and the masses of galaxies from the gravitational lens effect,” Mon. Not. Roy. Astron. Soc.128, 307 (1964). Riess:2016jrrA. G. Riess et al., Astrophys. J.826, no. 1, 56 (2016) doi:10.3847/0004-637X/826/1/56 [arXiv:1604.01424 [astro-ph.CO]]. DiValentino:2017iww E. Di Valentino, A. Melchiorri and O. Mena,Phys. Rev. D 96 (2017) no.4,043503 doi:10.1103/PhysRevD.96.043503 [arXiv:1704.08342 [astro-ph.CO]]. defalco Schneider, P., Ehlers, J., Falco, E.E., 1992, Springer, Gravitational Lenses castles CASTLES catalogue https://www.cfa.harvard.edu/castlesWeinberg:2008zzcS. Weinberg, “Cosmology,” Oxford, UK: Oxford Univ. Pr. (2008) 593 p. progress N.Alchera, M.Bonici, N.Maggiore and L.Panizzi, work in progress.
http://arxiv.org/abs/1709.09055v1
{ "authors": [ "Nicola Alchera", "Marco Bonici", "Nicola Maggiore" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170926143540", "title": "Towards a new proposal for the time delay in gravitational lensing" }
AIP/123-QED []Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87544, USA [email protected] [email protected] Physics of Living Systems, Massachusetts Institute of Technology, 400 Technology Square, Cambridge, MA 02139, USA A “self-replicator" is usually understood to be an object of definite form that promotes the conversion of materials in its environment into a nearly identical copy of itself.The challenge of engineering novel, micro- or nano-scale self-replicators has attracted keen interest in recent years, both because exponential amplification is an attractive method for generating high yields of specific products, and also because self-reproducing entities have the potential to be optimized or adapted through rounds of iterative selection.Substantial steps forward have been achieved both in the engineering of particular self-replicating molecules, and also in characterizing the physical basis for possible mechanisms of self-replication.At present, however, there is need for a theoretical treatment of what physical conditions are most conducive to the emergence of novel self-replicating structures from a reservoir of building blocks on a desired time-scale.Here we report progress in addressing this need.By analyzing the kinetics of a toy chemical model, we demonstrate that the emergence of self-replication can be controlled by coarse, tunable features of the chemical system, such as the fraction of fast reactions or the width of the rate constant distribution. We also find that the typical mechanism is dominated by the cooperation of multiple interconnected reaction cycles as opposed to a single isolated cycle. The quantitative treatment presented here may prove useful for designing novel self-replicating chemical systems. Design of conditions for self-replication Jeremy L. England December 30, 2023 ========================================= § INTRODUCTION Emergence of self-replicators from a mixture of components is marked by exponential growth of one or more multi-component structures. This process is of great practical importance due to the possibility of exponentially fast synthesis of target structures, and also has previously been considered in models of pre-biotic chemistry <cit.>. The mechanisms that enable self-replication in a soup of metastable bound states have been investigated intensively in the past decades <cit.> and still continue to inspire new attempts <cit.>. The processes of self-replication described in these studies, though distinct, share two mechanistic elements: (a) the existence of at least one autocatalytic cycle and (b) a source of driving that runs the autocatalytic cycle. In the usual case <cit.> an autocatalytic cycle is designed by experimenters to consume one or more building blocks that are provided in excess to generate replicas of a template that is used as a seed. A significant challenge in any such case lies in devising an appropriate chemical library that limits parasitic side reactions. Theoretical approaches, meanwhile, have been most successful in the opposite regime, where the catalytic network is sufficiently densely connected, and every molecule available in the reaction pot catalyzes the production of at least one other molecule <cit.>. In such a case, it is possible to formulate general criteria for the onset of positive feedback loops in the catalytic reaction network that lead to the exponential growth of the molecules in those loops. 
Thus, although it is qualitatively understood that robust self-replication requires sufficient catalytic promiscuity that somehow avoids excessive side reactions, there is need for a quantitative treatment of this tradeoff in a physical model that may provide future guidance for the design of conditions conducive to the spontaneous emergence of self-replicators from customizable mixtures of nano- or microscale components <cit.>.Therefore, we sought to investigate a toy model where all possible stoichiometric combinations of certain building blocks are considered in the construction of an effective model of a “chemical" space . Using this model (Fig. <ref>), we lay out general conditions for the emergence of exponential growth in systems without explicit catalysis.Interestingly, we find that the typical mechanism for the emergence of self-replicators occurs via a multi-cycle topological element in the reaction, and therefore violates previously established quantitative criteria for self-replication that were developed assuming that self-replication occurs through isolated autocatalytic cycles <cit.>. § MODEL §.§ Toy chemical system We undertook to model a large, well-mixed reaction pot with diverse possible combinations of monomers.We call these monomers “atoms" here because we eventually plan to model the dynamics of their bound states using thermodynamically consistent mass-action kinetics, but it should not be imagined that we intend exclusively or even principally to describe real molecular chemistry using the model presented here.Rather, the essence of the “chemical space" constructed is that it is a vast space of diverse combinations among physical interacting components such as polymer-coated colloidal particles or DNA origami(Fig. <ref>A). In our model, two or more atoms interact with each other to form a bound state, which we call a “molecule." For simplicity, we assume that the molecules do not have any internal structure and all the atoms inside a molecule interact with all other atoms in that molecule with interaction energies ϵ_BB, ϵ_BG, ϵ_GG (Fig. <ref> B).Since the molecules do not have any internal structure, their free energies are completely determined by their composition and the three ϵ parameters. Also, we assume that each molecule contains at most μ_max atoms, and forbid all other bound states. Except where it is explicitly mentioned, we set μ_max = 4.With these two assumptions it can be shown that there are fourteen distinct molecules in the model with two types of monomers(Fig. <ref> C) . The molecules take part in reactions that involve one molecule donating an atom to the surrounding medium or to another molecule. We call the former a dissociation reaction and the latter a bimolecular reaction (Fig. <ref> D). The reactions are activated processes and the rate constant of a given reaction that takes the reactant state ito product state j, is inversely proportional to the exponential of the barrier height: k_ij∝exp(- B_ij). The activation barriers B_ij are either chosen randomly or using a model of the transition state. We refer to the latter as mechanistic model. In the mechanistic model, B_ij= F^Tr_ij - F_i, where F_i is the free energy of the reactant state and F^Tr_ij is the free energy of the transition state. F_i is determined from the interaction energies. To calculate F^Tr_ij, we assume that during a reaction, the donated atom first goes to an excited state, where it interacts with other atoms in the donor molecule through a weakly repulsive interaction (Fig. 
<ref> E) that is proportional to the ground state interaction energy. The proportionality factor c_0 = -0.1 is same for all three interaction energies and is a parameter of the model. The results described is robust with variation in c_0, as long as ϵ_** < 0c_0 < 0. The resulting toy “chemistry" generates a full system of rate equations with mass-action kinetics governing the concentrations of different allowed molecules. There is no explicit catalysis or autocatalysis in this system at the level of a single reaction, but catalytic and autocatalytic cycles appear naturally in the reaction network (defined in the next section) due to coupling between different reactions. In what follows, we explicitly solve this set of equations in two instances of the model with one and two types of atoms. We investigate the resultant transient kinetics of molecular concentration to identify conditions necessary for the persistence of one or more autocatalytic cycles that drive exponential growth of a subset of the molecules. §.§ Reaction network §.§.§ Coupled reaction graph In our model, the product of various reactions acts as reactants to other reactions. For example in the following two reactions one of the products of r1, B is used as a reactant in r2. r1:BG + B_2 → B + B_2Gr2:B + B_2 → B_3 Hence, r1 is coupled to r2. We graphically represent this relationship by constructing a directed graph, whose nodes are the reactions r1 and r2 and which has a directed edge from r1 to r2 (Fig. <ref>A). The graphical representation of all the 180 reactions in our model is shown in Fig. <ref>B. Three reaction motifs are usually found in the reaction network: catalytic cycles, autocatalytic cycles, and lossy side reactions. §.§.§ Network motifs Catalytic cycles (CC) Consider the reactions r1 and r3 in Fig. <ref>A. Both of them have a directed edge from one to the other. Hence, if by some process r1 and r3 runs in sequence for some time, then the net output will be the production of B and B_2G_2 from B_2 and BG_2, catalyzed by BG and B_2G. It is easy to show that other cycles, such as r1→r2→r4→r1 andr2→r4→r2 are also catalytic cycles. In fact, any cycle in the reaction graph defined here is a catalytic cycle. Autocatalytic cycles (ACC) A subset of the catalytic cycles have a special property that at least one of the catalyst molecules is produced in excess. That is the catalyst molecule catalyzes its own production. We refer to such cycles as autocatalytic cycles. For example, it is easy to see that r2→r4→r2 is an autocatalytic cycle, because B_2 catalyzes its own production. Lossy side reactions In a complex reaction network, such as ours, it is likely that reactions are coupled to more than one reactions. Therefore, quite often, the function of an autocatalytic cycle is hindered by the presence of parasitic side reactions that couple to one of the reactions in the autocatalytic cycle and usurp the resources required to drive the cycle. For example, r1 is a lossy side reaction for the autocatalytic cycle r2→r4→r2. As can be seen in Fig. <ref>A, lossy reactions need not be an isolated reaction. Often, they are part of another catalytic or autocatalytic cycles. When it is part of another autocatalytic cycle, the parasitism is equivalent to competition between two autocatalytic cycle. 
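The construction of the coupled reaction graph and the classification of its cycles can be made concrete with a short sketch. In the snippet below, r1 and r2 are the reactions written above; r4 is only a plausible stand-in (its exact form appears in the figure, not in the text), chosen so that the cycle r2→r4→r2 regenerates B_2 in excess as described. A cycle is flagged autocatalytic when some species consumed within the cycle has a positive net production over one turn.

from collections import Counter
import networkx as nx

reactions = {   # (reactant stoichiometry, product stoichiometry)
    "r1": (Counter({"BG": 1, "B2": 1}), Counter({"B": 1, "B2G": 1})),
    "r2": (Counter({"B": 1, "B2": 1}),  Counter({"B3": 1})),
    "r4": (Counter({"B3": 1, "B": 1}),  Counter({"B2": 2})),  # hypothetical stand-in
}

# Coupled-reaction graph: a directed edge i -> j whenever a product of i is a reactant of j.
G = nx.DiGraph()
G.add_nodes_from(reactions)
for i, (_, prod_i) in reactions.items():
    for j, (reac_j, _) in reactions.items():
        if i != j and set(prod_i) & set(reac_j):
            G.add_edge(i, j)

# Every directed cycle is a catalytic cycle; it is autocatalytic if one of the
# species it consumes is produced in excess over one turn of the cycle.
for cycle in nx.simple_cycles(G):
    net, consumed = Counter(), set()
    for r in cycle:
        reac, prod = reactions[r]
        consumed |= set(reac)
        net.update(prod)
        net.subtract(reac)
    label = "autocatalytic" if any(net[s] > 0 for s in consumed) else "catalytic"
    print(cycle, label, dict(net))

With these three reactions the sketch recovers the behaviour described above: the cycle r2→r4→r2 is flagged autocatalytic (net production of B_2), while the three-reaction cycle r1→r2→r4 is only catalytic.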
§ CONDITIONS FOR SELF-REPLICATION The physico-chemical conditions required for self-replication is very different in an interacting chemical system, such as ours, than for isolated autocatalytic cycles which have been studied theoretically and experimentally over the last few decades. Prior work has indicated that the kinetic dominance of reactions can be quantified through a measure called specificity. It has been shown that for any cycle, the product of the specificity, which we call cycle-specificity for the sake of brevity, has to be greater than 0.5 for a reaction cycle to run. However, this result is incomplete. As we show here, even for an isolated autocatalytic cycle, other conditions have to be met for self-replication to take place. Furthermore, self-replication in an interacting system can happen even when the cycle-specificity of all the autocatalytic cycles is orders of magnitude lesser than 0.5, requiring a fresh search for the conditions required for self-replication. To establish these conditions, we study kinetics of simple network motifs that are outlined in Fig. <ref>. These are by no means the exhaustive list of network motifs that lead to self-replication, but these are the simplest ones to study. We summarize the necessary conditions for self-replication for these motifs below. The derivation of these condition is described in SI. The sufficient condition for self-replication is the union of all the necessary conditions. Scheme 1: Isolated ACC For isolated ACCs, the cycle-specificity has to be greater than 0.5, in agreement with previous results. However, additionally, the chemical current (see Materials & Methods for definition) for all the reactions have to be greater than zero and increasing function of time. Scheme 2 and 3: For scheme 2, no exponential growth occurs unless the specificity of ACC is greater than 0.5. For scheme 3, it is possible to observe exponential growth as long as one of the ACC has specificity greater than 0.5. Scheme 4 and 5: It is difficult to write a simple closed expression for the condition required for exponential growth. However, under these two schemes, it is possible to observe exponential growth even when both cycles have specificity lower than 0.5. The specificity distribution required for these two schemes is listed in Table <ref>. § COARSE CONTROL OF EXPONENTIAL GROWTH The fundamental goal of this paper is to understand how these reaction motifs come to dominate the kinetics and give rise to different types of concentration growth. For example, if the kinetics is dominated by autocatalytic cycles, we expect to observe exponential growth, whereas if the the kinetics is dominated by uncoupled reactions, then we expect linear growth. It is to be noted that growth is a strictly transient behavior of the underlying rate equations, which is governed by the topology of the coupled reaction graph and the instantaneous rates of the reactions. Therefore, through a suitable choice of reaction library, which determines the topology, and rate constants, which determine the instantaneous rates, it is possible to manipulate the influence of various motifs on the reaction kinetics. These facts are well known and have been used qualitatively to design small chemical systems that permit near-exponential growth of molecular concentrations <cit.>. However, such qualitative knowledge is of little use when large chemical systems with hundreds, if not thousands, of reactions need to be designed for self-replication. 
To design a chemical network of such complexity, quantitative relationship between the rate constants and the transient behavior of the reaction network need to be established. Unfortunately, it is impractical to explore the parameter space of the rate constants to establish such a relationship due to the cost involved with exploring the parameter space, which may be thousand dimensional. We therefore need to establish the required quantitative behavior using coarse (macroscopic) features of the rate constants, for example, in increasing order of coarseness, (a) Protocol PF:the fraction of fast reactions, (b) Protocol CD:the width of the rate constant distribution, or (c) Protocol IE:the interaction energies between the atoms. Due to our interest in self-replication, we only focus on the emergence of exponential growth and establish quantitative criteria using these parameters. §.§ PF: Fraction of fast reactions The most theoretically accessible case arises when all the interaction energies are zero and the rate constants are chosen in such a way that a controllable fraction, p_fast, of the reactions may occur, and the rest are effectively forbidden. To implement such a system, we identified the set of all reactions permitted by stoichiometry and drew the random barriers for the reactions from the binary set {0, ∞}, corresponding to rate constants of 1 or 0. The fast reactions, with rate constants 1, were assigned with a probability p_fast. To ensure detailed balance conditions, the barriers for the forward and the reverse reactions were set to be equal. As we discuss later in this section, p_fast can be mapped to the dispersion of the rate constant distribution, with p_fast≈ 1 corresponding to narrow and p_fast≈ 0 to broad distributions. Under these assumptions, the probability of self-replication, p_sr, can be estimated (SI) as a function of p_fast. Self-replication occurs if and only if at least one autocatalytic cycle in the reaction network has direct and exclusive access to its fuel (Fig. <ref>A). Hence, p_sr can be calculated from (a) the probability of finding at least one autocatalytic cycle with direct access to its fuel, p_acc(p_fast) and (b) the probability that all autocatalytic cycles have side reactions, p_loss(p_fast). Whence, for p_fast = x: p_sr(x) = p_acc(x)×(1 - p_loss(x)). As Fig. <ref>A-B shows, self-replication generally sets in spontaneously when a reaction network has a specific level of complexity dictated by the trade off of the two different competing percolation transitions, p_acc and p_loss – the first of which determines whether there are enough fast reactions to ensure existence of at least one driven autocatalytic cycle, and the second of which determines whether reactions are so promiscuously coupled that every cycle is drained by numerous side reactions. Due to this trade off, an optimal p_fast exists at which p_sr is maximized. Simply stated, this result implies that emergent self-replication occurs with high probability when there are enough autocatalytic cycles and no parasitic reactions: a result that is qualitatively well-known <cit.> and perhaps unsurprising. More surprisingly, however, our quantitative treatment shows that this optimality depends only on the reaction network topology (through p_fast and the randomized graph connectivity) and should be relatively insensitive to the specific rate constant distribution. Therefore, as long as p_fast can be tuned to its optimal value, exponential growth will emerge in a large network with certainty. 
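The trade-off behind p_sr(x) = p_acc(x)×(1 - p_loss(x)) can be visualized with toy functional forms. The saturating curves used below are purely illustrative assumptions (they are not the forms derived in the SI): p_acc grows with the fraction of fast reactions roughly like the chance that at least one short cycle is entirely fast, while 1 - p_loss decays as side reactions become fast as well, so that their product has an interior maximum at an optimal p_fast.

import numpy as np

M, K = 40, 8                             # hypothetical numbers of candidate cycles / side reactions
x = np.linspace(0.0, 1.0, 501)           # fraction of fast reactions, p_fast
p_acc  = 1.0 - (1.0 - x**3) ** M         # >= 1 fully fast (3-step) driven cycle exists
p_loss = 1.0 - (1.0 - x) ** K            # every such cycle acquires a fast parasitic drain
p_sr = p_acc * (1.0 - p_loss)
print("optimal p_fast ~ %.2f, max p_sr ~ %.2f" % (x[np.argmax(p_sr)], p_sr.max()))

Qualitatively this reproduces the behaviour described above: with too few fast reactions no driven autocatalytic cycle exists, while with too many every cycle is drained by side reactions.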
What remains now is to determine whether a quasi-randomly connected network is a suitable approximation to real chemical network, and if so, how then may we tune the effective value of p_fast to its optimal value? §.§ CD: Width of the rate constant distribution A first and simplest hypothesis is that the p_fast can be tuned to optimality by the dispersion of the rate constants. To demonstrate this, we chose the activation barriers from exponential distributions with varying amount of coefficient of dispersion (variance/mean), c_d, while keeping the interaction energy zero. In the first set of studies, we numerically solved the equations until concentrations reached steady state (t_obs =∞). From the obtained time-series of the molecular concentrations, we found their growth exponent γ (M&M). If γ = 1, the corresponding concentration grows exponentially. If γ < 1, the concentration grows subexponentially. The probability of exponential growth, p_sr, was determined by finding Prob(γ > 0.99). Under this protocol, when the distribution was too narrow ( c_d < 10 kT in Fig. <ref>C), the molecules never grew exponentially. However, when the distribution was broader, the probability of exponential growth, p_sr, increased with c_d, eventually saturating at a value that is dependent on the underlying reaction network (Fig. <ref>C). §.§ IE: Interaction energy In most experiments, it is easier to control the interaction energies of the building blocks (atoms) than the rate constant distribution of the generated reaction network. Therefore, our theoretical results will be useful if and only if it can be established that the simplifying assumption of a quasi-random chemical network connectivity is effectively valid for more realistic models in which reaction rate kinetics are determined by underlying physical quantities such as interaction energies between components. We therefore sought next to analyze a “mechanistic model" in which the activation barriers of the reactions are obtained by assuming a transition state model of the reaction kinetics (Fig. <ref>E). The energies of the ground and the transition states are determined by the interaction energies of the atoms (SI), which are allowed to form clusters of up to four members. Therefore, the dispersion of the rate constants can be controlled by changing the interaction energies. Typically, stronger interaction energies correspond to broader distributions of rate constants. Hence, as per our results from protocol CD, we expect to observe exponential growth when the atoms interact strongly with each other. As Fig. <ref>D shows, that is indeed the case. Detailed exploration of the interaction energy space shows that this analogy is rigorous (Fig. <ref>) and these three protocols are potentially equivalent to each other. § EQUIVALENCE OF CONTROL PROTOCOLS The three protocols described here impose macroscopic control on the reaction kinetics through the rate constants. Although motivated by related physical intuitions, these ensembles of reaction graphs do differ in their microscopic statistics, and it is important to ask whether they ultimately succeed in generating self-replicators for the same underlying topological reasons. Therefore, we sought to understand the modes of self-replication that each of these protocols employs. As Fig. <ref> shows, the dominant modes of self-replication are, perhaps surprisingly, scheme 4 and 5 and schemes 1-3 were absent from all three protocols. 
Although surprising, this result is similar to previous experiments  <cit.>, where isolated ACCs were superseded by cooperative CCs as the main mode of self-replication. Furthermore, the equivalence between the three protocols indicates that the topology of the coupled reaction network plays more important role in determining the transient behavior than the rate constants. To understand how the choice of the coupled-reaction graph may influence the transient growth behavior, we investigate the outcome of protocol PF under various choices of the underlying coupled-reaction network. The analysis is described in detail in the SI. Here, we describe the set up of the problem. Let's consider a reaction network with N reactions that are coupled with each other with probability p. Furthermore, let's assume that a fraction f_d of the N reactions are doubling reactions (reaction of the type: A + B → 2C). Then, the number of 2-step isolated ACC (scheme 1), scales as: n_1∼ (N - Nf_d)Nf_dp^2 Similarly, n_4∼1/2 (N - Nf_d)^2 Nf_d p^4 n_5∼1/6 (N - Nf_d)^3 p^4 It is easy to show from Eq. <ref>-<ref> that n_1 is larger than n_4 if p < √(2(1+f_d))/N, and n_1 is larger than n_5 if p < √(6f_d(1+2f_d))/N. Both of these probabilities are incidentally smaller than the average p for our system, which is roughly 2/√(N). Therefore, purely by numbers, schemes 4 and 5 are more likely over schemes 1-3. However, as we have stated earlier, self-replication occurs only when the specificities of the reactions in a given motif satisfy the required conditions. For schemes 1-3, the specificity of the cycle has to be greater than 0.5 or, on average, the specificities of the reactions comprising the ACCs has to be greater than 1/√(2)≈ 0.71. On the other hand, the the conditions for schemes 4 and 5 are much more lenient, as can be verified from Table <ref>. To estimate the likelihood of meeting these conditions, we estimate the probability distribution of the specificities (SI). Under the assumption that the propensities for various reactions are distributed as ρ_p(x) ∼ x^νexp(-λ x), the pdf of the specificity σ, follows the distribution described in Fig. <ref>. It is evident from the pdf that one is hardly likely to find reactions with specificities higher than 0.71. On the other hand, one is quite likely to find reactions with specificities less than 0.5, which can satisfy the conditions required for schemes 4 and 5. Furthermore, despite the differences in the choice of the rate constants the specificity distribution from the three protocols are statistically identical to the theoretical approximation. Therefore, structural identity of the coupled reaction graph as well as the statistical similarity of the specificity distribution is the origin of microscopic equivalence between the three different protocols. § DISCUSSION In this paper, we have developed and investigated a model chemical system, where the constituent chemicals interact with each other through stoichiometric reactions. We have solved this model under three different protocols that impart different levels of macroscopic control over the rate constant distribution of the reactions. We have found out that despite the macroscopic differences, the microscopic kinetics responsible for self-replication is same for all three protocols. In all three protocols, self-replication occurs due to the proliferation of coupled catalytic cycles and not due to isolated autocatalytic cycles, a result similar in spirit to an earlier experiment <cit.>. 
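To make the counting argument of the previous section concrete, the estimates for n_1, n_4 and n_5 can be evaluated directly. In the sketch below, N follows the text (180 reactions) and p is taken at the typical value 2/√N quoted above, while the fraction of doubling reactions f_d is a hypothetical placeholder; the point is simply that at the typical coupling probability the multi-cycle motifs of schemes 4 and 5 outnumber isolated two-step autocatalytic cycles, whereas much sparser coupling favours the isolated cycle.

import numpy as np

N, f_d = 180, 0.2                        # N from the model; f_d is a placeholder value
nd, ns = N * f_d, N * (1.0 - f_d)        # numbers of doubling / non-doubling reactions

for p in (2.0 / np.sqrt(N), 0.2 / np.sqrt(N)):   # typical vs. much sparser coupling
    n1 = ns * nd * p**2                  # isolated 2-step ACC (scheme 1)
    n4 = 0.5 * ns**2 * nd * p**4         # scheme 4
    n5 = ns**3 * p**4 / 6.0              # scheme 5
    print("p = %.3f:  n1 ~ %.2f, n4 ~ %.2f, n5 ~ %.2f" % (p, n1, n4, n5))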
Furthermore, we have also shown that the criteria for self-replication from the proliferation of an isolated autocatalytic cycle is very different from the criteria for the self-replication of coupled catalytic cycles. In fact, cycle specificity, a well-known metric, can be much less than 0.5 and still the molecules involved can still grow exponentially, in complete violation of the criteria established previously <cit.>. In the light of the results described here, future design of self-replicating systems should focus on developing chemical environment conducive for the proliferation of coupled catalytic cycles as opposed to isolated autocatalytic cycles, since the former can survive even when the reactions are not very specific. Creating such an environment through microscopic tuning of the rate constants, by no means, is easy. However, as we have shown here, it is possible to control coarse features of the chemical network, such as the width of the rate constant distribution, or the interaction energies between the building blocks to achieve the same goal easily. Many factors may affect the viability of these design conditions. Firstly, in this paper, we have chosen to report the behavior of the model in a regime in which the supply of the resources is not a limiting factor. In simulations with limited resources, however, exponential growth can be hindered if the system reaches chemical equilibrium before the onset of the exponential growth, consistent with previous studies <cit.>. Secondly, we have focused implicitly on the regime of a large and dilute reaction pot where mass-action kinetics applies.Of course, in any real reactor, the finite total number of particles would lead to small number noisiness in the early emergence and growth of self-replicators that come about from bound states that are initially at low concentration or totally absent.This means that our results most likely to apply in settings where the components feeding autocatalytic cycles are not themselves difficult to form rapidly from promiscuous reactions among components present in the initial condition.Finally, it is certain that topological quantities other than p_fast also can play an important role in determining the likelihood of self-replication. For example, the edge degree distribution of the coupled-reaction graph, which is nearly uniform here, is an important determinant of the reaction kinetics.However, for the purpose of clarity and brevity, we postpone this discussion for the future. We would like to thank J. Horowitz, P. Chvykov and other members of the England group for extensive discussion and critical evaluation of the work. Additionally, SS would like to thank B. Chakraborty, P. Mehta, K. Ramola, A. Narayanan, and N. Pal for stimulating discussions that led to the core results of this paper. This work was funded by grants from John Templeton Foundation through grant 55844 and the Gordon and Betty Moore Foundation through grant GBMF4343. JLE is also supported by a Scholar Award (220020476) from the James S. McDonnell Foundation. * § MATERIALS AND METHODS §.§ Numerical solution of differential equations We solved the systems of reactions assuming mass action kinetics. The concentrations of B and G were kept constant at 1, whereas the other molecules were initialized with concentration 0. We solved the resultant systems of differential equation with ODE23tb, a stiff solver in matlab. The simulations were run until the system reached chemical equilibrium. 
Due to the stiffness of the differential equations, the solution sometimes failed to reach chemical equilibria during the runtime of the code, but it did not affect the growth regime. Hence, all the results reported here are unaffected by this limitation of the numerical algorithm. §.§ Useful thermodynamic quantities: Propensity or rate is the product of the rate constant of a reaction and the concentration of the reactants raised to appropriate power. For example, for a reaction: A + B -> C + D with rate constant k_+, and obeying mass action kinetics, the propensity is k_+[A][B], where [X] denotes the concentration of the reactant X. Chemical Current:Denoted J, it is the difference between the propensities of the forward and reverse reactions of a reversible reaction. For example, for the reaction described earlier, J = k_+[A][B] - k_-[C][D]. §.§ SpecificityDenoted here as σ. The specificity is the ratio of the propensity of a given reaction to sum of the propensities of all reactions that consume the resources required for the given reaction, including itself <cit.>. Mathematically, if π_i is the propensity of reaction i, then σ = π_i/π_i + ∑_j∈𝒞π_j , where 𝒞 is the set of all parasitic reactions that consume the resources required for reaction i. C = |𝒞| is the number of such parasitic reactions. The cycle specificity is the product of the specificities of the reactions in the cycle. In previous works <cit.>, specificity was defined strictly for completely irreversible reactions. Therefore, its definition has to be modified for our system, where the reactions are reversible. We have found out that if the chemical current for a reaction is negative it does not contribute to the calculation of the specificity. Therefore, to measure specificity, we have only used reactions whose chemical current is positive. Furthermore, often the concentrations of molecules span several orders of magnitude. Some of them may reach very close to their equilibrium concentration much before other molecules. Under such condition, the concentration of these molecules are unaffected by the consumption of various reactions. As a result, we have ignored any parasitic reaction that consume these molecules from our calculation of specificity. §.§ Growth exponent At any given instant, t, the instantaneous growth rate of the concentration, dc(t)/dt, is a simple algebraic function of the concentration, c(t). Formally, dc/dt = rc^γ, where γ is the growth exponent and r is a proportionality constant. For exponential growth γ = 1, for power law (subexponential) growth 0 < γ < 1, and for linear growth γ = 0.When the concentration grows exponentially (γ = 1), r is equal to the exponential growth rate constant. In a typical timeseries, γ varies with time. Therefore, to assess the occurence of exponential growth, in this paper, we measure and report only the maximum value of γ over a timeseries, also referred to as γ. §.§ Estimate of p_fast To estimate p_fast from the time series of the molecular concentrations, we find the fraction of reactions whose propensities are within 10% of the propensity of the reaction with fastest propensity. This is a heuristic definition and we have found out that the result does not change as long as it varies between 1-20%. For smaller values, the quantitative result changes, but qualitative result remains the same. §.§ Random sampling We sampled 100 different configurations for each random activation barrier ensemble. 
To estimate p_sr in Fig. <ref>, we binned the scatter plot into different parameter values (c_d or p_fast). Bins with fewer than five data points were ignored.
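The measurement pipeline described in this section can be illustrated end-to-end on a minimal example. The sketch below integrates a two-species mass-action system (a stand-in for the full molecular network, with the monomer B held at unit concentration as in the simulations and hypothetical rate constants), uses a stiff integrator in place of ODE23tb, and then extracts the instantaneous growth exponent γ from the time series after the initial transient; a maximal γ close to 1 signals exponential growth.

import numpy as np
from scipy.integrate import solve_ivp

k2, k4 = 1.0, 0.5          # hypothetical rate constants for r2 and r4
B = 1.0                    # monomer concentration held fixed at 1, as in the simulations

def rhs(t, y):             # mass-action kinetics for [B2], [B3] with B buffered
    b2, b3 = y
    v2 = k2 * B * b2       # r2: B + B2 -> B3
    v4 = k4 * B * b3       # r4: B3 + B -> 2 B2  (hypothetical, as above)
    return [-v2 + 2.0 * v4, v2 - v4]

sol = solve_ivp(rhs, (0.0, 30.0), [1e-6, 1e-6], method="BDF", dense_output=True)
t = np.linspace(5.0, 30.0, 400)                     # evaluate after the initial transient
c = sol.sol(t)[0]                                   # [B2](t)
gamma = np.gradient(np.log(np.gradient(c, t)), np.log(c))   # d ln(dc/dt) / d ln c
print("max growth exponent gamma ~ %.2f" % np.nanmax(gamma)) # ~1: exponential growth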
http://arxiv.org/abs/1709.09191v3
{ "authors": [ "Sumantra Sarkar", "Jeremy L. England" ], "categories": [ "cond-mat.soft", "cond-mat.stat-mech", "physics.bio-ph" ], "primary_category": "cond-mat.soft", "published": "20170926180115", "title": "Design of conditions for emergence of self-replicators" }
language=R, basicstyle= , numbers=left,numberstyle=, stepnumber=2, showspaces=false,showtabs=false,frame=single,rulecolor=, tabsize=2, captionpos=b,breaklines=true, breakatwhitespace=false, keywordstyle=, commentstyle=,stringstyle=,backgroundcolor=, systemEq {.mydefDefinitiontheoremNoiseTheorem #11_{#1}#1||#1||_p*firstlawPremière loi du réseau électrique*secondlawSeconde loi du réseau électriqueIntroducing machine learningfor power system operation support Benjamin DONNOT^ ^+†, Isabelle GUYON^*+, Marc SCHOENAUER^+,Patrick PANCIATICI^†, Antoine MAROT^†*UPSud Paris-Saclay, +INRIA ^LRI, Laboratoire de Recherche en Informatique ^†RTE R&D January 2017 ============================================================================================================================================================================================We address the problem of assisting human dispatchers in operating power grids in today's changing context usingmachine learning, with the aim of increasing security and reducing costs. Power networks are highly regulated systems, which at all times must meet varying demands of electricity with a complex production system, including conventional power plants, less predictable renewable energies (such as wind or solar power), and the possibility of buying/selling electricity on the international market with more and more actors involved at a European scale. This problem is becoming ever more challenging in an aging network infrastructure.One of the primary goals of dispatchers is to protect equipment (e.g. avoid that transmission lines overheat) with few degrees of freedom: we are considering in this paper solely modifications in network topology, i.e. re-configuring the way in which lines, transformers, productions and loads are connected in sub-stations. Using years of historical data collected by the French Transmission Service Operator (TSO) “Réseau de Transport d'Electricité" (RTE), we develop novel machine learning techniques (drawing on “deep learning") to mimic human decisions to devise “remedial actions" to prevent any line to violate power flow limits (so-called"thermal limits"). The proposed technique is hybrid. It does not rely purely on machine learning: every action will be tested with actual simulators before being proposed to the dispatchers or implemented on the grid. Key words: data science, data mining, power systems, machine learning, deep learning, imitation learning§ INTRODUCTIONElectricity is a commodity that consumers take for granted and, while governments relaying public opinion (rightfully) request that renewable energies be used increasingly, little is known about what this entails behind the scenes in additional complexity for the Transmission Service Operators (TSOs) to operate the power grid in security. Indeed, renewable energies such as wind and solar power are less predictable than conventional power sources (mainly thermal power plants).In cases of contingency, which may be weather-related (e.g. decreased production because of less wind or sun or line failure due to meteorological conditions) operators (a.k.a. dispatchers) must act quickly to protect equipment to meet all “security criteria" (for example to avoid that lines get overloaded). Remedial actions they take in such situations may include among others (1) modifications of the networktopology to re-direct power flows, (2) modification of productions or consumptions ( re-dispatching). 
By far the least costly and preferred of these options is the first one, and it will be the only one considered in this paper. A network is considered to be operated in “security" (i.e. in a secure state) if it is outside a zone of “constraints", which includes that power flowing in every line does not exceed given limits. The dispatchers must avoid ever getting in a critical situation, which may lead to a cascade of failures (circuit breakers opening lines automatically to protect equipment, thus putting more and more load on fewer and fewer lines), ultimately leading to a blackout. To that end, it is standard practice to operate the grid in real time with the so-called “N-1 criterion":this is a preventive measure requiring that at all times the network would remain in a safe state even if one component (productions, lines, transformers, etc.) would be disconnected.In choosing proper remedial actions, the dispatchers are facing various trade-offs. Remedial actions must eliminate the problem they were designed to address, but also must avoid creating new problems elsewhere on the grid. Today, the complex task of dispatchers, which are highly trained engineers, consists in analyzing situations, proposing remedial actions, and checking prospectively their effect using sophisticated (but slow) high-end simulators, which allow them to investigate only a few options. Our goal is toassist the dispatchers by suggesting them with quality candidate remedial actions, obtained by synthesizing several years of historical decisions made in various situations into a powerful predictive machine learning models, built upon earlier work <cit.>. The main contributions of this paper are: (1) To address a large scale industrial project with potentially high financial impact using real historical data and a large-scale simulator (deployed in real operations) from the company RTE; (2) To cast the problem in a mathematical setting amenable to machine learning studies; (3) To devise a methodology to extract from historical data and simulations a dataset usable for training and testing in a supervised machine learning setting; (4) To suggest and study machine learning architectures, which automatically generate candidate remedial actions, which could be validated with more extensive power system simulations. The paper is organized as follows: Section <ref> formalizes the problem.Section <ref> describes the proposed methodology.Section <ref> outlines initial results.Section <ref> presents a possible integration into today operational processes. Finally, section <ref> provides conclusions and outlooks. § FORMALIZATION OF THE PROBLEM. In this section, we formalize daily real-time tasks of dispatchers as a formal realistic (yet simplified) optimization problem, amenable to mathematical studies. Our setting is inspired by the analysis found in reference <cit.> Suppose that we are studying a powergrid at a given time t (either the current time for a real-time study, or some time in the near future for a forecast study). Let:* ℛ_t be the set of all feasible re-dispatching actions possible for time t; *and 𝒯_t the set of all feasible topological actions for time t known at the time of the study.Let us then assume that we are given a cost function R (resp. T), that assigns some cost to any re-dispatching action ρ∈ℛ_t (resp. topological action τ∈𝒯_t). For instance, the cost of a redispatching action can be the money paid by the TSO to the producers. 
The cost of a topological action can include the aging of the breakers, the probability of failure etc. We further assume that decisions performed by dispatchers made for the sake of security of the grid are optimally efficient, given available information. They implicitly solve an optimization problem consisting in minimizing the cost of their actions to meet a security measure 𝕊. This can be formalized with the equation <ref>:(ρ∈ℛ_t, τ∈𝒯_t) minimize R(ρ) + T(τ)subject to𝕊(grid_t⊙{ρ,τ})where 𝕊 denotes the function stating whether a powergrid is in a secure state. More formally, 𝕊 should be a function taking a grid as input, and returning a list of security issues (for example if the grid is secure according to 𝕊, the result should be the empty set ∅). We also denote by grid_t the state of the grid at time t. The operator ⊙ must be understood as applying a set of actions on a given grid: "grid_t⊙{ρ,τ}" should be though as The grid resulting of the application of actions ρ and τ on the network grid_t. This problem can be very complex to solve. For instance, it mixes continuous variables (such as redispatching) and integer variables (for example the topology or the maximum values allowed for productions). The number of variables involved is also quite important. France alone count around 3 000 productions and RTE can act on more than 30 000 breakers.Solving this problem "as is" requires to do some hypothesis on the costs functions and on the type of constraints of the problem <ref> for example to formulate as a Mixed-Integer Linear Program for which there exist some suitable solvers.In this paper, we propose a new methodology, based on learning of remedial actions taken by operators. Indeed, learning from human actions has some advantages:*It will improve the acceptance of the algorithm for dispatcher:*the proposed actions come from what they have already done in the past;*they can use the same tools they use today to check the validity of the results proposed. *It can indirectly model other security issues ignored by 𝕊. For example, dispatchers may know that a given breaker is in bad shape. So, they rarely actuated it. This can be taken into account by a learning strategy but may not be as easily digested using optimization tools (such constraints may be difficult to express, or difficult to centralized in one unique Information System Database).*It can help sharing knowledge between dispatchers, and capitalizing on the best action taken. § PROPOSED METHODOLOGYIn this section, we address the problem of finding curative/remedial actions to protect the power grid with a novel methodology based on machine learning. Our methodology is inspired by the game playing literature and in particular the very successful AlphaGO machine learning program<cit.> developed by Google Deepmind to tackle the ancient game of Go. We detail in this section the first step of the methodology concerning “imitation learning", i.e. training a learning machine to imitate decisions made by experts (expert players for Go and professional dispatchers for power grids). Improvements gained by self-play and reinforcement learning are discussed in Section <ref> and will be the object of future work. However, despite great similarities with the setting of AlphaGO, our problem has features of its own, which are addressed in this section. First, in the game playing setting, every action is perpetrated by one player with the intention of winning the game (i.e. pursue the objective at hand). 
In contrast, historical actions in power networks may stem from various motivations, which include protecting the grid (our objective), but also include scheduled maintenance actions and various other maneuvers unrelated to our objective. Because of the lack of data annotation regarding the purpose of actions, we must perform sophisticated preprocessing to prepare data suitable for our machine learning modeling.Second, in a game setting the risk vs. reward trade-off does not have the same implications and level of gravity. In power network applications, much greater levels of care must be given to assessing potential adverse effects of proposed actions, possibly discarding those which may be curing a given problems while triggering one of several others.Because of these distinguishing features of the problem, our methodology for “imitations learning" is split in two steps, which are described in this section: (1) Data generation; (2) Learning. §.§ Dataset generation: Extracting relevant actions To train our models, which will imitate human dispatchers, we need a large dataset of pairs {network state, action taken}. We describe in this section the method we used to obtain such a dataset.Our work builds upon a wealth of data recorded by RTE. Every 5 minutes, the consistent state of the grid is archived. We have available data fromNovember 1 2011, to present times. For this study, we use data until 2016 August 7.For each grid state, we have access to all the injections (injections are complex numbers having active power positive or negative values andreactive power values; they include both “productions" and “consumptions" or “loads"). We also know the nodal topology of grid and the voltage (angle and magnitude) for every node of the network. Accurate simulators of the physical grid can compute other quantities, such as the flows on lines using standard models such as AC load-flow simulators. This represents approximately 485 000 snapshots of theFrench grid: each snapshot being a modeling of the French Very High Voltage and High Voltage network counting more than 11 000 lines, an average of around 6 400 buses, and around 7 000 loads for 3 000 productions. One pitfall of the data is the lack of annotation of the actions. Changes in network topology cannot be only attributable to remedial actions taken by dispatchers to protect the grid. For example, we cannot distinguish between corrective actions performed in response to unplanned contingencies (e.g. a line struck by lightning) and periodical maneuvers to check if a breaker can still be open/closed. Therefore, to obtain data that is useful for training, we must perform a “detective work" and extract from available data plausible remedial actions by analyzing which action, if not performed, would have led to an adverse change in network security. Of two possible types of actions (re-dispatching and changes in topology), our main focus here is on topological actions. This stems from two main reasons. First, in the literature, some methods have already been developed to tackle the re-dispatching problem such method include OPF (Optimal power flow)<cit.> or SCOPF (Security Constrained Optimal Power Flow) where<cit.> present most recent advances in such area.Second, as we previously mentionned, TSOs like RTE are more interested in topological remedial actions because they are generally less costly. 
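A minimal sketch of this "detective work" is given below. Both load_snapshot and run_loadflow are placeholders passed in as arguments (standing for the archive reader and an AC load-flow engine such as Hades2; their real interfaces are not shown here). The function replays the injections observed at t+h on the topology frozen at time t and reports the lines whose simulated flow would have exceeded their thermal limit.

def find_prevented_violations(load_snapshot, run_loadflow, t, h, margin=1.0):
    """Counterfactual check: would the grid have stayed secure at t + h
    if its topology had been frozen at the state observed at time t?"""
    g_t, g_th = load_snapshot(t), load_snapshot(t + h)          # two archived states
    frozen = {"topology": g_t["topology"],                      # keep the topology of t
              "injections": g_th["injections"]}                 # impose the injections of t+h
    flows, limits = run_loadflow(frozen)                        # simulate the stressed grid
    return [line for line in flows
            if abs(flows[line]) > margin * limits[line]]        # overloaded lines, if any

A non-empty result suggests that the topology change actually made between t and t+h played the role of a preventive remedial action, and the corresponding pair {stressed state, action} becomes a candidate training example.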
To isolate the relevant changes in network topology, which could correspond to dispatcher actions responding to a problem or anticipating a situation that may yield to a problem, we propose an algorithm inspired by counterfactual reasoning <cit.>: "What would have happened if a given topological change τ had not occurred?" To do that, we use a combination of real data and grid simulation. We proceed in two steps for which pseudo-code is provided:*Algorithm <ref>: Considering two grid states g_t, and g_t+h at times t and t+h, we check the potential outcome of not having performed a change in topology by freezing the network topology at t while imposing the injections that were observed in real data at t+h. The power flows and security criterion 𝕊 are re-calculated by simulation. Unsafe networks are detected when security violations occur, indicating that a topological change may have played the role of a preventive “remedial action". *Algorithm <ref>: Changes in topology occurring between t and t+h may have been motivated by other reasons than preventing the network to go out of its security operation regime (for reason of maintenance, for example). We post-process the data by looking for a minimal subset of actions, which bring the network back to a safe operation mode.The output of Algorithm <ref> is then a list of security criteria not met in a stressed network, and the corresponding time-stamps. The output of Algorithm <ref> is a list of topological changes that can be applied as remedial actions.§.§ Model training: Imitate human expertsNow that we have a clean database with pairs of {X=stressed state, Y=remedial actions} we can learn from it.The main idea is to use learning machines to quickly propose and/or evaluate actions by learning from what the human would have done facing the same situation. This is often called supervised learning, or imitation learning.For instance, we may provide our learning machine with an ensemble of variables X (in our case an encoding of the security issue-s- s and the grid g) and teach it to produce the response Y=τ.One of the main difficulties we have to face is that of encoding information: the structure and state of a power grid, including representing security issues, and the actions.We propose and study several methods of encoding, restraining ourselves to the French power grid of which we have in depth knowledge. One of them consists in simply enumerating all the important variables, for example the productions, the loads, the flows on each line or the voltage magnitude and angles and encode them with an arbitrary “barcode". This first approach main seem too crude, but has proved useful in combination of deep learning neural network architectures that we have explored in our machine learning analyses. This also demonstrates the robustness of deep learning techniques to arbitrary input representation and their capability of learning internal representation even from unpreprocessed data as shown in <cit.> for example. This is a important feature to achieve our goal: model grid data is a complex task. As learning machines, several neural network architectures have been envisioned and will be compared. One of the most promising ones, for which we have initial results reported in the next section and that could serve as benchmark for most advance study, involves a deep neural network, which predicts power flows from injections and topologies, simply coded with their “barcode”. 
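As an illustration of the "barcode" encoding mentioned above, the state of the grid can be flattened into a single numerical vector fed to the neural network; the variable names and dimensions below are illustrative only.

import numpy as np

def barcode(prod_p, prod_v, load_p, load_q, line_in_service):
    """Flatten a grid state into the plain vector fed to the neural network.
    All arguments are 1-D arrays; line_in_service holds one 0/1 entry per line."""
    return np.concatenate([prod_p, prod_v, load_p, load_q,
                           line_in_service]).astype(np.float32)

# toy example: 3 productions, 4 loads, 5 lines with line 3 disconnected
x = barcode(np.array([300., 120., 80.]),          # active production set-points (MW)
            np.array([1.02, 1.01, 1.03]),         # production voltage set-points (p.u.)
            np.array([150., 90., 110., 60.]),     # active loads (MW)
            np.array([30., 15., 25., 10.]),       # reactive loads (MVar)
            np.array([1., 1., 0., 1., 1.]))       # topology "barcode" (line status)
print(x.shape)                                    # (19,)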
The benefit of this flow-prediction network is that it can quickly evaluate the security status of a proposed power network topology by calculating 𝕊 from the neural network output. Such an evaluation of 𝕊 using a neural network is orders of magnitude faster than running the RTE simulator Hades2 (typically 100x for moderate-size neural networks used on a moderate-size power grid). Today, this first module must be combined with another system, which produces candidate topologies, including topologies proposed by dispatchers and re-combinations. We presently have a dictionary of 3 000 topologies corresponding to preventive "remedial actions". We envision that this set of remedial actions could be enriched with the help of data-generating models such as GANs <cit.>, or could be ranked using a learning algorithm and then tested in real time with the preferred simulator of the dispatcher. § MAIN RESULTS Algorithms <ref> and <ref> have been run over the first six months of 2012. To make the simulation tractable on a reasonable computer, some restrictions have been imposed: a) we impose the window h to lie in the set { 5 min, 10 min, 15 min, 30 min, 45 min, 1h, 1h30, 2h, 2h30, 3h, 3h30, 4h, 4h30, 5h, 5h30, 6h, 7h, 8h, 9h, 10h, 11h, 12h, 23h, 23h30, 23h45, 24h } rather than spanning the whole interval [0, 24h] in Algorithm <ref>. This restriction was made in agreement with RTE experts and with preliminary results that showed a lot of redundancy. b) we use a simplified version of the safety criterion 𝕊 used for operation support. The criterion used was that each line of the network must be below 95% of its thermal limit. The operational safety criterion would lead to 10 000 times more computation, since each security assessment requires a simulation withdrawing each line one by one. c) in Algorithm <ref>, only the subsets of τ of cardinality one have been tested. This again is in agreement with the operators: it is quite rare that one needs to act on different substations for security reasons. With these settings, the security of around 1 250 000 grids has been computed using our implementation of Algorithm <ref>, as shown in Table <ref> (first row). This allowed us to identify more than 81 000 stressed grids g̃_t,h in an insecure state. On these 81 000 insecure grids, we noted that 2 008 lines have seen their flow exceed their thermal limit (fifth row). This represents around 18% of the total number of lines present in the grid. This observation is consistent with expert knowledge of the French grid: some parts are weaker than others. We can also note that, in total, we have found 3 266 unique remedial actions (two remedial actions are different if and only if they do not solve an overflow on the same line, or if they do not act on the same substation, or if they do not change the nodal topology in the same manner). This means that, in the history, at least 3 266 different topological actions could have been taken to solve a security issue. With these data collected, we intend to conduct a systematic comparison of learning machine architectures to propose and evaluate "remedial actions". We have first started by studying neural network architectures that allow us to evaluate "remedial actions".
As explained in the previous section, such learning machines take as input injections and topologies and predict power flows (which allows us to quickly calculate 𝕊). Our preliminary study includes testing artificial neural networks for approximating the load-flow of Matpower <cit.> (test cases "case30", coming originally from <cit.>, and "case118"[This test case "represents a portion of the American Electric Power System (in the Midwestern US) as of December, 1962". More information can be found at http://www2.ee.washington.edu/research/pstca/pf118/pg_tca118bus.htm.]). Using these power grids, we taught the neural networks to predict the outcome of a given outage. The neural networks are trained using the TensorFlow framework. An example of the architecture used to approximate the load-flow can be found in figure <ref>. This study has been conducted by first simulating a large number of plausible grid states, making the injections (productions and consumptions) vary. The workflow to obtain such a database is the following: *Get the grid in the proper format used by Hades2. *Disconnect one line. *Sample the active loads based on the 2012 French load consumptions. *Sample the reactive loads from the historical p-q distribution calibrated on the French power grid. *Sample the active production values from the active load values: *randomly disconnect some productions (to account for the fact that not all productions are running at a given time); *dispatch the load power according to p_max; *add noise. *The voltages of the productions are not modified. Once the database has been built, it has been divided into three parts: one part (50%) for training the model, another (25%) for fitting the meta-parameters, and the last one (25%) for testing and reporting results. The examples for which results are given are thus never seen during any part of the training. For each line disconnection, we ran n_s = 10 000 simulations with 10 000 different production/load values. The Matpower 30-bus grid counts 41 lines, making in total n_s (no line disconnected) + 41 × n_s (one line disconnected) simulations. For this grid, the test set then counts 210 000 samples. For the bigger 118-bus grid, we simulate n_s = 5 000 samples per configuration, and there are 199 lines, so the test set counts 450 000 rows. In this first experiment, we try to approximate a load-flow computation. We feed a neural network, with the architecture presented in figure <ref>, with the active c_p and reactive c_q load values, the active production values p_p, as well as their voltage setpoints p_v as inputs. We also give as input which line has been disconnected, using a one-hot encoding enc. We then ask the neural network to predict the rest of the variables: the reactive production values p_q, the voltages c_v at the buses where each load is connected, and the flows. For the flows, we decided to make the network compute the active power flow f_MW and the current flow f_a. The reactive power flow is not computed. We note that we did not feed the network with the p_min or p_max values of the productions. One of the tasks of the neural network will thus be to balance the loads, to take into account the losses for example. To evaluate the performance of our models, we use the Mean Absolute Error (MAE) and the Mean Absolute Percentage Error (MAPE).
If y^true denotes the vector (of size n) of the true values, and ŷ the vector of the predicted values (also of size n), we have:
MAE(ŷ, y^true) = (1/n) ∑_i=1^n |ŷ_i - y^true_i| ,
MAPE(ŷ, y^true) = (1/n) ∑_i=1^n |ŷ_i - y^true_i| / |y^true_i| .
As we can see from Table <ref>, the neural networks achieve good performance. They are able to predict the output of the load-flow with an error close to 1% for the 30-bus grid and around 2% for the 118-bus grid, which is sufficient for exploring curative actions, as will be seen in the next section. We must note that no special care has been taken in how the data are fed to the neural network. Further studies will focus on this matter; we believe it could greatly improve the performance. For completeness, training the model for the 30-bus grid took 18h 31min on a computer with a high-end GPU (Nvidia GTX 1080), and 20h 03min for the 118-bus grid (in the latter case, the error was still decreasing at the time of writing). Once the models are trained, the computation of a load-flow is very fast. Computing 5 000 security analyses for the "N-1" criterion on the 30-bus grid (210 000 load-flows) took only 1.56 s on an Intel i5 2-core laptop processor. For comparison, generating the dataset using a much faster i7 processor took 123.7 s. This leads to a computation-time speed-up of around 80. Concerning the 118-bus grid, the speed-up is about 450 (1 432 s to generate the data and 3.01 s to compute 2 500 security analyses, representing 450 000 load-flows, using the trained model). The main drawback of this method consists in the fixed topology setting. Only line disconnections are taken into account; it is for now impossible to perform more complex topological changes on the power grid. We are currently working on this issue, and preliminary results seem promising: even with more complex topological changes, the error is around 2-3% for the 30-bus grid. No experiments have been done on the 118-bus grid yet. Once such a model is available, we will be able to run Algorithms <ref> and <ref> with the standard "N-1" security criterion. This could also allow us to test more topological changes. In summary, drastically reducing the computation time could allow us to find more historical curative actions. After building such a database of curative actions, the next step will be to learn to mimic the human. Artificial neural networks have shown encouraging performance in various supervised learning settings, and the first encouraging results concerning flow approximation make us optimistic about the possibility of predicting, based on human decisions, the substation whose topology must be changed to solve a security issue. This model will be trained with the data obtained after running Algorithm <ref>. The remedial actions from which the algorithm will learn concern the unsafe grids ĝ_t,h. This is where the time window h_max plays an important role: the time interval must be long enough to capture some possible remedial actions, but narrow enough that the grid ĝ_t,h is "realistic" (i.e. that this simulated grid state is "close" enough to a grid that could have occurred in real time). That is why we did not compute all the grids in the interval { 5 min, …, 24 h }: some grids were completely unrealistic. For example, applying the injection plan of peak time to the grid topology that was in operation at the lowest load level often results in divergence of the load-flow.
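To make the flow-approximation experiment of this section concrete, here is a minimal TensorFlow/Keras sketch. The inputs and outputs follow the description above (c_p, c_q, p_p, p_v and the one-hot outage encoding enc as inputs; p_q, c_v, f_MW and f_a as outputs), but the layer sizes, activations and optimizer are arbitrary illustrative choices, not the exact architecture of figure <ref>.

import tensorflow as tf

def build_flow_approximator(n_load, n_prod, n_line, hidden=(300, 300)):
    # Input: concatenation of [c_p, c_q, p_p, p_v, enc] (sizes are illustrative).
    inputs = tf.keras.Input(shape=(2 * n_load + 2 * n_prod + n_line,))
    x = inputs
    for width in hidden:
        x = tf.keras.layers.Dense(width, activation="relu")(x)
    # Output: concatenation of [p_q, c_v, f_MW, f_a].
    outputs = tf.keras.layers.Dense(n_prod + n_load + 2 * n_line)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae",
                  metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
    return model

# Hypothetical sizes for a small test grid in the spirit of "case30" (41 lines).
model = build_flow_approximator(n_load=20, n_prod=6, n_line=41)
# model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=100, batch_size=256)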
§ LINKS WITH OPERATIONAL DECISION PROCESSES In this section, we explain our view of the possible usage of our method as a tool to support real-time operations. Let us consider that we have at our disposal the two models discussed in the previous section: M1, which approximates a load-flow computation very rapidly; and M2, which is able, given a safety issue and a grid state, to predict accurately on which substation we can act. First, as the grid evolves, the models M1 and M2 described above could be re-learned from time to time, for example during the week-end or, if greater computing power is available, during the night if time allows. Then the future real-time operation framework could look like: *Use standard tools to assess whether or not a grid is secure. This could be done with standard computations, such as a load-flow computation and the "N-1" criterion. To speed up the computation and get faster results, one could also use model M1 to pre-screen the contingencies that will most probably cause at least one overload. *If some non-secure contingency is detected, one could then use model M2 to predict on which substation a topological action is worth looking for. Let us name this substation sub_i. *Once such a substation is detected, we could enumerate all possible actions doable at the time of the study in sub_i, and rapidly assess whether a possible curative action is found or not. *Then there are two cases: *If a possible change has been found with this method, we use accurate models such as the load-flow, as well as models that take dynamic phenomena into account, to check that the action found removes the security issue and does not cause any problem elsewhere. *If no action has been found, we leave the operator the choice of which action to take, but we can tell him that it is most probably useless to seek a topological action in the substation sub_i. As one can see, this framework offers a lot of flexibility. One can for example decide at step 3 to look at the k "most likely" substations where a topological curative action can take place. This would of course increase the computation time, but a curative action would be more likely to be found. Also, this method allows operators to take control at any moment. For example, it is always possible to stop the search for curative actions, and the algorithm can easily report which actions have been (unsuccessfully) tested. Most of all, the security assessment can be performed after the action has been chosen by the machine; the fast approximations are only relevant for exploring the space of curative actions. Another set of methods, including dynamic simulation of the changes in the grid, takes place after the selection of the right action. The proposed method is thus a mixture of different approaches: we use machine learning methods to search for the curative action, while the security checks are performed with well-established methods relying on the simulation of physical systems. § DISCUSSION AND CONCLUSION This paper proposed to generate candidate remedial actions for dispatchers in order to maintain a power network in a safe state, using machine learning techniques. With our method, remedial actions are rank-ordered in order of increasing cost (costs for re-dispatching are typically much higher than costs for modifying network topology) and then tested with simulators used today by dispatchers before being proposed to them.
Our methodology first requires extracting from historical data the actual actions that were performed and have been evaluated to have a positive influence (protecting against network issues to be avoided, such as power flows exceeding line thermal limits). That alone is a non-trivial problem because (1) many actions performed on the network are not protective actions (they may include maintenance actions and miscellaneous maneuvers); (2) there is no centralized and uniform record of why given actions are performed; (3) the consequences of not performing given actions are not observed, hence it is difficult to assess how effective given protective actions may be. We devised and implemented an algorithm based on the causal concept of counterfactuals, which allows us to identify actions that have had beneficial effects (or, more precisely, without which the network would have incurred adverse effects). Such training data will be used to train learning machines in a supervised way to imitate the actions of dispatchers or to rapidly evaluate candidate actions. This allows us to generalize and generate ranked lists of remedial actions in situations never seen before. We tested our method on small, well-known test cases and obtained promising preliminary results. The proposed methodology has multiple advantages. The first one is the ability to explore a vast number of possible curative actions, thanks to the very fast approximation of flows. But most importantly, we think that this method will propose realistic remedial actions thanks to learning the operators' expertise by observation. Our methodology is to some extent inspired by game-playing machine learning programs such as AlphaGo of Google DeepMind <cit.>. Further work will consist in using reinforcement learning to refine our learning machine. In the same manner as AlphaGo improved itself by self-play, after being initially trained only to imitate the play of famous Go players, we intend to use the RTE simulator to generate millions of new situations and let the learning machine propose candidate remedial solutions and learn from its errors to progressively improve (i.e. decrease cumulative costs). In combination with Monte Carlo Tree Search (as used by AlphaGo), we believe that this could be a powerful way of improving policy learning. Other avenues of research include seeking the worst-case events that could happen after a remedial action has taken place, following the work of <cit.> and <cit.>, for example. Another possible extension would be to use the proposed framework in more generic settings, in the context of mid- to long-term studies where real-time actions must be taken into account (the GARPUR[GARPUR: Generally Accepted Reliability Principle with Uncertainty modelling and through probabilistic Risk assessment (http://www.garpur-project.eu/) is a European project which "aims to maintain power system performance at a desired level, while minimizing the socio-economic costs of keeping the power system at that performance level".] project would be an example), or for the classification of contingencies in the case of the I-TESLA project[I-TESLA stands for Innovative Tools for Electrical System Security within Large Areas.
I-TESLA is a European project (http://www.itesla-project.eu/) aiming at "improving network operations with a new security assessment tool".]. We also intend to explore many remedial-action recombination strategies to enrich the space of exploration, in the spirit of genetic algorithms. While our approach will initially draw on classical Markov Decision Processes, assuming largely quasi-total observability of the grid state and dispatcher actions, we will progressively incorporate more realism and complexity, devising methods that have only partial knowledge of the overall situation (which may occur in case of delayed information transmission) and moving into the realm of more complex models such as Partially Observable Markov Decision Processes (POMDP).
IEEEtran
http://arxiv.org/abs/1709.09527v1
{ "authors": [ "Benjamin Donnot", "Isabelle Guyon", "Marc Schoenauer", "Patrick Panciatici", "Antoine Marot" ], "categories": [ "stat.ML", "cs.AI" ], "primary_category": "stat.ML", "published": "20170927135935", "title": "Introducing machine learning for power system operation support" }
[][email protected] [][email protected] Département de Physique, de Génie Physique, et d'Optique,Université Laval, Québec (Québec), Canada, G1V 0A6 We present a degree-based theoretical framework to study the susceptible-infected-susceptible (SIS) dynamics on time-varying (rewired) configuration model networks.Using this framework on a given degree distribution, we provide a detailed analysis of the stationary state using the rewiring rate to explore the whole range ofthe time variation of the structure relative to that of the SIS process.This analysis is suitable for the characterization of the phase transition and leads to three main contributions. (i) We obtain a self-consistent expression for the absorbing-state threshold, able to capture both collective and hub activation. (ii) We recover the predictions of a number of existing approaches as limiting cases of our analysis, providing thereby a unifying point of view for the SIS dynamics on random networks. (iii) We obtain bounds for the critical exponents of a number of quantities in the stationary state. This allows us to reinterpret the concept of hub-dominated phase transition. Within our framework, it appears as a heterogeneous critical phenomenon : observables for different degree classes have a different scaling with the infection rate. This phenomenon is followed by the successive activation of the degree classes beyond the epidemic threshold. 64.60.aqPhase transition of the susceptible-infected-susceptible dynamics on time-varying configuration model networks Louis J. Dubé December 30, 2023 ==============================================================================================================§ INTRODUCTION The susceptible-infected-susceptible (SIS) model is one of the classical and most studied models of disease propagation on complex networks <cit.>. It can be understood as a specific case of binary-state dynamics <cit.> where nodes are either susceptible (S) or infected (I). Susceptible nodes become infected at rate λ l where l represents the number of infected neighbors; infected nodes recover and become susceptible at rate μ, set to unity without loss of generality. Despite being a crude approximation of reality, this is arguably one of the simplest models leading to an absorbing-state phase transition. For infinite size networks in the stationary state (t →∞), there are two distinct phases :an absorbing phase—consisting of all nodes being susceptible—and an active phase where a constant fraction of the nodes remains infected on average. The former is attractive for any initial configurations with infection rate λ≤λ_c, which defines the threshold λ_c. From a statistical physics perspective, this represents a critical phenomenon, where the density of infected nodes in the stationary state plays the role of the order parameter.It is now common knowledge in network science that the degree distribution P(k), the probability that a random node has k neighbors, is a fundamental property to quantify the extent of an epidemic outbreak <cit.>. To this end, random networks with an arbitrary degree distribution have been extensively used to study the impact of this property on the spreading of diseases <cit.>. Recently, a profound impact of the degree distribution has been unveiled, leading to an interesting dichotomy for the nature of the phase transition of the SIS model on networks. 
The activity just beyond the threshold is either localized in the neighborhood of high degree nodes (hubs), sustained by correlated reinfections, or maintained collectively by the whole network <cit.>. As in Ref. <cit.>, we will use the terminology hub activation and collective activation to discriminate these two scenarios.To capture the dynamics and describe its critical behavior, various analytical approaches have been developed using mean field, pair approximation and dynamic message passing techniques <cit.> (see Refs. <cit.> for recent reviews). They can be divided into two major families : degree-based and individual-based formalisms. The former is a compartmental modeling scheme that assumes the statistical equivalence of each node in a same degree class. It leads to simple approaches with explicit analytical predictions, but restricted to infinite size random networks. The latter relies explicitly on the (quenched) structure, described by an adjacency matrix a_ij, to estimate the marginal probability of infection for each node. Its range of applicability is not restricted to infinite size random networks, but it is less amenable to analytical treatment than degree-based approaches.Despite the same basic structural information—the degree distribution—there remain disparities between the predictions of degree-based and individual-based formalisms. An important theoretical gap that needs to be addressed is that current characterizations of the phase transition using degree-based approaches are unable to describe a hub activation correctly. This arises from the fact that the neighborhood of nodes for each degree class is not described properly. We provide in the following a degree-based theoretical analysis of the SIS dynamics on time-varying (edges are being rewired) random networks with a fixed degree sequence in the infinite size limit. Our emphasis is on the characterization of the critical phenomenon for both, collective and hub activation. Our rewired network approach (RNA) permits us to simulate an effective structural dynamics and mathematically provides an interpolation between existing compartmental formalisms.The paper is organized as follows. In Sec. <ref>, we introduce a compartmental formalism to characterize the dynamics and we show how it is related to other approaches. In Sec. <ref>, we obtain the stationary distributions that we develop near the absorbing phase. Using this framework, we draw a general portrait of the phase transition. In Sec. <ref>, we present an explicit upper bound and an implicit expression for the threshold λ_c, that we compare analytically and numerically with the predictions of a number of existing approaches. In Sec. <ref>, we obtain bounds for the critical exponents describing the stationary distributions near the absorbing phase, bringing to light a heterogeneous critical phenomenon associated with the hub activation. In Sec. <ref>, we discuss the impacts of structural dynamics on the hub-dominated property of a phase transition, and show the successive activation of the degree classes beyond the threshold. We finally gather concluding remarks and open challenges in Sec. <ref>. They are followed by two Appendices, giving details of the Monte-Carlo simulations (Appendix <ref>) and of the mathematical developments for the critical exponents (Appendix <ref>).§ MATHEMATICAL FRAMEWORKTime variations of the structure greatly affect the propagation <cit.>. 
For networks whose evolution is independent from the dynamical state <cit.>, it has been shown to notably alter the epidemic threshold of the SIS model. For adaptive networks <cit.> where the dynamical state influences the evolution of the structure, a hysteresis loop and a first order transition have even been observed <cit.>.In this paper, we consider the former scenario, a structure evolving according to a continuous Markov process, independent of the SIS dynamics. Each edge in the network is rewired at a constant rate ω: a rewiring event involves two edges that are disconnected, and the stubs are rematched as presented in Fig. <ref>. For nodes, this implies that their stubs are effectively reconnected to random stubs in the network at the rate ω. We allow loops and multiple edges to simplify the rewiring procedure and impose a structural cut-off for the maximal degree k_max < N^1/2 to have a vanishing fraction of these undesired edges.This process samples a configuration model ensemble by leaving the degree sequence unaltered <cit.>. Noteworthy, this allows us to control the heterogeneity of the structure independently from the time-varying mechanism. Moreover, the networks ensemble is uncorrelated, i.e the degrees at the end points of any edge are independent. Since the structural dynamics is a Poisson process, exponentially distributed lifetimes for the edges are produced. Although it has been argued that many real contact patterns are better represented by power-law distributed lifetimes <cit.>, our framework still captures the essence of a time-varying structure and is simple enough to lend itself to explicit analytical results. For all ensuing mathematical developments, the thermodynamic limit (N →∞) is assumed.§.§ Compartmental formalism Since we consider a time-varying network preserving the degree sequence, the statistical equivalence of each node with a same degree k is guaranteed. This implies that the probability ρ_k(t) that a node of degree k is infected follows the rate equation ρ̣_̣ḳt = - ρ_k + λ k (1- ρ_k) θ_k,where θ_k(t) is the probability of reaching an infected node following a random edge starting from a degree k susceptible node. In the stationary limit (ρ̇_k = 0 k), the following relationsρ_k^* = λ k θ_k^* /1 + λ k θ_k^* orλ k θ_k^* = ρ_k^*/1- ρ_k^* ,are obtained. Stationary values will be marked hereafter with an asterisk (*). Equation (<ref>) expresses that a node's probability of being infected is directly related to its neighborhood's state, quantified by θ_k^*. Our objective is therefore to find the most precise explicit expression for this probability, taking into account the rewiring process. In the general case, we must have a degree dependent solution to represent θ_k^*.Accordingly, we consider a pair approximation framework as introduced in Ref. <cit.>. To include the rewiring process, we account for the probability Θ(t) that a newly rewired stub reaches an infected node Θ≡k ρ_k/k ,where all averages ⋯ are taken over P(k). Let ϕ_k(t) be the probability of reaching an infected node following a random edge starting from a degree k infected node. We obtain (see Appendix <ref>) θ̣_̣ḳt =-λθ_k + (k-1)θ_k^2+ r_k ϕ_k + (Ω^S + ωΘ) (1- θ_k) -1+ω(1- Θ)θ_k- θ_kr_k - λ k θ_k ,ϕ̣_̣ḳt = λ r_k^-1θ_k + (k-1)θ_k^2- ϕ_k + (Ω^I + ωΘ) (1- ϕ_k) - 1+ω(1- Θ)ϕ_k+ϕ_k 1 - λ k θ_k r_k^-1 , with r_k ≡ρ_k/(1-ρ_k). Also, Ω^S(t) and Ω^I(t) are the mean infection rates for the neighbors of susceptible and infected nodes. 
These rates are estimated by Ω^S = λ(1- ρ_k)(θ_k-θ_k^2)(k-1)k/(1- ρ_k)(1- θ_k)k,Ω^I = λ(1- ρ_k)[θ_kk + θ_k^2 k(k-1)]/(1- ρ_k)θ_kk . Before going any further with the analysis, it useful to discuss the approximations involved in Eqs. (<ref>). * The mean infection rates for the neighbors (Ω^S and Ω^I) are independent of the degree and are estimated from mean values over the network. An infinite size configuration model network is assumed. * The pair approximation considers that, for a degree k susceptible node, each neighbor is infected with an independent probability θ_k.Compartmental formalisms based only on the first approximation (effective degree or approximated master equations <cit.>) lead to excellent agreement with the corresponding stochastic processes on random networks (see Refs. <cit.>). The second approximation enables us to perform a thorough stationary state analysis in the following sections. Such pairwise approximations have been shown to predict an epidemic threshold that is slightly off, but still show very good agreement with numerical simulations in contrast to mean-field theories <cit.>.§.§ Reduction and relation to other formalisms The rewiring rate ω≥ 0 permits us to tune the interplay between the disease propagation and the structural dynamics, for which we can distinguish two extreme limits. There is the annealed network limit when the rewiring is much faster than the propagation dynamics (ω→∞). It is equivalent to consider the SIS dynamics on an annealed network with adjacency matrix a_ij = k_i k_j/(Nk) <cit.>. In this limit, our compartmental approach is identical to the heterogeneous mean field theory (HMF) <cit.>.For annealed networks, the dynamic correlation and the neighborhood heterogeneity can be neglected. On the one hand, the absence of a dynamic correlation implies that the states of neighbor nodes are independent <cit.>. On the other hand, the absence of neighborhood heterogeneity implies that the degree of a node, on average, does not affect the state of its neighbors. From a degree-based perspective, this would mean that θ_k^* is a probability independent of the degree class. In contrast with the annealed limit, there is the quasi-static network limit (ω→ 0), where both the dynamic correlation and the neighborhood heterogeneity cannot be neglected. Between each rewiring event, the SIS dynamics has enough time to relax and reach a stationary distribution—temporal averages for the dynamics are then equivalent to ensemble averages on every static realization of the configuration model. In this limit, our compartmental approach is equivalent to the heterogeneous pair approximation (HPA) of Ref. <cit.>, which considers both the dynamic correlation and the neighborhood heterogeneity.We stress that our mathematical framework (as well as HPA) is different from other pair approximation formalisms that neglect the neighborhood heterogeneity, such as the pair heterogeneous mean field theory (PHMF) <cit.> or similar approaches <cit.>. In the quasi-static limit, we also expect our compartmental formalism to be in agreement with individual-based approaches such as quenched mean-field theory (QMF) <cit.> and pair QMF (PQMF) <cit.>. The RNA effectively interpolate between HPA and HMF through the tuning of the rewiring rate ω. The specific properties of each formalism are compiled in Table <ref>. § STATIONARY DISTRIBUTIONSSolving Eqs. 
(<ref>) in the stationary limit for θ_k^*, we find θ_k^*(ω,λ) = β/(κ - 1) if k = 1, and θ_k^*(ω,λ) = [k - κ + √((k - κ)^2 + 4 αβ(k-1))] / [2 α (k-1)] if k > 1, where the parameters are α = (1 + ω + Ω^I^*)/(Ω^I^* + ωΘ^*), β = (Ω^S^* + ωΘ^*)(2 + ω + Ω^I^*)/[λ (Ω^I^* + ωΘ^*)], and κ = [(λ + 1 + Ω^S^* + ω)(2 + ω + Ω^I^*) - λ]/[λ(Ω^I^* + ωΘ^*)]. As desired, we have obtained a degree-dependent solution for θ_k^*. At this point, one can already verify the consistency with HMF in the annealed limit: taking ω→∞ in Eq. (<ref>), one recovers θ_k^* →Θ^*. For finite ω however, we obtain a solution that is potentially heterogeneous among degree classes. §.§ Collective and hub activations As briefly discussed in the Introduction, there exists a dichotomy in the nature of the phase transition of the SIS model. Numerical evidence suggests that near the absorbing phase, the activity is localized either on the hubs (hub activation) or on the innermost network core (collective activation) <cit.>. This dichotomy is also supported theoretically by individual-based approaches such as QMF <cit.>, for which the active phase near the epidemic threshold is dominated by the principal eigenvector of the adjacency matrix. This eigenvector is localized either on the subgraph associated with the highest degree nodes or on the shell with the largest index in the K-core decomposition <cit.>. For uncorrelated configuration model networks with power-law degree distribution P(k) ∼ k^-γ, this dichotomy is reflected as two distinct regimes <cit.>. For γ < 5/2, the phase transition is collective due to the presence of a large inner core, whereas for γ≥ 5/2, the phase transition is dominated instead by the hubs. It is important to note that these two regimes are well defined only in the thermodynamic limit (N →∞ and consequently k_max→∞) <cit.>. To illustrate how this dichotomy is transposed to degree-based approaches, we present in Fig. <ref> the behavior of ρ_k^* and θ_k^* near the absorbing phase (λ→λ_c) for quasi-static networks with power-law degree distributions. For an exponent γ = 2.25, associated with a collective activation, we see in Fig. <ref>(b) that θ_k^* is independent of the degree, and ρ_k^* grows linearly with the degree [Fig. <ref>(a)]. For γ = 3.1 however, associated with a hub activation, θ_k^* increases with the degree [Fig. <ref>(b)], and ρ_k^* grows supra-linearly [Fig. <ref>(a)]. Our solution [Eq. (<ref>)] reproduces the qualitative behavior for both scenarios. This indicates that the dichotomy can also be identified and characterized from a degree-based point of view by studying the behavior of θ_k^* near the absorbing phase. This is achieved with our approach in the following sections. §.§ Perturbative development As seen in Fig. <ref>, the solution for θ_k^* can be heterogeneous near the absorbing phase. To provide further insights, we consider the absorbing-state limit: we start with an active phase (λ > λ_c), then we take the limit λ→λ_c, which leads to ρ_k^*, θ_k^* → 0 for all k. According to Eq. (<ref>), to force θ_k^* → 0 ∀ k, we must require that lim_λ→λ_c β = 0 and lim_λ→λ_c κ ≥ k_max. These strong constraints allow us to introduce a perturbative development: any quantity around the critical threshold is expressed as a power series of β. Since the RNA is self-consistent, all quantities [Eqs. (<ref>), (<ref>), (<ref>)] are interrelated. Therefore, we need to develop them recursively in a coherent way.
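Before turning to the perturbative development, we note that the self-consistent stationary state can also be evaluated numerically. The Python sketch below is our own minimal illustration, not part of the formalism itself: it iterates the estimates of Θ^*, Ω^S^*, Ω^I^*, the parameters α, β, κ and the stationary solution for θ_k^* until a fixed point is reached, for a given degree distribution, infection rate λ and rewiring rate ω. The damping factor and initial guess are arbitrary, and no convergence guarantee is implied; below the threshold the iteration simply drifts toward the absorbing solution.

import numpy as np

def stationary_state(pk, lam, omega, tol=1e-10, max_iter=100000, mix=0.5):
    # pk: dict {degree: probability}; lam: infection rate; omega: rewiring rate.
    k = np.array(sorted(pk), dtype=float)
    p = np.array([pk[kk] for kk in sorted(pk)], dtype=float)
    kmean = np.sum(p * k)
    theta = np.full_like(k, 0.5)                        # initial guess in the active phase
    rho = lam * k * theta / (1.0 + lam * k * theta)
    for _ in range(max_iter):
        Theta = np.sum(p * k * rho) / kmean
        # Mean infection rates of the neighbors of susceptible / infected nodes.
        om_s = lam * np.sum(p * (1 - rho) * (theta - theta**2) * (k - 1) * k) \
               / np.sum(p * (1 - rho) * (1 - theta) * k)
        om_i = lam * np.sum(p * (1 - rho) * (theta * k + theta**2 * k * (k - 1))) \
               / np.sum(p * (1 - rho) * theta * k)
        alpha = (1 + omega + om_i) / (om_i + omega * Theta)
        beta = (om_s + omega * Theta) * (2 + omega + om_i) / (lam * (om_i + omega * Theta))
        kappa = ((lam + 1 + om_s + omega) * (2 + omega + om_i) - lam) / (lam * (om_i + omega * Theta))
        # Stationary solution for theta_k (degree-1 nodes treated separately).
        disc = np.sqrt((k - kappa)**2 + 4 * alpha * beta * (k - 1))
        new = np.where(k > 1,
                       (k - kappa + disc) / (2 * alpha * np.maximum(k - 1, 1)),
                       beta / (kappa - 1))
        new = mix * new + (1 - mix) * theta             # damped update
        rho = lam * k * new / (1.0 + lam * k * new)
        if np.max(np.abs(new - theta)) < tol:
            theta = new
            break
        theta = new
    return k, theta, rho

# Example: regular random network P(k) = delta_{k,6}, moderate rewiring.
degrees, theta_st, rho_st = stationary_state({6: 1.0}, lam=0.3, omega=1.0)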
First, we develop the stationary probability θ_k^* near the absorbing phase.θ_k^*(ω, λ)= k- κ + |k- κ| + 2 αβ (k-1)/|k - κ|/2 α (k-1) + 𝒪(β^2)= β/κ - k + 𝒪(β^2),where the second equality comes from Eq. (<ref>). However, κ also depends on β through the quantities Ω^S^*, Ω^S^* and Θ^*. Using Eq. (<ref>) with Eqs. (<ref>) and (<ref>), we obtain the following leading behaviorsΩ^S^*= 𝒪(β) ,Ω^I^*= λ + 𝒪(β) ,Θ^*= 𝒪(β) .This fixes κ to order zero, i.e., from Eq. (<ref>), we obtainκ = κ(ω,λ) + 𝒪(β),where κ(ω,λ) ≡1 + (λ + 1)^2 + ω (2 λ + 3) + ω^2/λ^2 . Combining Eq. (<ref>) with Eq. (<ref>), we have a coherent development for θ_k^* θ_k^*(ω,λ)= β f_k(ω,λ) + 𝒪(β^2) ,with the auxiliary functionf_k(ω,λ)≡1/κ(ω,λ) - k .Using these definitions, it is possible to express all quantities to first orderΩ^S^*= λf_k k(k-1)/kβ + 𝒪(β^2),Ω^I^*= λ +λf_k^2 k(k-1)/f_k kβ + 𝒪(β^2),Θ^*= λf_k k^2/kβ + 𝒪(β^2). One could continue this perturbative scheme in order to extract the quadratic terms in β and so forth. However, the first order development is quite sufficient to characterize the absorbing-state threshold in Sec. <ref>.§.§.§ Approximate exponential form We can rewrite the solution for θ_k^* in Eq. (<ref>) asθ_k^*= β/κ(ω,λ)exp-ln1- k/κ(ω,λ) + 𝒪(β^2), ≈β/κ(ω,λ)expk/κ(ω,λ) ,where the approximate exponential form is valid provided k is sufficiently small compared to κ̃(ω,λ). Near the threshold, the density of infected nodes for each degree class is to good approximation ρ_k^* ≈λ k θ_k^* [Eq. (<ref>)]. In the quasi-static limit (ω→ 0) and considering λ≪ 1, κ̃(ω, λ≪ 1) ≈ 2/λ^2 [Eq. (<ref>)], which leads to the exponential formρ_k^* ∼ k expλ^2 k/2 ,This form has been obtained previously by other means in Ref. <cit.>, based upon the results of Ref. <cit.>. However, they needed to extract κ∼λ^-2 from numerical simulations, whereas it emerges naturally in our framework. A similar expression has also been found in Ref. <cit.> to describe the hub lifetime.However, the approximate expression Eq. (<ref>) will be inadequate to describe the activity of high degree nodes if k ∼κ̃(ω, λ). In fact, in Sec. <ref> we show that the ratio k_max/κ→ 1 near the threshold for a hub dominated phase transition and the development of Eq. (<ref>) breaks down.§ THRESHOLDWe now turn our attention towards the absorbing-state threshold λ_c. Using the perturbative development of Sec. <ref>, we obtain an explicit upper bound and an implicit expression for λ_c, which we analytically and numerically compare with existing expressions gathered in Table <ref>. §.§ Explicit upper bound An important parameter from the perturbative development is κ(ω,λ), that we call hereafter the self-activating degree. In fact, it will become clear throughout the following sections that κ is a good proxy of the minimal degree class able to sustain by itself the dynamics in its neighborhood with correlated reinfections. In the absorbing-state limit, Eq. (<ref>) leads to the constraint κ(ω,λ_c) ≥ k_max. This can be interpreted as follows : the self-activating degree must be higher than the maximal degree, otherwise the system would be in an active phase, sustained by the maximal degree class. This constraint is rewritten asλ_c(ω) ≤1 + ω + √(2 k_max -1 + ω(3 k_max -1) + ω^2 k_max)/k_max-1 .Equation (<ref>) sets a general upper bound on the threshold λ_c for any rewiring regime specified by ω. 
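As a quick numerical illustration of this bound (reading the expression above as λ_c(ω) ≤ [1 + ω + √(2k_max − 1 + ω(3k_max − 1) + ω²k_max)]/(k_max − 1)), the short Python snippet below, with arbitrarily chosen values of ω and k_max, shows that the bound decays roughly as k_max^(-1/2):

import numpy as np

def lambda_c_upper_bound(k_max, omega):
    # Upper bound on the absorbing-state threshold for maximal degree k_max and rewiring rate omega.
    return (1 + omega + np.sqrt(2 * k_max - 1 + omega * (3 * k_max - 1)
                                + omega**2 * k_max)) / (k_max - 1)

for k_max in (1e2, 1e4, 1e6):
    print(int(k_max), lambda_c_upper_bound(k_max, omega=0.0), lambda_c_upper_bound(k_max, omega=1.0))
# The bound vanishes as k_max -> infinity for any finite omega.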
Notably, our approach predicts a vanishing threshold for any random networks with finite ω in the limit k_max→∞.In the quasi-static limit, we haveλ_c(ω→ 0) ≡λ_c^qs≤1 + √(2k_max -1 )/k_max-1 .For large k_max, Eq. (<ref>) is well approximated by . This upper bound is qualitatively in agreement with QMF (see Table <ref>) and numerical simulations on static networks <cit.>.Moreover, Eq. (<ref>) can be associated with the threshold of a star graph with k_max leaves <cit.>. This is a natural constraint, since this star is certainly a subgraph of the network due to the presence of k_max degree nodes. While Eq. (<ref>) is slightly different from the threshold suggested by the exact analysis of the star graph <cit.>, it is identical to the threshold obtained from PQMF <cit.>. In the annealed limit, one expects a finite threshold in the limit k_max→∞ for bounded second moment k^2 <cit.>, i.e for any degree distribution that asymptotically decreases faster than P(k) ≃ k^-3, in agreement with HMF. For this condition to be satisfied, Eq. (<ref>) prescribes that the rewiring rate ω≳√(k_max). Therefore, a network with higher degree nodes requires a faster rewiring dynamics to be considered annealed. §.§ Self-consistent expression Using the definition of β in Eq. (<ref>) with the first order developments of Eqs (<ref>), we write the self-consistent expression β = β k(k-1)f_k + ω k^2 f_k(2 + ω + λ)/λk + 𝒪(β^2),which can be rewritten as𝒪(β)=λ - (2+ω)k f_k/(2+ω)k^2 f_k - 2 k f_k .In the absorbing-state limit, which implies β→ 0, the term in parentheses on the right must be zero. This defines an implicit expression for the thresholdλ_c(ω)= (2+ω)k f_k(ω,λ_c)/(2+ω)k^2 f_k(ω,λ_c)-2k f_k(ω,λ_c) .Equation (<ref>) is a central result of the RNA—it allows the accurate evaluation of λ_c for any degree distribution P(k), and any time scale fixed by ω. For arbitrary ω and P(k), Eq. (<ref>) is transcendental and must be solved numerically. §.§ Correspondence with existing approaches The transcendental expression for the threshold admits some simplifications for certain limiting cases, leading to many correspondences with current formalisms. First, we consider the extreme regimes of the rewiring process. Equation (<ref>) becomesλ_c= k/k^2 if ω→∞ ,kf_k^qs/k^2f_k^qs-kf_k^qs if ω→ 0.where f_k (ω→ 0, λ_c) ≡ f_k^qs. Hence, we recover as expected the HMF threshold <cit.> in the annealed limit. In the quasi-static limit, we obtain a threshold similar in form to the one predicted by PHMF, except for the presence of f_k^qs in each average (see Table <ref>).To make further progress in the quasi-static limit, let us consider the limit k_max→∞. To simplify the notation, we let . In this case, there are two possible scenarios for the threshold, depending on the scaling of κ_0 with k_max. On the one hand, if , then f_k →β/κ_0, which is independent of the degree. On the other hand, if κ_0/k_max→ c ≥ 1, then f_k depends strongly on the degree and the threshold λ_c is obtained directly. Together, this leads toλ_c^qs = k/k^2-k if κ_0/k_max→∞ ,√(2)/√(c k_max) if κ_0/k_max→ c. In accordance with the literature and our previous discussion in Sec. <ref>, we identify the first case in Eq. (<ref>) (incidentally the exact same form as the PHMF threshold) with the collective activation scenario. Indeed, since the self-activating degree κ_0 is much larger than the maximal degree k_max just beyond the threshold, none of the degree classes are able to self-sustain the dynamics. The critical phenomenon is therefore truly a collective one. 
We associate the second case in Eq. (<ref>) with the hub activation scenario. Effectively, κ_0 ∼ k_max, such that the active phase just beyond the threshold is attributed to the self-activation of the maximal degree class in the network.We can again relate the scaling with k_max (the second case of Eq. (<ref>)) with the threshold of the star graph <cit.>. The subgraph containing the hubs and their neighbors (maximal degree stars) is therefore the dominant topological structure responsible for the onset of the active phase.This correspondence can be verified explicitly for power-law degree distributions P(k) ∼ k^-γ, for which a transition between the collective and hub dominated scenario appears at γ = 5/2 <cit.>. This is done in Fig. <ref> where, as expected, the ratio κ_0/k_max is a growing function of k_max for γ < 5/2, while it goes to 1 for γ > 5/2—the threshold then coalesces with the upper bound (<ref>). This type of result has been observed numerically <cit.> and is coherent with individual-based approaches <cit.>. Precisely at γ = 5/2, the ratio of the first two moments, k^2/k, is equal to √(k_max k_min), which lead all curves of κ_0/k_max to cross at the same point c = 2 k_min.The two different expressions in Eq. (<ref>) are similar to the ones for QMF (see Table. <ref>). One is reminded that the QMF estimate for the epidemic threshold is formally a lower bound for the real threshold <cit.>, but it is nonetheless qualitatively correct <cit.>. Therefore, Eq. (<ref>) has the appropriate behavior in both the annealed and quasi-static limits. This is further validated with numerical simulations (see Figs. <ref> and <ref>).§.§ Comparison with simulations We expect that Eq. (<ref>) should be a good approximation of λ_c for finite size realizations of the configuration model with large N. This can be verified by sampling the configurations of the system that do not fall on the absorbing state, the quasi-stationary distribution <cit.>, to evaluate the susceptibilityχ = E[n^2]-E[n]^2/E[n] ,with n ≤ N the number of infected nodes in the system and E[⋯] denotes the expectation over the quasi-stationary distribution. The susceptibility exhibits a sharp maximum at λ_p(N) as shown in Fig. <ref>(a) and <ref>(b), corresponding to the epidemic threshold of the system in the thermodynamic limit <cit.>. We have first validated Eq. (<ref>) regarding the two possible activation schemes using a power-law degree distribution P(k)∼ k^-γ in the quasi-static limit. Figures <ref>(c) and <ref>(d) show that the RNA yields a threshold in agreement with the susceptibility for both the collective (γ≤ 5/2) and the hub dominated (γ > 5/2) phase transition. As a comparison, it is seen in Fig. <ref>(d) that the prediction of PHMF does not reproduce the scaling of λ_p(N) for the hub activation scenario. This is explained by the fact that this approach neglects the neighborhood heterogeneity. Despite being accurate for collective activation <cit.>, as seen in Fig. <ref>(c), PHMF is unable to describe correctly a hub dominated dynamics.Moreover, Eq. (<ref>) is versatile and predicts the threshold for all intermediate regimes between the annealed and quasi-static limit. To illustrate this feature, we have extended the standard quasi-stationary distribution method to include the rewiring procedure (see Appendix <ref>). For the sake of simplicity, we have applied it to a regular random network with distribution P(k) = δ_kk_0, for which Eq. 
(<ref>) yields the thresholdλ_c(ω)= 2 + ω/(2+ω)k_0 - 2 .The validation is presented in Fig. <ref>. Equation (<ref>) reproduces with good accuracy the smooth transition from one regime to another. §.§ Non-monotonicity of the threshold Equation (<ref>) and Fig. <ref> suggest a monotically decreasing threshold with growing rewiring rate ω. One may ask: is this always the case? Equation (<ref>) is much more intricate and does not possess an explicit dependence upon ω for general degree distributions. To answer this question, it is important to note that the random rewiring of the edges affects the threshold in two different ways. On the one hand, it promotes the contact between infected and susceptible nodes (the dynamic correlation is reduced), which decreases the threshold (see Fig. <ref>). On the other hand, random rewiring inhibits the reinfection of hubs by their neighbors, which is driving the hub dominated phase transition.For heterogeneous networks that are affected by both mechanisms, this leads to a non-monotonic relation for λ_c(ω), as presented in Fig. (<ref>). There exists a value ω_opt at which λ_c(ω) is maximized : the hub reinfection mechanism is inhibited, without too much stimulating the spreading through new infected-susceptible contacts. The value ω_opt then defines the optimal rewiring rate to hinder the infection spreading on a network with a specified degree distribution.§ CRITICAL EXPONENTSTo complete the phase transition portrait, we address the theoretical determination of the critical exponents of ρ^*, the mean infected density, and θ_k^*, which describes the neighborhood for each degree class. More specifically, we characterize the scaling exponents δ associated withρ^* ∼ (λ - λ_c)^δ ,and η_k related toθ_k^* ∼ (λ -λ_c)^η_k . To make analytical progress, we restrict ourselves to power-law degree distribution P(k) = A k^-γ in the limit k_max→∞. The case ω→∞, the annealed limit, has already been analyzed through the HMF framework <cit.> and leads to the following critical exponentsδ^HMF = 1/(3-γ)for γ < 3, 1/(γ - 3)for3 <γ < 4, 1for γ≥ 4, η_k^HMF = (γ -2)/(3-γ)for γ < 3, 1/(γ - 3)for3 <γ < 4, 1for γ≥ 4,with η_k being the same ∀k. Note that for γ > 3, λ_c > 0 for annealed networks. In this section, we consider the case study of finite ω, leading to a vanishing threshold λ_c → 0 for all degree distribution exponents γ in the limit k_max→∞ [see Eq. (<ref>)]. §.§ Bounds on the critical exponentsThe solution for θ_k^* in Eq. (<ref>) has a complicated dependence on each degree class and is ill suited for the direct estimation of the critical exponents. Instead, we consider lower and upper bounds for various quantities near the absorbing phase, each identified by the subscript “-” or “+” respectively. For instance, θ_-^* and θ_+^* are lower and upper bounds for θ_k^* respectively, valid for all degree classes. We are mostly interested in the scaling of these quantities with λ near the absorbing phase, hence lower and upper bounds are expressed only up to a constant factor. According to Eq. (<ref>), we can set the following bounds for θ_k^* (see Appendix <ref> for details) θ_-^*≡β/κ_-∼Ω^S^*_- + ωΘ^*_- ,θ_+^*≡1/α_+∼Ω^I^*_+ + ωΘ^*_+ , The bracket [x]_-/+ indicates that we take the lower/upper bound of x. This permits us to obtain bounds for other quantities in terms of the bounds for θ_k^*—for instance Ω^S_-^* in terms of θ_-^*, leading to self-consistent expressions. Since the developments for lower and upper bounds are the same, we write explicit equations in terms of θ_±^*. 
For Ω^S^*, according to Eq. (<ref>), this leads toΩ^S_±^* = λ(1-θ_±^*)/k[ A θ_±^* ∫_k'^∞k^2- γ - k^1- γ/1 + λθ_±^* k k.+ θ_±^* (k-1)k_k'. ] +𝒪λ^2 θ_±^*^2,where ⋯_k' represents an average over P(k) from k_min to k'-1, and k' is a finite value chosen such that the rest of the average can be approximated by an integral. For λθ_±^* → 0, we can then extract the leading terms of the integral in Eq. (<ref>) (see Appendix <ref>). This leads toΩ^S_±^* =(1- θ_±^*) [ . a_1(λθ_±^*)^γ - 2 + a_2 λθ_±^* + a_3 (λθ_±^*)^γ - 1. ]+ 𝒪(λ^2 θ_±^*^2)Similarly, using Eq. (<ref>) and (<ref>), we obtainΩ^I_±^* = λ + b_1(λθ_±^*)^γ - 1/ρ_±^* + b_2 λ^2 θ_±^*^2/ρ_±^*+ b_3 (λθ_±^*)^γ/ρ_±^* + 𝒪λ^3 θ_±^*^3/ρ_±^* ,Θ_±^* =c_1 (λθ_±^*)^γ - 2 + c_2 λθ_±^* + 𝒪(λ^2 θ_±^*^2) ,ρ_±^* =d_1 λθ_±^* + d_2 (λθ_±^*)^γ - 1 + 𝒪(λ^2 θ_±^*^2)where the coefficients a_i,b_i,c_i,d_i are non-vanishing constants in the absorbing-state limit. We now consider separately the region 2 < γ < 3 and γ≥ 3.§.§.§ Region 2 < γ < 3 Since Ω^S^* and Θ^* possess the same critical behavior according to Eqs. (<ref>) and (<ref>), the lower bound θ^*_- possesses the simple self-consistent expressionθ^*_-∼ (λθ^*_-)^γ-2θ^*_-∼λ^(γ - 2)/(3- γ) .Combining this with Eq. (<ref>), we obtainρ^*_-∼λ^1/(3- γ)≡λ^δ_+ .The upper bound is slightly more complicated : Ω^I^* and Θ^* might not possess the same critical behavior. However, by definition we know that Ω^I^* ≥Ω^S^* ∼Θ^*, hence Ω^I^* is always dominant for finite rewiring rates ω. This implies that a finite rewiring rate does not have any impact on the critical exponents. We therefore haveθ^*_+∼Ω^I^*_+≃λ + b_1(λθ^*_+)^γ - 1/ρ^*_+ .Using Eq. (<ref>), we obtainθ_+^*∼λ^ψ ,ρ^*_+ ∼λ^ψ +1≡λ^δ_- .whereψ = γ-2/3- γ for γ≤ 5/2, 1for γ > 5/2.Equations (<ref>) and (<ref>) fix the bounds for the critical exponent δ, as presented in Fig. <ref>. In the region γ≤ 5/2, associated to the collective activation scheme, upper and lower bounds collapse to the annealed exponent of Eq. (<ref>), namely δ= 1 / (3-γ). This is in fact the region where the annealed regime describes the dynamics well, even for static networks <cit.>. However, in the hub activation region (γ > 5/2), the bounds are different, δ_+= 1 /(3-γ), δ_- = 2, giving rise to a wide range for the values of the critical exponent. We will see in Sec. <ref> that this behavior is related to the emergence of a heterogeneous critical phenomenon in this region. Nevertheless, it is straightforward to verify that these bounds are not in contradiction with the exact ones (γ-1 ≤δ≤ 2 γ - 3) of Ref. <cit.> for static networks. §.§.§ Region γ≥ 3 The lower bound θ^*_- in this region can be determined again using θ^*_-∼Ω^S^*_- + ωΘ^*_-. More explicitly, in this region we haveθ^*_-≃ e_1 λθ^*_- + e_2(λθ^*_-)^γ - 2 - e_3 λ (θ^*_-)^2,where e_i are non-vanishing constants formed by the combination of a_i,c_i. This leads to a critical behavior of the formρ^*_-∼θ^*_-∼λ - λ_e^ν ,where ν = max1, 1/(γ-3). Therefore, the lower bound is associated with a finite effective threshold defined by . This is at odds with the upper bound in this region, which is the continuity of the previous regionθ^*_+ ∼λ ,ρ^*_+ ∼λ^2 .In brief, the two bounds are even more separated from each other in this region.§.§ Heterogeneous critical phenomenonUsing the results of Sec. <ref>, it is also possible to get some insight on the critical behavior of θ_k^* for extreme degree classes, θ_k_min^* and θ_k_max^* (the limit k_max→∞ is still implicitly considered). 
We stress that θ_k_min^* and θ_k_max^* are different from θ^*_- and θ^*_+.According to Eq. (<ref>), we have the following behavior near the absorbing phase (see Appendix <ref> for details) θ_k_min^*≃β/κ∼Ω^S^* + ωΘ^*,θ_k_max^*≃1/α∼Ω^I^* + ωΘ^*. Using the expressions for θ^*_- and θ^*_+ to bound Ω^S^* and Ω^I^*, we arrive at the following portraitθ_k_min^*≲λ^(ψ + 1)(γ - 2) ,θ_k_min^*≳θ^*_- ,θ_k_max^*∼λ^ψ ,which characterizes the critical exponents η_k_min and η_k_max. For instance, for 2 < γ < 3, we havemin2γ - 4, γ-2/3- γ≤η_k_min≤γ-2/3- γ ,andη_max = min1, γ-2/3- γ .It is a striking new result : as presented in Fig. (<ref>), in the hub dominated regime (γ > 5/2), the bounded regions for η_k_min and η_k_max are disjoint. These different asymptotic scalings are validated for finite k_max in Fig. <ref>.Different critical exponents for extreme degree classes is also an elegant explanation for the heterogeneity of θ_k^* observed in Fig. <ref>(b). Indeed, near the absorbing phase,θ_k_min^*/θ_k_max^*∼λ^η_k_min - η_k_max≡λ^Δ ,with Δ > 0 for γ > 5/2. Moreover, it illustrates that the critical phenomenon is itself heterogeneous, involving different mechanisms depending on the degree class : for hubs, activity is supported locally through correlated reinfections, while for the rest of the system, activity is mostly due to the propagation induced by the hubs. This results also have an impact on how ρ_k^* grows for each degree class beyond λ_c, according to Eq. (<ref>). It explains the wide bounds we obtained for ρ^* = ρ_k^* in the hub activation region, since ρ_k^* grows differently for each degree class. § BEYOND THE HUB ACTIVATION THRESHOLDAs presented in Sec. <ref>, a collective activation leads to θ_k^* ∼ f_k independent of the degree, while a hub activation results in a growing function of the degree (see Fig. <ref>). The latter is formally identified as a heterogeneous critical phenomenon [Eq. (<ref>)]. However, this analysis based on the critical exponents is well defined only in the combined limit k_max→∞ and , in which case the impact of the rewiring is lost. Beyond the threshold and for finite k_max, the dichotomy is not as well defined and the rewiring rate ω does have a significant impact. In fact, the structural dynamics permits us to interpolate between the two scenarios. According to Eq. (<ref>), the rewiring rate ω increases the self-activating degree κ(ω,λ), forcing a more collective activation. This leads to a more homogeneous neighborhood among the degree classes near the absorbing phase, as seen in Fig. <ref>. Also, critical exponents of Sec. <ref> do not inform us on the behavior of the system far beyond the hub activation threshold. For power-law degree distribution having an exponent γ > 3, it has been observed in numerical simulations that the delocalization of the dynamics, where not only hubs sustain the propagation, happens at a finite λ. This gives rise to a second peak on the susceptibility curve χ, associated with the activation of the shell with the largest index in the K-core decomposition <cit.> and seems to correspond with the HMF threshold <cit.>.Our compartmental formalism is not well suited to identify precisely this second transition. However, we are able to describe how the system behaves as the infection rate is increased beyond λ_c, towards this delocalized regime. An interesting feature is the successive activation of the degree classes. According to Eq. (<ref>), the self-activating degree κ is a monotically decreasing function of λ. 
Since κ(ω,λ_c) → k_max for hub activation, κ(ω,λ) = k < k_max for λ > λ_c. In words, for λ beyond the absorbing phase, lower degree classes than k_max are able to self-sustain the dynamics in their neighborhood, largely increasing their infected density ρ_k^*. This successive activation mechanism is observed in Fig. <ref>(a), where each ρ_k^* sharply increases as k ∼κ, then saturates according to Eq. (<ref>). This is also well portrayed by the derivative of ρ_k^* with respect to λ, ∂_λρ_k^* ≡ζ_k^*, which exhibits a maximum for k ∼κ [Fig. <ref>(b)].These successive activations could be related to the smeared phase transition observed in Refs. <cit.> for power-law degree distribution with γ > 3. In a smeared phase transition, parts of the network exhibit an ordering transition independently, which in this case can be associated with the high degree nodes and their direct neighbors. § CONCLUSIONUsing a degree-based theoretical framework, we have developed a stationary state analysis to study the SIS dynamics on time-varying configuration model networks. The rewiring mechanism has allowed us to take into account the effect of an effective structural dynamics, which mathematically represents an interpolation between a heterogeneous pair approximation (HPA) and a heterogeneous mean field theory (HMF). A general portrait of the phase transition that characterizes both collective and hub activation has emerged, filling the theoretical gap between degree-based and individual-based formalisms.First, we have shown that it is possible to discern the type of activation by studying the properties of θ_k^* near the absorbing phase, providing an alternative to the study of the principal eigenvector <cit.>. This new point of view has inspired our analysis of the phase transition and allowed us to distinguish the hub and collective activation within our degree-based framework. Second, by using a perturbative scheme, we have obtained a self-consistent expression for the absorbing-state threshold λ_c. Due to the analytical tractability of the RNA, we have been able to establish several correspondences with existing threshold expressions. Moreover, the generality of our threshold expression has allowed us to illustrate the impact of a time-varying structure by tuning the rewiring rate, leading to a smooth and possibly non-monotonic relation λ_c(ω).Third, by means of bounds on various quantities, we have characterized the critical exponents of ρ^* and θ_k^* for power-law degree distributions. Noteworthy, it has allowed us to unveil the heterogeneous critical phenomenon for the hub activation scenario. This offers an elegant explanation for the heterogeneity of θ_k^* in Fig. <ref>(b) and also permits to discriminate between collective and hub-dominated phase transitions.Finally, we have studied the active phase beyond a hub activation threshold. The time variations of the structure leads to a more homogeneous neighborhood among the degree classes. Therefore, the dichotomy discussed in Sec. <ref> is not as clear-cut anymore since the rewiring rate allows to interpolate between the two activation scenarios. Also, in between the localized and delocalized regime for a hub-dominated phase transition, we have observed that each degree class undergoes a certain type of activation as the infection rate λ is increased. These independent activations could be related to the smeared phase transition—with inhomogeneous ordering—observed in Refs. <cit.>.Several extensions of this work can be studied. 
For instance, the stationary state analysis can be applied to networks featuring other types of rewiring processes. These can be adaptive processes <cit.> or mechanisms that preserve other structural properties apart from the degree sequence, such as degree assortativity <cit.>. Finally, due to the generality and versatility of the RNA, it can easily be applied to other binary-state dynamics.

We thank Laurent Hébert-Dufresne for useful discussions and comments. We acknowledge Calcul Québec for computing facilities. This research was undertaken thanks to the financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de recherche du Québec — Nature et technologies (FRQNT) and the Canada First Research Excellence Fund.

§ DEVELOPMENT OF THE PAIR APPROXIMATION

We adapt the approach proposed in Refs. <cit.>, which starts with a set of differential equations governing the evolution of the compartments of nodes of a specified degree k and infected degree l (see also Refs. <cit.>). Let s_kl(t) [i_kl(t)] be the probability that a degree k node is susceptible (infected) and has l ≤ k infected neighbors. The rate equations for these probabilities are

ds_kl/dt = i_kl - λ l s_kl + [1+ω(1-Θ)][(l+1)s_k(l+1) - l s_kl] + (Ω^S + ωΘ)[(k-l+1)s_k(l-1) - (k-l)s_kl] ,

di_kl/dt = λ l s_kl - i_kl + [1+ω(1-Θ)][(l+1)i_k(l+1) - l i_kl] + (Ω^I + ωΘ)[(k-l+1)i_k(l-1) - (k-l)i_kl] ,

where Ω^S(t) and Ω^I(t) are the mean infection rates for the neighbors of susceptible and infected nodes. These rates can be estimated from the compartmentalization <cit.>, yielding

Ω^S = λ ∑_l (k-l) l s_kl / ∑_l (k-l) s_kl ,    Ω^I = λ ∑_l l^2 s_kl / ∑_l l s_kl .

Equations (<ref>) form an 𝒪(k_max^2) system of equations and do not lead to simple stationary solutions. To obtain a pair approximation formalism from Eqs. (<ref>), we use the dimensionality reduction scheme proposed in Ref. <cit.>. Let ϕ_k(t) be the probability of reaching an infected node following a random edge starting from a degree k infected node. Using Eqs. (<ref>), we can define a rate equation for θ_k and ϕ_k together with the definitions ∑_l l s_kl = (1-ρ_k) k θ_k and ∑_l l i_kl = ρ_k k ϕ_k. This leads to the following system of equations

dθ_k/dt = -λ/[k(1-ρ_k)] ∑_l l^2 s_kl + r_k ϕ_k + (Ω^S + ωΘ)(1-θ_k) - [1+ω(1-Θ)]θ_k - θ_k[r_k - λ k θ_k] ,

dϕ_k/dt = λ/(k ρ_k) ∑_l l^2 s_kl - ϕ_k + (Ω^I + ωΘ)(1-ϕ_k) - [1+ω(1-Θ)]ϕ_k + ϕ_k[1 - λ k θ_k r_k^-1] ,

with r_k ≡ ρ_k/(1-ρ_k). To obtain a closed system for Eqs. (<ref>), we use the pair approximation

∑_l=0^k l^2 s_kl ≈ (1-ρ_k)[k θ_k + k(k-1)θ_k^2] ,

which implies that the state of each neighbor is independent. Eqs. (<ref>) and (<ref>) follow accordingly.
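The compartmental system above is straightforward to integrate numerically. The following Python sketch illustrates such an integration; it is not the authors' code. The bracket placement follows the reconstruction of the rate equations given above, the global quantity Θ is assumed here to be the probability that a uniformly chosen stub emanates from an infected node, the sums defining Ω^S and Ω^I are assumed to run over both k and l, and all function and variable names are ours.

import numpy as np
from math import comb

def integrate_compartments(pk, lam, omega, dt=0.01, steps=20000, rho0=0.01):
    """Crude forward-Euler integration of the (s_kl, i_kl) rate equations above."""
    ks = sorted(pk)
    mean_k = sum(k * pk[k] for k in ks)
    s = {k: np.zeros(k + 1) for k in ks}    # s[k][l]: susceptible, degree k, l infected neighbors
    inf = {k: np.zeros(k + 1) for k in ks}  # inf[k][l]: infected, degree k, l infected neighbors
    for k in ks:                            # initial condition: neighbors infected independently
        pl = np.array([comb(k, l) * rho0**l * (1 - rho0)**(k - l) for l in range(k + 1)])
        s[k], inf[k] = (1 - rho0) * pl, rho0 * pl
    for _ in range(steps):
        # Theta: probability that a uniformly chosen stub emanates from an infected node (assumption)
        theta = sum(pk[k] * k * inf[k].sum() for k in ks) / mean_k
        # Omega^S and Omega^I as in the text, with the sums taken over k and l (assumption)
        num_s = sum(pk[k] * sum((k - l) * l * s[k][l] for l in range(k + 1)) for k in ks)
        den_s = sum(pk[k] * sum((k - l) * s[k][l] for l in range(k + 1)) for k in ks)
        num_i = sum(pk[k] * sum(l * l * s[k][l] for l in range(k + 1)) for k in ks)
        den_i = sum(pk[k] * sum(l * s[k][l] for l in range(k + 1)) for k in ks)
        om_s = lam * num_s / max(den_s, 1e-12)
        om_i = lam * num_i / max(den_i, 1e-12)
        down = 1 + omega * (1 - theta)      # neighbor recovery or rewiring towards a susceptible stub
        for k in ks:
            l = np.arange(k + 1)
            sk, ik = s[k], inf[k]
            s_lm1 = np.concatenate(([0.0], sk[:-1]))   # s_{k,l-1}
            s_lp1 = np.concatenate((sk[1:], [0.0]))    # s_{k,l+1}
            i_lm1 = np.concatenate(([0.0], ik[:-1]))
            i_lp1 = np.concatenate((ik[1:], [0.0]))
            ds = ik - lam * l * sk \
                 + down * ((l + 1) * s_lp1 - l * sk) \
                 + (om_s + omega * theta) * ((k - l + 1) * s_lm1 - (k - l) * sk)
            di = lam * l * sk - ik \
                 + down * ((l + 1) * i_lp1 - l * ik) \
                 + (om_i + omega * theta) * ((k - l + 1) * i_lm1 - (k - l) * ik)
            s[k] = sk + dt * ds
            inf[k] = ik + dt * di
    return {k: inf[k].sum() for k in ks}    # stationary rho_k per degree class

Sweeping lam at fixed omega for a given degree distribution pk and recording the returned densities reproduces, in spirit, the stationary quantities ρ_k^* discussed in the main text.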
§ MONTE-CARLO SIMULATIONS

To simulate the SIS dynamics on networks, we used a modified Gillespie algorithm <cit.>. During the simulation process, we track the number of infected nodes n(t) and the number of stubs emanating from them u(t). The total number of stubs is 2M and is fixed according to our rewiring process. At each step, three event types are possible, with the following probabilities

P(Recovery) = n/(n + λu + ωM/2) ,
P(Infection) = λu/(n + λu + ωM/2) ,
P(Rewiring) = (ωM/2)/(n + λu + ωM/2) .

Each event occurs as follows:

* Recovery event: an infected node is chosen randomly and becomes susceptible.
* Infection attempt event: an infected node is chosen proportionally to its degree. We then choose one of its emanating stubs randomly and infect the node at the other end point. If it is already infected, we do nothing: this phantom process <cit.> corrects the probability in order to make the process equivalent to randomly choosing an edge among the set of all susceptible-infected edges.
* Rewiring event: two edges (a_1, b_1) and (a_2, b_2) are randomly chosen, with a_i, b_i the labels for the nodes; choosing an edge (b_1, a_1) is equally likely. We then rematch the stubs according to the following scheme: (a_1, b_1), (a_2, b_2) ↦ (a_1, b_2), (a_2, b_1). Loops and multi-edges are permitted.

After all events—even the frustrated ones—we update the time with t ↦ t + Δt, where Δt ≡ E[Δt] = [n(t) + λu(t) + ωM/2]^-1.

To evaluate some observables for infection rates λ near the absorbing phase, we sample the configurations of the system that do not fall into the absorbing state—the quasi-stationary distribution <cit.>. When the system visits the absorbing state, the current state is replaced by a configuration randomly chosen among the set ℋ of previously stored active configurations. Also, with probability ξΔt, each active configuration is stored, replacing a randomly chosen one among ℋ, thus updating the set of states proportionally to their average lifetime <cit.>. The system is then expected to converge to the quasi-stationary distribution <cit.>, over which we measure observables. In all our simulations, we chose |ℋ| ∈ [50,100] and ξ = 10^-2.
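A compact sketch of the modified Gillespie procedure just described is given below. It is a toy implementation under stated simplifications: the stub count u is recomputed by a full scan at every step and the quasi-stationary bookkeeping is omitted. Only the event rules and probabilities stated above are used, and the function name and arguments are ours.

import random

def modified_gillespie(edges, N, lam, omega, t_end, seed_fraction=0.1):
    """Toy version of the modified Gillespie dynamics described above."""
    M = len(edges)
    infected = set(random.sample(range(N), max(1, int(seed_fraction * N))))
    t, history = 0.0, []
    while t < t_end and infected:
        # u: number of stubs emanating from infected nodes (recomputed naively each step)
        u = sum((a in infected) + (b in infected) for a, b in edges)
        total = len(infected) + lam * u + omega * M / 2
        r = random.random() * total
        if r < len(infected):                          # recovery event
            infected.remove(random.choice(tuple(infected)))
        elif r < len(infected) + lam * u:              # infection attempt event
            # choosing an infected stub uniformly is equivalent to choosing an infected
            # node proportionally to its degree and then one of its stubs at random
            stubs = [(a, b) for a, b in edges if a in infected] + \
                    [(b, a) for a, b in edges if b in infected]
            _, target = random.choice(stubs)
            if target not in infected:                 # phantom process: do nothing otherwise
                infected.add(target)
        else:                                          # rewiring event
            e1, e2 = random.randrange(M), random.randrange(M)
            (a1, b1), (a2, b2) = edges[e1], edges[e2]
            if random.random() < 0.5:                  # either orientation of an edge is equally likely
                a1, b1 = b1, a1
            if random.random() < 0.5:
                a2, b2 = b2, a2
            edges[e1], edges[e2] = (a1, b2), (a2, b1)  # loops and multi-edges are permitted
        t += 1.0 / total                               # mean time increment E[dt]
        history.append((t, len(infected) / N))
    return history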
§ SUPPLEMENTARY DEVELOPMENTS FOR THE CRITICAL EXPONENTS

§.§ Lower and upper bounds on θ_k^*

Our insight is that θ_k^* is a monotonically increasing function of the degree k. Higher degree nodes have a higher probability of being infected, hence their neighbors can only be more infected on average. This is reflected in Eq. (<ref>), despite not being explicit. The lower and upper bounds are then fixed using the minimal and maximal values for the degree in Eq. (<ref>):

θ_-^* ≡ β/κ_- ≤ β/(κ - 1) ,    θ_+^* ≡ 1/α_+ = lim_k→∞ θ_k^* .

The parameters α, β, κ are considered finite when taking the limit k → ∞ in the second equation, which is true for any λ > λ_c.

§.§ Integral approximation

Let us consider an integral of the form

I = k'^-a b^-1 ∫_k'^∞ k^(a-1)/[1 + k(bk')^-1] dk ,

where b ≡ (λθ^* k')^-1 and a < 1, equal to (3-γ) or (2-γ) according to the integrals appearing in Eq. (<ref>). Using z ≡ k' k^-1, this can be rewritten as

I = ∫_0^1 z^-a/(1+bz) dz .

This integral can be associated with the hypergeometric function <cit.>

I = (1-a)^-1 _2F_1(1, 1-a; 2-a; -b) .

Since near the absorbing phase b ≫ 1, to extract the leading terms of Eq. (<ref>), we use the transformation formulas for the hypergeometric function <cit.>, leading to

I = Γ(1-a)Γ(a) b^(a-1) - (ab)^-1 _2F_1(1, a; a+1; -b^-1) .

The leading terms are finally

I = h_1 b^(a-1) + h_2 b^-1 + 𝒪(b^-2) ,

where the h_i are non-vanishing constants. Appropriate limits must be taken for a = 0 or negative integer values of a.

§.§ Critical behavior of θ_k_min^* and θ_k_max^*

Near the phase transition (λ → 0 in this case), according to Eq. (<ref>), κ ≃ κ(ω,λ) is very large. Since we can choose λ arbitrarily small, we can let κ → ∞, keeping however κ ≪ k_max → ∞. For θ_k_min^*, we simply use the perturbative development [Eq. (<ref>)] to extract the leading term

θ_k_min^* = β/(κ - k_min) + 𝒪(β^2) ≃ β/κ .

For θ_k_max^*, we need to expand Eq. (<ref>) in terms of κ/k_max → 0 instead. In this case, we obtain

θ_k_max^* = 1/α + 𝒪(κ/k_max) ≃ 1/α .

§ REFERENCES

[1] A. Barrat, M. Barthelemy, and A. Vespignani, Dynamical Processes on Complex Networks (Cambridge University Press, 2008).
[2] M. Newman, Networks: An Introduction (Oxford University Press, 2010).
[3] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Rev. Mod. Phys. 87, 925 (2015).
[4] J. P. Gleeson, Phys. Rev. Lett. 107, 068701 (2011).
[5] J. P. Gleeson, Phys. Rev. X 3, 021004 (2013).
[6] M. Boguñá and R. Pastor-Satorras, Phys. Rev. E 66, 047104 (2002).
[7] C. Castellano and R. Pastor-Satorras, Phys. Rev. Lett. 105, 218701 (2010).
[8] C. Castellano and R. Pastor-Satorras, Sci. Rep. 2, 371 (2012).
[9] S. C. Ferreira, C. Castellano, and R. Pastor-Satorras, Phys. Rev. E 86, 041125 (2012).
[10] A. S. Mata and S. C. Ferreira, EPL 103, 48003 (2013).
[11] A. S. Mata, R. S. Ferreira, and S. C. Ferreira, New J. Phys. 16, 053006 (2014).
[12] A. S. Mata and S. C. Ferreira, Phys. Rev. E 91, 012816 (2015).
[13] C.-R. Cai, Z.-X. Wu, M. Z. Q. Chen, P. Holme, and J.-Y. Guan, Phys. Rev. Lett. 116, 258301 (2016).
[14] W. Cota, S. C. Ferreira, and G. Ódor, Phys. Rev. E 93, 032322 (2016).
[15] S. C. Ferreira, R. S. Sander, and R. Pastor-Satorras, Phys. Rev. E 93, 032314 (2016).
[16] P. Van Mieghem, J. Omic, and R. Kooij, IEEE/ACM Trans. Netw. 17, 1 (2009).
[17] E. Cator and P. Van Mieghem, Phys. Rev. E 85, 056111 (2012).
[18] M. Shrestha, S. V. Scarpino, and C. Moore, Phys. Rev. E 92, 022821 (2015).
[19] W. Wang, M. Tang, H. E. Stanley, and L. A. Braunstein, Rep. Prog. Phys. 80, 036603 (2017).
[20] T. Gross, C. J. D. D'Lima, and B. Blasius, Phys. Rev. Lett. 96, 208701 (2006).
[21] V. Marceau, P.-A. Noël, L. Hébert-Dufresne, A. Allard, and L. J. Dubé, Phys. Rev. E 82, 036116 (2010).
[22] P. Holme and J. Saramäki, Phys. Rep. 519, 97 (2012).
[23] A. Vazquez, B. Rácz, A. Lukács, and A.-L. Barabási, Phys. Rev. Lett. 98, 158702 (2007).
[24] N. Perra, B. Gonçalves, R. Pastor-Satorras, and A. Vespignani, Sci. Rep. 2, 469 (2012).
[25] M. Taylor, T. J. Taylor, and I. Z. Kiss, Phys. Rev. E 85, 016103 (2012).
[26] E. Valdano, L. Ferreri, C. Poletto, and V. Colizza, Phys. Rev. X 5, 021005 (2015).
[27] T. Gross and H. Sayama (eds.), Adaptive Networks (Springer, 2009).
[28] B. K. Fosdick, D. B. Larremore, J. Nishimura, and J. Ugander, arXiv:1608.00607.
[29] J. Lindquist, J. Ma, P. Van den Driessche, and F. H. Willeboordse, J. Math. Biol. 62, 143 (2011).
[30] I. Z. Kiss, J. C. Miller, and P. L. Simon, Mathematics of Epidemics on Networks: From Exact to Approximate Models, Vol. 46 (Springer, 2017).
[31] K. T. D. Eames and M. J. Keeling, Proc. Natl. Acad. Sci. USA 99, 13330 (2002).
[32] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[33] R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 63, 066117 (2001).
[34] J. P. Gleeson, S. Melnik, J. A. Ward, M. A. Porter, and P. J. Mucha, Phys. Rev. E 85, 026106 (2012).
[35] P. Van Mieghem, EPL 97, 48004 (2012).
[36] A. V. Goltsev, S. N. Dorogovtsev, J. G. Oliveira, and J. F. F. Mendes, Phys. Rev. Lett. 109, 128702 (2012).
[37] R. Pastor-Satorras and C. Castellano, Sci. Rep. 6, 18847 (2016).
[38] C. Castellano and R. Pastor-Satorras, Phys. Rev. X 7, 041024 (2017).
[39] M. M. de Oliveira and R. Dickman, Phys. Rev. E 71, 016129 (2005).
[40] S. C. Ferreira, R. S. Ferreira, and R. Pastor-Satorras, Phys. Rev. E 83, 066113 (2011).
[41] R. S. Sander, G. S. Costa, and S. C. Ferreira, Phys. Rev. E 94, 042308 (2016).
[42] Z.-W. Wei, H. Liao, M. Zhou, J.-R. Xie, H.-F. Zhang, B.-H. Wang, and G.-L. Chen, arXiv:1704.02925.
[43] M. Boguñá, C. Castellano, and R. Pastor-Satorras, Phys. Rev. Lett. 111, 068701 (2013).
[44] E. Cator and P. Van Mieghem, Phys. Rev. E 87, 012811 (2013).
[45] P. Van Mieghem and R. van de Bovenkamp, Phys. Rev. Lett. 110, 108701 (2013).
[46] S. Chatterjee and R. Durrett, Ann. Probab. 37, 2332 (2009).
[47] G. Ódor, Phys. Rev. E 90, 032110 (2014).
[48] M. E. J. Newman, Phys. Rev. Lett. 89, 208701 (2002).
[49] D. T. Gillespie, J. Comput. Phys. 22, 403 (1976).
[50] W. Cota and S. C. Ferreira, arXiv:1704.01557.
[51] J. Marro and R. Dickman, Nonequilibrium Phase Transitions in Lattice Models (Cambridge University Press, 2005).
[52] J. Blanchet, P. Glynn, and S. Zheng, arXiv:1401.0364.
[53] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, 2014).
http://arxiv.org/abs/1709.09257v2
{ "authors": [ "Guillaume St-Onge", "Jean-Gabriel Young", "Edward Laurence", "Charles Murphy", "Louis J. Dubé" ], "categories": [ "physics.soc-ph" ], "primary_category": "physics.soc-ph", "published": "20170926205232", "title": "Phase transition of the susceptible-infected-susceptible dynamics on time-varying configuration model networks" }
A Bimodal Network Approach to Model Topic Dynamics

Luigi Di Caro^1,3, Marco Guerzoni^1,2, Massimiliano Nuccio^1,2, Giovanni Siragusa^1,3
^1 Despina, Big Data Lab
^2 Department of Economics and Statistics "Cognetti de Martiis", University of Turin, Italy
^3 Department of Computer Science, University of Turin, Italy
=====================================================================================================================

This paper presents an intertemporal bimodal network to analyze the evolution of the semantic content of a scientific field within the framework of topic modeling, namely using the Latent Dirichlet Allocation (LDA). The main contribution is the conceptualization of the topic dynamics and its formalization and codification into an algorithm. To benchmark the effectiveness of this approach, we propose three indexes which track the transformation of topics over time, their rate of birth and death, and the novelty of their content. Applying the LDA, we test the algorithm both on a controlled experiment and on a corpus of several thousands of scientific papers over a period of more than 100 years which account for the history of economic thought.

Keywords: topic modeling, LDA, bimodal network, topic dynamics, economic thought

[1] We would like to thank JSTOR (<www.jstor.org>) for providing the data and DESPINA - Big Data Lab (<www.despina.unito.it>) and the Department of Computer Science at the University of Turin for financial support.

§ INTRODUCTION

A crucial issue in the philosophy of science consists in the understanding of the evolution of scientific paradigms within a discipline. Following <cit.>, a scientific paradigm can be thought of as the set of assumptions, legitimate theories, methods, and experiments that is adequately new to attract a group of scholars, to build a contribution to a field, and open enough to allow the exploration of different directions of research. In the traditional view, as developed for hard and mature sciences, the evolution of scientific paradigms consists in "the successive transition from one paradigm to another via revolution" <cit.>. However, a scientific field is usually composed of several research paradigms either competing or addressing different issues, and a revolution in one of those necessarily involves effects and readjustments in the entire discipline. Moreover, each new paradigm carries the legacy of the existing knowledge of past paradigms, which is often recombined into the new one. This is especially true for social sciences, in which the identification of clear scientific paradigms in the sense of Kuhn is often blurred and it is probably more correct to refer to "research traditions" <cit.>. However, whether we call them paradigms or traditions, the existence of patterns of thought which are legitimate contributions to a theory is undeniable. Thus, we can postulate that the evolution of knowledge in a scientific field is generated among a community of researchers who share a semantic area to define specific research issues, describe methodologies, and lay down results. Thus, the heterogeneity of the research tradition of a scientific field can be described with semantic analysis. The idea that some measure of word co-occurrence reveals an underlying epistemic pattern and can therefore capture the essence of evolution in science is not a new one.
Despite the difficulty in programming, the first attempts date back to the work of <cit.> and were refined when the first open code was made available a decade later <cit.>. The challenge of classifying science on the basis of its semantic content has found a renewal with the diffusion of machine learning techniques and, in particular, of the subfield of unsupervised learning <cit.>. Topic modeling includes a family of algorithms <cit.>, which are particularly performant in extracting information from large corpora of textual data by reducing dimensionality. This feature has been clearly recognised in mapping science <cit.> or news <cit.>. <cit.> review four major methods of topic modeling, including Latent Semantic Analysis (LSA), Probabilistic LSA, Latent Dirichlet Allocation (LDA) and the Correlated Topic Model (CTM). The LDA proposed in <cit.> is one of the most diffused approaches. LDA retrieves latent patterns in texts on the basis of a probabilistic Bayesian model, where each document is a mixture of latent topics described by a multinomial distribution of words. One of the major limitations of LDA lies in its inability to model and represent relationships among topics over time <cit.>.

In this paper, we address a major recurring issue in topic modeling, that is topic dynamics, or, in other words, we test a method to track the transformation of topics over time. As stated by <cit.>, LDA is a powerful approach to reduce dimensionality, but it assumes that documents in a corpus are exchangeable. On the contrary, articles and themes are sequentially organized and evolve over time. Therefore, it is not only relevant to develop a statistical model to determine the evolving topics from a sequential collection of documents, but also to measure and describe the transformation of topics and their appearance and disappearance.

In the literature of information retrieval, the dynamics of topics has been faced with two approaches <cit.>: a discriminative one monitors a change in the distribution of words or in the mixture over documents, while a generative approach searches for general topics over the whole corpus and then assigns the documents which belong to each topic <cit.>. Specifically, <cit.> introduced Dynamic Topic Modeling (DTM), a class of generative models in which the per-document topic distribution and per-topic word distributions are generated from the same distributions in a previous time frame. This approach has been very influential since it imposes a connection between the sets of topics at different periods and makes it possible to track the evolution of a single topic over time.

DTM performs very well in capturing the evolution of a single topic. However, the evolution of knowledge is much more complicated than the change of relative importance of words within a topic, since it may also involve the creation of new topics, their mutual re-combinations and, eventually, their demise. The major contribution of the paper is the conceptualization and formalization of the evolution of knowledge, conceived as different streams of semantic content which continuously appear and disappear, merge and split. Thereby we propose an original method based on inter-temporal bimodal networks of topics to compute the key elements in the evolution of knowledge.
Moreover, the ultimate goal of the paper is not to track in detail what happens within a single topic, but rather to develop indexes which can measure at the aggregate level some properties of the observed knowledge dynamics, such as an overall degree of novelty or the level of turbulence at specific time windows.

The paper is organized as follows: in the next section, we suggest a method to analytically conceptualize and measure different patterns of topic evolution. Section <ref> translates it into an algorithm which calculates some measures of merging, splitting and novelty of the topics generated by the LDA. In Section <ref>, a simple simulation tests the robustness of the method on artificial data. Finally, in Section <ref>, the same algorithm is applied to a large dataset of papers in economics: main results are presented and discussed by describing the evolution of the topics in economic science in the past century.

§ A CONCEPTUALIZATION OF KNOWLEDGE EVOLUTION

In this paper, we focus on the dynamic evolution of topics over time. With DTM, each topic K_t is linked to K_t+1, creating a topic chain which spans the years covered by the documents. Specifically, <cit.> maps each topic at time t-1 into a topic at t by chaining the per-document topic distribution α_t and the per-topic word distribution β_t,k in a state space model with Gaussian noise:

β_t,k | β_t-1,k ∼ 𝒩(β_t-1,k, σ^2 I)
α_t | α_t-1 ∼ 𝒩(α_t-1, δ^2 I)

This approach performs well in tracking incremental changes of the same topic, but it does not focus on revealing births, deaths, or possible combinations of topics, and it imposes a constant number of topics within the model. On the contrary, we are interested in discovering the structural change of topics in a corpus and in understanding the underlying topic dynamics which explain it. Thereby, we do not focus on the evolution of the single topic. The inter-temporal link across topics is not a constraint in the estimation of the model as in the DTM, but it is introduced ex-post in the empirical analysis by looking at the similarities (co-occurrence of words) amongst topics generated by independent LDAs. More in detail, while DTM models sequences of compositional random variables by chaining Gaussian distributions (thus directly embodying topic dynamics in the model), our approach operates on single and static LDAs in order to track and measure such dynamics out of the model.

The evolution of the topic structure of a corpus accumulating knowledge over time takes place for two main reasons. On the one hand, any epistemic community (say, for instance, journalists or scientists) can shift its intellectual interest to new issues and problems, which will result in different choices, frequencies and co-occurrences of words. On the other hand, language is subject to a constant evolution, in which new words, named entities, acronyms, etc. appear while other ones disappear due to decreasing use by the same community. We rule out this second scenario by assuming that in the short time frame the language is fairly stable. Under this assumption, when comparing the topics generated by a topic modeling exercise in two different, although adjacent, time windows, we should be able to capture the evolution of the scientific debate and highlight the birth, death and recombination of topics. On the one extreme, we can find a situation in which knowledge does not evolve and thus topics are stable.
On the other, we figure out the maximum of turbulence, in which new topics emerge without any semantic relation with the incumbent ones. In the latter case, we may assume the death of past topics and the birth of new ones. In between the two ideal cases, we can also draw a continuum in which we can observe both deaths and births of topics. Finally, in a most interesting scenario, rather than observing stability or turbulence, knowledge may evolve by recombining existing topics into both old and new ones. Table <ref> summarizes five typical patterns of knowledge evolution and their interpretation within a topic modeling framework.

Figure <ref> presents the five ideal types of knowledge evolution as a proximity network of topics, which we mathematically formalize as follows. Let us consider M topics that emerged as the result of a topic modeling exercise from a corpus of articles at time t and N topics at time t+1. We tackle the critical problem of tracking the transformation of the set of topics M=(1,…,A,…,M) at t into the set of topics N=(1,…,a,…,N) at t+1. Specifically, we are interested in measuring the magnitude of the various phenomena such as birth, death, merging, and splitting. Consider a similarity index based on word co-occurrence, simil[Typically, this index is the cosine similarity index, as it is used in the empirical part of the paper], between each pair of topics (A,a) with A ∈ M and a ∈ N, and consider the similarity matrix S (M × N)

S = [ simil_1,1 … simil_1,N ; ⋮ ⋱ ⋮ ; simil_M,1 … simil_M,N ] ,

with rows indexed by the topics in M and columns by the topics in N. For the sake of clarity and with reference to Figure <ref>, let us consider the minimal example in which M=(A,B) and N=(a,b):

S = [ α β ; γ δ ] ,

with rows (A, B) and columns (a, b). The network representation allows us to visualize the five ideal types of knowledge evolution: Table <ref> summarizes them and the necessary and sufficient conditions on the values of the similarity index to observe such cases. However, with a higher number of topics, a derivation of the conditions on the values of the similarity index would be cumbersome. Moreover, Table <ref> depicts ideal situations only, while the observed reality usually deals with a continuous mixture of the paradigmatic cases presented above. For instance, already in the case with M=4 and N=3 depicted in Figure <ref>, the analysis becomes strenuous.

With this purpose in mind, we consider the similarity matrix S as the incidence matrix of M over N. We can thus employ S to create a bi-adjacency matrix D, and thus consider Figure <ref> as the resulting bipartite network in which M and N are the sets of nodes, while the elements of the matrix are the weights of the edges:

D = [ 0 S ; S^T 0 ] ,

an (M+N) × (M+N) block matrix whose rows and columns are indexed by the topics A,…,M,a,…,N. We now show how this representation can help measure the magnitude of births, deaths, merging and splitting. Births and deaths can be easily calculated from the matrix S. A row sum equal to zero highlights a death, while a column sum equal to zero indicates a birth. A death means that the semantic legacy completely disappears, while a birth means that a topic carries no semantic similarity with other topics in the past. Once again it is important to notice that these cases are extreme scenarios, while in reality we observe a continuum between births and deaths. We might thus calculate a Novelty Index (NI) for each topic i at time t+1, where for NI_i=MAX we have a birth, that is, a topic with no similarity to any previous one. For higher values we have a higher novelty of the topic.
We can also measure an average change in NI on the overall structure of a scientific field by looking at distributions of these indexes over the topics. For instance, let us consider the Novelty Index and its average, defining:

NI_j = 1 - (1/M) ∑_i^M S_i,j ,

where j is the index of the j-th column in the matrix S, and

NI = 1 - (1/(M·N)) ∑_i^M ∑_j^N S_i,j .

We take the average of all the cell values in matrix S. If the similarity index is bounded between 0 and 1, as is the case for the very common cosine similarity index, NI ranges from 0 to 1. For values of NI close to one, the new topics show word distributions that differ markedly from the old ones.

As mentioned, the transformation of topics can take the form of merging and splitting. We say that a merging occurs if a topic at time t+1 shows a high similarity with two topics at time t, meaning that the semantic universe of A and B at t (as in figure <ref>) is combined in the topic a. Similarly, we can say that a split occurs if the semantic legacy of one topic at t is to be found in multiple topics at t+1, as in the case of topic C. To analyse the intensity of a merging we can project the bipartite network of figure <ref> into its two 1-mode networks of figure <ref>. This is achieved by the matrix multiplication S × S^T for the merging and S^T × S for the splitting, which result in two matrices P^merging and P^splitting of dimension M × M and N × N, respectively. Please note that, by the properties of matrix multiplication, P^merging and P^splitting are always square matrices, even when the number of topics in the two periods differs. The networks are represented by the matrices P^merging = S × S^T, with rows and columns indexed by A, B, …, M, and P^splitting = S^T × S, with rows and columns indexed by a, b, …, N.

The matrix transformation allows us to draw the 1-mode networks as in Figure <ref>, which represent the merging and splitting between two time windows. The matrix formulation of the network is also useful for computing the intensity of merging and splitting on the basis of the two matrices P. Let us consider the matrix P^merging in the minimal example of Table <ref>:

P^merging = S × S^T = [ α β ; γ δ ] × [ α γ ; β δ ] = [ α·α + β·β   α·γ + β·δ ; α·γ + β·δ   γ·γ + δ·δ ] .

The matrix P is always symmetric and, for our purpose, we focus on the lower triangle. The merging is captured by the off-diagonal entry (α·γ + β·δ), where (α·γ) is the intensity of the merging of A and B in a, while (β·δ) is the intensity of the merging of A and B in b. In the exemplary case shown in Table <ref>, β and δ are equal to zero and α and γ are different from zero: thus, we have a merging between A and B as depicted in Figure <ref>. Mutatis mutandis, we can consider the case of splitting. Once again, the lower triangle off the diagonal highlights the intensity of the split, with (α·β) the split of A into a and b, and (γ·δ) the split of B:

P^split = S^T × S = [ α γ ; β δ ] × [ α β ; γ δ ] = [ α·α + γ·γ   α·β + γ·δ ; α·β + γ·δ   β·β + δ·δ ] .

When we have a large number of topics in both time windows, we can use this formulation to create indexes measuring the intensity of merging and splitting or other properties of the transition. Specifically, we aim at comparing the values below the diagonal with those on the diagonal. We thus create a normalized matrix in which all elements of the diagonal and below the diagonal add up to one.
P^merging_normalized = P^merging · 1/(∑_i≤j P(i,j)) .

In this way, we can compute a Merging Index (MI), which takes value 0 when no merging occurs and ranges up to an upper limit which cannot exceed 1:

MI = 1 - trace(P^merging_normalized) .

Symmetrically, we calculate a Splitting Index (SI) from the normalized splitting matrix, SI = 1 - trace(P^splitting_normalized).

§.§ Conditional dependence

A last important issue to be addressed consists of the impact of the conditional dependence of topics at time t and its relation with the 1-mode network projection. Two topics at t can appear to merge into a topic at t+1 only because they are already similar to each other at time t. In this case we might run the risk of identifying a spurious process of merging. However, it is possible to account for this dynamic conditional dependence. We can compute a similarity index among topics at time t, simT, which can also be represented by a network:

Q = [ simT_1,1 … simT_1,M ; ⋮ ⋱ ⋮ ; simT_M,1 … simT_M,M ] .

Note that Q is a symmetric matrix, with the same dimension (M × M) as P^merging. The same procedure can be applied to topics at t+1. In this case, we obtain a matrix (N × N) with the same dimension as P^split. In order to take into account the conditional dependence, we might consider R^merging, splitting = (P^merging, splitting | Q^merging, splitting) and recompute the indexes, substituting R for P. There exist different ways to operationalize the dependence. Probably the most sophisticated one would be to encode the overall conditional dependence structure within a graphical network <cit.>. However, we might also consider that the similarity measure has a scalar meaning which goes beyond a simple probabilistic relation. For this reason, we surmise that the conditional dependence can best be taken into account by dividing or subtracting the two matrices element by element: in the developed algorithm (see the next paragraphs), we divide. Table <ref> summarizes the indexes we use and their range.

§.§ The Proposed Algorithm

This paragraph describes the algorithm which we developed to operationalize the theoretical approach above. Our example relies on the Latent Dirichlet Allocation (LDA) <cit.>, although the methodology does not involve any assumption on the way topics are created. LDA is a generative model that summarizes the documents through a mixture of topics, where each topic is a probability distribution over the dictionary. The algorithm first generates a database which allows querying documents per time period. Thereafter, it divides the dataset into unigrams, where stopwords are eliminated according to the NLTK list (<www.nltk.org>). Finally, we have applied the Porter Stemmer <cit.> on individual words. This algorithm transforms (or truncates) every word into a morphological root form. We create a subset for each of the T time windows and compute N_t topics using standard LDAs [<https://radimrehurek.com/gensim/>]. On the generated output we are able to compute the three indexes. For the similarity computation, we use the probabilities of the first 100 words of each topic to generate the vector weights.

Algorithm <ref> shows the pseudo-code to compute the time window from t to t+1. It simply takes as input the cleaned documents of the selected windows and the number of topics at time t and t+1, and returns the merging, splitting and novelty index values. In detail, the algorithm generates an LDA model for each time window t and t+1 and computes the similarity between topics at time t and t+1 (and among themselves). Then, it computes the matrices P_merging and P_splitting using the similarity matrix S and the matrix Q. The two P matrices are used to compute MI and SI, while the matrix Q is used to compute NI.
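As a concrete illustration of the computation performed by Algorithm <ref>, the Python sketch below builds the similarity matrix S from the word distributions of two gensim LDA models, conditions the projections on Q by element-wise division (the choice described above), and returns MI, SI and NI. It assumes that the two models share a common dictionary so that topic vectors are aligned; the helper names are ours and the snippet is illustrative rather than the authors' implementation.

import numpy as np

def topic_vectors(lda, num_topics, dict_size, topn=100):
    """Each topic as a vector of its top-`topn` word probabilities over the shared dictionary."""
    vecs = np.zeros((num_topics, dict_size))
    for t in range(num_topics):
        for word_id, prob in lda.get_topic_terms(t, topn=topn):
            vecs[t, word_id] = prob
    return vecs

def cosine_matrix(A, B):
    """Pairwise cosine similarity between the rows of A and the rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def dynamics_indexes(topics_t, topics_t1, eps=1e-9):
    """Return (MI, SI, NI) for two consecutive sets of topic vectors."""
    S = cosine_matrix(topics_t, topics_t1)        # M x N similarity matrix
    NI = 1.0 - S.mean()                           # Novelty Index
    Q_merge = cosine_matrix(topics_t, topics_t)   # similarities within time t   (M x M)
    Q_split = cosine_matrix(topics_t1, topics_t1) # similarities within time t+1 (N x N)
    P_merge = (S @ S.T) / (Q_merge + eps)         # conditioning by element-wise division
    P_split = (S.T @ S) / (Q_split + eps)
    def one_minus_trace(P):
        lower = np.tril(P)                        # diagonal and lower triangle only
        return 1.0 - np.trace(lower / lower.sum())
    return one_minus_trace(P_merge), one_minus_trace(P_split), NI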
§ EVALUATION

To evaluate this approach, we cannot benchmark it against other dynamic methods such as DTM, since we do not track single topics over time, but rather compare adjacent time windows to measure the degree of topic recombination. Therefore, we test the methodology by applying the algorithm to an artificially-generated dataset with controlled characteristics.

§.§ Artificial Data Creation

To generate the experimental datasets, we create artificial topics reflecting natural and realistic textual content. Instead of directly producing topics as artificially-built sets of words, we started from concept seeds, used as queries for real textual data. A concept seed is a word (or compound word) that represents a concept in a text-based resource. For example, the concept seed physics within the Wikipedia resource is the Wikipedia page about Physics. From a set of concept seeds and their associated Wikipedia pages, it is possible to extract the whole textual content and build artificial documents for the chosen concepts[We used the library Wikipedia available at <https://github.com/goldsmith/Wikipedia>, which acts as a wrapper of the MediaWiki API (<https://www.mediawiki.org/wiki/>)].

In the following exercise, we selected 8 concept seeds, all related to the field of Economics, in order to understand how well our approach works on a toy model reflecting contents which are consistent with the real data we used in Section <ref>. As in most natural language processing systems, we applied a pre-processing phase, which includes the removal of stopwords as well as functional linguistic items such as determiners, punctuation, etc.[We used the library Spacy (<https://spacy.io/>), filtering out the words having the following Part-of-Speech tags: DET (article), NUM (number) and PUNCT (punctuation).] Once the sets of words are built, we generated a document for each seed concept by randomly selecting the words[The number of words of each document has been chosen randomly.] with uniform probability. We maintained word repetitions to allow us to sample words with their real frequency and generate documents close to real cases. The documents generated are used to train different LDA models with different seed concepts. Finally, we compared the topics of the different LDA models by means of the proposed measures to see whether they capture the dynamics of the topic changes. We refer the reader to Appendix <ref> for details about the algorithms.

§.§ Controlled Experiments

To evaluate the algorithm we create 8 different controlled experiments, which are designed to capture the 4 ideal cases of knowledge evolution. Specifically, we conducted 4 experiments twice to test the functioning of the method in 4 different situations by changing (or not) the number of topics and by replacing (or not) the concept seeds. In the first 4 runs we kept the scenario as simple as possible, and we slightly increased the complexity of the exercise in the second 4 runs. In the former, the number of topics at time t is fixed to 2 for the first set of experiments and 4 for the second one; the number of topics at time t+1 is determined by the experiment (see Table <ref> for details). In detail, we set up each experiment as follows:

* stability: the number of topics and the seed concepts are kept the same.
The variation is only stochastic.
* birth/death: the number of topics does not change, but we replace the concept seeds to force the death of the previous topics and the birth of new ones.
* merging: the seed concepts do not change, but we reduce the number of topics to force a situation of merging. For instance, if we cluster the same concept seeds into 2 and then 1 topics, we necessarily observe only merging and no splitting.
* splitting: the seed concepts do not change, but we increase the number of topics to force a situation of splitting.

Table <ref> summarizes the design of the experiments and reports average values over 100 runs of Algorithm <ref>. Concerning the first 4 simple designs, the experiments are conceived to force the results and create either only splitting or only merging. For the splitting, the number of topics increases from one to two and we should not observe merging, since at t-1 there is only one topic. Analogously, in the case of merging the number of topics shrinks to one at t+1. The remaining two experiments compare stability with births and deaths, which lead to a higher degree of novelty. Our indexes vary as expected: in splitting and merging, the MI and SI respectively go to zero. If we compare stability with births and deaths, the NI is much higher in the latter case. Table <ref> shows four different experiments with a higher number of topics. It is relevant to notice that even with a few topics, it is impossible to get a clear-cut outcome, since the recombination of knowledge may be unexpected and typically reproduces at the same time merging, splitting and stability for some topics, and birth and death for others. However, these baseline examples clearly point to the aggregate behaviour of topics within a discipline.

§ THE EVOLUTION OF KNOWLEDGE IN ECONOMICS

The dataset is a collection of documents which appear in the JSTOR database (<www.jstor.org>) and were published from 1845 to 2013 in more than 190 journals concerned with economic sciences (also defined as economics). There are more than 460,000 documents, classified as research articles (about 250,000), book reviews (135,000), miscellaneous (73,000), news (4,000) and editorials (500). For each document, in addition to bibliographic information (title, publication date, authors, journal title, etc.), the dataset provides the full content in the form of a bag of words, i.e. the set of words used in the document associated with their frequencies. The following analysis only considers the research articles in order to remove the possible noise caused by using different types of documents, which can be written in different languages. The distribution of research articles over the time considered is very skewed (see Figure <ref>). Although the first documents date back to 1845, until the end of the XIX century the corpus of articles accounts for only 2930 items. The increase is almost linear until the beginning of the 1960s, when the number of documents more than doubled in a few years and rose to over 5000 items published every year during the 1990s and 2000s. From 2011 to 2013 we count 8220 items published. The LDA has been applied to research papers published between 1890 and 2013: decades before 1890 were dropped because of the extremely low number of documents.
Thereby, the resulting dataset of articles consists of 755,838,336 words and 3,169,515 unique words. We experimented with varying the hyper-parameters of the method, namely the number of topics and the dimension of the time windows, in order to evaluate the robustness and sensitivity of our approach over the 123 years considered. We selected 25, 50 and 100 topics and time windows of 5, 10 and 20 years, keeping one parameter fixed and varying the other one. In detail, we first show the values of SI and MI fixing the window dimension to 10 years and varying the number of topics. In the following figures, for example, 1900-1920 indicates the value of the indexes between 1900 and 1910 compared with the corresponding value between 1910 and 1920. Figures <ref>, <ref> and <ref> show the indexes for 25, 50 and 100 topics within a window of 10 years. Then, we fixed the number of topics to 25 and varied the size of the time window. Figures <ref> and <ref> show the indexes for 25 topics and windows of 5 and 20 years. These simple tests have demonstrated that the main trends of the indexes do not change substantially when varying the hyper-parameters, meaning that our method is robust to the number of topics and the size of the time windows.

To further prove invariance to the number of topics and window size, we applied the Greene metric <cit.> on a subset of the research articles with a time window of 10 years to capture all the possible changes in economic knowledge. Values of the metric reveal how well the topics generated capture the information present in the dataset. The Greene metric requires a range as input, which is formed by the minimum and maximum number of topics, and a step parameter, used by the metric to shift the number of topics considered at the current step starting from the minimum. For example, if the minimum number of topics is 10, the maximum is 50 and the step is 20, the Greene metric will compute a score at 10, 30 and 50 topics. The plot of the metric in Figures <ref> and <ref> concerns two windows and shows that by increasing the number of topics we can also increase stability, but, of course, it becomes very difficult to interpret the meaning of each topic. As suggested by <cit.>, when topic modeling is employed to explore the content of a dataset (as in this paper) rather than to predict, there is no definitive test to support the choice of the optimal number of topics. We solved this trade-off between stability and meaningfulness by manually inspecting the topics generated by the model with 25 topics within time windows of 10 years. When we found that a few topics could be split up again because they were too general, we set the optimal and analytically useful number of topics to 27. Therefore, the following analysis is based on 27 topics within time windows of 10 years, which gives the maximum stability of the indexes while varying the number of topics.

Figure <ref> shows the values of MI and SI for each time window, as defined in Section <ref>. In the corpus we analyzed, both indexes show a general trend of decreasing values over time, which becomes particularly severe starting from the 1960s. Merging and splitting increase only between the 1940s and the 1950s, while dropping dramatically in the second half of the XX century.
The transformation of topics seems to find new impetus only around the end of the century, when merging is increasing again and splitting is stable. As for the NI, we mentioned that the index tends to one when new topics emerge without matching topics at t-1. On average, the value is higher than 0.9 over all of the 123 years considered, so we tracked both micro-variations and the general trend. In Figure <ref>, NI does not show relevant variations until the 1990s, with some local maxima in the first decade of the past century and a local minimum around the half of it. In the last decade of the century it grows sharply, revealing a higher rate of brand-new topics, or at least of topics defined by new words.

Such a methodological approach has the advantage of tracking the evolution of each single stream of economic theory by looking simultaneously at all the others. On the whole, the analysis of such a big corpus of documents suggests that merging and splitting cannot be considered as opposite phenomena, but as complementary measures of the recombination of topics. In particular, trends in the field of economics suggest a steady decrease of both splitting and merging, only temporarily balanced by a weak growth before and after WWII. From a historical perspective this is absolutely consistent with the need for theoretical elaboration in economics following the Great Depression in 1929 and the dramatic economic changes imposed by the post-war reconstruction. During the 1960s, and in combination with the boom of academic publications, many topics are spread over a relevant number of documents and journals, although they seem to elaborate on a relatively stable basis of autonomous topics. Only by the end of the century have we witnessed the development of brand-new topics. The birth of new topics strengthens the hypothesis of self-standing topics shaped by their own specialised language and a lesser exchange of knowledge across the economic discipline. In other words, the tremendous expansion of academic production seems to come with a fragmentation and dispersion into multiple niches of knowledge <cit.> which elaborate a new language, but do not necessarily produce new paradigms.

§ CONCLUSION

In this paper we proposed a method to measure the evolution of knowledge in a scientific field by extracting topics from a corpus of documents. Topic modeling techniques are becoming increasingly refined in treating large and complex corpora of documents, but they may lack a theoretical reflection on the underlying empirical phenomenon. Taking a dynamic perspective, we recognise five paradigmatic cases of knowledge evolution. We then surmise that modeling the proximity between topics of different time windows as a proximity network might be a useful tool to measure their knowledge dynamics. Indeed, this network approach allows us to develop 3 indexes, which grasp i) the stability of topics over time, measuring their rate of death and birth (Novelty Index - NI), and ii) the degree of recombination of topics (Merging Index - MI and Splitting Index - SI). For very simple cases, we are also able to analytically derive the conditions which link the proximity network and the value of each index. Testing the algorithm over a set of simulated documents, we showed its robustness for each of the indexes developed.
Finally, we applied our approach to a real and large corpus of academic publications in economics to illustrate how the combined use of MI, SI and NI is effective in understanding dynamics and trends in economic knowledge and thought. We believe this is a first step towards the development of a closer connection between algorithms for dynamic topic modeling and the empirical phenomenon they are supposed to describe.

§ ARTIFICIAL DATA CREATION: ALGORITHMS

In Algorithm <ref>, the function getNum(minNum, maxNum) returns a number, randomly selected, between minNum and maxNum; the getWord() function returns a word, randomly chosen from the selected set; the function computeTopicSimilarity() calculates the cosine similarity between the input topics; the function zeros() returns an array containing all zeros. Finally, the function getWordList(concept) generates a set of words. The words are taken from the Wikipedia page that points to the chosen concept. In rows [1-6], the function getWordList collects, for each concept seed, a set of words. In detail, getWordList, as shown in Algorithm <ref>, extracts all the words contained in the Wikipedia page related to the input concept through the Python library Wikipedia[<https://github.com/goldsmith/Wikipedia>], which acts as a wrapper of the MediaWiki API (<https://www.mediawiki.org/wiki/>). Words are extracted using the library Spacy[<https://spacy.io/>] and stored in wordList[There exists a wordList for each conceptSeed in input.]. Then, the wordList of each concept seed is inserted into wordConceptList. In rows [7-16], Algorithm <ref> generates a document for each concept, sampling words (with uniform probability) from the wordList related to the concept seed. The number of words to sample is specified by numWords, which ranges from 1000 to 10000. Successively, in rows [18-20], the algorithm divides the documents into two sets, a set containing the first numDocument documents and a set containing the remaining documents, and applies LDA. The LDA can be applied over the two document sets or only over a single document set according to the replaceDoc flag. If replaceDoc is set to True, the first document set is replaced with the second one (it is set to False by default). Algorithm <ref> shows how words are processed. We filtered stopwords and words having the Part-Of-Speech tags DET (determiner), X (foreign word), NUM (numeral), PUNCT (punctuation), SPACE and EOL (end-of-line symbols). We also filtered words that do not match the Python regular expression \w+. Furthermore, all unfiltered words are brought back to their morphological root.
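To make the data-generation procedure concrete, here is a minimal Python sketch of the pipeline described in this appendix, using the same third-party libraries (wikipedia, spaCy, gensim). Parameter names and default values (for instance, the number of documents per seed and the spaCy model name) are ours, lemmatization stands in here for the morphological-root step, and the is_alpha filter approximates the \w+ regular expression; this is an illustration, not the authors' code.

import random
import wikipedia
import spacy
from gensim import corpora, models

EXCLUDED_POS = {"DET", "X", "NUM", "PUNCT", "SPACE", "EOL"}
nlp = spacy.load("en_core_web_sm")   # model choice is ours

def get_word_list(concept):
    """Filtered words of the Wikipedia page associated with a seed concept."""
    text = wikipedia.page(concept).content
    return [tok.lemma_.lower() for tok in nlp(text)
            if tok.pos_ not in EXCLUDED_POS and not tok.is_stop and tok.is_alpha]

def make_documents(seed_concepts, n_docs_per_seed=5):
    """Artificial documents sampled (with repetition, uniform probability) from each word list."""
    docs = []
    for concept in seed_concepts:
        words = get_word_list(concept)
        for _ in range(n_docs_per_seed):
            n_words = random.randint(1000, 10000)   # document length as in the appendix
            docs.append([random.choice(words) for _ in range(n_words)])
    return docs

def train_lda(docs, num_topics):
    """Train a standard gensim LDA model on the artificial documents."""
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    return models.LdaModel(bow, num_topics=num_topics, id2word=dictionary), dictionary

Two such models, trained on document sets built from the same or from replaced concept seeds, can then be compared with the index computation sketched in Section <ref>.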
http://arxiv.org/abs/1709.09373v1
{ "authors": [ "Luigi Di Caro", "Marco Guerzoni", "Massimiliano Nuccio", "Giovanni Siragusa" ], "categories": [ "cs.CL", "econ.GN", "q-fin.EC" ], "primary_category": "cs.CL", "published": "20170927074903", "title": "A Bimodal Network Approach to Model Topic Dynamics" }
^aSchool of Physics, University of the Witwatersrand, Wits 2050, South Africa ^bNational Institute for Theoretical Physics; School of Physics and Mandelstam Institute for Theoretical Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South [email protected] The Madala hypothesis postulates a new heavy scalar, H, which explains several independent anomalous features seen in ATLAS and CMS data simultaneously. It has already been discussed and constrained in the literature by Run 1 results, and its underlying theory has been explored under the interpretation of a two Higgs doublet model coupled with a scalar singlet, S. When applying the hypothesis to Run 2 results, it can be shown that the constraints from the data are compatible with those obtained using Run 1 results. § THE MADALA HYPOTHESIS Searches for physics beyond the Standard Model (BSM) have become ubiquitous since the discovery of the Standard Model (SM) Higgs boson, h. The discovery of the Higgs boson was the first step in understanding the nature of electroweak symmetry breaking (EWSB). There are currently a plethora of BSM physics scenarios extending the notion of EWSB in the literature, many of these being within the reach of the Large Hadron Collider (LHC) running at an energy of √(s)=13 TeV.One such scenario is the Madala hypothesis. The Madala hypothesis was formulated in 2015 with the aim of connecting and explaining several anomalies in the data from Run 1 of the LHC <cit.>. At first, this was done through the introduction of a heavy boson H – the Madala boson – with a mass in the range 2m_h<m_H<2m_t. It was shown that if the SM Higgs boson could be produced via the decay of H, it would contribute to the apparent distortion of the Higgs spectrum from the Run 1 ATLAS differential distributions <cit.>. This is achieved through the effective decay vertex of H→ hχχ shown in <ref>(a), where χ is a scalar dark matter (DM) candidate of mass ∼12m_h. A fit was performed using a complete set of ATLAS and CMS data (at the time), and the parameters of the model were constrained. The two parameters of interest which were are constrained were the mass of H, to a value of m_H=272^+12_-9 GeV, and the scaling factor β_g=1.5±0.6 which modifies the gluon fusion (ggF) production cross section of H.The hypothesis was then extended by introducing a DM mediator S <cit.>, such that the effective decay vertex in <ref>(a) is resolved into the cascading decay process depicted in <ref>(b). The mass of S lies in the range m_h<m_S<m_H-m_h in order for the decay process H→ Sh to be on-shell.[The S boson is not actually required to decay on-shell from H; this is merely a convenient mass range to simplify phenomenology. This facet has earned it the nickname the Shelly boson.] The H then preferentially decays to one of three pairs of bosons: H→ SS,Sh,hh. The S boson was also considered to have small couplings to SM particles, and the assumption that was made is that it can be Higgs-like such that all of its branching ratios (BRs) are already defined. The S would then decay predominantly to the vector bosons Z and W^± if it has a mass of around 160 GeV or above.Since these studies were done, however, several new results have been released by the ATLAS and CMS collaborations. In particular, the first Run 2 results for resonant di-Higgs production and di-boson production have been released. 
In addition to this, both ATLAS and CMS have published Run 1 differential distributions (including Higgs ) for the h→ WW→ eνμν channel <cit.>, and ATLAS have also released their Run 2 result for h→γγ <cit.>. It is therefore instructive to determine whether or not these new results are compatible with the Run 1 fit result mentioned above.§ EXPLORING RESONANT SEARCH CHANNELS The first check for compatibility is to explore the resonant search channels which contributed to the Run 1 fit result mentioned above. These include resonant searches for H→ hh and VV (where V=Z,W^±).Since no combination of these results exists, extracting meaningful information from them becomes a matter of statistics. In this study, results are combined through the addition of units of χ^2. For measurements, χ^2 is calculated as Pearson's test statistic. That is, a measurement μ^exp along with its associated uncertainty Δμ^exp are tested against a theoretical prediction μ^th and its uncertainty Δμ^th by calculating the following:[The denominator here differs from Pearson's test statistic, since it already assumes that the theoretical and experimental uncertainties are independent and can therefore be added in quadrature.] χ^2=(μ^th-μ^exp)^2/(Δμ^th)^2 + (Δμ^exp)^2. Most results in the literature, however, come in the form of 95% CL limits. In this case, a modified version of Pearson's test statistic is used. Namely, for an observed and expected limit L^obs and L^exp, respectively, a theoretical prediction μ^th can be tested using the following: χ^2=(L^obs-L^exp-μ^th)^2/(L^exp/1.96)^2,where the factor of 1.96 in the denominator arises from the fact that a 95% CL corresponds to 1.96 units of standard deviation.To understand whether or not a potential signal already lies in the LHC data, the limits coming from the di-Higgs and di-boson searches listed in <ref> were scanned and evaluated using <ref>. A best fit value for a production cross section times BR of H was determined as a function of m_H by minimising the sum of χ^2 coming from each search channel. These best fit values as well as 1σ error bands are shown in <ref>. Note here that since results are shown both at 8 TeV and 13 TeV, the cross sections should not be directly compared between the two energies. However, the reader should keep in mind that the scaling factor from 8 TeV to 13 TeV for the production cross section of H lies between 2.7 and 3.0 as m_H increases from 250 GeV to 350 GeV (calculated using the NNLO+NNLL cross sections in reference <cit.>). The plots indicate that the best fit value tends to deviate from the null hypothesis (i.e. that H is not produced at all) mostly in the range between 260 GeV and 300 GeV. The only clear exception is in the Run 2 H→ ZZ search results, where the best fit line underestimates even the null hypothesis. However, the H→ ZZ BR was found to be small in the Run 1 fit result <cit.>, and the value presented here is compatible with a small BR within the large uncertainties that surround the central value. This is compatible with the best fit mass of 272 GeV which was obtained in the Run 1 fit result mentioned in <ref>. The Madala boson of course can be as light as 250 GeV, but since di-Higgs and di-boson results seldom consider masses below 260 GeV, the scans in <ref> are limited. § FITTING THE HIGGS SPECTRUM Another key aspect of the Run 1 fit result mentioned in <ref> is the Madala hypothesis's ability to predict a distorted Higgs spectrum. 
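The statistical combination just described is simple enough to sketch in a few lines of code, and the same machinery underlies the spectrum fit discussed in this section. The fragment below is only an illustration of the two χ^2 definitions above and of the minimisation over the predicted cross section times BR; the channel inputs are invented numbers, not values taken from the searches listed in <ref>.

from scipy.optimize import minimize_scalar

def chi2_measurement(mu_th, dmu_th, mu_exp, dmu_exp):
    # Pearson-like statistic for a measured quantity: theoretical and
    # experimental uncertainties are assumed independent and added in quadrature.
    return (mu_th - mu_exp)**2 / (dmu_th**2 + dmu_exp**2)

def chi2_limit(mu_th, L_obs, L_exp):
    # Modified statistic for a 95% CL limit: L_exp/1.96 approximates the
    # one-standard-deviation spread of the expected limit.
    return (L_obs - L_exp - mu_th)**2 / (L_exp / 1.96)**2

# Invented (observed, expected) limits for a few channels at one value of m_H.
limits = [(0.80, 0.60), (1.10, 0.90), (0.50, 0.55)]

def total_chi2(mu_th):
    return sum(chi2_limit(mu_th, lo, le) for lo, le in limits)

# Best-fit production cross section times BR at this mass point.
best = minimize_scalar(total_chi2, bounds=(0.0, 5.0), method="bounded")
print(best.x, total_chi2(best.x))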
In the Run 1 data, this effect is most notably seen in the ATLAS results, where differential distributions are presented in fiducial volumes of phase space <cit.>. Through the effective decay of H to hχχ, the BSM component of the Higgs spectrum can be added to a SM prediction to reproduce the systematic enhancement of the fiducial cross section in the range between 20 GeV and 100 GeV, therefore improving the theoretical description of the data. In order to test whether or not such an improvement can be seen in the results released since the Run 1 fit result, a set of Monte Carlo (MC) samples were made to reproduce the different components of the Higgs spectrum. The SM Higgs spectrum was separated into its different production mechanisms. The ggF spectrum was generated using the NNLOPS procedure <cit.>, which is accurate to next-to-next-to-leading order (NNLO) in QCD. The associated production modes – vector boson fusion (VBF), Vh and tth, which are commonly labelled together as Xh – were generated at next-to-leading order (NLO) using MG5_aMC@NLO <cit.>. These spectra are scaled to the cross sections provided by the LHC Higgs Cross Section Working Group (LHCHXSWG) <cit.> (from which the theoretical uncertainty also comes). The events are also passed through an event selection identical to the fiducial selection recommended by the experimental collaborations. A further scaling factor was applied to the SM ggF prediction, this being the reported signal strength of ggF (often denoted as μ_ggF). The BSM prediction (i.e. the Madala hypothesis prediction of gg→ H→ hχχ as shown in <ref>(a)) was generated using Pythia 8.2 <cit.>. These events were scaled to the LHCHXSWG N^3LO ggF cross sections for a high-mass Higgs-like scalar, and passed through the fiducial selections as well. Since the Run 1 fit result had a best fit mass of m_H=272 GeV with m_χ=60 GeV, the mass points considered for this study were m_H=270 GeV and m_χ=60 GeV. With the MC samples scaled accordingly, each spectrum was added, and a χ^2 value was calculated for each bin per channel, as in <ref>. The BSM component was scaled such that the total χ^2 was minimised. This BSM scaling is interpreted to be equal to β_g^2, which is a dimensionless factor that multiplies the effective g-g-H coupling, and therefore controls the production cross section of H through ggF. The results of this fit are shown in <ref>. The Run 1 fit result mentioned in <ref> has a value of β_g=1.5±0.6, and here it can be seen that the ATLAS Run 1 h→ WW and ATLAS Run 2 h→γγ results are compatible with this value. The CMS Run 1 h→ WW result is not improved by the BSM hypothesis. The spectra for the best fit values are shown in <ref> for the two spectra which are improved by the BSM hypothesis.
That is, most of the excesses from Run 1 which motivated the Madala hypothesis have reappeared in the current ensemble of Run 2 results. However, this ensemble of Run 2 results comprises preliminary studies which do not make use of the full integrated luminosity which has been accrued by the detectors over the duration of Run 2 at the LHC. It is therefore imperative that a far more detailed study be done when such results become available, since the time is near when enough data will be available to make more definite statements about the Madala hypothesis. Some of the results in this paper are based on experimental plots containing even less than 5 fb^-1 of data. One would expect to be able to make a statement at the ∼3σ confidence level for individual search channels with at least 50 fb^-1 of data. Until such a time arrives, the phenomenology of the Madala hypothesis shall continue to be studied in the context of various BSM scenarios, to gain an understanding of how we might treat it in future. § REFERENCES iopart-num
http://arxiv.org/abs/1709.09419v1
{ "authors": [ "Stefan von Buddenbrock", "Alan S. Cornell", "Mukesh Kumar", "Bruce Mellado" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170927094738", "title": "The Madala hypothesis with Run 1 and 2 data at the LHC" }
Center for Future High Energy Physics & Theoretical Physics Division, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China Physics Division, National Center for Theoretical Sciences, Hsinchu, Taiwan 300 Institute of Theoretical Physics & State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China Center for High Energy Physics, Peking University, Beijing 100871, [email protected] Institute of Theoretical Physics & State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China [email protected] Center for Future High Energy Physics & Theoretical Physics Division, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China [email protected] Physics Division, National Center for Theoretical Sciences, Hsinchu, Taiwan 300 Institute of Theoretical Physics & State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China [email protected] Institute of Theoretical Physics & State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China Center for High Energy Physics, Peking University, Beijing 100871, China The issue of deriving ZHη vertex in the simplest Little Higgs (SLH) model is revisited. Special attention is paid to the treatment of non-canonically-normalized scalar kinetic matrix and vector-scalar two-point transitions. We elucidate a general procedure to diagonalize a general vector-scalar system in gauge theories and apply it to the case of SLH. The resultant ZHη vertex is found to be different from those which have already existed in the literature for a long time. We also present an understanding of this issue from an effective field theory viewpoint. On the ZHη vertex in the simplest Little Higgs Model Shou-hua ZhuTathagata Karmakar, Tapobrata Sarkar E-mail:  karmakar, [email protected] 0.4cm Department of Physics, Indian Institute of Technology,Kanpur 208016, India =================================================================================================================================================================== § INTRODUCTION The discovery of the 125 Higgs-like boson <cit.> marks a prominent triumph of the Standard Model (SM). Nevertheless, it is widely believed that this is not the end of the story. The SM in its current form leaves too many unanswered questions, from theoretical ones like the issue of Higgs mass naturalness <cit.>, to observational ones like the nature of the dark matter present in the universe <cit.>. Almost all models going beyond the SM (BSM) entail an enlargement of the scalar sector, and consequently forms of interaction which are absent in the SM could be possible. Searching for such kind of new interactions therefore may lead to decisive evidence of the existence of BSM and provide a clue to the nature of the BSM physics.For example, Lorentz symmetry does not forbid the interaction of one gauge boson (denoted as Z) with two scalar bosons (denoted as H and η) at the dimension-4 level, in the form likeZ^μ(H∂_μη-η∂_μ H)The SM has only one Higgs particle and thus cannot accommodate such kind of vector-scalar-scalar (VSS) interactions[Here we mean physical fields. Unphysical fields like Goldstone or ghost can certainly participate in VSS interactions in the SM.]. 
Going beyond the SM, the appearance of interactions like Eq. (<ref>) is quite common in models like the two-Higgs-doublet model (2HDM) and supersymmetric models, which may lead to the associated production of two scalar bosons <cit.> or Higgs-to-Higgs cascade decays <cit.> as important collider signatures.Besides the usual 2HDM and supersymmetric models which contain a linearly-realized scalar sector, VSS interactions have also been studied in the context of nonlinearly-realized scalar sectors. Nonlinearly-realized scalar sectors are frequently adopted when building a model in which the Higgs is realized as a pseudo-Goldstone boson of some global symmetry breaking <cit.>, which could be helpful in addressing the hierarchy problem. In principle the derivation of VSS vertices in such models is similar to the linearly-realized case: start from the gauge covariant kinetic terms of the scalar fields and then expand the interaction fields into vacuum expectation values and mass eigenstate fields after which the three-point VSS vertices could be extracted. Nevertheless there can be important technical differences in intermediate steps. When the scalar sector is nonlinearly-realized, scalar kinetic terms are in general not automatically canonically normalized, and there can be “unexpected" vector-scalar two-point transitions which need to be taken care of. We will show in the following sections that these situations indeed occur for the case of the simplest Little Higgs (SLH) model <cit.>, which is proposed as a simple solution to the Higgs mass naturalness problem.From a more general perspective, the problem we encounter is how to diagonalize a vector-scalar system in gauge field theories. Specifically, the Lagrangian we start with might not be canonically normalized in its kinetic part, and may have some general vector-scalar two-point transitions. To do perturbation theory in the usual manner, we need to first render its kinetic part canonically normalized, which could be done via the usual complete-the-square method. To remove the vector-scalar two-point transitions, strictly speaking we need to choose appropriate gauge-fixing terms. Finally we still need to diagonalize the scalar mass matrix with contribution from both the original scalar mass terms and the gauge-fixing terms. These steps set the stage for the derivation of VSS interactions.In Section <ref> the systematic procedure of diagonalize a general vector-scalar system in gauge field theories will be elucidated. Then in Section <ref> we apply this procedure to the SLH model and derive the mass eigenstate ZHη vertex[By `mass eigenstate' ZHη vertex we mean the ZHη vertex obtained after rotating Z,H,η fields into their corresponding mass eigenstates. For previous studies related to the η particle in the SLH, we refer the reader to  <cit.>.] to ((v/f)^3). The ZHη vertex derived here is found to be different from those which have already existed in the literature <cit.> for a long time. In Section <ref> we present our discussion and conclusion.§ GENERAL DIAGONALIZATION PROCEDUREConsider a gauge field theory in which there are n_S real scalar fields G_i,i=1,2,...,n_S and n_M real massive gauge boson fields Z_p^μ,p=1,2,...,n_M. If complex fields exist, we can always decompose them into their real components and proceed in a similar manner. The G_i's which we start with neither need to be canonically normalized nor need to have diagonalized mass terms. 
For simplicity (but without loss of generality) the Z_p's are assumed to have canonically normalized kinetic terms but don't have to be diagonalized in their mass terms. When we say the Z_p's are massive, it means that the eigenvalues of the mass matrix of Z_p's are all positive. Especially, massless gauge bosons like photon are temporarily excluded from discussion. However, generalizing the procedure to theories containing massless gauge bosons is straightforward.Now suppose the classical Lagrangian of this gauge theory contains the following quadratic parts[Here we suppress the gauge boson kinetic terms which are assumed to be already canonically normalized.] (summation over repeated indices is implicitly assumed): _quad⊃1/2V_ij(∂_μ G_i)(∂^μ G_j) +F_piZ_p^μ(∂_μ G_i)-1/2(_G^2)_ijG_i G_j +1/2(_V^2)_pqZ_pμZ_q^μ Here V is a real invertible n_S× n_S symmetric matrix, F is a real n_M× n_S matrix, _G^2 is a n_S× n_S symmetric matrix the rank of which does not exceed n_E≡ n_S-n_M [Here we assume all the Z_p's acquire their masses by eating appropriate Goldstones. In compliance with the fact that n_M massless Goldstones should exist before gauge-fixing, the rank of _G^2 should not exceed n_S-n_M.], and _V^2 is a real n_M× n_M symmetric matrix which has n_M positive eigenvalues. The elements of the four matrices V,F,_G^2,_V^2 depend only on the model parameters, not on field variables. For convenience let us defineG̃_p=F_piG_i, p=1,2,...,n_MThen the vector-scalar two-point transition term (the second term on the right hand side of Eq. (<ref>)) is simply Z_p^μ∂_μG̃_p.To carry out perturbation theory, it is preferable to eliminate the vector-scalar two-point transitions, make the scalar kinetic terms canonically normalized and at the same time diagonalize the scalar and vector mass terms. We will see that the procedure involved actually goes hand in hand with the quantization of the theory. Also, the tight structure of the gauge theory greatly facilitates the diagonalization process.In gauge field theories, the vector-scalar two-point transitions are usually eliminated by adding appropriate gauge-fixing terms. If we require the R_ξ gauge-fixing procedure remove all the vector-scalar two-point transitions, then it is natural to consider adding the following gauge-fixing Lagrangian:_gf=-∑_p=1^n_M1/2ξ^p(∂_μ Z_p^μ-ξ^pG̃_p)^2Here ξ^p,p=1,2,...,n_M are gauge parameters. There is freedom in the choice of the gauge-fixing function and the requirement to remove vector-scalar two-point transitions is not sufficient to uniquely determine it. However we will see below there is a theoretically well-motivated choice which facilitates the diagonalization process. After adding the gauge-fixing terms, we have_quad+_gf⊃1/2V_ij(∂_μ G_i)(∂^μ G_j) -1/2ξ^pG̃_p^2-1/2(_G^2)_ijG_i G_j -1/2ξ^p(∂_μ Z_p^μ)^2+1/2(_V^2)_pqZ_pμZ_q^μ The matrix V denotes the scalar kinetic matrix. If it is not the identity matrix, we may simply use the complete-the-square method to diagonalize it and then make the resulting terms canonically normalized. This is in complete analogy to the diagonalization of quadratic forms in linear algebra. 
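As an aside, this step can be made concrete numerically. Assuming V is symmetric and positive definite, a Cholesky factorisation provides one admissible (non-unique) choice of U; the sketch below uses an invented 2×2 kinetic matrix purely for illustration, not one taken from any specific model.

import numpy as np

# Invented symmetric, positive-definite scalar kinetic matrix V.
V = np.array([[1.0, 0.3],
              [0.3, 0.8]])

# One admissible choice of U with V = U^T U: take U = L^T from the
# Cholesky factorisation V = L L^T (L lower triangular).
L = np.linalg.cholesky(V)
U = L.T

# The transformed fields S = U G then carry a canonical kinetic term.
assert np.allclose(U.T @ U, V)

# The U obtained this way is triangular rather than orthogonal; any further
# orthogonal rotation O U (with O^T O = 1) is an equally valid choice.
print(np.allclose(U @ U.T, np.eye(2)))   # False for this V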
Note that the overall transformation employed to render the scalar kinetic terms canonically normalized need not be orthogonal.Now suppose we have found a transformation of the scalar fieldsS_i=U_ijG_jwhich renders the scalar kinetic terms diagonalized and canonically normalized:1/2V_ij(∂_μ G_i)(∂^μ G_j) =1/2(∂_μ S_i)(∂^μ S_i)Here U is a real invertible n_S× n_S matrix which only needs to satisfyV=U^T UIt is evident that U is not uniquely determined. It is only determined up to an orthogonal transformation. We may take advantage of this freedom to do additional orthogonal transformation to further diagonalize the scalar mass matrix while still keeping scalar kinetic terms in their canonically normalized form.After the transformation Eq. (<ref>) we obtain _quad+_gf⊃1/2(∂_μ S_i)(∂^μ S_i) -1/2ξ^pG̃_p^2-1/2((U^-1)^T_G^2 U^-1)_ijS_i S_j-1/2ξ^p(∂_μ Z_p^μ)^2+1/2(_V^2)_pqZ_pμZ_q^μ In the above equation G̃_p's can be viewed as linear combinations of S_i's. It should be noted from a physical perspective that the n_S scalar degrees of freedom with which we started could be divided into two categories (after appropriate linear combinations if needed): unphysical scalars and physical scalars. Specifically, n_M unphysical scalars should exist and serve as unphysical Goldstones to be eaten by n_M gauge bosons to make them massive. The remaining n_E=n_S-n_M scalar degrees of freedom then must be physical scalars. By virtue of this observation, there must exist an orthogonal transformationS̅_i=P_ijS_jwhich diagonalizes the -1/2((U^-1)^T_G^2 U^-1)_ijS_i S_j term. Then Eq. (<ref>) becomes _quad+_gf⊃1/2(∂_μS̅_i)(∂^μS̅_i) -1/2ξ^pG̃_p^2-1/2ν_r^2S̅_r^2 -1/2ξ^p(∂_μ Z_p^μ)^2+1/2(_V^2)_pqZ_pμZ_q^μ The index r ranges from n_M+1 to n_S (this will be assumed whenever we use the index r), and ν_r's depend only on model parameters, not on field variables. With this labeling convention the latter n_E fields in S̅_i's correspond to physical scalars while the remaining ones are unphysical Goldstone bosons. The matrix P and the ν_r's can be made independent of the ξ^p's, because in the course of diagonalizing the -1/2((U^-1)^T_G^2 U^-1)_ijS_i S_j term, the -1/2ξ^pG̃_p^2 term is left untouched.It is helpful to recall that in Eq. (<ref>) the G̃_p's can be viewed as linear combinations of S̅_i's. In fact, because n_E physical scalars must exist, the matrix P can be chosen so that the G̃_p's do not contain the S̅_r's. That is to say, the G̃_p's can be expressed as linear combinations of S̅_i,i=1,2,...,n_M. Therefore, by examining Eq. (<ref>) it is obvious that in _quad+_gf the n_E physical scalars are clearly separated from the unphysical ones after the orthogonal transformation Eq. (<ref>).At this stage we need to take a closer look at the unphysical scalar mass term in Eq. (<ref>), which is'≡-1/2ξ^pG̃_p^2Recalling that the G̃_p's are linear combinations of S̅_i,i=1,2,...,n_M, the next thing we need to do is to find an orthogonal transformationS̃_i=K_ijS̅_jwhich diagonalizes '. In Eq. (<ref>) i,j range from 1 to n_S, and K is a n_S× n_S orthogonal matrix. Nevertheless, to avoid spoiling the already diagonalized physical scalar mass term, it is advisable to consider the following block-diagonal form of K:K=[K_M 0_n_M× n_E; 0_n_E× n_M I_n_E× n_E ]Here I_n_E× n_E is the n_E× n_E identity matrix, and K_M is a n_M× n_M orthogonal matrix. 
With this form of matrix K it is made clear that the S̅_r's actually don't get transformed in this step, however the -1/2ξ^pG̃_p^2 term is diagonalized by K_M.It remains to find the n_M× n_M orthogonal matrix K_M. We note that ' written in the form of Eq. (<ref>) is highly suggestive, because it has already completed the square. Therefore it seems natural to guess that the transformation we need is simplyS̃_p=α_p G̃_p, p=1,2,...,n_M (no summation over p)Here the α_p's are constants chosen to make the transformed fields canonically normalized. Because the G̃_p's can be expressed as linear combinations of S̅_i,i=1,2,...,n_M, Eq. (<ref>) effectively leads to a transformation from S̅_i,i=1,2,...,n_M to S̃_i,i=1,2,...,n_M, from which the matrix K_M can be inferred.There is one remaining potential loophole that we need to deal with. It is necessary to ensure that the matrix K_M inferred from Eq. (<ref>) is indeed an orthogonal matrix, otherwise we will not be able to keep the scalar kinetic terms in their diagonalized and canonically normalized form.To help determine whether the matrix K_M inferred from Eq. (<ref>) is orthogonal we denote the real vector space spanned by G_i,i=1,2,...,n_S as 𝕃 and introduce an inner product in 𝕃, defined by⟨ S_i|S_j⟩≡δ_ij,i,j=1,2,...,n_SThis means the S_i's constitute an orthonormal basis in 𝕃. The inner product of any two elements in 𝕃 can then be calculated by virtue of the linearity property of the inner product. It is obvious that the S̅_i's also form an orthonormal basis in 𝕃. Based on simple algebraic knowledge the problem of judging whether K_M is orthogonal reduces to judging whether S̃_p,p=1,2,...,n_M form an orthonormal basis in the subspace spanned by themselves.As long as all the G̃_p's have positive norm, we may always adjust the α_p's so that⟨S̃_p|S̃_p⟩=1,∀ p=1,2,...,n_MTherefore the question becomes whether ⟨S̃_p|S̃_q⟩=0 holds when p,q=1,2,...,n_M and p≠ q. According to Eq. (<ref>) we only need to check whether ⟨G̃_p|G̃_q⟩=0 holds when p,q=1,2,...,n_M and p≠ q.Fortunately, when the scalar fields are canonically normalized in their kinetic part, the vector-scalar two-point transitions in a gauge theory has the form <cit.>i∑_nmα∂_μϕ'_n t_nm^α A_α^μv_mHere ϕ'_n is the shifted scalar field with zero vacuum expectation value, v_m is the vacuum expectation value of the original scalar fields. t^α denotes the generator matrix with α being the adjoint index and A_α^μ is the corresponding gauge field. On the other hand, the elements of the gauge boson mass matrix are <cit.>μ_αβ^2=-∑_nmlt_nm^α t_nl^β v_m v_lCompare Eq. (<ref>) and Eq. (<ref>) it is easy to find for our case the useful property⟨G̃_p|G̃_q⟩=(_V^2)_pq,∀ p,q=1,2,...,n_MA nonlinearly-realized scalar sector does not introduce additional difficulty in arriving at Eq. (<ref>), because compared to the linearly-realized case, the relevant differences begin from quadratic terms in the field expansion and do not affect Eq. (<ref>) and Eq. (<ref>).Eq. (<ref>) suggests that if the gauge bosons are already in their mass eigenstates, then the related Goldstone boson vectors must be orthogonal to each other, which is exactly what we desire. Physically this implies that massive gauge bosons eat their corresponding Goldstone bosons along the directions dictated by their mass eigenstates. Therefore it would be desirable we rotate the gauge boson fields to their mass eigenstates before adding the gauge-fixing terms Eq. (<ref>). 
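A toy numerical check of this property may be helpful. In the sketch below the transition matrix F is invented; the point is only that, once the gauge fields are rotated to their mass eigenstates, the corresponding Goldstone directions come out mutually orthogonal, with squared norms fixed by the gauge boson mass eigenvalues, exactly as the relation above states.

import numpy as np

# Invented vector-scalar transition matrix F in a canonically normalized
# scalar basis (rows: gauge bosons, columns: scalars).
F = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.3]])

# The property derived above says the gauge boson mass matrix is the Gram
# matrix of the Goldstone direction vectors (the rows of F).
M_V2 = F @ F.T

# Rotate the gauge bosons to mass eigenstates first ...
mu2, vecs = np.linalg.eigh(M_V2)
R = vecs.T

# ... then the rotated Goldstone directions are mutually orthogonal, with
# squared norms equal to the corresponding mass eigenvalues.
G_rot = R @ F
print(np.allclose(G_rot @ G_rot.T, np.diag(mu2)))   # True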
This offers great convenience for the diagonalization of scalar mass matrix afterwards.On the other hand, if the gauge-fixing terms in Eq. (<ref>) are added when Z_p^μ's are not mass eigenstate fields, although this way of gauge-fixing is also legitimate, it would cause further inconveniences. First, after rotation to gauge boson mass eigenstates, the term -1/2ξ^p(∂_μ Z_p^μ)^2 will induce kinetic mixing between gauge bosons in a general R_ξ gauge, spoiling the diagonalization of gauge boson kinetic terms. Secondly, from Eq. (<ref>) it is obvious that now the G̃_p's are not orthogonal to each other. Therefore the diagonalization of scalar mass terms would not be straightforward. Due to the above considerations in the following we adopt the procedure in which gauge-fixing terms Eq. (<ref>) are added after rotating gauge boson fields to their mass eigenstates.Suppose the gauge boson mass matrix _V^2 can be diagonalized as followsR_V^2 R^-1=_DV^2≡diag{μ_1^2,μ_2^2,...,μ_n_M^2}Here R is a n_M× n_M orthogonal matrix, and μ_1^2,μ_2^2,...,μ_n_M^2 are positive. Let us define G_p^m≡R_pq/μ_pG̃_q=(RF)_pi/μ_pG_i, p=1,2,...,n_M (no summation over p) (superscript m denotes canonically-normalized mass eigenstates). Now we can check with the help of Eq. (<ref>)(no summation over p,q)⟨ G_p^m|G_q^m⟩=1/μ_pμ_q(R_V^2 R^T)_pq=δ_pq,∀ p,q=1,2,...,n_M We could further extend the definition of G_p^m to the states S̅_r,r=n_M+1,...,n_S which we have already obtained. According to our diagonalization of physical scalar mass term, S̅_r can be expressed asS̅_r=(PU)_riG_i,r=n_M+1,...,n_Swhere the matrix U and P are introduced in Eq. (<ref>) and Eq. (<ref>), respectively. Finally we can express G_i^m as followsG_i^m=Q_ijG_j,i=1,2,...,n_Swhere the n_S× n_S matrix Q is defined by (no summation over i)Q_ij=(RF)_ij/μ_i, i=1,2,...,n_M, (PU)_ij, i=n_M+1,...,n_S.With the transformation matrix R and Q at our hand it will then be straightforward to derive any three-point or four-point interaction that we are interested in.§ THE CASE OF SLH §.§ Preparation for the calculation The SLH model was proposed as a simple solution to the Higgs mass naturalness problem, making use of the collective symmetry breaking mechanism <cit.>. Its electroweak gauge group is enlarged to SU(3)_L× U(1)_X, and two scalar triplets are introduced to realize the global symmetry breaking pattern [SU(3)_1× U(1)_1]×[SU(3)_2× U(1)_2] →[SU(2)_1× U(1)_1]×[SU(2)_2× U(1)_2] The scalar sector of the SLH model is usually written in a nonlinearly-realized form. In this paper we follow the convention of  <cit.> and parameterize the two scalar triplets as followsΦ_1=exp(iΘ'/f) exp(it_βΘ/f) [0;0; fc_β ] Φ_2=exp(iΘ'/f) exp(-iΘ/ft_β) [0;0; fs_β ]Here we introduced the shorthand notation s_β≡sinβ,c_β≡cosβ,t_β≡tanβ. f is the Goldstone decay constant which is supposed to be at least a few. Θ and Θ' are 3× 3 matrix fields, defined byΘ=η/√(2)+ [ 0_2× 2h;h^†0 ],Θ'=ζ/√(2)+ [ 0_2× 2k;k^†0 ]where h and k are parameterized as (v≈ 246 denotes the vacuum expectation value of the Higgs doublet)h =[ h^0; h^- ], h^0=1/√(2)(v+H-iχ) k =[ k^0; k^- ], k^0=1/√(2)(σ-iω)The covariant derivative in the electroweak sector can be written asD_μ=∂_μ-igA_μ^a T^a+ig_xQ_xB_μ^x, g_x=gt_W/√(1-t_W^2/3)Here t_W≡tanθ_W.A_μ^a and B_μ^x denote the SU(3)_L and U(1)_X gauge fields, respectively. 
The SU(3)_C× SU(3)_L× U(1)_X gauge quantum number of Φ_1,Φ_2 is (1,3)_-1/3, therefore for Φ_1,Φ_2, Q_x=-1/3, and A_μ^a T^a can be written as A_μ^a T^a=A_μ^3/2[100;0 -10;000 ] +A_μ^8/2√(3)[100;010;00 -2 ] +1/√(2)[0W_μ^+Y_μ^0;W_μ^-0X_μ^-; Y_μ^0†X_μ^+0 ] The gauge kinetic terms for Φ_1,Φ_2 are_gk=(D_μΦ_1)^†(D^μΦ_1)+ (D_μΦ_2)^†(D^μΦ_2)The first order (in v/f) gauge boson mixing for A^3,A^8,B_x takes the form[ A^3; A^8; B_x ] = [ 0 c_W-s_W;√(1-t_W^2/3)s_W t_W/√(3)s_W/√(3); -t_W/√(3) s_W√(1-t_W^2/3) c_W√(1-t_W^2/3) ][ Z';Z;A ]We note that Z',Z are not the ultimate mass eigenstate fields. For future convenience we split the Y^0 field into real and imaginary partsY_μ^0≡1/√(2)(Y_Rμ+iY_Iμ), Y_μ^0†≡1/√(2)(Y_Rμ-iY_Iμ)In this paper we intend to focus on the neutral sector, in which there are six scalar degrees of freedom: η,ζ,H,χ,σ,ω. Four degrees of freedom will be eaten to give mass to massive neutral gauge bosons and are unphysical. The remaining two are physical and need to play the role of the observed Higgs-like boson and the pseudo-axion which has been discussed a lot in the literature. The pseudo-axion actually corresponds to the Goldstone boson of a spontaneously broken global U(1) symmetry in the SLH. To give it a mass, the so-called `μ term' needs to be introduced_μ=μ^2(Φ_1^†Φ_2+h.c.)The observed Higgs-like boson will acquire its mass from the Coleman-Weinberg potential (however the μ term will also contribute to its potential). Because _gk,_μ and the Coleman-Weinberg potential conserve CP, it will be convenient to group the neutral bosons into the CP-even and CP-odd sectors: H,σ,Y_R belong to the CP-even sector, while η,ζ,χ,ω,Z',Z,Y_I,A belong to the CP-odd sector. There are no two-point transitions between these two sectors.Some comments concerning the parametrization of Φ_1,Φ_2 in Eq. (<ref>) and Eq. (<ref>) are in order. Firstly, we have chosen to retain the heavy sector fields in Θ', rather than omitting them from the beginning. Apparently the omission of Θ' can be justified by doing a SU(3)_L gauge transformation. This justification is valid, and in the more precise language of Faddeev-Popov gauge-fixing, the omission of Θ' actually corresponds to a certain choice of the gauge-fixing function. However, this omission could lead to future inconvenience, since as we will show, _gk contains two-point transitions between heavy sector gauge bosons and the pseudo-axion. Θ' can be rotated away by a gauge transformation but heavy sector gauge bosons cannot. This means that when doing perturbation theory we need to always carry those two-point vector-scalar transitions, which are quite inconvenient. Nevertheless, the omission of Θ' and heavy sector gauge bosons can indeed be convenient if we only need to obtain the (v/f) coefficient of the mass eigenstate ZHη vertex, since the effect of those omitted two-point vector-scalar transitions will be suppressed due to the heavy gauge boson masses. Secondly, we have chosen to parameterize Φ_1,Φ_2 with two exponentials for each, rather than use a single exponential likeΦ_1,SE=exp[i/f(Θ'+t_βΘ)] [0;0; fc_β ]Also, in Eq. (<ref>) and Eq. (<ref>) the exponential of Θ' has been put to the left of the exponential of Θ. For noncommutative matrices the single exponential parametrization is not mathematically equivalent to the double exponential parametrization. Moreover, the double exponential parametrization will depend on the order of the two exponentials. 
However, these parametrizations are related to each other by field redefinition and should thus be physically equivalent. Which one to use is a matter of convenience. We choose the double exponential parametrization in Eq. (<ref>) and Eq. (<ref>) because it does not introduce mass mixing between heavy and light sector scalars in ℒ_μ and will thus facilitate the mass diagonalization.The aim of this section is to derive the mass eigenstate ZHη vertex in the SLH. With the current double exponential parametrization it is possible to demonstrate that H does not mix with σ, and the scalar kinetic terms are already canonically-normalized in the CP-even sector. Also, the μ term gives η a mass but does not introduce mass mixing between η and other fields. According to our argument in the previous section this means that after all the diagonalization procedure is completed, the whole effect on η is supposed to be a simple rescaling. This offers great convenience for the derivation of the mass eigenstate ZHη vertex. The needed rescaling factor can be easily computed. Going back to the notation of Section <ref>, the inner product between two Goldstone bosons G_i and G_j in Eq. (<ref>) satisfies ⟨ G_i|G_j⟩=(U^-1)_ik(U^-1)_jl⟨ S_k|S_l⟩ =(U^-1)_ik(U^-1)_jlδ_kl =(U^-1)_ik(U^-1)_jk =(V^-1)_ij We employ the convention that η,ζ,χ,ω correspond to indices 1,2,3,4 respectively, therefore⟨η|η⟩=(V^-1)_11Consequently, the ultimate mass eigenstate field η^m is related to η throughη=√((V^-1)_11)η^m To obtain the mass eigenstate ZHη vertex, we also need to know the component of η^m in ζ,χ,ω. For the case of the SLH, let us denote the CP-odd sector elements of the matrix F introduced in Eq. (<ref>) asF=[F_ZηF_ZζF_ZχF_Zω; F_Z'η F_Z'ζ F_Z'χ F_Z'ω;F_YηF_YζF_YχF_Yω ](We assume for the CP-odd sector gauge boson mass matrix, the first, second and third row/column correspond to Z,Z',Y_I, respectively.) In the third row, F_Yη denotes the coefficient of the two-point transition Y_I^μ∂_μη (similar for F_Yζ,F_Yχ,F_Yω). Due to CP-conservation there is no two-point transition between Y_R^μ and the CP-odd scalars, therefore no confusion would arise. The photon field A^μ does not have two-point transition with scalars. We would like to denote the submatrix formed by the second, third and fourth column of F as F̃F̃≡[F_ZζF_ZχF_Zω; F_Z'ζ F_Z'χ F_Z'ω;F_YζF_YχF_Yω ]Now the application of Eq. (<ref>) and Eq. (<ref>) to the CP-odd scalar sector of the SLH leads to[ ζ^m; χ^m; ω^m ] =𝕄^-1_DVR[ [F_Zη; F_Z'η;F_Yη ]η +F̃[ ζ; χ; ω ]]As before the superscript m denotes canonically-normalized mass eigenstate fields. Inverting Eq. (<ref>) and using Eq. (<ref>) will lead to[ ζ; χ; ω ] =F̃^-1R^T𝕄_DV[ ζ^m; χ^m; ω^m ] -√((V^-1)_11)F̃^-1[F_Zη; F_Z'η;F_Yη ]η^mWe define the four-component column vectorΥ≡[√((V^-1)_11); -√((V^-1)_11)F̃^-1[F_Zη; F_Z'η;F_Yη ] ]and denote the first row of R as ℝ_1ℝ_1=[ R_11 R_12 R_13 ]where R_ij represents the (i;j) element of R. We will also need the coefficient matrices ℂ^dH=[C^dH_ZηC^dH_ZζC^dH_ZχC^dH_Zω; C^dH_Z'η C^dH_Z'ζ C^dH_Z'χ C^dH_Z'ω;C^dH_YηC^dH_YζC^dH_YχC^dH_Yω ],ℂ^Hd=[C^Hd_ZηC^Hd_ZζC^Hd_ZχC^Hd_Zω; C^Hd_Z'η C^Hd_Z'ζ C^Hd_Z'χ C^Hd_Z'ω;C^Hd_YηC^Hd_YζC^Hd_YχC^Hd_Yω ] Here C^dH_Zη denotes the coefficient of Z^μη∂_μ H, while C^Hd_Zη denotes the coefficient of Z^μ H∂_μη, and so on. 
If we have calculated the matrices ℂ^dH,ℂ^Hd and the vectors Υ and ℝ_1, then the coefficient of mass eigenstate antisymmetric ZHη vertex (Z^μ(η∂_μ H-H∂_μη) with all fields understood to be mass eigenstate fields) can be obtained asc^as_ZHη=ℝ_1 ℂ^dHΥ -ℝ_1 ℂ^HdΥ/2while the coefficient of mass eigenstate symmetric ZHη vertex (Z^μ(η∂_μ H+H∂_μη) with all fields understood to be mass eigenstate fields) can be obtained asc^s_ZHη=ℝ_1 ℂ^dHΥ +ℝ_1 ℂ^HdΥ/2Here we remark that we divide a general VSS vertex into its antisymmetric and symmetric parts because they exhibit distinct features in physical processes. For example, the symmetric VSS vertex does not contribute when the involved vector boson is on shell. Therefore, only the antisymmetric ZHη vertex is expected to contribute at tree level to decay processes H→ Zη (or η→ ZH if η is heavy) where Z is supposed to be on shell. §.§ Results In principle the derivation of mass eigenstate ZHη vertex with no expansion on the v/f can be carried out manually[In practice, they can be more readily obtained with the help of .]. However, after obtaining V,F and 𝕄_V^2, the calculation of R and the inverse matrices can become extremely cumbersome. Therefore we choose to compute the mass eigenstate ZHη vertex to ((v/f)^3), which makes the results easier to obtain and display. For brevity we define ξ≡v/f in the following.Let us first find the scalar kinetic matrix V and vector-scalar transition matrix F for the SLH. They are computed to be V=[10 √(2)/t_2βξ-7c_2β+c_6β/6√(2)s_2β^3ξ^3-√(2)ξ+5+3c_4β/3√(2)s_2β^2ξ^3;01 -1/√(2)ξ+5+3c_4β/12√(2)s_2β^2ξ^3-2√(2)/3t_2βξ^3; √(2)/t_2βξ-7c_2β+c_6β/6√(2)s_2β^3ξ^3 -1/√(2)ξ+5+3c_4β/12√(2)s_2β^2ξ^31-5+3c_4β/12s_2β^2ξ^2 2/3t_2βξ^2;-√(2)ξ+5+3c_4β/3√(2)s_2β^2ξ^3-2√(2)/3t_2βξ^3 2/3t_2βξ^21 ]+(ξ^4) F= gf[ 1/√(2)c_W t_2βξ^2-1/2√(2)c_Wξ^2 1/2c_Wξ-5+3c_4β/24c_W s_2β^2ξ^31/3c_W t_2βξ^3; ρ/t_2βξ^2 √(2)/√(3-t_W^2)-1+2c_2W/2√(2)c_W^2√(3-t_W^2)ξ^2κξ-κ (5+3c_4β)/12s_2β^2ξ^3-1/3c_W^2√(3-t_W^2)t_2βξ^3; -ξ+5+3c_4β/6s_2β^2ξ^3 -2/3t_2βξ^3 √(2)/3t_2βξ^21/√(2) ] +(ξ^4) where we definedρ≡√(1+2c_2W/1+c_2W),κ≡c_2W/2c_W^2√(3-t_W^2)It is obvious from Eq. (<ref>) that the scalar kinetic terms in the original η,ζ,χ,ω are not canonically normalized, and also obvious from Eq. (<ref>) that there are general vector-scalar two-point transitions. Especially, the two-point Zη transition appears at (ξ^2), only one order of ξ relatively suppressed when compared to Zχ transition [Although the two-point Zη transition appears at (ξ^2), the elimination of this part require an (ξ) field redefinition, due to the fact that the relative suppression of Zη transition to Zχ transition is (ξ). The ZHχ coupling is (1). Therefore, the removal of Zη transition could lead to an (ξ) change in the derived ZHη vertex.]. The appearance of these non-canonically normalized kinetic terms and `unexpected'[By `unexpected' we refer to the fact that η is considered physical, yet there exist two-point transitions such as Z^μ∂_μη in _gk.] vector-scalar transitions is the exact reason for introducing the systematic procedure in Section <ref>.The Υ vector is computed to beΥ=[1+1/s_2β^2ξ^2+(ξ^4); -1/t_2βξ^2+(ξ^4); -√(2)/t_2βξ -3-c_4β/√(2)s_2β^2 t_2βξ^3+(ξ^5);√(2)ξ+3-c_4β/3√(2)s_2β^2ξ^3+(ξ^5) ]A compact expression for Υ valid to all orders in ξ can also be obtained. It isΥ=[c_γ+δ^-1;; -c_γ+δ^-1(s_δ^2 t_β-s_γ^2 t_β^-1);; v/√(2)fc_γ+δ^-1(c_2δt_β-c_2γt_β^-1);; 1/2c_γ+δ^-1(s_2δt_β+s_2γt_β^-1) ]whereγ≡vt_β/√(2)f,δ≡v/√(2)ft_β Expanding the above expression to 𝒪(ξ^3), Eq. (<ref>) can be recovered. 
The above expression for the Υ vector is very useful in derivation of exact results of tree level vertices involving the η particle. The ℂ^dH matrix is computed to be ℂ^dH=[00 -g/2c_W+g(5+3c_4β)/24c_W s_2β^2ξ^2+(ξ^4)0;00 -g(1-t_W^2)/2√(3-t_W^2)+gκ (5+3c_4β)/12s_2β^2ξ^2+(ξ^4)0;00-√(2)g/3t_2βξ+g(7c_2β+c_6β)/30√(2)s_2β^3ξ^3+(ξ^5)0 ]The ℂ^Hd matrix is computed to beℂ^Hd= [ √(2)g/c_W t_2βξ-g(7c_2β+c_6β)/3√(2)c_W s_2β^3ξ^3-g/√(2)c_Wξ+g(5+3c_4β)/6√(2)c_W s_2β^2ξ^3 g/2c_W-g(5+3c_4β)/8c_W s_2β^2ξ^2g/c_W t_2βξ^2;2gρ/t_2βξ-gρ(7c_2β+c_6β)/3s_2β^3ξ^3-gρξ+gρ(5+3c_4β)/6s_2β^2ξ^3gκ-gκ(5+3c_4β)/4s_2β^2ξ^2-g/c_W^2√(3-t_W^2)t_2βξ^2; -g+g(5+3c_4β)/2s_2β^2ξ^2-2g/t_2βξ^22√(2)g/3t_2βξ-√(2)g(7c_2β+c_6β)/15s_2β^3ξ^30 ]+(ξ^4)The matrix R can be computed asR=[1+(ξ^4) -c_2W(1+2c_2W)/8c_W^5√(3-t_W^2)ξ^2+(ξ^4) -√(2)/3c_W t_2βξ^3+(ξ^5);c_2W(1+2c_2W)/8c_W^5√(3-t_W^2)ξ^2+(ξ^4)1+(ξ^4) -√(2)(1+2c_2W)/3c_W^2√(3-t_W^2)t_2βξ^3+(ξ^5);√(2)/3c_W t_2βξ^3+(ξ^5)√(2)(1+2c_2W)/3c_W^2√(3-t_W^2)t_2βξ^3+(ξ^5)1+(ξ^6) ]With this precision it is feasible to obtain c^as_ZHη and c^s_ZHη via Eq. (<ref>) and Eq. (<ref>) to (ξ^3), the results of which arec^as_ZHη=-g/4√(2)c_W^3 t_2βξ^3+(ξ^5) c^s_ZHη=g/√(2)c_W t_2βξ +g/24√(2)c_W s_2β[8/s_2βt_2β+3c_2β(8+6/c_W^2-1/c_W^4)]ξ^3 +(ξ^5) Therefore we arrive at the conclusion that the symmetric ZHη vertex appear at (ξ), while the antisymmetric ZHη vertex does not appear until (ξ^3). The coefficients of these two vertices are presented in Eq. (<ref>) and Eq. (<ref>), respectively. We note that this conclusion differs from what has been derived and used in the literature <cit.> for a long time. In the intermediate steps, one important discrepancy between our results and Ref. <cit.> is that in a footnote Ref. <cit.> claims that choosing the η generator to be the identity matrix would remove the kinetic mixing between η and unphysical Goldstone bosons, while in our derivation Eq. (<ref>) shows there still exists (ξ) kinetic mixing of such kind, which we have checked by various means. It is then not clear whether Ref. <cit.> have made appropriate field redefinitions to diagonalize the SLH vector-scalar system. §.§ Effective Field Theory Analysis The fact that the mass eigenstate antisymmetric ZHη vertex does not appear until (ξ^3) can be understood from an effective field theory (EFT) point of view. Let us focus on the bosonic sector of the SLH, and integrate out heavy sector fields X,Y,Z' and their Goldstones. We are then interested in the EFT formed with the remaining fields, namely the SM and η, which are classified according to gauge transformation properties. Especially, η is a singlet under the SM gauge symmetries. Let us suppose at this moment we have not added the gauge-fixing terms yet. It is obvious that at dimension-four level no gauge-invariant operator can deliver a ZHη vertex. We are then forced to consider higher-dimensional operators. At dimension-five level, let us consider𝒪_1=(∂^μη)[ih^†(D_μ-D_μ)h]where h^†D_μh≡(D_μ h)^† h and D_μ denotes the SM covariant derivative for the Higgs doublet. We may denote its coefficient as c_1/f, in which c_1 is a dimensionless constant. Then we could find in the Lagrangian the following terms ⊃(D_μ h)^†(D^μ h)+1/2(∂_μη)^2+c_1/f𝒪_1 ⊃1/2(∂_μ H)^2+1/2(∂_μχ)^2+1/2(∂_μη)^2 +v/fc_1(∂^μη)(∂_μχ)-m_Z Z_μ∂^μ(χ+v/fc_1η) +m_Z/vZ_μ(χ∂^μ H-H∂^μχ) -2m_Z/fc_1 HZ_μ∂^μη The appearance of scalar kinetic mixing (∂^μη)(∂_μχ) and vector-scalar two-point transition Z_μ∂^μη signal the need for a further field redefinition in the scalar sector. 
Up to (ξ), the transformation is easily found:χ̃=χ+v/fc_1η,η̃=η.The Lagrangian can be written with the transformed fields⊃1/2(∂_μ H)^2+1/2(∂_μχ̃)^2 +1/2(∂_μη̃)^2-m_Z Z_μ∂^μχ̃+m_Z/vZ_μ(χ̃∂^μ H-H∂^μχ̃) -c_1m_Z/fZ_μ(η̃∂^μ H+H∂^μη̃)The two-point vector-scalar transition -m_Z Z_μ∂^μχ̃ can be eliminated by an appropriate R_ξ gauge-fixing term. From the above expression we see that at (ξ), only symmetric mass eigenstate ZHη vertex could survive while the antisymmetric counterpart is removed after the transition to mass eigenstate. This is similar to the situation considered in Ref. <cit.> which also concluded for the case of the SM plus a singlet scalar S that the dimension-five operator cannot give rise to tree-level S→ ZH decay.At dimension-six level, let us consider the operator𝒪_2=(h^† D^μ h)(h^† D_μ h)This operator should have a coefficient of (1/f^2). Apparently it does not contain η. However, if 𝒪_1 is also present, then a field redefinition like Eq. (<ref>) needs to be performed, after which 𝒪_2 could lead to a mass eigenstate antisymmetric ZHη vertex. Since the field redefinition implies an (ξ) η component in χ, the resultant mass eigenstate antisymmetric ZHη vertex should appear at (ξ^3).We may also consider operators with even higher dimension, but of course they cannot lead to (ξ) or (ξ^2) mass eigenstate antisymmetric ZHη vertex.Other bosonic operators (containing Z) at dimension-five or six level can be considered, for example𝒪_3 =η(D_μ h)^†(D^μ h)𝒪_4 =∂^μ(h^† h)[ih^†(D_μ-D_μ)h]However, these operators do not have the correct CP property. Furthermore, in our parametrization η has a shift symmetry η→η+c where c is a constant, which also forbids the appearance of 𝒪_3.Therefore from an EFT analysis, we also arrive at the conclusion that in the SLH, mass eigenstate antisymmetric ZHη vertex cannot appear until (ξ^3) while symmetric ZHη vertex can appear at (ξ)  [According to Ref. <cit.>, a similar situation occurs for the ZHϕ_0 vertex in the left-right twin Higgs model, where ϕ_0 denotes a neutral pseudoscalar. This is consistent with our EFT analysis here, since ϕ_0 does not mix with other physical fields due to an imposed discrete symmetry.], consistent with our explicit calculation in the previous subsection. It is important to note that all of the EFT derivation is based on the field content SM+η (η is a CP-odd singlet [Ref. <cit.> studied the composite two-Higgs-doublet model which contains (1) antisymmetric ZHA vertex since the pseudoscalar A is not a singlet.]), with no additional particles leading to further mass mixings, which could alter the conclusion. § DISCUSSION AND CONCLUSIONIn this paper we revisited the issue of deriving the mass eigenstate ZHη vertex in the SLH. We found that the scalar kinetic terms are not canonically normalized in the usual parametrization and there are `unexpected' vector-scalar two-point transitions that need to be taken care of. We formulated the problem in a generic setting as the diagonalization of a vector-scalar system in gauge field theories. Especially we proved that the scalar mass terms coming from the R_ξ gauge-fixing procedure will be automatically orthogonal to each other if the corresponding gauge fields are rotated to their mass eigenstate prior to gauge-fixing [We refer the reader to Ref. <cit.> for another example in the Littlest Higgs with T-parity.]. This fact greatly simplifies the diagonalization procedure.For the SLH model, we found that the double exponential parametrization of scalar triplets, as shown in Eq. (<ref>) and Eq. 
(<ref>) is convenient for the derivation of ZHη vertex, since in this parametrization the η field is only subject to a simple rescaling in the diagonalization procedure, with which we could display in a simple form the η^m component contained in the original η,ζ,χ,ω fields we started with, as shown in Eq. (<ref>).In principle the derivation of mass eigenstate ZHη vertex could be worked out to all order in ξ≡v/f, however the intermediate results are too lengthy and we find it convenient to display the derivation and results to (ξ^3). The final results of antisymmetric and symmetric ZHη vertices are shown in Eq. (<ref>) and Eq. (<ref>). Contrary to what has existed in the literature <cit.> (which claims an 𝒪(ξ) antisymmetric ZHη vertex) for a long time , we found that the coefficient of the antisymmetric ZHη vertex c^as_ZHη does not show up until (ξ^3). This result is also understood from an EFT point of view. Based on these results we expect that the exotic Higgs decay H→ Zη (or η→ ZH if η is heavy) and the associated production of h and η at hadron or lepton colliders will be much more difficult to observe due to the 𝒪(ξ^3) suppression in the antisymmetric ZHη vertex. On the other hand, the symmetric ZHη vertex already appears at (ξ), however the investigation of its effect involves some subtleties, which will be treated in a follow-up paper.The procedure elucidated in this paper can be applied to other models containing a gauged nonlinearly-realized scalar sector as well. From the experience with the SLH we find it important to examine the quadratic part of the Lagrangian in these models, which could contain non-canonically normalized scalar kinetic terms and `unexpected' vector-scalar two-point transitions. Moreover, finding a convenient parametrization for the exponentials in these models could be very helpful in the diagonalization procedure. We expect to investigate these issues and their phenomenological implications in the future. §.§ Acknowledgements We thank Kingman Cheung for helpful discussion. We also thank the referee who drew our attention to an EFT viewpoint. This work was supported in part by the Natural Science Foundation of China (Grants No. 11135003, No. 11375014 and No. 11635001), and the China Postdoctoral Science Foundation (Grant No. 2017M610992).h-physrev
http://arxiv.org/abs/1709.08929v3
{ "authors": [ "Shi-Ping He", "Ying-nan Mao", "Chen Zhang", "Shou-hua Zhu" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170926102549", "title": "On the $ZHη$ vertex in the simplest Little Higgs Model" }
Asymptotic approximations to the nodes and weights of Gauss–Hermite and Gauss–Laguerre quadratures A. Gil Departamento de Matemática Aplicada y CC. de la Computación. ETSI Caminos. Universidad de Cantabria. 39005-Santander, Spain. J. Segura Departamento de Matemáticas, Estadística y Computación, Universidad de Cantabria, 39005 Santander, Spain. N.M. Temme IAA, 1825 BD 25, Alkmaar, The Netherlands. [Former address: Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam, The Netherlands] December 30, 2023 ============================================================================================================================ Asymptotic approximations to the zeros of Hermite and Laguerre polynomials are given, together with methods for obtaining the coefficients in the expansions. These approximations can be used as a standalone method of computation of Gaussian quadratures for high enough degrees, with Gaussian weights computed from asymptotic approximations for the orthogonal polynomials. We provide numerical evidence showing that for degrees greater than 100 the asymptotic methods are enough for a double precision accuracy computation (15-16 digits) of the nodes and weights of the Gauss–Hermite and Gauss–Laguerre quadratures. § INTRODUCTION As is well known, the nodes x_i, i=1,…, n of Gaussian quadrature rules are the roots of the (for instance monic) orthogonal polynomial satisfying ∫_a^b x^i p_n(x)w(x)dx = 0, i = 0,…, n-1. Among the Gauss quadrature rules, the most popular are those for which the associated orthogonal polynomials are the so-called classical orthogonal polynomials, namely: * Gauss–Hermite: w(x) = e^-x^2; a = -∞, b = +∞. Orthogonal polynomials: Hermite polynomials (H_n(x)); * Gauss–Laguerre: w(x) = x^αe^-x, α > -1; a = 0, b = +∞. Orthogonal polynomials: Laguerre polynomials (L_n^(α)(x)); * Gauss–Jacobi: w(x) = (1-x)^α(1+x)^β, α, β >-1; a = -1, b = 1. Orthogonal polynomials: Jacobi polynomials (P_n^(α,β)(x)). The weights for the n-point Gauss quadrature based on the nodes {x_i}_i=1^n can be written in terms of the derivatives of the orthogonal polynomials at the nodes as follows: * Gauss–Hermite: w_i=√(π) 2^n+1 n!/[H_n^'(x_i)]^2, * Gauss–Laguerre: w_i = Γ(n+α+1)/(n! x_i [L_n^(α)'(x_i)]^2), * Gauss–Jacobi: w_i=M_n,α,β/((1-x_i^2) [P_n^(α,β)'(x_i)]^2), where M_n,α,β=2^α+β+1Γ(n+α+1)Γ(n+β+1)/(n! Γ(n+α+β+1)). Iterative algorithms are interesting methods of computation of Gaussian nodes and weights, very clearly outperforming matrix methods (Golub-Welsch <cit.>) for high degrees. They are based on the computation of the roots of the orthogonal polynomial by an iterative method and the subsequent computation of the weights by using function relations like those in Eqs. (<ref>)-(<ref>). Most iterative methods for the computation of the Gaussian nodes (with the exception of <cit.>) require accurate enough first approximations in order to ensure the convergence of the iterative method (typically the Newton method); for two recent examples, see <cit.>.
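As an illustration of such iterative schemes, the sketch below refines a single Gauss–Hermite node by Newton's method and then evaluates its weight from the Gauss–Hermite formula above. It is written in plain Python; the crude starting value is simply assumed (in practice it would come from asymptotic approximations such as those derived below), and numpy is used only for an independent check.

import math
import numpy as np

def hermite_and_derivative(n, x):
    # Evaluate H_n(x) and H_n'(x) by the three-term recurrence
    # H_{k+1} = 2 x H_k - 2 k H_{k-1}, with H_n' = 2 n H_{n-1}.
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h, 2.0 * n * h_prev

def refine_node_and_weight(n, x0, iters=10):
    # Newton refinement of a guessed node, then the weight from the formula above.
    x = x0
    for _ in range(iters):
        h, dh = hermite_and_derivative(n, x)
        x -= h / dh
    _, dh = hermite_and_derivative(n, x)
    w = math.sqrt(math.pi) * 2.0**(n + 1) * math.factorial(n) / dh**2
    return x, w

# Example: refine a rough guess for the largest node of H_20 and compare
# with numpy's Gauss-Hermite rule; both differences are at rounding level.
n = 20
x, w = refine_node_and_weight(n, 5.3)
xg, wg = np.polynomial.hermite.hermgauss(n)
print(x - xg[-1], w - wg[-1])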
An alternative approach <cit.>, although less efficient for high degrees than iterative methods with asymptotic first approximations <cit.>, consists in guessing these first approximations by integrating a Prüfer-transformed ODE with a Runge–Kutta method, and then refining these guesses by the Newton method (however, asymptotic approximations were also used in this reference for the particular case of Gauss–Legendre quadrature). More recently, non-iterative methods based on asymptotic approximations for the computation of Gauss–Legendre nodes and weights were developed in <cit.>, which were shown to outperform iterative approaches. In this paper, our aim is to provide asymptotic approximations for the accurate computation of the nodes and weights of Gauss–Hermite and Gauss–Laguerre quadrature. These approximations provide a fast and accurate method of computation which can be used for arbitrarily large degree, but which also provide accurate results for not so large degrees (n≥ 100). The methods are able to compute both the nodes and the weights with nearly double precision accuracy, improving the accuracy of the available fixed precision iterative methods. As we will discuss in a subsequent paper, a fully non-iterative approach is also possible for the case of Gauss–Jacobi quadrature <cit.>, similarly as was shown for the particular case of Legendre polynomials <cit.>. § HERMITE POLYNOMIALS In <cit.> first estimates of the zeros of Hermite polynomials are based on work of Tricomi for the middle zeros; these first guesses follow from expansions in terms of elementary functions. For the remaining zeros near the positive endpoint √(2n+1) of the zeros interval the first estimates are taken from the work of Gatteschi, and are in terms of the zeros of the Airy functions. In this section we give an expansion of the zeros based on the asymptotic expansion in terms of Airy functions. The expansion can be used for all positive zeros, however, the approximations are less accurate for the small zeros. For these we give an approximation based on an asymptotic expansion in terms of elementary functions. We start discussing this expansion. §.§ Expansions in terms of elementary functions An expansion in terms of elementary functions for the Hermite polynomials is given in <cit.> with a limited number of coefficients. However, we prefer an expansion for the parabolic cylinder function derived in <cit.>; these results are summarized in <cit.> and <cit.>. The relation between the parabolic cylinder function U(a,z) and the Hermite polynomial H_n(z) is U(-n-1/2,z)=2^-n/2e^-1/4z^2H_n(z/√(2)), n=0,1,2,…. We use the notations μ=√(2n+1), t=x/μ, η(t)=1/2arccos t-1/2 t√(1-t^2), and we have the asymptotic representation [ H_n(x) =2^1/2n+1e^1/2x^2g(μ)/(1-t^2)^1/4 ×; (cos(μ^2η-1/4π)A_μ(t) -sin(μ^2η-1/4π)B_μ(t)), ] with expansions [ A_μ(t)∼∑_s=0^∞(-1)^su_2s(t)/(1-t^2)^3sμ^4s, B_μ(t)∼∑_s=0^∞(-1)^su_2s+1(t)/(1-t^2)^3s+3/2μ^4s+2, ] uniformly for -1+δ≤ t≤ 1-δ, where δ is an arbitrary small positive number. The first few coefficients are u_0(t)=1, u_1(t)=t(t^2-6)/24, u_2(t)=(-9t^4+249t^2+145)/1152, and more u_s(t) follow from the recurrence relations [ (t^2-1)u'_s(t)-3stu_s(t)=r_s-1(t),; 8r_s(t)=(3t^2+2)u_s(t)-12(s+1)tr_s-1(t)+4(t^2-1)r'_s-1(t).
] The quantity g(μ) is only known in the form of an asymptotic expansion g(μ)∼ h(μ)(1+1/2∑_k=1^∞γ_k/(1/2μ^2)^k), where the coefficients γ_k are defined by Γ(1/2+z)∼√(2π)e^-z z^z ∑_k=0^∞γ_k/z^k, z→∞. The first ones are γ_0=1, γ_1=-1/24, γ_2=1/1152, γ_3=1003/414720, γ_4=-4027/39813120. For h(μ) we have h(μ)=2^-1/4μ^2-1/4e^-1/4μ^2μ^1/2μ^2-1/2=2^-1/2(n+1/2)^1/2ne^-1/2n-1/4. §.§.§ Expansions of the zeros Next we discuss expansions for the zeros of H_n(x), x_k, 1≤ k ≤ n (x_1<x_2<⋯<x_n). We introduce a function W(η) (see (<ref>)) W(η)=cos(μ^2η-1/4π) A_μ(t) -sin(μ^2η-1/4π)B_μ(t), and try to solve the equation W(η)=0 for large values of n. We define a first approximation η_0 such that the cosine term vanishes and η_0 and the corresponding t and x-values are (in first-order approximation) related to a zero of H_n(x). The small zeros are around x=0 and t=0, that is, for η near η(0)=1/4π. We define η_0=(n-k+3/4)π/μ^2, k=1,2,…,n. In this way, cos(μ^2η_0-1/4π)=0, and this choice of η_0 follows from the location of the zeros of the cosine function and those of H_n(x). Observe that, when n is odd and k=1/2(n+1), that is, x_k=0, it follows that η_0=1/4π. If η=1/4π we have t=0 and x=0. We assume that the equation W(η)=0 has a solution η that can be expanded in the form η=η_0+ε, ε∼η_1/μ^2+η_2/μ^4+η_3/μ^6+η_4/μ^8+…, and consider the Taylor expansion and equation W(η)+ε/1!d/dηW(η)+ε^2/2!d^2/dη^2W(η)+ε^3/3!d^3/dη^3W(η)+…=0, where W(η) and its derivatives are taken at η=η_0. Because the expansions in (<ref>) are in terms of t, we need dt/dη=-1/√(1-t^2). When we have found η, the corresponding t-value is obtained by inverting the relation for η(t) in (<ref>). For this purpose we use the expansion [ t=-η̄-1/6η̄^3-13/120η̄^5-493/5040η̄^7+⋯,; η̄=η-1/4π=-1/2arcsin t-1/2t√(1-t^2) ] It is also possible to invert the relation (<ref>) by using an iterative method. For this purpose it is convenient to write t=sin1/2θ. Then the equation to be solved for θ∈(-π,π) reads 4η̄+θ+sinθ=0. A Newton or related procedure can be used to solve this equation, but in our algorithms we prefer to use the series shown in (<ref>), which is faster (and of more restricted applicability, but sufficient for our purposes). After a few symbolic manipulations we find that η_2k+1=0, k=0,1,2,…, and that the first nonzero coefficients are [ η_2 =-t(t^2-6)/(24(1-t^2)^3/2),; η_4 = -t(56t^8-252t^6 +351t^4+ 2340t^2+3780)/(5760(1-t^2)^9/2),; η_6 = -t(3968t^14-29760t^12+95544t^10-173232t^8+231237t^6 -1890882t^4-6068580t^2-1690920)/(322560(1-t^2)^15/2). ] Because we have a recurrence relation for the coefficients u_s(t) in (<ref>), it is quite easy to generate many u_s(t) and also many more coefficients η_j than given in (<ref>). Algorithm For the computation of the approximations of the zeros x_k we summarize the procedure as follows. * To approximate the zero x_k, compute the starting value η_0, given in (<ref>). * Compute the corresponding t-value from (<ref>) (with η=η_0). * With these values η_0 and t, compute the coefficients η_k in (<ref>). * Next, compute η from (<ref>). * Then the better value of t again follows from (<ref>). * Finally, the approximation for the requested zero is x_k∼μ t, see (<ref>). §.§ Expansions in terms of Airy functions For the large zeros we shall use the Airy-type expansion of the Hermite polynomials.
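Before turning to that Airy-type expansion, the six-step procedure just summarized can be condensed into a short routine. The following Python sketch keeps only the η_2 and η_4 corrections and the series inversion of η(t) given above; the routine name and truncation are ours, and it is meant as an illustration rather than the finite-precision implementation used for the numerical tests later in the paper.

import math

def hermite_zero_elementary(n, k):
    # Approximate the k-th zero (k=1 is the smallest) of H_n(x) from the
    # elementary-function expansion, keeping only eta_2 and eta_4.
    mu2 = 2.0 * n + 1.0
    mu = math.sqrt(mu2)
    eta0 = (n - k + 0.75) * math.pi / mu2

    def t_from_eta(eta):
        e = eta - math.pi / 4.0          # the shifted variable eta - pi/4
        return -(e + e**3 / 6.0 + 13.0 * e**5 / 120.0 + 493.0 * e**7 / 5040.0)

    t = t_from_eta(eta0)
    s = 1.0 - t * t
    eta2 = -t * (t * t - 6.0) / (24.0 * s**1.5)
    eta4 = -t * (56.0*t**8 - 252.0*t**6 + 351.0*t**4 + 2340.0*t**2 + 3780.0) \
           / (5760.0 * s**4.5)
    eta = eta0 + eta2 / mu2**2 + eta4 / mu2**4
    return mu * t_from_eta(eta)          # x_k ~ mu t

# Example: the smallest positive zero of H_11, which is 0.656810...
print(hermite_zero_elementary(11, 7))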
We write (see <cit.>)H_n(x)=√(π) 2^1/2n+1μ^1/3χ(ζ)e^1/2x^2g(μ)((μ^4/3ζ)A(ζ) +μ^-8/3^'(μ^4/3ζ)B(ζ)), with expansions A(ζ)∼∑_s=0^∞A_s(ζ)/μ^4s,B(ζ)∼∑_j=0^∞B_s(ζ)/μ^4s,μ→∞, where μ=√(2n+1),t=x/μ, and g(μ) is the function with asymptotic expansion given in (<ref>).For ζwe have the definition[ 2/3ζ^3/2=1/2t√(t^2-1)-1/2arccosh t, t≥1,; ;2/3(-ζ)^3/2=η(t), -1<t≤1, ]where η(t) is defined in (<ref>); χ(ζ) is defined byχ(ζ)=(ζ/t^2-1)^1/4. The variable ζ is analytic in a neighborhood of t=1. We have the differential equationζ(dζ/dt)^2=t^2-1,and we have the following expansions in powers of t-1 and ζ the expansions[2^-1/3ζ=(t-1)+110(t-1)^2-2175(t-1)^3+⋯,;t= 1+ζ-110ζ^2+11350ζ^3+⋯,ζ=2^-1/3ζ. ]The relation between t and ζ is singular at t=-1, ζ(-1)=-(3π/4)^2/3=-1.770⋯, and the series in the second line converges for |ζ|<1.770⋯. The coefficients are given by [ A_s(ζ)=ζ^-3s∑_m=0^2sβ_m (χ(ζ))^6(2s-m)u_2s-m(t),; B_s(ζ)= -ζ^-3s-2∑_m=0^2s+1α_m (χ(ζ))^6(2s-m+1)u_2s-m+1(t), ] where u_s(t) are as in <ref>, and [α_m= (2m+1)(2m+3)⋯(6m-1)/m! (144)^m, α_0=1,;β_m= -6m+1/6m-1α_m. ]A recursion for α_m reads α_m+1=α_m(6m+5)(6m+3)(6m+1)/144(m+1)(2m+1),m=0,1,2,… .The first few coefficients of the expansions in (<ref>) are given by: [A_0(ζ) = 1,B_0(ζ)=-48χ^6u_1(t)+5/48 ζ^2,;A_1(ζ) = 4608χ^12u_2(t)-672χ^6u_1(t)-455/4608 ζ^3,;B_1(ζ) = -663552χ^18u_3(t)+69120χ^12u_2(t)+55440χ^6u_1(t)+85085/6635528 ζ^5. ] Here χ=χ(ζ) is given by (<ref>). To avoid numerical cancellations when ζ is small in the above representations,we can expand the coefficients, which are analytic at ζ=0, in powers of ζ. §.§.§ Expansions of the zeros An expansion for the zeros is obtained as follows. First we determine the zeros in terms of ζ. For the first-order approximation of a zero x_n-k+1 of H_n(x) wecompute ζ_0=μ^-4/3a_k, where a_k is a zero of the Airy function (x). Because of the symmetry of the Hermite polynomial, we assume that 1≤ k ≤⌊1/2n⌋. We introduce an expansion ofζ corresponding to the zero of H_n(x) by writing ζ=ζ_0+,∼ζ_1/μ^4+ζ_2/μ^8+…, and we try to obtain the ζ_j, j≥1. We introduce a function W(ζ) by writing (see (<ref>)) W(ζ)=(μ^4/3ζ)A(ζ) +μ^-8/3^'(μ^4/3ζ)B(ζ), and expand W(ζ) at ζ=ζ_0, writing ζ=ζ_0+, which givesW(ζ_0)+/1!W^'(ζ_0)+ ^2/2!W^''(ζ_0)+… = 0. In this equation we substitute the expansion given in (<ref>) and those in (<ref>), compare equal powers of μ and obtain the first few coefficients [ ζ_ 1= -B_0(ζ_0),;ζ_2= -1/3(3B_1(ζ_0)-3B_0(ζ_0) A_1(ζ_0)-3 B_0(ζ_0) B_0^'(ζ_0)+ζ_0 B_0(ζ_0)^3), ] where the derivative is with respect to ζ and the coefficients are given in (<ref>). To obtain the derivative of B_0(ζ) we need dt/dζ=χ^2(ζ),dχ/dζ=1-2tχ^6(ζ)/4ζχ(ζ), whichfollow from(<ref>) and (<ref>). This gives d/dζB_0(ζ)=χ^6t^3+6χ^12t^4-6tχ^6-36t^2χ^12-6χ^8ζ t^2+12χ^8ζ+10/48ζ^3. For small values of ζ we have expansions of the form [ζ_ 1 =2^1/3(9/280-7/450ζ+1359/134750ζ^2+…), ζ=2^-1/3ζ,; ζ_2 = 2^1/3(-1539/130000+1550191/138915000ζ- 193351/16362500ζ^2+…). ]Algorithm When we have obtained a value ζ that corresponds to a zero of the Hermite polynomial, the corresponding t-value should be obtained from the second equation in (<ref>). This equation has to be solved by a numerical procedure. A first estimate, when ζ is small, can be obtained from the second line in (<ref>), and more terms of that expansion can easily be obtained by a symbolic package.For an iterative procedureit is convenient to substitutet=cos1/2θ, with θ∈[0,2π). 
Then the equation to be solved for θ reads 8/3(-ζ)^3/2=θ-sinθand we can use, for instance, the Newton method for this purpose. However, in our algorithms we prefer to invert using enough terms in (<ref>), which is a faster method.We proceed as follows for computing approximations for the zeros. * To approximate the zero x_n-k+1, define the starting value ζ_0=μ^-4/3a_k, 1≤ k ≤1/2 n, where a_k is a zero of the Airy function. * Compute t from the second line of (<ref>).* With these values ζ_0 and t, compute the coefficients ζ_j in (<ref>) and χ(ζ_0) from (<ref>).* Next, compute ζfrom (<ref>).* Then the better value of t again follows from the second line of (<ref>).* Finally, x_n-k+1∼ tμ. §.§ Numerical performance of the expansionsThe approximation(<ref>)(obtained from the expansion in terms of elementary functions) is accurate for large n and particularly for the small zeros. As a first numerical example ofthe accuracy, even for quite small n, we take n=11, k=7 (the smallest positive zero). Then, η_0=0.648807 andthe corresponding t and x-values are0.137021 and 0.657129. The seventh zero of H_11(x) is 0.656810…, and the relative error is 0.00048.With the shown coefficients in (<ref>)we obtainη=0.6488732440401913 and x =0 .6568095658827670, with relative error 1.52×10^-9. The computations are done with Maple, with Digits = 16. With n=51 and k=27 (the smallest positive zero), the relative error becomes 10^-15.The expansions in (<ref>) are uniformly valid for -1+δ≤ t≤ 1-δ, where δ is an arbitrary small positive number. Hence, for the large zeros this method is not reliable, and we need to restrict the number of zeros that we can compute. For example, we can request that | t|≤1/2, the corresponding η-value satisfies |η-1/4π|≤1/12π+1/8√(3)=0.478. When we use the first estimate η_0 given in (<ref>) in the equation |η_0-1/4π|≤ 0.478, we find for k the bound |1/2n-k|≲0.478/π(2n+1)=0.304 n+0.152. This says that roughly 0.3n of the positive zeros can be computed by using the asymptotic approximations of <ref>, when we request| t|≤1/2. In practice, as we will see later, the expansions in terms of elementary functions can be used for larger values of |t| and when they are accurate, they are preferable to the expansions in terms of Airy functions because the algorithm is faster.More extensive tests of the expansions have been performed using finite precision implementations coded in Fortran 90. In these implementations only non-iterative methods (power series) are used for the inversion of the variables.Figure <ref> shows the performance of the expansion in terms of elementaryfunctions. In this figure, the relative accuracy obtained forcomputing the positive zeros of H_n(x) for n=100, 1000, 10000 is plotted.The label i in the abscissa represents the order of the zero (starting from i=1 for the smallest positive zero).The algorithm for testing the accuracy of the zeros has been implemented in finite precision arithmetic using the first 6 non-zero terms in the expansion. We compare the asymptotic expansionsagainst an extended precision accuracy (close to 32 digits)iterative algorithm which uses the global fixed point method of<cit.>, with orthogonal polynomials computed by local Taylor series.As can be seen, a very large number of the zerosfor the three values of n tested can be computed with the expansion with a relative accuracy near full double precision. Actually, the points not shown in the plot correspond to values with all digits correct in double precision accuracy. 
However, the expansion fails for the largest zeros, as expected. As for the asymptotic expansion in terms of the zeros of Airy functions (<ref>), the situation is the reverse: the further we are from the turning point at t=1 (ζ=0), the largerthe relative errors become. Therefore, for n fixed the maximum errors in the computation are obtained for the small zeros.For example, using Maple with Digits = 16, we take n=11 and 6 coefficients in (<ref>). Then we have for the zero x_6at the origin ζ_0=μ^-4/3a_6=-1.115618210110694, t=-0.1668495251592333× 10^-3, and the better valuesζ=-1.115460237225190 and t=1.746192313216916 × 10^-13. This givesx_6≐ 8.374444141492045× 10^-13and for the largest zero x_11 the relative accuracy is 10^-15. A test of the expansion for very large values of n using a finite precision arithmetic implementation is shown inFigure <ref>. In this figure, we show the relative accuracyobtained with the asymptotic expansion (<ref>) for computing the largest 1000 positive zeros of H_n(x) for n=10000, 100000, 1000000. As can be seen, an accuracy near 10^-16 can be obtained in all cases. The zeros a_k of the Airy function have been computed usinga_k=-T(38π(4k-1)), where T(t)has the Poincaré's expansion (see <cit.>) T(t)∼ t^2/3(1+5/48t^-2-5/36t^-4+77125/82944t^-6-108056875/6967296t^-8+⋯). This expansion is valid for moderate/large values of k. In our implementation we use pre-computed values for the first 10 zeros of the Airy function and the Poincaré's expansion for the rest.The accuracy of the two expansions(<ref>)and (<ref>)for approximatingthe zeros of Hermite polynomials for n=100 is compared in Figure <ref>. As can be seen, the combined use of both expansions allow the computation of all the zeros with a double precision accuracy of 15-16 digits.In Table <ref> we illustrate the efficiency of the expansions for approximating the zeros of Hermite polynomials for n=100, 10000. In particular, the first 0.6n zeros of the Hermite polynomials have been computed with the asymptotic expansion in terms of elementary functions and the last 0.4n zeros with the asymptotic expansion in terms of the zeros of Airy functions. With this splitting and by taking enough terms, it is possible to use the series(<ref>) and (<ref>) for computing the t-values in the expansions instead of using an iterative method for solving the non-linear equations. In the table we show average CPU times (obtained using an Intel Core i54310U 2.6GHz processor under Windows) pernode. The second column shows the CPU times when the number of terms required (no more than five or six depending on the expansion) for a double precision accuracy for the zeros is considered, while the first column shows the CPU times for only two terms.For n=10000 this is the number of terms needed in the expansions to obtain double precision accuracy. For n=100 we observe that there is not much difference in speed between the more simple (2 terms) and themore accurate approximation; this favors the use of accurate asymptotic approximations with no ulterior iterative refinements.The table also shows that the computation of the expansionin terms of elementary functions is more efficientthan the expansion in terms of zeros of Airy functions although for n=10000 the difference in speed is not very significant. 
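Before turning to the weights, the following Python sketch illustrates how the elementary-function expansion for the zeros translates into a routine. It is only a minimal sketch under stated assumptions (NumPy available, the truncated inversion series for t(η), and a single correction term η_2); the names hermite_zero_elementary and t_from_eta are illustrative and are not part of the Fortran 90 implementation used for the timings above.

```python
import numpy as np

def t_from_eta(eta):
    # Invert eta = -(1/2)arcsin(t) - (1/2) t sqrt(1-t^2) with the truncated series
    # t = -eta - eta^3/6 - 13 eta^5/120 - 493 eta^7/5040, where eta = eta(t) - pi/4.
    return -(eta + eta**3/6 + 13*eta**5/120 + 493*eta**7/5040)

def hermite_zero_elementary(n, k):
    """Approximate the k-th zero (increasing order) of H_n(x) with the
    elementary-function expansion; reliable for the central zeros only."""
    mu2 = 2*n + 1
    mu = np.sqrt(mu2)
    eta0 = (n - k + 0.75) * np.pi / mu2            # makes the cosine factor vanish
    t = t_from_eta(eta0 - np.pi/4)                 # first inversion
    eta2 = -t*(t**2 - 6) / (24*(1 - t**2)**1.5)    # first non-vanishing correction
    eta = eta0 + eta2 / mu2**2                     # eta ~ eta0 + eta_2/mu^4
    t = t_from_eta(eta - np.pi/4)                  # refined inversion
    return mu * t                                  # x_k ~ mu * t

# Example: smallest positive zero of H_11 (k = 7); reference value 0.6568095...
print(hermite_zero_elementary(11, 7))
```

With only the η_2 term this reproduces the smallest positive zero of H_11 with a relative error of order 10^-6; including the η_4 and η_6 terms recovers the much higher accuracies quoted above.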
Once the nodes (the zeros of H_n(x)) of the Gauss–Hermite quadrature have been computed, approximations to the weights given in (<ref>)can be also obtained by using the asymptotic results in <ref> (elementary functions)and <ref> (Airy functions). For the computation of the weights, one needs to be careful in order to avoidoverflows in the computation both as a function of n and as a function of the values of the nodes. With respect to the dependence on n, we observe that the large factor 2^n n! in (<ref>) can be cancelled out by the factors in front of the expansions (<ref>) and (<ref>). This is asexpected because using the first approximations from the elementary asymptotic expansions as n→∞ we obtain the estimate for the weights:w_i∼π√(2n)e^-x_i^2.This estimation shows that underflow may occur for computing the large zeros. Inthis case the range of computation of the weights can be enlarged by scaling the factore^x^2/2 in the asymptotic approximations and computing scaled weights given byw̃_i =w_i e^x^2_i . With this, the overflow/underflow limitations are eliminated.Using (<ref>) this scaled weight can be written asw̃_i=√(π)2^n+1n!y'(x_i)^2, y(x)=e^-x^2/2H_n(x) .This expression does not have overflow/underflow limitations neither with respect to x nor with respect to n. Using (<ref>) or (<ref>) we observe that the dominant factors e^-x^2/2 and 2^n+1n! can be explicitly cancelled out. Another interesting property of this expression is that it is well conditioned with respect to the values of the nodes. Indeed, we have w̃_i=W(x_i), where we define the functionW(x)=√(π)2^n+1n!/y'(x)^2. Now, it is straightforward to check that W'(x_i)=0 which means that, at the nodes x=x_i, the value of the weight is little affected by variations on the actual value of the node. This, as we will show, will allow us to compute scaled weights with nearly full double precision in all the range. For computing the scaled weights in this way, we need to compute y'(x) from the asymptotic expansions (<ref>) or (<ref>). This is a straightforward computation and, for instance, starting from(<ref>) we have thaty'(x)=2^1/2 n+1g (μ)/μ(1-t^2)^5/4[ cos(μ^2 η-14π)C_μ(t)-sin(μ^2 η-14π)D_μ(t)],where[ C_μ(t)∼∑_s=0^∞(-1)^s a_s(1-t^2)^3sμ^4s, D_μ(t)∼∑_s=-1^∞(-1)^s b_s(1-t^2)^3s+3/2μ^4s+2 ]and[a_s=(1/2 +6s) t u_2s +u_2s+1+(1-t^2)u̇_2s,;; b_-1=1,b_s=(7/2 +6s) t u_2s+1 +u_2s+2+(1-t^2)u̇_2s+1, s≥ 0. ]The dots mean derivative with respect to t.Two examples of computation of the scaled weights (for n=1000, 10000) using the expansion in terms of elementary functions are shown in Figure <ref>.As can be seen, most of the scaled weights can be computed with almost double precision accuracy.Also, as expected, there is some loss of accuracy for the weights corresponding to the largest nodes (as discussed, for these values one has to use the expansion for the Hermite polynomials in terms of Airy functions). Typically, theadditional computation of the weights requires about 70% more CPU time than when usingthe asymptotic expansion in terms of elementary functions and about 133% more CPU time than when using the asymptotic expansion in terms of Airy functions (due to the computation of these functions). This shows that, when possible, the direct computation of nodes and weightsusing asymptotics will be more efficient than computing more crude first approximations and then refining with an iterative method which uses values of the orthogonal polynomial. 
Each time the function (and its derivative when we use Newton's method) is computed, the CPU time increases by this same amount,and only when one iteration is needed the speed would be comparable. § LAGUERRE POLYNOMIALS We consider asymptotic expansions for the Laguerre polynomials L_n^(α)(x) in terms of Bessel functions, Airy functions and Hermite polynomials. Some of these expansions have been used to build an efficient scheme for computing the Laguerre polynomials for large values of n and small values of α (-1<α≤ 5) <cit.>. We discuss how to use the expansions to obtain approximations to the zeros of Laguerre polynomials.Later, in Section <ref> we give expansions valid for large n and α.For a survey of the work of several authors on inequalities and asymptotic formulas for the zeros of L_n^(α)(x) as n or αor ν=4n+2α+2 →∞, we refer to <cit.>.See also <cit.>, were an alternative method, based on nonlinear steepest descentanalysis of Riemann–Hilbert problems, is given for Laguerre-type Gaussian quadrature (and in particular Gauss–Laguerre). §.§ A simple Bessel-type expansionWe have the following representation[We summarize the results of<cit.>.]L_n^(α)(x)=(x/n)^-1/2α e^1/2x(J_α(2√(nx))A(x)-√(x/n)J_α+1(2√(nx))B(x)),with expansionsA(x)∼∑_k=0^∞ (-1)^ka_k(x)/n^k, B(x)=∑_k=0^∞ (-1)^k b_k(x)/n^k n→∞,valid for bounded values of x and α.The coefficients a_k(x) and b_k(x) follow from the expansion of the functionf(z,s)=e^xg(s)(s/1-e^-s)^α+1, g(s)=1/s-1/e^s-1-1/2.The function f is analytic in the strip | s|<2π and it can be expanded for| s|<2π intof(x,s)=∑_k=0^∞ c_k(x) s^k.The coefficients c_k(x)are combinations of Bernoulli numbers and Bernoulli polynomials, the first ones being (with c=α+1) [ c_0(x)=1, c_1(x)=1/12(6c-x),; c_2(x)=1/288(-12c+36c^2-12xc+x^2),; c_3(x)= 1/51840(-5x^3 + 90x^2c +(-540c^2 + 180c+72)x +1080c^2(c-1)). ] The coefficients a_k(x) and b_k(x) are in terms of the c_k(x) given by[ a_k(x)= ∑_m=0^k km(m+1-c)_k-mx^m c_k+m(x),; b_k(x)= ∑_m=0^k km(m+2-c)_k-mx^m c_k+m+1(x), ]k=0,1,2,…, and the first relations are[ a_0(x)= c_0(x)=1, b_0(x)= c_1(x),; a_1(x)= (1-c)c_1(x)+xc_2(x), b_1(x)= (2-c)c_2(x)+xc_3(x),;a_2(x)= (c^2-3c+2)c_2(x)+(4x-2xc)c_3(x)+x^2c_4(x),; b_2(x)=(c^2-5c+6)c_3(x)+(6x-2xc)c_4(x)+x^2c_5(x), ]again with c=α+1. §.§.§ Expansions of the zeros Approximations of the zeros of L_n^(α)(x) can be obtained from (<ref>) and expressed in terms of zeros of the Bessel function J_α(x). Because the expansion is valid for bounded values of x,the approximation can only be used for the small zeros. For example, in Table <ref>we show the results for the first 10 zeros when n=100, and for these early zeros the approximations are satisfactory. We write (see (<ref>))W(x)=J_α(2√(nx))A(x)-√(x/n)J_α+1(2√(nx))B(x),A first approximation to the zero x_k ofL_n^(α)(x) follows from writing 2√(nx_k)=j_k, where j_k is the kth zero of J_α(x). A further approximation will be obtained by writingx_k=ξ+, ξ=1/4nj_k^2. Byexpanding W(x) at the zero x=ξ+, assuming thatis small, we findW(ξ)+/1!W^'(ξ)+^2/2!W^''(ξ)+…=0, and substituting an expansion of the form ∼ξ_1/n+ξ_2/n^2+ξ_3/n^3+…, we find the following first few values[ξ_1=ξ/12(ξ-6(α+1)),;ξ_2= ξ/720(150-90ξ+11ξ^2+360α+210α^2-90ξα),;ξ_3= ξ/20160(2121ξ-770ξ^2+73ξ^3-6300α-8820α^2 +;5040ξα-3780α^3-770ξ^2α+2751ξα^2-1260), ] where ξ is defined in (<ref>).Algorithm and first numerical examples for the zerosThe algorithm for computing the asymptotic approximation of the zeros runs in the same way as described for the Hermite polynomials, but is quite simple now. 
First compute ξ from (<ref>) and the ξ_j given in (<ref>), then computefrom (<ref>), and finally x_k from (<ref>).In Table <ref> we show the results of a first numerical verification for the expansion. We take n=100, α=1/3, and compute the first 10 zeros by using Maple with Digits = 32.We show the relative errors in our approximations when we take 2, 4 and 6 terms in the expansion (<ref>). As can be seen in the table,it is possible to obtain an accuracy near double precision (10^-16) in the computation of the first two zeros of L^(1/3)_100(x) using just the expansion with 6 terms.§.§ An expansion in terms of Airy functionsWe start with the representation[We summarize results of <cit.>;see also <cit.>.] L_n^(α)(νσ)=(-1)^ne^1/2νσχ(ζ)/2^αν^1/3((ν^2/3ζ)A(ζ) +ν^-4/3^'(ν^2/3ζ) B(ζ)) withexpansions A(ζ)∼∑_j=0^∞α_2j/ν^2j, B(ζ)∼∑_j=0^∞β_2j+1/ν^2j, n→∞, uniformly for bounded α andσ∈(σ_0,∞], where σ_0∈(0,1), a fixed number. Here ν=4κ,κ=n+12(α+1), χ(ζ)=2^1/2σ^-1/4-1/2α(ζ/σ-1)^1/4, and 23(-ζ)^3/2=12(arccos√(σ)-√(σ-σ^2)) if 0<σ≤1, 23ζ^3/2=12(√(σ^2-σ)-arccosh√(σ)) if σ≥1. We have the relationζ^1/2dζ/dσ=√(σ-1)/2√(σ). For the derivative we can use the relation d/dxL_n^(α)(x)=L_n^(α)(x)-L_n^(α+1)(x).The first coefficients of the expansions in (<ref>) are α_0=1, β_1=-1/4b^3(f_1-bf_2), where b=√(ζ) if ζ≥0 and b=i√(-ζ) when ζ≤0, and [ f_1=i(σ+3α(σ-1))σ^2a_1^3-2/3a_1^2σ√(σ(1-σ)) ,; f_2=-4-8σ^2(σ+3σα-3α)a_1^3+σ^4(12σ-3-4σ^2+12α^2(σ-1)^2)a_1^6/12σ^3a_1^4(σ-1),;a_1=(4ζ/σ^3(σ-1))^1/4. ] More coefficients can be obtained by the method described in <cit.>. Starting point in this case is the integral (see<cit.>) 1/2π i∫_ f(u) e^ν(1/3u^3-ζ u) du, whereis an Airy-type contour and f(u)is given byf(u)=(1-z^2)^1/2(α-1)dz/du. The relation between z and u follows in this case from the cubical transformation 12arctanh z-12 zσ =13u^3-ζ u, dz/du= 2(u^2-ζ)(1-z^2)/1-σ+σ z^2. The function f(u) can be expanded in a two-point Taylor seriesf(u)=∑_k=0^∞(c_k+ud_k)(u^2-ζ)^k, in which the coefficients can be expressed in terms of the derivatives of f(u) at u=±√(ζ). An integration by parts procedure then gives the coefficients α_2j and β_2j+1 of (<ref>).In <ref> we describe in detail this method for a Bessel-type expansion.§.§.§ Expansions of the zerosWe write W(ζ)=(ν^2/3ζ)A(ζ)+ν^-4/3^'(ν^2/3ζ)B(ζ), where A(ζ) and B(ζ)have the expansions shown in (<ref>). Similarly as in <ref> we write the zeros x_j ofL_n^(α)(x) in terms of the zeros a_k of the Airy function. These zeros are negative, and a_1 will correspond the nth zero of L_n^(α)(x), a_2 with the (n-1)th zero, and so on. A zero ofL_n^(α)(x) is a zero of W(ζ) and it can be written in terms of ζ in the form ζ=ζ_0+, ζ_0=ν^-2/3a_j, and we assume that we can expand ∼ζ_1/ν^2+ζ_2/ν^4+ζ_3/ν^6+…. Byexpanding W(ζ) at ζ_0 we have W(ζ_0)+/1!W^'(ζ_0)+^2/2!W^''(ζ_0)+…=0, and substituting the expansions shown in(<ref>) we can obtain the coefficients ζ_j. We obtain ζ_1=-β_1,ζ_2=-(β_3+16ζ_0ζ_1^3+ζ_1α_2+ζ_1d/dζβ_1+12ζ_0β_3ζ_1^2), where β_1 given in (<ref>). The coefficients are evaluated at ζ_0.Algorithm and first numerical examples for the zeros In <ref> we have described the algorithm for computing the asymptotic approximation of the zeros for the Airy case. The present algorithm runs in the same way.For the zero x_n+1-j, j=1,2,…, first compute ζ_0, from (<ref>). Thencompute σ_0 by inverting the first relation in (<ref>). This is done by using the expansion σ=1+ζ+15ζ^2-3175ζ^3+237875ζ^4+…,ζ=2^2/3ζ. An alternative would be to use an iterative method. 
In that case it is convenient to write σ=cos^2θ, and the equation to be solved for θ becomes 8/3(-ζ)^3/2=θ-sinθ, 0≤θ<π. Withσ=σ_0 we compute the coefficients in (<ref>), thenand ζ from (<ref>) and (<ref>). A final inversion of the relationin the first line of (<ref>) gives the σ, and then x_n+1-j∼νσ.For example, we take n=100, α=1/3, and we compute the zero x_100=375.635158667⋯ by using Maple. We compute (see(<ref>)) ζ_0= -2.3381074105ν^-2/3=-0.0428779491924. Upon solving the first equation in (<ref>) for σ, we obtain σ_0=0.9328675228515. With this value, a first approximation of the zero is x_100∼νσ_0≐ 375.634655868, with a relative accuracyof 1.34 × 10^-6.Finally, we compute ζ_1=0.131145197575, compute ζ∼ζ+ζ_1/ν^2, invert again the first relation in (<ref>), giving σ=0.932868771534 and x_100≐ 375.635158671, a relative accuracyof 1.08 × 10^-11.For the halfway zero x_51 we found the relative accuracies6.71 × 10^-6 and 1.52 × 10^-10.§.§ Another expansion in terms of Bessel functionsAfter substituting t=e^-s in the integral representation[We summarize the results of <cit.>; see also <cit.>.]L_n^(α)(z)= 1/2π i∫_ (1-t)^-α-1e^-tz/(1-t) dt/t^n+1, we obtain the representation e^-νρL_n^(α)(2νρ)=2^-α/2π i∫_-∞^(0+)e^ν h(s,ρ)(sinh s/s)^-α-1 ds/s^α+1, where ν=2n+α+1 and h(s,ρ)=s-ρ s. The contourstarts at -∞ withu=-π, encircles the origin anti-clockwise, and returns to -∞ with u=π.The transformation to a standard form for this case is h(s,ρ)=u-ζ/u, with result 2^αe^-νρL_n^(α)(2νρ)=1/2π i∫_-∞^(0+)e^ν(u-ζ/u)f(u) du/u^α+1, where f(u)=(u/sinh s)^α+1ds/du. By using an integration by parts procedure (see <ref>), we can obtain the representation L_n^(α)(2νρ)= e^νρχ(ζ)/2^αζ^1/2α(J_α(2ν√(ζ)) A(ζ)- 1/√(ζ)J_α+1(2ν√(ζ))B(ζ)), with expansions A(ζ)∼∑_j=0^∞A_2j(ζ)/ν^2j,B(ζ)∼∑_j=0^∞B_2j+1(ζ)/ν^2j+1,ν→∞, uniformly for ρ≤ 1-δ, where δ∈(0,1)is a fixed number. Here, ν=2n+α+1,χ(ζ)=(1-ρ)^-1/4(ζ/ρ)^1/2α+1/4,ρ<1,with ζ given by √(-ζ)=12(√(ρ^2-ρ) +arcsinh√(-ρ)),if ρ≤0, √(ζ)=12(√(ρ-ρ^2)+arcsin√(ρ)), if 0≤ρ<1.We have the relation1/ζ^1/2dζ/dρ=√(1-ρ/ρ),ρ <1.The first coefficients are [ A_0(ζ)= 1,; B_1(ζ)= 1/48ξ(5ξ^4b+6ξ^2b+3ξ+12a^2(b-ξ)-3b), ] where ξ=√(ρ/1-ρ),b=√(ζ). More coefficients can be obtained by using the method described in<ref>. To remove in (<ref>) the singularities dueto the Bessel functions at ζ=0,it is convenient to use the function E_ν(z) introduced by Tricomi; see<cit.>. We haveE_ν(z)=z^-1/2ν J_ν(2√(z))=∑_k=0^∞(-1)^kz^k/k! Γ(ν+k+1).It is an analytic function of z. In terms of the modified Besselfunction we can writeE_ν(-z)=z^-1/2ν I_ν(2√(z))=∑_k=0^∞z^k/k! Γ(ν+k+1).The representation in (<ref>) can be written in the form L_n^(α)(2νρ)=(12 ν)^α e^νρχ(ζ)(E_α(ζν^2) A(ζ)-E_α+1(ζν^2)B(ζ)), and we can use this representation also for ζ<0, i.e., ρ<0.For more details about the coefficients A_j(ζ) and B_j(ζ) of the expansions in (<ref>), see <cit.>. §.§.§ A general method forthe coefficients in Bessel-type expansionsWe describe a general method for evaluating the coefficientsA_k(ζ) and B_k(ζ) used in (<ref>). We consider the standard form F_ζ(ν)=1/2π i∫_e^ν(u-ζ/u)f(u) du/u^α+1, where the contourstarts at -∞ withu=-π, encircles the origin anti-clockwise, and returns to -∞ with u=π. The f(u) is assumed to be analytic in a neighborhood of , andin particular in a domain that contains the saddle points ± ib, where b=√(ζ). 
When we replace f by unity, we obtain the Bessel function: F_ζ(ν)=ζ^-1/2α J_α(2ν√(ζ)).The coefficients of the expansions in (<ref>) follow from the recursive scheme [f_j(u) =A_j(ζ)+B_j(ζ)/u+(1+b^2/u^2)g_j(u),;f_j+1(u) = g_j^'(u)-α+1/ug_j(u),;A_j(ζ) = f_j(ib)+f_j(-ib)/2,B_j(ζ)=if_j(ib)-f_j(-ib)/2b, ] with f_0(u)=f(u), the coefficient function. Usingthis scheme and integration by parts, we can obtain the asymptotic expansion F_η(ν)∼ζ^-1/2α J_α(2ν√(ζ))∑_j=0^∞(-1)^jA_j(ζ)/ν^j+ζ^-1/2(α+1) J_α+1(2ν√(ζ))∑_j=0^∞(-1)^jB_j(ζ)/ν^j. The coefficients A_j(ζ) and B_j(ζ) can all be expressed in terms of the derivatives f^(k)(± ib)of f(u) at the saddle points ± ib; we will need these for 0≤ k ≤ 2j (see (<ref>)).We expand the functions f_j(u) in two-point Taylor expansions f_j(u)= ∑_k=0^∞C_k^(j) (u^2-b^2)^k+u∑_k=0^∞D_k^(j) (u^2-b^2)^k.Using(<ref>), we derive the following recursive scheme for the coefficients [ C_k^(j+1)=(2k-α)D_k^(j)+b^2(α-4k-2)D_k+1^(j)+2(k+1)b^4D_k+2^(j),;D_k^(j+1)=(2k+1-α)C_k+1^(j)-2(k+1)b^2C_k+2^(j), ] for j,k=0,1,2,…, and the coefficients A_j and B_j follow fromA_j(ζ)=C_0^(j), B_j(ζ)=-b^2D_0^(j), j ≥ 0.In the present case of the Laguerre polynomials the functions f_2j are even and f_2j+1 are odd, and we have A_2j+1(ζ)=0 and B_2j(ζ)=0. A few non–vanishing coefficients are [ A_0(ζ)= f(ib),; B_1(ζ)=-1/4b((2α-1)if^(1)(ib)+bf^(2)(ib)),; A_2(ζ)= -1/32b(3i(4^2α-1)f^(1)(ib)- (3-16α+4α^2)bf^(2)(ib) +;2i(2α-3)b^2f^(3)(ib)b^2+b^3f^(4)(ib)),; B_3(ζ)= -1/384b(3(4α^2-1)(2α-3)(if^(1)(ib)+bf^(2)(ib)) +; 2i(α-7)(2α-1)(2α-3)b^2f^(3)(ib) +;3(19-20α+4α^2))b^3f^(4)(ib) -3i(2α-5)b^4f^(5)(ib)-b^5f^(6)(ib)). ]To have A_0(ζ)=1 inthe first expansion in (<ref>) we have scaled all A and B-coefficients with respect to A_0(ζ)=χ(ζ); see (<ref>). The main step for obtaining the coefficients A_j(ζ) and B_j(ζ) is the evaluation of those for j=0 in (<ref>) and we summarizethe method described in <cit.>. We rewrite the two-point Taylor expansion in the form f(u)=∑_k=0^∞( a_k(u_1,u_2)(u-u_1)+a_k(u_2,u_1)(u-u_2)) (u-u_1)^k(u-u_2)^k, where, in the present case, u_1=-b and u_2=b. Then,C_k^(0)=-u_1a_k(u_1,u_2)-u_2a_k(u_2,u_1), D_k^(0)= a_k(u_1,u_2)+a_k(u_2,u_1). We have a_0(u_1,u_2)=f(b)/ (2b) and a_0(u_2,u_1)=-f(-b)/ (2b), and,for k=1,2,3,..., a_k(u_1,u_2)= ∑_j=0^k(k+j-1)!/ j!(k-j)!(-1)^k+1kf^(k-j)(b)+(-1)^j jf^(k-j)(-b)/ k!(-2b)^k+j+1, a_k(u_2,u_1) follows from a_k(u_1,u_2) by replacing b by -b.§.§.§ Expansions of the zerosFrom the Bessel-type expansion we derive expansions of the firsthalf of the zeros of the Laguerre polynomial. We write W(ζ)=J_α(2ν√(ζ))A(ζ)-1/√(ζ)J_α+1(2ν√(ζ))B(ζ). A zero ofL_n^(α)(2ν x) is a zero of W(ζ) and it can be written in terms of ζ in the form ζ=ζ_0+, ζ_0=j_k^2/4ν^2, where j_k is a zero of J_α(z). Byexpanding W(ζ) we have with the zero ζ inthis formW(ζ_0)+/1!W^'(ζ_0)+^2/2!W^''(ζ_0)+…=0. We assume thatcan be expanded in the form ∼ζ_1/ν^2+ζ_2/ν^4+ζ_3/ν^6+…, and substituting this expansion, we obtain ζ_1=-B_1(ζ) (see (<ref>)) and 6ζζ_2=2 B_1(ζ)^3-3 (α+1) B_1(ζ)^2+6 ζ B_1(ζ) (B_1^'(ζ)+A_2(ζ)) -6 ζ B_3(ζ), In the algorithm we use ζ=ζ_0.Algorithm and first numerical examples for the zeros As in the previous cases we describe how the asymptotic approximations for the zeros can be obtained. For the zero x_k, k=1,2,…, first compute ζ_0, from (<ref>). Then computeρ_0 by inverting the second relation in (<ref>). This is doneby using the expansion ρ=ζ+13ζ^2+1145ζ^3+73315ζ^4+…. An alternative is solving with an iterative method. 
In that case it is convenient to writeρ=sin^21/2θ, and the equation to be solved for θ becomes8√(ζ)=θ+sinθ, 0≤θ<π. With ρ=ρ_0 we compute the coefficients ζ_j in (<ref>), see also (<ref>). Compute ζ from (<ref>) and perform a final inversion of the relationin the second line of (<ref>). This gives the ρ, and then x_k∼ 2νρ.Because the expansions in (<ref>) become useless when ρ→1, we should use the present result for a limited number of zeros, say, only for k=1,2,3,…,1/2n; the remaining zeros can be obtained by using the Airy-type expansion.When we take n=100, α=1/3, and use the approximation ζ∼ζ_0 with the first zero x_1=0.02092331638663936 computed by Maple with Digits=16, we found a relative accuracy of 3.65 × 10^-6; with theterm ζ_1/ν^2 included we found 6.68×10^-11 and when included up to the term ζ_3/ν^6, the accuracy is 2×10^-16.For the zero x_50 we found the relative errors 4.91 × 10^-6, 1.57 × 10^-10 and 0 (full double accuracy), respectively. In the next section we analyze in more detail the performance of the different expansions for thezeros and we also discuss the stable computation of the weights.§.§ Numerical performance of the expansions for α smallIn Figures <ref>, <ref> and <ref> we show the accuracy obtained with the asymptotic expansions (<ref>), (<ref>), (<ref>), respectively, for the zeros of the Laguerre polynomial L^(1/4)_n(x)for different values of n. An implementation of the expansions in finite precision arithmetic (coded in Fortran 90)has been considered for testing. As for the Hermite case, in these implementations only non-iterative methods (power series) are used for the inversion of the variables. For computing the first zeros of Bessel functions we use the algorithm describe in <cit.>. For large zeros we use the MacMahon's expansion (see <cit.>) j_ν,m∼ a-μ-1/8a-4(μ-1)(7μ-31)/3(8a)^3 -32(μ-1)(83μ^2-982μ+3779)/15(8a)^5-⋯,. where μ=4ν^2, a=(m+ν/2-1/4)π.As can be seen in Figure<ref>, the validity of the first asymptotic expansionin terms of zeros of Bessel functions (<ref>) is limited to the first zeros.On the contrary, Figure <ref> shows that the other Bessel expansion (<ref>) works very well for approximating a large number of zerosof the Laguerre polynomial but fails for the last zeros. For these zeros, the Airy expansion (<ref>) should be used. The accuracy of the Bessel and Airy expansions for n=100 is illustrated inFigure <ref>. As in the case of the Hermite approximations, the combined use of the expansions allow the computation of the zeros of Laguerre polynomials for n=100 with an accuracy of 15-16 digits. The efficiency of the expansions is compared in Table <ref>. 
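To illustrate the simple Bessel-type approximation for the small zeros described above, the following Python sketch keeps only the corrections ξ_1 and ξ_2. It is a minimal sketch under stated assumptions: SciPy's jv together with brentq is used as a simple stand-in for the dedicated Bessel-zero algorithms cited above, and the function names bessel_zero and laguerre_zero_bessel are illustrative.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zero(alpha, k):
    # k-th positive zero of J_alpha, located by coarse bracketing plus brentq
    # (a simple stand-in for the dedicated zero finders referenced in the text).
    x, zeros, prev = 1e-6, [], jv(alpha, 1e-6)
    while len(zeros) < k:
        cur = jv(alpha, x + 0.1)
        if prev * cur < 0:
            zeros.append(brentq(lambda s: jv(alpha, s), x, x + 0.1))
        x, prev = x + 0.1, cur
    return zeros[k - 1]

def laguerre_zero_bessel(n, alpha, k):
    """Approximate the k-th (small) zero of L_n^(alpha) from
    x_k ~ xi + xi_1/n + xi_2/n^2, with xi = j_k^2/(4n)."""
    jk = bessel_zero(alpha, k)
    xi = jk**2 / (4*n)
    xi1 = xi*(xi - 6*(alpha + 1)) / 12
    xi2 = xi*(150 - 90*xi + 11*xi**2 + 360*alpha + 210*alpha**2 - 90*xi*alpha) / 720
    return xi + xi1/n + xi2/n**2

# Example: first zero of L_100^(1/3); reference value 0.02092331...
print(laguerre_zero_bessel(100, 1/3, 1))
```

With the two correction terms kept here the sketch should agree with the reference value of x_1 quoted above to roughly five significant digits; the higher accuracies reported in the tables require the additional terms of the expansion.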
As in the case of the zeros of Hermite polynomials, in order to improve the speed of the methods we apply the expansions only in the regions where the inversion of the variables can be done accurately by using the series expansions (<ref>) and (<ref>) in the case of the Airy expansion and the second Bessel expansion (<ref>), respectively: the first 0.75n zeros for the Bessel expansion and the last 0.25n zeros for the Airy expansion. For these two expansions, we observe in the Table that there is not much difference in speed between using 2 terms and the more accurate approximation (for clarity the Table includes the number of terms needed for the three different expansions). With respect to the comparison between the different expansions, we observe that the computation of the first expansion in terms of Bessel functions is, as expected, extremely efficient in its range of validity. On the other hand, the expansion (<ref>) in terms of Bessel functions is slightly more efficient than the Airy expansion. As in the case of the Gauss–Hermite quadrature, overflow/underflow limitations in the computation of the weights can be eliminated by balancing the large terms as a function of n in the expressions and by scaling out the dependence on the weights. A first estimation of the weights as n→∞ is given by w_i∼(π/√(n)) x_i^α+1/2e^-x_i. The range of computation of the weights of Gauss–Laguerre quadrature can be enlarged by simply scaling out the dominant factor in the asymptotic expansions for Laguerre polynomials. When α is small, this factor is given by e^x/2. With this, one can define the scaled weights by w̃_i =w_i e^x_i/x_i^α+1/2. These normalized weights do not overflow/underflow as a function of n, α and x_i. In addition, similarly as we did for the Hermite case, we can compute these scaled weights in a numerically stable way. We notice that the weights (<ref>) can be written as w_i=4Γ(n+α+1)/(n! [d/dzL_n^(α)(z_i^2)]^2), where z=√(x), and therefore z_i=√(x_i). Now, in the new variable z, the scaled weights can be expressed as w̃_i=4Γ(n+α +1)/(n! (ẏ(z_i))^2), where the dots mean differentiation with respect to z and y(z)=z^α+1/2e^-z^2/2L_n^(α)(z^2). Now, we define W(z)=4Γ(n+α +1)/(n! (ẏ(z))^2) and with this we have that w̃_i=W(z_i), and it is straightforward to check that we have again the desirable property d/dzW(z_i)=0. This means that the computation is well conditioned in the sense that the error for the weights will be approximately proportional to the square of the error for the nodes. As a consequence, as we will show, the weights can be computed with almost no accuracy loss. All that is left for computing the weights is to use the expansions for the Laguerre polynomials in order to compute ẏ(z) by differentiation. In particular, starting from (<ref>) we have ẏ(z)=(ν/2)^α -1/2(2νζρ^2/1-ρ)^1/4[J_α(2ν√(ζ))C(ζ)-1√(ζ)J_α+1(2ν√(ζ)) D(ζ)], where in this expression x is the variable defined in Section (<ref>) and [C(ζ)={14(1-ρ)+(1/2+α)φ}A+A^' -2νφ B,; D(ζ)={14(1-ρ)-(3/2+α)φ}B+B^' +2νζφ A, ] and in these equations the prime denotes the derivative with respect to ρ. Similarly as we did for the Hermite case, we show in Figure <ref> two examples of computation of the scaled weights (<ref>) for n=1000, 10000 (with α=1/4). We use the expansion in terms of Bessel functions (<ref>). As can be seen, the accuracy for the scaled weights is better than 10^-15 in most cases.
There is some loss of accuracy for the weights corresponding to the largest nodes (as discussed, for these values one has to use the expansion for the Laguerre polynomials in terms of Airy functions).§.§ Expansions for largevalues ofα §.§.§ An expansion for largevalues ofα and fixed degree nFrom the well-known limitlim_α→∞α^-n L_n^(α)(α t) = (1-t)^n/n!,it follows that the zeros of L_n^(α)(α t) coalesce at t=1 when α is large and n≪α. The limit gives limited information when t=1, and in this section we give more details about the behavior of L_n^(α)(α t) for small values of | t-1|. We consider an asymptotic representation in terms of Hermite polynomials, which has been derived in <cit.>. We haveL_n^(α)(x) =(-1)^n z^n ∑_k=0^n c_k/z^k H_n-k(ζ)/(n-k)!,wherez=√(x-(α+1)/2),ζ=x-α-1/2z.The representation in (<ref>) holds for n=0,1,2,…, and all complexvalues ofx and α and has an asymptotic character for large values of |α|+|x|; the degree n should be fixed. It is not difficult to verify that the limit given in (<ref>)follows from (<ref>).The coefficients c_k are defined byc_0=1,c_1=c_2=0,c_3= 13(3x-α-1),c_4=14(-4x+α+1),andthe recursion relationkc_k=-2(k-1)c_k-1-(k-2)c_k-2+(3x-α-1)c_k-3+ (2x-α-1)c_k-4.An approximation of the zeros of L_n^(α)(x)can be found in <cit.>, and in<cit.> it is shown thatit can be derived from the expansion given in (<ref>). Calogero's result isℓ_n,m= α+√(2α)h_n,m+13(1+2n+2h_n,m^2)+(α^-1/2),α→∞,where ℓ_n,m and h_n,m denote the corresponding zeros of the Laguerre and Hermite polynomials.For example, with n=10 and α=1000, the relative error is not larger than 0.85× 10^-3 (for the first zero). For the fifth and sixth zero the relative errors are about 0.65× 10^-4. §.§.§ An expansion for largevalues ofn andαIn <cit.> we have given expansions for large n in which α=(n) is allowed; for a summary see<cit.>. The results follow also from uniform expansions of Whittaker functions obtained by using differential equations; see<cit.>. These expansions include the J-Bessel function, and are valid in the parameter domain where order and argument of the Bessel function are equal, that is, in the turning point domain.In this section, explicit expressions for the first few coefficients of the expansion are given. By using an integralwe can derive the following asymptotic representation L_n^(α)(4κ x)=e^-κ Aχ(b)(b/2κ x)^αΓ(n+α+1)/n!(J_α(4κ b)A(b)-2bJ_α^'(4κ x)B(b)), with expansions A(b)∼∑_k=0^∞A_k(b)/κ^k,B(b)∼∑_k=0^∞B_k(b)/κ^k, where κ=n+12(α+1),χ(b)=(4b^2-τ^2/4x-4x^2-τ^2)^1/4,τ=α/2κ. We assume that τ <1. The quantityb is a function of x and follows from the relation [ 2W-2τarctanW/τ=;2R-arcsin1-2x/√(1-τ^2)-τarcsinx-1/2τ^2/x√(1-τ^2)+12π(1-τ), ] whereR=12√(4x-4x^2-τ^2)=√((x_2-x)(x-x_1)), W=√(4b^2-τ^2), and x_1=12(1-√(1-τ^2)), x_2=12(1+√(1-τ^2)).The relation in (<ref>) can be used for x∈[x_1,x_2], in which case b≥1/2τ. In this interval the zeros of L_n^(α)(4κ x) occur. For x outside this interval we refer to <cit.>. The first coefficients of the expansions in (<ref>) are[ A_0(b)= 1, B_0(b)=0,;A_1= τ/24(τ^2-1),B_1=P R^3+QW^3/192R^3W^4(τ^2-1),;P= 4(2τ^2+12b^2)(1-τ^2),; Q= 2τ^4-12x^2τ^2-τ^2-8x^3+24x^2-6x,;B_2(b )=A_1(b)B_1(b). ] §.§.§ Expansions of the zerosA zeroof L_n^(α)(4κ x) is a zero of U(b) defined byU(b)=J_α(4κ b)A(b)-2bJ_α^'(4κ x)B(b), where the relation between b and x is given in (<ref>). We write a zero in terms of b in the form b=b_0+,b_0=j_k/4κ where j_k is a zero of the Bessel function J_α(z). 
We assume foran expansion in the form∼b_1/κ+b_2/κ^2+b_3/κ^3+….By expanding U(b) at b_0 we haveU(b_0)+/1!U^'(b_0)+ ^2/2!U^''(b_0)+… = 0.Using the representation of U(b) given in (<ref>),substituting the expansion of , those of A(b) and B(b) given in (<ref>), and comparing equal powers of κ, we can obtain the coefficients b_j of (<ref>).The first coefficients are[b_1= 0, b_2=1/2 b B_1(b), b_3=1/2 b(B_2(b)-A_1(b)B_1(b))=0,;b_4= 1/24b(12B_3(b)-16b^2B_1^3(b)+6bB_1^'(b)B_1(b)-12A_2(b)B_1(b)+3B_1^2(b)), ]with b=b_0 given in (<ref>).For example, when we take n=100, α=75, then we obtain for the first zero b_0=0.1504907582034649. We find withthis value for b from (<ref>) a first approximation x=0.0231157462791716, with a relative error 2.45×10^-5. We compute with this x and b=b_0 the coefficient b_2 and find from b∼ b_0+b_2/κ^2 the value b=0.1504905751793771. Again inverting (<ref>) to find the corresponding x-value, we find x=0.0231156896044437, now with relative error 3.01618×10^-11. § ACKNOWLEDGEMENTS The authors thank the referees for their constructive remarks. The authors acknowledge financial support from Ministerio de Economía y Competitividad,project MTM2015-67142-P (MINECO/FEDER, UE). NMT thanks CWI, Amsterdam, for scientific support. plain
Release Connection Fingerprints in Social Networks Using Personalized Differential Privacy Yongkai Li^1,2, Shubo Liu^1,2, Jun Wang^1,2, and Mengjun Liu^1,2 ^1School of Computer, Wuhan University, Wuhan, China ^2Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, Wuhan University, Wuhan, China Email: [email protected], [email protected], [email protected], [email protected] Accepted 2017 September 15. Received 2017 September 6; in original form 2017 June 26. ================================================================================ There are many benefits to publishing social network statistics for societal or commercial purposes, such as political advocacy and product recommendation. It is very challenging to protect the privacy of individuals in social networks while ensuring high accuracy of the statistics. Moreover, most of the existing work on differentially private social network publication ignores the fact that different users may have different privacy preferences and that there also exists a considerable number of users whose identities are public. In this paper, we aim to release the number of public users that a private user connects to within n hops (denoted as n-range Connection Fingerprints, or n-range CFPs for short) under user-level personalized privacy preferences. To this end, we propose two schemes, DEBA and DUBA-LF, for privacy-preserving publication of the CFPs on the basis of personalized differential privacy (PDP), and we conduct a theoretical analysis of the privacy guarantees provided by the proposed schemes. Experiments on real datasets show that the proposed schemes are superior in terms of publication error.

§ INTRODUCTION

Nowadays, more and more people join multiple social networks on the Web, such as Facebook, Twitter, and Sina Weibo, to share their own information and at the same time to monitor or participate in different activities. Many institutions and firms are investing time and resources into analyzing social networks to address a diverse set of societally or commercially important issues including disease transmission, product recommendation, and political advocacy, among many others. Although the sophistication of information technology has allowed the collection, analysis, and dissemination of social network data, privacy concerns have significantly restricted the ability of social scientists and others to study these networks. To respect the privacy of individual participants in the networks, network data cannot be released for public access and scientific studies without proper sanitization. A common practice is to release a “naively anonymized” isomorphic network after removing the real identities of vertices. It is now well known that this can leave participants open to a range of attacks <cit.>. Thus, a great many anonymization techniques have been proposed <cit.> to ensure network data privacy. However, those anonymization techniques have been shown to be susceptible to several newly discovered privacy attacks and lack rigorous privacy and utility guarantees. In response, differential privacy <cit.> has been applied to address such vulnerabilities in social network data publication.
Differential privacy is a popular statistical model, and it prevents any adversary from inferring individual information from the output of a computation by perturbing the data prior to release. A limitation of the model is that the same level of privacy protection is afforded to all individuals. However, it is common that different users have different privacy preferences <cit.>. Therefore, providing the same level of privacy protection to all users may not be fair and, in addition, may render the published social network data useless. Moreover, in reality, not all identities of social network users are sensitive <cit.>. For instance, Sina Weibo, a popular Chinese microblogging social network, hosts a large number of media accounts (e.g., NBA and Xinhuanet) and millions of celebrity accounts (e.g., Christine Lagarde and Kai-Fu Lee). All these users' identities are public, and in total they account for over 1% of the overall half billion registered user accounts <cit.>. In this paper, we classify the users whose identities are not sensitive as public users, to be distinguished from private users. It has been pointed out that releasing the identities of public users with social network data can benefit both research and the users themselves <cit.>. Moreover, we take different privacy requirements into account to guarantee precisely the required level of privacy to different users.

In this work, we focus on a specific publication goal when public users are labeled, i.e., the number of public users that a private user connects to within n hops. For ease of presentation, we also use the notion of n-range connection fingerprints (CFPs) <cit.> to denote the public users that a private user connects to within n hops. We choose to focus on the number of CFPs because it is one of the most important properties of a graph with labeled public users. For example, these statistics can be used for studying the social influence of government organizations, simulating information propagation through media, helping corporations make smart targeted advertising plans, and so on.

In this work, we consider the setting in which a trusted data analyst desires to publish the number of n-range CFPs of each private user. Every private user potentially requires a different privacy guarantee for his or her statistics, and the analyst would like to publish useful aggregate information about the network. To this end, we employ a new privacy framework, Personalized Differential Privacy (PDP) <cit.>, to provide personal privacy guarantees specified at the user level, rather than by a single, global privacy parameter. The privacy guarantees of PDP have the same strength and attack resistance as differential privacy, but are personalized to the preferences of all users in the input domain. In this work, we propose two schemes to release the number of CFPs in the context of personalized privacy preferences. We address the challenge of improving data utility by employing a distance-based approximation mechanism and decreasing the introduced noise. The main contributions of this paper are: * To the best of our knowledge, we are the first to formalize the problem of releasing the number of CFPs in the context of personalized privacy preferences.* We present two schemes, DEBA and DUBA-LF, for privacy-preserving publication of the CFPs regarding personalized privacy preferences, and we conduct a theoretical analysis of the privacy guarantees provided within the proposed schemes.
The proposed schemes are designed to be 𝒫-PDP.* We experimentally evaluate the two proposed schemes on real datasets and it is demonstrated that our proposed schemes have high utility for each dataset. The paper is organized as follows. Section 2 discusses preliminaries and related work. Section 3 presents the problem statement and privacy goal. Overview of our solutions is described in Section 4. Section 5 presents our methods for privacy-preserving CFPs publishing. The privacy analysis is reported in Section 6. Section 7 describes some of our experimental results and performance analysis. Section 8 presents the conclusions of this research. § PRELIMINARIESIn this section, we introduce some notations and initial definitions, and review the definition of differential privacy, two conventional mechanisms to achieve differential privacy, upon which our work is based. Then the related work is discussed.We model a social network as an undirected and unweighted graph G =(V,E) ∈𝒢, where V is a set of vertices representing user entities in the social network, and E is a set of edges representing social connections between users (e.g., friendships, contacts, and collaborations). The notation e(v_i , v_j) ∈ E represents an edge between two vertices v_i and v_j . We let |V| = n_0 and the notation |V| is used to represent the cardinality of V. For ease of presentation, we use “grap” and “social network” interchangeably in the following discussion, as well as “user” and “node”. §.§ Differential PrivacyWe call two graphs G, G' as neighboring if G' can be obtained from G by removing or adding a one edge, i.e., their minimum edit distance <cit.> d(G, G') ≤ 1. We write GG'to denote that G and G' are neighbors and that G = G' ∧ e or G' = G∧ e, where e is an egde. Differential privacy requires that, prior to f(G)'s release, it should be modified using a randomized algorithm 𝒜, such that the output of 𝒜 does not reveal much information about any edge in G. The definition of differential privacy is shown as follows:(ϵ-differential privacy)<cit.>. A randomized algorithm 𝒜 is ϵ-differentially private if for any two graphs G and G'that are neighboring, and for all O ∈ Range(𝒜), Pr[𝒜(G) ∈ O] ≤ e^ϵ· Pr[𝒜(G') ∈ O]. A differentially private algorithm 𝒜 provides privacy because, given any two graphs which differ on a single edge only, respective results of a same query on the graphs are not distinguishable. Therefore, an adversary cannot infer the value of any single edge in the dataset. Here, ϵ represents the level of privacy. A smaller value of ϵmeans better privacy, but it also implies lower accuracy of the query result. The composition of differentially private algorithms also provides differential privacy, but it produces different results depending on the data to which the queries are applied. * Sequential composition [<cit.>, Theorem 3]. Let 𝒜_i each provides ϵ_i-differential privacy. The sequence of 𝒜_i(X) provides (∑_iϵ_i)-differential privacy. While there are many approaches to achieving differential privacy, the best known and most-widely used two for this purpose are the Laplace mechanism <cit.> and the exponential mechanism <cit.>. For real valued functions, i.e., f:𝒢→ R^d, the most common way to satisfy differential privacy is to inject carefully chosen random noise into the output. The magnitude of the noise is adjusted according to the global sensitivity of the function, or the maximum extent to which any one tuple in the input can affect the output. 
Formally,(Global Sensitivity <cit.>): The global sensitivity of the function f:𝒢→ R^d is Δ(f)= max_d(G,G')≤ 1f(G)-f(G') for all neighboring G, G' ∈𝒢, where · denotes the L_1 norm. Similarly, the local sensitivity and local sensitivity at distance t of function f are defined as follows.(Local Sensitivity <cit.>): The local sensitivity of the function f:𝒢→ R^d is LS(G,f)= max_G'|d(G,G')≤ 1f(G)-f(G') , where · denotes the L_1 norm. (Local Sensitivity at distance t <cit.>): The local sensitivity of f at distance t is the largest local sensitivity attained on graphs at distance at most t from G. Formally, the global sensitivity of the function f:𝒢→ R^d at distance t is LS(G,f,t)= max_G'|d(G,G')≤ tf(G)-f(G') , where · denotes the L_1 norm. Note that global sensitivity can be understood as the maximum of local sensitivity over the input domain, i.e., Δ(f) = max_G LS(G, f) and local sensitivity of f is a special case of LS(G, f , t) for distance t=1.To maintain differential privacy, the Laplace mechanism adds noise drawn from the Laplace distribution into the data to be published. The influence of any single edge on the outcome will be masked and hidden by the Laplace noise. Let Lap(λ) be a random value sampled from a Laplace distribution with mean zero and scale λ. The Laplace Mechanism through which ϵ-differential privacy is achieved is outlined in the following theorem.<cit.>Let f:𝒢→ R^d. A mechanism M that adds independently generated noise from a zero-mean Laplace distribution with scale λ=Δ(f)/ϵ to each of the d output values f(G), i.e., which produces O = f(G)+ ⟨ Lap(Δ(f)/ϵ)⟩^d satisfies ϵ-differential privacy. The exponential mechanism <cit.> is useful for sampling one of several options in a differentially-private way. A score to each of the options, which is determined by the input of the algorithm, is assigned by a quality function q. Clearly, higher scores signify more desirable outcomes and the scores are then used to formulate a probability distribution over the outcomes in a way that ensures differential privacy.(Exponential Mechanism <cit.>).Let q:(𝒢×𝒪) → R be a quality function that assigns a score to each outcome O∈𝒪. Let Δ_1(q)= max_d(G,G')≤ 1q(G,O)-q(G',O) and M be a mechanism for choosing an outcome O∈𝒪. Then the mechanism M, defined by M(G,q)={return O with probability ∝ exp(ϵ q(G,O)/2Δ_1(q))} maintains ϵ-differential privacy.§.§ Related WorkWith the increasing popularity of social network analysis research, privacy protection of social network data is a broad topic with a significant amount of prior work. In this section, we review the most closely related work about privacy protection on social network data.An important thread of research aims to preserve social network data privacy by obfuscating the edges (vertices), i.e., by adding /deleting edges (vertices).<cit.>.Mittal et al. proposed a perturbation method in <cit.> by deleting all edges in the original graph and replacing each edge with a fake edge that is sampled based on the structural properties of the graph. Liu et al. <cit.> design a system, called LinkMirage , which mediates privacy-preserving access to users social relationships in both static and dynamic social graphs. Hay et al. <cit.> perturb the graph by applying a sequence of r edge deletions and r edge insertions. The deleted edges are uniformly selected from the existing edges in the original graph while the added edges are uniformly selected from the non-existing edges. Wang et al. 
propose two different perturbation methods to anonymize social networks against CFP attacks in <cit.>, which serves as the practical foundation for our algorithm. Their first method is based on adding dummy vertices, while the second algorithm achieves k-anonymity based on edge modification. Their proposed methods can resist CFP attacks on private users based on their connection information with the public users. Another important work for our algorithm is <cit.>. Yuan et al. <cit.> introduce a framework that provides personalized privacy protection for labeled social networks. They define three levels of privacy protection requirements by modeling gradually increasing adversarial background knowledge. The framework combines label generalization and other structure protection techniques (e.g., adding nodes or edges) in order to achieve improved utility.Most of the obfuscating based works mainly focus on developing anonymization techniques for specific types of privacy attacks. They employ privacy models derived from k-anonymity <cit.> by assuming different types of adversarial knowledge. Unfortunately, all these anonymization techniques are vulnerable to attackers with stronger background knowledge than assumed, which has stimulated the use of differential privacy for more rigorous privacy guarantees.There are many papers <cit.> have started to apply differential privacy to protect edge/node privacy, defined as the privacy of users relationship or identity in graph data. One important application direction aims to release certain differentially private data mining results, such as degree distributions, subgraph counts and frequent graph patterns <cit.>. However, our problem is substantially more challenging than publishing certain network statistics or data mining results. Our goal is to publish the CFPs of private user, which incurs a much larger global sensitivity. Note that the sensitivity in the problem setting of <cit.> is only 1. Whats more, each private user in the networks independently specifies the privacy requirement for their data. In addition, some latest works are done for graph-oriented scenario. Proserpio et al. <cit.> develop a private data analysis platform wPINQ over weighted datasets, which can be used to answer subgraph-counting queries. Zhang et al. <cit.> propose a ladder framework to privately count the number of occurrences of subgraphs.There are also some other related works aiming to publish a sanitized graph, which is out the scope of the objective of this paper. In <cit.>, Sala et al. introduced Pygmalion, a differentially private graph model. Similar to <cit.>, Wang and Wu employed the dK-graph generation model for enforcing edge differential privacy in graph anonymization <cit.>. In <cit.>, Xiao et al. proposed a Hierarchical Random Graph (HRG) model based scheme to meet edge differential privacy. In addition, Chen et al. <cit.> propose a data-dependent solution by identifying and reconstructing the dense regions of a graph's adjacency matrix. § PRIVACY GOAL§.§ Problem DefinitionIn general, we assume there are some public users whose identities are not sensitive in social networks. Except the public users, the rest of the users are private users, whose edges are sensitive and each private user independently specifies the privacy requirement for their data. For convenience, we denote the set of public users in the social network as V_pub and the set of private users V_pri and we let|V_pub|= m_p and |V_pri|= m. 
More formally, the Privacy Specification of private users is defined as follows:(User-Level Privacy Specification<cit.>). A privacy specification is a mapping 𝒫: V_pri→ R^+ from private users to personal privacy preferences, where a smaller value represents a stronger privacy preference. The notation P^v is used to denote the privacy preference corresponding to user v ∈ V_pri. Similar to <cit.>, we also describe a specific instance of a privacy specification as a set of ordered pairs, e.g., 𝒫:= {(v_1, ϵ_1),(v_2, ϵ_2),…} where v_i ∈ V_pri and ϵ_i∈ R^+. We also assume that a privacy specification contains a privacy preference for every v ∈ V_pri , or that a default privacy level is used. Here the information about any edge in G should be protected, and the privacy specification of edge e(v_i,v_j) can be quantified by min{P^v_i,P^v_j}.In this paper, we focus on privately releasing of connection statistics between private users and public users. First, we specify the hop distance h(v_i,v_j) between two vertices v_i and v_j as the number of edges on the shortest path between them. Second, as indicated in ref.<cit.>, we call the public user v as v_i's connection fingerprint (CFP) if the private user v_i and the public user v is linked by a path of certain length. For a given hop distance n, the formal definitions of the nth-hop connection fingerprint CFP_n(v_i) and n-range connection fingerprint CFP(v_i, n) for a private user v_i in Definition 7 and Definition 8, respectively.(nth-Hop Connection Fingerprint<cit.>): The nth-hop connection fingerprint CFP_n(v_i) of a private vertex v_i in a social network G=(V,E) consists of the group of public vertices whose hop distances to v_i are exactly n, i.e., CFP_n(v_i)= {ID(v_j)| v_j ∈ V_pub∧ h(v_i,v_j)= n}.(n-Range Connection Fingerprint<cit.>): The n-range connection fingerprint of a private vertex v_i, denoted by CFP(v_i, n), is formed by v_i's xth-hop connection fingerprints, where 1 ≤ x ≤ n, i.e., CFP(v_i, n) = ∪_x ∈ [1,n]CFP_x(v_i). Given a system parameter c, we aim to release the number of kth-hop(1 ≤ k ≤ c) connection fingerprints for each private user in the sensitive graph, while protecting individual privacy in the meantime. Formally, we write f_k(G):𝒢→ R^m to denote the function that computes the number of kth-hop connection fingerprints for each private user in graph G. Therefore, the final publication results for a sensitive graph G can be denoted in a form of m × c matrix F=(f_1(G),, f_c(G)). §.§ Privacy GoalThe goal of this paper is to release the connection statistics under the novel notation of Personalized Differential Privacy (PDP) <cit.>. In contrast to traditional differential privacy, in which the privacy guarantee is controlled by a single, global privacy parameter (i.e., ϵ), PDP makes use of a privacy specification, in which each user in V_pri independently specifies the privacy requirement for their data. More formally, the definition of PDP is showed in Definition 9.(Personalized Differential Privacy (PDP)<cit.>). In the context of a privacy specification 𝒫 and a universe of private users U, a randomized mechanism ℳ:𝒢→ R^m satisfies 𝒫-personalized differential privacy (or 𝒫-PDP), if for every pair of neighboring graphs G and G', with GG' and e_ij = e (v_i , v_j), and for all O ∈ Range(ℳ), Pr[ℳ(G) ∈ O] ≤ e^min{P^v_i,P^v_j}· Pr[ℳ(G') ∈ O]. 
Intuitively, PDP offers the same strong, semantic notion of privacy that traditional differential privacy provides, but the privacy guarantee of PDP is personalized to the needs of every user simultaneously. Jorgensen et al. <cit.> point out that the composition properties of traditional differential privacy extend naturally to PDP, see Theorem 2.
(Composition<cit.>). Let ℳ_1:𝒢→ R^m and ℳ_2:𝒢→ R^m denote two mechanisms that satisfy PDP for 𝒫_1 and 𝒫_2, respectively. Then, the mechanism ℳ_3 := (ℳ_1(𝒢), ℳ_2(𝒢)) satisfies 𝒫_3-PDP, where 𝒫_3 = 𝒫_1 + 𝒫_2.
A general-purpose mechanism for achieving PDP, called the Sample Mechanism, is proposed in <cit.>. The sample mechanism works by introducing two independent sources of randomness into a computation: (1) non-uniform random sampling at the tuple level, and (2) additional uniform randomness introduced by invoking a traditional differentially private mechanism on the sampled input.
(The Sample Mechanism<cit.>). Consider a function f:𝒢→ R^m, a social network G ∈𝒢, a configurable threshold t and a privacy specification 𝒫. Let RS(G, 𝒫, t) denote the procedure that independently samples each edge e_ij = e(v_i, v_j) ∈ G with probability
π(e_ij,t) = (e^min{P^v_i,P^v_j}-1)/(e^t-1) if min{P^v_i,P^v_j} < t, and π(e_ij,t) = 1 otherwise,
where min_v P^v ≤ t ≤ max_v P^v. The sample mechanism is defined as S_f(G, 𝒫, t) = DP_f^t(RS(G, 𝒫, t)), where DP_f^t is any t-differentially private mechanism that computes the function f. Then the sample mechanism S_f(G, 𝒫, t) achieves 𝒫-PDP.
The mechanism DP_f^t could be a simple instantiation of the Laplace or exponential mechanisms, or a more complex composition of several differentially private mechanisms.
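For concreteness, a minimal sketch of the sample mechanism with the Laplace mechanism used as the inner t-differentially private mechanism DP_f^t is given below. Function names and the choice of the Laplace mechanism are our own illustration and are not taken from the paper's algorithms; the sensitivity argument must be supplied by the caller.

```python
import math
import numpy as np

def pi_sample(eps_edge, t):
    """Sampling probability of the sample mechanism:
    (e^{eps_edge}-1)/(e^t-1) if eps_edge < t, and 1 otherwise."""
    return 1.0 if eps_edge >= t else (math.exp(eps_edge) - 1.0) / (math.exp(t) - 1.0)

def RS(edges, P, t):
    """Independently keep each edge e=(v_i,v_j) with probability pi(min{P[v_i],P[v_j]}, t)."""
    kept = []
    for (vi, vj) in edges:
        eps_edge = min(P.get(vi, float("inf")), P.get(vj, float("inf")))
        if np.random.rand() < pi_sample(eps_edge, t):
            kept.append((vi, vj))
    return kept

def sample_mechanism(edges, P, t, f, sensitivity):
    """S_f(G,P,t) = DP_f^t(RS(G,P,t)); here DP_f^t is the Laplace mechanism,
    which is t-differentially private with noise scale sensitivity / t."""
    sampled = RS(edges, P, t)
    true_counts = np.asarray(f(sampled), dtype=float)
    return true_counts + np.random.laplace(0.0, sensitivity / t, size=true_counts.shape)
```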
§ OVERVIEW OF OUR SOLUTIONS
Given our problem of releasing the number of kth-hop (1 ≤ k ≤ c) connection fingerprints under 𝒫-PDP, we first give an overview of the baseline and advanced methods in order to motivate our design choices.
Baseline method. A baseline method is to apply a uniform or exponential budget allocation and release a 𝒫/c (or 𝒫/2^k, where for k = c the budget is in fact 𝒫/2^c-1)-personalized differentially private result for every k (1 ≤ k ≤ c). If each released statistic for the kth-hop (1 ≤ k ≤ c) connection fingerprints preserves 𝒫/c (or 𝒫/2^k)-PDP, the series of c counting queries guarantees 𝒫-PDP by Theorem 1. These two baseline methods are denoted as Uniform and Exponential, correspondingly. Both the Uniform and Exponential methods easily achieve 𝒫-PDP, but they ignore the fact that the statistics may not change significantly between successive queries f_k, due to the sparsity of social networks, and they therefore introduce a large amount of unnecessary noise.
Distance-based method. In this paper, we use a distance-based budget allocation approach inspired by <cit.> to reduce noise. Our proposed DEBA starts by distributing the publication budget in an exponentially decreasing fashion to every private counting query f_k (1 ≤ k ≤ c), i.e., query f_k receives 𝒫/2^k+1 to publish its counting results. If the distance between the statistics for f_k and f_k-1 (2 ≤ k ≤ c-1) turns out to be smaller than its publication threshold, the counting query f_k is skipped and its corresponding publication budget becomes available for a future counting query. On the other hand, if it is decided to publish the counting results of f_k, then f_k absorbs all the budget that became available from the previously skipped counting queries, and uses it in order to publish the current counting query f_k with higher accuracy.
The presence or absence of one edge in the graph can contribute to a large number of potential CFPs, i.e., the global sensitivity of the counting queries is large, and so the noise added to the counts has to be scaled up. Our second method, DUBA-LF, further improves DEBA and uses a new technique, called ladder functions, for producing differentially private output. The technique specifies a probability distribution over possible outputs that is carefully defined to maximize the utility for the given input, while still providing the required privacy level. Moreover, DUBA-LF starts by uniformly distributing the budget instead of the exponential distribution used in DEBA.
§ PROPOSED METHODS
We propose a distance-based budget absorption approach to release the number of kth-hop (1 ≤ k ≤ c) CFPs under 𝒫-PDP. Instead of releasing a 𝒫/c (or 𝒫/2^k)-PDP result for every k (1 ≤ k ≤ c), new publication results are computed if and only if the distance between the counting statistics and the latest released statistics is larger than a threshold. It is worth noting that the statistics may not change significantly between successive queries f_k due to the sparsity of social networks. Therefore, this distance-based budget allocation approach can save some privacy budget for future counting queries and reduce the overall error of the released statistics.
In this section, our basic method, called DEBA, is presented first. The basic method starts by exponentially distributing the budget to every private counting query f_k (1 ≤ k ≤ c), and the budget absorption is decided by the distance between the counting statistics and the latest released statistics. We then introduce our advanced method, DUBA-LF, which uses ladder functions to reduce the noise introduced by the traditional differentially private mechanism.
§.§ DEBA
DEBA (Publication with Distance-based Exponential Budget Absorption) starts with an exponentially decreasing budget for every private counting query f_k (1 ≤ k ≤ c), and then a privacy-preserving distance calculation mechanism is adopted to measure the distance between the counting statistics and the latest released statistics. The decision step uses this distance to decide whether to publish the private counting results of f_k or not. If the decision is negative, the private counting results of f_k are approximated by the last non-null publication and the budget of f_k becomes available for a future counting query. Otherwise, the private counting query f_k absorbs all the budget that is available from the previously skipped counting queries. The overall privacy budget is divided between the decision and publishing steps, which are designed to guarantee personalized differential privacy as we will analyze later.
Before introducing the proposed DEBA, we first give the sensitivity of the counting query f_k (1 ≤ k ≤ c). Neighbor graphs of G are all the graphs G' which differ from G by at most a single edge. The counting query f_1 queries the number of 1st-hop connection fingerprints for each private user in the sensitive graph, and changing a single edge in G will result in at most one entry changing in the 1st-hop connection fingerprints. Hence, Δ(f_1)=1.
For the counting query f_k (2 ≤ k ≤ c), changing a single edge in G will result in at most |V_pub| = m_p entries changing in the kth-hop connection fingerprints, i.e., Δ(f_k) = m_p for 2 ≤ k ≤ c.
Algorithm 1 presents the pseudocode of DEBA. DEBA is decomposed into two sub-mechanisms: the personal private distance calculation mechanism M_1 and the personal private publication mechanism M_2. Lines 1-5 capture the calculation of the personal private distance between the counting statistics and the latest released statistics, labeled as mechanism M_1. Lines 6-9 carry out the publication step for the 1st-hop connection fingerprints, and lines 10-21 carry out the publication step for the kth-hop (2 ≤ k ≤ c) connection fingerprints. Line 11 or line 17 gets the total budget of the skipped queries whose publication budget is absorbed. Then the publication threshold T_k for query f_k is determined by m_p/ϵ_k,2. The reason for defining such a threshold is that the Laplace noise injected for f_k has scale T_k. DEBA then compares the distance dist to the threshold T_k (Line 12). If the distance is larger than T_k, DEBA samples the private social network G (Line 13) and outputs the noisy counts (Line 14), or null otherwise (Line 20). In addition, DEBA outputs the cth-hop connection fingerprints with the total remaining budget, as shown in Lines 16-19.
Remark: Recall that the error of randomly sampling the input graph G is data-dependent, as is the error of the distance-based approximation. Hence we cannot present a formal utility analysis for such a data-dependent mechanism. We will present extensive experiments using real datasets to justify the performance of our algorithms. Moreover, precisely optimizing t for an arbitrary f may be nontrivial in practice because, although the error of DP_f^t may be quantified without knowledge of the dataset, the impact of sampling does depend on the input data. A possible option, in some cases, is to make use of old data that is no longer sensitive (or not as sensitive), and that comes from a similar distribution, to approximately optimize the threshold without violating privacy. It has been demonstrated that for many functions the simple heuristics of setting t = max_v P^v or t = 1/m∑_v P^v often give good results on real data and privacy specifications <cit.>.
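Since Algorithm 1 itself is not reproduced here, the following schematic sketch (ours) mirrors only the publication step described above: exponentially decreasing budgets 𝒫/2^k+1, a distance test against the threshold T_k = m_p/ϵ_k,2, and absorption of the budgets of skipped queries. It omits the personalized edge sampling and the noisy distance computation of mechanism M_1, so it should be read as an outline rather than as the paper's Algorithm 1.

```python
import numpy as np

def deba_skeleton(true_counts, m_p, t, c):
    """Schematic DEBA publication loop (not Algorithm 1 verbatim).
    true_counts[k-1] is the exact vector f_k(G); t is the total publication budget
    (the other half of the overall budget is assumed spent on the distance test)."""
    true_counts = [np.asarray(x, dtype=float) for x in true_counts]
    released = [None] * c
    # f_1 has sensitivity 1 and is always published, with budget t/4.
    released[0] = true_counts[0] + np.random.laplace(0.0, 4.0 / t, size=len(true_counts[0]))
    last_pub, absorbed = released[0], 0.0
    for k in range(2, c + 1):
        eps_k = t / 2 ** (k + 1) + absorbed            # pre-allocated plus absorbed budget
        threshold = m_p / eps_k                        # T_k: scale of the noise to be added
        dist = np.mean(np.abs(true_counts[k - 1] - last_pub))  # noisy in the real mechanism
        if k == c or dist > threshold:                 # publish (f_c is always published)
            released[k - 1] = true_counts[k - 1] + np.random.laplace(
                0.0, m_p / eps_k, size=len(true_counts[k - 1]))
            last_pub, absorbed = released[k - 1], 0.0
        else:                                          # skip: reuse the last publication
            released[k - 1] = last_pub.copy()
            absorbed += t / 2 ** (k + 1)
    return released
```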
§.§ DUBA-LF
The proposed DEBA mechanism publishes the private count of f_k by adding Laplace noise to the true answer, where the scale of the noise is proportional to the global sensitivity of f_k. As pointed out above, the global sensitivity of f_k is 1 for k = 1 and m_p for 2 ≤ k ≤ c. Since there can be numerous public users in large network graphs, the global sensitivity of the counting query f_k may be very large and make the noise large enough to overwhelm the true answer. In order to improve the utility of the private release of f_k, we use the notion of a ladder function to reduce the introduced noise. The definition of the ladder function is presented first.
(Ladder function<cit.>). A function I_x(G) is said to be a ladder function of query f if and only if (a) LS(G, f) ≤ I_0(G), for any G; (b) I_x(G') ≤ I_x+1(G), for any pair of neighboring graphs G, G', and any nonnegative integer x.
A straightforward example of a ladder function for the count query f_k is I_t(G, f_k) = Δ(f_k), since LS(G, f) ≤ Δ(f_k) for any G, and a constant always satisfies the second requirement. However, as aforementioned, the global sensitivity of the counting query f_k can be extremely large for CFP counting, which may not require so much noise.
For the counting query f_1, the global sensitivity is 1 and the ladder function for f_1 can be defined as I_t(G, f_1) = 1. Before detailing the ladder function for f_k (2 ≤ k ≤ c), we refine the notion of local sensitivity by defining the sensitivity for a particular pair of nodes (v_i,v_j), denoted by LS_ij(G,f). Then LS(G, f) = max_i,j LS_ij(G, f). Let p_i denote the number of 1st-hop connection fingerprints of user v_i and d_max be the maximum node degree in G. Then it is easy to see that LS(G, f_k) = max_i p_i for f_k (2 ≤ k ≤ c). Without loss of generality, we simply assume that p_i ≤ p_j for a particular pair of nodes (v_i,v_j). We then give our ladder function for the kth-hop connection fingerprint counting queries f_k (2 ≤ k ≤ c) in Theorem 4 and prove that the constructed ladder function satisfies the requirements in Definition 10. I_t(G, f_k) = min{m_p, LS(G, f_k)+t} is a ladder function for f_k (2 ≤ k ≤ c). The proof contains the following two steps.
(i) LS(G, f_k) ≤ I_0(G, f_k) for any G. This step is trivial since I_0(G, f_k) = LS(G, f_k).
(ii) I_t(G', f_k) ≤ I_t+1(G, f_k) for any neighboring graphs G' and G. Note that the set {G^* | d(G^*,G') ≤ t} is a subset of {G^* | d(G^*,G) ≤ t+1}. Therefore, max_G^*: d(G^*,G')≤ t ‖f_k(G')-f_k(G^*)‖ ≤ max_G^*: d(G^*,G)≤ t+1 ‖f_k(G^*)-f_k(G)‖, i.e., I_t+1(G, f_k) = min{m_p, LS(G, f_k)+t+1} = min{m_p, LS(G, f_k, t+1)} ≥ min{m_p, LS(G', f_k, t)} = I_t(G', f_k).
It is clear that the ladder function I_t(G, f_k) converges to Δ(f_k) when t ≥ Δ(f_k) - LS(G, f_k). The ladder function I_t(G, f_k) is used to determine the quality function q in the exponential mechanism and to define how q varies. In particular, q is a symmetric function over the entire integer domain, centered at f_k(G). The quality function q is defined as follows:
(Ladder Quality<cit.>). Formally, given the ladder function I_t(G, f_k), we define the ladder quality function q_f_k(G,v_i,s) for node v_i by (i) q_f_k(G, v_i, f_k(v_i)) = 0; (ii) for s ∈ f_k(v_i) ± (∑_t=0^u-1 I_t(G, f_k), ∑_t=0^u I_t(G, f_k)], set q_f_k(G, v_i, s) = -u-1.
After assigning each integer a quality score, the sensitivity of the quality function can be calculated as Δ(q_f_k) = max_v_i,G,G',s |q_f_k(G,v_i,s)-q_f_k(G',v_i,s)| = 1. We refer the reader to <cit.> (Theorem 4.2) for the full proof.
DUBA-LF (Publication with Distance-based Uniform Budget Absorption using Ladder Functions) uses the ladder function to reduce the introduced noise while reallocating the pre-allocated uniform privacy budget. The pseudocode of DUBA-LF is presented in Algorithm 2. The personal private distance calculation mechanism M_1 is identical to that of DEBA (Lines 1-5 in Algorithm 1). The personal private publication mechanism M_2 is presented in Lines 6-21. Lines 6-9 carry out the publication step for the counting query f_1. The publication step for the kth-hop (2 ≤ k ≤ c) connection fingerprints is carried out in Lines 10-21. DUBA-LF samples the private social network G (Line 13 or Line 18) in the same way as DEBA, but the sampling probabilities are different. The personal private publication for the kth-hop (2 ≤ k ≤ c) connection fingerprints in DUBA-LF is also different from DEBA. If the distance is larger than T_k, or when counting for f_c, DUBA-LF uses an exponential-mechanism-based procedure, LFNoising, to provide differential privacy (Line 14 and Line 19). Otherwise, if the distance is not larger than T_k, DUBA-LF outputs null (Line 20).
LFNoising extends the NoiseSample algorithm of <cit.>. NoiseSample outputs a single value as the final differentially private result, while our proposed LFNoising addresses the problem of differentially private release in vector form. The pseudocode of LFNoising is presented in Algorithm 3. Given the ladder function I_t(G,f_k), the calculation of the range and weight of the first few rungs, e.g., rung 0 (the center) to rung M+1 (M = m_p - LS(G,f_k)), is shown in Lines 1-7. Lines 8-12 describe the random sampling of the private publication vector F̃_k, which represents the private count of the kth-hop (2 ≤ k ≤ c) connection fingerprints.
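A compact way to sample a single node's output from the ladder-based exponential mechanism is sketched below. It follows the quality function of the Ladder Quality definition (each integer at the (u+1)-th rung around the true count gets score −(u+1), with rung widths given by the ladder function of Theorem 4), but it enumerates a truncated output range instead of the rung-by-rung sampling of LFNoising, so it is an illustration of ours rather than Algorithm 3.

```python
import numpy as np

def ladder_width(u, LS, m_p):
    """Ladder function of Theorem 4: I_u(G, f_k) = min(m_p, LS(G, f_k) + u)."""
    return min(m_p, LS + u)

def ladder_quality(s, center, LS, m_p):
    """Quality of candidate output s for true count `center`:
    0 at the center, -(u+1) on the (u+1)-th rung away from it."""
    gap, u, covered = abs(s - center), 0, 0
    if gap == 0:
        return 0
    while True:
        covered += ladder_width(u, LS, m_p)
        if gap <= covered:
            return -(u + 1)
        u += 1

def ladder_sample(center, LS, m_p, eps, lo=0, hi=None):
    """Exponential mechanism over a (truncated) integer range with the ladder quality;
    since Delta(q) = 1, the weights are exp(eps * q / 2)."""
    hi = m_p if hi is None else hi
    outputs = np.arange(lo, hi + 1)
    q = np.array([ladder_quality(int(s), center, LS, m_p) for s in outputs], dtype=float)
    w = np.exp(0.5 * eps * q)
    return int(np.random.choice(outputs, p=w / w.sum()))
```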
§ PRIVACY ANALYSIS
The privacy guarantees of the proposed mechanisms are formally proved in this section. We first show that the proposed DEBA satisfies 𝒫-personalized differential privacy. Mechanism M_1 in Algorithm 1 is 𝒫/2-personalized differentially private. We use the notation F_k,-v and F_k,+v to denote the vector resulting from removing from, or adding to, F_k the tuple f_k(v). We can represent two neighboring datasets (vectors) as F_k and F_k,-v. For each 1 ≤ k ≤ c, all of the possible outputs of RS(F_k,𝒫/2c,t/2c) can be divided into those in which f_k(v) was selected and those in which f_k(v) was not selected. The sensitivity of the dist function in M_1 is m_p/m; therefore, the mechanism that injects Laplace noise with scale 2m_p c/(mt) in Line 5 can be denoted as DP_dist^t/2c. Thus, we have
Pr[S_f_k(F_k,𝒫/2c,t/2c)∈ O] = Pr[DP_dist^t/2c(RS(F_k,𝒫/2c,t/2c))∈ O]
= ∑_Z ⊆ F_k,-v (π(v,t/2c) Pr[RS(F_k,𝒫/2c,t/2c)= Z]·Pr[DP_dist^t/2c(Z_+v)∈ O]) + ∑_Z ⊆ F_k,-v ((1 - π(v,t/2c))·Pr[RS(F_k,𝒫/2c,t/2c)= Z] Pr[DP_dist^t/2c(Z) ∈ O])
≤ ∑_Z ⊆ F_k,-v (π(v,t/2c) Pr[RS(F_k,𝒫/2c,t/2c) = Z] e^t/2c·Pr[DP_dist^t/2c(Z) ∈ O]) + (1 - π(v,t/2c)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O]
≤ e^t/2c π(v,t/2c) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O] + (1-π(v,t/2c)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O]
= (1 - π(v,t/2c) + e^t/2c π(v,t/2c)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O].
There are two cases for v that we must consider: (1) P^v/2c ≥ t/2c; (2) P^v/2c < t/2c. In the former case, we have π(v,t/2c)=1 and the above bound can be rewritten as
Pr[S_f_k(F_k,𝒫/2c,t/2c)∈ O] ≤ (1 - 1 + e^t/2c· 1) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O] = e^t/2c Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O] ≤ e^P^v/2c Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O].
In the latter case P^v/2c < t/2c,
Pr[S_f_k(F_k,𝒫/2c,t/2c)∈ O] ≤ (1 - π(v,t/2c) + e^t/2c π(v,t/2c)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O]
= (1 - (e^P^v/2c-1)/(e^t/2c-1) + e^t/2c (e^P^v/2c-1)/(e^t/2c-1)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O]
= (e^P^v/2c (e^t/2c-1)/(e^t/2c-1)) Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O]
= e^P^v/2c Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O].
To sum up, we have Pr[S_f_k(F_k,𝒫/2c,t/2c)∈ O] ≤ e^P^v/2c Pr[S_f_k(F_k,-v,𝒫/2c,t/2c)∈ O], and for each 1 ≤ k ≤ c the mechanism satisfies 𝒫/2c-PDP. Therefore, according to Theorem 2, mechanism M_1 in Algorithm 1 is 𝒫/2-personalized differentially private.
We have proved that mechanism M_1 satisfies 𝒫/2-personalized differential privacy. To prove that DEBA satisfies 𝒫-personalized differential privacy, we must prove that, for every k (1 ≤ k ≤ c), M_2 is 𝒫·∑_j=r+1^k 1/2^j+1-personalized differentially private if it publishes, and 0-personalized differentially private otherwise. The proposed DEBA satisfies 𝒫-PDP. Mechanism M_1 satisfying 𝒫/2-personalized differential privacy is captured in Lemma 1.
Mechanism M_2 publishes F̃_k or null. In the latter case, the privacy budget is trivially equal to zero, as no publication occurs. In the former case, the sensitivity of f_k is m_p for 2 ≤ k ≤ c and 1 for k = 1, and the publication budget depends on the previous publications. Hence, the mechanism that injects Laplace noise with scale m_p/∑_j=r+1^k t/2^j+1 can be denoted as DP_f_k^∑_j=r+1^k t/2^j+1 for 2 ≤ k ≤ c, and the mechanism that injects Laplace noise with scale 4/t can be denoted as DP_f_1^t/4 for k = 1. Following the proof technique of Lemma 1, it is easy to prove that M_2 is 𝒫·∑_j=r+1^k 1/2^j+1-PDP if it publishes a non-null result for each k (1 ≤ k ≤ c). Moreover, the total publication budget is 𝒫/2, and it is at most equal to the case where each of these c queries receives a budget of t/2^k+1. So, ∑_j=1^c (1/2^j+1)𝒫 ≤ 𝒫/2. According to Theorem 2, we conclude that M_2 satisfies 𝒫/2-PDP. To sum up, the proposed DEBA satisfies 𝒫-PDP.
DUBA-LF employs a personal private distance calculation mechanism M_1 identical to that of DEBA, and its privacy guarantee is captured by Lemma 1. In order to show that the mechanism M_2 in DUBA-LF satisfies 𝒫/2-PDP, we need to prove that the algorithm LFNoising(f_k(G_k),ϵ_k,2, I_t(G,f_k)) is ϵ_k,2-differentially private, i.e., that LFNoising(f_k(G_k),ϵ_k,2, I_t(G,f_k)) can be denoted as DP_f_k^ε_k,2.
LFNoising(f_k(G_k),ϵ_k,2, I_t(G,f_k)) is ϵ_k,2-differentially private. There are two steps in the algorithm LFNoising: selecting a rung of the ladder (where rung M+1 is considered as a special case) according to the relative value of the weight of the rung, and picking an integer from the corresponding rung. For rungs 0 to M, the possible output values on the same rung are picked uniformly. For rung M+1, the possible outputs are determined by two actions: picking how many further rungs down the ladder to go and then picking uniformly from these. As discussed above, for 1 ≤ i ≤ m, the output probability distribution is equal to
Pr[F̃_k[i] = ρ] = exp(ε_k,2/(2Δ(q_f_k))·q_f_k(G,v_i,ρ)) / ∑_ρ'∈ℤ exp(ε_k,2/(2Δ(q_f_k))·q_f_k(G,v_i,ρ')).
As argued above, if the input graph G is replaced by its neighboring graph G', the quality of ρ changes by at most Δ(q_f_k) = 1, i.e., the numerator exp(ε_k,2/(2Δ(q_f_k))·q_f_k(G,v_i,ρ)) can change by at most a factor of exp(ε_k,2/(2Δ(q_f_k))·Δ(q_f_k)) = exp(ε_k,2/2). Moreover, for a single change in the graph G the denominator decreases by at most a factor of exp(-ε_k,2/2), so the ratio between the new probability of ρ and the original one is at most exp(ε_k,2). Therefore, LFNoising(f_k(G_k),ϵ_k,2, I_t(G,f_k)) is ϵ_k,2-differentially private.
This result highlights the fact that LFNoising(f_k(G_k),ϵ_k,2, I_t(G,f_k)) can be denoted as DP_f_k^ε_k,2. Similar to Theorem 5, we can conclude that DUBA-LF is 𝒫-personalized differentially private. The proposed DUBA-LF satisfies 𝒫-PDP. The proof is similar to that of Theorem 5 and we omit it.
§ EXPERIMENTAL EVALUATION
We make use of three real-world graph datasets in our experiments: the polblogs <cit.>, facebook <cit.> and CondMat <cit.> networks. The polblogs network was crawled from the US political blogosphere in 2005. The vertices are blogs of a set of US politicians, and an edge between two blogs represents the existence of hyperlinks from one blog to the other. The facebook network was collected from survey participants using a Facebook app. The vertices are Facebook users, and an edge between two users represents the established friendship between them.
The CondMat network is a collaboration network from the e-print arXiv, covering scientific collaborations between authors who submitted papers to the Condensed Matter category. An edge between two authors indicates that they co-authored a paper. All the networks are represented by undirected and unweighted graphs with no isolated vertices.
The real-world networks we used do not contain public user identities. In other words, all vertices in the networks are anonymous. In order to evaluate the proposed CFP publication algorithms, we select a set of vertices in each network and assume their identities are public. Thereafter, based on these public vertices, we generate the CFPs of the remaining private vertices. In this paper, we select the vertices with the highest degrees as public vertices, and the proportion of public users is set to 5% throughout our experiments. Table I presents some basic statistics of the networks. We compared DEBA and DUBA-LF with the benchmarks Uniform and Exponential over these three datasets. We implemented all methods in Matlab, ran each experiment 100 times, and report the average error, expressed as the Mean Absolute Error (MAE) and the Mean Relative Error (MRE). To generate the privacy specifications for our experiments, we randomly divided the private users (records) into three groups: conservative, representing users with high privacy concern; moderate, representing users with medium concern; and liberal, representing users with low concern. The fraction of each type of user is 1/3. The users in the conservative, moderate and liberal groups received privacy preferences of ϵ_c = 1, ϵ_m = 4 and ϵ_l = 16, respectively. As a result, the average privacy preference over all users equals 7.
Fig.1 plots the MAE and MRE of all schemes for the Polblogs dataset, where we vary the sampling threshold t and set c = 4. DUBA-LF is the best method in this setting. It outperforms the Uniform mechanism by up to 76.6% in MAE and 152.5% in MRE, Exponential by up to 50.3% in MAE and 98.4% in MRE, and DEBA by up to 5.0% in MAE and 10.4% in MRE. The results also indicate that increasing the sampling threshold decreases the MAE and MRE for the DUBA-LF, DEBA and Uniform mechanisms, but the effect on the Exponential mechanism is not evident.
Fig.2 plots the MAE and MRE of all schemes for the Polblogs dataset, where we vary the sampling threshold t and set c = 7. DUBA-LF is the best method in terms of MRE, but it is outperformed by DEBA in MAE for small sampling thresholds. For sampling thresholds t ≥ 4, DUBA-LF outperforms the Uniform mechanism by up to 43.9% in MAE and 180.4% in MRE, Exponential by up to 38.8% in MAE and 181.8% in MRE, and DEBA by up to 7.1% in MAE and 37.8% in MRE. Increasing the sampling threshold decreases the MAE and MRE for the DUBA-LF and DEBA mechanisms, but the effect on the Uniform and Exponential mechanisms is smaller.
Fig.3 shows the MAE and MRE of all schemes for the Facebook dataset, where we vary the sampling threshold t and set c = 4. DEBA outperforms the other methods in this setting. DEBA outperforms the Uniform mechanism by up to 45.8% in MAE and 126.3% in MRE, Exponential by up to 42.9% in MAE and 125.0% in MRE, and DUBA-LF by up to 25.4% in MAE and 76.7% in MRE. Similar to Fig.2, increasing the sampling threshold has a less evident effect for both the Uniform and Exponential mechanisms.
We can also conclude that increasing the sampling threshold decreases the MAE and MRE of DUBA-LF and DEBA for small thresholds t, while increasing them for larger t.
The MAE and MRE of all schemes for the Facebook dataset with c = 7 are shown in Fig.4, where we again vary the sampling threshold t. DUBA-LF outperforms the other methods in this setting. It outperforms the Uniform mechanism by up to 27.2% in MAE and 86.3% in MRE, Exponential by up to 17.7% in MAE and 70.0% in MRE, and DEBA by up to 4.1% in MAE and 3.9% in MRE. Similar to Fig.1, increasing the sampling threshold decreases the MAE and MRE of DUBA-LF and DEBA. However, for the Uniform and Exponential mechanisms, increasing the sampling threshold decreases the MAE and MRE for small thresholds t while increasing them for larger t.
Fig.5 plots the MAE and MRE of all schemes for the CondMat dataset, where the sampling threshold t is varied and c is set to 4. DUBA-LF is the best method in this setting. It outperforms the Uniform mechanism by up to 107.9% in MAE and one order of magnitude in MRE, Exponential by up to 107.2% in MAE and also one order of magnitude in MRE, and DEBA by up to 35.8% in MAE and 190.7% in MRE. The results also indicate that increasing the sampling threshold decreases the MAE and MRE for the DUBA-LF, DEBA and Uniform mechanisms, but the effect on the Exponential mechanism is not evident.
Fig.6 shows the MAE and MRE of all schemes for the CondMat dataset, where the sampling threshold t is varied and c is set to 7. DUBA-LF is the best method in this setting. It outperforms the Uniform mechanism by up to 120.5% in MAE and 894.5% in MRE, Exponential by up to 76.9% in MAE and 862.7% in MRE, and DEBA by up to 34.4% in MAE and 84.3% in MRE. Increasing the sampling threshold has little effect on the MAE and MRE of the Uniform and Exponential mechanisms. We can also conclude that increasing the sampling threshold decreases the MAE and MRE of DUBA-LF and DEBA for small thresholds t, while increasing them for larger t.
§ CONCLUSION
The number of CFPs is one of the most important properties of a graph labeled with public users. In order to release the number of CFPs in the context of personalized privacy preferences, we proposed two schemes (DEBA and DUBA-LF) to achieve personalized differential privacy. Both DEBA and DUBA-LF use the distance-based budget absorption mechanism to improve the publication utility, while DUBA-LF also employs ladder functions to reduce the introduced noise. We formally prove that the proposed DEBA and DUBA-LF schemes are 𝒫-PDP, and we conduct thorough experiments with real datasets, which demonstrate the superiority and practicality of our proposed schemes.
1 L. Backstrom, C. Dwork, and J. Kleinberg. Wherefore art thou R3579X? In WWW, 2007. 2 M. Hay, G. Miklau, D. Jensen, D. Towsley, and P. Weis. Resisting structural re-identification in anonymized social networks. In VLDB, 2008. 3 G. Cormode, D. Srivastava, S. Bhagat, and B. Krishnamurthy. Class-based graph anonymization for social network data. In VLDB, 2009. 4 L. Zou, L. Chen, and M. Özsu. K-automorphism: A general framework for privacy preserving network publication. In VLDB Endowment, vol. 2, no. 1, pp. 946-957, 2009. 5 J. Cheng, A. W.-c. Fu, and J. Liu. K-isomorphism: privacy preserving network publication against structural attacks. In SIGMOD'10, 2010, pp. 459-470. 6 Wang Y, Zheng B.
Preserving privacy in social networks against connection fingerprint attacks[C]//Data Engineering (ICDE), 2015 IEEE 31st International Conference on. IEEE, 2015: 54-65. 7 C. Dwork. Differential privacy. In ICALP, pages 1-12, 2006. 18 Yuan M, Chen L, Yu P S. Personalized privacy protection in social networks[J]. Proceedings of the VLDB Endowment, 2010, 4(2): 141-150. 34 Ebadi H, Sands D, Schneider G. Differential Privacy: Now it's Getting Personal[C]// ACM Sigplan-Sigact Symposium on Principles of Programming Languages. ACM, 2015:69-81. 8 Jorgensen Z, Yu T, Cormode G. Conservative or liberal? Personalized differential privacy[C]//Data Engineering (ICDE), 2015 IEEE 31st International Conference on. IEEE, 2015: 1023-1034. 9 G. Times, Media, govt, organizations get hooked on weibo: report. 2013. [Online].http://www.globaltimes.cn/content/757560.shtml 10 H. Bunke. On a relation between graph edit distance and maximum common subgraph. Pattern Recogn. Lett.,18(9):689C694, Aug. 1997. 11 F. D. McSherry. Privacy integrated queries: an extensible platform for privacy-preserving data analysis. In Proc. of ACM SIGMOD, Jun.29-Jul.2009,Providence, Rhode Island 12 C. Dwork, et al. Calibrating noise to sensitivity in private data analysis.In Proceedings of TCC. Springer, 2006, pp. 265-284. 13 F. McSherry and K. Talwar, Mechanism design via differential privacy,in FOCS, 2007, pp. 94-103 14 K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In STOC, pages 75C84, 2007. 15 Mittal P, Papamanthou C, Song D. Preserving Link Privacy in Social Network Based Systems[J]. Computer Science - Cryptography and Security, 2012. 16 Liu C, Mittal P. LinkMirage: Enabling Privacy-preserving Analytics on Social Relationships[C]. NDSS,2016. 17 Hay M, Miklau G, Jensen D, et al. Anonymizing social networks[J]. Technical Report, University of Massachusetts, Amherst, 2007. 19 L. Sweeney, K-anonymity: A Model for Protecting Privacy, IJUFKS,vol. 10, no. 5, pp. 557C570, 2002. 20 M. Hardt and A. Roth. Beating randomized response on incoherent matrices. In STOC, 2012. 21 M. Hay, C. Li, G. Miklau, and D. Jensen. Accurate estimation of the degree distribution of private networks. In ICDM, 2009. 22 S. P. Kasiviswanathan, K. Nissim, S. Raskhodnikova, and A. Smith. Analyzing graphs with node differential privacy. In TCC, 2013. 23 E. Shen and T. Yu. Mining frequent graph patterns with differential privacy. In SIGKDD, 2013. 24 D. Proserpio, S. Goldberg, and F. McSherry. Calibrating data to sensitivity in private data analysis: A platform for differentially-private analysis of weighted datasets. In VLDB, 2014. 25 J. Zhang, G. Cormode, C. Procopiuc, D. Srivastava, and X. Xiao. Private release of graph statistics using ladder functions. In SIGMOD, 2015. 26 A. Sala, X. Zhao, C. Wilson, H. Zheng, and B. Y. Zhao. Sharing graphs using differentially private graph models. In IMC, 2011. 27 Y.Wang and X.Wu. Preserving differential privacy in degree-correlation based graph generation. TDP,2013. 28 Q. Xiao, R. Chen, and K. Tan. Differentially private network data release via structural inference. KDD,2014. 29 Chen R, Fung B C M, Yu P S, et al. Correlated network data publication via differential privacy[J]. Vldb Journal, 2014, 23(4):653-676. 30 G.Kellaris, et al. Differentially private event sequences over infinite streams. PVLDB,7(12): 1155-1166(2014) 31 Adamic L A, Glance N. 
The political blogosphere and the 2004 US election: divided they blog[C].Proceedings of the 3rd international workshop on Link discovery. ACM, 2005: 36-43. 32 McAuley J J, Leskovec J. Learning to Discover Social Circles in Ego Networks[C].NIPS. 2012, 2012: 548-56. 33 Leskovec J, Kleinberg J, Faloutsos C. Graph evolution: Densification and shrinking diameters[J]. ACM Transactions on Knowledge Discovery from Data (TKDD), 2007, 1(1): 2.
http://arxiv.org/abs/1709.09454v2
{ "authors": [ "Yongkai Li", "Shubo Liu", "Dan Li", "Jun Wang" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20170927111927", "title": "Release Connection Fingerprints in Social Networks Using Personalized Differential Privacy" }
TIFPA - INFN, Via Sommarive 14, 38123 Povo (TN), Italy; Dipartimento di Fisica, Università di Trento, Via Sommarive 14, 38123 Povo (TN), Italy. [email protected]
We discuss some thermodynamical definitions for black holes in modified theories of gravity.
§ INTRODUCTION
In General Relativity (GR), several thermodynamical notions can be introduced for black holes (BHs), but in modified theories of gravity the black hole solutions are not expected to share the same properties as their Einsteinian counterparts. In F(R)-modified gravity the First Law of thermodynamics can be derived from the equations of motion, evaluating independently the entropy via the Wald method and the Killing-Hawking temperature from the metric, and an expression for the BH Killing energy can be found. In an analogous way, in other theories of modified gravity (for instance, in Gauss-Bonnet gravity) the First Law of thermodynamics can be used to infer the black hole energy. This proceeding is mainly based on Refs. <cit.>.
§ BLACK HOLES IN GENERAL RELATIVITY
Any spherically symmetric and four-dimensional metric can be locally expressed in the form: ds^2 = γ_ij(x^i)dx^idx^j + ℛ^2(x^i) dΩ_2^2, i,j ∈{0,1}, where dΩ_2^2 is the metric of a two-dimensional maximally symmetric space, γ_ij(x^i) is the reduced metric of the normal space-time with coordinates x^i, and ℛ(x^i) is the areal radius, a function of the coordinates of the normal space. On the normal space one can introduce the scalar quantity χ(x^i)=γ^ij(x^i)∂_iℛ(x^i)∂_jℛ(x^i), such that the sphere with areal radius ℛ(x^i) turns out to be trapped when χ(x^i)<0; marginal when χ(x^i)=0; untrapped when 0<χ(x^i). Thus, the dynamical trapping horizon of a black hole is defined by the conditions χ(x^i)|_H = 0, 0<∂_iχ(x^i)|_H. In this paper, the subscript `H' denotes a quantity evaluated on the coordinates of the horizon.
In General Relativity we can associate to the black hole horizons several thermodynamical quantities, namely the energy, the entropy and the surface gravity. For the energy we have a quasi-local definition given by the Misner-Sharp formula, E_MS(x^i):=1/(2G_N) ℛ(x^i)[1-χ(x^i)], with G_N Newton's constant. Thus, the Misner-Sharp energy evaluated on the BH horizon corresponds to the BH Killing energy/mass, E=r_H/2G_N. The entropy of a black hole satisfies the Area Law, S=𝒜_H/(4 G_N), namely it is proportional to the area 𝒜_H of the horizon. Finally, for static black holes we may use the time-like Killing vector field ξ_μ(x^ν) to define the Killing surface gravity κ_K as follows, κ_K ξ^μ(x^ν)=ξ^ν∇_νξ^μ(x^ν). In the dynamical case, where the time-like Killing vector field is absent, Hayward found a way to infer the surface gravity by working with the metric only, κ_H:=(1/2)□_γℛ(x^i)|_H, where the d'Alembertian is evaluated with respect to the reduced metric γ_ij(x^i).
To the horizon of a black hole it is also possible to associate a temperature. In fact black holes are not so black and may emit radiation, dubbed the "Hawking radiation", due to quantum effects near the horizon <cit.>. In the static case, all derivations of the Hawking radiation rate lead to the semi-classical expression, Γ≡ e^-2πΔ E_K/κ_K, in terms of the change Δ E_K of the Killing energy of the emitted particle and the Killing surface gravity. Thus, the surface gravity can be identified with the Killing temperature as T_K:=κ_K/2π. Therefore, if one uses the change of the entropy Δ S one easily obtains the First Law of black hole thermodynamics, Δ E_K=T_KΔ S.
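As an elementary consistency check of these definitions (our own illustration, not part of the original discussion), the First Law can be verified explicitly for the Schwarzschild solution, for which α(r)=0 and B(r)=1-2G_N M/r, so that r_H=2G_N M:

```latex
% Schwarzschild check: E, S, T_K and the First Law T_K dS = dE.
\[
E=\frac{r_H}{2G_N}=M,\qquad
S=\frac{\mathcal{A}_H}{4G_N}=\frac{\pi r_H^{2}}{G_N},\qquad
T_K=\frac{\kappa_K}{2\pi}=\frac{B'(r_H)}{4\pi}=\frac{1}{4\pi r_H},
\]
\[
T_K\,\mathrm{d}S=\frac{1}{4\pi r_H}\,\frac{2\pi r_H\,\mathrm{d}r_H}{G_N}
=\frac{\mathrm{d}r_H}{2G_N}=\mathrm{d}E .
\]
```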
In the dynamical case Hayward found a way to derive the First Law from the equations of motion. Assuming the Einstein equations of GR, in a generic four-dimensional spherically symmetric space-time, the following geometric identity holds true on the black hole trapping horizon <cit.>, (κ_H/2π) d/dr_H(𝒜_H/4G_N) = d/dr_H(ℛ_H/2G_N) + (T_H^(2)/2) d𝒱_H/dr_H, where 𝒱_H is the three-volume enclosed by the horizon and T^(2)_H is the reduced trace of the matter stress-energy tensor at the horizon, which acts like a working term. At thermal equilibrium the Gibbs equation leads to T Δ S = Δ E + p dV, dV=𝒱_k r_H^2 dr_H, such that, by introducing the entropy (<ref>) and the BH energy (<ref>), one may suggest the Kodama/Hayward temperature, T_H:=κ_H/2π.
Let us restrict our analysis to the spherically symmetric static space-time. The metric reads ds^2=-e^2α(r)B(r)dt^2+dr^2/B(r)+r^2dΩ_k^2, dΩ_k^2=(dρ^2/(1-kρ^2)+ρ^2 dϕ^2), where α(r) and B(r) are functions of the radial coordinate only, ℛ=r is the areal radius, and the topology depends on the parameter k and can be spherical, flat or hyperbolic for k=+1,0,-1, respectively. A static solution describes a black hole as soon as there exists an event horizon with a real and positive radius r=r_H where B(r_H)=0, 0<B'(r)|_r_H. The prime denotes the derivative with respect to r. We should note that in the static case the Killing temperature T_K and the Kodama temperature T_H associated to the event horizon are in principle different when α(r_H)≠ 0, T_K:=(1/4π) e^α(r_H)B'(r_H), T_H:=(1/4π) B'(r_H). In General Relativity this is not a problem. The Hawking radiation rate is independent of the choice of temperature and energy of the emitted particle, and in the vacuum case of the Schwarzschild solution one has α(r)=0 and the two definitions coincide. However, this is not true for the vacuum case of a modified gravity theory where α(r)≠ 0. Moreover, in modified gravity it is not easy to define the energy of a black hole. In General Relativity the Misner-Sharp mass corresponds to the charge of a conserved current from the second-order differential equations of the theory, but in modified gravity we deal with higher derivative field equations and we must use a different approach. In what follows, we will consider some classes of modified theories with black hole solutions and we will analyze the First Law of thermodynamics in their framework.
§ F(R)-FOUR DIMENSIONAL MODIFIED GRAVITY
Let us consider F(R)-gravity in vacuum, whose action is given by (see Ref. <cit.> for some general reviews), I=1/(16π G_N)∫_ℳ d^4 x√(-g) F(R). Here, g is the determinant of the metric tensor g_μν(x^μ), F(R) is a function of the Ricci scalar only and ℳ is the space-time manifold. Given a static black hole solution described by the metric (<ref>), if R_H explicitly depends on r_H only, from the (0,0)-component of the F(R)-field equations evaluated on the event horizon we obtain T_K Δ S_W = e^α(r_H)𝒱_k(k F_R(R_H)/(2G_N) - [R_H F_R(R_H)-F(R_H)] r_H^2/(4G_N)), where 𝒱_k≡𝒜_k/r^2 depends on the topology, the Killing temperature T_K (<ref>) emerges in a natural way, and S_W is the Wald entropy <cit.>, S_W=𝒜_k(r_H) F_R(R_H)/(4 G_N), Δ S_W=(1/4G_N)(2𝒱_k r_H F_R(R_H)dr_H+𝒱_k r_H^2 F_RR(R_H)d R_H). The second expression above holds true when R_H is an explicit function of r_H only.
The condition on the entropy looks quite restrictive, but in a large class of explicit examples of F(R) static black hole solutions it is well satisfied. Thus, we can derive for a generic F(R)-gravitational model a First Law of black hole thermodynamics in the form Δ E_K:=T_KΔ S_W, where Δ E_K is the variation of the Killing energy of the black hole itself. As a consequence, one may define E_K:=(𝒱_k/4π)∫ e^α(r_H)(k F_R(R_H)/(2G_N)-[R_H F_R(R_H)-F(R_H)] r_H^2/(4 G_N)) d r_H. Here, an expression for the BH energy is proposed by deriving the First Law from the equations of motion of F(R)-gravity, evaluating independently the entropy via the Wald method and the Hawking temperature via quantum mechanical methods in curved space-times.
Let us consider some examples where only one integration constant C appears in the SSS metric, which may describe a black hole for some choices of the topology. For the case R=4Λ with Λ=(R F_R(R)-F)/(2F_R(R)) and the Schwarzschild dS/AdS solution α(r)=0, B(r)=(k-C/r-Λ r^2/3), we get T_K=(k-Λ r_H^2)/(4π r_H), S_W=𝒜_k(r_H)F_R(R_H)/(4 G_N), E_K= (𝒱_k F_R(R_H)/(8π G_N)) r_H(k-Λ r_H^2/3). Therefore, by using the fact that B(r_H)=0, one has E_K=𝒱_k F_R(R_H) C/(8π G_N).
For the model F(R)=γ√(k(R+12λ)) with α(r)=0 and B(r)=(k/2-C/r^2+λ r^2), the BH Killing energy reads E_K∝ C. For the model F(R)=γ(1/R-h^2/6) with exp[2α]=r/r_0, r_0 being a dimensional parameter, and B(r)=4/7(k-7r/6h +C/r^7/2), one obtains E_K∝ C. For the class of Clifton-Barrow models F(R)=R^δ+1(κ^2)^δ, δ≠ 1, the metric reads <cit.>, ds^2=-(r/r_0)^2a(k-C/r^b)dt^2+β dr^2/(k-C/r^b)+r^2 dΩ^2_k, where a, b and β are functions of δ. Also in this case the BH Killing energy results to be E_K∝ C. In all these examples the Killing energy is proportional to the integration constant of the metric, giving it a physical meaning as in the Schwarzschild case of GR. We point out that, when α(r)≠ 0, such a result cannot be achieved if one uses the Hayward prescription, but with the Killing formalism some cancellations occur and we obtain this reasonable result.
Let us consider the case of R^2-gravity, where two integration constants appear in the metric. The action reads: I=∫_ℳ d^4x√(-g) R^2. Such a model is often considered in the inflationary scenario and admits the Schwarzschild dS/AdS solution, α(r)=0, B(r)=(k-C/r-λ r^2/3), where R=4λ and the cosmological constant λ is a free parameter like C, due to the fact that it is not fixed by the gravitational Lagrangian. As a consequence, when we take the thermodynamical variation of the Killing energy of the black hole described by this solution, we must also consider the variation with respect to λ, and it is not possible to give an explicit expression for the energy. On the other hand, the Wald entropy of the black hole reads S_W=𝒜_k(r_H) R_H/2=2𝒜_k(r_H)λ, and vanishes for λ=0. The cosmological constant plays the role of the inverse of the Planck mass of GR, since the action is scale invariant. In fact, λ=1/L^2 introduces a fundamental length scale into the theory and one may consider it as a fixed parameter. Only in this case the First Law leads to E_K=(λ𝒱_k/π)[k r_H-λ r_H^3/3]=λ𝒱_k C/π. We observe that the presence of the R^2-term modifies the energy of a Schwarzschild dS/AdS black hole when the cosmological constant is different from zero (for example, in the model with Lagrangian ℒ= (R-2Λ)/(16π G_N)+R^2).
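For the reader's convenience we also collect the elementary horizon relations used repeatedly in the examples above for the Schwarzschild dS/AdS form of B(r) (a short cross-check of ours, following directly from B(r_H)=0):

```latex
% B(r) = k - C/r - \Lambda r^2/3: horizon condition and Killing temperature.
\[
B(r_H)=0 \;\Rightarrow\; C=r_H\Bigl(k-\frac{\Lambda r_H^{2}}{3}\Bigr),\qquad
B'(r_H)=\frac{C}{r_H^{2}}-\frac{2\Lambda r_H}{3}=\frac{k-\Lambda r_H^{2}}{r_H},
\]
\[
T_K=\frac{B'(r_H)}{4\pi}=\frac{k-\Lambda r_H^{2}}{4\pi r_H},\qquad
E_K=\frac{\mathcal{V}_k F_R(R_H)}{8\pi G_N}\,r_H\Bigl(k-\frac{\Lambda r_H^{2}}{3}\Bigr)
   =\frac{\mathcal{V}_k F_R(R_H)}{8\pi G_N}\,C .
\]
```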
§ GAUSS-BONNET MODIFIED GRAVITY
Let us now consider the following action, I=1/(16π G_N)∫_ℳ d^4 x √(-g) F(R,G), where F(R,G) is a function of the Ricci scalar R and of the Gauss-Bonnet four-dimensional topological invariant G. In this framework some static SSS black hole solutions are known, but in general it is not possible to derive the First Law from the field equations of the theory. However, given a BH solution, it is still possible to evaluate its Killing temperature, its Wald entropy and therefore its Killing energy. For example, the model with F(R,G)=R+√(G) admits the topological SSS solution ds^2=-B(r)dt^2+dr^2/B(r)+r^2(dρ^2/(1-kρ^2)+ρ^2 dϕ^2), B(r)=-k+r/C, C being an integration constant, which describes a black hole in the spherical topological case with k=1. The Wald entropy reads S_W=(𝒜_1(r_H)/4 G_N)[ℱ'_R+ℱ'_G(4k/r^2)]|_H = (π r_H^2/G_N)[1+C/r_H], and we see that, since ∂_r S_W|_r_H≠Δ S, the First Law cannot be derived from the equations of motion as in F(R)-gravity. On the other hand, by using the First Law with the Killing temperature we find T_K=1/(4π C), E_K=C/G_N, and we see that even in this case the integration constant of the solution can be identified with the energy. We may conclude that in the vacuum case of F(R)- and F(R,G)-gravity the Killing formalism leads to reasonable definitions for the thermodynamics of black holes.
§ NON VACUUM STATIC SPHERICALLY SYMMETRIC SOLUTION
As a last example, we consider the following model, where a scalar field ϕ is non-minimally coupled with the electromagnetic potential (F_μν is the electromagnetic field strength tensor): I=∫_ℳ d^4x√(-g)[(R-2Λ)/(16π G_N)-(1/2)∂^μϕ∂_μϕ +V(ϕ) -ξ e^√(16π G_N)λϕ(F^μνF_μν)], V(ϕ)=V_0 e^γ√(16π G_N)ϕ. Here, Λ, λ, ξ, γ and V_0 are fixed parameters of the theory. This model admits the following class of topological Lifshitz-like solutions, ds^2=-(r/r_0)^z B(r)dt^2+dr^2/B(r)+r^2dΩ_k^2, where z is a number and r_0 a dimensional parameter. The equations of motion constrain the field ϕ=ϕ(r) as ϕ(r)=√(2z/(16π G_N))log[r/r_0]. The form of B(r) turns out to be B(r)=2k/(z+2)-C/r^(1+z/2)+Ṽ_0 r^2/(2γ√(2z)+6+z)(r_0/r)^λ√(2z) +8ξQ̃^2/((2λ√(2z)+2-z)r^2)(r/r_0)^γ√(2z) -4Λ r^2/(6+z), where C is a free integration constant of the solution, Ṽ_0=16π G_N V_0, Q̃^2=G_N Q^2, Q being the charge of the electromagnetic potential, and the parameters of the model must be related to each other in order to satisfy the Klein-Gordon equation of the scalar field. For ϕ(r)=0, namely z=0, ξ=1/4 and V_0=0, we recover the Reissner-Nordström solution with cosmological constant. The model under investigation has second-order field equations as in GR, and when the solution (<ref>, <ref>) describes a black hole its mass is well defined as E=𝒱_k C/(8π G_N), while the entropy satisfies the Area Law (<ref>). Now from the first component of the field equations evaluated on the BH horizon we derive (B'(r_H)/4π)Δ S=Δ E+p dV, p=(p_ϕ+p_EM)_radial, where the working term collects the contributions from the radial pressures of the scalar and electromagnetic fields. Thus, the First Law holds true by making use of the Hayward temperature in (<ref>). It appears that, when we consider a static but non-vacuum solution, the Hayward formalism is more suited to describing the thermodynamics of black holes.
§ REFERENCES
uno L. Sebastiani and S. Zerbini, Eur. Phys.
J.C 71, 1591 (2011).due G. Cognola, O. Gorbunova, L. Sebastiani and S. Zerbini, Phys. Rev.D 84, 023515 (2011).tre R. Myrzakulov, L. Sebastiani and S. Zerbini,Gen. Rel. Grav.45, 675 (2013) [arXiv:1208.3392 [gr-qc]]. HT S. W. Hawking, Nature 248 30 (1974);Commun. Math. Phys. 43 199-220 (1975).Hay S. A. Hayward,Class. Quant. Grav.15, 3147 (1998)[gr-qc/9710089];S. A. Hayward, R. Di Criscienzo, L. Vanzo, M. Nadalini and S. Zerbini,Class. Quant. Grav.26, 062001 (2009)[arXiv:0806.0014 [gr-qc]]. OdS. Nojiri and S. D. Odintsov,eConf C 0602061 (2006) 06[Int. J. Geom. Meth. Mod. Phys.4 (2007) 115][hep-th/0601213];S. Nojiri and S. D. Odintsov,Phys. Rept.505 (2011) 59[arXiv:1011.0544 [gr-qc]].WaldR. M. Wald,Phys. Rev. D 48, no. 8, R3427 (1993) doi:10.1103/PhysRevD.48.R3427 [gr-qc/9307038].CB T. Clifton and J. D. Barrow,Phys. Rev. D 72, no. 10, 103005 (2005)[gr-qc/0509059].
http://arxiv.org/abs/1709.09986v1
{ "authors": [ "Lorenzo Sebastiani" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170927124600", "title": "Thermodynamical aspects of black holes in modified gravity" }
A. Tosin (Department of Mathematical Sciences "G. L. Lagrange", Politecnico di Torino, Torino, Italy, e-mail: [email protected]) and M. Zanella (Department of Mathematical Sciences "G. L. Lagrange", Politecnico di Torino, Torino, Italy, e-mail: [email protected]).
In this paper we present a Boltzmann-type kinetic approach to the modelling of road traffic, which includes control strategies at the level of microscopic binary interactions aimed at the mitigation of speed-dependent road risk factors. Such a description is meant to mimic a system of driver-assist vehicles, which by responding locally to the actions of their drivers can impact on the large-scale traffic dynamics, including those related to the collective road risk and safety.
Keywords: Road traffic, traffic control, binary interactions, Boltzmann-type equations
§ INTRODUCTION
Our inner-city mobility is rapidly changing due to the automation of driving and the sharing of information and communication technology. This process is leading to the creation of new paradigms in terms of efficient infrastructure and traffic management solutions. Among others, we mention in this direction the broad developments in the technology for driver-assist cars, self-driving cars and intelligent intersections, see <cit.>. As an effect of the fast rise of the urban population, such an automation process has also shed light on safety issues in road traffic management. According to recent reports on traffic safety in the world, see e.g. <cit.>, road risk arises as a result of several factors largely linked to the subjectivity of the driving behaviour of the individuals. Among others, here we recall in particular those related to the variability of the speed in the traffic flow: large differences in the speeds of the vehicles within the traffic stream are reported to be responsible for an increase in the crash risk.
So far, road risk and safety have been mainly investigated by means of empirical approaches. These include, for instance, the analysis of the distribution of the fatality rates over time or the study of the accident time series and of safety indicators, see e.g. <cit.>. Nevertheless, recently theoretical efforts have been devoted to the comprehension of the links between traffic dynamics and safety issues by means of mathematical models, see e.g. <cit.>.
In this paper we continue along the latter research line by combining a Boltzmann-type kinetic description of the road traffic, cf. <cit.> for related approaches, with a preliminary study of control strategies in the frame of the driver-assist car technology for the mitigation of the risk caused by the speed variance of the vehicles. The kinetic approach is particularly appropriate to our goal thanks to its fundamental link with the particle representation of the driver-vehicle system, which is precisely the level at which driver-assist control strategies can act. At the same time, it allows one to upscale rigorously such small-scale dynamics to an aggregate level, which is more suited to engineering needs.
The control approach adopted here has roots in the Model Predictive Control (MPC), which has been used in the engineering community for over fifty years, see e.g. <cit.> for an overview and further references. MPC methods have been traditionally employed in the frame of ODEs, whereas for kinetic and fluid dynamic equations few results are available in the literature, cf. <cit.>.
The hallmark of the kinetic formulation of the control problem is the derivation of an explicit feedback control for binary, i.e. one-to-one, vehicle dynamics, which is then straightforwardly embedded into a Boltzmann-type kinetic equation for a large number of vehicles. As is well known, MPC typically leads to a control which is suboptimal with respect to the theoretical optimal one. Nevertheless, performance bounds can be established which guarantee the consistency of such an approximation in the kinetic framework, see <cit.>. In addition to that, the proposed Boltzmann formulation of the MPC has an overall computational cost which scales linearly with the total number of vehicles of the system. This makes it competitive compared to other techniques for computing the optimal control.
In more detail, the paper is organised as follows. In Section <ref> we present the unconstrained microscopic traffic dynamics via the concept of binary interactions. In Section <ref> we introduce the binary control and discuss possible strategies for speed-dependent road risk mitigation. In Section <ref> we embed the constrained microscopic dynamics into a kinetic Boltzmann-type equation, which we then use to investigate analytically the large-scale impact of the envisaged risk mitigation strategies. In Section <ref> we provide numerical evidence of the risk mitigation effect by simulating the fundamental diagrams of traffic with special focus on the evolution of the speed variance. Finally, in Section <ref> we summarise the main aspects of the proposed approach and we briefly sketch research perspectives.
§ MICROSCOPIC BINARY INTERACTIONS
The kinetic modelling approach relies on the concept of binary interactions at the particle level, which fits naturally the follow-the-leader principle that most microscopic models of vehicular traffic are based on, cf. <cit.>. We describe the microscopic state of a vehicle by a scalar variable v∈ [0, 1] representing the (dimensionless) speed. If w∈ [0, 1] is the speed of the leading vehicle, we assume that in a short time interval Δt>0 an interaction between the two vehicles produces a change of speed of the former described by the rule v'=v+Δt I(v, w; ρ), where v' is the post-interaction speed and
I(v, w; ρ):= P(ρ)(min{v+Δv, 1}-v) if v<w, (1-P(ρ))(P(ρ)w-v) if v>w,
is the interaction function. In particular, Δv>0 is the increase in speed when the vehicle accelerates, ρ∈ [0, 1] is the (dimensionless) macroscopic density of the vehicles and P(ρ):=1-ρ^γ, γ>0, is the probability of accelerating. The function (<ref>) expresses the fact that a vehicle accelerates if it is slower than the leading vehicle (v<w) and brakes if it is faster (v>w). In the former case it increases its speed by a quantity which is at most Δv (the "min" guarantees that the bound v'≤ 1 is preserved), while in the latter case it decreases its speed to the value P(ρ)w, i.e. to a fraction P(ρ) of the speed of the leading vehicle. Owing to (<ref>), the lighter the traffic (i.e. the lower ρ), the closer to w the speed targeted when braking. Finally, acceleration and braking are more or less probable depending on the congestion of the traffic, which is expressed by the coefficients P(ρ) and 1-P(ρ) in (<ref>).
The leading vehicle is instead assumed not to change speed in consequence of the interaction just described, because binary interactions in vehicular traffic are mainly anisotropic. Therefore we set w'=w.
Notice that the binary interaction rules for v, w can be seen as a time discretisation of the following equations: dv/dt=I(v, w; ρ), dw/dt=0, relating the acceleration of a car to the interaction with its leading vehicle in a time interval (t, t+Δt].
§ BINARY CONTROL STRATEGIES FOR RISK MITIGATION
Having in mind driver-assist vehicles, we now include in the previous setting a reaction ability of the cars to the actions of the drivers aimed at enhancing the driving safety. Thus we modify the binary interaction rules set forth in Section <ref> by adding a control term u such that dv/dt=I(v, w; ρ)+u, dw/dt=0. The control is supposed to be applied by the car in response to the changes of speed imposed by the driver so as to minimise a certain cost functional J=J(v, w, u) linked to a measure of the driving risk. Hence the optimal control u^∗ is defined by u^∗:=argmin_{u∈𝒰} J(v, w, u) subject to (<ref>), 𝒰 being a set of admissible controls to be suitably specified.
Since the differences in the speed of the vehicles along the road have been recognised as a non-negligible factor of driving risk, cf. <cit.>, a conceivable form of the cost functional J to be minimised may be one which involves the binary variance of the speeds of the interacting vehicles. This leads us to consider: J(v, w, u)=1/2∫_t^t+Δt[(w-v)^2+ν u^2] ds, where the term 1/2(w-v)^2 is the aforesaid binary variance while ν/2 u^2, ν>0, is a penalisation of large controls. Another option is to minimise the gap between the current speed of the car and a certain desired (or imposed) speed v_d∈ [0, 1], which may be understood for instance as a speed limit or as a recommended speed fostering the occurrence of green waves. In this case we may consider the cost functional J(v, w, u)=1/2∫_t^t+Δt[(v_d-v)^2+ν u^2] ds, cf. <cit.> in a different context.
§.§ Feedback control
In order to tackle the control problem (<ref>)-(<ref>) we should consider a bounded control -∞<a≤ u≤ b<+∞. The values a, b should guarantee that the bounds 0≤ v≤ 1 on the post-interaction speed resulting from (<ref>) are not violated in the whole time interval (t, t+Δt]. However, instead of considering the constrained minimisation problem (<ref>) we will admit that u∈ℝ and we will show that it is possible to preserve the aforesaid bounds by carefully selecting Δt and ν a posteriori.
We consider at first the cost functional (<ref>). The Hamiltonian of the control problem (<ref>)-(<ref>) is in this case: H(v, w, u, λ):=1/2(w-v)^2+ν/2 u^2+λ(I(v, w; ρ)+u), λ=λ(t) being the Lagrange multiplier. From Pontryagin's principle, the optimality conditions turn out to be: ν u+λ=0, dλ/dt=w-v-λ∂_v I(v, w; ρ), λ(t+Δt)=0, which we discretise in (t, t+Δt) as ν u+λ=0, λ'=λ+Δt(w'-v'-λ'∂_v I(v', w'; ρ)), λ'=0. We have denoted by ' the variables computed at t+Δt and, in particular, we have used the implicit Euler scheme for the equation of the multiplier. As a result we get u=(Δt/ν)(w'-v'), where v', w' have to be understood as the post-interaction speeds produced by the constrained binary interaction rules resulting from the time discretisation of (<ref>), i.e. v'= v+Δt I(v, w; ρ)+Δt u, w'= w. Using these expressions we deduce u=Δt/(ν+Δt^2)(w-v)-Δt^2/(ν+Δt^2) I(v, w; ρ), namely we get u in feedback form as a function of the pre-interaction speeds. We notice that, consistently with the MPC approach together with a receding horizon strategy, the control u in (<ref>) is assumed to be constant in the time horizon Δt, coinciding with the characteristic time of a binary interaction.
By plugging (<ref>) into (<ref>) we finally deduce the following feedback-constrained binary interaction scheme: v'= v+νΔt/(ν+Δt^2) I(v, w; ρ)+Δt^2/(ν+Δt^2)(w-v), w'= w, corresponding to the instantaneous strategy of reducing the speed variance of the interacting vehicles. Using the expression (<ref>) of the interaction function I it is possible to check that if 0<Δt≤ 1 then v'∈ [0, 1] for any given v, w∈ [0, 1] and any ν>0. In particular, if ν→+∞ then (<ref>) reduces to the unconstrained binary interaction scheme discussed in Section <ref>.
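A direct transcription of the unconstrained interaction function and of the feedback-constrained update is straightforward; the following minimal sketch (with illustrative values of Δv, γ and of the arguments, not the code used for the simulations below) shows a single constrained binary interaction:

```python
def I(v, w, rho, dv=0.2, gamma=1.0):
    """Interaction function I(v,w;rho): accelerate towards min(v+dv,1) if slower
    than the leader, relax towards P(rho)*w if faster; P(rho) = 1 - rho**gamma."""
    P = 1.0 - rho ** gamma
    if v < w:
        return P * (min(v + dv, 1.0) - v)
    elif v > w:
        return (1.0 - P) * (P * w - v)
    return 0.0

def constrained_interaction(v, w, rho, dt, nu):
    """Feedback-constrained binary rule for the variance control:
    v' = v + (nu*dt/(nu+dt^2)) I(v,w;rho) + (dt^2/(nu+dt^2)) (w - v), w' = w."""
    a = nu * dt / (nu + dt ** 2)
    b = dt ** 2 / (nu + dt ** 2)
    return v + a * I(v, w, rho) + b * (w - v), w

# Example: one interaction at density rho = 0.6 with a faster follower.
print(constrained_interaction(v=0.8, w=0.3, rho=0.6, dt=0.5, nu=1.0))
```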
By repeating the same procedure in the case of the cost functional (<ref>) we determine the following control: u=Δt/(ν+Δt^2)(v_d-v)-Δt^2/(ν+Δt^2) I(v, w; ρ), which finally gives rise to the feedback-constrained binary interaction scheme v'= v+νΔt/(ν+Δt^2) I(v, w; ρ)+Δt^2/(ν+Δt^2)(v_d-v), w'= w. Also in this case, the restriction 0<Δt≤ 1 guarantees that v'∈ [0, 1] for all v, w∈ [0, 1] and all ν>0.
§ BOLTZMANN-TYPE DESCRIPTION
The constrained binary interaction rules (<ref>), (<ref>) can be fruitfully encoded in a Boltzmann-type statistical description of the system, which is suitable to depict the aggregate dynamics. To this end we introduce the distribution function f=f(t, v):ℝ_+× [0, 1]→ℝ_+ such that f(t, v)dv is the fraction of vehicles travelling with speed comprised between v and v+dv at time t. Under a given microscopic binary interaction scheme, the time evolution of f is ruled by the following Boltzmann-type equation (cf. <cit.>): d/dt∫_0^1φ(v)f(t, v) dv =ρ/2∫_0^1∫_0^1(φ(v')+φ(w')-φ(v)-φ(w)) f(t, v)f(t, w) dv dw, that here we have written in weak form for a test function φ:[0, 1]→ℝ. Taking φ≡ 1 we notice that the equation implies d/dt∫_0^1 f(t, v) dv=0, thus if f is chosen to be a probability distribution in v at the initial time t=0 it will be so at every successive time t>0. The physical counterpart of this fact is the conservation of the mass of vehicles.
Furthermore, with specific reference to the interaction rules (<ref>), (<ref>), and in particular to the fact that w'=w, we observe that the Boltzmann equation specialises as d/dt∫_0^1φ(v)f(t, v) dv =ρ/2∫_0^1∫_0^1(φ(v')-φ(v))f(t, v)f(t, w) dv dw, which can be equivalently rewritten in strong form as ∂_t f=Q(f, f), where Q(f, f)(t, v):=ρ/2(∫_0^1 (1/'J) f(t, 'v)f(t, 'w) dw-f(t, v)) is the collisional operator. As a minor change of notation, we point out that in this formulation the symbols 'v, 'w denote the pre-interaction speeds while v, w denote the post-interaction speeds. Moreover, 'J is the Jacobian of the transformation from the pre- to the post-interaction speeds.
It is worth stressing that, thanks to the fact that u is included in the interaction rules (<ref>) and (<ref>), the control mechanism is naturally embedded into the kinetic equation (<ref>).
§.§ Large-time trends
In order to gain some insight into the large-time trend of the solution to (<ref>), and particularly to ascertain the impact of the binary control strategies on the aggregate behaviour of the system, we take advantage of the quasi-invariant interaction limit introduced by <cit.>. The basic idea is to investigate the asymptotic regime in which the effect of each binary interaction becomes negligible but the number of interactions per unit time is considerably high.
For this we set

Δt = ε, ν = ν_0 ε (ν_0 > 0),

where ε > 0 is meant to be a small parameter, and we introduce the new time scale τ := ε t which, owing to the scaling by ε, is much larger than the characteristic time scale t of the binary interactions. Consequently we define the scaled distribution function g(τ, v) := f(τ/ε, v), which from (<ref>) is readily seen to satisfy

d/dτ ∫_0^1 φ(v) g(τ, v) dv = ρ/(2ε) ∫_0^1 ∫_0^1 (φ(v') - φ(v)) g(τ, v) g(τ, w) dv dw.

Since for ε small we have t = τ/ε large, the limit ε → 0^+ describes the large-time behaviour of f; on the other hand, by definition of g, the large-time behaviour of f is well approximated by that of g.

Let us consider, as a reference for comparison, the unconstrained interaction dynamics discussed in Section <ref>. Choosing φ(v) = v, v^2, respectively, in (<ref>) and then plugging (<ref>) under the scaling (<ref>)_1 we discover, in the limit ε → 0^+,

dV_u/dτ = ρ/2 ∫_0^1 ∫_0^1 I(v, w; ρ) g(τ, v) g(τ, w) dv dw,
dE_u/dτ = ρ ∫_0^1 ∫_0^1 v I(v, w; ρ) g(τ, v) g(τ, w) dv dw,

where

V_u(τ) := ∫_0^1 v g(τ, v) dv, E_u(τ) := ∫_0^1 v^2 g(τ, v) dv

are the mean speed and the energy of the system (here for the unconstrained dynamics). Performing the same calculations with the constrained interaction rules (<ref>) and the scaling (<ref>)_1-2, and denoting by V(τ), E(τ) the corresponding new mean speed and energy of the system, we find

dV/dτ = dV_u/dτ, dE/dτ = dE_u/dτ - ρ/ν_0 (E - V^2) ≤ dE_u/dτ,

the inequality in the second equation being due to the fact that E - V^2 ≥ 0, this expression being the variance of the distribution g. If we assume that the initial speed distribution is the same in the two cases, so that V(0) = V_u(0) and E(0) = E_u(0), we further obtain

V(τ) = V_u(τ), E(τ) ≤ E_u(τ) for all τ ≥ 0,

whence

E(τ) - V^2(τ) = E(τ) - V_u^2(τ) ≤ E_u(τ) - V_u^2(τ) for all τ ≥ 0,

which shows that the binary control strategy (<ref>) succeeds in reducing globally the speed variance in the traffic flow, viz. in mitigating the component of the collective road risk linked to the differences in the speed of the vehicles. Interestingly, this happens without affecting the natural mean speed of the flow.

Similar arguments can be repeated for the binary control strategy (<ref>), for which we obtain

dV/dτ = dV_u/dτ + ρ/(2ν_0) (v_d - V), dE/dτ = dE_u/dτ - ρ/ν_0 (E - v_d V).

For τ → +∞, using the bounds dV_u/dτ ≤ ρ/2 and dE_u/dτ ≤ ρ deducible from (<ref>), we can estimate |V - v_d| ≤ ν_0 and |E - v_d^2| ≤ ν_0 (v_d + 1), whence E - V^2 = O(ν_0). Thus strategy (<ref>) also operates so as to reduce the global speed variance of the car flow, being however more coercive than strategy (<ref>): depending on the strength ν_0 of the control, it tends to force the mean speed towards v_d.

§ NUMERICAL EXAMPLES: FUNDAMENTAL DIAGRAMS AND SPEED VARIANCE

In this section we present numerical results for the constrained traffic model (<ref>) introduced in the previous sections. The results have been obtained by means of direct Monte Carlo methods for the Boltzmann equation under the scaling (<ref>); see <cit.> for details on the numerical methods. Every test in the present section has been performed setting γ = 1 in (<ref>) and ε = 10^-2 in (<ref>).

§.§ Binary variance control (<ref>), (<ref>), (<ref>)

In Figure <ref> we show the contours of the kinetic distribution function in the time frame [0, 5] obtained in the unconstrained case (<ref>) (left column) and under the action of the binary control (<ref>) (right column), for two different values of the traffic density: ρ = 0.3, representative of a free traffic regime, and ρ = 0.6, representative of a congested traffic regime. It is apparent that the action of the binary control immediately reduces the variance of the speed distribution.
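The qualitative trends above can be reproduced with a very simple stochastic particle scheme, sketched below. This is not the direct Monte Carlo code cited above: the interaction function is again the placeholder I_example rather than the model's I(v, w; ρ), the time stepping is deliberately crude, and the parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def I_example(v, w, rho, gamma=1.0):
    # Placeholder interaction (not the paper's I): relaxation towards the leader.
    return gamma * (1.0 - rho) * (w - v)

def controlled_step(v, w, rho, dt, nu):
    # Feedback-constrained binary rule (variance control) derived above.
    return (v + nu * dt / (nu + dt**2) * I_example(v, w, rho)
              + dt**2 / (nu + dt**2) * (w - v))

def simulate(N=20000, rho=0.6, eps=1e-2, nu0=10.0, t_final=5.0, controlled=True):
    """Crude particle Monte Carlo for the scaled dynamics (dt = eps, nu = nu0*eps)."""
    v = rng.uniform(0.0, 1.0, N)                 # initial speed distribution g(0, v)
    nu = nu0 * eps if controlled else 1e12       # huge nu ~ control switched off
    for _ in range(int(t_final / eps)):
        interact = rng.random(N) < 0.5 * rho     # each car meets a leader w.p. rho/2 per step
        w = v[rng.integers(0, N, N)]             # speed of a randomly chosen leading vehicle
        v = np.where(interact, controlled_step(v, w, rho, eps, nu), v)
        np.clip(v, 0.0, 1.0, out=v)
    return v.mean(), v.var()

print("unconstrained:", simulate(controlled=False))
print("controlled   :", simulate(controlled=True))
# Expected trend: nearly identical mean speeds, smaller variance with the control,
# in line with V = V_u and E - V^2 <= E_u - V_u^2 derived above.
```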
On the other hand, from the fundamental diagrams of traffic displayed in Figure <ref> we see that the control does not affect either the mean speed or the macroscopic flux of the flow of vehicles, as it has been anticipated theoretically in Section <ref>. Finally, in Figure <ref> we show the time evolution of the variance of the speed distribution under unconstrained and constrained binary interactions and, in particular, we consider in the latter case two different values of the penalisation parameter ν_0 in (<ref>): ν_0=10^-1 (weak penalisation, strong control) and ν_0=10 (strong penalisation, weak control). Consistently with the theoretical predictions, cf. (<ref>), we observe that at each time step the variance of the constrained model is bounded from above by that of the unconstrained model. §.§ Desired speed control (<ref>), (<ref>), (<ref>)Concerning the control by means of the desired speed, we consider in particular a density-dependent v_d of the formv_d=v_d(ρ)=1-ρ, ρ∈ [0, 1],mimicking the fact that the driver-assist system may tune the target speed of the vehicle taking into account the level of congestion of the road. The relationship (<ref>) is a prototypical one implying that the desired speed is a non-increasing function of the traffic density, which vanishes in bumper-to-bumper conditions (ρ=1).In Figure <ref> we show the contours of the kinetic distribution function in both the unconstrained and the constrained case, cf. (<ref>), (<ref>), respectively, for the same values of the traffic density ρ=0.3, 0.6 as before. It is evident that the speed distribution concentrates asymptotically in two different values, the constrained one being dictated by v_d(ρ) as predicted theoretically in Section <ref>. As we see from Figure <ref>, this implies that in principle such a control strategy allows one to force the fundamental diagrams of traffic to adapt to ρ↦ v_d(ρ) (mean speed) and to ρ↦ρ v_d(ρ) (macroscopic flux). In particular, the choice (<ref>) of v_d induces a mean speed and a macroscopic flux which are lower than the unconstrained ones in the free traffic regime (low ρ) but higher in the congested traffic regime (high ρ) while still reducing the global speed variance, hence the related road risk, at each time step, cf. Figure <ref>.§ CONCLUSION In this paper we have described a mathematical approach to control problems in kinetic traffic modelling, with particular reference to road risk mitigation issues, whose hallmarks can be summarised as follows: [(i)]* the control method is based on the MPC strategy, which assumes that drivers determine their best actions by minimising a cost functional during a short and receding time horizon;* the time horizon is taken coincident with the duration of a single binary interaction with the leading vehicle, thereby allowing for a binary control implemented directly at the microscopic level;* the microscopic control problem can be solved in feedback form, i.e. the control can be expressed in terms of the microscopic states of the interacting vehicles, whereby constrained binary interaction rules can be defined explicitly;* the constrained binary interaction rules can be embedded in a Boltzmann-type kinetic description of the system, which allows for a statistical study of the global traffic dynamics and of the collective impact of the microscopic control strategies. 
Starting from the consideration that differences in the speeds of the vehicles are reported as one of the major road risk factors, we have constructed two possible control strategies for the reduction of the speed variance in the stream of vehicles. One of them does not change the fundamental diagram at the macroscopic level, whereas the other drives the global mean speed towards a congestion-dependent desired speed. Both strategies have proved effective in reducing the global statistical dispersion of the speeds of the vehicles, hence potentially in mitigating the road risk component linked to the speed variance. In this preliminary approach we assumed that all vehicles are subject to the action of the control. Further developments towards more realistic scenarios may instead include sparse control strategies. In our view, the proposed approach can provide a sound theoretical framework to model, analyse and simulate driver-assist car technologies from a genuinely multiscale perspective, with useful implications also for traffic governance. M.Z. acknowledges support from “Compagnia di San Paolo” (Torino, Italy).
Galactic Phylogenetics
P. Jofré & P. Das
===============================================================

Phylogenetics is a widely used concept in evolutionary biology. It is the reconstruction of evolutionary history by building trees that represent branching patterns and sequences. These trees represent shared history, and it is our intention for this approach to be employed in the analysis of Galactic history. In Galactic archaeology the shared environment is the interstellar medium in which stars form, and it provides the basis for tree-building as a methodological tool. Using elemental abundances of solar-type stars as a proxy for DNA, we built in <cit.> such an evolutionary tree to study the chemical evolution of the solar neighbourhood. In this proceeding we summarise these results and discuss future prospects.

§ INTRODUCTION

§.§ Stellar DNA

In the widely read review of <cit.> a very important concept was discussed: chemical tagging. Unlike the kinematical memory of long-lived low-mass stars, the chemical pattern imprinted in their atmospheres remains intact, reflecting the chemical composition of the gas from which they formed. Hence, the chemical abundances of stars can be used to identify the clouds from which they formed. By doing this for stars at different locations and of different ages, and complementing this information with their kinematic properties, one can constrain chemodynamical models of the Galaxy. This idea, combined with the arrival of Gaia data, is motivating the development of very large high-resolution spectroscopic surveys, able to provide about 20 elemental abundances for thousands of stars. This is in turn driving the development of sophisticated clustering techniques able to classify different stellar populations in chemical space and, hence, identify the origins of different stellar populations.

§.§ Chemical continuity

It is already well known that there is a continuity in chemical patterns between successive stellar generations. Massive stars pollute the interstellar medium with more metals, enabling the formation of new stars that are more metal-rich. This implies that the origins of stars identified with chemical tagging are related to each other, and understanding their relationship is what reveals the chemical evolution of the Milky Way.

§ EVOLUTIONARY TREE OF SOLAR NEIGHBORHOOD STARS

If we can identify the origins of stars using the concept of chemical tagging, phylogenetics offers a powerful way to complement chemical tagging and study the chemical evolution of the Milky Way. We can use the chemical pattern of stars as DNA and build evolutionary trees, in which every branch represents a different stellar population. At a first stage this achieves the same as chemical tagging, since we are essentially classifying stars in their chemical space. But at a second stage the branches can be used to study their relationships and reconstruct their shared history (a schematic example of such tree-building is sketched below).

In <cit.> we took the sample of solar twins of <cit.>, which comprises accurate chemical abundances of 17 different elements, ages and kinematic properties. With these abundances we constructed an evolutionary tree and found three different branches. By analysing the ages and kinematics of the stars in these branches we could attribute them to the thick disk, the thin disk and an intermediate population.
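As a schematic illustration of the idea (not the actual procedure of <cit.>, which relies on dedicated phylogenetic methods), the sketch below builds a simple tree by agglomerative clustering of mock abundance vectors; the three fiducial patterns, their offsets and scatters are invented purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Mock "chemical DNA": 17 abundance ratios for 21 stars drawn around three
# fiducial patterns (illustrative stand-ins for thin-disk, intermediate and
# thick-disk populations); a real analysis uses measured abundances and ages.
patterns = {"thin": 0.00, "intermediate": 0.10, "thick": 0.25}
stars, labels = [], []
for name, offset in patterns.items():
    for i in range(7):
        stars.append(offset + rng.normal(0.0, 0.03, 17))
        labels.append(f"{name}-{i}")
X = np.asarray(stars)

# Pairwise distances in abundance space play the role of sequence differences;
# an average-linkage tree then groups stars that share a chemical (and hence a
# formation) history, with branch lengths tracing chemical enrichment.
Z = linkage(pdist(X), method="average")
tree = dendrogram(Z, labels=labels, no_plot=True)  # set no_plot=False to draw it
print(tree["ivl"][:5])  # leaf ordering of the recovered tree
```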
By further analysing their branch lengths, we could estimate a total chemical enrichment rate for each of the populations, finding that the thick disk had a faster star formation rate than the thin disk, confirming, in a purely empirical manner, previous findings. We finally identified nodes in the tree which split into multiple branches, discussing possible extreme events in the past which might drive independent evolutionary paths for different populations.Although our sample of stars was very small, we showed that this approach has great potential to disentangle the different physical processes that formed our Galaxy.§ FUTURE Phylogenetic tools have existed for over a century and are based on using evolutionary trees to understand the evolution of systems. These can be biological, but also sociological (languages, religions). While the mechanisms of evolution differ, the manner in which phylogenetic trees are interpreted is remarkably similar. It is our contention that we can apply theories of Evolution to the Milky Way. As long as we believe chemical tagging can work, we can do Galactic Phylogenetics and reconstruct the history of our Galaxy. [Jofré et al.(2017)]Jofre17 Jofré, P., Das, P., Bertranpetit, J., & Foley, R. 2017, MNRAS, 467, 1140[Freeman & Bland-Hawthorn(2002)]Freeman02 Freeman, K., & Bland-Hawthorn, J. 2002, ARA&A, 40, 487 [Nissen(2015)]Nissen16 Nissen, P. E. 2015, A&A, 579, A52
^1Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309, USA ^2Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA ^3Department of Physics and Astronomy, Trinity University, San Antonio, TX 78212, USA While 2% of active galactic nuclei (AGNs) exhibit narrow emission lines with line-of-sight velocities that are significantly offset from the velocity of the host galaxy's stars, the nature of these velocity offsets is unknown.We investigate this question with Chandra/ACIS and Hubble Space Telescope/Wide Field Camera 3 observations of seven velocity-offset AGNs at z<0.12, and all seven galaxies have a central AGN but a peak in emission that is spatially offset by < kpc from the host galaxy's stellar centroid.These spatial offsets are responsible for the observed velocity offsets and are due to shocks, either from AGN outflows (in four galaxies) or gas inflowing along a bar (in three galaxies).We compare our results to a velocity-offset AGN whose velocity offset originates from a spatially offset AGN in a galaxy merger.The optical line flux ratios of the offset AGN are consistent with pure photoionization, while the optical line flux ratios of our sample are consistent with contributions from photoionization and shocks.We conclude that these optical line flux ratios could be efficient for separating velocity-offset AGNs into subgroups of offset AGNs – which are important for studies of AGN fueling in galaxy mergers – and central AGNs with shocks – where the outflows are biased towards the most energetic outflows that are the strongest drivers of feedback. § INTRODUCTIONGalaxies and their supermassive black holes are linked in their evolution, resulting in surprisingly tight observational correlations between parameters such as supermassive black hole mass, stellar velocity dispersion, and host galaxy mass ( for a review).Active galactic nuclei (AGNs) have emerged as key players in this coevolution, by the primary mechanisms of AGN fueling and AGN feedback.Supermassive black holes build up mass by accreting gas during AGN fueling, while AGN outflows are a crucial regulator of star formation that controls the mass growth of the galaxies (e.g., ).In recent years, double-peaked narrow emission lines in AGN host galaxies have been studied as a population (e.g., ), and have been shown to be signatures of both AGN fueling and AGN outflows.Some of these double-peaked emission lines are produced by dual AGNs, which are a pair of AGNs being fueled during a galaxy merger <cit.>, and the majority of double-peaked emission lines are produced by AGN outflows (e.g., ).Analogous to the AGNs with double-peaked narrow emission lines, there is also a population of galaxy spectra with single-peaked narrow AGN emission lines that exhibit a statistically significant line-of-sight velocity offset relative to the velocity of the host galaxy's stars; 2% of AGNs exhibit these velocity offsets <cit.>.These objects have been much less well studied than the AGNs with double-peaked narrow emission lines, and numerical simulations of galaxy mergers show that velocity-offset emission lines can be produced by offset AGNs, which are off-nuclear AGNs in ongoing galaxy mergers (e.g., ).Inflows or outflows of gas could also produce velocity-offset AGN emission lines (e.g., ).Here, we investigate the origins of the velocity-offset narrow emission lines observed in the Sloan Digital Sky Survey (SDSS) spectra of seven AGNs at z<0.12.We observe each galaxy with 
the Chandra X-ray Observatory ACIS (Chandra/ACIS), to pinpoint the location of the AGN, and the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3), to obtain high spatial resolution maps of the stellar continuum and the ionized gas.Our goal is to determine the nature of each galaxy and whether its velocity-offset emission lines are tracers of AGN fueling (via inflows or offset AGNs) or AGN feedback (via outflows). This paper is organized as follows: In Section 2 we describe the sample selection and characteristics.In Section 3 we describe the observations of the sample (SDSS spectra, Keck/OSIRIS integral-field spectroscopy for three of the seven galaxies, Chandra observations, and HST/WFC3 multiband imaging), the astrometry, and our analyses of the data.Section 4 presents our results, including the nature of each velocity-offset AGN.Finally, our conclusions are summarized in Section 5.We assume a Hubble constant H_0 =70 km s^-1 Mpc^-1, Ω_m=0.3, and Ω_Λ=0.7 throughout, and all distances are given in physical (not comoving) units.§ THE SAMPLE We begin with a parent sample of 18,314 Type 2 AGNs at z<0.21 in SDSS, which were identified as AGNs via their optical emission line ratios <cit.> and the requirement that the fits to the absorption and emission line systems in the SDSS spectra are robust (by examining the signal, residual noise, and statistical noise; ).The line-of-sight velocity offsets of the emission lines relative to the stellar absorption lines were then measured.From the parent sample of 18,314 Type 2 AGNs, the velocity-offset AGNs were the systems that fulfilled the following four criteria: 1) the velocity offsets of the forbidden emission lines and the Balmer emission lines are the same to within 1σ; 2) the velocity offsets of the emission lines are greater than 3σ in significance; 3) the emission line profiles are symmetric; 4) the systems do not have double-peaked emission lines.The 351 AGNs that meet these criteria are the velocity-offset AGNs <cit.>.From these 351 velocity-offset AGNs, we select seven systems with low redshifts (z<0.12) and high estimated 2-10 keV fluxes (>5 × 10^-14 erg cm^-2 s^-1).We estimate the 2-10 keV fluxes from the fluxes of the AGNs (which are >1.3 × 10^-14 erg cm^-2 s^-1 for this sample of seven systems) and the established Type 2 AGN to X-ray scaling relation <cit.>.The low redshifts maximize the physical spatial resolution that we can achieve with Chandra and HST, while the high 2-10 keV fluxes minimize the observing time necessary for X-ray detections.The seven systems are listed in Table <ref>.§ OBSERVATIONS AND ANALYSIS§.§ Optical SDSS ObservationsFor each of the seven velocity-offset AGNs, the host galaxy redshift (based on the stellar absorption features), the line-of-sight velocity offset of the emission lines, and the luminosity were determined from the SDSS spectrum <cit.>.Three of the AGNs have emission lines with redshifted velocity offsets, and four have emission lines with blueshifted velocity offsets.The absolute values of the velocity offsets range from 50 to 113 km s^-1 (Table <ref>).llll0pt 4 Measurements from SDSS Observations SDSS Designation z Δ v L_(km s^-1) (10^40 erg s^-1) SDSS J013258.92-102707.0 0.03222 ± 0.00002 56 ± 10 5.5 ± 0.6SDSS J083902.97+470756.3 0.05236 ± 0.00006 -50 ± 10 17.7 ± 1.2SDSS J105553.64+152027.4 0.09201 ± 0.00002 -113 ± 10 48.1 ± 5.7SDSS J111729.22+614015.2 0.11193 ± 0.00001 85 ± 12 44.5 ± 7.2 SDSS J134640.79+522836.6 0.02918 ± 0.00001 -52 ± 10 7.5 ± 0.8 SDSS J165430.72+194615.5 0.05367 ± 0.00001 66 
± 11 20.7 ± 2.8SDSS J232328.01+140530.2 0.04142 ± 0.00007 -53 ± 10 10.2 ± 1.3Column 2: host galaxy redshift, based on stellar absorption features.Column 3: line-of-sight velocity offset of emission lines relative to host galaxy systemic.Column 4: observed luminosity. §.§ Keck/OSIRIS Near-infrared IFU ObservationsThree of the velocity-offset AGNs were observed with Keck Laser Guide Star Adaptive Optics with OH-Suppressing Infra-Red Imaging Spectrograph (OSIRIS) integral-field spectroscopy <cit.>.In each galaxy (SDSS J1055+1520, SDSS J1117+6140, and SDSS J1346+5228), the peak of the line emission (Paα, Paα, and in each galaxy, respectively) was spatially offset from the galaxy center by 01 (0.2 kpc), 02 (0.5 kpc), and 03 (0.2 kpc), respectively.Based on the kinematics of the gas in the OSIRIS observations, <cit.> found that SDSS J1055+1520 and SDSS J1346+5228 host AGN outflows while SDSS J1117+6140 has gas inflow along a bar.They concluded that the spatially-offset peaks in line emission are the result of the outflows or inflows driving shocks into off-nuclear gas. §.§ Chandra/ACIS X-ray ObservationsThe seven velocity-offset AGNs were observed with Chandra/ACIS for the program GO4-15113X (PI: Comerford).Our exposure times were derived from the observed flux for each system (Table <ref>) and the scaling relation between flux and hard X-ray (2-10 keV) flux for Type 2 AGNs, which has a scatter of 1.06 dex <cit.>.We selected exposure times that would ensure a firm detection of at least 10 counts for each AGN, even in the case of the actual X-ray flux falling in the low end of the 1.06 dex scatter.The galaxies were observed with exposure times of 10 ks to 20 ks (Table <ref>).lllllll0pt 7 Summary of Chandra and HST Observations SDSS Name Chandra/ACISChandra/ACISHST/WFC3HST/WFC3 HST/WFC3HST/WFC3 exp. time (s) obs. date (UT) F160W F606W F438W obs. date (UT) exp. time (s) exp. time (s) exp. 
time (s) J0132-1027 14871 2014-08-23 147 900 1047 2014-06-24 J0839+4707 9927 2014-09-03 147 945 1050 2014-09-06 J1055+1520 14869 2015-02-04 147 900 957 2014-10-25 J1117+6140 19773 2015-02-03 147 1062 1065 2014-07-03J1346+5228 9937 2014-08-29 147 996 1050 2015-02-05 J1654+1946 9937 2014-07-23 147 900 957 2014-07-27 J2323+1405 14868 2014-08-31 147 900 954 2014-06-08Column 2: exposure time for the Chandra/ACIS observation.Column 3: UT date of the Chandra/ACIS observation.Columns 4 – 6: exposure times for the HST/WFC3 F160W, F606W, and F438W observations.Column 7: UT date of the HST/WFC3 observations.The galaxies were observed with the telescope aimpoint on the ACIS S3 chip in “timed exposure” mode and telemetered to the ground in “faint” mode.We reduced the data with the latest Chandra software (CIAO 4.6.1) in combination with the most recent set of calibration files (CALDB 4.6.2).For each galaxy, we usedto make a sky image of the field in the rest-frame soft (0.5-2 keV), hard (2-10 keV) and total (0.5-10 keV) energy ranges.Using the modeling facilities in , we simultaneously modeled the source as a two-dimensional Lorenztian function (: f(r)=A(1+[r/r_0]^2)-α) and the background as a fixed count rate estimated using a source-free adjacent circular region of 30^'' radius.We used the SDSS galaxy coordinates as the initial position of thecomponent, and then we allowed the model to fit a region of 3 times the PSF size (estimated with ) at that location.We determined the best-fit model parameters with 's implementation of the `Simplex' minimization algorithm <cit.>, by minimizing the Cash statistic.We also attempted a two-componentmodel to test for additional sources, but all secondary components were detected with <1σ significance.Therefore, none of the systems require a secondary component, and Table <ref> and Figure <ref> show the best-fit positions of the X-ray source in each galaxy.Table <ref> also gives the spatial separations between each X-ray source and the host galaxy's stellar nucleus.The errors on these separations are dominated by the astrometric uncertainties in aligning the Chandra and HST images.These astrometric erros are calculated in Section <ref>, and the median astrometric error is 05.lllllllll0pt 9 Chandra and HST/F160W Positions of Each Source SDSS Name RA_HST/F160W DEC_HST/F160W Chandra Energy RA_Chandra^aDEC_Chandra^a Δθ (^'')^b Δ x (kpc)^b Sig. 
Range (keV)J0132-1027 01:32:58.927 -10:27:07.05 0.5-2 01:32:58.924 -10:27:06.87 0.18 ± 0.33 0.12 ± 0.21 0.6σ2-10 01:32:58.917 -10:27:07.05 0.14 ± 0.46 0.09 ± 0.29 0.3σ0.5-10 01:32:58.922 -10:27:07.02 0.08 ± 0.43 0.05 ± 0.27 0.2σ J0839+4707 08:39:02.949 +47:07:55.88 0.5-2 08:39:02.944 +47:07:55.95 0.09 ± 0.29 0.09 ± 0.30 0.3σ2-10 08:39:02.961 +47:07:55.84 0.13 ± 0.19 0.13 ± 0.19 0.7σ0.5-10 08:39:02.961 +47:07:55.88 0.12 ± 0.18 0.12 ± 0.18 0.7σ J1055+1520 10:55:53.644 +15:20:27.87 0.5-2 10:55:53.653 +15:20:27.40 0.49 ± 0.84 0.83 ± 1.44 0.6σ2-1010:55:53.682 +15:20:27.16 0.90 ± 0.84 1.54 ± 1.44 1.1σ0.5-1010:55:53.662 +15:20:27.30 0.62 ± 0.84 1.07 ± 1.44 0.7σ J1117+6140 11:17:29.208 +61:40:15.38 0.5-2 11:17:29.193 +61:40:16.06 0.69 ± 0.73 1.41 ± 1.49 0.9σ2-10 11:17:29.287 +61:40:15.63 0.62 ± 0.37 1.26 ± 0.76 1.7σ0.5-10 11:17:29.268 +61:40:15.56 0.46 ± 0.36 0.94 ± 0.74 1.3σ J1346+5228 13:46:40.812 +52:28:36.22 0.5-2 13:46:40.816 +52:28:36.15 0.08 ± 0.36 0.05 ± 0.21 0.2σ2-10 13:46:40.821 +52:28:35.76 0.48 ± 0.35 0.28 ± 0.20 1.4σ0.5-10 13:46:40.816 +52:28:35.76 0.47 ± 0.35 0.27 ± 0.21 1.3σ J1654+1946 16:54:30.724 +19:46:15.56 0.5-2 16:54:30.734 +19:46:15.45 0.17 ± 0.48 0.18 ± 0.50 0.4σ2-10 16:54:30.806 +19:46:15.78 1.17 ± 0.44 1.22 ± 0.46 2.6σ0.5-10 16:54:30.732 +19:46:15.42 0.18 ± 0.44 0.18 ± 0.46 0.4σ J2323+1405 23:23:28.010 +14:05:30.08 0.5-2 23:23:27.996 +14:05:30.12 0.21 ± 0.26 0.17 ± 0.22 0.8σ2-10 23:23:28.008 +14:05:30.12 0.05 ± 0.22 0.04 ± 0.18 0.2σ0.5-10 23:23:28.003 +14:05:30.12 0.11 ± 0.21 0.09 ± 0.17 0.5σColumns 2 and 3: coordinates of the host galaxy's stellar nucleus, measured from HST/WFC3/F160W observations. Column 4: rest-frame energy range of Chandra observations.Columns 5 and 6: coordinates of the X-ray AGN source, measured from Chandra/ACIS observations in the energy range given in Column 4. Columns 7 and 8: angular and physical separations between the positions of the host galaxy's stellar nucleus and the X-ray AGN source, where the error includes uncertainties in the positions of the HST and Chandra sources as well as the astrometric uncertainty.Column 9: significance of the separation between the host galaxy's stellar nucleus and the X-ray AGN source. aThe astrometric shifts described in Section <ref> have been applied to the Chandra source positions. bThe errors are dominated by the astrometric uncertainties, which range from 02 to 08.Then, we used the Bayesian Estimation of Hardness Ratios () code <cit.> to measure the rest-frame soft, hard, and total counts in each X-ray source.We usedto determine the number of observed soft and hard counts from both the source region and a background region, and thenused a Bayesian approach to estimate the expected values and uncertainties of the rest-frame soft counts, hard counts, total counts, and hardness ratio.Table <ref> shows these values, and we estimated errors on the counts assuming Poisson noise. 
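For illustration, the kind of calculation performed by BEHR can be sketched with a simple Monte Carlo over the Poisson posteriors of the source and background rates. The routine below is ours, a simplified stand-in rather than the BEHR algorithm itself, and the example counts are invented, not taken from the tables of this paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def hardness_ratio(soft_src, hard_src, soft_bkg, hard_bkg, area_ratio, n=100_000):
    """Monte Carlo posterior for HR = (H - S)/(H + S) from Poisson counts.

    soft_src/hard_src: counts in the source aperture (source + background);
    soft_bkg/hard_bkg: counts in the background region;
    area_ratio: background-to-source area (or exposure) ratio.
    Gamma draws are the conjugate posteriors for Poisson rates (Jeffreys prior);
    this only mimics, in a simplified way, the Bayesian treatment of BEHR.
    """
    S = rng.gamma(soft_src + 0.5, 1.0, n) - rng.gamma(soft_bkg + 0.5, 1.0, n) / area_ratio
    H = rng.gamma(hard_src + 0.5, 1.0, n) - rng.gamma(hard_bkg + 0.5, 1.0, n) / area_ratio
    keep = (S > 0) & (H > 0)                      # discard unphysical negative net rates
    hr = (H[keep] - S[keep]) / (H[keep] + S[keep])
    lo, med, hi = np.percentile(hr, [16, 50, 84])
    return med, med - lo, hi - med

# Illustrative numbers only:
print(hardness_ratio(soft_src=18, hard_src=9, soft_bkg=40, hard_bkg=30, area_ratio=100.0))
```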
llllllll 0pt 8 X-ray Counts and Spectral Fits SDSS Name Soft CountsHard Counts Total CountsHardness n_H,exgalΓ Reduced (0.5-2 keV) (2-10 keV) (0.5-10 keV) Ratio (10^22 cm^-2) C-stat J0132-1027 14.9^+3.1_-4.2 8.2^+2.3_-3.4 23.1^+4.2_-5.2 -0.30^+0.19_-0.22 <0.02 1.73^+0.41_-0.38 0.24J0839+4707 8.7^+2.1_-3.4 66.1^+7.4_-8.6 74.8^+7.9_-9.1 0.77^+0.09_-0.06 8.18^+2.21_-0.29 1.70 (fixed)^a0.59J1055+1520 15.8^+3.2_-4.5 6.8^+2.0_-3.3 22.7^+4.1_-5.3 -0.40^+0.19_-0.22 <0.03 1.71^+0.39_-0.47 0.22J1117+6140 7.6^+2.0_-3.3 3.1^+1.1_-2.5 10.7^+2.6_-4.0 -0.43^+0.25_-0.35 <0.10 1.87^+0.78_-0.76 0.12J1346+5228 10.8^+2.4_-3.9 7.2^+1.9_-3.3 18.0^+3.8_-4.7 -0.21^+0.23_-0.25 <0.12 1.35^+0.54_-0.85 0.19J1654+194623.9^+4.2_-5.3 4.2^+1.4_-2.7 28.1^+4.4_-6.0 -0.71^+0.11_-0.17 <0.02 1.70 (fixed)^a 0.20J2323+1405 17.4^+3.5_-4.7 32.5^+5.2_-6.2 49.9^+6.3_-7.6 0.30^+0.15_-0.13 0.41^+0.18_-0.16 1.70 (fixed)^a 0.52Column 2: soft X-ray (restframe 0.5-2 keV) counts (S).Column 3: hard X-ray (restframe 2-10 keV) counts (H).Column 4: total X-ray (restframe 0.5-10 keV) counts.Column 5: hardness ratio HR = (H-S)/(H+S).Column 6: extragalactic column density. Column 7: best-fit spectral index. Column 8: reduced Cash statistic of the fit. aThe best-fit spectrum had a spectral index of Γ <1 or Γ>3, so we redid the fit by freezing the spectral index to Γ=1.70.To model the energy spectra of the extracted regions over the observed energy range 2-8 keV, we used . We fit each unbinned spectrum with a redshifted power law, F∼ E^-Γ (which represents the intrinsic AGN X-ray emission at the SDSS spectroscopic redshift z).This spectrum is attenuated by passing through two absorbing column densities of neutral Hydrogen.One of these is fixed to the Galactic value, n_H,Gal, and the other is assumed to be intrinsic to the source, n_H,exgal, at the redshift z.We determined n_H,Gal using an all-sky interpolation of the in the Galaxy <cit.>.For our first fit to each spectrum, we allowed Γ and n_H,exgal to vary freely.If the best-fit value of Γ was not within the typical range of observed power-law indices, i.e. 1≤Γ≤ 3 <cit.>, then we fixed Γ at a value of 1.7, which is a typical value for the continuum of Seyfert galaxies, and ran the fit again.To determine the best-fit model parameters for each spectrum, we used 's implementation of the Levenberg-Marquardt optimization method <cit.> to minimize the Cash statistic.Table <ref> shows the results of these spectral fits.All fluxes are k-corrected, and we calculated the observed flux values from the model sum (including the absorbing components) and the intrinsic flux values from the unabsorbed power law component.Finally, we used the redshift to determine the distance to each system and convert the X-ray fluxes to X-ray luminosities (Table <ref>). 
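The final step, converting an observed-frame flux into a rest-frame band luminosity, amounts to the standard power-law k-correction. The sketch below uses the cosmology adopted in Section 1 and applies to the unabsorbed power-law component; the flux, redshift, and photon index in the example are illustrative, not values from the tables.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in Section 1

def xray_luminosity(flux_cgs, z, gamma=1.7):
    """Rest-frame band luminosity from an observed-frame flux in the same band.

    Assumes an unabsorbed power law with photon index gamma, for which the
    k-correction is (1+z)^(gamma-2); flux_cgs is in erg cm^-2 s^-1.
    """
    d_L = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * d_L**2 * flux_cgs * (1.0 + z) ** (gamma - 2.0)

# Illustrative: a 2-10 keV flux of 5e-14 erg cm^-2 s^-1 at z = 0.05
print(f"{xray_luminosity(5e-14, 0.05):.2e} erg s^-1")   # ~3e41 erg s^-1
```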
llll 0pt 4 X-ray Luminosities SDSS Name L_X, 0.5-2 keV abs (unabs) L_X, 2-10 keV abs (unabs)L_X, 0.5-10 keV abs (unabs)(10^40 erg s^-1) (10^40 erg s^-1) (10^40 erg s^-1) J0132-1027 1.2^+0.4_-0.3 (1.3^+0.4_-0.4) 2.3^+2.1_-1.2 (2.3^+2.1_-1.2) 3.3^+2.5_-1.2 (3.5^+2.6_-1.3) J0839+4707 0.2^+0.5_-0.1 (89.7^+29.6_-29.0) 165.0^+44.7_-37.6 (245.0^+61.2_-59.8) 170.0^+39.6_-40.1 (338.0^+83.0_-84.3) J1055+1520 9.7^+3.2_-3.5 (11.6^+3.3_-3.8) 18.4^+19.2_-8.8 (18.5^+19.2_-8.8) 30.1^+22.5_-12.7 (32.3^+21.1_-13.3) J1117+6140 3.9^+2.4_-1.9 (6.4^+2.5_-2.9) 7.7^+14.6_-5.1 (7.8^+14.6_-5.1) 12.3^+15.1_-7.1 (14.9^+14.0_-8.2) J1346+5228 0.8^+0.4_-0.3 (1.0^+0.4_-0.4) 2.7^+3.1_-1.4 (2.7^+3.1_-1.5) 3.7^+3.5_-1.7 (3.8^+3.6_-1.7) J1654+1946 5.5^+1.2_-1.2 (6.6^+1.2_-1.4) 11.9^+2.1_-2.6 (12.0^+2.1_-2.7) 17.2^+3.5_-3.2 (18.2^+3.6_-3.5) J2323+1405 2.9^+1.2_-0.8 (7.0^+1.3_-1.4) 12.3^+2.4_-2.6 (12.7^+2.4_-2.6) 15.4^+3.3_-3.0 (19.8^+3.9_-3.8)Column 2: absorbed (and unabsorbed) soft X-ray 0.5-2 keV luminosity. Column 3: absorbed (and unabsorbed) hard X-ray 2-10 keV luminosity. Column 4: absorbed (and unabsorbed) total X-ray 0.5-10 keV luminosity. §.§ HST/WFC3 F438W, F606W, and F160W ObservationsThe seven velocity-offset AGNs were also observed with HST/WFC3 (GO 13513, PI: Comerford), and the observations covered three bands: UVIS/F438W (B band), UVIS/F606W (V band), and IR/F160W (H band).The exposure times are summarized in Table <ref>.Each band revealed different properties of the galaxies.The F438W observations covered Hδ, Hγ, and for the 0.02 < z < 0.06 galaxies and and Hδ for the 0.09 < z < 0.12 galaxies.The F606W observations covered , ; ; ; , and for the 0.02 < z < 0.06 galaxies; , ; and for the z=0.09 galaxy; and Hγ, , , ; and for the z=0.11 galaxy.The F160W observations primarily traced the stellar continuum, although they may also have included 1.6436 μm emission for the 0.02 < z < 0.04 galaxies and Paβ emission for the 0.09 < z < 0.12 galaxies.To locate the stellar centroid of each galaxy, we fit a Sérsic profile (plus a fixed, uniform sky component) to each galaxy's F160W image using GALFIT V3.0 <cit.>.We ran each fit on a square region of projected physical size 40 kpc on each side, with the angular size scale calculated from z and assuming the cosmology stated in Section 1. 
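For reference, the radial form of the Sérsic profile fitted here can be written in a few lines; GALFIT itself fits the two-dimensional generalisation with an axis ratio, position angle, and sky component. The parameter values below are illustrative, and the b_n relation is the usual approximation rather than the exact solution.

```python
import numpy as np

def sersic(R, I_e, R_e, n):
    """Sersic surface-brightness profile I(R) = I_e * exp(-b_n [(R/R_e)^(1/n) - 1]).

    b_n is set by the common approximation b_n ~ 1.9992 n - 0.3271 (valid for
    n >~ 0.4), which makes R_e the radius enclosing half of the total light.
    """
    b_n = 1.9992 * n - 0.3271
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

R = np.linspace(0.1, 10.0, 200)               # radius in kpc (illustrative)
bulge = sersic(R, I_e=100.0, R_e=2.0, n=4.0)  # de Vaucouleurs-like bulge
disk  = sersic(R, I_e=30.0,  R_e=5.0, n=1.0)  # exponential disk
```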
The errors returned by GALFIT are purely statistical in that they are computed directly from the variance of the input images.We note that in reality, the true radial profiles may deviate from the parametric model components used in GALFIT, particularly at large radii.We previously examined this in <cit.> by creating radial profiles of the Sérsic fits to merger-remnant galaxies, where we found that, even with significant residuals at large radii, the Sérsic component peaks are excellent tracers of the photometric peaks.In our fitting procedure, we also attempted a two-Sérsic component fit (over the same fitting region) to test for the presence of secondary nuclei and/or close interacting neighbors.In these cases, we adopted the two-component model if the secondary component is detected at >3σ significance above the background.We found one system, SDSS J0839+4707, with a nearby neighbor galaxy.GALFIT returned the positions of the sources and their integrated magnitudes, which we used to determine the spatial separation on the sky between the two galaxies and their merger mass ratio.We approximated the merger mass ratio as the luminosity ratio of the two stellar bulges.We also measured the centroid of emission for each galaxy, using Source Extractor <cit.> on the F606W images.According to the SDSS spectra, the emission line is the dominant line in the F606W image for each galaxy, within the central 3^''.Therefore, the centroid of F606W emission within the central 3^'' is a proxy for the centroid of emission.We ran Source Extractor with a detection threshold of 5σ above the background, and the errors on the positions are statistical.The positions of the emission centroids, as well as their separations from the stellar centroids, are shown in Table <ref> and Figure <ref>.We determined the spatial separation errors by combining the errors on the GALFIT positions in the F160W data, the Source Extractor positions in the F606W data, and the relative astrometric uncertainties in the F160W (10 mas) and F606W observations (4 mas; ).The relative astrometric uncertainties dominate the errors, so that the spatial separation errors are all 001.We found that all of the spatial separations between the emission centroids and the stellar centroids are greater than 3σ in significance.lllll0pt 5 HST/F606W Positions of Each Source SDSS Name RA_HST/F606W DEC_HST/F606W Δθ (^'')^a Δ x (kpc)^a J0132-1027 01:32:58.922 -10:27:07.01 0.078 ± 0.011 0.050 ± 0.007 J0839+4707 08:39:02.937 +47:07:56.02 0.193 ± 0.011 0.197 ± 0.011 J1055+1520 10:55:53.638 +15:20:27.96 0.124 ± 0.012 0.212 ± 0.020 J1117+6140 11:17:29.218 +61:40:15.31 0.179 ± 0.011 0.365 ± 0.023 J1346+5228 13:46:40.802 +52:28:36.23 0.252 ± 0.011 0.147 ± 0.006 J1654+1946 16:54:30.734 +19:46:15.48 0.152 ± 0.011 0.159 ± 0.011 J2323+1405 23:23:28.004 +14:05:30.03 0.111 ± 0.011 0.091 ± 0.009 Columns 2 and 3: coordinates of the peak of the emission, measured from HST/WFC3/F606W observations.Columns 4 and 5: angular and physical separations between the positions of the peak of the emission and the host galaxy's stellar nucleus.All separations are >3σ in significance. 
aThe errors are dominated by the astrometric uncertainties, which are 001.Finally, we measured the spatial separation between the X-ray AGN source and the center of the stellar bulge (Table <ref>).The error on each spatial separation incorporate the errors from themodel fit to the Chandra data (Section <ref>), the GALFIT fit to the HST/F160W data, and the astrometric uncertainty (Section <ref>).The error budget is dominated by the uncertainty in aligning the Chandra and HST images, where the median astrometric uncertainty is 05.Due in part to these large astrometric uncertainties, all of the spatial separations are less than 3σ in significance. §.§ AstrometryTo determine if any Chandra sources are significantly spatially offset from the stellar bulges seen in the HST/F160W data, we registered each pair of HST/F160W and Chandra images and estimated their relative astrometric uncertainties.Due to the small number of Chandra/ACIS sources and the relatively small HST/F160W field of view, we registered each image separately to SDSS (u, g, r, i, and z) and the 2MASS point source catalog <cit.>.Then, we combined the two transformations to register the Chandra and HST images.We usedwith a threshold of =10^-8 to detect sources in Chandra, and Source Extractor with a threshold of 3σ to detect sources in SDSS, 2MASS, and HST.Then, we matched sources in each pair of images using thetask in IRAF.Next, we used thetask in IRAF to calculate X and Y linear transformations for each matched pair (X_shift,j, Y_shift,j).We took the final linear transformations in X and Y to be the error-weighted averages, X_shift=∑_j=1^n X_shift,j× w_j,X and Y_shift=∑_j=1^n Y_shift,j× w_j,Y, where n is the number of sources matched between two images and w is the error weighting.For each dimension, X and Y, we combined in quadrature the errors on the Chandra and SDSS/2MASS source positions in each band.We repeated this procedure to determine the uncertainty of the relative astrometry for the HST and SDSS/2MASS images. Then, we added the relative astrometric errors between Chandra and SDSS/2MASS and between HST and SDSS/2MASS in quadrature to determine the relative astrometric errors between the Chandra and HST images.The final astrometric errors (Δ X, Δ Y) are then the error-weighted averages of these bands, shown in Table <ref>.These uncertainties range from 02 to 08, and they dominate the errors when we measure the spatial separations between sources in Chandra and HST.lllll 0pt 5 Astrometry Measurements SDSS Name n_CSn_HSΔ X ('') Δ Y ('')J0132-1027 0,1,1,0,0,1 0,0,1,0,0,0 0.4656 0.2622 J0839+4707 2,2,2,2,1,1 0,2,2,2,2,1 0.2612 0.2607 J1055+1520 0,0,0,0,0,0 0,0,0,1,0,0 0.8382 0.8382 J1117+6140 0,1,1,1,0,0 0,1,1,1,0,0 0.5299 0.7136 J1346+5228 1,2,1,1,1,0 0,1,1,1,1,0 0.2837 0.3517 J1654+1946 0,0,1,1,0,0 1,2,3,3,2,1 0.4721 0.4375 J2323+1405 1,1,2,1,1,1 0,0,1,0,0,0 0.2035 0.2388 Column 2: number of sources matched between Chandra and SDSS u, g, r, i, z and 2MASS images. Column 3: number of sources matched between HST/F160W and SDSS u, g, r, i, z and 2MASS images. 
Columns 4 and 5: astrometric accuracy measurements based on matching these sources, in native X and Y coordinates of the HST/F160W image.§ RESULTS§.§ The Galaxies Host Central AGNs, Where Shocks Produce Off-nuclear Peaks in EmissionWe use the Chandra observations to pinpoint the location of the AGN in each galaxy, and we find that each AGN's position is consistent with the host galaxy center to within 3σ (Table <ref>).Some of the AGNs may have small, but real, spatial offsets from the galaxy center, but the HST/F160W images do not show evidence of secondary stellar cores that would accompany these offset AGNs.This leads us to conclude that each galaxy in our sample most likely hosts a central AGN, and not an offset AGN.The emission line maps for each galaxy are probed by the HST/F606W observations, which are dominated by .We find that the emission line centroids are spatially offset from the host galaxy centers by 0.05 to 0.4 kpc, and that all of the spatial separations are greater than 3σ in significance (Table <ref>).For the three galaxies that were also observed with Keck/OSIRIS, in all three galaxies the spatial offsets of the emission in the OSIRIS data are consistent with those measured in the F606W data.Such spatially-offset peaks in emission could be produced by photoionization of an off-nuclear cloud of gas.Outflows and inflows can drive gas into off-nuclear dense regions, but this gas need not necessarily be excited by shocks(e.g., ).Spatially-offset peaks in emission can also be a signature of shocks.Interacting gas clouds shock the gas, enhancing the ionized gas emission and producing an off-nuclear peak of emission within the narrow line region (e.g., ).To search for further evidence of shocks, we examine the optical line flux ratios /, /, and / measured from the SDSS spectrum of each galaxy.Shocks driven into the surrounding gas clouds compress the gas, increasing its density and temperature.The emission line indicates a very high kinetic temperature, which is produced by shock wave excitation and is inconsistent with photoionized low-density clouds.Consequently, the / line ratio is temperature sensitive and a good indicator of shock activity.Shock heating can also be probed by the / line flux ratio (e.g., ).The emission line is another indicator of shocks (e.g., ), and / is an ionization level-sensitive line flux ratio.We compare the / vs. / line flux ratios, as well as the / vs. / line flux ratios, to models of pure AGN photoionization and combined AGN photoionization and shocks <cit.>. The pure photoionization models are computed with CLOUDY <cit.> and use a spectral index α=-1 of the ionizing continuum and an ionization parameter ranging from -4 ≤log U ≤ -1.The hydrogen density is 100 cm^-3, which is typical for extended emission line regions <cit.>, and the metallicity is solar.The shock models are computed with MAPPINGSIII <cit.>, and have a range of shock velocities 100 < v_s (km s^-1) < 1000.We find that none of the velocity-offset AGNs have line flux ratios consistent with pure photoionization, and that instead their spectra are explained by a combination of photoionization and shocks (Figure <ref>).To further explore the role of photoionization and shocks in these galaxies, we compare our data to the radiative shock models of <cit.>. 
They assume solar abundance, a preshock density 1 cm^-3, magnetic parameters ranging from 10^-4 to 10 μG cm^3/2, and shock velocities ranging from 200 to 1000 km s^-1, and they use MAPPINGSIII to model both the shock and its photoionized precursor.For shocks with velocities ≳170 km s^-1, the ionizing front is moving faster than the shock itself, and the ionizing front dissociates and spreads out to form a precursor region in front of the shock.Hence, a shocked region can have both shocked gas and photoionized gas.We find that the line flux ratios of our seven velocity-offset AGNs are consistent with the shock plus precursor models of <cit.>.We conclude that all seven of the galaxies host both shocked gas and photoionized gas.In the three galaxies observed with Keck/OSIRIS, the OSIRIS data show that the velocity-offset emission lines in the SDSS integrated spectra originate from the shocked off-nuclear emission peak in the gas <cit.>. The same is most likely true for the other four galaxies in our sample, and spatially resolved spectra would show it definitively. §.§ Sources of the Shocks in the Galaxies Here we explore the nature of the shocks in each of the seven galaxies individually. §.§.§ Four AGN OutflowsSDSS J0132-1027.This galaxy displays several colinear knots of emission (Figure <ref>), which are often seen in radio jets driving collimated AGN outflows (e.g., ).Indeed, SDSS J0132-1027 is detected in the FIRST radio survey <cit.> with a 20 cm flux density of 1.6 mJy, and higher resolution radio observations would reveal whether it hosts a radio jet.We also note that the southwestern most knot is also detected in the F160W observations.While it is possible that this the stellar bulge of a minor merger, it seems an unlikely coincidence that the minor merger would be colinear with the other knots of emission.Instead, the F160W observations may be tracing 1.6436 μm emission, which is a common indicator of shocks (e.g., ) and could be produced as the jet drives into the interstellar medium. SDSS J1055+1520.The outflow in this galaxy has been modeled as a bicone using the Keck/OSIRIS observations of Paα, and the ratio of the outflow energy to the AGN bolometric luminosity was found to be Ė_out/L_bol = 0.06 ± 0.015 <cit.>.This exceeds the energy threshold for an outflow to drive a powerful two-stage feedback process that removes cold molecular gas from the inner parts of a galaxy and suppresses star formation, as found by theoretical studies (Ė_out/L_bol > 0.005; ).Further, the bicone is oriented with a position angle 138^∘± 6^∘ east of north, which is consistent with the spatial orientation of the X-rays (Figure <ref>).This hints that there may be spatially extended X-ray emission associated with the spatially extended ionized gas, and deeper X-ray observations would be required to confirm this.SDSS J1055+1520 is not detected in FIRST, so its outflow is not radio jet driven.The HST image also shows that the galaxy itself is asymmetric, which suggests that it may be a merger-remnant galaxy (Figure <ref>).SDSS J1346+5228. 
This galaxy's outflow was modeled as a bicone with the Keck/OSIRIS observations, and it is energetic enough (Ė_out/L_bol=0.01 ± 0.002; ) to suppress star formation in the galaxy (as was also the case for SDSS J1055+1520, above).SDSS J1346+5228 is also detected in FIRST with a 20 cm flux density of 1.1 mJy, indicating that there may be a radio jet powering the outflow.SDSS J2323+1405.This galaxy has symmetric emission line gas extending north and south of the galaxy center, out of the plane of the galaxy (Figure <ref>).This morphology is typical of AGN outflows (e.g., ), and we conclude that SDSS J2323+1405 most likely hosts an AGN outflow. §.§.§ Two Inflows of Gas along a BarSDSS J0839+4707.This galaxy has a stellar bar that is visible in Figure <ref>, and the peak of emission is spatially offset along the bar (Figure <ref>).SDSS J0839+4707 is also the only galaxy in our sample that has a close companion.The companion galaxy, SDSS J083902.50+470813.9, is located 18.8 kpc (184) to the northwest and has a redshift of z=0.053454 ± 0.000045 (Figure <ref>).This corresponds to a velocity difference of 311.9 ± 21.4 km s^-1 redshifted away from the primary galaxy.Emission line diagnostics of the companion's SDSS spectrum show that it is a star-forming galaxy <cit.>.Using the ratio of the stellar bulge luminosities as a proxy for the merger mass ratio, the merger ratio is 3.59:1 (SDSS J0839+4707 is the more massive galaxy).There is no morphological evidence that SDSS J0839+4707 and its companion are interacting, though a future interaction may trigger new accretion onto the central AGN. SDSS J1117+6140.The OSIRIS observations of this galaxy reveal two kinematic components: a disturbed rotating disk on large scales and a counterrotating nuclear disk on the small scales of the central kpc <cit.>.The galaxy's stellar bar is apparent in Figure <ref>, and the peak of emission is spatially offset along the bar (Figure <ref>). 
Based on the model of the counterrotating disk <cit.>, the emission peak is located where the nuclear disk and the bar intersect.§.§.§ One Ambiguous System SDSS J1654+1946.The HST observations of this galaxy show no obvious signatures of an outflow, a bar, or a merger.There is a knot of emission northwest of the galaxy center (Figure <ref>), which could be a nuclear star cluster (e.g., ).Since SDSS J1654+1946 is highly inclined (almost edge-on), we hypothesize that there could be a small nuclear bar that is too inclined to clearly see in the HST data.Gas inflowing along this bar could be the cause of the off-nuclear peak in emission, though without evidence of this bar we classify this system as ambiguous.§.§ Distinguishing between Velocity Offsets Produced by Shocks and by Offset AGNs We have determined that the velocity offsets in our seven targets are produced by shocks and not offset AGNs.Now, for comparison, we consider a velocity-offset AGN that has been confirmed as an offset AGN: SDSS J111519.98+542316.65.SDSS J1115+5423 is z=0.07 galaxy that is in the velocity-offset AGN catalog <cit.> from which we selected the seven targets in this paper, and it is the only galaxy in that catalog that has been shown to be an offset AGN so far.The emission lines in SDSS J1115+5423 are offset -68.5 ± 11.9 km s^-1 from systemic.By analyzing archival Chandra observations of this galaxy, <cit.> found that it has a hard X-ray source with L_2-10 keV=4 × 10^43 erg s^-1 that is located 0.8 ± 0.1 kpc (064 ± 005) from the host galaxy center.This offset AGN is located within the 3^'' SDSS fiber and presumably is the source of the velocity-offset emission lines in the SDSS spectrum, which could be confirmed witha spatially resolved spectrum of the system.Interestingly, SDSS J1115+5423's / vs. / line flux ratios, as well as its / vs. / line flux ratios, are consistent with models of pure photoionization, in contrast to the seven velocity-offset AGNs studied here (Figure <ref>).In the case of the offset AGN (SDSS J1115+5423), the emission lines are produced by photoionization from an AGN that is off-nuclear from the galaxy center but still within the SDSS fiber; this explains the velocity-offset emission lines observed in the SDSS spectrum.On the other hand, in each of the seven velocity-offset AGNs studied here, the emission lines originate from a central (not offset) AGN.Inflowing or outflowing gas is shocked, producing off-nuclear peaks in emission (still within the SDSS fiber) that result in the velocity-offset emission lines in the SDSS spectrum.Consequently, we suggest that it is possible to separate a sample of velocity-offset AGNs into offset AGNs and central AGNs (which have shocks resulting from inflows or outflows of gas) using the shocks vs. photoionization diagnostic line flux ratios / vs. /, or / vs. 
/.These line flux ratios are measurable with the SDSS spectrum alone; no follow-up observations are required.§ CONCLUSIONS We have presented Chandra and multiband HST observations of seven velocity-offset AGNs.The seven AGNs are at z<0.12 and have SDSS spectra that show emission lines that are offset in line-of-sight velocity from systemic by 50 to 113 km s^-1.To determine the nature of the velocity offset in each galaxy, we use the Chandra observations to determine the location of the AGN and the HST observations to identify the galaxy's stellar centroid and the location of the peak of the ionized gas emission.Our main results are summarized as follows.1.All seven velocity-offset AGNs have central AGNs, yet each galaxy's peak in emission is spatially offset from the stellar centroid.The spatial offsets range from 0.05 to 0.4 kpc, and they are all >3σ in significance.The spatially offset emission is produced by shocks, and the velocity offsets of the emission lines observed in the SDSS spectra originate from the spatially offset, shocked emission. 2.The shocks are produced by gas falling onto the AGN along a bar, or by AGN outflows propelling outward into the interstellar medium.The seven velocity-offset AGNs are classified as follows: four outflows, two inflows of gas along a bar, and one ambiguous case (since this galaxy is nearly edge-on, it may have a bar that is difficult to see).3.All of the velocity-offset AGNs studied here fall in the regions of the / vs. / and / vs. / diagrams that are consistent with a combination of photoionization and shock contributors.However, a comparison velocity-offset AGN (where the velocity offset is caused by an offset AGN in a galaxy merger) is consistent with models of pure photoionization and no shocks.We suggest that these emission lines, measured from the SDSS spectrum alone, may efficiently separate the velocity-offset AGNs produced by offset AGNs (photoionization only) from those produced by central AGNs with shocked gas in inflows or outflows (photoionization plus shocks). 
Additional follow-up observations, including spatially resolved spectroscopy, X-ray observations, and radio observations, of a large sample of velocity-offset AGNs could test the hypothesis that the /, /, and / line flux ratios distinguish the offset AGNs from the central AGNs with shocks.The offset AGNs could then be used for studies of AGN fueling during galaxy mergers (e.g., ), while the central AGNs with outflows may be particularly effective drivers of feedback.Since the outflows selected from velocity-offset AGNs are outflows with shocks, these outflows have already been pre-selected to be interacting with their host galaxies.In fact, we found that the two outflows in our sample that were modeled as bicones are energetic enough to drive cold molecular gas out of the galaxy's inner regions and regulate star formation.Thus, AGN outflows with velocity offsets may be a rich source of examples of feedback.We thank the anonymous referee for comments that have improved the clarity of this paper.Support for this work was provided by NASA through Chandra Award Number GO4-15113X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060.Support for HST program number GO-13513 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.The scientific results reported in this article are based in part on observations made by the Chandra X-ray Observatory, and this research has made use of software provided by the Chandra X-ray Center in the application packages CIAO, ChIPS, and Sherpa.The results reported here are also based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program number GO-13513.Facilities: CXO, HSTapj60 natexlab#1#1[Allen et al.(2015)Allen, Schaefer, Scott, Fogarty, Ho, Medling, Leslie, Bland-Hawthorn, Bryant, Croom, Goodwin, Green, Konstantopoulos, Lawrence, Owers, Richards, & Sharp]AL15.1 Allen, J. T., Schaefer, A. L., Scott, N., Fogarty, L. M. R., Ho, I.-T., Medling, A. M., Leslie, S. K., Bland-Hawthorn, J., Bryant, J. J., Croom, S. M., Goodwin, M., Green, A. W., Konstantopoulos, I. S., Lawrence, J. S., Owers, M. S., Richards, S. N., & Sharp, R. 2015, , 451, 2780[Allen et al.(2008)Allen, Groves, Dopita, Sutherland, & Kewley]AL08.1 Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S., & Kewley, L. J. 2008, , 178, 20[Alonso-Herrero et al.(1997)Alonso-Herrero, Rieke, Rieke, & Ruiz]AL97.1 Alonso-Herrero, A., Rieke, M. J., Rieke, G. H., & Ruiz, M. 1997, , 482, 747[Barrows et al.(2016)Barrows, Comerford, Greene, & Pooley]BA16.1 Barrows, R. S., Comerford, J. M., Greene, J. E., & Pooley, D. 2016, , 829, 37[Barrows et al.(2017)Barrows, Comerford, Greene, & Pooley]BA17.1 —. 2017, , 838, 129[Barrows et al.(2013)Barrows, Sandberg Lacy, Kennefick, Comerford, Kennefick, & Berrier]BA13.1 Barrows, R. S., Sandberg Lacy, C. H., Kennefick, J., Comerford, J. M., Kennefick, D., & Berrier, J. C. 2013, , 769, 95[Barrows et al.(2012)Barrows, Stern, Madsen, Harrison, Assef, Comerford, Cushing, Fassnacht, Gonzalez, Griffith, Hickox, Kirkpatrick, & Lagattuta]BA12.1 Barrows, R. S., Stern, D., Madsen, K., Harrison, F., Assef, R. 
http://arxiv.org/abs/1709.09177v1
{ "authors": [ "Julia M. Comerford", "R. Scott Barrows", "Jenny E. Greene", "David Pooley" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170926180003", "title": "Shocks and Spatially Offset Active Galactic Nuclei Produce Velocity Offsets in Emission Lines" }
Asymptotically flat three-manifolds contain minimal planes

Otis Chodosh (Department of Mathematics, Princeton University, Princeton, NJ 08544; School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540; [email protected]) and Daniel Ketover (Department of Mathematics, Princeton University, Princeton, NJ; [email protected])

Abstract. Let (M,g) be an asymptotically flat 3-manifold containing no closed embedded minimal surfaces. We prove that for every point p∈ M there exists a complete properly embedded minimal plane in M containing p.

§ INTRODUCTION

Given a point p in ℝ^3 there are infinitely many minimal planes passing through p. However, for a general complete metric on ℝ^3 with infinite volume, it is not known if any unbounded minimal planes (or surfaces of any topology) exist. This is the topic of our main result:[Added in proof: Mazet–Rosenberg <cit.> have recently generalized Theorem <ref> to show that under the same hypothesis, there exists a minimal plane through any three points.]

Let (M,g) be an asymptotically flat 3-manifold containing no closed embedded minimal surfaces. For every point p∈ M there exists a complete properly embedded minimal plane in M containing p.

The following notion of asymptotic flatness suffices in Theorem <ref>: M is diffeomorphic[Note that the work of Meeks–Simon–Yau <cit.> shows that a general asymptotically flat 3-manifold with no compact minimal surfaces is automatically diffeomorphic to ℝ^3 (cf. <cit.>).] to ℝ^3 and in the associated coordinates the metric satisfies g = g̅ + b, where |b| + |x| |D̅ b| + |x|^2|D̅^2b| = o(1) as |x| →∞ (where g̅ is the Euclidean metric and D̅ the Euclidean connection). We emphasize that no curvature assumption (e.g., non-negative scalar curvature) is included in the statement of Theorem <ref>.

Our motivation for Theorem <ref> comes from Schoen–Yau's proof of the Positive Mass Theorem <cit.>. A key aspect of their proof is showing that certain stable minimal surfaces cannot exist in an asymptotically flat 3-manifold with positive scalar curvature. This non-existence result has been refined in the works <cit.> (cf. <cit.>) so as to apply to any unbounded embedded stable minimal surface. In particular, these works show that such surfaces cannot exist in asymptotically flat 3-manifolds with positive scalar curvature or non-negative scalar curvature and “Schwarzschild asymptotics.” It is thus natural to wonder whether an asymptotically flat manifold admits any complete unbounded minimal surfaces whatsoever. Theorem <ref> settles this question affirmatively, as long as the manifold does not contain any closed minimal surfaces.

One reason to expect minimal surfaces to exist is the min-max theory of Almgren and Pitts <cit.>, which produces unstable minimal surfaces in general compact three-manifolds (even in those which do not contain any stable or area-minimizing surfaces).
For closed manifolds of positive Ricci curvature, Marques–Neves have shown the existence of infinitely many minimal surfaces <cit.>.Simon–Smith <cit.> used such methods to show that every closed Riemannian three-sphere contains a minimal embedded two-sphere (see also <cit.>).Similarly by sweeping out the manifold with planes, one might expect an asymptotically flat three-manifold to contain a minimal plane.The difficulty is that an asymptotically flat three-manifold has infinite volume, and the slices of such a sweepout would also have infinite areas and thus the “width" of such a family is not a sensible notion.One can instead try to apply variational methods in a fixed (convex) ball B_R(0) to obtain a minimal disk with boundary and then let R→∞.The difficulty in carrying this out is that the sequence of minimal surfaces may run off to infinity as R→∞.Indeed, in a non-flat asymptotically flat manifold (M^3,g) with non-negative scalar curvature, direct minimization is doomed to fail: by the work of the first-named author and Eichmair <cit.>, (M^3,g) cannot contain an unbounded area-minimizing surface. Thus, if one considers a large equatorial circle in B_R(0) and let Σ_R be a minimal disk solving the Plateau problem for this boundary curve, the limit of Σ_R as R→∞ is guaranteed to be the empty set.Similarly, index 1 critical points obtained by min-max methods could potentially disappear in the limit.To emphasize the difficulty in controlling index 1 surfaces obtained by min-max, one may consider a 3-manifold (M^3,g) whose metric is asymptotic to the cone g̅_α = dr^2 + r^2α^2g_^2,for α∈ (0,1) (where g_^2 is the standard round metric on the unit 2-sphere). By <cit.> we know that (M^3,g) cannot contain any unbounded immersed minimal surfaces of finite index. Hence, if one considers a sequence of index 1 surfaces Σ_R in B_R(0) with respect to the metric g, the surfaces must necessarily run off to infinity as R→∞. Interestingly, the method developed in this paper also applies in this setting, showing that if (M^3,g) is asymptotic to g_α and does not contain any closed minimal surfaces, then it contains properly embedded minimal planes through every point p∈ M. These planes have quadratic area growth, but infinite index. We discuss the extension of Theorem <ref> to this setting in Section <ref>. See also Section <ref> below for a discussion of certain results overcoming the difficulty we have just described in the context of geodesics on surfaces. Finally, we note that even in the asymptotic region of (M^3,g) it is not clear that one can perturb a Euclidean minimal surface to a g-minimal surface; an obstruction to a particular such deformation was demonstrated in <cit.>. Moreover, such a perturbative technique has no hope of constructing surfaces through any fixed point p∈ M as we do in Theorem <ref>, since we do not assume that g is close to the Euclidean metric in the compact part of the manifiold.In this paper we overcome these difficulties by relying on degree theoretic techniques, rather than variational methods.Degree theory was introduced in this context by Tomi–Tromba <cit.> and further developed by White <cit.>. Tomi–Tromba first applied it to show that a curve in the boundary of a convex body in ℝ^3 bounds an embedded minimal disk. 
White extended the theory and proved (among other things) that a three-sphere with positive Ricci curvature contains an embedded minimal torus <cit.>. It was recently extended to the free boundary setting to prove that convex bodies contain embedded free boundary annuli <cit.>.

Fix a large convex ball B_R(0) in M. We would like to produce a minimal disk passing through the origin (as then the limit as R→∞ would not be the empty set). By the degree theory of White, it follows that an equatorial circle C in the xy plane in ∂ B_R(0) bounds an odd number of embedded minimal disks. However, assuming no minimal disks pass through the origin, we prove that any minimal disk bounded by C in the southern hemisphere can be “flipped” to another such disk in the northern hemisphere. See Figure <ref>. Thus the number of minimal disks bounded by C is even. See Figure <ref>. This gives a contradiction and from it we obtain the existence of a minimal disk passing through the origin. As the argument is indirect, we obtain no information about the Morse index of the minimal disk obtained. Roughly speaking, the point is that minimal disks in the northern hemisphere pair off bijectively with those in the southern, and there must be some disk in the middle which “flips” to itself in order to have an odd number of disks. The rigorous argument and the precise notion of “flipping” come from the fact that ℝ^3∖{0} has two distinct isotopy classes of embedded two-spheres.

To apply degree theory in this setting and to take a limit as R→∞ we need area and curvature bounds for minimal disks with certain kinds of boundaries, which we also establish. A difficulty here is that we do not have any a priori control on the surfaces, since they are not constructed variationally. Instead, we will use curvature estimates based on the fact that the surfaces are disks. Schoen–Simon <cit.> have proven that minimal disks Σ in ℝ^3 with bounded area have curvature bounds away from ∂Σ. In a general Riemannian manifold, these curvature estimates might not apply (they would require that Σ intersected any sufficiently small ball in a disk). Thus, we rely instead on the curvature estimates of White <cit.>. To apply these estimates, we must show that the surfaces have bounded area and controlled intersection with ∂ B_r(0) for r sufficiently large. In ℝ^3, we have the following isoperimetric inequality for minimal surfaces: if Σ⊂ B_R(0)⊂ℝ^3 has ∂Σ⊂∂ B_R(0), then taking X = r∂_r in the first variation, we find that

2 Area(Σ) = ∫_Σ div_Σ X dμ = ∫_∂Σ g̅(η,X) dμ ≤ R Length(∂Σ).

Such an estimate holds in the asymptotic region of an asymptotically flat manifold as well. However, notice that as R→∞, an estimate of this form will not give local area bounds, since if Length(∂Σ) = O(R), then the estimate only implies Area(Σ) ≤ O(R^2). In ℝ^3, this would be sufficient to prove local area bounds by the monotonicity formula, but here the error terms in the monotonicity formula might be too large for such an argument. Instead, we choose ∂Σ to be very close to an equator in ∂ B_R(0) and use the above computation, along with a continuity argument, to prove that Σ intersects ∂ B_r(0) in a nearly equatorial circle, for all r large. Carrying this out carefully will prove area bounds for Σ outside of a fixed compact set. Finally, to prove area bounds in the fixed compact set, we rely on an isoperimetric inequality of White <cit.>, which requires that M does not contain any minimal surfaces.
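To spell out the elementary computation behind the displayed inequality (a brief expository aside): for the Euclidean position field X = r∂_r one has D̅_v X = v for every vector v, so along any surface with orthonormal tangent frame e_1, e_2,

div_Σ X = g̅(D̅_e_1 X, e_1) + g̅(D̅_e_2 X, e_2) = 2.

Since Σ is minimal, the first variation formula has no mean curvature term, and with η the outward unit conormal along ∂Σ,

2 Area(Σ) = ∫_Σ div_Σ X dμ = ∫_∂Σ g̅(X,η) dμ ≤ ∫_∂Σ |X| dμ = R Length(∂Σ),

using that |X| = r = R on ∂Σ ⊂ ∂ B_R(0).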
The assumption that M contain no closed embedded minimal surfaces seems essential for the argument in its current form, since White has shown <cit.> that in the presence of closed embedded minimal surfaces, it is always possible to find minimal disks bounded by well behaved curves, but with curvature and area blowing up. In fact, the logic of our construction in the proof of Theorem <ref> is somewhat analogous to White's construction of these misbehaving disks.It is natural to wonder whether the minimal planes obtained by Theorem <ref> have index 0 or 1 in general (we show that as long as the metric satisfies a slightly stronger decay condition, the of the minimal plane index is finite in Proposition <ref>, but do not estimate it explicitly). It also seems natural to conjecture that if (M^3,g) is asymptotically flat with ∂ M consisting of closed minimal surfaces then there is an unbounded minimal surface in (M^3,g) with (possibly empty) free boundary on ∂ M. This is supported by the situation in the Schwarzschild manifold defined (for m>0) byg = ( 1 + m/2|x|)^4g̅on M = {|x| ≥ m/2}, where any Euclidean coordinate plane through {0} clearly yields such a surface. It would be interesting to compute the index of these free-boundary annuli in the exact Schwarzschild metric. This should be possible by an ODE analysis. More generally, are these annuli and the horizon the only embedded minimal surfaces in Schwarzschild? The corresponding problem for embedded closed constant mean curvature surfaces was recently solved by Brendle <cit.>: such surfaces must be centered coordinate spheres. While many authors have studied min-max methods in the non-compact setting <cit.>, to our knowledge Theorem <ref> is the first such construction in a manifold of infinite volume. §.§ Analogous results for geodesics on surfaces One dimension lower, i.e., for geodesics on surfaces, Bangert proved <cit.> that every complete two dimensional plane contains a complete geodesic escaping to infinity. Moreover, Bonk and Lang use a flip argument in <cit.> that has a similar flavor to our techniques described above. More recently, Carlotto and De Lellis proved <cit.> that an asymptotically conical surface with non-negative Gaussian curvature contains infinitely many properly embedded geodesics with Morse index at most one, resolving (in the setting of asymptotically conical surfaces) the issue described above about controlling the drifting of min-max critical points as the boundary is sent to infinity. We emphasize that the arguments in the papers <cit.> make heavy use of the two-dimensional setting in various ways. In particular: (i) besides variational methods, geodesics can be constructed by solving an ODE initial value problem, (ii) the Gauss–Bonnet formula can be used in a strong way to control the behavior of geodesics on a surface, and (iii) geodesics have no extrinsic curvature and thus automatically satisfy curvature estimates. None of these three features carry over to the setting (minimal surfaces in three manifolds) we consider here. §.§ On the “flip” argument for Theorem <ref>Let us give a more detailed sketch of the existence part of Theorem <ref>. Let B_R(0) be a large ball centered about the origin in M and let C_R:=∂ B_R(0)∩{z=0} be an equatorial circle in ∂ B_R(0). For t∈ [0,1] denote the equatorial circleC_R^t:=∂ B_R(0)∩{zcos(π t)=x sin (π t)}.The family C_R^t consists of rotating the circle C_R=C_R^0 a full 180^∘ degrees in ∂ B_R(0) back to itself. 
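To make the geometry of this family explicit (a brief expository aside): C_R^t is the intersection of ∂ B_R(0) with the plane {z cos(π t) = x sin(π t)}, whose unit normal can be taken to be

n(t) = (-sin(π t), 0, cos(π t)).

As t runs from 0 to 1 this plane rotates through an angle π about the y-axis, and n(1) = -n(0). Thus C_R^1 coincides with the original equator C_R^0 as a set, but the spherical cap in ∂ B_R(0) that is carried continuously along the rotation, starting from the southern hemisphere at t=0, ends up as the northern hemisphere at t=1. This exchange of the two sides is the mechanism exploited in the flip argument below.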
For each t∈ [0,1] denote by M_R^t the family of embedded minimal disks with boundary equal to C_R^t. Our goal is to find a minimal embedded disk with boundary in ∂ B_R(0) passing through the origin.We can assume toward a contradiction that none of the disks in ∪_t M_R^t pass through the origin.Since large balls in M are mean convex, one expects from work of Tomi–Tromba that there should be an odd number of minimal disks in B_R with boundary C_R.However, assuming no disk in ∪_t M_R^t passes through the origin we can show that the number of minimal disks with boundary in C_R is even.To see this, note that M_R^0 consists of two types of disks depending on which “side" of the disk the origin lies.More precisely, the space of embedded disks in ℝ^3∖{} with boundary C_R has two connected components (both contractible). Let us thus denote the disks in M_R^0 as either “red disks" Red_R or “blue disks" Blue_R, depending on the component in which they are contained. Assume that as t changes, the family of disks M_R^t changes continuously (this can be guaranteed after small perturbation of the curves C_R^t by Smale's transversality theorem). Let D_R^0 be some disk in Blue_R.As t increases, the disk D_R^0 moves with its boundary to a disk D_R^t in M_R^t with boundary C_R^t, and finally at t=1 the disk returns back to a disk in M_R^0.This gives a bijection Φ: M_R^0→ M_R^0. We claim thatΦ(Blue_R)=Red_R.and similarly Φ(Red_R)=Blue_R. The reason for (<ref>) and (<ref>) is that if Φ mapped a point in Blue_R to a point in Blue_R, then the family ∪ _t D_R^t would sweep-out out all of B_R(0) and in particular D_R^t would pass through the origin for some value of t.Thus we would have found a minimal disk with boundary in B_R(0) (although with rotated boundary from C_R), which contradicts our assumption that no such disk exists.But (<ref>) and (<ref>) imply that the cardinality of the set Blue_R is the same as that of Red_R and thus the number of minimal disks in B_R^0 is even.This is a contradiction. Thus for each R we obtain a minimal disk Σ_R in B_R(0) passing through the origin with boundary in ∂ B_R(0) close to an equator.In light of the curvature estimates we prove in this paper, we can take a limit of Σ_R as R→∞ and obtain a smooth embedded minimal plane passing through the origin.This completes the sketch of the “flip” argument used in Theorem <ref>. §.§ Organization of the paperIn Section <ref> we prove the curvature bounds we need to take a limit of disks with boundaries in larger and larger balls.In Section <ref> we introduce the degree theory of Tomi–Tromba as extended by White.In Section <ref> we prove Theorem <ref>. Section <ref> includes a generalization of Theorem <ref> to the setting of asymptotically conical 3-manifolds, as well as a discussion of the Morse index of the surfaces.Acknowledgements: O.C. was partially supported by the Oswald Veblen Fund and NSF grants DMS 1638352 and DMS 1811059. D.K. was partially supported by NSF Postdoctoral Fellowship DMS 1401996. We are grateful to the referee for a careful reading, and in particular for pointing out a mistake in the original version of Proposition <ref>. O.C. would also like to thank Florian Johne for several useful comments on the first version of this article. § AREA AND CURVATURE BOUNDS In this section we prove curvature and area estimates for minimal disks in an asymptotically flat manifold with no closed embedded minimal surfaces. The following estimates are due to Anderson <cit.> and White <cit.>. 
We will use them to control the curvature of our surfaces in a large, but fixed ball. These estimates are somewhat related to those of Schoen–Simon <cit.> but crucially do not require the surfaces to intersect small balls in topological disks.

Let N denote a compact Riemannian 3-manifold with strictly mean convex boundary ∂ N. Suppose that Σ is an embedded minimal disk with ∂Σ⊂∂ N. Then, there is a constant C depending on
* the Riemannian manifold (N,g),
* the area of Σ,
* the C^2,α-norm of ∂Σ (i.e., the C^2,α norm of ∂Σ as a map parametrized by arc length), and
* the “embeddedness” of ∂Σ, i.e., max_x≠y∈∂Σ d_∂Σ(x,y)/d_N(x,y),
so that the second fundamental form A_Σ of Σ satisfies |A_Σ|≤ C.

Thus, to obtain curvature estimates, it will be crucial to obtain area bounds. We recall the following isoperimetric inequality due to White <cit.>. Suppose that N is a compact Riemannian 3-manifold with strictly mean convex boundary ∂ N. Assume that N does not contain any closed embedded minimal surfaces. Suppose that Σ is an embedded minimal surface with ∂Σ⊂∂ N. Then, there is a constant C depending only on (N,g) so that Area_g(Σ) ≤ C Length_g(∂Σ).

Suppose now that (M^3,g) is asymptotically flat and Σ_R are embedded minimal disks in B_R(0) whose boundary ∂Σ_R is converging in C^2,α to an equator as R→∞ (after rescaling the picture to unit size). Choose ε>0 sufficiently small so that any stationary integral 2-varifold in ℝ^3 that is not a (multiplicity one) flat plane has density at infinity at least 1+2ε (this is possible by Allard's theorem; cf. <cit.>). Fix σ_0 sufficiently large so that |∇ r| ≤ 2 and D^2r^2 ≥ g for r≥σ_0 (where r is the usual Euclidean radial coordinate in the chart at infinity). Here, the gradient ∇ and Hessian D^2 are taken with respect to the asymptotically flat metric g. The following quantity will be crucial for our proof of area and curvature estimates. Define σ(R) to be the infimum of σ∈ [σ_0,R] so that for all ρ∈ [σ,R), Σ_R is transverse to ∂ B_ρ,

∫_Σ_R∩∂ B_ρ(0) 1/|∇_Σ_R r| dμ ≤ 2π (1+ε) ρ,

and after rescaling by ρ^-1, the curve Σ_R∩∂ B_ρ(0) is in the ε-neighborhood (in the C^2,α sense[We can use the ‖·‖_2,α^* norm from <cit.> to measure distance here.]) of the set of equatorial circles in ∂ B_ρ(0). Our goal will be to show that σ(R) is uniformly bounded from above.

We have that R^-1σ(R) → 0 as R→∞. Consider the vector field X = r∇ r. Note that D X = (1/2) D^2r^2 = g + o(1) as r→∞. Choose λ_R→ 0 so that λ_R R →∞ and so that Σ_R intersects ∂ B_λ_R R(0) transversely (for example λ_R≈ R^-1/2 will suffice). Consider the vector field X in the first variation formula for Σ_R∖ B_λ_R R(0):

(2+o(1)) Area_g(Σ_R∖ B_λ_R R(0)) = ∫_Σ_R∖ B_λ_R R(0) div_Σ X dμ = ∫_∂Σ_R g(η,X) dμ - ∫_Σ_R∩∂ B_λ_R R(0) g(η,X) dμ ≤ (2π + o(1))R^2,

where the o(1) terms are as R→∞. Here, η is the outwards pointing unit normal to ∂(Σ_R∖ B_λ_R R) (so the second boundary term above was negative, and thus could be discarded). Consider Σ̃_R := R^-1(Σ_R∖ B_λ_R R), along with the associated rescaled metric g̃. Note that Area_g̃(Σ̃_R) ≤ π + o(1). Denote by Ṽ a stationary integral varifold in ℝ^3∖{0} arising as a limit of Σ̃_R, so that Σ̃_R converges to Ṽ in the varifold sense (after passing to a subsequence). It is clear that Ṽ extends (cf. <cit.>) to a stationary integral varifold in B_1(0) with 0 ∈ Ṽ and ‖Ṽ‖(B_1(0)) ≤ π. Thus, the monotonicity formula implies that Ṽ is the varifold associated to Σ̃, a flat disk through the origin with multiplicity one.
Now, by White's version of Allard's interior and boundary regularity theorem <cit.> we see that Σ̃_R converges with multiplicity one in C^2,α on compact subsets of ℝ^3∖{0} to Σ̃. Observe that Σ̃ intersects each ∂ B_r(0) transversely in an equatorial curve and

∫_Σ̃∩∂ B_r(0) 1/|∇_Σ̃ r| dμ̅ = 2π r,

for all r ∈ (0,1]. Thus, for any r ∈ (0,1], we may take R sufficiently large so that R^-1σ(R) ≤ r. This proves the claim.

The quantity σ(R) is uniformly bounded from above as R→∞. The argument is somewhat similar to Lemma <ref>. However, we proceed here by contradiction. To this end, assume that σ(R) →∞ as R→∞. We will rescale Σ_R by σ(R)^-1 to produce a contradiction. By definition of σ(R), we find that

Length_g(Σ_R∩∂ B_2σ(R)(0)) ≤ Cσ(R).

Hence, by considering X=r∇ r in the first variation as in Lemma <ref>, we see that

Area_g(Σ_R∩ (B_2σ(R)(0)∖ B_σ_0(0))) ≤ C σ(R)^2.

Now, consider Σ̃_R := σ(R)^-1(Σ_R∖ B_σ_0). By Lemma <ref>, the boundary components of Σ̃_R are eventually disjoint from any compact subset of ℝ^3∖{0}. By the definition of σ(R) and the co-area formula, we have that for ρ > 1,

Area_g̃(Σ̃_R∩ (B_ρ(0)∖ B_1(0))) ≤ π (1+ε)(ρ^2-1).

Putting this together, Σ̃_R has uniformly bounded area on compact subsets of ℝ^3∖{0}. Thus, we may pass to a subsequence and find a stationary integral varifold Ṽ in ℝ^3 so that Σ̃_R converges to Ṽ away from {0}. Moreover, by (<ref>), Ṽ has quadratic area growth, and density at infinity bounded above by π (1+ε). Thus, a standard argument shows that the density at infinity is π, and Ṽ is the varifold associated to a plane through the origin with multiplicity one. As before, Allard's theorem implies that the convergence of Σ̃_k to Ṽ occurs in C^2,α on compact subsets of ℝ^3∖{0}. Thus, we find that for k large, Σ̃_k is transverse to ∂ B_r(0) for all r≈ 1,

∫_Σ̃_k∩∂ B_r(0) 1/|∇_Σ̃_k r| dμ = 2π (1+o(1)) r

as k→∞, and Σ̃_k∩∂ B_r(0) is converging in C^2,α to an equator in ∂ B_r(0) as k→∞. This contradicts the definition of σ(R) after rescaling.

Now, taking σ_0 larger if necessary, using the isoperimetric inequality in Theorem <ref> combined with the definition of σ(R), we find that Σ_R has uniformly bounded area inside of B_σ_0(0) and uniform quadratic area growth outside of B_σ_0(0). Furthermore, we have that Σ_R∩∂ B_ρ(0) is close in C^2,α to an equatorial circle for any ρ≥σ_0. This allows us to apply the curvature estimates of Theorem <ref> to obtain the following compactness theorem:

Let M denote an asymptotically flat manifold diffeomorphic to ℝ^3 which contains no closed embedded minimal surfaces. Let Σ_R be a sequence of embedded minimal disks in M containing p∈ M with ∂Σ_R⊂∂ B_R(0) and R→∞. Suppose in addition that after rescaling to unit size, ∂Σ_R is converging to an equator in ∂ B_R(0) in the C^2,α topology. Then, a subsequence of Σ_R converges smoothly on compact subsets of M to a complete properly embedded minimal plane Σ_∞ with p∈Σ_∞. Furthermore, Σ_∞ has quadratic area growth and for λ→∞, after passing to a subsequence, λ^-1Σ_∞ converges smoothly with multiplicity one on compact subsets of ℝ^3∖{0} to a plane through the origin.

By Proposition <ref>, the quantity σ(R) is uniformly bounded as R→∞. Thus, we can take σ_0 even larger if necessary so that for all R>σ_0, and any ρ∈ [σ_0,R), we have that Σ_R is transverse to ∂ B_ρ(0),

∫_Σ_R∩∂ B_ρ(0) 1/|∇_Σ_R r| dμ ≤ 2π(1+ε)ρ,

and after rescaling by ρ^-1, the curve Σ_R∩∂ B_ρ(0) is in the C^2,α ε-neighborhood of an equator. Thus, we see that Σ_R∩ B_σ_0(0) has uniformly bounded area, by the isoperimetric inequality Theorem <ref>.
This and the co-area formula (using (<ref>)) show that there is Λ > 0 so that

Area_g(Σ_R∩ B_ρ(0)) ≤ Λ + π(1+ε)(ρ^2 - σ_0^2)

for all ρ ∈ [σ_0,R] (where Λ is independent of R). Moreover, for any ρ∈[σ_0,R], we have seen that Σ_R∩∂ B_ρ(0) is controlled in C^2,α (and uniformly “embedded” in the sense described in Theorem <ref>). Thus, by Theorem <ref> applied to Σ_R∩ B_ρ⊂ B_ρ(0), we have that the curvature of Σ_R is uniformly bounded on compact subsets of ℝ^3. By the uniform quadratic area growth, so is the area, and thus we can pass to a subsequential (smooth) limit to find a properly embedded minimal plane Σ_∞ with p∈Σ_∞. The plane Σ_∞ has quadratic area growth by (<ref>), so it remains to consider the blow-down limits λ^-1Σ_∞. By the quadratic area growth, a subsequence converges in the varifold sense (on compact subsets of ℝ^3∖{0}) to a stationary integral varifold V on ℝ^3 with the property that

‖V‖(B_ρ(0)) ≤ π(1+ε)ρ^2.

Note that here, exactly as in the previous two proofs, we have used the standard extension property of stationary integral varifolds described in e.g. <cit.>. Now, by the choice of ε, Allard's theorem applies to show that V is a multiplicity one plane in ℝ^3. Thus, the convergence happens smoothly with multiplicity one on compact subsets of ℝ^3∖{0}. Finally, we observe that V must be a plane through the origin: since p∈Σ_∞ and Σ_∞ is connected, we can always find a point in Σ_∞∩∂ B_ηλ(0) for any η>0 fixed. Thus, λ^-1Σ_∞∩∂ B_η(0) ≠∅. The monotonicity formula implies that there is a definite amount of area in a small ball around this point. This is easily seen to imply that the blow-down plane must pass through the origin.

§ DEGREE THEORY

In this section we introduce the degree theory of Tomi–Tromba <cit.>, as later extended by White <cit.>, needed for the proof of Theorem <ref>. Let M denote a compact Riemannian three-ball with strictly mean convex boundary ∂ M. Let D denote the flat unit disk in ℝ^2. Let us call two maps f_1,f_2:D→ M equivalent if f_1=f_2∘ u for some diffeomorphism u:D→ D fixing ∂ D pointwise. Let [f_1] denote the equivalence class of f_1. Let

ℳ = {[f] : f∈𝒞^2,α(D,M), f(∂ D)⊂∂ M}.

We have the following theorem due to White <cit.> (generalizing earlier work of Tomi–Tromba in ℝ^3 <cit.>): The space ℳ is a smooth Banach manifold and

Π:ℳ→𝒞^2,α(∂ D, ∂ M), given by Π([f])=f|_∂ D,

is a smooth Fredholm map of index 0.

By Smale's infinite dimensional version <cit.> of Sard's theorem it follows that the singular values of a Fredholm map are of the first category in the Baire sense, so in particular they contain no interior point. Since Π is Fredholm of index 0, for any regular value y of the mapping Π, the set Π^-1(y) is a 0 dimensional manifold, and is locally the union of finitely many points <cit.>. In order to assign a mod 2 degree to the mapping Π we need to restrict to subsets of ℳ on which the mapping Π is proper. Namely, we have the following: Let ℳ' and W be open subsets of ℳ and 𝒞^2,α(∂ D, ∂ M) respectively, such that W is connected and Π:ℳ'→ W is proper. Then for generic γ∈ W, the number of elements of Π^-1(γ)∩ℳ' is constant modulo 2.

Finally, we have the following theorem <cit.>: Suppose that the Riemannian three-ball M has mean convex boundary and contains no closed embedded minimal surfaces. Let ℳ' be the subset of ℳ consisting of embeddings, and let W:=𝒞^2,α(∂ D, ∂ M). Then Π restricted to ℳ' is a proper map. Moreover, the mod 2 degree of Π is equal to one. In particular, a generic γ∈𝒞^2,α(∂ D, ∂ M) bounds an odd number of embedded minimal disks.
We also have the following which allows us to perturb curves in the Banach space 𝒞^2,α(∂ D, ∂ M) to be transverse to Π <cit.> (and thus have nice pre-images under Π):Let Γ be a C^1 mapping Γ:[0,1]→𝒞^2,α(∂ D, ∂ M).Then after arbitrarily small C^1 perturbation of Γ, one obtains a new mapping Γ̃:[0,1]→𝒞^2,α(∂ D, ∂ M) so that Π^-1(Γ̃[0,1])∩ℳ' is a smooth one-dimensional submanifold with boundary consisting of the finite set Π^-1(Γ̃(0))∩ℳ' and Π^-1(Γ̃(1))∩ℳ'.§ PROOF OF MAIN THEOREMIn this section we prove Theorem <ref>.Thus let M be an asymptotically flat three-manifold containing no closed embedded minimal surfaces.Let B_R(0) denote the Euclidean ball of radius R. Fix p∈ M. We will always assume that R is large enough so that p∈ B_R(0).To apply degree theory, we need the following:For R large enough, the ball B_R(0) is convex with respect to g. In particular, for R large enough, any minimal disk with boundary in ∂ B_R(0) is contained entirely in B_R(0).Let C_R:=∂ B_R(0)∩{z=0} be the equatorial circle in ∂ B_R(0) in the xy-plane. Let us consider a one parameter family of curves in ∂ B_R(0).Namely for t∈ [0,2] denote the equatorial circleC_R^t:=∂ B_R(0)∩{zcos(π t)=xsin(π t)}.The family C_R^t consists of rotating the circle C_R=C_R^0 a full 360^0 degrees in ∂ B_R(0) back to itself.In fact we will be interested in only half of this family, namely the part with t∈ [0,1].The path from 0 to 1 reverses the orientation from C_R^0 to C_R^1 and thus is not a closed loop in 𝒞^2,α(∂ D, ∂ M).Since M contains no embedded minimal surfaces and its boundary is mean convex (Lemma <ref>) by Theorem <ref> we can replace the curve {C_R^t}_t∈[0,2] by a new curve {D_R^t}_t∈[0,2] arbitrarily close to {C_R^t}_t∈[0,2] so that:The set ℒ :=Π^-1(∪_t∈[0,1]D_R^t)∩ℳ' is a smooth one dimensional manifold. Moreover, the closed curve D_R^0 in ∂ B_R(0) bounds an odd number of embedded minimal disks.We can arrange that the nearby curves D_R^0 and D_R^1 are in a connected regular neighborhood for Π and thus bound the same number of embedded minimal disks. Finally, we can ensure that both curves D_R^0 and D_R^1 have images arbitrarily close to that of the equator in the xy-plane, C_0.We can find a curve γ arbitrarily close to C^0_R which is a regular value of Π. Then, by concatenating small paths of curves on both ends of {C_R^t}_t∈[0,1], we can obtain a path {D̃_R^t}_t∈[0,1] so that D^0_R = γ, D^1_R = -γ (i.e., γ with the opposite orientation), and so that D_R^t is arbitrarily close to C_R^t. For s∈[0,1] choose a path E_R^s with E_R^0 = γ and so that for s ∈ [1/2,1), E_R^s is close to ∂ B_R(0)∩{z=Rs}and a regular value of Π for a.e. s close to 1. For s close enough to 1, it follows (see page 149 in <cit.>) that E_R^s bounds precisely one embedded minimal disk.Namely, Π^-1(E_R^t) consists of one point.Thus the mod 2 degree of Π on the set Π^-1(∪_s∈[0,1] E_R^s) is odd.Thus we see (cf. Theorem 2.1 in <cit.>) that γ must bound an odd number of disks. Now, since γ is a regular value for Π, any boundary curve which is sufficiently close to γ will bound the same (odd) number of minimal surfaces as γ (and they will be small perturbations of those bounded by γ). Then, by applying Theorem <ref>, we can arrange for a small perturbation of {D̃_R^t}_t∈[0,1] to {D_R^t}_t∈[0,1] which is transverse to Π. 
If this perturbation is sufficiently small, the endpoints will still be in the regular neighborhood of γ, which is what we wanted.

On the other hand, we have: Suppose that no disk in ℒ passes through the fixed point p∈ M. Then the number of disks bounded by γ = D_R^0 is even.

See Figure <ref> for an illustration of the proof of this proposition. Let us consider the smooth one-manifold ℒ. The boundary of ℒ consists of elements of 𝒜:=Π^-1(D_R^0) together with elements of ℬ:=Π^-1(D_R^1). We want to compute the parity of the cardinality of 𝒜 and ℬ. By the classification of 1-manifolds, each connected component of ℒ with non-empty boundary has exactly two boundary points. Some such components of ℒ have both boundary points in either 𝒜 or ℬ. Let us denote these connected components by ℒ'', and denote by 𝒜'' (resp. ℬ'') the subset of 𝒜 (resp. ℬ) joined by curves in ℒ''. Similarly, let us denote by 𝒜' (ℬ') the elements of 𝒜 (resp. ℬ) such that ℒ connects each point in 𝒜' to one in ℬ'. Let ℒ' be the set of connected components of ℒ with one boundary point in 𝒜 and the other in ℬ. Thus ℒ' provides a bijection Φ between 𝒜' and ℬ'.

Given a disk D∈𝒜, let S_D denote the component of ∂ B_R(0)∖∂ D containing the South Pole of ∂ B_R(0), and N_D the component containing the North Pole. Let us say D is a “blue” disk, D∈ Blue_R^0, if the three-ball bounded by D∪ S_D does not contain p in its interior (this definition is well-defined as no disk in 𝒜 intersects p by assumption and each disk in 𝒜 is contained in B_R(0) by Lemma <ref>). If the three-ball bounded by D∪ S_D does contain p, let us say D is a “red” disk, D∈ Red_R^0. Thus we partition 𝒜 into Blue_R^0 and Red_R^0. In the same way we partition ℬ into Blue_R^1 and Red_R^1. As in Proposition <ref>, by choosing the perturbation D_R^t of C_R^t small enough, we obtain that the cardinality of Blue_R^0 is equal to that of Blue_R^1 and the cardinality of Red_R^0 is equal to that of Red_R^1. We claim that in addition the cardinality of Blue_R^0 is equal to that of Red_R^0 modulo 2. Thus the cardinality of 𝒜 is even and Proposition <ref> follows.

Toward that end, we first consider the disks in 𝒜'⊂𝒜, which are in bijective correspondence with ℬ' by the map Φ described above. We claim that

Φ(𝒜'∩ Blue_R^0) ⊂ Red_R^1,

and similarly

Φ(𝒜'∩ Red_R^0) ⊂ Blue_R^1.

To prove (<ref>), fix a disk P∈𝒜'∩ Blue_R^0. Since P is blue, it follows that P∪ S_P is a two-sphere bounding a three-ball that does not contain p. As we move P along the curve in ℒ linking it to a disk in ℬ', we obtain a moving sequence of boundary curves D_R^t, starting at D_R^0 and ending at D_R^1, together with a moving sequence of disks P_t with boundary D_R^t. In this notation P_0=P and P_1=Φ(P). For each t there is a choice of component S_t of ∂ B_R(0)∖ D_R^t so that at time 0, S_t is nearly all in the southern hemisphere, and the component S_t varies continuously in t. For t=1 (after a 180^∘ rotation has been completed), S_1 is mostly in the northern hemisphere. Note that as no minimal disk in ℒ hits p and S_t is contained in the boundary of the sphere of radius R, it follows that the two-sphere P_t∪ S_t bounds a three-ball that is disjoint from p for all t. Thus in particular P_1∪ S_1 bounds a ball disjoint from p. It follows that P_1 is a red disk. Thus Φ(P) is red as desired, establishing (<ref>) and, mutatis mutandis, (<ref>).
Thus the cardinality of 𝒜' and thus also ℬ' (as the set is in bijective correspondence with 𝒜') is even.It remains to consider the other elements of 𝒜 which comprise the set 𝒜”.Arguing similarly to the above paragraph, one can see that a component of ℒ” cannot join a blue disk in 𝒜” to a red disk in 𝒜”.Thus the only possibility is that each component of ℒ” joints a red disk to a red disk, or a blue disk to a blue disk.But anyway these contribute an even number of elements, and thus the cardinality of 𝒜” is even.But since the cardinality of 𝒜 is the sum of the cardinalities of 𝒜' and 𝒜”, we obtain that this cardinality is even.This completes the proof. Since the conclusions of Propositions <ref> and <ref> are in contradiction, it follows that the assumption of Proposition <ref> is false, and thus:For R large enough, B_R(0) contains an embedded minimal disk passing through p with boundary arbitrarily close to some equatorial circle C^t_R, where t∈ [0,1] depends on R. We may thus combine this with the curvature and area estimates to complete the proof of the main result. Let R_i→∞ be a sequence of radii, and let Σ_i denote the embedded minimal disk with some boundary circle close to C^t_i_R_i obtained from Corollary <ref>.Denote by Σ_∞ a subsequential limit of Σ_i as i→∞ (using Proposition <ref>).By Proposition <ref> Σ_∞ contains p and thus is non-empty. Moreover, the same proposition shows that Σ_∞ is a smooth properly embedded minimal plane.This completes the proof of Theorem <ref>. We remark that it should be possible to prove that Σ_∞ has a unique tangent plane at infinity (cf. <cit.>). It would be interesting to know if this tangent plane is the same as the one containing the (limits of the) circles C^t_i_R. This would presumably imply that there is a full one-parameter family of minimal planes through any given point p. § REMARKS RELATED TO THE MORSE INDEXIn this section we discuss the index of the minimal planes obtained in Theorem <ref>. We begin by discussing a related setting in which the disks Σ_R have unbounded index. We say that a metric on M^3 is asymptotically conical if M is diffeomorphic to ^3 and in the associated coordinates g=g̅_α +b where g̅_α = dr^2 + r^2α^2g_𝕊^2.and |b| + |x| |D_g̅_αb| + |x|^2|D_g̅_α^2b| = o(1). Let (M^3,g) be an asymptotically conical 3-manifold containing no closed embedded minimal surfaces. For every point p∈ M there exists a complete properly embedded minimal plane containing p. If the cone parameter satisfies α∈ (0,1) each plane has infinite Morse index. The existence proof proceeds exactly as that of Theorem <ref>, after noting that the vector field X = r∂_r satisfies D_g̅_α X =2 g̅_α. Finally, the statement about the Morse index is a consequence of<cit.>. Now, for any asymptotically flat (M^3,g) with no closed interior minimal surfaces, it is not hard to construct g_j asymptotically conical (with α_j→ 1) so that g_j converges locally smoothly to g and (M^3,g_j) contains no closed embedded minimal surfaces. Through any p∈ M, we can consider the sequence of minimal planes Σ_j with respect to g_j as constructed in Theorem <ref>. By appropriately modifying the arguments to prove compactness, we see that (after passing to a subsequence) Σ_j converges locally smoothly to a minimal plane Σ with respect to g still containing p. One might expect Σ still to have infinite Morse index. Surprisingly, this is not the case as long as we impose slightly stronger decay assumptions on the metric (as we show in the next proposition). 
Thus, the index of the Σ_j “drifts to infinity” as the asymptotic cone angle parameter α tends to 1.

Consider (M^3,g) asymptotically flat. Assume that the asymptotically flat metric g satisfies the stronger decay condition:[Note that these conditions are still much weaker than is usually considered.] g=g̅+b where

|b| + |x| |D̅ b| + |x|^2 |D̅^2 b| = O(r^-τ)

for some τ > 0. Suppose that Σ is an unbounded minimal surface in (M^3,g) so that
* Σ has quadratic area growth, and
* for λ→∞, after passing to a subsequence, λ^-1Σ converges in C^2,α_loc(ℝ^3∖{0}) to a (multiplicity one) plane through the origin.
Then Σ has finite Morse index.

We note that the argument used here should extend (using e.g. arguments from <cit.>) to show equivalence of finite index and finite total curvature for embedded minimal surfaces in asymptotically flat 3-manifolds.[Depending on the hypothesis concerning (M,g) (e.g., non-negative scalar curvature, Schwarzschild asymptotics, etc.) it may be necessary (or not) to assume quadratic area growth for the minimal surface.] We note that in the case of ambient ℝ^3 the equivalence of finite index and finite total curvature is a well known result of Fischer-Colbrie <cit.>.

We begin by observing that because λ^-1Σ is close to a plane in C^2_loc(ℝ^3∖{0}) for λ sufficiently large, we see that

∫_Σ∩∂ B_R κ dμ = 2π + o(1)

as R→∞. Moreover, by convexity of large coordinate balls, it is clear that Σ∩ B_R is a disk. Thus, Gauss–Bonnet yields

∫_Σ∩ B_R K_Σ dμ = o(1)

as R→∞. On the other hand, the Gauss equations give

2 K_Σ = R_g - 2 Ric_g(ν,ν) - |A_Σ|^2 = O(r^-2-τ) - |A_Σ|^2,

where we have used (<ref>) to estimate the scalar curvature R_g and Ricci curvature Ric_g of g. Because Σ has quadratic area growth, a simple estimate on dyadic annuli gives ∫_Σ O(r^-2-τ) < ∞. Thus, ∫_Σ |A_Σ|^2 < ∞. This implies that

|A_Σ| = O(r^-1-δ)

for some δ>0 by the work of Bernard–Rivière <cit.> (clearly Σ is embedded outside of a compact set by the blow-down assumption on λ^-1Σ).[We note also the work of Carlotto <cit.>, which proves such an estimate under the a priori assumption that Σ is stable outside of a compact set (and under stronger asymptotic decay conditions on the metric).]

We prove that for μ>0 to be chosen, the function φ(x) = 1-|x|^-μ satisfies the following inequality outside of a compact set:

Δ_Σφ + (|A_Σ|^2 + Ric_g(ν,ν))φ ≤ 0.

Since φ is positive, this implies that Σ is stable outside of a compact set. This will then imply[In flat ℝ^3 there is a well known but indirect proof by Fischer-Colbrie <cit.> that stability of a minimal surface outside of a compact set is equivalent to finite index. This proof does not seem to extend to the present situation; this is why we appeal to <cit.> here.] that Σ has finite Morse index by work of Devyver <cit.>. To establish (<ref>), note that Ric_g(ν,ν) = O(r^-2-τ) by the assumed decay condition (<ref>). On the other hand, a straightforward blow-down argument, using the fact that Δ_ℝ^2 r^-μ = μ^2 r^-2-μ, shows that

Δ_Σφ = - r^-2-μ(μ^2+o(1))

as |x|→∞. Hence,

Δ_Σφ + (|A_Σ|^2 + Ric_g(ν,ν))φ = -(μ^2 + o(1))r^-2-μ + O(r^-2-τ) + O(r^-2-δ),

which is negative for r sufficiently large, as long as we choose 0 < μ < min{τ,δ} (we recall that τ>0 is the constant in (<ref>) while δ>0 is the constant in (<ref>)). This completes the proof.
Our construction follows a construction of Grigor'yan and Nadirashvili <cit.>, modified in a straightforward manner to the present setting. Consider a metric of the form

g = dr^2 + h(r)^2 g_𝕊^2,

where h(r) is smooth and satisfies h(r) = r(1-(log r)^-2) for r sufficiently large and h(r) = r^2 for r sufficiently small. It is clear that (ℝ^3,g) is asymptotically flat in the sense of Theorem <ref> but not in the stronger sense considered in Proposition <ref>. Consider Σ a totally geodesic plane in (ℝ^3,g), i.e. for any equator γ: 𝕊^1→𝕊^2, set

Σ := { (r,γ(θ)) : r∈[0,∞), θ∈𝕊^1}.

That Σ is totally geodesic (and thus minimal) follows from the symmetry of (ℝ^3,g). We claim that Σ has infinite Morse index. It is easy to compute (cf. <cit.>)

Ric(ν,ν) = -h''(r)/h(r) + (1-h'(r)^2)/h(r)^2 ≥ (r log r)^-2

for r sufficiently large. Consider the function

φ(r) = (log r)^1/2 sin((1/2) log log r)

for r ∈ [r_k, r_{k+1}], where r_k := exp(exp(2π k)) (taking φ identically 0 otherwise; note that φ vanishes at the endpoints of this interval). Note that

(rφ'(r))' = -(1/2) r^-1(log r)^-2 φ(r).

We consider φ in the second variation form of area for Σ. We find, for k sufficiently large:

Q(φ,φ) = ∫_Σ ( |∇_Σφ|^2 - (|A_Σ|^2 + Ric(ν,ν))φ^2) dμ
≤ 2π∫_{r_k}^{r_{k+1}} (φ'(r)^2 - (r log r)^-2 φ(r)^2) r (1-(log r)^-2) dr
≤ 2π∫_{r_k}^{r_{k+1}} (r φ'(r)^2 - (3/4) r^-1(log r)^-2 φ(r)^2) dr
= 2π∫_{r_k}^{r_{k+1}} (- (r φ'(r))' - (3/4) r^-1(log r)^-2 φ(r)) φ(r) dr
= 2π∫_{r_k}^{r_{k+1}} ((1/2) r^-1(log r)^-2 φ(r) - (3/4) r^-1(log r)^-2 φ(r)) φ(r) dr
= - (π/2)∫_{r_k}^{r_{k+1}} r^-1(log r)^-2 φ(r)^2 dr < 0.

Because this holds for all k sufficiently large, Σ has infinite index.
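One way to verify the pointwise identity for the test function used above is symbolically; the following short check (an illustrative aside — the variable names are ours) confirms that (rφ'(r))' = -(1/2) r^-1 (log r)^-2 φ(r):

import sympy as sp

r = sp.symbols('r', positive=True)
phi = sp.sqrt(sp.log(r)) * sp.sin(sp.log(sp.log(r)) / 2)
# residual of the claimed identity (r*phi')' + phi/(2 r log(r)^2) = 0
residual = sp.diff(r * sp.diff(phi, r), r) + phi / (2 * r * sp.log(r)**2)
print(sp.simplify(residual))         # expected: 0
print(sp.N(residual.subs(r, 50.0)))  # expected: ~0 up to round-off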
http://arxiv.org/abs/1709.09650v2
{ "authors": [ "Otis Chodosh", "Daniel Ketover" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20170927174018", "title": "Asymptotically flat three-manifolds contain minimal planes" }
http://arxiv.org/abs/1709.09120v1
{ "authors": [ "J. Nättilä", "M. C. Miller", "A. W. Steiner", "J. J. E. Kajava", "V. F. Suleimanov", "J. Poutanen" ], "categories": [ "astro-ph.HE", "nucl-th" ], "primary_category": "astro-ph.HE", "published": "20170926163601", "title": "Neutron star mass and radius measurements from atmospheric model fits to X-ray burst cooling tail spectra" }
Existence of steady gap solutions in rotating black hole magnetospheres

Amir Levinson & Noam Segev
Raymond and Beverly Sackler School of Physics & Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
Received …; accepted …

Under conditions prevailing in certain classes of compact astrophysical systems, the active magnetosphere of a rotating black hole becomes charge-starved, giving rise to the formation of a spark gap in which plasma is continuously produced. The plasma production process is accompanied by curvature and inverse Compton emission of gamma rays in the GeV-TeV band that may be detectable by current and future experiments. The properties of the gap emission have been studied recently using a fully general relativistic model of a local steady gap. However, this model requires artificial adjustment of the electric current, which is determined, in reality, by the global properties of the magnetosphere. In this paper we map the parameter regime in which steady gap solutions exist, using a steady-state gap model in Kerr geometry, and show that such solutions are allowed only under restrictive conditions that may not apply to most astrophysical systems. We further argue that even the allowed solutions are inconsistent with the global magnetospheric structure. We conclude that magnetospheric gaps are inherently intermittent, and point out that this may drastically change their emission properties.

§ INTRODUCTION

A question of considerable interest in the theory of Poynting-flux outflows from black holes (BHs) <cit.> is the nature of the plasma source in the magnetosphere. In contrast to pulsars, in which free charges can be supplied to the magnetosphere by the rigid star along magnetic field lines that are anchored to its surface, in Kerr BHs there is no such inherent plasma source. As discussed in greater detail in the next section, plasma in the region enclosed between the inner and outer Alfvén surfaces must be continuously replenished by either some external agent or via pair cascades in a spark gap. It has been argued that under conditions likely to prevail in many BH systems, both supermassive and stellar, formation of a spark gap is inevitable <cit.>. It has been further pointed out that the gap activity may be imprinted in the high-energy emission observed in these sources <cit.>. The variable TeV emission detected in M87 <cit.>, a galaxy that harbours one of the largest BHs in the universe, as well as in the radio galaxy IC 310 <cit.>, has been regarded as a plausible example of the signature of magnetospheric plasma production on horizon scales <cit.>. In essence, the gap is an inherent part of the global magnetospheric structure. Hence, a self-consistent analysis of magnetic outflows requires a proper account of the coupling between the gap and the force-free regions of the outflow.
This can only be achieved, at least in principle, using global plasma simulations. While global PIC simulations have been performed recently for pulsars <cit.>, they are expected to be far more involved in the case of black holes, since (i) a fully general relativistic scheme must be implemented, and (ii) unlike in pulsars, the origin of the magnetic field threading the BH is poorly understood, which is reflected in the choice of boundary conditions. To avoid such complications, and still gain some insight into the physics underlying plasma production in the gap, local gap solutions can be sought, in which the global magnetospheric structure is assumed to be unaffected by the gap activity, while the magnetospheric current is treated as a free input parameter of the gap model.

In a recent series of papers <cit.>, a fully general relativistic model of a steady gap has been developed and exploited to study the properties of magnetospheric emission. In this model the magnetospheric current was not treated as a free parameter, but rather adjusted, for any given choice of the remaining parameters, to keep the multiplicity at the value required by the closure condition (cf. Eq. (26) in Ref <cit.>). The question then arises as to how restrictive the conditions are under which steady state solutions exist. This issue is of importance, as it might have drastic implications for the gap emission. The point is that in steady gaps that encompass the null surface the maximum power that can be released scales as h^4 with the gap width h <cit.>. Since the pair multiplicity in a steady gap cannot exceed unity, this implies that the gamma-ray luminosity emitted from a steady gap decreases rapidly as the intensity of the external radiation source, which provides the pair production opacity, increases. Such restrictions do not apply to intermittent gaps, which can support a large magnetospheric current even when exposed to an intense radiation field. What the limits on the output power of intermittent gaps are is unclear at present. Future plasma simulations might be able to resolve this question.

In this paper we map the parameter regime in which local steady gap solutions exist, using a 1D model of a local magnetospheric gap in Kerr geometry. We find that such solutions require highly restrictive conditions that may not apply to most astrophysical systems. Moreover, we argue that even the local steady solutions that are allowed in this model are inconsistent with the global magnetospheric structure. This implies that the plasma production region is dynamic, which may have far-reaching consequences for the gap emission.

In Sec. <ref> we review the conditions under which gap formation is expected. In Sec. <ref> we present the model, and in Sec. <ref> discuss the results. In Sec. <ref> we briefly remark on the connection between the local model and the global structure. We conclude in Sec. <ref>.

§ CONDITIONS FOR VACUUM BREAKDOWN

An inherent feature of MHD outflows driven by a Kerr BH is the presence of a stagnation surface located between the inner and outer light cylinders (e.g., Refs <cit.>).
The reason is that the strong gravitational field of the black hole imposes an inward motion of plasma very near the horizon, regardless of the direction of the energy flux, whereas the plasma above the outer light cylinder must be flowing outwards. Consequently, the plasma in the causal magnetospheric region must be continuously replenished.

The injection of charges into the magnetosphere may be associated with the accretion process. Direct feeding seems unlikely, as charged particles would have to cross magnetic field lines on a timescale shorter than the accretion time in order to reach the polar outflow. Magnetic field irregularities, either inherent or forming by some macroscopic instabilities, can give rise to occasional loading of the magnetosphere. However, the timescale of such episodes may be considerably longer than the escape time of plasma in the inner magnetosphere (around the stagnation surface), so that some additional injection process may be required to maintain the local charge density above the Goldreich-Julian (GJ) value everywhere in the magnetosphere. In AGNs and microquasars this may be accomplished through annihilation of MeV photons emanating from the hot gas accreted into the black hole. We denote the luminosity of this radiation source, henceforth measured in Eddington units, by l_γ=L_γ/L_Edd, and its size, given in units of r_g, by R̃_γ=R_γ/r_g. For a sufficiently high annihilation rate the resultant charge density can exceed the GJ value, keeping the magnetosphere force-free. At lower annihilation rates the magnetosphere will be starved and a gap should form.

The density of injected pairs can be estimated by equating the pair production rate with the escape rate. It is given roughly by n_± ≃ σ_γγ n_γ^2 r_g/3 <cit.>, where n_γ ≃ 10^22 m^-1 R̃_γ^-2 l_γ cm^-3 is the density of MeV photons, and m=M_BH/M_⊙ is the black hole mass in solar mass units. Complete screening requires n_± > n_GJ, where n_GJ=Ω B/(2π e c)=2×10^11 B_8(Ω/ω_H) m^-1 cm^-3 denotes the GJ density, Ω is the angular velocity of magnetic surfaces, ω_H≃ c/2r_g is the angular velocity of the black hole, B=10^8 B_8 Gauss is the strength of the magnetic field near the horizon, and e>0 is the magnitude of the electron charge. The latter condition can be expressed as:

l_γ > 10^-3 B_8^1/2(Ω/ω_H)^1/2 (R̃_γ/30)^2.

For smaller values of l_γ the magnetosphere becomes charge starved and a gap forms. The strength of the magnetic field near the horizon can be estimated by assuming that it is in rough equipartition with the ram pressure in the disk. This yields B ≃ 10^9 ṁ^1/2 m^-1/2 G, where ṁ=ηṀc^2/L_Edd, with η≃0.1 being the radiative efficiency, is the accretion rate in Eddington units. In the RIAF regime the accretion flow is hot and the gamma ray luminosity can be estimated from an ADAF model, e.g., Ref <cit.>, up to some uncertainty in the electron temperature.
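For concreteness, the scalings just quoted can be evaluated directly. The short script below is an illustrative aside; the parameter values at the bottom are hypothetical example numbers, not taken from the text:

def n_GJ(B8, omega_ratio, m):
    # GJ density, n_GJ ≈ 2e11 B_8 (Ω/ω_H) m^-1 cm^-3, as quoted above
    return 2e11 * B8 * omega_ratio / m

def B_equipartition(mdot, m):
    # B ≈ 1e9 (ṁ/m)^(1/2) G, rough equipartition with the disk ram pressure
    return 1e9 * (mdot / m) ** 0.5

def l_gamma_min(B8, omega_ratio, R_gamma_tilde):
    # screening threshold: l_γ > 1e-3 B_8^(1/2) (Ω/ω_H)^(1/2) (R̃_γ/30)^2
    return 1e-3 * B8 ** 0.5 * omega_ratio ** 0.5 * (R_gamma_tilde / 30.0) ** 2

# hypothetical example: m = 1e9, ṁ = 1e-4, Ω = 0.5 ω_H, R̃_γ = 30
m, mdot, omega_ratio, R_gamma_tilde = 1e9, 1e-4, 0.5, 30.0
B8 = B_equipartition(mdot, m) / 1e8
print(B8, n_GJ(B8, omega_ratio, m), l_gamma_min(B8, omega_ratio, R_gamma_tilde))
# a gap is expected whenever the ambient MeV luminosity l_γ falls below l_gamma_min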
Adopting such a model yields a condition for the appearance of a gap: ṁ <4×10^-3 m^-1/7 <cit.>. At higher accretion rates the accretion disk spectrum cannot extend to high energies, as it is too cold. However, gamma-rays may originate from a tenuous corona, if present, as widely believed, although no reliable constraints on the spectrum and luminosity of this coronal component have been imposed thus far. In principle, it could be that in sources that accrete at relatively high rates the magnetic field is much higher than in RIAF sources, while the gamma-ray luminosity is suppressed. If indeed true, this could mean that gap emission in such objects may be more intense than in RIAF sources. § A STATIONARY GAP MODEL We construct a model describing a 1D, general relativistic stationary gap that treats the electron-positron plasma as a two-beam fluid. The global magnetic field geometry adopted below is a split monopole geometry. The gap extends along a poloidal magnetic surface, characterized by an inclination angle θ. Gamma rays are produced by accelerating pairs via curvature emission and inverse Compton (IC) scattering, and in turn generate fresh pairs through their interaction with an ambient radiation field, given as input. It should be emphasized that these local steady gap solutions are applicable only in the region where ideal MHD breaks down. In the global picture additional forces, ignored here, act on the particles and will determine the conditions outside the gap. The details are outlined in the following: §.§ Background geometry The background spacetime is described by the Kerr metric, here given in Boyer-Lindquist coordinates with the following notation: ds^2= -α^2dt^2 + g_φφ(dφ-ω dt)^2 + g_rrdr^2 +g_θθdθ^2, where α^2 = ΣΔ/A;ω=2ar_g r/A; g_rr=Σ/Δ;g_θθ =Σ;g_φφ=A/Σsin^2θ, with Δ=r^2+a^2-2r_g r, Σ=r^2+a^2cos^2θ, A=(r^2+a^2)^2-a^2Δsin^2θ, and r_g=GM/c^2 denotes the gravitational radius. The parameter a=J/M represents the specific angular momentum. The determinant of the matrix g_μν is given by √(-g)=Σsinθ. The angular velocity of the black hole is defined as ω_H=ω(r=r_H)=ã/2r_H, where ã=a/r_g denotes the dimensionless spin parameter, and r_H=r_g+√(r_g^2-a^2) is the radius of the horizon. Henceforth, all lengths are measured in units of r_g and time in units of r_g/c, so we set c=r_g=1 unless explicitly stated otherwise. To avoid the singularity on the horizon, we find it convenient to transform to the tortoise coordinate ξ, defined by dξ= (r^2+a^2)dr/Δ. It is related to r through: ξ(r)=r+1/√(1-ã^2)[r_+ln(r/r_+-1) -r_-ln(r/r_--1)], with r_±=1±√(1-ã^2). Note that ξ→-∞ as r→ r_H=r_+. §.§ Gap electric field We implicitly assume that the gap forms a small perturbation in the force-free magnetosphere, in the sense that the potential drop across the gap is much smaller than the full vacuum potential. We can then ignore the variation in Ω in the gap and, for every magnetic flux surface, define the electric field in the corotating frame as F^'_μ t=F_μ t+Ω F_μφ. In general it satisfies Equation <ref>, with the GJ density defined explicitly in Equation <ref>. In order to compute the gap structure in our formalism, the magnetic field geometry needs to be specified.
In what follows we adopt a split monopole geometry, defined by A_φ=B_H√(A_H)(1-cosθ), where B_H=10^8 B_8 G denotes the strength of the magnetic field on the horizon, andA_H≡ A(r=r_H)=(r^2_H+a^2)^2=4(1+√(1-ã^2))^2 in our units (4r_g^2r_H^2 in full units).With this choice F_rφ=0 and F_θφ=B_H√(A_H)sinθ.Note that in the ZAMO frame the radial magnetic field is given by B_r=F_θφ/√(A)sinθ = B_H√(A_H/A),and the non-corotating electric field by E^'_r=√(A) F^'_rt/Σ. We find it convenient to use the electric flux function Φ_E=√(A) E^'_r (which is essentially the electric flux per solid angle, as measured in the ZAMO frame). Then, Gauss' law, Eq. (<ref>), reduces to (see Eq. (<ref>))∂_ξΦ_E=4πΣΔ/r^2+ã^2 (ρ_e-ρ_GJ),with the GJ density given byρ_GJ=B_H√(A_H)/4π√(-g)∂_θ[sin^2θ/α^2(ω-Ω)].Note that ΣΔρ_GJ is finite on the horizon. Contours of ρ_GJ(r,θ) are exhibited in Figure <ref> for Ω=0.5ω_H and ã=0.9.As seen,it vanishes on the null surface denoted here by r_c(θ), located roughly (but not exactly) where Ω=ω. In what follows the charge density and electric flux are normalized to the fiducial values ρ_0=B_Hω_H√(A_H)/(2π cr_g^2)=B_Hã/2π r_g and Φ_o=ρ_0 r_g^3, respectively, densities are measured in units of n_0=ρ_0/e, and angular velocities are measured in units of ω_H. With the convention Ω· B>0 adopted below the electric field in the gap is negative, Φ_E<0.§.§ Plasma dynamics We adopt a treatment in which the plasma in the gap is modelled as a two-component fluid, consisting of electrons and positrons with proper number densities n_- and n_+, respectively, and 4-velocities u^μ_±=(u_±^t,u_±^r,0,0).In a ZAMO frame the velocity components are given by u_±=√(g_rr)u_±^r, γ_± =α u_±^t, v_±=u_±/γ_±. We define the radial fluxes, N^r_±=Σ n_± u^r_±, measured in units of r_g^2 n_0 c.The continuity equation for each species can then be expressed as (see appendix <ref>),∂_ξ N_±^r= ΣΔ/2(r^2+ã^2) Q,where Q is the net pair production rate per unit volume, measured in units of n_0c/r_g, and is the same for electrons and positrons byvirtue of charge conservation.It is readily seen that the difference N_0^r=N^r_+-N_-^r is conserved along magnetic flux tubes. This conserved quantity is simply the electric current per solid angle per unit charge flowing along magnetic flux tube, viz.,N_0^r= Σ j^r/e, where j^r=e(n_+u^r_+-n_- u_-^r) is the radial component of the electric 4-current density, which is determined by the global magnetosphericstructure.The evaluation of N_0^r requires proper account of the coupling between the gap and the global magnetosphere, whichis beyond the scope of our analysis, and in our model it is treated as a free parameter. As will be shown below, itaffects the gap structure. The normalized charge density, ρ_e= j^t/ρ_0=e(n_+ u_+^t - n_-u^t_-)/ρ_0, can be expressed in terms of the electron and positronfluxes as, ρ_e=√(A)/ΣΔ(N^r_+/v_+-N^r_-/v_-). With the convention Ω· B>0 (Φ_E<0) electrons accelerate outwards, v_->0, and positrons inwards, v_+<0.The equations of motionof the pair fluids can be expressed as (see appendix <ref> for details),dγ_±/dξ=- γ_±∂_ξlnα±α/r^2+ã^2( η_E Φ_E -√(A) s^t_±), withη_E=eB_H√(A_H)ω_H/2π m_e c^3 =1.4× 10^9 ã B_8 m. The first term on the right hand side of Equation (<ref>) accounts for the gravitational redshift, the second term for energy gain due to acceleration in the gap electric field, and the third term for the sum of curvature and inverseCompton losses, s_±^t=s^t_±,cur+s^t_±,IC, derived explicitly below. 
As will be shown below, in practice the Lorentz factors γ_± equal their saturation values, at which energy gain is compensated by redshift effects and radiative losses almost everywhere in the gap.§.§ Gamma-ray emission and pair productionWe suppose that the gap is exposed to emission of soft photons by the accretion flow,from a putative source of size R_s=R̃_s r_g and luminosity L_s=l_s L_Edd. For simplicity, we assume that the intensityof the seed radiation in the gap is isotropic with a power law spectrum:I_s(x^μ,ν_s,Ω_s)=I_0(ϵ_s/ϵ_s,min)^-p,ϵ_s,min<ϵ_s<ϵ_s,max, where ϵ_s=hν_s/m_ec^2 is the dimensionless photon energy and p>1. The assumption that I_s is isotropic is reasonable, except perhaps very near the horizon,since the size R_s of the radiation source is typically much larger than the gap dimensions.The number density of seed photons is given by n_s=4π/c∫I_s/hν_sdν_s=4π I_0/h c(1-ϵ_s,min^p/ϵ_s,max^p)/p≃4π I_0/h c.We find it convenient to define a fiducial optical depth:τ_0=σ_T r_g 4π I_0/hc=A_s4 m_p/m_el_s/R̃_s^2 ϵ_s,min,where A_s=(p-1)/[1-(ϵ_s,min/ϵ_s,max)^p-1]∼ 1.It roughly gives the scaling of the IC and pair production opacities. Typically R̃ < 10^2, so that a large opacity is anticipated when l_s>ϵ_s,min. As shown below, the terminal Lorentz factor of the pairs in the gap is extremely high.Thus, their emission is highly beamed along their direction of motion.Let I_γ(r,ϵ_γ, μ_γ) denotes the intensity of gamma-rays emitted by the pairs at radius r,in direction μ_γ=cosθ_γ=r̂·Ω̂_γand energy ϵ_γ=hν_γ/m_ec^2.Under the beaming approximation we have: I_γ(r,ϵ_γ, μ_γ)=I_γ^+(r,ϵ_γ)δ(μ_γ+1) +I_γ^-(r,ϵ_γ)δ(μ_γ-1),here I^-_γ denotes the intensity emitted by electrons and I_γ^+ by positrons. The beamed intensities satisfy the radiative transfer equations1/√(A)d/dξ(√(A)I^±_γ )=±α√(A)/r^2+ã^2( κ_ppI^±_γ -j_γ^±),neglecting redshift effects (see appendix <ref> for details), where the emissivity is the sum of curvature and IC emissions,j^±_γ=j^±_IC + j_cur^±.The absorption coefficient κ_pp and the emissivities j^±_IC and j^±_cur are computed in the ZAMO frame. To render this equation dimensionless, we normalize intensities by h c n_0, emissivitiesby hc n_0/r_g, and opacities by 1/r_g.§.§.§ Curvature emission The normalized curvature emissivity is given by <cit.>j_cur^± (r,ϵ_γ)=√(3)α_fn_±γ^2_±/2π R_c F(ϵ/ϵ_c),where R_c denotes thecurvature radius of magnetic field lines (in units of r_g),α_f=e^2/ħ c is the fine structure constant, F(x) isthe usual synchrotron function, andϵ_c=2πλ_c/r_gγ_±^3/R_c≃ 10^-15γ_±^3/mR_c,here λ_c=ħ/m_ec denotes the Compton wavelength of the electron. The curvature radius is a free parameter in our model.Inthe numerical calculations presented below we adopted R_c = 1.Finally, the curvature loss term is given bys^t_±,cur= -10^-18γ_±^4/m R_c^2. §.§.§ Inverse Compton emissionThe normalized IC emissivity, computed in appendix <ref> using the full Klein-Nishina (KN) cross-section, can be expressed in the ZAMO frame in terms of the fiducial optical depth τ_0 as:j^±_IC(r,ϵ_γ) = τ_0 n_±γ_±/6π[4γ_±ϵ_s,min(γ_±-ϵ_γ)/ϵ_γ+4γ_±ϵ_s,min(γ_±-ϵ_γ)]^p× [ϵ_γ/ϵ_γ+4γ_±ϵ_s,min(γ_±-ϵ_γ)].The corresponding drag (energy loss) terms for the pairs are given formally bys^t_±,IC(r)=- 2π/n_±γ_±∫_0^γ_± j^±_IC(r,ϵ_γ) dϵ_γ.Within the beaming approximation invoked here, transition from the Thomson to the KN regime occurs at a Lorentz factor γ_KN=1/4ϵ_s,min. 
In the Thomson limit, where γ/γ_KN=4γ_±ϵ_s,min<<1, the above expressions reduce toj^±_IC(r,ϵ_γ)≃τ_0 n_±γ_±/6π(ϵ_γ/4γ_±^2ϵ_s,min)^-p,with 4γ_±^2ϵ_s,min≤ϵ_γ, and s^t_±,IC(r)≃ -4 τ_0/3(p-1)γ_±^2ϵ_s,min,whereas in the KN limit, γ_±/γ_KN>>1, we have j^±_IC(r,ϵ_γ)≃τ_0 n_±γ_±/6π(ϵ_γ/4γ_±^2ϵ_s,min),and s^t_±,IC(r)≃ -τ_0/24 ϵ_s,min.§.§.§ Pair production Under the assumption that the seed photon intensity is isotropic, the normalized pair production opacity simplifies to κ_pp(r,ϵ_γ) = τ_0/2 ∫_ϵ_th^ϵ_s,max dlnϵ_s(ϵ_s/ϵ_s,min)^-p× ∫_-1^μ_maxdμ(1-μ)σ_γγ where σ_γγ is the full pair creation cross-section (in units of σ_T) given in Ref <cit.>, ϵ_th= max(ϵ_s,min, ϵ_γ^-1) and μ_max=1-2/(ϵ_sϵ_γ) from the threshold condition. In the Thomson limit, that is, ϵ_γϵ_s,min <1, it is given, to a good approximation, by <cit.> κ_pp(r,ϵ_γ)=3 τ_0/2A_p(ϵ_s,min ϵ_γ)^p,where A_p is a number that depends on the spectral index p, and is plotted in Figure 1 of Ref <cit.>. It equals roughly 0.2 for p=1 and 0.1 for p=2. In the KN limit, ϵ_γϵ_s,min >> 1, we can use the approximation∫_-1^μ_max(1-μ)σ_γγ dμ≃ 3ln(ϵ_γϵ_s)/2ϵ_γϵ_s, to obtainκ_pp(r,ϵ_γ)=3 τ_0/4(p+1) ln(ϵ_γϵ_s,min)/ϵ_γϵ_s,min.For the parameter regime considered here we find ln(ϵ_γϵ_s,min)/2(p+1)≃ 1.Thus, to a good approximation we canuse the simple extrapolation:κ_pp(r,ϵ_γ)=3τ_0/2 A_p(ϵ_s,min ϵ_γ)^p/1+A_p(ϵ_s,min ϵ_γ)^p+1.The net specific pair production rate can be expressed asQ = 2π∫ dlnϵ_γ∫_-1^1dμ_γκ_pp I_γ(r,ϵ_γ,μ_γ)= 3πτ_0 ∫A_p(ϵ_γϵ_s,min)^p/1+A_p(ϵ_γϵ_s,min)^p+1(I^+_γ+I_γ^-)dlnϵ_γ.§.§ Boundary conditions The outer and inner gap boundaries are treated as free boundaries.Their location is determinedby two parameters; the global current N_0^r and the fiducial optical depth τ_0. In the steady gap model it is implicitly assumed that beyond the gap boundaries the field aligned electric field vanishes.Thus, it must satisfy the boundary conditionsΦ_E(r_in)=Φ_E(r_out)=0.We further assume that pairs and photons are not injected into the gap across either boundary. This implies N^r_+(r_out)=N^r_-(r_in)=0, N^r_-(r_out)=- N^r_+(r_in)=- N_0^r.Likewise, since no gamma-rays are incident into the gap through its boudaries,I_γ^+(r_out,ϵ_γ)= I_γ^-(r_rin,ϵ_γ)=0.The Lorentz factors of the electron and positron beams formally satisfy γ_-(r_in)=γ_+(r_out)=1. However,since practically they reach their saturation level instantaneously, we find that the solution is highly insensitive to the exact values taken at the boundary, as long asthey are much smaller than the maximum values.§ STEADY GAP SOLUTIONS AND FORBIDDEN REGIMES Equations (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), subject to the boundary conditions (<ref>)-(<ref>) form a complete set that governs the structure and spectrum of the steady gap for a given choice of the input parameters N_0^r and τ_0 (if a solution exists). The location of the outer boundary r_out is constrainedto exceed a minimumvalue by the condition |ρ_e(r_out)|< |ρ_GJ(r_out)|. To obtain a solution, we integrate the equations iteratively, changing the locations of the inner and outer boundaries, r_in and r_out, in each iteration, until all boundary conditions are satisfied. In each iterationwe first guess a value for r_out, and then integrate the equations inwards starting at r_out until Φ_E vanishes (provided it is outside the horizon). We then check the values of N^r_- and I_γ^- there, and if nonzero change the location of r_out accordinglyfor the next iteration. The process is repeated until the desired solution is obtained. 
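For reference, the interpolated pair-production opacity that enters these integrations is easy to tabulate. The sketch below evaluates κ_pp/τ_0 across the Thomson–Klein-Nishina transition for p = 2 and A_p ≈ 0.1 (the value quoted above), with ϵ_s,min = 10^-6 assumed purely for illustration:

def kappa_over_tau0(x, p=2.0, A_p=0.1):
    # Normalised opacity from the interpolation formula:
    # kappa_pp / tau_0 = (3/2) * A_p * x^p / (1 + A_p * x^(p+1)),  x = eps_s_min * eps_gamma.
    return 1.5 * A_p * x**p / (1.0 + A_p * x**(p + 1))

eps_s_min = 1e-6                      # assumed spectral peak of the target photons
for eps_gamma in (1e4, 1e5, 1e6, 1e7, 1e8):
    x = eps_s_min * eps_gamma
    print(f"eps_gamma = {eps_gamma:.0e}  x = {x:.0e}  kappa_pp/tau_0 = {kappa_over_tau0(x):.3e}")
# The opacity rises as x^2 in the Thomson regime (x << 1), falls off roughly as 1/x in the
# Klein-Nishina regime (x >> 1), and peaks when x is of order unity.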
Examples are exhibited in Figure <ref>, where profiles of the electric flux, Lorentz factor, pair fluxes and specific pair production rate, computed for a prototypical supermassive BH accreting in the RIAF regime (ṁ≃10^-4), are plotted for different values of the magnetospheric current, here represented in terms of the current density at the null surface, j_c=e N^r_0/Σ_c (normalized by the fiducial current Ω B_H cosθ/2π), where Σ_c≡Σ(r_c) is the value of Σ(r) at the null surface r_c(θ). Each case shown corresponds to a specific value of τ_0. Similar solutions were obtained for parameters typical of stellar BHs. As seen, the gap shrinks as the magnetospheric current j_c (or equivalently the flux N_0^r) is reduced, as expected. It is also seen that unless the magnetospheric current is implausibly weak, the gap width is not much smaller than the horizon scale. Since in a stationary gap the pair multiplicity cannot largely exceed unity (see bottom right panel in Fig. <ref>), this implies that τ_0 (and hence L_s) must also be small, as shown next. Much insight can be gained into the behaviour of the gap by employing crude estimates that allow analytic derivation of the pair production rate and the closure condition. Below, we adopt such a treatment to map the parameter regime in which local, steady gap solutions exist. The Lorentz factor of accelerating pairs is limited by the saturation value at which the radiation drag (due to curvature and IC emission) balances the electric force acting on the particles within the gap. It is formally obtained by setting the right hand side of Eq. (<ref>) to zero. Neglecting gravity (which is important only very near the horizon), we find that the acceleration length is roughly l_acc≃ 10^-2 m^-1/2R_c^1/2 (B_8| E_r^'|)^-3/4, so that practically the Lorentz factor is determined by the saturation condition in the entire gap region. The dependence of γ/γ_KN, the saturated Lorentz factor normalized by γ_KN≡ 1/4ϵ_s,min, on τ_0 is displayed in Fig. 3, for different values of the peak energy ϵ_min. The transition from curvature-dominated to IC-dominated losses is clearly seen. The value of τ_0 at which the transition occurs depends on the spectral peak ϵ_min through Klein-Nishina effects. It can be estimated analytically from the saturation condition, whereby it is found that IC losses dominate the drag force when τ_0>10^4 B_8 m (ϵ_s,min/10^-6), as indeed seen in Figure <ref>. As argued below, for such high values of τ_0 steady gap solutions do not exist for any reasonable choice of parameters, hence this regime is irrelevant for our analysis. At smaller values of τ_0 curvature losses dominate, and the saturated Lorentz factor is: γ_±≃ 5×10^6 R_c^1/4(B_8|E^'_r|)^1/4 m^1/2, so that γ_+=γ_-=γ. Under a broad range of conditions we find |E^'_r|^1/4∼1. Hence, for our fiducial stellar black hole, m=10, B_8=1, we expect γ∼ 10^7, whereas for a fiducial blazar with m=10^9 and B_8=10^-4 we have γ≃ 10^10. Our detailed calculations confirm this. Under our beaming approximation, IC scattering is in the KN limit if (see Eq. <ref>) ϵ_s,min >1/(4γ)≃ 5× 10^-8 R_c^-1/4(B_8|E^'_r|)^-1/4 m^-1/2. This condition is satisfied essentially in all sources. Consequently, we conclude that quite generally IC scattering of external radiation by pairs accelerated in the gap is in the KN regime (although a sufficiently soft extension of the spectrum to energies below the peak may somewhat alter this conclusion).
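As a quick numerical check of these estimates, the following sketch evaluates the curvature-limited saturation Lorentz factor and the corresponding Klein-Nishina threshold for the two fiducial sources; |E'_r| = 1 and R_c = 1 are assumed, as in the estimates above:

import math

def gamma_sat(m, B8, E_prime=1.0, R_c=1.0):
    # Curvature-limited Lorentz factor, gamma ~ 5e6 * R_c^(1/4) * (B8 |E'_r|)^(1/4) * m^(1/2).
    return 5e6 * R_c**0.25 * (B8 * E_prime) ** 0.25 * math.sqrt(m)

for label, m, B8 in [("stellar BH (m = 10, B_8 = 1)", 10.0, 1.0),
                     ("blazar (m = 1e9, B_8 = 1e-4)", 1e9, 1e-4)]:
    g = gamma_sat(m, B8)
    print(f"{label}: gamma ~ {g:.1e}, KN threshold eps_s,min > 1/(4 gamma) ~ {1/(4*g):.1e}")
# This reproduces the gamma ~ 10^7 and gamma ~ 10^10 values quoted above, and shows that the
# Klein-Nishina condition eps_s,min > 1/(4 gamma) is met for any realistic soft-photon spectrum.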
The characteristic energy of curvature photons, Equation (<ref>), can be expressed asϵ_c≃ 10^5 (B_8|E^'_r|)^3/4 R_c^-1/4m^1/2,and it is seen that typically ϵ_c<<γ.Consequently, we expect two peaks in the high-energy spectrum emitted from the gap, one due to IC scattering, at ϵ_γ≃γ, and the other one due to curvature emission, at ϵ_γ≃ 0.3 ϵ_c. Equations (<ref>) and (<ref>) imply that the separation between the peaks is independent of the black hole mass, but scales with the magnetic field roughly as B^1/2.The detailed calculations outlined in Ref  <cit.> indeed confirm that the spectral energy distribution has a double peak structure with these scalings.Next, we provide estimates for the pair production opacity and the specific pair production rate in the gap. Since IC scattering is in the KN regime, the characteristic energyof scattered photons is γ.Thus, pair creation occurs also in the KN regime, wherebyEquation (<ref>) applies:κ_pp,IC = 3 τ_0/4(p+1) ln(γϵ_s,min)/γϵ_s,min∼ 0.1 τ_0R_c^-1/4× (B_8|E^'_r|)^-1/4 m^-1/2(ϵ_s,min/10^-6)^-1,for p>1.Consequently, for τ_0>10 R_c^1/4, κ_pp,IC1 for both stellar and supermassive black holes.At these energies the contribution of curvature emission is completely negligible (see Eq. (<ref>)), and the solution to the radiative transfer equation, Eq. (<ref>), is approximately the IC source function, specifically I^±_γ≃ j^±_IC/κ_pp.Using Equation (<ref>) and noting that in the ultra-relativistic limit n_+γ_+ +n_-γ_- = -N^r_0/√(ΣΔ), one obtains the contribution of IC scattered photons to the pair creation rate:Q_IC(r) ≃ τ_0/12γϵ_min(-N^r_0)/√(ΣΔ)≃ 0.02 τ_0 (-N^r_0)/√(ΣΔ)× R_c^-1/4(B_8|E^'_r|)^-1/4 m^-1/2(ϵ_s,min/10^-6)^-1.The peak of curvature emission occursat an energy of ϵ_γ=0.29 ϵ_c, for which ϵ_γϵ_s,min<<1. Thus, the interaction of curvature photonswith the target radiation field is in the Thomson regime.Choosing p=2 for illustration, Equation (<ref>) yieldsκ_pp,cur(ϵ_γ)≃ 10^-4τ_0  mR_c^-1/2(B_8|E^'_r|)^3/2× (ϵ_s,min/10^-6)^2(ϵ_γ/0.29ϵ_c)^2at energies ϵ_γϵ_c. Since for a steady gap τ_0<<10^4, it implies κ_pp,cur<<1.The calculation of Q_cur is more involved than in the IC case, and we can only offer a rough analytic estimate of its average.The details can be found in appendix <ref>, where the following result for the average pair production rate is derived:<Q_cur> ≃ 2 τ_0  (-N^r_0)R_c^-5/4(B_8|E^'_r|)^7/4×m^3/2(ϵ_s,min/10^-6)^21/<√(A)>∫_r_in^r_out√(A) dr/Δ, here <√(A)> is the average value of √(A(r)) across the gap, defined explicitly below Equation (<ref>), and is typically in the range 3 to 4.5. Equation (<ref>)may overestimate the local rate by a factor of a few. From a comparison of Eqs. (<ref>) and (<ref>)we anticipatethe pair production to be dominated by IC photons when B_80.1 m^-1(ϵ_min/10^-6)^-3/2R_c.This condition is roughly satisfied in RIAF sources with ṁ<10^-4, assuming R_c≃1. At larger accretion rates pair production is predominantly due curvature photons. Finally, we derive a closure condition that defines a limit on the luminosity of the external radiation source, l_s, above which steady gap solutions are forbidden. 
For clarity of our analysis we include only the contribution of IC photons to the pair production rate, viz., Q=Q_IC. Thus, the limit obtained from the closure condition derived below should be considered an absolute upper limit. Additional production of pairs by curvature photons would merely enlarge the forbidden regime. Integration of Equation (<ref>), subject to the boundary condition N^r_+(r_out)=0, yields N^r_0=∫_r_in^r_out(Σ Q/2)dr. This last relation simply means that the pair multiplicity in the gap is roughly unity. Taking Q=Q_IC in the latter expression and substituting Eq. (<ref>) yields τ_0=12 γϵ_min/H≃ 50 (R_cB_8|E^'_r|)^1/4m^1/2(ϵ_min/10^-6)/H, where the factor H=∫_r_in^r_out√(Σ/Δ) dr depends on the magnetospheric current N^r_0 through the gap boundaries r_in and r_out, and is of order a few for the solutions shown in Figure <ref>. It can become much smaller than unity for extremely small values of N^r_0, but we find such values unlikely. For our fiducial sources the value of B_8^1/4m^1/2(ϵ_min/10^-6) is about 3 in the case of a stellar BH and about 30 for a supermassive BH. The maximum value of |E^'_r|^1/4 ranges between 0.8 and 1.3 in the solutions exhibited in Figure <ref>. Consequently, the corresponding Eddington ratio, l_s=1.3×10^-7 (R̃_s/30)^2(ϵ_min/10^-6)τ_0 (see Eq. (<ref>)), that allows stationary gap solutions must be very small. Larger values would render the gap intermittent. Figure <ref> shows the separation into forbidden and allowed regimes computed numerically using the full gap equations with Q=Q_IC. The solid curve corresponds to the locus of solutions, each having the maximum value of τ_0 above which no steady solutions exist. For each choice of the magnetospheric current N_0^r this maximum value is obtained by seeking the solution that satisfies ρ_e(r_out)=ρ_GJ(r_out), or equivalently √(A(r_out))[α(r_out)]^2ρ_GJ(r_out)=N_0^r. This solution defines the maximum luminosity l_s at which a steady gap can still support the current N^r_0. At lower luminosities the gap widens (r_out increases). At larger luminosities it must become intermittent. § A REMARK ON THE GLOBAL STRUCTURE In this section we briefly comment on the relation between the local gap and the global magnetospheric structure. As mentioned above, a generic feature of magnetically driven outflows from a Kerr black hole is a plasma double-flow that emanates from a stagnation surface located between the inner and outer Alfven surfaces. The location of the stagnation surface is determined from a balance between the gravitational, centrifugal and Lorentz forces <cit.>. In the limit of low inertia considered in this paper (where a gap forms) it depends very weakly on the details of plasma injection <cit.>. In general, it has a non-spherical shape <cit.>, and its distance from the BH ranges from r∼ 4.5 r_g at the equator to r∼ 10 r_g along the axis, so that it is located well outside the null surface (see Figure <ref> for illustration).
Now, if the outer gap boundary extends beyond the stagnation surface, then accelerated particles leaving the outer gap boundary move outwards and particles that escape through the inner gap boundary move inwards, in accord with the global plasma flow requirements. On the other hand, if the outer gap boundary lies below the stagnation surface, then the direction of the particle beam that escapes through the outer gap boundary is opposite to that of the plasma flow in the force-free section below the stagnation surface, as illustrated schematically in Figure <ref>. This inconsistency most likely means that the plasma production process must be dynamic. As seen in Figure <ref>, in all steady solutions the outer gap boundary does not extend beyond 3r_g, so that it is located below the stagnation surface. This suggests that the local steady solutions derived here may be inapplicable to a global magnetosphere. § CONCLUSIONS The main conclusion of this paper is that under realistic conditions, charge-starved regions in the magnetosphere of a Kerr black hole are expected to be inherently intermittent. The main reasons are that (i) for realistic values of the magnetospheric current the pair multiplicity cannot accommodate the closure condition required by a steady gap, unless the luminosity of the external radiation source is extremely small, and (ii) the steady gap solutions are inconsistent with the global magnetospheric structure. The latter reason seems to imply that in black hole outflows the entire region below the stagnation surface should be dynamic. It is unclear at present how the intermittency of the plasma production process will affect the resultant emission. In local gap models the plasma production process can be sporadic, giving rise to electric current oscillations around the mean value imposed by the global magnetosphere with an amplitude that depends on the pair production rate. In this case, a reduction in the amplitude of the gap oscillations is expected when the intensity of the ambient radiation field, which provides the dominant pair production opacity, is increased. In global, self-consistent gap models it seems that plasma production should occur in cycles of pair creation bursts. What fraction of the black hole spin-down power can be released in the form of high-energy radiation in this dynamic state, and how this should affect the emitted spectrum, remain open questions. Intermittency is also expected in pulsar gaps under certain conditions <cit.>; however, the reason for this is different from that in the case of rotating black holes discussed here. This research was supported by a grant from the Israel Science Foundation no.
1277/13.§ DERIVATION OF THE GENERALIZED GAUSS' LAWFrom the inhomogeneous Maxwell's equations,1/√(-g)∂_μ (√(-g)F^νμ)=4π j^ν,and the relation F^tμ =g^μν(g^ttF_t ν+g^tφF_φ ν)=1/α^2g^μν(F_νt+ω F_ν φ)one obtains the generalized Gauss' law:1/√(-g)∂_μ[√(-g)g^μν/α^2 (F_νt+ω F_ν φ)]=4πj^t.In terms of the electric field measured in a frame rotating with the flux tube, F^'_α t= F_α t+Ω F_αφ,the latter equation can be written as1/√(-g)∂_μ[√(-g)g^μν/α^2F^'_ν t]=4π(j^t-ρ_GJ),where ρ_GJ=1/4π√(-g)∂_μ[√(-g)g^μν/α^2(ω-Ω)F_νφ].For the static, axisymmetric radial gap invoked in section <ref> we have ∂_t=∂_φ=0 and F^'_θ t=0, wherebythe expression for ρ_GJ reduces to Equation (<ref>),and Equation (<ref>) reduces to1/Σ∂_r(A/ΣF^'_r t)=4π(j^t-ρ_GJ),where the substitutions √(-g)=Σsinθ and√(-g)g^rr/α^2=(A/Σ)sinθ have been used.Upon defining the electric flux as Φ_E= A F^'_r t/Σ and transforming to the tortoise coordinate given in Equation (<ref>), Equation (<ref>) is obtained.§ DERIVATION OF THE FLUID EQUATIONS The plasma in the gap is treated as a two-component fluid consisting of electrons and positrons, with proper densities n_±, pressures p_±,specific enthalpies (per particle)h_±, and 4-velocitiesu^μ_±, where subscript - (+) designates the electron (positron) fluid. In the presence of pair creation the continuityequation becomes,1/√(-g)∂_μ(√(-g)n_± u_±^μ) =Q/2,here Q denotes the pair production rate per unit volume. The electric 4-current is given byj^μ=e(n_+u_+^μ - n_-u_-^μ),and from Eq. (<ref>) it is readily seen that the electric current is conserved, viz., ∂_μ j^μ=0.In a steady gap this implies that the current is constant inside the gap, ∇· j=0. The energy-momentum equation takes the form:1/√(-g)∂_ν(√(-g)T_±^μν)+Γ^μ_ αβT_±^αβ =± en_± F^μ_ αu_±^α - S_±^μ + Q_±^μ,in terms of the energy-momentum tensorT_± ^μν=h_± n_± u_±^μ u_±^ν + p_± g^μν.The first term on the right hand side of Eq. (<ref>) accounts for the work done on the fluids by electromagnetic forces, the second term (S_±^μ)for radiative losses, and the third term (Q_±^μ) is associated with pair loading via annihilation of photons.The projection of Eq. (<ref>) on the 4-velocity u^ν yields an equation for the change in the entropy per particle σ (in k_B units) of each fluid:n_± T_± u_± ^μ∇_μσ_±= (S_±α - Q_±α)u_±^α - h_± Q/2, here T_± is the temperature of the fluids.By employing Eqs. (<ref>), (<ref>)-(<ref>),and the second law, dh-dp/n=Tdσ, we arrive atn_± h_± u_± ^μ∇_μ u_± ^ν =± en_± F^ν_ αu_±^α + (- S_±α + Q_±α - ∂_α p_±)(g^αν+u_±^α u_±^ν),denotingu_± ^μ∇_μ u_± ^ν=u_± ^μ∂_μ u_± ^ν + Γ^ν_ αβu_±^α u_±^β.We now make the following approximations:First, pressure forces are expected to be small compared with the electric and radiation forces, thus we neglect the term ∂_α p_±. Second, we assume that each fluid is approximately adiabatic, u^ν∇_νσ_±=0. This assumption is reasonableif the spread in momentum is much smaller than the bulk momentum.Under the above simplifications Eq. (<ref>)yields(- S_±α + Q_±α - ∂_α p_±)(g^αν+u_±^α u_±^ν)=-S_±^ν+Q_±^ν-Qhu_±^ν/2. Third, if newly created pairs are added to the fluid with an average momentum that is roughly equal to the bulk momentum (as naively expected from energy-momentum conservation), thenQ_±^ν-Qhu_±^ν/2=0.With these approximations the radiative source term is orthogonal to the fluid velocity, viz., u_±^ν S_±ν=0. Next, we take the radial (ν=r) component ofEq. 
(<ref>), make use of the relationu_± ^μ∇_μ u_± r=u_± ^μ∂_μ u_± r - Γ_α r βu_±^α u_±^β and the fact that u_rΓ^r_αβ= u^rΓ_rαβ, and note that for the invoked gap geometry u^μ∂_μ=u^r∂_r, to get∂_r(u_±^2/2)=1/2(u_r∂_r u^r+u^r∂_r u_r) =-1/2(u_±^t)^2∂_rα^2±e/h_±F_rtu_±^t+s_± ru_±^t ,where s^r_±=-S^r_±/(u_±^t n_± h_±) and s_± r=g_rrs_±^r.Noting that ∂_φ is a Killing vector we further have - u^μ∇_μ u_±φ=±e/n_± h_±F_φνu^ν + s_±φ=s_±φ,since F_φνu^ν=F_φ ru^r=0 for the split monopole geometry invoked in our gap model. Neglecting the toroidalcomponent of the radiative force, s_±φ=0, which is reasonable for the assumed isotropic radiation field, implies that the angular momentum of each fluid is conserved: u_±φ=g_φφ(u^φ-ω u^t)= const.For simplicity, we take the angular momentum of the fluids to be zero (although our analysis can be readily extended to fluids with nonzero angular momentum).Then, u_±^φ=ω u^t, andfrom the normalization condition u_μ u^μ=-1 we readily have (α u_±^t)=1+g_rr (u_±^r)^2=1+u^2_±, which simply defines the Lorentz factor of the fluid measured by a ZAMO, γ_±=α u_±^t.Upon substituting the relation γ^2_±-1=u^2_± into Eq. (<ref>), using the orthogonality condition s_μ u^μ=s_± tu_±^t+s_± ru_±^r=0, noting that s_±^t=g^tts_± t+g^tφs_±φ=-s_± t/α^2, since we invoke s_±φ=0,and transforming to the tortoise coordinate, we arrive at Eq. (<ref>). § RADIATION §.§ Transport equationIn terms of the absorption coefficient κ_ν and the emissivity g_ν=c^2 j_ν/(h^4ν^3), the transport equation for thephoton distribution function, f(x^μ,p^ν), takes the covariant form:p^α∂_αf-Γ^α_βγ p^β p^γ∂ f/∂ p^α=p^t(-κ_ν f+g_ν),where Γ^α_ βγ is the usual Christoffelsymbol.With respect to a ZAMO framedefined by the tetrads e_t̂=1/α(∂_t+ω∂_φ), e_r̂=1/√(g_rr)∂_r,e_θ̂=1/√(g_θθ)∂_θ, e_φ̂=1/√(g_φφ)∂_φ,the components of the photon momentum are p_â= e_â^b p_b,and p^â=η^âb̂p_b̂.In this frame we define the directionvectors n^â=(1,μ_p,sinθ_pcosφ_p, sinθ_psinφ_p), where μ_p=cosθ_p <cit.>. Clearly n_ân^â=0, as required.The photon momentum in this frame is p^â=ν n^â, where henceforthwe use units where h=c=1.Note that the angle θ_p is measured with respect to the radial direction ∂_r. We suppose that the photon distribution is axi-symmetric locally, that is f is independent of φ_p.Then, the transfer equationtakes the form <cit.>[n^â∂_â -γ^t̂_âb̂n^ân^b̂ ν∂/∂ ν+(n^r̂γ^t̂_âb̂-γ^r̂_âb̂)n^ân^b̂∂/∂ μ_p]f =-κ_ν f+g_ν,in terms of the Ricci rotation coefficientsγ^â_b̂ĉ= e^â_λ e^ν_ĉ(∂_ν e^λ_b̂ +Γ^λ_νμe^μ_b̂). In applying the transport equation to the gamma ray emission in the gap we take ν=ϵ_γ, μ_p=μ_γ, φ_p=φ_γ,f=I_γ(r,ϵ_γ,μ_γ)/ϵ_γ^3.Since the beamed intensity is independent of φ_γ we can average the transport equation over this angle. Using the relationsγ^t̂_âb̂ n^ân^b̂ = n^â∂_âlnα=μ_γ/√(g_rr)∂_rlnαand1/2π∫γ^r̂_âb̂ n^ân^b̂ dφ_γ=γ^r̂_t̂t̂ +1/2(1-μ_γ^2)(γ^r̂_θ̂θ̂+γ^r̂_φ̂φ̂) =1/√(g_rr)[∂_rlnα - (1-μ^2_γ) ∂_rln√(A)], one finally arrives at:n^â∂_â I_γ - μ_γ/√(g_rr) (∂_rlnα) ϵ_γ^4∂/∂ϵ_γ(I_γ/ϵ_γ^3) +[-∂_rlnα+1/2∂_rln√(A)] (1-μ_γ^2)/√(g_rr)∂/∂μ_γI_ν=-κ_pp I_γ +j_γ.To simplify our analysis we shall neglect the term ∂_rlnα/√(g_rr) as it is merely important very near the horizon.we further apply the beaming approximation (<ref>), note that n^â∂_â=1/√(g_rr)∂_r, and integrate the later equation over the angle μ_γ to obtain: 1/√(A)∂_r(√(A) I^±_γ)=√(g_rr)(±κ_pp I^±_γ∓ j^±_γ).Upon transforming to the tortoise coordinate we obtain Eq. (<ref>). 
§.§ Inverse Compton emissivityWe consider inverse Compton scattering of target radiation by a cold electron (positron) beam of comoving density n_±. The intensity of the target radiation, as measured in the rest frame of the beam,is denoted by I^'_s(ϵ^'_s,μ^'_s,r,t), with ϵ_s^', μ^'_s being the energy and direction of the target photons. The comoving gamma-ray emissivity has the general formj^'_γ(ϵ^'_γ,μ^'_γ,r,t)=n_±∫ϵ^'_γ/ϵ^'_sI^'_s(ϵ^'_s,μ^'_s,r,t) dσ^'/dΩ^'_γδ[ϵ^'_γ-ϵ^'_c(ϵ^'_s)] dϵ^'_sdΩ^'_s, hereϵ^'_c(ν^'_s)=ϵ^'_s/1+hϵ^'_s/m_ec^2(1-cosψ),ψ is the angle between the incident and scattered photons, given bycosψ=μ^'_γμ^'_s+sinθ^'_γsinθ^'_scos(φ^'_γ-φ^'_s),anddσ^'/dΩ^'_γ=3σ_T/16π(ϵ^'_γ/ϵ^'_s)^2 (ϵ^'_γ/ϵ^'_s+ϵ^'_s/ϵ^'_γ-sin^2ψ)is the differential Klein-Nishina cross-section.In our model the target radiation field is taken to be isotropic in the ZAMO frame with a power law spectrum, I_s=I_0(r)(ϵ_s/ν_min)^-p, ϵ_min < ϵ_s < ϵ_max. Since the Lorentz factor of the beams, γ_±, is extremely large, we safely assume that the target radiation field iscompletely beamed in the comoving frame.Specifically, I^'±_s(ϵ^'_s,μ^'_s,r) =4γ_±/3I_0(r)(ϵ^'_s/2γ_±ϵ_min)^-pδ(1∓ϵ^'_s);2γ_±ϵ_min<ϵ^'_s<2γ_±ϵ_max,where superscript + (-) refers to the positron (electron) beam.Performing the integral in Eq. (<ref>) and noting that|dϵ_c^'/dϵ_s^'|=(ϵ_c^'/ϵ_s^')^2 yields: j^±'_γ(ϵ^'_γ,μ^'_γ,r)=σ_Tn_±/2I_0 (ϵ^'_s/2γ_±ϵ_min)^-pϵ_γ^'/ϵ^'_s[ϵ_γ^'/ϵ^'_s +ϵ_s^'/ϵ^'_γ-1+μ_γ^' 2]. Transforming back to the ZAMO frame, and recalling thatϵ_γ^'/ϵ^'_s=1-ϵ_γ(1-β_±)γ_±(1+μ_γ), ϵ_s^'/2γ_±ϵ_min=ϵ_γ/2ϵ_min[1-β_±μ_γ/1-ϵ_γγ_±(1-β_±)(1+μ_γ)], we havej^±_γ(ϵ_γ,μ_γ,r)=j^±'_γ(ϵ^'_γ,μ^'_γ,r)/[γ_±(1-β_±μ_γ)]^2=σ_Tn_±γ_±/2 I_0(r) (ϵ_γ/2 ϵ_min)^-p g(ϵ_γ,μ_γ,γ_±), where g(ϵ_γ,μ_γ,γ_±))=1/γ_±^2(1-β_±μ_γ)^2[2γ_±-ϵ_γ(1+μ_γ)/2γ_±(1-β_±μ_γ)]^pϵ_γ^'/ϵ^'_s[ϵ_γ^'/ϵ^'_s +ϵ_s^'/ϵ^'_γ-1+μ_γ^' 2].Noting that the minimum and maximum scattering angles for a given gamma ray energy are1-β_±μ_min=min[(1+β_±), 2ϵ_max/ϵ_γ-2ϵ_max/γ_±],1-β_±μ_max=max[(1-β_±), 2ϵ_min/ϵ_γ-2ϵ_min/γ_±],and averaging the emissivities over angles, that is,j^±_γ(ϵ_γ,r)=1/2∫_μ_min^μ_maxj^±_γ(ϵ_γ,μ_γ,r) dμ_γ,one obtains, to leading order, the beamed emissivities in Eq. (<ref>).§ DERIVATION OF <Q_CUR>As argued below Eq. (<ref>), at energies ϵ_γ<ϵ_c the pair production opacity is much smaller than unity.We can thereforeneglect absorption in the transfer equation (<ref>). We can also neglect the IC emissivity since it is much smaller than the curvatureemissivity at these energies. The approximate solutions to Eq. (<ref>), subject to the boundary conditions (<ref>), then read: I_γ^+(r,ϵ_γ) = -√(3)α_f/2π R_cγ_+ F(ϵ_γ/ϵ_c)1/√(A(r))∫_r^r_out√(A)N^r_+/Δdr^',I_γ^-(r,ϵ_γ) = √(3)α_f/2π R_cγ_-  F(ϵ_γ/ϵ_c) 1/√(A(r))∫_r_in^r√(A)N^r_-/Δdr^'.We note that √(A(r)) changes by at most a factor of 3 across the gap, so we assume it s constant with an averagevalue <√(A)>=(√(A_in)+√(A_out))/2, where A_in≡ A(r_in) and likewise for A_out. Recalling that N^r_+ - N^r_- = N^r_0 = const, and that γ^+=γ^-≡γ across most of the gap, and taking <√(A)>instead of √(A(r)), the sum of the two intensities yieldsI_γ^+(r_in,ϵ_γ)+I_γ^-(r_out,ϵ_γ)=√(3)α_f/2π R_cγ F(ϵ_γ/ϵ_c)(-N^r_0) 1/<√(A)>∫_r_in^r_out√(A) dr^'/Δ With the crude approximation I_γ^+(r,ϵ_γ)+I_γ^-(r,ϵ_γ)= [I_γ^+(r_in,ϵ_γ)+I_γ^-(r_out,ϵ_γ)]/2,we obtain an expression for the average pair production rate:<Q_cur>=√(27)α_f/2 R_cγ τ_0 (-N^r_0)A_p (ϵ_cϵ_min)^p 1/<√(A)>∫_r_in^r_out√(A) dr^'/2 Δ ∫_0^∞ x^p-1F(x)dx.By employing Eqs. 
(<ref>) and (<ref>), choosing p=2 for illustration, and computing the last integralon the right hand side we arrive at Eq. (<ref>). 99 BZ77 R. D. Blandford and R. L. Znajek, Mon. Not. R. Astron. Soc.,179, 433, (1977) BK08 M. Barkov and S. Komissarov,Mon. Not. R. Astron. Soc., 385, L28 (2008) LR11 A. Levinson and F. Rieger, Astrophys. J., 730, 123 (2011) [Levinson2000]le00 A. Levinson, PhRvL, 85, 912 (2000) [Hirotani & Pu2016]HP16 K. Hirotani, Pu H.-Y., 2016, Astrophys. J., 818, 50 (2016) [Hirotani et al.2016]Hir16 K. Hirotani, H.-Y. Pu, L. C.-C. Lin, H.-K. Chang, M. Inoue, A. K. H. Kong, S. Matsushita and P.-H. T. Tam, Astrophys. J., 833, 142 (2016) [Hirotani et al.2017]Hir17 K. Hirotani, H.-Y.  Pu, L. C.-C. Lin, A. K. H. Kong, S. Matsushita, K. Asada, H.-K. Chang and P.-H. T. Tam, Astrophys. J., 845, 77 (2017) [Lin et al.2017]Lin17L. C.-C. Lin, H.-Y. Pu, K. Hirotani, A. K. H. Kong, S. Matsushita, H.-K. Chang, M. Inoue and P.-H. T. Tam, Astrophys. J., 845, 40 (2017) [Neronov & Aharonian2007]NA07 A. Neronov, F. A. Aharonian, Astrophys. J., 671, 85 (2007) [Rieger2011]Rie11 F. M. Rieger, Int. J. Mod. Phys. D, 20, 1547 (2011)[Broderick & Tchekhovskoy2015]BT15 A. E. Broderick and A. Tchekhovskoy,Astrophys. J., 809, 97 (2015)[Aharonian et al.2003]ahr03 F. A. Aharonian, et al., Astron. Astrophys., 403, L1 (2003) [Acciari et al.2009]acc09 V. A. Acciari, et al.,Sci, 325, 444 (2009) [Aleksić et al.2014]alk14 J. Aleksić, et al.,Sci, 346, 1080 (2014)[Chen & Beloborodov2014]CB14A. Y. Chen and A. M.  Beloborodov,Astrophys. J., 795, L22 (2014) [Cerutti et al.2015]CPPS15 B. Cerutti, A. Philippov, K. Parfrey and A. Spitkovsky, Mon. Not. R. Astron. Soc., 448, 606 (2015) [Philippov, Spitkovsky, & Cerutti2015]PSC15 A. A. Philippov, A. Spitkovsky and B.  Cerutti, Astrophys. J., 801, L19 (2015)[Globus & Levinson2014]GL14 N. Globus and A.  Levinson,Astrophys. J., 796, 26 (2014) [Globus & Levinson2013]GL13 N. Globus and A.  Levinson,PhRvD, 88, 084046 (2013) [Narayan & Yi1995]NY95 R. Narayan and I.  Yi,Astrophys. J., 452, 710 (1995) [Blandford & Levinson1995]BL95 R. D.  Blandford andA.  Levinson, Astrophys. J., 441, 79 (1995) [Rybicki & Lightman1979]RL79 G. B.  Rybicki and A. P.  Lightman, Radiative Processes in Astrophysics (New York: Wiley) (1979) [Gould & Schréder1967]GS67 R. J.  Gould and G. P.  Schréder, PhRv, 155, 1404 (1967) [Morita & Kaneko1986]MK86 K. Morita and N.  Kaneko,Astrophys. & Space Sci., 121, 105 (1986) [Lindquist1966]Lin66 R. W.  Lindquist, AnPhy, 37, 487 (1966) [Takahashi1990]Tak90 M. Takahashi, S. Nitta, Y. Tatematsu & A. Tomimatsu, Astrophys. J., 363, 206 (1990)[Levinsoni2005]Lev05 A. Levinson, D. Melrose, A. Judge & L. Qinghuan, Astrophys. J., 631, 456 (2005) [Timokhini2013]Tim13A. N. Timokhin & J. Arons,Mon. Not. R. Astron. Soc., 429, 20 (2013)
http://arxiv.org/abs/1709.09397v3
{ "authors": [ "Amir Levinson", "Noam Segev" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170927090916", "title": "Existence of steady gap solutions in rotating black hole magnetospheres" }
Homological mirror symmetry for generalized Greene–Plesser mirrors [ December 30, 2023 ================================================================== We prove the first rigorous bound on the heat transfer for three-dimensional Rayleigh–Bénard convectionof finite-Prandtl-number fluids between free-slip boundaries with an imposed heat flux. Using the auxiliary functional method with a quadratic functional, which is equivalent to the background method, we prove that the Nusselt number Ν is bounded by Ν≤ 0.5999 ^1/3 uniformly in the Prandtl number, whereis the Rayleigh number based on the imposed heat flux. In terms of the Rayleigh number based on the mean vertical temperature drop, ,we obtain Ν≤ 0.4646 ^1/2. The scaling with Rayleigh number is the same as that of bounds obtained with no-slip isothermal, free-slip isothermal, and no-slip fixed flux boundaries, and numerical optimisation of the bound suggests that it cannot be improved within our bounding framework. Contrary to the two-dimensional case, therefore, the -dependence of rigorous upper bounds on the heat transfer obtained with the background method for three-dimensional Rayleigh–Bénard convection is insensitive to both the thermal and the velocity boundary conditions. § INTRODUCTION (RB) convection, the buoyancy-driven motion of a fluid confined between horizontal plates, is a cornerstone of fluid mechanics. Its applications include atmospheric and oceanic physics, astrophysics, and industrial engineering <cit.>, and due to its rich dynamics it has also become a paradigm to investigate pattern formation and nonlinear phenomena <cit.>. One of the fundamental questions in the study of convection is to which extent the flow enhances the transport of heat across the layer. Precisely, one would like to relate the Nusselt number Ν (the nondimensional measure of the heat transfer enhancement) to the parameters of the fluid and the strength of the thermal forcing. These are described, respectively, by the Prandtl and Rayleigh numbers =ν/κ and =α g h^3Δ /(νκ), where α is the fluid's thermal expansion coefficient, ν is its kinematic viscosity, κ is its thermal diffusivity, h is the dimensional height of the layer, g is the gravitational acceleration, and Δ is the average temperature drop across the layer.It is generally expected that for large Rayleigh numbers the Nusselt number obeys a simple scaling law of the form Ν∼^a^b. However, different phenomenological arguments predict different scaling exponents in the ranges -1/4≤ a ≤ 1/2 and 2/7≤ b ≤ 1/2 <cit.>, and the available experimental evidence in the high- regime is controversial <cit.>. Discrepancies in the measurements are often attributed to differences in theboundary conditions (BCs) or in the Prandtl number. From the modelling point of view, eight basic configurations of RB convection can be identified depending on the Prandtl number (finite or infinite), the BCs for the fluid's temperature (fixed temperature or fixed flux), and the BCs for its velocity (no-slip or free-slip). Two-dimensional simulations <cit.> have shown that changing the thermal BCs for given velocity BCs has no quantitative effect on Ν, while replacing no-slip boundaries with free-slip ones can dramatically reduce the heat transfer through the appearance of zonal flows. However, zonal flows have not been observed in three dimensions <cit.> and how different BCs affect the Ν-- relationship in general remains an open problem. 
In the absence of extensive numerical result for the high- regime in three dimensions, one way to make progress is through rigorous analysis of the equations that ostensibly describe RB convection. A particularly fruitful approach is to use the background method <cit.> and derive rigorous bounds of the form Ν≤ f(,) for each of the eight configurations described above.The no-slip case has been studied extensively. For fluids with finite Prandlt number the bound Ν≲^1/2 holds uniformly inirrespective of the thermal BCs <cit.>. When =∞ (and ≳^1/3 with isothermal boundaries), instead, one has Ν≲ℓ()^1/3, where ℓ() is a logarithmic correction whose exact form depends on the thermal BCs <cit.>.In contrast, the only bounds available for free-slip velocity BCs are for RB convection between isothermal plates. All identities and estimates used in the no-slip analysis of <cit.> hold also for free-slip boundaries, so one immediately obtains Ν≤^1/2 at finite . This result can be tightened to Ν≤^5/12 in two dimensions and at infinitein three dimensions by explicitly taking advantage of both the stress-free and the isothermal BCs <cit.>.Free-slip conditions pose a challenge for the background method when a constant heat flux κβ, rather than a fixed boundary temperature, is imposed. The reason is that the analysis usually relies on at least one of the temperature and horizontal velocities being fixed at the top and bottom boundaries, which is not the case with free-slip and fixed flux BCs. In this short paper we show that such lack of “boundary control” for the dynamical fields can be overcome with a simple symmetry argument and thereby prove the first rigorous upper bound on Ν for RB convection between free-slip boundaries with imposed heat flux. The exposition is organised as follows. Section <ref> reviews the Boussinesq equations used to model the system. We formulate a bounding principle for Ν in <ref>, and prove our main result in <ref>. Finally, <ref> offers further discussion and conclusive remarks. § THE MODEL We model the system using the Boussinesq equations and make all variables nondimensional using h, h/κ, and hβ, respectively,as the length, time and temperature scales <cit.>. The nondimensional velocity u⃗(x,y,z,t), pressure p(x,y,z,t), and perturbations θ(x,y,z,t) from the conductive temperature profile T_c = -z then satisfy <cit.>∂_t u⃗ + (u⃗) u⃗+p=∇^2 u⃗ + (θ - z)e⃗_z,u⃗ = 0,∂_t θ + u⃗θ = ∇^2 θ + w,wheree⃗_z is the unit vector in the z direction and R=α g β h^4 / (νκ) is the Rayleigh number based on the imposed boundary heat flux. Note thatis related to the Rayleigh number based on the (unknown) mean temperature drop, , by =Ν <cit.>. The domain is periodic in the horizontal (x, y) directions and the vertical BCs are ∂_z u = ∂_zv = w = 0, ∂_zθ = 0atz=0andz=1. Since the average vertical heat flux across the layer is fixed to 1 in nondimensional units, convection reduces the mean temperature difference between the top and bottom plates and hence the mean conductive heat flux⟨ -∂_z T⟩= 1-⟨∂_z θ⟩ (here and throughout this work overlines denote averages over infinite time, while angle brackets denote volume averages). The Nusselt number—the ratio of the average vertical heat flux and the mean conductive flux—is then given by <cit.> Ν =( 1-⟨∂_z θ⟩)^-1.§ UPPER BOUND FORMULATIONWhen< 120 conduction is globally asymptotically stable and Ν=1 <cit.>. For >120 convection sets in <cit.> and we look for a positive lower bound L on 1-⟨∂_z θ⟩, implying Ν≤ 1/L. 
To find L we use the background method <cit.> but we formulate it in the language of the auxiliary functional method <cit.> because of its conceptual simplicity: it relies on one simple inequality, rather than a seemingly ad hoc manipulation of the governing equations.The analysis starts with the observation that any uniformly bounded and differentiable time-dependent functional 𝒱(t)=𝒱{θ(,t),u⃗(,t)} satisfies d𝒱 /d t=0. Consequently, to prove that 1-⟨∂_zθ⟩≥ L it suffices to show that at any instant in time 𝒮{θ(,t),u⃗(,t)} :=d𝒱/ d t+ 1 - ⟨∂_z θ⟩- L ≥ 0.Using the ideas outlined by <cit.>, it can be shown that constructing a background temperature field in the “classical” background method analysis is equivalent to finding constants a, b and L and a function ϕ(z) such that (<ref>) holds for 𝒱{θ(,t),u⃗(,t)} :=-a/2 ⟨u⃗^2 ⟩ -b/2⟨θ^2 ⟩+ ⟨ϕθ⟩. We assume that u⃗ and θ are sufficiently regular in time to ensure differentiability of this functional, while uniform boundedness can be proven using estimates similar to those presented in this paper(we do not give a full proof in this work due to space limitations, but outline the argument in appendix <ref>).The functional 𝒮{θ(,t),u⃗(,t)} corresponding to (<ref>) can be expressed in terms of u⃗ and θ using (<ref>)–(<ref>). Integrating the volume average ⟨u⃗(<ref>)⟩ by parts using incompressibility and the BCs yields d/ d t⟨u⃗^2 ⟩/2=-⟨u⃗^2 ⟩/+ ⟨ w θ⟩. Averaging θ×(<ref>) and ϕ×(<ref>) in a similar way gives d/ d t⟨θ^2 ⟩/2 =- ⟨θ^2 ⟩ + ⟨ wθ⟩,⟨ϕ ∂_t θ⟩ =⟨ϕ' wθ⟩ -⟨ϕ' ∂_z θ⟩. Combining expressions (<ref>)–(<ref>) and rearranging we find 𝒮{θ(,t),u⃗(,t)} = 1 - L- ⟨(ϕ'+1)∂_zθ⟩ + ⟨a/u⃗^2 + bθ^2 + (ϕ'-a-b)wθ⟩.To prove that (<ref>) holds at all times, we make one key further simplification: we drop the equation of motions and choose a, b, L and ϕ(z) such that 𝒮{θ,u⃗}≥ 0 for any time-independent fields θ = θ(x,y,z) and u⃗= u⃗(x,y,z) that satisfy (<ref>) and the BCs. Hereafter, we also assume that a,b>0 to ensure that 𝒮{θ,u⃗} is bounded below. Incompressibility can be incorporated explicitly in (<ref>) upon substitution of the horizontal Fourier expansions equation a,bθ = ∑_k⃗θ_k⃗(z)e^k⃗x⃗, u⃗ = ∑_k⃗u⃗_k⃗(z)e^k⃗x⃗,where x⃗=(x,y) is the horizontal position vector and k⃗=(k_x,k_y) is the wavevector. The z-dependent Fourier amplitudes u⃗_k⃗, θ_k⃗ satisfy the same vertical BCs as the full fields in (<ref>).Using the Fourier-transformed incompressibility constraint one can show that <cit.> 𝒮{θ,u⃗}≥𝒮_0{θ_0}+ b ∑_k⃗≠ (0,0)𝒮_k⃗{θ_k⃗,w_k⃗}, with𝒮_0{θ_0}:= bθ_0'^2-∫_0^1(ϕ'+1)θ_0'+ 1-L,𝒮_k⃗{θ_k⃗,w_k⃗}:=θ_k⃗'^2 + k^2 θ_k⃗^2+a/b(w_k⃗”^2/k^2+ 2w_k⃗'^2 + k^2 w_k⃗^2) +∫_0^1ϕ'-a-b/bRe( θ_k⃗w̃_k⃗) .In these equations and in the following we write k^2 = k_x^2 + k_y^2,denotes the standard Lebesgue ℒ^2 norm on the interval (0,1), and w̃_k⃗ is the complex conjugate of w_k⃗.The right-hand side of (<ref>) is clearly non-negative if 𝒮_0≥ 0 and 𝒮_k⃗≥ 0 for all wavevectors k⃗≠ (0,0). (A standard argument based on the consideration of fields θ and u⃗ with a single Fourier mode shows that these conditions are also necessary, so enforcing the positivity of each 𝒮_k⃗ individually does not introduce conservativeness. However, necessity is not required to proceed with our argument so we omit the details for brevity.)In particular, givena, b, and ϕ the largest value of L for which 𝒮_0≥ 0 is found upon completing the square (in the ℒ^2 norm sense) in (<ref>), so we set L = 1 - ϕ' + 1^2/4b. 
We will try to maximise this expression over a, b and ϕ subject to the non-negativity of the functional 𝒮_k⃗ in (<ref>) for all wavevectors k⃗≠ (0,0). Note that𝒮_k⃗ and the right-hand side of (<ref>)reduce, respectively, to the quadratic form and the bound obtained by <cit.> using the “classical” background method analysis if we let a=b-1 and identify [ϕ'(z)-2b+1]/(2b) with the derivative of the background temperature field. We also remark that our analysis appears more general because the choice a=1-b is unjustified at this stage, but its optimality (at least within the context of our proof) will be demonstrated below. § AN EXPLICIT BOUNDLet δ≤ 1/2 and consider the piecewise-linear profile ϕ(z) shown in figure <ref>, whose derivative is ϕ'(z) = a-1, z∈[0,δ]∪[1-δ,1],a+b,z∈(δ,1-δ). To show that a, b, and δ can be chosen to make the quadratic form 𝒮_k⃗{θ_k⃗,w_k⃗} in (<ref>) positive semidefinite we rewrite θ_k⃗ and w_k⃗ as the sum of functions that are symmetric and antisymmetric with respect to z=1/2. In other words, we decomposeequation a,bθ_k⃗(z) = θ_+(z) + θ_-(z),w_k⃗(z) = w_+(z) + w_-(z),withequation a,bθ_±(z) = θ_k⃗(z) ±θ_k⃗(1-z)/2,w_±(z) = w_k⃗(z) ± w_k⃗(1-z)/2.(The subscripts + and - denote, respectively, the symmetric and antisymmetric parts.) Since ϕ'(z) is symmetric with respect to z=1/2 by construction we obtain 𝒮_k⃗{θ_k⃗,w_k⃗} =𝒮_k⃗{θ_+,w_+}+ 𝒮_k⃗{θ_-,w_-}, i.e., we can split the quadratic form 𝒮_k⃗{θ_k⃗,w_k⃗} into its symmetric and antisymmetric components also.Symmetric and antisymmetric Fourier amplitudes θ_k⃗ and w_k⃗—for which one term on the right-hand side of (<ref>) vanishes—are also admissible, so 𝒮_k⃗ is non-negative if and only if it is so for arguments that are either symmetric or antisymmetric with respect to z=1/2 (and, of course, satisfy the correct BCs). As before, the “only if” statement is not needed to proceed but guarantees that no conservativeness is introduced.The decomposition into symmetric and antisymmetric components is the essential ingredient of our proof.In fact, contrary to the case of no-slip boundaries considered by <cit.>, the free-slip and fixed flux BCs cannot be used to control the indefinite term in 𝒮_k⃗{θ_k⃗,w_k⃗} via the usual elementary functional-analytic estimates. However, θ_± and w_± (or the appropriate derivatives) are known not only at the boundaries, but also on the symmetry plane.In particular, for small δ the indefinite term in (<ref>) can be controlled without recourse to the free-slip, fixed-flux BCs using w_±(0)= w_+'(1/2) = θ_-(1/2) = 0. To prove this, recall that for any symmetric or antisymmetric quantity q(z) ∫_0^1/2q(z)^2 = q^2/2. Symmetry, (<ref>), and the identity θ_±w̃_± = θ_±w_± (w̃_± is the complex conjugate of w_±) yield ∫_0^1ϕ'-a-b/b Re(θ_±w̃_±) ≤ 2M∫_0^δθ_±w_±, with M := (1+a+b)/b. Since w_±(0)=0 the product θ_± w_± vanishes at z=0 and for any z≤δ≤ 1/2 the fundamental theorem of calculus implies θ_±(z) w_±(z)≤∫_0^z θ_±(ξ)w_±'(ξ) + ∫_0^z θ_±'(ξ)w_±(ξ). Using the fact that w_±(0)=0 once again, thefundamental theorem of calculus for ξ≤ 1/2, the Cauchy–Schwarz inequality, and (<ref>) we also obtain w_±(ξ) = ∫_0^ξ w_±'(η) ≤√(ξ/2)w_±'. Furthermore, the conditions in (<ref>) imply that the product θ_± w_±' vanishes at the symmetry plane, so similar estimates as above yield θ_±(ξ)w_±'(ξ) = ∫_ξ^1/2[θ_±(η)w_±”(η) + θ_±'(η)w_±'(η) ]≤1/2θ_±w_±” + 1/2θ_±'w_±'. Upon inserting (<ref>) and (<ref>) into (<ref>), applying the Cauchy–Schwarz inequality, and using (<ref>) we arrive at θ_±(z) w_±(z)≤z/2( θ_±w_±” + 1+√(2)/√(2)θ_±'w_±'). 
Substituting this estimate into (<ref>) and integrating gives an estimate for the indefinite term in (<ref>), and after dropping the term ak^2w_±^2/(b) we conclude that 𝒮_k⃗{θ_±,w_±}≥ 2a/bw_±'^2 - (1+√(2))Mδ^2/2√(2)w_±'θ_±' + θ_±'^2 +a/b k^2w_±”^2- Mδ^2/2w_±”θ_± + k^2 θ_±^2 Recalling the definition of M and that a quadratic form α u^2 + β uv + γ v^2 is positive semidefinite if β^2 ≤ 4αγ, the right-hand side of (<ref>) is non-negative if we set equation δ = A [ab/(1+a+b)^2]^1/4, A:=(8/1+√(2))^1/2.a,bHaving chosen δ to ensure the non-negativity of 𝒮_k⃗, all is left to do is optimise the eventual bound Ν≤ L^-1 over a and b as a function of . Substituting (<ref>) into (<ref>) for our choice ofδ yields L = 1 -(1+a+b)^2/4b +A/2 (1+a+b)^3/2a^1/4/ b^3/4 R^1/4. In order to maximise this expression with respect to a,b>0 we set the partial derivatives ∂ L/∂ a and ∂ L/∂ b to zero. After some rearrangement it can be verified that∂ L/∂ a = 0 ⇔ A b^1/4(7a+b+1)-4 R^1/4a^3/4(1+a+b)^1/2 = 0, ∂ L/∂ b = 0 ⇔ (1+a-b)[ 2 R^1/4 (1+a+b)^1/2 - 3A a^1/4 b^1/4] =0.A few lines of simple algebra show that setting to zero the second factor in (<ref>) leads to a solution with negative a or b, so we must choose b=1+a where a>0 satisfies A^4(1+4a)^4-64(1+a)a^3=0. (No positive roots exist if R ≤ 4 A^4 ≈ 43.92, but we are only interested in ≥ 120 because conduction is globally asymptotically stable otherwise. It can also be checked that this stationary point is a maximum; the algebra is straightforward but lengthy and uninteresting, so we do not report it for brevity).In particular, whentends to infinity (<ref>) admits an asymptotic solution of the form a = a_1^-1/3 + O(^-2/3). Substituting this expansion into (<ref>) and solving for the leading order terms gives a_1 = A^4/3/4. We then set b=1+a and a=a_1 ^-1/3 in (<ref>), simplify, and estimate L = A^4/3/4^1/3[√(2)(4+A^4/3/^1/3)^3/4 -1 ] ≥3A^4/3/4^1/3. Note that this bound is sharp as →∞. Consequently, Ν≤1/L≤4^1/3/3A^4/3≈ 0.5999^1/3. Recalling that =Ν <cit.> we can also express this bound in terms of the Rayleigh numberbased on the average temperature drop across the layer:Ν≤8^1/2/ 3√(3)A^2≈ 0.4646 ^1/2. § DISCUSSIONThe bound proven in this work is the first rigorous result for three-dimensional RB convection between free-slip, fixed flux boundaries (but note that our proof holds also in the two-dimensional case). Key to the result is a symmetry argument that overcomes the loss of boundary control for the trial fields when the no-slip velocity conditions are replaced with free-slip ones. Our approach is fully equivalent to the “classical” application of the background method to the temperature field, and the scaling of our bound withis the same as obtained for no-slip BCs <cit.> and for free-slip isothermal BCs <cit.>. Modulo differences in the prefactor, therefore, rigorous upper bounds on the the heat transfer obtained with the background method for three-dimensional RB convection at finiteare insensitive to both the velocity and the thermal BCs.Whether convective flows observed in reality exhibit the same lack of sensitivity to the BCs, however, remains uncertain. Two-dimensional simulations indicate that the thermal BCs make no quantitative difference for given velocity BCs <cit.>, while replacing no-slip with free-slip leads to zonal flows with reduced vertical heat transfer <cit.>. Partial support for such observations comes from the improved bound Ν≲^5/12 obtained with free-slip isothermal boundaries in two dimensions <cit.>. 
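As an aside, the prefactors in the bounds derived in the previous section are easy to verify numerically. Writing b = 1 + a, the stationarity condition above can be rearranged to A^4(1+4a)^4 = 64 R(1+a)a^3; the sketch below (an illustrative check, not part of the proof) solves this by bisection at a large Rayleigh number and evaluates the resulting bound on Ν:

import math

A = math.sqrt(8.0 / (1.0 + math.sqrt(2.0)))

def stationarity(a, R):
    # A^4 (1 + 4a)^4 - 64 R (1 + a) a^3, whose positive root gives the optimal a (with b = 1 + a).
    return A**4 * (1.0 + 4.0 * a)**4 - 64.0 * R * (1.0 + a) * a**3

def nusselt_bound(R):
    lo, hi = 1e-12, 10.0          # bracket for the positive root at large R
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stationarity(mid, R) > 0 else (lo, mid)
    a = 0.5 * (lo + hi)
    b = 1.0 + a
    L = (1.0 - (1.0 + a + b)**2 / (4.0 * b)
         + 0.5 * A * (1.0 + a + b)**1.5 * a**0.25 / (b**0.75 * R**0.25))
    return 1.0 / L

R = 1e12
print(f"numerical bound at R = {R:.0e}: Nu <= {nusselt_bound(R):.4g}")
print(f"asymptotic bound 0.5999 R^(1/3) = {0.5999 * R**(1.0/3.0):.4g}")

The two values agree closely, as expected since the asymptotic bound is sharp as R tends to infinity.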
It does not seem unreasonable to expect that a symmetry argument similar to that of this paper will extend the result to the fixed flux case, but we leave a formal confirmation to future work. On the other hand, zonal flows have not been observed in three dimensions <cit.>. More extensive three-dimensional numerical simulations should be carried out to reveal if and how free-slip conditions affect the Ν- relationship, as well as whether the thermal BCs can have any influence.Should numerical simulations in three dimensions suggest that Ν grows more slowly than ^1/2, the challenge will be to improve the scaling exponents in (<ref>)–(<ref>). The argument by <cit.> may be adapted to study the infinite- limit, but cannot be used at finite . Moreover, at finiteit does not seem sufficient to consider a more sophisticated choice of a, b, and ϕ(z)in the functional (<ref>). To provide evidence of this fact, we used  <cit.> to maximise the constant L (and, consequently, minimise the eventual bound Ν≤ L^-1) over all constants a, b and functions ϕ(z) that make the functionals in (<ref>) and (<ref>) positive semidefinite. We considered domains with period 2π and 10π in both horizontal directions, respectively, and the corresponding optimal bounds on Ν are compared to the analytic bound (<ref>) in figure <ref>(a). For both values of the horizontal period a least-square power-law fit to the numerical results for ≥ 10^6 returns L^-1≈ 0.325 ^0.33. Moreover, as illustrated in figure <ref>(b) for =10^5, the optimal ϕ(z) closely resembles the analytical profile sketched in figure <ref>: it is approximately linear with slope a+b in the bulk and it decreases near the top and bottom boundaries.This strongly suggests that carefully tuning a, b, and ϕ(z) can only improve the prefactor in (<ref>). Lowering the scaling exponent for three-dimensional RB convection at finite Prandtl number, if at all possible, will therefore demand a different approach. Recently,have proven that the auxiliary functional method gives arbitrarily sharp bounds on maximal time averages for systems governed by ordinary differential equations. This gives hope that progress may be achieved in the context of RB convection if a more general functional than (<ref>) is considered. The resulting bounding problem will inevitably be harder to tackle with purely analytical techniques,but the viability of this approach may be assessed with computer-assisted investigation based on sum-of-squares programming <cit.>. Another option is to try and lower the bound proven here through the study of optimal “wall-to-wall” transport problems <cit.>. Exactly how much these alternative bounding techniques can improve on the background method and advance our ability to derive a rigorous quantitative description of hydrodynamic systems is the subject of ongoing research.We are indebted to D. Goluskin and J. P. Whitehead, who introduced us to the problem studied in this paper. We thank them, C. R. Doering, A. Wynn, and S. I. Chernyshenko for their encouragement and helpful comments. Funding by an EPSRC scholarship (award ref. 1864077) and the support and hospitality of the Geophysical Fluid Dynamics program at Woods Hole Oceanographic Institution are gratefully acknowledged. § BOUNDEDNESS OF 𝒱The Cauchy-Schwarz inequality and the estimate ⟨θ^2⟩ = ⟨T + z^2⟩≤ 2⟨T^2⟩ + 2/3 imply that the functional in (<ref>) is bounded if ⟨u⃗^2⟩,⟨T^2⟩<∞. 
Following ideas by <cit.> and <cit.>, this holds if velocity and temperature perturbations û⃗̂:= u⃗-ψ⃗ and ϑ:=T-τ from steady background fields ψ⃗ and τ satisfy ⟨û⃗̂^2⟩,⟨ϑ^2⟩<∞. Below we briefly outline how to find suitable ψ⃗ and τ. Let T(·, 0) and u⃗(·, 0) be given initial conditions. The volume-averaged temperature and horizontal velocities (⟨ T⟩, ⟨ u⟩ and ⟨ v⟩) are conserved, e.g. ⟨ T(·,t)⟩ = ⟨ T(·, 0)⟩. This follows after taking the volume average of the Boussinesq equations using the divergence theorem, incompressibility, and the BCs. Then, let ψ⃗:=⟨ u(·, 0)⟩e⃗_x+⟨ v(·, 0)⟩e⃗_y and set τ=τ(z) with τ'(z) = -1 for z∈[0,δ]∪[1-δ,1] and τ'(z) = 1 for z∈(δ,1-δ), for some δ>0 to be determined and the constant of integration chosen such that ∫_0^1 τ(z) dz = ⟨ T(·, 0)⟩. It follows that û⃗̂ satisfies the same BCs as the full velocity field, ϑ satisfies ∂_zϑ|_z=0=0=∂_zϑ|_z=1, and ⟨ϑ⟩=⟨û⟩=⟨v̂⟩=0 at all times. Since ψ⃗ and τ are independent of time and ψ⃗ is uniform in space, for any constant C>0 we can use incompressibility, the BCs, and the Boussinesq equations to write d / d t⟨ϑ^2/2 + û⃗̂^2/2 ⟩ = - ⟨ϑ^2 + û⃗̂^2/ + (τ'-1) ŵϑ + (τ'+1)∂_z ϑ + C⟩ + C. The task is then to find δ in (<ref>), C>0, and a constant γ>0 such that ⟨ϑ^2 + û⃗̂^2/ + (τ'-1) ŵϑ + (τ'+1)∂_z ϑ + C⟩ - γ⟨ϑ^2/2 + û⃗̂^2/2 ⟩≥ 0 for all time-independent trial fields û⃗̂ and ϑ with ⟨ϑ⟩=⟨û⟩=⟨v̂⟩=0 and ∇·û⃗̂=0 that satisfy the BCs. In fact, combining (<ref>) and (<ref>) shows that ⟨ϑ^2/2 + û⃗̂^2/(2 )⟩ decays when it is large, and therefore remains bounded. Hence, ⟨û⃗̂^2⟩ and ⟨ϑ^2⟩ are also bounded. Inequality (<ref>) can be proven wavenumber by wavenumber upon considering horizontal Fourier expansions for ϑ and û⃗̂ provided that (i) γ< min{4, 4, 2k_m^2, 2 k_m^2} with k_m^2 := min_k⃗≠(0,0)k^2 (here k^2 is the magnitude of the horizontal wavevector, cf. <ref>; the minimum is strictly positive because we work in a finite periodic domain), (ii) C is sufficiently large, and (iii) δ is sufficiently small. Nonzero wavevectors can be analysed using estimates similar to those of <ref>, while the case k⃗=(0,0) is handled using Poincaré-type inequalities deduced using the zero-average conditions ⟨ϑ⟩=⟨û⟩=⟨v̂⟩=0.
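As a purely illustrative aside (not part of the proof), the optimisation of a and b described in the main text is straightforward to evaluate numerically. The sketch below assumes only the expressions quoted there, namely A = (8/(1+√2))^1/2, the stationarity condition A^4(1+4a)^4 = 64R(1+a)a^3 with b = 1+a, and the resulting constant L; the function names and the use of SciPy's bracketed root finder are our own choices. It prints the bound Ν ≤ 1/L next to the asymptotic estimate 0.5999 R^1/3 for a few values of R.

import numpy as np
from scipy.optimize import brentq

A = np.sqrt(8.0/(1.0 + np.sqrt(2.0)))          # A := (8/(1+sqrt(2)))^(1/2)

def optimal_a(R):
    # Positive root of A^4 (1+4a)^4 - 64 R (1+a) a^3 = 0; a positive root exists
    # for R > 4 A^4 ~ 43.9, and only R >= 120 matters (see the main text).
    f = lambda a: A**4*(1.0 + 4.0*a)**4 - 64.0*R*(1.0 + a)*a**3
    return brentq(f, 1e-12, 10.0)

def nusselt_bound(R):
    # L evaluated at the stationary point with b = 1 + a; the bound is Nu <= 1/L.
    a = optimal_a(R)
    b = 1.0 + a
    L = (1.0 - (1.0 + a + b)**2/(4.0*b)
         + 0.5*A*(1.0 + a + b)**1.5 * a**0.25/(b**0.75 * R**0.25))
    return 1.0/L

for R in (1e3, 1e4, 1e6, 1e8):
    print("R = %.0e :  Nu <= %8.2f   (asymptotic 0.5999 R^(1/3) = %8.2f)"
          % (R, nusselt_bound(R), 4.0*R**(1.0/3.0)/(3.0*A**(4.0/3.0))))

A wide bracket for brentq suffices here because, as noted above, the quartic changes sign exactly once on the positive axis for the values of R of interest.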
http://arxiv.org/abs/1709.08932v2
{ "authors": [ "Giovanni Fantuzzi" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170926103205", "title": "Bounds for Rayleigh-Bénard convection between free-slip boundaries with an imposed heat flux" }
Department of Physics, The University of Texas at Austin, Austin, TX, 78712, USA

In this work we use Floquet-Bloch theory to study the influence of circularly and linearly polarized light on two-dimensional band structures with semi-Dirac band touching points, taking the anisotropic nearest neighbor hopping model on the honeycomb lattice as an example. We find circularly polarized light opens a gap and induces a band inversion to create a finite Chern number in the two-band model. By contrast, linearly polarized light can either open up a gap (polarized in the quadratically dispersing direction) or split the semi-Dirac band touching point into two Dirac points (polarized in the linearly dispersing direction) by an amount that depends on the amplitude of the light. Motivated by recent pump-probe experiments, we investigated the non-equilibrium spectral properties and momentum-dependent spin-texture of our model in the Floquet state following a quench in the absence of phonons, and in the presence of phonon dissipation that leads to a steady-state independent of the pump protocol. Finally, we make connections to optical measurements by computing the frequency dependence of the longitudinal and transverse optical conductivity for this two-band model. We analyze the various contributions from inter-band transitions and different Floquet modes. Our results suggest strategies for optically controlling band structures and experimentally measuring topological Floquet systems.

Floquet band structure of a semi-Dirac system
Qi Chen, Liang Du, and Gregory A. Fiete
December 30, 2023
=============================================

§ INTRODUCTION

Recent years have witnessed dramatic advances in understanding the topological properties of the band structure of quantum many-particle systems<cit.>. These include time-reversal (TR) breaking integer quantum Hall systems, TR invariant two-dimensional quantum spin Hall systems, and three-dimensional topological insulators (TIs). When inter-particle interactions are included, the phenomenology is even more diverse<cit.>. Certain isotropic low-energy dispersions are known to have particular stability conditions with respect to inter-particle interactions. For example, two-dimensional Dirac points are perturbatively stable to interactions, requiring a finite interaction strength to open a gap<cit.>, which underlies the low-energy properties of single-layer graphene<cit.>. By contrast, two-dimensional quadratic band touching points are known to be perturbatively unstable (i.e. a gap is opened, or the band touching point splits into two Dirac points) to interactions<cit.>. On the other hand, anisotropic band touching points dominating the low-energy physics are more intriguing as both Coulomb interactions and disorder can have interesting consequences<cit.>. Notably, semi-Dirac fermions have an anisotropic dispersion which displays a linear dispersion along one direction and a quadratic dispersion in the perpendicular direction<cit.>. Such a dispersion can be realized in phosphorene, in TiO_2 / VO_2 superlattices<cit.>, deformed graphene, and BEDT-TTF_2I_3 salt under pressure<cit.>. Systems with semi-Dirac band touching points are unstable to Coulomb interactions and display marginal Fermi liquid behavior with well-defined quasi-particles<cit.>. Another interesting class of topological states studied in recent years arises from the non-equilibrium generation of interesting band structures under the influence of a periodic drive<cit.>.
At the non-interacting level, dramatic changes in the band structure can occur, including a change from a non-topological band structure to a topological one<cit.>. Two commonly discussed physical scenarios for periodically driven systems include periodic changes in the laser fields that establish the optical lattice potential for cold atom systems<cit.> and solid state systems that are driven by a monochromatic laser field<cit.>.Recent work shows that a quadratic band touching point in two-dimensions has a gap opened by virtual two-photon absorption and emission processes in some cases,<cit.> while it can be opened by one-photon processes in others.<cit.>By contrast, linearly polarized light splits the quadratic band touching point into two Dirac points by an amount that depends only on the amplitude and polarization direction of the light<cit.>. When inter-particle interactions are included, energy is typically absorbed from the periodic drive<cit.> and a closed many particle system will generically end up at infinite temperature in the infinite time limit, unless nongeneric conditions such as many body localization are present<cit.>. On the other hand, if the system is open, i.e. coupled to a bath such as phonons, it is possible for a balance?? to be established where the average energy (over a drive period) absorbed by the system from the drive can be released to the bath and a nonequilibrium steady state established<cit.>. Previous studies have mostly been performed on Floquet steady states in systems with isotropic low-energy dispersions<cit.> while a thorough examination of anisotropic band touching points under periodic drive is still lacking.In this paper, we focus on a periodically driven semi-Dirac band model on the honeycomb lattice. We demonstrate that circularly polarized light can induce a TR breaking topological band structure carrying finite Chern numbers in the non-equilibrium steady states, while linearly polarized light can split the semi-Dirac point into two linearly dispersing Dirac points.A quench into the Floquet state yields a strongly momentum-dependent spin density. By contrast, we find an open semi-Dirac system with phonon dissipation can remove the anisotropy introduced by the quench from the initial state, which is qualitatively similar to the study of the Dirac dispersion<cit.>.We examine the spin-averaged ARPES spectrum, the time-averaged spin density, and we compute the longitudinal and Hall optical conductivity.We analyze the contribution from different Floquet modes and emphasize the important differences between linearly polarized and circularly polarized driving fields. Our paper is organized as follows. In Sec. <ref>, we describe the lattice Hamiltonian we study, and in Secs. <ref> and <ref> we discuss the influence of a monochromatic laser field of different polarizations, intensities, and frequencies on the Hamiltonian. In Sec. <ref> we present the spectral function and time-averaged spin texture. In Sec. <ref>, we compute the finite-frequency longitudinal optical conductivity of the model for different laser parameters. In Sec. <ref> we address the Hall optical conductivity in comparison with the longitudinal components. In Sec. 
<ref> we summarize the main conclusions of this work and discuss their relevance to real materials.Details of the derivation of the longitudinal optical conductivity and the low energy effective model are presented in the Appendices.§ LATTICE MODEL AND BAND STRUCTURE We study a honeycomb lattice model with anisotropic hopping that leads to semi-Dirac dispersions at low energy.We also consider a coupling of electrons to a bath of phonons. The total Hamiltonian isH=H_0+H_ph+H_c,where H_0 is the tight-binding model with different values of nearest-neighbor (NN) hopping parameters that produces the semi-Dirac band touching point:H_0=∑ _𝐥(t c_B, 𝐥+𝐚_1^†c_A,𝐥+t c_B, 𝐥+𝐚_2^†c_A, 𝐥+t'c_B, 𝐥^†c_A, 𝐥)+h.c.,and H_ph is the phonon Hamiltonian, with H_c the Hamiltonian describing the coupling of electrons and phonons.In Fig. <ref>, the primitive lattice vectors are chosen as (we set the lattice constant a=1 in the remainder of the paper),𝐚_1=a(3/2,√(3)/2), 𝐚_2=a(3/2,-√(3)/2),where t is the NN hopping integral along δ_1=(1/2,√(3)/2) and δ_2=(1/2,-√(3)/2), t' is the NN hopping integral along δ_3=(-1,0), c_A(B)i, c^†_A(B)i are creation and annihilation operators of electrons on the A(B) sublattices. The electron Hamiltonian H_0 can be Fourier transformed and then diagonalized. The electron dispersions and corresponding band structure are obtained from the eigenvalues: ϵ _±(𝐤)=±√(2t^2+t'^2+2t^2cos√(3)k_y+4t't cos(3/2k_x)cos(√(3)/2k_y)). For t'≠ 2t, there are two Dirac points in the first Brillouin zone. If we set t'=2t, the dispersionis quadratic along k_y and linear along k_x near the position of the band touching point𝐌=(2π/3a,0).The spectrum (Fig. <ref>) is linear in k_x and quadratic in k_y. The standard 𝐤·𝐩 Hamiltonian readsH_SD(𝐤)=k_y^2/2mσ_x+v_F k_x σ_y,with the effective mass m=2/3t and the fermi velocity v_F=3 t. In the following sections, we set t'=2t to investigate semi-Dirac points under the influence of a periodically driven electric field. Furthermore, dissipation from the environment affects the electron distribution and thus the spectral density together with the electrical transport coefficients. Here we consider dissipation due to coupling to two-dimensional phonons, similar to the approach of Refs. [Dehghani2014, Dehghani2015, Dehghani2015a]: the phonon part of Eq. (<ref>) is a bilinear form of free boson operators:H_ph=∑_q,i=x,yω_q i b^†_q ib_q i,and the electron-phonon coupling is specified asH_c=∑_k q,σ, σ^'=A,Bω_q i c^†_k σ𝐀_ph (q) ·σ_σσ^' c_k σ^',with𝐀_ph (q)=[λ_x,q(b^†_q x+b_-q x), λ_y,q(b^†_q y+b_-q y)],representing the phonon field. Here, σ, σ'=A, B are pseudo-spin labels of sublattices. Above we have made the assumption that phonon induced electron scattering with different quasi-momentum does not occur<cit.>. In the following calculations, the electronic states at different quasi-momenta k are independently coupled to the reservoir and the broadening effect of electron-phonon interaction is not taken into account.§ PERIODIC DRIVE UNDER A LASER FIELDWhen the system is coupled to a laser field, the Hamiltonian is modified according to the Peierls substitution 𝐤→𝐤+𝐀(t_1):H_𝐤(t_1)=∑ _𝐤(c_𝐤 A^†,c_𝐤 B^†)( [0 h_𝐤^A B(t_1); [h_𝐤^A B(t_1)]^*0;])( [ c_𝐤 A; c_𝐤 B; ]),where we use t_1 as the time label to distinguish it from the hopping parameter andh_𝐤^A B(t_1)=∑ _i=1,2 t e^i (𝐤+𝐀(t_1))·δ_i+t'e^i (𝐤+𝐀(t_1))·δ_3.In Eq. (<ref>), we set Planck's constant ħ = 1, the speed of light c = 1, and the charge of the electron e = 1, and adopt the Coulomb gauge by setting the scaler potential ϕ = 0. 
We ignore the tiny effect of the magnetic field. The units of energy are expressed in terms of the hopping t and we set t=1. As h_𝐤^A B is not invariant under translation by integer multiples of n_ib_i, we could recover the symmetry by the shift c_𝐤B→ c_𝐤Be^i 𝐤·δ_3:h_𝐤^A B(t_1)=t'e^i 𝐀(t_1)·δ_3+∑ _i=1,2 t e^i 𝐤·𝐚_i+i 𝐀(t_1)·δ_i.Throughout this paper, circularly polarized laser fields are expressed with the vector potential 𝐀(t_1)=A(cos (Ωt_1),sin (Ωt_1)) and linear polarized laser fields are expressed with 𝐀(t_1)=A(sin (Ωt_1), 0) and 𝐀(t_1)=A(0, sin (Ωt_1)) for the polarization along k_x and k_y direction, respectively, where A is the amplitude and Ω the frequency of the laser.§ FLOQUET THEORY Since the laser field can be approximated as monochromatic (single frequency) light, it renders the Hamiltonian periodic in time: H(t)=H(t+T), where T is the period corresponding to Ω=2π/T. In analogy to the periodicity in lattice translations that leads to Bloch's theorem, one can apply Floquet's theory<cit.>. The Floquet eigenfunction can be expressed as|Ψ_k α(t)⟩ =e^i ϵ _k αt|ϕ_k α(t)⟩,where |ϕ_k α(t)⟩ = |ϕ_k α(t+T)⟩ are the Floquet quasimodes and ϵ_k α is the corresponding quasienergy for band α. Substituting this form of the wave function into the time-dependent Schrodinger equation, and defining the Floquet Hamiltonian operator as ℋ(t) = H(t) - i∂/∂ t, one findsℋ(t)|ϕ_k α(t)⟩ = ϵ_k α |ϕ_k α(t)⟩.By performing a Fourier transformation on timeH_αβ^n= 1/T∫ _0^TH_αβ(t)exp (-i n Ωt)dt, |ϕ_k α(t)⟩ = ∑_m e^i m Ω t |ϕ_k α^m⟩,with m,n=0, ± 1, ± 2,…, ±∞, one arrives at∑_m H_F^n m |ϕ_k α^m⟩ = ϵ_k α |ϕ_k α^n⟩,whereH_F^n m=H^n-m+n ΩI δ _n m,is the Floquet Hamiltonian living in the enlarged Floquet Hilbert space<cit.>. In the lattice model we studied,h_𝐤^A B(m-n) = 1/T∫ _0^Th_𝐤^A B(t_1)exp[-i (m-n) Ωt_1]dt_1 = 1/T∫ _0^Tdt_1[t e^i [3k_x/2+√(3)k_y/2+A_x(t_1)/2+√(3)A_y(t_1)/2]+t e^i [3k_x/2-√(3)k_y/2+A_x(t_1)/2-√(3)A_y(t_1)/2]+t'e^-i A_x(t_1)]×exp[-i (m-n) Ωt_1]. In the numerical evaluation, we truncate the range of Floquet modes to m,n=0,±1,±2,±3,±4 and verified that a larger range of m,n has little numerical impact on our results for the frequencies and electric field amplitudes we considered. §.§ Circularly polarized caseFor circularly polarized light, Eq. (<ref>) becomes h_𝐤^A B(m-n)=t e^i [3k_x/2+√(3)k_y/2]J_m-n(A)exp[i (m-n) π/6]+t e^i [3k_x/2-√(3)k_y/2]J_m-n(A)exp[i (m-n) 5π/6]+t'J_m-n(A)e^-i (m-n)π/2, where J_n(x) is the order-n Bessel function of the first kind. The Floquet band structures are displayed in Fig. <ref>(a-c). For A=1.5, Ω=5t, there exists a gap between upper and lower band in a single Floquet copy. The gap size is Δ≈ 0.52 t. As a comparison, the gap size for A=1.5, Ω=10t is Δ≈ 0.15 t. They both hold finite Chern numbers C=1 for fully occupied “lower" bands (which one is “lower" is essentially a gauge choice; we refer here to the lower one in our figure), indicating the existence of topologically non-trivial transport properties under TR breaking circularly polarized light. A higher chern number with C=2 is realized by A=2.4, Ω=5t with a small gap size Δ≈ 0.05. But unlike the graphene case<cit.>, we do not find C=3 for the semi-Dirac band structure in the presence of a laser field.Moreover, we discovered that the leading order contribution to Δ is O(A^4/Ω^2) in the small field amplitude and large frequency limit. This is revealed by the low energy effective theory in the high frequency expansion<cit.> up to O(1/Ω^2). The detailed analysis is given in Appendix <ref>. 
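As an illustration of the construction just described (a minimal sketch of our own, not code from the original work), the truncated Floquet matrix H_F^{nm} = H^{n-m} + nΩδ_{nm} for the circularly polarized drive can be assembled directly from the Bessel-function components of Eq. (<ref>) and diagonalized at the semi-Dirac point M = (2π/3, 0). With t = 1, t' = 2t, A = 1.5 and Ω = 5t it should reproduce a quasienergy gap of the order of the Δ ≈ 0.5t quoted above (up to truncation effects), and repeating the exercise at small A and large Ω makes the O(A^4/Ω^2) scaling of the gap easy to check numerically.

import numpy as np
from scipy.special import jv

t, tp = 1.0, 2.0                         # t' = 2t puts the semi-Dirac point at M

def h_ab(kx, ky, A, l):
    # Fourier component h_k^{AB}(l) of the driven hopping, circular polarization
    return (t*np.exp(1j*(1.5*kx + np.sqrt(3)/2*ky))*jv(l, A)*np.exp(1j*l*np.pi/6)
            + t*np.exp(1j*(1.5*kx - np.sqrt(3)/2*ky))*jv(l, A)*np.exp(1j*l*5*np.pi/6)
            + tp*jv(l, A)*np.exp(-1j*l*np.pi/2))

def floquet_matrix(kx, ky, A, Omega, mmax=4):
    # H_F^{nm} = H^{(n-m)} + n*Omega*delta_{nm}, truncated to |n|, |m| <= mmax
    modes = list(range(-mmax, mmax + 1))
    HF = np.zeros((2*len(modes), 2*len(modes)), dtype=complex)
    for i, n in enumerate(modes):
        for j, m in enumerate(modes):
            blk = np.array([[0.0, h_ab(kx, ky, A, n - m)],
                            [np.conj(h_ab(kx, ky, A, m - n)), 0.0]])
            if n == m:
                blk = blk + n*Omega*np.eye(2)
            HF[2*i:2*i+2, 2*j:2*j+2] = blk
    return HF

A, Omega = 1.5, 5.0
eps = np.linalg.eigvalsh(floquet_matrix(2*np.pi/3, 0.0, A, Omega))
lo, hi = np.sort(eps[np.argsort(np.abs(eps))[:2]])  # two quasienergies closest to zero
print("quasienergy gap at M for A=1.5, Omega=5t :", hi - lo)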
We would like to emphasize that this leading order contribution is different from either that of the quadratic band touching point<cit.> O(A^4/Ω) or of the Dirac point<cit.> O(A^2/Ω). §.§ Linearly polarized caseIn this work, we consider linear polarization in the x and y directions to reflect the symmetry of the semi-Dirac dispersion in our model. When the driven field is polarized in the x-direction, i.e., the linearly dispersing direction around the M point according to Eq. (<ref>), Eq. (<ref>) is reduced to h_𝐤^A B(m-n)=t e^i [3k_x/2+√(3)k_y/2]J_m-n(A/2)+t e^i [3k_x/2-√(3)k_y/2]J_m-n(A/2)+t'J_n-m(A).Similarly, if the polarization is in the y-direction, i.e., the quadratically dispersing direction around the M point, Eq. (<ref>) readsh_𝐤^A B(m-n)=t e^i [3k_x/2+√(3)k_y/2]J_m-n(√(3)A/2)+t e^i [3k_x/2-√(3)k_y/2]J_n-m(√(3)A/2)+t'. Fig. <ref>(c),(d) display the Floquet band structures under linearly polarized light along the x and y directions, respectively. Unlike circularly polarized light, linearly polarized light does not break the time-reversal symmetry<cit.> and therefore the Chern number must be zero. For polarization along the quadratically dispersing direction, we find a gap opening induced at the band touching point. The gap size is of order O(A^2) and can be estimated from the zeroth order high frequency expansion of the low energy Hamiltonian in Appendix <ref>.In contrast to the circular polarization case, the leading order contribution to the gap is independent of the driving frequency, Ω. On the other hand, when the polarization is along the linearly dispersing direction, the bands remain gapless and the semi-Dirac band touching point described by Eq. (<ref>) is split into two single Dirac points. This particular feature of the Floquet bands can be roughly understood in the zeroth order high frequency expansion of the lattice model itself, which is the n=0 case of Eq. (<ref>): h_𝐤^A B(m-n=0)=t e^i [3k_x/2+√(3)k_y/2]J_0(A/2)+t e^i [3k_x/2-√(3)k_y/2]J_0(A/2)+t'J_0(A). As long as A ≠ 0, the coefficients in front of the phase factors in eq. (<ref>) do not change signs and the proportion t/t^' is only renormalized by J_0(A/2)/J_0(A), which leads to the splitting of semi-Dirac point into two Dirac points as J_0(A/2)/J_0(A) ≠ 1 in analogy with different t/t^' values in the static Hamiltonian Eq. (<ref>). Moreover, the two Dirac points are on the k_y axis and their separation in the BZ is proportional to A^2 and independent of Ω, up to leading order in the high frequency limit. This dependency is again captured by the low energy model (Appendix <ref>).§ SPECTRAL FUNCTION In this section, we examine the electronic spectral density of our model in both closed and open systems.One can expand the fermionic operators in the quasimode basis at time t_0,c_𝐤σ(t_0)=∑ _α 'ϕ _𝐤α '^σ(t_0)γ _𝐤α ',where γ _𝐤α ' annihilates aparticle in Floquet state 𝐤α '. In a closed system, the electron occupation probability is given byρ_k, α=| ⟨ϕ _k α(0)|ψ _in, k⟩|^2,where |ψ _in, k⟩ is the initial state chosen to be the ground state of Eq. (<ref>). In an open system, we consider electrons coupled to a phonon bath described by Eq. (<ref>). We assume the reservoir of phonons to remain in thermal equilibrium at a temperature T. Inelastic scattering between electrons and phonons will cause the electron distribution function to relax and ρ_k, α can be solved using the methods of Ref. 
[Dehghani2014].The pseudo-spin-resolved ARPES spectrum is given by the lesser Greens function, i g_σσ^<(k,ω ), with pseudospin labels σ =A,B. The analytical expression is derived in Ref. [Dehghani2014]:i g_A A^< (k,ω) = 2 π∑ _m αδ(ω -[ϵ_k α-m Ω])|a_m k α| ^2ρ _k, α, i g_B B^<(k,ω ) = 2π∑ _m αδ(ω -[ϵ _k α-m Ω])|b_m k α| ^2ρ _k, α,where a_m k α, b_m k α are the Fourier transformed components of the Floquet eigenvectors,|ϕ _k α(t)⟩ =∑ _m∈int e^i m Ωt( [ a_m k α; b_m k α; ]).Then the total spectral density is A(k,ω )=Im[Tr(g^R)].<cit.> When the electron occupation probability is taken into account, the spectral density has an imbalance between upper and lower m=0 Floquet bands. The total spectral density is a sum over psuedo-spin states i ∑ _σg_σσ^<(k,ω ).It is also possible to measure the momentum resolved pseudo-spin polarization texture averaged over a period of the driving laser field, which is obtained from,P_z(k_x,k_y)=i∫dω/2π∑ _σσg_σσ^<(k,ω ).In the following, we will discuss the momentum and energy resolved spectral density and the momentum resolved pseudo-spin polarization for both closed and open systems under different polarizations of light. §.§ Circularly polarized lightThe ARPES spectrum and pseudo-spin textures in a circularly polarized laser field are shown in Fig. <ref>. From the spectral density along the high symmetry line, one can see the appearance of Floquet side bands. Without phonons, the system is quenched from its initial state to the Floquet eigenstate with an electron distribution density given by Eq. (<ref>). This is a highly nonthermal state in which the memory of the initial state is retained and the state does not thermalize<cit.>. As a result, the ARPES spectrum intensity in Fig.<ref>(a-b) exhibits discontinuity at the K point along Γ→ K →Γ^' and anisotropy at the M point along Γ→ M →Γ^'. The same character around K can be observed in the momentum slices of P_z(k_x,k_y) in Fig.<ref>(c-d). The asymmetry at K and M can be understood<cit.>. When the initial gauge field is pointing along the x̂ direction, ρ^quench_k α around K and M has a strong angle dependence on the phase angle θ(𝐤) of the initial ground state. In the presence of a phonon bath, Fig. <ref> shows that the lattice symmetry is retained in the ARPES spectrum and pseudo-spin textures, indicating that the phonons cause a loss of the memory of the initial states<cit.> and lead to a nonequilibrium steady state distribution. In particular, the pseudo-spin texture with a phonon bath has perfect symmetry around k_x. For A=1.5, Ω=5t, the band is predominantly of sublattice B character<cit.> in the upper half of k_x-k_y plane while sublattice A dominates the lower half plane. The same phenomenon happens for the counterpart in A=2.4, Ω=5t except for the region near the BZ boundary, where positive and negative polarizations are separated by nodal lines. This is a strong indication of a further band inversion compared to A=1.5, Ω=5t.§.§ Linearly polarized case For comparison purposes, we plot the ARPES spectrum for our model in the presence of linearly polarized light in Fig. <ref>. Without phonons, it is noticeable that the asymmetry along Γ→ K →Γ^' and Γ→ M →Γ^' is no longer present for both x and y polarization in contrast with the circular polarization. This is due to the fact that at time t_1=0, the initial gauge field is exactly 0 and the electron distribution is independent of the angle between the momentum and the gauge field around K and M<cit.>. 
In the presence of phonons, one can observe the spectral weight redistribution between upper and lower bands in all cases.§ LONGITUDINAL OPTICAL CONDUCTIVITYAlthough angle resolved photoemission spectroscopy is a direct measurement of the energy spectrum in the system, it can only detect occupied states<cit.>. Here we investigate the electromagnetic response of the system.<cit.>In the following, we will present a thorough study of both the longitudinal and the Hall optical conductivity. In this section, our focus is on the longitudinal components of the ac conductivity, for which the formula is derived in Appendix <ref>, Re[σ _i i(ω )] = 1/N∑ _𝐤∑ _m D_u i d^m(𝐤)D_d i u^-m(𝐤)(ρ _𝐤u-ρ _𝐤d)×-4(ϵ _𝐤d-ϵ _𝐤u-m Ω)δ/[ω ^2-(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2]^2+2(ω ^2+(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2) δ ^2, whereD_u i d^m(𝐤)=∑ _n l⟨ϕ̃_𝐤u^n|[∂ h_𝐤^m+n-l/∂ k_i]|ϕ̃_𝐤d^l⟩.Eq. (<ref>) can be seen as a generalization of the Kubo formula for a Floquet system. The total optical conductivity is comprised of contributions from different Floquet modes, Re [σ_i i(ω )] = ∑_m Re [σ^m_i i(ω )],Re [σ^m_i i(ω ] = 1/N∑ _𝐤 D_u i d^m(𝐤)D_d i u^-m(𝐤)(ρ _𝐤u-ρ _𝐤d)×-4(ϵ _𝐤d-ϵ _𝐤u-m Ω)δ/[ω ^2-(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2]^2+2(ω ^2+(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2) δ ^2. It is worth pointing out that Eq. (<ref>) has most of its weight coming from regions where ω≈| ϵ _𝐤d-ϵ _𝐤u-m Ω|. In our study, σ_xx is along the linearly dispersing direction while σ_yy is along the quadratically dispersing direction. §.§ Circular Polarization From Fig. <ref>(a), one sees that σ_x x and σ_y y for Ω=5t, A=1.5 are similar to each other in profile. Because of a finite gap in the Floquet band structure, σ_x x and σ_y y only have appreciable contributions from inter band quasi-electron excitations with ω≳Δ (the gap).Note that σ_x x is larger than σ_y y in the whole frequency range indicating a smaller effective mass generated by the laser field along the x-direction compared to the y-direction.From Fig. <ref>(b), one sees that both σ_x x and σ_y y for Ω=5t, A=2.4 become negative around ω≈Ω. This is a characteristic feature of a Floquet system in the non-equilibrium steady state due to the non-zero electron distribution on the side bands. By examining Eq. (<ref>), one can see that when ω≈| ϵ _𝐤d-ϵ _𝐤u-m Ω| for m = -1, the numerator can change sign if a quasi-electron can be excited from the lower band to the upper band by a single photon absorption. To illustrate this point, we plot Eq. (<ref>) with all the Floquet modes for A=1.5, Ω=5t in the top and middle panel of Fig. <ref>. We notice that the m<0 contributions are negative while the m ≥ 0 contributions are all positive for both σ^m_xx and σ^m_yy. Overall, the m=0 mode dominates the low frequency regime while m ≠ 0 modes dominate the high frequency regime of the longitudinal optical conductivity. In Fig. <ref>(c), we show the case of large driving frequency of the laser field: A=1.5, Ω=10t, in which a sharp contrast between the profiles of σ_x x and σ_y y are observed. In the ideal case, the non-zero dc conductivity is due to the small gap size compared with the broadening parameter. In both closed and open systems, a finite electron distribution probability above the Fermi level will also lead to a finite contribution to the dc conductivity. §.§ Linear PolarizationNext we turn to the optical conductivity of the linearly polarized driving field. Fig. <ref>(a) and (c) display σ_x x and σ_y y for polarization along x and y direction, respectively. 
It is obvious from both plots that σ_x x and σ_y y have significant difference in peak profile, indicating a sharp contrast between the gapless and gapped Floquet bands near the Fermi level. The shift in peak positions of the longitudinal optical conductivity for x polarization (Fig. <ref>(a)) results from the anisotropy of the band structure along x and y directions, i.e. the splitting of the semi-Dirac point into single Dirac points separated along k_y. On the other hand, laser fields polarized along y-direction gives rise to a negative value for both σ_x x and σ_y y in the ideal case. This feature can be attributed to the gapless nature between the upper band of m=0 mode and the lower band of m=1 mode while ϵ_k u-ϵ_k d holds a finite gap. To illustrate the point, we plot Eq. (<ref>) with the ideal electron distribution in Fig. <ref>(b) and (d) corresponding to (a) and (c) respectively, where the dominant contribution at low probe frequency shifts from σ^m=0 to σ^m=-1 and changes sign by comparing (b) to (d). In both the quench and phonon panels of Fig. <ref>(c), the negative sign of σ^m=-1 is offset by the inversion in electron distribution between different Floquet modes and in consequence, the low frequency conductivity remains positive.§ CHERN NUMBER AND OPTICAL HALL CONDUCTIVITYStarting from the linear response theory, the optical Hall conductivity is derived as<cit.> σ _i j(ω )=-1/N∑ _𝐤,m[ϵ _𝐤d-ϵ _𝐤u+m Ω]^2F_i j 𝐤^mω ^2-(ϵ _𝐤u-ϵ _𝐤d-m Ω)^2-2i ωδ/[ω ^2-(ϵ _𝐤u-ϵ _𝐤d-m Ω)^2]^2+4ω ^2 δ ^2⟨Ψ(t_0)|[γ _𝐤d^†γ _𝐤d-γ _𝐤u^†γ _𝐤u]|Ψ(t_0)⟩,whereF_i j𝐤^m=i[∑ _l ⟨ϕ̃_𝐤u^l|∂ _k_iϕ̃_𝐤d^l-m⟩∑ _n ⟨ϕ̃_𝐤d^n|∂ _k_jϕ̃_𝐤u^n+m⟩ -∑ _l ⟨ϕ̃_𝐤d^l|∂ _k_iϕ̃_𝐤u^l+m⟩∑ _n ⟨ϕ̃_𝐤u^n|∂ _k_jϕ̃_𝐤d^n-m⟩],is the Berry curvature andA_βi α^m=1/T∫ _0^Tdt e^-i m Ωt⟨ϕ _𝐤β(t)|∂ _k_iϕ _𝐤α(t)⟩ =1/T∫ _0^Tdt e^-i m Ωt∑ _l ∑ _l' e^i l' Ωte^-i l Ωt⟨ϕ̃_𝐤β^l|∂ _k_iϕ̃_𝐤α^l'⟩ =∑ _l ⟨ϕ̃_𝐤β^l|∂ _k_iϕ̃_𝐤α^l+m⟩, is the Fourier transformed Berry connection. In the static limit ω→ 0, Eq.(<ref>), the dc Hall conductivity can be obtained asσ _i j(ω =0)=∫ _BZd^2k/(2π )^2F̅_𝐤d⟨Ψ(t_0)|[γ _𝐤d^†γ _𝐤d-γ _𝐤u^†γ _𝐤u]|Ψ(t_0)⟩,whereF_𝐤d(t)=i[⟨∂ _k_iϕ _𝐤d(t)|∂ _k_jϕ _𝐤d(t)⟩ -⟨∂ _k_jϕ _𝐤d(t)|∂ _k_iϕ _𝐤d(t)⟩],is the berry curvature in the real time. The above expression is in the unit of e^2/ħ, if we recover the units, σ _i j(ω =0)=e^2/h∫ _BZd^2k/(2π )^2F̅_𝐤d⟨Ψ(t_0)|[γ _𝐤d^†γ _𝐤d-γ _𝐤u^†γ _𝐤u]|Ψ(t_0)⟩.In the ideal case, ⟨Ψ(t_0)|[γ _𝐤d^†γ _𝐤d-γ _𝐤u^†γ _𝐤u]|Ψ(t_0)⟩ =1, Eq.(<ref>) is reduced toσ _i j(ω =0)=e^2/hC,where C is the Chern number computed as asC=1/2π∫ _BZd^2kF̅_𝐤d. In Fig. <ref>, the Hall optical conductivity is plotted together with the longitudinal components for all cases we have examined in the system with circularly polarized laser fields. The main difference between the two is the oscillation between positive and negative values in σ_x y for ω≪Ω. In particular, for ω≈ max(ϵ_k u-ϵ_k d), the optical Hall conductivity dips sharply into negative values while the longitudinal components are peaked due to the van Hove singularity. This can be explained by the different analytical behavior of the factors that include ω dependence. In Eq.(<ref>), the frequency dependent factor is sharply peaked at ϵ_k d-ϵ_k u while the counterpart in Eq.(<ref>) changes sign. In the bottom panel of Fig. <ref>, we confirm that a sign change can happen within each m in Eq.(<ref>). § CONCLUSION AND DISCUSSIONIn this work, we addressed the influence of a laser driving field on a tight-binding model on the honeycomb lattice with a semi-Dirac dispersion at the low energies. 
We studied the effects of both circularly and linearly polarized light along two characteristic directions (reflecting the anisotropy of the semi-Dirac point) and analyzed different Floquet band structures from the low-energy effective Hamiltonian obtained in the high frequency limit. Compared to a nearest-neighbor hopping graphene model, the anisotropic band touching point we studied exhibits more diversity in gap openings, avoided crossings, and mixing between different Floquet side bands. We corroborated the richness by computing the ARPES spectrum and the pseudo-spin texture within quench scenario, and one that includes phonon dissipation.These calculations connect with recent pump-probe experiments. In addition, we also studied the optical conductivity of the lattice model over the same conditions (quench and with phonons). The decomposition of the optical conductivity into different Floquet modes helps one better understand the Floquet band structure and connects to experiments by including realistic features of an electronic system in an open environment.We would like to point out that the low energy Hamiltonian that captures the Floquet bands in our system is not the same as the semi-Dirac Hamiltonian with momentum replaced by Peierls substitution, H_SD(𝐤,𝐀)=(k_y+A_y)^2/2mσ_x+v_F (k_x+A_x) σ_y,which will only includes the vector potential A_x(t) up to linear order. Thus, the gap size of leading order O(A^4/Ω^2) is not captured correctly. Moreover, for a linearly polarized laser field applied along the x-direction, there is no splitting of the semi-Dirac point into two single Dirac-points along the y-direction in Eq.(<ref>). Our study highlights the fact that even though a leading order 𝐤·𝐩 Hamiltonian is a successful low-energy effective for the static Hamiltonian in equilibrium, its time-dependent counterpart by Peierls substitution can still hold different physical content from that of the correct low energy model.Overall, our work broadens the scope for optically controlling band structures with topological band touching points and presents a detailed, experimentally accessible set of observables in lattice systems exposed to periodically driven laser field. The model we studied could be realized in modern cold atom experiments in optical lattices, in addition to the solid state systems we mentioned in the introduction. § ACKNOWLEDGMENTWe thank Hsiang-Hsuan Hung, Chungwei Lin, Ming Xie, Allan H. MacDonald for helpful discussions. We gratefully acknowledge funding from ARO grant W911NF-14-1-0579, NSF DMR-1507621, and NSF MRSEC DMR-1720595. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. § DERIVATION OF LONGITUDINAL OPTICAL CONDUCTIVITYIn this section we derive the general form of the optical conductivity<cit.>. The current-current correlation function which quantifies how an electric field applied in the direction î affects the current flowing in the direction î is given byR_i i^C(𝐪,t,t')=-i⟨ T_C[J_𝐪 I^i(t)J_-𝐪 I^i(t')]⟩,whereJ_𝐪^i(t)=1/√(N)∑ _𝐤,σσ ' c_𝐤+𝐪/2, σ^†(t)c_𝐤-𝐪/2, σ '(t)∂ h_𝐤^σσ '(t)/∂ k_i,is the current operator in the interaction representation evolved from t=t_0: 𝐉_𝐤 I(t)=U_𝐤(t_0,t)𝐉_𝐤(t_0)U_𝐤(t,t_0).The time-evolution operator is given by,U_𝐤(t,t_0)=∑ _α e^-i ϵ _𝐤α(t-t_0)|ϕ _𝐤α(t)⟩⟨ϕ _𝐤α(t_0)|,where ϵ _𝐤α is the quasi-energy and|ϕ _𝐤α(t)⟩ =( [ ϕ _𝐤α^up(t); ϕ _𝐤α^dn(t); ]),is the Floquet eigenvector. Thus, Eq. 
(<ref>) becomesU_𝐤σσ '(t,t_0)=∑ _α e^-i ϵ _𝐤α(t-t_0)ϕ _𝐤α^σ(t)ϕ _𝐤α^σ '*(t_0).In the interaction representation, c_𝐤σ^I(t)=U_𝐤σσ '(t,t_0)c_𝐤σ '^I(t_0), c_𝐤σ^I †(t)=c_𝐤σ '^I †(t_0)U_𝐤σ 'σ(t_0,t).We expand the fermionic operators in the quasi-mode basis at time t_0 asc_𝐤σ^I(t_0)=∑ _α 'ϕ _𝐤α '^σ(t_0)γ _𝐤α '.By combining Eqs.(<ref>-<ref>) and then inserting the result into Eq.(<ref>), the response function becomes R_i j(𝐪,t,t') = -i θ (t-t')1/N∑ _𝐤,αβγδ e^-i(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β)(t̅+t_r/2-t_0)e^-i(ϵ _𝐤+𝐪/2γ-ϵ _𝐤-𝐪/2δ)(t̅-t_r/2-t_0) ×⟨ϕ _𝐤+𝐪/2β(t)|[∂ h_𝐤(t)/∂ k_i]|ϕ _𝐤-𝐪/2α(t)⟩⟨ϕ _𝐤-𝐪/2δ(t')|[∂ h_𝐤(t')/∂ k_j]|ϕ _𝐤+𝐪/2γ(t')⟩ ×⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤-𝐪/2α,γ _𝐤-𝐪/2δ^†γ _𝐤+𝐪/2γ].|Ψ(t_0)⟩≈-i θ (t-t')1/N∑ _𝐤,αβ e^-i(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β)t_r ×⟨ϕ _𝐤+𝐪/2β(t)|[∂ h_𝐤(t)/∂ k_i]|ϕ _𝐤-𝐪/2α(t)⟩⟨ϕ _𝐤-𝐪/2α(t')|[∂ h_𝐤(t')/∂ k_j]|ϕ _𝐤+𝐪/2β(t')⟩ ×⟨Ψ(t_0)|γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α|Ψ(t_0)⟩, where α ,β =u,d, u,d represent upper and lower band in a Floquet mode respectively. In the last approximate equality of Eq.(<ref>), we drop the term with fast oscillation factor e^-i (ϵ_k u-ϵ_k d)(t+t')/2. We set⟨ϕ _𝐤+𝐪/2β(t)|[∂ h_𝐤(t)/∂ k_i]|ϕ _𝐤-𝐪/2α(t)⟩ =∑ _m e^i m ΩtD_βi α^m(𝐤,𝐪),and rewrite Eq.(<ref>) asR_i j(𝐪,t,t')=-i θ (t-t')1/N∑ _𝐤,αβ e^-i(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β)t_r ×∑ _m e^i m ΩtD_βi α^m(𝐤,𝐪)∑ _m' e^i m' Ωt'D_αj β^m'(𝐤,-𝐪)×⟨Ψ(t_0)|γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α|Ψ(t_0)⟩.Averaged over t+t'/2, only m=-m' term of Eq.(<ref>) has a contribution: R_i j(𝐪,t_r,mode=0) = -i θ (t_r)1/N∑ _𝐤,αβ e^-i(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β)t_r ×∑ _m e^i m Ωt_rD_βi α^m(𝐤,𝐪)D_αj β^-m(𝐤,-𝐪)⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α]|Ψ(t_0)⟩.By Fourier transform Eq.(<ref>) with respect to t_r, one arrives atR_i j(𝐪,ω ,mode=0) = ∫ dt_rR_i j(𝐪,t_r,mode=0)e^i (ω +iδ) t_r= -i ∫ dt_re^i (ω +iδ) t_rθ (t_r)1/N∑ _𝐤,αβ e^-i(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β)t_r∑ _m e^i m Ωt_rD_βi α^m(𝐤,𝐪)D_αj β^-m(𝐤,-𝐪)×⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α]|Ψ(t_0)⟩= 1/N∑ _𝐤,αβ∑ _m D_βi α^m(𝐤,𝐪)D_αj β^-m(𝐤,-𝐪)⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α]|Ψ(t_0)⟩/ω +iδ-(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β-m Ω),where the longitudinal component can be extracted asR_i i(𝐪,ω ,mode=0)=1/N∑ _𝐤,αβ∑ _m D_βi α^m(𝐤,𝐪)D_αi β^-m(𝐤,-𝐪)⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α]|Ψ(t_0)⟩/ω +iδ-(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β-m Ω).In the limit 𝐪→ 0, Eq.(<ref>) is reduced toR_i i(𝐪=0,ω ,mode=0) = 1/N∑ _𝐤,αβ∑ _m D_βi α^m(𝐤)D_αi β^-m(𝐤)⟨Ψ(t_0)|[γ _𝐤+𝐪/2β^†γ _𝐤+𝐪/2β-γ _𝐤-𝐪/2α^†γ _𝐤-𝐪/2α]|Ψ(t_0)⟩/ω +iδ-(ϵ _𝐤-𝐪/2α-ϵ _𝐤+𝐪/2β-m Ω)= 1/N∑ _𝐤∑ _m D_u i d^mD_d i u^-m⟨Ψ(t_0)|[γ _𝐤u^†γ _𝐤u-γ _𝐤d^†γ _𝐤d]|Ψ(t_0)⟩2(ϵ _𝐤d-ϵ _𝐤u-m Ω)/ω ^2-(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2+2i ωδ -δ ^2withD_u i d^m(𝐤)=∑ _n l⟨ϕ̃_𝐤u^n|[∂ h_𝐤^m+n-l/∂ k_i]|ϕ̃_𝐤d^l⟩.Thus the longitudinal optical conductivity is evaluated asRe [σ _i i(ω )] ≡ Im R_i i(𝐪=0,ω ,mode=0)/ω= 1/N∑ _𝐤∑ _m D_u i d^m(𝐤)D_d i u^-m(𝐤)(ρ _𝐤u-ρ _𝐤d)×-4(ϵ _𝐤d-ϵ _𝐤u-m Ω)δ/[ω ^2-(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2]^2+2(ω ^2+(ϵ _𝐤d-ϵ _𝐤u-m Ω)^2) δ ^2.§ LOW ENERGY EFFECTIVE HAMILTONIANIn this section, we derive the low energy time-dependent Hamiltonian from the lattice model Eq.(<ref>) which we rewrite here for convenience: h_𝐤^A B(t_1) = 2t e^i 𝐀(t_1)·δ_3+∑ _i=1,2 t e^i 𝐤·𝐚_i+i 𝐀(t_1)·δ_i=t e^i [3k_x/2+√(3)k_y/2+A_x(t_1)/2+√(3)A_y(t_1)/2]+t e^i [3k_x/2-√(3)k_y/2+A_x(t_1)/2-√(3)A_y(t_1)/2]+2t e^-i A_x(t_1), where we used t_1 as time to be distinguished from the hopping parameter. 
By expanding Eq.(<ref>) up to O(k^2) and O(A^2) in the vicinity of 𝐃=(2π/3,0), one arrives at h_𝐤^A B(t_1)≈ -3t (A_x(t_1)+p_x)i-3t/4(A_x(t_1)^2-A_y(t_1)^2-2 A_x(t_1) p_x-3 p_x^2-2 A_y(t_1) p_y-p_y^2),where (p_x,p_y) is the momentum around 𝐃=(2π/3,0).This can also be written in the compact matrix form asH_𝐤(t_1)=3t/4(-A_x(t_1)^2+A_y(t_1)^2+2 A_x(t_1) p_x+3 p_x^2+2 A_y(t_1) p_y+p_y^2)σ _x+3t (A_x(t_1)+p_x)σ _y. The dominant features of the band structure can be understood by considering the effective Hamiltonian at large driving frequency Ω, which is given by<cit.> H_𝐤^eff = H_𝐤^0+1/Ω[H_𝐤^1,H_𝐤^-1]+[H_𝐤^-1,[H_𝐤^0,H_𝐤^1]]+[H_𝐤^1,[H_𝐤^0,H_𝐤^-1]]/2 Ω ^2 -[H_𝐤^1,[H_𝐤^-2,H_𝐤^1]]+[H_𝐤^-1,[H_𝐤^2,H_𝐤^-1]]/3Ω ^2+[H_𝐤^-1,[H_𝐤^-1,H_𝐤^2]]+[H_𝐤^1,[H_𝐤^1,H_𝐤^-2]]/6Ω ^2, where H_𝐤^n is computed from Eq.(<ref>). We will discuss the form of H_𝐤^eff in different polarization of the laser field. §.§ Circularly polarized laser field In the circularly polarized light, one has the following Fourier components:H_𝐤^0 = 3t p_xσ _y+(3t/4p_y^2+9t/4p_x^2)σ _x, H_𝐤^1 = (3t A p_x/4 -i3t A p_y/4)σ _x+3t A/2σ _y,H_𝐤^-1 = (3t A p_x/4+i3t A p_y/4)σ _x+3t A/2σ _y, H_𝐤^2 = -3t A^2/8σ _x,H_𝐤^-2=-3t A^2/8σ _x,By inserting Eq.(<ref>) into Eq.(<ref>),H_𝐤^eff=(3t/4p_y^2+9t/4p_x^2-27 t^3A^2/8 Ω ^2(p_x^2+p_y^2+A^2))σ _x+3t p_x(1-9 t^2A^2/8 Ω ^2(p_x^2-p_y^2+A^2/2))σ _y+9(t A)^2/4Ωp_yσ _z. The energies of Eq.(<ref>) contain a gap Δ =27 t^3A^4/4Ω ^2 at (0,0). Notice that we keep p_x^2 term only for the convenience of momentum expansion. The dispersion is dominated by O(k_x).§.§ Linearly polarized laser fieldIn the linearly polarized field along x-direction, the effective Hamiltonian readsH_𝐤^eff=H_𝐤^0=(3 t/4p_y^2 +9t/4p_x^2-3t/8A^2)σ _x+3t p_xσ _y.The spectrum ofEq.(<ref>) includes two symmetric Dirac points along y-direction. The distance between the two band touching points is |Δ𝐤|=√(2)A. In the linearly polarized field along y-direction, the effective Hamiltonian readsH_𝐤^eff=H_𝐤^0=(3 t/4p_y^2 +9t/4p_x^2+3t/8A^2)σ _x+3t p_xσ _y,which contains a gap of size Δ =3t A^2/4.
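To make the statements of this appendix concrete, the short sketch below (ours; the parameters A = 0.5 and Ω = 10t are arbitrary illustrative choices) evaluates the three effective Hamiltonians quoted above and verifies the quoted gap and splitting: Δ = 27t^3A^4/(4Ω^2) at p = 0 for circular polarization, a vanishing gap at p = (0, ±A/√2) for polarization along x (i.e. a splitting of √2 A along p_y), and Δ = 3tA^2/4 for polarization along y.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0

def H_circ(px, py, A, Om):
    # Effective Hamiltonian for circular polarization, high-frequency expansion to O(1/Omega^2)
    cx = 0.75*t*py**2 + 2.25*t*px**2 - 27*t**3*A**2/(8*Om**2)*(px**2 + py**2 + A**2)
    cy = 3*t*px*(1 - 9*t**2*A**2/(8*Om**2)*(px**2 - py**2 + 0.5*A**2))
    cz = 9*(t*A)**2/(4*Om)*py
    return cx*sx + cy*sy + cz*sz

def H_lin_x(px, py, A):   # drive polarized along the linearly dispersing direction
    return (0.75*t*py**2 + 2.25*t*px**2 - 3*t*A**2/8)*sx + 3*t*px*sy

def H_lin_y(px, py, A):   # drive polarized along the quadratically dispersing direction
    return (0.75*t*py**2 + 2.25*t*px**2 + 3*t*A**2/8)*sx + 3*t*px*sy

def gap(H):
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

A, Om = 0.5, 10.0
print("circular : gap(0,0) =", gap(H_circ(0, 0, A, Om)), " vs 27 t^3 A^4/(4 Om^2) =", 27*t**3*A**4/(4*Om**2))
print("linear-y : gap(0,0) =", gap(H_lin_y(0, 0, A)),   " vs 3 t A^2/4 =", 3*t*A**2/4)
print("linear-x : gap at (0, A/sqrt(2)) =", gap(H_lin_x(0, A/np.sqrt(2), A)), " (Dirac point)")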
http://arxiv.org/abs/1709.09218v1
{ "authors": [ "Qi Chen", "Liang Du", "Gregory A. Fiete" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170926185331", "title": "Floquet band structure of a semi-Dirac system" }
The Cluster AgeS Experiment (CASE).† Variable stars in the field of the globular cluster M22^∗

M. Rozyczka^1, I. B. Thompson^2, W. Pych^1, W. Narloch^1, R. Poleski^3 and A. Schwarzenberg–Czerny^1

^1 Nicolaus Copernicus Astronomical Center, ul. Bartycka 18, 00–716 Warsaw, Poland e–mail: (mnr, pych, wnarloch, alex)@camk.edu.pl
^2 The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA e–mail: [email protected]
^3 Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA e–mail: [email protected]

The field of the globular cluster M22 (NGC 6656) was monitored between 2000 and 2008 in a search for variable stars. BV light curves were obtained for 359 periodic, likely periodic, and long–term variables, 238 of which are new detections. Thirty-nine newly detected variables and 63 previously known ones are members or likely members of the cluster, including 20 SX Phe, 10 RRab and 16 RRc-type pulsators, one BL Her-type pulsator, 21 contact binaries, and 9 detached or semi–detached eclipsing binaries. The most interesting among the identified objects are V112 – a bright multimode SX Phe pulsator, V125 – a β Lyr–type binary on the blue horizontal branch, V129 – a blue/yellow straggler with a W UMa–like light curve, located halfway between the extreme horizontal branch and red giant branch, and V134 – an extreme horizontal branch object with P=2.33 d and a nearly sinusoidal light curve; all four of them are proper motion (PM) members of the cluster. Among nonmembers, a P=2.83 d detached eclipsing binary hosting a δ Sct-type pulsator was found, and a peculiar P=0.93 d binary with ellipsoidal modulation and a narrow minimum in the middle of one of the descending shoulders of the sinusoid. We also collected substantial new data for previously known variables; in particular we revise the statistics of the occurrence of the Blazhko effect in RR Lyr-type variables of M22.

globular clusters: individual (M22) – stars: variables – stars: SX Phe – blue stragglers – binaries: eclipsing

†CASE was initiated and for a long time led by our friend and tutor Janusz Kaluzny, who prematurely passed away in March 2015.
^∗Based on data obtained with the Swope telescope at Las Campanas Observatory.

Introduction

M22 is projected against the Galactic bulge at l=9°.9, b=-7°.6, in a substantially reddened region with E(B-V) varying between 0.26 mag and 0.39 mag across our field of view[The extinction calculator at http://ned.ipac.caltech.edu/help/extinction_law_calc.html was used]. Core radius r_c, half–light radius r_h, tidal radius r_t, [Fe/H] index, radial velocity, heliocentric distance d_⊙, and galactocentric distance d_G of the cluster are equal to 1'.33, 3'.36, 32'.0, -1.70, -146.3±0.2 km/s, 3.2 kpc and 4.9 kpc, respectively (Harris 1996, 2010 edition). Dotter et al. (2010) excluded M22 from their age survey "because it is known to harbor multiple stellar populations". Indeed, Lee (2016) suggests that it is a merger of two globular clusters (GCs) which occurred in a dwarf galaxy, subsequently accreted onto the Milky Way. This might explain differences in age estimations of M22, varying from 12–13 Gyr (Lee 2015, 2016) up to 14 Gyr (Marino et al. 2009). M22 is classified as an old GC of Oosterhoff type II, and has a rich and long blue horizontal branch (BHB).
The (N_BHB-N_RHB)/(N_BHB+N_RHB+N_RR) index, whereN_BHB is the number of BHB stars, N_RHB the number of red HB stars (locatedredward of the instability strip on the CMD), and N_RR the number of RR Lyr stars,is equal to 0.97±0.1,one of the largest among GCs with a substantialpopulation of RR Lyr pulsators (Kunder et al. 2013a, hereafter K13). Even though M22 is one of the closest GCs to the Sun, factors like very strongcontamination of its field by bulge stars, substantial differential extinction,and appreciable concentration (c=log r_t/r_c = 1.38; Harris 1996, 2010 edition)make it a rather challenging target for studies. The pre–CCD searches for variables,summarized by Clement et al. (2001; 2017 edition[< http://www.astro.utoronto.ca/ cclement/cat/C0100m711>]) (hereafter C01-17),resulted in the detection of 43 objects. The targeted CCD surveys performed sofar (Kaluzny & Thompson 2001, hereafter KT01; Pietrukowicz & Kaluzny 2003, hereafterPK03; K13, and Sahay, Lebzelter & Wood 2014) brought additional 56 discoveries,including two optical cataclysmic variablesand a microlensing event (Pietrukowicz et al. 2005, 2012). Fourteen of these objectsare listed by C01-17 as members or possible members of the cluster, including eightRR Lyr pulsators, one contact binary, and five semiregular variables. Apart fromnormal stars, the cluster contains two millisecond pulsars (Lynch et al. 2011), andtwo candidate stellar–mass black holes (Strader et al. 2012). Finally,according to Kains et al. (2016) M22 provides the best chance to detect anintermediate–mass black hole via astrometric microlensing.Our survey is a part of the CASE project (Kaluzny et al. 2005) conducted usingtelescopes of the Las Campanas Observatory, with an aim of increasing the inventoryof variable objects in the field of M22. It completes the previous findings of KT01 (based on 76 frames obtained during one night on the du Pont telescope) and PK03 (based on 31 archival HST/WFPC2 frames, and necessarily limited to the central part of the cluster). Altogether we identified 283 periodic, likely periodic or long–termvariables not cataloged by C01-17, of which 45 were independently found by Soszyńskiet al. (2016; hereafter S16) duringthe OGLE–IV survey of the Galactic bulge. In Section 2, we briefly report on theobservations and explain the methods used to calibrate the photometry. Newly discoveredvariables are presented and discussed in Section 3. Section 4 contains new data onpreviously known variables which we consider worthy of publishing, and the paper issummarized in Section 5. § OBSERVATIONS AND DATA PROCESSING Our paper is based on images acquired with the 1.0–m Swope telescope equipped with the2048× 3150 SITe3 camera. The field of view was 14.8× 22.8 arcmin^2at a scale of 0.435 arcsec/pixel. Observations were conducted on 86 nights fromApril 11, 2000 to August 22, 2008; always with the same set of filters. A total of 2730 V–band images and 384 B–band images were selected for the analysis. The seeing ranged from 1”.2to 3”.6 and 1”.2to 3”.9forV and B, respectively, with median values of 1”.4in both filters.The photometry was performed using an image subtraction technique implemented in theDIAPL package.[Available from http://users.camk.edu.pl/pych/DIAPL/index.html]To reduce the effects of PSF variability, each frame was divided into 4×6 overlapping subframes. The reference frames were constructed by combining 18 imagesin V and 17 in B with an average seeing of 1.”1 and 1.”2, respectively. 
The light curves derived with DIAPL were converted from differential counts to magnitudesbased on profile photometry and aperture corrections determined separately for eachsubframe of the reference frames. To extract theprofile photometry from reference images and to derive aperture corrections, thestandard Daophot, Allstar and Daogrow (Stetson 1987, 1990) packages were used.Profile photometry was also extracted for each individual image, enabling usefulphotometric measurements of stars which were overexposed on the reference frames. §.§ Calibration The photometric calibration is based on standard magnitudes and colors derived by KT01.Using over 40,000 comparison stars common to our survey and theirs, the followingtransformation to the standard system was derived:V= v + 2.0763(2) + 0.0191(2)×(b-v) B - V= -0.1504(3) + 1.0400(3)×(b-v) ,where lower case and capital letters denote instrumental and standard magnitudes,respectively, and numbers in parentheses are uncertainties of last significantdigits.Crowding in the field of view resulted in enhanced blending, which in turnsignificantly increased the scatter of photometric measurements in the observedmagnitude range. For example, observations made of the globular cluster NGC 3201 (using the same instrument setup) resulted in a smallest scatter of 0.1 mag at V = 21 mag (Kaluzny et al. 2016), while for M22 the best photometric accuracy is∼0.23 mag (Fig. <ref>) at the same brightness level. Fig. <ref>,shows the CMD of the observed field and was constructed based on the reference images.To make the figure readable, only stars with measured proper motions (Narloch et al. 2017; hereafter N17) are selected to serve as a background for the variables. Starsidentified as proper–motion (PM) members of the cluster are shown in the right panel.§.§ Search for variables The search for periodic variables was conducted using the AOV and AOVTRANS algorithmsimplemented in the TATRY code (Schwarzenberg–Czerny 1996 and 2012; Schwar­zenberg–Czerny & Beaulieu 2006). We examined time–series photometric dataof 132,457 stars brighter than V∼22 mag. The photometricaccuracy was partly offset by the large number of available frames, and as aresult we were able to detect periodic signals with amplitudes of ∼0.02 magdown to V≈15 mag, and ∼0.1 mag down to V≈21 mag.Among the known variables within our field of view, light curves were obtained forall 45 stars discovered by S16, and for 76 out of 85 C01-17 stars. Of the latter, light curves are missing for SLW-7 which was overexposed in our frames, and for sevenPK03 stars located close to the center of the cluster. We identified 238 new variableor likely variable stars, 36 of which are PM–members or likely PM–members of M22.Membership status was also assigned to the variables known before. [Data for all the identified variables are available at http://case.camk.edu.pl] § THE NEW VARIABLES Basic data for selected variables not listed in C01-17 are given inTable <ref>.For our naming convention to agree withthat of C01-17 we start numbering the new variable cluster members from V102. Theremaining variables are given names from U01 on (stars for which no PM–data are present) and from N01 on (stars whose PM indicates that they do not belong to M22). The equatorialcoordinates in columns 2 and 3 conform to the UCAC4 system (Zacharias et al. 2013), and are accurate to 0”.2 – 0”.3. The V–band magnitudes in column 4 correspondto the maximum light in the case of eclipsing binaries; in the remaining casesthe average magnitude is given. 
Columns 5–7 give B-V color, amplitude in theV–band, and period of variability. A CMD of M22 with locations ofthe variables is shown in Fig. <ref>. Field objects are marked in black,those for which the PM data are missing or ambiguous in blue, and members of the clusterin red. The gray background stars are the PM–members of M22 from the right panel ofFig. <ref>.§.§ Members and likely members of M22Based on proper motions, distances from the center of the cluster, and CMD locations we identified 39 M22–members not cataloged by C01-17 (among them, three discoveredby S16). A star was considered a member or likely member if one of the following criteria was fulfilled:* PM–membership probability P_PM≥70%.* P_PM<70%, but CMD-location compatible with cluster membership, variabilitytype compatible with CMD–location, and geometricmembership probability P_geom=1-π r^2/S>90%, where r is star's distancefrom the center of M22 (α = 18^h 36^m23^s94, δ = -2354171) in arcseconds, andS=1.22×10^6 is the size of thefield of view in arcseconds^2 (there are two such cases).* Proper motion not known, but P_geom>70%, CMD-location compatible with clustermembership, and variability type compatible with CMD–location.Details concerning PM measurements and calculations of membership probability are givenin N17, who also provide a PM catalog for nearly 450000 stars in the fields of 12 GCs. In the following, we describe the ten most interesting variables, whose light curvesare shown in Fig. <ref>. Our data suggest that multimode pulsations are likely in 16 SX Phe stars (seven new ones and nine from the C01-17 catalog). The most interesting one among them is the newly detected variable V112, which is also the brightest and reddest blue straggler (BS). It clearlyexhibits multimode pulsations at an amplitude of ∼0.3 mag suitable for asteroseismology analysis which in turn would provide valuable information on BS mass. Admittedly, its CMD location may seem a bit extreme for this type of variability, however both the V and B lightcurves are of good quality, so that a large error in <B>-<V> can be excluded.P_PM=100% for V112, however we feel a radial velocity measurement would be necessaryto confirm its membership. The star is a component of a blend. However, in the archival HSTframe NGC6656-J9L948010 V112 is much brighter than the remaining components (in fact, it is strongly overexposed). V116, a sinusoidal variable on the lower main sequence, is a 100% PM–member of M22. Our lightcurve is of poor quality because of partial blending with a much brighter star ∼1”.5distant. We did not detect any periodicity in the latter, and V116 is well isolated in thearchive HST frame NGC6656-U2X80302T. Thus, if the weak periodic signal we observe is real,then it must originate in V116 (not being entirely sure about its reality, we marked the staras a suspected variable). V116 would then closely resemble the optical counterpart of theX–ray source CX1 in M4 (Kaluzny et al. 2012). V117 is a low–amplitude sinusoidal variable with a short period (0.31 d) clearly incompatiblewith its location on the red giant branch (RGB). However, it is a 100% PM–member of M22. Our imageof V117 is perfectly symmetric, but, since the star is located in the unobserved by HST partof the cluster, the possibility of blending cannot be excluded. If adaptive optics photometry confirmed that we deal with a single light source, V117 would become an interesting target for further research. 
V125, located on the blue horizontal branch (BHB), has a β Lyr–type (EB) light curve with minimaof different depths, and P_PM=100%. No HST data are available for this object. The staris well separated from its neighbors; nevertheless adaptive optics would be needed to excludeblending. If not a blend, V125 would be one of the very rare BHB binaries with short periods(Heber 2016). V129, a BS with P_PM=100%, which exhibits a W UMa–like (EW) light curve with minima ofdifferent depth, is peculiar because of its long period (1.39 d). The observed minima are broaderthan the maxima, also not fitting a W UMa interpretation. In our frame, variableV129 is blendedwith at least two fainter stars; unfortunately their contribution to the total light cannot beestimated because of lacking HST data.The blue stragglers V130 and V131 are Algol-type eclipsing (EA) binaries with a strong ellipsolidal effect. No largeobservational effort would be needed to obtain reasonable quality light and velocity curvesfor these systems, and determine their parameters. Such a project would be worthwhile, as BSAlgols provide a very demanding test suite for stellar evolution codes even in cases whentheir parameters are not accurately known (Stȩpień, Pamyatnykh & Rozyczka 2017). V133 is a detached eclipsing binary with a period ambiguity. P_1=2.244228 d in Table <ref> is the best fit to the light curve, with only one minimum visible. For P_2=1.195288 d a secondary minimum appears, which may be as deep as the primary minimum. However, the fit becomes markedly poorer. Since PM is not available for V133, and P_geom=90%, V133 is just a likely member of M22; potentially interesting since it might serve as age and distance indicator if its membership were confirmed. If it belongs to M22, the absence of the second minimum speaks against P_1, as the system is located too high above the lower main sequence for such a large luminosity difference between the components.V134 is a nearly sinusoidal variable discovered by S16 (their star OGLE-BLG-ECL-423136). WithP_PM=100%, P=2.33 d, and a location between the extreme horizontal branch (EHB) and the BS region on the CMD, it constitutes a real puzzle. The high qualityB and V light curves yield a reliable <B>-<V>, so that the chance that V134 is horizontally misplaced in the CMD is low. The most natural causeof this type of variability is a strong reflection effect similar to that observed in HW Vir binaries,however the period of V134 (2.331 d) is much longer than the longest period known among the members of that class (∼0.75 d; Heber 2016). A slightelongation of the image of this star in our frames suggests a tight blend; unfortunately no HST data are available. Clearly, a spectroscopic follow–up is needed toverify its membership and reveal its nature. V135, another detached eclipsing binary witha 1:2 period ambiguity, is located on the lower main sequence. P=4.928 d and P=2.464 d fit the lightcurve almost equally well; however thelonger period implies nearly the same brightness of the components, which is barelycompatible with the CMD location of the system. Thus, although V135 is a 100% PM–member of M22,its membership should be verified through radial velocity measurements.§.§ Stars of unknown PM–membershipIn our sample, there are 69 variables with P_geom<70% and unknown proper motions, atleast some of which may turn out to belong to M22. Below we describe eight of the most interesting cases, whose light curves are shown in Fig. 
<ref>. Algols U39 (OGLE-BLG-ECL-423130) and U53 are prospective yellow stragglers. If their membership is confirmed they will provide an excellent opportunity to test and/or calibrate stellar evolution codes (Stȩpień, Pamyatnykh & Rozyczka 2017). U44 is an RS CVn–type eclipsing binary with a strong sinusoidal modulation, resembling V9 in NGC 6791 (Kaluzny 2003; Bruntt et al. 2003) or a sample of RS CVn discovered within the OGLE III survey and described by Pietrukowicz et al. (2013). Only one eclipse is visible, situated almost in the middle of the ascending branch of the light curve. The modulation originates from spot(s) possibly accompanied by mass transfer effects, similarly to those observed in R Ara (Bakiş et al. 2016). Since such systems are rare, a follow–up of U44 would be desirable independently of its membership status. U50 and U61 are detached eclipsing binaries located to the right of the lower main sequence. Both their light curves reveal only one eclipse. If follow–up photometry confirms our light curve fits, the systems would become interesting red straggler candidates (see e.g. Kaluzny 2003). U51, a detached eclipsing binary with two eclipses visible, has a period long enough (2.6 d) to serve as an age and distance indicator despite its low brightness. U56, located redward of the subgiant branch, is another red straggler candidate. U62 is a detached eclipsing binary located on the subgiant branch, and another potentially excellent age and distance indicator. As our light curve covers only a part of a single eclipse, its period of 20.8 d is only tentative.

§.§ Field variables

We identified 176 variables which according to N17 do not belong to M22. As errors in PM measurements cannot be entirely excluded, a few of them may in principle turn out to be cluster members. For that reason, while selecting the most interesting cases, we paid special attention to stars located on the CMD in the vicinity of the turnoff or in the BS region. The light curves of the selected variables are shown in Fig. <ref>. N04, N10, and N11 are either field δ Sct variables or cluster SX Phe stars and blue stragglers, all showing clear multimode pulsations. N12, a clear multimode pulsator located in the RR Lyr gap, has a period of only 0.15 d, which unambiguously identifies it as a field δ Sct star. N15, located in the BS region, is another multimode pulsator. Its period of 0.25 d is too long for an SX Phe variable; therefore it must also be a field δ Sct star. N44 is a field contact binary with a variable light curve. Its brightness seems to have decreased by ∼0.08 mag between 2000 and 2008 (the 2008 data were collected during four nights, so that a zero point artefact is rather unlikely, especially since such effects are not seen in any other light curve). N65 (OGLE-BLG-ECL-423254) is a W UMa eclipsing binary in poor thermal contact. The secondary eclipse is total, allowing an estimation of the temperature of the primary from the color–temperature calibration. The observed B-V index is 0.70 mag. Assuming a reddening of 0.30 mag (an average for M22) and using the calibration of Sousa et al. (2011) one obtains T_1=6500 K. An approximate solution of the V- and B-band light curves with the PHOEBE implementation of the Wilson–Devinney code (Prša and Zwitter 2005) yields i=85°.8, T_2=4400 K, and Δ M_bol=2.9 mag between the components. Neglecting the contribution of the secondary, and assuming that N65 is a member of M22, from the observed V = 17.85 mag at maximum light, we obtain M_bol^1 = 4.38 mag.
This absolute brightness is reproduced by a W–D solution with a semimajor axis and primary mass of 2 R_⊙ and 0.25 M_⊙, respectively. Since the latter value is much too low for a 6500 K star, N65 must be a background object, interesting only because of the significant temperature difference between the components.

N87 seems similar to U44; however, there is a significant difference between them: N87 has two maxima per period instead of one. Since a configuration of two nearly identical spots at locations differing by nearly 180^∘ in longitude is rather unlikely, the nature of N87 is puzzling, especially since a similar object, OGLE-GD-ECL-04649, mentioned by Pietrukowicz et al. (2013), exhibits both a single and a double maximum at various seasons. The double-peaked curve resembles that of a cataclysmic variable with a giant donor (e.g. T CrB), yet the color is 1 mag too red. The system clearly deserves thorough follow–up observations, especially since it is just 1.4 arcsec distant from the Chandra X–ray source C183656.05-234845.5 with (α,δ)_2000 = (279.23355, -23.81263).

N107 is a detached eclipsing binary, interesting independently of its membership status, since it hosts a δ Sct or SX Phe star. In Fig. <ref> the light curve of this system is phased separately with the pulsation period (0.08 d) and with the orbital period (2.83 d).

Another two detached systems, N113 (OGLE-BLG-ECL-423112) and N121, are potentially interesting because of their CMD locations near the bottom of the red giant branch. If either of these turns out to be an M22 member, it would provide a good reference point for isochrone fitting in M-R and M-L diagrams (see e.g. Kaluzny et al. 2013).

§ NEW DATA ON KNOWN VARIABLES

Of the 101 objects cataloged by C01-17, fourteen are located beyond our FOV, and two are pulsars without optical counterparts. Due to crowding and blending, among the eight variables discovered by PK03 in HST frames of the central part of M22 only PK-05 could have been identified (all star designations in this Section are taken from C01-17). For the remaining 78 stars, membership status and membership probability were assigned using the criteria given in Section <ref>. Stars #3, #14, #39, #40, KT-01, KT-03, KT-05, KT-15, KT-18, KT-40, KT-41 and KT-48 turned out to be field objects.

There are 10 RRab and 16 RRc pulsators in M22. A detailed analysis of our data on these objects will be published elsewhere; here we limit ourselves to a general remark concerning the Blazhko effect. K13 suggest a small incidence (∼10%) of the Blazhko effect among RRab stars of M22, and do not detect any such effects in stars of RRc type. In fact, the only star with a firmly established Blazhko effect they report is KT-55. We observe this behavior also in RRab stars #2, #3 and #6. Another RRab star, #23, suggested by K13 to have a rapidly changing or erratic period, does not show any such changes in our data: we only observe modest (±0.05 mag) variations of the descending shoulder of the light curve, which in principle might be interpreted as a weak Blazhko effect. Thus, according to our data, the incidence of the Blazhko effect among RRab stars is 40% (50% if #23 is included). Moreover, we find a Blazhko effect of varying strength in RRc stars #18, #19, #25 and KT-36 (phase), #15 (phase, shape) and KT-26, KT-37, Ku-1, Ku-2, Ku-3 and Ku-4 (phase, shape, amplitude). Altogether, we observe Blazhko behavior for 15 (16) RR pulsators, i.e. an incidence rate of 58% (62%).
Among the RRc stars the incidence is even higher – 68%. Thus, M22 is another GC with a large (>50%) percentage of RRc Blazhko behavior, joining NGC 2808 (Arellano Ferro et al. 2012) and M53 (Kunder et al. 2013b).

Below we briefly describe the C01-17 stars listed in Table <ref>, whose light curves are shown in Fig. <ref>.

Star #24: To our surprise, this object, listed as a non–variable by K13, turns out to be a BL Her pulsator with P = 1.715 d and a stable light curve. Our data show no period doubling phenomenon, foreseen theoretically by Buchler & Moskalik (1992) and for the first time observed by Smolec et al. (2012) in a star belonging to the Galactic bulge.

Star #31: In our data, no star closer than 5” to the position of #31 shows evidence for variability.

KT-02: This Algol-type binary star, relatively isolated within M22 and located slightly above the turnoff of the cluster, is a potentially valuable age and distance indicator (Kaluzny et al. 2005). KT01 observed only the ∼0.25 mag deep secondary minimum. We find the primary minimum to be ∼0.4 mag deeper, indicating not too discrepant temperatures of the components. Thus, spectral lines of both components should be visible, and despite the short period (P=0.49 d) the system is bright enough (V=17.35 mag) for good quality spectra to be obtained and an accurate velocity curve to be extracted.

KT-26: The light curve of this star suggests that this is an RRc pulsator exhibiting the Blazhko effect. However, KT-26 is too blue to be an ordinary RRc star (both the V and B light curves are of very good quality, so that a large error in B-V is rather unlikely, especially since our V-band brightness agrees very well with that of K13). The archival Hubble frame NGC6656-J9L948010 reveals that KT-26 is a ∼0”.75 blend of two stars with a flux ratio of ∼15:8. Unfortunately, since this is the only available ACS frame taken in the F606W filter, one cannot tell which component of the blend is the proper variable. If it is the brighter one, then its brightness is lower by ∼0.5 mag than the combined brightness of the blend, and ∼0.2 mag lower than that of the weakest RR Lyr in M22 (i.e. star #23). In that case, the proper variable would resemble the peculiar pulsator V37 in NGC 6362 (Smolec et al. 2017). If the fainter component were variable, the magnitude differences would increase to ∼1.2 mag and ∼0.9 mag, respectively, moving it to the BHB. Then, however, its period of 0.361366 d would be definitely too long for a BHB star. In any case, KT-26 clearly deserves closer observational scrutiny.

KT-39: Tentatively classified by KT01 as a contact binary, this is in fact another interesting and potentially valuable Algol-type system. Its location just below the subgiant branch indicates that at least one of the components must have left the main sequence, thus providing a good point for isochrone fitting. With a difference between the depths of the minima similar to that of KT-02, comparable isolation and a period three times longer, KT-39 is a relatively easy target for spectroscopy.

KT-46: Another Algol-type binary. KT01 only observed the ∼1 mag deep primary minimum. We found that the secondary minimum is more than ten times shallower, which together with a maximum brightness of V∼19.6 mag and a period of only 0.61 d rather eliminates this system from the list of currently interesting objects.
The light curve for KT-46 can be downloaded from the CASE archive.

KT-13, KT-20, KT-23, KT-33, KT-42 and KT-43 are contact binaries located at the turnoff or in the BS region (KT-42 was erroneously classified by KT01 as a possible pulsator). Another three contact binaries, KT-07, KT-08 and PK-05, occupy positions to the right of the lower main sequence. Complete light curves are presented for all eight binaries. All of them, including PK-05, which is placed closest to the center of M22, are well isolated within the cluster, so that radial velocity measurements seem entirely feasible (see Rozyczka et al. 2010). KT-08 is particularly interesting as the first, and so far the only, contact binary found within CASE to reside significantly (∼2 mag) below the turnoff of a globular cluster.

KT-51: This star, located at the top of the EHB, was singled out by KT01 as the most interesting object in their sample, possibly a binary. We confirm its variability, however with a different period than theirs (0.103 d vs. ∼0.2 d) and with a different amplitude (0.04 mag vs. 0.06 mag). Thus, the question of the binarity of this object remains open. In the archival HST/WFPC2 frame UA2L0802M, KT-51 is an unresolved blend ∼0”.3 wide.

§ SUMMARY

This contribution substantially increases the inventory of variable stars in the field of M22. A total of 359 variables or suspected variables were detected, 238 of which had not been known before. 102 members or likely PM–members of the cluster were identified, including 20 SX Phe, 10 RRab and 16 RRc pulsators, one BL Her pulsator, 21 contact binaries, and 8 detached or semi–detached binaries. Periods were obtained for almost all of the observed variables, except for a few cases with variability timescales longer than our time base.

Among the new members of M22, the most interesting objects for follow–up studies are V125 – a β Lyr-type BHB binary, V129 – a blue/yellow straggler with a W UMa-like light curve located halfway between the EHB and the RGB, and V134 – an EHB object with P=2.33 d and a sinusoidal light curve. Among nonmembers, observational scrutiny would be desirable for N107 – a detached eclipsing binary hosting a δ Sct-type pulsator, N44 – a contact binary whose luminosity seems to have decreased by 0.08 mag between 2000 and 2008, and N87 – a peculiar P=0.93 d binary with ellipsoidal modulation and a narrow minimum in the middle of one of the descending shoulders of the sinusoid, which may be an optical counterpart of the Chandra X–ray source C183656.05-234845.5. Multimodality was detected in 16 SX Phe stars, with the blue straggler V112 being the most prominent example of this type of variability.

We also provide substantial new data on the variables cataloged by C01-17. In particular, we identify M22 as the third GC with a large (>50%) percentage of Blazhko effect incidence among RRc stars, after NGC 2808 (Arellano Ferro et al. 2012) and M53 (Kunder et al. 2013b). The RRc star KT-26 shows a peculiar behaviour, resembling that found for V37 in NGC 6362 by Smolec et al. (2017). Finally, the contact binary KT-08 is the first, and so far the only, such system found within CASE to reside significantly (∼2 mag) below the turnoff of a globular cluster. As such, it might provide some constraints on the evolution of binary systems in GCs.
We thank Grzegorz Pojmański for the lc code, which vastly facilitated the work with light curves, and the anonymous referee for many comments and suggestions which substantially improved the manuscript.

This paper is partly based on data obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by AURA, Inc., under NASA contract NAS5-26555. Support for MAST for non–HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.

Arellano Ferro, A., Bramich, D. M., Figuera Jaimes, R., Giridhar, S., and Kuppuswamy, K. 2012, 420, 1333
Bakiş, H., Bakiş, W., Eker, Z., and Demircan, O. 2016, 458, 508
Bruntt, H., Grundahl, F., Tingley, B., Frandsen, S., Stetson, P. B., and Thomsen, B. 2003, A&A, 410, 323
Buchler, J. R., and Moskalik, P. 1992, 391, 736
Clement, C. M., Muzzin, A., Dufton, Q., Ponnampalam, T., Wang, J., et al. 2001, 122, 2587 (C01-17)
Dotter, A., Sarajedini, A., Anderson, J., Aparicio, A., Bedin, L. R., et al. 2010, ApJ, 708, 698
Harris, W. E. 1996, 112, 1487
Heber, U. 2016, 128, 082001
Kains, N., Bramich, D. M., Sahu, K. C., and Calamida, A. 2016, 460, 2025
Kaluzny, J. 2000, ASP Conf. Ser., 203, 19
Kaluzny, J. 2003, 53, 51
Kaluzny, J., and Thompson, I. B. 2001, A&A, 373, 899 (KT01)
Kaluzny, J., Thompson, I. B., Krzeminski, W., Preston, G. W., Pych, W., et al. 2005, in Stellar Astrophysics with the World's Largest Telescopes, AIP Conf. Proc., 752, 70
Kaluzny, J., Rozanska, A., Rozyczka, M., Krzeminski, W., and Thompson, I. B. 2012, 750, L3
Kaluzny, J., Thompson, I. B., Rozyczka, M., Dotter, A., Krzeminski, W., et al. 2013, 145, 43
Kaluzny, J., Rozyczka, M., Thompson, I. B., Narloch, W., Mazur, B., et al. 2016, 66, 31
Kunder, A., Stetson, P. B., Cassisi, S., Layden, A., Bono, G., et al. 2013a, 146, 119 (K13)
Kunder, A., Stetson, P. B., Catelan, M., Walker, A., and Amigo, P. 2013b, 145, 33
Lee, J.-W. 2015, 219, 7
Lee, J.-W. 2016, 226, 16
Lynch, R. S., Ransom, S. M., Freire, P. C. C., and Stairs, I. H. 2011, 734, 89
Marino, A. F., Milone, A. P., Piotto, G., Villanova, S., Bedin, R. L., et al. 2009, A&A, 505, 1099
Mazur, B., Krzeminski, W., and Thompson, I. B. 2003, 340, 1205
Narloch, W., Kaluzny, J., Poleski, R., Rozyczka, M., Pych, W., and Thompson, I. B. 2017, 471, 1446 (N17)
Nemec, J. M., Balona, L. A., Murphy, S. J., Kinemuchi, K., and Jeon, Y.-B. 2017, 466, 1290
Petersen, J. O., and Christensen-Dalsgaard, J. 1996, A&A, 312, 463
Pietrukowicz, P., and Kaluzny, J. 2003, 53, 371 (PK03)
Pietrukowicz, P., Kaluzny, J., Thompson, I. B., Jaroszynski, M., Schwarzenberg-Czerny, A., et al. 2005, 55, 261
Pietrukowicz, P., Minniti, D., Jetzer, Ph., Alonso-García, J., and Udalski, A. 2012, 744, 18
Pietrukowicz, P., Mróz, P., Soszyński, I., Udalski, A., Poleski, R., et al. 2013, 63, 115
Prša, A., and Zwitter, T. 2005, 628, 426
Rozyczka, M., Kaluzny, J., Pietrukowicz, P., Pych, W., Catelan, M., et al. 2010, A&A, 524, 78
Sahay, A., Lebzelter, T., and Wood, P. R. 2014, Publ. Astr. Soc. Austr., 31, 12
Saio, H., Kurtz, D. W., Takata, M., Shibahashi, H., Murphy, S. J., et al. 2015, 443, 3264
Schwarzenberg-Czerny, A. 1996, 460, L107
Schwarzenberg-Czerny, A. 1999, 516, 315
Schwarzenberg-Czerny, A. 2012, in New Horizons in Time-Domain Astronomy, IAU Symposium, 285, 81
Schwarzenberg-Czerny, A., and Beaulieu, J.-Ph. 2006, 365, 165
Smolec, R., Moskalik, P., Kaluzny, J., Pych, W., Rozyczka, M., and Thompson, I. B. 2017, 467, 2349
Smolec, R., Soszyński, I., Moskalik, P., Udalski, A., Szymański, M. K., et al. 2012, Astrophys. & Sp. Sci. Proc., 31, 85
Soszyński, I., Pawlak, M., Pietrukowicz, P., Udalski, A., Szymański, M. K., et al. 2016, 66, 405 (S16)
Sousa, S. G., Alapini, A., Israelian, G., and Santos, N. C. 2010, A&A, 512, 13
Stȩpień, K., Pamyatnykh, A. A., and Rozyczka, M. 2017, A&A, 587, 87
Stetson, P. B. 1987, 99, 191
Stetson, P. B. 1990, 102, 932
Strader, J., Chomiuk, L., Maccarone, T. J., Miller-Jones, J. C. A., and Seth, A. C. 2012, Nature, 490, 71
Zacharias, N., Finch, C. T., Girard, T. M., Henden, A., Bartlett, J. L., et al. 2013, 145, 44

§ APPENDIX: V112 – A MULTI-MODE, NON-RADIAL SX PHE TYPE PULSATING STAR

§.§ Light curve decomposition

The short period and unstable light curve of the blue straggler V112 suggest that it is an SX Phe type star. Cores of globular clusters host many such stars, yet most appear to be of low amplitude (Kaluzny 2000; for recent references see Nemec et al. 2017). Because of its large amplitude, V112 seemed worthy of further attention. We performed a complete Fourier decomposition of its light curve, employing the NFIT code by one of the authors (ASC). For an early application of this code to SX Phe light curves, and for the underlying methods, see Mazur et al. (2003) and Schwarzenberg-Czerny (1999), respectively. The analysis is performed in stages, so that consecutive frequencies are identified in the periodogram, and subsequently the data are prewhitened of them. In that way a Fourier model of the light curve is established. At the final stage the model is refined by fitting all frequency terms simultaneously by non–linear least squares, with adjustment of the base frequencies.

The effective Nyquist interval of our observations is close to 130 c/d. Our decomposition of the light curve of V112 is complete in that we accounted for all frequencies in the range up to 100 c/d and with half–amplitudes over 0.0015 mag, i.e. twice their typical standard deviation (σ=0.008 mag). Even for these small amplitudes the standard deviation of the phases remains within 0.08P, while for the 9 strong modes they were ≪ 0.01P.

§.§ Pulsation modes of V112

Our analysis revealed three base frequencies of pulsation, f_0, f_1, and f_2, with some harmonics and also seven combination frequencies between them (see Table <ref>). Hence it may be securely assumed that all these frequencies correspond to the pulsation of V112. The ratio f_0/f_1=0.784 is within the range expected for fundamental–to–first–overtone radial p–modes in SX Phe stars, depending on metallicity (e.g. Petersen & Christensen–Dalsgaard 1996); hence it seems secure to identify f_0 and f_1 with the fundamental and first overtone pulsation of V112. If so, the presence of the combination mode f_2+f_0 with f_2 close to f_0 constitutes evidence of a non–radial mode f_2.

In the light curve of V112, three more seemingly unrelated frequencies appear: f_3, f_4, and f_5. Note that f_3 differs substantially from the combination 2f_0-2f_2 and its moon alias. It appears within a low frequency power bump at f<2 c/d. Such a bump does appear in some SX Phe candidate stars observed by the Kepler satellite (Nemec et al. 2017), and a coherent frequency found in Kepler data for a δ Scuti star is interpreted as due to either stellar rotation or g–modes (Saio et al. 2015). Our ground–based mono–site data may suffer from a zero level drift for frequencies below 0.2 c/d (Mazur et al. 2003), though most of the low–frequency bump could be real, similar to the one in the Kepler stars. The remaining frequencies f_4 and f_5 are ill–expressed in our data. Although their amplitudes reach 6 and 4.5σ, an NLSQ fit yields a standard deviation as large as that corresponding to a 0.1P uncertainty over a half time–span of our data.
We leave the question of their reality and nature open.

Due to the simultaneous presence of well established fundamental and first overtone radial p–modes and a non–radial one, V112 belongs to the subgroup of SX Phe stars most suitable for an asteroseismic analysis. There are signs of the presence of low frequency f_3 and f_4 oscillations consistent with g–modes, yet due to their small amplitudes and uncertain fits we refrain from further discussion. As f_0 and f_0/f_1 are tied to metallicity and luminosity, such an analysis of V112 may yield information on the chemical composition and distance of M22, especially since the presence of non–radial mode(s) provides additional constraints.
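For readers who wish to experiment with a simplified version of the staged frequency analysis described above, the following Python sketch illustrates the idea of iterative prewhitening with a least-squares periodogram. It is only an illustration under stated assumptions: the actual analysis used the NFIT code, a much denser frequency grid, and a final simultaneous non-linear least-squares refinement of all terms, none of which are reproduced here, and the function names and the sinusoid-plus-offset model are ours.

```python
import numpy as np

def ls_amplitude(t, y, freqs):
    """Half-amplitude of the best-fitting sinusoid (plus constant) at each trial frequency."""
    amps = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        amps[i] = np.hypot(coef[0], coef[1])
    return amps

def prewhiten(t, y, freqs, max_terms=10, min_amp=0.0015):
    """Identify and subtract the strongest sinusoids one by one (simplified prewhitening)."""
    resid = y - np.mean(y)
    detected = []
    for _ in range(max_terms):
        amps = ls_amplitude(t, resid, freqs)
        k = int(np.argmax(amps))
        if amps[k] < min_amp:          # stop at the 0.0015 mag half-amplitude threshold
            break
        f0 = freqs[k]
        A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                             np.cos(2 * np.pi * f0 * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
        resid = resid - A @ coef       # prewhiten the data of the fitted term
        detected.append((f0, amps[k]))
    return detected, resid
```

In this sketch, prewhiten would be applied to the observation times (in days) and magnitudes with a frequency grid extending up to 100 c/d, returning the detected frequencies and their half-amplitudes.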
http://arxiv.org/abs/1709.09572v1
{ "authors": [ "M. Rozyczka", "I. B. Thompson", "W. Pych", "W. Narloch", "R. Poleski", "A. Schwarzenberg-Czerny" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170927151120", "title": "The Cluster AgeS Experiment (CASE). Variable stars in the field of the globular cluster M22" }
©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This article has been accepted for publication in IEEE Transactions on Medical Robotics and Bionics.

DOI: 10.1109/TMRB.2021.3073209

URL: https://ieeexplore.ieee.org/document/9404322

Rate of Orientation Change as a New Metric for Robot-Assisted and Open Surgical Skill Evaluation

Yarden Sharon, Student Member, IEEE, Anthony M. Jarc, Thomas S. Lendvay, and Ilana Nisky, Senior Member, IEEE

Manuscript revised ___ This research was supported by the Helmsley Charitable Trust through the ABC Robotics Initiative and by the Marcus Endowment Fund both at Ben-Gurion University of the Negev, the ISF grant number 327/20, the Israeli Ministry of Science and Technology grant 15627-3 and a grant for the Israel Italy Virtual Lab on artificial somatosensation for humans and humanoids. Y. Sharon was supported by the Besor scholarship and the Israeli Planning and Budgeting Committee scholarship. Y. Sharon and I. Nisky are with the Department of Biomedical Engineering and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel (e-mail: [email protected] and [email protected]). A. Jarc is with Medical Research, Intuitive Surgical Inc., Norcross, GA, USA (e-mail: [email protected]). T. S. Lendvay is with the Department of Urology, University of Washington, Seattle, WA, USA (e-mail: [email protected]).
========================================================================

Surgeons’ technical skill directly impacts patient outcomes. To date, the angular motion of the instruments has been largely overlooked in objective skill evaluation. To fill this gap, we have developed metrics for surgical skill evaluation that are based on the orientation of surgical instruments. We tested our new metrics on two datasets with different conditions: (1) a dataset of experienced robotic surgeons and nonmedical users performing needle-driving on a dry lab model, and (2) a small dataset of suturing movements performed by surgeons training on a porcine model.
We evaluated the performance of our new metrics (angular displacement and the rate of orientation change) alongside the performances of classical metrics (task time and path length). We calculated each metric on different segments of the movement. Our results highlighted the importance of segmentation rather than calculating the metrics on the entire movement. Our new metric, the rate of orientation change, showed statistically significant differences between experienced surgeons and nonmedical users / novice surgeons, which were consistent with the classical task time metric. The rate of orientation change captures technical aspects that are taught during surgeons' training, and together with classical metrics can lead to a more comprehensive discrimination of skills. Medical robotics, Surgical robotics, Human motion analysis, Physical human-robot interaction, Surgical skill evaluation.§ INTRODUCTION Successful surgery requires cognitive skill – "knowing what to do", and motor skill – "knowing how to do it" <cit.>. The technical skill of a surgeon directly impacts patient outcomes <cit.>. Training programs are intended to bring junior surgeons to a high level of procedural and technical skill, but because of limited standard technical skill metrics, the maintenance of certification for practicing surgeons is mostly cognitive-based. For both cognitive and motor goals, it is paramount to evaluate the quality of surgeon's performance.State of the art surgical skill assessment is still largely based on direct or video observation by expert surgeons. However, such evaluation is problematic for several reasons. First, subjective assessment may vary between evaluators <cit.>, and suffer from bias. Second, even if the assessment is structured using checklists <cit.>, it is still limited by what the observers see and by their attention. Third, these observations require time. Therefore, it is important to find objective metrics that can describe the surgical performance in detail. Such metrics can help to identify training deficiencies more accurately, and to provide the trainee with valuable near real-time feedback to optimize their performance <cit.>. The simplest objective metric is task completion time <cit.>. However, task time does not provide information about the quality of the action <cit.>. For example, a task that was completed fast might have been accomplished with careless instrument gestures. Development of more complex objective metrics is now possible, thanks to the advancement of technology. Tracking systems <cit.> and virtual reality trainers <cit.> enable the collection of motion information and the calculation of objective metrics such as path length of the instruments, number of movements, speed of movements, and number of errors <cit.>. However, to reach accurate surgical skill assessment, there is still much room for improvement.One of these enabling technologies is teleoperated robot-assisted minimally-invasive surgery (RMIS). In RMIS, the surgeon teleoperates robotic surgical instruments inside the body of the patient <cit.>. This allows for both unobtrusive tracking of the movements of the surgeon, e.g. position, velocity, and angular velocity, and using this information to evaluate skill <cit.>. This abundance of data also motivates the use of advanced machine learning techniques for skill evaluation <cit.>. However, such classification techniques are limited in terms of their ability to provide the trainee with meaningful feedback. 
To date, the orientation of surgical instruments has been used to calculate the angular path around different axes of the laparoscopic tool for assessing laparoscopic skill <cit.>. However, for robotic and open techniques, where the movement is not limited to rotations around certain axes, there has been no extensive use of orientation-based metrics for skill evaluation. This is somewhat surprising, because rotation of instruments is critical in many surgical tasks. For example, in needle-driving, surgeons are taught to rotate their wrist so that the needle addresses the tissue at a right angle and pierces the tissue with the least amount of force. Previous studies found that the angular velocity of the hands of experts was significantly higher compared to novices <cit.>, but this measurement was not linked to a specific task. In sports, measures of rotation have been used to assess skill among tennis players <cit.>. Therefore, we believe that orientation-based metrics can also be used for evaluating surgical skill.

In this study, we developed orientation-based metrics for surgical skill assessment in robot-assisted and open needle-driving movements. These metrics capture critical aspects of surgical expertise that, to the best of our knowledge, were not quantified by the existing metrics for surgical skill. Our metrics were designed to assess movements in which the DOF of the hand are not limited by the surgical instrument; therefore, we did not include conventional laparoscopic movements. We chose the needle-driving task as a good example, characterized by a combination of high clinical importance, technical complexity, and minimal necessary procedural knowledge. Moreover, needle-driving is the building block of surgical suturing, which is part of the majority of surgical procedures regardless of the specialty field <cit.>.

To investigate the performance of our new orientation-based metrics, we used two datasets. The first dataset, named here "Dry Lab", was collected in a previous study that compared teleoperated and open unimanual needle-driving movements of experienced robotic surgeons and nonmedical users <cit.>. The teleoperated needle-driving was performed using the da Vinci Research Kit (dVRK) <cit.>, which is a custom research version of the da Vinci Surgical System. The open needle-driving was performed with a needle-driver that was equipped with magnetic trackers. Because each stage of the task has different constraints, different metrics may be required for the different stages. Therefore, we hypothesized that segmentation of the needle-driving movement into its stages is necessary prior to the calculation of metrics in order to assess surgical skills. Specifically, we assumed that during the part of the needle insertion when rotational motion is required, orientation-based metrics can highlight differences between different levels of surgical skill.

The second dataset, named here "Porcine", was collected during the training of surgeons on a porcine model, and consisted of a series of tasks that highlight technical skills when using the da Vinci Surgical System, such as knot tying and third arm retraction. The analysis of this dataset is used to demonstrate the feasibility of using our orientation-based metrics in the analysis of realistic surgical data.

A preliminary version of this study for a subset of metrics on the teleoperated movements of the Dry Lab dataset was presented in an extended abstract form <cit.>.
§ METHODSThe novelty of this study is the development and examination of orientation-based metrics for surgical skill evaluation. We were interested in the development of robust metrics whose performance is not dependent on experimental conditions. Therefore, we chose to test our metrics using data from several experiments with different conditions. In this section, we first briefly present the experimental setup, data acquisition, preprocessing, and segmentation of each dataset. Then, we present the calculation of the metrics, and the statistical analysis. We use 𝐱 as the Cartesian translation vector (x,y,z position coordinates), and φ as the opening angle of the needle-driver. To present orientations, we use 𝐑 for the rotation matrix that consists of three unit vectors (𝐑=[𝐱̂,ŷ,ẑ]), and 𝐐 for the quaternion that consists of four components (𝐐=[q_1,q_2,q_3,q_4]). The ^P superscript stands for PSM, and ^O for open needle-driver. j is the index of sampled data points.§.§ Dataset 1: Dry Lab§.§.§ da Vinci Research Kit SetupFull details of the experimental setup and procedures are reported in <cit.>, but we summarize here the important details for the current study. The setup of the dVRK that was used in the experiment is depicted in Fig. <ref>(a-b). The system consisted of a pair of Master Tool Manipulators (MTMs), a pair of Patient Side Manipulators (PSMs), a high resolution stereo viewer, and a foot-pedal tray. Two large needle-drivers were used as PSM instruments. Using the stereo viewer, the participant watched a 3D view of the task scene. A pair of Flea 3 (Point Grey, Richmond, BC) cameras were mounted on a custom designed fixture. The position and orientation of the camera were manually adjusted to obtain the best view of the task board, and were fixed throughout all the experiments.The teleoperation was implemented as a position-exchange with PD controllers. The Cartesian position and the orientation of the tooltips were calculated from the sampled joint angles via forward kinematics. Velocities were calculated using numerical differentiation and filtering with a second-order Butterworth low-pass filter with a 20 Hz cutoff. To control the PSM, the position and velocity of the MTM were down-scaled by factor of 3 to mimic the `fine' movement scaling mode of the clinical da Vinci system, the orientation was not scaled. Similarly to the clinical da Vinci, there was no force feedback, and there was a small torque feedback on the orientation degrees of freedom to help users avoid large misalignment in tool orientation between the PSM and MTM.§.§.§ Experimental ProceduresSixteen participants performed the experiment that was approved by the Stanford University Institutional Review Board, after giving informed consent. The participants included six experienced surgeons (five urologists, nrobotic procedures>120, and one general surgeon, nrobotic procedures>150, self-reported), and ten nonmedical users (engineering graduate students) without surgical experience. There were no restrictions on the handedness of the surgeons, and the nonmedical users were all right handed by self-report. One nonmedical participant had extensive experience with the experimental setup, and hence was removed from the analysis.Each participant performed both teleoperated and open needle-driving sessions. The order of the two sessions was balanced across participants, i.e., half of the experienced surgeons and half of the nonmedical users performed the robotic task first, the other halves performed the open task first. 
The assignment into these groups was random. The participants performed the teleoperated needle-driving using the dVRK with a large needle-driver (Fig. <ref>(a-b)), and the open needle-driving using a standard surgical needle-driver (Fig. <ref>(c-d)). The needle was a CT-1 tapered needle without its thread. Each task board consisted of four identical sets of targets, but only one set of targets was visible during a particular trial. Each set of targets on the task board consisted of four marks (Fig. <ref>(a).III): start (s), insertion (i), exit (e), and finish (f). In the teleoperated session, the participant sat in front of the master console of the dVRK (Fig. <ref>(a)). The task board was rigidly mounted on the patient-side table (Fig. <ref>(b)), such that its position was fixed relative to the cameras.

In the open session (Fig. <ref>(c)), to provide a similar context to the teleoperated session, the participant also sat in front of the dVRK. A similar task board was mounted on the armrest of the dVRK. To determine the position and orientation of the surgical needle-driver's tip, magnetic pose trackers (trakSTAR, Ascension Technology Corporation, Shelburne, VT) were mounted on its shafts (Fig. <ref>(d)). To prevent signal distortion, the tracker was separated from the metal body of the driver by 2 cm.

Each participant watched an instruction video before each session (teleoperated or open). Each session included 80 trials; after each block (10 trials) a break was offered. After two blocks, the suture-pad was readjusted so that a fresh area of the pad and targets were presented to the participant. Each trial started with a bimanual adjustment of the needle in the right needle-driver in a configuration that is appropriate for the task. Then, participants placed the tip of the needle at the start target (s), and pressed the left foot-pedal (teleoperated) or mouse-button (open) to indicate the beginning of the task sequence.

A single needle-driving trial consisted of four stages, as depicted in the video and in Fig. <ref>(a): (I) transport – reaching with the needle head from s to i, (II) insertion – driving the needle through the tissue until its tip exits at e, (III) catching – opening the needle-driver and catching the tip of the needle, and (IV) extraction – pulling the needle and reaching to f with its tail. The trial ended when the tail of the needle was placed at the end target, and the left foot-pedal or mouse-button was pressed to indicate trial end. During the experiment, some of the trials were not performed according to the instructions or were not recorded properly. These trials were identified during the experiment prior to data analysis, and were removed from the analysis. Among the teleoperated sessions, 28 out of 1200 trials were removed, and in the open sessions, 58 trials were removed.

§.§.§ Data Acquisition and Preprocessing

In the teleoperated session, we analyzed the right PSM's data. The Cartesian position, velocity, orientation and opening angle of the needle-driver were recorded at 2 kHz. In the open session, the position and the orientation of the two magnetic pose trackers were recorded at 120 Hz. We interpolated and downsampled all data to 100 Hz. We filtered the Cartesian position offline at 6 Hz with a 2nd order zero lag low-pass Butterworth filter using the Matlab function . In the open condition, we calculated the mapping from the position of the sensors 𝐱^O_R and 𝐱^O_L to the driver's endpoint 𝐱^O (Fig. <ref>(c)) using a calibration dataset.
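As an illustration only (not the original Matlab pipeline), the zero-lag low-pass filtering described above can be reproduced along the following lines; the Python/SciPy form and helper name are ours, while the 100 Hz resampling, 6 Hz cutoff, and second-order design follow the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0      # sampling rate after resampling [Hz] (assumed uniform)
CUTOFF = 6.0    # low-pass cutoff for the Cartesian position [Hz]

def lowpass_zero_lag(position, fs=FS, fc=CUTOFF, order=2):
    """Zero-lag low-pass Butterworth filtering of an (N x 3) position array."""
    b, a = butter(order, fc / (fs / 2.0))    # normalized cutoff; default design is low-pass
    return filtfilt(b, a, position, axis=0)  # forward-backward filtering cancels the phase lag
```

Note that forward-backward filtering doubles the effective filter order; whether the reported second order refers to the design or to the effective filter is not specified in the text.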
We calculated the Cartesian velocity using numerical differentiation of the filtered position.In both conditions, the orientations were recorded as rotation matrices. Rotation matrices are orthogonal by definition, but because the resolution of the recording is limited, the recorded matrices were not orthogonal. Therefore, we used singular value decomposition (SVD) to find the nearest orthogonal matrix for each sampling point. Then, we converted the matrices to quaternions using the Matlabfunction, and interpolated them using spherical linear interpolation (SLERP) <cit.>. Quaternions that represent orientation are unit quaternions (i.e., normalized quaternions), and therefore, we normalized the quaternions after each calculation. In the open condition, the opening angle of the driver, φ^O, was calculated as φ^O=cos^-1(𝐱̂^O_R·𝐱̂^O_L), where 𝐱̂^O_R and 𝐱̂^O_L are elements from the rotation matrices which represent the orientations of the right and left magnetic trackers (Fig. <ref>(c)), and· is the dot product. In the teleoperated condition, the opening angle was available directly from the recorded data.§.§.§ SegmentationWe segmented the needle-driving movement into four stages (Fig. <ref>(b)). Since there was no video recording of the movements, we segmented the movements using the kinematic trajectories. Because the movements were structured, we could define the transition between segments by specific indicators in the recorded signals, such as a minimum of the velocity or a threshold of the opening angle. To replace the manual search for these indicators, we built an algorithm that automatically finds them. The segmentation of all the trials was then checked visually, and when the algorithm failed to segment the movement, we corrected the segmentation manually. This happened in 7 teleoperated trials, and in 127 open trials, including all the trials of one of the participants.The movement's trajectory and the opening angle were helpful for the segmentation of the first two segments, until the participant opened the needle-driver for the first time. However, after these two segments, the movements were very different from each other, so we could not define the transition between the segments using the kinematics signals. For instance, using the kinematics of the tools we could not identify a situation in which the participant was not able to pull the needle out, and tried to insert it again. Therefore, in this paper, we focus on the first and second segments, which were relatively consistent across participants and trials. §.§ Dataset 2: Porcine§.§.§ SetupData were collected from da Vinci Xi surgical systems (Intuitive Surgical, Inc.) using an Intuitive Data Recorder (IDR). The data consisted of a single channel of endoscopic video and kinematics based on the joint angles of the patient side cart.§.§.§ Experimental ProceduresThree experienced (nrobotic procedures>200) and four novice (nrobotic procedures<100) surgeons completed a clinical-like suturing task on a porcine model that targeted the technical skills of using the da Vinci system (Fig. <ref>). The suturing exercise required surgeons to use a two-hand technique to tie 4 interrupted sutures using large needle-drivers. The suturing task was part of a series of clinical-like activities conducted by each surgeon. §.§.§ Data Acquisition and PreprocessingIn this dataset, we used the videos from the endoscopic camera for segmentation, and the PSM's data for calculation of the metrics. The videos were recorded at 30 frames per second. 
The participants were not limited to performing the task with one specific hand, and therefore, we analyzed the data of the tool that was used in each specific segment. The Cartesian position and orientation of the tools were recorded at 50 Hz. The filtering of the Cartesian position data and the conversion of rotation matrices to quaternions were performed similarly to the analysis of the Dry Lab dataset, as described in <ref>.

§.§.§ Segmentation

The part of the movement that was consistent across participants and attempts was when the surgeon inserts the needle into the tissue. Therefore, we chose to analyze only the data of the insertion part to enable comparisons between the movements. We used the video stream to manually label these segments. Each participant performed four or five sutures, and for every suture we isolated the insertion segment. When there were two attempts at needle insertion for the same suture, we included both attempts in the data analysis; this happened only twice in the entire dataset.

§.§ Metrics

For each trial and each segment, we calculated four metrics: (1) task time – the time elapsed between the beginning and the end of the movement; (2) path length – the distance travelled by the instrument; (3) angular displacement – the accumulated change in instrument orientation; and (4) rate of orientation change – the average rate of the change in instrument orientation. The first two are classical metrics for skill evaluation. We included these metrics to allow us to compare the performance of our new orientation-based metrics to the classical ones.

The task time was calculated as:

TT = t_1_{i+1} - t_1_i,

where t_1_i is the time elapsed between the beginning of the movement and the beginning of the ith segment. Then, we found the distance Δd_j,j+1 between pairs of consecutive sampled frames at the instrument's tip, T_j and T_j+1 (Fig. <ref>(a)):

Δd_j,j+1 = ||Δx_j, Δy_j, Δz_j||,

where Δx, Δy, Δz are the differences in the x, y, z positions, respectively. Using the distance, we calculated the path length for the N samples of the segment:

P = ∑_j=1^N-1 Δd_j,j+1.

For the orientation-based metrics, we first calculated the rotation difference between consecutive frames:

Δ𝐐_j,j+1 = 𝐐_j+1 𝐐_j^-1,

where 𝐐_j and 𝐐_j+1 are unit quaternions representing the orientations of the frames. Δ𝐐_j,j+1 is a unit quaternion and thus can be interpreted as a rotation around the axis 𝐤̂ (𝐤̂=[k_x,k_y,k_z]) by Δθ_j,j+1 <cit.> (Fig. <ref>(b)):

Δ𝐐_j,j+1 = [q_1,q_2,q_3,q_4] = [cos(Δθ_j,j+1/2), 𝐤̂ sin(Δθ_j,j+1/2)].

We calculated the angle Δθ_j,j+1 (Fig. <ref>(b)), which represents the orientation change between pairs of sampled frames:

Δθ_j,j+1 = 2cos^-1(q_1),

where q_1 is the first component of the quaternion Δ𝐐_j,j+1.

For each participant and segment, outlier angle values were defined as angle values that were 35 times larger than the average of the angles across all the trials of that participant and segment. The source of the outliers is attributed to problems during the recording of the data. The entire segment that included an outlier angle was removed from the analysis. In the teleoperated condition of the Dry Lab dataset, this outlier removal procedure resulted in the removal of 6 segments. In the open condition of the Dry Lab dataset, and in the Porcine dataset, none of the segments were removed.

The angular displacement was defined as:

A = ∑_j=1^N-1 |Δθ_j,j+1|.

This metric quantifies the overall change in orientation during the movement – the angular path.
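As a concrete illustration of the orientation computations above (the relative quaternion, its rotation angle, and the angular displacement), a minimal Python sketch is given below. It assumes the [q_1, q_2, q_3, q_4] = [scalar, vector] quaternion convention used in the text; the helper names are ours, and this is not the code used in the study.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as [q1, q2, q3, q4] = [scalar, vector]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    """Conjugate of a quaternion; for unit quaternions this equals the inverse."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_changes(Q):
    """Angles of the relative rotations between consecutive unit quaternions (rows of Q)."""
    dtheta = np.empty(len(Q) - 1)
    for j in range(len(Q) - 1):
        dq = quat_mult(Q[j + 1], quat_conj(Q[j]))   # relative rotation Q_{j+1} * Q_j^{-1}
        q1 = np.clip(dq[0], -1.0, 1.0)              # numerical safety before arccos
        # Because q and -q encode the same rotation, |q1| keeps the angle in [0, pi];
        # the definition in the text applies arccos to q1 directly.
        dtheta[j] = 2.0 * np.arccos(abs(q1))
    return dtheta

def angular_displacement(Q):
    """Accumulated orientation change A: the sum of |dtheta| over the segment."""
    return float(np.sum(np.abs(orientation_changes(Q))))
```

The rate of orientation change introduced next can be obtained from the same Δθ_j,j+1 values by dividing each by its sampling interval Δt_j,j+1 and averaging.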
Note that the angular displacement is different from the angular path metric used to assess laparoscopic skill, which refers to the amount of rotation around only one or two of the tool's axes <cit.>. In the open needle-driving, we measured two orientations – one for each magnetic tracker. Because the trackers were rigidly attached to the driver, we assumed that as long as the needle-driver is closed around the needle, the change in orientation between subsequent samples should be equal for both trackers. However, some participants held the driver so that one of their fingers touched one of the trackers. This contact disturbance caused movements of the tracker, and therefore unintentional changes in the orientation that could inflate the angular displacement metric. Therefore, we calculated the angular displacement for the two trackers, and used the smaller angular displacement in further calculations.

The rate of orientation change was defined as:

C = (1/(N-1)) ∑_j=1^N-1 |Δθ_j,j+1| / Δt_j,j+1,

where Δt_j,j+1 is the time difference between subsequent samples. This metric quantifies the rate of the change of the instrument orientation during the movement. In the open needle-driving, we calculated Δθ_j,j+1 from the same tracker that was used for the calculation of the angular displacement (i.e., the one without the finger contact disturbance).

§.§ Statistical analysis

§.§.§ Dry Lab Dataset

In this study we focused on differences between the performance of experienced surgeons and nonmedical users, and between the beginning and the end of the experiment. We did not perform statistical comparisons between the teleoperated and open conditions. Therefore, the following process was carried out for each condition separately.

For each trial, we calculated the four metrics for the first and second segments (I-transport and II-insertion). We log-transformed the metrics to correct their non-normal distributions. We calculated the averages of the first and last 10 trials of each participant for each metric and each segment. For each metric, we fit a 2-way ANOVA model with repeated measures on one factor (mixed model ANOVA), with expertise (experienced surgeon / nonmedical user) as the between-participants factor, and trial (early/late) as the within-participant factor. We used the Bonferroni correction for post-hoc comparisons.

§.§.§ Porcine Dataset

We calculated the values of the four metrics for each insertion attempt for each of the seven participants. We then calculated the average value of each metric per participant. Following that, we calculated the average value of each metric for each expertise group (experienced surgeons and novice surgeons), and the difference between the groups' averages (Δ_Exp.-Nov.). The sample size of this dataset is small, and therefore, we used permutation tests <cit.> to test the statistical significance of the difference between the expertise groups. We reassigned the seven participants' averages into the 35 possible combinations of groups: one group with three participants (`experienced surgeons'), and a second group with four participants (`novice surgeons'). For each combination we calculated Δ_Exp.-Nov.. To calculate the significance, we counted the number of combinations in which Δ_Exp.-Nov. was equal to or higher in absolute value than the original Δ_Exp.-Nov., and divided it by the number of possible combinations. Statistical significance was determined at the 0.05 threshold. We used the Matlab Statistics Toolbox for our statistical analysis.
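Since the exhaustive permutation test described above involves only 35 relabelings, it can be written out explicitly; the sketch below (Python, with illustrative function and variable names of our choosing, not the Matlab implementation used in the study) operates on the per-participant averages of one metric.

```python
import numpy as np
from itertools import combinations

def permutation_p_value(experienced, novices):
    """Exact two-sample permutation test on the difference of group means."""
    experienced = np.asarray(experienced, dtype=float)
    novices = np.asarray(novices, dtype=float)
    values = np.concatenate([experienced, novices])
    n_exp = len(experienced)
    observed = experienced.mean() - novices.mean()

    count = 0
    groupings = list(combinations(range(len(values)), n_exp))   # 35 groupings for 3 vs 4
    for idx in groupings:
        mask = np.zeros(len(values), dtype=bool)
        mask[list(idx)] = True
        diff = values[mask].mean() - values[~mask].mean()
        if abs(diff) >= abs(observed):                           # two-sided comparison
            count += 1
    return count / len(groupings)
```

With three experienced and four novice surgeons, the smallest attainable p-value under this procedure is 1/35 ≈ 0.029, which matches the p-values reported for task time and the rate of orientation change in the Results.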
§ RESULTS §.§ Comparing needle-driving performance of experienced surgeons and nonmedical users in the Dry Lab dataset Fig. <ref> depicts examples of a teleoperated and an open trial of an experienced surgeon in the upper panels, and a nonmedical user in the lower. Qualitatively, it is evident that the experienced surgeon completed both tasks faster than the nonmedical user, and with higher rate of orientation change. Figs. <ref>-<ref> depict the metrics in the first two segments, for the teleoperated (Fig. <ref>) and the open (Fig. <ref>) conditions, as a function of trial number (left panels), and the averages of the first and the last 10 trials (right panels). Most of the noticeable differences between experienced surgeons and nonmedical users are in segment II (insertion). This observation is not surprising because the driving of the needle through the tissue (segment II) is the challenging aspect of the task. Nevertheless, for completeness, we briefly present the full analysis of segment I and then focus on segment II.§.§.§ Segment I-transport The statistical analysis of segment I (transport) showed that for most of the metrics, the differences between experienced surgeons and nonmedical users, in both conditions (teleoperated and open) were not statistically significant. The only metric that showed statistically significant differences between experienced surgeons and nonmedical users was task time (teleoperated: F_1,13=8.942, p=0.010, Δ_Exp.-Non.=-0.476, open: F_1,13=6.206, p=0.027, Δ_Exp.-Non.=-0.316). In addition, the improvement between early and late trials was statistically significant for the task time in both conditions (teleoperated: F_1,13=10.147, p=0.007, Δ_Late-Early=-0.463, open: F_1,13=14.003, p=0.002, Δ_Late-Early=-0.256), and for the rate of orientation change in the open condition (F_1,13=7.891, p=0.015, Δ_Late-Early=0.223). §.§.§ Segment II-insertion Table I summarizes the results of the mixed effects ANOVA for the different metrics. Post-hoc comparisons are presented only when the interaction expertise*trial was significant. Task time is a classical metric, and we expected shorter task times for more experienced surgeons. Indeed, the task time of the experienced surgeons was shorter than of nonmedical users, and in the last trials of the experiment, task time was shorter than in the first trials (Fig. <ref>(a-b), <ref>(a-b)). This observation is supported by the statistical analysis – for both conditions (teleoperated and open), the effect of expertise and trial was statistically significant. In the open condition, the interaction between trial and expertise was statistically significant, because the improvement of the nonmedical users was greater than of experienced surgeons. Nevertheless, the difference between them remained statistically significant even in the last trials.Path length is related to the classical economy of motion metrics. Fig. <ref>(c-d) and <ref>(c-d) show that in segment II, in both conditions (teleoperated and open), experienced surgeons had a shorter path length than nonmedical users, and that there was an improvement between early and late trials. These effects were statistically significant. In the teleoperated condition, there was a statistically significant interaction between expertise and trial. In the early trials, the paths of experienced surgeons were shorter than of nonmedical users (Fig. <ref>(d)). 
The nonmedical users improved more than the experienced surgeons, and as a result, in the late trials, there was no longer a statistically significant difference in path length between experienced surgeons and nonmedical users. These results are consistent with our previously reported analysis of the entire task <cit.>. The fact that there was no difference between experienced surgeons and nonmedical users after only 80 trials suggests that, at least in some tasks, this metric alone is insufficient for surgical skill assessment.Angular displacement. Our task requires rotation of the needle along its arc for successful insertion into the tissue. Therefore, we hypothesized that a large angular displacement will be correlated to surgical experience. However, Fig. <ref>(e-f) depict that in the teleoperated condition, there is no statistically significant difference between experienced surgeons and nonmedical users. Moreover, Fig. <ref>(e-f) show that in the open condition, experienced surgeons had a statistically significant smaller angular displacement than nonmedical users. In addition, in the teleoperated condition, the angular displacement at the end of the experiment was statistically significantly smaller than in the early trials. A careful examination of Fig. <ref>(d) suggests a reason for these surprising results. The nonmedical user tried a few times unsuccessfully to rotate the needle through the tissue, and accumulated a large angular displacement that does not necessarily reflect a successful drive of the needle (panel b). This motivated us to propose a metric that quantifies the rate of orientation change rather than its accumulation.Rate of orientation change. Examination of orientation change trajectories (Δθ) (Fig. <ref>) suggests that experienced surgeons perform the insertion in one attempt, and use faster orientation changes. Therefore, we hypothesized that a higher rate of orientation change will be correlated to surgical experience. Fig. <ref>(g-h) and <ref>(g-h) show that in segment II, in both sessions (teleoperated and open), experienced surgeons changed their orientation faster than nonmedical users, and that in the last trials of the experiment, the rate of orientation change was higher than in the first trials. The statistical analysis supported this observation, and showed that for both conditions (teleoperated and open), the effect of expertise and trial was statistically significant.§.§ Comparing performance of experienced and novice surgeons in the Porcine datasetFig. <ref> depicts the results of the metrics in the insertion segments of the suturing task. The experimental conditions in this dataset are different to those in the Dry Lab dataset, i.e., novice surgeons instead of nonmedical users, and the training is on a porcine model instead of a dry lab task. Despite these differences,the results of the two datasets are consistent with each other. There are large and statistically significant differences between experienced surgeons and novice surgeons intask time (Δ_Exp.-Nov.=-6.1442 [sec], p=0.029) and rate of orientation change (Δ_Exp.-Nov.=0.38157 [rad/sec], p=0.029). The differences in path length (Δ_Exp.-Nov.=-24.4987 [mm], p=0.086) and angular displacement (Δ_Exp.-Nov.=-1.3479 [rad], p=0.086) are less pronounced. While these results are very promising, and demonstrate the performance our new rate of orientation change metric on more realistic data, we are cautious when it comes to drawing clear conclusions, due to the small size of this dataset. 
§ DISCUSSIONIn this study, we developed two orientation-based metrics for surgical skill evaluation. These metrics capture critical aspects of angular motion that are not taken into account in other motion-based metrics which are calculated using the position of the tools. We tested the new metrics using two datasets, which differ in their conditions. The Dry Lab dataset includes data of a very structured and simplified task, which was performed by experienced surgeons and nonmedical users. Using this dataset, we were able to test our new metrics under controlled conditions. The Porcine dataset includes data of training on a porcine model, which was performed by experienced and novice surgeons. Using this dataset, we demonstrated the ability of our metrics on more realistic situations. We tested our new metrics alongside task time and path length metrics, which are classical metrics for surgical skill evaluation.This allowed us to examine the performance of our metrics alongside the performance of metrics that are commonly used today.Using our new metric (the rate of orientation change) we found that experienced surgeons change the tool’s orientation statistically significantly faster than nonmedical users / novice surgeons. This result is consistent with the statistically significant difference we found using the classical task time metric. Surprisingly, using our second new metric (angular displacement) we found that experienced surgeons have shorter angular paths than nonmedical users / novice surgeons. This difference was statistically significant only for the open technique. Taken together, our results suggest that when assessing skill in procedures that require control of orientation, in addition to the existing metrics, it is important to use orientation-based metrics.§.§ Task segmentationThe needle-driving and suturing tasks include several segments. Each segment in these tasks has different requirements in terms of task constraints, and may require the use of different metrics to assess surgical skills. For example, the needle transport segment probably does not require prominent orientation change. Therefore, prior to metrics calculation, we used characteristics of the movement (Dry Lab) or video data (Porcine) to segment it. Our results indicate that the segmentation was important – for example, in the Dry Lab dataset, the angular displacement was much higher during insertion (segment II) than during transport (segment I). Additionally, the rate of orientation change revealed differences between experienced surgeons and nonmedical users during insertion, but not during the transport segment. These results highlight the importance of segmentation in surgical skill assessment. In the Dry Lab dataset, the segments were part of the design of the experiment, and therefore, their definition was simple. In most of the clinical procedures, segmentation is also very important <cit.>, and exists both on a macro and on a micro level. For example, in prostatectomy or thymectomy the procedure can be segmented into discrete steps: anatomical structures dissection, removal of anatomy of interest, and anastomosis. Each of these steps can be further segmented into sub-movements, like the segments detailed within the Dry Lab dataset. To address the segmentation challenge, several prior studies developed algorithms for surgical task segmentation <cit.>. 
In the Dry Lab dataset, we focused the majority of our analysis on the insertion of the needle, because this was the most challenging aspect of the task, and because most of the differences between experienced surgeons and nonmedical users were observed in that segment. Additionally, in the Porcine dataset, to be able to compare between the movements we chose to focus on the insertion part, which was consistent across participants and attempts. Therefore, the remainder of the discussion focuses on the insertion segment. §.§ MetricsConsistently with previous studies <cit.>, we found that experienced surgeons completed the task faster than nonmedical users. However, the speed-accuracy tradeoff – the inverse relation between the speed of the movement and its accuracy <cit.> suggests that surgeons may compromise accuracy to complete the task very fast. Therefore, task time must be accompanied by accuracy metrics <cit.>.Although path length is a common measure for surgical skill, there is disagreement regarding its effectiveness. Several studies showed that path length is a useful metric <cit.>, but others found it to be less adequate <cit.>. For example, during blunt tissue dissection, it is common for novices to be too `timid' and do inefficient and small instrument sweeps to separate tissue planes, whereas experienced surgeons, who understand tissue tolerances better, may make much broader sweeping motions, thus elevating overall path length. Our results show difference between the path length of experienced surgeons and novices in the Dry Lab dataset, but at the end of the teleoperated sessions, this difference was not statistically significant. Similarly, we observed differences in path length between the expertise groups in the Porcine dataset, but not in all the surgeons. The difference between the averages of the two expertise groups was not statistically significant. Therefore, we believe that, at least in needle-driving, path length alone is insufficient for quantifying surgical skill.To quantify the range of the tool's orientation change, we proposed a new metric of angular displacement. We expected that experienced surgeons will have a larger angular displacement. Surprisingly, in the teleoperated condition of the Dry Lab dataset and in the Porcine dataset, we found no significant difference, and in the open condition, we found differences in the opposite direction to what we expected. A possible explanation is corrections of the tool's orientation to enable a better insertion. Nonmedical users probably used many such (unsuccessful) correction attempts that resulted in a large total angular displacement. On the other hand, the experienced surgeons knew exactly how to rotate their hand as required, and needed fewer corrections. Therefore, a movement of an experienced surgeon with fast large accumulated orientation, and a movement of a nonmedical user with many corrections may yield the same angular displacement. Our results are in agreement with a previous study of suturing skill in a virtual reality simulator <cit.>. They found that during needle insertion, trained participants had less orientation change than untrained participants, and suggested that this result may be due to errors in needle grasping and penetration angle.The last metric (rate of orientation change) is not affected by the accumulation of unsuccessful attempts of insertion. Indeed, it showed statistically significant differences between nonmedical users and experienced surgeons in the Dry Lab dataset. 
We also found that in the last trials, the participants changed the tool's orientation faster than in the first trials. Similarly, in the Porcine dataset this metric, together with task time, was the most successful in characterising the differences between expertise levels. These results demonstrate that the rate of change of the tool's orientation is important for the success of needle-driving.

Our ultimate goal is to develop new metrics for differentiation between expertise levels. Our results of statistically significant differences between the rate of orientation change of experienced surgeons and nonmedical users / novice surgeons fall short of proving this differentiation. However, the size of the effect is substantial – the experienced surgeons are almost twice as fast in orientation change compared to nonmedical users / novice surgeons in both datasets. Moreover, the differences are statistically significant even with a small sample size. This suggests that our new metric presents a promising step towards the eventual differentiation between expertise levels.

For each surgical task it is important to choose the relevant metrics. For example, each exercise of the da Vinci Skills Simulator (Intuitive Surgical, Inc.) has different requirements, and therefore, each exercise has a unique scoring method <cit.>. The new orientation-based metric may help obtain a more accurate estimation of technical skill in tasks that involve control of orientation, such as suturing. Each individual metric has its strengths and its limitations. Moreover, it appears necessary to combine more than one metric. For example, in a task of needle insertion, if only orientation-based metrics are used, it is possible to `game' the task by significantly and quickly rotating the tool before starting the insertion and thereby getting a better score. Therefore, in developing training curricula it is important to combine many metrics for skill assessment.

Because the rate of orientation change quantifies characteristics that can be explained, trainers will be able to give trainees informative feedback on how to improve their movements. For example, they will be able to guide trainees to rotate the tool faster during specific parts of the task. In addition, it is possible to tailor haptic assistance or resistance training for the control of orientation <cit.>. For example, future studies can test whether haptic guidance that rotates the hand of the participant faster during surgical tasks or, alternatively, adding resistance to rotation, can increase the rate of orientation change.

§.§ Implications for human motor control

The insertion of the needle involves a complicated motion that requires control of the tool's orientation. In human motor control, point-to-point and planar drawing movements have been well studied, and many models were proposed to explain how we control these movements <cit.>. Three-dimensional movements were studied to a much lesser extent <cit.>, and the control of orientation <cit.> is almost never explored. In addition, our task involves insertion of a curved needle into either artificial or real tissue. Interaction with elastic objects is often studied in one-dimensional movements <cit.>, and needle insertion into soft tissue was also previously studied <cit.>, but only using a simplified model of tissue in a constrained task. However, models of movement and orientation coordination in three-dimensional movements while manipulating complex end-effectors (such as our needle) are yet to be developed.
Our new orientation-based metrics may help in understanding how surgeons control the orientation of their hands and instruments. Therefore, in future work, our study can advance the understanding of movement coordination in realistic scenarios.

§.§ Limitations and future work

A video of the experiments in the Dry Lab dataset was not recorded, and therefore, we were not able to segment the movement after the first two segments. An analysis of the last two segments could add valuable information, and could contribute to the examination of the new metrics that we proposed. However, the first two segments require insertion of the needle through the tissue, which is a complex movement for inexperienced surgeons. Therefore, we believe that we can learn about surgical skill, and examine our new metrics, using the first two segments.

The needle-driving task of the Dry Lab dataset does not represent a real situation. For example, while surgeons often use the thread to position the needle, in the Dry Lab task there was no thread. To teach and evaluate basic surgical skills, it is typical to simplify real situations <cit.>, or even to use tasks which are not related to real surgical situations (e.g., peg transfer <cit.>). To test the potential of our new metrics in realistic situations, we examined them on the Porcine dataset. Although this dataset is small, the results suggest that these metrics may also be used in more realistic tasks.

For the rate of orientation change, there was great variability within the experienced surgeons group of the Dry Lab dataset (Fig. <ref>(g) and Fig. <ref>(g)). This may be a result of different strategies, or of different skill levels within the group. A composite of all the metrics may provide more granular discrimination among surgeons – not just novices and experienced surgeons, but novice to intermediate, intermediate to expert, and all levels between. Future studies with additional participants from different expertise groups, such as medical students, residents, fellows, and experienced surgeons with a larger variety of case experience, are needed to explore such composite metrics.

Correlating the new orientation-based metric with global rating scores such as OSATS may add validation to these novel performance metrics. In the main dataset that we used in this study, the Dry Lab, we do not have video recordings of the experiment, and therefore, we cannot extract global rating scores of the movements. However, we compared experienced surgeons and engineering students, which are two very different groups. This maximizes the possible differences in expertise, and facilitates development of new metrics. To validate this metric, further investigation is needed, including comparisons with global rating scores. In the Porcine dataset, we also do not have these ratings, and there are too few surgeons in the Porcine dataset for meaningful conclusions using such ratings.

In the Dry Lab dataset, we compared experienced surgeons and engineering students; medical residents were not included. The needle-driving task does not require clinical judgment, and therefore is less reliant on clinical knowledge and training. Moreover, medical residents early in their residency have very limited suturing experience, and hence they resemble engineering students in terms of suturing skills. Therefore, comparing these two groups allowed us to test the performance of the new metrics we developed.
Additionally, testing our metrics on the Porcine dataset, which consisted of surgeons with different expertise levels, yielded similar results.

§ CONCLUSIONS

We developed two new metrics for surgical skill evaluation. The rate of orientation change showed promising results. This metric captures technical aspects of the rotation of the hands and instruments that are taught during surgical training and have not been quantified by any other metric. We demonstrated that the rate of orientation change correlates with experience in both teleoperated and open needle insertion on a dry lab model, as well as on a porcine model. In addition, our results highlighted the importance of evaluating each segment of the movement separately. Future studies are needed to test this metric on a larger cohort of surgeons, and to translate kinematic metrics into meaningful training feedback to facilitate more efficient training. Characterizing the movements of surgeons may help improve the evaluation and the acquisition of motor skills that are critical to surgery, and may also provide new insight into how to improve the control of surgical robots and the training of new surgeons.

§ ACKNOWLEDGMENT

We thank Allison Okamura, Michael Hsieh, Zhan Fan Quek and Yuhang Che for providing the experimental data of the Dry Lab dataset.
http://arxiv.org/abs/1709.09452v2
{ "authors": [ "Yarden Sharon", "Anthony M. Jarc", "Thomas S. Lendvay", "Ilana Nisky" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20170927111427", "title": "Rate of Orientation Change as a New Metric for Robot-Assisted and Open Surgical Skill Evaluation" }
http://arxiv.org/abs/1709.09192v2
{ "authors": [ "Mariam Bouhmadi-López", "Che-Yu Chen", "Pisin Chen" ], "categories": [ "gr-qc", "astro-ph.CO", "hep-ph", "hep-th" ], "primary_category": "gr-qc", "published": "20170926180131", "title": "Primordial Cosmology in Mimetic Born-Infeld Gravity" }
http://arxiv.org/abs/1709.09188v1
{ "authors": [ "William J. Potter" ], "categories": [ "astro-ph.HE", "astro-ph.GA" ], "primary_category": "astro-ph.HE", "published": "20170926180052", "title": "Modelling blazar flaring using a time-dependent fluid jet emission model - an explanation for orphan flares and radio lags" }
Nithin Mohan^1, Subhashis Roy^2, Govind Swarup^2, Divya Oberoi^2, Niruj Mohan Ramanujam^2, Suresh Raju C.^1 (corresponding author, [email protected]), Anil Bhardwaj^1

^1 Space Physics Laboratory, Vikram Sarabhai Space Center, Thiruvananthapuram-695022, India.
^2 National Center for Radio Astrophysics, Tata Institute of Fundamental Research, Pune-411007, India.

The Venusian surface has been studied by measuring radar reflections and thermal radio emission over a wide spectral region of several centimeters to meter wavelengths from Earth-based as well as orbiter platforms. The radiometric observations in the decimeter (dcm) wavelength regime showed a decreasing trend in the observed brightness temperature (T_b) with increasing wavelength. The thermal emission models available at present have not been able to explain the radiometric observations at longer (dcm) wavelengths to a satisfactory level. This paper reports the first interferometric imaging observations of Venus below 620 MHz. They were carried out at 606, 332.9 and 239.9 MHz using the Giant Metrewave Radio Telescope (GMRT). The T_b values derived at the respective frequencies are 526 K, 409 K and < 426 K, with errors of ∼7%, which are generally consistent with the T_b values at 608 MHz and 430 MHz reported by previous investigators, but are much lower than those derived from high-frequency observations at 1.38-22.46 GHz using the VLA.

Keywords: Venus, surface; radio observations; radiative transfer

§ INTRODUCTION

Venus being the nearest planet, its dense atmosphere and surface have been the subject of many studies, including ones from orbiting spacecraft, landers and Earth-based observations over the last six decades:
* orbiting spacecraft: Mariner <cit.>, Magellan <cit.>, Venus Express <cit.>,
* Venera and Vega landers <cit.>, and
* radar and radio observations made from the Earth <cit.>.

The atmosphere of Venus is comprised of ∼96% CO_2, ∼4% N_2 and trace amounts of gases like H_2O, SO_2, CO and H_2SO_4 <cit.>. The thick atmosphere generates a pressure of ∼90 bars at the surface. Since CO_2 is a very efficient greenhouse gas, the surface is extremely hot, ∼735 K; and for the same reason, it does not have a significant diurnal or equator-to-pole variation of temperature. Many studies have reported possible characteristics of the interior of Venus and its tectonic nature, as well as the possible areas of the Venus surface that expel its internal heat <cit.>. It is postulated that heat generation from its core and by radioactive elements is similar to that for the Earth or Mars. <cit.> has summarized studies of Venus rocks by Venera 8, 9 and 10, which landed on its surface during 1972-75. The mean contents were found to be close to those of the basalts and granites of the Earth's crust, having a density of 2.8 ± 0.1 g cm^-3.

The Venusian surface has been explored since 1961, with the first radar observation of Venus from Earth carried out at NASA's Goldstone Observatory <cit.>.
The successive radar observations revealed important information about Venus, such as that its rotation is retrograde and the rotation period is 243.1 days, that its axis of rotation is almost perpendicular to its orbital plane, and that the planetary radius is ∼6,052 km <cit.>. Besides the ground-based radar observations, the Venus surface has been mapped using spacecraft-based radars on the Pioneer Venus Orbiter <cit.> and the Magellan probe <cit.>. All these studies were limited to a single frequency and to horizontal (H) polarization.

Studies of radar echoes at longer wavelengths, ranging from 15 cm to 7.84 m, showed global mean surface reflectivity values of ∼0.15. The low reflectivity of ∼0.02 at 3.8 cm was attributed to the attenuation of radar echoes by the atmosphere at these higher frequencies <cit.>. These radar measurements also indicated the presence of a thin layer (of ∼ centimeter thickness) of porous powdered soil or dust <cit.>. The Magellan radar (SAR) data have been used extensively to characterize the Venusian surface by studying geomorphology and the variation in the dielectric properties of the highland and lowland regions. The dielectric permittivity at the lowlands is ∼5, showing the presence of dry basaltic or granitic minerals, but the high value of the dielectric permittivity (> 50) at the highlands indicates the presence of highly conducting mineral deposits, or the presence of less absorbing materials that can return most of the incident radar signals at these locations. These radar investigations showed that about 15% of the impinging radiation is reflected, indicating that the dielectric permittivity of Venus is in the range of ∼4.15 to ∼4.5 <cit.>.

Several investigations have been made to derive properties of the thermal radio emission of Venus using Earth-based and space-borne radio telescopes operating at centimeter and decimeter wavelengths <cit.>. The passive-mode operation of the Magellan radar enabled the measurement of radio emissivity at 12.6 cm wavelength in horizontal polarization for more than 91% of the Venus surface during the first 8 months of its operation <cit.>. The global mean value of the emissivity observed using horizontal linear polarization is 0.845, a value that corresponds to a dielectric permittivity of between 4.0 and 4.5, depending on the surface roughness. These values are consistent with the permittivity values of the dry basaltic minerals that compose the bulk of the Venus surface. The above emissivity value is also in good agreement with that derived from the radar reflectivity.

<cit.> summarized early radiometric measurements of the brightness temperature (T_b) of Venus made using Earth-based radio telescopes. <cit.> measured the Venusian T_b at different frequencies using VLA observations at 22.46, 14.94, 8.44, 4.86 and 1.385 GHz, with errors of ∼2 to 5%. At shorter wavelengths (< 5 cm, or frequencies above 6 GHz) the radiation arises primarily from the dense atmosphere of Venus, but at longer wavelengths (decimeter and meter wavelengths) the thermal radiation is increasingly dominated by the surface and subsurface emission <cit.>. <cit.> proposed a detailed model to explain the VLA-measured values of T_b, considering absorption at microwave frequencies by the atmosphere based on the vertical profiles of SO_2 and H_2SO_4 derived from the Pioneer Venus (PV) probes and those inferred from measurements of the Mariner V (MV) spacecraft <cit.>. At longer wavelengths, the contribution of radiation from the surface and subsurface of Venus is also considered.
However, their model predicts T_b values much higher than those measured at 1.385 GHz and at < 1 GHz by others. <cit.> and <cit.> measured T_b values at frequencies < 1 GHz and found them to be in the range 500-550 K. These T_b values are significantly lower than those measured at higher frequencies. The radiometric measurements made during 1972-73 used Wyllie's flux density scale for calibration, and doubts were raised in the literature about the scale used in those measurements at frequencies < 1 GHz <cit.>. However, <cit.> subsequently noted that “Wyllie's flux density scale is only 3% above our CasA scale”. It is to be noted that the <cit.> flux density scale is widely used today by radio astronomers across the world for flux density calibration.

The depth of penetration of micro/radio waves into the Venusian regolith depends on the dielectric properties of the same. The lander-based in-situ measurements, Earth-based radar/radiometric measurements, as well as the orbiter measurements, concluded that the Venusian surface is dry and has low dielectric constant values of ∼4.5. With this consideration, observations at meter wavelengths are suitable for the study of deeper subsurface characteristics, since the penetration depth, δ (in meters), of the radiation is related to the wavelength by the equation

δ = λ_0 √(ε') / (2π ε''),

where ε' is the real part and ε'' is the imaginary part of the dielectric permittivity, and λ_0 is the wavelength in free space. It has been found that there is a significant decrease in T_b beyond a wavelength of ∼15 cm. There were a large number of successful observations carried out between several millimeter (mm) and centimeter (cm) wavelengths, but no observations were reported beyond 70 cm (≲ 400 MHz) due to increased system and background noise, solar interference and weak planetary emission <cit.>. Flux density measurements based on interferometric imaging do not suffer from base-level variations, solar interference and local radio-frequency interference, which often afflict single dish observations <cit.>; the latter two can also afflict non-imaging interferometric observations <cit.>.

Here we report on interferometric observations carried out at 3 different wavelengths, 50 cm, 90 cm and 123 cm (or 606 MHz, 332.9 MHz and 239.9 MHz, respectively), using the Giant Metrewave Radio Telescope (GMRT). The T_b of Venus measured from the images made from these observations can serve as input for developing an improved thermal emission model that can account for the increasing subsurface thermal emission at longer wavelengths. These are the first reported interferometric imaging flux density and T_b measurements of Venus at frequencies lower than 620 MHz. This paper presents the results obtained from analyzing the archival data of Venus (project code 05BBA01) collected during the observations made in March 2004 using the GMRT at 239.9 MHz, 332.9 MHz and 606 MHz. The observations are discussed in Section 2, and data processing and reduction are presented in Section 3. The results are presented in Section 4, followed by discussion and conclusions in Sections 5 and 6, respectively.

§ OBSERVATIONS

Observations of Venus were carried out on six days between March 20 and March 27, 2004 using the GMRT at three frequencies centered close to 239.9 MHz, 332.9 MHz and 606 MHz. The 239.9 MHz and 606 MHz observations were conducted simultaneously using the dual-frequency co-axial GMRT feeds.
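Before detailing the observing setup, it is instructive to evaluate the penetration-depth relation quoted in the introduction at these three wavelengths. The sketch below assumes a dry, basalt-like real permittivity and a purely illustrative loss tangent; neither value is a measurement from this work, so the resulting depths should only be read as order-of-magnitude indications of the subsurface layers probed.

```python
import numpy as np

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """delta = lambda_0 * sqrt(eps') / (2 * pi * eps''), all lengths in metres."""
    return wavelength_m * np.sqrt(eps_real) / (2.0 * np.pi * eps_imag)

# Assumed, dry-basalt-like electrical properties (illustrative only):
eps_real = 4.5
loss_tangent = 0.01               # hypothetical eps''/eps'
eps_imag = loss_tangent * eps_real

for wl in (0.50, 0.90, 1.23):     # the three GMRT wavelengths [m]
    depth = penetration_depth(wl, eps_real, eps_imag)
    print(f"lambda = {wl:4.2f} m  ->  delta ~ {depth:4.1f} m")
```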
The design of the GMRT is described in <cit.>. Briefly, the GMRT consists of 30 fully steerable parabolic dishes, each 45 m in diameter, 14 of which are located in a central array of ∼1 km × 1 km extent and the other 16 in a Y-shaped array of extent 25 km. Two of the six days of observations had short observing runs, so only the observations from the other four days are presented here. An integration time of 16.8 seconds and a spectral resolution of 125 kHz were used for these observations. Table 1 summarizes the observation details.

Both 3C147 and 3C48 were used as primary flux calibrators at 606.0, 606.1 and 239.9 MHz, and only 3C48 was used for this purpose at 332.9 MHz. The primary flux calibrators were observed at the start and end of each observing session. The flux densities of the flux calibrators were obtained from the task SETJY in the Astronomical Image Processing System (AIPS) and the Common Astronomy Software Applications (CASA), which use the Scaife and Heald flux density scale <cit.> below 500 MHz and the Perley and Butler scale <cit.> above 500 MHz. These scales are in close agreement with the <cit.> scale at frequencies ≳300 MHz. The compact radio source 0318+164 was used as both the phase and bandpass calibrator; it was observed for ∼5 min every 30 minutes. Rather than tracking Venus, whose right ascension (RA) and declination (Dec) change with time, the antennas tracked the RA, Dec corresponding to the coordinates of Venus at the middle of the observing period for that particular day. The ephemeris details of Venus during the observations are tabulated in Table 2. The diameter of Venus varied from 21.19 arcsec to 22.97 arcsec over the 6 days of observations.

§ DATA REDUCTION

§.§ Analysis Methodology

To build confidence in our analyses and results, we used both CASA and AIPS and somewhat different analysis procedures. The observations made at 606.1 and 239.9 MHz were analyzed using both AIPS and CASA, those at 332.9 MHz were analyzed only using AIPS, and those at 606.0 MHz were analyzed only using CASA. Data editing was performed to remove records highly deviant in amplitude, arising from man-made radio interference, in both the time and frequency domains. The flagging of data analyzed in CASA was carried out using a combination of automated flagging outside CASA using FLAGging and CALibration (FLAGCAL), a software pipeline developed to automate the flagging and calibration of GMRT data <cit.>, and manual flagging in CASA. The data analyzed in AIPS were first manually flagged for dead antennas, followed by the automated flagging task RFLAG.

First, we describe the analysis procedure followed in CASA. As mentioned earlier, we tracked the mean RA, Dec of Venus for every observing day. While this allowed Venus to move appreciably with respect to the phase center and in the antenna beam over the course of the observations, tracking at the sidereal rate enabled us to do self-calibration to correct for the phase changes introduced by the GMRT electronics and the ionosphere during the observations. The background celestial radio sources seen in the field of view were used for self-calibration. Next, the CASA task UVSUB was used to subtract the contribution of the background sources from the self-calibrated visibility data using the statistically significant deconvolved (CLEAN) components in the model.

An often-used technique for imaging a source with non-sidereal motion (Venus) is to make multiple individual images, each of them over a duration short enough that the angular displacement of the source in that period is significantly smaller than the resolution of the imaging instrument.
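A quick numerical check of this snapshot approach, using the proper motion and beam sizes quoted in the next paragraph, shows that one-minute snapshots keep the motion smear below the synthesized beam (the snippet is illustrative only; the numbers are simply those stated for these observations):

```python
# Motion of Venus within one snapshot compared to the synthesized beam.
proper_motion = 2.65    # arcsec per minute, sky-plane motion of Venus
snapshot_length = 1.0   # minutes per individual image

beam_fwhm = {"606 MHz": 6.0, "239.9 MHz": 15.0}   # synthesized beams [arcsec]
smear = proper_motion * snapshot_length           # arcsec moved per snapshot

for band, fwhm in beam_fwhm.items():
    print(f"{band}: smear = {smear:.2f} arcsec, smear/beam = {smear / fwhm:.2f}")
```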
This method was followed for the 606.0, 606.1 MHz and 239.9 MHz observations. The synthesized beams were about 6 arcsec at 606.0 and 606.1 MHz and about 15 arcsec at 239.9 MHz. The angular velocity of Venus in RA and Dec was about 2.65 arcsec/minute, and individual images were made every one minute. After primary beam correction, all the one-minute maps were aligned using the position of Venus available from the NASA JPL Horizons ephemeris (<http://ssd.jpl.nasa.gov/horizons.cgi>) and co-added. Finally, the co-added map of Venus was deconvolved using a point-spread-function corresponding to the unflagged data for the entire duration of the observation.

The analysis strategy used in AIPS is common with that followed in CASA up to self-calibration and the subsequent removal of the contribution of background sidereal sources using UVSUB. The motion of Venus across the beam is accounted for by using a different strategy. A strong artificial point source (10 Jy) was added to the last spectral channel of the dataset. The position of this artificial source was shifted every minute in a direction equal and opposite to that of the movement of Venus in the sky plane. Then, using a script, phase-only self-calibration was carried out only on the last spectral channel of this dataset using this artificial point source and a solution interval of one minute. The antenna solutions determined arise entirely from the motion of the artificial source and were applied to all the frequency channels of the UV dataset. This changed the visibility phases of the UVSUB-ed data precisely by the amount needed to compensate for the motion of Venus. The last spectral channel, where the artificial source was introduced, was not used for the imaging of Venus.

Both of these analyses at 606.1 MHz on 26 March, 2004, using independent software suites and differing procedures, yielded flux densities which differ by ≤5%, with similar images.

§.§ Flux calibration uncertainties

Given the challenging nature of these observations, particular attention was paid to estimating the uncertainties in the flux calibration. In general, the errors in the flux density of Venus as measured by an interferometer comprising physically large antennas like the GMRT arise from the following main reasons: (i) instrumental `gain' variations due to the change in pointing directions between the primary and secondary calibrator, and also between the secondary calibrator and Venus; (ii) gain variations over time; and (iii) uncertainty in the absolute flux density scales used to estimate the flux density of the primary calibrator. Each of these concerns is discussed briefly below.

To quantify the net observed variation of the antenna gains during these observations, we measured the gains of the antennas from uncalibrated data towards the secondary calibrator observed on 19 March, 2004 at 332 MHz. Over the 5 hours of observations of the secondary calibrator, a gain variation of ∼10% was found. Typically for large dishes, the gain changes with elevation angle, and these gain variations are correlated across different antennas. For the purpose of estimating uncertainties in the flux density measurement, we assume these variations to be 100% correlated. Given that the flux density is directly proportional to the square of the gain, the resulting uncertainty in the flux density estimates cannot be larger than ∼20%.
However, observing the primary and secondary calibrators at similar elevations and frequent observations of the secondary calibrator reduce this uncertainty substantially, as discussed later in this section. We make the conservative assumption of the gain change being linear with elevation angle. During the observations, the elevation of the secondary calibrator changed from ∼85° to ∼23° (a change of ∼62°). The primary calibrator was observed at the start of the observation at an elevation angle of 67°. Attributing the entire observed gain variation of ∼10% over the elevation range covered to the elevation angle dependence of the GMRT dishes, the difference between the elevation angles of the two calibrators implies a gain change of ∼2.5%. As the angular distance between Venus and the secondary calibrator was much smaller (∼8°), a similar argument leads to an expected elevation-angle-dependent gain variation between these two sources of ∼1%.

In addition, the instrumental gains might also drift over time. The gains vary slowly and smoothly in time, and we assume this variation to be proportional to the difference in time over the time scales of interest here. In order to track these gain variations, the observations of the primary and secondary calibrators were done within 30 minutes of each other, and the secondary calibrator was observed every 30 minutes. Assuming all of the 10% gain variation to come from such gain drifts in time, the estimated change in gain due to the time difference while observing the primary and secondary calibrator, and the secondary calibrator and Venus, is <1%. The absolute flux density scale is now believed to be accurate to better than 3% <cit.>. Since all the above-mentioned errors add in quadrature, we finally get an error in the measured flux density of Venus of ∼7%. We note that in practice the elevation-dependent gain variation is shallower than the linear dependence assumed here. It varies much more slowly near the zenith, where the absolute gain from the primary was used to calibrate the flux density of the secondary calibrator. In addition, the gain variations in time of each antenna are independent and hence expected to contribute randomly to the uncertainty in the measured flux density. Both of the above-mentioned factors will reduce the uncertainty in the flux density estimate when compared to the estimate presented.

§.§ Uncertainties in the Galactic background temperature

The all-sky map at 408 MHz made by <cit.> was used to estimate the Galactic background temperature, T_gal, towards the direction of Venus. The 408 MHz T_gal towards Venus was 30 K during our observations. Considering the uncertainty in the zero level and the absolute calibration, the uncertainty on the <cit.> measurement is ∼4 K. The spectral index of T_gal near the location of Venus is measured to be -2.6±0.15 <cit.>. This leads to a T_gal of 10±1.5, 52±7 and 122±22 K at 606, 332.9 and 239.9 MHz, respectively. The position of Venus itself in the sky changed by ∼5-10' during the observations, and T_gal can vary as a function of the Galactic latitude and longitude (l, b). The <cit.> map has a resolution of 0.85° and any variation of T_gal at smaller angular scales is averaged out in this map. However, T_gal variations at angular scales of 1-30' are easily picked up by GMRT 330 MHz band observations. To estimate this variation, we subtracted out the background extragalactic sources at high resolution and then made a low-resolution map of the region at 332 MHz.
The resultant map had a resolution of 159″ × 135″. The rms of the map was ∼3 mJy/beam, which corresponds to a fluctuation of < 2 K in T_gal at 330 MHz on ∼3-30' scales. No structures at scales > 5' were seen with a significance > 2σ. Therefore, the uncertainty due to angular variations in T_gal scaled to the above frequencies is low in comparison to the base-level uncertainty in the <cit.> map.

§.§ Contamination from background sources

Most background sources tend to have spectral indices which make them brighter at lower frequencies, where we find Venus to be weaker. So we have carefully examined the effects of the removal of these sources. We note a few things to build confidence that our results are not significantly affected by errors due to background subtraction:
* We find that the background sources are all unresolved sources, which greatly simplifies the deconvolution problem.
* The rms in the UVSUB maps, including regions from where sources have been subtracted, is similar to the rms in the final map of Venus. This implies that any residual flux left behind after cleaning is small enough not to give rise to any discernible artifacts, even at the lowest frequency.
* There are few background sources close to Venus. Hence, any contamination from them can only be due to the side lobes of the point-spread-function (PSF).
* When using CASA to make the final maps of Venus, we cut out the appropriate parts of the 1-minute maps and align them to ensure that the flux from Venus falls in the same pixels. This has the consequence that any small residual flux from the background sources will get smeared in the Venus maps, further reducing any contamination from them. Similarly, when using AIPS, though the implementation details differ, the methodology followed ensures that the residual flux from the background sources will get smeared over a region spanning the track of Venus in the sky plane.

§ RESULTS

Figure <ref> shows the map of Venus at 606.1 MHz from the observations of March 26, 2004, and Figure <ref> shows the map of Venus at 332.9 MHz from the observations of 19 March 2004. The error on the measured flux density of Venus was obtained by multiplying the measured rms noise in the background image by √(N), where N is the total surface area of Venus measured in units of the synthesized beam. We then multiply by a factor of 1.07 to take care of the random and systematic errors, as discussed earlier.

From the known solid angle subtended by Venus during these observations (Table 2), its brightness temperature, T_b, can be estimated from the observed values of its flux density using the Rayleigh-Jeans law. However, an additive correction needs to be applied to the T_b thus determined. To understand its origin, we note that the uv coverage of any interferometer has a central hole, reflecting the absence of baselines shorter than some minimum length. This leads to the common situation that the peak of the point-spread-function (PSF) is surrounded by a shallow negative bowl, or, equivalently, the interferometer is not sensitive to brightness distributions at large angular scales. A practical consequence of this is that the interferometer resolves out the smooth Galactic background, and when the PSF is convolved with an extended source like Venus, the source is observed to be sitting in a bowl of negative flux <cit.>. Also, the Galactic background radiation gets fully absorbed by Venus <cit.>.
Together, they lead to an underestimate of the true value of the T_b of Venus by an amount equal to the temperature of the Galactic background, T_gal, which is resolved out by the interferometer and needs to be added to get the true brightness temperature of Venus, T_b. We used the T_gal values as discussed in Sec. 3.3.

The T_b values for the three GMRT frequencies are provided in Table 3. Column 1 lists the frequency of observation, column 2 the date of observation, and columns 3 and 4 are the size and the position angle of the synthesized beam, respectively. Column 5 gives the measured rms in the map of Venus, and columns 6 and 7 are the measured flux density and the estimated rms error of the flux density in the map, respectively. Column 8 gives the T_b computed from the measured flux density and the known size of the source; the magnitude of the correction for T_gal is given in column 9. Column 10 gives the final computed values of the brightness temperature of Venus, T_b. In column 11, the numbers in bold give the average value of T_b for a given frequency. As discussed in Sec. 3.2, a ∼7% error is assumed to account for random and other systematic errors. The lowest frequency data (239.9 MHz) were analyzed using both CASA and AIPS, which gave similar rms in the image plane. In both cases, Venus could not be detected in the image. A 3σ value is used to place an upper limit on the brightness temperature of Venus at this frequency and is indicated by ↓ in Table 3.

Figure <ref> and Table 4 compile all available measurements of the brightness temperature of Venus in the wavelength range from 0.013 m (22.46 GHz) to 1.25 m (239.3 MHz), including the ones obtained by us. Figure <ref> also includes the model by <cit.>. Our observations clearly indicate that T_b decreases with increasing wavelength beyond ∼0.5 m, in contrast to the model, which remains practically flat beyond ∼0.06 m.

§ DISCUSSION

As can be seen from Table 4 and Figure <ref>, the T_b of Venus obtained from the GMRT observations at longer wavelengths, and the values reported by earlier investigators <cit.>, are appreciably lower than those observed by <cit.> at cm wavelengths (at frequencies > 1.385 GHz). The model T_b values seen in Figure <ref> agree very well with the values observed by <cit.> at shorter wavelengths (< 6 cm), increasing with wavelength in a log-linear manner with a slope of ∼40 K/cm and peaking around 6 cm. The observations beyond 11 cm wavelength again show decreasing T_b values in a log-linear form. But the model by <cit.> predicts the values of T_b at lower frequencies (or longer wavelengths) to be the same as that at ∼6 cm.

Very low atmospheric opacities, nearly 0.006 ± 0.005 at 608 MHz and about 0.02 at 1.4 GHz, were reported by <cit.> and <cit.>, respectively. As the GMRT observations are at and around these frequencies, it can be safely assumed that the atmosphere is almost transparent at the GMRT frequencies. Another possible mechanism responsible for the reduction in T_b at radio wavelengths was the presence of a wavelength-selective absorbing ionosphere, as suggested by <cit.>.
However, this suggestion was ruled out by <cit.> based on the Mariner V electron density profile measurements <cit.>, which showed a low peak electron density of 5.2 × 10^5 cm^-3 at an altitude of 135-140 km, which was not enough to act as an absorber. <cit.> and <cit.>, in their independent investigations, reported the insignificance of the Venusian ionosphere at lower frequencies, ∼608 MHz and 430 MHz (70 cm), so that the Venusian atmosphere, including the ionosphere, can be neglected in determining the T_b at low frequencies.

The other possible reasons for the reduction in T_b with wavelength observed in the radio observations at decimeter wavelengths could be variations in the dielectric constant or in temperature. The radar signal can be significantly affected by reflection and absorption, depending on the dielectric properties of the surface medium, whereas the scattering is controlled by the surface roughness. It must be noted that the temperature plays only a minor role in the variation of the radar signal. Based on radar observations at 50 MHz and 38 MHz <cit.>, <cit.> have ruled out a drastic variation in the values of the dielectric constant of the Venusian regolith at least down to depths of several tens of meters. The dielectric constant measurements of typical planetary rocks, including basalts, at 450 MHz and 35 GHz by <cit.> revealed no significant variation in the dielectric properties with frequency. They also ascertained the absence of absorption lines which could alter the dielectric values between these two frequencies.

<cit.>, using their two-layer subsurface model, tried to explain the reduced radar reflectivity at centimeter wavelengths and the reduced brightness temperature at decimeter wavelengths. The best fit in explaining the reduced reflectivity at cm wavelengths and T_b at decimeter wavelengths was obtained when a two-layer model consisting of a layer with ε = 1.5 overlaying another layer with ε = 8.31 was chosen. However, a better fit to the observations for λ > 15 cm was obtained when a possible decrease of the radiating temperature with increasing subsurface depth was assumed. This is expected, as at longer wavelengths the emission is dominated by the deeper subsurface layers owing to deeper penetration at these frequencies. They did not probe further for a satisfactory explanation of the reduction in planetary regolith temperature with depth.

When the emissivity of an object is close to unity at a particular wavelength, its T_b approaches its physical temperature. In the case of Venus, the optical depth of its dense atmosphere at decimeter and meter wavelengths is much lower than unity, and the observed emission is expected to be generated from its surface and subsurface down to a certain depth. The reductions in T_b due to the atmosphere, the ionosphere and the variation of the dielectric values with decreasing frequency are not expected to be significant. A radiative transfer model is an effective tool for computing the thermal emission at microwave and radio wavelengths by accounting for the detailed variation of temperature and dielectric properties with depth in the terrain, as well as with altitude in the atmosphere of Venus. Further studies are needed to explain the lower values of T_b at frequencies < 1 GHz (meter wavelengths), where the emission arises predominantly from a region further down the surface owing to deeper penetration.

§ CONCLUSION

The first interferometric imaging observations of Venus at frequencies below 620 MHz are presented here. These observations of thermal emission from Venus were conducted using the GMRT.
The analyses of these data revealed that the brightness temperature of Venus decreases with increasing wavelength: 526 ± 22 K, 409 ± 33 K, and < 426 K at 606, 332.9, and 239.9 MHz, respectively. These values are consistent with the values of about 498 K and 523 K measured at 608 and 430 MHz, respectively, by previous workers during the 1970s, but are much lower than those measured at higher frequencies, e.g., 679.9 ± 13.6 K at 4.86 GHz using the VLA. The microwave observations (cm wavelengths) of the T_b of Venus have been explained earlier by considering emission from its atmosphere and surface. The observed variation of the T_b at low microwave frequencies (< 1 GHz) can only be explained with further radiative transfer studies, as in this frequency regime the emission is dominated by the surface/subsurface of the planetary regolith.

§ ACKNOWLEDGEMENT

The authors thank Dr. Dharam Vir Lal, NCRA-TIFR, Dr. K. Krishnamoorthy, former Director, SPL, VSSC, and Dr. Nizy Mathew, SPL, VSSC, for many valuable discussions. We thank the staff of the GMRT who made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. Finally, the authors thank the anonymous referees for their constructive comments and valuable suggestions. Mr. Nithin Mohan is supported by an ISRO Research Fellowship.

§ REFERENCES

[Baars et al.(1977)Baars, Genzel, Pauliny-Toth, and Witzel]Baars1977 Baars, J. W. M., Genzel, R., Pauliny-Toth, I. I. K., Witzel, A., 1977. The Absolute Spectrum of Cas A. An Accurate Flux Density Scale and a Set of Secondary Calibrators. Astron. Astrophys. 61 (99-106).[Basilevsky et al.(1986)Basilevsky, Pronin, Ronca, Kryuchkov, Sukhanov, and Markov]Basilevsky1986 Basilevsky, A. T., Pronin, A. A., Ronca, B., Kryuchkov, V. P., Sukhanov, A. L., Markov, S., 1986. Styles of Tectonic Deformations on Venus: Analysis of Venera 15 and 16 Data. J. Geophys. Res. 91 (399-411).[Butler et al.(2001)Butler, Steffes, Suleiman, Koldoner, and Jenkins]Butler2001 Butler, B. J., Steffes, P. G., Suleiman, S. H., Koldoner, M. A., Jenkins, J. M., 2001. Accurate and Consistent Microwave Observations and their Implications. Icarus 154 (226-238).[Campbell(1994)]Campbell1994 Campbell, B. A., 1994. Merging Emissivity and SAR Data for Analysis of Venus Dielectric Properties. Icarus 112 (187-203).[Campbell et al.(1989)Campbell, Head, Hine, Harmon, Senske, and Fisher]Campbell1989 Campbell, D. N., Head, J. W., Hine, A. A., Harmon, J. K., Senske, D. A., Fisher, P. C., 1989. Styles of Volcanism on Venus: New Arecibo High Resolution Radar Data. Science 246 (373-377).[Campbell and Ulrichs(1969)]Campbell1969 Campbell, M. J., Ulrichs, J., 1969. Electrical Properties of Rocks and Their Significance for Lunar Radar Observations. J. Geophys. Res. 74 (5867-5881).[Carpenter(1964)]Carpenter1964 Carpenter, R. L., 1964. Study of Venus Surface by C.W. Radar. Astron. J. 69 (1-11).[Chengalur(2013)]Chengalur2013 Chengalur, J. N., 2013. NCRA Technical Report, NCRA/COM/OD. Tech. rep., National Centre for Radio Astrophysics, Pune 411007, India.[Condon et al.(1973)Condon, Jauncey, and Yerbury]Condon1973 Condon, J. J., Jauncey, D. L., Yerbury, M. J., 1973. The Brightness Temperature of Venus at 70 Centimeters. Astrophys. J. 183 (1075-1080).[Fjeldbo et al.(1971)Fjeldbo, Kliore, and Eshleman]Fjeldbo1971 Fjeldbo, G., Kliore, A. J., Eshleman, V. R., 1971. The Neutral Atmosphere of Venus as Studied With the Mariner V Radio Occultation Experiment. Astron. J.
73 (123-140).[Florenskii et al.(1982)Florenskii, Bazilevsky, Kruchkyov, Kuzmin, Nikolaeva, Pronin, Selivanov, Naraeva, and Tyuflin]Florenskii1982 Florenskii, K. P., Bazilevsky, A. T., Kruchkyov, V. P., Kuzmin, O. V., Nikolaeva, Pronin, A A Chernaya, I. M., Selivanov, A. S., Naraeva, M. K., Tyuflin, Y. S., 1982. Analysis of the Panoramas of the Venera 13 and Venera 14 Landing Sites. Sov. Astron. Lett. 8 (233-234).[Ford and Pettengill(1983)]Ford1983 Ford, P. G., Pettengill, G. H., 1983. Venus: Global Surface Radio Emissivity. Science 220 (1379-1381).[Goldstein and Carpenter(1963)]Goldstein1963 Goldstein, R. M., Carpenter, R. L., 1963. Rotation of Venus: Period Estimated from Radar Measurements. Science 139 (910-911).[Haslam et al.(1983)Haslam, Salter, Stoffel, and Wilson]Haslam1982 Haslam, C. G. T., Salter, C. J., Stoffel, H., Wilson, W. E., 1983. A 408 MHz All-Sky Continuum Survey II - The Atlas of Contour Maps. Astron. Astrophys. Suppl. Series 47 (1-142).[Herman et al.(1971)Herman, Hartle, and Bauer]Herman1971 Herman, J., Hartle, R., Bauer, S., 1971. The Dayside Ionosphere of Venus. The Planet. Space Sci. 19.[James and Ingalls(1967)]James1964 James, J., Ingalls, R., 1967. Radar Observation of Venus at 38 Mc/sec. The Astron. Journ. 72.[James et al.(1967)James, Ingalls, and Rainville]James1967 James, J., Ingalls, R., Rainville, L., 1967. Radar Echos from Venus at 38 Mc/sec. The Astron. Journ. 72.[Klemperer and Bowles(1964)]Klemperer1964 Klemperer, W.K.and Ochs, G., Bowles, K., 1964. Radar Echos from Venus at 50 Mc/sec. The Astron. Journ. 69.[Kuzmin(1964)]Kuzmin1964 Kuzmin, A., 1964. Radio Physical Investigations of Venus. In: Physics, All - Union Institute of Scientific and Technical Information. Academy of Science USSR, Moscow.[Kuzmin(1967)]Kuzmin1967 Kuzmin, A., 1967. Concerning a Model of Venus with Cold Absorbing Atmosphere. Izv. Vyssh. Ueheb. Zaved. Radiofiz. 7, 1021–1031.[Kuzmin(1983)]Kuzmin1983 Kuzmin, A. D., 1983. Radio Astronomical Studies of Venus. In: Venus. University of Arizona Press, Tucson, Arizona, pp. 37–44.[Markiewicz et al.(2007)Markiewicz, Titov, Limaye, Keller, Ignatiev, Jaumann, Thomas, Michalik, Moissl, and Russo]Markiewicz2007 Markiewicz, W. J., Titov, D. V., Limaye, S. S., Keller, H. U., Ignatiev, N., Jaumann, R., Thomas, N., Michalik, H., Moissl, R., Russo, P., 2007. Morphology and Dynamics of the Upper Cloud Layer of Venus. Nature 450 (633-636).[Marov(1978)]Marov1978 Marov, M. Y., 1978. Results of Venus Missions. Annu. Rev. Astron. Astrophys. 16 (141-169).[Muhleman et al.(1973)Muhleman, Berge, and Orton]Muhleman1973 Muhleman, D. O., Berge, G. L., Orton, G. S., 1973. The Brightness Temperature of Venus and the Absolute Flux-Density Scale at 608 MHz. Astrophys. J. 183 (1081-1085).[Muhleman et al.(1979)Muhleman, Orton, and Berge]Muhleman1979 Muhleman, D. O., Orton, G. S., Berge, G. L., 1979. A Model of the Venus Atmosphere from Radio, Radar, and Occultation Observations. Astrophys. J. 234 (733-745).[Perley and Butler(2013)]Perley2013 Perley, R. A., Butler, B. J., 2013. An Accurate Flux Density Scale From 1 to 50 GHz. Astrophys. J. Supp. Series, 204:19(20 pp).[Pettengill et al.(1980)Pettengill, Eliason, Ford, Loriot, Masursky, and McGill]Pettengill1980 Pettengill, G. H., Eliason, E., Ford, P. G., Loriot, G. B., Masursky, H., McGill, G. E., 1980. Pioneer Venus Radar Results: Altimetry and Surface Properties. J. Geophys. Res. 85 (8261-8270).[Pettengill et al.(1988)Pettengill, Ford, and Chapman]Pettengill1988 Pettengill, G. H., Ford, P. G., Chapman, B. D., 1988. 
Venus: Surface Electromagnetic Properties. J. Geophys. Res. 93 (14,881-14,892).[Pettengill et al.(1991)Pettengill, Ford, Johnson, Raney, and Soderblom]Pettengill1991 Pettengill, G. H., Ford, P. G., Johnson, W. T. K., Raney, R. K., Soderblom, L. A., 1991. Magellan: Radar Performance and Data Products. Science 252 (260-265).[Pettengill et al.(1992)Pettengill, Ford, and Wilt]Pettengill1992 Pettengill, G. H., Ford, P. G., Wilt, R. J., 1992. Venus Rurface Radiothermal Emission as Observed by Magellan. J. Geophys. Res. 97 (13,091-13,102).[Phillips and Malin(1983)]Phillips1983 Phillips, R. J., Malin, M. C., 1983. The interior of venus and tectonic implications. In: Venus. University of Arizona Press, Tucson, Arizona, pp. 159–214.[Prasad and Chengalur(2012)]Prasad2012 Prasad, J., Chengalur, J. N., 2012. FLAGCAL: a Flagging and Calibration Package for Radio Iterferometric Data. Exp. Astron. 33 (157-171).[Reich and Reich(1988)]Reich1988 Reich, P., Reich, W., 1988. Spectral index variations of the galactic radio continuum emission - evidence for a galactic wind. Astron. Astrophys. 196, 211–226.[Scaife and Heald(2012)]Scaife2012 Scaife, A. M. M., Heald, G. H., 2012. A Broadband Flux Scale for Low Frequency Radio Telescopes. Mon. Not. R. Astron. Soc. 423 (30-34).[Seiff et al.(1980)Seiff, Kirk, Young, Blanchard, Findlay, Kelley, and Soreruer]Seiff1980 Seiff, A., Kirk, D. B., Young, R. E., Blanchard, R. C., Findlay, J. T., Kelley, G. M., Soreruer, S. C., 1980. Measurements of the Thermal Structure and Thermal Contrasts in the Atmosphere of Venus, and Related Dynamical Observations: Results from the four Pioneer Venus probes. J. Geophys. Res. 85 (7903-7933).[Sinclair et al.(1970)Sinclair, Basart, Buhl, Gale, and Liwshitz]Sinclair1970 Sinclair, A. C. E., Basart, J. P., Buhl, D., Gale, W. A., Liwshitz, M., 1970. Preliminary Results of Interferometric Observations of Venus at 11.1 cm Wavelength. Radio Science 5 (347-354).[Surkov(1983)]Surkov1983 Surkov, Y. A., 1983. Studies of venus rocks by veneras 8,9 and 10. In: Venus. University of Arizona Press, Tucson, Arizona, pp. 154–158.[Swarup et al.(1991)Swarup, Ananthkrishnan, Kapahi, Rao, Subrahmanya, and Kulkarni]Swarup1991 Swarup, G., Ananthkrishnan, S., Kapahi, V. K., Rao, A. P., Subrahmanya, C. R., Kulkarni, V. K., 1991. The Giant Metrewave Radio Telescope. Current Science 60 (90-105).[Taylor et al.(1999)Taylor, Carilli, and Perley]Taylor1999 Taylor, G. B., Carilli, C. L., Perley, R. A., 1999. Synthesis Imaging in Radio Astronomy II. Vol. 180. Astronomical Society of the Pacific Conference Series.[Vinogradov et al.(1976)Vinogradov, Florenskii, Bazilevskii, and Selivanov]Vinogradov1976 Vinogradov, A. P., Florenskii, K. P., Bazilevskii, A. T., Selivanov, A. S., 1976. First Panoramic Pictures of Venus: Preliminary Image Analysis. Sov. Astron. Lett. 2 (67-71).[Warnock and Dickel(1972)]Warnock1972 Warnock, W. W., Dickel, J. R., 1972. Venus: Measurements of Brightness Temperatures in the 7-15-cm Wavelength Range and Theoretical Radio and Radar Spectra for a two-layer Subsurface Model. Icarus 17 (682-691). elsarticle-harv
http://arxiv.org/abs/1709.09390v1
{ "authors": [ "Nithin Mohan", "Subhashis Roy", "Govind Swarup", "Divya Oberoi", "Niruj Mohan Ramanujam", "Suresh Raju C", "Anil Bhardwaj" ], "categories": [ "astro-ph.EP", "astro-ph.IM" ], "primary_category": "astro-ph.EP", "published": "20170927085054", "title": "Radio Observation of Venus at Meter Wavelengths using the GMRT" }
Redshift determination through weighted phase correlation: a linearithmic implementation

L. Delchambre^1 (E-mail: [email protected])
^1 Institut d'Astrophysique et de Géophysique, Université de Liège, Allée du 6 Août 17, B-4000 Sart Tilman (Liège), Belgique

Accepted ???. Received ???; in original form ???

We present a new algorithm having a time complexity of N log N and designed to retrieve the phase at which an input signal and a set of not necessarily orthogonal templates best match in a weighted chi-squared sense. The proposed implementation is based on an orthogonalization algorithm and thus also benefits from a high numerical stability. We successfully apply this method to the redshift determination of quasars from the twelfth Sloan Digital Sky Survey (SDSS) quasar catalog and derive the proper spectral reduction and redshift selection methods. Also provided are the derivations of the redshift uncertainty and of the associated confidence. Results of this application are comparable to the performance of the SDSS pipeline while not having a quadratic time dependency.

Keywords: methods: data analysis – quasars: distances and redshifts.

§ INTRODUCTION

The advent of extremely large spectroscopic surveys like the Sloan Digital Sky Survey (SDSS), which includes more than 2×10^6 high resolution spectra over 5200 deg^2 of the sky <cit.>, or the Gaia space mission, which will provide, by the end of 2018, 150×10^6 low resolution spectra <cit.>, provides us with unique opportunities to have a statistical view of the kinds of objects present in the universe along with some of their fundamental characteristics. These play a key role in answering some of the currently most important astrophysical questions, like the evolution scenarios of galaxies and of the universe, or its accelerated expansion <cit.>. Along with these large surveys comes an impressive continuous flow of data that has to be treated in time by huge dedicated processing centers. One of the most important tasks amongst the spectral reduction processes is the classification of the objects and the determination of their astrophysical parameters (APs). More specifically, in the case of extragalactic objects, this information critically depends on the availability of reliable redshift estimates. Redshift determination, even if apparently straightforward, is in practice a challenging problem for which numerous solutions have been proposed:
* Visual inspection procedures: a skilled observer can efficiently guess the APs of any object and can deal with any unexpected cases like corrupted/missing emission lines, superposition of spectra or non-physical solutions. Obviously, this choice is unfeasible for large surveys, though the analysis of any sufficiently large subset is invaluable as it can serve as input to sophisticated computer algorithms that will try to mimic this human expertise. This is the solution undertaken by <cit.> regarding the redshift of quasars and, accordingly, it will be used throughout this paper as the default quasar spectral library.
* Matching of spectral lines: this method consists in extracting some significant patterns out of the input spectra and then trying to match them to known emission/absorption lines.
This procedure has been used for a long time but has been shown to be restricted to relatively high signal-to-noise ratio spectra <cit.>.
* Computer learning methods: the goal is here to make the algorithm guess the relations that exist between some characteristics of already-reduced objects (e.g. observed wavelengths and fluxes) and the parameters of interest (e.g. redshifts coming from a visual inspection procedure), the aim being then to apply these relations to the case of objects whose parameters are still unknown. Interested readers may find in <cit.> the descriptions of many such algorithms. Note that, depending on its complexity, the guessed relation may be non-physical and hard to interpret, leading to suboptimal or potentially unrealistic predictions. This is the reason why these should preferably be used for the case of highly non-linear problems for which no other –fast– solution exists.
* Phase correlation: the idea is here to find the optimal correlation of a given observation against one or more templates in order to determine its redshift. Based upon the ability of these templates to match the observations, the physical nature of this solution, and the shortcomings of the previously mentioned alternatives, we will consider it to be the most trustworthy automated procedure for redshift determination.

Based on the work of <cit.>, <cit.> first suggested the use of the Fast Fourier Transform (FFT) as an efficient way of finding the redshift of galaxies based on their cross-correlation with a single template. <cit.> later derived the formulation associated with the resulting redshift uncertainties, which was further refined by <cit.>. Finally, <cit.> generalized the cross-correlation technique to the case of templates coming from the principal components analysis (PCA) decomposition of spectral libraries. Although it is currently the most widespread technique for redshift determination, the latter actually suffers from some well-known drawbacks (see section <ref>). The solution to these problems comes from the use of a weighting scheme associated with the observed spectrum, as implemented in <cit.>. Unfortunately, this solution has a quadratic time dependency that makes it fairly time consuming. The method proposed in the present work overcomes this high numerical complexity and was developed in the framework of the Gaia astrophysical parameters inference system <cit.>, and more specifically within the field of the quasar classification module (QSOC), whose goal is to find the APs associated with the quasars that Gaia will detect. In this domain, the time constraints imposed by the Gaia mission restricted us to the use of computer learning methods, but in the end, the advent of this new method will allow us to predict fair and fast redshift estimates for the upcoming Gaia data releases.

Section <ref> explains the conventions used along this paper. Section <ref> makes a brief review of the phase correlation and PCA techniques aimed at better understanding their main limitations. We develop a fast solution to the problem of the weighted phase correlation in Section <ref>. Tests against real cases are then performed within Section <ref>, while extensions of the presented algorithm are discussed in Section <ref>. Finally, we conclude in Section <ref>.

§ NOTATION

This paper uses the following notations: vectors are in bold italic, x⃗, x_i being the element i of the vector x⃗. Matrices are in uppercase boldface or are explicitly stated; i.e.
X from which the ith column will be denoted by Xi and the element at row i, column j will be denoted by X_ij. In the following, we will consider the problem of finding the optimal offset between an observed spectrum composed of N_s samples andtemplates of size N_p by probing various shift estimate, Z. By considering the zero-padding necessary in order for these to be properly used within the Fourier domain, we will have that the template matrices, P and T, will be of size (N×) with N = N_s + N_p. Similarly, we will have that the observation vector, s⃗, will be of size N as well. Note that in order for the redshift to turn into a simple offset, we will have to consider a logarithmic wavelength scale. If not stated otherwise, matrices and vectors having a tilde on top of them (e.g. T) will be specific to a given shift try, Z ∈ 0⋯ N-1. Amongst commonly used operators, a⃗b⃗ denotes the inner-product of a⃗ and b⃗; a⃗b⃗, their outer-product; a⃗, the Euclidian norm of a⃗ and a⃗ its complex conjugate. Finally, x⃗ and x⃗ respectively corresponds to the discrete Fourier transform (hereafter DFT) and inverse DFT of x⃗.§ PHASE CORRELATION USING PCA As already stated, the most commonly used technique for QSO redshift determination consists in finding the best correlation of an observed spectrum against templates coming from the PCA decomposition of a restframe spectral library. More specifically, these are based on spectra sampled on a uniform logarithmic wavelength scale such that the observed wavelength, , can be related to the restframe wavelength, , through the QSO redshift, z, as a simple offset:log = log + log (z+1). In the following, we make a brief review of the two above-mentioned techniques that should provide the reader insights about their way of working and aimed at better understanding their main limitations regarding the redshift estimation of QSOs. §.§ Principal components analysis PCA is a well-known technique designed to extract a set of templates –the principal components– from a typically huge set of data while keeping most of its variance <cit.>. These principal components will then be those that are the best suited in order to highlight the most important patterns out of the input data set. Mathematically, the goal of the PCA is to find a decomposition of an input matrix X, from which we have subtracted the mean observation, intoX = PC,such thatD = PXXP = Pσ^2Pis diagonal and for whichD_i ≥D_j;∀ i < j.P, the matrix of the eigenvectors of σ^2, is then called the matrix of principal components; C is the associated matrix of principal coefficients and D_i's are the eigenvalues of the covariance matrix, σ^2. Note that according to the spectral theorem[Any real symmetric matrix is diagonalized by a matrix of its eigenvectors.], P will be orthonormal such that we haveC = PX.From this orthonormality and from equation <ref>, we will have that the linear combination of the first principal components of P with the associated principal coefficients of C will constitute the best linear combination in order to fit X in a least squares sense. An illustrative example of PCA decomposition is given in figure <ref>. The latter is based on spectra covering the restframe wavelength range 1100–2000Å coming from the SDSS DR12 quasar catalog <cit.>. Notice how the main QSOs emission lines are modelled by the various components as a way to grab the variance coming from the great diversity of shapes encountered within the spectral library. 
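As a concrete illustration of the decomposition defined above, the classical (unweighted) principal components can be obtained from a thin singular value decomposition of the mean-subtracted data matrix. The following minimal sketch (in Python/NumPy; the function and variable names are purely illustrative, with X holding one restframe spectrum per column) is only meant to fix the notation of the equations above and ignores the flux uncertainties that motivate the weighted variant discussed below.

import numpy as np

def pca_templates(X, n_components=10):
    # X : (n_pixels, n_spectra) matrix of restframe spectra, one per column.
    mean_spec = X.mean(axis=1, keepdims=True)
    Xc = X - mean_spec                       # subtract the mean observation
    # Thin SVD: the columns of U are the orthonormal principal components,
    # ordered by decreasing singular value, i.e. by explained variance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = U[:, :n_components]
    C = P.T @ Xc                             # associated principal coefficients
    return mean_spec, P, C

# The library is then approximated, in a least-squares sense, by
# mean_spec + P @ C using the retained components.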
Readers willing more information on the PCA decomposition are invited to read <cit.> for a deep analysis of the technique or <cit.> for an accessible tutorial. The application of this technique to the analysis of QSO spectra was first covered by <cit.>; <cit.> later adapted it to the case of the SDSS DR1 quasars classification and redshift determination while <cit.> did a similar work based upon spectra coming from the Large Zenith Telescope survey whose spectral resolution (λ/Δλ∼ 40) is in the same order of magnitude as the one of the red and blue photometers of Gaia <cit.>. §.§.§ Weighted PCA One of the main limitations of the classical PCA method is that it does not make any distinction between variance coming from noise and variance coming from a genuine signal. Furthermore, in its naive form, it does not know how to deal with missing data. This last point is particularly crucial in the field of high-redshift surveys where the observed wavelength ranges may not overlap from object to object. A straightforward approach so as to avoid these shortcomings stands in the use of a weighting scheme that allows each flux within each spectrum to come along with its own uncertainty while performing the PCA decomposition. Such a fully-weighted PCA (WPCA) method was first described in the astronomical literature by <cit.> and was later refined by <cit.>. In the field of the present study, we will use the implementation described in <cit.>, this choice mainly comes from its high numerical stability. This method is based on the diagonalization of the weighted variance-covariance matrix as defined byσ^2 = (XW) (XW)/WW,whererepresents the element-wise product of two matrices and where X is supposed to have a weighted mean observation of zero. The decomposition of σ^2 into a diagonal matrix of eigenvalues, D, and a matrix of orthonormal principal components, P, being then performed using either a combination of two spectral decomposition methods, namely the power iteration method followed by the Rayleigh quotient iteration one, or by the use of the singular value decomposition (SVD). This technique allows us to retrieve the fairest components (i.e. those for which uncertainties are taken into account) without having to worry about missing data: this case being the limiting case of weights equal to zero. Consequently, this method will be used through the rest of this document as the default process in order to retrieve the principal components. §.§ Phase correlation The goal of the phase correlation algorithm is to find the optimal shift between a set of orthonormal templates –or a sole unit-length template–, P, and a given observation, s⃗, that has been shifted relatively to P. The way to proceed is to compute for each potential shift, Z, the linear least-squares solution of the shifted templates, P_ij≡P_(i+Z)j, against the observation such as to find the offset having the minimal resulting chi-square. More concisely, this is equivalent to find the minimal shift-dependent chi-square as defined byχ^2(Z) = s⃗ - Pa⃗(Z),where a⃗(Z) contains the optimal linear coefficients in order to fit s⃗ based on P. Extending the work of <cit.>, <cit.> noticed that in the case of orthonormal templates, like the PCA principal components, equation <ref> becomesχ^2(Z) = s⃗ - a⃗(Z).Consequently, equation <ref> will be minimal for an associated maximal a⃗(Z). 
Moreover, due to the orthonormality of P, we will have thata⃗(Z) = Ps⃗.More specifically, regarding the ith linear coefficient, a_i(Z), we will have thata_i(Z) = ∑_j P_(j+Z)i s_j = (Pis⃗)_Z.We recognize equation <ref> as being the correlation of the vector Pi with s⃗ that can hence be efficiently computed in the Fourier domain. Interested readers may find in <cit.> exhaustive hints about the practicalities surrounding the Fourier implementation of equation <ref>. Let us just point out that both vectors, Pi and s⃗ have to be extended and zero-padded such as to deal with the periodic nature of the DFT. Note that in the rest of this document, the curve obtained after evaluating a⃗(Z) at each Z will be termed the cross-correlation function (CCF). A sub-sampling precision on the offset can be gained by considering the fit of a continuous function in the vicinity of the maximal peak of the discrete CCF. <cit.> supposed this peak to be Gaussian profiled, but in the aim of having a model-independent estimate of Z, we will follow <cit.> and use a quadratic curve fitting that will allow us to take into account potential asymmetries in the fitted peak.§.§.§ Practicalities Some issues highlighted in <cit.> are the subtraction of the QSO continuum and of the restframe mean spectrum from the observed spectrum. The first issue was here solved by the use of a dedicated method that allows us to fit the QSO continuum in a fast and redshift-independent way. This method will be further described in section <ref>. The second issue is often overcome by omitting the subtraction of the mean spectrum from the input dataset. We have to note that this omission typically degrades the ability of the PCA decomposition to extract the most significant patterns out of this input dataset. Another solution would have been to alter the mean spectrum such as to make it orthonormal to the template components, P, –thanks to the use of a Gram-Schmidt orthogonalization process <cit.> for example– and to further consider it as being an additional template. This solution will be adopted here for the use of the phase correlation algorithm. Finally, the major drawback of the implementation of <cit.> stands in the fact that the observed spectra typically span only a small part of the template spectra such that the CCF will be computed over a substantial number of unknown points. As a consequence, the fit of the input spectra will be disrupted by the `flattening' of the principal components over the unobserved wavelengths. Figure <ref> illustrates the result of the phase correlation algorithm along with the best-fit solution associated with the maximal peak of the CCF. Notice how the solutions are flattened over unobserved wavelengths. More precisely, considering the observation of "SDSS J024008.93-003448.7", the Lyα, Hα and Hβ emission lines are strongly damped despite the fact that the optimal shift was found while for the observation of "SDSS J132218.88+365342.0", this `flattening' has led to an ambiguity in the CCF that leads to an erroneous shift estimate. Additionally, uncertainties about the observed fluxes are often available and will not be used within this implementation.§ WEIGHTED PHASE CORRELATION With the aim of dealing efficiently with the previously mentioned problem of unobserved wavelengths and of neglected uncertainties, we will use a χ^2 formulation similar to equation <ref>, but whose fluxes are weighted according to the observed spectrum wavelengths. 
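Before modifying it, it is worth noting that the classical phase correlation just reviewed reduces to a handful of FFTs. The sketch below (Python/NumPy, with names of our own choosing) assumes the observation and the orthonormal templates have already been zero-padded to the common length N of the notation section; it computes the CCF and the quadratic sub-sample refinement of its maximal peak, and is a minimal illustration rather than a production implementation.

import numpy as np

def classical_ccf(s, P):
    # s : zero-padded observation of length N (continuum and mean removed),
    # P : (N, M) zero-padded orthonormal templates, one per column.
    S_conj = np.conj(np.fft.fft(s))
    ccf = np.zeros(len(s))
    for i in range(P.shape[1]):
        a_i = np.fft.ifft(np.fft.fft(P[:, i]) * S_conj).real  # a_i(Z) for every shift Z
        ccf += a_i ** 2                                        # the quantity to be maximized
    return ccf

def refine_peak(ccf, Z):
    # Quadratic fit through the maximal peak and its two neighbours.
    ym, y0, yp = ccf[Z - 1], ccf[Z], ccf[Z + 1]
    return Z + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

# For a sampling step `step` in log10(wavelength), the refined shift converts
# to a redshift through z + 1 = 10**(step * Z_refined).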
Also, we will drop the orthonormality constraint on the fitted templates since, in anyway, the previously mentioned weighting will break it down. We will then have the following objective formula:χ^2(Z) = Ws⃗-WTa⃗(Z)^2 = y⃗-Xa⃗(Z)^2,where W is the diagonal matrix of weights associated with s⃗ and T_ij≡T_(i+Z)j is the shifted matrix of –not necessarily orthonormal– template observations. The fastest solution in order to minimize equation <ref> for a given Z stands in the use of a Cholesky decomposition of the design matrix, XX, followed by a forward-backward substitution associated with the image vector Xy⃗ <cit.>. We have to note that this approach is known to suffer from numerical instabilities <cit.> and is solely provided here as a comparison point regarding its computational performances. Practically, slower but more stable methods based on the orthogonalization of X should be preferred. In a computational point of view, the evaluation of equation <ref> for each Z will require N^2 flops[Floating operations][The interested reader may find in <cit.> informations and references about the various algorithmic complexities used along this document.], the latter being mainly dedicated to the building of the design matrices. This relatively high complexity constitutes the main limitation of this implementation and makes it unaffordable for the tight processing of a large survey like Gaia. Nonetheless, it has proven to provide fair redshift estimates and is currently being effectively used in the SDSS-III spectral classification redshift measurement pipeline with a singular value decomposition (SVD) of X advantageously replacing the Cholesky decomposition of the design matrix <cit.>. §.§ Orthogonal decomposition approach The previous section points out the risks encountered while using a naive approach for solving the normal equations associated with equation <ref>. In this optics, let us explore the effect of the orthogonalization of X on the latter equation. For this purpose, let us detail the QR decomposition of X = QR[Note that we dropped the upper tilde for clarity purpose], that is such thatQX = Q_-1⋯Q_1X = Q_-1⋯Q_iX_i = R ,where R is an upper triangular matrix of size (N ×) and where each Q_i is an Householder reflection matrix designed to annihilate the elements below the ith row of the ith column of X_i <cit.>. More precisely, given X_i^', the not-already upper-triangular part of X_i, we will haveQ_i = ( [ 0; 0 - 2 v⃗_⃗i⃗v⃗_⃗i⃗ / v⃗_⃗i⃗^2 ]) =( [ 0; 0 Q_i^' ])with v⃗_⃗i⃗ = x⃗_⃗i⃗±x⃗_⃗i⃗e⃗_⃗1⃗;x⃗_⃗i⃗ being the first column of X_i^' and e⃗_⃗1⃗ being the first row of the identity matrix. For numerical stability reasons, the choice between subtraction and addition in equation <ref> should be matched to the sign of the first element of x⃗_⃗i⃗ <cit.>. By using such a decomposition, we will have that equation <ref> becomesχ^2(Z) = y⃗-QRa⃗(Z)^2 = y⃗-Qb⃗(Z)^2,with the last N- elements of b⃗(Z) being zeros. The point is now to recognize equation <ref> as being the weighted counterpart of equation <ref> such that the firstelements of b⃗(Z) will be equal to the firstelements of Qy⃗, whose computation can be efficiently performed by successive multiplication of each of the Q_iwith the associated y⃗_⃗i⃗≡Q_i-1⋯Q_1y⃗ = (b_1(Z) ⋯ b_i-1(Z) y⃗_⃗i⃗^⃗'⃗), rather than by explicitly computing the general Q matrix. 
This efficiency mainly comes from the fact that:* We do not need to explicitly compute any Q_i^', since we will have that the jth column of the product Q_i^'X_i^' will be given by(Q_i^'X_i^')_j^col = (X_i^')_j^col - 2 v⃗_⃗i⃗(X_i^')_j^colv⃗_⃗i⃗v⃗_⃗i⃗v⃗_⃗i⃗, and similarly,Q_i^'y⃗_⃗i⃗^⃗'⃗ = y⃗_⃗i⃗^⃗'⃗ - 2 v⃗_⃗i⃗y⃗_⃗i⃗^⃗'⃗v⃗_⃗i⃗v⃗_⃗i⃗v⃗_⃗i⃗.That is: the computation of Q_i^'y⃗_⃗i⃗^⃗'⃗ and of any column of the products Q_i^'X_i^' is now reduced to a single inner product (the product v⃗_⃗i⃗v⃗_⃗i⃗ being common to all multiplications, it can be pre-computed) and to a single vector subtraction.* We do not need to compute any R_ij. Differently stated, we do not need to compute the first row nor the first column of any Q_i^'X_i^'. This implementation, termed `factorized QR algorithm', has a total complexity which can compete with the Cholesky solution of the normal equations while gaining in numerical stability. But practically it is of low interest for us since it remains a quadratic problem that is consequently out of the time processing required by the Gaia tight data reduction. Let us note that the equation <ref> still provides us with a weighted formulation of the CCF, that is b⃗(Z)^2, such that we can already investigate the effects of the weighting on the best fit solutions at its maximal peak and on the CCF itself. As illustrated in figure <ref>, the fitted spectra do no longer exhibit border flattening and thanks to this, the maximal peaks are now clearly identified. More particularly, regarding the observation of "SDSS J132218.88+365342.0", the optimal peak of the CCF turns out to be unambiguously identified thanks to the use of this weighted formulation of the phase correlation.§.§.§ Factorized QR algorithm with lookup tables The quadratic nature of the factorized QR algorithm comes from the large amount of inner products involved in the computation of the firstelements of each b⃗(Z). More specifically, by developing each inner product coming from equations <ref> and <ref> in the case of the initial reduction X_2≡Q_1X and associated image production y⃗_⃗2⃗≡Q_1y⃗, we getv⃗_⃗1⃗v⃗_⃗1⃗= 2 α(α + X_11),v⃗_⃗1⃗y⃗=α y_1 + X1y⃗, v⃗_⃗1⃗Xj=αX_1j + X1Xjwith α = X_11(X1X1)^1/2. At this point, it should be noted that XiXj = w⃗^2 (TiTj) = ∑_k=1^N w_k^2 (TiTj)_k+Zand thatXiy⃗ = (w⃗^2 s⃗) Ti = ∑_k=1^N w_k^2 s_k T_(k+Z)i,with w⃗≡W. We can readily see that equations <ref> and <ref> can be efficiently computed in the Fourier domain. In order to take benefits from it, let us define the lookup table of the inner products of X with itself asL_ij = XiXj = TiTjW^2_Z,and the one containing the inner products of X with y⃗ asl⃗_i = Xiy⃗ = TiW^2s⃗_Z.Note that in the latter equations, TiTj and Ti are template-specific and can hence be computed in advance. Explicitly stated, these lookup tables allow us to have for any shift estimates, Z, an instantaneous evaluation of all the inner products associated with the initial reduction process. Furthermore, thanks to the Q_1 orthonormality, we will have that the lookup tables associated with X_2 will be also given by L and l⃗. 
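To make the role of these lookup tables concrete, the sketch below (Python/NumPy, with M denoting the number of templates) builds L and l⃗ for every shift with FFTs and then, for simplicity, solves the small M × M normal equations directly at each Z instead of running the recursion derived next; both routes yield the same weighted CCF, and the cost remains linearithmic in N for fixed M.

import numpy as np

def weighted_ccf(s, w, T):
    # s : zero-padded observation (length N); w : its weights, zero where unobserved;
    # T : (N, M) zero-padded templates, not necessarily orthogonal.
    N, M = T.shape
    w2 = w ** 2
    W2_conj = np.conj(np.fft.fft(w2))
    W2s_conj = np.conj(np.fft.fft(w2 * s))
    # Lookup tables: for every shift Z, L[Z] holds the M x M inner products of
    # the shifted design matrix with itself, and l[Z] its products with y.
    L = np.empty((N, M, M))
    l = np.empty((N, M))
    for i in range(M):
        l[:, i] = np.fft.ifft(np.fft.fft(T[:, i]) * W2s_conj).real
        for j in range(i, M):
            Lij = np.fft.ifft(np.fft.fft(T[:, i] * T[:, j]) * W2_conj).real
            L[:, i, j] = L[:, j, i] = Lij
    ccf = np.empty(N)
    for Z in range(N):
        try:
            a = np.linalg.solve(L[Z], l[Z])  # weighted least-squares coefficients at shift Z
            ccf[Z] = l[Z] @ a                # the weighted CCF, to be maximized
        except np.linalg.LinAlgError:
            ccf[Z] = 0.0                     # degenerate fit, e.g. no overlap
    return ccf

As noted above, solving the normal equations directly is numerically less stable than the orthogonalization route, which is why the factorized QR recursion derived hereafter is preferred in practice.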
Consequently, we can easily compute the inner product of X_2^' with itself based on L as(X_2^')_i^col(X_2^')_j^col = L_ij - R_1iR_1j; ∀ i,jand in the same way, we can compute the inner products of X_2^' with y⃗_⃗2⃗^⃗'⃗ based on l⃗ as(X_2^')_i^coly⃗_⃗2⃗^⃗'⃗ = l⃗_i - R_1i b_1(Z); ∀ i.Equations <ref> and <ref>will allow us to recursively process each subsequent X_i^' and y⃗_⃗i⃗^⃗'⃗ in a way similar to the one used to produce X_2 and y⃗_⃗2⃗ and will be referred to as the lookup tables update equations. Finally, let us note that once these lookup tables have been computed, only the firstrows of X and y⃗ are now needed for the algorithm to run. If we suppose nowthat ≪ N, then we will have that most of the computation time will be spent in the building of the initial values of the lookup tables (equations <ref> and <ref>). More precisely these will crudely correspond to the DFT of w⃗^⃗2⃗ and of w⃗^⃗2⃗s⃗; their vector multiplicationwith each combination of the templates plus the inverse transforms leading to these initial values. Despite the fact that the previous derivation is a bit coarse, it still assesses the linearithmic (i.e. N log N) behaviour of the presented algorithm. Regarding now the specific problem of the QSO redshift determination within the Gaia mission (expected to be N=10^4, =10), tests performed on a 2,4Ghz CPU provide execution times of 180.35 ± 6.76 seconds for the normal equations solution compared to 0.173 ± 0.002 second for our implementation; these become respectively 4.95 ± 0.19 hours compared to 1.81 ± 0.02 second for the case of N=10^5 and =10. Finally, let us note that the proposed algorithm can be easily implemented in parallel given the fact that the estimation of each χ^2(Z) can be separately performed. As a consequence, the execution time can be scaled by an arbitrary factor that is inversely proportional to the number of running processes.§ APPLICATION Unsurprisingly, the performance of the presented method was assessed on type I/II QSOs coming from the SDSS DR12 quasar catalog <cit.>. The choice of this catalogue comes from the fact that all spectra contained therein were visually inspected and can hence be considered as being extremely reliable regarding their redshift. Additionally, it is also interesting to note that the latter contains a non-negligible number of 297 301 QSOs that is adequate in order to derive strong statistics. Due to time constraints and to the need for the WPCA algorithm to have a well covered input space of parameters (i.e. numerous observations), we used a two-fold cross-validation in order to test our method. That is: we split our input catalog into two randomly drawn parts out of which we extract the principal components; then we compute the redshift of spectra belonging to each part based on both the weighted and classical phase correlation algorithms whose inputs are the principal components built on the alternative part. Following is a detailed description of the processes leading to this cross-validation. §.§ Procedure description Raw spectra are generally not readily exploitable. Rather, we have to reduce them such as to get rid of most of the contaminating signals that encompass, for the specific case of this study: deviant points (amongst which night sky emission lines and spectrograph edge effects) and QSOs continuum. Note that since the SDSS DR12 spectra are already sampled on a uniform logarithmic scale, nothing has to be done in order for equation <ref> to be fulfilled but usually spectra have to be resampled. 
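For spectra that are not natively log-binned, the resampling step mentioned above can be sketched as follows; the function name is ours, and the linear interpolation of the flux and of the inverse variance is a simplification, a production pipeline rebinning and propagating the errors more carefully.

import numpy as np

def to_log_grid(wave, flux, ivar, step=1e-4):
    # Resample a spectrum onto a uniform grid in log10(wavelength);
    # step = 1e-4 matches the sampling quoted in the text.
    logw = np.log10(wave)
    grid = np.arange(logw[0], logw[-1], step)
    flux_r = np.interp(grid, logw, flux)
    ivar_r = np.interp(grid, logw, ivar)     # crude: errors are not formally propagated
    return 10.0 ** grid, flux_r, ivar_r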
The estimation of the QSO continuum turns out to be a challenging problem upon which the quality of the principal components and of the redshift prediction strongly depend <cit.>. Four broad kinds of approaches have been investigated so far in order to estimate this continuum: (1) the fit of a `damped' power-law function to the observed spectra <cit.>; (2) the use of PCA such as to predict the shape of the Lyα forest continuum based on the red part of the spectrum <cit.>; (3) the modelling of the dependency between the intrinsic QSO continuum and the absorption that it encounters as a mean to extrapolate it <cit.> and (4) through the use of techniques related to the multiresolution analysis <cit.>. We choose to use this last alternative based on the fact that we do not require the resulting continuum to have a physical basis (i.e. the continuum subtraction being rather used as a normalization) and on the fact that we would like to have the most empirical estimation of this continuum. Following <cit.>, we found that the signature of the continuum clearly stands within the low frequency components of the pyramidal median transform <cit.> of the input spectrum. In practice, the PMT is computed on a flipped version of the spectrum concatenated with the original version and another flipped version such as to ensure continuity at the border. After taking the inverse transform through a third degree fitting polynomial, we enforce the smoothness of the solution by convolving it with a thousand points-wide Savitzky-Golay filter such as to provide the final continuum. Besides its accuracy, we have to note that the PMT, from its pyramidal nature, has an algorithmic complexity of Nlog N and will consequently not degrade the performances of the global process. After having subtracted the derived continuum, we discard border regions for which either λ < 3800Å or λ > 9250Å; we reject 4Å regions around each significant night sky emission lines and finally we perform a k-sigma clipping (k = 3, σ = 4) on the two first scales of the PMT such as to remove extremely deviant points. Finally, we get an estimate of the signal-to-noise ratio (hereafter SNR) of each continuum-subtracted spectrum through the computation of a `noiseless' spectrum coming from the hypothesis that the noise within these spectra is entirely contained within the five first scales of the biorthogonal spline stationary wavelet transform of each spectrum <cit.>. Practically, a spline of third degree was used for both analysis and synthesis. Figure <ref> illustrates the result of the initial reduction process. Spectra having an estimated SNR greater than 1 are then set on a common logarithmic wavelength scale with a uniform sampling of Δlog_10λ = 10^-4, equal to the original sampling of the spectra. The 116 374 resulting spectra are then divided into two equal parts –called learning sets– each of which is being used to produce the principal components and mean observations associated with each part of the cross-validation process. Resulting from this subdivision, we will have that the input catalogue will be split into two parts –the test sets– each consisting in 133 860 observations. Note that given the fact that the broad absorption line QSOs are discarded, both sets do not sum up to 293 301 QSOs. We then compute the classical and weighted CCF of each spectrum contained within the two test sets based on the mean observation and ten first principal components coming from the alternative learning set. 
Out of these CCF we extract the five most significant peaks –having a separation of at least 15,000km s^-1– and we fit them with a second order polynomial such as to gain a sub-sampling precision on the predicted peak position. Note that we choose to consider multiple solutions based on the fact that the most significant peaks may not always lead to a physical basis. For example, we might have deep absorption lines either coming from the host galaxy of the quasars or from extragalactic objects being located along the line-of-sight during acquisition and leading to `negative' fitted emission lines. These can definitely prevent the highest peak –the one with the associated minimal χ^2– from being the effective one. In order to discriminate between these five selected solutions, we define two score measures: χ_r^2(z), defined as the ratio of the value of the peak associated with the redshift z to the value of the maximal peak and Z_score(z), defined asZ_score(z) = ∏1/2[1 + e_λ/σ(e_λ)√(2)],where e_λ are the mean values of the emission lines covered by the observed spectrum if we consider it to be at redshift z and where σ(e_λ) are the associated uncertainties. Note that both e_λ and σ(e_λ) are computed over a range of eleven points surrounding each emission line. We can recognize each term of equation <ref> as being the cumulative distribution function of a normal distribution of mean zero and variance σ^2(e_λ) evaluated at e_λ. The use of equation <ref> allows us to have a numerical estimate of the ability for a given redshift, z, to grab the following chosen QSO emission lines: Ovi1033; Lyα λ 1215; Nv1240; Siiv1396; Civ1549; Ciii]1908; Mgii2797; Hγ λ 4340; Hβ λ 4861 and Hα λ 6562Å. Typical values of Z_score(z) range from ∼ 1 for a solution with a clear match of all positive emission lines while it voluntarily penalizes solutions with a match of at least one `negative' emission line by giving them a Z_score(z) ∼ 0, values in between often occur in low SNR spectra or spectra with strongly damped emission lines. Finally, an error on each estimated peak position is derived and will be further described in section <ref>. For each spectra, the selection of the optimal redshift out of the five potential ones, z_1, ⋯, z_5 for which 1 = χ_r^2(z_1) ≥χ_r^2(z_2) ≥⋯≥χ_r^2(z_5) and coming either from the classical CCF or from the weighted CCF is done in the following way: if Z_score(z_1) > 0.8, then select z_1; otherwise choose the shift having the highest χ^2_r and for which both Z_score(z_i) > 1-10^-6 and χ^2_r(z_i) > 0.8; otherwise choose the shift having the highest Z_score and for which χ_r^2(z_i) > 0.9. Note that the previous selection and constants therein are purely empirical and based on an iterative visual inspection of misclassified spectrum. This final step provides us with what we thought to be the most probable redshift estimate for a given input spectrum along with the associated uncertainty and a warning flag notifying a failure and/or imprecision in the CCF computation; in the peak identification or in the redshift selection (e.g. all fluxes to zero, low Z_score or less precise uncertainties). §.§ Results Figure <ref> illustrates the result of the cross-validation process for both the classical phase correlation and weighted phase correlation algorithms and further illustrates a comparison with the redshift predicted by the SDSS-III pipeline. 
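Before turning to these results, the Z_score statistic and the empirical selection rule described above can be summarized compactly as below; the thresholds are those quoted in the text, while the final fall-back to the maximal peak when no candidate passes any cut is an assumption of this sketch (such cases would carry the warning flag mentioned above).

import numpy as np
from scipy.special import erf

def z_score(e, sigma_e):
    # Product of normal cumulative distribution functions evaluated at the
    # mean fluxes, e, of the emission lines covered at the trial redshift.
    e, sigma_e = np.asarray(e), np.asarray(sigma_e)
    return np.prod(0.5 * (1.0 + erf(e / (sigma_e * np.sqrt(2.0)))))

def select_redshift(z, chi2r, zscore):
    # Candidates ordered such that chi2r[0] = 1 (the maximal CCF peak).
    z, chi2r, zscore = map(np.asarray, (z, chi2r, zscore))
    if zscore[0] > 0.8:
        return z[0]
    ok = (zscore > 1.0 - 1e-6) & (chi2r > 0.8)
    if ok.any():
        return z[np.argmax(np.where(ok, chi2r, -np.inf))]
    ok = chi2r > 0.9
    if ok.any():
        return z[np.argmax(np.where(ok, zscore, -np.inf))]
    return z[0]  # assumed fall-back: keep the maximal peak, flagged as insecure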
We can readily see that the performances of the classical phase correlation algorithm are strongly degraded compared to the weighted version with a correlation factor of 0.557 compared to 0.984 and a ratio of observations having|Δ z| < 0.05 of 0.838 compared to 0.992, respectively. These differences mainly come from the previously mentioned problem of border flattening that translates into frequent emission line mismatches and into errors coming from the difficulty that the algorithm has in order to extrapolate the regions surrounding the Lyα and Hα emission lines. This difficulty arise because of the prominence of these lines as well as because of the high correlation they have with the other emission lines <cit.>. As a consequence, the algorithm is often constrained to consider the Lyα or Hα lines to be embedded within the observed spectra which graphically results in a gap around 0.4 < z̅ < 2.12. Note that the systematic errors occurring at z̅∼ 0.4 and at z̅∼ 2.12 can be attributed to the fitting of these specific emissions lines to the residual spectrograph edge effects –particularly significant within the low SNR spectra– and that these errors account for ∼ 2% of the observations having |Δ z| ≥ 0.05. Investigation of the most significant errors coming from the emission lines mismatch, illustrated in figure <ref>, shows that the latter can be modelled as a linear relation between the predicted redshift and the effective redshift. Indeed, if we consider an emission line observed at wavelength λ and falsely considered to stand at a restframe wavelength λ_f instead of λ_t, we will have that the predicted redshift, z_f, can be related to the effective redshift, z_t, throughz_f+1/z_t+1 = λ_t/λ_f. These mismatches do not constitute, in themselves, real cases of degeneracy regarding our χ_r^2 and Z_score selection criteria. Indeed, each of the configuration mentioned within figure <ref> has unconfused emission lines that make the resulting redshift unambiguous. Rather, the observed degeneracies also come from the low SNR of the observed spectra. Figure <ref> illustrates the distribution of the SNR of both the observation having Δ z≥ 0.05 and those having Δ z < 0.05 for our three cases of study. We notice that for all three cases, the SNR of the maximal peak of the fair redshifts estimate is approximately twice the one of the erroneous ones, this is especially significant in the cases of the weighted phase correlation and of the SDSS-III pipeline where the errors come nearly exclusively from this line mismatch problem. Furthermore, a visual inspection of these degenerated spectra shows both potential redshifts to be undistinguishable from one another in most of the cases and thus constituting in fine effective cases of degeneracy. Consequently, some of the low SNR spectra will unavoidably have ambiguous redshift estimates that will stand in well specific regions defined by equation <ref>. Nevertheless, these will be easily identified as having a low Z_score and/or a low redshift confidence (see section <ref>). Finally, we may notice that our implementation seems to have a better tolerance to noise compared to the SDSS-III implementation (i.e. see within figure <ref> the lower peak of the erroneous curve as well as its globally smaller width). 
This higher tolerance does not come from differences in the algorithms since both implementations are based on the sole solution to equation <ref>, but either: (1) from the higher number of PCA components we used (11 compared to 4); (2) from the fact that the components we used were more suitable in order to represent the observed spectra or (3) from the fact that the redshifts coming from the visual inspection procedure are also subject to errors, especially since we are concerned with low SNR spectra where degeneracy may occur. In order to reject the fact that this higher tolerance comes from the larger number of components we used, we repeat the described cross-validation procedure by using only three components (plus mean observation) instead of ten. The results of this configuration lead us to the same conclusions with a correlation factor of 0.976 (compared to 0.967) and a ratio of observations having|Δ z| < 0.05 of 0.989 (compared to 0.988). Although the differences in the erroneous SNR curves are less perceptible, it still remains globally sharper. Furthermore, we have to mention that within the SDSS-III pipeline, no more than four principal components were used because any larger number of components would make the error higher. In regard to this point and to the fact that we succeed in getting good predictions using 11 components, we might suppose, in anyway, that the components we used were of higher quality in order to model this specific dataset. Nevertheless, let us mention that we cannot totally reject the hypothesis according to which this better tolerance comes from a fortuitous statistical fluctuation itself produced by the degeneracy occurring during the visual inspection of some low SNR spectra. § DISCUSSION§.§ Redshift confidence & uncertainty estimation In order for the derived redshift to be effectively used within subsequent scientific applications it is mandatory for it to come along with an estimation on its uncertainty and to have a confidence level that the chosen redshift is indeed in the vicinity of the real redshift. To make it clear, we may have a redshift estimation with a reasonable uncertainty (e.g. z = 2.31±10^-3) but being degenerated such that we are not sure that it stands in the neighbourhood of the effective redshift. Fortunately, the computed CCF offers us simple and efficient ways to evaluate both the redshift uncertainty as well as the confidence we can set on it. Generally speaking, we know that for a sufficiently large sample of observed points, the χ^2 map defined in the parameters { a_1,⋯, a_n } can be approximated in the neighbourhood of the global minimum, { a_1^⋆,⋯, a_n^⋆}, asχ^2(a_i) ≈(a_i-a_i^⋆)^2/σ^2(a_i^⋆) + C,where C is a function depending on a_j, j ≠ i and thus considered here as a constant. In other words, the approximation of the χ^2 map near a global minimum can be evaluated for each of the parameters independently from the others as a simple quadratic curve whose curvature depends on the uncertainty of the varying parameter. As a consequence, if χ^2(a_i) increases by one compared to the optimal χ^2, then we will have that σ^2(a_i) = (a_i-a_i^⋆)^2. The reader may find in <cit.> more informations about the variation of the χ^2 near the optimum and more particularly about the derivation of equation <ref>. 
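Applied to the CCF, whose maximum corresponds to the minimum of χ²(Z), this quadratic approximation gives a simple recipe for the shift uncertainty: fit a parabola through the maximal peak and its two neighbours and find where the CCF has dropped by one below its refined maximum. A minimal sketch (the function name is ours):

import numpy as np

def shift_uncertainty(ccf, Z):
    # chi2(Z) differs from -ccf(Z) by a constant, so an increase of chi2 by one
    # is a unit drop of the CCF; for a parabola of (negative) curvature c this
    # occurs at a distance 1/sqrt(|c|) from the refined maximum.
    ym, y0, yp = ccf[Z - 1], ccf[Z], ccf[Z + 1]
    curvature = 0.5 * (ym + yp - 2.0 * y0)   # quadratic coefficient, negative at a peak
    return np.sqrt(-1.0 / curvature)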
Regarding the uncertainty on the predicted redshifts, we used a second order polynomial such as to fit the optimal peak of the CCF, Z, and derived its associated uncertainty[Beware that the shift value corresponding to the uncertainty will have an associated decrease by one compared to the maximal peak of the CCF.], σ(Z). We then use the propagation of the uncertainty such as to get the error on the estimated redshiftσ(z) = (z+1) σ(Z) s log b,where b is the base of the logarithmic scale we used (in our case b = 10) and s is the sampling of the spectra on this logarithmic scale (in our case s = Δlog_10λ = 10^-4). Secondly, we have to evaluate the confidence we can have on the predicted redshift. First estimators of this confidence are the already mentioned Z_score(z) and χ_r^2(z) (see section <ref>). Indeed, a secure estimate will typically have Z_score(z) ≈χ_r^2(z) ≈ 1. Unfortunately, these estimators do not take into account the potential ambiguity that might be present during the selection of the CCF peak associated with the predicted redshift. In order to tackle this lack, we defined the chi-squared difference associated with a redshift estimate, z_i, asΔχ_r^2(z_i) = minχ_r^2(z_i) - χ_r^2(z_j),∀ j ≠ i,where each z_j corresponds to a redshift associated with a peak selected within the CCF. We may notice that any redshift being unsure due to the ambiguity in the CCF peak selection will now be marked as having Δχ_r^2(z) ≈ 0. Also note that compared to <cit.>, we use the distance between all χ_r^2 and not only those for which χ_r^2(z_i) ≥χ_r^2(z_j) because we adopt the hypothesis that any solution having χ_r^2(z_i) < χ_r^2(z_j) might have been falsely rejected while being a valid solution. §.§ Dealing with zero weights It commonly happens for the weight matrix, W from equation <ref>, to have a lot of successive weights set to zero, this is especially true if we consider that the observation can be padded such as either to match the size of the templates or to deal with the periodic nature of the phase correlation. Additionally, nothing prevents us from shifting the observation in a circular way such as to have this set of successive zeroes being in the first rows of the weights matrix and to later get rid of this artificial shift by sliding back the CCF. This is particularly interesting if we have a number of zeroed weights equal to –or greater than– the number of components we used, . In this case, thefirst rows of the matrix X and of the vector y⃗ used within the factorized QR algorithm with lookup tables will all be equal to zero and as a consequence, none of the X_i^''s, as well as none of the y⃗_⃗i⃗^⃗'⃗'s have to be computed. Differently stated, in addition to the building of the lookup tables and their updates, we solely have to compute R_ij and b_i(Z) through equations <ref> and <ref>:R_ij = - L_ij/√(L_ii)andb_i(Z) = - l⃗_i√(L_ii).This allows us to greatly simplify our algorithm and leads to execution times of 0.082 ± 0.001s for the case N = 10^4, = 10 and of 0.912± 0.026s for the case N = 10^5, = 10. A rough comparison shows these execution times to be twice faster than those presented at the end of section <ref>. §.§ Templates weighting Although, the weighting of the observed spectra is the most important regarding the redshift determination of QSOs, one might also want to have a template weighting such as, for example, to highlight some patterns or to reflect the fact that these templates often come along with their own uncertainties. 
To this aim, we plug into equation <ref>, the diagonal matrix of weights associated with the template observations, W_T, that isχ^2(Z) = W_TWs⃗-WW_TTa⃗(Z)^2 = y⃗-Xa⃗(Z)^2.After orthogonalization of the matrix X = QR, we get toχ^2(Z) = y⃗^2- b⃗(Z)^2,with the firstelements of b⃗(Z) being equal to the firstelements of Qy⃗. We can already note that since y⃗ is now shift-dependent, the knowledge of b⃗(Z) alone is no more sufficient in order to find the optimal shift such that χ^2(Z) must be explicitly evaluated through equation <ref>. Computation of the firstelements of b⃗(Z) is straightforwardly done using the procedure described in section <ref> with y⃗ replacing y⃗ and both lookup tables given byL_ij = W_T^2(TiTj)W^2_Zandl⃗_i = W_T^2TiW^2s⃗_Z.Finally, we will have that each y⃗^2 will be given byy⃗^2 = W_T^2W^2s⃗^2_Z. § CONCLUSIONSWe have presented a new method for computing the weighted phase correlation of an observed input signal against not necessarily orthogonal templates. This method is found to be the preferred alternative to the classical phase correlation in the case of input observations having a limited coverage and/or having very distinct weights. The implementation of this method is based on a weighted chi-squared problem solved through a highly modified version of the QR orthogonalization algorithm designed to take benefit of the performances of the fast Fourier transform such as to compute the numerous inner products present within the original QR algorithm. This implementation provides us with a numerically stable algorithm having a linearithmic time complexity that makes it affordable for the tight spectral processing of QSOs within the Gaia mission. We have presented a complete application of this method to the case of the redshift determination of type I/II QSOs coming from the SDSS DR12 quasar catalog through a two-fold cross-validation procedure. This application is based on templates coming from the weighted principal components analysis decomposition of independent spectra coming from the same catalog. We described in detail the reduction of those input spectra as well as the method we used in order to select the most probable redshift amongst the set of possible ones. Results of this cross-validation show our method to be the one of predilection for QSO redshift determination and is comparable to the SDSS-III pipeline output while not being a N^2 process. Finally, we showed how we can get both the uncertainty on the predicted redshift as well as the confidence we can set on it. We furtherdiscuss two extensions of our method, namely: the time saving we can get if having a sufficient number of successive zeroed weights and the weighting of the template observations. A free implementation of the described algorithm has been released under the GNU Public License[http://www.gnu.org/licenses/gpl-3.0.txthttp://www.gnu.org/licenses/gpl-3.0.txt] and can be freely downloaded at https://github.com/ldelchambre/wcorrQRLhttps://github.com/ldelchambre/wcorrQRL.§ ACKNOWLEDGEMENTSThe author acknowledges support from the ESA PRODEX Programme `Gaia-DPAC QSOs' and from the Belgian Federal Science Policy Office.Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation and the U.S. Department of Energy Office of Science. 
The SDSS-III web site is http://www.sdss3.org/.SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington and Yale University.[Alam et al.2015]alam2015 Alam, S., Albareti, F.D., Allende Prieto, C., et al. 2015, , 219, 12 [Aubourg et al.2014]aubourg2014 Aubourg, É., Bailey, S., Bautista, J. E., et al. 2014, arXiv:1411.1074 [Bailer-Jones et al.2013]cbj2013 Bailer-Jones C.A.L. et al., 2013, , 559, A74 [Bailey2012]bailey2012 Bailey S., 2012, PASP, 124, 1015[Bernardi et al.2003]bernardi2003 Bernardi, M., Sheth, R. K., SubbaRao, M., et al. 2003, , 125, 32 [Bevington & Robinson2003]bevington2003 Bevington, P.R. Robinson, D.K., 2003, Data reduction and error analysis for the physical sciences, 3rd edn., McGraw-Hill [Bishop2006]bishop2006 Bishop, C., 2006, Pattern Recognition and Machine Learning, 1st edn. Springer-Verlag, New York [Brault & White1971]brault1971 Brault J.W., White O.R., 1971, , 13, 169 [Bolton et al.2012]bolton2012 Bolton, A. S., Schlegel, D. J., Aubourg, É., et al. 2012, , 144, 144 [Burrus1997]burrus1997 Burrus C.S., Gopinath R.A., Guo H., 1997, Introduction to Wavelets and Wavelet Transforms: A Primer, 1st edn. Prentice Hall, London [Cabanac et al.2002]cabanac2002 Cabanac, R. A., de Lapparent, V., & Hickson, P. 2002, , 389, 1090 [Cohen & Daubechies1992]cohen1992 Cohen A., Daubechies I., Feauveau J.C. 1992, Communications on Pure and Applied Mathematics, 45, 485 [Dall'Aglio et al.2008]dallaglio2008 Dall'Aglio, A., Wisotzki, L., & Worseck, G. 2008, , 491, 465[de Bruijne2012]debruijne2012 de Bruijne, J. H. J. 2012, , 341, 31 [Delchambre2015]delchambre2015 Delchambre L., 2015, , 446, 3545 [Ferland1996]ferland1996 Ferland, G. J. 1996, University of Kentucky Internal Report, 565 pages, [Francis et al.1992]francis1992 Francis, P. J., Hewett, P. C., Foltz, C. B., & Chaffee, F. H. 1992, , 398, 476 [Glazebrook1997]glazebrook1997 Glazebrook K., Offer A.R., Deeley K., 1997, , 492, 98 [Golub & Van Loan1996]golub1996 Golub G.H., Van Loan C.F., 1996, Matrix Computations, 3rd edn. The Johns Hopkins Univ. Press, London [Heavens1993]heavens1993 Heavens, A. F. 1993, , 263, 735[Jolliffe2002]jolliffe2002 Jolliffe I.T., 2002, Principal Component Analysis, 2nd edn. Springer, New York [Lee et al.2012]lee2012 Lee, K.-G., Suzuki, N., & Spergel, D. N. 2012, , 143, 51[Machado2013]machado2013 Machado, D. P., Leonard, A., Starck, J.-L., Abdalla, F. B., & Jouvel, S. 2013, , 560, A83 [Pâris et al.2016]paris2015 Pâris I. et al., 2016, , in prep. [Pâris et al.2011]paris2011 Pâris, I., Petitjean, P., Rollinde, E., et al. 2011, , 530, A50 [Pearson1901]pearson1901 Pearson K., 1901, Phil. Mag., 2, 559 [Perryman et al.2001]perryman2001 Perryman, M. A. C., de Boer, K. 
S., Gilmore, G., et al. 2001, , 369, 339 [Petitjean et al.1993]petitjean1993 Petitjean, P., Webb, J. K., Rauch, M., Carswell, R. F., & Lanzetta, K. 1993, , 262, 499 [Press et al.2002]press2002 Press W.H., Tuekolsky S.A., Vetterling W.T., Flannery B.P., 2002, Numerical recipes in C++: The Art of Scientific Computing, 2nd edn. Cambridge Univ. Press, New York [Schlens2014]schlens2014 Schlens J., 2014, preprint (arXiv1404.1100) [Simkin1974]simkin1974 Simkin S.M., 1974, , 31, 129 [Starck1996]starck1996 Starck J.L., Murtagh F., Pirenne B., Albrecht M., 1996, , 108, 446 [Suzuki et al.2005]suzuki2005 Suzuki, N., Tytler, D., Kirkman, D., O'Meara, J. M., & Lubin, D. 2005, , 618, 592 [Tonry & Davis1979]tonry1979 Tonry J., Davis M., 1979, , 84, 1511 [Tsalmantza & Hogg2012]tsalmantza2012 Tsalmantza P., Hogg D.W., 2012, ApJ, 753, 122[Yip et al.2004]yip2004 Yip, C. W., Connolly, A. J., Vanden Berk, D. E., et al. 2004, , 128, 2603
http://arxiv.org/abs/1709.09375v1
{ "authors": [ "L. Delchambre" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170927075436", "title": "Redshift determination through weighted phase correlation: a linearithmic implementation" }
^1Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX, United Kingdom ^2Instituto de Física, Universidade Federal da Bahia, Salvador, BA, 40210-340, BrasilWe examine the growth of structure in three different cosmological models with interacting dark matter and vacuum energy. We consider the case of geodesic dark matter with zero sound speed, where the relativistic growing mode in comoving-synchronous gauge coincides with the Newtonian growing mode at first order in ΛCDM. We study corrections to the linearly growing mode in the presence of interactions and the linear matter growth rate, f_1, contrasting this with the velocity divergence, f_ rsdσ_8, observed through redshift-space distortions. We then derive second-order density perturbations in these interacting models. We identify the reduced bispectrum that corresponds to the non-linear growth of structure and show how the shape of the bispectrum is altered by energy transfer to or from the vacuum. Thus the bispectrum, or higher-order correlators, might in future be used to identify dark matter interactions.Growth of structure in interacting vacuum cosmologies Humberto A. Borges^1,^2, David Wands^1 7th April 2020 =====================================================§ INTRODUCTION The current accelerated expansion of the universe, inferred from observations of type Ia supernovae (SNe Ia) <cit.>, anisotropies in the cosmic microwave background (CMB) and observations of large-scale structures (LSS), among others, is one of the most fascinating topics in modern cosmology, attracting the attention of researchers in both the theoretical and experimental area. The most common explanation is the existence of an energy component that has negative pressure known as “dark energy" <cit.>, which in its simplest form corresponds to a cosmological constant in the Einstein equations of general relativity <cit.>. Observations show that around 95% of the energy in the Universe today is in the form of dark energy and dark matter, which plays a crucial role in the formation of galaxies and clusters of galaxies.Cosmology with a cosmological constant and cold dark matter has become the standard model of the universe, known as ΛCDM. This model has proved to be successful when tested against a range of precise observations <cit.>. However, despite these successes, the problem remains that the vacuum energy density observed today is much lower than the theoretical value predicted by quantum field theories <cit.>. Thus there is a need to find a mechanism to understand the small value of the dark energy density required by observations. If the origin of dark energy is not a cosmological constant, then alternative models <cit.> should be considered to explain the current accelerated expansion of the universe. Often this is done by introducing additional fields whose dynamics modify the dark energy equation of state and determine the present density <cit.>.An alternative approach is to instead consider an interacting vacuum energy whose present value is dependent on energy-momentum transfer with existing matter fields[This differs from interacting dark energy models which introduce additional dark energy fields interacting with dark matter <cit.>.]. Since the physics underlying the dark sector is still unknown, it could be that vacuum energy and dark matter interact directly and exchange energy. 
Unified dark matter models, such as the generalised Chaplygin gas (gCg) <cit.>, can easily be decomposed into two interacting components <cit.>, one representing dark matter density, ρ_ dm, and the other the vacuum energy, ρ_V. The energy exchange implied by this decomposition can be written for the gCg model as Q=3α Hρ_ dmρ_V/ρ <cit.>, where α is a dimensionless parameter constant. For α<0 there is more matter today compared with ΛCDM if we start with the same amount of primordial matter at high redshift. One particular case is given by α=-0.5, which corresponds to a dark matter created at a constant rate due to a decaying vacuum energy <cit.>. This particular model has been shown competitive with the ΛCDM model when tested against observational data including LSS, SNe Ia and integrated Sachs-Wolfe (ISW) constraints <cit.>. On the other hand a full analysis of CMB+ISW constraints on the decomposed gCg model gives the bounds -0.15<α<0.26 <cit.>, while a joint analysis of LSS, SNe Ia and the position of the first peak of CMB has lead to -0.39<α<-0.04 (2σ) <cit.>. The results of analysis using Planck data for the CMB anisotropy spectrum is consistent with |α| ≤ 0.05 <cit.>.An interaction of the form Q=-q_VHρ_V <cit.> has also been studied in light of observations, with q taking different values in distinct redshift bins. The analyses suggested that a non-zero interaction may be favoured by cosmological data, including redshift-space distortions, when compared with ΛCDM model. Another interaction, proposed in <cit.>,is Q=ϵ Hρ_ dm with a small constant ϵ. Such a scenario is obtained in Ref. <cit.> from thermodynamics arguments. The best fit found is ϵ=-0.11 through a joint analysis involving measurements of type Ia supernovae, gas mass fraction and CMB. Ref. <cit.> found ϵ∼-10^-2,and some authors have argued <cit.> that there is evidence for ϵ<0 at more than 4σ including LSS data. An approach to construct model-independent constraints on the dark matter-vacuum interaction is presented in <cit.>.At the same time, it is widely believed that another period of accelerated expansion called inflation occurred at very high energies in the very early universe and primordial perturbations were created from quantum fluctuations; this creates the seed for large-scale structures that grow by gravitational instability to result in the present distribution of matter on cosmological scales. A non-Gaussian distribution of primordial perturbations, that appears due to nonlinear evolution in second-order perturbation theory, has been proposed as a means to discriminate among different inflationary scenarios. Gravitational instability is a non-linear process which itself leads to non-Gaussianity in the matter distribution at late times, even if we start with a completely Gaussian perturbation. Thus it is important to understand the effects of nonlinear evolution, including possible interactions between vacuum energy and dark matter, in order to be able to distinguish possible non-linear effects of vacuum interactions from those of primordial non-Gaussianity.In this work we study both linear and non-linear evolution of matter perturbations <cit.> in the presence of an interacting vacuum energy. We employ the fluid-flow approach adopted in <cit.>, including for the first time the effects of energy transfer in gravitational clustering at second order, as well as making a careful study of peculiar velocities and hence redshift-space distortions in the presence of interactions. 
At second order we identify the effects of primordial non-Gaussianity and non-linear growth of structure, leading to distinct shapes for the reduced bispectrum at second-order. § FLUID-FLOW EQUATIONS The Einstein field equations are given byR_μν-1/2g_μνR=T_μν,where R_μν represents the Ricci tensor, R the Ricci scalar, and g_μν represents the space-time metric.We will consider pressureless dark matter, p_ dm=0, with energy density ρ_ dm and vacuum energy, ρ_V, with equation of state p_V=-ρ_V, such that the energy-momentum tensor of matter plus vacuum isT_μν =T_( dm)μν + T_(V)μν = ρ_ dmu_μ u_ν - ρ_V g_μν .where u^μ is the matter four-velocity.The energy-momentum conservation equations for each component are given by∇^μT_(V)μν = Q_ν , ∇^μT_( dm)μν = -Q_ν ,where the energy-momentum transfer from the dark matter to the vacuum is Q_μ=-∇_μρ_V=∇_μ p_V.We will assume[Another possibility, for example, would be that the energy flow follows the gradient of matter density, which implies that the local vacuum energy is a function of the local matter density. In that case the sound speed corresponds to the adiabatic sound speed, as in unified dark matter models with barotropic equation of state, and the energy transfer is already strongly constrained by CMB observations <cit.>.] that the energy transfer follows the 4-velocity of the dark matter, Q^μ=Qu^μ <cit.>. This has two important consequences. Firstly, the vacuum is homogeneous on hypersurfaces orthogonal to the matter 4-velocity. This means that there are no pressure gradients in a frame comoving with matter. Thus matter follows geodesics and the matter sound speed is zero. Secondly, the matter 4-velocity is a potential flow and thus irrotational. We expect this to be a good description of matter at early times and on large scales where the initial density field is set by primordial scalar perturbations. This is sufficient for our perturbative treatment of the initial growth of structure, but at late times we would expect the nonlinear growth of structures to develop vorticity and indeed to develop rotationally supported dark matter halos. Thus we expect the geodesic approximation to break down below some length scale. Otherwise truly irrotational dark matter would have distinctive observational consequences <cit.>. Since there are no pressure gradients orthogonal to the matter 4-velocity, we can write the equations of motion in a comoving-synchronous gauge, just as in ΛCDM, where we write the line element asds^2=a^2(η)[-dη^2+γ_ijdx^idx^j].We will consider inhomogeneous perturbations about a spatially flat Friedmann-Robertson-Walker background for which γ̅_ij=δ_ij and we use an overbar to denote the spatially homogeneous background solution. The background expansion is given by the Friedmann constraint equation3ℋ^2=a^2(ρ̅_ dm+ρ̅_V) ,where the conformal Hubble rate is ℋ≡ a'/a and a prime denotes a derivative with respect to conformal time.Following <cit.>, we define the deformation tensor by the conformal time derivative of the spatial metricϑ^i_j=1/2γ^ikγ'_jk ,and the perturbed scalar expansion byϑ = ϑ^i_i. 
The i-j component of the Einstein equations (<ref>) gives the evolution equation <cit.>ϑ^i_j'+2ℋϑ^i_j+ϑϑ^i_j+1/4(ϑ^l_mϑ^m_l-ϑ^2)δ^i_j+ℛ^i_j-1/4ℛδ^i_j=0,where the Ricci tensor on the spatial hypersurfaces is given by ^(3)R^i_j=ℛ^i_j/a^2 and the Ricci scalar ^(3)R=ℛ/a^2.The 0-0 component of the Einstein equations gives the perturbed energy constraintϑ^2-ϑ^i_jϑ^j_i+4ℋϑ+ℛ=2a^2ρ̅_ dmδ_ dm ,where we define the matter density contrastδ_ dm(η,x⃗)=ρ_ dm(η,x⃗)-ρ̅_ dm(η)/ρ̅_ dm(η) .Using the 0-j component of the Einstein equations we find the momentum constraintϑ^i_j_;i=ϑ_,j,where a semi-colon denotes the covariant derivative with respect to the 3-metric γ_ij.The perturbed Raychaudhuri equation for the expansion is found taking the trace of the evolution equation (<ref>)ϑ'+ℋϑ+ϑ^i_jϑ^j_i+1/2a^2ρ̅_ dmδ_ dm=0. Finally, projecting the equations (<ref>) and (<ref>) parallel to u_μ for matter without pressure and vacuum, we obtain the energy continuity equations ρ'_V=aQ, ρ'_ dm+(3ℋ+ϑ)ρ_ dm=-aQ.Note that since the vacuum energy is homogeneous on comoving-orthogonal hypersurfaces we have ρ_V=ρ̅_V(η) and thus Q=Q̅(η). This does not imply that the vacuum energy is unperturbed but rather that we have picked a coordinate frame in which constant time hypersurfaces coincide with uniform-vacuum hypersurfaces.In terms of the density contrast (<ref>), the continuity equation (<ref>) becomesδ_ dm'-aQ/ρ_ dmδ_ dm+(1+δ_ dm)ϑ=0. § BACKGROUND SOLUTIONS We briefly review the solutions for the homogeneous background cosmology (<ref>) with different interaction models. The background Raychaudhuri equation isℋ'=1/2(2-3Ω_ dm)ℋ^2,with the dimensionless density parameter defined by Ω_ dm(a)=a^2ρ̅_ dm/3ℋ^2. The time dependence of the matter density parameter is given byΩ_ dm'=[-3(1-Ω_ dm)+g]ℋΩ_ dm,where we defined the dimensionless interaction parameterg≡ -aQ/ℋρ̅_ dm . For Q=0 there is no interaction between matter and the vacuumand the vacuum energy density is a constant in time and space, equivalent to a cosmological constant. The equation (<ref>) (with ϑ=0 in the background) can be integrated to giveρ̅_ dm(a)=ρ_ dm0a^-3,where the subscript 0 refers to the present value, and a_0=1. This is the ΛCDM model. The matter density parameter and the Hubble parameter are, respectively, given byΩ_ dm(a)=Ω_ dm0/Ω_ dm0+(1-Ω_ dm0)a^3, ℋ (a)=a H_0[1-Ω_ dm0+Ω_ dm0/a^3]^1/2,where the density parameters obey the relation Ω_ dm+Ω_V=1. For high-redshift (early times), as a≪ 1, we have a matter-dominated epoch with Ω_ dm≈ 1. In the limit of large times a de Sitter vacuum dominated epoch is obtained.More generally, the cosmological evolution for Ω_ dm and ℋ depends of the form of the interaction parameter. In the following, we consider three different models for the possible forms of Q. §.§ i. Model with Q=3α Hρ̅_ dmρ̅_V/ρ̅ This type of interaction corresponds to the decomposed generalized Chaplygin gas model <cit.> where α is a constant parameter. The dimensionless interaction parameter (<ref>) in this case is g=-3α(1-Ω_ dm).The matter density parameter and the Hubble parameter, given byΩ_ dm(a)=Ω_ dm0/Ω_ dm0+(1-Ω_ dm0)a^3(1+α), ℋ (a)=a H_0[1-Ω_ dm0+Ω_ dm0/a^3(1+α)]^1/2(1+α),are solutions of the equations (<ref>) and (<ref>). The standard matter era is recovered for early times (a≪ 1) with Ω_ dm≈ 1 and g≈ 0. The ΛCDM model corresponds to taking α =0 in the above expressions. In the special case α=-1/2 we have from (<ref>) the Hubble rateH (a)=H_0[1-Ω_ dm0+Ω_ dm0/a^3/2],and thusH'/H = -3/2ℋΩ_ dm .Comparing with Eq. 
(<ref>) we see that ρ̅_V'/ρ̅_V=H'/H and thus the vacuum density decays linearly with the Hubble rate, ρ̅_V=2Γ H, and matter is produced at a constant rate, ρ̇̅̇_ dm+3Hρ̅_ dm=Γρ̅_ dm <cit.>.§.§ ii. Model with Q=qHρ_V In this case the dimensionless interaction parameter (<ref>) is[Note that q here has the opposite sign to q_V in Salvatelli et al <cit.>.]g=-( 1-Ω_ dm/Ω_ dm) q .For constant q the energy continuity equation (<ref>) givesρ̅_V(a)=3H_0^2Ω_V 0a^q.Substituting (<ref>) into the Raychaudhuri equation (<ref>) and integrating, we obtain the solutionℋ(a)=H_0√(3(1-Ω_ dm0)a^3+q+3Ω_ dm0+q/(3+q)a) .The matter density parameter, given byΩ_ dm(a)=3Ω_ dm0+q-q(1-Ω_ dm)a^3+q/3Ω_ dm0+q+3(1-Ω_ dm0)a^3+q,is solution of Eq. (<ref>). The standard matter-dominated era (Einstein-de Sitter cosmology) is recovered for early times (a≪ 1) with Ω_ dm≈ 1 and g≈ 0. Note that the matter density parameter becomes negative for values q>0 at large times (a≫ 1).The ΛCDM model corresponds to the case q=0. §.§ iii. Model with Q=ϵ Hρ̅_ dm In this model the deviation from the standard evolution is given by a small constant ϵ that characterises the strength of interaction. The dimensionless interaction parameter (<ref>) is g=-ϵ,and for constant ϵ the equation (<ref>) (with ϑ=0) can be integrated to giveρ̅_ dm(a)=ρ_ dm0a^-(3+ϵ). Note that the matter energy density never evolves as ρ̅_ dm(a)∝ a^-3 except for the case ϵ=0, and consequently this model never has a conventional matter-dominated era.The amount of the vacuum energy at early times depends on the strength of interaction. Substituting Eq. (<ref>) into (<ref>) gives the evolution for the vacuum energy densityρ̅_V(a)=Λ-ϵ/3+ϵρ̅_ dm(a).Here Λ is a constant, and the vacuum energy approaches a cosmological constant, ρ̅_V→Λ, as a →∞ for ϵ>-3. At early times the vacuum density becomes negative for ϵ>0. The ΛCDM model is recovered with zero coupling, ϵ=0.From the Friedmann equation (<ref>) we obtainℋ (a)=a√(ρ_ dm0/3+ϵa^-(3+ϵ)+Λ/3). The dark matter density parameter is then <cit.>Ω_ dm(a)=(3+ϵ)Ω_ dm0a^-(3+ϵ)/(3+ϵ)+3Ω_ dm0(a^-(3+ϵ)-1), At high-redshift, a≪ 1 for ϵ>-3, the density parameter is given byΩ_ dm≈ 1+ϵ/3. § GROWTH OF STRUCTURE The metric and comoving matter density contrast can be expanded up to second order using only scalar quantities asγ_ij≈[1-2ψ^(1)-2ψ^(2)]δ_ij+∂_i∂_jχ^(1)-1/3∇^2χ^(1)+∂_i∂_jχ^(2)-1/3∇^2χ^(2), δ_ dm≈δ_ dm^(1)+1/2δ_ dm^(2).If we assume that there are no primordial vector and tensor perturbations then the vector and tensor modes can be set to zero at first order. Vector and tensor metric perturbations will then be generated at second and higher order, but they do not affect the matter density at first or second order which is the focus of our work.§.§ First-order solutions The first order expansion of the Ricci tensor of the spatial metric (<ref>) is given byℛ^(1)^i_j= ( ∂^i∂_j+δ^i_j∇^2 ) ℛ_c,whereℛ_c=ψ^(1)+1/6∇^2χ^(1),and thus the 3-Ricci scalar isℛ^(1)=4∇^2ℛ_c. The expressions (<ref>) and (<ref>) for the deformation tensor and scalar expansion are given to first order byϑ^i_j^(1)=-ψ^(1)'δ^i_j+1/2(∂^i∂_j- 1/3δ^i_j ∇^2 )χ'^(1), ϑ^(1)=-3ψ'^(1).The momentum constraint (<ref>) at the first order requiresℛ_c'=0.So ℛ_c is constant in time, to be determined by initial conditions.The continuity equation (<ref>) and Raychaudhuri equation (<ref>) for the density contrast and perturbed expansion are written up to first order as δ_ dm'^(1)+gℋδ_ dm^(1)+ϑ^(1)=0. 
ϑ'^(1)+ℋϑ^(1)+1/2a^2ρ̅_ dmδ_ dm^(1)=0.subject to the first-order energy constraint (<ref>)4ℋϑ^(1) - 2a^2ρ̅_ dmδ_ dm^(1) + ℛ^(1) =0, Differentiating the continuity equation (<ref>) with respect to time and eliminating ϑ^(1) and ϑ'^(1) using the energy constraint (<ref>) and Raychaudhuri equation (<ref>), we obtain the evolution equation for the density contrastδ_ dm”^(1)+(1+g)ℋδ_ dm'^(1)+[(gℋ)'+gℋ^2-1/2a^2ρ̅_ dm]δ_ dm^(1)=0.On the other hand, combining the first-order continuity equation (<ref>) with the constraint (<ref>), we find a first integral2ℋδ_ dm'^(1)+[a^2ρ̅_ dm+2gℋ^2]δ_ dm^(1) =2∇^2ℛ_c.where we used equation (<ref>) for the first-order Ricci scalar, and we know from the momentum constraint (<ref>) that ℛ_c is a constant.The general solution for density contrast is a linear combination of growing and decaying modes. The decaying mode is the homogeneous solution to the first integral (<ref>), i.e., setting the ℛ_c to zero. Neglecting this decaying mode, we are left with the growing mode driven by the non-zero Ricci curvatureδ_ dm^(1)(η,x⃗)=C(x⃗)D_+(η). where we have from (<ref>)C(x⃗) = ( f_1i + 3/2Ω_ dm,i + g_i )^-1∇^2ℛ_c/ℋ_i^2 D_+i ,and we define the linear growth rate asf_1=D_+'/ℋ D_+ . The growing mode is thenD_+(η)= (ℋ_i/ℋ)^2 ( f_1i + 3/2Ω_ dm,i +g_i ) (f_1+3Ω_ dm/2+g)^-1D_+i .Note that in this expression for the growing mode we have left an arbitrary overall normalisation constant, D_+i. If we set initial conditions at high redshift, a_i≪1, during a standard matter-dominated era, where Ω_dm i = 1, f_i=1 and g_i=0, then we have C(x⃗)=2/5∇^2ℛ_c/ℋ_i^2 D_+i ,and the growing mode (<ref>) reduces toD_+(η) = 5/2(ℋ_i/ℋ)^2 (f_1+3Ω_ dm/2+g)^-1 D_+i . From (<ref>) the first-order solution is thenδ_ dm^(1)(η,x⃗)= (f_1+3Ω_ dm/2+g)^-1∇^2ℛ_c/ℋ^2 . Substituting the growing mode solution (<ref>) and (<ref>) in the continuity equation (<ref>) we obtain the expansion scalarϑ^(1) = - (f_1+g) ℋδ_ dm^(1) .The metric perturbation ψ^(1) is given by integrating Eq. (<ref>). Using (<ref>) and (<ref>) we obtainψ^(1)=ℛ_c+1/3∇^2ℛ_c[1/ℋ^2(f_1+3/2Ω_ dm+g)^-1+∫g/ℋ(f_1+3/2Ω_ dm+g)^-1dη]. Equation (<ref>) then givesχ^(1)=-2ℛ_c[1/ℋ^2(f_1+3/2Ω_ dm+g)^-1+∫g/ℋ(f_1+3/2Ω_ dm+g)^-1dη].For completeness we note that the expression for the deformation tensor ϑ^i_j^(1) is then given by (<ref>). The expressions above are valid only if the matter flow follows geodesics, as we have assumed throughout.For a dimensionless parameter interaction g equal to zero the results for the ΛCDM model are recovered <cit.>.Figure <ref> shows the plot of the evolution of first-order growing mode D_+ for theΛCDM and all three interaction models obtained by solving the differential equation (<ref>) with the same initial amplitude D_+i for all of the growing modes at z=1000.When g>0 we have energy flux from vacuum to dark matter, since Q<0, and dark matter is created. In this case the first-order growing mode is suppressed with respect to the ΛCDM model (black curve) for a given value of the present day dark matter, Ω_ dm0. This is because the dark matter density is lower at early times when we fix the dark matter density today.When g<0 we have energy flow from dark matter to the vacuum, since Q>0, and dark matter is annihilated or decays. In this case there is an enhancement in the first-order growing mode for the same value of Ω_ dm0 <cit.>. In figure <ref> we plot the evolution of the growth rate f_1 defined in Eq. 
(<ref>) for model (i) (left panel) and for models (ii) and (iii) (right panel) with different values for the model parameters α, q and ϵ.
§.§ Redshift-space distortions Redshift-space distortions (RSD) arise from peculiar velocities of galaxies, i.e., the perturbed expansion, ϑ, given in (<ref>). This induces an anisotropy in the apparent clustering of galaxies in redshift space, where we use the observed redshift to determine the radial distance. This observed anisotropy thus provides information about the formation of large-scale structure <cit.>. In standard ΛCDM (where the dimensionless interaction parameter g=0) the variance of the expansion is usually characterised from equation (<ref>) by <cit.> ⟨ϑ^2/ H^2 ⟩^1/2 = f_1(z)σ_8(z), where f_1(z) is the linear growth rate and σ_8(z)=⟨δ_m^2 ⟩^1/2 is the rms mass fluctuation in a sphere with comoving radius 8h^-1Mpc, used to describe the amplitude of density perturbations. If we use the growing mode normalised to unity today, D^N_+(z)=δ_ dm(z)/δ_ dm(0), then we can write σ_8(z)=σ_8(0)D^N_+(z) where σ_8(0) gives the present rms matter fluctuations. More generally, for interacting models, the dimensionless interaction parameter g contributes explicitly in equation (<ref>) for redshift space distortions. If we assume that galaxies still trace the motion of the underlying dark matter (i.e., neglecting any velocity bias) then the variance of the expansion (<ref>) is given by ⟨ϑ^2/ H^2 ⟩^1/2 = f_ rsd(z)σ_8(z), where f_ rsd(z)= f_1(z)+g(z). Figure <ref> shows the theoretical predictions for f_ rsdσ_8 as a function of redshift z for the different interacting models, where we fix σ_8(0)=0.83 <cit.>. We see that, in contrast to the linear growth rate, the RSD distortions are enhanced by energy transfer from the vacuum to dark matter. The peculiar velocity field responds to the local gravitational potential and thus to the total comoving density perturbation, not just the density contrast. The second-order differential equation for the density contrast (<ref>) can be written as a first-order differential equation for the redshift-space distortion parameter 2ℋ^-1 f_ rsd' + (2f_ rsd+4-3Ω_ dm-2g)f_ rsd = 3Ω_ dm . In the conventional matter-dominated era at high redshift, with Ω_ dm=1 and the dimensionless interaction parameter g=0, we have a solution corresponding to the standard growing mode[Note we also have a solution f_ rsd=f_1=-3/2 corresponding to the standard decaying mode.] with f_ rsd=f_1=1, and the linear growing mode is proportional to the scale factor, D_+∝ a. This describes the early growing mode at high redshifts as g→0 and Ω_ dm→1 in models (i) and (ii), as well as ΛCDM. More generally, when vacuum energy contributes to the total density (Ω_ dm<1) we can express the first-order equation (<ref>) for the RSD parameter as a function of the density parameter, written in terms of Ω_V=1-Ω_ dm, 2( 3Ω_V - g ) (1-Ω_V) d/dΩ_V f_ rsd + (2f_ rsd+1+3Ω_V-2g)f_ rsd = 3(1-Ω_V) . Note that g is a given function of the density parameter, Ω_V, in each of our interaction models. For Ω_ dm=1 to be a fixed point of Eq. (<ref>) we require g=0 when Ω_V=0. If we then expand the dimensionless interaction parameter (<ref>) as a Taylor series about the standard matter-dominated (Ω_ dm=1, Ω_V=0) solution, g = g_1 Ω_V + …, we obtain an expression for the redshift-distortion parameter (<ref>), f_ rsd = f_ rsd,0+ f_ rsd,1Ω_V+ …. From Eq. (<ref>) we require (1+2f_ rsd,0)f_ rsd,0 =3 and (3-2g_1+2f_ rsd,1)f_ rsd,0 + (1+2f_ rsd,0)f_ rsd,1 + 2(3-g_1)f_ rsd,1 =-3.
For ΛCDM with g=0 we have from (<ref>) (1+2f_ rsd,0)f_ rsd,0 =3 (3+2f_ rsd,1)f_ rsd,0 + (1+2f_ rsd,0)f_ rsd,1 + 6f_ rsd,1 =-3.This gives either f_ rsd,0=-3/2 (decaying mode) or f_ rsd,0=1 (growing mode) and then f_ rsd,1=-6/11, corresponding to <cit.>f_1 = f_ rsd≈Ω_ dm^6/11 . More generally, we can give a similar approximation for the RSD parameter in terms of Ω_ dm when g≠0. In models (i) or (ii) we writef_ rsd≈Ω_ dm^γ . For model (i) we have g=-3αΩ_V and hence g_1=-3α in Eq. (<ref>). Thus we have for the growing mode f_ rsd,0=1 and f_ rsd,1=-γ such thatγ = 6+6α/11+6α , For model (ii) we have g=-qΩ_V(1-Ω_V)^-1 and hence g_1=-q in Eq. (<ref>). Thus we have f_ rsd,0=1 and f_ rsd,1=-γ where in this case γ = 6+2q/11+2q . Note that for a given value of Ω_ dm the RSD index γ, is now enhanced for α>0 in Eq. (<ref>) and q>0 in (<ref>), corresponding to g<0.As shown in figure <ref>, the analytical formula (<ref>) for the RSD parameter can be used as a good approximation for model (i), corresponding to the decomposed generalized Chaplygin gas, just as it is used in ΛCDM. For this class of model the expression (<ref>) with the growth index (<ref>) works very well within an error less than 1.5 percent up to redshift z=0 for |α|<0.5. On the other hand, for model (ii) shown in the right panel of figure <ref>, the expression (<ref>) with the growth index (<ref>) is a good approximation with errors below 3.5% for |q|<0.2.In all the cases shown, the approximations for f_ rsd become extremely accurate when applied for higher redshift where 1-Ω_ dm≪1.Finally, for model (iii) g=-ϵ and thus is not zero at early times so Ω_ dm≠1 at high redshift. Instead from Eq. (<ref>) we have Ω_ dm→1+(ϵ/3). Nonetheless, from Eq. (<ref>), we see that there is still an early time solution for the RSD parameter f_ rsd→ f_ rsd,0=1 as Ω_ dm→1+(ϵ/3)[We also find a decaying mode solution at early times in this model corresponding to f_1=-(3-ϵ)/2 and f_ rsd=-(3+ϵ)/2].This corresponds to an early-time growing mode solution D_+∝ a^1+ϵ with modifield growth rate f_1=1+ϵ. Expanding about this early-time solutionwe find an analogous approximation for the RSD parameter (<ref>) f_ rsd≈( Ω_ dm/1+(ϵ/3))^γ ,where the index γ is given byγ = 6+2ϵ/11+3ϵ .For ϵ=0 we recover the ΛCDM result (<ref>). 
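The quality of these growth-index approximations is straightforward to check numerically. The sketch below is our illustration rather than part of the original analysis: it integrates the first-order equation for f_ rsd in the variable Ω_V for models (i) and (ii) and compares the result with Ω_ dm^γ. The choice of integrator (SciPy's solve_ivp), the parameter values, and the small offset used to start the integration away from the singular point Ω_V=0 are all illustrative assumptions.

```python
# Minimal numerical check of the growth-index approximation f_rsd ≈ Ω_dm^γ.
import numpy as np
from scipy.integrate import solve_ivp

def g_model(omega_v, model, par):
    """Dimensionless interaction g(Ω_V) for models (i) and (ii)."""
    if model == "i":      # g = -3 α Ω_V  (decomposed Chaplygin gas)
        return -3.0 * par * omega_v
    if model == "ii":     # g = -q Ω_V / (1 - Ω_V)
        return -par * omega_v / (1.0 - omega_v)
    raise ValueError(model)

def gamma_index(model, par):
    """Analytic growth index from the expansion about Ω_V = 0."""
    if model == "i":
        return (6.0 + 6.0 * par) / (11.0 + 6.0 * par)
    if model == "ii":
        return (6.0 + 2.0 * par) / (11.0 + 2.0 * par)
    raise ValueError(model)

def f_rsd_numeric(model, par, omega_v_grid):
    """Integrate 2(3Ω_V - g)(1-Ω_V) f' + (2f + 1 + 3Ω_V - 2g) f = 3(1-Ω_V)."""
    def rhs(omega_v, f):
        g = g_model(omega_v, model, par)
        num = 3.0 * (1.0 - omega_v) - (2.0 * f + 1.0 + 3.0 * omega_v - 2.0 * g) * f
        den = 2.0 * (3.0 * omega_v - g) * (1.0 - omega_v)
        return num / den
    # Ω_V = 0 is a singular point of the equation; start slightly away from it
    # using the series solution f ≈ 1 - γ Ω_V.
    eps = 1e-6
    f0 = 1.0 - gamma_index(model, par) * eps
    sol = solve_ivp(rhs, (eps, omega_v_grid[-1]), [f0],
                    t_eval=omega_v_grid, rtol=1e-8, atol=1e-10)
    return sol.y[0]

omega_v = np.linspace(1e-6, 0.7, 200)   # Ω_V today ≈ 0.7 (illustrative)
for model, par in [("i", 0.2), ("i", -0.2), ("ii", 0.1)]:
    f_num = f_rsd_numeric(model, par, omega_v)
    f_fit = (1.0 - omega_v) ** gamma_index(model, par)
    err = np.max(np.abs(f_fit / f_num - 1.0))
    print(f"model {model}, parameter {par:+.2f}: max relative error {100*err:.2f}%")
```

Setting α=0 (or q=0) in the same script recovers the ΛCDM index γ=6/11 as a special case.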
§.§ Second-order perturbations To investigate the emergence of nonlinear structure in the presence of energy transfer we consider the second-order terms in the continuity equation (<ref>) and Raychaudhuri equation (<ref>) for the evolution of the density contrast and perturbed expansion in comoving synchronous coordinatesδ_ dm'^(2)+gℋδ_ dm^(2)+ϑ^(2) = -2δ_ dm^(1)ϑ^(1) , ϑ'^(2)+ℋϑ^(2)+1/2a^2ρ̅_ dmδ_ dm^(2)= -2ϑ^(1)^i_jϑ^(1)^j_i,subject to the constraint (<ref>)4ℋϑ^(2)-2a^2ρ̅_ dmδ_ dm^(2)+ℛ^(2)=2ϑ^(1)^i_jϑ^(1)^j_i-2ϑ^(1)^2.The left-hand-sides of these equations have the same form as the first-order equations (<ref>), (<ref>) and (<ref>), but now with source terms on the right-hand-sides of the equations that are quadratic in the first-order quantities.Differentiating the continuity equation (<ref>) with respect to time and eliminating ϑ'^(2) and ϑ^(2) using the Raychaudhuri equation (<ref>) and constraint (<ref>), we obtain an evolution equation for the second-order density contrastδ_ dm”^(2)+(1+g)ℋδ_ dm'^(2)+[(gℋ)'+gℋ^2-1/2a^2ρ̅_ dm]δ_ dm^(2)=-2ℋδ_ dm^(1)ϑ^(1)-2δ_ dm'^(1)ϑ^(1)-2δ_ dm^(1)ϑ'(1)+2ϑ^(1)^i_jϑ^(1)^j_i.The differential equation (<ref>) for the second-order density contrast has a particular solution, δ_ dm,p^(2), driven by the second-order source terms on the right-hand-side. However the general solution also includes the decaying and growing mode solutions to the homogeneous (source-free) equation, i.e., with the right-hand-side set to zero, with two arbitrary constants of integration. Since the source-free equation is the same as the first-order equation (<ref>), the homogeneous growing and decaying modes have the same time-dependence as the first-order solutions, but with second-order coefficients, to be set by the initial conditions.As we did for the first-order equations, we can combine the constraint (<ref>) and the continuity equation (<ref>) to obtain a first integral4ℋδ_ dm'^(2)+2[a^2ρ_ dm+2gℋ^2]δ_ dm^(2)-ℛ^(2)=2ϑ^(1)^2-2ϑ^(1)^i_jϑ^(1)^j_i-8ℋδ_ dm^(1)ϑ^(1).Here, and in (<ref>), the second-order part of the comoving curvature is given by <cit.>1/2ℛ^(2) = 2∇^2[ψ^(2)+1/6∇^2χ^(2)] + 6∂^iψ^(1)∂_iψ^(1) + 16ψ^(1)∇^2ψ^(1)+4ψ^(1)∂_i∂_jχ^(1)^ij-2∂_i∂_jψ^(1)χ^(1)^ij+ +χ^(1)^ij∇^2χ^(1)_ij-2χ^(1)^jk∂_l∂_kχ^(1)^l_j-∂_lχ^(1)^lk∂_jχ^(1)^j_k+3/4∂_kχ^(1)^lj∂^kχ^(1)_lj-1/2∂_kχ^(1)^lj∂_lχ^(1)^k_j.Unlike the first-order case, the second-order comoving scalar is no longer constant on all scales. However to leading order in a spatial gradient expansion we have <cit.> 1/2ℛ^(2)=2∇^2 ψ^(2)+6∂^iψ^(1)∂_iψ^(1) + 16ψ^(1)∇^2ψ^(1) +O(∇^4),and this does remain constant in the large-scale limit <cit.>.As in the first-order case, we may neglect the decaying mode for regular initial conditions, while the amplitude of the homogeneous growing mode must be set from the constraint equation (<ref>). The homogeneous, linearly-growing mode, δ_ dm,h^(2)∝ D_+,is driven by the constant part of the second-order curvature, ℛ^(2)_ h=constantwhile at second-order there is also the particular solution, δ_ dm,p^(2), corresponding to a solution to (<ref>) sourced by the time-dependent part of the comoving curvature, ℛ^(2)_ p=ℛ^(2)-ℛ^(2)_ h. Note that the homogeneous, linearly-growing mode, δ_ dm,h^(2)= O(∇^2/ℋ^2), will dominate on large scales where the comoving curvature perturbation (<ref>) is constant. 
The particular, nonlinearly-growing solution, δ_ dm,p^(2)= O(∇^4/ℋ^4), will dominate on smaller scales and late times.§.§.§ Particular solution The time-dependent part of comoving Ricci scalar ℛ^(2) can be obtained by differentiating (<ref>) with respect to time. After some calculation, using the equations for the second-order continuity equation (<ref>) and Raychaudhuri equation (<ref>) as well as the Einstein evolution equation (<ref>) to first order, we obtainℛ'^(2) = -2 ℛ^j(1)_i ∂^i∂_jχ'^(1),where the first-order Ricci tensor on the comoving spatial hypersurfaces, ℛ^i(1)_j=[∂^i∂_j+δ^i_j∇^2]ℛ_c, is constant in time.Integrating (<ref>), and using the solution (<ref>) for χ^(1), we findℛ^(2)_ p=4[1/ℋ^2(f_ rsd+3/2Ω_ dm)^-1+∫g/aℋ^2(f_ rsd+3/2Ω_ dm)^-1da][∂^i∂_jℛ_c∂^j∂_iℛ_c+(∇^2ℛ_c)^2].Note that this time-dependent part of the Ricci scalar at second order is fourth-order in spatial derivatives, consistent with our earlier conclusion that the Ricci scalar is constant at leading order on large scales (<ref>). The constraint equation (<ref>) for the particular solution to equation (<ref>) with the time-dependent part of the Ricci scalar, ℛ^(2)_ p:4ℋδ_ dm,p'^(2)+2[a^2ρ̅_ dm+2gℋ^2]δ_ dm,p^(2) = ℛ^(2)_ p + 2ϑ^(1)^2-2ϑ^(1)^i_jϑ^(1)^j_i-8ℋδ_ dm^(1)ϑ^(1) ,can thus be written as4ℋδ_ dm,p'^(2)+2[a^2ρ̅_ dm+2gℋ^2]δ_ dm,p^(2) =𝒮(a, Σ) (∇^2ℛ_c)^2/ℋ^2 ,where we introduce the dimensionless shape coefficientΣ(x⃗)=ϑ^i_jϑ^j_i/ϑ^2=∂^i∂_jℛ_c∂^j∂_iℛ_c/(∇^2ℛ_c)^2,and define the dimensionless source function𝒮(a,Σ)= 2f_ rsd^2(1-Σ)+8f_ rsd+4(f_ rsd+3/2Ω_ dm)(1+Σ)/(f_ rsd+3/2Ω_ dm)^2+ 4(1+Σ) ℋ^2 ∫g/aℋ^2(f_ rsd+3/2Ω_ dm)^-1da. The factorised form of the source term on the right-hand-side of (<ref>) suggests the second-order growing mode solutionδ_ dm,p^(2)(η,x⃗)=P(x⃗)D^(2)_+(η, Σ).Note that, unlike the first order solution (<ref>), this second-order solution is no longer separable since the source function 𝒮(a,Σ) in Eq. (<ref>) is not in general separable. The growing mode D^(2)_+ is separable only in special cases, e.g., for the case of planar symmetry, Σ=1, or matter-dominated solutions where Ω_ dm, g are f_ rsd are constant in time.Nonetheless, without loss of generality we may define the local second-order growth rate asf_2(η, Σ) = D^'(2)_+/2ℋD^(2)_+, where equation (<ref>) can then be written as4P(x⃗)(2f_2+3/2Ω_ dm+g)D^(2)_+=(∇^2ℛ_c)^2/ℋ^4𝒮(η,Σ).Using the first-order solution (<ref>) we can formally write the second-order particular solution asδ^(2)_ dm,p=[2f_ rsd+3Ω_ dm]^2/8(4f_2+2g+3Ω_ dm)S(a,Σ)(δ_ dm^(1))^2.We see that a non-zero interaction, g≠0, affects both the growing curvature (<ref>) contributing to the source term (<ref>) driving the growth of structure at second order, and the second order growing mode (<ref>).§.§.§ Homogeneous solution To find the homogeneous solution of the second-order evolution equation for the density contrast (<ref>), we solve the second-order constraint equation (<ref>) with a constant source term, ℛ^(2), i.e., 4ℋδ_ dm,h'^(2)+2[a^2ρ̅_ dm+2gℋ^2]δ_ dm,h^(2) = ℛ^(2)_ h ,The homogeneous solution is thus given byδ^(2)_ dm,h(η,x⃗)=C_2(x⃗)D_+(η),where D_+ is the linear growth factor (<ref>) and C_2(x⃗) is given by (<ref>) replacing the first-order curvature, ℛ^(1)=4∇^2 ℛ_c, by the second order term, ℛ_ h^(2). 
Thus we haveδ^(2)_ dm,h=ℛ^(2)_ h/4ℋ^2(f_1+3/2Ω_ dm+g)^-1 ,where subtracting the time-dependent contribution (<ref>) from full second-order curvature (<ref>) gives <cit.>ℛ^(2)_ h=4∇^2[ψ^(2)+1/6∇^2χ^(2)]+32ℛ_c∇^2ℛ_c+12∂^iℛ_c∂_iℛ_c -2[2∂^i∇^2χ^(1)∂_iℛ_c+∂^i∂_jχ^(1)∂^j∂_iℛ_c+∇^2χ^(1)∇^2ℛ_c] +1/2[∂^i∂^j∂^kχ^(1)∂_i∂_j∂_kχ^(1)-∂^k∇^2χ^(1)∂_k∇^2χ^(1)].To set the initial conditions at second order, we will introduce the primordial curvature perturbation on uniform-density hypersurfaces, ζ. This gauge-invariant quantity remains constant on super-horizon scales for adiabatic perturbations <cit.> and hence can be predicted from standard inflation models in order to set the initial conditions for the subsequent radiation and matter eras.We expand ζ at second order asζ≈ζ^(1)+1/2ζ^(2)=ζ^(1)+3/5f_NL(ζ^(1))^2 ,where we introduced the non-linearity parameter f_NL to describe local-type primordial non-Gaussianity <cit.>.For scales well outside de horizon (k≪ℋ_i) and, therefore, at early times (a_i≪ 1) we havee^2ζ = 1 - 2[ψ_i + 1/6∇^2χ_i ] .Thus we findζ^(1)=-ℛ_c, ψ_i^(2)+1/6∇χ_i^(2)=-(2+6/5f_NL)ℛ_c^2.Setting initial conditions on large scales and at early times, the expression (<ref>) reduces to the large-scale limit (<ref>)ℛ^(2)_ h/4 = 2(2-6/5f_NL)ℛ_c∇^2ℛ_c- (1+12/5f_NL)∂^iℛ_c∂_iℛ_c. Thus the homogenous solution for the second-order density contrast (<ref>) is given byδ_ dm,h^(2)=4/ℋ^2(f_ rsd+3/2Ω_ dm)^-1[-(1/4+3/5f_NL)∂ ^iℛ_c∂_iℛ_c+(1-3/5f_NL)ℛ_c∇^2ℛ_c].§.§.§ Relativistic comoving density contrast The full solution for the second-order density contrast in synchronous comoving coordinates, obeying the initial constraint on large scale at early times, is thus a sum of the homogeneous solution (<ref>) with the particular solution (<ref>), which givesδ^(2)_ dm=-24/5[2f_ rsd+3Ω_ dm][(f_NL+5/12)∂ ^iℛ_c∂_iℛ_c/ℋ^2+(f_NL-5/3)ℛ_c∇^2ℛ_c/ℋ^2]+ 𝒮(a,Σ)/2(4f_2+3Ω_ dm+2g) ( ∇^2ℛ_c/ℋ^2)^2 .In this expression the first term corresponds to the large-scale/early-time part where the second-order perturbation contains information about primordial non-Gaussianity and the relativistic non-linear initial constraints. The constant f_NL describes the level of primordial non-Gaussianity (<ref>) large scales at the end of inflation. In the absence of primordial non-Gaussianity f_NL=0. At smaller scales, well inside the Hubble horizon, the terms in the second line dominate and represent the growing non-Gaussianity due to gravitational collapse.In ΛCDM we have g=0 and hence f_ rsd=f_1. At early times we have matter-dominated evolution, Ω_ dm=1, and the linear growth function is D_+∝ℋ^2∝ a and hence the first-order growth rate (<ref>) obeys f_1=1. The function S(a,Σ) in Eq. (<ref>) becomes a constant𝒮(Σ) = (16/25)(5+2Σ), and the second-order growing mode (<ref>) reduces to D_+^(2)∝ (D_+)^2∝ a^2. Hence the second order growth rate (<ref>) f_2=1 (independent of the shape, Σ). The second-order solution for the synchronous comoving density contrast (<ref>) in the early matter-dominated (Einstein-de Sitter) era is then given by <cit.>δ^(2)_ dm =- 24/25[(f_NL+5/12)∂ ^iℛ_c∂_iℛ_c/ℋ^2+(f_NL-5/3) ℛ_c∇^2ℛ_c/ℋ^2] + 8(5+2Σ)(∇^2ℛ_c)^2/175ℋ^4 .In models (i) and (ii) the dimensionless interaction parameter g is proportional to Ω_ dm-1 and at early times we have g→0 in the matter-dominated limit, Ω_ dm→1. Hence, as in ΛCDM, we recover the second-order Einstein-de Sitter solution (<ref>) at early times, with the more general solution (<ref>) with g≠0 at late times when Ω_ dm≠1. 
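As a quick consistency check on this Einstein–de Sitter limit, one can verify symbolically that the source function of Eq. (<ref>) collapses to 16(5+2Σ)/25 once g=0, Ω_ dm=1 and f_ rsd=1 are imposed (the interaction integral drops out for g=0). The short sympy sketch below is ours and purely illustrative.

```python
# Symbolic check of the Einstein-de Sitter limit S(Σ) = 16(5+2Σ)/25.
import sympy as sp

Sigma, f, Omega = sp.symbols('Sigma f Omega', positive=True)

# Source function S(a,Σ) without the interaction integral (it vanishes for g = 0).
S = (2*f**2*(1 - Sigma) + 8*f + 4*(f + sp.Rational(3, 2)*Omega)*(1 + Sigma)) \
    / (f + sp.Rational(3, 2)*Omega)**2

S_EdS = sp.simplify(S.subs({f: 1, Omega: 1}))
# S_EdS equals 16/5 + (32/25)Σ, i.e. 16(5+2Σ)/25; the difference simplifies to 0.
print(S_EdS)
print(sp.simplify(S_EdS - sp.Rational(16, 25)*(5 + 2*Sigma)))
```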
In model (iii) the dimensionless interaction parameter g=-ϵ is non-zero at all times. Some vacuum energy is present at early times, Ω_ dm=1+ϵ/3, such that D_+∝ℋ^-2∝ a^1+ϵ and hence a modified growth rate, f_1=1+ϵ. The second-order source term (<ref>) remains a constant in this early time limit𝒮(Σ) = 16(5+2Σ+3ϵ)/(1+ϵ)(5+ϵ)^2 ,The solution for the second order density contrast is then separable and with the second-order growing mode D_+^(2)∝ (D_+)^2 as in a conventional matter-dominated era, but with a modified growth rate, (<ref>), f_2=1+ϵ. The solution (<ref>) for the second-order synchronous comoving density contrast is thus δ^(2)_ dm(a,Σ)=-24/5(5+ϵ)[(f_NL+5/12)∂ ^iℛ_c∂_iℛ_c/ℋ^2 + (f_NL-5/3)ℛ_c∇^2ℛ_c/ℋ^2]+ 8(5+3ϵ+2Σ)/(7+3ϵ)(5+ϵ)^2(1+ϵ)( ∇^2ℛ_c/ℋ^2)^2 .This reduces to the standard matter-dominated (Einstein-de Sitter) second-order solution (<ref>) in the limit ϵ→0.§.§.§ Relativistic Eulerian density contrast In the absence of an interaction between dark energy and dark matter, the continuity equation (<ref>) and Raychaudhuri equation (<ref>) in the synchronous comoving gauge are formally identical to the corresponding equations for the fluid dynamics in Newtonian gravity in Lagrangian coordinates, i.e., comoving with the matter <cit.>. The general solution to these second-order evolution equations is thus identical to the Newtonian solution, but the relativistic solution (<ref>) has a characteristic initial condition (the specific choice for the second order homogeneous solution) set by the non-linear initial relativistic constraints.To compare our general solution (<ref>) with the standard second-order solution for the density contrast in Newtonian theory, for example, we will also transform from the comoving (Lagrangian) frame to an Eulerian frame where the matter moves with respect to “fixed” spatial coordinates. The perturbed scalar expansion (<ref>) is corresponds to the divergence of the matter 3-velocity in this frame, ϑ≡∇^2 v. In relativistic perturbation theory this Eulerian frame is usually referred to as the total-matter gauge <cit.>. Although the first-order density perturbation is invariant under a change of spatial gauge, at second order the density contrast transforms to <cit.>δ_E^(2) = δ^(2)_ dm - 2∂_iδ_ dm∫∂^i v dη .Substituting in the first order results for the density contrast and velocity divergence, we find the Eulerian densityδ_E^(2)=δ^(2)_ dm + 8/(2f_ rsd+3Ω_ dm)^2[ 1 + (2f_ rsd+3Ω_ dm) ℋ^2 ∫g/aℋ^2 (2f_ rsd+3Ω_ dm) da ] ∂^iℛ_c∂_i∇^2ℛ_c/ℋ^4,where δ_ dm^(2) is given by solution (<ref>) in synchronous comoving gauge and the second term is due to the spatial gauge transformation. 
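To see how the interaction enters this gauge term, it is useful to evaluate the square bracket explicitly in an early matter era with g=-ϵ. The intermediate step below is ours: it uses the early-time scalings Ω_ dm=1+(ϵ/3), f_ rsd=1 and ℋ^2∝ a^-(1+ϵ) quoted earlier, with the integration constant chosen so that the integral vanishes as a→0.

```latex
% Worked evaluation of the bracket in the gauge term (our intermediate step).
\[
  \mathcal{H}^{2}\!\int\!\frac{g}{a\,\mathcal{H}^{2}\,(2f_{\rm rsd}+3\Omega_{\rm dm})}\,\mathrm{d}a
  \;=\;-\,\frac{\epsilon}{5+\epsilon}\,a^{-(1+\epsilon)}\!\int\! a^{\epsilon}\,\mathrm{d}a
  \;=\;-\,\frac{\epsilon}{(1+\epsilon)(5+\epsilon)}\,,
\]
so that
\[
  1+(2f_{\rm rsd}+3\Omega_{\rm dm})\,\mathcal{H}^{2}\!
  \int\!\frac{g}{a\,\mathcal{H}^{2}\,(2f_{\rm rsd}+3\Omega_{\rm dm})}\,\mathrm{d}a
  \;=\;1-\frac{\epsilon}{1+\epsilon}\;=\;\frac{1}{1+\epsilon}\,,
\]
% which is the origin of the coefficient 8/[(1+ε)(5+ε)^2] of the
% ∂^i R_c ∂_i ∇^2 R_c / H^4 term in the next display.
```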
In an early matter era, including the possibility of a non-zero interaction g=-ϵ such that Ω_ dm=1+(ϵ/3), we can then obtain an analytic expression for the Eulerian density contrastδ_E^(2) =-24/5(5+ϵ)[(f_NL+5/12)∂ ^iℛ_c∂_iℛ_c/ℋ^2 + (f_NL-5/3)ℛ_c∇^2ℛ_c/ℋ^2]+ 8(5+3ϵ+2Σ)/(7+3ϵ)(5+ϵ)^2(1+ϵ)( ∇^2ℛ_c/ℋ^2)^2+ 8/(1+ϵ)(5+ϵ)^2∂^iℛ_c∂_i∇^2ℛ_c/ℋ^4 .We recover the early-time limit in the conventional matter-dominated limit of ΛCDM or models (i) or (ii), where g→0 and Ω_ dm→1 at early time, in the limit ϵ→0.Any separable second-order solution can be expressed in Fourier space via the convolution δ_Ek⃗^(2)=2∫d^3k⃗_1d^3k⃗_2/(2π)^3δ_D(k⃗-k⃗_1-k⃗_2)F_2(k⃗_1,k⃗_2)δ^(1)_k⃗_1δ^(1)_k⃗_2,with kernelF_2(k⃗_1,k⃗_2) = F_in(k⃗_1,k⃗_2) + F_nl(k⃗_1,k⃗_2),where we separate two distinct contributions coming from the linearly and non-linearly growing terms.The relativistic initial constraint including any primordial non-Gaussianity gives rise to the linearly growing term which dominates at early times (large scales)in ΛCDM or models (i) or (ii)F_in(k⃗_1,k⃗_2) = 3(2f_ rsd+3Ω_ dm)/5ℋ^2 [ (f_NL+5/12) k⃗_1·k⃗_2/k_1^2k_2^2 + (f_NL-5/3) k_1^2+k_2^2/2k_1^2k_2^2] ,For the early matter era with g=-ϵ such that Ω_ dm=1+(ϵ/3) this becomesF_in(k⃗_1,k⃗_2) = 3(5+ϵ)/5ℋ^2 [ (f_NL+5/12) k⃗_1·k⃗_2/k_1^2k_2^2 + (f_NL-5/3) k_1^2+k_2^2/2k_1^2k_2^2] ,For ϵ=0 this reduces to the conventional Einstein de-Sitter initial constraint <cit.>.The nonlinear growth of structure due to gravitational instability and vacuum-dark matter interactions dominates at late times (small scales). For general interacting-vacuum cosmology the solution is not separable, however for the matter era solution (<ref>) with g=-ϵ such that Ω_ dm=1+(ϵ/3) we haveF_nl(k⃗_1,k⃗_2) = 5+3ϵ/(7+3ϵ)(1+ϵ) + 2/(7+3ϵ)(1+ϵ)(k⃗_1·k⃗_2)^2/k_1^2k_2^2 + 1/1+ϵk⃗_1·k⃗_2(k_1^2+k_2^2)/2k_1^2k_2^2,In the absence of vacuum-dark matter interactions [ϵ=0 or models (i) or (ii) at early times] this reduces to the standard Newtonian kernel <cit.>F_N(k⃗_1,k⃗_2) = 5/7 + 2/7(k⃗_1·k⃗_2)^2/k_1^2k_2^2 + k⃗_1·k⃗_2(k_1^2+k_2^2)/2k_1^2k_2^2, § CONCLUSIONS In this paper we have studied the growth of density perturbations in three simple models where dark matter interacts with vacuum energy to give rise to late-time acceleration. In two of these models, including a decomposed Chaplygin gas model, the interaction vanishes at early times leading to a conventional matter-dominated (Einstein-de Sitter) cosmology. In the third model we have considered a constant dimensionless interaction rate relative to the matter density, leading to a modified matter era at early times. In all three models the interaction vanishes at late times and we recover a constant vacuum energy, driving a de Sitter expansion in the asymptotic future.The growth of inhomogeneous perturbations of interacting dark matter is dependent upon the covariant energy-momentum transfer four-vector, Q^μ. We have considered a simple interaction model where the energy-momentum transfer follows the matter four-velocity, Q^μ∝ u^μ. In this case the vacuum energy is homogeneous on spatial hypersurfaces orthogonal to the comoving worldlines and therefore the sound speed remains zero even in the presence of a non-zero matter-vacuum interaction. This means we get a simple, scale-independent growth of linear density perturbations, similar to standard cold dark matter; a non-zero sound speed would lead to a finite Jeans length, suppressing clustering on small scales. 
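Readers wishing to experiment with the weakly nonlinear shapes discussed in this paper may find a small numerical sketch useful. The snippet below is our illustration (the wavenumbers and the value of ϵ are arbitrary): it evaluates the nonlinearly growing kernel F_ nl of Eq. (<ref>) for the early-matter-era solution with g=-ϵ and checks that it reduces to the Newtonian kernel F_N as ϵ→0. The linearly growing piece F_ in is omitted here since it carries an explicit factor of ℋ^-2.

```python
# Illustrative evaluation of the nonlinearly growing second-order kernel F_nl.
import numpy as np

def F_nl(k1, k2, mu, eps=0.0):
    """Early-matter-era kernel with g = -eps; mu = cos(angle between k1 and k2)."""
    dot = mu * k1 * k2
    return ((5.0 + 3.0 * eps) / ((7.0 + 3.0 * eps) * (1.0 + eps))
            + 2.0 / ((7.0 + 3.0 * eps) * (1.0 + eps)) * (dot / (k1 * k2)) ** 2
            + 1.0 / (1.0 + eps) * dot * (k1 ** 2 + k2 ** 2) / (2.0 * k1 ** 2 * k2 ** 2))

def F_newton(k1, k2, mu):
    """Standard Newtonian kernel 5/7 + (2/7) mu^2 + mu (k1/k2 + k2/k1)/2."""
    return 5.0 / 7.0 + 2.0 / 7.0 * mu ** 2 + 0.5 * mu * (k1 / k2 + k2 / k1)

# Check the eps -> 0 limit and look at a few configurations (illustrative values).
k1 = k2 = 0.05   # wavenumbers in h/Mpc, arbitrary choice
for mu in (-0.5, 0.0, 0.5, 1.0):
    assert np.isclose(F_nl(k1, k2, mu, eps=0.0), F_newton(k1, k2, mu))
    print(mu, F_nl(k1, k2, mu, eps=0.02), F_newton(k1, k2, mu))
```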
We find the linearly growing mode for the first-order comoving density contrast, which in a conventional matter-dominated (EdS) era reduces to the usual linearly growing mode, D_+∝ a with corresponding linear growth rate f_1≡ dlnδ/dln a=1. Matter over-densities grow due to gravitational collapse and this can be enhanced by non-zero energy transfer from dark matter to the vacuum.For example, in the case of a non-zero energy transfer from dark matter to the vacuum even at early times, as in our model (iii) where Q=ϵ Hρ̅_ dm we have a modified early time limit Ω_ dm→1+(ϵ/3) with a modified growing mode, D_+∝ a^1+ϵ, and hence f_1=1+ϵ.Non-zero energy transfer from/to matter leads to an enhanced/suppressed matter growth rate.This may appear counter-intuitive, but since the vacuum is homogeneous in the comoving frame any energy transfer to the matter contributes only to the background matter density and not to the comoving density perturbation. Hence the growth rate of the local matter density contrast, δ_ dm=δρ_ dm/ρ̅_ dm, is suppressed.Energy transfer between dark matter and the vacuum also changes the usual relation between the growth rate and the velocity divergence. For interacting dark matter the linear growth rate for the matter overdensity, f_1, differs from the growth rate that would be inferred purely from redshift-space distortions (i.e., the peculiar velocity field) which we denote by f_ rsd, defined in Eq. (<ref>) and related to the linear growth rate in Eq. (<ref>). By contrast with the linear growth rate, the RSD distortions are enhanced by energy transfer from the vacuum to dark matter as the velocity field responds to the local gravitational potential and thus the total comoving density perturbation, not just the density contrast. We give expressions for the RSD index, γ = d ln f_ rsd/dlnΩ_ dm ,for each model by expanding about the early matter-dominated limit. The corresponding expressions for f_ rsd∝Ω_ dm^γ, give a per-cent level fit to the RSD parameter in an interacting model corresponding to the decomposed Chaplygin gas with -0.2<α<0.2, see figure <ref>. In principle independent measurements of the RSD parameter and the linear growth rate for the density contrast could reveal the effect dark matter interaction. This assumes that galaxies follow the dark matter velocity field, i.e., the role of baryons is sub-dominant in determining the peculiar velocities of galaxy. It would be interesting to develop more realistic model of a baryon+dark matter system in the presence of vacuum-dark matter interactions.We have also found solutions for the second-order growth of the density contrast in interacting vacuum cosmologies for the first time. We identify two components in the second-order density field, Eq. (<ref>), analogous to the usual second-order solutions in non-interacting ΛCDM cosmology. One component is a homogeneous solution, corresponding to a linearly growing density perturbation whose amplitude is second order in perturbations. This includes any primordial non-Gaussianity, e.g., originating during a period of inflation in the very early universe, as well as a term due to the initial second-order constraint for the comoving density contrast in general relativity <cit.>, usually set to zero in Newtonian studies of structure formation <cit.>. 
This homogeneous solution dominates in the squeezed limit or at early times, but it would also be sensitive to the effect of early radiation damping on scales below the matter-radiation equality scale ≈ 100 Mpc <cit.> and our analytic results do not include the effect of radiation.The second component, which we term the particular solution, is a modification of the usual Newtonian second-order density perturbation. It leads to a growing matter bispectrum which dominates on small scales and at late times, until eventually the structure formation becomes fully nonlinear. We identify the second-order kernel or reduced bispectrum (<ref>) and show how its shape is altered by energy transfer to or from the vacuum. This opens up the possibility of distinguishing interacting dark matter models in future through the shape of the matter bispectrum on weakly nonlinear scales (see <cit.> for related work in modified gravity). A much more challenging task for future work would be to identify dark matter-vacuum interactions in the fully nonlinear regime. Nonetheless our second order results suggest that the bispectrum, or higher order correlations in the matter density field, could in future be used to identify modifications of the standard ΛCDM scenario. § ACKNOWLEDGEMENTS The authors are grateful to Saulo Carneiro, Marco Bruni and Joan Solà for useful discussions.H.A.B. was partially supported by CNPq and Fapesb. DW was supported by STFC grant ST/N000668/1 and ST/S000550/1. DW is grateful to KITP, University of California Santa Barbara, for their hospitality while this paper was revised.This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.30Astier P. Astier et al., Astron. Astrophys. 447, 31 (2006).riess A. G. Riess et al., Astrophys. J. 607, 665 (2004).perl S. Perlmutter et al., Astrophys. J. 517, 565 (1999).tegmark M. Tegmark et al., Phys. Rev. D 69, 103501 (2004).pebles P. J. E. Peebles, B. Ratra, Rev. Mod. Phys. 75, 559 (2003).Padmanaban T. Padmanabhan, Phys. Rept. 380, 235 (2003).weinberg02 S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).planck P. A. R. Ade et al. [Planck Collaboration],Astron. Astrophys.594, A13 (2016)Ozer M. Ozer, O. Taha, Phys. Lett. B 171, 363 (1986). Nucl. Phys. B 287, 776 (1987).Copeland:2006wrE. J. Copeland, M. Sami and S. Tsujikawa,Int. J. Mod. Phys. D 15, 1753 (2006). Wetterich:1994bgC. Wetterich,Astron. Astrophys.301, 321 (1995) [hep-th/9408025]. Amendola:1999qqL. Amendola,Phys. Rev. D 60, 043501 (1999)[astro-ph/9904120]. Holden:1999hmD. J. Holden and D. Wands,Phys. Rev. D 61, 043506 (2000)[gr-qc/9908026]. He:2008tnJ. H. He and B. Wang,JCAP 0806, 010 (2008)[arXiv:0801.4233 [astro-ph]].Valiviita:2008ivJ. Valiviita, E. Majerotto and R. Maartens,JCAP 0807, 020 (2008) doi:10.1088/1475-7516/2008/07/020 [arXiv:0804.0232 [astro-ph]].Koyama:2009gdK. Koyama, R. Maartens and Y. S. Song,JCAP 0910, 017 (2009)[arXiv:0907.2126 [astro-ph.CO]]. Tsujikawa:2012hvS. Tsujikawa, A. De Felice and J. Alcaniz,JCAP 1301, 030 (2013)[arXiv:1210.4239 [astro-ph.CO]]. Marcondes:2016rebR. J. F. Marcondes, R. C. G. Landim, A. A. Costa, B. Wang and E. Abdalla,JCAP 1612, no. 12, 009 (2016)[arXiv:1605.05264 [astro-ph.CO]]. kam A. Y. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B511, 265 (2001).Fabris J. C. Fabris, S. V. B. Gonçalves and P. E. de Souza, Gen. Rel. Grav. 34, 53 (2002).Bento M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D66, 043507 (2002).Sandvik:2002jzH. Sandvik, M. Tegmark, M. Zaldarriaga and I. Waga,Phys. Rev. 
D 69, 123524 (2004). bento M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D70, 083519 (2004).degeneracy S. Carneiro and H. A. Borges, JCAP 1406, 010 (2014).Wands D. Wands, J. De-Santiago and Y. Wang,Class. Quant. Grav.29, 145017 (2012).Borges H. A. Borges, S. Carneiro, J. C. Fabris and W. Zimdahl, Phys. Lett. B727, 37 (2013).saulo J. S. Alcaniz, H. A. Borges, S. Carneiro, J. C. Fabris, C. Pigozzo and W. Zimdahl, Phys. Lett. B716, 165 (2012). hermano H. Velten, H. A. Borges, S. Carneiro, R. Fazolo and S. Gomes, MNRAS 452, 2220 (2015).Wang:2013qyY. Wang, D. Wands, L. Xu, J. De-Santiago and A. Hojjati,Phys. Rev. D 87, 083503 (2013). note C. Pigozzo, S. Carneiro, J. S. Alcaniz, H. A. Borges and J. C. Fabris, JCAP 1605, 022 (2016).vom R. F. vom Marttens, L. Casarini, W. Zimdahl, W. S. Hipólito-Ricaldi and D. F. Mota, Phys. Dark Univ. 15, 114 (2017).Salvatelli V. Salvatelli, N. Said, M. Bruni, A. Melchiorri and D. Wands,Phys. Rev. Lett.113, 181301 (2014). Martinelli:2019dauM. Martinelli, N. B. Hogg, S. Peirone, M. Bruni and D. Wands,Mon. Not. Roy. Astron. Soc.488, no. 3, 3423 (2019) Shapiro:2003uiI. L. Shapiro, J. Sola, C. Espana-Bonet and P. Ruiz-Lapuente,Phys. Lett. B 574, 149 (2003)[astro-ph/0303306]. EspanaBonet:2003vkC. Espana-Bonet, P. Ruiz-Lapuente, I. L. Shapiro and J. Sola,JCAP 0402, 006 (2004)[hep-ph/0311171]. Wang1 P. Wang and X-H. Meng, Claa. Quant. Grav. 22, 283 (2005).jailson J. S. Alcaniz and J. Lima, Phys. Rev. D.72, 063516 (2005).valent A. Gomez-Valent, J., Sola, S. Basilakos, JCAP, 1501, 004 (2015).Sola:2016jkyJ. Sola, A. Gomez-Valent and J. de Cruz Perez,Astrophys. J.836, no. 1, 43 (2017)[arXiv:1602.02103 [astro-ph.CO]]. Sola:2016eczJ. Sola, J. de Cruz Perez, A. Gomez-Valent and R. C. Nunes,arXiv:1606.00450 [gr-qc]. Sola:2017jblJ. Sola, J. d. C. Perez and A. Gomez-Valent,arXiv:1703.08218 [astro-ph.CO]. Wang:2015wgaY. Wang, G. B. Zhao, D. Wands, L. Pogosian and R. G. Crittenden,Phys. Rev. D92, 103005 (2015). Hogg:2020rdpN. B. Hogg, M. Bruni, R. Crittenden, M. Martinelli and S. Peirone,arXiv:2002.10449 [astro-ph.CO]. matarrese S. Matarrese, S. Mollerach, and M. Bruni, Phys. Rev. D 58, 043504 (1998).Noh:2005hcH. Noh and J. c. Hwang,Class. Quant. Grav.22, 3181 (2005) doi:10.1088/0264-9381/22/16/004 [gr-qc/0412127]. bartolo N. Bartolo, S. Matarrese and A. Riotto,JCAP 0510, 010 (2005). bartolo1 N. Bartolo, S. Matarrese, O. Pantano, and A. Riotto, Classical and Quantum Gravity 27, 124009 (2010).Bruni M. Bruni, J. C. Hidalgo, N. Meures and D. Wands,Astrophys. J.785, 2 (2014)[arXiv:1307.1478 [astro-ph.CO]]. Bruni:2014xmaM. Bruni, J. C. Hidalgo and D. Wands,Astrophys. J.794, no. 1, L11 (2014)[arXiv:1405.7006 [astro-ph.CO]]. Uggla C. Uggla, J. Wainwright, Class. Quant. Grav. 31, 105008 (2014).Chani N. C. Devi, H. A. Borges, S. Carneiro and J. S. Alcaniz, MNRAS 448, 37 (2015).Sawicki:2013wjaI. Sawicki, V. Marra and W. Valkenburg,Phys. Rev. D 88, 083520 (2013). Peebles P. J. E. Peebles, Astrophys. J. 284, 439 (1984).Kaiser:1987qvN. Kaiser,Mon. Not. Roy. Astron. Soc.227, 1 (1987). three Y-S. Song and W.J. Percival, JCAP 0910, 4 (2009).Malik:2003mvK. A. Malik and D. Wands,Class. Quant. Grav.21, L65 (2004)[astro-ph/0307055]. Langlois:2005qpD. Langlois and F. Vernizzi,Phys. Rev. D 72, 103501 (2005)[astro-ph/0509078]. Wands:2010afD. Wands,Class. Quant. Grav.27, 124002 (2010) doi:10.1088/0264-9381/27/12/124002 [arXiv:1004.0818 [astro-ph.CO]]. Bertacca:2015mcaD. Bertacca, N. Bartolo, M. Bruni, K. Koyama, R. Maartens, S. Matarrese, M. Sasaki and D. Wands,Class. Quant. 
Grav.32, 175019 (2015)[arXiv:1501.03163 [astro-ph.CO]]. Bernardeau:2001qrF. Bernardeau, S. Colombi, E. Gaztanaga and R. Scoccimarro,Phys. Rept.367, 1 (2002) doi:10.1016/S0370-1573(02)00135-7 [astro-ph/0112551]. Tram:2016cpyT. Tram, C. Fidler, R. Crittenden, K. Koyama, G. W. Pettinari and D. Wands,JCAP 1605, no. 05, 058 (2016) doi:10.1088/1475-7516/2016/05/058 [arXiv:1602.05933 [astro-ph.CO]]. Yamauchi:2017ibzD. Yamauchi, S. Yokoyama and H. Tashiro,arXiv:1709.03243 [astro-ph.CO].
Backward shift invariant subspaces in reproducing kernel Hilbert spaces

Emmanuel Fricain, Laboratoire Paul Painlevé, Université Lille 1, 59 655 Villeneuve d'Ascq Cédex, [email protected]

Javad Mashreghi, Département de mathématiques et de statistique, Université Laval, Québec, QC, Canada G1K, [email protected]

Rishika Rupam, Laboratoire Paul Painlevé, Université Lille 1, 59 655 Villeneuve d'Ascq Cédex, [email protected]

The first and third authors were supported by Labex CEMPI (ANR-11-LABX-0007-01) and the grant ANR-17-CE40-0021 of the French National Research Agency ANR (project Front). The second author was supported by grants from NSERC (Canada).

2010 Mathematics Subject Classification: 30J05, 30H10, 46E22.

Abstract. In this note, we describe the backward shift invariant subspaces for an abstract class of reproducing kernel Hilbert spaces. Our main result is inspired by a result of Sarason concerning de Branges–Rovnyak spaces (the non-extreme case). Furthermore, we give new applications in the context of the range space of co-analytic Toeplitz operators and sub-Bergman spaces.

§ INTRODUCTION A celebrated theorem of Beurling describes all (non-trivial) closed invariant subspaces of the Hardy space H^2 on the open unit disc 𝔻 which are invariant with respect to the backward shift operator S^*. They are of the form K_Θ=(Θ H^2)^⊥, where Θ is an inner function. The result of Beurling was the cornerstone of a whole new direction of research lying at the interaction between operator theory and complex analysis. It was generalized in many ways. See for instance <cit.>. Sarason <cit.> classified the non-trivial closed backward shift invariant subspaces of the de Branges–Rovnyak spaces (̋b), where b is a non-extreme point of the closed unit ball of H^∞: they are of the form K_Θ∩(̋b), where Θ is an inner function. In other words, the closed invariant subspaces for S^*|(̋b) are the trace on (̋b) of the closed invariant subspaces for S^*. This naturally leads to the following question:

(Q): let _̋1 and _̋2 be two reproducing kernel Hilbert spaces on 𝔻 such that _̋1⊂_̋2; assume that the shift operator S (multiplication by the independent variable) is contractive on _̋2 and that, writing T=S|_̋2, its adjoint T^* maps _̋1 contractively into itself. Then, is it true that every closed invariant subspace ℰ of T^*|_̋1 has the form E∩_̋1, where E is a closed invariant subspace of T^* (as an operator on _̋2)? In other words, are the closed invariant subspaces for T^*|_̋1 the trace on _̋1 of the closed invariant subspaces for T^*?

It should be noted that, of course, the interesting situation is when _̋1 is not a closed subspace of _̋2. Sarason's result says that the answer to question (Q) is affirmative in the situation where _̋2=H^2 and _̋1=(̋b), with b a non-extreme point of the closed unit ball of H^∞. However, it should be noted that question (Q) has a negative answer in the case where _̋1=𝒟 is the Dirichlet space and _̋2=H^2 is the Hardy space. Indeed, let (z_n)_n≥ 1 be a non-Blaschke sequence of 𝔻 which is a zero set for A^2 and put M:={f∈ A^2:f(z_n)=0, n≥ 1}, where A^2 is the Bergman space of 𝔻.
Define N={F∈(𝔻): F'=f, f∈ A^2⊖ M}.It is not difficult to see that N is a non-trivial closed subspace of 𝒟, which is S^* invariant. Then, observe that N cannot be of the form E∩𝒟, where E is a closed subspace of H^2 invariant with respect to S^*. Indeed, assume on the contrary that there exists a closed subspace E of H^2, invariant with respect to S^*, such that N=E∩𝒟. Since N is non-trivial, the subspace E is also non-trivial, and by Beurling's theorem, there exists an inner function Θ such that E=K_Θ. Thus N=K_Θ∩𝒟. Observe now that for every n≥ 1, the Cauchy kernel k_λ_n belongs to N (because its derivative is up to a constant the reproducing kernel of A^2 at point λ_n and thus it is orthogonal to M). Then k_λ_n∈ K_Θ, n≥ 1. To get a contradiction, it remains to see that, since (z_n)_n≥ 1 is not aBlaschke sequence, then the sequence of Cauchy kernels k_λ_n, n≥ 1, generates all H^2, and we deduce that H^2⊂ K_Θ, which is absurd. The aim of this note is to present a general framework where the answer to the question (Q) is affirmative. Note that in <cit.>, Aleman–Malman present another general situation of reproducing kernel Hilbert spaces where they extend Sarason's result. In Section 2, we first recall some basic facts on reproducing kernel Hilbert spaces and on the Sz.-Nagy–Foias model for contractions. Then, in Section 3, we study the properties of multiplication operators in our general context and prove that the scalar spectral measures of the minimal unitary dilation of T^*|_̋1 are absolutely continuous. We also show that when ℋ_2=H^2, then the reproducing kernel Hilbert space _̋1 satisfies an interesting division property, the so-called F-property. In Section 4, we give an analogue of Beurling's theorem in our general context and give an application to cyclic vectors for the backward shift.In Section 5, we show that our main theorem can be applied to (̋b) spaces and range space of co-analytic Toeplitz operators. We also provide a new application in the context of sub-Bergman Hilbert space which was recently studied in <cit.>. § PRELIMINARIESWe first recall some standard facts on reproducing kernel Hilbert spaces. See <cit.> for a detailed treatment of RKHS.§.§ Reproducing kernel Hilbert spaces and multipliersLet $̋ be a Hilbert space of complex valued functions on a setΩ. We say that$̋ is areproducing kernel Hilbert space (RKHS) on Ω if the following two conditions are satisfied:(P1) for every λ∈Ω, the point evaluations f ⟼ f(λ) are bounded on $̋;(P2) for everyλ∈Ω, there exists a functionf∈$̋ such that f(λ)≠ 0.According to the Riesz representation theorem, for each λ∈Ω, there is a function k^_λ in $̋, called the reproducing kernel at pointλ, such thatf(λ) = ⟨ f,k^_λ⟩_,(f ∈)̋.Note that according to (P2), we must havek_λ^̸̋≡ 0. Moreover if(f_n)_nis a sequence in$̋, thenf_n→ f ∀λ∈Ω,lim_n→∞f_n(λ)=f(λ). A multiplier of $̋ is a complex valued functionφonΩsuch thatφ f ∈$̋ for all f ∈$̋. The set of all multipliers of$̋ is denoted by ()̋. Using the closed graph theorem, we see that if φ belongs to ()̋, then the mapM_φ,: |[ ⟶ f ⟼ φ f ].isbounded on $̋. When there is no ambiguity, we simply writeM_φforM_φ,.It is well-known that if we setφ_()̋=M_φ_ℒ()̋,φ∈()̋,then()̋becomes a Banach algebra. Moreover, using a standard argument, we haveM_φ^*k_λ^=φ(λ)k_λ^, (λ∈Ω),which gives|φ(λ)| ≤φ_()̋,(λ∈Ω).See for instance <cit.> or <cit.>.Let_̋1,_̋2be two RKHS such that_̋1⊂_̋2. 
If(f_n)_nis a sequence in_̋1which is convergent in the weak topology of_̋2, we cannot deduce that it also converges in the weak topology of_̋1. However, the following result shows that on the bounded subsets of_̋1the above conclusion holds. Let _̋1,_̋2 be two RKHS on a set Ω such that _̋1⊂_̋2, let (f_n)_n be a sequence in_̋1 bounded in _̋1-norm by a constant C, and let f ∈_̋2. Assume that (f_n )_n converges to f in the weak topology of _̋2. Then the following holds:* f ∈_̋1,* f_n → f in the weak topology of _̋1,* f__̋1≤ C. Since(f_n)_nis uniformly bounded in the norm of_̋1, it has a weakly convergent subsequence. More explicitly, there is a subsequence(f_n_k)_kthat converges to someg∈_̋1in the weak topology of_̋1. Using (<ref>), we easily see that the two functionsfandgcoincide onΩ. Thereforef ∈_̋1. Second, since each_̋1-weakly convergent subsequence of(f_n)_nhas to converge weakly tofin_̋1, we conclude that(f_n)_nitself also converges tofin the weak topology of_̋1. Third, the weak convergence in_̋1impliesf__̋1≤lim inf_n →∞f_n__̋1≤ C,completing the proof.§.§ H^∞ functional calculus for contractionsLetTbe a contraction on a Hilbert space$̋. We recall that T is said to be completely non-unitary if there is no nonzero reducing subspaces _̋0 for T such that T|_̋0 is a unitary operator. We recall that for a completely non-unitary contraction T on $̋, we can define anH^∞-functional calculus with the following properties (see <cit.>):(P3) for every f∈ H^∞, we havef(T)≤f_∞.(P4) If (f_n)_n is a sequence of H^∞ functions which tends boundedly to f on the open unit disc 𝔻 (which means that sup_n f_n_∞<∞ and f_n(z)→ f(z), n→+∞ for every z∈𝔻), then f_n(T) tends to f(T) WOT (for the weak operator topology).(P5) If (f_n)_n is a sequence of H^∞ functions which tends boundedly to f almost everywhere on =∂𝔻, then f_n(T) tends to f(T) SOT (for the strong operator topology). Finally, we recall that every contractionTon a Hilbert space$̋ has a unitary dilation U on 𝒦 (which means that ⊂̋𝒦 and T^n=P_U^n|$̋,n≥ 1) which is minimal (in the sense that𝒦=⋁_-∞^∞U^n $̋).§.§ A general framework In this note, we consider two analytic reproducing kernel Hilbert spaces _̋1 and _̋2on the open unit disc 𝔻 (which means that their elements are analytic on 𝔻) and such that _̋1⊂_̋2. A standard application of the closed graph theorem shows that there is a constant C such thatf__̋2≤ C f__̋1,(f ∈_̋1).Denote by χ the function χ(z)=z, z∈𝔻. Furthermore, we shall assume the following two properties:χ∈(_̋2) χ_(_̋2)≤ 1,and if X:=M^*_χ,_̋2 (recall notation (<ref>)), thenX_̋1⊂_̋1X_ℒ(_̋1)≤ 1.The restriction of X to _̋1 is denoted byX__̋1:=X_|_̋1.§.§ Range SpacesLet 𝒳,𝒴 be two Hilbert spaces and T∈ℒ(𝒳,𝒴). We define ℳ(T) as the range space equipped with the range norm. More explicitly, ℳ(T)=ℛ(T)=T𝒳 andTx_ℳ(T)=P_( T)^⊥x_𝒳, x∈𝒳,where P_( T)^⊥ denotes the orthogonal projection from 𝒳 onto ( T)^⊥. It is easy to see that ℳ(T) is a Hilbert space which is boundedly contained in 𝒴. A result of Douglas <cit.> says that if A∈ℒ(𝒳_1,𝒴) and B∈ℒ(𝒳_2,𝒴), thenℳ(A)≖ℳ(B) ⟺ AA^*=BB^*.Here the notation ℳ(A)≖ℳ(B) means that the Hilbert spaces ℳ(A) and ℳ(B) coincide as sets and, moreover, have the same Hilbert space structure. We also recallthat if A,B∈ℒ(𝒳_1,𝒴) and C∈ℒ(𝒴), thenC⟺ CAA^*C^*≤ BB^*.See also <cit.>. § MULTIPLICATION OPERATORS Note that (<ref>) implies χ_(_̋2)=1. Indeed, according to (<ref>), we have1=sup_z∈𝔻|χ(z)|≤χ_(_̋2)≤ 1.More generally, since ⋂_n≥ 0M_χ,ℋ_2^nℋ_2={0}, we see that M_χ,ℋ_2 is a completely non-unitary contraction. 
Hence, we get the following consequence. Let _̋2 be a reproducing kernel Hilbert space of analytic functions on 𝔻 satisfying (<ref>). Then (_̋2)=H^∞ and for every φ∈ H^∞, we have M_φ,_̋2=φ(M_χ,_̋2) withφ_(_̋2)=φ_∞.Let φ∈ H^∞ and consider the dilates φ_r(z)=φ(rz), 0<r<1, z∈𝔻. If φ(z)=∑_n=0^∞ a_nz^n and f∈ℋ_2, observe that φ_r(M_χ,_̋2)f=∑_n=0^∞ a_n r^n M_χ,_̋2^n f=∑_n=0^∞ a_n r^n χ^n f=φ_r f.Moreover, by (P5), we have φ_r(M_χ,_̋2)f→φ(M_χ,_̋2)f in _̋2 as r→ 1. Then, using (<ref>), we get on one handφ_r(λ)f(λ)=(φ_r(M_χ,_̋2)f)(λ)→ (φ(M_χ,_̋2)f)(λ),r→ 1,(λ∈𝔻),and on the other hand, φ_r(λ)f(λ)→φ(λ)f(λ), r→ 1 (λ∈𝔻). We thus deduce that φ f=φ(M_χ,_̋2)f∈_̋2. In particular, φ∈(_̋2) and M_φ,_̋2=φ(M_χ,_̋2). Moreover, by (P3), we have φ_(_̋2)=φ(M_χ,_̋2)≤φ_∞.If we combine with (<ref>), we get (<ref>), as claimed.Let ℋ_1 and _̋2 be two reproducing kernel Hilbert spaces of analytic functions on 𝔻 such that _̋1⊂_̋2. Assume that _̋1 and _̋2 satisfy (<ref>) and (<ref>). Then the minimal unitary dilation of X__̋1 has an absolutely continuous scalar spectral measure. In particular, for every f,g∈_̋1, there exists u_f,g∈ L^1() such that⟨ X__̋1^nf, g ⟩__̋1=∫_z^n u_f,g(z) dm(z).Let f,g∈_̋1 and let μ_f,g be the scalar spectral measure associated to the minimal unitary dilation of the contraction X__̋1. Then, we have⟨ X__̋1^nf, g ⟩__̋1=∫_z^n dμ_f,g(z).Let us prove that μ_f,g is absolutely continuous with respect to normalized Lebesgue measure m on . Let F be a closed Borel subset ofsuch that m(F)=0. Then, we can construct a bounded sequence of polynomials (q_n)_n such that q_n(z)→χ_F(z), as n→ +∞, for every z∈𝔻. Indeed,Let f be the Fatou function associated to F, that is a function f in the disc algebra (that is the closure of polynomials for the sup norm) such that f=1 on F and |f|<1 on 𝔻∖ F (See <cit.> or <cit.>). Now take f^n, n≥ 0. The functions f^n are still in the disc algebra. Then if we take ε>0, we can find a polynomial q_n such that sup_z∈𝔻|f^n(z)-q_n(z)|≤ε/2.In particular, we have for every z∈ F, |1-q_n(z)|≤ε/2. On the other hand, for z∈𝔻∖ F, we can find n_0 such that for n≥ n_0, |f^n(z)|≤ε/2 (because |f_n(z)|<1 and thus |f^n(z)|→ 0, as n→∞). Therefore, for n≥ n_0, we have|q_n(z)|≤ |q_n(z)-f^n(z)|+|f^n(z)|≤ε/2+ε/2=ε. Hence q_n(z) tends to 1 for z∈ F and to 0 for z∈𝔻∖ F. In other words, q_n tendsto χ_F pointwise. On the other hand, we have of coursesup_z∈𝔻|q_n(z)|≤ 1+ε/2,which proves that the sequence (q_n)_n is also bounded, and we are done. Now, since (q_n)_n converges boundedly to 0 on 𝔻 and since X is a completely unitary contraction, we deduce from (P4) that (q_n(X))_n converges WOT to 0 in ℒ(_̋2). Hence it implies that (q_n(X__̋1)f)_n converges weakly to 0 in _̋2.On the other hand, by von Neumann inequality, we haveq_n(X__̋1)f__̋1≤q_n_∞f__̋1≤ C f__̋1,where C=sup_nq_n_∞<+∞. By Lemma <ref>, we deduce that (q_n(X__̋1)f)_n converges weakly to 0 in _̋1. But, according to (<ref>), we have ⟨ q_n(X__̋1)f, g ⟩__̋1=∫_q_n(z) dμ_f,g(z),which gives thatlim_n→+∞∫_q_n(z) dμ_f,g(z)=0.It remains to apply dominated Lebesgue convergence theorem to get∫_χ_F(z) dμ_f,g(z)=0,which implies that μ_f,g(F)=0. Hence μ_f,g is absolutely continuous with respect to m, as claimed. Let _̋1 and _̋2be two reproducing kernel Hilbert spaces of analytic functions on 𝔻 such that _̋1⊂_̋2. Assume that _̋1 and _̋2 satisfy (<ref>) and (<ref>). Let φ∈ H^∞.ThenM_φ,_̋2^* maps _̋1 into itself, and if f,g∈_̋1, we have⟨ M_φ,_̋2^*f, g ⟩__̋1=∫_φ^*(z)u_f,g(z) dm(z),where φ^*(z)=φ(z). 
Let us first assume that φ is holomorphic on 𝔻 and let us consider the Taylor series of φ,φ(z)=∑_n=0^∞ a_n z^n. Then we have M_φ,_̋2^*=φ(M_χ,_̋2)^*=∑_n=0^∞a_nX^n.Since X_̋1⊂_̋1, the last equation implies that M_φ,_̋2^*_̋1⊂_̋1. Now using that ∑_n=0^∞|a_n|<∞ and (<ref>), we get ⟨ M_φ,_̋2^*f, g ⟩__̋1 = ∑_n=0^∞a_n⟨ X__̋1^n f,g⟩__̋1= ∑_n=0^∞a_n∫_𝕋z^n u_f,g(z) dm(z)= ∫_𝕋∑_n=0^∞a_n z^n u_f,g(z) dm(z)= ∫_𝕋φ^*(z) u_f,g(z) dm(z).This proves (<ref>) for φ which is holomorphic on 𝔻. We also observe that| ⟨ M_φ,_̋2^*f, g ⟩__̋1|≤ ∫_𝕋|φ^*(z)| |u_f,g(z)| dm(z)≤ φ_∞∫_𝕋|u_f,g(z)| dm(z).But by spectral theorem, we know that ∫_𝕋|u_f,g(z)| dm(z)=μ_f,g≤f__̋1g__̋1, which gives M_φ,_̋2^*f__̋1≤φ_∞f__̋1. Now let φ∈ H^∞ and define the dilates φ_r(z)=φ(rz), 0<r<1, z∈𝔻. Observe that φ_r are holomorphic on 𝔻. By the previous argument, we get that M_φ_r,_̋2^* maps _̋1 into itself and ⟨ M_φ_r,_̋2^*f, g ⟩__̋1=∫_𝕋φ_r^* u_f,g dm, f,g∈_̋1.Since φ_r converges boundedly to φ on 𝔻 as r→ 1, and since M_χ,_̋2 is a completely non unitary contraction on _̋2, we get that M_φ_r,_̋2^*f converges weakly to M_φ,_̋2^*f in _̋2 as r→ 1. On the other hand, using (<ref>), we haveM_φ_r,_̋2^*f__̋1≤φ_r_∞f__̋1≤φ_∞f__̋1.Lemma <ref> now implies that M_φ,_̋2^*f belongs to _̋1 and M_φ_r,_̋2^*f converges weakly to M_φ,_̋2^*f in _̋1 as r→ 1. Letting r→ 1 in (<ref>) and using dominated convergence, we deduce that formula (<ref>) is satisfied by φ, completing the proof. It follows immediately from (<ref>) that for φ∈ H^∞, we haveM_φ,_̋2^*_ℒ(_̋1)≤φ_∞. Given a bounded operator T on a Hilbert space $̋, the family of all closedT-invariant subspaces of$̋ is denoted by (T). Let _̋1 and _̋2be two reproducing kernel Hilbert spaces of analytic functions on 𝔻 such that _̋1⊂_̋2. Assume that _̋1 and _̋2 satisfy (<ref>) and (<ref>). Then, for every φ∈ H^∞, we have(X__̋1)⊂(M_φ,_̋2^*|_̋1).Let φ∈ H^∞, φ_r(z)=φ(rz), 0<r<1, and let E∈(X__̋1). Note that (<ref>) implies that M_φ_r,_̋2^* E⊂ E. On the other hand, as we have seen in the proof of Theorem <ref>, M_φ_r,_̋2^* → M_φ,_̋2^*, as r→ 1, in the weak operator topology of ℒ(_̋1). Since a norm-closed subspace is also weakly closed <cit.>, we conclude that M_φ,_̋2^* E ⊂ E, as claimed.To conclude this section, we show that Theorem <ref> has an interesting application in relation with the F-property. Recall that a linear manifold V of H^1 is said to have the F-property if whenever f ∈ V and θ is an inner function which is lurking in f, i.e., f/θ∈ H^1 or equivalently θ divides the inner part of f, then we actually have f/θ∈ V. This concept was first introduced by V. P. Havin <cit.> and it plays a vital role in the analytic function space theory. Several classical spaces have the F-properties. the list includes Hardy spaces H^p, Dirichlet space 𝒟, BMOA, VMOA, and the disc algebra 𝒜. See <cit.>. However, for the Bloch spaces 𝔅 and 𝔅_0, we know that 𝔅∩ H^p and 𝔅_0 ∩ H^p do not have the F-property <cit.>. Using the tools developed in Section <ref>, we will see that in the situation when _̋1⊂ H^2 satisfies (<ref>), then _̋1 has the F-property. First, let us note that_̋2=H^2 satisfies (<ref>) and M_χ,_̋2=S is the classical forward shift operator. 
Thus, X=M_χ,_̋2^*=S^* is the backward shift operator(S^*f)(z)=f(z)-f(0)/z, f∈ H^2,z∈𝔻.In this context, if _̋1 is a reproducing kernel Hilbert space such that _̋1⊂ H^2, the condition (<ref>) can be rephrased asS^*_̋1⊂_̋1S^*|_̋1≤ 1.Recall that for ψ∈ L^∞(𝕋), the Toeplitz operator T_ψ is definedon H^2 by T_ψ(f)=P_+(ψ f) where P_+ is the Riesz projection (the orthogonal projection from L^2(𝕋) onto H^2). If φ∈ H^∞=(H^2), then M_φ,H^2=T_φ and M_φ,H^2^*=T_φ. In this situation, we get the following result. Let _̋1 be a reproducing kernel Hilbert space contained in H^2, and assume that it satisfies (<ref>). Then the space _̋1 has the F-property. Moreover, if f∈_̋1 and θ is an inner function which divides f, thenf/θ__̋1≤f__̋1.Assume that f∈_̋1 and that θ is an inner function so that f/θ∈ H^1. In fact, by Smirnov Theorem <cit.>, we actually have ψ:=f/θ∈ H^2. Therefore, T_θ (f) = P_+(θ f)= P_+(ψ) = ψ.But according to Theorem <ref> and Remark <ref>, T_θ acts contractively on _̋1. Hence ψ=T_θ (f)∈_̋1 and f/θ__̋1=T_θf__̋1≤f__̋1,as claimed.§ INVARIANT SUBSPACES AND CYCLICITYThe following result says that under certain circumstances, the closed invariant subspaces of X__̋1=X_|_̋1 are exactly the trace on _̋1 of the closed invariant subspaces of X. Despite the following characterization, the implication (i) ⟹ (ii) is the essential part of the result. Let _̋1 and _̋2be two analytic reproducing kernel Hilbert spaces on 𝔻 such that _̋1⊂_̋2 and satisfying (<ref>) and (<ref>). Assume that there exists an outer function φ∈ H^∞ such that ℛ(M_φ,_̋2^*)⊂_̋1. Then, for every ℰ⊂_̋1, the following assertions are equivalent.* ℰ is a closed subspace of _̋1 invariant under X__̋1;*there is a closed subspace E of _̋2 invariant under X=M_χ,_̋2^* such that ℰ=E∩_̋1.Moreover, ℰ=_̋1 if and only if E=_̋2.The proof will be based on the following lemma, which extends <cit.> in our general context. Let _̋1 and _̋2be two analytic reproducing kernel Hilbert spaces on 𝔻 such that _̋1⊂_̋2 and satisfying (<ref>) and (<ref>). Assume that there exists an outer function φ∈ H^∞ such that ℛ(M_φ,_̋2^*)⊂_̋1. Then, for every ℰ∈(X__̋1), the space M_φ,_̋2^*ℰ is dense in ℰ with respect to the norm topology of _̋1.According to Corollary <ref>, we know that M_φ,_̋2^*ℰ⊂ℰ. Now let g∈ℰ, g⊥ M_φ,_̋2^*ℰ in the _̋1-topology. In particular, for every n≥ 0, we have0=⟨ M_φ,_̋2^*X__̋1^n g,g ⟩__̋1.Observe now that M_χ,_̋2^n M_φ,_̋2=M_χ^nφ,_̋2, which gives M_φ,_̋2^*X^n=M_χ^nφ,_̋2^*. Hence, by Theorem <ref>, we get0=⟨ M_χ^nφ,_̋2^* g,g ⟩__̋1=∫_𝕋φ^*(z) z^n u_g,g(z) dm(z),for every n≥ 0. We thus deduce that φ^* u_g,g∈ H_0^1. Since φ^* is outer and u_g,g∈ L^1(𝕋), Smirnov Theorem <cit.> implies that u_g,g∈ H_0^1. Since u_g,g≥ 0, this gives u_g,g=0, that is g=0, completing the proof of the Lemma.Proof of Theorem <ref>. 0,1cm (ii)⟹ (i): Let E be a closed subspace of _̋2, invariant under X=M_χ,_̋2^* such that ℰ=E∩_̋1. First, let us check that ℰ is a closed subspace of _̋1.The verification essentially owes to (<ref>). To do so, let f ∈_̋1 be in the _̋1-closure of E ∩_̋1. Then there is a sequence (f_n)_n in E∩_̋1 which converges to f in the norm topology of _̋1. Since _̋1 is boundedly contained in _̋2, the sequence (f_n)_n also converges to fin _̋2. Since E is closed in _̋2, the function f must belong to E. Hence, f∈ℰ=E∩_̋1, which proves thatℰ is closed in _̋1. The fact that ℰ is invariant under X__̋1=M_χ,_̋2^*|_̋1 is immediate. 
(i)⟹ (ii): A standard argument using the closed graph theorem implies that, according to ℛ(M_φ,_̋2^*)⊂_̋1, the mapping M_φ,_̋2^* from _̋2 into _̋1 is a bounded operator. Now let ℰ be a closed subspace of _̋1, and assume that ℰ is invariant under X__̋1. Denote by E the closure of ℰ in the _̋2-topology. It is clear that E is a closed subspace of _̋2 which is invariant under X. Let us prove that ℰ=E∩_̋1.The inclusion ℰ⊂ E∩_̋1 is trivial. For the reverse inclusion, let us verify thatM_φ,_̋2^* E⊂ℰ.Let f∈ E. By definition, there is a sequence (f_n)_nin ℰ which converges to f in the _̋2-topology. Then, since M_φ,_̋2^* is bounded from _̋2 into _̋1, the sequence (M_φ,_̋2^*f_n)_n tends to M_φ,_̋2^*f in the _̋1-topology. Since f_n∈ℰ, Corollary <ref> implies that M_φ,_̋2^*f_n∈ℰ and since ℰ is closed in _̋1, then M_φ,_̋2^*f∈ℰ, which proves (<ref>). In particular, we haveM_φ,_̋2^*(E∩_̋1)⊂ℰ,and since E∩_̋1 is a closed subspace of _̋1 invariant with respect to X__̋1, it follows from Lemma <ref> that M_φ,_̋2^*(E∩_̋1) is dense in E∩_̋1, which impliesE∩_̋1⊂ℰ.Thus we have ℰ=E∩_̋1.It remains to prove that ℰ=_̋1 if and only if E=_̋2. Assume first that ℰ=_̋1. Then ℛ(M_φ,_̋2^*)⊂ E. But note that (M_φ,_̋2)={0}, whence ℛ(M_φ,_̋2^*) is dense in _̋2. Hence we get E=_̋2. Conversely, assume that E=_̋2. Thenℰ=E∩_̋1=_̋2∩_̋1=_̋1,which concludes the proof.As already noted, ℛ(M_φ,_̋2^*) is alwaysdense in _̋2 and thus under the hypothesis of Theorem <ref> (that is if there exists an (outer) function φ∈ H^∞ such that ℛ(M_φ,_̋2^*)⊂_̋1), then automatically _̋1 is dense in _̋2.Theorem <ref> has an immediate application in characterization of cyclic vectors. Let _̋1 and _̋2be two analytic reproducing kernel Hilbert spaces on 𝔻 such that _̋1⊂_̋2 and satisfying (<ref>) and (<ref>). Suppose that there exists an outer function φ∈ H^∞ such that ℛ(M_φ,_̋2^*)⊂_̋1. Let f∈_̋1. Then the following assertions are equivalent.* f is cyclic for X=M_χ,_̋2^*.* f is cyclic for X__̋1=X_|_̋1. (i)⟹ (ii): Assume that f is cyclic for X in _̋2 and denote by ℰ the subspace of _̋1 defined byℰ=(X__̋1^nf:n≥ 0)^_̋1.It is clear that ℰ is a closed subspace of _̋1, invariant with respect to X__̋1. Assume that ℰ≠_̋1. Then, according to Theorem <ref>,there exists a closed subspace E of _̋2, E≠_̋2, invariant under X such that ℰ=E∩_̋1. In particular, f∈ E, and thus it is not cyclic for X, which is contrary to the hypothesis. Thus ℰ=_̋1 and f is cyclic for X__̋1.(ii)⟹ (i): Let g∈ℛ(M_φ,_̋2^*). Theng∈_̋1 and if f is cyclic for X__̋1, there exists a sequence of polynomials (p_n) such thatp_n(X__̋1)f-g__̋1→ 0,n→∞.Since _̋1 is contained boundedly in _̋2, then we havep_n(X__̋2)f-g__̋2→ 0,n→∞.Thus,ℛ(M_φ,_̋2^*)⊂(X^*^nf:n≥ 0)^_̋2. Since ℛ(M_φ,_̋2^*) is dense in _̋2, we get (X^*^nf:n≥ 0)^_̋2=_̋2,completing the proof.We can apply Theorem <ref>and Corollary <ref> to some specific reproducing kernel Hilbert spaces contained in the Hardy space H^2 on 𝔻. Let _̋1be a reproducing kernel Hilbert space contained in H^2 that satisfies (<ref>) and assume that there exists an outer function φ∈ H^∞ such that T_φH^2⊂_̋1. Then, for every ℰ⊊_̋1, the following assertions are equivalent.* ℰ is a closed subspace of _̋1 invariant under X__̋1;*there is an inner function Θ such that ℰ=K_Θ∩_̋1.Moreover, if f∈_̋1, then f is cyclic for S^*|_̋1 if and only if f is cyclic for S^*.It is sufficient to combine Theorem <ref> andCorollary <ref> with Beurling's theorem.Note that the hypothesis T_φH^2⊂_̋1 implies that polynomials belong to _̋1. 
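The model spaces K_Θ appearing in the theorem above can be made concrete in the simplest, one-dimensional case. The following Python sketch is an illustration only and is not taken from the paper (the truncation order N, the point a and the test polynomial are arbitrary choices): a function of H^2 is represented by its first N Taylor coefficients, so that the backward shift S^* simply drops the first coefficient; for a single Blaschke factor Θ(z)=(z-a)/(1-āz), the model space K_Θ=H^2⊖Θ H^2 is spanned by the Szegő kernel k_a(z)=1/(1-āz), and the sketch checks that S^*k_a=ā k_a (so that the span of k_a is invariant under S^*) and that k_a is orthogonal to Θ H^2.

import numpy as np

N = 200                      # truncation order for Taylor coefficients
a = 0.4 + 0.3j               # zero of the Blaschke factor, |a| < 1

# A function f in H^2 is encoded by the vector of its first N Taylor
# coefficients; the H^2 inner product is then the Euclidean one.
def backward_shift(f):
    """S* f = (f(z) - f(0))/z, i.e. drop the first Taylor coefficient."""
    return np.append(f[1:], 0.0)

# Szego kernel k_a(z) = 1/(1 - conj(a) z): coefficients conj(a)^n.
k_a = np.conj(a) ** np.arange(N)

# Blaschke factor Theta(z) = (z - a)/(1 - conj(a) z): Taylor coefficients.
theta = np.empty(N, dtype=complex)
theta[0] = -a
theta[1:] = (1 - abs(a) ** 2) * np.conj(a) ** np.arange(N - 1)

# 1) S* k_a = conj(a) k_a, so span{k_a} = K_Theta is invariant under S*.
print(np.allclose(backward_shift(k_a), np.conj(a) * k_a))

# 2) k_a is orthogonal to Theta*g for a polynomial g (here g(z) = 1 + 2z + z^3),
#    since <Theta g, k_a> = (Theta g)(a) = Theta(a) g(a) = 0.
g = np.zeros(N, dtype=complex)
g[[0, 1, 3]] = [1, 2, 1]
theta_g = np.convolve(theta, g)[:N]          # Taylor coefficients of Theta*g
print(abs(np.vdot(k_a, theta_g)) < 1e-10)    # vdot conjugates its first argument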
Regarding the last part of Theorem <ref>, let us mention that a well-known theorem of Douglas–Shapiro–Shields <cit.> says that a function f in H^2 is cyclic for S^* if and only if f has no bounded type meromorphic pseudo continuation across 𝕋 to 𝔻_e={z:1<|z|≤∞}.§ APPLICATIONSIn this section, we give some examples of RKHS for which our main Theorem <ref> can be applied. §.§ A general RKHSLet _̋2 be an analytic RKHS on 𝔻 satisfying (<ref>). Let φ∈ H^∞ and _̋1:= (M^*_φ, _̋2). Recall the definition of the range space from Section <ref>. Then _̋1 is also an analytic RKHS on 𝔻 which is contained in _̋2. Observe that _̋1 satisfies (<ref>). Indeed, since M_φ, _̋2M_χ, _̋2=M_χ, _̋2M_φ, _̋2, we have XM^*_φ, _̋2=M^*_φ, _̋2X, which implies that X_̋1 ⊂_̋1. Moreover, if f = M^*_φ,_̋2g ∈_̋1 for some g∈ ( M^*_φ,_̋2)^⊥, thenXf__̋1 = X M^*_φ,_̋2g__̋1=M^*_φ,_̋2 X g__̋1= Xg__̋2≤ g__̋2=f__̋1Thus _1 satisfies (<ref>). In this context, we get immediately from Theorem <ref> the following.Let _̋2 be an analytic RKHS satisfying (<ref>). Let φ be an outer function in H^∞ and let _̋1:= (M^*_φ, _̋2). Then, for every ℰ⊂_̋1, the following assertions are equivalent.* ℰ is a closed subspace of _̋1, invariant under X__̋1;*There is a closed subspace E of _̋2, invariant under X=M_χ,_̋2^* such that ℰ=E∩_̋1.Moreover ℰ=_̋1 if and only if E=_̋2. §.§ The space ℳ(φ)Let _̋2=H^2 be the Hardy space on 𝔻, φ an outer function in H^∞ and _̋1=ℳ(T_φ) which we denote for simplicity ℳ(φ). The space H^2 trivially satisfies (<ref>)and according to the discussion at the beginning of Subsection <ref>, the space ℳ(φ) is an analytic RKHS contained in H^2 and satisfying(<ref>) (or equivalently (<ref>)). Again, for simplicity, we writeX_φ=X_ℳ(φ)=S^*|ℳ(φ).In this context, we can apply Theorem <ref> which immediately gives the following. Let φ be an outer function. Then the following assertions are equivalent.* ℰ is a closed subspace of ℳ(φ), ℰ≠ℳ(φ), and ℰ is invariant under X_φ.*There is an inner function Θ such that ℰ=K_Θ∩ℳ(φ). §.§ de Branges–Rovnyak space (̋b)Let b∈𝐛(H^∞)–the closed unit ball of H^∞. The de Branges–Rovnyak space (̋b) is defined as(̋b)=ℳ((I-T_bT_b)^1/2).For details on de Branges–Rovnyak spaces, we refer to <cit.>. Here we will just recall what will be useful for us. It is well–known that (̋b) is an analytic RKHS contractively contained in H^2 and invariant with respect to S^*. Moreover, the operator X_b=S^*|(̋b) acts as a contraction on (̋b). In particular, the space (̋b) satisfies the hypothesis (<ref>). Assume now that b is a non-extreme point of 𝐛(H^∞), meaning that log(1-|b|)∈ L^1(). Thus, there exists a unique outer function a such that a(0)>0 and |a|^2+|b|^2=1 a.e. on 𝕋. This function a is called the pythagorean mate of b. It is well-known that ℛ(T_a)⊂(̋b). We can then apply Theorem <ref> to _̋1=(̋b) and _̋2=H^2 to recover the following result due to Sarason (<cit.>, Theorem 5).Let b be a non-extreme point of the closed unit ball of H^∞, and let ℰ be a closed subspace of (̋b), ℰ≠(̋b). Then the following are equivalent.* ℰ is invariant under X_b.*There exists an inner function Θ such that ℰ=K_Θ∩(̋b).As already noted, hypothesis of Theorem <ref> implies that polynomials belongs to _̋1. In the case when _̋1=(̋b), we know that it necessarily implies that b is non-extreme. 
In the extreme case, the backward shift invariant subspaces have been described by Suarez <cit.>, also using some Sz.-Nagy-Foias model theory, butthe situation is rather more complicated.§.§ Sub-Bergman Hilbert spaceThe Bergman space A^2 on 𝔻 is defined as the space of analytic functions f on 𝔻 satisfyingf_A^2^2:=∫_𝔻 |f(z)|^2 dA(z) < ∞,where dA(z) is the normalized area measure on 𝔻. In <cit.>, an analogue of de Branges–Rovnyak spaces was considered in this context. Recall that the Toeplitz operator on A^2(𝔻) with symbol φ∈ L^∞(𝔻) is defined asT_φ(f)=P_A^2(φ f),where P_A^2 is the Bergman projection (that is the orthogonal projection from L^2(𝔻,dA) onto A^2).It is clear that T_φ^*=T_φ. Given φ∈ L^∞(𝔻), we define the sub–Bergman Hilbert space (̋φ) as(̋φ)=ℳ((I-T_φ T_φ)^1/2).In other words, (̋φ)=(I-T_φ T_φ^*)^1/2A^2 and it is equipped with the inner product⟨ (I-T_φ T_φ^*)^1/2f,(I-T_φ T_φ^*)^1/2g ⟩_(φ):=⟨ f, g⟩_A^2,for every f,g∈ A^2⊖ (I-T_φ T_φ^*). We keep the same notation as the de Branges–Rovnyak spaces, but there will be no ambiguity because in this subsection,the ambient space is A^2 (in contrast with the de Branges–Rovnyak spaces for which the ambient space is H^2). We refer the reader to <cit.> for details about this space.The shift operator (also denoted S in this context), defined as S=T_z, is clearly a contraction and S^*=T_z. As we have seen, the de Branges–Rovnyak spaces are invariant with respect to the backward shift operator which acts as a contraction on them. In the context of sub–Bergman Hilbert spaces, the analogue of this property is also true. The proof is the same but we include it for completeness.Let b∈𝐛(H^∞). Then S^* acts as a contraction on (̋b).We first prove that S^* acts as a contraction on (̋b). According to (<ref>), we should prove thatS^*(I-T_bT_b)S≤ I-T_bT_b,that isT_z(I-T_bT_b)T_z≤ I-T_bT_b.But, if φ,ψ∈ L^∞(𝔻,dA) and at least one of them is in H^∞, thenT_ψT_φ=T_ψφ.See <cit.>. Then (<ref>) is equivalent to T_|z|^2(1-|b|^2)≤ T_1-|b|^2, that is0≤ T_(1-|z|^2)(1-|b|^2).Since (1-|z|^2)(1-|b|^2) ≥ 0 on 𝔻, the latter inequality is satisfied (see also <cit.>) and thus S^* acts as a contraction on (̋b).To pass to the (̋b) case, we use a well–known relation between (̋b) and (̋b): let f∈ A^2; then f∈(̋b) if and only if T_bf∈(̋b) andf_(̋b)^2=f_A^2^2+T_bf^2_(̋b).So let f∈(̋b). Since T_bS^* f=S^* T_bf and (̋b) is invariant with respect to S^*, we get that T_bS^*f∈(̋b), whence S^*f∈(̋b) andS^*f_(̋b)^2 = S^* f_A^2^2+T_bS^* f_(̋b)^2= S^* f_A^2^2+S^*T_b f_(̋b)^2≤f_A^2^2+T_b f_(̋b)^2=f_(̋b)^2.Hence S^* is a contraction on (̋b), completing the proof.According to Lemma <ref>, we see that A^2 satisfies (<ref>) and (̋b) satisfies (<ref>). We will show that under the additional hypothesis that b is a non-extreme point of the closed unit ball of H^∞, we can apply our Corollary <ref> to _̋1=(̋b) and _̋2=A^2. Let b be a non–extreme point of the unit ball of H^∞ and a its pythagorean mate. Then the following are equivalent.* ℰ is a closed subspace of (̋b), invariant under X_b=S^*|(̋b).*There is a closed subspace E of A^2, invariant under S^*, such that ℰ=E∩(̋b).It is known that since b is analytic, then (̋b)=(̋b̅) with equivalent norms, see <cit.>. Moreover, according to (<ref>), we haveI-T_b̅T_b=T_1-|b|^2=T_|a|^2=T_a̅T_a.This identity implies by (<ref>) that (̋b̅)≖(T_a̅). 
Hence (̋b)=(T_a̅)=(T_a^*) with equivalent norms.We then apply Corollary <ref> to_̋2=A^2 and _̋1=(̋b)=(T_a^*), which gives the result.We would like to warmly thank the anonymous referee for his/her remarks leading to a real improvement of the paper. In an earlier version, we had stated Theorem <ref> with the additional hypothesis that for every ℰ∈(X__̋1), the space M_φ,_̋2^*ℰ is dense in ℰ. It was the referee's suggestion to use the Sz.-Nagy–Foias theory to obtain that particular property as a consequence of the other hypothesis in the theorem (see Lemma <ref>).mscplain.bst10MR566739Aleksandrov A.B.,Invariant subspaces of the backward shift operator in the space H^p(p∈ (0, 1)).Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 92:7–29, 318, 1979.Investigations on linear operators and the theory of functions, IX.Aleman-MalmanAleman A.; Malman B.,Hilbert spaces of analytic functions with a contractive backward shift.Preprint on ArXiv:1805.11842.MR2244001 Aleman A.; Richter S;Sundberg C.,Invariant subspaces for the backward shift on Hilbert spaces of analytic functions with regular norm.InBergman spaces and related topics in complex analysis, volume 404 ofContemp. Math., pages 1–25. Amer. Math. Soc., Providence, RI, 2006.Sz-NagyBercovici H.; Foias C.; Kerchy L.; Sz.-Nagy B., Harmonic analysis of operators on Hilbert space Universitext, Springer Verlag. Revised and Enlarged Edition, 2010.MR1903737 Bolotnikov, V.; Rodman, L.,Finite dimensional backward shift invariant subspaces of Arveson spaces.Linear Algebra Appl., 349:265–282, 2002.MR0203464 Douglas, R.G.,On majorization, factorization, and range inclusion of operators on Hilbert space.Proc. Amer. Math. Soc., 17:413–415, 1966.MR0270196 Douglas, R.G.; Shapiro, H.S.; Shields, A.L.,Cyclic vectors and invariant subspaces for the backward shift operator.Ann. Inst. Fourier (Grenoble), 20(fasc. 1):37–76, 1970. Duren Duren P.,Theory of H^p spaces, volume 38 ofPure and Applied Mathematics,.Academic Press, New York-London, 1970.MR3185375 El-Fallah, O.; Kellay, K.; Mashreghi, J.; Ransford, T.,A primer on the Dirichlet space, volume 203 ofCambridge Tracts in Mathematics.Cambridge University Press, Cambridge, 2014.MR3497010 Fricain, E.; Mashreghi, J.,The theory of ℋ(b) spaces. Vol. 1, volume 20 ofNew Mathematical Monographs.Cambridge University Press, Cambridge, 2016.MR3617311 Fricain, E.; Mashreghi, J.,The theory of ℋ(b) spaces. Vol. 2, volume 21 ofNew Mathematical Monographs.Cambridge University Press, Cambridge, 2016.Garnett Garnett J.,Bounded Analytic Functions.Graduate Texts in Mathematics 236. Revised First Edition, Springer, 2007.MR2032687 Girela, D.; González,C.,Division by inner functions.InProgress in analysis, Vol. I, II (Berlin, 2001), pages 215–220. World Sci. Publ., River Edge, NJ, 2003.MR2199173 Girela, D.; González,C.; Peláez, J. Á.,Multiplication and division by inner functions in the space of Bloch functions.Proc. Amer. Math. Soc., 134(5):1309–1314, 2006.MR2290751 Girela, D.; González,C.; Peláez, J. Á.,Toeplitz operators and division by inner functions.InProceedings of the First Advanced Course in Operator Theory and Complex Analysis, pages 85–103. Univ. Sevilla Secr. Publ., Seville, 2006.MR0289783 Havin, V.P.,The factorization of analytic functions that are smooth up to the boundary.Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 22:202–205, 1971.MR2034817 Izuchi, K.; Nakazi, T.,Backward shift invariant subspaces in the bidisc.Hokkaido Math. 
J., 33(1):247–254, 2004.Koosis Koosis, P.,An introduction to H^p-spaces, vol. 115 ofCambridge Tracts in Mathematics Cambridge University Press, Cambridge 1998.MR1892228 Lax, P.D.,Functional analysis.Pure and Applied Mathematics (New York). Wiley-Interscience [John Wiley & Sons], New York, 2002.MR2500010 Mashreghi, J.,Representation theorems in Hardy spaces, volume 74 ofLondon Mathematical Society Student Texts.Cambridge University Press, Cambridge, 2009.Paulsen Paulsen, V.I.; Raghupathi, M.,An introduction to the theory of reproducing kernel Hilbert spaces, volume 152 ofCambridge Studies in Advanced Mathematics.Cambridge University Press, Cambridge, 2016.Sarason-86-OT Sarason, D.,Doubly shift-invariant spaces in H 2.J. Operator Theory, 16(1):75–97, 1986.sarason1994sub Sarason, D.,Sub-Hardy Hilbert spaces in the unit disk.Wiley-interscience, 1994.Shirokov Shirokov N.A.,Analytic functions smooth up to the boundary.Lecture Notes in Mathematics, 1312, Springer Verlag, Berlin, 1988.Suarez-Indiana-97 Suárez, D.,Backward shift invariant spaces in H 2.Indiana Univ. Math. J., 46(2):593–619, 1997.sultanic2006sub Sultanic, S.,Sub-Bergman Hilbert spaces.Journal of mathematical analysis and applications, 324(1):639–649, 2006.zhu1996sub Zhu, K.,Sub-Bergman Hilbert spaces on the unit disk.Indiana University Mathematics Journal, 45(1):165–176, 1996.MR1990528 Zhu, K.,Sub-Bergman Hilbert spaces in the unit disk. II.J. Funct. Anal., 202(2):327–341, 2003.zhu2007operator Zhu, K.,Operator theory in function spaces.Number 138. American Mathematical Soc., 2007.Let $̋ be a reproducing kernel Hilbert space contained inH^2that satisfies(H1). Letφ∈ H^∞. Then$̋ is invariant under T_φ. Moreover, if[ T_φ,: ⟶ f ⟼ T_φf, ]then T_φ, is bounded andT_φ,_()̋≤ φ_∞.First note that if p is a polynomial, say p(z)=∑_k=0^N a_k z^k, and if p^* is the polynomial associated to p by p^*(z)=p(z), thenT_pf=p^*(X_)̋ffor all f∈$̋. Indeed, we haveT_pf= ∑_k=0^Na_k T_z^k f = ∑_k=0^Na_kS^*^kf= ∑_k=0^Na_k X_^̋kf =p^*(X_)̋f.In particular, we get from(H1)and von Neuman's inequality that$̋ is invariant under T_p andT_p,_()̋≤ p_∞,where T_p,=T_p|$̋. Now, we should extend (<ref>) for an arbitrary functionφ∈ H^∞. For that purpose, we use a standard approximation argument. So, letφ∈ H^∞and let(p_n)_n≥ 1be the sequence of Fejér means of its Fourier sums, that isp_n=1/n+1(s_0+s_1+…+s_n),withs_j(e^iθ)=∑_k=0^j φ̂(k) e^ikθ. Recall thatp_n_∞≤φ_∞and(p_n)_n≥ 1converges toφin the weak-star topology ofL^∞(). Thus, according to (<ref>),T_p_nf _≤ φ_∞ f_for anyf∈$̋. Hence, by passing to a subsequence if needed, there exists a g∈$̋ such thatT_p_nfconverges weakly togandg_≤φ_∞f_. The argument finishes if we show thatg = T_φf.Fixz∈, and letk_k^$̋ denote the reproducing kernel of $̋. Then, on one hand, we havelim_n→∞(T_p_nf)(z)=lim_n→∞⟨ T_p_nf,k_z^⟩̋_=̋⟨ g,k_z^⟩̋_=̋g(z).On the other hand, since⊂̋H^2, we can write(T_p_nf)(z)=⟨ T_p_nf,k_z ⟩_2=⟨p_n f,k_z⟩_2=1/2π∫_0^2πp_n(e^iθ)f(e^iθ)k_z(e^iθ) dθ.Sincefk_z∈ L^1()and(p_n)_n≥ 1converges toφin the weak-star topology ofL^∞(), we havelim_n→∞(T_p_nf)(z)=1/2π∫_0^2πφ(e^iθ)f(e^iθ)k_z(e^iθ) dθ=⟨φf,k_z⟩_2=(T_φf)(z).Therefore, by uniqueness of limit, we haveg=T_φf∈$̋.
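The Toeplitz computations above lend themselves to a quick numerical check. The following Python sketch is illustrative only and is not taken from the paper (the Blaschke zero a, the polynomial ψ and the number of sample points are arbitrary choices): it evaluates T_ψ f=P_+(ψ f) by keeping the non-negative Fourier frequencies computed with the FFT, and verifies on an example the identity T_θ̄(θψ)=ψ for an inner (Blaschke) factor θ, which is exactly the computation used for the F-property.

import numpy as np

M = 1 << 12                          # number of sample points on the unit circle
z = np.exp(2j * np.pi * np.arange(M) / M)

a = 0.6                              # zero of the Blaschke (inner) factor, |a| < 1
theta = (z - a) / (1 - a * z)        # inner: |theta| = 1 on the circle
psi = 1 + 0.5 * z + 0.25 * z ** 3    # an H^2 function (here a polynomial)

def riesz_projection(values):
    """P_+ : keep the Fourier coefficients of indices 0,...,M/2-1 (the
    remaining indices correspond to negative frequencies up to aliasing)."""
    c = np.fft.fft(values) / M
    c[M // 2:] = 0.0
    return np.fft.ifft(c * M)

def toeplitz(symbol, f):
    """T_symbol f = P_+(symbol * f), computed on the sampled circle."""
    return riesz_projection(symbol * f)

f = theta * psi                      # f is divisible by the inner function theta
print(np.allclose(toeplitz(np.conj(theta), f), psi))   # T_{conj(theta)} f = psi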
http://arxiv.org/abs/1709.09396v2
{ "authors": [ "Emmanuel Fricain", "Javad Mashreghi", "Rishika Rupam" ], "categories": [ "math.FA", "30J05, 30H10, 46E22" ], "primary_category": "math.FA", "published": "20170927090854", "title": "Backward Shift Invariant Subspaces in Reproducing Kernel Hilbert Spaces" }
Let τ_n be a random tree distributed as a Galton-Watson tree with geometric offspring distribution conditioned on {Z_n=a_n}, where Z_n is the size of the n-th generation and (a_n, n∈^*) is a deterministic positive sequence. We study the local limit of these trees τ_n as n→∞ and observe three distinct regimes: if (a_n, n∈^*) grows slowly, the limit consists in an infinite spine decorated with finite trees (which corresponds to the size-biased tree for critical or subcritical offspring distributions); in an intermediate regime, the limiting tree is composed of an infinite skeleton (that does not satisfy the branching property) still decorated with finite trees; and, if the sequence (a_n, n∈^*) increases rapidly, a condensation phenomenon appears and the root of the limiting tree has an infinite number of offspring.
Very fat geometric Galton-Watson trees
Romain Abraham, Aymen Bouaziz, Jean-François Delmas
December 30, 2023
==============================================================================================================================
§ INTRODUCTION
A Galton-Watson (GW for short) process (Z_n, n≥ 0) describes the size of an evolving population where, at each generation, every extant individual reproduces according to the same offspring distribution p independently of the rest of the population. The associated genealogical tree τ is called a GW tree. Let μ denote the mean number of offspring per individual, that is the mean of p. When p is non degenerate, a classical result states that if μ<1 (sub-critical case) or μ=1 (critical case), then the population becomes a.s. extinct (i.e. Z_n=0 for some n≥ 0 a.s.), whereas if μ>1 (super-critical case), the population has a positive probability of non-extinction. Another classical result from Kesten's work <cit.> describes the local limit in distribution of a critical or subcritical GW tree conditioned on {Z_n>0} as n→∞, which can be seen as a critical or sub-critical GW tree conditioned on non-extinction. The limiting tree is the so-called size-biased tree or Kesten tree, and it can also be viewed as a two-type GW tree. There are other ways of conditioning the tree to be large: conditioning on having a large total population size, or a large number of leaves... In the critical case, all these conditionings lead to the same local limit, see <cit.> and the references therein. In the sub-critical case, a condensation phenomenon (i.e. a vertex with an infinite number of offspring at the limit) may happen, see <cit.> or <cit.> and the references therein, but even there, there can be only two different limiting trees, a size-biased GW tree or a condensation tree. In order to have different limits, an idea is to condition the tree to be even bigger, i.e. to consider conditionings of the form {Z_n=a_n} for some positive deterministic sequence (a_n, n∈^*) possibly converging to infinity. Some results on branching processes conditioned on their limit behaviour already appeared in previous works, see for instance <cit.> where the distributions of the conditioned Yule process (which corresponds to a super-critical branching process) or a critical binary branching are described via an infinitesimal generator and a martingale problem.
The first study of locallimits for GW trees withsuch aconditioning appearsin <cit.>where itis proven that,if pis a criticaloffspring distributionwith finite variance,thenthetreeconditioned on{Z_n=a_n}convergesin distributiontotheassociatedsized-biasedtreeifandonlyif lim_n→∞ a_nn^-2=0.The goalof this paper isto study what happensbeyond that condition and toconsider thesub-critical and super-criticalcases. Wegive a complete description of all the cases when the offspring distribution is ageometric distributionwith aDirac massat 0(in thatcase, the distribution of Z_n is explicit). We observe threeregimes according tothe speed ofgrowth of (a_n, n∈^*). We set:c_n=μ^-n μ<1(sub-critical case),n^2μ=1(critical case), μ^nμ>1(super-critical case),and we shall consider that:lim_n→∞a_n/c_n=θ∈[0,+∞].Let τ^0,0 denote the GW tree τ conditioned on the extinction event =⋃ _n∈^*{Z_n=0}. Notice that τ^0,0 is distributed as τ in the sub-critical and critical cases. * In the Kesten regime (θ=0), the limiting tree, τ^0, is theKestentree, which is a two-type GW tree, with an infinite spinecorresponding to the individuals having an infinite progeny (calledthe survivor type), on which are graftedindependent GW trees distributed as τ^0,0 corresponding toindividuals having a finite progeny (called extinction type).* In the Poisson regime (θ∈ (0, +∞ )), the limitingtree, τ^θ,is no morea GW tree,but it stillhas twotypes, with a backbonewithout leaves corresponding to individualshavingan infiniteprogeny (alsocalled thesurvivor type),onwhich are grafted independent GW trees distributed asτ^0,0. However, thebackbone can not be seen asa GW tree,as it lacks the branching property. This is more like a random treewithaPoissonianimmigrationateachgenerationwithratesdepending onθ and withall the configurationshaving thesame probability. * Inthe condensationregime (θ=+∞),the limitingtree τ^∞ is again atwo-type GW tree, witha backbonewithoutleaves correspondingtoindividualshaving aninfiniteprogeny(also calledthesurvivor type),onwhich aregraftedindependent GW trees distributedas τ^0,0. The backbone canbeseen asaninhomogeneousGW treewiththeroot havinganinfinite number ofchildren (condensation regime), andsuper-critical offspringdistribution atlevel h>0with finitemean μ_h which decreases to 1 as h goes to infinity. We also prove that the family (τ^θ, θ∈ [0, +∞ ]) is continuous in distribution (the most interesting case are the continuity at 0 and +∞), see Remark <ref> and Proposition <ref>. The main ingredient of the proofs is Equation eq:ph and hence is the limit of the ratio lim_n→+∞_k(Z_n-h=a_n)/(Z_n=a_n) which is closely related to the extremal space-time harmonic functions associated with the GW process, see <cit.>. This limit is computed in the Kesten regime at the end of the proof of Proposition <ref>, and at the end of the proof of Proposition <ref> in the Poisson regime. In the condensation regime, this limit is 0. Notice that in this regime, the conditioned Galton-Watson process converges to a trivial process which is always equal to +∞ (except at n=0) but considering the genealogical tree gives a non-trivial limit.Partial resultsin a moregeneral setting for super-criticaland some sub-criticalcasesaregiven in<cit.>:convergenceof τ_nintheKestenandtheintermediateregimesforgeneral offspringdistributions, andin thehighregime inthe Harriscase (offspringdistributionwithbounded support),thecontinuityin distributionof thefamily oflimiting treesat θ=0and some partial resultsat θ=+∞. 
Somesimilar results canalso be derived forsub-critical offspring distributions understrongadditional assumptions. The rest of the paper is organized as follows: Section <ref> introduces the frameworkof discrete trees with the notion oflocal convergence forsequences oftrees, the GWtrees and some properties of thegeometric distribution. Section <ref> describes theGW tree withgeometric offspring distributionwith some technicallemmas thatare usedin theproofs ofthe maintheorems. Section <ref> studiesthe Kesten regime, wherethe Kesten tree τ^0 is definedand the convergence in distributionof τ_n to τ^0isstated (Proposition<ref>). InSection <ref>,the familyof randomtrees (τ^θ, θ∈ (0, +∞ )) is introduced and a convergence resultis obtainedfor thePoisson regime(Proposition <ref>)aswellas thecontinuityindistributionof (τ^θ,θ∈ (0,+∞ ))at θ=0(Remark <ref>). Finally, Section<ref> introducesthe condensation treeτ^∞, proves the convergenceof τ_n to τ^∞ in thecondensation regime (Proposition <ref>)and thecontinuity indistribution of (τ^θ, θ∈ (0, +∞ )) at θ=+∞ (Proposition <ref>). § NOTATIONS We denote by ={0,1,2,…} the set of non-negative integers, by ={1,2,…}theset ofpositiveintegersand =∪{+∞}. Foranyfinite setE,wedenoteby ♯ E its cardinal. §.§ The set of discrete treesWe recall Neveu's formalism <cit.> for ordered rooted trees. Let =⋃_n≥ 0()^n be theset of finite sequences of positive integers with the convention ()^0={∅}. We alsoset ^*=⋃_n≥ 1()^n= \{∅}. For u∈, let |u|be thelength or the generationof u defined as theinteger n such that u ∈()^n. If u and v are two sequences of ,we denote by uv the concatenation of two sequences,with the convention that uv=vu=uif v=∅.The set of strict ancestors of u∈^* is defined by:(u)={v ∈𝒰, ∃w ∈𝒰^*,u=vw},andfor⊂^*,beingnon-empty,weset ()=⋃ _u∈(u).A treeis a subset of 𝒰 that satisfies : * ∅∈.* If u∈, then (u)⊂.* For every u∈, there exists k_u()∈ such that, for every positive integer i, ui ∈ 1≤ i≤ k_u(). We denote by _∞ the set of trees.Let ∈_∞ be a tree. The vertex ∅ is calledthe root of the treeand wedenote by^*=\{∅} thetree withoutits root.For avertex u∈, the integerk_u() represents the numberofoffspring(alsocalled theout-degree)ofthevertex u∈.By convention,we shallwrite k_u()=-1if u∉.The height H() of the treeis defined by:H() =sup{|u|, u ∈}∈.For n∈, the size of the n-th generation ofis defined by:z_n()=♯{u ∈ ,| u|=n}.We denote by ^* the subset of trees with finite out-degrees except the root's:^* = {∈_∞ ; ∀ u∈^*, k_u() < + ∞}and by = {∈^*;k_∅() < + ∞} the subset of trees with finite out-degrees. Let h,k∈. We define ^(h) the subset of finite trees with height h: ^(h)={∈; H()= h }and ^(h)_k= {∈^(h);k_∅()= k} the subset of finite trees with height equal toh and out-degree of the root equal tok.We also define the restriction operators r_h and r_h,k, for every ∈_∞, by:r_h()={ u∈; |u|≤ h}andr_h,k()={∅}∪{ u∈ r_h() ^*; u_1≤ k},where u_1 represents the first term of the sequence u if u∅. In other words, r_h() represents the treetruncated at height h and r_h,k() represents the subtree of r_h() where only the k-first offspring of the root are kept. Remark that, for∈, if H()≥ h then r_h()∈^(h) and if furthermorek_∅()≥ k then r_h,k()∈^(h)_k.§.§ Convergence of trees Set _1={-1}∪, endowedwith the usual topologyof the one-point compactification of thediscrete space {-1}∪.For atree∈_∞,recall thatbyconventiontheout-degree k_u() of u isset to -1 if u does not belongto . 
Thus a tree∈_∞isuniquelydetermined bythesequence (k_u(), u∈)and then_∞is asubset of _1^.By Tychonofftheorem, theset _1^endowed with the product topology is compact. Since _∞ is closed it is thus compact. In fact, the set _∞ is a Polish space (but we don't need any precise metric at this point). The convergence of sequences of trees is thencharacterized as follows. Let(_n, n∈) and be trees in _∞. We say that lim_n→∞_n= if and only iflim_n→∞ k_u(_n)=k_u() for all u∈.It is easy to see that:* If(_n, n∈) andare treesin , then we have lim_n→∞_n= if and only iflim_n→∞ r_h(_n)=r_h() for all h∈^*.* If(_n, n∈) andare treesin ^*, then we have lim_n→∞_n= if and only iflim_n→∞ r_h,k(_n)=r_h,k() for all h,k∈^*. LetT bea-valued (resp. ^*-valued) randomvariable.It is easy to getthat if a.s.H(T)=+∞ (resp.a.s.H(T)=+∞ andk_∅(T)=+∞),then the distribution ofT ischaracterizedby((r_h(T)=); h∈^*, ∈^(h))(resp.((r_h,k(T)=); h,k∈^*, ∈_k^(h))).Using the Portmanteau theorem, we deduce the following results: * Let(T_n, n∈) and Tbe -valuedrandom variables. Thenwe havethe followingcharacterization ofthe convergencein distribution if a.s. H(T)=+∞:T_n Tlim_n→∞(r_h(T_n)=)=(r_h(T)=)for all h∈^*,∈^(h).* Let(T_n, n∈) and T be ^*-valued randomvariables.Then we have thefollowing characterization of the convergence in distribution ifa.s. H(T)=+∞,k_∅(T)=+∞: T_n Tlim_n→∞(r_h,k(T_n)=)=(r_h,k(T)=)for all h,k∈^*,∈_k^(h). §.§ GW treesLet p=(p(n), n∈) be a probability distribution on . A -valued random variable τ is called a GW tree with offspring distribution p if for all h∈ and ∈ with H()≤ h:(r_h(τ)=)=∏_u∈ r_h-1() p(k_u()).The generation size process defined by(Z_n=z_n(τ), n∈) is the so calledGW process. We referto <cit.> for ageneral study ofGW processes. We set _k the probability under which the GW process(Z_n, n∈) starts with Z_0=k individuals and writefor _1 so that: _k(Z_n=a)=(Z_n^(1)+⋯+Z_n^(k)=a), where the (Z^(i),1≤ i≤ k) are independent copiesof Z under . We consider a sequence (a_n, n∈) of elements in and, when (Z_n=a_n)>0,τ_narandomtree distributedastheGW treeτ conditionally on {Z_n= a_n}. Let n≥ h≥ 1 and ∈^(h). We have by the branching property of GW-trees at height h, setting k=z_h():(r_h(τ_n)=) = (r_h(τ)=)_k(Z_n-h=a_n)/(Z_n=a_n)·§.§ Geometric distribution Let η∈ (0,1] and q∈ (0,1). We define the geometric (η,q) distribution p=(p(k), k∈) byp(0)=1-η,p(k)=η q(1-q)^k-1 k∈. We shall always consider that τ is a GW tree with geometric offspring distribution (η,q).The mean of(η, q) is given by μ=η/qand its generating functionis given by:(s)=(1-η) - s(1-q-η)/1 - s(1-q),s∈ [0,1/(1-q)).We set:γ=1-qandκ=1-η/1-qwhere γ is the radius of convergence ofand κ and 1 are the only fixed points ofon [0, γ).If μ=1 then there isonly one fixed pointas κ=1. We shalluse frequently the following relations:γ-κ= μ (γ -1) and,if μ≠ 1,γ-1=κ-1/1-μ·Notice that κ∈ [0, +∞ ) and γ∈ (1, +∞ ) allow to recover η and q as:η=1- κ/γand q= 1-γ·For this reason, we shall also write[κ, γ] for(η,q). Notice that if μ<1, then q>η and γ> κ>1; and if μ>1, then η>q and γ>1>κ≥ 0. Sinceis an homography, we getfor s∈ [0, γ)\{1}:(s) - κ/(s) - 1=μ s - κ/s - 1·Weset_1= and,forn∈, _n+1=∘_n. Notice that κ is a fixed point of _n as it is a fixed point of . 
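A quick numerical check of these facts is straightforward. The following Python sketch is an illustration only (the parameter values η=0.3 and q=0.5 and the iteration order n=7 are arbitrary choices): it verifies that 1 and κ are fixed points of the generating function, that the mean is μ=η/q, that γ-κ=μ(γ-1), and that the n-th iterate of the generating function is again the generating function of a geometric distribution [κ,γ_n] with γ_n=(κ-μ^n)/(1-μ^n), the closed formula established just below.

from math import isclose

eta, q = 0.3, 0.5                 # parameters of the geometric (eta, q) offspring law
mu = eta / q                      # mean, mu = eta/q
gamma = 1 / (1 - q)               # gamma = 1/(1-q)
kappa = (1 - eta) / (1 - q)       # kappa = (1-eta)/(1-q)

def geo_pgf(eta_, q_, s):
    """Generating function of a geometric (eta_, q_) law, for 0 <= s < 1/(1-q_)."""
    return ((1 - eta_) - s * (1 - q_ - eta_)) / (1 - s * (1 - q_))

f = lambda s: geo_pgf(eta, q, s)

# fixed points, and the relation gamma - kappa = mu (gamma - 1)
print(isclose(f(1.0), 1.0), isclose(f(kappa), kappa))
print(isclose(gamma - kappa, mu * (gamma - 1)))

# mean = f'(1), approximated by a symmetric difference quotient
print(isclose((f(1 + 1e-6) - f(1 - 1e-6)) / 2e-6, mu, rel_tol=1e-5))

# the n-th iterate of f is again geometric, with parameters [kappa, gamma_n],
# where gamma_n = (kappa - mu^n)/(1 - mu^n) when mu != 1
n = 7
gamma_n = (kappa - mu ** n) / (1 - mu ** n)
eta_n, q_n = 1 - kappa / gamma_n, 1 - 1 / gamma_n
for s in (0.0, 0.25, 0.5, 0.9):
    fn = s
    for _ in range(n):
        fn = f(fn)                # n-fold composition of f
    print(isclose(fn, geo_pgf(eta_n, q_n, s)))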
We deduce from eq:homo and the second equality of eq:g-c,g-1 if μ≠1 and by direct recurrenceif μ=1, that _n, for n∈, is thegenerating functionof thegeometric distribution [κ, γ_n]=(η_n, q_n) with mean μ_n=μ^n and, thanks to eq:h+q:η_n=1-κ/γ_n, q_n=1-γ_nwith γ_n=κ -μ^n/1 -μ^n = 1+ (γ-1) q^n-1 (q-η) /q^n-η^nif μ≠ 1, 1+ (γ-1)nif μ=1. By convention, we set _0 the identity function defined on [0, +∞ ) and γ_0=+∞ so that for all n∈, we have γ_n=lim_r→+∞_n^-1(r) that is in short γ_n=_n^-1(∞ ). We deduce that for all n≥ℓ≥ 0:_ℓ(γ_n)=γ_n-ℓ. We derive some asymptotics for γ_n for large n.It is easy to deduce from eq:hqg-n that:lim_n→∞γ_n=max(1, κ)= κif μ≤ 1,1 if μ≥ 1.Using eq:g-c,g-1, we get for largen:(γ_n -κ)(γ_n -1)= μ^n (κ -1)^2+O(μ^2n) if μ<1, (γ-1)^2 n^-2if μ=1, μ^-n (κ-1)^2 + O(μ^-2n) if μ>1.We derive from eq:hqg-n the logarithm asymptotics of γ_n/γ_n-h for given h∈ and large n: log(γ_n-h/γ_n )=log(γ_n-h)- log(γ_n )= μ^n-h(1-μ^h) (κ -1)/κ + O(μ^2n) if μ<1, (γ-1) h n^-2 +O (n^-3) if μ=1, μ^-n(μ^h-1) (1-κ)+ O(μ^-2n) if μ>1.We recall the following well-known equality which holds for all k∈ and r∈ (0, 1):∑_ℓ≥kℓ -1k-1 r ^ℓ= (r/1-r)^k.And we end this section with an elementary lemma. Let (X_ℓ, ℓ∈) be independent random variables with distribution (η,q)=[κ, γ]. For a≥ k≥ 1:(∑_ℓ=1^k X_ℓ=a) =∑_i=1^kki a-1i-1 κ^k-i (γ-κ)^i (γ-1)^i γ^-a-k.We have:(∑_ℓ=1^k X_ℓ=a)=∑_i=1^kki(X_1=0)^k-i (∑_ℓ=1^i X_ℓ=a, X_ℓ≥ 1for ℓ∈{1, …, i})=∑_i=1^kki(1-η)^k-i a-1i-1(η q)^i (1-q)^a-i. Then use eq:h+q to conclude.§ THE GEOMETRIC GW TREELet τbe a GWtree with geometric (η,q)offspring distribution p given by eq:geo, with η∈(0, 1] and q∈ (0, 1). Recall that (Z_n,n∈) is the associated GWprocess.For k∈, we denote by _k the distribution of the geometric GW forest composedof kindependent GWtrees withgeometric offspring distribution (η,q),and writefor_1. For convenience,weshallunderdenoteby Z^(k)=(Z_n^(k), n∈) aGW process distributed as Z=(Z_n, n∈) under _k. For n∈, we set:M_n=γ_1^-Z_1 γ_n^Z_n.Since Z_nhas generating function_n under ,we deduce from eq:g-g that (M_n, n∈) is a martingale with M_1=1. For n≥ h≥ 1, we set:b_n,h=(γ_n/γ_n-h)^a_n. We shall use the following formula when lim_n→∞ b_n,h exists and belongs to (0, ∞ ). Let n≥ h≥ 1 and k∈. We have:_k(Z_n-h=a_n)/(Z_n=a_n)=b_n,h ∑_i=1^k ki κ^k-i G_n,h(k,i),withG_n,h(k,i)=a_n-1i-1 γ_n/γ^k_n-h (γ_n-h -κ)^i(γ_n-h -1)^i/(γ_n -κ)(γ_n -1)·Letn≥ h≥ 1. Since Z_n has distribution [κ,γ_n], we obtain thanks to eq:g+c0:(Z_n=a_n)=η_n q_n (1-q_n)^a_n-1=(γ_n -κ) (γ_n -1) γ_n^-a_n -1.Using that Z_n-h is under _k distributed as the sum of k independent random variables with distribution [κ, γ_n-h], we deduce from Lemma <ref> that:_k(Z_n-h=a_n)/(Z_n=a_n) =∑_i=1^kki a_n-1i-1 κ^k-i (γ_n-h -κ)^i(γ_n-h -1)^i/γ^a_n+k_n-h γ_n^a_n+1/(γ_n -κ)(γ_n -1)=b_n,h ∑_i=1^kki κ^k-iG_n,h(k,i).This gives the result.We shall use the following formula when lim_n→∞ b_n,h=0 and lim_n→∞a_n=+∞.Let n> h≥ 1, k_0∈ and ∈_k_0^(h). We have, with a_n≥ k=z_h(): (r_h, k_0(τ_n)=) = 1-q/η q(r_h(τ)=)( γ_h^k - R_n,h^1(k)- R_n,h^2(k)),with α_n=(γ_n-h-κ) (γ_n-h-1),x_n=γ_n/γ_n-h and:0≤ R_n,h^1(k)≤b_n,h α_n/1-x_nmax(1,κ)^k-12^2k-1(2+(α_n/1-x_n)^k-1+ (α_n a_n) ^k-1),R_n,h^2(k) =(κ+1-γ)_k(Z_n-h=a_n)/(Z_n=a_n)· Let n>h≥ 1, k_0∈and ∈_k_0^(h).We set k=z_h(). For every 1≤ j≤ k_0, we denote by _j the subtree rooted at the j-th offspring of the root i.e.u∈_j ju∈. 
Inwhat follows,we denoteby Z̃^(i)a process distributed as Z^(i) and independent of Z^(k).We have:(r_h, k_0(τ_n)=)= ∑_i=0^+∞ p(i+k_0) [∏_j=1^k_0(r_h-1(τ)=_j) ] (Z_n-h^(k)+Z̃^(i)_n-1=a_n)/(Z_n=a_n)= (r_h(τ)=)∑_i=0^+∞ (1-q)^i(Z_n-h^(k)+ Z̃^(i)_n-1=a_n)/(Z_n=a_n)= 1-q/η q(r_h(τ)=) (A+B),where we used the branching property for the first and second equalities, the independence of Z^(k) and Z̃^(i) for the third,whereA= ∑_ℓ=0^a_n(Z_n-h^(k)=ℓ) ∑_i=0^+∞p(i) (Z_n-1^(i)=a_n-ℓ)/(Z_n=a_n) andB=( η q/1-q -(1-η)) (Z_n-h^(k)=a_n)/(Z_n=a_n)·We have:A =∑_ℓ=0^a_n(Z_n-h^(k)=ℓ)(Z_n=a_n-ℓ)/(Z_n=a_n) =∑_ℓ=0^a_n(Z_n-h^(k)=ℓ) γ_n^ℓ = ( _n-h(γ_n)^k -R^1_n,h(k)),where we usedthat k_∅(τ) has distribution p for the first equality, that Z_n has distribution [κ, γ_n] for the secondone and thus (Z_n=k)=η_n q_n γ_n^-(k-1), and for the last one that:R^1_n,h(k)= ∑_ℓ>0(Z_n-h^(k)=ℓ+ a_n) γ_n^ℓ+a_n.We have, with α_n=(γ_n-h-κ) (γ_n-h-1)and x_n=γ_n/γ_n-h:(Z_n-h^(k)=ℓ+a_n)γ_n^ℓ+a_n = b_n,h∑_i=1^kki ℓ+ a_n-1i-1 κ^k-i (γ_n-h-κ)^i (γ_n-h-1)^i γ_n-h^-ℓ-kγ_n^ℓ≤b_n,hx_n^ℓ max(1,κ)^k-1 ∑_i=1^kki ℓ+ a_n-1i-1 α_n^i,whereweusedLemma<ref> forthefirstequalityand γ_n-h≥max(1,κ) for the last. Using that (x+y)^j≤ 2^j-1 (x^j + y ^j) for j∈ and x,y∈ (0, +∞ ), we deduce that:ℓ+ a_n-1i-1≤2^i-1/(i-1)!(ℓ ^i-1 +a_n^i-1).We have the following rough bounds:0≤R^1_n,h(k)≤ b_n,h max(1,κ)^k-12^k-1∑_i=1^kα_n^i ki∑_ℓ>0(ℓ ^i-1/(i-1)!x_n^ℓ+a_n^i-1x_n^ℓ)≤ b_n,h x_n α_n/1-x_nmax(1,κ)^k-12^k-1∑_i=1^kki((α_n/1-x_n)^i-1+ (α_n a_n) ^i-1)≤ b_n,h α_n/1-x_nmax(1,κ)^k-12^2k-1(2+(α_n/1-x_n)^k-1+ (α_n a_n) ^k-1)where we used that x_n∈ (0, 1) as the sequence (γ_m, m∈) is non-increasing and that ∑_ℓ>0ℓ^i-1 x^ℓ/(i-1)! ≤x(1-x)^i-1 for the last inequality but one.Then useeq:g-g, which gives _n-h(γ_n)=γ_h, to get A=γ_h^k - R^1_n,h(k) as well aseq:Rnh1. WecanrewritetheconstantinBas (η q/1-q -(1-η))= -(κ+1-γ), so that B=-R^2_n,h(k), see eq:Rnh2, and thus A+B=γ_h^k - R^1_n,h(k) - R^2_n,h(k). This ends the proof.§ THE KESTEN REGIME OR THE NOT SO FAT CASE§.§ The Kesten tree Inthissection,wedenotebyτaGWtreewithgeometric p=(η,q) with η,q∈(0,1).Recall that the extinction event ={H(τ)<+∞} has probability =min(1,κ). Moreover, as weassume η<1, we have >0. We define the probability distribution =((n), n∈) by:(n)=^n-1 p(n) for n∈.We denote by τ^0,0 a random tree distributed asτ conditionallyon theextinction event , thatis a GW tree with offspring distribution . We denote by the meanof .If μ≤1, thenwe have=p, =μ, =1 and that τ^0,0 is distributed as τ. If μ>1, thenwe havethatis the geometric distribution (q, η),=1/μ and=κ.Let k∈. We define the k-th order size-biased probability distribution of p as p_[k]=(p_[k](n), n∈) defined by:p_[k](n)= n!/(n-k)!^(k)(1)p(n) for n∈ and n≥ k.The generating function of p_[k] is _[k](s)=s^k^(k)(s)/ ^(k)(1).The probability distribution p_[1]is the so-called size-biased probability distributionof p.For the distribution (η, q), we have^(k)(1)=k! η q^-k(1-q)^k-1, so the k-th order size-biased probability distribution of p is given by:p_[k](n)=nk q^k+1 (1-q)^n-kfor n∈ and n≥ k.We nowdefine the so-calledKesten tree τ̂^0associated with theoffspring distributionp as a two-typeGW tree where thevertices are eitherof type (for survivor)or oftype (for extinction).It isthen characterized as follows. * The number of offspring of a vertex depends, conditionally on the vertices of lower or same height, only on its own type (branching property). * The root is of type . 
* A vertex of typeproduces only vertices of typewithoffspring distribution .* The random numberof children of a vertex oftypehas thesize-biased distributionof that is _[1]defined byeq:def-biased-pwithk=1.Furthermore, all of the childrenare of typebutone,uniformlychosenat random,whichisoftype. Informally the individuals oftypeinτ̂^0form an infinitespine on which aregrafted independentGW treesdistributed as τ^0,0. We defineτ^0=(τ̂^0)as the treeτ̂^0 when one forgets the types of the vertices. The distribution ofτ^0 is given inthe following classical result. Letp=(η,q) with η,q∈(0,1). The distribution of τ^0 ischaracterized by: for all n≥ h≥1 and ∈^(h)withk=z_h():(r_h(τ^0))=) = k ^k-1^-h (r_h(τ)=). We give a short proof of thiswell-known result. Since τ^0 belongs to and has infiniteheight, its distributionis indeedcharacterizedbyeq:ph0 forall n≥ h≥ 1 and ∈^(h) with k=z_h().Let n≥ h≥ 1,∈^(h) andv∈ such that |v|=h. LetV be the vertex of typeat level h in τ̂^0.Wehave, with k=z_h():(r_h(τ^0)=, V=v)= ∏_u ∈\({v}); |u|<h(k_u()) ∏_u ∈({v})k_u() _[1](k_u())= ^-h^∑_u∈ r_h-1() (k_u() -1) ∏_u∈ r_h-1() p(k_u())= ^-h^k-1 (r_h(τ)=) ,where weused eq:def-biased-p (with k=1,n=k_u() and p replacedby )andeq:def-fp (withn=k_u()) forthe second equalityand that∑_u∈ r_h-1()(k_u() -1)=k-1 for the lastone.Summing over all v∈such that |v|=h gives the result. §.§ Convergence of the not so fat geometric GW treeWeconsider asequence (a_n,n∈)with a_n∈ anda random treeτ_n distributed asthe GW tree τwith offspring distributionp=(η,q) conditionallyon {Z_n=a_n}.We have the following result.Let η∈(0,1) and q∈ (0,1).Assume thatlim_n→∞ a_n μ^n =0 if μ< 1,lim_n→∞a_nn^-2 =0ifμ=1orlim_n→∞ a_n μ^-n=0 if μ>1.Then wehave the following convergence in distribution:τ_nτ^0.The critical case, μ=1, appears inCorollary 6.2 of <cit.> for general offspring distribution with second moment. Leth∈ and k∈. Recall the definitions of b_n,h ineq:def-bnh and of G_n,h in eq:Gnh. According toLemma<ref>,we havefor n≥ h≥ 1 and k∈:_k(Z_n-h=a_n)/(Z_n=a_n)=b_n,h ∑_i=1^k ki κ^k-i G_n,h(k,i).According toeq:def-bnh, we have b_n,h=exp(-a_nlog(γ_n-h/γ_n)). We deducefrom eq:equiv-ggandthehypothesis on(a_n,n∈)that lim_n→∞a_n log(γ_n-h/γ_n)=0 and thus lim_n→∞ b_n,h= 1. We deduce fromeq:Gnh, eq:lim-g and eq:equiv-gg-1c0that,for k≥i>1, lim_n→∞ G_n,h(k,i)=0 and for k≥ 1:lim_n→∞G_n,h(k,1)= κ^1-kμ^-h if μ<1, 1if μ=1, μ^hif μ>1.We deduce that:lim_n→∞_k(Z_n-h=a_n)/(Z_n=a_n)=k μ^-h if μ<1 kif μ=1 k κ^k-1μ^hif μ>1 = k^k-1^-h.Then, as a.s. H(τ^0)=+∞, we canuse the characterization eq:cv-loi of the convergencein , as well as eq:ph and Lemma<ref> to conclude. § THE POISSON REGIME OR THE FAT CASE§.§ An infinite Poisson tree Letθ∈ (0,+∞ ). Weconsider atwo-type randomtree τ̂^θ wherethe verticesare eitherof type (for survivor) or of type (for extinction).We define τ^θ=(τ̂^θ) as the tree τ̂^θ when one forgets the types of thevertices of τ̂^θ.We denote by _h={u∈τ^θ;|u|=hand u is of typein τ̂^θ} the set ofvertices of τ̂^θ with typeat level h∈.Notice that (_ℓ, 0≤ℓ<h)=(_h) andthat τ̂^θ is completelycharacterizedbyτ^θ and(_h,h∈). Recall definedby eq:def-fpandthe k-thorder size-biased distribution, p_[k],defined by eq:def-biased-p. The random tree τ̂^θ is defined as follows. * The root is of type(i.e. _0={∅}).* The number of offspring of a vertex of typedoes not depend on the vertices of lower or same height (branching property only for individuals of type ). 
* A vertex of typeproduces only vertices of typewithoffspring distribution(as in theKesten tree).* Forh∈,let Δ_h=♯_h+1-♯_hbethe increaseofnumber ofverticesoftypebetween generations h and h+1.Conditionallyon r_h(τ^θ)and (_ℓ,0≤ℓ≤h), Δ_his distributedasa Poissonrandomvariable withmean θζ_h, where:ζ_h= μ^-h-1 (1-μ) (κ -1)/κ if μ<1,(γ-1) if μ=1, μ^h (μ-1) (1-κ) if μ>1.The vertexu∈_hhas κ^(u)≥1 childrenof type ,withallthe configurations(κ^(u),u∈_h) havingthesame probability,thatis 1/♯_h+1-1♯_h -1= 1/♯_h+1-1Δ_h. (Thisbreaks thebranching property!) Furthermore, conditionallyon r_h(τ^θ), _h and(κ^(v)=s_v≥ 1, v∈_h), the vertex u∈_h hasκ^(u) vertices of typesuch thatk_u(τ^θ)=κ^(u)+κ^(u) hasdistribution _[s_u]andthes_uindividuals oftypearechosen uniformly at random among the k_u(τ^θ) children.Moreprecisely,forh∈,n∈,u∈_h, k_u≥ s_u≥ 1,A_u⊂{1, …,k_u} with ♯ A_u=s_u and∑_u∈_hs_u=n+♯_h,wehavewith k=∑_u∈_h k_u:(κ^(u)+κ^(u)=k_uand _h+1∩{u1, …, uk_u}=uA_u ∀ u∈_h |r_h(τ^θ), _h)= (θζ_h)^n/n!-θζ_h♯_h+n-1n∏_u∈_hk_us_u_[s_u](k_u)= (♯_h-1)!/(♯_h+n-1)! (θ (γ-1) ζ_h )^n-θζ_h∏_u∈_h(k_u) μ^- ♯_h if μ≤ 1, μ^♯_h(μ/κ)^n if μ>1,where we used eq:biased-fp and eq:def-fp as well as eq:h+q for the last equality.By construction, a.s.individuals of typehave a progeny which does not suffer extinctionwhereas individuals of type have a progeny whichsuffersextinction. Since the individuals of typedo not satisfy the branching property, the random tree τ̂^θ is not a multi-type GW tree. Westress outthat τ̂^θtruncated at level h can berecovered from r_h(τ^θ)and _h asall the ancestorsof a vertex of typeare also oftypeand a vertex of typehas at least one children of type .We have the following result. Letη∈(0,1]andq∈(0,1). Letθ∈(0,+∞ ).Letn≥h≥ 1and∈^(h). We have, with k=z_h():(r_h(τ^θ)=) = (h,k,θ)(r_h(τ)=),where (h,k,θ) is equal to μ^-h-θ (μ^-h-1) (κ -1)/κ ∑_i=1^k ki (θμ^-h(κ-1)^2/κ)^i-1/(i-1)! if μ<1,-θ(γ-1)h ∑_i=1^k ki (θ(γ-1)^2)^i-1/(i-1)! if μ=1,μ^h-θ (μ^h-1) (1-κ) ∑_i=1^k kiκ^k-i (θμ^h (1-κ)^2 )^i-1/(i-1)! if μ>1. We deduce from Lemma <ref> that τ^θτ^0. Therefore the trees τ^θ appear as a generalization of the Kesten tree. We will also prove in Proposition <ref> that a limit also exists when θ→+∞. We consider only the super-critical case. The sub-critical case andthe critical case can be handled in a similar way. Leth∈, ∈^(h)and S_h⊂{u∈;|u|=h} be non empty.Inorder to shorten the notations, we set =(S_h). Notice thatis tree-like. Weset,for ℓ∈{0,…,h-1}, S_ℓ={u∈,|u|=ℓ} thevertices at levelℓ which have atleast onedescendant inS_h and Δ_ℓ=♯S_ℓ+1-♯ S_ℓ. Werecallthat τ̂^θtruncatedatlevelhcanberecoveredfrom r_h(τ^θ)and_h.Wecompute _S_h=(r_h (τ^θ)=, _h=S_h).Wehave, using eq:pre-ch and eq:zeta_h: _S_h = [∏_u∈ r_h-1(), u∉(k_u()) ]∏_ℓ=0^h-1[ (♯ S_ℓ -1)!/(♯ S_ℓ+1 -1)!(θ (γ-1) ζ_ℓ)^Δ_ℓ-θζ_ℓ[∏_u∈S_ℓ(k_u()) ] μ^♯ S_ℓ(μ/κ)^Δ_ℓ]= [∏_u∈ r_h-1()(k_u()) ] (θ(γ-1)(μ -1)(1-κ)/κ)^∑_ℓ=0^h-1Δ_ℓ/(♯ S_h-1)!-θ∑_ℓ=1^h-1ζ_ℓ∏_ℓ=0^h-1μ^(ℓ+1)Δ_ℓ +♯ S_ℓ=[∏_u∈ r_h-1()κ^k_u()-1] [∏_u∈ r_h-1() p(k_u()) ] (θ(1-κ)^2/κ)^♯ S_h -1/(♯ S_h-1)!-θ (μ^h-1)(1-κ)μ^ h ♯ S_h = κ^z_h()-♯ S_h (r_h(τ)=)μ^h (θμ^h(1-κ)^2)^♯ S_h -1/(♯ S_h-1)!-θ (μ^h-1)(1-κ),where we used for the third equality that∑_ℓ=0^ h-1Δ_ℓ= ♯ S_h-1,∑_ℓ=1^h-1ζ_ℓ= (μ^h-1) (1-κ) and∑_ℓ=0^h-1 (ℓ+1) Δ_ℓ + ♯ S_ℓ = ∑_ℓ=0^h-1 (ℓ+1)♯ S_ℓ+1 - ℓ♯ S_ℓ=h ♯ S_h.Since _S_h depends only of ♯ S_h, we shall write _♯ S_h for _S_h.Set k=z_h()=♯{u∈; |u|=h}. 
Since ♯ S_h≥ 1 as the root if of type , we obtain:(r_h (τ̃^θ)= ) = ∑_i=1^k ∑_S_h⊂{u∈; |u|=h} _{♯ S_h=i} _S_h = ∑_i=1^k ki _i =(r_h(τ)=) (h,k,θ),where we used the definition offor the last equality. §.§ Convergence of the fat geometric GW treeWe consider a sequence (a_n, n∈), with a_n∈ andτ_narandomtree distributedastheGW treeτ conditionally on {Z_n= a_n}. We have the following result. Let η∈(0,1], q∈ (0,1) and θ∈ (0, +∞). Assume thatlim_n→∞ a_nμ^n=θifμ< 1orlim_n→∞ a_nn^-2 =θ ifμ= 1orlim_n→∞ a_n μ^-n =θ if μ> 1. Thenwe have the following convergence in distribution:τ_nτ^θ. Recall the definitions of b_n,h in eq:def-bnh and of G_n,h in eq:Gnh.According to Lemma <ref>, we have for n≥ h≥ 1 and k∈:_k(Z_n-h=a_n)/(Z_n=a_n)=b_n,h ∑_i=1^k ki κ^k-i G_n,h(k,i).According toDefinition eq:def-bnh,we have b_n,h=exp(-a_nlog(γ_n-h/γ_n)).Wededucefrom eq:equiv-gg and the hypothesis on (a_n, n∈) thatlim_n→∞ -log(b_n,h)= θ (μ^-h-1) (κ -1)/κ if μ<1, θ(γ-1)h if μ=1, θ (μ^h-1) (1-κ ) if μ>1.We deduce from eq:Gnh, eq:lim-g and eq:equiv-gg-1c0, that for h∈, k≥ i≥ 1: lim_n→∞ (i-1)! G_n,h(k,i)= (θμ^-h (κ-1)^2 )^i-1μ^-hκ^1-k if μ<1, (θ(γ-1)^2)^i-1 if μ=1, (θμ^h (1-κ)^2 )^i-1μ^h if μ>1.Usingdefinition ofin Lemma <ref>, we obtain that:lim_n→∞_k(Z_n-h=a_n)/(Z_n=a_n) = (h,k,θ).Then use thecharacterization of the convergencein , eq:ph and Lemma<ref> to conclude. § THE CONDENSATION REGIME OR THE VERY FAT CASE§.§ An infinite geometric treeRecall γ_n defined in eq:hqg-n.For n∈, we define the probability p̃_n=(p̃_n(k), k∈) by:p̃_n(k)=γ_n+1^k/γ_n p(k).Thanks toeq:g-g, we get ∑_k∈p̃_n(k)=(γ_n+1) γ_n^-1=1, so that p̃ is indeed aprobability distribution on .For n=0, weset p̃_0 the Diracmass at +∞, whichis a degenerate probability measure on .We define τ^∞ as an inhomogeneous GW tree with reproduction distribution p̃_h at generation h∈. In particular the root has an infinite number of children, whereas all the other individuals have a finite number of children. More precisely, for all h∈, k_0∈ and ∈_k_0^(h), we have:(r_h, k_0(τ^∞)=)= ∏_u∈ r_h-1()^*p̃_|u|(k_u()) ,where we recall that ^*=∖{∅}. Remark that a.s. τ^∞∈^*.We give a representation of the distribution of τ^∞ as thedistribution of τ with a martingale weight.Let η∈(0,1] and q∈ (0,1).For all h∈, k_0∈ and F a non-negative function on _∞, we have:[F(r_h, k_0(τ^∞))]= [F(r_h(τ)) M_h_{k_∅(τ)=k_0}]/(k_∅(τ)=k_0) ,where (M_h,h∈) is the martingale defined by eq:def-m. Equivalently, for all h∈, k_0∈ and ∈_k_0^(h), we have with k=z_h():(r_h, k_0(τ^∞)=)=1-q/η qγ_h ^k(r_h( τ)=).Let h∈, k_0∈ and ∈_k_0^(h).Set k=z_h(). We have:1-q/η qγ_h ^k (r_h( τ)=)= 1-q/η q[∏ _u∈,|u|=h-1γ_h^k_u() ][∏_u∈ r_h-1() p(k_u())] = 1-q/η qγ_1^k_0 [∏_u∈ r_h-1()^*γ_|u|^-1 γ_|u|+1^k_u()] [∏_u∈ r_h-1() p(k_u())]=1-q/η qγ_1^k_0p(k_0) [∏_u∈ r_h-1()^*p̃_|u|(k_u())]=(r_h, k_0(τ^∞)=),where we used that ∑_u∈, |u|=ℓ k_u()=∑_u∈, |u|=ℓ+1 1 for the second equality and the definition of p(k_0) and γ_1=γ as well as eq:def-t-t for the last one. To conclude, notice also that thanks to the definition of p(k_0) and γ_1=γ as well as eq:def-m, we have on {k_∅(τ)=k_0}:1-q/η qγ_h ^z_h(τ) = M_h/p(k_0)· We give an alternative description of τ^∞ as the skeleton of a two-type GW tree. We set for n∈:ν_n=1- γ_n+1 -1/γ_1 - 1 =μ (1 -μ^n) (1 -μ^n+1)^-1if μ≠ 1, n (n+1)^-1if μ=1.We have ν_n∈ [0, 1). It is easy to check (using the first expressionof ν_n-1 for the first equality and the second expression for ν_n-1 and ν_n for the second equality)that for all n∈:1-qν_n-1/1-q= γ_n andμ(1- ν_n-1)ν_n/1-ν_n=1. 
We considera two-type GW treeτ̂^∞ wherethe vertices are either oftype(for survivor) or oftype(for extinction). We define(τ̂^∞ ) asthe treeτ̂^∞when one forgets the types of the vertices of τ̂^∞.We denote by _h={u∈(τ̂^∞ );|u|=hand u is of typeinτ̂^∞} theset of vertices ofτ̂ with typeat level h∈. The random tree τ̂^∞ is defined as follows: * The number of offspring of a vertex depends, conditionally on the vertices of lower or same height, only on its own type (branching property). * The root is of type(i.e. _0={∅}). * A vertex of typeproduces only vertices of typewithoffspring distributiondefined by eq:def-fp. * Avertex u∈τ̂^∞ at levelh of type producesκ^(u) verticesof typewithprobability distribution(1,ν_h) (withthe conventionthatif h=0,thenκ^(∅)=+∞) and κ^(u) vertices of type suchthat the type ofthe vertices(ui, 1≤i≤κ^(u)+κ^(u))is asequence ofheads (type )and tails (type ) wherethe probability toget anhead isq∨η and atail is 1-q∨η,stopped justbefore the(κ^(u) +1)-th head.Equivalently,for |u|≥ 1,conditionallyonκ^(u)=s_u≥1,thevertexuhasκ^(u) verticesof type such thatk_u((τ̂^∞ ))=κ^(u)+κ^(u)hasdistribution _[s_u],defined in eq:biased-fp,and thes_uindividuals oftype are chosenuniformly atrandomamong thek_u((τ̂^∞ )) children.Moreprecisely, wehave for k_0∈ and S_1⊂{1, …, k_0}:(_1 ∩{1, …, k_0} =S_1) = (q∨η)^♯ S_1 (1-(q∨η))^k_0 -♯ S_1,and for h≥ 2, k∈,u∈ with |u|=h, s_u∈{1, …, k},and A⊂{1,…, k } such that ♯ A=s_u:(κ^(u)+κ^(u)=k,_h+1∩{u1, …, uk}=uA| r_h((τ̂^∞ )), _h,u∈_h)= ν_h (1-ν_h)^s_u -1(q∨η)^s_u+1 (1-(q∨η))^k -s_u. By constructionindividuals of type have a progeny whichdoes not suffer extinction whereas individuals of typehavea.s.a finiteprogeny. We stress out that τ̂^∞, truncatedat level h and when considering only the first k_0 children of the root,can berecoverfrom r_h,k_0((τ̂^∞ )) and _h asall the ancestors of a vertex of type is also of a type and a vertex of type has at least one children of type .We have the following result. Let η∈ (0,1] and q∈ (0,1). We have that τ^∞ isdistributed as (τ̂^∞ ).We first suppose that η≤ q. In that case, μ≤ 1 and we have =p and q∨η=q. Let h∈, k_0∈, ∈_k_0^(h) and S_h⊂{u∈; |u|=h} which might be empty.In order to shorten the notations, we set =(S_h) which is a tree if S_h is non-empty.For u∈,we set s_u=♯{ i∈; ui∈∪ S_h} the number of children of u which have at least one descendant in S_h.Weset,for ℓ∈{0,…, h-1}, S_ℓ={u∈,|u|=ℓ} the vertices atlevel ℓwhich haveat least onedescendant in S_h. Notice that ∑_u∈ S_ℓ s_u=♯ S_ℓ+1.Set k=z_h(). 
We compute _S_h=(r_h, k_0 ((τ̂^∞ ))=, _h=S_h).If S_h is non-empty, we have:_S_h = [∏_u∈ r_h-1(),u∉ p(k_u()) ] q^♯ S_1 (1-q)^k_0- ♯ S_1∏_u∈^*ν_|u| (1-ν_|u|)^s_u-1 q^s_u+1 (1-q)^k_u() - s_u= [∏_u∈ r_h-1()^* p(k_u()) ] q^♯ S_1 (1-q)^k_0- ♯ S_1∏_u∈^*ν_|u|/1- ν_|u|1-q/η(q/1-q(1- ν_|u|)) ^ s_u=1-q/η q(r_h(τ)=) (q/1-q)^♯ S_1∏_ℓ=1^h-1( ν_ℓ/1- ν_ℓ1-q/η) ^♯ S_ℓ(q/1-q(1- ν_ℓ)) ^♯ S_ℓ+1=1-q/η q(r_h(τ)=)( ν_1/1- ν_1q/η) ^♯ S_1(q/1-q(1- ν_h-1)) ^♯ S_h ∏_ℓ=2^h-1( ν_ℓ/1- ν_ℓq/η (1-ν_ℓ-1) ) ^♯ S_ℓ=1-q/η q (r_h(τ)=) (q/1-q(1- ν_h-1)) ^♯ S_h,where we used for the second equality that if u∈ and _h=S_h, then k_(τ̂^∞ )(u)≥ 1;and for the fifth the second equation from eq:q-nu=g as well as ν_1/(1-ν_1)=μ=η/q (which comes also from the second equation in eq:q-nu=g with n=0).If S_h is empty, then we have:_∅=(1-q)^k_0∏_u∈ r_h-1()^* p(k_u()) = 1-q/η q (r_h(τ)=).Notice that _S_h depends on S_h only trough ♯ S_h.We deduce that:(r_h, k_0 ((τ̂^∞ ))=)= ∑_i=0^k ∑_S_h⊂{u∈; |u|=h} _{♯ S_h=i} _S_h= 1-q/η q (r_h(τ)=) ∑_i=0^kki(q/1-q(1- ν_h-1)) ^ i=1-q/η q (r_h(τ)=) (1+ q/1-q(1- ν_h-1)) ^k= 1-q/η q (r_h(τ)=) (1 -q ν_h-1/1-q) ^ k= 1-q/η q (r_h(τ)=) γ_h ^ k,where we used the firstequation from eq:q-nu=g for the last equality.Then we conclude using eq:t-t=mart-t from Lemma <ref>.In the case q<η, we have thatis the (q,η) distribution. So the computations are the same, inverting the roles of q and η. As in Remark <ref>, we also have the convergence of the trees τ^θ introduced in Section <ref> to the infinite geometric tree τ^∞ as θ→+∞. Let η∈(0,1] and q∈(0, 1). Then we havethe following convergence in distribution:τ^θτ^∞. We only deal with the supercritical case, the subcritical and critical cases can be handled in a similar way.For ,'∈ such that k_∅()<∞, let us denote by *' the tree obtained by graftingand ' on the same root i.e.: *'=∪{(u_1+k_∅(),u_2,…,u_n), (u_1,…,u_n)∈'^*},with the convention *'= if '={∅}. We denoteby ^(≤ h) thesubset ofoftrees with height lessthan orequal toh.Leth,k_0>0 andlet ∈_k_0^(h). Then using Lemma <ref> with k=z_h() and k'=z_h('), we have:(r_h,k_0(τ^θ)=) =∑_'∈^(≤ h)(r_h(τ^θ)=*') =∑ _'∈^(≤ h)μ^h-θ(μ^h-1)(1-κ)∑_i=1^k+k'k+k'iκ^k+k'-i(θμ^h(1-κ)^2)^i-1/(i-1)!(r_h(τ)=*').Let us remark that, if '{∅}, then(r_h(τ)=*') =(r_h(τ)=)/p(k_∅())(r_h(τ)=')/p(k_∅('))p(k_∅()+k_∅(')) =1-q/η q(r_h(τ)=)(r_h(τ)=').Since (r_h(τ^θ)=) converges to 0 as θ increases to infinity, we deduce that for θ→+∞:(r_h,k_0(τ^θ)=)=1-q/ηqμ^h(r_h(τ)=) -θ(μ^h-1)(1-κ)A_1 +o(1),with A_1= ∑ _'∈^(≤ h)∖{∅}∑_i=1^k+k'k+k'iκ^k+k'-i(θμ^h(1-κ)^2)^i-1/(i-1)!(r_h(τ)=').We have, using for the third equality that Z_h has distribution [κ, γ_h], that:A_1 =∑_k'=0^+∞ ∑_i=1^k+k'k+k'iκ^k+k'-i(θμ^h(1-κ)^2)^i-1/(i-1)!∑_{'∈^(≤ h),z_h(')=k'}(r_h(τ)=')=∑_k'=0^+∞ ∑_i=1^k+k'k+k'iκ^k+k'-i(θμ^h(1-κ)^2)^i-1/(i-1)!(Z_h=k') =∑_k'=0^+∞ ∑_i=1^k+k'k+k'iκ^k+k'-i(θμ^h(1-κ)^2)^i-1/(i-1)!(1-1/γ_h)(1-κ/γ_h) 1/γ_h^k'-1= (1-1/γ_h)(1-κ/γ_h) (A_2+A_3),where A_2=∑_i=k+1^+∞(∑_k'=i-k^+∞k+k'i(κ/γ_h)^k'-1) (θμ^h(1-κ)^2)^i-1/(i-1)!κ^k-i+1and A_3=∑_i=1^k(∑_k'=0^+∞k+k'i(κ/γ_h)^k'-1)(θμ^h(1-κ)^2)^i-1/(i-1)!κ^k-i+1.Using eq:serie and κ/γ_h<1, we getlim_θ→+∞-θ(μ^h-1)(1-κ)A_3=0. Using eq:serie, we also have:A_2 =∑_i=k+1^+∞1/(1-κ/γ_h)^i+1(κ/γ_h)^i-k-1 (θμ^h(1-κ)^2)^i-1/(i-1)!κ^k-i+1= γ_h^k+2/(γ_h-κ)^2(θμ^h(1-κ)^2)/γ_h-κ + O(θ^k). Then, as (γ_h-1)/(γ_h-κ)=μ^-hand (1-κ)/(γ_h-κ)=1-μ^-h, we get that:lim_θ→+∞-θ(μ^h-1)(1-κ)A_1= lim_θ→+∞-θ(μ^h-1)(1-κ) (1-1/γ_h)(1-κ/γ_h) A_2= μ^-hγ_h^k. 
We deduce that:lim_θ→+∞(r_h,k_0(τ^θ)=)=1-q/η q γ_h^k (r_h(τ)=).Using eq:t-t=mart-t, this gives the result.§.§ Convergence of the very fat geometric GW treeWe consider a sequence (a_n, n∈), with a_n∈ andτ_narandomtree distributedastheGW treeτ conditionally on {Z_n= a_n}. We have the following result.Letη∈(0,1]andq∈(0,1). Assume thatlim_n→∞a_n μ^n=+∞if μ<1or lim_n→∞a_n n^-2=+∞if μ=1 or lim_n→∞a_n μ^-n=+∞if μ>1. Then we have the following convergence in distribution:τ_nτ^∞. Firstnotice thata.s. H(τ^∞)=+∞.Then, usingthe characterization eq:cv-loi* for the convergence in distribution in ^*,the result isa direct consequence ofeq:phk0 in Lemma <ref>and of eq:t-t=mart-tin Lemma <ref>, providedthat lim_n→∞R^i_n,h(k)=0for i∈{1,2}, h≥2 andk∈, whereR^i_n,hare definedin eq:Rnh1 and eq:Rnh2.According to eq:def-bnh and the definitions in Lemma <ref>, we have b_n,h=exp(-a_n log(γ_n-h/γ_n)),α_n=(γ_n-h-κ) (γ_n-h-1) and x_n=γ_n/γ_n-h.Since κ>1 (resp. γ>1, resp. κ<1) if μ<1 (resp. μ=1, resp. μ>1), and since h≥ 1, we deduce from eq:hqg-n, eq:equiv-gg-1c0 and eq:equiv-ggthat log(γ_n-h/γ_n), α_n and 1-x_n are ofthe same order μ^-n (resp. n^-2, resp. μ^n). In particular lim_n→∞α_n/(1-x_n) exists and is finite.Because of the hypothesis on (a_n, n∈), we deduce that lim_n→∞ a_n log(γ_n-h/γ_n)=+∞ and thus lim_n→∞ b_n,h=0 as well aslim_n→∞ b_n,h(α_n a_n)^k-1=0 as a_n log(γ_n-h/γ_n) and α_n a_n are of the same order. This gives lim_n→∞ R^1_n,h(k)=0 Since p(k) _k(Z_n-h=a_n)≤∑_i∈ p(i) _i(Z_n-h=a_n)=(Z_n-h+1=a_n), we deduce that:_k(Z_n-h=a_n)/(Z_n=a_n) ≤p(k)(Z_n-h+1=a_n)/(Z_n=a_n)= p(k)b_n, h-1(γ_n-h+1 - κ)(γ_n-h+1 -1)/(γ_n - κ)(γ_n -1)γ_n/γ_n-h+1,where we used that Z_ℓ has distribution [κ, γ_ℓ] andeq:hqg-n for the last equality. According to the previous paragraph, we have lim_n→∞ b_n,h-1=0 as h≥ 2. Furthermore, using eq:equiv-gg, we get that:lim_n→∞(γ_n-h+1 - κ)(γ_n-h+1 -1)/(γ_n - κ)(γ_n -1)γ_n/γ_n-h+1 = μ^-h+1.This implies that lim_n→∞_k(Z_n-h=a_n)/(Z_n=a_n) =0and thus lim_n→∞ R^2_n,h(k)=0. This finishes the proof.abbrv
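As a concrete illustration of the conditioning studied above, the event {Z_n = a_n} can be explored numerically. The sketch below (Python) simulates the generation sizes of a Galton-Watson tree with a plain geometric offspring law on {0,1,2,...} — a stand-in for the distribution p of eq:def-fp, whose exact parametrization in terms of q and η is fixed earlier in the paper — and realizes the conditioning by naive rejection. It is only meant to make the notion of an atypically large n-th generation tangible for small n and modest a_n; it does not reproduce the limit tree τ̂^∞ or the distributions [κ, γ_h].

```python
import numpy as np

rng = np.random.default_rng(0)

def generation_sizes(p, n, z0=1):
    """Generation sizes Z_0,...,Z_n of a GW tree with Geometric(p) offspring on {0,1,2,...}."""
    z = [z0]
    for _ in range(n):
        z.append(int(np.sum(rng.geometric(p, size=z[-1]) - 1)) if z[-1] > 0 else 0)
    return z

def sample_conditioned(p, n, a_n, max_tries=500_000):
    """Naive rejection sampler for (Z_0,...,Z_n) given Z_n = a_n; only usable for modest a_n."""
    for _ in range(max_tries):
        z = generation_sizes(p, n)
        if z[-1] == a_n:
            return z
    return None  # the conditioning event is too rare for rejection sampling

# Supercritical example: mean offspring mu = (1-p)/p ~ 1.22, so E[Z_6] ~ 3,
# and Z_6 = 15 is already a "fat" generation compared to the typical size.
print(sample_conditioned(p=0.45, n=6, a_n=15))
```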
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA E-mail: [email protected] Hot X-ray gas in the NLR of Mrk 3 BOGDÁN ET AL.We study the prototypical Seyfert 2 galaxy, Markarian 3, based on imaging and high-resolution spectroscopy observations taken by the Chandra X-ray Observatory. We construct a deconvolved X-ray image, which reveals the S-shaped morphology of the hot gas in the narrow line region (NLR). While this morphology is similar to the radio and [O III] emission, the distribution of the X-ray gas is broader than that obtained at these other wavelengths. By mapping the density and temperature distribution of the hot gas in the NLR, we demonstrate the presence of shocks towards the west (M=2.5^+1.0_-0.6) and east (M=1.5^+1.0_-0.5). Moreover, we compute the flux ratios between the[O III] and 0.5-2 keV band X-ray luminosity and show that it is non-uniform in the NLR with the western side of the NLR being more highly ionized. In addition, based on the Chandra grating data we investigate the line ratios of the Si XIII triplet, which are not consistent with pure photoionization. Based on these results, we suggest that in the NLR of Mrk 3 both photoionization and collisional ionization act as excitation mechanisms.We conclude that the canonical picture, in which photoionization is solely responsible for exciting the interstellar medium in the NLR of Seyfert galaxies, may be overly simplistic. Given that weak and small-scale radio jets are commonly detected in Seyfert galaxies, it is possible that shock heating plays a non-negligible role in the NLR of these galaxies. § INTRODUCTION Large optical surveys demonstrated that galaxies evolve through mergers from star-forming spirals, through a transition region, to massive elliptical galaxies <cit.>. Outflows and the energetic feedback from active galactic nuclei (AGN) are widely believed to play a crucial role in building the observed luminosity function of galaxies and in the co-evolution of supermassive black holes and their host galaxies <cit.>. However, it is still debated how the energy released from AGN is coupled with the surrounding matter.The NLR, located beyond the sphere of influence of supermassive black holes, provides an ideal laboratory to explore the connection between the central AGN and the host galaxy. Given that the typical extent of NLRs is in the range of few hundreds to about a thousand pc, for nearby AGN these regions are well resolved, allowing detailed morphological and diagnostic studies. Investigations of the NLR reveal the presence of bright narrow emission lines at a wide range of energies, from [O III] to X-rays <cit.>. The presence of emission lines at various wavelengths suggest a common ionization mechanism. A long-standing debate is whether the NLR of AGN is ionized by the nucleus or by shocks driven by radio jets. Although observational studies suggest that in radio galaxies jets may be responsible for the ionization of optical emission-line material <cit.>, the consensus is still lacking <cit.>. Similarly, the dominant emission mechanism is also debated in radio-quiet AGN. Studies of nearby Seyfert galaxies suggest that their X-ray spectrum is consistent with photoionized gas<cit.>. 
The main arguments hinting that photoionization is the main excitation mechanism are (1) the morphological similarity between the diffuse X-rayand the [O III] emission; (2) the approximately constant flux ratios between the[O III] and soft X-ray emission; and (3) the acceptable fit obtained by describing the observed spectra with a photoionized gas models. However, several studies hint that the role of collisional ionization may be non-negligible in Seyfert galaxies <cit.>. To probe the ionization mechanism of the hot gas in the NLR of Seyfert galaxies, it is indispensable to perform detailed studies of nearby Seyferts with prime multi-wavelength data. As we demonstrate below, Mrk 3 is the ideal candidate for such a study. Mrk 3 (UGC 3426) is an early-type (S0) galaxy[Morphological classification taken from HyperLeda.] at z=0.013509, which hosts a luminous Seyfert 2 AGN. Given its brightness and proximity, Mrk 3 was the subject of a wide range of multi-wavelength observations. The Hubble Space Telescope (HST) [O III] survey of nearby AGN showed that Mrk 3 is the second brightest source, after the spiral galaxy, NGC 1068 <cit.>. The HST images point out that the NLR of Mrk 3 exhibit a series of emission-line knots, which show an S-shaped morphology. Radio observations reveal the presence of a pair of jet knots, whose position angle is consistent with that of the NLR <cit.>. Based on the spectroscopic study carried out with HST, <cit.> argue that the NLR is a high-density shell that was shock heated by the jet. However, based on long-slit spectra obtained with HST and by utilizing photoionization models, <cit.> suggest that the NLR is dominated by photoionization. Thus, the mechanism responsible for ionizing the diffuse gas in the NLR of Mrk 3 remains a matter of debate. Mrk 3 has been explored with the X-ray grating spectrometers of Chandra and XMM-Newton. Based on the analysis of a 100 ks Chandra HETG observation and by extracting the spectrum of an 8 pixel (≈4 wide) region,<cit.> suggested that the main excitation mechanism of the X-ray emitting plasma is photoionization. In agreement with this, the XMM-Newton RGS spectrum of Mrk 3 also hinted that the soft X-ray emission is dominated by photoionized gas <cit.>. However, previousX-ray studies did not explore the NLR of Mrk 3 at spatial scales comparable to that of the HST and radio images. Indeed, these works probed the entire NLR as a single region and did not take into account its complex structure.Given the prototypical nature of Mrk 3 and the wealth of available multi-wavelength data, it is a prime laboratory to explore the mechanisms responsible for exciting the diffuse X-ray gas. In this work, we focus on analyzing Chandra observations of Mrk 3 to distinguish between collisional ionization from the small-scale radio jet and photoionization from the AGN radiation field. This project is greatly facilitated by the deep Chandra HETG observations that allow us to perform high-resolution spectroscopy of the NLR at 0.5 spatial scales – nearly an order of magnitude smaller scales than applied in previous X-ray grating studies.This work is structured as follows. In Section 2 we introduce the data and describe the main steps of the analysis. In Section 3 we present our results, namely discuss the deconvolved X-ray image of the diffuse emission, study the surface brightness and temperature structure of the hot gas, and probe the Chandra HETG spectra in seven distinct locations as a function of radius from the nucleus. 
We discuss our results in Section 4 and argue that both photoionization and collisional ionization play a role in the NLR of Mrk 3. We summarize in Section 5. The luminosity distance of Mrk 3 is D_ L = 58.8 Mpc and the corresponding angular scale is 278pcarcsec^-1. All uncertainties listed in the paper are 1σ errors. § THE CHANDRA DATA The Chandra X-ray Observatory observed Mrk 3 in nine pointings with HETG/ACIS-S for a total of389.3 ks. In addition, one pointing with an exposure time of 30.6 ks was done with Chandra ACIS-S in imaging mode. The details of the individual observations are listed in Table <ref>. The data were reduced with standard CIAO[http://cxc.harvard.edu/ciao/] software package tools (CIAO version 4.5, CALDB version 4.6.7). To analyze the HETG data, we reprocessed all observations, which assures that the most recent calibration updates are applied. We used standard CIAO tools to create the region masks (tg_create_mask) and extract the spectra (tg_extract). Throughout the analysis we only consider the first order dispersed spectra for the Medium Energy Grating (MEG) and High Energy Grating (HEG). To maximize the signal-to-noise ratios of the spectra, we combined the ±1 orders of each grating. To probe the spectral variability of Mrk 3, we investigated the individual exposures and found that the spectra are consistent and the count rates measured in the 6-10Å wavelength range exhibit ≲7% variations. Therefore, we combined the spectra from all individual observations to obtain a single first order HEG and MEG spectrum. Finally, we produced grating response files for each observations by employing the mkgarf and mkgrmf tools, which were then combined. The imaging observation was analyzed following the main steps outlined in <cit.>. First, we used the chandra_repro tool to reprocess the observations. Then we searched for high background periods using a light curve that was extracted from the 2.3-7.3 keV energy range, which band is the most sensitive to flares <cit.>. Using the deflare tool and applying 3σ clipping, we did not find any high background time periods, hence the total exposure time of the imaging observation remains 30.6 ks. Given that we aim to explore the gaseous X-ray emission around Mrk 3, bright point sources – mostly originating from low-mass X-ray binaries or background AGN – need to be identified and removed. To detect the point sources, we utilized thewavdetect tool. The resulting source regions were excluded from the analysis of the diffuse emission. Although we identify several point sources, including the nuclear source associated with the galaxy, aside from the central AGN none of the sources are in the proximity of the NLR. To account for the background emission when studying the diffuse emission, we utilize nearby regions, which assures the precise subtraction of both the instrumental and sky background components. Exposure maps were produced for the images to correct for vignetting effects. To create the exposure maps, we used a spectral weights file that was computed by assuming an optically-thin thermal plasma emission model with N_ H = 9.67 × 10^20 cm^-2 column density, kT= 0.85 keV temperature, and Z=0.24 Solar metallicity. 
This model represents the best-fit average spectrum of the hot gaseous emission in the NLR of Mrk 3 (see Section <ref>).§ RESULTS§.§ X-ray images of Mrk 3In Figure <ref> we depict the 0.3-2 keV (soft) and 4-8 keV (hard) band X-ray images of the central 20× 20 (5.56 × 5.56 kpc) region around Mrk 3 based on the sole imaging observation (Obs ID: 12293). Both images reveal the presence of a bright nuclear point source. However, the overall distribution of X-ray photons is strikingly different. The hard band image appears to be round and symmetric, whereas the soft band image shows an elongated structure in the east-west direction.To probe whether the distribution of photons can be explained by a bright point source, we construct the Chandra point spread function (PSF) for both energy ranges. Based on the hardband PSF we expect that ∼90% of the photons should be enclosed within a circular region with radius of 2. In agreement with this, we find that somewhat more than 90% of the photons are encircled within this radius, implying that the hard band emission can be explained with the bright AGN. The PSF extracted for the soft band predicts that the 90% encircled radius is 0.8. However, within this radius only ∼35% of the photons are included, implying that beyond the nuclear source an extended X-ray emitting component is present. This diffuse X-ray emitting component in the NLR of Mrk 3, originating from hot X-ray gas, is in the main focus of our study. §.§ Average properties of the hot gas in the NLRWe establish the nature and average characteristics of the extended emission within the NLR by extracting an X-ray energy spectrum using the ACIS-S imaging observation. We utilize a circular region with 2 (556 pc) radius centered on the center of Mrk 3. We note that this region covers most of the NLR. We fit the resulting spectrum with a two component model consisting of an absorbed optically-thin thermal plasma emission model (APEC) and a power law model. The thermal component describes the gaseous emission, while the power law component accounts for the emission associated with the nuclear source and the population of unresolved X-ray binaries. The column density was fixed at the Galactic value. The spectrum and the best-fit model is shown in Figure <ref>. Based on the fit performed in the 0.5-2 keV band, we confirm the presence of a significant gaseous component. The average best-fit temperature of the hot gas is kT=0.83±0.03 keV and the metallicity of Z=0.24^+0.24_-0.09 Solar using the <cit.> abundance table. Given the stellar mass of the galaxy (M_⋆ = 1.6 × 10^11 M_⊙), the metallicity of the gas is relatively low. Indeed, other massive early-type galaxies exhibit approximately Solar metallicities <cit.>, whereas lower mass gas-poor ellipticals have sub-Solar metallicities <cit.>, similar to that observed in Mrk 3. The slope of the power law component is Γ = 1.96±0.10, which is similar to that obtained by <cit.>, who performed a thorough analysis of the spectral properties of the AGN. In addition, these authors reported that Mrk 3 has a heavily absorbed continuum emission with N_ H = (0.8-1.1)×10^24 cm^-2. However, due to the high absorbing column this emission component does not add a notable contribution at energies below ≲5 keV, hence our results are not affected by this emission in any significant way. The absorption corrected 0.3-2 keV band luminosity of the thermal component is L_ 0.3-2keV = 6.7×10^40 ergs^-1. 
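The text does not state which fitting engine was used for this step; as an indication of how such a two-component fit can be set up within CIAO, the sketch below uses Sherpa with a hypothetical file name and starting values close to the reported best fit. The abundance parameter name follows the XSPEC apec convention, and the Galactic column is given in Sherpa's native units of 10^22 cm^-2.

```python
from sherpa.astro import ui

# Hypothetical file name; the actual extraction products are not named in the text.
ui.load_pha("nlr_2arcsec.pi")
ui.set_analysis("energy")
ui.subtract()                 # assumes a background spectrum is associated with the file
ui.notice(0.5, 2.0)           # energy range used for the fit of the gaseous component

# Absorbed optically-thin plasma plus a power law for the nucleus / unresolved binaries
ui.set_source(ui.xsphabs.gal * (ui.xsapec.hot + ui.powlaw1d.pl))
gal = ui.get_model_component("gal")
hot = ui.get_model_component("hot")
gal.nH = 0.0967               # Galactic column, fixed (9.67e20 cm^-2)
ui.freeze(gal.nH)
hot.kT = 0.8                  # starting value near the reported best fit
ui.thaw(hot.Abundanc)         # metallicity left free
ui.fit()
ui.covar()                    # 1-sigma parameter uncertainties
```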
We note that the observed X-ray luminosity and the gas temperature is in broad agreement with the scaling relations established for massive early-type galaxies <cit.>. Based on the best-fit spectral model we compute the emission measure of the gas and obtain ∫ n_e n_H dV = 7.9×10^63 cm^-3. By using an admittedly simplistic approach and assuming uniform density and spherical symmetry for the gas distribution, we estimate the average gas density n_e=0.61cm^-3 and obtain a total gas mass of M = 1.1×10^7M_⊙ within the studied volume. This gas mass is comparable to that obtained by <cit.> from HST observations of the NLR.§.§ High-resolution images§.§.§ Deconvolved X-ray image High-resolution radio and optical observations demonstrate the complex structure of the NLR. Given that the native 0.492 per pixel Chandra resolution is lower than the resolution of the radio and optical images, for a more appropriate comparison we enhance the resolution of the ACIS imaging. This allows us to explore the spatial structure of the diffuse X-ray emission at finer angular scales. To this end, we apply the Lucy-Richardson deconvolution algorithm.Since the observed Chandra image is the intrinsic brightness distribution of the source (in our case the hot gas in the NLR of Mrk 3) convolved with the point spread function (PSF) of the detector, it is indispensable to have a good understanding of the PSF. To construct an accurate image of the PSF, we used the Chandra Ray Tracer (ChaRT). Specifically, we ran a ray-trace simulation using the ChaRT web interface[http://cxc.harvard.edu/ciao/PSFs/chart2/runchart.html], which set of rays was then projected onto the detector-plane, resulting in a pseudo-event file. We binned this PSF event file to a fraction of the native ACIS pixels and created an image of the PSF in the 0.5-2 keV band. For the Lucy-Richardson deconvolution we utilized the 0.5-2 keV band X-ray image binned to 30% of the native ACIS resolution and the similarly binned PSF image obtained from ChaRT. We used the CIAO arestore task to carry out the deconvolution and iterated 100 times. While we tried to construct deconvolved images at different resolutions, we found that for the NLR of Mrk 3 the best result is achieved if 30% of the ACIS resolution is applied. The deconvolved Chandra image, shown in Figures <ref> and <ref>, have a pixel size of 0.148. We note that the applied Lucy-Richardson deconvolution technique tends to sharpen features. Therefore, the true X-ray light distribution is slightly more extended than seen on the deconvolved images.§.§.§ Comparing the X-ray and [O III] morphologyThe morphological similarity and the nearly uniform flux ratios between the [O III] line emission and the gaseous X-ray emission were used to argue that photoionization is the main excitation mechanism in Seyfert galaxies<cit.>. In addition, <cit.> studied a sample of radio galaxies and obtained similar conclusions. To this end, we confront the [O III] and X-ray morphology and flux ratios (ℛ_ [O III]/X = F_ [O III]/F_ 0.5-2keV) in the NLR of Mrk 3. In Figure <ref> we present the deconvolved Chandra image of the central regions of Mrk 3 and over plot the intensity levels of the [O III] λ5007 emission observed by the HST. The [O III] image was taken with the Faint Object Camera at an angular resolution of ≈0.1.There is an overall agreement between the distribution of the X-ray light and the [O III] intensity levels as both images exhibit a characteristic S-shaped morphology. 
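Returning briefly to the average gas properties derived above, the uniform-sphere estimate can be reproduced in a few lines. The sketch below assumes n_e ≈ n_H (the precise electron-to-hydrogen ratio adopted in the text is not stated) and recovers n_e ≈ 0.6 cm^-3 and M ≈ 1×10^7 M_⊙ from the quoted emission measure.

```python
import numpy as np

pc, m_H, M_sun = 3.086e18, 1.6726e-24, 1.989e33   # cgs units

EM = 7.9e63                    # emission measure, integral of n_e n_H dV [cm^-3]
r = 2.0 * 278.0 * pc           # 2" aperture at 278 pc per arcsec
V = 4.0 / 3.0 * np.pi * r**3   # uniform, spherical volume

n_e = np.sqrt(EM / V)          # assuming n_e ~ n_H  ->  ~0.6 cm^-3
M_gas = n_e * m_H * V / M_sun  # ->  ~1.1e7 solar masses
print(f"n_e = {n_e:.2f} cm^-3,  M_gas = {M_gas:.1e} M_sun")
```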
However, the emission from the hot X-ray gas has a broader distribution and surrounds the [O III] emission.To compute the flux ratios, we utilize the [O III] fluxes measured by <cit.> and the X-ray luminosity obtained from the 0.5-2 keV band Chandra images. Given thedifferent angular resolution of the two images, we derive the average flux ratios in two regions corresponding to the east and west of the NLR within 1 radius from the nucleus. The flux ratios are different on the two sides of the AGN. Specifically, we obtain ℛ_ [O III]/X≈ 5.6 towards the east and a significantly lower value, ℛ_ [O III]/X≈ 2.2, towards the west. The former ratio agrees with those obtained by <cit.>, suggesting that photoionization plays a notable role to excite the gas.However, the ℛ_ [O III]/X ratio on the western side of Mrk 3 is significantly lower and is comparable with sources that contain small-scale radio sources <cit.>. Therefore, the higher ionization state towards the west hints that the interaction between the jet and ISM may play – at least – a complementary role in the ionization of the gas. Thus, the non-uniform flux ratios and the broader distribution of the X-ray emission in the NLR of Mrk 3 suggest that photoionization may not be the sole excitation mechanism.§.§.§ Comparing the X-ray and radio morphology The high-resolution 18 cm EVN and Merlin radio images of Mrk 3 (see the intensity levels in Figure <ref>) reveal jets with an S-shaped structure and a remarkable hotspot on the western side <cit.>. These authors suggest that the S-shaped morphology of the radio jet may either be due to a change in the jet axis or from the jet interacting with the rotating interstellar medium. Moreover, they suggest that the characteristics of the hotspot on the west may signify the presence of a shock, where the radio jet is interacting with the surrounding material. On the eastern side a similar hotspot is not observed, but two bright radio components are present at <100 pc from the nucleus. These features are marked as R1 and R2 in Figure <ref>. <cit.> suggest that these radio components may have played a role in thermalizing the kinetic energy of the eastern jet,hence reducing the jet's Mach number and leading to a weaker eastern shock. To compare the morphology of the X-ray and radio emission,in Figure<ref> we show thedeconvolved X-ray image with the 18 cm radio intensity levels over plotted. This image reveals that the overall morphology between the X-ray and radio structure is similar since both images show the S-shaped morphology. However, the radio jets are narrow, whereas the gaseous X-ray emission is significantly broader and surrounds the radio emission. This hints that collisional ionization may play a role and the gas may be driven by shocks <cit.>. Based on the prediction of shocks in the X-ray surface brightness, we investigate the X-ray data to identify potential shocks in the NLR. §.§ Detection of shocks in the NLRTo search for possible shocks in the NLR, we investigate the surface brightness and temperature distribution of the X-ray gas. We extract the profiles using circular wedges with position angles of 135-225 and 315-405, where 0 and 90 correspond to west and north, respectively. While both the surface brightness and temperature profiles are extracted using wedges with these position angles, the width of the individual wedges are different. 
To extract the surface brightness profiles, we used regions with widths of 0.5-1, and for the temperature profile the extraction regions had widths of 1-6, depending on the brightness of the diffuse emission. The surface brightness profiles, extracted from the 0.3-2 keV energy range towards the east and west of the NLR, are depicted in the left panel of Figure <ref>. Along with the surface brightness profiles, we also show the expected brightness distribution of the PSF obtained from ChaRT. As discussed in Section <ref>, the PSF has a significantly narrower distribution than the diffuse emission, thereby demonstrating that the extended emission cannot be associated with the bright nuclear source. The surface brightness profiles reveal a notable jump at ∼2 towards the east and west. Although the surface brightness profiles demonstrate the presence of jumps, de-projection analysis needs to be performed to determine the exact position and the magnitude of the corresponding density jumps. Therefore, we utilize the proffit software package tools <cit.> and construct de-projected density profiles. We assume spherical symmetry for the gas density within each wedge and that the gas density can be described with a broken power law model inside and outside the edge. We obtain density jumps of n_1/n_0=2.68±0.54 at r_cut = 2.14±0.08 (or 595±22 pc) towards the west and n_1/n_0=1.72±0.26 at r_cut = 1.80±0.08 (or 478±22 pc) towards the east. In the right panel of Figure <ref>, we show the temperature profile of the hot gas towards the east and west. To measure the temperature of the gas, we construct X-ray spectra of each region and fit them with a model consisting of an APEC thermal emission model with the metallicities fixed at Z=0.24 Solar (Section <ref>) and a power law model. The latter component accounts for the emission arising from the AGN at r≲2 and from the population of unresolved low-mass X-ray binaries at r≳2 <cit.>. The best-fit spectra are depicted in Appendix A. The profiles reveal a significant drop at ∼2 towards the west and a smaller drop towards the east. Specifically, we observe a temperature drop of T_1/T_0=1.23±0.24 and T_1/T_0=2.67±0.39 towards the east and west. Based on the presence of a sharp surface brightness jump and the observed density and temperature ratios across the edge, we conclude that the discontinuity on the western side of Mrk 3 is a shock front <cit.>. To compute the shock Mach number (M≡ v/c_s) and the corresponding velocity, we utilize the Rankine-Hugoniot jump conditions <cit.>, which directly connect the pre-shock and post-shock density and temperature with the Mach number. Given that we measure both the density and temperature jumps in the NLR of Mrk 3, we can derive the Mach numbers using these two independent approaches. Based on the pre-shock and post-shock densities we find M=2.5^+1.0_-0.6 towards the west and M=1.5±0.2 towards the east. Based on the magnitude of the temperature jump, we derive the shock Mach numbers of M=2.4±0.3 and M=1.5^+1.0_-0.5 towards the west and east, respectively. We emphasize that the Mach numbers obtained from the density and temperature jumps are in excellent agreement with each other. For a 0.7 keV plasma the sound speed is c_s = √(γ kT/(μ m_H)) = 420 km s^-1, using γ = 5/3 and μ = 0.62.
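For reference, the inversion of the jump conditions can be written down in a few lines. The sketch below implements the standard Rankine-Hugoniot density and temperature jumps for a γ=5/3 gas and recovers Mach numbers close to the quoted central values (exact agreement depends on how the measurement uncertainties are propagated), together with the sound speed and the implied shock speeds discussed next.

```python
import numpy as np
from scipy.optimize import brentq

gamma = 5.0 / 3.0

def density_jump(M):
    """Post-/pre-shock density ratio for a shock of Mach number M."""
    return (gamma + 1.0) * M**2 / ((gamma - 1.0) * M**2 + 2.0)

def temperature_jump(M):
    """Post-/pre-shock temperature ratio for a shock of Mach number M."""
    return ((2.0 * gamma * M**2 - (gamma - 1.0)) *
            ((gamma - 1.0) * M**2 + 2.0)) / ((gamma + 1.0)**2 * M**2)

def mach_from(ratio, jump):
    """Invert a jump condition to recover M from a measured ratio (> 1)."""
    return brentq(lambda M: jump(M) - ratio, 1.0 + 1e-6, 50.0)

# Measured density and temperature jumps (central values) on the two sides
for side, n_ratio, T_ratio in [("west", 2.68, 2.67), ("east", 1.72, 1.23)]:
    print(side, "M(density) =", round(mach_from(n_ratio, density_jump), 2),
          " M(temperature) =", round(mach_from(T_ratio, temperature_jump), 2))

# Sound speed of a kT = 0.7 keV, mu = 0.62 plasma and the implied shock speeds
kT, mu, m_H = 0.7 * 1.602e-9, 0.62, 1.6726e-24        # cgs
c_s = np.sqrt(gamma * kT / (mu * m_H)) / 1.0e5        # km/s, ~420
print("c_s ~", round(c_s), "km/s ;  v ~", round(2.5 * c_s), "(west),", round(1.5 * c_s), "(east) km/s")
```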
Hence, the velocities are v = 1050^+420_-252 kms^-1 and v = 630±84kms^-1 towards the west and east, respectively.The presence of shocks suggests that the hot gas in the NLR of Mrk 3 is undergoing collisional ionization due to the interaction between the radio jet and the circum nuclear material.§.§ High-resolution spectrum of the NLRThe bipolar morphology of the X-ray emitting gas combined with the presence of small-scale radio jets and the detection of shocks suggest the presence of an outflow.To characterize the properties of the outflow, we utilize the HETG spectrum of the NLR. Due to the relative proximity of Mrk 3 and the superb angular resolution of Chandra, we can perform spatially resolved X-ray spectral diagnostics on the outflow. To this end, we study the emission line spectra at two locations on either side of the nucleus with extraction regions that have a width of 0.5 (139 pc). Although the outflow is traced out to larger radii in the ACIS images, the signal-to-noise ratio is not sufficiently high to explore the high-resolution spectrum of the outflow beyond these regions. In Figure <ref> we show the dispersed spectra of the individual extraction regions in the most relevant 6-9Å region. To obtain these spectra, the plus and minus orders were combined, and the depicted wavelengths are corrected for the cosmological redshift. To fit the spectral lines, we utilize a model consisting of an absorbed power law model to account for the continuum and a series of Gaussian lines. When fitting the lines, we fixed the slope of the power law at Γ=1.7, but left the normalization as a free parameter.Additionally, the centroid, width, and normalization of the Gaussian lines were also free parameters. The best fit line centroid wavelengths were corrected for the cosmological redshift and then compared with the laboratory measurements of strong lines based on the NIST data base <cit.>. The spectra reveal a series of H-like and He-like emission lines along with fluorescence lines. In general, the set of identified lines and their best-fit wavelengths are in good agreement with those of <cit.>. In this work, we compare the best-fit line centroids of the strongest emission lines between the east and west side of the nucleus and probe whether the outflowing gas shows significant blue-shift or redshift. Detailed modeling of the emission spectrum lines will be subject of a future paper. Based on the HETG spectra we find that, within measurement uncertainties, the line centroids towards the east and west agree with the laboratory wavelengths (Table <ref>). In addition, they do not show a statistically significant difference between the east and west side of the NLR. Specifically, all lines are within the 1σ uncertainties with the expected wavelengths, except for the Si line we measure a 1.5σ offset from the laboratory wavelength. In the absence of red-shifted and blue-shifted line centroids, we place upper limits on the outflow velocity of the hot gas. The upper limits typically remain below a few hundred kms^-1. The detailed constraints on the outflow velocity of the gas are listed in (Table <ref>). We note that these velocities are significantly lower than those inferred from the Rankine-Hugonoit jump conditions (Section <ref>). This difference is likely caused by the orientation of NLR, which has an inclination of 5 implying that is it is virtually in the plane of the sky <cit.>. 
Therefore, if the outflowing gas propagates along the plane of the sky and does not have a significant velocity towards (and away) from the observer, the projected velocities will be close to 0. Hence, the low outflow velocities computed from the HETG data indirectly point out that the outflow propagates almost in the plane of the sky. The low observed outflow velocities observed from the Chandra HETG data are at odds with the results of <cit.>, who identified [O III] emission lines shifted with several hundreds kms^-1. We speculate that the observed velocity difference might be due to the decoupled nature of the cold [O III] and the hot X-ray gas. In this picture, the rotating cold gas has a different velocity and temperature structure than the X-ray gas.Therefore, the shock driven by the radio jet will not drag the cold and hot gas components with the same velocity, implying that these gaseous components remain decoupled. In addition, we mention that due to the ≈0.1 angular resolution of the HST Faint Object Camera, <cit.> extracted narrow regions that were mostly coincident with the locations of bright radio components. As opposed to these, the Chandra HETG spectra cover notably larger regions with 0.5 width, implying that these regions include brighter and fainter parts of the emission. This difference might also contribute to the observed velocity difference. However, further exploring the velocity difference would require a dedicated analysis, which is beyond the scope of this paper. §.§ Line ratios of He-like ions The line ratios of He-like triplets, and in particular the G ratios, are suitable to probe the ionization state of the gas. Due to the high energy resolution of HETG, the three most intense lines, namely the resonance (1s^2^1S_0 - 1s2p^1P_1), the intercombination (1s^2^1S_0 - 1s2p^3P_2,1), and the forbidden lines (1s^2^1S_0 - 1s2s^3S_1), can be individually resolved. Following <cit.>, we derive the G ratio as: G (T_e) = F+I/R . where R, I, and F refer to the resonance, intercombination, and forbidden line strengths, respectively. In Mrk 3, the most prominent He-like ion is the Si XIII triplet at ∼6.7Å. The He-like lines of Mg and Ne are also detected, but these lines are significantly weaker due to the lower effective area of MEG and the relatively high absorbing column, and hence, cannot be used to compute constraining G ratios.Based on a 100 ks Chandra HETG observation <cit.>concluded that the G-ratios are inconsistent with a pure collisional plasma and are marginally consistent with a photoionized plasma. However, these line ratios were obtained by treating the entire NLR as a single region. Due to the presence of shocks, it is feasible that the line ratios show variation in the east-west direction. Therefore, we characterize the ionization state of the gas as a function of central distance by computing the G ratios of the Si XIII triplet in seven distinct locations. The central region is centered on the nucleus of Mrk 3 and has a width of 0.5, while the regions at 0.5 (139 pc), 1 (278 pc), and 1.5 (417 pc) radii towards the east and west each comprise 0.5 wide extraction regions. To fit the lines, we used Gaussian line profiles (agauss in XSpec). Based on the fits we find that the G-ratios in the NLR are in the range of 𝒢 = 0.7-1.1 and 𝒢 = 1.6±0.7 in the center. Although the G-ratios are comparable within uncertainties at every radius, the line strengths of the resonance, intercombination, and forbidden lines exhibit stark differences. 
As demonstrated in Figure <ref>, the intercombination line is weak or virtually absent towards the west, while it is prominent in the central region and in the east of the NLR. In addition, the intensities of the resonance and forbidden lines are comparable towards the west, while the resonance lines are factor of about 3 times stronger than the forbidden lines towards the east. These results hint that multiple processes may be responsible for ionizing the hot gas. Although the G-ratios are similar to those expected for collisional plasma, the observed values may be influenced by resonance line scattering, which is relevant for high absorbing column densities (N_ HI≳ 10^21 cm^-2). This, in turn, could enhance the intensities of the resonance lines, thereby decreasing the G-ratios of a photoionized plasma and mimicking collisional ionization <cit.>. To probe whether resonance line scattering plays a role in the NLR of Mrk 3, we rely on <cit.> who probed the geometry of the NLR and the extinction as a function of angular position. These authors found that Mrk 3 hosts inner gas disks, which results in a positive extinction gradient from west to east. Specifically, <cit.> measured E(B-V)=0.12-0.16 towards the west and E(B-V)=0.2-0.4 towards the east. We convert these values to hydrogen column densities following <cit.> as N_ HI=5.2 × 10^21 cm^-2×E(B-V), and conclude that theE(B-V) color excess corresponds to N_ HI = (0.6-0.8) × 10^21 cm^-2 and N_ HI = (1.0-2.1) × 10^21 cm^-2 towards the east and west, respectively. Thus, resonance line scattering is expected to increase the resonance line intensities, and hence decrease the G-ratios towards the west. As opposed to this, due to the relatively low column densities, resonance line scattering is not expected to significantly influence the observed G-ratios towards the east.Overall, the line intensities of the Si XIII tripletand the G-ratios hint that both excitation mechanisms – photoionization and collisional ionization – may be present in the NLR of Mrk 3. Specifically, in the central regions and towards the east the main ionizing mechanism may be photoionization, whereas collisional ionization may play a role on the west.§ DISCUSSION§.§ Excitation mechanismsThere is a significant debate about the ionization process of the thermal gas in the NLR of Seyfert galaxies. The observed X-ray emission may either originate from photoionized gas or may be due to gas shock heated by the radio jet. Detailed morphological studies of a sample of Seyfert galaxies pointed out the nearly constant OIII-to-X-ray flux ratios in the NLR <cit.>. Specifically, these studies found the median value of ℛ_[O III]/X = 5 and a scatter of about 0.3 dex. These arguments suggest that a common ionizing source, photoionization from the nuclear source, may be responsible for the observed emission. However, this simple picture may break down when galaxies with small-scale radio jets are investigated. These galaxies exhibit lower O III-to-X-ray flux ratios, indicating a higher level of ionization. This implies that photoionization may not be the only ionizing source, but the interaction between the radio jets and the dense ISM may also play a role. The picture, in which photoionization is the main ionization mechanism, is further challenged when the morphology of the X-ray gas and radio emission is compared. Specifically, in several radio galaxies (e.g. 
3C 293, 3C 305, NGC 4258) the X-ray emission exhibits a broader distribution than the radio jets, hinting that shock heating may play a role in heating the gas to X-ray temperatures <cit.>. Our results obtained for the NLR of Mrk 3 can be summarized as follows.* The X-ray gas and the [O III] emission share similar morphology. However, the X-ray light distribution is more extended in the east-west direction than the [O III] emission. * The [O III]-to-X-ray flux ratios are non-uniform across the NLR. In the central regions and in the east they are ℛ_ [O III]/X≈ 5.6, while towards the west the observed ratio drops toℛ_ [O III]/X≈ 2.2.* The X-ray and radio morphology shows generally similar structures, but the X-ray emission is significantly broader and surrounds the radio emission. * We detect shocks with M= 2.4±0.3 and M=1.5^+1.0_-0.5 toward the west and east, respectively.The shock front towards the west is approximately consistent with the locations of the radio hot spot. * The line ratios of the Si XIII triplets do not favor photoionization as the sole ionizing source in the western regions of the NLR.Overall, these results strongly suggest that photoionization and collisional excitation commonly act as excitation mechanisms in the NLR of Mrk 3. This result is at odds with the canonical picture, which hypothesized that photoionization is the main excitation mechanism <cit.>. However, this canonical picture may be overly simplistic and does not reflect the complexity of Seyfert galaxies, most of which produce small-scale, weak, bipolar radio-emitting jets <cit.>. Indeed, small-scale radio jets that are confined within the host galaxy are expected to interact with the surrounding dense interstellar material, which can give rise to shock heating <cit.>. Therefore, it is feasible that shock heating plays a general, possibly complementing, role in the ionization of the gas surrounding the nuclei. §.§ Large-scale gasTo study the diffuse emission on galaxy scales, we extract an X-ray energy spectrum using an elliptical region with 54.5 and38.2 axis radii with position angle of 20. This region corresponds to the total elliptical aperture radius of the galaxy as measured by the 2MASS Large Galaxy Atlas <cit.>. Since we aim to study the large-scale diffuse emission, we omit the counts originating from the NLR by excluding en elliptical region with 3.3× 2.2 radii centered on the center of Mrk 3.To fit the spectrum of the large-scale diffuse emission, we employ a two component model consisting of an absorbed apec thermal emission model and a power law model. As before, we fixed the column density at the Galactic value and the slope of the power law at Γ=1.56.We find a best-fit temperature and abundance of kT=0.77±0.05 keV and Z=0.09^+0.08_-0.04 Solar. With these parameters we obtain the absorption corrected 0.3-2 keV band luminosity of L_0.3-2keV = 4.9×10^40 ergs^-1, which corresponds to the bolometric luminosity of L_bol = 8.3 ×10^40 ergs^-1. Based on the normalization of the spectrum, we compute the emission measure of the gas and compute the total gas mass following Section <ref> and obtain M_ gas = 1.0×10^9 M_⊙. To place the X-ray luminosity and gas mass of the galaxy into a broader context, we compute the X-ray luminosity per unit K-band luminosity. We derive the K-band luminosity of the the galaxy based on its apparent K-band magnitude (m_ K = 8.97) and obtain L_ K = 1.8×10^11 L_⊙. 
Using the 0.3-2 keV band X-ray luminosity, we find that the specific X-ray emissivity of Mrk 3 is L_0.3-2keV/M_⋆ = 2.7×10^29 ergs^-1 L^-1_K,⊙. This value exceeds that obtained in low luminosity ellipticals, but are comparable to emissivities found in more massive (non-BCG) ellipticals <cit.>. Although the NLR demonstrated an outflow in the east-west direction, it is not clear whether the gas is expelled from the galaxy or it is retained in the gravitational potential well. If a galactic-scale outflow is present, it may be either powered by the the energy input of Type Ia Supernovae or from the AGN. In this picture, the outflowing gas is replenished by the stellar yields originating from evolved stars, which are estimated to shed mass at a rate of 0.0021L_K/L_K,⊙ M_⊙ Gyr^-1 <cit.>. Given the K-band luminosity of Mrk 3, we estimate that the mass loss rate from evolved stars is Ṁ = 0.38M_⊙ yr^-1. This implies that the replenishment time scale of the total observed gas mass is about t_repl = 2.6×10^9 years. To lift the gas from the potential well of the galaxy, we require E_ lift = 7.2 Ṁσ^2 <cit.>, where σ=274kms^-1 corresponds to the central stellar velocity dispersion. We thus find that the total energy required to lift the gas isE_ lift = 4.1×10^41 ergs^-1. The available energy from Type Ia Supernova can be computed by assuming that each supernova releases 10^51 ergs^-1 energy and by computing the Type Ia Supernova rate of the galaxy using the frequency established by <cit.> and theK-band luminosity of the galaxy. Hence we obtain the Type Ia Supernova frequency of 6.4×10^-3 yr^-1, implying the total energy of E_SNIa = 2.0×10^41 ergs^-1. This value falls factor of about two short of the energy required to lift the gas from the potential of Mrk 3, hinting that Type Ia Supernovae cannot provide sufficient energy to drive a galaxy-scale outflow. The minimum energy required to drive a galactic-scale outflow is about factor of five lower than the kinetic energy (E_ kin≳ 2× 10^42 ergs^-1) from the AGN <cit.>, hinting that the AGN is able to expel the gas from the galaxy. However, the large hot gas mass in the galaxy combined with the long replenishment time of the gas argues against the existence of a large-scale outflow that would remove the gas from the gravitational potential well of the galaxy. Instead, it is more likely that the energy from the AGN plays a role in heating the X-ray gas, possibly driving it to larger radii. § SUMMARYIn this work we analyzed Chandra X-ray observations of the NLR of Markarian 3. By combining imaging and grating spectroscopy data, we achieved the following conclusions:* We confirmed the presence of X-ray emitting gas in the NLR of the galaxy. The average gas temperature and metallicity is kT=0.85 keV and Z=0.24 Solar. * We deconvolved the X-ray image to probe the structure of the gas at small angular scales. The X-ray morphology of the hot gas was confronted with the radio and [O III] morphology. We found that while the X-ray gas exhibits an S-shaped morphology, which is similar to those observed in other wavelengths, the hot gaseous emission has a broader distribution than the radio or [O III] emission. * We demonstrated the presence of shocks towards the west (M=2.4±0.3) and towards the east (M=1.5^+1.0_-0.5). This detection suggests that shock heating due to the interaction between the radio jets and the dense interstellar material may play a non-negligible role in the ionization of the gas. 
* Spectroscopic analysis of the Si XIII triplet (resonance, intercombination, forbidden) lines suggests that both photoionization and collisional ionization may excite the hot gas. * Using the high-resolution spectra we compared the best-fit line centroids between the east and west sides of the NLR. We did not find statistically significant differences, which hints at low projected outflow velocities that are significantly lower than those inferred from the Rankine-Hugonoit jump conditions. This difference implies that the outflow likely propagates along the plane of the sky. * Given the common nature of small-scale radio jets in Seyfert galaxies, it is feasible that collisional ionization plays a role in the excitation of the hot gas in the NLR of other Seyfert galaxies as well.Acknowledgements. We thank the referee for the constructive comments. This research has made use of Chandradata provided by the Chandra X-ray Center. The publication makes use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. In this work the NASA/IPAC Extragalactic Database (NED) has been used. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). Á.B., R.P.K, W.R.R acknowledges support for the Smithsonian Institution. F.A-S. acknowledges support from Chandra grant GO3-14131X.[Anders & Grevesse(1989)]anders89 Anders, E., & Grevesse, N. 1989, , 53, 197[Balmaverde et al.(2012)]balmaverde12 Balmaverde, B., Capetti, A., Grandi, P., et al. 2012, , 545, A143[Baum et al.(1992)]baum92 Baum, S. A., Heckman, T. M., & van Breugel, W. 1992, , 389, 208[Bell et al.(2004)]bell04 Bell, E. F., Wolf, C., Meisenheimer, K., et al. 2004, , 608, 752[Best et al.(2000)]best00 Best, P. N., Röttgering, H. J. A., & Longair, M. S. 2000, , 311, 23[Benson et al.(2003)]benson03 Benson, A. J., Bower, R. G., Frenk, C. S., et al. 2003, , 599, 38[Bianchi et al.(2005)]bianchi05 Bianchi, S., Miniutti, G., Fabian, A. C., & Iwasawa, K. 2005, , 360, 380[Bianchi et al.(2006)]bianchi06 Bianchi, S., Guainazzi, M., & Chiaberge, M. 2006, , 448, 499[Bogdán et al.(2012)]bogdan12 Bogdán, Á., David, L. P., Jones, C., Forman, W. R., & Kraft, R. P. 2012, , 758, 65[Bogdán & Gilfanov(2008)]bogdan08 Bogdán, Á., & Gilfanov, M. 2008, , 388, 56[Bogdán & Gilfanov(2011)]bogdan11 Bogdán, Á., & Gilfanov, M. 2011, , 418, 1901[Capetti et al.(1999)]capetti99 Capetti, A., Axon, D. J., Macchetto, F. D., Marconi, A., & Winge, C. 1999, , 516, 187[Collins et al.(2005)]collins05 Collins, N. R., Kraemer, S. B., Crenshaw, D. M., et al. 2005, , 619, 116[Collins et al.(2009)]collins09 Collins, N. R., Kraemer, S. B., Crenshaw, D. M., Bruhweiler, F. C., & Meléndez, M. 2009, , 694, 765[Crenshaw et al.(2010)]crenshaw10 Crenshaw, D. M., Kraemer, S. B., Schmitt, H. R., et al. 2010, , 139, 871[Croton et al.(2006)]croton06 Croton, D. J., Springel, V., White, S. D. M., et al. 2006, , 365, 11[David et al.(2006)]david06 David, L. P., Jones, C., Forman, W., Vargas, I. M., & Nulsen, P. 2006, , 653, 207[Eckert et al.(2011)]eckert11 Eckert, D., Molendi, S., & Paltani, S. 2011, , 526, A79[Emery et al.(2017)]emery17 Emery, D. L., Bogdán, Á., Kraft, R. P., et al. 2017, , 834, 159[Faber et al.(2007)]faber07 Faber, S. M., Willmer, C. N. A., Wolf, C., et al. 2007, , 665, 265[Gilfanov(2004)]gilfanov04 Gilfanov, M. 2004, , 349, 146[Goulding et al.(2016)]goulding16 Goulding, A. D., Greene, J. E., Ma, C.-P., et al. 2016, , 826, 167[Guainazzi et al.(2016)]guainazzi16 Guainazzi, M., Risaliti, G., Awaki, H., et al. 
2016, , 460, 1954[Heckman & Best(2014)]heckman14 Heckman, T. M., & Best, P. N. 2014, , 52, 589[Hickox & Markevitch(2006)]hickox06 Hickox, R. C., & Markevitch, M. 2006, , 645, 95[Irwin et al.(2003)]irwin03 Irwin, J. A., Athey, A. E., & Bregman, J. N. 2003, , 587, 356[Jarrett et al.(2003)]jarrett03 Jarrett, T. H., Chester, T., Cutri, R., Schneider, S. E., & Huchra, J. P. 2003, , 125, 525[Ji et al.(2009)]ji09 Ji, J., Irwin, J. A., Athey, A., Bregman, J. N., & Lloyd-Davies, E. J. 2009, , 696, 2252[Knapp et al.(1992)]knapp92 Knapp, G. R., Gunn, J. E., & Wynn-Williams, C. G. 1992, , 399, 76[Kukula et al.(1993)]kukula93 Kukula, M. J., Ghosh, T., Pedlar, A., et al. 1993, , 264, 893[Kukula et al.(1999)]kukula99 Kukula, M. J., Ghosh, T., Pedlar, A., & Schilizzi, R. T. 1999, , 518, 117[Lal et al.(2004)]lal04 Lal, D. V., Shastri, P., & Gabuzda, D. C. 2004, , 425, 99[Landau & Lifshitz(1959)]landau59 Landau, L. D., & Lifshitz, E. M. 1959, Course of theoretical physics, Oxford: Pergamon Press, 1959, [Lanz et al.(2015)]lanz15 Lanz, L., Ogle, P. M., Evans, D., et al. 2015, , 801, 17[Maksym et al.(2016)]maksym16 Maksym, W. P., Fabbiano, G., Elvis, M., et al. 2016, arXiv:1611.05880[Mannucci et al.(2005)]mannucci05 Mannucci, F., Della Valle, M., Panagia, N., et al. 2005, , 433, 807[Massaro et al.(2009)]massaro09 Massaro, F., Chiaberge, M., Grandi, P., et al. 2009, , 692, L123[Markevitch & Vikhlinin(2007)]markevitch07 Markevitch, M., & Vikhlinin, A. 2007, , 443, 1[Massaro et al.(2009)]massaro09 Massaro, F., Chiaberge, M., Grandi, P., et al. 2009, , 692, L123[Nesvadba et al.(2008)]nesvadba08 Nesvadba, N. P. H., Lehnert, M. D., De Breuck, C., Gilbert, A. M., & van Breugel, W. 2008, , 491, 407[Netzer(2015)]netzer15 Netzer, H. 2015, , 53, 365[Porquet et al.(2001)]porquet01 Porquet, D., Mewe, R., Dubau, J., Raassen, A. J. J., & Kaastra, J. S. 2001, , 376, 1113[Porquet et al.(2010)]porquet10 Porquet, D., Dubau, J., & Grosso, N. 2010, , 157, 103[Pounds & Page(2005)]pounds05 Pounds, K. A., & Page, K. L. 2005, , 360, 1123[Robinson et al.(2000)]robinson00 Robinson, T. G., Tadhunter, C. N., Axon, D. J., & Robinson, A. 2000, , 317, 922[Sako et al.(2000)]sako00 Sako, M., Kahn, S. M., Paerels, F., & Liedahl, D. A. 2000, , 543, L115[Schawinski et al.(2014)]schawinski14 Schawinski, K., Urry, C. M., Simmons, B. D., et al. 2014, , 440, 889[Schmitt et al.(2003)]schmitt03 Schmitt, H. R., Donley, J. L., Antonucci, R. R. J., et al. 2003, , 597, 768 [Shull & van Steenberg(1985)]shull85 Shull, J. M., & van Steenberg, M. E. 1985, , 298, 268[Thean et al.(2000)]thean00 Thean, A., Pedlar, A., Kukula, M. J., Baum, S. A., & O'Dea, C. P. 2000, , 314, 573[Verner et al.(1996)]verner96 Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, , 465, 487[Wilson et al.(2001)]wilson01 Wilson, A. S., Yang, Y., & Cecil, G. 2001, , 560, 689 Appendix A:
Department of Physics and Astronomy, University of Missouri, Columbia MissouriDepartment of Physics, King's College London, London WC2R 2LS, United KingdomDepartment of Applied Mathematics and Physics, Tottori University, Tottori 680-8552, JapanDepartment of Physics, Case Western Reserve University, Cleveland, OH 44106-7079The electronic band structure of SrTiO_3 is investigated in the all-electron QSGW approximation. Unlike previous pseudopotential based QSGW or single-shot G_0W_0 calculations, the gap is found to be significantly overestimated compared to experiment. After putting in a correction for the underestimate of the screening by the random phase approximation in terms of a 0.8Σ approach, the gap is still overestimated. The 0.8Σ approach is discussed and justified in terms of various recent literature results including electron-hole corrections. Adding a lattice polarization correction (LPC) in the q→0 limit for the screening of W, agreement with experiment is recovered. The LPC is alternatively estimated using a polaron model. We apply ourapproachto the cubic and tetragonal phases as well as a hypothetical layered post-perovskite structure and find that the LDA (local density approximation) to GW gap correction is almost independent of structure. All-electron quasi-particle self-consistent GW band structures for SrTiO_3 including lattice polarization corrections in different phases Walter R. L. Lambrecht December 30, 2023 ===========================================================================================================================================§ INTRODUCTION It is well known that the density functional theory in its commonly used local density and generalized gradient approximations (LDA and GGA) does not provide accurate electronic band structures and in particular underestimates band gaps. This is by now recognized to be mostly because the Kohn-Sham eigenvalues in this theory should not be interpreted as one-electron excitations. To calculate the latter,a many-body-perturbation theory, including a dynamical self-energy, such as the GW approximation, provide a much better justified and more accurate framework. For standard tetrahedral semiconductors, the GW method has been shown to provide accurate gaps. Still, this depends on details of the implementation, for example, all-electron results may differ from pseudopotential results and the level of self-consistency used in the GW method and its convergence versus various parameters plays a significant role. For transition metal and complex oxides, it is still far less clear how well the GW method performs.Here we considerSrTiO_3 as a case study.We use the all-electron full-potential linearized muffin-tin orbital (FP-LMTO) implementation<cit.> of the quasiparticle self-consistent (QS) GW method<cit.> and compare its results for SrTiO_3with previous results in literature.<cit.> § LITERATURE REVIEWSponza performed G_0W_0 calculations of the band structure starting from a pseudopotential LDA calculation including Sr 4s,4p and Ti 3s,3p semicore states as valence. They obtain the vertical gap at Γ to be 3.76 in good agreement with experiment, whereas their LDA calculation gave 2.21 eV. The actual valence band maximum (VBM) at R is slightly higher than at Γ resulting in a smaller indirect gap both in LDA and in GW. 
The focus of their paper is on the optical dielectric function including electron-hole interaction effects.Hamann and Vanderbilt (HV) <cit.> performed QSGW calculations using maximally localized Wannier functions (MLWF) to interpolate the self-energy Σ matrix between k mesh-points on which the QSGW is performed. A similar functions is played by the atom centered muffin-tin-orbitals in our approach. They include only Sr-4p semicore states as valence electrons. Both these groups used the ABINIT package but used somewhat different cut-off parameters. Their plane-wave cut-off for the basis set is similar but HV used a smaller number of unoccupied bands. They obtained the indirect LDA gap of 1.61 and a GW gap of 3.32 eV. Curiously, the gap correction of HV (1.71 eV) is larger than that of Sponza (1.55 eV). They did not mention the direct gap at Γ, but assuming all LDA calculations considered here get similar value for this difference, we'll use our LDA value (0.44 eV) for the difference between the VBM at R and Γ. HV's direct gaps at Γ would then amount to 2.05 eV (LDA) and 3.76 eV (GW). Thus, these two pseudopotential calculations are in good agreement with each other in spite of the small changes in parameter choices.The main point of HV's paper is that the MLWF interpolation works well and indicates little change in the Wannier functions extracted from LDA or GW calculations.A third pseudopotential based GW calculation by Cappellini <cit.> obtained significantly different results. They also include Sr 4s,4p, Ti 3s,3p as valence electrons and obtain an LDA gap at Γ of 2.24 eV (indirect R-Γ of 1.90)butGW gaps of 5.42 eV (Γ-Γ) and 5.07 eV (R-Γ).The reason for this discrepancy is unclear but presumably is related to the use of a model dielectric function instead of a consistently calculated one. Finally, a previous FP-LMTO QSGW calculation by Kotani ,<cit.> gives the indirect gap at Γ of about 4.25 eV but gave few details.From the above, it appears from the pseudopotential calculations that the G_0W_0 gap is close to that of the QSGW gap, and that both are in good agreement with experiment. The all-electron QSGW gap however seems to be about 1 eV larger than experiment.Here we further investigate this issue.§ METHODSThe QSGW approximation as implemented in FP-LMTO was described in detail in Ref. Kotani07. The idea behind the QSGW method is to make an optimal choice of the H_0 Hamiltonian so that its Kohn-Sham eigenvalues ϵ_i are as close as possible to the quasiparticle energies E_i. To do this, a hermitian but non-local exchange correlation potential, specified by itsmatrix in the basis of the H_0 eigenstates,[V_xc^Σ]_ij=1/2Re[Σ_ij(E_i)+Σ_ij(E_j)],is used in H_0. Here, Σ(ω) is the energy dependent self-energy calculated from G_0(ω), the one-electron Green's function corresponding to H_0,in the single-shot GW approximation: Σ=iG_0W_0. Starting from an LDA H_0, Σ is calculated, V_xc^Σ-V_xc^LDA is added to H_0, a new G_0 calculated and so on till self-consistency. The reasons behind this approach and differences from fully self-consistentscGW are discussed in Refs. 
Kotani07,Takao07,Ismail-Beigi17.For tetrahedral semiconductors, this approach provides systematicallya ∼20 % overestimate of the gap due to the underestimate of the dielectric screening in the random phase approximation (RPA) which does not include electron-hole effects and thus misses ladder diagrams in the evaluation of the irreducible polarization propagator Π^0=-iG_0× G_0, which determines W through W=(1-vΠ^0)^-1v, where v is the bare Coulomb interaction and a simplified symbolic operator notation is used. This has led to the adoption of a universal 0.8Σ correction factor.<cit.> This is illustrated in Fig. <ref> which shows the typical underestimate of screening by QSGW to be 20 % as indicated by the dashed line. Although it is not clear a priori that this also applies to oxides we adopt a similar correction factor here.It is interesting that ϵ_∞ predicted by the LDA is in sometimes better agreement with experiment.This can be attributed to a fortutitous cancellation of errors: missing ladder diagrams tend to cause ϵ_∞ to be underestimated, while the LDA's gap underestimate contributes an error of the opposite sign.There is no universal pattern, however, as is already apparent in the data shown in Fig. <ref>. Where gap errors are severe the LDA severely overestimates the ϵ_∞.For example in NiO, the ϵ_∞^LDA>30. Further justification for the 0.8Σ correction factor can be obtained from the work of Shishkin, Marsman and Kresse (SMK)<cit.> and Wei and Pasquarello (WP), <cit.> who added an exchange-correlation kernel to the screening of the polarization function Π̃=[1-(v+f_xc)Π^0]^-1 using the nanoquanta kernel or a bootstrap kernel respectively. We will refer to their approach as QSGW̃.Although these kernels primarily address the q→0 and static (ω=0) behavior and might thus not capture the full extent of the electron-hole effects on renormalizing the screening in W, and have received some critical discussions,<cit.> it is useful in the present context to analyzehow much theyaffect the gaps for a variety of materials.Analyzing the data in Table I in WP, part of which is reproduced here in Table <ref> with additional analysis, we find that [E_g(QSGW̃)-E_g(LDA)]/[E_g(QSGW)-E_g(LDA)] has an average value of about 0.76 with standard deviation of 0.04 with the largest deviation for NiO, where it is 0.85 and ZnO, 0.68. ZnO, is a notably difficult material to converge and SMK's values for the QSGW and QSGW̃ gaps would give 0.77. NiO isa well-known strongly correlated material and a deviation here is not too unexpected. We note that multiplying the self-energy operator Σ by 0.8 is not exactly the same as correcting the gap shift by 0.8. A slightly larger gap reduction typically occurs. Very recently, Kutepov<cit.> introduced a way to solve Hedin's full set of equations<cit.> beyond the GW approximation using systematic diagrammatic approximations for the vertex function.First of all, his results show fully self-consistent scGW results differ only slightly from the QSGW results and tend to overestimate the band gaps by a similar amount.Secondly, he used two different self-consistency schemes which both introduce vertex corrections both in G and Π. The results of his scheme B, which in his notation only includes a first correction to the vertex Γ_1,are close to those of SMK and WP where comparison for the same material is possible (Si, LiF, GaAs, SiC, BN, MgO)while his most advanced schemeincluding the full Γ_GW vertex, give a somewhat larger reduction of the gap. These are also shown in Table <ref>. 
Viewed as percentage of the scGW-GGA (or LDA) correction they give correction factors of about 0.78 and 0.72 respectively when averaged over various cases.As an example for MgO, his scGW gap is 9.31 and his schemes B and D give 8.24, 7.96 eV while WP 's QSGW and QSGW̃ give 9.29, 8.30 eV and SMK obtain 9.16 eV, 8.12 eV respectively.The scheme D agrees almost perfectly with experiment when a lattice-polarization correction of 0.15 eV added to the experimental volume but the latter may be somewhat underestimated.<cit.> In any case, these results also support that the electron-hole correction effects beyond RPA amount to about a 20 % reduction of the QSGW gap correction beyond LDA or GGA. Besides the electron-hole corrections discussed until now, we also consider a lattice-polarization correction as suggested by Botti and Marques (BM)<cit.> and revisited recently in Ref. Lambrecht17. The idea here is that for strongly ionic materials, with large LO-TO phonon splittings, the W in the long-wave length limit W( q→0,ω) should include the effects of the ionic displacements on the macroscopic dielectric constant. The macroscopic dielectric constant enters the calculation of Σ in the special treatment of the q→0 region in the convolution integral over k-space:Σ^c_nm( k,ω) = i/2π∫ dω'∑_ q^BZ∑_n'^allG_n'n'( k- q,ω-ω') ∑_μν W^c_μν( q,ω')e^-iδω' ⟨ψ_ kn|ψ_ k- qn'E_μ^ q⟩⟨ E_ν^ qψ_ k- qn'|ψ_ km⟩Here, a two-particle mixed product interstitial-plane-wave basis set E_ν diagonalizing the bare Coulomb interaction matrix is used<cit.> and W^c is the correlation part of W, subtracting the bare exchange. The need for a special treatment of the q→0 region arises from the integrable divergence of the Coulomb interaction (∝1/q^2) and is here treated using the modified offset-Γ method,<cit.> which in turn is closely related to the analytick· p scheme of Friedrich <cit.> This involves the macroscopic dielectric tensor L(ω), in their notation e_ k^T L(ω) e_ k. The projection along unit vectors e_ k takes care of the non-analytic (orientation dependent) nature of the k→0 limit and fully takes into account any possible anisotropies depending on the crystal structure. It is this macroscopic dielectric tensor, usually written ε(ω) which needs to be modified to take into account the lattice polarization effect.This is most easily done by means of a Lyddane-Sachs-Teller factor:ε_tot^α( q→ 0,ω)/ε_el^α( q→ 0,ω)=∏_m(ω^α_LOm)^2-ω^2/(ω_TOm)^2-(ω+i0^+)^2.where the superscript α denotes a projection direction of the tensor, (ε^α= e_α^Tε e_α). It is clear from this expression that the correction goes to zero for ω≫ω_L. In practice we only include it for ω=0 to avoid the necessity for a careful integration mesh right near the phonon frequency poles. As discussed in Ref. Lambrecht17 the BMapproach gives the long-range or Fröhlich contribution to the Fan-part of the zero-point motion electron-phonon correction of the gap.The q-point integration mesh that needs to be used is a subtle issue discussed in Lambrecht <cit.>. The strengthof this contribution, applied only at q=0 for convenience, can be estimated from the polaron length scale, a_P=√(ħ/2m_* ω_L) with m_* the band-edge effective mass and ω_L the relevant LO-phonon frequency. We will discuss later how to apply this in the present case with multiple phonons and a degenerate VBM not occurring at Γ. The polaronic point of view allows us to make an independent estimate of the corresponding gap reduction. 
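Since, in practice, the lattice-polarization correction is applied only at ω=0 through this Lyddane-Sachs-Teller rescaling of the macroscopic dielectric constant, a minimal numerical sketch may help fix ideas. The snippet below evaluates the static LST factor and the lattice-augmented ε(0) from a set of LO/TO pairs; the frequencies and the electronic dielectric constant are placeholder values chosen purely for illustration, not the SrTiO_3 quantities computed in this work.

```python
import numpy as np

# Illustrative sketch of the Lyddane-Sachs-Teller (LST) rescaling of the
# electronic macroscopic dielectric constant at q -> 0, omega = 0.
# All numbers below are placeholders, NOT the computed SrTiO3 values.

def lst_factor(w_lo, w_to, omega=0.0):
    """Product over polar phonon pairs of (w_LO^2 - omega^2)/(w_TO^2 - omega^2)."""
    w_lo = np.asarray(w_lo, dtype=float)
    w_to = np.asarray(w_to, dtype=float)
    return float(np.prod((w_lo**2 - omega**2) / (w_to**2 - omega**2)))

# placeholder LO/TO frequencies (arbitrary but consistent units)
w_lo = [170.0, 475.0, 790.0]
w_to = [90.0, 175.0, 545.0]

eps_el = 6.0                      # placeholder electronic (high-frequency) constant
factor = lst_factor(w_lo, w_to)   # static LST enhancement at omega = 0
eps_tot = eps_el * factor         # lattice-augmented static dielectric constant
print(f"LST factor = {factor:.1f}, eps_tot(0) = {eps_tot:.0f}")
```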
§ COMPUTATIONAL DETAILS We employ a generalized FP-LMTO method<cit.> as implemented in the Questaal package.<cit.> The basis set is specified by two sets of parameters, the smoothing radii R_sm and decay lengths (κ) of smoothed Hankel function envelope functions.<cit.> For SrTiO_3 we include (spdf, spd) for Sr, (spd, spd) for Ti and (spd,sp) for O atoms, respectively. These indicate the angular momenta included for each κ. The envelope functions are augmented inside the spheres in terms of solutions of the Schrödinger equation and their energy derivative up to an augmentation cut-off of l_max=4. In addition, calculations are made with and without the 4p (3p) local orbitals inside the spheres for Sr (Ti). The Brillouin zone integration k-point convergence and other convergence parameters of the method were carefully tested for cubic SrTiO_3, and similar criteria were adopted for the tetragonal and orthorhombic phases. We also tested results with a larger k-point mesh and found that the band gap is converged to within 0.05 eV. Specifically, we used a 4×4×4 un-shifted mesh for the Brillouin zone sampling, along with the tetrahedron method for the cubic cases, in the LDA self-consistent charge convergence and for the calculation of the Σ in GW. For the tetragonal phase, the unit cell is larger along the c-direction than in-plane by a factor √(2). Thus, we use a correspondingly smaller number of k-points, 4×4×3, for both LDA and QSGW calculations. For the self-consistency cycle, the charge density and the total energy are converged within tolerances of 10^-5 e/a_0^3 and 10^-5 Ry, respectively. For QSGW, after several convergence test calculations, we settled on a cut-off above which the self-energy matrix is approximated by an average diagonal value, Σ_cut= 3 Ry, including self-energy calculations up to 3.5 Ry, an interstitial plane-wave cut-off energy for basis functions E_cut(ψ_G)=2.6 Ry, and for the auxiliary basis E_cut(ψ_coul)=2.8 Ry, respectively. In QSGW, the self-consistent iteration was carried out until the change in Σ was less than 10^-4 Ry. § CRYSTAL STRUCTURES We consider the cubic and the tetragonal anti-ferro-electrically distorted (AFD) I4/mcm structure occurring at low temperature. In addition we consider the layered orthorhombic CaIrO_3 structure, suggested to occur at high pressures by Cabaret <cit.> and also known as the post-perovskite structure. Although we will show elsewhere<cit.> that this structure is unlikely to occur because it has a higher equilibrium lattice volume and much higher total energy, it is of interest to see how the GW gap corrections compare in such different structures. Fig. <ref> shows the crystal structures for the cubic, tetragonal and orthorhombic phases from left to right, respectively. Table <ref> summarizes the structural parameters used in the calculations, such as the lattice constants and Wyckoff positions. The relaxed lattice constant for the cubic phase in LDA is 3.86 Å, which is only 1 % underestimated relative to experiment. § RESULTS §.§ Cubic STO In Fig. <ref>a we show the band structure of cubic SrTiO_3 in the full QSGW approach compared with LDA. A few states at Γ are symmetry labeled for later reference. In Table <ref> we summarize the gaps and various other band structure parameters in different approximations. In Table <ref> we show how the different approximations affect other band states relative to the VBM.
This allows us to assess to what extent the GW correction can be approximated by a k- and state-independent scissor shift. First, we see that our LDA gap agrees quite well with other LDA (or GGA) calculations. Second, we see that the G_0W_0 gap is significantly lower than the QSGW gap. Third, unlike the pseudopotential calculations reviewed in Sec. <ref>, the QSGW calculation significantly overestimates the gap. Even if we use the 0.8Σ approach, the gaps are still larger than experiment. It is only when we add both the 0.8Σ and lattice polarization corrections that we recover the experimental values. We also note that the 0.8Σ approach actually reduces the QSGW-LDA indirect (direct) gap shifts by a factor of about 0.73 (0.74). In agreement with other calculations, and already correctly described in LDA, the indirect R-Γ gap is about 0.4 eV lower than the lowest direct Γ-Γ gap. The uppermost valence band between R and M is very flat, and in QSGW the actual VBM lies in between R and M and is 0.09 eV above that at R. Finally, we see that the semi-core levels play a more important role in QSGW than in LDA. Neglecting them, the gap would be only 0.07 eV lower in LDA, but is 0.5 eV lower in QSGW, or still 0.2 eV lower in the final LST and 0.8Σ corrected case. §.§ Polaron estimates Next, we discuss the lattice polarization correction to the gap in detail. The zero-point motion correction contains a contribution from the long-range Fröhlich type of electron-phonon coupling. The latter is arguably the largest electron-phonon coupling correction for a strongly ionic material with large LO-TO splitting, because the other electron-phonon coupling effects tend to be smaller than 0.1 eV except for systems with all light atoms. To estimate it we follow the approach of Nery and Allen.<cit.> The main point is that the Fröhlich electron-phonon coupling behaves as 1/q and hence, near band edges, the band difference E_n( k+ q)-E_n( k), which enters the denominator in the Allen-Heine-Cardona expression for the electron-phonon self-energy, gives a divergent contribution. Nery and Allen showed how it can be integrated analytically when a simple effective mass approximation is used for the bands. The length scale for the polaron effect is a_P=√(ħ/2m_*ω_L), and if we assume we need to integrate the singular behavior only over a region in q-space of size 1/a_P as upper limit, then the polaron shift of a band is given by<cit.>Δ E_n( k) = -α_Pħω_L/2 =-e^2/4 a_P(1/ε_∞-1/ε_0),=-e^2/4 a_Pε_∞(1-ω_T^2/ω_L^2). In other words, it is essentially the change in the Coulomb interaction calculated at the polaron length scale due to the change in screening from only electronic screening to electron plus lattice screening. The extra factor 2 arises from the choice of cut-off in q-space, and we have written the change in macroscopic inverse dielectric constants using the Lyddane-Sachs-Teller relation. In this way, for a given LO-TO phonon pair, we have a separate contribution from each phonon, since both a_P and the dielectric constant factor depend on the phonon considered. We can thus estimate the effect for each phonon and add them, thereby generalizing Nery and Allen's simple model to the case of multiple phonons. In SrTiO_3, there are three optically active phonons. The second point is that this predicts a correction near each band edge. The conduction band at Γ and the VBM at R are both three-fold degenerate and anisotropic, so to apply the theory in its simple form, we need to average the effective masses in some way to extract the polaron length scale.
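To make the per-phonon bookkeeping of this generalization concrete, the following sketch sums the Fröhlich shifts of the polaron-shift formula just given over several LO-TO pairs, working in Hartree atomic units; all masses, frequencies, and dielectric constants are placeholder values for illustration only, not the SrTiO_3 parameters used in this work. How the anisotropic, degenerate band-edge masses are averaged to define m_* is discussed next.

```python
import numpy as np

# Sketch of the multi-phonon generalization of the Nery-Allen polaron shift,
#   dE_m = -(e^2 / (4 a_P,m eps_inf)) * (1 - w_T,m^2 / w_L,m^2),
#   a_P,m = sqrt(hbar / (2 m_* w_L,m)),
# evaluated in Hartree atomic units (e = hbar = m_e = 1).
# All inputs are placeholders, not fitted SrTiO3 parameters.

HARTREE_EV = 27.2114
MEV_TO_HA = 1.0e-3 / HARTREE_EV

def polaron_shift(m_star, eps_inf, w_lo_mev, w_to_mev):
    """Sum of per-mode Frohlich shifts (in eV) for one band edge."""
    total = 0.0
    for w_lo, w_to in zip(w_lo_mev, w_to_mev):
        w_l = w_lo * MEV_TO_HA                      # LO frequency in Hartree
        a_p = np.sqrt(1.0 / (2.0 * m_star * w_l))   # polaron length scale (bohr)
        total += -(1.0 / (4.0 * a_p * eps_inf)) * (1.0 - (w_to / w_lo) ** 2)
    return total * HARTREE_EV

# placeholder band-edge masses (in units of m_e) and phonon energies (meV)
w_lo = [95.0, 58.0, 21.0]
w_to = [68.0, 22.0, 11.0]
shift_e = polaron_shift(m_star=0.7, eps_inf=6.0, w_lo_mev=w_lo, w_to_mev=w_to)
shift_h = polaron_shift(m_star=1.5, eps_inf=6.0, w_lo_mev=w_lo, w_to_mev=w_to)
print(f"CBM shift ~ {shift_e*1e3:.0f} meV, VBM shift ~ {shift_h*1e3:.0f} meV, "
      f"gap correction ~ {(abs(shift_e) + abs(shift_h))*1e3:.0f} meV")
```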
At both points we could exploit the cubic symmetry to write a Kohn-Luttinger type of effective Hamiltonian. In our previous work<cit.> for simple di-atomic cubic compounds, we just used an average of the heavy and light masses in the cubic direction, according to the corresponding band's degeneracy. Following the same approach here, the band structure shows that it would be appropriate to use m_h=(m_hh+2m_lh)/3 for holes and the same for the electrons, m_e=(m_he+2m_le)/3 where we use the masses in the Γ-X and R-M directions, which are both simple cubic x directions. Thus,we obtain separate electron and hole polaron length scales a_Pe and a_Ph. Since the latter only provide estimates of the q-space integration region, it is not too crucial how we perform the average, although we recognize this is at present a limitation of the approach. The VBM at R can be seen to be rather flat and in fact in GW the maximum moves away from R toward M. We use the masses extracted from our QSGW bands without the lattice polarization correction.The hole polaron length scale is significantly shorter than for the electrons, predicting stronger polaronic effects for holes. This agrees with the finding in other work of self-trapped hole-polarons.<cit.>The results of this approach and the corresponding parameters are summarized in Table <ref>. We can see that the conduction band is predicted to shift less than the valence band as expected and the total gap correction is predicted to be 404 meV, which we really should round off to 0.4 eV. The shortest polaron length scale corresponds to holes for the largest phonon frequency and is 8 Bohr. This corresponds to a q-space region of about 1/6 of the Brillouin zone.Our estimate using the BM approach in Table <ref> used a 4×4×4 mesh and gives a contribution to the zero-point motion or lattice polarization correction of -0.55 eV. This is already rather close to the polaron estimate.With a 6×6×6 mesh we obtain -0.25 eV. These bracket the polaron estimate of Table <ref>. We can thus conservatively conclude that the lattice polarization correction amounts to 0.3±0.1 eV in good agreement between the Nery-Allen like estimate (0.4) and the BM approach. When we add this to the 0.8Σ result we obtain a gap of 3.24 eV for the indirect gap in excellent agreement with experiment. We note that if we apply the lattice polarization correction using the BM approach with a 4×4×4 mesh but then apply the 0.8Σ correction, the LPC shift is also reduced by 0.8, andbecomes 0.4 eV.We can see that in this approach the correction is almost a constant shift and hence the indirect gap correction is the same as the direct gap correction. Because of the approximate nature of these estimates, we have not separately evaluated the polaron approach to the VBM at Γ which would give the direct gap. In principle, the polaronic effect also should enhance the band mass by a factor (1+α_P/6) but it is not clear that the BM-method captures this more subtle effect. In fact, we find the bands to shift almost rigidly as can be seen in Fig. <ref>. §.§ Other band structure featuresTurning to other band features than the gap, summarized in Table <ref> we see that Sr-4p states lie significantly closer to the VBM than the Ti 3p semicore states and hence play a more important role. We can see that the shifts of these states are also sensitive to the 0.8Σ and LPC corrections and amount toabout 2 eV for Sr-4p and 4 eV for Ti 3p. 
As expected, the farther away from the VBM, the larger is the quasiparticle self-energy shift.In the conduction band we see that the higher lying Γ_12 state has almost the same shift from LDA (about 1.4 eV) as the Γ_25' CBM. In the valence band the shifts are smaller and progressively larger as we go deeper in the VBM.§.§ Tetragonal structureThe band structure for the tetragonal structure is shown in Fig. <ref>b. In the tetragonal material we see a similar large shift of the band gap by GW. To understand this band structure, we note that the tetragonal unit cell is rotated by 45^∘ and has a_t=√(2)a_c as in-plane lattice constant. Thus the Brillouin zone (BZ) of the cubic structure is folded into a smaller BZ with the Γ-M of the tetragonal BZ corresponding to half the Γ-X of the cubic BZ. The high symmetry points correspond toM=(1/2,1/2,0) and X=(1/2,0,0) with respect to their respective reciprocal lattice vectors. Similarly the Γ-X of the tetragonal BZ is half the Γ-M of the cubic BZ. One can clearly see the folding in half of the bands with additional small gaps opening due to the breaking of the symmetry by the slight rotation of the octahedra. We can see that VBM which in the cubic caseand in QSGW occurs between M-R (R=(1/2,1/2,1/2)), where the band dispersion is very flat, is folded on to the tetragonal BZ Γ point and the gap becomes direct. §.§ Hypothetical layered orthorhombic structureAlthough, the CaIrO_3 structure, proposed<cit.>for SrTiO_3 as a potential high-pressure structure, can be shown to be unstable,<cit.> it is of interest to see how the GW gap correction changes with such a large change in structure. This structure has edge-sharing octahedra inlayers separated by Sr, rather than corner sharing octahedra. In the LDA, the band gap becomes zero as can be seen in Fig. <ref>c. The very different band dispersion in this case results from the direct Ti-d to Ti-d interactions between much closer Ti atoms in the layer.In the QSGW method the gap becomes 2.32 eV which is not too different from the gap correction 2.68 eV in cubic perovskite. The gap correction is found to be almost the same as in the cubic or tetragonal structures.Similar screening reduction or 0.8Σ corrections and lattice polarization corrections should apply here but are not further pursued at this point. § CONCLUSIONSIn this paper we reviewed the status of the QSGW method for a prototypical complex transition metal oxide like SrTiO_3 in the perovskite structure. We found that all-electron QSGW results obtained by means of the FP-LMTO implementation give a significant overestimate of the gap compared to experiment in contrast to PAW or pseudopotential based GW approaches. This indicates a compensation of errors in the latter. We base this on the observation that fora large family of materials, the under-screening of W in the RPA amounts to about 20 %and can hence be accommodated by using the 0.8Σ approach. This evidence is based both on the comparison of dielectric constants in QSGW with experiment and on recent calculations<cit.> which go beyond the RPA by including an exchange correlation kernelin the calculation of W or adding vertex corrections directly<cit.> and it is found to apply to both tetrahedrally bonded semiconductors andvarious oxides and ionic compounds.The second important correction to the gap is the lattice polarization correction. 
This is part of the zero-point motion correction due to electron-phonon coupling and, more specifically, is its dominant contribution in strongly ionic materials, arising from the long-range Fröhlich part of the electron-phonon coupling. Two independent estimates of this effect were made: one based on polaron theory and one on the Botti-Marques approach of multiplying the macroscopic dielectric constant at q=0 by a Lyddane-Sachs-Teller factor, along with a suitable q-mesh sampling, itself based on the polaron length scale which determines the strength of the effect. The two estimates are found to be in good agreement with each other. We find that both the electron-hole interaction effects, which reduce Σ by about 20 %, and the lattice-polarization corrections are required to obtain good agreement with experimental gaps in cubic perovskite SrTiO_3. As for the structural dependence of the QSGW corrections, we find that the gap correction in tetragonal STO is very close to that in cubic STO and the bands are essentially folded according to the rotation of the octahedra, which leads to a doubling of the cell and a rotation of the BZ by 45^∘. This happens to fold the R point of the BZ onto the Γ-point, and hence the lowest indirect gap then becomes direct. Due to the similarity in band states, we expect it to be pseudo-direct in the sense that no strong optical transitions will correspond to this direct gap. Even for a very different hypothetical structure with edge-sharing octahedra, we find very similar gap corrections by QSGW, which shows that the gap corrections are rather insensitive to structure. This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences under grant No. DE-SC0008933. Calculations made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University. MvS was supported by EPSRC CCP9 Flagship Project No. EP/M011631/1.
http://arxiv.org/abs/1709.09194v1
{ "authors": [ "Churna Bhandari", "Mark van Schilfgaarde", "Takao Kotani", "Walter R. L. Lambrecht" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170926180144", "title": "All-electron quasi-particle self-consistent $GW$ band structures for SrTiO$_3$ including lattice polarization corrections in different phase" }
Many types of dissipative processes can be found in nature or be engineered, and their interplay with a system can give rise to interesting phases of matter. Here we study the interplay among interaction, tunneling, and disorder in the steady state of a spin chain coupled to a tailored bath. We consider a dissipation which, in contrast to disorder, tends to generate a homogeneously polarized steady state. We find that the steady state can be highly sensitive even to weak disorder. We also establish that, in the presence of such dissipation, even in the absence of interaction, a finite amount of disorder is needed for localization. Last, we show that for strong disorder the system reveals signatures of localization both in the weakly and strongly interacting regimes. Interplay of interaction and disorder in the steady state of an open quantum system Xiansong Xu, Chu Guo, Dario Poletti Introduction. The interplay of dissipation and interaction can give rise to rich physics from out-of-equilibrium phase transitions <cit.> to complex relaxation dynamics <cit.>. Of particular relevance, in the past few years, has been the use of tailored dissipation to engineer interesting states of matter <cit.>. However, it is important to study the robustness and, more generally, the response of such states to disorder. It should be noted that the interplay of interaction and disorder (without dissipation) has also gathered vast interest. A disordered interacting system can show many-body localization (MBL) or be in a regime where the eigenstate thermalization hypothesis is valid <cit.>. MBL has been observed experimentally both in the presence of pure disorder or of a quasirandom potential <cit.>, even in two dimensions <cit.>. Disordered interacting systems have also been studied when in contact with a bath. Recent studies focused on a number of dissipative processes, both theoretically <cit.> and experimentally <cit.>, showing a rich phenomenology. However, the use of a suitable tailored bath which contrasts disorder and, in its absence, could generate peculiar quantum states has yet to be deeply investigated. In the following, we concentrate on a spin-1/2 chain, a common choice in the study of MBL. One important quantity which characterizes the effect of disorder is the difference in local magnetization between nearby spins. In fact, because of the presence of disorder, the local magnetization of a spin can be significantly different from that of its neighbors. We then consider the effects of a bath which, contrary to the effect of disorder, tends to reduce the difference in local magnetization between nearest-neighboring spins. Hence we expect a strong interplay between dissipation and disorder, which can be significantly influenced by the interaction. Such baths were first proposed in Ref. <cit.> (although for bosons) and experimentally realized for spins in Ref. <cit.>. Recently this dissipation has been studied for a disordered quadratic bosonic chain <cit.> where it was shown that the steady state can be dominated by few localized modes. In this Rapid Communication, we first analyze the clean system, i.e., without disorder, and we show that its steady state has identically zero local magnetization on each site.
We also prove analytically that the steady state for the case in which the interaction and the kinetic terms of the Hamiltonian are of the same strength is a highly symmetric entangled pure state which is sensitive to small amounts of disorder.We study the localization in the steady state by analyzing the natural orbitals of the single-particle reduced density matrix.This allows us to show a complex behavior of the indicators of localization with the strength of the interaction. Model. We study the steady state of a dissipative spin chain described by a master equation in Lindblad form <cit.>d/dt =[]=-/ħ[,]+𝒟[],whereis the system's Lindbladian. For the system's Hamiltonian we considered a Heisenberg XXZ chain for spin-1/2 with a spatially disordered local magnetic field h_l, =∑_l=1^L-1[J (^x_l ^x_l+1+^y_l ^y_l+1)+Δ^z_l ^z_l+1]+∑_l=1^L h_l ^z_l,where the elements of _l^α are given by the Pauli matrices for α=x, y, or z and where we have used open boundary conditions. The random field h_l is uniformly distributed in [-W,W] with W characterizing the disorder strength. This model exactly maps to an interacting spinless fermionic chain under Jordan-Wigner transformation <cit.>. The XX part, parametrized by J, maps to the kinetic term for the fermionic chain, and the interaction Δ part maps to the nearest-neighbor interaction (we will thus refer to Δ as the interaction in the following). In the absence of dissipation, this model has been shown to have many-body localized or ergodic phases depending on the strengths of disorder and interaction <cit.>.The dissipator we use is in Lindblad form and is given by 𝒟[] =γ∑_l=1^L-1(_l,l+1_l,l+1^†-1/2{_l,l+1^†^_l,l+1, }),where γ indicates the coupling strength, {.,.} indicates the anticommutator, and the jump operators ^_l,l+1 are ^_l,l+1=(^+_l+^+_l+1)(^-_l-^-_l+1). The type of dissipator above has already been studied for bosonic and fermionic systems <cit.>. Its possible realizations with ultracold atoms could rely, for example, on the immersion of the system in a superfluid bath and hence the interaction with Bogoliubov excitations<cit.>. Thanks to the use of a universal quantum computer made of ultracold ions, the dissipator we used in Eqs. (<ref>) and (<ref>) has been realized experimentally <cit.>. We note that such a realization, based upon a Trotterization of the evolution operator <cit.>, is independent of the Hamiltonian evolution and it is thus completely unaffected by the presence of disorder. In Eq. (<ref>) we have used ^+_l=(^x_l+^y_l)/2 and ^-=(_l^x-_l^y)/2. From the expression of the jump operators we can observe that the dissipator favors a balanced magnetization on neighboring sites. To compute the steady-state _s we either use exact diagonalization or evolve any initial state with a matrix product states'-based time evolution (t-MPS). Since both the Hamiltonian and the dissipator are number conserving, we use a number-conserving t-MPS algorithm which conserves, at the same time, the total magnetization both in the bra and in the ket portions of the density operator <cit.>.In this Rapid Communication we concentrated on the sector with 0 total magnetization so as to probe strong effects of the interaction.The notation ⟨Ô⟩_i indicates tr(Ô_s) for the ith disorder realization, whereas O̅ is used for the average over M disorder realization of the quantity ⟨Ô⟩_i, i.e., O̅=∑_i=1^M ⟨Ô⟩_i/M.Nondisordered case. We first consider the system without disorder. 
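As a concrete illustration of the Hamiltonian, jump operators, and zero-magnetization protocol just described (the analysis of the nondisordered case continues below), here is a minimal exact-evolution sketch for a very small chain using QuTiP. This is not the number-conserving t-MPS algorithm used to produce the results in this work, and the parameter values are illustrative only.

```python
import numpy as np
from qutip import qeye, sigmax, sigmay, sigmaz, sigmap, sigmam, basis, tensor, mesolve

# Minimal sketch of the dissipative XXZ chain with random fields and the
# magnetization-balancing jump operators (s+_l + s+_{l+1})(s-_l - s-_{l+1}).
# Illustrative parameters; not the t-MPS setup used in the text.

def op_at(op, site, L):
    """Embed a single-site operator at position `site` of an L-site chain."""
    ops = [qeye(2)] * L
    ops[site] = op
    return tensor(ops)

L, J, Delta, W, gamma = 4, 1.0, 1.0, 0.1, 1.0
h = np.random.default_rng(1).uniform(-W, W, size=L)   # random fields in [-W, W]

H = sum(J * (op_at(sigmax(), l, L) * op_at(sigmax(), l + 1, L)
             + op_at(sigmay(), l, L) * op_at(sigmay(), l + 1, L))
        + Delta * op_at(sigmaz(), l, L) * op_at(sigmaz(), l + 1, L)
        for l in range(L - 1))
H = H + sum(h[l] * op_at(sigmaz(), l, L) for l in range(L))

c_ops = [np.sqrt(gamma)
         * (op_at(sigmap(), l, L) + op_at(sigmap(), l + 1, L))
         * (op_at(sigmam(), l, L) - op_at(sigmam(), l + 1, L))
         for l in range(L - 1)]

# Relax a Neel state (zero total magnetization sector) towards the steady state.
psi0 = tensor([basis(2, l % 2) for l in range(L)])
times = np.linspace(0.0, 40.0, 201)
result = mesolve(H, psi0, times, c_ops=c_ops,
                 e_ops=[op_at(sigmaz(), l, L) for l in range(L)])
print("late-time local magnetization:", np.round([e[-1] for e in result.expect], 3))
```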
It is easy to show that the steady state is invariant to an overall spin flip.Applying =⊗_l ^x_l both to the right and to the left ofleaves it invariant = for any value of Δ and J. Each jump operator _l,l+1 is instead turned into its opposite _l,l+1=-_l,l+1. If we now consider the steady state, we can write 0=(_s)=(_s)=(_s). Considering the 0 total magnetization manifold and since the steady state is unique, Eq. (<ref>) implies that _s=_s.From this we can deduce that the local magnetization ⟨^z_l ⟩=0 at every site (note that since we are considering a disorderless case, we did not use the label i for the average value). This is readily proven by ⟨^z_l ⟩= (^z_l _s) = [(^z_l ) (_s) ]= - ⟨^z_l ⟩,where we have used the fact that ^x_l^z_l^x_l=-^z_l. In the presence of disorder, the Hamiltonian would not be invariant to the action of , and hence it would be possible to have nonzero local magnetization.It is important to stress that the steady state becomes, for Δ=J, a Dicke state, which is a highly symmetric entangled pure state. In fact the steady state is _s=|ψ_S⟩⟨ψ_S| where |ψ_S⟩=1/√(R)∑_r⃗|r⃗⟩.Above | r⃗⟩=| r_1 r_2 … r_L ⟩ with r_l=0,1, respectively, for |⟩ and |⟩, denotes all the possible states at 0 total magnetization, and R is the total number of such combinations <cit.>.To prove that |ψ_S⟩⟨ψ_S| is the steady state for Δ=J we will make use of the fact that ^+_l | r_l ⟩ = (1-r_l)| 1-r_l ⟩ and ^-_j | r_l ⟩ = r_l| 1-r_l ⟩. For the dissipative part, it can be shown that each jump operator _l,l+1 acting on |ψ_S⟩ gives 0, in fact, _l, l+1∑_r⃗|r⃗⟩= (^+_l^-_l - ^+_l+1^-_l+1 + ^-_l^+_l+1-^+_l^-_l+1)∑_r⃗|r⃗⟩ =∑_r⃗ (r_l - r_l+1 - r_l(1-r_l+1) + (1-r_l)r_l+1) |r⃗⟩ = 0, which proves that [|ψ_S⟩⟨ψ_S|]=0. For the Hamiltonian we have (^x_l^x_l+1 + ^y_l^y_l+1 + ^z_l^z_l+1)|ψ_S ⟩ = |ψ_S ⟩ ,which implies that [ , |ψ_S ⟩⟨ψ_S |] = 0 if Δ = J. This highly symmetric pure state is also the ground state of an open boundary XXX spin chain in the absence of dissipation if J=Δ<0. Indeed this would not be the case in the presence of disorder. Disordered case. Disorder favors the occurrence of differences in the local spin orientation (magnetization), whereas the dissipator tends to reduce it. The local magnetization is defined as m_l,i=⟨_l|_⟩i. We thus analyze the distribution of the local magnetization in the different sites for 100 disorder realizations. A narrow distribution indicates a small difference in the local magnetization for all sites. If instead the local magnetization varies significantly, a broad distribution would be expected. In the absence of disorder, as we have shown in the previous paragraphs, the local magnetization is zero for all sites. We first consider a small magnitude of disorder W=0.1J, and we study its interplay with the interaction. The magnetization distribution profiles are represented in Fig. <ref>(a). Given the small amount of disorder, we notice that for Δ=0 the spread of magnetization is fairly narrow. We then observe that the distribution becomes broader as the interaction increases (hence enhancing the role of disorder), whereas for larger interactions the distribution becomes narrow again. The magnetization distribution at large interaction becomes particularly narrow as the interaction dominates over kinetic energy and disorder. The change in the local magnetization distribution is more quantitatively characterized by the variance of the distributions, which is shown in Fig. <ref>(b). 
Here a peak of the variance is evident at Δ≈ J.The effect of disorder is thus stronger for Δ≈ J where the disorderless Hamiltonian and the dissipator tend to produce the steady-state _s=|ψ_S⟩⟨ψ_S| showing that this state is much less robust against disorder.To obtain a further understanding on the broadening of local magnetization for Δ≈ J, we study the single-particle correlations to build the single-particle reduced density matrix ρ_sp^j,k=⟨^+_j ^-_k⟩<cit.>. Note that here and in the following, to lighten the notations, we have dropped subindex i to indicate the disorder realization. From ρ_sp we compute its natural orbitals ψ_α, i.e., the eigenvectors such that ρ_spψ_α=n_αψ_α where n_α is the normalized occupation spectrum. The natural orbitals in this regime are shown in Fig. <ref> where a drastic contrast is shown among a fully delocalized orbital and all the other orbitals with large single-site occupation <cit.>. This representation of the natural orbital is indicative and consistent with the presence of different local magnetizations. The strong effect of disorder for Δ≈ J can also be observed by studying the operator space entanglement entropy (OSEE) <cit.>, which we refer to as S̅_l. This is a generalization to open systems of the bipartite entanglement entropy. From a matrix product operator representation of the steady state <cit.>, the OSEE is computed from the singular values of a bipartition of the system at site l, s_α,l in the following wayS_l=-∑_αs^2_α,l/∑_αs^2_α,lln( s^2_α,l/∑_αs^2_α,l). In Fig. <ref> we show S̅_L/2 for the middle of a chain with L=10 as a function of the interaction and for different disorder strengths (lower to larger from top to bottom) <cit.>. For the nondisordered case the OSEE is maximum for Δ=J, and then S̅_L/2 decreases significantly and increases again with the interaction. Disorder strongly suppresses the peak in S̅_L/2 before lowering when it is strong also the other parts of the curve (see the bottom lines of Fig. <ref>).A natural question to address is that of whether the steady state of the system has signatures of localization and how they are affected by the interaction. Again, we study the single-particle reduced density matrix ρ_sp^j,k=⟨^+_j ^-_k⟩and its orbitals ψ_α.We then compute a weighted inverse participation ratio =∑_α,l n_α|ψ_α(l)|^4 and its averageover the disorder realizations. Ifis close to 1, it means that the relevant natural orbitals are localized, whereas for lower values of , some relevant natural orbitals are localized <cit.>. In Fig. <ref> we depictversus the interaction Δ for different disorder magnitude strengths W, increasing from the bottom to the top curve.The dependence ofwith Δ is nonmonotonous and it is a function of the strength of disorder. At weak disorder, small interaction “favors” disorder increasinguntil, at intermediate values of the interaction,decreases as Δ increases.For even larger Δ,rises, and for large enough interaction and disorder, all the natural orbitals of the steady state are localized, each with a different weight n_α <cit.>. To clearly illustrate this, in Fig. <ref>(a) we plot a typical relevant natural orbital as a function of position for four cases: weak disorder and weak interactions (the red dot-dashed line), weak disorder and strong interactions (the green dotted line), strong disorder and weak interactions (the orange dashed line) and intermediate strength of disorder and strong interactions (the blue solid line) <cit.>. 
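For concreteness, the extraction of the natural orbitals and of the weighted inverse participation ratio from ρ_sp can be sketched in a few lines of NumPy (the discussion of the specific cases plotted in the figures continues below). The correlation matrix used here is a random positive-semidefinite stand-in, not data obtained from the model.

```python
import numpy as np

# Sketch: natural orbitals and weighted inverse participation ratio (IPR)
# from a single-particle correlation matrix rho_sp[j, k] = <sigma^+_j sigma^-_k>.
# rho_sp below is a random Hermitian stand-in; in practice it would be
# measured on the steady state for each disorder realization.

def weighted_ipr(rho_sp):
    n, psi = np.linalg.eigh(rho_sp)        # occupations n_alpha, orbitals psi[:, alpha]
    n = np.clip(n, 0.0, None)
    n = n / n.sum()                        # normalized occupation spectrum
    return float(np.sum(n * np.sum(np.abs(psi) ** 4, axis=0)))

L = 10
rng = np.random.default_rng(0)
A = rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L))
rho_sp = A @ A.conj().T                    # positive-semidefinite stand-in
print("weighted IPR =", round(weighted_ipr(rho_sp), 3))
```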
The first two cases (weak disorder) are delocalized, whereas the other two are localized. We deduce that intermediate to strong disorder is needed for localization as both cases with weak disorder have delocalized orbitals. Note also that for weak disorderin Fig. <ref> is small for all values of the interaction Δ. For intermediate or strong disorder, the natural orbitals can be localized, in particular, also in the presence of strong interaction. We stress that, for intermediate disorder W = 5J, strong interactions, e.g., Δ = 14J, favor localization of the orbitals. It is however also important to consider the occupation of the orbitals. The interplay of disorder, dissipation, and interaction is such that, when the effects of disorder are strong enough, each natural orbital is exponentially localized and has a different occupation. This disordered scenario with localized orbitals is shown in Fig. <ref>(b) in which all the natural orbitals ψ_α are multiplied by their weight n_α for a typical disorder realization.Conclusions. We have considered an interacting spin chain to study the interplay among tailored dissipation, interaction, tunneling, and disorder. The dissipation is such that it contrasts disorder and, in fact, we have proven that without disorder the steady state has zero magnetization on each site, independent of the strength of the interaction. Moreover, in the limit in which our spin chains reverts to a XXX chain, the steady state is an entangled highly symmetric pure state.In this regime we have shown that the steady state can be very sensitive even to small amounts of disorder. In the presence of strong disorder, the steady state has localization signatures indicated by localization of all the natural orbitals of the single-particle reduced density matrix, a large inverse participation ratio, and a different occupation for each orbital. In this case, small and intermediate interactions can lower the inverse participation ratio, but large interactions can enhance it, indicating that all natural orbitals can be localized even at large interactions.More work is required to better understand dissipative disordered many-body systems, the phases that emerge and their properties, especially, in the spirit of dissipative engineering, with the use of tailored baths. The role of external time-dependent drivings to probe or further enrich these systems could be another interesting research direction.Concurrently with our Rapid Communication, another study of a similar model whose results are consistent with ours appeared on the arXiv <cit.>. Acknowledgments. D.P. acknowledges support from the Ministry of Education of Singapore AcRF MOE Tier-II (Project No. MOE2016-T2-1-065) and fruitful discussions with S. Denisov and F. Heidrich-Meisner. D.P. was hosted by ICTP (Trieste) and PCS-IBS (Daejeon) during part of this study. The computational work for this Rapid Communication was performed on resources of the National Supercomputing Centre, Singapore <cit.>.10 Mitra A. Mitra, S. Takei, Y. B. Kim, and A. J. Millis, Phys. Rev. Lett. 97, 236808 (2006).DiehlZoller2008 S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler, P. Zoller, Nat. Phys. 4, 878 (2008). DellaTorre E. G. Dalla Torre, E. Demler, T. Giamarchi, and E. Altman, Nat. Phys. 6, 806 (2010). DiehlZoller2010 S. Diehl, A. Tomadin, A. Micheli, R. Fazio, and P. Zoller, Phys. Rev. Lett. 105, 015702 (2010).PolettiKollath2012 D. Poletti, J.-S. Bernier, A. Georges, C. Kollath, Phys. Rev. Lett. 109, 045302 (2012).CaiBarthel2012 Z. Cai, T. 
Barthel, Phys. Rev. Lett. 111, 150403 (2013).PolettiKollath2013 D. Poletti, P. Barmettler, A. Georges, and C. Kollath, Phys. Rev. Lett. 111, 195301 (2013).SciollaKollath2015 B. Sciolla, D. Poletti, and C. Kollath, Phys. Rev. Lett. 114, 170401 (2015). GarrahanLesanovsky I. Lesanovsky, and J. P. Garrahan, Phys. Rev. Lett. 111, 215305 (2013). MarcuzziLesanovsky2014 M. Marcuzzi, E. Levi, S. Diehl, J. P. Garrahan, and I. Lesanovsky, Phys. Rev. Lett. 113, 210401 (2014). MarcuzziLesanovsky2015 M. Marcuzzi, E. Levi, W. Li, J. P. Garrahan, B. Olmos, and I. Lesanovsky, New J. Phys. 17, 072003 (2015). BaskoAltshuler2006 D. M. Basko, I. L. Aleiner, and B. L. Altshuler, Ann. Phys. (N.Y.) 321, 1126 (2006). OganesyanHuse2007 V. Oganesyan, and D. A. Huse, Phys. Rev. B 75, 155111 (2007).PalHuse2010 A. Pal, and D. A. Huse, Phys. Rev. B 82, 174411 (2010). DErricoModugno2014 C. D'Errico, E. Lucioni, L. Tanzi, L. Gori, G. Roux, I.P. McCulloch, T. Giamarchi, M. Inguscio, and G. Modugno, Phys. Rev. Lett. 113, 095301 (2014).Schreiber M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Science 349, 842 (2015).Monroe J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Nat. Phys. 12, 907 (2016).Bordia P. Bordia, H. P. Lüschen, S. S. Hodgman, M. Schreiber, I. Bloch, and U. Schneider, Phys. Rev. Lett. 116, 140401 (2016).Bordia2 P. Bordia, H. Lüschen, U. Schneider, M. Knap, and I. Bloch, Nat. Phys. 13, 460 (2017).Roushan2017 P. Roushan et al., Science 358, 11775 (2017). Gross J. Choi, S. Hild, J. Zeiher, P. Schauß, A. Rubio-Abadal, T. Yefsah, V. Khemani, D. A. Huse, I. Bloch, and C. Gross, Science 352, 1547 (2016).BordiaBloch2017 P. Bordia, H. Lüschen, S. Scherg, S. Gopalakrishnan, M. Knap, U. Schneider, and I. Bloch, Phys. Rev. X 7, 041047 (2017).NandkishoreHuse2015 R. Nandkishore, S. Gopalakrishnan, and D. A. Huse, Phys. Rev. B 90, 064203 (2014).JohriBhatt2015 S. Johri, R. Nandkishore, and R. N. Bhatt, Phys. Rev. Lett. 114, 117401 (2015).Nandkishore2015 R. Nandkishore, Phys. Rev. B 92, 245141 (2015).FischerAltman2016 M. H. Fischer, M. Maksymenko, and E. Altman, Phys. Rev. Lett. 116, 160401 (2016).LeviGarrahan2016 E. Levi, M. Heyl, I. Lesanovsky, and J. P. Garrahan, Phys. Rev. Lett. 116, 237203 (2016). EverestLevi2016 B. Everest, I. Lesanovsky, J. P. Garrahan, and E. Levi,Phys. Rev. B 95, 024310 (2017). NandkishoreGopalakrishnan2016 R. Nandkishore, and S. Gopalakrishnan, Ann. Phys. (Berlin) 529, 1600181 (2017). HyattBauer2016 K. Hyatt, J. R. Garrison, A. C. Potter, and B. Bauer, Phys. Rev. B 95, 035132 (2017).ZnidaricGoold2016 M. Žnidarič, J. J. Mendoza-Arenas, S. R. Clark, and J. Goold, Ann. Phys. (Berlin) 529, 1600298 (2017)ZnidaricVarma2016 M. Žnidarič, A. Scardicchio, and V. K. Varma, Phys. Rev. Lett. 117, 040601 (2016).MedvedyevaZnidaric2016 M. V. Medvedyeva, T. Prosen, and M. Žnidarič, Phys. Rev. B 93, 094205 (2016). DroennerCarmele2017 L. Droenner, and A. Carmele, Phys. Rev. B 96, 184421 (2017). VanNieuwenburgFischer2017 E. P. L. van Nieuwenburg, J. Yago Malo, A. J. Daley, and M. H. Fischer, Quantum Sci. Technol. 3, 01LT02 (2018).KarlssonVerdozzi2018 D. Karlsson, M. Hopjan, and C. Verdozzi, Phys. Rev. B 97, 125151 (2018).LuschenSchneider H. P. Lüschen, P. Bordia, S. S. Hodgman, M. Schreiber, S. Sarkar, A. J. Daley, M. H. Fischer, E. Altman, I. Bloch, and U. Schneider, Phys. Rev. X 7, 011034 (2017).Schindler2013 P. Schindler, M. Müller, D. Nigg, J.T. Barreiro, E.A. Martinez, M. Hennrich, T. 
Monz, S. Diehl, P. Zoller, and R. Blatt, Nat. Phys. 9, 361 (2013).Denisov I. Yusipov, T. Laptyeva, S. Denisov, and M. Ivanchenko, Phys. Rev. Lett. 118, 070402 (2017). Denisov2 O. S. Vershinina, I. I. Yusipov, S. Denisov, M. V. Ivanchenko, and T. V. Laptyeva, Europhys. Lett. 119, 56001 (2017). GoriniSudharsan V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976).Lindblad G. Lindblad, Commun. Math. Phys. 48, 119 (1976). JordanWigner P. Jordan, and E. Wigner, Z. Physik 47, 631 (1928). Lieb E. Lieb, T. Schultz, and D. Mattis, Ann. Phys. (N.Y.) 16, 407 (1961).BeraBardarson S. Bera, H. Schomerus, F. Heidrich-Meisner, and J. H. Bardarson, Phys. Rev. Lett. 115, 046603 (2015). Bera2017 S. Bera, T. Martynec, H. Schomerus, F. Heidrich-Meisner, and J. H. Bardarson, Ann. Phys. (Berlin) 529, 1600356 (2017).LezamaBardarson T. L. M. Lezama, S. Bera, H. Schomerus, F. Heidrich-Meisner, and J. H. Bardarson, Phys. Rev. B 96, 060202(R) (2017).LinHeidrichMeisner2017 S.-H. Lin, B. Sbierski, F. Dorfner, C. Karrasch, F. Heidrich-Meisner, SciPost Phys. 4, 002 (2017). Alet D. J. Luitz, N. Laflorencie, and F. Alet, Phys. Rev. B 91, 081103 (2015). Abanin M. Serbyn, Z. Papić, and D. A. Abanin, Phys. Rev. X 5, 041047 (2015).Yi W. Yi, S. Diehl, A. J. Daley, and P. Zoller, New J. Phys. 14, 055002 (2012). Bardyn C.-E. Bardyn, M. A. Baranov, C. V. Kraus, E. Rico, A. İmamoğlu, P. Zoller, and S. Diehl, New J. Phys. 15 085001 (2013).Lloyd1996 S. Lloyd, Science 273, 1073 (1996). Lauchli L. Bonnes and A. Läuchli, arXiv:1411.4831.Bernier2017 J.-S. Bernier, R. Tan, L. Bonnes, C. Guo, D. Poletti, and C. Kollath, Phys. Rev. Lett. 120, 020401 (2018). examplestates For four spins, R=6 and the possible states are |⟩, |⟩, |⟩, |⟩, |⟩ and |⟩.nonexponential Here, we stress that these seemingly localized orbitals with large single-site occupation are not exponentially localized. OSEE T. Prosen and I. Pižorn, Phys. Rev. A 76, 032316 (2007). Schollwock2011 U. Schollwöck, Ann. Phys. (N.Y.) 326, 96 (2011). VerstraeteCirac2004 F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004).ZwolakVidal2004 M. Zwolak, and G. Vidal, Phys. Rev. Lett. 93, 207205 (2004).n_realizations We have used 100 disorder realizations. nonweighted We have also computed the nonweighted inverse participation ratio (substituting n_α with 1/L in the expression for ) which is qualitatively similar to the weighted one, showing significant quantitative differences only for low disorder. fermiliquid We should highlight that the distribution of the occupation of the various natural orbitals does not follow a Fermi-liquid-like behavior as for equilibrium systems studied in Refs. <cit.>. alpha The natural orbitals for the weak disorder cases are randomly chosen. For the strong disorder cases, the two natural orbitals are chosen such that they are located in the middle of the chain so as to show both exponentially decaying tails.VakulchykDenisov2017 I. Vakulchyk, I. Yusipov, M. Ivanchenko, S. Flach, and S. Denisov, arXiv:1709.08882.nscc https://www.nscc.sg
http://arxiv.org/abs/1709.08934v2
{ "authors": [ "Xiansong Xu", "Chu Guo", "Dario Poletti" ], "categories": [ "cond-mat.quant-gas", "cond-mat.dis-nn", "quant-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20170926103951", "title": "Interplay of interaction and disorder in the steady state of an open quantum system" }
On the Majorana condition for nonlinear Dirac systems Timothy Candy, Universität Bielefeld, Fakultät für Mathematik, Postfach 100131, 33501 Bielefeld, Germany, [email protected] Sebastian Herr, Universität Bielefeld, Fakultät für Mathematik, Postfach 100131, 33501 Bielefeld, Germany, [email protected] Financial support by the German Research Foundation through the CRC 1283 “Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications” is acknowledged. 2010 Mathematics Subject Classification: 42B37, 35Q41. For arbitrarily large initial data in an open set defined by an approximate Majorana condition, global existence and scattering results for solutions to the Dirac equation with Soler-type nonlinearity and the Dirac-Klein-Gordon system in critical spaces in spatial dimension three are established. § INTRODUCTION Let m,M≥ 0. Using the summation convention with respect to μ=0,…,3, the cubic Dirac equation (Soler model) for a spinor ψ: ^1+3→^4 is given by - i γ^μ_μψ + M ψ = (ψψ) ψ. Here, x^0 = t, _0 = _t, and ψ = ψ^†γ^0 is the Dirac adjoint, where ψ^† denotes the complex conjugate transpose of the spinor ψ, and the matrices γ^μ∈^4× 4 are the standard Dirac matrices, see <cit.>. Writing =_t^2-Δ, the Dirac-Klein-Gordon system is - i γ^μ_μψ + M ψ = ϕψ,ϕ + m^2 ϕ = ψψ, where ϕ: ^1+3→ is a scalar field. These equations (<ref>) and (<ref>) arise in relativistic quantum mechanics as toy models for interactions of elementary particles, see e.g. <cit.>. In previous work, we have addressed the initial value problems for the above equations for small initial data of low regularity. Concerning the cubic Dirac equation, we have obtained small data global well-posedness and scattering in the massive case M>0 <cit.> as well as the massless case M=0 <cit.>. For the massive Dirac-Klein-Gordon system, we have obtained small data global well-posedness in the non-resonant regime for initial data of subcritical regularity <cit.> and both in the resonant and the non-resonant regime in the critical space with additional angular regularity <cit.>. For a more complete account of earlier work on the low regularity well-posedness problem, we refer to the references therein. The purpose of the current article is to gain insight into the asymptotic behaviour of an open set of large data solutions to (<ref>) and (<ref>). In <cit.> Chadam and Glassey considered the equations (<ref>) and (<ref>) under the assumption that the initial data was of the form ψ(0) = (f,g,-g^*,f^*)^t, where, given a complex scalar (or vector) z ∈^n, we let z^* denote the complex conjugate, and f, g: ^3 →. This condition (<ref>) is equivalent to ψ(0) + z γ^2 ψ^*(0)=0 with z=-i, see <cit.>. A computation shows that the condition (<ref>) is conserved under the evolution of (<ref>) and (<ref>), and moreover, that if ψ is of the form (<ref>) then ψψ = 0. Consequently, under the assumption (<ref>), the cubic Dirac equation (<ref>) and the Dirac-Klein-Gordon system (<ref>) reduce to equations which are linear in ψ.
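Since the vanishing of ψψ for data of this form is used repeatedly below, it may help to record the short computation behind it. In the standard Dirac representation, where γ^0 = diag(1,1,-1,-1), one has for ψ = (f,g,-g^*,f^*)^t:

```latex
\bar{\psi}\psi \;=\; \psi^{\dagger}\gamma^{0}\psi
  \;=\; \begin{pmatrix} f^{*} & g^{*} & -g & f \end{pmatrix}
        \begin{pmatrix} f \\ g \\ g^{*} \\ -f^{*} \end{pmatrix}
  \;=\; |f|^{2} + |g|^{2} - |g|^{2} - |f|^{2} \;=\; 0 .
```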
In particular, the argument of Chadam-Glassey gives scattering and global well-posedness for (<ref>) and (<ref>) for a class of large data <cit.>.The structural condition (<ref>) considered by Chadam and Glassey was introduced in the physics literature long before by Majorana <cit.> to describe fermions which are their own anti-particles, see <cit.> for an overview.Our main Theorems <ref> and <ref> below pertain to solutions emanating from initial data which approximately satisfy the algebraic condition (<ref>) with |z|=1. For the results concerning the cubic Dirac equation (<ref>), we rely on the estimates obtain in <cit.>. On the other hand, in the case of the Dirac-Klein-Gordon system (<ref>), we require more refined estimates than those used in <cit.> to obtain the current sharpest small data global theory. The reason is that we have to deal with a large potential in the Dirac equation, which essentially is a free Klein-Gordon wave. Instead, we use refined estimates obtained in <cit.> which give a small power of a space-time L^4_t,x norm on the righthand side. The main result regarding the cubic Dirac equation is the following. Let z ∈, |z|=1, and M0. For any A 1 there exists ϵ=ϵ(A)>0 such that for all initial data satisfyingψ(0) _H^1(^3) Aandψ(0) + z γ^2 ψ^*(0) _H^1(^3)ϵ,the cubic Dirac equation (<ref>) is globally well-posed and solutions scatter to free solutions as t →±∞.To be more precise, we prove Theorem <ref> on a reduced system instead, which is equivalent for smooth solutions. In Theorem <ref> we are forced to take ϵ much smaller than A^-1.The regularity assumption in Theorem <ref> is sharp, in the sense that Ḣ^1(^3) is the scale invariant space. In particular, the regularity assumptions match the optimal results known in the small data case <cit.>.The importance of Theorem <ref> is that we can take A to be large, in particular, we obtain scattering for an open set of large data with essentially sharp regularity assumptions. Under stronger decay and regularity conditions, such results have been proven by Bachelot in <cit.>. Very recently, a similar result has been derived in <cit.> in the presence of a time independent potential and for initial data in H^1(^3) with additional angular regularity. We also have the corresponding version for the Dirac-Klein-Gordon system. Let H^s_σ(^3) = (1-Δ_^2)^-σ/2H^s(^3) be the subspace of the standard Sobolev space H^s(^3) containing functions with σ angular derivatives in H^s(^3), equipped with the norm f _H^s_σ =( 1 -Δ_^2)^σ/2 f _H^s see<cit.> for details. Note that H^s(^3)=H^s_0(^3). Let z ∈, |z|=1. Suppose that either s>0=σ and 2M > m > 0, or σ>0=s and M,m>0. For any A 1, there exists ϵ=ϵ (A)>0, such that ifψ(0) _H^s_σ(^3) A, ϕ(0) _H^1/2+s_σ(^3) A, _t ϕ(0) _H^-1/2+s_σ(^3) A,andψ(0) + z γ^2 ψ^*(0) _H^s_σ (^3)ϵ,then the system (<ref>) is globally well-posed and solutions scatterto free solutions as t →±∞.As for the cubic Dirac equation, we prove Theorem <ref> on a reduced system instead, which is equivalent for smooth solutions.We obtain an upper bound for ϵ which is the inverse exponential of a power of A, see Theorem <ref> for more details. The Chadam-Glassey result in <cit.> corresponds to the case z=i and ϵ = 0 (with additional smoothness assumptions on the data). A result similar to Theorem <ref> under strong decay and regularity conditions has been established in <cit.>. 
Notice that the small data results in <cit.> correspond to Theorems <ref> and <ref>, respectively, in the case where A is very small, since it clearly implies the condition onψ(0) + z γ^2 ψ^*(0). Notice thats=0 is the critical regularity for (<ref>). §.§ Organisation of the paperIn Section <ref> weperform an initial reduction which decouples the small andthe large parts of the spinors. In Section <ref> we reformulateand prove the main results concerning the Soler model. In Section <ref> we reformulateand prove the main results on the Dirac-Klein-Gordon system.§ INITIAL REDUCTIONS Suppose we have data ψ(0) satisfying the assumptions of Theorem <ref>. One way to proceed would be to linearise around the Chadam-Glassey type solutions. Thus decomposingψ(0) = ψ_N(0) + ψ_L(0)where ψ_N(0) _H^1ϵ and ψ_L(0)+ z γ^2 ψ_L^*(0)=0. Let ψ_L denote the solution to the linear Dirac equation with data ψ_L(0). As mentioned in the introduction, for all times we have ψ_L ψ_L = 0. Consequently, the remaining term ψ_N = ψ - ψ_L satisfies the equation- i γ^μ_μψ_N + M ψ_N = ( ψ_L ψ_N + ψ_N ψ_L ) ψ +ψ_N ψ_N ψ.The last term is small since ψ_N(0) is small. On the other hand, it is not at all clear that the first term ( ψ_L ψ_N + ψ_N ψ_L ) ψ should be small, since it contains terms of the schematic form ψ_L^2 ψ_N, and ψ_L can be large. In particular, if we wanted to use the linearised equation to prove Theorem <ref>, we would be forced to absorb these terms into the left hand side, which would significantly complicate the required multilinear estimates. It turns out that there is a better way to decompose ψ, which avoids this problem. In particular, we can exploit the multilinear estimates already contained in <cit.>. A similar comment applies to the proof for the Dirac-Klein-Gordon system, Theorem <ref>. However, a significant additional difficulty arises in the case where the data for ϕ is large. We start with the following observation, see <cit.>, we follow <cit.>. Assume that ψ is a classical solution of - i γ^μ_μψ + M ψ = V ψfor some real-valued, scalar, and locally integrable function V:^1+3→. Then for any z ∈ we have ψ(t) + z γ^2 ψ^*(t) _L^2_x = ψ(0) +z γ^2 ψ^*(0) _L^2_x.A computation shows that γ^μγ^2 = - γ^2 (γ^μ)^* which implies that - iγ^μ_μ( ψ + z γ^2 ψ^*) =- i γ^μ_μψ+ z γ^2 (- i γ^μ_μψ)^*= - M ( ψ +z γ^2 ψ^*) + V( ψ + z γ^2 ψ^*).Result now follows by multiplying by i (ψ + z γ^2 ψ^*)^†γ^0, taking the real part, and then integrating over ^3.We can now rewrite the cubic Dirac equation (<ref>). Let φ, χ: ^1+3→^4 be smooth enough and solve- i γ^μ_μφ + M φ = ( φχ+ χφ) φ- i γ^μ_μχ + M χ = ( φχ + χφ) χwith dataφ(0) = 1/2( ψ(0) + z γ^2 ψ^*(0) ), χ(0) = 1/2( ψ(0) - z γ^2 ψ^*(0)).Then a computation using Lemma <ref> implies that for all t ∈ and |z|=1 we haveφ(t) + z γ^2 φ^*(t)=0, χ(t)-z γ^2 χ^*(t)=0and moreover that φφ = χχ = 0. Consequently, if we let ψ = φ + χ, we obtain a solution to the cubic Dirac equation (<ref>). Similarly, in the case of the Dirac-Klein-Gordon system (<ref>), let φ, χ: ^1+3→^4 and ϕ: ^1+3→ be smooth enough and solve- i γ^μ_μφ + M φ = ϕφ- i γ^μ_μχ + M χ = ϕχ ϕ + m^2 ϕ = φχ + χφwith dataφ(0) = 1/2( ψ(0) + z γ^2 ψ^*(0) ), χ(0) = 1/2( ψ(0) - z γ^2 ψ^*(0)) . As in the case of the cubic Dirac equation, an application of Lemma <ref> implies thatφ(t) + z γ^2 φ^*(t)=0, χ(t) -z γ^2 χ^*(t)=0and hence provided |z|=1 we have φφ = χχ = 0. Consequently, letting ψ = φ + χ we get a solution to (<ref>). For technical reasons, we prefer to work with a first order system. 
Defining ϕ_+=ϕ+i∇^-1∂_t ϕ, as ϕ is real-valued, we obtain- i γ^μ_μφ + M φ = (ϕ_+) φ- i γ^μ_μχ + M χ =(ϕ_+) χ-i∂_t ϕ_++∇_mϕ_+=∇_m^-1( φχ + χφ)with dataφ(0) = 1/2( ψ(0) + z γ^2 ψ^*(0) ), χ(0) = 1/2( ψ(0) - z γ^2 ψ^*(0)), andϕ_+(0)=ϕ(0)+i∇^-1∂_t ϕ(0).Conversely, from ϕ_+ we can recover ϕ by taking the real part of ϕ_+.§ CUBIC DIRAC EQUATIONWe begin by introducing some notation. Let Π_± be the projectionΠ_± = 12( I ±∇_M^-1 ( - i γ^0 γ^j _j + M γ^0) ),let U^±_m(t)=e^∓ i t ∇_m be the propagator for the homogeneous half-wave equation, letU_M(t)=U^+_M(t)Π_+ + U^-_M(t)Π_-be the propagator for the homogeneous Dirac equation, and letI^±,m_t_0(F)(t)= i∫_t_0^t U^±_m(t-t_0-t') F(t')dt', I^M_t_0(G)(t)= i∫_t_0^t U_M(t-t_0-t') γ^0 G(t')dt'.be the corresponding Duhamel integrals.The previous section implies that for smooth solutions (<ref>) and(<ref>) are equivalent, so that we focus on proving the following. Let z ∈, |z|=1, and M 0.There exists c∈ (0,1), such that for any A>0 and ϵ cA^-1, if the initial data satisfyφ(0) _H^1ϵ, χ(0) _H^1 A,then (<ref>) is globally well-posed and the solutionsscatter in H^1(^3) to free solutions as t →±∞, i.e. there exist φ_±∞∈ H^1(^3) and χ_±∞∈ H^1(^3), such thatlim_t →±∞φ(t)-U_M(t) φ_±∞_H^1=0and lim_t →±∞χ(t)-U_M(t) χ_±∞_H^1=0.Let X ⊂ C(, H^1(^3)) be the Banach space constructed in <cit.> in the massive case (M>0) and in <cit.> in the massless case (M=0). Further, let ·_X denote the norm obtained by multiplying by the norms from <cit.> by a small enough constant, such that for all solutions φ∈ X to the inhomogeneous Dirac equation- i γ^μ_μφ + M φ = (φ^(1)φ^(2)) φ^(3)the boundφ_X φ(0) _H^1(^3) + C φ^(1)_X φ^(2)_X φ^(3)_Xholds. Consider the set X= { (φ, χ) ∈ X × X|φ_X2 φ(0) _H^1, χ_X2 χ(0) _H^1}and, for A,ϵ>0, thenorm(φ, χ)_X=ϵ^-1φ_X+A^-1χ_X.X is a complete metric space.Let T = (T_1, T_2) denote the standard (inhomogeneous) solution map for (<ref>) constructed from the Duhamel formula. The bound (<ref>) together with the assumption on the initial data show that if (φ, χ) ∈X thenT_1(φ,χ)_Xφ(0) _H^1 + 2C φ_X^2 χ_Xφ(0) _H^1 + 2^4C φ(0)_H^1^2 χ(0)_H^1 ( 1 + 2^4C A ϵ) φ(0) _H^1,and similarly T_2(φ, χ)_X χ(0) _H^1 + 2C χ_X^2 φ_X ( 1 + 2^4 C A ϵ) χ(0) _H^1.Consequently, provided that ϵ≤ (2^4 C A)^-1, we see that T: X→X. Next, we verify that T is a contraction. For (φ_1,χ_1),(φ_2,χ_2)∈X another application of (<ref>) gives T_1(_1,χ_1) - T_1(_2,χ_2) _X 2^4 C A ϵ_1 - _2_X + 2^3 C ϵ^2 χ_1 - χ_2_X,and similarly T_2(_1,χ_1) - T_2(_2,χ_2) _X2^4 C A ϵχ_1 - χ_2_X + 2^3 C A^2 _1 - _2_X.This impliesT(_1,χ_1) - T(_2,χ_2) _X 2^6 CA ϵ (_1,χ_1) - (_2,χ_2) _X.Therefore, choosing ϵ≤ (2^7 C A)^-1, the map T: X→X is a contraction with respect to ·_X, hence it has a unique fixed point in X, and standard arguments show the continuity of the flow map. The scattering claim follows from the finiteness of both _X and χ_X, because this implies that the pull-backs ofand χ along the free evolution, as maps fromto H^1(^3), have finite quadratic variation,see <cit.> for the details.§ THE DIRAC-KLEIN-GORDON SYSTEMLet P_λ be the standard Littlewood-Paley projections onto dyadic frequencies of size λ, and take H_N to be the projection onto angular frequencies of size N, see <cit.> for precise definitions.If s0 and σ=0, we define f_D^s_0(I)=∇^s f_L^4(I×^3). On the other hand, for s0 and σ>0, we takef_D^s_σ(I)=(∑_N1N^2σ∇^sH_N f^2_L^4(I×^3))^1/2. The results in Section <ref> imply that for smooth solutions (<ref>) and (<ref>) are equivalent, so that we focus on proving the following. Let z ∈, |z|=1. 
Suppose that either s>0=σ and 2M > m > 0, or σ>0=s and M,m>0. There exist 0<c<1 and γ>1, such that for any A 1 and any ϵ cexp(-A^γ), ifφ(0) _H^s_σ(^3)ϵ, χ(0) _H^s_σ(^3) A,ϕ_+(0) _H^1/2+s_σ(^3) A, then the system (<ref>) is globally well-posed and scatters to free solutions as t →±∞, i.e. there exist φ_±∞∈ H^s_σ(^3), χ_±∞∈ H^s_σ(^3) andϕ_±∞∈ H^1/2+s_σ(^3), such thatlim_t →±∞φ(t)-U_M(t) φ_±∞_H^s_σ=0,lim_t →±∞χ(t)-U_M(t) χ_±∞_H^s_σ=0,andlim_t →±∞ϕ_+(t)-U^+_m(t) ϕ_±∞_H^s+1/2_σ=0.Before we turn to its proof,we summarise the results we require from <cit.>. Let s,σ∈, and I be any interval of the form I=[t_1,t_2), -∞<t_1<t_2≤∞. There exist Banach function spaces F^s, σ_M(I) and V^s, σ_+, m(I) and C_0 1 with the following properties: * C_0^∞(I×^3;^4)⊂F^s, σ_M(I), C_0^∞(I×^3;)⊂V^s, σ_+, m(I), andF^s, σ_M(I)↪ C_b(I;H^s_σ(^3;^4)),V^s, σ_+, m(I)↪ C_b(I;H^s_σ(^3;)). *For ψ∈F^s, σ_M(I), ϕ_+∈V^s, σ_+, m(I), and for any I'=[s_1,s_2)⊂ I, we have ψ|_I'∈F^s, σ_M(I'), ϕ_+|_I'∈V^s, σ_+, m(I'), andψ|_I'_F^s, σ_M(I') C_0ψ_F^s, σ_M(I), ϕ_+|_I'_V^s+1/2, σ_+, m(I') C_0ϕ_V^s+1/2, σ_+, m (I). *For ψ_0 ∈ H^s_σ(^3;^4) and ϕ_0∈ H^s_σ(^3;) we have U_M(t)ψ_0 ∈F^s, σ_M(I), U^+_m (t)ϕ_0∈V^s, σ_+, m(I), and the boundsU_Mψ_0_F^s, σ_M(I)ψ_0_H^s_σ, U^+_m ϕ_0_V^s, σ_+, m(I)ϕ_0_H^s_σ. *For ψ∈F^s, σ_M([t_1,t_2)) and ϕ_+∈V^s, σ_+, m([t_1,t_2)) the limitslim_t→ t_2U_M (-t)ψ(t) ∈ H^s(^3;^4) and lim_t→ t_2U^+_m(-t)ϕ_+(t) ∈ H^s(^3;) exist.*For ϕ_+∈V^s+1/2, σ_+, m(I) we have the Strichartz-type estimateϕ_+_D^s_σ (I)C_0 ϕ_+_V^s+1/2, σ_+, m(I). *Suppose that either s>0=σ and 2M > m > 0, or σ>0=s and M,m>0. There exists θ∈ (0,1), such that for any t_0 ∈ I the Duhamel operatorsV^s+1/2, σ_+, m(I)×F^s, σ_M(I)∋ (ϕ_+ ,φ)↦I^M_t_0((ϕ_+) φ)∈F^s, σ_M(I), F^s,σ_M(I)×F^s, σ_M(I)∋ (χ,φ)↦I^+,m_t_0(∇_m^-1 ( χφ))∈V^s+1/2, σ_+, m(I)are well-defined and the following estimates hold:I^M_t_0((ϕ_+) φ)_F^s, σ_M(I) C_0 ϕ_+_D^s_σ(I)^θϕ_+_V^s+1/2, σ_+, m(I)^1-θφ_F^s, σ_M(I), I^+,m_t_0(∇_m^-1(χφ))_V^s+1/2, σ_+, m(I) C_0 χ_F^s, σ_M(I)φ_F^s, σ_M(I). For details see Section 2, Lemma 2.1, and Theorem 3.2 in <cit.>.The first step in the proof of Theorem <ref>, is to prove the following local result.Suppose that either s>0=σ and 2M > m > 0, or σ>0=s and M,m>0. There exist θ,c∈(0,1) and C>1, such that for any A, B 1 and any 0<α c A^-1 and 0<β c B^θ-1/θ, and for any interval I=[t_1,t_2)⊂ and t_0∈ I, if we haveφ_0 _H_σ^s(^3)α, χ_0 _H_σ^s(^3) A,andU^+_m(·-t_0)ϕ_0_D_σ^s(I)β ,ϕ_0 _H_σ^1/2+s(^3) B,then there exists a unique solution (φ,χ, ϕ_+)∈F^s, σ_M(I)×F^s, σ_M(I)×V^s+1/2, σ_+, m(I) of (<ref>) on I ×^3 with initial condition (φ, χ, ϕ_+)(t_0) = (φ_0, χ_0, ϕ_0). Moreover the solution depends continuously on the initial data and satisfies the boundssup_t ∈ Iφ(t) _H_σ^s(^3) 2 φ_0 _H_σ^s(^3), sup_t ∈ Iχ(t) _H_σ^s(^3) 2 χ_0 _H_σ^s(^3),sup_t ∈ Iϕ_+(t) -U^+_m(t-t_0)ϕ_0(t_0)_H_σ^1/2+s(^3)Cφ_0 _H_σ^s(^3)χ_0 _H_σ^s(^3).For convenience, let φ_L(t)=𝒰_M(t-t_0)φ_0, χ_L(t)=𝒰_M(t-t_0)χ_0, and ϕ_+,L(t)=U^+_m(t-t_0)ϕ_0.Let C_0 1 and θ∈ (0,1) be as in Lemma <ref>. Define S as the set of all (φ,χ,ϕ_+)∈F^s, σ_M(I)×F^s, σ_M(I)×V^s+1/2, σ_+, m(I) satisfyingφ -φ_L_F^s, σ_M(I) φ_0_H_σ^s,χ-χ_L_F^s, σ_M(I)χ_0_H_σ^s,ϕ_+-ϕ_+,L_V^s+1/2, σ_+, m(I)2^3C_0φ_0_H_σ^sχ_0_H_σ^s.It is a complete metric space with respect to the norm(φ,χ,ϕ_+)_S:=α^-1φ_F^s, σ_M(I)+A^-1χ_F^s, σ_M(I)+η^-1ϕ_+_V^s+1/2, σ_+, m(I),where η>0 will be chosen later. 
Let𝒯=(T_1,T_2,T_3): F^s, σ_M(I)×F^s, σ_M(I)×V^s+1/2, σ_+, m(I)→F^s, σ_M(I)×F^s, σ_M(I)×V^s+1/2, σ_+, m(I)be defined as𝒯(φ,χ,ϕ_+)=[𝒰_M(·-t_0)φ_0+ℐ^M_t_0((ϕ_+)φ);𝒰_M(·-t_0)χ_0+ℐ^M_t_0((ϕ_+)χ); U^+_m(·-t_0)ϕ_+,0+I^+,m_t_0(∇_m^-1(φχ+χφ)) ],see Lemma <ref>. Fixed points of T are solutions of (<ref>) with the given data at time t_0. For (φ,χ,ϕ_+)∈ S we infer thatφ_F^s, σ_M(I) φ-φ_L_F^s, σ_M(I)+φ_L_F^s, σ_M(I) 2φ_0_H_σ^s 2α, χ_F^s, σ_M(I) χ-χ_L_F^s, σ_M(I)+χ_L_F^s, σ_M(I) 2χ_0_H_σ^s 2A,and similarly,ϕ_+,L_D_σ^s(I)^θϕ_+,L_V^s+1/2, σ_+, m(I)^1-θ β^θ B^1-θ,ϕ_+-ϕ_+,L_D_σ^s(I)^θϕ_+-ϕ_+,L_V^s+1/2, σ_+, m(I)^1-θ 2^3 C_0^1+θφ_0_H_σ^sχ_0_H_σ^s 2^3C_0^2α A.If α (2^5C_0^3A)^-1 and β (4C_0 B^1-θ)^-1/θ, Lemma <ref> impliesT_1(φ,χ,ϕ_+)-φ_L_F^s, σ_M(I)(2C_0 β^θB^1-θ + 2^4C_0^3α A)φ_0_H_σ^sφ_0_H_σ^s,andT_2(φ,χ,ϕ_+)-χ_L_F^s, σ_M(I)(2C_0 β^θB^1-θ + 2^4C_0^3α A)χ_0_H_σ^sχ_0_H_σ^s,as well asT_3(φ,χ,ϕ_+)-ϕ_+,L_V^s+1/2, σ_+, m(I)2^3C_0 φ_0_H_σ^sχ_0_H_σ^s.We will now show that T:S→ S is a contraction, provided that α,β are chosen small enough. Let (φ,χ,ϕ_+)∈ S and (φ̃,χ̃,ϕ̃_+)∈ S. Then, by Lemma <ref>,T_1(φ,χ,ϕ_+)-T_1 (φ̃,χ̃,ϕ̃_+)_F^s, σ_M(I) (C_0 β^θB^1-θ + 2^3C_0^3α A)φ-φ̃_F^s, σ_M(I)+2C_0^2αϕ_+-ϕ̃_+_V^s+1/2, σ_+, m(I),andT_2(φ,χ,ϕ_+)-T_2 (φ̃,χ̃,ϕ̃_+)_F^s, σ_M(I) (C_0 β^θB^1-θ + 2^3C_0^3α A)χ-χ̃_F^s, σ_M(I)+2C_0^2Aϕ_+-ϕ̃_+_V^s+1/2, σ_+, m(I),as well asT_3(φ,χ,ϕ_+)-T_3 (φ̃,χ̃,ϕ̃_+)_V^s+1/2, σ_+, m(I) 2^2C_0 αχ-χ̃_F^s, σ_M(I) +2^2C_0 Aφ-φ̃_F^s, σ_M(I).We obtainT(φ,χ,ϕ_+)- T (φ̃,χ̃,ϕ̃_+)_S4C_0^2ηη^-1ϕ_+-ϕ̃_+_V^s+1/2, σ_+, m(I)+ (C_0 β^θB^1-θ + 2^3C_0^3α A+2^2C_0 A αη^-1)α^-1φ-φ̃_F^s, σ_M(I)+ (C_0 β^θB^1-θ + 2^3C_0^3α A+2^2C_0 A αη^-1)A^-1χ-χ̃_F^s, σ_M(I).By fixing η=(2^4 C_0^2)^-1, and choosing α (2^12C_0^3A)^-1 and β (2^4C_0B^1-θ)^-1/θ, we have verified that 𝒯:S→ S is a contraction, hence it has a fixed point (φ,χ,ϕ_+)∈ S which is unique in S. For later purposes we note that we have chosen the thresholds for α and β small enough such that the same conclusion holds if α, A, and B are doubled. Similar estimates show that the fixed point depends continuously on the initial data. Due to (<ref>), the claimed estimates on the Sobolev norms for (φ(t),χ(t),ϕ_+(t)) for t∈ I follow from (<ref>), (<ref>) and (<ref>).Finally, we prove uniqueness. Assume that (φ',χ',ϕ'_+)∈F^s, σ_M(I)×F^s, σ_M(I)×V^s+1/2, σ_+, m(I) is another solution with the same data at t_0 such thatt':=sup{t∈ I| (φ',χ',ϕ'_+)(t)= (φ,χ,ϕ_+) (t)}<t_2.Then,φ'(t')_H_σ^s 2α ,χ'(t')_H_σ^s 2A,ϕ'_+(t')_H_σ^1/2+s 2B.Let ϕ'_+_V^s+1/2, σ_+, m(I)≤ R. By Lemma <ref> we haveϕ'_+_D_σ^s(I') C_0ϕ'_+_V^s+1/2, σ_+, m(I') C_0^2 Rfor any I'⊆ I. For ε∈(0, β) (which will be specified below), let δ>0 be small enough such that I':=[t',t'+δ)⊂ I and ϕ'_+_D_σ^s(I')ε. Let φ'_L(t):=𝒰_M(t-t')φ(t'), χ'_L(t):=𝒰_M(t-t')χ(t'), and ϕ'_+,L(t):=U^+_m(t-t')ϕ_+(t'). Then,φ'-φ'_L_F^s, σ_M(I')C_0 ε^θ R^1-θ(φ'-φ'_L_F^s, σ_M(I')+φ'_L_F^s, σ_M(I')),so that if we fix some ε (2C_0R^1-θ)^-1/θ, we obtainφ'-φ'_L_F^s, σ_M(I')φ(t')_H_σ^s.A similar estimate showsχ'-χ'_L_F^s, σ_M(I')χ(t')_H_σ^s.Then,ϕ'_+-ϕ'_+,L_V^s+1/2, σ_+, m(I') 2^3C_0 φ(t')_H_σ^sχ(t')_H_σ^s.These estimates show that (φ',χ',ϕ'_+) is contained in the set S defined as above, but with the modified initial condition at t' instead of t_0 and the interval I' instead of I. Also, the estimates with I replaced by I' in the first part of the proof imply that (φ,χ,ϕ_+)|_I' is contained in this version of the set S. 
The uniqueness within S proven above implies that (φ',χ',ϕ'_+)=(φ,χ,ϕ_+) in I', which contradicts the definition of t'.We can now prove Theorem <ref> as follows.By our hypothesis, the initial data attime 0 satisfyφ_0_H_σ^sϵ, χ_0_H_σ^s A,ϕ_0_H_σ^1/2+s A,and ϵ>0 is chosen small enough, depending on A only (the precise threshold will be specified below). Let β^∗(B)=cB^θ-1/θ and α^∗(A)=cA^-1 be the thresholds as in Theorem <ref>. Then, by the Strichartz estimate from Lemma <ref> (<ref>), we haveU^+_m(t)ϕ_0_D_σ^s(_+) C_0 Awith C_0 1. By monotone convergence, the function T ↦U^+_m(t)ϕ_0_D_σ^s([T_0,T)) is continuous in T and converges to zero as T↘ T_0. Therefore, for β:=β^∗(2A), we can choose0=s_0<s_1<…<s_N such thatU^+_m(t)ϕ_0_D_σ^s([s_n-1,s_n))=β/4and U^+_m(t)ϕ_0_D_σ^s([s_n,∞))β/4.With s_N+1=∞, define the collection of intervals I_n=[s_n-1, s_n+1) for n=1, ..., N. Then,β/4U^+_m(t)ϕ_0 _D_σ^s(I_n)β/2and, by Minkowski's inequality,∑_n=1^N U^+_m(t)ϕ_0 _D_σ^s(I_n) ^4 2 (C_0A)^4,therefore N N_0:=2^6(C_0A)^4β^-4.Now, fix ϵ cC^-1C_0^-1 2^-2N_0A^-1β. We claim that for every 1 nN, on I_n we have a unique solution (φ^(n),χ^(n),ϕ_+^(n))∈F^s, σ_M(I_n)×F^s, σ_M(I_n)×V^s+1/2, σ_+, m(I_n) with initial condition(φ^(n),χ^(n),ϕ_+^(n))(s_n-1)= (φ^(n-1),χ^(n-1),ϕ_+^(n-1))(s_n-1) (if 2 n N)(φ^(1),χ^(1),ϕ_+^(1))(s_0)= (φ_0,χ_0,ϕ_0)(if n=1)which satisfies the boundsU^+_m(· -s_n-1)ϕ_+^(n-1)(s_n-1) _D_σ^s(I_n)β, φ^(n)(s_n)_H_σ^s 2^n ϵ, χ^(n)(s_n) _H_σ^s 2^n A,ϕ_+^(n)(s_n) - U^+_m(s_n)ϕ_0_H_σ^1/2+s C 2^2nϵ A,where C is the constant from Theorem <ref>. Indeed, for n=1 the estimate in the first line follows by definition of I_1, and the estimates inthe second and third line follow from an application of Theorem <ref> (with t_0=0), where we use that ϵα^∗(A) and ββ^∗(A). As an induction hypothesis, let us suppose thatholds (<ref>) for some 1 nN-1. By Lemma <ref>, the induction hypothesis, and the choice of ϵ we haveU^+_m(· -s_n)ϕ_+^(n)(s_n) _D_σ^s(I_n+1) U^+_m ϕ_0_D_σ^s(I_n+1)+ U^+_m( ϕ_0 - U^+_m(-s_n) ϕ_+^(n)(s_n))_D_σ^s(I_n+1)β/2 +C_0 ϕ_0 - U^+_m(-s_n) ϕ_+^(n) (s_n)_H_σ^1/2 +sβ/2 + C C_02^2nϵ A β.From the estimate in the third line of the induction hypothesis and the smallness condition on ϵ we obtainϕ_+^(n) (s_n)_H_σ^1/2+sU^+_m(s_n)ϕ_0_H_σ^1/2+s+ C 2^2nϵ A A + C 2^2nϵ A 2A.Notice that due to our choices we have ββ^∗ (2A) and 2^nϵα^∗ (2^n A). Then, as s_n+1∈ I_n+1, we obtain from Theorem <ref> (with t_0=s_n) that φ^(n+1) (s_n+1)_H_σ^s2 φ^(n) (s_n)_H_σ^s2^n+1ϵ, χ^(n+1) (s_n+1) _H_σ^s 2χ^(n) (s_n) _H_σ^s 2^n+1 A, and, using the induction hypothesis again,ϕ_+^(n+1)(s_n+1) - U^+_m(s_n+1)ϕ_0_H_σ^1/2+sϕ_+^(n+1)(s_n+1) - U^+_m(s_n+1-s_n)ϕ_+^(n)(s_n)_H_σ^1/2+s+U^+_m(s_n+1-s_n)ϕ_+^(n)(s_n) - U^+_m(s_n+1)ϕ_0_H_σ^1/2+sC 2^2nϵ A+C 2^2nϵ AC2^2(n+1)ϵ A.The proof of the claim is complete.By uniqueness, we have constructed a global solution(φ,χ,ϕ_+)∈C_b(_+,H_σ^s)× C_b(_+,H_σ^s)× C_b(_+,H_σ^1/2+s),and due to (φ,χ,ϕ_+)|_[s_N,∞)∈F^s, σ_M([s_N,∞))×F^s, σ_M([s_N,∞))×V^s+1/2, σ_+, m([s_N,∞))it scatters as t→∞, see Lemma <ref> Part(<ref>). The claim for t→ -∞ follows by timereversibility. Continuous dependence also follows from the local result, we omit the details. This completes the proof of Theorem<ref>.amsplain
http://arxiv.org/abs/1709.09568v1
{ "authors": [ "Timothy Candy", "Sebastian Herr" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20170927150339", "title": "On the Majorana condition for nonlinear Dirac systems" }
Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion Nicholas F. Y. ChenDSO National Laboratories12 Science Park Drive, Singapore (118225)[email protected] 30, 2023 =============================================================================================================================In recent years, dynamic vision sensors (DVS), also known as event-based cameras or neuromorphic sensors, have seen increased use due to various advantages over conventional frame-based cameras. Using principles inspired by the retina, its high temporal resolution overcomes motion blurring, its high dynamic range overcomes extreme illumination conditions and its low power consumption makes it ideal for embedded systems on platforms such as drones and self-driving cars. However, event-based data sets are scarce and labels are even rarer for tasks such as object detection. We transferred discriminative knowledge from a state-of-the-art frame-based convolutional neural network (CNN) to the event-based modality via intermediate pseudo-labels, which are used as targets for supervised learning. We show, for the first time, event-based car detection under ego-motion in a real environment at 100 frames per second with a test average precision of 40.3% relative to our annotated ground truth. The event-based car detector handles motion blur and poor illumination conditions despite not explicitly trained to do so, and even complements frame-based CNN detectors, suggesting that it has learnt generalized visual representations.§ INTRODUCTION Dynamic vision sensors (DVS), also known as event-based cameras or neuromorphic sensors <cit.>, are a class of biologically-inspired sensors which capture data in an asynchronous manner. When a pixel detects a change in luminance above a certain threshold in log scale, the device emits an output (hence called an `event') containing the pixel location, time and polarity (+1 or -1, corresponding to an increase and decrease in luminance respectively). Such sensors have a temporal resolution on the order of milliseconds or less, making the device suitable for high speed recognition, tracking and collision avoidance. Other advantages of dynamic vision sensors include a high dynamic range and power efficiency, making it ideal for outdoor usage on embedded systems in robotics. Frame-based labeled data sets are widely available, contributing to the tremendous advancements in frame-based computer vision in recent years. However, event-based computer vision is still in the process of maturing, and current event-based data sets are quite limited, especially in the case of object detection. Event-based data sets have been released for robotics applications such as simultaneous localization and mapping (SLAM), visual navigation, pose estimation and optical flow estimation <cit.>, and they comprise of mostly indoor scenes such as objects on a table top, boxes in a room, posters and shapes, and occasional outdoor scenes. For object recognition and detection, some data sets were created by placing a dynamic vision sensor in front of a monitor and recording existing frame-based data sets <cit.>. Moeys  <cit.> recorded scenes of a predator robot chasing a prey robot in a controlled lab environment with some background objects, and includes ground truth of the prey robot position.In the long run, dynamic vision sensors will be integrated in platforms such as drones and autonomous vehicles which work in complex, outdoor environments. 
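As an illustration of the raw data format described above, the following minimal sketch (in Python/NumPy, not part of the original paper) represents an event stream as a list of (pixel location, time, polarity) records and accumulates the polarities per pixel over a short time window into an image-like array; a binned representation of this kind, computed over 10 ms windows, is what the detector later in the paper takes as input. The sensor resolution matches the 346 x 260 pixels of the DAVIS, while the field names, example events and window length are illustrative assumptions.

import numpy as np

# One event = (x, y, timestamp in microseconds, polarity in {-1, +1}); the values are made up.
WIDTH, HEIGHT = 346, 260
events = np.array([(120, 80, 1_000, +1), (121, 80, 1_250, +1), (300, 200, 1_900, -1)],
                  dtype=[("x", np.int32), ("y", np.int32), ("t", np.int64), ("p", np.int8)])

def bin_events(events, t_start, t_end):
    """Sum event polarities per pixel over [t_start, t_end) into a 2-D array."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

frame = bin_events(events, t_start=0, t_end=10_000)  # one 10 ms bin

Driving recordings made with such a sensor in complex outdoor environments are exactly what the data set introduced next provides.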
The DAVIS Driving Dataset 2017 (DDD17)  <cit.> is the only data set as of writing which captures such environments, and is the largest event-based data set to date, with over 400 GB and 12 hours worth of driving data spread across over 40 scenes at a resolution of 346 × 260 pixels. These scenes are varied over the times of the day (day, evening, night), weather (dry, rainy, wet) and location (campus, city, town, freeway, highway), and includes vehicle details like velocity, steering wheel angle and accelerator pedal position. The DAVIS is a camera model which contains a dynamic vision sensor synchronized with a grayscale frame-based camera (also known as the active pixel sensor, or APS).High speed object detection under ego-motion from dynamic vision sensor data serves a few purposes. First, dynamic vision sensors overcome problems which ordinary frame-based cameras typically encounter. At high speeds, frame-based cameras suffer from motion blur and collision avoidance is limited, placing a speed limit on the platform which the camera is mounted on. In extreme illumination conditions, frame-based cameras have difficulty capturing features of objects. Since dynamic vision sensors output changes in luminance, the data is a sparse representation which can be processed faster, compared to the output of frame-based cameras which contains (potentially redundant) background information. Also, detections from dynamic vision sensor data can be used to complement detections from frame-based cameras, as we will show from our experiments. Last, detection under ego-motion is required because dynamic vision sensors mounted on platforms will inevitably have ego-motion, and the output of the sensors will include some background information as a result, creating distractions which the detection algorithm must overcome.Like most objects in event-based data sets however, objects in the DDD17 are not labeled. In this paper, we take advantage of the mature state of frame-based detection by using a state-of-the-art CNN to perform car detection on the grayscale (APS) images of the DDD17. These detections, hence termed `pseudo-labels', are shown to be effective when used as targets for a separate (fast) CNN when training on dynamic vision sensor data in the form of binned frames. A schematic of this method can be found in Figure <ref>. Contributions* We trained a CNN on pseudo-labels to detect cars from dynamic vision sensor data, with a test average precision of 40.3% relative to annotated ground truth. This is the first time that high-speed (100 FPS) object detection is done on dynamic vision sensor data under ego-motion in a real environment, whereas previous works have only focused on recognizing/detecting simple objects in a controlled environment or detecting objects without camera ego-motion.* We show that a CNN trained on pseudo-labels can detect cars despite motion blur or poor lighting, even though pseudo-labels were not generated for these scenarios. This CNN even complements the original frame-based CNN that was used to generate the pseudo-labels, suggesting that our trained CNN learnt generalized visual representations of cars.§.§ Related work Pseudo-labels & cross modal distillationPseudo-labeling was introduced by Lee <cit.> for semi-supervised learning on frame-based data, where during each weight update, the unlabeled data picks up the class which has the maximum predicted probability and treats it as the ground truth. 
Chen  <cit.> proposed a method to incrementally select reliable unlabeled data to give pseudo-labels to. Saito  <cit.> proposed using three classifiers that regulate each other, to achieve domain adaptation from pseudo-labels. Pathak  <cit.> used automatically generated masks (pseudo-labels in their context) from unsupervised motion segmentation on videos, and then trained a CNN to predict these masks from static images. The trained CNN learnt feature representations and was able to perform image classification, semantic segmentation and object detection. For data sets with paired modalities (e.g. RGB-D data contains RGB data of a scene synchronized with depth data of the same scene), cross modal distillation <cit.> is a scheme that transfers knowledge from one modality, which has a lot of labels, to another modality, which has very few labels. In  <cit.>, mid-level representations of a CNN trained on RGB images were used to supervise training for another CNN to perform object detection and segmentation on depth images. In <cit.>, the visual modality of videos was used to generate pseudo-labels from CNNs and used to train a separate 1-D CNN to classify scenes from sound inputs. Our work is inspired by these cross modal methods, and we leverage on the fact that the DDD17 is a large data set with synchronized DVS and APS modalities.Event-based object detectionObject detection on dynamic vision sensor data is relatively new since labeled event-based data sets are scarce. Liu  <cit.> performed object detection on the predator-prey data set <cit.>. They used dynamic vision sensor data as an attention mechanism for a frame-based CNN, and compared it to using a CNN to perform detection on the entire grayscale image. Including particle filter for both methods to aid tracking, the former method is 70X faster than the latter, with an accuracy of 90%. Li  <cit.> proposed a method which adaptively pools feature maps from successive frames (generated by binning dynamic vision sensor data over time) to create motion invariant features for object detection. They demonstrated hand detection on a private data set, with performance scores averaging from 61.3% to 76.0% depending on the variant of the method used. Hinz  <cit.> demonstrated a tracking-by-clustering system which detects and tracks vehicles on a highway bridge. Both <cit.> and <cit.> did not benchmark their methods on dynamic vision sensor data under camera ego-motion.§ GENERATING PSEUDO-LABELS FOR DYNAMIC VISION SENSOR DATAWe overcome the lack of labeled dynamic vision sensor data by using cross modal distillation with pseudo-labels on the DDD17 data set (see Figure <ref> for a brief outline). Since the DAVIS sensor has a frame-based camera (APS) synchronized with a dynamic vision sensor, the ground truth in one camera is the same as the ground truth in the other camera. We make use of this correspondence–The grayscale (APS) images are fed into a state-of-the-art CNN which generates outputs (pseudo-labels). These pseudo-labels with confidence above a threshold are treated as ground truth and used to train a supervised learning method, which takes the dynamic vision sensor data as inputs. Though the pseudo-labels are noisy, Pathak  <cit.> argues that in the absence of systematic errors, such ‘noise’ are perturbations around the ground truth, and since supervised learning methods like neural networks have a finite capacity, it cannot learn the noise perfectly and it might learn something closer to the ground truth. 
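To make the pseudo-labelling step concrete, the sketch below shows one way the pairing could be organised: for every synchronized APS/DVS frame pair, the frame-based CNN is run on the grayscale image and only its confident detections are kept as targets for the corresponding binned DVS frame. This is a hedged reconstruction of the procedure described in this section, not the authors' code; detect_cars stands in for the RRC detector and is assumed to return (x, y, w, h, confidence) boxes, and the 0.5 cut-off anticipates the threshold reported below.

def generate_pseudo_labels(aps_frames, dvs_frames, detect_cars, conf_threshold=0.5):
    """Pair binned DVS frames with confident detections from the synchronized APS frames."""
    training_set = []
    for aps, dvs in zip(aps_frames, dvs_frames):
        boxes = [b for b in detect_cars(aps) if b[4] >= conf_threshold]
        if boxes:  # assumption: frames with no confident detection are simply skipped
            training_set.append((dvs, boxes))
    return training_set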
In the context of our experiments (car detection), the pseudo-labels are bounding boxes while the supervised learning method is also a CNN. Pseudo-labeling is not limited to object detection–it should work for other computer vision tasks like image segmentation, image recognition and activity recognition.Implementation DetailsWe chose the Recurrent Rolling Convolution (RRC) <cit.> CNN as the object detection CNN for APS images because as of writing, it isthe best-performing model on the KITTI Object Detection Evaluation benchmark <cit.>. Two versions of the RRC are used: The original model trained on the KITTI data set (which is in RGB), and another model which is fine-tuned over 1000 iterations on a grayscale-converted KITTI data set. This is to investigate the impact of pseudo-labels with different performance. As the RRC takes in images of a different aspect ratio than the APS images, we scaled the APS images to the largest possible size while preserving the aspect ratio, and padded the remainder of the image with zeroes. By keeping predictions that have at least a 0.5 confidence score, we produced about 330k and 400k pseudo-labeled images from the original and fine-tuned RRC respectively for various day and evening scenes (the RRC might not produce accurate detections for the night scenes). The scenes are split into train/val/test sets in the ratio 71/15/14 by their recording length, with each set covering a variety of conditions and scenes. Details of the recordings used from the DDD17 can be found in table <ref>. We focused only on detecting cars, but this method can easily be extended to other classes such as pedestrians and cyclists. § SUPERVISED LEARNING WITH PSEUDO-LABELS Implementation DetailsWe adopt a frame-based approach to the dynamic vision sensor data for object detection, because frame-based object detection is mature. The dynamic vision sensor data are converted to images by binning the dynamic vision sensor outputs in 10 ms intervals, and each pixel takes the value σ(x) = 255 * 1/1+e^-x/2, where x is the sum of the polarities of the events in the 10 ms interval. We refer to this as the sigmoid representation of the dynamic vision sensor data. 10 ms was chosen because we aim to achieve detection at 100 frames per second (FPS), about an order of magnitude above most state-of-the-art CNNs.We used the tiny YOLO CNN <cit.> as it is one of the few CNNs that can run above 100 FPS with a decent performance (57.1 mean average precision on the VOC 2007+2012 benchmark). We started with this CNN pre-trained on the VOC 2007+2012 benchmark and fine-tuned it using the pseudo-labels generated, in steps of 10k iterations, up to 150k iterations (including the 20k iterations from pre-training). As we want to show that the object detection CNN performs well as a result of the effectiveness of pseudo-labels rather than the result of optimizing hyper-parameters, we only changed the subdivisions from 8 to 4 and batch size from 64 to 128, and kept the other settings as provided in <cit.>. §.§ Quantitative results The scenario that we are tackling (high-speed object detection in a real environment from dynamic vision sensor data under camera ego-motion) is the first of its kind, so there are no other state-of-the-art algorithms for comparison. As such, we hope that this work serves as a benchmark for future methods tackling the same scenario. Since there is no ground-truth data for the objects in DDD17, we measure performance relative to the RRC pseudo-labels during the model validation step. 
The model with the highest average precision on the validation set will then be evaluated on the test set. We use an intersection-over-union (IoU) threshold 0.5 for this step.Evaluation against ground truthWe randomly selected 1000 frames from the test set for manual annotation, and all performance figures reported henceforth are obtained by evaluation on this subset. Similar to the KITTI object detection benchmark, we only consider objects that have a minimum height of 25 pixels. A summary of the results can be found in table <ref>. The test average precision of the DVS-only detector is 36.9% and 40.3% for pseudo-labels generated by the RRC (original) and the RRC (fine-tuned) respectively, at an IoU threshold of 0.5. As a comparison, the tiny YOLO architecture achieves a mean average precision of 57.1% when trained on real labels (VOC 2007+2012 benchmark). We see that the model trained on RRC (fine-tuned) pseudo-labels is superior to the model trained on RRC (original) pseudo-labels, which is in line with our expectations because a fine-tuned model will produce more accurate pseudo-labels. Furthermore, the weaker performance of the RRC (original) caused it to produce less pseudo-labels on the training scenes, which could also be a factor in the DVS-only model's weaker performance relative to its RRC (fine-tuned) counterpart. Note that the RRC's performance on our test set is much lower than that reported for the KITTI data set (87.4% for the hard setting, IoU threshold at 0.7) because the data set we are using has a lower resolution (346 × 260).Complementing DVS and grayscale detectionsWe evaluated if the combination of DVS and grayscale detections can improve the overall performance, listed as APS+DVS in table <ref>. We combined the detections of the DVS-only detector and the RRC, and applied non-maximum suppression with an IoU threshold of 0.4 to remove duplicates. At a detection IoU threshold of 0.5, such a combination yielded an average precision of 62.2% on our annotated ground truth data set, roughly a 16% increase over using only the RRC. This is despite the fact that the DVS-only detector is trained only on knowledge generated by the RRC,showing that the DVS-only detector has learnt generalized representations of cars. A similar effect was observed in <cit.> for the RGB and depth modalities.Given the current state of hardware, the RRC is not a real-time detector and the specific combination of the detections mentioned above is not practical yet. However, we hope that this experiment will inspire future work on using detections from the DVS to complement detections from the APS. We notice that at an IoU threshold of 0.7, the benefit from combining the detectors is marginal. This is due to the fact that the RRC architecture is specifically designed to work well at high IoU thresholds, whereas the tiny YOLO architecture is designed assuming that it will be evaluated at an IoU threshold of 0.5.Comparing DVS and grayscale detectionsWe measured the correct detections made by the detectors (regardless of the confidence score) as a fraction of the total number of ground truth objects in table <ref>. We also take a look at the union and intersection of these detections. At 0.5 IoU threshold, the DVS-only detector picked out 60.1% of the objects while the RRC picked out 64.2% of the objects. 
10.6% of the objects were detected by the DVS-only detector but not by the RRC, reinforcing the fact that the DVS-only detector learnt general representations of cars, though it was trained on the knowledge from the RRC. We notice that fine-tuning the RRC did not change the fraction by much for the DVS and APS∪DVS modalities thoughit improved the average precision in table <ref>–This might be due to the fine-tuning process increasing the confidence of correct detections rather than the number of correct detections made by the DVS-only detector.§.§ Qualitative results Though we used the sigmoid representation for training our detector, the following images from the dynamic vision sensor are displayed in the binary representation for easier viewing, where each pixel in the frame takes the value b(x) = 255 x ≠ 0 0 x = 0 , where x is the sum of the polarities of the events in the 10 ms interval. The numbers above the bounding boxes indicate the confidence, and the threshold for displaying the bounding boxes on the following images and videos is 0.5. All bounding boxes shown are a result of the fine-tuned RRC and the DVS which is trained on its pseudo-labels. Links to videos can be found in table <ref>, and the reader is strongly encouraged to randomly sample clips from all videos to gauge the performance of the DVS-only detector. Daytime and evening detectionsRandomly sampled images from the test sets are shown in Figure <ref>, and these highlight the main sources of errors. While the CNN is able to detect cars in the near-field, cars in the far-field and cars moving at the same velocity as the camera (hence zero relative velocity) only show up on the DVS images as thin outlines at best and as such are not detected by our CNN. This explains why the fraction of objects detected by the DVS is not 100%.Overcoming motion blurIn the first pair of row 2 and second pair of row 6 of Figure <ref>, we see the high temporal resolution of the dynamic vision sensor in action. The camera is moving fast and as a result, the features captured by the frame-based camera are blurred, whereas the features captured by the dynamic vision sensor is still reasonably sharp. Our event-based detector managed to detect the cars while the RRC did not produce any detections, reinforcing our motivation for object detection on dynamic vision sensor data. An additional motion blur scene can be found at the 1:30 mark in the video of the third test scene.Nighttime detectionsOne key feature of dynamic vision sensors is the high dynamic range which can cope with a wide spectrum of illumination conditions. Figure <ref> shows a night scene (rec1487356509 from the DDD17, at the 2:01:59 mark of the night scene video) where illumination is poor on the left hand side of the lane. The APS sensor barely picks up the cars as they are dark enough to blend into the surrounding, and as such pose a major challenge for conventional frame-based detection. This is confirmed by the fact that the RRC did not manage to detect the cars. However, the DVS can still detect the edges of the cars and as such, the cars on the DVS image are picked out by our DVS-only detector. Considering that the DVS-only detector is trained only on day and evening scenes, the fact that it was able to detect cars at night shows that the detector learnt representations of the cars which are robust to illumination conditions. LimitationsIn Figure <ref>, we see an example where our approach fails. 
This scene is on a highway at night (also from rec1487356509), where the light source is dominated by the headlights of the cars. As the CNN is trained on DVS images of cars in the day and evening scenes, it learns the features that are visible in the day and in the evening (edges of the car) and it does not learn the features of the headlights. To learn such features, we require labeled data which might be hard to obtain from the pseudo-labeling method because conventional CNNs do not work well on images with poor illumination conditions. This strongly suggests that the naïve approach of binning DVS data and creating images is not sufficient to represent the data.§ DISCUSSION Our implementation is largely unoptimized, and the average precision can be increased via many ways. For example, we can fine-tune the threshold to keep pseudo-labels for training, the network and learning hyper-parameters of the DVS-only detector and explore other representations of the DVS data (possibly binning the data by a fixed number of events). We can also combine detection results with tracking methods such as particle filter <cit.> or those developed for dynamic vision sensor data <cit.>.In Figure <ref>, we saw how our CNN missed detections of cars that are far away, because the pixels that spike are sparsely distributed and possibly drowned out by noise. This issue can be solved via a few ways. For instance, using a higher resolution camera will allow for more pixels to capture the features of the car. However, this approach misses the point of using an dynamic vision sensor–The output of dynamic vision sensors is intended to be sparse, because it captures changes in the scene rather than the entire scene itself. The next step is to move away from a frame-based approach when analyzing dynamic vision sensor data, and towards an entirely event-based approach, i.e. use an algorithm that accepts sparse dynamic vision sensor data, and takes temporal information into account. For example, we can combine the event-based ROI approach in <cit.> with event-based recognition approaches such as HOTS <cit.> or spiking neural networks <cit.>. These event-based recognition approaches can also be trained with pseudo-labels.§ CONCLUSIONS AND FUTURE WORK In all, we have presented two main contributions. First, we showed for the first time high speed (100 FPS) detection of a realistic object (car) in a real scenario with various backgrounds and distracting objects due to camera ego-motion, purely from dynamic vision sensor data. Previous work on event-based detection/recognition have only focused on recognizing simple objects such as numbers, or detecting objects in the absence of ego-motion, and the most realistic work is on detecting a robot in a controlled lab environment <cit.>. Our technique showed reasonable success with detections in day and night scenes, however it failed to detect cars when the headlights are bright enough to distort the features, or when the cars are too far away and show up as very sparse pixels. We suggested approaches to overcome these problems, such as using a fully event-based framework. Second, we showed that our trained CNN can detect cars despite motion blur and poor lighting without explicit training on such scenes, and even cars which were not detected by the RRC in ordinary conditions–This proves that the CNN learnt robust representations of cars from pseudo-labels. 
Future work includes implementing spiking neural networks on neuromorphic computing hardware, which could potentially bring a 70 times increase in power efficiency compared to traditional hardware <cit.>. We see value in performing event-based image segmentation because it could boost detection performance and overcome the headlights problem in Figure <ref> (if we detect an object on the road, then the object is more likely to be a car even though the DVS detector only sees headlights).We hope that this work will encourage researchers to use pseudo-labels for supervised learning techniques on dynamic vision sensor data and advance the frontiers of this field, and to publish more data sets containing synchronized DVS and APS modalities.ieee
http://arxiv.org/abs/1709.09323v3
{ "authors": [ "Nicholas F. Y. Chen" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170927034527", "title": "Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion" }
Semiclassical catastrophe theory of simple bifurcations K. Arita December 30, 2023 ======================================================= Tensor-valued data are becoming increasingly available in economics and this calls for suitable econometric tools. We propose a new dynamic linear model for tensor-valued response variables and covariates that encompasses some well-known econometric models as special cases. Our contribution is manifold. First, we define a tensor autoregressive process (ART), study its properties and derive the associated impulse response function. Second, we exploit the PARAFAC low-rank decomposition for providing a parsimonious parametrization and to incorporate sparsity effects. We also contribute to inference methods for tensors by developing a Bayesian framework which allows for including extra-sample information and for introducing shrinking effects. We apply the ART model to time-varying multilayer networks of international trade and capital stock and study the propagation of shocks across countries, over time and between layers. Keywords: Tensor calculus; multidimensional autoregression; Bayesian statistics; sparsity; dynamic networks; international trade§ INTRODUCTIONThe increasing availability of long time series of complex-structured data, such as multidimensional tables (<cit.>, <cit.>), multidimensional panel data (<cit.>, <cit.>, <cit.>, <cit.>), multilayer networks (<cit.>, <cit.>), EEG (<cit.>), neuroimaging (<cit.>) has put forward some limitations of the existing multivariate econometric models. Tensors, i.e. multidimensional arrays, are the natural class where this kind of complex data belongs.A naïve approach to model tensors relies on reshaping them into lower-dimensional objects (e.g., vectors and matrices) which can then be easily handled using standard multivariate statistical tools. However, mathematical representations of tensor-valued data in terms of vectors have non-negligible drawbacks, such as the difficulty of accounting for the intrinsic structure of the data (e.g., cells of a matrix representing a geographical map or pairwise relations, contiguous pixels in an image). Neglecting this information in the modelling might lead to inefficient estimation and misleading results. Tensor-valued data entries are highly likely to depend on contiguous cells (within and between modes) and collapsing the data into a vector destroys this information. Thus, statistical approaches based on vectorization are unsuited for modelling tensor-valued data.Tensors have been recently introduced in statistics and machine learning (e.g., <cit.>, <cit.>) and provide a fundamental background for efficient algorithms in Big Data handling (e.g., <cit.>). However, a compelling statistical approach extending results for scalar random variables to multidimensional random objects beyond dimension 2 (i.e., matrix-valued random variables, see <cit.>) is lacking and constitutes a promising field of research.The development of novel statistical methods able to deal directly with tensor-valued data (i.e., without relying on vectorization) is currently an open field of research in statistics and econometrics, where such kind of data is becoming increasingly available. The main purpose of this article is to contribute to this growing literature by proposing an extension of standard multivariate econometric regression models to tensor-valued response and covariates. 
Matrix-valued statistical models have been widely employed in time series econometrics over the past decades, especially for state space representations (<cit.>), dynamic linear models (<cit.>, <cit.>), Gaussian graphical models (<cit.>), stochastic volatility (<cit.>, <cit.>, <cit.>), classification of longitudinal datasets (<cit.>), models for network data (<cit.>, <cit.>, <cit.>) and factor models (<cit.>).<cit.> proposed a bilinear multiplicative matrix regression model, which in vector form becomes a VAR(1) with restrictions on the covariance matrix. The main shortcoming in using bilinear models is the difficulty in introducing sparsity. Imposing zero restrictions on a subset of the reduced form coefficients implies zero restrictions on the structural coefficients. Recent papers dealing with tensor-valued data include <cit.> and <cit.>, who proposed a generalized linear model to predict a scalar real or binary outcome by exploiting the tensor-valued covariate. Instead, <cit.>, <cit.> and <cit.> followed a Bayesian nonparametric approach for regressing a scalar on tensor-valued covariate. Another stream of the literature considers regression models with tensor-valued response and covariates. In this framework, <cit.> proposed a model for cross-sectional data where response and covariates are tensors, and performed sparse estimation by means of the envelope method and iterative maximum likelihood. <cit.> exploited a multidimensional analogue of the matrix SVD (the Tucker decomposition) to define a parsimonious tensor-on-tensor regression. We propose a new dynamic linear regression model for tensor-valued response and covariates. We show that our framework admits as special cases Bayesian VAR models (<cit.>), Bayesian panel VAR models (<cit.>) and Multivariate Autoregressive Index models (i.e. MAI, see <cit.>), as well as univariate and matrix regression models. Furthermore, we exploit a suitable tensor decomposition for providing a parsimonious parametrization, thus making inference feasible in high-dimensional models. One of the areas where these models can find application is network econometrics.Most statistical models for network data are static (<cit.>), whereas dynamic models maybe more adequate for many applications (e.g., banking) where data on network evolution are becoming available. Few attempts have been made to model time-varying networks (e.g., <cit.>, <cit.>, <cit.>), and most of the contributions have focused on providing a representation and a description of temporally evolving graphs. We provide an original study of time-varying economic and financial networks and show that our model can be successfully used to carry out impulse response analysis in this multidimensional setting.The remainder of this paper is organized as follows. Section <ref> provides an introduction to tensor algebra and presents the new modelling framework. Section<ref> discusses parametrization strategies and a Bayesian inference procedure. Section <ref> provides an empirical application and section<ref> gives some concluding remarks. Further details and results are provided in the supplementary material. § A DYNAMIC TENSOR MODEL In this section, we present a dynamic tensor regression model and discuss some of its properties and special cases. We review some notions of multilinear algebra which will be used in this paper, and refer the reader to Appendix <ref> and the supplement for further details. 
§.§ Tensor Calculus and DecompositionsThe use of tensors is well established in physics and mechanics (e.g., see <cit.> and <cit.>), but few contributions have been made beyond these disciplines. For a general introduction to the algebraic properties of tensor spaces, see <cit.>. Noteworthy introductions to operations on tensors and tensor decompositions are <cit.> and <cit.>, respectively.A N-order real-valued tensor is a N-dimensional array 𝒳 = (𝒳_i_1,…,i_N) ∈^I_1×…× I_N with entries 𝒳_i_1,…,i_N with i_n =1,…,I_n and n=1,…,N. The order is the number of dimensions (also called modes). Vectors and matrices are examples of 1- and 2-order tensors, respectively. In the rest of the paper we will use lower-case letters for scalars, lower-case bold letters for vectors, capital letters for matrices and calligraphic capital letters for tensors. We use the symbol “:” to indicate selection of all elements of a given mode of a tensor. The mode-k fiber is the vector obtained by fixing all but the k-th index of the tensor, i.e. the equivalent of rows and columns in a matrix. Tensor slices and their generalizations, are obtained by keeping fixed all but two or more dimensions of the tensor.It can be shown that the set of N-order tensors ^I_1×…× I_N endowed with the standard addition 𝒜 + ℬ = (𝒜_i_1,…,i_N + ℬ_i_1,…,i_N) and scalar multiplication α𝒜 = (α𝒜_i_1,…,i_N), with α∈, is a vector space. We now introduce some operators on the set of real tensors, starting with the contracted product, which generalizes the matrix product to tensors. The contracted product between 𝒳∈^I_1 ×…× I_M and 𝒴∈^J_1 ×…× J_N with I_M = J_1, is denoted by 𝒳×_M 𝒴 and yields a (M+N-2)-order tensor 𝒵∈^I_1 ×…× I_M-1× J_1 ×…× J_N-1, with entries𝒵_i_1,…,i_M-1,j_2,…,j_N = (𝒳×_M 𝒴)_i_1,…,i_M-1,j_2,…,j_N =∑_i_M=1^I_M𝒳_i_1,…,i_M-1,i_M𝒴_i_M,j_2,…,j_N.When 𝒴 = 𝐲 is a vector, the contracted product is also called mode-M product. We define with 𝒳×̅_N𝒴 a sequence of contracted products between the (K+N)-order tensor 𝒳∈^J_1×…× J_K × I_1 ×…× I_N and the (N+M)-order tensor 𝒴∈^I_1 ×…× I_N × H_1×…× H_M. Entry-wise, it is defined as( 𝒳×̅_N𝒴)_j_1,…,j_K,h_1,…,h_M = ∑_i_1=1^I_1…∑_i_N=1^I_N𝒳_j_1,…,j_K,i_1,…,i_N𝒴_i_1,…,i_N,h_1,…,h_M.Note that the contracted product is not commutative. The outer product ∘ between a M-order tensor 𝒳∈^I_1 ×…× I_M and a N-order tensor 𝒴∈^J_1 ×…× J_N is a (M+N)-order tensor 𝒵∈^I_1 ×…× I_M × J_1 ×…× J_N with entries 𝒵_i_1,…,i_M,j_1,…,j_N =(𝒳∘𝒴)_i_1,…,i_M,j_1,…,j_N =𝒳_i_1,…,i_M𝒴_j_1,…,j_N.Tensor decompositions allow to represent a tensor as a function of lower dimensional variables, such as matrices of vectors, linked by suitable multidimensional operations. In this paper, we use the low-rank parallel factor (PARAFAC) decomposition, which allows to represent a N-order tensor in terms of a collection of vectors (called marginals). A N-order tensor is of rank 1 when it is the outer product of N vectors. Let R be the rank of the tensor 𝒳, that is minimum number of rank-1 tensors whose linear combination yields 𝒳.The PARAFAC(R) decomposition is rank-R decomposition which represents a N-order tensor ℬ as a finite sum of R rank-1 tensors ℬ_r defined by the outer products of N vectors (called marginals) β_j^(r)∈^I_jℬ = ∑_r=1^R ℬ_r = ∑_r=1^R β_1^(r)∘…∘β_N^(r), ℬ_r = β_1^(r)∘…∘β_N^(r).The mode-n matricization (or unfolding), denoted by 𝐗_(n) = mat_n(𝒳), is the operation of transforming a N-dimensional array 𝒳 into a matrix. 
It consists in re-arranging the mode-n fibers of the tensor to be the columns of the matrix 𝐗_(n), which has size I_n× I_(-n)^* with I_(-n)^*=∏_i≠ n I_i. The mode-n matricization of 𝒳 maps the (i_1,…,i_N) element of 𝒳 to the (i_n,j) element of 𝐗_(n), where j = 1+ ∑_m≠ n (i_m-1) ∏_p≠ n^m-1 I_p. For some numerical examples, see <cit.> and Appendix <ref>. The mode-1 unfolding is of interest for providing a visual representation of a tensor: for example, when 𝒳 be a 3-order tensor, its mode-1 matricization 𝐗_(1) is a I_1 × I_2 I_3 matrix obtained by horizontally stacking the mode-(1,2) slices of the tensor. The vectorization operator stacks all the elements in direct lexicographic order, forming a vector of length I^*=∏_i I_i. Other orderings are possible, as long as it is consistent across the calculations. The mode-n matricization can also be used to vectorize a tensor 𝒳, by exploiting the relationship 𝒳 = 𝐗_(1), where 𝐗_(1) stacks vertically into a vector the columns of the matrix 𝐗_(1). Many product operations have been defined for tensors (e.g., see <cit.>), but here we constrain ourselves to the operators used in this work. For the ease of notation, we will use the multiple-index summation for indicating the sum over all the corresponding indices. Consider a N-order tensor ℬ∈^I_1×…× I_N with a PARAFAC(R) decomposition (with marginals β_j^(r)), a (N-1)-order tensor 𝒴∈^I_1×…× I_N-1 and a vector 𝐱∈^I_N. Then𝒴 = ℬ×_N 𝐱𝒴 = 𝐁_(N)' 𝐱𝒴' = 𝐱' 𝐁_(N)where 𝐁_(N) = ∑_r=1^R β_N^(r)β_1^(r)∘…∘β_N-1^(r)'.§.§ A General Dynamic Tensor Model Let 𝒴_t be a (I_1×…× I_N)-dimensional tensor of endogenous variables, 𝒳_t a (J_1×…× J_M)-dimensional tensor of covariates, and S_y = _j=1^N { 1,…, I_j }⊂ℕ^N and S_x = _j=1^M { 1,…, J_j }⊂ℕ^M sets of n-tuples of integers. We define the autoregressive tensor model of order p, ART(p), as the system of equations𝒴_𝐢,t = 𝒜_𝐢,0 + ∑_j=1^p ∑_𝐤∈ S_y𝒜_𝐢,𝐤,j𝒴_𝐤,t-j + ∑_𝐦∈ S_xℬ_𝐢,𝐦𝒳_𝐦,t + ℰ_𝐢,t, ℰ_𝐢,tiid𝒩(0,σ_𝐢^2),t=1,2,…, with given initial conditions 𝒴_-p+1,…,𝒴_0 ∈^I_1×…× I_N, where 𝐢 = (i_1,…,i_N) ∈ S_y and 𝒴_𝐢,t is the 𝐢-th entry of 𝒴_t. The general model in eq. (<ref>) allows for measuring the effect of all the cells of 𝒳_t and of the lagged values of 𝒴_t on each endogenous variable.We give two equivalent compact representations of the multilinear system (<ref>). The first one is used for studying the stability property of the process and is obtained through the contracted product that provides a natural setting for multilinear forms, decompositions and inversions. From (<ref>) one gets the tensor equation𝒴_t = 𝒜_0 + ∑_j=1^p 𝒜_j×̅_N𝒴_t-j + ℬ×̅_M𝒳_t + ℰ_t, ℰ_tiid𝒩_I_1,…,I_N(𝒪,Σ_1,…,Σ_N),where ×̅_a,b is a shorthand notation for the contracted product ×_a+1… a+b^1… a, 𝒜_0 is a N-order tensor of the same size as 𝒴_t, 𝒜_j, j=1,…,p, are 2N-order tensors of size (I_1×…× I_N × I_1×…× I_N) and ℬ is a (N+M)-order tensor of size (I_1×…× I_N × J_1×…× J_M). The error term ℰ_t follows a N-order tensor normal distribution (<cit.>) with probability density functionf_ℰ(ℰ) = exp( -1/2 (ℰ-ℳ) ×̅_N,0( ∘_j=1^N Σ_j^-1) ×̅_N,0 (ℰ-ℳ) )/(2π)^I^*/2∏_j=1^N Σ_j^I_-j^*/2,where I^* = ∏_i I_i and I_-i^* = ∏_j≠ i I_j, ℰ and ℳ are N-order tensors of size I_1×…× I_N. Each covariance matrix Σ_j∈^I_j× I_j, j=1,…,N, accounts for the dependence along the corresponding mode of ℰ.The second representation of the ART(p) in eq. (<ref>) is used for developing inference. 
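Because the unfolding and product conventions above are easy to get wrong in practice, the following NumPy sketch (not part of the original paper) makes them concrete for a 3-order tensor: it builds the tensor from PARAFAC marginals via outer products, implements the mode-n matricization with the column-major ordering used above (so that the vectorization stacks the columns of the mode-1 unfolding), and verifies numerically the matricized form of the mode-N product with a vector stated in the lemma. The dimensions and rank are arbitrary illustrative choices.

import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 3, 5, 2           # illustrative dimensions and PARAFAC rank
dims = (I1, I2, I3)

# PARAFAC(R): B = sum_r beta_1^(r) o beta_2^(r) o beta_3^(r), outer products of the marginals.
marginals = [[rng.standard_normal(d) for d in dims] for _ in range(R)]
B = sum(reduce(np.multiply.outer, betas) for betas in marginals)

def unfold(T, n):
    """Mode-n matricization: the mode-n fibers become the columns of an I_n x prod(I_-n) matrix."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order="F")

def vec(T):
    """Vectorization consistent with stacking the columns of the mode-1 unfolding."""
    return unfold(T, 0).flatten(order="F")

# The mode-3 product of B with a vector x contracts the last index of B ...
x = rng.standard_normal(I3)
Y = np.tensordot(B, x, axes=([2], [0]))            # an I1 x I2 array

# ... and in matricized form vec(Y) = B_(3)' x, as in the lemma above.
assert np.allclose(vec(Y), unfold(B, 2).T @ x)

# Parsimony of the decomposition: R(I1+I2+I3) marginal entries versus I1*I2*I3 free entries.
print(R * sum(dims), "versus", int(np.prod(dims)))

The same mechanics carry over directly to the coefficient tensors of the models introduced next.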
Let 𝒦_m be the (I_1×…× I_N × m)-dimensional commutation tensor such that 𝒦_m^σ×̅_N,0𝒦_m= 𝐈_m, where 𝒦_m^σ is the tensor obtained by flipping the modes of 𝒦_m. Define the (I_1×…× I_N × I^*)-dimensional tensor 𝒜_j = 𝒜_j ×̅_N𝒦_I^* and the (I_1×…× I_N × J^*)-dimensional tensor ℬ = ℬ×̅_N𝒦_J^*, with J^* = ∏_j J_j. We obtain 𝒜_j ×_N+1𝒴_t-j = 𝒜_j ×̅_N𝒴_t-j and the compact representation𝒴_t = 𝒜_0 + ∑_j=1^p 𝒜_j ×_N+1𝒴_t-j + ℬ×_N+1𝒳_t + ℰ_t,ℰ_tiid𝒩_I_1,…,I_N(𝒪,Σ_1,…,Σ_N).Let = (^I_1×…× I_N× I_1×…× I_N, ×̅_N) be the space of (I_1×…× I_N× I_1×…× I_N)-dimensional tensors endowed with the contracted product ×̅_N. We define the identity tensor ℐ∈ to be the neutral element of ×̅_N, that is the tensor whose entries are ℐ_i_1,…,i_N,i_N+1,…,i_2N = 1 if i_k = i_k+N for all k=1,…,N and 0 otherwise. The inverse of a tensor 𝒜∈ is the tensor 𝒜^-1∈ satisfying 𝒜^-1×̅_N𝒜 = 𝒜×̅_N𝒜^-1 = ℐ. A complex number λ∈ and a nonzero tensor 𝒳∈^I_1×…× I_N are called eigenvalue and eigenvector of the tensor 𝒜∈ if they satisfy the multilinear equation 𝒜×̅_N𝒳 = λ𝒳. We define the spectral radius ρ(𝒜) of 𝒜 to be the largest modulus of the eigenvalues of 𝒜. We define a stochastic process to be weakly stationary if the first and second moment of its finite dimensional distributions are finite and constant in t. Finally, note that it is always possible to rewrite an ART(p) process as a ART(1) process on an augmented state space, by stacking the endogenous tensors along the first mode. Thus, without loss of generality, we focus on the case p=1. We use the definition of inverse tensor, spectral radius and the convergence of power series of tensors to prove the following result.Every (I_1 × I_2 ×…× I_N × I_1 × I_2 ×…× I_N)-dimensional ART(p) process 𝒴_t = ∑_k=1^p 𝒜_k×̅_N 𝒴_t-j + ℰ_t can be rewritten as a (pI_1 × I_2 ×…× I_N × pI_1 × I_2 ×…× I_N)-dimensional ART(1) process 𝒴_t = 𝒜×̅_N 𝒴_t-1 + ℰ_t.If ρ(𝒜_1) < 1 and the process 𝒳_t is weakly stationary, then the ART process in eq. (<ref>), with p=1, is weakly stationary and admits the representation𝒴_t = (ℐ -𝒜_1)^-1×̅_N𝒜_0 + ∑_k=0^∞𝒜_1^k ×̅_Nℬ×̅_M𝒳_t-k + ∑_k=0^∞𝒜_1^k ×̅_Nℰ_t-k. The VAR(p) in eq. (<ref>) is weakly stationary if and only if the ART(p) in eq. (<ref>) is weakly stationary.§.§ ParametrizationThe unrestricted model in eq. (<ref>) cannot be estimated, as the number of parameters greatly outmatches the available data. We address this issue by assuming a PARAFAC(R) decomposition for the tensor coefficients, which makes the estimation feasible by reducing the dimension of the parameter space. The models in eqq. (<ref>)-(<ref>) are equivalent but the assuming a PARAFAC decomposition for the coefficient tensors leads to different degrees of parsimony, as shown in the following remark. The two models (<ref>) and (<ref>) combined with the PARAFAC decomposition for the tensor coefficients allow for different degree of parsimony. To show this, without loss of generality, focus on the coefficient tensor 𝒜_1 (similar argument holds for 𝒜_j, j=2,…,p and ℬ). By assuming a PARAFAC(R) decomposition for 𝒜_1 in (<ref>) and for 𝒜_1 in (<ref>), we get, respectively𝒜_1 = ∑_r=1^R α_1^(r)∘…∘α_N^(r)∘α_N+1^(r)∘…∘α_2N^(r), 𝒜_1 = ∑_r=1^R α_1^(r)∘…∘α_N^(r)∘α_N+1^(r),The length of the vectors α_j^(r) and α_j^(r) coincide for each j=1,…,N. However, α_N+1^(r) has length I^* while α_N+1^(r),…,α_2N^(r) have length I_1,…,I_N, respectively. Therefore, the number of free parameters in the coefficient tensor 𝒜_1 is R(I_1 + … + I_N + ∏_j=1^N I_j), while it is 2R(I_1 + … + I_N) for𝒜_1. 
This highlights the greater parsimony granted by the use of the PARAFAC(R) decomposition in model (<ref>) as compared to model (<ref>).There is a relation between the (I_1×…× I_N)-dimensional ART(p) and a (I_1·…· I_N)-dimensional VAR(p) model. The vector form of (<ref>) is𝒴_t= 𝒜_0 + ∑_j=1^p mat_N+1(𝒜_j) 𝒴_t-j + mat_N+1(ℬ) 𝒳_t + ℰ_t 𝐲_t = α_0 + ∑_j=1^p 𝐀_(N+1),j' 𝐲_t-j + 𝐁_(N+1)' 𝐱_t + ϵ_t, ϵ_t ∼𝒩_I^*(0, Σ_N ⊗…⊗Σ_1),where the constraint on the covariance matrix stems from the one-to-one relation between the tensor normal distribution for 𝒳 and the distribution of its vectorization (<cit.>) given by 𝒳∼𝒩_I_1,…,I_N(ℳ,Σ_1,…,Σ_N) if and only if 𝒳∼𝒩_I^*(ℳ,Σ_N ⊗…⊗Σ_1). The restriction on the covariance structure for the vectorized tensor provides a parsimonious parametrization of the multivariate normal distribution, while allowing both within and between mode dependence. Alternative parametrizations for the covariance lead to generalizations of standard models. For example, assuming an additive covariance structure results in the tensor ANOVA.This is an active field for further research. For the sake of exposition, consider the model in eq. (<ref>), where p=1, the response is a 3-order tensor 𝒴_t∈^d× d× d and the covariates include only a constant coefficient tensor 𝒜_0. Define by k_ℰ the number of parameters of the noise distribution. The total number of parameters to estimate in the unrestricted case is (d^2N) + k_ℰ = O(d^2N), with N=3 in this example. Instead, in a ART model defined via the mode-n product in eq. (<ref>), assuming a PARAFAC(R) decomposition on 𝒜_0 the total number of parameters is ∑_r=1^R (d^N+d^N) + k_ℰ = O(d^N). Finally, in the ART model defined by the contracted product in eq. (<ref>) with a PARAFAC(R) decomposition on 𝒜_0 the number of parameters is ∑_r=1^R Nd + k_ℰ = O(d). A comparison of the different parsimony granted by the PARAFAC decomposition in all models is illustrated in Fig. <ref>. The structure of the PARAFAC decomposition poses an identification problem for the marginals β_j^(r), which may arise from three sources: * scale identification, since λ_jrβ_j^(r)∘λ_krβ_k^(r) = β_j^(r)∘β_k^(r) for any collection {λ_jr}_j,r such that ∏_j=1^J λ_jr=1;* permutation identification, since for any permutation of the indices { 1,…,R } the outer product of the original vectors is equal to that of the permuted ones;* orthogonal transformation identification, since β_j^(r)Q ∘β_k^(r) Q = β_j^(r)∘β_k^(r) for any orthonormal matrix Q.Note that in our framework these issues do not hamper the inference, since our object of interest is the coefficient tensor ℬ, which is exactly identified. The marginals β_j^(r) have no interpretation, as the PARAFAC decomposition is assumed on the coefficient tensor for the sake of providing a parsimonious parametrization.§.§ Important Special CasesThe model in eq. (<ref>) is a generalization of several well-known econometric models, as shown in the following remarks. See the supplement for the proofs of these results.If I_i=1 for i=1,…,N, then model (<ref>) reduces to a univariate regressiony_t = α_0 + ∑_j=1^p α_j y_t-j + β' 𝒳_t +ϵ_t ϵ_t ∼𝒩(0,σ^2),where the coefficients of (<ref>) become 𝒜_j = α_j ∈, j=0,…,p and ℬ = β∈^J^*.If I_i=1 for i=2,…,N and define by 1_n the unit vector of length n, then model (<ref>) reduces to a Seemingly Unrelated Regression (SUR) model (<cit.>)𝐲_t = α_0 + B ×_2 𝒳_t + ϵ_t ϵ_t ∼𝒩_m(0,Σ),where I_1=m and the coefficients of (<ref>) become 𝒜_j = 0, j=1,…,p, 𝒜_0 = α_0 ∈^m and ℬ = B ∈^m × J^*. 
Note that, by definition, B ×_2 𝒳_t = B 𝒳_t. Consider the setup of Remark <ref>. If 𝐳_t = 𝐲_t-1, then weoobtain a VARX(1) model, with restricted covariance matrix. Another vector of regressors 𝐰_t = W_t∈^q may enter the regression (<ref>) pre-multiplied (along mode-3) by a tensor 𝒟∈^m× n× q. Therefore, model (<ref>) encompasses as a particular case also the panel VAR models of <cit.>, <cit.>, <cit.>, provided that we make the same restriction on Σ.The model in eq. (<ref>) generalises the Vector Error Correction Model (VECM) widely used in multivariate time series analysis (see <cit.>, <cit.>). Consider a K-dimensional VAR(1) model𝐲_t = B 𝐲_t-1 + ϵ_t ϵ_t ∼𝒩_m(0,Σ).Defining Δ𝐲_t = 𝐲_t-𝐲_t-1 and Π = (B-I) = αβ', where α and β are K× R matrices of rank R<K, we obtain the associated VECMΔ𝐲_t = αβ' 𝐲_t-1 + ϵ_t.This is used for studying the cointegration relations among the components of 𝐲_t. Since Π = αβ' = ∑_r=1^R α_:,rβ_:,r' = ∑_r=1^R β̃_1^(r)∘β̃_2^(r), we can interpret the VECM model in eq. (<ref>) as a particular case of the model in eq. (<ref>) where the coefficient ℬ is the matrix Π = αβ'. Furthermore by writing Π = ∑_r=1^R β̃_1^(r)∘β̃_2^(r) we can interpret this relation as a rank-R PARAFAC decomposition of ℬ. Following this analogy, the PARAFAC rank corresponds to the cointegration rank, β̃_1^(r) are the mean-reverting coefficients and β̃_2^(r) = (β̃_2,1^(r),…,β̃_2,K^(r)) are the cointegrating vectors. See the supplement for details.This interpretation opens the way to reparametrization of ℬ based on tensor SVD representations, and to the application of regularization methods in the spirit of <cit.>. This is beyond the scope of the paper, thus we leave it for further research.The multivariate autoregressive index model (MAI) of <cit.> is another special case of model (<ref>). A MAI is a VAR model with a low rank decomposition imposed on the coefficient matrix, as follows𝐲_t = 𝐀𝐁_0 𝐲_t-1 + ϵ_t,where 𝐲_t is a (n× 1) vector, whereas 𝐀,𝐁_0 are (n× R) and (R× n) matrices, respectively. In <cit.>, the authors assumed R=1. This corresponds to our parametrization using R=1 and defining 𝐀β_1^(1) and 𝐁_0' = β_2^(1), which leads us to 𝐀𝐁_0 = β_1^(1)∘β_2^(1). By removing all the covariates from eq. (<ref>) except the lags of the dependent variable, we obtain a tensor autoregressive model of order p (or ART(p))𝒴_t = 𝒜_0 + ∑_j=1^p 𝒜_j ×_N+1𝒴_t-j + ℰ_t, ℰ_t iid𝒩_I_1,…,I_N(0,Σ_1,…,Σ_N).Matrix autoregressive models (MAR) are another special case of (<ref>), which can be obtained from eq. (<ref>) when the dependent variable is a matrix. See the supplement for an example.§.§ Impulse Response Analysis In this section we derive two impulse response functions (IRF) for ART models, the block Cholesky IRF and the block generalised IRF, exploiting the relationship between ART and VAR models. Without loss of generality, we focus on the ART(p) model in eq. (<ref>), with p=1 and 𝒜_0 = 0, and introduce the following notation. Let 𝐲_t = 𝒴_t and ϵ_t = ℰ_t∼𝒩_I^*(0,Σ) be the (I^* × 1) tensor response and noise term in vector form, respectively, where Σ = Σ_N ⊗…⊗Σ_1 is the (I^* × I^*) covariance of the model in vector form and I^* = ∏_k=1^N I_k. Partition Σ in blocks asΣ = ( [AB; B'C ]),where A is n × n, B is n × (I^*-n) and C is (I^*-n)× (I^*-n). 
Then, denoting by S = C - B' A^-1 B the Schur complement of A, the LDU decomposition of Σ isΣ =( [I_n _n,I^*-n;B' A^-1I_I^*-n ]) ( [ A_n,I^*-n; _n,I^*-n' S ]) ( [ I_nA^-1 B; _n,I^*-n' I_I^*-n ]) = L D L'.Hence Σ can be block-diagonalisedD = L^-1Σ (L')^-1 = ( [ A_n,I^*-n; _n,I^*-n' S ]).From the Cholesky decomposition of D one obtains a block Cholesky decompositionΣ = ( [L_A _n,I^*-n; B' (L_A^-1)'L_S ]) ( [L_A'L_A^-1 B; _n,I^*-n'L_S' ]) = P P',where L_A,L_S are the Cholesky factors of A and S, respectively.Assume the vectorised ART process admits an infinite MA representation, with Ψ_0 = I_I^* and Ψ_i = mat_(4)(ℬ)' Ψ_i-1, then using the previous results we get:𝐲_t = ∑_i=0^∞Ψ_i ϵ_t-i = ∑_i=0^∞ (Ψ_i L) (L^-1ϵ_t-i) = ∑_i=0^∞ (Ψ_i L) η_t-iη_t ∼𝒩_I^*(0,D),where η_t = L^-1ϵ_t are the block-orthogonalised shocks and D is the block-diagonal matrix in eq. (<ref>). Denote with E_n the I^* × n matrix that selects n columns from a pre-multiplying matrix, i.e. D E_n is a matrix containing n columns of D. Denote with δ^* a n-dimensional vector of shocks. Using the property of the multivariate Normal distribution, and recalling that the top-left block of size n of D is A, we extend the generalised IRF of <cit.> and <cit.> by defining the block generalised IRFψ^G(h;n) = 𝔼( 𝒴_t+h | ℰ_t' = (δ^*',0_I^*-n'),ℱ_t-1) - 𝔼( 𝒴_t+h | ℱ_t-1)= (Ψ_h L) D E_n A^-1δ^*,where ℱ_t is the natural filtration associated to the stochastic process. Starting from eq. (<ref>) we derive the block Cholesky IRF (OIRF) asψ^O(h;n) = 𝔼( 𝒴_t+h | ℰ_t' =(δ^*',0_I^*-n'),ℱ_t-1)- 𝔼( 𝒴_t+h | ℰ_t' = 0_I^*', ℱ_t-1)= (Ψ_h L) P E_n δ^*.Define with 𝐞_j the j-th column of the I^*-dimensional identity matrix. The impact of a shock δ^* to the j-th variable on all I^* variables is given below in eq. (<ref>), whereas the impact of a shock to the j-th variable on the i-th variable is given in eq. (<ref>). ψ_j^G(h;n) = Ψ_h L D 𝐞_j D_jj^-1 δ^*, ψ_j^O(h;n) = Ψ_h L P 𝐞_j δ^*ψ_ij^G(h;n) = 𝐞_i' Ψ_h L D 𝐞_j D_jj^-1 δ^*, ψ_ij^O(h;n) = 𝐞_i' Ψ_h L P 𝐞_j δ^*.Finally, denoting δ_j = 𝐞_j δ^*, we have the compact notation ψ_j^G(h;n) = Ψ_h L D D_jj^-1 δ_j, ψ_j^O(h;n) = Ψ_h L P δ_jψ_ij^G(h;n) = 𝐞_i' Ψ_h L D D_jj^-1 δ_j, ψ_ij^O(h;n) = 𝐞_i' Ψ_h L P δ_j. § BAYESIAN INFERENCE In this section, without loss of generality, we present the inference procedure for a special case of the model in eq. (<ref>), given by𝒴_t = ℬ×_4 𝒴_t-1 + ℰ_t, ℰ_t iid𝒩_I_1,I_2,I_3(0,Σ_1,Σ_2,Σ_3).Here 𝒴_t is a 3-order tensor response of size I_1× I_2× I_3, 𝒳_t = 𝒴_t-1 and ℬ is thus a 4-order coefficient tensor of size I_1× I_2× I_3× I_4, with I_4 = I_1 I_2 I_3. This is a 3-order tensor autoregressive model of lag-order 1, or ART(1), coinciding with eq. (<ref>) for p=1 and 𝒜_0 = 0. The noise term ℰ_t has as tensor normal distribution, with zero mean and covariance matrices Σ_1,Σ_2,Σ_3 of sizes I_1× I_1, I_2× I_2 and I_3× I_3, respectively, accounting for the covariance along each of the three dimensions of 𝒴_t. The specification of a tensor model with a tensor normal noise instead of a vector model (like a Gaussian VAR) has the advantage of being more parsimonious. 
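To make the data-generating process concrete, the following minimal Python/NumPy sketch simulates from this ART(1) model, working directly with the vectorised form of the recursion (made explicit in the next paragraph). All dimensions, coefficient values and covariance matrices below are purely illustrative assumptions of this sketch and are not taken from the paper.

import numpy as np

# Illustrative sizes: a (4 x 4 x 2) response, PARAFAC rank R = 3, T = 50 periods.
rng = np.random.default_rng(0)
I1, I2, I3, R, T = 4, 4, 2, 3, 50
Istar = I1 * I2 * I3

# PARAFAC(R) marginals; mat_4(B) = sum_r beta_4^(r) (beta_3^(r) kron beta_2^(r) kron beta_1^(r))'.
b1, b2, b3 = (rng.normal(scale=0.3, size=(d, R)) for d in (I1, I2, I3))
b4 = rng.normal(scale=0.3, size=(Istar, R))
B4 = sum(np.outer(b4[:, r], np.kron(b3[:, r], np.kron(b2[:, r], b1[:, r]))) for r in range(R))
B4 *= 0.9 / np.abs(np.linalg.eigvals(B4)).max()    # enforce spectral radius < 1 (stationarity)

def rand_cov(d):                                   # random SPD covariance for each mode
    A = rng.normal(size=(d, d))
    return A @ A.T / d + np.eye(d)

S1, S2, S3 = rand_cov(I1), rand_cov(I2), rand_cov(I3)
L = np.linalg.cholesky(np.kron(S3, np.kron(S2, S1)))   # separable covariance Sigma_3 kron Sigma_2 kron Sigma_1

# vec(Y_t) = mat_4(B)' vec(Y_{t-1}) + vec(E_t), with column-major (Fortran) vectorisation
y, Y = np.zeros(Istar), np.empty((T, I1, I2, I3))
for t in range(T):
    y = B4.T @ y + L @ rng.normal(size=Istar)
    Y[t] = y.reshape(I1, I2, I3, order="F")

The rescaling of B4 in the sketch simply guarantees that the simulated process satisfies the stationarity condition discussed in Section <ref>.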
By vectorising (<ref>), we get the equivalent VAR𝒴_t = 𝐁_(4)' 𝒴_t-1 + ℰ_t, ℰ_tiid𝒩_I^*(0,Σ_3 ⊗Σ_2 ⊗Σ_1),whose covariance has a Kronecker structure, which contains (I_1(I_1+1) + I_2(I_2+1) + I_3(I_3+1))/2 parameters (as opposed to (I^*(I^*+1))/2 of an unrestricted VAR) and allows for heteroskedasticity.The choice of the Bayesian approach to inference is motivated by the fact that the large number of parameters may lead to an overfitting problem, especially when the sample size is rather small. This issue can be addressed by the indirect inclusion of parameter restrictions through a suitable specification of the corresponding prior distributions. In the unrestricted model (<ref>) it would be necessary to define a prior distribution on the 4-order tensor ℬ. The literature on tensor-valued distributions is limited to the elliptical family (e.g., <cit.>), which includes the tensor normal and tensor t. Neither distribution easily allows for the specification of restrictions on a subset of the entries of the tensor, hampering the use of standard regularization prior distributions (such as shrinkage priors).The PARAFAC(R) decomposition of the coefficient tensor provides a way to circumvent this issue. This decomposition allows one to represent a tensor through a collection of vectors (the marginals), for which many flexible shrinkage prior distributions are available. Indirectly, this introduces a priori sparsity on the coefficient tensor. §.§ Prior SpecificationThe choice of the prior distribution on the PARAFAC marginals is crucial for recovering the sparsity pattern of the coefficient tensor and for the efficiency of the inference. Global-local prior distributions are based on scale mixtures of normal distributions, where the different components of the covariance matrix govern the amount of prior shrinkage. Compared to spike-and-slab distributions (e.g., <cit.>, <cit.>, <cit.>), which become infeasible as the parameter space grows, global-local priors have better scalability properties in high-dimensional settings. They do not provide automatic variable selection, which can nonetheless be obtained by post-estimation thresholding (<cit.>).Motivated by these arguments, we define a global-local shrinkage prior for the marginals β_j^(r) of the coefficient tensor ℬ following the hierarchical prior specification of <cit.> (see also <cit.>, <cit.>). For each β_j^(r), we define a prior distribution as a scale mixture of normals centred at zero, with three components for the covariance. The global parameter τ governs the overall variance, the middle parameter ϕ_r defines the common shrinkage for the marginals in the r-th component of the PARAFAC, and the local parameter W_j,r= diag(𝐰_j,r) drives the shrinkage of each entry of each marginal.Summarizing, for p=1,…,I_j, j=1,…,J (J=4 in eq. (<ref>)) and r=1,…,R, the hierarchical prior structure[We use the shape-rate formulation for the gamma distribution.] for each vector of the PARAFAC(R) decomposition in eq. (<ref>) isπ(ϕ)∼𝒟ir(α1_R) π(τ)∼𝒢a(a_τ,b_τ) π(λ_j,r)∼𝒢a(a_λ,b_λ)π(w_j,r,p|λ_j,r)∼ℰxp (λ_j,r^2/2)π( β_j^(r)| W_j,r,ϕ,τ)∼𝒩_I_j(0, τϕ_r W_j,r),where 1_R is the vector of ones of length R and we assume a_τ = α R and b_τ = α R^1/J.
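For concreteness, a draw of the marginals from this prior hierarchy can be simulated as follows (Python/NumPy sketch; the dimensions and hyper-parameter values α, a_λ, b_λ are arbitrary illustrative choices, not those of the empirical application).

import numpy as np

rng = np.random.default_rng(1)
R, J = 3, 4
I = [4, 4, 2, 32]                      # I_j, with I_4 = I_1 * I_2 * I_3
alpha, a_lam, b_lam = 1.0, 3.0, 3.0
a_tau, b_tau = alpha * R, alpha * R ** (1.0 / J)

phi = rng.dirichlet(alpha * np.ones(R))      # middle (component-specific) shrinkage weights
tau = rng.gamma(a_tau, 1.0 / b_tau)          # global scale (shape-rate convention)

beta = [[None] * R for _ in range(J)]
for j in range(J):
    for r in range(R):
        lam = rng.gamma(a_lam, 1.0 / b_lam)
        w = rng.exponential(scale=2.0 / lam**2, size=I[j])         # w | lambda ~ Exp(lambda^2 / 2)
        beta[j][r] = rng.normal(0.0, np.sqrt(tau * phi[r] * w))    # beta_j^(r) ~ N(0, tau * phi_r * W_{j,r})

Under this hierarchy, small draws of τ, ϕ_r or w_j,r,p pull the corresponding entries of β_j^(r), and hence of ℬ, towards zero, which is the sparsity-inducing mechanism described above.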
The conditional prior distribution of a generic entry b_i_1,…,i_J of ℬ is the law of a sum of product Normals[A product Normal is the distribution of the product of n independent centred Normal random variables.]: it is symmetric around zero, with fatter tails than both a standard Gaussian or a standard Laplace distribution (see the supplement for further details). Note that a product Normal prior promotes sparsity due to the peak at zero. The following result characterises the conditional prior distribution of an entry of the coefficient tensor ℬ induced by the hierarchical prior in eq. (<ref>). See the supplement for the proof.Let b_ijkp = ∑_r=1^R β_r, where β_r = β_1,i^(r)β_2,j^(r)β_3,k^(r)β_4,p^(r), and let m_1=i, m_2=j, m_3=k and m_4=p. Under the prior specification in (<ref>), the generic entry b_ijkp of the coefficient tensor ℬ has the conditional prior distributionπ(b_ijkp | τ, ϕ, 𝐖) = p( ∑_r=1^R β_r | - ) = p(β_1 | -) ∗…∗ p(β_R | -),where ∗ denotes convolution andp(β_r | -) = K_r · G_4,0^4,0( β_r^2 ∏_h=1^4 (2τϕ_r w_h,r,m_h)^-1| 0),with G_p,q^m,n(x| a_𝐛^𝐚) a Meijer G-function andG_4,0^4,0( β_r^2 ∏_h=1^4 (2τϕ_r w_h,r,m_h)^-1| 0) = 1/2π i∫_c-i^∞^c+i^∞( β_r^2 ∏_h=1^4 (2τϕ_r w_h,r,m_h)^-1)^-s ds K_r = (2π)^-4/2∏_h=1^4 (2τϕ_r w_h,r,m_h)^-1.The use of Meijer G- and Fox H-functions is not new in econometrics (e.g., <cit.>), and they have been recently used for defining prior distributions in Bayesian statistics (<cit.>, <cit.>). From eq. (<ref>), we have that the covariance matrices Σ_j enter the likelihood in a multiplicative way, therefore separate identification of their scales requires further restrictions. <cit.> and <cit.> adopt independent hyper-inverse Wishart prior distributions (<cit.>) for each Σ_j, then impose the identification restriction Σ_j,11 = 1 for j=2,…,J-1. The hard constraint Σ_j=𝐈_I_j (where 𝐈_j is the identity matrix of size j), for all but one n, implicitly imposes that the dependence structure within different modes is the same, but there is no dependence between modes. We follow <cit.>, who suggests to introduce dependence between the Inverse Wishart prior distribution of each Σ_j via a hyper-parameter γ affecting their prior scale. To account for marginal dependence, we add a level of hierarchy, thus obtainingπ(γ) ∼𝒢a(a_γ,b_γ) π(Σ_j | γ)∼ℐ𝒲_I_j(ν_j,γΨ_j).Define Λ = {λ_j,r : j=1,…,J, r=1,…,R } and 𝐖 = { W_j,r : j=1,…,J, r=1,…,R }, and let θ denote the collection of all parameters. The directed acyclic graph (DAG) of the prior structure is given in Fig. <ref>.Note that our prior specification is flexible enough to include Minnesota-type restrictions or hierarchical structures as in <cit.>. §.§ Posterior ComputationDefine 𝐘 = {𝒴_t }_t=1^T, I_0 = ∑_j=1^J I_j, β_-j^(r) = {β_i^(r): i≠ j } and ℬ_-r = { B_i : i≠ r }, with B_r = β_1^(r)∘…∘β_4^(r). The likelihood function of model (<ref>) isL(𝐘 | θ) = ∏_t=1^T (2π)^-I_4/2∏_j=1^3 Σ_j^-I_-j/2 ·exp( -1/2Σ_2^-1 (𝒴_t -ℬ×_4 𝐲_t-1) ×_1… 3^1… 3( ∘_j=1^3 Σ_j^-1) ×_1… 3^1… 3 (𝒴_t -ℬ×_4 𝐲_t-1) ),where 𝐲_t-1 =𝒴_t-1. Since the posterior distribution is not tractable in closed form, we adopt an MCMC procedure based on Gibbs sampling. The technical details of the derivation of the posterior distributions are given in Appendix <ref>. We articulate the sampler in three main blocks: * sample the global and middle variance hyper-parameters of the marginals, fromp(ψ_r|ℬ,𝐖,α)∝GiG( α -I_0/2, 2b_τ, 2C_r ) p(τ|ℬ,𝐖,ϕ)∝GiG( a_τ -R I_0/2, 2b_τ, 2∑_r=1^R C_r/ϕ_r ),where C_r = ∑_j=1^J β_j^(r)' W_j,r^-1β_j^(r), then set ϕ_r = ψ_r/∑_l=1^Rψ_l. 
For improving the mixing, we sample τ with a Hamiltonian Monte Carlo (HMC) step (<cit.>). * sample the hyper-parameters of the local variance component of the marginals and the marginals themselves, fromp( λ_j,r|β_j^(r),ϕ_r,τ) ∝𝒢a ( a_λ +I_j, b_λ + ‖β_j^(r)‖_1 (τϕ_r)^-1/2) p( w_j,r,p|λ_j,r,ϕ_r,τ,β_j^(r)) ∝GiG( 1/2,λ_j,r^2, (β_j,p^(r))^2/(τϕ_r) )p( β_j^(r)|β_-j^(r),ℬ_-r,W_j,r,ϕ_r,τ,𝐘,Σ_1,…,Σ_3 ) ∝𝒩_I_j(μ̅_β_j,Σ̅_β_j).* sample the covariance matrices and the latent scale, respectively, fromp(Σ_j|ℬ,𝐘,Σ_-j,γ)∝ℐ𝒲_I_j(ν_j + I_j,γΨ_j + S_j) p(γ|Σ_1,…,Σ_3)∝𝒢a ( a_γ + ∑_j=1^3 ν_j I_j, b_γ + ∑_j=1^3 tr(Ψ_j Σ_j^-1) ).§ APPLICATION TO MULTILAYER DYNAMIC NETWORKS We apply the proposed methodology to study jointly the dynamics of international trade and credit networks. The international trade network has been previously studied by several authors (e.g., <cit.>, <cit.>), but to the best of our knowledge, this is the first attempt to model the dynamics of two networks jointly. The bilateral trade data come from the COMTRADE database, whereas the data on bilateral outstanding capital come from the Bank of International Settlements database. Our sample of yearly observations for 10 countries runs from 2003 to 2016. At each time t, the 3-order tensor 𝒴_t has size (10,10,2) and represents a 2-layer node-aligned network (or multiplex) with 10 vertices (countries), where each edge is given by a bilateral trade flow or financial stock. See the supplement for a description of the data.We estimate the tensor autoregressive model in eq. (<ref>), using the prior structure described in section <ref>, running the Gibbs sampler for N=100,000 iterations after 30,000 burn-in iterations. We retain every second draw for posterior inference.The mode-4 matricization of the estimated coefficient tensor, B̂_(4), is shown in the left panel of Fig. <ref>. The (i,j)-th entry of the matrix B̂_(4) reports the impact of edge j on edge i (in vectorised form[For example, j=21 and i=4 corresponds to the coefficient of entry 𝒴_1,3,1,t-1 on 𝒴_4,1,1,t.]). The first 100 rows/columns correspond to the edges in the first layer. Hence, two rows of the matricized coefficient tensor are similar when two edges are affected by all the edges of the (lagged) network in a similar way, whereas two similar columns identify the situation where two edges impact the (next period) network in a similar way. The overall distribution of the estimated entries of B̂_(4) is symmetric around zero and leptokurtic, as a consequence of the shrinkage to zero of the estimated coefficients. The right panel of Fig. <ref> shows the log-spectrum of B̂_(4). As all eigenvalues of B̂_(4) have modulus smaller than one, we conclude that the estimated ART(1) model is stationary[It can be shown that the stationarity of the mode-4 matricised coefficient tensor implies stationarity of the ART(1) process.]. Fig. <ref> shows the estimated covariance matrices. In all cases, the highest values correspond to individual variances, while the estimated covariances are lower in magnitude and heterogeneous. We also find evidence of heterogeneity in the dependence structure, since Σ_1, which captures the covariance between rows (i.e., exporting and creditor countries), differs from Σ_2, which describes the covariance between columns (i.e., importing and debtor countries). With few exceptions, estimated covariances are positive.
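Before turning to the impulse responses, the stationarity diagnostic used above—rebuilding B̂_(4) from the estimated PARAFAC marginals and inspecting the moduli of its eigenvalues—can be sketched in a few lines of Python/NumPy. Here b1_hat, ..., b4_hat are placeholders for posterior summaries of the marginals; they are not outputs provided by the paper.

import numpy as np

def mode4_matricization(b1, b2, b3, b4):
    # mat_4(B) = sum_r beta_4^(r) (beta_3^(r) kron beta_2^(r) kron beta_1^(r))' from the PARAFAC marginals
    R = b1.shape[1]
    return sum(np.outer(b4[:, r], np.kron(b3[:, r], np.kron(b2[:, r], b1[:, r]))) for r in range(R))

def spectrum_check(B4):
    moduli = np.abs(np.linalg.eigvals(B4))
    return moduli.max() < 1.0, np.log(np.sort(moduli)[::-1])   # stationarity flag and log-spectrum

# usage (placeholders): B4_hat = mode4_matricization(b1_hat, b2_hat, b3_hat, b4_hat)
#                       stationary, log_spec = spectrum_check(B4_hat)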
After estimating the ART(1) model (<ref>), we may investigate shock propagation across the network by computing the generalised and orthogonalised impulse response functions presented in equations (<ref>) and (<ref>), respectively. Impulse responses allow us to analyze the propagation of shocks both across the network, within and across layers, and over time. For illustration, we study the responses to a shock in all edges of a country, by applying a block Cholesky factorisation to Σ, in such a way that the shocked country contemporaneously affects all others and not vice-versa.[To save space, we do not report generalised IRFs, which are very similar to the ones presented.] Thus, the matrices A and C in eq. (<ref>) reflect contemporaneous correlations across transactions of the shock-originating country and with transactions of all other countries, respectively. For expositional convenience, we report only statistically significant responses.In the first analysis we consider a negative 1% shock to US trade imports[That is, we allocate the shock across import originating countries to match import shares as in the last period of the sample.]. The results of the block Cholesky IRF at horizon 1 are given in Fig. <ref>. We report the impact on the whole network (panel (a)) and, for illustrative purposes, the impact on Germany's transactions.The main findings follow.Global effect on the network. The negative shock to US imports has an effect on both layers (trade and financial) of the network. There is evidence of heterogeneous responses across countries and country-specific transactions. On average, trade flows exhibit a slight expansion in response to the shock. Switzerland is the most positively affected, both in terms of exports and imports, and trade imports of the US show (on average) a reverted positive response one period after the shock. This reflects an oscillating impulse response. The overall average effect on the financial layer is negative, similar in magnitude to the effect on the trade layer. More specifically, we observe that Denmark's and Sweden's exports to Switzerland, Germany and France show a contraction, whereas the effect on US', Japan's and Ireland's exports to these countries is positive. We may interpret these effects as substitution effects: The decreasing share of Denmark's and Sweden's exports to Switzerland, Germany and France is offset by an increase of US, Japanese and Irish exports. In conclusion, the dynamic model can be used for predicting possible trade creation and diversion effects (e.g., see <cit.>).Local effect on Germany. In panel (b) of Fig. <ref> we report the response of Germany's transactions to the negative shock in US imports. The effects on imports are mixed: while Germany's imports from most other EU countries increase, imports from Sweden and Denmark decrease. Likewise, Germany's exports show heterogeneous responses, whereby exports to Switzerland react strongest (positively). The shock to US imports does not have a significant impact on Germany's outstanding credit against most countries (except Switzerland and Japan). On the other hand, the reactions of Germany's outstanding debt reflect those on trade imports.Local effect on other countries. We observe that the most affected trade transactions are those of Denmark, Japan, Ireland, Sweden and US (as exporters) vis-à-vis Switzerland and France (as importers). The financial layer mirrors these effects with opposite sign, while the magnitudes are comparable.
Outstanding credit of Ireland and Japan to Switzerland, Germany and France decrease at horizon 1. By contrast, Denmark's outstanding credit to these countries increases. Note that outstanding debt of US vis-á-vis almost all countries decreases after the shock. Overall, responses to a shock on US imports at horizon 1 are heterogeneous in sign but rather low in magnitude, whereas at horizon 2 (plot not reported) the propagation of the shock has vanished. We interpret this as a sign of fast (and monotone) decay of the IRF. Fig. <ref> shows the block Cholesky IRF at horizon h=1,2, resulting from a negative 1% shock to GB's outstanding debt[Again, the shock is allocated across countries to reflect country-specific shares of the last period in the sample.]. The main findings follow.Global effect on the network. We observe heterogeneous effects across countries. Effects on the trade layer at horizon 1 are equally heterogeneous, but smaller in magnitude compared with the financial layer.Local effect on Germany. Compared with other countries, the shock has smaller effects on Germany's trade. The negative shock to GB's outstanding debt has a negative impact on Germany's exports and imports to all countries but Ireland and Sweden for exports and Denmark for imports. Germany's outstanding credit increases vis-á-vis Denmark, GB, Japan and US. Germany's outstanding debt increases against all countries but Denmark and Sweden, in particular against France, Japan and Ireland. At horizon 2 responses are not reverted, butnearly all effects turn insignificant, providing evidence of monotone and fast decay of the IRFs.Local effect on other countries. On the trade layer at horizon 1, we observe a positive response in Denmark's exports and on average a negative response of Switzerland's, Ireland's and Japan's exports. France and Sweden are the most affected countries on the financial layer: The increase in outstanding credit of France towards Germany, Denmark and GB is counterbalanced by a reduction in Sweden's outstanding credit towards the same countries. We observe reverse effects concerning France's and Sweden's outstanding credit towards Switzerland and Ireland. Finally, Ireland's outstanding credit reacts positively towards most other countries. Compared with responses to the shock to US imports, the persistence of a negative shock to GB's outstanding debt is slightly stronger, see impulse responses at horizon 2 in Fig.<ref>. The decay is monotonic. However, the speed of decay is heterogeneous across countries. For some countries, there are small effects at horizon 2, while for others the effects are completely wiped already. Overall, we do not find evidence of a relation between the size of a country in terms of exports or outstanding credit and the persistence in the impulse response. At the most, persistence seems determined by the origin of the shock, the effects of a financial shock being more persistent than those of a trade shock. Finally, in Fig. <ref> we plot the block Cholesky IRF, respectively, at horizon h=1,2, resulting from a 1% negative shock to GB's outstanding debt coupled with a 1% positive shock to GB's outstanding credit. The main findings follow.Global effect on the network. The results remarkably differ from the previous ones (see Fig. <ref>). The responses to this simultaneous shock in GB's outstanding debt and credit are larger, in particular in the trade layer. However, already at horizon 2 responses are nearly fully decayed. The results in Fig. <ref> and Fig. 
<ref> suggest that an increase in GB's outstanding credit has an overall positive effect on trade, stimulating export/import activities of most other countries.Local effect on Germany. One period after the shock, we observe an overall positive effect on German exports, the exception being towards GB, Ireland and Sweden. Imports react mostly positively. Imports from US and Ireland react most, while those from Denmark react negatively. The responses of Germany's outstanding debt vis-à-vis most countries but Denmark and Sweden are negative, especially against France. At horizon 2 Germany's responses have nearly faded away, suggesting a rapid monotone decay of the shock's effect.Local effect on other countries. In particular, the reactions of Switzerland's imports and outstanding debt are strikingly different from the previous case, compare with Fig. <ref>. Imports from US and Ireland, and to a lesser extent from France and Austria, are strongly boosted, while those from Denmark and Sweden decrease strongly. Moreover, we note that Japan's outstanding debt increases significantly against most countries. We interpret this as a signal of Japan's attractiveness for foreign capital. Compared with the previous exercise, France's financial responses are now mostly insignificant, or of opposite sign. Finally, the reactions of GB's exports and outstanding credit are heterogeneous, the latter ones being larger in absolute magnitude.§ CONCLUSIONS We defined a new statistical framework for dynamic tensor regression. It is a generalisation of many models frequently used in time series analysis, such as VAR, panel VAR, SUR and matrix regression models. The PARAFAC decomposition of the tensor of regression coefficients allows us to reduce the dimension of the parameter space and also permits the choice of flexible multivariate prior distributions, instead of multidimensional ones. Overall, this allows us to encompass sparsity beliefs and to design efficient algorithms for posterior inference.The proposed methodology has been used for analysing the temporal evolution of the international trade and financial network, and the investigation has been complemented with an impulse response analysis. We have found evidence of (i) wide heterogeneity in the sign and magnitude of the estimated coefficients; (ii) stationarity of the network process. The impulse response analysis has highlighted the role of network topology in shock propagation across countries and over time. Irrespective of its origin, any shock is found to propagate between layers, but financial shocks are more persistent than those on international trade. Moreover, we do not find evidence of a relation between the size of a country, expressed by the total trade or capital exports, and the persistence of its response to a shock. Finally, we have found evidence of substitution effects in response to the shocks, meaning that pairs of countries experience opposite effects from a shock to another country. In conclusion, our dynamic model can be used for predicting possible trade creation and diversion effects. § SUPPLEMENTARY MATERIALSupplementary material including background results on tensors, the derivation of the posterior, simulation experiments and the description of the data is available online[<https://matteoiacopini.github.io/docs/BiCaIaKa_Supplement.pdf>].plain10Abadir97MixedNormal_Meijer-G-function Karim M Abadir and Paolo Paruolo. Two mixed normal densities from cointegration analysis.
Econometrica, pages 671–680, 1997.Abraham12Tensor_Physics Ralph Abraham, Jerrold E Marsden, and Tudor Ratiu. Manifolds, tensor analysis, and applications. Springer Science & Business Media, 2012.Aldasoro16MultiplexNetwork Iñaki Aldasoro and Iván Alves. Multiplex interbank networks and systemic importance: an application to European data. Journal of Financial Stability, 35:17–37, 2018.Anacleto17DynamicChainGraph_NetworkTimeSeries Osvaldo Anacleto and Catriona Queen. Dynamic chain graph models for time series network data. Bayesian Analysis, 12(2):491–509, 2017.Andrade17G-Meijer_prior_posterior JAA Andrade and PN Rathie. Exact posterior computation in non-conjugate gaussian location-scale parameters models. Communications in Nonlinear Science and Numerical Simulation, 53:111–129, 2017.Andrade15H-functions_prior_posterior JAA Andrade and Pushpa Narayan Rathie. On exact posterior distributions using h-functions. Journal of Computational and Applied Mathematics, 290:459–475, 2015.Aris12Tensor_Mechanics Rutherford Aris. Vectors, tensors and the basic equations of fluid mechanics. Courier Corporation, 2012.Balazsi15MultidimensionalPanel Laszlo Balazsi, Laszlo Matyas, and Tom Wansbeek. The estimation of multidimensional fixed effects panel data models. Econometric Reviews, pages 1–23, 2015.Baltagi15Hedonic_House_price Badi H Baltagi, Georges Bresson, and Jean-Michel Etienne. Hedonic housing prices in paris: An unbalanced spatial lag pseudo-panel model with nested random effects. Journal of Applied Econometrics, 30(3):509–528, 2015.Basturk17NearBoundary_ReducedRank Nalan Baştürk, Lennart Hoogerheide, and Herman K van Dijk. Bayesian analysis of boundary and near-boundary evidence in econometric models with reduced rank. Bayesian Analysis, 12(3):879–917, 2017.Bayer16ECTA_Dynamic_Demand_House Patrick Bayer, Robert McMillan, Alvin Murphy, and Christopher Timmins. A dynamic model of demand for houses and neighborhoods. Econometrica, 84(3):893–942, 2016.Behera19DrazinInverse_tensor_even Ratikanta Behera, Ashish Kumar Nandi, and Jajati Keshari Sahoo. Further results on the drazin inverse of even order tensors. arXiv preprint arXiv:1904.10783, 2019.BattDuns15 Anirban Bhattacharya, Debdeep Pati, Natesh S. Pillai, and David B. Dunson. Dirichlet-Laplace priors for optimal shrinkage. Journal of the American Statistical Association, 110(512):1479–1490, 2015.Bikker10trade_substitution_effect Jacob A Bikker. The gravity model in international trade: advances and applications, chapter An extended gravity model with substitution applied to international trade, pages 135–164. Cambridge University Press, 2010.Brazell13Solving_MultilinearSystem Michael Brazell, Na Li, Carmeliza Navasca, and Christino Tamon. Solving multilinear systems via tensor inversion. SIAM Journal on Matrix Analysis and Applications, 34(2):542–570, 2013.CanovaCiccarelli04PanelVAR Fabio Canova and Matteo Ciccarelli. Forecasting and turning point predictions in a Bayesian panel VAR model. Journal of Econometrics, 120(2):327–359, 2004.CanovaCiccarelli09pVAR Fabio Canova and Matteo Ciccarelli. Estimating multicountry VAR models. International Economic Review, 50(3):929–959, 2009.CanovaCiccarelliOrtega07pVAR Fabio Canova, Matteo Ciccarelli, and Eva Ortega. Similarities and convergence in G-7 cycles. Journal of Monetary Economics, 54(3):850–878, 2007.Carrieroetal16Multivariate_AR_Index Andrea Carriero, George Kapetanios, and Massimiliano Marcellino. Structural analysis with multivariate autoregressive index models. 
Journal of Econometrics, 192(2):332 – 348, 2016.Carvetal07HIW_Graph Carlos M. Carvalho, Hélène Massam, and Mike West. Simulation of hyper-inverse Wishart distributions in graphical models. Biometrika, 94(3):647–659, 2007.CarvWest07DynMatNormGraph Carlos M Carvalho and Mike West. Dynamic matrix-variate graphical models. Bayesian Analysis, 2(1):69–97, 2007.Chen19Matrix_DynamicFactor Elynn Y Chen, Ruey S Tsay, and Rong Chen. Constrained factor models for high-dimensional matrix-variate time series. Journal of the American Statistical Association, ((forthcoming)), 2019.Cichocki14BigData_Tensor Andrzej Cichocki. Era of Big data processing: a new approach via tensor networks and tensor decompositions. In Proceedings of the International Workshop on Smart Info-Media Systems in Asia (SISA2013), 2014.Davis02Multi-way_error_Panel Peter Davis. Estimating multi-way error components models with unbalanced data structures. Journal of Econometrics, 106(1):67–95, 2002.Dawid93HyperMarkov_decomposableGraphs A Philip Dawid and Steffen L Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21(3):1272–1317, 1993.dePaula17Econometrics_Networks Aureo De Paula. Econometrics of network models. In Advances in Economics and Econometrics: Theory and Applications, Eleventh World Congress, pages 268–323. Cambridge University Press Cambridge, 2017.DingCook18MatrixReg Shanshan Ding and R Dennis Cook. Matrix-variate regressions and envelope models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(2):387–408, 2018.Dobra15TensorKalmanFilter Adrian Dobra. Handbook of Spatial Epidemiology, chapter Graphical Modeling of Spatial Health Data. Chapman & Hall /CRC, first edition, 2015.DuranteDunson14BNP_DynNet Daniele Durante and David B Dunson. Nonparametric Bayesian dynamic modelling of relational data. Biometrika, 101(4):883–898, 2014.Eaton02ECTA_trade Jonathan Eaton and Samuel Kortum. Technology, geography, and trade. Econometrica, 70(5):1741–1779, 2002.EngleGranger87Cointegration_VECM Robert F Engle and Clive WJ Granger. Co-integration and error correction: representation, estimation, and testing. Econometrica, pages 251–276, 1987.Fieler11ECTA_COMTRADEdata Ana Cecilia Fieler. Nonhomotheticity and bilateral trade: Evidence and a quantitative explanation. Econometrica, 79(4):1069–1101, 2011.George97SpikeSlabPrior Edward I George and Robert E McCulloch. Approaches for Bayesian variable selection. Statistica Sinica, 7:339–373, 1997.Golosnoy12conditional_Wishart_AR Vasyl Golosnoy, Bastian Gribisch, and Roman Liesenfeld. The conditional autoregressive wishart model for multivariate stock market volatility. Journal of Econometrics, 167(1):211–223, 2012.Gourieroux09Wishart_AR Christian Gouriéroux, Joann Jasiak, and Razvan Sufana. The wishart autoregressive process of multivariate stochastic volatility. Journal of Econometrics, 150(2):167–181, 2009.GuhaniyogiDunson17BayesTensorReg Rajarshi Guhaniyogi, Shaan Qamar, and David B Dunson. Bayesian tensor regression. Journal of Machine Learning Research, 18(79):1–31, 2017.GuptaNagar99MatrixDistributions Arjun K Gupta and Daya K Nagar. Matrix variate distributions. CRC Press, 1999.Hackbusch12Tensor_book Wolfgang Hackbusch. Tensor spaces and numerical tensor calculus. Springer Science & Business Media, 2012.HarrisonWest99BayesForecastDLM Jeff Harrison and Mike West. Bayesian forecasting & dynamic models. Springer, 1999.Hoff11SeparableCovArray_Tucker Peter D Hoff. 
Separable covariance arrays via the Tucker product, with applications to multivariate relational data. Bayesian Analysis, 6(2):179–196, 2011.Hoff15 Peter D Hoff. Multilinear tensor regression for longitudinal relational data. The Annals of Applied Statistics, 9(3):1169–1193, 2015.Holme12TemporalNetworks Petter Holme and Jari Saramäki. Temporal networks. Physics Reports, 519(3):97–125, 2012.ImaizumiHayashi16Tensor_BNP Masaaki Imaizumi and Kohei Hayashi. Doubly decomposing nonparametric tensor regression. In International Conference on Machine Learning, pages 727–736, 2016.Ishwaran05SpikeSlabPrior Hemant Ishwaran and J Sunil Rao. Spike and slab variable selection: frequentist and Bayesian strategies. The Annals of Statistics, 33(2):730–773, 2005.KoldaBader09 Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.Koop96Generalized_ImpulseResponse Gary Koop, M Hashem Pesaran, and Simon M Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, 1996.Kostakos09Temporal_Graphs Vassilis Kostakos. Temporal graphs. Physica A: Statistical Mechanics and its Applications, 388(6):1007–1023, 2009.Kroonenberg08AppliedMultiwayDataAnalysis Pieter M Kroonenberg. Applied multiway data analysis. John Wiley & Sons, 2008.LeeChi16 Namgil Lee and Andrzej Cichocki. Fundamental tensor operations for large-scale data analysis in tensor train formats. Multidimensional Systems and Signal Processing, 29(3):921–960, 2018.LiZhang17 Lexin Li and Xin Zhang. Parsimonious tensor response regression. Journal of the American Statistical Association, 112(519):1131–1146, 2017.Lutkepohl05VAR_book Helmut Lütkepohl. New introduction to multiple time series analysis. Springer Science & Business Media, 2005.Mitchell88SpikeSlab_priors Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.Neal11HamiltonianMC Radford M Neal. MCMC using Hamiltonian dynamics. In Steve Brooks, Andrew Gelman, Jones L Galin, and Xiao-Li Meng, editors, Handbook of Markov Chain Monte Carlo, chapter 5. Chapman & Hall /CRC, 2011.Ohlson13TensorNormal Martin Ohlson, M Rauf Ahmad, and Dietrich Von Rosen. The multilinear normal distribution: introduction and some basic properties. Journal of Multivariate Analysis, 113:37–47, 2013.Park08BayesianLasso Trevor Park and George Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.Pesaran98Generalized_ImpulseResponse H Hashem Pesaran and Yongcheol Shin. Generalized impulse response analysis in linear multivariate models. Economics Letters, 58(1):17–29, 1998.Poledna15MultiplexNetworkBanks_SystemicRisk Sebastian Poledna, José Luis Molina-Borboa, Serafín Martínez-Jaramillo, Marco Van Der Leij, and Stefan Thurner. The multi-layer network nature of systemic risk and its implications for the costs of financial crises. Journal of Financial Stability, 20:70–81, 2015.SchotmanVanDijk91BayesUnitRoot Peter Schotman and Herman K Van Dijk. A Bayesian analysis of the unit root in real exchange rates. Journal of Econometrics, 49(1-2):195–238, 1991.Shin19Multi-dim_Heterog_Panel Yongcheol Shin, Laura Serlenga, and George Kapetanios. Estimation and inference for multi-dimensional heterogeneous panel datasets with hierarchical multi-factor error structure. Journal of Econometrics, (forthcoming), 2019.Sims98BayesDynamicMultivariate Christopher A Sims and Tao Zha. 
Bayesian methods for dynamic multivariate models. International Economic Review, 39(4):949–968, 1998.Uhlig97Bayes_VAR_SV Harald Uhlig. Bayesian vector autoregressions with stochastic volatility. Econometrica, 65:59–73, 1997.Viroli11MatNorm Cinzia Viroli. Finite mixtures of matrix normal distributions for classifying three-way data. Statistics and Computing, 21(4):511–522, 2011.Wang09Bayes_matrixNormalGraph Hao Wang and Mike West. Bayesian analysis of matrix normal graphical models. Biometrika, 96(4):821–834, 2009.XuZhangetal13 Tan Xu, Zhang Yin, Tang Siliang, Shao Jian, Wu Fei, and Zhuang Yueting. Logistic tensor regression for classification, volume 7751, chapter Intelligent science and intelligent data engineering, pages 573–581. Springer, 2013.Zellner62SUR Arnold Zellner. An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, 57(298):348–368, 1962.Zhao13Tensor_BNP Qibin Zhao, Liqing Zhang, and Andrzej Cichocki. A tensor-variate Gaussian process for classification of multidimensional structured data. In Twenty-seventh AAAI conference on artificial intelligence, 2013.Zhao14Tensor_BNP Qibin Zhao, Guoxu Zhou, Liqing Zhang, and Andrzej Cichocki. Tensor-variate Gaussian processes regression and its application to video surveillance. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 1265–1269. IEEE, 2014.Zhouetal13 Hua Zhou, Lexin Li, and Hongtu Zhu. Tensor regression with applications in neuroimaging data analysis. Journal of the American Statistical Association, 108(502):540–552, 2013.ZhouBattDuns15 Jing Zhou, Anirban Bhattacharya, Amy H. Herring, and David B. Dunson. Bayesian factorizations of big sparse tensors. Journal of the American Statistical Association, 110(512):1562–1576, 2015.Zhu17Network_VAR Xuening Zhu, Rui Pan, Guodong Li, Yuewen Liu, and Hansheng Wang. Network vector autoregression. The Annals of Statistics, 45(3):1096–1123, 2017.Zhu19Network_Quantile_regression Xuening Zhu, Weining Wang, Hansheng Wang, and Wolfgang Karl Härdle. Network quantile autoregression. Journal of Econometrics, (forthcoming), 2019.§ BACKGROUND MATERIAL ON TENSOR CALCULUS This appendix provides the main tools used in the paper. See the supplement for further results and details. A N-order tensor is an element of the tensor product of N vector spaces. Since there exists a isomorphism between two vector spaces of dimensions N and M<N, it is possible to define a one-to-one map between their elements, that is, between a N-order tensor and a M-order tensor.Let V_1,…,V_N and U_1,…,U_M be vector subspaces V_n, U_m ⊆ and 𝒳∈^I_1×…× I_N = V_1 ⊗…⊗ V_N be a N-order real tensor of dimensions I_1,…,I_N. Let (𝐯_1,…,𝐯_N) be a canonical basis of ^I_1×…× I_N and let Π_S be the projection defined asΠ_S : V_1 ⊗…⊗ V_N → V_s_1⊗…⊗ V_s_k𝐯_1 ⊗…⊗𝐯_N ↦𝐯_s_1⊗…⊗𝐯_s_kwith S = { s_1,…,s_k }⊂{ 1,…,N }. Let (S_1,…,S_M) be a partition of { 1,…,N }. The (S_1,…,S_M) tensor reshaping of 𝒳 is defined as 𝒳_(S_1,…,S_M) = (Π_S_1𝒳) ⊗…⊗ (Π_S_M𝒳) = U_1 ⊗…⊗ U_M. The mapping is an isomorphism between V_1 ⊗…⊗ V_N and U_1 ⊗…⊗ U_M. The matricization is a particular case of reshaping a N-order tensor into a 2-order tensor, by choosing a mapping between the tensor modes and the rows and columns of the resulting matrix, then permuting the tensor and reshaping it, accordingly.Let 𝒳 be a N-order tensor with dimensions I_1,…,I_N. 
Let the ordered sets ℛ = { r_1,…,r_L } and 𝒞 = { c_1,…,c_M } be a partition of 𝐍 = { 1,…,N }.The matricized tensor is defined bymat_ℛ, 𝒞(𝒳) = 𝐗_(ℛ, 𝒞)∈^J× K,J= ∏_n∈ℛ I_n,K= ∏_n∈𝒞 I_n .Indices of ℛ,𝒞 are mapped to the rows and the columns, respectively, and( 𝐗_(ℛ×𝒞))_j,k = 𝒳_i_1,i_2,…,i_N,j= 1+∑_l=1^L ( (i_r_l-1) ∏_l'=1^l-1 I_r_l'),k= 1+∑_m=1^M ( (i_c_m-1) ∏_m'=1^m-1 I_c_m'). The inner product between two (I_1×…× I_N)-dimensional tensors 𝒳,𝒴 is defined as⟨𝒳, 𝒴⟩ = ∑_i_1=1^I_1…∑_i_N=1^I_N𝒳_i_1,…,i_N𝒴_i_1,…,i_NThe PARAFAC(R) decomposition (e.g., see <cit.>), is rank-R decomposition which represents a tensor ℬ∈^I_1×…× I_N as a finite sum of R rank-1 tensors obtained as the outer products of N vectors (called marginals) β_j^(r)∈^I_jℬ = ∑_r=1^R ℬ_r = ∑_r=1^R β_1^(r)∘…∘β_J^(r). Let 𝒳∈^I_1×…× I_N and 𝒴∈^J_1×…× J_N × J_N+1×…× J_N+P. Let (𝒮_1,𝒮_2) be a partition of { 1,…,N+P }, where 𝒮_1 = { 1,…,N }, 𝒮_2 = { N+1,…,N+P }. It holds: * if P=0 and I_n = J_n, n=1,…,N, then 𝒳×̅_N 𝒴 = ⟨𝒳, 𝒴⟩ = 𝒳' ·𝒴. * if P>0 and I_n = J_n for n=1,…,N, then𝒳×̅_N 𝒴= 𝒳×_1 𝒴_(𝒮_1,𝒮_2)∈^j_1×…× j_P 𝒴×̅_N 𝒳= 𝒴_(𝒮_1,𝒮_2)×_1 𝒳∈^j_1×…× j_P.* let ℛ={ 1,…,N } and 𝒞={ N+1,…,2N }. If P=N and I_n = J_n = J_N+n, n=1,…,N, then𝒳×̅_N 𝒴×̅_N 𝒳 = 𝒳' 𝐘_(ℛ, 𝒞)𝒳.* let M=N+P, then 𝒳∘𝒴 = 𝒳×̅_1 𝒴^T, where 𝒳,𝒴 are (I_1×…× I_N× 1)- and (J_1×…× J_M× 1)-dimensional tensors, respectively, given by 𝒳_:,…,:,1 = 𝒳, 𝒴_:,…,:,1 = 𝒴 and 𝒴^T_j_1,…,j_M,j_M+1 = 𝒴_j_M+1,j_M,…,j_1.Case (i). By definition of contracted product and tensor scalar product𝒳×̅_N 𝒴= ∑_i_1=1^I_1…∑_i_N=1^I_N𝒳_i_1,…,i_N𝒴_i_1,…,i_N = ⟨𝒳, 𝒴⟩ = 𝒳' ·𝒴.Case (ii). Define I^* = ∏_n=1^N I_n and k=1+∑_j=1^N (i_j-1) ∏_m=1^j-1 I_m. By definition of contracted product and tensor scalar product𝒳×̅_N 𝒴= ∑_i_1=1^I_1…∑_i_N=1^I_N𝒳_i_1,…,i_N𝒴_i_1,…,i_N,j_N+1,…,j_N+P = ∑_k=1^I^*𝒳_k𝒴_k,j_N+1,…,j_N+P.Note that the one-to-one correspondence established by the mapping between k and (i_1,…,i_N) corresponds to that of the vectorization of a (I_1×…× I_N)-dimensional tensor. It also corresponds to the mapping established by the tensor reshaping of a (N+P)-order tensor with dimensions I_1,…,I_N,J_N+1,…,J_N+P into a (P+1)-order tensor with dimensions I^*,J_N+1,…,J_N+P. Let 𝒮_1 = { 1,…,N }, then𝒳×̅_N 𝒴 = ∑_i_1=1^I_1…∑_i_N=1^I_N𝒳_i_1,…,i_N𝒴_i_1,…,i_N,:,…,: = ∑_s_1=1^|𝒮_1|𝐱_s_1𝒴̅_s_1,:,…,:where 𝒴̅ = reshape_(𝒮_1,N+1,…,N+P)(𝒴). Following the same approach, and defining 𝒮_2 = { N+1,…,N+P }, we obtain the second part of the result.Case (iii). We follow the same strategy adopted in case b). Let 𝐱=𝒳, S_1 = { 1,…,N} and S_2 = { N+1,…,N+P }, such that (S-1,S_2) is a partition of { 1,…,N+P }. Let k,k' be defined as in case b). Then𝒳×̅_N 𝒴×̅_N 𝒳 = ∑_i_1=1^I_1…∑_i_N=1^I_N∑_i_1'=1^I_1…∑_i_N'=1^I_N𝒳_i_1,…,i_N𝒴_i_1,…,i_N,i_1',…,i_N'𝒳_i_1',…,i_N' = ∑_k=1^I^*∑_i_1'=1^I_1…∑_i_N'=1^I_N𝐱_k𝒴_k,i_1',…,i_N'𝒳_i_1',…,i_N'= ∑_k=1^I^*∑_k'=1^I^*𝐱_k𝒴_k,k'𝐱_k' = 𝒳' 𝒴_(S_1,S_2)𝒳.Case (iv).Let 𝐢 = (i_1,…,i_N) and 𝐣 = (j_1,…,j_M) be two multi-indexes. By the definition of outer and contracted product we get (𝒳∘𝒴)_𝐢,𝐣 = 𝒳_𝐢,1𝒴_1,𝐣 = (𝒳×̅_1 𝒴^T)_𝐢,𝐣. Therefore, with a slight abuse of notation, we use 𝒴 = 𝒴 and write 𝒴∘𝒴 = 𝒴×̅_1 𝒴^T, when the meaning of the products is clear form the context.Let X_n be a I_n× I_n matrix, for n=1,…,N, and let 𝒳 = X_1 ∘…∘ X_N be the (I_1×…× I_N× I_1×…× I_N)-dimensional tensor obtained as the outer product of the matrices X_1,…,X_N. Let (𝒮_1,𝒮_2) be a partition of I_𝐍 = { 1,…,2N }, where 𝒮_1 = { 1,…,N } and 𝒮_2 = { N+1,…,N }. Then 𝒳_(𝒮_1,𝒮_2) = 𝐗_(ℛ, 𝒞) = (X_N ⊗…⊗ X_1). 
Use the pair of indices (i_n,i_n') for the entries of the matrix X_n, n=1,…,N. By definition of outer product (X_1 ∘…∘ X_N)_i_1,…,i_N,i_1',…,i_N' = (X_1)_i_1,i_1'·…· (X_N)_i_N,i_N'. By definition of matricization, 𝒳_(𝒮_1,𝒮_2) = 𝐗_(ℛ, 𝒞). Moreover (𝒳_(𝒮_1,𝒮_2))_h,k = 𝒳_i_1,…,i_2N with h = ∑_p=1^N (i_S_1,p -1) ∏_q=1^p-1 J_S_1,p and k = ∑_p=1^N (i_S_2,p -1) ∏_q=1^p-1 J_S_2,p. By definition of the Kronecker product, the entry (h',k') of (X_N ⊗…⊗ X_1) is (X_N ⊗…⊗ X_1)_h',k' = (X_N)_i_N',i_N'·…· (X_1)_i_1,i_1', where h' = ∑_p=1^N (i_S_1,p -1) ∏_q=1^p-1 J_S_1,p and k' = ∑_p=1^N (i_S_2,p -1) ∏_q=1^p-1 J_S_2,p. Since h=h' and k=k' and the associated elements of 𝒳_(𝒮_1,𝒮_2) and (X_N ⊗…⊗ X_1) are the same, the result follows.Let α_1,…,α_n be vectors such that α_i has length d_i, for i=1,…,n. Then, for each j=1,…,n, it holds∘_i=1^n α_i= ⊗_i=1^n α_n-i+1 = ( α_n ⊗…⊗α_j+1⊗𝐈_d_j⊗α_j-1⊗…⊗α_1 ) α_j.The result follows from the definitions of vectorisation operator and outer product. For n=2, the result follows directly fromα_1 ∘α_2 = α_1 α_2' = α_2 ⊗α_1 = (α_2 ⊗𝐈_d_1)α_1 = (𝐈_d_2⊗α_1)α_2.For n > 2 consider, without loss of generality, n=3 (an analogous proof holds for n>3). Then, from the definitions of outer product and Kronecker product we haveα_1 ∘α_2 ∘α_3 == (α_1' ·α_2,1α_3,1, …, α_1' ·α_2,d_2α_3,1, α_1' ·α_2,1α_3,2, …, α_1' ·α_2,d_2α_3,2, …, α_1' ·α_2,d_2α_3,d_3)'= α_3 ⊗α_2 ⊗α_1 = (α_3 ⊗α_2 ⊗𝐈_d_1)α_1 = (α_3 ⊗𝐈_d_2⊗α_1)α_2 = (𝐈_d_3⊗α_2 ⊗α_1)α_3. Denote with L the lag operator, s.t. L 𝒴_t = 𝒴_t-1, by properties of the contracted product in Lemma <ref>, case (iv), we get (ℐ -𝒜_1 L) ×̅_N 𝒴_t = 𝒜_0 + ℬ×̅_M 𝒳_t + ℰ_t. We apply to both sides the operator (ℐ + 𝒜_1 L + 𝒜_1^2 L^2 + … + 𝒜_1^t-1 L^t-1), take t→∞, and getlim_t→∞ (ℐ-𝒜_1^t L^t) ×̅_N 𝒴_t = ( ∑_k=0^∞𝒜_1^k L^k ) ×̅_N (𝒜_0 + ℬ×̅_M 𝒳_t + ℰ_t).From <cit.>, if ρ(𝒜_1) < 1 and 𝒴_0 is finite a.s., then lim_t→∞𝒜_1^t×̅_N 𝒴_0 = 𝒪 and the operator ∑_k=0^∞𝒜_1^k L^k applied to a sequence 𝒴_t s.t. |𝒴_𝐢,t| < c a.s. ∀ 𝐢 converges to the inverse operator (ℐ -𝒜_1 L)^-1. By the properties of the contracted product we get𝒴_t = ∑_k=0^∞𝒜_1^k ×̅_N (L^k 𝒜_0) + ∑_k=0^∞ (𝒜_1^k ×̅_N ℬ) ×̅_M (L^k 𝒳_t) + ∑_k=0^∞𝒜_1^k ×̅_N (L^k ℰ_t)= (ℐ - 𝒜_1 L)^-1×̅_N 𝒜_0 + ∑_k=0^∞𝒜_1^k ×̅_N ℬ×̅_M 𝒳_t-k + ∑_k=0^∞𝒜_1^k ×̅_N ℰ_t-k. From the assumption ℰ_t iid𝒩_I_1,…,I_N(𝒪,Σ_1,…,Σ_N), we know that (𝒴_t) = 𝒴_0, which is finite. Consider the auto-covariance at lag h ≥ 1. From Lemma <ref>, we have ( ( 𝒴_t -(𝒴_t) ) ∘( 𝒴_t-h -(𝒴_t-h) ) ) = ( 𝒴_t ∘𝒴_t-h) = ( 𝒴_t ×̅_1 𝒴_t-h^T ). Using the infinite moving average representation for 𝒴_t, we get( 𝒴_t ×̅_1 𝒴_t-h^T ) = ( ( ∑_k=0^h-1𝒜^k ×̅_N ℰ_t-k + ∑_k=0^∞𝒜^k+h×̅_N ℰ_t-k-h) ×̅_1 ( ∑_k=0^∞𝒜^k ×̅_N ℰ_t-k-h)^T )= ( ( ∑_k=0^∞𝒜^k+h×̅_N ℰ_t-k-h) ×̅_1 ( ∑_k=0^∞ℰ_t-k-h^T ×̅_N (𝒜^T)^k ) ),where we used the assumption of independence of ℰ_t, ℰ_t-h, for any h ≥ 0, and the fact that (𝒳×̅_N 𝒴)^T = (𝒴^T ×̅_N 𝒳^T). Using (ℰ_t) = 𝒪 and linearity of expectation and of the contracted product we get( 𝒴_t ×̅_1 𝒴_t-h^T ) = ∑_k=0^∞𝒜^k+h×̅_N ( ℰ_t-k-h×̅_1 ℰ_t-k-h^T ) ×̅_N (𝒜^T)^k= ∑_k=0^∞𝒜^k+h×̅_N Σ×̅_N (𝒜^T)^k = 𝒜^h×̅_N (ℐ-𝒜×̅_N Σ×̅_N 𝒜^T)^-1,where ( ℰ_t-k-h×̅_1 ℰ_t-k-h^T) = ( ℰ_t-k-h∘ℰ_t-k-h) = Σ = Σ_1 ∘…∘Σ_N.From the assumption ρ(𝒜) < 1 it follows that the above series converges to a finite limit, which is independent from t, thus proving that the process is weakly stationary. 
From Theorem 3.2, Corollary 3.3 of <cit.>, we know thatis a group (called tensor group) and that the matricization operator mat_1:N,1:N is an isomorphism betweenand the linear group of square matrices of size I^* = ∏_n=1^N I_n. Therefore, there exists a one-to-one relationship between the two eigenvalue problems 𝒜×̅_N 𝒳 = λ𝒳 and A𝐱 = λ𝐱, where A = mat_1:N,1:N(𝒜). In particular, λ = λ and 𝐱 = 𝒳. Consequently, ρ(A) = ρ(𝒜) and the result follows for p=1 from the fact that ρ(A) < 1 is a sufficient condition for the VAR(1) stationarity Proposition 2.1 of <cit.>. Since any VAR(p) and ART(p) processes can be rewritten as VAR(1) and ART(1), respectively, on an augmented state space, the result follows for any p ≥ 1. Consider a ART(p) process with 𝒴_t ∈^I_1×…× I_N and p ≥ 1. We define the (pI_1 × I_2 ×…× I_N)-dimensional tensors 𝒴_t and ℰ_t as 𝒴_(k-1)I_1+1:kI_1,:,…,:,t = 𝒴_t-k and ℰ_(k-1)I_1+1:kI_1,:,…,:,t = ℰ_t-k, for k=0,…,p, respectively. Define the (pI_1 × I_2 ×…× I_N × pI_1 × I_2 ×…× I_N)-dimensional tensor 𝒜 as 𝒜_(1:I_1,:,…,:,(k-1)I_1+1:kI_1,:,…,: = 𝒜_k, for k=1,…,p, 𝒜_(kI_1+1:(k+1)I_1,:,…,:,(k-1)I_1+1:kI_1,:,…,: = ℐ, for k=1,…,p-1 and 0 elsewhere. Using this notation, we can rewrite the (I_1 × I_2 ×…× I_N)-dimensional ART(p) process 𝒴_t = ∑_k=1^p 𝒜_k×̅_N 𝒴_t-j + ℰ_t as the (pI_1 × I_2 ×…× I_N)-dimensional ART(1) process 𝒴_t = 𝒜×̅_N 𝒴_t-1 + ℰ_t.§ COMPUTATIONAL DETAILS This appendix shows the derivation of the results. See the supplement for details. §.§ Full conditional distribution of ϕ_rDefine C_r = ∑_j=1^J β_j^(r)' W_j,r^-1β_j^(r) and note that, since ∑_r=1^R ϕ_r =1, it holds ∑_r=1^R b_ττϕ_r = b_ττ. The posterior full conditional distribution of ϕ, integrating out τ, isp(ϕ|ℬ,𝐖) ∝π(ϕ) ∫_0^+∞ p(ℬ|𝐖,ϕ,τ) π(τ) dτ ∝∏_r=1^R ϕ_r^α-1∫_0^+∞( ∏_r=1^R ∏_j=1^J (τϕ_r)^-I_j/2exp( -1/2τϕ_rβ_j^(r)' W_j,r^-1β_j^(r)) ) τ^a_τ-1 e^-b_ττdτ ∝∫_0^+∞( ∏_r=1^R ϕ_r^α-I_0/2-1) τ^( α R -RI_0/2) -1exp( -∑_r=1^R ( C_r/2τϕ_r +b_ττϕ_r ) ) dτwhere the integrand is the kernel of the GiG for ψ_r=τϕ_r in eq. (<ref>). Then, by renormalizing, ϕ_r = ψ_r / ∑_l=1^R ψ_l. §.§ Full conditional distribution of τThe posterior full conditional distribution of τ isp(τ|ℬ,𝐖,ϕ)∝τ^a_τ -1 e^-b_ττ( ∏_r=1^R (τϕ_r)^-I_0/2exp( -12τϕ_r∑_j=1^4 β_j^(r)' (W_j,r)^-1β_j^(r)) ) ∝τ^a_τ -R I_0/2 -1exp( -b_ττ - τ^-1∑_r=1^R C_r/ϕ_r),which is the kernel of the GiG in eq. (<ref>). §.§ Full conditional distribution of λ_j,rThe full conditional distribution of λ_j,r, integrating out W_j,r, isp(λ_j,r|β_j^(r),ϕ_r,τ)∝λ_j,r^a_λ -1 e^-b_λλ_j,r∏_p=1^I_jλ_j,r/2√(τϕ_r)exp( -β_j,p^(r)/ (λ_j,r/√(τϕ_r))^-1) ∝λ_j,r^(a_λ+I_j)-1exp( -( b_λ +β_j^(r)_1/√(τϕ_r)) λ_j,r),which is the kernel of the Gamma in eq. (<ref>). §.§ Full conditional distribution of w_j,r,pThe posterior full conditional distribution of w_j,r,p isp(w_j,r,p|β_j^(r), λ_j,r,ϕ_r,τ)∝ w_j,r,p^-1/2exp( -β_j,p^(r)^2 w_j,r,p^-1/2τϕ_r) exp( -λ_j,r^2 w_j,r,p/2) ∝ w_j,r,p^-1/2exp( -λ_j,r^2/2w_j,r,p -β_j,p^(r)^2/2τϕ_r w_j,r,p^-1),which is the kernel of the GiG in eq. (<ref>).§.§ Full conditional distributions of PARAFAC marginalsConsider the model in eq. (<ref>), it holds𝒴_t= ℬ_-r×_4𝐱_t+ ℬ_r ×_4𝐱_t+ ℰ_t,with ℬ_r ×_4𝐱_t = β_1^(r)∘β_2^(r)∘β_3^(r)·𝐱_t' β_4^(r). From Lemma <ref>, we haveβ_1^(r)∘β_2^(r)∘β_3^(r)·𝐱_t' β_4^(r)= β_1^(r)∘β_2^(r)∘β_3^(r)·𝐱_t' β_4^(r) = 𝐛_4 β_4^(r) = ⟨β_4^(r), 𝐱_t ⟩( β_3^(r)⊗β_2^(r)⊗𝐈_I ) β_1^(r) = 𝐛_1 β_1^(r) = ⟨β_4^(r), 𝐱_t ⟩( β_3^(r)⊗𝐈_J ⊗β_1^(r)) β_2^(r) = 𝐛_2 β_2^(r) = ⟨β_4^(r), 𝐱_t ⟩( 𝐈_K ⊗β_2^(r)⊗β_1^(r)) β_3^(r) = 𝐛_3 β_3^(r). 
Define with 𝐲_t = 𝒴_t and Σ^-1 = Σ_3^-1⊗Σ_2^-1⊗Σ_1^-1, we obtainL(𝐘 |θ) ∝exp( -1/2∑_t=1^T ℰ̃_t' (Σ_3^-1⊗Σ_2^-1⊗Σ_1^-1) ℰ̃_t) ∝exp( -1/2∑_t=1^T -2( 𝐲_t' -ℬ_-r×_4 𝐱_t' ) Σ^-1β_1^(r)∘β_2^(r)∘β_3^(r)⟨β_4^(r), 𝐱_t ⟩ +β_1^(r)∘β_2^(r)∘β_3^(r)' ⟨β_4^(r), 𝐱_t ⟩Σ^-1β_1^(r)∘β_2^(r)∘β_3^(r)⟨β_4^(r), 𝐱_t ⟩).Consider the case j=1. By exploiting eq. (<ref>) we getL(𝐘|θ) ∝exp( -1/2∑_t=1^T β_1^(r)'⟨β_4^(r), 𝐱_t ⟩^2 ( β_3^(r)⊗β_2^(r)⊗𝐈_I_1)' Σ^-1( β_3^(r)⊗β_2^(r)⊗𝐈_I_1)·β_1^(r) -2( 𝐲_t' -ℬ_-r×_4 𝐱_t' ) Σ^-1⟨β_4^(r), 𝐱_t ⟩( β_3^(r)⊗β_2^(r)⊗𝐈_I_1) β_1^(r))= exp( -1/2β_1^(r)'𝐒_1^L β_1^(r) -2𝐦_1^L β_1^(r)).Consider the case j=2. From eq. (<ref>) we getL(𝐘|θ) ∝exp( -1/2∑_t=1^T β_2^(r)'⟨β_4^(r), 𝐱_t ⟩^2 ( β_3^(r)⊗𝐈_I_2⊗β_1^(r)) Σ^-1( β_3^(r)⊗𝐈_I_2⊗β_1^(r)) ·β_2^(r) -2( 𝐲_t' -ℬ_-r×_4 𝐱_t' ) Σ^-1⟨β_4^(r), 𝐱_t ⟩( β_3^(r)⊗𝐈_I_2⊗β_1^(r)) β_2^(r))= exp( -1/2β_2^(r)'𝐒_2^L β_2^(r) -2𝐦_2^L β_2^(r)).Consider the case j=3, by exploiting eq. (<ref>) we getL(𝐘|θ) ∝exp( -1/2∑_t=1^T β_3^(r)'⟨β_4^(r), 𝐱_t ⟩^2 ( 𝐈_I_3⊗β_2^(r)⊗β_1^(r)) Σ^-1( 𝐈_I_3⊗β_2^(r)⊗β_1^(r)) ·β_3^(r) -2( 𝐲_t' -ℬ_-r×_4 𝐱_t' ) Σ^-1⟨β_4^(r), 𝐱_t ⟩( 𝐈_I_3⊗β_2^(r)⊗β_1^(r)) β_3^(r))= exp( -1/2β_3^(r)'𝐒_3^L β_3^(r) -2𝐦_3^L β_3^(r)).Finally, in the case j=4. From eq. (<ref>) we getL(𝐘|θ) ∝exp( -1/2∑_t=1^T -2( 𝐲_t' -ℬ_-r×_4 𝐱_t' ) Σ^-1β_1^(r)∘β_2^(r)∘β_3^(r)·𝐱_t' β_4^(r) +β_4^(r)'𝐱_t β_1^(r)∘β_2^(r)∘β_3^(r)' Σ^-1β_1^(r)∘β_2^(r)∘β_3^(r)𝐱_t' β_4^(r))= exp( -1/2β_4^(r)'𝐒_4^L β_4^(r) -2𝐦_4^L β_4^(r)). §.§.§ Full conditional distribution of β_1^(r)From eq. (<ref>)-(<ref>), the posterior full conditional distribution of β_1^(r) isp(β_1^(r) | -)∝exp( -1/2β_1^(r)'𝐒_1^L β_1^(r) -2𝐦_1^L β_1^(r)) ·exp( -1/2β_1^(r)' (W_1,rϕ_r τ)^-1β_1^(r))= exp( -1/2( β_1^(r)'( 𝐒_1^L + (W_1,rϕ_r τ)^-1) β_1^(r) -2𝐦_1^L β_1^(r)) ),which is the kernel of the Normal in eq. (<ref>).§.§.§ Full conditional distribution of β_2^(r)From eq. (<ref>)-(<ref>), the posterior full conditional distribution of β_2^(r) isp(β_2^(r) | -)∝exp( -1/2β_2^(r)'𝐒_2^L β_2^(r) -2𝐦_2^L β_2^(r)) ·exp( -1/2β_2^(r)' (W_2,rϕ_r τ)^-1β_2^(r))= exp( -1/2( β_2^(r)'( 𝐒_2^L + (W_2,rϕ_r τ)^-1) β_2^(r) -2𝐦_2^L β_2^(r)) ),which is the kernel of the Normal in eq. (<ref>).§.§.§ Full conditional distribution of β_3^(r)From eq. (<ref>)-(<ref>), the posterior full conditional distribution of β_3^(r) isp(β_3^(r) | -)∝exp( -1/2β_3^(r)'𝐒_3^L β_3^(r) -2𝐦_3^L β_3^(r)) ·exp( -1/2β_3^(r)' (W_3,rϕ_r τ)^-1β_3^(r))= exp( -1/2( β_3^(r)'( 𝐒_3^L + (W_3,rϕ_r τ)^-1) β_3^(r) -2𝐦_3^L β_3^(r)) ),which is the kernel of the Normal in eq. (<ref>).§.§.§ Full conditional distribution of β_4^(r)From eq. (<ref>)-(<ref>), the posterior full conditional distribution of β_4^(r) isp(β_4^(r) | -)∝exp( -1/2β_4^(r)'𝐒_4^L β_4^(r) -2𝐦_4^L β_4^(r)) ·exp( -1/2β_4^(r)' (W_4,rϕ_r τ)^-1β_4^(r))= exp( -1/2( β_4^(r)'( 𝐒_4^L + (W_4,rϕ_r τ)^-1) β_4^(r) -2𝐦_4^L β_4^(r)) ),which is the kernel of the Normal in eq. (<ref>). §.§ Full conditional distribution of Σ_1Define ℰ̃_t = 𝒴_t -ℬ×_4 𝐱_t, 𝐄̃_(1),t = mat_(3)(ℰ̃_t), 𝐙_1 = Σ_3^-1⊗Σ_2^-1 and S_1 =∑_t=1^T 𝐄̃_(1),t𝐙_1 𝐄̃_(1),t'. The posterior full conditional distribution of Σ_1 isp(Σ_1 | -)∝exp( -1/2( γΨ_1 Σ_1^-1 + ∑_t=1^T 𝐄̃_(1),t𝐙_1 𝐄̃_(1),t' Σ_1^-1) )/Σ_1^ν_1+I_1+T I_2 I_3+1/2∝Σ_1^-(ν_1+T I_2 I_3)+I_1+1/2exp( -1/2(γΨ_1 +S_1) Σ_1^-1),which is the kernel of the Inverse Wishart in eq. (<ref>). §.§ Full conditional distribution of Σ_2Define ℰ̃_t = 𝒴_t -ℬ×_4 𝐱_t, 𝐄̃_(2),t = mat_(2)(ℰ̃_t) and S_2 = ∑_t=1^T 𝐄̃_(2),t (Σ_3^-1⊗Σ_1^-1) 𝐄̃_(2),t'. 
The posterior full conditional distribution of Σ_2 is p(Σ_2 | -)∝exp( -1/2( γΨ_2 Σ_2^-1 + ∑_t=1^T 𝐄̃_(2),t (Σ_3^-1⊗Σ_1^-1) 𝐄̃_(2),t' Σ_2^-1) )/Σ_2^ν_2+I_2+T I_1 I_3+1/2∝Σ_2^-ν_2+I_2+T I_1 I_3+1/2exp( -1/2γΨ_2 Σ_2^-1 + S_2 Σ_2^-1),which is the kernel of the Inverse Wishart in eq. (<ref>). §.§ Full conditional distribution of Σ_3Define ℰ̃_t = 𝒴_t -ℬ×_4 𝐱_t, 𝐄̃_(1),t = mat_(1)(ℰ̃_t), 𝐙_3 = Σ_2^-1⊗Σ_1^-1 and S_3 =∑_t=1^T 𝐄̃_(1),t𝐙_3 𝐄̃_(1),t'. The posterior full conditional distribution of Σ_3 isp(Σ_3 | -)∝exp( -1/2( γΨ_3 Σ_3^-1 + ∑_t=1^T ℰ̃_t' (Σ_3^-1⊗𝐙_3) ℰ̃_t) )/Σ_3^ν_3+I_3+T I_1 I_2+1/2∝Σ_3^-(ν_3+T I_1 I_2)+I_3+1/2exp( -1/2(γΨ_3 +S_3) Σ_3^-1),which is the kernel of the Inverse Wishart in eq. (<ref>). §.§ Full conditional distribution of γThe posterior full conditional distribution isp(γ | Σ_1, Σ_2, Σ_3)∝∏_i=1^3 γΨ_i^-ν_i/2exp( -1/2γΨ_i Σ_i^-1) γ^a_γ-1 e^-b_γγ∝γ^a_γ-∑_i=1^3 ν_i I_i/2-1exp( -1/2∑_i=1^3 Ψ_i Σ_i^-1 -b_γγ)which is the kernel of the Gamma in eq. (<ref>).
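As an illustration of how the covariance full conditionals above translate into code, the following Python sketch implements the Σ_j updates (step 3 of the sampler) using SciPy's inverse-Wishart sampler. The residual array, the hyper-parameters and the degrees-of-freedom convention ν_j + T I^*/I_j (as in the derivations above) are stated assumptions of this sketch, not a transcription of the authors' implementation; the γ update then follows from its Gamma full conditional in the same way.

import numpy as np
from scipy.stats import invwishart

def update_covariances(E, Sig, nu, Psi, gamma, rng):
    # One Gibbs update of (Sigma_1, Sigma_2, Sigma_3).
    # E   : residual array of shape (T, I1, I2, I3), E[t] = Y_t - B x_4 vec(Y_{t-1})
    # Sig : list of current mode covariances [Sigma_1, Sigma_2, Sigma_3]
    # nu, Psi : prior degrees of freedom and scale matrices; gamma : latent prior scale
    T = E.shape[0]
    dims = E.shape[1:]
    for j in range(3):
        # mode-(j+1) matricization of each residual tensor (columns in column-major order)
        Ej = np.stack([np.moveaxis(E[t], j, 0).reshape(dims[j], -1, order="F") for t in range(T)])
        inv_others = [np.linalg.inv(Sig[k]) for k in range(3) if k != j]
        Z = np.kron(inv_others[1], inv_others[0])     # e.g. Sigma_3^{-1} kron Sigma_2^{-1} for j = 0
        S = sum(Ej[t] @ Z @ Ej[t].T for t in range(T))
        df = nu[j] + T * Z.shape[0]                   # nu_j + T * I^*/I_j
        Sig[j] = invwishart.rvs(df=df, scale=gamma * Psi[j] + S, random_state=rng)
    return Sig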
http://arxiv.org/abs/1709.09606v3
{ "authors": [ "Monica Billio", "Roberto Casarin", "Matteo Iacopini", "Sylvia Kaufmann" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20170927162427", "title": "Bayesian Dynamic Tensor Regression" }
Institut für Nanotechnologie, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany School of Physical Sciences, University of Kent, Canterbury CT27NH, United Kingdom Institut für Nanotechnologie, Karlsruhe Institute of Technology, 76021 Karlsruhe, GermanyPetersburg Nuclear Physics Institute,188350 St. Petersburg, RussiaWe study theoretically the transport of the one-dimensional single-channel interacting electron gas through a strong potential barrier in the parameter regime where the spin sector of the low-energy Luttinger liquid theory is gapped by interaction. This phase is of particular interest since it exhibits non-trivial interaction-induced topological properties. Using bosonization and an expansion in the tunneling strength, we calculate the conductance through the barrier as a function of the temperature as well as the local density of states (LDOS) at the barrier. Our main result concerns the mechanism of bound-state mediated tunneling. The characteristic feature of the topological phase is the emergence of protected zero-energy bound states with fractional spin located at the impurity position. By flipping the fractional spin the edge states can absorb or emit spinons and thus enable single electron tunneling across the impurity even though the bulk spectrum for these excitations is gapped. This results in a finite LDOS below the bulk gap and in a non-monotonic behavior of the conductance. The system represents an important physical example of an interacting symmetry-protected topological phase—which combines features of a topological spin insulator and a topological charge metal—in which the topology can be probed by measuring transport properties. Transmission through a potential barrier in Luttinger liquids with a topological spin gapAlexander D. Mirlin December 30, 2023 ========================================================================================== § INTRODUCTION Following the prediction and experimental discovery of topological insulator materials over the last decade <cit.>, much of the experimental and theoretical effort in recent years has turned towards investigating related topological phenomena in strongly correlated materials. Major progress in understanding these phases has been achieved for systems in one spatial dimension, where a formal mathematical classification of all symmetry protected phases has been developed <cit.>. More recently, a generalization of these methods to systems with dimensionality d>1 was proposed  <cit.>.While the complete classification of one-dimensional (1D) symmetry-protected topological phaseshas constituted an important breakthrough, it is not sufficient by itself to determine physical properties of such systems. In particular, predictions for transport properties of strongly-correlated topological materials are highly desirable as they often offer the most straightforward way to experimentally probe for systems with nontrivial topology.An important model system for studying transport properties in a strongly correlated symmetry-protected topological phase is the one-dimensional (1D) electron gas with time reversal symmetry, electron-electron interaction, and spin-anisotropy <cit.>. As is well known, the spin and charge degrees of freedom of the 1D electron gas decouple in the low-energy Luttinger liquid theory which describes the behavior of gapless collective bosonic excitations <cit.>. 
In a certain parameter range, the modes in the spin sector get dynamically gapped out due to electron-electron backscattering processes which grow under the renormalization flow. On the other hand, the charge sector remains gapless. Without any spin anisotropy, this gapped phase is characterized by quasi-long-range charge density wave (CDW) correlations and was shown to be topologically trivial <cit.>. If, however, the spin anisotropy is large enough, the system flows to a different phase which shows nontrivial topological features. This topological phase exhibits quasi-long-range spin density wave (SDW) correlations in the bulk and zero-energy boundary bound states (BBS) which carry fractional spin <cit.>. The peculiarity of both the SDW and CDW phase is that the excitations in the spin sector are gapped while the charge sector remains gapless. As such, these phases are inherently distinct from non-interacting topological insulators, where both sectors are gapped. On the other hand, they are also distinct from each other since only the SDW phase shows topologically nontrivial features.The subject of this paper is the transport through an impurity in the topological phase described above.It turns out that, in addition to the nontrivial boundary spectrum, the topological SDW phase also exhibits novel transport properties distinct from both the conventional Luttinger liquid and trivial CDW phase. In particular the bulk transport of the system remains ballistic in the low-temperature limit even in the presence of impurities as long as the time reversal symmetry is preserved and interactions are not too strong. More precisely, a single impurity acts as an irrelevant perturbation as long as K_c>1/2, where K_c denotes the Luttinger liquid parameter in the charge sector.The schematic behavior of the bulk conductance of the SDW phase in the presence of a single nonmagnetic scatterer as a function of the temperature is shown in Fig. <ref>. This figure combines the results of the present work with previously known results. An overview of the different transport regimes of the system is depicted in Fig. <ref>.In region III, which is the regime of temperatures much higher than the spin gap Δ, the system effectively behaves as a Luttinger liquid in the presence of a single impurity. The transport properties in this regime are well known <cit.>: for repulsive interactions the impurity represents a relevant perturbation which causes the conductance to decrease as a power law as temperature is lowered. The exponent of this power law differs depending on whether the impurity potential is weak (region III_a) or strong (region III_b). Region I, which describes a weak impurity at temperatures T ≪Δ, has been analyzed by two of us in Ref. <cit.>. It was found that the conductance at the lowest temperatures is ballistic with small power-law corrections at finite temperature. Physically, these corrections stem from scattering of singlet electron pairs off the impurity: due to the excitation gap for spin-1/2 particles, the lowest energetically allowed excitations are electron spin-singlet pairs.Thus, an impurity may become strong under the renormalization group (RG) in the range of relatively high temperatures T≳Δ, regime III_b. Contrary to this, a weak impurity becomes weaker at low temperatures, T ≪Δ. This poses the question of the properties of the topological phase (T ≪Δ) with a strong impurity. The analysis of this regime—which is denoted by II in Figs.
<ref> and <ref>—constitutes the main subject of the present work.The intermediate regime II that we explore here represents a vicinity of a strong-coupling fixed point, where both the impurity strength and the bulk gap have flown to strong coupling under the RG. Using the weakness of the electron tunneling across the barrier in this regime, we will determine the temperature dependence of the conductance as well as the tunneling density of states near the edge.Our central findings concern the transport mechanism in the regime II and the associated physical observables. On the one hand, we find that although spin-1/2 excitations are gapped in the bulk, single electron tunneling can take place via flipping the fractional spin of the boundary bound states. On the other hand, the bound states are energetically split due to the finite tunneling amplitude, with the energy splitting growing proportionally to the tunneling. As tunneling increases with lowering temperature, there exists a critical scale where the energy splitting becomes of the order of the bulk gap. At this energy scale, the single particle tunneling becomes frozen out and a crossover to pair-tunneling-mediated transport (regime I) occurs. The overall temperature dependence of the conductance is strongly non-monotonous, see Fig. <ref>. The underlying physics of the problem is also visible in the behavior of the local density of states (LDOS), which has a finite subgap contribution due to the edge states that is gradually shifted in energy towards the bulk density of statesas temperature is lowered. The finite subgap LDOS and the nonmonotonic behavior of the conductance may thus serve as experimental probes of a nontrivial topology of the system.The paper is organized as follows. In Sec. <ref> we introduce the model we are going to study. Next, we calculate the tunneling conductance across the impurity potential barrier to leading order in the tunneling amplitude in Sec. <ref>. To gain more physical insight into the transport results, we show that the model exhibits boundary bound states and calculate their energy as a function of the tunneling strength across the impurity in Sec. <ref>.Finally, we study the local density of states in Sec. <ref> and summarize our findings in Sec. <ref>. For completeness, we present in Appendix <ref> the renormalization-group analysis and the phase diagram of the model under consideration in the whole range of interaction couplings. § THE MODELLet us introduce the model we are going to study. We consider an interacting 1D electron gas in the presence of short-range electron-electron interaction and a single impurity located at the origin. The effective low-energy Hamiltonian of this model can be expressed in bosonized language as H = H_c + H_s + H_imp, withH_c = v_c/4 π K_c∫_-∞^∞d x[( ∂_x Φ_c)^2 + K_c^2(∂_x θ_c)^2]  ,H_s = v_s/4 π K_s∫_-∞^∞d x[( ∂_x Φ_s)^2 + K_s^2(∂_x θ_s)^2] +g_sG/(2 π a)^2∫_-∞^∞d xcos 2 Φ_s ,H_imp= - g_b/π a∫_-∞^∞d xcosΦ_scosΦ_cδ(x) .The bosonic operators Φ_c,s and θ_c,s in the charge and spin sectors are related to fermionic operators, describing modes linearized around the Fermi momentum k_F, asψ_η,σ = κ_σ/√(2 π a) e^i η/4[ Φ_c+ σΦ_s - ηθ_c - ησθ_s ]. Here σ= ↑ ,↓ = +,- denotes the electron spin, η = +,- the electron chirality, κ_σ are Klein factors which ensure the correct fermionic anticommutation relations, and a is the short-distance cutoff of the theory.The Hamiltonian in the charge sector, Eq. 
(<ref>), describes collective gapless excitations with charge ± e and velocity v_c. The Luttinger parameter K_c is a measure of the strength of electron-electron interactions. Note that we consider electrons at incommensurate filling, so no Umklapp terms are present in the model. On the other hand, spin excitations in the bulk are described by the sine-Gordon model, Eq. (<ref>). Here, the cosine term originates from electronic backscattering processes and can lead to the formation of a gap. More precisely, in the regime 1/2<K_s<1 the excitations in the spin sector are gapped solitons and antisolitons that carry spin 1/2 and -1/2, respectively. As was pointed out in Ref. <cit.>, the system can also flow to this gapped fixed point dynamically for K_s>1 if Ising spin anisotropy is present. Throughout this work we restrict ourselves to the parameter regime K_s ≥ 1/2. For K_s<1/2 propagating breather (soliton-antisoliton) bound states would exist which are not considered here, although we would not expect this to qualitatively change the properties of the state that we discuss.For completeness, we derive the phase diagram of the model Eqs. (<ref>)-(<ref>) in Appendix <ref>.For the purposes of the rest of the text however, it is sufficient to say that we are in a phase where the bulk cosine term in Eq. (<ref>) is relevant and the system develops a spin gap, while the charge sector of the theory remains gapless.Such a phase is often termed as a Luther-Emery liquid <cit.>.Thermodynamically, the sign of the coupling constant g_sG is not important, and there is a duality g_sG→ -g_sG.It should be stressed however that the topological nature of the gapped phase depends crucially on the sign of this coupling constant. In fact, one can define a topological index 𝒬 = sgn(g_sG) which takes the value 𝒬 = +1 in the topological and 𝒬 = -1 in the topologically trivial phase. Throughout this paper we will assume g_sG > 0 since we are interested in studying the topological phase. Lastly, the term (<ref>) in the Hamiltonian describes a time-reversal-symmetric impurity potential with a strength g_b. We assume g_b>0but actually the physical results do not depend on the sign of g_b. Let us note that the term (<ref>) mixes the charge and spin sectors. While we will discuss the model defined by Eqs. (<ref>)–(<ref>) in the context of spinful electrons <cit.>, we note that other physical systems are also described by the same low-energy Hamiltonian. Examples include cold-atom systems <cit.>, coupled superconducting wires <cit.>, coupled edges of quantum spin Hall insulators <cit.>, ladder models <cit.>, and Kondo chains <cit.>.We also refer to these earlier works for the relationships between the parameters in the low energy effective theoryEqs. (<ref>)-(<ref>) and any given microscopic system.§ TUNNELING CURRENT ACROSS A LARGE POTENTIAL BARRIERIn this Section we will discuss the transport properties of the model introduced in Eqs. (<ref>)–(<ref>) in the regime II of Fig. <ref>. To be more precise, we want to analyze the conductance in the regime where both the impurity potential, g_b/ v_s ≫ 1, and the bulk interaction potential, g_sG/ v_s ≫ 1, are strong. First, we derive an effective model in the regime g_sG/ v_s ≫ 1. If correlations in the bulk are strong, the spin field will establish a mean field Φ_s^SDW=π/2 in order to minimize the potential energy of the bulk term (<ref>). 
Quantum-mechanical fluctuations around this ground state can be described semiclassically by writing Φ_s(x,τ) = Φ_s^SDW + δΦ_s(x,τ) and expanding the action of the model to quadratic order in fluctuations δΦ_s. This yields the following action in energy-momentum space:S_LL [Φ_c,δΦ_s] =1/4 π v_c K_c∫_ω,q |Φ_c(q,ω)|^2 (ω^2 +v_c^2 q^2) +1/4 π v_s K_s∫_ω,q |δΦ_s(q,ω)|^2 (ω^2 +v_s^2 q^2 + Δ^2) ,where Δ = (8 π v_s^2 K_s g_sG)^1/2 /a denotes the excitation gap of the spin fluctuations. In terms of the fluctuations, the impurity potential takes the formS_imp[Φ_c,δΦ_s] = g_b/π a∫dτ cosΦ_c(0,τ) sinδΦ_s(0,τ) . Next we integrate out all fields except those at the origin to obtain an effective local action:e^-S_eff[q_c,q_s] = ∫DΦ_c ∫DΦ_sδ[q_c(τ) - Φ_c(0,τ) ]×δ[q_s(τ)- δΦ_s(0,τ) ] e^-S_LL[Φ_c,δΦ_s]-S_imp[Φ_c,δΦ_s] .Performing the Gaussian functional integration, we arrive at the resultS_eff = ∑_μ=c,s∫_ω𝒦_μ(ω) |q_μ(ω)|^2 + S_imp[q_c,q_s] ,with the kernel functions𝒦_c(ω) = 1/2 π K_c|ω|  ,𝒦_s(ω) = 1/2 π K_s√(ω^2+Δ^2).As was pointed out by Furusaki and Nagaosa <cit.>, this type of action is equivalent to that of a quantum Brownian particle moving in a periodic cosine potential and coupled to a dissipative environment. However, unlike in the gapless Luttinger liquid, only the low-lying charge excitations cause the damping in our model (as long as we are interested only in energies below the gap). The reason for this is that spin excitations are gapped and thus can not contribute to the damping. To avoid UV divergencies it is necessary to introduce a high-frequency cutoffwhich correspond to a finite mass of the Brownian particle.Since we are interested in the physics on energy scales below the gap Δ, we choose the cutoff to be of the order of Δ.In the limit of a large impurity potential, the electron transport can be viewed as a tunneling from a minimum of the potential (<ref>) to an adjacent minimum. The corresponding tunneling amplitude γ (at the new ultraviolet cutoff scale Δ) provides a natural expansion parameter. The relationship between the tunneling strength and the barrier strength is non-universal <cit.>, so we will consider γ to be a phenomenological parameter. In the following, we will calculate the conductance perturbatively in leading order in the tunnneling amplitude γ by using Fermi's golden rule. The analysis generalizes the discussion of Ref. <cit.> to the case of a gapped spin sector. It is useful to rewrite the partititon function by introducing a set of quadratic oscillator degrees of freedom { x_1j} and { x_2k}, Z= ∫D [q_c,q_s] e^-S_eff[q_c,q_s] = ∏_jk∫D [q_c, q_s,x_1j,x_2k] e^-∫dτ ℒ( { x_1j}, { x_2k}, q_c,q_s) ,withℒ = ∑_j [ m_1j/2ẋ_1j^2 +m_1j/2ω_1j^2 x_1j^2 + g_1j x_1j q_c + g_1j^2/2 m_1jω_1j^2 q_c^2] + ∑_k [ m_2k/2ẋ_2k^2 +m_2k/2ω_2k^2 x_2k^2 + g_2k x_2k q_s + g_2k^2/2 m_2kω_2k^2 q_s^2] + g_b/π a ( cos q_c sin q_s -1 ) .The introduced oscillators are characterized by the spectral functionsJ_c(Ω) =π/2∑_jg_1j^2/m_1jω_1jδ(Ω-ω_1j) ,J_s(Ω) =π/2∑_kg_2k^2/m_2kω_2kδ(Ω-ω_2k)  .The identity in Eq. 
(<ref>) with the Lagrangian in (<ref>) holds if these spectral functions fulfil the following integral equations:𝒦_μ(ω) = ∫dΩ/Ω J_μ(Ω)/π ω^2/(ω^2 + Ω^2) . It can be checked that this is the case if we choose J_c(ω) = ω/π K_c Θ(ω), J_s(ω) = 1/π K_s[ √(ω^2-Δ^2) Θ(ω-Δ) + π/2Δωδ(ω) ] . The tunneling probability to lowest order in the (phenomenologically introduced) tunneling matrix element γ is obtained using Fermi's golden rule for tunneling between neighboring minima of the potential <cit.>. We consider the minima (q_c,q_s)=(0,-π/2) and (π,π/2). The probability is given by 𝒫_(0,-π/2) → (π,π/2)=2 πγ^2 ∑_i,f |⟨ f | i ⟩|^2 e^-β E_iδ(E_f-E_i-eV) / ∑_i e^- β E_i =γ^2 ∫_-∞^∞d t ⟨ e^-i H_f t e^i H_i t⟩_i e^i e V t  , where V is the applied voltage, β =1/T is the inverse temperature and |i ⟩ (|f ⟩) represent eigenstates of H_i (H_f), defined in Eqs. (<ref>) and (<ref>) below, with eigenvalues E_i (E_f). The thermal average is defined as ⟨ X ⟩_i = Tr(X e^-β H_i) / Tr(e^-β H_i). The initial and final state Hamiltonians are obtained from ℒ in Eq. (<ref>) by setting (q_c,q_s)=(0,-π/2) and (π,π/2), respectively. After quantizing the oscillator modes we find H_i =∑_j ω_1j (a_j^† a_j + 1/2)+ ∑_k [ ω_2k (b_k^† b_k +1/2) - π g_2k/2 √(2 m_2kω_2k) (b_k^†+b_k)+ π^2 g_2k^2 /8 m_2kω_2k^2] , H_f =∑_j [ ω_1j (a_j^† a_j +1/2) + π g_1j/√(2 m_1jω_1j) (a_j^†+a_j)+ π^2 g_1j^2 /2 m_1jω_1j^2]+ ∑_k [ ω_2k (b_k^† b_k +1/2) + π g_2k/2 √(2 m_2kω_2k) (b_k^†+b_k)+ π^2 g_2k^2 /8 m_2kω_2k^2]  . The two Hamiltonians are related to each other via the translation of the oscillator coordinates, x_1j → x_1j + π g_1j / √(2 m_1jω_1j) and x_2k→ x_2k + π g_2k / √(2 m_2kω_2k), which corresponds to the transformation H_f = U^† H_i U with the unitary translation operator U = exp[ ∑_j π g_1j/√(2 m_1jω_1j^3 ) (a_j^† - a_j) + ∑_k π g_2k/√(2 m_2kω_2k^3 ) (b_k^† - b_k) ]  . With the help of this relation the transition probability is evaluated as 𝒫_(0,-π/2) → (π,π/2)=γ^2 ∫_-∞^∞d t exp[ i e V t - π∫_-∞^∞dω/ω^2 ( J_c(ω) + J_s(ω) ) [ (1-cos(ω t)) coth(βω/2) + i sin(ω t) ]]  . In the same way the probability of the reverse process, (q_c,q_s) = (π,π/2) → (0,-π/2), is obtained: 𝒫_(π,π/2) → (0,-π/2)=γ^2 ∫_-∞^∞d t ⟨ e^-i H_i t e^i H_f t⟩_f e^- i e V t = e^-β e V𝒫_(0,-π/2) → (π,π/2) . Here the last line represents the detailed-balance condition. The net current across the impurity potential is given by the difference of the tunneling probabilities j_c = 2 e ( 𝒫_(0,-π/2) → (π,π/2)- 𝒫_(π,π/2) → (0,-π/2) )= 2 e γ^2 (1-e^-β eV) ∫_-∞^∞d t exp[ i eV t- π∫dω/ω^2 [J_c(ω) + J_s(ω)] e^-|ω|/Δ[ (1-cosω t) coth(βω/2) + i sin(ω t) ]]  , where the factor 2 in front comes from the spin degeneracy and we introduced the ultraviolet cutoff e^-|ω|/Δ. From this result we obtain the conductance at the voltage V → 0 as a function of the temperature: G(T) = 2 e^2 γ^2 β∫_-∞^∞d t exp[ - π∫dω/ω^2 [J_c(ω) + J_s(ω)] e^-|ω|/Δ[ (1-cosω t) coth(βω/2) + i sin(ω t) ]]≃ 2 e^2π^3/2Γ(1/2 K_c)/Γ(1/2 K_c + 1/2) (γ/Δ)^2 ( π T/Δ)^1/K_c -2 , where Γ(x) denotes the Euler gamma function. It is instructive to rewrite the conductance in Eq. (<ref>) as G(T) ∝ e^2 (γ/Δ)^2 ( π T/Δ)^1/K_c -2 ≡ e^2 γ^2(T)/Δ^2 , with the renormalized tunneling amplitude at energy ϵ: γ(ϵ) =(ϵ/Δ)^1/2 K_c-1γ . Equation (<ref>) shows that the tunneling is enhanced in regime II, as long as interactions are not too strong, i.e., as long as K_c>1/2. Thus the conductance increases in region II as temperature is decreased, as shown in Fig. <ref>. 
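The power law just obtained is straightforward to tabulate. The minimal Python sketch below evaluates the closed-form expression for G(T)/(2e^2); the ratio γ/Δ = 0.05 and the chosen values of K_c and T/Δ are arbitrary illustrative numbers, not taken from the analysis above. The output confirms the growth of the conductance with decreasing temperature for 1/2 < K_c < 1.

import numpy as np
from scipy.special import gamma as Gamma

def G_over_2e2(T, Kc, gamma_over_Delta=0.05):
    # G(T)/(2 e^2) = pi^(3/2) Gamma(1/(2 K_c)) / Gamma(1/(2 K_c) + 1/2)
    #                * (gamma/Delta)^2 * (pi T/Delta)^(1/K_c - 2); T is in units of Delta
    pref = np.pi**1.5 * Gamma(1.0/(2.0*Kc)) / Gamma(1.0/(2.0*Kc) + 0.5)
    return pref * gamma_over_Delta**2 * (np.pi*T)**(1.0/Kc - 2.0)

T = np.array([0.2, 0.1, 0.05, 0.02])   # temperatures below the spin gap, T/Delta
for Kc in (0.6, 0.75, 0.9):
    print(f"K_c = {Kc}:  G/(2e^2) =", np.array2string(G_over_2e2(T, Kc), precision=4))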
The physical reason for this behavior will be discussed in detail in the following sections. For now let us just point out that the spin Luttinger parameter K_s does not appear in the exponent in Eq. (<ref>). As has been discussed above, this is because the spin degrees of freedom in the bulk are gapped and thus do not contribute to the physics at energies below the gap. We have checked explicitly that the topologically trivial CDW phase shows a different behavior of the conductance. In this phase the conductance to order γ^2 is exponentially suppressed as G ∝γ^2 exp(-Δ/T) for temperatures below the spin gap and shows the usual Luttinger liquid behavior G ∝ T^1/K_c+1/K_s-2 for temperatures above the gap. The main contribution to the conductance at T ≪Δ in this phase is due to tunneling of singlet pairs and arises only at order 𝒪(γ^4) in the perturbative expansion of the conductance. To gain a better understanding of the physics governing transport in the regime II (strong barrier) of the topologically gapped phase, we next study the edge state spectrum and the single particle density of states in this regime. § BOUNDARY BOUND STATE In the preceding Section we have seen that, unlike in a disordered Luttinger liquid, the strong impurity fixed point is unstable in the presence of weak tunneling if the spin sector is topologically gapped. As we will see, this is crucially related to the boundary states that emerge at the impurity position in the presence of a bulk gap. We refer the reader to Ref. <cit.> for a related discussion of the effect of boundary states on the transport in two-subband quantum wires. In this Section, we discuss the properties of the boundary bound state of the model in Eqs. (<ref>)-(<ref>). This is most conveniently achieved by mapping the model to two copies of the boundary sine-Gordon model. In the strong impurity regime, g_b /v_s≫ 1, the charge and spin fields at the origin develop expectation values which minimize the potential energy of the impurity, Eq. (<ref>), E_imp∝cosΦ_s(0) cosΦ_c(0). This is minimised by any combination Φ_c(0)=nπ, Φ_s(0)=mπ where n+m is an even integer. The discussion which follows is analogous for any of these degenerate minima, so to be concrete, we choose n=m=0, i.e. Φ_c(0)=0 and Φ_s(0)=0. While the above discussion is correct for the charge mode, as there is no other term competing with this minimum, this is not the case for the spin mode, where the bulk sine-Gordon term in Eq. (<ref>), which is also in the strong coupling regime, is minimised by a different value of Φ_s. We therefore must study the spin sector more closely. By integrating out the small fluctuations of the charge field around the mean field and redefining the spin fields as Φ_s → (Φ_s - π)/2 the model maps to two copies of the boundary sine-Gordon model with the action S_s = S_s,1 + S_s,2 with S_s,1=v_s/16 π K_s∫_-∞^0 d x ∫dτ [ v_s^-2 ( ∂_τΦ_s)^2 + (∂_x Φ_s)^2] -g_sG/ (2 π a)^2∫_-∞^0 d x∫dτ cosΦ_s - g_b/2 π a∫dτ cos( 1/2( Φ_s - π))|_x=0. The action S_s,2 is defined identically to Eq. (<ref>) but for fields with coordinates x>0. It has been shown <cit.> that the model in (<ref>) supports boundary bound states with energy E_BBS = Δsinχ, χ = (π-Φ_s^0)/(2-2 K_s), where Φ_s^0 = Φ_s(0,τ) denotes the value of the mean field solution at the origin. In particular, in the case of fixed boundary conditions, which can be obtained from (<ref>) in the limit g_b→∞, the mean field takes the value Φ_s^0 =Φ_s(x=0,τ) =π and thus the energy of the boundary bound state (<ref>) is exactly zero. 
This was also discussed in previous works in a different framework <cit.>.The physical nature of the bound state has been discussed by Ghoshal and Zamolodchikov <cit.>. For 0< Φ_s^0 < π the ground state of each boundary sine-Gordon model is characterized by the asymptotic behavior Φ_s^(1)→ 0 as x →±∞. Classically, there exists another stable state with Φ_s^(2)→ 2 π as x →±∞. This state is expected to be stable in the quantum theory as well if the parameter Φ_s^0 is not too small (compared with the parameter √(K_s) governing quantum fluctuations). Exactly for Φ_s^0 = π both ground states are degenerate and the energy of the boundary bound state vanishes. In this scenario the bound state can emit or absorb a soliton changing its state between the two degenerate ground states without energy cost. This flipping between degenerate edge state configurations allows single electrons to tunnel across the barrier although the bulk spin sector has an excitation gap for single spins.Having understood the g_b →∞ limit, we turn to a more physical setup with a finite potential barrier due to the impurity, which is equivalent to a small but finite tunneling amplitude γ for electrons.Let us first study how a finite barrier strength affects the classical ground state of (<ref>). By minimizing the action S_s, Eqs. (<ref>) and (<ref>), we obtain the equations of motion∂_y^2 Φ_s = sinΦ_s - g̅_b δ(y) cosΦ_s/2,with dimensionless coordinate y= (2 K_s g_sG / v_sπ a^2)^1/2 x and the dimensionless parameter g̅_b = (8 π v_s K_s/g_sG)^1/2 g_b. We solve Eq. (<ref>) by an appropriate ansatz. For y>0 and y<0 the solution should have a form of the bulk soliton. Requiringthe asymptotic condition Φ^(2)_s(y) → 2 π as |y| →∞, we getΦ_s^(2)(y) =4 arctan e^-y-y_1,y<0 ,4 arctan e^y-y_2,y>0 .The constants y_1 and y_2 are determined by the matching conditions at y=0:Φ_s^(2)(0+)= Φ_s^(2)(0-)  , ∂_y Φ_s^(2)(0+) - ∂_y Φ_s^(2)(0-)= - g̅_b cosΦ_s^(2)(0)/2.The first condition is simply the continuity of the solution and the second is obtained by integrating the equation of motion (<ref>) over an infinitesimal interval around the origin.Applying these conditions, we find y_1 = y_2 = arsinh(4/g̅_b). In particular, in the strong-barrier limit, g̅_b ≫ 1, we obtain Φ_s^(2)(x=0) ≃π - 8/ g̅_b, and thusthe energy of the bound state (<ref>) takes the formE_BBS^(2) = Δ√(2 g_sG/π K_s (1-K_s)^2)1/g_b .The calculation for the solution with asymptotics Φ_s^(1)→ 0 is analogous and yields E_BBS^(1) = E_BBS^(2). From the problem of tunneling across a delta-function potential barrier for noninteracting particles we know that the tunneling amplitude is inversely proportional to the barrier strength. Interactions renormalize the tunneling but do not change this relation. Thus, we conclude that, in the presence of weak tunneling through the barrier and at a finite temperature, the renormalized energy of the boundary bound state scales asE_BBS(ϵ) ∝ |g_b(ϵ)|^-1∝ |γ(ϵ)| ∝ϵ^1/2 K_c-1 ,where the scaling of γ(ϵ) was determined in Eq. (<ref>). The linear scaling of E_BBS withthe renormalized tunneling amplitude γ(ϵ) can be also understood from a simple physical reasoning. For an infinite barrier (g_b = ∞, γ = 0), there are zero-energy bound states on each side of the barrier. At finite (but small) γ(ϵ), they get split acquiring an energy proportional to the renormalized matrix elementγ(ϵ). Hence, if K_c>1/2, the energy splitting grows according to Eq. (<ref>) as temperature is decreased. 
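The estimates above are easy to check numerically. In the minimal Python sketch below, the value K_s = 0.6 and the dimensionless barrier strengths g̅_b are arbitrary illustrative choices (and Δ is set to unity); the script evaluates Φ_s^(2)(x=0) = 4 arctan e^{-y_1} with y_1 = arsinh(4/g̅_b), inserts the result into the bound-state energy E_BBS = Δ sin χ quoted above, and compares with the leading large-barrier behavior E_BBS ≈ 4Δ/[g̅_b(1-K_s)], which reproduces Eq. (<ref>) once the definition of g̅_b is inserted.

import numpy as np

Ks = 0.6                                      # illustrative value in the window 1/2 <= K_s < 1
gb_bar = np.array([10.0, 20.0, 50.0, 100.0])  # dimensionless barrier strengths

y1 = np.arcsinh(4.0/gb_bar)                   # matching condition y_1 = y_2 = arsinh(4/gb_bar)
phi0 = 4.0*np.arctan(np.exp(-y1))             # Phi_s^(2)(x=0)
chi = (np.pi - phi0)/(2.0 - 2.0*Ks)
E_exact = np.sin(chi)                         # E_BBS/Delta from E_BBS = Delta*sin(chi)
E_asym = 4.0/(gb_bar*(1.0 - Ks))              # leading large-barrier asymptote

for g, e, ea in zip(gb_bar, E_exact, E_asym):
    print(f"gb_bar = {g:6.1f}   E_BBS/Delta = {e:.4f}   asymptote = {ea:.4f}")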
One observable where this splitting can be directly seen experimentally is the local density of states, which will be discussed in the next Section.§ LOCAL DENSITY OF STATESAs we have discussed in the previous Sections, the transport in the regime II of Fig. <ref> is governed by bound-state-mediated single-electron tunneling which leads to an increase of conductance as temperature is lowered. On the other hand, we know that at lowest temperatures (regime I), where the conductance is nearly perfect (ballistic), the transport is governed by pair tunneling of spin singlets. In order to study the crossovers between the different transport regimes, we now consider the local density of states in the regimes I-III.The observable of main interest in this Section is the local tunneling density of states of electrons at the impurity position, which is defined asν(ω) = - 1/πlim_i ω_n →ω + i 0+Im∑_σ=↑,↓. G_σ(x_1,x_2,ω_n) |_x_1=x_2 = 0-.Here ω_n are fermionic Matsubara frequencies, and the Green's function of electrons is given byG_σ= e^i k_F (x_1-x_2) G_σ^RR+ e^-i k_F (x_1-x_2)G_σ^LL +e^i k_F (x_1+x_2) G_σ^RL + e^-i k_F (x_1+x_2)G_σ^LR,with the chiral fermionic Green's functionsG_σ^η_1 η_2 = - ⟨0| T_τψ_η_1,σ(x_1,τ)ψ_η_2,σ^†(x_2,0)|0⟩ ,where |0⟩ is the ground state. Upon bosonization, the Green's function factorizes into a product of correlation functions in the charge and spin sectors, yielding in the x → 0 limitG_σ(0,0,τ) =- 1/2 π a g_c( τ) g_s( τ)  .In the following we will discuss the form of the Green's function [i.e., of the functions g_c and g_s in Eq. (<ref>)] and the resulting LDOS in the regimes I-III of Fig. <ref>. A schematic plot of the LDOS in the different transport regimes is given in Fig. <ref>.§.§ Region III We begin with the high-temperature regime (region III). In this regime, the system is in the Luttinger liquid phase and the form of the Green's function is well known <cit.>:G(x,τ)= 4/(2 π a)^1-α_c-α_ssgn(τ) × [ T/2v_c/sinπ T |τ|]^α_c[ T/2v_s/sinπ T |τ|]^α_s.Here, the factor 4 in front results from the degeneracy in spin and chiral sectors, and the exponents α_c and α_s are functions of the Luttinger parameters in the charge and spin sector. Their form differs depending on whether we measure the Green's function in the bulk (regime III_a) or at the boundary (regime III_b). To be specific they read as α_μ = (K_μ +K_μ^-1)/2 in regime III_a and α_μ = 1/2 K_μ in regime III_b. The density of states is obtained by plugging the Green's function (<ref>) into the definition of the LDOS, Eq. (<ref>). The resulting expression reads asν^III(ω)= 4/π^2 v_ccos( π/4 K_c)( 2 π a T/ v_s)^α_s( 2 π a T/ v_c)^α_c-1×Re{ B(-i ω/2 π T + α_c+α_s/2, 1-α_c-α_s )}Θ(ω) ,where B(x,y) is the Euler beta function. A plot of the LDOS in the regimes IIIa and IIIb is presented in the two lower panels of Fig. <ref>.§.§ Region IIIn this regime the bulk of the system is gapped and the impurity potential is large. The large impurity potential effectively acts as a boundary potential so that we have to calculate the LDOS with open boundary conditions. The charge part in (<ref>) can be calculated in this case using standard open-boundary bosonization methods <cit.>. 
The correlation function in the charge sector is that of a gapless Luttinger liquid at a hard-wall boundary:g_c( τ) = [ π a T/v_c/sinπ T τ]^1/2 K_c .On the other hand, the integrability of the sine-Gordon model on the half line allows for a calculation of the correlation functions in the spin sector using the boundary state formalism introduced by Ghoshal and Zamolodchikov <cit.> together with a form factor expansion. This procedure has been performed in Ref. <cit.> where the authors calculate the local chiral Green's functions. The correlation function in the spin sector consists of three partsg_s( τ)≡g_s^0( τ) + g_s^1(τ) +g_s^b(τ)  . Let us first discuss the first two terms g_s^0 and g_s^1 which describe one-particle contributions of the form factor expansion. The first term corresponds to the free propagation of a massive (anti-)soliton and the second term describes a single collision of such a particle with the boundary.Terms that involve a higher number of particles in the intermediate state as well as higher order corrections due to the boundary lead to subleading corrections: since these processes require the excitation of more gapped particles, they take place at higher energies <cit.>. Since we are only interested in frequencies ω≲Δ we can discard those terms.Explicitly, the terms g_s^0 and g_s^1read:g_s^0(τ) ≡2 Z_1/π[cos(π/4)K_0 (τ) + K_1/2 (τ) ]  ,andg_s^1(τ)=Z_1 [∫dθ/2 πK(θ+i π/2) e^-Δ|τ| coshθ + ∫dθ/2 πK(θ+i π/2) e^- Δ|τ| coshθ + e^-i π/4∫dθ/2 πK(θ+iπ/2) e^-Δ|τ| coshθ e^-θ/2 +e^i π/4∫dθ/2 πK(θ+i π/2) e^- Δ|τ| coshθ e^θ/2]  .Here, K_n(x) denotes the modified Bessel function of the second kind, Z_1 is a normalization constant which was obtained in Ref. <cit.>, and K(θ) is the so called boundary reflection amplitude. In particular, at the exactly solvable Luther-Emery point, K_s=1/2, this function is given by K(θ) =itanhθ/2. We stress that the dependence of the Green's function in (<ref>) on K_s is only contained in the form of the reflection amplitude K(θ), the normalization constants Z_1, as well as another constant B to be defined below in the discussion of the bound-state contribution g_s^b.A plot of the bulk contribution to LDOS, which is obtained by only taking into account the terms g_s^0 and g_s^1 in the Green's function in (<ref>),is shown in Fig. <ref>. The most salient properties of the bulk LDOS are as follows. First, as a natural manifestation of the gap, the LDOS vanishes exponentially for energies below Δ. Second,we observe a peak structure at energies just above the gap. We note that the peak is not sharp (i.e., not a δ-function). This is because the electronic Green's function is a convolution of a gapped spin part and a gapless charge part and thus the LDOS is associated with excitations involving at least two “elementary” constituents.Technically, the peak arises due to the contribution g_s^1 in (<ref>) shown in the inset of Fig. <ref> which describes the propagation of the electron to the boundary where it is reflected and then propagates back to the point of measurement. Returning to the full expression for the Green's function, the last term in Eq. (<ref>) describes the contribution of the boundary bound state and reads asg_s^b (τ) ≡ 2 Z_1 B [ 1+cosχ/2] e^-E_BBS |τ| ,with E_BBS and χ given in Eq. (<ref>), and B>0 denoting a real constant. For example at the Luther-Emery point, K_s = 1/2, it is given by B = -2 cosΦ_s^0. The contribution to the LDOS at the impurity position arising due to the boundary bound state can be calculated analytically. 
The calculation is standard <cit.> and yieldsν^b(ω) = 8 Z_1 B/π^2 v_ccos( π/4 K_c) Θ(ω-E_BBS) ( 2 π a T/ v_c)^1/2 K_c-1× Re{ B(-i ω-E_BBS/2 π T + 1/4 K_c, 1-1/2 K_c)} .The contribution ν^b(ω) arising from the bound state is plotted in Fig. <ref> for different temperatures. We observe that the boundary contribution is finite below the bulk gap, vanishing only below the threshold energy E_BBS given by the energy splitting of the edge states. Since the energy splitting increases with lowering temperature according to Eq. (<ref>), the threshold for the LDOS shifts towards the bulk gap until it merges with the bulk LDOS at temperatures ∼ T^∗, where E_BBS(T^∗) = Δ. This temperature scale characterizes the crossover temperature from the edge-state mediated tunneling (at T^∗ < T < Δ, regime II) to the singlet pair tunneling (at T < T^∗, regime I). The total LDOS in the regime II, including both the bulk and boundary contributions, is plotted in the upper right panel of Fig. <ref>. §.§ Region I In this regime the bulk of the system is gapped and the impurity potential is weak. Therefore to lowest order we calculate the LDOS in the absence of the impurity potential.The charge part of the Green's function (<ref>) is thus given by the “bulk" expressiong_c(τ) = [ π a T/v_c/sinπ T τ]^K_c/2+ 1/2K_c .and the spin part is given by second term in Eq. (<ref>) only:g_s(τ) ≡2 Z_1/π K_1/2 (Δτ) ,We note that technically this term arises from the RR and LL components of the chiral Green's function in (<ref>). The offdiagonal chiral components G^RL and G^LR vanish in regime I, since left and right moving electrons are independent in the absence of a boundary.The LDOS in this regime can be obtained numerically by using Eqs. (<ref>), (<ref>), and (<ref>). The results are plotted in the upper left panel of Fig. <ref>.§ CONCLUSION In this work, we have studied theoretically transport properties of a 1D electron gas with strong correlationsthat dynamically gap out the spin degrees of freedom of the low-energy theory, in the presence of a time-reversal invariant impurity. The effective low-energy Hamiltonian of the model is defined by Eqs. (<ref>)-(<ref>). The resulting topological SDW phase is characterized by topologically protected bound states located at the impurity position which carry fractional electron spin.The key results of this paper concern the behavior of the conductance through the impurity and of the LDOSin different transport regimes; see Figs. <ref> and <ref>, respectively. The results are obtained by using a combination of bosonization and perturbative expansions in different limiting regimes.At temperatures far above the bulk gap Δ in the spin sector, i.e., in the regime III of Fig. <ref>, the conductanceshows behavior typical for a gapless Luttinger liquid in the presence of an impurity. At sufficiently high temperatures, the transport is ballistic with small power-law corrections due to elastic scattering of single electrons off the impurity dressed by Friedel oscillations. The density of states shows a power-law behavior with a zero-bias anomaly that is cut off by the finite temperature. The power-law exponent of both the conductanceand the LDOS depends on whether the impurity has flown to strong coupling or not. A strong impurity effectively corresponds to a boundary and leads to different exponents of the transport observables. 
The weak-impurity regime is denoted by III_a and the strong-impurity regime by III_b in the figures.We have focussed on the range of moderately strong interactions with 1/2< K_c <1 and 1/2≤ K_s <1. Upon lowering the temperature, both the impurity potential and the soliton interaction potential in the spin sector then flow to strong coupling under the renormalization. The corresponding strong coupling fixed point describes two separate 1D subsystems, each with a gap Δ for spin excitations. This regime is denoted by II in Fig. <ref>. The transport in this regime takes place via weak tunneling processes, with amplitude γ, between the ends of the two subsystems. In view of the topological character of the system, a boundary bound state energetically located within the bulk gap emerges at the end of each subsystem. Due to the finite tunneling between both subsystems, the edge states are energetically split around zero energy by E_BBS defined in Eq. (<ref>). The dominant transport mechanism in this regime is the single-electron tunneling mediated by the boundary states. Even though single spin excitations are gapped in the bulk, they can be created or annihilated by flipping the edge spin which has an energy cost of order of the splitting. This is clearly visible in the density of states, depicted in Fig. <ref>. The DOS has a subgap contribution above a threshold value of E_BBS due to the contribution of the edge state. We note that while the edge state gives a delta function contribution to the DOS in the spin sector, the electron DOS is obtained as a convolution of the DOS of the spin and charge sectors and thus the subgap peak is not sharp.It is important that the energy splitting is not constant but scales with temperature ∝ |γ(T)|. Crucially, we find that the tunneling in the regime II is enhanced according to Eq. (<ref>). Thus, upon lowering the temperature, the energy splitting of the boundary state gradually increases until finally the edge DOS merges with the bulk DOS at temperature T^∗, defined by E_BBS(T^∗) = Δ. Simultaneously, the strength of the impurity potential, which scales∝ |γ(T)|^-1, is reduced, ultimately flowing back to a weak-impurity fixed point.This signals a crossover to a phase where the spin sector is gapped but the impurity potential is weak, denoted by I in Fig. <ref>. In this regime, thesingle electron tunneling is energetically forbidden due to the bulk gap for spin-1/2 excitations. This is clearly visible in the DOS in regime I of Fig. <ref>, which shows a hard-gap behavior. The leading transport channel in this regime is then the tunneling of singlet pairs across the impurity. Since this is a much weaker second-order process, the conductance shows ballistic behavior with weak power-law corrections.We briefly discuss now what happens if we relax the conditions 1/2< K_c <1 and 1/2≤ K_s <1; see Appendix <ref> for a more detailed presentation based on the RG analysis. 
If K_c<1/2, which corresponds to very strong repulsive interactions, pair scattering becomes relevant and the T=0 fixed point is insulating. It is curious that this same limitation also occurs for helical edge states of a two-dimensional topological insulator when interactions are considered <cit.>. If K_c>1, which will occur in superconducting realisations of this model <cit.>, there is no change to regions I and II below the spin gap. However, if K_c+K_s>2, then the impurity is no longer relevant, even above the spin gap. This means firstly that the conductance as a function of temperature will be monotonic, and secondly that the regime II, where the impurity is still strong at an energy scale of Δ, will be more difficult to reach. If K_s<1/2, which would correspond to very strong Ising anisotropy, we would expect that the basic physics we have discussed will remain the same; however, there may be some quantitative changes due to breather modes in the spin sector that have not been taken into account in this work. Finally, if K_s>1, then generally the system is not in a spin-gapped phase (more accurately, the border is K_s=1 only for infinitesimally small backscattering g_sG; a finite g_sG slightly shifts the border; see Appendix <ref>). In conclusion, the discussed quasi-long-range order SDW phase is an example of a strongly correlated symmetry-protected topological phase that exhibits features fundamentally different from non-interacting topological phases. We have discussed signatures of these properties both in the LDOS near an impurity and in the behavior of the conductance. We note that although an impurity is irrelevant in the RG sense and will always flow to weak coupling as T→ 0, there are certain parameter regimes where the impurity is strong below the gap, demonstrating boundary states in the LDOS. This physics is rather universal and should be experimentally observable in any of the physical systems listed in Sec. <ref>. A further peculiarity of the system that we have studied is that it has features characteristic of a topological insulator (bulk gap with a topological edge state) only in the spin sector. The charge sector remains gapless. However, as our results show, the charge transport also exhibits remarkable topological properties. Indeed, we have shown that even in the presence of a strong impurity [meaning a (renormalised) impurity strength greater than the gap, implying G ≪ 1 at intermediate T, regime II] the conductance becomes ballistic in the low-temperature limit. Thus, the system combines features of a topological insulator (symmetry-protected topological phase) in the spin sector with those of a topological metal in the charge sector. In short, our results on the conductance and the LDOS can be used to experimentally probe the nontrivial topology in the system. We hope that our work will stimulate experimental activity in this direction, both in the condensed-matter and in the cold-atom realizations of the topological phase that we have theoretically explored. § ACKNOWLEDGEMENTS We thank M. Bard, E. Berg, I. Gornyi, A. Haim, A. Keselman, and G. Möller for useful discussions. ADM acknowledges the support within the Weston Visiting Professorship at the Weizmann Institute of Science. 
This work was supported by the Priority Programme 1666 “Topological Insulator” of the Deutsche Forschungsgemeinschaft (DFG-SPP 1666).§ RENORMALIZATION GROUP EQUATIONSIn this Appendix we developthe renormalization-group (RG) analysis of the model (<ref>)-(<ref>) and discuss the corresponding phase diagram in the full range of Luttinger-liquid constants K_c and K_s.The complete action of the model with the Hamiltonian (<ref>)-(<ref>) reads S = S_LL + S_δ + S_SG + S_imp +S_coh, where S_LL=1/2∑_μ1/K_μ∫d^2 r( ∇φ_μ)^2 , S_δ=δ/2 K_c∫d^2 r[ ( ∂_r_1φ_c)^2 - ( ∂_r_2φ_c)^2], S_SG=λ_⊥∫d^2 r/a^2 cos(√(8 π)φ_s(r)) , S_imp= -λ_imp∫d r_2/a cos(√(2 π)φ_s(0,r_2)) cos(√(2 π)φ_c(0,r_2)) , S_coh=∫d r_2/a [ λ_imp,scos(√(8 π)φ_s(0,r_2)) + λ_imp,ccos(√(8 π)φ_c(0,r_2)) ] . Here, we defined the dimensionless coupling constants λ_⊥ = g_sG /( 4 π^2 v_s) and λ_imp = g_b / (π v_s), as well as the coordinates r = (r_1,r_2)^T = (x, v_s τ)^T. The bosonic fields are related to the convention used in the main text by rescaling φ = Φ/ √(2 π). The dimensionless velocity difference δ = 1- v_c/v_s, while in principle present, turns out not to be important as it neither flows under RG nor influences any of the other flow equations, We therefore will simply drop it in the following.The action S_coh describes two-particle coherent processes generated by the impurity term in second order in a perturbative expansion in λ_imp; the bare values of the coupling constants are λ^0_imp,c=λ^0_imp,s=0. Here, the first term corresponds physically to the backscattering of two incoming electrons with opposite spin, incident from the left and right of the impurity. The resulting scattering process effectively backscatters a particle with spin 1 but zero charge. The second term in S_coh describes a process where two electrons with opposite spin are incident from the same side of the impurity and are coherently backscattered. This process effectively backscatters a singlet with charge 2 e. These coherent scattering processes become important when either the charge or the spin sector are gapped and electronic excitations are prohibited.The phase diagram of the model in Eq. (<ref>) is determined by the interplay of the impurity scattering (described by the terms S_imp and S_coh) and the interaction (described by the sine Gordon term). To gain a better understanding of this interplay, weperform a RG analysis of the action in Eq. (<ref>). The RG equations readd K_s/d ℓ= - 1/2 K_s^2 λ_⊥^2 ,d λ_⊥/d ℓ= (2 -2 K_s) λ_⊥,d λ_imp/d ℓ=[ 1 - 1/2 ( K_s + K_c) ] λ_imp - 1/2λ_impλ_imp,c - 1/2λ_impλ_imp,s - 1/4 √(2 π)λ_⊥λ_imp,d λ_imp,s/d ℓ= (1 - 2 K_s) λ_imp,s - 1/4λ_imp^2 - 1/2λ_⊥λ_imp,s,d λ_imp,c/d ℓ= (1 - 2 K_c) λ_imp,c - 1/4λ_imp^2 .There exists a line of weak-coupling fixed points with K_s = const and λ_i=0 for all i. The corresponding phase is the spinful Luttinger liquid phase. There is also a number of strong-coupling fixed points:λ_⊥→ 0 , λ_imp,s→ 0 , λ_imp,c→ 0, λ_imp→±∞ ⇒Strong impurity I, λ_⊥→ 0 , λ_imp,s→ 0 , λ_imp,c→∞, λ_imp→ 0⇒Strong impurity II,λ_⊥→ 0 , λ_imp,s→∞, λ_imp,c→ 0, λ_imp→ 0⇒Strong impurity III, λ_⊥→∞, λ_imp,s→ 0 , λ_imp,c→ 0, λ_imp→ 0⇒SDW I, λ_⊥→∞, λ_imp,s→ 0 , λ_imp,c→∞, λ_imp→ 0⇒SDW II, λ_⊥→ -∞, λ_imp,s→ 0 , λ_imp,c→ 0, λ_imp→∞ ⇒CDW.Note that the equations for λ_⊥ and K_s decouple from the rest, in the sense that the flow of λ_⊥ and K_s is not influenced by the other couplings. This result is very natural, since the local disorder term cannot affect the physics in the bulk. 
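The flow equations above are readily integrated numerically. In the minimal Python sketch below, the bare couplings K_s^0 = 1.05, λ_⊥^0 = 0.2, λ_imp^0 = 0.05 and the fixed value K_c = 0.8 are arbitrary illustrative choices, and the coefficient of the λ_⊥λ_imp cross term is read as 1/(4√(2π)), which is an assumption about the intended grouping in the equation above. The integration displays the decoupled flow of (K_s, λ_⊥) together with the slow initial growth of λ_imp when K_c + K_s < 2.

import numpy as np
from scipy.integrate import solve_ivp

Kc = 0.8                                   # charge Luttinger parameter (does not flow here)

def rg_rhs(ell, y):
    Ks, lp, li, lis, lic = y               # K_s, lambda_perp, lambda_imp, lambda_imp,s, lambda_imp,c
    dKs  = -0.5*Ks**2*lp**2
    dlp  = (2.0 - 2.0*Ks)*lp
    # cross-term coefficient taken as 1/(4*sqrt(2*pi)); this grouping is an assumption
    dli  = (1.0 - 0.5*(Ks + Kc))*li - 0.5*li*lic - 0.5*li*lis - lp*li/(4.0*np.sqrt(2.0*np.pi))
    dlis = (1.0 - 2.0*Ks)*lis - 0.25*li**2 - 0.5*lp*lis
    dlic = (1.0 - 2.0*Kc)*lic - 0.25*li**2
    return [dKs, dlp, dli, dlis, dlic]

y0 = [1.05, 0.2, 0.05, 0.0, 0.0]           # bare values K_s^0, lambda_perp^0, lambda_imp^0, 0, 0
sol = solve_ivp(rg_rhs, (0.0, 5.0), y0, dense_output=True, rtol=1e-9)

for ell in np.linspace(0.0, 5.0, 6):
    Ks, lp, li, lis, lic = sol.sol(ell)
    print(f"l = {ell:3.1f}   K_s = {Ks:.3f}   l_perp = {lp:.3f}   l_imp = {li:.4f}   l_imp,c = {lic:.5f}")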
This can be used to classify the strong-coupling phases above into three strong impurity phases, where the spin gap does not develop and three phases with a gap in the spin sector.If λ_⊥ flows to zero, the fixed points correspond to those encountered in the study of a single impurity in the Luttinger liquid phase <cit.>.There are then three possible strong-coupling phases. In phase I, the impurity term becomes relevant. Physically, the impurity potential perfectly reflects incoming electrons at zero temperature in the thermodynamic limit and the system is effectively cut into two parts, each being in the Luttinger liquid phase. The impurity phases II and III describe impurity potentials that perfectly transmit spin but no charge, or vice versa (see the discussion in  <cit.>). There is, however, one difference between these works and the current discussion. In the presence of the sine Gordon term, the bulk interaction K_s is also subject to renormalization. This renormalization slightly shifts the phase boundaries between the impurity phases in the K_s-K_c-plane compared to the model with λ_⊥ = 0. Since the sine Gordon term is irrelevant in this region of the phase diagram, the shift of the phase boundaries is very minor.Let us now discuss the opposite situation, when λ_⊥ grows under the RG flow. If the bare parameters of the model obey |λ_⊥^0| > 2 (K_s^0-1), the flow is towards a strong coupling fixed point where the system dynamically develops a spin gap. The nature of the fixed point then additionally depends on the sign of λ_⊥. For λ_⊥<0, the strong-coupling fixed point is of the CDW type. In this case the development of CDW order in the bulk goes hand in hand with the flow of the impurity to strong coupling. In the thermodynamic limit and at zero temperature, the impurity potential becomes perfectly reflecting and cuts the wire into two parts, each exhibiting a CDW order.On the other hand, if λ_⊥>0, the strong-coupling fixed point is of the SDW type. In the SDW phase the impurity potential always renormalizes to zero. Whether the system remains conducting or becomes insulating in the thermodynamic limit then depends on the coupling λ_imp,c that is generated by the impurity in second order. We find that the corresponding term becomes relevant for K_c<1/2, independently of the physics in the spin sector. Then there are two disordered SDW phases, depending on whether λ_imp,c grows or decreases under the flow. In the SDW I phase (K_c > 1/2), the system remains a ballistic conductor at zero temperature, while the system in the SDW II phase (K_c < 1/2) is insulating.The overall phase diagram for λ_⊥^0>0 (we have chosen λ^0_⊥ = 0.2) is depicted in Fig. <ref>. The present paper focusses on the SDW I phase.apsrev4-1
Margaret Beck Department of Mathematics and Statistics, Boston University, Boston MA 02215, USA [email protected] Anastasia Doikou, Simon J.A. Malham and Ioannis Stylianidis Maxwell Institute for Mathematical Sciences, and School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK [email protected], [email protected], [email protected] Partial differential systems with nonlocal nonlinearities: Generation and solutions Margaret Beck Anastasia Doikou Simon J.A. Malham Ioannis Stylianidis 30th January 2018 =================================================================================== We develop a method for generating solutions to large classes of evolutionary partial differential systems with nonlocal nonlinearities. For arbitrary initial data, the solutions are generated from the corresponding linearized equations. The key is a Fredholm integral equation relating the linearized flow to an auxiliary linear flow. It is analogous to the Marchenko integral equation in integrable systems. We show explicitly how this can be achieved through several examples including reaction-diffusion systems with nonlocal quadratic nonlinearities and the nonlinear Schrödinger equation with a nonlocal cubic nonlinearity. In each case we demonstrate our approach with numerical simulations. We discuss the effectiveness of our approach and how it might be extended. § INTRODUCTION Our concern is the generation of solutions to nonlinear partial differential equations. In particular, as is natural, to develop methods that generate such solutions from solutions to the corresponding linearized equations. Herein we do not restrict ourselves to soliton equations, nor indeed to integrable systems. We do not demand nor require the existence of a Lax pair. However our approach herein, as it stands at this time, only applies to classes of partial differential systems with nonlocal nonlinearities. Naturally we seek to extend it to more general systems and we discuss how this might be achieved in our conclusions. However let us return to what we have achieved thus far and intend to achieve herein. In Beck, Doikou, Malham and Stylianidis <cit.> we demonstrated the approach we developed indeed works for large classes of scalar partial differential equations with quadratic nonlocal nonlinearities. For example we demonstrated, for general smooth initial data g_0=g_0(x,y) with x,y∈ and some time T>0, how to construct solutions g∈ C^∞([0,T];C^∞(^2;)∩ L^2(^2;)) to partial differential equations of the form _tg(x,y;t)=d(_x)g(x,y;t)-∫_ g(x,z;t) b(_z)g(z,y;t)z. In this equation, d=d(_x) is a polynomial function of the partial differential operator _x with constant coefficients, while b is either a polynomial function b=b(_x) of _x with constant coefficients, or it is a smooth bounded function b=b(x) of x. Thus the linear term d(_x) g(x,y;t) is quite general, while the quadratic nonlinear term, whilst also quite general, has the nonlocal form shown. Hereafter for convenience we denote this nonlocal product by `⋆', defined for any two functions g,g^'∈ L^2(^2;) by (g⋆ g^')(x,y) = ∫_ g(x,z) g^'(z,y)z. Hence for example the nonlocal nonlinear term above can be expressed as (g⋆ (bg))(x,y;t). In this paper we extend our method in two directions. First we extend it to classes of systems of partial differential equations with quadratic nonlocal nonlinearities. 
For example we demonstrate, for general smooth initial data u_0=u_0(x,y) and v_0=v_0(x,y) with x,y∈ and some time T>0, how to constructsolutions u,v∈ C^∞([0,T];C^∞(^2;)∩ L^2(^2;)) to partial differential systems with quadratic nonlocal nonlinearities of the form_tu =d_11(_1)u+d_12(_1)v-u⋆(b_11u)-u⋆(b_12v)-v⋆(b_12u)-v⋆(b_11v), _tv =d_11(_1)v+d_12(_1)u-u⋆(b_11v)-u⋆(b_12u)-v⋆(b_12v)-v⋆(b_11u).In this formulation the operators d_11=d_11(_1),d_12=d_12(_1) are polynomials of _1 analogous to theoperator d above, the operation ⋆ is as defined aboveand b_11 and b_12 are analogous functions to the function b defined above. In the special case that d_11 and d_22 are both constant multiples of _1^2and b_11 and b_12 are scalar constants, then the system of equations for u and v above represent a system of reaction-diffusion equations with nonlocal nonlinear reaction/interaction terms.Second, with a slight modification, we extend our approach to classes ofpartial differential equations with cubic and higher odd degreenonlocal nonlinearities. In particular, for general smooth -valued initial data g_0=g_0(x,y) with x,y∈ and some time T>0, we demonstrate how to construct solutionsg∈ C^∞([0,T];C^∞(^2;)∩ L^2(^2;))to nonlocal nonlinear partial differential equations of the form (i=√(-1)),i _tg=d(_1)g+g⋆ f^⋆(g⋆ g^†).Here with a slight abuse of notation, we suppose(g⋆ g^†)(x,y)∫_g(x,z) g^*(y,z)z,where g^* denotes the complex conjugate of g. Our method works for any choice of d of the form d=ih(_1), where h isany constant coefficient polynomial with only even degree terms of its argument.Further, it works for any function f^⋆ with a power series representationwith infinite radius of convergence and real coefficients α_m of the formf^⋆(c)=i∑_m⩾0α_m c^⋆ m.The expression c^⋆ m represents the m-fold ⋆ productof c∈ L^2(^2;).Our method is based on the development of Grassmannian flows fromlinear subspace flows as follows; see Beck et al. <cit.>. Formally, suppose that Q=Q(t) and P=P(t) are linear operators satisfying the followinglinear system of evolution equations in time t, _tQ=AQ+BP and_tP=CQ+DP.We assume that A and C are bounded linear operators, while B and Dmay be bounded or unbounded operators. Throughout their time interval ofexistence say on [0,T] with T>0, we suppose Q-𝕀 and P to becompact operators, indeed Hilbert–Schmidt operators. Thus Q itselfis a Fredholm operator. If B and D are unbounded operators we suppose Q-𝕀 and P to lie in a suitable subset of the class of Hilbert–Schmidt operators characterised by their domains. We now posit a relation betweenP=P(t) and Q=Q(t) mediated through a compact Hilbert–Schmidt operator G=G(t)as follows,P=G Q.Suppose we now differentiate this relation with respect to time using the product rule and insert the evolution equations for Q=Q(t) and P=P(t) above. If we then equivalenceby the Fredholm operator Q=Q(t), i.e. post-compose by Q^-1=Q^-1(t)on the time interval on which it exists, we obtain the followingRiccati evolution equation for G=G(t),_tG=C+D G-G (A+B G).This demonstrates how certain classes of quadratically nonlinearoperator-valued evolution equations, i.e. the equation for G=G(t) above, can be generated from a coupled pair of linear operator-valued equations, i.e. the equations for Q=Q(t) and P=P(t) above. We think of the prescriptionjust given as the “abstract” setting in which Q=Q(t), P=P(t) and G=G(t) are operators of the classes indicated. Note that often we will take A=C=O and the equations for Q=Q(t) and P=P(t) above are _tQ=BP and _tP=DP. 
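The prescription just given is easily verified in finite dimensions, where Q and P are simply matrices. In the minimal Python sketch below, the matrices B and D, the initial data G_0 and the integration time are arbitrary illustrative choices; the script integrates _tQ=BP and _tP=DP with Q(0)=𝕀 and P(0)=G_0, and checks that PQ^{-1} coincides with the solution of the matrix Riccati equation _tG=DG-GBG, i.e. the case A=C=O just mentioned.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 4
B  = 0.3*rng.standard_normal((n, n))       # arbitrary illustrative coefficient matrices
D  = 0.3*rng.standard_normal((n, n))
G0 = 0.3*rng.standard_normal((n, n))       # initial data for the Riccati flow
T  = 0.2

def linear_rhs(t, y):                      # coupled linear system dQ/dt = B P, dP/dt = D P
    P = y[n*n:].reshape(n, n)
    return np.concatenate([(B @ P).ravel(), (D @ P).ravel()])

def riccati_rhs(t, y):                     # dG/dt = D G - G B G  (the case A = C = O)
    G = y.reshape(n, n)
    return (D @ G - G @ B @ G).ravel()

y0  = np.concatenate([np.eye(n).ravel(), G0.ravel()])   # Q(0) = identity, P(0) = G_0
lin = solve_ivp(linear_rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
Q, P = lin.y[:n*n, -1].reshape(n, n), lin.y[n*n:, -1].reshape(n, n)

ric = solve_ivp(riccati_rhs, (0.0, T), G0.ravel(), rtol=1e-10, atol=1e-12)
G   = ric.y[:, -1].reshape(n, n)

print("max |P Q^{-1} - G| =", np.abs(P @ np.linalg.inv(Q) - G).max())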
In this case, once we have solved the evolution equation for P=P(t), we can then solve the equation for Q=Q(t).We can generate cubic and higher odd degree classes of nonlinearoperator-valued evolution equations analogous to that for G=G(t) above by slightly modifying the procedure we outlined. Again, formally, suppose that Q=Q(t) and P=P(t) are linear operators satisfying the followinglinear system of evolution equations in time t, _tQ=f(PP^†) Q and_tP=DP,where P^†=P^†(t) denotes the operator adjoint to P=P(t)and f is a function with a power series expansion with infinite radius of convergence.The operator D may be a bounded or unbounded operator. In addition we require that Q=Q(t) satisfies the constraint QQ^†=𝕀 whileit exists. Indeed as above, throughout their time interval ofexistence say on [0,T] with T>0, we suppose Q-𝕀 and P to beHilbert–Schmidt operators. If D is unbounded then we suppose P liesin a suitable subset of the class of Hilbert–Schmidt operators characterised by its domain. We can think of the equations above as corresponding to the previous set of equations for Q=Q(t) and P=P(t) in the paragraph above with the choice B=C=O and A=f(PP^†). We emphasize however, once we havesolved the evolution equation for P=P(t), the evolution equation for Q=Q(t) is linear.We posit the same linear relation P=G Q between P=P(t) and Q=Q(t) as before, mediated through a compact Hilbert–Schmidt operator G=G(t). Then a direct analogous calculation to that above, differentiating this relation with respectto time and so forth, reveals that G=G(t) satisfies the evolution equation_tG=D G-G f(GG^†).The requirement that Q=Q(t) must satisfy the constraint QQ^†=𝕀 induces the requirement that f^†=-f. Hence again, we can generatecertain classes of cubic and higher odd degree nonlinearoperator-valued evolution equations, like that for G=G(t) just above, by first solving the operator-valued linear evolution equation for P=P(t) and then solving the operator-valued linear evolution equation for Q=Q(t). To summarize, we observe that in both procedures above,there were three essential components as follows, a linear: * Base equation: _tP=DP;* Auxiliary equation: _tQ=BP or _tQ=f(PP^†) Q;* Riccati relation: P=G Q.We now make an important observation and ask two crucial questions.First, we observe that solving each of the three linear equations above in turn actually generates solutions G=G(t) to the classes of operator-valued nonlinear evolution equations shown above. Second,in the appropriate context, can we interpret the operator-valued nonlinearevolution equations above as nonlinear partial differential equations?Third, if so, what classes of nonlinear partial differential equations fit into this context and can be solved in this way? In other words, can we solve the inverse problem: given a nonlinear partial differential equation, can we fit it into the context above (or an analogous context)and solve it for arbitrary initial data by solving the corresponding three linear equations above in turn? Briefly and formally, keeping technical details to a minimum for the moment, a simple example that addresses these issues, answers these questionspositively and outlines our proposed procedure is as follows. Supposeℚ is a closed linear subspace of L^2(;^2) and that ℙ is the complementary subspace to ℚ in thedirect sum decomposition L^2(;^2)=ℚ⊕ℙ. Suppose for each t∈[0,T] for some T>0 that Q=Q(t) is a Fredholm operatorfrom ℚ to ℚ of the form Q=𝕀+Q^', and that Q^'=Q^'(t) is a Hilbert–Schmidt operator. 
Further we assumeP(t)ℚ→ℙ is a Hilbert–Schmidt operator for t∈[0,T]. Technically, as mentioned above, we require Q^' and P to exist inappropriate subspaces of the class of Hilbert–Schmidt operators. However wesuppress this fact for now to maintain clarity and brevity (explicit details are given in the following sections). With this context while they exist,Q^'=Q^'(t) and P=P(t) can both be representedby integral kernels q^'=q^'(x,y;t) and p=p(x,y;t), respectively, where x,y∈ and t∈[0,T]. Suppose that D=_x^2 and B=1 so that the base and auxiliary equations have the form_tp(x,y;t)=_x^2p(x,y;t)and_tq^'(x,y;t)=p(x,y;t).The linear Riccati relation in this context takes the form of the linear Fredholm equationp(x,y;t)=g(x,y;t)+∫_ g(x,z;t) q^'(z,y;t)z.We can express this more succinctly as p=g+g⋆ q^' or p=g⋆(δ+q^'), where δ is the identity operator with respect to the ⋆ product.As described above in the “abstract” operator-valued setting, we candifferentiate the relation p=g⋆(δ+q^') with respect to time using the product rule and insert the base and linear equations _tp=_1^2p and _t q^'=p to obtain the following(_t g)⋆(δ+q^')=_tp-g⋆_tq^'=(_1^2g)⋆(δ+q^')-g⋆(g⋆(δ+q^'))=(_1^2g-g⋆ g)⋆(δ+q^').In the last step we utilized the associativity propertyg⋆(g⋆ q^')=(g⋆ g)⋆ q^' which is equivalent to the relabelling∫_ g(x,z;t)∫_ g(z,ζ;t) q^'(ζ,y;t) ζz =∫_∫_ g(x,ζ;t) g(ζ,z;t) ζ q^'(z,y;t)z. We now equivalence by Q=Q(t), i.e. post-compose by Q̃ Q^-1. This is equivalent to “multiplying” the equation above by ⋆(δ+q̃^')where (δ+q^')⋆(δ+q̃^')=δ and q̃^' is the integral kernel associated with Q̃-𝕀. We thus observe that g=g(x,y;t) necessarily satisfies the nonlocal nonlinear partial differential equation_t g=_1^2g-g⋆ gor more explicitly_t g(x,y;t)=_x^2g(x,y;t)-∫_ g(x,z;t) g(z,y;t)z.Further now suppose, given initial data g(x,y;0)=g_0(x,y) we wish to solve this nonlocal nonlinear partial differential equation. We observethat we can explicitly solve, in closed form via Fourier transform,for p=p(x,y;t) and then q^'=q^'(x,y;t).We take q^'(x,y;0)=0 and p(x,y;0)=g_0(x,y).This choice is consistent with the Riccati relation evaluated at time t=0. Then the solution of the Riccati relation by iteration or other means, and in some cases explicitly, generates the solution g=g(x,y;t) to the nonlocalnonlinear partial differential equation above corresponding to the initial data g_0. We have thus now seen the “abstract” setting and the connection tononlocal nonlinear partial differential equations and their solution, and thus started to lay the foundations to validating our claims at thevery beginning of this introduction.The approach we have outlined above, for us, has its roots in theseries of papers in numerical spectral theory in which Riccati equations were derived and solved in order to resolve numerical difficulties associated with linear spectral problems. These difficulties were associated with different exponential growth rates in the far-field. See for exampleLedoux, Malham and Thümmler <cit.>, Ledoux, Malham, Niesen and Thümmler <cit.>,Karambal and Malham <cit.> and Beck and Malham <cit.> for more details of the use of Riccati equations and Grassmann flows to help numerically evaluate the pure-point spectra of linear elliptic operators. In Beck et al.  <cit.> we turned the question around andasked whether the Riccati equations, which in infinite dimensions represent nonlinear partial differential equations, could be solved by the reverse process. 
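The simple example just outlined translates directly into a short computation. In the minimal Python sketch below, the Gaussian initial data, the periodic box of length L=20 used as a proxy for the real line, the grid size and the final time are all arbitrary illustrative choices, and the script is separate from the Matlab programs referred to later in the paper. It evolves p by the heat flow in Fourier space, sets q^' equal to the time integral of p (with q^'(x,y;0)=0), solves the discretized Fredholm relation p=g⋆(δ+q^') for g, and compares the result with a direct explicit time-stepping of _tg=_1^2g-g⋆ g; the two agree to within the discretization errors.

import numpy as np

L, N = 20.0, 128                       # periodic box of length L as a proxy for the real line
dx = L/N
x  = -L/2 + dx*np.arange(N)
k  = 2.0*np.pi*np.fft.fftfreq(N, d=dx)
ksq = k[:, None]**2
X, Y = np.meshgrid(x, x, indexing='ij')
g0 = np.exp(-(X**2 + Y**2))            # smooth decaying initial kernel g_0(x,y)
t  = 0.2

def star(a, b):                        # (a*b)(x,y) = int a(x,z) b(z,y) dz, rectangle rule
    return a @ b * dx

# base and auxiliary flows: p_t = p_xx (in the first argument), q'_t = p, q'(0) = 0
g0_hat = np.fft.fft(g0, axis=0)
p_hat  = np.exp(-ksq*t)*g0_hat
ksq_safe = np.where(ksq == 0.0, 1.0, ksq)
q_hat  = np.where(ksq == 0.0, t*g0_hat, (1.0 - np.exp(-ksq*t))/ksq_safe*g0_hat)
p  = np.real(np.fft.ifft(p_hat, axis=0))
qp = np.real(np.fft.ifft(q_hat, axis=0))

# Riccati (Fredholm) relation: p = g*(delta + q'), solved as a linear system for g
g_riccati = np.linalg.solve((np.eye(N) + dx*qp).T, p.T).T

# direct explicit time stepping of g_t = g_xx - g*g for comparison
def rhs(g):
    g_xx = np.real(np.fft.ifft(-ksq*np.fft.fft(g, axis=0), axis=0))
    return g_xx - star(g, g)

g, nsteps = g0.copy(), 1000
dt = t/nsteps
for _ in range(nsteps):
    g = g + dt*rhs(g)

print("max |g_riccati - g_direct| =", np.abs(g_riccati - g).max())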
The notion that integrable nonlinear partial differential equations canbe generated from solutions to the corresponding linearized equation and a linear integral equation, namely the Gel'fand–Levitan–Marchenko equation, goes back over forty years. For example it is mentioned inthe review by Miura <cit.>. Dyson <cit.> in particular showed the solution to the Korteweg de Vries equation can be generated from thesolution to the Gel'fand–Levitan–Marchenko equation along the diagonal. See for example Drazin and Johnson <cit.>. Further results of this nature for other integrable systemsare summarized in Ablowitz, Ramani and Segur <cit.>.Then through a sequence of papers Pöppe <cit.>,Pöppe and Sattinger <cit.> and Bauhardt and Pöppe <cit.>, carried through the programme intimated above. Also in a series of papers Tracy and Widom, see for example <cit.>, have also generated similar results. Besides those already mentioned, the papers by Sato <cit.>, Segal and Wilson <cit.>, Wilson <cit.>, Bornemann <cit.>, McKean <cit.>, Grellier and Gerard <cit.> and Beals and Coifman <cit.>, as well as the manuscript by Guest <cit.> were also highlyinfluential in this regard.We note that our second prescription above is analogousto that of classical integrable systems and theDarboux-dressing transformation.The notion of classical integrability in 1+1 dimensions is synonymouswith the existence of a Lax pair (L̃,D̃).The Lax pair may consist of differential operators dependingon the field, i.e. the solution of the associatednonlinear integrable partial differential equation, or field valued matrices,which can also depend on a spectral parameter.The Lax pair satisfies the so called auxiliary linear problemL̃Ψ = λΨand∂_t Ψ = D̃Ψ.Here Ψ is called the auxiliary function and λis the spectral parameter which is constant in time.Compatibility between the two equations above leads tothe zero curvature condition∂_t L̃ = [D̃,L̃],which generates the nonlinear integrable equation.The Darboux-dressing transformation is an efficient andelegant way to obtain solutions of the integrable equation using linear data; see Matveev & Salle <cit.>and Zakharov & Shabat <cit.>.Let us focus on the t-part of the auxiliary linear problemto make the connection with our present formulation more concrete. In the context of integrable systems the Darboux-dressingprescription takes the form of a: (i) Base equation or linearized formulation: _tP=DP; (ii) Auxiliary or modified or dressed equation: _tQ=D̃Q; and (iii) Riccati relation or dressing transformation: P=G Q.In the integrable systems frame D is a linear differential operatorand D̃ is a nonlinear differential operator that can bedetermined via the dressing process; seeZakharov & Shabat <cit.> and Drazin and Johnson <cit.>. The classic example is the Korteweg de Vries equation, in which case D= -4 ∂_x^3 andD̃ =-4 ∂_x^3 + 6 u(x,t) ∂_x+ ∂_x u(x,t). In the integrability context extra symmetriesand thus integrability is provided by the existence ofthe operator L̃ of the Lax pair. For theKorteweg de Vries equation L̃=-∂^2_x+u(x,t). That the field u satisfies the Korteweg de Vries equation is ensured by the zero curvature condition. In our formulation on the other hand, we do not assumethe existence of a Lax pair as we do not necessarily requireintegrability, thus less symmetry is presupposed. We focuson the time part of the Darboux transform described by theequations (i)–(iii) just above. 
Theyyield the equation for the transformation G(see also Adamopoulou, Doikou & Papamikos <cit.>):∂_t G = D G - G D̃.In the present general description the operators D and D̃ are known and both linear; at least in all the examples we consider herein. The operator G turns out to satisfy the associated nonlinear andnonlocal partial differential equation just above.Depending on the exact form of D̃ various cases of nonlinearitycan be considered as will be discussed in detail in what follows.Indeed, below we investigate various situations regarding the form of the nonlinearoperator D̃, which give rise to qualitatively differentnonlocal, nonlinear equations. These can be seen as nonlocalgeneralizations of well known examples of integrable equations,such as the Korteweg de Vries and nonlinear Schrödinger equations and so forth.Lastly, we remark that Riccati systems play a central role in optimal control theory.In particular, the solution to a matrix Riccati equation provides the optimalcontinuous feedback operator in linear-quadratic control. In such systems thestate is governed by a linear system of equations analogous to those for Q and P above, and the goal is to optimize a given quadratic cost function. See for example Martin and Hermann <cit.>, Brockett and Byrnes <cit.>and Hermann and Martin <cit.> for more details.Our paper is structured as follows. In <ref> we outline our procedure for generating solutions to partial differential systems with quadratic nonlocal nonlinearitiesfrom the corresponding linearized flow. We then examine theslightly modified procedure for generating such solutions forpartial differential systems with cubic and higher odd degreenonlocal nonlinearities in <ref>.In <ref> we apply our method to a series of six examples, including a nonlocal reaction-diffusion system, the nonlocalKorteweg de Vries equation andtwo nonlocal variants of the nonlinear Schrödinger equation, one with cubic nonlinearity and one with a sinusoidal nonlinearity. For each of the examples just mentioned we provide numerical simulationsand details of our numerical methods. Using our method we also derive an explicit form for solutions to a special case of thenonlocal Fisher–Kolmogorov–Petrovskii–Piskunov equation from biological systems. Finally in <ref>we discuss extensions to our method we intend to pursue. We provide the Matlab programs we used for our simulations in the supplementary electronic material. § NONLOCAL QUADRATIC NONLINEARITIES In this section we review and at the same time extend to systemsour Riccati method for generating solutions to partial differential equations with quadratic nonlocal nonlinearities. For furtherbackground details, see Beck et al. <cit.>. Our basic context is as follows. We suppose we have a separableHilbert spacethat admits a direct sum decomposition=ℚ⊕ℙ into closed subspacesand . The set of all subspaces `comparable' in size tois called the Fredholm Grassmann manifold (,ℚ).Coordinate patches of (,ℚ)are graphs of operators ℚ→ℙ parametrized by, say, G.See Sato <cit.> and Pressley and Segal <cit.> for more details. We consider a linear evolutionary flow on the subspace which can be parametrized by two linear operators Q(t)→and P(t)→ for t∈[0,T] for some T>0. More precisely, we suppose the operator Q=Q(t) is a compact perturbation of the identity,and thus a Fredholm operator. Indeed we assume Q=Q(t) has the form Q=𝕀+Q^' where `𝕀' is the identity operator on . 
We assume for some T>0 that Q^'∈ C^∞([0,T];𝔍_2(;)) andP∈ C^∞([0,T];𝔍_2(;)) where𝔍_2(;) and 𝔍_2(;) denote theclass of Hilbert–Schmidt operators from →and →, respectively. Note that 𝔍_2(;)and 𝔍_2(;) are Hilbert spaces. Our analysis, as we see presently, involves two, in general unbounded,linear operators D and B. In our equations these operators act on P, and since for each t∈[0,T] we would likeDP∈𝔍_2(ℚ;ℙ)and BP∈𝔍_2(ℚ;ℚ), we will assume thatP∈ C^∞([0,T];Dom(D)∩Dom(B)). Here Dom(D)⊆𝔍_2(;) andDom(B)⊆𝔍_2(;) represent the domains of D and B in 𝔍_2(;). Hence in summary, we assume P∈ C^∞([0,T];Dom(D)∩Dom(B)) and Q^'∈ C^∞([0,T];𝔍_2(;)).Our analysis also involves two bounded linear operators A=A(t) and C=C(t). Indeed we assume that A∈ C^∞([0,T];𝔍_2(;)) and C∈ C^∞([0,T];𝔍_2(;)). We are now in a position to prescribe the evolutionary flow of the linear operators Q=Q(t) and P=P(t) as follows. We assume there exists a T>0 such that, for the linear operators A, B, C and Ddescribed above, the linear operatorsP∈ C^∞([0,T];Dom(D)∩Dom(B)) andQ^'∈ C^∞([0,T];𝔍_2(;)) satisfy the linear system of operator equations _tQ=AQ+BP, and_tP=CQ+DP,where Q=𝕀+Q^'. We take Q^'(0)=O at time t=0 so that Q(0)=𝕀. We call the evolution equation for P=P(t) the base equationand the evolution equation for Q=Q(t) the auxiliary equation.We note the following: (i) Nomenclature: The base and auxiliary equationsabove are a coupled pair of linear evolution equations for the operators P=P(t) and Q=Q(t). In many applications and indeed for all those in this paper C=O. In this case theequation for P=P(t) collapses to the stand alone equation _tP=DP. For this reason we call it the base equation and we think of the equation prescribing the evolution of Q=Q(t) as the auxiliary equation; and (ii) In practice: In all our examples in <ref> we can solve the base and auxiliary equations for P=P(t)and Q=Q(t) giving explicit closed form solution expressionsfor all t⩾0.In addition to the linear base and auxiliary equations above, we posit a linear relation between P=P(t) and Q=Q(t) as follows. We assume there exists a T>0 such that, forP∈ C^∞([0,T];Dom(D)∩Dom(B)) andQ^'∈ C^∞([0,T];𝔍_2(;)), there exists a linear operatorG∈ C^∞([0,T];Dom(D)∩Dom(B)) satisfying the linear Fredholm equationP=G Q,where Q=𝕀+Q^'. We call this the Riccati relation. The existence of a solution to the Riccati relation is governed by theregularized Fredholm determinant det_2(𝕀+Q^') for theHilbert–Schmidt class operator Q^'=Q^'(t). For any linear operator Q^'∈𝔍_2(;) this regularized Fredholmdeterminant is given by (see Simon <cit.> and Reed and Simon <cit.>)det_2(𝕀+Q^') exp(∑_ℓ⩾2(-1)^ℓ-1/ℓtr (Q^')^ℓ),where `tr' represents the trace operator. We note that Q^'_𝔍_2(;)^2≡tr |Q^'|^2. The operator 𝕀+Q^' is invertible if and only ifdet_2(𝕀+Q^')≠0; again see Simon <cit.>and Reed and Simon <cit.> for more details. Assume there exists a T>0 such that P∈ C^∞([0,T];Dom(D)∩Dom(B)), Q^'∈ C^∞([0,T];𝔍_2(;)) and Q^'(0)=O. Then there exists a T^'>0 with T^'⩽ Tsuch that for t∈[0,T^'] we havedet_2(𝕀+Q^'(t))≠0 and Q^'(t)_𝔍_2(;)<1. In particular, there exists a unique solutionG∈ C^∞([0,T^'];Dom(D)∩Dom(B)) to the Riccati relation.Since Q^'∈ C^∞([0,T];𝔍_2(;)) and Q^'(0)=O, by continuity there exists a T^'>0with T^'⩽ T such that Q^'(t)_𝔍_2(;)<1for t∈[0,T^']. Similarly by continuity, since Q^'(0)=O, for a short time at least we expect det_2(𝕀+Q^')≠0. We can however assess this as follows. 
Using theregularized Fredholm determinant formula above, we observe that|det_2(𝕀+Q^')-1| ⩽∑_n⩾ 11/n!(∑_ℓ⩾21/ℓtr |Q^'|^ℓ) ⩽exp(∑_ℓ⩾21/ℓQ^'_𝔍_2(;)^ℓ)-1.In the last step we used thattr |Q^'|^ℓ⩽(tr |Q^'|^2)^ℓ/2for all ℓ⩾2. The series in the exponent in the final term above converges ifQ^'_𝔍_2(;)<1. We deduce that providedQ^'_𝔍_2(;) is sufficiently small thenits regularized Fredholm determinant is bounded away from zero. By continuity there exists a T^', possibly smaller than thechoice above, such that for all t∈[0,T^'] we know Q^'(t)_𝔍_2(;)is sufficiently small and the determinant is bounded away from zero.Next, we set H_Dom(D)∩Dom(B)DH_𝔍_2(;)+BH_𝔍_2(;) for any H∈Dom(D)∩Dom(B), while · _op denotes the operator norm for bounded operators on . We observe that for any n∈ℕ we haveP(t)(Q^'(t))^n_Dom(D)∩Dom(B) ⩽P(t)_Dom(D)∩Dom(B)(Q^'(t))^n_op⩽P(t)_Dom(D)∩Dom(B)Q^'(t)^n_op⩽P(t)_Dom(D)∩Dom(B)Q^'(t)^n_𝔍_2(;).Hence we observe that P(t)(𝕀+∑_n⩾1(-1)^n(Q^'(t))^n)_Dom(D)∩Dom(B)⩽P(t)_Dom(D)∩Dom(B)(1 +∑_n⩾1Q^'(t)^n_𝔍_2(;))⩽P(t)_Dom(D)∩Dom(B)(1-Q^'(t)_𝔍_2(;))^-1.Hence using the operator series expansion for (𝕀+Q^'(t))^-1we observe we have established that P(t)(𝕀+Q^'(t))^-1_Dom(D)∩Dom(B)⩽P(t)_Dom(D)∩Dom(B)(1-Q^'(t)_𝔍_2(;))^-1.Hence there exists a T^'>0 such thatfor each t∈[0,T^'] we know G(t)=P(t)(𝕀+Q^'(t))^-1 exists, is unique, and in factG∈ C^∞([0,T^'];Dom(D)∩Dom(B)).We have already remarked that we set Q^'(0)=O so that Q(0)=𝕀. Consistent with the Riccati relation we hereafter set P(0)=G(0). Our first main result in this section is as follows. Given initial data G_0∈Dom(D)∩Dom(B) we set Q^'(0)=O and P(0)=G_0. Suppose there exists a T>0 such that the linear operatorsP∈ C^∞([0,T];Dom(D)∩Dom(B)) and Q^'∈ C^∞([0,T];𝔍_2(;)) satisfy the linear base and auxiliary equations. We choose T>0 so that fort∈[0,T] we have det_2(𝕀+Q^'(t))≠0 and Q^'(t)_𝔍_2(;)<1. Then there exists a unique solutionG∈ C^∞([0,T];Dom(D)∩Dom(B)) to the Riccati relation which necessarily satisfies G(0)=G_0and the Riccati evolution equation_tG=C+DG-G (A+BG). By direct computation, differentiating the Riccati relation P=G Qwith respect to time using the product rule, using the base andauxiliary equations and feeding back through the Riccati relation,we find (_tG)Q=_tP-G _tQ=(C+DG) Q-(G (A+BG)) Q.Equivalencing with respect to Q, i.e. postcomposing by Q^-1, establishes the result. We assume throughout this paper that C=C(t) is a bounded operator, indeed that C∈ C^∞([0,T];𝔍_2(;)). In fact in every application in <ref> we take C=O. However in general C=C(t) would represent some non-homogeneous forcing in the Riccati equation satisfied by G=G(t). Further, inDoikou, Malham & Wiese <cit.> we apply our methods here to stochastic partial differential equations. One example therein features additive space-time white noise. In that case the term C=C(t) represents the non-homogenous space-timewhite noise forcing term and we must thus allow for C=C(t)to be an unbounded operator.We now turn our attention to applications of Theorem <ref> above and demonstrate how to find solutions to a large class of partial differential systems with nonlocal quadratic nonlinearities. Guided by our results above, we now suppose the classes of operators we have consideredthusfar to be those with integral kernels on ×. 
For x,y∈ andt⩾0, suppose the functions p=p(x,y;t) and q^'=q^'(x,y;t)are matrix valued, with p∈^n^'× n and q^'∈^n× n for some n,n^'∈ℕ, and they satisfy the linear baseand auxiliary equations_t p(x,y;t)=d(∂_1) p(x,y;t) and_t q^'(x,y;t)=b(x) p(x,y;t).Here the unbounded operator d=d(_1) is a constant coefficientscalar polynomial function of the partial differential operator with respect to the first component _1, while b=b(x) is a smoothbounded square-integrable ^n× n^'-valued function of x∈.We can explicitly solve these equations for p=p(x,y;t) andq^'=q^'(x,y;t) in terms of their Fourier transforms as follows. Note we use the following notation for the Fourier transform of anyfunction f=f(x,y) and its inverse: f(k,κ) ∫_^2 f(x,y)e^2πi(kx+κ y)xy and f(x,y) ∫_^2f(k,κ)e^-2πi(kx+κ y)k κ.Let p=p(k,κ;t) andq^'=q^'(k,κ;t) denote the two-dimensional Fourier transforms of the solutions to thelinear base and auxiliary equations just above. Assume that q^'(x,y;0)≡0 and p(x,y;0)=p_0(x,y). Then for all t⩾0 the functions p andq^' are explicitly given byp(k,κ;t) =exp(d(2πik) t) p_0(k,κ) andq^'(k,κ;t)=∫_b(k-λ) I(λ;t)p_0(λ,κ) λ,where I(k;t) (exp(d(2πik) t)-1)/d(2πik) and indeed q^'(x,y;t)=b(x)∫_ I(x-z,t)p_0(z,y)z.Taking the two-dimensional Fourier transform of the base equationwe generate the decoupled equation_tp(k,κ;t)=d(2πik)p(k,κ;t) whose solution is the form for p(k,κ;t) shown. Then take the Fourier transform of theauxiliary equation to generate the equation_tq^'(k,κ;t) =∫_b(k-λ) p(λ,κ;t) λ.Substituting in the explicit form for p=p(k,κ;t) and integrating with respect to time, usingq^'(k,κ;0)=0, generates the form forq^'=q^'(k,κ;t) shown. We suppose here the separable Hilbert spaceℍ=L^2(ℝ;ℝ^n)×(Dom(D)∩Dom(B))with Dom(D)∩Dom(B)⊆ L^2(ℝ;ℝ^n^') where n and n^' are thedimensions above. Then ℙ and ℚ are closedsubspaces in the direct sum decomposition ℍ=ℚ⊕ℙ; see Beck et al.  <cit.>. The functions inare ^n-valued while those inare ^n^'-valued. By standard theory, Q^'(t)∈𝔍_2(;)and P(t)∈𝔍_2(;) if and only if there exist kernelfunctions q^'(·,·;t)∈ L^2(^2;^n× n) andp(·,·;t)∈ L^2(^2;^n^'× n) with the action of Q^'(t) and P(t) given through q^' and p,respectively. Further we know thatQ^'(t)_𝔍_2(;)=q^'(·,·;t)_L^2(^2;^n× n)and P(t)_𝔍_2(;)=p(·,·;t)_L^2(^2;^n^'× n). For more details see for example Reed & Simon <cit.> or Karambal & Malham <cit.>. The linear base and auxiliary equations above correspond to thecase when A=C=O, D=d(_1) and B is given by the bounded multiplicative operator b=b(x). Recall that in our “abstract” formulation above we required that P∈ C^∞([0,T];Dom(D)∩Dom(B)).The explicit form for p=p(x,y;t) given inLemma <ref> reveals thatP will only have this property for certain classes of operators d=d(_1). For example suppose d=d(_1) is diffusive so that it takes the form of a polynomial with only even degree terms in _1 and the real scalar coefficient of the degree 2N term is of the form (-1)^N+1α_2N. In this casethe exponential term exp(d(2πik) t) decays exponentiallyfor all t>0. We could also include dispersive forms for d. For exampled=_1^3, for which the exponential term exp(d(2πik) t)remains bounded for all t>0. We also note that for such diffusive or dispersive forms for d=d(_1) the integral kernel function p=p(x,y;t) is in fact smooth. Also recall from our “abstract” formulationwe require Q^'∈ C^∞([0,t];𝔍_2(;)). The explicit form for q^'=q^'(x,y;t) given inLemma <ref> reveals thatits time dependence is characterized through the term I(k;t). 
For the diffusive or dispersive forms for d=d(_1) just discussed we observe that I(k;t)→-1/d(2πik) for all k≠0 while for the singular value k=0 the term I(0;t) growslinearly in time. Thus in such cases, while we know that for some time T>0 for t∈[0,T] we have Q^'(t)_𝔍_2(;)=q^'(·,·;t)_L^2(^2;^n× n) =q^'(·,·;t)_L^2(^2;^n× n) is bounded, we also have q^'(·,·;t)_L^2(^2;^n× n)=∫_^4p_0^∗(λ,κ) I^∗(λ;t) b^∗(k-λ) b(k-ν) I(ν;t)p_0(ν,κ) λ ν κk⩽∫_b_0^∗(k-·) b_0(k-·)k_L^∞(^2;^n× n)··∫_p_0^∗(·,κ) p_0(·,κ) κ_L^∞(^2;^n× n)I(t)_L^1(;)^2.Hence provided the terms on the right are bounded withI(t)_L^1(;) bounded for all t>0, thenq^'(·,·;t)_L^2(^2;^n× n) will be bounded for all t>0, and indeed smooth.However how far the interval of time on whichdet_2(𝕀+Q^'(t))≠0 and Q^'(t)_𝔍_2(;)<1 extends, for now, we treat on case by case basis.Given initial datag_0∈ C^∞(^2;^n^'× n)∩ L^2(^2;^n^'× n) for some n,n^'∈ℕ, suppose p=p(x,y;t) and q^'=q^'(x,y;t) are the solutions to the linear base and auxiliary equations from Lemma <ref> for which p_0≡ g_0and q^'(x,y;0)≡0. Let Dom(d) denote the domain of the operator d=d(_1) and suppose it is of the diffusive or dispersive form described in Remark <ref>. Then there exists a T>0 such that the solutiong∈ C^∞([0,T];Dom(d)∩ L^2(^2;^n^'× n))to the linear Fredholm equationp(x,y;t)=g(x,y;t)+∫_g(x,z;t) q^'(z,y;t)zsolves the evolutionary partial differential equation withquadratic nonlocal nonlinearities of the form_tg(x,y;t)=d(_x) g(x,y;t)-∫_g(x,z;t) b(z) g(z,y;t)z. That for some T>0 there exists a solutiong∈ C^∞([0,T];Dom(d)∩ L^2(^2;^n^'× n))to the linear Fredholm equation (Riccati relation) shown is a consequenceof Lemma <ref> and Remark <ref>. The solution g is the integral kernel of G. That this solution g to the Riccati relation solves the evolutionary partial differential equation with the quadraticnonlocal nonlinearity shown is a direct consequence of the Quadratic Degree Evolution Equation Theorem <ref>.We can also now think of this result in the following way. First differentiate the above linear Fredholm equation inthe Corollary with respect to time using the product rule, and use that p and q^' satisfy the linear baseand auxiliary equations so that_tg(x,y;t)+∫__tg(x,z;t) q^'(z,y;t)z=d(_1)p(x,y;t)-∫_g(x,z;t) b(z)p(z,y;t)z.Second replacing all instances of p using the linear Fredholm equation above and swapping integration labels we obtain_tg(x,y;t)+ ∫__tg(x,z;t) q^'(z,y;t)z=d(_x)g(x,y;t)+∫_d(_x)g(x,z;t) q^'(z,y;t)z -∫_g(x,z;t) b(z)g(z,y;t)z -∫_(∫_g(x,ζ;t) b(ζ)g(ζ,z) ζ) q^'(z,y;t)z.We can express this in the form∫_(_tg(x,z;t)-d(_x)g(x,z;t)+∫_g(x,ζ;t) b(ζ)g(ζ,z;t) ζ) (δ(z-y)+q^'(z,y;t))z=0.Third we postmultiply by `δ(y-η)+q̃^'(y,η;t)' for some η∈. This is the kernel corresponding to the inverse operator𝕀+Q̃^' of 𝕀+Q^'. Integrating over y∈ gives the result for g=g(x,η;t). This derivation follows that in Beck et al. <cit.>for scalar partial differential equations. Some observations are as follows: (i) Nonlocal nonlinearities with derivatives: Starting with the linear base and auxiliary equations for p=p(x,y;t) and q^'=q^'(x,y;t), we could havetaken b to be any constant coefficient polynomial of _1.With minor modifications, all of the main arguments above still apply. Our explicit solution for q^'=q^'(x,y;t) will be slightly more involved. One of our examples in <ref> is the nonlocal Korteweg de Vries equation for which b=_1;(ii) Smooth solutions: All derivatives are with respect to the first parameter x. 
Differentiating the Riccati relation gives _xp(x,y;t)=_xg(x,y;t)+∫__xg(x,z;t) q^'(z,y;t)z. Hence the regularity of the solution g is directly determined by the regularity of the solution of the base equation p for the time the Riccati relation is solvable, in particular while det_2(𝕀+Q^'(t))≠0 and Q^'(t)_𝔍_2(;)<1. Hence ifp is smooth on this interval, then the solution g is smooth on this interval; (iii) Time as a parameter: Importantly, when we can explicitlysolve for p=p(x,y;t) and q^'=q^'(x,y;t), as we do above,then time t plays the role of a parameter. We choose the time at which we wish to compute the solution and we solve thelinear Fredholm equation to generate the solution g for that time t;(iv) Non-homogeneous coefficients:In principle, if d and b are polynomials of _x,the coefficients in these polynomial could also be functions of x. Though we can in principle always find series solutions to thelinear base and auxiliary equations, we would now have the issueas to whether we can derive explicit formulae for p and q^'. In such cases we may need to evaluate a series ornumerically integrate in time to obtain p and q^'.Thus we cannot compute solutions as simply as in the senseoutlined in Item (iii) just above. An important example is that of evolutionary stochastic partial differential equations with non-local nonlinearities. The presence of Wiener fieldsin such equations as non-homogeneous additive termsor multiplicative factors means that the base equation must be solved numerically. For example the base equationmight be the stochastic heat equation. See Doikou, Malham & Wiese <cit.> for more details; (v) Complex valued solutions: In generalg could be complex matrix valued; see <ref> next; (vi) Domains: If x,y∈𝕀 where 𝕀 is afinite or semi-infinite interval on , then the above calculations go through, see Beck et al. <cit.> andalso Doikou et al. <cit.> where 𝕀=𝕋,the torus with period 2π; and (vii) Multi-dimensional domains:If x,y∈^n for some n∈ℕ and d=d(Δ_1)is a polynomial function of the Laplacian acting on the first argument, then in principle the calculations above go through;see our Conclusions <ref>. § NONLOCAL CUBIC AND HIGHER ODD DEGREE NONLINEARITIESWe assume the same set-up as in the first two paragraphs in <ref> up to the point when we discuss the unbounded linear operator D. In this section we assume ⊆. We still assume that D is in general an unbounded, linear operator,however we set B=O and C=O while A is a bounded operator which wediscuss presently. We assume there exists a T>0 such that for each t∈[0,T] we haveP∈ C^∞([0,T];Dom(D)) andQ^'∈ C^∞([0,T];𝔍_2(;)). Our analysis in this section also involves the bounded linear operatorA∈𝔍_2(;) which depends on another bounded linear operator as follows. For a known operator H∈𝔍_2(;)we assume A has the form A=f(HH^†) where the function f is given byf(x)=i∑_m⩾0α_mx^m,where i=√(-1) and the α_m are real coefficients.Note H^† denotes the operator adjoint to H. We further assume this power series expansion has an infinite radiusof convergence. In this section we assume the evolutionary flow of the linear operators Q=Q(t) and P=P(t) is as follows.We assume there exists a T>0 such that for the linear operators A and D described above, the linear operatorsP∈ C^∞([0,T];Dom(D)) andQ^'∈ C^∞([0,T];𝔍_2(;)) satisfy the linear system of operator equations_tP=DP, and_tQ=f(PP^†) Q,where Q=𝕀+Q^'. We take Q^'(0)=O at time t=0so that Q(0)=𝕀. 
We call the evolution equation for P=P(t) the base equation and the evolution equation for Q=Q(t)the auxiliary equation.Note we first solve the base equation forP∈ C^∞([0,T];Dom(D)). Then with P given, we observe that f=f(PP^†) is a given linear operator in the auxiliary equation.Assume for some T>0 that P∈ C^∞([0,T];Dom(D)) andQ^'∈ C^∞([0,T];𝔍_2(;)) satisfy the linear base and auxiliary equations above.Then Q(0)=𝕀 implies QQ^†=𝕀 for all t∈[0,T]. By definition f^†=-f, and using the product rule_t(QQ^†)=f (QQ^†)-(QQ^†) f.Thus QQ^†=𝕀 is a fixed point of this flow and Q(0)=𝕀 impliesQQ^†=𝕀 for all t∈[0,T].In addition to the linear base and auxiliary equations above, weagain posit a linear relation between P=P(t) and Q=Q(t), theRiccati relation P=G Q, exactly as in <ref>. Indeed the results of Lemma <ref> for the existence anduniqueness of a solution G to the Riccati relation apply here.Further, as previously, hereafter we set P(0)=G(0).Our main result of this section is as follows. Given initial data G_0∈Dom(D) we set Q(0)=𝕀 and P(0)=G_0. Suppose there exists a T>0 such that the linear operatorsP∈ C^∞([0,T];Dom(D)) and Q-𝕀∈ C^∞([0,T];𝔍_2(;)) satisfy the linear base and auxiliary equations above. We choose T>0 so that fort∈[0,T] we have det_2(Q(t))≠0 and Q^'(t)_𝔍_2(;)<1. Then there exists a unique solutionG∈ C^∞([0,T];Dom(D)) to the Riccati relation which necessarily satisfies the evolution equation_tG=DG-G f(GG^†).First, using the Riccati relation and that QQ^†=𝕀, we have PP^†=GG^† and thus f(PP^†)=f(GG^†) for all t∈[0,T].Second, differentiating the Riccati relation with respect to time using the product rule and then substituting for P using the Riccati relation, we have (_tG)Q=_tP-G _tQ=DG Q-G f(PP^†) Q=DG Q-G f(GG^†) Q. As previously, equivalencing by Q, i.e. postcomposing by Q^-1,establishes the result. We now consider applications of Theorem <ref> aboveand demonstrate how to find solutionsto classes of partial differential systems with nonlocal odd degree nonlinearities.For x,y∈ and t⩾0, suppose the functions p=p(x,y;t) and q=q(x,y;t)are scalar complex valued, with p∈ and q∈, and they satisfy the linear baseand auxiliary equations _t p=-ih(∂_1)p and_t q=f^⋆(p⋆ p^†)⋆ q.Here h=h(_1) is a polynomial function of _1with only even degree terms of its argument and constant coefficients. By analogy with <ref>, here we have made the choiced(_1)=-ih(∂_1).The nonlocal product `⋆' is defined for any twofunctions w,w^'∈ L^2(^2;) by(w⋆ w^')(x,y)∫_ w(x,z) w^'(z,y)z.Hence the expression p⋆ p^† thus represents the kernel function (p⋆ p^†)(x,y;t)∫_ p(x,z;t)p^*(y,z;t)z,Note here we have used that if an operator has integral kernel p=p(x,y;t),its adjoint has integral kernel p^∗(y,x;t), where the `∗' in generaldenotes complex conjugate transpose. The expression f^⋆(c),for some kernel function c, represents the serieswith real coefficients α_m given by f^⋆(c)=i∑_m⩾0α_m c^⋆ m,where c^⋆ m is the m-fold product c⋆⋯⋆ c.We assume this power series has an infinite radius of convergence. In the linear auxiliary equation we take c=p⋆ p^†.It is natural to take the Fourier transform of thebase and auxiliary equations with respect to x and y.The correspondingequations for p=p(k,κ;t)and q=q(k,κ;t) are_t p=-ih(2πik) pand_t q=f^⋆(p⋆p^†) ⋆q.Here we have used Parseval's identity for Fourier transforms which implies(w⋆ w^')(k,κ) =∫_w(k,λ) w^'(λ,κ) λ =(w⋆w^')(k,κ)for any two functions w,w^'∈ L^2(^2;). 
Hence we see that for f^⋆=f^⋆(c), we havef^⋆=i∑_m⩾0α_m c^⋆ m⇔f^⋆=i∑_m⩾0α_m c^⋆ m.Further we note that if q(x,y;t)=δ(x-y)+q^'(x,y;t) thenq(k,κ;t)=δ(k-κ)+q^'(k,κ;t). The Dirac delta function δ here also represents the identitywith respect to the `⋆' product so that for any w∈ L^2(^2;)we have w⋆δ=δ⋆ w=w. With all this in hand, we can in fact explicitly solve forp=p(k,κ;t) andq=q(k,κ;t) as follows. Let p=p(k,κ;t) andq=q(k,κ;t) denote the two-dimensional Fourier transforms of the solutions to thelinear base and auxiliary equations just above. Assume that q(x,y;0)=δ(x-y) and p(x,y;0)=p_0(x,y). Then for all t⩾0 the functions p andq are explicitly given byp(k,κ;t) =exp(-it h(2πik)) p_0(k,κ), q(k,κ;t) =exp(-it h(-2πik))·exp^⋆(t(f^⋆(p_0⋆p_0^†) +ih·δ))(k,κ;t),where naturally exp^⋆(c)=δ+c+1/2c^⋆2+1/6c^⋆3+⋯.The explicit form for p=p(k,κ;t) follows directly from the Fourier transformed version of the base equation. We now focus on the auxiliary equation.Consider a typical term say c^⋆ m in f^⋆, with cp⋆p^†.Using Parseval's identity the termc^⋆ m=(p⋆p^†)^⋆ m has the explicit formc^⋆ m(ν_0,ν_m;t)=∫_^2m-1(∏_j=1^m p(ν_j-1,λ_j;t)p^*(ν_j,λ_j;t)) λ_1⋯λ_m ν_1⋯ν_m-1.If we insert the explicit solution for p into this expression and use that h is a polynomial of even degree terms only, we findc^⋆ m(ν_0,ν_m;t)= exp(-it(h(2πiν_0)-h(-2πiν_m))) ×∫_^2m-1(∏_j=1^m p_0(ν_j-1,λ_j)p_0^*(ν_j,λ_j)) λ_1⋯λ_m ν_1⋯ν_m-1.Hence we deduce that (f^⋆(p⋆p^†))(ν_0,ν_m;t) =exp(-it(h(2πiν_0)-h(-2πiν_m))) (f^⋆(p_0⋆p_0^†))(ν_0,ν_m).The auxiliary equation thus has the explicit form_tq(k,κ;t)=∫_exp(-it(h(2πik)-h(-2πiν))) (f^⋆(p_0⋆p_0^†))(k,ν) q(ν,κ;t) ν.By making a change of variables we can convert this linear differential equation for q=q(k,κ;t) into a constant coefficient linear differential equation. Indeed we setθ(k,κ;t) exp(it h(-2πik))q(k,κ;t).Combining this definition with the linear differential equation for q=q(k,κ;t) above, we find_tθ(k,κ;t) =∫_(f^⋆(p_0⋆p_0^†))(k,ν) θ(ν,κ;t) ν +i h(-2πik)θ(k,κ;t),where, crucially, we again used that h(-2πik)-h(2πik)≡0 as h is a polynomial of even degree terms. Hencethe evolution equation for θ is the linear constantcoefficient equation_tθ=(f^⋆(p_0⋆p_0^†) +ih·δ)⋆θ,Note the coefficient function depends only on the initial data p_0.Further note we have used that((ih δ)⋆θ)(k,κ;t) =i h(-2πik) ∫_δ(k-ν)θ(ν,κ;t) ν =i h(-2πik)θ(k,κ;t).Let us now focus on the initial data. Recall that we choose q(x,y;0)=δ(x-y) corresponding to q^'(x,y;0)=0. Hence we haveq(k,κ;0)=θ(k,κ;0)=δ(k-κ).The solution to the linear constant coefficient equation forθ=θ(k,κ;t), by iteration,can thus be expressed in the form θ(k,κ;t) =exp^⋆(t(f^⋆(p_0⋆p_0^†) +ih·δ))(k,κ;t),where exp^⋆(c)=δ+c+1/2c^⋆2+1/6c^⋆3+⋯. We can recover q from the definition forθ above.The iterative procedure alluded to in the proof just above ensures the correct interpretation of the terms in the exponentialexpansion exp^⋆ in the expression for q=q(k,κ;t)above. Hence for example we have (f+ih·δ)^⋆2= f⋆f +f⋆(ih·δ) +ih·f +(ih)·(ih)·δ.Here we suppose ℍ=L^2(ℝ;ℂ)×Dom(D)with Dom(D)⊆ L^2(ℝ;ℂ) andℍ=ℚ⊕ℙ with ℙand ℚ closed subspaces of ; see Beck et al.  <cit.>. The functions inandare both -valued. As in Remark <ref>, with Q(t)=𝕀+Q^'(t), the operators Q^'(t)∈𝔍_2(;)and P(t)∈𝔍_2(;) can be characterized, respectively,by kernel functions q^'(·,·;t)∈ L^2(^2;) andp(·,·;t)∈ L^2(^2;). Further we have the usual isometry of Hilbert–Schmidt and L^2(^2;)-norms. 
The linear base and auxiliaryequations for p=p(x,y;t) and q^'=q^'(x,y;t) are the versions of the linear base and auxiliary equations inDefinition <ref> written in terms of theirintegral kernels; with q(x,y;t)=δ(x-y)+q^'(x,y;t). Note we set D=d(_1) and indeedd(_1)=-i h(_1) where h is a polynomialof even degree terms only with constant coefficients. Hence d=d(_1) is of dispersive form and P∈ C^∞([0,T];Dom(D)) as required in the “abstract” formulation. We observe from the form of the Fourier transform for the solution p=p(k,κ;t) given in Lemma <ref>, that anyFourier Sobolev norm of the solution at any time t>0 equalsthe corresponding Fourier Sobolev norm of the initial data p_0(k,κ). Hence if the initial data is smooth, which we assume, so is p=p(x,y;t) for all t>0.Let us now focus on q=q(x,y;t) which we recall satisfies the linear auxiliary equation _t q=f^⋆(p⋆ p^†)⋆ q and the initial condition q(x,y;0)=δ(x-y). Since p=p(x,y;t) is bounded in any Sobolev norm for all t>0,so is f^⋆(p⋆ p^†). Let 𝔭(t) denote the function {(x,y)↦ p(x,y;t)}, while𝔮(t) denotes the function {(x,y)↦ q(x,y;t)} and 𝔣(t) denotes the function {(x,y)↦ f^⋆(x,y;t)}.By integrating in time, we can express thelinear auxiliary equation in the abstract form 𝔮(t)=δ+∫_0^t𝔣(τ)⋆𝔮(τ) τ.Note we used that the Dirac delta function is the initial data,i.e. 𝔮(0)=δ. Recall it is also the unit with respect to the `⋆' product. We iterate this formula for 𝔮(t) to generate the solution series𝔮(t)=δ+∫_0^t𝔣(τ)τ +∫_0^t∫_0^τ𝔣(τ)⋆𝔣(s)s τ +∫_0^t∫_0^τ∫_0^s𝔣(τ)⋆𝔣(s)⋆𝔣(r) rs τ+⋯.Note that we have the following estimate for the L^2(^2;)-norm of 𝔣(τ)⋆𝔣(s):𝔣(τ)⋆𝔣(s)^2=∫_^2|∫_ f(x,z;τ) f(z,y;s)z|^2xy⩽∫_^2(∫_ |f|^2(x,z;τ)z) (∫_ |f|^2(z,y;s)z)xy=𝔣(τ)^2 𝔣(s)^2.This estimate extends to𝔣(τ)⋆𝔣(s)⋆𝔣(r)^2 ⩽𝔣(τ)^2 𝔣(s)^2𝔣(r)^2 and so forth. Since for any T>0there exists a constant K>0 such that for all t∈[0,T] we have 𝔣(t)^2⩽ K, we observe thatthe L^2(^2;)-norm of (𝔮(t)-δ) is boundedas follows,𝔮(t)-δ^2⩽∫_0^t𝔣(τ)^2τ +∫_0^t∫_0^τ𝔣(τ)⋆𝔣(s)^2s τ +⋯⩽∫_0^t𝔣(τ)^2τ +∫_0^t∫_0^τ𝔣(τ)^2𝔣(s)^2s τ +⋯⩽exp(t K)-1.Consequently Q^'(t)_𝔍_2(;) is bounded.Further, recalling arguments in the proof of Lemma <ref>, there exists a T>0 such that for all t∈[0,T] we haveQ^'(t)_𝔍_2(;)<1 anddet_2(𝕀+Q^'(t))≠0.Given initial data g_0∈ C^∞(^2;)∩ L^2(^2;),suppose p=p(x,y;t) and q=q(x,y;t) are the solutions to the linear base and auxiliary equations from Lemma <ref> for which p_0≡ g_0and q(x,y;0)=δ(x-y). Let Dom(d) denote the domain of the operator d=-i h(_1) where h=h(_1) is defined above.Then there exists a T>0 such that the solutiong∈ C^∞([0,T];Dom(d)∩ L^2(^2;))to the linear Fredholm equationp(x,y;t)=∫_g(x,z;t) q(z,y;t)z.solves the evolutionary partial differential equation withodd degree nonlocal nonlinearity of the form_tg=-ih(_1) g-g⋆ f^⋆(g⋆ g^†). From Remark <ref> we know that with a slight modification of Lemma <ref> for some T>0 there exists a solutiong∈ C^∞([0,T];Dom(d)∩ L^2(^2;))to the linear Fredholm equation (Riccati relation) shown. The solution g is the integral kernel of G, which solves the Odd Degree Evolution Equation in Theorem <ref>. Writing thatequation in terms of the kernel function g corresponds to thepartial differential equation with odd degree nonlocalnonlinearity shown. We make the following observations:(i) Though we have a closed form for p=p(x,y;t) in this case, q=q(x,y;t) has a series representation. 
However as for our results in <ref>, time t plays the role of a parameter in the sense that we decide on the time at which we wish to evaluate the solution, and then we solve the Fredholm equation to generate the solution g for that time t;(ii) Also as for our results in <ref>, on the interval of time for which we know g exists, its regularity is determined by theregularity of p; and(iii) The extension of our results above to the case when p, q and g are ^n× n-valued functions for any n∈ℕ is straightforward.There are many generalizations and concomitant results we intend to pursue.A few immediate ones are as follows. In all cases we assume the base equationto be _tP=DP and the Riccati relation has the form P=G Q.First, in the nonlocal cubic case assume theauxiliary equation has the form _tQ=(PAP^†) Q for some linear operator A satisfying A^†=-A. This generates the cubic form of the operator equation for G inthe Odd Degree Evolution Equation Theorem <ref> above. However we observe _t(QAQ^†)=[PAP^†,QAQ^†]. Hence if thecommutator on the right vanishes initially then QAQ^† maintains its initial value thereafter. If we assume Q_0AQ_0^†=iα·𝕀then we recover the same result as that in Theorem <ref> with the scalar α forced to be real from the skew-Hermitian property of A. Second, suppose the auxiliary equation has the form _tQ=(A_1PA_2P^† A_3) Q for some operators A_1, A_2 and A_3. Assuming Q satisfies the constraint QA_2Q^†=K for some time independent operator K then G can be shown to satisfy _tG=D G-G (A_1GKGA_3). However, if A_2^†=-A_2 and A_3=± A_1^†, then we observe that _t(QAQ^†)=±[A_1PA_2P^† A_1,K]. Hence similarly, if the commutator on the right vanishes initially and Q_0A_2Q_0^†=K initially, then this constraint is maintained thereafter. Third and lastly, we observe we could assume the auxiliary equation has the form _tQ=f(PP^†) P to attempt to generate even degree equations. We address further generalizationsin our Conclusion <ref>.§ EXAMPLESWe consider six example evolutionary partial differential equationswith nonlocal nonlinearities in detail. The first four examples are: (i) A reaction-diffusion systemwith nonlocal nonlinear reaction terms;(ii) The nonlocal Korteweg de Vries equation; (iii) A nonlocal nonlinearSchrödinger equation and (iv) A fourth order nonlinear Schrödinger equation with a nonlocal sinusoidal nonlinearity. In each of these cases weprovide the following. First, we present the evolutionary system andinitial data and explain how it fits into the context of oneof the systems presented in <ref> or <ref>.Second, we briefly explain how we simulated the evolutionary system with nonlocal nonlinearity directly by adapting well-known algorithms, mainly pseudo-spectral, for the versions of these systems withlocal nonlinearities. We denote these directly computed solutionsby g_D. Third, we explain in some more detail how we generated solutions from the underlying linear base andauxiliary equations and the linear Riccati relation. We denotesolutions computed using our Riccati method by g_R.Then for a particular evaluation time T>0 we compute g_D and g_R. We compare the two simulation results andexplicitly plot their difference at that time T. We also quote a value for the maximum norm over the spatial domainof the difference g_D-g_R. 
Additionally we plot the evolution of det_2(𝕀+Q^'(t)),and in the first two examples Q^'(t)_𝔍_2(;).We emphasize that for all the examples, to compute g_R we simplyevaluate the explicit forms for p=p(x,y;t) and q^'=q^'(x,y;t)or their Fourier transforms at the given time t=T. We then solve thecorresponding Fredholm equation at time t=T to generate g_R. The evolution plots for det_2(𝕀+Q^'(t)) andQ^'(t)_𝔍_2(;) are provided for interest and analysis only. We remark that in some examples, at theevaluation times t=T, the norm Q^'(t)_𝔍_2(;)is greater than one. This suggests that the estimates in Lemma <ref>, whilst guaranteeing the behaviour required,are somewhat conservative. All the simulations are developed on thedomain [-L/2,L/2]^2 with the problem projected spatially onto M^2 nodes,i.e. M nodes for the x∈[-L/2,L/2] interval andM nodes for the y∈[-L/2,L/2] interval. Naturally M^2 also represents the number of two-dimensional Fourier modes in our simulations. In each case we quote L and M. All our Matlab codes are provided in the supplementary electronic material.The last two examples we present represent interesting special cases of our Riccati approach. They are a: (v) Scalar evolutionary diffusive partial differential equation with a convolutional nonlinearity and (vi) Nonlocal Fisher–Kolmogorov–Petrovskii–Piskunov equation from biology/ecological systems. In the latter case we derive solutions for general initial data constructed using our approach. As far as we know these have not beenderived before.[Reaction-diffusion system with nonlocal reaction terms] In this case the target equation is the system of reaction-diffusion equations with nonlocal reaction terms of the form_tu =d_11u+d_12v-u⋆(b_11u)-u⋆(b_12v)-v⋆(b_12u)-v⋆(b_11v), _tv =d_11v+d_12u-u⋆(b_11v)-u⋆(b_12u)-v⋆(b_12v)-v⋆(b_11u),where u=u(x,y;t) and v=v(x,y;t). We assume d_11=_1^2+1,d_12=-1/2, b_12=0 and b_11=N(x,σ),the Gaussian probability density function with mean zero. We set σ=0.1. We take the initial profilesu_0(x,y)sech(x+y) sech(y) and v_0(x,y)sech(x+y) sech(x), and in this caseL=20 and M=2^7. This system fits into our general theoryin <ref> when we take p, q and gto have the 2× 2 bisymmetric formsp=[ p_11 p_12; p_12 p_11 ],q=[ q_11 q_12; q_12 q_11 ]and g=[ g_11 g_12; g_12 g_11 ].We also assume similar forms for d and b with the components indicated above. Note that the product of two 2×2 bisymmetric matrices is bisymmetric. The resulting evolutionary Riccati equation _tG=dG-G (b G) in terms of the kernel functions g_11=u and g_12=v is the target reaction-diffusion systemwith nonlocal nonlinearities above.The results of our simulations are shown in Figure <ref>.The top two panels show the u and v components of the solution computed up until time T=0.5 using a direct spectral integration approach. By this we mean we solved the system of equations in Fourier space for u=u(k,κ;t) and v=v(k,κ;t). We used the Matlab inbuilt integratorto integrate in time. The middle two panelsshow the g_11 and g_12 components of the solution computed using our Riccati approach which respectively correspond tou and v. To generate the solutions g_11 and g_12we solved the 2× 2 matrix Fredholm equation for g computing p and q as 2× 2 matrices directly from their explicit Fourier transforms. We approximated the integral in theFredholm equation using a simple Riemann rule and used the inbuilt Matlab Gaussian elimination solver to find the solution. 
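In schematic form, this last step amounts to assembling the bisymmetric blocks and performing a single linear solve. In the sketch below the arrays p11, p12, qp11 and qp12 are assumed, purely for illustration and not as the variable names of the supplementary codes, to hold the kernel components of p and q^' sampled on the M-point grid with spacing dx at the evaluation time t=T.

% Schematic of the 2x2 bisymmetric Fredholm (Riccati) solve at time t = T.
% Assumed inputs: p11, p12, qp11, qp12 are M-by-M samples of the kernel
% components of p and q'; dx is the grid spacing.
P  = [p11, p12; p12, p11];            % bisymmetric block kernel of p
Qp = [qp11, qp12; qp12, qp11];        % bisymmetric block kernel of q'
G  = P/(eye(2*M) + dx*Qp);            % Riemann rule plus Gaussian elimination
g11 = G(1:M,1:M);                     % component approximating u(x,y;T)
g12 = G(1:M,M+1:2*M);                 % component approximating v(x,y;T)

The discretised matrix-kernel ⋆ product is just the 2M×2M block matrix product weighted by dx, and since products of bisymmetric blocks remain bisymmetric, G inherits the bisymmetric structure and the matrix Fredholm equation reduces to a single linear solve.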
The bottom left panel shows the Euclideannorm of the difference (u-g_11,v-g_12) for all (x,y)∈[-L/2,L/2]^2at time t=T. The solutions numerically coincide and indeed for that time t=T we have (u-g_11,v-g_12)_L^∞(^2;)=3.6178×10^-5. We also computed the mean values of |u-g_11| and |v-g_12| over the domain which are, respectively, 8.7796×10^-8 and 1.6967×10^-7. The bottom right panel shows the evolution of det_2(𝕀+Q^'(t)) and also Q^'(t)_𝔍_2(;) for t∈[0,T]. [Nonlocal Korteweg de Vries equation] In this case the target equation is the nonlocal Korteweg de Vries equation_tg=-_1^3 g-g⋆ (_1 g),for g=g(x,y;t). Using our analysis in <ref>we thus need to set d=-_1^3 and b=_1. We choose aninitial profile of the form g_0(x,y)sech^2(x+y) sech^2(y) and in this caseL=40 and M=2^8. The results are shown in Figure <ref>. The top left panel shows the solutiong_D computed up until time T=1 using adirect integration approach. By this we mean we implemented a split-step Fourier Spectral approach modified to deal with the nonlocal nonlinearity; we adapted the code from that found at the Wikiwaves webpage <cit.>. With the initial matrix u_0g_0, indexed by the wavenumbers k and κ, the method is given by (here ℱ denotes the Fourier transform),v_nexp(Δ t K^3) u_n andu_n+1v_n+Δ t h ℱ((ℱ^-1(v_n)) (ℱ^-1(Kv_n))),where K is the diagonal matrix of Fourier coefficients 2πik andwhere the product between the two inverse Fourier transforms shown is thematrix product. In practice of course we used the fast Fourier transform. Note we have chosen to approximate the nonlocal nonlinear term using a Riemann rule. Further we used the time step Δ t=0.0001. The top right panel shows the solution g_R computed using our Riccati approach.By this we mean the following. We compute the explicit solutions forthe base and auxiliary equations in this case in Fourier space in the formp(k,κ;t)=e^t(2πik)^3 g_0(k,κ) andq^'(k,κ;t)=(2πik) (e^t(2πik)^3-1)/(2πik)^3 g_0(k,κ).Recall q^' is the kernel associated with Q^'=Q-𝕀. After computing the inverse Fourier transforms of these expressions we then solved theFredholm equation, i.e. the Riccati relation, for g=g(x,y;t)numerically. There are three sources of error in this computation. The first isthe wavenumber cut-off and inverse fast Fourier transform required to compute p=p(x,y;t) and q'=q'(x,y;t) respectively from pand q^' above. The second is in the choice of integral approximation in the Fredholm equation.We used a simple Riemann rule. The third is the error in solving the corresponding matrix equation representing the Fredholm equation which isthat corresponding to the error for Matlab's inbuilt Gaussianelimination solver. The bottom left panel shows the absolute value ofg_D-g_R. Up to computation error, the solutions naturally coincide, and indeedg_D-g_R_L^∞(^2;)=4.8871× 10^-5. The bottom right panel shows the evolution of det_2(𝕀+Q^'(t)) and also Q^'(t)_𝔍_2(;) for t∈[0,T]. [Nonlocal nonlinear Schrödinger equation] In this case the target equation is the nonlocal nonlinear Schödinger equationi_tg=_1^2 g+g⋆ g⋆ g^†,for g=g(x,y;t). In our analysis in <ref>we thus need to set h(x)=x^2 and f(x)=x. Further for computations we take the initial profileto be g_0(x,y)sech(x+y) sech(y) and in this case L=20 and M=2^8. The results are shown in Figure <ref>. The top two panels show the real and imaginary parts of the solution g_D computed up until time T=0.02 using a direct integration approach. 
By this we mean we implemented a split-step Fourier transform approach slightly modified to deal with the nonlocal nonlinearity; seeDutykh, Chhay and Fedele <cit.>. The middle two panelsshow the real and imaginary parts of the solution g_R computed using our Riccati approach. By this we mean, given the explicit solution for p=p(k,κ;t) in terms of g_0,we numerically evaluated q=q(k,κ;t) using the exponential form from Lemma <ref>.In practice this consists of computing a large matrix exponential,our first source of error. We then solved the the Riccati relationin Fourier space when it takes the formp(k,κ;t)=∫_g(k,ν;t) q(ν,κ;t) ν for g=g(k,κ;t). We solved this Fredholm equation numerically and recovered g=g(x,y;t)as the inverse Fourier transform of g=g(k,κ;t). There are three further sources of error in this computation.The first is in the choice of integral approximation on the right-hand side.We used a simple Riemann rule. The second is the error in solving the corresponding matrix equation representing the Fredholm equation which isthat corresponding to the error for Matlab's in build Gaussianelimination solver. The third is in computing the inverse fast Fourier transform for the solution. The bottom left panel shows|g_D-g_R| for all (x,y)∈[-L/2,L/2]^2 at time t=T.Up to computation error, the solutions coincide, and we have g_D-g_R_L^∞(^2;)=2.6932× 10^-5. The bottom right panel shows the evolution of det_2(Q(t)) for t∈[0,T], i.e. theFredholm determinant of the Fourier transform q of thekernel q associated with Q. Not too surprisingly we observe|det_2(Q(t))|=1 for all t∈[0,T]. [Fourth order NLS with nonlocal sinusoidal nonlinearity] In this case the target equation is the fourth order nonlocal nonlinear Schödinger equationi_tg=_1^4 g+g⋆sin^⋆(g⋆ g^†),for g=g(x,y;t). In our analysis in <ref>we thus need to set h(x)=x^4 and f(x)=sin(x). The initial profile is g_0(x,y)sech(x+y) sech(y), as previously and in this case L=20 and M=2^8. The results are shown in Figure <ref>. The top two panels show the real and imaginary parts of the solution g_D computed up until time T=0.2 using a direct integration approach. By this we mean we implemented the split-step Fourier transform approach as in the last example, slightly modified to deal with the sinusoidal nonlinearity, and with time step Δ t=0.0001.The middle two panels show the real and imaginary parts of thesolution g_R computed using our Riccati approach.Again by this we mean, given the explicit solution for p=p(k,κ;t) in terms of g_0,we numerically evaluated q=q(k,κ;t) using the exponential form from Lemma <ref>, now including the sinusoidal form for f. We then solved forg=g(k,κ;t) and so forth, as described in the last example. The bottom left panel shows|g_D-g_R| for all (x,y)∈[-L/2,L/2]^2 at time t=T.Thus again, up to computation error, the solutions naturally coincide with g_D-g_R_L^∞(^2;)=5.2793× 10^-6. As in the last example, the bottom right panel shows the evolution of det_2(Q(t)) for t∈[0,T]. Again we observe that |det_2(Q(t))|=1for all t∈[0,T].We now present the two special case examples. The firstis a very special case of the systems in <ref> for which the subspacehas co-dimension one with respect to . We can think of the operator P being parameterized by an infinite row vector. The second is another special case when the Riccati relation represents a rank-one transformation from Q to P. 
Here we use this context to solve a particular version of the nonlocalFisher–Kolmogorov–Petrovskii–Piskunov equation.The Cole–Hopf transformation for the Burgers equation also represents such a rank-one case; see Beck et al. <cit.>. [Evolutionary diffusive PDE with convolutional nonlinearity]In this example we assume the linear base and auxiliary equationshave the form_tp(y;t)=d(_y) p(y;t) and_tq(y;t)=b(_y) p(y;t).In these equations we assume the operator d=d(_y) is a polynomial in _y with constant coefficients and that it isof diffusive or dispersive type as described in <ref>.We also assume b=b(_y) is a polynomial in _ywith constant coefficients. We now posit the Riccati relation p(y;t)=∫_ g(z;t) q(z+y;t)z.Following Remark <ref> in <ref> by differentiating this Riccati relation with respect to time and using that p=p(y;t) and q=q(y;t) satisfy the scalar linear base and auxiliary equations above, we find ∫__tg(z;t) q(z+y;t)z =_tp(y;t)-∫_ g(z;t) _tq(z+y;t)z=d(_y) p(y;t)-∫_ g(z;t) b(_z) p(z+y;t)z=∫_ g(z;t) d(_y) q(z+y;t)z -∫_ g(z;t) b(_z) ∫_ g(ζ;t) q(ζ+z+y;t) ζz=∫_(d(-_z) g(z;t)) q(z+y;t)z -∫_(b(-_z) g(z;t)) ∫_ g(ζ;t) q(ζ+z+y;t) ζz=∫_(d(-_z) g(z;t)) q(z+y;t)z -∫_(b(-_z) g(z;t)) ∫_ g(ξ-z;t) q(ξ+y;t) ξz=∫_(d(-_z) g(z;t)) q(z+y;t)z -∫_∫_(b(-_ξ) g(ξ;t)) g(z-ξ;t) ξq(z+y;t)z.Here we integrated by parts assuming suitable decay in the far-field, used the substitution ξ=ζ+z for fixed z, and swapped over theintegration variables ξ and z. As in Remark <ref>, if we postmultiply by `δ(z-y)+q̃^'(y,η;t)' and integrate over y∈ we find g=g(η;t) satisfies _tg(η;t)=d(-_η) g(η;t) -∫_(b(-_ξ) g(ξ;t)) g(η-ξ;t) ξ.This is a simpler derivation of Example 1 fromBeck et al. <cit.> where b=1.There we derive an explicit form for the Fourier transform of the solution and compare the result of direct numerical simulations with evaluation of the solution using our explicit formula.[Nonlocal Fisher–Kolmogorov–Petrovskii–Piskunov equation]In this example we assume the scalar linear base and auxiliary equationshave the form_tp(x;t)=d(_x) p(x;t) and_tq(x;t)=b(x,_x) p(x;t).Here the operator d=d(_x) is assumed to be a polynomial in _x with constant coefficients of diffusive or dispersive type as described in <ref>. We assume that the operator b=b(x,_x) is either of the form b=b(x) only, where b(x) is a bounded function, or it is of the form b=b(_x) only, in which case we assume it is a polynomial in _x with constantcoefficients. We could assume b=b(x,_x) is a polynomial in _x with non-homogeneous coefficients, the main constraint is whether we can find an explicit form for the solution q=q(x;t) to the linear auxiliary equation. We now posit the Riccati relation of the following rank-one formp(x;t)=g(x;t) ∫_ q(z;t)z.For convenience we setq(t)∫_ q(z;t)z, in which case we have p(x;t)=g(x;t) q(t) and _tq(t)=∫_ b(z,_z) p(z;t)z.As in <ref>, in particular for examplein Remark <ref>, we differentiate the Riccati relation with respect to time and substitute in that p=p(x;t) satisfies the linear base equation andq=q(t) satisfies the equation just above. Carrying this through generates(_tg(x;t)) q(t)=_tp(x;t)-g(x;t) _tq(t)=d(_x)p(x;t)-g(x;t) ∫_ b(z,_z) p(z;t)z=d(_x)g(x;t) q(t) -g(x;t) ∫_ b(z,_z) g(z;t)z q(t).Dividing through by q=q(t) generates the equation_tg(x;t)=d(_x)g(x;t)-g(x;t) ∫_ b(z,_z) g(z;t)z.Now suppose we wish to solve this evolutionary partial differential equation with the nonlocal nonlinearity shown for some given initial data g_0(x), i.e. such that g(x;0)=g_0(x). We naturally take q(0)=1and p(x;0)=g_0(x). 
Then that g(x;t)=p(x;t)/q(t) is indeedthe corresponding solution to the evolutionary partial differential equation for g=g(x;t) above, with p=p(x;t) satisfying the linear base equation above and q=q(t) satisfying the integrated auxiliary equation shown, can be verified by direct substitution.Let us now consider the special case b=1. Thenby analogy with Lemma <ref>, thesolution p=p(x;t) to the linear base equation is given in terms of its Fourier transform by p(k;t)=exp(d(2πik) t) g_0(k).By taking the inverse Fourier transform of this and integrating with respect to the spatial coordinate, we find the solution q=q(t)to the integrated auxiliary equation is then given byq(t)=1+(exp(t d(0))-1/d(0)) g_0(0).If d(0)=0, this becomes q(t)=1+t g_0(0). Hence we have an explicit solution for any diffusive or dispersive form for d=d(_x). If d(_x)=_x^2+1, the partial differential equation for g=g(x;t) above corresponds to a particular version of the nonlocalFisher–Kolmogorov–Petrovskii–Piskunov equation which is studiedfor example in Britton <cit.> and Bian, Chen & Latos <cit.>.§ CONCLUSIONWe have extended our Riccati approach for generating solutions to nonlocal nonlinear partial differential equations from a corresponding linear base equation to systems as well as higher odd degree nonlinearities. These systems can be of arbitrary order in the linear terms and include higher order terms in the nonlocal nonlinear terms. We also provided explicit calculations demonstrating how solutions forsuch nonlocal nonlinear systems can be generated in this manner forgeneral initial data. For four example systems we also provided numerical simulations comparing solutions computed using the Riccati approachand solutions computed using direct primarily pseudo-spectralnumerical methods. We provide all the Matlab codes in the supplementary electronic material. We also indicated multiple immediate extensionswe intend to consider, for example to tackle the case ofhigher even degree nonlinearities. Additionally we hinted on how we intend to extend the Riccati approach to the multi-dimensionalnonlocal nonlinear partial differential equations.There are many further extensions and practical considerations in our sights.One natural extension is to consider using the Riccati approach for nonlocalnonlinear stochastic partial differential equations. We would begin with those with additive space-time noise which could be incorporated via the operator C in the quadratic nonlocal nonlinearity set-up described in <ref>. It appears as a linear term in the base equation which would thus become a linear stochastic partial differential equation. The base and auxiliary equations would have to be solved as a linear system, which is achievable in principle. Then the term C appears as a nonhomogeneous source term in the final Riccati stochastic partial differential equation. Indeedwe have already performed some simulations of this nature and these will be published in Doikou, Malham and Wiese <cit.>. 
On the practical consideration side, we note that to compute solutions using the Riccati method in practice, we may need to approximate the solution to the linear auxiliary equation, and then typically, we need to solvethe linear Fredholm integral equation numerically to find the desired solution.It would be useful to provide a comprehensive numerical analysis study examiningthe relative complexity of the Riccati approach in these cases compared to thestate-of-the-art numerical methods available for such nonlinear systems.The context and examples we have considered thus far have included large classes of nonlocal nonlinear systems. One way to classify these systemsis that they can all be thought of as "big matrix" equations with thenatural extended product encoded in the `⋆' product. In other words we think of the linear operators P, Q and G as matrix operatorsextended to the infinite-dimensional context, whether countable or not.The resulting objects are either countably infinite matrices or are parametrized by integral kernels. The natural extension of the matrix product is then the countable discrete version of the star product or the star product itself. One of our next goals is to consider how to generalize our Riccati approach so as to incorporate local nonlinearities. One natural approach is to replacethe Fredholm Riccati relation by a Volterra one.Lastly, the classes of nonlinear partial differential equations we have considered may have solutions which become singular in finite time.For example the nonlocal nonlinear Schrödinger equation with higher degree nonlinearity or in higher dimensions might exhibit such behaviour. However let us consider the overarching context of the Riccati approachwe prescribe which is that of a linear subspace flow projected down onto theFredholm Grassmannian. In principle the solutions to theunderlying linear base and auxiliary equations which generatethe solution to the nonlocal nonlinear system do not themselves become singular in finite time. The singularity in thenonlocal nonlinear system is just an artifact of a poorchoice of coordinate patch on the Fredholm Grassmannian. It corresponds to the event det_2(𝕀+Q^'(t))→0, though we need to be wary of a hierarchy of regularized determinants here that should be monitored. The coordinate patch choice is made in the projection[ Q; P ]→[ 𝕀; G ].Implicit in the projection as shown is that we have equivalenced by the “top” block of suitable general linear transformations, thus generatingthe graph and coordinate patch on the right shown. However we can equivalence by any block of suitable general linear transformations (for example the lower block instead) generating a different graph and coordinate patch. Indeed there is a Schubert cell decomposition of the Fredholm Grassmannian analogous to that in the finite-dimensional case; see Pressley and Segal <cit.>. Careful analysis of the behaviour ofthe solutions to the underlying linear base and auxiliary equations on the approach to and transcending through and beyond the singularity in a given coordinate patch might reveal more detailed information about the singularity and will provide a mechanism for continuing solutions beyond it. We would like to thank the referees for their insightful comments and constructive suggestions that helped significantly improve the original manuscript. We would also like to thank Anke Wiese for her helpful comments and suggestions andJonathan Sherratt for useful discussions. The work of M.B. 
was partially supported byUS National Science Foundation grant DMS-1411460.ARSII Ablowitz MJ, Ramani A, Segur H. 1980 A connection between nonlinear evolution equations and ordinary differential equations of P-type. II,Journal of Mathematical Physics 21, 1006–1015.ADP Adamopoulou P, Doikou A, Papamikos G. 2017 Darboux-Backlund transformations, dressing and impurities inmulti-component NLS, Nucl. Phys. B 918, 91–114.BP-ZS Bauhardt W, Pöppe Ch. 1993The Zakharov–Shabat inverse spectral problem for operators, J. Math, Phys. 34(7), 3073–3086.BC Beals R, Coifman RR. 1989 Linear spectral problems, non-linear equations and the ∂-method, Inverse problems 5, 87–130.BDMS Beck M, Doikou A, Malham SJA, Stylianidis I. 2018 Grassmannian flows and applications to nonlinear partial differential equations, Proc. Abel Symposium, revision submitted.BM Beck M, Malham SJA. 2015 Computing the Maslov index for large systems, PAMS 143, 2159–2173.BCL Bian S, Chen L, Latos EA. 2017 Global existence and asymptotic behavior of solutions to anonlocal Fisher–KPP type problem, Nonlinear Analysis 149, 165-–176.Btalk Bornemann F. 2009 Numerical evaluation of Fredholm determinants and Painlevé transcendents with applications to random matrix theory, talk at the Abdus Salam International Centre forTheoretical Physics.Britton Britton NF. 1990Spatial structures and periodic travelling waves in anintegro-differential reaction-diffusion population model, SIAM J. Appl. Math. 50(6), 1663–1688.BB Brockett RW, Byrnes CI. 1981 Multivariable Nyquist criteria, root loci, and pole placement: a geometric viewpoint, IEEE Trans. Automat. control 26(1), 271–284. DMW Doikou A, Malham SJA, Wiese A. 2018Stochastic partial differential equations with nonlocal nonlinearitiesand their simulation, in preparation.DJ Drazin PG, Johnson RS. 1989 Solitons: an introduction, Cambridge Texts in Applied Mathematics, Cambridge University Press.DCF Dutykh D, Chhay M, Fedele, F. 2013 Geometric numerical schemes for the KdV equation, Computational Mathematics and Mathematical Physics53(2), 221–-236.Dyson Dyson FJ. 1976 Fredholm determinants andinverse scattering problems, Commun. Math. Phys.47, 171–183.Gerard Grellier S, Gerard P. 2015The cubic Szegö equation and Hankel operators, arXiv:1508.06814.MG Guest MA. 2008 From quantum cohomology to integrable systems, Oxford University Press.HM Hermann R, Martin C. 1982 Lie and Morse theory for periodic orbits of vector fieldsand matrix Riccati equations, I: General Lie-theoretic methods,Math. Systems Theory 15, 277-–284.KM Karambal I, Malham SJA. 2015 Evans function and Fredholm determinants, Proc. R. Soc. A 471(2174).DOI: 10.1098/rspa.2014.0597McKean McKean HP. 2011 Fredholm determinants, Cent. Eur. J. Math.9(2), 205–243.LMNT Ledoux V, Malham SJA, Niesen J, Thümmler V. 2009 Computing stability of multi-dimensional travelling waves,SIAM Journal on Applied Dynamical Systems 8(1), 480–507.LMT Ledoux V, Malham, SJA, Thümmler V. 2010 Grassmannian spectral shooting, Math. Comp. 79, 1585–1619.MH Martin C, Hermann R. 1978 Applications of algebraic geometry to systems theory: The McMillan degree and Kronecker indicies of transfer functions as topological and holomorphic system invariants, SIAM J. Control Optim. 16(5), 743–755.MS Matveev VB, Salle MA. 1991 Darboux transformations and solitons, Springer–Verlag.Miura Miura RM. 1976The Korteweg–De Vries equation: A survey of results, SIAM Review 18(3), 412–459.P-SG Pöppe Ch. 
1983 Construction of solutions of the sine-Gordon equation by means of Fredholm determinants, Physica D 9, 103–139.P-KdV Pöppe Ch. 1984 The Fredholm determinant method for the KdV equations, Physica D 13, 137–160.P-KP Pöppe Ch. 1984 General determinants and theτ function for the Kadomtsev–Petviashvili hierarchy, Inverse Problems 5, 613–630.PS-KP Pöppe, Ch., Sattinger, D.H. 1988 Fredholmdeterminants and the τ function for the Kadomtsev–Petviashvili hierarchy, Publ. RIMS, Kyoto Univ. 24, 505–538.PS Pressley A, Segal G. 1986 Loop groups, Oxford Mathematical Monographs, Clarendon Press, Oxford.RS Reed M, Simon B. 1980, Methods of Modern Mathematical Physics: I Functional Analysis, Academic Press.RSIV Reed M, Simon B. 1978, Methods of Modern Mathematical Physics: IV Analysis of Operators, Academic Press.SatoI Sato M. 1981 Soliton equations as dynamical systems on a infinite dimensional Grassmann manifolds. RIMS 439, 30–46.SatoII Sato M. 1989, The KP hierarchy and infinite dimensional Grassmann manifolds, Proceedings of Symposia in Pure Mathematics49 Part 1, Eds: L. Ehrenpreis and R.C. Gunning,American Mathematical Society, 51–66.SW Segal G, Wilson G. 1985 Loop groups and equations of KdV type, Inst. Hautes Etudes Sci. Publ. Math. N61, 5-–65.Simon:Traces Simon B 2005 Trace ideals and their applications,2nd edn. Mathematical Surveys and Monographs, vol. 120. Providence, RI: AMS.TW Tracy CA, Widom H. 1996 Fredholm determinants and the mKdV/Sinh-Gordon hierarchies, Commun. Math. Phys.179, 1–10.wikiwaves .W Wilson G. 1985 Infinite-dimensional Lie groups and algebraic geometry in soliton theory, Trans. R. Soc. London A 315 (1533), 393–404.ZS Zakharov VE, Shabat AB. 1974A scheme for integrating the non-linear equation of mathematical physicsby the method of the inverse scattering problem I,Funct. Anal. Appl. 8, 226.
http://arxiv.org/abs/1709.09253v2
{ "authors": [ "Margaret Beck", "Anastasia Doikou", "Simon J. A. Malham", "Ioannis Stylianidis" ], "categories": [ "math.AP", "math-ph", "math.MP", "nlin.SI", "quant-ph" ], "primary_category": "math.AP", "published": "20170926202657", "title": "Partial differential systems with nonlocal nonlinearities: Generation and solutions" }
FOM Institute AMOLF, Science Park 104, 1098 XE Amsterdam, The Netherlands Department of Physics, University of Michigan, Ann Arbor, MI 48109-1040 FOM Institute AMOLF, Science Park 104, 1098 XE Amsterdam, The Netherlands To estimate the time, many organisms, ranging from cyanobacteria to animals, employ a circadian clock which is based on a limit-cycle oscillator that can tick autonomously with a nearly 24h period. Yet, a limit-cycle oscillator is not essential for knowing the time, as exemplified by bacteria that possess an “hourglass”: a system that when forced by an oscillatory light input exhibits robust oscillations from which the organism can infer the time, but that in the absence of driving relaxes to a stable fixed point. Here, using models of the Kai system of cyanobacteria, we compare a limit-cycle oscillator with two hourglass models, one that without driving relaxes exponentially and one that does so in an oscillatory fashion. In the limit of low input noise, all three systems are equally informative on time, yet in the regime of high input noise the limit-cycle oscillator is far superior. The same behavior is found in the Stuart-Landau model, indicating that our result is universal. 87.10.Vg, 87.16.Xa, 87.18.Tt Robustness of clocks to input noise Pieter Rein ten Wolde December 30, 2023 ===================================
§ INTRODUCTION Many organisms, ranging from animals, plants, insects, to even bacteria, possess a circadian clock, which is a biochemical oscillator that can tick autonomously with a nearly 24h period. Competition experiments on cyanobacteria have demonstrated that these clocks can confer a fitness benefit to organisms that live in a rhythmic environment with a 24h period <cit.>. Clocks enable organisms to estimate the time of day, allowing them to anticipate, rather than respond to, the daily changes in the environment. While it is clear that circadian clocks which are entrained to their environment make it possible to estimate the time, it is far less obvious that they are the only or best means to do so <cit.>. The oscillatory environmental input could, for example, also be used to drive a system which in the absence of any driving would relax to a stable fixed point rather than exhibit a limit cycle. The driving would then generate oscillations from which the organism could infer the time. It thus remains an open question what the benefits of circadian clocks are in estimating the time of day. This question is highlighted by the timekeeping mechanisms of prokaryotes. While circadian clocks are ubiquitous in eukaryotes, the only known prokaryotes to possess circadian clocks are cyanobacteria, which exhibit photosynthesis. The best characterized clock is that of the cyanobacterium Synechococcus elongatus, which consists of three proteins, KaiA, KaiB, and KaiC <cit.>. The central clock component is KaiC, which forms a hexamer that is phosphorylated and dephosphorylated in a cyclical fashion under the influence of KaiA and KaiB.
This phosphorylation cycle can be reconstituted in the test tube, forming a bona fide circadian clock that ticks autonomously in the absence of any oscillatory driving with a period of nearly 24 hours <cit.>. However, S. elongatus is not the only cyanobacterial species. Prochlorococcus marinus possesses kaiB and kaiC, but lacks (functional) KaiA. Interestingly, this species exhibits daily rhythms in gene expression under light-dark (LD) cycles but not in constant conditions <cit.>. Recently, Johnson and coworkers made similar observations for the purple bacterium Rhodopseudomonas palustris, which harbors homologs of KaiB and KaiC. Its growth rate depends on the KaiC homolog in LD but not constant conditions <cit.>, suggesting that the bacterium uses its Kai system to keep time. Moreover, this species too does not exhibit sustained rhythms in constant conditions, but does show daily rhythms in e.g. nitrogen fixation in cyclic conditions. P. marinus and R. palustris thus appear to keep time via an “hourglass” mechanism that relies on oscillatory driving <cit.>. These observations raise the question why some bacterial species like S. elongatus have evolved a bona fide clock that can run freely, while others have evolved an hourglass timekeeping system. Troein et al. studied the evolution of timekeeping systems in silico <cit.>. They found that only in the presence of seasonal variations and stochastic fluctuations in the input signal did systems evolve that can also oscillate autonomously. However, organisms near the equator have evolved self-sustained oscillations <cit.>, showing that seasonal variations cannot be essential. Pfeuty et al. suggest that limit-cycle oscillators have evolved because they enable timekeepers that ignore the uninformative light-intensity fluctuations during the day (corresponding to a dead zone in the phase-response curve), yet selectively respond to the more informative intensity changes around dawn and dusk <cit.>. Here, we hypothesize that the optimal design of the readout system that maximizes the reliability by which cells can estimate the time depends on the noise in the input signal. To test this idea, we study three different network designs from which the cell can infer time (Models): 1) a simple push-pull network (PPN), in which a readout protein switches between a phosphorylated and an unphosphorylated state (ModelsA). Because the phosphorylation rate increases with the light intensity, the phosphorylation level oscillates in the presence of oscillatory driving, enabling the cell to estimate the time. This network lacks an intrinsic oscillation frequency, and in the absence of driving it relaxes to a stable fixed point in an exponential fashion; 2) an uncoupled hexamer model (UHM), which is inspired by the Kai system of P. marinus (ModelsB). This model consists of KaiC hexamers which each have an inherent propensity to proceed through a phosphorylation cycle.
However, the phosphorylation cycles of the hexamers are not coupled among each other, and without a common forcing the cycles will therefore desynchronize, leading to the loss of macroscopic oscillations. In contrast to the proteins of the PPN, each hexamer is a tiny oscillator with an intrinsic frequency ω_0, which means that an ensemble of hexamers that has been synchronized initially will, in the absence of driving, relax to its fixed point in an oscillatory manner. 3) a coupled hexamer model (CHM), which is inspired by the Kai system of S. elongatus (ModelsC). As in the previous UHM, each KaiC hexamer has an intrinsic capacity to proceed through a phosphorylation cycle, but, in contrast to that system, the cycles of the hexamers are coupled and synchronized via KaiA, as described further below. Consequently, this system exhibits a limit cycle, yielding macroscopic oscillations with intrinsic frequency ω_0 even in the absence of any driving. Here we are interested in the question how the precision of time estimation is limited by the noise in the input signal, and how this limit depends on the architecture of the readout system. We thus focus on the regime in which the input noise dominates over the internal noise <cit.> and model the different systems using mean-field (deterministic) chemical rate equations. In <cit.>, we also consider internal noise, and show that, at least for S. elongatus, the input-noise dominated regime is the relevant limit. The chemical rate equation of the PPN is: ẋ_p = k_f s(t) (x_T - x_p(t)) - k_b x_p(t), where x_p(t) is the concentration of phosphorylated protein, x_T is the total concentration, k_f s(t) is the phosphorylation rate k_f times the input signal s(t), and k_b is the dephosphorylation rate. The uncoupled (UHM) and coupled (CHM) hexamer models are based on the Kai system <cit.>. In both models, KaiC switches between an active conformation in which the phosphorylation level tends to rise and an inactive one in which it tends to fall <cit.>. Experiments indicate that the main Zeitgeber is the ATP/ADP ratio <cit.>, meaning the clock predominantly couples to the input s(t) during the phosphorylation phase of the oscillations <cit.>. In both the UHM and the CHM, s(t) therefore modulates the phosphorylation rate of active KaiC. The principal difference between the UHM and CHM is KaiA: (functional) KaiA is absent in P. marinus and hence in the UHM <cit.>. In contrast, in S. elongatus and hence the CHM, KaiA stimulates phosphorylation of active KaiC, yet inactive KaiC can bind and sequester KaiA. This gives rise to the synchronisation mechanism of differential affinity <cit.>. In all three models, the input is modeled as a sinusoidal signal with mean s̅ and driving frequency ω=2π/T plus additive noise η_s(t): s(t) = sin(ω t) + s̅ + η_s(t).
The noise is uncorrelated with the mean signal, and has strength σ^2_s and correlation time τ_c, ⟨η_s(t) η_s(t^')⟩ = σ_s^2 e^-|t-t^'|/τ_c. A detailed description of the models is given in <cit.>. As a performance measure for the accuracy of estimating time, we use the mutual information I(p;t) between the time t and the phosphorylation level p(t) <cit.>: I(p;t) = ∫_0^T dt ∫_0^1 dp P(p,t) log_2 P(p,t)/P(p)P(t). Here P(p,t) is the joint probability distribution while P(p) and P(t)=1/T are the marginal distributions of p and t. The quantity 2^I(p;t) corresponds to the number of time points that can be inferred uniquely from p(t); I(p;t)=1 bit means that from p(t) the cell can reliably distinguish between day and night <cit.>. The distributions are obtained from running long simulations of the chemical rate equations of the different models <cit.>. For each system, to maximize the mutual information we first optimized over all parameters except the coupling strength. For the CHM, the coupling strength ρ was taken to be comparable to that of S. elongatus <cit.>, and for the PPN and the UHM ρ was set to an arbitrary low value, because in the relevant weak-coupling regime the mutual information is independent of ρ, as elucidated below and in <cit.>. For the PPN, there exists an optimal response time τ_r ∼ 1/k_b that maximizes I(p;t), arising from a trade-off between maximizing the amplitude of p(t), which increases with decreasing τ_r, and minimizing the noise in p(t), which decreases with increasing τ_r because of time averaging <cit.>. Similarly, for the UHM, there exists an optimal intrinsic frequency ω_0 of the individual hexamers. The UHM is linear and similar to a harmonic oscillator. Analyzing this system shows that while the amplitude A of the output x(t) is maximized at resonance, ω_0→ω, the standard deviation σ_x of x is maximized when ω_0→ 0, such that the signal-to-noise ratio A/σ_x peaks for ω_0 > ω <cit.>. Interestingly, also the CHM exhibits a maximum in A/σ_x for intrinsic frequencies that are slightly off-resonance <cit.>. I_CPM shows the mutual information I(p;t) as a function of the input-noise strength σ^2_s for the three systems. In the regime that σ^2_s is small, I(p;t) is essentially the same for all systems. However, the figure also shows that as σ^2_s rises, I(p;t) of the UHM and especially the PPN decrease very rapidly, while that of the CHM falls much more slowly. For σ^2_s ≈ 3, I(p;t) of the CHM is still above 2 bits, while I(p;t) of the PPN and UHM have already dropped below 1 bit, meaning the cell would no longer be able to distinguish between day and night. Indeed, this figure shows that in the regime of high input noise, a bona fide clock that can tick autonomously is a much better time-keeper than a system which relies on oscillatory driving to show oscillations. This is the principal result of our paper. It is observed for other values of τ_c and other types of input, such as a truncated sinusoid corresponding to no driving at night (Fig.
S6 <cit.>).The robustness of our observation that bonafide clocks are more reliable timekeepers, suggests it is a universal phenomenon, independent of the details of the system. We therefore analyzed a generic minimal model, the Stuart-Landau model. It allows us to study how the capacity to infer time changes as a system is altered from a damped (nearly) linear oscillator, which has a characteristic frequency but cannot sustain oscillations in the absence of driving, to a non-linear oscillator that can sustain autonomous oscillations <cit.>.Near a Hopf bifurcation where a limit cycle appears the effect of the non-linearity is weak, so that the solution x(t) is close to that of a harmonic oscillator, x(t) = 1/2(A(t) e^iω t + c.c.), where A(t) is a complex amplitude that can be time-dependent <cit.>. The dynamics of A(t) is then given byȦ = -i ν A + α A - β |A|^2 A - ϵ E, dotASLwhere ν≡ (ω^2-ω_0^2) / (2ω) with ω_0 the intrinsic frequency, α and β govern the linear and non-linear growth and decay of oscillations, E is the first harmonic of s(t) and ϵ≡ρ / (2 ω) is the coupling strength. dotASL gives a universal description of a driven weakly non-linear oscillator near a supercritical Hopf bifurcation <cit.>. The non-driven system exhibits a Hopf bifurcation at α = 0. By varying α we can thus change the system from a damped oscillator (α<0) which in the absence of driving exhibits oscillations that decay, to a limit-cycle oscillator (α>0) that shows free-running oscillations. The driven damped oscillator (α<0) always has one stable fixed point with |A|>0 corresponding to sinusoidal oscillations that are synchronized with the driving. The driven limit-cycle oscillator (α>0), however, can exhibit several distinct dynamical regimes <cit.>. Here, we limit ourselves to the case of perfect synchronization, where x(t) has a constant amplitude A and phase shift with respect to s(t). To compute I(x,t), we use an approach inspired by the linear-noise approximation <cit.>.It assumes P(x|t) is a Gaussian distribution with variance σ^2_x(t) centered at the deterministic solution x(t)= 1/2 (A e^i ω t+c.c.), where A is obtained by solving dotASL in steady state. To find σ^2_x, we first compute σ^2_A from dotASL by adding Gaussian white-noise of strength σ^2_s to E and expanding A to linear order around its fixed point; σ^2_x(t) is then obtained from σ^2_A via a coordinate transformation <cit.>.I_SL shows the mutual information I(x;t) as a function α, for different values of σ^2_s. The figure shows that I(x;t) rises as the system is changed from a damped oscillator (α<0) to a self-sustained oscillator (α>0). Moreover, the increase is most pronounced when the input noise σ^2_s is large. 
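For concreteness, the phase-locked state of dotASL can be computed directly by setting Ȧ=0 and solving for the complex amplitude; the short Python sketch below does this for one illustrative parameter set (all numerical values, and the use of a generic root finder, are assumptions made for illustration, not the settings behind I_SL).

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (assumptions for this sketch, not fitted values)
omega  = 2 * np.pi / 24.0                        # driving frequency (1/h)
omega0 = 2 * np.pi / 23.0                        # intrinsic frequency (1/h)
nu     = (omega**2 - omega0**2) / (2 * omega)    # detuning parameter
alpha  = 0.1                                     # alpha > 0: limit cycle; alpha < 0: damped
beta   = 0.05                                    # nonlinear saturation
eps    = 0.02                                    # coupling, eps = rho / (2 omega)
E      = 1.0                                     # first harmonic of the driving s(t)

def amplitude_equation(z):
    """dA/dt of dotASL, split into real and imaginary parts."""
    A = z[0] + 1j * z[1]
    dA = -1j * nu * A + alpha * A - beta * abs(A)**2 * A - eps * E
    return [dA.real, dA.imag]

# An initial guess near the free-running amplitude sqrt(alpha/beta) selects the
# phase-locked branch of the driven limit-cycle oscillator.
A_re, A_im = fsolve(amplitude_equation, x0=[np.sqrt(alpha / beta), 0.0])
A_ss = A_re + 1j * A_im
print("|A| =", abs(A_ss), " phase shift =", np.angle(A_ss))
```

The deterministic output is then x(t) = Re[A_ss e^{iω t}]; the variance σ^2_x needed for I(x;t) follows from linearizing dotASL around this fixed point, as described above.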
The Stuart-Landau model can thus reproduce the qualitative behavior of our computational models, indicating that our principal result is generic. Interestingly, the CHM is even more robust to input noise than the Stuart-Landau model, likely because the latter is only weakly non-linear. To understand why limit-cycle oscillators are more robust to input noise, we study in section SIIE <cit.> analytical models valid in the limit of weak coupling. For a damped oscillator with a fixed-point attractor (PPN and UHM), we find that the amplitude A of the harmonic oscillations (the signal) increases with the coupling strength ρ, A∼ρ. The noise in the output signal σ_x scales with ρ, σ_x ∼ρ, because the coupling amplifies not only the input signal, but also the input noise. Hence, the signal-to-noise ratio A/σ_x is independent of ρ: an oscillator based on a fixed-point attractor faces a fundamental trade-off between gain and input noise (section SIIE <cit.>). A limit-cycle oscillator (CHM) can lift this trade-off: the amplitude is a robust, intrinsic property of the system, and essentially independent of ρ. The output noise σ_x ∼√(ρ), because the coupling not only amplifies the input noise proportional to ρ, but also generates a restoring force that constrains fluctuations, scaling as ∼√(ρ) (SIIE <cit.>). Hence, A/σ_x ∼ 1/√(ρ). These scaling arguments show that: 1) concerning robustness to input noise, the optimal regime is the weak-coupling regime; 2) in this regime, a limit-cycle oscillator is generically more robust to input noise than a damped oscillator. Yet, the coupling cannot be reduced to zero for limit-cycle oscillators. When the intrinsic clock period deviates from 24h, as it typically will, coupling is essential to phase-lock the clock to the driving signal <cit.>. Moreover, biochemical networks inevitably have some level of internal noise (section SIIF <cit.>). For the damped oscillator, the output noise σ_x resulting from internal noise is independent of ρ, but since A increases with ρ, A/σ_x ∼ρ in the presence of internal noise only: coupling helps to lift the signal above the internal noise. For the limit-cycle oscillator, the restoring force ∼√(ρ) tames phase diffusion, such that in the presence of only internal noise, the output noise σ_x ∼ 1/√(ρ) and A/σ_x ∼√(ρ). Hence, also with regards to internal noise, a limit-cycle oscillator is superior to a damped oscillator in the weak-coupling regime. This analysis also shows, however, that this regime is not necessarily optimal, since with only internal noise present A/σ_x increases with ρ. In fact, it predicts that in the strong-coupling regime the damped oscillator outperforms the limit-cycle oscillator. We emphasize, however, that in this regime our weak-coupling analysis breaks down and other effects come into play; for example, non-linearities arising from the bounded character of p(t) distort the signal, reducing information transmission. In the presence of both noise sources, we expect an optimal coupling that maximizes information transmission (SIIF <cit.>). For the limit-cycle oscillator the optimum arises from the trade-off between minimizing input-noise propagation and maximizing internal-noise suppression. For the damped oscillator, A/σ_x first rises with ρ because coupling helps to lift the signal above the internal noise, but then plateaus when the input noise (which increases with ρ) dominates over the internal noise; for even higher ρ, it decreases again because of signal distortion. In section SIE <cit.> we verify these predictions for our computational models using stochastic simulations. Experiments have shown that the clock of S. elongatus has a strong temporal stability with a correlation time of several months <cit.>, suggesting that the internal noise is small. Indeed, typical input-noise strengths based on weather data <cit.> and internal-noise strengths based on protein copy numbers in S. elongatus <cit.> indicate that in the biologically relevant regime, at least for cyanobacteria, input noise dominates over internal noise (Fig. S5 <cit.>). In this regime, the focus of our paper, the optimal coupling is weak and limit-cycle oscillators are generically more robust to input noise than damped oscillators. This work is part of the research programme of the Netherlands Organisation for Scientific Research (NWO) and was performed at AMOLF. DKL acknowledges NSF grant DMR 1056456 and grant PHY 1607611 to the Aspen Center for Physics, where part of this work was completed. We thank Jeroen van Zon and Nils Becker for a critical reading of the manuscript. Supplemental Material: Robustness of circadian clocks to input noise This supporting information provides background information on the computational models and analytical models that we have studied. The computational models are described in the next section, while the analytical models are discussed in section <ref>.
§ COMPUTATIONAL MODELS In this section, we describe the three computational models that we have considered in this study: the push-pull network; the uncoupled-hexamer model; and the coupled-hexamer model. We also describe how we have modeled the input signal and how the systems are coupled to the input. As described in the main text, we are interested in the question how the robustness to input noise depends on the architecture of the readout system; we therefore model these systems with deterministic mean-field chemical rate equations. However, here in the Supporting Information we also test how robust our findings are, not only to the shape of the input signal, but also to the presence of internal noise. In the next section, we first describe how we have modeled the input signal. In the subsequent sections, we then describe the deterministic computational models, how they are coupled to the input, and how we have set their parameters. Table <ref> lists the values of all the parameters of all the models. In section <ref> we show that the principal findings of Fig. 2 are robust to the presence of internal noise and in section <ref> we show that they are robust to the type of input signal and the noise correlation time.
§.§ Input signal The input signal is modeled as a sinusoidal oscillation with additive noise: s(t) = sin(ω t) + s̅ + η_s(t), (s_t) where s̅ is the mean input signal and η_s(t) describes the input noise. The noise in the input is assumed to be uncorrelated with the mean input signal s(t). Moreover, we assume that the input noise has strength σ^2_s and is colored, relaxing exponentially with correlation time τ_c: ⟨η_s(t) η_s(t^')⟩ = σ^2_s e^-|t-t^'|/τ_c. The input signal s(t) is coupled to the system by modulating the phosphorylation rate k_α of the core clock protein, as we describe in detail for the respective computational models in the next sections.
Here, k_α = k_f, k_ps, k_i, depending on the computational model. As we will see, the net phosphorylation rate is given by k_α s(t) = k_αs̅ + k_α (sin(ω t) + η_s). (kfs) This expression shows that in the presence of oscillatory driving, the mean phosphorylation rate averaged over a period is set by k_αs̅, while the amplitude of the oscillation in the phosphorylation rate, which sets the strength of the forcing, is given by k_α. We also note that k_α amplifies not only the “true” signal sin(ω t), but also the noise η_s, the consequences of which will be discussed below. Lastly, the absence of any oscillatory driving is modeled by taking s(t) = s̅, such that the net phosphorylation rate is then k_αs̅. The phosphorylation rate in the presence of stochastic driving is thus characterized by the following parameters: the mean phosphorylation rate k_αs̅, the amplitude of the phosphorylation-rate oscillations k_α, and the noise η_s(t), characterized by the noise strength σ^2_s and correlation time τ_c. We will vary σ^2_s and τ_c systematically, while s̅ and k_α, together with the other system parameters, will be optimized to maximize the mutual information, as described below. While we will vary σ^2_s, weather data gives us ball-park estimates for the typical input-noise strengths. The weather data of <cit.> indicates that the average relative noise intensity at noon is around δ I^2/I^2 ≈ 0.2 - 0.3, which corresponds to σ^2_s / s̅^2 in our model, yielding σ^2_s ≈ 1 - 2 for the baseline parameter value of the mean signal s̅=2 (see Table <ref>). Because there will be variations in the fluctuations in the light intensity from day to day, we will also study higher values of the input noise. In the simulations, realisations of η_s(t) are generated via the Ornstein-Uhlenbeck process η̇_s = - η_s/τ_c + ξ(t), where ξ(t) is Gaussian white noise with ⟨ξ(t)ξ(t^')⟩ = ⟨ξ^2⟩δ(t-t^'). This generates colored noise of η_s(t), ⟨η_s(t) η_s(t^')⟩ = σ^2_s e^-|t-t^'|/τ_c, where σ^2_s = ⟨ξ^2⟩τ_c / 2. The results of Fig. 2 of the main text correspond to τ_c = 0.5 h, consistent with the weather data of <cit.>. However, we have tested the robustness of the results by varying the noise correlation time τ_c. In addition, to test the robustness of our observations to changes in the shape of the input signal, we have also varied that. These tests are described in section <ref> and the results are shown in Robustness. Clearly, the principal result of Fig. 2 of the main text is robust to changes in both the noise correlation time τ_c and the shape of the mean-input signal.
§.§ Push-pull network The deterministic push-pull network is described by the following reaction: ẋ_p = k_f s(t) (x_T - x_p(t)) - k_b x_p(t), (PPN_CPM) where x_T = x + x_p is the total protein concentration, x_p is the concentration of phosphorylated protein, k_f s(t) is the phosphorylation rate k_f times the input signal s(t) (see s_t) and k_b is the dephosphorylation rate.
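To illustrate how such driven traces can be produced, the sketch below integrates PPN_CPM with a simple Euler-Maruyama scheme, generating the colored input noise from the Ornstein-Uhlenbeck process defined above. The rate constants follow the values quoted in this section; the time step, trace length and x_T are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

T, omega = 24.0, 2 * np.pi / 24.0        # driving period (h) and frequency
s_bar, sigma2_s, tau_c = 2.0, 1.0, 0.5   # mean input, noise strength, correlation time (h)
k_f, k_b, x_T = 0.01, 0.3, 1.0           # rates (1/h) and total concentration (arbitrary)

dt = 0.01                                # integration step (h)
n_steps = int(200 * T / dt)              # roughly 200 driving periods

xi_amp = np.sqrt(2 * sigma2_s / tau_c)   # white-noise amplitude giving <eta^2> = sigma2_s
eta, x_p = 0.0, 0.0
p_trace, t_trace = np.empty(n_steps), np.empty(n_steps)

for n in range(n_steps):
    t = n * dt
    # Ornstein-Uhlenbeck update of the input noise eta_s(t)
    eta += (-eta / tau_c) * dt + xi_amp * np.sqrt(dt) * rng.standard_normal()
    s = np.sin(omega * t) + s_bar + eta
    # Push-pull network, Eq. (PPN_CPM)
    x_p += (k_f * s * (x_T - x_p) - k_b * x_p) * dt
    p_trace[n] = x_p / x_T
    t_trace[n] = t % T
```

Histogramming the pairs (p, t mod T) collected this way gives the joint distribution P(p,t) that enters I(p;t).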
PPNA shows a time trace of both a driven and a non-driven push-pull network.Setting the parametersThe steady-state mean phosphorylation level is set by p̅ = x̅_p /x_T =k_ fs̅ / (k_ fs̅ + k_ b). We anticipated, based on the analytical calculations described in section <ref>, that a key timescale is k_ b and that the system should operate in the regime in which it responds linearly to changes in the mean input s̅. This means that for a given k_ b, k_ f and s̅ cannot be too large. We have chosen s̅=2, and then varied k_ f and k_ b to optimize the mutual information. We then verified a posteriori that the value of s̅=2 indeed puts the system in the optimal linear regime. Optimal dephosphorylation rate Specifically, the parameters k_ f and k_ b are set as follows: for a given input noise strength σ^2_s=1.0, we first fix the phosphorylation rate k_ f and compute the mutual information I(p;t) between the phosphorylated fraction p(t) = x_p(t) / x_T and time t as a function of the dephosphorylation rate k_b; we then repeat this procedure by varying k_ f. The result is shown in PPNB. Clearly, there exists an optimal value of k_ b that maximizes I(p;t). Moreover, the optimal value k_ b^ opt becomes indepdendent of k_ f when k_ f becomes so small that the system enters the regime in which it responds linearly to changes in the mean input s̅. We then fixed the phosphorylation rate to k_ f=0.01/ h, and compute I(p;t) as a function of k_ b for different levels of the input-noise strength, see PPNC. It is seen that the optimal dephosphorylation rate k_ b^ opt is essentially independent of the input noise strength σ^2_s.In the simulations corresponding to Fig. 2 of the main text, we therefore kept k_ b constant at k_ b^ opt=0.3/ h and k_ f constant at k_ f=0.01/ h when we varied σ^2_s.The observation that k_ b^ opt is independent of k_ f and σ^2_s can be understood by noting that to maximize information transmission, the system should operate in the linear-response regime in which the mean output x̅ responds linearly to changes in the mean input s̅. This regime tends to enhance information because it ensures that in the presence of a sinusoidal input, the output x_p(t) will not be distorted and be sinusoidal too. In this linear-response regime, the system can be analyzed analytically, see muopt in section <ref> below. This equation, which accurately predicts the optimum seen in PPNB and PPNC, reveals that the optimal dephosphorylation rate depends on the frequency of the driving signal, ω, and the correlation time of the noise, τ_c, but not on the noise strength σ^2_s and the coupling ρ to the input signal, given by ρ=k_ f x_T. Increasing the gain ρ amplifies not only the true signal, but also the noise in that signal (see also kfs), such that the signal-to-noise ratio is unaltered. Indeed, increasing the gain only helps in the presence of internal noise, which ẖe̱ṟe̱ ̱a̱ṉḏ ̱ṯẖe̱ ̱m̱a̱i̱ṉ ̱ṯe̱x̱ṯ, however, is zero. 
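For completeness, the mutual information used in these scans can be estimated from such a long trace by binning the joint distribution; a minimal sketch is given below (the bin numbers are arbitrary choices and no sampling-bias correction is applied).

```python
import numpy as np

def mutual_information(p_trace, t_trace, T=24.0, n_p=50, n_t=48):
    """Estimate I(p;t) in bits by histogramming P(p,t) from a long trace."""
    H, _, _ = np.histogram2d(p_trace, t_trace, bins=[n_p, n_t],
                             range=[[0.0, 1.0], [0.0, T]])
    P_pt = H / H.sum()                       # joint distribution P(p,t)
    P_p = P_pt.sum(axis=1, keepdims=True)    # marginal P(p)
    P_t = P_pt.sum(axis=0, keepdims=True)    # marginal P(t), close to uniform
    mask = P_pt > 0
    return np.sum(P_pt[mask] * np.log2(P_pt[mask] / (P_p @ P_t)[mask]))
```

With the trace of the previous sketch, mutual_information(p_trace, t_trace) returns the estimate in bits.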
In sections <ref> and <ref> we discuss the role of internal noise. As OptCouplingIntExtNoise shows, in the presence of not only input noise but also internal noise, there exists an optimal, non-zero, coupling strength, which arises as a trade-off between lifting the amplitude of the output above the internal noise (which necessitates a sufficiently large coupling strength, see SNR_HO_IntNoise) and minimizing the distortions of the shape of the output signal. However, for biologically relevant copy numbers the internal noise is small, while signal distortions only kick in at large coupling strengths. Consequently, the optimum is broad (OptCouplingIntExtNoise). The chosen coupling strength here is in the plateau regime in which the mutual information is maximized in the presence of both internal and input noise.
§.§ Uncoupled-hexamer model: Kai system of Prochlorococcus Background The uncoupled-hexamer model (UHM) presented in the main text is a minimal model of the Kai system of the cyanobacterium Prochlorococcus and, possibly, the purple bacterium Rhodopseudomonas palustris. The well characterized clock of the cyanobacterium S. elongatus consists of three proteins, KaiA, KaiB and KaiC, which are all essential for sustaining free-running oscillations <cit.>. And, indeed, many cyanobacteria possess at least one copy of each kai gene. One exception is Prochlorococcus, which contains kaiB and kaiC, but misses a (functional) kaiA gene. Interestingly, in daily (12h:12h) light-dark (LD) cycles, the expression of many genes, including kaiB and kaiC, is rhythmic, but in constant conditions these rhythms damp very rapidly <cit.>. Similar behavior is observed for the purple bacterium R. palustris, which possesses homologs of the kaiB and kaiC genes <cit.>: under LD conditions, the KaiC homolog appears to be phosphorylated in a circadian fashion, but under constant conditions, the oscillations decay very rapidly; physiological activities, such as the nitrogen fixation rates, follow a similar pattern <cit.>. Of particular interest is the observation that under LD conditions but not under LL conditions, the growth rate is significantly reduced in the strain in which the kaiC homolog was knocked out <cit.>. This strongly suggests that the (homologous) Kai system plays a role as a timekeeping mechanism, which relies, however, on oscillatory driving. Model Our model is inspired by the models that in recent years have been developed for S. elongatus <cit.>. These models share a number of characteristics that are essential for generating oscillations and entrainment (see also next section). The central clock component is KaiC, a hexamer, that can switch between an active state in which the phosphorylation level tends to rise and an inactive one in which it tends to fall. The model lacks KaiA because Prochlorococcus and R. palustris miss a functional kaiA gene <cit.>. In S.
elongatus, KaiB does not directly affect the rates of phosphorylation and dephosphorylation, but mainly serves to stabilize the inactive state and mediate KaiA binding by inactive KaiC <cit.>. KaiB is therefore not modelled explicitly <cit.>. The main entrainment signal for S. elongatus is the ratio of ATP to ADP levels, which depends on the light intensity, and predominantly couples to KaiC in its active conformation <cit.>. These observations give rise to the following chemical rate equations o̱f̱ ̱ ̱ ̱o̱u̱ṟ ̱ḏe̱ṯe̱ṟm̱i̱ṉi̱s̱ṯi̱c̱ ̱m̱o̱ḏe̱ḻ:ċ_0= k_ sc̃_0 - k_ f s(t) c_0 UHM_F ċ_i= k_ f s(t) (c_i-1 - c_i)i ∈ (1,…,5) ċ_6= k_ f s(t) c_5 - k_ s c_6 ċ̃̇_6=k_ s c_6 - k_ fc̃_6 ċ̃̇_i=k_ b (c̃_i+1 - c̃_i)i ∈ (1,…,5) ċ̃̇_0=k_ bc̃_1 - k_ sc̃_0 UHM_LHere, c_i, with i=0,…,6, is the concentration of active i-fold phosphorylated KaiC in its active conformation, while c̃_i is the concentration of inactive i-fold phosphorylated KaiC. The quantity k_ s is the conformational switching rate, k_ b is the dephosphorylation rate of inactive KaiC, and k_ f s(t) is the phosphorylation rate of active KaiC, k_ f, times the input signal s(t). The output is the phosphorylation fraction of KaiC proteins (monomers), given by <cit.>p(t) = 1/6∑_i=0^6 i (c_i + c̃_i)/∑_i=0^6 (c_i + c̃_i). pdefUHMA shows a time trace of the phosphorylation level p(t) of both a driven and a non-driven uncoupled-hexamer model. Intrinsic frequency Because the cycles of the different hexamers are not coupled via KaiA as in the coupled-hexamer model and in S. elnogatus, the system cannot sustain free-running oscillations. In this respect, the system is similar to the push-pull network in the sense that a perturbation of the non-driven system will relax to a stable fixed point. However, this model differs from the push-pull network in that it has a characteristic frequency ω_0=2π / T_0 with intrinsic period T_0, arising from the phosphorylation cycle of the KaiC hexamers. Consequently, while a perturbed (non-driven) push-pull network will relax exponentially to its stable fixed point, the uncoupled-hexamer model will, when not driven, relax in an oscillatory fashion to its stable fixed point with an intrinsic frequency ω_0 (see UHMA). To predict the latter, we note that the dynamics of UHM_FUHM_L can be written in the form ẋ =A x, and when all rate constants are equal, k_ fs̅= k_ b = k_ s, the eigenvalues and eigenvectors of A can be computed analytically. The eigenvectors are complex exponentials. For a cycle with N sites with hopping rate k, the frequency associated with the lowest-lying eigenvalue is k sin(2 π/N), which to leading order is 2π k / N, corresponding to a period T_0 = N / k. Please note that this is also the period of a single multimer with N (cyclic) sites with N equal rates of hopping from one site to the next. We therefore expect that, to a good approximation, the intrinsic frequency ω_0=2π/T_0 of an ensemble of hexamers corresponds to the intrinsic period of a single hexamer:T_0 ≃2/k_ s + 6/k_ fs̅ + 6/k_ b≃6/k_ fs̅+6/k_ bUHMT0,where we recall that in the non-driven system the phosphorylation rate is k_ fs̅. We verfied that this approximation is very accurate by fitting the relaxation of p(t) of the UHM to a function of the form e^-γ tsin(ω_0 t), with ω_0 = 2π / T_0. 
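A direct way to perform this check is to integrate the non-driven equations UHM_F-UHM_L from a synchronized initial condition and inspect the damped oscillation of p(t); a minimal sketch is given below. Here k_s is set to an arbitrary large value, and the dephosphorylation of inactive KaiC is taken at rate k_b throughout, which is an assumption about how to read the c̃_6 equation above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-driven UHM: s(t) = s_bar. Rates as quoted in the text; k_s is an assumption.
s_bar, k_f, k_b, k_s = 2.0, 0.26, 0.52, 10.0

def uhm_rhs(t, y):
    c, ct = y[:7], y[7:]                    # active c_0..c_6 and inactive c~_0..c~_6
    kf = k_f * s_bar                        # mean phosphorylation rate of active KaiC
    dc, dct = np.zeros(7), np.zeros(7)
    dc[0]    = k_s * ct[0] - kf * c[0]
    dc[1:6]  = kf * (c[0:5] - c[1:6])
    dc[6]    = kf * c[5] - k_s * c[6]
    dct[6]   = k_s * c[6] - k_b * ct[6]     # assumption: inactive KaiC dephosphorylates at k_b
    dct[1:6] = k_b * (ct[2:7] - ct[1:6])
    dct[0]   = k_b * ct[1] - k_s * ct[0]
    return np.concatenate([dc, dct])

y0 = np.zeros(14); y0[0] = 1.0              # synchronized start: all active, unphosphorylated
sol = solve_ivp(uhm_rhs, (0.0, 200.0), y0,
                t_eval=np.linspace(0.0, 200.0, 20001), rtol=1e-8)

i = np.arange(7.0)
p = (i @ sol.y[:7] + i @ sol.y[7:]) / (6.0 * sol.y.sum(axis=0))   # Eq. (pdef)
print("predicted T_0 =", 6.0 / (k_f * s_bar) + 6.0 / k_b, "h")
# p(t) relaxes to its fixed point as a damped oscillation whose period can be
# compared with the prediction, e.g. by fitting exp(-gamma*t)*sin(2*pi*t/T_0).
```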
The intrinsic period T_0 obtained in this way is to an excellent approximation given by UHMT0.Setting the parametersThe parameters were set as follows: the conformational switching rate k_ s was set to be larger than the (de)phosphorylation rates k_ s≫{k_ f,k_ b}, as in the original models <cit.>. This leaves for a given input noise η_s, three parameters to be optimized: the phosphorylation rate k_ f, the dephosphorylation rate k_ b, and the mean input signal s̅. The product k_ fs̅ determines the mean phosphorylation rate, while k_ f separately determines the strength of the forcing, i.e. the amplitude of the oscillations in the phosphoryation rate (see kfs). The quantities k_ fs̅ and k_ b together determine the intrinsic frequency ω_0=2π / T_0 (see UHMT0) and the symmetry of the phosphorylation cycle, set by the ratio r≡ k_ b / (k_ fs̅).Optimal intrinsic frequency We therefore first computed for different input-noise strengths σ^2_s, the mutual information I(p;t) as a function of the ratio r=k_ b/(k_ fs̅) and a scaling factor q that scales both k_ f and k_ b, keeping s̅=2. UHMB shows the heatmap of I(p;t) = I(r,q) for σ^2_s=1, but qualitatively similar results were obtained for other values of σ^2_s (as discussed below). Since the intrinsic frequency ω_0 depends on both r and q (see UHMT0), we have superimposed contourlines of constant ω_0. Interestingly, the figure shows that in the relevant regime of high mutual information, I(p;t) follows the contourlines of constant ω_0. This shows that I(p;t) depends on r and q predominantly through ω_0 (r,q), I (p;t) ≈ I(ω_0(r,q)). It demonstrates that the mutual information is primarly determined by the intrinsic period T_0—the time to complete a single cycle—and not by the evenness of the pace around the cycle set by r.To reveal the dependence of I(ω_0) on σ^2_s, we show in panel C for different values of σ^2_s, I(p;t) as a function of ω_0, which was varied by scaling k_ f and k_ b via the scaling factor q, keeping the ratio of k_ fs̅ and k_ b constant at r=1 (while also keeping s̅=2). Clearly, there is an optimal frequency ω_0^ opt≈ 1.04 ω corresponding to an optimal k=k_ fs̅=k_ b=0.52/ h, that maximizes the mutual information which is essentially independent of σ^2_s. In Fig. 2 of the main text, when we vary σ^2_s, we thus kept k=k_ fs̅ = k_ b = 0.52/ h constant, with k_ f=0.26/ h and s̅=2.Interestingly, the optimal intrinsic frequency ω_0^ opt is not equal to the driving frequency ω: ω_0^ opt> ω, yielding an intrinsic period T_0^ opt≈ 23.1h that is smaller than 24 hrs. This can be understood by analyzing the simplest model that mimics the uncoupled-hexamer model: the (damped) harmonic oscillator, which, like the uncoupled-hexamer model, is a linear system with a characteristic frequency. As described in <ref>, we expect generically for such a system that the optimal intrinsic frequency is larger than the driving frequency: ω_0^ opt > ω. This is because while the amplitude A of the output (the “signal”) is maximal at resonance, ω_0 = ω (see A_HO), input-noise averaging is maximized (i.e. output noise σ_x minimized) for large ω_0 (see noiseHOCol), such that the signal-to-noise ratio A/σ_x is maximal for ω_0^ opt > ω.Mutual information is less sensitive to coupling strength Lastly, while k_ fs̅ and k_ b are vital by setting the intrinsic period T_0 (UHMT0) that maximizes the mutual information (panels B and C of UHM), we now address the importance of the coupling strength, which is set by k_ f separately (see kfs). 
To this end, we computed the mutual information I(p;t) as a function of k_f and s̅, keeping the dephosphorylation rate constant at k_b=0.52/h. UHMD shows the result. It is seen that there is, as in panel B, a band along which the mutual information is highest. This band coincides with the superimposed dashed white line along which k_fs̅=0.52/h and hence T_0 are constant (see UHMT0). This shows that the mutual information I(p;t) is predominantly determined by the intrinsic period T_0: as the parameters are changed in a direction perpendicular to this line (and T_0 changes most strongly), then I(p;t) falls dramatically. In contrast, along the dashed white line of constant T_0, I(p;t) is nearly constant. It shows that the precise strength of the forcing, set by k_f, is not critical for the mutual information. This behavior mirrors that observed for the push-pull network. While increasing k_f increases the amplitude of the oscillations in p(t), it also increases the noise, such that the signal-to-noise ratio and hence the mutual information are essentially unchanged. The same behavior is observed for the minimal model of this system, the harmonic oscillator, described in <ref>. Yet, as for the push-pull network, in the presence of internal noise there exists an optimal coupling strength, as shown in OptCouplingIntExtNoiseB and discussed in section <ref>. However, as for the push-pull network, the optimum is broad: the signal needs to be lifted above the internal noise, yet for larger coupling the effective input noise (which scales with the coupling) dominates over the internal noise, leading to a regime in which the mutual information remains essentially unchanged; the chosen coupling strength here is in this regime (OptCouplingIntExtNoise). To sum up, in the simulations corresponding to Fig. 2 of the main text, we kept k_b = k_fs̅ = 0.52/h, with s̅=2 and k_f=0.26/h.
§.§ Coupled-hexamer model: Kai system of S. elongatus Background In contrast to the cyanobacterium Prochlorococcus and the purple bacterium R. palustris, the cyanobacterium S. elongatus harbors all three Kai proteins, KaiA, KaiB, and KaiC, and can (therefore) exhibit self-sustained, limit-cycle oscillations <cit.>. The circadian system combines a transcription-translation cycle (TTC) <cit.> with a protein phosphorylation cycle (PPC) of KaiC <cit.>, and in 2005 the latter was reconstituted in the test tube <cit.>. The dominant pacemaker appears to be the protein phosphorylation cycle <cit.>, although at higher growth rates the transcription-translation cycle is important for maintaining robust oscillations <cit.>. Changes in light intensity induce a phase shift of the in-vivo clock and cause a change in the ratio of ATP to ADP levels <cit.>. Moreover, when these changes in ATP/ADP levels were experimentally simulated in the test tube, they induced a phase shift of the protein phosphorylation cycle which is similar to that of the wild-type clock <cit.>.
These experiments indicate that the phosphorylation cycle is not only the dominant pacemaker, but also the cycle that couples the circadian system to the light input. We therefore focused on the protein phosphorylation cycle.Due to the wealth of experimental data, the in-vitro protein phosphorylation cycle of S. elongatus has been modeled extensively in the past decade <cit.>. In <cit.> we presented a very detailed thermodynamically consistent statistical-mechanical model, which is based on earlier models <cit.> and can explain most of the experimental observations. The coupled-hexamer model (CHM) presented here is a minimal version of these models.It contains the necessary ingredients for describing the autonomous protein-phosphorylation oscillations and the coupling to the light input, i.e. the ATP/ADP ratio.The model is similar to the uncoupled-hexamer model described in the previous section, with KaiC switching between an active state in which the phosphorylation level tends to rise and an inactive in which it tends to fall. The key difference between the two systems is that the CHM also harbors KaiA, which synchronizes the oscillations of the individual hexamers via the mechanism of differential affinity <cit.>, allowing for self-sustained oscillations. Specifically, KaiA is needed to stimulate phosphorylation of active KaiC, yet inactive KaiC can bind KaiA too. Consequently, inactive hexamers that are in the dephosphoryation phase of the phosphorylation cycle—the laggards—can take away KaiA from those KaiC hexamers that have already finished their phosphorylation cycle—the front runners. These front runners are ready for a next round of phosphorylation, but need to bind KaiA for this. By strongly binding and sequestering KaiA, the laggards can thus take away KaiA from the front runners, thereby forcing them to slow down. This narrows the distribution of phosphoforms, and effectively synchronizes the phosphorylation cycles of the individual hexamers <cit.>. The mechanism appears to be active not only during the inactive phase, but also during the active phase: KaiA has a higher binding affinity for less phosphorylated KaiC <cit.>. Since KaiB serves to mainly stabilize the inactive state and mediate the sequestration of KaiA by inactive KaiC, KaiB is, as in the UHM and following <cit.>, only modelled implicitly.Model Since computing the mutual information accurately requires very long simulations, we sought to develop a minimal version of the PPC model presented in <cit.>, which can describe a wealth of data including the concentration dependence of the self-sustained oscillations and the coupling to ATP/ADP <cit.>.This model i̱s̱ ̱ḏe̱ṯe̱ṟm̱i̱ṉi̱s̱ṯi̱c̱ ̱a̱ṉḏ described by the following chemical rate equations:ċ_0 =k_ sc̃_0 - s(t) c_0 [k_0A/A+K_0 + k_ psK_0/A+K_0] CHM_F ċ_i =s(t) c_i-1[k_i-1A/A+K_i-1 + k_ psK_i-1/A+K_i-1] -s(t) c_i[ k_i A/A+K_i +k_ psK_i/A+K_i]i∈ (1,…,5) ċ_6 = s(t) c_5[k_5A/A+K_5 + k_ psK_5/A+K_5] - k_ s c_6 ċ̃̇_6 = k_ s c_6 - k_ bc̃_6 ċ̃̇_i = k_ b (c̃_i+1 - c̃_i)i ∈ (1,…,5) ċ̃̇_0 = k_ bc̃_1 - k_ sc̃_0 A =A_ T - ∑_j=0^5 c_j A/A+K_j - ∑_j=0^6 b_j c̃_jA^b_j/A^b_j+K̃_j^b_jAfreeHere, c_i and c̃_i are the concentrations of active and inactive i-fold phosphorylated KaiC, A is the concentration of free KaiA. The rates k_i are the rates of KaiA-stimulated phosphorylation of active KaiC and k_ ps is the spontaneous phosphorylation rate of active KaiC when KaiA is not bound. 
Please note that both rates are multiplied by the input signal s(t), since both rates depend on the ATP/ADP ratio <cit.>. The dephosphorylation rate k_ b is independent of the ATP/ADP ratio <cit.> and hence k_ b is not multiplied with s(t). As in the UHM, k_ s is the conformational switching rate. The last equation, Afree, gives the concentration A of free KaiA under the quasi-equilibrium assumption of rapid KaiA (un)binding by active KaiC with affinity K_i (second term right-hand side) and rapid binding of KaiA by inactive KaiC, where each i-fold phosphorylated inactive KaiC hexamer can bind b_i KaiA dimers (last term right-hand side Afree).The mechanism of differential affinity is implemented via two ingredients: 1) the dissociation constant of KaiA binding to active KaiC, K_i, depends on the phosphorylation level i, with less phosphorylated KaiC having a higher binding affinity: K_i < K_i+1 <cit.>; 2) inactive KaiC can strongly bind and sequester KaiA <cit.>; this is modeled by the last term in Afree.Autonomous oscillations CHMA shows a time trace of p(t) (pdef) for both a driven and a non-driven coupled-hexamer model. Clearly, in contrast to the push-pull network and the uncoupled-hexamer model, this system exhibits free running simulations. Note also that the autonomous oscillations are slightly asymmetric as observed experimentally, and as shown also by the detailed models on which this minimal model is based <cit.>. Lastly, while the driving signal is sinusoidal, the output signal of the driven system remains non-sinusoidal. This is because this system is non-linear; this behavior is indeed in marked contrast to the behavior seen for the linear UHM (see UHM) and that of the PPN (PPN) which operates in the linear regime.The slight asymmetry in the oscillations also explains why in the regime of very low noise, this system has a slightly lower mutual information than that of push-pull network or the uncoupled-hexamer model, as seen in Fig. 1 of the main text. Setting the parametersF̱ṟe̱e̱-̱ṟu̱ṉṉi̱ṉg̱ ̱o̱s̱c̱i̱ḻḻa̱ṯo̱ṟ We first set the parameters to get autonomous oscillations, keeping s(t)=s̅=2. These parameters were inspired by the parameters of the model upon which the current model is built <cit.>. Specifically, the KaiA binding affinity of active KaiC, given by K_i, was chosen such that it obeys differential affinity, K_0 < K_1 < K_2 < K_3 < K_4 < K_5 , as in the PPC model of <cit.>.In addition, in our model, b_i=2 for i=1,2,3,4 and b_i=0 for i=0,5,6, meaning that i=1-4 fold phosphorylated inactive KaiC hexamers can each bind two KaiA dimers with strong affinity K̃_i=K̃.The conformational switching rate k_ s was set to be higher than all the (de)phosphorylation rates, k_ s >> {k_i, k_ ps, k_b} and the values of k_i, k_ ps, k_ b were, again apart from a scaling factor to set the optimal intrinsic frequency as described below, identical to those of the PPC model of <cit.>. These parameter values allowed for robust free-running oscillations (see CHMA) in near quantitative agreement with the oscillations of the more detailed PPC model of <cit.>.Driven oscillator: Optimal intrinsic frequency We then studied the driven system. We computed the mutual information I(p;t) as a function of the mean signal s̅ and the phosphorylation rates k_i=k_1=…=k_5, see CHMB. 
Driven oscillator: Optimal intrinsic frequency We then studied the driven system. We computed the mutual information I(p;t) as a function of the mean signal s̅ and the phosphorylation rates k_i=k_1=…=k_5, see CHMB. While the intrinsic frequency is primarily determined by the mean phosphorylation rate k_i s̅, as illustrated by the dashed white line of constant intrinsic frequency ω_0, the coupling strength is (for a given mean k_i s̅) set by the amplitude k_i (see kfs). Panel B shows that the mutual information changes markedly in the direction perpendicular to the white line, indicating that I(p;t) strongly depends on ω_0. To illustrate this further, we varied the intrinsic frequency ω_0 of the autonomous oscillations by varying all (de)phosphorylation rates {k_i, k_ps, k_b} by a constant factor and computed the mutual information I(p;t) as a function of this factor and hence ω_0. The result is shown in CHMC. Clearly, as for the uncoupled-hexamer model, there exists an optimal intrinsic frequency ω_0^opt that maximizes I(p;t). The optimal intrinsic frequency depends on the input-noise strength: for low input noise, ω_0^opt<ω, but ω_0^opt increases with σ^2_s to become similar to ω in the high-noise regime. We also see, however, that the dependence of ω_0^opt on σ^2_s is rather weak (CHMB). We therefore kept these parameters constant in the simulations corresponding to Fig. 2 of the main text.

Driven oscillator: mutual information increases with decreasing coupling strength as long as the system remains inside the Arnold tongue Along the white dashed line of panel B (corresponding to the blue line in panel D), ω_0 = ω, and the mutual information I(p;t) decreases as the coupling strength k_i is increased. Indeed, when there is no detuning (ω_0 = ω) and no internal noise, I(p;t) is maximized when the coupling strength goes to zero. This can be understood by noting that a) the limit-cycle oscillator has, in stark contrast to the push-pull network and the uncoupled-hexamer system, an intrinsic robust amplitude, which does not rely on driving by the input signal; b) decreasing the coupling reduces the propagation of the input fluctuations. In section <ref> we prove analytically that, concerning the robustness to input noise: a) the optimal regime is that of weak coupling; b) in this regime, systems based on a limit-cycle attractor, such as the CHM, are superior to those based on a fixed-point attractor, such as the PPN and the UHM.

Driven oscillator: with non-zero detuning, coupling is necessary to keep the system inside the Arnold tongue Importantly, there will always be a finite amount of internal noise. In addition, the intrinsic period will never be exactly 24 h. In both cases, coupling is essential to keep the system in phase with the driving signal. In the next section we discuss the role of internal noise, but in panel D of CHM we show for the deterministic CHM the importance of coupling when there is a finite amount of detuning (ω - ω_0)/ω. Clearly, for non-zero detuning, the mutual information first rises as the coupling strength is decreased (because that minimizes input-noise propagation), but then suddenly drops as the system moves out of the Arnold tongue: when the intrinsic period does not match the period of the driving signal, a minimal coupling is essential to firmly lock the oscillations to the input signal (keeping the system inside the Arnold tongue); indeed, as panel D shows, the required coupling strength increases with the amount of detuning <cit.>.

Setting the coupling strength and the other parameters The fact that the mutual information depends on the amount of detuning (CHMD) and also on internal noise, as shown in the next section (OptCouplingIntExtNoise), raises the question what the natural procedure is to set its value. We have decided to set the relative coupling strength to a value that is comparable to the coupling strength of the PPC of S. elongatus. Specifically, Fig. 3B of Phong et al. <cit.> shows that the kinase rate of the CII domain increases from 0.1/h at an ATP fraction of 25% to 0.42/h at an ATP fraction of 100%. Assuming the ATP fraction oscillates between these levels inside the cell <cit.>, the amplitude over the mean of the oscillations of the kinase rate is around 0.6. This should be compared to k_i/(k_i s̅) = 1/s̅ in our model (see kfs). With s̅=2, the coupling strength is indeed comparable to that of the PPC of S. elongatus. We thus kept s̅=2 fixed and then optimized over the intrinsic frequency by scaling the (de)phosphorylation rates k_i, k_ps, k_b, as shown in CHMC. This yielded ω_0^opt=0.96 ω, corresponding to an intrinsic period T_0 = 25.1 h. Table <ref> gives an overview of all the parameters.

Finally, we emphasize that the chosen coupling strength is a conservative estimate: if the ATP fraction oscillates from 0.2 to 0.6 inside the cell <cit.>, then the in vivo coupling strength will be lower; as panel D shows, the performance of the CHM, regarding robustness to input noise, will then be even higher. In fact, as OptCouplingIntExtNoiseDetuningA shows, the optimal coupling strength that maximizes the mutual information for the CHM in the presence of both detuning and internal noise at biologically relevant strengths is even lower than that corresponding to Fig. 2 of the main text. In comparing the CHM against the UHM and PPN, we thus consider a “worst-case” scenario for the CHM. Indeed, even for this scenario, the CHM is much more robust to input noise than the PPN and UHM, as Fig. 2 of the main text shows.

§.§ Robustness to internal noise

The computational models of the readout systems considered in the main text and above are deterministic; only the input signal is stochastic. In this section, we address the question how robust the results on our computational models are to the presence of internal noise that arises from the inherent stochasticity of chemical reactions. To isolate the effect of internal noise, we first zoom in on the interplay between internal and input noise in the absence of any detuning for the UHM and CHM (OptCouplingIntExtNoise), and then we study the biologically relevant regime with a finite amount of detuning (OptCouplingIntExtNoiseDetuning). OptCouplingIntExtNoise shows that in the presence of both sources of noise, all computational models exhibit an optimal coupling strength that maximizes information transmission. OptCouplingIntExtNoiseDetuning then demonstrates that in the biologically relevant regime, at least for cyanobacteria: 1) the optimal coupling is weak because the input noise dominates over the internal noise; 2) the coupled-hexamer model is more robust to input noise than the push-pull network and the uncoupled-hexamer model. We elucidate these results using our analytical models in sections <ref> and <ref>.

Stochastic simulations To investigate the role of internal noise, we have performed stochastic Gillespie simulations <cit.> of all three computational models. These simulations take into account the inherent stochasticity of the chemical reactions, yet do assume that the system remains well-stirred at all times. We keep the magnitude of the internal noise fixed by keeping the copy number N of the central clock component, X in the PPN and the KaiC hexamer in the UHM and CHM, constant at N=1000; this number is comparable to the number of KaiC hexamers in the cyanobacterium S. elongatus <cit.>. The stochastic models of the PPN and the UHM are the stochastic versions of the deterministic models studied above and in the main text, taking into account the stochastic phosphorylation and dephosphorylation of X and KaiC, respectively. For the stochastic model of the CHM, we have adopted the stochastic PPC model, including its parameter values <cit.>; here, KaiA and KaiB binding is modeled explicitly, but since these reactions are much faster than the (de)phosphorylation reactions, this is not important—to an excellent approximation, this model is the stochastic equivalent of the deterministic CHM studied in the main text and above.
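To give a flavor of these stochastic simulations, the minimal sketch below runs a Gillespie simulation of the simplest of the three models, the push-pull network, with N=1000 copies of X and a sinusoidally modulated phosphorylation rate; the rate constants are illustrative, the input signal is kept deterministic for simplicity, and the propensities are treated as constant between reaction events, which is accurate here because s(t) varies on a 24-h timescale, much more slowly than the reactions.

import numpy as np

rng = np.random.default_rng(1)
N, k_f, k_b = 1000, 0.2, 0.2           # copy number and illustrative rate constants [1/h]
omega, s_mean = 2.0 * np.pi / 24.0, 2.0

def s(t):                              # deterministic input signal
    return np.sin(omega * t) + s_mean

t, t_end, n_p = 0.0, 24.0 * 50, 0      # n_p = number of phosphorylated copies X_p
times, traj = [0.0], [0.0]
while t < t_end:
    a1 = k_f * s(t) * (N - n_p)        # propensity of X -> X_p (quasi-static in s(t))
    a2 = k_b * n_p                     # propensity of X_p -> X
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
    n_p += 1 if rng.random() < a1 / a0 else -1
    times.append(t)
    traj.append(n_p / N)               # output p(t): fraction of phosphorylated X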
§.§.§ The interplay between input and internal noise with no detuning

In the previous sections, we have seen that for the deterministic push-pull network and the deterministic uncoupled-hexamer model, the mutual information is essentially independent of the coupling strength in the weak-coupling regime, because increasing the coupling strength increases both the amplitude of the output (the gain) and the amplification of the input noise, leaving the signal-to-noise ratio unchanged. In contrast, for the CHM, when the intrinsic clock period is not equal to that of the driving signal, a minimal amount of coupling is necessary to phase-lock the clock to the driving and put the system inside the Arnold tongue (CHMD). Yet, once the system is inside the Arnold tongue the coupling should be as low as possible to minimize input-noise propagation.

However, for all three systems, we expect that in the presence of internal noise there is a positive effect of increasing the coupling strength, although, interestingly, the origin of the effect is different for the three respective systems: for the fixed-point attractors (PPN and UHM), increasing the coupling helps to raise the amplitude of the oscillations (the signal) above the internal noise, while for the limit-cycle attractor (CHM) increasing the coupling increases the restoring force that contains the effect of the internal noise. Section <ref> discusses these effects in more detail.

In OptCouplingIntExtNoise we show, for all three models separately, the mutual information I(p;t) as a function of the coupling strength, for different strengths of the input noise, keeping the internal noise constant. We see that in all cases there exists an optimal coupling strength that maximizes the mutual information, as predicted by the analytical models discussed in section <ref>. For the fixed-point attractors, the PPN and the UHM, the optimum is broad: a minimal coupling is required to raise the signal above the internal noise, but for larger coupling strengths the effect of the input noise, which increases with the coupling, dominates over the internal noise, and in this regime the signal-to-noise ratio is essentially constant; for even larger coupling, however, the signal will saturate (because p(t) is bounded by zero and unity), and this will lead to non-sinusoidal oscillations, causing the mutual information to go down. For the limit-cycle attractor (the CHM), the optimum is more pronounced, arising from a sharp trade-off between minimizing input-noise propagation (which favors weak coupling) and maximizing internal noise suppression (which favors strong coupling). Indeed, panel C shows that the optimal coupling strength decreases as the input noise is increased, precisely as this argument predicts.

§.§.§ Interplay between internal and input noise with detuning

In vivo, not only a finite amount of internal noise is inevitable, but also a non-zero amount of detuning. In this section, we compare the three computational models in the presence of both internal noise and detuning at biologically relevant levels.

Panel A of OptCouplingIntExtNoiseDetuning shows for the CHM the mutual information I(p;t) as a function of the coupling strength k_i, for three different input-noise levels, in the presence of internal noise and detuning at biologically relevant levels. As above, the internal noise is set by the copy number N=1000, corresponding to the number of KaiC hexamers in S. elongatus <cit.>, while the detuning is (ω - ω_0)/ω = -0.1, as measured experimentally for the reconstituted PPC of S. elongatus <cit.>. Panel A exhibits a mixture of the behavior of CHMD, corresponding to the CHM with finite detuning and no internal noise, and that of OptCouplingIntExtNoiseC, corresponding to no detuning but with internal noise present: to increase the mutual information, the coupling strength first has to rise to bring the system inside the Arnold tongue (compare with CHMD). Yet once inside the Arnold tongue, I(p;t) features an optimum arising from the interplay between minimizing input-noise propagation and maximizing internal noise suppression. We also see that the optimal coupling strength, for all input-noise levels, is lower than that of the CHM of Fig. 2 of the main text; with such a weaker coupling, the robustness of the CHM to input noise would be even higher.

In OptCouplingIntExtNoiseDetuning we compare the performance of the three computational models as a function of input-noise strength, in the presence of both internal noise and detuning at biologically relevant levels. Clearly, as observed for the deterministic systems corresponding to Fig. 2 of the main text, for low input noise, the performance of the three systems is very similar. Yet, for high input noise, the CHM is far superior. We thus conclude that the principal result of the main text, namely that a limit-cycle oscillator such as the CHM is more robust to input noise than a damped oscillator such as the PPN or UHM, is robust to the presence of internal noise.

We can understand this result by noting that in the presence of biologically relevant amounts of internal noise and input noise, the optimal coupling is weak because the input noise dominates over the internal noise. In fact, experiments have revealed that the clock of S. elongatus has a strong temporal stability, with a correlation time of several months, indicating that the internal noise is indeed small <cit.>. As we prove analytically in <ref>, in the input-noise dominated regime a limit-cycle oscillator, such as the CHM, is generically more resilient to input noise than a system with a fixed-point attractor, such as the PPN and UHM. Reducing the coupling minimizes the amplification of the input noise in all systems, but only the limit-cycle oscillator (CHM) can still sustain robust large-amplitude oscillations in this regime.

For larger internal-noise strengths than those considered here, thus outside the biological realm, it might be beneficial to increase the coupling further. Strong coupling makes it possible to exploit the fact that the output p(t) is naturally bounded between zero and unity; the noise can thus be tamed by continually pushing p(t) against either zero or unity. This generates, however, strongly non-sinusoidal, square-wave-like oscillations, which are not experimentally observed <cit.>. We thus leave the regime of strong coupling for future work.

§.§ Robustness to shape of input signal

We have tested the robustness of our principal result, shown in Fig. 2 of the main text, by varying a number of key parameters. We first varied the correlation time τ_c of the noise, see RobustnessA. Clearly, the main result is robust to variations in the value of τ_c: in the limit of small input noise σ^2_s all three time-keeping systems are equally accurate, while for large input noise the bonafide clock is far superior. We have also varied the nature of the input signal. Specifically, instead of a sinusoidal signal we have also studied a truncated sinusoidal signal s(t), which drops to zero for 12 hours during the night but is a half-sinusoid for 12 hours during the day:

s(t) = h(t) {sin(ω t) + η_s(t)}, TruncInput

where h(t)=0 for 0<t<12 and h(t)=1 for 12<t<24. The result is shown in RobustnessB. It is seen that the principal result of Fig. 2 of the main text is also insensitive to the precise choice of the input signal.

The robustness of our principal observations indicates that they are universal and should be observable in minimal generic models. These are described in the next sections.
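The input signals used in these robustness tests are simple to generate; the sketch below produces the exponentially correlated (Ornstein-Uhlenbeck) input noise η_s(t) together with the standard and the truncated input signal TruncInput. The values of σ_s, τ_c and s̅ are assumed, illustrative choices, and the day/night phase convention is taken literally from the definition of h(t) above.

import numpy as np

rng = np.random.default_rng(0)
omega, s_bar = 2.0 * np.pi / 24.0, 2.0
tau_c, sigma_s = 10.0, 0.3            # noise correlation time [h] and standard deviation (assumed)
dt, n = 0.01, 240000                  # 0.01-h steps, 100 days

# Ornstein-Uhlenbeck noise with autocorrelation sigma_s^2 exp(-|t-t'|/tau_c)
eta = np.empty(n)
eta[0] = sigma_s * rng.standard_normal()
a = np.exp(-dt / tau_c)
for i in range(1, n):
    eta[i] = a * eta[i - 1] + sigma_s * np.sqrt(1.0 - a * a) * rng.standard_normal()

t = dt * np.arange(n)
s_sinusoidal = np.sin(omega * t) + s_bar + eta          # standard input signal
h = ((t % 24.0) >= 12.0).astype(float)                  # h(t): 0 during the night, 1 during the day
s_truncated = h * (np.sin(omega * t) + eta)             # truncated input signal, TruncInput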
§.§ Computing the mutual information

The mutual information is given by

I(p;t) = ∫_0^1 dp ∫_0^T dt P(p,t) log_2 [P(p,t)/(P(p)P(t))], MI1

where P(p,t) is the joint probability distribution of the phosphorylation level p and time t, and P(p) and P(t) are the marginal probability distribution functions of p and t, respectively. When p and t are statistically independent, P(p,t)=P(p)P(t) and the mutual information I(p;t) is indeed zero. More generally, 2^I(p;t) corresponds to the number of time points t that can be inferred uniquely from the phosphorylation level p; it thus corresponds to the number of distinguishable mappings between t and p <cit.>.

The mutual information depends on the entropy of the input distribution H(t) and the accuracy of signal transmission, which can be seen by rewriting MI1 as

I(p;t) = H(t) - H(t|p)_p, MI2

where

H(t) = -∫_0^T dt P(t) log_2 P(t)

is the entropy of the input distribution P(t) = 1/T and

H(t|p)_p = -∫_0^1 dp P(p) ∫_0^T dt P(t|p) log_2 P(t|p)

is the average of the entropy of the conditional distribution of t given p, P(t|p). The input entropy H(t) quantifies the a priori uncertainty on the input, while H(t|p)_p quantifies the uncertainty on the input t after the output p has been measured. MI2 shows that the mutual information can be interpreted as the reduction in the uncertainty on the input t by measuring the output p. The conditional entropy H(t|p)_p depends on the reliability of signal transmission, and goes to zero when the signal is transduced perfectly. Indeed, since the input distribution P(t) is continuous, the mutual information diverges when there is no input noise (and no internal noise). The highest mutual information reported in Fig. 2 of the main text thus corresponds to the smallest input-noise level studied. For a more detailed discussion of the mutual information, we refer to <cit.>.

The mutual information is symmetric with respect to its arguments, and MI1 can also be rewritten as

I(p;t) = H(p) - H(p|t)_t, MI3

where

H(p) = -∫_0^1 dp P(p) log_2 P(p)

is the entropy of the output distribution P(p) and

H(p|t)_t = -1/T ∫_0^T dt ∫_0^1 dp P(p|t) log_2 P(p|t)

is the average of the conditional entropy of P(p|t), with P(p|t) the conditional distribution of p given t. We have used this form to compute I(p;t). In numerically computing the mutual information, we have verified that the results are independent of the bin size of the distribution of p, following the approach of <cit.>.

§ ANALYTICAL MODELS

§.§ Push-pull network

The equation for the push-pull network is

ẋ_p = k_f s(t) (x_T - x_p(t)) - k_b x_p ≃ k_f s(t) x_T - k_b x_p,

where in the last equation we have assumed that x_T ≫ x_p, which is the case when k_f s(t) ≪ k_b. In this regime, the push-pull network operates in the linear regime, leading to sinusoidal oscillations, which tend to enhance information transmission <cit.>. In what follows, to facilitate comparison with other studies on noise transmission <cit.>, we write ρ ≡ k_f x_T, μ = k_b and, for notational convenience, x_p = x. We thus study

ẋ = ρ s(t) - μ x(t).

The equation can be solved analytically to yield

x(t) = ∫_-∞^t dt^' χ(t-t^') s(t^'),

with χ(t-t^') = ρ e^-μ(t-t^'). With the input signal given by

s(t) = sin(ω t) + s̅ + η_s(t),

the output is

x(t) = A sin(ω t - ϕ) + x̅ + η_x(t),

where the amplitude is

A = ρ/√(μ^2+ω^2),

the phase difference of the output with the input is

ϕ = arctan(ω/μ),

the mean is x̅ = ρ s̅/μ and the noise is

η_x = ρ ∫_-∞^t dt^' e^-μ(t-t^') η_s(t^').

The variance of the output, assuming the system is in steady state, is then

σ^2_x = (x(0) - x̅(0))^2 = ρ^2 ∫_-∞^0 ∫_-∞^0 dt dt^' e^μ(t+t^') η_s(t) η_s(t^').

Assuming that the input noise has variance σ^2_s and decays exponentially with correlation time τ_c=λ^-1, meaning that η_s(t) η_s(t^') = σ^2_s e^-λ|t-t^'|, the variance of the output is

σ^2_x = ρ^2 σ^2_s [∫_-∞^0 ∫_-∞^t dt dt^' e^μ(t+t^') e^-λ(t-t^') + ∫_-∞^0 ∫_t^0 dt dt^' e^μ(t+t^') e^+λ(t-t^')] = g^2 μ/(μ+λ) σ^2_s,

with the gain given by g ≡ ρ/μ. The signal-to-noise ratio A/σ_x is then

A/σ_x = √(μ(μ+λ)/(μ^2+ω^2)) 1/σ_s,

which has a maximum at the optimal relaxation rate <cit.>

μ^opt = ω^2/λ (1+√(1+(λ/ω)^2)). muopt

This optimum arises from a trade-off between the amplitude, which increases as μ increases, and input-noise averaging, which improves as μ decreases. Another point to note is that the optimal signal-to-noise ratio does not depend on ρ = k_f x_T, and hence not on k_f and x_T: while increasing ρ increases the amplitude of the signal, it also amplifies the noise in the input signal. Increasing the gain ρ (via x_T and/or k_f) only helps in the presence of intrinsic noise, because increasing the amplitude of the signal helps to raise the signal above the intrinsic noise <cit.>, as discussed in sections <ref> and <ref>. However, in the deterministic models considered in this study, the intrinsic noise is zero.
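As a quick numerical check of muopt, the following sketch scans the signal-to-noise ratio A/σ_x of the push-pull network over the relaxation rate μ and compares the numerical maximum with the analytical optimum; the values of ω, λ and σ_s are illustrative.

import numpy as np

omega   = 2.0 * np.pi / 24.0     # driving frequency (24-h period)
lam     = 0.1                    # inverse correlation time lambda = 1/tau_c (assumed)
sigma_s = 0.1                    # input-noise standard deviation (assumed)

def snr(mu):
    """Signal-to-noise ratio A/sigma_x of the push-pull network."""
    return np.sqrt(mu * (mu + lam) / (mu**2 + omega**2)) / sigma_s

mu_grid = np.logspace(-3, 2, 20001)
mu_num  = mu_grid[np.argmax(snr(mu_grid))]
mu_opt  = omega**2 / lam * (1.0 + np.sqrt(1.0 + (lam / omega)**2))   # muopt
print(f"numerical optimum: mu = {mu_num:.4f} /h;  analytical muopt = {mu_opt:.4f} /h")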
§.§ The harmonic oscillator and the uncoupled-hexamer model

The uncoupled-hexamer model (UHM) is linear. Moreover, because each hexamer has a phosphorylation cycle with a characteristic oscillation frequency ω_0, this system is akin to the harmonic oscillator. Indeed, when not driven, both the UHM and the harmonic oscillator relax in an oscillatory fashion to a stable fixed point. To develop intuition on the behavior of the UHM, we therefore analyze here the behavior of a harmonic oscillator driven by a noisy sinusoidal signal.

The equation of motion of the driven harmonic oscillator is

ẍ + ω^2_0 x + γẋ = ρ s(t), HO

where ω_0 is the characteristic frequency, γ is the friction and ρ describes the strength of the coupling to the input signal s(t). We assume that s(t) = sin(ω t) + η_s(t). We note that while the undriven harmonic oscillator is isomorphic to the undriven UHM, their coupling to the input is different: in the UHM, the hexamers are, motivated by the Kai system <cit.>, only coupled to the input during their active phosphorylation phase, while the harmonic oscillator is coupled continuously; moreover, in the harmonic oscillator the noise is additive, while in the UHM the signal multiplies the phosphorylation rate, leading to multiplicative noise. Yet, the behavior of the two models is qualitatively similar, as discussed below.

Solving HO in Fourier space yields x̃(ω) = χ̃(ω) s̃(ω), with

χ̃(ω) = ρ/(ω_0^2 - ω^2 - iωγ).

Hence, the time evolution of x(t) is

x(t) = 1/(2π) ∫_-∞^∞ dω e^-iω t χ̃(ω) s̃(ω) = ρ/(2π) ∫_-∞^∞ dω ∫_-∞^∞ dt^' e^iω(t^' - t) s(t^')/(ω^2_0 - ω^2 - iωγ).

We do the integral over ω first. The integrand has poles at

ω = -iγ/2 ± √(ω_0^2 - γ^2/4) ≡ -iγ/2 ± ω_1.

This yields

x(t) = ρ/(2π) ∫_-∞^∞ s(t^') θ(t-t^') (2π i) [e^i(-iγ/2 + ω_1)(t^' - t)/(2ω_1) - e^i(-iγ/2 - ω_1)(t^' - t)/(2ω_1)] = ρ/ω_1 ∫_-∞^t dt^' e^-γ/2 (t-t^') sin(ω_1 (t-t^')) s(t^').

With s(t) = sin(ω t), this yields

x(t) = ρ {-γω cos[ω t] + (ω_0^2 - ω^2) sin[ω t]}/{γ^2 ω^2 + (ω^2 - ω_0^2)^2}. xHO

This can also be rewritten as

x(t) = A sin(ω t + ϕ),

with the amplitude given by

A = ρ/√(γ^2 ω^2 + (ω^2 - ω_0^2)^2) A_HO

and the phase given by

ϕ = arctan[-4γω/(γ^2 + 4(ω_1^2 - ω^2))].

A_HO shows that the amplitude increases as the friction decreases and that the amplitude is maximal when the intrinsic frequency equals the driving frequency; in fact, when γ→ 0 and ω_0 = ω, the amplitude diverges. With an input noise with variance σ^2_s and decay rate λ, the noise in the output, σ^2_x = δx^2(0), is given by

σ^2_x = ρ^2/ω_1^2 ∫_-∞^0 dt ∫_-∞^0 dt^' e^γ/2 (t+t^') sin(ω_1 t) sin(ω_1 t^') η_s(t) η_s(t^')
= ρ^2 σ^2_s/ω_1^2 [∫_-∞^0 dt ∫_-∞^t dt^' e^γ/2 (t+t^') sin(ω_1 t) sin(ω_1 t^') e^-λ(t-t^') + ∫_-∞^0 dt ∫_t^0 dt^' e^γ/2 (t+t^') sin(ω_1 t) sin(ω_1 t^') e^-λ(t^'-t)]
= ρ^2 σ^2_s 16(γ+λ)/{γ[(γ+2λ)^2 + 4ω_1^2](γ^2 + 4ω_1^2)}
= ρ^2 σ^2_s (γ+λ)/{γω_0^2[λ(γ+λ)+ω_0^2]}. noiseHOCol

This expression shows that the noise diverges for all frequencies when the friction γ→ 0. It also shows that the noise diverges for ω_0 → 0 for all values of γ, or, conversely, that it goes to zero for ω_0 →∞. This can be understood by imagining a particle with mass m=1 in a harmonic potential well with spring constant k, giving a resonance frequency ω_0^2 = k/m = k, which is buffeted by stochastic forces: its variance decreases as the spring constant k and intrinsic frequency ω_0 increase.

HO_Amp_Noise and HO_SNR_Omega0_Gamma show the amplitude A, noise σ^2_x, and signal-to-noise ratio A/σ_x for the harmonic oscillator. Clearly, the amplitude is maximal at resonance, diverging when γ→ 0 (HO_Amp_NoiseA).
The noise is maximal at ω_0→ 0, and also diverges for all frequencies when γ→ 0 (HO_Amp_NoiseB). However, the amplitude rises more rapidly as γ→ 0 than the noise does, leading to a global optimum of the signal-to-noise ratio for ω_0 = ω and γ→ 0 (HO_Amp_NoiseC). Biochemical networks have, in general, a finite friction, and then the optimal intrinsic frequency is off resonance, as most clearly seen in HO_SNR_Omega0_Gamma. In fact, since the noise is minimized for ω_0 →∞ while the amplitude is maximized at resonance, ω_0 = ω, the optimal frequency ω_0^opt that maximizes the signal-to-noise ratio is in general ω_0^opt > ω, as indeed also observed for the uncoupled-hexamer model (see UHMB).

Because noise is commonly modeled as Gaussian white noise, as in our Stuart-Landau model below, rather than colored noise as assumed here, we also give, for completeness, the expression for σ^2_x when the input noise is Gaussian and white, η_s(t) η_s(t^') = σ^2_s,white δ(t-t^'). It is

σ^2_x = ρ^2 σ^2_s,white/(2γω_0^2). sigmaxsqHOwhite

This is consistent with noiseHOCol, by noting that the integrated noise strength of the colored noise is 2∫_0^∞ dt σ^2_s e^-λ t = 2σ^2_s/λ, while the integrated noise strength of the white-noise case is σ^2_s,white. Indeed, with this identification, noiseHOCol in the limit of large λ reduces to the above expression for the white-noise case.

§.§ Comparison between push-pull network and harmonic oscillator in the high friction limit

Intuitively, one would expect that in the high-friction limit the harmonic oscillator performs similarly to the push-pull network. The signal-to-noise ratio SNR=A/σ_x indeed becomes the same in this limit. However, the amplitude and the noise separately scale differently, because the friction in the harmonic oscillator also reduces the strength of the signal and the noise: in the high-friction limit, the equation of motion of the harmonic oscillator becomes ẋ_HO = ρ s(t)/γ - ω_0^2/γ x(t) + ρη_s(t)/γ, showing that the friction renormalizes both the signal and the noise. However, such a renormalization of both the signal and the noise should not affect the signal-to-noise ratio. Moreover, we now see that in this high-friction limit the harmonic oscillator relaxes with a rate ω_0^2/γ, which is to be compared with μ of the push-pull network, for which ẋ_PP = ρ s(t) - μ x(t) + ρη_s(t). From this we can anticipate that while the amplitude and the noise will be different, the signal-to-noise ratio will be the same. Concretely, in the high-friction limit the amplitude, the noise and the signal-to-noise ratio of the harmonic oscillator become

A^HO = ρ/(γω)
σ_x^HO = ρσ_s/(ω_0√(γλ))
SNR^HO = √(ω_0^2/γ) √(λ)/(ωσ_s) = √(μλ)/(ωσ_s),

where in the last line we have made the identification μ = ω_0^2/γ. For the push-pull network, the corresponding quantities, in the limit that μ→ 0, are

A^PP = ρ/ω
σ_x^PP = ρσ_s/√(μλ)
SNR^PP = √(μλ)/(ωσ_s).

Clearly, the signal-to-noise ratios of the two models are the same in the limit of high friction. SNR_HO_PP compares the behavior of the harmonic oscillator against that of the push-pull system. Clearly, for small γ, the signal-to-noise ratio SNR of the harmonic oscillator is larger than that of the push-pull network, showing that building an oscillatory tendency with a resonance frequency into a readout system can enhance the signal-to-noise ratio. However, in the large-friction limit, the SNR is the same for both models, as expected.
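The convergence of the two models in the high-friction limit is easy to verify numerically; the sketch below evaluates the signal-to-noise ratios from A_HO and noiseHOCol and from the push-pull expressions, using the identification μ = ω_0^2/γ and illustrative values for ω, λ and σ_s.

import numpy as np

omega, lam, sigma_s, rho = 2.0 * np.pi / 24.0, 0.1, 0.1, 1.0
omega0 = omega                          # oscillator taken at resonance (illustrative choice)

def snr_harmonic(gamma):
    """A/sigma_x of the driven harmonic oscillator, from A_HO and noiseHOCol."""
    A   = rho / np.sqrt(gamma**2 * omega**2 + (omega**2 - omega0**2)**2)
    var = rho**2 * sigma_s**2 * (gamma + lam) / (gamma * omega0**2 * (lam * (gamma + lam) + omega0**2))
    return A / np.sqrt(var)

def snr_pushpull(mu):
    """A/sigma_x of the push-pull network with relaxation rate mu."""
    return np.sqrt(mu * (mu + lam) / (mu**2 + omega**2)) / sigma_s

for gamma in (0.5, 5.0, 50.0, 500.0):
    mu_eff = omega0**2 / gamma          # identification mu = omega_0^2 / gamma
    print(f"gamma = {gamma:6.1f}:  SNR_HO = {snr_harmonic(gamma):7.3f}   SNR_PP = {snr_pushpull(mu_eff):7.3f}")

For small γ the harmonic oscillator outperforms the push-pull network, while the two columns converge as γ grows, as stated above.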
§.§ Weakly non-linear oscillator and the coupled-hexamer model

The coupled-hexamer model (CHM) is a non-linear oscillator that can sustain autonomous limit-cycle oscillations in the absence of any driving. Here, we describe the Stuart-Landau model, which provides a universal description of a weakly non-linear system near the Hopf bifurcation where a limit cycle appears. We use it to analyze the time-keeping properties of a system as it is altered from essentially a damped linear oscillator to a weakly non-linear oscillator, see Fig. 3 of the main text. Our treatment follows largely that of Pikovsky et al. <cit.>.

§.§.§ The amplitude equation

We consider the weakly non-linear oscillator <cit.>:

ẍ + ω_0^2 x = f(x,ẋ) + ρ s(t), nlo

with s(t) = sin(ω t) + s̅ + η_s being the driving signal as before. The quantity f(x,ẋ) describes the non-linearity of the autonomous oscillator and the parameter ρ controls the strength of the forcing. The description presented below is valid in the regime where the non-linearity f(x,ẋ) is small and the strength of the driving, quantified by ρ, is small. We begin by developing the formalism in the deterministic limit η_s = 0, in which s(t) is periodic with period T = 2π/ω, before returning to the effects of noisy driving. In contrast to previous sections, our discussion here is limited to input noise that is not only Gaussian but also white, η_s(t) = 0 and η_s(t)η_s(t') = σ_s^2 δ(t - t').

nlo is close to the equation of a linear oscillator. We therefore expect that its solution has a nearly sinusoidal form. Moreover, we expect that, at least over some parameter range, the frequency of the system is entrained by that of the driving signal. We therefore write the solution as

x(t) = Re[A(t) e^iω t] = (1/2)(A(t) e^iω t + c.c.),

where c.c. denotes the complex conjugate. The above equation has the form of a harmonic oscillation with frequency ω, but with a time-dependent complex amplitude A(t). We emphasize that the observed frequency may deviate from ω, when the amplitude A(t) rotates in the complex plane. The above equation determines only the real part of the complex number A(t)e^iω t. To fully specify A(t), we also need to set the imaginary part of A(t)e^iω t, which we choose to do via

y(t) = -ω Im[A(t) e^iω t] = (1/2)(iω A(t) e^iω t + c.c.) = ẋ.

The relation y(t) = ẋ thus specifies the imaginary part of the amplitude A(t). Hence, the complex amplitude can be written as

A(t) e^iω t = x(t) - i y(t)/ω. Axy

Writing A(t) = R(t) e^iϕ(t), it can be verified that

x(t) = R(t) cos(ϕ(t) + ω t), x
y(t) = -ω R(t) sin(ϕ(t) + ω t), y
R^2(t) = x^2(t) + y^2(t)/ω^2, Rsq

and that the specification ẋ(t) = y(t) implies that

Ṙ(t)/R(t) = ϕ̇(t) tan(ϕ(t) + ω t). dRdphi

y shows that the time derivative of y(t) is

ẏ = -ω^2 x - ω[Ṙ(t) sin(ϕ(t) + ω t) + R(t) ϕ̇(t) cos(ϕ(t) + ω t)]. ydot

On the other hand, we know that

iωȦ e^iω t = -ω[Ṙ(t) sin(ϕ(t) + ω t) + R(t) ϕ̇(t) cos(ϕ(t) + ω t)] + iω[Ṙ(t) cos(ϕ(t) + ω t) - R(t) ϕ̇(t) sin(ϕ(t) + ω t)] ImAdot
= ẏ + ω^2 x, ioAdt

where in ImAdot we have exploited that the imaginary part is zero because of dRdphi. Combining the above equation with nlo, noting that ẏ = ẍ, yields the following equation for the time evolution of the amplitude:

Ȧ = e^-iω t/(iω) [(ω^2 - ω_0^2) x + f(x,y) + ρ s(t)]. Adot

§.§.§ Averaging

The above transformation is exact. To make progress, we will use the method of averaging <cit.>. Specifically, we will time average Adot over one period T <cit.>. Averaging the driving e^-iω t s(t)/(iω) yields the complex constant E/(2ω). The second term of Adot can be expanded in polynomials of x(t) and y(t), i.e. of A(t)e^iω t and its complex conjugate, yielding powers of the type (A(t)e^iω t)^n (A^*(t)e^-iω t)^m. After multiplying with e^-iω t and averaging over one period T, only the terms with m=n-1 do not vanish. Consequently, the terms that remain after averaging have the form g(|A|^2)A, with an arbitrary function g. For small amplitudes only the linear term proportional to A and the first non-linear term, ∝|A|^2 A, are important. Finally, averaging the first term of Adot yields a term linear in A.

Summing it up, the time evolution of the amplitude of the system with deterministic driving (η_s=0) is given by <cit.>

Ȧ = -i (ω^2 - ω_0^2)/(2ω) A + α A - (β + iκ)|A|^2 A - ρ/(2ω) E.

The parameters have a clear interpretation. The parameters α and β describe, respectively, the linear and non-linear growth or decay of oscillations. To have stable oscillations, both in the presence and absence of driving, large-amplitude oscillations dominated by the nonlinear term need to decay, which means that β must be positive, β>0; this parameter is fixed in all our calculations. The parameter that allows us to alter the system from one that shows damped oscillations in the absence of driving to one that can generate autonomous oscillations which do not rely on forcing, is α. For the system to sustain free-running oscillations, small-amplitude oscillations, dominated by the linear term, must grow, meaning that α must be positive, α>0. The case with α>0 thus describes a system that can perform stable limit-cycle oscillations, making it a bonafide clock. The case α<0 describes a system that, in the absence of any driving, E=0, relaxes in an oscillatory fashion to a stable fixed point with A=0. In the presence of weak driving, the amplitude A at the fixed point will be non-zero but small, making the effect of the non-linearity weak. The case α<0 thus describes a system that is effectively a damped harmonic oscillator, which only displays sustained oscillations when forced by an oscillatory signal. This system mimics the uncoupled-hexamer model. The parameter κ describes the non-linear dependence of the oscillation frequency on the amplitude. For the isochronous scenario in which the phase moves with a constant velocity, κ=0, which is what we will assume henceforth.

Defining the parameter ν ≡ (ω^2 - ω_0^2)/(2ω) and the parameter ϵ ≡ ρ/(2ω), we can then rewrite the above equation as

Ȧ = -iν A + α A - β|A|^2 A - ϵ E, AmpEq

where A is the complex time-dependent amplitude, E is a complex constant, and ν, α, and β are real constants. AmpEq is Eq. 2 of the main text. It provides a universal description of a driven weakly nonlinear system near the Hopf bifurcation where the limit cycle appears <cit.>. To model the input noise we add a noise term to AmpEq:

Ȧ = -iν A + α A - β|A|^2 A - ϵ E + ρη̅_s(t), AmpEqNoise

where η̅_s(t) is the noise η_s(t) averaged over one period of the driving:

η̅_s(t) ≡ 1/T ∫_t-T/2^t+T/2 dt^' e^-iω t^'/(iω) η_s(t^'). etasbar

Since η_s(t) is real but its prefactor e^-iω t^'/(iω) is complex, η̅_s(t) is, in general, complex. Below we describe the characteristics of the noise η̅_s.

§.§.§ Linear-Noise Approximation

Scenarios By varying α we will interpolate between two scenarios: the damped oscillator, modeling the UHM, with α < 0, and the weakly non-linear oscillator that can sustain free-running oscillations, modeling the CHM, with α > 0. For the system with α<0, the amplitude of x(t) when not driven is A=0: the system comes to a standstill. When the system is driven, the amplitude will be nonzero, but constant, since the system is essentially linear as described above. For the system with α>0, A(t) can exhibit distinct types of dynamics, depending on the strength of driving and the frequency mismatch characterized by ν <cit.>. However, here we do not consider the regimes in which A(t) rotates in the complex plane; we will limit ourselves to the scenario that A(t) = A is constant, meaning that ν cannot be too large <cit.>.

Overview Before we discuss the linear-noise approximation in detail, we first give an overview. The central observation is that both for the driven damped oscillator with α < 0 and the driven limit-cycle oscillator with α>0, the complex amplitude A is constant, corresponding to a stable fixed point of the amplitude equation, AmpEq. In the spirit of the linear-noise approximation used to calculate noise in biochemical networks, we then expand around the fixed point to linear order, and evaluate the noise at the fixed point. This approach thus assumes that the distribution of the variables of interest is Gaussian, centered at the fixed point. More concretely, we first expand A(t) to linear order around its stable fixed point, which is obtained by setting Ȧ in AmpEq to zero. This makes it possible to compute the variance of A. Importantly, this variance is that of a Gaussian distribution in the frame that co-rotates with the driving, as can be seen from xy. To obtain the variance of x and y in the original frame, we then transform this distribution back to the original frame of x and y. If we can make this transformation linear, then it is guaranteed that the distribution of x and y will also be Gaussian. As we will see, the transformation can be made linear by writing A as A = u + iv, where u and v are the real and imaginary parts of A, respectively.

Expanding A around its fixed point We write A(t) = u(t) + iv(t). AmpEqNoise then yields for the real and imaginary parts of A(t):

u̇ = ν v + α u - β(u^2+v^2)u - ϵ e_u + ρη̅_u, dotu
v̇ = -ν u + α v - β(u^2+v^2)v - ϵ e_v + ρη̅_v. dotv

Here, η̅_u and η̅_v are the real and imaginary parts of the averaged noise η̅_s, given by etasbar; they are discussed below. The quantities e_u and e_v are the real and imaginary parts of the driving E. Their respective values depend on the phase of the driving, which is arbitrary and can be chosen freely. For example, when the driving is s(t) = sin(ω t), then e_u = 1 and e_v = 0, while if the signal is s(t) = cos(ω t), then e_u = 0 and e_v = 1.

We now expand u(t) and v(t) around their steady-state values, u^∗ and v^∗, respectively. Inserting this in the above equations and expanding up to linear order yields

d(δu)/dt = c_1 δu + c_2 δv + ρη̅_u, dotdu
d(δv)/dt = c_3 δu + c_4 δv + ρη̅_v, dotdv

with

c_1 = α - β(3u^∗^2 + v^∗^2), c1
c_2 = ν - 2β u^∗ v^∗,
c_3 = -ν - 2β u^∗ v^∗,
c_4 = α - β(u^∗^2 + 3v^∗^2). c4

The fixed points u^∗ and v^∗ are obtained by solving the cubic equations dotu–dotv in steady state.

Noise characteristics We next have to specify the noise characteristics of η̅_u(t) and η̅_v(t). etasbar reveals that the noise terms are given by

η̅_u(t) = -1/(ω T) ∫_t-T/2^t+T/2 dt^' sin(ω t^') η_s(t^'),
η̅_v(t) = -1/(ω T) ∫_t-T/2^t+T/2 dt^' cos(ω t^') η_s(t^').

The method of averaging <cit.> reveals that to leading order the statistics of these quantities can be approximated by

η̅_u(t) η̅_u(t^') = η̅_v(t) η̅_v(t^') = σ_s^2/(2ω^2) δ(t-t^'),
η̅_u(t) η̅_v(t^') = 0.

Variance-covariance From here, there are (at least) three ways to obtain the variance-covariance matrix of u and v. Since the system is linear, it can be directly solved in the time domain. Another route is via the power spectra <cit.>. Here, we obtain it from <cit.>

A C_uv + C_uv A^T = -D_uv. ACCA

The matrix C_uv is the variance-covariance matrix with elements σ^2_uu, σ^2_uv, σ^2_vu, σ^2_vv, and A is the Jacobian of dotdu–dotdv with elements A_11 = c_1, A_12 = c_2, A_21 = c_3, A_22 = c_4. The matrix D_uv is the noise matrix of η̅_u and η̅_v, where we absorb the coupling strength ρ = 2ωϵ (cf. AmpEq) into the noise strength:

D_uv = ( [ 2ϵ^2σ^2_s 0; 0 2ϵ^2σ^2_s ]). Duv

Transforming back The variance-covariance matrix C_uv, with elements σ^2_uu, σ^2_uv, σ^2_vu, σ^2_vv, characterizes a Gaussian distribution in the complex plane

P(u,v) = 1/(2π√(|C_uv|)) e^-1/2 a^T C_uv^-1 a,

where |C_uv| is the determinant of the variance-covariance matrix C_uv, C_uv^-1 is the inverse of C_uv, and a is a vector with elements δu, δv (the deviations of the real and imaginary parts of A from their respective fixed points u^∗ and v^∗), with a^T its transpose. This distribution P(u,v) defines a distribution in the co-rotating frame of the oscillator in the complex plane. To obtain P(x,y) in the original non-co-rotating frame, we need to rotate this distribution. Axy shows that the corresponding rotation is described by

x(t) = u cos(ω t) - v sin(ω t), xt
y(t) = -ω u sin(ω t) - ω v cos(ω t), yt

which defines the rotation matrix

Q = ( [ cos(ω t) -sin(ω t); -ωsin(ω t) -ωcos(ω t) ])

such that z = Q a, with z the vector with elements δx(t) = x(t) - x^∗(t), δy(t) = y(t) - y^∗(t), where x^∗, y^∗ are the rotating “fixed” points of x(t) and y(t), i.e. their time-dependent mean values, given by xt–yt with u = u^∗ and v = v^∗. Hence, the distribution of interest is given by

P(x,y|t) = 1/(2π√(|C_xy|)) e^-1/2 z^T C_xy^-1 z, Pxygt

where

C_xy^-1 = [Q^-1]^T C_uv^-1 Q^-1 CxyInv

and its inverse C_xy is the variance-covariance matrix for x and y, with elements σ^2_xx(t), σ^2_xy(t), σ^2_yx(t), σ^2_yy(t), which depend on time because Q depends on time.

Mutual information I(p;t) Lastly, the oscillations in the phosphorylation level p(t) of the hexamer models correspond to the oscillations in x(t) in the Stuart-Landau model. We therefore need to compute the mutual information I(x;t), not I(x,y;t). Specifically, we calculate the mutual information from

I(x;t) = H(x) - H(x|t)_t,

where the entropy H(x) = -∫ dx P(x) log P(x) with P(x) = 1/T ∫_0^T dt P(x|t), and the conditional entropy H(x|t)_t = -1/T ∫_0^T dt ∫ dx P(x|t) log P(x|t), with P(x|t) = 1/√(2πσ^2_xx(t)) e^-(x(t)-x^∗(t))^2/(2σ^2_xx(t)). We emphasize that both the variance σ^2_xx(t) and the average x^∗(t) depend on time.

Summing up: approach and parameters of Fig. 3 of the main text To sum up the procedure: to compute the noise in A we first need to obtain the steady-state values of its real and imaginary parts, u^∗ and v^∗ (see c1–c4). These are obtained by setting the time derivatives of u(t) and v(t) in dotu–dotv to zero; this involves solving a cubic equation, which we do numerically. We then compute the variance-covariance matrix C_uv via ACCA, where the elements of the Jacobian A are given by c1–c4 and the noise matrix D_uv is given by Duv. After having obtained C_uv, we find the variance-covariance matrix for x and y, C_xy, from CxyInv. For Fig. 3 of the main text, ν = 0, β = ω, ϵ = 0.5ω.
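The procedure just summarized can be condensed into a few lines of code; the sketch below finds the stable fixed point (u^∗, v^∗), builds the Jacobian c1–c4, solves the Lyapunov equation ACCA for C_uv with scipy, and rotates the result to C_xy(t). The parameters follow Fig. 3 (ν = 0, β = ω, ϵ = 0.5ω), with α and the white-noise strength σ_s^2 set to assumed, illustrative values.

import numpy as np
from scipy.optimize import fsolve
from scipy.linalg import solve_continuous_lyapunov

omega = 2.0 * np.pi / 24.0
nu, beta, eps = 0.0, omega, 0.5 * omega        # parameters of Fig. 3 of the main text
alpha = 3.0 * omega                            # alpha > 0: limit-cycle oscillator (alpha < 0: damped)
sigma_s2 = 0.01                                # white input-noise strength (assumed)
e_u, e_v = 1.0, 0.0                            # driving s(t) = sin(omega t)

def amp_eq(z):
    """Right-hand side of dotu and dotv without noise; zero at the fixed point."""
    u, v = z
    r2 = u * u + v * v
    return [ nu * v + alpha * u - beta * r2 * u - eps * e_u,
            -nu * u + alpha * v - beta * r2 * v - eps * e_v]

def jacobian(u, v):
    """Jacobian with elements c1..c4."""
    return np.array([[alpha - beta * (3 * u * u + v * v), nu - 2 * beta * u * v],
                     [-nu - 2 * beta * u * v, alpha - beta * (u * u + 3 * v * v)]])

# pick the *stable* fixed point: all eigenvalues of the Jacobian in the left half-plane
for guess in ([np.sqrt(abs(alpha) / beta), 0.0], [-np.sqrt(abs(alpha) / beta), 0.0],
              [0.1, 0.1], [-0.1, -0.1]):
    u, v = fsolve(amp_eq, guess)
    if np.allclose(amp_eq([u, v]), 0.0, atol=1e-8) and np.all(np.linalg.eigvals(jacobian(u, v)).real < 0):
        break

J = jacobian(u, v)
D = 2.0 * eps**2 * sigma_s2 * np.eye(2)                  # noise matrix Duv
C_uv = solve_continuous_lyapunov(J, -D)                  # solves J C + C J^T = -D  (ACCA)

def C_xy(t):
    """Variance-covariance matrix of (x, y) at time t, obtained by rotating C_uv (cf. CxyInv)."""
    Q = np.array([[np.cos(omega * t), -np.sin(omega * t)],
                  [-omega * np.sin(omega * t), -omega * np.cos(omega * t)]])
    return Q @ C_uv @ Q.T

print("fixed point (u*, v*) =", u, v)
print("sigma_xx(t=0)        =", np.sqrt(C_xy(0.0)[0, 0]))

Setting alpha to a negative value (e.g. -omega) reproduces the damped-oscillator case with the same code path.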
§.§.§ Comparing limit cycle oscillator with damped oscillator

Fig. 3 of the main text shows that the mutual information I(x;t) increases with α, especially when the input noise is large. To elucidate this further, we show in SLPxyNoise, for two different values of α and for two levels of the input noise, the dynamics of the system in the plane of x and y. The panels not only show the mean trajectory, indicated by the dashed line, but also samples (x,y) from P(x,y|t_i) for evenly spaced time points t_i; P(x,y|t) is given by Pxygt and samples from the same time point t_i have the same color. It is seen that when the input noise is low (left two panels), the respective distributions (“blobs”) are well separated, both for α=-ω, when the system is a damped oscillator (D.O.) (top row), and for α=3ω (bottom row), when the system is a limit-cycle oscillator (L.C.O.). However, when the input noise is large (right column), the blobs of the damped oscillator become mixed, while the distributions P(x,y|t) of the limit-cycle oscillator are still fairly well separated.

To interpret this further, we note that the mutual information I(x;t) = H(t) - H(t|x). Here, H(t) is the entropy of the input signal, which is constant, i.e. does not depend on the design of the system. The dependence of I(x;t) on the design of the system is thus governed by the conditional entropy, given by H(t|x) = -log P(t|x)_P(t|x)_P(x). The quantity -log P(t|x)_P(t|x) quantifies the uncertainty in estimating the time t from a given output x; the average …_P(x) indicates that this uncertainty should be averaged over all output values x, weighted by P(x). The conditional entropy H(t|x) is low and I(x;t) is high when, averaged over x, the distribution P(t|x) of times t for a given x is narrow. We can now interpret SLPxyNoise: the smaller the number of blobs that intersect a vertical line of constant x, the higher the mutual information. Or, concomitantly, the more the distributions are separated, the higher the mutual information—information transmission is indeed a packing problem. Clearly, when the input noise is low, the time can be inferred reliably from the output even with a damped oscillator (top left panel). For high input noise, however, the mutual information of the damped oscillator falls dramatically because the blobs now overlap strongly. In contrast, the distributions of the limit-cycle oscillator are still reasonably separated and I(x;t) is still close to 2 bits.

SLPxyNoise also nicely illustrates that the mutual information would be increased if the system could estimate the time not from x only, but instead from x and y: this removes the degeneracy in estimating t for a given x associated with sinusoidal oscillations <cit.>. One mechanism to remove the degeneracy is to have a readout system that not only reads out the amplitude of the clock signal, but also its derivative, for example via incoherent feedback loops <cit.>. Another possibility is that the clock signal is read out by 2 (or more) proteins that are out of phase with each other, as shown in <cit.>. Indeed, while we have computed the instantaneous mutual information between time and the output at a given time, the trajectory of the clock signal provides more information about time, which could in principle be extracted by appropriate readout systems <cit.>.

Lastly, we show in SLPxyCoupling the dynamics for two different values of α and for two different values of the coupling strength ϵ. The top left panel shows that when ϵ is small, the amplitude of the damped oscillator is very weak—note the scale on the x- and y-axes. To increase the amplitude of the output, the coupling strength must be increased. However, this amplifies the input noise as well, such that the mutual information remains unchanged (top right panel): the damped oscillator faces a fundamental trade-off between gain and input noise that cannot be lifted. In contrast, the limit-cycle oscillator (bottom row) already exhibits strong amplitude oscillations even when the coupling strength ϵ is small: the amplitude of the cycle—a bonafide limit cycle—is determined by the properties of the system, and is only very weakly affected by the strength of the forcing. At the same time, weakening the coupling does reduce the propagation of input noise. These two observations together explain why, for the limit-cycle oscillator, the mutual information increases as the coupling is reduced. In <ref> we elucidate these arguments further, and show that, concerning the robustness to input noise, the weak-coupling regime is the optimal regime that maximizes the mutual information, and that in this regime a limit-cycle oscillator is superior to a damped oscillator.

§.§.§ Optimal intrinsic frequency

CHMB shows that the optimal intrinsic frequency ω_0^opt that maximizes the mutual information I(p;t) for the coupled-hexamer model (CHM) depends, albeit very weakly, on the input-noise strength σ^2_s. Here we wondered whether the Stuart-Landau model could reproduce this feature. SLnu shows the result. The figure shows the mutual information I(x;t) as a function of ν = (ω^2 - ω_0^2)/(2ω) for different values of σ^2_s. It is seen that the dependence of I(x;t) on ν is rather weak, yielding a broad maximum that peaks at ν=0 (corresponding to ω_0 = ω) for all noise strengths. This suggests that the optimal ω_0^opt<ω observed for low input noise in the CHM arises from a stronger non-linearity in that system than captured by the Stuart-Landau model, which describes weakly non-linear oscillators.
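Building on the linear-noise sketch above (it reuses u, v, omega and C_xy from that code, so it is not standalone), the following lines estimate the instantaneous mutual information I(x;t) by averaging the Gaussian conditionals P(x|t) over one driving period, as described in the Mutual information paragraph; the number of time points and x bins is an arbitrary numerical choice.

import numpy as np

T = 2.0 * np.pi / omega

def mean_x(t):                          # x*(t) from xt, evaluated at the fixed point (u*, v*)
    return u * np.cos(omega * t) - v * np.sin(omega * t)

ts   = np.linspace(0.0, T, 400, endpoint=False)
mus  = np.array([mean_x(t) for t in ts])
sigs = np.array([np.sqrt(C_xy(t)[0, 0]) for t in ts])

xs = np.linspace(mus.min() - 5 * sigs.max(), mus.max() + 5 * sigs.max(), 2000)
dx = xs[1] - xs[0]

# P(x|t) is Gaussian with mean x*(t) and variance sigma_xx(t); P(x) is its average over the period
P_x_given_t = np.exp(-(xs[None, :] - mus[:, None])**2 / (2.0 * sigs[:, None]**2)) \
              / (np.sqrt(2.0 * np.pi) * sigs[:, None])
P_x = P_x_given_t.mean(axis=0)

H_x      = -np.sum(P_x * np.log2(P_x + 1e-300)) * dx               # output entropy H(x)
H_x_cond = np.mean(0.5 * np.log2(2.0 * np.pi * np.e * sigs**2))    # <H(x|t)>_t for Gaussians
print(f"I(x;t) = {H_x - H_x_cond:.2f} bits")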
§.§ Why limit cycle oscillators are generically more robust to input noise than damped oscillators in the weak-coupling regime The principal result of our manuscript, illustrated in Fig. 2 of the main text, is that a limit-cycle oscillator is more robust to input noise than a damped oscillator. We now address the question how generic this observation is, and whether it can be explained from a simple scaling argument. To answer these questions, we will investigate the analytical models discussed in the previous sections, which are valid in the regime of weak coupling. We will analyse the harmonic oscillator described in <ref>, which applies not only to the uncoupled hexamer model (UHM), but also, in the high-friction limit, to the push-pull network (PPN), as described in <ref>. For the coupled-hexamer model, we will analyse not only the Stuart-Landau model described in <ref>, but also a phase-oscillator model within the phase-averaging approximation <cit.>. While the Stuart-Landau model gives a universal description of weakly non-linear oscillators near the Hopf bifurcation, the phase-oscillator model within the phase-averaging approximation gives a general description of (potentially highly) non-linear oscillators in the weak-coupling regime <cit.>; importantly, both descriptions give the same scaling argument, strongly suggesting it applies to most, if not all, limit-cycle oscillators. The principal finding of our analysis of these models is that damped oscillators such as the UHM and PPN cannot lift the trade-off between the amplification of the output signal (the gain) and the propagation of input noise, while limit-cycle oscillators can, because their oscillations have an inherent robust amplitude, which does not rely on external driving. Before we derive the principal result in detail in the paragraphs below, we first give an overview of the main arguments, for the case where there is no internal noise. In the next section (<ref>), we then discuss the role of internal noise and how the optimal design of the readout system depends
on the relative amounts of internal and external noise. Overview To understand why limit-cycle oscillators (CHM) are generically more robust to input noise than damped oscillators (UHM and PPN), the role of the coupling strength ρ is key. For a damped oscillator, the amplitude A of the output oscillations (the signal) scales linearly with the coupling strength, A∼ρ. However, increasing the coupling not only amplifies the signal, but also the input noise. Moreover, it does so by the same amount: the standard deviation of the output signal, σ_x, also scales linearly with ρ, σ_x∼ρ. Consequently, the number of distinct time points that can be resolved, the signal-to-noise ratio A/σ_x, is independent of ρ: damped oscillators cannot lift the trade-off between gain and input noise by optimizing the coupling strength, as can also be seen in panels A and B of SLPxyCoupling. This is in marked contrast to the behavior of a limit-cycle oscillator. A limit-cycle oscillator has an intrinsic amplitude A, which does not rely on external driving, as the amplitude of a damped oscillator does. Its amplitude is thus essentially independent of ρ, and, more specifically, it goes to a non-zero value as ρ→ 0. Moreover, while the amplitude remains finite, the propagation of input noise does go to zero as ρ→ 0, because, as we will show, σ_x∼√(ρ). Hence, the signal-to-noise ratio A/σ_x∼ 1/√(ρ) rises as the coupling is decreased, as SLPxyCouplingC/D illustrate. Although this scaling law naively suggests that the optimal coupling strength is ρ→ 0, we will show below that, in real systems, internal noise and detuning between the driving and intrinsic oscillator frequencies always cut off the divergence at small but finite ρ. Importantly, we find exactly the same scaling relation A/σ_x∼ 1/√(ρ) for both the Stuart-Landau model and the phase-oscillator model within the phase-averaging approximation, which is the natural description of non-linear limit-cycle oscillators in the weak-coupling regime
<cit.>. Our analysis thus shows that, concerning the robustness to input noise: 1) the optimal regime that maximizes the signal-to-noise ratio is the weak-coupling regime; 2) in this regime, limit-cycle oscillators are generically more robust than damped oscillators. We emphasize that the weak-coupling regime is precisely the regime where our analysis applies, indicating that our principal result applies to a very broad class of oscillators. Moreover, this result can be understood intuitively: while both a damped and a limit-cycle oscillator can reduce the propagation of input noise by lowering the coupling strength, only the limit-cycle oscillator still exhibits a robust amplitude in the weak-coupling regime, raising the signal-to-noise ratio (see SLPxyCoupling). In the next paragraphs, we derive and elucidate the scaling of A and σ_x with ρ for both oscillator models. The role of internal noise is discussed in the next section. Damped oscillators We will first reiterate the main findings for the harmonic oscillator (the uncoupled hexamer model), described in <ref>; these findings also apply to the push-pull network, which corresponds to the high-friction limit of the harmonic oscillator (see section <ref>). The amplitude of the harmonic oscillator is given by A_HO and repeated here for completeness: A = ρ/√(γ^2ω^2 + (ω^2 - ω_0^2)^2) ∼ ρ. (A_HO2) Importantly, the amplitude increases linearly with the coupling strength ρ. This result can be understood by noting that the driving force ρ s(t) scales with ρ while the restoring force -ω_0^2 x is independent of ρ (see HO). The variance σ^2_x of the output is, for Gaussian white input noise of strength σ^2_s (see sigmaxsqHOwhite): σ^2_x = ρ^2σ^2_s/(2γω_0^2) ∼ ρ^2. (sigmaxsqHOwhite2) Clearly, the noise in the output σ_x scales with the coupling strength ρ. This is because increasing the coupling strength not only amplifies the true signal sin(ω t) but also the noise in the input signal, η_s (see HO).
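As a quick numerical check of these two expressions, the ratio A/σ_x can be tabulated for several coupling strengths. The parameter values below are purely illustrative and the function name is ours; the formulas are those quoted above as A_HO2 and sigmaxsqHOwhite2.

```python
import numpy as np

def damped_oscillator_snr(rho, gamma, omega, omega0, sigma_s):
    """Amplitude, output noise and signal-to-noise ratio of the driven damped
    (harmonic) oscillator, using A_HO2 and sigmaxsqHOwhite2."""
    denom = np.sqrt(gamma**2 * omega**2 + (omega**2 - omega0**2)**2)
    A = rho / denom
    sigma_x = np.sqrt(rho**2 * sigma_s**2 / (2.0 * gamma * omega0**2))
    return A, sigma_x, A / sigma_x

for rho in (0.01, 0.1, 1.0):
    A, s, snr = damped_oscillator_snr(rho, gamma=0.5, omega=1.0, omega0=1.0, sigma_s=0.2)
    print(f"rho={rho:5.2f}  A={A:.3e}  sigma_x={s:.3e}  A/sigma_x={snr:.3f}")
# A and sigma_x both grow linearly with rho, so A/sigma_x is the same for every rho.
```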
Because both the amplitude A and the noise σ_x scale with the coupling strength ρ, the signal-to-noise ratio is independent of the coupling strength ρ: A/σ_x = √(2γ)ω_0/[σ_s√(γ^2ω^2 + (ω^2 - ω_0^2)^2)] ∼ ρ^0. (SNR_HO) Indeed, these systems cannot lift the trade-off between gain and noise: amplifying the signal inevitably also amplifies the noise in the input. This is in marked contrast to the limit-cycle oscillators, as we show next. Limit-cycle oscillator: Stuart-Landau model To develop our argument, we consider the case that the frequency mismatch ν=(ω^2-ω_0^2)/(2ω)=0. Moreover, we choose the phase of the driving signal such that e_v=0, as a result of which v^∗=0 (see dotv). With v^∗=0, the steady-state value of the phase is ϕ^∗=0, while the mean amplitude of the limit cycle becomes R^∗=|u^∗|. Importantly, this amplitude, which can be obtained by solving the cubic equation for u (dotu), is very insensitive to the coupling strength ρ—this is indeed a hallmark of a limit-cycle oscillator. As a result, even for the weakest coupling strengths ρ, the system exhibits a robust amplitude A=R^∗, as illustrated in SLPxyCouplingC/D. Since with v^∗=0 the amplitude is R^∗=|u^∗|, its variance is σ^2_R = σ^2_u. Moreover, the variance in the phase is σ^2_ϕ = σ^2_v/R^∗^2. With ν=0 and v^∗=0, c_2 and c_3 in c1c4 are both zero, which then yields the following expressions for the variance in u and v (using that ϵ≡ρ/(2ω)): σ^2_u = ρ^2σ^2_s/[4(-α + 3β u^∗^2)ω^2], (sigu) σ^2_v = ρ^2σ^2_s/[4(-α + β u^∗^2)ω^2]. (sigv) Before we discuss the signal-to-noise ratio in the limit-cycle oscillator, we note that for a harmonic oscillator with β=0, the method of averaging yields α = -γ/2, showing that the result above indeed reduces to that for a harmonic oscillator with ω_0=ω (see sigmaxsqHOwhite2). We now analyze the numerator and denominator of sigu and sigv for the limit-cycle oscillator with β>0. The
numerator increases with the coupling strength ρ, as observed for the harmonic oscillator; this reflects the fact that also in the limit-cycle oscillator, the input fluctuations are amplified by the gain ρ. This numerator is the same for both σ^2_u and σ^2_v. The denominator, however, is larger for σ^2_u than for σ^2_v. Indeed, the restoring force for amplitude fluctuations, corresponding to σ^2_u=σ^2_R, is larger than that for the phase fluctuations, σ^2_ϕ=σ^2_v/R^∗^2. This is the remnant of the fact that limit-cycle oscillators, in the absence of any driving, exhibit a neutral mode in the direction along the limit cycle; even with the coupling, this thus remains the soft mode. It is predominantly these fluctuations, σ^2_v, that limit the precision in estimating the time. Interestingly, since we have chosen the phase of the input such that v^∗=0 and R^∗=|u^∗|, an inspection of dotu shows that -α + β u^∗^2 = ϵ/R^∗ = ρ/(2ω R^∗). Hence, we find that σ^2_v = ρ^2σ^2_s R^∗/(2ρω) ∼ ρ. (varLCO) The expression shows that the coupling not only amplifies the input noise (the numerator), but also that it generates a restoring force that tames these fluctuations (the denominator). The latter is in marked contrast to the harmonic oscillator, which lacks this restoring force (see sigmaxsqHOwhite2). Consequently, while the output noise σ^2_x of the harmonic oscillator scales as ρ^2 (sigmaxsqHOwhite2), that of the limit-cycle oscillator scales as ρ. We also note that the restoring force decreases with the amplitude R^∗ of the limit cycle. varLCO shows that the signal-to-noise ratio A/σ_v = R^∗/σ_v is given by A/σ_v = (1/σ_s)√(2ω R^∗/ρ) ∼ 1/√(ρ), (SNR_SL) where we have used that for small ρ the amplitude R^∗ has a finite value. Clearly, in the weak coupling limit, the signal-to-noise ratio of the limit-cycle oscillator increases as ρ decreases, in contrast to the signal-to-noise ratio of the damped oscillator, which is independent of ρ (SNR_HO). As a result, for sufficiently weak coupling, a limit-cycle oscillator will inevitably become superior to a damped oscillator.
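To make the contrast concrete, SNR_HO and SNR_SL can be evaluated side by side over a range of coupling strengths. The numbers below use arbitrary illustrative parameters (ω=ω_0=1, γ=0.5, σ_s=0.2, R^∗=1) and are only meant to display the ρ^0 versus 1/√(ρ) scalings.

```python
import numpy as np

sigma_s, omega, gamma, omega0, R_star = 0.2, 1.0, 0.5, 1.0, 1.0   # illustrative values

rho = np.logspace(-3, 0, 7)
snr_damped = np.sqrt(2 * gamma) * omega0 / (
    sigma_s * np.sqrt(gamma**2 * omega**2 + (omega**2 - omega0**2)**2))  # SNR_HO, rho-independent
snr_limit_cycle = np.sqrt(2 * omega * R_star / rho) / sigma_s            # SNR_SL ~ 1/sqrt(rho)

for r, lc in zip(rho, snr_limit_cycle):
    print(f"rho={r:8.1e}   damped A/sigma={snr_damped:6.2f}   limit-cycle A/sigma={lc:8.2f}")
# The damped-oscillator ratio is flat in rho, while the limit-cycle ratio grows as the
# coupling is weakened, so for sufficiently weak coupling the limit cycle always wins.
```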
Fundamentally, the reason is that the limit-cycle oscillator has an intrinsic amplitude which does not rely on external driving, while the damped oscillator does not: in both systems the input fluctuations are only weakly amplified in the weak-coupling regime, but only the limit-cycle oscillator has in this regime still a strong amplitude that raises the signal above the noise. We can also obtain a signal-to-noise ratio by dividing the amplitude of the limit cycle A=2π by the standard deviation of the phase, σ_ϕ=σ_v/R^∗: A/σ_ϕ = (2π/σ_s)√(2ω R^∗/ρ). This indeed gives the same scaling with the coupling constant ρ and the radius of the limit cycle R^∗. Limit-cycle oscillator: Phase-averaging method The Stuart-Landau model describes a weakly non-linear system near the Hopf bifurcation. Yet, the coupled-hexamer model exhibits large-amplitude oscillations. We therefore also investigate a phase-oscillator model, which describes non-linear oscillators with a robust limit cycle. We analyze this model via the phase-averaging method, which applies in the regime that the intrinsic frequency ω_0 is close to the driving frequency ω and the coupling ρ is weak <cit.>. This framework provides a description of the dynamics of the phase difference ψ≡ϕ - ω t between the phase of the clock, ϕ, and that of the external signal, ω t: ψ̇ = ν + ρ_ψ Q(ψ) + ρ_ψη_s, where, as before, ν = (ω^2 - ω^2_0)/(2ω), η_ψ is a Gaussian white noise source, ⟨η_ψ(t)η_ψ(t')⟩ = σ^2_sδ(t-t'), ρ_ψ is the coupling strength, and Q(ψ)=∫_0^T dt' Z(ψ + ω t') s(t') is the force acting on ψ, given by the convolution of the instantaneous phase-response curve Z(ϕ) and the driving signal s(t) <cit.>. In the phase-locked regime, the deterministic equation ψ̇ = ν + ρ_ψ Q(ψ) always has a stable fixed point ψ^*. Linearizing about this fixed point, we find: δψ̇ = -ρ_ψζδψ + ρη_ψ, (dotdpsi) where ζ is the linearization of the force Q(ψ) around the fixed point ψ^*. From this we obtain for the variance
σ^2_ψ = ρ_ψ^2σ^2_s/(2ρ_ψζ) ∼ ρ_ψ. (sigma2psi) We note that, as in the Stuart-Landau description (see varLCO), the numerator scales with ρ_ψ^2, because of the amplification of the input fluctuations. The denominator scales, as in the Stuart-Landau model, with ρ_ψ, reflecting the fact that the restoring force that tames fluctuations in ψ increases with the coupling strength ρ_ψ. In fact, not only the scaling with ρ is the same in the Stuart-Landau model and the phase-averaging method, but also the scaling with R^∗; this can be understood by noting that ρ_ψ = ρ/R^∗, which comes from the factor ∂ϕ/∂x that arises in reducing the dynamics of x to that of ϕ and ψ, see <cit.>. The amplitude of the limit cycle A=2π is constant. This means that in this description, the signal-to-noise ratio—the number of time points that can be inferred from the phase ψ—scales as A/σ_ψ ∼ 1/√(ρ). Hence, as found for the Stuart-Landau model (SNR_SL), also in this description the signal-to-noise ratio of a limit-cycle oscillator increases as the coupling strength decreases, in contrast to that of a damped oscillator, for which the signal-to-noise ratio is independent of coupling strength. Role of detuning Lastly, while a finite detuning ν≠ 0 necessitates a minimal coupling strength ρ to bring the system inside the Arnold tongue, as illustrated in CHMD, dotdpsi indicates that inside the Arnold tongue the scaling of the signal-to-noise ratio A/σ_ψ with ρ does not depend on the amount of detuning ν—the detuning generates a constant force which affects the fixed point ψ^∗, but it does not affect the restoring force for fluctuations around ψ^∗.
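The linearized equation dotdpsi is an Ornstein-Uhlenbeck process, so the stationary variance sigma2psi can be checked by direct numerical integration. The sketch below assumes, consistently with sigma2psi, that the noise enters with prefactor ρ_ψ; all parameter values and names are illustrative.

```python
import numpy as np

def phase_variance(rho_psi, zeta=1.0, sigma_s=0.2, dt=2e-3, n_steps=500_000, seed=1):
    """Euler-Maruyama integration of d(dpsi) = -rho_psi*zeta*dpsi*dt + rho_psi*dW,
    where dW is Gaussian with variance sigma_s^2 * dt (white noise of strength sigma_s^2)."""
    rng = np.random.default_rng(seed)
    kicks = rho_psi * sigma_s * np.sqrt(dt) * rng.standard_normal(n_steps)
    dpsi, trace = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        dpsi += -rho_psi * zeta * dpsi * dt + kicks[i]
        trace[i] = dpsi
    return trace[n_steps // 2:].var()               # discard the first half as burn-in

for rho_psi in (0.05, 0.1, 0.2):
    predicted = rho_psi * 0.2**2 / (2.0 * 1.0)      # sigma2psi: rho_psi*sigma_s^2/(2*zeta)
    print(f"rho_psi={rho_psi}: simulated={phase_variance(rho_psi):.2e}, predicted={predicted:.2e}")
```

The simulated variance tracks the prediction and grows linearly with ρ_ψ, as stated above.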
§.§ Role of internal noise In the above sections we studied the robustness of the three different systems to input noise. We now address the role of internal noise, which arises from the intrinsic stochasticity of chemical reactions. First, in the next section, we study the signal-to-noise ratio of these systems in the presence of internal noise only. In the subsequent section, we then address their performance in the presence of both internal and input noise. The coupled-hexamer model is again described by the Stuart-Landau model and the phase-averaging method of the previous section, while the push-pull network and the uncoupled-hexamer model are described by the damped oscillator of section <ref>; the latter system describes not only the uncoupled-hexamer model, but also, in the high-friction limit, the push-pull network of <ref> (see also <ref>). §.§.§ Robustness to internal noise The derivation of the signal-to-noise ratio of the respective systems in the presence of internal noise closely follows that on input noise: the principal difference concerns the scaling of the noise with the coupling strength. Damped oscillator To study the role of internal noise, we can add an intrinsic noise term to HO. This will yield the same expression for ẋ as that in the presence of external noise, except that the external noise term scales with the coupling strength ρ, while the internal noise term does not. Hence, we find for the variance of the output σ^2_x in the presence of internal Gaussian white noise of strength σ^2_int: σ^2_x = σ^2_int/(2γω_0^2). (sigmaxsqHOIntNoise) Note that the noise σ^2_x is independent of the coupling strength. The expression for the amplitude is still given by A_HO (A_HO2). Combining this expression with sigmaxsqHOIntNoise then yields the following expression for the signal-to-noise ratio for the damped oscillator with internal noise only: A/σ_x = √(2γ)ω_0 ρ/[σ_int√(γ^2ω^2 + (ω^2 - ω_0^2)^2)] ∼ ρ. (SNR_HO_IntNoise) Clearly, the signal-to-noise ratio now increases with the coupling strength ρ. Whereas with input noise both the noise σ_x and the amplitude A scale with ρ such that the signal-to-noise ratio is independent of ρ, with internal noise the amplitude A scales with ρ but the noise σ_x does not; increasing the coupling thus makes it possible to raise the output signal above the internal noise. Limit-cycle oscillator: Stuart-Landau model Also for the Stuart-Landau model, the
principal difference between the internal and input noise is that the former does not scale with the coupling strength ρ while the latter does. Following the steps from sigu to varLCO, but with the effective input noise ρ^2σ^2_s replaced by the internal noise σ^2_int, we find that in the presence of internal noise, the output noise is given by σ^2_v = σ^2_int R^∗/(2ρω). (varLCOIntNoise) Importantly, σ^2_v decreases as the coupling ρ is increased. As we have seen above for the case of input noise, varLCO, for the limit-cycle oscillator the coupling to the input yields a restoring force that increases with ρ. With the amplitude A=R^∗, we then obtain the following signal-to-noise ratio: A/σ_v = √(2ρω R^∗)/σ_int ∼ √(ρ). (SNR_SL_IntNoise) Before we discuss the scaling of the signal-to-noise ratio with ρ, we first note that by replacing ρ_ψ^2σ^2_s by σ^2_int in sigma2psi, we see that the phase-averaging method yields the same scaling of the output noise, and hence the signal-to-noise ratio, with ρ as the Stuart-Landau model does. SNR_SL_IntNoise shows that increasing the coupling of the limit-cycle oscillator to the input raises the signal-to-noise ratio, as it does for the damped oscillator (SNR_HO_IntNoise). However, the origin is markedly different: for the damped oscillator, a stronger coupling yields a larger amplitude (A_HO2) while the noise σ_x (sigmaxsqHOIntNoise) remains constant, whereas for the limit-cycle oscillator the amplitude is essentially unaffected by the coupling yet the noise (varLCOIntNoise) decreases as ρ increases, because of the larger restoring force. This difference manifests itself in a different scaling with ρ, which has an interesting consequence: because the signal-to-noise ratio of the limit-cycle oscillator scales with √(ρ) while that of the damped oscillator scales with ρ, in the weak-coupling regime the limit-cycle oscillator will not only be more robust to input noise, as discussed in the previous section, but will also be
more resilient to internal noise. However, this analysis also shows that the regime of weak coupling is not necessarily the optimal one: increasing ρ enhances the suppression of internal noise. It should be realized, however, that the analysis presented here strictly applies only in the regime of weak coupling. Indeed, for large coupling other effects which are not captured by our analysis will inevitably come into play. For example, the output signal becomes non-sinusoidal because the phosphorylation level p(t) is bounded between zero and unity; these non-sinusoidal oscillations tend to reduce information transmission <cit.>. Moreover, combining the observations from the previous section on input-noise propagation, which decreases as the coupling ρ is decreased, and the observations above on the suppression of internal noise, which increases with ρ, predicts that in the presence of both noise sources there exists an optimal coupling strength that maximizes the mutual information. In addition, it predicts that the magnitude of the optimal coupling strength depends on the relative amounts of input noise and internal noise. This is what we show in the next section. §.§.§ Signal-to-noise ratio in presence of input noise and internal noise Damped oscillator In the presence of both internal and external noise, the noise of the output of the damped oscillator is, combining sigmaxsqHOwhite2 and sigmaxsqHOIntNoise: σ^2_x = ρ^2σ^2_s/(2γω_0^2) + σ^2_int/(2γω_0^2). (sigmaxsqHOTotNoise) Note that for small coupling strength ρ the internal noise (second term) dominates, while for large ρ the input noise dominates. Combining this expression with that for the amplitude, A_HO, yields the following signal-to-noise ratio: A/σ_x = √(2γ)ω_0 ρ/[√(ρ^2σ^2_s + σ^2_int)√(γ^2ω^2 + (ω^2 - ω_0^2)^2)] ∼ aρ/√(bρ^2 + c), where a, b, and c are constants independent of ρ. Hence, for small ρ, the signal-to-noise ratio scales linearly with ρ because in this regime the rise of the amplitude A
with ρ makes it possible to lift the signal above the internal noise. Yet, for large ρ, the signal-to-noise ratio becomes independent of ρ, because then the external noise dominates, which scales with ρ in the same way as the amplitude does. We emphasize that these calculations pertain to the push-pull network (PPN) and the uncoupled-hexamer model (UHM) provided that these systems remain in the linear-response regime; as discussed in the previous section (see also section <ref>), for very large coupling, the push-pull network and uncoupled-hexamer model will be driven out of the linear-response regime because the output p(t) is bounded from above and below; this reduces information transmission. We thus expect a broad plateau, precisely as the simulation data of the PPN and UHM show (OptCouplingIntExtNoiseA/B). Limit-cycle oscillator: Stuart-Landau model In the presence of both input and internal noise, the output noise in the Stuart-Landau model is, combining varLCO and varLCOIntNoise: σ^2_v = ρσ^2_s R^∗/(2ω) + σ^2_int R^∗/(2ρω). While the first term (the input noise) scales with ρ because the coupling amplifies the input noise more than the restoring force tames it (see discussion below varLCO), the second term decreases with ρ because of the restoring force. This expression yields for the signal-to-noise ratio A/σ_v = R^∗/σ_v: A/σ_v = √(2ρω R^∗/(ρ^2σ^2_s + σ^2_int)) ∼ √(aρ/(bρ^2+c)). It is seen that the signal-to-noise ratio increases with the coupling strength for small ρ, scaling as √(ρ), because for weak coupling the intrinsic noise dominates over the input noise, and increasing the coupling raises the restoring force that contains these fluctuations. In the large-coupling regime, the input noise will dominate and then the signal-to-noise ratio will decrease with ρ as 1/√(ρ)—while the amplitude is essentially independent of ρ, increasing ρ amplifies the propagation of the input fluctuations. This equation thus predicts a pronounced maximum in the signal-to-noise ratio for the limit-cycle oscillator, as, in fact, observed for the coupled-hexamer model, see OptCouplingIntExtNoiseC.
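Maximizing the limit-cycle expression over ρ gives a simple closed-form optimum, ρ_opt = σ_int/σ_s, which the following sketch verifies numerically; the noise strengths and variable names are illustrative assumptions.

```python
import numpy as np

sigma_s, sigma_int, omega, R_star = 0.2, 0.05, 1.0, 1.0     # illustrative noise strengths

rho = np.logspace(-3, 1, 400)
snr_lc = np.sqrt(2 * rho * omega * R_star / (rho**2 * sigma_s**2 + sigma_int**2))
rho_opt_numeric = rho[np.argmax(snr_lc)]
rho_opt_analytic = sigma_int / sigma_s       # from d(SNR^2)/d(rho) = 0
print(rho_opt_numeric, rho_opt_analytic)
# The optimum shifts to weaker coupling as the input noise grows relative to the
# internal noise, consistent with the argument in the text.
```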
Since the phase-averaging method yields the same scaling with ρ for both the internal and external noise as the Stuart-Landau model, it predicts the same behaviour. Importantly, the optimal value of the coupling constant ρ that maximizes the mutual information depends on the relative amounts of internal and external noise: the optimal coupling constant decreases as the input noise increases with respect to the internal noise. The results of our coupled-hexamer model (OptCouplingIntExtNoiseDetuning) indicate that at least the cyanobacterium S. elongatus is in the regime where the external noise dominates and the optimal coupling is weak. In this regime, the limit-cycle oscillator is superior to the damped oscillator, as the analysis of section <ref> shows. 10 Ouyang:1998wp Y Ouyang, C R Andersson, T Kondo, S S Golden, and C H Johnson. Resonating circadian clocks enhance fitness in cyanobacteria. Proceedings of the National Academy of Sciences of the United States of America, 95(15):8660–8664, July 1998. Woelfle:2004cq Mark A Woelfle, Yan Ouyang, Kittiporn Phanvijhitsiri, and Carl Hirschie Johnson. The Adaptive Value of Circadian Clocks: An Experimental Assessment in Cyanobacteria. Current Biology, 14(16):1481–1486, August 2004. Roenneberg:2002tt Till Roenneberg and Martha Merrow. Life before the Clock: Modeling Circadian Evolution. Journal of Biological Rhythms, 17:495–505, 2002. Ma:2016ca Peijun Ma, Tetsuya Mori, Chi Zhao, Teresa Thiel, and Carl Hirschie Johnson. Evolution of KaiC-Dependent Timekeepers: A Proto-circadian Timing Mechanism Confers Adaptive Fitness in the Purple Bacterium Rhodopseudomonas palustris. PLoS genetics, 12(3):e1005922, March 2016. Ishiura:1998vc M Ishiura, S Kutsuna, S Aoki, H Iwasaki, C R Andersson, A Tanabe, S S Golden, C H Johnson, and T Kondo. Expression of a gene cluster kaiABC as a circadian feedback process in cyanobacteria. Science, 281(5382):1519–1523, September 1998. Nakajima2005 Masato Nakajima, Keiko Imai, Hiroshi Ito, Taeko Nishiwaki, Yoriko Murayama, Hideo Iwasaki, Tokitaka Oyama, and Takao Kondo. Reconstitution of circadian oscillation of cyanobacterial KaiC phosphorylation in vitro. Science, 308(5720):414–5, apr 2005. Holtzendorff:2008dj Julia Holtzendorff, Frédéric Partensky, Daniella Mella, Jean-François Lennon, Wolfgang R Hess, and Laurence Garczarek. Genome streamlining results in loss of robustness of the circadian clock in the marine cyanobacterium Prochlorococcus marinus PCC 9511. Journal of Biological Rhythms, 23(3):187–199, June 2008. Zinser:2009js Erik R Zinser, Debbie Lindell, Zackary I Johnson, Matthias E Futschik, Claudia Steglich, Maureen L Coleman, Matthew A Wright, Trent Rector, Robert Steen, Nathan McNulty, Luke R Thompson, and Sallie W Chisholm.
Choreography of the transcriptome, photophysiology, and cell cycle of a minimal photoautotroph, prochlorococcus. PLoS ONE, 4(4):e5135, 2009.Troein:2009bm Carl Troein, James C W Locke, Matthew S Turner, and Andrew J Millar. Weather and Seasons Together Demand Complex Biological Clocks. Current Biology, 19(22):1961–1964, December 2009.Pfeuty:2011em Benjamin Pfeuty, Quentin Thommen, and Marc Lefranc. Robust Entrainment of Circadian Oscillators Requires Specific Phase Response Curves. Biophysical Journal, 100(11):2557–2565, June 2011.Rust2011 Michael J Rust, Susan S Golden, and Erin K O'Shea. Light-driven changes in energy metabolism directly entrain the cyanobacterial circadian oscillator. Science, 331(6014):220–3, jan 2011.Pattanayak:2015jm Gopal K Pattanayak, Guillaume Lambert, Kevin Bernat, and Michael J Rust. Controlling the Cyanobacterial Clock by Synthetically Rewiring Metabolism. Cell Reports, 13(11):2362–2367, December 2015.Monti:2018hs Michele Monti, David K Lubensky, and Pieter Rein ten Wolde. Optimal entrainment of circadian clocks in the presence of noise. Physical Review E, 97(3):032405, 2018.SI Supporting Information.VanZon2007 Jeroen S van Zon, David K Lubensky, Pim R H Altena, and Pieter Rein ten Wolde. An allosteric model of circadian KaiC phosphorylation. Proceedings of the National Academy of Sciences of the United States of America, 104(18):7420–7425, may 2007.Rust2007 Michael J Rust, Joseph S Markson, William S Lane, Daniel S Fisher, and Erin K O'Shea. Ordered phosphorylation governs oscillation of a three-protein circadian clock. Science, 318(5851):809–12, nov 2007.Clodong2007 Sébastien Clodong, Ulf Dühring, Luiza Kronk, Annegret Wilde, Ilka Axmann, Hanspeter Herzel, and Markus Kollmann. Functioning and robustness of a bacterial circadian clock. Molecular Systems Biology, 3(1):90–n/a, 2007.Mori2007a Tetsuya Mori, Dewight R Williams, Mark O Byrne, Ximing Qin, Martin Egli, Hassane S Mchaourab, Phoebe L Stewart, and Carl Hirschie Johnson. Elucidating the Ticking of an In Vitro Circadian Clockwork. PLoS Biology, 5(4):e93, apr 2007.Zwicker2010 David Zwicker, David K Lubensky, and Pieter Rein ten Wolde. Robust circadian clocks from coupled protein- modification and transcription – translation cycles. Proceedings of the National Academy of Sciences, 107(52):22540–22545, dec 2010.Lin2014 J Lin, J Chew, U Chockanathan, and M J Rust. Mixtures of opposing phosphorylations within hexamers precisely time feedback in the cyanobacterial circadian clock. Proceedings of the National Academy of Sciences of the United States of America, 111(37):E3937—-E3945, sep 2014.Paijmans:2017gx Joris Paijmans, David K Lubensky, and Pieter Rein ten Wolde. A thermodynamically consistent model of the post-translational Kai circadian clock. PLoS Computational Biology, 13(3):e1005415, March 2017.Paijmans:2017gp Joris Paijmans, David K Lubensky, and Pieter Rein ten Wolde. Period Robustness and Entrainability of the Kai System to Changing Nucleotide Concentrations. Biophysj, 113(1):157–173, July 2017.Monti:2016bp Michele Monti and Pieter Rein ten Wolde. The accuracy of telling time via oscillatory signals. Physical Biology, 13(3):1–14, May 2016.Walczak:1324157 Gašper Tkačik and Aleksandra M Walczak. Information transmission in genetic regulatory networks: a review. Journal of Physics: Condensed Matter, 23(15):153102, April 2011.Becker:2015iu Nils B Becker, Andrew Mugler, and Pieter Rein ten Wolde. Optimal Prediction by Cellular Signaling Networks. 
Physical Review Letters, 115(25):258103, December 2015.Pikovsky2003 Arkady Pikovsky, Michael Rosenblum, and Juergen Kurths. Synchronisation: A universal concept in nonlinear sciences. Cambridge University Press, Cambridge, 2003.Mihalcescu:2004ch I Mihalcescu, W H Hsing, and S Leibler. Resilient circadian oscillator revealed in individual cyanobacteria. Nature, 430(6995):81–85, 2004.Gu:2001vh Lianhong Gu, Jose D Fuentes, Michael Garstang, Julio Tota da Silva, Ryan Heitz, Jeff Sigler, and Herman H Shugart. Cloud modulation of surface solar irradiance at a pasture site in southern Brazil. Agricultural and Forest Meteorology, 106:117–129, December 2001.Kitayama:2003un Y Kitayama, H Iwasaki, Taeko Nishiwaki, and Takao Kondo. KaiB functions as an attenuator of KaiC phosphorylation in the cyanobacterial circadian clock system. The EMBO Journal, 22(9):2127–2134, 2003.Xu2000 Y Xu, T Mori, and C H Johnson. Circadian clock-protein expression in cyanobacteria: rhythms and phase setting. The EMBO journal, 19(13):3349–57, jul 2000.Nakahira2004 Yoichi Nakahira, Mitsunori Katayama, Hiroshi Miyashita, Shinsuke Kutsuna, Hideo Iwasaki, Tokitaka Oyama, and Takao Kondo. Global gene repression by KaiC as a master process of prokaryotic circadian system. Proceedings of the National Academy of Sciences of the United States of America, 101(3):881–885, 2004.Nishiwaki2004 Taeko Nishiwaki, Yoshinori Satomi, Masato Nakajima, Cheolju Lee, Reiko Kiyohara, Hakuto Kageyama, Yohko Kitayama, Mioko Temamoto, Akihiro Yamaguchi, Atsushi Hijikata, Mitiko Go, Hideo Iwasaki, Toshifumi Takao, and Takao Kondo. Role of KaiC phosphorylation in the circadian clock system of Synechococcus elongatus PCC 7942. Proceedings of the National Academy of Sciences, 101(38):13927–13932, sep 2004.Tomita:2005uv Jun Tomita, Masato Nakajima, Takao Kondo, and Hideo Iwasaki. No transcription-translation feedback in circadian rhythm of KaiC phosphorylation. Science, 307(5707):251–254, 2005.Teng:2013cf S W Teng, S Mukherji, J R Moffitt, S de Buyl, and E K O'Shea. Robust Circadian Oscillations in Growing Cyanobacteria Require Transcriptional Feedback. Science, 340(6133):737–740, May 2013.Paijmans:2016fd Joris Paijmans, Mark Bosman, Pieter Rein ten Wolde, and David K Lubensky. Discrete gene replication events drive coupling between the cell cycle and circadian clocks. Proceedings of the National Academy of Sciences of the United States of America, 113(15):4063–4068, April 2016.Phong:2013fr Connie Phong, Joseph S Markson, Crystal M Wilhoite, and Michael J Rust. Robust and tunable circadian rhythms from differentially sensitive catalytic domains. Proceedings of the National Academy of Sciences of the United States of America, 110(3):1124–1129, January 2013.Gillespie:1977dc Daniel T Gillespie. Exact stochastic simulation of coupled chemical reactions. The Journal of Physical Chemistry, 81(25):2340–2361, December 1977.Paulsson:2004dh Johan Paulsson. Summing up the noise in gene networks. Nature, 427(6973):415–418, January 2004.TanaseNicola:2006bh Sorin Tănase-Nicola, Patrick Warren, and Pieter ten Wolde. Signal Detection, Modularity, and the Correlation between Extrinsic and Intrinsic Noise in Biochemical Networks. Physical Review Letters, 97(6):068102, August 2006.Govern:2014ez Christopher C Govern and Pieter Rein ten Wolde. Energy Dissipation and Noise Correlations in Biochemical Sensing. Physical Review Letters, 113(25):258102, December 2014.Cheong:2011jp R Cheong, A Rhee, C J Wang, I Nemenman, and A Levchenko. 
Information Transduction Capacity of Noisy Biochemical Signaling Networks. Science, 334(6054):354–358, October 2011.Tostevin:2010bo Filipe Tostevin and Pieter ten Wolde. Mutual information in time-varying biochemical systems. Physical Review E, 81(6):061917, June 2010.Guckenheimer:1983up J Guckenheimer and P J Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York, 1983.Anishchenko:2007tf V S Anishchenko, V Astakhov, A Neiman, T Vadivasova, and L Schimansky-Geier. Nonlinear dynamics of chaotic and stochastic systems: tutorial and modern developments. Springer, January 2007.Warren:2006ky Patrick B Warren, Sorin Tănase-Nicola, and Pieter Rein ten Wolde. Exact results for noise power spectra in linear biochemical reaction networks. The Journal of Chemical Physics, 125(14):144904, 2006.Gardiner85 C. W. Gardiner. Handbook of Stochastic Methods. Springer-Verlag, Berlin, 1985.
We explore the weak lensing E- and B-mode shear signals of a field of galaxy clusters using both large scale structure N-body simulations and multi-color Suprime-cam & Hubble Space Telescope observations. Using the ray-traced and observed shears along with photometric redshift catalogs, we generate mass maps of the foreground overdensities by optimally filtering the tangential shear that they induce on background galaxies. We then develop and test a method to approximate the foreground structure as a superposition of NFW-like halos by locating these overdensities and determining their mass and redshift, thereby modeling the background correlated shear field as a sum of lensings induced by the foreground clusters. We demonstrate that the B-mode maps and shear correlation functions, which are generated by similarly filtering the cross shear in this method, are in agreement with observations and are related to the estimated cluster masses and locations as well as the distribution of background sources. Using the foreground mass model, we identify several sources of weak lensing B-modes including leakage and edge effects, source clustering, and multiple lensing which can be observed in deep cosmic shear surveys. cosmology: observations – gravitational lensing: weak – large-scale structure of Universe – galaxies: clusters: general § INTRODUCTION Weak gravitational lensing provides a powerful tool to map and study the formation of large scale structure in the Universe. Foreground mass overdensities like clusters of galaxies cause light from distant sources to be deflected, causing extended objects to become tangentially sheared. Conversely, under-dense regions of space cause images of distant galaxies to appear radially aligned to the center of the void. Using this knowledge of the relationship between weak lensing shear and the gravitational potential, finite field non-linear reconstruction techniques can be applied to the shear to generate the locations and depths of potentials on the sky (<cit.>, <cit.>, <cit.>). By analogy with electromagnetism, <cit.> and <cit.> showed that the shear field can be decomposed into two components: the E-modes and the B-modes (so-called E/B, gradient/curl, non-vortical/vortical, and scalar/pseudo-scalar perturbations). <cit.> and others have shown that to a good approximation lensing produces only E-modes when large samples of lenses are used, with lensing-induced B-modes having a much reduced amplitude which has been observationally consistent with zero. Accordingly, E-mode patterns have been the main focus of gravitational lensing studies due to their usefulness in measuring density perturbations. Unfortunately this has led to the treatment of B-modes as a contaminant, and their exclusive use as a probe of systematic error.
However, there are numerous reasons to investigate B-modes as a source of astrophysical signal. B-modes can be observationally generated through random or intrinsic alignments of source galaxies as well as through their inhomogeneous (clustered) distribution <cit.>. Additionally, <cit.> show that E- and B-modes can mix when there is a lack of close projected pairs of galaxies, and conversely that B-modes are induced by finite survey size. Finally, if two (or more) lensing clusters are projected along the line of sight, the lensing of a background E-mode produces a so-called double-lensing B-mode pattern as shown in maps of <cit.> and in the shear correlation functions of <cit.>. Every cosmological survey has some of these issues, implying their presence in all weak lensing maps and correlations. The amplitude and scale of these observationally-induced B-modes is determined by the depth and width of field and the distribution of mass within it. As we will demonstrate, modeling of the lensing field can yield useful signal from these inevitable B-modes. Naturally, other observational issues related to the point spread function (PSF), such as atmospheric turbulence or optical distortions, can also induce spurious B-modes. For instance, it was shown in <cit.> that calibration errors in seeing, depth or extinction in excess of 3% r.m.s. would generate B-modes that bias the shear correlation function beyond statistical errors. For this reason, every weak lensing analysis addresses the issue of PSF modeling by using stars as a reference for the spurious PSF shear. Additionally, shear measurement methods must be calibrated to provide unbiased estimates under realistic conditions. Understanding of the above sources of B-modes is critical if novel astrophysical sources of B-modes are to be sought. For instance, the intrinsic alignments of source galaxies induced by tidal gravitational interactions with their environment are an active area of research <cit.>, and weak lensing B-modes produced by these interactions present an observational opportunity to constrain models of galaxy formation and evolution <cit.>. Other more exotic cosmological models can generate B-modes through tensor-vector perturbations <cit.>. Alternative models can even break the fundamental assumption of statistical homogeneity and isotropy of sources, or alter the propagation of photons through anisotropic cosmologies (<cit.>, <cit.>) which have yet to be observationally excluded (<cit.>, <cit.>). Given the abundance of possible astrophysical sources of B-modes, it is worthwhile to conduct a comprehensive search for B-modes in cosmic shear surveys as a potentially useful signal instead of purely an indicator of systematic error. Precision mass mapping and measurement of cosmic shear primarily necessitates a high number density of background galaxies to statistically reduce the shot noise from intrinsic galaxy ellipticity. For this reason, deep surveys of the sky have been specially designed to maximize the observed number density of background galaxies, decreasing the statistical uncertainty in the measured shear field by the square root of the observed number density of objects. Technology has accelerated the progress of weak lensing from first measurement in deep CCD images <cit.> to the discovery <cit.> and characterization of clusters (<cit.>, LoCuSS <cit.>, CCCP <cit.>, and CLASH <cit.>) and of voids <cit.> in galaxy surveys.
Weak lensing can also be used as a tool for cosmology, either through a census of peaks in mass maps (<cit.>, <cit.>, <cit.>, or through spatial correlations of the shear (as first measured by <cit.>, <cit.>, <cit.>, <cit.> and more recently by <cit.> <cit.> <cit.> and <cit.>). The power of each of these weak lensing surveys depends on the size of the cosmological volume it can probe, which can be aided by going wider or deeper. One future survey, the LSST, will have a 5σ depth of r∼27 (effectively hours of observing time on a 6.7 meter equivalent) across 18,000 square degrees at the end of its nominal 10 year survey <cit.>, achieved by efficiently stacking hundreds of exposures of billions of galaxies in its 9.6 sq. degree field of view. In this paper we use observations of a small region of the sky imaged to LSST 10-year depth, as well as ray-traced ΛCDM N-body simulations, to measure the 3-dimensional cosmic shear induced by an abundance of clustering along the line of sight. We develop and test a method to reconstruct the foreground lens clustering by representing observed lensing peaks in the E-mode maps as clusters in a forward-model of the lensing, approximating the observed shear as a sum of successive lensing potentials. We begin in Section <ref> with a description of the mass mapping method demonstrated on the Buzzard N-body simulations. In Section <ref> we present our method of lens reconstruction on these simulations and compare the estimated E- and B-mode maps and correlation functions.In Section <ref> we introduce multi-color Suprime-cam & Hubble Space Telescope observations of a deep weak lensing field in which we will apply this modeling technique. We briefly describe our data reduction process, which goes from raw data to photometric redshift calibration and shear measurement using the `stack-fit' algorithm on multiple dithered exposures (and multiple bands). With these calibrated shape and redshifts, in Section <ref> we apply the lens modeling technique to the Lynx field observations and demonstrate that these modeled halos capture most of the information in the E- and B-mode maps and correlation functions. We discuss these results in Section <ref> and conclude in Section <ref> with a summary and future applications of the method to wide field surveys. § MASS MAPPING METHOD In theories and N-body simulations of the formation of large scale structure, the clumping of matter forms in a hierarchical manner with well-defined and characteristic mass density profile for clusters of galaxies <cit.>. Through weak gravitational lensing, each clump of galaxies similarly induces a characteristic pattern of shear on background galaxies <cit.>. Therefore, one can use the measured weak lensing shear to infer the mass map, reconstructing the lensing field by optimally filtering the shear with a profile matched to the lensing signal of an NFW mass profile. We use this characteristic profile to apodize the measured shear field and locate overdensities along the line of sight by spatially mapping the aperture mass statistic M_ap:M_ap(θ)=∫ Q(|θ|)γ_t(θ)d^2θas introduced in <cit.>. This function takes the background galaxy shears around each point in a map and filters them with the function radial chosen function Q(|θ|). In our case, Q is chosen to approximate the tangential shear profile of an NFW-like overdensity through an apodized combination of exponentials and hyperbolic tangent functions which can be scaled to match clusters of varying mass and lensing efficiency. 
This useful approximately-NFW filter function was introduced in <cit.>, and is given by Equation <ref>.Q_NFW(x)=Q_box(x) tanh(x/x_c)/x/x_cThe parameter x is a dimensionless radius: the angular separation in units of the cutoff radius r_out, i.e. x:=r/r_out. The rate of tangential shear decrease (concentration) in the NFW profile x_c is taken to be x_c∼ 0.15 for all map-making processes. The apodization function Q_box provides the exponential damping around zero radius and around the cutoff radius r_out and is given by:Q_box(r)=(1+e^6-150r/r_out+e^-47+50r/r_out)^-1Maps using both this NFW filter and a generic polynomial filter are generated and compared, and both the E- and B-modes are found to be proportional. In the following analysis, only the maps with NFW-matched filter Q_NFW are used as they have higher signal-to-noise contrast in both the simulations and observations.Due to the WL mass-sheet degeneracy, convergence or mass density maps are measured by subtracting a reference shear from the measured shear, and thus are bipolar. Such maps are often thresholded at some positive level in order to display positive mass. Rather than picking thresholds or contours arbitrarily, we choose instead to plot aperture mass signal-to-noise ratio. Significance of mass density peaks may then be readily assessed.We spatially map the aperture mass signal-to-noise S_t by dividing M_ap by the correspondingly apodized shape shot noise with an average RMS source galaxy shear of σ_e = 0.3. This denominator is equivalent to computing the statistic in a field with no lensing. This can be represented discretely as a sum over all background galaxies:S_t=√(2)∑ϵ_t,i Q_i/(∑(σ_e Q_i)^2)^1/2,where e_t,i is the galaxy's tangential ellipticity relative to the map pixel, Q_i is the NFW filter weight at the galaxy's radial position, and σ_e is the galaxy shape shot noise. Peaks in this S_t map correspond to locations where the tangential shear is largest and most similar to an NFW mass profile, which most likely correspond to clusters or groups of galaxies in the foreground. Therefore we will use these shear peaks to estimate the location and mass of foreground cluster lenses.Similarly, one can use the cross shear γ_×, instead of the tangential shear in Equation <ref> to define the cross statistic:S_×=√(2)∑ϵ_x,i Q_i/(∑(σ_e Q_i)^2)^1/2Unlike the maps of S_t, the Q_NFW filter function is not necessarily optimal for B-mode detection, because there is no expected cross shear for a single isolated NFW concentration lensing a well-sampled and homogeneous background. Nonetheless, this B-mode statistic will pick up any axisymmetric curl-like shear pattern and can be easily computed alongside the E-modes. Often used as a check for systematic errors, significant B-modes in weak lensing studies are usually attributed to PSF mis-estimation or spatially varying calibration errors under the assumption that if PSF correction is done properly (and there are no astrophysical sources of B-modes), these maps should be consistent with noise. However, mapping itself can induce leakage from E-modes into observational B-modes in a predictable fashion for a given distribution of sources and lenses, and crucially, astrophysical B-modes can also be induced by multiple lensing and lensing of non-random source ellipticity distributions such as those induced by intrinsic alignments. 
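For concreteness, a minimal Python sketch of the NFW-matched filter and the aperture S/N statistics defined above is given below. The tangential/cross sign convention, the function names, and the evaluation at a single map position are our own illustrative choices rather than details of the actual map-making pipeline.

```python
import numpy as np

def q_box(x):
    # Apodization near x = 0 and near the cutoff x = 1 (r = r_out), as in the text.
    return 1.0 / (1.0 + np.exp(6.0 - 150.0 * x) + np.exp(-47.0 + 50.0 * x))

def q_nfw(x, x_c=0.15):
    # NFW-matched aperture filter: Q_box(x) * tanh(x/x_c) / (x/x_c).
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        q = q_box(x) * np.tanh(x / x_c) / (x / x_c)
    return np.where(x > 0, q, 0.0)

def aperture_sn(x0, y0, gal_x, gal_y, e1, e2, r_out, sigma_e=0.3):
    """S_t and S_x at one map position (x0, y0) from galaxy ellipticities (e1, e2).

    Positions are in the same angular units as r_out.  The decomposition uses
    the common sign convention e_t = -(e1 cos 2phi + e2 sin 2phi).
    """
    dx, dy = gal_x - x0, gal_y - y0
    r = np.hypot(dx, dy)
    phi = np.arctan2(dy, dx)
    e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))
    e_x = e1 * np.sin(2 * phi) - e2 * np.cos(2 * phi)
    q = q_nfw(r / r_out)
    norm = np.sqrt(np.sum((sigma_e * q) ** 2))
    s_t = np.sqrt(2.0) * np.sum(e_t * q) / norm
    s_x = np.sqrt(2.0) * np.sum(e_x * q) / norm
    return s_t, s_x
```

Evaluating aperture_sn on a grid of (x0, y0) positions would reproduce S_t and S_× maps of the kind discussed in the text.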
Therefore both E- and B-mode maps are worth careful investigation in simulations and observations.§.§ Mapping N-body simulationsWe first investigate the mass mapping method on simulations using the Buzzard cosmological N-body simulations <cit.> that were produced for the Dark Energy Survey <cit.> and which provide realistic 5-year depth galaxy shear and halo catalogs. Catalog data used include positions (RA/Dec/z) and reduced shear (g_1/g_2) of galaxies computed using the ray-tracing code<cit.> as well as the positions and masses M_200 of clusters in the halo catalog. After mass mapping with the filters described in Equation <ref> applied to Equation <ref>, we can identify peaks in the mass map and correlate them with the (unobservable) halo catalog to develop our method of shear reconstruction.We choose to analyze a small region of the Buzzard universe ∼ 1^∘×1^∘ wide and centered around a massive foreground lens at (α: 343.7, δ: -25.0). The large cluster centered in the foreground of this slice is at redshift z=0.28 with mass M_200=6.8× 10^14 M_⊙ provides an anchor for our observations as it is easily detected given ∼40,000 background galaxies with an observed number density of n_gal=13 background galaxies per square arcminute. Additionally, there are ∼120 other clusters/groups with mass M_200>2× 10^13M_⊙ which will also be used into the lensing simulation, shown as colored dots in Figure <ref>. If we then slice the galaxy catalog by choosing galaxies only with z_photo>0.5, we can detect the collective imprint of these overdensities as the high S/N regions of the E-mode map shown as contours in Figure <ref>. There is a clear correlation of the contours and overdensities of halo catalog members along the line of sight, confirming the obvious utility of mass mapping. In the next section we develop a method to model each E-mode peak as a lensing mass detect peaks using an image segmentation algorithm similar to that used in . The locations of high aperture mass are shown in Figure <ref> as black boxes.§ LENS MODELING OF THE E & B MODES Our goal is to estimate the observed shear of background galaxies as the summation of the shears induced by the most massive clusters along the line of sight. In the weak lensing limit, the shears of each lensing halo add together, i.e. each background galaxy's total shear is represented as:g_tot=∑_jg_j(M_j,z_j,c_j,δ_r,z_source)where the sum is taken over all halos, j, which lie in the foreground of the galaxy, and where M_j, z_j, c_j, are the halo mass, concentration, and redshift, and δ_r and z_source are the estimated angular distance between source and halo and the photometric redshift of the source. The resultant reduced shear sums for each galaxy in the simulation can then be compared to the observed shears. In the case of Buzzard, the observed shears are ray-traced using<cit.>, a computationally complex calculation. The simplification in Equation <ref> can be tested in the weak lensing limit by using the positions and masses M_200 of clusters in the halo catalog, assuming an NFW shear profile <cit.> and mass-concentration relation <cit.>. Given a model of angular diameter distances between the lens and source (i.e. h=0.7,Ω_Λ=0.7, Ω_m =0.3), each lens adds a weak shear to each source galaxy. We therefore approximate both components of the reduced shear of each source galaxy g_1,2^tot, which are robustly correlated to the ray-traced shears γ_1,2^ray as shown in Figure <ref>. 
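A minimal sketch of the shear superposition of Equation <ref> is given below. For brevity it uses a singular-isothermal-sphere tangential shear, γ_t = θ_E/(2θ), as a stand-in for the NFW profile and mass-concentration relation used in the paper, and a simple step-function lensing efficiency in which only halos in front of the source contribute; the dictionary field names and the θ_E parameterization are assumptions made for illustration only.

```python
import numpy as np

def sis_tangential_shear(theta, theta_e):
    # Singular isothermal sphere: gamma_t = theta_E / (2 theta); a stand-in for
    # the NFW shear profile used in the paper.
    return theta_e / (2.0 * np.maximum(theta, 1e-12))

def total_shear(src_x, src_y, src_z, halos):
    """Sum the shear induced by every foreground halo on one source galaxy.

    `halos` is a list of dicts with keys x, y, z and theta_e; theta_e stands in
    for the mass/concentration/distance dependence of the per-halo term.
    """
    g1, g2 = 0.0, 0.0
    for h in halos:
        if h["z"] >= src_z:          # a halo behind the source does not lens it
            continue
        dx, dy = src_x - h["x"], src_y - h["y"]
        theta = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        gt = sis_tangential_shear(theta, h["theta_e"])
        # Project the tangential shear onto the g1/g2 basis of the map.
        g1 += -gt * np.cos(2 * phi)
        g2 += -gt * np.sin(2 * phi)
    return g1, g2
```

In the actual model, each per-halo term g_j is instead evaluated from the estimated M_200, the mass-concentration relation, and the lens-source angular diameter distances.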
However, in an observational setting, we are not given a halo catalog with positions and masses. We must therefore estimate these unobservable `halo catalog parameters' from the data, namely through measurements of the reduced shear (g_1/g_2) and position (RA/Dec/z) of galaxies. Mass mapping provides the correlation between overdensities and high S/N regions of the S_t maps, as illustrated in Figure <ref>. We show the feasibility of applying this method to real data by estimating the locations and masses of halo catalog objects in the Buzzard universe, using only aperture mass maps. Overdensities are identified with peaks in the mass map (shown as black boxes in Figure <ref>) using an image segmentation and deblending algorithm. These candidate cluster RA/Dec positions are clearly associated with objects in the (unobserved) halo catalog, shown as colored dots in that figure. Given this list of peak RA/Dec positions, the mass and redshift of candidate clusters can then be estimated. Redshifts of clusters are estimated from the photometric redshift distribution, P(z_phot), of foreground galaxies which lie along the line of sight to each peak, with the expectation that galaxies are tracers of the position of the overall halo. Instead of modeling each cluster halo individually, we simultaneously model all clusters using a combination of tangential shear profile fitting and correlations between aperture mass map S/N and M_200 <cit.>, which can be extrapolated from wide-field trends of the Buzzard simulation. The resulting correlation between this predicted mass and the halo catalog mass measured from the Buzzard N-body simulation is shown in Figure <ref>. Because mass is the only source of lensing E-modes in the N-body simulations, these few approximate lenses capture most of the shear information contained in the much more complex and computationally intensive ray-tracing of the full N-body simulation. Using the cluster mass and location estimates we then simulate the shear of the background galaxies as the sum of the predicted cluster mass lensings, thus introducing only E-modes into the shear field, according to Equation <ref>. Application of Eqs. <ref> and <ref> to these shears produces estimated S_t and S_× (E- and B-mode) aperture mass maps which we can then compare to the ones which utilize the ray-traced shears. As shown in Figure <ref>, the overall pattern in both modes is quite similar between ray-traced and approximated maps. The E-modes in particular are very strongly similar, as our simulated foreground lenses have been chosen to match the observed tangential signature maps. However, there is much more small scale structure in the ray-traced shears, which are estimated from the full matter power spectrum of the N-body simulations. For the B-mode, the simulated and observed maps also have statistically significant correlations, which we can explore in several scenarios. The first is the addition of shape noise, which smooths the large scale structure of the maps and adds noise to the small scales, thereby broadening the distribution of S_t and S_× pixel values as shown in the histograms of Figure <ref>. If random shape noise is included in the forward simulation at an amplitude of η_rms=0.3, the 1-σ width of the simulated S_t and S_× distributions nearly matches the width of the observed S_t and S_× distributions, as seen in Figure <ref>. At high (S_t>5) values, a slight over- and then under-shoot in the distribution of E-mode map pixels can be seen. 
These small (∼ 1%) discrepancies indicate that more precise modeling of the halos is needed to account for all pixel values. For instance, the empirical mass-concentration relation assumed in the approximation of the halo lensing potential <cit.> may need to be adapted to the specifics of each N-body simulation.Beyond those induced by shape noise, there are other sources of observational B-modes at play here. The largest and most mundane source of B-modes are those induced at the boundaries of the aperture mass map, where pure E-modes can leak into the B-mode due to azimuthal symmetry violation in aperture measures. These B-modes can be mitigated by padding the edges of the simulation or by limiting consideration of B-modes near the edges entirely, and so they are excluded from the analysis shown in Figure <ref>. However, these edge B-modes are related to the strengths and locations of the lenses in the field, and therefore are not entirely noise. Another source of B-modes is the clustering of source galaxies, as discussed in <cit.> and <cit.>. These variations in spatial positions of galaxies again lead to an asymmetry in aperture measurements of pure E-mode fields, and again are completely describable if the known positions of E-modes are known.We can probe and eliminate these two galaxy density effects in our simulation by replacing the realistically clusteredgalaxy distribution in RA/Dec/z with a uniform distribution of galaxies at a fixed redshift but which are lensed by the estimated 3D positions of halos. This has the effect of drastically reducing the observed B-modes, indicating that the majority of B-modes in mapped fields results from the realistic source density variations. Additionally, If we map the E- and B-modes using a uniform distribution of galaxies (at a single source plane of z=1.0) extended beyond the edge of our slice, such that no cluster is within several aperture radii of the extended galaxy boundary, the edge effect disappears. However, as shown in Figure <ref>, there are still non-zero B-modes even in the case of uniform source galaxy distributions with no edge effects.B-modes which remain after edges and source non-uniformity effects are removed may prove to be physically interesting. For instance, they can be a sign of multiple lensing of background galaxies, which produces a characteristic quadrupole-like B-mode pattern when two lenses are well-aligned along the line of sight <cit.>. However, the simulated source galaxy density in that study is much higher than our current ground observations allow, and their simulations only include very fortuitous alignments of clusters along the line of sight. Therefore, we do not expect such obvious B-mode patterns in our observations, and they aren't readily seen in our observed maps. However, in our simulations we can probe this multi-lensing effect by modifying the distribution of lenses and galaxies. In fact, it is observed that the width of the S_× distribution is broader in the case of realistic 3D lens positions than when all lenses are placed at a single redshift. Additionally, if the source galaxy density is greatly increased to n_gal=100  arcmin^-2, the width of the S_× distribution is increased even further and the quadrupolar B-mode pattern of double lensing is observable in the maps. 
In practice, this multi-lens effect broadens the simulated (and observed) S_× distribution by a small amount, but given the much lower surface density of z>0.8 galaxies in our observations, the distinct pattern of double lensing in this field is unresolvable.
§.§ Shear correlation functions
Statistical correlations are complementary to spatial maps. While losing spatial information, the summary statistics produced by correlating the shears of all galaxies can provide a less noisy test of the origins of B-modes. Correlation functions over the field are computed using both the ray-traced shears and those simulated under multiple conditions. The shear correlations ξ_±, the E- and B-mode aperture mass dispersion ⟨ M_ap,×^2 ⟩, and the top-hat shear dispersion ⟨γ^2 ⟩_E,B are calculated on scales of 0.5<θ<15 arcminutes. The raw shear correlation function is computed by correlating the tangential and cross shear around each galaxy, and either summing or differencing the two correlations,
ξ_±(θ) = ⟨γ_tγ_t⟩ ± ⟨γ_×γ_×⟩ .
These raw correlations depend on both the E- and B-mode power spectrum, and so the decomposition into aperture mass statistics is a useful one. We define the aperture mass correlations in terms of the ξ_± functions generally as
⟨ M_ap,×^2 ⟩(R) = ∫_0^∞ (r dr/2R^2) [ T_+(r/R) ξ_+(r) ± T_-(r/R) ξ_-(r) ] ,
where M_ap and M_× take the plus and minus signs, respectively, and the functions T_± are window functions which are generalized autocorrelations of the filter function Q, and which limit the inclusion of small and large radii which are difficult to measure. Previously, in Section <ref>, we chose Q to be a signal-matched filter (NFW-like) for optimal detection of E-modes, but for the correlation functions presented here we use a Gaussian-derivative type window function, which gives T_± in the form
T_+(s) = [(s^4 - 16s^2 + 32)/128] exp(-s^2/4),  T_-(s) = (s^4/128) exp(-s^2/4),
as presented in <cit.>. Another popular cosmic shear statistic is the shear dispersion within a circle of radius R, which can again be decomposed into the shear-shear correlations as given by the expression
⟨γ^2 ⟩_E,B(R) = ∫_0^2R (r dr/2R^2) [ S_+(r/R) ξ_+(r) ± S_-(r/R) ξ_-(r) ] ,
where the E- and B-modes correspond to the + and - signs on the R.H.S. These E/B decompositions of the shear field are filtered analogously to ⟨ M_ap,×^2⟩, but with the S_± window functions applied as follows:
S_+(s) = (1/π)(4 arccos(s/2) - s √(4-s^2)),
S_-(s) = [1/(π s^4)][s √(4-s^2) (6-s^2) - 8(3-s^2) arcsin(s/2)]
for s ≤ 2, and S_-(s) = 4(s^2-3)/s^4 for s ≥ 2. These window functions are broader than T_±, implying that more shear dispersion signal is included but that the signal is less localized. A constant shear generates contributions to both E- and B-modes <cit.>. The correlations are computed on our dataset using the code <cit.>, and the results of the calculations are shown in Figure <ref> using three sets of shears which have similar shape and scale. The three separate shear datasets are as follows: the shears from the ray-tracing code, those simulated using the observed RA/Dec/z positions clustered on the sky and lensed by our approximate foreground, and those simulated with a uniform source plane at z=1.0 at a galaxy number density of 13  arcmin^-2 lensed by the same foreground model. E-modes for each statistic are shown in blue shades and B-modes are shown in red shades. Error bars on the correlation functions are from variance in the shear itself; the simulated cases have no shape noise added. 
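The raw correlations ξ_± defined above can be estimated directly from a shear catalogue by projecting each pair of ellipticities onto the axis joining the pair and averaging the products in angular bins. The brute-force sketch below illustrates this; the O(N^2) pair loop, the binning scheme, and the sign convention are our own simplifications, and in practice a tree-based correlation code is used, as noted above.

```python
import numpy as np

def xi_pm(x, y, g1, g2, bins):
    """Brute-force estimator of xi_+(theta) and xi_-(theta).

    `bins` are angular bin edges in the same units as x and y.
    """
    n = len(x)
    xi_p = np.zeros(len(bins) - 1)
    xi_m = np.zeros(len(bins) - 1)
    counts = np.zeros(len(bins) - 1)
    for i in range(n):
        dx, dy = x[i + 1:] - x[i], y[i + 1:] - y[i]
        theta = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        c2, s2 = np.cos(2 * phi), np.sin(2 * phi)
        # Tangential / cross components of both galaxies w.r.t. the pair axis.
        gt_i = -(g1[i] * c2 + g2[i] * s2)
        gx_i = g1[i] * s2 - g2[i] * c2
        gt_j = -(g1[i + 1:] * c2 + g2[i + 1:] * s2)
        gx_j = g1[i + 1:] * s2 - g2[i + 1:] * c2
        idx = np.digitize(theta, bins) - 1
        good = (idx >= 0) & (idx < len(counts))
        np.add.at(xi_p, idx[good], (gt_i * gt_j + gx_i * gx_j)[good])
        np.add.at(xi_m, idx[good], (gt_i * gt_j - gx_i * gx_j)[good])
        np.add.at(counts, idx[good], 1.0)
    counts = np.maximum(counts, 1.0)
    return xi_p / counts, xi_m / counts
```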
As a null test, randomizing the shear components but keeping the positions fixed resulted in zero correlation in all statistics at all scales. The slight mismatch (overshoot/undershoot) in the approximate model correlation functions indicate some residual tuning may be required. For instance, the empirical mass-concentration relation we assume <cit.> is likely an improper approximation when comparing to ray-traced N-body simulations as ray-tracing accounts for mass at scales which are inaccessible to actual observations. Indeed, modification of the concentration relation does lead to a difference in the shape of the correlation function at small scales. Despite the fact that these two-point correlation functions were not fit to the data, the general agreement in the shape and scale of the correlations strongly suggests that this method of reconstructing halos can provide physical insight into both the E- and B-modes.For instance, this field has extensive large scale structures which give an average matter density in excess of that given by the universal mass power spectrum. Given this clustering and depth of the observation along the line of sight, it is thus expected that the measured correlation functions would reach large values in agreement with the model. Interestingly, there are also non-zero B-mode in the measured correlations. Ordinarily these might be blamed on incomplete PSF modeling or gaps in the data, however there is no PSF in the simulations nor are their bright star masks or other depth variations. Though there were B-modes induced at the boundaries of the mass maps due to incomplete sampling of E-modes, in the case of these correlation functions the edges are not an issue. This is because ξ_± are directly sampled at each galaxy point, and not on a grid as in the mapping scenario <cit.>. In fact, masking more (∼ 5 arcmin) of the edges of the observation does not noticeably change the correlation function. Additionally, the ray-traced and approximated B-modes are of similar amplitude even when there is no masking and a uniform lens plane is used. This implies that the majority of observed B-modes in these correlation functions are not due to the PSF (which is not included in the simulations) or source galaxy clustering. Rather, the B-modes in the correlation function must be due to multiple lensing and artifacts intrinsic to the particular decompositions of a realistic shear field into E- and B-modes. In fact, <cit.> show that E- and B-modes can mix when on small scales when there is a lack of close projected pairs of galaxies, and on large scales due to the finite field size. Both of these unavoidable observational facts limit complete sampling of the shear power spectrum and are present in all maps and correlations, to a degree determined by the depth of the field and the distribution of mass within it. Therefore, careful modeling of the lensing field can turn these B-modes into tools for the mapping of large scale structure, and understanding of these observational artifacts is also critical if astrophysical sources of B-modes are to be sought. As seen in the maps, histograms, and correlation functions of Figures <ref>, <ref>, and <ref>, these expected sources of B-modes are well captured by our lensing approximation and justify its use on observational data, as we do in the following section. 
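For completeness, the conversion from tabulated ξ_± to the E- and B-mode aperture mass dispersions ⟨ M_ap,×^2 ⟩ used in the comparison above can be sketched numerically as follows; the trapezoidal integration and the truncation of the integral at the largest measured separation are our own simplifications.

```python
import numpy as np

def t_plus(s):
    return (s**4 - 16.0 * s**2 + 32.0) / 128.0 * np.exp(-s**2 / 4.0)

def t_minus(s):
    return s**4 / 128.0 * np.exp(-s**2 / 4.0)

def map_dispersion(theta, xi_p, xi_m, radii):
    """E- and B-mode aperture mass dispersion from tabulated xi_+(theta), xi_-(theta).

    The integral is truncated at max(theta); the exponential decay of T_± keeps
    the truncation error small once max(theta) is a few times the aperture radius.
    """
    e_mode, b_mode = [], []
    for R in radii:
        s = theta / R
        common = theta / (2.0 * R**2)
        plus = np.trapz(common * t_plus(s) * xi_p, theta)
        minus = np.trapz(common * t_minus(s) * xi_m, theta)
        e_mode.append(plus + minus)   # <M_ap^2>
        b_mode.append(plus - minus)   # <M_x^2>
    return np.array(e_mode), np.array(b_mode)
```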
§ APPLICATION TO DEEP WEAK LENSING OBSERVATIONS The observational weak lensing data used in this study consists mainly of Suprime-cam 80 megapixel mosaic exposures in Lynx (α_J2k: 132.20, δ_J2k: +44.93) using the [B/V/R_c/i'/z'] filters for [60, 80, 90, 65, 90] minutes on the Subaru telescope during the first observing season in 2001 & 2002. The exposures were dithered during observation, providing a uniform coverage of a field containing a large cluster, Lynx North (RX J0848+4456, mass M=5×10^14 M_⊙ and redshift z=0.55 <cit.>). Additional super-clustering of galaxies at redshift z∼ 1.3 has been discovered through a combination of X-ray surveys and galaxy clustering (<cit.>, <cit.>, <cit.>). The Lynx field is similar to the simulated field in Section <ref>; both fields are not the most dense regions of the Universe, but do feature a single large overdensity ∼ 10^14 M_⊙ and associated nearby large scale structure. We will use all detected galaxies behind z>0.8 (number density 17  arcmin^-2) to map the abundance of foreground structure through the measured shear patterns they induce. A segment of the observations centered around Lynx North are shown in Figure <ref>, where we show mass map contours overlaid on an RGB composite from i', R_c, and V band coadds frames in the top panel, and multi-color tangential shears measured around the cluster in comparison to an NFW model.Images of the field are gathered from the Subaru-Mitaka-Okayama-Kiso Archive (SMOKA) system <cit.>. Raw images were reduced for scientific analysis partially using thedata reduction process, which provides Suprime-cam image overscan and bias subtraction, flat fielding, atmospheric dispersion correction, as well as masking of known bad pixels (<cit.>, <cit.>). Reduced images are then astrometrically aligned using<cit.> by matching to known objects in the Sloan Digital Sky Survey (SDSS) catalogs. The analysis pipeline must fork at the step of image combination, because two different criteria must be optimized for weak lensing and photometric analysis which we describe separately. §.§ Photometric analysisFor photometry, all images in a given BVR_ci'z' filter are PSF matched and coadded using thepipeline. PSF matched photometry is then performed with<cit.>, where we form a cross-band detection image using the deepest and best seeing frames in the R_c, i',z' (reddest) filters, weighted by their depth. This detection image is then degraded to individual filters where the seeing is poorer in order to estimate the isophotal flux lost by the degradation. PSF stars for this process are chosen using the same method described below in the shape measurement section <ref>. Degradation of the images to a common seeing is performed using 'skernel algorithm. This process results in a PSF-corrected magnitude that provides robust colors for faint and small galaxies which are close to the noise floor in images <cit.>. The resulting photometric catalog is then zero-point calibrated using a combination of matching to SDSS data as well as stellar locus regression <cit.> to determine precise absolute magnitudes across the coadded images. Besides providing foreground/background separation, the photometric color catalog is also useful in the identification of stars for PSF modeling in the following subsection. Counts of detections (3 or more pixels above 3σ) indicate catalog completeness to ∼ 27th magnitude, similar to the depth expected at the completion of the 10-year LSST survey. 
The calibrated broadband galaxy colors are then matched to models of redshifted galaxy spectral energy distributions (SEDs) using the Bayesian Photometric Redshifts (BPZ) software <cit.>, which provides estimates of the likelihood of redshift and type, P(z,t), for every object. Some modifications to the defaults were made, to both improve the magnitude prior as well as to use updated galaxy SED templates which proved useful in the Deep Lens Survey <cit.>. These tweaks reduced the overall scatter and bias, ensuring separation of foreground and background galaxies critical to the weak lensing analysis. The photometric measurements in the field are further calibrated for zeropoint offsets using spectroscopic redshifts <cit.> in the Lynx field. As seen in Figure <ref> there is a clean separation between foreground and background galaxies at z>0.8, with σ_z = σ[(z_p-z_s)/(1+z_s)]=0.084 for all z and σ_z =0.056 for galaxies z<0.8. This successful calibration shows that it is extremely unlikely for foreground sources to have scattered into the background sample, mitigating a large risk in projected and tomographic weak lensing measurements.§.§ Shape analysisFocusing on the weak lensing analysis, image quality must not be compromised via the above PSF equalization process which degrades images to match the PSF of the worst seeing filter. We instead consider each exposure and its PSF in a joint stack-fit for the galaxy shape which takes advantage of the best seeing exposures. The V/R_c/i' images were determined to be of sufficient PSF quality for weak lensing by examining the exposures in each filter and measuring the mean width of the PSF and depth of imaging, which disqualified the B and z' filters respectively. In each of the V/R/i' bands, non-PSF matched coadds are produced through<cit.>, after which each image of the coadd is individually and jointly analyzed. Preliminary object shape parameters are estimated in addition to the photometric measurements already computed on the coadd. Stars for PSF estimation are then selected through a combination of filtering and clustering in multi-dimensional parameter space. First, since our field overlaps with SDSS we are able to identify the positions of spectroscopically confirmed stellar objects, likely including some fraction of binaries unsuitable for PSF analysis. However, we can use these stars in each frame to find the location of the PSF-like objects in brightness/size space (a tilted line due to instrumental “brighter-fatter" or charge transport effects <cit.>), which then allows for fainter non-spectroscopically confirmed PSF-like objects to be gathered. We then further require that these PSF-like objects occupy a 1σ region in color-color (all permutations of BVR_ci'z') space occupied by the spectroscopically-confirmed objects. Finally, we also reject PSF outliers in e_1-e_2 PSF ellipticity space, and use the remaining stars as anchors for a model of the spatial variations of the PSF pattern in each exposure.This observed PSF pattern of orientation and ellipticity is unique in each exposure, though it does show common smoothly-varying features which are indicative of previously investigated misalignment and drifting of optical elements during observation <cit.>. 
We model these field-wide aberrations using dozens of PSF stars on each CCD chip and hundreds in each exposure with a smoothly varying principal component analysis (PCA) model.This PCA model finds the coefficients of twenty “eigenPSFs” which describe the majority of PSF variance in each exposure, and we then fit a polynomial surface to the PCA coefficients to provide a map of the PSF across the focal plane for each exposure, as in <cit.>. Special care has been given to ensuring the edges of each chip are roughly continuous and unaffected by gaps in PSF data.Once the PSF stars have been selected, modeled, and interpolated to the position of each galaxy in an individual exposure, the process of forward PSF convolution and shape estimation of galaxy images can begin. The shape measurement algorithm used in this study operates on each exposure and is a modified form of the one used in the DLS <cit.> calledwhich won the GREAT3 gravitational lensing challenge <cit.>. For each galaxy the algorithm fits an elliptical Gaussian jointly across all images using the spatially resolved PSF extrapolated to the position of the galaxy in each exposure. Shear calibration for this method is provided through image simulations. Though precise and accurate shape measurement is an ongoing challenge to the weak lensing community, it can be seen that shear calibration is secondary to the large cosmic shear signal in this field. Additionally, the weighting applied by multiplicative bias calibration does not affect the value of S_t because it changes both the numerator and denominator in Eq. <ref> in similar ways <cit.>.As one consistency check, we investigate the agreement between wavelength bands on the final measured ellipticity profile of Lynx North, shown as colored lines in the bottom panel of Figure <ref>. A model of the tangential shear of background galaxies according to an NFW profile with mass, concentration, and redshift of M=5×10^14 M_⊙, c=4, and z=0.55 is also shown, measurements which have been confirmed through strong lens modeling and X-ray analysis of the galaxy cluster (<cit.>, <cit.>, <cit.>). The thick shaded lines in the bottom of Figure <ref> represent the non-zero cross shear component measured in each filter, where their thickness is the 1σ width as measured in each bin and which are representative of the γ_t (thin line) errors. The same distribution of galaxies is used in the measurements of tangential and cross shear. The broad agreement between the bands, which were independently obtained and analyzed, suggests that observational and modeling systematics which can vary between exposures are sub-dominant to our lensing signal.The consistency of tangential shear between bands also implies that the weak lensing analysis can be improved by combining the information contained in multiple bands. Therefore, our shear analysis leverages the shape of each galaxy in three bands by providing an estimate of the error on shape measurement, wherein only galaxies with agreement between bands are used in later analysis including mass mapping and correlation function measurements. A traditional test of PSF and shape modeling error, the star-galaxy cross-correlation function, is described later subsection <ref>. §.§ Mass mapping the Lynx fieldUsing the measurement of tangential and cross shear measured in each filter as input to Eqs. <ref> and <ref> we construct E- and B-mode aperture mass maps of the Lynx field. 
In the E-mode maps, the most massive structure in our field, the Lynx North cluster (previously seen in the maps of <cit.>) is detected at S_t > 10 in all three shape catalogs V,R_c,i'. The combined S/N map of Lynx North is shown in the top of Figure <ref>, where contours colored from blue to red indicate S_t from -3 to 8. The contours indicate the complexity of the mass clustering in the field of view and which can also be seen in the optical RGB image in that figure (composed from i', R_c,V bands, respectively). Red sequence galaxies in this region, indicators of cluster membership, often underlie areas of positive S_t across the entire field.The computed S_t mass maps in all three bands are quite similar, indicating agreement on the location and size of E-modes in the field. Calculating the Pearson correlation coefficient on the E-mode maps in the V,R_c,i' bands gives ρ_R_c,i' = 0.5156 ± 0.0005, ρ_R_c,V = .4789 ± .0003, and ρ_i',V = .4859 ± .0004, where the errors have been estimated from bootstrap resampling. Additionally, our S_× (B-mode) maps in the VRi' bands also show spatially correlated structure, with Pearson correlation coefficients between the B-mode maps in the R_c/i',R_c/V and i'/V bands calculated to be ρ_R_c,i' = 0.289 ± 0.004, ρ_R_c,V = 0.363 ± 0.003, and ρ_i',V = 0.221 ± 0.004, again with errors estimated from bootstrap resampling. Because the background shapes in multiple filters were measured independently using observations with large dithers and nights separated by many months as well as vastly different stellar PSF patterns, it is unlikely that PSF error is the origin of these cross-band B-modes. §.§ Comparison with HST observations The Lynx field has been the subject of multiple previous investigations, including deep Hubble Space Telescope observations and a weak lensing analysis of ACS images presented in <cit.>. In that paper, a shapelet decomposition method is used to measure the shear which aided in the 3σ detection of two M∼ 2× 10^14M_⊙ members of the z∼ 1.3 super clustering in the field, Lynx-E and W, which have been well-studied and verified with Chandra X-ray analysis in addition to the weak lensing mass. The HST imagery we access unfortunately does not cover the central region of Lynx North at z∼0.55, but its influence is easily observed in the shear field near the boundary. Indeed, the tangential shear about the coordinates of Lynx North (outside of the HST observations) is virtually identical with both the shears as measured in the Suprime-cam observations and the model shown in the bottom of Figure <ref>. These space-based shapes provide a useful cross-check for our systematics in the ground-based observations. The stark differences in shape measurement method and PSF provide an outside test of the E- and B-mode detection in situations with varying image orientations, resolutions, cameras, optics, and atmospheric effects. We match this space-based shape catalog with the photometry provided by the Suprime-cam observations, and then map the E- and B-modes as was done in the ground-based Suprime-cam observations. These space and ground-based maps are then compared, and broad agreement is seen both visually and statistically: for the E-mode maps, the Pearson correlation coefficient between space and ground is ρ_E:space,ground= 0.374 ± 0.005, and the B-mode maps are statistically correlated as well, ρ_B:space,ground= 0.210 ± 0.0004. 
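Map-to-map correlation coefficients and uncertainties of the kind quoted above can be computed with a short routine of the following form; whether the resampling is done over map pixels or over galaxies is not specified here, so the pixel bootstrap below is an assumption, and it ignores the spatial covariance of pixels within one aperture radius.

```python
import numpy as np

def bootstrap_pearson(map_a, map_b, n_boot=1000, seed=0):
    """Pearson correlation of two S/N maps with a pixel-bootstrap error bar."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    rng = np.random.default_rng(seed)
    rho = np.corrcoef(a, b)[0, 1]
    boots = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, a.size, a.size)
        boots[k] = np.corrcoef(a[idx], b[idx])[0, 1]
    return rho, boots.std()
```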
This space-based observation therefore validates the PSF modeling, shape measurement, and mass mapping algorithm used in the analysis of the ground-based data, indicating that the maps not biased by observational effects on the scales which are shared by both data. Additionally, such comparisons open up an exciting opportunity to utilize wide-field lensing maps (such as those from surveys) to model the external wide-field convergence of higher-redshift weak lensing observations on narrower fields of view (e.g. <cit.>). Indeed, use of the wide-field lensing map in the analysis of the deeper space-based catalog has provided a tentative detection of a filament between the two z∼ 1.3 clusters Lynx E and W.§ LENS MODELING OF THE LYNX FIELD To investigate the observed E- and B-modes in the Lynx field, we model the foreground lens field as we did in the N-body simulations of Section <ref>. We do this by segmenting the E-mode aperture mass map and assigning clusters to locations to peaks in RA/Dec/z, and then simultaneously fitting N_clust∼100 mass peaks as indicated by boxes in Figure <ref>. As was previously shown, these peaks are correlated with the actual foreground lensing structure of clusters or groups of galaxies. Though some of these assumed peaks may not correspond to real halos, and the real halos may not have exactly NFW shear profiles, their strength, spatial distribution, and number density above M=10^13 M_⊙ is similar to what is expected in similarly-sized fields (see Section <ref>).We then calculate the shear of each of the ∼ 10^4 background galaxies with z_phot>0.8 as being the sum of shears induced by the estimated foreground halos. This is done for each background galaxy located in 3-dimensional coordinate space (RA/Dec/z) by minimizing the difference between the approximate and observed S_t maps and computing the total reduced shear g_tot as in Equation <ref>. Since the simulations are shape-noise free, the approximate model has reduced shears which are mostly of an amplitude g_1,2<0.1 in correspondence with the weak lensing limit. Due to the lack of shape noise, the ellipticity components of the simulation are much smaller than the observations. However, the correlation is statistically significant, ρ_e1:sim,obs= 0.084 ± 0.004 and ρ_e2:sim,obs= 0.088 ± 0.014, where errors have been estimated from bootstrap resampling. Adding a shape noise (pre-lensing) broadens the e_1,2 distributions to the observed values, but unnecessarily degrades the correlation between simulated and observed shears and thus shape noise is not applied in these simulations.These estimated shears can then be run with same aperture mass mapping algorithm as described in the previous section to produce estimated maps. The observed and approximated maps can be visually compared in Figure <ref>. As in the N-body simulation analysis, the E-mode maps are very strongly similar as they have been iteratively fit to match the observed maps under the assumption that the majority of the E-modes are due to lensing, and the correlation coefficient of the simulated E-mode maps with the observed maps is ρ_E:sim,obs= 0.83 ± 0.02. The B-mode maps exhibit weaker correlation by eye due to the dominance of shape noise over the lensing signal, but the correlation remains statistically significant with ρ_B:sim,obs= 0.22 ± 0.01, even when edge effects are excluded.As in Section <ref>, we can use this model of foreground lenses to probe the effects of different source galaxy distributions. 
First, we can remove the source clustering B-modes by lensing a uniform background source galaxy plane (with the same number density of galaxies per square arcminute, but at a single redshift z=1.0). This is shown as the narrowest distribution of B-modes in Figure <ref> and indicates the sub-dominance of multiple lensing which is expected given the low density of background sources and lack of fortuitous extreme alignment between foreground clusters. Going beyond a uniform source plane by using the actual (measured) 3-dimensional distribution of background galaxies results in B-mode generation from the source clustering effect. This effect broadens the E- and B-mode distributions as it adds depth and variance to each. Finally, we can add shape noise to this clustered distribution of background galaxies and observe the broadening of the E- and B-mode pixel distributions shown as blue lines to nearly the widths of the observed pixels as shown by the filled histograms. This is the most realistic modeling of the observations, and includes all sources of known B-modes including double lensing, source clustering, and E-to-B mode mixing. The edge effects (which can be seen in the bottom panel of Figure <ref>) are excluded from the histograms of Figure <ref> for clarity, though they too contain information about the locations and density of foreground lenses.The flexibility of the lensing simulation is thus capable of encapsulating the known sources of B-modes from lensing and mass mapping, most of which can be attributed to shape noise, source lens clustering, and E-to-B mixing. However there are B-modes in the observed V/R_c/i' maps which have larger amplitudes than can be accounted for by our lensing simulation. These high S/N B-modes, shown as the excess of gray shaded area over the blue line in Figure <ref>, could be due to uncorrected PSF effects. However, these high S/N peaks are consistent between bands with vastly different observing conditions. It therefore seems more likely that they might be accounted for with a more realistic simulation of the lensing potential distribution which we can only approximate. For instance, in the simulation we do not include void lensing, which would increase the number of negative E-modes and likely contribute positive and negative B-modes. However, void lensing was not accounted for in our approximation of N-body simulations and in that case the approximate B-modes distributions were an upper limit (see Figure <ref>). Further investigation of interesting high S/N peaks in B-mode maps is therefore warranted. Additionally, lensing on scales less than ∼ 1 Mpc and scales which span the entire field of view ∼ 100 Mpc are unaccounted for in our finite field reconstruction.In fact, B-modes may be induced by large lenses outside the field which generate E-modes and induce wide-field correlations in background ellipticities.§.§ Shear correlation functions in the Lynx field Statistical correlations are again calculated to compliment our spatial maps. Correlation functions over the field are computed on scales of 0.5<θ<15 arcminutes using both the observed shears and those simulated under multiple conditions shown in Figure <ref>. As in the N-body simulations, this selected field has an overabundance of large scale structure which give large amplitudes to the shear correlation functions as computed on this field. 
The shear correlations ξ_±, E- and B-mode aperture mass dispersion ⟨ M_ap,×^2 ⟩, and tophat shear dispersion ⟨γ^2 ⟩_E,B are calculated under three different conditions: the observed shears, the shears which are simulated using the observed RA/Dec/z positions on the sky and estimated lens positions, and those simulated with uniform source plane at z=1.0 at the observed galaxy number density of 13  arcmin^-2. E-modes for each statistic are the upper curves in each subplot in blue shades; B-modes are shown in shades of red. Error bars on the observed correlation functions are from pure shape noise with an η_rms=0.3, and the simulated cases have no shape noise added. As in the N-body simulations of Section <ref>, there are non-zero B-mode correlations on the scales shown, which ordinarily might be blamed on incomplete modeling of the PSF or gaps in the data. However, these B-modes are of similar amplitude in each band, and the simulated correlation function has no masking when a uniform lens plane is used. This implies that the majority of observed B-modes are not due to a PSF modeling issue or even the clustering of source galaxies as in the mapping case, but rather are intrinsic to these particular decompositions of a realistic shear field into E- and B-modes. On the smallest and largest scales, the finite depth and field of view our observations limit the size of E-modes which are measurable with our data, and this necessarily limits our E- and B-mode simulation which are constrained by the incompleteness of observations. The shear leakage from E- to B-modes on small scales, as discussed in Section <ref>, is one such artifact of observation limitations, and though not astrophysically induced, still represent signal which is encapsulated in our shear model of the foreground lenses. §.§ Null tests and cross-validation As a consistency check on PSF modeling and interpolation, we test for the presence of residual PSF systematics using the star-galaxy cross correlation function which is used in many weak lensing studies. These correlations can be computed using the same shear autocorrelation formalism presented in Section <ref> but with the substitution of PSF ellipticity for one of the two shears. In this cross correlation, we use the uncorrected PSF ellipticity of stars ϵ_uncorr^⋆ which can reveal a correlated leakage of residuals from the PSF modeling into the shapes of galaxies. This test is presented in Figure <ref>, where the grey and black lines show the ellipticities of stars in the co-added images in the R_c filter that are cross-correlated with the ellipticities of the background (z>0.8) sample of galaxies used in the lensing analysis. In that figure we also show the observed galaxy shear auto-correlation as a reference, which greatly exceeds the signal in the star-galaxy cross correlation as measured in this way. Additionally, a type of cross-validation test can be performed using the overlapping HST-ACS observations (described in subsection <ref>) which used a different camera & optical system at a dissimilar orientation and resolution, without atmospheric effects, and processed using a different PSF and shape estimation routine <cit.>. In Figure <ref>, we compare the galaxy shapes measured using the HST-ACS observations to our wide-field lens model evaluated at those galaxy positions using the top-hat filtered shear correlation. 
As can be seen in that figure, both the lens model and HST-ACS filtered correlation functions are in agreement about the scale and amplitude of the E- and B-modes in this subset of the data. Interestingly, though the field of view of the HST-ACS mosaic is narrow, it probes a region of high shear near the Lynx-North cluster. This high-shear region shows an increased E-mode, and a complementary increase in the B-mode, in both the lens model and the HST observations.These cross-correlation and cross-validation tests, in addition to the aforementioned consistency of E- and B-modes between the independent V,R,i' filters, implies (but does not prove) that our lensing analysis is not dominated by PSF misestimation or other observational systematic errors. § DISCUSSION Our shear modeling method has been applied in the analysis of the Buzzard N-body simulations and a field of galaxy clusters in Lynx. We described the method of mass mapping in Section <ref> using the shears from ray-tracing the Buzzard N-body simulations. Section <ref> introduced the method of approximating the ray-traced shear field as a sum of finite foreground cluster lenses, which is tested by comparison of the ray-traced shears to those computed using the known halo catalog and assumed NFW shear profile. Using this approximation, we explain the observation of B-modes in the spatial maps and shear correlations in the N-body simulation sub-field. These apparent B-modes can be further understood using the flexibility of our method which allows for modular variation of the spatial distribution of lenses and galaxies. For instance, changing the positions of lensed galaxies from a clustered distribution to a uniform one at fixed redshift reduces much of the B-mode signature. Similarly, collapsing all lenses to a single redshift also eliminates the B-mode signature of double (or multiple) lensing in cases with higher source densities of galaxies. Because these N-body simulation B-modes cannot be systematic errors resulting fromshape noise, PSF mis-estimation, or other observational systematics, they must either result from the lensing field or the nature of shear analysis. In either case, the generated B-modes represent a measurable signal when galaxies which are realistically distributed amongst, and then lensed by, an inhomogeneous 3-dimensional web of lenses.Using this flexible approximation to the lensing potential we can begin to account for the known sources of lensing & mapping B-modes, using them as a signal unto themselves and as an opportunity for discovery. In Section <ref> we apply our approximate lens modeling technique to deep observations of a field of galaxy clusters with the goal of modeling the observed E- and B-mode maps and correlations. We described our data reduction process, which goes from raw observational data to photometric redshift calibration and shear measurement using the `stack-fit' algorithm on multiple dithered exposures and multiple bands. After aperture mass mapping and correlating these shears, similar E- and B-modes were observed in all three filters as well as in overlapping deep HST observations. These space-based measurements provide an independent confirmation of the observed B-modes using a different shape measurement algorithms, camera & optics, and without atmospheric effects. As in the N-body simulations, most of the non-shape noise B-modes can be attributed to source lens clustering and E-to-B mixing, both of which depend on the distribution and density of foreground lenses. 
This approximate method can also be used to de-lens the effect of wide-field foreground lenses on higher redshift clusters, effectively accounting for external convergence in deeper observations on narrower fields of view.Interestingly, the deep observations also contain B-modes in common to the V/R_c/i' maps which have larger (signal to noise) amplitudes than can be accounted for by our lensing approximation. It therefore seems likely that these modes might be accounted for with a more realistic simulation of the lensing potential. For instance, inclusion of the lensing by voids (underdensities) along the line of sight would increase the number of negative E-modes and likely contribute positive and negative B-modes. Additionally, lensing on scales smaller and larger than our observations allow, those scales less than ∼ 1 Mpc and greater than ∼ 100 Mpc, are also unaccounted for in our finite field reconstruction. B-modes could therefore also be observationally induced by large lenses outside the field which generate E-modes and induce wide-field intrinsic shears in source galaxies. Therefore, the variance in our estimated B-mode distributions must be taken as lower limits. However, lensing by voids and larger-scale structure was not included in our approximation of N-body ray-traced shears and yet all high S/N peaks were accounted for in those simulations. This discrepancy between N-body simulations and observations warrants further investigation, and follow-up studies are underway using the wider fields available in modern weak lensing surveys and other N-body simulations. § SUMMARY In this paper we have explored mass mapping and shear correlations using measurements of shears and redshifts from N-body simulations and multi-band observations similar to what will be available with the LSST 10 year dataset. Data is processed similarly in both observations and simulations, concluding with aperture mass mapping and shear correlation function measurement. We developed and tested a model for representing the shear as the successive lensings in a piece of the cosmic web by associating E-mode overdensities with galaxy clusters along the line of sight. This simple approximation to N-body ray tracing is surprisingly useful at capturing the shear field in the weak lensing limit, even when the mass and 3-d positions of clusters can only be estimated from observational data. We compare this model to observations through both aperture mass maps and shear correlations and demonstrate that the decomposition of the observed shear field into gradient (E-modes) and curl (B-modes) yields general agreement with the model, demonstrating how the patterns of pure tangential shear produced by realistic distributions of lenses & galaxies also induce a measurable B-mode signal. Contrary to a systematic error, these observational B-modes contain information about the lenses in the field. Further application of this method of approximation on real data therefore presents an opportunity to use B-modes as an aid for discovery and a signal unto itself, especially using wide field weak lensing surveys such as CFHTLenS, DES, KiDS, Euclid, and the LSST.§ ACKNOWLEDGEMENTSWe thank Michael Schneider and Sam Schmidt for many helpful discussions, as well as the anonymous referee for valuable feedback which helped clarify the text. Financial support from DOE grant DE-SC0009999 and Heising-Simons Foundation grant 2015-106 are gratefully acknowledged. 
We thank Risa Wechsler, Joe DeRose, and the Buzzard simulation team for their N-body lensing simulation catalogs. M.J.J. acknowledges support for the current research from the National Research Foundation of Korea under the programs 2017R1A2B2004644 and 2017R1A4A1015178.
http://arxiv.org/abs/1709.09721v2
{ "authors": [ "Andrew K. Bradshaw", "M. James Jee", "J. Anthony Tyson" ], "categories": [ "astro-ph.CO", "astro-ph.IM" ], "primary_category": "astro-ph.CO", "published": "20170927201359", "title": "Deep lensing with a twist: E and B modes in a field with multiple lenses" }
http://arxiv.org/abs/1709.09204v2
{ "authors": [ "Michael Gutperle", "Justin Kaidi", "Himanshu Raj" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170926181517", "title": "Janus solutions in six-dimensional gauged supergravity" }
Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, IndiaIndian Institute of Science Education and Research Bhopal, Bhopal, 462066, IndiaIndian Institute of Science Education and Research Bhopal, Bhopal, 462066, IndiaISIS facility, STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Oxfordshire, OX11 0QX, [email protected] Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India The noncentrosymmetric superconductor TaOs has been characterized using x-ray diffraction, resistivity, magnetization, and specific heat measurements. Magnetization and specific heat measurements show a bulk superconducting transition at 2.07 K. These measurements suggest that TaOs is a weakly coupled type-II superconductor. The electronic specific heat in the superconducting state can be explained by the single-gap BCS model, suggesting s-wave superconductivity in TaOs. Superconducting properties of the noncentrosymmetric superconductor TaOs R. P. Singh December 30, 2023 =========================================================================
§ INTRODUCTION
Noncentrosymmetric (NCS) superconductors have been studied extensively in the past few years due to their unconventional superconducting properties, which cannot be explained within the framework of BCS theory <cit.>. In NCS superconductors, the lack of an inversion centre in the crystal structure induces an antisymmetric spin-orbit coupling (ASOC), which breaks the parity symmetry. As a result, the superconducting ground state may exhibit mixing of spin-singlet and spin-triplet components, if the pairing gap is much smaller than the strength of the spin-orbit coupling <cit.>. Theoretical predictions suggested that the ratio of spin-singlet to spin-triplet pairing states in a NCS superconductor depends on the strength of the ASOC. This prediction was supported by the experimental results of Li_2(Pd,Pt)_3B <cit.>, where the pairing state changed from spin-singlet to spin-triplet when Pd was replaced with Pt. Concurrently, several NCS superconductors containing heavy transition elements were studied, where admixed pairing states were highly anticipated due to strong spin-orbit coupling. Yet most of them showed dominant s-wave superconductivity <cit.>. In contrast, compounds with low ASOC showed unconventional superconductivity <cit.>, which certainly questions the role of ASOC in the superconducting state of noncentrosymmetric superconductors. Recent work on NCS superconductors has predominantly focused on compounds with the α-Mn structure after the discovery of time-reversal symmetry (TRS) breaking in Re_6Zr <cit.>. In this system, the Re atom occupies all the noncentrosymmetric sites, and it was therefore considered a worthy candidate to study the effects of the lack of inversion symmetry on the superconducting state. Instigated by the above finding, several other Re-based compounds were systematically investigated, where the transition metal element combined with Re was replaced by other, heavier elements to tune the strength of the ASOC, e.g. Re_24Ti_5 <cit.>, Nb_0.18Re_0.82 <cit.>, and Re_6Hf <cit.>. Most of these compounds exhibited single-band superconductivity except Nb_0.18Re_0.82, which showed double-gap superconductivity <cit.>. 
Hence, no clear conclusion can be drawn about the role of the ASOC in determining the pairing state of noncentrosymmetric superconductors. Another compound with the α-Mn structure which we studied recently is Nb_0.5Os_0.5 <cit.>, which shows s-wave superconductivity when examined by bulk and μSR measurements. In order to address the question regarding the effects of the ASOC, we have replaced Nb with Ta; as the Ta atom is heavier than the Nb atom, this substitution should enhance the strength of the spin-orbit coupling which, in turn, can increase the extent of parity mixing in the superconducting ground state. In this work, we report the detailed characterization of the noncentrosymmetric superconductor TaOs exhibiting bulk superconductivity at T_c = 2.07 K. The superconducting properties were determined by magnetic susceptibility, electrical resistivity, and specific heat measurements. The results indicate single-gap s-wave superconductivity with a negligible effect of the enhanced ASOC. § EXPERIMENTAL DETAILS The sample of TaOs was prepared by melting stoichiometric amounts of Ta (99.95%, Alfa Aesar) and Os (99.95%, Alfa Aesar) in an arc furnace. The ingot was flipped and remelted several times. The observed weight loss during the melting was negligible. Then, the ingot was annealed in a vacuum-sealed quartz tube at 900 ^∘C for 1 week, followed by cooling to room temperature in 24 hours. The powder x-ray diffraction (XRD) spectrum was collected on a X'pert PANalytical diffractometer. The magnetization measurements were performed using a superconducting quantum interference device (SQUID, Quantum Design Inc.), and the electrical resistivity and specific heat measurements were done in a physical property measurement system (PPMS, Quantum Design Inc.). § RESULTS AND DISCUSSION The x-ray diffraction pattern of TaOs is shown in Fig. 1. No impurities were observed in the diffraction pattern. Rietveld refinement confirms that the sample crystallizes in the cubic, noncentrosymmetric α-Mn structure (space group I 4̅3m, No. 217) with the lattice parameter a = b = c = 9.769 ± 0.002 Å, which is in good agreement with the published data <cit.>. Figure 2(a) shows the electrical resistivity data as a function of temperature in the range 1.8 K ≤ T ≤ 300 K in zero applied magnetic field. The measurement shows that the sample has poor metallic behavior. This is similar to other α-Mn structure compounds <cit.>, where similar behaviour was attributed to electron scattering due to disorder. The resistivity drops to zero at T_c ≃ 2.06 K as shown in the inset of Fig. 2(a). The magnetization measurement, performed in an applied field of H = 1 mT, confirms bulk superconductivity with the onset of a strong diamagnetic signal around T_c^onset = 2.07 K, as displayed in Fig. 2(b). The superconducting volume fraction slightly exceeds 100%, which may be due to demagnetization effects. The M(H) curve obtained for a temperature above T_c (not shown here) was almost linear in H, and a linear fit yields the intrinsic susceptibility χ = 8.81 × 10^-4 cm^3/mol. The measured susceptibility χ results from the susceptibilities of the core and conduction electrons and is given by χ = χ_core + χ_vv + χ_L + χ_P, where χ_core is the diamagnetic core susceptibility, χ_vv the paramagnetic Van Vleck susceptibility, χ_L the Landau diamagnetic susceptibility, and χ_P the Pauli spin susceptibility. Here the χ_core contribution is from the core electrons, whereas χ_L and χ_P are due to the conduction electrons. 
Using the diamagnetic susceptibilities of the constituent elements <cit.> gives χ_core = -6.05× 10^-5 cm^3/mol. The Pauli spin susceptibility is given by χ_P = (g^2/4)μ^2_B D(E_F) <cit.>, where g is the spectroscopic splitting factor of the conduction carriers, μ_B the Bohr magneton, and D(E_F) the band-structure density of states at the Fermi energy E_F. Using g = 2 and D(E_F) = 1.27 states/eV f.u. (estimated from specific heat measurements), we get χ_P = 4.11 × 10^-5 cm^3/mol. Taking the band-structure effective mass m^*_band = m_e, where m_e is the mass of a free electron, we obtained χ_L = -1.37 × 10^-5 cm^3/mol from the formula χ_L = -(1/3)(m_e/m^*_band)^2χ_P <cit.>. Using the above estimated values, χ_vv is derived as 9.14 × 10^-5 cm^3/mol. To determine the lower critical field H_c1(0), magnetization curves M(H) in low applied magnetic fields were measured at various temperatures from 1.8 K to 2 K, as shown in the inset of Fig. 2(c). The lower critical field H_c1 is defined as the field at which the magnetization deviates from linearity. The main panel of Fig. 2(c) shows the temperature variation of H_c1(T), which can be described by the formula H_c1(T) = H_c1(0)[1-(T/T_c)^2]. Fitting this expression to the experimental data yields H_c1(0) = 2.52 ± 0.02 mT. The temperature dependence of the upper critical field H_c2(T) was determined by measuring the shift of T_c^mid in different fixed applied magnetic fields in resistivity measurements, as shown in Fig. 2(d). It is evident from the graph that the data vary almost linearly with temperature. The data can be fitted using the relation H_c2(T) = H_c2(0)(1-t^2)/(1+t^2), where t = T/T_c. Fitting the above equation to the H_c2(T) data yields H_c2(0) ≃ 3.44 ± 0.02 T. H_c2(0) can be used to estimate the Ginzburg-Landau coherence length ξ_GL from the relation <cit.> H_c2(0) = Φ_0/(2πξ_GL^2), where Φ_0 is the magnetic flux quantum (h/2e). For H_c2(0) ≃ 3.44 ± 0.02 T, we obtained ξ_GL(0) = 97.9 ± 0.3 Å. Within the α-model the Pauli limiting field is given by <cit.> H_c2^p(0) = 1.86 T_c(α/α_BCS). Using α = 1.71 (from the specific heat measurement), this yields H_c2^p(0) = 3.73 T. The upper critical field H_c2(0) and the Pauli limiting field are close. Detailed investigations at lower temperatures and on single crystals are required to determine H_c2(0) accurately and to confirm any spin-triplet contribution to the superconducting ground state. The Ginzburg-Landau penetration depth λ_GL(0) can be obtained from H_c1(0) and ξ_GL(0) using the relation <cit.> H_c1(0) = Φ_0/(4πλ_GL^2(0)) (ln(λ_GL(0)/ξ_GL(0))+0.12). Using H_c1(0) = 2.52 ± 0.02 mT and ξ_GL(0) = 97.9 ± 0.3 Å, we obtained λ_GL(0) ≃ 5168 ± 3 Å. The Ginzburg-Landau parameter is given by the relation <cit.> κ_GL = λ_GL(0)/ξ_GL(0). For ξ_GL(0) = 97.9 ± 0.3 Å and λ_GL(0) = 5168 ± 3 Å, we calculated κ_GL ≃ 52.78 ± 0.13. This indicates type-II superconductivity in TaOs. The thermodynamic critical field H_c can be estimated from κ_GL(0) and H_c2(0) using the relation H_c = H_c2/(√(2)κ_GL), which for H_c2 = 3.44 ± 0.02 T and κ_GL = 52.78 ± 0.13 yields H_c = 46.09 ± 0.15 mT. The low temperature specific heat C(T) was measured in zero applied field. The specific heat data in Fig. 3 confirm bulk superconductivity in TaOs. 
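As an aside, the chain of Ginzburg-Landau estimates above can be reproduced with a short numerical sketch. This is our own illustration in Python, not part of the original analysis; the physical constants, the bracketing interval for the root search and the variable names are our assumptions.

```python
import numpy as np
from scipy.optimize import brentq

PHI0 = 2.067833848e-15   # magnetic flux quantum h/2e in T m^2

mu0_Hc1 = 2.52e-3        # lower critical field mu_0*H_c1(0) in T
mu0_Hc2 = 3.44           # upper critical field mu_0*H_c2(0) in T

# Ginzburg-Landau coherence length from H_c2(0) = Phi_0 / (2 pi xi^2)
xi = np.sqrt(PHI0 / (2 * np.pi * mu0_Hc2))

# Solve H_c1(0) = Phi_0/(4 pi lambda^2) * (ln(lambda/xi) + 0.12) for lambda
def hc1_residual(lam):
    return PHI0 / (4 * np.pi * lam**2) * (np.log(lam / xi) + 0.12) - mu0_Hc1

lam = brentq(hc1_residual, 1e-8, 1e-5)    # bracket between 10 nm and 10 um

kappa = lam / xi                          # Ginzburg-Landau parameter
mu0_Hc = mu0_Hc2 / (np.sqrt(2) * kappa)   # thermodynamic critical field

print(f"xi_GL(0)   = {xi * 1e10:.1f} Angstrom")    # ~98 Angstrom
print(f"lambda(0)  = {lam * 1e10:.0f} Angstrom")   # ~5.2e3 Angstrom
print(f"kappa_GL   = {kappa:.1f}")                 # ~53, i.e. type-II
print(f"mu0_Hc(0)  = {mu0_Hc * 1e3:.1f} mT")       # ~46 mT
```

Running this sketch with the field values quoted above reproduces the coherence length, penetration depth, Ginzburg-Landau parameter and thermodynamic critical field reported in the text.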
The normal-state low-temperature specific heat data above T_c were fitted with the relation C/T = γ_n+β_3T^2+β_5T^4, where γ_n is the normal-state Sommerfeld coefficient related to the electronic contribution to the specific heat, whereas β_3 and β_5 are the coefficients related to the lattice contribution to the specific heat. The solid red line in the inset of Fig. 3 shows the best fit to the data, which yields γ_n = 3.0 ± 0.01 mJ mol^-1 K^-2, β_3 = 0.052 ± 0.002 mJ mol^-1 K^-4, and β_5 = 0.22 ± 0.04 μJ mol^-1 K^-6. The Debye temperature is obtained from the coefficient β_3, which gives θ_D = 332 K. The density of states at the Fermi level, D_C(E_F), was estimated to be 1.27 states/eV f.u. using the relation γ_n = (π^2k_B^2/3)D_C(E_F). The electron-phonon coupling constant, which gives the strength of the attractive electron-phonon interaction, can be calculated from the McMillan equation <cit.>, λ_e-ph = [1.04+μ^*ln(θ_D/1.45T_c)]/[(1-0.62μ^*)ln(θ_D/1.45T_c)-1.04], where μ^* = 0.13 is the Coulomb repulsion parameter. Using T_c = 2.07 K and θ_D = 332 K for TaOs, we obtained λ_e-ph ≃ 0.50. This value suggests that TaOs is a weakly coupled superconductor. Using the electron-phonon coupling constant we can calculate the effective mass m^* of the quasiparticles, which contains the influence of many-body electron-phonon interactions; for λ_e-ph = 0.50 and assuming m^*_band = m_e, this gives m^* = 1.50 m_e <cit.>. The electronic contribution to the specific heat can be calculated by subtracting the phononic contribution from the total specific heat. The normalized specific heat jump Δ C_el/γ_nT_c is 1.41 for γ_n = 3.0 mJ mol^-1 K^-2. The value obtained for Δ C_el/γ_nT_c is close to the value for a weakly coupled BCS-type superconductor ( = 1.43). The temperature dependence of the normalized entropy S in the superconducting state for a single-gap BCS superconductor is given by S/γ_nT_c = -(6/π^2)(Δ(0)/k_BT_c)∫_0^∞[ fln(f)+(1-f)ln(1-f)]dy, where f(ξ) = [exp(E(ξ)/k_BT)+1]^-1 is the Fermi function, E(ξ) = √(ξ^2+Δ^2(t)), ξ is the energy of normal electrons measured relative to the Fermi energy, y = ξ/Δ(0), t = T/T_c, and Δ(t) = tanh[1.82(1.018((1/t)-1))^0.51] <cit.> is the BCS approximation for the temperature dependence of the energy gap. The normalized electronic specific heat is related to the normalized entropy by C_el/γ_nT_c = t d(S/γ_nT_c)/dt, where C_el below T_c is described by Eq. (11), whereas above T_c it is equal to γ_nT. The specific heat data in Fig. 3 are well described by this model for a fitting parameter α = Δ(0)/k_BT_c = 1.71 ± 0.02, which is close to the BCS value α_BCS = 1.764 in the weak coupling limit, suggesting that TaOs has dominant s-wave superconductivity. Recently, unconventional vortex dynamics, very distinct from that of classical and high-T_c superconductors, has been observed in some noncentrosymmetric superconductors <cit.>. Therefore, it is useful to quantify the stability of the vortex system against thermal fluctuations, which is characterized by the Ginzburg number G_i, the ratio of the thermal energy k_BT_c to the condensation energy associated with the coherence volume <cit.>, G_i = (1/2)[k_Bμ_0τ T_c/(4πξ^3(0)H_c^2(0))]^2. Here τ is the anisotropy parameter, which is 1 for cubic TaOs. For ξ(0) = 97.9 Å, H_c(0) = 46.09 mT and T_c= 2.07 K, we obtain G_i = 1.02 × 10^-6. The value of G_i is closer to that of low-T_c superconductors (G_i ≃ 10^-8), suggesting that thermal fluctuations do not play an important role in vortex unpinning in our system. Uemura et al. 
showed that different classes of superconductors can be conveniently distinguished based on the ratio of the transition temperature (T_c) to the Fermi temperature (T_F) <cit.>. It was shown that unconventional, exotic superconductors fall in the range 0.01 ≤ T_c/T_F ≤ 0.1. For a 3D system the Fermi temperature T_F is given by the relation k_BT_F = ħ^2(3π^2 n)^2/3/(2m^*), where n is the quasiparticle number density per unit volume. Using the Sommerfeld coefficient for TaOs, we can calculate the quasiparticle number density per unit volume and the mean free path <cit.>: γ_n = (π/3)^2/3 k_B^2 m^* V_f.u. n^1/3/(ħ^2 N_A), where k_B is the Boltzmann constant, N_A is the Avogadro constant, V_f.u. is the volume of a formula unit and m^* is the effective mass of the quasiparticles. The electronic mean free path l is related to the residual resistivity ρ_0 by the equation l = 3π^2ħ^3/(e^2ρ_0 m^*2 v_F^2), where the Fermi velocity v_F is related to the effective mass and the carrier density by n = (1/3π^2)(m^*v_F/ħ)^3. In the dirty limit, the penetration depth λ_GL(0) can be estimated by the relation λ_GL(0) = λ_L(1+ξ_0/l)^1/2, where ξ_0 is the BCS coherence length. Here λ_L is the London penetration depth, given by λ_L = (m^*/(μ_0 n e^2))^1/2. The Ginzburg-Landau coherence length is also affected in the dirty limit. The relationship between the BCS coherence length ξ_0 and the Ginzburg-Landau coherence length ξ_GL(0) at T = 0 is ξ_GL(0)/ξ_0 = (π/(2√(3)))(1+ξ_0/l)^-1/2. Equations (14)-(19) form a system of four equations which can be used to estimate the parameters m^*, n, l, and ξ_0, as done in Ref. <cit.>. The system of equations was solved simultaneously using the values γ_n = 3.0 mJ mol^-1K^-2, ξ_GL(0) = 97.9 Å, and ρ_0 = 150.01 μΩ-cm. The estimated values are tabulated in Table 1. It is clear that ξ_0 > l, indicating that TaOs is in the dirty limit. The estimated value of the mean free path l is of the same order as observed in other α-Mn structure noncentrosymmetric superconductors, where similarly high residual resistivity and dirty-limit superconductivity were observed <cit.>. Using the estimated value of n in Eq. (13) we get T_F = 897 K, giving T_c/T_F = 0.0023, which places TaOs away from the unconventional superconductors, as shown by a solid red square in Fig. 4, where blue solid lines represent the band of unconventional superconductors. § CONCLUSION In summary, TaOs was prepared by the standard arc melting technique. The noncentrosymmetric α-Mn cubic structure was confirmed by XRD analysis. A comprehensive study of the superconducting properties of TaOs was done using resistivity, magnetic susceptibility, and heat capacity measurements. These measurements suggest type-II superconductivity in TaOs with a superconducting transition temperature T_c = 2.07 K. The electronic specific heat in the superconducting state is well described by the single-gap BCS expression, suggesting s-wave superconductivity. The closeness of the upper critical field H_c2(0) to the Pauli limiting field may indicate the possibility of mixed pairing in the superconducting ground state of noncentrosymmetric superconductors. In order to confirm this, local probe measurements, e.g. muon spin rotation/relaxation, are vital. § ACKNOWLEDGMENTS R. P. S. acknowledges the Science and Engineering Research Board, Government of India, for the Young Scientist Grant YSS/2015/001799 and DST FIST. References mdf M. Sigrist, D. F. Agterberg, P. A. Frigeri, N. Hayashi, R. P. Kaur, A. Koga, I. Milat, K. Wakabayashi, and Y. Yanase, J. Magn. Magn. Mater. 310, 536 (2007).EB E. Bauer and M. 
Sigrist,Noncentrosymmetric Superconductor:Introduction and Overview (Heidelberg, Springer-Verlag 2012).rashba L. P. Gor'kov, E. I. Rashba, Phys. Rev. Lett. 87, 037004 (2001).sky S. K. Yip, Phys. Rev. B 65, 144508 (2002).kv K. V. Samokhin, E. S. Zijlstra, and S. K. Bose, Phys. Rev. B 69, 094514 (2004).ia I. A. Sergienko and S. H. Curnoe, Phys. Rev. B 70, 214510 (2004).pa P. A. Frigeri, D. F. Agterberg, A. Koga, and M. Sigrist, Phys. Rev. Lett. 92, 097001 (2004).fujimoto1 S. Fujimoto, Phys. Rev. B 72, 024515 (2005).fujimoto2 S. Fujimoto, J. Phys. Soc. Jpn. 75, 083704 (2006).fujimoto3 S. Fujimoto, J. Phys. Soc. Jpn. 76, 051008 (2007).LPt1 H. Q. Yuan, D. F. Agterberg, N. Hayashi, P. Badica, D.Vandervelde, K. Togano, M. Sigrist, and M. B. Salamon, Phys. Rev. Lett. 97, 017006 (2006).LPt2 K. Togano, P. Badica, Y.Nakamori, S. Orimo, H.Takeya, and K. Hirata, Phys. Rev. Lett. 93, 247004 (2004).LPt3 P. Badica, T. Kondo, and K. Togano, J. Phys. Soc. Jpn. 74, 1014 (2005).BPS E. Bauer, R. T. Khan, H. Michor, E. Royanian, A. Grytsiv, N. Melnychenko-Koblyuk, P. Rogl, D. Reith, R. Podloucky, E. W. Scheidt, W. Wolf, and M Marsman, Phys. Rev. B 80, 064504 (2009).IG K. Wakui, S. Akutagawa, N. Kase, K. Kawashima, T. Muranaka, Y. Iwahori, J. ABE, and J. Akimitsu, J. Phys. Soc. Jpn. 78, 034710 (2009).LPS1 I. Kawasaki, I. Watanabe, H. Amitsuka, K. Kunimori, H. Tanida, and Y. O̅nuki, J. Phys. Soc. Jpn. 82, 084713 (2013).LPS2 R. L. Ribeiro, I. Bonalde,Y. Haga, R. Settai, and Y. O̅nuki, J. Phys. Soc. Jpn. 78, 115002 (2009).rw1 P. K. Biswas, M. R. Lees, A. D. Hillier, R. I. Smith, W. G. Marshall, and D. McK. Paul, Phys. Rev. B 84, 184529 (2011).rw2 P. K. Biswas, A. D. Hillier, M. R. Lees, and D. McK. Paul, Phys. Rev. B 85, 134505 (2012).YC J. Chen, M. B. Salamon, S. Akutagawa, J. Akimitsu, J. Singleton, J. L. Zhang, L. Jiao, and H. Q. Yuan, Phys. Rev. B 83, 144529 (2011).LC S. Kuroiwa, Y. Saura, J. Akimitsu, M. Hiyaishi, M. Miyazaki, K. H. Satoh, S. Takeshita, and R. Kadono, Phys. Rev. Lett. 100, 097002 (2008).lnc1 A. D. Hillier, J. Quintanilla, and R. Cywinski, Phys. Rev. Lett. 102, 117007 (2009).lnc2 19J. Chen, L. Jiao, J. L. Zhang, Y. Chen, L. Yang, M. Nicklas, F. Steglich, and H. Q. Yuan, New J. Phys. 15, 053005 (2013).rz1 R. P. Singh, A. D. Hillier, B. Mazidian, J. Quintanilla, J. F. Annett, D. M. Paul, G. Balakrishnan, and M. R. Lees, Phys. Rev. Lett. 112, 107002 (2014).RT1 C. S. Lue, H. F. Liu, C. N. Kuo, P. S. Shih, J-Y Lin, Y. K. Kuo, M. W. Chu, T-L Hung, and Y. Y. Chen, Superconduct. Sci. and Tech. 26, 055011 (2013).nr1 A. B. Karki, Y. M. Xiong, N. Haldolaarachchige, S. Stadler, I.Vekhter, P. W. Adams, D. P. Young, W. A. Phelan, and J. Y. Chan, Phys. Rev. B 83, 144525 (2011).nr2 C. Cirillo,R. Fittipaldi,M. Smidman, G. Carapella,C.Attanasio, A. Vecchione, R. P. Singh, M. R. Lees, G. Balakrishnan, and M. Cuoco, Phys. Rev. B 91, 134508 (2015).rf Bin Chen, Yang Guo, Hangdong Wang, Qiping Su, Qianhui Mao, Jianhua Du, Yuxing Zhou, Jinhu Yang, and Minghu Fang, Phys. Rev. B 94, 024518 (2016).rhf D. Singh, A. D. Hillier, A. Thamizhavel, and R. P. Singh, Phys. Rev. B 94, 054515 (2016).NBD. Singh, J. A. T. Barker, A. Thamizhavel, A. D. Hillier, D. McK. Paul, R. P. Singh arXiv:1705.00129v1 (2017).PS P.S.Rudman, J. Less-Common Metals, 9,(77-79), (1965).core L. B. Mendelsohn, F. Biggs, and J. B. Mann, Phys. Rev. A 2, 1130 (1970).ad N. Ashcroft and N. Mermin, Solid State Physics (Saunders College, Philadelphia, 1976).sr S. R. Elliott, The Physics and Chemistry of Solids (Wiley, Chichester, 1998).mtin M. 
Tinkham, Introduction to Superconductivity, 2nd ed. (McGraw-Hill, New York, 1996).dc D. C. Johnston, Supercond. Sci. Technol. 26, 115011 (2013).WL W. L. McMillan, Phys. Rev. 167, 331 (1968).GG G.Grimvall, Phys. Scr. 14(1-2), 63 (1976).BM B. Mühlschlegel, Z. Phys. 155, 313 (1959).MS M. Sigrist and D. F. Agterberg, Prog. Theor. Phys. 102, 965 (1999).ED E. Dumont and A. C. Mota, Phys. Rev. B 65, 144519 (2002). CF1 C. F. Miclea, A. C. Mota, M. Sigrist, F. Steglich, T. A. Sayles, B. J. Taylor, C. A. McElroy, and M. B. Maple, Phys. Rev. B 80,132502 (2009).CF2 C. F. Miclea, A. C. Mota, M. Nicklas, R. Cardoso, F. Steglich, M. Sigrist, A. Prokofiev, and E. Bauer, Phys. Rev. B 81, 014527 (2010).vortex G. Blatter, M. V. Feigel'man, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur Rev. Mod. Phys. 66, 1125 (1994).YJU Y. J. Uemura et al., Phys. Rev. Lett. 62, 2317 (1989).KKC K. Hashimoto, K. Cho, T. Shibauchi, S. Kasahara, Y. Mizukami, R. Katsumata, Y. Tsuruhara, T. Terashima, H. Ikeda, M. A. Tanatar, H. Kitano, N. Salovich, R. W. Giannetta, P. Walmsley, A. Carrington, R. Prozorov, and Y. Matsuda, Science 336, 1554 (2012).RKH R. Khasanov, H. Luetkens, A. Amato, H.-H. Klauss, Z.-A. Ren, J.Yang,W. Lu, and Z.-X. Zhao,Phys. Rev. B 78, 092506 (2008).ck C.Kittel, Introduction to Solid State Physics 8th edn. (Wiley, New York, 2005).DAM D. A. Mayoh, J. A. T. Barker, R. P. Singh, G. Balakrishnan, D. McK. Paul, and M. R. Lees Phys. Rev. B 96, 064521 (2017).
http://arxiv.org/abs/1709.09591v1
{ "authors": [ "D. Singh", "Sajilesh K. P.", "S. Marik", "A. D. Hillier", "R. P. Singh" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170927155104", "title": "Superconducting properties of the noncentrosymmetric superconductor TaOs" }
DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting Xingyi Cheng (This work was done before leaving Baidu. Email: [email protected].), Ruiqing Zhang, Jie Zhou, Wei Xu Baidu Research - Institute of Deep Learning [email protected] {zhangruiqing01,zhoujie01,wei.xu}@baidu.com ================================================================================================ Predicting traffic conditions has recently been explored as a way to relieve traffic congestion. Several pioneering approaches have been proposed based on traffic observations of the target location as well as its adjacent regions, but they obtain somewhat limited accuracy because they make little use of the road topology. To address the effect attenuation problem, we suggest taking into account the traffic of surrounding locations (wider than the adjacent range). We propose an end-to-end framework called DeepTransport, in which Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain spatial-temporal traffic information within a transport network topology. In addition, an attention mechanism is introduced to align spatial and temporal information. Moreover, we constructed and released a real-world large traffic condition dataset with a 5-minute resolution. Our experiments on this dataset demonstrate that our method captures the complex relationships in the temporal and spatial domains. It significantly outperforms traditional statistical methods and a state-of-the-art deep learning method. § INTRODUCTION With the development of location acquisition and wireless devices, a vast amount of data with spatial transport networks and timestamps can be collected by mobile phone map apps. The majority of map apps can tell users real-time traffic conditions, as shown in Figure <ref>. However, current traffic conditions alone are not enough for effective route planning; a traffic system that predicts future road conditions would be more valuable. In the past, there have been mainly two approaches to traffic prediction: time-series analysis based on classical statistics and data-driven methods based on machine learning. Most former methods are univariate; they predict the traffic of a place at a certain time. The fundamental work was the Auto Regressive Integrated Moving Average (ARIMA) model <cit.> and its variations <cit.>. Motivated by the fact <cit.> that traffic evolution is a temporal-spatial phenomenon, multivariate methods with both temporal and spatial features were proposed.  <cit.> developed a model that feeds on data from upstream detectors to improve the predictions of downstream locations. However, such methods require many hand-specified statistics. On the other hand, data-driven methods <cit.> fit a single model from vector-valued observations including historical scalar measurements with trend, seasonal, cyclical, and calendar variations. For instance,  <cit.> expressed traffic patterns by mapping road attributes to a latent space. However, the linear model used there is limited in its ability to extract effective features. Neural networks and deep learning have been demonstrated as a unified learning framework for feature extraction and data modeling. Since their application to this topic, significant progress has been made in related work. 
Firstly, both temporal and spatial dependencies between observations in time and space are complex and can be strongly nonlinear. While the statistics frequently fail when dealing with nonlinearity, neural networks are powerful to capture very complex relations <cit.>. Secondly, neural networks can be trained with raw data in an end-to-end manner. Apparently, hand-crafted engineered features that extract all information from data spread in time and space are laborious. Data-driven based neural networks extract features without the need for statistical features. e.g., Mean or variance of all adjacent locations of the current location.The advantage of neural networks for traffic prediction has long been discovered by researchers. Some early work <cit.> simply put observations into the input layer, or take sequential features into consideration  <cit.> to capture temporal patterns in time-series. Until the last few years, some works of deep learning were applied. For instance, Deep Belief Networks (DBN) <cit.> and Stack Autoencoders (SAEs) <cit.>. However, input data in these works are directly concatenated from different locations, which ignored the spatial relationship. In general, the existing methods are either concerned with the time series or just a little use of the spatial information. Depending on traffic conditions of a “narrow” spatial range will undoubtedly degrade prediction accuracy. To achieve a better understanding of spatial information, we propose to solve this problem by taking the intricate topological graph as a key feature in traffic condition forecasting, especially for long prediction horizons. To any target location as the center of radiation, surrounding locations with the same order form a “width” region, and regions with different order constitute a “depth” sequence. We propose a double sequential deep learning model to explore the traffic condition pattern. This model adopts a combination of convolutional neural networks (CNN) <cit.> and recurrent networks with long short-term memory (LSTM) units <cit.> to deal with spatial dependencies. CNN is responsible for maintaining the “width” structure, while LSTM for the “depth” structure. To depict the complicated spatial dependency, we utilize the attention mechanism to demonstrate the relationships between time and space. The main contribution of the paper is summarized as follows: * We introduce a novel deep architecture to enable temporal and dynamical spatial modeling for traffic condition forecasting. * We propose the necessity of aligning spatial and temporal information and introduce attention mechanism into the model to quantify their relationship. The obtained attention weight is helpful for daily traveling and path planning. * Experiment results demonstrate that the proposed model significantly outperforms existing methods based on deep learning and time series forecasting methods. * We also release a real large (millions) traffic dataset with topological networks and temporal traffic conditions [https://github.com/cxysteven/MapBJ] for ASC Student Supercomputer Challenge 2017 (ASC17), which was developed on PaddlePaddle platform [https://github.com/PaddlePaddle/Paddle].§ PRELIMINARYIn this section, we briefly revisit the traffic prediction problem and introduce notations in this work.§.§ Common Notations and Definition A traffic network can be represented in a graph in two ways. 
Either monitoring the traffic flow of crossings, taking the crossing as a node and road as an edge of the graph, or conversely, monitoring the condition of roads, take roads as nodes and crossings as connecting edges. The latter annotation is adopted in our work. Taking figure <ref> as an example, each colored node corresponds to a stretch of road in a map app.We consider a graph consisting of weighted vertices and directed edges. Denote the graph as G = ⟨ V, E ⟩. V is the set of vertices and E ⊆{(u, v) | u ∈ V, v ∈ V} is the set of edges, where (u, v) is an ordered pair. A location(vertex) v at any time point t has five traffic condition states c(v, t) ∈{0, 1, 2, 3, 4}, expressing not-released, fluency, slow, congestion, extreme congestion respectively. Figure <ref> presents an example of road traffic at three-time points in an area.Observations: Each vertex in the graph is associated with a feature vector, which consists of two parts, time-varying O and time-invariant variables F. Time-varying variables that characterize the traffic network dynamically are traffic flow observations aggregated by a 5-minute interval. Time-invariant variables are static features as natural properties which do not change with time s, such as the number of input and output degrees of a road, its length, limit speed, and so forth.In particular, the time-varying and time-invariant variables are denoted as:O_v, t = [ c(v, t); c(v, t-1); ⋮; c(v, t-p) ] F_v = [ f_v, 1; f_v, 2;⋮; f_v, k ]where c(v, t) is traffic condition of vertex v at time t, p isthe length of historical measurement. f_v,k are time-invariant features.Order Slot: In a path of the directed graph, the number of edges required to take from one vertex to another is called order. Vertices of the same order constitute an order slot. Directly linked vertices are termed first-order neighbors. Second-order spatial neighbors of a vertex are the first-order neighbors of its first-order neighbors and so forth. For any vertex in our directed graph, we define the incoming traffic flow as its upstream flow and the outflow as its downstream flow. Take figure <ref> as an example, L_4 is the target location to be predict. L_3 is the first-order downstream vertex of L_4. L_1, L_2 is the first order downstream set of L_3 and they constitute the second order slot of L_4. Each vertex in the traffic flow that goes in one direction is affected by its upstream flow and downstream flow. The first and second order slots of L_4 is shown in Figure <ref>.Introducing the dimension of time series, any location L_v,t is composed of two vectors, O_v,t and F_v. Any order slot consists of some locations: L_v, t = [ O_v, t;F_v ] X^j_v, t = [ L^T_u_1, t; L^T_u_2, t;⋮; L^T_u_k, t ]where location index u_· is one of the jth order neighbors of v.Perceptive Radius: The maximum ordered number controls the perceptive scope of the target location. It is an important hyperparameter describing spatial information, we call it perceptive radius and denote it as r. Problem Definition: According to the above notation, we define the problem as follows: Predict a sequence of traffic flow L_v, t+h for prediction horizon h given the historical observations of L_v', t', where v' ∈ neighbor(v, r), t' ∈{t-p,⋯, t}, r ∈{0,⋯,R} is perceptive radius and p is the length of historical measurement. § MODELAs shown in Figure <ref>, our model consists of four parts: upstream flow observation(left), target location module(middle), downstream flow observation(right), and training cost module(top). 
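As an illustration of the order-slot notion introduced in the preliminaries, the following minimal sketch (our own; the helper name order_slots is hypothetical, and the toy graph mirrors the example with locations L_1 to L_6 above) enumerates the r-th order upstream and downstream slots of a target vertex from adjacency lists. Unlike the model input described in the next subsection, it lists each vertex once per order rather than once per path.

```python
from collections import defaultdict

def order_slots(adjacency, target, radius):
    """Return slots[r-1] = sorted r-th order neighbors of `target`
    following the directed edges given in `adjacency`."""
    slots, frontier, seen = [], {target}, {target}
    for _ in range(radius):
        nxt = set()
        for v in frontier:
            nxt.update(adjacency.get(v, ()))
        nxt -= seen            # keep only vertices not reached at a lower order
        slots.append(sorted(nxt))
        seen |= nxt
        frontier = nxt
    return slots

# Toy directed road graph; edges point along the driving direction.
downstream = {"L6": ["L5"], "L2": ["L5"], "L5": ["L4"], "L4": ["L3"], "L3": ["L1", "L2"]}
# Upstream adjacency is obtained by reversing every edge.
upstream = defaultdict(list)
for u, vs in downstream.items():
    for v in vs:
        upstream[v].append(u)

print(order_slots(upstream, "L4", radius=2))    # incoming flow: [['L5'], ['L2', 'L6']]
print(order_slots(downstream, "L4", radius=2))  # outgoing flow: [['L3'], ['L1', 'L2']]
```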
In this section, we detail the work process of each module.§.§ Spatial-temporal Relation Construction Since the traffic condition of a road is strongly impacted by its upstream and downstream flow,we use a convolutional subnetwork and a recurrent subnetwork to maintain the road topology in the proposed model.§.§.§ Convolutional LayerCNN is used to extract temporal and “width” spatial information. As demonstrated in the example of figure <ref>, when feeding into our model, L_4's first upstream neighbor L_5 should be copied twice, because there are two paths to L_4,that are [L_6, L_5] and [L_2, L_5]. With the exponential growth of paths, the model suffers from high dimension and intensive computation. Therefore, we employ a convolution operation with multiple encoders and shared weights <cit.>. To further reduce the parameter space while maintaining independence among vertices with the same order, we set the convolution stride to the convolution kernel window size, which is equal to the length of a vertex's observation representation.The non-linear convolutional feature is obtained as follows:e^r_up, q = σ(W_up, q * U_v, t + b_up, q),e^r_down, q = σ(W_down, q * D_v, t + b_down, q),where U_v, t = [X^1_v,t,⋯,X^r_v,t](only upstream neighbors) is denoted as upstream input matrix, while D_v, t is downstream input matrix. The e^r_·, q is at rth order vector of upstream or downstream module where q ∈{1, 2...m} and m is the number of feature map. We set e^r_up = [ e^r_up, 1, ⋯, e^r_up, m ] and e^r_up∈ℝ^l × m,l is the number of observations in a slot. Similarly, we can get the e^r_down. The weights W and bias b composes parameters of CNN subnetworks. σ represents nonlinear activation, we empirically adopt the tanh function here.§.§.§ Recurrent LayerRNN is utilized to represent each path that goes to the target location(upstream path) or goes out from the target location(downstream path). The use of RNN has been investigated for traffic prediction for a long time,  <cit.> used a Time-Lag RNN for short-term speed prediction(from 20 seconds to 15 minutes), and  <cit.> adopted RNN to model state space dynamics for travel time prediction. In our proposed method, since the upstream flow is from high-order to low-order, while the downstream flow is contrary, the output of the CNN layer in the upstream module and downstream module is fed into RNN layer separately. The structure of vehicle flow direction uses LSTM with “peephole” connections to encode a path as a sequential representation. In LSTM, the forget gate f controls memory cell c to erase, the input gate i helps to ingest new information, and the output gate o exposes the internal memory state outward. Specifically, given a rth slot matrix e^r_down∈ℝ^l × m, map it to a hidden representation h^r_down∈ℝ^l × d with LSTM as follows:[ 𝐜̃^r;𝐨^r;𝐢^r;𝐟^r ] = [ tanh;σ;σ;σ ][ 𝐖_p [ 𝐞^r; 𝐡^r-1; ]+𝐛_p ],𝐜^r =𝐜̃^r⊙𝐢^r + 𝐜^r-1⊙𝐟^r,𝐡^r = [𝐨^r⊙tanh( 𝐜^r)]^T,where 𝐞^r ∈ℝ^l× m is the input at the rth order step; 𝐖_p ∈ℝ^4d× (m+d) and 𝐛_p ∈ℝ^4d are parameters of affine transformation; σ denotes the logistic sigmoid function and ⊙ denotes elementwise multiplication.The update of upstream and downstream LSTM units can be written precisely as follows:𝐡^r_down = 𝐋𝐒𝐓𝐌(𝐡^r-1_down,𝐞^r_down, θ_p). 𝐡^r_up = 𝐋𝐒𝐓𝐌(𝐡^r+1_up,𝐞^r_up, θ_p).The function 𝐋𝐒𝐓𝐌(·, ·, ·) is a shorthand for Eq. (<ref>-<ref>), in which θ_p represents all the parameters of 𝐋𝐒𝐓𝐌.§.§.§ Slot AttentionTo get the representation of each order slot, max-pooling is performed on the output of LSTM. 
As 𝐡^r represents the status sequence of the vertices in the corresponding order slot, we pool on each order slot to get r number of slot embeddings S_up = [ s^1_up, ⋯, s^r_up ] and S_down = [ s^1_down, ⋯, s^r_down ]. Since different order slots have different effects on target prediction, we introduce attention mechanisms to align these embeddings. Given the target location hidden representation g, we get the jth slot attention weights <cit.> as follows:α_j = expa(𝐠,𝐬^j)/∑_k=1^rexpa(𝐠,𝐬^k). We parametrize the model a as a Feedforward Neural Network that is used to compute the relevance between the target location and the corresponding order slot. The weight α_j is normalized by a softmax function. To write it precisely, we let𝐀𝐓𝐓𝐖(𝐬^j) as a shorthand for Eq.(<ref>), we get the upstream and downstream hidden representation by weighting the sum of these slots:𝐳_down = ∑_j=1^r𝐀𝐓𝐓𝐖(𝐬^j_down) 𝐬^j_down. 𝐳_up = ∑_j=1^r𝐀𝐓𝐓𝐖(𝐬^j_up) 𝐬^j_up. Lastly, we concatenate the 𝐳_up, 𝐳_down and the target location's hidden representation 𝐠 and then sent them to the cost layer. §.§ Top Layers with Multi-task LearningThe choice of cost function on the top layer is tightly coupled with the choice of the output unit. We simply use square error to fit the future conditions of the target locations.Multi-task learning is first introduced by  <cit.> for traffic forecasting tasks. It is considered as soft constraint imposed on the parameters arising out of several tasks <cit.>. TheseAdditional training examples put more pressure on the parameters of the model towards values that generalize well when part of a model is shared across tasks. Forecasting traffic future conditions is a multi-task problem as time goes on and different time points correspond to different tasks.In the DeepTransport model, in addition to the computation of the attention weights and affine transformations of the output layer, all other parameters are shared.§ EXPERIMENTS§.§ DatasetWe adopt snowball sampling method <cit.> to collect an urban areal dataset in Beijing from a commercial map app and named it “MapBJ”. The dataset provides traffic conditions in {fluency, slow, congestion, extreme congestion}. The dataset contains about 349 locations which are collected from March 2016 to June every five minutes. We select the first two months' data for training and the remaining half month for testing. Besides traffic topological graphs and time-varying traffic conditions, we also provide the limit speed of each road. Since the limit speed of different roads may be very distinct, and location segmentations method regards this as an important reference index. We introduce a time-invariable feature called limit level and discretize it into four classes. §.§ EvaluationEvaluation is ranked based on quadratic weighted Cohen's Kappa <cit.>, a criterion for evaluating the performance of categorical sorting. In our problem, quadratic weighted Cohen's Kappa is characterized by three 4 × 4 matrices: observed matrix O, expected matrix E and weight matrix w. Given Rater A(ground truth) and Rater B(prediction), O_i,jdenotes the number of records rating i in A while rating j in B, E_i,j indicates how many samples with label i is expected to be rated as j by B and w_i,j is the weight of different rating, w_i,j = (i-j)^2/(N-1)^2,where N is the number of subjects, we have N=4 in our problem. 
From these three matrices, the quadratic weighted kappa is calculated as: κ = 1 - (∑_i,jw_i,jO_i,j)/(∑_i,jw_i,jE_i,j). This metric typically ranges from 0 (random agreement between raters) to 1 (complete agreement between raters). §.§ Implementation Details We use the open-source deep learning platform PaddlePaddle for the implementation and experiments. PaddlePaddle has two important files for running a program: the data provider and the trainer configuration. The data provider is usually used for data preprocessing in Python, and the trainer configuration is responsible for parameter setting and for building the neural network layer by layer. Since the condition value ranges over {1, 2, 3, 4}, a multi-class classification loss could be used as the objective function. However, a cost layer with softmax cross-entropy does not take into account the magnitude of the rating. Thus, the squared error loss is applied as the training objective. A disadvantage of the straightforward use of linear regression, however, is that the predicted value may fall outside the range {1, 2, 3, 4}. We avoid this problem by a label projection, as follows. We performed a statistical analysis of the state distribution of the training data. Fluency occupies 88.2% of all records, fluency and slow together occupy about 96.7%, fluency, slow and congestion occupy about 99.5%, and extreme congestion is so rare that it accounts for only 0.5%. Therefore, we rank the prediction results in ascending order and set the first 88.2% to fluency, 88.2%-96.7% to slow, 96.7%-99.5% to congestion, and 99.5%-100% to extreme congestion. We embed all the observations into 32-dimensional continuous vectors. The model is trained by back-propagation using Adam <cit.>. Parameters are initialized with uniformly distributed random variables, and we use a batch size of 1100 for 11 CPU threads, with each thread processing 100 records. All models are trained until convergence. Besides, there are two important hyperparameters in our model, the length of historical measurement p and the perceptive radius r, which control the temporal and spatial magnitude respectively. §.§ Choosing Hyperparameters We intuitively suppose that expanding the perceptive radius would improve prediction accuracy, but it also increases the amount of computation, so it is necessary to explore the correlation between the target location and its corresponding rth order neighbors. Mutual Information (MI) measures the degree of correlation between two random variables. When MI is 0, the two random variables are independent. When MI reaches its maximum value, it equals the entropy of one of them, and the uncertainty of the other variable can be eliminated. MI is defined as MI(X;Y)=H(X) - H(X|Y) = ∑_x ∈X, y ∈Y p(x,y)log(p(x,y)/(p(x)p(y))), where H(X) and H(X|Y) are the marginal entropy and the conditional entropy respectively. MI describes how much uncertainty is reduced. Dividing MI by the average of the entropies of the two variables, we get the Normalized Mutual Information (NMI) in [0,1]: NMI(X;Y)=2MI(X,Y)/(H(X) + H(Y)). We calculated the NMI between the observations of each vertex and its rth order neighbors over all time points. 
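Both quantities used in this section — the quadratic weighted kappa for evaluation and the normalized mutual information for choosing the perceptive radius — can be computed with a few lines of NumPy. The sketch below is our own illustration (not code from the released implementation) and assumes the condition labels are encoded as integers 0-3.

```python
import numpy as np

def quadratic_weighted_kappa(truth, pred, n_classes=4):
    O = np.zeros((n_classes, n_classes))
    for a, b in zip(truth, pred):            # observed agreement counts
        O[a, b] += 1
    hist_t, hist_p = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(hist_t, hist_p) / O.sum()   # expected counts under independence
    i, j = np.indices((n_classes, n_classes))
    w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weights
    return 1.0 - (w * O).sum() / (w * E).sum()

def normalized_mutual_information(x, y, n_classes=4):
    joint = np.zeros((n_classes, n_classes))
    for a, b in zip(x, y):
        joint[a, b] += 1
    p_xy = joint / joint.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    nz = p_xy > 0
    mi = (p_xy[nz] * np.log(p_xy[nz] / np.outer(p_x, p_y)[nz])).sum()
    h = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()   # entropy of a marginal
    return 2 * mi / (h(p_x) + h(p_y))
```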
The NMI gradually decreases as the order increases, taking the values 0.116, 0.052, 0.038, 0.035 and 0.034 for r in {1,2,3,4,5} respectively, and it hardly changes for r > 5. Therefore, we set the two hyperparameters as p ∈{3, 6, 12, 18} (corresponding to 15, 30, 60 and 90 minutes of past measurements at the 5-minute record interval) and r ∈{1, 2, 3, 4, 5}. §.§ Effects of Hyperparameters Figure <ref> shows the averaged quadratic weighted kappa for the corresponding prediction horizon. Figure <ref> illustrates that 1) a closer prediction horizon always performs better; 2) as r increases, its impact on the prediction also increases. This can be seen from the slope between r=1 and r=5: the slope at 60 min is greater than for the same segment at 15 min. Figure <ref> takes the 60-min estimation as an example, indicating that the predictive performance is not monotonically increasing in the length of measurement p, and the same result is obtained at other time points. This is because an increase in p brings an increase in the number of parameters, which leads to overfitting. §.§ Comparison with Other Methods We compared DeepTransport with four representative approaches: Random Walk (RW), Autoregressive Integrated Moving Average (ARIMA), Feed-forward Neural Networks (FNN) and Stacked Autoencoders (SAEs). RW: In this baseline, the traffic condition at the next moment is estimated as the result of a random walk from the current condition with added white noise (a normal variable with zero mean and variance one). ARIMA: It <cit.> is a common statistical method for learning and predicting future values of time series data. We perform a grid search over all admissible values of p, d and q bounded by p = 5, d = 2 and q = 5. FNN: We also implemented Feed-forward Neural Networks (FNN) with a single hidden layer and an output layer with a regression cost. The hidden layer has 32 neurons, and the four output neurons correspond to the prediction horizons. The hyperbolic tangent and the linear transfer function are used as the activation function and the output function respectively. SAEs: We also implemented SAEs <cit.>, one of the most effective deep learning-based methods for traffic condition forecasting. It concatenates the observations of all locations into one large vector as input. SAEs can also be viewed as a pre-trained version of the FNN with a large input vector proposed by <cit.>. The stacked autoencoder is configured with four layers of [256, 256, 256, 256] hidden units for pre-training. After that, a multi-task linear regression model is trained on the top layer. Besides, we also provide the results of DeepTransport with two configurations, r=1, p=12 (DeepTransport-R1P12) and r=5, p=12 (DeepTransport-R5P12). Table <ref> shows the results of our model and the other baselines on MapBJ. In summary, the models that use spatial information (SAEs, DeepTransport) perform significantly better than those that do not (RW, ARIMA, FNN), especially for longer prediction horizons. On the other hand, SAEs is fully connected, meaning that it assumes any pair of locations is directly connected, so it neglects the topology of the transport network. In contrast, DeepTransport takes the traffic network structure into account, which results in higher performance than these baselines and demonstrates that our proposed model has good generalization performance. §.§ Slot Attention Weights DeepTransport can also reveal the influence of each slot on the target location by inspecting the slot attention weights. 
Figure <ref> illustrates the attention weights between prediction minutes and perceptive radius by averaging all target locations. For downstream order slots, as shown in figure <ref>, it can be seen that as predicted time increased, the attention weights shifts from low-order slots to higher ones. On the other side, figure <ref> shows that the upstream first order slot has more impact on the target location for any future time. To capture this intuition, we utilized sandglass as a metaphor to depict the spatial-temporal dependencies of traffic flow. The flowing sand passes through the aperture of a sandglass just like traffic flow through the target location. For the downstream part, the sand is first to sink to the bottom, after a period, this accumulated sand will affect the aperture just like the cumulative congestion from the higher order to the lower order. Thus, when we predict the long-period condition of the target location, our model is more willing to refer to higher-order current conditions. On the other hand, the upstream part is a little different. Higher order slots are no longer important references because traffic flow in higher order is dispersed. The target location may not be the only channel of upstream traffic flow. The nearest locations that can directly affect the target location just like the sand gathering to the aperture of the sandglass. So the future condition of the target location put more attention on the lower order. Although the higher order row receives less attention in the upstream module, there is still a gradual change as prediction minutes increase. §.§ Case StudyFor office workers, it might be more valuable to tell when traffic congestion comes and when the traffic condition will ease. We analyze the model performance over time in figure <ref>, which shows the Root Mean Square Error(RMSE) between ground truth and prediction result of RW, ARIMA, SAEs, and DeepTransport-R5P12. It has two peak periods, during morning and evening rush hours. We summed up three points from this figure: * During flat periods, especially in the early morning, there is almost no difference between models as almost all roads are fluency * Rush hours are usually used to test the effectiveness of models. When the prediction horizon is 15 minutes, DeepTransport has lower errors than other models, and the advantage of DeepTransport is more obvious when predicting the far point of time(60-minute prediction). * After the traffic peak, it is helpful to tell when the traffic condition can be mitigated. The result just after traffic peaks shows that DeepTransport predicts better over these periods. § RELATED WORKSThere has been a long thread of statistical models based on solid mathematical foundations for traffic prediction. Such as ARIMA <cit.> and its large variety <cit.> played a central role due to effectiveness and interpretability. However, the statistical methods rely on a set of constraining assumptions that may fail when dealing when complex and highly nonlinear data.  <cit.> compare the difference and similarities between statistical methods versus neural networks in transportation research.To our knowledge, the first deep learning approach to traffic prediction was published by <cit.>, they used a hierarchical structure with a Deep Belief Network (DBN) in the bottom and a (multi-task) regression layer on the top. 
Afterward, <cit.> used the deep stacked autoencoders (SAEs) model for traffic prediction. A comparison <cit.> between SAEs and DBN for traffic flow prediction was also investigated. More recently, <cit.> concatenated all observations into a large input vector and fed it to a Feed-forward Neural Network (FNN) that predicted future traffic conditions at each location. On other spatial-temporal tasks, several recent deep-learning works attempt to capture both time and space information. DeepST <cit.> uses convolutional neural networks to predict citywide crowd flows. Meanwhile, ST-ResNet <cit.> uses the framework of residual neural networks to forecast crowd flows in each region of a city. These works partition a city into an I × J grid map based on longitude and latitude <cit.>, where a grid cell denotes a region. However, MapBJ provides the traffic network in the form of road sections instead of longitude and latitude, and the road partition should take the speed-limit level into account rather than being cut equally by road length. Due to the differences in data granularity, we do not follow these methods of traffic forecasting. § CONCLUSION In this paper, we demonstrate the importance of using both temporal and spatial road information in traffic condition forecasting. We proposed a novel deep learning model (DeepTransport) to learn the spatial-temporal dependency. The model not only adopts two sequential models (CNN and RNN) to capture spatial-temporal information but also employs an attention mechanism to quantify the spatial-temporal dependency relationships. We further released a real-world large traffic condition dataset including millions of recordings. Our experiments show that DeepTransport significantly outperforms previous statistical and deep learning methods for traffic forecasting.
http://arxiv.org/abs/1709.09585v4
{ "authors": [ "Xingyi Cheng", "Ruiqing Zhang", "Jie Zhou", "Wei Xu" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170927153949", "title": "DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting" }
http://arxiv.org/abs/1709.09982v1
{ "authors": [ "Linda E. Reichl", "Minh-Binh Tran" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170926215233", "title": "A kinetic model for very low temperature dilute Bose gases" }
http://arxiv.org/abs/1709.09605v1
{ "authors": [ "Estia Eichten", "Zhen Liu" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170927162409", "title": "Would a Deeply Bound $b\\bar b b\\bar b$ Tetraquark Meson be Observed at the LHC?" }
Masked Toeplitz covariance estimation Maryia KabanavaRWTH Aachen University, Lehrstuhl C für Mathematik (Analysis), Pontdriesch 10, 52062 Aachen, Germanyand Holger Rauhut[1]============================================================================================================================================= The problem of estimating the covariance matrix Σ of a p-variate distribution based on its n observations arises in many data analysis contexts. While for n>p, the classical sample covariance matrix Σ̂_n is a good estimator for Σ, it fails in the high-dimensional setting when n≪ p. In this scenario one requires prior knowledge about the structure of the covariance matrix in order to construct reasonable estimators.Under the common assumption that Σ is sparse, a refined estimator is given by M·Σ̂_n, where M is a suitable symmetric mask matrix indicating the nonzero entries of Σ and · denotes the entrywise product of matrices.In the present work we assume that Σ has Toeplitz structure corresponding to stationary signals. This suggests to average the sample covariance Σ̂_n over the diagonals in order to obtain an estimator Σ̃_n of Toeplitz structure. Assuming in addition that Σ is sparse suggests to study estimators of the form M·Σ̃_n. For Gaussian random vectors and, more generally, random vectors satisfying the convex concentration property, our main result bounds the estimation error in terms of n and p and shows that accurate estimation is indeed possible when n ≪ p. The new bound significantly generalizes previous results by Cai, Ren and Zhou and provides an alternative proof.Our analysis exploits the connection between the spectral norm of a Toeplitz matrix and the supremum norm of the corresponding spectral density function. § INTRODUCTION§.§ Masked covariance estimation Estimating the covariance matrix of a random vector X in ^p from n i.i.d. sample observations X_1,,X_nplays a key role in various data analysis tasks. Recently, the case n ≪ p of small sample size has attracted increasing attention due to its appearance in applications including mobile communication problems, gene expression studies and more. Let X be random vector in ^p which we assume to have mean zero throughout this article.(The general case of non-zero mean can be handled as in <cit.>.) Its covariance matrix is defined as Σ= XX^T.The sample covariance matrix of a sequence of n i.i.d. observations X_1,, X_n of X is defined byΣ̂_n=[σ̂_st]_s,t=1^p=1/n∑_i=1^n X_iX_i^Tand it is an unbiased estimator of Σ. If X is Gaussian and n ≥ C ε^-2 p then the estimation error in the spectral norm satisfies Σ̂_n - Σ≤ε with probability at least 1-2 exp(-cn),see e.g. <cit.>, or <cit.>for a variant for heavy-tailed distributions. Since the rank of Σ̂_n is at most n, the given bound of n in terms of p cannot be improved for general Σ, i.e., a sample size of n ≥ p is necessary.However, in modern applications it is desirable to find good estimators of the covariance matrix Σ when n≪ p. Such estimators reflect prior knowledge about the structure of Σ. A common assumption is that Σ is sparse, i.e., a significant amount of entries of Σ is 0 or close to 0. Then the so-called masked covariance estimator is defined as M·Σ̂_n, where M is a symmetric mask matrix and · denotes the entrywise product of matrices. Each entry m_ij of M indicates how important it is to estimate the interaction between the i-th and j-th variable. 
The masked approach was first introduced in <cit.> and it allows one to describe several regularization techniques such as banding or tapering of the covariance matrix in the case of ordered variables <cit.>, and thresholding in the case of unordered variables <cit.>. The accuracy of the masked estimator can be analyzed by splitting it into two terms via the triangle inequality M·Σ̂_n-Σ≤ M·Σ̂_n-M·Σ + M·Σ-Σ, where · denotes the spectral norm of a matrix. The bias term M·Σ-Σ describes how well Σ fits the model described by M. The variance term M·Σ̂_n-M·Σ measures how accurately the masked part of the sample covariance matrix approximates the corresponding part of the true covariance matrix. The intuition behind the use of M is that M·Σ preserves the essential structure of Σ, but at the same time M·Σ̂_n does not deviate too much from its mean. In <cit.> the authors considered a p-variate Gaussian distribution and studied the problem of estimating the variance term M·Σ̂_n-M·Σ for an arbitrary fixed symmetric M∈^p× p. Let X_1,…,X_n be drawn from a multivariate Gaussian distribution (0,Σ). Let M∈^p× p. Then M·Σ̂_n-M·Σ≤ Clog^3(2p)(M_1,2/√(n)+M/n)Σ, where M_1,2=max_j(∑_i m_ij^2)^1/2. In the particular case when the entries of M are either 0 or 1, estimate (<ref>) leads to the following corollary. Let X_1,…,X_n be drawn from a multivariate Gaussian distribution (0,Σ). Assume that the entries of M∈^p× p are equal to 0 or 1 and that there are at most m nonzero entries in each column. Then M·Σ̂_n-M·Σ≤ Clog^3(2p)(√(m/n)+m/n)Σ. The proof of Theorem <ref> is based on decoupling, conditioning, a covering argument and the Gaussian concentration inequality for Lipschitz functions. It also yields error bounds that hold in probability. By means of a matrix moment inequality, an error estimate that holds in expectation was generalized to arbitrary distributions with finite fourth moments in <cit.>. When restricted to the Gaussian case it provides an improvement of (<ref>) in the logarithmic factor. §.§ Banding and tapering estimators of Toeplitz covariance matrices In this paper we are interested in obtaining bounds similar to (<ref>) and (<ref>) under the additional assumption that X is stationary, so that the covariance matrix Σ has Toeplitz structure, Σ=[ σ_0 σ_1 σ_2 … … σ_p-1; σ_1 σ_0 σ_1 ⋱ ⋮; σ_2 σ_1 ⋱ ⋱ ⋱ ⋮; ⋮ ⋱ ⋱ ⋱ σ_1 σ_2; ⋮ ⋱ σ_1 σ_0 σ_1; σ_p-1 … … σ_2 σ_1 σ_0 ]. Stationary signals appear in many applications including time series analysis and mobile communications. Our intuition is that the additional structure allows us to further reduce the required number of samples. The easiest way to improve the sample covariance estimator for this setting is to average the entries of Σ̂_n in (<ref>) over the diagonals. For 0≤ r≤ p-1, set σ̃_r=1/(p-r)∑_s-t=rσ̂_st and define a new unbiased estimator as the Toeplitz matrix Σ̃_n=[σ̃_st]_s,t=1^p with σ̃_st=σ̃_s-t. Assuming that there is an ordering among the variables of X and that variables which are far apart are only weakly correlated, we may construct more accurate estimators of Σ, the so-called banding and tapering estimators <cit.>. For a given positive integer m≤ p/2 and 0≤ r≤ p-1, set a_r = 1 for r≤ m/2, a_r = 2-2r/m for m/2<r≤ m, a_r = 0 otherwise, and b_r = 1 for r≤ m, b_r = 0 otherwise. 
The tapering and banding estimators are defined as M_tap·Σ̃_n and M_band·Σ̃_n, where (M_tap)_st=a_s-t and (M_band)_st=b_s-t. A bound on the variance term in the error bound (<ref>) for either the tapering or banding mask can be derived from results in <cit.>, see eq. (22) and Lemma 5 of loc. cit. Let X_1,…,X_n be drawn from a multivariate Gaussian distribution (0,Σ). Let M∈^p× p be a tapering or banding mask with m≤ p/2. Then M·Σ̃_n-M·Σ≤ C√(mlog(np)/np). Comparing (<ref>) and (<ref>), we see that there is an improvement by a factor of 1/√(p) in the error bound for Toeplitz matrices. However, the result (<ref>) holds only for this special type of mask M, whereas (<ref>) is valid for any symmetric M∈^p× p. Our goal is to extend the error estimate (<ref>) to general Toeplitz masks and not necessarily Gaussian distributions. §.§ Our contribution Our result holds for distributions that satisfy the so-called convex concentration property, see Definition <ref> below, which includes mean-zero Gaussian random vectors. The class of such distributions is, however, much broader than the Gaussian class. For a Toeplitz mask M∈^p× p with M_st=ω_s-t≥ 0, we define the weighted ℓ_1- and ℓ_2-norm of its first row ω=(ω_ℓ)_ℓ=0^p-1 by ω_1,*=∑_ℓ=0^p-1ω_ℓ/(p-ℓ) and ω_2,*=(∑_ℓ=0^p-1ω_ℓ^2/(p-ℓ))^1/2. Let X_1,…,X_n be drawn from a distribution X∈^p such that X=0, Σ= XX^T is Toeplitz and X satisfies the convex concentration property with constant K. Let M∈^p× p be a Toeplitz mask with first row ω. Then for every t>0, ℙ(M·Σ̃_n-M·Σ≥ CK^2(ω_2,*√(t/n)+ω_1,*t/n))≤ Cpe^-t, and 𝔼 M·Σ̃_n-M·Σ≤ CK^2(ω_2,*√(log(p)/n)+ω_1,*log(p)/n). We note that the error bound (<ref>) in expectation also holds for the mean square error (MSE), as follows easily from the probability bound (<ref>) together with integration. The logarithmic factor in (<ref>) cannot be removed in general, see also Section <ref> below. By estimating the weighted ℓ_1- and ℓ_2-norm of either the tapering or banding mask, we obtain the following result, which generalizes and very slightly improves Theorem <ref> above from <cit.>. Moreover, it provides an alternative proof. Let X_1,…,X_n be drawn from a distribution X∈^p such that X=0, Σ= XX^T is Toeplitz and X satisfies the convex concentration property with constant K. Let M∈^p× p be a tapering or banding mask with m≤ p/2. Then M·Σ̃_n-M·Σ≤ CK^2(√(mlog(p)/pn)+mlog(p)/pn). In the Gaussian case X ∼𝒩(0, Σ), we have K^2= 2Σ, so that M·Σ̃_n-M·Σ≤ CΣ(√(mlog(p)/pn)+mlog(p)/pn). Corollary <ref> implies that for an error tolerance ε∈(0,1), the sample size n≥ Cε^-2(m/p)log(p) is sufficient for M·Σ̃_n-M·Σ≤εΣ. Therefore, even though the number of observations may be significantly smaller than the dimension of the underlying distribution, partial estimation of the covariance matrix is performed with small error. In some applications, Toeplitz covariance matrices may have a sparsity structure that is more complicated than the one induced by the banding estimator, in the sense that zeros in ω interleave with nonzero entries. This includes spectrum sensing applications <cit.>, where one needs to test the occupancy of spectral bands for wireless communication purposes. Man-made signals, for instance in OFDM <cit.>, may have statistics with Toeplitz covariance matrices and non-trivial sparsity structure with some zeros close to the diagonal and some non-zeros far away from the diagonal. 
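As a concrete illustration of the estimators discussed in this section, the sketch below (our own illustration, not code from the paper) forms the diagonal-averaged Toeplitz estimator Σ̃_n from samples and applies a Toeplitz mask given by its first row ω — here a banding mask, but the indicator of any sparsity pattern can be used instead. The toy covariance and all names are our assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def averaged_toeplitz_estimator(X):
    """X has shape (n, p); rows are the observations X_1, ..., X_n."""
    n, p = X.shape
    S_hat = X.T @ X / n                       # sample covariance matrix
    # average the sample covariance over its diagonals
    sigma_tilde = np.array([np.mean(np.diag(S_hat, k=r)) for r in range(p)])
    return toeplitz(sigma_tilde)              # Toeplitz estimator Sigma_tilde_n

def masked(Sigma_tilde, omega):
    """Entrywise product with the Toeplitz mask whose first row is omega."""
    return toeplitz(omega) * Sigma_tilde

# toy example: banding mask with bandwidth m on a p-dimensional problem
p, m, n = 200, 5, 50
rng = np.random.default_rng(0)
true_first_row = np.concatenate(([1.0, 0.5, 0.25], np.zeros(p - 3)))
Sigma = toeplitz(true_first_row)              # sparse Toeplitz covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

omega_band = (np.arange(p) <= m).astype(float)
est = masked(averaged_toeplitz_estimator(X), omega_band)
print(np.linalg.norm(est - Sigma, 2))         # spectral-norm estimation error
```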
Denoting by S ⊂{0,,p-1} the support of the first row of a sparse Toeplitz matrix Σ, it is natural to work with a Toeplitz mask M having first row ω = 1_S being the indicator of S, i.e., ω_j = 1 for j ∈ S and ω_j = 0 for j ∉ S. The following weighted version of the cardinality of S, introduced in similar form in <cit.>, determines the required number of samples,ν(S) = ∑_ℓ∈ Sp/p-ℓ.Observe that for ω = 1_S, ν(S) = p ω_1,* = p ω_2,*^2. If S is contained in a band of length q, i.e.,S ⊂{0,,q} then ν(S) ≤p/p-q#S and for q ≤ p/2 we have #S ≤ν(S) ≤ 2 #S. The following is an immediate consequence of Theorem <ref>.Let X_1,…,X_n be drawn from a distribution X∈^p such that X=0, Σ= XX^T is Toeplitz and X satisfies the convex concentration property with constant K. Let M∈^p× p be sparse Toeplitz with M_st=ω_s-t∈{0,1} where ω has support S ⊂{0,,p-1} of weighted cardinality ν(S).Then for every t>0,M·Σ̃_n-M·Σ≥ CK^2√(%s/%s)ν(S) tpn+ν(S) t/pn≤ Cpe^-t,andM·Σ̃_n-M·Σ≤ CK^2√(ν(S) log(p)/pn)+ν(S) log(p)/pn. In the case of a Gaussian distribution this result reduces to the previously known Theorem <ref> for the banding estimator if S = {0,, m}, but may handle general sparsity patterns (and more general distributions). For instance, if S ⊂{0,,p/2} of (small) cardinality s = #S, then as few asn ≥ C ε^-2s/plog(p) samples ensure M·Σ̃_n-M·Σ≤Σ with high probability. It would be interesting to investigate whether the error bound (<ref>) in expectation can be generalized to heavier tailed distributions in the spirit of the main results in <cit.> which only assume finite fourth moments. It is however presently not clear whether it is possible to adapt the proof technique of <cit.> to our Toeplitz covariance structure.§.§ Bounds over a class of smooth spectral densitiesLet us shortly describe an application of our results studied in more detail in <cit.>. To a Toeplitz covariance matrix Σ of the form (<ref>) we associate its spectral density functionf(x) = f_Σ(x) =σ_0+2∑_r=1^p-1σ_rcos rx, x∈[-π,π].The proof of our main result uses the fact that the spectral norm of Σ can be estimated by the L^∞ norm of f, Σ≤f_∞ := sup_x∈[-π,π]f(x), see e.g. <cit.>.As in <cit.> we introduce a class of Toeplitz covariance matrices related to a Lipschitz condition on the spectral densities.For β = γ + α with γ∈_0 and α∈ (0,1], let ℱ_β(L_0, L) = {Σ≻ 0 : Σ≤ L_0, sup_x ∈ [-π,π] |f_Σ^(γ)(x+h) - f_Σ^(γ)(x)| ≤ L h^α},where f_Σ^(γ) is the γ-th derivative of f_Σ. Since the decay of Fourier coefficients is closely connected to smoothness conditions, these two classes are contained in each other for certain choices of parameters.Choosing M = M_tap as the mask of the tapering estimator with parameter m,the spectral function f_M_tap·Σ equals f_Σ * V_m where * denotes convolution andV_m is the so-called De la Vallèe-Poussin kernel. Applying classical results from Fourier series, see e.g.<cit.> yields M_tap·Σ - Σ≤ f_Σ * V_m - f_Σ_∞≤ 4 inf_q ∈ T_mq-f_∞. For Σ in ℱ_β(L_0,L) the last term can further be estimated by 3L m^-β so that M_tap·Σ - Σ≤ 12 L m^-β.Under the assumption ofCorollary <ref> and assuming K^2 = c Σ (as in the Gaussian case) this leads together with (<ref>) to M_tap·Σ̃ - Σ≤ C Σ(√(m log(p)/pn) + m log(p)/pn) + 12 L m^-βNote that Σ≤ L_0 due to Σ∈ℱ(L_0,L).Choosing m = ⌊(L/L_0^2β + 2np /log(p))^1/(2β+1)⌋and making the mild assumption m ≤ pn/log(p), we obtainM_tap·Σ̃ - Σ≤ C L ( log(p)/np)^β/2β+1.Of course, a related probability estimate and an MSE estimate can be derived from (<ref>). 
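In practice the bandwidth m can be chosen by numerically balancing the bias term against the variance term in the display above. The sketch below (ours; constants such as the factor 12 are dropped, and the toy parameter values are arbitrary) minimises the sum of the two contributions over admissible m ≤ p/2 and compares the resulting value of the bound with the rate L(log(p)/(np))^{β/(2β+1)} quoted above.

```python
import numpy as np

def tapering_error_bound(m, beta, L, L0, n, p):
    """Variance plus bias terms from the display above, with constants dropped."""
    return L0 * (np.sqrt(m * np.log(p) / (p * n)) + m * np.log(p) / (p * n)) + L * m ** (-beta)

def best_bandwidth(beta, L, L0, n, p):
    """Numerically balance bias against variance over m = 1, ..., p/2."""
    ms = np.arange(1, p // 2 + 1)
    bounds = [tapering_error_bound(m, beta, L, L0, n, p) for m in ms]
    return int(ms[int(np.argmin(bounds))])

beta, L, L0, n, p = 1.0, 2.0, 1.0, 50, 2000          # toy choices
m_star = best_bandwidth(beta, L, L0, n, p)
rate = L * (np.log(p) / (n * p)) ** (beta / (2 * beta + 1))   # rate in the display above
print(m_star, tapering_error_bound(m_star, beta, L, L0, n, p), rate)
```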
It is shown in <cit.> that this bound is optimal over the class ℱ_β(L_0,L). In particular, the logarithmic factor log(p) in (<ref>) cannot be removed. This means that the logarithmic factor in our general bound (<ref>) cannot be removed in general, either. In a similar way <cit.>, we can analyze the performance of the banding estimator over ℱ_β(L_0,L).This leads to the estimateM_band·Σ̃ - Σ≤ C L_0 (√(m log(p)/pn) + m log(p)/pn) + 12 L log(m) m^-β.Choosing m = ⌊(L/L_0^2β + 2np /log(p))^1/(2β+1)log(p)^1/β⌋leads toM_band·Σ̃ - Σ≤ C L ( log(p)^4β+3/4β+2/np)^β/2β+1.Compared to the bound for the tapering estimator this is slightly worse.The article <cit.> considers also a second class of Toeplitz covariance matrices, but for the sake of brevity, we will not go into detail here.§.§ Positive semidefinite estimator It is natural to ask that a covariance estimator is positive semidefinite and some applications will strictly require this.However, our masked estimator M·Σ̃_n does not necessarily fulfill this condition. In order to obtain a positive semidefinite Toeplitz estimator, we can apply a procedure described in <cit.>, which is based on a circulant extension of our original masked estimator. For the sake of completeness we present the construction here and provide an approximation error of the true covariance matrix. For an arbitrary (positive semidefinite) Toeplitz covariance matrix Σ∈^p× p given by (<ref>) and corresponding spectral density function f_Σ as in (<ref>) we define a circulant matrix Σ_∈^(2p-1)×(2p-1) with entriesΣ__st=σ_s-t,if s-t≤ p-1, σ_2p-1-σ_s-t,ifp≤s-t≤ 2p-2.The eigenvalue decomposition of Σ_ is given byΣ_ =∑_ j≤ p-1λ_ju_ju̅_j^Twith the eigenvectorsu_j=1/√(2p-1)1,e^-2π i j/(2p-1),…,e^-2π i j(2p-2)/(2p-1)^T,j≤ p-1,and the (non-negative) eigenvalues λ_j = ∑_r=0^p-1σ_re^-2π i jr/(2p-1) +∑_r=p^2p-2σ_2p-1-r e^-2π i jr/(2p-1)= ∑_r=0^p-1σ_re^-2π i jr/(2p-1) +∑_r=-(p-1)^-1σ_-r e^-2π i jr/(2p-1)=σ_0+2∑_r=1^p-1σ_rcos2π rj/2p-1=f_Σ2π j/2p-1,j≤ p-1.Let M·Σ̃_n be our masked estimator with spectral density function f_M·Σ̃_n, which may possibly take negative values. Define f^*:[-π,π]→ as the non-negative part of f_M·Σ̃_n,f^*(x)=f_M·Σ̃_n(x),iff_M·Σ̃_n(x)≥ 0, 0,otherwise.SetΣ^*_ = ∑_j≤ p-1f^*2π j/2p-1u_ju̅_j^T.Then Σ^*_∈^(2p-1)×(2p-1) is circulant and positive semidefinite. As a new estimator Σ^* we take the restriction of Σ^*_ to its first p rows and p columns. It is clear that Σ^* is Toeplitz and positive semidefinite. But note that in the case of a sparse mask M, the estimator Σ^* may in general fail to be sparse. Nevertheless, we have the following error bound.Let X_1,…,X_n be drawn from a distribution X∈^p such that X=0, Σ= XX^T is Toeplitz and X satisfies the convex concentration property with constant K. Let M∈^p× p be a Toeplitz maskwith first row ω. Then the positive semidefinite estimator Σ^* obtained from M·Σ̃_n by the procedure described above satisfies, for every t>0,(Σ^*-Σ≥ CK^2ω_2,*√(t/n)+ω_1,* t/n+3f-f_M·Σ_∞) ≤ C p e^-t,where f and f_M·Σ denote spectral density functions of Σ and M·Σ respectively. Moreover,Σ^*-Σ≤CK^2ω_2,*√(log(p)/n)+ω_1,*log(p)/n+3f-f_M·Σ_∞, The term f-f_M·Σ_∞ in (<ref>) replaces the bias term in (<ref>) and is in general an upper bound for it. 
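For completeness, the positive semidefinite correction just described can be implemented directly: evaluate the spectral density of the masked estimate on the (2p−1)-point grid 2πj/(2p−1) (the eigenvalues of the circulant extension), clip the negative part, rebuild the circulant matrix via an inverse FFT, and keep its first p Toeplitz coefficients. The sketch below is ours (function name and toy input are illustrative); it only assumes the Fourier-diagonalisation of circulant matrices stated above.

```python
import numpy as np

def psd_toeplitz_correction(row):
    """First row of the PSD estimator Sigma^* obtained from a (possibly
    indefinite) Toeplitz estimate with first row (sigma_0, ..., sigma_{p-1})."""
    row = np.asarray(row, dtype=float)
    p = len(row)
    q = 2 * p - 1
    j = np.arange(q)
    # eigenvalues of the circulant extension = spectral density on the grid 2*pi*j/(2p-1)
    lam = row[0] + 2.0 * np.sum(
        row[1:, None] * np.cos(2 * np.pi * np.outer(np.arange(1, p), j) / q), axis=0)
    lam_plus = np.maximum(lam, 0.0)                   # keep only the non-negative part f^*
    # rebuild the circulant matrix from the clipped eigenvalues; the first p entries
    # of its first row are the corrected Toeplitz coefficients
    return np.real(np.fft.ifft(lam_plus))[:p]

# quick check on an indefinite toy input
row = np.array([1.0, 0.9, -0.9])                      # the 3x3 Toeplitz matrix is indefinite
fixed = psd_toeplitz_correction(row)
idx = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
print(np.linalg.eigvalsh(row[idx]).min(), np.linalg.eigvalsh(fixed[idx]).min())
```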
If the mask M is chosen in such a way that M·Σ is precisely Σ then the term f-f_M·Σ_∞ in the estimate above disappears.In the situation of Toeplitz covariance matrices Σ from the class ℱ_β(L_0,L) from Section <ref> we have for the tapering mask M_tap with parameter m, see also (<ref>),f-f_M_tap·Σ_∞≤ 12 L m^-β.Choosing m as in (<ref>) and following the same steps leading to (<ref>),we conclude that the corresponding positive definite estimator Σ^* satisfiesΣ^* - Σ≤ CL ( log(p)/np)^β/2β+1.A corresponding tail estimate follows in the same way. This means that the original masked estimator M ·Σ̃_n and the positive semidefinite estimatorΣ^* obey the same error estimates on ℱ_β(L_0,L) (up to possibly constants).§ ACKNOWLEDGEMENTS Both authors acknowledge funding from the DFG through the project Compressive Covariance Sampling for Spectrum Sensing (CoCoSa). They thank Andreas Bollig and Arash Behboodi for discussions on Toeplitz covariance estimation in the context of wireless communications. § PRELIMINARIES §.§ The convex concentration property The Gaussian concentration inequality, see e.g. <cit.>, states that if f:^p→ is a Lipschitz function with Lipschitz constantf_Lip and X∼(0,Σ), thenf(X)- f(X)≥ t≤ 2exp-t^2/2Σf_Lip^2for allt≥ 0. We are interested in distributions X∈^p that behave similar to (<ref>). Let X be a random vector in ^p. We say that X has the convex concentration property (c.c.p.) with constant K if for every 1-Lipschitz convex function ϕ:^p→, we have ϕ(X)<∞ and for every t>0,(ϕ(X)-ϕ(X)≥ t)≤ 2exp(-t^2/K^2). The mean ϕ(X) in (<ref>) may be replaced by a median M_f after possibly adjusting the constant K, see e.g. <cit.>. This type of distributions is considered in <cit.>. We provide several examples of distributions with possibly dependent entries satisfying the c.c.p.: * Clearly, a Gaussian random vector X∼(0,Σ) satisfies the c.c.p. with K^2=2Σ.* A random vector X that is uniformly distributed on the sphere √(p)S^p-1 satisfies the c.c.p. with constant K=2. This follows from <cit.> combined with <cit.>.* A random vector X∈^p with a density proportional to e^-u(x), where the Hessian satisfies D^2 u(x)≥γ for some c>0 uniformly in x∈^p, has the c.c.p. <cit.> with constant K= √(2/γ). Such random vectors form an important subclass of the logarithmically convex random vectors. *A random vector X = (X_1,,X_p) with independent components X_j taking values in [-1,1] satisfies the c.c.p. with absolute constant K= c <cit.>. (Of course, the X_j taking values in some other bounded intervals works as well after possibly adjusting the constant c.)* In generalization of the previous example, the c.c.p. also holds for certain random vectors X = (X_1,,X_p) on [-1,1]^p with dependent entries. In <cit.> this is proven for some classes of Markov chains and so-called Φ-mixing processes.* Let X∈^p be a random vector with covariance matrix being the identity andthat satisfies the c.c.p. with constant K, for instance, a Rademacher vector, i.e., independent entries that take the value ± 1 with equal probability (see Example <ref>). Now for an arbitraryB∈^q× p we define Y = B X ∈^q. Then Y has covariance matrix Σ_Y=BB^T and satisfies the c.c.p. Indeed, for a 1-Lipschitz and convex function f:^q→, defineϕ:^p→ as ϕ(X)=1/Bf(BX). Then ϕ is also 1-Lipschitz and convex. Since X has the c.c.p., we have f(Y)=f(BX)=Bϕ(X)<∞and (f(Y)- f(Y)≥ t)=ϕ(X)-ϕ(X)≥t/B≤ 2exp(-t^2/(KB)^2),which implies that Y satisfies the c.c.p. with constant KΣ_Y^1/2. 
This example shows in particular that any positive semidefinite matrix Σ may appear as covariance matrix of a random vector satisfying the c.c.p. and not being Gaussian.* It follows from <cit.> (a generalization of Talagrand's convex distance inequality) that the c.c.p. holds for a random vector X = (X_1,,X_p) with possibly dependent entries which satisfies a Dobrushin type condition. See <cit.> for details and <cit.> on how to deduce a concentration inequality from a convex distance inequality.This examples applies in particular to random vectors generated via sampling from finite sets without replacement <cit.>. * Random vectors satisfying the logarithmic Sobolev inequality are c.c.p.: For some positive measurable function f on ^p, the entropy is defined asEnt_X(f) = [f(X) log(f(X))] - [f(X)] log([f(X)]).The random vector is said to satisfy a logarithmic Sobolev inequality if for all smooth enough functions f on ^p it holdsEnt_X(f^2) ≤ K^2 [ ∇ f(X) _2^2].It follows from <cit.> that X has the c.c.p. with constant K. Examples of random vectors satisfying the logarithmic Sobolev inequality includeGaussian random vectors X∼(0,Σ) and more generally logarithmically concave random vectors as in Example 3. above, the uniform distribution on the sphere <cit.> and, more generally, random vectors distributed according to the normalized Riemann measure on a compact Riemannian manifold with Ricci curvature uniformly bounded from below by a positive constant <cit.>. The following generalization of the Hanson-Wright inequality for random vectors satisfying the c.c.p. due to Adamczak <cit.> is crucial for the proof of Theorem <ref>.Let X∈^p be random with X=0. If X satisfies the c.c.p. with constant K, then for any A∈^p× p and t>0,|AX,X-AX,X|≥ t≤ 2exp-1/Cmint^2/2K^4A_F^2,t/K^2A. §.§ Sub-gamma random variables The proof of our main result uses the concept of sub-gamma random variables, see also <cit.>. A real-valued mean-zero random variable X is called sub-gamma with variance factor ν and scale parameter c if, for all 0<λ<1/c,exp(λ X)≤exp(λ^2 ν/2(1-cλ)) exp(-λ X) ≤exp( λ^2 ν/2(1-cλ))The tail of a sub-gamma variable satisfies <cit.>(|X| > √(2 ν t) + √(c t)) ≤ 2 exp(-t).Sub-gamma variables can be characterized via their moments <cit.>. If, for any integer q ≥ 1, a random variable X satisfies[X^2q] ≤ q! A^q + (2q)! B^2qthen X is sub-gamma with variance factor ν = 4(A+B^2) and scale parameter c= 2B. Conversely, if X is sub-gamma then (<ref>) holds for some A and B.§ PROOF OF MAIN RESULTSAs in <cit.> our results rely on the connection between the spectral norm of the Toeplitz matrix and the L^∞ norm of the corresponding spectral density function. The spectral density function corresponding to a Toeplitz covariance matrix Σ defined in (<ref>) is given byf(x)=f_Σ(x)=σ_0+2∑_r=1^p-1σ_rcos rx=∑_r=-(p-1)^p-1σ_rcos rx, x∈[-π,π].It follows from <cit.> thatΣ≤f_∞:=sup_x∈[-π,π]f(x)≤ 2max_1≤ k≤ 4pf(x_k) ,x_k = k-2p/4pπ, where the second inequality follows from the fact that f is a trigonometric polynomial of order less than p together withTheorem 7.28 in <cit.>. Our masked estimator based on n observations X_1,…,X_n of X∈^p is defined asM·Σ̃_n=[ ω_0σ̃_0 ω_1σ̃_1 … ω_p-1σ̃_p-1; ω_1σ̃_1 ω_0σ̃_0 ⋮; ⋮ ⋱ ω_1σ̃_1; ω_p-1σ̃_p-1 … ω_1σ̃_1 ω_0σ̃_0 ],withσ̃_r=1/n∑_i=1^n1/p-r∑_j=1^p-rX_ijX_i(j+r),r = 0,, p-1, where X_ij is the jth entry of the observation X_i. 
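As an aside, the two inequalities in the display above, the Toeplitz-symbol bound ‖Σ‖ ≤ ‖f‖_∞ and its discretisation over the 4p grid points x_k, can be checked numerically on a toy covariance. The sketch below is ours; the particular spectral density f(x) = 1 + 0.6cos(x) + 0.3cos(3x) ≥ 0.1 is an arbitrary non-negative choice made so that Σ is a genuine covariance matrix.

```python
import numpy as np

p = 64
# Toeplitz covariance with f(x) = 1 + 0.6 cos(x) + 0.3 cos(3x), i.e. sigma_0 = 1,
# sigma_1 = 0.3, sigma_3 = 0.15 (a toy choice, not from the text)
sigma = np.zeros(p); sigma[0], sigma[1], sigma[3] = 1.0, 0.3, 0.15
idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
Sigma = sigma[idx]

def f(x):
    return sigma[0] + 2 * np.cos(np.outer(x, np.arange(1, p))) @ sigma[1:]

x_fine = np.linspace(-np.pi, np.pi, 20001)                      # dense grid for ||f||_inf
x_grid = (np.arange(1, 4 * p + 1) - 2 * p) * np.pi / (4 * p)    # the 4p grid points x_k above

print(np.linalg.norm(Sigma, 2), f(x_fine).max(), 2 * f(x_grid).max())
# observed ordering for this example:  ||Sigma||  <=  ||f||_inf  <=  2 * max over the grid
```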
Then the corresponding spectral density function is given byf_M·Σ̃_n(x) =∑_r=-(p-1)^p-1ω_rσ̃_rcos rx=1/n∑_i=1^n∑_r=-(p-1)^p-1ω_r/p-r∑_j=1^p-rX_ijX_i(j+r)cos rx=1/n∑_i=1^n∑_s=1^p∑_t=1^pω_s-t/p-s-tX_isX_itcos(s-t)x=1/n∑_i=1^nM· V^xX_i,X_i,where V^x=[v_st^x]_s,t=1^p, v_st^x=v_s-t^x=cos(s-t)x/p-s-t, x∈[-π,π].Let Z_i^k be the mean-zero random variable defined byZ_i^k=M· V^x_kX_i, X_i-M· V^x_kX_i, X_i, i=1,…,n, k=1,,p.Then (<ref>) and(<ref>) together with the notation above provide the following boundf_M·Σ̃_n-f_M·Σ_∞≥ t≤max_1≤ k≤ 4p1/n∑_i=1^nZ_i^k≥t/2.By the generalized Hanson-Wright inequality of Theorem <ref>, for each i=1,,n and k=1,,4p, Z_i^k≥ t≤ 2exp-1/Cmint^2/2K^4M· V^x_k_F^2,t/K^2M· V^x_k,which by integration implies that for every integer q≥ 1,Z_i^k^2q ≤ 2q2CK^4M· V^x_k_F^2^qΓ(q)+4qCK^2M· V^x_k^2qΓ(2q)≤ q!4CK^4M· V^x_k_F^2^q +(2q)!2CK^2M· V^x_k^2q.According to Theorem <ref> it follows that Z_i^k is a sub-gamma random variable with variance factor ν = 16 K^4C M· V^x_k_F^2+ C^2M· V^x_k^2and scale parameter c = 2CK^2M· V^x_k.Hence, by (<ref>) and independence, for all 0 < λ < 1/c,expλ∑_i=1^n Z_i^k=∏_i=1^nexpλ Z_i^k≤expλ^2nν/2(1-cλ),and similarly for the Z_i^k replaced by - Z_i^k. This means that ∑_i=1^n Z_i^k is a sub-gamma random variable with variance factor ν n and scale parameter c. By (<ref>)this implies that for every t>0,∑_i=1^nZ_i^k>√(2ν n t)+ct≤ 2e^-t.Taking into account that the spectral norm of a matrix is bounded from above by its Frobenius norm we obtain1/n∑_i=1^n Z_i^k≥ C_1K^2M· V^x_k_F√(%s/%s)tn+CK^2M· V^x_kt/n≤ 2e^-t t > 0,where C_1 = 4√(2C + 2 C^2). A direct calculation of M· V^x_k_F yieldsM· V^x_k_F≤2∑_ℓ=0^p-1ω_ℓ^2/p-ℓ^1/2= 2 ω_2,*.By the Gershgorin disc theorem <cit.>, M· V^x_k is bounded byM· V^x_k≤ 2∑_ℓ=0^p-1ω_ℓ/p-ℓ= 2 ω_1,*.Applying the union bound to (<ref>) results in f_M·Σ̃_n-f_M·Σ_∞≥ C_2 K^2ω_2,*√(%s/%s)tn+ω_1,* t/n≤ 8pe^-tfor some C_2 only depending on C. Due to (<ref>) the error of approximating M·Σ by M·Σ̃_n is bounded byM·Σ̃_n-M·Σ≥ C_2 K^2ω_2,*√(%s/%s)tn+ω_1,* t/n≤ 8pe^-t.Integration yieldsf_M·Σ̃_n-f_M·Σ_∞≤ C_3 K^2ω_2,*√(log(p)/n)+ω_1,*log(p)/n,M·Σ̃_n-M·Σ≤ C_3 K^2ω_2,*√(log(p)/n)+ω_1,*log(p)/n.This concludes the proof. The Gaussian distribution satisfies the c.c.p with the constant K^2=2Σ. Since the entries of either the banding or tapering mask M are bounded from above by 1 and m≤p/2, we obtainω_2,*≤2∑_ℓ=0^m1/p-ℓ^1/2≤2(m+1)/p-m^1/2≤4(m+1)/p^1/2,ω_1,*≤ 2∑_ℓ=0^m1/p-ℓ≤2(m+1)/p-m≤4(m+1)/p.Theorem <ref> yields the claim. By the triangle inequality,Σ^*-Σ≤Σ^*-M·Σ+M·Σ-Σ.We bound the first term by expanding both matrices to a circulant matrix and taking into account expression (<ref>) for its eigenvalues,Σ^*-M·Σ ≤Σ^*_-(M·Σ)_= j≤ p-1maxf^*2π j/2p-1-f_M·Σ2π j/2p-1≤f^*-f_M·Σ_∞≤f^*-f_∞+f-f_M·Σ_∞.Since f is non-negative and f^* is the positive part of f_M·Σ̃_n, f^*-f_∞≤f_M·Σ̃_n-f_∞≤f_M·Σ̃_n-f_M·Σ_∞+f_M·Σ-f_∞.Estimating the second term of (<ref>) by the L^∞ norm of the corresponding spectral density function together with (<ref>) and(<ref>) leads toΣ^*-Σ≤f_M·Σ̃_n-f_M·Σ_∞+3 f-f_M·Σ_∞. Taking expectations and applying estimate (<ref>) shows the claimed estimate for the expectation of the approximation error, while a combination with (<ref>) proves the probability bound.plain
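As a closing numerical check of the key step in the proof above, the rewriting of the estimator's spectral density as a quadratic form, f_{M·Σ̃_n}(x) = (1/n)∑_i ⟨(M·V^x)X_i, X_i⟩ with V^x_{st} = cos((s−t)x)/(p−|s−t|), can be verified directly. The sketch below is ours; the data, mask and evaluation point are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 8, 15
X = rng.standard_normal((n, p))
omega = rng.random(p)          # an arbitrary non-negative Toeplitz mask first row
x = 0.7                        # an arbitrary evaluation point in [-pi, pi]

# left-hand side: spectral density of the masked diagonal-averaged estimator
sig_tilde = np.array([np.mean(X[:, :p - r] * X[:, r:]) for r in range(p)])
lhs = omega[0] * sig_tilde[0] + 2 * np.sum(omega[1:] * sig_tilde[1:] * np.cos(np.arange(1, p) * x))

# right-hand side: (1/n) sum_i <(M o V^x) X_i, X_i>
d = np.subtract.outer(np.arange(p), np.arange(p))
V = np.cos(d * x) / (p - np.abs(d))
M = omega[np.abs(d)]
rhs = np.mean(np.einsum('ij,nj,ni->n', M * V, X, X))
print(lhs, rhs)   # the two expressions agree up to floating-point error
```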
These two authors contributed equally University of Vienna, Faculty of Physics, Boltzmanngasse 5, 1090 Vienna, AustriaThese two authors contributed equally Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372 Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 University of Vienna, Faculty of Physics, Boltzmanngasse 5, 1090 Vienna, Austria Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372 Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, Avenida dos Estados 5001, 09210-580, Santo André, São Paulo, Brazil Corresponding author Singapore University of Technology and Design, 8 Somapah Road, Singapore 487372 Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 Erwin Schrödinger Institute, University of Vienna, 1090 Vienna, Austria Corresponding author University of Vienna, Faculty of Physics, Boltzmanngasse 5, 1090 Vienna, Austria Erwin Schrödinger Institute, University of Vienna, 1090 Vienna, AustriaOne-time programs, computer programs which self-destruct after being run only once, are a powerful building block in cryptography and would allow for new forms of secure software distribution. However, ideal one-time programs have been proved to be unachievable using either classical or quantum resources. Here we relax the definition of one-time programs to allow some probability of error in the output and show that quantum mechanics offers security advantages over purely classical resources. We introduce a scheme for encoding probabilistic one-time programs as quantum states with prescribed measurement settings, explore their security, and experimentally demonstrate various one-time programs using measurements on single-photon states. These include classical logic gates, a program to solve Yao's millionaires problem, and a one-time delegation of a digital signature. By combining quantum and classical technology, we demonstrate that quantum techniques can enhance computing capabilities even before full-scale quantum computers are available. Quantum advantage for probabilistic one-time programs Philip Walther=====================================================With the continuous march of technological advancement, computer processors have become ubiquitous, impacting almost every aspect of our daily lives. Whether being used to compose email or acting as control systems for industrial applications, these devices rely on specially written software to ensure their correct operation. In many cases it would be desirable to prevent a program from being duplicated or to control the number of times a program could be executed, for example to prevent reverse-engineering or to ensure compliance with licensing restrictions. Unfortunately, the very nature of classical information ensures that software can in principle always be copied and rerun, enabling various misuses. As a solution to this and other problems the concept of one-time programs was introduced <cit.>. One-time programs are a computational paradigm that allows for functions that can be executed one time and one time only. Thus, if a software vendor encodes a function f as a one-time program, a user having only one copy of that program can obtain only one input-output pair (x,f(x)) before the program becomes inoperable. 
In the classical world, this is only possible through the use of one-time hardware or one-time memories <cit.>, special-purpose hardware that gets physically destroyed after being used once. However, it is unclear whether such hardware can be realised in an absolutely secure way. An adversary may attack the specific implementation, seeking to circumvent or reverse whatever physical process is used to disable the device after a single use.Certain features of quantum mechanics, such as the no-cloning theorem <cit.> and the irreversibility of measurements <cit.>, suggest that it may enable a solution to this problem. It was recently shown, however, that deterministic one-time programs are impossible even in the quantum regime <cit.>. As a result, it is believed that neither classical nor quantum information-theoretically secure one-time programs are possible <cit.> without further assumptions <cit.>.Here, we demonstrate theoretically and experimentally that quantum mechanics does enable a form of probabilistic one-time program which shows an advantage over any possible classical counterpart. These rely on quantum information processing to execute, but encode entirely classical computation. Such probabilistic one-time programs circumvent existing no-go results by allowing a (bounded) probability of error in the output of the computation. We show that these quantum one-time programs offer a trade-off between accuracy and number of lines of the truth table read, which is not possible in the classical case. Remarkably, the experimental requirements to encode the probabilistic one-time programs we introduce are comparable to those of many quantum key distribution implementations, allowing for technological advances in that field to be harnessed for a new application. § CONSTRUCTION We consider one-time programs (OTPs) in the context of a two party setting, where Alice is the software provider and Bob is the user. Alice's program is represented by a secret function f, which she encodes as a separable state of some number of qubits, which scales linearly in the number of elementary logic gates required to implement f, and provides these to Bob. Bob can then evaluate f on some input of his choice x by sequentially measuring each qubit received from Alice. These measurements are a fundamentally irreversible process, which is necessary for Bob to evaluate f(x) while at the same time preventing him from learning f(x') for some input x' ≠ x. An outline of our approach is presented in scheme.In analogy to the compiling of standard classical programs, the logic of f is mapped onto a logic circuit using basic logic synthesis <cit.>. It is necessary that the circuits have a certain standard form, such that the information to be hidden is encoded in the precise choice of logic gates and not on the connections between gates. This is because our approach is to encode the truth table for individual gates as a one-time program in its own right, which we will call gate one-time programs (gate-OTPs). The interconnection of gates is left public, allowing Bob to propagate information from one gate to the next. Each logic gate is a Boolean function, taking k input bits and returning a single output bit. We will denote the set of k-input gates as 𝒢_k. For k≥ 2, it is possible to implement an arbitrary Boolean function on n input bits with gates chosen only from 𝒢_k together with the fan-out operation <cit.> that defines the number of output bits. 
It is however possible to build up arbitrary 𝒢_k gates from a fixed configuration, with some choice of gates from 𝒢_1. Such a construction of an arbitrary 𝒢_2 gateis shown in statese.Probabilistic versions of the four gates comprising 𝒢_1 can be encoded using a single qubit, as shown in statesa-d, such that the measurement operators corresponding to different inputs anti-commute. This is achieved by first fixing the measurement bases corresponding to inputs of 0 and 1 respectively (statesb), and then finding the states to encode each gate such that it maximises the average probability of obtaining the correct outcome across both possible inputs (statesc). The measurement bases are chosen to be unbiased and correspond to anti-commuting observables, σ_Z for input 0 and σ_X for input 1, to ensure that in learning about the value of one observable Bob must forego information on the other. Once the measurement bases are fixed, the states can be found which yield the correct output with a maximal probability of 1/2 + 1/2√(2) or approximately 85.36%. This encoding relates to conjucate encoding introduced by Wiesner <cit.> and is equivalent to the quantum random-access codes considered in <cit.>, which were motivated by ideas of compression rather than security. However, the concepts of one-time programs and random access codes diverge when we consider hiding gates from 𝒢_k for k>1 later on.With a method for implementing 𝒢_1 now in place, we can proceed to construct a universal set of gates, for example 𝒢_2, while preventing Bob from learning the full truth table. As alluded to previously, one way to achieve this is to insert hidden 𝒢_1 gates into a fixed circuit, as shown in states. The exact choices required for each of the hidden gates to achieve a specific 𝒢_2 gate is described in the Supplementary Information. The overall success probability for gates constructed in this way is 75%. However, such an approach yields a rather complicated construction for gates in 𝒢_k for k>2 and introduces complications in the security analysis. A more appealing approach is to directly implement probabilistic one-time programs for gates in 𝒢_k. This can be done by generalising the construction used in the k=1 case. Specifically, each possible input is assigned a unique observable from a set of anti-commuting multi-qubit Pauli operators {σ_i}, where a +1 measurement outcome is taken to correspond to a gate output of 0 and a -1 outcome is taken to correspond to an output of 1. As before, the states encoding each gate G are chosen to maximise the average probability that the outcome of measuring the observable corresponding to input x results in output G(x). Unlike the case for 𝒢_1, there is an entire subspace of states satisfying this constraint for a given G. Our approach is to encode G as the maximum entropy state maximising success probability,ρ_G = 1/tr(𝕀)( 𝕀 + 1/√(2^k)∑ _i=0 ^2^k-1(-1)^G(i)σ_i ) .This coincides with the definition a particular type of random access code, known as a parity oblivious random access code, explored in <cit.> for other purposes. The success probability for any input i is then given by 1/2(1+(-1)^G(i)tr(ρ_G σ_i)) which simplifies to 1/2(1+2^-k/2). Remarkably, this results in each ρ_G being the maximally mixed state of a 2^k-1-dimensional subspace, so that the von Neumann entropy is k-1.The implementation of 𝒢_k gates requires 2^k anti-commuting operators and 2^k-1 qubits. 
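As an illustration of this construction, the following NumPy sketch (ours) builds the state of Eq. (1) for a one-bit gate, taking σ_Z as the observable for input 0 and σ_X for input 1 as in the text, and checks that each input is answered correctly with probability 1/2(1 + 2^{−k/2}) ≈ 0.854. The function names and the choice of the NOT gate as the example are ours.

```python
import numpy as np

sZ = np.array([[1, 0], [0, -1]], dtype=float)
sX = np.array([[0, 1], [1, 0]], dtype=float)

def rho_gate(outputs, observables):
    """State of Eq. (1): outputs[i] = G(i), observables[i] = anti-commuting Pauli for input i."""
    k = int(np.log2(len(outputs)))
    dim = observables[0].shape[0]
    rho = np.eye(dim) / dim
    for out, obs in zip(outputs, observables):
        rho += (-1) ** out * obs / (dim * 2 ** (k / 2))
    return rho

def success_prob(rho, obs, out):
    """P(correct) = (1 + (-1)^G(i) tr(rho sigma_i)) / 2."""
    return 0.5 * (1 + (-1) ** out * np.trace(rho @ obs).real)

# one-bit NOT gate: G(0) = 1, G(1) = 0
rho_not = rho_gate([1, 0], [sZ, sX])
print(success_prob(rho_not, sZ, 1), success_prob(rho_not, sX, 0))
# both equal 1/2 + 1/(2*sqrt(2)) ~= 0.8536, as stated in the text
```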
However, there is an alternative implementation that uses 2^k-1 qubits whose Pauli operators are restricted to being tensor products of the identity, σ_X and σ_Z. While there is no fundamental reason to require such a restriction, it can reduce the hardware requirements necessary to implement the scheme, as seen in the experimental section. These encodings form the basis for the experimental implementations with elliptically and linearly polarised photons respectively.§ EXPLICIT GATE CONSTRUCTIONHere we show the explicit form of single-photon states that can be combined to encode all𝒢_1 and 𝒢_2 gates as shown in the Supplementary Information.§.§.§ Gates with 1 bit of input The simplest case of program is one that accepts one bit of input and returns one bit of output.The truth tables for all such 𝒢_1 gates (shown in statesc) may be easily encoded as: |Ψ_0⟩ = 1/√(2+√(2))( |0⟩ + |+⟩) , |Ψ_1⟩ = 1/√(2+√(2))( |1⟩ - |-⟩) , |Ψ_Id⟩ = 1/√(2+√(2))( |0⟩ + |-⟩) ,|Ψ_not⟩ = 1/√(2+√(2))( |1⟩ + |+⟩) , where |±⟩ = 1/√(2)( |0⟩±|1⟩).§.§.§ Gates with 2 bits of input All 𝒢_2 gates can be encoded using either a combination of three states from Equations <ref>-<ref> (which corresponds to the linear scheme) or a combination of two states (elliptical scheme), in which case the above mentioned states need to be combined with additional states from the following list: |Ψ_0^e⟩=(+1/2-1/√(2)i)|0⟩ +1/2|1⟩, |Ψ_1^e⟩=(-1/2-1/√(2)i)|0⟩ +1/2|1⟩, |Ψ_2^e⟩=1/2|0⟩ +(+1/2+1/√(2)i)|1⟩, |Ψ_3^e⟩=1/2|0⟩ +(-1/2+1/√(2)i)|1⟩, |Ψ_4^e⟩=(+1/2+1/√(2)i)|0⟩ +1/2|1⟩, |Ψ_5^e⟩=(-1/2+1/√(2)i)|0⟩ +1/2|1⟩, |Ψ_6^e⟩=1/2|0⟩ +(+1/2-1/√(2)i)|1⟩, |Ψ_7^e⟩=1/2|0⟩ +(-1/2-1/√(2)i)|1⟩.The encoding of specific gates is done according to tables shown in the Supplementary Information. In the linear and elliptical scheme, the gate-encoding state is a tensor product state of three or two photons, respectively. In the linear scheme, each of the three photons are in a state given in Equations <ref>-<ref>. As there are 64 combinations and only 16 gates, each gate can be encoded in four different ways (represented by orthogonal state vectors), and a random choice is made each time the gate must be encoded. In the elliptical scheme, the first photon is in a state given in Equations <ref>-<ref>, while the second photon is in a state given in Equations <ref>-<ref>. As there are 32 combinations and only 16 gates, each gate can be encoded in two different ways, and again a random choice is made each time the gate must be encoded. The random choice between orthogonal state vectors is made by the sender and it is irrelevant from the point of view of the receiver. Thus, the state as seen by the receiver is effectively the mixed state given in Equation <ref>.§ EXPERIMENTAL IMPLEMENTATION To demonstrate the viability of the presented scheme we show a proof-of-principle implementation based on polarisation encoded photonic qubits (Setupa). We realized two equivalent schemes: we refer to the first one as the linear scheme because it can be implemented using only linearly polarised photons. This version requires fewer technological resources: Alice and Bob each need just one liquid crystal retarder (LCR). These LCRs rotate the polarization of each photon by an angle depending on the applied voltage and are therefore used to actively switch from one polarisation setting (corresponding to a gate or a measurement basis) to the next. However, in this encoding three photons per 𝒢_2 gate are required. 
Our elliptical scheme uses elliptically polarised states and requires two LCRs per party. The advantage of this scheme is that it only requires two photons per 𝒢_2 gate, reducing the length of the program by a third. For both versions we tested all 16 gates comprising 𝒢_2 for all four possible inputs (00, 01, 10, 11). The average success probability of each gate is shown in Setupb, and the results are in good agreement with the expected value of 0.75. We characterized all single-photon states using quantum state tomography <cit.> where a fidelity,F≥ 0.991± 0.008 could be achieved for all states (see Fidelities for details). § EXPERIMENTAL SETUP Our single-photon source is based on spontaneous parametric down conversion (SPDC) using a Sagnac loop <cit.>. The pump beam is generated by a 4.5 diode laser at a central wavelength of 394.5, followed by a half- and a quarter-wave plate to adjust the polarisation. It was focused on a 20 long, type-II colinear periodically poled Potassium Titanyl Phosphate crystal placed inside the loop, which emitted photon pairs at 789 in a separable state |H⟩|V⟩, where H and V denote horizontal and vertical polarization respectively. The down-converted photons were reflected by a dichroic mirror while the pump beam was transmitted. Additionally long-pass and band-pass filters were used to block the pump beam and to select the desired wavelength for the photon pairs. The down-converted photons were then coupled into single-mode fibres and one was directly sent to a detector to herald the second photon. The source was configured in a way that we observed a typical two-photon coincidence-rate of 2 with an open switch and the ratio of multi-pair events to single-pair events was <0.07.The possibility of multi-pair emissions is a property of every SPDC process which in our case could lead to the transmission of more than one photon at once through the switch and therefore cause unwanted information leaking to the client. Should a future application require even lower (or vanishing) multi-pair emission this could be implemented using alternative single photon sources <cit.>.Furthermore we implemented an active switch based on a KD*P (potassium dideuterium phosphate) Pockels cell with a half-wave voltage of 6.3 and two crossed polarisers. The electronic signal from the avalanche photo diode detector (APD) in the heralding path was sent to a splitterbox which could produce an on and off signal for the driver of the Pockels cell. The pulses were separated by 46 which corresponds to the opening time of the switch. During this time voltage is applied to the Pockels cell, causing it to act as a HWP. These pulses are gated to ensure photons are not transmitted while the LCRs are changing. Once the LCRs are ready to set a state in the program, a gating signal is sent to the splitterbox. Only then will the next heralding signal cause an on/off pulse to be sent to the Pockels cell. All following herald signals will be blocked until the splitterbox receives the next gate signal.The splitterbox itself causes a delay of the electric signal of 22 while the total electronic delay of splitterbox and control electronics is 80. The Pockels cell has a rise-time of 8. To allow for the switch to be opened before the signal photon reaches the Pockels cell in spite of all electronic delays the signal photon is delayed in a 29 single mode fibre.All necessary polarisation states were set using a combination of two LCRs and a QWP at 0. 
The maximum time to switch between two states in our scheme was 60. This was therefore the time allowed for every switching process (so as not to leak information about the prepared state because of a shorter switching time).To measure the states in the bases dictated by the inputs to the gates a second set of two LCRs was used followed by a PBS and two APDs to measure the photons. Typically 4 of the times the switch opened a photon was also detected at Bob's side. This was due to losses in the setup as well as the limited detection efficiency of the APDs. Together with the LCR switching time of 60this lead to an of average gate time of 1.4 per photon. § DEMONSTRATED PROGRAMS To demonstrate the applicability of our scheme we have experimentally implemented two different classes of one-time programs. The first class we consider is a program built from a combination of 𝒢_2 gates which are universal for classical computation. We use it to solve Yao's Millionaires Problem <cit.>, in which two people wish to compare their wealth without disclosing this value to the other party. To accomplish this goal, Alice encodes her wealth into the program. Bob's wealth will be his input (see millionaires of the Supplementary Information). The program returns a single bit, indicating which number is larger. We ran the Millionaires Problem using both the linear and the elliptical schemes on several inputs. Alice encoded a four-bit number and Bob compared it to numbers that each differed in one bit from Alice's input. The detailed results are shown in Yao. In good agreement with our theoretical expectations, it can be seen that the probability of success rises with the significance of the bit in which the two numbers differ (i.e. it is easier to discriminate two numbers that differ in the most significant bit than two that differ in the least significant bit). The second kind of program we consider concerns the delegation of digital signatures or one time power of attorney. Here Alice can enable Bob to sign one, and only one, message of his choice with a signature derived from her private key. However, due to the probabilistic nature of the described OTPs, there is a non-negligible probability that OTPs will not output the correct signature. To compensate for this we may repeat the procedure and define some threshold number of signatures which is announced publicly to be an acceptable number required to verify a given message has been signed. Alice produces many distinct OTPs each using a different private key such that Bob has a high probability of forming the required number of signatures for a single message.Standard signature schemes use a public key for verification <cit.>. However, due to technical reasons limiting our gate rate in experiment, we restrictour demonstration to a symmetric digital signature scheme, wherein Alice's private key is used to verify a signature. Such a program may be utilised for a third party to spend an amount of money on someone else's behalf, so that they should pay anyone with a signed receit. An overview of this scheme is shown in Fig. 5a.Bob computes a hash of the message he wishes to sign and uses this as the input to the OTPs (using a hash ensures that the input length does not depend on the length of the actual message signed). The output of the OTPs will then be the digital signature which Alice may verify. For each bit of this hash Alice provides 300 𝒢_1 gates, from which Bob produces a bit string dependent on the result of measuring according to that bit. 
Such a bit string may be compared by Alice to the ideal case where all gates have been implemented on the corresponding hash bit. We require that each bit string matchessuch an ideal string in at least τ positions to produce a valid signature. The threshold τ is chosen as a function of the bit string length T to maximise the difference between the probabilities of success of the honest and dishonest strategies, wherein a dishonest strategy Bob would attempt to sign two hashes differing only by a single bit. This is illustrated in DigitalSignatureb and in histogram of the Supplementary Information. As T is increased the probability of an honest Bob forming a sufficient fraction of correct bits (⩾τ/T) in each bit string approaches 1, while that of a dishonest user who would try to form multiple signatures, approaches 0. This demonstrates a clear example of a case where even probabilistic one-time programs enable new functionality that is inexecutable using classical technology. § SECURITY ANALYSIS We will now discuss the security of our protocol and show a strict advantage over any possible classical strategy. We note that the security relies on several measures affectingdifferent steps of our protocol. Starting with the logical synthesis we see that whengate-OTPs are combined into circuits, there is some freedom over how the gates are chosen. In our proof-of-principle demonstration of Yao's millionaires problem we limit the information accessible to Bob by randomly insertingpairs of NOT gates into the circuit immediately after each gate-OTP with probability one-half. The first NOT gate is absorbed backwards into the gate-OTP, altering the encoded gate. The second NOT of the pair is propagated forward, through any present fan-out and XOR gates, and absorbed into the next layer of gate-OTPs, altering the function they encode. Such a procedure can always be applied to any circuit composed of gate-OTPs along with XOR, NOT and fan-out operations. To analyse the effect of this randomisation procedure, we will assume it is applied after every gate-OTP. In such a case, the joint state of the quantum systems used to encode the gate-OTPs is maximally mixed, and hence independent of the encoded function. For those gate-OTPs which produce the output of theprogram the second NOT gate cannot be absorbed into a subsequent gate-OTP. We will simply eliminate this second NOT gate, effectively applying a one-time pad to the program's output and creating the maximally mixed state from the perspective of the receiver. Such a scheme thus negates all losses in the system as the maximally mixed state does not allow a dishonest user to extract any information regarding the intended gate-OTP.Since the output of the program can be revealed by decoding the one-time pad, the accessible information for the entire system can be no greater than the size of this encryption key, and hence can be no greater than the number of output bits for the program. This is in line with the requirement that a one-time program should reveal no more information than can be obtained from a single run of the program. We now consider the security of the individual gate-OTPs corresponding to gates in 𝒢_1. We show that strictly less can be learned from a single copy of them than from a single query to the encoded function (i.e. an ideal one-time implementation of that function). 
For all gates G ∈𝒢_1, the corresponding state ρ_G is pure, and so we will denote the state vector as |ψ_G⟩.Security shows how a single query of the encoded function can be used to produce two copies of this state. The fact that states encoding different programs are non-orthogonal, coupled with the no-cloning theorem <cit.>, implies it is not possible to produce two copies of |ψ_G⟩ from a single copy, and hence strictly less can be learned about G from a single copy of |ψ_G⟩ than from a single (coherent) query to the function it encodes. We conclude our analysis by discussing the security of 𝒢_1 and 𝒢_2 gates. We show that the gate-OTPs we have explored here have strict advantages over any purely classical computational procedure. First, we choose an appropriate figure of merit for which to compare quantum and classical noisy OTPs. An ideal OTP would allow for one, and only one, evaluation of the encoded function, resulting in exactly one input-output pair. We will therefore choose the average probability of evaluating a specific input-output pair correctly, P_1, compared to the average probability of correctness when evaluating all input-output pairs P̃_1. In the classical case information can always be copied. Therefore, a classical procedure producing one input-output pair with some fixed probability of success can be repeated arbitrarily many times to produce a noisy version of the encoded gate. The probability of getting a specific input-output pair is equal to the average probability across all input-output pairs, thus P_1^C=P̃^C_1. However, for 𝒢_k OTPs this is not the case. If we fix the single line probability of success such that P_1^C=P^Q_1 we find that, for 𝒢_1 gates P̃_1^Q=0.75 while P̃_1^C≈ 0.8536. Similarly, for 𝒢_2 we find that P̃_1^Q=0.625 while P̃_1^C=0.75. This shows that our encoding gives an advantage over the best possible classical scheme for an equivalent P_1. Details of these calculations can be found in the Supplementary Information, where it is also shown that success probability can be boosted via error-correction while still maintaining an advantage. Furthermore, in the case of 𝒢_1 gates, we may state that the probability of an adversary finding the parity of two lines, which gives an upper bound on the probability of guessing the complete truth table, is strictly lowers than in any possible classical encoding. This includes noisy implementations of oblivious transfer <cit.> as our 𝒢_1 gates are equivalent to noisy 12-oblivious transfer. Remarkably, even though oblivious transfer with a vanishing error probability is known to be impossible <cit.>with our digital signature scheme we were able to present an implementation whose overall success probability can approach 1.Aside from the inherent security of an ideal implementation of gate-OTPs, additional measures are necessary in the presence of communication over lossy channels. It is not in general advisable for Alice to simply resend qubits that are not received by Bob, since he can simply claim to have lost a photon to receive a new copy and hence gain additional information about the encoded gate-OTP. This may be prevented via a simple subroutine: for each gate several copies of each state are produced, but each with a randomly chosen additional one-time pad (i.e. a bit flip on the output of all possible inputs). These states are thus in the maximally mixed state as observed from the client and provide no information. 
Alice will reveal only the one-time pad for the state that Bob confirms to have received and that she wants him to use. Bob will then keep or flip his measurement result, according to Alice's one-time pad and proceed with the next gate following the same procedure. This procedure has been used in each of the demonstrated programs. § DISCUSSIONHere we have shown the implementation of probabilistic one-time programs in theory and experiment. Our results demonstrate that quantum physics allows for better security trade-offs for certain secure computing tasks than are possible in the classical world, even when perfect security cannot be achieved. This is achieved without assumptions on computational hardness, noisy storage or difficulty of entanglement. Using readily available technology we findour results are in excellent agreement with the theoretical predictions. Future advances in technology that would allow for non-separable measurements on the client's side could be used to further improve our implementation.We believe the presented work strongly hints at a rich area of quantum protocols to enhance the security of classical computation, even before large-scale quantum computers can be realised.[OTP_bib] § ACKNOWLEDGEMENTSWe thank I. Alonso Calafell, M. Tillmann and J. Zeuner for helping with the electronics and L. Rozema, A. Sharma, T. Strömberg and T. Withnell for discussions. M.-C.R. acknowledges support from the the uni:docs fellowship program of the University of Vienna. T.B.B. and P.W. acknowledge support from CAPES through the Science Without Borders program (grant PDSE 99999.005394/2014-07). J.F.F. and J.A.K. acknowledge support from the Singapore National Research Foundation (NRF-NRFF2013-01). J.F.F. and P.W. acknowledge support from the Erwin Schrödinger Institut at the University of Vienna. J.F.F. acknowledge support from United States Air Force Office of Scientific Research (FA2386-15-1-4082) and P.W. acknowledges support from the Austrian Research Promotion Agency (FFG) through the QuantERA ERA-NET Cofund project HiPhoP; and the Austrian Science Fund (FWF) through START (Y585-N20) and the doctoral program CoQuS (No.W1210); and the United States Air Force Office of Scientific Research (FA9550-16-1-0004) and (FA2386-17-1-4011); and the Research Platform TURIS at the University of Vienna. The authors are named on a patent application relating to this method of implementing probabilistic one-time programs (application numbers EP16162886 and PCT/EP2017/057538).§ SUPPLEMENTARY INFORMATION §.§ Advantage over classical encodings for 𝒢_1 gates We will show that the quantum implementation of 𝒢_1 gates can hide more information than a classical scheme about the result of multiple lines of a truth table. This will be done by obtaining an inequality which must be satisfied by all classical schemes but is violated by the quantum scheme described in the main text.We consider a situation where Bob is interested in the parity of some subset of lines of the truth table, which gives us a bound on the probability of identifying these lines in the subset exactly. Using equation 1 of the main text, he considers two states that are formed by summing over all states with equal parity over the subset of lines he is interested in. For subsets consisting of more than one line, these two states are equal and thus impossible to distinguish. In other words, while a single line of the truth table can be found with probability larger than 1/2, the parity of two or more lines is completely hidden. 
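This complete hiding of the parity can be verified in a few lines: mixing the two even-parity gates (constant-0 and constant-1) gives exactly the same density matrix as mixing the two odd-parity gates (identity and NOT), namely the maximally mixed state. The sketch below is ours and uses the Bloch-vector form of the single-qubit encoding above.

```python
import numpy as np

sZ = np.array([[1, 0], [0, -1]], dtype=float)
sX = np.array([[0, 1], [1, 0]], dtype=float)

def rho_g1(g0, g1):
    """Single-qubit encoding of the G_1 gate with truth table (G(0), G(1)) = (g0, g1)."""
    return 0.5 * (np.eye(2) + ((-1) ** g0 * sZ + (-1) ** g1 * sX) / np.sqrt(2))

# even parity: constant-0 and constant-1; odd parity: identity and NOT
rho_even = 0.5 * (rho_g1(0, 0) + rho_g1(1, 1))
rho_odd  = 0.5 * (rho_g1(0, 1) + rho_g1(1, 0))
print(np.allclose(rho_even, rho_odd), np.allclose(rho_even, np.eye(2) / 2))
# both mixtures equal the maximally mixed state: no measurement can reveal the parity
```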
For comparison, a classical scheme that encodes 𝒢_1 gates with single-line error probability higher than 3/4 must give correct results about the parity of two lines with probability higher than 1/2. In particular, if the single-line error probability is 1/2+1/2√(2) (the same as achieved with the quantum states in equation 1 of the main text), the classical scheme must allow the parity of two lines to be correctly identified with probability at least 1/√(2), which is greater than 1/2.In order to improve the probability of getting the correct output from a particular gate, the programmer may send multiple copies, c, of the state corresponding to this gate. The client is expected to make some (possibly non-local) multi-qubit measurement to evaluate the line of the truth table corresponding to their input. In this case the statement that the parity of multiple lines of the truth table of the encoded gates are perfectly hidden is no longer valid. Sending multiple copies of the state in equation 1 of the main text creates a trade-off situation between precision and security, where precision is quantified by the success probability that an honest client can achieve when evaluating a single line, and security is quantified by the amount of information that can be found about multiple lines of the truth table simultaneously. A complete lack of security occurs when the client can perfectly identify which one of the gates is represented by the quantum state he possesses.We will now make a comparison between a quantum scheme and what could be achieved by a classical scheme. Any classical scheme encoding a gate can be repeatedly rerun to generate a noisy truth table for the encoded gate. We will see that these noisy truth tables must satisfy an inequality that bounds the maximum level of security that can be achieved for a given level of precision. This inequality is violated in the quantum case, allowing us to achieve more security for the same level of precision than any classical scheme.§.§.§ Analytical results - Classical scheme for 𝒢_1 gates Without loss of generality, we consider a classical model in which a programmer introduces some errors in the gate truth table. These are errors purposefully introduced at compile time. There is no point in introducing random errors at run time, since the client can evaluate the truth table multiple times and find the most common value with high probability. If there is some anti-correlation in the presence of errors in different lines, then the probability of getting a second line correct is decreased when conditioned on getting the first line correct.To obtain this anti-correlation we consider that the programmer introduces h errors in the truth table (with 0≤ h ≤ 2) with probability E_h (so E_0 + E_1 + E_2 = 1). If one error is introduced it can affect either line with equal probability. Thus for an honest client interested in a single line of the truth table, the average probability that the obtained result is correct isF^C_1 = E_0 + 1/2 E_1Meanwhile, for a dishonest client interested in the parity of both lines, the probability that the obtained result is correct isF^C_2 = E_0 + E_2 We can invert these equations to find E_h in terms of F^C_1 and F^C_2. This tells the programmer what is the probability distribution in the number of errors that need to be introduced in order to produce an encoding that is characterized by given values of the probability of decoding a single line and of decoding the parity of both lines. 
This leads to the result E_0= F^C_1 + 1/2 F^C_2- 1/2E_1= 1 - F^C_2E_2= 1/2 - F^C_1 + 1/2 F^C_2 Each of these terms must be non-negative, which is only possible ifF^C_2≥| 2F^C_1 - 1 | This means that an attempt at a classical noisy gate which outputs correct results with probability F^C_1 also allows one to probe the parity of both lines of its truth table with probability greater than | 2F^C_1 - 1 |. However, we will see that a quantum implementation of the noisy OTPs violate this inequality, showing that it hides more information about other lines of the truth table than is possible classically.§.§.§ Analytical results - Quantum scheme for 𝒢_1 gates The figure of merit that we consider for security in this section is F_h, the probability of success in calculating the parity of a subset of the lines of the truth table, as a function of the size h of this subset (with 1 ≤ h ≤ 2 in the case of 𝒢_1 gates, whose truth table has only two lines). The outcome of the parity determination is binary, so we can use known results on quantum state discrimination of two quantum states. Specifically, the optimal probability of distinguishing them is uniquely determined by the 1-norm of half of their difference. For an honest client who is interested in only the first line of the truth table, the probability of success is related to the 1-norm of the operatorÂ_1=1/4ρ_00^⊗ c+1/4ρ_01^⊗ c-1/4ρ_10^⊗ c-1/4ρ_11^⊗ cwhile for a dishonest client who is interested in obtaining the parity of both lines, the probability of success is related to the 1-norm of the operatorÂ_2 =1/4ρ_00^⊗ c-1/4ρ_01^⊗ c-1/4ρ_10^⊗ c+1/4ρ_11^⊗ c Gates with one bit of input may be encoded as pure states, so providing multiple copies of them does not increase the dimensionality of the effective Hilbert space, which is spanned by at most four linearly independent vectors. For simplicity, we consider the case where the number of copies is odd and obtain the following resultsF^Q_1 ≡1/2+1/2‖Â_1‖ _1=1/2+1/2√(1-1/2^c)F^Q_2 ≡1/2+1/2‖Â_2‖ _1=1/2+1/2√(1-2/2^c)where the superscript Q refer to a quantum implementation. These values do not satisfy the inequality in Equation <ref>. This means that, if we compare a classical scheme which offers the same level of precision for an honest client (i.e., F^C_1 = F^Q_1), the probability of success for a dishonest client is higher in the classical case. Similarly, if we restrict the two protocols to the same level of security as quantified by the probability of finding the parity of both lines, then the quantum OTP can offer better performance for honest clients than any classical scheme. §.§ Advantage over classical encodings for other gatesWe demonstrate the advantage of the quantum one-time programs with multiple bits of input over possible classical schemes. We assume that the gate-OTP is a priori equally likely to encode any of the possible gates in 𝒢_k. Although we will focus on 𝒢_2 gates, parts of this discussion can be generalized to 𝒢_k gates with k>1.We consider the probability distribution for (potentially correlated) Bernoulli random variables X_i, which are equal to 1 if and only if a query to the a noisy classical truth table encoding gate G for input i returns G(i).All probability distributions over truth tables can be described in this way, and so it can be used to obtain a bound on the trade-offs inherent in any classical scheme. 
The sender does not know in advance which lines of the truth table the client might be interested, thus his/her interest is in minimizing the worst-case probability of correctly obtaining the output for multiple lines across all sets of lines. To do that, every line is treated in an equivalent way, and so all the elements on the diagonal of the covariance matrix of the Bernoulli variables X_i will be equal, as will be all off-diagonal elements. The covariance matrix thus has the form u𝕀 - v 𝕄, where 𝕀 is the identity matrix and 𝕄 is the matrix with all entries equal to one. In order to obtain a fixed probability P_1 of correctness for a single query to a line of the truth table, it must be the case that u-v = P_1 - P_1^2.Furthermore, since the minimum eigenvalue of such a matrix is u - 2^k v and covariance matrices are positive semi-definite, it must be the case that u - v ≥ (2^k-1)v. With these arguments, it's possible to bound the probability of obtaining the correct values for two lines (indexed by x and y, with x≠ y) of such a truth table,P̃_2= E(X_x X_y) = E((X_x-P_1)(X_y-P_1)) + P_1^ 2= -v + p^2≥ -(u-v)/2^k-1 + P_1^2=P_1^2-P_1/2^k-1 + P_1^2= 2^k P_1^2 -P_1 /2^k-1As the probability of evaluating a single line of a 𝒢_2 OTP is P_1 = 0.75, a noisy classical truth table with the same success probability gives P̃_1 = 0.75 across all lines. The probability of correctly decoding a pair of lines is at least P̃_2 = 0.5, independently of the chosen pair of lines.We may now compare this to the average probability of finding the output values of encoded gates for pairs of inputs. In the quantum case, if the client is interested in a particular line of the truth table, it's possible to implement a quantum measurement strategy that is specifically tuned to increase the probability of getting this value correctly. However, this degrades the available information about the other lines. Thus, in a marked difference to the classical scheme, the probability P_1 of finding the correct value of a particular line is different (and higher) than the average probability P̃_1 of getting a line correctly when the client is trying to identify the whole truth table. The same argument is valid when the client is interested in a given pair of lines, as compared to the average probability of correctly identifying pairs of lines when trying to identify the whole truth table.Using the quantum encoding without error correction, a client interested in a given line of the truth table of a 𝒢_2 gate can correctly identify it with probability equal to 0.75. On the other hand, when the client tries to identify the whole truth table, the average success probability is only 0.625. Looking at pairs of lines, making a specific measurement can allow the client to obtain a probability of success equal to 0.5, but the average over all lines in a measurement of all lines is only 0.375. §.§ Optimal measurements We turn our attention to the measurement strategy that a dishonest client could follow if he is interested in identifying all lines of the truth table of an encoded gate G. This problem is cast as a quantum state discrimination of one state among 2^k alternatives, and the figure of merit is the probability of making a correct guess about the entirety of the truth table. We will consider the pretty good measurement strategy introduced by Hausladen and Wootters <cit.> and another strategy introduced by Jez̆ek, Rehác̆ek and Fiurás̆ek <cit.>. 
Because of the way that the gates are encoded in quantum states, these two strategies turn out to be equal and also optimal, since they obey established optimality criteria <cit.>. Considering an ensemble of states {ρ_i} with a priori distribution {q_i}, there are several good measurement strategies for distinguishing them. One candidate strategy is the pretty good measurement, or 𝒫𝒢ℳ <cit.>, for which the POVM element corresponding to an output x (where x is a 2^k-bit string representing the truth table) is M_x^𝒫𝒢ℳ = ( ∑_s q_s ρ_s )^-1/2^+ q_x ρ_x ( ∑_s q_s ρ_s )^-1/2^+, where the operation A^-1/2^+ is defined as A^-1/2^+ = ∑_j : a_j > 0 a_j^-1/2 |a_j⟩⟨a_j|, with a_j and |a_j⟩ being the eigenvalues and eigenvectors of A. In a similar manner, slightly more complex sets of measurement operators may be formed, called the Jez̆ek-Rehác̆ek-Fiurás̆ek iterative measurement operators <cit.>. These are defined recursively, with each iteration indexed by w: M_x^𝒥ℛℱ,w = ( ∑_s q_s^2 ρ_s M_s^𝒥ℛℱ,w-1 ρ_s )^-1/2^+ q_x^2 ρ_x M_x^𝒥ℛℱ,w-1 ρ_x ( ∑_s q_s^2 ρ_s M_s^𝒥ℛℱ,w-1 ρ_s )^-1/2^+, where in the first iteration M_x^𝒥ℛℱ,0 = 𝕀 / 2^k. Due to the form of our states, each satisfies ρ_x^2 = ξρ_x, ∀ x, where ξ is a proportionality constant independent of x. Coupled with the assumption that the states are a priori equiprobable (so that q_x = ζ, ∀ x, where ζ is a constant independent of x), the 𝒫𝒢ℳ operators are equal to every iteration of the 𝒥ℛℱ operators. It is known that a POVM strategy is optimal if it satisfies the following two conditions <cit.>: M_x ( q_x ρ_x - q_y ρ_y ) M_y = 0, ∀ x,y, and ∑_x=0^2^k-1 q_x ρ_x M_x - q_y ρ_y ≽ 0, ∀ y. In a numerical study, we have verified that the 𝒫𝒢ℳ strategy is optimal when ρ_x represents three or fewer copies of the gate-encoding states. §.§.§ Optimality of the measurement for single copies We now give an analytical proof that the 𝒫𝒢ℳ strategy is optimal when a single copy of the quantum states is sent. First we note that, in the case of a single copy, ∑_s q_s ρ_s = 𝕀, assuming as before that the states are a priori equiprobable (when the number of copies is larger than one, ∑_s q_s ρ_s ≠ 𝕀 even in the equiprobable case). Thus, the measurement operators M_x are proportional to the density matrices ρ_x. We will now obtain a bound on the value of tr(ρ_x M_x) using Hölder's inequality, ‖fg‖_1 ≤ ‖f‖_p ‖g‖_q, which is valid when 1/p + 1/q = 1. Assuming that each state is equally probable and using the normalization condition ∑_x M_x = 𝕀, this implies that tr(M_x) = D/2^k, ∀ x. We also note that M_x ≽ 0, as required for POVMs. Using the values p=1, q=∞, f=M_x and g=ρ_x in Hölder's inequality, we find that | tr( M_x ρ_x ) | = ‖M_x ρ_x‖_1 ≤ ‖M_x‖_1 ‖ρ_x‖_∞ = (D/2^k)(2/D) = 2^1-k. We now use the POVM given by the 𝒫𝒢ℳ (or 𝒥ℛℱ) operators and show that this saturates the above inequality: tr(M_x ρ_x) = ∑_j u_j r_j = ∑_j : u_j, r_j ≠ 0 u_j (2/D) = tr(M_x) (2/D) = (D/2^k)(2/D) = 2^1-k, where u_j and r_j are the eigenvalues of M_x and ρ_x respectively, and we have used the fact that, because M_x ∝ ρ_x, they both have degenerate eigenvalues in the same eigenbasis. As the 𝒫𝒢ℳ/𝒥ℛℱ measurement achieves the exact upper bound on |tr(ρ_x M_x)|, it must be optimal. §.§.§ Applying the optimal measurements We now look at the situation where the client uses these operators to try to learn which state was sent. We may quantify the number of lines of the truth table the client can on average obtain correctly.
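Before turning to the error statistics, we note that the 𝒫𝒢ℳ construction and the two optimality conditions above are straightforward to prototype numerically. The Python sketch below builds the 𝒫𝒢ℳ for a small illustrative ensemble and checks both conditions; the ensemble used here (four equiprobable BB84-like qubit states) is only an assumption chosen for illustration, not the gate-encoding states of the protocol, which are defined earlier in the paper.

import numpy as np

def inv_sqrt_plus(A, tol=1e-12):
    """Pseudo-inverse square root A^{-1/2^+}: invert only strictly positive eigenvalues."""
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, 1.0 / np.sqrt(np.clip(vals, tol, None)), 0.0)
    return (vecs * inv) @ vecs.conj().T

def pgm(states, probs):
    S = sum(q * r for q, r in zip(probs, states))
    s = inv_sqrt_plus(S)
    return [s @ (q * r) @ s for q, r in zip(probs, states)]

# Illustrative ensemble (assumption): |0>, |1>, |+>, |-> with equal priors.
kets = [np.array(v, dtype=complex) for v in
        ([1, 0], [0, 1], [1 / np.sqrt(2), 1 / np.sqrt(2)], [1 / np.sqrt(2), -1 / np.sqrt(2)])]
states = [np.outer(k, k.conj()) for k in kets]
probs = [0.25] * 4

M = pgm(states, probs)
# Condition (i): M_x (q_x rho_x - q_y rho_y) M_y = 0 for all x, y
cond1 = all(np.allclose(M[x] @ (probs[x] * states[x] - probs[y] * states[y]) @ M[y], 0)
            for x in range(4) for y in range(4))
# Condition (ii): sum_x q_x rho_x M_x - q_y rho_y >= 0 for all y (Hermitized for the check)
G = sum(q * r @ m for q, r, m in zip(probs, states, M))
cond2 = all(np.linalg.eigvalsh(0.5 * (G + G.conj().T) - probs[y] * states[y]).min() > -1e-10
            for y in range(4))
print("optimality conditions satisfied:", cond1 and cond2)
print("per-state success probabilities:", [float(np.trace(r @ m).real) for r, m in zip(states, M)])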
We define E_h as the probability that exactly h lines are incorrect; in other words, h is the Hamming distance between the encoded gate and the result of a measurement that tries to identify the gate. We consider the average taken over all gates in 𝒢_k, and hence over all ρ_x, E_h = ∑_x q_x ∑_s : ℋ(s,x) = h tr(ρ_x M_s), where ℋ(s,x) is the Hamming distance between the truth tables represented by s and x. From this distribution of errors, we can then obtain the average probability of correctly identifying a subset of L lines, P̃_L. For 𝒢_2 gates, whose truth table has four lines, the weight of E_h in P̃_L is the fraction of L-line subsets that avoid the h incorrect lines, \binom{4-h}{L}/\binom{4}{L}, so that P̃_1 = E_0 + (3/4) E_1 + (1/2) E_2 + (1/4) E_3, P̃_2 = E_0 + (1/2) E_1 + (1/6) E_2, P̃_3 = E_0 + (1/4) E_1, and P̃_4 = E_0. In Fig. p1vsp1 of the Supplementary Information, P̃_1 in the quantum case is plotted against P̃_1 in the classical case, which is simply the probability that a single line is correct, for 𝒢_2 gates. This shows a clear quantum advantage for noisy one-time programs. §.§ Description of the Private Key Signature scheme This scheme allows Alice to delegate to Bob the power of digitally signing a message of his choice once and only once. To realize this, Alice's digital signature will be formed by the output of one-time programs. These OTPs take Bob's message as an input and output a valid signature. To allow the signing algorithm to work on a fixed-size input, Bob creates a hash of his message using the SHA3-224 protocol (there is no particular theoretical reliance on this or any particular hash, but we chose to use SHA3-224 in our demonstration). The signature is verified by Alice, the programmer, by comparing the generated signature against the ideal one that would be produced in the case of perfect OTPs. For each bit of the hash the client is provided with T OTPs, each of which is chosen uniformly at random from the set of 𝒢_1 OTPs (in principle we could use 𝒢_k gates, but we chose to use k=1 in our demonstration). The client makes measurements on these states according to the corresponding bit of his hash, producing an array where each row corresponds to the output bits for a single hash bit. The signature is deemed to pass if each row is correct in at least τ places, where the threshold τ is an integer predetermined by the programmer. We now show how the scheme displays a clear example of a situation where even probabilistic OTPs may be used to implement a program which works with a high probability of success. We compare the probability of success of passing the verification step for an honest client signing one message to the probability of passing the verification step twice for a dishonest client signing two messages which hash to different values. We will consider the cases where the hashes differ by only one bit. This is a worst-case scenario in which an adversary has the maximum probability of cheating successfully. The threshold value τ is chosen to maximise the difference between the success probabilities for an honest and a dishonest client in such a case. Probability that a dishonest client can pass the verification step for a single bit of the hash: The two signatures taken together constitute a string of length 2T. Each signature needs to be correct in at least τ places to pass the verification stage, and thus a necessary (but not sufficient) condition for the combined string to pass is that it matches the concatenation of the two ideal signatures in 2τ places.
We place an upper bound on the probability of this happening by using a method similar to that used by Vazirani <cit.>. The two ideal signatures are encoded in T qubits, as is the case when we are sending T 𝒢_1 OTPs. Each of the 2T-bit strings corresponding to possible signatures is mapped to a pure state |ϕ_x⟩, while a measurement that would output a 2T-bit string y is associated with a projector P_y. This can be done without loss of generality, since the measurement projectors can be defined in a larger Hilbert space than the received OTP state: |ϕ_x⟩ may contain an arbitrary number of additional ancilla qubits. The probability that at most h mistakes are made in such a decoding protocol is given by 𝒫 ≡ Prob( H(x,y) ≤ h ) = 1/2^2T ∑_x,y : H(x,y) ≤ h tr(P_y | ϕ_x ⟩⟨ϕ_x | ), where H(x,y) is the Hamming distance between the strings x and y. At this point it is helpful to analyse some properties of the specific way in which the |ϕ_x⟩ states are defined. The 2T bits of x are split into pairs corresponding to the i-th bit of each signature, and each pair is encoded in a qubit using the model for 𝒢_1 gate-OTPs. Thus, every such state can be written as |ϕ_y⟩ = |ϕ_y_1⟩⊗|ϕ_y_2⟩⊗⋯⊗|ϕ_y_T⟩⊗|𝒜⟩, where y_k ∈{00,01,10,11} and |𝒜⟩ represents the state of an arbitrary-dimensional ancilla, which does not depend on y. Two states |ϕ_x⟩ and |ϕ_y⟩ are orthogonal if there is at least one pair of bits (encoded in the same qubit) which differs between x and y in both bits. This suggests a way to find an orthonormal basis for this space: starting with any |ϕ_y⟩, other states are obtained by negating pairs of bits of y. Since there are T pairs to negate and all states obtained this way are orthogonal to each other, they form an orthonormal basis with 2^T elements. Given that the space spanned by the possible OTP states is of dimension 2^T, these 2^T orthonormal states must span the space generated by all |ϕ_y⟩ states. We call this the y-basis. Using these properties, we argue that the operator ∑_x : H(x,y) ≤ h | ϕ_x ⟩⟨ϕ_x | is diagonal in the y-basis just defined, and that |ϕ_y⟩ is the eigenvector corresponding to its largest eigenvalue. To see this, we need to consider which strings x appear in the sum. Specifically, for each string x where the first bit of a given pair does not match the corresponding bit in y (but the second bit of that pair does match), there is also another string where the first bit matches but the second bit does not. These strings are always both included or both excluded, because the Hamming distance between each of them and y is the same. The mixture associated with these two states is diagonal in the y-basis, even though neither of them is individually. With the eigenvectors already found, the task is to find the eigenvalues. For an eigenvector |ϕ_z⟩, the eigenvalue depends on how many strings x not orthogonal to z are included in the summation. Because the summation over x is centered around y (in the sense of the Hamming distance), the eigenvector |ϕ_y⟩ has the highest number of strings x appearing in the sum, so its eigenvalue is the largest one.
By a counting argument, its specific value can be obtained. Strings x that appear in the sum are at Hamming distance at most h from y, but if both bits of a given pair differ between x and y then the state corresponding to this string does not contribute. If a pair is equal in x and y, then the contribution to ⟨ϕ_y |ϕ_x⟩⟨ϕ_x |ϕ_y⟩ corresponding to that qubit is 1. If a pair has x and y differing in one bit, the contribution to ⟨ϕ_y |ϕ_x⟩⟨ϕ_x |ϕ_y⟩ is 1/2, but because there are two such states, their sum also contributes 1. Thus, for a given Hamming distance w between x and y, we must count only terms where there is at most one difference per pair, with each configuration contributing 1. The eigenvalue corresponding to |ϕ_y⟩ is then λ = ∑_w=0^h \binom{T}{w}. This was explicitly checked for small values of T by numerical diagonalization. We can now find an upper bound on the probability Prob( H(x,y) ≤ h ) that a dishonest client makes at most h mistakes in the determination of the 2T-bit string corresponding to the ideal signatures for two distinct messages. Continuing from Equation <ref>, we have that 𝒫 = 1/2^2T ∑_y tr(P_y ∑_x : H(x,y) ≤ h | ϕ_x ⟩⟨ϕ_x |) ≤ 1/2^2T ∑_y tr(P_y Q) ∑_w=0^h \binom{T}{w}, where Q is a projector onto the codespace spanned by the codewords |ϕ_x⟩, which has dimension 2^T. Then, 𝒫 ≤ 1/2^2T tr( (∑_y P_y) Q ) ∑_w=0^h \binom{T}{w} = 1/2^2T tr(Q) ∑_w=0^h \binom{T}{w} = 1/2^T ∑_w=0^h \binom{T}{w}. If h/T < 1/2 - ϵ, for any positive constant ϵ, the probability of obtaining an output string within Hamming distance h of the ideal signature string is exponentially small in T. Returning to the definition of h as 2T - 2τ, we see that the exponential suppression happens when the ratio τ/T is fixed as any constant greater than 3/4. We now have an upper bound for the probability of success of a dishonest client passing the verification step for a single bit of the hash for two different inputs. As we assume a worst-case scenario, where the hashes differ in only a single bit, the client can follow the honest scenario for all other bits of his hash. Therefore, the overall probability that a dishonest client signs two such messages is simply given by the product of the individual success probabilities per bit. It becomes increasingly unlikely that the client is able to sign two messages if the required threshold for signing one message is set as a constant fraction α > 3/4 of T. If the threshold τ is set lower than (1/2 + 1/(2√(2)))T ≈ 0.85 T, the honest client is able to sign a single message with probability that approaches 1 as T is increased. In conclusion, when the threshold τ is chosen to lie between 0.75 T and 0.85 T, a client can sign one message with high probability but can sign two messages only with low probability. In the limit of high T, these probabilities tend to 1 and 0, respectively. For practical reasons, as a trade-off between security and speed, we chose the values T = 300 and τ = 234, which result in a client being able to sign one message with probability 97%, but with a probability smaller than 4% of signing two messages whose hashes differ in only one bit. This is an upper bound for the cases where the hashes differ in more than one bit. Another interesting feature of the protocol is that it does not require a perfect implementation of the quantum states.
Noise can be tolerated as long as the probability of obtaining a correct outcome for a single line of the 𝒢_1 OTP is higher than 75%, provided that τ is chosen accordingly and T is high enough such that the client can sign one message with reasonably high probability.
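A short calculation makes the trade-off quoted above concrete. The Python sketch below evaluates, for T = 300 and τ = 234, the probability that an honest client passes verification for all 224 bits of a SHA3-224 hash (assuming the single-copy value F^Q_1 = 1/2 + 1/(2√2) per measurement) and the 2^-T ∑_{w ≤ 2T-2τ} \binom{T}{w} upper bound on a dishonest client passing the critical hash bit; both come out close to the 97% and below-4% figures quoted above.

from math import comb, sqrt

T, tau, hash_bits = 300, 234, 224          # protocol parameters from the text
p1 = 0.5 + 0.5 * sqrt(0.5)                 # honest per-measurement success, F^Q_1 with c = 1

# Honest client: each hash bit passes if at least tau of its T outcomes are correct.
per_bit_pass = sum(comb(T, s) * p1**s * (1 - p1)**(T - s) for s in range(tau, T + 1))
honest_all_bits = per_bit_pass ** hash_bits

# Dishonest client: bound 2^{-T} * sum_{w<=h} C(T, w) with h = 2T - 2*tau allowed mistakes.
h = 2 * T - 2 * tau
dishonest_bit_bound = sum(comb(T, w) for w in range(h + 1)) / 2**T

print(f"honest signer succeeds with probability ~ {honest_all_bits:.3f}")
print(f"cheating bound on the critical hash bit ~ {dishonest_bit_bound:.3f}")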
http://arxiv.org/abs/1709.09724v3
{ "authors": [ "Marie-Christine Roehsner", "Joshua A. Kettlewell", "Tiago B. Batalhão", "Joseph F. Fitzsimons", "Philip Walther" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170927202137", "title": "Quantum advantage for probabilistic one-time programs" }
Authors contributed equally. Department of Physics, University of Basel, 4056 Basel, Switzerland Authors contributed equally. Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland Laboratory of Semiconductor Materials, Institute of Materials (IMX), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland Lehrstuhl für Physik funktionaler Schichtsysteme, Physik Department E10, Technische Universität München, 85747 Garching, Germany Solid State Physics, Lund University, 22100 Lund, Sweden Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Forschungszentrum Jülich, 52425 Jülich, Germany Laboratory of Semiconductor Materials, Institute of Materials (IMX), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland Laboratory of Nanoscale Magnetic Materials and Magnonics, Institute of Materials (IMX), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland Department of Physics, University of Basel, 4056 Basel, Switzerland [email protected] http://poggiolab.unibas.ch/ We use a scanning nanometer-scale superconducting quantum interference device to map the stray magnetic field produced by individual ferromagnetic nanotubes (FNTs) as a function of applied magnetic field.The images are taken as each FNT is led through magnetic reversal and are compared with micromagnetic simulations, which correspond to specific magnetization configurations.In magnetic fields applied perpendicular to the FNT long axis, their magnetization appears to reverse through vortex states, i.e.configurations with vortex end domains or – in the case of a sufficiently short FNT – with a single global vortex.Geometrical imperfections in the samples and the resulting distortion of idealized mangetization configurations influence the measured stray-field patterns.Imaging stray magnetic field of individual ferromagnetic nanotubes M. Poggio December 30, 2023 ====================================================================As the density of magnetic storage technology continues to grow, engineering magnetic elements with both well-defined remnant states and reproducible reversal processes becomes increasingly challenging. 
Nanometer-scale magnets have intrinsically large surface-to-volume ratios, making their magnetization configurations especially susceptible to roughness and exterior imperfections. Furthermore, poor control of surface and edge domains can lead to complicated switching processes that are slow and not reproducible <cit.>. One approach to address these challenges is to use nanomagnets that support remnant flux-closure configurations. The resulting absence of magnetic charge at the surface reduces its role in determining the magnetic state and can yield stable remnant configurations with both fast and reproducible reversal processes. In addition, the lack of stray field produced by flux-closure configurations suppresses interactions between nearby nanomagnets. Although the stability of such configurations requires dimensions significantly larger than the dipolar exchange length, the absence of dipolar interactions favors closely packed elements and thus high-density arrays <cit.>. On the nanometer scale, core-free geometries such as rings <cit.> and tubes <cit.> have been proposed as hosts of vortex-like flux-closure configurations with magnetization pointing along their circumference. Such configurations owe their stability to the minimization of magnetostatic energy at the expense of exchange energy. Crucially, the lack of a magnetic core removes the dominant contribution to the exchange energy, which otherwise compromises the stability of vortex states. Here, we image the stray magnetic field produced by individual ferromagnetic nanotubes (FNTs) as a function of applied field using a scanning nanometer-scale superconducting quantum interference device (SQUID). These images show the extent to which flux closure is achieved in FNTs of different lengths as they are driven through magnetic reversal. By comparing the measured stray-field patterns to the results of micromagnetic simulations, we then deduce the progression of magnetization configurations involved in magnetization reversal. Mapping the magnetic stray field of individual FNTs is challenging, due to their small size and correspondingly small magnetic moment. Despite a large number of theoretical studies discussing the configurations supported in FNTs <cit.>, experimental images of such states have so far been limited in both scope and detail. Cantilever magnetometry <cit.>, SQUID magnetometry <cit.>, and magnetotransport measurements <cit.> have recently shed light on the magnetization reversal process in FNTs, but none of these techniques yield spatial information about the stray field or the configuration of magnetic moments. Li et al. interpreted the nearly vanishing contrast in a magnetic force microscopy (MFM) image of a single FNT in remanence as an indication of a stable global vortex state, i.e.
a configuration dominated by a single azimuthally-aligned vortex <cit.>.Magnetization configurations in rolled-up ferromagnetic membranes between 2 and 16 μm in diameter have been imaged using magneto-optical Kerr effect <cit.>, x-ray transmission microscopy <cit.>, x-ray magnetic dichroism photoemission electron microscopy (XMCD-PEEM) <cit.>, and magnetic soft x-ray tomography <cit.>.More recently, XMCD-PEEM was used to image magnetization configurations in FNTs of different lengths <cit.>.Due to technical limitations imposed by the technique, measurement as a function of applied magnetic field was not possible.We use a scanning SQUID-on-tip (SOT) sensor to map the stray field produced by FNTs as a function of position and applied field.We fabricate the SOT by evaporating Pb on the apex of a pulled quartz capillary according to a self-aligned method pioneered by Finkler et al. and perfected by Vasyukov et al. <cit.>.The SOT used here has an effective diameter of 150 nm, as extracted from measurements of the critical current I_SOT as a function of a uniform magnetic field 𝐇_0 = H_0 ẑ applied perpendicular to the SQUID loop. At the operating temperature of 4.2 K, pronounced oscillations of critical current are visible as a function of H_0 up to 1 T.The SOT is mounted in a custom-built scanning probe microscope operating under vacuum in a ^4He cryostat.Maps of the magnetic stray field produced by individual FNTs are made by scanning the FNTs lying on the substrate in the xy-plane 300 nm below the SOT sensor, as shown schematically in Fig. <ref> (a).The current response of the sensor is proportional to the magnetic flux threaded through the SQUID loop.For each value of the externally applied field H_0, a factor is extracted from the current-field interference pattern to convert the measured current I_SOT to the flux.The measured flux then represents the integral of the z-component of the total magnetic field over the area of the SQUID loop.By subtracting the contribution of H_0, we isolate the z-component of stray field, H_dz integrated over the area of the SOT at each spatial position.FNT samples consist of a non-magnetic GaAs core surrounded by a 30-nm-thick magnetic shell of CoFeB with hexagonal cross-section. CoFeB is magnetron-sputtered onto template GaAs nanowires (NWs) to produce an amorphous and homogeneous shell <cit.>, which is designed to avoid magneto-crystalline anisotropy <cit.>. Nevertheless, recent magneto-transport experiments show that a small growth-induced magnetic anisotropy may be present <cit.>.Scanning electron micrographs (SEMs) of the studied FNTs, as in Fig. <ref> (c), reveal continuous and defect-free surfaces, whose roughness is less than 5 nm.Figs. 
<ref> (d) and (e) show cross-sectional high-angle annular dark-field (HAADF) scanning transmission electron micrographs (STEM) of two FNTs from the same growth batch as those measured, highlighting the possibility for asymmetry due to the deposition process.Dynamic cantilever magnetometry measurements of representative FNTs show μ_0 M_S = 1.3 ± 0.1 T <cit.>, where μ_0 is the permeability of free space and M_S is the saturation magnetization.Their diameter d, which we define as the diameter of the circle circumscribing the hexagonal cross-section, is between 200 and 300 nm.Lengths from 0.7 to 4 μm are obtained by cutting individual FNTs into segments using a focused ion beam (FIB).After cutting, the FNTs are aligned horizontally on a patterned Si substrate.All stray-field progressions are measured as functions of H_0, which is applied perpendicular to the substrate and therefore perpendicular to the long axes of each FNT.Gross et al. found that similar CoFeB FNTs are fully saturated by a perpendicular field for |μ_0 H_0| > 1.2 T at T = 4.2 K <cit.>.Since the superconducting SQUID amplifier used in our measurement only allows measurements for |μ_0 H_0| ≤ 0.6 T, all the progressions measured here represent minor hysteresis loops.Fig. <ref> (a) shows the stray field maps of a 4-μm-long FNT for a series of fields as μ_0 H_0 is increased from -0.6 to 0.6 T. The maps reveal a reversal process roughly consistent with a rotation of the net FNT magnetization.At μ_0 H_0 = -249 mT and at more negative fields , H_dz is nearly uniform above the FNT, indicating that its magnetization is initially aligned along the applied field and thus parallel to -ẑ.As the field is increased toward positive values, maps of H_dz show an average magnetization ⟨𝐌⟩, which rotates toward the long axis of the FNT.Near H_0 = 0, the two opposing stray field lobes at the ends of the FNT are consistent with an ⟨𝐌⟩ aligned along the long axis.With increasing positive H_0, the reversal proceeds until the magnetization aligns along ẑ.The simulated stray-field maps, shown in Fig. <ref> (b), are generated by a numerical micromagnetic model of the equilibrium magnetization configurations.We use the software package Mumax3 <cit.>, which employs the Landau-Lifshitz micromagnetic formalism with finite-difference discretization.The length l = 4.08 μm and diameter d = 260 nm of the FNT are determined by SEMs of the sample, while the thickness t = 30 nm is taken from cross-sectional TEMs of samples from the same batch.As shown in Fig. <ref>, the simulated stray-field distributions closely match the measurements.The magnetization configurations extracted from the simulations are non-uniform, as shown in Fig. <ref> (c).In the central part of the FNT, the magnetization of the different facets in the hexagonal FNT rotates separately as a function of H_0, due to their shape anisotropy and their different orientations.As H_0 approaches zero, vortices nucleate at the FNT ends, resulting in a low-field mixed state, i.e. 
a configuration in which magnetization in the central part of the FNT aligns along its long axis and curls into azimuthally-aligned vortex domains at the ends.Experimental evidence for such end vortices has recently been observed by XMCD-PEEM <cit.> and DCM <cit.> measurements of similar FNTs at room-temperature.We also measured and simulated a 2-μm-long FNT of similar cross-sectional dimensions.It shows an analogous progression of stray field maps as a function of H_0 (see supplementary material).Simulations suggest a similar progression of magnetization configurations, with a mixed state in remnance.FNTs shorter than 2 μm exhibit qualitatively different stray-field progressions.Measurements of a 0.7-μm-long FNT are shown in Fig. <ref> (a).A stray-field pattern with a single lobe persists from large negative field to μ_0 H_0 = -15 mT without an indication of ⟨𝐌⟩ rotating towards the long axis.Near zero field, a stray-field map characterized by an 'S'-like zero-field line appears (white contrast in Fig. <ref> (a)).At more positive fields, a single lobe again dominates.A similar progression of stray field images is also observed upon the reversal of a 1-μm-long FNT (not shown).In order to infer the magnetic configuration of the FNT, we simulate its equilibrium configuration as a function of H_0 using the sample's measured parameters: l = 0.69 μm, d = 250 nm and t = 30 nm.For a perfectly hexagonal FNT with flat ends, the simulated reversal proceeds through different, slightly distorted global vortex states, which depend on the initial conditions of the magnetization. Such simulations do not reproduce the 'S'-like zero-field line observed in the measured stray-field maps.However, when we consider defects and structural asymmetries likely to be present in the measured FNT, the simulated and measured images come into agreement.In these refined simulations, we first consider the magnetic 'dead-layer' induced by the FIB cutting of the FNT ends as previously reported <cit.>. We therefore reduce the length of the simulated FNT by 100 nm on either side.Second, we take into account that the FIB-cut ends of the FNT are not perfectly perpendicular to its long axis.SEMs of the investigated FNT show that the FIB cutting process results in ends slanted by 10^∘ with respect to ẑ.Finally, we consider that the 30-nm-thick hexagonal magnetic shell may be asymmetric, i.e.slightly thicker on one side of the FNT due to an inhomogeneous deposition, e.g. Fig. <ref> (e).With these modifications, the simulated reversal proceeds through at least four different possible stray-field progressions depending on the initial conditions.Only two of these, shown in Figs. <ref> (b) and (c), produce stray-field maps which resemble the measurement.The measured stray-field images are consistent with the series shown in Fig. <ref> (b) for negative fields (μ_0 H_0 = -45, -15 mT).As the applied field crosses zero (-15 mT≤μ_0 H_0 ≤ 14 mT), the FNT appears to change stray-field progressions.The images taken at positive fields (14 mT≤μ_0 H_0), show patterns consistent with the series shown in Fig. <ref> (c).The magnetic configurations corresponding to these simulated stray-field maps suggest that the FNT occupies a slightly distorted global vortex state.Before entering this state, e.g. at μ_0 H_0 = -45 mT, the simulations show a more complex configuration with magnetic vortices in the top and bottom facets, rather than at the FNT ends.On the other hand, at similar reverse fields, e.g. 
μ_0 H_0 = 57 mT, the FNT is shown to occupy a distortion of the global vortex state with a tilt of the magnetization toward the FNT long axis in some of the hexagonal facets. For some minor loop measurements of short FNTs (l ≤ 1 μm), we obtain stray-field patterns that the micromagnetic simulations do not reproduce. Two such cases are shown in Fig. <ref>, where (a) represents the stray-field pattern measured above a 0.7-μm-long FNT at μ_0 H_0 = 20 mT and (d) the pattern measured above a 1-μm FNT at μ_0 H_0 = 21 mT. Both of these stray-field maps are qualitatively different from the results of Fig. <ref>. Since the simulations do not provide equilibrium magnetization configurations that generate these measured stray-field patterns, we test a few idealized configurations in search of possible matches. In particular, the measured pattern shown in Fig. <ref> (a) is similar to the pattern produced by an opposing vortex state. This configuration, shown in Fig. <ref> (c), consists of two vortices of opposing circulation sense, separated by a domain wall. It was observed with XMCD-PEEM to occur in similar-sized FNTs <cit.> in remanence at room temperature. The pattern measured in Fig. <ref> (e) appears to match the stray field produced by a multi-domain state consisting of two head-to-head axial domains separated by a vortex domain wall and capped by two vortex ends, shown in Fig. <ref> (f). Although these configurations are not calculated to be equilibrium states for these FNTs in a perpendicular field, they have been suggested as possible intermediate states during reversal of axial magnetization in a longitudinal field <cit.>. The presence of these anomalous configurations in our experiments may be due to incomplete magnetization saturation or imperfections not taken into account by our numerical model. Wyss et al. showed that the types of remnant states that emerge in CoFeB FNTs depend on their length <cit.>. For FNTs of these cross-sectional dimensions longer than 2 μm, the equilibrium remnant state at room temperature is the mixed state, while shorter FNTs favor global or opposing vortex states. Here, we confirm these observations at cryogenic temperatures by mapping the magnetic stray field produced by the FNTs rather than their magnetization. In this way, we directly image the defining property of flux-closure configurations, i.e. the extent to which their stray field vanishes. In fact, we find that the imperfect geometry of the FNTs causes even the global vortex state to produce stray fields on the order of 100 μT at a distance of 300 nm. Finer control of the sample geometry is required in order to reduce this stray field and for such devices to be considered as elements in ultra-high-density magnetic storage. Using the scanning SQUID's ability to make images as a function of applied magnetic field, we also reveal the progression of stray-field patterns produced by the FNTs as they reverse their magnetization. Future scanning SOT experiments in parallel applied fields could further test the applicability of established theory to real FNTs <cit.>. While the incomplete flux closure and the presence of magnetization configurations not predicted by simulation indicate that FNT samples still cannot be considered ideal, scanning SOT images show the promise of using geometry to program both the overall equilibrium magnetization configurations and the reversal process in nanomagnets. Methods. SOT Fabrication. SOTs were fabricated according to the technique described by Vasyukov et al.
<cit.> using a three-step evaporation of Pb on the apex of a quartz capillary, pulled to achieve the required SOT diameter.The evaporation was performed in a custom-made evaporator with a base pressure of 2 × 10^-8 mbar and a rotateable sample holder cooled by liquid He.In accordance with Halbertal et al. <cit.>, an additional Au shunt was deposited close to the tip apex prior to the Pb evaporation for protection of the SOTs against electrostatic discharge.SOTs were characterized in a test setup prior to their use in the scanning probe microscope.SOT Positioning and Scanning.Positioning and scanning of the sample below the SOT is carried out using piezo-electric positioners and scanners (Attocube AG).We use the sensitivity of the SOT to both temperature and magnetic field <cit.> in combination with electric current, which is passed through a serpentine conductor on the substrate, to position specific FNTs under the SOT (see supplementary material).FNT Sample Preparation.The template NWs, onto which the CoFeB shell is sputtered, are grown by molecular beam epitaxy on a Si (111) substrate using Ga droplets as catalysts <cit.>.During CoFeB sputter deposition, the wafers of upright and well-separated GaAs NWs are mounted with a 35^∘ angle between the long axis of the NWs and the deposition direction.The wafers are then continuously rotated in order to achieve a conformal coating.In order to obtain NTs with different lengths and well-defined ends, we cut individual NTs into segments using a Ga FIB in a scanning electron microscope.After cutting, we use an optical microscope equipped with precision micromanipulators to pick up the FNT segments and align them horizontally onto a Si substrate.FNT cross-sections for the HAADF STEMs were also prepared using a FIB.Mumax3 Simulations.To simulate the CoFeB FNTs, we set μ_0 M_S to its measured value of 1.3 and the exchange stiffness to A_ex = 28/. The external field is intentionally tilted by 2^∘ with respect to ẑ in both the xz- and the yz-plane, in order to exclude numerical artifacts due to symmetry.This angle is within our experimental alignment error.The asymmetry in the magnetic cross-section of an FNT, seen in Fig. <ref> (e), is generated by removing a hexagonal core from a larger hexagonal wire, whose axis is slightly shifted.In this case, the wire's diameter is 30 nm larger than the core's diameter and we shift the core's axis below that of the wire by 5 nm.In order to rule out spurious effects due to the discretization of the numerical cells, the cell size must be smaller than the ferromagnetic exchange length of 6.5 nm.This criterion is fulfilled by using a 5-nm cell size to simulate the 0.7-μm-long FNT.For the 4-μm-long FNT, computational limitations force us to set the cell size to 8 nm, such that the full scanning field can be calculated in a reasonable amount of time.Given that the cell size exceeds the exchange length, the results are vulnerable to numerical artifacts.To confirm the reliability of these simulations, we perform a reference simulation with a 4-nm cell size.Although the magnetic states are essentially unchanged by the difference in cell size, the value of the stray field is altered by up to 10%.We thank Jordi Arbiol and Rafal Dunin-Borkowski for work related to TEM, Sascha Martin and the machine shop of the Department of Physics at the University of Basel for technical support, and I. Dorris for helpful discussions.We acknowledge the support of the Canton Aargau, ERC Starting Grant NWScan (Grant No. 
334767), the SNF under Grant No. 200020-159893, the Swiss Nanoscience Institute, the NCCR Quantum Science and Technology (QSIT), and the DFG Schwerpunkt Programm “Spincaloric transport phenomena” SPP1538 via Project No. GR1640/5-2.
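To make the comparison between simulated magnetization configurations and the measured H_dz maps (see the Mumax3 Simulations paragraph above) concrete, the following Python sketch shows one simple way a discretized magnetization state could be post-processed into a stray-field map at the 300-nm scan height: each simulation cell is treated as a point dipole and the finite SQUID-loop area is neglected. This is an illustrative reconstruction, not the analysis code actually used in this work, and the cell positions and moments it expects are placeholders.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def stray_bz_map(cell_centers, moments, scan_xy, height=300e-9):
    """z-component of the dipolar stray field at `height` above the substrate.

    cell_centers : (N, 3) positions of the discretization cells (m)
    moments      : (N, 3) magnetic moment of each cell, m = M * V_cell (A*m^2)
    scan_xy      : (P, 2) lateral scan positions (m)
    Returns Bz at each scan position (T); averaging over the 150-nm SQUID loop
    would be an additional convolution step, omitted here.
    """
    probe = np.column_stack([scan_xy, np.full(len(scan_xy), height)])
    bz = np.zeros(len(probe))
    for r0, m in zip(cell_centers, moments):
        r = probe - r0                       # (P, 3) separation vectors
        d = np.linalg.norm(r, axis=1)
        mdotr = r @ m
        # z-component of the point-dipole field
        bz += MU0 / (4 * np.pi) * (3 * r[:, 2] * mdotr / d**5 - m[2] / d**3)
    return bz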
http://arxiv.org/abs/1709.09652v1
{ "authors": [ "D. Vasyukov", "L. Ceccarelli", "M. Wyss", "B. Gross", "A. Schwarb", "A. Mehlin", "N. Rossi", "G. Tütüncüoglu", "F. Heimbach", "R. R. Zamani", "A. Kovács", "A. Fontcuberta i Morral", "D. Grundler", "M. Poggio" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170927174212", "title": "Imaging stray magnetic field of individual ferromagnetic nanotubes" }
[email protected] Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854-8019, USATCM Group, Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom Department of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854-8019, USADepartment of Physics and Astronomy, Rutgers University, Piscataway, New Jersey 08854-8019, USA The perovskite BaSnO_3 provides a promising platform for the realization of an earth abundant n-type transparent conductor. Its optical properties are dominated by a dispersive conduction band of Sn 5s states, and by a flatter valence band of O 2p states, with an overall indirect gap of about 2.9 eV.Using first-principles methods, we study the optical properties of BaSnO_3 and show that both electron-phonon interactions and exact exchange, included using a hybrid functional, are necessary to obtain a qualitatively correct description of optical absorption in this material. In particular, the electron-phonon interaction drives phonon-assisted optical absorption across the minimum indirect gap and therefore determines the absorption onset, and it also leads to the temperature dependence of the absorption spectrum. Electronic correlations beyond semilocal density functional theory are key to detemine the dynamical stability of the cubic perovskite structure, as well as the correct energies of the conduction bands that dominate absorption. Our work demonstrates that phonon-mediated absorption processes should be included in the design of novel transparent conductor materials. Phonon-assisted optical absorption in BaSnO_3 from first principles Karin M. Rabe December 30, 2023 ===================================================================§ INTRODUCTION Transparent conductors are materials which simultaneously exhibit optical transparency in the visible and high carrier mobilities. <cit.> As such, they are of interest for optoelectronic applications like photovoltaics and flat panel displays. Commercially available transparent conductors are mostly based on In_2O_3, a large band gap semiconductor, that exhibits high conductivities when doped with tin. However, the scarcity of indium and the associated high costs have fuelled a significant research effort to find alternative materials that could act as transparent conductors.Several strategies for designing novel transparent conductors have been pursued. The most widespread route is to replace In_2O_3 with alternative large band gap oxide semiconductors, and in this paper we focus on one of the most promising examples in this area, BaSnO_3. <cit.> Alternative routes include the use of non-oxide semiconductors, <cit.> graphene, <cit.> metal nanowires, <cit.>, correlated metals, <cit.> or band engineering of the bulk. <cit.>The cubic perovskite BaSnO_3 has emerged as a promising candidate for transparent conductor applications due to the high n-type mobilities exhibited when doped with lanthanum, while retaining its favourable optical properties. <cit.> The high mobility is belived to arise from the Sn 5s character of the conduction band, which provides a small effective mass <cit.> and a low density of states reducing the phase space available for electron scattering. <cit.> The optical properties are dominated by an indirect gap that marks the absorption onset at 2.9 eV, and a strong optical absorption starting at 3.1 eV and marking the direct gap. 
<cit.> Thin films of doped samples exhibit transmittances of about 80% in the visible range. <cit.>While first-principles methods have been used extensively to study the structure, <cit.> dynamics, <cit.> band structure, <cit.> transport, <cit.> and doping <cit.> of BaSnO_3, a full characterization of the optical properties is missing. This is because optical absorption in indirect gap semiconductors is mediated by lattice vibrations and therefore a full description of the absorption onset in BaSnO_3 requires the inclusion of electron-phonon interactions. Although the theory of phonon-assisted optical absorption has been known for a long time, <cit.> it has only recently been combined with first-principles methods to study absorption in indirect gap materials <cit.> and free-carrier absorption in doped semiconductors. <cit.>In this paper, we use first principles methods to study the optical absorption spectrum of BaSnO_3 including the effects of lattice vibrations. We find that if phonon mediated processes are included, then (i) the absorption onset of BaSnO_3 coincides with the indirect band gap, (ii) the indirect absorption onset is redshifted by 0.1 eV in going from 0 K to 300 K, and (iii) absorption below 7 eV, corresponding to the highly dispersive conduction Sn 5s band, is noticeably enhanced compared to the static lattice counterpart.We have performed calculations using both semilocal and hybrid exchange correlation functionals, and find that the lattice stability, phonon dispersion, electronic band structure, and absorption spectrum are highly sensitive to the choice of functional, with the hybrid functional providing the best agreement with available experimental data. Overall, our results demonstrate that an accurate description of the optical properties of BaSnO_3 requires the inclusion of both electron-phonon interactions and electron-electron interactions beyond semilocal density functional theory. The rest of the paper is organized as follows. In Sec. <ref> we present the calculated equilibrium properties of BaSnO_3 and in Sec. <ref> we describe our lattice dynamics results. In both sections we pay particular emphasis to the choice of exchange-correlation functional. In Sec. <ref> we describe the phonon-assisted optical absorption formalism, and the results for the phonon-assisted indirect gap absorption as well as the temperature dependence of the absorption spectrum. We summarize our findings and reach our conclusions in Sec. <ref>. § EQUILIBRIUM PROPERTIES §.§ Computational detailsOur first principles calculations are performed using density functional theory (DFT) <cit.> within the projector augmented-wave method <cit.> as implemented in vasp. <cit.> We use an energy cut-off of 400 eV and a BZ grid size of 8×8×8 𝐤-points for the primitive cell, and commensurate grids for the supercells. We test four distinct exchange correlation functionals for the calculation of the equilibrium volumes, electronic energy bands, and lattice dynamics, namely the local density approximation (LDA), <cit.> the generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE), <cit.> the PBE approximation for solids (PBEsol), <cit.> and the hybrid Heyd-Scuseria-Ernzerhof functional (HSE). <cit.> §.§ Structure BaSnO_3 has the cubic perovskite structure of space group Pm3m with 5 atoms in the primitive cell. We minimize the energy with respect to the cubic lattice parameter a to find the equilibrium structure, and the results are presented in Table <ref>. 
The lattice parameter increases in the sequence of semilocal functionals LDA, PBEsol, and PBE, as expected. The hybrid HSE functional leads to a lattice parameter in close agreement with that of PBEsol, and overall the PBEsol and HSE lattice parameters agree best with experiment. §.§ Electronic band structureWe show the band structure of BaSnO_3 calculated using the LDA and HSE functionals in Fig. <ref>. The valence bands are dominated by states of O 2p character, with the valence band maximum at the R (1/2,1/2,1/2) point. The conduction band minimum is located at the Γ point, with a band dominated by states of Sn 5s character that endow it with a strong dispersion and low effective mass. At about 4 eV above the minimum of the conduction band, we find bands dominated by Ba 5d states which exhibit a smaller dispersion. Our results agree with previous analysis. <cit.>The minimum band gap is indirect and has a value of 1.30 eV at the LDA level, and of 2.73 eV at the HSE level. By comparison, experimental estimates of the minimum gap are in the range 2.90–3.10 eV <cit.>. The minimum direct gap occurs at the Γ point and has a value of 1.83 eV for LDA and 3.21 eV for HSE, with experimental estimates in the range 3.10–3.60 eV <cit.>. We note that the spread in experimental gaps might be related to the use of single crystals or thin films, with the latter providing larger estimates for the band gap sizes. We also note that our HSE band gaps are slightly larger than those previously reported. <cit.> We have observed that the band gap size at the HSE level is rather sensitive to the value of the lattice parameter, and we ascribe the difference with previous reports to the smaller lattice parameter in our calculations. We provide a list of the minimum indirect and direct gaps calculated with a range of exchange correlation functionals in Table <ref>.It is important to note from Fig. <ref> that the effect of including electron-electron correlations beyond semilocal DFT using the hybrid HSE functional is not limited to a rigid shift of the conduction bands. As an example, we consider the Γ point gap between the O 2p valence band and the Sn 5s band at the bottom of the conduction band, denoted by E_g(O2p→Sn5s) (the same as E_g^direct), and the Γ point gap between the O 2p valence band and the Ba 5d band, denoted by E_g(O2p→Ba5d). The E_g(O2p→Sn5s) gap has values of 1.83 eV in LDA and 3.21 eV in HSE, leading to a shift of 1.38 eV. By comparison, the E_g(O2p→Ba5d) gap has values of 5.32 eV in LDA and 7.04 eV in HSE, leading to a shift of 1.72 eV. Rather than only undergoing a rigid shift, the bands are also stretched when electron-electron correlations beyond semilocal DFT are included. § LATTICE DYNAMICS The lattice dynamics calculations have been performed using the same numerical parameters as those reported in Sec. <ref>, and the lattice parameters reported in Table <ref> corresponding to each functional used. We employ the finite displacement method <cit.> in conjunction with nondiagonal supercells <cit.> to construct the matrix of force constants, which is then Fourier transformed to the dynamical matrix and diagonalized to obtain the vibrational frequencies and eigenvectors. Converged results are obtained using a coarse 4×4×4 𝐪-point grid which is used as a starting point for the Fourier interpolation to a finer grid along high symmetry lines to construct the phonon dispersion.The phonon dispersions obtained with the LDA and HSE functionals are shown in Fig. 
<ref> without considering LO-TO splitting, and are in good agreement with ealier calculations. <cit.> We estimate the effects of LO-TO splitting by calculating the phonon frequencies using elongated cells along the (100) crystallographic direction with up to 32 primitive cells within LDA, and up to 8 primitive cells in HSE. The estimated LO-TO splittings are shown in Table <ref> for the three infrared active Γ-point modes of Γ_4^- symmetry, together with the corresponding experimental data from Ref. bso_phonons_bands. The difference between the LDA results from the (1/8,0,0) and (1/32,0,0) 𝐪-points provides an estimate of the error in the HSE results evaluated only at the (1/8,0,0) 𝐪-point, and this error is in the submeV energy range for the two highest frequency modes, and in the meV range for the lowest frequency mode (see Table <ref>). The comparison of the phonon frequencies between LDA and HSE shown in Fig. <ref> and Table <ref> shows significant differences between the two. First, for the low-energy modes below about 55 meV, the LDA modes are in general softer than the corresponding HSE modes. The situation is reversed for the high energy modes, where the HSE frequencies are smaller. Second, the LO-TO splitting for the two higher energy modes is about 10% larger using HSE over LDA. The stronger LO-TO splitting in HSE is in closer agreement with the experimental measurements of Ref. bso_phonons_bands. Third, the LDA results exhibit a triply degenerate soft mode at the R point labeled by the irreducible representation R_5^- (with the Ba atom at the origin of coordinates) and with a vibrational frequency of only 1.8 meV. This mode is significantly harder at the HSE level of theory, reaching 11.3 meV. Using the PBEsol or PBE functionals, the R_5^- mode becomes imaginary, as shown in Table <ref>. As the cubic perovskite structure of BaSnO_3 is dynamically stable experimentally, our results suggest that the hybrid HSE functional provides a better description than semilocal functionals of the lattice dynamics of this system. This observation could have important implications for the study of superlattices formed by BaSnO_3, and in particular about their dynamical stability and ground state structures.§ PHONON-ASSISTED OPTICAL ABSORPTION §.§ Formalism The optical constants of a solid can be derived from the complex dielectric function ε_1+iε_2. In this paper, we describe the frequency dependent dielectric function within the dipole approximation, and writeε_2(ω)=2π/mNω^2_P/ω^2∑_v,c∫_BZd𝐤/(2π)^3|M_cv𝐤|^2δ(ϵ_c𝐤-ϵ_v𝐤-ħω),where m is the electron mass, N is the number of electrons per unit volume, and ω^2_P=4π Ne^2/m is the plasma frequency with e the electron charge. The single-particle electronic states |ψ⟩ of energy ϵ are labeled by their crystal momentum 𝐤 and their valence v or conduction c band index. The sum is over connecting valence and conduction states, and over all 𝐤-points in the Brillouin zone (BZ). The optical matrix element is given by M_cv𝐤=⟨ψ_c𝐤|𝐞̂·𝐩|ψ_v𝐤⟩, where 𝐞̂ is the polarization of the incident light and 𝐩 is the momentum operator. The real part of the dielectric function ε_1(ω) can be obtained from the imaginary part using the Kramers-Kronig relation. 
<cit.> From the dielectric function, we calculate the absorption coefficient as κ(ω)=ωε_2(ω)/cn(ω), where c is the speed of light and n(ω) is the refractive index.Within the theory of Williams and Lax, <cit.> the imaginary part of the dielectric function at temperature T is given byε_2(ω;T)=1/𝒵∑_𝐬⟨Φ_𝐬(𝐮)|ε_2(ω;𝐮)|Φ_𝐬(𝐮)⟩ e^-E_𝐬/k_BT,where the harmonic vibrational wave function |Φ_𝐬(𝐮)⟩ in state 𝐬 has energy E_𝐬, 𝐮={u_ν𝐪} is a collective coordinate for all the nuclei written in terms of normal modes of vibration (ν,𝐪), 𝒵=∑_𝐬e^-E_𝐬/k_BT is the partition function, T is the temperature, and k_B is Boltzmann's constant. Zacharias, Patrick, and Giustino <cit.> established that the Williams-Lax expression in Eq. (<ref>) is an adiabatic approximation to the standard expression for the temperature dependent imaginary part of the dielectric function within the theory of phonon-assisted optical absorption of Hall, Bardeen, and Blatt, <cit.> and is valid as long as phonon energies are small compared to the size of the band gap, ħω_ν𝐪≪ϵ_c-ϵ_v. An advantage of the Williams-Lax theory is that the temperature dependence of the electronic band structure <cit.> is automatically incorporated, <cit.> while within the Hall-Bardeen-Blatt theory the band structure is temperature-independent. In this work, we use Eq. (<ref>) to study phonon-assisted optical absorption in BaSnO_3.We evaluate Eq. (<ref>) using thermal lines (TL) as introduced in Ref. thermal_lines. In this approach, the multidimensional integral over the harmonic vibrational density is evaluated using the mean value (MV) theorem for integrals. This theorem dictates that there exists at least one atomic configuration, which we denote by 𝐮^MV(T) with an explicit temperature dependence T, for which ε_2(ω;𝐮^MV(T))=ε_2(ω;T). This would, in principle, allow us to replace the multidimensional integral in Eq. (<ref>) by the evaluation of the integrand on a single atomic configuration 𝐮^MV(T) at each temperature of interest. Following Ref. thermal_lines, we can find a good approximation to 𝐮^MV(T) by choosing an atomic configuration for which each phonon mode (ν,𝐪) has an amplitude given byu^TL_ν𝐪(T)=±(1/ω_ν𝐪[1/2+n_B(ω_ν𝐪,T)])^1/2,where ω_ν,𝐪 is the phonon frequency, and n_B is a Bose-Einstein factor. We note that the expression in Eq. (<ref>) is such that for each phonon mode there are two possible amplitudes, thus the number of mean value points is 2^3(𝒩-1), where 𝒩 is the number of atoms in the system.Configurations on thermal lines are the exact mean value configurations if the integrand ε_2(ω;𝐮) is a quadratic function of u_ν𝐪. In practise, we stochastically sample a subset of configurations on thermal lines, and find that the finite temperature dielectric function converges using only two sampling points. We refer the reader to Refs. thermal_lines,gw_thermal_lines for further details about thermal lines. §.§ Computational details Our first principles calculations of the imaginary part of the dielectric function ε_2(ω;𝐮) are performed using vasp. We report results obtained using a 350 eV energy cut-off, and we sample the electronic BZ stochastically including 6400 𝐤-points (equivalent to 100 𝐤-points on a 4×4×4 supercell). The energy conservation for the optical absorption process imposed by the delta function in Eq. (<ref>) is smeared with a Gaussian function of width 80 meV. 
We report results using both the LDA and HSE exchange correlation functionals.For the electron-phonon contribution to optical absorption we report results obtained using a 4×4×4 supercell of BaSnO_3 containing 320 atoms, which is equivalent to sampling the vibrational BZ using a 4×4×4 𝐪-point grid. Tests with a 5×5×5 supercell show small variations on the optical spectrum, but these do not affect our conclusions. We find that in the evaluation of Eq. (<ref>) using thermal lines, a single configuration is sufficient to obtain converged results, which suggests that the dependence of ε_2 on the phonon modes is close to quadratic. The reported results are obtained averaging over two configurations. Full convergence tests are detailed in the Supplemental Material. §.§ Absorption onsetThe absorption spectrum of BaSnO_3 is shown in Fig. <ref>. Focusing on the static lattice results first (dashed black lines), which correspond to vertical optical transitions only, both LDA and HSE calculations exhibit the same features. The absorption onset occurs at the minimum direct gap, located at the Γ point between an O 2p valence state and a Sn 5s conduction state. The static lattice absorption onset occurs around 1.8 eV at the LDA level, and about 3.2 eV at the HSE level. For a range of about 4 eV, absorption only occurs between the valence O 2p states and the isolated Sn 5s conduction band (cf. Fig. <ref>). Between 5 and 7 eV there is a dramatic increase in absorption, determined by the energy of the transitions between the valence O 2p states and the conduction Ba 5d states. This increase in absorption occurs around 5.3 eV in LDA and 7.0 eV in HSE. We note that the absorption onset occurs below the nominal direct band gap due to the smearing by 80 meV of the delta function in Eq. (<ref>), with the logarithmic scale used in the insets making the apparent shift larger. The tail below the nominal band gap that we observe is similar to that observed in earlier calculations in silicon. <cit.>We next consider the results at 0 K shown as the red solid lines in Fig. <ref> and obtained including the effects of electron-phonon coupling. These results differ from the static lattice results because they include the effects of quantum zero-point motion, which in the perturbative point of view correspond to phonon emission, and enable indirect transitions to occur. The most important difference between the static lattice results and the zero temperature results is the nature of the absorption onset. The minimum absorption onset at 0 K is determined by a second order process which involves the absorption of a photon and the scattering off a phonon. This process bridges the indirect gap of 1.30 eV (LDA) or 2.73 eV (HSE) and can only be accounted for theoretically by the use of the theory of phonon-assisted optical absorption. In both LDA and HSE calculations, the zero temperature results demonstrate an absorption onset that is about 0.5 eV smaller than predicted by the static lattice theory. §.§ Temperature dependent absorptionIn Fig. <ref> we show the temperature dependent absorption spectrum of BaSnO_3 at the LDA level of theory. The first noteworthy feature of the finite temperature absorption spectrum is the smoothing of the peaks exhibited by the static lattice spectrum. As an example, the static lattice absorption peak just below 8 eV decreases in size with increasing temperature, and becomes a shoulder of the larger peak around 9 eV. 
A second feature of the finite temperature results is the increase in the absorption coefficient in the energy range from about 2 to 5 eV as temperature increases (shown in the inset of Fig. <ref>). This energy range corresponds to transitions from the valence O 2p bands to the isolated conduction Sn 5s band (cf. Fig. <ref>). To understand the origin of this effect, recall that at the static lattice level only vertical transitions are allowed. This implies that in each energy interval there are only a small number of conduction states available for electronic transitions (those of the Sn 5s band). Furthermore, for each of these conduction states there is only a small number of O 2p states from which electrons can absorb photons, those at the same 𝐤-vector as the conduction band. This explains why the absorption coefficient is small below 5 eV when only transitions to the Sn 5s band are allowed. When the effects of electron-phonon coupling are included, then the phase space of available electronic states on the conduction band remains the same, but electrons from generic 𝐤-points in the flat O 2p bands can now absorb a photon with the mediation of a phonon. This is a second order process, and therefore its weight is smaller than the dominant vertical first order absorption process. But with increasing temperature the phonon-mediated processes become more relevant, and this is reflected in the increase in the absorption coefficient at energies between 2 and 5 eV. A final feature of the finite temperature results is the red shift of the absorption onset of the O 2p→ Ba 5d transition as temperature increases, which is clearly observed in the inset of Fig. <ref>. We expect that qualitatively similar finite temperature features would be observed if the HSE functional were used instead. However, the computational expense of the latter precludes detailed calculations, and we have performed finite temperature calculations for HSE only at 300 K, shown in Fig. <ref>. The clearest feature is a red shift in the indirect absorption onset between 0 K and 300 K of about 0.13 eV. The corresponding red shift of the indirect absorption onset at the LDA level is about 0.03 eV. This observation is in line with earlier reports of stronger electron-phonon coupling when electronic correlations beyond semilocal DFT are included. <cit.> §.§ Discussion Our first principles calculations provide the first study of the optical absorption spectrum of BaSnO_3 including the effects of electron-phonon coupling and electron-electron interactions beyond semilocal DFT. Our results allow us to confirm that the onset of optical absorption in BaSnO_3 is due to the indirect gap, as previously suggested from experimental absorption spectra. <cit.> We expect that these calculations will become more widespread in the future and phonon-assisted processes will no longer be limited to experimental analysis but will also be routinely treated at the theoretical level. We further provide results of the temperature dependence of the absorption spectrum of BaSnO_3. The calculated red shift of the absorption onset with increasing temperature is small in BaSnO_3, 0.1 eV from 0 K to 300 K, suggesting that the performance of transparent conductors based on BaSnO_3 will only weakly depend on temperature.
The increase in magnitude of the absorption coefficient in the energy range dominated by the lone Sn 5s band might be a generic feature of transparent conducting oxides because their high conductivities rely on the same principle of an isolated s-character conduction band. More generally, it would be interesting to explore the implications of phonon-assisted processes in the novel design strategies that have been proposed for transparent conductors beyond oxide semiconductors. <cit.> In these proposals transparency is typically achieved by recourse to selection rules or momentum mismatch between band extrema that forbid some optical transitions within the static lattice approximation. Phonons can both break symmetry-based selection rules and allow phonon-assisted transitions of finite momentum, which might invalidate some of the proposed strategies. However, these phonon-assisted processes are second-order, and a quantitative assessment of their importance is required before any definite conclusions can be reached. § SUMMARY AND CONCLUSIONS We have presented first principles calculations of the absorption spectrum of the transparent conducting oxide BaSnO_3, including both electron-phonon coupling and electron-electron coupling beyond semilocal density functional theory. Our results demonstrate that both effects are necessary in order to obtain a qualitatively and quantitatively accurate spectrum. Electron-phonon coupling permits phonon-assisted optical absorption across the minimum indirect gap of BaSnO_3, a transition that is forbidden in the standard static lattice approximation. This provides an absorption onset that occurs about 0.5 eV below the previously calculated direct absorption onset. Electron-phonon coupling also leads to the temperature dependence of the absorption spectrum of BaSnO_3. Electron-electron correlations treated at the hybrid functional level of theory indicate that the conduction bands span a wider range of energies than predicted by semilocal functionals, and therefore modify the position of the absorption peaks in a manner that cannot be predicted by a simple rigid shift of the bands. Our work demonstrates that an accurate description of the optical properties of BaSnO_3 requires the inclusion of both electron-phonon and electron-electron terms. Recent methodological developments make the inclusion of these terms feasible in the context of first principles calculations, and we think that the prediction of novel materials for optoelectronic applications will benefit from these highly accurate calculations that model experimental settings more closely than standard approaches. The authors thank Heung-Sik Kim and André Schleife for helpful discussions and correspondence. This work was partially supported by NSF grant DMR-1629346. B.M. thanks Robinson College, Cambridge, and the Cambridge Philosophical Society for a Henslow Research Fellowship. Supplemental Material for “Phonon-assisted optical absorption in BaSnO_3 from first principles” In the Supplemental Material we evaluate the various convergence parameters of the calculation of the finite temperature absorption coefficient of BaSnO_3. For the convergence study, we use the local density approximation to the exchange correlation functional <cit.>, and perform all calculations at 300 K. In Fig. <ref> we show calculations corresponding to a 4×4×4 supercell. On the left diagram of Fig. <ref> we show the absorption coefficient calculated using 60, 80, and 100 random 𝐤-points to sample the electronic Brillouin zone (BZ).
These numbers correspond to 3840, 5120, and 6400 𝐤-points, respectively, in the BZ of the primitive cell. The bottom panel depicts the ratio of the absorption coefficients with respect to the curve with 100 random 𝐤-points, demonstrating the convergence with respect to the number of 𝐤-points used to sample the electronic BZ. For photon energies below about 1 eV, the ratio exhibits random oscillations which are caused by numerical noise. The value of the absorption coefficient at energies below about 1 eV is smaller than 1 cm^-1, the lowest absorption coefficient reported in the main manuscript. On the right diagram of Fig. <ref> we show the absorption coefficient calculated using different numbers of atomic configurations to sample the vibrational phase space, all configurations corresponding to a thermal line <cit.>. Using only 2 configurations leads to converged results as shown both in the upper panel for the absorption coefficient, and in the lower panel for the ratio. We again observe random oscillations due to numerical noise for photon energies below about 1 eV. The sampling of the vibrational BZ is accomplished by the use of supercells of the 5-atom primitive cell of BaSnO_3. In Fig. <ref> we compare the absorption coefficient obtained using supercells of sizes 4×4×4 (containing 320 atoms) and 5×5×5 (containing 625 atoms). The results show some marked differences between the two calculations, in particular the absorption peak just below 8 eV is slightly stronger for the larger 5×5×5 supercell, and the absorption coefficient in the range between 2 eV and 5 eV is slightly larger for the smaller 4×4×4 supercell. Nonetheless, the results presented in the main text are robust with respect to the size of the supercell, and therefore we use a 4×4×4 supercell in our calculations as it delivers the appropriate balance between accuracy and computational cost. We finally note that for the calculation of the imaginary part of the dielectric function, energy conservation is imposed by smearing the delta function with a Gaussian. The results reported in the main text correspond to using a smearing width of 80 meV. Tests with a smearing width of 20 meV show that our conclusions are independent of the smearing width used. Overall, the results reported in the main text are obtained using a 4×4×4 supercell, with averaging over two configurations on thermal lines, and including 100 random 𝐤-points in the supercell (equivalent to 6400 𝐤-points in the primitive cell).
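The ratio-based convergence check described above can be sketched in a few lines. The code below is our own illustration (the synthetic spectra and the default thresholds are assumptions; the 1 cm^-1 floor and the ~1 eV noisy region are taken from the text).

```python
import numpy as np

def convergence_ratio(omega_ref, kappa_ref, omega_test, kappa_test,
                      floor=1.0, e_min=1.0):
    """Ratio of two absorption spectra on the reference photon-energy grid.
    Points below e_min (eV) or with kappa below `floor` (cm^-1) are masked,
    since there the spectra are dominated by numerical noise."""
    kappa_interp = np.interp(omega_ref, omega_test, kappa_test)
    mask = (omega_ref >= e_min) & (kappa_ref >= floor) & (kappa_interp >= floor)
    ratio = np.full_like(kappa_ref, np.nan)
    ratio[mask] = kappa_interp[mask] / kappa_ref[mask]
    return ratio

# Toy usage with synthetic spectra standing in for the 60- and 100-k-point runs.
omega = np.linspace(0.0, 10.0, 501)
kappa_100 = np.maximum(0.0, omega - 1.5) ** 2 * 1e4
kappa_60 = kappa_100 * (1.0 + 0.02 * np.sin(omega))
ratio = convergence_ratio(omega, kappa_100, omega, kappa_60)
print(np.nanmax(np.abs(ratio - 1.0)))  # worst-case deviation from unity
```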
http://arxiv.org/abs/1709.09196v1
{ "authors": [ "Bartomeu Monserrat", "Cyrus E. Dreyer", "Karin M. Rabe" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170926180239", "title": "Phonon-assisted optical absorption in BaSnO$_3$ from first principles" }
§ INTRODUCTION Ultra-High Energy Cosmic Rays, the most energetic known particles in the Universe, offer a unique opportunity to study both some of the most enigmatic acceleration processes of our Universe and probe particle physics at a scale currently inaccessible to human-made accelerators. However, the study of such particles has to be done indirectly by measuring the Extensive Air Showers (EAS) produced by the interaction of the primary particle with the Earth's atmospheric atoms. This requires knowledge of the shower physical processes, in particular of high-energy hadronic interactions, which we have not yet achieved. In fact, there is ample evidence that the hadronic component of the cascade is not fully understood <cit.>. This evidence arises mainly from inconsistencies between the shower electromagnetic component and its muonic component. The latter stems from the decay of charged mesons and is thus intimately related to the description of hadronic interactions. As such, the accurate and direct measurement of the EAS muon content is crucial for the validation of hadronic interaction models and, consequently, for ensuring the correct description of the shower. The direct measurement can be done with a setup like the one proposed for MARTA <cit.>. In this setup Resistive Plate Chambers (RPCs) are put below a water-Cherenkov Detector (WCD). The WCD acts as a shield against the electromagnetic shower component, allowing the RPCs to detect muons with a high spatial and time resolution [The applications of this EAS hybrid detector in terms of inter-calibrations and combined analysis will be discussed in a paper to be published elsewhere.]. RPCs are chosen not only because of their good time resolution (< 1 ns) and the ease with which they can be segmented (limited only by the electronics), but also because their low cost, as gaseous detectors, makes them extremely appealing for cosmic ray experiments where significantly large areas need to be covered. However, the introduction of such detectors in a harsh outdoor environment poses additional challenges: these RPCs have to be resilient to environmental effects such as large temperature excursions and humidity; being spread out in the field, they should require little maintenance, which implies that these RPCs should have a very low gas flux; and they should show little ageing. In this communication we present the latest indoor/outdoor results, which demonstrate that all the above requirements can be fulfilled. The manuscript is organised as follows: in the next section we describe the detector; then we discuss the low gas flux tests performed in the laboratory; afterwards we describe the outdoor apparatus and present the results for one year of operation. We end with a summary and prospects. § DETECTOR DESCRIPTION The Resistive Plate Chamber detector module is composed of 1200×1500×1.9 mm^3 glass electrodes separated by Nylon monofilaments, forming two 1 mm gas gaps. This module is put inside a permanently closed acrylic box (see figure <ref>(left)). The application of a resistive acrylic paint allows one to apply the high voltage on the outer electrodes. The signal can be read out through an 8×8 pad matrix, where each pad has an area of 180×140 mm^2. The data acquisition (DAQ) electronics, named PREC <cit.>, is a custom-developed system based on discrete electronics (see figure <ref>(right)). Each acquisition channel comprises a broadband amplifier followed by a programmable comparator.
The threshold outputs are sent, via LVDS links, to a purely digital central board. Data remain in a buffer on this board until read by the DAQ computer. An I2C bus is used to get information about temperature, pressure and relative humidity in the chamber. High voltage (HV) and background currents were monitored by the HV power supply. These parameters are recorded each minute. The RPC was designed to be used in a harsh environment and as such both the gaseous volume and the pickup plane are inside a gas-tight aluminium volume (see figure <ref>(right)). The HV power supply and the frontend electronics will be located in a DAQ box coupled to the RPC volume. The RPC is operated in proportional mode, which allows it to operate with a high efficiency while minimizing ageing of the RPC due to electric discharges in the gas. § LABORATORY TESTS The gas flux is an important parameter for the RPC operation. As such, although it is desirable to have a gaseous detector with as low a gas flux as possible, one needs to be sure that this is not achieved at the cost of losing efficiency. In this section we present the laboratory tests done for several gas fluxes, from 12 cc/min down to 1 cc/min. One of the most important parameters to be monitored and maintained stable is the reduced electric field, E/N, which is a function of the RPC gap width, the applied voltage, the temperature and the pressure. This quantity cannot be obtained directly, but it can be calculated using the previously mentioned quantities. In this work we followed the approach used in <cit.>. In order to maintain the efficiency as the environmental parameters change, the high voltage (HV) has to be adjusted accordingly. Hence, the HV was adjusted every 15 minutes, using the average data of the previous 15 minutes. This appears to be enough to accommodate pressure and temperature variations, as shown in figure <ref>. This figure presents the evolution of E/N during more than 8 months and for different gas fluxes. From these results, it becomes evident that it is possible to maintain E/N constant under different conditions by adjusting the HV. The variation of the induced fast charge, efficiency, background current and E/N can be seen in figure <ref> over more than 9 months. From these plots it is possible to see that all these quantities vary little over time, even when the gas flux is reduced. Only the background current is not as stable as the other variables. This happens because the current is the sum of various contributions: ionisation currents, leakage currents and the imbalance of the background rate due to temperature variations. It should, however, be noted that the last two contributions affect neither the charge nor the efficiency, since they are excluded by the trigger definition, and thus have no impact on the data analysis of shower events. § OUTDOOR TESTS Once the RPCs were proven to fulfil the necessary requirements in the lab, they needed to be tested in the outdoor environment. The test was conducted in the Pampa Amarilla, in the Province of Mendoza, Argentina, at the Pierre Auger Observatory site <cit.>. This plateau has an altitude of 1400 m above sea level. The atmospheric conditions are demanding, with daily temperature excursions of nearly 30^∘C, minimum temperatures below zero and maximum absolute temperatures exceeding 30^∘C. The RPC also has to be able to endure strong winds and lightning storms as well as high humidity. The RPCs were installed in a closed precast concrete structure that supports a WCD, a tank with 12 tons of purified water.
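Before turning to the outdoor results, the HV adjustment procedure described in the laboratory tests can be illustrated with a small sketch. The relations below (a uniform field E = V/d across one gap and an ideal-gas number density N = P/k_BT) and the numerical values are our own assumptions for illustration; the actual procedure follows the approach of the cited reference.

```python
import numpy as np

K_B = 1.380649e-23  # J/K
TOWNSEND = 1e-21    # 1 Td = 1e-21 V m^2

def reduced_field(voltage_v, gap_m, pressure_pa, temperature_k):
    """E/N in Townsend, assuming a uniform field E = V/d across one gas gap
    and an ideal-gas number density N = P/(k_B T)."""
    field = voltage_v / gap_m
    density = pressure_pa / (K_B * temperature_k)
    return field / density / TOWNSEND

def voltage_for_target(target_td, gap_m, pressure_pa, temperature_k):
    """HV set point that keeps E/N at the target value for the current P and T."""
    density = pressure_pa / (K_B * temperature_k)
    return target_td * TOWNSEND * density * gap_m

# Illustrative numbers only (1 mm gap, ~86 kPa, 20 degC and 10 degC):
print(reduced_field(5600.0, 1e-3, 86e3, 293.15))      # roughly 260 Td
print(voltage_for_target(250.0, 1e-3, 86e3, 283.15))  # colder gas is denser, so a higher HV is needed
```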
The WCD offers protection against the unwanted electromagnetic particles of the shower, and the whole system has a thermal inertia which attenuates the temperature excursions to seasonal variations of less than 10^∘C, as seen in figure <ref>. In order to measure the efficiency of the RPC, another RPC was put on top of the first one, spaced by 10 mm. This way we can use the tank and one RPC to define the trigger and measure the efficiency of the other one. Figure <ref>(left) shows the variation of the RPC efficiency with the reduced electric field. Clearly, a plateau is reached above ∼ 240 Td, allowing safe operation of the RPC. The efficiency uniformity of the RPC is also shown in this figure (right). All the instrumented pads present an efficiency of ∼ 85%, showing a good uniformity. The RPC monitoring parameters can be seen in figure <ref> for nearly one year of data acquisition. The most important feature here is that E/N can be maintained constant by adjusting the HV. This has an immediate impact on the RPC muon detection efficiency, shown in figure <ref>. This plot clearly shows that it is possible to operate this RPC in the harsh outdoor environment with constant efficiency, proving its potential for cosmic ray experiments. The observed interruptions of the data acquisition are related to the limited availability of power and communications at the test site. It is worth noting that the important result is not the continuous operation of the RPC but the remarkably stable operation during nearly one year in the field while exposed to seasonal weather effects. § SUMMARY AND PROSPECTS The detailed study of Extensive Air Showers requires better detectors, able to operate in the harsh outdoor environment. Resistive Plate Chambers are a good candidate, due to their low cost and good spatial and time resolution, provided that they can operate stably and with little maintenance. In this work, we have shown with laboratory and outdoor tests that it is possible to operate the RPCs with gas fluxes as low as 1 cc/min while maintaining a good efficiency. Moreover, through the adjustment of the gap voltage it is possible to absorb the temperature and pressure variations, ensuring stable detection efficiency. This stability was tested during one year in the open field, demonstrating the detector's resilience to environmental effects. Currently there are about 30 of these RPCs being used and tested in several places in the world. For instance, they are being used at the Pierre Auger Observatory site as a hodoscope to investigate the response of the WCD to muons <cit.>. Hence, the R&D of these RPCs for outdoor operation is expected to continue with more data and further developments. This R&D is essential for future projects that plan to take advantage of the RPCs' capabilities. One example is LATTES <cit.>, an array for the detection of (very) high-energy gamma-rays planned to be installed at very high altitude in South America.
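For completeness, the tank-plus-reference-RPC trigger scheme used for the efficiency measurements above can be summarized in a few lines. The sketch and the toy counts below are ours, not taken from the paper.

```python
import numpy as np

def pad_efficiency(n_triggers, n_hits):
    """Efficiency of one pad as the fraction of tank + reference-RPC triggers
    with a matching hit in the pad under test, with a binomial uncertainty."""
    eff = n_hits / n_triggers
    err = np.sqrt(eff * (1.0 - eff) / n_triggers)
    return eff, err

# Invented counts for a single pad, for illustration only.
eff, err = pad_efficiency(n_triggers=12000, n_hits=10200)
print(f"pad efficiency = {eff:.3f} +/- {err:.3f}")
```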
http://arxiv.org/abs/1709.09619v1
{ "authors": [ "Pedro Assis", "Alberto Blanco", "Nuno Carolino", "Ruben Conceição", "Orlando Cunha", "Carola Dobrigkeit", "Miguel Ferreira", "Paulo Fonte", "Luis Lopes", "Ricardo Luz", "Victor Barbosa Martins", "Luis Mendes", "Américo Pereira", "Mário Pimenta", "Raul Sarmento", "Ronald Shellard", "Vitor de Souza", "Bernardo Tomé" ], "categories": [ "astro-ph.IM", "physics.ins-det" ], "primary_category": "astro-ph.IM", "published": "20170927165012", "title": "Autonomous RPCs for a Cosmic Ray ground array" }
Significance statement: Many of the most fascinating and actively investigated materials classes host strongly correlated electrons. Their understanding is challenging because the strong correlations cause entwining of multiple degrees of freedom of an electron, such as spin, orbital, and charge. This complexity is ubiquitous and underlies many of the rich properties. The question then is whether there are universal organizing principles that provide simplicity to the description. Here, by studying a prototype material with entwined spin and orbital degrees of freedom and a theoretical model pertinent to it, we have demonstrated correlation-driven electron localization-delocalization as such a principle. It happens sequentially, involving a single quantum number at a time, thus deciphering the roles of the individual degrees of freedom. ^∗Corresponding author. [email protected], [email protected] ^⊕V.M. and A.C. contributed equally to this work. Present addresses: ^+University of São Paulo, São Paulo, Brazil; ^×Department of Physics and Astronomy and Quantum Matter Institute, University of British Columbia, Vancouver, B.C., V6T 1Z1, Canada. INTRODUCTION Strongly correlated electron systems represent a vibrant frontier in modern condensed matter physics. They often contain multiple degrees of freedom, which may be harnessed for future applications in electronic devices. One famous example is the manganites, in which both spin and orbital degrees of freedom play an important role <cit.>. Others are the iron-based superconductors <cit.> and fullerides <cit.>. In the cuprates, charge order emerges and interplays with the spin degrees of freedom to influence their low-energy properties <cit.>. Even in magic-angle graphene, the physics likely depends on both the spin and valley degrees of freedom <cit.>. These systems display a rich variety of exotic properties at low energies <cit.>. Finding simplicity out of this complexity is a central goal of the field. An emerging notion is that electron localization may be an organizing principle that can accomplish this goal <cit.>. RESULTS We have chosen heavy fermion materials as the setting for our study because they can be readily tuned to localization transitions and display sharp features thereof. The f electron's spin in a heavy fermion compound corresponds to a well-defined local degree of freedom. At the same time, it is still sufficiently coupled to the conduction electrons so that its behavior can be probed through the latter. In the ground state, Kondo entanglement generally leads to the formation of a many-body spin singlet between the local moment and conduction electrons. Electronic localization of this electron fluid can then be realized as a function of a non-thermal control parameter <cit.>, and has been understood in terms of the destruction of Kondo entanglement <cit.>.
The accompanying strange-metal behavior, as well as the onset of magnetic ordering of the liberated spins, and unconventional superconductivity are prominent features <cit.> that make this transition both readily observable and broadly important. To explore the intricate interplay of multiple quantum numbers in this setting, a local degree of freedom in addition to the electron's spin should come into play. The simplest such case in heavy fermion systems may arise in cubic Ce-based compounds. Due to strong intra-atomic spin-orbit coupling, the spin and orbital degrees of freedom of the Ce 4f^1 electron are described in terms of the total angular momentum J, which encompasses both spins (dipoles) and higher multipolar moments. Ce- and Yb-based heavy fermion materials often have crystalline symmetries lower than cubic. In that case, the lowest crystal electric field (CEF) level would be a Kramers doublet. In the cubic case, however, symmetry allows for CEF levels with higher degeneracy, such as the four-fold Γ_8 level, both in the case of the [Xe]4f^1 wavefunction of a Ce^3+ ion (for the total angular momentum J=5/2) and in the case of the [Xe]4f^13 wavefunction of a Yb^3+ ion (for J=7/2). When this level is the lowest in energy, we end up with one f-electron (or hole in the Yb-based systems) occupying a four-fold-degenerate local level, which can be characterized by spin and orbital quantum numbers <cit.>. This is indeed the case in the intermetallic compound Ce_3Pd_20Si_6 (Fig. <ref>(a), see also Section S4). At zero field, it is at first the quadrupolar moments that order into an antiferroquadrupolar (AFQ) phase with ordering wave vector [1 1 1] at T_Q∼ 0.4 K; with further decreasing temperature, the dipolar (magnetic) moments undergo antiferromagnetic (AFM) ordering, with the ordering wave vector [0 0 0.8] at T_N∼ 0.25 K, as shown by recent neutron scattering experiments <cit.>. Both orders are due to Ce atoms on the crystallographic 8c site. As is typical for heavy fermion systems, the many-body ground state is readily tunable by external parameters such as magnetic field. Previous work on Ce_3Pd_20Si_6 polycrystals <cit.> indeed revealed the suppression of T_N at a critical field B_N. Quantum criticality was revealed by electrical resistivity and specific heat measurements; the temperature dependencies were found to be different from the expectations <cit.> of the conventional theory based on order parameter fluctuations. Measurements of magnetotransport revealed a jump of the Hall coefficient and magnetoresistance in the zero-temperature limit across B_N, which implicates a sudden reconstruction from large to small Fermi surface with decreasing field, as expected for a localization transition of Kondo destruction type <cit.>. When single crystals became available (see also Section S1), the phase diagram was mapped out for different field orientations <cit.>. The AFM transition is suppressed isotropically, implying that the quantum critical behavior at B_N observed in polycrystals captures the behavior of the single crystals. By contrast, the AFQ transition is suppressed anisotropically <cit.>. The study of the interplay between spin and orbital degrees of freedom thus requires measurements on single crystals, which we carry out in the present work. We chose to apply magnetic field along the crystallographic [0 0 1] direction, which suppresses the AFQ phase at a relatively small field B_Q (see Section S2). The temperature-magnetic field phase diagram for this direction is shown in Fig. <ref>(b).
The AFM phase (phase III) is suppressed at B_N∼ 0.8 T, whereas the AFQ phase (phase II) is suppressed at B_Q∼ 2 T. Both have been found to be continuous by neutron scattering experiments <cit.>. The continuous nature of the transition at B_Q is also evidenced by the phase transition anomalies in specific heat <cit.>, magnetostriction (fig. S1A,B), and thermal expansion data (fig. S1C). The notion <cit.> that the Fermi surface is large at B > B_N appears to have two implications. Firstly, no further jump is to be expected at larger fields. Indeed, it has been taken for granted that electron localization takes place only once even in the case with multiple degrees of freedom. Secondly, the quantum critical behavior at B_Q should be very different from that near B_N. Surprisingly, we find strange-metal behavior near B_Q that is strikingly similar to that near B_N, as illustrated by the power-law exponent a of the temperature-dependent electrical resistivity (ρ = ρ_0 + A'· T^a) in the quantum critical fans anchored at B_Q and B_N, respectively (Fig. <ref>(a)). Indeed, at B_Q, the electrical resistivity ρ is linear in temperature down to very low temperatures (Fig. <ref>(b)), and the specific heat coefficient c/T shows a logarithmic divergence (Fig. <ref>(c), right axis). In addition, the thermal expansion coefficient α/T shows a stronger than logarithmic divergence (Fig. <ref>(c), left axis), consistent with a diverging Grüneisen parameter Γ∼α/c. At fields away from B_Q, Fermi liquid (FL) behavior, with the form ρ = ρ_0 + A· T^2, is recovered in the electrical resistivity (Fig. <ref>(b), at temperatures below the arrows). The A coefficient, extracted from the respective FL regimes (fig. S2), is strongly enhanced towards B_N and B_Q (Fig. <ref>(d)). To further characterize the behavior near B_Q, we have measured the isothermal field-dependence of the electrical resistivity (Fig. <ref>(a)-(c)) and the Hall resistivity (Fig. <ref>(d)-(f)) across this critical field. They reveal crossover signatures which can be quantified following the procedures established previously <cit.> (see also Section S3). The characteristic parameters extracted from the analysis at each temperature are the full width at half maximum FWHM of the crossover (Fig. <ref>(g)), the crossover height Δ A (Fig. <ref>(h)), and the crossover field B^∗ or, equivalently, the field-dependent crossover temperature scale T^∗ (Fig. <ref>(a)). The pure power-law behavior of the FWHM is seen as a straight line in a double logarithmic plot (Fig. <ref>(g)); it extrapolates to infinite sharpness in the zero-temperature limit, and thus implies a jump in the Fermi surface. The power is 1 within error bars (see caption of Fig. <ref>) for both the magnetoresistance and the Hall crossover at B_Q, similar to what was previously found for the quantum critical point (QCP) at the border of the AFM phase in both Ce_3Pd_20Si_6 (Ref.) and YbRh_2Si_2 (Ref.). Note that, in the low-temperature limit, the change Δ n in the effective charge carrier concentration across B_Q, estimated using a simple spherical-Fermi-surface one-band approach, is sizeable: it is about 0.35 electrons per Ce atom at the 8c site (Fig. <ref>(h)). While a change in Fermi surface per se could come from a Lifshitz transition, our observations near B_Q (and B_N) are very different.
Lifshitz transitions for three-dimensional Fermi surfaces, as observed in the high-field regime of YbRh_2Si_2 (Ref.), take place in the Fermi-liquid part of the phase diagram <cit.> and give rise to only smooth evolutions of the Hall coefficient. Instead, strange-metal behavior accompanied by a sizeable jump of the Fermi surface is the hallmark of unconventional quantum criticality driven by Kondo destruction. The question, then, is how multiple stages of Kondo destruction may arise under the tuning of a single control parameter. We consider a multipolar Kondo model that contains a lattice of local moments with a 4-fold degeneracy (classified as Γ_8 by the crystalline point group symmetry, see Section S4), whose spin and orbital states are described by σ and τ, respectively, and conduction electrons, c_kστ, as sketched in Fig. <ref>(d). The Γ_8 moments are Kondo coupled to the conduction electrons, and the coupling constants J^κ_K with κ=σ, τ, m, respectively, describe the interaction of σ, τ, and σ⊗τ with the conduction-electron counterparts. The local moments also interact with each other by the RKKY exchange interactions I_ij^κ between sites i and j which, for the purpose of computational feasibility, we have chosen to be of Ising type (Section S5). In the extended dynamical mean field theory (Section S5), this will be described in terms of the coupling between the local moments and bosonic baths ϕ_κ, with coupling constants g_κ. We are then led to analyze the multipolar Bose-Fermi Kondo (BFK) model as an effective model for the Kondo lattice, which is described by the Hamiltonian (see Section S5 for more details) H_BFK = H_K + H_BK + H_B0(ϕ_σ, ϕ_τ, ϕ_m), with H_BK = g_σ σ^z ϕ_σ + g_τ τ^z ϕ_τ + g_m (σ^z ⊗τ^z)ϕ_m. Here, H_K describes the Kondo coupling between the local spin-orbital moments and the conduction electrons. In addition, H_BK expresses the Bose-Kondo coupling between the local moments and the bosonic baths whose dynamics are specified by H_B0. For the pure (fermionic) Kondo part, our model corresponds to an exactly screened Kondo problem <cit.>, and is SU(4) symmetric when J^κ_K is the same for κ=σ,τ,m. Even when the SU(4) symmetry is broken, the system flows to the exactly screened (Fermi liquid) SU(4) Kondo fixed point <cit.>. The model in the presence of bosonic Kondo couplings has not been studied before. Based on what is known for the SU(2) Bose-Fermi Kondo model <cit.>, we expect that the overall phase diagram of the present model with different kinds of symmetries in the SU(4) space is captured by the calculations with SU(4)-symmetric Kondo couplings and Ising-anisotropic bosonic couplings (see Supplementary Information, Section S4). We have determined the zero-temperature phase diagram of this SU(4)-based Bose-Fermi Kondo model via calculations using a continuous-time quantum Monte Carlo method (Section S5). The theoretical phase diagram is illustrated in Fig. <ref>(b), as a function of g_1 = g_τ + g_σ and g_2=g_τ-g_σ, for fixed nonzero values of g_m and J_K^κ. Consider a generic direction (cut δ). In phase “σ, τ Kondo”, both the spin and orbital moments are Kondo entangled, which gives rise to an SU(4)-symmetric electron fluid (Fig. <ref>(c),(e) right). Upon moving towards the left (against the direction of arrow δ), this state first undergoes the destruction of the Kondo effect in the orbital sector at one QCP (stars in Fig. <ref>(b),(c)).
This drives the system into a phase in which only the spin moments form a Kondo singlet with the conduction electrons (phase “σ Kondo, τ KD” in Fig. <ref>(b),(c), Fig. <ref>(e) left). It then, at the next QCP (squares in Fig. <ref>(b),(c)), experiences the destruction of Kondo effect in the spin sector, leading to a fully Kondo destroyed state (phase “σ, τ KD” in Fig. <ref>(b),(c)). Consequently, in a multipolar Kondo lattice system, there will be two distinct QCPs associated with a sequence of Kondo destructions. At each of the QCPs, the Fermi surface undergoes a sudden reconstruction (circles in Fig. <ref>(c)), which explains the jumps inferred from the Hall coefficient and magnetoresistance data. For a single-band jellium-like electronic fluid, our theory implies an integer jump of the electron count at each QCP. Any real material would, however, show deviations from this equality, as also seen here (see above). The sharp statement is that a jump in the electron count and Fermi surfaces must be manifested in the extrapolated zero-temperature limit of the Hall crossover, as we have demonstrated. We stress that an applied magnetic field is expected to weaken magnetic order more rapidly (∼ B) than the Kondo processes (∼ B^2); related considerations apply to the quadrupolar sector. Thus, the sequential Kondo destruction happens upon decreasing the magnetic field, i.e., from right to left in the experimental phase diagram (Fig. <ref>(a)).We have thus demonstrated that, in spite of the genuine intermixing of the two degrees of freedom in the many-body dynamics, a remarkable separation of their fingerprints occurs in the singular physics of quantum criticality: The magnetic-field tuning realizes two stages of quantum phase transitions, which are respectively dictated by the Kondo destruction of the spin and orbital sectors. DISCUSSIONTo put this finding in perspective, we recall that in spin-only systems, experiments have provided extensive evidence for Kondo destruction in AFM heavy fermion compounds <cit.>. From studying a spin-orbital heavy fermion system, we have shown that Kondo destruction is a general phenomenon and may also occur if degrees of freedom other than spin decouple from the conduction electrons. This demonstrates Kondo destruction as a general framework for both beyond-Landau quantum criticality and the electron localization-delocalization transition in metallic heavy fermion systems. Our analysis of the multipolar degrees of freedom also relates to the purely orbital case, as realized for instance in the Pr-based heavy fermion systems PrV_2Al_20 (Ref.) and PrIr_2Zn_20 (Ref.). These materials show unusual multipolar quantum criticality, though Kondo destruction has not yet been explored. Future studies may reveal whether electron localization occurs in these orbital-only heavy fermion systems as well, and contributes to nucleating phases <cit.> – including unconventional superconductivity <cit.>.More generally, we have demonstrated that strange-metal properties occur at each stage of the electron localization transition. This finding connects well with other classes of strongly correlated systems in which strange-metal behavior has also been linked to electron localization. In the high-T_c cuprate superconductors, electron localization as suggested by a pronounced change of the Fermi surface <cit.> and a divergence of the charge carrier mass <cit.> appears near the hole doping for optimal superconductivity, where strange-metal properties arise. 
In organic systems, electron localization has also been evidenced in connection with strange-metal behavior and optimal superconductivity <cit.>. In the graphene superlattices with a magic-angle twist, whose electronic states may also satisfy an SU(4) symmetry from the combination of the spin and valley degrees of freedom, transport and quantum oscillation measurements <cit.> have implicated a “small" Fermi surface of the charge carriers doped into a Mott insulator, thereby raising the possibility of an electron localization-delocalization transition underlying the superconductivity. As such, our work provides new understanding of the breakdown of the textbook description of electrons in solids and points to electron localization as a robust organizing principle for strange-metal behavior and, by extension, high-temperature superconductivity. Our system contains strongly correlated and entwined degrees of freedom; the crystalline symmetry dictates the strong intermixing of the spin and orbital quantum numbers. Yet, near each of the two QCPs, there is a clear selection of the orbital or spin channel that drives the quantum critical singularity. This remarkable simplicity, developed out of the intricate interplay among the multiple degrees of freedom, represents a new insight into the physics of complex electron fluids. This new understanding may also impact strongly correlated systems beyond the realm of materials, such as mesoscopic structures <cit.> and quantum atomic fluids <cit.>, where localization-delocalization transitions may also play an important role. Finally, the sequential localization we have advanced may be viewed as selectively coupling only part of the system to an environment. This notion relates to ideas for reduced dephasing within a logical subspace <cit.>, and may as such inspire new settings for quantum technology. Materials and methods are described in the Supplementary Information. Acknowledgements: The authors wish to thank D. Joshi for his contribution to the crystal growth, R. Dumas from Quantum Design for contributing to the heat capacity measurements, T. Sakakibara for sharing data of Ref. with us, L. Bühler for graphical design, and E. Abrahams, S. Kirchner, D. Natelson, A. Nevidomskyy, T. Park, and S. Wirth for fruitful discussions. The work in Vienna was funded by the Austrian Science Fund (P29296-N27 and DK W1243), the European Research Council (Advanced Grant 227378), and the US Army Research Office (ARO-W911NF-14-1-0496). The work at Rice was in part supported by the National Science Foundation (DMR-1920740) and the Robert A. Welch Foundation (C-1411) (A.C., E.M.N., C.-C.L., Q.S.), the Army Research Office (W911NF-14-1-0525) and a Smalley Postdoctoral Fellowship at the Rice Center for Quantum Materials (H.-H.L.), and the Big-Data Private-Cloud Research Cyberinfrastructure MRI Award funded by NSF (CNS-1338099) and by an IBM Shared University Research (SUR) Award. V.M. was supported by the FAPERJ (201.755/2015), R.Y. by the National Science Foundation of China (11374361 and 11674392) and the Ministry of Science and Technology of China (National Program on Key Research, 2016YFA0300504), K.I. by the National Science Foundation (Grant No. DMR-1508122), and A.M.S. by the SA-NRF (93549) and the UJ-FRC/URC. Q.S. acknowledges the hospitality of the Aspen Center for Physics (NSF, PHY-1607611) and the University of California at Berkeley. Tok00.1 authorTokura, Y. & authorNagaosa, N. titleOrbital physics in transition-metal oxides.
journalScience volume288, pages462 (year2000).Si16.1 authorSi, Q., authorYu, R. & authorAbrahams, E. titleHigh-temperature superconductivity in iron pnictides and chalcogenides. journalNat. Rev. Mater. volume1, pages16017 (year2016).Tak09.1 authorTakabayashi, Y., authorGanin, A. Y., authorJeglič, P., authorArčon, D., authorTakano, T., authorIwasa, Y., authorOhishi, Y., authorTakata, M., authorTakeshita, N., authorPrassides, K. & authorRosseinsky, M. J. titleThe Disorder-Free Non-BCS Superconductor Cs3C60 Emerges from an Antiferromagnetic Insulator Parent State. journalScience volume323, pages1585 (year2009).Bad16.1 authorBadoux, S., authorTabis, W., authorLaliberté, F., authorGrissonnanche, G., authorVignolle, B., authorVignolles, D., authorBéard, J., authorBonn, D. A., authorHardy, W. N., authorLiang, R., authorDoiron-Leyraud, N., authorTaillefer, L. & authorProust, C. titleChange of carrier density at the pseudogap critical point of a cuprate superconductor. journalNature volume531, pages210 (year2016).Ram15.1 authorRamshaw, B. J., authorSebastian, S. E., authorMcDonald, R. D., authorDay, J., authorTan, B. S., authorZhu, Z., authorBetts, J. B., authorLiang, R., authorBonn, D. A., authorHardy, W. N. & authorHarrison, N. titleQuasiparticle mass enhancement approaching optimal doping in a high-T_c superconductor. journalScience volume348, pages317 (year2015).Cao18.1 authorCao, Y., authorFatemi, V., authorFang, S., authorWatanabe, K., authorTaniguchi, T., authorKaxiras, E. & authorJarillo-Herrero, P. titleUnconventional superconductivity in magic-angle graphene superlattices. journalNature volume556, pages43 (year2018).Bal03.1 authorBalakirev, F. F., authorBetts, J. B., authorMigliori, A., authorOno, S., authorAndo, Y. & authorBoebinger, G. titleSignature of optimal doping in Hall-effect measurements on a high-temperature superconductor. journalNature volume424, pages912 (year2003).Par08.1 authorPark, T., authorSidorov, V. A., authorRonning, F., authorZhu, J.-X., authorTokiwa, Y., authorLee, H., authorBauer, E. D., authorMovshovich, R., authorSarrao, J. L. & authorThompson, J. D. titleIsotropic quantum scattering and unconventional superconductivity. journalNature volume456, pages366 (year2008).Sch00.1 authorSchröder, A., authorAeppli, G., authorColdea, R., authorAdams, M., authorStockert, O., authorv. Löhneysen, H., authorBucher, E., authorRamazashvili, R. & authorColeman, P. titleOnset of antiferromagnetism in heavy-fermion metals. journalNature volume407, pages351 (year2000).Pas04.1 authorPaschen, S., authorLühmann, T., authorWirth, S., authorGegenwart, P., authorTrovarelli, O., authorGeibel, C., authorSteglich, F., authorColeman, P. & authorSi, Q. titleHall-effect evolution across a heavy-fermion quantum critical point. journalNature volume432, pages881 (year2004).Sch16.1 authorSchuberth, E., authorTippmann, M., authorSteinke, L., authorLausberg, S., authorSteppke, A., authorBrando, M., authorKrellner, C., authorGeibel, C., authorYu, R., authorSi, Q. & authorSteglich, F. titleEmergence of superconductivity in the canonical heavy-electron metal YbRh_2Si_2. journalScience volume351, pages485 (year2016).Oik15.1 authorOike, H., authorMiyagawa, K., authorTaniguchi, H. & authorKanoda, K. titlePressure-induced Mott transition in an organic superconductor with a finite doping level. journalPhys. Rev. Lett. volume114, pages067002 (year2015).Si13.1 authorSi, Q. & authorPaschen, S. titleQuantum phase transitions in heavy fermion metals and Kondo insulators. journalPhys. 
Status Solidi B volume250, pages425 (year2013).Fri10.2 authorFriedemann, S., authorOeschler, N., authorWirth, S., authorKrellner, C., authorGeibel, C., authorSteglich, F., authorPaschen, S., authorKirchner, S. & authorSi, Q. titleFermi-surface collapse and dynamical scaling near a quantum-critical point. journalProc. Natl. Acad. Sci. U.S.A. volume107, pages14547 (year2010).Cus12.1 authorCusters, J., authorLorenzer, K., authorMüller, M., authorProkofiev, A., authorSidorenko, A., authorWinkler, H., authorStrydom, A. M., authorShimura, Y., authorSakakibara, T., authorYu, R., authorSi, Q. & authorPaschen, S. titleDestruction of the Kondo effect in the cubic heavy-fermion compound Ce_3Pd_20Si_6. journalNat. Mater. volume11, pages189 (year2012).Luo14.1 authorLuo, Y., authorPourovskii, L., authorRowley, S. E., authorLi, Y., authorFeng, C., authorGeorges, A., authorDai, J., authorCao, G., authorXu, Z., authorSi, Q. & authorOng, N. P. titleHeavy-fermion quantum criticality and destruction of the Kondo effect in a nickel-oxypnictide. journalNat. Mater. volume13, pages777 (year2014).Wu16.2 authorWu, L. S., authorGannon, W. J., authorZaliznyak, I. A., authorTsvelik, A. M., authorBrockmann, M., authorCaux, J.-S., authorKim, M. S., authorQiu, Y., authorCopley, J. R. D., authorEhlers, G., authorPodlesnyak, A. & authorAronson, M. C. titleOrbital-exchange and fractional quantum number excitations in an f-electron metal, Yb_2Pt_2Pb. journalScience volume352, pages1206 (year2016).Pro18.1x authorL. Prochaska, X. Li, D. C. MacFarland, A. M. Andrews, M. Bonta, E. F. Bianco, S. Yazdi, W. Schrenk, H. Detz, A. Limbeck, Q. Si, E. Ringe, G. Strasser, J. Kono, and S. Paschen. titleSingular charge fluctuations at a magnetic quantum critical point. journalarXiv:1808.02296(year2018).Si01.1 authorSi, Q., authorRabello, S., authorIngersent, K. & authorSmith, J. titleLocally critical quantum phase transitions in strongly correlated metals. journalNature volume413, pages804 (year2001).Col01.1 authorColeman, P., authorPépin, C., authorSi, Q. & authorRamazashvili, R. titleHow do Fermi liquids get heavy and die? journalJ. Phys.: Condens. Matter volume13, pagesR723 (year2001).Sen04.1 authorSenthil, T., authorVojta, M. & authorSachdev, S. titleWeak magnetism and non-Fermi liquids near heavy-fermion critical points. journalPhys. Rev. B volume69, pages035111 (year2004).Cai19.1x authorA. Cai, H. Hu, K. Ingersent, S. Paschen, and Q. Si. titleDynamical Kondo effect and Kondo destruction in effective models for quantum critical heavy fermion metals. journalarXiv:1904.11471(year2019).Shi97.1 authorShiina, R., authorShiba, H. & authorThalmeier, P. titleMagnetic-field effects on quadrupolar ordering in a Γ_8-quartet system CeB_6. journalJ. Phys. Soc. Jpn. volume66, pages1741 (year1997).Por16.1 authorPortnichenko, P. Y., authorPaschen, S., authorProkofiev, A., authorVojta, M., authorCameron, A. S., authorMignot, J.-M., authorIvanov, A. & authorInosov, D. S. titleIncommensurate short-range multipolar order parameter of phase II in Ce_3Pd_20Si_6. journalPhys. Rev. B volume94, pages245132 (year2016).Gri94.1 authorGribanov, A. V., authorSeropegin, Y. D. & authorBodak, O. I. titleCrystal structure of the compounds Ce_3Pd_20Ge_6 and Ce_3Pd_20Si_6. journalJ. Alloys Compd. volume204, pagesL9 (year1994).Pro09.1 authorProkofiev, A., authorCusters, J., authorKriegisch, M., authorLaumann, S., authorMüller, M., authorSassik, H., authorSvagera, R., authorWaas, M., authorNeumaier, K., authorStrydom, A. M. & authorPaschen, S. 
titleCrystal growth and composition-property relationship of Ce_3Pd_20Si_6 single crystals. journalPhys. Rev. B volume80, pages235107 (year2009).Dee10.1 authorDeen, P. P., authorStrydom, A. M., authorPaschen, S., authorAdroja, D. T., authorKockelmann, W. & authorRols, S. titleQuantum fluctuations and the magnetic ground state of Ce_3Pd_20Si_6. journalPhys. Rev. B volume81, pages064427 (year2010).Ono13.1 authorOno, H., authorNakano, T., authorTakeda, N., authorAno, G., authorAkatsu, M., authorNemoto, Y., authorGoto, T., authorDönni, A. & authorKitazawa, H. titleMagnetic phase diagram of clathrate compound Ce_3Pd_20Si_6 with quadrupolar ordering. journalJ. Phys.: Condens. Matter volume25, pages126003 (year2013).Ste01.1 authorStewart, G. R. titleNon-Fermi-liquid behavior in d- and f-electron metals. journalRev. Mod. Phys. volume73, pages797–855 (year2001).Mit10.1 authorMitamura, H., authorTayama, T., authorSakakibara, T., authorTsuduku, S., authorAno, G., authorIshii, I., authorAkatsu, M., authorNemoto, Y., authorGoto, T., authorKikkawa, A. & authorKitazawa, H. titleLow temperature magnetic properties of Ce_3Pd_20Si_6. journalJ. Phys. Soc. Jpn. volume79, pages074712 (year2010).Pfa13.1 authorPfau, H., authorDaou, R., authorLausberg, S., authorNaren, H. R., authorBrando, M., authorFriedemann, S., authorWirth, S., authorWesterkamp, T., authorStockert, U., authorGegenwart, P., authorKrellner, C., authorGeibel, C., authorZwicknagl, G. & authorSteglich, F. titleInterplay between Kondo suppression and Lifshitz transitions in YbRh_2Si_2 at high magnetic fields. journalPhys. Rev. Lett. volume110, pages256403 (year2013).Geg06.1 authorGegenwart, P., authorTokiwa, Y., authorWesterkamp, T., authorWeickert, F., authorCusters, J., authorFerstl, J., authorKrellner, C., authorGeibel, C., authorKerschl, P., authorMüller, K.-H. & authorSteglich, F. titleHigh-field phase diagram of the heavy-fermion metal YbRh_2Si2. journalNew J. Phys. volume8, pages171 (year2006).Hew97.1 authorHewson, A. C. titleThe Kondo Problem to Heavy Fermions (publisherCambridge University Press, addressCambridge, year1997).Pan94 authorPang, H. titleNon-fermi-liquid states in a generalized two-channel kondo model. journalPhys. Rev. Lett. volume73, pages2736 (year1994).Hur04 authorLe Hur, K., authorSimon, P. & authorBorda, L. titleMaximized orbital and spin kondo effects in a single-electron transistor. journalPhys. Rev. B volume69, pages045326 (year2004).zhu02 authorZhu, L. & authorSi, Q. titleCritical local-moment fluctuations in the bose-fermi kondo model. journalPhys. Rev. B volume66, pages024426 (year2002).zar02 authorZaránd, G. & authorDemler, E. titleQuantum phase transitions in the bose-fermi kondo model. journalPhys. Rev. B volume66, pages024427 (year2002).Shi05.1 authorShishido, H., authorSettai, R., authorHarima, H. & authorOnuki, Y. titleA drastic change of the Fermi surface at a critical pressure in CeRhIn_5: dHvA study under pressure. journalJ. Phys. Soc. Jpn. volume74, pages1103 (year2005).Mun13.1 authorMun, E. D., authorBud'ko, S. L., authorMartin, C., authorKim, H., authorTanatar, M. A., authorPark, J.-H., authorMurphy, T., authorSchmiedeshoff, G. M., authorDilley, N., authorProzorov, R. & authorCanfield, P. C. titleMagnetic-field-tuned quantum criticality of the heavy-fermion system YbPtBi. journalPhys. Rev. B volume87, pages075120 (year2013).Shi15.1 authorShimura, Y., authorTsujimoto, M., authorZeng, B., authorBalicas, L., authorSakai, A. & authorNakatsuji, S. 
titleField-induced quadrupolar quantum criticality in PrV_2Al_20. journalPhys. Rev. B volume91, pages241102 (year2015).Oni16.1 authorOnimaru, T., authorIzawa, K., authorMatsumoto, K. T., authorYoshida, T., authorMachida, Y., authorIkeura, T., authorWakiya, K., authorUmeo, K., authorKittaka, S., authorAraki, K., authorSakakibara, T. & authorTakabatake, T. titleQuadrupole-driven non-Fermi-liquid and magnetic-field-induced heavy fermion states in a non-Kramers doublet system. journalPhys. Rev. B volume94, pages075134 (year2016).Myd11.1 authorMydosh, J. A. & authorOppeneer, P. M. titleColloquium: Hidden order, superconductivity, and magnetism: The unsolved case of URu_2Si_2. journalRev. Mod. Phys. volume83, pages1301 (year2011).McC13.1 authorMcCollam, A., authorAndraka, B. & authorJulian, S. R. titleFermi volume as a probe of hidden order. journalPhys. Rev. B volume88, pages075102 (year2013).Bau02.1 authorBauer, E. D., authorFrederick, N. A., authorHo, P.-C., authorZapf, V. S. & authorMaple, M. B. titleSuperconductivity and heavy fermion behavior in PrOs_4Sb_12. journalPhys. Rev. B volume65, pages100506 (year2002).Mat12.1 authorMatsubayashi, K., authorTanaka, T., authorSakai, A., authorNakatsuji, S., authorKubo, Y. & authorUwatoko, Y. titlePressure-induced heavy fermion superconductivity in the nonmagnetic quadrupolar system PrTi_2Al_20. journalPhys. Rev. Lett. volume109, pages187004 (year2012).Kel14.1 authorKeller, A. J., authorAmasha, S., authorWeymann, I., authorMoca, C. P., authorRau, I. G., authorKatine, J. A., authorShtrikman, H., authorZarand, G. & authorGoldhaber-Gordon, D. titleEmergent SU(4) Kondo physics in a spin-charge-entangled double quantum dot. journalNat. Phys. volume10, pages145 (year2014).Nak16.2 authorNakamura, S., authorMatsui, K., authorMatsui, T. & authorFukuyama, H. titlePossible quantum liquid crystal phases of helium monolayers. journalPhys. Rev. B volume94, pages180501 (year2016).Neu06.1 authorNeumann, M., authorNyeki, J., authorCowan, B. & authorSaunders, J. titleBilayer ^3He: A simple two-dimensional heavy-fermion system with quantum criticality. journalScience volume317, pages1356 (year2007).Fri17.1 authorFriesen, M., authorGhosh, J., authorEriksson, M. A. & authorCoppersmith, S. N. titleA decoherence-free subspace in a charge quadrupole qubit. journalNat. Commun. volume8, pages15923 (year2017).
http://arxiv.org/abs/1709.09376v3
{ "authors": [ "V. Martelli", "A. Cai", "E. M. Nica", "M. Taupin", "A. Prokofiev", "C. -C. Liu", "H. -H. Lai", "R. Yu", "K. Ingersent", "R. Küchler", "A. M. Strydom", "D. Geiger", "J. Haenel", "J. Larrea", "Q. Si", "S. Paschen" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170927075817", "title": "Sequential localization of a complex electron fluid" }
National Key Laboratory for Novel Software Technology Nanjing University, Nanjing 210023, China [cor1]Email: [email protected] Auto-encoding is an important task which is typically realized by deep neural networks (DNNs) such as convolutional neural networks (CNN). In this paper, we propose EncoderForest (abbrv. eForest), the first tree ensemble based auto-encoder. We present a procedure for enabling forests to do backward reconstruction by utilizing the equivalent classes defined by decision paths of the trees, and demonstrate its usage in both supervised and unsupervised settings. Experiments show that, compared with DNN autoencoders, eForest is able to obtain lower reconstruction error with fast training speed, while the model itself is reusable and damage-tolerable. § INTRODUCTION The auto-encoder <cit.> is a class of models which aim to map the input to a latent space and map it back to the original space, with low reconstruction error as the objective. Previous approaches for building such devices mainly came from the neural network community. For instance, a neural network based auto-encoder usually consists of an encoder and a decoder. The encoder maps the input to a hidden layer and the decoder maps it back to the input space. By concatenating the two parts and setting the reconstruction error as the learning objective, back-propagation can be used for training such models. Auto-encoders are widely used for dimensionality reduction <cit.> and representation learning <cit.>, as well as in some more recent works on generative models such as Variational Auto-encoders <cit.>. Ensemble learning <cit.> is a powerful learning paradigm which trains multiple learners and combines them to tackle a problem. It is widely used in a broad range of tasks and demonstrates great performance. Tree ensemble methods, or forests, such as Random Forest <cit.>, are among the best off-the-shelf methods for supervised learning <cit.>. Other successful tree ensembles, such as gradient boosted decision trees (GBDTs) <cit.>, have also proven their ability during the past decade. Besides supervised learning, tree ensembles have also achieved great success in other tasks, such as the isolation forest <cit.>, which is an efficient unsupervised method for anomaly detection. Recently, a deep model based on forests has also been proposed <cit.>, and it demonstrated competitive performance with DNNs across a broad range of tasks with much fewer hyper-parameters. In this paper, we present the EncoderForest (abbrv. eForest), obtained by enabling a tree ensemble to perform forward encoding and backward decoding operations; it can be trained in either a supervised or an unsupervised fashion. Experiments show that the eForest approach has the following advantages: * Accurate: Its experimental reconstruction error is lower than that of MLP or CNN based auto-encoders. * Efficient: eForest on a single KNL (many-core CPU) trains even faster than a CNN auto-encoder on a Titan-X GPU. * Damage-tolerable: The trained model works well even when it is partially damaged. * Reusable: A model trained on one dataset can be directly applied to other datasets in the same domain. The rest of the paper is organized as follows: first we introduce related work, followed by the proposed eForest model; then experimental results are presented; finally, conclusions and future work are discussed. § RELATED WORK Auto-encoding is an important task for learning associations from data, and it is one of the key ingredients of deep learning <cit.>.
The study of auto-encoding dates back to <cit.>, whose goal was to learn an auto-association relation that can be used for representation learning <cit.>. Most of the previous approaches to auto-encoding are neural network based models. For instance, the under-complete auto-encoder, whose purpose is to compress data, is used for dimensionality reduction <cit.> and efficient coding <cit.>; the sparse auto-encoder places a sparsity penalty on the activation layer <cit.>, which is related to sparse coding <cit.>; and denoising auto-encoders <cit.> force the model to learn the mapping from a corrupted input to its noiseless version. Applications range from computer vision <cit.> to natural language processing <cit.> and semantic hashing <cit.>, which uses auto-encoders in information retrieval tasks. In fact, the concept of deep learning started with training a stack of auto-encoders in a greedy layer-wise fashion <cit.>. Auto-encoding has also been applied in more recent works such as the variational auto-encoder for generative modelling <cit.>. Ensembles of decision trees, also called forests, are popularly used in ensemble learning <cit.>. For example, Bagging <cit.> and Boosting <cit.> usually take decision trees as component learners. Other famous decision tree ensemble methods include Random Forest <cit.> and GBDT <cit.>; the former is a variant of Bagging, whereas the latter is a variant of Boosting. Some efficient implementations of GBDT, e.g. XGBoost <cit.>, have been widely used in industry and in various data analytics competitions. In addition to the above tree ensembles constructed in the supervised setting, unsupervised tree ensembles have also proven to be useful in various domains. For example, iForest <cit.> is an unsupervised forest designed for anomaly detection, and its building block, the completely-random decision tree, has also been applied to tasks such as streaming new-class learning <cit.>. Note that both supervised and unsupervised forests, i.e. Random Forest and the completely-random tree forest, have been simultaneously exploited in the construction of the deep forest <cit.>. § THE PROPOSED METHOD An auto-encoder has two basic functions: encoding and decoding. There is no difficulty for a forest to do encoding, because at least the leaf-node information can be regarded as a kind of encoding; needless to say, subsets of nodes or even the branches of paths may be able to offer more information for encoding. First, we propose the encoding procedure of EncoderForest. Given a trained tree ensemble model of T trees, the forward encoding procedure takes an input instance and sends it to the root node of each tree in the ensemble; once the instance has traversed down to the leaf nodes of all trees, the procedure returns a T-dimensional vector, where element t is the integer index of the leaf node reached in tree t. A more concrete algorithm for forward encoding is shown in Algorithm <ref>. Notice that this encoding procedure is independent of the particular learning rule used to split the nodes of the trees. For instance, the decision rules can be learned in a supervised setting, as in random forest, or in an unsupervised setting, as with completely random trees. On the other hand, the decoding function is not that obvious. In fact, forests are generally used for forward prediction, by going from the root of each tree to the leaves, whereas it is unknown how to do backward reconstruction, i.e. inducing the original sample from the information obtained at the leaves.
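As a concrete illustration of the forward encoding just described (and before turning to decoding), the sketch below shows how the T-dimensional leaf-index code could be obtained from a scikit-learn style ensemble; the helper name eforest_encode and the use of scikit-learn are our illustrative assumptions rather than the paper's own implementation of Algorithm <ref>.

```python
# Minimal sketch of forward encoding, assuming a scikit-learn style forest.
# `eforest_encode` is an illustrative name, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # one possible supervised forest

def eforest_encode(forest, X):
    """Return, for every row of X, the T-dimensional vector of leaf-node
    indices (one entry per tree), i.e. the forward encoding of eForest."""
    # sklearn's `apply` gives the index of the leaf each sample falls into,
    # for every tree in the ensemble.
    return forest.apply(np.asarray(X))   # shape: (n_samples, n_trees)

# Example usage in the supervised setting:
#   forest = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
#   codes = eforest_encode(forest, X_test)   # each row is a 500-dimensional code
```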
Suppose we are handling a binary classification task with four attributes. The first and second attributes are numerical; the third is a boolean attribute with values YES, NO; the fourth is a triple-valued attribute with values RED, BLUE, GREEN. Given an instance x, let x_i denote the value of x on the i-th attribute. Now suppose that in the encoding step we have generated a forest as shown in Fig <ref>. We only know the leaf nodes into which the instance x falls, shown in Fig <ref> as the red nodes, and wish to reconstruct x. Here, we propose an effective yet simple, possibly the simplest, strategy for backward reconstruction in forests. First, each leaf node corresponds to a path coming from the root, and we can identify this path from the leaf node without uncertainty. For example, in Fig <ref> the identified paths are highlighted in red. Second, each path corresponds to a symbolic rule; for example, the highlighted tree paths correspond to the following rule set, where RULE_i corresponds to the path of the i-th tree in the forest and ¬ denotes the negation of a judgment: RULE_1: (x_1 ≥ 0) ∧ (x_2 ≥ 1.5) ∧ ¬(x_3==RED) ∧ ¬(x_1 ≥ 2.7) ∧ ¬(x_4==NO) RULE_2: (x_3 == GREEN) ∧ ¬(x_2 ≥ 5) ∧ (x_1 ≥ 0.5) ∧ ¬(x_2 ≥ 2) ... RULE_n: (x_4 == YES) ∧ ¬(x_2 ≥ 8) ∧ ¬(x_1 ≥ 1.6) This rule set can be further adjusted into a more succinct form: RULE_1': (2.7 ≥ x_1 ≥ 0) ∧ (x_2 ≥ 1.5) ∧ ¬(x_3==RED) ∧ (x_4==YES) RULE_2': (x_1 ≥ 0.5) ∧ ¬(x_2 ≥ 2) ∧ (x_3 == GREEN) ... RULE_n': ¬(x_1 ≥ 1.6) ∧ ¬(x_2 ≥ 8) ∧ (x_4 == YES) Then, we can derive the Maximal-Compatible Rule (MCR). The MCR is a rule such that the coverage of each of its components cannot be enlarged, otherwise an incompatibility would occur. For example, from the above rule set we get the corresponding MCR: (1.6 ≥ x_1 ≥ 0.5) ∧ (2 ≥ x_2 ≥ 1.5) ∧ (x_3 == GREEN) ∧ (x_4 == YES) For each component of this MCR, such as (2 ≥ x_2 ≥ 1.5), its coverage cannot be enlarged; for example, if it were enlarged to (3 ≥ x_2 ≥ 1.5), it would conflict with the condition ¬(x_2 ≥ 2) in RULE_2. A more detailed description is given in Algorithm <ref>. It is very easy to prove the following theorem, and thus we omit the proof. The original sample must reside in the input region defined by the MCR. Thus, after obtaining the MCR, we can reconstruct the original sample. For categorical attributes such as x_3 and x_4, the original sample must take the values given in the MCR; for numerical attributes, such as x_2, we can take a representative value, such as the mean of the interval (1.5, 2). Thus, the reconstructed sample is x = [1.05, 1.75, GREEN, YES]. Note that for numerical values we have many alternative ways of reconstructing, such as taking the median, max, or min, or even computing histograms. Given the above description, we now summarize the backward decoding of eForest. Concretely, given a trained forest with T trees along with the forward encoding x_enc in R^T of a particular instance, backward decoding first locates the individual leaf nodes via the elements of x_enc, and then obtains the T decision rules of the corresponding decision paths. Then, by calculating the MCR, we get a reconstruction from x_enc back to x_dec in the input region. A concrete algorithm is shown in Algorithm <ref>. By enabling the eForest to conduct the forward encoding and backward decoding operations, auto-encoding tasks can thus be realized.
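The backward decoding just summarized can likewise be sketched in code. The following is a hedged illustration for purely numerical attributes, again assuming a scikit-learn style forest: it walks each decision path backwards from the encoded leaf, intersects the resulting interval constraints to obtain the MCR, and returns the interval mid-points as the representative reconstruction. The helper names are ours, and details such as categorical attributes are omitted.

```python
# Hedged sketch of backward decoding via the MCR (numerical attributes only).
import numpy as np

def path_to_leaf(tree, leaf):
    """List of (feature, threshold, went_left) decisions on the root-to-leaf path."""
    parent, went_left = {}, {}
    for node in range(tree.node_count):
        for child, is_left in ((tree.children_left[node], True),
                               (tree.children_right[node], False)):
            if child != -1:
                parent[child], went_left[child] = node, is_left
    path, node = [], leaf
    while node in parent:
        p = parent[node]
        path.append((tree.feature[p], tree.threshold[p], went_left[node]))
        node = p
    return path

def eforest_decode(forest, code, n_features, feat_min=0.0, feat_max=1.0):
    """Intersect the rules of all T decision paths (the MCR) and return a
    representative point, here the mid-point of each surviving interval."""
    lo = np.full(n_features, feat_min, dtype=float)   # start from the global
    hi = np.full(n_features, feat_max, dtype=float)   # attribute ranges
    for est, leaf in zip(forest.estimators_, code):
        for feat, thr, went_left in path_to_leaf(est.tree_, leaf):
            if went_left:                 # rule of the form  x[feat] <= thr
                hi[feat] = min(hi[feat], thr)
            else:                         # rule of the form  x[feat] >  thr
                lo[feat] = max(lo[feat], thr)
    return (lo + hi) / 2.0                # mean; min/max/median are also valid

# Example: x_dec = eforest_decode(forest, codes[0], n_features=X_test.shape[1])
```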
In addition, although beyond the scope of this paper, the eForest model might give some insight towards a theoretical treatment of the representation learning ability of tree ensemble models, as well as helping to design new models for deep forests. § EXPERIMENTS §.§ Image Reconstruction We evaluate the performance of eForest in both the supervised and the unsupervised setting. In this implementation, we take Random Forest <cit.> to construct the supervised forest, whereas we take the completely-random forest <cit.> as the routine for the unsupervised forest. Notice that other decision tree ensemble construction methods can also be used for this purpose. Concretely, for the supervised eForest, each non-terminal node randomly selects √(d) attributes of the input space and picks the best possible split in terms of information gain; for the unsupervised eForest, each non-terminal node randomly picks one attribute and makes a random split. In our experiments we simply grow the trees to pure leaves, or terminate when there are only two instances in a node. We evaluate eForests containing 500 or 1,000 trees, denoted by eForest_500 and eForest_1000 respectively. Note that eForest_N re-represents the input instance as an N-dimensional vector. Since auto-encoders, especially DNN-based auto-encoders, are mainly designed for image tasks, in this section we run experiments on image data first. We use the MNIST dataset <cit.>, which consists of 60,000 gray-scale 28×28 images (a 784-dimensional vector per sample) for training and 10,000 for testing. We also use the CIFAR-10 dataset <cit.>, a more complex dataset consisting of 50,000 colored 32×32 images (therefore each image is in R^1024 per channel) for training and 10,000 colored images for testing. For colored images, eForest processes each channel separately to save memory. MLP-based auto-encoders (MLP-AEs) and a convolutional neural network based auto-encoder (CNN-AE) are used for comparison. For the MLP-AEs, we follow the suggestions in <cit.> and use two architectures, with 500-dimensional and 1000-dimensional inner representations, respectively. Concretely, the MLP-AE MLP_1 for MNIST is (input-1024-500-1024-output) and MLP_2 for MNIST is (input-2048-1000-2048-output). Likewise, the MLP-AE MLP_1 for CIFAR-10 is (input-4096-1024-500-1024-4096-output) and MLP_2 for CIFAR-10 is (input-4096-2048-1000-2048-4096-output). For the CNN-AE, we follow the implementation in the Keras documentation [https://blog.keras.io/building-autoencoders-in-keras.html] with the following architecture: it consists of a conv-layer with 16 (3 × 3) kernels followed by 2 conv-layers with 8 (3 × 3) kernels, and each conv-layer is followed by a 2 × 2 max-pooling layer. The decoder we used has the same structure as the encoder, except that up-sampling layers are used instead of pooling layers (for mapping the data back to its original input space). ReLUs are used as activations and log-loss is used as the training objective. During training, dropout is set to 0.25 per layer. Experimental results are summarized in Table <ref>. For the DNN auto-encoders, cross-validation is used for hyper-parameter tuning; for eForest, we simply take the minimum value of the interval defined by the corresponding MCR, as indicated in the last sampling step of decoding. It can be seen that eForest achieves the best performance. Some reconstructed samples from the test set are shown in Figure <ref>.
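The two forest variants used in these experiments could, for instance, be instantiated as in the sketch below; here RandomTreesEmbedding serves as a stand-in for the completely-random forest, and the exact growing and stopping rules may differ from the authors' implementation.

```python
# Possible instantiation of the two eForest variants with scikit-learn
# (an assumption for illustration; not the paper's own code).
from sklearn.ensemble import RandomForestClassifier, RandomTreesEmbedding

# Supervised eForest: each split examines sqrt(d) random attributes and takes
# the best split w.r.t. information gain; trees grown until leaves are (almost) pure.
supervised_forest = RandomForestClassifier(
    n_estimators=1000, max_features="sqrt", criterion="entropy",
    min_samples_leaf=2, n_jobs=-1)
# supervised_forest.fit(X_train, y_train)

# Unsupervised eForest: completely-random trees -- one random attribute and a
# random cut point per split; no labels are required.
unsupervised_forest = RandomTreesEmbedding(
    n_estimators=1000, max_depth=None, min_samples_leaf=2, n_jobs=-1)
# unsupervised_forest.fit(X_train)

# In both cases forest.apply(X) gives the 1000-dimensional leaf-index code,
# which can then be decoded through the MCR as sketched earlier.
```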
This result is disappointing for CNN-based auto-encoders on the CIFAR-10 dataset, given that we are using the architecture recommended for image auto-encoders by the Keras documentation and have carefully tuned the other hyper-parameters via cross-validation. We believe that the DNN auto-encoders could achieve better performance with further tuning; nevertheless, the eForest auto-encoder works well without careful parameter tuning. It is worth noting that the unsupervised eForest had better performance than the supervised eForest, given the same number of trees. Note that each decision tree path corresponds to a rule, and a longer rule defines a tighter MCR. We conjecture that a tighter MCR might lead to a more accurate reconstruction, and therefore a forest with greater tree depth may have better performance. To examine this, we measured the maximum depth as well as the average depth over all trees on the MNIST dataset, as summarized in Table <ref>. The experimental results give positive support, as shown in Table <ref>: an unsupervised eForest indeed has a greater average depth. §.§ Text Reconstruction In addition to image tasks, other tasks may also require auto-encoders. Thus, we study the performance of eForest for text reconstruction. Note that DNN auto-encoders are mainly designed for images, and if they are to be applied to texts, some additional mechanism such as a word2vec embedding <cit.> is required for pre-processing. Here, in our experiments, we want to study the performance of auto-encoding applied directly to text data. Concretely, we used the IMDB dataset <cit.>, which contains 25,000 documents for training and 25,000 documents for testing. Each document was stored as a 5,000-dimensional vector via a tf-idf transformation. We used exactly the same configuration of eForests as for the image data. Cosine distance is used as the evaluation metric, which is the standard metric for measuring similarities between documents represented by tf-idf vectors; the lower the cosine distance, the better. The results are summarized in Table <ref>. It should be highlighted that CNN-based auto-encoders cannot be applied to this kind of input data at all, and MLP-based auto-encoders are barely useful. After extensive cross-validation for parameter search, the best structure for the MLP we could obtain is (Input-4096-2048-1024-2048-4096-Output), with a performance of 0.512, more than two hundred times worse than eForest. From the above results, we see that eForest can also be applied to text data with high performance. In addition, notice that by using only 10% of the bits of the representation (an eForest of 500 trees trained in an unsupervised manner), eForest can already reconstruct the original input with high accuracy. This is a promising result which could be further utilized for data compression. §.§ Computation Efficiency As is common for tree ensemble models, eForest is inherently apt for parallel implementation. We implemented eForest on a single KNL-7250 (which belongs to the Intel Xeon Phi many-core product family), and achieved a 67.7× speedup for training 1,000 trees in an unsupervised setting, compared with a serial implementation. For comparison, we trained the corresponding MLP-AEs and CNN-AEs with the same configurations as in the previous sections on one Titan-X GPU; the training cost as well as the per-sample testing cost are summarized in Table <ref>. From these results, eForest is more than 100 times faster at training, but is slower at per-sample encoding/decoding than DNN-based auto-encoders.
We hope that the decoding can be sped up by further optimization in the future. §.§ Damage Tolerance There are cases in which a model becomes partially damaged due to various reasons, such as memory or disk failure. The ability of a partially damaged model to keep functioning in such cases is one characteristic of model robustness. The eForest approach to auto-encoding is such a model by nature, since we can still estimate the MCR when only a subset of the trees in the forest is available. In this section, we test the damage tolerance empirically on the CIFAR-10 and MNIST datasets. Concretely, at testing time we randomly drop 25%, 50% and 75% of the trees and measure the reconstruction error based on the pattern recovered using only the remaining trees. For comparison, we also randomly turned off 25%, 50% and 75% of the neurons in MLP_2, with a structure exactly the same as in the previous section. The performance results are illustrated in Figure <ref>. From these results, the eForest approach is more damage-tolerant than an MLP-AE, and the unsupervised eForest is the most damage-tolerant model of all. §.§ Model Reuse for eForest In an open environment, the test data for encoding/decoding may follow a different distribution from the training data. In this section, we test the ability for model reuse: the goal is to train a model on one dataset and reuse it on another dataset without any modification or re-training. The ability for model reuse in this context is an important property for future machine learning developments <cit.>. Concretely, we evaluate the ability for model reuse as follows. We trained an unsupervised and a supervised eForest on the CIFAR-10 dataset (converted and rescaled to 28×28 gray-scale data), each consisting of 1,000 trees, and then used exactly these models to encode/decode data from the MNIST test set. Likewise, we also trained eForests consisting of 1,000 trees on the MNIST dataset, and directly tested their encoding/decoding performance on the Omniglot dataset <cit.>. For a fair comparison, we trained a CNN auto-encoder and an MLP auto-encoder on the same dataset without fine-tuning. The architectures of the MLP/CNN-AEs and the training procedures are the same as in the previous sections. MSE is used for performance evaluation. Some randomly picked reconstructed samples are presented in Fig. <ref>, and the numerical evaluation on the whole test set is presented in Table <ref>. It can be seen that eForest out-performs the DNN approaches by a factor of more than 100. In particular, an eForest trained on CIFAR-10 performs better encoding/decoding on the MNIST dataset, even though these two datasets are quite different. This shows the generalization ability of eForest in terms of model reuse. § CONCLUSION In this paper, we propose EncoderForest (abbrv. eForest), the first tree-ensemble-based auto-encoder model, by devising an effective procedure for enabling forests to reconstruct the original pattern by utilizing the Maximal-Compatible Rule (MCR) defined by the decision paths of the trees. Experiments demonstrate its good performance in terms of accuracy and speed, as well as its damage tolerance and model reusability. In particular, on text data, by using only 10% of the input bits, the model is still able to reconstruct the original data with high accuracy.
Another advantage of eForest lies in the fact that it can be applied directly to symbolic attributes or mixed attributes, without transforming the symbolic attributes into numerical ones; this is especially relevant considering that such transformation procedures generally either lose information or introduce additional bias. Note that the supervised and unsupervised eForests are exactly the two ingredients utilized simultaneously in each level of the deep forest constructed by gcForest, so this work might offer some additional understanding of gcForest <cit.>. Constructing a deep eForest model is also an interesting future issue.
http://arxiv.org/abs/1709.09018v1
{ "authors": [ "Ji Feng", "Zhi-Hua Zhou" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170926135434", "title": "AutoEncoder by Forest" }
Russo-Seymour-Welsh estimates for the Kostlan ensemble of random polynomials. Mathematical Institute, University of Oxford, [email protected]; Mathematical Institute, University of Oxford (currently at King's College London), [email protected]; Department of Mathematics, King's College London, [email protected]. 2010 Mathematics Subject Classification: 60G15, 60K35, 30C15. We study the percolation properties of the nodal structures of random fields. Lower bounds on crossing probabilities (RSW-type estimates) of quads by nodal domains or nodal sets of Gaussian ensembles of smooth random functions are established under the following assumptions: (i) sufficient symmetry; (ii) smoothness and non-degeneracy; (iii) local convergence of the covariance kernels; (iv) asymptotically non-negative correlations; and (v) uniform rapid decay of correlations. The Kostlan ensemble is an important model of Gaussian homogeneous random polynomials. An application of our theory to the Kostlan ensemble yields RSW-type estimates that are uniform with respect to the degree of the polynomials and quads of controlled geometry, valid on all relevant scales. This extends the recent results on the local scaling limit of the Kostlan ensemble, due to Beffara and Gayet. I. Wigman December 30, 2023 § INTRODUCTION §.§ The Kostlan ensemble The Kostlan ensemble of homogeneous degree-n polynomials in m+1≥ 2 variables is the Gaussian random field f_n: ℝ^m+1→ℝ defined as f_n(x)=f_n;m(x) = ∑_|J|=n√(nJ) a_J x^J, where J=(j_0,…,j_m) is a multi-index, |J|=j_0+…+j_m, nJ = n!/(j_0!·…· j_m!) is the multinomial coefficient, and {a_J} are i.i.d. standard Gaussian random variables. Since f_n is homogeneous, it is also natural to view the Kostlan ensemble as the Gaussian random field on the unit m-dimensional sphere 𝕊^m that is the restriction of (<ref>) to 𝕊^m. The natural extension of (<ref>) to ℂ^m+1 is known as the `complex Fubini-Study' ensemble. In this paper we are interested in various geometric properties of the nodal set of the Kostlan ensemble, i.e. the zero set of f_n, particularly when the degree n is large. Figure <ref> depicts the nodal domains of a sample of the m=2-dimensional Kostlan ensemble of degree 300 on 𝕊^2. Since f_n is either even or odd depending on n, its nodal set can be naturally considered as a degree-n hypersurface (i.e. an algebraic variety of co-dimension one) on the projective space ℝℙ^m. As we explain below, the Kostlan ensemble is a natural model for a `typical' homogeneous polynomial, and hence one may think of its nodal set as a `typical' real projective hypersurface. The Kostlan ensemble can be equivalently defined as the canonical Gaussian element in the Hilbert space of homogeneous degree-n polynomials in m+1 variables spanned by the collection {√(nJ) x^J}_|J|=n as its orthonormal basis. Restricted to this space, the associated scalar product is, up to the constant √(n!), equal to the scalar product in the Bargmann-Fock space <cit.>, i.e. the space of all analytic functions on ℂ^m+1 such that f_BF^2=1/π^m+1∫_ℂ^m+1|f(z)|^2 e^-‖z‖^2 d z<∞ with the scalar product ⟨ f,g ⟩_BF=1/π^m+1∫_ℂ^m+1 f(z)g̅(z) e^-‖z‖^2 d z, playing an important role in quantum mechanics. The restriction of the scalar product (<ref>) to this space of degree-n homogeneous polynomials satisfies the following important property, relevant in our setting: it is the unique (up to a scale factor) scalar product on that space that is invariant w.r.t. the unitary group.
In other words, the Kostlan ensemble (<ref>) is the real trace of the unique unitary invariant Gaussian ensemble of homogeneous polynomials (although there exist many other ensembles invariant w.r.t. the orthogonal transformations <cit.>). In particular, the induced distribution on the space of hypersurfaces on ℝℙ^m is also invariant w.r.t. the unitary group, which justifies our description of the nodal set of the Kostlan ensemble as a natural model for a `typical' real projective hypersurface. As mentioned above, it will be convenient to consider f_n as a Gaussian random field on the unit sphere 𝕊^m, and henceforth we take exclusively this view. Computing explicitly from (<ref>), one may evaluate its covariance kernel κ_n:𝕊^m×𝕊^m→ℝ to be κ_n(x,y)=𝔼[f_n(x)· f_n(y)] = (⟨ x,y⟩)^n = (cosθ(x,y))^n, where for x,y∈𝕊^m we denote by θ(x,y) the angle between x and y, also equal to the spherical distance between these points; this covariance kernel determines f_n uniquely via Kolmogorov's Theorem. The random field f_n on 𝕊^m is of high merit since it is rotationally invariant and also admits a natural scaling around every point, understood in the following way. Let us fix x_0 ∈𝕊^m, and define the scaled covariance kernel on ℝ^m×ℝ^m by K_x_0;n(x,y)= κ_n(exp_x_0(x/√(n)),exp_x_0(y/√(n))), where exp_x_0:ℝ^m→𝕊^m is the exponential map on the sphere based at x_0. Then, as is shown formally in section <ref> below, the scaled covariance K_x_0;n(x,y) satisfies the convergence K_x_0;n(x,y)→ K_∞(x,y)=e^-‖x-y‖^2/2 along with all its derivatives, locally uniformly in x,y ∈ℝ^m; the r.h.s. of (<ref>) is the defining covariance kernel of the Bargmann-Fock field on ℝ^m, discussed further below. §.§ RSW estimates for random subsets of Euclidean space In percolation theory the RSW estimates <cit.> are uniform lower bounds for crossing probabilities of various percolation processes, most fundamentally for Bernoulli percolation. These are a crucial input into establishing the more refined properties of percolation processes, such as the sharpness of the phase transition and scaling limits for the interfaces of percolation clusters. Consider a periodic lattice (i.e. a periodic set of nodes and edges/bonds between each pair of adjacent nodes), and a number p∈ [0,1]. In Bernoulli bond percolation each edge of the lattice is independently either open with probability p or closed with probability 1-p. This defines a (random) percolation subgraph of the lattice containing all vertices and only the open edges. Alternatively one can think of colouring edges independently black (with probability p) or white (with probability 1-p); in this case the percolation subgraph is the black sub-graph. A rather simple argument shows that there exists a critical probability: a number p_c∈ (0,1) such that for all p>p_c the percolation subgraph a.s. contains an infinite percolation cluster (connected component), and for all p<p_c a.s. no such component exists. The more subtle behaviour of the percolation process at p=p_c, critical percolation, is of high intrinsic interest. Apart from being one of the most studied lattice models, it is also believed <cit.> to represent the nodal structure of Laplace eigenfunctions on `generic' chaotic manifolds, in the high energy limit. For lattices possessing sufficient symmetries, the corresponding critical probability should equal p_c=1/2; for the square lattice this was established rigorously by Kesten <cit.>. Let us assume that the lattice is regularly embedded in ℝ^2, e.g. the canonical embedding of the square lattice as ℤ^2 in ℝ^2.
For ρ>1, s>0 and x_0∈^2 a box-crossing event is the event that a rectangleR = x_0+[-ρ s/2,ρ s/2]× [-s/2,s/2]centred at x_0 of size s×ρ s is traversed horizontally by a black cluster, i.e. there exists a connected componentofsuch that , restricted to R, intersects both {x_0 -ρ s/2}× [-s/2,s/2] and {x_0 + ρ s/2}× [-s/2,s/2].The basic RSW estimates for critical percolation are the assertion that, for every ρ>1 the corresponding crossing probability is bounded away from 0 uniformly in the scale s>0, i.e. there exists a number c(ρ)>0 such that the probability of a box-crossing event is ≥ c(ρ) for all s>0, x_0∈^2. The analogous estimates hold for quads, i.e. triples Q = (D;γ,γ'), where D is a piecewise-smooth domain, and γ,γ'⊆∂ U are two disjoint boundary curves; in this case the RSW estimates assert that there exists a constant c(D; γ,γ')>0 such that the probability p(D;γ,γ';s) that sD = {sx: x ∈ D} contains a black cluster intersecting both sγ and sγ' is at least c(D; γ,γ') for every s>0.In the more general setting of random subsets of Euclidean space, Tassion  <cit.> recently showed the validity of RSW estimates for the Voronoi percolation. Let ⊆^2 be a Poisson point process on ^2 with unit intensity, and for each x∈ construct the associated (random) Voronoi cell_x = {z∈^2: ∀ y∈∖{x}→ d(z,y)≥ d(z,x)};the various Voronoi cells tile the plane disjointly save for boundary overlaps. Each of the cells is coloured black or white independently with probabilities p and 1-p respectively; here again, by a duality argument, the critical probability is p_c=1/2 <cit.>. In this setting Tassion  <cit.> proved that RSW estimates hold on all scales; a somewhat weaker version due to Bolobas-Riordan  <cit.> established that the RSW estimates hold for an unbounded subsequence of scales. §.§ RSW estimates for the Bargmann-Fock space Our starting point is the recent work of Beffara-Gayet <cit.> that established the RSW estimates for the nodal sets of a family of stationary smooth Gaussian random fields on ℝ^2, with positive and rapidly decaying correlations satisfying sufficient symmetry; the motivating and main example of such a field was the scaling limit of the Kostlan ensemble (<ref>) for dimension m=2. To the best of our knowledge, along with the very recent announcement of Nazarov-Sodin on the variance of the number of nodal domains (to be published), Beffara-Gayet's result is the only heretofore known rigorous evidence or manifestation for the conjectured connections  <cit.> between percolation theory and nodal patterns.Let g_∞:^2→ be the random field indexed by (x_1,x_2)∈^2 corresponding to the covariance kernel K_∞ on the r.h.s. of (<ref>). Then g_∞ is an isotropic random field, a.s. smooth, which may be constructed explicitly as the seriesg_∞(x) = ∑_i,j=0^∞a_ij1/√(i!j!) x_1^ix_2^jwith {a_ij} i.i.d. standard Gaussian random variables, and where the convergence is understood locally uniformly; hence the sample paths of g_∞ are a.s. real analytic. Equivalently, recall the Bargmann-Fock space in section <ref> above, and define the spaceof analytic functions on ^2 that admit an analytic extension to ^2 which lies in the Bargmann-Fock space; equip this space with the scalar product ⟨·, ·⟩_BF induced from (<ref>). We may then think of g_∞ in (<ref>) as the canonical Gaussian element of(c.f.  
<cit.>), normalised to have unit variance.Define the nodal components {_i}_i of g_∞ to be the connected components of the nodal set g_∞^-1(0), and the nodal domains {_i}_i of g_∞ to be the connected components of the complement ^2∖ g_∞^-1(0) of the nodal set; a.s. all the nodal components {_i} are simple smooth curves. Nazarov and Sodin  <cit.> proved that the number of nodal components _i entirely contained in the disk of radius R is asymptotic to c_NS· R^2 with c_NS>0 the `Nazarov-Sodin constant of g_∞'. The main result of Beffara-Gayet  <cit.> was that the RSW estimates hold for the complement of the nodal set on all scales, and for the nodal set itself on all sufficiently large scales. The restriction to sufficiently large scales is natural, since the probability that the nodal set intersects a domain tends to zero with the size of the domain.As was mentioned at the beginning of section <ref>, other than for g_∞ the result in <cit.> also applies to a (somewhat limited) family of Gaussian random fields; these are fields which have sufficiently nice properties so that Tassion's aforementioned techniques and ideas are applicable. Since Tassion's ideas are also instrumental for the proofs of the results of this paper, the generality of our results are also limited in a similar way.§.§ Statement of the principal result: RSW estimates for the Kostlan ensemble Our aim is to prove the analogous RSW estimates for the m=2-dimensional Kostlan ensemble (<ref>), without passing to the limit. In light of the discussion in subsection <ref> above, these estimates can be interpreted as uniform bounds on crossing probabilities for a `typical' algebraic curve on ℝℙ^2. The RSW estimates that we establish are stronger than those which can be deduced from the corresponding estimates <cit.> for the Bargmann-Fock limit field (<ref>), since they also hold on macroscopic scales. Indeed, our main result (Theorem <ref> below) establishes RSW estimates that hold uniformly on the projective space (or sphere, after removal of antipodal points).Naturally one could try to work in the same Euclidean setting as was used to establish the RSW estimates <cit.> on the Bargmann-Fock limit field (<ref>). One would then consider  <cit.> the projection of f_n on the Euclidean space via the natural embedding π:^2↪^2 with x=(x_1,x_2)↦ (1:x_1:x_2); in this case the corresponding covariance kernel off̃_̃ñ(x) = f_n(π(x))on ^2, normalised to be unit variance, isλ_n(x,y)= (1+⟨ x,y ⟩)^n/(1+x^2)^n/2· (1+y^2)^n/2,where · is the standard Euclidean norm on ^2.Unfortunately this model does not enjoy particularly nice properties, being neither stationarity nor invariant w.r.t. negation of the second coordinate, key ingredients in Beffara-Gayet's (and Tassion's) argument. Our primary observation is that these properties do hold for the spherical ensemble (<ref>). This will allow us to establish the RSW estimates directly for the spherical model, our main result, which we now prepare the ground for. We begin by formally defining the RSW estimates as they apply to general sequences of random sets on the sphere; later this will be extended in an analogous way to the flat torus, see section <ref> below. Let us start by introducing `quads' and their associated crossing events (c.f. the discussion in section <ref> above).A quad Q = (D; γ, γ') is a piecewise-smooth simply-connected (spherical) domain D ⊂𝕊^2 and the choice of two disjoint boundary arcsγ,γ' ⊂∂ D. When we consider a quad Q as a set, we will identify it with the closure of D. 
For each X ⊆𝕊^2 we denote by Quad_X the collection of quads Q ⊆ X.To each quad Q = (D; γ, γ') and random subset 𝒮 of 𝕊^2 we associate the `crossing event' _Q() that a connected component of , restricted to D, intersects both γ and γ'. We shall sometimes use a phrase such as `Q is crossed by ' to describe the event _Q(). Rather than stating the RSW estimates for rescaled boxes or quads (as was done in <cit.> and <cit.>for instance), in non-Euclidean settings it is natural to state these estimates for a more general class of quads that can be `uniformly crossed by chains of boxes'; we introduce this concept now as it applies to the sphere. The following definition is rather technical but it is well illustrated by Figure <ref>. * For each a, b > 0, an a × b (spherical) rectangle D ⊂𝕊^2 is a simply-connected domain that is bounded by four geodesic line-segments, with all four internal angles equal, and such that the non-adjacent pairs of boundary components have length a and b respectively.We refer to the four boundary components of a rectangle as its `sides', and shall call a rectangle with equal side-lengths a `square'. * An a × b box B is a quad Q = (D; γ, γ') in 𝕊^2 such that D is an a × b rectangle and such that γ and γ' are the opposite sides of length a. We refer to the sides of B other than γ and γ' as the `lateral' sides. For each X ⊆𝕊^2, c ≥ 1 and s > 0, we denote by Box_X; c (s) the collection of all a × b boxes B ⊆ X such that s ≤ a , b ≤ cs. * A curve η⊂ D is said to `transversally cross' a box B = (D; γ, γ') if a connected component of η, restricted to D, intersects both of the lateral sides of D; in particular γ and γ' always transversally cross B. * A box B = (D; γ, γ') is said to `transversally cross' another box B̂ if both of the lateral sides of D transversally cross B̂; this definition is symmetric in the sense that it also implies that B̂ transversally crosses B. * A `box-chain' of length n is a finite set {B_i}_1 ≤ i ≤ n of boxes such that, for each i = 2, …, n, B_i transversally crosses B_i-1. A quad Q = (D; γ, γ') is said to be `crossed' by a box-chain {B_i}_1 ≤ i ≤ n if γ transversally crosses B_1, γ' transversally crosses B_n, and ∪_2 ≤ i ≤ n-1 B_i ⊆ D.The relevance of box-chains to RSW estimates can be seen from the following. Let Q be a quad that is crossed by a box-chain {B_i}, and letbe a random subset of 𝕊^2. Then if the event 𝒞_B_i() holds for each i, so does the event _Q() (see Figure <ref>). In other words, one may bound the probabilities of crossings of quads by controlling the crossings of box-chains instead. This motivates the following definition.For each X ⊆𝕊^2, c ≥ 1 and s > 0, we denote by Unif_X;c(s) the collection of all quads Q ∈Quad_X that are crossed by a box-chain {B_i}_1 ≤ i ≤ n of length n ≤ c such that B_i ∈Box_X;c(s) for each i. The property of quads being uniformly crossed by box-chains generalises the notion of scale invariance on the sphere, with the parameter c in the definition of Unif_X;c(s) playing the role of the `aspect ratio'. One can check, for instance, that for each quad Q = (D; γ, γ') there is a c > 1 such that Unif_𝕊^2;c(s/c) contains the rescaled quad sQ = (sD; sγ, sγ') for each s ∈ (0, 1], where sA denotes linear rescaling of the set A along the unique geodesic to the origin (deleting the antipodal point if necessary). 
This can be seen by observing that, although rescaling does not preserve geodesics on the sphere, the resulting distortion is uniformly controlled on all small enough scales.The property of being uniformly crossed by box-chains is also closely related to conformal invariants. One can check, for instance, that if a quadQ = (D; γ, γ') is crossed by a box-chain of length n consisting of boxes from Box_X;c(s), then the extremal distance from γ to γ' in D (which is the only conformal invariant of Q) is bounded above by cn, independently of s. In particular, for Q ∈Unif_X;c(s) the extremal distance is uniformly bounded above by c^2.We next introduce the RSW estimates as they apply to the sphere; these give a uniform lower bound on crossing probability for quads that are uniformly crossed by box-chains. We state the RSW estimates for arbitrary sequences of random subsets.Let (𝒮_n)_n ∈ℕ be a sequence of random subsets of 𝕊^2, let X ⊆𝕊^2, and let s_n ≥ 0 be a sequence satisfying s_n → 0 as n →∞. We say that the sequence (𝒮_n)_n ∈ℕ `satisfies the RSW estimates on X down to the scale s_n' if for every c > 1 there exists a C > 0 such thatlim inf_n →∞ inf_ s > C s_ninf_ Q ∈Unif_X;c(s) ℙ(𝒞_Q(𝒮_n)) > 0 .We say that the sequence (𝒮_n)_n ∈ℕ `satisfies the RSW estimates on X on all scales' if (<ref>) holds for s_n ≡ 0. Strictly speaking we should restrict the definition of the RSW estimates in (<ref>) to only hold for quads Q such that _Q(_n) is measurable. However, since we work only with _n being level sets or excursion sets of a.s. C^2 Gaussian random fields, the events _Q(_n) are always measurable and so we will ignore this technicality. We are now ready to state our main result. Recall that the nodal sets of the Kostlan ensemble are the (random) subsets 𝒩_n = f^-1(0) of the sphere; we also consider their complements, 𝕊^2 ∖𝒩_n. Our principal result asserts that the RSW estimates in Definition <ref> hold down to the scale n^-1/2 for the nodal sets of the Kostlan ensemble, and on all scales for their complements (the latter estimates give a lower bound for the probability of a domain being crossed by a single nodal domain).Let X ⊂𝕊^2 be a subset whose closure does not contain pairs of antipodal points, and let s_n = n^-1/2. Then the following hold: * The nodal sets of the Kostlan ensemble (<ref>) on 𝕊^2 satisfy the RSW estimates on X down to the scale s_n. * The complements of the nodal sets of the Kostlan ensemble (<ref>) on 𝕊^2 satisfy the RSW estimates on X on all scales.We constrain the RSW estimates to apply only to a set X whose closure does not contain pairs of antipodal points since the Kostlan ensemble is naturally defined on the projective space; indeed, the RSW estimates do not hold on the whole of the sphere, as certain crossing events on the sphere are impossible due to the identification of points on the projective space.The scales on which we prove the RSW estimates in Theorem <ref> are optimal in the sense that these estimates fail for the nodal set on smaller scales than s_n = n^-1/2. To see this, recall that s_n is the scale on which the local uniform convergence of the ensemble in (<ref>) takes place (in what follows, we often refer to this as the `microscopic scale'), and since the probability that a nodal set crosses a quad in the limit field tends to zero as the size of the quad tends to zero, the same is true for the Kostlan ensemble on scales smaller than s_n. 
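Although no simulations enter the proofs, the crossing probabilities appearing in Theorem <ref> can be illustrated numerically. The sketch below is our own construction (with invented function names): it samples the Kostlan field on a small spherical rectangle directly from its covariance (cos θ)^n and estimates the probability that the positive excursion set joins two opposite sides of the grid, a crude grid approximation of the box-crossing events defined above.

```python
# Hedged Monte Carlo illustration of box-crossing probabilities for the
# Kostlan ensemble; a sketch, not part of the paper's arguments.
import numpy as np
from scipy.ndimage import label

def spherical_grid(half_width, m):
    """m x m grid of unit vectors covering a small lon/lat rectangle at the equator."""
    lons = np.linspace(-half_width, half_width, m)
    lats = np.linspace(-half_width, half_width, m)
    lon, lat = np.meshgrid(lons, lats)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1).reshape(-1, 3)

def crossing_probability(n, side, m=40, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    pts = spherical_grid(side / 2, m)
    cov = np.clip(pts @ pts.T, -1.0, 1.0) ** n        # Kostlan covariance (cos theta)^n
    chol = np.linalg.cholesky(cov + 1e-8 * np.eye(len(pts)))
    crossings = 0
    for _ in range(trials):
        f = (chol @ rng.standard_normal(len(pts))).reshape(m, m)
        labels, _ = label(f > 0)                       # clusters of the positive excursion set
        common = (set(labels[:, 0]) & set(labels[:, -1])) - {0}
        if common:                                     # some cluster joins the two opposite sides
            crossings += 1
    return crossings / trials

# e.g. a box whose side is a few microscopic scales n^{-1/2}:
#   print(crossing_probability(n=200, side=5 / np.sqrt(200)))
```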
§.§ Acknowledgements The research leading to these results has received funding from the Engineering & Physical Sciences Research Council (EPSRC) Fellowship EP/M002896/1 held by Dmitry Beliaev (D.B. & S.M.), theEPSRC Grant EP/N009436/1 held by Yan Fyodorov (S.M.), and the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC grant agreement n^o 335141 (I.W.) The authors would like to thank Damien Gayet, Mikhail Sodin and Dmitri Panov for useful discussions.§ OUTLINE OF THE PAPER Theorem <ref> is a particular case of a more general result, Theorem <ref> below, which asserts the RSW estimates for the nodal sets, and their complements, of general sequences of centred Gaussian random fields defined on smooth compact Riemannian manifoldssatisfying sufficient symmetries. In turn, the proof of Theorem <ref> has, as its main ingredient, the even more general Theorem <ref>, which asserts RSW estimates for abstract sequences of random sets obeying a natural scaling (as well as certain other conditions). We believe that both the general Theorem <ref> and the abstract Theorem <ref> are of independent interest. §.§ General RSW estimates for nodal sets of sequences of Gaussian random fieldsLet 𝕏 be either the flat torus 𝕋^2 = ℝ^2 ∖ℤ^2 or the unit sphere 𝕊^2, and equip 𝕏 with a marked origin 0 ∈𝕏 and its natural metric d(·, ·). We consider a sequence (f_n)_n ∈ℕ of Gaussian random fields defined on .Our first task is to define the relevant RSW estimates, which will be the natural generalisation of Definition <ref> to 𝕏. To begin we can define `quads' and `crossing events' analogously to Definition <ref> (i.e. replacing 𝕊^2 everywhere with 𝕏). Before discussing `boxes' as in Definition <ref>, we need to alter slightly the definition of `rectangles' in the case 𝕏 = 𝕋^2, namely restricting their sides to be parallel to the axes; this is so that we may work with fields on 𝕋^2 that are not assumed to be rotationally symmetric. For clarity, we restate this definition (with the difference to Definition <ref> emphasised).For each a, b > 0, an a × b (toral) rectangle D ⊂𝕋^2 is a simply-connected domain that is bounded by four geodesic line-segments that are parallel to the axes, with all four internal angles equal, and such that the non-adjacent pairs of boundary components have length a and b respectively. With this definition of toral rectangles, `boxes' are defined as in Definition <ref>; the notion of `box-crossings' of quads, as well as the set Unif_X;c(s), are then analogous to in Definitions <ref> and <ref>. Finally, RSW estimates are defined analogously to Definition <ref>.We next state various conditions that we impose on the Gaussian random fields (f_n)_n ∈ℕ that we consider; these are most conveniently framed in terms of their covariance kernels. We first describe a set of relevant symmetries that these covariance kernels must satisfy. These symmetries naturally limit the choice of the underlying space to 𝕋^2 and 𝕊^2.We say that a covariance kernel on 𝕏 is `symmetric' if: * In the case 𝕏 = 𝕊^2, it is rotationally invariant and symmetric w.r.t. reflection in any great circle;* In the case 𝕏 = 𝕋^2, it is stationary and possesses the D_4 symmetry, i.e., it is invariant w.r.t. horizontal reflection and rotation by π/2.Next we impose certain smoothness and non-degeneracy conditions on symmetric covariance kernels (in the sense of Definition <ref>). 
When we work with symmetric covariance kernels, we often naturally consider them as functions of one variable, i.e. setting κ(x) = κ(0, x) (with a slight abuse of notation).[Smoothness and non-degeneracy]A symmetric covariance kernel κ on 𝕏 satisfies the following: * The function κ(x) is C^6; * The Hessian H_κ(0) of κ at the origin is positive-definite. By the standard theory <cit.>, Assumption <ref> guarantees that the associated random field is a.s. C^2, and its nodal set a.s. consists of C^2 curves diffeomorphic to circles.Finally, we define the concept of `local uniform convergence' for a sequence of covariance kernels on 𝕏, generalising our discussion of the local limit (<ref>) of the Kostlan ensemble above. Let Φ : ℝ^2 →𝕏 denote a smooth map that is locally a linear isometry (i.e. such that Φ(0) = 0 and the differential dΦ is a linear isometry); in the case 𝕏 = 𝕋^2 one may take the covering map for instance, whereas in the case 𝕏 = 𝕊^2 one may take the exponential map based at the origin, as in (<ref>).For a sequence s_n > 0 satisfying s_n → 0 as n →∞ we say that covariance kernels (κ_n)_n ∈ℕ on 𝕏 `converge locally uniformly near the origin on the scale s_n' if there exists a symmetric covariance kernel K_∞ on ℝ^2, satisfying Assumption <ref>, and an open set U ⊆ℝ^2 containing the origin such that, as n →∞, for x, y ∈ U uniformly,K_n(x, y) = κ_n(Φ(s_n x),Φ( s_n y) ) → K_∞(x-y) .We say that the covariance kernels (κ_n)_n ∈ℕ on 𝕏 `converge locally uniformly near the origin on the scale s_n along with their first four derivatives' if the above holds also for all partial derivatives K_n of order up to 4.We are now ready to state our general result Theorem <ref>; the proof that Theorem <ref> is a special case of Theorem <ref> is given in section <ref> below.Let (f_n)_n ∈ℕ be a sequence of centred Gaussian random fields on 𝕏 with respective covariance kernels κ_n. Suppose that there exists a constant η > 0, a set X ⊆𝕏, and asequence s_n >0 satisfying s_n → 0 as n →∞, such that the following hold: * Symmetry: The covariance kernels κ_n are symmetric in the sense of Definition <ref>.* Smoothness and non-degeneracy: The covariance kernels κ_n satisfy Assumption <ref>.* Local uniform convergence near the origin: The covariance kernels κ_n converge locally uniformly near the origin on the scale s_n along with their first four derivatives.* Asymptotically non-negative correlations:lim_n →∞s_n^-12 - ηsup_x,y ∈ X (κ_n( x, y )∧ 0 )= 0 . * Uniform rapid decay of correlations:lim_C →∞ lim sup_n →∞sup_x, y ∈ X, d(x,y) > C s_n (d(x,y)s_n^-1)^18 + η |κ_n( x , y)| = 0 . Then the nodal sets of f_n satisfy the RSW estimates on X down to the scale s_n, and the complements of the nodal sets of f_n satisfy the RSW estimates on X on all scales.In Theorem <ref> the covariance kernels are, in principle, allowed to be negative, unlike for the Gaussian random field considered in <cit.>; this is crucial for our application to the Kostlan ensemble since, for n odd, the Kostlan ensemble is only positively correlated within a subset of the sphere. Nevertheless, since the negative correlations in the Kostlan ensemble decay exponentially rapidly as a function of n for any subset X whose closure does not contain antipodal points, condition (<ref>) is satisfied.In regards to the nature of the exponents 12 and 18 in (<ref>) and (<ref>) respectively, these are certainly not optimal for the claimed results, and are chosen mainly for simplicity. 
In fact, using the somewhat more sophisticated methods in <cit.>, if we additionally assume local uniform convergence of the first six derivatives of the covariance kernel we could reduce these exponents to 8 and 12 respectively. Moreover, with an extra assumption that the covariance kernels κ_n are smooth with derivatives decaying at least as rapidly as the kernel, and if the local convergence (<ref>) of the covariance kernels holds together with all derivatives, using the method in <cit.> we could further reduce these exponents to 4 and 6. For simplicity, we do not implement these improvements here; on the other hand the question of the optimal exponents in (<ref>) and (<ref>) is of considerable importance.To complete section <ref>, we give an example of an application of Theorem <ref> to a sequence of Gaussian random fields defined on the flat torus; that this example falls under the scope of Theorem <ref> is established in section <ref>.Let (f_n)_n ∈ℕ be the sequence of centred stationary Gaussian random fields on the torus 𝕋^2 with respective covariance kernelsκ_n( x, y) =( cos( 2π (x_1 - y_1) ) ·cos( 2 π (x_2 - y_2)) )^n ,(x, y) = ((x_1, x_2), (y_1, y_2) ) ∈𝕋^2 ×𝕋^2.Let X ⊆𝕋^2 be subset whose closure contains no distinct points (x_1,y_1) and (x_2, y_2) such that 2(x_1 - y_1) and 2(x_2 - y_2) are integers. Then the nodal sets of f_n satisfy the RSW estimates on X down to the scale s_n = n^-1/2, and the complements of the nodal sets of f_n satisfy the RSW estimates on X on all scales. The restriction on X is imposed, once again, since the nodal sets are naturally defined on a quotient space of 𝕋^2, and indeed the RSW estimates fail on the whole space.§.§ Overview of the proof of Theorem <ref>Similar to  <cit.>, the overall structure of the proof of Theorem <ref> consists of three main steps: * First (see section <ref>) we adapt an argument borrowed from <cit.> to establish general RSW estimates for abstract sequences of random sets on 𝕏 satisfying certain key assumptions. These general estimates are stated as Theorem <ref> below. * Next, we develop a sufficiently robust perturbation analysis that allows us to apply the abstract RSW estimates to the complement of the nodal sets (in fact, separately to the positive and negative excursion sets f_n^-1(0,∞) and f_n^-1(-∞,0) respectively) of the Gaussian random fields f_n in the setting of Theorem <ref> (see section <ref>). This perturbation analysis is used in two key place in the proof, namely in establishing (i) that (<ref>) guarantees the `asymptotic independence' of crossing events in well-separated domains, and (ii) that negative correlations satisfying  (<ref>) have a negligible effect on crossing probabilities.* Finally, we again apply the `asymptotic independence' of crossing events to infer the RSW estimates for the nodal sets from the RSW estimates for the complements of the nodal sets; this follows from similar arguments to those presented in <cit.> (see the second part of the proof of Theorem <ref> in section <ref>). Despite the structural similarities between our approach to <cit.>, we record three significant modifications that we make here. First, it is necessary to adapt the argument in <cit.> to handle the differences in our setting, namely: (i) the presence of a sequence of random sets rather than just a single random set; (ii) the fact that we work on (bounded) manifolds rather than the Euclidean plane; and (iii) in the spherical case, the positive curvature of the sphere. 
We believe these modifications to be of independent interest, since, to the best of our knowledge, no theory of RSW estimates exists outside the scope of Euclidean space, and our approach is the first step in this direction.Second, we apply the general argument in <cit.> in a different manner compared to <cit.>, in particular with regards to the treatment of the asymptotic independence of crossing events (see the comments at the end of section <ref>). We believe that our approach yields a significant simplification of the argument presented in <cit.>. Finally, our argument is able to handle negative correlations, as long as these are asymptotically negligible; negative correlations were absent from the model considered in <cit.>.§.§ RSW estimates for abstract sequences of random setsWe give here the statement of the abstract RSW estimates for general sequences of random sets on 𝕏; establishing this abstract result is the first step towards the proof of Theorem <ref>. To this end, we first need to define the analogues of Euclidean annuli and their related crossing events. Recall the definition of a `square' (definitions <ref> and <ref>), and observe that a square has a natural `centre', being the unique interior point equidistant from each side.  * For b > a > 0, an a × b `annulus' is a domain bounded between concentric squares with side-lengths a and b that are `parallel', i.e. such that there is a single geodesic that intersects both boundary squares at the mid-points of opposite sides. * For each X ⊆𝕏, c ≥ 1, r ≥ 1 and s > 0, we denote by Ann_X;c;r(s) the collection of all a × b annuli A ⊂ X such that s ≤ a ≤ b ≤ cs and b/a = r. * To each annulus A and random subsetof 𝕏 we associate the `crossing event' _A() that a connected component of , restricted to A, contains a `circuit' around A. For each r > 0 and v ∈𝕊^1, let B(r) ⊆𝕏 denote the centred open ball of radius r, and let ℒ_v(r) denote the geodesic line-segment of length r, based at the origin, in direction v. Our abstract RSW estimates are the following.Let (𝒮_n)_n ∈ℕ be a collection of random sets on 𝕏. Suppose that there exists a set X ⊆𝕏 and a sequence s_n > 0 satisfying s_n → 0 such that the following hold:* Non-degeneracy: For every n ∈ℕ, ℙ( 0 ∈∂ S_n ) = 0. Moreover, for every v ∈𝕊^1,lim_r → 0 lim inf_n →∞ ℙ( ℒ_v(r s_n)∩∂ S= ∅) = 1 .* Symmetry: For every n ∈ℕ, the law of 𝒮_n satisfies the following symmetries: In the case 𝕏 = 𝕊^2, invariance w.r.t.rotations and reflections w.r.t. great circles; in the case 𝕏 = 𝕋^2, invariance w.r.t. translations, horizontal reflections and rotation by π/2. * Positive associations: For every n ∈ℕ, all events measurable on X and increasing w.r.t. the indicator function of 𝒮_n are positively correlated. * Crossing of square boxes on arbitrary scales:lim inf_n →∞ inf_s >0inf_ B ∈Box_X; 1(s)ℙ( 𝒞_B(𝒮_n) ) > 0.* Arbitrary crossings on the microscopic scale: There exists a number δ > 0 such thatlim inf_n →∞ inf_ Q ∈Quad_B(δ s_n) ℙ ( 𝒞_Q(𝒮_n) ) > 0 .* Annular crossings of a `thick' annulus with high probability: For each c > 0, r > 1 and ε∈ (0, 1), there exist C_1, C_2 > 1 such that, for all sufficiently large n ∈ℕ and all s > C_1 s_n, ifinf_A ∈Ann_X; C_2; r(s)(_A(_n)) > c,then, for any s × C_2s annulus A ⊆ X,( _A(S_n))>1-ε.Then the collection of sets (𝒮_n)_n ∈ℕ satisfies the RSW estimates on X on all scales.Compared to the original setting in <cit.>, and also its application in <cit.>, we have made two important modifications in the formulation of Theorem <ref>. 
First, since we are dealing with a sequence of random sets rather than a single random set, the conditions are all stated in a way that guarantees uniform control over all necessary quantities.Second, we have formulated a general condition guaranteeing annular crossings with high probability (see condition (6)), rather than a working under the more constraining assumption that the random sets in disjoint domains are asymptotically independent (as was done in <cit.> and <cit.> for instance). This reformulation is useful because we want to work directly with the random fields defined on , rather than applying the general theorem to the discretised version of the model, as was the approach in <cit.>. We believe that this constitutes a significant simplification to the method, and could also be used to simplify the argument in <cit.>.§.§ Summary of the remaining part of the paper The remainder of the paper is structured as follows. In section <ref> we develop the perturbation analysis that is the crucial ingredient in applying the abstract Theorem <ref> to the setting of Gaussian random fields. We then combine this analysis with Theorem <ref> to complete the proof of Theorem <ref>. We conclude the section by showing that Theorem <ref> and Example <ref> fall within the scope of Theorem <ref>.In section <ref> we give the proof of the abstract Theorem <ref>. This is similar to the argument in <cit.>, but with suitable modifications to adapt to our setting. Finally, in section <ref> we complete the proof of the auxiliary results used in the perturbation analysis developed in section <ref>. § PROOF OF THEOREM <REF>: RSW ESTIMATES FOR KOSTLAN ENSEMBLEIn this section we complete the proof of Theorem <ref>, which implies Theorem <ref> as a special case. The main ingredient will be a perturbation analysis that allows us to apply the abstract RSW estimates in Theorem <ref> to the positive (resp. negative) excursion sets of the Gaussian random fields in our setting; these have similarities to the methods in <cit.> and <cit.>.The set-up for the perturbation analysis is the following. Let (f_n)_n ∈ℕ be a collection of centredGaussian random fields on 𝕏 whose respective covariance kernels κ_n are symmetric in the sense of Definition <ref> and satisfy Assumption <ref>. Let𝒮_n^+={x∈: f_n(x)>0}and𝒮_n^-={x∈: f_n(x) < 0}denote the positive and negative excursion sets of f_n respectively. Without loss of generality we may assume that f_n are unit variance, since a normalisation does not affect 𝒮^+_n or 𝒮^-_n. We also assume that there exists a sequence s_n >0 satisfying s_n → 0 as n →∞ such that the covariance kernels κ_n converge locally uniformly (in the sense of Definition <ref>) near the origin on the scale s_n along with their first four derivative; let K_∞ be the limiting covariance kernel. Let δ_0 > 0 be sufficiently small that this uniform convergence holds on the ball B(δ_0). §.§ Perturbation analysisOur perturbation analysis proceeds in two steps. First we argue that, outside an event of a small probability, crossing events for the positive excursion set are determined by the signs of a Gaussian random field on a (deterministic) set of points of finite cardinality. Second, we control the effect of perturbations of the field on the probability of crossing events by controlling their impact on the finite-dimensional law associated to the signs of the random field on the finitely many points described above (which, up to an event of a small probability, determine the crossing probabilities). 
To state the main propositions of the perturbation analysis, we shall need to define an analogue of Euclidean `polygons' for the manifold 𝕏.A polygon is a quad whose boundary consists of a finite number of geodesic line-segments. Similarly to boxes, we refer to the boundary components as `sides', and their length as `side-lengths'. For each X ⊆𝕏, c > 0 and s > 0, we denote by Poly_X; c(s) the collection of polygons in X with at most c sides and with sides-lengths at most cs. The main propositions of the perturbation analysis are the following.For sufficiently large n ∈ℕ the following holds. Fix c,r> 1. Then there exists a constantc_1=c_1(c; r; K_∞; δ_0) > 0,such that, for all error thresholds ε∈ (0,1), scales s > 0, andQ ∈Poly_𝕏;c(s) ∪Ann_𝕏;c;r(s), there exists a finite set 𝒫 = (Q;K_∞;δ_0) ⊂ Q of cardinality at most|| < c_1 ( ε^-2 (s/s_n)^6∨ 1 ) ,such that, outside an event of probability less than ε, the crossing event 𝒞_Q(𝒮^+_n) is determined by the signs of f_n restricted to 𝒫. Fix η > 0. Let X and Y be centred Gaussian vectors of dimension n with respective covariance matrices Σ_X and Σ_Y, and let ℙ_X and ℙ_Y denote their respective laws. Suppose that X is normalised to have unit variance, and defineδ = max_i,j ≤ n | (Σ_X)_i,j - (Σ_Y)_i,j |.Then there exists a constant c > 0, depending only on η, such that, for all events A that are measurable in ℙ_X and ℙ_Y w.r.t the signs of X and Y respectively, then the following hold: * If the diagonal entries of Σ_Y - Σ_X are non-negative, then| ℙ_X(A ) - ℙ_Y(A) | < c ( n^3+ηδ)^1/4 . * If in addition Σ_Y - Σ_X is positive-definite, then| ℙ_X(A ) - ℙ_Y(A) | < c ( n^2 +ηδ)^1/4 . The first statement of Lemma <ref> is an improved version of <cit.> and <cit.>, implementing an idea from <cit.>.We stress that in Proposition <ref>, once K_∞ and δ_0 are prescribed, neither the constant c_1 nor the set 𝒫, whose existence is established in Proposition <ref>, depend on any other properties of κ_n. Hence we may choose a set 𝒫 that works simultaneously for two different sequences of fields whose covariance kernels converge locally uniformly to K_∞ on B(δ_0); this fact will be crucial in section <ref> below. The proof of Proposition <ref> and Lemma <ref> are given in section <ref>. We mention here that the proof of Proposition <ref> proceeds by controlling the event that the nodal set intersects any of the edges of a certain graph more than once (see Lemma <ref>). We then argue that, outside this event, all crossing events are determined by the signs of the field restricted to the vertices of the graph. An analogous result for Gaussian random fields on ℝ^2 was established in <cit.>. We now outline the two key consequences of the perturbation analysis in our setting.§.§.§ Asymptotic independence of crossing events The first consequence is that crossing events in disjoint polygons or annuli are asymptotically independent in the limit n→∞, as long as their respective polygons or annuli are sufficiently well-separated; this follows in particular from the condition (<ref>) of Theorem <ref>.Suppose that there exists X ⊆𝕏 and η > 0 such that (<ref>) holds. 
Then for each c, r, k > 1 and ε > 0 there is a C > 0 such that the following hold for all sufficiently large n ∈ℕ:

sup_s > C s_n sup_X_1, X_2 ⊂ X, d(X_1, X_2) > s sup_P_1 ∈Poly_X_1;c(s), P_2 ∈Poly_X_2;c(s) | ℙ(𝒞_P_1(𝒮^+_n) ∩𝒞_P_2(𝒮^-_n)) - ℙ(𝒞_P_1(𝒮^+_n))·ℙ(𝒞_P_2(𝒮^-_n))| < ε,

and, for each 1 ≤ j ≤ k,

sup_s > C s_n sup_X_1, X_2 ⊂ X, d(X_1, X_2) > s sup_{A_i}_0 ≤ i ≤ j-1 ⊂Ann_X_1;c;r(s), A_j ∈Ann_X_2;c;r(s) | ℙ(∩_0 ≤ i ≤ j𝒞^c_A_i(𝒮^+_n)) - ℙ(∩_0 ≤ i ≤ j-1𝒞^c_A_i(𝒮^+_n))·ℙ(𝒞^c_A_j(𝒮^+_n))| < ε.

Observe that while (<ref>) is stated for the positive excursion sets (and for the complements of the events 𝒞_A_i), (<ref>) is formulated to control the asymptotic independence between crossing events 𝒞_P_i for the positive and negative excursion sets. This difference is solely due to how we intend to apply these results, and does not reflect limitations in their generality. Before giving a proof of Proposition <ref>, let us state and prove a crucial corollary of (<ref>), namely that condition (<ref>) of Theorem <ref> implies the `thick' annular crossing condition (6) of Theorem <ref>. Suppose that there exist X ⊆𝕏 and η > 0 such that (<ref>) holds. Then for each c > 0, r > 1 and ε∈ (0, 1), there exist C_1, C_2 > 1 such that, for all sufficiently large n ∈ℕ and all s > C_1 s_n, if inf_A ∈Ann_X; C_2; r(s) ℙ(𝒞_A(𝒮_n^+)) > c, then, for every s × C_2 s annulus A ⊆ X, ℙ(𝒞_A(𝒮_n^+)) > 1-ε. The idea of the proof is straightforward. If we take a large number of concentric well-separated annuli, then crossing events in these annuli are almost independent and have the same lower bound. This implies a crossing in one of them with high probability, and hence a crossing in a `thick' annulus with high probability. Fix c > 0, r > 1 and ε∈ (0, 1). Since establishing the corollary for an r > 1 implies the corollary holds for every smaller r̅∈ (1, r), we can and will assume that r ≥ 2. We work with the collection (A_a, b)_a<b of a × b annuli centred at the origin that are `parallel', i.e. such that there is a single geodesic that intersects all boundary squares at the mid-points of opposite sides. In particular, for each s > 0 we introduce the sequence of disjoint annuli {A^s_i}_i ≥ 0 defined by A^s_i = A_{r^{2i} s, r^{2i+1} s}. Since r ≥ 2 it holds that d(A_i^s, A_j^s) > s for all i ≠ j. Let k be an integer to be determined later, and set C_2 larger than r^{2k+1}. Fix s > 0 and consider an s × C_2 s annulus A ⊆ X. By symmetry we may assume A = A_{s, C_2 s}, and hence A^s_i ∈Ann_X; C_2; r(s) for each 0 ≤ i ≤ k, which by assumption implies that ℙ(𝒞_A^s_i(𝒮_n^+)) > c. Now, since d(A^s_i, A^s_j) > s for i ≠ j, an application of (<ref>) in Proposition <ref> yields a C_1 > 0 such that, for sufficiently large n and all s > C_1 s_n and j = 0, …, k,

| ℙ(∩_i = 0, …, j 𝒞^c_A^s_i(𝒮_n^+)) - ℙ(∩_i = 0, …, j-1 𝒞^c_A^s_i(𝒮_n^+)) · ℙ(𝒞^c_A^s_j(𝒮_n^+)) | < ε / (2c).

Combined with (<ref>) this implies that

ℙ(𝒞_A^s_i(𝒮_n^+) does not occur for i = 0, …, k) < f^k_c;ε(1-c),

where f^k_c;ε(x) denotes the k-fold iteration of the map x ↦ (1-c)x + ε / (2c). One may check that f^k_c;ε(1-c) →ε/2 as k →∞, and hence we may choose a k sufficiently large such that

ℙ(𝒞_A^s_i(𝒮_n^+) does not occur for i = 0, …, k) < ε.

Since the occurrence of any one of 𝒞_A^s_i(𝒮_n^+), i = 0, …, k, implies the occurrence of 𝒞_A(𝒮_n^+), we have the corollary. In what follows we prove (<ref>); the proof of (<ref>) is essentially identical.
Fix c > 1 and ε > 0 and take C and n sufficiently large that the conclusion of Proposition <ref> holds, andsup_ x, y ∈ X , d(x, y) >C s (d(x,y)s_n^-1)^18 + η| κ_n(x, y) | <ε^10 + η/3 ;this latter is possible by (<ref>).Now let s > C s_n, subsets X_1, X_2 ⊂ X such that d(X_1, X_2) > s, and polygons P_1 ∈Poly_X_1;c(s) and P_2 ∈Poly_X_2;c(s) be given. By Proposition <ref>, there exists a number c_1 > 0, independent of ε, s, P_1 and P_2, such that the events 𝒞_P_1(𝒮^+_n) and 𝒞_P_2(𝒮^-_n) are determined, outside an event of probability less than ε, by the signs of sets 𝒫_1 ⊂ X_1 and 𝒫_2 ⊂ X_2 respectively, each of cardinality at most|𝒫_1|,|𝒫_2| < c_1 ε^-2 (s/s_n)^6 .Applying the first statement of Lemma <ref> to compare between the joint law on one hand and the product laws on the other hand for the field restricted on 𝒫_1 ∪𝒫_2, we have, for some constant c_2 > 0 independent of ε, s, P_1 and P_2,| ℙ(𝒞_P_1(𝒮^+_n) ∩𝒞_P_2(𝒮^-_n) ) - ℙ(𝒞_P_1(𝒮^+_n))·ℙ( 𝒞_2(𝒮^-_n) )|< ε +c_2( ε^-6 - η/3(s/s_n)^18 + ηsup_ x, y ∈ X , d(x, y) >s| κ_n(x, y) | )^1/4 < ε +c_2( ε^-6 - η/3sup_ x, y ∈ X , d(x, y) >C s_n(d(x, y) s_n^-1)^18 + η | κ_n(x, y) | )^1/4 < ε + c_2 ε .where in the last line we used (<ref>). Since ε > 0 was arbitrary, we conclude the proof.§.§.§ Perturbation on macroscopic scalesThe second consequence of the perturbation analysis is controlling the perturbations on macroscopic scales, the key step in handling asymptotically negligible negative correlations.Let η > 0 and fix a sequence p_n > 0 of positive numbers satisfyinglim_n →∞p_n s_n^-12 - η= 0.Define the sequence of centred Gaussian random fields (f̃_n)_n ∈ℕ on 𝕏 with respective covariance kernelsκ̃_n = κ_n + p_n ;this is a valid covariance kernel since the constant function is positive-definite. Let 𝒮̃^+_n denote the positive excursion set of f̃_n. Then for every c > 0,lim_n →∞ sup_s >0sup_P ∈Poly_𝕏;c(s)| ℙ(𝒞_P(𝒮^+_n)) - ℙ(𝒞_P(𝒮̃^+_n)) |= 0 . Fix c > 0 and ε > 0, and take n sufficiently large that the conclusion of Proposition <ref> holds, ands_n^-12 - η p_n < ε^8 + η/3,possible by (<ref>). Now let s > 0 and P ∈Poly_𝕏; c(s) be given. Observe that the sequence of covariance kernels κ̃_n also converge locally uniformly on B(δ_0), along with their first four derivatives, to the same limit K_∞. By Proposition <ref>, there exists a number c_1 > 0, independent of ε, s and P, such that the events 𝒞_P(𝒮^+_n) and 𝒞_P(𝒮̃^+_n) are determined, outside an event of probability less than ε, by the signs of a set 𝒫⊆ P of cardinality at mostc_1 ε^-2 s_n^-6 ;for this recall that 𝒫 can be chosen to be the same set for all κ_n that converge locally uniformly on B(δ_0) to the same limit K_∞ (see the comments after the statement of Proposition <ref>). Applying the second statement of Lemma <ref> to the law on 𝒫 of the fields f_n and f̃_n respectively, for some constant c_2 > 0 independent of ε, s and P| ℙ(𝒞_P(𝒮^+_n) ) - ℙ(𝒞_P(𝒮̃^+_n)) | < ε +c_2( ε^-4 - η/3 s_n^-12 - η p_n)^1/4< ε +c_2 ε ,where to obtain the last inequality we used (<ref>). Since ε > 0 was arbitrary, we conclude the proof. §.§ Concluding the proof of Theorem <ref>We are now almost ready to conclude the proof of Theorem <ref>. Before we begin, we state some simple geometric lemmas and show how to verify the `microscopic' conditions (1) and (5) of Theorem <ref> in the setting of Theorem <ref>.We work in the same set-up as for the perturbation analysis given at the beginning of section <ref>. 
Recall that 𝒮_n^+ and 𝒮_n^- denote, respectively, the positive and negative excursion sets of f_n; we denote by 𝒩_n the nodal set of f_n.§.§.§ Geometric lemmas In the proof of Theorem <ref> we shall need the following. Recall the definition of polygons in Definition <ref>.Fix X ⊆𝕏 and c > 0. Then there exists a number c_1 > 0 such that for each s > 0 and quad Q∈Unif_X;c(s) the following hold:* There exists a polygon P ∈Poly_X;c_1(s) ∩Unif_X;c(s) such that the event _P(^+_n) implies the event _Q(^+_n). * There exist disjoint domains X_1, X_2 ⊂ X satisfying d(X_1, X_2) > s/c_1 and polygons P_1 ∈Poly_X_1;c_1(s/c_1) ∩Unif_X;c_1(s/c_1) and P_2 ∈Poly_X_2;c_1(s/c_1) ∩Unif_X;c_1(s/c_1) such that if the events _P_1(^+_n) and _P_2(^-_n) both hold, then so does _Q(𝒩_n). For the first statement of Lemma <ref>, one can simply take the polygon that is the union of the boxes comprising one of the box-chains that cross Q guaranteed by the Definition <ref> (see Figure <ref>, left). For the second statement of Lemma <ref>, we observe that the statement is true for any box B∈Unif_X;c(s), since B can be `divided' along two well-spaced geodesics into three parts, and the top and bottom parts can be crossed by box-chains using smaller boxes. Then for any quad Q∈Unif_X;c(s) we can take the box-chain that crosses Q and decompose each constituent box using these geodesics (see Figure <ref>, right). §.§.§ Verifying the microscopic conditions Here we argue that the two `microscopic' conditions (1) and (4) of Theorem <ref> are satisfied; we begin by verifying the non-degeneracy condition (1). For use in this subsection we introduce F_n (resp. F_∞) as the centred, unit variance Gaussian random field on ℝ^2 with the rescaled covariance kernel K_n (resp. K_∞), as in Definition <ref>.  * For every n ∈ℕ, ℙ( f_n(0) = 0 ) = 0. * Recall that for v ∈𝕊^1 and r>0 the ℒ_v(r) is the length-r geodesic segment based at the origin in direction v. For every v ∈𝕊^1 we havelim_r → 0 lim sup_n →∞ ℙ(∃ x ∈ℒ_v(r s_n).f_n(x) = 0) = 0 . The first statement is clear upon recalling that f_n is symmetric and non-degenerate. For the second statement, observe that by the Kac-Rice formula <cit.>, the symmetries of f_n, and since Assumption <ref> guarantees that ∇ f_n(0) is independent of f_n(0), for each r > 0,𝔼 [| { x ∈ℒ_v(r s_n) :f_n(x) = 0 } | ] = r/√(2 π) s_n𝔼[| ∂ f_n(0)/∂ v|].Since κ_n is C^2, it holds that𝔼[ | ∂ f_n(0)/∂ v|] = √(2/π- ∂^2 κ_n(0)/∂^2 v) .Hence by the local uniform convergence of the second derivatives of κ_n, and since F_∞ satisfies Assumption <ref>,lim_n →∞s_n𝔼[ | ∂ f_n(0)/∂ v|] = lim_n →∞ √(2/π- s_n^2 ∂^2 κ_n(0)/∂^2 v)= √(2/π -∂^2 K_∞(0)/∂^2 v)< ∞ .Taking r → 0 yields the result. Next we verify condition (4) of Theorem <ref> guaranteeing arbitrary crossings on microscopic scales; to this end we formulate the following lemma.There exists a number δ > 0 such thatlim inf_n →∞ℙ(F_n(x) > 0for all x ∈ B(δ) ) > 0 .Before we state the proof of Lemma <ref>, we show that it implies condition (4) of Theorem <ref>.There exists a number δ > 0 such thatlim inf_n→∞inf_Q ∈Quad_B(δ s_n) ℙ ( 𝒞_Q(𝒮^+_n) ) > 0 ,where ^+_n are the positive excursion sets (<ref>) of f_n. By Lemma <ref>, there is a number δ > 0 such that lim inf_n →∞ℙ(Φ( B(δ s_n ) )⊆𝒮^+_n ) > 0 .Since Φ is locally an isometry, for any δ_1 < δ, Φ( B(δ s_n ) ) eventually contains the disk B(δ_1 s_n). 
Finally, since the occurrence of the event {B(δ_1 s_n)⊆𝒮^+_n} implies the crossing event _Q(_n) for any Q ⊂ B(δ_1 s_n), we have the result.Recall that K_∞ denotes the limit of K_n, well-defined as a stationary C^2 covariance kernel on B(δ_0). Define a stationary covariance kernel K̃_∞ on B(δ_0/2) byK̃_∞(x,y) = K_∞(x/2, y/2) ,and let F̃_∞ denote the centred, unit variance Gaussian random field on B(0, δ_0/2) with covariance kernel K̃_∞. By the local uniform convergence of K_n and its first three derivatives to K_∞ and its respective derivatives, and the strictly negative second derivatives of K_∞ (since K_∞ satisfies Assumption <ref>), there exists a δ_1 ∈ (0, δ_0 / 2) such that, for sufficiently large n,K_n(x, y) > K̃_∞(x, y)for all x,y ∈ B(δ_1).Hence by Slepian's lemma <cit.>, for sufficiently large n, every δ∈ (0, δ_1) satisfiesℙ( F_n(x) > 0for all x ∈ B(δ)) ≥ℙ(F̃_∞(x) for all x ∈ B(δ) ).It then remains to prove the existence of a δ∈ (0, δ_1) such that the latter probability is positive.By the Borel-TIS Theorem <cit.> and Markov's inequality, there exists a number c_1 > 0 such that for every λ > 0,ℙ(sup_v ∈𝕊^1max_ x ∈ B(δ_0/2)|∂F̃_∞/∂ v(x)|> λ) < c_1/ λ .Hence, by taking λ_1, λ_2 > 0 sufficiently small, the eventE={F̃_∞(0) > λ_1}∩{sup_v ∈𝕊^1max_x ∈ B(δ_0/2)|∂F̃_∞/∂ v(x)|< λ_2}has positive probability. By Taylor's theorem we can choose δ > 0 sufficiently small thatE⊆{F̃_∞(x) for all x ∈ B(δ) };which, since ℙ(E) > 0 and in light of (<ref>), yields Lemma <ref>. §.§.§ Proof of Theorem <ref> assuming Theorem <ref>, Proposition <ref> and Lemma <ref> Let (f_n)_n ∈ℕ be given as in Theorem <ref>; with no loss of generality we assume that f_n are unit variance. Let η > 0, X ⊂𝕏 and s_n satisfy the conditions of Theorem <ref>.We begin by slightly perturbing the covariance kernels κ_n of f_n to eliminate possible negative correlations. Define a collection of centred Gaussian random fields (f̃_n)_n ∈ℕ on 𝕏 with respective covariance kernelsκ̃_n (x, y) = κ_n(x, y) + s_n^12 + η/2.Observe that, by condition (<ref>), κ̃_n is everywhere positive on X for n sufficiently large. Moreover, the choice (<ref>) of perturbation means that the conclusion of Proposition <ref> is valid.We now argue that the positive excursion sets 𝒮̃^+_n of f̃_n satisfy all the conditions of Theorem <ref> for the set X and sequence s_n; by symmetry, the same conclusion holds also for the negative excursion sets. The justification for the validity of conditions (2), (3) and (5) of Theorem <ref> is via standard arguments: the symmetry of the excursion sets follows from the symmetry of the kernel, positive associations on X follow from the positivity of the covariance kernels on X by the well-known result of Pitt <cit.>, and the probability of crossing square-boxes is exactly 1/2 by the symmetry of the kernel and the symmetry of a Gaussian random field w.r.t. sign changes. Moreover, conditions (1), (4) and (6) in Theorem <ref> follow from the analysis we developed above, namely Lemma <ref>, and corollaries <ref> and <ref> respectively. Hence all the conditions of Theorem <ref> are satisfied, and an application of Theorem <ref> yields the desired conclusions for f̃_n, i.e. that the positive (resp. negative) excursion sets of f̃_n satisfy the RSW estimates on X on all scales. In particular, for all c > 0,lim inf_n →∞ inf_s > 0 inf_Q ∈Unif_X;c(s) ℙ( _Q(𝒮̃_n^+) ) > 0 . 
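As an aside, not needed for the argument: the Gaussian comparison behind the proof of Lemma <ref> above (Slepian's lemma) is easy to illustrate numerically in finite dimensions. In the sketch below the two covariance kernels are generic illustrative choices of ours, not those of the paper; the point is only that increasing the off-diagonal covariances of a unit-variance Gaussian vector can only increase the probability that every coordinate is positive.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 8
idx = np.arange(d)
cov_small = np.exp(-np.abs(idx[:, None] - idx[None, :]))        # weaker correlations
cov_large = np.exp(-0.1 * np.abs(idx[:, None] - idx[None, :]))  # stronger correlations
for cov in (cov_small, cov_large):
    z = rng.multivariate_normal(np.zeros(d), cov, size=200_000)
    print(np.mean(np.all(z > 0, axis=1)))
# the second (more correlated) probability is larger, as Slepian's lemma predicts
```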
Next we use Proposition <ref> to infer that the positive excursion sets of f_n also satisfy the RSW estimates on X on all scales (the same statement for the negative excursion sets 𝒮^-_n then holds by an identical argument). Fix c > 0, and let c_1 > 0 be the constant prescribed by Lemma <ref>. Also let ε > 0 be such that, for all sufficiently large n ∈ℕ, bothinf_s > 0 inf_Q ∈Unif_X;c(s) ℙ( _Q(𝒮̃_n^+) )> 2ε,andsup_s >0sup_P ∈Poly_X;c_1(s)| ℙ(𝒞_P(𝒮^+_n)) - ℙ(𝒞_P(𝒮̃^+_n)) |<ε ,hold; possible by (<ref>) and Proposition <ref> respectively. Now let s > 0 and Q ∈Unif_X; c(s) be given. By Lemma <ref>, there exists a polygon P ∈Poly_X;c_1(s) ∩Unif_X;c(s) such that the event _P(_n^+) is contained in the event _Q(_n^+). In particular, since P ∈Unif_X;c(s), by (<ref>)ℙ(_P(_n^+)) > 2 ε ,and in light of (<ref>), applicable since P ∈Poly_X;c_1(s), we obtainℙ(𝒞_P(𝒮^+_n))> ε.Finally, since _P(_n^+)⊆_Q(_n^+), we conclude thatℙ(𝒞_Q(𝒮^+_n)) ≥ℙ(𝒞_P(𝒮^+_n))> ε,the RSW estimates for _n^+ on all scales.The final step of the proof of Theorem <ref> is using the first statement (<ref>) of Proposition <ref> to infer the RSW estimates for the nodal sets 𝒩_n of f_n from the already established RSW estimates for the excursion sets of f_n. Again fix c > 0, and let c_1 > 0 be the corresponding constant appearing from Lemma <ref>. Let ε > 0 and C > 0 be such that, for all sufficiently large n ∈ℕ,inf_s > 0 inf_ Q_1 ∈Unif_X;c_1(s/c_1) , Q_2 ∈Unif_X;c_1(s/c_1)ℙ( _Q_1(_n^+) ) ·ℙ( _Q_2(_n^-) ) > 2ε ,andsup_ s > C s_nsup_ X_1, X_2 ⊂ X , d(X_1, X_2) > s/c_1 sup_P_1 ∈Poly_X_1;c_1(s/c_1) , P_2 ∈Poly_X_2;c_1(s/c_1) | ℙ(𝒞_P_1(𝒮^+_n) ∩𝒞_P_2(𝒮^-_n) ) - ℙ(𝒞_P_1(𝒮^+_n))·ℙ( 𝒞_P_2(𝒮^-_n) )| < ε.both hold; possible since the RSW estimates hold for the excursion sets of f_n on all scales and by(<ref>) respectively.Now let s > Cs_n and Q ∈Unif_X; c(s) be given. By Lemma <ref> there exist disjoint domains X_1, X_2 ⊂ X satisfying d(X_1, X_2) > s/c_1 and polygons P_1 ∈Poly_X_1;c_1(s/c_1) ∩Unif_X;c_1(s/c_1) and P_2 ∈Poly_X_2;c_1(s/c_1) ∩Unif_X;c_1(s/c_1)such that if the events _P_1(^+_n) and _P_2(^-_n) both occur, then so does _Q(𝒩_n), i.e.,_P_1(^+_n)∩_P_2(^-_n) ⊆_Q(𝒩_n) .In particular, since P_1, P_2 ∈Unif_X;c_1(s/c_1), by (<ref>)ℙ(_P_1(^+_n))·ℙ(_P_2(^-_n))> 2 ε.Since also P_1 ∈Poly_X_1;c_1(s/c_1), P_2 ∈Poly_X_2;c_1(s/c_1) and d(X_1, X_2) > s/c_1, in light of (<ref>) we deduce thatℙ(𝒞_P_1(𝒮^+_n) ∩𝒞_P_2(𝒮^-_n))> ε.Finally, since _P_1(^+_n) ∩_P_2(^-_n)⊆_Q(𝒩_n), we conclude thatℙ(𝒞_Q(𝒩_n)) ≥ℙ(𝒞_P_1(𝒮^+_n) ∩𝒞_P_2(𝒮^-_n)) > ε,which validates the RSW estimates for 𝒩_n down to the scale s_n.§.§ Proof of Theorem <ref> and the validity of Example <ref>In this section we show that Theorem <ref> and Example <ref> are within the scope of the more general Theorem <ref>. Observe that the covariance kernels κ_n are symmetric in sense of Definition <ref> and satisfy Assumption <ref>.Next we check the local uniform convergence of κ_n together with all its derivatives on the scale s_n (previously stated at (<ref>)). For this, define the smooth functions G_n: ℝ^2 ×ℝ^2 →ℝ and F_n: ℝ→ℝ byG_n(x, y) = √(n)·Φ( x/√(n)) - Φ(y/√(n) )andF_n(t) =(cos(t/n))^n .An explicit computation shows that G_n and F_n converge locally uniformly together with all of their derivatives to the respective limitsG_∞(x, y) = x-yand F_∞(t) = e^-t^2/2 ,and hence so does their composition F_n ∘ G_n. 
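As a purely numerical aside (no substitute for the explicit computation just described), one can check directly that at geodesic distance θ = t·s_n = t/√n the Kostlan covariance cos^n θ approaches the Gaussian limit e^{-t²/2}; the short Python sketch below, with illustrative values of n, is ours.

```python
import numpy as np

# Illustrative check: at geodesic distance theta = t * s_n with s_n = n^{-1/2},
# the Kostlan covariance cos^n(theta) approaches exp(-t^2 / 2).
t = np.linspace(0.0, 2.0, 5)
for n in (10, 1_000, 100_000):
    rescaled = np.cos(t / np.sqrt(n)) ** n
    print(n, np.max(np.abs(rescaled - np.exp(-t ** 2 / 2))))
```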
SinceK_n(x,y) = F_n ∘ G_n( x, y)= κ_n( Φ(s_n x), Φ(s_n, y))and K_∞(x, y) = F_∞∘ G_∞(x, y),we have the stated convergence.It remains to show that conditions (4) and (5) of Theorem <ref> hold for any constant η > 0, scale s_n = n^-1/2 and X ⊂𝕊^2 whose closure does not contain antipodal points. For condition (4), we observe that since the closure of X does not contain antipodal points, there exists a number c_1 < π such that θ(x, y) < c_1 for each x,y ∈ X. Therefore there exists a c_2 > 0 such that, for all n ∈ℕ and x,y ∈ X,κ_n(x,y ) = cos( θ(x, y))^n >- e^-c_2n,and so, for any η > 0, as n →∞,s_n^-12 - ηinf_x,y ∈ X (κ_n(x,y) ∧ 0 ) = - n^6 + η/2 e^-c_2n→ 0 . For condition (5), let c_1 < π be as above, and choose a c_2 ∈ (0, 1/ c_1^2) such that |cos(t)| ≤ 1 - c_2 t^2 for each |t| < c_1. Together with the inequality log(1-x) ≤ -x, valid on x ∈ (0, 1), we have for all x,y ∈ X,|κ_n(x, y) | = |cos( θ(x, y) ) |^n≤ e^n log( 1 - c_2 θ(x, y)^2)≤ e^- n c_2 θ(x, y)^2 = e^-c_2 ( d(x, y) s_n^-1 )^2 .Hence for any η > 0 and C > 1,lim sup_n →∞sup_x,y ∈ X,θ(x, y) > C s_n(θ(x, y)s_n^-1)^18 + η |κ_n(x, y) | ≤lim sup_n →∞ sup_t > Ct^18 + ηe^-c_2 t^2 = sup_t > Ct^18 + ηe^-c_2 t^2 ,which tends to zero as C →∞. Remark first that κ_n is a valid covariance kernel since cos^n(x) cos^n(y) can be written as a Fourier series ∑_i,j a_i, jcos(ix) cos(jy) for non-negative coefficients a_i, j, which implies that κ_n is positive-definite.Similarly to in the proof of Theorem <ref> above, it is sufficient that the conditions of Theorem <ref> hold for any constant η > 0, scale s_n = n^-1/2 and subset X ⊆𝕋^2 such that the closure of X does not contain distinct x,y ∈ X having 2(x_1 - y_1) and 2(x_2 - y_2) as integers. The proof of this is similar to the proof of Theorem <ref>, so we omit the details. § PROOF OF THEOREM <REF>: RSW ESTIMATES FOR SEQUENCES OF RANDOM SETSIn this section we prove the abstract RSW estimates in Theorem <ref>, following the argument in <cit.> that established the analogous estimates for planar Voronoi percolation. For the benefit of a reader familiar with <cit.>, we explain the four main differences in our setting, and well as briefly describing the necessary modifications to the argument.* Recall that Theorem <ref> is stated for either the unit sphere 𝕊^2 or the flat torus 𝕋^2. The first difference is due to the non-Euclidean geometry of 𝕊^2; indeed, since the interior angles of spherical squares (see Definition <ref>) depend on their scale, many of the simple geometric arguments in <cit.> fail in the spherical case and need to be derived from scratch or modified significantly. On the other hand, on the flat torus these arguments work as in <cit.>. * Second, we work with a sequence of random sets rather than a single set. Hence we rely on extra `uniform' conditions on the covariance kernels in the statement of Theorem <ref>, which ensure that all the inputs into the argument are uniformly controlled. * Third, the random set considered in <cit.>, arising from planar Voronoi percolation, is asymptotically independent in a very strong sense: the Voronoi percolation restricted to disjoint domains is independent as long as there are no Voronoi cells intersecting both of them, see the discussion in section <ref>. Since we wish to apply Theorem <ref> to Gaussian random fields, we do not have this type of strong mixing of the model, and instead we work with a much weaker notion of asymptotic independence (see condition (6) in the statement of Theorem <ref>). 
* Finally, unlike <cit.>, the property of `positive associations' only applies inside a subset X ⊆𝕏; this is essential in order to include the Kostlan ensemble (<ref>). As a result, we need to take extra care in the argument to ensure our geometric constructions take place exclusively in this set. Before embarking on the proof of Theorem <ref>, we first build up a collection of preliminary results that hold for arbitrary n ∈ℕ. The first result (Lemma <ref>) can be viewed as a modification of the `standard theory' of RSW: this shows how to transform the bounds on the probability of crossing a small fixed box to infer the bounds on the probability of crossing large domains. The second set of results (section <ref>) contains our modification of Tassion's argument in <cit.>.Throughout the rest of this section we fix a set X ⊆𝕏 as in the statement of Theorem <ref>. Our preliminary results depend only on the conditions of Theorem <ref> that hold for each n ∈ℕ, namely the first non-degeneracy statement in condition (1), the symmetry in condition  (2), and the guarantee of positive associations in X in condition (3). We stress that all the preliminary estimates that we state give lower bounds on various crossing probabilities depending on n ∈ℕ in terms of a positive power of the quantityc_0(n) =inf_s >0 inf_ B ∈Box_X; 1(s)ℙ( 𝒞_B(𝒮_n) );importantly these are monotone increasing in c_0. By condition (4) of Theorem <ref>, c_0(n) is uniformly bounded away from zero for sufficiently large n ∈ℕ, which, in light of the above, yields a uniform control over the crossing probabilities for varying n. For the next two sections we work with arbitrary fixed n ∈ℕ, and for notational convenience we drop all dependencies on n and on the random set _n. §.§ The `standard theory' of RSW: From a fixed box to larger domains One of the most fundamental tools in percolation theory is the FKG inequality, which implies positive associations for the percolation subgraph, and in particular implies that crossing events are positively correlated. In the classical theory (i.e. on the plane), the FKG property is used to infer bounds on the probability of crossing larger domains from assumed bounds on the probability of crossing a fixed small box; we call this the `standard theory' of RSW. For instance, in<cit.> `horizontal' crossings of two overlapping rectangles are connected via a `vertical' crossing of a square to deduce a `horizontal' crossing of a longer rectangle.In our setting the property of positive associations is true in the set X by assumption, and by analogy we shall refer to this fact as the `FKG property'. We next state a version of the `standard theory' of RSW that is valid in the spherical setting. On the sphere, the construction used in <cit.> fails, since two spherical rectangles cannot be overlapped in a way that the overlapping region is a square. Instead, we connect `horizontal' crossings using a third `vertical' rectangle.Let us introduce a fixed box, B̅(s), which denotes, for each s > 0, an s × 2s box chosen arbitrarily. 
Recall also that, for each c, r ≥ 1 and s > 0, the collection of boxes and annuli Box_X; c(s) and Ann_X; c;r(s) were introduced in definitions <ref>, <ref> and <ref>, and note in particular that Ann_X; 6; 6(s) consists exclusively of s × 6s annuli.There exists a sufficiently small s^∗ > 0 such that the following holds for every c > 1 and s < s^∗: there exists a monotone increasing function f_c, depending only on c, and an absolute monotone increasing function g such thatinf_ B ∈Box_X; c(3s)ℙ(_B)> f_c( ℙ(_B̅(s) )) andinf_A ∈Ann_X; 6; 6(s)ℙ( _A )> g ( ℙ(_B̅(s))) . The value of s^∗ depends only on the geometry of 𝕏^2; in the case 𝕏 = 𝕋^2 it could be arbitrary, whereas in the case 𝕏 = 𝕊^2 it must be sufficiently small so that the distortions due to the spherical geometry are controlled on a ball B(s^∗). This constant could be computed explicitly, but its precise value is irrelevant. In the Euclidean case, the numbers 3 and 6 in the statement of Lemma <ref> could be replaced by 2 and 5 respectively. Using 3 and 6 provides a bit more `space' in the spherical case to account for distortions. The proof is based on the observation that, for sufficiently small s^∗ > 0 and s < s^∗, it is possible to form a box-chain out of alternating `horizontal' and `vertical' copies of B̅(s) that are aligned along a single geodesic (see Figure <ref>, left).For the first statement, fix 3s ≤ a, b ≤ 3cs and consider an a × b box B ⊆ X. Let {B_i} be a box-chain consisting of horizontal and vertical copies of B̅(s) aligned along the geodesic joining the mid-points of the opposite sides of B. Since the shortest sides of B are longer than the longest sides of B̅(s), for sufficiently small s^∗ > 0 and s < s^∗ we can find such a {B_i} that both crosses B and lies inside B (c.f. the Euclidean case, where we could replace the number 3 with 2); moreover, the number of boxes required depends only on c. Since the FKG property holds in B, this establishing the bound.The second statement is proved similarly, working instead with four inter-connecting box-chains aligned along the four `median' geodesics that bisect orthogonally the geodesic line segments joining the mid-points of the boundary squares of any s × 6s annuli A (see Figure <ref>, right). Such box-chains can be formed inside A since B̅(s) fits inside A when aligned with its shortest sides perpendicular to a geodesic bisecting A (c.f. the Euclidean case, where we could replace the number 6 with 5).§.§ Tassion's argumentIn this section we develop Tassion's argument from <cit.>, with suitable modifications to account for the difference in our setting. We begin by introducing, following Tassion, the concept of H-crossings and X-crossings of square boxes (see Figure <ref> for an illustration in the spherical case).Throughout this section, when the parameter s^∗ in the statement of a lemma may be set sufficiently small, we always implicitly set it so that the conclusion of Lemma <ref>, as well as the conclusion of any proceeding lemmas in this section, is valid. Since if X has an empty interior Theorem <ref> has no content, we may assume that X has non-empty interior, and, by symmetry, that X contains an open ball B(δ_0) centred at the origin. Therefore we may assume that s^∗ is sufficiently small so that all of the (finite) collections of domains that we manipulate in the proofs of the following lemmas are contained inside B(δ_0); we may thereby always assume the FKG property holds. 
* For each s > 0 and α, β∈ [0, s/2], an H-crossing of an s × s square box B = (D; γ, γ'), denoted by _s(α,β) = _s;B(α,β), is the event that a connected component of _n, restricted to B, intersects both γ and the segment of γ' of length β-α at distance α from the mid-point of γ' (see Figure <ref>, left). * For every s > 0 and α∈ [0, s/2], an X-crossing of an s × s square box B = (D; γ, γ'), denoted by _s(α) = _s;B(α), is the event that a connected component of _n, restricted to B, intersects the four segments of γ∪γ' obtained by removing from each of γ and γ' the centred intervals of length 2α (see Figure <ref>, right). Observe that, by the symmetry condition (2) of Theorem <ref> and the definitions of _s(α, β) and _s(α), both _s(α, β) and _s(α) are independent of the choice of the square box B. Hence the functionϕ_s(α)=_s(0,α)-_s(α,s/2)is well-defined, and is a continuous function of α by the first statement of the non-degeneracy condition (1) of Theorem <ref>. Recalling the definition (<ref>) of c_0, for every scale s > 0 we may fix the constantα_s=min(ϕ^-1_s(c_0/4),s/4),satisfying α_s ≤ s/4. The following lemma contains the essential consequences of the definition of α_s, c.f. <cit.>.There exists a sufficiently small s^∗>0, and absolute numbers a_1>0 and k_1∈ℕ, such that if s < s^*, the following two properties hold: (P1) For all 0≤α≤α_s, _s(α) > a_1· c_0^k_1.(P2) If α_s<s/4, then for all α_s≤α≤ s/2, _s(0,α)≥ c_0/4+_s(α,s/2). The proof of Lemma <ref> is independent of the geometry of the ambient space and the argument from <cit.> works in our setting unimpaired. The next three lemmas are the heart of Tassion's argument. Recall the fixed s× 2s box B̅(s) in the statement of Lemma <ref>. We think of s > 0 as being a `good' scale if it satisfies α_s≤ 2α_2s/3, and proceed to formulate a few consequences of a good scale. As a corollary, we deduce, for a fixed n ∈ℕ, the existence of uniform bounds on crossing probabilities on all large scales, provided that certain inputs into the argument are also controlled.As in the proof of Corollary <ref>, in this section we work with the collection (A_a, b)_a<b of a × b annuli centred at the origin that are `parallel', i.e. such that there is a single geodesic that passes through both mid-points of both pairs of opposite sides. When working with square boxes, we shall sometimes abuse notation by referring to these simply as `squares'.There exists a sufficiently small s^* > 0 and absolute numbers a_2>0 and k_2∈ℕ, such that if s<s^* and α_s≤ 2α_2s/3 then ( _B̅(2s) )> a_2· c_0^k_2.The proof of Lemma <ref> is similar to the proof of <cit.>, with certain modifications needed to handle the spherical geometry in the case 𝕏 = 𝕊^2; here we only give a sketch of the argument while explaining in detail the necessary modifications. We consider separately two cases,  α_s=s/4 and α_s=ϕ^-1_s(c_0/4)<s/4, beginning with with the first case. In light of (P1) from Lemma <ref>, we have a lower bound on _s(s/4) of the forma_2· c_0^k_2. Hence, by the FKG property, it suffices to construct a finite collection of s × s squares S_i such that if _s(s/4) holds for each S_i then so does _B̅(2s).In what follows we refer to the labelling in Figure <ref>, which illustrates the argument in the spherical case. Consider the s × s square ABCD and its translation A'B'C'D' by s/2 along the geodesic AB. 
Observe that if the event _s(s/4) holds for both squares ABCD and A'B'C'D' then there is a crossing ofinside the union of the squares that intersects the two sub-intervals of the geodesic AB' formed by removing a centred interval of length s. Repeating this construction along the top edge of B̅(2s) we obtain a crossing of B̅(2s) using only X-crossings of s × s squares.We turn to the second case. Since α_s≤2α_2s/3 and in light of Lemma <ref>, in this case we have lower bounds on both _2s/3(α_2s/3 ) and _s(0, 2 α_2s/3 ) of the forma_2· c_0^k_2. Hence, by the FKG property, it suffices to construct a finite collection ofs × s squares S_i, and (2s/3) × (2s/3) squares T_i, such that if _s(0, 2 α_2s/3) and _2s/3(α_2s/3 ) holds for each S_i and T_i respectively, then so does _B̅(s).In what follows we refer to the labelling in Figures <ref>–<ref>, which illustrate the argument in the spherical case. Consider the s × s square ABCD and its translation A'B'C'D' by a distance d (to be determined) along the geodesic joining the mid-points of the sides AD and BC. Our aim is to deduce a horizontal crossing of the union of these squares (i.e. between AD and B'C') by assuming _s(0, 2 α_2s/3 ) holds for both the squares, and assuming also _2s/3(α_2s/3 ) holds for two suitably chosen (2s/3) × (2s/3) squares. In the planar case <cit.> we may let d = 4s/3, since then the shaded region in Figure <ref> forms a (2s/3) × (2s/3) square which is sufficient for this purpose. In the spherical case this shaded region is not a square for any choice of translation distance, and so we shall need a slightly different construction.We consider the (2s/3) × (2s/3) square abcd such that its `right' side bc lies on BC with its mid-point coinciding with the mid-point of the marked thick interval fg of length 2α_2s/3 (see Figure <ref>). Note that bc is a subset of BC since α_s ≤ s/4 for each s, and hence s/3 + α_2s/3 is at most s/2. Once this square is fixed, we consider the unique geodesic which passes through the middle of the side ad of the small square abcd and orthogonal to the geodesic connecting mid-points of AD and BC. We define the second s× s square A'B'C'D' to be the square such that its left side is on this geodesic. The second (2s/3) × (2s/3) square a'b'c'd' (not shown in Figure <ref>, but magnified in Figure <ref>) is constructed as the symmetric image of abcd and its left side is on A'D'. Observe that the mid-point of ad (marked by a dot) will lie on the marked interval e'h'. We also notice, that since the distance between the mid-points of an (2s/3) × (2s/3) square is 2s/3+O(s^3) where O(s^3) term depends on s only, the square A'B'C'D' is a copy of ABCD shifted by d = s/3+O(s^3). In particular, there is s^* such that for all s<s^* it holds that d ∈ ( s/4, s/2).Next let us consider the two small squares abcd and a'b'c'd', shown in more detail in Figure <ref>. We mark the middle parts of length 2α_2s/3 on `vertical' sides of both small squares. Two of these marked intervals fg and e'h' are the marked intervals on Figures <ref> and <ref>. As mentioned above, the intervals eh and e'h' intersect. By symmetry, the intervals fg and f'g' intersect as well. This implies thatanycurve in abcd connecting ae to cg must intersect a'h' and c'f' and thus disconnect h'd' from b'f' inside a'b'c'd'. This shows that if _2s/3(α_2s/3) holds for both squares, then the connecting curves must intersect.We also notice that a curve connecting a'e' with h'd' separates e'h' from the right side of the s × s square A'B'C'D'. 
Similarly, a curve connecting bf and cg separates fg from AD. This implies that in the event that there are crossings from AD to fg, from e'h' to B'C', and two X-crossings in abcd and a'b'c'd' there is a crossing connecting AD to B'C'.All in all, we infer a horizontal crossing of the union of the s × s squares (i.e. between AD and B'C') that are translated a distance d ∈ (s/4, s/2) apart. We finish the proof of Lemma <ref> by using a similar construction to the one in the proof of Lemma <ref>, using multiple copies of such a crossing (i.e. alternating `horizontally' and `vertically') and as long as s^∗ is sufficiently small, to infer a crossing of B̅(2s). There exists a sufficiently small s^* > 0 and absolute numbers a_3>0 and k_3∈ℕ, such that if, for some s and t such that 12 s ≤ t < s^*, α_s ≤ 2α_2s/3 and α_t ≤ s both hold, then (_A_t, 6t) > a_3· c_0^k_3. Since α_s ≤ 2α_2s/3, Lemma <ref> yields a lower bound on [_B̅(2s)] of the form a_3· c_0^k_3. By Lemma <ref> we then conclude the same for _A_2s, 12s. Since also _t(0, s )≥ c_0 / 4 (implied by α_t ≤ s), by the FKG property it suffices to find two t × t squares S_1 and S_2 such that if _t(0, s) holds for S_1 and S_2, and _A_2s, 12s also holds, then we may deduce _B̅(t).The proof is identical to <cit.>, and we only briefly sketch it. Consider the two t × t squares S_1 = (D_1; γ_1, γ') and S_2 = (D_2; γ_2, γ') whose common side γ' has the origin as its mid-point. Observe that if _A_2s, 12s holds simultaneously with the event _t(0, s ) for S_1 and S_2, then there exists a crossing of S_1 ∪ S_2; for this observe that the distance between mid-points of a s × s square is at least s (in the spherical case it is precisely arccos(1-2tan^2(s/2)) = s+1/8s^3+O(s^5)> s), and so the line-segment of length s in the definition of _t(0, s ) lies inside the inner square bounding A_2s, 12s. Since such a crossing of S_1 ∪ S_2 also implies a crossing of two squares that are translated by any smaller amount along the geodesic joining the mid-points of the opposite sides of S_1 and S_2, we infer a crossing of B̅ (t). Finally, from Lemma <ref> we deduce the statement. To state the final lemma in Tassion's argument, we need a certain assumption that is related to condition (6) of Theorem <ref>, stated for a fixed n ∈ℕ.For a quadruple (c, ε, C, s) with c > 0, ε∈ (0, 1), C ≥ 1 and s > 0, we assume that the following holds: Ifinf_A ∈Ann_X; C, 6(s)(_A(_n)) > c,then, for each s × Cs annulus A ⊆ X,( _A)>1- ε.It is clear that if Assumption <ref> is valid for a quadruple (c, ε, C, s), then it is also valid for the quadruple (c', ε', C, s) for any c' > c and ε' > ε. For the final two lemmas, we let a_3 and k_3 be the constants proscribed by Lemma <ref> and fixc_3 = a_3 · c_0^k_3. Fix C > 1. Then there exist a number C_1 > 12, depending only on C, and a sufficiently small s^* > 0, such that if s <s^*, α_s ≤ 2α_2s/3 and Assumption <ref> holds for the quadruple (c_3,c_0/ 8, C, 12s), then there exists a number t∈ [12 s, C_1 s] such that α_t ≤ 2α_2t/3. Suppose s < s^* and α_s ≤ 2α_2s/3. The first step is to show that α_t_i > s for at least one of t_1 = 12 s ort_2 = 2 C t_1 = 24 C s.Let A be a t_1 × 12 C t_1 annulus. Arguing by contradiction, if α_t_1≤ s, then from Lemma <ref> we deduce thatinf_A ∈Ann_X; C; 6(t_1) ℙ( _A ) > c_3 .Hence, since we make Assumption <ref> for the quadruple (c_3,c_0/8, C, t_1), it holds that( _A )>1-c_0/8. 
On the other hand, let S be a t_2 × t_2 square whose centre coincides with the centre of A and such that one side of S lies on a geodesic bisecting A. Define the event E = _t_2(0,s) ∖_t_2(s, t_2/2) for the square S, and remark that the occurrence of the event E implies that 𝒞_A does not occur (see Figure <ref>). Recalling the definition (<ref>) of ϕ_s(α) it is clear that ℙ(E) ≥ϕ_t_2(s). If also α_t_2≤ s, this implies α_t_2 < t_2/4 by (<ref>), and by (P2) of Lemma <ref> we have ϕ_t_2(s) ≥ c_0/4. Since E ⊆𝒞_A^c, this shows that ℙ(𝒞_A) is at most 1 - c_0/4, which is a contradiction. To conclude the proof of Lemma <ref>, recall that α_s is sub-linear in the sense that α_s < s for all s > 0. Hence if α_t_i > s for at least one of t_1 = 12s or t_2 = 24 C s, then there exists a sufficiently large C_1, depending only on C, such that α_t ≤ 2α_2t/3 for at least one t ∈ [12s, C_1 s].

To conclude this section we combine the preceding lemmas into a form most convenient for completing the proof of Theorem <ref>. Fix the constants C > 0 and c̅_1, c̅_2 > 0. Then there exist a sufficiently small s^* > 0 and numbers a_4 > 0, k_4 ∈ℕ and C_1 > 0, depending only on C, c̅_1 and c̅_2, such that, if s < s^*, α_s > c̅_1 s, and Assumption <ref> holds for the quadruple (c_3, c_0/8, C, t) for all t > s, then

inf_s' > C_1 s inf_B ∈Box_X; c̅_2(s') ℙ(𝒞_B) > a_4 · c_0^k_4.

In light of lemmas <ref> and <ref>, it suffices to exhibit constants C_1, C_2 > 0, depending only on C and c̅_1, and a sequence of `good' scales {s^(i)}_1 ≤ i ≤ k such that

s^(1) < C_1 s / 6, 12 ≤ s^(i+1) / s^(i) ≤ C_2 and 12 s^(k) ≤ s^*,

and such that α_s^(i) ≤ 2α_2s^(i)/3 holds for each 1 ≤ i ≤ k. We argue by induction. For the base case, we argue as in the proof of Lemma <ref>: since α_s > c̅_1 s and α_s is sub-linear (in the sense that α_s ≤ s/4 for all s), there exists a sufficiently large C_1, depending only on C and c̅_1, such that α_t ≤ 2α_2t/3 for at least one t ∈ [12s, C_1 s/6]. Next suppose we have a scale s^(i) such that s^(i) < s^* and α_s^(i) ≤ 2α_2s^(i)/3. We may suppose that Assumption <ref> holds for the quadruple (c_3, c_0/8, C, 12 s^(i)). Hence by Lemma <ref> there exists a number t ∈ [12s^(i), 12 Cs^(i)] such that α_t ≤ 2α_2t/3, which concludes the induction step, and thus also Corollary <ref>.

§.§ Concluding the proof of Theorem <ref>

Fix s^* > 0 to be sufficiently small such that the conclusions of Lemma <ref> and Corollary <ref> are valid. Before continuing, we discuss the roles of conditions (1) and (6) of Theorem <ref> in ensuring that the conclusion of Corollary <ref> holds on all necessary scales and is uniform for sufficiently large n. We first claim that (<ref>) in condition (1) implies that, for each C > 0, α_C s_n / s_n is uniformly bounded from below. In fact, we prove the stronger statement that α_C s_n > r s_n for any r ∈ (0, C/4) such that

ℙ(ℒ_v(r s_n) ∩ ∂ S = ∅) > 1 - c_0/4

for all directions v in the spherical case (resp. x and y directions in the toral case); the existence of a single such r > 0 for n sufficiently large is then guaranteed by (<ref>). Similarly to the proof of Lemma <ref>, consider the event E = _C s_n(0, r s_n) ∖_C s_n(r s_n, C s_n) corresponding to a Cs_n × Cs_n square S, and let L denote the line-segment of length r s_n on the boundary of S used to define the event _C s_n(0, r s_n). It is then clear that ℙ(E) ≥ϕ_C s_n(r s_n). If we now assume, for contradiction, that (<ref>) holds and α_C s_n ≤ r s_n, then since r < C/4, by (P2) of Lemma <ref> it must be true that ϕ_C s_n(r s_n) ≥ c_0/4.
Since E implies that ∂ S intersects L, we have thatℙ( ℒ_v(r s_n)∩∂ S= ∅) =ℙ( | L ∩∂ S| = ∅ ) ≤ 1-c_0/4 ,which is a contradiction.Next we observe that condition (6) of Theorem <ref> implies Assumption <ref> on all necessary scales. To see why note that condition (6) guarantees the existence, for any choice of c > 0 and ε > 0, of constants C_1, C_2 > 1such that, for all sufficiently large n and all s > C_1 s_n, Assumption <ref> holds for the quadruple (c, ε, C_2, s); in particular it also holds for any larger c and ε (see the remark immediately after Assumption <ref>). Recall now thatc_0(n) = inf_s >0 inf_ B ∈Box_X; 1(s)ℙ( 𝒞_B(𝒮_n) ) ,is bounded from below by some constant ĉ_0 for sufficiently large n; hence, by (<ref>), the same is true for the number c_3(n) prescribed by Lemma <ref>, monotonically increasing in c_0. Putting this together, condition (6) guarantees the existence of C_1, C_2> 0 such that, for all sufficiently large n, Assumption <ref> holds for the quadruple (c_3(n), c_0(n) / 8, C_2, s) for all s > C_1 s_n. At this point we may fix such C_1, C_2> 0 and n sufficiently large such that the assumption holds for all s > C s_n.We can now finish the proof of Theorem <ref>. Choose c > 0 as in the statement of the RSW estimates. Given the definition of Unif_X;c(s), and since the FKG property is valid in X, it is sufficient to show the existence of a constant c_1 such that for sufficiently large n,inf_s > 0 inf_k ∈ (0, c) inf_Bas × ksbox ℙ(_B(_n )) > c_1.In turn, it is sufficient to establish (<ref>) on both the microscopic scales s ≈ s_n, and then for all larger scales s ≫ s_n.For the microscopic scales s ≈ s_n, recall that, by condition (4) of Theorem <ref>, there exist numbers δ>0 and c_2>0 such that, for all sufficiently large n,inf_s < δ s_n inf_k ∈ (0, c) inf_Bas × ksbox ℙ(_B(_n )) > c_2.By Lemma <ref>, the same conclusion holds for δ replaced by any constant C, i.e. there exists a c_4, depending on C, such that for sufficiently large n,inf_s < C s_n inf_k ∈ (0, c) inf_Bas × ksbox ℙ(_B(_n )) > c_4 . For the larger scales s ≫ s_n, take the constant C_1 that was fixed above, and recall that α_C_1 s_n / s_n is uniformly bound below by some constant c̅_1. Since also Assumption <ref> holds for the quadruple (c_3(n), c_0(n) / 8, C_2, t) for all t > C_1 s_n, by Corollary <ref> there are numbers a_4>0, k_4∈ℕ and C_3 > 0, depending only on c, c̅_1, C_1 and C_2, such thatinf_s' > C_3 s_ninf_ B ∈Box_X; c( s')ℙ(_B) > a_4 · c_0^k_4(n) >a_4 ·ĉ_̂0̂^k_4,which establishes (<ref>) for s > C_3 s_n. Combining with (<ref>) we conclude the proof. § PERTURBATION ANALYSISIn this section we establish the auxiliary results used in the perturbation analysis in section <ref>. In the first part we prove Proposition <ref>, showing that crossing events are determined, outside a small error event, by the signs of the field on a (deterministic) finite set of points. In the second part we prove Lemma <ref>, which controls the effect of a perturbation on the signs of Gaussian vectors.§.§ Measurability of crossing events on a finite number of points We use the following preliminary lemma, which bounds the probability that the nodal set crosses any (geodesic) line-segment twice. Recall that for symmetric covariance kernels we often abuse notation by writing κ_n(x) = κ_n(0, x).Let f be a Gaussian random field on 𝕏 whose covariance kernel κ is C^4 and is symmetric in the sense of Definition <ref>. 
Suppose that there exists δ > 0 such that, for every x, y ∈𝕏 with 0 < d(x,y) < δ, the random vector (f(x), f(y)) ∈ℝ^2 is non-degenerate. Define

L_2 = sup_v ∈𝕊^1 |κ”_v(0)| and L_4 = sup_v ∈𝕊^1 max_d(0,y) < δ |κ^(iv)_v(y)|,

where κ”_v and κ^(iv)_v are the second and fourth derivatives of κ in direction v respectively. Then there exists an absolute constant c > 0 such that, for each geodesic line-segment ℒ⊆𝕏 of length ε < δ,

ℙ(| {x ∈ℒ : f(x) = 0} | ≥ 2) < c ε^3 √(L_2^3 + L_2^-1 L_4^2).

It is convenient to use the arc-length parametrisation of ℒ, namely let f̃ : [-ε/2, ε/2] →ℝ be the restriction f|_ℒ of f to ℒ, and denote by κ̃: [-ε/2, ε/2] →ℝ its covariance kernel. By the symmetry assumption on f, the process f̃ is stationary, and with no loss of generality we may assume that f̃ is unit variance. Let N = | {x ∈ℒ : f(x) = 0} |. Applying the Kac-Rice formula <cit.>, valid by the non-degeneracy assumption on (f(x), f(y)) in Lemma <ref>, we have

𝔼[N(N-1)] = ∫_x,y ∈ [-ε/2, ε/2] M_2(x-y) dx dy

with M_2(x) ≥ 0 the two-point correlation function of the zeros of f̃. It is known <cit.> that M_2 is given by

M_2(x) = 1/π^2 · (-κ̃”(0)(1-κ̃(x)^2) - κ̃'(x)^2)/(1-κ̃(x)^2)^3/2 · (√(1-ρ(x)^2) + ρ(x)·arcsin ρ(x)),

with ρ an explicit expression in terms of κ̃ and its first two derivatives, irrelevant for our purpose. The upshot is that the function t ↦√(1-t^2) + t·arcsin t is bounded from above, hence

M_2(x) ≤ c_1 · (-κ̃”(0)(1-κ̃(x)^2) - κ̃'(x)^2)/(1-κ̃(x)^2)^3/2

for some absolute constant c_1 > 0. Finally, recall that κ is C^4, and so Taylor's theorem implies that each x ∈ [0, ε] satisfies

|κ̃(x) - 1 - 1/2 κ̃”(0) x^2| ≤ max_y ∈ B(δ) |κ̃^(iv)(y)| x^4 and |κ̃'(x) - κ̃”(0)x| ≤ max_y ∈ B(δ) |κ̃^(iv)(y)| x^3.

Expanding (<ref>) into the Taylor polynomial of fourth degree around the origin with the help of (<ref>), we obtain the bound

M_2(x) ≤ c_2 (|κ̃”(0)|^3/2 + |κ̃”(0)|^-1/2 max_y ∈ B(δ) |κ̃^(iv)(y)|) |x|

with some absolute constant c_2 > 0. Finally, integrating the latter inequality over x, y ∈ [-ε/2, ε/2] as in (<ref>) yields that

𝔼[N(N-1)] < c_3 ε^3 (|κ̃”(0)|^3/2 + |κ̃”(0)|^-1/2 max_y ∈ B(δ) |κ̃^(iv)(y)|),

with c_3 > 0 absolute. Since 𝕏 has constant curvature, the ratios of the derivatives of κ̃ and κ are bounded from above and from below by absolute constants, and so by Markov's inequality we conclude the proof. We now state the main implication of Lemma <ref> in our setting. Recall the set-up of the perturbation analysis from section <ref>, and in particular the constant δ_0 and the limit kernel K_∞. The following is an easy corollary of Lemma <ref>, the uniform convergence of κ_n on B(δ_0) to K_∞ along with its first four derivatives, and the fact that K_∞ satisfies Assumption <ref> (and so in particular has strictly negative second derivatives at the origin); by the above we can take a single number δ > 0 satisfying the assumptions of Lemma <ref> applied to f = f_n for n sufficiently large (i.e. the δ corresponding to K_∞). There exists a number 0 < δ < δ_0 sufficiently small, and c_1 > 0 sufficiently large depending on K_∞ only, such that for n ∈ℕ sufficiently large the following holds. For every geodesic line-segment ℒ⊆𝕏 of length ℓ∈ (0, δ),

ℙ(| {x ∈ℒ : f_n(x) = 0} | ≥ 2) < c_1 (ℓ/s_n)^3.

We can now complete the proof of Proposition <ref>. For this we will use the following notion of a `triangular decomposition' of a polygon.
* For a polygon P = (D; γ, γ') as in Definition <ref>, a triangular decomposition 𝕋 of P is a (finite) embedded graph on 𝕏∩ P such that each edge is a geodesic line-segment, each face has three boundary edges, and the union of the faces equals P, save for boundaries. * A triangular decomposition 𝕋 of a polygon P is said to be compatible with P if both γ and γ' can be expressed as the union of edges of 𝕋. * A triangular decomposition of an annulus A as in Definition <ref> is defined analogously. Fix n ∈ℕ sufficiently large, δ>0 sufficiently small and c_1 sufficiently large, so that the conclusion of Corollary <ref> holds, and fix also c, r > 1 as in the statement of Proposition <ref>. Let s > 0, ε∈ (0, 1) and Q ∈Poly_𝕏; c(s) ∪Ann_𝕏; c;r(s) be given. By the definition of the sets Poly_𝕏; c(s) and Ann_𝕏; c(s), there exists a number c_2 > 0, depending only on c and r, such that for each ℓ∈ (0, s ∧δ] there exists a triangular decomposition 𝕋 of Q with the following properties: (i) if Q ∈Poly_𝕏; c(s) then 𝕋 is compatible with Q; (ii) the edges of 𝕋 have length at most ℓ s; and (iii) 𝕋 has at most c_2 (s/ℓ)^2 vertices.Fix an edge e in 𝕋 and consider the event that e is crossed at least twice by the nodal set. Applying Corollary <ref>, there exists a constant c_2, depending only on K_∞, such that this event is of probability at most c_2( ℓ / s_n)^3. By the union bound, the event E that all the edges of 𝕋 are crossed at most once by the nodal set has probability bounded from below by1 - c_1 c_2 (s/ℓ)^2 (ℓ/s_n)^3= 1 -c_1 c_2 s^2 s_n^-3ℓ.Settingℓ = min{δ,s, ε s_n^3/(c_1 c_2 s^2)},this is bounded from below by 1 - ε. Moreover, with this choice of ℓ, the cardinality of 𝒫 is at most|𝒫| ≤c_2 max{δ^-2 s^2, 1, (c_1 c_2)^2 ε^-2(s/s_n)^-6}Since the sets Poly_𝕏; c(s) and Ann_𝕏; c(s) are empty unless s is less than a constant (2 π in the spherical case, 1 in the toral case), this in turn is bounded from above by|𝒫|≤ c_3 ( ε^-2(s/s_n)^-6∧ 1) ,where c_3 > 0 is a constant depending only on c, r, δ and K_∞.Finally, observe that on the event E the crossing event _Q(_n^+) is determined by the subset of edges in 𝕋 that are crossed exactly once by the nodal set (if Q ∈Poly_𝕏; c(s) the compatibility of 𝕋 with Q is crucial in this step). Since this subset of edges is, in turn, determined by the signs of f_n on the vertices of the triangular decomposition 𝕋, we conclude the proof.§.§ Proof of Lemma <ref> We begin with the first statement. Define the matricesΣ_Z = n δ𝕀_n andΣ_W = n δ𝕀_n + Σ_Y - Σ_X ,where 𝕀_n denotes the n× n identity matrix. By the Gershgorin circle theorem and the definition of δ, the matrix Σ_W is positive-definite. HenceY + Z d= X + Wwhere Z and W are independent Gaussian random vectors with respective covariance matrices Σ_Z and Σ_W.Fix ε > 0 and define the eventsℰ_1 = ⋃_i = 1^n{ |Y_i| < ε}, ℰ_2 = ⋃_i=1^n { |Z_i| > ε}, ℰ_3 = ⋃_i = 1^n{ |X_i| < ε}andℰ_4 = ⋃_i=1^n { |W_i| > ε} .Observe that the variance of the components of Y and X are at least one, whereas the variance of the components of Z and W are at most (n+1)δ. 
Hence by the union bound, standard results on the maximum of Gaussian vectors, and Markov's inequality, there exists an absolute number c_1 > 0 such that

ℙ(ℰ_1) + ℙ(ℰ_3) < c_1 n ε and ℙ(ℰ_2) + ℙ(ℰ_4) < c_1 (log n ∨ 1)^1/2 ε^-1 ((n+1)δ)^1/2.

This implies that we may couple the vectors X and Y so that, outside of an event of probability

< c_1 (n ε + (log n ∨ 1)^1/2 ε^-1 ((n+1) δ)^1/2),

the signs of all the components of the vectors are equal, and hence all the events measurable w.r.t. the signs of the vectors have the same probability up to the said error. To optimise the result we set

ε = δ^1/4 (n+1)^1/4 n^-1/2 (log n ∨ 1)^1/4,

which yields the error probability as

c_1 n^1/2 (n+1)^1/4 (log n ∨ 1)^1/4 δ^1/4 < c_2 (n^{3+η} δ)^{1/4},

for a constant c_2 depending only on η > 0. For the second statement the argument is similar. Since Σ_Y - Σ_X is positive-definite, one may write Y d= X + W where W is an independent Gaussian random vector with covariance matrix Σ_Y - Σ_X. Fix ε > 0 and let ℰ_1, ℰ_3 and ℰ_4 be defined as before. Since the variance of the components of W is at most δ, as before there exists an absolute c_1 > 0 such that

ℙ(ℰ_1) + ℙ(ℰ_3) < c_1 n ε and ℙ(ℰ_4) < c_1 (log n ∨ 1)^1/2 ε^-1 δ^1/2.

Hence we may couple the vectors X and Y so that, outside of an event of probability

< c_1 (n ε + (log n ∨ 1)^1/2 ε^-1 δ^1/2),

the signs of all the components of the vectors are equal. Setting

ε = δ^1/4 n^-1/2 (log n ∨ 1)^1/4,

the error is at most c_2 (n^{2+η} δ)^{1/4} for a constant c_2 depending only on η > 0.
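The coupling above is easy to visualise numerically. The toy sketch below is ours, with arbitrary illustrative parameters: in the spirit of the second statement it writes Y = X + W for a rank-one (positive semi-definite) perturbation Σ_Y - Σ_X = δ·𝟙𝟙^T, and records how often some coordinate of X and Y differs in sign; the frequency shrinks with δ.

```python
import numpy as np

rng = np.random.default_rng(1)

n, trials = 50, 20_000
x = rng.standard_normal((trials, n))     # X with Sigma_X = identity (unit variance)
for delta in (1e-2, 1e-4, 1e-6):
    # W has covariance delta * (all-ones matrix): the same N(0, delta) value is
    # added to every coordinate, a positive semi-definite perturbation of Sigma_X.
    w = np.sqrt(delta) * rng.standard_normal((trials, 1))
    y = x + w
    mismatch = np.mean(np.any(np.sign(x) != np.sign(y), axis=1))
    print(delta, mismatch)               # shrinks as delta decreases
```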
http://arxiv.org/abs/1709.08961v2
{ "authors": [ "Dmitry Beliaev", "Stephen Muirhead", "Igor Wigman" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20170926121205", "title": "Russo-Seymour-Welsh estimates for the Kostlan ensemble of random polynomials" }
Specht modules labelled by hook bipartitions II

We continue the study of Specht modules labelled by hook bipartitions II for the Iwahori–Hecke algebra of type B with e∈{3,4,…} via the cyclotomic Khovanov–Lauda–Rouquier algebra ℋ_n^Λ. Over an arbitrary field, we explicitly determine the graded decomposition submatrices for ℋ_n^Λ comprising rows corresponding to hook bipartitions.

Keywords: modular representation theory, modular representations of Hecke algebras, KLR algebras, Specht modules, graded decomposition numbers

Mathematics Subject Classification 2010: 20C08, 20C20, 20C30, 05E10

§ INTRODUCTION

The study of the representations of the ℤ-graded cyclotomic Khovanov–Lauda–Rouquier algebras (alternatively the cyclotomic quiver Hecke algebras), denoted ℋ_n^Λ, has been motivated by their connection with the well-studied complex reflection groups and their deformations via Brundan and Kleshchev's Graded Isomorphism Theorem in <cit.>. This allows us to consider the Ariki–Koike algebras associated to a complex reflection group of type G(l,1,n) as graded algebras. The most important open question in the representation theory of the Ariki–Koike algebras is the Decomposition Number Problem. One aims to understand the graded composition multiplicity [S_λ:D_μ⟨ k⟩]_v of the irreducible module, D_μ, as a composition factor of the Specht module, S_λ, for all multipartitions λ and for all regular multipartitions μ. Throughout this paper, we will fix l=2 and study the graded representation theory of the corresponding Iwahori–Hecke algebra of type B from the perspective of ℋ_n^Λ. In particular, we continue the study from <cit.> of the special family of Specht modules labelled by hook bipartitions, namely S_((n-m),(1^m)), as ℋ_n^Λ-modules. For the first time, we determine the corresponding graded decomposition numbers, which we observe are independent of the characteristic of the ground field.

Main Result. Let 𝔽 be arbitrary, e∈{3,4,…}, and λ=((n-m),(1^m)) with m∈{0,…,n}. We completely determine the graded decomposition numbers [S_λ:D_μ⟨ k⟩]_v for all regular bipartitions μ.

Over a field of characteristic zero, we note that there exist recursive algorithms for determining decomposition numbers for the Ariki–Koike algebras. We know from Ariki's Categorification Theorem in <cit.>, together with recent work of Brundan and Kleshchev <cit.>, that the graded decomposition numbers for the cyclotomic Khovanov–Lauda–Rouquier algebras can be determined from the canonical basis elements of the quantum affine algebra U_q(𝔰𝔩_e) via the LLT algorithm <cit.> in level one, and via an analogous algorithm <cit.> in higher levels. While these are major breakthroughs in the field, the recursive nature of these algorithms means that explicit computations for all but sufficiently small n are impossible, and the Decomposition Number Problem remains unsolved. In positive characteristic, we obtain the decomposition matrices for the Ariki–Koike algebras from the decomposition matrices in characteristic zero by post-multiplying them by certain adjustment matrices. However, there exists no analogue of the LLT algorithm for determining these adjustment matrices in positive characteristic, and moreover, we have very few explicit examples to hand. One of the most fundamental problems is to determine when the decomposition numbers in characteristic zero and positive characteristic coincide, and hence when the adjustment matrices are trivial.
Due to Williamson's counterexamples for the symmetric groups <cit.>, we now know that the long-standing James Conjecture <cit.> can no longer hope to provide a partial solution to this problem. Except in a few cases, it is completely unknown when the adjustment matrices are trivial. In level two, Brundan and Stroppel <cit.> and Hu and Mathas <cit.> show that the decomposition numbers of the Iwahori–Hecke algebra of type B do not depend on the characteristic of the ground field when e is either infinite or sufficiently large.In level three, Lyle and Ruff <cit.> study certain blocks of the Ariki–Koike algebras, and determine that their corresponding adjustment matrices are, in fact, trivial for all quantum characteristics. This paper works in finite quantum characteristic e∈{3,4,…}, and adds to these recent developments by providing a special family of Specht modules for the Iwahori–Hecke algebra of type B whose corresponding decomposition numbers are independent of the characteristic of the ground field.In this paper, we add to this literature by studying the structure of Specht modules labelled by hook bipartitions. We first recall from <cit.> that the explicit presentation of ℋ_n^Λ was used to determine composition series of S_((n-m),(1^m)) in which we defined its composition factors in terms of quotients either of the kernels or of the images of certain Specht module homomorphisms. In this way, we can write down explicit spanning sets for these composition factors in terms of standard basis elements of S_((n-m),(1^m)), which will later help us to determine certain properties of their gradings. In general, we note that it is a non-trivial task to explicitly determine which of the regular multipartitions label the irreducible modules arising in the composition series of Specht modules. However, since we know that every composition factor of a Specht module arises as the head of a Specht module labelled by a regular multipartition, we are able to use Brundan and Kleshchev's i-restriction and i-induction functors <cit.> to find isomorphisms between the composition factors of S_((n-m),(1^m)) as presented in <cit.> and the irreducible heads D_μ of certain Specht modules labelled by regular bipartitions. We thus determine characteristic-free ungraded multiplicities [S_((n-m),(1^m)):D_μ] for all regular bipartitions μ, and hence observe that the corresponding submatrices of the adjustment matrices are trivial. Furthermore, we completely determine the analogous graded composition multiplicities [S_((n-m),(1^m)):D_μ]_v by exploiting the combinatorial grading on Specht modules as defined in <cit.>. We remark that one can alternatively keep track of the grading shifts throughout the preceding paper <cit.> so that we immediately arrive at the graded results and hence implicitly recover the ungraded ones, however this method would give us little to no advantage since the resulting computations would be similar to those presented in this article. We instead only enter into the graded world in this paper to provide a distinction between the combinatorial calculations we now perform and those given in <cit.> of the action of the ℋ_n^Λ-generators on standard basis elements of Specht modules.The structure of this paper is as follows. In <ref>, we present necessary background details of the graded representation theory of the cyclotomic Khovanov–Lauda–Rouquier algebras, and in particular we provide a brief overview of Specht modules labelled by hook bipartitions. 
In <ref>, we determine the composition factors of these Specht modules in terms of irreducible heads D_μ of Specht modules for certain regular bipartitions μ. In doing so, it follows from <cit.> that we completely determine the ungraded decomposition matrices of ℋ_n^Λ corresponding to hook bipartitions; we present these results in <ref>. Furthermore, by obtaining results in <ref> on the graded dimensions of Specht modules labelled by hook bipartitions and of their composition factors, we present the explicit graded decomposition numbers of ℋ_n^Λ corresponding to hook bipartitions in <ref>.§ BACKGROUND Throughout this paper, we let 𝔽 be an arbitrary field and let 𝔖_n be the symmetric group on n letters. Let q∈𝔽^× be a cyclotomic eth root of unity such that e ∈{3,4,…}; we call e the quantum characteristic. We set I:=ℤ/eℤ and identify I with the set {0,1,…,e-1}. Recall that for a fixed level, l, we let the e-multicharge of l be the ordered l-tuple κ=(κ_1,κ_2,…,κ_l)∈ I^l, with associated domaninant weight Λ=Λ_κ_1+… +Λ_κ_l of level l. We refer the reader to <cit.> for further details on the corresponding Lie-theoretic notation. §.§ Graded algebras and graded modules We familiarise the reader with the fundamental theory of graded algebras and modules; <cit.> provides a superb guide to graded representation theory. An 𝔽-algebra A is called graded, more precisely ℤ-graded, if there exists a direct sum decomposition A=⊕_i∈ℤA_i such that A_iA_j⊆A_i+j for all i,j∈ℤ. An element in the summand A_i is said to be homogeneous of degree i. For a_i∈A_i, we write (a_i)=i.Given a graded 𝔽-algebra A, we say that the (left) A-module M is ℤ-graded if there exists a direct sum decomposition M=⊕_i∈ℤM_i such that A_iM_j⊆M_i+j for all i,j∈ℤ. We denote the abelian category of all finitely generated graded (left) A-modules by A. If M∈ A, then we obtain the module M⟨k⟩ by shifting the grading on M upwards by k∈ℤ. For an indeterminate v, we set M⟨k⟩ =v^kM, so that the grading on M⟨k⟩ is defined by ( M⟨k⟩)_i=(v^kM)_i=M_i-k. The graded dimension of M is defined to be the Laurent polynomial(M)=∑_i∈ℤ(M_i)v^i ∈ℕ[v,v^-1].Suppose that A has a homogeneous anti-involution ∗:A→ A, and write a^∗ for the image of a∈ A under this map. Then we define the dual of M to be the ℤ-graded A-moduleM^⊛=⊕_k∈ℤHom_𝔽(M⟨ k⟩,𝔽),where the A-action is given by (af)(m)=f(a^∗m) for all a∈ A, f∈ M^⊛ and m∈ M.We say that a graded composition series for M∈ A is a filtration of graded submodules {0}=M_0 ⊂ M_1 ⊂…⊂ M_n-1⊂ M_n=M such that the quotients M_i/M_i-1 are irreducible for all i∈{1,…,n}, which we refer to as the graded composition factors of M. The Jordan–Hölder theorem yields an analogous graded version, and thus it makes sense to study graded decomposition numbers [M:L]_v of M, where L is a graded irreducible A-module. The graded multiplicity of L as a composition factor of M is defined to be the Laurent polynomial [M:L]_v=∑_i∈ℤ[M:L⟨i⟩]v^i ∈ℕ[v,v^-1]. Note that by setting v=1 into the above definitions, we recover the ungraded analogues.§.§ Multipartitions, Young diagrams and tableaux We recall from <cit.> basic combinatorial notation and definitions in this section.We write 𝒫_n^l for the set of all l-multipartitions of n, and in particular, we write ∅ for the empty multipartition. Let λ=(λ^(1),…,λ^(l))∈𝒫_n^l. We define the Young diagram of λ to be[λ]:= { (i,j,m)∈ℕ×ℕ×{1,…,l} |1⩽j⩽λ_i^(m)}.We draw the ith component λ^(i) of [λ] above its (i+1)th component λ^(i+1) for all i∈ℕ. 
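Readers who wish to experiment with the combinatorics that follows may find a computational encoding useful. The following short Python sketch is ours and is not part of the paper: it fixes the conventions used in the later sketches, encoding an l-multipartition as a tuple of partitions and its Young diagram as the set of nodes (i,j,m) with 1-based indices, the first component regarded as lying above the second; the function name young_diagram is our own choice.

def young_diagram(mu):
    # mu is a tuple of partitions, e.g. ((7, 4, 4), (4,)) encodes ((7,4^2),(4))
    return {(i, j, m)
            for m, comp in enumerate(mu, start=1)
            for i, row_length in enumerate(comp, start=1)
            for j in range(1, row_length + 1)}

lam = ((7, 4, 4), (4,))
print(len(young_diagram(lam)))           # 19 nodes, since |lambda| = 7+4+4+4
print((1, 7, 1) in young_diagram(lam))   # True: the last node of the first row of component 1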
Each element (i,j,m)∈[λ] is called a node of λ, and in particular, an (i,j)-node of the mth component λ^(m). We say that the node (i_1,j_1,m_1)∈[λ] lies strictly above the node (i_2,j_2,m_2)∈[λ] if either i_1<i_2 and m_1=m_2 or m_1< m_2.We say that A∈[λ] is a removable node for λ if [λ]\{A} is a Young diagram of an l-multipartition of n-1. Similarly, we say that A∉[λ] is an addable node for λ if [λ]∪{A} is a Young diagram of an l-multipartition of n+1.A λ-tableauis a bijection :[λ]→{1,…,n}. We callstandard if the entries in each row increase from left to right along the rows of each component, and the entries in each column increase from top to bottom down the columns of each component. We denote the set of all standard λ-tableaux by (λ). The column-initial tableau _λ is the λ-tableau whose entries 1,…,n appear in order down consecutive columns, working from left to right in components l,l-1,…,1, in turn. §.§ Residues and degreesWe fix an e-multicharge κ=(κ_1,…,κ_l)∈I^l. The e-residue of a node A=(i,j,m) lying in the space ℕ×ℕ×{1,…,l} is defined to beA := κ_m+j-i e.We say that an i-node is a node of residue i.Letbe a λ-tableau. We write r=(i,j,m) to denote that the integer entry r lies in node (i,j,m)∈[λ], and set _(r)= (i,j,m). The residue sequence ofis defined to be𝐢_=(_(1),…,_(n)). We define the degree of an addable i-node A of λ∈𝒫_n^l to bed^A(λ):=#{addable i-nodes of λ strictly above A} -#{removable i-nodes of λ strictly above A}. Let ∈(λ) be such that n lies in node A of λ. We set (∅):=0, and define the degree ofrecursively via():=d^A(λ)+ (_⩽n-1),where _⩽n-1 is the standard tableau obtained by removing node A from . Let e=3 and κ=(0,0). There are five standard ((1),(1^4))-tableaux, namely _1=(1,,2,3,4,5), _2=(2,,1,3,4,5), _3=(3,,1,2,4,5), _4=(4,,1,2,3,5), _5=(5,,1,2,3,4). We find the degree of _1 as follows. We note that the degree of any node in the first row of the first component is 0, so d^(1,1,1)=0 and hence (_⩽1)=0. Observe that _⩽ 2=(1,,2), which has 3-residues (!0,,!0). Thus (1,1,2) has removable 0-node (1,1,1) (shaded above), and hence d^(1,1,2)=-1. Now observe _⩽ 3=(1,,2,3), which has 3-residues (;0,!<2pt> ,,!<0.5pt>0,;2). Thus (2,1,2) has addable 2-node (2,1,1) (outlined above), and hence d^(2,1,2)=1. Now observe _⩽ 4=(1,,2,3,4), which has 3-residues (;0!<2pt> ,,!<0.5pt>0!<2pt> ,!<0.5pt>2,;1). Thus (3,1,2) has addable 1-nodes (1,2,1) and (1,2,2) (outlined above), and hence d^(3,1,2)=2. We finally observe that _1=(1,,2,3,4,5), which has 3-residues (!0,,!0,2,1,0). Thus (4,1,2) has removable 0-node (1,1,1) (shaded above), and hence d^(4,1,2)=-1. Hence (_1)=d^(1,1,1)+d^(1,1,2)+d^(2,1,2)+d^(3,1,2)+d^(4,1,2)=1. Similarly, one can find that (_2)= (_5)=3, (_3)=2 and (_4)=1. §.§ Cylotomic Khovanov–Lauda–Rouquier algebras and Specht modulesThe presentation of the cyclotomic Khovanov–Lauda–Rouquier algebra, ℋ_n^Λ, introduced independently by Khovanov and Lauda in <cit.> and Rouquier in <cit.> endows ℋ_n^Λ with a canonical ℤ-grading. We know from Brundan and Kleshchev's Graded Isomorphism Theorem <cit.> that ℋ_n^Λ is isomorphic to a cyclotomic Hecke algebra (of type A).We refer the reader to <cit.> for the construction of Specht modules, S_λ, over the cyclotomic Khovanov–Lauda–Rouquier algebras, which are indexed by multipartitions λ and generated by the element z_λ as an ℋ_n^Λ-module. We study (column) Specht modules as given in <cit.>, which are dual to those given in <cit.> and consistent with James' classical construction of Specht modules over 𝔽𝔖_n. 
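The recursion for the degree of a standard tableau in the example above is easy to mechanise. The following Python sketch is ours and is not part of the paper: the function names res, addable, removable, shape_of and deg are our own, the node encoding is the one fixed earlier, and the script reproduces the degrees 1, 3, 2, 1, 3 of the five standard ((1),(1^4))-tableaux for e=3 and κ=(0,0), hence the graded dimension 2v^3+v^2+2v recorded below.

E, KAPPA = 3, (0, 0)   # quantum characteristic and multicharge of the example above

def res(node):
    # e-residue of a node (i, j, m)
    i, j, m = node
    return (KAPPA[m - 1] + j - i) % E

def strictly_above(a, b):
    # (i1,j1,m1) lies strictly above (i2,j2,m2)
    return a[2] < b[2] or (a[2] == b[2] and a[0] < b[0])

def addable(shape):
    # addable nodes of a multipartition, given as a tuple of tuples of row lengths
    out = []
    for m, comp in enumerate(shape, start=1):
        rows = list(comp)
        for i in range(1, len(rows) + 2):
            length = rows[i - 1] if i <= len(rows) else 0
            above = rows[i - 2] if i >= 2 else float("inf")
            if length < above:
                out.append((i, length + 1, m))
    return out

def removable(shape):
    out = []
    for m, comp in enumerate(shape, start=1):
        rows = list(comp)
        for i, length in enumerate(rows, start=1):
            below = rows[i] if i < len(rows) else 0
            if length > below:
                out.append((i, length, m))
    return out

def shape_of(nodes, l=2):
    # the multipartition whose Young diagram is the given set of nodes
    comps = []
    for m in range(1, l + 1):
        rows = {}
        for (i, j, mm) in nodes:
            if mm == m:
                rows[i] = max(rows.get(i, 0), j)
        comps.append(tuple(rows.get(i, 0) for i in range(1, max(rows, default=0) + 1)))
    return tuple(comps)

def deg(tab):
    # tab is a dict {entry: node}; deg(t) is the sum of d^A over the entries 1, ..., n
    total, placed = 0, []
    for k in sorted(tab):
        A = tab[k]
        placed.append(A)
        shape, i = shape_of(placed), res(A)
        total += sum(1 for B in addable(shape) if res(B) == i and strictly_above(B, A))
        total -= sum(1 for B in removable(shape) if res(B) == i and strictly_above(B, A))
    return total

# the five standard ((1),(1^4))-tableaux of the example above: entry a sits in the arm node (1,1,1)
for a in range(1, 6):
    tab = {a: (1, 1, 1)}
    tab.update({x: (r, 1, 2) for r, x in enumerate([x for x in range(1, 6) if x != a], start=1)})
    print(a, deg(tab))   # degrees 1, 3, 2, 1, 3, so the graded dimension is 2v^3 + v^2 + 2v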
For a λ-tableau , we recall from <cit.> that for a reduced expression of w_∈𝔖_n such that w__λ=, we can define the vector v_=ψ_w_z_λ for some element ψ_w_∈ℋ_n^Λ associated to the reduced expression of w_. In general, the vector v_ depends on the choice of a reduced expression of w_. The existence of these vectors ensures that Specht modules naturally inherit a ℤ-grading from ℋ_n^Λ.

<cit.> and <cit.> Let λ∈𝒫_n^l. Then the set of vectors {v_ | ∈(λ)} is a homogeneous 𝔽-basis of S_λ of degree determined by (v_)=(). Recall that this basis is called the standard homogeneous basis of S_λ. We can now define the graded dimensions of Specht modules using the degree function on standard tableaux as follows. Let λ∈𝒫_n^l. Then the graded dimension of S_λ is defined to be (S_λ):=∑_∈(λ)v^ (). We thus note that the graded dimensions of Specht modules depend only on the quantum characteristic e and not directly on the ground field 𝔽. Let e=3 and κ=(0,0). Following <ref>, we know that (S_((1),(1^4))) = v^(_1)+v^(_2)+v^(_3)+v^(_4)+v^(_5) = 2v^3+v^2+2v.

§.§ Regular multipartitions We introduce numerous combinatorial definitions following <cit.>, most of which date back to <cit.>, and we adopt notation introduced by Fayers in <cit.>. Let λ∈𝒫_n^l. We denote the total number of removable i-nodes of λ by _i(λ), and we denote the total number of addable i-nodes of λ by _i(λ). We write the l-multipartition obtained by removing all of the removable i-nodes from λ as λ^▿i, and we write the l-multipartition obtained by adding all of the addable i-nodes to λ as λ^▵i.

We define the i-signature of λ∈𝒫_n^l by reading the Young diagram [λ] from the top of the first component down to the bottom of the last component, writing a + for each addable i-node and writing a - for each removable i-node, so that the leftmost sign corresponds to the highest addable or removable i-node of λ. We obtain the reduced i-signature of λ by successively deleting all adjacent pairs +- from the i-signature of λ; the reduced i-signature is always of the form -…-+…+.

Let e=3, κ=(0,0) and λ=((7,4^2),(4)). The 3-residues of λ along the three rows of its first component are 0120120, 2012 and 1201, and along the single row of its second component they are 0120. The removable 0-nodes of λ are (1,7,1) and (1,4,2), and the addable 0-nodes of λ are (2,5,1) and (4,1,1). Thus, by removing all of the removable 0-nodes from λ and, respectively, adding all of the addable 0-nodes to λ, we obtain the multipartitions λ^▿0=((6,4^2),(3)) and λ^▵0=((7,5,4,1),(4)). Reading [λ] from top to bottom, the 0-signature of λ is -++-, and the reduced 0-signature is -+, corresponding to the nodes (1,7,1) and (2,5,1), respectively.

The removable i-nodes corresponding to the - signs in the reduced i-signature of λ are called the normal i-nodes of λ, and similarly, the addable i-nodes corresponding to the + signs in the reduced i-signature of λ are called the conormal i-nodes of λ. We denote the total number of normal i-nodes of λ by _i(λ) and the total number of conormal i-nodes of λ by _i(λ). The lowest normal i-node of [λ], if there is one, is called the good i-node of λ, and corresponds to the last - sign in the reduced i-signature of λ. Similarly, the highest conormal i-node of [λ], if there is one, is called the cogood i-node of λ, and corresponds to the first + sign in the reduced i-signature of λ.
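The signature combinatorics just defined can be checked with the same machinery. The following lines continue the Python sketch above (they assume res, addable and removable from that sketch are in scope; signature and reduced_signature are our own names) and recover the data of the example for λ=((7,4^2),(4)), e=3 and κ=(0,0): the 0-signature -++-, the reduced 0-signature -+, the good 0-node (1,7,1) and the cogood 0-node (2,5,1).

def signature(shape, i):
    # the i-signature as a list of (sign, node), read from the top of the first
    # component down to the bottom of the last component
    signs = [("+", A) for A in addable(shape) if res(A) == i]
    signs += [("-", A) for A in removable(shape) if res(A) == i]
    return sorted(signs, key=lambda s: (s[1][2], s[1][0]))   # order by (component, row)

def reduced_signature(shape, i):
    # successively delete adjacent +- pairs
    stack = []
    for sign, node in signature(shape, i):
        if sign == "-" and stack and stack[-1][0] == "+":
            stack.pop()
        else:
            stack.append((sign, node))
    return stack

lam = ((7, 4, 4), (4,))
print("".join(s for s, _ in signature(lam, 0)))            # -++-
red = reduced_signature(lam, 0)
print("".join(s for s, _ in red))                          # -+
print("good 0-node:", [A for s, A in red if s == "-"][-1])   # (1, 7, 1)
print("cogood 0-node:", [A for s, A in red if s == "+"][0])  # (2, 5, 1)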
For r∈{0,…,_i(λ)}, we denote the multipartition obtained from λ by removing the r lowest normal i-nodes of λ by λ↓_i^r, and for r∈{0,…,_i(λ)}, we denote the multipartition obtained from λ by adding the r highest conormal i-nodes of λ by λ↑_i^r. We set ↑_i:=↑_i^1 when adding the cogood i-node of λ and we set ↓_i:=↓_i^1 when removing the good i-node of λ. It is easy to see that A is a cogood i-node of λ∈𝒫_n^l if and only if A is a good i-node of λ∪{A}. The operators ↑_i^r and ↓_i^r act inversely on a multipartition λ∈𝒫_n^l in the following sense:λ↓_i^r↑_i^r=λ(0⩽r⩽_i(λ)); λ↑_i^s↓_i^s=λ(0⩽s⩽_i(λ)). We define the set of all regular l-multipartitions of n to beℛ𝒫_n^l:={∅↑_i_1…↑_i_n |i_1,…,i_n∈I}.If a multipartition λ lies in ℛ𝒫_n^l, then λ is called regular. Hence λ∈𝒫_n^l is regular if and only if [λ] is obtained by successively adding cogood nodes to ∅. §.§ Graded irreducible ℋ_n^Λ-modules In this section, we review a classification of the graded irreducible ℋ_n^Λ-modules. It is well known that the Specht module S_λ has the quotient D_λ:=S_λ/ S_λ for each λ∈𝒫_n^l, where the radical of S_λ is defined from a homogeneous symmetric bilinear form on S_λ of degree zero (see <cit.> for details). We know that each D_λ is either absolutely irreducible or zero by <cit.>, and moreover, D_λ is absolutely irreducible if and only if λ∈ℛ𝒫_n^l <cit.>.<cit.> and <cit.> *{ D_λ⟨i⟩ | λ∈ℛ𝒫_n^l,i∈ℤ} is a complete set of pairwise non-isomorphic irreducible graded ℋ_n^Λ-modules. *For all λ∈ℛ𝒫_n^l, D_λ≅D_λ^⊛ as graded ℋ_n^Λ-modules. §.§ Graded decomposition numbers Decomposition numbers record information about the structure of Specht modules. We denote the ungraded decomposition number by d_λ,μ= [S_λ:D_μ] where λ∈𝒫_n^l and μ∈ℛ𝒫_n^l, which is the multiplicity of D_μ appearing as a composition factor of S_λ.We denote the ungraded decomposition matrix for ℋ_n^Λ by (d_λ,μ), and we write (d_λ,μ^𝔽) when we want to emphasise the ground field. It is well known that we can compute the ungraded decomposition matrices, (d_λ,μ^ℂ), for ℋ_n^Λ via the generalised LLT algorithm given by Fayers in <cit.>, whereas determining decomposition numbers in positive characteristics is an open problem. We know from <cit.> that there exists an adjustment matrix (a_ν,μ^𝔽) such that (d_λ,ν^𝔽)=(d_λ,ν^ℂ)(a_ν,μ^𝔽) where ν,μ∈ℛ𝒫_n^l, but there exists no algorithm for determining the entries in this matrix.We know from <Ref> that we can endow Specht modules with a ℤ-grading, and since there exists a graded version of the Jordan–Hölder theorem, we can study their graded composition factors. We define the graded decomposition number to bed_λ,μ(v)= [S_λ:D_μ]_v:= ∑_i∈ℤ[S_λ:D_μ⟨ i⟩]v^i ∈ℕ[v,v^-1],where λ∈𝒫_n^l and μ∈ℛ𝒫_n^l. We record these graded multiplicities in a graded decomposition matrix, (d_λμ(v)), where its rows are indexed by multipartitions and its columns are indexed by regular multipartitions. The following result for ℋ_n^Λ is a more general version of <cit.> for 𝔽𝔖_n.<cit.> Let λ∈𝒫_n^l and μ∈ℛ𝒫_n^l. Then *d_μ,μ(v)=1; *d_λ,μ(v)≠0 only if μλ. Moreover, if 𝔽=ℂ then d_λ,μ(v)∈ vℕ(v) whenever μλ. We denote the graded adjustment number by a_ν,μ^𝔽(v).<cit.> Let λ∈𝒫_n^l and μ∈ℛ𝒫^l_n. Thend_λ,μ^𝔽(v)=∑_ν∈ℛ𝒫_n^ld_λ,ν^ℂ(v)a_ν,μ^𝔽(v),for some a_ν,μ^𝔽(v)∈ℕ[v,v^-1] with a_ν,μ^𝔽(v)=a_ν,μ^𝔽(v^-1). 
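The operators ↓_i^r and ↑_i^r are equally mechanical. Continuing the same sketch (it assumes reduced_signature and the other helpers above are in scope; add_node, remove_node, down and up are our own names), the following lines remove good nodes and add cogood nodes, and verify the displayed identities λ↓_i^r↑_i^r=λ and λ↑_i^s↓_i^s=λ for the running example λ=((7,4^2),(4)) with i=0 and r=s=1.

def add_node(shape, node):
    i, _, m = node
    comps = [list(c) for c in shape]
    if i == len(comps[m - 1]) + 1:
        comps[m - 1].append(1)
    else:
        comps[m - 1][i - 1] += 1
    return tuple(tuple(c) for c in comps)

def remove_node(shape, node):
    i, _, m = node
    comps = [list(c) for c in shape]
    comps[m - 1][i - 1] -= 1
    while comps[m - 1] and comps[m - 1][-1] == 0:
        comps[m - 1].pop()
    return tuple(tuple(c) for c in comps)

def down(shape, i, r=1):
    # lambda ↓_i^r : remove the r lowest normal i-nodes of lambda
    for A in [A for s, A in reduced_signature(shape, i) if s == "-"][-r:]:
        shape = remove_node(shape, A)
    return shape

def up(shape, i, r=1):
    # lambda ↑_i^r : add the r highest conormal i-nodes of lambda
    for A in [A for s, A in reduced_signature(shape, i) if s == "+"][:r]:
        shape = add_node(shape, A)
    return shape

lam = ((7, 4, 4), (4,))
print(down(lam, 0))                 # ((6, 4, 4), (4,)): the good 0-node (1,7,1) is removed
print(up(down(lam, 0), 0) == lam)   # True:  lambda ↓_0 ↑_0 = lambda
print(up(lam, 0))                   # ((7, 5, 4), (4,)): the cogood 0-node (2,5,1) is added
print(down(up(lam, 0), 0) == lam)   # True:  lambda ↑_0 ↓_0 = lambda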
§.§ Induction and restriction of ℋ_n^Λ-modules The Decomposition Number Problem for ℋ_n^Λ of determining the multiplicities [S_λ:D_μ] for all λ∈𝒫_n^l and for all μ∈ℛ𝒫_n^l is equivalent to the Branching Problem of determining the multiplicities[_ℋ_n-1^Λ^ℋ_n^ΛD_λ:D_μ]for all λ∈𝒫_n^l and for all μ∈ℛ𝒫_n^l. The restriction of the ordinary representations of the symmetric group and their composition factors are well understood via the Classical Branching Rule for 𝔽𝔖_n (for example, see <cit.>), which was first extended to the Ariki–Koike algebras (or the cyclotomic Hecke algebras) by Ariki–Koike <cit.>, and which has recently been extended to the cyclotomic Khovanov–Lauda–Rouquier algebras by Mathas <cit.>.We first introduce Brundan and Kleshchev's i-restriction and i-induction functors, e_i and f_i respectively, acting on 𝔽𝔖_n-modules, as given in Section 2.2 of <cit.>. These functors are exact, and originate from Robinson <cit.>; we extend these functors to act on ℋ_n^Λ-modules.Let M be an ℋ_n^Λ-module. For i∈ℤ/eℤ, there are i-restriction functors e_i:ℋ_n^Λ→ℋ_n-1^Λ,and i-induction functors f_i:ℋ_n^Λ→ℋ_n+1^Λ, such that <cit.>^ℋ_n^Λ_ℋ_n-1^ΛM ≅⊕_i∈ℤ/eℤ e_i M and_ℋ_n^Λ^ℋ_n+1^ΛM ≅⊕_i∈ℤ/eℤf_iM.For i∈ℤ/eℤ and r⩾0, there exist the divided power i-restriction functors e_i^(r):ℋ_n^Λ→ℋ_n-r^Λ and the divided power induction i-functors f_i^(r):ℋ_n^Λ→ℋ_n+r^Λ, which satisfy <cit.>e_i^rM≅⊕_k=1^r! e_i^(r)M and f_i^rM≅⊕_k=1^r! f_i^(r)M.For a non-zero ℋ_n^Λ-module M, we defineϵ_i(M)={r⩾0 | e_i^(r)M≠0}andφ_i(M)={r⩾0 | f_i^(r)M≠0}.We now set e_i^()M=e_i^(ϵ_iM)M and f_i^()M=f_i^(φ_iM)M. By refining the Branching Rule for ℋ_n^Λ-modules, we obtain <cit.> and its analogue. Let i∈ℤ/eℤ and λ∈𝒫_n^l. *Then ϵ_i(S_λ)=_i(λ) and e_i^()S_λ≅ S_λ^▿i. *Then φ_i(S_λ)=_i(λ) and f_i^()S_λ≅ S_λ^▵i.§.§ Modular branching rules for ℋ_n^Λ-modules Kleshchev developed the analogous theory for restricting the modular representations of the symmetric group <cit.>, which Brundan extended to Hecke algebras of type A <cit.>. These modular branching rules were generalised for cyclotomic Hecke algebras, proven by Ariki in the proof of <cit.>. Thus modular branching rules for the cyclotomic Khovanov–Lauda–Rouquier algebras make sense, which we note here.<cit.> Let i∈ℤ/eℤ and λ∈ℛ𝒫_n^l. * Then ϵ_i(D_λ)=_i(λ) and e_i^()D_λ≅ D_λ↓^_i(λ)_i. * Then φ_i(D_λ)=_i(λ) and f_i^()D_λ≅ D_λ↑^_i(λ)_i. Let e=3, κ=(0,2) and λ=((9,6,2^2,1),(4,3,2)). Since we can obtain λ from (∅,∅) by adding certain conormal nodes as follows λ=(∅,∅)↑_2↑_1↑_0↑_2↑_0↑_1^2↑_2^2↑_0^4↑_1^4↑_2↑_0↑_2^4↑_0^2↑_1^3↑_2, we know from (<ref>) that λ is a regular bipartition. We observe from <Ref> the 3-residues of λ, together with its addable nodes. Thus λ has 2-signature -+–++ and reduced 2-signature –++. One can also observe that we have drawn the bipartitions obtained from λ by: 1) removing all of the normal 2-nodes of λ (outlined in <Ref>), corresponding to the - signs in the reduced 2-signature of λ, and 2) adding all of the conormal 2-nodes of λ (shaded in <Ref>), corresponding to the + signs in the reduced 2-signature of λ. It thus follows from <ref> that e_2^(2)D_λ≅D_((8,6,2^2,1),(3^2,2)); f_2^(2)D_λ≅D_((9,6,2^2,1),(4,3^2,1)). For each i∈ℤ/eℤ, there is at most one good i-node of λ, and hence at most e good nodes of λ. It follows from <cit.> that the socle of the restriction of an irreducible ℋ_n^Λ-module D_λ to an ℋ_n-1^Λ-module is a direct sum of at most e indecomposable ℋ_n^Λ-summands. 
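The branching example above can be verified with the same sketch. Continuing it (the helpers and the operators down and up from the previous sketches are assumed to be in scope, and the global multicharge is reset to κ=(0,2) while e remains 3), the following lines recompute the 2-signature and reduced 2-signature of λ=((9,6,2^2,1),(4,3,2)) and the bipartitions λ↓_2^2 and λ↑_2^2 that label e_2^(2)D_λ and f_2^(2)D_λ in the example.

KAPPA = (0, 2)   # multicharge of the example above; res() reads this global at call time

lam = ((9, 6, 2, 2, 1), (4, 3, 2))
print("".join(s for s, _ in signature(lam, 2)))           # -+--++
print("".join(s for s, _ in reduced_signature(lam, 2)))   # --++
print(down(lam, 2, r=2))   # ((8, 6, 2, 2, 1), (3, 3, 2)), labelling e_2^(2) D_lambda
print(up(lam, 2, r=2))     # ((9, 6, 2, 2, 1), (4, 3, 3, 1)), labelling f_2^(2) D_lambda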
Moreover, we also know from <cit.> that we can verify that the residue sequence of λ\{A} is distinct for each good node A of λ, so that each summand D_λ\{A} belongs to a distinct block of ℋ_n^Λ. We generalise this result to “divided powers” as follows. Let i∈ℤ/eℤ and λ∈ℛ𝒫_n^l. * If r⩽_i(λ), then (e_i^(r)D_λ) ≅ D_λ↓_i^r. * If r⩽_i(λ), then ( f_i^(r)D_λ)≅ D_λ↑_i^r. It follows that the modular branching rules for Specht modules of the cyclotomic Khovanov–Lauda–Rouquier algebras ℋ_n^Λ, together with the operators ↑_i^r and ↓_i^r, provide a combinatorial algorithm for determining the labels of irreducible ℋ_n^Λ-modules. Let r⩾ 0 and i∈ℤ/eℤ.If D is an irreducible ℋ_n^Λ-module with e_i^(r)D≅D_λ for some λ∈ℛ𝒫_n-r^l, then D=D_λ↑_i^r. Suppose that D=D_μ where μ∈ℛ𝒫_n^l, so that e_i^(r)D=e_i^(r)D_μ≅D_λ. We know that r⩽_i(μ) since e_i^(r)D≠0, then from the first part of <ref> we have (e_i^(r)D_μ)≅D_ν where ν=μ↓_i^r. Since e_i^(r)D_μ≅D_λ, we have ν=λ. Then, by (<ref>), λ↑_i^r=μ↓_i^r↑_i^r=μ, as required. Let 0⩽r⩽_i(λ) with e_i^(r)D_μ≅D_λ for some μ∈ℛ𝒫_n^l and λ∈ℛ𝒫_n-r^l. Then the normal i-nodes of μ and the conormal i-nodes of λ coincide, and hence(f_i^(r)(e_i^(r)D_μ))≅D_μ↓_i^r↑_i^r=D_μ. For non-irreducible ℋ_n^Λ-modules, we can determine the labels of their composition factors by applying the same combinatorial algorithm using the following result. Let r⩾0 and i∈ℤ/eℤ. If M is an ℋ_n^Λ-module with e_i^(r)M≅D_μ for some μ∈ℛ𝒫_n-r^l, then one of the composition factors of M is D_μ↑_i^r. Moreover, all of the other composition factors of M are killed by e_i^(r).Let e=3, κ=(0,2) and λ=((6),(1^3)). We successively remove the maximum number of removable i-nodes (shaded below) from λ as follows.(01201!2,,!2,1,0)(0120!1,,!2,1,0)(012!0,,!2,1,!0)(01!2,,!2,1)(0!1,,!2,!1)(0,,!2)(!0,,:)(:,,:).Hence e_0e_2e_1^(2)e_2e_0^(2)e_1e_2S_λ≅ S_(∅,∅), which we know is irreducible, and moreover, (∅,∅)↑_0↑_2↑_1^2↑_2↑_0^2↑_1↑_2=((6,2,1),∅). It thus follows from <ref> that D_((6,2,1),∅) is a composition factor of S_λ.§.§ Specht modules labelled by hook bipartitions We fix l=2 from now on and recall that e∈{3,4,…}. A hook bipartition of n is defined to be a bipartition of the form ((n-m),(1^m)) for some m∈{0,…,n}. We refer to the first component of a hook bipartition as its arm and to its second component as its leg. We call the node (1,n-m,1) lying at the end of its arm its hand node, and the node (m,1,2) lying at the end of its leg its foot node.Letbe the standard ((n-m),(1^m))-tableau with entries a_1,…,a_m∈{1,…,n} lying in its leg, and recall from <cit.> that we define v(a_1,…,a_m):=v_ to be the corresponding standard basis element of S_((n-m),(1^m)). We note that sinceis completely defined by the strictly increasing entries a_1,…,a_m that lie in a single column,the corresponding vector v_ is independent of the choice of a reduced expression of w_ (which is generally not the case).We remind the reader of some of the Specht module homomorphisms that were introduced in <cit.>, which we will require later on.We have the following non-zero homomorphisms of Specht modules. 
*If n≡κ_2-κ_1+1e and 0⩽m⩽n-1, we have γ_m: S_((n-m),(1^m)) ⟶ S_((n-m-1),(1^m+1)), γ_m(z_((n-m),(1^m)))=v(1,…,m,n).*If κ_2≡κ_1-1e and 1⩽m⩽n-1, we have χ_m: S_((n-m,1^m),∅) ⟶ S_((n-m),(1^m)), χ_m(z_((n-m,1^m),∅))=v(2,3,…,m+1).*If κ_2≡κ_1-1e, n≡0e and 1⩽m⩽n-1, we have ϕ_m: S_((n-m+1,1^m-1),∅) ⟶ S_((n-m),(1^m)), ϕ_m(z_((n-m+1,1^m-1),∅))=v(2,3,…,m,n).The standard basis elements of the kernels and images of the above Specht modules homomorphisms are as follows.<cit.>*If n≡κ_2-κ_1+1e, then * (γ_m) ={ v_ | ∈((n-m-1),(1^m+1)), (m+1,1,2)=n }; * (γ_m) ={ v_ | ∈((n-m),(1^m)), (m,1,2)=n }. * If κ_2≡κ_1-1e, then * (χ_m)={v_ |∈((n-m),(1^m)),(1,1,1)=1}; *(χ_m)=0; *If κ_2≡κ_1-1e and n≡κ_2-κ_1+1e, then * If m<n-1, then (ϕ_m)={v_|∈((n-m),(1^m)),(1,1,1)=1,(m,1,2)=n}. * If m=n-1, then (ϕ_m)={ v_|∈((1),(1^n-1)),(1,1,1)=1}. § ONE-DIMENSIONAL SPECHT MODULES We determine the labels of the irreducible ℋ_n^Λ-modules that are isomorphic to the one-dimensional Specht modules, namely S_((n),∅) and S_(∅,(1^n)), which arise as composition factors of S_((n-m),(1^m)). We note that we work solely with ungraded cyclotomic Khovanov–Lauda–Rouquier modules up to and including <Ref>. We let l be the residue of κ_2-κ_1 modulo e throughout, so that l∈{0,…,e-1}.We know that S_((n),∅)={z_((n),∅)} and S_(∅,(1^n))={z_(∅,(1^n))} are both one-dimensional ℋ_n^Λ-modules, and hence are both irreducible. In fact, S_((n),∅)=D_((n),∅). We now introduce a -functor to determine the bipartition μ∈ℛ𝒫_n^2 such that S_(∅,(1^n))≅D_μ as ungradwed ℋ_n^Λ-modules.For 1⩽r⩽n, S_(∅,(1^r)) has only one removable node, namely (r,1,2), that satisfies (r,1,2)=κ_2+1-re. Thus the only restriction functor which acts non-trivially on S_(∅,(1^r)) is e_κ_2+1-r:ℋ_r^Λ⟶ℋ_r-1^Λ, where e_κ_2+1-rS_(∅,(1^r))≅S_(∅,(1^r-1)). For r⩾0, we now define the -restriction functor to be the composition of restriction functorse_:=e_κ_2∘e_κ_2-1∘…∘e_κ_2+1-r :ℋ_r^Λ⟶ℋ_0^Λ,with the property that e_S_(∅,(1^r))≅S_(∅,∅).With r=n, we observe that e_ is the only composition of n i-restriction functors which acts non-trivially on S_(∅,(1^n)). Analogously, we now define the -induction functor to bef_:=f_κ_2+1-r∘f_κ_2+2-r∘…∘f_κ_2 :ℋ_0^Λ⟶ℋ_r^Λfor r⩾ 0. The -induction functor acts non-trivially on S_(∅,∅); we now determine the socle of f_S_(∅,∅). For each a∈ℕ∪{0}, we define the following weakly decreasing sequence of e-1 integers that sum to a and that differ by at most one{a} := ⌊a+e-2/e-1⌋, ⌊a+e-3/e-1⌋, …, ⌊a/e-1⌋. We now give an explicit description of the regular bipartition that labels the irreducible ℋ_n^Λ-module that is isomorphic to S_(∅,(1^n)). Let (∅,(1^n))^R:=(∅,(1^n)) if n<l, (({n-l}),(1^l)) if n⩾l.Let n∈ℕ∪{0}. Then S_(∅,(1^n))≅ D_(∅,(1^n))^R as ungraded ℋ_n^Λ-modules. Let 1⩽r⩽n, and suppose that S_(∅,(1^n))≅D_μ for some μ∈ℛ𝒫_n^2. It follows from (<ref>) that ^ℋ_n^Λ_ℋ_0^Λ S_(∅,(1^n))≅ e_ S_(∅,(1^n)).For any r>1, there is only one removable (κ_2+1-r)-node of [(∅,(1^r))], so that ϵ_κ_2+1-r(S_(∅,(1^r)))=1. It thus follows from <ref> that e_ S_(∅,(1^n))≅ S_(∅,(1^n))^▿e_ = S_(∅,(1^n)) ^▿(κ_2+1-n)▿(κ_2+2-n)⋯▿κ_2≅ S_(∅,∅).Define (∅,∅)↑_^n := (∅,∅) ↑_κ_2↑_κ_2-1…↑_κ_2+1-n. Since S_(∅,(1^n)) is irreducible, we know from <ref> that S_(∅,(1^n))≅ S_(∅,∅)↑_^n = D_(∅,∅)↑_^n. To calculate (∅,∅)↑_^n, we successively add the highest conormal node of e-residue κ_2,κ_2-1,…,κ_2+1-n, respectively, to [(∅,∅)].Firstly, we successively add the highest l conormal nodes of e-residue κ_2,κ_2-1,…,κ_2-l+1, respectively, to [(∅,∅)]. 
Since κ_1=κ_2-le, it is easy to see that (∅,(1^i)) has (κ_2-i)-signature +, corresponding to node (i+1,1,2) for each i∈{0,…,l-1}. Hence (∅,∅) ↑_κ_2↑_κ_2-1…↑_κ_2-l+1 =(∅,(1^l)).If n⩽l, then we are done. Instead suppose that n>l. We now successively add the highest e conormal nodes to [(∅,(1^l))] of e-residue κ_1,κ_1-1,…,κ_1+1, respectively. Notice that ((1^i),(1^l)) has (κ_1-i)-signature + for each i∈{0,…,e-1}, corresponding to node (i+1,1,1), except in the following cases. *The κ_1-signature of (∅,(1^l)) is ++, corresponding to the nodes (1,1,1) and (l+1,1,2), respectively. Hence (∅,(1^l))↑_κ_1=((1),(1^l)). *Let l>0. Then the (κ_1+l+1)-signature of ((1^e-l-1),(1^l)) is ++, corresponding to the nodes (e-l,1,1) and (1,2,2), respectively. Hence ((1^e-l-1),(1^l))↑_κ_1+l+1=((1^e-l),(1^l)). *If l>0, then the (κ_1+1)-signature of ((1^e-1),(1^l)) is ++-, corresponding to the nodes (1,2,1), (e,1,1) and (l,1,2), respectively. If l=0, then the (κ_1+1)-signature of ((1^e-1),∅) is ++, corresponding to the nodes (1,2,1) and (e,1,1), respectively. Hence ((1^e-1),(1^l))↑_κ_1+1=((2,1^e-2),(1^l)). It thus follows that(∅,(1^l)) ↑_κ_1↑_κ_1-1…↑_κ_1+1 =((2,1^e-2),(1^l)),and so the first component of (∅,∅)↑_^n has e-1 non-empty rows.Finally, we successively add the remaining nodes to the first component of [((2,1^e-2),(1^l))], down each column from left to right. Observe that there are n-l-r+1 nodes in [(∅,∅)↑_^n] \{(1,1,1),…,(r-1,1,1)}∪{(1,1,2),…,(l,1,2)}for all r∈{1,…,e-1}. Since there are e-1 non-empty rows in the first component of [((2,1^e-2),(1^l))], there are also e-1 non-empty rows in the first component of μ, and moreover, we observe that there are ⌊n-l-r+e-1e-1⌋ nodes in the rth row of the first component of [(∅,∅)↑_^n]. § LABELLING THE COMPOSITION FACTORS OF S_((N-M),(1^M)) In the preceding paper <cit.>, the composition factors of S_((n-m),(1^m)) were constructed as quotients either of the images or of the kernels of the Specht module homomorphisms given in <ref> — both of which do not depend on the characteristic of 𝔽. Since each of these quotients is isomorphic (up to a grading shift) to a particular head of a Specht module, we now determine the regular bipartitions that label these irreducible ℋ_n^Λ-modules. Recall that l is the residue of κ_2-κ_1 modulo e.§.§ Labelling the composition factors of S_((n-m),(1^m)) with κ_2≢κ_1-1e We fix κ_2≢κ_1-1e throughout this subsection.When n≢l+1e, we recall from <cit.> that S_((n-m),(1^m)) is an irreducible ℋ_n^Λ-module, that is, S_((n-m),(1^m))≅D_λ_m for some regular bipartition λ_m∈ℛ𝒫_n^2. Let κ_2≢κ_1-1e and n≢l+1e. For 0⩽m<n, we defineμ_n,m:=((n-m),(1^m)) if 0⩽m<l+1,((n-m,{m-l-1}),(1^l+1)) if l+1⩽m<n-n/e,(({m-l},n-m-1),(1^l+1)) if n-n/e⩽m<n.In fact, we claim that λ_m=μ_n,m for all m∈{0,…,n-1}.When n≡l+1e and 1⩽m<n, we recall from <cit.> that S_((n-m),(1^m)) has two composition factors, namely (γ_m-1) and (γ_m). Thus (γ_m-1)≅D_λ_m and (γ_m)≅D_μ_m for some regular bipartitions λ_m,μ_m∈ℛ𝒫_n^2. Let κ_2≢κ_1-1e and n≡l+1e. For 0⩽m<n, we defineμ_n,m:=((n-m),(1^m))if 0⩽m<l+1, ((n-m,{m-l-1}),(1^l+1))if l+1⩽m<n-n/e, (({m-l+1},n-m-2),(1^l+1))if n-n/e⩽m⩽n-2, (({n-l}),(1^l))if m=n-1.Notice that μ_n,m-1 and μ_n,m are distinct. We claim that the two labels λ_m,μ_m of the composition factors of S_((n-m),(1^m)) as heads of some Specht modules are, in fact, μ_n,m-1 and μ_n,m, respectively, and hence that the corresponding composition factors are non-isomorphic.We require the following combinatorial result in order to confirm our claims. Let κ_2≢κ_1-1e and 0⩽m<n. 
*If n≡le, thenμ_n,m↑_κ_2-m=μ_n+1,m.*If n≢le, thenμ_n,m↑_κ_2-m=μ_n+1,m+1. *Let 1⩽m<l+1.Observe that ((n-m),(1^m)) has addable (κ_2-m)-node (m+1,1,2), as well as (1,n-m+1,1) if n≡le, and has removable (κ_2-m)-node (1,n-m,1) if n≡l+1e. We note that the addable nodes (2,1,1) and (1,2,2) of ((n-m),(1^m)) cannot have residue κ_2-m since l<e-2.If n≡le then ((n-m),(1^m)) has (κ_2-m)-signature ++, corresponding to the conormal nodes (1,n-m+1,1) and (m+1,1,2). Adding the higher of these conormal nodes, we haveμ_n,m↑_κ_2-m =((n-m),(1^m))↑_κ_2-m =((n-m+1),(1^m)) =μ_n+1,m.Now suppose that n≡l+1e. Then ((n-m),(1^m)) has (κ_2-m)-signature -+, and if n-l≢0,1e then ((n-m),(1^m)) has (κ_2-m)-signature +. The conormal node in each sequence is (m+1,1,2), whereby adding this node gives usμ_n,m↑_κ_2-m =((n-m),(1^m))↑_κ_2-m =((n-m),(1^m+1)) =μ_n+1,m+1.*Let l+1⩽m<n-ne. Observe that ((n-m,{m-l-1}),(1^l+1)) has the following addable and removable (κ_2-m)-nodes * addable node (1,n-m+1,1) if n≡le, * removable node (1,n-m,1) if n≡l+1e, * addable node at the end of the ⌊(m+e-l-2)/(e-1)⌋th column in the first component, * addable node (e+1,1,1) and removable node (l+1,1,2) if m≡le, * addable node (1,2,2) if m≡-1e, * addable node (l+2,1,2) if m≡l+1e. First suppose that n≡le. Then ((n-m,{m-l-1}),(1^l+1)) has (κ_2-m)-signature * +++- if m≡le, * ++++ if m≡-1e and l=e-2, * +++ if m≡-1e and l≠e-2 or m≢-1e and m≡l+1e, * ++ for all other cases.Adding the highest conormal (κ_2-m)-node in these sequences, (1,n-m+1,1), we haveμ_n,m↑_κ_2-m =((n-m,{m-l-1}),(1^l+1))↑_κ_2-m=((n-m+1,{m-l-1}),(1^l+1))=μ_n+1,m.We now suppose that n≢le. If n≡l+1e, then ((n-m,{m-l-1}),(1^l+1)) has (κ_2-m)-signature * -++- if m≡le, * -+++ if m≡-1e and l=e-2, * -++ if m≡-1e and l≠e-2 or m≢-1e and m≡l+1e, * -+ for all other cases.If n-l≢0,1e, then ((n-m,{m-l-1}),(1^l+1)) has (κ_2-m)-signature * ++- if m≡le, * +++ if m≡-1e and l=e-2, * ++ if m≡-1e and l≠e-2 or m≢-1e and m≡l+1e, * + for all other cases.Thus, for n≢le, we observe that the highest conormal (κ_2-m)-node in each (κ_2-m)-signature of ((n-m,{m-l-1}),(1^l+1)) is the addable node lying at the bottom of its ⌊(m+e-l-2)/(e-1)⌋th column in the first component. Adding this node, we haveμ_n,m↑_κ_2-m =((n-m,{m-l-1}),(1^l+1))↑_κ_2-m=((n-m,{m-l}),(1^l+1))=μ_n+1,m+1.*Let m⩾n-ne. Firstly, suppose that n≢l+1e. If n≡le, then we find that (({m-l},n-m-1),(1^l+1)) has (κ_2-m)-signature * +++- if m≡le, * ++++ if m≡-1e and l=e-2, * +++ if m≡-1e and l≠e-2 or m≢-1e and m≡l+1e, * ++ for all other cases.For n-l≢0,1e, (e,n-m,1) is no longer an addable (κ_2-m)-node. Upon discounting this node, we observe that (({m-l},n-m-1),(1^l+1)) has (κ_2-m)-signatures ++-, +++, ++ and + corresponding to the above cases, respectively. Thus, for n≢l+1e, the highest conormal (κ_2-m)-node in each (κ_2-m)-signature of (({m-l},n-m-1),(1^l+1)) is the addable node at the bottom of the ⌊(m+e-l-2)/(e-1)⌋th column in the first component. Henceμ_n,m↑_κ_2-m =(({m-l},n-m-1),(1^l+1))↑_κ_2-m=(({m-l+1},n-m-1),(1^l+1))= μ_n+1,m if n≡le, μ_n+1,m+1 if n-l≢0,1e. Secondly, suppose that n≡l+1e. Then we find that (({m-l+1},n-m-2),(1^l+1)) has (κ_2-m)-signature * -++- if m≡le, * -+++ if m≡-1e and l=e-2, * -++ if m≡-1e and l≠e-2 or m≢-1e and m≡l+1e, * -+ for all other cases.The highest conormal (κ_2-m)-node in each sequence is (e,n-m-1,1), and adding this node we haveμ_n,m↑_κ_2-m =(({m-l+1},n-m-2),(1^l+1))↑_κ_2-m=(({m-l+1},n-m-1),(1^l+1))=μ_n+1,m+1.Suppose that κ_2≢κ_1-1e and n≢l+1e. 
Then S_((n-m),(1^m))≅D_μ_n,m as ungraded ℋ_n^Λ-modules for all m∈{0,…,n-1}.We proceed by induction on m. * Suppose that n-l≢2e. First observe that e_κ_2S_((n-1),(1))≅S_((n-1),∅), and moreover, by applying <Ref> we have ((n-1),∅)↑_κ_2=((n-1),(1)). It thus follows from <ref> that S_((n-1),(1))≅D_((n-1),(1)).Assuming that S_((n-m),(1^m-1))≅D_μ_n-1,m-1 for some m>1, then e_κ_2+1-mS_((n-m),(1^m))≅S_((n-m),(1^m-1))≅D_μ_n-1,m-1.Since S_((n-m),(1^m)) is an irreducible ℋ_n^Λ-module, we can apply <ref> and <Ref> to obtainS_((n-m),(1^m))≅D_μ_n-1,m-1↑_κ_2-m+1=D_μ_n,m.*Suppose that n-l≡2e. We obtain the regular bipartition that labels the irreducible module which S_((n-m),(1^m)) is isomorphic to, up to a grading shift, as follows. We first restrict S_((n-m),(1^m)) to an irreducible ℋ_n-2^Λ-module, say D_μ for some μ∈ℛ𝒫_n-2^2, by removing both the hand and foot node of residue κ_2+1-m modulo e from the hook bipartition ((n-m),(1^m)), and then inducing D_μ to an irreducible ℋ_n^Λ-module by adding the two highest conormal (κ_2+1-m)-nodes to μ. We have e_κ_2^(2)S_((n-1),(1))≅S_((n-2),∅). By both <Ref>, ((n-2),∅)↑_κ_2^2 =((n-1),∅)↑_κ_2 =((n-1),(1)). Hence S_((n-1),(1))≅D_((n-1),(1)) by <ref>.Assuming that S_((n-m-1),(1^m-1))≅D_μ_n-2,m-1 for some m>1, thene_κ_2-m+1^(2)S_((n-m),(1^m))≅S_((n-m-1),(1^m-1))≅D_μ_n-2,m-1.Since S_((n-m),(1^m)) is an irreducible ℋ_n^Λ-module, we apply <ref> to obtainS_((n-m),(1^m))≅D_μ_n-2,m-1↑_κ_2-m+1^2 =D_μ_n-1,m-1↑_κ_2-m+1(by <Ref>)=D_μ_n,m(by <Ref>).We now use this result to give an explicit description of the composition factors of Specht modules labelled by hook bipartitions in the following case. Suppose that κ_2≢κ_1-1e and n≡l+1e. Then the composition factors of S_((n-m),(1^m)) are D_μ_n,m-1 and D_μ_n,m for all m∈{1,…,n-1}. Moreover, D_μ_n,m≅(γ_m) as ungraded ℋ_n^Λ-modules. We obtain the regular bipartitions that label the two composition factors of S_((n-m),(1^m)) as heads of some Specht modules by first restricting this Specht module to an irreducible ℋ_n-1^Λ-module, say D_μ for some μ∈ℛ𝒫_n-1^2, by either 1) removing the foot node of residue κ_2+1-m from ((n-m),(1^m)) or 2) by removing the hand node of residue κ_2-m from ((n-m),(1^m)). We then induce D_μ to an irreducible ℋ_n^Λ-module by adding the highest conormal node of residue κ_2+1-m or κ_2-m, respectively, to μ. *By removing the foot node of [((n-m),(1^m))], we obtaine_κ_2-m+1S_((n-m),(1^m))≅S_((n-m),(1^m-1))≅ D_μ_n-1,m-1(by <ref>).We now observe from <Ref> that μ_n-1,m-1↑_κ_2+1-m=μ_n,m-1. It thus follows that D_μ_n,m-1 is a composition factor of S_((n-m),(1^m)) by <ref>. *By removing the hand node of [((n-m),(1^m))], we obtaine_κ_2-mS_((n-m),(1^m))≅S_((n-m-1),(1^m))≅ D_μ_n-1,m(by <ref>).We now observe from <Ref> that μ_n-1,m↑_κ_2-m=μ_n,m. Then, by <ref>, D_μ_n,m is a composition factor of S_((n-m),(1^m)).Furthermore, we know from <cit.> that im(γ_m-1) and im(γ_m) are in bijection with D_μ_n,m-1 and D_μ_n,m, up to isomorphism and grading shift. We notice that im(γ_m) is a composition factor of both S_((n-m),(1^m)) and S_((n-m-1),(1^m+1)), and hence must be isomorphic to D_μ_n,m, as required.§.§ Labelling the composition factors of S_((n-m),(1^m)) with κ_2≡κ_1-1e We note that κ_2≡κ_1-1e throughout this subsection.For n≢0e and 1⩽m⩽n-1, we recall from <cit.> that S_((n-m),(1^m)) has two composition factors, namely (χ_m) and S_((n-m),(1^m))/(χ_m). Thus (χ_m)≅D_λ_m and S_((n-m),(1^m))/(χ_m)≅D_μ_m for some regular bipartitions λ_m,μ_m∈ℛ𝒫_n^2. Let κ_2≡κ_1-1e and n≢0e. 
For 1⩽m⩽n-1, we defineμ_n,2m :=((n-m,{m}),∅) if 1⩽m<n-n/e,(({m+1},n-1-m),∅) if n-n/e⩽m⩽n-1,μ_n,2m+1 :=((n-m),(1^m)) if 1⩽m<e,((n-m,{m-e}),(2,1^e-2)) if e⩽m<n-n/e,(({m-e+1},n-1-m),(2,1^e-2)) if n-n/e⩽m⩽n-1. Notice that μ_n,2m and μ_n,2m+1 are distinct. We claim that the labels λ_m,μ_m of the two composition factors of S_((n-m),(1^m)) are μ_n,2m and μ_n,2m+1, respectively, and hence that the corresponding composition factors are non-isomorphic.For n≡0e, we recall from <cit.> that S_((n-m),(1^m)) has four composition factors (ϕ_m), (ϕ_m+1), (γ_m)/(ϕ_m) and (γ_m+1)/(ϕ_m+1) for m∈{2,…,n-2}, and that S_((n-1),(1)) and S_((1),(1^n-1)) both have three composition factors. Thus, for m∈{2,…,n-2}, (ϕ_m)≅D_λ_m, (ϕ_m+1)≅D_μ_m, (γ_m)/(ϕ_m)≅D_ν_m and (γ_m+1)/(ϕ_m+1)≅D_η_m for some regular bipartitions λ_m,μ_m,ν_m,η_m∈ℛ𝒫_n^2. Let κ_2≡κ_1-1e and n≡0e. For 2⩽m⩽n-1, we defineμ_n,2m := ((n-m+1,{m-1}),∅) if 2⩽m⩽n-ne, (({m+1},n-m-1),∅) if n-ne<m⩽n-1,μ_n,2m+1 := ((n-m+1),(1^m-1)) if 2⩽m⩽e, ((n-m+1,{m-e-1}),(2,1^e-2)) if e<m⩽n-ne, (({m-e+1},n-m-1),(2,1^e-2)) if n-ne<m⩽n-1.We notice that μ_n,2m, μ_n,2m+1, μ_n,2m+2 and μ_n,2m+2 are distinct bipartitions. For m∈{2,…,n-2}, we claim that the labels λ_m,μ_m,ν_m,η_m of the four composition factors of S_((n-m),(1^m)) are μ_n,2m, μ_n,2m+2, μ_n,2m+1 and μ_n,2m+3, respectively, and hence that the corresponding composition factors are non-isomorphic.To confirm our claims above, we need the following combinatorial result, which is analogous to <ref> and can be proved in a similar manner. Suppose that κ_2≡κ_1-1e. *Let m∈{1,…,n-1}. If n≢0e, then μ_n,2m↑_κ_2-m=μ_n+1,2m+2,μ_n,2m+1↑_κ_2-m=μ_n+1,2m+3. *Let m∈{2,…,n-1}. If n≡0e, then μ_n,2m↑_κ_2+1-m =μ_n+1,2m,μ_n,2m+1↑_κ_2+1-m =μ_n+1,2m+1.Suppose that κ_2≡κ_1-1e and n≢0e. Then the composition factors of S_((n-m),(1^m)) are D_μ_n,2m and D_μ_n,2m+1 for all m∈{1,…,n-1}. Moreover, D_μ_n,2m≅(χ_m) and D_μ_n,2m+1≅S_((n-m),(1^m))/(χ_m) as ungraded ℋ_n^Λ-modules. We first show that D_μ_n,3 is a composition factor of S_((n-1),(1)). We have f_κ_2-1^(2)S_((n-1),(1))≅ S_((n),(1^2)) ifn≡-1e, and f_κ_2-1S_((n-1),(1))≅ S_((n-1),(1^2)) ifn≢-1e.For n≡-1e, D_μ_n+2,5 is a composition factor of S_((n),(1^2)) by downwards induction on n. Hence, by <ref>, D_μ_n+2,5↓_κ_2-1^2 is a composition factor of S_((n-1),(1)). We have((n-1),(1))↑_κ_2-1^2 = μ_n,3↑_κ_2-1^2= μ_n+1,5↑_κ_2-1(<Ref>)=μ_n+2,5(<Ref>)=((n),(1^2)).Its inverse gives us μ_n,3=μ_n+2,5↓_κ_2-1^2, and hence D_μ_n,3 is a composition factor of S_((n-1),(1)).Similarly, for n≢-1e, D_μ_n+1,5 is a composition factor of S_((n-1),(1^2)). Thus, by <ref>, D_μ_n+1,5↓_κ_2-1 is a composition factor of S_((n-1),(1)). Observe that((n-1),(1))↑_κ_2-1 = μ_n,3↑_κ_2-1 = μ_n+1,5 =((n-1),(1^2)) (<Ref>).Its inverse gives us μ_n,3=μ_n+1,5↓_κ_2-1, and hence D_μ_n,3 is a composition factor of S_((n-1),(1)).* Suppose that n-l≢2e. We have e_κ_2+1-mS_((n-m),(1^m))≅S_((n-m),(1^m-1)), and by induction, D_μ_n-1,2m-2 and D_μ_n-1,2m-1 are composition factors of S_((n-m),(1^m-1)). It thus follows from <ref> that D_μ_n-1,2m-2↑_κ_2+1-m and D_μ_n-1,2m-1↑_κ_2+1-m are composition factors of S_((n-m),(1^m)). We observe that μ_n-1,2m-2↑_κ_2+1-m=μ_n,2m by <Ref>, and μ_n-1,2m-1↑_κ_2+1-m=μ_n,2m+1 by <Ref>. Hence D_μ_n,2m and D_μ_n,2m+1 are composition factors of S_((n-m),(1^m)). *Suppose that n-l≡2e. We have e_κ_2+1-m^(2)S_((n-m),(1^m))≅S_((n-m-1),(1^m-1)), and by induction, D_μ_n-2,2m-2 and D_μ_n-2,2m-1 are composition factors of S_((n-m-1),(1^m-1)). 
We observe thatμ_n-2,2m-2↑_κ_2+1-m^2=μ_n-1,2m-2↑_κ_2+1-m(<Ref>)=μ_n,2m(<Ref>).Thus, by <ref>, D_μ_n,2m is a composition factor of S_((n-m),(1^m)).We also observe thatμ_n-2,2m-1↑_κ_2+1-m^2=μ_n-1,2m-1↑_κ_2+1-m(<Ref>)=μ_n,2m+1(<Ref>).Thus, by <ref>, D_μ_n,2m+1 is composition factor of S_((n-m),(1^m)).Furthermore, we know from <cit.> that the composition factors D_μ_n,2m and D_μ_n,2m+1 of S_((n-m),(1^m)) are in bijection with im(χ_m) and S_((n-m),(1^m))/im(χ_m), up to isomorphism and grading shift. By <cit.>, * im(χ_m)=span{v_ | ∈Std((n-m),(1^m)),(1,1,1)=1}; * S_((n-m),(1^m))/im(χ_m)=span{v_ | ∈Std((n-m),(1^m)),(1,1,2)=1}.Now let ,∈Std((n-m),(1^m)) be such that 1 lies in the arm ofand 1 lies in the leg of . Then every tableauhas residue sequence (κ_1,i_2,…,i_n) where i_r∈{0,…,e-1}, and every tableauhas residue sequence (κ_2,j_2,…,j_n) where j_r∈{0,…,e-1}. The only non-empty component of μ_n,2m is its first component, whereas both of the components of μ_n,2m+1 are non-empty. Thus, only the residue sequence of μ_n,2m+1 can begin with residue κ_2, and hence D_μ_n,2m≅im(χ_m), as required. Similarly to the results in <ref>, we use this result to describe the composition factors of Specht modules labelled by hook bipartitions in the following case.Suppose that κ_2≡κ_1-1e, n≡0e and let m∈{1,…,n-1}. Then S_((n-m),(1^m)) has composition factors * S_((n),∅), D_μ_n,4 and D_μ_n,5 if m=1; * D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3 if m∈{2,…,n-2}; * S_(∅,(1^n)), D_μ_n,2n-2 and D_μ_n,2n-1 if m=n-1. Moreover, D_μ_n,2m≅(ϕ_m) and D_μ_n,2m+1≅(γ_m)/(ϕ_m) as ungraded ℋ_n^Λ-modules.* Firstly, by removing the foot node of ((n-1),(1)), we havee_κ_1S_((n-1),(1))≅S_((n-1),∅)≅D_((n-1),∅).The κ_2-signature of ((n-1),∅) is ++, corresponding to the conormal nodes (1,n,1) and (1,1,2). Adding the higher of these nodes, ((n-1),∅)↑_κ_2=((n),∅), and by <ref>, D_((n),∅) is a composition factor of S_((n-1),(1)).Now suppose that 2⩽m⩽n-1. By removing the foot node of ((n-m),(1^m)), we havee_κ_2+1-mS_((n-m),(1^m))≅S_((n-m),(1^m-1)).It follows from <ref> that D_μ_n-1,2m-2 and D_μ_n-1,2m-1 are composition factors of S_((n-m),(1^m-1)). Observe that μ_n-1,2m-2↑_κ_2+1-m=μ_n,2m by <Ref>, and that μ_n-1,2m-1↑_κ_2+1-m=μ_n,2m+1 by <Ref>. Thus, by <ref>, both D_μ_n,2m and D_μ_n,2m+1 are composition factors of S_((n-m),(1^m)). * First suppose that 1⩽m⩽n-2. By removing the hand node of ((n-m),(1^m)), we havee_κ_2-mS_((n-m),(1^m))≅S_((n-m-1),(1^m)).By <ref>, D_μ_n-1,2m and D_μ_n-1,2m+1 are composition factors ofS_((n-m-1),(1^m)). Observe that μ_n-1,2m↑_κ_2-m by <Ref>, and that μ_n-1,2m+1↑_κ_2-m=μ_n,2m+3 by <Ref>. Thus, D_μ_n,2m+2 and D_μ_n,2m+3 are also composition factors of S_((n-m),(1^m)) by <ref>.Secondly, suppose that m=n-1. By removing the hand node of ((1),(1^n-1)), we havee_κ_1S_((1),(1^n-1))≅S_(∅,(1^n-1))≅D_({n-e},(1^e-1))(by <ref>).The κ_1-signature of ({n-e},(1^e-1)) is +++, corresponding to the conormal nodes (1,⌊(n-2)/(e-1)⌋+1,1), (1,2,2) and (e,1,2). Adding the highest of these nodes, we have ({n-e},(1^e-1))↑_κ_1=({n-e+1},(1^e-1)). By <ref>, D_({n-e+1},(1^e-1))≅S_(∅,(1^n)), and hence S_(∅,(1^n)) is a composition factor of S_((1),(1^n-1)) by <ref>.Furthermore, for all m∈{2,…,n-2}, we know from <cit.> that the composition factors D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3 of S_((n-m),(1^m)) are in bijection with im(ϕ_m), im(ϕ_m+1), ker(γ_m)/im(ϕ_m) and ker(γ_m+1)/im(ϕ_m+1), up to isomorphism and grading shift. 
Moreover, im(ϕ_m+1) and ker(γ_m+1)/ im(ϕ_m+1) are composition factors of both S_((n-m),(1^m)) and S_((n-m-1),(1^m+1), and hence are in bijection with D_μ_n,2m+2 and D_μ_n,2m+3, up to isomorphism and grading shift.Let 𝒯=Std((n-m),(1^m)). Then, by <cit.>, we have that * im(ϕ_m+1) ≅span{v_ | ∈𝒯,(1,1,1)=1,(1,n-m,1)=n}; * ker(γ_m+1)/im(ϕ_m+1)≅span{v_ | ∈𝒯,(1,1,2)=1,(1,n-m,1)=n}.It follows, together with <cit.>, that * (1,1,1)=1 if v_ lies in either im(ϕ_m) or im(ϕ_m+1);* (1,1,2)=1 if v_ lies in either ker(γ_m)/im(ϕ_m) or ker(γ_m+1)/im(ϕ_m+1).We now observe that only the first component of μ_n,2m is non-empty, whereas both components of μ_n,2m+1 are non-empty. It follows that 1 can only lie in the leg ofif v_ lies in D_μ_n,2m+1 or D_μ_n,2m+3, and hence D_μ_n,2m≅im(ϕ_m) and D_μ_n,2m+1≅ker(γ_m)/im(ϕ_m), as required.§ UNGRADED DECOMPOSITION NUMBERS CORRESPONDING TO S_((N-M),(1^M)) We remind the reader that we found the characteristic-free composition series of Specht modules labelled by hook bipartitions in terms of the basis vectors of S_((n-m),(1^m)) in <cit.>, and furthermore, in <Ref> we established the regular bipartitions that label these composition factors. We can thus determine the ungraded multiplicities [S_((n-m),(1^m)):D_μ] for all regular bipartitions μ∈ℛ𝒫_n^2. Recall that l≡κ_2-κ_1e.§.§ Case I: κ_2≢κ_1-1e and n≢l+1e We recall from <cit.> that S_((n-m),(1^m)) is irreducible for all m∈{0,…,n}. Moreover, we know from <ref> that S_(∅,(1^n))≅ D_(∅,(1^n))^R and from <ref> that S_((n-m),(1^m))≅D_μ_n,m for all m<n, leading us to the following result. Let κ_2≢κ_1-1e and n≢l+1e. Then the decomposition submatrix (d_((n-m),(1^m)),μ) of ℋ_n^Λ, under a specific ordering on its columns, iscccccccccc(ccccc|ccc) S_((n),∅)1S_((n-1),(1)) 1 0S_((n-2),(1^2))1 0⋮0⋱S_(∅,(1^n))1for all regular bipartitions μ∈ℛ𝒫_n^2.§.§ Case II: κ_2≢κ_1-1e and n≡l+1e We know from <ref> that the composition factors of S_((n-m),(1^m)) are D_μ_n,m-1 and D_μ_n,m for all m∈{1,…,n-1}. Hence D_μ_n,m is a composition factor of both S_((n-m),(1^m)) and S_((n-m-1),(1^m+1)) whenever m∈{1,…,n-2}. We also note that D_μ_n,0=S_((n),∅) and D_μ_n,n-1=D_(∅,(1^n))^R. Furthermore, since the bipartitions μ_n,0,μ_n,1,…,μ_n,n-1 are distinct, the irreducible modules D_μ_n,0,D_μ_n,1,…,D_μ_n,n-1 are non-isomorphic. Let κ_2≢κ_1-1e and n≡κ_2-κ_1+1e.Then the decomposition submatrix (d_((n-m),(1^m)),μ) of ℋ_n^Λ, under a specific ordering on its columns, isccccccccccc(cccccc|ccc)S_((n),∅)1 S_((n-1),(1)) 1 10S_((n-2),(1^2)) 11 S_((n-3),(1^3))1 1 0 ⋮⋱ ⋱S_((1),(1^n-1))0 11 S_(∅,(1^n)) 1 for all regular bipartitions μ∈ℛ𝒫_n^2.§.§ Case III: κ_2≡κ_1-1e and n≢0e We know from <ref> that the composition factors of S_((n-m),(1^m)) are D_μ_n,2m and D_μ_n,2m+1 for all m∈{1,…,n-1}. Furthermore, since the bipartitions ((n),∅), μ_n,2, μ_n,3,…,μ_2n-1, (∅,(1^n))^R are distinct, we know that the irreducible modules S_((n),∅), D_μ_n,2, D_μ_n,3,…,D_μ_n,2n-1, D_(∅,(1^n))^R are non-isomorphic. Let κ_2≡κ_1-1e and n≢0e. 
Then the decomposition submatrix (d_((n-m),(1^m)),μ) of ℋ_n^Λ, under a specific ordering on its columns, isccccccccccccccc(cccccccccc|ccc) S_((n),∅)1S_((n-1),(1)) 1 1 0 S_((n-2),(1^2)) 1 1S_((n-3),(1^3)) 1 10⋮ ⋱ ⋱S_((1),(1^n-1))0 1 1 S_(∅,(1^n)) 1for all regular bipartitions μ∈ℛ𝒫_n^2.§.§ Case IV: κ_2≡κ_1-1e and n≡0e We recall from <ref> that the composition factors of S_((n-m),(1^m)) are: S_((n),∅), D_μ_n,4 and D_μ_n,5 if m=1; D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3 if m∈{2,…,n-2}; D_μ_n,2n-2, D_μ_n,2n-1 and D_(∅,(1^n))^R if m=n-1.*D_((n),∅), D_μ_n,4 and D_μ_n,5 if m=1; *D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3 if m∈{2,…,n-2}; *D_μ_n,2n-2, D_μ_n,2n-1 and D_(∅,(1^n))^R if m=n-1.Thus, for all m∈{1,…,n-2}, D_μ_n,2m+2 and D_μ_n,2m+3 are composition factors of both S_((n-m),(1^m)) and S_((n-m-1),(1^m+1)). Furthermore, since the bipartitions ((n),∅), μ_n,4,…,μ_n,2n-1, (∅,(1^n))^R are distinct, the irreducible modules S_((n),∅), D_μ_n,4,…,D_μ_n,2n-1, D_(∅,(1^n))^R are non-isomorphic. Let κ_2≡κ_1-1e and n≡0e. Then the decomposition submatrix (d_((n-m),(1^m)),μ) of ℋ_n^Λ, under a specific ordering on its columns, iscccccccccccccccccc(cccccccccccc|cccc) S_((n),∅) 1S_((n-1),(1)) 1 1 1 0 S_((n-2),(1^2))1 1 1 1S_((n-3),(1^3))1 1 1 1S_((n-4),(1^4))1 1 1 1 0⋮ ⋱ ⋱ ⋱ ⋱ S_((2),(1^n-2))1 1 1 1S_((1),(1^n-1)) 0 1 1 1 S_(∅,(1^n))1for all regular bipartitions μ∈ℛ𝒫_n^2.Notice that the above decomposition submatrices of ℋ_n^Λ are independent of the characteristic of the ground field, and thus the corresponding adjustment submatrices are trivial. § GRADED DIMENSIONS OF S_((N-M),(1^M)) From now on, we study graded Specht modules labelled by hook bipartitions, using the combinatorial ℤ-grading defined on these ℋ_n^Λ-modules to determine their graded dimensions.We first determine the removable and addable i-nodes of hook bipartitions as follows. Let 1⩽i⩽k. Then ((k-i),(1^i)) has neither an addable nor a removable (κ_2+1-i)-node in the first row of the first component, except in the following cases. *If k≡l+1e, then (1,k-i+1,1) is an addable (κ_2+1-i)-node of ((k-i),(1^i)). * If k≡l+2e and k>i, then (1,k-i,1) is a removable (κ_2+1-i)-node of ((k-i),(1^i)). Let ∈((n-m),(1^m)) be such that (i,1,2)=k. * Suppose that (i,1,2)=l+1+αe for some α∈ℕ∪{0}. Then 1,…,l+αe must lie in the set of nodes{(1,1,2),…,(i-1,1,2)}∪{(1,1,1),…,(1,j,1)}, where j=l+αe-i+1. There are j and i-1 entries strictly smaller than l+αe+1 in the arm and the leg of , respectively. We now observe that (1,j+1,1) = κ_1+j = κ_2-i+1 = (i,1,2) e, and since (i,1,2)>(1,j,1), it follows that (1,j+1,1)=(1,k-i+1,1) is an addable (κ_2+1-i)-node for ((k-i),(1^i)).* Suppose that (i,1,2)=l+k+αe for some α∈ℕ∪{0} such that k∈{2,…,e}, and prove this is in a similar fashion to the first part, treating the cases k=2 and k>2 separately.For any ∈((n-m),(1^m)), we definea_:=#{ i|(i,1,2)≡l+1e}-#{i|(i,1,2)≡l+2e}.We are now able to obtain the degree of an arbitrary standard ((n-m),(1^m))-tableau. Let ∈((n-m),(1^m)) and 1⩽i⩽m<n. Then()=⌊m+e-l-2e⌋ +⌊l+1e⌋ +⌊me⌋+a_.Suppose that (i,1,2)=k for some k∈{i,…,n}, so that _⩽ k is a standard ((k-i),(1^i))-tableau. 
Applying <ref>, we have() =#{i| (i,1,2) has addable (κ_1-1)-node (2,1,1)}+#{i|(i,1,2) has addable (κ_2+1)-node (1,2,2)}+#{i| (i,1,2) has addable (κ_2+1-i)-node in the first row of }-#{i| (i,1,2) has removable (κ_2+1-i)-node in the first row of }= #{i| i≡l+2e,k>i}+#{i| i≡0e}+#{i| k≡l+1e}-#{i| k≡l+2e,k>i}= #{i| i≡l+2e} -#{i| i≡l+2e,k=i}+#{i| i≡0e}+#{i| k≡l+1e}-#{i| k≡l+2e} +#{i| k≡l+2e,k=i}= #{i| i≡l+2e} +#{i| i≡0e} +#{i| k≡l+1e} -#{i| k≡l+2e}= ⌊m+e-l-2e⌋ +⌊l+1e⌋ +⌊me⌋ +#{i| k≡l+1e} -#{i| k≡l+2e}. For any non-empty subset 𝒯⊆((n-m),(1^m)), we define the set A_𝒯:={a_ |∈𝒯}. We now define the maximum degree of 𝒯 to be (𝒯):={() | ∈𝒯} and the minimum degree of 𝒯 to be (𝒯):=min{() | ∈𝒯}. By <ref>, we have * (𝒯)= ⌊m+e-l-2e⌋ +⌊l+1e⌋ +⌊me⌋ +(A_𝒯), * (𝒯)= ⌊m+e-l-2e⌋ +⌊l+1e⌋ +⌊me⌋ +min(A_𝒯). We now seta_n:=#{i|1⩽i⩽n, i≡l+1e},b_n:=#{i|1⩽i⩽n, i≡l+2e},c_n:=#{i|1⩽i⩽n, i-l≢1,2e}. The values of a_n, b_n and c_n in each of the cases given in <Ref> are as follows. * Case I: a_n=b_n=⌊n-l-1/e⌋+1 and c_n=n-2⌊n-l-1/e⌋-2, * Case II: a_n=⌊n-l-1/e⌋ + 1, b_n=⌊n-l-1/e⌋, c_n=n-2⌊n-l-1/e⌋ -1, * Case III: a_n=⌊n/e⌋, b_n=⌊n/e⌋ + 1, c_n=n-2⌊n/e⌋ -1, * Cases IV: a_n=b_n=⌊n/e⌋ and c_n=n-2⌊n/e⌋. Let 𝒯=((n-m),(1^m)) and 1⩽ m<n. * If 1⩽m⩽ne, then max(A_𝒯)=m and min(A_𝒯)=-m.* If ne<m<n-ne, then max(A_𝒯)=a_n and min(A_𝒯)=-b_n.* If n-ne⩽m<n, then max(A_𝒯)=n-m+a_n-b_n and min(A_𝒯)=m-n+a_n-b_n.Let ,∈𝒯 be such that ()=(𝒯) and ()=(𝒯). It follows from <ref> that(respectively, ) is a standard ((n-m),(1^m))-tableau with the maximum (resp., minimum) number of entries congruent to l+1 modulo e, say i_ (resp., i_), together with the minimum (resp., maximum) number of entries congruent to l+2 modulo e, say j_ (resp., j_), which lie in the leg of(resp., ). We then compute ()=i_-j_ and ()=i_-j_. Let 1⩽ m⩽ n and 𝒯=((n-m),(1^m)). Then (S_((n-m),(1^m))) is∑_i=0^(A_𝒯)-min(A_𝒯)( ∑_j=0^(A_𝒯)( a_nm-i+jb_njc_ni-2j) v^( (A_𝒯) -i+⌊m/e⌋+ ⌊m+e-l-2/e⌋ +⌊l+1/e⌋)). Let ∈𝒯. By <ref>, there are at most max(A_𝒯) entries in the leg ofcongruent to l+1 modulo e, and at most min(A_𝒯) entries congruent to l+2 modulo e. Thus, there exists a tableau with degreed_i:=max(A_𝒯)-i+⌊me⌋+ ⌊m+e-l-2e⌋+⌊l+1e⌋for all i∈{0,…,max(A_𝒯)-min(A_𝒯)}, and hence (S_((n-m),(1^m))) has max(A_𝒯)-min(A_𝒯)+1 terms.Suppose thathas degree d_i for some i and that there are j entries congruent to l+2 modulo e in the leg of . These j entries contribute -j to the degree of . Hence, there must be m-i+j entries congruent to l+1 modulo e in the leg of , and the remaining i-2j nodes in the leg ofmust contain entries congruent to neither l+1 modulo e nor l+2 modulo e. Thus, there are a_nm-i+jb_njc_ni-2j standard ((n-m),(1^m))-tableaux with this combination of entries in its leg for some j∈{0,…,⌊i/2⌋}, and summing over j gives the number of standard ((n-m),(1^m))-tableaux with degree d_i.Later on, we will require the explicit leading and trailing terms in the graded dimensions of Specht modules labelled by hook bipartitions as given below. Let 1⩽ m⩽ n and x=⌊m/e⌋+ ⌊m+e-l-2/e⌋ +⌊l+1/e⌋. Then the first and last two terms in the graded dimension of S_((n-m),(1^m)) are displayed in the following table. 
1⩽m⩽n/en/e<m<n-n/en-n/e⩽m<n 0pt4ex 1 term0pt4ex a_nmv^( m+x ) 0pt4exc_nm-av^(a_n+x)0pt4exb_nn-m v^(n-m+a_n-b_n+x) 0pt4ex 2 term 0pt4ex c_na_nm-1v^(m-1+x)0pt4ex(a_nc_nm-a_n+1+b_nc_nm-a_n-1)v^(a_n-1+x) 0pt4exc_nb_nn-m-1 v^(n-m+a_n-b_n-1+x) 0pt4ex 2 last term0pt4ex c_nb_nm-1v^(1-m+x) 0pt4ex( b_nc_nm-b_n+1+ a_nc_nm-b_n-1) v^(1-b_n+x) 0pt4exc_na_nn-m-1 v^(1+m-n+a_n-b_n+x)0pt4ex last term0pt4ex b_nmv^(-m+x)0pt4exc_nm-b_nv^(-b_n+x) 0pt4exa_nn-mv^(m-n+a_n-b_n+x) § GRADED DIMENSIONS OF THE COMPOSITION FACTORS OF S_((N-M),(1^M)) We now study the graded composition factors of Specht modules labelled by hook bipartitions, and determine the leading terms in their graded dimensions. Our results rely on the basis elements that span these irreducible ℋ_n^Λ-modules, which we deduce from the spanning sets of the images and the kernels of certain Specht module homomorphisms given in <cit.>.Recalling from <ref> that irreducible ℋ_n^Λ-modules are self-dual as graded modules, leads us to the following. Let λ∈ℛ𝒫_n^l. Then (D_λ) is symmetric in v and v^-1. Thus, by the symmetry of the graded dimensions of irreducible ℋ_n^Λ-modules, we automatically recover their trailing terms if we know their leading terms. Together with <ref>, the following result is an immediate consequence. Let λ∈𝒫_n^l and 𝒯⊆(λ). Suppose that M is an irreducible ℋ_n^Λ-module with spanning set {v_|∈𝒯} such that M≅ D_μ as ungraded ℋ_n^Λ-modules for some μ∈ℛ𝒫_n^l. Then (D_λ)= v^i ∑_∈𝒯v^ ()∈ℕ∪{0}[v+v^-1], where 2i=- (𝒯)- (𝒯). Moreover, the highest degree in the graded dimension of D_μ is 1/2( (𝒯)- (𝒯)). §.§ Case I: κ_2≢κ_1-1e and n≢l+1e We recall from <ref> that S_((n-m),(1^m)) is irreducible in this case, and moreover, we know that S_((n-m),(1^m))≅D_μ_n,m⟨i⟩ as graded ℋ_n^Λ-modules for some i∈ℤ. Suppose thatκ_2≢κ_1-1e and n≢l+1e, and let 1⩽ m <n. Then the leading term of (D_μ_n,m) is *⌊n-l-1e⌋+1mv^m if 1⩽m⩽ne+1,*n-2(⌊n-l-1e⌋+1)m-⌊n-l-1e⌋-1 v^⌊ne⌋ if ne+1<m<n-ne-1, *⌊n-l-1e⌋+1n-mv^n-m if n-ne-1⩽m < n.Moreover, D_μ_n,m⟨⌊me⌋ +⌊m+e-l-2e⌋⟩≅S_((n-m),(1^m)) as graded ℋ_n^Λ-modules.Since S_((n-m),(1^m)) is irreducible, the coefficients of the leading terms in (D_μ_n,m) and (S_((n-m),(1^m))) are equal, which we know from <ref>.Let 𝒯 = ((n-m),(1^m)). If 1⩽m⩽ne+1, then (𝒯)=m+⌊me⌋+⌊m+e-l-2e⌋ and (𝒯)=-m+⌊me⌋+⌊m+e-l-2e⌋, by <ref>. It thus follows from <ref> that the highest degree in the graded dimension of D_((n-m),(1^m)) is 12( (𝒯)- (𝒯))=m. Similarly, one can deduce the leading degrees in the other two cases.We now determine i∈ℤ such that D_((n-m),(1^m))≅S_((n-m),(1^m))⟨i⟩ as graded ℋ_n^Λ-modules. By above, we also know from <ref> that, for all m∈{1,…,n-1}, i=-12 (𝒯)-12 (𝒯)=-⌊me⌋-⌊m+e-l-2e⌋, as required. Let e=3, κ=(0,1). The following tableaux index the basis vectors of S_((2),(1^2))_1=(34,,1,2)_2=(24,,1,3)_3=(23,,1,4)_4=(14,,2,3)_5=(13,,2,4)_6=(12,,3,4)It is easy to check that (_1)=(_5)=1, (_2)=(_6)=-1 and (_3)=(_4)=0. Hence (S_((2),(1^2)))=2v+2+2v^-1 is symmetric in v and v^-1, and thus S_((2),(1^2))≅ D_μ_4,2 = D_((2),(1^2)) as graded ℋ_n^Λ-modules. §.§ Case II: κ_2≢κ_1-1e and n≡l+1e For m∈{1,…,n-1}, we recall from <ref> that S_((n-m),(1^m)) has graded composition factors D_μ_n,m-1 and D_μ_n,m such that D_μ_n,m-1≅ (γ_m-1)⟨i⟩ and D_μ_n,m≅ (γ_m)⟨j⟩ for some i,j∈ℤ. Suppose that κ_2≢κ_1-1e and n≡l+1e, and let 1⩽ m<n. Then the leading term of (D_μ_n,m) is *⌊n-l-1/e⌋mv^m if 1⩽m⩽n/e, *n-2⌊n-l-1/e⌋-1m-⌊n-l-1/e⌋ v^⌊n-l-1/e⌋ if n/e<m<n-n/e, *⌊n-l-1/e⌋n-m-1v^n-m-1 if n/e⩽m < n.Moreover, D_μ_n,m⟨⌊m+e-l-2e⌋ +⌊me⌋⟩≅ (γ_m) as graded ℋ_n^Λ-modules. 
Let 𝒯={∈((n-m),(1^m))|(1,n-m,1)=n}. Then we know from <cit.> that the set of vectors {v_|∈𝒯} spans (γ_m). By <ref>, we have(D_μ_n,m) =v^i( (γ_m)) =v^i∑_∈𝒯v^ (),where 2i=- (𝒯)- (𝒯), and moreover, we know from <ref> that the coefficients in the leading and trailing terms of the graded dimension of D_μ_n,m are equal. We now recall from <ref> that a_n=⌊n-l-1/e⌋ + 1, b_n=⌊n-l-1/e⌋ and c_n=n-2⌊n-l-1/e⌋ - 1, and suppose that ,∈𝒯 are such that ()=(𝒯) and ()=(𝒯). Then the proof follows similarly to that of <ref>, by applying <ref>. We note that n, which is congruent to l+1 modulo e, lies in the hand node of bothand . * Observe that each node in the leg ofcontains one of the a-1 entries congruent to l+1 modulo e (excluding n), and hence there are a_n-1m standard ((n-m),(1^m))-tableaux with degree (). We now observe that each node in the leg ofcontains one of the b_n entries congruent to l+2 modulo e. Hence (A_𝒯)=m and min(A_𝒯)=-m. * Firstly, the leg ofcontains all of the remaining a_n-1 entries congruent to l+1 modulo e and m-a_n+1 of the c_n entries neither congruent to l+1 modulo e nor congruent to l+2 modulo e. Secondly, the leg ofcontains all of the b_n entries congruent to l+2 modulo e, and m-b_n of the c_n entries congruent to neither l+1 modulo e nor l+2 modulo e. Hence (A_𝒯)=a_n-1 and min(A_𝒯)=-b_n. * Except for the hand nodes ofand , we observe that each node in the arm ofcontains one of the b_n entries congruent to l+2 modulo e, and that every node in the arm ofcontains one of the remaining a-1 entries congruent to l+1 modulo e. Hence (A_𝒯)=n-m-1 and min(A_𝒯)=m-n+1. For all m, we notice that min (A_𝒯)=-max (A_𝒯). Moreover, i=-1/2 (𝒯) -1/2 (𝒯) = -⌊m+e-l-2e⌋-⌊me⌋, as required. Let e=3, κ=(0,0), n=7 and 𝒯={∈((5),(1^2))|(2,1,2)=7}. We know from <cit.> that (γ_1) is spanned by {v_|∈𝒯}. The tableaux lying in 𝒯 are _1=(23456,,1,!7)_2=(13456,,2,!7)_3 =(12456,,3,!7) _4=(12356,,4,!7)_5=(12346,,5,!7)_6=(12345,,6,!7) Let ,∈𝒯 be such that ()= (𝒯) and ()= (𝒯). Then, by <ref>, {1,4}∈(1,1,2) and {2,5}∈(1,1,2). Hence (_1)=(_4)>(_3)=(_6)> (_2)=(_5). One can check that (_1)=3, (_2)=1 and (_3)=2, obtaining ( (γ_1)) =2v^3+2v^2+2v. By <ref>, (γ_1)≅D_μ_7,1=D_((6),(1)) as ungraded ℋ_7^Λ-modules, and thus by shifting the grading of (γ_1), we have (D_((6),(1))) =( (γ_1)⟨-2⟩) =2v+2+2v^-1. §.§ Case III: κ_2≡κ_1-1e and n≢0e We recall from <ref> that S_((n-m),(1^m)) has graded composition factors D_μ_n,2m and D_μ_n,2m+1, for all m∈{1,…,n-1}, such that D_μ_n,2m≅im(χ_m)⟨i⟩ and D_μ_n,2m+1≅(S_((n-m),(1^m))/ (χ_m))⟨j⟩ for some i,j∈ℤ. Suppose that κ_2≡κ_1-1e and n≢0e, and let 1⩽ m<n. *Then the leading term of (D_μ_n,2m) is * ⌊n/e⌋mv^m if 1⩽m⩽n/e, * n-2⌊n/e⌋-1m-⌊n/e⌋ v^⌊n/e⌋ if n/e<m<n-n/e, * ⌊n/e⌋n-m-1 v^(n-m-1) if n-n/e⩽m<n. Moreover, D_μ_n,2m⟨⌊me⌋+⌊m-1e⌋-1⟩≅ (χ_m) as graded ℋ_n^Λ-modules. *Then the leading term of (D_μ_n,2m+1) is * ⌊n/e⌋m-1 v^m-1 if 1⩽m⩽n/e, * n-2⌊n/e⌋-1m-1-⌊n/e⌋ v^⌊n/e⌋ if n/e<m<n-n/e, * ⌊n/e⌋n-m v^n-m if n-n/e⩽m<n. Moreover, D_μ_n,2m+1⟨⌊m-1e⌋+⌊me⌋⟩≅S_((n-m),(1^m))/ (χ_m) as graded ℋ_n^Λ-modules. The proof follows the same structure as that of <ref>, using the spanning sets of (χ_m) and S_((n-m),(1^m))/ (χ_m) determined from <cit.>. In particular, we apply <ref> with λ=μ_n,2m and 𝒯={∈((n-m),(1^m))|(1,1,1)=1} for the first part, and with λ=μ_n,2m+1 and 𝒯={∈((n-m),(1^m))|(1,1,2)=1} for the second part. Let e=3, κ=(0,2), n=5 and 𝒯={∈ ((3),(1^2))|(1,1,1)=1}. By <cit.>, (χ_2) is spanned by {v_|∈𝒯}. 
There are six tableaux in 𝒯, namely _1=(!1!45,,2,3)_2=(!1!35,,2,4)_3=(!1!34,,2,5)_4=(!1!25,,3,4)_5=(!1!24,,3,5)_6=(!1!23,,4,5) One can check from <ref> that (_1)= (_5)=2, (_2)= (_6)=0 and (_3)= (_4)=1, and hence ( (χ_2))=2v^2+2v+2. By <ref>, (χ_2)≅D_μ_5,4=D_((3,1^2),∅) as ungraded ℋ_5^Λ-modules, and by shifting the degree of (χ_2), we have (D_((3,1^2),∅)) =( (χ_2)⟨-1⟩) =2v+2+2v^-1. Let 𝒮={∈ ((3),(1^2))|(1,1,2)=1}. It follows from above that S_((3),(1^2))/ (χ_2) is spanned by {v_|∈𝒮}, and moreover, we know that S_((3),(1^2))/ (χ_2)≅D_μ_5,5=D_((3),(1^2)) as ungraded ℋ_5^Λ-modules by <ref>. We see that 𝒮 contains the following tableaux _1=(345,,!1,!2)_2=(245,,!1,!3)_3=(235,,!1,!4)_4=(234,,!1,!5) One can easily check that (_1)= (_4)=0, (_2)=1 and (_3)=-1. Thus (D_((3),(1^2))) = (S_((3),(1^2))/ (χ_2)) = v+2+v^-1, and S_((3),(1^2))/ (χ_2)≅D_((3),(1^2)) as graded ℋ_5^Λ-modules.§.§ Case IV: κ_2≡κ_1-1e and n≡0e Let 1<m<n. Then we know from <ref> that S_((n-m),(1^m)) has graded composition factors D_μ_n,2m and D_μ_n,2m+1 such that D_μ_n,2m⟨i⟩≅ (ϕ_m) and D_μ_n,2m+1⟨j⟩≅ (γ_m)/ (ϕ_m) for some i,j∈ℤ. Except for the one-dimensional Specht modules, recall from <ref> that S_((n-m),(1^m)) has either three or four composition factors. Hence we not only find the leading terms of (D_μ_n,2m) and (D_μ_n,2m+1), but the second leading terms too. It will become apparent to the reader in <ref> that these extra terms are, in fact, necessary in order to determine the corresponding graded decomposition numbers in this case. Suppose that κ_2≡κ_1-1e and n≡0e, and let 1< m<n. * Then the first two leading terms of (D_μ_n,2m) are * n-e/em-1v^m-1 and (e-2)n/en-e/em-2v^m-2 if 1<m⩽n/e, * (e-2)n/eem-n/ev^n-e/e and n-e/e( (e-2)n/em-n/e+1+ (e-2)n/em-n/e-1)v^n-2e/e if n/e<m⩽n(e-1)/e, * n-e/en-m-1 v^n-m-1 and (e-2)n/en-e/en-m-2 v^n-m-2 if n(e-1)+e/e⩽m< n. Moreover, D_μ_n,2m⟨⌊m-1e⌋+⌊me⌋+2⟩≅ (ϕ_m) as graded ℋ_n^Λ-modules. *Then the first two leading terms of (D_μ_n,2m+1) are * n-e/em-2v^m-2 and (e-2)n/en-e/em-3v^m-3 if 1<m⩽n/e, * (e-2)n/ee(m-1)-n/ev^n-e/e and n-e/e( (e-2)n/eem-n/e + (e-2)n/ee(m-2)-n/e) v^n-2e/e if n/e<m⩽n(e-1)/e, * n-e/en-mv^n-m and (e-2)n/en-e/en-m-1v^n-m-1 if n(e-1)+e/e⩽m< n. Moreover, D_μ_n,2m+1⟨⌊m-1e⌋+⌊me⌋+1⟩≅ (γ_m)/ (ϕ_m) as graded ℋ_n^Λ-modules. We follow the same structure as the proof of <ref>, using the spanning sets of (ϕ_m) and (γ_m)/ (ϕ_m) determined from <cit.>. In particular, we apply <ref> with λ=μ_n,2m and 𝒯={∈ ((n-m),(1^m))|(1,1,1)=1,(m,1,2)=n} for the first part, and with λ=μ_n,2m+1 and 𝒯={∈ ((n-m),(1^m))|(1,1,2)=1,(m,1,2)=n} for the second part. Let e=3, κ=(0,2), n=6 and 𝒯={∈ ((3),(1^3))|(1,1,1)=1,(3,1,2)=6}. By <cit.>, (ϕ_3) is spanned by {v_|∈𝒯}. There are six tableaux in 𝒯, namely _1=(!1!45,,2,3,!6)_2=(!1!35,,2,4,!6)_3=(!1!34,,2,5,!6)_4=(!1!25,,3,4,!6)_5=(!1!24,,3,5,!6)_6=(!1!23,,4,5,!6) One can check that (_1)= (_5)=4, (_2)=(_6)=2 and (_3)= (_4)=3, and hence ((ϕ_3))=2v^4+2v^3+2v^2. We know from <ref> that (ϕ_3)≅D_μ_6,6=D_((4,1^2),∅) as ungraded ℋ_6^Λ-modules. Thus, by shifting the grading on (ϕ_3), we obtain (D_((4,1^2),∅)) =((ϕ_3)⟨ -3⟩)=2v+2+2v^-1. Let 𝒮={∈ ((3),(1^3))|(1,1,2)=1,(3,1,2)=6}. By <cit.>, (γ_3)/ (ϕ_3) is spanned by {v_|∈𝒮}. There are four tableaux in 𝒮, namely _1=(345,,!1,!2,!6)_2=(245,,!1,!3,!6)_3=(235,,!1,!4,!6)_4=(234,,!1,!5,!6) One can check that (_1)= (_4)=2, (_2)=3 and (_3)=1, and hence ( (γ_3)/ (ϕ_3))=v^3+2v^2+v. We know from <ref> that (γ_3)/ (ϕ_3)≅D_μ_6,7=D_((4),(1^2)) as ungraded ℋ_6^Λ-modules. 
By shifting the grading on (γ_3)/ (ϕ_3), we obtain ( D_((4),(1^2))) =( (γ_3)/ (ϕ_3)⟨ -2 ⟩)=v+2+v^-1. § GRADED DECOMPOSITION NUMBERS CORRESPONDING TO S_((N-M),(1^M)) Recall that we determined the ungraded decomposition numbers for ℋ_n^Λ corresponding to Specht modules labelled by hook bipartitions in <Ref>, and then in <Ref> and <Ref>, we determined the graded dimensions of Specht modules labelled by hook bipartitions and of their composition factors, respectively. These findings are equivalent to solving part of the Decomposition Number Problem, corresponding to hook bipartitions, which we now provide an answer to.Recall from <Ref> that the graded decomposition numbers are defined to be the Laurent polynomials [S_λ:D_μ]_v=∑_i∈ℤ[S_λ:D_μ⟨i⟩]v^i for all λ∈𝒫_n^l and for all μ∈ℛ𝒫_n^l.We first determine the grading shifts on the trivial and sign representations to obtain the analogous graded representations. The trivial representation S_((n),∅) is generated by v__((n),∅)) where (_((n),∅)))=0, so that S_((n),∅)=D_((n),∅) as graded ℋ_n^Λ-modules. Hence[S_((n),∅):D_μ]_v = 1if μ=((n),∅),0otherwise.Recall from <Ref> that S_(∅,(1^n))≅D_(∅,(1^n))^R as ungraded ℋ_n^Λ-modules. We now find i∈ℤ such that S_(∅,(1^n))≅D_(∅,(1^n))^R⟨i⟩ as graded ℋ_n^Λ-modules. Let λ=(∅,(1^n))^R. Then[S_(∅,(1^n)):D_λ]_v=v^2⌊n/e⌋ if κ_2≡κ_1-1e,v^(⌊n/e⌋+⌊n-l-1/e⌋+1) if κ_2≢κ_1-1e.Moreover, [S_(∅,(1^n)):D_μ]_v=0 for all other μ∈ℛ𝒫_n^2.We have [S_(∅,(1^n)):D_λ]_v=v^(_(∅,(1^n))) since (D_λ)=1 and (S_(∅,(1^n)))= (_(∅,(1^n))).We now deduce from the proof of <ref> that(_(∅,(1^n)))=⌊ne⌋ +#{i | _(∅,(1^n))(i,1,2)≡l+2e},where (i,1,2)≡κ_1e. In the leg of [(∅,(1^n))], we notice that there are ⌊ne⌋ κ_1-nodes if κ_2≡κ_1-1e and ⌊n-l-1e⌋+1 κ_1-nodes otherwise, and we are done.For all regular bipartitions λ∈ℛ𝒫_n^2, we now establish the graded composition multiplicities [S_((n-m),(1^m)):D_λ]_v of irreducible ℋ_n^Λ-modules D_λ arising as composition factors of S_((n-m),(1^m)), for all m∈{1,…,n-1}, depending on whether κ_2≡κ_1-1e or not and whether n≡l+1e or not.§.§ Case I: κ_2≢κ_1-1e and n≢l+1e Let κ_2≢κ_1-1e and n≢l+1e. We recall from <ref> that S_((n-m),(1^m)) is irreducible and isomorphic to D_μ_n,m as an ungraded ℋ_n^Λ-module for all m∈{1,…,n}. To find the graded multiplicity of D_μ_n,m arising as a composition factor of S_((n-m),(1^m)), it suffices to find the grading shift on D_μ_n,m so that it is isomorphic to S_((n-m),(1^m)) as a graded ℋ_n^Λ-module. Suppose that κ_2≢κ_1-1e and n≢l+1e, and let μ∈ℛ𝒫_n^2. Then, for all m∈{1,…,n-1}, we have[S_((n-m),(1^m):D_μ_n,m]_v= v^(⌊m/e⌋+ ⌊m+e-l-2/e⌋) if μ=μ_n,m, 0 otherwise. We determine i∈ℤ where [S_((n-m),(1^m)):D_μ_n,m]_v=v^i, which is equivalent to finding i∈ℤ such that S_((n-m),(1^m))≅D_μ_n,m⟨i⟩ as graded ℋ_n^Λ-modules. Thus, the result follows from <ref>. Let e=3 and κ=(0,0). Then the decomposition submatrix of ℋ_6^Λ with rows corresponding to Specht modules labelled by hook bipartitions can be written as cccccccccccc(ccccccc|ccc) S_((6),∅) 1 S_((5),(1))10 S_((4),(1^2))v S_((3),(1^3)) v^20 S_((2),(1^4))v^2 S_((1),(1^5))0v^3 S _(∅,(1^6)) v^4 Let e=3 and κ=(0,0). Then the decomposition submatrix of ℋ_8^Λ with rows corresponding to Specht modules labelled by hook bipartitions is cccccccccccccc(ccccccccc|ccc) S_((8),∅) 1 S_((7),(1))10 S_((6),(1^2)) v S_((5),(1^3))v^2 S_((4),(1^4)) v^20 S_((3),(1^5))v^3 S_((2),(1^6)) v^4 S_((1),(1^7))0v^4 S _(∅,(1^8)) v^5 . §.§ Case II: κ_2≢κ_1-1e and n≡l+1e Let κ_2≢κ_1-1e and n≡l+1e. 
Recall from <ref> that S_((n-m),(1^m)) has ungraded composition factors D_μ_n,m-1 and D_μ_n,m for all m∈{1,…,n-1}. We now determine the grading shifts i,j∈ℤ so that D_μ_n,m-1⟨i⟩ and D_μ_n,m⟨j⟩ are graded composition factors of S_((n-m),(1^m)). Let κ_2≢κ_1-1e and n≡l+1e. Then, for all m∈{1,…,n-1}, * [S_((n-m),(1^m)):D_μ_n,m-1]_v=v^(⌊m/e⌋+ ⌊m+e-2-l/e⌋+1), * [S_((n-m),(1^m)):D_μ_n,m]_v=v^(⌊m/e⌋+ ⌊m+e-2-l/e⌋). Moreover, [S_((n-m),(1^m)):D_μ]_v=0 for all other μ∈ℛ𝒫_n^2.We determine x,y∈ℤ such that (S_((n-m),(1^m)))=v^x (D_μ_n,m-1)+v^y (D_μ_n,m). *Let 0⩽m⩽⌊n/e⌋. By <ref>, the leading and trailing terms, respectively, in the graded dimension of S_((n-m),(1^m)) are⌊n-l-1/e⌋+1m v^( m+⌊m/e⌋+ ⌊m+e-2-l/e⌋) and ⌊n-l-1/e⌋m v^( -m+⌊m/e⌋+ ⌊m+e-2-l/e⌋),and by <ref>, the leading terms in the graded dimensions of (γ_m-1) and (γ_m), respectively, are⌊n-l-1/e⌋m-1v^m-1 and ⌊n-l-1/e⌋mv^m.First observe that the graded dimensions of D_μ_n,m and S_((n-m),(1^m)) both have 2m+1 terms, and hence y=⌊m/e⌋+⌊m+e-2-l/e⌋. Thus x-⌊m/e⌋-⌊m+e-2-l/e⌋ equals 0 or 1 since the trailing coefficients in the graded dimensions of D_μ_n,m and S_((n-m),(1^m)) are equal. Now observe that the sum of the leading coefficients in the graded dimensions of D_μ_n,m-1 and D_μ_n,m equals the leading coefficient in the graded dimension of S_((n-m),(1^m)). Hence x=⌊m/e⌋+⌊m+e-2-l/e⌋+1. *Let ⌊n/e⌋<m<n-⌊n/e⌋. By <ref>, the leading and trailing terms in the graded dimension of S_((n-m),(1^m)), respectively, aren-2⌊n-l-1/e⌋-1m-⌊n-l-1/e⌋-1 v^( ⌊n-l-1/e⌋+1+⌊m/e⌋+⌊m+e-2-l/e⌋), n-2⌊n-l-1/e⌋-1m-⌊n-l-1/e⌋ v^( -⌊n-l-1/e⌋+⌊m/e⌋+⌊m+e-2-l/e⌋).By <ref>, the leading terms in the graded dimensions of D_μ_n,m-1 and D_μ_n,m, respectively, aren-2⌊n-l-1/e⌋-1m-⌊n-l-1/e⌋-1 v^⌊n-l-1/e⌋ and n-2⌊n-l-1/e⌋-1m-⌊n-l-1/e⌋ v^⌊n-l-1/e⌋.Observing that the leading coefficients in the graded dimensions of S_((n-m),(1^m)) and D_μ_n,m-1 are equal, we deduce that x=⌊m/e⌋+⌊m+e-2-l/e⌋+1. Similarly, observing that the trailing coefficients in the graded dimensions of S_((n-m),(1^m)) and D_μ_n,m are equal, we deduce that y=⌊m/e⌋+⌊m+e-2-l/e⌋. *Let ⌊n/e⌋⩽m⩽n-1. By <ref>, the leading and trailing terms in the graded dimension of S_((n-m),(1^m)) are⌊n-l-1/e⌋n-m v^( n-m+1+⌊m/e⌋+⌊m+e-2-l/e⌋) and ⌊n-l-1/e⌋+1n-m v^( m-n+1+⌊m/e⌋+⌊m+e-2-l/e⌋),respectively, and by <ref>, the leading terms in the graded dimension of D_μ_n,m-1 and D_μ_n,m, respectively, are⌊n-l-1/e⌋n-m v^( n-m ) and ⌊n-l-1/e⌋n-m-1 v^( n-m-1 ).First observe that the graded dimensions of S_((n-m),(1^m)) and D_μ_n,m-1 both have 2n-2m+1 terms, and hence x=⌊m/e⌋+⌊m+e-2-l/e⌋+1. Thus y-⌊m/e⌋-⌊m+e-2-l/e⌋ equals 0 or 1 since the leading coefficients in the graded dimensions of S_((n-m),(1^m)) and D_μ_n,m-1 are equal. Now observe that the sum of the trailing coefficients in the graded dimensions of D_μ_n,m-1 and D_μ_n,m equals the trailing coefficient in the graded dimension of S_((n-m),(1^m)). Hence, y=⌊m/e⌋+⌊m+e-2-l/e⌋. Let e=3 and κ=(0,0). Then the decomposition submatrix of ℋ_7^Λ with rows corresponding to Specht modules labelled by hook bipartitions can be written as cccccccccccc(ccccccc|ccc) S_((7),∅) 1 S_((6),(1)) v 10 S_((5),(1^2))v^2 v S_((4),(1^3)) v^3 v^2 0 S_((3),(1^4))v^3 v^2 S_((2),(1^5)) v^4 v^3 S_((1),(1^6))0v^5 v^4 S_(∅,(1^7)) v^5 §.§ Case III: κ_2≡κ_1-1e and n≢0e Let κ_2≡κ_1-1e and n≢0e. Recall from <ref> that the ungraded composition factors of S_((n-m),(1^m)) are D_μ_n,2m and D_μ_n,2m+1 for all m∈{1,…,n-1}. 
Hence as graded ℋ_n^Λ-modules, the composition factors of S_((n-m),(1^m)) are D_μ_n,2m⟨i⟩ and D_μ_n,2m+1⟨j⟩ for some integers i and j, which we now determine. Let κ_2≡κ_1-1e and n≢0e. Then, for all m∈{1,…,n-1}, * [S_((n-m),(1^m)):D_μ_n,2m]_v= v^(⌊m/e⌋+ ⌊m+e-1/e⌋), * [S_((n-m),(1^m)):D_μ_n,2m+1]_v= v^(⌊m/e⌋+ ⌊m+e-1/e⌋-1). Moreover, [S_((n-m),(1^m)):D_μ]_v=0 for all other μ∈ℛ𝒫_n^2.Similar to the proof of <ref>: we determine x,y∈ℤ such that (S_((n-m),(1^m)))=v^x(D_μ_n,2m)+v^y(D_μ_n,2m+1) using <ref> and <ref>, for each of the three cases 1⩽m⩽⌊n/e⌋, ⌊n/e⌋<m<n-⌊n/e⌋ and n-⌊n/e⌋⩽ m⩽ n-1. Let e=3 and κ=(0,2). Then the decomposition submatrix of ℋ_7^Λ with rows corresponding to Specht modules labelled by hook bipartitions can be written as ccccccccccccccccccc(cccccccccccccc|ccc) S_((7),∅) 1 S_((6),(1))v 10 S_((5),(1^2))v 1 S_((4),(1^3))v^2 v 0 S_((3),(1^4))v^3 v^2 S_((2),(1^5))v^3 v^2 S_((1),(1^6))0v^4 v^3 S_(∅,(1^7))v^4§.§ Case IV: κ_2≡κ_1-1e and n≡0e Let κ_2≡κ_1-1e and n≡0e. Recall from <ref> that D_μ_n,2m, D_μ_n,2m+2, D_μ_n,2m+1 and D_μ_n,2m+3 are the ungraded composition factors of S_((n-m),(1^m)) for all m∈{2,…,n-2}; S_((n-1),(1)) and S_((1),(1^n)) both have three composition factors. Hence as graded ℋ_n^Λ-modules, S_((n-m),(1^m)) has composition factors D_μ_n,2m⟨i_1⟩, D_μ_n,2m+2⟨i_2⟩, D_μ_n,2m+1⟨i_3⟩ and D_μ_n,2m+3⟨i_4⟩ for some i_1,i_2,i_3,i_4∈ℤ, which we now determine.Firstly, one observes that the graded dimension of D_μ_n,2m equals the graded dimension of D_μ_n,2m+3, under a grading shift, which follows immediately from <ref>. Let κ_2≡κ_1-1e and n≡0e. Then, for all m∈{1,…,n-2}, v^2 [S_((n-m),(1^m)):D_μ_n,2m+3]_v = [S_((n-m),(1^m)):D_μ_n,2m]_v.Suppose that κ_2≡κ_1-1e and n≡0e, and let 1⩽ m<n. Then * [S_((n-m),(1^m)):D_μ_n,2m]_v= v^(⌊m/e⌋ +⌊m+e-1/e⌋+1) for all m∈{1,…,n-1}, * [S_((n-m),(1^m)):D_μ_n,2m+2]_v= v^(⌊m/e⌋ +⌊m+e-1/e⌋) for all m∈{1,…,n-2}, * [S_((n-m),(1^m)):D_μ_n,2m+1]_v= v^(⌊m/e⌋ +⌊m+e-1/e⌋) for all m∈{2,…,n-2}, * [S_((n-m),(1^m)):D_μ_n,2m+3]_v= v^(⌊m/e⌋ +⌊m+e-1/e⌋-1) for all m∈{1,…,n-2}. Moreover, [S_((n-m),(1^m)):D_μ]_v=0 for all other μ∈ℛ𝒫_n^2.*Let m=1. We know from <ref> that S_((n-1),(1)) has three composition factors, namely D_μ_n,2=S_((n),∅), D_μ_n,4 and D_μ_n,5. It follows thus from <ref> that, for some x,y∈ℤ, we havegrdim(S_((n-1),(1))) = v^x (S_((n),∅)) + v^y (D_μ_n,4) + v^x-2(D_μ_n,5).Furthermore, one determines that S_((n),∅)≅⟨ v(n)⟩ and D_μ_n,5≅(γ_2)/(ϕ_2)≅⟨ v(1)⟩ as ungraded ℋ_n^Λ-modules, and thus (S_((n),∅))=(D_μ_n,5)=1. Hence, by <ref> and <ref>, we have(S_((n-1),(1)))= nev^2 + (e-2)nev + ne = v^2x-2 + v^y ( n-eev + (e-2)ne + n-eev^-1).Thus, by equating terms, y=1=x-1. *Let 1<m<n-1. We know from <ref> that S_((n-m),(1^m)) has four composition factors, D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3. Following <ref>, we know that(S_((n-m),(1^m)))=v^x(D_μ_n,2m) +v^y(D_μ_n,2m+1)+v^z(D_μ_n,2m+2) +v^x-2(D_μ_n,2m+3)for some x,y,z∈ℤ, which we now determine.Firstly, let 2⩽m⩽n/e. Let η=⌊m/e⌋+⌊m+e-1/e⌋. Then it follows from <ref> that the leading and trailing terms of (S_((n-m),(1^m))) are as follows. 1 term 2 term 2 last term last term 0pt4exn/em v^(m+η) 0pt4ex (e-2)nen/em-1 v^(m-1+η) 0pt4ex (e-2)nen/em-1 v^(1-m+η) 0pt4ex n/em v^(-m+η) By <ref>, the first two leading terms of (S_((n-m),(1^m))) are n/em v^(m+⌊m/e⌋+⌊m+e-1/e⌋) and (e-2)nen/em-1 v^(m-1+⌊m/e⌋+⌊m+e-1/e⌋),respectively,and the last two trailing terms of (S_((n-m),(1^m))) are (e-2)nen/em-1 v^(1-m+⌊m/e⌋+⌊m+e-1/e⌋) and n/em v^(-m+⌊m/e⌋+⌊m+e-1/e⌋),respectively. 
By <ref>, the first two leading terms in the graded dimensions of D_μ_n,2m, D_μ_n,2m+1, D_μ_n,2m+2 and D_μ_n,2m+3 are presented in the following table. D_μ_n,2m D_μ_n,2m+1 D_μ_n,2m+2 D_μ_n,2m+3 0pt4ex 1 term 0pt4exn-e/em-1v^m-1 0pt4ex n-e/em-2v^m-2 0pt4ex n-e/emv^m 0pt4ex n-e/em-1v^m-1 0pt4ex 2 term 0pt4ex (e-2)n/en-e/em-2v^m-2 0pt4ex (e-2)n/en-e/em-3v^m-3 0pt4ex (e-2)n/en-e/em-1v^m-1 0pt4ex (e-2)n/en-e/em-2v^m-2 By <ref>, the first two leading terms of the graded dimensions of D_μ_n,2m andD_μ_n,2m+2 aren-e/em-1v^m-1, (e-2)n/en-e/em-2v^m-2andn-e/emv^m,(e-2)n/en-e/em-1v^m-1,respectively, and the first two leading terms ofD_μ_n,2m+1 and D_μ_n,2m+3 aren-e/em-2v^m-2,(e-2)n/en-e/em-3v^m-3andn-e/em-1v^m-1, (e-2)n/en-e/em-2v^m-2,respectively. The graded dimensions of S_((n-m),(1^m)) and D_μ_n,2m+2 both have 2m+1 terms, and hence z=⌊m/e⌋+⌊m+e-1/e⌋. Now observe that the graded dimensions of D_μ_n,2m and D_μ_n,2m+3 both have 2m-1 terms, so together with <ref>, x=⌊m/e⌋+⌊m+e-1/e⌋+1.We thus have -2⩽y-⌊m/e⌋-⌊m+e-1/e⌋⩽2, and observe that the sum of the second leading (trailing, respectively) coefficients in the graded dimensions of D_μ_n,2m and D_μ_n,2m+2 (D_μ_n,2m+2 and D_μ_n,2m+3, respectively) form the second leading (trailing, resp.) coefficient in the graded dimension of S_((n-m),(1^m)). Hence y=⌊m/e⌋+⌊m+e-1/e⌋.We similarly find x,y,z using <ref> and <ref> for n/e<m<n(e-1)/e and n(e-1)/e⩽m⩽n-2, respectively. *Let m=n-1. We know from <ref> that S_((1),(1^n-1)) has three composition factors, namely D_(∅,(1^n))^R, D_μ_n,2n-1 and D_μ_n,2n-2. One determines that D_(∅,(1^n))^R≅⟨ v(1,2,…,n-1)⟩ and D_μ_n,2n-2≅(ϕ_n-1)=⟨ v(2,3,…,n)⟩ as ungraded ℋ_n^Λ-modules, and hence (D_(∅,(1^n))^R)=(D_μ_n,2n-2)=1. Moreover, we find that( v(2,3,…,n) ) = 2⌊me⌋+2 =( v(1,2,…,n-1) ) +2,so thatv^2 [S_((1),(1^n-1)):D_(∅,(1^n))^R]_v = [S_((1),(1^n-1)):D_μ_n,2m-2]_v.It thus follows that(S_((1),(1^n-1))) =v^x (D_μ_n,2n-2) + v^y (D_μ_n,2n-1) + v^x-2(D_(∅,(1^n))^R)for some x,y∈ℤ, which we now determine. Applying <ref> and <ref>,(S_((1),(1^n-1))) =ne v^( 1 + ⌊m/e⌋ + ⌊m+e-1/e⌋) + (e-2)ne v^( ⌊m/e⌋ + ⌊m+e-1/e⌋) + ne v^( ⌊m/e⌋ + ⌊m+e-1/e⌋ - 1 )= v^2x-2 + v^y ( n-eev + (e-2)ne + n-eev^-1).Equating terms, we deduce that y=⌊m/e⌋+⌊m+e-1/e⌋=x-1, as required. Let e=3 and κ=(0,2). Then the decomposition submatrix of ℋ_6^Λ with rows corresponding to Specht modules labelled by hook bipartitions can be written as ccccccccccccccc(cccccccccc|ccc) S_((6),∅) 1 S_((5),(1)) v^2 v 10 S_((4),(1^2))v^2 v v 1 S_((3),(1^3))v^3 v^2 v^2 v 0 S_((2),(1^4))v^4 v^3 v^3 v^2 S_((1),(1^5))0v^4 v^3 v^2 S_(∅,(1^6))v^4 This paper was written under the guidance of the author's PhD supervisor, Matthew Fayers, at Queen Mary University of London, and forms part of her PhD thesis. The author would like to thank Dr Fayers for his many helpful comments and ongoing support, as well as Chris Bowman and Liron Speyer for their useful remarks and guidance. The author is also thankful to the referee for their careful reading of the manuscript. plain
http://arxiv.org/abs/1709.09251v2
{ "authors": [ "Louise Sutton" ], "categories": [ "math.RT" ], "primary_category": "math.RT", "published": "20170926202425", "title": "Specht modules labelled by hook bipartitions II" }
firstpage–lastpage 2017 Understanding Infographics through Textual and Visual Tag Prediction Zoya Bylinskii1* Sami Alsheikh1* Spandan Madan2* Adrià Recasens1*Kimberli Zhong1 Hanspeter Pfister2 Fredo Durand1 Aude Oliva1 1 Massachusetts Institute of Technology 2 Harvard University{zoya,alsheikh,recasens,kimberli,fredo,oliva}@mit.edu {spandan_madan,pfister}@seas.harvard.eduAccepted; Received ============================================================================================================================================================================================================================================================================================================================= Recent detections of gravitational waves from merging binary black holes opened new possibilities to study the evolution of massive stars and black hole formation. In particular, stellar evolution models may be constrained on the basis of the differences in the predicted distribution of black hole masses and redshifts. In this work we propose a framework that combines galaxy and stellar evolution models and use it to predict the detection rates of merging binary black holes for various stellar evolution models. We discuss the prospects of constraining the shape of the time delay distribution of merging binaries using just the observed distribution of chirp masses. Finally, we consider a generic model of primordial black hole formation and discuss the possibility of distinguishing it from stellar-origin black holes. binaries, black holes, gravitational waves, galaxies: evolution § INTRODUCTIONThe discovery of the first gravitational-wave (GW) source GW150914, a merger of two black holes (BHs), by Advanced LIGO <cit.> marked the birth of a new astronomical discipline. Analysis performed on the first two Advanced LIGO observing runs (the last part of the second run being conducted in parallel with Advanced Virgo) has so far resulted in the detection of four additional sources, as well as a tentative, lower-significance, candidate event <cit.>. These observations have notably shown for the first time that heavy (≳ 20M_⊙) BHs exist and can form binaries that merge within the age of the Universe. Furthermore, the joint observation of GW170814 by Advanced LIGO and Advanced Virgo demonstrated the added accuracy (a reduction of over an order of magnitude in positional uncertainty) that can be reached with three detectors <cit.>. As the sensitivity of ground-based interferometers increases, future GW observations of merging BH binaries will provide more precise information on their masses, spins and redshifts. Indeed, it is expected that a few tens to a few hundreds of events will be observed within the next several years <cit.>. This wealth of data can of course be used to study the models that describe how BHs form. The leading scenario that has been proposed to explain the formation of stellar-mass (≲ 100M_⊙) BHs relies on the standard evolution channel of massive (≳ 20M_⊙) field stars. After the iron core collapses, a BH can form either after a supernova explosion and the following (partial) fallback or matter and eventual collapse, or a direct collapse of the entire stellar envelope <cit.>. An interesting phenomenon occurs in the mass range of ∼ 130-250M_⊙ (but note the dependence on metallicity and rotation velocity) where the star becomes unstable due to production of electron-positron pairs and undergoes a pair-instability supernova (PISN). 
In this case the star is completely disrupted and no remnant is left <cit.>. While the conditions that lead to, or prevent, a successful supernova explosion are not yet fully understood <cit.>, the evolution of binary massive stars is even less certain. The binary orbit is thought to decay during a common envelope phase <cit.> with a possible contribution from a chemically homogeneous evolution channel <cit.>. A complementary channel for binary BH formation, driven by mergers in dense stellar environments, may become dominant in stellar clusters <cit.>. Other possible scenarios for forming stellar-mass binary BHs include primordial BHs <cit.> and population III remnants <cit.>. The distributions of masses, spins and redshifts of detectable sources in each of these channels are different, which opens the possibility of studying them with upcoming GW observations. However, the number densities of sources also depend on the underlying galaxy evolution model, for example the star formation rate (SFR), which renders model selection rather challenging.Nevertheless, several groups have recently started to explore the full potential of GW observations for stellar evolution modeling, in particular for constraining the parameters of specific models <cit.> as well as model selection <cit.> and direct probing of the BH mass function <cit.>. Notably, the important issue of the properties of galaxies that host binary BH mergers has been discussed by <cit.> and <cit.>.In this article we propose a general framework for the analysis of future GW observations. Our ultimate goal is to be able to constrain a large variety of stellar evolution scenarios which will be embedded in our galaxy evolution model. For the latter we use the model developed in <cit.> and <cit.> <cit.> and implement several stellar evolution models that we wish to compare. We then estimate the number of detections that would be made by LIGO in each case, as well as the mass and redshift distribution of these detectable mergers. To demonstrate the utility of this approach we estimate the precision with which some of the model parameters can be measured with mock observations that we draw from out binary black hole populations. Our semi-analytic approach differs from previous studies in that it will allow us to marginalize over many astrophysical 'nuisance parameters', such as the star formation rate (in particular at high redshifts, where it is poorly constrained), the time to coalescence of binary black holes etc. In other words, we can in principle treat different stellar evolution models within the same galaxy evolution scenario while simultaneously varying also the galaxy evolution parameters.The structure of this paper is as follows. Section <ref> describes our calculation of detection rates of binary BH mergers. Section <ref> details our galaxy evolution model as well as the four stellar evolution models which we implement here and a generic primordial black hole formation scenario. Our results for the mass and redshift distribution of detectable mergers are presented in Section <ref>. We then use our framework to predict the accuracy with which some of the parameters can be measured with future detections in Section <ref>. 
Finally, we discuss future applications of our framework in Section <ref>.§ DETECTION RATESWe start with a model (to be specified below) that provides the total birth rate of BHs per unit observer time per unit comoving volume V and per unit BH mass m:ṅ_ tot/ m =N/ t_ obs V m .We then assume that only a fraction β(m) of these BHs reside in binary systems that coalesce within a Hubble time:ṅ_2/ m(m)=β(m) ṅ_tot/ m .Then the birth rate of binaries with component masses m and m' ≤ m reads:^2 ṅ_ bin/ m m'(m,m') = ṅ_2/ mṅ_2/ m'P(m',m)where the function P(m',m) is normalized so that:∫ṅ_2/ mṅ_2/ m'P(m',m)m' m = 1/2∫ṅ_2/ m m .If the binary merges within a time t_ delay after it has formed, where the latter is given by the normalized probability distribution P_ d(t_ delay): ∫_t_ min^t_ maxP_ d(t_ delay) t_ delay =1,then the number of binaries merging per unit time t_ merge=t+t_ delay is given by:N/ t_ merge m m'=∫^2 ṅ_ bin(t)/ m m'P_ d(t_ merge-t) V/ z z t_ obs .In the last expression, the birth time t and the corresponding redshift z are related by| t/ z|=1/H_0√(Ω_m(1+z)^3+Ω_Λ)(1+z)and t_ obs is the observation time. Since the total observation time is very short compared to cosmological scales (T_ obs∼ 50 days for LIGO O1), the integral over t_ obs is trivial. In order to obtain the number of events detectable by a given instrument, e.g. Advanced LIGO, we need to calculate the signal-to-noise rate (SNR) for each of these events:ρ^2=4∫|h(f)|^2/S_n(f) fwhere h(f) is the GW strain in the observed frequency domain and S_n(f) is the noise power spectral density. Note that the strain is a function of the binary parameters: component masses and spins, redshift, orientation and sky localization. We obtain the number of observed events (defined here as those with ρ>8) by first calculating P(ρ>8|m_1,m_2,z), the probability that a merger of BHs with masses m_1 and m_2 at redshift z is detectable. We average over source orientation and component spins (assuming spins uniform in magnitude and isotropic in direction). It follows that the number of sources detectable after observing for a total time T_ obs is:N_ det/ t_ merge m m'=T_ obs∫^2 ṅ_ bin/ m m'P_ d(t_ merge-t) P(ρ>8|m,m',z_ merge) V/ z z . In this work we assume the following distributions:P(m',m) = constant ,m,m'∈ [M_ min,M_ max]andP_ d(t_ delay)∝ t^-γ_ delay,t∈ [t_ min,t_ max]with t_ min=50 Myr and t_ max=t_H, where t_H is the Hubble time <cit.>. The specific form of the function P(m',m) was adopted here for simplicity, other choices will be explored in future work. Furthermore we assume that the fraction of BHs that are in binaries and that merge within a Hubble time is β and does not depend on mass. We take γ=1 <cit.> and β=0.01 as fiducial valuesand explore the possibilities of constraining them with LIGO observations in Section <ref>.In order to calculate the SNR from eq. (<ref>) we use the PhenomB inspiral-merger-ringdown waveforms <cit.> and the noise power spectral density from <cit.>.In order to compare our model predictions to observational data we present below the detection rate in the primary mass-secondary mass plane, in units of M_⊙^-2yr^-1:R_ det(m,m') = 1/T_ obs∫ N_ det/ t_ merge m m' t_ merge .In the next Section we discuss the astrophysical models that provide the birth rate of binary BHs. § ASTROPHYSICAL MODELS §.§ Galaxy evolution There are two astrophysical terms in eq. (<ref>): the birth rate of binaries ṅ/ m m' and the probability to merge after a time delay t_ delay given by P(t_ delay). 
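To make the two ingredients just mentioned concrete, the following Python sketch convolves a toy comoving birth-rate history with the normalized delay-time distribution P_d(t_delay) ∝ t^-γ on [t_min, t_max] and applies a constant detection probability. It is only a minimal sketch under stated assumptions: the Gaussian birth-rate history, the flat time grid and the constant `p_detect` are placeholders for the full galaxy-evolution model and for the mass- and redshift-dependent P(ρ>8|m,m',z) computed from inspiral-merger-ringdown waveforms, and the mass distribution P(m',m) and the binary fraction β are not included.

```python
import numpy as np

# --- placeholder inputs standing in for the full model --------------------
# Toy comoving birth-rate history of BH binaries [arbitrary units], peaking
# near t ~ 3.5 Gyr (roughly z ~ 2), purely illustrative.
t_grid = np.linspace(50.0, 13800.0, 2000)             # cosmic time [Myr]
birth_rate = np.exp(-0.5 * ((t_grid - 3500.0) / 1500.0) ** 2)

gamma = 1.0                                            # slope of P_d ~ t^-gamma
t_min, t_max = 50.0, 13800.0                           # delay-time range [Myr]

def delay_pdf(t_delay, gamma=gamma, t_min=t_min, t_max=t_max):
    """Normalized delay-time distribution P_d(t_delay) on [t_min, t_max]."""
    t_delay = np.asarray(t_delay, dtype=float)
    if np.isclose(gamma, 1.0):
        norm = np.log(t_max / t_min)
    else:
        norm = (t_max ** (1.0 - gamma) - t_min ** (1.0 - gamma)) / (1.0 - gamma)
    inside = (t_delay >= t_min) & (t_delay <= t_max)
    safe_t = np.where(inside, t_delay, t_min)          # avoid invalid powers outside range
    return np.where(inside, safe_t ** (-gamma) / norm, 0.0)

def merger_rate(t_merge):
    """Merger rate at time t_merge: birth rate convolved with the delay-time
    distribution (trapezoidal integration over birth times)."""
    integrand = birth_rate * delay_pdf(t_merge - t_grid)
    return np.trapz(integrand, t_grid)

# Constant stand-in for the SNR-threshold probability P(rho > 8 | m, m', z).
p_detect = 0.05

t_today = 13800.0
print("merger rate today (arb. units):", merger_rate(t_today))
print("with detection cut applied    :", p_detect * merger_rate(t_today))
```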
Some of the current stellar evolution models can predict the birth rate of binaries with a certain set of orbital parameters, from which the merging time due to emission of GW can be calculated <cit.>. Other models provide only the birth rate ṅ/ m m' and have to rely on some distribution of merging times P(t_ merge). Moreover, most astrophysical models utilize some distribution of the component masses of the stellar binary as an input. It should also be kept in mind that the birth rate of BHs follows from the formation rate of their progenitor massive stars and so depends on the global star formation rate and the stellar initial mass function, as well as stellar metallicity and local density (for example, multiple mergers can occur in dense stellar environments). Therefore, the stellar evolution model that we wish to test needs to be embedded in a galaxy evolution framework, either (semi-)analytical or numerical.In this work we rely on the semi-analytic approach developed in <cit.> and <cit.>, which is based on the galaxy evolution model in <cit.> and <cit.>. To sum up, our model takes as an input the structure formation history (computed with the Press-Schechter semi-analytic approach), the star formation rate (SFR) history, the initial mass function and stellar yields. Another crucial input is the relation between initial stellar mass and metallicity and the remnant (neutron star or black hole) mass. The latter component is taken from detailed stellar evolution models that we want to test, as described below. The output of our model is the evolution of the chemical composition of the interstellar and circumgalactic media and the number densities of black holes and neutron stars, as well as other astrophysical quantities, i.e. gas fraction and the optical depth to reionization, used to calibrate the model. We assume the Salpeter stellar initial mass function <cit.> in the mass range 0.1-100M_⊙ and calibrate our SFR to the observations compiled by <cit.>, complemented by those by <cit.> and <cit.>, as described in <cit.>. We use the metal yields from <cit.> for all of our models. Further discussion on the constraints on metallicity evolution and SFR, as well as a more detailed model description, can be found in <cit.>. §.§ Stellar evolution and initial mass-remnant mass relation In order to relate the initial stellar mass to the remnant mass we used four stellar evolution models: (1) the Fryer model, based on the delayed model in <cit.>; (2) the WWp model, based on <cit.>; and (3)-(4), two models from <cit.> with and without stellar rotation, which we name Limongi300 and Limongi, respectively. All of these models provide the remnant mass as a function of initial stellar mass and metallicity. Since we use<cit.> to calculate stellar yields in all of these cases, the WWp model is the most consistent choice. Note, however, that it is based on rather old 'piston' pre-collapse stellar models and assumes a constant explosion energy. Recent studies suggested that the explosion is powered by neutrinos stored behind the shock <cit.>. In this picture the explosion energy depends on neutrino heat transport mechanisms, the nature of the hydrodynamic instabilities that convert neutrino thermal energy to kinetic energy that can power the supernova <cit.>, and the resulting time delay between shock bounce and explosion. <cit.> provide an analytic model for the latter and calculate the explosion energy, as well as the remnant mass, using numerical pre-collapse stellar models from <cit.>. 
Here we use the delayed model from <cit.> as a representative case. <cit.> presents a different set of models, including the cases of rotating stars. These models differ from the ones in <cit.> in two aspects. First, <cit.> uses a different set of pre-collapse stellar models which vary from <cit.>in their treatment of convection, mass-loss rate and angular momentum transport. For example, the metallicity dependence of the mass-loss rate used in <cit.> is Ṁ∝ Z^0.5, where Z is the metallicity <cit.>, whereas <cit.> use the steeper relation obtained in <cit.>: Ṁ∝ Z^0.85. Second, <cit.> assumed a constant explosion energy in the calculation of the remnant mass, similar to the approach of <cit.> and contrary to <cit.>. As we will see below, these differences amount to significant discrepancies in the mass distribution of detectable BHs among the Fryer and Limongi models.Finally, the Limongi300 model allows us to test the effect of rotation of the distribution of remnant masses. Rotation affects the evolutionary path of a massive star by lowering the effective gravity and inducing rotation-driven mixing. According to the results of <cit.>, the main effect of rotation on the resulting BH mass is to reduce the minimal mass required for the PISN stage therefore limiting the maximal BH mass. In order to test this model we assumed that all the stars rotate at 300 km/sec (rather than using a distribution of velocities).The initial mass-remnant mass relation for these models is shown in Figure <ref>. There is a clear 'mass hierarchy' among the models, with the exception of Limongi300 which exhibits a cutoff at M_ star∼ 70M_⊙. This is the result of the fact that rotating stars enter the pair-instability regime at lower masses than non-rotating stars, as can be seen in Figs. 24g and 24i in <cit.>. Note also the nearly vertical relationship obtained in the Fryer model around M_ star∼ 30. This is the result of the prescription for stellar winds adopted in this model (see their Eq. (7) and Fig. 4). We will see below that this feature creates an imprint on the observed BH mass distribution. Note also that in all the cases the BH masses are higher at low metallicity, as expected in view of the reduced stellar winds. §.§ Primordial black holesAs first suggested by <cit.>, BHs can form during the radiation- or matter-dominated era from large primordial curvature perturbation generated by inflation. Interestingly, this mechanism can in principle form BHs with masses ranging from the Planck mass (10^-5 g) to ∼ 10^5 M_⊙, depending on their formation epoch, although BHs lighter than ∼ 10^15 g would have evaporated by the present epoch <cit.>. Depending on their mass, PBHs may leave observable traces that can be used to study models of the early Universe. In addition, PBHs are compelling dark matter candidates, and while a large variety of observations provide stringent constraints on the cosmological density of PBHs, certain mass ranges are still not excluded <cit.>.While the mechanism of PBH formation has been extensively studied in the context of inflationary models, the formation of binary PBHs and their merger rates has received little attention until the first discovery of GW from merging ∼ 30M_⊙ BHs, which raised the possibility that this was also the first detection of PBHs. Two possible mechanisms of binary formation were proposed by <cit.> and <cit.>, respectively. 
In the former scenario, PBHs constitute a significant fraction (up to ∼ 100%) of dark matter, and form binaries at late epochs (z= 0) in dense galactic environments. In the latter model, on the other hand, binaries form at early epochs via 3-body interactions. Note that these two scenarios result in very different (several orders of magnitude, depending on the PBH density) observable merger rates.In view of the uncertainties in both these scenarios it may be useful to consider a more general phenomenological description, where PBH binaries can form (and merge) at any given epoch, which we provide in what follows.Let us assume that all (single) PBHs were formed at the epoch of matter-radiation equality z_ eq with a power-law mass function:n/ m∝ m^-αnormalized so that they account for a fraction q of the total dark matter density:ρ_DM=1/q∫_M_ min^M_ max n/ mm m .Below we will consider the case with M_ min=10M_⊙, M_ max=1000M_⊙, q=0.01 <cit.> and α=2 (these values are chosen here for an illustrative purpose).We then assume that a fraction Γ_ PBH of these PBHs forms binaries per unit observer time:n_2/ m t_ obs(m,t )=Γ_ PBH(t) n/ m(m ) .We take the comoving number density of PBHs to be constant in time, by implicitly assuming that their merger rate is sufficiently small. Then the comoving formation rate of binary PBHs is given by Eq. (<ref>), where we assume P(m',m)= const. for m,m'∈ [M_ min,M_ max].To obtain the number density of mergers we assume the following probability to merge with a delay t_ delay <cit.>:P_ d∝ 1/t_ delay. Then the merger rate per unit time is given by Eq. (<ref>) whereas the number of detectable sources can be calculated using Eq. (<ref>). Note that we still need to specify Γ_ PBH, the binary formation rate. For example, the mechanism proposed by <cit.> corresponds to Γ_ PBH∝δ(t_ eq) where δ is the Dirac distribution, whereas in the scenario of <cit.> binary formation occurs predominantly at lower redshift, after halo collapse. §.§ Merger rate calculationIn order to evaluate the total number of observed events from Eq. (<ref>) we construct lightcones up to z=15 and calculate the mean expected number of events ⟨ N_bin⟩(t_ merge,m,m') in bins of primary and secondary masses m,m', volume shells dV/dzdz/dtΔ t (where Δ t=250 Myr) and merging times Δ t_ merge=250 Myr. Finally, we sum over all birth times t_ birth to obtain the distribution of sources in the mass-redshift space. We can also calculate the observed merger rate density from the number of actual LIGO detections and assuming a specific astrophysical model. For this purpose we use the procedure outlined in <cit.> (see their Appendix C). Namely, if Λ is the number of LIGO triggers of astrophysical origin, then it is related to the merger rate density through:Λ=R⟨ VT_ obs⟩where ⟨ VT_ obs⟩ is the population-averaged sensitive space-time volume of search (Eq. C3 in <cit.>):⟨ VT_ obs⟩=T_ obs∫ zθ V_c/ z1/1+zs(θ)f(z,θ) ,s(θ) is the normalized distribution function of the BH population with respect to the parameters θ (for example mass) and f(z,θ) is a selection function that gives the probability of detecting a source with parameters θ at redshift z (in our case, this is the probability that a given source has SNR ρ>8). We stress that since s(θ) is a normalized distribution, our choice of β, the fraction of BHs that are in binaries and that merge within a Hubble time (see Eq. (<ref>)) does not influence our results.Note that the deduced merger rate density depends on the astrophysical model assumed for the analysis. 
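The model dependence of the inferred rate can be made explicit with a short numerical sketch of the population-averaged ⟨VT⟩ estimate written above. Everything in the sketch is schematic: the comoving-volume element, the normalized source distribution s(θ) over a single parameter (here the primary mass) and the selection function f(z,θ) are simple stand-ins rather than the quantities used in the actual analysis, and the two example populations are hypothetical.

```python
import numpy as np

T_obs = 51.5 / 365.25                      # O1 coincident analysis time [yr]
z = np.linspace(0.0, 1.0, 500)             # redshift grid

# Placeholder comoving-volume element dV_c/dz [Gpc^3]; schematic shape only.
dVc_dz = 30.0 * z ** 2 / (1.0 + 0.5 * z) ** 3

m = np.linspace(5.0, 50.0, 200)            # primary BH mass [M_sun]

def vt(s_of_m, f_of_zm):
    """<VT> = T_obs * int dz dm  dV_c/dz * 1/(1+z) * s(m) * f(z, m)."""
    s = s_of_m(m)
    s = s / np.trapz(s, m)                                 # s(theta) is normalized
    f = f_of_zm(z[:, None], m[None, :])                    # detection probability
    integrand = dVc_dz[:, None] / (1.0 + z[:, None]) * s[None, :] * f
    return T_obs * np.trapz(np.trapz(integrand, m, axis=1), z)

# Two hypothetical populations: a steep power law and one clustered near 30 Msun.
pop_steep = lambda m: m ** -2.35
pop_heavy = lambda m: np.exp(-0.5 * ((m - 30.0) / 5.0) ** 2)

# Schematic selection function: heavier binaries are detectable to larger z.
f_sel = lambda zz, mm: np.clip(1.0 - zz / (0.1 + 0.01 * mm), 0.0, 1.0)

n_triggers = 3.0                            # number of astrophysical triggers, Lambda
for name, pop in [("steep", pop_steep), ("heavy", pop_heavy)]:
    VT = vt(pop, f_sel)
    print(f"{name:6s}: <VT> = {VT:8.3f} Gpc^3 yr,  R = Lambda/<VT> = {n_triggers / VT:8.2f} Gpc^-3 yr^-1")
```

As the sketch shows, the same number of triggers Λ translates into different rate densities R once the assumed population s(θ) changes, which is the point made in the text.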
For example, if we used an astrophysical model that predicts a negligible relative number of ∼ 30M_⊙ BHs, LIGO detections would imply a high total merger rate to allow for the detected ∼ 30M_⊙ events. Conversely, assuming a model that produces an over-abundance of ∼ 30M_⊙ BHs would result in a low overall merger rate.§ DETECTION RATES OF BINARY BH MERGERS §.§ Stellar-origin BHs The simplest way to compare between the four stellar evolution models discussed above is to calculate the detection rate of binary BH mergers that is implied from the detections made during the Advanced LIGO observing runs, as outlined in Section <ref>. Specifically, we calculate the rate based on the O1 observing run. As can be seen in Table <ref>, these range from 15 to 59 Gpc^-3yr^-1 and are in all the cases smaller than the one obtained in <cit.> (97^+135_-67 Gpc^-3yr^-1 for their power-law model). Several factors could contribute to this discrepancy. First, <cit.> assume that the sources are distributed uniformly in comoving volume, whereas our model predicts a specific redshift evolution that peaks at z∼ 2 <cit.>. Therefore our model predicts lower relative numbers of low-redshift sources. Second, the BH mass function in our models differs from the one in <cit.> because here we examine various initial mass-remnant mass relations, as discussed above. Finally, we note that our treatment of the selection function f(θ) is oversimplified with respect to the analysis of <cit.>. In view of the uncertainty in the astrophysical model, it is also unclear which of these interpretations is correct, but it is important to keep in mind that the merger rates computed from the observed number of events are model-dependent. These results may be important for predicting the level of the expected stochastic background of GW <cit.>, although we note that the observational uncertainties, due to the small number of events, are still more significant that the modeling uncertainty. In Figure <ref> we plot the contours of constant detection rates per unit mass squared (in units of events per yr per M_⊙^2) for each of our models in the M_1-M_2 plane, where M_1 and M_2 are the primary and secondary BH masses, respectively (see Eq. (<ref>)). We also show the events detected by LIGO by black points with error bars. For example, comparing the first LIGO detection GW150914 with our models, we see that the Fryer model predicts ∼ 0.16 such detections per year per M_⊙^2 which, taking into account the error bars on the observed masses and the O1 coincident analysis time of 51.5 days, gives ∼ 1 expected detections in this model. The same calculation applies to the Limongi models, but the WWp case clearly produces too few BHs above ∼ 25M_⊙. It is important to mention that these results depend on our model parameter β (the number of BHs that are in binaries and that merge within a Hubble time). However we stress that the relative mass distribution is not affected by our choice of β as long as it is taken to be a constant. Our value β=0.01 was chosen to roughly correspond to most of the models considered here. The only exception is WWp which cannot be accommodated even with the maximal (and unrealistic) value of β=1. However, the most interesting (and robust) conclusion from our calculation is that the models discussed here present various specific features in their mass distributions of detectable BHs. For example, the WWp and the Limongi300 models produce negligible number densities of BHs with masses above ∼ 25M_⊙ and ∼ 45M_⊙, respectively. 
This means that these models can be excluded even with a very small number of detections of 'heavy' sources. The case of the Fryer and Limongi models is even more striking: while they produce nearly identical total numbers of detectable mergers, the mass distribution of these events is quite different. Specifically, in the Fryer model the detectable binaries tend to cluster around ∼ 20-30M_⊙. This feature of the Fryer model can be traced back to the fact that in this description more massive stars experience stronger winds in such a way as to create an accumulation of BH masses at ∼ 20-30M_⊙, as can be seen from Fig. 4 and eq. (7) in <cit.> and Figure <ref> above. On the other hand, the mass distribution of detectable sources in the Limongi model is predicted to be almost flat, with the exception of a small 'island' at M∼ 20-40M_⊙, possibly because in this model the reduced number densities of more massive BHs are roughly compensated by the fact that they are easier to detect. In this case, a large number of detections of ∼ 10M_⊙ sources will probably exclude the Fryer model while favouring the Limongi model.With only 5+1 detected events, we clearly cannot rule out any of these models, but it may be possible when the number of detections increases. We can then study their mass distribution looking for specific features: do the sources cluster around specific mass values? Is there a mass cutoff? In particular, it might be interesting to estimate the number of detections necessary to rule out specific models, and we plan to do it in an upcoming paper. A possible caveat to this approach is that several channels for BH formation (i.e. primordial BHs, PopIII remnants, dynamical formation) may co-exist, rendering the distribution even more complex. We note that in our approach, the galaxy evolution processes, including the SFR and the metallicity evolution are the same for all the models, and the differences in the resulting distribution of BH masses can be directly attributed to differences in the stellar evolution model. On the other hand, our framework gives us the ability to marginalize over the unknown astrophysical parameters. In addition to the distribution of detectable sources in the M_1-M_2 plane we can calculate their redshifts. Fig. <ref> shows the contours of constant detection rates R'_ det in the M_c-z plane, where the chirp mass is M_c=(M_1M_2)^3/5/(M_1+M_2)^1/5 andR'_ det(M_c,z)=∫ R_ det(M_1,M_2,t)δ(M_c(M_1,M_2)-M_c)M_1M_2| t/ z| . Combining these predictions with those from Fig. <ref> will result in even tighter constraints. As in the case of the M_1-M_2 plane, the Fryer and Limongi models seems to provide a better correspondence to LIGO detections. Finally, we show the full 3D distribution in the primary mass - secondary mass - redshift plane on Figure <ref> for the Limongi model. As can be seen, most of the contribution comes from z∼ 0.2-0.3.§.§ Primordial BHsWe consider a generic PBH model with Γ_ PBH=10^-2 Gyr^-1 for z<2 in eq. (<ref>), mass range M_ min=10M_⊙, M_ max=1000M_⊙, fraction of PBHs as dark matter Ω_ PBH/Ω_ DM=q=0.01 and slope of their mass function α=0.5. These values were chosen to demonstrate the potential differences of this population from stellar-origin BHs, in particular, we chose a very shallow mass distribution to give more weight for high-mass BHs that can form in this scenario. The detection rates that would be obtained by LIGO in this case are shown in Figure <ref>. 
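For the generic PBH model it may help to spell out how the mass function is tied to the dark-matter budget. The minimal sketch below normalizes dn/dm ∝ m^-α between M_min and M_max so that the PBHs carry a fraction q of the dark-matter mass density, using the parameter values just quoted; the numerical value adopted for ρ_DM is only a placeholder, and the printed fraction of PBHs heavier than 60 M_⊙ simply illustrates the weight that such a shallow slope puts in the pair-instability mass gap discussed next.

```python
import numpy as np

M_min, M_max = 10.0, 1000.0     # PBH mass range [M_sun]
alpha = 0.5                     # slope of dn/dm ~ m^-alpha (shallow case above)
q = 0.01                        # fraction of dark matter in PBHs

# Placeholder mean comoving dark-matter density [M_sun / Mpc^3]; illustrative only.
rho_dm = 3.3e10

m = np.logspace(np.log10(M_min), np.log10(M_max), 2000)
shape = m ** -alpha                                        # un-normalized dn/dm

# Fix the amplitude A so that  int m * A * m^-alpha dm = q * rho_dm.
A = q * rho_dm / np.trapz(m * shape, m)
dn_dm = A * shape

n_tot = np.trapz(dn_dm, m)                                 # PBH number density
heavy = m > 60.0
frac_heavy = np.trapz(dn_dm[heavy], m[heavy]) / n_tot

print(f"PBH number density: {n_tot:.3e} Mpc^-3")
print(f"fraction of PBHs with m > 60 Msun: {frac_heavy:.2f}")
```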
The shallow slope of the PBH mass distribution results in a relatively high detection rate of BHs in the PISN mass gap M≳ 60M_⊙, in particular a peak around ∼ 100M_⊙. It seems tempting to suggest that even a single detection of such BH masses would provide a strong hint towards a primordial origin, although more detailed studies are needed in order to exclude other formation scenarios such as the dynamical formation channel. Further work is needed to relate the phenomenological model described here to detailed PBH binary formation scenarios.§ PARAMETER ESTIMATIONIn this Section we demonstrate the potential power of our approach by estimating some of the model parameters with maximal likelihood analysis. The number of detections required to estimate model parameters was discussed in previous studies <cit.> and it was shown that in general between a few hundreds to a thousand detections will be needed. The main difficulty in treating this issue is the choice of parameters which vary among different models.In the approach developed in this paper some of parameters are common among the models, which facilitates model comparison. In the following we focus on two parameters: β, the fraction of BHs that are in binaries that merge within a Hubble time, and γ, the power-law of the merger delay time distribution (see Eq. (<ref>)). It is useful to vary also other model parameters, especially the ill-constrained shape of the mass distribution (Eq. (<ref>)), and possibly the parameters of the galaxy evolution model, such as the SFR and the IMF. In view of the large number of parameters, a full analysis necessitates a Monte Carlo Markov chain approach, which we leave to a follow-up study. The accuracy of our analysis depends crucially on the measurement precision. Here we choose to focus on the chirp mass, which was measured to very good precision in O1 and O2 LIGO/Virgo runs. In the future other observables can be included, such as the individual BH masses and redshifts. When testing a given model we calculate the detection rate per unit time and per unit chirp mass assuming some fiducial values of β and γ, as outlined above, which gives us the mean expected number of detections N_ det/ M_c made during a given observation time T_ obs. We then choose T_ obs that corresponds to N_ tot=100 and N_ tot=500 total detectable events (black and red curves on Fig. <ref>, respectively). We bin our results in mass bins of width 3M_⊙ to obtain Δ N_ det and ignore the errors on the masses (that is, we assume that the errors are much smaller than the width of each bin, and ignore the cross-correlations between the bins). For each bin we draw a number from a Poisson distribution with mean Δ N_ det to obtain the mock observations. We then compare these mock observations to the mean expected number of detections for the model and the set of parameters we wish to test. In particular, we assume flat priors on γ and Log(β) and perform a maximum likelihood analysis.To generate our mock observations we use the fiducial values β=0.01 and γ=1 and either the Fryer or Limongi model. As can be seen in Fig. <ref>, β and γ are highly degenerate. This is to be expected: in the case of a steep merger time delay distribution (large γ) most of the BH binaries merge immediately after formation, close to the peak of star formation around z∼ 2 and are thus not observed. 
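The mock-data comparison described above amounts to a binned Poisson likelihood over the chirp-mass histogram. The following sketch draws mock counts from a fiducial expectation and evaluates the log-likelihood on a grid of (β, γ); the function returning the expected counts is a toy stand-in for the full population calculation (it only mimics the linear scaling with β and a mild suppression for steeper delay-time distributions, with a fixed chirp-mass shape), so the resulting likelihood surface is purely illustrative of the β–γ degeneracy rather than a reproduction of the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)

mc_bins = np.arange(5.0, 41.0, 3.0)              # chirp-mass bin edges [M_sun], width 3
mc_centers = 0.5 * (mc_bins[:-1] + mc_bins[1:])

def expected_counts(beta, gamma, n_ref=100.0):
    """Toy expected number of detections per chirp-mass bin.

    Stand-in for the full model: counts scale linearly with beta and are
    mildly suppressed for larger gamma, with a fixed chirp-mass shape.
    """
    shape = np.exp(-0.5 * ((mc_centers - 15.0) / 7.0) ** 2)
    shape /= shape.sum()
    norm = n_ref * (beta / 0.01) * np.exp(-(gamma - 1.0))
    return norm * shape

# Mock observation drawn from the fiducial model (beta = 0.01, gamma = 1).
mock = rng.poisson(expected_counts(0.01, 1.0))

def log_like(beta, gamma):
    mu = expected_counts(beta, gamma)
    # Poisson log-likelihood up to a data-only constant.
    return np.sum(mock * np.log(mu) - mu)

betas = np.logspace(-2.5, -1.5, 60)              # flat prior in log(beta)
gammas = np.linspace(0.0, 2.0, 60)               # flat prior in gamma
lnL = np.array([[log_like(b, g) for g in gammas] for b in betas])

i, j = np.unravel_index(np.argmax(lnL), lnL.shape)
print(f"maximum-likelihood point: beta = {betas[i]:.4f}, gamma = {gammas[j]:.2f}")
```

In this toy version the likelihood is maximal along a ridge where β exp(1−γ) is roughly constant, which is the same kind of degeneracy discussed below.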
To reach the same overall number of detected mergers we need to have a larger fraction β of binaries that are in binaries and are on close enough orbits to merge within a Hubble time. § DISCUSSIONThe discovery of GW from merging binary BHs opens new perspectives for the studies of stellar evolution and BH formation. In this paper we introduced a framework that can be used to analyze upcoming GW detections in a full astrophysical context with the aim of constraining stellar evolution models. We qualitatively showed the effect of different models on the mass and redshift distribution of potential LIGO sources. We find that among the stellar evolution models discussed here the Limongi model without stellar rotation and the Fryer model provide the best description of the observed distribution. These models differ in the mass distribution of detectable BHs: while the Fryer model predicts a concentration of BHs around ∼ 20-30M_⊙ (a result of the modeling of mass loss in this case), the distribution is almost flat in the Limongi model. It therefore seems possible to discriminate between these models with more observations of BH mergers. We also find that the WWp model is not compatible with LIGO detections since it produces too few BHs above ∼ 25M_⊙. Moreover, the Limongi300 model, in which all the stars rotate at 300 km/sec is also unlikely due to a cutoff it introduces at ∼ 45M_⊙, which is a result of the fact that rotating stars undergo PISN at lower masses than their non-rotating counterparts.We also performed a basic parameter estimation analysis, focusing only on β and γand using 100 events drawn from a population computed with either Fryer or Limongi models. We found that these parameters are degenerate: the same number of detections can be obtained for lower binary fraction and shallower time delay distribution. It will be interesting to consider alternative BH formation channels, such as the dynamical formation channel, PopIII remnants and primordial BHs. In view of our results, it is clear that models which present specific unique features in their mass and/or redshift distribution will be the easiest to constrain. For example, even a single ∼ 150M_⊙ BH could point to one of these alternative channels, since it cannot be produced via standard stellar evolution (as it would fall in the PISN range). However, in the absence of such 'smoking-gun' detections and in view of the large variety of stellar evolution models it might be difficult to constrain some of these alternative channels with current ground-based interferometers. For example, the generic primordial BH scenario, discussed in this paper, seems to be difficult to constrain if the merger times are distributed roughly like 1/t_ delay as in <cit.> (and similarly to the stellar-origin BHs), and the BH mass function is bottom-heavy with a cutoff at ∼ 70M_⊙ as in <cit.>. While the redshift distribution of these sources will be constant out to high redshifts, contrary to the case of stellar-origin BHs, this feature will not be detectable before the next generation of ground-based interferometers becomes operational.Finally, we have not discussed the spins of the merging BHs, which can provide additional constraints, in particular for the dynamical formation channel, and which we plan to include in future work. § ACKNOWLEDGMENTSWe thank the anonymous referee for useful suggestions that helped improve the manuscript. ID is grateful to Thibaut Louis for useful discussions. 
This work has been done within the Labex ILP (reference ANR-10-LABX-63), part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02. We acknowledge the financial support from the EMERGENCE 2016 project, Sorbonne Universités, convention no. SU-16-R-EMR-61 (MODOG).
http://arxiv.org/abs/1709.09197v3
{ "authors": [ "Irina Dvorkin", "Jean-Philippe Uzan", "Elisabeth Vangioni", "Joseph Silk" ], "categories": [ "astro-ph.HE", "astro-ph.SR", "gr-qc" ], "primary_category": "astro-ph.HE", "published": "20170926180300", "title": "Exploring stellar evolution with gravitational-wave observations" }
Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan [email protected] Institute for Materials Research, Tohoku University, Sendai 980-8577, JapanNational Institute for Materials Science, Tsukuba 305-0047, JapanInstitute for Materials Research, Tohoku University, Sendai 980-8577, JapanWPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577, JapanInstitute for Materials Research, Tohoku University, Sendai 980-8577, JapanNational Institute for Materials Science, Tsukuba 305-0047, JapanPRESTO, Japan Science and Technology Agency, Saitama 332-0012, JapanCenter for Spintronics Research Network, Tohoku University, Sendai 980-8577, JapanInstitute for Materials Research, Tohoku University, Sendai 980-8577, JapanWPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577, JapanCenter for Spintronics Research Network, Tohoku University, Sendai 980-8577, JapanAdvanced Science Research Center, Japan Atomic Energy Agency, Tokai 319-1195, JapanWe report the observation of magnetic-field-induced suppression of the spin Peltier effect (SPE) in a junction of a paramagnetic metal Pt and a ferrimagnetic insulator Y_3Fe_5O_12 (YIG) at room temperature. For driving the SPE, spin currents are generated via the spin Hall effect from applied charge currents in the Pt layer, and injected into the adjacent thick YIG film. The resultant temperature modulation is detected by a commonly-used thermocouple attached to the Pt/YIG junction. The output of the thermocouple shows sign reversal when the magnetization is reversed and linearly increases with the applied current, demonstrating the detection of the SPE signal. We found that the SPE signal decreases with the magnetic field. The observed suppression rate was found to be comparable to that of the spin Seebeck effect (SSE), suggesting the dominant and similar contribution of the low-energy magnons in the SPE as in the SSE.Magnetic-field-induced suppression of spin Peltier effect in Pt/Y_3Fe_5O_12 system at room temperature Eiji Saitoh December 30, 2023 ======================================================================================================§ INTRODUCTION Thermoelectric conversion is one of the promising technologies for smart energy utilization <cit.>. Owing to the progress of spintronics in this decade, the spin-based thermoelectric conversion is now added to the scope of the thermoelectric technology <cit.>. In particular, the thermoelectric generation mediated by flow of spins, or spin current, has attracted much attention because of the advantageous scalability, simple fabrication processes, and flexible design of the figure of merit <cit.>. This is realized by combining the spin Seebeck effect (SSE) <cit.> and spin-to-charge conversion effects <cit.>, where a spin current is generated by an applied thermal gradient and is converted into electricity owing to spin–orbit coupling.The SSE has a reciprocal effect called the spin Peltier effect (SPE), discovered by Flipse et al. in 2014 in a Pt/yttrium iron garnet (Y_3Fe_5O_12: YIG) junction <cit.>. In the SPE, a spin current across a normal conductor (N)/ferromagnet (F) junction induces a heat current, which can change the temperature distribution around the junction system.To reveal the mechanism of the SPE, systematic experiments have been conducted <cit.>. 
Since the SPE is driven by magnetic fluctuations (magnons) in the F layer, detailed study on the magnetic-field and temperature dependence is indispensable for clarifying the microscopic relation between the SPE and magnon excitation and the reciprocity between the SPE and SSE <cit.>. A high magnetic field is expected to affect the magnitude of the SPE signal via the modulation of spectral properties of magnons. In fact, the SSE thermopower in a Pt/YIG system was shown to be suppressed by high magnetic fields even at room temperature against the conventional theoretical expectation based on the equal contribution over the magnon spectrum <cit.>. This anomalously-large suppression highlights the dominant contribution of sub-thermal magnons, which possess lower energy and longer propagation length than thermal magnons <cit.>. Thus, the experimental examination of the field dependence of the SPE is an important task for understanding the SPE. Although the SPE has recently been measured in various systems by using the lock-in thermography (LIT) <cit.>, it is difficult to be used at high fields and/or low temperatures. For investigating the high-magnetic-field response of the SPE, an alternative method is required. In this paper, we investigate the magnetic field dependence of the SPE up to 9 T at 300 K by using a commonly-used thermocouple (TC) wire. As revealed by the LIT experiments <cit.>, the temperature modulation induced by the SPE is localized in the vicinity of N/F interfaces. This is the reason why the magnitude of the SPE signals is very small in the first experiment by Flipse et al. <cit.>, where a thermopile sensor is put on the bare YIG surface, not on the Pt/YIG junction. Here, we show that the SPE can be detected with better sensitivity simply by attaching a common TC wire on a N/F junction. This simple SPE detection method enables systematic measurements of the magnetic field dependence of the SPE, since it is easily integrated to conventional measurement systems. In the following, we describe the details of the electric detection of the SPE signal using a TC, the results of the magnetic field dependence of the SPE signal in a high-magnetic-field range, and its comparison to that of the SSE thermopower.§ EXPERIMENTAL The spin current for driving the SPE is generated via the spin Hall effect (SHE) from a charge current applied to N<cit.>. The SHE-induced spin current then forms spin accumulation at the N/F interface, whose spin vector representation is given byμ_ s∝θ_ SHE𝐣_ c×𝐧,where θ_ SHE is the spin Hall angle of N, 𝐣_ c the charge-current-density vector, and 𝐧 the unit vector normal to the interface directing from F to N. μ_ s at the interface exerts spin-transfer torque to magnons in F via the interfacial exchange coupling at finite temperatures, when μ_ s is parallel or anti-parallel to the equilibrium magnetization (𝐦) <cit.>. The torque increases or decreases the number of the magnons depending on the polarization of the torque (μ_ s∥𝐦 or μ_ s∥-𝐦), and eventually changes the system temperature by energy transfer, concomitant with the spin-current injection <cit.>. The energy transfer induces observable temperature modulation in isolated systems, which satisfies the following relation Δ T_ SPE∝μ_ s·𝐦∝(𝐣_ c×𝐧)·𝐦. A schematic of the sample system and measurement geometry is shown in Fig. 1(a). The sample system is a Pt strip on a single-crystal YIG. 
The YIG layer is 112-μ m-thick and grown by a liquid phase epitaxy method on a 500-μ m-thick Gd_3Ga_5O_12 substrate with the lateral dimension 10 × 10 mm^2, where a small amount of Bi is substituted for the Y site of the YIG to compensate for the lattice mismatch with the substrate. The Pt strip, connected to four electrodes, is 5-nm-thick and 0.5-mm-wide, fabricated by a sputtering method, and patterned with a metal mask. Then, the whole surface of the sample except the electrodes is covered by a highly-resistive Al_2O_3 film with a thickness of ∼100 nm by means of an atomic layer deposition method. We attached a TC wire to the top of the Pt/YIG junction, where the wire is electrically insulated from but thermally connected to the Pt layer owing to the Al_2O_3 layer. We used a type-E TC with a diameter of 0.013 mm (Omega Engineering CHCO-0005), and fixed its junction part on the middle of the Pt strip using varnish. The rest of the TC wires were fixed on the sample surface for thermal anchoring and for avoiding thermal leakage from the top of the Pt/YIG junction [see the cross-sectional view in Fig. <ref>(a)]. The expected thickness of the varnish between the TC and the sample surface is on the order of 10 μ m [This is estimated from the thickness of the varnish layer sandwiched between glass substrates pressed with the same pressure applied to the sample; we pressed the sample and the TC wire with an additional glass cover.]. Note that the thickness of the varnish layer does not affect the magnitude of the signal, while it affects the temporal response of the TC [As the radiation to the outer environment at the surface is negligibly small, the vertical heat current in the varnish layer is zero at the steady-state condition. The effect of the lateral heat currents, expected at the edges of the Pt strip, is also small as the total thickness from the top of the Pt strip to the surface (∼30 μ m) is smaller than the width (500 μ m).]. The ends of the Pt strip are connected to a current source and the other two electrodes are used for measuring resistance based on the four-terminal method. The magnetic field 𝐇 is applied in the film plane and perpendicular to the strip, and is thus along μ_ s, satisfying the symmetry of the SPE [Eq. (<ref>)]. The TC is connected to electrodes on a heat bath, which acts as a thermal anchor, and further connected to a voltmeter via conductive wires. The measurements were carried out at 300 K and ∼10^-3 Pa. For the electric detection of the SPE, we measured the amplitude difference (Δ V_ TC) of the TC voltage V_ TC in response to a change (Δ J_ c) in the current J_ c (the so-called Delta mode of the Keithley 2182A nanovoltmeter). After the current is set to J_ c=J_ c^0±Δ J_ c, a time delay (t_ delay) is inserted before measuring the corresponding voltages V_ TC^±. Then, Δ V_ TC is obtained as Δ V_ TC=(V_ TC^+-V_ TC^-)/2. The time delay t_ delay is necessary because the temperature modulation generated at the Pt/YIG junction takes a certain time to reach the TC and stabilize. The appropriate value of t_ delay can be determined from the delay dependence of the SPE signal, which will be shown in Sec. III. For the SPE measurements, we set no offset current (J_ c^0=0) so that Δ V_ TC is free from Joule heating (∝ J_ c^2), and the SPE (∝ J_ c) is expected to dominate the Δ V_ TC signal [see Fig. <ref>(a)]. From the measured values of Δ V_ TC, the temperature modulation (Δ T) is estimated via the relation Δ T=Δ V_ TC/S_ TC, where S_ TC is the Seebeck coefficient of the TC.
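To make the detection scheme concrete, the delta-mode extraction and the conversion to a temperature modulation can be summarized in a short analysis sketch. This is a minimal illustration only: the readings, variable names, and the use of a field-independent S_TC are assumptions made for the example, not values taken from the experiment.

```python
import numpy as np

S_TC = 61e-6   # assumed field-independent Seebeck coefficient of the type-E TC [V/K] at 300 K

def delta_mode(v_plus, v_minus):
    """Amplitude difference of the TC voltage measured at J_c = J_c^0 + dJ and J_c^0 - dJ."""
    return 0.5 * (np.asarray(v_plus) - np.asarray(v_minus))

def temperature_modulation(dV_TC, seebeck=S_TC):
    """Convert the TC voltage difference to a temperature modulation, Delta T = dV / S."""
    return dV_TC / seebeck

# Example with made-up voltmeter readings (in volts); with J_c^0 = 0 the Joule-heating
# contribution (proportional to J_c^2) cancels in the difference.
dV = delta_mode(2.1e-8, -1.9e-8)
dT = temperature_modulation(dV)   # temperature modulation in kelvin
```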
For the low-field measurements (μ_0H<0.1T), the reference value of S_ TC=61 μ V/K at 300 K is used, while, for the high-field measurements, the field dependence of S_ TC, determined by the method shown in Appendix <ref>, is used. § RESULTS AND DISCUSSION First, we demonstrated the electric detection of the SPE at low fields. Figure <ref>(a) shows Δ V_ TC as a function of the field magnitude H at Δ J_ c=10mA and t_ delay=50ms. Δ V_ TC clearly changes its sign when the field direction is reversed, and the appearance of the hysteresis demonstrates that it reflects the magnetization curve of the YIG, showing the symmetry expected from Eq. (<ref>) <cit.>. The small offset of Δ V_ TC may be attributed to the temperature modulation by the Peltier effect appearing around the current electrodes, Joule heating due to small uncanceled current offsets, and possible electrical leakage of the applied current from the sample to the TC. Since the Peltier and resistance effects are even functions of the magnetization direction whereas the SPE is an odd function, the SPE-induced temperature modulation Δ T_ SPE can be extracted by subtracting the response that is symmetric in the magnetization: Δ T_ SPE=[Δ T(+H)-Δ T(-H)]/2 <cit.>. Figure <ref>(b) shows the Δ J_ c dependence of Δ T_ SPE and the temperature (T_ Pt) of the Pt strip, estimated from the resistance of the strip. While T_ Pt increases parabolically with the magnitude of Δ J_ c owing to Joule heating, Δ T_ SPE increases linearly, as expected from the characteristics of the SPE. These distinct dependences show that the contribution of Joule heating to Δ T_ SPE is negligibly small in this study [The SPE signal at 305 and 310 K (nominal) was observed to show the same magnitude as that at 300 K. Thus the temperature increase due to the Joule heating does not affect the measured SPE value.]. The magnitude of the SPE signal is estimated to be Δ T_ SPE/Δ j_ c=3.4×10^-13Km^2/A, where Δ j_ c is the difference in j_ c. This value is almost the same as that obtained in the thermographic experiments <cit.>; since in Ref. <cit.> the sine-wave amplitude A of Δ T_ SPE is divided by the rectangular-wave amplitude of Δ j_ c, a correction factor of π/4 is necessary, i.e., Δ T_ SPE/Δ j_ c=π A/(4Δ j_ c)=3.7×10^-13Km^2/A in the previous study. We note that, in the above and following measurements, t_ delay=50ms is chosen based on the t_ delay dependence of Δ T_ SPE [Fig. <ref>(c)], where Δ T_ SPE is almost saturated at t_ delay>10ms. Such a finite but small thermal-stabilization time can be explained by the thermal diffusion from the junction to the TC and the rapid thermal stabilization of the SPE-induced temperature modulation <cit.>. Next, we measured the field dependence of the SPE at higher fields up to μ_0H=9.0T. Figure 3(a) shows Δ T_ SPE as a function of H, where Δ J_ c is changed from 1 to 15 mA. The suppression of the Δ T_ SPE signal at higher fields is clearly observed for all the Δ J_ c values. As shown in Fig. <ref>(b), the signal shows a linear variation with J_ c both at μ_0H=0.1T and 8.0 T, demonstrating a constant suppression rate. Since the resistance of the sample varies by only 1% at most [Fig. <ref>(c)], the junction temperature remains constant during the field scan. The field dependence of the thermal conductivity of YIG is also irrelevant to the Δ T_ SPE suppression, as it is known to be negligibly small at room temperature <cit.>. Thus we can conclude that the suppression is attributed to the nature of the SPE.
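The field symmetrization used above to isolate the SPE from the field-even backgrounds can be written compactly; the arrays below are hypothetical placeholders for measured sweeps, not data from this work.

```python
import numpy as np

def decompose(dT_plus_H, dT_minus_H):
    """Split the measured temperature modulation at +H and -H into an
    odd-in-field part (the SPE signal) and an even-in-field part
    (Peltier effect, Joule heating, and other offsets)."""
    dT_plus_H = np.asarray(dT_plus_H, dtype=float)
    dT_minus_H = np.asarray(dT_minus_H, dtype=float)
    dT_spe = 0.5 * (dT_plus_H - dT_minus_H)
    dT_background = 0.5 * (dT_plus_H + dT_minus_H)
    return dT_spe, dT_background

# Example with made-up values in millikelvin:
spe, background = decompose([0.31, 0.30], [-0.27, -0.26])
```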
By calculating the suppression magnitude as δ_ SPE=1-Δ T_ SPE(μ_0H=8.0T)/Δ T_ SPE(μ_0H=0.1T),we obtained δ_ SPE=0.26.To compare δ_ SPE to the field-induced suppression of the SSE, we performed SSE measurements in a longitudinal configuration using a Pt/YIG junction system fabricated at the same time as the SPE sample. The SSE sample has the lateral dimension 2.0×6.0mm^2 and the same vertical configuration as the SPE sample except for the absence of the Al_2O_3 layer. The detailed method of the SSE measurement is available elsewhere <cit.>. Figure <ref>(d) shows the field dependence of the SSE thermopower in the Pt/YIG junction. The clear suppression of the SSE thermopower is observed. Importantly, the high-field response of the SSE is quite similar to that of the SPE in the Pt/YIG system. The suppression magnitude of the SSE δ_ SSE, defined in the same manner as the SPE, is estimated to be ∼0.22, consistent with the previously reported values <cit.>.The observed remarkable field-induced suppression of the SPE at room temperature shows that the SPE is likely dominated by low-energy magnons because the energy scale of the applied field is less than 10K and thus much lower than the thermal energy of 300 K <cit.>. The origin of the strong contribution of the low-energy magnons in the SPE can be (i) stronger coupling of the spin torque to the low-energy (sub-thermal) magnons and (ii) greater propagation length of the low-energy magnons than those of high-energy (thermal) magnons <cit.>. While (i) is not well experimentally investigated, the existence of the μ m-range length scale in the SPE <cit.> and the similarity between δ_ SPE and δ_ SSE suggest the dominant contribution from (ii) as in the case of the SSE <cit.>. In fact, recently, it has been demonstrated that the high magnetic fields reduce the propagation length of magnons contributing to the SSE <cit.>. This length-scale scenario can qualitatively explain the suppression in the SPE. Recalling that a heat current density (j_ q) existing over a distance (l) generates the temperature difference Δ T=κ^-1j_ ql in an isolated system, Δ T should decrease when l decreases, where κ is the thermal conductivity of the system. In the SPE, l corresponds to the magnon propagation length <cit.>, and a flow of magnons accompanies a heat current <cit.>. Consequently, when the high magnetic field is applied and the magnons with longer propagation length are suppressed by the Zeeman gap, the averaged magnon propagation length decreases and thus results in the reduced Δ T. To further investigate the microscopic mechanism of the SPE, consideration of the spectral non-uniformity may be vital both in experiments and theories.§ SUMMARY In this study, we showed the magnetic field dependence of the spin Peltier effect (SPE) up to 9.0 T at 300 K in a Pt/YIG junction system. We established a simple but sensitive detection method of the SPE using a commonly-available thermocouple wire. The SPE signals were observed to be suppressed at high magnetic fields, highlighting the stronger contribution of low-energy magnons in the SPE. The similar suppression rate of the SPE-induced temperature modulation to that of the SSE-induced thermopower suggests that the suppression originates the decrease in the magnon propagation length as in the case of the SSE. We anticipate that the experimental results and the method reported here will be useful for systematic investigation of the SPE.The authors thank T. Kikkawa for the aid in measuring the SSE and G. E. W. Bauer and Y. 
Ohnuma for the valuable discussion. This work was supported by PRESTO “Phase Interfaces for Highly Efficient Energy Utilization” (Grant No. JPMJPR12C1) and ERATO “Spin Quantum Rectification Project” (Grant No. JPMJER1402) from JST, Japan, Grant-in-Aid for Scientific Research (A) (Grant No. JP15H02012), and Grant-in-Aid for Scientific Research on Innovative Area “Nano Spin Conversion Science” (Grant No. JP26103005) from JSPS KAKENHI, Japan, NEC Corporation, the Noguchi Institute, and E-IMR, Tohoku University. S.D. was supported by JSPS through a research fellowship for young scientists (Grant No. JP16J02422). K.O. acknowledges support from GP-Spin at Tohoku University.§ CALIBRATION OF THERMO COUPLE AT HIGH MAGNETIC FIELDSTo measure the field dependence of S_ TC, we used the Joule-heating-induced signal as a reference. By adding a non-zero offset (J_ c^0) to the applied current, we obtained the temperature modulation induced by the Joule heating, of which the power P changes from P(H)=R(H)(J_ c^0-Δ J_ c)^2 to P(H)=R(H)(J_ c^0+Δ J_ c)^2, where R denotes the resistance of the strip [Fig. <ref>(a)]. Figure <ref>(b) shows the magnetic field dependence of the component of Δ V_ TC symmetric to the field (Δ V_ Joule=[Δ T_ TC(+H)+Δ T_ TC(-H)]/2). As the change in R, due to the ordinary, spin Hall, and Hanle magnetoresistance effects <cit.>, is in the order of 0.02 % [Fig.3(c)], its contribution to P can be neglected. Similarly, the field dependence of the thermal conductivity of YIG is negligibly small <cit.>, ensuring the constant temperature change. Accordingly, the field dependence of Δ V_ Joule directly reflects S_ TC(H). It increases by a factor of ∼1 % when the field magnitude increases up to 9.0 T. We approximated the field dependence of S_ TC(H) as S_ TC(H)=61(1+4.54×10^-3|μ_0H|^0.453) μ V/K by determining the relative change from the measurement results (Δ V_ Joule(H)/Δ V_ Joule|_H=0) and the absolute value from the reference value. apsrev4-1 41 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Zhang and Zhao(2015)]Zhang201592 author author X. Zhang and author L.-D. Zhao, http://dx.doi.org/10.1016/j.jmat.2015.01.001 journal journal J. Materiomics volume 1, pages 92(year 2015)NoStop [Bauer et al.(2012)Bauer, Saitoh, and van Wees]Bauer:2012fq author author G. E. W.Bauer, author E. Saitoh,and author B. J. van Wees, @noopjournal journal Nat. Mater. volume 11, pages 391 (year 2012)NoStop [Boona et al.(2014)Boona, Myers, and Heremans]boona2014spin author author S. R. Boona, author R. C. Myers, and author J. P. Heremans,@noopjournal journal Energy Environ. Sci. volume 7, pages 885 (year 2014)NoStop [Uchida et al.(2016)Uchida, Adachi, Kikkawa, Kirihara, Ishida, Yorozu, Maekawa,and Saitoh]Uchida:2016jo author author K. Uchida, author H. Adachi, author T. Kikkawa, author A. Kirihara, author M. Ishida, author S. Yorozu, author S. Maekawa,and author E. Saitoh, @noopjournal journal Proc. IEEE volume 104, pages 1946 (year 2016)NoStop [Flipse et al.(2014)Flipse, Dejene, Wagenaar, Bauer, Youssef, and van Wees]Flipse:2014cl author author J. Flipse, author F. K. Dejene, author D. Wagenaar, author G. E. W. Bauer, author J. B. Youssef,and author B. J. van Wees, @noopjournal journal Phys. Rev. Lett. 
volume 113, pages 027601 (year 2014)NoStop [Flipse et al.(2012)Flipse, Bakker, Slachter, Dejene,and van Wees]Flipse:2012kn author author J. Flipse, author F. L. Bakker, author A. Slachter, author F. K. Dejene,and author B. J. van Wees, @noopjournal journal Nat. Nanotech. volume 7, pages 166 (year 2012)NoStop [Slachter et al.(2010)Slachter, Bakker, Adam, and van Wees]Slachter:2010hj author author A. Slachter, author F. L. Bakker, author J. P. Adam, and author B. J. van Wees,@noopjournal journal Nat. Phys.volume 6, pages 879 (year 2010)NoStop [Uchida et al.(2008)Uchida, Takahashi, Harii, Ieda, Koshibae, Ando, Maekawa,and Saitoh]Uchida:2008cc author author K. Uchida, author S. Takahashi, author K. Harii, author J. Ieda, author W. Koshibae, author K. Ando, author S. Maekawa,and author E. Saitoh, @noopjournal journal Nature volume 455, pages 778 (year 2008)NoStop [Uchida et al.(2010a)Uchida, Xiao, Adachi, Ohe, Takahashi, Ieda, Ota, Kajiwara, Umezawa, Kawai, Bauer, Maekawa, and Saitoh]UchidaXiaoAdachiEtAl2010 author author K. Uchida, author J. Xiao, author H. Adachi, author J. Ohe, author S. Takahashi, author J. Ieda, author T. Ota, author Y. Kajiwara, author H. Umezawa, author H. Kawai, author G. E. W. Bauer, author S. Maekawa,and author E. Saitoh, http://dx.doi.org/10.1038/nmat2856 journal journal Nat. Mater. volume 9, pages 894 (year 2010a)NoStop [Kirihara et al.(2012)Kirihara, Uchida, Kajiwara, Ishida, Nakamura, Manako, Saitoh, and Yorozu]Kirihara:2012jq author author A. Kirihara, author K. Uchida, author Y. Kajiwara, author M. Ishida, author Y. Nakamura, author T. Manako, author E. Saitoh,and author S. Yorozu, @noopjournal journal Nat. Mater. volume 11, pages 686 (year 2012)NoStop [Ramos et al.(2015)Ramos, Kikkawa, Aguirre, Lucas, Anadón, Oyake, Uchida, Adachi, Shiomi, Algarabel, Morellón, Maekawa, Saitoh, and Ibarra]Ramos:2015kh author author R. Ramos, author T. Kikkawa, author M. H. Aguirre, author I. Lucas, author A. Anadón, author T. Oyake, author K. Uchida, author H. Adachi, author J. Shiomi, author P. A.Algarabel, author L. Morellón, author S. Maekawa, author E. Saitoh, and author M. R. Ibarra,@noopjournal journal Phys. Rev. Bvolume 92, pages 220407 (year 2015)NoStop [Adachi et al.(2013)Adachi, Uchida, Saitoh, and Maekawa]Adachi:2013jy author author H. Adachi, author K. Uchida, author E. Saitoh,and author S. Maekawa, @noopjournal journal Rep. Prog. Phys. volume 76, pages 036501 (year 2013)NoStop [Uchida et al.(2010b)Uchida, Adachi, Ota, Nakayama, Maekawa, andSaitoh]Uchida:2010jb author author K. Uchida, author H. Adachi, author T. Ota, author H. Nakayama, author S. Maekawa,and author E. Saitoh, @noopjournal journal Appl. Phys. Lett. volume 97, pages 172505 (year 2010b)NoStop [Saitoh et al.(2006)Saitoh, Ueda, Miyajima, and Tatara]Saitoh:2006kk author author E. Saitoh, author M. Ueda, author H. Miyajima,andauthor G. Tatara, @noopjournal journal Appl. Phys. Lett. volume 88, pages 2509 (year 2006)NoStop [Sinova et al.(2015)Sinova, Valenzuela, Wunderlich, Back, and Jungwirth]Sinova:2015ic author author J. Sinova, author S. O. Valenzuela, author J. Wunderlich, author C. H. Back,and author T. Jungwirth, @noopjournal journal Rev. Mod. Phys. volume 87, pages 1213 (year 2015)NoStop [Hoffmann(2013)]Hoffmann:-1el author author A. Hoffmann, @noopjournal journal IEEE Trans. Magn. volume 49, pages 5172 (year 2013)NoStop [Daimon et al.(2016)Daimon, Iguchi, Hioki, Saitoh, andUchida]Daimon:2016fja author author S. Daimon, author R. Iguchi, author T. Hioki, author E. Saitoh,and author K. 
Uchida, @noopjournal journal Nat. Commun. volume 7, pages 13754 (year 2016)NoStop [Uchida et al.(2017)Uchida, Iguchi, Daimon, Ramos, Anad n, Lucas, Algarabel, Morelln, Aguirre, Ibarra,and Saitoh]Uchida:2017kb author author K. Uchida, author R. Iguchi, author S. Daimon, author R. Ramos, author A. Anad n, author I. Lucas, author P. A.Algarabel, author L. Morelln, author M. H. Aguirre, author M. R. Ibarra,and author E. Saitoh,@noopjournal journal Phys. Rev. Bvolume 95, pages 184437 (year 2017)NoStop [Daimon et al.(2017)Daimon, Uchida, Iguchi, Hioki, andSaitoh]Daimon:2017jx author author S. Daimon, author K. Uchida, author R. Iguchi, author T. Hioki,and author E. Saitoh, @noopjournal journal Phys. Rev. B volume 96, pages 024424 (year 2017)NoStop [Uchida et al.(2014)Uchida, Kikkawa, Miura, Shiomi, andSaitoh]Uchida:2014jq author author K. Uchida, author T. Kikkawa, author A. Miura, author J. Shiomi,and author E. Saitoh, @noopjournal journal Phys. Rev. X volume 4, pages 041023 (year 2014)NoStop [Rezende et al.(2014)Rezende, Rodriguez-Suarez, Cunha, Rodrigues, Machado, Guerra, Ortiz, and Azevedo]Rezende:2014cr author author S. M. Rezende, author R. L. Rodriguez-Suarez, author R. O. Cunha, author A. R. Rodrigues, author F. L. A. Machado, author G. A. F. Guerra, author J. C. L. Ortiz,and author A. Azevedo, @noopjournal journal Phys. Rev. B volume 89, pages 014416 (year 2014)NoStop [Kikkawa et al.(2015)Kikkawa, Uchida, Daimon, Qiu, Shiomi, and Saitoh]Kikkawa:2015bn author author T. Kikkawa, author K. Uchida, author S. Daimon, author Z. Qiu, author Y. Shiomi,and author E. Saitoh, @noopjournal journal Phys. Rev. B volume 92, pages 064413 (year 2015)NoStop [Jin et al.(2015)Jin, Boona, Yang, Myers, andHeremans]Jin:2015ik author author H. Jin, author S. R. Boona, author Z. Yang, author R. C. Myers,and author J. P. Heremans, @noopjournal journal Phys. Rev. B volume 92, pages 054436 (year 2015)NoStop [Barker and Bauer(2016)]Barker:2016hy author author J. Barker and author G. E. W. Bauer, @noopjournal journal Phys. Rev. Lett. volume 117, pages 217201 (year 2016)NoStop [Guo et al.(2016)Guo, Cramer, Kehlberger, Ferguson, MacLaren, Jakob, and Kläui]Guo:2016go author author E.-J. Guo, author J. Cramer, author A. Kehlberger, author C. A. Ferguson, author D. A. MacLaren, author G. Jakob,and author M. Kläui, @noopjournal journal Phys. Rev. X volume 6, pages 031012 (year 2016)NoStop [Basso et al.(2016)Basso, Ferraro, Magni, Sola, Kuepferling, and Pasquale]Basso:2016gc author author V. Basso, author E. Ferraro, author A. Magni, author A. Sola, author M. Kuepferling,and author M. Pasquale, @noopjournal journal Phys. Rev. B volume 93, pages 184421 (year 2016)NoStop [Iguchi et al.(2017)Iguchi, Uchida, Daimon, and Saitoh]Anonymous:2017jm author author R. Iguchi, author K. Uchida, author S. Daimon,and author E. Saitoh, @noopjournal journal Phys. Rev. B volume 95, pages 174401 (year 2017)NoStop [Miura et al.(2017)Miura, Kikkawa, Iguchi, Uchida, Saitoh, and Shiomi]Anonymous:2017ju author author A. Miura, author T. Kikkawa, author R. Iguchi, author K. Uchida, author E. Saitoh,and author J. Shiomi, @noopjournal journal Phys. Rev. Materials volume 1, pages 014601 (year 2017)NoStop [Ohnuma et al.(2017)Ohnuma, Matsuo, and Maekawa]ohnumxiv author author Y. Ohnuma, author M. Matsuo, and author S. Maekawa,@noopjournal journal to be published in Phys. Rev. B(year 2017)NoStop [Hioki et al.(2017)Hioki, Iguchi, Qiu, Hou, Uchida, and Saitoh]Anonymous:2017de author author T. Hioki, author R. Iguchi, author Z. Qiu, author D. 
Hou, author K. Uchida,and author E. Saitoh, @noopjournal journal Appl. Phys. Express volume 10, pages 073002 (year 2017)NoStop [Tserkovnyak et al.(2005)Tserkovnyak, Brataas, Bauer, andHalperin]Tserkovnyak:2005fr author author Y. Tserkovnyak, author A. Brataas, author G. E. W. Bauer,and author B. I. Halperin, @noopjournal journal Rev. Mod. Phys. volume 77, pages 1375 (year 2005)NoStop [Zhang and Zhang(2012)]Zhang:2012hh author author S. S. L.Zhang and author S. Zhang, @noopjournal journal Phys. Rev. B volume 86, pages 214424 (year 2012)NoStop [Note1()]Note1 note This is estimated from the thickness of the varnish layer sandwiched between glass substrates pressed with the same pressure applied to the sample; we pressed the sample and the TC wire with an additional glass cover.Stop [Note2()]Note2 note As the radiation to the outer environment at the surface is negligibly small, the vertical heat current in the varnish layer is zero at the steady-state condition. The effect of the lateral heat currents, expected at the edges of the Pt strip, is also small as the total thickness from the top of the Pt strip to the surface (∼ 30 μ m) is smaller than the width (500 μ m).Stop [Note3()]Note3 note The SPE signal at 305 and 310 K (nominal) was observed to show the same magnitude as that at 300 K. Thus the temperature increase due to the Joule heating does not affect the measured SPE value.Stop [Boona and Heremans(2014)]Boona:2014fh author author S. R. Boona and author J. P. Heremans, @noopjournal journal Phys. Rev. B volume 90, pages 064421 (year 2014)NoStop [Uchida et al.(2012)Uchida, Ota, Adachi, Xiao, Nonaka, Kajiwara, Bauer, Maekawa, and Saitoh]Uchida:2012ew author author K. Uchida, author T. Ota, author H. Adachi, author J. Xiao, author T. Nonaka, author Y. Kajiwara, author G. E. W.Bauer, author S. Maekawa,and author E. Saitoh, @noopjournal journal J. Appl. Phys. volume 111, pages 103903 (year 2012)NoStop [Sola et al.(2017)Sola, Bougiatioti, Kuepferling, Meier, Reiss, Pasquale, Kuschel, and Basso]Sola:2017ki author author A. Sola, author P. Bougiatioti, author M. Kuepferling, author D. Meier, author G. Reiss, author M. Pasquale, author T. Kuschel,and author V. Basso, @noopjournal journal Sci. Rep. volume 7, pages 46752 (year 2017)NoStop [Cornelissen et al.(2016)Cornelissen, Peters, Bauer, Duine, and van Wees]Cornelissen:2016ji author author L. J. Cornelissen, author K. J. H. Peters, author G. E. W. Bauer, author R. A. Duine, and author B. J. van Wees,@noopjournal journal Phys. Rev. Bvolume 94, pages 014412 (year 2016)NoStop [Nakayama et al.(2013)Nakayama, Althammer, Chen, Uchida, Kajiwara, Kikuchi, Ohtani, Geprägs, Opel, Takahashi, Gross, Bauer, Goennenwein, and Saitoh]Nakayama:2013gs author author H. Nakayama, author M. Althammer, author Y. T. Chen, author K. Uchida, author Y. Kajiwara, author D. Kikuchi, author T. Ohtani, author S. Geprägs, author M. Opel, author S. Takahashi, author R. Gross, author G. E. W. Bauer, author S. T. B. Goennenwein,andauthor E. Saitoh, @noopjournal journal Phys. Rev. Lett. volume 110, pages 206601 (year 2013)NoStop [Vélez et al.(2016)Vélez, Golovach, Bedoya-Pinto, Isasa, Sagasta, Abadia, Rogero, Hueso, Bergeret,and Casanova]Velez:2016bm author author S. Vélez, author V. N. Golovach, author A. Bedoya-Pinto, author M. Isasa, author E. Sagasta, author M. Abadia, author C. Rogero, author L. E. Hueso, author F. S.Bergeret,and author F. Casanova, @noopjournal journal Phys. Rev. Lett. volume 116,pages 016603 (year 2016)NoStop
http://arxiv.org/abs/1709.08997v1
{ "authors": [ "Ryuichi Itoh", "Ryo Iguchi", "Shunsuke Daimon", "Koichi Oyanagi", "Ken-ichi Uchida", "Eiji Saitoh" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170926131726", "title": "Magnetic-field-induced suppression of spin Peltier effect in Pt/${\\rm Y_{3}Fe_{5}O_{12}}$ system at room temperature" }
http://arxiv.org/abs/1709.09195v4
{ "authors": [ "José Antonio Carrillo", "Katy Craig", "Francesco S. Patacchini" ], "categories": [ "math.AP", "math.NA", "35Q35 35Q82 65M12 82C22" ], "primary_category": "math.AP", "published": "20170926180204", "title": "A blob method for diffusion" }
Understanding Infographics through Textual and Visual Tag Prediction Zoya Bylinskii1* Sami Alsheikh1* Spandan Madan2* Adrià Recasens1*Kimberli Zhong1 Hanspeter Pfister2 Fredo Durand1 Aude Oliva1 1 Massachusetts Institute of Technology 2 Harvard University{zoya,alsheikh,recasens,kimberli,fredo,oliva}@mit.edu {spandan_madan,pfister}@seas.harvard.edu30th January 2018 ============================================================================================================================================================================================================================================================================================================================= We introduce the problem of visual hashtag discovery for infographics:extracting visual elements from an infographic that are diagnostic of its topic. Given an infographic as input, our computational approach automatically outputs textual and visual elements predicted to be representative of the infographic content. Concretely, from a curated dataset of 29K large infographic images sampled across 26 categories and 391 tags, we present an automated two step approach. First, we extract the text from an infographic and use it to predict text tags indicative of the infographic content. And second, we use these predicted text tags as a supervisory signal to localize the most diagnostic visual elements from within the infographic i.e. visual hashtags. We report performances on a categorization and multi-label tag prediction problem and compare our proposed visual hashtags to human annotations. § INTRODUCTION If a hashtag can be worth 140 characters, how much is a visual hashtag worth? While text can be used to clearly convey a short message, a meaningful icon conveys the gist of a webpage or poster right away, grabbing attention while helping store the message in memory <cit.>. Identifying these visual regions requires an understanding of both the textual and visual content of the infographic.In this paper, we introduce a system that identifies these “visual hashtags", iconic image regionsthat represent key topics of an infographic. For instance, given an infographic with topics “economy” and “environment", relevant visual hashtags could be crops showing a coin (for economy) or the earth (for environment).* Indicates equal contribution. Infographics are visual encodings of visual and textual media, including graphs, visualizations, and graphic designs. They are specifically designed to provide an effective visual digest with the intent of delivering a message. Tags can serve as key words describing this message to facilitate data organization, retrieval from large databases, and sharing on social media. Analogously, we propose an effective visual digest of infographics via visual hashtags. Instead of providing visual summaries or thumbnails of the whole infographic, visual hashtags correspond to specific visual concepts or topics inside the infographic's rich visual space. We introduce a computational system that, given an infographic as input, produces discriminative textual and visual hashtags. Just as YouTube videos use representative frames as thumbnails, we identify relevant crops of an infographic as a “preview" of its content. Such thumbnails may aid in retrieval applications (e.g. organizing and visualizing large infographic collections from a webpage or file system). 
We evaluate the quality of visual hashtags by comparing the system's output to the image regions humans box as relevant to a particular textual tag on a given image.Unlike most natural images, infographics often contain embedded text that provides meaningful context for the visual content. We leverage this text to first make category (topic) and tag (sub-topic) predictions. We then use these predictions to constrain and disambiguate the automatically extracted visual features.This disambiguation is a key step in identifying the most diagnostic regions of an infographic. For instance, in Fig.  <ref> which contains diverse visual elements, if a predicted text tag is Environment then the system can condition visual object proposals on this topic and focus on related regions like the water droplet and spray bottle. On the other hand, if the predicted text tag is Education, the system can condition proposals on regions like the book. Thus, we can use the predicted text tags as a kind of supervisory signal for the visual model, to identify visual regions indicative of the different topics in the infographic.Approach: We present our tagging application on a dataset of 29K infographics scraped from Visually (<http://visual.ly/view>). Each infographic comes with a designer-assigned category label, multiple tags, and other meta-data (Sec. <ref>). We achieve prediction accuracy of 46% when predicting the top category out of 26 categories. For text tags, we achieve 48.2% top-1 average precision at predicting at least one of the possible few tags for an image out of 391 possible tags. These predictions are driven by text that we automatically extracted from the infographics and post-processed with a single-hidden-layer neural network (Sec. <ref>). Separately, we train category and tag prediction from image patches using a deep multiple-instance learning framework (Sec. <ref>). At test time, we run our patch-based visual network densely over an infographic, constrained to the tags predicted by the text network, to generate visual region proposals associated to the text tags. These proposals are then fed to a deep mask segmentation pipeline to generate the final visual hashtags (Fig. <ref>). Contributions: We introduce the problem of visual hashtag discovery, which consists of extracting diagnostic visual regions for particular topics. We demonstrate the utility of a patch-based, deep multiple instance approach for the processing of intractably large (up to 8000 pixels/side) and visually rich images. Unlike approaches that use text outside of an image for visual recognition tasks, we show the power of extracting text from within the image itself for facilitating visual recognition. On a novel curated dataset of 29K infographics, we report performances on a categorization problem and a multi-label tag prediction problem, and show results of our automatically extracted visual hashtags. § RELATED WORK Conventionally, computer vision research has focused mostly on understanding natural images and scenes, while very little work has been done on digitally born media. Some work has been present in <cit.> where the authors use computer vision techniques for geometry diagrams, and more recently in <cit.> where the authors use graph structures to syntactically parse diagrams. In a similar vein, <cit.> show that simpler, abstract digital images can be used in place of natural images to understand the semantic relationship between visual media and their natural language representation. 
However, to the best of our knowledge there is no work on automated understanding of infographics using computer vision techniques. Our task of text tag prediction for images is similar to that presented in <cit.>, however we attempt it on infographic images as opposed to natural images. Also, unlike <cit.>, where the authors trained a joint embedding of visual and text features, we solve the problem using just the visual features of an image. To work around the large size of the infographic images, we use a variant of multiple-instance learning approach <cit.>.We also predict text tags using the text extracted from within these images, which has not been tried before to the best of our knowledge. To obtain a distributed representation for the extracted text, we used the mean word2vec <cit.> representation, as suggested by <cit.>. We also tried other representations like the glove embedding <cit.>, and tweet2vec <cit.>.In this paper, we present a method to extract visual hashtags from infographics using only image-level tags. This weakly suppervised learning scheme is similar to <cit.>, where the category labels are used to estimate the location of the elements in the image. However, unlike <cit.>, we combine this weakly supervised model with a tag classifier based in the extracted text to improve the final prediction.§ INFOGRAPHICS DATASETWe scraped 63,885 static infographic images from the Visually website, a community platform for hand curated visual content. Each infographic is hand categorized, tagged, and described by the designer, making it a rich source of annotated images. Despite the difference in visual content, compared to other scene text datasets such as ICDAR 03 <cit.>, ICDAR 15 <cit.>, COCO-Text <cit.> and VGG SynthText in the wild <cit.>, the Visually dataset is similar in size and richness of text annotations, with metadata including labels for 26 categories (available for 90.21% of the images), 19K tags (for 76.81% of the images), titles (99.98%) and descriptions (93.82%). We curated a subset of this 63K dataset to obtain a representative subset of 28,973 images (Table <ref>). Uploaded tags are free text, so many of the original tags are either semantically redundant or have too few instances. Redundant tags were merged using WordNet <cit.> and manually, and only the 391 tags with at least 50 image instances each were retained. To produce the final 29K dataset, we further filtered images to contain a category annotation, at least one of the 391 tags.99.6% of these images had visual aspect ratio between 1:5 and 5:1. Of this dataset, 10% was held out as our test set, and the remaining 26K images were used for training our text and visual models. For 330 of the test images, we collected additional crowdsourced annotations in order to have ground truth visual hashtags for evaluation.§ APPROACH Given an infographic as input, our goal is to predict one or more text tags and visual hashtags that are diagnostic of the topics depicted in the infographic. We split this problem into two steps: (1) predicting the text tags for an infographic, and (2) using the predicted text tags to localize the most representative visual regions. Infographics are composed of a mix of text and visual elements, which combine to generate the message of the infographic. Given that the text is a very strong cue for the topic, we use it to provide context - a sort of supervisory signal - for the visual hashtag predictions. 
We use the text features to infer the category and tags for the infographic, and given these labels, we ask the visual model to predict the most confident visual regions indicative of these labels. Learning a mapping directly from visual features to labels is a more ambiguous problem: not all topics are represented visually, and not all visual elements are relevant to the topic of the infographic (Sec. <ref>). Textual features help to disambiguate the mapping between visual regions and topics. Importantly, the text we use for prediction is extracted from within the image using optical character recognition.§.§ Text to labels Given an infographic encoded as a bitmap as input, we detected and extracted (i.e., optical character recognition) the text, and then used the text to predict labels for the whole infographic. These labels come in two forms: either a single category per infographic (1 of 26), or multiple tags per infographic (out of a possible 391 tags).Automatic text extraction: We used the stand-alone text spotting system of Gupta et al. <cit.> to discover text regions in our infographics. We automatically cleaned the text using spell checking and dictionary constraints in addition to the ones already in <cit.> to further improve results. On average, we extracted 95 words per infographic (capturing the title, paragraphs, annotations, and other text).Feature learning with text: For each extracted word, we computed a 300-dimensional word2vec representation <cit.>. The mean word2vec of the bag of extracted words was used as the distributed representation for the extracted text of the whole image (the global feature vector of the text).This mean word2vec representation was fed into two single-hidden-layer neural networks for predicting the category and tags of each infographic.Category prediction was set up as a multi-class problem, where each infographic belongs to 1 of 26 categories. Tag prediction was set up as a multi-label problem with 391 tags, where each infographic could have multiple tags (Table <ref>).The network architecture is the same for both tasks and is depicted in the red box in Fig. <ref>, where the label is either one category or multiple tags. We used 26K labeled infographics for training and the rest for testing.§.§ Patches to labelsSeparately from the text, we trained a deep neural network model to learn an association between just the visual features and category and tag labels.Working with large images: Since we have categories and tags for all the images in the training data, a first attempt might be to directly learn to predict the category or tag from the whole image. However, the infographics are large images often measuring beyond 1000x1000 pixels. Resizing the images reduces the resolution of visual elements which might not be perceivable at small scales. In particular, relative to the full size of the infographic, many of the pictographs take up very little real-estate but could otherwise contribute to the label prediction. A fully convolutional approach with a batch of such large images was infeasible in terms of memory use.Our approach was to use a bag of sampled patches to represent the image. To sample the patches, we tried both random crops and object proposals from Alexe et al. <cit.>. 
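As a rough illustration of how such a bag can be built, the following sketch samples square crops from a large infographic and resizes them for the network. The function and parameter names are ours, and the 10-40% side-length rule mirrors the multi-scale crops described later for the activation maps; the exact training-time sampler may differ.

```python
import random
from PIL import Image

def sample_patch_bag(image_path, bag_size=5, out_side=224,
                     min_frac=0.10, max_frac=0.40, seed=None):
    """Sample a bag of random square crops from a large infographic.

    Side lengths are drawn as a fraction of the smaller image dimension,
    so small pictographs are not destroyed by resizing the whole image.
    """
    rng = random.Random(seed)
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    bag = []
    for _ in range(bag_size):
        side = max(1, int(rng.uniform(min_frac, max_frac) * min(w, h)))
        x = rng.randint(0, max(w - side, 0))
        y = rng.randint(0, max(h - side, 0))
        crop = img.crop((x, y, x + side, y + side))
        bag.append(crop.resize((out_side, out_side), Image.BILINEAR))
    return bag
```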
Multiple instance learning (MIL) prediction:Given a category or tag label, we expect that specific parts of the infographic may be particularly revealing of that label, even though the whole infographic may contain many diverse visual elements.In MIL, the idea is that we may have a bag of samples (in this case, image patches) to which a label corresponds. The only constraint is that at least one of the samples correspond to the label; the other samples may or may not be relevant.We used the deep MIL formulation from Wu et al. <cit.> for learning deep visual representations. We passed each sampled patch from an infographic through the same convolutional neural network architecture, and aggregated the hidden representations to predict a label for the whole bag of patches (depicted in the blue box in Fig. <ref>). For aggregating the representations, we tried both element-wise mean and max at the last hidden layer before the softmax transformation, but found mean worked better. As with the text model, we trained separate models for multi-class category prediction and multi-label tag prediction. Feature learning with patches: We sampled 5 patches from each infographic and resized each to 224x224 pixels for input into our convolutional neural network. For feature learning, we used ResNet-50 <cit.>, a residual neural network architecture with 50 layers, initialized by pretraining on ImageNet <cit.>. We retrained all layers of this network on 26K infographics with ground truth labels. §.§ Labels to visual hashtags The text in an infographic is often the strongest predictor of the topic matter, achieving significantly better accuracies at predicting the category and tags of infographics than the visual features alone (Sec. <ref>). Driven by these results, we make our initial label (category and tag) predictions using the text features. The predictions in turn constrain the visual network to produce activations for the target label. At inference time, we sample 3500 random crops per infographic and compute the confidence, under the visual classifier, of the target label. We assign this confidence score to all the pixels within the patch, and aggregate per-pixel scores for the whole infographic. After normalizing these values by the number of sampled patches each pixel occurred in, we obtain a heatmap of activations for the target label. We use this activation map both to visualize the most highly activated regions in an infographic for a given label, and to extract visual hashtags from these regions.For automatically extracting visual hashtags, we threshold the activation heatmap for each predicted tag, and identify connected components as proposals for regions potentially containing visual hashtags. These are cropped and passed to the SharpMask segmentation network <cit.>. Finally, visual hashtags corresponding to the predicted textual tags for an input infographic are obtained by cropping tight bounding boxes around SharpMask's proposals from the original images (Fig. <ref>).§.§ Technical details Text model: For category prediction, the mean word2vec representation of an infographic was fed through a 300-dimensional fully-connected linear layer, followed by a ReLu, and a 27-dimensional (including a background class) fully-connected output layer.The feature vectors of all 29K training images fit in memory and could be trained in a single batch, with a softmax cross-entropy loss. For tag prediction, the output layer was 391-dimensional and was passed through a sigmoid layer. 
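A minimal PyTorch sketch of the two text-prediction heads just described is given below; the layer sizes follow the text, while the module and variable names are our own illustration rather than the authors' code.

```python
import torch
import torch.nn as nn

class TextTagger(nn.Module):
    """Single-hidden-layer classifier over the mean word2vec of the extracted text."""
    def __init__(self, n_out, multilabel=False, d_in=300, d_hidden=300):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, n_out))
        self.multilabel = multilabel

    def forward(self, x):
        logits = self.net(x)
        # Tag prediction uses per-tag sigmoids; category prediction returns
        # logits to be fed to a softmax cross-entropy loss.
        return torch.sigmoid(logits) if self.multilabel else logits

category_net = TextTagger(n_out=27)               # 26 categories + background class
tag_net = TextTagger(n_out=391, multilabel=True)  # multi-label tag prediction

features = torch.randn(8, 300)    # a batch of mean-word2vec features (illustrative)
category_logits = category_net(features)
tag_probs = tag_net(features)
```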
Given the multi-label setting, the tag network was trained with binary cross-entropy (BCE) loss and one-hot encoded target vectors. Both networks were trained for 20K iterations with a learning rate of 1e-3. Visual model: We used bags of 5 patches for aggregating visual information from infographics. We tried bags of random patches and bags of objectness proposals <cit.>. Rather than the raw objectness proposals with varied aspect ratios, we took a tight-fitting square patch around each objectness proposal. We found this improved results. As in the text model, we trained category classification with a softmax cross-entropy loss with 27-dimensional target vectors, and tag prediction with a BCE loss applied to 391-dimensional sigmoid outputs. We used a momentum of 0.9 and weight decay of 1e-4. Our learning rate was initialized at 1e-2. For category prediction, we updated the learning rate every epoch, and stopped training after 5 epochs. For tag prediction, we updated the learning rate every 50 epochs for 500 epochs. Tags were more specific and also much more unbalanced than category labels, so the model needed to train for significantly longer to see enough patch samples for different tags. Activation maps: To discover maximally activated image regions for a given label, 3500 multi-scale crops were used. To generate each crop, we sampled a random coordinate value for the top left corner of the crop, and a side length equal to 10-40% of the minimum image dimension. § RESULTS We evaluate the ability of our full system to (1) predict category and tag labels for infographics and (2) extract visual hashtags from images: visual regions or icons relevant to the visualization topic. Predicting the category is a high-level prediction task about the overall topic of the infographic. Predicting the multiple tags for an infographic is a finer-grained task of discovering sub-topics. We solve both tasks, and present results of our text and visual models. Given the text model's tag predictions, the visual model that learned to associate visual concepts with tags is used to find the relevant visual areas and to extract visual hashtag proposals (Fig. <ref>). To evaluate these proposals, we collected human ground truth. For a total of 650 image-tag pairs, participants boxed image regions corresponding to the provided tag (Fig. <ref>). We compare our model's visual hashtag proposals to these ground truth bounding boxes. §.§ Human upper bounds We designed two user interfaces to separately collect (1) text hashtags and (2) visual hashtags. The motivation is twofold: (1) To externally validate the designer-assigned categories and tags from the Visually dataset to see if they match how average users would hashtag these same images in a free-form setting; (2) To evaluate our visual hashtags, for which no ground truth data is available. We ran our crowdsourcing studies on a subset of 330 images out of our test set, manually curated to suit the size and format constraints of our user interfaces.
We also made sure transcripts were available for these images for our analyses.Text hashtags: We designed a user interface to allow participants to both see a visualization all-at-once (resized), and to scroll over to explore any regions in detail using a zoom lens (Fig. <ref>a). Participants were instructed to provide “5 hashtags describing the image." We collected a total of 3940 tags for the 330 images from 82 Amazon Mechanical Turk workers (an average of 13.3 tags per image, or 2-3 participants/image). Visual hashtags: The Visually data comes with image-level categories and tags. Because a goal of this paper is to discover individual elements within infographics that correspond to the different labels, we wanted to measure how humans complete this task. We designed an interface in which participants are given an infographic and a target tag, and are asked to mark bounding boxes around all non text-regions (e.g., pictographs) that contain a depiction of the tag (Fig. <ref>b). We used the designer-assigned tags from the Visually dataset. If an image had multiple tags, it would be shown multiple times but to different users, with unique image-tag pairings.We collected a total of 3655 bounding boxes for the 330 images from 43 undergraduate students. Each image was seen by an average of 3 participants and we obtained an average of 4 bounding boxes per image. §.§ BaselinesText baselines: Spandan TO-DO Visual baselines: Zoya TO-DO§.§ Category prediction Evaluation: For each infographic, we measured the accuracy of predicting the correct ground truth category out of 26, within the top 1, 3, and 5 most confident predictions. Quantitative results: Chance level for our distribution of infographics across categories was 15.4%. We achieved 46% top-1 accuracy at predicting the category using our text model (Table <ref>). The purely vision-driven predictions are provided as a comparison point, although the final label predictions are performed using the text features. The text tends to contain a lot more information, while not all concepts can be communicated visually. The best performing visual model used a bag of random patches in a MIL framework (as in Fig. <ref>). Mean aggregation outperformed max aggregation for category prediction (Vis-rand-mean better than Vis-rand-max). Random crops outperformed objectness proposals (Vis-rand-mean better than Vis-obj-mean). We hypothesize this to be the case because each time we sampled random crops from an image, our model was exposed to new visual regions, whereas the number of objectness proposals was a limited sample of patches from an image. In other words, our model received more diverse training data in the random crops case.The patch-based predictions were similar to, or better than, the full visualization resized (Vis-resized). A patch-based approach is naturally better suited for sampling regions for visual hashtag extraction. We also tried to combine text and visual features directly during training but did not achieve gains in performance above the text model alone, indicating that it is a sufficiently rich source of information in most cases. Top activations per category:To validate that our visual network trained to predict categories learned meaningful features, we visualize the top patches that received the highest confidence under a few different categories (Fig. 
<ref>).These patches were obtained by sampling 100 random patches from each image, storing the single patch that maximally activated for each category per image, and outputting the top patches across all images. §.§ Tag prediction Evaluation: Each infographic in our 29K dataset comes with an average of 1-9 tags. At prediction time, we generate 1, 3, and 5 tags, and measure precision and recall of these predicted tags at capturing all ground truth tags for an image, for a variable number of ground truth tags.Quantitative results: We achieved 48.2% top-1 average precision at predicting at least one of the tags for each of our infographics, since all the infographics in our dataset contain an average of 2 tags (Table <ref>). Since tags are finer-grained than category labels, it is often the case that some word in the infographic itself maps directly to a tag. Using this insight, we add a simple automatic check: if any of the extracted words exactly match any of the 391 tags, we snap the prediction to the matching tags (Word2Vec-snap). Without this additional step, predicting top-1 tag achieves an average prediction of 30.1% using text features. Text modeling baselines: We computed several other representations of the extracted text (Table <ref>).We used a voting scheme (Word2Vec-voting) by voting for the closest text tag, in word2vec embedding space, for each word in the extracted text, and predicting the top-voted tags.We also computed the Tweet2Vec <cit.> representation of the extracted text, as well as the mean of the Glove representations <cit.> of all the words (Glove-mean).Using the mean word2vec as the text features (Word2Vec-mean) gave the best results for tag prediction.Text can disambiguate visual predictions: In some infographics, visual cues for particular tags or topics may be missing (e.g., for abstract concepts), they may be misleading (as visual metaphors), or they may be too numerous (in which case the most representative must be chosen). In these cases, label predictions driven by text are key, as in Fig. <ref>a, where visual features might seem to indicate that the infographic is about icebergs, or ocean, or travel; in this case, however, iceberg is used as a metaphor to discuss microblogging and social media. Our text model is able to pick up on this, and direct the visual features to activate in the relevant regions. §.§ Visual hashtag proposals Collecting ground truth: The Visually data comes with image-level categories and tags. Because a goal of this paper is to discover visual hashtags - individual elements within infographics that correspond to the different labels - we wanted to measure how humans complete this task. We designed an interface in which participants are given an infographic and a text tag, and are asked to mark bounding boxes around all non text-regions (e.g., pictographs) that contain a depiction of the tag (Fig. <ref>).If an image had multiple tags, it would be shown multiple times but to different users, with unique image-tag pairings.We collected a total of 3655 bounding boxes (ground truth visual hashtags) for the 330 images from 43 undergraduate students. 
Each image was seen by an average of 3 participants and we obtained an average of 4 boxes per image. § USER STUDIES §.§ Text hashtags Analysis: We compared the collected tags to the existing Visually ground truth tags by measuring how many of the ground truth tags were captured by human participants. Aggregating all participant tags per image (2-3 participants per image), we found an average precision of 37% at reproducing the ground truth tags. After accounting for similar word roots (e.g., gun matches handgun), average precision is 51%. This shows that even without a fixed list of tags to choose from (the 391 in our dataset), online participants converge on tags similar to the designer-assigned ones. In other words, the tags in this dataset are reproducible and generalizable, and different people find similar words representative of an infographic. Furthermore, of all the hashtags generated by our participants, on average 37% of them are verbatim words from the transcripts of the infographics. This is additional justification for text within infographics being highly predictive of the tags assigned to it. Evaluation: On average, infographics had 2 ground truth tags, with a total of 650 unique image-tag pairs for which participants annotated visual hashtags. Of these 650 pairings, participants indicated that 119 (18%) did not have corresponding visual features. In these cases, the hashtag had no visual counterpart and could perhaps only be inferred from the text of the infographic. We evaluated the remaining 531 image-tag pairs with participant annotations (ground truth hashtags). We fed each of these image-tag pairs to our pipeline to obtain predicted visual hashtags (Sec. <ref>) and computed the intersection-over-union (IOU) of each of our predicted hashtags with the participant annotations. We report only the single highest-confidence prediction for each image-tag pair (Table <ref>). The confidence of our proposals is measured as the mean activation value of our visual model within the hashtag bounding box. See Fig. <ref> for examples of our predicted hashtags overlaid with participant annotations. Our pipeline was constructed for high precision as opposed to high recall, because our goal is to produce a reasonable visual hashtag for an image-tag pair, rather than all possible hashtags. Therefore, our evaluation measures the percent of predictions that overlap with at least one of the human annotations. We report precision as the percent of predicted hashtags that have an IOU > 0.5 with at least one ground truth hashtag (in an image-tag pair). This threshold was chosen because it is most commonly used in the object detection literature <cit.>. To contrast with precision, we also report the total percent of image-tag pairs for which a successful proposal with IOU > 0.5 was generated (Acc.).
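For reference, the box-level IOU criterion used in this evaluation can be computed as in the sketch below (a standard formulation; the box format and function names are ours, not taken from the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def hit(predicted_box, ground_truth_boxes, thresh=0.5):
    """True if the prediction overlaps any annotated hashtag with IOU above the threshold."""
    return any(iou(predicted_box, gt) > thresh for gt in ground_truth_boxes)
```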
In this case, for any image-tag pair for which a proposal was not generated, the IOU is set to 0. Object proposal baselines: Our average precision of 15.2% and accuracy of 9.4% beat other approaches on the task of outputting a visual hashtag proposal for a given image-tag pair (Table <ref>). We took the highest-confidence object proposals from Alexe et al. <cit.> (Objectness) and Pinheiro et al. <cit.> (SharpMask). We also used a top-performing neural network model of saliency <cit.> (SalNet) in place of our visual model's activation map, and ran it through the same post-processing pipeline as outlined in Sec. <ref> to obtain visual hashtag proposals. The benefit of our activation map over saliency is that saliency is tag-agnostic and will always output the same map for an image. Our visual model is conditioned on a particular tag label and activates in regions of a design that are most predictive of the label. For a comparison to another weakly-supervised approach, we adjust our network to have an average pooling layer at the end, as in CAM <cit.>. As a chance baseline we report the performance of random crops (Random). Increasing accuracy: When we take into account all image-tag pairs, the average percent of instances for which the predicted hashtag overlaps the ground truth with an IOU > 0.5 drops to 9.4% (from a precision of 15.2%). Our approach fails to output proposals for 38% of the image-tag pairs. Most of the filtering happens at the SharpMask stage, where region proposals from the visual activation map are passed to SharpMask for refinement. If SharpMask does not find an object candidate in an image region, that region is discarded. As a stand-alone method, SharpMask fails to output proposals for 34% of image-tag pairs. SharpMask is also used as a post-processing step for the SalNet model. In comparison, Objectness generates a candidate for all images. We can increase the percent of proposals returned by adding a fallback option to our method (Ours-fallback): even if SharpMask discards all candidates, return the most confident candidate. This allows us to guarantee proposals for all images, at the cost of lower precision. For each image-tag pair, we masked out all the image regions annotated by participants, producing a binary mask. We then measured the mean activation value predicted by our visual model in this binary mask. Specifically, we normalized the visual activation heatmap to be zero-mean and computed the average normalized heatmap value in the region annotated by participants. For 65% of the tags, our visual model activated above chance in the human-annotated region. We examined which tags our visual model best localized based on the mean activation value of those visual regions across images. The best localized tags include home, automotive, food, weather, new york, and among the worst localized tags are more abstract tags such as savings, wellness, investment, happiness, medium, stress. More details are in the Supplement.
Model comparison (precision and recall at top-1, top-3 and top-5):
Model          Metric   Top-1   Top-3   Top-5
Vis-rand-mean  prec     12.2%    8.4%    6.9%
Vis-rand-mean  rec       6.7%   13.1%   17.8%
Vis-rand-max   prec     12.2%    8.4%    6.5%
Vis-rand-max   rec       6.8%   13.0%   16.8%
Vis-resized    prec     12.1%    8.2%    6.8%
Vis-resized    rec       6.5%   13.1%   17.8%
Vis-obj-mean   prec     11.4%    8.1%    6.6%
Vis-obj-mean   rec       6.4%   12.6%   17.0%
Vis-obj-max    prec     11.1%    8.1%    6.4%
Vis-obj-max    rec       6.1%   12.5%   16.4%
Chance         prec      8.7%    6.4%    5.5%
Chance         rec       5.1%   10.3%   14.3%
§.§ Visual features
§.§.§ Category prediction
* Ferrari objectness [max MIL setting]:
  * MIL 5 patches, batches of 20: 24-25% (60-61%) (similar results with a batch of 10; same when sampling from NMS top 20)
  * MIL 3 patches, batches of 33: 23% (58-59%)
  * (no MIL) 1 patch, batches of 50: 20-21% (56-57%)
* Ferrari objectness [mean MIL setting]:
  * MIL 5 patches, batches of 20: 24-25% (62-63%)
  * MIL 3 patches, batches of 33: 24% (60-61%)
* Random crops [max MIL setting]:
  * MIL 5 patches, batches of 20: 25-26% (62%)
  * MIL 3 patches, batches of 33: 25% (61%)
  * (no MIL) 1 patch, batches of 50: 23-24% (59-60%)
* Random crops [mean MIL setting]:
  * MIL 5 patches, batches of 20: 27-29% (63-64%)
  * MIL 3 patches, batches of 33: 25-26% (62-63%)
* DeepMask objectness [max MIL setting]:
  * MIL 5 patches, batches of 20: 21% (56%)
§.§.§ More sampling
* Ferrari Max MIL 5 Patches: 27.30% (60.06%)
* Ferrari Mean MIL 5 Patches: 27.51% (61.24%)
* Random Max MIL 5 Patches: 27.37% (63.10%)
* Random Mean MIL 5 Patches: 30.27% (63.79%)
* Ferrari Max MIL 5 Patches on Random Crops: 23.40% (57.34%)
* Ferrari Mean MIL 5 Patches on Random Crops: 25.30% (60.86%)
* Ferrari Max MIL 5 Patches on Text Ranked NMS: 19.95% (54.44%)
§.§.§ Tag prediction
§.§ Textual features
§.§.§ Tag prediction without direct match between text and tags
* Top 1 predicted tag: P, R, F1 = 0.277, 0.160, 0.191
* Top 3 predicted tags: P, R, F1 = 0.181, 0.295, 0.210
* Top 5 predicted tags: P, R, F1 = 0.140, 0.372, 0.193
* Top 7 predicted tags: P, R, F1 = 0.114, 0.418, 0.170
§.§.§ Tag prediction with variable length prediction (Adria's method)
* Top 1: 23.2% precision
§.§.§ Tag prediction with direct match between text and tags
* Top 1 predicted tag: P, R, F1 = 0.452, 0.420, 0.411
* Top 3 predicted tags: P, R, F1 = 0.263, 0.490, 0.324
* Top 5 predicted tags: P, R, F1 = 0.189, 0.535, 0.266
* Top 7 predicted tags: P, R, F1 = 0.149, 0.568, 0.226
§ CONCLUSION
To this point, the space of complex visual information beyond natural images has received limited attention in computer vision in the domain of classification and detection (notable exceptions include: <cit.>). We present a novel direction based on a dataset of infographics, containing rich visual media, with a mix of visual and textual features. In this paper, we showed how textual and visual elements can be used to jointly reason about the high-level topics (categories) of infographics, as well as the finer-grained sub-topics (tags). We demonstrated the power of text features in disambiguating and providing context for visual features. We presented a system whereby, aside from predicting text labels, we can automatically extract iconic representative elements, what we call "visual hashtags". Despite never being trained to explicitly recognize objects in images, our model is able to localize a subset of the ground truth (human-annotated) visual hashtags. Infographics are specifically designed with a human viewer in mind, characterized by higher-level semantics, such as a story or a message.
Beyond simply detecting the objects contained within them, an understanding of these infographics would involve the parsing and understanding of the included text, the layout and spatial relationships between the elements, and the intent of the designer. Human designers are experts at piecing together elements that are cognitively salient (or memorable) and maximize the utility of information. This new space of multimedia data gives computer vision researchers the opportunity to model and understand the higher-level properties of the textual and visual elements of the story being told.
http://arxiv.org/abs/1709.09215v1
{ "authors": [ "Zoya Bylinskii", "Sami Alsheikh", "Spandan Madan", "Adria Recasens", "Kimberli Zhong", "Hanspeter Pfister", "Fredo Durand", "Aude Oliva" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170926184528", "title": "Understanding Infographics through Textual and Visual Tag Prediction" }
[Eletronic address: ][email protected][Eletronic address: ][email protected][Eletronic address: ][email protected] ^1Instituto de Física, Universidade Federal do Rio de Janeiro, 21.941-972 - Rio de Janeiro-RJ - Brazil^2Departamento de Física, Colégio Pedro II, 20.921-903 - Rio de Janeiro-RJ - Brazil Using two different models from holographic quantum chromodynamics (QCD) we study the deconfinement phase transition in 2+1 dimensions in the presence of a magnetic field. Working in 2+1 dimensions lead us to exact solutions on the magnetic field, in contrast with the case of 3+1 dimensions where the solutions on the magnetic field are perturbative. As our main result we predict a critical magnetic field B_c where the deconfinement critical temperature vanishes. For weak fields meaning B<B_c we find that the critical temperature decreases with increasing magnetic field indicating an inverse magnetic catalysis (IMC). On the other hand, for strong magnetic fields B>B_c we find that the critical temperature raises with growing field showing a magnetic catalysis (MC).These results for IMC and MC are in agreement with the literature. Deconfinement phase transition in a magnetic field in 2+1 dimensions from holographic models Henrique Boschi-Filho^1, today ==============================================================================================§ INTRODUCTIONThe deconfinement phase transition in quantum chromodynamics (QCD) still remains an open and intriguing problem, since the standard perturbative methoddoes not work due to the strong coupling regime at low energies. The usualapproach to deal with this non-perturbative issue is lattice QCD where one finds a critical temperature T_c which characterises thedeconfinement phase transition. In particular, the presence of a magnetic field B modifies this scenario.It has been shown recently <cit.> that weak magnetic fields imply a decreasing T_c, an effect known as inverse magnetic catalysis (IMC). Furthermore, it is expected that for strong magnetic fields T_c increases with B, meaning a magnetic catalysis (MC). The MC/IMC studies are usually concerned with chiral symmetry breaking and/or deconfinement phase transition. Currently, many works have dealt with MC/IMC using a holographic approach based on the AdS/CFT correspondence. This correspondence or duality makes it possible to relate strong coupling theory in flat Minkowski space with weak coupling supergravity in anti-de Sitter space (AdS) in a higher dimensional space <cit.>. Among these works we can mention <cit.> where they study the MC problem, while <cit.> discuss IMC effects in different holographic models. Note that in all of these 3+1 dimensional models the gravitational solutions on the magnetic field are perturbative. Here in this work we study deconfinement phase transitionin 2+1 dimensions in the presence of an external magnetic field B within two different holographic AdS/QCD models. We find the IMC and MC pictures for the deconfinement phase transition and obtain an intriguing critical magnetic field B_c for which thecritical temperature T_c vanishes. The advantage of working in 2+1 dimensions is that the system of equations are simpler than the 3+1 dimensional case, leading us to some exact solutions where we can obtain the IMC/MC transition at a critical value of B=B_c. In ref. <cit.> the case of 2+1 dimensions was studied for the case of the MC on the fermion condensate. 
The holographicmodels that we use are known as the hard <cit.> and soft wall<cit.>.Such models were successful in predicting the deconfinement phase transition and its critical temperature T_c in the absence of a magnetic field <cit.>.These holographic models appeared after the proposal of the AdS/CFT correspondence, which provides an approach to deal problemsout of theperturbative regime of QCD or other strongly interacting systems. This work is organized as follows: in section <ref> we review the Einstein-Maxwell Theory in 4 dimensions and the geometric set up in the presence of an external magnetic field. In section <ref> we describe the holographic models used and compute the corresponding on-shell actions for both thermal and black hole AdS spaces. Then, in section <ref>, we present our results for the deconfinement phase transition in the hard and soft wall models in the presence of an external magnetic field and obtain the critical magnetic field B_c. Finally, in section <ref> we present our last comments and conclusions. § EINSTEIN-MAXWELL THEORY IN 4 DIMENSIONSHere, we start with holographic models defined in AdS_4 such that the dual field theory in Euclidean space lives in 3 dimensions. The full gravitational background is the eleven-dimensional supergravity on AdS_4× S^7. The dual field theory is the low-energy theory living on N M2-branes on ℝ^1,2, with 𝒩=8 SU(N) Super-Yang-Mills theory in the large N limit <cit.>.Via Kaluza-Klein dimensional reduction, the supergravity theory on AdS_4× S^7 may be consistently truncated to Einstein-Maxwell Theory on AdS_4 <cit.>. The action for this theory, in Euclidean signature is given by S_Ren = -12κ^2_4∫ d^4x √(g)(R -2Λ - L^2F_MNF^MN) - 1κ^2_4∫ d^3x √(γ)(K + 4L).where κ^2_4 is the 4-dimensional coupling constant, which is proportional to the 4-dimensional Newton's constant ( κ^2_4≡8π G_4 ), R is the Ricci scalar and Λ is the negative cosmological constant which, for AdS_4, are given by R = -12/L^2, and Λ = -3/L^2,respectively. L is the radius of AdS_4 and F_MN is the Maxwell field. The second integral corresponds to the surface and counter-terms in whichγ is the determinant of the induced metric γ_μν on the boundary, and K = γ^μνK_μν is the trace of the extrinsic curvature K_μν which gives the Gibbons-Hawking surface term <cit.>. The last term is a counter-term needed to cancel the UV divergences (z→0) of the bulk action.The field equations coming from the bulk action (<ref>) are <cit.>R_MN =2L^2( F_M^PF_NP-14g_MNF^2)- 3L^2g_MN,together with the Bianchi identities ∇_MF^MN = 0.The ansatz for the metric to solve these equations is given byds^2= L^2z^2( f(z)dτ^2 + dz^2f(z) + dx^2_1 + dx^2_2), in Euclidean signature with a compact time direction, 0≤τ≤β, with β = 1/T, and f(z) is a function to be determined in the following.The background magnetic field is chosen such that F = Bdx_1∧ dx_2, which implies F^2 = 2B^2z^4/L^4. Note that the magnetic field remains finite at the AdS_4 boundary (z→0). To see this let's consider the vector potential, which is a 1-form A such that F = dA. So, A = B2(x_1dx_2-x_2dx_1).Thus, we can treat it as an external background magnetic field <cit.>.Using the ansatz (<ref>)the field equations (<ref>) are simplified and given byz^2f”(z)-4zf'(z)+6f(z)-2B^2z^4-6 = 0, zf'(z)-3f(z)-B^2z^4+3 = 0. The two exact solutions of (<ref>)that we found are given byf_Th(z)=1 + B^2z^4 f_BH(z)=1 + B^2z^3(z-z_H) - z^3z^3_H The first solution, f_Th(z), corresponds to the thermal AdS_4 with an external background magnetic field. 
The second solution, f_BH(z), corresponds to a black hole in AdS_4, also in the presence of a background magnetic field, where z_H is the horizon position, such that f_BH(z=z_H)=0. One can note that these two solutions indeed satisfy both differential equations (<ref>). This is in contrast with the 3+1 dimensional case, where only perturbative solutions in the magnetic field B are found.
§ ON-SHELL EUCLIDEAN ACTIONS
§.§ Hard wall
The hard wall model <cit.> consists of introducing a hard cut-off in the background geometry in order to break conformal invariance. The introduction of a cut-off z_max in this model implies that 0 ⩽ z ⩽ z_max, where z_max can be related to the mass scale of the boundary theory. For instance, in 4 dimensions z_max is usually related to the energy scale of QCD <cit.> by z_max ∼ 1/Λ_QCD. Moreover, we have to impose boundary conditions at z=z_max. In the hard wall model, the free energy for the thermal AdS_4, from the action (<ref>), is given by (see <cit.> for details): S_Th = (β'𝒱_2 L^2/κ^2_4)(-1/z_max^3 + B^2 z_max + 𝒪(ϵ)), where 𝒱_2 ≡ ∬ dx_1 dx_2, β' is the corresponding period, and ϵ is a UV regulator. On the other hand, for the black hole case one gets the free energy S_BH = (β𝒱_2 L^2/κ^2_4)(-1/(2z_H^3) + 3B^2 z_H/2 + 𝒪(ϵ)), where β is associated with the Hawking temperature. Now we have to compute the free energy difference, Δ S, defined by Δ S = lim_ϵ→0(S_BH - S_Th). Since we are comparing the two geometries at the same position z=ϵ→0, we can choose β' such that β' = β√(f(ϵ)) = β <cit.>, since f(ϵ) = 1 + 𝒪(ϵ^3) when ϵ→0, with f(z) given by the second equation in (<ref>). Therefore, with this choice, the free energy difference for the hard wall model is given by Δ S_HW = (β𝒱_2 L^2/κ^2_4)(1/z_max^3 - 1/(2z_H^3) + B^2(3z_H/2 - z_max)). For B = 0, this equation corresponds to the 3-dimensional version of <cit.>.
§.§ Soft wall
For the soft wall model <cit.> we consider the following 4-dimensional action: S_SW = -1/(2κ^2_4)∫ d^4x √(g) e^-Φ(z)(ℛ - 2Λ - L^2 F_μνF^μν) - 1/κ^2_4∫ d^3x √(γ)(K + 4/L - 3Φ/L), where Φ(z) = kz^2 is the dilaton field, which has a non-trivial expectation value. In this work we are assuming that the dilaton field does not backreact on the background geometry. Moreover, as in <cit.>, we assume that our metric ansatz (<ref>) satisfies the equations of motion for the full theory, with f(z) given by (<ref>) for both the thermal and black hole AdS_4 cases. One can note that we included one more term in the boundary action compared to (<ref>), due to the dilaton field in this soft wall model. The free energy for the thermal AdS_4 in the soft wall model is given by S_Th = (β'𝒱_2 L^2/κ^2_4)(√(π)(B^2 + 4k^2)/(2√(k)) + 𝒪(ϵ)). On the other hand, the free energy for the AdS_4 black hole in the soft wall model is S_BH = (β𝒱_2 L^2/κ^2_4)(1/(2z_H^3) + e^-kz_H^2(2kz_H^2-1)/z_H^3 + B^2 z_H/2 + √(π)(B^2+4k^2) erf(√(k)z_H)/(2√(k)) + 𝒪(ϵ)), where erf(z) is the error function. Therefore, taking into account the same argument which led to β' = β√(f(ϵ)) = β in the hard wall model, the free energy difference, Δ S, for the soft wall model is given by Δ S_SW = (β𝒱_2 L^2/κ^2_4)(1/(2z_H^3) + e^-kz_H^2(2kz_H^2-1)/z_H^3 + B^2 z_H/2 - √(π)(B^2+4k^2) erfc(√(k)z_H)/(2√(k))), where erfc(z) is the complementary error function, defined as erfc(z) = 1 - erf(z).
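As a quick consistency check on the two exact solutions quoted in the previous section, one can verify symbolically that both f_Th(z) and f_BH(z) satisfy the two field equations and that f_BH vanishes at the horizon. The short SymPy sketch below is our own illustration and not part of the original derivation.

```python
# SymPy check that f_Th and f_BH solve the two field equations quoted above.
import sympy as sp

z, B, zH = sp.symbols('z B z_H', positive=True)

f_Th = 1 + B**2 * z**4
f_BH = 1 + B**2 * z**3 * (z - zH) - z**3 / zH**3

def residuals(f):
    eq1 = z**2 * sp.diff(f, z, 2) - 4*z*sp.diff(f, z) + 6*f - 2*B**2*z**4 - 6
    eq2 = z * sp.diff(f, z) - 3*f - B**2*z**4 + 3
    return sp.simplify(eq1), sp.simplify(eq2)

print(residuals(f_Th))                  # (0, 0)
print(residuals(f_BH))                  # (0, 0)
print(sp.simplify(f_BH.subs(z, zH)))    # 0: horizon condition f_BH(z_H) = 0
```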
§ DECONFINEMENT PHASE TRANSITION
Following Hawking and Page <cit.> and Witten <cit.>, we study the deconfinement phase transition by imposing Δ S(z_Hc, B) = 0, where z_Hc is the critical horizon, from which we calculate the critical temperature through the formula T_c = |f'(z=z_Hc)|/(4π), where f(z) is the horizon function given by (<ref>). In the hard wall model with B=0, from (<ref>) we find that the deconfinement phase transition occurs at 2z_Hc^3 = z_max^3, resulting in the critical temperature T_c(B=0) ≈ 0.3/z_max, which is the analogue in (2+1) dimensions of <cit.>. In order to fix the cut-off z_max we use a Neumann boundary condition, which gives J_1/2(m z_max)=0, so that z_max = 3.141/m, where m is the lightest scalar glueball mass, m_0^++/√(σ) = 4.37 for SU(3) in (2+1) dimensions, and √(σ) is the string tension <cit.>. Then, one can compute z_max and the critical temperature, T_c, in units of the string tension for B=0: T_c(B=0)/√(σ) = 0.42 (hard wall). For the soft wall model, for B = 0, there is a phase transition when √(k) z_Hc = 0.60, which gives the critical temperature T_c(B=0) = 0.40√(k), consistent with the treatment presented in <cit.> for B=0 in one higher dimension. In order to fix the value of k we consider the soft wall model in 4 dimensions, so that we have m_n^2 = (4n + 6)k (see <cit.> for details). Using the mass of the lightest glueball in (2+1) dimensions from the lattice <cit.> and setting n=0, we can fix the dilaton constant k = 3.18 for SU(3), in units of the string tension squared. Therefore, the critical temperature, T_c(B=0), in units of the string tension, is given by T_c(B=0)/√(σ) = 0.71 (soft wall). On the other hand, for B≠0, from (<ref>) (hard wall) and (<ref>) (soft wall), the numerical results for the critical temperature as a function of the magnetic field, T_c(B), are shown in Figure <ref> for both models. One can see from this figure that we have a phase in which the critical temperature, T_c(B), decreases with increasing magnetic field B, indicating an inverse magnetic catalysis (IMC). Furthermore, we predict a phase in which the critical temperature, T_c(B), increases with increasing magnetic field B, indicating a magnetic catalysis (MC). The magnetic and inverse magnetic catalysis we have found for these models are separated by a critical magnetic field, B_c. The values of the critical magnetic field found in these models, in units of the string tension squared, are the following: eB_c/σ = 6.97 (hard wall); eB_c/σ = 13.6 (soft wall). In Figure <ref> we show the plot of the normalized critical temperature, T_c/T_c_0, as a function of B/B_c for both models, where T_c_0 ≡ T_c(B=0) and B_c is the critical magnetic field (<ref>) and (<ref>).
§ DISCUSSIONS
The IMC has been observed in lattice QCD <cit.> for eB ≲ 1 GeV^2. Since then, many holographic approaches have reproduced this behavior in both deconfinement and chiral phase transition contexts within this range of magnetic field, see for instance <cit.>. However, in many of these approaches the problem could only be solved perturbatively in B, while in our results in (2+1) dimensions there is no restriction on the values or range of the magnetic field. This is in contrast with the 3+1 dimensional case, where only perturbative solutions in the magnetic field B are found. Since we are working in (2+1) dimensions, physical quantities such as the critical temperature, T_c, magnetic field, B, and critical magnetic field, B_c, are not measured in GeV or MeV.
Instead we used the string tension√(σ) as the basic unit for our physical quantities, as is the case in lattice simulations <cit.>. In conclusion, we emphasize that the critical magnetic field found here is an unexpected result since in 3+1 dimensional QCD there is evidence that the deconfinement (and chiral) transition is a cross over <cit.>.Acknowledgments:We would like to thank Luiz F. Ferreira, Adriana Lizeth Vela, Renato Critelli, Rômulo Rogeumont, and Marco Moriconi for helpful discussions during the course of this work. We also thank Elvis do Amaral for the help with numerical solutions. We would also like to thank Michael Teper for useful correspondence. D.M.R is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), E.F.C. is partially supported by PROPGPEC-Colégio Pedro II, and H.B.-F. is partially supported by CNPq.ABCBali:2011qjG. S. Bali, F. Bruckmann, G. Endrodi, Z. Fodor, S. D. Katz, S. Krieg, A. Schafer and K. K. Szabo,JHEP 1202, 044 (2012)[arXiv:1111.4956 [hep-lat]].Maldacena:1997reJ. M. Maldacena,Int. J. Theor. Phys.38, 1113 (1999) [Adv. Theor. Math. Phys.2, 231 (1998)][hep-th/9711200].Gubser:1998bcS. S. Gubser, I. R. Klebanov and A. M. Polyakov,Phys. Lett. B 428, 105 (1998)[hep-th/9802109]. Witten:1998qj E. Witten,Adv. Theor. Math. Phys.2 (1998) 253 [hep-th/9802150]. Evans:2010xsN. Evans, T. Kalaydzhyan, K. y. Kim and I. Kirsch,JHEP 1101, 050 (2011) doi:10.1007/JHEP01(2011)050[arXiv:1011.2519 [hep-th]].Alam:2012fwM. S. Alam, V. S. Kaplunovsky and A. Kundu,JHEP 1204, 111 (2012) [arXiv:1202.3488 [hep-th]]. Filev:2010pmV. G. Filev and R. C. Raskov,Adv. High Energy Phys.2010, 473206 (2010) [arXiv:1010.0444 [hep-th]]. Callebaut:2011zzN. Callebaut, D. Dudal and H. Verschelde,Acta Phys. Polon. Supp.4, 671 (2011). Bolognesi:2011un S. Bolognesi and D. Tong,Class. Quant. Grav.29 (2012) 194003 [arXiv:1110.5902 [hep-th]]. Preis:2010cqF. Preis, A. Rebhan and A. Schmitt,JHEP 1103, 033 (2011) [arXiv:1012.4785 [hep-th]]. McInnes:2015kecB. McInnes,Nucl. Phys. B 906, 40 (2016) [arXiv:1511.05293 [hep-th]].Evans:2016jzoN. Evans, C. Miller and M. Scott,Phys. Rev. D 94, no. 7, 074034 (2016)[arXiv:1604.06307 [hep-ph]].Mamo:2015deaK. A. Mamo,JHEP 1505, 121 (2015)[arXiv:1501.03262 [hep-th]].Li:2016gfnD. Li, M. Huang, Y. Yang and P. H. Yuan,JHEP 1702, 030 (2017)[arXiv:1610.04618 [hep-th]].Dudal:2015wfnD. Dudal, D. R. Granado and T. G. Mertens,Phys. Rev. D 93, no. 12, 125004 (2016)[arXiv:1511.04042 [hep-th]]. Polchinski:2001tt J. Polchinski and M. J. Strassler,Phys. Rev. Lett.88, 031601 (2002) [arXiv:hep-th/0109174]. BoschiFilho:2002vd H. Boschi-Filho and N. R. F. Braga,JHEP 0305, 009 (2003) [arXiv:hep-th/0212207].Karch:2006pvA. Karch, E. Katz, D. T. Son and M. A. Stephanov,Phys. Rev. D 74, 015005 (2006)[hep-ph/0602229].Colangelo:2007ptP. Colangelo, F. De Fazio, F. Jugeau and S. Nicotri,Phys. Lett. B 652, 73 (2007)[hep-ph/0703316].Herzog:2006raC. P. Herzog,Phys. Rev. Lett.98, 091601 (2007)[hep-th/0608151].BallonBayona:2007vpC. A. Ballon Bayona, H. Boschi-Filho, N. R. F. Braga and L. A. Pando Zayas,Phys. Rev. D 77, 046002 (2008)[arXiv:0705.1529 [hep-th]].Aharony:1999tiO. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri and Y. Oz,Phys. Rept.323, 183 (2000)[hep-th/9905111].Herzog:2007ijC. P. Herzog, P. Kovtun, S. Sachdev and D. T. Son,Phys. Rev. D 75, 085020 (2007)[hep-th/0701036].Gibbons:1976ueG. W. Gibbons and S. W. Hawking,Phys. Rev. D 15, 2752 (1977).Hartnoll:2007aiS. A. Hartnoll and P. Kovtun,Phys. Rev. D 76, 066001 (2007)[arXiv:0704.1160 [hep-th]]. 
BoschiFilho:2005yh H. Boschi-Filho, N. R. F. Braga and H. L. Carrion,Phys. Rev. D 73, 047901 (2006) [hep-th/0507063]. Rodrigues:2016cdbD. M. Rodrigues, E. Folco Capossoli and H. Boschi-Filho,Phys. Rev. D 95, no. 7, 076011 (2017)[arXiv:1611.03820 [hep-th]]. DCB2017D. M. Rodrigues, E. Folco Capossoli and H. Boschi-Filho, arXiv:1710.07310 [hep-th].Hawking:1982dhS. W. Hawking and D. N. Page,Commun. Math. Phys.87, 577 (1983). Witten:1998zwE. Witten,Adv. Theor. Math. Phys.2, 505 (1998)[hep-th/9803131].Teper:1998teM. J. Teper,Phys. Rev. D 59, 014512 (1999)[hep-lat/9804008]. Athenodorou:2016ebgA. Athenodorou and M. Teper,JHEP 1702, 015 (2017)[arXiv:1609.03873 [hep-lat]].Meyer:2003wxH. B. Meyer and M. J. Teper,Nucl. Phys. B 668, 111 (2003) [hep-lat/0306019].
http://arxiv.org/abs/1709.09258v2
{ "authors": [ "Diego M. Rodrigues", "Eduardo Folco Capossoli", "Henrique Boschi-Filho" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20170926205259", "title": "Deconfinement phase transition in a magnetic field in 2+1 dimensions from holographic models" }
]Distinguishing short duration noise transients in LIGO data to improve the PyCBC search for gravitational waves from high mass binary black hole mergers. ^1 Max-Planck-Institut für Gravitationsphysik,Albert-Einstein-Institut, D-30167 Hannover, Germany"Blip glitches" are a type of short duration transient noise in LIGO data. The cause for the majority of these is currently unknown. Short duration transient noise creates challenges for searches of the highest mass binary black hole systems, as standard methods of applying signal consistency, which look for consistency in the accumulated signal-to-noise of the candidate event, are unable to distinguish many blip glitches from short duration gravitational-wave signals due to similarities in their time and frequency evolution. We demonstrate a straightforward method, employed during Advanced LIGO's second observing run, including the period of joint observation with the Virgo observatory, to separate the majority of this transient noise from potential gravitational-wave sources. This yields a ∼ 20% improvement in the detection rate of high mass binary black hole mergers (> 60 M_⊙) for the PyCBC analysis. [ Alexander H. Nitz^1, December 30, 2023 ==================================§ INTRODUCTIONAdvanced LIGO has only recently completed its second observing run (O2). Including the first observing run, multiple binary black hole mergers have been reported <cit.>. Continued observation provides the opportunity for additional detections, and to improve our understanding of the population of binary black holes <cit.>. Searches for gravitational waves from binary black hole mergers, and other compact object mergers, make use ofmatched filtering, which uses a waveform model <cit.> to extract signals from data. To search for a wide range of parameters, a set of waveform templates is chosen carefully so that any potential signal would have high overlap with at least one of the waveform templates, typically targeting no more than 10% loss in detection rate <cit.>. This paper focuses on an improvement to the PyCBC analysis <cit.> that was used to look for the gravitational waves from compact object mergers during the second observing run of Advanced LIGO[The modified ranking statistic described here was not introduced until after the analysis of GW170104]. The methods discussed in this paper can be extended for use with multiple gravitational-wave detectors, however, as Virgo data were not included in the initial matched-filtering based searches for compact binary mergers <cit.>, this paper focuses on the improvements gained in the analysis of LIGO data. The parameter space searched and the templates chosen for the O2 analysis are described in <cit.>.If the detector noise were Gaussian, matched filtering alone would be nearly sufficient to find signals within LIGO data, however, the data contains non-Gaussiannoise transients which produce large signal-to-noise ratios <cit.>. In addition to the matched filter, the PyCBC analysis employs a signal consistency test <cit.> as well as a consistency between the phase, amplitude, and time difference between the Hanford and Livingston observatories to rank gravitational wave candidates <cit.>. These candidates are compared to empirically measured background. 
The background is created by repeating the analysis after time shifting the data by an amount greater than the astrophysical time-of-flight <cit.>. Part of the construction of the full ranking statistic is the signal-consistency re-weighted signal-to-noise, ρ̃ <cit.>, which measures the loudness of a signal in a single detector. In this paper we introduce a new signal consistency test, which targets a type of noise transient that is not well suppressed by existing methods, known as "blips" <cit.>. The mechanism for most instances of this non-Gaussian noise source is not yet well understood. Many instances, however, contain excess power at higher frequencies than expected for the waveform templates that these glitches are mistaken for. We also modify the single detector loudness measure and demonstrate that we are able to achieve a ∼ 20% improvement in the detection rate of binary black holes with masses greater than 60-100 M_⊙.
§ RANKING CANDIDATES AND SIGNAL CONSISTENCY
The matched filter signal-to-noise ratio (SNR) is used in modeled searches for gravitational waves from compact binary mergers <cit.>. This has been shown to be optimal for extracting signals from Gaussian noise <cit.>. When implicitly maximizing over an unknown amplitude and phase of a potential signal, represented by a waveform template h, it can be defined as ρ^2 ≡ ⟨ s | h ⟩^2/⟨ h | h ⟩, where the inner product is ⟨ a|b⟩ = 4 ∫^∞_0 ã(f)b̃^*(f)/S_n(f) df, and s and S_n(f) are the strain data and the estimated one-sided power spectral density of the noise around the time of a candidate event, respectively. The detector noise, however, is neither Gaussian nor stationary. A canonical method used since initial LIGO to discriminate gravitational-wave signals from non-Gaussian transient noise, which can create large peaks in the SNR time series (triggers), has been to compare the morphology of a candidate signal to the expectation from the triggering template waveform, h <cit.>. We can construct this standard chi-squared <cit.> test by sub-dividing the template waveform into p non-overlapping frequency bins. Each bin is constructed so that it contributes equally to the SNR of a perfectly matching signal. The number of bins has been empirically tuned by comparing the distribution of single detector background triggers produced from engineering data leading up to LIGO's second observing run. Several formulas which determine the number of bins were tested, and the value that gives a trigger distribution with the smallest excess at large statistic values was selected <cit.>. We construct the chi-squared test as χ^2_r = p/(2p-2)∑_i=1^p (⟨ s|h_i ⟩ - ⟨ s|h ⟩/p)^2/⟨ h|h ⟩. If the data, s, is adequately described by Gaussian noise with an embedded signal that is well described by the waveform template h, this will follow a reduced χ^2 distribution with 2p - 2 degrees of freedom <cit.>. Many classes of non-Gaussian noise have been demonstrated to take larger values <cit.>, creating separation between signals and noise artifacts. There are a number of different techniques for combining the χ^2 test with the signal-to-noise ratio to produce a ranking statistic <cit.>. In this paper we will focus on a modification to the re-weighted SNR, introduced in <cit.>, which is a key component of the current ranking statistic used by the PyCBC analysis <cit.>.
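Before turning to the re-weighted SNR, the following simplified sketch illustrates how such a frequency-binned χ^2_r could be evaluated for a single template at the time of a candidate. This is our own illustration using one-sided frequency-domain arrays, not the PyCBC implementation, which works on full SNR time series and uses its own conventions.

```python
# Illustrative sketch of a frequency-binned chi-squared; not the PyCBC code.
import numpy as np

def reduced_power_chisq(stilde, htilde, psd, df, p=16):
    """stilde, htilde: one-sided frequency-domain data and template;
    psd: one-sided noise PSD S_n(f); df: frequency resolution; p: bins."""
    weight = 4.0 * df * np.abs(htilde) ** 2 / psd
    hh = weight.sum()                                   # <h|h>
    edges = np.searchsorted(np.cumsum(weight) / hh,
                            np.arange(1, p) / p)        # equal-<h|h> bin edges
    integrand = 4.0 * df * stilde * np.conj(htilde) / psd
    rho_i = np.add.reduceat(integrand, np.r_[0, edges]) / np.sqrt(hh)
    rho = rho_i.sum()                                   # complex matched-filter SNR
    chisq = p * np.sum(np.abs(rho_i - rho / p) ** 2)
    return chisq / (2.0 * p - 2.0)   # ~1 for Gaussian noise plus a matching signal
```

For a well-matching signal in Gaussian noise each bin carries close to 1/p of the total SNR, so the residuals are small; glitches that pile power into a few bins drive χ^2_r well above unity.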
The re-weighted SNR, ρ̃, is given as ρ̃ = ρ for χ^2_r ≤ 1, and ρ̃ = ρ[(1 + (χ^2_r)^3)/2]^-1/6 for χ^2_r > 1. While ρ̃ has been shown to successfully remove most non-Gaussian transients from searches for low mass compact binary mergers (M_total < 25 M_⊙), searches for higher mass mergers, where the observable portion of the signal is typically a fraction of a second in duration, have backgrounds more polluted by short duration transient noise <cit.>. Fig. <ref> shows an example short duration noise transient that is able to fool the χ_r^2 discriminator with χ^2_r = 1.1, and so produces a trigger in the Hanford detector with high ρ̃. In the next section we construct a new signal consistency test to help distinguish this type of noise transient from a short duration gravitational-wave signal.
§ HIGH FREQUENCY SINE-GAUSSIAN Χ^2 DISCRIMINATOR
As Fig. <ref> demonstrates, some glitches which are not distinguishable from short duration gravitational-wave signals using the ρ̃ statistic have excess power at higher frequencies than the best matching gravitational-wave model would actually contain. The aim is to distinguish this class of glitch by measuring this excess power. This is done by placing a series of sine-Gaussian tiles at the time of the candidate event, typically defined by the peak amplitude of the template waveform. The tiles are placed at frequencies where we do not expect a true gravitational-wave signal to contain power. A new signal discriminator, χ^2_r,sg, can be written as the sum of the squared signal-to-noise ratios of the individual sine-Gaussian tiles, χ^2_r,sg ≡ 1/(2N)∑^N_i ρ_i^2 = 1/(2N)∑^N_i ⟨ s|g̃_i(f, f_0, t_0, Q)⟩^2, where the expectation value of this reduced form is one, and it will follow a reduced χ^2 distribution with 2N degrees of freedom when the data contain stationary, Gaussian noise at the location of each tile. Each sine-Gaussian tile, g_i, can be defined in the time domain as g ≡ A exp(-4π f_0^2(t-t_0)^2/Q^2)cos(2π f_0 t + ϕ_0), where f_0 and t_0 are the central frequency and time of the sine-Gaussian, respectively, Q is the quality factor, A is the amplitude, and ϕ_0 is the phase. The phase and amplitude are implicitly maximized over. As an overall amplitude factor of a template does not affect the SNR as defined in <ref>, we choose to set the amplitude A to one. It has been shown that some glitches in LIGO data can be approximately modeled as sine-Gaussians, and the effect on matched filtering has been studied <cit.>. The starting frequency of the first tile is determined by examining the expected contribution of power a signal would produce in different frequency bands. The current configuration places tiles from 30-120 Hz above the final frequency of a given template waveform, spaced in intervals of 15 Hz. A constant Q value of 20 has been selected. The frequency spacing and Q of the tiles were tested on a short subset of data, and the current values were empirically chosen from a limited number of hand-selected variations. It may be possible to achieve a more optimal placement and choice of Q for each tile. This new discriminator, χ^2_r,sg, is then combined with the re-weighted SNR described earlier to generate a new single detector test statistic, ρ̃_sg, which can be expressed as ρ̃_sg = ρ̃ for χ^2_r,sg ≤ 4, and ρ̃_sg = ρ̃ (χ^2_r,sg/4)^-1/2 for χ^2_r,sg > 4. The re-weighted SNR, ρ̃, has the property that for candidates which are well approximated by one of our templates we recover the standard matched filter signal-to-noise ratio.
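The two piecewise definitions above are straightforward to state in code. The sketch below is our own illustration; the function names are ours, and the full PyCBC two-detector ranking statistic contains additional terms beyond this single-detector loudness.

```python
# Sketch of the single-detector re-weightings described above; illustration only.
import numpy as np

def reweighted_snr(rho, chisq_r):
    """rho-tilde: down-weight the SNR when the reduced power chi^2 exceeds 1."""
    rho, chisq_r = np.asarray(rho, float), np.asarray(chisq_r, float)
    factor = ((1.0 + chisq_r ** 3) / 2.0) ** (-1.0 / 6.0)
    return np.where(chisq_r <= 1.0, rho, rho * factor)

def reweighted_snr_sg(rho, chisq_r, chisq_sg):
    """rho-tilde_sg: additionally down-weight when the sine-Gaussian chi^2 exceeds 4."""
    base = reweighted_snr(rho, chisq_r)
    chisq_sg = np.asarray(chisq_sg, float)
    return np.where(chisq_sg <= 4.0, base, base * (chisq_sg / 4.0) ** -0.5)
```

For example, a loud glitch with ρ = 20 and χ^2_r,sg = 100 is suppressed to ρ̃_sg = ρ̃/5, while a well-matching signal with χ^2_r ≤ 1 and χ^2_r,sg ≤ 4 retains its full matched-filter SNR.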
The same property, reduction to the standard matched-filter SNR for candidates that match a template well, is preserved for our new single detector loudness ρ̃_sg. This ansatz was chosen to allow for the expected variation of χ^2_sg in Gaussian noise. For values of χ^2_sg less than 4, we recover exactly the standard re-weighted SNR, ρ̃. Using the formula in Eq. <ref> allows for some variation of the signal from our template waveforms, which may result in signal power spilling into the time-frequency region tested by the tiles. The effect of this new single detector loudness metric, ρ̃_sg, is demonstrated in Fig. <ref>, where we plot the distribution of background triggers from the Hanford data for the period from Feb 3rd to Feb 12th, 2017. We see that χ^2_r,sg provides additional information not encapsulated in ρ̃. We have also overlaid a population of simulated signals, which is described in the next section. For most signals, the two loudness statistics, ρ̃ and ρ̃_sg, take the same value. Fig. <ref> shows the cumulative number of background triggers in the Hanford and Livingston detectors above each statistic. In Figs. <ref>-<ref> we have restricted to showing the background from waveform templates with total mass greater than 40 solar masses. At statistic values between 7 and 10, we find an order of magnitude decrease in the number of background triggers using ρ̃_sg, and a factor of 3 decrease in Hanford. This decrease is the result of the downranking, visible in Fig. <ref>, of triggers with poor χ^2_r,sg values, and directly leads to improved sensitivity by producing a cleaner background. This will vary between analysis periods, but a large reduction is consistently observed. In the next section we estimate the impact this has on the sensitivity of the PyCBC analysis.
§ IMPACT ON SENSITIVITY
In this section we evaluate the impact on search sensitivity when using the ρ̃_sg statistic. So far, we have described the construction of an improved single detector loudness, ρ̃_sg. To build a complete two-detector ranking statistic we use the methods described in <cit.>, where we have substituted ρ̃ with ρ̃_sg, to rank candidate coincident events and coincident background events. We simulate a population of binary black hole mergers that is isotropic over the sky and volumetric over distance, with a uniform distribution of component masses. A distance cutoff is imposed so that injections are not placed where their expected SNR would be significantly less than the minimum cutoff of the search (typically SNR ∼ 5.5). The minimum component mass, M_1,2, is 2 solar masses, and the total mass, M_total, ranges from 10 to 100 solar masses. Signals are approximated using the waveform model introduced in <cit.>. The spin distribution was assumed to be aligned with the orbital angular momentum of the binary, and the spin of each component black hole is uniformly distributed between -0.998 and +0.998. We use the same set of templates introduced for the production O2 analysis <cit.>. The PyCBC analysis <cit.> is used to measure the background and significance of 𝒪(10^4) simulated signals. Each simulated merger has been added within the analysis to the LIGO data spanning Jan 4th - Feb 12th, 2017. Fig. <ref> shows the improvement in the rate of simulated detections using the modified ranking statistic. We find significant improvements in detection rate which increase with the total mass of the binary black hole. This is because, as the mass of the merger increases, signals are more difficult to distinguish from transient glitches, due to the increasing overlap in their time and frequency evolution.
In addition, the sensitivity to lower mass mergers is unaffected. This is expected, as the detector noise does not regularly produce long duration transient glitches that could have the same time and frequency evolution as the signal from an astrophysical gravitational-wave merger.
§ CONCLUSIONS
In this work we have presented a novel method for ranking the single detector loudness of a gravitational-wave candidate trigger. This method takes into account the morphology of a potential signal to look for excess power at higher frequencies than would be expected to be produced by a given candidate. We note that signals that are not well modeled by the templates will be less effectively separated from background, so efforts to expand the space of models to include additional physical effects such as higher modes <cit.> and precession <cit.> may aid in detection efficiency. The method allows us to separate a large fraction of the short duration transient noise that is present in Hanford and Livingston data from the gravitational waves expected from binary black hole mergers. This improves the overall sensitivity of the search to gravitational waves from black hole mergers with total mass 60-100 M_⊙ by ∼ 20%.
We thank the LIGO Scientific Collaboration for access to the data and gratefully acknowledge the support of the United States National Science Foundation (NSF) for the construction and operation of the LIGO Laboratory and Advanced LIGO, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom and the Max-Planck-Society (MPS) for support of the construction of Advanced LIGO. Additional support for Advanced LIGO was provided by the Australian Research Council. In addition, we acknowledge the Max Planck Gesellschaft for support and the Atlas cluster computing team at AEI Hannover, where this analysis was carried out. We also thank Thomas Dent, Andrew Lundgren, Ian Harry, and Miriam Cabero for useful discussions.
http://arxiv.org/abs/1709.08974v2
{ "authors": [ "Alexander Harvey Nitz" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170926122940", "title": "Distinguishing short duration noise transients in LIGO data to improve the PyCBC search for gravitational waves from high mass binary black hole mergers" }
address1]Heran Yangaddress1,address2]Jian Suncorrauthor [corrauthor]Corresponding author [email protected],address2]Huibin Liaddress3]Lisheng Wangaddress1,address2]Zongben Xu[address1]School of Mathematics and Statistics, Xi'an Jiaotong University, China [address2]National Engineering Laboratory for Big Data Algorithm and Analysis Technology, China [address3]Department of Automation, Shanghai Jiaotong University, ChinaMulti-atlas segmentation approach is one of the most widely-used image segmentation techniques in biomedical applications. There are two major challenges in this category of methods, i.e., atlas selection and label fusion. In this paper, we propose a novel multi-atlas segmentation method that formulates multi-atlas segmentation in a deep learning framework for better solving these challenges. The proposed method, dubbed deep fusion net (DFN), is a deep architecture that integrates a feature extraction subnet and a non-local patch-based label fusion (NL-PLF) subnet in a single network. The network parameters are learned by end-to-end training for automatically learning deep features that enable optimal performance in a NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. By evaluating on two public cardiac MR datasets of SATA-13 and LV-09 for left ventricle segmentation, our approach achieved 0.833 in averaged Dice metric (ADM) on SATA-13 dataset and 0.95 in ADM for epicardium segmentation on LV-09 dataset, comparing favorably with the other automatic left ventricle segmentation methods. We also tested ourapproach on Cardiac Atlas Project (CAP) testing set of MICCAI 2013 SATA Segmentation Challenge, and our method achieved 0.815 in ADM, ranking highest at the time of writing. Multi-atlas label fusion, left ventricle segmentation, deep fusion net, atlas selection.§ INTRODUCTION As one of the most successful medical image segmentation techniques, multi-atlas segmentation (MAS) approach has been applied to various medical image segmentation tasks, including segmentation ofabdominal anatomy <cit.>,cardiac ventricle <cit.>, brain <cit.>, etc. Given an image to be segmented, i.e., a target image, multi-atlas segmentation methods utilize multiple atlases, i.e., a number of images from multiple subjects with segmentation labels delineated by experts, to estimate the segmentation label of target image, i.e., target label. Typically,multi-atlas segmentation methods first register atlas images to the target image, and then the corresponding warped atlas labels are combined to estimate the target label by a label fusion procedure <cit.>. To raise computational efficiency or improve final segmentation accuracy, multi-atlas segmentation methods employ an atlas selection procedure to select a few warped atlas images most similar to the target image, and only the labels of these selected images are utilized in the label fusion procedure <cit.>. For a comprehensive review on multi-atlas segmentation methods, please refer to <cit.>. A large body of literatures on multi-atlas segmentation focuses on the label fusion procedure.One typical label fusion strategy is weighted voting,where the label of each target voxel is determined by weighted average of the labels of corresponding voxels in warped atlas images. Local label fusion methods determine the voxel-wise fusion weights by local intensity-based similarities between the target and atlas voxels <cit.>. 
To account for possible registration errors, <cit.> proposed to fuse labels of all voxels in a non-local search window or volume around the registered atlas voxel for predicting the target label.This category of methods <cit.> is commonly named as non-local patch-based label fusion.Moreover, statistical fusion methods were proposed to estimate the fusion weights by integrating models of rater performance <cit.>. Instead of using weighted voting strategy, sparsity-based dictionary learning <cit.> and matrix completion <cit.> methods predict target labels by representing image and label patch pairs using sparse regularization or low rank constraint. These related methods commonly use intensities or hand-crafted features for representing atlas and target voxels / patches to measure atlas-to-target similarity.A fundamental question is that whether we can automatically learn the features of atlas and target images directly aiming at achieving optimal label fusion performance. Atlas selection, another important issue in multi-atlas segmentation, aims at selecting the most relevant atlases, which is generally achieved by ranking the atlases according to their similarities to the target image.The traditional methods rank atlases using intensity-based similarity measures, e.g., normalized mutual information <cit.>. Other well designed measures for atlas selection include the distance between transformations <cit.>, registration consistency <cit.>, etc. Machine learning methods were also introduced to learn similarity measures for atlas selection, e.g., manifold learning <cit.>, ranking SVM <cit.>, etc.For atlas selection, how to design image features to measure the atlas-to-target image similarity for relevantatlas selection is also a fundamental task. This work aims at designing afeature learning-based approach for better achieving label fusion and atlas selection in multi-atlas segmentation. We propose a novel multi-atlas segmentation method by reformulating non-local patch-based label fusion (NL-PLF) method <cit.>to be a deep neural network <cit.>. As shown in Fig. <ref>, the network is comprised of a feature extraction subnet for extracting deep features of each voxel in atlas and target images, and a non-local patch-based label fusion subnet (NL-PLF subnet) for fusing the warped atlases based on the extracted deep features.Our proposed deep fusion net relies on atlas-to-target image registration, and the net concentrates on learning deep features for fusing these warped labels using NL-PLF by an end-to-end training strategy. More specifically, the feature extraction subnet is learned to embed image into a deep feature space in which the feature similarity well reflects the label similarity,and then these deep features are utilized in NL-PLF subnet to compute the label fusion weights for achieving optimal segmentation performance in a NL-PLF framework. The learned deep features for label fusion can also be taken as the features for measuring atlas-to-target image similarity in atlas selection.To the best of our knowledge, this is the first work accomplishing registration-based multi-atlas segmentation in a deep learning framework. The traditional non-local patch-based label fusion method relies on hand-crafted features, e.g., intensities <cit.>, contextual features <cit.>, for computing label fusion weights.Our approach takes advantage of the strong feature learning ability of deep neural network to learn optimal image features for label fusion in multi-atlas segmentation. 
This is achieved by our specially designed network architecture integrating the modules of feature learning and label fusion in an end-to-end learning framework. We apply the proposed deep fusion net to left ventricle (LV) segmentation from short-axis cardiac MR images <cit.>, and achieved competitiveresultson two publicly available cardiac MR datasets, i.e., MICCAI 2013 SATA Segmentation Challenge (SATA-13) dataset [https://www.synapse.org/#!Synapse:syn3193805/wiki/217780] <cit.> and MICCAI 2009 LV Segmentation Challenge (LV-09) dataset [https://smial.sri.utoronto.ca/LV_Challenge/Home.html] <cit.>. Our approach achieved 0.833 in averaged Dice metric (ADM) on SATA-13 dataset and 0.95 in ADM for epicardium segmentation on LV-09 dataset.On the Cardiac Atlas Project (CAP) testing set of MICCAI 2013 SATA Segmentation Challenge, our method achieved 0.815 in Dice metric, ranking highest on this dataset at the time of writing. Note that a preliminary version of this work was published in <cit.>. Compared with the conference paper, this journal paper presents the following extensions.(1) We more comprehensively discuss on the motivations and details of the proposed method for better readability. (2) Our method is evaluated on additional dataset (LV-09 dataset) and compared with more segmentation methods, including the state-of-the-art deep learning methods. (3) More experiments are conducted to further explore the behaviors of the proposed method, e.g., impact of atlas selection strategy, cross-dataset evaluation, etc. §.§ Related work §.§.§ Deep learning approach in medical image segmentation In recent years, deep learning approach was widely applied in medical image segmentation, e.g., vessel segmentation <cit.>, brain segmentation <cit.>, etc.These methods commonly design different deep network structures considering backgrounds of specific problems, and directly learn the optimal network parameters for voxel-wise label prediction.For example, <cit.> proposed two-pathway cascaded deep networks with a two-phase training procedure forbrain tumor segmentation.<cit.> designed a multi-task fully convolutional neural network with multi-level contextual information for object instance segmentation from histology images. <cit.> proposed a data augmentation strategy by applying elastic deformations to the available training images, and designed a U-net architecture that consists of a contracting path to capture context and a symmetric expanding path enabling precise localization for cell segmentation in microscopic images. Moreover, <cit.> proposed a multi-scale 3D convolutional neural network with two convolutional pathways for lesion segmentation in multi-modal brain MRI. In addition, some methods concentrate on defining proper loss functions for training deep segmentation networks on medical images. For example, <cit.> proposed a novel loss function based on Dice coefficient for prostate segmentation to deal with the strong imbalance between the number of foreground and background voxels. <cit.> designed a new loss for segmentations of histology glands to encode geometric and topological priors of containment and detachment. For a comprehensive review on deep learning in medical image analysis, please refer to <cit.>. 
§.§.§ Cardiac MR left ventricle segmentation Accurately segmenting the LV from cardiac MR images is essentially important for quantitatively assessing the cardiac function in clinical diagnosis, which is still a challenging task due to large variation of LV in intensity levels, structural shapes, respiratory motion artifacts, partial volume effects, etc. Multi-atlas segmentation approach is widely applied in LV segmentation. <cit.> incorporated intensity, gradient and contextual information into an augmented feature vector for similarity measure in the non-local patch-based label fusion framework. Instead of only considering the corresponding atlas slice to the target image, <cit.> fused the labels of all atlas slices in a neighborhood to produce a more accurate estimated target label.<cit.> formulated the weighted voting as a problem of minimizing total expectation of labeling error, and pairwise dependency between atlases was modeled as a joint probability of two atlases making a segmentation error at a voxel. Due to the close connection between label fusion and registration, <cit.> attempted to alternately perform patch-based label fusion and registration refinement. In recent years, deep learning approach was also applied to LV segmentation. For example, <cit.>utilized convolutional neural networks to determine the region of interest (ROI) containing LV and stacked autoencoders to infer the LV segmentation mask. Then the segmentation mask wasincorporated into level set model to produce the final segmentation result.<cit.> first estimated the ROI and an initial segmentation by deep belief network and Otsu's thresholding, and then refined the initial segmentation by level set model. These two methods achieved high accuracies on MICCAI 2009 LV Segmentation Challenge dataset, but their performance largely relies on the post-processing by level set method. In these work, the segmentation results purely using deep networks are unsatisfactory <cit.> possibly due toinsufficient labeled training data or limitation of designed network architecture, and the post-processing by level set method helps to refine the segmentation results to be continuous and connected. Our proposed deep fusion net bridges the multi-atlas segmentation approach and deep learning approach. Compared with the traditional multi-atlas segmentation methods, our net also relies on atlas-to-target image registration. But instead of using hand-crafted features for label fusion, our net learns to extract deep features for computing optimal fusion weights in a NL-PLF framework. Compared with the common deep learning methods directly learning mapping from image to semantic labels, our network architecture is non-conventional and inspired by a registration-based multi-atlas label fusion strategy. With this novel deep architecture, our approach achieved improved results compared with the traditionalmulti-atlas segmentation methods and automatic deep learning methods on two cardiac MR image datasets, and the only deep learning method surpassing our results depends on a strong manual prior <cit.>.§ DEEP FUSION NET FOR MULTI-ATLAS SEGMENTATION This section presents the proposed deep fusion net for multi-atlas segmentation, including the general framework, network architecture,training and testing procedures. 
§.§ General frameworkAs a registration-based multi-atlas segmentation method, multiple atlas images are first registered to the target image by a non-rigid registration method, and then the corresponding atlas labels are warped to the target image using these transformations. Our proposed deep fusion net(DFN) is designed to fuse the warped atlas labels using discriminatively learned deep features. As shown in Fig. <ref>, deep fusion net consists of a feature extraction subnet, followed by a non-local patch-based label fusion (NL-PLF) subnet.Feature extraction subnet: The feature extraction subnet is responsible for extracting dense deep features from target and atlas images, and all these images share the same feature extraction subnet. This subnet is learned to embed the input images into a deep feature space, in which the feature similarity between paired voxels are expected to better reflect the similarity of theirlabels. The extracted dense deep features are utilized by the following NL-PLF subnet for computing the label fusion weights using the deep feature similarity. NL-PLF subnet: The NL-PLF subnet aims to fuse the warped atlas labels for predicting the target label.It implements a non-local patch-based label fusion strategy that predicts the target label by the weighted average of atlas labels within a search volume around the registered voxels.The effectiveness of this label fusion strategy relies on an effective measure of feature similarity for the fusion weight computation. This subnet takes the extracted deep features by feature extraction subnet as input, and accomplishes the non-local patch-based label fusion using the weights computed based on these extracted features. The deep fusion net integrates the modules of feature extraction for feature embedding and label fusion for target label prediction. The network parameters are learned by an end-to-end training procedure minimizing a loss defined between network output and ground-truth target label.By enforcing that the estimated segmentation label should approximate the target label in training, the feature extraction subnet is discriminatively learned to extract deep features with feature similarity well representing the label similarity for computing the optimal label fusion weight. In computer vision, siamese neural networks were proposed for learning deep feature embedding, and the learned feature similarity was utilized for patch matching <cit.>, face recognition <cit.> and re-identification <cit.>, etc.Similarly, our feature extraction subnet aims to embed image patches [The image patch size is determined by the receptive field of feature extraction subnet.] into a deep feature space, in which the feature similarity is a meaningful measure of semantic label similarity. The deep feature similarity is further utilized to define label fusion weights for warped atlas label fusion or retrieve relevant atlases in atlas selection. In the following sections 2.2 and 2.3, we will respectively introduce the architectures of feature extraction subnet and NL-PLF subnet.§.§ Feature extraction subnetAssume that we are given a target image T and multiple atlases. After registering atlas images to the target image T, the pairs of warped atlas image and label map are denoted as { X_i,L(X_i)}_i=1^K, where X_i is the i-th warped atlas image, and L(X_i) is the corresponding warped label map. K denotes the number of atlases. 
The feature extraction subnet is responsible for extracting dense deep features from images, including the target image T and warped atlas images { X_i}_i=1^K, and the extracted dense features are respectively denoted as F(T) and {F(X_i)}_i=1^K. As shown in Fig. <ref>, the feature extraction subnet consists of multiple repetitions of convolutional layer with ReLU activation function, and a final sigmoid layer for feature normalization. All input images of this subnet, including the target image and warped atlas images, share the same subnet. Figure <ref> shows examples of extracted feature maps of one target image and two warped atlas images by the feature extraction subnet after training. These extracted features well describe different structures of the input images. Convolutional block. The convolutional block aims at learning discriminative local patterns represented by filters from its input features. This block convolves the input features using a set of learnable filters {𝒲_d'}_d'=1^D', followed by non-linear activation function. Each filter 𝒲_d'∈ℝ^w_f × w_f × D is a third-order tensor, where D' is the number of filters, D denotes the number of feature maps in input features, and w_f × w_f is the size of filters.Given the input features G^l-1(X) ∈ℝ^M × N × D of image X, the l-th convolutional block outputs features G^l(X) ∈ℝ^(M - w_f + 1) × (N - w_f + 1) × D',whose d'-th feature map, denoted as g^l_d', can be written as g^l_d' = φ(𝒲_d' * G^l-1(X) + b_d'^l ),where * is the 3-D convolution operator, and b_d'^l is the bias term.φ is a rectified linear unit (ReLU) function <cit.>: φ(x) = max (0, x). Sigmoid layer. The feature extraction subnet is completed with a final sigmoid layer. The sigmoid layer can suppress the large feature magnitude and non-linearlymap the extracted features to a limited range within [0.5, 1), which is expected to enforce robustness of deep fusion net to different image contrasts. The experimental evaluation on the necessity of this sigmoid layer is presented in section <ref>. Given input features G(X)∈ℝ^M × N × D of image X, each element f_m,n,din the transformed features F(X)∈ℝ^M × N × D can be computed via processing each element g_m,n,d of G(X) byf_m,n,d = 1/1 + e^ - g_m,n,d ,where (m,n,d) denotes the three-dimensional index of element in input or transformed features. §.§ Non-local patch-based label fusion subnetThis subnet is a deep architecture implementing non-local patch-based label fusion strategy on top of feature extraction subnet. As shown in Fig.<ref>, our NL-PLF subnet consists of shift layer, distance layer, weight layer and voting layer, and finally outputs the estimated label of target image. Figure <ref>(a) shows the idea of non-local patch-based label fusion strategy. For simplicity, we use the notation p to denote a pixel's spatial position instead of (m,n). To account for atlas-to-target image registration errors, all the pixels in a search window around the pixel p in warped atlas images {X_i}_i=1^K are considered as the potential corresponding pixels for the pixel p in target image T. Therefore, the labels of these pixels in atlas images are fused to produce the estimated target label at pixel p. Contrary to the hand-crafted features adopted in <cit.>, deep fusion net computes the fusion weights using the deep features extracted by the feature extraction subnet. 
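Before detailing how the fusion weights are computed from these deep features, a minimal PyTorch-style sketch of the feature extraction subnet described in the previous subsection may help fix ideas; the number of blocks, filter counts and kernel size below are illustrative placeholders rather than the exact configuration used in the paper.

```python
# Minimal sketch of the feature extraction subnet (repeated conv + ReLU blocks
# followed by a sigmoid); hyper-parameters here are illustrative placeholders.
import torch.nn as nn

class FeatureExtractionSubnet(nn.Module):
    def __init__(self, in_channels=1, num_filters=64, kernel_size=3, num_blocks=4):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(num_blocks):
            # no padding, so each block shrinks the maps as described above
            layers += [nn.Conv2d(c, num_filters, kernel_size), nn.ReLU(inplace=True)]
            c = num_filters
        # sigmoid of the non-negative ReLU output lies in [0.5, 1), as noted above
        layers += [nn.Sigmoid()]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, in_channels, M, N); the same subnet with shared weights is
        # applied to the target image and to every warped atlas image
        return self.body(x)
```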
The fusion weights are defined from the distances between these deep features, normalized by a softmax operator, which enforces that the fusion weights related to each pixel p in the target image T sum to one. More precisely, the fusion weight of pixel q in warped atlas image X_i for predicting the label of pixel p in target image T is determined as

w_i,p,q(Θ) = exp(-‖ F_p(T; Θ) - F_q(X_i; Θ)‖_2^2) / ∑_j∑_q' ∈ N_p exp(-‖ F_p(T; Θ) - F_q'(X_j; Θ)‖_2^2),

where Θ denotes the network parameters, i.e., the filters and biases of the feature extraction subnet, F_p(T; Θ) is the extracted feature vector of image T at pixel p, and N_p is the search window around pixel p. Hence, the estimated label of pixel p in target image T can be written as

L̂_p(T; Θ) = ∑_i∑_q ∈ N_p w_i,p,q(Θ) L_q(X_i),

where L_q(X_i) is the label of atlas image X_i at pixel q.

The objective of learning the deep fusion net is to enforce that the estimated label of target image T in Eqn. (<ref>) approximates the ground-truth target label L(T) as closely as possible. Therefore, a loss layer measuring the approximation error is defined as an L_2 loss:

E(L̂(T; Θ), L(T)) = (1/P) ∑_p ‖L̂_p(T;Θ) - L_p(T)‖_2^2,

where P denotes the number of pixels in target image T. Our task in network training is to minimize this loss function on a training set w.r.t. the network parameters Θ using back-propagation. For notational convenience, we will omit the network parameters Θ in the rest of the paper. Based on the label prediction formula in Eqn. (<ref>), our network output L̂_p(T) is very often a hard 0 or 1 rather than a soft probability in (0,1) as produced by a standard softmax classifier, and we expect the L_2 loss to be more computationally stable and smoother to optimize than the commonly used cross-entropy loss. Please refer to section 1 of the appendix for the gradient computations using the L_2 and cross-entropy losses. In preliminary experiments, we also tried several different loss functions, e.g., L_1, hinge, Dice and log losses, and the L_2 loss behaved marginally better. Please refer to section <ref> for these experiments.

To incorporate Eqn. (<ref>) into the neural network as a differentiable function, we decompose the computation of the fusion weights into several successive simple operations, modeled as a shift layer, a distance layer and a weight layer. Each operation and the gradient of its output w.r.t. its input can be easily calculated using standard deep-learning libraries. Figure <ref> shows our motivation for this decomposition in detail. Instead of directly computing the feature distance between pixel p in target image T and pixel q in atlas image X_i, as shown in Fig. <ref>(a), we can equivalently compute the per-pixel feature distance at pixel p between target image T and the atlas image X_i shifted by the shift vector α = p - q, as shown in Fig. <ref>(b). Suppose that the search window width is 2t+1. To calculate the fusion weights of Eqn. (<ref>) in a deep network, we first shift each feature map of X_i by each shift vector α within the non-local region R_nl = { (u,v) ∈ℤ^2 | -t ≤ u,v ≤ t} using a shift layer, then compute the pixel-wise feature distances using a distance layer, and finally transform these distances into fusion weights using a weight layer.

§.§.§ Shift layer

The shift layer spatially shifts the extracted features or the label map of each atlas image. Given the extracted features F(X_i) ∈ℝ^M × N × D of atlas image X_i, this layer generates (2t+1) × (2t+1) spatially shifted versions of the features, one for each shift vector α∈ R_nl.
Each output of the shift layer can be written as S^α(F(X_i)), where S^α denotes the shift operator with shift vector α. Figure <ref> illustrates the effect of the shift operator S^α with different values of α on one feature map. Each shift operator S^α is simply an identity mapping between shifted features, and thus the gradients can be propagated back accordingly. Note that careful cropping is needed for the areas shifted out of the border by the shift operators.

§.§.§ Distance layer

The distance layer computes the pixel-wise feature distance at each pixel p between the shifted features S^α(F(X_i)) of atlas X_i and the target's features F(T) as

D^α_p(T, X_i) = ‖ S^α_p(F(X_i)) - F_p(T) ‖_2^2,

where S^α_p(F(X_i)) denotes the feature vector of the shifted features S^α(F(X_i)) at pixel p. It is defined as the pixel-wise squared L_2 distance, and the gradient of this layer is easily derived w.r.t. both S^α(F(X_i)) and F(T).

§.§.§ Weight layer

The weight layer maps the feature distances to fusion weights using a softmax operation. The fusion weight of pixel q (q = p - α, α∈ R_nl) in atlas image X_i for predicting the label of pixel p in target image T can be written as

w_i,p,q = w^α_p(X_i) = e^-D^α_p(T, X_i) / ∑_j∑_α' ∈ R_nl e^-D^α'_p(T, X_j).

This softmax operation is a common layer in deep network architectures, and the gradient of this layer w.r.t. the input D^α_p(T, X_k) can be easily derived <cit.>, where k is any atlas index.

§.§.§ Voting layer

The voting layer estimates the label of target image T at pixel p by

L̂_p(T) = ∑_i ∑_α∈ R_nl w_p^α(X_i) S_p^α(L(X_i)).

As a linear operation, the gradient of this layer is easily derived w.r.t. the input w_p^α(X_i).

Summary: The non-local patch-based label fusion subnet successively processes the extracted features and warped atlas labels by the shift, distance and weight layers to output fusion weights, which are then utilized by the voting layer to estimate the target label. This subnet implements Eqn. (<ref>) using the above simple layers, and the gradients of these layers can be easily calculated using standard deep-learning libraries.

§.§ Deep fusion net for binary and multi-class segmentation

The deep fusion net can be adapted to both binary and multi-class segmentation tasks. This is uniformly achieved by representing the segmentation label L_p of a target or warped atlas pixel p as a probability vector in the voting layer. Suppose that there are C classes for segmentation; the segmentation label of pixel p can then be represented by L_p = (a_1, ⋯, a_C)^⊤ where a_i ≥ 0 and ∑_i a_i = 1. Here a_i represents the probability of pixel p belonging to class i. For binary segmentation, L_p = (a, 1-a)^⊤, where a represents the probability of pixel p belonging to the object of interest.

§.§ Network training

We learn the network parameters Θ by minimizing the loss in Eqn. (<ref>) w.r.t. Θ using back-propagation. Given a number of atlases for training, each atlas is selected as the target image in turn, and the remaining atlases are registered to this target image as the warped atlases. Suppose that the i-th atlas is picked as the target image T with ground-truth label L(T); the corresponding warped atlases are denoted by 𝒜_i = {X_j, L(X_j) | j=1,2,...,K, j ≠ i }, where K is the total number of atlases.
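To make the preceding shift/distance/weight/voting decomposition concrete before turning to the optimization details, the following is a minimal PyTorch-style sketch of the forward computation on one training triplet (𝒜_i, T, L(T)), together with the L_2 loss of Eqn. (<ref>). It is only an illustration under simplifying assumptions (the warped label maps and the ground-truth label are assumed to be cropped to the spatial size of the feature maps), not the authors' MatConvNet implementation, and all names are ours.

import torch
import torch.nn.functional as nnf

def dfn_forward(feat_net, target, atlas_imgs, atlas_labels, t=3):
    """Forward pass of the deep fusion net on one training triplet.

    target:       (1, 1, M, N) target image
    atlas_imgs:   (K, 1, M, N) warped atlas images
    atlas_labels: (K, C, m, n) warped atlas label maps, cropped to the feature map size
    t:            half-width of the search window, |R_nl| = (2t+1)^2
    """
    f_t = feat_net(target)                               # (1, D, m, n)
    f_a = feat_net(atlas_imgs)                           # (K, D, m, n)

    # Shift layer: enumerate all offsets alpha in R_nl via padding + cropping.
    pad_f = nnf.pad(f_a, (t, t, t, t))
    pad_l = nnf.pad(atlas_labels, (t, t, t, t))
    m, n = f_t.shape[-2:]
    dists, labels = [], []
    for du in range(2 * t + 1):
        for dv in range(2 * t + 1):
            s_f = pad_f[..., du:du + m, dv:dv + n]       # shifted features S^alpha(F(X_i))
            s_l = pad_l[..., du:du + m, dv:dv + n]       # shifted labels   S^alpha(L(X_i))
            # Distance layer: pixel-wise squared L2 distance to the target features.
            dists.append(((s_f - f_t) ** 2).sum(dim=1))  # (K, m, n)
            labels.append(s_l)                           # (K, C, m, n)
    D = torch.stack(dists, dim=0)                        # (|R_nl|, K, m, n)
    L = torch.stack(labels, dim=0)                       # (|R_nl|, K, C, m, n)

    # Weight layer: softmax over all (offset, atlas) pairs at each pixel.
    w = torch.softmax(-D.reshape(-1, m, n), dim=0).reshape_as(D)

    # Voting layer: weighted average of the shifted atlas labels.
    return (w.unsqueeze(2) * L).sum(dim=(0, 1))          # (C, m, n)

# L2 training loss on one triplet, averaged over pixels as in Eqn. (<ref>):
# pred = dfn_forward(feat_net, target, atlas_imgs, atlas_labels)
# loss = ((pred - target_label) ** 2).sum(dim=0).mean()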
We use stochastic gradient descent in training, and each triplet (𝒜_i, T, L(T)) is taken as a batch. Instead of using all warped atlases within 𝒜_i in each batch, which may require a large GPU memory and a long training time, we randomly sample K_0 (K_0=5) warped atlases for training, according to a distribution proportional to the normalized mutual information between the warped atlas images {X_j | j=1,2,...,K, j ≠ i } and the target image T. This random atlas selection strategy enriches the diversity of warped atlases seen for each target image at the training phase. Since there may be a discrepancy between the distributions of training and testing data, this strategy may improve the generalization ability of the deep fusion net to testing data. We will evaluate the atlas selection strategy at the training phase in section <ref>. Many randomized techniques have also been proposed in deep learning to improve network generalization ability, e.g., dropout <cit.>, data augmentation <cit.>, etc.

§.§ Network testing

At the testing phase, the learned deep fusion net loads a test sample (a target image and its warped atlases) and outputs the estimated target label. Similar to multiple atlas selection approaches <cit.>, we only pick a few of the most similar atlases for the target image in label fusion, which is an atlas selection problem. As discussed in section <ref>, the feature extraction subnet actually learns an embedding space during training, in which the pixel-wise L_2 feature distance within the distance layer corresponds to image similarity. We can therefore naturally define a deep feature distance between a target image T and its warped atlas image X_i as

d_F(T, X_i) = ‖ F(T) - F(X_i) ‖^2_F,

where F(·) denotes the features extracted by the feature extraction subnet trained as above. At the testing phase, we take the top-k atlases with the smallest deep feature distances as the selected atlases for a target image, which are then fed into the learned deep fusion net to estimate the target label. Note that the number k of selected atlases at the testing phase does not need to be the same as the K_0 used at the training phase. In fact, one might intuitively expect that training with K_0 atlases would make the features be learned such that the network operates best at the testing phase with k = K_0. However, we show empirically in section <ref> that a larger number k at the testing phase generally produces better segmentation accuracies. This is possibly because the random atlas selection strategy at the training phase largely extends the diversity of warped atlases seen for each target image and thereby enables the network to utilize more warped atlases at the testing phase.

§ IMPLEMENTATION DETAILS AND COMPUTATIONAL REQUIREMENTS

Our current implementation is based on the MatConvNet library [http://www.vlfeat.org/matconvnet/], and all experiments are performed on a Dell Precision T7910 workstation with a GeForce GTX TITAN X (12GB) on an Ubuntu platform. In the current study, our proposed method processes cardiac MR images slice by slice, and the GPU memory requirement for processing one target image is about O(MNDK_0(2t+1)^2). The average size of target images in the numerical experiments of section <ref> is about 110×140 for the SATA-13 dataset and 135×155 for the LV-09 dataset.
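Looking back at the training and testing subsections above, the two atlas-selection steps can be sketched in a few lines. The sketch below assumes precomputed NMI scores for the training-time sampling and reuses the feature extraction subnet for the test-time deep feature distance d_F; the function and variable names are illustrative, not part of our released code.

import numpy as np
import torch

def sample_training_atlases(nmi_scores, k0=5, rng=np.random.default_rng()):
    """Randomly pick K_0 warped atlases with probability proportional to their NMI with the target."""
    p = np.asarray(nmi_scores, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=k0, replace=False, p=p)

def select_testing_atlases(feat_net, target, atlas_imgs, k=10):
    """Pick the top-k atlases with the smallest deep feature distance d_F(T, X_i)."""
    with torch.no_grad():
        f_t = feat_net(target)                      # (1, D, m, n)
        f_a = feat_net(atlas_imgs)                  # (K, D, m, n)
        d = ((f_a - f_t) ** 2).sum(dim=(1, 2, 3))   # squared Frobenius norm per atlas
    return torch.argsort(d)[:k]

At the training phase the random sampling is redone for every batch, so over the epochs the same target image is paired with many different atlas subsets.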
In practice, for a target image of size 120 × 140 with K_0 = 5, a deep fusion net with the network parameters of section <ref> requires about 4GB of GPU memory for one forward and one backward pass, which take about 0.30 second and 0.31 second respectively. In this case, the maximum allowed number of selected atlases K_0 for training is up to 42 on our computational platform.

§ EXPERIMENTS

In this section, we evaluate the performance of the deep fusion net on two cardiac MR datasets for left ventricle (LV) segmentation. In the following paragraphs, we compare our method with other segmentation methods and further investigate the performance of our deep fusion net under variants of the architecture, loss, atlas selection strategy, and cross-dataset evaluation on these two datasets. In the appendix, we also provide additional experiments, including a justification of the effectiveness of the learned deep features as a similarity measure, and all the p-values appearing in the following paragraphs.

§.§ Datasets

§.§.§ SATA-13 dataset

The MICCAI 2013 SATA Segmentation Challenge (SATA-13) provides a cardiac dataset for LV segmentation. The samples in this dataset are randomly selected from DETERMINE (Defibrillators to Reduce Risk by Magnetic Resonance Imaging Evaluation) in the Cardiac Atlas Project (CAP). The cardiac MR images are acquired using the steady-state free precession (SSFP) pulse sequence, and each image is acquired during a breath-hold of 8-15 seconds duration. Sufficient short-axis slices are obtained to cover the whole heart, and the MR parameters vary between cases. Typically, each cine image sequence has about 25 frames, and each frame has about 10 slices. The slice size ranges from 138×192 to 512×512. The slice thickness is less than 10 mm, and the gap between slices is less than 2 mm. The SATA-13 dataset includes 83 training subjects and 72 testing subjects. The ground-truth myocardium segmentation masks for all frames of the training subjects are provided, while we evaluate our deep fusion net only on the end-diastole (ED) frame, as is common practice in the literature <cit.>. The experiment is performed over the 83 training subjects using 5-fold cross validation, i.e., in each fold one fifth of the training subjects are taken as the validation set and the remaining four fifths are used for learning the deep fusion net. The averaged 3D Dice metric (ADM) and averaged 3D Hausdorff distance (AHD) over the validation sets of the five folds are taken as the final accuracies. For one subject in the validation sets, let Ω_gt and Ω_et respectively be its ground-truth and estimated segmentations, represented by the sets of pixels labeled as the object of interest; the Dice metric (DM) and Hausdorff distance (HD) are then defined as DM(Ω_gt, Ω_et) = 2|Ω_gt∩Ω_et| / (|Ω_gt| + |Ω_et|) and HD(Ω_gt, Ω_et) = max( max_p∈Ω_gt min_q∈Ω_et d(p,q), max_q∈Ω_et min_p∈Ω_gt d(p,q) ), where |·| denotes the number of elements in a set and d(p,q) denotes the Euclidean distance between the coordinates of pixels p and q. The Hausdorff distance is computed in millimetres, using the spatial resolution obtained from the DICOM file.

§.§.§ LV-09 dataset

The MICCAI 2009 LV Segmentation Challenge (LV-09) dataset <cit.> is provided by the Sunnybrook Health Sciences Center and contains 45 subjects with expert annotations, split into the “training”, “testing” and “online” subsets. The cardiac cine-MR short-axis images are acquired using the SSFP pulse sequence with a 1.5T GE Signa MRI scanner.
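For reference, the Dice metric and Hausdorff distance defined in the SATA-13 subsection above can be computed directly from binary masks. The sketch below uses NumPy, treats the masks as boolean arrays and takes the voxel spacing from the DICOM header as an assumed input; it is an illustration, not the evaluation code used for the reported numbers.

import numpy as np

def dice_metric(gt, est):
    """DM = 2|gt ∩ est| / (|gt| + |est|) for boolean masks."""
    gt, est = gt.astype(bool), est.astype(bool)
    return 2.0 * np.logical_and(gt, est).sum() / (gt.sum() + est.sum())

def hausdorff_distance(gt, est, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance between the two labeled pixel sets, in mm."""
    p = np.argwhere(gt) * np.asarray(spacing)   # coordinates scaled by the pixel spacing
    q = np.argwhere(est) * np.asarray(spacing)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # brute-force pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

The brute-force distance matrix is quadratic in the number of boundary pixels; a KD-tree (e.g. scipy.spatial.cKDTree) would scale better for large 3D masks.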
All images are obtained during 10-15 seconds of breath holding with a temporal resolution of 20 cardiac phases over the heart cycle. Each subject contains all slices at the end-diastole (ED) and end-systole (ES) frames, and each frame contains 6-12 short-axis images obtained from the atrioventricular ring to the apex (thickness = 8 mm, gap = 8 mm, FOV = 320 × 320 mm, matrix = 256 × 256). Both endocardial and epicardial contours are drawn by experienced cardiologists in all slices at the ED frame, while only endocardial contours are given at the ES frame. In the experiment, as in <cit.>, we utilize the 15 training subjects for learning the deep fusion net, and the 30 subjects in the testing and online sets for evaluating the performance. The standard evaluation scheme of the MICCAI 2009 LV Segmentation Challenge <cit.> is utilized in our comparison, which is based on the following three measures: 1) percentage of “good" contours, 2) averaged Dice metric (ADM) of the “good" contours, and 3) averaged perpendicular distance (APD) of the “good" contours. A contour is classified as “good" if its APD is less than 5 mm. The Dice metric and perpendicular distance are calculated for each 2D slice separately, and the evaluation measures (i.e., “Good" percentage, ADM and APD) are averaged over all slices within the ED and ES frames of all subjects in the testing or online sets.

§.§ Preprocessing

Before learning the deep fusion net, atlas images are registered to the target image using the ITK software package [http://www.itk.org/], and we utilize two different registration frameworks to test the robustness of our proposed method.

Landmark-based (LB) registration: Each atlas subject is warped to the target subject using landmark-based registration with a 3D affine transformation, and five landmarks are manually labeled in both atlas and target images, as in <cit.>. These landmarks are also used to crop the region of interest (ROI) from the complex background to reduce the computational cost of registration. To compensate for potential inter-slice shift, 2D B-spline registration is then applied to each pair of corresponding slices in the atlas and target subjects.

Landmark-free (LF) registration: The ROI is cropped by a bounding box on each subject determined by two corner points, and then each atlas subject is warped to the target subject by 3D affine registration without using any landmarks. Normalized mutual information (NMI) is taken as the similarity metric. Finally, 2D affine registration and 2D B-spline registration are successively applied to each slice to reduce the potential inter-slice shift.

Generally speaking, the LB registration framework produces more accurate results due to the guidance of the landmarks. In each registration process, with the estimated motion between atlas and target images, each atlas label is warped using the corresponding motion field to provide an estimate of the target label. All the warped estimates of the target label are fused by the deep fusion net to generate a final prediction of the target label.

§.§ Parameter settings

Empirically, we fix the learning rate of the deep fusion net to 5 × 10^-7. Unless otherwise stated, the numbers of selected atlases at the training and testing phases, i.e., K_0 and k, are respectively 5 and 10, and the size of the search window in the shift layer is 7 × 7.
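As an illustration of the landmark-free preprocessing step described above, the following sketch uses SimpleITK, the Python interface to the ITK toolkit; the paper itself uses the ITK package directly, so this wrapper, the optimizer settings, and the use of Mattes mutual information as a stand-in for the NMI metric are all our assumptions, and the function names outside the SimpleITK API are illustrative.

import SimpleITK as sitk

def register_atlas(target_path, atlas_img_path, atlas_lab_path):
    """Affine registration of one atlas to the target, then warping of its label map."""
    fixed = sitk.ReadImage(target_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(atlas_img_path, sitk.sitkFloat32)
    label = sitk.ReadImage(atlas_lab_path)

    reg = sitk.ImageRegistrationMethod()
    # Mattes mutual information as a stand-in for the NMI similarity metric of the paper.
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(fixed, moving,
                                             sitk.AffineTransform(fixed.GetDimension()))
    reg.SetInitialTransform(init, inPlace=False)
    tx = reg.Execute(fixed, moving)

    # Warp the atlas image with linear and the label map with nearest-neighbour interpolation.
    warped_img = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)
    warped_lab = sitk.Resample(label, fixed, tx, sitk.sitkNearestNeighbor, 0)
    return warped_img, warped_lab

The subsequent per-slice 2D B-spline refinement would follow the same call pattern, initialized for instance with sitk.BSplineTransformInitializer on a coarse control-point grid.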
As for the feature extraction subnet, we utilize 4 convolutional layers with strides of 1; these layers respectively have 64 filters of size 5 × 5 × 1, 64 filters of size 5 × 5 × 64, 128 filters of size 5 × 5 × 64, and 128 filters of size 5 × 5 × 128.

§.§ Segmentation accuracy

§.§.§ SATA-13 dataset

For the SATA-13 dataset, we compare the segmentation accuracy of the deep fusion net with the results of majority voting (MV), patch-based segmentation (PB) <cit.>, as well as SVM-based segmentation with augmented features (SVMAF) <cit.>. Another version of the deep fusion net, which uses normalized mutual information (NMI) as the similarity measure for atlas selection at the testing phase and is denoted as DFN_NMI, is also included in the comparison. The results of MV, PB and SVMAF are reproduced with the published codes [<http://wp.doc.ic.ac.uk/wbai/software/>] using the same parameter settings as in <cit.>. The segmentation accuracy is evaluated by the averaged Dice metric (higher values mean better performance) and the averaged Hausdorff distance (lower values mean better performance) of the myocardium across the ED frame and all the testing subjects in 5-fold cross validation. A paired-samples t-test is conducted to compare the performance of the segmentation approaches.

Table <ref> reports the quantitative segmentation accuracies of the different methods using the LB or LF registrations introduced in section <ref>. The experimental results show that our DFN achieves better performance than the methods MV, PB and SVMAF in both ADM (p < 0.001) and AHD (p < 0.05) when using LB registration. In addition, our DFN and DFN_NMI using LF registration obtain higher accuracies in ADM than the compared methods SVMAF, PB, MV and CNN using LB registration (p < 0.001). As for the similarity measures in atlas selection, comparing DFN (using deep features for atlas selection) with DFN_NMI (using NMI for atlas selection), our deep feature-based atlas selection works significantly better in terms of both metrics (p < 0.001) when using LF registration, improving ADM from 0.802 to 0.815 and AHD from 27.00 to 18.33. However, this improvement is marginal when comparing DFN and DFN_NMI with LB registration, possibly because LB registration achieves better registration quality, decreasing the dependency of the segmentation accuracy on a better atlas selection strategy. A convolutional neural network (CNN) is also included in this comparison to evaluate the effect of the NL-PLF subnet, and we restrict the CNN and our DFN to have identical network capacity for feature extraction, i.e., the compared CNN in Table <ref> has the same feature extraction subnet as ours in section <ref>, which is then followed by a convolutional layer and a softmax layer to output the target label. The meta-parameters of the CNN, e.g., the learning rate, are tuned to the best of our ability. The experimental results show that our DFN performs significantly better than the CNN in all metrics (p < 0.001), indicating the effectiveness of the NL-PLF subnet. Figure <ref> compares the segmentation accuracies in box-plots, showing that our DFN based on LB registration achieves significantly higher median values than the compared methods in ADM, and is marginally better in AHD compared with PB and MV. Figure <ref> shows visual examples of the segmentation results of the different methods on the basal, mid-ventricular and apical slices of a testing subject.
Furthermore, to compare with other state-of-the-art methods, we trained a deep fusion net on the Cardiac Atlas Project (CAP) training set of the MICCAI 2013 SATA Segmentation Challenge and tested its performance on the CAP testing set of the challenge, whose ground-truth segmentations are unknown to the challenge participants. We submitted our segmentation results to the challenge website, and the corresponding testing accuracies were evaluated by the website and published on its leaderboard [Old website: http://masi.vuse.vanderbilt.edu/submission/leaderboard.html]^,[New website: https://www.synapse.org/#!Synapse:syn3193805/wiki/217788 (Last accessed: 10 July 2018)]. Our proposed deep fusion net based on LB registration (referred to as DeepMAS_LB) achieved 0.815 in Dice metric, ranking first among the submitted results at the time of writing.

§.§.§ LV-09 dataset

For the LV-09 dataset, we compare the epicardium and endocardium segmentation accuracies on the testing and online sets in Tables <ref> and <ref> respectively. Similar to <cit.>, both the ED and ES frames are evaluated to measure the accuracy of endocardium segmentation, while only the ED frame is evaluated for epicardium segmentation. Besides the traditional multi-atlas segmentation methods, we also compare two recent deep learning-based methods that achieve state-of-the-art results on this dataset, i.e. the combined stacked autoencoder and level set method (SAELS) <cit.> and the combined deep belief network and level set method (DBNLS) <cit.>. Both SAELS and DBNLS first utilize a deep network to estimate an initial segmentation result, which is then refined by a level set approach to produce the final estimate. More precisely, DBNLS trains four separate DBNs for the epicardium and endocardium at the ED and ES frames, and SAELS trains two separate networks for large-contour and small-contour images for endocardium segmentation. In our experiments, we try two different versions of network training: first, three different deep fusion nets are respectively trained for the epicardium at ED, the endocardium at ED and the endocardium at ES; second, one multi-class deep fusion net (indicated by “multi" in the tables) is trained for the epicardium and endocardium at ED, and one deep fusion net is trained for the endocardium at ES. Notice that DBNLS utilizes the training set for model training and the online set for model selection, and reports the performance on the testing set, while SAELS has the same experimental setting as ours, i.e., training on the training set and reporting results on the testing and online sets. Since SAELS has not been evaluated on the epicardium in <cit.>, we only compare its accuracy on endocardium segmentation in Table <ref>. As shown in Tables <ref> and <ref>, our DFN performs better than the other registration-based methods (i.e., MV, PB and SVMAF) on epicardium and endocardium segmentation in all metrics (p < 0.001) when using LB registration. In comparison with the deep learning-based methods, the only method that outperforms our DFN is DBNLS, and only when it utilizes a strong manual prior (i.e., manually-labeled segmentation masks as prior, referred to as DBNLS(semiauto)). In fact, compared to DBNLS without the manual prior (referred to as DBNLS), our DFN based on LB registration achieves much higher accuracies in all metrics. As for SAELS, our DFN using LB registration achieves comparable results on endocardium segmentation. However, SAELS utilizes post-processing to refine the initial segmentation of the stacked autoencoder by a level set method.
Without post-processing, SAELS reports the ADM scores of 0.90 in testing set and 0.89 in online set, compared with 0.92 and 0.92 of ours purely based on thresholding the estimated label probability maps of deep fusion nets, as shown in Table <ref>. Compared with training separate DFN (referred as DFN), the multi-class DFN produces worse APD scores for epicardium (p < 0.001) and is comparable in other metrics. Our DFN and DFN_NMI based on LF registration achieve lower values of “Good" percentage in all comparisons, due to the inaccurate registration results without using landmarks on this dataset. This indicates that our method, as a multi-atlas segmentation method, relies on a relatively good registration method, and breaks down on LV-09 dataset when LF registration does not work well. Figure <ref> compares the segmentation accuracies in the box-plot, and Figure <ref> shows the epicardium and endocardium segmentation results using different methods for the basal, mid-ventricular and apical slices from one subject in testing set. §.§ Evaluation on network architecture§.§.§ Impact of sigmoid layerIn the feature extraction subnet discussed in section 2.2, we added a sigmoid layer at the end of this subnet to suppress the magnitudes of deep features for robustness, and its outputs are taken as voxel-wise features for computing label fusion weights. We now experimentally evaluate the necessity of sigmoid layer on segmentation performance.First, we train two deep fusion nets with or without sigmoid layer for epicardium at ED frame using 15 training subjects, and test on 30 testing and online subjects of LV-09 database.The testing accuracies are reported in Table <ref>. The results show that the network with sigmoid layer is better than the one without sigmoid layer in “Good" percentage (p < 0.05) and APD (p < 0.001) with comparable ADM, indicating the effectiveness of the sigmoid layer on improving the segmentation accuracies.Second, we also evaluate the performance of learned deep features with and without sigmoid layer as similarity measure. We compare the majority voting methods respectively using deep features learned by deep fusion nets with sigmoid layer (referred as Deep features (w/ sigmoid)) or without sigmoid layer (referred as Deep features (w/o sigmoid)), and normalized mutual information (referred as NMI) as similarity measures on epicardium segmentation at ED frame of LV-09 database. Only training set is used as atlases, and the testing accuracies are computed in testing and online sets.The results are listed in Table <ref>, showing that deep features learned with sigmoid layer work better than the ones learned without sigmoid layer and NMI in “Good" percentage (p < 0.05) with comparable ADM and APD. §.§.§ Impact of loss layerIn this experiment, we compare the performance of our methods using 5 different loss functions, i.e., L_2, L_1, hinge <cit.>, Dice <cit.> and log <cit.> losses. We train 5 different networks for epicardium at ED using 15 training subjects, and test on 30 testing and online subjects of LV-09 database. The learning rates are tuned such that the loss can decently decrease. The testing accuracies reported in Table <ref> show that L_2 loss achieves marginally better accuracies compared with some traditional segmentation losses, e.g., hinge, Dice and log losses. We failed to train a converged parameter set for deep fusion net using cross-entropy loss,which seems to be counter-intuitive. 
But due to the specially designed linear voting layer before the loss layer, the cross-entropy loss causes unstable gradients in error back-propagation during network training (please refer to section 2.3).

§.§.§ Impact of search volume

We now evaluate the influence of the search volume R_nl, i.e., the non-local region for patch-based label fusion, on the segmentation performance. As shown in Table <ref>, we train five DFNs with different search volumes, whose sizes range from 1 to 9 with an interval of 2, on the epicardium at the ED frame using the 15 training subjects, and test on the 30 testing and online subjects of the LV-09 dataset. The results indicate that the segmentation accuracies are not sensitive to the search volume, and generally a larger search volume produces marginally better segmentation accuracies. This is reasonable since a larger search volume provides more patch candidates around the registered pixels for label fusion, resulting in robustness to inaccurate registrations.

§.§ Comparison on atlas selection strategy

In the training and testing phases of our DFNs, we can choose different atlas selection strategies, e.g., selecting different numbers of atlases, and using either the deep feature distance or NMI for atlas selection. In this section, we test the impact of different atlas selection strategies on the performance of DFN. We first compare the segmentation accuracies of our proposed methods (referred to as DFN_NMI and DFN) with respect to different numbers of selected atlases at the testing phase. The deep fusion nets learned in sections <ref> and <ref>, which use 5 randomly selected atlases according to a distribution proportional to the NMI between target image and atlases at the training phase, are utilized in this comparison. As shown in Fig. <ref>, we compare the performance of DFN_NMI and DFN on the SATA-13 dataset (Fig. <ref> and <ref>) and the LV-09 dataset (Fig. <ref> and <ref>) respectively. The experimental results show that atlas selection using the deep feature distance consistently works better than that using NMI for different numbers of selected atlases at the testing phase. Moreover, using larger numbers of atlases generally produces better ADM scores, but the accuracies saturate after around 11 atlases.

We next experimentally investigate the advantages of our random atlas selection strategy at the training phase, which has been discussed in section <ref>. Fixing the number of selected atlases to 5, we compare the performance of DFNs using three different atlas selection strategies in each forward-backward computation of the training process, i.e., selecting the top-5 atlases with the smallest deep feature distances (top5_DF), selecting the top-5 atlases with the largest normalized mutual information (top5_NMI), and randomly selecting atlases according to a distribution proportional to the NMI between target and atlas images (random5_NMI). We train DFNs with the above strategies for the epicardium at ED using the 15 training subjects, and test on the 30 testing and online subjects of the LV-09 dataset. For each strategy at the training phase, we attempt two different atlas selection strategies at the testing phase, i.e., selecting the top-5 atlases with the largest normalized mutual information (top5_NMI) and selecting the top-5 atlases with the smallest deep feature distance (top5_DF). The experimental results are listed in Table <ref>.
Compared with the strategies of top5_DF and top5_NMI at training phase, the strategy of using random5_NMI performs best in all metrics.Moreover, random5_NMI paired with top5_DF strategies at training and testing phases achieves the best accuracies among all the compared strategies. In Table <ref>, we also compare the performance of DFN by varying the numbers of selected atlases (denoted as K_0) at training phase using random atlas selection strategy, with K_0 atlases selected by deep feature distance at testing phase. The experimental results show that the segmentation accuracies are relatively stable to K_0, e.g., the accuracies are not significantly lower even when using 1 training atlas (i.e., K_0 = 1). This is probably because the random atlas selection strategy enforces that each target image at training phase can be paired with diverse atlases. Empirically, K_0 with values of 5 to 7 produces best accuracies. §.§ Cross-dataset evaluation To evaluate the generalization abilities of the learned DFNs across different datasets, we conduct experimental comparisons for network training and testing across two different datasets. More specifically, we train each DFN on one dataset and test it on the other one.When we test the DFN learned from training dataset on the testing dataset, we attempt two different atlas selection strategies for the testing subjects, i.e., selecting atlases from the subjects in the training dataset or in the testing dataset. We create the segmentation mask of myocardium for LV-09 dataset by subtracting the endocardium mask from the epicardium mask, to be compatible with the ground-truth segmentation mask provided in SATA-13 dataset. The experiments are performed on ED frame of both datasets, and the accuracy is evaluated by averaged 3D Dice metric (ADM) and averaged 3D Hausdorff distance (AHD) over all subjects at ED frame of the testing dataset.We first evaluate the cross-dataset performance using the strategy that selects atlases from the subjects in the training dataset for the testing subjects in the testing dataset,and the segmentation accuracies are listed in Table <ref> (train DFN on LV-09 and test it on SATA-13) and Table <ref>(train DFN on SATA-13 and test it on LV-09). The experimental results show that our learned DFNs achieve significantly higher ADMs (p < 0.001) and marginally better AHDs than traditional multi-atlas segmentation methods on both two datasets. Compared with the methods that perform training and testing on the same dataset, e.g., Table <ref> in section <ref>, the accuracies of our DFN for cross-dataset evaluation are relatively low, which is possibly because of the different styles of manual labels in these two datasets, as shown in Fig. <ref> (please refer to the figure for detailed descriptions of this difference). Figure <ref> shows the performance of deep fusion nets using different numbers of selected atlases respectively on two datasets in the testing process, indicating that our defined deep feature distance consistently works better than NMI in all metrics. We also evaluate the cross-dataset performance using the strategy that selects atlases from the subjects in the testing dataset for testing subjects. In Table <ref>, we present the results of our methods (denoted as DFN(crossDS) and DFN_NMI(crossDS)) by applying the DFN learned from LV-09 dataset to the SATA-13 dataset without fine-tuning. 
These accuracies are calculated using the same 5-fold cross validation as in section <ref>. The experimental results show that, if the subjects in the SATA-13 dataset are used as atlases for the testing subjects, the DFN learned from the LV-09 dataset achieves comparable results on the SATA-13 dataset in all metrics, compared with the segmentation methods trained and tested on the SATA-13 dataset itself. This justifies that our learned feature extraction subnet has good generalization ability across different datasets for LV segmentation.

§ CONCLUSIONS

In this work, we accomplish multi-atlas based LV segmentation with a specially designed convolutional neural network. Our network relies on atlas-to-target image registration, and aims to extract deep features for optimally fusing the warped atlas labels in a non-local patch-based label fusion framework. This deep fusion net naturally bridges the traditional registration-based multi-atlas approach and the modern deep learning approach, and provides a novel deep architecture for solving the tasks of label fusion and atlas selection in the multi-atlas segmentation approach. The proposed net was evaluated on the SATA-13 and LV-09 datasets for LV segmentation, and the results demonstrate that it achieves better accuracies in various metrics than the other LV segmentation methods on both datasets; the only method surpassing ours is the deep learning method using a strong manual prior <cit.>. We also extensively evaluate the performance of deep fusion nets under variants of the architecture, training loss, atlas selection strategy, cross-dataset training and testing, etc.

As a registration-based multi-atlas segmentation method, our deep fusion net relies on a good image registration method, and may fail when the atlas-to-target image registration is not accurate enough. For example, our deep fusion net using landmark-free registration works well on the SATA-13 dataset while producing unsatisfactory results on the LV-09 dataset. To improve its robustness to registration errors, first, a larger search volume can be utilized to incorporate more voxels around each registered voxel for label fusion. Second, it would be interesting to build a more robust label fusion subnet for multi-atlas segmentation via a statistical fusion strategy. Moreover, our present study is based on 2D slices, mainly due to the constraint of GPU memory. This is also a common issue for deep learning approaches when applied to 3D medical images. One common solution is to train the networks using 3D patches instead of full 3D images <cit.>. In future work, we are interested in improving the label fusion subnet by investigating more robust statistical fusion strategies, and in applying the proposed framework to 3D images. As a general framework, our deep fusion net can also be applied to other multi-atlas based applications, e.g., image synthesis <cit.>, brain segmentation <cit.>, etc.

§ ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11622106, 61472313, 11690011 and 61721002, and by the International Exchange Foundation of China NSFC and United Kingdom RS under grant No. 61711530242.

§ REFERENCES
http://arxiv.org/abs/1709.09641v2
{ "authors": [ "Heran Yang", "Jian Sun", "Huibin Li", "Lisheng Wang", "Zongben Xu" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170927172517", "title": "Neural Multi-Atlas Label Fusion: Application to Cardiac MR Images" }
§ INTRODUCTION

In spite of no direct indication for physics beyond the Standard Model of particle physics at the LHC, there are many good experimental and theoretical reasons to assume that the Standard Model (SM) is an incomplete description of our world. Lacking distinct measurements of new phenomena, two things become crucial for the analyses of SM extensions: the precise understanding of the SM as well as the consistent combination of all information on possible indirect signs of “New Physics” we can gather. In the following, I will present such an analysis for one of the most popular models that extend the SM, the Two-Higgs-Doublet model. It adds a second Higgs doublet to the SM particle content. In the past, many different constraints on this model have been explored; recently, the discovery of the 125 GeV scalar <cit.> as well as the absence of a second scalar resonance at the LHC have put strong bounds on the existence of a second Higgs doublet. Here, I will quantify these bounds in the light of updated LHC data, performing global fits to the Two-Higgs-Doublet models with a softly broken ℤ_2 symmetry (2HDM) of type I and II. Before going into detail, I will give an introduction to the fitting framework HEPfit, which also guarantees the consistent treatment of the SM part at the best precision available.

§ HEPFIT

As statistical setup for the global 2HDM fits I use the open-source C++ code HEPfit <cit.>, which is linked to the Bayesian Analysis Toolkit (BAT) <cit.>. HEPfit calculates flavour and Higgs observables as well as electroweak Z-pole observables, most of them at the best known precision. It can be linked to other programs as a library, but it also comes with an interface to BAT and can be used to perform global fits in the SM and several of its extensions (see also the other HEPfit contributions at EPS-HEP 2017 <cit.>). The release of the first fully documented HEPfit version is planned in the near future.

§ THE 2HDM

The most general formulation of Two-Higgs-Doublet models <cit.> is characterized by the Higgs potential

V_H^2HDM = m^2_11 Φ_1^†Φ_1 + m^2_22 Φ_2^†Φ_2 - ( m_12^2 Φ_1^†Φ_2 + H.c.) + λ_1/2 ( Φ_1^†Φ_1 )^2 + λ_2/2 ( Φ_2^†Φ_2 )^2 + λ_3 ( Φ_1^†Φ_1 )( Φ_2^†Φ_2 ) + λ_4 ( Φ_1^†Φ_2 )( Φ_2^†Φ_1 ) + [ λ_5/2 ( Φ_1^†Φ_2 )^2 + λ_6 ( Φ_1^†Φ_1 )( Φ_1^†Φ_2 ) + λ_7 ( Φ_2^†Φ_2 )( Φ_1^†Φ_2 ) + H.c. ],

where Φ_1 and Φ_2 are the two Higgs doublets. The corresponding Yukawa Lagrangian reads

L_Yukawa = -∑_j,k=1^3 [ Y^d,1_jk ( Q̅_j Φ_1 ) d_k + Y^d,2_jk ( Q̅_j Φ_2 ) d_k + Y^u,1_jk ( Q̅_j iσ_2 Φ^*_1 ) u_k + Y^u,2_jk ( Q̅_j iσ_2 Φ^*_2 ) u_k + Y^ℓ,1_jk ( L̅_j Φ_1 ) ℓ_k + Y^ℓ,2_jk ( L̅_j Φ_2 ) ℓ_k + H.c. ],

with the left-handed fermion fields Q and L and the right-handed fermion fields u, d and ℓ. While the HEPfit collaboration is working on an implementation of the most general model into HEPfit, I will focus here on the case without explicitly broken ℤ_2 symmetry. This implies λ_6=λ_7=Y^u,1=0 and either Y^d,1=Y^ℓ,1=0 (type I) or Y^d,2=Y^ℓ,2=0 (type II).[For the cases with Y^d,1=Y^ℓ,2=0 or Y^d,2=Y^ℓ,1=0 I refer to <cit.>.] For an example of a 2HDM fit to a more general Yukawa sector with HEPfit, see <cit.>. I furthermore assume that all couplings in V_H^2HDM are real and that the 125 GeV scalar is the lightest 2HDM scalar h. The other physical Higgs particles are the neutral scalar H, the neutral pseudoscalar A and the charged scalars H^±. In the fits, I assume that their masses as well as the soft ℤ_2 breaking scale |m_12| are below 1.5 TeV.
Apart from these masses, the 2HDM is defined by the two mixing angles α and β between these scalars, instead of which I will use tanβ and β-α. The SM parameters are fixed to their best fit values <cit.>.§ CONSTRAINTS As mentioned before, I want to emphasize the impact of the LHC measurements on the 2HDM parameter space. They can be divided into the signal strengths of the 125 GeV resonance h and the searches for heavier scalars. For both, I use all available data from the 7+8 TeV run and the 13 TeV run which was made public before the EPS-HEP 2017 conference: signal strengths of h decaying to γγ, bb, ττ, μμ, WW and ZZ <cit.> and the search for heavy neutral resonances in decays to bb, ττ, γγ, Zγ, ZZ, WW, hh and hZ <cit.> as well as the search for charged scalars with the final states τν and tb <cit.>. The details of the implementation of these observables into HEPfit can be found in <cit.>; here, I assume that the observed upper limits on the cross sections are identical with the expected ones, given that almost everywhere the two are compatible at the 2σ level.On top of the LHC data, I apply a conservative choice for the theoretical constraints: In order to require that the vacuum is stable, I need to guarantee that V_H^2HDM is bounded from below <cit.> and that the electroweak minimum is the global minimum <cit.>. Moreover, I demand that the eigenvalues of the scattering matrix of two-scalar-to-two-scalar scattering processes do not exceed 1 in magnitude <cit.>, and that the next-to-leading order contribution to these eigenvalues is not larger than its leading order value <cit.>. Finally, I combine the mentioned bounds with the remaining relevant constraints: The 2HDM should be in agreement with electroweak precision data, so I use the latest HEPfit values <cit.> for the Peskin-Takeuchi pseudo-observables S, T and U <cit.>. I also include the two most relevant flavour observables to the fit, namely B(b→ sγ) and Δ m_B_s <cit.>.§ FIT RESULTS To start with, I discuss the effect the h signal strengths have on the 2HDM parameters. Since the tree-level couplings of h to fermions and gauge bosons only depend on the 2HDM angles, it is obvious to study the β-α vs. tanβ plane. For both types, these planes are shown in Figure <ref> with the single contributions of all relevant h decays as well as their combination. While tanβ can have any value between 0.3 and 30, the difference between β and α is forced to be close to the so-called alignment limit of π/2 for which the h couplings become SM-like. The maximal deviation of this value depends on tanβ and the 2HDM type and is smaller compared to the fits to data from the 7+8 TeV run of the LHC (see e.g. <cit.>). In the low tanβ range, the γγ signal strengths prevail in type I, whereas for tanβ>8 the h→ ZZ measurements are the strongest. In type II, the most stringent bounds come from the ZZ and WW signal strengths. The tree-level coupling of the h boson to massive gauge bosons is type independent, but these constraints also depend on the fermion couplings in the h production and decay width; that is why the bounds are much stronger in type II. For the latter, it is also worth mentioning that the so-called wrong-sign Yukawa coupling solution is only compatible with all signal strengths in a very small region around tanβ=3.2 and β-α=1.In Figure <ref> I show the combination of all signal strengths transferred to the β-α vs. m_H plane; it is independent of the heavy Higgs mass. 
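To illustrate why the alignment limit β-α = π/2 makes the h couplings SM-like, the following small snippet evaluates the standard tree-level coupling modifiers (κ factors) of the light scalar h in the type I and type II 2HDM. These are textbook expressions rather than formulas quoted in this contribution, and the function names are illustrative.

import numpy as np

def kappa_factors(beta_minus_alpha, tan_beta, hdm_type="II"):
    """Tree-level coupling modifiers of h relative to the SM Higgs."""
    beta = np.arctan(tan_beta)
    alpha = beta - beta_minus_alpha
    k_V = np.sin(beta - alpha)              # hVV (V = W, Z), identical in both types
    k_u = np.cos(alpha) / np.sin(beta)      # up-type quarks
    if hdm_type == "I":
        k_d = k_l = k_u                     # all fermions couple to Phi_2
    else:                                   # type II: down-type quarks and leptons couple to Phi_1
        k_d = k_l = -np.sin(alpha) / np.cos(beta)
    return k_V, k_u, k_d, k_l

# In the alignment limit beta - alpha = pi/2 all modifiers approach 1:
print(kappa_factors(np.pi / 2, tan_beta=10.0, hdm_type="II"))

Away from the alignment limit the fermionic modifiers deviate from 1 with a tanβ-dependent rate, which is why the allowed range of β-α in the fits shrinks or widens with tanβ.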
I confront it with the regions disfavoured by all heavy Higgs searches, which mainly constrain H masses below 1 TeV, and the scenarios excluded by the theoretical bounds, which for m_H above 600 GeV push the 2HDM towards the alignment limit. While both, the heavy Higgs searches and the theoretical constraints hardly have an effect stronger than the signal strengths in this plane for type II, they are more relevant in type I, where the impact of the signal strengths is weaker. Finally, combining the LHC and theory constraints with the ones from flavour and Z-pole physics, one obtains the strips within the black contours. In both types, the global fit to all constraints only allows small deviations from the alignment limit; the type II H mass additionally gets a lower bound of around 750 GeV if one simultaneously fits the lower bound on the charged Higgs mass from b→ s γ transitions with the electroweak precision observables and the theory constraints (see also <cit.>).§ SUMMARY After introducing the multi-purpose code HEPfit, I show its application to the 2HDM types I and II: I discuss the impact of the h signal strengths and the searches for heavy Higgs bosons on the 2HDM parameter space. These measurements by the ATLAS and CMS collaborations yield strong bounds especially on the angle difference β-α. With increasing data, it is being pushed more and more to the value of π/2, for which the 125 Higgs resembles the SM Higgs. For even more up-to-date fits, also to the two remaining types of ℤ_2 symmetry I have not discussed here, I refer to <cit.>.I thank Debtosh Chowdhury for helpful discussions. This work was supported by the Spanish Government and ERDF funds from the European Commission (Grants No. FPA2014-53631-C2-1-P and SEV-2014-0398).JHEP
http://arxiv.org/abs/1709.09414v1
{ "authors": [ "Otto Eberhardt" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170927094304", "title": "Two-Higgs-doublet model fits with HEPfit" }