---
abstract: 'We present here a comprehensive investigation of the magnetic ordering in the Ni$_{50}$Mn$_{35}$In$_{15}$ composition. A concomitant first-order martensitic transition and magnetic ordering occurring in this off-stoichiometric Heusler compound at room temperature signifies the multifunctional character of this magnetic shape memory alloy. Unusual features are observed in the dependence of the magnetization on temperature that can be ascribed to a frustrated magnetic order. It is compelling to ascribe these features to a cluster-type description that may arise from inhomogeneity in the distribution of magnetic atoms. However, evidence is presented from our ac susceptibility, electrical resistivity and dc magnetization studies that competing ferromagnetic and antiferromagnetic order exists within the crystal structure of this system. We show that the excess Mn atoms that substitute for In atoms have a crucial bearing on the magnetic order of this compound. These excess Mn atoms are antiferromagnetically aligned with the other Mn atoms, which explains the peculiar dependence of the magnetization on temperature.'
address:
- ' Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005 India'
- ' Department of Physics, Goa University, Goa, 403 206 India'
author:
- 'P. A. Bhobe, K. R. Priolkar and A. K. Nigam'
title: 'Anomalous Magnetic Properties in Ni$_{50}$Mn$_{35}$In$_{15}$'
---
Introduction
============
Ni-Mn based Heusler alloys of the type Ni$_{50}$Mn$_{50-y}$X$_y$ (X = In, Sn and Sb) have recently been identified by Sutou [*et al.*]{} [@suto] as systems undergoing a martensitic transition in the ferromagnetic state. Such compounds have the tendency to display huge strains under application of moderate magnetic fields. This unique magnetoelastic property has numerous technological implications, leading to widespread applied research on magnetic shape memory, magnetocaloric and magnetoresistive effects. These unique properties are an outcome of the [*Martensitic Phase Transformation*]{} that takes place in a magnetically ordered state. The classic example of this class of materials is the Heusler alloy Ni$_2$MnGa [@web; @ulla; @tick; @sozi].
The martensitic transformation is a first-order structural phase change wherein the constituent atoms are displaced from their crystallographic positions, by varied amplitudes of displacement, in a highly correlated fashion. Upon transformation, the crystal structure changes from a highly symmetric cubic phase to a low-symmetry structure. Some ferromagnetic Heusler compounds are known to undergo a martensitic transition that is highly reversible, and these compounds can be cycled several times through the transformation temperature. The generic formula for Heusler compounds is X$_2$YZ. In the high-temperature phase, the structure can be viewed as four interpenetrating fcc lattices, with X atoms occupying the (0,0,0) and ($1\over2$,0,0) sites, Y atoms the ($1\over4$,$1\over4$,$1\over4$) sites and Z atoms the ($3\over4$,$1\over4$,$1\over4$) sites. A tetragonal or orthorhombic structure is observed in the low-temperature martensitic phase, with the displacement of atoms giving rise to modulations that extend over several crystal planes.
The recently discovered Ni$_{50}$Mn$_{50-x}$In$_{x}$ compounds belonging to the class of Heusler alloys have been in focus, especially for $x$ values having martensitic transition temperatures near room temperature. In particular, the stoichiometric composition Ni$_{50}$Mn$_{25}$In$_{25}$ is known to order ferromagnetically at T$_C$ $\sim$ 315 K [@web-book] and does not undergo a martensitic transition, whereas the Mn-rich off-stoichiometric compositions Ni$_{50}$Mn$_{50-x}$In$_{x}$ do exhibit a martensitic transition. This transition temperature (T$_M$) is highly sensitive to the Mn:In ratio, and values between 260 K and 302 K have been reported for In contents varying over the small range of 13 to 16% atomic concentration [@suto; @acet]. However, T$_C$ is not strongly influenced by the Mn:In ratio and varies only slightly with composition. A large negative magnetoresistance of over 50% is attainable in the Ni-Mn-In compound at moderate field strengths [@yu]. A giant isothermal entropy change takes place when the structural and magnetic transition temperatures coincide or lie in close vicinity to each other, resulting in an inverse magnetocaloric effect (MCE) near room temperature [@ali1; @pab-apl; @roy]. It may be noted that an [*inverse*]{} MCE is generally displayed by materials with antiferromagnetic order, and the magnitude obtained is generally quite low. It is therefore fundamentally important to gain thorough insight into the structural and magnetic aspects of this technologically important compound. With this aim, we study the nature of the magnetic interactions in the composition Ni$_{50}$Mn$_{35}$In$_{15}$, where a concomitant structural and magnetic transition occurs near room temperature. The key question to be addressed in the present study is the character of the various magnetic interactions that develop with varying distances between magnetic atoms in the two crystallographic phases.
We have carried out measurements of the magnetic and transport properties of the Ni$_{50}$Mn$_{35}$In$_{15}$ Heusler alloy, and the results are presented here.
Experimental
============
Ni$_{50}$Mn$_{35}$In$_{15}$ was prepared by arc-melting the constituent elements of 4N purity under an argon atmosphere. The button so obtained was annealed at 1000 K for 48 hours, followed by quenching in ice water. Subsequent energy-dispersive x-ray (EDX) analysis confirmed the composition to be close to nominal, with Ni = 49.26, Mn = 35.13 and In = 15.49. The ac susceptibility ($\chi_{ac}$) and four-probe resistivity measurements were performed using a Quantum Design Physical Properties Measurement System, while for the dc magnetization measurements Quantum Design SQUID magnetometers (MPMS-XL and MPMS SQUID-VSM) were employed. $\chi_{ac}$ versus temperature was measured in the presence of different dc fields and with the excitation frequency varied over three decades. The sample was cooled to the lowest measurement temperature (5 K) in a zero-field state and the data were recorded while warming up to $\sim$ 330 K. Magnetization as a function of field was measured by sweeping the magnetic field up to ${\pm}$ 5 T at various temperatures. Before each measurement, the sample was heated to 330 K and cooled in a zero-field state to the desired temperature. The resistivity data were recorded in the region 5 K to 380 K in the presence of 5 T and 9 T magnetic fields.
Results and Discussion
======================
The temperature dependence of the magnetization from 5 to 380 K, recorded while cooling in the zero-field state (nominal field 5 Oe), for Ni$_{50}$Mn$_{35}$In$_{15}$ is presented in figure \[vsm-zfc\]. It can be clearly seen that with decreasing temperature the magnetization rises abruptly at the ferromagnetic ordering temperature T$_C$ = 306 K, while an abrupt decrease in magnetization occurs at the martensitic phase transformation at T$_M$ = 302 K. The inset shows an enlarged view of these magnetic and martensitic transformations. $\chi_{ac}$ measured at a frequency [*f*]{} = 13 Hz in an ac field of 1 Oe is presented in figure \[acchi\]. The data show a very sharp peak at about 300 K, which is an outcome of the concomitant martensitic and magnetic transformations taking place in the compound.
As mentioned earlier, the martensitic transformation is a first-order structural transformation, taking place from a high-symmetry cubic phase to a low-symmetry structure. It thus involves a start temperature, at which the structure starts deforming, and a finish temperature, at which the transformation is complete. The variants of the new crystallographic phase that are formed in this region between the start and finish temperatures re-establish the magnetic interactions that had been present in the parent cubic phase. The difference in anisotropy strongly modifies the field dependence of the magnetization in these two phases [@albe]. For the present sample, the structure starts deforming at 302 K and the transformation completes by $\sim$ 270 K. However, it is interesting to note that the magnetization, or $\chi_{ac}$, attains an almost zero value after the structural transition is complete. This is an uncommon feature and implies that the magnetic interactions of this compound must be quite complex.
The $\chi_{ac}$ measured during the warm-up cycle displays yet another broad peak at T$^*$ = 170 K, while the dc magnetization (measured while cooling) shows a constantly increasing magnetization with a hump-like feature at the same temperature; the magnetization keeps increasing with the subsequent fall in temperature below T$^*$. The exact nature of T$^*$ is not clear at the moment. The first report on this family of compounds, reference [@suto], claims the occurrence of an additional martensite-to-martensite transformation at low temperatures. They observe some intricate changes similar to T$^*$ in their low-temperature magnetization data (M(T) with H = 500 Oe) that are taken as signatures of the occurrence of a second structural transformation. However, a similar feature observed in the magnetization measurements of reference [@acet] was ascribed by those authors to the magnetic ordering of the martensitic phase. Thus, it becomes essential to verify the nature of T$^*$ and also to investigate the reason for the drastic fall of $\chi_{ac}$ to an almost zero value upon martensitic transformation. Hence $\chi_{ac}(T)$ was measured in constant dc fields varying from 0 to 500 Oe, and the plots are presented in figure \[chi-field\]. With increasing dc field, the peak at T$^*$ broadens and decreases in magnitude. At 500 Oe, the peak smears out completely and a small hump begins to appear at a lower temperature, as can be seen in the inset of figure \[chi-field\]. Such a dependence of the peak at T$^*$ on dc magnetic field implies that other magnetic interactions exist in this system that compete with the long-range ferromagnetic order. Such complicated behaviour, with competing ferromagnetic (FM) and antiferromagnetic (AFM) interactions, generally results in a frustrated magnetic order.
A possibility that may give rise to competing FM/AFM interactions in Ni$_{50}$Mn$_{35}$In$_{15}$ is an inhomogeneous distribution of the constituent elements, forming regions of varied stoichiometries. Since In is the least abundant element in this compound, regions rich in Ni$_2$MnIn and NiMn could form. It is well established that Ni$_2$MnIn is ferromagnetic in nature [@web] while NiMn displays antiferromagnetism [@kas]. If such a possibility existed in the present compound, the random distribution of such magnetic entities would lead to cluster formation and eventually result in freezing of the resultant magnetic moment. It is important to mention here that the present compound shows a robust martensitic phase transformation at room temperature, while neither Ni$_2$MnIn nor NiMn displays this property. Hence there is little reason to suspect that Ni$_{50}$Mn$_{35}$In$_{15}$ lacks compositional uniformity. Nonetheless, to further investigate the possibility of FM/AFM interactions arising from inhomogeneous mixing, $\chi_{ac}$ was measured at varying frequencies. The temperature dependence of $\chi_{ac}$ at different frequencies [*f*]{} (13 to 1333 Hz) is presented in figure \[ac-freq\]. It is evident from these plots that the peak position at T$^*$ does not shift in temperature with changing frequency. This observation rules out the possibility of T$^*$ being a time-dependent phenomenon, and hence it cannot be related to freezing of the magnetic moment as in a spin-glass state.
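A common quantitative form of this frequency test is the Mydosh parameter, the relative shift of the freezing temperature per decade of excitation frequency. The sketch below uses only the frequencies and peak temperature quoted above; since the paper reports no measurable shift of T$^*$ between 13 Hz and 1333 Hz, the parameter vanishes, far below the $\sim$0.005-0.01 typical of canonical spin glasses (the second pair of temperatures in the usage line is purely illustrative, not measured data).

```python
import math

def mydosh_parameter(tf_low_f, tf_high_f, f_low, f_high):
    """Relative freezing-temperature shift per decade of frequency:
    K = dTf / (Tf * d(log10 f)). Canonical spin glasses: K ~ 0.005-0.01;
    superparamagnetic clusters: K ~ 0.1."""
    return (tf_high_f - tf_low_f) / (tf_low_f * (math.log10(f_high) - math.log10(f_low)))

# T* peak stays at 170 K at both 13 Hz and 1333 Hz (observation from the text)
K = mydosh_parameter(170.0, 170.0, 13, 1333)
print(K)  # -> 0.0, inconsistent with spin-glass freezing
```

A hypothetical shifting peak, e.g. `mydosh_parameter(20.0, 21.0, 10, 1000)`, would give a non-zero value of 0.025, illustrating the contrast with the present data.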
The shape of the hysteresis loop generally provides a better understanding of the magnetic ground state of a material. Hence M(H) loops at select temperatures were obtained by heating the sample to the paramagnetic region (350 K) and cooling to the desired temperature in a zero-field state. As can be seen from figure \[mart-loop\], the M(H) loop at 300 K (and 290 K) displays a very complex behaviour. The initial steep rise in M(H) for small field values demonstrates the ferromagnetic character of the sample. With the increase in field to intermediate values, a metamagnetic transition is observed, as depicted in the inset. In the case of ferromagnetic shape memory alloys, M(H) measurements in the martensitic transformation region do display a metamagnetic transition due to reorientation of the already formed martensite variants. However, the type of metamagnetism seen in the present case is quite unusual. Also, there is a considerable increase in the saturation magnetization upon cycling the sample through the magnetic field. The mechanism driving the transition between the two metastable states cannot be explained by a simple reorientation of an existing martensite component, but may be connected to the nucleation of an additional magnetic phase. It is rather difficult to attribute an exact cause to the metamagnetic transition. Further, the width of the hysteresis almost goes to zero at H = 0. Such hysteresis has been considered earlier as a possible signature of a field-induced first-order metamagnetic transition from AFM to FM in cubic Laves-phase Co-doped CeFe$_2$ alloys [@N-ali]. Also, with the drop in temperature, there is a decrease in the overall saturation magnetization seen in the M(H) at 290 K. This observation is in agreement with the $\chi(T)$ plot, where the susceptibility decreases with decreasing temperature in this temperature region and starts building up below 240 K with a peak at T$^*$. Thus M(H) was measured in the vicinity of T$^*$.
Initially the magnetization rises sharply at small field values, indicating a ferromagnetic order. However, it does not saturate at high fields as expected for a typical ferromagnet. Also, interesting features are seen in the low-field region of M(H), as displayed in figure \[all-loop\]. A small hysteresis is observed at 170 K and 160 K. For the M(H) at 100 K and below, the virgin curve initially shows a linear rise in magnetization with field up to $\sim$ 500 Oe; thereafter the slope of the curve changes and it lies outside the hysteresis loop. Such a feature resembles that of a field-induced transition. We can then define H$_{crit} \sim$ 500 Oe as a crossover field. This value of the crossover field remains roughly the same for all the subsequent M(H) curves. Notably, this value of the crossover field is the same as the dc field applied in the $\chi_{ac}$(T) measurements at which the peak at T$^*$ smears out completely. The overall features observed in the M(H) curves clearly demonstrate that the Ni-Mn-In system does not show a pure ferromagnetic order. Hence $T^*$ cannot be assigned as the FM ordering temperature of the martensitic phase. The competing magnetic interactions present in the system do not allow the ferromagnetic state to stabilize. Moreover, the magnitude of the coercive field is very small and does not increase with the fall in temperature. This suggests that there is no clustering of regions having ferromagnetic character. It thus implies that the possibility of segregation of magnetic entities forming regions of varied stoichiometry should be ruled out.
M(H) at 10 K and 5 K shows shifted loops with nearly zero coercive magnetization at H = 0. Typically, pinching of the hysteresis loop of this nature along the magnetic field axis has been observed in systems like small coated particles, inhomogeneous materials, thin films and bilayers [@nog]. This effect is usually observed when the FM/AFM system is either zero-field cooled in a demagnetized state or field cooled from above the Néel temperature of the antiferromagnet [@adv]. It is related to an FM/AFM exchange-coupling interaction across the interface and is believed to be due to the formation and pinning of domains either in the FM or in the AFM [@zha; @mil]. The observation of such a feature in the low-temperature magnetization of the present sample suggests a canted spin structure with ferromagnetic and antiferromagnetic spin components in zero magnetic field, the ferromagnetism being field-induced. Hence the reduction of the magnetization to almost zero upon martensitic transformation and the T$^*$ signature are apparently due to incipient antiferromagnetic coupling inherent in the unit cell of the compound. The two interactions compete with each other throughout the measurement temperature range, giving rise to the complex behaviour of the magnetic properties of the Ni-Mn-In system.
Figure \[res\](a) shows the temperature dependence of the resistivity measured in the region 5 K to 380 K. The most striking feature here is the large jump in the resistivity of Ni$_{50}$Mn$_{35}$In$_{15}$ at room temperature. When viewed from the high-temperature side, a change in slope of $\rho(T)$ is seen at $\sim$ 310 K. This is the signature of ferromagnetic ordering. At about 302 K, the resistivity sharply increases, resulting in a large step-like feature at the start of the martensitic transition. With a further fall in temperature, the overall resistivity does not change much down to the lowest measurement temperature.
A small step or kink in resistivity has routinely been observed for several other martensitic materials. In fact, such a feature, along with the associated thermal hysteresis, has traditionally been considered a signature of the martensitic transition (see for example reference [@vasil]). The reason for the step/kink is believed to be the trapping of electrons in the nested regions of the Fermi surface due to the long-range structural ordering formed as a consequence of the martensitic phase change, with the nesting vector corresponding to the modulation of the martensite formed [@zhao; @wil; @veli]. However, after the transformation is complete, $\rho$ in the martensitic phase can be extrapolated to match the curve obtained before the transition occurred. What makes the observed step-like feature special in Ni$_{50}$Mn$_{35}$In$_{15}$ is the accompanying change in the magnitude of the resistivity: the change in $\rho(T)$ is $\sim$40% in this case.
The anomalous feature in the $\rho(T)$ of Ni$_{50}$Mn$_{35}$In$_{15}$ resembles that observed in intermetallic compounds undergoing a transition to an AFM state, a typical example being CeFe$_2$ and its substitutional derivatives [@N-ali; @garde]. In such cases, the rise in resistivity below the AFM transition is attributed to the formation of a super-zone gap. Due to the establishment of the AFM sublattice, the underlying zone boundaries get redefined, giving rise to a gap at the Fermi level. The conduction electrons thus have to overcome this gap, resulting in a large anomaly in the transport properties of the AFM state. The magnetic properties of Ni$_{50}$Mn$_{35}$In$_{15}$ already reveal the possibility of AFM interactions being present in the system along with the underlying FM ones. Thus the anomalous feature in $\rho(T)$ and all the other aforementioned measurements indicate that the AFM interactions develop as the system heads towards the structural transition. To ascertain this further, we measured the resistivity as a function of temperature in constant magnetic fields. The $\rho(T)$ curves in the presence of different magnetic fields show a behaviour very similar to that at zero field. However, the martensitic transition temperature shifts to a lower value with increasing magnetic field. The $\rho(T)$ curves at magnetic fields of 5 T and 9 T are also shown in figure \[res\](a). This trend implies that the magnetic field suppresses the structural phase transition temperature and stabilizes the FM phase. This observation is in congruence with figure \[res\](b), where a decrease in T$_M$ is observed in the field-cooled data recorded at a 1 T field. Also, the hysteresis observed between the field-cooled and field-warmed magnetization data further indicates the first-order nature of the martensitic transformation. The antiferromagnetic interaction is seen to couple strongly with the martensitic phase transformation.
Magnetism in Heusler alloys has always been fascinating and continues to attract a lot of research interest to date [@saso]. As follows from the magnetic properties of Ni$_{50}$Mn$_{35}$In$_{15}$, the system exhibits a complex interplay between ferro- and antiferromagnetic order. The competition between these two magnetic interactions exists through the entire temperature range of measurement up to T$_C$. Microscopically, the formation of the competing phases can be related to the interatomic exchange interactions, which are dominated by the separation between atoms and the change in conduction electron density. In Mn-based Heusler systems, the spatial separation between neighbouring Mn atoms is comparatively large ($\sim$ 4 Å), so a considerable direct overlap of Mn [*3d*]{} states is not observed. Consequently, an indirect RKKY-type exchange mediated via the conduction electrons of the system is often invoked to describe the magnetic ordering in these materials [@kluber]. In addition, if such systems undergo a martensitic transition, the change in interatomic distances upon transformation is expected to strongly modify the magnetic interactions.
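For context, the RKKY coupling invoked here oscillates in sign with the Mn-Mn separation $R$; in the standard free-electron form (a textbook expression, not fitted to the present compound),

```latex
J_{\mathrm{RKKY}}(R)\;\propto\;
\frac{2k_{F}R\,\cos(2k_{F}R)-\sin(2k_{F}R)}{(2k_{F}R)^{4}},
```

where $k_F$ is the Fermi wave vector. A change in $R$ across the martensitic transition can therefore change not only the magnitude but also the sign of the effective Mn-Mn exchange, which is the microscopic rationale behind the FM/AFM competition discussed above.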
Amongst the stoichiometric Ni$_2$MnZ (Z = Ga, In, Sn, Sb) compounds, only Ni$_2$MnGa undergoes a martensitic transformation. Experimentally, all of them are ferromagnetic and have similar values of the Curie temperature. In the case of Ni$_2$MnIn, the ferromagnetic ordering takes place at T$_C$ = 315 K [@web-book], and the martensitic transformation is observed only in the non-stoichiometric, Mn-rich compositions. The ordered crystal structure in such Mn-rich compositions contains excess Mn atoms at the In site in addition to the regular ($1\over4$,$1\over4$,$1\over4$) sites. We have previously studied the local crystal structure of these systems in the cubic and martensitic phases and obtained the exact interatomic separations between constituent atoms in both phases [@pab-jpd]. In the cubic phase, Ni$_{50}$Mn$_{35}$In$_{15}$ has a lattice parameter of $\sim$ 6.04 Å. Accordingly, the Mn atoms which substitute for In atoms develop an additional Mn-Mn interaction at $\sim$ 2.91 Å with a coordination number of 4.2, while the separation between Mn atoms at their own site is 4.27 Å with coordination number 12. Since the magnetic coupling between atoms is governed by the interatomic separation, the Mn-Mn interactions at 4.27 Å are FM in nature, while the coupling between Mn atoms $\sim$ 2.91 Å apart must be AFM. The AFM nature of such correlations has previously been anticipated from high-resolution neutron powder diffraction measurements on a similar composition, Ni$_{50}$Mn$_{34}$Sn$_{16}$ [@brown-sn]. However, it is important to note that no additional diffraction peaks corresponding to an antiferromagnetic sublattice were observed in those measurements.
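The crystallographic distances quoted above can be cross-checked from the cubic lattice parameter alone. A minimal sketch, assuming the ideal site positions given in the introduction (the slightly shorter $\sim$ 2.91 Å value quoted from the local-structure study reflects measured local relaxation rather than the ideal positions):

```python
import math

a = 6.04  # cubic lattice parameter of Ni50Mn35In15 in angstroms (from the text)

# Mn on its regular fcc sublattice: nearest neighbours at a/sqrt(2)
d_mn_mn_regular = a / math.sqrt(2)

# Mn at (1/4,1/4,1/4) to excess Mn at the In site (3/4,1/4,1/4): ideal separation a/2
d_mn_mnIn_ideal = a / 2

print(f"regular Mn-Mn: {d_mn_mn_regular:.2f} A")       # ~4.27 A, FM coupling
print(f"Mn-Mn(In site), ideal: {d_mn_mnIn_ideal:.2f} A")  # ~3.02 A ideal; EXAFS gives ~2.91 A
```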
Moreover, it is also known from the local structure study of Ni$_{50}$Mn$_{35}$In$_{15}$ that the cubic structure becomes highly unstable, with the Mn atoms moving with the largest amplitude of displacement, as T$_M$ is approached. Such vibration of the Mn atoms about their crystallographic positions weakens the FM sublattice, leading to its collapse at the structural phase transformation. The subsequent change in interatomic separations further uncovers the AFM sublattice interactions. These AFM interactions drive the system to a nearly zero magnetic moment.
Once the structural transformation is complete and the martensitic phase is fully established, the constituent atoms cease to move vigorously. This is reflected in the low-temperature bond distances and the associated thermal mean-square variations. The change in crystal symmetry generates a Mn-Mn bond at 2.89 Å with coordination number 4.2, while the Mn-Mn bonds at 4.27 Å in the cubic phase split into two correlations: 4.19 Å with coordination number 8 and 4.4 Å with coordination number 4. The underlying magnetic interactions also get re-established with such a change, and the associated FM/AFM interactions start competing, resulting in an anomaly like that at T$^*$ in the temperature-dependent magnetization measurements. The relative strength of the two magnetic interactions depends on the magnitude of the applied field. The average Mn-Mn distance for the ferromagnetic interaction (i.e. over the 4.19 Å and 4.4 Å bonds) in the martensitic phase equals the Mn-Mn distance in the cubic phase. Thus, although the AFM interactions intensify during the martensitic transition, the FM phase emerges upon completion of the structural phase transformation and continues to dominate. Small magnetic fields are sufficient to strengthen the FM order further and help restore it. The two magnetic interactions compete for dominance and continue to co-exist.
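The statement that the average martensitic Mn-Mn distance matches the cubic one follows from a coordination-weighted mean of the two split bonds, using the distances and coordination numbers quoted above:

```python
# Split ferromagnetic Mn-Mn correlations in the martensitic phase (from the text)
bonds = [(4.19, 8), (4.40, 4)]  # (distance in angstroms, coordination number)

# Coordination-weighted average distance
avg = sum(d * n for d, n in bonds) / sum(n for _, n in bonds)
print(f"average Mn-Mn distance: {avg:.2f} A")  # ~4.26 A, close to the 4.27 A cubic value
```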
Conclusion
==========
In conclusion, for the non-stoichiometric Ni-Mn-In compounds the substituent has a crucial bearing on the magnetic properties. Based on our ac susceptibility and dc magnetization studies, it is clear that the origin of the anomalous magnetization behaviour of Ni$_{50}$Mn$_{35}$In$_{15}$, and of the exotic properties associated with it, is the competing FM/AFM magnetic interactions. The AFM interactions manifest as the system heads towards the structural instability. In the low-temperature region, once the structural transformation is complete, the short-range antiferromagnetic correlations are easily suppressed by magnetic fields exceeding a few hundred Oersted.
References {#references .unnumbered}
==========
[100]{}

Sutou Y, Imano Y, Koeda N, Omori T, Kainuma R, Ishida K and Oikawa K, 2004 [*Appl. Phys. Lett.*]{} [**85**]{}, 4358

Webster P J, Ziebeck K R A, Town S L and Peak M S, 1984 [*Philos. Mag.*]{} [**49**]{}, 295

Ullakko K, Huang J K, Kanter C, O’Handley R C and Kokorin V V, 1996 [*Appl. Phys. Lett.*]{} [**69**]{}, 1966

Tickle R and James R D, 1999 [*J. Magn. Magn. Mater.*]{} [**195**]{}, 627

Sozinov A, Likhachev A A, Lanska N and Ullakko K, 2002 [*Appl. Phys. Lett.*]{} [**80**]{}, 1746

Webster P J and Ziebeck K R A, 1988 [*Alloys and Compounds of d-Elements with Main Group Elements*]{}, P. 2, edited by Wijn H R J, Landolt-Börnstein, New Series, Group III, [**19/c**]{} (Springer, Berlin) 75–184

Krenke T, Acet M, Wassermann E F, Moya X, Manosa L and Planes A, 2006 [*Phys. Rev.*]{} B [**73**]{}, 174413

Yu S Y, Liu Z H, Liu G D, Chen J L, Cao Z X, Wu G H, Zhang B and Zhang X X, 2006 [*Appl. Phys. Lett.*]{} [**89**]{}, 162503

Pathak A K, Khan M, Dubenko I, Stadler S and Ali N, 2007 [*Appl. Phys. Lett.*]{} [**90**]{}, 262504

Bhobe P A, Priolkar K R and Nigam A K, 2007 [*Appl. Phys. Lett.*]{} [**91**]{}, 242503

Sharma V K, Chattopadhyay M K and Roy S B, 2007 [*J. Phys. D: Appl. Phys.*]{} [**40**]{}, 1869

Albertini F, Morellon L, Algarabel P A, Ibarra M R, Pareti L, Arnold Z and Calestani G, 2001 [*J. Appl. Phys.*]{} [**89**]{}, 5614

Kasper J S and Kouvel J S, 1959 [*J. Phys. Chem. Solids*]{} [**11**]{}, 231

Ali N and Zhang X, 1992 [*J. Phys.: Condens. Matter*]{} [**4**]{}, L351

Nogués J and Schuller I K, 1999 [*J. Magn. Magn. Mater.*]{} [**192**]{}, 203

Brück S, Sort J, Baltz V, Suriñach S, Muñoz J S, Dieny B, Baró M D and Nogués J, 2005 [*Adv. Mater.*]{} [**17**]{}, 2978

Zhang S and Li Z, 2001 [*Phys. Rev.*]{} B [**65**]{}, 054406

Miltényi P, Gierlings M, Keller J, Beschoten B, Güntherodt G, Nowak U and Usadel K D, 2000 [*Phys. Rev. Lett.*]{} [**84**]{}, 4224

Vasil’ev A N, Buchel’nikov V D, Takagi T, Khovailo V V and Estrin E I, 2003 [*Phys. Usp.*]{} [**46**]{}, 559 and references therein

Zhao G L, Leung T C, Harmon B N, Keil M, Müllner M and Weber W, 1989 [*Phys. Rev.*]{} B [**40**]{}, 7999

Wilkinson I, Hughes R J, Major Zs, Dugdale S B, Alam M A, Bruno E, Ginatempo B and Giuliano E S, 2001 [*Phys. Rev. Lett.*]{} [**87**]{}, 216401

Velikokhatnyi O I and Naumov I I, 1999 [*Phys. Solid State*]{} [**41**]{}, 617

Garde C S, Ray J and Chandra G, 1990 [*Phys. Rev.*]{} B [**42**]{}, 8643

Sasioglu E, Sandratskii L M and Bruno P, 2008 [*Phys. Rev.*]{} B [**77**]{}, 064417

Kübler J, Williams A R and Sommers C B, 1983 [*Phys. Rev.*]{} B [**28**]{}, 1745

Bhobe P A, Priolkar K R and Sarode P R, 2008 [*J. Phys. D: Appl. Phys.*]{} [**41**]{}, 045004

Brown P J, Grandy A P, Ishida K, Kainuma R, Kanomata T, Neumann K-U, Oikawa K, Ouladdiaf B and Ziebeck K R A, 2006 [*J. Phys.: Condens. Matter*]{} [**18**]{}, 2249
---
abstract: 'In this paper, a preliminary correspondence between the thermodynamic curvature and the isoperimetric theorem is established for a $4$-dimensional ultraspinning black hole. We find that the thermodynamic curvature of the ultraspinning black hole is negative, which means that the ultraspinning black hole is likely to present an attractive interaction between its molecules phenomenologically, if we accept the analogical observation that the thermodynamic curvature reflects the interaction between molecules in a black hole system. Meanwhile, we obtain a general conclusion that the thermodynamic curvature of the extreme super-entropic black hole has a (positive or negative) remnant approximately proportional to the reciprocal of the entropy of the black hole.'
author:
- 'Zhen-Ming Xu$^{}$[^1]'
title: The correspondence between thermodynamic curvature and isoperimetric theorem from ultraspinning black hole
---
Introduction
============
A very interesting and challenging problem in black hole thermodynamics is the volume of the black hole. Although various versions of the black hole volume have been discussed [@Parikh2006; @Grumiller2006; @Ballik2010; @Ballik2013; @MacDonald2014; @Brenna2015; @Christodoulou2015; @Dolan2011a; @Kubiznak2017; @Dolan2011b], there is no unified description yet. In understanding the volume of black holes, especially AdS black holes, the application of the isoperimetric theorem deepens our mathematical understanding of black hole thermodynamics insofar as it places a constraint on the thermodynamic volume and entropy of an AdS (or dS) black hole [@Cvetic2011; @Dolan2013]. The isoperimetric theorem is an ancient mathematical problem, which states that among all simple closed curves of a given length on a plane, the circle encloses the largest area. With the proposal of black hole area entropy (in the natural unit system, $S=A/4$, where $S$ is the entropy of the black hole and $A$ is the area of the event horizon) [@Bekenstein1973; @Bardeen1973] and the introduction of the extended phase space [@Kastor2009], Cvetič, Gibbons, Kubizňák, and Pope creatively applied the theorem to AdS black hole systems and conjectured that, in general, for any $d$-dimensional asymptotically AdS black hole, the thermodynamic volume $V$ and entropy $S$ satisfy the reverse isoperimetric inequality [@Cvetic2011], $$\begin{aligned}
\label{ratio}
\mathcal{R}=\left(\frac{(d-1)V}{\omega_{d-2}}\right)^{\frac{1}{d-1}}\left(\frac{\omega_{d-2}}{4S}\right)^{\frac{1}{d-2}}\geq 1,\end{aligned}$$ where $\omega_n=2\pi^{(n+1)/2}/\Gamma\left[(n+1)/2\right]$ is the standard volume of the round unit sphere, and the equality is attained for the (charged) Schwarzschild-AdS black hole. Physically, the above isoperimetric ratio indicates that, at a given thermodynamic volume, the entropy is maximized by the (charged) Schwarzschild-AdS black hole. Up to now, the ratio has been verified for a variety of black holes with horizons of spherical topology and for black rings with horizons of toroidal topology [@Altamirano2014]. A black hole which violates the reverse isoperimetric inequality, i.e., $\mathcal{R}<1$, is called a super-entropic black hole [@Mann2018]. To date, there are only two known super-entropic black holes. One is the $(2+1)$-dimensional charged Banados-Teitelboim-Zanelli (BTZ) black hole, which is the simplest [@Frassino2015; @Johnson2019a; @Johnson2019b; @Mo2017; @Xu2020a]. The other is a kind of ultraspinning black hole [@Hennigar2015a; @Hennigar2015b; @Appels2019].
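As a sanity check of the ratio above, the equality case can be reproduced numerically for the $4$-dimensional Schwarzschild-AdS black hole, whose thermodynamic volume and entropy are $V = 4\pi r_+^3/3$ and $S = \pi r_+^2$ (a minimal sketch; the horizon radius used below is an arbitrary test value):

```python
import math

def omega(n):
    """Standard volume of the round unit n-sphere: 2*pi^((n+1)/2)/Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def iso_ratio(V, S, d):
    """Isoperimetric ratio R for a d-dimensional AdS black hole."""
    w = omega(d - 2)
    return ((d - 1) * V / w) ** (1 / (d - 1)) * (w / (4 * S)) ** (1 / (d - 2))

r = 2.7  # horizon radius, arbitrary test value
V = 4 * math.pi * r**3 / 3
S = math.pi * r**2
print(iso_ratio(V, S, 4))  # -> 1.0: equality saturated; R < 1 would be super-entropic
```

The ratio comes out to exactly 1 independently of `r`, since the $r$-dependence cancels between the volume and entropy factors.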
We now turn to another important concept, the thermodynamic curvature. It is at present the central physical quantity for probing the micro-mechanism of black holes phenomenologically, starting from the axioms of thermodynamics. Its theoretical basis is thermodynamic geometry, which mainly uses the Hessian matrix structure to represent the theory of thermodynamic fluctuations [@Ruppeiner1995]. Hitherto, without an underlying theory of quantum gravity, any exploration of the microscopic structure of black holes is bound to involve some speculative assumptions. Owing to the well-established framework of black hole thermodynamics, and as an analogy and a primary description, thermodynamic geometry should be regarded as a probe for phenomenologically or qualitatively extracting certain information about the interactions of black holes. In this spirit, one can take the empirical observation from ordinary thermodynamics, that negative (positive) thermodynamic curvature is associated with attractive (repulsive) microscopic interactions, to be applicable to black hole systems as well [@Ruppeiner2014]. Based on this empirical analogy, the primary microscopic information of the BTZ black hole, the (charged) Schwarzschild(-AdS) black hole, the Gauss-Bonnet(-AdS) black hole, higher-dimensional black holes and other black holes has been explored [@Ruppeiner2008; @Wei2015; @Wei2019a; @Wei2019b; @Wei2019c; @Miao2018a; @Miao2018b; @Miao2019a; @Miao2019b; @Aman2003; @Mirza2007; @Dehyadegari2017; @Cai1999; @Zhang2015a; @Zhang2015b; @Liu2010; @Xu2019a; @Xu2019b; @Niu2012; @Wang2019; @Ghosh2019; @Bhattacharya2017; @Chen2019; @Guo2019; @Mansoori2014; @Mansoori2015; @Sarkar2006; @Quevedo2009; @Akbar2011; @Mohammadzadeh2018; @Ghosh2020; @Xu2020b].
In this paper, we calculate the thermodynamic curvature of the $4$-dimensional ultraspinning black hole and explore the correspondence between the thermodynamic curvature and the isoperimetric theorem for super-entropic black holes. First, the thermodynamic curvature of the ultraspinning black hole has never been analyzed, and we want to fill this gap. Second, the isoperimetric ratio (\[ratio\]) has so far been simply an observation made in the literature, with no physical reason given for the bound; hence we try to understand this isoperimetric ratio from the point of view of thermodynamic geometry. Third, in our previous work [@Xu2020a] on the thermodynamic curvature of the $(2+1)$-dimensional charged BTZ black hole, we gave a preliminary conjecture that [*when the isoperimetric ratio is saturated ($\mathcal{R}=1$), the thermodynamic curvature of an extreme black hole tends to infinity, while for super-entropic black holes ($\mathcal{R}<1$) the thermodynamic curvature of the extreme black hole goes to a finite value*]{}. In the present paper, through the analysis of the thermodynamic curvature of the second known super-entropic black hole, we want to verify and refine the previous conjecture and establish a new correspondence, namely the correspondence between the thermodynamic curvature and the isoperimetric theorem of AdS black holes.
Thermodynamic properties of ultraspinning black hole {#sec2}
====================================================
We start with the $4$-dimensional Kerr-AdS black hole and write its metric in the standard Boyer-Lindquist form [@Dolan2011a; @Hennigar2015a] $$\begin{aligned}
ds^2=-\frac{\Delta_a}{\Sigma_a}\left[dt-\frac{a\sin^2 \theta}{\Xi}d\phi\right]^2+\frac{\Sigma_a}{\Delta_a}dr^2+\frac{\Sigma_a}{\Pi}d\theta^2+\frac{\Pi\sin^2\theta}{\Sigma_a}\left[adt-\frac{r^2+a^2}{\Xi}d\phi\right]^2\end{aligned}$$ where $$\begin{aligned}
\Sigma_a &=& r^2+a^2\cos^2\theta, \quad \Xi=1-\frac{a^2}{l^2}, \quad \Pi=1-\frac{a^2}{l^2}\cos^2\theta,\nonumber\\
\Delta_a &=& (r^2+a^2)\left(1+\frac{r^2}{l^2}\right)-2mr,\end{aligned}$$ where $m$ is related to the black hole mass, $l$ is the AdS radius, connected with the negative cosmological constant $\Lambda$ via $\Lambda=-3/l^2$, and $a$ is the rotation parameter.
To avoid a singular metric in the limit $a\rightarrow l$, Refs. [@Hennigar2015a; @Hennigar2015b] define a new azimuthal coordinate $\psi=\phi/\Xi$ and identify it with period $2\pi/\Xi$ to prevent a conical singularity. After these coordinate transformations, taking the limit $a\rightarrow l$ yields the metric of the ultraspinning black hole [@Hennigar2015a; @Hennigar2015b] $$\begin{aligned}
ds^2=-\frac{\Delta}{\Sigma}\left[dt-l\sin^2\theta d\phi\right]^2+\frac{\Sigma}{\Delta}dr^2+\frac{\Sigma}{\sin^2\theta}d\theta^2+\frac{\sin^4\theta}{\Sigma}\left[ldt-(r^2+l^2)d\phi\right]^2\end{aligned}$$ where $$\begin{aligned}
\Sigma=r^2+l^2\cos^2\theta, \quad \Delta=\left(l+\frac{r^2}{l}\right)^2-2mr,\end{aligned}$$ with the horizon $r_h$ defined by $\Delta(r_h)=0$. In addition, since the new azimuthal coordinate $\psi$ is noncompact, Refs. [@Hennigar2015a; @Hennigar2015b] choose to compactify it by requiring that $\psi\sim\psi+\mu$, with a dimensionless parameter $\mu$. For the horizon of this black hole to exist, the mass is required to have a minimum, attained by the extreme black hole, $$\begin{aligned}
\label{oex}
m\geq m_0=\frac{8}{3\sqrt{3}}l, \qquad r_0=\frac{l}{\sqrt{3}}.\end{aligned}$$ Correspondingly, the first law of ultraspinning black hole thermodynamics is [@Hennigar2015a; @Hennigar2015b] $$\label{olaw}
dM=TdS+VdP+\Omega dJ,$$ where the basic thermodynamic properties, i.e., enthalpy $M$, temperature $T$, entropy $S$, thermodynamic pressure $P$, thermodynamic volume $V$, angular momentum $J$ and angular velocity $\Omega$, of ultraspinning black hole associated with horizon radius $r_h$ are [@Hennigar2015a; @Hennigar2015b] $$\begin{aligned}
\label{properties}
M&=&\frac{\mu m}{2\pi}, \quad J=Ml, \quad \Omega=\frac{l}{r_h^2+l^2},\nonumber\\
S&=&\frac{\mu(r_h^2+l^2)}{2}, \quad T=\frac{1}{4\pi r_h}\left(\frac{3r_h^2}{l^2}-1\right),\nonumber\\
P&=&\frac{3}{8\pi l^2}, \quad V=\frac{2\mu r_h (r_h^2+l^2)}{3}.\end{aligned}$$
Meanwhile, the authors of Refs. [@Hennigar2015a; @Hennigar2015b] find that the above ultraspinning black hole is super-entropic, i.e., the relation between the entropy $S$ and the thermodynamic volume $V$ in Eq. (\[properties\]) violates the reverse isoperimetric inequality (\[ratio\]).
We notice that the above first law (\[olaw\]) is mathematically problematic; for instance, the Maxwell relation fails, $(\partial T/\partial P)_{_{S,J}}\neq (\partial V/\partial S)_{_{P,J}}$. Because the angular momentum obeys $J=Ml$ (known in Ref. [@Hennigar2015a] as the chirality condition), the enthalpy $M$ of the black hole is in fact a function of the entropy $S$ and pressure $P$ alone. Hence we need to find a more suitable expression of the first law, together with the resulting expressions for the temperature and volume. By inserting the chirality condition into Eq. (\[olaw\]), we obtain the [*right*]{} form of the first law of the ultraspinning black hole $$\label{law}
dM=\tilde{T}dS+\tilde{V}dP,$$ where $$\begin{aligned}
\tilde{T}=\frac{r_h^2+l^2}{4\pi r_h}\left(\frac{3}{l^2}-\frac{1}{r_h^2}\right),\label{temperature}\end{aligned}$$ and $$\begin{aligned}
\tilde{V}=\frac{\mu l^2(r_h^2+l^2)^2}{3r_h}\left(\frac{2}{l^2}-\frac{1}{r_h^2}\right). \label{volume}\end{aligned}$$ One can verify the Maxwell relation $(\partial \tilde{T}/\partial P)_{_{S}}=(\partial \tilde{V}/\partial S)_{_{P}}$. Meanwhile, we can write the corresponding Smarr relation $$M=2\tilde{T}S-2\tilde{V}P,$$ which can also be derived from a scaling (dimensional) argument [@Kubiznak2012]. Next we check whether the ultraspinning black hole is still super-entropic in our new thermodynamic framework. Keeping in mind that the space is compactified via $\psi\sim\psi+\mu$, we have $\omega_2=2\mu$ [@Hennigar2015a]. For convenience, we introduce the dimensionless parameter $x=l^2/r_h^2$. Consequently, the isoperimetric ratio reads $$\begin{aligned}
\mathcal{R}=(1+x)^{1/6}\left(1-\frac{x}{2}\right)^{1/3}.\end{aligned}$$
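The consistency of the new framework can also be checked symbolically. The following sketch treats $S$ and $P$ as the independent variables, takes $M=\mu m/(2\pi)$ with $m$ fixed by $\Delta(r_h)=0$, and confirms that $\tilde{T}=(\partial M/\partial S)_P$ reproduces Eq. (\[temperature\]) and that the Smarr relation holds with $\tilde{V}=(\partial M/\partial P)_S$ (the Maxwell relation then holds automatically):

```python
import sympy as sp

S, P, mu = sp.symbols('S P mu', positive=True)

# Independent variables S and P; from Eq. (properties):
# P = 3/(8*pi*l^2) and S = mu*(r_h^2 + l^2)/2
l2 = 3 / (8 * sp.pi * P)   # l^2
r2 = 2 * S / mu - l2       # r_h^2
r = sp.sqrt(r2)

# Enthalpy M = mu*m/(2*pi), with m fixed by Delta(r_h) = 0,
# i.e. m = (r_h^2 + l^2)^2/(2*r_h*l^2)
M = mu * (r2 + l2)**2 / (4 * sp.pi * r * l2)

Tt = sp.diff(M, S)   # T-tilde = (dM/dS)_P
Vt = sp.diff(M, P)   # V-tilde = (dM/dP)_S

# T-tilde reproduces Eq. (temperature) ...
T_eq = (r2 + l2) / (4 * sp.pi * r) * (3 / l2 - 1 / r2)
assert sp.simplify(Tt - T_eq) == 0

# ... and the Smarr relation M = 2*T*S - 2*V*P holds
assert sp.simplify(M - 2 * Tt * S + 2 * Vt * P) == 0
print('first law, temperature and Smarr relation consistent')
```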
Now let’s analyze the situation of the extreme black hole in our new thermodynamic framework.
- For a black hole thermodynamic system, the temperature and thermodynamic volume should be non-negative (we mainly focus on these two physical quantities; the others are positive). The case of negative temperature and negative thermodynamic volume is beyond the scope of this paper, so we exclude this situation; in particular, a negative thermodynamic volume is not well defined in thermodynamics.
- For the ultraspinning black hole, the original extreme black hole corresponds to Eq. (\[oex\]): there is a lower bound on the mass of the black hole. In short, the original black hole satisfies the condition $0 \leq x \leq 3$. Under this condition, the temperature and thermodynamic volume are non-negative, and the extreme black hole sits at $x=3$. Unfortunately, as mentioned earlier, the first law of thermodynamics Eq. (\[olaw\]) for this black hole is mathematically problematic.
- In our new thermodynamic framework, see Eqs. (\[law\]), (\[temperature\]), and (\[volume\]), we guarantee the [*right*]{} form of the first law of thermodynamics by introducing new expressions for the black hole temperature and thermodynamic volume. To ensure the non-negativity of these two thermodynamic quantities, we must require $0 \leq x \leq 2$. Under this new condition, the first law of thermodynamics of the ultraspinning black hole is mathematically reasonable, but the cost is a change of the original extreme configuration of the black hole. Specifically, the new extreme black hole sits at $x=2$, corresponding to the new lower bound $$\begin{aligned}
m\geq \tilde{m}_0=\frac{9}{4\sqrt{2}}l, \qquad \tilde{r}_0=\frac{l}{\sqrt{2}}.
\end{aligned}$$ This is different from the original extreme black hole structure Eq. (\[oex\]).
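The window $0\leq x\leq 2$ and the new extremal data can be verified directly: the bracketed factors in Eqs. (\[temperature\]) and (\[volume\]) fix the zeros of $\tilde{T}$ and $\tilde{V}$, while $m$ follows from $\Delta(r_h)=0$. A short symbolic check:

```python
import sympy as sp

r, l = sp.symbols('r l', positive=True)

# Zeros of T-tilde and V-tilde come from the bracketed factors in
# Eqs. (temperature) and (volume)
r_T0 = sp.solve(sp.Eq(3 / l**2 - 1 / r**2, 0), r)[0]  # T-tilde = 0
r_V0 = sp.solve(sp.Eq(2 / l**2 - 1 / r**2, 0), r)[0]  # V-tilde = 0

assert sp.simplify(r_T0 - l / sp.sqrt(3)) == 0  # x = l^2/r^2 = 3
assert sp.simplify(r_V0 - l / sp.sqrt(2)) == 0  # x = l^2/r^2 = 2

# Mass function from the horizon condition Delta(r_h) = 0
m = (r**2 + l**2)**2 / (2 * r * l**2)

# Original extremal mass, Eq. (oex): m_0 = 8 l/(3*sqrt(3)) at r_0 = l/sqrt(3)
assert sp.simplify(m.subs(r, r_T0) - 8 * l / (3 * sp.sqrt(3))) == 0

# New extremal mass at r = l/sqrt(2)
m0_new = sp.simplify(m.subs(r, r_V0))
assert sp.simplify(m0_new - 9 * l / (4 * sp.sqrt(2))) == 0
print(m0_new)  # 9*sqrt(2)*l/8, i.e. 9*l/(4*sqrt(2))
```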
For $0< x \leq 2$, we can easily prove that $\mathcal{R}\leq 1$, which implies that the ultraspinning black hole is still super-entropic in our new thermodynamic framework. When $x$ exceeds $2$, the thermodynamic volume of the black hole becomes negative and the isoperimetric ratio is no longer applicable, so it is impossible to determine whether the ultraspinning black hole is super-entropic or not.
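The proof is indeed short: $\mathcal{R}^6=(1+x)(1-x/2)^2$ equals $1$ at $x=0$ and has derivative $3x(x-2)/4\leq 0$ on $[0,2]$. A symbolic confirmation:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Isoperimetric ratio of the ultraspinning black hole, x = l^2/r_h^2
R = (1 + x)**sp.Rational(1, 6) * (1 - x / 2)**sp.Rational(1, 3)

# R <= 1 iff f = R^6 = (1+x)*(1-x/2)^2 <= 1
f = sp.expand((1 + x) * (1 - x / 2)**2)
df = sp.factor(sp.diff(f, x))
print(df)            # 3*x*(x - 2)/4, nonpositive on [0, 2]
print(f.subs(x, 0))  # 1, so f <= 1 and hence R <= 1 for 0 < x <= 2

# Numerical spot check on the physical window
assert all(R.subs(x, sp.Rational(k, 10)) <= 1 for k in range(1, 21))
```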
Thermodynamic curvature of ultraspinning black hole
===================================================
We now calculate the thermodynamic curvature of the ultraspinning black hole, so as to verify the correspondence between the thermodynamic curvature and the isoperimetric theorem proposed in Ref. [@Xu2020a], and to extract possible microscopic information about the ultraspinning black hole purely from a thermodynamic point of view.
Consider an isolated thermodynamic system with entropy $S$ in equilibrium. Ruppeiner [@Ruppeiner1995; @Ruppeiner2014; @Ruppeiner2008] divided it into a small subsystem with entropy $S_B$ and a large subsystem with entropy $S_E$, with the requirement $S_B \ll S_E \sim S$. In equilibrium, the isolated thermodynamic system attains a local entropy maximum $S_0$ at $x_0^\mu$. Hence, in the vicinity of the local maximum, we can expand the entropy $S$ of the system in a series about the equilibrium state $$S=S_0+\frac{\partial S_B}{\partial x_B^\mu}\Delta x^\mu_B
+\frac{\partial S_E}{\partial x_E^\mu}\Delta x^\mu_E
+\frac{1}{2}\frac{\partial^2 S_B}{\partial x_B^\mu \partial x_B^\nu}\Delta x^\mu_B \Delta x^\nu_B
+\frac{1}{2}\frac{\partial^2 S_E}{\partial x_E^\mu \partial x_E^\nu}\Delta x^\mu_E \Delta x^\nu_E
+\cdots,$$ where the $x^\mu$ stand for some independent thermodynamic variables. Due to the conservation of the entropy of the isolated system in equilibrium and the condition $S_B \ll S_E \sim S$, the above formula approximately becomes $$\Delta S =S_0-S \approx -\frac{1}{2}\frac{\partial^2 S_B}{\partial x_B^\mu \partial x_B^\nu}\Delta x^\mu_B \Delta x^\nu_B,$$ where the so-called Ruppeiner metric is (here we omit the subscript $B$) $$\label{rmetric}
\Delta l^2=-\frac{\partial^2 S}{\partial x^\mu \partial x^\nu}\Delta x^\mu \Delta x^\nu=g^S_{\mu\nu}\Delta x^\mu \Delta x^\nu.$$
We now focus on the system composed of the ultraspinning black hole and its surrounding infinite environment. The black hole itself can be regarded as the small subsystem mentioned above. In light of the [*right*]{} form of the first law of thermodynamics Eq. (\[law\]), we can obtain the general form of the Ruppeiner metric for the ultraspinning black hole $$\Delta l^2=\frac{1}{\tilde{T}}\Delta \tilde{T} \Delta S+\frac{1}{\tilde{T}}\Delta \tilde{V} \Delta P.$$
In principle, according to the first law Eq. (\[law\]), the phase space of the ultraspinning black hole is $\{\tilde{T}, P, S, \tilde{V}\}$. Thermodynamic geometry is constructed in a space of generalized coordinates, such as $\{S,P\}$, $\{S,\tilde{V}\}$, $\{\tilde{T},\tilde{V}\}$ and $\{\tilde{T},P\}$. The thermodynamic potential functions corresponding to these coordinate spaces are related by Legendre transformations, and hence the thermodynamic curvatures obtained in these coordinate spaces are the same. To avoid technical complexity, we take the coordinate space $\{S,P\}$ as an example for the detailed calculation. The line element of the thermodynamic geometry becomes [@Xu2020a; @Xu2019a] $$\label{linesp}
\begin{aligned}
\Delta l^2 &=\frac{1}{\tilde{T}}\left(\frac{\partial \tilde{T}}{\partial S}\right)_P \Delta S^2+\frac{2}{\tilde{T}}\left(\frac{\partial \tilde{T}}{\partial P}\right)_S \Delta S \Delta P+\frac{1}{\tilde{T}}\left(\frac{\partial \tilde{V}}{\partial P}\right)_S \Delta P^2\\
&=\frac{1}{\tilde{T}}\frac{\partial^2 M}{\partial X^\mu \partial X^\nu}\Delta X^\mu \Delta X^\nu=g_{\mu\nu}\Delta X^\mu \Delta X^\nu, \quad (\mu, \nu=1,2)
\end{aligned}$$ where $(X^1, X^2)=(S, P)$, and in the last equality we have used the first law of thermodynamics Eq. (\[law\]). The above thermodynamic metric $g_{\mu\nu}$ is equivalent to the metric $g^S_{\mu\nu}$ in Eq. (\[rmetric\]), but they have different representations: the metric $g^S_{\mu\nu}$ is in the entropy representation, while the metric $g_{\mu\nu}$ is in the enthalpy representation. Next, from the specific form of the metric $g_{\mu\nu}$, we calculate the thermodynamic curvature, the “thermodynamic analog” of the geometric curvature in general relativity. By using the Christoffel symbols $\Gamma^{\alpha}_{\beta\gamma}=g^{\mu\alpha}\left(\partial_{\gamma}g_{\mu\beta}+\partial_{\beta}g_{\mu\gamma}-\partial_{\mu}g_{\beta\gamma}\right)/2$ and the Riemannian curvature tensors ${R^{\alpha}}_{\beta\gamma\delta}=\partial_{\delta}\Gamma^{\alpha}_{\beta\gamma}-\partial_{\gamma}\Gamma^{\alpha}_{\beta\delta}+
\Gamma^{\mu}_{\beta\gamma}\Gamma^{\alpha}_{\mu\delta}-\Gamma^{\mu}_{\beta\delta}\Gamma^{\alpha}_{\mu\gamma}$, we can obtain the thermodynamic curvature $R_{_{SP}}=g^{\mu\nu}{R^{\xi}}_{\mu\xi\nu}$.
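The curvature computation just described can be automated for any two-dimensional thermodynamic metric. The helper below is a generic sketch (not tied to the specific metric of this paper) implementing the Christoffel and Riemann conventions stated above; note that with this sign convention the round unit two-sphere has scalar curvature $-2$, opposite to the more common convention in general relativity:

```python
import sympy as sp

def thermo_curvature(g, coords):
    """Scalar curvature of a 2x2 metric with the conventions of the text:
    Gamma^a_{bc} = g^{ma}(d_c g_{mb} + d_b g_{mc} - d_m g_{bc})/2,
    R^a_{bcd}    = d_d Gamma^a_{bc} - d_c Gamma^a_{bd}
                   + Gamma^m_{bc} Gamma^a_{md} - Gamma^m_{bd} Gamma^a_{mc},
    R            = g^{mn} R^x_{mxn}."""
    n = len(coords)
    ginv = g.inv()
    Gamma = [[[sum(ginv[m, a] * (sp.diff(g[m, b], coords[c])
                                 + sp.diff(g[m, c], coords[b])
                                 - sp.diff(g[b, c], coords[m]))
                   for m in range(n)) / 2
               for c in range(n)] for b in range(n)] for a in range(n)]

    def riemann(a, b, c, d):
        expr = sp.diff(Gamma[a][b][c], coords[d]) - sp.diff(Gamma[a][b][d], coords[c])
        for m in range(n):
            expr += Gamma[m][b][c] * Gamma[a][m][d] - Gamma[m][b][d] * Gamma[a][m][c]
        return expr

    R = sum(ginv[m, k] * riemann(a, m, a, k)
            for a in range(n) for m in range(n) for k in range(n))
    return sp.simplify(R)

# Sanity check on the round unit two-sphere, ds^2 = dtheta^2 + sin^2(theta) dphi^2;
# this sign convention yields -2 (the usual GR convention gives +2).
th, ph = sp.symbols('theta phi')
g_sphere = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
print(thermo_curvature(g_sphere, [th, ph]))  # -2
```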
With the help of Eqs. (\[temperature\]), (\[volume\]) and the expressions for the entropy $S$ and thermodynamic pressure $P$ in Eq. (\[properties\]), the thermodynamic curvature is obtained directly as $$\label{curvature}
R_{_{SP}}=-\frac{x(x+1)[x^2(x-3)(x^2+12)+27x-9]}{2S(x-3)[x^2(x-3)+3x-3]^2}.$$
Several remarks on the thermodynamic curvature obtained above are in order.
- For the extreme black hole, i.e., $x=2$, we observe clearly that the thermodynamic curvature takes the finite negative value $R_{SP}|_{_{\text{extreme}}}=-57/S$.
- Since $0< x \leq 2$, a little calculation shows that $R_{_{SP}}<0$. We can thus speculate that, phenomenologically or qualitatively, the ultraspinning black hole is likely to exhibit an attractive interaction among its molecules.
- Looking at the original extreme black hole, i.e., $x=3$, one might intuitively conclude from Eq. (\[curvature\]) that the thermodynamic curvature diverges there. In fact, in this case the basic thermodynamic metric (\[linesp\]) is no longer valid, because the first law (\[olaw\]) is pathological.
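The three remarks above can be checked directly on Eq. (\[curvature\]):

```python
import sympy as sp

x, S = sp.symbols('x S', positive=True)

# Thermodynamic curvature, Eq. (curvature), with x = l^2/r_h^2
R_SP = -x * (x + 1) * (x**2 * (x - 3) * (x**2 + 12) + 27 * x - 9) \
       / (2 * S * (x - 3) * (x**2 * (x - 3) + 3 * x - 3)**2)

# (i) extreme black hole, x = 2: finite negative remnant
print(sp.simplify(R_SP.subs(x, 2)))  # -57/S

# (ii) R_SP < 0 throughout the physical window 0 < x <= 2
assert all(R_SP.subs({x: sp.Rational(k, 10), S: 1}) < 0 for k in range(1, 21))

# (iii) x = 3 (the original extremal point) is a pole of the formula,
# where, however, the metric (linesp) is no longer valid
assert sp.limit(R_SP.subs(S, 1), x, 3, '-').is_infinite
```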
At present, the only known super-entropic black holes are the $(2+1)$-dimensional charged BTZ black hole and the ultraspinning black hole. According to our current analysis and the calculation for the charged BTZ black hole in our previous paper [@Xu2020a], we have $R_{SP}|_{_{\text{extreme}}}=-57/S$ for the ultraspinning black hole and $R_{SP}|_{_{\text{extreme}}}=1/(3S)$ for the charged BTZ black hole. Hence, a universal relationship is $$\label{excurvature}
R_{SP}|_{_{\text{extreme}}}\propto\frac{1}{S}.$$ We know that the reverse isoperimetric inequality physically indicates that, at a given thermodynamic volume, the (charged) Schwarzschild-AdS black hole is maximally entropic. A super-entropic black hole is one whose entropy exceeds this maximal bound. For the (charged) Schwarzschild-AdS black hole, the thermodynamic curvature of the corresponding extreme black hole tends to infinity, which has been verified for various simple static black hole solutions of pure Einstein gravity or higher-derivative generalizations thereof. Therefore, we can state the following corresponding relations:
- For black holes with $\mathcal{R}=1$, the thermodynamic curvature of the corresponding extreme black hole tends to (positive or negative) infinity.
- For black holes with $\mathcal{R}<1$, the thermodynamic curvature of the corresponding extreme black hole has a (positive or negative) remnant which is approximately proportional to $1/S$.
- For black holes with $\mathcal{R}>1$, the thermodynamic curvature of the corresponding extreme black hole also tends to (positive or negative) infinity.
We note that the last conjecture above, concerning the extremal behavior of the thermodynamic curvature of sub-entropic black holes ($\mathcal{R}>1$), needs further verification in the future. At present, we only claim that when the entropy exceeds the maximal bound, the thermodynamic curvature of the corresponding extreme black hole has a finite remnant, whereas at the maximal bound it tends to infinity. It is then natural to expect that when the entropy of the black hole is below the maximal bound, the thermodynamic curvature of the corresponding extreme black hole tends to infinity as well.
Conclusion and Discussion
=========================
In this paper, we investigate the thermodynamic curvature of the ultraspinning black hole by introducing the [*right*]{} form of the first law (\[law\]). We find that the ultraspinning black hole is still super-entropic in our new thermodynamic framework, consistent with the result obtained in [@Hennigar2015a; @Hennigar2015b]. Meanwhile, the obtained thermodynamic curvature is negative, which means that the ultraspinning black hole is likely to exhibit, phenomenologically or qualitatively, an attractive interaction among its molecules, if we accept the analogical observation that the thermodynamic curvature reflects the interaction between molecules in a black hole system. Through the analysis of the extreme behavior of the thermodynamic curvature, we arrive at a general conclusion: the thermodynamic curvature of the extreme configuration of a super-entropic black hole has a (positive or negative) remnant approximately proportional to $1/S$. This is a very interesting result.
In our previous work [@Xu2019a], we analyzed the thermodynamic curvature of the Schwarzschild black hole and obtained $R_{\text{Schwarzschild}}=\pm 1/S_{\text{Schwarzschild}}$. This is very similar to what we have obtained in the present paper. Is it a coincidence, or does it suggest that the excess entropy in the super-entropic black hole comes from the Schwarzschild black hole? This unexpected question needs further analysis and discussion.
Furthermore, in the future we need to confirm the conjecture for sub-entropic black holes, such as the Kerr-AdS black hole [@Cvetic2011; @Johnson2019c], STU black holes [@Johnson2019c; @Caceres2015], the Taub-NUT/Bolt black hole [@Johnson2014], the generalized exotic BTZ black hole [@Johnson2019b], the noncommutative black hole [@Miao2017] and accelerating black holes [@Appels2016]. The verification of this conjecture will help us improve the correspondence between the thermodynamic curvature and the isoperimetric theorem, which is a very meaningful line of research.
Acknowledgments {#acknowledgments .unnumbered}
===============
The financial support from the National Natural Science Foundation of China (Grant Nos. 11947208 and 11947301) is gratefully acknowledged. This research is also supported by the Double First-class University Construction Project of Northwest University. The author would like to thank the anonymous reviewers for helpful comments that greatly improved this work.
[99]{}
M.K. Parikh, The Volume of black holes, Phys. Rev. D 73 (2006) 124021.
D. Grumiller, The Volume of 2-D black holes, J. Phys. Conf. Ser. 33 (2006) 361.
W. Ballik and K. Lake, The volume of stationary black holes and the meaning of the surface gravity, arXiv:1005.1116 \[gr-qc\].
W. Ballik and K. Lake, Vector volume and black holes, Phys. Rev. D 88 (2013) 104038.
S. MacDonald, Thermodynamic Volume of Kerr-bolt-AdS Spacetime, arXiv:1406.1257 \[hep-th\].
W.G. Brenna, R.B. Mann, and M. Park, Mass and Thermodynamic Volume in Lifshitz Spacetimes, Phys. Rev. D 92 (2015) 044015.
M. Christodoulou and C. Rovelli, How big is a black hole? Phys. Rev. D 91 (2015) 064046.
B.P. Dolan, Pressure and volume in the first law of black hole thermodynamics, Classical Quantum Gravity 28 (2011) 235017.
D. Kubiznak, R. B. Mann, and M. Teo, Black hole chemistry: thermodynamics with Lambda, Classical Quantum Gravity 34 (2017) 063001.
B.P. Dolan, The cosmological constant and the black hole equation of state, Classical Quantum Gravity 28 (2011) 125020.
M. Cvetič, G.W. Gibbons, D. Kubiznak and C.N. Pope, Black hole enthalpy and an entropy inequality for the thermodynamic volume, Phys. Rev. D 84 (2011) 024037.
B.P. Dolan, D. Kastor, D. Kubiznak, R.B. Mann, and J. Traschen, Thermodynamic Volumes and Isoperimetric Inequalities for de Sitter Black Holes, Phys. Rev. D 87 (2013) 104017.
J.D. Bekenstein, Black holes and entropy, Phys. Rev. D 7 (1973) 2333.
J.M. Bardeen, B. Carter, and S. Hawking, The four laws of black hole mechanics, Commun. Math. Phys. 31 (1973) 161.
D. Kastor, S. Ray, and J. Traschen, Enthalpy and the mechanics of AdS black holes, Classical Quantum Gravity 26 (2009) 195011.
N. Altamirano, D. Kubiznak, R.B. Mann and Z. Sherkatghanad, Thermodynamics of rotating black holes and black rings: phase transitions and thermodynamic volume. Galaxies 2 (2014) 89.
R.B. Mann, Super-Entropic Black Holes, Springer Proc. Phys. 208 (2018) 105-113.
A.M. Frassino, R.B. Mann and J.R. Mureika, Lower-dimensional black hole chemistry, Phys. Rev. D 92 (2015) 124069.
C.V. Johnson, Instability of superentropic black holes in extended thermodynamics, Mod. Phys. Lett. A 33 (2020) 2050098.
C.V. Johnson, V.L. Martin, and A. Svesko, A microscopic description of thermodynamic volume in extended black hole thermodynamics, Phys. Rev. D 101 (2020) 086006.
J.-X. Mo, F. Liang and G.-Q. Li, Heat engine in the three-dimensional spacetime, J. High Energy Phys. 03 (2017) 010.
Z.-M. Xu, B. Wu, and W.-L. Yang, Thermodynamic curvature and isoperimetric inequality for the charged BTZ black hole, arXiv:2002.00117 \[gr-qc\].
R.A. Hennigar, D. Kubiznak and R.B. Mann, Super-Entropic black holes, Phys. Rev. Lett. 115 (2015) 031101.
R.A. Hennigar, D. Kubiznak, R.B. Mann and N. Musoke, Ultraspinning limits and super-entropic black holes, J. High Energy Phys. 06 (2015) 096.
M. Appels, L. Cuspinera, R. Gregory, P. Krtous, and D. Kubiznak, Are Superentropic black holes superentropic? J. High Energy Phys. 02 (2020) 195.
G. Ruppeiner, Riemannian geometry in thermodynamic fluctuation theory, Rev. Mod. Phys. 67 (1995) 605; Erratum ibid. 68 (1996) 313.
G. Ruppeiner, Thermodynamic curvature and black holes, In: S. Bellucci (eds), Breaking of supersymmetry and ultraviolet divergences in extended supergravity, Springer proceedings in physics, 153 (2014) 179.
G. Ruppeiner, Thermodynamic curvature and phase transitions in Kerr-Newman black holes, Phys. Rev. D 78 (2008) 024016.
S.-W. Wei and Y.-X. Liu, Insight into the microscopic structure of an AdS black hole from a thermodynamical phase transition, Phys. Rev. Lett. 115 (2015) 111302; Erratum ibid. 116 (2016) 169903.
S.-W. Wei, Y.-X. Liu, and R.B. Mann, Repulsive interactions and universal properties of charged anti-de Sitter black hole microstructures, Phys. Rev. Lett. 123 (2019) 071102.
S.-W. Wei, Y.-X. Liu, and R.B. Mann, Ruppeiner geometry, phase transitions, and the microstructure of charged AdS black holes, Phys. Rev. D 100 (2019) 124033.
S.-W. Wei and Y.-X. Liu, Intriguing microstructures of five-dimensional neutral Gauss-Bonnet AdS black hole, Phys. Lett. B 803 (2020) 135287.
Y.-G. Miao and Z.-M. Xu, Thermal molecular potential among micromolecules in charged AdS black holes, Phys. Rev. D 98 (2018) 044001.
Y.-G. Miao and Z.-M. Xu, Parametric phase transition for a Gauss-Bonnet AdS black hole, Phys. Rev. D 98 (2018) 084051.
Y.-G. Miao and Z.-M. Xu, Interaction potential and thermo-correction to the equation of state for thermally stable Schwarzschild anti-de Sitter black holes, Sci. China-Phys. Mech. Astron. 62 (2019) 010412.
Y.-G. Miao and Z.-M. Xu, Microscopic structures and thermal stability of black holes conformally coupled to scalar fields in five dimensions, Nucl. Phys. B 942 (2019) 205.
J.E. Aman, I. Bengtsson, and N. Pidokrajt, Geometry of black hole thermodynamics, Gen. Rel. Grav. 35 (2003) 1733.
B. Mirza, M. Zamani-Nasab, Ruppeiner geometry of RN black holes: flat or curved?, J. High Energy Phys. 06 (2007) 059.
A. Dehyadegari, A. Sheykhi, and A. Montakhab, Critical behavior and microscopic structure of charged AdS black holes via an alternative phase space, Phys. Lett. B 768 (2017) 235.
R.-G. Cai and J. H. Cho, Thermodynamic curvature of the BTZ black hole, Phys. Rev. D 60 (1999) 067502.
J.-L. Zhang, R.-G. Cai, and H.-W. Yu, Phase transition and thermodynamical geometry for Schwarzschild AdS black hole in AdS$_5\times$S$^5$ spacetime, J. High Energy Phys. 02 (2015) 143.
J.-L. Zhang, R.-G. Cai, and H.-W. Yu, Phase transition and thermodynamical geometry of Reissner-Nordström-AdS black holes in extended phase space, Phys. Rev. D 91 (2015) 044028.
H.-S. Liu, H. Lu, M.-X Luo, and K.-N. Shao, Thermodynamical metrics and black hole phase transitions, J. High Energy Phys. 12 (2010) 054.
Z.-M. Xu, B. Wu, and W.-L. Yang, Ruppeiner thermodynamic geometry for the Schwarzschild AdS black hole, Phys. Rev. D 101 (2020) 024018.
Z.-M. Xu, B. Wu, and W.-L. Yang, The fine micro-thermal structures for the Reissner-Nordström black hole, arXiv:1910.03378 \[gr-qc\], to be published in Chinese Physics C.
C. Niu, Y. Tian, and X.-N. Wu, Critical phenomena and thermodynamic geometry of Reissner-Nordström-anti-de Sitter black holes, Phys. Rev. D 85 (2012) 024017.
P. Wang, H.-W. Wu, and H.-T. Yang, Thermodynamic geometry of AdS black holes and black holes in a cavity, Eur. Phys. J. C 80 (2020) 216.
A. Ghosh and C. Bhamidipati, Thermodynamic geometry for charged Gauss-Bonnet black holes in AdS spacetimes, Phys. Rev. D 101 (2020) 046005.
K. Bhattacharya and B. R. Majhi, Thermogeometric description of the van der Waals like phase transition in AdS black holes, Phys. Rev. D 95 (2017) 104024.
Y. Chen, H.-t. Li and S.-J. Zhang, Microscopic explanation for black hole phase transitions via Ruppeiner geometry: Two competing factors–the temperature and repulsive interaction among BH molecules, Nucl. Phys. B 948 (2019) 114752.
X.-Y. Guo, H.-F. Li, L.-C. Zhang and R. Zhao, Microstructure and continuous phase transition of a Reissner-Nordstrom-AdS black hole, Phys. Rev. D 100 (2019) 064036.
S.A.H. Mansoori and B. Mirza, Correspondence of phase transition points and singularities of thermodynamic geometry of black holes, Eur. Phys. J. C 74 (2014) 2681.
S.A.H. Mansoori, B. Mirza and M. Fazel, Hessian matrix, specific heats, Nambu brackets, and thermodynamic geometry, J. High Energy Phys. 04 (2015) 115.
T. Sarkar, G. Sengupta and B.N. Tiwari, On the thermodynamic geometry of BTZ black holes, J. High Energy Phys. 11 (2006) 015.
H. Quevedo and A. Sanchez, Geometric description of BTZ black holes thermodynamics, Phys. Rev. D 79 (2009) 024012.
M. Akbar, H. Quevedo, K. Saifullah, A. Sanchez and S. Taj, Thermodynamic geometry Of charged rotating BTZ black holes, Phys. Rev. D 83 (2011) 084031.
H. Mohammadzadeh, M. Rastkar and M. N. Najafi, Thermodynamic geometry of normal (exotic) BTZ black hole regarding to the fluctuation of cosmological constant, arXiv:1802.01084 \[gr-qc\].
A. Ghosh and C. Bhamidipati, Thermodynamic geometry and interacting microstructures of BTZ black holes, Phys. Rev. D 101 (2020) 106007.
Z.-M. Xu, Analytic phase structures and thermodynamic curvature for the charged AdS black hole in alternative phase space, Submitted to Journal.
D. Kubiznak and R.B. Mann, $P-V$ criticality of charged AdS black holes, J. High Energy Phys. 07 (2012) 033.
C. V. Johnson, Specific heats and Schottky peaks for black holes in extended thermodynamics, Classical Quantum Gravity 37 (2020) 054003.
E. Caceres, P.H. Nguyen and J. F. Pedraza, Holographic entanglement entropy and the extended phase structure of STU black holes, J. High Energy Phys. 09 (2015) 184.
C. V. Johnson, Thermodynamic volumes for AdS-Taub-NUT and AdS-Taub-Bolt, Classical Quantum Gravity 31 (2014) 235003.
Y.-G. Miao and Z.-M. Xu, Phase transition and entropy inequality of noncommutative black holes in a new extended phase space, J. Cosmol. Astropart. Phys. 03 (2017) 046.
M. Appels, R. Gregory and D. Kubiznak, Thermodynamics of accelerating black holes, Phys. Rev. Lett. 117 (2016) 131303.
[^1]: E-mail: [email protected]
---
abstract: 'Many important multi-component crystalline solids undergo mechanochemical spinodal decomposition: a phase transformation in which the compositional redistribution is coupled with structural changes of the crystal, resulting in dynamic and intricate microstructures. The ability to rapidly compute the macroscopic behavior based on these detailed microstructures is of paramount importance for accelerating material discovery and design. However, the evaluation of macroscopic, nonlinear elastic properties purely based on direct numerical simulations (DNS) is computationally very expensive, and hence impractical for material design when a large number of microstructures need to be tested. A further complexity of a hierarchical nature arises if the elastic free energy and its variation with strain is a small scale fluctuation on the dominant trajectory of the total free energy driven by microstructural dynamics. To address these challenges, we present a data-driven approach, which combines advanced neural network (NN) models with DNS to predict the mechanical free energy and homogenized stress fields on microstructures in a family of two-dimensional multi-component crystalline solids. The microstructures are numerically generated by solving a coupled Cahn-Hilliard and nonlinear strain gradient elasticity problem. The hierarchical structure of the free energy’s evolution induces a multi-resolution character to the machine learning paradigm: We construct knowledge-based neural networks (KBNNs) with either pre-trained fully connected deep neural networks (DNNs) or pre-trained convolutional neural networks (CNNs) that describe the dominant feature of the data to fully represent the hierarchically evolving free energy. We demonstrate multi-resolution learning of the materials physics of nonlinear elastic response for both fixed and evolving microstructures.'
author:
- |
Xiaoxuan Zhang$^1$, Krishna Garikipati$^{1,2,3}$ [^1]\
$^1$Department of Mechanical Engineering, University of Michigan, United States\
$^2$Department of Mathematics, University of Michigan, United States\
$^3$Michigan Institute for Computational Discovery & Engineering, University of Michigan, United States\
bibliography:
- 'lib.bib'
title: 'Machine learning materials physics: Multi-resolution neural networks learn the free energy and nonlinear elastic response of evolving microstructures'
---
Introduction
============
Mechanochemical spinodal decomposition refers to a continuous phase transformation mechanism due to an onset of instability with respect to the composition and/or a structural order parameter. It occurs in materials systems with a free-energy density that is non-convex in strain-composition space. Wide regimes of the state space lie far from thermodynamic equilibrium, and the resulting first-order dynamics manifests in evolving microstructures that are distinguishable by strain and composition variables [@Garikipati2016Rudraraju-NPJ]. Mechanochemical spinodal decomposition exists in many important multi-component crystalline solids, such as cubic yttria-stabilized zirconia, lithium-ion battery electrode material Li$_x$Mn$_2$O$_4$, transition metal hydrides and certain two-dimensional materials such as TaS. In such material systems, as the first-order dynamics is driven by fluxes determined by the local free energy density, the material microstructure, delineated by strain and composition variables, undergoes changes. The macroscopic behaviors and properties are inherently related to the evolving microstructures. Progress has been made in understanding the detailed dynamics and in modeling the resulting microstructures [@Garikipati2016Rudraraju-NPJ; @Garikipati2016Sagiyama-Unconditionally]. However, in order to optimize the properties of existing materials and to design new materials, it also is essential to rapidly predict the material’s macroscopic response based on the detailed microstructure.
Macroscopic material responses/properties can be measured from well-designed experiments or predicted from physics-based direct numerical simulations (DNS). Numerical methods to upscale the nonlinear macroscopic behavior of a heterogeneous microstructure are commonly categorized as computational homogenization methods. They necessitate the solution of expensive boundary value problems (BVPs) on representative volume elements (RVEs) that encompass the targeted material microstructures [@Geers2010homogenization-trends-challenges; @saeb+steinmann+javili16]. It is impractical, if not impossible, to evaluate macroscopic material properties based on either experimental measurements or DNS when a large number of microstructures need to be tested.
Machine learning has emerged as a powerful approach among data-driven methods, and has been applied to study a wide range of problems in materials physics, such as material screening [@Meredig2014-screen-materials; @Wolverton2016Ward-screen-material-property; @Ramprasad2017-material-informatics-review], constitutive modeling [@Hashash2004-NN-constitutive; @Chinesta2019Ibanez-hybrid-constitutive-modeling; @Sun+Wang2019-game-constitutive], scale bridging [@Brockherde2017DFT-MD; @Garikipati2019Teichert-ML-bridge], and system identification [@Brunton2016Kutz-system-id; @Garikipati2019Wang-System-Identification]. Interested readers are directed to Refs [@Bock2019Kalidindi-ML-CM-review; @Dimiduk2018review-ML-on-material-process-structure] for more data-driven examples in the field of materials physics. Computational homogenization is yet another successful application of machine learning, where attempts to predict effective material properties [@Kalidindi+Cecen2018-CNN-Structure-property; @Li2019Zhuang-effective-ME-DNN; @Agrawal2018Yang-Composites-S-P-deep-learning; @Kondo2017CNN-ionic-conductivity; @Rong2019CNN-thermal-conductivity-composites] and non-linear material response [@Hambli2011multiscale-bone-with-NN; @Bessa2017-data-driven-framework-elasticity-inelasticity; @Garikipati2019Sagiyama-ML-Martensitic; @Jones2019Frankel-Oligocrytal-behavior-CNN; @Sun2018Wang-homogenization; @Yvonnet2015Le-RVE-elasticity; @Yvonnet2018Lu-NN-RVE-graphene] based on both experimentally and numerically generated data have been made by exploring different data-driven techniques. 
For example, convolutional neural networks (CNNs), which take images of microstructures as inputs, have been used to construct microstructure-property linkages [@Kalidindi+Cecen2018-CNN-Structure-property] and predict macroscopic properties, such as effective ionic conductivity in ceramics [@Kondo2017CNN-ionic-conductivity], effective mechanical properties in composites [@Agrawal2018Yang-Composites-S-P-deep-learning] and shale [@Li2019Zhuang-effective-ME-DNN], effective thermal conductivity in composites [@Rong2019CNN-thermal-conductivity-composites], and many others. Artificial neural networks (ANNs)/deep neural networks (DNNs), which are trained to construct complex nonlinear relationships between predefined features (e.g. strain components/volume fraction) and quantities of interest (e.g. averaged stress responses/averaged elastic modulus), have been coupled with finite element simulations to accelerate multiscale homogenization for bone remodeling [@Hambli2011multiscale-bone-with-NN], nonlinear elastic composites [@Yvonnet2015Le-RVE-elasticity], graphene/polymer nanocomposites with nonlinear anisotropic electrical response [@Yvonnet2018Lu-NN-RVE-graphene], geological materials with multi-porosity [@Sun2018Wang-homogenization], oligocrystals with plastic response [@Jones2019Frankel-Oligocrytal-behavior-CNN], and many others. Data-driven computational homogenization has demonstrated the potential to drastically reduce computational time in traditional multilevel calculations, making possible the inclusion of detailed microstructure information in such calculations [@Yvonnet2015Le-RVE-elasticity; @Yvonnet2018Lu-NN-RVE-graphene; @Matous2017Review-multiscale-heter-model].
In this work, a data-driven homogenization approach is explored to jointly predict the mechanical free energy *and* the homogenized stress-strain response of a family of 2D multi-component crystalline microstructures that are numerically generated with the computational framework in [@Garikipati2016Rudraraju-NPJ]. The physics underlying mechanochemical spinodal decomposition delivers families of microstructures that are not at thermodynamic equilibrium. As outlined above, these microstructures evolve, driven by the free energy. There is a hierarchical nature to the free energy of this class of material phase transformations: the strain excursions imposed on a microstructure must remain “small” in order to prevent further evolution of the microstructure itself; otherwise, the elasticity equations drive the free energy out of its local basins, and the corresponding structural rearrangements could be large enough that the microstructure itself changes, leaving the notion of homogenization ambiguous. Consequently, the fluctuations in elastic free energy, induced by the small strains, themselves remain small. Thus, the free energy of each microstructure has a multi-resolution structure: a dominant trajectory from microstructure-evolving phase transformations, and a small-scale fluctuation from strains exploring a given microstructure. The dominant trajectory depends strongly on the microstructural information, such as the volume fraction, the location and orientation of each crystalline phase, and the interfaces. Knowledge-based neural networks (KBNNs) [@Ghaboussi1991KBNN; @Garikipati2019Teichert-ML-SurrogateOpt], which are built upon pre-trained DNNs or CNNs, are used to capture this multi-resolution data structure, with the DNNs or CNNs trained to describe the dominant part of the free energy.
It is important to mention that although the term DNN refers to a large family of neural network architectures, in this work it specifically denotes deep neural networks with fully connected layers. Our studies demonstrate that multi-resolution neural networks using both DNN-based and CNN-based KBNN models can accurately learn the macroscopic mechanical behavior of a single microstructure. Furthermore, CNN-enhanced KBNN models are capable of learning the macroscopic mechanical behavior of many microstructures from different DNS. Such KBNN models for multi-resolution learning and testing can be used to rapidly screen materials based on their microstructures for applications such as additive manufacturing, polymer blending, or material synthesis.
The rest of the paper is organized as follows. In Section \[sec:spinodal-framework\], we summarize the mechanochemical spinodal decomposition computational framework that is used to generate different microstructures. The neural network (NN) model structures used in this work are presented in Section \[sec:NN\]. Section \[sec:data\] covers the procedures of data generation, features selection, and hyperparameter searches. The detailed simulation results are presented in Section \[sec:num-example\]. Concluding remarks and perspectives are offered in Section \[sec:conclusion\].
Mechanochemical spinodal decomposition {#sec:spinodal-framework}
======================================
In this section, the computational framework to describe mechanochemical spinodal decomposition is briefly summarized. Interested readers are directed to Ref. [@Garikipati2016Rudraraju-NPJ] for details.
Free energy density function
----------------------------
In this work, we focus on coupled diffusional/martensitic phase transformations in the two-dimensional setting. The solid has a single square phase at high temperature and undergoes a square-to-rectangle structural transformation at low temperature, which is analogous to the cubic-to-tetragonal transformation in three-dimensional space. The square lattice is the high-symmetry phase that serves as the reference state for strain measurement. Here, the Green-Lagrange strain tensor $\BE$ is used,[^2] with its components denoted as $E_{11}$, $E_{22}$, and $E_{12}$ ($=E_{21}$). The low-symmetry rectangular lattices are derived from the square lattice by homogeneous strain. For describing the structural changes, it is more convenient to introduce three reparameterized strains, which are based on the components of $\BE$ and defined as $e_1 = (E_{11} + E_{22})/\sqrt{2}$, $e_2 = (E_{11} - E_{22})/\sqrt{2}$, and $e_6 = \sqrt{2}E_{12}$. Here, $e_1$ and $e_6$ represent the dilatation and shear strain, respectively, in the infinitesimal strain regime. The reparameterized strain $e_2$ uniquely distinguishes the square lattice (when $e_2 = 0$) and its two rectangular variants: the “positive” rectangle ($e_2>0$) with elongated lattice in the global $X_1$ direction and the “negative” rectangle ($e_2<0$) with elongated lattice in the global $X_2$ direction. It thus serves as a structural order parameter. The composition $c$, which varies between 0 and 1, is the order parameter controlling the chemistry, with $c\sim0$ denoting the composition state in which the square phase is stable and $c\sim1$ denoting the composition state in which the square phase is unstable with respect to the two rectangular variants, as illustrated in Fig. \[fig:free-energy\].
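The reparameterization above is a fixed linear map on the components of $\BE$; as a minimal NumPy sketch (the strain values below are illustrative, not taken from a DNS):

```python
import numpy as np

def reparameterized_strains(E):
    """Map a 2D Green-Lagrange strain tensor E to (e1, e2, e6).

    e1 ~ dilatation and e6 ~ shear in the infinitesimal regime;
    e2 is the structural order parameter distinguishing the square
    lattice (e2 = 0) from its two rectangular variants.
    """
    E11, E22, E12 = E[0, 0], E[1, 1], E[0, 1]
    e1 = (E11 + E22) / np.sqrt(2.0)
    e2 = (E11 - E22) / np.sqrt(2.0)
    e6 = np.sqrt(2.0) * E12
    return e1, e2, e6

# A "positive" rectangular variant: elongated along X1, so e2 > 0.
E = np.array([[0.02, 0.0],
              [0.0, -0.01]])
e1, e2, e6 = reparameterized_strains(E)
```

For this strain state, $e_2 > 0$ identifies the variant elongated along $X_1$, while $e_6$ vanishes since there is no shear.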
![Illustration of the free energy in the strain-composition space in the low temperature phase. The chemical part of $\psi$ has a double-well shape with respect to $c$, indicating a composition triggered phase transformation. The mechanical part of $\psi$ has a convex shape at $c=0$, indicating a stable square phase, and a double-well shape at $c=1$, indicating a deformation triggered phase transformation of the rectangular phases.[]{data-label="fig:free-energy"}](free-energy){width="1.0\linewidth"}
At low temperature, the coupled diffusional and structural phase transformation is triggered by instabilities with respect to both the compositional parameter $c$ and the structural order parameter $e_2$. This coupled phase transformation can be described by a non-convex free energy density function $\psi$ defined in the strain-composition space, as illustrated in Fig. \[fig:free-energy\], $$\psi (c, \Be, \nabla c, \nabla\Be) = \scrF(c, \Be) + \scrG(c, \Be, \nabla c, \nabla \Be),
\label{eq:general-free-energy}$$ with $\scrF$ representing a homogeneous contribution from both composition and strain, and $\scrG$ a gradient-dependent, non-uniform contribution that regularizes the free energy density. Here, $\Bu$ is the displacement field and $\Be$ is a vector with $e_1$, $e_2$, and $e_6$ as its components. In the DNS, the following specific form of $\psi$ is used to generate two-dimensional microstructures
\[eq:2d-psi\] $$\begin{aligned}
{2}
\psi(c,\Bu)
& = 16 d_c c^4 - 32 d_c c^3 + 16 d_c c^2 + \frac{1}{2} \nabla c \cdot \kappa \nabla c \label{eq:2d-psi:1} \\
& + \frac{2d_e}{s_e^2}(e_1^2 + e_6^2) + \frac{d_e}{s_e^4}e_2^4 + \frac{1}{2} \nabla e_2 \cdot \lambda_e l_e^2 \nabla e_2 \label{eq:2d-psi:2} \\
& + (1-2c) \frac{2d_e}{s_e^2}e_2^2 \label{eq:2d-psi:3}
\end{aligned}$$
where $d_c$, $d_e$, $s_e$, $\kappa$, $\lambda_e$, and $l_e$ are material parameters. The free energy density function $\psi$ in Eq. \[eq:2d-psi\] consists of three contributions: a purely chemical contribution \[eq:2d-psi:1\], a purely elastic contribution \[eq:2d-psi:2\], and a mixed contribution from both chemistry and elasticity \[eq:2d-psi:3\].
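Setting the gradient terms aside, the homogeneous part of this free energy is a simple polynomial in $(c, e_1, e_2, e_6)$ that can be sketched directly (the parameter values $d_c = d_e = 1$ and $s_e = 0.1$ are illustrative defaults, not the DNS values):

```python
def psi_homogeneous(c, e1, e2, e6, d_c=1.0, d_e=1.0, s_e=0.1):
    """Homogeneous free energy density: a chemical double well in c,
    elastic terms in (e1, e2, e6), and the chemo-elastic coupling.
    Gradient (interface) terms are omitted in this sketch."""
    chem = 16.0 * d_c * c**4 - 32.0 * d_c * c**3 + 16.0 * d_c * c**2
    elast = 2.0 * d_e / s_e**2 * (e1**2 + e6**2) + d_e / s_e**4 * e2**4
    coupling = (1.0 - 2.0 * c) * 2.0 * d_e / s_e**2 * e2**2
    return chem + elast + coupling
```

The chemical term equals $16 d_c c^2(c-1)^2$, with wells at $c=0$ and $c=1$ and a barrier of height $d_c$ at $c=1/2$; at $c=1$ the $e_2$-terms combine into a double well with minima of depth $-d_e$ at $e_2 = \pm s_e$, reproducing the landscape of Fig. \[fig:free-energy\].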
Governing equations
-------------------
Based on the generalized, Landau-type free energy density function in Eq. \[eq:general-free-energy\] that couples strain and composition instabilities, mechanochemical spinodal decomposition can be described by a set of equations that couple the classical Cahn-Hilliard formulation and nonlinear gradient elasticity. The non-equilibrium chemistry in this coupled system is governed by $$\begin{aligned}
\frac{\partial c}{\partial t} + \nabla \cdot \BJ & = 0
\quad \text{with} \quad
\BJ= -\BL(c, \Be) \nabla \mu \\
\end{aligned}
\label{eq:govern-chemistry}$$ where $\BL$ is a transport tensor related to the mobility. In Eq. \[eq:govern-chemistry\], $\mu$ is the chemical potential, obtained as the variational derivative of the total free energy: $$\begin{aligned}
\mu & = \frac{\partial \scrF}{\partial c} + \frac{\partial \scrG}{\partial c} - \nabla \cdot \left[ \frac{\partial \scrG}{\partial(\nabla c)} \right]. \\
\end{aligned}$$ Mechanical equilibrium in the setting of strain gradient elasticity is governed by [@Toupin1964; @Rudraraju2014-IGA-grad-elasticity; @Garikipati2016Rudraraju-NPJ; @Garikipati2016Sagiyama-Unconditionally; @Garikipati2016Wang-Toupin; @Garikipati2018Sagiyama-Martensitic] (most transparently written in coordinate notation): $$\begin{aligned}
P_{iJ,J} - B_{iJK,JK} & = 0
\label{eq:govern-mechanical}
\end{aligned}$$ where $\BP$ and $\BB$ are the stress tensors, conjugate to the deformation gradient $\BF$ and the gradient of the deformation gradient $\nabla \BF$, respectively, whose forms are given as $$\begin{aligned}
P_{iJ} & = \sum_\alpha \frac{\partial (\scrF + \scrG)}{\partial e_\alpha} \frac{\partial e_\alpha}{\partial F_{iJ}}
+ \sum_\alpha \frac{\partial \scrG}{\partial e_{\alpha,I}} \frac{\partial e_{\alpha,I}}{\partial F_{iJ}} \\
B_{iJK} & = \sum_\alpha \frac{\partial \scrG}{\partial e_{\alpha,I}} \frac{\partial e_{\alpha,I}}{\partial F_{iJ,K}}. \\
\end{aligned}$$ With appropriate initial and boundary conditions, the composition and deformation fields are obtained by solving Eqs. \[eq:govern-chemistry\] and \[eq:govern-mechanical\]. Our implementation uses the `mechanoChemIGA` code, a publicly available and highly parallelized multiphysics code built on the `PETSc` [@petsc-efficient; @petsc-user-ref], `Trilinos` [@Trilinos2005; @Trilinos-Overview], and `PetIGA` [@PetIGA] libraries within the Isogeometric Analysis (IGA) framework.
Homogenized mechanical properties for heterogeneous microstructures
-------------------------------------------------------------------
The microstructures obtained from solving and are highly heterogeneous, as illustrated in Fig. \[fig:dns-results\]. To describe their macroscopic mechanical responses, the averaged deformation gradient $\BF^\text{avg}$ and the total mechanical free energy $\Psi_\text{mech}$ are used, which are computed as $$\BF^\text{avg} = \int_{\Omega} \BF~dV \quad \text{and} \quad \Psi_\text{mech} = \int_{\Omega} \psi_\text{mech}(c, \Be, \nabla\Be)~dV,
\label{eq:avg-F-psi}$$ with $\Omega$ representing the domain of interest. In Eq. \[eq:avg-F-psi\], $\psi_\text{mech}(c, \Be, \nabla\Be)$ is the total elastic free energy density, consisting of the purely elastic term \[eq:2d-psi:2\] and the mixed term \[eq:2d-psi:3\] of Eq. \[eq:2d-psi\]: $$\psi_\text{mech}(c, \Be, \nabla\Be) =
\frac{2d_e}{s_e^2}(e_1^2 + e_6^2) + \frac{d_e}{s_e^4}e_2^4 + \frac{1}{2} \nabla e_2 \cdot \lambda_e l_e^2 \nabla e_2
+ (1-2c) \frac{2d_e}{s_e^2}e_2^2.
\label{eq:2d-psi-mech}$$ The macroscopic first Piola-Kirchhoff stress tensor $\BP^\text{avg}$ is computed as $$P^\text{avg}_{iJ} = \int_{\Gamma} P_{iK}N_{K}~dA_{J}
\label{eq:avg-P}$$ by averaging the surface traction components ($T_i=P_{iK}N_K$) on a given surface $\Gamma$ with normal $\BN$ in the positive/negative $J^\text{th}$ direction [@Garikipati2019Sagiyama-ML-Martensitic].
Neural networks {#sec:NN}
===============
In this section, the architectures of DNNs, CNNs, and KBNNs used in Section \[sec:num-example\] are briefly discussed.
DNN {#sec:NN-dnn}
---
A DNN consists of multiple layers with one input layer, one output layer, and several hidden layers in between. The inputs and outputs are called features and labels, respectively. The optimal architecture of a DNN for a specific problem is unknown *a priori*. Users need to select the type and structure of each layer and the number of hidden layers. In this work, DNNs specifically refer to neural networks made of fully connected (FC) layers, to distinguish from CNNs discussed in Section \[sec:NN-cnn\]. A FC layer consists of multiple neurons, which take a group of weighted values and a bias as inputs, and return the output by applying an activation function to their summation. In DNNs, the weights and biases are variables subject to global optimization. The architecture of DNNs is determined by the total number of hidden layers and the number of neurons per layer, which are referred to as “hyperparameters”.
CNN {#sec:NN-cnn}
---
A CNN is a versatile type of neural network originally developed to analyze image data for tasks such as pattern detection or feature selection [@Krizhevsky2012imagenet]. As discussed in the introduction, it has recently become a very useful tool for studying material microstructure-property relationships in situations where data from experiments and computational materials physics simulations are available as easily visualizable images. A CNN is often a mixture of convolutional layers, pooling layers, and FC layers. It can significantly reduce the dimensionality of the representation, and typically requires far fewer variables than a DNN with only FC layers for the same task. The structure of a convolutional layer is defined by hyperparameters such as the size and number of filters, the choice of padding, and the stride. In a convolutional layer, the filter kernels and biases are variables subject to global optimization. A pooling layer has the filter size, padding, and stride as hyperparameters, but no global variables.
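As an illustrative Keras sketch of such a CNN (the filter counts, which grow with depth, and the dense-layer size are placeholders, not the tuned hyperparameters; TensorFlow/Keras is assumed, as used in this work):

```python
import tensorflow as tf

# Input: one 60x60 microstructure field (e.g. the e2 order parameter);
# output: a scalar quantity of interest such as a base free energy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(60, 60, 1)),
    tf.keras.layers.Conv2D(4, (3, 3), strides=(2, 2), padding='same',
                           activation='softplus'),
    tf.keras.layers.Conv2D(8, (3, 3), strides=(2, 2), padding='same',
                           activation='softplus'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation='softplus'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
```

The strided convolutions reduce the $60\times60$ field to a compact representation before the FC layers map it to the scalar label, which is why a CNN needs far fewer variables than an FC-only network on the same image input.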
KBNN {#sec:NN-kbnn}
----
A knowledge-based neural network (KBNN) utilizes information from pre-trained models, as illustrated in Fig. \[fig:kbnn\]. Whether or not to use a KBNN depends on the nature of the available data. For example, when the available data include abundant, less accurate data as well as scarce, expensive, highly accurate data, one can use a so-called multi-fidelity model: a low-fidelity model is first trained on the less accurate data, and a KBNN built upon this pre-trained low-fidelity model is used to improve the overall accuracy with the high-fidelity data [@Garikipati2019Teichert-ML-SurrogateOpt]. Such an approach can significantly reduce the required amount of expensive, high-fidelity data while still achieving the desired model accuracy. The data itself may also have a multi-resolution structure, which a single neural network may be incapable of capturing in full. In such a scenario, one NN can be trained first to describe the dominant feature of the data. Next, a KBNN is built upon this pre-trained model with additional free variables to be trained on the same dataset. The additional variables resolve details in the data that are not well delineated by the pre-trained model. In this work, the main neural network of the KBNN is named the master neural network (MNN), and the pre-trained neural network is called the embedded neural network (ENN). The variables in the MNN are optimized during training, whereas those in the ENN are untrainable; in other words, the variables of the ENN are held fixed while training the MNN.
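A minimal Keras sketch of this construction (all layer sizes are illustrative): the pre-trained ENN is frozen by setting `trainable = False`, and the MNN learns a trainable correction that is added to the ENN output.

```python
import tensorflow as tf

# Embedded network (ENN): stands in for a model pre-trained on the
# dominant feature of the data. Frozen: its variables are untrainable.
enn = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(8, activation='softplus'),
    tf.keras.layers.Dense(1),
])
enn.trainable = False

# Master network (MNN): takes microstructure features plus applied
# strain components and learns the fluctuation about the ENN output.
features = tf.keras.Input(shape=(5,))   # microstructure descriptors
strains = tf.keras.Input(shape=(3,))    # small applied strain components
base = enn(features)
h = tf.keras.layers.Dense(16, activation='softplus')(
    tf.keras.layers.Concatenate()([features, strains]))
correction = tf.keras.layers.Dense(1)(h)
kbnn = tf.keras.Model([features, strains],
                      tf.keras.layers.Add()([base, correction]))
```

Only the MNN layers contribute trainable variables; gradients never update the frozen ENN, so the pre-trained description of the dominant trajectory is preserved while the correction is fit.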
**Remark 1:** The global parameters of NNs are optimized via a back-propagation algorithm during the training process to drive down a loss function. The hyperparameters, which define the optimal architecture of NNs, need to be chosen by a separate process that usually involves cross-validation. For a given NN architecture, one further needs to adjust the learning rate to obtain the optimal weights and biases. A full-fledged discussion on avoidance of model underfitting or overfitting is beyond the scope of this work.
**Remark 2:** The open source library `TensorFlow` [@tensorflow2015-whitepaper] is used to create the different neural network structures in this work. When NNs are used to learn a mathematical relationship with a unique physical meaning, the NNs are considered accurate only when both the label(s) and other physically meaningful quantities, usually involving the derivatives of the label(s), are accurate. For example, a DNN with fully connected layers is trained to learn the free energy density function of a Neo-Hookean hyperelastic material in [@Garikipati2019Sagiyama-ML-Martensitic]. For such a problem, an NN is required to accurately represent not only the free energy function, but also its derivatives with respect to its features; in that specific problem, the features are the strain components and the derivatives of the NN are the stress fields. In this work, we evaluate the performance of NNs primarily based on the loss function, but also consider their derivatives whenever necessary. The standard automatic differentiation API from `TensorFlow` is utilized to compute the derivatives of NNs.
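The same automatic differentiation machinery can be sketched on a toy analytic energy (quadratic in the strain features, with a made-up stiffness $k$; for an NN, `psi` would be the network output and `E` its input features):

```python
import tensorflow as tf

# Toy scalar "free energy" psi(E) = 0.5 * k * (E11^2 + E22^2);
# its derivative d(psi)/dE plays the role of the stress.
k = 3.0
E = tf.Variable([[0.10, 0.20]])      # features: strain components

with tf.GradientTape() as tape:
    psi = 0.5 * k * tf.reduce_sum(E**2, axis=1)
stress = tape.gradient(psi, E)       # analytically: k * E
```

Because the tape differentiates the computed energy exactly, the recovered stress matches the analytic derivative $k\BE$; the same call applied to a trained NN yields the stress prediction implied by its learned energy.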
Data generation, feature selection, and hyperparameter search {#sec:data}
=============================================================
In this section, we first present detailed simulation procedures to generate synthetic microstructures based on the computational framework presented in Section \[sec:spinodal-framework\]. Next, several pre-defined features for DNNs used in Section \[sec:num-example\] are discussed. The hyperparameter search procedure for DNNs, CNNs, and KBNNs is covered in Section \[sec:hyperparameter\].
Microstructure generation
-------------------------
\
In this work, a two-dimensional solid in a domain of $\Omega = (0,0.01)\times (0,0.01)$ with a mesh size of $60\times60$ is studied. Initially, the solid is at high temperature and has a single square phase with a randomly fluctuating composition in the range of $c=0.46 \pm 0.05$. A steady biaxial Dirichlet-type loading is applied to the solid, as shown in Fig. \[fig:dns-setup-psi\](a). The solid is quenched to a low temperature state with a non-convex free energy density, as given in , under which mechanochemical spinodal decomposition occurs.
DNS of multiple phase evolution are performed, with each of them starting from different initial compositions and mechanical boundary conditions. Throughout each DNS of phase evolution, mechanical boundary conditions remain unchanged, and the total free energy of the solid and its mechanical part are driven by the second law of thermodynamics.[^3] Results from one of the many DNS are shown in Figs. \[fig:dns-setup-psi\](b) and \[fig:elastic-free-energy-ossilcation\](a). Selected snapshots of the composition $c$ and the strain order parameter $e_2$ at different states from this particular simulation are shown in Fig. \[fig:dns-results\], in which the coexistence of the square phase, the positive rectangle phase, and the negative rectangle phase is observed.
Each DNS takes many hundreds of time steps. We refer to the solution at each time step as a frame. The homogenized deformation gradient $\BF^\text{avg}$ in Eq. \[eq:avg-F-psi\], the homogenized first Piola-Kirchhoff stress $\BP^\text{avg}$ in Eq. \[eq:avg-P\], and the total mechanical free energy $\Psi_\text{mech}$ in Eq. \[eq:avg-F-psi\] are computed for each frame of every DNS. Since each frame has a different volume ratio and a different spatial distribution of the three phases, it is considered a unique microstructure, whose effective mechanical behavior differs from those of the other microstructures. Thus, each DNS generates multiple microstructures. We discard the first 50 frames of each simulation, as distinct phase separation is not yet fully developed at this stage. In this work, 20 DNS are performed, generating 17000 microstructures.
Data preparation {#sec:data-preparation}
----------------
To evaluate how the macroscopic mechanical behavior of solids is related to their microstructures, 9 microstructures are uniformly sampled from each DNS with 180 microstructures being sampled in total. Combinations of different random shear and biaxial mechanical loadings are applied to each sampled microstructure. The newly applied mechanical testing loadings are much smaller than the initially applied ones for microstructure generation; hence the microstructures themselves are not altered during this posterior testing procedure. The quantities $\BF^\text{avg}$, $\BP^\text{avg}$, and $\psi_\text{mech}$, are collected for each test.
Four datasets are created in this work. Datasets $\text{D}_\text{I}$ and $\text{D}_\text{II}$, which contain the microstructure features defined in Section \[sec:feature-selection\], the $e_2$ solution, and the $\psi_\text{mech}^0$ from DNS, are created for microstructures from a single DNS and for all microstructures from different DNS, respectively. Datasets $\text{D}_\text{III}$ and $\text{D}_\text{IV}$ contain mechanical testing information for a single microstructure and for all the sampled microstructures, respectively. Specifically, in dataset $\text{D}_\text{III}$, the microstructure at frame 400 from one particular DNS, as shown in Fig. \[fig:elastic-free-energy-ossilcation\](a), is tested with 1600 different combinations of mechanical loading. The elastic free energy $\psi_\text{mech}$ from all 1600 tests is plotted in Fig. \[fig:elastic-free-energy-ossilcation\](b), where $\psi_\text{mech}$ oscillates around a base elastic free energy $\psi_\text{mech}^0 = -0.01923$. Here, $\psi_\text{mech}^0$ refers to the elastic free energy stored in the microstructure during the phase evolution shown in Figs. \[fig:dns-setup-psi\](b) and \[fig:dns-results\], i.e. before the mechanical tests. The small magnitude of the oscillations in $\psi_\text{mech}$ in Fig. \[fig:elastic-free-energy-ossilcation\](b) further confirms that the applied mechanical loadings are very small. In dataset $\text{D}_\text{IV}$, all 180 sampled microstructures, 9 of which, from one specific DNS, are shown in Fig. \[fig:elastic-free-energy-ossilcation\](a), are tested under different mechanical loadings, with 57600 data points collected.
Microstructure feature selection {#sec:feature-selection}
--------------------------------
To differentiate microstructures from each other, several features are selected. These features include the volume fractions $\phi_r^+$ and $\phi_r^-$ of the positive and negative rectangle phases. The volume fraction of the square phase is not selected as an independent feature because it can be calculated as $\phi_s = 1-\phi_r^+-\phi_r^-$. Other selected features include the interfacial length between the square phase and the rectangle phases $l_s^r$, as shown in Fig. \[fig:interfacial-length\](a), the interfacial length of the positive rectangle phase $l^{r+}$, as shown in Fig. \[fig:interfacial-length\](b), and the interfacial length of the negative rectangle phase $l^{r-}$, as shown in Fig. \[fig:interfacial-length\](c).
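Such descriptors can be extracted from the nodal $e_2$ field by thresholding; a minimal NumPy sketch (the threshold value and the neighbor-count estimate of interfacial length are simplifications of an actual DNS post-processor):

```python
import numpy as np

def microstructure_features(e2, threshold=0.05, h=1.0):
    """Volume fractions of the two rectangular variants and a crude
    interface-length estimate from counts of unlike neighbor pairs,
    each contributing one grid spacing h of interface."""
    pos = e2 > threshold          # "positive" rectangular variant
    neg = e2 < -threshold         # "negative" rectangular variant
    phi_pos = pos.mean()
    phi_neg = neg.mean()
    phase = np.where(pos, 1, np.where(neg, -1, 0))
    # count horizontal and vertical neighbor pairs in different phases
    mismatch = (np.count_nonzero(phase[:, 1:] != phase[:, :-1])
                + np.count_nonzero(phase[1:, :] != phase[:-1, :]))
    return phi_pos, phi_neg, mismatch * h

# Two vertical stripes: left half positive variant, right half negative.
e2 = np.full((60, 60), 0.1)
e2[:, 30:] = -0.1
phi_p, phi_n, l_int = microstructure_features(e2)
```

For the two-stripe test field, each variant occupies half the domain and the single vertical interface crosses 60 grid rows, which the neighbor count recovers exactly.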
Hyperparameter search {#sec:hyperparameter}
---------------------
As discussed in Section \[sec:NN\], the optimal architecture of NNs is unknown *a priori*. Hyperparameters can be selected via either manual tuning or automatic optimization algorithms, such as grid search or random search [@goodfellow2016deep]. In this work, a grid search is performed for all the NNs. For DNNs and the MNN of KBNNs, we search for the number of hidden layers ($N_\text{HL}$) and the number of neurons per layer ($N_\text{NPL}$). In our search space, $N_\text{HL}$ varies between 1 and 10 in steps of 1. An identical $N_\text{NPL}$ is assumed for each hidden layer, with its value varying between 2 and 256 in steps of 2. For CNNs, a kernel size of $(3,3)$ and a stride size of $(2,2)$ are pre-chosen. We only search for $N_\text{HL}$ and the number of filters per layer ($N_\text{FPL}$), with $N_\text{HL}$ varying from 1 to 10 in steps of 1 and $N_\text{FPL}$ varying from 2 to 32 in steps of 1. Unlike the case of $N_\text{NPL}$ for DNNs/MNNs, $N_\text{FPL}$ is not identical for each layer; its value increases with the depth of the hidden layer. In this process, the exponentially decaying learning rate implemented in `Tensorflow`, which follows a staircase function, is used $$\text{lr} = \text{lr}_0 \cdot \text{pow}\left( v_\text{decay}, \frac{N_\text{total}}{N_\text{decay}}\right)
\label{eq:lr-step}$$ with an initial learning rate $\text{lr}_0 = 0.001$, a decay rate $v_\text{decay} = 0.7$, a decay step $N_\text{decay} = 100$, and a final $N_\text{total} = 2000$ epochs. The dataset is randomly split into a set of $90\%$ for training and validation and a set of $10\%$ for testing. A K-fold cross-validation procedure (with $K=5$) [@goodfellow2016deep] is performed on the $90\%$ set to train and evaluate different NN models. Feature normalization and label scaling are used to improve the accuracy of NNs during training.
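The staircase behavior of this schedule, with the rate dropping by a factor $v_\text{decay}$ every $N_\text{decay}$ epochs, can be sketched directly (the floor produces the piecewise-constant decay):

```python
import math

def staircase_lr(epoch, lr0=0.001, v_decay=0.7, n_decay=100):
    """Exponentially decaying learning rate with staircase behavior:
    constant within each n_decay-epoch window, then a discrete drop
    by a factor v_decay."""
    return lr0 * v_decay ** math.floor(epoch / n_decay)
```

With the values above, the rate stays at $0.001$ through epoch 99, drops to $0.0007$ at epoch 100, and so on until the final epoch of training.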
To perform the hyperparameter search, the total number of variables of each NN architecture in the search space is first computed and sorted in ascending order. NNs with more variables than the size of the dataset are excluded from the search space. A grid search based on the total number of variables is then performed, and the NNs are ranked in ascending order of their averaged validation loss. The range of variable counts spanned by the top $30\%$ of NNs defines a refined search space, in which a new grid search is performed. The grid search is repeated three times in total, and the model with the smallest averaged validation loss is selected. The hyperparameter search procedure is summarized in Algorithm Box \[algo:hyper-search\].
1. Create a set $S$ containing all NN structures in the search space defined by the hyperparameters ($N_\text{HL}$, $N_\text{NPL}$, or $N_\text{FPL}$), sorted in ascending order of the total number of variables $V_\text{total}$ of each NN.
2. Initialize the limits $V_\text{total}^\text{min} = 0$ and $V_\text{total}^\text{max} = \text{size of dataset}~D$.
3. Uniformly sample multiple NNs ($=25$ in this work) satisfying $V_\text{total}^\text{min} \le V_\text{total} \le V_\text{total}^\text{max}$ to form a subset $\bar{S}$.
4. Perform K-fold cross-validation for each model $M_i$ in $\bar{S}$: split $D$ into $K$ mutually exclusive subsets $D_k$; train $M_i$ with $D\backslash D_k$; validate $M_i$ with $D_k$ to obtain the loss $\mathcal{L}_i^k$; compute the averaged validation loss $\bar{\mathcal{L}}_i$ for $M_i$.
5. Sort the models in $\bar{S}$ in ascending order of $\bar{\mathcal{L}}_i$.
6. Refine the search space by updating $V_\text{total}^\text{min} = \text{min} ( V_\text{total})$ and $V_\text{total}^\text{max} = \text{max} ( V_\text{total})$ over $\bar{S}_{30}$, the subset of $\bar{S}$ containing the top $30\%$ (a user-defined threshold) of models, and repeat from step 3.
7. Select the best model $M$ with the smallest $\bar{\mathcal{L}}$.
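The initial pruning step, discarding architectures whose variable count exceeds the dataset size, follows from a closed-form parameter count for fully connected networks. A minimal sketch (the feature count of 5 matches the microstructure descriptors; the 17000-microstructure count is used as an illustrative dataset size):

```python
def n_variables(n_in, n_hl, n_npl, n_out=1):
    """Total weights + biases of a fully connected network with n_hl
    hidden layers of n_npl neurons each."""
    n = n_in * n_npl + n_npl                   # input -> first hidden
    n += (n_hl - 1) * (n_npl * n_npl + n_npl)  # hidden -> hidden
    n += n_npl * n_out + n_out                 # last hidden -> output
    return n

# Restrict the grid to architectures with fewer variables than data
# points, as in step 2 of the algorithm.
dataset_size = 17000
search_space = [(n_hl, n_npl)
                for n_hl in range(1, 11)         # 1..10 hidden layers
                for n_npl in range(2, 257, 2)    # 2..256 neurons, step 2
                if n_variables(5, n_hl, n_npl) < dataset_size]
```

For instance, a single hidden layer of 76 neurons over 5 features gives $5\cdot76+76$ variables into the hidden layer plus $76+1$ into the output, i.e. 533 in total, well inside the admissible range.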
Numerical examples {#sec:num-example}
==================
In this section, we explore different NNs to predict the homogenized mechanical behavior of synthetically generated heterogeneous microstructures. Specifically, the base elastic free energy of microstructures from multiple DNS is studied in Section \[sec:sim-base-free-energy\] with both CNNs and DNNs. The homogenized mechanical behavior of a single microstructure is studied with KBNNs in Section \[sec:sim-kbnn-one-microstructure\]. Finally, CNN-enhanced KBNNs are trained to predict the homogenized mechanical behavior of different microstructures from multiple DNS in Section \[sec:sim-kbnn-multi-DNS\].
Base mechanical free energy for one DNS {#sec:sim-base-free-energy}
---------------------------------------
As revealed in Figs. \[fig:dns-setup-psi\] and \[fig:elastic-free-energy-ossilcation\], the elastic free energy $\psi_\text{mech}$ stored in microstructures due to phase evolution is of a sharply multi-resolution nature. It has $\psi_\text{mech}^0$ from microstructure phase evolution as the dominant feature and $\Delta \psi_\text{mech}$ from mechanical testing as the detailed feature. It is challenging to capture both features by a single NN, because the weights emphasize the dominant feature over the detailed feature during the training process. To overcome this challenge, we use KBNNs, as discussed in Section \[sec:NN-kbnn\], to represent this multi-resolution data. The ENN is trained to learn the base free energy $\psi_\text{mech}^0$ ($\text{D}_\text{I}$) in this section with both DNNs and CNNs being explored.
### DNN {#sec:label-shift-dnn}
\
A DNN using the mean squared error (MSE) loss function is trained to predict the base elastic free energy $\psi_\text{mech}^0$. The Softplus activation function is used for all the layers. The DNN has $\phi_r^+$, $\phi_r^-$, $l_s^r$, $l^{r+}$, and $l^{r-}$ as its features and $\psi_\text{mech}^0$ as its label. A grid search of the hyperparameters $\{N_\text{HL},~N_\text{NPL}\}$ for the DNN is conducted by following the procedure discussed in Section \[sec:hyperparameter\], yielding an optimal structure with $N_\text{HL} = 1$, $N_\text{NPL}= 76$, and a total of $533$ variables. The model is trained with the Adam optimizer for 10000 epochs with the exponentially decaying learning rate given in where $v_\text{decay} = 0.92$. The learning curve for the DNN is plotted in Fig. \[fig:psi-label-shift-dnn-cnn\](a), where neither overfitting nor underfitting is observed. Figs. \[fig:psi-label-shift-dnn-cnn\](b,c) show that the model can predict $\psi_\text{mech}^0$ with satisfactory accuracy. The value of $\psi_\text{mech}^0$ computed from the DNN is denoted as $\psi_\text{mech,DNN}^0$.
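For illustration, the two training ingredients named above — the Softplus activation and an exponentially decaying learning rate — can be sketched in NumPy. The exact form of the decay schedule is an assumption here, since the referenced equation is not reproduced in this excerpt:

```python
import numpy as np

def softplus(x):
    # numerically stable Softplus activation: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def exp_decay_lr(step, lr0=1e-3, v_decay=0.92, decay_steps=100):
    # assumed form of the exponentially decaying schedule, v_decay = 0.92;
    # lr0 and decay_steps are hypothetical values for demonstration
    return lr0 * v_decay ** (step / decay_steps)
```

After `decay_steps` optimizer steps the learning rate is multiplied by `v_decay`, so the schedule decays smoothly rather than in discrete drops.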
### CNN {#sec:label-shift-cnn}
layer type notes
-------------------- ------------- --------------------------------------------
Input $e_2$ field
Conv2D filters = 2 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 3 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 5 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 6 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Flatten - -
Output Dense Layer label =1 Linear
: Detail of the CNN architecture for representing $\psi_\text{mech}^0$ of single DNS.[]{data-label="tab:cnn-base-psi-1-dns"}
The microstructure features selected in Section \[sec:feature-selection\] are an interpretation of the image data based on the authors' domain knowledge of the global quantities that distinguish microstructures. Alternatively, we can train CNNs to automatically identify features that represent microstructures. Such an approach underlies the treatment in this section, with the goal of investigating whether CNNs hold any advantage over DNNs for computational materials physics simulations.
A CNN consisting of multiple convolutional layers, multiple pooling layers, and one dense layer is trained to predict the base elastic free energy in Fig. \[fig:elastic-free-energy-ossilcation\](a). The CNN takes the whole $e_2$ field solution from DNS as input, with a pixel resolution of $61\times 61$, and $\psi_\text{mech}^0$ as its label. A hyperparameter search is conducted by following the procedure discussed in Section \[sec:hyperparameter\]; the best CNN architecture, with a total of 590 variables, is given in Table \[tab:cnn-base-psi-1-dns\]. The model is trained with the Adam optimizer for 10000 epochs with the exponentially decaying learning rate given in where $v_\text{decay} = 0.92$. The learning curve for the CNN is plotted in Fig. \[fig:psi-label-shift-dnn-cnn\](d). The model can accurately predict $\psi_\text{mech}^0$, as plotted in Figs. \[fig:psi-label-shift-dnn-cnn\](e,f), which show an improved accuracy compared with the DNN results in Figs. \[fig:psi-label-shift-dnn-cnn\](b,c).
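The quoted variable count can be checked by hand. The following sketch (assuming 'same' padding and stride-2 pooling, as listed in the table) reproduces the 590 trainable variables of Table \[tab:cnn-base-psi-1-dns\] from a $61\times 61$ single-channel input:

```python
import math

def conv_params(k, c_in, filters):
    # k*k*c_in weights per filter, plus one bias each
    return (k * k * c_in + 1) * filters

def count_cnn_variables(side=61, channels=1, filters_seq=(2, 3, 5, 6), k=3):
    """Trainable-variable count for the conv/pool stack plus the final
    single-output dense layer ('same' padding, stride-2 pooling assumed)."""
    total, c_in = 0, channels
    for f in filters_seq:
        total += conv_params(k, c_in, f)
        side = math.ceil(side / 2)   # MaxPooling2D (2,2), stride (2,2)
        c_in = f
    total += side * side * c_in + 1  # Flatten -> Dense(1), linear
    return total

print(count_cnn_variables())  # -> 590
```

With `filters_seq=(4, 8, 16, 18)` the same count gives 4403, matching the multi-DNS architecture of Table \[tab:cnn-base-psi-m-dns\] quoted later in the text.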
Base elastic free energy for multiple DNS {#sec:sim-base-free-energy-m-dns}
-----------------------------------------
\
layer type notes
-------------------- -------------- --------------------------------------------
Input $e_2$ field
Conv2D filters = 4 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 8 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 16 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 18 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Flatten - -
Output Dense Layer label = 1 Linear
: Detail of the CNN architecture for representing $\psi_\text{mech}^0$ of multiple DNS.[]{data-label="tab:cnn-base-psi-m-dns"}
In this section, both DNNs and CNNs are explored to represent the base free energy $\psi_\text{mech}^0$ ($\text{D}_\text{II}$) from multiple DNS for the ENN. As in Section \[sec:label-shift-dnn\], the DNN takes $\phi_r^+$, $\phi_r^-$, $l_s^r$, $l^{r+}$, and $l^{r-}$ as its features and $\psi_\text{mech}^0$ as its label. The results of an optimal DNN structure obtained from the hyperparameter search, which has $N_\text{HL} = 7$, $N_\text{NPL}= 48$, and $V_\text{total} = 14449$, are shown in Fig. \[fig:psi-label-shift-dnn-cnn-m-dns\](a-c). The results of an optimal CNN structure, whose architecture is given in Table \[tab:cnn-base-psi-m-dns\] with $V_\text{total} = 4403$, are shown in Fig. \[fig:psi-label-shift-dnn-cnn-m-dns\](d-f). From Fig. \[fig:psi-label-shift-dnn-cnn-m-dns\], one can observe that both the DNN and the CNN show a good representation of the base free energy from multiple DNS with different initial conditions and boundary conditions.
Homogenized mechanical behavior of single microstructure {#sec:sim-kbnn-one-microstructure}
--------------------------------------------------------
\
In this section, KBNNs are constructed to study the homogenized mechanical behavior of a single microstructure (dataset $\text{D}_\text{III}$), with ENNs being either pre-trained DNNs or CNNs. The ENNs offset the dominant feature from the datasets to allow KBNNs to capture the detailed feature. This is achieved via a new MSE loss function with the form $$\text{MSE} = \frac{1}{m} \sum_{i} \left( \mathbf{Y} - \mathbf{Z} \right)_i^2
\quad \text{with} \quad
\mathbf{Y}= \psi_\text{mech} - \psi_\text{mech,NN}^0
\label{eq:new-mse}$$ where $\mathbf{Y}$ is the label, $\mathbf{Z}$ is the KBNN predicted value, $\psi_\text{mech}$ is the DNS value of the elastic free energy after mechanical testing, and $\psi_\text{mech,NN}^0$ is the ENN predicted base elastic free energy of the microstructure before mechanical testing. In , $\mathbf{Y}$ essentially represents the change of mechanical free energy $\Delta \psi_\text{mech}$ due to the posterior mechanical testing.
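The shifted loss can be transcribed directly in NumPy; this is a sketch of the label shift, not the authors' implementation:

```python
import numpy as np

def shifted_mse(psi_dns, psi_base_nn, z_pred):
    # label Y = psi_mech - psi_mech,NN^0 ; loss = (1/m) sum_i (Y - Z)_i^2
    y = psi_dns - psi_base_nn
    return np.mean((y - z_pred) ** 2)
```

When the MNN output `z_pred` matches the free-energy change `psi_dns - psi_base_nn` exactly, the loss vanishes, which is what allows the KBNN to focus its capacity on the detailed feature $\Delta \psi_\text{mech}$.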
### DNN-based KBNN {#sec:dnn-based-kbnn}
\
With the DNN in Section \[sec:label-shift-dnn\] in hand, we now build a KBNN model with the structure presented in Fig. \[fig:kbnn\], with $F_{11}$, $F_{12}$, $F_{21}$, $F_{22}$, $\phi_r^+$, $\phi_r^-$, $l_s^r$, $l^{r+}$, and $l^{r-}$ as features and $\psi_\text{mech}$ as the label. In this KBNN, the embedded pre-trained DNN takes $\{\phi_r^+,~\phi_r^-,~l_s^r,~l^{r+},~l^{r-} \}$ to predict $\psi_\text{mech,NN}^0$. The remaining features $\{F_{11},~F_{12},~F_{21},~F_{22} \}$ and the shifted label $\Delta \psi_\text{mech} = \psi_\text{mech} - \psi_\text{mech,NN}^0$ are used to optimize the variables of the MNN. The MNN is not exposed to the features $\{\phi_r^+,~\phi_r^-,~l_s^r,~l^{r+},~l^{r-} \}$, and therefore does not have information on the microstructure that it is training against. This is a refinement we undertake in Section \[sec:sim-kbnn-multi-DNS\]. The optimal values of $N_\text{HL}$ and $N_\text{NPL}$ for the MNN are searched by following the procedures in Section \[sec:hyperparameter\]. An $L^2$ kernel regularization with a factor of 0.001 is applied to the input layer to minimize the coefficients of less important features to reduce overfitting. The Softplus activation function is used. An optimal MNN is obtained with $N_\text{HL}=3$, $N_\text{NPL}=24$, and $V_\text{total} = 1345$. The KBNN is trained with the Adam optimizer for 10000 epochs with the exponentially decaying learning rate given in where $v_\text{decay} = 0.92$. The learning curve for the KBNN is plotted in Fig. \[fig:psi-800-dnn-kbnn\](a), where neither overfitting nor underfitting is observed. Fig. \[fig:psi-800-dnn-kbnn\](b) shows that the KBNN can capture the detailed features of the data and predict $\Delta \psi_\text{mech}$ with satisfactory accuracy. The derivatives of $\Delta \psi_\text{mech, NN}$ with respect to $\BF$ are shown in Fig. \[fig:psi-800-dnn-kbnn\](c-f), where the KBNN shows good performance on $P_{11}$ and $P_{22}$, but not on $P_{12}$ and $P_{21}$, because $P_{12}$ and $P_{21}$ are one order of magnitude smaller than $P_{11}$ and $P_{22}$ in the DNS.
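The derivative step — recovering the first Piola-Kirchhoff stress $\BP = \partial\psi/\partial\BF$ from a scalar energy model — can be illustrated with a toy energy in place of the trained KBNN. The St. Venant-Kirchhoff form below is an assumption for demonstration only (in the paper the derivative is taken through the network itself):

```python
import numpy as np

def psi(F, lam=1.0, mu=0.5):
    # toy St. Venant-Kirchhoff energy standing in for the trained KBNN;
    # E = (F^T F - I)/2 is the Green-Lagrange strain
    E = 0.5 * (F.T @ F - np.eye(2))
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)

def first_pk_stress(F, h=1e-6):
    # P_iJ = d psi / d F_iJ, by central finite differences
    P = np.zeros_like(F)
    for i in range(2):
        for j in range(2):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, j] += h
            Fm[i, j] -= h
            P[i, j] = (psi(Fp) - psi(Fm)) / (2.0 * h)
    return P
```

For this energy the exact stress is $\BP = \BF(\lambda\,\mathrm{tr}(\BE)\boldsymbol{1} + 2\mu\BE)$, which the finite-difference estimate reproduces to high accuracy; in a trained network the same derivatives are obtained by automatic differentiation.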
### CNN-based KBNN
A CNN-based KBNN is built with $F_{11}$, $F_{12}$, $F_{21}$, $F_{22}$, and the image of the $e_2$ field solution as features and $\psi_\text{mech}$ as the label. In this KBNN, the embedded pre-trained CNN takes the image of the $e_2$ field solution of the base microstructure to predict $\psi_\text{mech}^0$. The remaining features $\{F_{11},~F_{12},~F_{21},~F_{22} \}$ and the shifted label $\Delta \psi_\text{mech} = \psi_\text{mech} - \psi_\text{mech}^0$ are used to optimize the variables of the MNN. The same MNN as in Section \[sec:dnn-based-kbnn\] is used and trained. Fig. \[fig:psi-800-cnn-kbnn\](b) shows that the KBNN can capture the detailed features of the data and predict $\Delta \psi_\text{mech}$ with satisfactory accuracy. The derivatives of $\Delta \psi_\text{mech, NN}$ with respect to $\BF$ are shown in Fig. \[fig:psi-800-cnn-kbnn\](c-f); similar to the results in Fig. \[fig:psi-800-dnn-kbnn\], the CNN-based KBNN also shows good performance on $P_{11}$ and $P_{22}$, but not on $P_{12}$ and $P_{21}$.
Layers notes
-------------------- ---------------------------------- --------------------------------------------
Input (1) perturbed $e_2$ fields
Conv2D filters = 12 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Conv2D filters = 16 kernel (3,3), padding = same, ReLU
MaxPooling2D - kernel (2,2), stride (2,2), padding = same
Flatten - -
Dense (\*) neurons = 8 ReLU
Input (2) $F_{11},~F_{12},~F_{21},~F_{22}$ -
Concatenate Dense (\*) + Input (2)
Dense neurons = 48 Softplus
Dense neurons = 48 Softplus
Dense neurons = 48 Softplus
Output Dense Layer label = 1 Linear
: Details of the MNN for predicting homogenized mechanical response of multiple microstructures sampled from different DNS.[]{data-label="tab:cnn-enhanced-MNN"}
![Illustration of the structure of the CNN-enhanced KBNN. The ENN, which takes either pre-defined features or microstructure images, is used to offset the dominant feature. The CNN in the middle, which takes the perturbed $e_2$ field information, is used to identify the most relevant features for homogenized mechanical behavior prediction. The combination of the outputs from the CNN and the deformation gradient $\BF$ components serves as the input for a fully connected DNN that resolves the detailed features of the dataset.[]{data-label="fig:cnn-enchanced-kbnn"}](cnn-enhanced-KBNN){width="0.9\linewidth"}
\
Homogenized mechanical behavior of microstructures from multiple DNS {#sec:sim-kbnn-multi-DNS}
--------------------------------------------------------------------
Expanding beyond the studies of a single microstructure, KBNNs are constructed to predict the homogenized behavior of multiple microstructures from different DNS (dataset $\text{D}_\text{IV}$). KBNNs similar to those used in Section \[sec:sim-kbnn-one-microstructure\] are investigated. However, an MNN with $\{F_{11}$, $F_{12}$, $F_{21}$, $F_{22}\}$ as features is incapable of describing the homogenized mechanical behavior of different microstructures, as such a simple MNN is unaware of the details of each microstructure. Our studies also confirm that even the inclusion of the pre-defined microstructure-related features $\{\phi_r^+$, $\phi_r^-$, $l_s^r$, $l^{r+}$, $l^{r-}\}$ in the MNN yields insignificant improvement in the performance of KBNNs for multiple microstructures.
Since the MNN with pre-defined features has insufficient expressivity to describe the homogenized mechanical response across microstructures, a CNN-enhanced KBNN structure, as shown in Fig. \[fig:cnn-enchanced-kbnn\], is explored. Here, the CNN enhancement is utilized to identify the most relevant features from the $e_2$ fields. A manual hyperparameter tuning is performed. The details of an MNN with satisfactory performance, which has a total variable number of 9297, are summarized in Table \[tab:cnn-enhanced-MNN\]. Our results, as shown in Fig. \[fig:kbnn-m-dns\], confirm the effectiveness and good performance of the new KBNN structure, which can accurately predict the mechanical free energy on the test dataset. Furthermore, the $P_{11}$ and $P_{22}$ components of $\BP_\text{KBNN}$, obtained by taking derivatives of the KBNN with respect to the deformation gradient $\BF$, match well with respective components of $\BP_\text{DNS}$. The new KBNN structure which performs well at learning the homogenized mechanical behavior of different microstructures demonstrates the advantage of utilizing CNNs in a multi-resolution learning framework for this instance of computational material physics applications, with heterogeneous microstructures.
Conclusions {#sec:conclusion}
===========
In this work, different NN architectures are used to study the homogenized mechanical behavior of microstructures generated by mechano-chemical spinodal decomposition. Our preliminary results show the promise of applying CNNs in computational material physics. Particularly, we have demonstrated that both a CNN-based KBNN and a CNN-enhanced KBNN can be trained to rapidly predict the elastic response based on the images of microstructures.
Our investigations toward infusing the better-performing CNN architectures with interpretability reveal that the convolutional layers isolate a greater number of microstructural features than those that we identified on the basis of domain knowledge: $\{\phi_r^+,~\phi_r^-,~l_s^r,~l^{r+},~l^{r-} \}$. The volume fraction and interfaces appear as recognizable outputs from more than two and three convolutional layers, respectively. While not presenting a set of features with the parsimony that the expert may postulate for the problem, it suggests that CNN architectures use redundancy to outperform DNNs. Interestingly, it also raises questions about the completeness of the feature set $\{\phi_r^+,~\phi_r^-,~l_s^r,~l^{r+},~l^{r-} \}$ that was imposed on the DNN model, suggesting that there are epistemic gaps in the experts’ understanding of this problem.
This is important for future studies on combining image data from experiments with multiphysics simulations. Although this work focused on two-dimensional simulations, our results suggest that CNNs will be even more effective in three-dimensional studies. Because 3D data is more complex in its information content, our domain knowledge might harbor further inadequacies in identifying the relevant features. The CNN, instead, could prove more effective at feature selection and dimensionality reduction.
Acknowledgements {#acknowledgements .unnumbered}
================
We gratefully acknowledge the support of Toyota Research Institute, Award \#849910: “Computational framework for data-driven, predictive, multi-scale and multi-physics modeling of battery materials”. Computing resources were provided in part by the National Science Foundation, United States via grant 1531752 MRI: Acquisition of Conflux, A Novel Platform for Data-Driven Computational Physics (Tech. Monitor: Ed Walker). This work also used the Extreme Science and Engineering Discovery Environment (XSEDE) Comet at the San Diego Supercomputer Center and Stampede2 at The University of Texas at Austin’s Texas Advanced Computing Center through allocation TG-MSS160003 and TG-DMR180072.
[^1]: Corresponding author. E-mail address: [email protected]
[^2]: Recall that $\BE = \frac{1}{2}(\BF^\text{T}\BF-\boldsymbol{1})$, where the deformation gradient is $\BF = \boldsymbol{1}+\partial\Bu/\partial\BX$, and $\Bu$ is the displacement vector.
[^3]: If the mechanical boundary conditions do no incremental work during the phase evolution, and if boundary fluxes vanish, the coupling of the first-order Cahn-Hilliard dynamics and gradient elasticity obeys the second law of thermodynamics, and the total free energy decreases. However, the use of varying Dirichlet boundary conditions on the mechanics translates to work done on the system, and the free energy may increase.
---
abstract: |
We construct various functorial maps (projections) from virtual knots to classical knots. These maps are defined on diagrams of virtual knots; in terms of Gauss diagrams, each of them can be represented as a deletion of some chords. The construction relies upon the notion of parity. As corollaries, we prove that the minimal crossing number of a classical knot, taken over all virtual diagrams representing it, coincides with its classical crossing number.
Such projections can be useful for lifting invariants from classical knots to virtual knots.
Different maps satisfy different properties.
author:
- 'Vassily Olegovich Manturov [^1] [^2] [^3]'
title: Parity and Projection from Virtual Knots to Classical Knots
---
MSC: 57M25, 57M27
Keywords: Knot, virtual knot, surface, group, projection, crossing, crossing number, bridge number
Introduction. Basic Notions
===========================
Classical knot theory studies embeddings of a circle (or several circles) into three-space up to isotopy. Virtual knot theory studies the embeddings of curves in thickened oriented surfaces of arbitrary genus, up to the addition and removal of empty handles from the surface. Virtual knots have a special diagrammatic theory, described below, that makes handling them very similar to the handling of classical knot diagrams. Many structures in classical knot theory generalize to the virtual domain directly; however, many others required more elaborate techniques [@MaIl]. Nevertheless, some structures (like Heegaard Floer homology) have not been generalized to virtual knots so far; the existence of a well-defined projection from virtual knot theory to classical knot theory may help in solving such problems.
In the diagrammatic theory of virtual knots one adds a [*virtual crossing*]{} (see Figure \[Figure 1\]) that is neither an overcrossing nor an undercrossing. A virtual crossing is represented by two crossing segments with a small circle placed around the crossing point. Figures \[Figure 1\] and \[Figure 4\] are borrowed from [@KM2].
Note that a classical knot vertex is a $4$-valent graphical node embedded in the plane with extra structure. The extra structure includes the diagrammatic choice of crossing (indicated by a broken segment) and a specific choice of cyclic order (counterclockwise when embedded in the plane) at the vertex. By a [*framing*]{} of a four-valent graph we mean a splitting of the four emanating (half)edges into two pairs of opposite (half)edges. The counterclockwise cyclic order includes more information than just a framing. A virtual knot is completely specified by its $4$-valent nodes with their cyclic structure if the edges incident to the nodes are labeled so that they can be connected by arcs to form the corresponding graph.
Throughout the paper, all knots are assumed oriented. The results of this paper are about virtual knots, as stated; nevertheless, after a small effort they can be upgraded for the case of virtual links.
A [*virtual diagram*]{} is an immersion of a collection of circles into the plane such that some crossings are structured as classical crossings and some are simply labeled as virtual crossings and indicated by a small circle drawn around the crossing. We regard the resulting diagram as a possibly non-planar graph whose only nodes are the classical crossings, with their cyclic structure. Any immersion of such a graph, preserving the cyclic structure at the nodes, will represent the [*same*]{} virtual knot or link. Accordingly, we use the [*detour move*]{} (see below) for arcs with consecutive virtual crossings, so that this equivalence is satisfied. For the projection of the unknot (unlink) without classical crossings we shall also admit a circle instead of a graph; thus, our category of graphs includes the circle.
Immersion of each particular circle from the collection gives rise to a [*component*]{} of a virtual link diagram; virtual link diagrams with one component are [*virtual knot diagrams*]{}; we shall deal mostly with virtual knots and their diagrams, unless specified otherwise; (virtual) [*knots*]{} are one-component (virtual) links.
Moves on virtual diagrams generalize the Reidemeister moves (together with obvious planar isotopy) for classical pieces of knot and link diagrams (Figure \[Figure 1\]). One can summarize the moves on virtual diagrams by saying that the classical crossings interact with one another according to the usual Reidemeister moves while virtual crossings are artifacts of the attempt to draw the virtual structure in the plane. A segment of diagram consisting of a sequence of consecutive virtual crossings can be excised and a new connection made between the resulting free ends. If the new connecting segment intersects the remaining diagram (transversally) then each new intersection is taken to be virtual. Such an excision and reconnection is called a [*detour move*]{}. Adding the global detour move to the Reidemeister moves completes the description of moves on virtual diagrams. In Figure \[Figure 1\] we illustrate a set of local moves involving virtual crossings. The global detour move is a consequence of moves (B) and (C) in Figure \[Figure 1\]. The detour move is illustrated in Figure \[Figure 2\]. Virtual knot and link diagrams that can be connected by a finite sequence of these moves are said to be [*equivalent*]{} or [*virtually isotopic*]{}. A virtual knot is an equivalence class of virtual diagrams under these moves.
--------------------------------------------------------------------
![**Moves**[]{data-label="Figure 1"}](F1.eps "fig:"){width="10cm"}
--------------------------------------------------------------------
--------------------------------------------------------------------------
![**Detour Move**[]{data-label="Figure 2"}](F2.eps "fig:"){width="10cm"}
--------------------------------------------------------------------------
Another way to understand virtual diagrams is to regard them as representatives for oriented Gauss diagrams [@GPV]. The Gauss diagram encodes the information about classical crossings of a knot diagram and the way they are connected. However, not every Gauss diagram has a planar realization.
An attempt to draw the corresponding diagram on the plane leads to the production of the virtual crossings. Gauss diagrams are most convenient for knots, where there is one cycle in the code and one circle in the Gauss diagram. One can work with Gauss diagrams for links with a little bit more care, but we will not touch on this subject.
The detour move makes the particular choice of virtual crossings irrelevant.
[*Virtual isotopy is the same as the equivalence relation generated on the collection of oriented Gauss diagrams by abstract Reidemeister moves on these codes.*]{}
The paper is organized as follows. In the end of the introduction, we present all necessary constructions of Gauss diagrams, band presentation, and parity.
In Section 2, we formulate the main theorem (about projection) and prove it modulo some important auxiliary theorems, one of them due to I.M.Nikonov. We also prove two corollaries from the main theorem.
Section 3 is devoted to the proof of basic lemmas.
In Section 4, we introduce [*parity groups*]{} and discuss other possibilities of constructing projection maps from virtual knots to classical knots.
The paper is concluded by Section 5, where we discuss some obstacles which do not allow us to define the projection uniquely on the diagrammatic level.
Acknowledgements
----------------
I am grateful to L.H.Kauffman, I.M.Nikonov, V.V.Chernov, D.P.Ilyutko for various fruitful discussions.
Gauss diagrams
--------------
A [*Gauss diagram*]{} is a finite trivalent graph which consists of an oriented cycle passing through all vertices (this cycle is called the [*core*]{} of the Gauss diagram) and a collection of oriented edges ([*chords*]{}) connecting crossings to each other. Besides the orientation, every chord is endowed with a sign.
Besides that, we consider the [*empty Gauss diagram*]{}, which is not a graph but an oriented circle; this empty Gauss diagram corresponds to the unknot diagram without crossings.
Let $D$ be a one-component virtual diagram. We associate with it the following Gauss diagram $\G(D)$. Let us represent the framed four-valent graph $\Gamma$ of the diagram $D$ as the result of pasting a closed curve to itself at some points (corresponding to classical crossings) in such a way that the two parts of the neighbourhood of a pasted point are mapped to [*opposite*]{} edges at the crossing.
Thus, we have a map $f:S^{1}\to \Gamma$. For the [*core circle*]{} of the chord diagram we take $S^{1}$, vertices of the chord diagrams are preimages of vertices of $\Gamma$, and chords connect those pairs of vertices having the same image. The orientation of the circle corresponds to the orientation of the knot. Besides, the chord is directed from the preimage of the overcrossing arc to the preimage of an undercrossing arc; the sign of the chord is positive for crossings of type ${\raisebox{-0.25\height}{\includegraphics[width=0.5cm]{skcrro.eps}}}$ and negative for crossings of type ${\raisebox{-0.25\height}{\includegraphics[width=0.5cm]{skcrlo.eps}}}$.
We say that a Gauss diagram is [*classical*]{} if it can be represented by a classical diagram (embedding of a four-valent graph without virtual crossings). In Fig. 3, Reidemeister moves for Gauss diagrams are drawn without indication of signs and arrows. For the Reidemeister-1 move (the upper picture), an addition/removal of a solitary chord of any sign and with any arrow direction is possible. For the Reidemeister-2 move (two middle pictures), the chords $a$ and $b$ should have the same orientation, but different signs.
The formulation of the third Reidemeister move (lowest picture) is left to the reader as an exercise.
Note that the Gauss diagram does not feel the detour move: if two diagrams $K,K'$ are virtually isotopic, then $\G(K)=\G(K')$.
We say that a virtual knot diagram $K_{1}$ is [*smaller*]{} than the diagram $K_{2}$, if the Gauss diagram of $K_{1}$ is obtained from that of $K_{2}$ by a deletion of some chords.
We denote this by $K_{1}<K_{2}$.
As usual, we make no distinction between virtually isotopic diagrams.
This introduces a partial ordering on the set of virtual knot diagrams. The unknot diagram without classical crossings is smaller than any diagram with classical crossings.
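The ordering by chord deletion can be made concrete on Gauss words. The sketch below encodes each chord pass as a (label, over/under, sign) triple read along the core circle; it fixes a basepoint, whereas a full implementation would also account for cyclic shifts:

```python
def delete_chords(word, deleted):
    # remove both passes of every deleted chord from the Gauss word
    return tuple(x for x in word if x[0] not in deleted)

def is_smaller(w1, w2):
    """w1 <= w2 iff the Gauss word w1 arises from w2 by deleting some
    chords.  Entries are (chord label, 'O'/'U', sign) triples; the
    basepoint is fixed (a full check would try cyclic shifts too)."""
    labels1 = {x[0] for x in w1}
    labels2 = {x[0] for x in w2}
    return labels1 <= labels2 and delete_chords(w2, labels2 - labels1) == tuple(w1)
```

For example, deleting chord `c` from the trefoil word yields a smaller diagram, and the empty word (the unknot without classical crossings) is smaller than any word, as stated above.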
-----------------------------------------------------------------------------------------------
![**Reidemeister Moves on Chord Diagrams**[]{data-label="fig3"}](G3.EPSF "fig:"){width="6cm"}
-----------------------------------------------------------------------------------------------
Having a Gauss diagram, one gets a collection of classical crossings with an indication of how they are connected to each other. So, a Gauss diagram leads to a [*virtual equivalence class*]{} of virtual knot diagrams (note that a Gauss diagram carries no information about virtual crossings, so virtually equivalent diagrams lead to the same Gauss diagram).
By a [*bridge*]{} [@CSV] of a Gauss diagram we mean an arc of the core circle between two adjacent arrowtails (with respect to the orientations of the chords of the diagram) containing arrowheads only (possibly, none of them). In the corresponding planar diagram, a [*bridge*]{} is a branch of the knot diagram from an undercrossing to the next undercrossing containing overcrossings and virtual crossings only. Thus, every virtual knot diagram naturally splits into bridges, see Fig. \[trefbrid\].
------------------------------------------------------------------------------------------------------
![**The Trefoil Knot and its Bridges**[]{data-label="trefbrid"}](trefbridge.eps "fig:"){width="6cm"}
------------------------------------------------------------------------------------------------------
The [*bridge number*]{} of a virtual knot diagram is the minimal number of its bridges. Since the bridge number is defined in terms of the Gauss diagram, it does not change under detour moves.
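A minimal sketch of counting bridges from the sequence of crossing passes along the core circle: each bridge ends at an undercrossing pass, so for a knot diagram the count equals the number of under-passes (which also equals the number of chords):

```python
def bridge_count(passes):
    """Bridges of a diagram from its cyclic sequence of crossing passes
    along the core circle ('O' = over, 'U' = under).  Each bridge runs
    from one undercrossing to the next, so the count equals the number
    of 'U' passes; a diagram with no undercrossings is a single bridge."""
    n_under = passes.count('U')
    return n_under if n_under else 1

print(bridge_count("OUOUOU"))  # standard trefoil diagram: 3 bridges
```

The three bridges of the standard trefoil diagram computed here agree with the splitting shown in Fig. \[trefbrid\].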
With this, one can define the [*minimal crossing number*]{} and the [*bridge number*]{} of a virtual knot as the minimum of the crossing numbers (resp., bridge numbers) over all virtual knot diagrams representing the given knot. When we restrict to classical knots, we also have the definitions in which the minima are taken only over classical diagrams.
So, for the crossing number and the bridge number of classical knots, we have two definitions, the [*classical one*]{} and the [*virtual one*]{}. As we shall see in the present paper (Corollaries \[crl1\], \[bridge\]), these two definitions coincide; moreover, any virtual diagram of a classical knot on which the minimal classical crossing number (resp., minimal bridge number) is attained is, in fact, virtually equivalent to a classical one.
Band Presentation of Virtual Knots
----------------------------------
Note that knots in a thickened surface $S_{g}\times I$ are encoded by regular projections on $S_{g}$ with over and undercrossings and no virtual crossings. These diagrams are subject to classical Reidemeister moves which look locally precisely as in the classical case. No detour moves are needed since we have no virtual crossings for such diagrams.
Let ${\cal K}$ be a (class of a) virtual knot, given by some virtual diagram $K$. Let us describe the [*band presentation*]{} of this knot as a knot in a thickened surface (following N.Kamada and N.Kamada [@KK]).
We shall construct a surface $S(K)$ corresponding to the diagram $K$, as follows. First, we construct a surface with boundary corresponding to $K$.
With every classical crossing, we associate a “cross” (upper picture in Fig. \[cr\]), and with every virtual crossing, we associate a pair of “skew” bands (lower part of Fig. \[cr\]).
Connecting these crosses and bands by non-intersecting and non-twisted bands going along the edges of the diagram, we get an oriented $2$-manifold with boundary, to be denoted by $S'(K)$ (the orientation is taken from the plane), see Fig. \[Figure 4\].
(Fig. \[cr\]: a “cross” associated with a classical crossing, upper picture, and a pair of “skew” bands associated with a virtual crossing, lower picture.)
The diagram $K$ can be drawn on the surface $S'(K)$ in a natural way: the arcs of the diagram (which may pass through virtual crossings) go along the middle lines of the bands, and classical (flat) crossings correspond to intersections of middle lines inside the crosses. Thus we get a curve $\delta\subset S'(K)$ (for a link we would get a set of curves). Pasting the boundary components of the manifold $S'(K)$ by discs, we get an oriented manifold $S=S(K)$ without boundary with a curve $\delta$ in it; we call the surface $S(K)$ [*the underlying surface for the diagram $K$*]{}. We call the genus of this surface the [*underlying diagram genus*]{} of the diagram $K$.
We call the connected components of the boundary of $S'(K)$ the [*pasted cycles*]{} or the [*rotating cycles*]{}. Originally, rotating cycles are defined by using a source-sink orientation of $K$, but in this paper we regard them as the boundary components of the oriented surface $S'(K)$, since we handle diagrams which may or may not admit a source-sink orientation. These pasted cycles, treated as collections of vertices, will be used in the sequel for constructing parity groups.
By the [*underlying genus*]{} of a virtual knot we mean the minimum of all underlying genera over all diagrams of this knot.
We say that a diagram $K$ is a [*minimal genus diagram*]{} if the genus of the diagram coincides with the genus of the corresponding knot.
As we shall see, some minimal characteristics of virtual knots can be realized only on minimal genus diagrams.
The detour move does not change the band presentation of the knot at all. As for Reidemeister moves, the first and the third moves do not change the genus of the knot, while an increasing/decreasing second move may increase/decrease the genus of the underlying surface (cause stabilization/destabilization).
To define handle stabilization, regard the knot or link as represented by a diagram $D$ on a surface $S.$ If $C$ is an embedded curve in $S$ that does not intersect the diagram $D$ and cutting along $C$ does not disconnect the surface, then we cut along $C$ and add two disks to fill in the boundary of the cut surface. This is a handle destabilization move that reduces the genus of the surface, yielding a surface $S'$ containing a new diagram $D'.$ The pairs $(S,D)$ and $(S',D')$ represent the same virtual knot or link. The reverse operation that takes $(S',D')$ to $(S,D)$ consists in choosing two disks in $S'$ that are disjoint from $D'$, cutting them out and joining their boundaries by a tube (hence the term handle addition for this direction of stabilization).
--------------------------------------------------------------------------------------
![ Surfaces and Virtual Knots[]{data-label="Figure 4"}](F4.eps "fig:"){width="10cm"}
--------------------------------------------------------------------------------------
We say that two such surface embeddings are [*stably equivalent*]{} if one can be obtained from another by isotopy in the thickened surfaces, homeomorphisms of the surfaces and handle stabilization.
The above description of the band presentation leads to a bijection between virtual knots and stable equivalence classes of embeddings of circles in thickened surfaces.
So, we shall deal with the following two equivalences: the usual one (with (de)stabilisation) and the equivalence without (de)stabilisation which preserves the genus of the underlying surface.
The Kuperberg Theorem says that virtual knots can be studied by using their minimal representatives. More precisely, we have
A minimal genus diagram of a virtual knot ${\cal K}$ is unique up to isotopy; in other words, if two diagrams $K_{1},K_{2}$ are of the minimal genus then there is a sequence of Reidemeister moves from $K_{1}$ to $K_{2}$ such that all intermediate diagrams between $K_{1}$ and $K_{2}$ are of the same genus.
Parity
------
Let ${\cal L}$ be a knot theory, i.e., a theory whose objects are encoded by diagrams (four-valent framed graphs, possibly, with further decorations) modulo the three Reidemeister moves (and the detour move) applied to crossings. For every Reidemeister move transforming a diagram $K$ to a diagram $K_{1}$ there are corresponding crossings: those crossings outside the domain of the Reidemeister move for $K$ are in one-to-one correspondence with those crossings outside the domain of the Reidemeister move for $K_{1}$. Besides, for every third Reidemeister move $K\to K_{1}$ there is a natural correspondence between crossings of $K$ taking part in this move and the resulting crossings of $K_{1}$. By a [*parity*]{} for the knot theory ${\cal L}$ we mean a rule for associating $0$ or $1$ with every (classical) crossing of any diagram $K$ from the theory ${\cal L}$ in a way such that:
1. For every Reidemeister move $K\to K_{1}$ the corresponding crossings have the same parity;
2. For each of the three Reidemeister moves the sum of parities of crossings taking part in this move is zero modulo two.
Now, a [*parity in a weak sense*]{} is defined in the same way as parity but with the second condition relaxed for the case of the third Reidemeister move. We allow three crossings taking part in the third Reidemeister move to be all odd (so for the third Reidemeister move the only forbidden case is when exactly one of three crossings is odd).
We shall deal with parities for [*virtual knots*]{} or for [*knots in a given thickened surface*]{}. In the latter case diagrams are drawn on a $2$-surface and Reidemeister moves are applied to these diagrams; no “stabilizing” Reidemeister moves changing the genus of the surface are allowed.
We say that two chords of a Gauss diagram $a,b$ are [*linked*]{} if two ends of one chord $a$ belong to different connected components of the complement to the endpoints of $b$ in the core circle of the Gauss diagram (it is assumed that no chord is linked with itself). We say that a chord of a Gauss diagram is [*even*]{} (with respect to the [*Gaussian parity*]{}) if it is linked with evenly many chords; otherwise we say that this chord is [*odd*]{} (with respect to the [*Gaussian parity*]{}). We shall say that a classical crossing of a virtual knot diagram is even whenever the corresponding chord is even. One can easily check the parity axioms for the Gaussian parity.
For every parity $p$ for virtual knots (or knots in a specific thickened surface), consider a mapping $pr_{p}:{\cal G}\to {\cal G}$ from the set of Gauss diagrams ${\cal G}$ to itself, defined as follows. For every virtual knot diagram $K$ represented by a Gauss diagram ${\cal G}(K)$ we take $pr_{p}(K)$ to be the virtual knot diagram represented by the Gauss diagram obtained from ${\cal G}(K)$ by deleting odd chords with respect to $p$. At the level of planar diagrams this means that we replace odd crossings by virtual crossings.
The following theorem follows from the definitions, see, e.g., [@Sbornik1].
The mapping $pr_{p}$ is well defined, i.e., if $K$ and $K'$ are equivalent, then so are $pr_{p}(K)$ and $pr_{p}(K')$.
The same is true for every parity in a weak sense as discussed above. \[gsthm\]
Thus, for the Gaussian parity $g$ one has a well-defined projection $pr_{g}$. Note that if $K$ is a virtual knot diagram, then $pr_{g}(K)$ might have odd chords: indeed, some crossings which were even in $K$ may become odd in $pr_{g}(K)$.
However, the map $pr_{g}$ may take diagrams from one theory to another; for example, if we consider equivalent knots lying in a given thickened surface, their images need not be realised in the same surface; they will just be equivalent virtual knots. For virtual knots, this is just a map from virtual knots to virtual knots.
Note that $pr_{g}$ is not an idempotent map. For example, if we take the Gauss diagram with four chords $a,b,c,d$ where $a$ is linked with $b,c$, the chord $b$ is linked with $a,d$, the chord $c$ is linked with $a$, and the chord $d$ is linked with $b$, then after applying $pr_{g}$, we shall get a diagram with two chords $a,b$, and they will both become odd, see Fig. \[notidemp\].
-------------------------------------------------------------------------------------------------------
![The parity projection is not idempotent[]{data-label="notidemp"}](notidemp.eps "fig:"){width="7cm"}
-------------------------------------------------------------------------------------------------------
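The Gaussian parity and the projection $pr_{g}$ are easy to compute directly from a Gauss code (the cyclic sequence of chord labels, each occurring twice). Below is a minimal sketch; the Gauss code `cacbadbd` is a hypothetical realization of the interlacement pattern of the four-chord example above ($a$ linked with $b,c$; $b$ linked with $a,d$), and the function names are ours, not standard.

```python
def linked(code, x, y):
    """Chords x and y are linked iff their endpoints interleave on the circle:
    exactly one endpoint of y lies between the two endpoints of x."""
    i1, i2 = [i for i, s in enumerate(code) if s == x]
    return sum(1 for i in range(i1 + 1, i2) if code[i] == y) == 1

def gaussian_parity(code):
    """Map each chord to 0 (even) or 1 (odd) with respect to the Gaussian parity."""
    chords = sorted(set(code))
    return {x: sum(linked(code, x, y) for y in chords if y != x) % 2
            for x in chords}

def project(code):
    """pr_g at the level of Gauss codes: delete all odd chords."""
    odd = {x for x, p in gaussian_parity(code).items() if p == 1}
    return "".join(s for s in code if s not in odd)

code = "cacbadbd"             # chords a, b are even; c, d are odd
once = project(code)          # "abab": a and b survive, but are now linked
print(gaussian_parity(once))  # both become odd: pr_g is not idempotent
print(project(once))          # a second projection deletes them as well
```

Applying `project` once to `cacbadbd` keeps the two even chords $a,b$, which then become odd; this reproduces the non-idempotence of $pr_{g}$ discussed above.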
Now, let $S_{g}$ be a surface of genus $g$. Fix a cohomology class $\alpha\in H^{1}(S_{g},\Z_{2})$. Let us consider those knots $K$ in $S_{g}$ for which the total homology class of the knot $K$ in $H_{1}(S_{g},\Z_{2})$ is trivial.
With every crossing $v$ of $K$ we associate the two [*halves*]{} $h_{v,1},h_{v,2}$ (elements of the fundamental group $\pi_{1}(S_{g},v)$) as follows. Let us smooth the diagram $K$ at $v$ according to the orientation of $K$. Thus, we get a two-component oriented link; its components, regarded as loops based at $v$, are the halves $h_{v,1}$ and $h_{v,2}$. If $\alpha(h_{v,1})=\alpha(h_{v,2})=0$, we say that the crossing $v$ is [*even*]{}; otherwise we say that it is [*odd*]{}.
In [@IMN] it is proved that this leads to a well-defined parity for knots in $S_{g}\times I$. Thus, every $\Z_{2}$-cohomology class of the surface which evaluates trivially on the knot itself, gives rise to a well-defined parity. We shall call it the [*homological parity*]{}.
Statements of Main Results
==========================
For every Gauss diagram one can decree some chords (crossings) to be [*true classical*]{} (in an ambiguous way, see the discussion in the last section) and remove the other ones, so that the resulting Gauss diagram is classical; this map will give rise to a well-defined projection from virtual knots to classical knots. In Fig. \[vkclasproj\], a virtual knot $A$ is drawn in the left part; its band presentation lies in the thickened torus (see the upper part of the right picture); the four “homologically non-trivial” crossings disappear, which leads to the diagram $D$ (virtually isotopic to the one depicted in the lower picture of the right half). This is the classical trefoil knot diagram.
The aim of the present article is the proof of the following
For every virtual diagram $K$ there exists a classical diagram ${\bar K}$, such that:
1. ${\bar K}<K$;
2. ${\bar K}=K$ if and only if $K$ is classical.
3. If $K_{1}$ and $K_{2}$ are equivalent virtual knots, then so are ${\bar K_{1}}$ and ${\bar K_{2}}$.
4. The map restricted to non-classical knots is a surjection onto the set of all classical knots.
\[mainthm\]
The discrimination between “true classical” crossings and those crossings which will become virtual is of the topological nature, as we shall see in the proof of Theorem \[mainthm\].
As usual, we make no distinction between virtually isotopic diagrams: a virtual diagram is said to be [*classical*]{} if the corresponding Gauss diagram represents a classical knot.
Thus, it makes sense to speak about a map from the set of virtual knots to the set of classical knots. This map will be useful for lifting invariants from virtual knots to classical knots.
We shall denote this map by $K\to f(K)$ where $K$ means the knot type represented by $K$, and $f(K)$ means the resulting knot type of the corresponding classical knots.
The only statement of the theorem which deals with diagrams of knots which are not classical is 4). Otherwise we could just project all diagrams which do not represent classical knots to the unknot diagram (without classical crossings), and the functorial map would be rather trivial.
Nevertheless, as we shall see, one can construct various maps of this sort. Different proofs of Theorem \[mainthm\] can be used for constructing various functorial maps and establishing properties of knot invariants.
A desired projection would be one which is well defined at the level of Gauss diagrams and such that whenever two diagrams are connected by a Reidemeister move, their images are connected by the same Reidemeister move or by a detour move. Unfortunately, such projections seem not to exist (see the discussion at the end of the paper); see also Nikonov’s Lemma (Theorem \[lmnik\]).
For example, based on the notion of weak parity and parity groups, we shall construct another projection satisfying the conditions of Theorem \[mainthm\]; the construction will not consist of two steps as in the case when Nikonov’s lemma is applied; however, this map will “save” more classical crossings.
From Theorem \[mainthm\] we have the following two corollaries
Let ${\cal K}$ be an isotopy class of a classical knot. Then the minimal number of classical crossings for virtual diagrams of ${\cal K}$ is realized on classical diagrams (and those obtained from them by the detour move). For every non-classical diagram realizing a knot from ${\cal K}$, the number of classical crossings is strictly greater than the minimal number of classical crossings.
Moreover, minimal classical crossing number of a non-classical virtual knot is realized only on minimal genus diagrams. \[crl1\]
Indeed, the projection map from the main theorem decreases the number of classical crossings, and preserves the knot type.
The observation that the following corollary is a consequence of Theorem \[mainthm\] is due to V.V. Chernov (Tchernov).
Let ${\cal K}$ be a classical knot class. Then the bridge number for the class ${\cal K}$ can be realized on classical diagrams of ${\cal K}$ only.
Moreover, the minimal bridge number of a non-classical virtual knot is realized on minimal genus diagrams (here we do not claim that it cannot be realized on non-classical diagrams).
\[bridge\]
Indeed, it suffices to see that if $K'<K$ then $br(K')\le br(K)$: when replacing a classical crossing with a virtual crossing, the number of bridges cannot increase; it can only decrease, because two bridges can join to form one bridge.\[crl2\]
We do not claim that the diagram $K'$ representing the class $f(K)$ is unique. In fact, we shall construct many maps satisfying the conditions of Theorem \[mainthm\]. To what extent the diagram $K'$ is determined uniquely by the diagram $K$ is discussed in the last section of the present work.
Theorem \[mainthm\] allows one to lift invariants of classical knots to virtual knots. The straightforward way to do it is to compose the projection with the invariant in question. However, there is another way of doing it where crossings which are not classical are not completely forgotten (made virtual) but are treated in another way than the usual “true classical” crossings. In similar cases, when the projection is well defined at the level of diagrams, this was done in [@Sbornik1; @Af] etc.: in these papers a distinction between even and odd crossings was taken into account to refine many known invariants (note that, according to the parity projection map, one can completely disregard odd crossings; on the other hand, they can be treated as classical crossings, as they were from the very beginning).
Theorem \[mainthm\] is proved in two steps.
Let $K$ be a virtual diagram, whose underlying diagram genus is not minimal in the class of the knot $K$. Then there exists a diagram $K'<K$ in the same knot class. \[lmkey\]
\[I.M.Nikonov\] There is a map $pr$ from minimal genus virtual knot diagrams to classical knot diagrams such that for every knot $K$ we have $pr(K)<K$ and if two diagrams $K_{1}$ and $K_{2}$ are related by a Reidemeister move (performed within the given minimal genus diagram) then their images $pr(K_{1})$ and $pr(K_{2})$ are related by a Reidemeister move. \[lmnik\]
We shall construct the projection map in two steps.
Let $K$ be a virtual knot diagram. If $K$ is of minimal genus, then we take ${\bar K}$ to be just $pr(K)$ as in Theorem \[lmnik\]. Otherwise take a diagram $K'$ instead of $K$ as in Theorem \[lmkey\]. It is of the same knot type as $K$. If the genus of the resulting diagram is still not minimal, we iterate the operation $K\mapsto K'$ until we get a diagram $K''$ of minimal genus which represents the class of $K$ and satisfies $K''<K$. Now, set ${\bar K}=pr(K'')$.
One can easily see that if we insert a small classical knot $L$ inside an edge of a diagram of $K$, then $f(K\# L)=f(K)\# f(L)$. So, the last statement of the theorem holds as well.
Proofs of Key Theorems
======================
The Proof of Theorem \[lmkey\]
------------------------------
Let $K$ be a virtual knot diagram on a surface $S_{g}$ of genus $g$. Assume this genus is not minimal for the knot class of $K$. Then by Kuperberg’s theorem it follows that there is a diagram ${\tilde K}$ on $S_{g}$ representing the same knot as $K$ and a curve $\gamma$ on $S_{g}$ such that ${\tilde K}$ does not intersect $\gamma$. Indeed, if there were no such diagram ${\tilde K}$, the knot in $S_{g}\times I$ corresponding to the diagram $K$ would admit no destabilization, and the genus $g$ would be minimal.
The curve $\gamma$ gives rise to a (co)homological parity for knots in $S_{g}$ homotopic to $K$: a crossing is [*even*]{} if the number of intersections of either of the corresponding halves with $\gamma$ is even, and [*odd*]{} otherwise.
Since $K$ has underlying diagram genus $g$, there exists at least one odd crossing of the diagram $K$. Let $K'$ be the result of the $\gamma$-parity projection applied to $K$. We have $K'<K$.
By construction, all crossings of ${\tilde K}$ are even.
Let us construct a chain of Reidemeister moves from $K$ to ${\tilde K}$ and apply the $\gamma$-parity projection to it.
We shall get a chain of Reidemeister moves connecting $K'$ to ${\tilde K}$. So, $K'$ is of the same type as ${\tilde K}$ and $K$. The claim follows.
The Proof of Theorem \[lmnik\]
------------------------------
Let us construct the projection announced in Theorem \[lmnik\]. Fix a $2$-surface $S_{g}$. Let us consider knots in the thickening of $S_{g}$ for which genus $g$ is minimal (that is, there is no representative of lower genus for knots in question). Let $K$ be a diagram of such a knot. We shall denote crossings of knot diagrams in $S_{g}$ and the corresponding points on $S_{g}$ itself by the same letter (abusing notation).
As above, with every crossing $v$ of $K$ we associate the two [*halves*]{} $h_{v,1},h_{v,2}$, now considered as elements of the fundamental group $\pi_{1}(S_{g},v)$, as follows. Let us smooth the diagram $K$ at $v$ according to the orientation of $K$. Thus, we get a two-component oriented link with components $h_{v,1},h_{v,2}$. Consider every component of this link represented as a loop in $\pi_{1}(S_{g},v)$ and denote them again by $h_{v,1},h_{v,2}$.
Let $\gamma_{v},{\bar \gamma_{v}}$ be the two homotopy classes of the knot $K$ considered as an element of $\pi_{1}(S_{g},v)$: we have two classes because we can start traversing the knot along each of the two edges emanating from $v$. Note that $h_{v,1}\cdot h_{v,2}=\gamma_{v}$ and $h_{v,2}\cdot h_{v,1}={\bar \gamma_{v}}$.
Let us now construct a knot diagram $pr(K)$ from $K$ as follows. If for a crossing $v$ we have $h_{v,1}=\gamma_{v}^{k}$ for some $k$ (or, equivalently, $h_{v,2}=\gamma_{v}^{1-k}$), then this crossing remains classical for $pr(K)$; otherwise, the crossing becomes virtual. Note that it is immaterial whether we take $\gamma_{v}$ or ${\bar \gamma_{v}}$, because if $h_{v,1}$ and $h_{v,2}$ are powers of the same element of the fundamental group, then they obviously commute, which means that $\gamma_{v}={\bar {\gamma_{v}}}$.
1. For every $K$ as above, $pr(K)$ is a classical diagram;
2. $K=pr(K)$ whenever $K$ is classical
3. If $K_{1}$ and $K_{2}$ differ by a Reidemeister move then $pr(K_{1})$ and $pr(K_{2})$ differ by either a detour move or by a Reidemeister move.
Take $K$ as above and consider $pr(K)$. By construction, all “halves” of all crossings of $pr(K)$ are powers of the same homotopy class. We claim that the underlying surface for $pr(K)$ is a $2$-sphere. Indeed, when constructing a band presentation for $pr(K)$, we see that the surface with boundary has a cyclic first homology group. This happens only for the disc or the cylinder; in both cases, the corresponding closed surface is $S^{2}$.
The situation with the first Reidemeister move is obvious: the new added crossing has one trivial half and the other half equal to the homotopy class of the knot itself.
Now, to prove the last statement, we have to look carefully at the second and the third Reidemeister moves. Namely, if some two crossings $A$ and $B$ participate in a second Reidemeister move, then we have an obvious one-to-one correspondence between their halves such that whenever one half corresponding to $A$ is a power of $\gamma$, so is the corresponding half of $B$.
So, they either both survive in $pr(A),pr(B)$ (do not become virtual) or they both turn into virtual crossings. So, for $pr(A),pr(B)$ we get either the second Reidemeister move, or the detour move. Note that here we deal with the second Reidemeister move which does not change the underlying surface.
-------------------------------------------------------------------------------------------------------------------------------
![Triviality of two crossings yields the triviality of the third one[]{data-label="NikonovFig"}](nf.eps "fig:"){width="10cm"}
-------------------------------------------------------------------------------------------------------------------------------
Now, let us turn to the third Reidemeister move from $K$ to $K'$, and let $(A,B,C)$ and $(A',B',C')$ be the corresponding triples of crossings. We see that the homotopy classes of the halves of $A$ are exactly those of $A'$, and the same holds for $B,B'$ and $C,C'$. So, the only fact we have to check is that the number of surviving crossings among $A,B,C$ is not equal to two (the crossings from the list $A',B',C'$ survive accordingly). This follows from Fig. \[NikonovFig\].
Indeed, without loss of generality assume $A$ and $B$ survive. This means that the class $h_{A,1}$ is a power of the class of the whole knot in the fundamental group with the reference point at $A$, and $h_{B,1}$ is a power of the class of the knot with the reference point at $B$.
Let us now investigate $h_{C,1}$ (for convenience we have chosen $h_{C,1}$ to be the upper right part of the figure).
We see that $h_{C,1}$ consists of the following paths: $(ca) h_{A,1}(ab)h_{B,1}(cb)^{-1}$, where $(ca), (ab), (cb)$ are non-closed paths connecting the points $A$, $B$, and $C$. Now, we can homotop this loop to $(ca)h_{A,1}(ca)^{-1}(ca)(ab)h_{B,1}(cb)^{-1}$ and then to the product of $(ca)h_{A,1}(ca)^{-1}$ and $(cb)h_{B,1}(cb)^{-1}$.
We claim that these two loops are homotopic to $\gamma_{C}^{l}$ and $\gamma_{C}^{m}$ for some exponents $l,m$. Indeed, $h_{A,1}$ is $\gamma_{A}^{k}$ by assumption. Now, it remains to observe that in order to get from $\gamma_{A}$ to $\gamma_{C}$, it suffices to “conjugate” by a path along the knot; one can choose $(ca)$ as such a path. The same holds for $h_{B,1}$.
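The conjugation step is a routine check; assuming, as above, $h_{A,1}=\gamma_{A}^{k}$ and writing $\gamma_{C}=(ca)\gamma_{A}(ca)^{-1}$ for conjugation by the connecting path, we get
$$(ca)\,h_{A,1}\,(ca)^{-1}=(ca)\,\gamma_{A}^{k}\,(ca)^{-1}=\bigl((ca)\,\gamma_{A}\,(ca)^{-1}\bigr)^{k}=\gamma_{C}^{\,k}.$$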
So, if all crossings $A,B,C$ survive in $pr(K)$ and $A',B',C'$ survive in $pr(K')$, then $pr(K')$ differs from $pr(K)$ by a third Reidemeister move. If at most one of $A,B,C$ survives, then we have a detour move from $pr(K)$ to $pr(K')$.
The Parity Group, One More Projection, and Connected Sums
=========================================================
In the above text, we have defined parity as a way of decorating crossings by elements of $\Z_{2}$. It turns out that there is a way to construct an analogue of parity valued in more complicated objects, namely, in groups, depending on the knot diagram. Such “group-valued” parities can be also used for projections, see, e.g., [@IMN].
This group-valued parity can be thought of as a parity in a weak sense: a crossing is even if the corresponding element of the parity group is trivial, and odd otherwise.
However, this can be done for diagrams of some specific genus only.
Let $D$ be a virtual diagram of genus $g$. Now, let us construct the [*universal parity group*]{} $G(D)$. Note that this group will be “universal” only for a specific genus.
Recall that pasted cycles appear in the band presentation of a virtual knot diagram as the boundary cycles of the surface, to be pasted by discs. Every such cycle can be treated as a $1$-cycle in the $1$-frame of the knot diagram graph; the graph itself consists of classical crossings (vertices) and edges between them. Thus, every pasted cycle $C$ gives rise to the collection of classical crossings it touches.
We shall use additive notation for this group. As generators of $G(D)$ we take the crossings of the diagram $D$. We impose two sorts of relations:
1. $2a_{i}=0$ for every crossing $a_{i}$;
2. The sum of the crossings belonging to any pasted cycle is zero (recall that a pasted cycle is just a rotating cycle on the $4$-valent graph which is the shadow of the knot).
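Since every generator satisfies $2a_{i}=0$, the group $G(D)$ is just a $\Z_{2}$-vector space: the quotient of $\Z_{2}^{n}$ ($n$ being the number of crossings) by the span of the pasted-cycle relations, and a crossing is even (in the weak sense defined below) iff its generator lies in that span. The sketch below computes this by Gaussian elimination over $GF(2)$; the crossings `v1, v2, v3` and the pasted cycles are a made-up toy example, not a diagram from the text.

```python
def reduce_gf2(v, basis):
    """Reduce the bitmask v by an echelon basis (dict: pivot bit -> row) over GF(2)."""
    while v:
        p = v.bit_length() - 1
        if p not in basis:
            return v          # top bit has no pivot: v is not in the span
        v ^= basis[p]
    return 0

def parity_group(crossings, pasted_cycles):
    """Return (|G(D)|, parity dict) for relations 2a_i = 0 plus cycle relations."""
    basis = {}
    for cyc in pasted_cycles:
        r = reduce_gf2(sum(1 << crossings.index(c) for c in cyc), basis)
        if r:
            basis[r.bit_length() - 1] = r
    order = 2 ** (len(crossings) - len(basis))
    parity = {c: (0 if reduce_gf2(1 << crossings.index(c), basis) == 0 else 1)
              for c in crossings}
    return order, parity

crossings = ["v1", "v2", "v3"]           # hypothetical crossings
cycles = [["v1"], ["v1", "v2", "v3"]]    # hypothetical pasted cycles
order, parity = parity_group(crossings, cycles)
print(order, parity)   # G(D) has order 2; v1 is even, v2 and v3 are odd
```

With these toy relations the two cycles span a rank-$2$ subspace of $\Z_{2}^{3}$, so $G(D)$ has order $2$, and only the generator of `v1` lies in the span of the relations.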
It is obvious that for a classical knot diagram $D$ the group $G(D)$ is trivial (this also follows from Theorem \[thth\] below).
Denote the element of the group $G$ corresponding to a crossing $x$ of the knot diagram, by $g(x)$.
In [@IMN] it is proved that the parity group gives rise to a parity in a weak sense: all crossings for which the corresponding element of the group is trivial are thought of as [*even*]{} crossings, and the other ones are thought of as [*odd*]{} crossings. Thus, we get the following
For a virtual diagram $D$ with underlying surface $S_{g}$ of genus $g$, the group $G(D)$ is the quotient of $H_{1}(S_{g},\Z_{2})$ by the subgroup generated by the class of the knot. In particular, if $D$ is a checkerboard colourable diagram, then $G(D)=H_{1}(S_{g},\Z_{2})$.
In particular, if $D_{1}$ and $D_{2}$ are diagrams which are equivalent without stabilization, then $G(D_{1})=G(D_{2})$.
\[thth\]
To prove the theorem, it suffices to associate with every crossing $x$ any of the two halves $h_{x,1}$ or $h_{x,2}$ and consider them as elements of the above mentioned quotient group.
A careful look at the formulation of Theorem \[thth\] shows that:
1. If a crossing $x$ corresponds to the first Reidemeister move, then the corresponding element of the quotient group is equal to zero.
2. If two crossings $x,y$ participate in the second Reidemeister move, then the corresponding elements of the group $G(D)$ are equal to each other.
3. If three crossings $a,b,c$ participate in a third Reidemeister move then $h_{a}+h_{b}+h_{c}=0$ in $G$.
Thus, the map to the group $G$ gives rise to the [*parity in a weak sense*]{}, which means, in particular, that there is a well-defined projection from knots in $S_{g}\times I$ to virtual knots.
Let $K$ be a knot diagram in $S_{g}$. Consider $K$ as a virtual knot diagram (up to virtual equivalence). Now, let $l(K)$ be the diagram obtained from $K$ by making those crossings $x$ of $K$ virtual for which $g(x)\neq 0\in G(K)$.
If $K$ and $K_{1}$ are two diagrams of knots in $S_{g}\times I$ which differ by one Reidemeister move, then $l(K)$ and $l(K_{1})$ either differ by the same Reidemeister move, or coincide (are virtually equivalent).
Moreover $l(K)$ is (virtually equivalent to) $K$ if and only if $K$ is (virtually equivalent to) a classical knot.
The proof follows from the general argument concerning parity in a weak sense.
One more projection
-------------------
Let us now give one more proof of Theorem \[mainthm\]. In fact, the map $f$ from our original proof of Theorem \[mainthm\] kills too many classical crossings.
For example, consider the classical trefoil diagram with three “black boxes” shown in Fig. \[trefblackbox\]. Assume every black box represents a virtual knot diagram lying inside its minimal representative which is homologically trivial in the corresponding $2$-surface, and we put these diagrams into the boxes after splitting them at some points. Then, if these diagrams are complicated enough, all three middle classical crossings will become virtual after applying Nikonov’s projection.
![Classical trefoil with black boxes[]{data-label="trefblackbox"}](trefblackbox.eps){width="200pt"}
On the other hand, since these three virtual knots are homologically trivial, their presence does not affect the homological triviality of the three crossings depicted in Fig. \[trefblackbox\]. This motivates the search for another projection satisfying the conditions of Theorem \[mainthm\] which does not kill the three crossings depicted in this figure.
The reason is that the Nikonov projection is very restrictive and makes many classical crossings virtual.
Let us now construct another map $g$ from virtual knots to classical knots satisfying all conditions of Theorem \[mainthm\].
Take a virtual knot diagram $K$. If it is not a minimal genus diagram, apply Theorem \[lmkey\]. We get a diagram $K'$. If $K'$ is not yet of minimal genus, apply Theorem \[lmkey\] again until we get a minimal genus diagram. Take this minimal genus diagram $K_{m}$ and apply the projection with respect to the parity group. Then (if necessary) we again iterate Theorem \[lmkey\] to get to a minimal genus diagram, and then apply the parity projection once more.
At every step we have a mapping which is well defined on the classes of knots: Theorem \[lmkey\] does not change the class of the knot at all, and the group parity projection is well defined once we know that we are at the minimal genus.
The resulting diagram will be classical. Denote it by $g(K)$.
The reader can easily find virtual knots (1-1 tangles) to be inserted in Fig. \[trefblackbox\], so that for the resulting knot $K$, the projection $g(K)$ gives the trefoil knot, whereas the projection $f(K)$ gives the unknot.
For exact definitions of connected sums, see [@MyNewBook; @KM].
The map $g$ takes connected sum of virtual knots to connected sums of classical knots.
Of course, there are ways to mix the approaches described in the present paper to construct further projections satisfying the conditions of Theorem \[mainthm\].
An interesting question is to find “the most careful” projection satisfying all conditions of Theorem \[mainthm\] which preserves more classical data.
Problems with the existence of a well defined map on diagrams
=============================================================
Consider the virtual knot diagram $A$ drawn in the left picture of Fig. \[vkclasproj\]. If we seek a projection satisfying the conditions of the Main Theorem, we may project $A$ to $D$ in the same picture (lower right). Note that $A$ is not classical. However, the two intermediate knots ($B$ and $C$) are both classical: they are drawn on the torus, but they both fit into a cylinder, and hence into the plane; so, they will project to themselves.
There is no obvious reason why the projection of $A$ should be exactly $D$ because both $B$ and $C$ are classical; on the other hand there is no obvious way to make a preferred choice between $B$ and $C$ if one decides to take them to be the result of projection of $A$.
So, a bigger diagram projects to a smaller one (we see that $A>B$, $A>C$ but $B>D$, $C>D$). This lack of naturality does not allow one to make the projection compatible with Reidemeister moves. Of course, $A$ differs from $B$ by one Reidemeister move, as do their images $D$ and $B$, but in the first case the move is decreasing, and in the second case it is increasing.
This is also the reason for the ambiguity: in fact, one can also project $A$ to $B$ or to $C$, since both these diagrams are classical.
--------------------------------------------------------------------------------------------------------------
![Virtual Knot and Its Classical Projection[]{data-label="vkclasproj"}](vkclasproj.eps "fig:"){width="10cm"}
--------------------------------------------------------------------------------------------------------------
[100]{}
D. M. Afanas’ev, “Refining virtual knot invariants by means of parity”, [*Sbornik: Mathematics*]{}, [**201**]{}:6, 785–800 (2010) (original Russian text in [*Matematicheskii Sbornik*]{}, [**201**]{}:6, 3–18 (2010)).
M.Chrisman, V.O.Manturov, (2010) Combinatorial Formulae for Finite-Type Invariants via Parities, arXiv:math.GT$\slash$1002.0539
A. Stoimenow, V. Tchernov (Chernov), A. Vdovina: “The canonical genus of a classical and virtual knot”, MPI preprint 108 (2000) Geom. Dedicata 95 (2002), 215-225.
R. Fenn, L.H.Kauffman, and V.O. Manturov (2005), Virtual knot theory — unsolved problems, [*Fundamenta Mathematicae*]{}, 188, pp. 293-323.
Goussarov M., Polyak M., and Viro O. (2000), Finite type invariants of classical and virtual knots, [*Topology*]{}, [**39**]{}. pp. 1045–1068.
Hass, J. and Scott, P. (1994). Shortening curves on surfaces, *Topology* **33**, 1, pp. 25–43.
Ilyutko, D.P., Nikonov, I.M., Manturov, V.O., Virtual Knot Invariants Arising From Parities, arXiv:math.GT$\slash$1102.5081.
N.Kamada and S.Kamada (2000), Abstract link diagrams and virtual knots, [*J. Knot Theory & Ramifications*]{}, [**9**]{}, P. 93-106.
L.H. Kauffman (1999), Virtual knot theory, [*European Journal of Combinatorics*]{} [**20**]{}:7 , P. 662–690.
L. H. Kauffman, V.O. Manturov (2006), Virtual knots and links, [*Proceedings of the Steklov Mathematical Institute*]{}, [**252**]{}, P. 104-121.
L. H. Kauffman, V. O. Manturov, A graphical construction of the $sl(3)$-invariant for virtual knots, arXiv:1207.0719.
Kuperberg, G. (2002), What is a Virtual Link?, www.arXiv.org, math-GT$\slash$0208039, [*Algebraic and Geometric Topology*]{}, 2003, [**3**]{}, 587-591.
V.O. Manturov (2010), Parity in Knot Theory, [*Sbornik: Mathematics*]{}, [**201**]{}:5, P. 65-110.
V.O. Manturov, Parity and Cobordisms of Free Knots, [*Sbornik Mathematics*]{}, to appear. See also: arXiv:math.GT$\slash$1001.2728.
V.O. Manturov (2004), Long virtual knots and their invariants, [*Journal of Knot Theory and Its Ramifications*]{}, [**13**]{} (8), pp. 1029-1039.
V.O. Manturov, Free Knots and Parity (2011), arXiv:math.GT$\slash$09125348, v.1., to appear in: [*Proceedings of the Advanced Summer School on Knot Theory, Trieste*]{}, Series of Knots and Everything, World Scientific.
V.O. Manturov (2005), [*Teoriya Uzlov*]{} (Knot Theory, in Russian), M.-Izhevsk., RCD, 2005, 512 pp.
V.O. Manturov (2010), [*Virtual’nye Uzly. Sovremennoe sostoyanie teorii*]{} (Virtual Knots: The State of the Art, in Russian), M.-Izhevsk., RCD, 490 pp.
V.O. Manturov and D.P. Ilyutko (2012), [*Virtual Knots: The State of The Art*]{}, World Scientific, Series on Knots and Everything, vol. 51, 546 pp.
V.O. Manturov (2007), Khovanov homology for virtual knots with arbitrary coefficients, [*Izvestiya: Mathematics*]{}, [**71**]{}:5, P. 111–148.
[^1]: Peoples’ Friendship University of Russia, Moscow 117198, Ordjonikidze St., 3
[^2]:
[^3]: Partially supported by grants of the Russian Government 11.G34.31.0053, RF President NSh – 1410.2012.1, Ministry of Education and Science of the Russian Federation 14.740.11.0794.
---
abstract: 'A simple exact analytical solution of the relativistic Duffin-Kemmer-Petiau equation within the framework of the asymptotic iteration method is presented. Exact bound state energy eigenvalues and corresponding eigenfunctions are determined for the relativistic harmonic oscillator as well as the Coulomb potentials. As a non-trivial example, the anharmonic oscillator is solved and the energy eigenvalues are obtained within the perturbation theory using the asymptotic iteration method.'
author:
- 'I. Boztosun, M. Karakoc, F. Yasuk and A. Durmus'
title: 'Asymptotic Iteration Method Solutions to the Relativistic Duffin-Kemmer-Petiau Equation'
---
Introduction
============
Exact analytical solutions to relativistic wave equations are important in relativistic quantum mechanics since the wave function contains all the necessary information to describe a quantum system fully. There are only a few potentials for which the relativistic Dirac, Klein-Gordon and Duffin-Kemmer-Petiau (DKP) equations can be solved analytically. So far, many methods such as the supersymmetric (SUSY) method [@1], shape invariance [@2; @3], factorization and path integral [@4; @5; @6; @7] *etc.* have been developed to solve the relativistic wave equations exactly, or quasi-exactly, for potentials like the Coulomb, harmonic oscillator, Pöschl-Teller and exponential-type ones. In recent years, an asymptotic iteration method for solving second-order homogeneous linear differential equations has been proposed by Ciftci *et al.* [@8; @9; @10]. This method has been applied to solve the non-relativistic radial Schrödinger equation and the Dirac equation for various potentials [@10].
Since the DKP equation is being increasingly used to describe the interactions of relativistic spin-0 and spin-1 bosons [@11; @12; @13; @14; @15; @16; @17; @18; @19], it would be interesting to probe whether the DKP equation is amenable to exact solutions in the framework of the asymptotic iteration method (AIM). This is precisely the aim of this paper.
In the next section, we explain the AIM briefly and show how to solve a second-order homogeneous differential equation. Then, we introduce the DKP oscillator and Coulomb problems and obtain their exact eigenvalues and eigenfunctions. In section \[anharmonic\], we present the solution of the anharmonic oscillator as a nontrivial example within the perturbation theory. Finally, in the last section, we provide our summary and conclusion.
Basic Equations of the Asymptotic Iteration Method (AIM) {#aim}
========================================================
We briefly outline the asymptotic iteration method here; the details can be found in references [@8; @9; @10]. The asymptotic iteration method was proposed to solve second-order differential equations of the form $$\label{diff}
y''=\lambda_{0}(x)y'+s_{0}(x)y$$
where $\lambda_{0}(x)\neq 0$ and the coefficient functions $s_{0}(x)$ and $\lambda_{0}(x)$ are sufficiently differentiable, belonging to C$_{\infty}$(a,b). The differential equation (\[diff\]) has a general solution [@8]
$$\label{generalsolution}
y(x)=exp \left( - \int^{x} \alpha dx^{'}\right ) \left [C_{2}+C_{1}
\int^{x}exp \left( \int^{x^{'}} \lambda_{0}(x^{''})+2\alpha(x^{''}) dx^{''} \right ) dx^{'} \right
]$$
provided that, for sufficiently large $n>0$,
$$\label{quantization}
\frac{s_{n}}{\lambda_{n}}=\frac{s_{n-1}}{\lambda_{n-1}}=\alpha$$
where
$$\label{iter}
\lambda_{n}=\lambda_{n-1}'+s_{n-1}+\lambda_{0}\lambda_{n-1}\hspace{1cm} \mbox{and} \hspace{1cm}
s_{n}=s_{n-1}'+s_{0}\lambda_{n-1}$$
The quantization condition of the method together with equation (\[iter\]) can also be written as follows
$$\label{kuantization}
\delta(x)=\lambda_{n+1}(x)s_{n}(x)-\lambda_{n}(x)s_{n+1}(x)=0$$
For a given potential, the idea is to convert the relativistic wave equation to the form of equation (\[diff\]). Then, $s_{0}$ and $\lambda_{0}$ are determined, and the $s_{n}$ and $\lambda_{n}$ parameters are calculated from equation (\[iter\]). The energy eigenvalues are then obtained from the condition given by equation (\[kuantization\]), while the wave functions are determined by using the wave function generator, namely $\exp \left( - \int^{x} \alpha \, dx^{'}\right )$.
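To make this procedure concrete, here is a minimal sympy sketch of the AIM recursion and quantization condition (our illustration, not part of the original derivation). Instead of the DKP equation it is checked on the simplest textbook case: with the ansatz $\psi = e^{-x^2/2}y$, the one-dimensional harmonic oscillator Schrödinger equation becomes $y'' = 2xy' + (1-E)y$, so $\lambda_0 = 2x$, $s_0 = 1-E$, and the condition $\delta_n = 0$ should reproduce $E = 1, 3, 5, \ldots$

```python
import sympy as sp

x, E = sp.symbols('x E')

def aim_roots(lam0, s0, n_iter):
    """Iterate lam_n = lam'_{n-1} + s_{n-1} + lam0*lam_{n-1} and
    s_n = s'_{n-1} + s0*lam_{n-1}, then solve the quantization
    condition delta_n = lam_{n+1}*s_n - lam_n*s_{n+1} = 0 for E."""
    lam_n, s_n = lam0, s0
    for _ in range(n_iter):
        lam_n, s_n = (sp.diff(lam_n, x) + s_n + lam0*lam_n,
                      sp.diff(s_n, x) + s0*lam_n)
    lam_next = sp.diff(lam_n, x) + s_n + lam0*lam_n
    s_next = sp.diff(s_n, x) + s0*lam_n
    delta = sp.expand(lam_next*s_n - lam_n*s_next)
    # For exactly solvable cases delta factorizes independently of x,
    # so it can be evaluated at an arbitrary point (here x = 1).
    return sorted(sp.solve(delta.subs(x, 1), E))

# Harmonic oscillator check: lam0 = 2x, s0 = 1 - E
print(aim_roots(2*x, 1 - E, 1))   # -> [1, 3, 5]
```

Note that each extra iteration enlarges the set of eigenvalues captured by $\delta_n = 0$, which is exactly the behavior exploited in the sections below.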
In this study, we seek the exact solution of DKP equation for which the relevant second order homogenous linear differential equation takes the following general form, $${y}'' = 2\left( {\frac{ax^{N + 1}}{1 - bx^{N + 2}} -
\frac{\left( {m + 1} \right)}{x}} \right){y}' - \frac{wx^N}{1 -
bx^{N + 2}}y$$ If this equation is compared to equation (\[diff\]), it entails the following expressions $$\label{snln}
\lambda _0 = 2\left( {\frac{ax^{N + 1}}{1 - bx^{N + 2}} -
\frac{\left( {m + 1} \right)}{x}} \right) \hspace{1cm} s_0 (x) =
- \frac{wx^N}{1 - bx^{N + 2}}$$ while the condition (\[quantization\]) yields for $N$=-1,0,1,2,3,.... $$\begin{aligned}
w_n^m (-1) & = & n\left( {2a + 2bm + (n + 1)b} \right)\\
w_n^m (0) & = & 2n\left( {2a + 2bm + (2n + 1)b} \right) \\
w_n^m (1) & = & 3n\left( {2a + 2bm + (3n + 1)b} \right) \\
w_n^m (2) & = & 4n\left( {2a + 2bm + (4n + 1)b} \right) \\
w_n^m (3) & = & 5n\left( {2a + 2bm + (5n + 1)b} \right) \\
\ldots \emph{etc} \nonumber\end{aligned}$$ Hence, these formulae are easily generalized as; $$w_n^m (N) = b\left( {N + 2} \right)^2n\left( {n + \frac{\left( {2m
+ 1} \right)b + 2a}{\left( {N + 2} \right)b}} \right)$$ The exact eigenfunctions can be derived from the following generator: $$\label{ef}
y_n (x) = C_2 \exp \left( { - \int\limits^x {\alpha _k dx^{'}} }
\right)$$ Using equation (\[quantization\]) and equation (\[snln\]), the eigenfunctions are obtained as follows;
$$\begin{aligned}
y_0 (x) & = & 1 \\
y_1 (x) & = & - C_2 (N + 2)\sigma \left( {1-\frac{b\left( {\rho + 1}\right)}{\sigma }x^{N + 2}} \right) \\
\\
y_2 (x) & = & C_2 (N + 2)^2\sigma \left( {\sigma + 1} \right)\left(
{1 - \frac{2b\left( {\rho + 2} \right)}{\sigma }x^{N + 2} +
\frac{b^2\left( {\rho + 2} \right)\left( {\rho + 3} \right)}{\sigma
\left( {\sigma + 1} \right)}x^{2(N + 2)}} \right)
\\
y_3(x) & = & - C_2\frac{\sigma\left({\sigma+1}\right) \left({\sigma+2}
\right)}{\left({N+2}\right)^{-3}} \nonumber \\ & \times & \left
({1-\frac{3b\left( {\rho + 3}\right)} \sigma x^{N+2}+
\frac{3b^2\left({\rho+3}\right)\left(
{\rho+4}\right)}{\sigma\left({\sigma+1} \right)}x^{2(N+2)}
-\frac{b^3\left({\rho+3}\right)\left({\rho+4}\right)\left({\rho+ 5}
\right)}{\rho \left( {\rho + 1} \right)\left( {\rho + 2}
\right)}x^{3\left( {N + 2} \right)}} \right ) \nonumber \\
\ldots \emph{etc}\end{aligned}$$
Finally, the following general formula for the exact solutions $y_n(x)$ is acquired as; $$\label{efson}
y_n (x) = \left( { - 1} \right)^nC_2 (N + 2)^n\left( \sigma
\right)_n { }_2F_1 ( - n,\rho + n;\sigma ;bx^{N + 2})$$
where $(\sigma )_n = \frac{\Gamma \left( {\sigma + n} \right)}{\Gamma \left( \sigma \right)}$, $\sigma = \frac{2m + N + 3}{N + 2}$ and $\rho = \frac{( {2m + 1} )b + 2a}{( {N + 2} )b}$.
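As a quick symbolic consistency check (ours, not from the original text), the general eigenvalue formula $w_n^m(N)$ can be verified against the special cases listed above for $N=-1,0,1,2,3$:

```python
import sympy as sp

a, b, n, m = sp.symbols('a b n m')

def w(N):
    # General AIM eigenvalue formula w_n^m(N) from the text
    return b*(N + 2)**2*n*(n + ((2*m + 1)*b + 2*a)/((N + 2)*b))

# The listed special cases for N = -1, 0, 1, 2, 3:
cases = {
    -1: n*(2*a + 2*b*m + (n + 1)*b),
     0: 2*n*(2*a + 2*b*m + (2*n + 1)*b),
     1: 3*n*(2*a + 2*b*m + (3*n + 1)*b),
     2: 4*n*(2*a + 2*b*m + (4*n + 1)*b),
     3: 5*n*(2*a + 2*b*m + (5*n + 1)*b),
}
for N_val, rhs in cases.items():
    assert sp.simplify(w(N_val) - rhs) == 0   # each case is reproduced
```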
DKP Harmonic Oscillator
=======================
In this section, the Duffin-Kemmer-Petiau formalism [@12; @13] is briefly sketched and the DKP oscillator is solved using AIM. Generally, the first-order relativistic Duffin-Kemmer-Petiau equation for a free spin-0 or spin-1 particle of mass $m$ is
$$\label{eq1} ( {c\mathbf{\beta}.\textbf{p}+mc^2})\psi=i\hbar\beta^0
\frac {d\psi}{dt}$$
where $\beta ^{\mu }$ ($\mu $= 0, 1, 2, 3) matrices satisfy the commutation relation $$\label{eq2} \beta ^\mu \beta ^\nu \beta ^\lambda +\beta ^\lambda
\beta ^\nu \beta ^\mu =g^{\mu \nu }\beta ^\lambda +g^{\nu \lambda
}\beta ^\mu$$ which defines the so-called Duffin-Kemmer-Petiau (DKP) algebra. The algebra generated by the four $\beta$ matrices has three irreducible representations: a ten dimensional one that is related to S=1, a five dimensional one relevant for S=0 (spinless particles) and a one dimensional one which is trivial.
In the spin-0 representation, $\beta ^\mu $ are $5\times 5$ matrices defined as ($i = 1, 2, 3$) $$\label{eq3} \beta ^0=\left( {{\begin{array}{*{20}c}
\theta \hfill & {\tilde {0}} \hfill \\
{\bar {0}_T } \hfill & \textbf{0} \hfill \\
\end{array} }} \right),
\quad \beta ^i=\left( {{\begin{array}{*{20}c}
{\tilde {0}} \hfill & {\rho ^i} \hfill \\
{-\rho _T^i } \hfill & \textbf{0} \hfill \\
\end{array} }} \right)$$ with $\tilde {0}$, $\bar {0}$, $\textbf{0}$ as $2\times 2$, $2\times 3$, $3\times 3$ zero matrices, respectively, and $$\label{eq4} \theta=\left( {{\begin{array}{*{20}c}
0 \hfill & 1 \hfill \\
1 \hfill & 0 \hfill \\
\end{array} }} \right),
\quad \rho^1=\left( {{\begin{array}{*{20}c}
-1 \hfill & 0 \hfill & 0 \hfill\\
0 \hfill & 0 \hfill & 0 \hfill\\
\end{array} }} \right),
\quad \rho^2=\left( {{\begin{array}{*{20}c}
0 \hfill & -1 \hfill & 0 \hfill\\
0 \hfill & 0 \hfill & 0 \hfill\\
\end{array} }} \right),
\quad \rho^3=\left( {{\begin{array}{*{20}c}
0 \hfill & 0 \hfill & -1 \hfill\\
0 \hfill & 0 \hfill & 0 \hfill\\
\end{array} }} \right)$$ For spin one particles, $\beta^\mu$ are $10\times 10$ matrices given by $$\label{eq5} \beta^0=\left( {{\begin{array}{*{20}c}
0 \hfill & {\bar 0} \hfill & {\bar 0} \hfill & {\bar 0} \hfill\\
{\bar 0}^T \hfill & \textbf {0} \hfill & \textbf {I} \hfill & \textbf {I}
\hfill\\
{\bar 0}^T \hfill & \textbf {I} \hfill & \textbf {0} \hfill & \textbf {0}
\hfill\\
{\bar 0}^T \hfill & \textbf {0} \hfill & \textbf {0} \hfill &
\textbf {0}
\hfill\\
\end{array} }} \right),
\quad \beta^i=\left( {{\begin{array}{*{20}c}
0 \hfill & {\bar 0} \hfill & e_i \hfill & {\bar 0} \hfill\\
{\bar 0}^T \hfill & \textbf {0} \hfill & \textbf {0} \hfill & -is_i \hfill\\
e_i^T \hfill & \textbf {0} \hfill & \textbf {0} \hfill & 0 \hfill\\
{\bar 0}^T \hfill & -is_i \hfill & \textbf {0} \hfill & 0 \hfill\\
\end{array} }} \right)$$ where $s_{i}$ are the usual $3\times 3$ spin one matrices $$\label{eq6} {\bar 0}=\left( {{\begin{array}{*{20}c}
0 \hfill & 0 \hfill & 0 \hfill\\
\end{array} }} \right),
\quad e_1=\left( {{\begin{array}{*{20}c}
1 \hfill & 0 \hfill & 0 \hfill\\
\end{array} }} \right),
\quad e_2=\left( {{\begin{array}{*{20}c}
0 \hfill & 1 \hfill & 0 \hfill\\
\end{array} }} \right),
\quad e_3=\left( {{\begin{array}{*{20}c}
0 \hfill & 0 \hfill & 1 \hfill\\
\end{array} }} \right).$$ $\textbf{I}$ and $\textbf{0}$ are the identity and zero matrices, respectively. While the dynamical state $\psi_{DKP}$ is a five-component spinor for spin-0 particles, it is a ten-component spinor for $S=1$ particles.
For the external potential introduced with the non-minimal substitution $$\textbf{p} \to \textbf{p} - im\omega \eta ^0\textbf{r}$$ where $\omega$ is the oscillator frequency and $\eta ^0$ = $2\beta
^{0^2}$ - 1, the DKP equation for the system is $$\label{dkpspinor}
\left[ {c\beta .(\textbf{p} - im\omega \eta ^0\textbf{r}) + mc^2}
\right]\psi = i\hbar \beta ^0\frac{d\psi }{dt}$$ In the spin zero representation, the five component DKP spinor
$$\label{eq8} \psi(\textbf {r})=\left( {{\begin{array}{*{20}c}
\psi_{upper} \hfill\\
i\psi_{lower} \hfill\\
\end{array} }} \right)
\quad \mbox{with}~~~\psi_{upper}\equiv \left(
{{\begin{array}{*{20}c}
\phi \hfill\\
\varphi \hfill\\
\end{array} }} \right)
\quad \mbox{and}~~~\psi_{lower}\equiv \left( {{\begin{array}{*{20}c}
A_1 \hfill\\
A_2 \hfill\\
A_3 \hfill\\
\end{array} }} \right).$$
so that for stationary states the DKP equation can be written as $$\begin{array}{l}
mc^2\phi = E\varphi \, + ic(\textbf{p} + im\omega \textbf{r}).\textbf{A} \\
mc^2\varphi = E\phi \\
mc^2\textbf{A} = ic(\textbf{p} - im\omega \textbf{r})\phi \\
\end{array}$$
where $\textbf{A}$ is the vector $(A_1,A_2, A_3)$.\
The five-component wavefunction $\psi$ is simultaneously an eigenfunction of $J^2$ and $J_3$ $$J^2\left( {{\begin{array}{*{20}c}
\psi_{upper} \hfill\\
\psi_{lower} \hfill\\
\end{array} }} \right)=\left( {{\begin{array}{*{20}c}
L^2\psi_{upper} \hfill\\
(L+S)^2\psi_{lower} \hfill\\
\end{array} }} \right)=J(J+1)\left( {{\begin{array}{*{20}c}
\psi_{upper} \hfill\\
\psi_{lower} \hfill\\
\end{array} }} \right)$$ $$J_3\left( {{\begin{array}{*{20}c}
\psi_{upper} \hfill\\
\psi_{lower} \hfill\\
\end{array} }} \right)=\left( {{\begin{array}{*{20}c}
L_3\psi_{upper} \hfill\\
(L_3+s_3)\psi_{lower} \hfill\\
\end{array} }} \right)=M\left( {{\begin{array}{*{20}c}
\psi_{upper} \hfill\\
\psi_{lower} \hfill\\
\end{array} }} \right)$$ where the total angular momentum $J=L+S$ which commutes with $\beta^0$, is a constant of the motion.
For the $S=0$ DKP oscillator eigenstate problem, the most general solution for a central potential [@13] is the following
$$\label{eq12} \psi_{JM}(r)=\left( {{\begin{array}{*{20}c}
F_{nJ}(r)Y_{JM}(\Omega) \hfill\\
G_{nJ}(r)Y_{JM}(\Omega) \hfill\\
i\sum_LH_{nJL}(r)Y_{JL1}^M(\Omega) \hfill\\
\end{array} }} \right)$$
where $$\alpha_J = \sqrt {\left( {J + 1} \right) / \left( {2J + 1} \right)},
\quad \zeta_J = \sqrt {J / \left( {2J + 1} \right)}$$
$$\begin{array}{l}
F_{nJ} (r) = F(r),\quad G_{nJ} = G(r),\quad H_{n,J,J\pm
1} (r) = H_{\pm 1} (r) \\
\end {array}$$
$\psi_{JM}$ of parity $(-1)^J$ is inserted into equation (\[dkpspinor\]) and the following equations are found. $$\label{h1}EF = mc^2G \\$$ $$\label{h2}
\hbar c\left( {\frac{d}{dr} - \frac{J + 1}{r} + \frac{m\omega
r}{\hbar }}
\right)F = - \frac{1}{\alpha _J }mc^2H_1 \\$$ $$\label{h3}
\hbar c\left( {\frac{d}{dr} - \frac{J}{r} + \frac{m\omega r}{\hbar }}
\right)F = - \frac{1}{\zeta _J }mc^2H_{ - 1}$$ $$\label{h4}
- \alpha _J \left(
{\frac{d}{dr} + \frac{J + 1}{r} - \frac{m\omega r}{\hbar }} \right)H_1 + \\
\zeta _J \left( {\frac{d}{dr} - \frac{J}{r} - \frac{m\omega r}{\hbar }}
\right)H_{ - 1} = \frac{1}{\hbar c}\left( {mc^2F - EG} \right) \\
$$ From the above equations, if equations (\[h1\]) to (\[h3\]) are inserted into equation (\[h4\]), the homogenous second order differential equation for the DKP harmonic oscillator [@13] is obtained as;
$$\label{hodiff}
\left( {\frac{d^2}{dr^2} + \frac{\left( {E^2 - m^2c^4}
\right)}{\left( {\hbar c} \right)^2} + \frac{3m\omega }{\hbar } -
\frac{m^2\omega ^2r^2}{\hbar ^2} - \frac{J\left( {J + 1}
\right)}{r^2}} \right)F(r) = 0$$
If we define $E_{eff}$ = $\frac{\left( {E^2 - m^2c^4}
\right)}{\left( {\hbar c} \right)^2} + \frac{3m\omega }{\hbar }$ and $k =\frac{m\omega }{\hbar }$, equation (\[hodiff\]) becomes
$$\label{hodiffarranged}
\left( {\frac{d^2}{dr^2} + E_{eff} - k^2r^2 - \frac{J\left( {J + 1}
\right)}{r^2}} \right)F(r) = 0$$
The asymptotic iteration method requires selecting the wave function as follows $$F(r) = r^{J + 1}e^{ - \frac{1}{2}kr^2}f(r)$$ Substituting it into equation (\[hodiffarranged\]) leads to $$\frac{d^2f(r)}{dr^2} -2\left( {kr - \frac{J + 1}{r}}
\right)\frac{df(r)}{dr} + \left( {E_{eff} - 3k - 2kJ} \right)f(r)=0$$ where $\lambda _0$ = $2\left( {kr - \frac{J + 1}{r}} \right)$ and $s_0 = 3k + 2kJ - E_{eff}$. By means of equation (\[iter\]), we may calculate $\lambda_n(r)$ and $s_n(r)$. This gives: $$\begin{aligned}
\label{sl}
\lambda _0 & = & 2\left( {kr - \frac{J + 1}{r}} \right) \nonumber \\
s_0 & = & 3k + 2kJ - E_{eff}
\nonumber \\
\lambda _1 & = & 5k + 2\frac{J + 1}{r^2} + 2kJ - E_{eff} + 4\left(
{kr
-\frac{J + 1}{r}} \right)^2 \nonumber \\
s_1 & = & 2\left( {3k + 2kJ - E_{eff} } \right)\left( {kr - \frac{J
+ 1}{r}}\right)
\nonumber \\
\lambda _2 & = & - 4\frac{J + 1}{r^3} + 2\left( {kr - \frac{J +
1}{r}} \right)\left[ {4\left( {k + \frac{J + 1}{r^2}} \right) +
\left( {3k - 2kJ -
E_{eff} } \right) + 2\left( {kr - \frac{J + 1}{r}} \right)} \right] \nonumber \\
s_2 & = & \left( {3k + 2kJ - E_{eff} } \right)\left[ {7k + 4\frac{J
+ 1}{r^2}+2kJ - E_{eff} + 4\left( {kr - \frac{J + 1}{r}} \right)^2}
\right] \\
\ldots \emph{etc} \nonumber\end{aligned}$$
Combining these results with the quantization condition given by equation (\[kuantization\]) yields $$\begin{aligned}
\frac{s_0 }{\lambda _0 } = \frac{s_1 }{\lambda _1 }\,\,\,\,\,\, \Rightarrow
\,\,\,\,\,\,\left( {E_{eff} } \right)_0 = 3k + 2kJ \\
\frac{s_1 }{\lambda _1 } = \frac{s_2 }{\lambda _2 }\,\,\,\,\,\, \Rightarrow
\,\,\,\,\,\,\left( {E_{eff} } \right)_1 = 7k + 2kJ \\
\frac{s_2 }{\lambda _2 } = \frac{s_3 }{\lambda _3 }\,\,\,\,\,\, \Rightarrow
\,\,\,\,\,\,\left( {E_{eff} } \right)_2 = 11k + 2kJ \\
\ldots \emph{etc} \nonumber
\end{aligned}$$
When the above expressions are generalized, the DKP oscillator eigenvalues turn out to be $$\label{eigeneff}
(E_{eff})_n = k\left( {4n + 3 + 2J} \right)$$ If one inserts the values of $k$ and $E_{eff}$ into equation (\[eigeneff\]), the relativistic energy spectrum of DKP oscillator becomes $$\frac{1}{2mc^2}\left( {E_{NJ}^2 - m^2c^4} \right) = N\hbar \omega$$ where $N$ is the principal quantum number defined as $N=2n+J$. Our result is in agreement with the result of reference [@13] for the same potential.
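As a small numerical illustration (ours; the parameter values are assumed, for illustration only), the spectrum $E_{NJ}^2 = m^2c^4 + 2mc^2N\hbar\omega$ can be tabulated directly. For $\hbar\omega \ll mc^2$ the levels approach the non-relativistic ladder $mc^2 + N\hbar\omega$ from below:

```python
import math

mc2 = 139.57   # rest energy m c^2 in MeV (assumed pion-like value)
hw = 10.0      # oscillator quantum hbar*omega in MeV (assumed)

def dkp_oscillator_energy(N):
    """Level E_NJ from E^2 = m^2 c^4 + 2 m c^2 N hbar*omega, with N = 2n + J."""
    return math.sqrt(mc2**2 + 2.0*mc2*N*hw)

for N in range(4):
    E = dkp_oscillator_energy(N)
    # E - mc2 is close to, but slightly below, the non-relativistic value N*hw
    print(N, round(E, 3), round(E - mc2, 3))
```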
As indicated in Section \[aim\], we can construct the corresponding eigenfunctions by using the wave function generator given by equation (\[ef\]) and equation (\[sl\]) where we obtain $\lambda$ and $s$ values. Therefore, similar to equation (\[efson\]), the wave function $f_n (r)$ can be written: $$f_n (r) = \left( { - 1} \right)^nC_2
2^n(\sigma)_n~{ }_1F_1 \left( { - n,\sigma ;kr^2} \right)$$ $F(r)$ ensues right away in the following form: $$F(r) = r^{J + 1}e^{ - \frac{1}{2}kr^2} \left [\left( { - 1}
\right)^nC_2 2^n(\sigma)_n~{ }_1F_1 \left( { - n,\sigma
;kr^2}\right)\right]$$ where $$ \sigma = \frac{2J + 3}{2} \hspace{0.5cm} \mbox{and} \hspace{0.5cm}
\left( \sigma \right)_n = \frac{\Gamma \left( {\sigma + n} \right)}{\Gamma
\left( \sigma \right)}
$$
Using the wave function $F(r)$, the wave functions $G(r)$, $H_{1}(r)$ and $H_{ - 1}(r)$ can be easily obtained by using equations (\[h1\]) to (\[h4\]).
DKP Coulomb Potential
=====================
We now apply AIM to the bound state problem of a spinless charged pion ($\pi^{-}$) in the Coulomb field of a nucleus. If we use the following *ansatz*: $$a_{\pm} = \frac{mc^2 \pm E}{ \hbar c}, \quad \gamma = \alpha Z,
\quad \lambda_{\pi} = \frac{\hbar}{mc}, \quad\kappa = \frac{2}{
\hbar c} \sqrt{m^2c^4 - E^2}, \quad \xi= \frac {2\gamma E}{\kappa
\hbar c}, \quad \rho = \kappa r \label{kýsaltmalar}$$ the system of coupled equations for the Coulomb potential becomes $$\alpha_J \left( \frac{d F}{ d\rho} - \frac{J + 1}{ \rho} F \right)
= - \frac{1}{\kappa \lambda_{\pi} } H_1 \label{c1}$$ $$\zeta_J \left(\frac{d F}{d\rho} + \frac{J}{\rho} F \right) =
\frac{1}{\kappa\lambda_{\pi}} H_{-1} \label{c2}$$ $$\begin{aligned}
- \alpha_J \left( \frac{d H_{1}}{d\rho} + \frac{J + 1}{\rho} H_{1}
\right) + \zeta_J \left( \frac{d H_{-1}}{d\rho} - \frac{J}{\rho}
H_{-1} \right) \nonumber \\
= \kappa\lambda_{\pi}
\left( \frac{a_+}{ \kappa} + \frac{\gamma}{\rho} \right)
\left( \frac{a_-}{\kappa} - \frac{\gamma}{\rho} \right) F \label{c3}\end{aligned}$$
Eliminating $H_{1}$ and $H_{-1}$ in favor of $F$, the second-order differential equation for the Coulomb potential becomes $${\frac {d^{2}F(\rho)}{d{\rho}^{2}}}+\left ({\frac {\xi}{\rho}}-\frac{1}{4}-
{\frac {J\left (J+1\right )-\gamma^2}{{\rho}^{2}}}\right )F(\rho)=0 \label{coulombdkp}$$ Let the radial wave function be factorized as: $$F(\rho)={\rho}^{\Lambda+1}{e^{-\frac{1}{2}\rho}}f(\rho)$$ where $$\Lambda=-\frac{1}{2}+\sqrt {(J+\frac{1}{2})^2-\gamma^{2}}$$ Equation (\[coulombdkp\]) becomes
$${\frac {d^{2}f(\rho)}{d{\rho}^{2}}}-{\frac {\left (\rho-2\,
\Lambda-2\right )}{\rho}}{\frac {df(\rho)}{d\rho}} -{\frac {\left (\Lambda+1-\xi\right )}{\rho}}f(\rho)=0
\label{coulombaim}$$
which is now amenable to an AIM solution. In order to find the exact energy eigenvalues, we define $\lambda_0$ and $s_0$ as $$\lambda_0=-{\frac {\left (\rho-2 \Lambda-2\right )}{\rho}},
\hspace{0.5cm} s_0=-{\frac {\left (\Lambda+1-\xi\right )}{\rho}}
\label{ls}$$ Using the quantization condition given by equation (\[kuantization\]), the $\xi$ values take the form
$$\xi_{1}=\Lambda +1, \hspace{0.5cm} \xi_{2}=\Lambda+2,
\hspace{0.5cm} \xi_{3}=\Lambda+3,\hspace{0.5cm} \ldots$$
which can be generalized as $$\xi_{n}=\Lambda +n^{'} $$ Inserting $\xi$ and $\Lambda$ in equation (\[kýsaltmalar\]) and defining the principal quantum number as $n=n^{'}+J$, we obtain the exact bound state eigen-energies: $$E_{nJ}=mc^2\left[1+\frac{(\alpha Z)^{2}}{\left(n-J-\frac{1}{2}+\sqrt
{(J+\frac{1}{2})^2-(\alpha Z)^{2}}\right)^{2}}
\right]^{-\frac{1}{2}}$$
which is in agreement with the results of the references [@11; @12] for the same potential. The binding energy $B_{nJ}$ can be calculated from $B_{nJ}=mc^2-E_{nJ}$.
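The closed-form spectrum is easy to evaluate numerically. The sketch below (ours; the pion rest energy and $\alpha$ are assumed illustrative inputs) computes $E_{nJ}$ and checks that for small $\alpha Z$ the binding energy approaches the Bohr-like value $mc^2(\alpha Z)^2/2n^2$:

```python
import math

ALPHA = 1.0/137.035999   # fine-structure constant

def dkp_coulomb_energy(n, J, Z, mc2=139.57):
    """E_nJ for a spin-0 boson in a Coulomb field; mc2 in MeV (pion-like, assumed)."""
    gamma = ALPHA*Z
    denom = n - J - 0.5 + math.sqrt((J + 0.5)**2 - gamma**2)
    return mc2/math.sqrt(1.0 + (gamma/denom)**2)

def binding_energy(n, J, Z, mc2=139.57):
    """B_nJ = mc^2 - E_nJ."""
    return mc2 - dkp_coulomb_energy(n, J, Z, mc2)

# Ground state for Z = 1: the binding is within 0.1% of the Bohr-like estimate
B = binding_energy(1, 0, 1)
bohr = 139.57*ALPHA**2/2.0
print(B, bohr)
```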
We can also construct the corresponding eigenfunctions using AIM as $$f_n (\rho) = \left( { - 1} \right)^nC_2(\sigma)_n~{ }_1F_1 \left(
{ - n,\sigma ;\rho} \right)$$ which gives $$\label{Fr}
F(\rho)={\rho}^{\Lambda+1}{e^{-\frac{1}{2}\rho}}\left[\left( { - 1}
\right)^nC_2(\sigma)_n~{ }_1F_1 \left( { - n,\sigma ;\rho}
\right)\right]$$ where $$\sigma = 2\Lambda + 2 \hspace{0.5cm} \mbox{and} \hspace{0.5cm}
\left( \sigma \right)_n = \frac{\Gamma \left( {\sigma + n}
\right)}{\Gamma \left( \sigma \right)}
$$ The other components of the wave function ($G(\rho)$, $H_{1}(\rho)$ and $H_{ - 1}(\rho)$) can be obtained through equations (\[c1\]) and (\[c2\]) using $F(\rho)$.
Anharmonic Oscillator {#anharmonic}
=====================
In this section, we present the application of the Asymptotic Iteration Method (AIM) to a non-trivial problem. We have thus chosen a vector potential of the type $$U_V=r^{2\xi}, \quad \xi=2, 3, \ldots$$ Taking $\xi=2$, the second-order DKP equation becomes $$\label{dkp2k}
{\frac {d^{2}}{d{r}^{2}}}F(r)+\left ({\frac
{{E}^{2}-2\,E{r}^{4}+{r}^{
8}-{m}^{2}{c}^{4}}{{h}^{2}{c}^{2}}}-{\frac {J\left (J+1\right
)}{{r}^{ 2}}}\right )F(r)=0$$ In order to solve this equation with AIM, we propose the following wave function to transform it to an equation similar to equation (\[diff\]): $$F(r)={e^{-1/2\,\beta\,{r}^{2}}}f(r)$$ where $\beta$ is an arbitrarily introduced constant to improve the convergence speed of the method. We take it $\beta$=5 as in reference [@20] to compare with their non-relativistic results for a similar problem. By taking $\hbar=c=m=1$ and $J=0$ (s-state) for simplicity and inserting this wave function into equation (\[dkp2k\]), we obtain $$\label{pert}
{\frac {d^{2}}{d{r} ^{2}}}f(r)=\left (-{E}^{2}+ 2\,E{r}^{4}+
\beta+1-{\beta}^{2}{r}^{2}-{r}^{8}\right )f(r)+ 2\,\beta\,r{\frac
{d}{dr}}f(r)$$ which can be now solved by AIM. Here, the $s_0(r)$ and $\lambda_0(r)$ are as follows $$\label {s0l0}
s_0(r)=\left (-{E}^{2}+ 2\,E{r}^{4}+
\beta+1-{\beta}^{2}{r}^{2}-{r}^{8}\right ), \quad
\lambda_0(r)= 2\,\beta\,r$$ In order to obtain the energy eigenvalues from equation (\[pert\]), we use equation (\[iter\]) to obtain $s_k(r)$ and $\lambda_k(r)$ in terms of $s_0(r)$ and $\lambda_0(r)$, and then apply the quantization condition of the method given by equation (\[kuantization\]). This straightforward application of AIM gives the energy eigenvalues; however, we have observed that they oscillate and do not converge within a reasonable number of iterations. The sequence appears to converge for iteration numbers $k \lesssim 30$, but then it begins to oscillate as $k$ increases further. This behavior violates the principle behind the AIM: as the number of iterations increases, the method should converge, not oscillate. We have noticed that the main source of the oscillatory behavior is the $r^8$ term, and a second, less serious source is the $E^2$ term.
Therefore, in order to overcome this problem, we have used a perturbation approach within the framework of the AIM, similar to reference [@21]. In order to apply the perturbation, we introduce a parameter $\gamma$ for $s_0(r)$ in equation (\[s0l0\]): $$\label {s0pert}
s_0(r)=\left (-{E}^{2}+ 2\,E{r}^{4}+
\beta+1+\gamma (-{\beta}^{2}{r}^{2}-{r}^{8})\right )$$ where $\gamma$ is an artificially introduced perturbation expansion parameter that will be set equal to 1 at the end of the calculation. After this, equation (\[kuantization\]) becomes $$\label{kuant-per}
\delta_{k}(x,\gamma)=\lambda_{k+1}(x,\gamma)s_{k}(x,\gamma)-\lambda_{k}(x,\gamma)s_{k+1}(x,\gamma)=0$$ If we expand $\delta(x,\gamma)$ near $\gamma$=0, we obtain the following series: $$\label{delta-exp}
\delta_{k}(x,\gamma)=\delta_{k}(x,0)+{\frac {\gamma}{1!}}{\frac
{\partial\delta_{k}(x,\gamma)}{\partial\gamma}}\Big|_{\gamma=0}
+{\frac{\gamma^{2}}{2!}}{\frac{\partial^{2}\delta_{k}(x,\gamma)}{\partial\gamma^{2}}}\Big|_{\gamma=0}
+{\frac{\gamma^{3}}{3!}}{\frac{\partial^{3}\delta_{k}(x,\gamma)}{\partial\gamma^{3}}}\Big|_{\gamma=0}
+\ldots$$ According to AIM, the quantization condition $\delta_{k}(x,\gamma)=0$ must hold for every value of $\gamma$; therefore, each term of this expansion must vanish separately. Writing the $j^{th}$-order term as $$\label{delta-terms}
\delta_{k}^{(j)}(x,\gamma)={\frac{\gamma^{j}}{j!}}{\frac{\partial^{j}\delta_{k}(x,\gamma)}{\partial\gamma^{j}}}\Big|_{\gamma=0}
, \quad j=0, 1, 2, \ldots$$ It is also suitable to expand the energy eigenvalue $E$, $$\label {E-exp}
E_{n}=E_{n}^{0}+ \gamma E_{n}^{1}+\gamma^{2}E_{n}^{2}+\gamma^{3}E_{n}^{3}+\gamma^{4}E_{n}^{4}+...$$ $E_{n}$ expansion terms can be obtained by comparing the terms with the same order of $\gamma$ in equations (\[delta-terms\]) and (\[E-exp\]). Hence, it is clear that the roots of $\delta_{k}^{(0)}(x,0)$=0 give us the main contribution energy terms $E_{n}^{0}$ and the roots of $\delta_{k}^{(1)}(x,0)$=0 give us the first correction $E_{n}^{1}$ and so on.
After applying this perturbation approach, we have obtained the ground state and the first even excited state energy eigenvalues. The results are presented in Tables \[E0s\] and \[E2s\], respectively. In the first column of Table \[E0s\], we present $E_{0}^{0}$ (the unperturbed term); the second column gives $E_{0}^{1}$, the first correction, and so on. We have carried the perturbation up to the 5$^{th}$ term; one can use higher terms to improve the results, but their effect becomes smaller, as can be seen from the tables. In the last column of Table \[E0s\], we show the non-relativistic results of Fernandez [@20] for the same potential for comparison with our results. For these calculations, we have observed that the first term ($E_{0}^{0}$) in the expansion (\[E-exp\]) converges around $k$=30 iterations, whereas the correction terms require more iterations and start to converge around $k$=50.
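As a quick consistency check of the expansion (\[E-exp\]) with $\gamma=1$ (this is our check, using the $k=45$ row values read from Table \[E0s\]), summing the correction terms reproduces the quoted converged eigenvalue $E_0$:

```python
# E_0^j values for the ground state, read from the k = 45 row of Table 1
terms = [2.477838, -0.485452, -0.159246, -0.082983, -0.051885, -0.036061]

# Partial sums E_0^0, E_0^0 + E_0^1, ... show the corrections shrinking
partial = [round(sum(terms[:j + 1]), 6) for j in range(len(terms))]
print(partial)   # last entry, 1.662211, matches the E_0 column of the table
```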
In Table \[E2s\], we show the first even excited state energy eigenvalues. Again, the perturbation is calculated up to the 5$^{th}$ term; the leading term converges around $k$=35 iterations, whereas the correction terms require more iterations, and we have run them up to $k$=50.
Conclusion
==========
This paper has presented a different approach, the asymptotic iteration method, to the calculation of the non-zero angular momentum solutions of the relativistic Duffin-Kemmer-Petiau equation. Exact eigenvalues and eigenfunctions for the relativistic Duffin-Kemmer-Petiau oscillator and Coulomb problems are derived easily. The advantage of the asymptotic iteration method is that it gives the eigenvalues directly by transforming the second-order differential equation into the form ${y}'' = \lambda _0 (r){y}' +
s_0 (r)y$. The exact wave functions are easily constructed by iterating the values of $s_0$ and $\lambda_0$. We have also shown how to solve the non-trivial problems with the help of the perturbation theory within the framework of the asymptotic iteration method. The method presented in this study is general and worth extending to the solution of other interaction problems.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by the Scientific and Technical Research Council of Turkey (TÜBİTAK), Grant No: TBAG-2398, and Erciyes University-Institute of Science, Grant Nos: FBA-03-27, FBT-04-15 and FBT-04-16. The authors would like to thank Professors Y. Nedjadi, F. M. Fernandez and Dr. H. Çiftçi for useful comments, for providing some materials and for reading the manuscript.
[99]{} G. Levai, J. Phys. A: Math. Gen. **25**, L521 (1992). L. Gendenshtein, Zh. Eksp. Teor. Fiz. Pis. Red. **38**, 299 (1983). L. Gendenshtein, Engl. Transl. JETP Lett. **38**, 356 (1983). P.A.M. Dirac, The Principles of Quantum Mechanics (Clarendon Press, Oxford, 1930). L. Infeld and T.E. Hull, Rev. Mod. Phys. **23**, 21 (1951). A. Stahlhofen, Il Nuovo Cimento **B104**, 447 (1989). R.M. Edelstein, K.S. Govinder and F.M. Mahomed, J. Phys. A: Math. Gen. **34**, 1141 (2001). H. Ciftci, R.L. Hall and N.J. Saad, J. Phys. A: Math. Gen. **36**, 11807 (2003). H. Ciftci, R.L. Hall and N.J. Saad, J. Phys. A: Math. Gen. **38**, 1147 (2005). H. Ciftci, R.L. Hall and N.J. Saad, Phys. Rev. **A72**, 022101 (2005). Y. Nedjadi and R. C. Barrett, J. Phys. A: Math. Gen. **27**, 4301 (1994). Y. Nedjadi and R. C. Barrett, J. Math. Phys. **35** (9), 4517 (1994). Y. Nedjadi and R. C. Barrett, J. Phys. **G19**, 87 (1993). B. Boutabia-Chéraitia and T. Boudjedaa, Phys. Lett. **A338**, 97 (2005). V.Ya. Fainberg and B.M. Pimentel, Phys. Lett. **A271**, 16 (2000);\
V.Ya. Fainberg, B.M. Pimentel, Theor. Math. Phys. **124**, 1234 (2000). J.T. Lunardi, B.M. Pimentel, R.G. Teixeiri and J.S. Valverde, Phys. Lett. **A268**, 165 (2000);\
J.T. Lunardi, L.A. Manzoni, B.M. Pimentel and J.S. Valverde, hep-th/0008098. L. Chetouani, M. Merad, T. Boudjedaa and A. Lecheheb, Int. J. of Theoretical Physics **43** (4), 1147 (2004). A. Boumali, Can. J. of Phys. **82** (1), 67 (2004). D.A. Kulikov, R.S. Tutik and A.P. Yaroshenko, Modern Phys. Lett. **A20** (1), 43 (2005). F.M. Fernandez, J. Phys. A: Math. Gen. **37**, 6173 (2004). H. Ciftci, R.L. Hall and N.J. Saad, Phys. Lett. **A340**, 388 (2005).
  ------------------------------------------------------------------------------------------------------------------------------
  $k$   $E_{0}^{0}$   $E_{0}^{1}$   $E_{0}^{2}$   $E_{0}^{3}$   $E_{0}^{4}$   $E_{0}^{5}$   $E_{0}$      $E_{0}$ [@20]
  ----- ------------- ------------- ------------- ------------- ------------- ------------- ------------ ----------------
  5     2.478891      -0.481521     -0.171317     -0.087565     -0.038408     -0.030214     1.669866
  10    2.477792      -0.485884     -0.158642     -0.080739     -0.054255     -0.036055     1.662217     1.325073435
  15    2.477837      -0.485459     -0.159187     -0.082888     -0.052218     -0.035973     1.662112     1.147766154
  20    2.477839      -0.485450     -0.159249     -0.083021     -0.051830     -0.035991     1.662298     1.072223000
  25    2.477838      -0.485452     -0.159247     -0.082987     -0.051875     -0.036052     1.662225     1.062711298
  30    "             "             -0.159246     -0.082983     -0.051885     -0.036069     1.662203     1.060482716
  35    "             "             -0.159246     -0.082984     -0.051885     -0.036062     1.662209     1.060372025
  40    "             "             "             "             -0.051884     -0.036060     1.662212     1.060362059
  45    "             "             "             "             "             -0.036061     1.662211     1.060362077
  50    "             "             "             "             "             "             "            1.060362091
  55    "             "             "             "             "             "             "            1.060362091
  60    "             "             "             "             "             "             "            1.060362090
  65    "             "             "             "             "             "             "            "
  70    "             "             "             "             "             "             "            "
  ------------------------------------------------------------------------------------------------------------------------------
: Ground state energy of the anharmonic oscillator where $k$ is the iteration number ($n=0$, $\hbar=c=m=1$ and $J=0$ (s-state))
\[E0s\]
$k$ $E_{2}^{0}$ $E_{2}^{1}$ $E_{2}^{2}$ $E_{2}^{3}$ $E_{2}^{4}$ $E_{2}^{5}$ $E_{2}$
----- ------------- ------------- ------------- ------------- ------------- ------------- ----------
5 5.698344 -1.478344 0.027555 -2.574579 -1.656472 3.568314 3.584818
10 5.370588 -0.911718 -0.040349 -0.433823 -0.040637 -0.068883 3.875178
15 5.413951 -0.974055 -0.267240 -0.202274 -0.037847 -0.001956 3.930579
20 5.415995 -0.992837 -0.286535 -0.104029 -0.066451 -0.078990 3.887153
25 5.415458 -0.991076 -0.277803 -0.121674 -0.072980 -0.054753 3.897172
30 5.415453 -0.990550 -0.276740 -0.127545 -0.073126 -0.040145 3.907347
35 5.415460 -0.990604 -0.277153 -0.126693 -0.071566 -0.043842 3.905602
40 5.415460 -0.990626 -0.277226 -0.126297 -0.071080 -0.046266 3.903965
45 5.415460 -0.990623 -0.277201 -0.126333 -0.071315 -0.045760 3.904228
50 5.415460 -0.990622 -0.277194 -0.126358 -0.071409 -0.045369 3.904508
: First even excited state energy of the anharmonic oscillator where $k$ is the iteration number ($n=2$, $\hbar=c=m=1$ and $J=0$ (s-state))
\[E2s\]
---
abstract: 'We show that almost all $n$-bit Boolean functions have bounded-error quantum query complexity at least $n/2$, up to lower-order terms. This improves over an earlier $n/4$ lower bound of Ambainis [@ambainis:aa], and shows that van Dam’s oracle interrogation [@dam:oracle] is essentially optimal for almost all functions. Our proof uses the fact that the acceptance probability of a $T$-query algorithm can be written as the sum of squares of degree-$T$ polynomials.'
author:
- 'Andris Ambainis[^1]'
- 'Artūrs Bačkurs[^2]'
- 'Juris Smotrovs[^3]'
- 'Ronald de Wolf[^4]'
title: Optimal quantum query bounds for almost all Boolean functions
---
Introduction
============
Most known quantum algorithms have been developed in the setting of quantum query complexity, which is the quantum generalization of the model of decision tree complexity. Here an algorithm is charged for each “query” to the input bits, while intermediate computation is free (see for more details about this model). For certain specific functions one can obtain large quantum-speedups in this model. For example, Grover’s algorithm [@grover:search] computes the $n$-bit OR function with $O(\sqrt{n})$ queries, while any classical algorithm needs $\Omega(n)$ queries. Many more such polynomial speed-ups are known, see for example [@ambainis:edj; @santha:qrwsurvey; @dhhm:graphproblemsj; @belovs:learninggraphs]. If one considers partial functions there are even exponential speed-ups, for example . Substantial quantum speed-ups are quite rare, and exploit very specific structure in problems that makes those problems amenable to quantum speed-ups.
On the other hand, one can also obtain a smaller speed-up that holds for *almost all* Boolean functions. Classically, almost all Boolean functions $f:\01^n\rightarrow\01$ have bounded-error query complexity $n$, minus lower-order terms. This is quite intuitive: if we have only seen 99% of the $n$ input bits, then the restriction of a random function to the 1% remaining variables will still be roughly balanced between 0 and 1-inputs. In contrast, van Dam [@dam:oracle] exhibited a beautiful quantum algorithm that recovers the complete $n$-bit input $x$ with high probability using roughly $n/2$ quantum queries. Briefly, his algorithm is as follows:
1. With $T=n/2+O(\sqrt{n\log(1/{\varepsilon})})$ and $B=\sum_{i=0}^{T}{n\choose i}$ being the number of $y\in\01^n$ with weight $|y|\leq T$, set up the $n$-qubit superposition $\frac{1}{\sqrt{B}}\sum_{y\in\01^n:|y|\leq T}{|y\rangle}.$
2. Apply the unitary ${|y\rangle}\mapsto(-1)^{x\cdot y}{|y\rangle}$. We can implement this using $T$ queries for $|y|\leq T$.
3. Apply a Hadamard transform to all qubits and measure.
To see correctness of this algorithm, note that the fraction of $n$-bit strings $y$ that have weight $>T$ is $\ll{\varepsilon}$. Hence the state obtained in step 2 is very close to the state $\frac{1}{\sqrt{2^n}}\sum_{y\in\01^n}(-1)^{x\cdot y}{|y\rangle}$, whose Hadamard transform is exactly ${|x\rangle}$.
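The three steps above can be simulated directly with a state vector for small $n$; the sketch below is our own illustrative implementation (the function name and the choice $n=8$, $T=6$ are ours, not from the paper). With these parameters $B/2^n = 247/256$, so the measurement recovers $x$ with probability about $0.96$.

```python
import numpy as np
from itertools import product

def van_dam_interrogate(x_bits, T):
    """State-vector simulation of van Dam's oracle-interrogation algorithm
    for a hidden n-bit input x_bits, with query budget T."""
    n = len(x_bits)
    dim = 2 ** n
    # Step 1: uniform superposition over all y with Hamming weight <= T.
    state = np.zeros(dim)
    for idx, y in enumerate(product((0, 1), repeat=n)):
        if sum(y) <= T:
            state[idx] = 1.0
    state /= np.linalg.norm(state)
    # Step 2: phase oracle |y> -> (-1)^{x.y} |y>; since every y in the
    # support has weight <= T, this costs at most T queries.
    for idx, y in enumerate(product((0, 1), repeat=n)):
        dot = sum(a * b for a, b in zip(x_bits, y))
        state[idx] *= (-1) ** dot
    # Step 3: Hadamard transform on every qubit, then measure.
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = H1
    for _ in range(n - 1):
        H = np.kron(H, H1)
    state = H @ state
    probs = state ** 2
    return int(np.argmax(probs)), probs

x = (1, 0, 1, 1, 0, 0, 1, 0)            # hidden 8-bit input (index 178)
guess, probs = van_dam_interrogate(x, T=6)
```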
Since obtaining $x$ suffices to compute $f(x)$ for any $f$ of our choice, van Dam’s algorithm implies that the ${\varepsilon}$-error quantum query complexity of $f$ is $$Q_{\varepsilon}(f)\leq n/2+O(\sqrt{n\log(1/{\varepsilon})})\mbox{ \ \ \ for all Boolean functions.}$$ It is known that this upper bound is essentially tight for *some* Boolean functions. For example, $Q_{\varepsilon}(f)={\lceil{n/2}\rceil}$ for the $n$-bit Parity function [@bbcmw:polynomialsj; @fggs:parity]. Our goal in this paper is to show that it is tight for *almost all* Boolean functions, i.e., that $Q_{\varepsilon}(f)$ is essentially lower bounded by $n/2$ for almost all $f$ (and fixed ${\varepsilon}$). How can we prove such a lower bound? Two general methods are known for proving quantum query lower bounds: the polynomial method [@bbcmw:polynomialsj] and the adversary method [@ambainis:lowerboundsj; @hls:madv]. As we explain below, in their standard form neither method is strong enough to prove our desired $n/2$ lower bound.
First, the adversary method in its strongest incarnation [@hls:madv Theorem 2] has the form $$Q_{\varepsilon}(f)\geq \frac{1}{2}(1-\sqrt{{\varepsilon}(1-{\varepsilon})})ADV^{\pm}(f),$$ where the “negative-weights adversary bound” $ADV^{\pm}(f)$ is a quantity that is at most $n$. Accordingly, for constant error probability ${\varepsilon}$ the adversary method can only prove lower bounds of the form $cn$ for some $c<1/2$.
Second, the polynomial method uses the fact (first proved in ) that the acceptance probability of a $T$-query algorithm can be written as a degree-$2T$ $n$-variate multilinear real polynomial $p(x)$ of the input. If the algorithm computes $f$ with error probability $\leq{\varepsilon}$, then $p(x)$ will approximate $f(x)$: $p(x)\in[0,{\varepsilon}]$ for every $x\in f^{-1}(0)$ and $p(x)\in[1-{\varepsilon},1]$ for every $x\in f^{-1}(1)$. Accordingly, a lower bound of $d$ on the ${\varepsilon}$-approximate polynomial degree $\deg_{\varepsilon}(f)$ implies a lower bound of $d/2$ on the ${\varepsilon}$-error quantum query complexity of $f$. This is how Ambainis [@ambainis:aa] proved the current best lower bound of roughly $n/4$ that holds for almost all $n$-bit Boolean functions: he showed that almost all $f$ satisfy $\deg_{\varepsilon}(f)\geq (1/2-o(1))n$. However, O’Donnell and Servedio proved a nearly matching upper bound: $\deg_{\varepsilon}(f)\leq (1/2+o(1))n$ for almost all $f$. Hence Ambainis’s lower bound approach via approximate degree cannot be improved to obtain our desired lower bound of $n/2$ on $Q_{\varepsilon}(f)$.[^5] This suggests that also the polynomial method is unable to obtain the conjectured factor 1/2 in the lower bound.
However, looking under the hood of the polynomial method, it actually gives a bit more information about the acceptance probability: $p(x)$ is not an arbitrary degree-$2T$ polynomial, but the sum of squares of degree-$T$ polynomials. Using this extra information, we prove in this paper that indeed $Q_{\varepsilon}(f)\geq n/2$ up to lower-order terms for almost all $f$.
Proof
=====
Suppose we have a quantum algorithm that uses $T$ queries to its $n$-bit input $x$. Then by [@bbcmw:polynomialsj Lemma 4.1], its final state can be written as a function of the input as $$\sum_z \alpha_z(x){|z\rangle},$$ where $z$ ranges over the computational basis states of the algorithm’s space, and the amplitudes $\alpha_z(x)$ are complex-valued multilinear $n$-variate polynomials of degree $\leq T$. We assume w.l.o.g. that the algorithm determines its Boolean output by measuring the first qubit of the final state. Then the acceptance probability (as a function of input $x$) is the following polynomial of degree $\leq 2T$: $$p(x) = \sum_{z:z_1=1} |\alpha_z(x)|^2.$$ Let $\alpha_z\in\mathbb{C}^{2^n}$ denote the vector with entries $\alpha_z(x)$. Define the following $2^n\times 2^n$ matrix $P$: $$P = \sum_{z:z_1=1} \alpha_z\alpha_z^*.$$ The diagonal entry $P_{xx}$ of this matrix is $p(x)$. Since $P$ is positive semidefinite, we have[^6] $${{\left\|{P}\right\|}}_1 = {\mbox{\rm Tr}}(P) = \sum_{x\in\01^n} p(x).$$ With $H$ denoting the $n$-qubit Hadamard transform, $H\alpha_z$ is proportional to the Fourier transform $\widehat{\alpha_z}$, which has support only on the $B = \sum_{i=0}^T{n\choose i}$ monomials of degree $\leq T$. Hence the matrix $HPH$ has support only on a $B\times B$ submatrix.
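The key fact used at the end of this paragraph, that the Hadamard transform of a degree-$\leq T$ multilinear amplitude polynomial is supported only on the $B$ strings of Hamming weight $\leq T$, can be checked numerically. The concrete polynomial below is an arbitrary illustrative choice of ours:

```python
import numpy as np
from itertools import product

n, T = 4, 2
xs = list(product((0, 1), repeat=n))
# An arbitrary multilinear polynomial of degree <= T, evaluated on all inputs.
alpha = np.array([0.3 + 0.5 * x[0] - 0.2 * x[1] * x[3] for x in xs])
# Build the n-qubit Hadamard transform.
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
H = H1
for _ in range(n - 1):
    H = np.kron(H, H1)
hat = H @ alpha
# All Fourier mass should sit on strings s with Hamming weight <= T.
heavy = [abs(hat[i]) for i, s in enumerate(xs) if sum(s) > T]
```

Every entry of `heavy` vanishes (up to rounding), confirming that $HPH$ is supported on a $B\times B$ submatrix.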
It will be convenient to use $+1$ and $-1$ as the range of a Boolean function, rather than 0 and 1. Consider Boolean function $f :\01^n\rightarrow\{\pm 1\}$. For $s\in\01^n$, the corresponding Fourier coefficient of $f$ is defined as $\widehat{f}(s)=\frac{1}{2^n}\sum_x(-1)^{s\cdot x}f(x)$. Let $F$ be the $2^n\times 2^n$ diagonal matrix with diagonal entries $f(x)$. Define $\widehat{F} = HFH$. Then for $s, t\in\01^n$, we have $$\widehat{F}_{s,t} = {\langles|}HFH{|t\rangle} = \frac{1}{2^n}\sum_{x,y} (-1)^{s\cdot x}(-1)^{t\cdot y}F_{xy}=\frac{1}{2^n}\sum_x (-1)^{(s\oplus t)\cdot x}f(x)=\widehat{f}(s\oplus t).$$ Let $\widehat{F}_T$ denote $\widehat{F}$ after zeroing out all $s,t$-entries where $|s| > T$ and/or $|t| > T$. Note that $HPH$ doesn’t have support on the entries that are zeroed out, hence ${\langle{HPH},{\widehat{F}}\rangle} = {\langle{HPH},{\widehat{F}_T}\rangle}$.
Suppose our $T$-query quantum algorithm computes $f$ with worst-case error probability at most some fixed constant $\leq{\varepsilon}$. Output 1 means the algorithm thinks $f(x)=1$, and output 0 means it thinks $f(x)=-1$. Then for every $x\in\01^n$, $2p(x)-1$ differs from $f(x)$ by at most $2{\varepsilon}$. Hence: $$\begin{aligned}
(1 - 2{\varepsilon})2^n & \leq & {\langle{2P-I},{F}\rangle}\\
& = & 2{\langle{P},{F}\rangle} - \sum_x f(x)\\
& = & 2{\langle{HPH},{\widehat{F}}\rangle} - \sum_x f(x)\\
& = & 2{\langle{HPH},{\widehat{F}_T}\rangle} - \sum_x f(x)\\
& \leq & 2{{\left\|{P}\right\|}}_1{{\left\|{\widehat{F}_T}\right\|}}_\infty - \sum_x f(x)\\
& = & 2{{\left\|{\widehat{F}_T}\right\|}}_\infty\sum_x p(x) - \sum_x f(x).\end{aligned}$$ We can assume w.l.o.g. that $\sum_x f(x)\geq 0$ (if this doesn’t hold for $f$ then just take its negation, which has the same query complexity as $f$). Since $\sum_x p(x)\leq 2^n$, we get $$\label{eq:normFTlowerbound}
{{\left\|{\widehat{F}_T}\right\|}}_\infty\geq 1/2-{\varepsilon}.$$ The technically hard part is to upper bound ${{\left\|{\widehat{F}_T}\right\|}}_\infty$ for most $f$. So consider the case where $f:\01^n\rightarrow\{\pm 1\}$ is a *uniformly random* function, meaning that the $2^n$ values $f(x)$ are independent uniformly random signs. In the next subsection we show
\[claim:maxSing\] With probability $1-o(1)$ (over the choice of $f$) we have ${{\left\|{\widehat{F}_T}\right\|}}_\infty=O\left(\sqrt{\frac{n B^{1+o(1)}}{2^n}}\right)$.
Combining this with the lower bound (\[eq:normFTlowerbound\]), we get that $B \geq 2^{n-o(n)}$. On the other hand, a well-known upper bound on the sum of binomial coefficients is $B=\sum_{i=0}^{T}{n\choose i}\leq 2^{nH(T/n)}$, where $H(q)=-q\log q -(1-q)\log(1-q)$ denotes the binary entropy function. Hence, $2^{n-o(n)}\leq 2^{nH(T/n)}$ which implies $T\geq n/2-o(n)$. This shows that $Q_{\varepsilon}(f) \geq n/2-o(n)$ for almost all $f$ (and fixed constant ${\varepsilon}$).
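The entropy bound on $B$ used in this step, and its near-tightness ($B \geq 2^{nH(T/n)}/(n+1)$, a standard counting fact), are easy to sanity-check numerically; the concrete values $n=100$, $T=40$ below are illustrative:

```python
import math

def binom_sum(n, T):
    # B = sum_{i=0}^{T} C(n, i)
    return sum(math.comb(n, i) for i in range(T + 1))

def entropy_bound(n, T):
    # 2^{n H(T/n)} with H the binary entropy; meaningful for 0 < T <= n/2
    q = T / n
    H = -q * math.log2(q) - (1 - q) * math.log2(1 - q)
    return 2 ** (n * H)

n, T = 100, 40
B = binom_sum(n, T)
```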
Proof of Claim \[claim:maxSing\]
--------------------------------
Below, unless mentioned otherwise, probabilities and expectations will be taken over the random choice of $f$. We choose $T=n/2-o(n)$ sufficiently small that $B=\sum_{i=0}^{T}{n\choose i}=o(2^n)$, i.e., the $o(n)$ term in $T$ is taken to be $\omega(\sqrt{n})$.
Let $\lambda_i$ be the $i$-th eigenvalue of $\widehat{F}_T$. Since $\widehat{F}_T$ is symmetric we have $${{\left\|{\widehat{F}_T}\right\|}}_\infty = \max_i |\lambda_i| = \sqrt[2k]{\max_i \lambda_i^{2k}} \leq \sqrt[2k]{\sum_i \lambda_i^{2k}}=\sqrt[2k]{{\mbox{\rm Tr}}(\widehat{F}_T^{2k})}.$$ We are going to show that $$\label{eq:EFT2kupperbound}
{\mathop{\mathbb E}}\left[{\mbox{\rm Tr}}(\widehat{F}_T^{2k})\right] = O\left(B\left(B/2^n\right)^k\right)$$ for every constant $k$ (with a big-O constant depending on $k$). This means that, using Markov’s inequality, $$\begin{aligned}
\Pr\left[ {{\left\|{\widehat{F}_T}\right\|}}_\infty>C \sqrt{n B^{1+1/k}/2^n} \right] &
\leq \Pr\left[ \sqrt[2k]{{\mbox{\rm Tr}}(\widehat{F}_T^{2k})}>C \sqrt{n B^{1+1/k}/2^n} \right] \\
& = \Pr\left[ {\mbox{\rm Tr}}(\widehat{F}_T^{2k}) >C^{2k} n^k B^{k+1}/2^{nk} \right] \\
& \leq \frac{{\mathop{\mathbb E}}\left[{\mbox{\rm Tr}}(\widehat{F}_T^{2k})\right]}{C^{2k} n^k B^{k+1}/2^{nk}} = o(1) .\end{aligned}$$ Since this is true for any constant $k$, Claim \[claim:maxSing\] follows.
So now our goal is to prove (\[eq:EFT2kupperbound\]). Below we let each of $s_1,\ldots,s_{2k}$ range over the $B$ $n$-bit strings of weight $\leq T$, and each of $x_1,\ldots,x_{2k}$ range over $\01^n$. For simplicity we abbreviate $\vec{s}=s_1,s_2,\ldots,s_{2k}$ and $\vec{x}=x_1,x_2,\ldots,x_{2k}$. Writing out the $2k$-fold matrix product, we have $$\begin{aligned}
{\mathop{\mathbb E}}\left[{\mbox{\rm Tr}}(\widehat{F}_T^{2k})\right]& ={\mathop{\mathbb E}}\left[ \sum_{\vec{s}} \widehat{f}(s_1\oplus s_2)\widehat{f}(s_2\oplus s_3)\cdots \widehat{f}(s_{2k}\oplus s_1)\right]\\
&={1 \over 2^{2nk}}\sum_{\vec{s}} \sum_{\vec{x}} {\mathop{\mathbb E}}\left[(-1)^{(s_1 \oplus s_2)\cdot x_1}f(x_1) \cdots (-1)^{(s_{2k} \oplus s_1)\cdot x_{2k}}f(x_{2k})\right]\\
&={1 \over 2^{2nk}}\sum_{\vec{s}} \sum_{\vec{x}} (-1)^{(s_1 \oplus s_2)\cdot x_1+\cdots+(s_{2k} \oplus s_1)\cdot x_{2k}}{\mathop{\mathbb E}}\left[f(x_1) \cdots f(x_{2k})\right].\end{aligned}$$ For a particular $y\in\01^n$, there are as many Boolean functions having $f(y)=1$ as having $f(y)=-1$, independently of what is known about values of $f$ on other inputs. Thus, if any $y$ occurs an odd number of times in $\vec{x}=(x_1, \ldots, x_{2k})$, then ${\mathop{\mathbb E}}[f(x_1) \cdots f(x_{2k})]=0$. So only those summands are left where all multiplicities of distinct values among $x_1,\ldots,x_{2k}$ are even. We call such $\vec{x}$ *even*. We have $$\begin{aligned}
{\mathop{\mathbb E}}\left[{\mbox{\rm Tr}}(\widehat{F}_T^{2k})\right] & = & {1 \over 2^{2nk}}\sum_{\vec{s}} \sum_{\substack{\vec{x} \textnormal{ even}}} (-1)^{\sum_{i=1}^{2k}(s_i \oplus s_{i+1})\cdot x_i} \nonumber \\
& = & {1 \over 2^{2nk}}\sum_r \sum_{\substack{\textnormal{partition of }\{1,\ldots,2k\} \\ \textnormal{into even non-empty }I_1,\ldots,I_r}}\sum_{\vec{s}} \sum_{\substack{x^{(1)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=1}^r\left(\bigoplus_{i\in I_j}(s_i\oplus s_{i+1})\right)\cdot x^{(j)}}\label{eq:1}\end{aligned}$$ where $s_{2k+1}=s_1$ and the second summation is over all partitions of $\{1,\ldots,2k\}$ into even-sized non-empty parts $I_1, \ldots, I_r$ with the implied condition that $x_i=x_j$ iff $i$ and $j$ belong to the same part. Since the number of such partitions $(I_1, I_2, \ldots, I_r)$ depends only on $k$ (which is a constant), it suffices to prove that each term in the sum is of the order $O(B(B/2^n)^k)$. We will do this by proving
\[claim:sumEst\] For any fixed $m$ and any partition $I_1,\ldots,I_r$ of $\{1,\ldots,m\}$: $$\label{eq:claim}
\sum_{\vec{s}} \sum_{\substack{x^{(1)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=1}^r t_j(\vec{s})\cdot x^{(j)}}=O(B^{m-r+1}\cdot 2^{nr})$$ where $t_j(\vec{s})=\bigoplus_{i\in I_j}(s_i\oplus s_{i+1})$, $s_{m+1}=s_1$, and the big-O constant depends on $m$ and the partition.
We first show that Claim \[claim:sumEst\] implies Claim \[claim:maxSing\]. In our case, $m=2k$. Since $B=o(2^n)$, the upper bound $B^{2k-r+1}\cdot 2^{nr}$ increases when $r$ increases. Since each partition of $\{1,\ldots,2k\}$ into even-sized non-empty parts $I_1, \ldots, I_r$ must contain at least 2 elements in each $I_j$, we must have $r\leq (2k)/2=k$ and every term of the sum (\[eq:1\]) is upper bounded by $$\frac{1}{2^{2nk}} O\left(B^{2k-k+1}\cdot 2^{nk}\right)=
O\left(B\left(B/2^n\right)^k\right).$$
It remains to prove Claim \[claim:sumEst\], which we do by induction on $r$. If $r=1$ then $t_1(\vec{s})=\oplus_{i=1}^m (s_i\oplus s_{i+1})$ includes each $s_i$ exactly twice and hence sums to the all-0 string, hence $$\sum_{\vec{s}} \sum_{x\in\01^n}(-1)^{t_1(\vec{s})\cdot x}=\sum_{\vec{s}} \sum_{x\in\01^n}(-1)^{0\cdot x}=B^m\cdot 2^n.$$ For the inductive step, suppose Claim \[claim:sumEst\] is true for $r-1$. We rewrite the left-hand side of (\[eq:claim\]) as $$\begin{aligned}
\sum_{\vec{s}} & \sum_{\substack{x^{(1)},\ldots,x^{(r)}\\ \textnormal{ different}}} (-1)^{\sum_{j=1}^r t_j(\vec{s})\cdot x^{(j)}} \nonumber \\
& =\sum_{\vec{s}}\sum_{x^{(1)}}\sum_{\substack{x^{(2)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=1}^r t_j(\vec{s})\cdot x^{(j)}}-
\sum_{\vec{s}}\sum_{a=2}^r\sum_{\substack{x^{(2)},\ldots,x^{(r)}\\ \textnormal{ different, }x^{(1)}=x^{(a)}}}(-1)^{\sum_{j=1}^r t_j(\vec{s})\cdot x^{(j)}}.\label{eq:twosums}\end{aligned}$$ Let us estimate both sums of (\[eq:twosums\]). Since $\sum_{x^{(1)}}(-1)^{t_1(\vec{s})x^{(1)}}=2^n$ if $t_1(\vec{s})=0^n$, and $=0$ otherwise, the first sum equals $$\label{eq:2}
2^n\sum_{\vec{s}: t_1(\vec{s})=0}\sum_{\substack{x^{(2)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=2}^r t_j(\vec{s})\cdot x^{(j)}} .$$ We now transform this sum into the form of the left-hand side of (\[eq:claim\]), with both $m$ and $r$ smaller by 1 compared to their current values. After that, we will apply the induction hypothesis.
Let $\ell$ be such that $\ell\in I_1$, $\ell-1 \notin I_1$. Then $t_1(\vec{s})$ contains $s_\ell$ with coefficient 1 (because $t_1(\vec{s})$ includes $s_\ell\oplus s_{\ell+1}$ but not $s_{\ell-1}\oplus s_\ell$). We can use the condition $t_1(\vec{s})=0$ to express $s_\ell$ in terms of $s_1, \ldots, s_{\ell-1}$ and $s_{\ell+1}, \ldots, s_m$ as follows: $$\label{eq:3}
s_\ell = s_{\ell+1} \oplus \bigoplus_{i\in I_1:i\neq \ell} (s_i \oplus s_{i+1}) .$$ Let $b$ be such that $\ell-1\in I_b$. Then $t_b(\vec{s})$ contains $s_{\ell-1}\oplus s_\ell$ and we can substitute (\[eq:3\]) into $t_b(\vec{s})$, obtaining $$t_b(\vec{s}) = s_{\ell-1} \oplus s_{\ell+1} \oplus \bigoplus_{i\in I_1:i\neq \ell} (s_i \oplus s_{i+1})
\oplus \bigoplus_{i\in I_b: i\neq \ell-1} (s_i \oplus s_{i+1}).$$ We can now remove the variable $s_\ell$ (because it was only contained in $s_{\ell-1}\oplus s_\ell$ and $s_\ell \oplus s_{\ell+1}$) and redefine $I_b$ to be $I_1\cup I_b \setminus\{\ell\}$. Then we get that (\[eq:2\]) is equal to $$2^n\sum_{\substack{s_1,\ldots,s_{\ell-1}\\s_{\ell+1},\ldots,s_m}}\sum_{\substack{x^{(2)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=2}^r t_j(\vec{s})\cdot x^{(j)}}
=2^n\cdot O\left(B^{m-r+1}\cdot 2^{n(r-1)}\right)=O\left(B^{m-r+1}\cdot 2^{nr}\right)$$ with the estimate following from the induction hypothesis (with both $m$ and $r$ being smaller by 1).
As for the second sum of (\[eq:twosums\]), it is equal to $$\sum_{a=2}^r\sum_{\vec{s}}\sum_{\substack{x^{(2)},\ldots,x^{(r)}\\ \textnormal{ different}}}(-1)^{\sum_{j=2}^r t^{(a)}_j(\vec{s})\cdot x^{(j)}}=O\left(B^{m-r+2}\cdot 2^{n(r-1)}\right)$$ where $t_j^{(a)}(\vec{s})=t_j(\vec{s})$ except for $t_a^{(a)}(\vec{s})=t_a(\vec{s})\oplus t_1(\vec{s})$ (thus merging the partition parts $I_1$ and $I_a$). We have eliminated $x^{(1)}$ and apply the induction hypothesis (with $r$ being smaller by 1 and $m$ remaining the same). The outer sum over $a$ introduces only a factor depending on $r\leq m$.
Since $B=o(2^n)$ we have $B^{m-r+2}\cdot 2^{n(r-1)}=o(B^{m-r+1}\cdot 2^{nr})$. Hence the bound on the first sum in (\[eq:twosums\]) is of a larger order and we have completed the proof of Claim \[claim:sumEst\].
[DHHM06]{}
A. Ambainis. A note on quantum black-box complexity of almost all [B]{}oolean functions. , 71(1):5–7, 1999. quant-ph/9811080.
A. Ambainis. Quantum lower bounds by quantum arguments. , 64(4):750–767, 2002. Earlier version in STOC’00. quant-ph/0002066.
A. Ambainis. Quantum walk algorithm for element distinctness. , 37(1):210–239, 2007. Earlier version in FOCS’04. quant-ph/0311001.
R. Beals, H. Buhrman, R. Cleve, M. Mosca, and R. [de]{} Wolf. Quantum lower bounds by polynomials. , 48(4):778–797, 2001. Earlier version in FOCS’98. quant-ph/9802049.
N. [de]{} Beaudrap, R. Cleve, and J. Watrous. Sharp quantum vs. classical query complexity separations. , 34(4):449–461, 2002. quant-ph/0011065.
A. Belovs. Span programs for functions with constant-sized 1-certificates. In [*Proceedings of 43rd ACM STOC*]{}, pages 77–84, 2012. arXiv:1105.4024.
H. Buhrman, N. Vereshchagin, and R. [de]{} Wolf. On computation and communication with small bias. In [*Proceedings of 22nd IEEE Conference on Computational Complexity*]{}, pages 24–32, 2007.
H. Buhrman and R. [de]{} Wolf. Complexity measures and decision tree complexity: A survey. , 288(1):21–43, 2002.
W. [van]{} Dam. Quantum oracle interrogation: Getting all information for almost half the price. In [*Proceedings of 39th IEEE FOCS*]{}, pages 362–367, 1998. quant-ph/9805006.
C. D[ü]{}rr, M. Heiligman, P. H[ø]{}yer, and M. Mhalla. Quantum query complexity of some graph problems. , 35(6):1310–1328, 2006. Earlier version in ICALP’04.
D. Deutsch and R. Jozsa. Rapid solution of problems by quantum computation. In [*Proceedings of the Royal Society of London*]{}, volume A439, pages 553–558, 1992.
E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. A limit on the speed of quantum computation in determining parity. , 81:5442–5444, 1998. quant-ph/9802045.
L. Fortnow and J. Rogers. Complexity limitations on quantum computation. , 59(2):240–252, 1999. Earlier version in Complexity’98. Also cs.CC/9811023.
L. K. Grover. A fast quantum mechanical algorithm for database search. In [*Proceedings of 28th ACM STOC*]{}, pages 212–219, 1996. quant-ph/9605043.
P. H[ø]{}yer, T. Lee, and R. [Š]{}palek. Negative weights make adversaries stronger. In [*Proceedings of 39th ACM STOC*]{}, pages 526–535, 2007. quant-ph/0611054.
R. [O’Donnell]{} and R. Servedio. Extremal properties of polynomial threshold functions. , 74(3):298–312, 2008. Earlier version in Complexity’03.
M. Santha. Quantum walk based search algorithms. In [*Proceedings of 5th TAMC*]{}, pages 31–46, 2008. arXiv/0808.0059.
P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. , 26(5):1484–1509, 1997. Earlier version in FOCS’94. quant-ph/9508027.
D. Simon. On the power of quantum computation. , 26(5):1474–1483, 1997. Earlier version in FOCS’94.
[^1]: University of Latvia, Riga. Supported by ESF project 1DP/1.1.1.2.0/09/APIA/VIAA/044.
[^2]: University of Latvia, Riga. Supported by the European Commission under the project QCS (Grant No. 255961).
[^3]: University of Latvia, Riga. Supported by ESF project 1DP/1.1.1.2.0/09/APIA/VIAA/044.
[^4]: CWI and University of Amsterdam, [email protected]. Supported by a Vidi grant from the Netherlands Organization for Scientific Research (NWO) and by the European Commission under the project QCS (Grant No. 255961).
[^5]: In fact, the *unbounded-error* quantum query complexity of almost all Boolean functions is only $n/4$ up to lower-order terms. This follows from the degree upper bound of combined with [@bvw:smallbias Theorem 1] and the fact that $d$-bit Parity can be computed with ${\lceil{d/2}\rceil}$ quantum queries.
[^6]: We use the following matrix-analytic notation. For $m\times m$ matrices $A$ and $B$, define inner product ${\langle{A},{B}\rangle}={\mbox{\rm Tr}}(A^*B)=\sum_{i,j} A_{ij}^*B_{ij}$. Note that this inner product is basis-independent: for every unitary $U$ we have ${\langle{UAU^*},{UBU^*}\rangle}={\langle{A},{B}\rangle}$. Let ${{\left\|{A}\right\|}}_p$ denote the (unitarily invariant) Schatten $p$-norm of $A$, which is the $p$-norm of the $m$-dimensional vector of singular values of $A$. In particular, ${{\left\|{A}\right\|}}_1$ is the sum of $A$’s singular values, and ${{\left\|{A}\right\|}}_\infty$ is its largest singular value. It is easy to see that ${{\left\|{A}\right\|}}_2^2={\mbox{\rm Tr}}(A^* A)=\sum_{i,j}|A_{ij}|^2$, and ${\langle{A},{B}\rangle}\leq{{\left\|{A}\right\|}}_1{{\left\|{B}\right\|}}_\infty$.
---
author:
- |
Josh Benaloh, Microsoft Research\
Douglas Jones, Department of Computer Science, University of Iowa\
Eric L. Lazarus, DecisionSmith\
Mark Lindeman\
Philip B. Stark, Department of Statistics, University of California, Berkeley
bibliography:
- '../../Bib/pbsBib.bib'
date: 26 June 2011
title: '**SOBA: Secrecy-preserving Observable Ballot-level Audit**'
---
Introduction and background
===========================
The majority of Americans now vote electronically, either on machine-counted paper ballots or on Direct Recording Electronic (DRE) machines. Electronic voting offers advantages over hand counts and lever machines, but it poses challenges for determining whether votes were recorded and counted correctly. A wide range of security vulnerabilities and other flaws have been documented in contemporary voting equipment. The 2007 “Top-to-Bottom Review” of the systems used in California found that all the systems had “serious design flaws” and “specific vulnerabilities, which attackers could exploit to affect election outcomes” [@bowen07]. While some of these vulnerabilities can be mitigated, the underlying verification challenge is formidable. As Rivest and Wack comment, “complexity is the enemy of security,” and demonstrating that any complex system is free of faults may be impossible or infeasible [@rivestWack06].
Electronic voting systems have failed in real elections. In the 2004 general election in Carteret County, North Carolina, over 4,000 votes were lost irretrievably due to a programming error that affected UniLect Patriot voting machines, casting doubt on a statewide election outcome [@bonner04]. More controversially, in the 2006 general election, ES&S iVotronic DREs in Sarasota County, Florida did not record a vote for U.S. House for about 15% of voters—far more than can plausibly be attributed to intentional undervoting. Inadvertent undervotes were probably decisive in that contest [@ashLamperti08; @mebaneDill07]. Hypotheses explaining these undervotes include voter confusion caused by poor ballot layout [@frisinaEtal08] and machine failure [@garber08; @mebane09]. Unfortunately, the forensic evidence generated by the voting systems was inadequate to determine the cause of the undervotes or the intentions of the voters.
Voter-marked paper ballots provide a clearer record of what voters did and more evidence about voter intent, but by themselves do not solve the election verification problem. In 2005, Harri Hursti repeatedly demonstrated the ability to “hack” optical scan counts when given access to a memory card [@zetter05]. In a June 2006 primary election in Pottawattamie County, Iowa, incorrectly configured optical scanners miscounted absentee ballots in every contest, altering two outcomes. The county auditor ordered a hand recount, which corrected the errors [@flaherty06]. Similar errors in other elections may have altered outcomes without ever being detected. Even when scanners work correctly, their results may differ materially from voter intent. Consider the 2008 U.S. Senate contest in Minnesota, where Al Franken beat Norm Coleman in a hand recount largely because of ballots where the human interpretation differed from the machine interpretation.[^1]
Software independence
---------------------
Computerized election equipment cannot be infallible, so @rivestWack06 and @rivest08 suggest that voting systems should be software-independent. A voting system is [*software-independent*]{} “if an undetected change or error in its software cannot cause an undetectable change or error in an \[apparent\] election outcome.” This idea can be generalized to define independence from hardware and from elections personnel, leading to so-called [*end-to-end verifiable*]{} election technologies. However, end-to-end technology may require fundamental changes in current voting processes.
The outcome of a contest is the set of winners, not the exact vote counts. The [*apparent*]{} outcome of a contest is the winner or winners according to the voting system. The [*correct*]{} outcome of a contest is the winner or winners that a full hand count of the “audit trail” would find. The audit trail is assumed to be an indelible record of how voters cast their votes. It might consist of a combination of voter-marked paper ballots, voter receipts, a voter-verifiable paper audit trail (VVPAT), and suitable electronic records.
This definition of “correct” is generally a matter of law. It does not necessarily imply that the audit trail is inviolate (nor that the outcome according to the audit trail is the same as the outcome according to how voters originally cast their ballots); that there is no controversy about which records in the audit trail reflect valid votes; that human observers agree on the interpretation of the audit trail; that the actual hand counting is accurate; nor that repeating the hand count would give the same answer. If there is no audit trail, defining what it means for the apparent outcome to be correct requires hypothetical counterfactuals—but for the fault in the voting system, what would the outcome have been?
Software independence means that errors that cause apparent outcomes to be wrong leave traces in the audit trail. But software independence does not guarantee any of the following:
1. that no such traces will occur if the apparent outcome is correct[^2]
2. that those traces will be noticed or acted upon
3. that the cost of looking through the audit trail for those traces is affordable
4. that, in principle, there is a way to correct the apparent outcome without holding another election
5. that, in practice, the audit trail was preserved and protected well enough to determine the outcome according to how the voters originally cast their ballots
The penultimate property is guaranteed by strong software independence. @rivestWack06 and @rivest08 define a voting system to be [*strongly software-independent*]{} if an undetected change or error in its software cannot cause an undetectable change or error in an \[apparent\] election outcome, and moreover, a detected change or error in an \[apparent\] election outcome (due to change or error in the software) can be corrected without re-running the election. Having an audit trail does not guarantee that anyone will dig through it to see whether there is a problem or to correct the outcome if the outcome is wrong. Strong software independence does not correct anything, but it is an essential ingredient for a system to be self-correcting.
[*Compliance audits*]{} can be used to assess whether the last property listed above holds: Given that the election used a strongly software-independent voting system, did it adhere to procedures that should keep the audit trail sufficiently accurate to reconstruct the outcome according to how voters cast their ballots? Strong evidence that such procedures were followed is strong evidence that the legally correct outcome—what a full hand count of the audit trail would show—is the same as the outcome according to how the voters originally cast their ballots. As we discuss below in section \[sec:discussion\], we believe that compliance audits should always be required: If the election fails the compliance audit,[^3] there is no assurance that even a full hand count of the audit trail would show the outcome according to how the voters really voted. Below, we assume that the election has passed a compliance audit.
Vote tabulation audits
----------------------
Vote tabulation audits compare reported vote subtotals for subsets of ballots (“audit units”) with hand counts of the votes for each of those subsets. Audit units have to be subsets for which the voting system reports vote subtotals. Most present U.S. audits use audit units that consist of all the ballots cast in individual precincts or all the ballots tabulated on individual voting machines. Generally, audit laws do not have provisions that would lead to correcting incorrect electoral outcomes [@hallEtal09].[^4]
A [*risk-limiting post-election audit*]{} uses the audit trail to guarantee that there is a large, pre-specified probability that the audit will correct the apparent outcome if the apparent outcome is wrong. Risk-limiting audits are widely considered best practice [@bestPractices08]. They have been endorsed by the American Statistical Association [@asa10], the Brennan Center for Justice, Common Cause, the League of Women Voters, and Verified Voting, among others. California AB 2023 (2010) requires a pilot of risk-limiting audits in 2011 [@ab2023_2010]. Colorado Revised Statutes §1-7-515 calls for implementing risk-limiting audits by 2014.
The first method for conducting risk-limiting audits was proposed by @stark08a; numerous improvements have been made [@stark08d; @stark09a; @stark09b; @stark09d; @miratrixStark09a; @stark10d]. See also [@checkowayEtal10]. Risk-limiting audits limit the risk of failing to correct an outcome that is wrong. The risk limit is 100% minus the minimum chance that the audit corrects a wrong outcome. If the outcome is correct in the first place, a risk-limiting audit cannot make it wrong; but if the outcome is wrong, a risk-limiting audit has a large chance of correcting it. Hence, the probability that the outcome according to a risk-limiting audit is the correct outcome is at least 100% minus the risk limit.
For systems that are strongly software-independent, adding a risk-limiting audit addresses the second condition above: It ensures a large, pre-specified probability that the traces will be noticed and will be used to correct the apparent outcome if the apparent outcome is wrong.
Our goal
--------
Our goal in this work is to sketch a personally verifiable privacy-preserving $P$-resilient canvass framework. We must first say what this means.
A [*canvass framework*]{} consists of the vote-tabulation system together with other human, hardware, software, and procedural components of the canvass, including compliance and vote-tabulation audits. A canvass framework is [*resilient with probability $P$*]{} or [*$P$-resilient*]{} if the probability that the outcome it gives[^5] is the correct outcome is at least $P$, even if its software has an error, shortcoming, or undetected change.[^6] Resilience means that the framework tends to recover from faults. If a canvass framework is $P$-resilient, either the outcome it gives when all is said and done is correct, or something occurred that had probability less than $1-P$. The canvass framework that results from performing a risk-limiting audit on a strongly software-independent voting system that passes a compliance audit is $P$-resilient, with $P$ equal to 100% minus the risk limit. If the system fails the compliance audit, the framework should not declare any outcome. Instead, the election should be re-run.
Even if a canvass framework is $P$-resilient, in practice the public might not trust the system unless they can observe crucial steps, especially the audit. The mere right or opportunity to observe the audit will not engender much trust if—as a practical matter—no single person or small group [*could*]{} observe all the steps that are essential to ensuring the accuracy of the final result. For instance, if a vote-tabulation audit takes ten teams of auditors working in separate offices four days to complete, it would take a large team of independent observers—with lots of free time and long attention spans—to verify that the audit was carried out correctly. The longer an audit takes and the more people required to carry out the audit, the more opportunities there are to damage the audit trail, and the harder it is for an observer to be satisfied that the audit has been conducted correctly.
We define a canvass framework to be [*personally verifiable $P$-resilient*]{} if it is $P$-resilient and a single individual could, as a practical matter, observe enough of the process to have convincing evidence that the canvass framework is in fact $P$-resilient.
The transparency required for a canvass framework to be personally verifiable can impact privacy. For instance, publishing images of all the ballots cast in an election[^7] might give individuals compelling evidence that the vote tabulation system found the correct outcome, since the images allow people to count the votes themselves—at least to the extent that voter intent is unambiguous.[^8] But publishing ballot images can facilitate vote-selling and coercion and can compromise privacy, because voters can deliberately or accidentally reveal their identities through marks on the ballots including idiosyncrasies of how individuals fill in bubbles [@calandrinoEtal11] or even the fiber structure of the paper on which the ballot is printed [@calandrinoEtal09].[^9]
A lesser but substantial degree of transparency is conferred by publishing cast vote records (CVRs)[^10] enabling anyone to verify that the contest outcomes are correct—if the CVRs are accurate. However, as @popoveniucStanton07 and @rescorla09 point out, publishing CVRs also can aid vote-selling or coercion because of the potential for pattern voting. One typical sample ballot (from Tulsa, Oklahoma) contains 18 contests with over 589,000 possible combinations if a voter votes in every contest, or over 688 million combinations allowing for undervotes. Thus, a voter could be instructed to vote for the preferred candidate in one contest, and to cast a series of other votes that would almost certainly confirm the voter’s identity (especially within a precinct) if all of the voter’s selections were published. Hence, publishing whole-ballot CVRs for large numbers of ballots improves transparency but can sacrifice privacy.
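The arithmetic behind pattern voting is easy to sketch. The snippet below counts the ways to fill out a ballot for a hypothetical set of per-contest candidate counts (the actual counts on the Tulsa ballot are not given here), assuming each contest is vote-for-one: the total is the product of the number of choices per contest, with one extra choice per contest when undervotes are allowed.

```python
from math import prod

def vote_combinations(candidates_per_contest, allow_undervotes=False):
    """Number of distinct ways to complete a ballot, assuming each
    contest is vote-for-one. An undervote adds one option per contest."""
    extra = 1 if allow_undervotes else 0
    return prod(k + extra for k in candidates_per_contest)

# Hypothetical 18-contest ballot: fifteen two-candidate contests and
# three three-candidate contests (counts invented for illustration).
contests = [2] * 15 + [3] * 3
full = vote_combinations(contests)
with_under = vote_combinations(contests, allow_undervotes=True)
```

Even these modest hypothetical counts yield hundreds of thousands of full-ballot patterns and hundreds of millions once undervotes are allowed, which is why whole-ballot disclosure enables a covert signaling channel.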
When there is not strong evidence that the apparent outcome is correct, risk-limiting audits can require examining the entire audit trail, potentially exposing all the ballots to public scrutiny.[^11] If the apparent outcome is wrong, such exposure is necessary in order to correct the outcome. Therefore, if a risk-limiting audit is to be personally verifiable, there may be occasions where compromising privacy is unavoidable. But minimizing the number of ballots or whole-ballot CVRs that are routinely exposed helps protect privacy, impeding vote-selling and coercion.
We define a canvass framework to be [*personally verifiable privacy-preserving $P$-resilient*]{} if it is personally verifiable $P$-resilient and it does not sacrifice privacy unnecessarily. Neither [*personally verifiable*]{} nor [*privacy-preserving*]{} is a mathematically precise characteristic, while $P$-resilience is.
The contribution of the present work is to sketch a personally verifiable privacy-preserving $P$-resilient voting system. We assume, as a foundation for building this system, that we are starting with a strongly software-independent voting system with an audit trail that corresponds to individual ballots. Moreover, we assume that a compliance audit has determined that the audit trail generated by the system is sufficiently trustworthy to reflect the correct outcomes of the contests. We augment the system with procedures and data structures that make it possible for an individual observer to gain compelling evidence that either the outcomes are correct, or something very unlikely occurred—that is, that the overall canvass framework is $P$-resilient. Unless some of the apparent outcomes are wrong or a margin is extremely small, gathering that evidence will generally involve exposing only a tiny percentage of ballots and whole-ballot CVRs.
In essence, our method adds a special risk-limiting audit to a strongly software-independent voting system (one that has had a compliance audit to ensure that its audit trail is intact). Since one person cannot be in two places at the same time, the procedure cannot be personally verifiable if it involves auditing a multi-jurisdictional contest in different jurisdictions simultaneously; it would then be necessary to trust confederates to observe what is happening elsewhere. The next few sections outline elements of this risk-limiting audit.
Ballot-level risk-limiting audits
=================================
One key to keeping the process personally verifiable (by keeping the amount of observation required low) and to protecting privacy (by exposing as few ballots as possible to observers) is to audit the record at the level of individual ballots, rather than large batches of ballots such as precincts. The fewer ballots there are in each audit unit, the smaller the expected counting burden for risk-limiting audits tends to be—when the electoral outcome is correct (see, e.g., [@stark09c; @stark10c; @stark10d]). A vote-tabulation audit based on checking the CVRs of individual ballots against a human interpretation of those ballots is often called a “ballot-level audit,” a “single-ballot audit,” or a “ballot-based audit.” Because they reduce the time it takes to audit and the number of ballots involved, ballot-level risk-limiting audits are especially amenable to personal verification.
Ballot-level audits are extremely efficient statistically, but they are not simple to implement using current voting systems. To perform a ballot-level audit, there must be a way to identify each ballot uniquely, for instance, a serial number on a paper ballot, or identifying the ballot by its location: “the 17th ballot in deck 152 scanned by scanner C,” for instance.[^12] There must also be a way to match each ballot to its CVR. Some commercial voting systems do not generate or do not store CVRs for individual ballots. Other voting systems record individual CVRs, but are designed to make it difficult or impossible to match individual CVRs to the ballots they purport to represent. In some cases, audit trails have identifiers that can be used to find the corresponding CVRs; this method was used for part of a 2008 audit in Eagle County, Colorado [@branscomb08][^13] and a ballot-level risk-limiting audit in Orange County, California, in 2011 \[P.B. Stark, personal communication, 2011\]. However, to protect privacy, most paper ballots do not have identification numbers. In a 2009 pilot ballot-level audit in Yolo County, California, @stark09d exploited the fact that the CVRs and the physical ballots were in the same order. The scanned images associated with each CVR in the audit sample were compared with the physical ballots to check the accuracy of the CVRs.
@calandrinoEtal07 describe an approach to election verification that involves imprinting ballots with identification numbers and scanning the ballots with a “parallel” system in addition to the system of record. The parallel system derives its own CVRs, from which the apparent contest outcome can be determined independently. The accuracy of the unofficial CVRs and of the imprinting process is then assessed by a ballot-level audit.
Since 2008, the Humboldt County Election Transparency Project (Humboldt County ETP) has experimented with publishing ballot images and independently tabulating CVRs extracted from those images. Using commercially available equipment, Humboldt County ETP rescans paper ballots after embossing them with serial numbers. Then, open-source software is used to form CVRs from the digital images. Humboldt County ETP has processed ballots for six elections and published scanned ballot images as well as its version of the CVRs for some of them. The results based on their re-scans generally have agreed well with the original results, with one important exception: The Humboldt County ETP analysis of the November 2008 election uncovered a defect in the election management software that led the results of an entire ballot batch to be silently discarded!
The Clear Ballot Group, inspired in part by Humboldt County ETP, is developing a system that, in its words, could permit election outcomes to be “thoroughly and transparently verified within 36–48 hours after the polls close.” Neither the Humboldt County ETP nor the Clear Ballot Group currently incorporates risk-limiting audits,[^14] but the parallel scans their systems perform facilitate ballot-level risk-limiting audits, along the general lines proposed by @calandrinoEtal07. If the system of record and the parallel system agree on the set of winners, a risk-limiting audit of the parallel system transitively confirms the outcome according to the system of record.[^15]
A privacy-preserving audit
==========================
The method we propose here presupposes that CVRs are available, either from the system of record or from a parallel system. It publishes all the data contained in the CVRs in a form that (1) still permits all observers to check the contest outcomes on the assumption that the CVRs are accurate, (2) does not compromise privacy, and (3) enables the CVRs to be checked against the audit trail while minimizing the loss of privacy.
In SOBA, election officials make a cryptographic commitment[^16] to the full set of CVRs by publishing the CVRs separately for each contest, disaggregating the ballots (we call these contest-CVRs or CCVRs in contrast to whole-ballot CVRs), and a shrouded link between each CCVR and the ballot it purports to represent. Splitting the CVRs into CCVRs and obfuscating the identity of the ballot from which each CCVR comes eliminates some of the information required to identify a voter’s ballot style or to use pattern voting to signal the voter’s identity.[^17] This makes the procedure privacy-preserving. But it retains enough information for any observer to check that the apparent outcome agrees with the outcome according to the CCVRs, for each contest. That is, there is a known algorithm (the winner algorithm[^18]) that observers can apply to the published CCVRs to calculate the correct outcome of every contest—provided the CCVRs reflect the ballots (more generally, audit trail) accurately enough. This is part of making the procedure personally verifiable. Loosely speaking, the required level of accuracy depends on the number of CVRs that must have errors for the apparent outcome to be wrong:[^19] The fewer ballots that need to be changed to affect the outcome, the larger the sample generally will need to be to attain a given level of confidence that the apparent outcome is correct.
The CCVRs might fail to be sufficiently accurate because
- At least one CCVR and the ballot it purports to represent do not match because human and machine interpretations of voter intent differ (for instance, because the voter marked the ballot improperly). This is a failure of the generation of CCVRs.
- At least one CCVR does not in fact correspond to any ballot. It is an “orphan.” This is a failure of the mapping between ballots and CCVRs.
- More than one CCVR for the same contest is mapped to the same ballot. It is a “multiple.” This is also a failure of the mapping between ballots and CCVRs.
- There is no CCVR corresponding to some voting opportunity on a ballot.
A failure of the mapping might be the more distressing source of error, since it is a failure on the part of the election official, but we must ensure (statistically) that—together—all sources of error did not combine to cause the outcome to be wrong. SOBA uses a risk-limiting audit to assess statistically whether the winners according to the full audit trail differ from the winners according to the CCVRs, for all contests under audit, taking into account all sources of error. If the outcome according to the CCVRs is incorrect, the audit is very likely to proceed to a full hand count of the audit trail, thereby revealing the correct outcome. This provides $P$-resilience.
To make the risk-limiting audit possible, elections officials are required to publish another file, the [*ballot style file*]{}, which contains ballot identifiers and lists the contests each of those ballots contains. It does not contain the voters’ selections.
The risk-limiting technique we propose is the [*super-simple simultaneous single-ballot risk-limiting audit*]{} [@stark10d]. It is not the most efficient ballot-level audit, but the calculations it requires can be done by hand, increasing transparency. It involves drawing ballots at random with equal probability; some more efficient audits require using different probabilities for different ballots, which is harder to implement and to explain to the public. Moreover, this technique allows a collection of contests to be audited simultaneously using the same sample of ballots. That can reduce the number of randomly selected ballots that must be located, interpreted, and compared with CVRs, decreasing the cost and time required for the audit and thereby increasing transparency.
The following subsections give more technical detail.
Data framework and assumptions
------------------------------
We assume that the audit trail consists of one record per ballot cast. There are $C$ contests we wish to assess. The contests might be simple measures, measures requiring a super-majority, multi-candidate contests, or contests of the form “vote for up to $W$ candidates.”[^20] We refer to records in the audit trail as “ballots.” A ballot may be an actual voter-marked paper ballot, a voter-verifiable paper audit trail (VVPAT), or a suitable electronic record.
There are $N$ ballots in the audit trail that each contain one or more of the $C$ contests. Each ballot can be thought of as a list of pairs, one pair for each contest on that ballot. Each pair identifies a contest and the voter’s selection(s) in that contest, which might be an undervote or a vote for one or more candidates or positions. Examining a ballot by hand reveals all the voter’s selections on that ballot; we assume that there is no ambiguity in interpreting each voter’s intentions from the audit trail.
Before the audit starts, the voting system must report results for each of the $C$ contests. The report for contest $c$ gives $N_c$, the total number of ballots cast in contest $c$ (including undervotes and spoiled ballots), as well as the number of valid votes for each position or candidate in contest $c$. Let $M \equiv N_1 + N_2 + \cdots + N_C$ denote the total number of voting opportunities on the $N$ ballots. We assume that the compliance audit assures us (e.g., through ballot accounting) that the reported values of $N_c$ are accurate, and that the audit trail is trustworthy. In the present work, we do not consider attacks on the audit trail.
There is a published “ballot style file.” Each line in the ballot style file lists a ballot identifier and a list of contests that ballot is supposed to contain. The ballot identifier uniquely identifies a ballot in the audit trail. The identifier could be a number that is printed on a paper ballot or unambiguous instructions for locating the ballot (e.g., the 275th ballot in the 39th deck). There should be $N$ lines in the file, and the $N$ ballot identifiers should be unique. Because the ballot style file is published, individuals can check this for themselves. Moreover, individuals can check whether the number of lines in the ballot style file that list contest $c$ equals $N_c$, the total number of ballots the system reports were cast in contest $c$.
Before the audit starts, the voting system or a parallel system has produced a CVR for each ballot. These are not published as whole-ballot CVRs. Rather, the CVRs are split by contest to make contest-specific CVRs (CCVRs) that contain voters’ selections in only one contest. Each whole-ballot CVR is (supposed to be) split into as many CCVRs as there are contests on the ballot.
The CCVRs for the contests are published in $C$ files, one for each contest. The CCVR file for contest $c$ should contain $N_c$ lines; because this file is published, individuals can check this for themselves. Each line in the CCVR file for contest $c$ lists a voter’s selection and a shrouded version of the identifier of the ballot that the selection is supposed to represent. The order of the lines in each of the $C$ CCVR files should be shuffled (preferably using random permutations) so that whole CVRs cannot be reassembled without knowing secret information.[^21]
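The split-and-shuffle step just described can be sketched in a few lines. In this illustration, `cvrs` maps a ballot identifier to its per-contest selections, and `shroud` stands in for the cryptographic commitment described in the Shrouding section; the toy lambda used below is *not* hiding and is for demonstration only.

```python
import random
from collections import defaultdict

def split_into_ccvr_files(cvrs, shroud):
    """Split whole-ballot CVRs into per-contest CCVR files and shuffle
    each file so whole ballots cannot be reassembled by line position."""
    ccvr_files = defaultdict(list)
    for ballot_id, selections in cvrs.items():
        for contest, selection in selections.items():
            ccvr_files[contest].append((selection, shroud(ballot_id, contest)))
    for lines in ccvr_files.values():
        random.shuffle(lines)  # a random permutation of each contest file
    return dict(ccvr_files)

# Toy data with a trivial (NOT privacy-preserving) shroud function.
cvrs = {"B001": {"mayor": "Alice", "measure1": "yes"},
        "B002": {"mayor": "Bob"}}
files = split_into_ccvr_files(cvrs, shroud=lambda b, c: hash((b, c)))
```

Each contest file ends up with exactly one line per voting opportunity, so observers can tally each contest independently without being able to link lines across contests.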
The public can confirm whether the contest outcomes according to the CCVR files match the voting system’s reported outcomes. If they do not match, there should be a full hand count of any contests with discrepant outcomes. We assume henceforth that the outcomes do match, but we do not assume the exact vote totals according to the CCVR files match the reported vote totals.
The data include one more file that is not published, the [*lookup file*]{}. The lookup file contains $M$ lines, one for each voting opportunity on each ballot. Each line has three entries: a shrouded ballot identifier, the corresponding unshrouded ballot identifier, and a number (“salt”) that is used in computing the shrouded identifier from the unshrouded identifier using a cryptographic commitment function, as described below. (For a review of uses for cryptography in voting, see @adida06.)
The salt on the $j$th line of the file is denoted $u_j$. Each line corresponds to a (ballot, contest) pair: We can think of $u_j$ as being $u_{ic}$, the salt used to shroud the identity of ballot $b_i$ in the CCVR file for contest $c$. The election official will use this file to convince observers that every selection on every ballot corresponds to exactly one entry in a CCVR file, and vice versa.
Shrouding {#sec:shroud}
---------
The method of shrouding ballot identifiers is crucial to the approach. SOBA requires election officials to cryptographically commit to the value of the ballot identifier that goes with each CCVR. A cryptographic commitment ensures that the ballot identifier is secret but indelible: The election official can, in effect, prove to observers that a shrouded identifier corresponds to a unique unshrouded identifier, but nobody can figure out which unshrouded identifier corresponds to a given shrouded identifier without secret information.
The next few paragraphs describe a suggested instantiation of the cryptographic commitment. We assume that ballot identifiers all have the same length. If necessary, this can be achieved by padding identifiers with leading zeros. The commitment function $H()$ must be disclosed publicly and fixed for the duration of the election.
Each commitment represents a claim about a voter’s selection(s) on a given ballot in a given contest. For each set of selections that any voter made in each contest, including undervotes and votes for more than one candidate, the election official will create a set of commitments. Each commitment designates the ballot identifier of a ballot that the election official claims contains that set of selections in that contest. To commit to the ballot identifier $b$, the election official selects a secret “salt” value $u$[^22] and computes the commitment value $y=H(b, u)$. At a later stage, the official can open the commitment by revealing $u$ and $b$: Then anyone can verify that the value $y$ revealed earlier is indeed equal to $H(b, u)$.
Loosely speaking, a commitment function must have two properties, the [*binding property*]{} and the [*hiding property*]{}. The binding property makes it infeasible for the official to find any pair $(b', u') \ne (b, u)$ for which $H(b', u')=H(b, u)$. This provides integrity by helping to ensure that election officials cannot contrive to have more than one CCVR for a given contest claim to come from the same ballot.[^23] The binding property is crucial for $P$-resilience; indeed, the proof of $P$-resilience requires only that the commitment have the binding property and that $\{N_c\}_{c=1}^C$ are known.
The hiding property makes it infeasible for anyone with access only to the shrouded values $H(b, u)$ to learn anything about which ballot is involved in each commitment. This provides privacy by helping to ensure that observers cannot reassemble whole-ballot CVRs from the CCVR files without extra information. If observers could reassemble whole-ballot CVRs, that would open a channel of communication (pattern voting) for coercion or vote selling. Ballot identifier $b$ may appear in multiple commitments since a separate commitment is generated for each candidate selection on each ballot. The hiding property ensures that those collections of commitments do not together reveal the value of any $b$. This is crucial for the method to be privacy-preserving.
An HMAC (as described in Federal Information Processing Standard Publication 198) with a secure hash function such as SHA-256 (described in Federal Information Processing Standard Publication 180-2) can be used to instantiate the commitment function. However, since each of the parameters of the commitment function is of fixed length, it is more efficient to simply use a cryptographic hash function such as SHA-256 directly. The length of the ballot identifiers does not matter, as long as all ballot identifiers in the election have the same length. We recommend that all salt values have equal length, of at least 128 bits. Our results do not depend on the particular commitment function chosen, as long as it has both the binding and hiding properties.[^24]
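As a minimal sketch of such a commitment, using SHA-256 directly and 128-bit salts as recommended above, an official can compute $y = H(b, u)$ and later open it by revealing $b$ and $u$. The exact byte encoding and any domain separation below are illustrative choices, not part of the specification.

```python
import hashlib
import secrets

SALT_BYTES = 16  # 128-bit salts, per the recommendation above

def commit(ballot_id, salt=None):
    """Commit to a (fixed-length) ballot identifier: y = SHA-256(b || u).
    The salt u stays secret until the official opens the commitment."""
    if salt is None:
        salt = secrets.token_bytes(SALT_BYTES)
    y = hashlib.sha256(ballot_id.encode() + salt).hexdigest()
    return y, salt

def open_commitment(y, ballot_id, salt):
    """Anyone can recompute H(b, u) and compare it to the published y."""
    return hashlib.sha256(ballot_id.encode() + salt).hexdigest() == y

y, u = commit("0417")  # identifiers zero-padded to equal length
```

Binding rests on the collision resistance of the hash; hiding rests on the unpredictability of the salt, which is why each (ballot, contest) pair gets a fresh salt.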
We now describe how to perform a risk-limiting audit that simultaneously checks the accuracy of the CCVRs, whether each CCVR entry comes from exactly one ballot, and whether every voting opportunity on every ballot is reflected in the correct CCVR file.
The audit
---------
The first three steps check the consistency of the CCVRs with the reported results and the uniqueness of the shrouded identifiers.
1. Verify that, for each contest $c$, there are $N_c$ entries in the CCVR file for contest $c$.
2. Verify that, for each contest $c$, the CCVR file shows the same outcome as the reported outcome.
3. Verify that the $M = N_1 + \cdots + N_C$ shrouded ballot identifiers in all $C$ CCVR files are unique.
If step 2 shows a different outcome for one or more contests, those contests (at least) should be completely hand counted.
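The first three verification steps lend themselves to a direct sketch. The data layout below—`ccvr_files` mapping each contest to (selection, shrouded id) pairs, a `reported` table of $(N_c, \text{winner})$, and a plurality `winner` function—is illustrative only; SOBA does not prescribe these structures.

```python
def preaudit_checks(ccvr_files, reported, winner):
    """Steps 1-3: per-contest CCVR counts match the reported N_c, each
    CCVR file yields the reported outcome, and the shrouded ballot
    identifiers are unique across all C files."""
    problems = []
    for c, lines in ccvr_files.items():
        n_c, reported_winner = reported[c]
        if len(lines) != n_c:                                     # step 1
            problems.append(f"{c}: {len(lines)} CCVRs, expected {n_c}")
        if winner([sel for sel, _ in lines]) != reported_winner:  # step 2
            problems.append(f"{c}: outcome differs; hand count this contest")
    shrouded = [sid for lines in ccvr_files.values() for _, sid in lines]
    if len(shrouded) != len(set(shrouded)):                       # step 3
        problems.append("shrouded identifiers are not unique")
    return problems

# Toy data: one two-candidate plurality contest, consistent with reports.
plurality = lambda sels: max(set(sels), key=sels.count)
files = {"mayor": [("Alice", "x9"), ("Alice", "k2"), ("Bob", "q7")]}
issues = preaudit_checks(files, {"mayor": (3, "Alice")}, plurality)
```

Because the CCVR files are public, any observer can run checks like these independently.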
Steps 4 and 5 check the logical consistency of the ballot style file with the reported results.
4. Verify that, for each contest $c$, there are $N_c$ entries in the ballot style file that list the contest.
5. Verify that the ballot identifiers in the ballot style file are unique.
If steps 1, 3, 4, or 5 fail, there has been an error or misrepresentation. The election official needs to correct all such problems before the audit can start.
The remaining steps comprise the statistical portion of the risk-limiting audit, which checks whether the CCVRs and the mapping from ballots to CCVRs are accurate enough to determine the correct winners.
6. Set the audit parameters:
    a. Choose the risk limit $\alpha$.
    b. Choose the maximum number of samples $D$ to draw; if there is not strong evidence that the outcomes are correct after $D$ draws, the entire audit trail will be counted by hand.
    c. Choose the “error bound inflator” $\gamma > 1$ and the error tolerance $\lambda \in (0, 1)$ for the super-simple simultaneous method [@stark10d] ($\gamma = 1.01$ and $\lambda = 0.2$ are reasonable values).
    d. Calculate $$\rho = \frac{-\log \alpha}{\frac{1}{2\gamma} + \lambda \log(1 - \frac{1}{2\gamma})}.$$
    e. For each of the $C$ contests, calculate the margin of victory $m_c$ in votes from the CCVRs for contest $c$.[^25]
    f. Calculate the [*diluted margin*]{} $\mu$: the smallest value of $m_c/N$ among the $C$ contests.[^26]
    g. Calculate the initial sample size $n_0 = \lceil \rho/\mu \rceil$.
    h. Select a seed $s$ for a pseudo-random number generator (PRNG).[^27] Observers and election officials could contribute input values to $s$ or $s$ could be generated by an observable, mechanical source of randomness such as rolls of a 10-sided die. The seed should be selected only once.
7. Draw the initial sample by finding $n_0$ pseudo-random numbers between $1$ and $N$ and audit the corresponding ballots:
    a. Use the PRNG and the seed $s$ to generate $n_0$ pseudo-random numbers, $r_1, r_2, \ldots, r_{n_0}$.
    b. Let $\ell_j \equiv \lceil N r_j \rceil$, $j = 1, \ldots, n_0$. This list might contain repeated values. If so, the tests below only need to be performed once for each value, but the results count as many times as the value occurs in the list.[^28]
    c. Find rows $\ell_1, \ldots, \ell_{n_0}$ in the ballot style file.
    d. Retrieve the ballots $b_{\ell_j}$ in the audit trail identified by those rows in the ballot style file. If there is no ballot with identifier $b_{\ell_j}$, pretend in step 7(g) below that the ballot showed a vote for the runner-up in every contest listed in that row of the ballot style file.
    e. Determine whether each ballot shows the same contests as its corresponding entry in the ballot style file. If there are any contests on the ballot that are not in the ballot style file entry, pretend in step 7(g) below that the CCVR for that (ballot, contest) pair showed a vote for the apparent winner of the contest. If there are any contests in the ballot style file entry that are not on the ballot, pretend in step 7(g) below that the ballot showed a vote for the apparent runner-up for that contest.
    f. For each ballot $b_{\ell_j}$ in the sample, the election official reveals the value of $u_{\ell_j c}$ for each contest $c$ on the ballot.
    g. For each ballot in the sample, for each contest on that ballot, observers calculate $H(b_{\ell_j}, u_{\ell_jc})$ and find the entry in the CCVR file for contest $c$ that has that shrouded identifier. If the shrouded identifier is not in the CCVR file, pretend that the CCVR file showed that the voter had selected the apparent winner of contest $c$. Compare the voter’s selection(s) according to the CCVR file to the voter’s selection(s) according to a human reading of ballot $b_{\ell_j}$. Find $e_{\ell_j}$, the largest number of votes by which any CCVR for ballot $b_{\ell_j}$ overstated the margin between any (winner, loser) pair in any contest on ballot $b_{\ell_j}$. This number will be between $-2$ and $+2$.
8. If no ballot in the sample has $e_{\ell_j} = 2$ and no more than $\lambda \mu n_0$ have $e_{\ell_j} = 1$, the audit stops. (In this calculation, the value of $e_{\ell_j}$ should be counted as many times as $\ell_j$ occurs in the sample.)
9. Otherwise, calculate the Kaplan-Markov $P$-value, $P_{KM}$, according to equation (9) in @stark09b [@stark09d; @stark10d].[^29] If $P_{KM} \le \alpha$, the audit stops. If $P_{KM} > \alpha$, the sample is expanded: another random number $r_j$ is generated and steps 7(c)–(g) are repeated. The value of $P_{KM}$ is updated to include the overstatement errors found in the new draw.[^30] This continues until either $P_{KM} \le \alpha$ or there have been $D$ draws. In the latter case, all remaining ballots are counted by hand, revealing the true outcome.
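The draw-and-escalate loop just described can be sketched in code. The following is a minimal illustration, not the authors' implementation: `read_overstatement` is a hypothetical stand-in for the human work of retrieving each sampled ballot and computing its maximal relative overstatement, the preliminary test on the counts of one- and two-vote overstatements is omitted, and `U`, `gamma`, and `V` are the quantities of those names defined earlier in the paper, treated here as given inputs. The $P$-value follows the product formula in footnote 29.

```python
import math
import random

def km_pvalue(overstatements, U, gamma, V):
    # Kaplan-Markov P-value: product over sampled ballots of
    # (1 - 1/U) / (1 - eps_j / (2*gamma/V)), as in footnote 29.
    p = 1.0
    for eps in overstatements:
        p *= (1.0 - 1.0 / U) / (1.0 - eps / (2.0 * gamma / V))
    return p

def run_audit(seed, n0, N, D, alpha, U, gamma, V, read_overstatement):
    # Draw rows with replacement; after the initial n0 draws, escalate one
    # draw at a time until P_KM <= alpha or D draws have been made.
    rng = random.Random(seed)                   # stand-in for the agreed PRNG and seed s
    eps = []
    for j in range(D):
        row = math.ceil(N * rng.random()) or 1  # ell_j = ceil(N * r_j)
        eps.append(read_overstatement(row))     # human reading of the sampled ballot
        if j + 1 >= n0 and km_pvalue(eps, U, gamma, V) <= alpha:
            return "audit stops"
    return "full hand count"                    # D draws exhausted: count everything
```

For instance, if every sampled ballot shows no overstatement and $U=10$, each draw multiplies $P_{KM}$ by $0.9$, so the audit stops as soon as $0.9^n \le \alpha$.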
The next section establishes that this procedure in fact gives a risk-limiting audit.
Proof of the risk-limiting property {#sec:proof}
-----------------------------------
If the ballot style file is correct and entries in the CCVR files are mapped properly to voting opportunities on actual ballots, the only potential source of error is that CCVR entries do not accurately reflect the voters’ selections according to a human reading of the ballot. If that is the case, this is an “ordinary” risk-limiting audit, and the proof in @stark10d that the super-simple simultaneous method is risk-limiting applies directly.
Suppose therefore that the ballot style file or the mapping between ballots and CCVRs is faulty. Recall that the super-simple simultaneous method assumes that no ballot can overstate any margin by more than $2\gamma$ votes, where $\gamma > 1$. There are seven cases to consider.
1. The ballot style file has more than one entry that corresponds to the same actual ballot, or more than one actual ballot corresponds to the same entry in the ballot style file. These faults are precluded by the uniqueness of the ballot identifiers and of the recipes for locating the actual ballot with each identifier.
2. More than one ballot identifier corresponds to the same shrouded entry (for different values of $u$). This is precluded by the binding property of $H$.
3. The ballot style file contains identifiers that do not correspond to actual ballots, or claims that a ballot contains a contest that it does not actually contain. The biggest effect this could have on an apparent contest outcome is if the ballot that entry is supposed to match showed a vote for the runner-up in every missing contest, which is no greater than a two-vote change to any margin. Because the audit samples entries of the ballot style file with equal probability, this kind of error in an entry is just as likely to be revealed as any other. If such a ballot style file entry is selected for audit, steps 7(d) and 7(e) treat it this worst-case way.
4. The ballot style file claims that a ballot does not contain a contest that it does contain. The biggest effect this could have on an apparent contest outcome is if the CCVR for that contest showed a vote for the apparent winner, which cannot change the margin by more than two votes, so the error-bound assumptions are satisfied. Because the audit samples entries of the ballot style file with equal probability, this kind of error in an entry is just as likely to be revealed as any other. If such a ballot style file entry is selected for audit, step 7(e) treats it this worst-case way.
5. There are ballots whose identifiers do not appear in the ballot style file. Since there are the same number of ballots as entries in the ballot style file and the ballot identifiers in the ballot style file are unique, there must be ballot identifiers in the ballot style file that do not match any ballot. Hence, case (3) holds.
6. There are CCVRs for which the shrouded ballot identifier is not the identifier of any ballot. If the shrouded identifier matches an identifier in the ballot style file, we are in case (3). Suppose therefore that the shrouded identifier does not match any in the ballot style file. Suppose this happens for contest $c$. The preliminary checks show that the ballot style file has exactly $N_c$ entries for contest $c$ and that there are exactly $N_c$ entries in the CCVR file for contest $c$. Therefore, if there is such a CCVR, one of the ballot style file entries that lists contest $c$ has an identifier that does not occur in shrouded form in the CCVR file for that contest. The largest effect this could have on contest $c$ is if the “substituted” CCVR entry reported a vote for the apparent winner; this cannot overstate the margin by more than two votes, so the audit’s error-bound assumption still holds. Because the audit samples entries of the ballot style file with equal probability, this kind of error in a ballot style file entry is just as likely to be revealed as any other. If such a ballot style file entry is selected for audit, step 7(e) treats it this worst-case way.
7. The same ballot identifier appears in shrouded form more than once in a single CCVR file. As in the previous case, we know there are $N_c$ entries in the CCVR file for contest $c$ and $N_c$ entries in the ballot style file that include contest $c$; moreover, the identifiers in the ballot style file are unique. Hence, there must be at least one entry in the ballot style file that lists contest $c$ for which the ballot identifier does not appear in shrouded form in the CCVR file. We are therefore in case (6).
Discussion {#sec:discussion}
==========
Others have proposed election verification methods that involve a cryptographic commitment by elections officials to a mapping between ballots and CVRs \[E.K. Rescorla, personal communication, 2011; R.L. Rivest, personal communication, 2009; D. Wallach, personal communication, 2010; see also @adida06\]. However, we believe SOBA is the first method that requires only one commitment and that uses a risk-limiting audit to check whether the mapping is accurate enough to determine the correct winner.
We have said little about the requirement for a compliance audit. In part, this is a definitional issue: even if the audit trail is known to have been compromised, it is our understanding that in many states a full hand count of the audit trail would still determine the “correct” outcome as a matter of law. Hence, an audit to assess whether the audit trail was protected and preserved adequately for it to reflect the outcome according to how the voters cast their ballots is legally superfluous. We consider this a shortcoming of current audit and recount laws. Moreover, we doubt that any system can be $P$-resilient unless the election and the data it generates satisfy particular conditions. For instance, risk-limiting audits generally assume that the total number of ballots cast in each contest is known. Such conditions should be checked.
We would advocate carrying out a compliance audit to assess whether the procedures as followed in the election give reasonable assurance that the audit trail is trustworthy—sufficiently accurate to reflect the outcome according to how voters cast their ballots—and to assess whether any other preconditions of the risk-limiting audit hold. The compliance audit should evaluate whether there is strong evidence that the chain of custody of the ballots is intact, or whether it is plausible that ballots were lost, “found,” altered, or substituted. The compliance audit should confirm the values of $\{N_c\}$ by ballot accounting: confirming that the number of ballots printed equals the number returned voted, unvoted, and spoiled, for each ballot type.
If the election passes the compliance audit, a risk-limiting audit can then assess the accuracy of the reported result and would have a large chance of correcting the apparent outcome if it is wrong (by examining the full audit trail). But if the election fails the compliance audit—that is, if we lack strong evidence that the audit trail is reliable and that the preconditions for the risk-limiting audit are met—a $P$-resilient election framework should not declare any outcome at all.
For the method to be $P$-resilient, $H$ must be binding and we must know $\{N_c\}$. Because the election official discloses $H$ and the (fixed) length of the ballot identifiers, we can determine whether $H$ is binding. For the method to be privacy-preserving, $H$ must have the hiding property, which will depend on how the salts are chosen and how the CCVR files are organized. If the salts can be discovered, inferred, or guessed, or if observers have another way to reassemble whole-ballot CVRs from the CCVRs (for instance, if the CCVRs are in the same ballot order across contests), voter privacy can be compromised.
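For concreteness, a shrouded identifier of this kind can be instantiated with a salted cryptographic hash. The sketch below is our own illustration, not part of SOBA's specification: SHA-256, the 32-byte salt length, and the ballot identifier shown are all assumptions. The same $(b,u)$ always yields the same digest and finding a different pair with the same digest is infeasible (binding); without $u$, the digest reveals nothing useful about $b$ (hiding), provided the salts are unguessable and of fixed length.

```python
import hashlib
import secrets

def shroud(ballot_id: str, salt: bytes) -> str:
    # H(b, u): commit to ballot identifier b using salt u.
    # Binding: finding (b', u') != (b, u) with the same digest is infeasible.
    # Hiding: without u, the digest reveals nothing useful about b.
    return hashlib.sha256(ballot_id.encode() + salt).hexdigest()

# Each (ballot, contest) pair gets its own fixed-length unguessable salt, so the
# same ballot identifier shrouds differently in different CCVR files.
salt = secrets.token_bytes(32)
commitment = shroud("B-000042", salt)           # "B-000042" is a made-up identifier
assert commitment == shroud("B-000042", salt)   # reproducible once the salt is revealed
```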
Conclusions
===========
SOBA makes possible a personally verifiable privacy-preserving $P$-resilient canvass framework. It allows individuals to obtain strong firsthand[^31] evidence that apparent election outcomes either are correct in the first place, or are corrected by a risk-limiting audit before becoming final, without unnecessary compromises to privacy. After the procedure is complete, either all the outcomes are correct or an event with probability less than $1-P$ has occurred. The published data structures allow the public to check the consistency of the apparent outcomes but do not allow whole-ballot cast vote records to be reconstructed, thereby preserving privacy. When all the apparent contest outcomes are correct, gathering the evidence that the outcomes are right typically will require exposing only a small fraction of ballots to observers, protecting privacy. But the data structures and auditing protocol ensure that if the apparent outcome of one or more of the contests is wrong, there is a large chance of a full hand count of the audit trail to set the record straight.
Acknowledgments
===============
This work was supported in part by NSF Grant CNS-05243 (ACCURATE). We are grateful to Poorvi Vora for shepherding the paper and to anonymous referees for helpful comments. We are grateful to Joseph Lorenzo Hall, David Jefferson, Neal McBurnett, Dan Reardon, Ronald L. Rivest, and Emily Shen for helpful conversations and comments on earlier drafts.
[^1]: The 2000 presidential election may have been decided by differences between the machine interpretation of certain Florida optical scan ballots and the likely human interpretation [@keating02].
[^2]: False alarms are possible. An analogy is that if a tamper-evident seal shows that a package has been opened, it does not follow that the package contents have been altered.
[^3]: “Failure” means failure to find strong evidence that such procedures were followed, rather than finding evidence that such procedures were not followed.
[^4]: For instance, under New York law, each county determines independently whether its audit in a particular contest must be expanded. This provision means that a correct outcome might be changed to an incorrect outcome even if the conduct of the audit is formally flawless.
[^5]: As discussed in section \[sec:discussion\], to be $P$-resilient, a canvass framework should refrain from giving any outcome at all if some preconditions are not met.
[^6]: The probability comes from the overall voting system, in our case from the fact that the audit relies on a random sample. The probability does not come from treating votes, voters, or election outcomes as random, for instance.
[^7]: There also needs to be proof that the images are sufficiently complete and accurate to determine the correct outcome.
[^8]: Verification methods like Humboldt County Election Transparency Project (see below) involve publishing digital images of all the ballots.
[^9]: There are arguments that images of ballots should be published anyway—that transparency is more important than privacy. In jurisdictions that permit voting by mail, there is an opportunity to confirm how someone votes for the purpose of vote-selling or coercion; indeed, someone could fill out another’s ballot. Whether publishing images of ballots would change the rate of vote-selling or coercion substantially is the subject of some debate.
[^10]: In the 2002 FEC Voting System Standards [@FEC2002b], these were called “ballot images”; however, the term CVR has been used in more recent EAC Voluntary Voting System Guidelines [@EAC2005b]. We prefer the latter term because it does not suggest an actual image but rather a record of the system’s interpretation of the ballot. And what matters is the system’s interpretation of the ballot as a set of votes.
[^11]: One could have a risk-limiting audit that, if it had not terminated after some fraction of the ballots had been examined, triggered a hand count of the remaining ballots, but did not allow the public to observe that hand count. But then why should the public trust that the hand count was accurate?
[^12]: If an identifier is printed on paper ballots, the printing should occur after the voter casts his or her vote and the ballots are co-mingled. If the identifier is printed before the voter casts his or her vote, privacy could be compromised.
[^13]: Optical-scan ballots as well as DRE paper audit trails can have identifiers. For instance, in Boulder County, Colorado, the Hart Ballot Now system is configured to print unique identifiers and bar codes on each ballot. In Orange County, California, ballots for the Hart Ballot Now system have non-unique identifiers and bar codes (numbered 1–2500, then repeating).
[^14]: Clear Ballot Group is adding support for risk-limiting audits to their software \[L. Moore, personal communication, 2011\].
[^15]: This is true as long as the systems agree on the set of winners, even if they disagree about vote totals or margins. For instance, suppose candidate A defeats candidate B by one percentage point in the original returns, and by ten points according to the parallel system. Such a large discrepancy might justify close scrutiny, but a risk-limiting audit of the results of the parallel system would still provide strong evidence that A defeated B, or would lead to a full hand count to set the record straight.
[^16]: See <http://en.wikipedia.org/wiki/Commitment_scheme>. Cryptographic commitments have two important properties, the binding property and the hiding property, discussed in section \[sec:shroud\].
[^17]: Of course, if there is a contest in which few voters are eligible to vote, eligibility itself is a signal.
[^18]: For first-past-the-post contests, the winner algorithm just finds who has the most votes. Other voting schemes, such as instant-runoff voting (IRV) or ranked choice voting (RCV), have more complicated winner algorithms.
[^19]: In plurality voting, this is the margin or the set of margins between each (winner, loser) pair. Defining the margins for IRV and calculating them for a given set of reported results is not simple. See [@cary11; @magrinoEtal11].
[^20]: We do not specifically consider instant-runoff voting or ranked-choice voting here. Risk-limiting methods can be extended to such voting methods, but the details are complex.
[^21]: For example, each CCVR file could be sorted in order of the shrouded ballot identifier.
[^22]: To protect voter privacy, it must be infeasible to guess the salts: Each salt should contain many random or pseudo-random bits. For the commitment to be effective, the length of all salt values should be fixed and equal. See section \[sec:discussion\].
[^23]: See step 7 of the proof in section \[sec:proof\].
[^24]: @menezesEtal96 offers a thorough treatment of hash functions and their use for commitments in applications such as digital signatures.
[^25]: This would be replaced by a different calculation for IRV or RCV contests. See, e.g., @magrinoEtal11 [@cary11].
[^26]: The diluted margin controls the sample size. If contest $c$ has the smallest value of $m_c/N$ and $N_c$ is rather smaller than $N$, it can be more efficient to audit contest $c$ separately rather than auditing all $C$ contests simultaneously.
[^27]: The code for the PRNG algorithm should be published so that it can be checked and so that, given the seed $s$, observers can reproduce the sequence of pseudo-random numbers. The PRNG should produce numbers that are statistically indistinguishable from independent random numbers uniformly distributed between 0 and 1 (i.e., have large $p$-values) for sample sizes up to millions for a reasonable battery of tests of randomness, such as the Diehard tests.
[^28]: The auditing method relies on sampling with replacement to limit the risk.
[^29]: We consider only plurality voting here: IRV is more complicated. For each contest $c$, let ${{\mathcal{W}}}_c$ be the indices of the apparent winners of the contest and let ${{\mathcal{L}}}_c$ be the indices of the apparent losers of the contest. If $w \in {{\mathcal{W}}}_c$ and $x \in {{\mathcal{L}}}_c$, let $V_{wx}$ be the margin in votes between candidate $w$ and candidate $x$ according to the CCVR file for contest $c$. For each candidate $k$ on ballot $\ell$, let $v_{\ell k}$ denote the number of votes for candidate $k$ on ballot $\ell$ according to the CCVR file and let $a_{\ell k}$ denote the number of votes on ballot $\ell$ for candidate $k$ according to a human reading of ballot $\ell$. Let $$\epsilon_\ell \equiv \max_c \max_{w \in {{\mathcal{W}}}_c, x \in {{\mathcal{L}}}_c} (v_{\ell w}-a_{\ell w} - v_{\ell x} + a_{\ell x})/V_{wx}.$$ Then $$P_{KM} \equiv \prod_{j=1}^n \frac{1 - 1/U}{1 - \frac{\epsilon_{\ell_j}}{2 \gamma/V}}.$$
[^30]: Overstatements are calculated as step 7 above, including, in particular, steps 7(e) and 7(g), which say how to treat failures to find ballots or contests.
[^31]: For multi-jurisdictional contests, it might not be possible to conduct an audit in a single place and time. If the audit step takes place in pieces in separate jurisdictions simultaneously, firsthand knowledge might be impossible; one might need to trust observers in other locations.
---
author:
- Masayo Fujimura and Masahiko Taniguchi
date: preprint
title: Stratification and coordinate systems for the moduli space of rational functions
---
Introduction
============
Let ${\rm Rat}_d$ be the set of all rational functions of degree $d>1$, and ${\rm M}_d$ the set of all Möbius conjugacy classes of elements in $ {\rm Rat}_d $, which is called the [*moduli space of rational functions of degree*]{} $d$.
Here it is a fundamental problem to give a good system of parameters on ${\rm M}_d$. McMullen showed in [@Mc] that, outside the Lattès loci, every multiplier spectrum at periodic points corresponds to a finite number of points in ${\rm M}_d$. This result was epoch-making, and much research has since been done on systems of multipliers, or indices, at periodic points. Among other things, the following example is well-known.
\[ex:milnor\] [ When $d=2$, there are $3$ fixed points counted with multiplicity, whose indices satisfy a single simple relation, called Fatou’s index formula. Hence, we can consider a map $\Phi_2:{\rm M}_2\to {{\mathbb C}}^2$ induced by two of the three fundamental symmetric functions of the multipliers at fixed points. This map $\Phi_2$ is bijective, and hence gives a coordinate system for ${\rm M}_2$.]{}
In the case of polynomials, the set of multipliers, or indices, at fixed points gives an interesting system of parameters on the moduli space of polynomials. For the details, see [@F] and [@FT].
Clearly, the multipliers at fixed points alone are not enough to parametrize the moduli space ${\rm M}_d$ when $d>2$, but it seems difficult to find a suitable set of multipliers at periodic points that yields a good system of global parameters. On the other hand, in the case of polynomials, the set of monic centered ones is often used as a virtual set of representatives of points in the moduli space ${\rm MPoly}_d$ of polynomials of degree $d$, and it is well-known that their coefficients give a useful set of parameters on ${\rm MPoly}_d$, which in particular induces the complex orbifold structure of ${\rm MPoly}_d$. In §2, we give a family of rational functions whose coefficients give a good system of parameters on ${\rm M}_d$, in a sense similar to the case of the family of monic centered polynomials.
In §3, we investigate the correspondence between these coefficient parameters and the union of the set of indices and the locations of fixed points, which gives a candidate for an important subsystem of parameters on ${\rm M}_d$. Here, the overlap type of fixed points naturally gives a stratification of ${\rm M}_d$, and we introduce a natural system of coordinates on each stratum. As a byproduct, we give an affirmative answer to a conjecture of Milnor proposed in the book [@M].
A normalized family of rational functions
=========================================
A general form of a rational function of degree $d$ is $$\frac{P(z)}{Q(z)}$$ with polynomials $P(z)$ and $Q(z)$ of degree at most $d$, where $P(z)$ and $Q(z)$ have no common non-constant factors and at least one of them has degree exactly $d$. To consider the moduli space ${\rm M}_d$, we may assume without loss of generality that $Q(z)$ is of degree $d$, and that the resultant ${\rm Resul}(P,Q)$ of $P(z)$ and $Q(z)$ does not vanish. It also imposes no restriction to assume that $Q(z)$ is monic. We call a rational function satisfying the above conditions a [*canonical function*]{}.
[ The [*canonical family*]{} $C_d$ of rational functions of degree $d$ is defined as the totality of canonical functions of degree $d$ as above: $$\left\{R(z) = \frac{P(z)}{Q(z)} \in {\rm Rat}_d \Biggm| {\rm deg}\, Q = d, \
{\rm Resul}(P,Q)\not = 0,
\mbox{ $Q$ is monic}\right\}.$$ Moreover, writing $$P(z) = a_dz^d + \cdots + a_0, \qquad
Q(z) = z^d + b_{d-1}z^{d-1} +\cdots + b_0,$$ we call the vector $(a_d, \cdots, a_0, b_{d-1}, \cdots, b_0)$ the system of [*coefficient parameters*]{} for $C_d$. ]{}
Every point in ${\rm M}_d$ contains an element of $C_d$ as a representative. On the other hand, since ${\rm M}_d$ is $(2d-2)$-dimensional while the dimension of $C_d$ is $2d+1$, we may impose three normalization conditions on elements of $C_d$. Here we impose $$a_0 = 0, \quad b_1 = -1, \mbox{ and } \ b_0 = 1.$$ We call a rational function in $C_d$ satisfying these conditions a [*normalized function*]{}.
[ We call the family consisting of all normalized functions in $C_d$ the [*normalized family*]{} of degree $d$, and denote it by $N_d$. ]{}
More explicitly, $$N_d= \left\{ \frac{a_dz^d+ \cdots + a_1z}
{z^d+b_{d-1}z^{d-1}+ \cdots + b_2z^2-z+1} \in C_d \right\},$$ and we call the vector $(a_d,\cdots, a_1,b_{d-1},\cdots,b_2)$ the system of [*coefficient parameters*]{} for $N_d$. Here, we can show that $N_d$ is an ample family of rational functions for every $d$.
When $d=2$, the natural projection of $N_2$ to ${\rm M}_2$ is surjective. To see this, it suffices to show that every possible set of multipliers $\{m_1,m_2,m_3\}$ at fixed points corresponds to a rational function in $N_2$ (cf. Example \[ex:milnor\]).
First, if the set is $\{1,1,1\}$, then a corresponding rational function in $N_2$ is uniquely determined (cf. Example \[ex:overlap\]) and is $$R(z)=\frac{-z^2+z}{z^2-z+1}.$$ If the set is $\{1,1,m\}$ with $m\not=1$, then a corresponding rational function is $$R(z)=\frac{z(mz+p)}{p(z^2-z+1)}$$ with a solution $p$ of $ p^2+(m+1)p+m^2=0. $
Next, in the remaining cases, the set $ \{m_1,m_2,m_3\} $ of multipliers satisfies $m_j\not=1$ $(j=1,2,3)$ and Fatou’s index formula $$\frac{1}{1-m_1}+\frac{1}{1-m_2}+\frac{1}{1-m_3}=1.$$ Here, if the set is $\{0,0,2\}$, we can see that a corresponding rational function is $$R(z)=\frac{(3/2)z^2}{z^2-z+1}.$$ And otherwise, we can choose $m$ and $m'$ among $\{m_1,m_2,m_3\}$ so that $$m'\not\in \{0, \pm i/\sqrt{3}\}, \
mm'-1\neq0 ,\ \mbox{ and } \ m+m'-2 \neq 0,$$ which are assumed to be $m_1$ and $m_2$, respectively. Then the equation $$(-m_1^2+3m_1-3)p^2+(2m_2m_1-3m_2-1)p-m_2^2=0$$ has a non-zero solution $p$. With this $p$, we see that a corresponding rational function is $$R(z)=\frac{-\bigl((m_1-2)p-m_2\bigr)
\bigl((m_1p+1)z+(m_1^2-2m_1)p-m_2m_1\bigr)z}
{p\bigl((m_1-1)p-m_2+1\bigr)(z^2-z+1)}.$$ Here, $ (m_1-1)p-m_2+1\not=0 $ from the assumption, and we conclude the assertion when $d=2$.
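The constructions above are easy to check numerically. The following pure-Python sketch (our own verification, not part of the paper) treats the $\{1,1,m\}$ case with $m=2$: it confirms that $R(z)=z(mz+p)/\bigl(p(z^2-z+1)\bigr)$, with $p$ a root of $p^2+(m+1)p+m^2=0$, has a double fixed point at $0$ with multiplier $1$ and a simple fixed point with multiplier $m$.

```python
import cmath

m = 2.0
# p solves p^2 + (m+1)p + m^2 = 0 (here: p^2 + 3p + 4 = 0, complex roots).
p = (-(m + 1) + cmath.sqrt((m + 1) ** 2 - 4 * m ** 2)) / 2

def R(z):
    return z * (m * z + p) / (p * (z * z - z + 1))

def Rprime(z):
    # derivative of R by the quotient rule
    num, den = z * (m * z + p), p * (z * z - z + 1)
    dnum, dden = 2 * m * z + p, p * (2 * z - 1)
    return (dnum * den - num * dden) / den ** 2

z0 = (p + m) / p                   # the simple fixed point away from 0
assert abs(R(z0) - z0) < 1e-9      # z0 is indeed fixed
assert abs(Rprime(0) - 1) < 1e-9   # multiplier 1 at the double fixed point 0
assert abs(Rprime(z0) - m) < 1e-9  # multiplier m at z0, so the set is {1, 1, m}
```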
Note that, in terms of the fundamental symmetric functions $$\sigma_1=m_1+m_2+m_3,\ \ \sigma_2=m_1m_2+m_1m_3+m_2m_3, \quad
\mbox{and}\quad
\sigma_3=m_1 m_2 m_3,$$ the natural projection of $ N_2\cong \{(a_2,a_1)\mid a_2^2+a_1a_2+a_1^2\not=0\}$ to ${\rm M}_2$ is given by $$\begin{aligned}
\sigma_1 &=\frac{2a_2^2+a_1^2a_2+a_1^3-2a_1^2+3a_1}
{a_2^2+a_1a_2+a_1^2},\\
\sigma_2 &= \frac{-(a_1^2-2a_1)a_2^2+(a_1-2)a_2-2a_1^3+4a_1^2-4a_1+3}
{a_2^2+a_1a_2+a_1^2},\\
\sigma_3 & =\sigma_1-2.\end{aligned}$$
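These expressions can be tested numerically at a sample parameter value. The sketch below (our own check, using only the standard library) picks $(a_2,a_1)=(2,3)$, computes the three fixed points of $R(z)=(a_2z^2+a_1z)/(z^2-z+1)$ and their multipliers directly, and compares the fundamental symmetric functions with the formulas above, including the last relation $\sigma_3=\sigma_1-2$.

```python
import cmath

a2, a1 = 2.0, 3.0                  # sample point with a2**2 + a1*a2 + a1**2 != 0
# Fixed points of R(z) = (a2 z^2 + a1 z)/(z^2 - z + 1): z = 0 together with the
# roots of z^2 - (1 + a2) z + (1 - a1) = 0, obtained from z*Q(z) - P(z) = 0.
disc = cmath.sqrt((1 + a2) ** 2 - 4 * (1 - a1))
fixed = [0.0, ((1 + a2) + disc) / 2, ((1 + a2) - disc) / 2]

def multiplier(z):
    # R'(z) by the quotient rule
    num, den = a2 * z * z + a1 * z, z * z - z + 1
    dnum, dden = 2 * a2 * z + a1, 2 * z - 1
    return (dnum * den - num * dden) / den ** 2

m1, m2, m3 = (multiplier(z) for z in fixed)
s1, s2, s3 = m1 + m2 + m3, m1 * m2 + m1 * m3 + m2 * m3, m1 * m2 * m3
d = a2 ** 2 + a1 * a2 + a1 ** 2
assert abs(s1 - (2 * a2**2 + a1**2 * a2 + a1**3 - 2 * a1**2 + 3 * a1) / d) < 1e-9
assert abs(s2 - (-(a1**2 - 2 * a1) * a2**2 + (a1 - 2) * a2
                 - 2 * a1**3 + 4 * a1**2 - 4 * a1 + 3) / d) < 1e-9
assert abs(s3 - (s1 - 2)) < 1e-9   # sigma_3 = sigma_1 - 2
```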
In general, we obtain the following.
For every $d\geq 2$, the natural projection of $N_d$ to ${\rm M}_d$ is surjective.
The assertion for the case $d=2$ is shown in the above example. When $d=3$, we can show the assertion by direct calculations using a symbolic and algebraic computation system; the details are contained in §4 for the readers’ convenience. So, we assume that $d\geq 4$ in the remainder of the proof.
Let $x$ be a point of ${\rm M}_d$ and $R(z)$ a rational function of degree $d$ contained in the Möbius conjugacy class $x$. Then we may assume that $R(z)$ is canonical and $R(0)=0$, by taking a Möbius conjugate of $R(z)$ if necessary, which implies in particular that $$a_0 =0 \quad \mbox{and}\quad b_0\not= 0.$$
Next, we take the conjugate of $R(z)$ by a translation $L(z) = z+ \alpha$. Then we have $$L^{-1}\circ R\circ L(z) =
\frac{\left(a_d(z+\alpha)^d+ \cdots +a_0 \right)
- \alpha \left( (z+\alpha)^d+ \cdots +b_0 \right) }
{(z+\alpha)^d+ \cdots +b_0},$$ which we write as $$\frac{\tilde{a}_dz^d + \cdots + \tilde{a}_0}
{z^d + \tilde{b}_{d-1}z^{d-1} +\cdots + \tilde{b}_0}.$$ Here, if $\alpha$ is a fixed point of $R(z)$, then $$\tilde{a}_0 =0, \quad \mbox{and}\quad \tilde{b}_0\not= 0.$$ Also, taking as $\alpha$ a fixed point of $R(z)$ with the largest multiplicity, say $\zeta_R$, we may assume that $R(z)$ has no non-zero fixed points with multiplicity $d$. Moreover, if $0$ is a non-simple fixed point of $L^{-1}\circ R\circ L(z)$, then $$\tilde{a}_1= \tilde{b}_0.$$ Hence if $\tilde{a}_1\not= \tilde{b}_0$, then every fixed point of $R(z)$ is simple, and there is a non-zero fixed point $\zeta_R$ of $R(z)$ such that $\tilde{b}_1\not=0$. Indeed, letting $\{\zeta_1,\cdots, \zeta_d\}$ be the set of non-zero fixed points of $R(z)$, we consider conjugates of $R(z)$ by $L_k(z) = z+\zeta_k$. Then $$\tilde{b}_1 = d \zeta_k^{d-1}+
(d-1)b_{d-1} \zeta_k^{d-2}+ \cdots + b_1$$ cannot vanish for every $k$. Repeating such a change of fixed point again if necessary, we may further assume that there is neither a circle nor a line in ${{\mathbb C}}-\{0\}$ containing all the non-zero fixed points, since we have assumed that $d\geq 4$.
Thus we may assume from the beginning that [ *$a_0 =0$, $b_0\not= 0$, $(db_0-a_1)z+ b_1$ does not vanish identically, $R(z)$ has no non-zero fixed point with multiplicity $d$, and, if $R(z)$ has simple fixed points only, there is neither a circle nor a line in ${{\mathbb C}}-\{0\}$ containing all the non-zero fixed points.* ]{}
Now, set $$T(z) = \frac{z}{pz+q} \qquad (q\not=0).$$ Then we have $$T^{-1}\circ R\circ T(z) =
\frac{q(a_dz^d+ \cdots +a_1z(pz+q)^{d-1})}
{-p(a_dz^d+ \cdots +a_1z(pz+q)^{d-1}) + (z^d+ \cdots +b_0(pz+q)^d)}.$$ The constant term of the numerator remains to be $0$, and the coefficients of $z^d$ in the numerator and the denominator change to $$\begin{aligned}
a_d^*(p,q) &= q(a_d+ \cdots +a_1p^{d-1}) \quad \mbox{and} \\
b_d^*(p) &= -p(a_d+ \cdots +a_1p^{d-1})
+ (1+ \cdots +b_0p^d),\end{aligned}$$ respectively. If $b_d^*(p)\not=0$, divide both of the numerator and the denominator of the conjugate $T^{-1}\circ R\circ T(z)$ by $b_d^*(p)$. Then the coefficients $a_1$, $b_0$ and $b_1$, for instance, change to $$\begin{aligned}
a_1(p,q) &= \displaystyle{\frac{a_1q^d}{b_d^*(p)}}, \quad
b_0(p,q) = \displaystyle{\frac{b_0q^d}{b_d^*(p)}}, \\
b_1(p,q) &= \displaystyle{\frac{-a_1pq^{d-1}+ b_1q^{d-1}
+ db_0pq^{d-1}}{b_d^*(p)}}.\end{aligned}$$ Also, the condition $b_1(p,q)/b_0(p,q) = -1$ implies that $$q=q(p) =-\frac{(db_0-a_1)p+b_1}{b_0}.$$
First, if $b_0=a_1$, then $b_0(p,q(p))$ is a rational function of $p$ whose numerator has degree exactly $d$ and whose denominator has degree at most $d-1$. Hence there is a finite $p$ with $b_0(p,q(p))=1$. Next, if $db_0=a_1$, then $b_0(p,q(p))$ is a rational function of $p$ whose denominator has degree exactly $d$ and whose numerator is a non-zero constant. Hence there is a finite $p$ with $b_0(p,q(p))=1$. Finally, if $b_0\not=a_1$ and $db_0\not=a_1$, then the numerator and the denominator both have degree exactly $d$, and $R(z)$ has simple fixed points only. We write the non-zero fixed points of $R(z)$ as $\{\zeta_k\}_{k=1}^d$. Suppose that $b_0(p,q(p))$ can take the value $1$ at $\infty$ only. Then, with some non-zero constant $C$, $$b_0(p,q(p))= 1 + \frac{C}{b_d^*(p)},$$ which implies that $\{1/\zeta_k\}_{k=1}^d$ lie on the same circle, for $b_d^*(p) = \prod_{k=1}^d\, (1-\zeta_kp)$. But then $\{\zeta_k\}_{k=1}^d$ would have to lie on a common circle or line not containing $0$, which contradicts one of the assumptions made at the beginning. Hence we conclude in this case too that there is a finite $p$ such that $b_0(p,q(p))=1$.
Thus we obtain a $T(z)$ such that $T^{-1}\circ R\circ T(z)$ belongs to $N_d$ if $d\geq 4$, and the proof is now complete.
For a generic point of ${\rm M}_d$, there are only a finite number of rational functions in $N_d$ belonging to the point, as is seen from the proof of Theorem 1. On the other hand, some points of ${\rm M}_d$ can blow up in $N_d$ as in the following example.
Set $$R(z)=\frac{-3z^3-4z^2-2z}{z^3-z-1}.$$ Then $R(z)$ has a simple fixed point at $0$, and one with multiplicity $3$ at $-1$.
As in the proof of Theorem 1, letting $$T(z)=\frac{z}{pz-1-p} \quad(p\neq -1),$$ set $ R_p(z)=T^{-1}\circ R\circ T(z)$. Then we have $$R_p(z)=
\frac{(2p^2+4p+3)z^3+(-4p^2-8p-4)z^2+(2p^2+4p+2)z}
{(p^2+2p+1)z^3+(-p^2-2p)z^2+(-p^2-2p-1)z+p^2+2p+1}.$$ Hence if we set $ \tilde{p}=1/(p^2+2p+1) $, $$R_p(z) = \tilde{R}_{\tilde{p}}(z)=
\frac{(\tilde{p}+2)z^3-4z^2+2z}{z^3+(\tilde{p}-1)z^2-z+1}.$$ Thus $ \tilde{R}_{\tilde{p}}(z) $ belongs to $N_3$, and represents the same point of ${\rm M}_3$ for every non-zero $\tilde{p}$. Indeed, every $ \tilde{R}_{\tilde{p}}(z) $ is conjugate to $ \tilde{R}_{1}(z) $ by $$S(z) = \frac{z}{(1-\tilde{p}^{1/2})z + \tilde{p}^{1/2}}.$$
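The conjugacy claimed in this example can be verified numerically. The sketch below (our own check) takes $p=1$, so that $\tilde p = 1/(p^2+2p+1) = 1/4$, and confirms that $T^{-1}\circ R\circ T$ agrees with $\tilde R_{1/4}$ at several sample points.

```python
# R, T and the normalized representative from the example, specialized to p = 1.
def R(z):
    return (-3 * z**3 - 4 * z**2 - 2 * z) / (z**3 - z - 1)

def T(z):
    return z / (z - 2)            # T(z) = z/(pz - 1 - p) at p = 1

def Tinv(w):
    return 2 * w / (w - 1)        # inverse of T at p = 1

def Rtilde(z, pt):
    return ((pt + 2) * z**3 - 4 * z**2 + 2 * z) / (z**3 + (pt - 1) * z**2 - z + 1)

for z in (0.3, 1.7, -2.5, 5.0):   # sample points avoiding the poles
    assert abs(Tinv(R(T(z))) - Rtilde(z, 0.25)) < 1e-9
```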
A stratification of the moduli space
====================================
Every rational function $R(z)=P(z)/Q(z)$ of degree $d$ not fixing $\infty$ can also be written as $$R(z) = z - \frac{\hat{P}(z)}{Q(z)}$$ with monic polynomials $\hat{P}(z)$ and $Q(z)$ of degree $d+1$ and $d$, respectively. Using this representation, we obtain another system of parameters, some of which are the fixed points of $R(z)$.
Let $$R(z) = z - \frac{\hat{P}(z)}{Q(z)},$$ with $$\hat{P}(z) = zQ(z)- P(z)=\prod_{j=1}^p\, (z-\zeta_j)^{n_j}
\qquad (\zeta_j\in {{\mathbb C}}),$$ where $\zeta_j$ are mutually distinct and $n_j$ are positive integers which satisfy $$\sum_{k=1}^p n_k=d+1.$$ Then we call the set $\{n_1, \cdots, n_p\}$ the [*overlap type*]{} of fixed points of $R(z)$.
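The overlap type can be read off by expanding and factoring $\hat P(z) = zQ(z) - P(z)$. As an illustration (our own sketch, with polynomials as coefficient lists), the following checks that the degree-$3$ example at the end of the previous section has $\hat P(z) = z(z+1)^3$, hence overlap type $\{3,1\}$ with multiplicities summing to $d+1=4$.

```python
def polymul(a, b):
    # product of polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polysub(a, b):
    # difference of coefficient lists, padding to a common length
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

# The example R(z) = (-3z^3 - 4z^2 - 2z)/(z^3 - z - 1):
P = [0, -2, -4, -3]                    # -3z^3 - 4z^2 - 2z
Q = [-1, -1, 0, 1]                     # z^3 - z - 1
Phat = polysub(polymul([0, 1], Q), P)  # \hat{P} = zQ - P
assert Phat == polymul([0, 1], polymul([1, 1], polymul([1, 1], [1, 1])))
# \hat{P}(z) = z (z + 1)^3: the fixed point 0 is simple and -1 has
# multiplicity 3, so the overlap type is {3, 1}.
```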
We set $$C\{n_1, \cdots, n_p\} =
\bigl\{R(z)\in C_d \bigm|
\mbox{ the overlap type is }\{n_1, \cdots, n_p\}\bigr\}$$ and call it the [*$\{n_1, \cdots, n_p\}$-locus of $C_d$*]{}. The subset $$C_d' =
\bigl\{R(z)\in C_d
\bigm| \mbox{ the overlap type is not }
\{1, \cdots, 1\}\bigr\}$$ of $C_d$ is called the [*overlap locus*]{} of $C_d$.
Similarly, we can define the [*$\{n_1, \cdots, n_p\}$-locus of $N_d$*]{} by setting $$N\{n_1, \cdots, n_p\} =
\bigl\{R(z)\in N_d \bigm| \mbox{ the overlap type is }
\{n_1, \cdots, n_p\}\bigr\}.$$ Also the subset $$N_d' =
\bigl\{R(z)\in N_d
\bigm| \mbox{ the overlap type is not }
\{1, \cdots, 1\}\bigr\}$$ of $N_d$ is called the [*overlap locus*]{} of $N_d$.
Since the overlap type of fixed points is invariant under Möbius conjugation, Theorem 1 implies the following result.
Let ${\rm M}_d'$ be the subset of all points of ${\rm M}_d$ represented by rational functions having non-simple fixed points. Then the natural projection $\pi$ of $N_d'$ to ${\rm M}_d'$ is surjective for every $d\geq 2$.
[ The image of every $\{n_1, \cdots, n_p\}$-locus of $N_d$ by $\pi$ is called the [*$\{n_1, \cdots, n_p\}$-stratum of ${\rm M}_d$*]{}, and denoted by $M\{n_1, \cdots, n_p\}$. The resulting stratification of ${\rm M}_d$ is called the [*overlap type stratification*]{}. ]{}
The above loci are defined by algebraic equations (cf. Examples \[ex:overlap\] and \[ex:overlap2\]), and hence each is a Zariski open subset of a complex algebraic set in $C_d$ and in $N_d$ (with respect to the system of coefficient parameters). For instance, $$C_d' = \left\{ \frac{\hat{P}(z)}{Q(z)}
\in C_d \bigm| {\rm Discr}(\hat{P}) = 0\right\}.$$
\[ex:overlap\] [In the case of $d=2$, $$\begin{aligned}
C\{3\} &\cong \biggl\{(a_2,a_1,a_0,b_1,b_0)\biggm|
\begin{array}{l}
a_1 = b_0-(b_1-a_2)^2/3, \\
a_0 = -(b_1-a_2)^3/27
\end{array} \biggr\}, \\
C_2' &\cong \biggl\{(a_2,a_1,a_0,b_1,b_0)\biggm|
\begin{array}{l}
-27a_0^2 + a_0\left\{ 4(b_1-a_2)^3-18(b_0-a_1)(b_1-a_2)\right\} \\
\ \ +(a_1-b_0)^2(b_1-a_2)^2+4(a_1-b_0)^3=0
\end{array}\biggr\}, \\
N\{3\} & \cong \bigl\{(-1,1,0,-1,1) \bigr\}, \\
N_2' & \cong \bigl\{ (a_2,a_1,0,-1,1)\bigm| \
a_1-1=-(a_2+1)^2/4 \quad \mbox{or} \quad a_1=1 \bigr\}. \end{aligned}$$ ]{}
\[ex:overlap2\] [In the case of $d=3$, $$\begin{gathered}
C\{4\}\cong \biggl\{(a_3,a_2,a_1,a_0, b_2,b_1,b_0)\\
\quad \biggm|
\begin{array}{l}
a_2 =b_1 -3(b_2-a_3)^2/8,\quad
a_1 = b_0 -(b_2-a_3)^3/16, \\
a_0 = -(b_2-a_3)^4/256
\end{array} \biggr\},\end{gathered}$$ $$\begin{gathered}
C_3'\cong \biggl\{(a_3,a_2,a_1,a_0, b_2,b_1,b_0)\biggm| \
\mbox{D}=0\biggr\}, \\\end{gathered}$$ where $$\begin{aligned}
\mbox{D}
& =256a_0^3
+a_0^2\bigl\{128(b_1-a_2)^2-144(b_2-a_3)^2(b_1-a_2)+27(b_2-a_3)^4\\
& +192(b_2-a_3)(b_0-a_1)\bigr\}
+a_0\bigl\{16(b_1-a_2)^4-4(b_2-a_3)^2(b_1-a_2)^3\\
& -80(b_0-a_1)(b_2-a_3)(b_1-a_2)^2
+18(b_0-a_1)((b_2-a_3)^3+8(b_0-a_1))(b_1-a_2)\\
& -6(b_0-a_1)^2(b_2-a_3)^2\bigr\}
+(b_0-a_1)^2(4(b_1-a_2)^3-(b_2-a_3)^2(b_1-a_2)^2\\
& -18(b_0-a_1)(b_2-a_3)(b_1-a_2)
+(b_0-a_1)(4(b_2-a_3)^3+27(b_0-a_1)))\end{aligned}$$ $$\begin{aligned}
N\{4\} \cong \bigl\{(c,-1,1,0, c,-1,1)\bigm| c\in {{\mathbb C}}\bigr\},\end{aligned}$$ and $$\begin{gathered}
N_3' \cong \biggl\{(a_3,a_2,a_1,0, b_2,-1,1)\\
\quad\biggm|
\begin{array}{l}
-27(a_1-1)^2+ (a_1-1)\bigl(4(b_2-a_3)^3
+18(a_2+1)(b_2-a_3)\bigr) \\ \ \
+(a_2+1)^2(b_2-a_3)^2+4(a_2+1)^3=0
\quad \mbox{or} \quad a_1=1
\end{array}\biggr\}.\end{gathered}$$ ]{}
On the other hand, it is well-known that the denominator $Q(z)$ of $R(z)$ in $C\{n_1, \cdots, n_p\}$ can be represented uniquely as $$Q(z) = \sum_{k=1}^p\,
\biggl\{
\Bigl(\sum_{n=0}^{n_k-1}\, \alpha_{k,n_k-n}{(z-\zeta_k)^{n}}\Bigr)
\prod_{j\not=k} \, (z-\zeta_j)^{n_j}\biggr\}.$$ In other words, $Q(z)/\hat{P}(z)$ has a unique partial fraction decomposition $$\frac{\alpha_{1,n_1}}{(z-\zeta_1)^{n_1}}+\cdots
+\frac{\alpha_{1,1}}{z-\zeta_1}
+\frac{\alpha_{2,n_2}}{(z-\zeta_2)^{n_2}}+\cdots
+\frac{\alpha_{p,1}}{z-\zeta_p}.$$ Here, the assumptions imply that $\alpha_{k,n_k}\not=0$ for every $k$ and $$\sum_{k=1}^p\, \alpha_{k,1} = 1.$$
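As an illustration (not taken from the paper), consider the degree-$3$ example of the first section, where $\hat{P}(z)=zQ(z)-P(z)=z(z+1)^3$ and $Q(z)=z^3-z-1$, so the overlap type is $\{1,3\}$. A direct expansion gives the decomposition parameters $\alpha_{1,1}=-1$, $\alpha_{2,3}=1$, $\alpha_{2,2}=-1$, $\alpha_{2,1}=2$; note that $\alpha_{1,1}+\alpha_{2,1}=1$, as required. A sketch checking this with exact rational arithmetic:

```python
from fractions import Fraction

# Q(z)/P^(z) with P^(z) = z(z+1)^3 and Q(z) = z^3 - z - 1 should equal
# -1/z + 1/(z+1)^3 - 1/(z+1)^2 + 2/(z+1), i.e. the decomposition
# parameters are alpha_{1,1} = -1, alpha_{2,3} = 1, alpha_{2,2} = -1,
# alpha_{2,1} = 2.
def lhs(z):
    return (z**3 - z - 1) / (z * (z + 1)**3)

def rhs(z):
    return -1/z + 1/(z + 1)**3 - 1/(z + 1)**2 + 2/(z + 1)

samples = [Fraction(2), Fraction(3), Fraction(-3), Fraction(1, 2)]
assert all(lhs(z) == rhs(z) for z in samples)
assert -1 + 2 == 1   # the residues alpha_{1,1} + alpha_{2,1} sum to 1
```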
[ The set $\{\zeta_k\}$ of fixed points and the set $\{\alpha_{k,\ell}\}$ of coefficients give a system of parameters for $C\{n_1, \cdots, n_p\}$, and is called the system of [*decomposition parameters*]{} for $C\{n_1, \cdots, n_p\}$. ]{}
Set $$\begin{aligned}
\tilde{E}\{n_1, \cdots, n_p\} =
\biggl\{
& (\zeta_1, \cdots, \zeta_p,
\alpha_{1,1}, \cdots, \alpha_{1,n_1},
\alpha_{2,1}, \cdots, \alpha_{p,n_p}) \in {{\mathbb C}}^{d+p+1}\\
& \biggm| \sum_{k=1}^p\, \alpha_{k,1}= 1, \quad
\alpha_{k,n_k}\not=0 \quad (k=1,\cdots, p) \biggr\}.\end{aligned}$$ Then the natural projection $\Pi$ of $\tilde{E}\{n_1, \cdots, n_p\}$ to $C\{n_1, \cdots, n_p\}$ (with respect to the system of coefficient parameters) is a holomorphic surjection.
Moreover, $C\{n_1, \cdots, n_p\}$ has a complex manifold structure such that $\Pi$ is a finite-sheeted holomorphic covering projection.
We call $\tilde{E}\{n_1, \cdots, n_p\}$ the [*marked $\{n_1, \cdots, n_p\}$-parameter domain*]{}.
Since $\Pi$ is a polynomial map, it is holomorphic. To show the other assertions, note that the defining domain of the system of decomposition parameters is the product space $$\prod_{n=1}^{d+1}\, C_{N_n}({{\mathbb C}}^{n+1}),$$ where $C_m({{\mathbb C}}^n)$ is the configuration space of $m$ distinct vectors in ${{\mathbb C}}^n$ and $N_n$ is the number of $\ell$ with $n_\ell= n$. In particular, $N_n>0$ only if $$\min \{n_1,\cdots, n_p\} \leq n\leq \max \{n_1,\cdots, n_p\},$$ the set $ \{(n,1),\cdots, (n,N_n)\}$ is empty if there is no $\ell$ with $n_\ell=n$, and $$\sum_{n=1}^{d+1}\, nN_n = d+1.$$ The coordinates of the product space can be written explicitly as follows: $$\begin{aligned}
& E\{n_1, \cdots, n_p\}\\
& \begin{aligned}
= \Biggl\{\Bigl(
&
\bigl\{ \left(\zeta_{1,1}, \alpha_{(1,1),1}\right), \cdots,
\left(\zeta_{1,N_1}, \alpha_{(1,N_1),1}\right) \bigr\},
\ \cdots\cdots \\
&
\bigl\{
\left(\zeta_{d+1,1}, \alpha_{(d+1,1),1},\cdots,
\alpha_{(d+1,1),d+1}\right), \cdots, \\
& \quad
\left(\zeta_{d+1,N_{d+1}}, \alpha_{(d+1,N_{d+1}),1},\cdots,
\alpha_{(d+1,N_{d+1}),d+1} \right)
\bigr\} \Bigr)\in \prod_{n=1}^{d+1} C_{N_n}(\mathbb{C}^{n+1}) \\
&
\Biggm| \
\sum_{k=1}^{d+1}\biggl(\sum_{j=1}^{N_k}\, \alpha_{(k,j),1}\biggr)= 1,
\quad
\alpha_{(k,\ast),k}\not=0 \quad (k=1, \cdots, p)\Biggr\},
\end{aligned}\end{aligned}$$ where all $\zeta$s are mutually distinct as before.
Now the map $\Pi$ factors through the canonical finite-sheeted holomorphic covering projection $\sigma$ of $\tilde{E}\{n_1, \cdots, n_p\}$ to $E\{n_1, \cdots, n_p\}$ and the natural holomorphic bijection $\iota$ of $E\{n_1, \cdots, n_p\}$ to $C\{n_1, \cdots, n_p\}$: $$\Pi = \iota \circ \sigma.$$ In particular, $\iota$ induces the desired complex manifold structure on $C\{n_1, \cdots, n_p\}$.
On the non-overlap locus $C\{1, \cdots, 1\}= C_d-C_d'$, $\{\alpha_{k,1}\}_{k=1}^{d+1}$ in the system of decomposition parameters are nothing but the indices at the fixed points $\{\zeta_k\}_{k=1}^{d+1}$, which implies the assertion of Problem 12-d in [@M].
If the location and the overlap type of fixed points and the indices at them are fixed, then the resulting subset of $ C\{n_1, \cdots, n_p\}$ has a natural complex manifold structure of dimension $d+1-p$.
By Theorem 3, we only need to note that $$\dim_{{{\mathbb C}}}\, C\{n_1, \cdots, n_p\} = d+p.$$ Indeed, the index of $R(z)$ at $\zeta_k$ is the residue of $1/(z-R(z))=Q(z)/\hat{P}(z)$ at $\zeta_k$, which equals $\alpha_{k,1}$. Hence fixing the locations, the overlap type, and the indices of the fixed points leaves free exactly the parameters $\alpha_{k,\ell}$ with $\ell\geq 2$, of which there are $(d+1)-p$.
This corollary gives an affirmative answer to a conjecture of Milnor stated in the Remark below Problem 12-d [@M p.152].
The proof of Theorem 1 for the case that $d=3$
==============================================
Even in the case that $d=3$, the arguments in the proof of Theorem 1 apply, but we cannot exclude the case that $R(z)$ has $4$ simple fixed points $0, w_1,w_2,w_3$ such that $1/w_1,1/w_2,1/w_3$ lie on the same circle. So, we treat this case by a direct calculation using a symbolic and algebraic computation system (cf. [@cox-ideal], [@cox-using]).
For this purpose, let $0, w_1,w_2,w_3 $ be the set of simple fixed points of a given $R(z)$ of degree $3$ (having simple fixed points only). We may assume that its denominator has the form $z^3+b_2z^2+b_1z+b_0$ with $ b_0\neq 0 $ as before. Let $$T(z)=\frac{z}{pz+q}\quad (q\neq 0),$$ and take the conjugate of $ R(z) $ by $ T(z) $. Then the coefficients of $z^3$, $z$, and $1$ in the denominator $z^3+b_2z^2+b_1z+b_0$ change to $$\begin{array}{l}
b_3^*(p)=w_3w_2w_1p^3-((w_2+w_3)w_1+w_3w_2)p^2+(w_1+w_2+w_3)p-1,\\
b_1^*(p,q)=(w_3w_2w_1-2b_0)q^2p-b_1q^2, \mbox{ and }\\
b_0^*(q)=-b_0q^3.
\end{array}$$ So the condition $ b_1^*(p,q)/b_0^*(q)=-1 $ implies that $$q=\frac{(w_3w_2w_1-2b_0)p-b_1}{b_0}$$ and the condition $ b_0^*(q)/b_3^*(p)=1 $ becomes the equation $$\begin{gathered}
\label{eq:A}
(-w_1^3w_2^3w_3^3+6b_0w_1^2w_2^2w_3^2-13b_0^2w_1w_2w_3+8b_0^3)p^3\\
+((3w_1^2w_2^2w_3^2-12b_0w_1w_2w_3+12b_0^2)b_1
+(b_0^2w_2+b_0^2w_1)w_3+b_0^2w_1w_2)p^2 \\
+((-3w_1w_2w_3+6b_0)b_1^2-b_0^2w_3-b_0^2w_2-b_0^2w_1)p+b_1^3+b_0^2=0,\end{gathered}$$ which we write as $ A_3 p^3+A_2 p^2+A_1 p+A_0=0 $, where $ A_k$ are functions of $ w_1,w_2,w_3,b_0,b_1$.
Here, we consider the equations $$A_3=A_2=A_1=0.$$ By computing a Gröbner basis with respect to the lexicographic order $ b_1>b_0>w_1>w_2>w_3 $, we obtain the conditions $$\begin{gathered}
w_3=0,\ w_2=0, \ w_1=0 \\
\mbox{ or } \quad
W=(w_2^2-w_1w_2+w_1^2)w_3^2+(-w_1w_2^2-w_1^2w_2)w_3+w_1^2w_2^2=0 \end{gathered}$$ in $ \mathbb{C}[w_1,w_2,w_3] $. The conditions $ w_k=0 $ ($k=1,2,3$) contradict the assumption that $R(z)$ has $4$ simple fixed points. Also, we recall that the case $W=0$ is the one excluded in the proof of Theorem 1; indeed, the condition $W=0$ implies that $1/w_1,1/w_2$, and $1/w_3$ form an equilateral triangle in ${{\mathbb C}}$. (If $d\geq 4$, we can assume that there is neither a circle nor a line in ${{\mathbb C}}-\{0\}$ containing all non-zero fixed points.)
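The geometric meaning of the excluded condition can be checked numerically: dividing $W$ by $w_1^2w_2^2w_3^2$ and writing $u_k=1/w_k$ gives $u_1^2+u_2^2+u_3^2-u_1u_2-u_2u_3-u_3u_1$, which vanishes exactly when $u_1,u_2,u_3$ are the vertices of an equilateral triangle. A quick sketch (the sample triangle is an arbitrary choice):

```python
import cmath

# u_k = c + r*omega^k are the vertices of an equilateral triangle in C;
# with w_k = 1/u_k, the polynomial W should vanish (up to rounding).
omega = cmath.exp(2j * cmath.pi / 3)
c, r = 0.3 + 0.7j, 1.2 - 0.4j       # arbitrary center and complex "radius"
u = [c + r * omega**k for k in range(3)]
w1, w2, w3 = (1 / uk for uk in u)
W = ((w2**2 - w1*w2 + w1**2) * w3**2
     + (-w1*w2**2 - w1**2*w2) * w3
     + w1**2 * w2**2)
assert abs(W) < 1e-9
```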
As before, we consider the conjugate of $R(z)$ by the translation $L_k(z)= z+ w_k$ for every $k$. Here we need to consider the case of $ L_1(z)=z+w_1 $ only, for the other cases are similar. Firstly, take the conjugate of $R(z)$ by $ L_1(z) $, and secondly take the conjugate by $ T(z) $, and we see that $R(z)$ changes to $$R^{\#}(z)=\frac{P^{\#}(z)}{Q^{\#}(z)}$$ with $$Q^{\#}(z)=b_3^\#z^3 + b_2^\# z^2 + b_1^\# z + b_0^\#,$$ where $$\begin{array}{l}
b^{\#}_3=(w_1^3+(-w_2-w_3)w_1^2+w_3w_2w_1)p^3+(3w_1^2+(-2w_2-2w_3)w_1\\
\qquad
+w_3w_2)p^2+(3w_1-w_2-w_3)p+1,\\
b^{\#}_1=(2w_1b_1+2w_1^2b_2+3w_1^3+(-w_2-w_3)w_1^2+w_3w_2w_1+2b_0)q^2p\\
\qquad
+(b_1+2w_1b_2+3w_1^2)q^2, \mbox{ and } \\
b^{\#}_0=(w_1b_1+w_1^2b_2+w_1^3+b_0)q^3. \\
\end{array}$$ Hence the condition $ b_1^{\#}/b_0^{\#}=-1 $ implies that $$\begin{gathered}
q = \frac{-1}{w_1b_1+w_1^2b_2+w_1^3+b_0}\\
\times
\bigl\{\bigl(2w_1b_1+2w_1^2b_2+3w_1^3-(w_2+w_3)w_1^2+w_3w_2w_1
+2b_0\bigr)p \\
+b_1+2w_1b_2+3w_1^2\bigr\},\end{gathered}$$ and the condition $ b_0^{\#}/b_3^{\#}=1 $ is the equation $$\label{eq:B}
B_3 p^3+B_2 p^2+B_1 p+B_0=0$$ with $$\begin{aligned}
B_3=\,
& -\biggl\{w_1b_1+w_1^2b_2+2w_1^3
+(-w_2-w_3)w_1^2+w_3w_2w_1+b_0\biggr\} \\
& \times
\biggl\{8w_1^2b_1^2+(16w_1^3b_2+21w_1^4
+(-5w_2-5w_3)w_1^3+5w_3w_2w_1^2
+16b_0w_1)b_1\\
& +8w_1^4b_2^2+(21w_1^5+(-5w_2-5w_3)w_1^4+5w_3w_2w_1^3
+16b_0w_1^2)b_2\\
& +14w_1^6+(-7w_2-7w_3)w_1^5+(w_2^2+9w_3w_2+w_3^2)w_1^4\\
& +(-2w_3w_2^2-2w_3^2w_2+21b_0)w_1^3
+(w_3^2w_2^2-5b_0w_2-5b_0w_3)w_1^2\\
& +5b_0w_3w_2w_1+8b_0^2\biggr\},\end{aligned}$$ $$\begin{aligned}
B_2=\,
&(-12w_1^2b_1^3-(48w_1^3b_2+75w_1^4-(14w_2+14w_3)w_1^3
+13w_3w_2w_1^2+24b_0w_1)b_1^2\\
& +(-60w_1^4b_2^2+(-186w_1^5
+(40w_2+40w_3)w_1^4-38w_3w_2w_1^3-72b_0w_1^2)b_2\\
& -141w_1^6+(58w_2+58w_3)w_1^5+(-3w_2^2-62w_3w_2-3w_3^2)w_1^4 \\
& +(6w_3w_2^2+6w_3^2w_2-114b_0)w_1^3
+(-3w_3^2w_2^2+16b_0w_2+16b_0w_3)w_1^2 \\
& -14b_0w_3w_2w_1-12b_0^2)b_1-24w_1^5b_2^3
+(-111w_1^6+(26w_2+26w_3)w_1^5 \\
& -25w_3w_2w_1^4-48b_0w_1^3)b_2^2+(-168w_1^7+(76w_2+76w_3)w_1^6 \\
& +(-6w_2^2-86w_3w_2-6w_3^2)w_1^5
+(12w_3w_2^2+12w_3^2w_2-150b_0)w_1^4 \\
& +(-6w_3^2w_2^2+28b_0w_2+28b_0w_3)w_1^3
-26b_0w_3w_2w_1^2-24b_0^2w_1)b_2 \\
& -84w_1^8+(56w_2+56w_3)w_1^7+(-9w_2^2-73w_3w_2-9w_3^2)w_1^6 \\
& +(18w_3w_2^2+18w_3^2w_2-114b_0)w_1^5+(-9w_3^2w_2^2
+40b_0w_2+40b_0w_3)w_1^4\\
& -38b_0w_3w_2w_1^3-39b_0^2w_1^2+(2b_0^2w_2+2b_0^2w_3)w_1-b_0^2w_3w_2),\end{aligned}$$ $$\begin{aligned}
B_1=\,
& (-6w_1b_1^3+(-30w_1^2b_2-48w_1^3+(4w_2+4w_3)w_1^2
-3w_3w_2w_1-6b_0)b_1^2\\
& +(-48w_1^3b_2^2+(-150w_1^4+(14w_2+14w_3)w_1^3
-12w_3w_2w_1^2-24b_0w_1)b_2 \\
& -114w_1^5+(20w_2+20w_3)w_1^4-18w_3w_2w_1^3-42b_0w_1^2+(2b_0w_2
+2b_0w_3)w_1)b_1\\
& -24w_1^4b_2^3+(-111w_1^5+(13w_2+13w_3)w_1^4-12w_3w_2w_1^3
-24b_0w_1^2)b_2^2\\
& +(-168w_1^6+(38w_2+38w_3)w_1^5-36w_3w_2w_1^4-78b_0w_1^3
+(2b_0w_2+2b_0w_3)w_1^2)b_2\\
& -84w_1^7+(28w_2+28w_3)w_1^6-27w_3w_2w_1^5
-60b_0w_1^4+(2b_0w_2+2b_0w_3)w_1^3\\
& -3b_0^2w_1+b_0^2w_2+b_0^2w_3), \end{aligned}$$ and $$\begin{aligned}
B_0=\,
& -b_1^3+(-6w_1b_2-10w_1^2)b_1^2+(-12w_1^2b_2^2-38w_1^3b_2-29w_1^4
-2b_0w_1)b_1\\
& -8w_1^3b_2^3-37w_1^4b_2^2
+(-56w_1^5-2b_0w_1^2)b_2-28w_1^6-2b_0w_1^3-b_0^2.\end{aligned}$$
Now, we consider the equations $$A_3=A_2=A_1=0 \ \mbox{ and } \ B_3=B_2=B_1=0.$$ By computing a Gröbner basis as before, we obtain the conditions $$w_3=0,\ w_2=0,\ \mbox{ or } \ w_2-w_3=0,$$ in $ \mathbb{C}[w_2,w_3] $, which again contradicts the assumption. Therefore, either equation \[eq:A\] or equation \[eq:B\] has a solution $ p $.
Thus we have shown the assertion of Theorem 1 for the case that $d=3$.
[99]{}
D. Cox, J. Little, and D. O’Shea, [*Ideals, Varieties, and Algorithms*]{}, UTM, Springer-Verlag, 1998.
D. Cox, J. Little, and D. O’Shea, [*Using Algebraic Geometry*]{}, GTM 185, Springer-Verlag, 1998.
M. Fujimura, , **7** (2007), 345–360.
M. Fujimura and M. Taniguchi, , [**136**]{} (2008), 3601–3609.
C. McMullen, [*Families of rational maps and iterative root-finding algorithms*]{}, Ann. of Math. (2), [**125**]{} (1987), 467–493.
J. Milnor, [*Geometry and dynamics of quadratic rational maps*]{}, Experiment. Math., [**2**]{} (1993), 37–83.
J. Milnor, [*Dynamics in One Complex Variable*]{}, 3rd edition, Princeton University Press, 2006.
Masayo Fujimura, Department of Mathematics, National Defense Academy, Yokosuka 239-8686, JAPAN ([email protected])
Masahiko Taniguchi, Department of Mathematics, Nara Women’s University, Nara 630-8506, JAPAN ([email protected])
---
abstract: 'We study asymptotic behaviors near the boundary of complete metrics of constant curvature in planar singular domains and establish an optimal estimate of these metrics by the corresponding metrics in tangent cones near isolated singular points on boundary. The conformal structure plays an essential role. We also discuss asymptotic behaviors of complete Kähler-Einstein metrics on singular product domains.'
address:
- |
Beijing International Center for Mathematical Research\
Peking University\
Beijing, 100871, China
- |
Department of Mathematics\
University of Notre Dame\
Notre Dame, IN 46556
- |
School of Mathematical Sciences\
Peking University\
Beijing, 100871, China
author:
- Qing Han
- Weiming Shen
title: |
Boundary Expansions for Liouville’s Equation\
in Planar Singular Domains
---
[^1]
Introduction {#sec-Intro}
============
Assume $\Omega\subset \mathbb{R}^{2}$ is a domain. We consider the following problem: $$\begin{aligned}
\label{eq-MainEq} \Delta{u}& =e^{ 2u } \quad\text{in }\Omega, \\
\label{eq-MainBoundary}u&=\infty\quad\text{on }\partial \Omega.\end{aligned}$$ The equation is known as Liouville’s equation. For a large class of domains $\Omega$, \[eq-MainEq\] and \[eq-MainBoundary\] admit a solution $u\in C^\infty(\Omega)$. Geometrically, $e^{ 2u }(dx_1\otimes dx_1+dx_2\otimes dx_2)$ is a complete metric with constant Gauss curvature $-1$ on $\Omega$. Our main concern in this paper is the asymptotic behavior of solutions $u$ near isolated [*singular*]{} points on the boundary.
The higher dimensional counterpart is given by, for $\Omega\subset\mathbb R^n$, $n\ge 3$, $$\begin{aligned}
\label{eq-MainEq-HigherDim} \Delta{u} = u^{\frac{n+2}{n-2}}\quad\text{in }\Omega.\end{aligned}$$ More generally, we can study, for a function $f$, $$\begin{aligned}
\label{eq-MainEq-HigherDim-general} \Delta{u} = f(u)\quad\text{in }\Omega.\end{aligned}$$
The study of these problems has a rich history. To begin with, we assume $\Omega$ has an at least $C^2$-boundary. Bieberbach [@Bieberbach1916] studied the problem - and proved the existence of its solutions. If $f$ is monotone, Keller [@Keller1957] established the existence for and . In a pioneering work, Loewner and Nirenberg studied asymptotic behaviors of solutions of and and proved $$u(x)=\left(\frac{n(n-2)}{4}\right)^{\frac{n-2}{4}}d^{-\frac{n-2}{2}}+o(d^{-\frac{n-2}{2}}),$$ where $d$ is the distance to $\partial\Omega$. This result has been generalized to more general $f$ and up to higher order terms, for example, by Bandle and Marcus, Diaz and Letelier [@DiazLetelier1992], and Kichenassamy [@Kichenassamy2005JFA]. Moreover, if $\Omega$ has a smooth boundary, an estimate up to an arbitrarily high finite order was established by Andersson, Chruściel and Friedrich [@ACF1982CMP] and Mazzeo [@Mazzeo1991]. In fact, they proved that solutions of and are polyhomogeneous. All these results require $\partial\Omega$ to have some degree of regularity. The case where $\partial\Omega$ is singular was studied by del Pino and Letelier [@delPino2002], and by Marcus and Veron. However, no explicit estimates are known in neighborhoods of singular boundary points.
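The leading coefficient in the Loewner–Nirenberg expansion can be checked against the model case of a half-space, where $d=x_n$ and $u=c\,x_n^{-m}$ with $m=\frac{n-2}{2}$ and $c=\bigl(\frac{n(n-2)}{4}\bigr)^{\frac{n-2}{4}}$ is an exact solution, since $\Delta u=c\,m(m+1)x_n^{-m-2}$ for such a function. A short numerical sketch of this computation:

```python
import math

# Half-space model: u = c * x_n^{-m}, m = (n-2)/2,
# c = (n(n-2)/4)^{(n-2)/4}; then Delta u = c*m*(m+1)*x_n^{-m-2}
# should equal u^{(n+2)/(n-2)}.
def check(n, d):
    m = (n - 2) / 2
    c = (n * (n - 2) / 4) ** ((n - 2) / 4)
    u = c * d ** (-m)
    lap = c * m * (m + 1) * d ** (-m - 2)
    rhs = u ** ((n + 2) / (n - 2))
    assert abs(lap - rhs) < 1e-9 * rhs

for n in (3, 4, 5, 10):
    for d in (0.5, 1.0, 2.0):
        check(n, d)
```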
Other problems with a similar feature include complete Kähler-Einstein metrics discussed by Cheng and Yau [@ChengYau1980CPAM], Fefferman [@Fefferman1976], and Lee and Melrose [@LeeMelrose1982], the complete minimal graphs in the hyperbolic space by Han and Jiang [@HanJiang], Lin [@Lin1989Invent] and Tonegawa [@Tonegawa1996MathZ] and a class of Monge-Ampère equations by Jian and Wang [@JianWang2013JDG].
Now we return to -. For bounded domains $\Omega\subset\mathbb R^2$, let $d$ be the distance function to $\partial\Omega$. If $\partial\Omega$ is $C^2$, then $d$ is a $C^2$-function near $\partial\Omega$. Under this condition, the solution $u$ of - satisfies $$\label{eq-EstimateDegree1}|u+\log d|\le Cd,$$ where $C$ is a positive constant depending only on the geometry of $\partial\Omega$. This follows from a simple comparison of $u$ and the corresponding solutions in the interior and exterior tangent balls.
In this paper, we study the asymptotic behavior of $u$ near isolated singular points on $\partial\Omega$. Taking a boundary point, say the origin, we assume $\partial\Omega$ has a conic singularity at the origin in the following sense: $\partial\Omega$ in a neighborhood of the origin consists of two $C^2$-curves $\sigma_1$ and $\sigma_2$, intersecting at the origin with an angle $\mu\pi$ for some constant $\mu\in (0,2)$. Here, the origin is an end point of both curves $\sigma_1$ and $\sigma_2$. Let $l_1$ and $l_2$ be two rays starting from the origin and tangent to $\sigma_1$ and $\sigma_2$ there, respectively. Then, the infinite cone $V_{\mu}$ formed by $l_1$ and $l_2$ is considered as a tangent cone of $\Omega$ at the origin, with an opening angle $\mu\pi$. Solutions of - in $V_{\mu}$ can be written explicitly. In fact, using polar coordinates, we write $$V_{\mu}=\{(r,\theta):\, r\in (0,\infty),\, \theta\in (0,\mu\pi)\}.$$ Here, $l_1$ corresponds to $\theta=0$ and $l_2$ to $\theta=\mu\pi$. Then, the solution $v_{\mu}$ of - in $V_\mu$ is given by $$\label{eq-definition_v_main}v_{\mu}=-\log\left(\mu r\sin\frac\theta\mu\right).$$ Intuitively, $v_{ \mu}$ should provide a good approximation of $u$ near the origin. However, there is a major problem: the symmetric difference $(\Omega\setminus V_{\mu})\cup (V_{\mu}\setminus\Omega)$ may be nonempty near the origin. For example, some $x\in\Omega$ may not be in the tangent cone $V_{\mu}$. As a remedy, we need to modify $v_\mu$ to get a function defined in $\Omega$ near the origin. To describe our result, we let $d, d_1$ and $ d_2$ be the distances to $\partial\Omega, \sigma_1$ and $ \sigma_2$, respectively. For $\mu\in (0,1]$, we define, for any $x\in \Omega$, $$\label{eq-definition_f_main1}f_{\mu}(x)=
-\log \left(\mu |x| \sin\frac{\arcsin\frac{d(x)}{|x|}}{\mu}\right).$$ We note that $f_\mu$ in is well-defined for $x$ sufficiently small and that $\{x\in\Omega:\, d_1(x)=d_2(x)\}$ is a curve from the origin for $\mu\in (0,1]$ near the origin. In fact, we can write, for $x$ sufficiently small, $$f_{\mu}(x)=
\begin{cases}
-\log (\mu |x| \sin\frac{\arcsin\frac{d_{1}(x)}{|x|}}{\mu})
& \text{if } d_1(x) \le d_2(x),\\
-\log(\mu |x| \sin\frac{\arcsin\frac{d_{2}(x)}{|x|}}{\mu})
& \text{if }d_1(x)>d_2(x).\\
\end{cases}$$ For $\mu\in (1,2)$, we define, for any $x\in \Omega$, $$\label{eq-definition_f_main2}
f_{\mu}(x)=
\begin{cases}
-\log (\mu |x| \sin\frac{\arcsin\frac{d_{1}(x)}{|x|}}{\mu})
& \text{if } d_1(x) <d_2(x),\\
-\log(\mu |x| \sin\frac{\theta}{\mu})
& \text{if }d_1(x) = d_2(x),\\
-\log(\mu |x| \sin\frac{\arcsin\frac{d_{2}(x)}{|x|}}{\mu})
& \text{if }d_1(x)>d_2(x),\\
\end{cases}$$ where $\theta$ is the angle measured anticlockwise from the ray $l_1$ to $\overrightarrow{Ox}$. We note that $f_\mu$ in \[eq-definition_f_main2\] is well-defined for $x$ sufficiently small and that $\{x\in\Omega:\, d_1(x)=d_2(x)\}$ has a nonempty interior for $\mu\in (1,2)$. It is easy to see that $f_\mu$ in \[eq-definition_f_main1\] and \[eq-definition_f_main2\] reduces to $v_\mu$ in \[eq-definition_v_main\] if $\Omega$ is the cone $V_\mu$.
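As a quick consistency check (a numerical sketch under the assumption that $\Omega$ is the exact cone $V_\mu$ with $\mu\in(0,1]$): there $d(x)=r\sin\bigl(\min(\theta,\mu\pi-\theta)\bigr)$, so $\arcsin(d/|x|)$ recovers the angle to the nearer ray, and $\sin\frac{\mu\pi-\theta}{\mu}=\sin\bigl(\pi-\frac\theta\mu\bigr)=\sin\frac\theta\mu$ shows that $f_\mu$ coincides with $v_\mu$:

```python
import math

# v_mu from the cone solution, and f_mu specialized to the exact cone,
# where d(x) = r*sin(min(theta, mu*pi - theta)) for mu <= 1.
def v(r, theta, mu):
    return -math.log(mu * r * math.sin(theta / mu))

def f(r, theta, mu):
    d = r * math.sin(min(theta, mu * math.pi - theta))
    return -math.log(mu * r * math.sin(math.asin(d / r) / mu))

for mu in (0.3, 0.75, 1.0):
    for t in (0.1, 0.4, 0.9):
        theta = t * mu * math.pi
        assert abs(f(2.0, theta, mu) - v(2.0, theta, mu)) < 1e-12
```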
We now state our main result in this paper.
\[thrm-Main\] Let $\Omega$ be a bounded domain in $\mathbb R^2$ and $\partial\Omega$ in a neighborhood of the origin consist of two $C^{2}$-curves $\sigma_1$ and $\sigma_2$ intersecting at the origin at an angle $\mu\pi$, for some constant $\mu\in (0,2)$. Suppose $ u \in C^{\infty}(\Omega)$ is a solution of -. Then, for any $x\in\Omega\cap B_\delta$, $$\label{eq-MainEstimate}\left|u(x)
-f_\mu(x)\right|\le Cd(x),$$ where $f_\mu$ is the function defined in for $\mu\in (0,1]$ and in for $\mu\in (1,2)$, $d$ is the distance to $\partial\Omega$, and $\delta$ and $C$ are positive constants depending only on the geometry of $\partial\Omega$.
The estimate \[eq-MainEstimate\] generalizes \[eq-EstimateDegree1\] to singular domains and is optimal: the power one of the distance function on the right-hand side cannot be improved without stronger regularity assumptions on the boundary. The proof of Theorem \[thrm-Main\] is based on a combination of conformal transforms and the maximum principle. An appropriate conformal transform changes the tangent cone at the origin to the upper half plane. The new boundary has a better regularity at the origin for $\mu\in (1,2)$ and a worse one for $\mu\in (0,1)$. Such a change in the regularity of the boundary requires us to discuss the asymptotic behavior of solutions near $C^{1,\alpha}$-boundary and near $C^{2,\alpha}$-boundary.
The paper is organized as follows. In Section \[sec-Existence\], we prove the existence and the uniqueness of solutions of - in a large class of domains. In Section \[sec-C-1,alpha-boundary\], we study the asymptotic expansions near $C^{1,\alpha}$-boundary and derive an optimal estimate. In Section \[sec-C-2,alpha-boundary\], we study the asymptotic expansions near $C^{2,\alpha}$-boundary and derive the corresponding optimal estimate. In Section \[sec-IsolatedSingular\], we study the asymptotic expansions near isolated singular points and prove Theorem \[thrm-Main\]. In Section \[sec-app\], we discuss the asymptotic behavior of complete Kähler-Einstein metrics on singular product domains.
The Existence and Uniqueness {#sec-Existence}
============================
In this section, we prove the existence and the uniqueness of solutions of - in a large class of domains.
First, we introduce some notations. Let $x_{0}\in \mathbb R^2$ be a point and $r>0$ be a constant. For $\Omega=B_r(x_0)$, denote by $u_{r,x_0}$ the corresponding solution of -. Then, $$u_{r,x_{0}}(x)=\log\frac{2r}{r^{2}-|x-x_{0}|^{2}}.$$ With $d(x)=r-|x-x_0|$, we have $$u_{r,x_{0}}=-\log d-\log\left(1-\frac{d}{2r}\right).$$ For $\Omega=\mathbb R^2\setminus B_r(x_0)$, denote by $v_{r,x_0}$ the corresponding solution of -. Then, $$v_{r,x_{0}}(x)=\log\frac{2r}{|x-x_{0}|^{2}-r^{2}}.$$ With $d(x)=|x-x_0|-r$, we have $$v_{r,x_{0}}=-\log d-\log\left(1+\frac{d}{2r}\right).$$ These two solutions play an important role in this paper.
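These explicit formulas can be sanity-checked numerically; the following sketch verifies $\Delta u_{r,x_0}=e^{2u_{r,x_0}}$ at an interior point of the unit disk by central differences (the sample point and step size are arbitrary choices):

```python
import math

# Finite-difference check of Delta u = e^{2u} for the ball solution
# u(x) = log(2r / (r^2 - |x - x0|^2)) with r = 1 and x0 = 0.
def u(x, y, r=1.0):
    return math.log(2 * r / (r * r - x * x - y * y))

h = 1e-5
x0, y0 = 0.3, 0.2
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / (h * h)
assert abs(lap - math.exp(2 * u(x0, y0))) < 1e-4
```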
Now we prove a preliminary result for domains with singularity. We note that a finite cone is determined by its vertex, its axis, its height and its opening angle.
\[lemma-ExteriorCone\] Let $ \Omega $ be a bounded domain in $\mathbb R^2$ satisfying a uniform exterior cone condition. Suppose $ u \in C^{\infty}(\Omega)$ is a solution of -. Then, for any $x\in \Omega$ with $d<\delta$, $$|u+\log d|\le C,$$ where $\delta$ and $C$ are positive constants depending only on the uniform exterior cone.
For any $ x \in \Omega$ with $d(x)=d$, we have $ B_{d}(x) \subset \Omega$. We assume $d=|x-p|$ for some $p\in\partial\Omega $. Let $u_{d,x}$ be the solution of - in $B_d(x)$. By the maximum principle, we have $$u(x) \leq u_{d,x} (x)= -\log d-\log\left(1-\frac{d}{2d}\right)
=-\log d+\log 2.$$ Next, there exists a cone $V$, with vertex $p$, axis $\overrightarrow{e_{p}}$, height $h$ and opening angle $2\theta$, such that $V\cap \Omega=\emptyset$. Here, we can assume $h$ and $\theta$ do not depend on the choice of $p \in \partial \Omega.$ Set $\widetilde{p}= p+\frac{1}{\sin\theta}d \overrightarrow{e_{p}}.$ It is straightforward to check $B_{d}(\widetilde{p})\subset V\subset\Omega^{C} $, if $d<\frac{h}{1+\frac{1}{\sin\theta}}$, and dist$(x,\partial B_d(\widetilde p))\le \frac{d}{\sin\theta}$. Let $v_{d,\widetilde p}$ be the solution of - in $\mathbb R^2\setminus B_d(\widetilde p)$. Then, by the maximum principle, we have $$u(x) \geq v_{d,\widetilde{p}}(x)\geq
-\log\left(\frac{d}{\sin\theta}\right)
-\log\left(1+\frac{d}{2d\sin\theta}\right)
=-\log d-\log\left(\frac{1+2\sin\theta}{2\sin^2\theta}\right).$$ We have the desired result.
Next, we prove the existence and the uniqueness of solutions in a large class of domains. Such a result is well known. We include it here for completeness.
\[thrm-ExistenceUniqueness\] Let $ \Omega $ be a bounded domain in $\mathbb R^2$ satisfying a uniform exterior cone condition. Then, there exists a unique solution $ u \in C^{\infty}(\Omega)$ of -.
The proof consists of two steps. In the first step, we prove the existence of solutions by a standard method; while in the second step, we prove the uniqueness with the help of Lemma \[lemma-ExteriorCone\].
[*Step 1.*]{} We first construct a solution. For each positive integer $k$, let $u_k\in C(\bar\Omega)\cap C^\infty(\Omega)$ be the solution of $$\begin{aligned}
\Delta u_{k} &= e^{2u_{k}}\quad \text{in } \Omega,\\
u_{k} &=k \quad\text{on } \partial\Omega.\end{aligned}$$ By the maximum principle, we have $u_k\le u_{k+1}$. Moreover, by comparing with solutions in balls in $\Omega$ and by the standard estimates for elliptic equations, we obtain, for any $k\ge 1$ and any subdomain $\Omega'\subset\subset\Omega$, $$|u_k|_{L^\infty(\Omega')}\le C_1(\Omega'),$$ and then, for any integer $m\ge 1$ and any $\alpha\in (0,1)$, $$|u_k|_{C^{m,\alpha}(\Omega')}\le C_2(m, \alpha, \Omega').$$ Therefore, there exists a $u\in C^\infty(\Omega)$ such that, for any $m\ge 1$ and any $\Omega'\subset\subset\Omega$, $$u_ k \rightarrow u\quad\text{in }C^m(\Omega'),$$ and hence $u$ is a solution of . Moreover, $u$ satisfies by $u\ge u_k$ in $\Omega$ and $u_k=k$ on $\partial\Omega$.
[*Step 2.*]{} Let $u$ be the solution constructed in Step 1. Without loss of generality, we assume $\Omega$ contains the origin. Then, $u(\frac{x}{\varepsilon})+\log\frac{1}{\varepsilon}$ is a solution in $\varepsilon\Omega:=\{x:\,\frac{x}{\varepsilon}\in\Omega\}$, for $\varepsilon>0$. Hence, we may assume $\Omega\subset B_{1/2}$. Since the solution $u_{1/2, 0}$ of - in $B_{1/2}$ satisfies $u_{1/2, 0}\ge \log 4$, we have, by the maximum principle, $$u\ge \log 4\quad\text{in }\Omega.$$
Suppose $v$ is another solution of - in $\Omega$. By the construction of $u$ in Step 1 and the maximum principle, we have $$u\leq v \quad\text{in }\Omega.$$ Set $$w=\frac{v}{u}.$$ Then, $w\geq1 $ in $\Omega$ and, by Lemma \[lemma-ExteriorCone\], $w(x)\rightarrow 1$ uniformly as $x\rightarrow\partial\Omega$. By the equation for $u$ and $v$, we have $$e^{2uw}=e^{2v}=\Delta v = (\Delta u)w+ 2\nabla u \cdot \nabla w +(\Delta w)u,$$ and hence $$\Delta w + 2\frac{\nabla u }{u}\cdot \nabla w =\frac1u\big[(e^{2u})^{w}-(e^{2u})w\big].$$ If $w $ is not equal to 1 identically, $w$ must assume its maximum $w(x_0)>1$ at some point $x_{0}\in\Omega$. Then at $x_0$, we have $$\nabla w(x_{0})=0, \quad \Delta w(x_{0})\leq0.$$ Next, we set $f(s)= a^{s}-as $, for some constant $a>e$. Then, $f(1)=0$ and $$f'(s)=a^{s}\log a-a>0\quad\text{for any }s> 1.$$ Hence, $f(s)>0$ for any $s>1$. Therefore, $$\frac1u\big[(e^{2u})^{w}-(e^{2u})w\big]>0\quad\text{at }x_0,$$ since $e^{2u}\geq 16$. This leads to a contradiction. Therefore, $u=v$ in $\Omega$.
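The final step rests on the elementary fact that $f(s)=a^s-as$ satisfies $f(1)=0$ and $f(s)>0$ for $s>1$ once $a>e$; a sampled numerical sketch:

```python
import math

# f(s) = a^s - a*s has f(1) = 0 and f'(s) = a^s*log(a) - a > 0 for
# s > 1 whenever a > e, so f(s) > 0 for s > 1 (checked on a sample).
for a in (math.e + 0.01, 16.0, 100.0):
    assert abs(a**1 - a * 1) < 1e-12          # f(1) = 0
    for s in (1.01, 1.5, 2.0, 5.0):
        assert a**s - a * s > 0
```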
Expansions near $C^{1,\alpha}$-boundary {#sec-C-1,alpha-boundary}
=======================================
In this section, we study the asymptotic behavior near $C^{1,\alpha}$-portions of $\partial\Omega$.
\[thrm-C-1,alpha-expansion\] Let $\Omega$ be a bounded domain in $\mathbb R^2$ and $\partial\Omega$ be $C^{1,\alpha}$ near $x_0\in\partial\Omega$ for some $\alpha\in (0,1]$. Suppose $u\in C^\infty(\Omega)$ is a solution of -. Then, $$|u+\log d|\le Cd^\alpha\quad\text{in }\Omega\cap B_r(x_0),$$ where $d$ is the distance to $\partial\Omega$, and $r$ and $C$ are positive constants depending only on $\alpha$ and the geometry of $\Omega$.
We take $R>0$ sufficiently small such that $\partial\Omega\cap B_{R}(x_0)$ is $C^{1,\alpha}$. We fix an $x\in\Omega\cap B_{R/4}(x_0)$ and take $p\in \partial\Omega$, also near $x_0$, such that $d(x)=|x-p|$. Then, $p\in \partial\Omega\cap B_{R/2}(x_0)$. By a translation and rotation, we assume $p=0$ and the $x_2$-axis is the interior normal to $\partial\Omega$ at 0. Then, $x$ is on the positive $x_2$-axis, with $d=d(x)=|x|$, and the $x_1$-axis is the tangent line of $\partial\Omega$ at 0. Moreover, a portion of $\partial \Omega$ near 0 can be expressed as a $C^{1,\alpha}$-function $\varphi$ of $x_1\in (-s_0,s_0)$, with $\varphi(0)=0$, and $$\label{eq-boundaryC1alpha}
|\varphi(x_1)|\le M|x_1|^{1+\alpha}\quad\text{for any }x_1\in (-s_0,s_0).$$ Here, $s_0$ and $M$ are positive constants chosen to be uniform, independent of $x$.
We first consider the case $\alpha=1$. For any $r>0$, the lower semi-circle of $$x_1^2+(x_2-r)^2=r^2$$ satisfies $x_2\ge x_1^2/(2r)$. By fixing a constant $r$ sufficiently small, implies $$B_r(re_2)\subset\Omega\text{ and }B_r(-re_2)\cap \Omega=\emptyset.$$ Let $u_{r, re_2}$ and $v_{r, -re_2}$ be the solutions of - in $B_r(re_2)$ and $\mathbb R^2\setminus B_r(-re_2)$, respectively. Then, by the maximum principle, we have $$v_{r,-re_2}\le u\le u_{r, re_2}\quad\text{in }B_r(re_2).$$ For the $x$ above in the positive $x_2$-axis with $|x|=d<r$, we obtain $$-\log d-\log\left(1+\frac{d}{2r}\right)\le u\le -\log d-\log\left(1-\frac{d}{2r}\right).$$ This implies the desired result for $\alpha=1$.
Next, we consider $\alpha\in (0,1)$. Recall that $x$ is in the positive $x_2$-axis and $|x|=d$. We first note $$\label{eq-boundaryC1alpha2}
|x_{1}|^{1+\alpha}\le d^{1+\alpha}+\frac{1}{d^{1-\alpha}}x_{1}^{2} \quad\text{for any }x_1\in\mathbb R.$$ This follows from the Hölder inequality, or more easily, by considering $|x_1|\le d$ and $|x_1|\ge d$ separately. Let $r=d^{1-\alpha}/(2M)$ and $q$ be the point on the positive $x_2$-axis such that $|q|=Md^{1+\alpha}+r$. By taking $d$ sufficiently small, and imply $$B_r(q)\subset\Omega\text{ and }B_r(-q)\cap \Omega=\emptyset.$$ Let $u_{r, q}$ and $v_{r, -q}$ be the solutions of - in $B_r(q)$ and $\mathbb R^2\setminus B_r(-q)$, respectively. Then, by the maximum principle, we have $$v_{r,-q}\le u\le u_{r, q}\quad\text{in }B_r(q).$$ For the $x$ above, dist$(x, \partial B_r(q))=d-Md^{1+\alpha}$ and dist$(x, \partial B_r(-q))=d+Md^{1+\alpha}$. Evaluating at such an $x$, we obtain $$\begin{aligned}
&-\log(d+Md^{1+\alpha})-\log\left(1+\frac{M}{d^{1-\alpha}}(d+Md^{1+\alpha})\right)\\
&\qquad
\le u\le -\log(d-Md^{1+\alpha})-\log\left(1-\frac{M}{d^{1-\alpha}}(d-Md^{1+\alpha})\right).\end{aligned}$$ This implies the desired result for $\alpha\in (0,1)$.
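The elementary inequality $|x_1|^{1+\alpha}\le d^{1+\alpha}+d^{\alpha-1}x_1^2$ used above follows by considering $|x_1|\le d$ and $|x_1|\ge d$ separately; a brute-force numerical sketch (the grid is an arbitrary choice):

```python
# Check |x1|^(1+a) <= d^(1+a) + d^(a-1)*x1^2 on a sample grid.
def holds(alpha, d, x1, eps=1e-12):
    return abs(x1) ** (1 + alpha) <= d ** (1 + alpha) + d ** (alpha - 1) * x1 * x1 + eps

assert all(
    holds(alpha, d, -1.0 + k / 25.0)
    for alpha in (0.25, 0.5, 0.9)
    for d in (0.01, 0.1, 1.0)
    for k in range(51)
)
```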
Expansions near $C^{2,\alpha}$-boundary {#sec-C-2,alpha-boundary}
=======================================
In this section, we study the asymptotic behavior near $C^{2,\alpha}$-portions of $\partial\Omega$. It is straightforward to derive the upper bound, but extra work is needed for the lower bound. We also note that the curvature of the boundary is only $C^\alpha$ in the present case.
\[thrm-C-2,alpha-expansion\] Let $\Omega$ be a bounded domain in $\mathbb R^2$ and $\partial\Omega$ be $C^{2,\alpha}$ near $x_0\in\partial\Omega$ for some $\alpha\in (0,1)$. Suppose $u\in C^\infty(\Omega)$ is a solution of -. Then, $$\left|u+\log d-\frac{1}{2}\kappa d\right|\le Cd^{1+\alpha}\quad\text{in }\Omega\cap B_r(x_0),$$ where $d$ is the distance to $\partial\Omega$, $\kappa$ is the curvature of $\partial\Omega$, and $r$ and $C$ are positive constants depending only on $\alpha$ and the geometry of $\Omega$.
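Before turning to the proof, one can check the estimate against the explicit disk solutions of Section \[sec-Existence\]: for $\Omega=B_r(x_0)$ one has $\kappa=1/r$ and $u+\log d-\frac{d}{2r}=-\log\bigl(1-\frac{d}{2r}\bigr)-\frac{d}{2r}=O(d^2)$. A numerical sketch (unit disk, sample distances):

```python
import math

# Unit disk: kappa = 1/r with r = 1, and u = -log d - log(1 - d/(2r));
# the remainder u + log d - kappa*d/2 should be of order d^2.
r = 1.0
for d in (0.1, 0.01, 0.001):
    u = -math.log(d) - math.log(1 - d / (2 * r))
    err = u + math.log(d) - 0.5 * (1 / r) * d
    assert abs(err) <= d * d
```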
We take $R>0$ sufficiently small such that $\partial\Omega\cap B_{2R}(x_0)$ is $C^{2,\alpha}$ and that $d$ is $C^{2,\alpha}$ in $\Omega\cap B_{2R}(x_0)$. The proof consists of several steps.
[*Step 1.*]{} Set $$u=v-\log d.$$ A straightforward calculation yields $$\textit{S}(v)=0 \quad\text{in }\Omega,$$ where $$\label{eq for v}
\textit{S}(v)=d\Delta v-\Delta d-\frac{1}{d}(e^{2v}-1).$$ By Theorem \[thrm-C-1,alpha-expansion\] for $\alpha=1$, we have $$|v|\leq C_{0}d\quad\text{in }\Omega\cap B_{R}(x_0),$$ for some constant $C_{0}$ depending only on the geometry of $\Omega$. In particular, $v=0$ on $\partial\Omega\cap B_{R}(x_0)$.
To proceed, we denote by $(x', d)$ the principal coordinates in $\bar\Omega\cap B_R(x_0)$. Then, $$\Delta v=\frac{\partial^{2}v}{\partial d^{2}} + G\frac{\partial^{2}v}{\partial x'^{2}}
+I_{x'} \frac{\partial v}{\partial x'}+I_d \frac{\partial v}{\partial d},$$ where $G$, $I_{x'}$ and $I_d$ are at least continuous functions in $\bar\Omega\cap B_R(x_0)$. We note that $G$ has a positive lower bound and $I_d$ has the form $$\label{laplace_d}
I_d = -\kappa +O(d^\alpha),$$ where $\kappa$ is the curvature of $\partial\Omega$. Set, for any constant $r>0$, $$G_r =\{(x',d):|x'|\leq r, 0<d<r \}.$$
[*Step 2.*]{} We now construct supersolutions and prove an upper bound of $v$. We set $$\label{eq-definition_w}w(x)=d(x'^{2}+d^2)^{\frac{\alpha}{2}},$$ and, for some positive constants $A$ and $B$ to be determined, $$\overline{v}=\frac{1}{2}\kappa(0)d+ Aw+Bd^{1+\alpha}.$$ We write $$\textit{S}(\overline{v})=d\Delta \overline{v}-\Delta d-\frac2d\overline{v}
-\frac1d(e^{2\overline{v}}-1-2\overline{v}).$$ First, we note $$e^{2\overline{v}}\ge 1+2\overline{v}.$$ Then, $$\textit{S}(\overline{v})
\le d\Delta \overline{v}-\Delta d-\frac2d\overline{v}.$$ Hence, $$\begin{aligned}
\textit{S}(\overline{v})&\le \frac12\kappa(0)d\Delta d+Ad\Delta w+Bd\Delta d^{1+\alpha}\\
&\qquad-\Delta d
-\kappa(0)-2A(x'^2+d^2)^{\frac\alpha2}-2Bd^\alpha.\end{aligned}$$ Straightforward calculations yield $$|d \Delta w| \leq C(d^{\alpha}+w),$$ where $C$ is a positive constant depending only on the geometry of $\Omega$ near $x_0$. Note $$|\Delta d+\kappa(0)|\leq K(|x'|^2+d^2)^{\frac\alpha2},$$ for some positive constant $K $ depending only on the geometry of $\Omega$ near $x_0$. Then, $$\begin{aligned}
\textit{S}( \overline{v}) &\leq CAd^{\alpha} +B[\alpha(\alpha+1)+(1+\alpha)dI_d-2]d^{\alpha}\\
&\qquad+(CA d-2A)(x'^{2}+d^2)^{\frac{\alpha}{2}}+K(|x'|^2+d^2)^{\frac\alpha2} +C d.\end{aligned}$$ Since $\alpha<1$, we can take $r$ sufficiently small such that $$2-\alpha(\alpha+1)-(1+\alpha)dI_d\ge c_0\quad\text{in } G_r,$$ for some positive constant $c_0$. By taking $r$ smaller and choosing $A\ge K+C$, we have $$\textit{S}( \overline{v}) \leq CAd^{\alpha} -c_0Bd^{\alpha}
\quad\text{in }G_r.$$ We then increase $A$ such that $$C_0d \leq \frac{1}{2}\kappa(0)d+ Ad(x'^{2}+d^2)^{\frac{\alpha}{2}}+Bd^{1+\alpha} \quad\text{on } \partial G_r.$$ Finally, we take $B$ large such that $$c_0B\ge CA.$$ Therefore, $$\begin{aligned}
\textit{S}( \overline{v}) &\leq \textit{S}(v) \quad \text{in }G_r, \\
v &\leq \overline{v} \quad \text{on }\partial G_r.\end{aligned}$$ By the maximum principle, we have $ v \leq \overline{v}$ in $G_r$.
[*Step 3.*]{} We now construct subsolutions and prove a lower bound of $v$. We take the same $w$ as in and set, for some positive constants $A$ and $B$ to be determined, $$\underline{v}=\frac{1}{2}\kappa(0)d-Aw-Bd^{1+\alpha}.$$ We first assume $$\label{subsl-bound1}
|\kappa(0)|r +A 2^{\frac{\alpha}{2}}r^{1+\alpha}+Br^{1
+\alpha} \le \frac{2-\alpha(\alpha+1)}{16}.$$ Then, $$\left|\frac{1}{d}(e^{2\underline{v}}-1
-2\underline{v})\right|
\le 2\kappa^2(0)d + \frac{1}{2}[2-\alpha(\alpha+1)][A(x'^{2}+d^2)^{\frac{\alpha}{2}}+Bd^{\alpha}].$$ Arguing as in Step 2, we obtain $$\begin{aligned}
\textit{S}( \underline{v})
&\geq -CA d^{\alpha} +B\left[1-\frac{1}{2}\alpha(\alpha+1)-(1+\alpha)dI_d\right]d^{\alpha}\\
&\quad+(A-CA d)(x'^{2}+d^2)^{\frac{\alpha}{2}}-K(|x'|^2+d^2)^{\frac\alpha2} - C d.\end{aligned}$$ We require $$\label{eq-require_r}
d\le\frac{1}{2C}, \quad 1-\frac{1}{2}\alpha(\alpha+1)-(1+\alpha)dI_d\ge c_0\quad\text{in }G_r,$$ for some positive constant $c_0$. If $A\geq 2K+2C$, we have $$\textit{S}( \underline{v}) \geq -CAd^{\alpha} +c_0Bd^{\alpha}.$$ If $$c_0B\ge CA,$$ we have $\textit{S}( \underline{v}) \geq 0$. In order to have $ v \geq \underline{v}$ on $\partial G_r$, it is sufficient to require $$|\kappa(0)|+C_0 \leq Ar^{\alpha}.$$ In summary, we first choose $$A=\frac{|\kappa(0)|+C_0 }{r^{\alpha}},\quad B=\frac{AC}{c_0},$$ for some $r$ small to be determined. Then, we choose $r$ small satisfying such that $A\geq 2K+2C$ and holds. Therefore, we have $$\begin{aligned}
\textit{S}( \underline{v}) &\geq \textit{S}(v) \quad \text{in }G_r ,\\
v &\geq \underline{v} \quad \text{on }\partial G_r.\end{aligned}$$ By the maximum principle, we have $ v \geq \underline{v}$ in $G_r$.
[*Step 4.*]{} Therefore, we obtain $$\underline{v}\le v\le \overline{v}\quad\text{in }G_r.$$ By taking $x'=0$, we obtain, for any $d \in (0, r)$, $$\left|v(0,d)-\frac{1}{2}\kappa(0) d\right|\leq Cd^{1+\alpha}.$$ This is the desired estimate.
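The expansion in Theorem \[thrm-C-2,alpha-expansion\] can be illustrated on a disk $B_R$, where the solution of - is the explicit function $u=-\log\big((R^2-|x|^2)/(2R)\big)$ and $\kappa=1/R$. The following Python snippet (a numerical sanity check, not part of the proof) confirms that the remainder $u+\log d-\frac12\kappa d$ is $O(d^2)$:

```python
import math

R = 2.0
kappa = 1.0 / R

def u(s):
    # explicit solution in the disk B_R, evaluated at |x| = s
    return -math.log((R * R - s * s) / (2.0 * R))

for d in [0.1, 0.05, 0.01, 0.001]:
    err = abs(u(R - d) + math.log(d) - 0.5 * kappa * d)
    # remainder is -log(1 - d/(2R)) - d/(2R), of size about d^2/(8R^2)
    assert err <= d * d
print("disk expansion verified")
```

Here $d=R-|x|$, and the exact remainder $-\log(1-d/(2R))-d/(2R)$ makes the quadratic rate explicit.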
We point out that the proof above can be adapted to yield a similar result as in Theorem \[thrm-C-2,alpha-expansion\] for the equation .
Expansions near Isolated Singular Boundary Points {#sec-IsolatedSingular}
=================================================
In this section, we study the asymptotic behavior of $u$ near isolated singular boundary points. Throughout this section, we will adopt notations from complex analysis and denote by $z=(x,y)$ points in the plane.
We fix a boundary point; in the following, we always assume this is the origin. We assume $\partial\Omega$ in a neighborhood of the origin consists of two $C^{2}$ curves $\sigma_1$ and $\sigma_2$. Here, the origin is an end of both $\sigma_1$ and $\sigma_2$. Suppose $l_1$ and $l_2$ are two rays from the origin such that $\sigma_1$ and $\sigma_2$ are tangent to $l_1$ and $l_2$ at the origin, respectively. The rays $l_1$ and $l_2$ divide $\mathbb R^2$ into two cones and one of the cones is naturally defined as the tangent cone of $\Omega$ at the origin. By a rotation, we assume the tangent cone $V_\mu$ is given by, for some positive constant $\mu\in (0,2)$, $$\label{eq-Cone}
V_\mu=\{(r, \theta)\in\mathbb R^2:\, 0<r<\infty, \, 0<\theta<\mu \pi\}.$$ Here, we used the polar coordinates in $\mathbb R^2$. In fact, the tangent cone $V_\mu$ can be characterized by the following: For any $\varepsilon>0$, there exists an $r_0>0$ such that $$\{(r,\theta): r\in(0,r_0), \theta\in(\varepsilon, \mu\pi-\varepsilon)\}
\subset \Omega\cap B_{r_0}\subset
\{(r,\theta): r\in (0,r_0), \theta\in (-\varepsilon, \mu\pi+\varepsilon)\}.$$
Our goal is to approximate solutions near an isolated singular boundary point by the corresponding solutions in tangent cones. To this end, we express explicitly the solutions in tangent cones. For any constant $\mu\in (0,2)$, consider the unbounded cone $V_{\mu}$ defined by . Then, the solution of - in $V_\mu$ is given by $$\label{eq-Solution-Cone}
v_\mu= -\log\left(\mu r \sin\frac{\theta}{\mu}\right).$$ For $\mu\in (0,1)$ and $\theta\in (0, \mu\pi/2)$, we have $d=r\sin\theta$ and $$\label{eq-Solution-Cone1}v_\mu=-\log d-\log\frac{\mu\sin\frac{\theta}\mu}{\sin\theta}.$$ For $\mu\in (1,2)$, if $\theta\in (0, \pi/2)$, we have $d=r\sin\theta$ and the identity above; if $\theta\in (\pi/2, \mu\pi/2)$, we have $d=r$ and $$\label{eq-Solution-Cone2}v_\mu=-\log d-\log\left(\mu\sin\frac{\theta}{\mu}\right).$$ We note that the second terms in and are constant along the ray from the origin. This suggests that Lemma \[lemma-ExteriorCone\] cannot be improved in general if the boundary has a singularity.
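The explicit cone solution can also be checked numerically. The following Python snippet (a sanity check outside the formal development, with the sample value $\mu=1.5$) verifies $\Delta v_\mu=e^{2v_\mu}$ at a point of $V_\mu$ by a five-point finite-difference Laplacian:

```python
import math

MU = 1.5   # cone opening angle MU * pi

def v(x, y):
    # v_mu = -log(mu * r * sin(theta/mu)) in polar coordinates
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return -math.log(MU * r * math.sin(theta / MU))

def laplacian(f, x, y, h=1e-3):
    # standard 5-point central-difference approximation of Delta f
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

x, y = 0.5, 0.7                      # a point inside the cone V_mu
lhs = laplacian(v, x, y)
rhs = math.exp(2.0 * v(x, y))
assert abs(lhs - rhs) < 1e-4 * rhs
print("cone solution satisfies the equation at the sample point")
```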
Next, we modify the solution in and construct super- and subsolutions. Define $$\label{eq-superSolution-Cone}
\overline{u}_\mu=v_\mu+\log \left(1+A|z|^{\frac{\sqrt{2}}{\mu}}\right),$$ and $$\label{eq-subSolution-Cone}\underline{u}_\mu=v_\mu-\log \left(1+A|z|^{\frac{1}{\mu}}\right),$$ where $v_\mu$ is given by and $A$ is a positive constant.
\[lemma-Super-sub-solutions\] Let $V_{\mu}$ be the cone defined in , and $\overline{u}_\mu$ and $\underline{u}_\mu$ be defined by and , respectively. Then, $\overline{u}_\mu$ is a supersolution and $\underline{u}_\mu$ is a subsolution of in $V_\mu$, respectively.
We calculate in polar coordinates. For functions of $r$ only, we have $$\Delta=\partial_{rr}+\frac1r\partial_r.$$ Note $r=|z|$. A straightforward calculation yields $$\Delta\left(\log \left(1+A|z|^{\frac{\sqrt{2}}{\mu}}\right)\right)
=\frac{2}{\mu^2r^2} \cdot \frac{Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}}
- \frac{2}{\mu^2r^2} \left(\frac{Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}} \right)^2.$$ Then, $$\begin{aligned}
\Delta \overline{u}_\mu
&=\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}+\frac{2}{\mu^2r^2}
\cdot \frac{Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}}
- \frac{2}{\mu^2r^2} \left(\frac{Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}} \right)^2\\
&=\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}
\left(1+\frac{2Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}} \sin^2\frac\theta\mu
- 2\left(\frac{Ar^{\frac{\sqrt{2}}{\mu}} }{1+Ar^{\frac{\sqrt{2}}{\mu}}} \right)^2 \sin^2\frac\theta\mu\right)\\
&\le\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}\left(1+2Ar^{\frac{\sqrt{2}}{\mu}}
\right)
\le \left(\frac{1}{\mu r\sin\frac\theta\mu}\right)^2\left(1+Ar^{\frac{\sqrt{2}}{\mu}} \right)^2=e^{2\overline{u}_\mu}.\end{aligned}$$ Hence, $\overline{u}_\mu$ is a supersolution in $V_\mu$.
The proof for $\underline{u}_\mu$ is similar. In fact, we have $$\begin{aligned}
\Delta \underline{u}_\mu
&=\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}
-\frac{1}{\mu^2r^2} \cdot \frac{Ar^{\frac{1}{\mu}} }{1+Ar^{\frac{1}{\mu}}}
+ \frac{1}{\mu^2r^2} \left(\frac{Ar^{\frac{1}{\mu}} }{1+Ar^{\frac{1}{\mu}}} \right)^2\\
&=\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}
\left(1-\frac{Ar^{\frac{1}{\mu}} }{1+Ar^{\frac{1}{\mu}}} \sin^2\frac\theta\mu
+\left(\frac{Ar^{\frac{1}{\mu}} }{1+Ar^{\frac{1}{\mu}}} \right)^2 \sin^2\frac\theta\mu\right)\\
& \ge\frac{1}{\mu^2r^2\sin^2\frac\theta\mu}\left(1-\frac{Ar^{\frac{1}{\mu}} }{1+Ar^{\frac{1}{\mu}}} \right)
\ge
\left(\frac{1}{\mu r\sin\frac\theta\mu}\right)^2\left(1+Ar^{\frac{1}{\mu}} \right)^{-2}=e^{2\underline{u}_\mu}.\end{aligned}$$ Hence, $\underline{u}_\mu$ is a subsolution in $V_\mu$.
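The differential inequalities established in the proof can be observed numerically as well. The snippet below (a sanity check with the sample values $\mu=1.5$ and $A=2$, not part of the argument) verifies $\Delta\overline{u}_\mu\le e^{2\overline{u}_\mu}$ and $\Delta\underline{u}_\mu\ge e^{2\underline{u}_\mu}$ at a few points of $V_\mu$ by finite differences:

```python
import math

MU, A = 1.5, 2.0

def v(x, y):
    # exact cone solution v_mu
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    return -math.log(MU * r * math.sin(theta / MU))

def u_bar(x, y):
    # supersolution: v_mu + log(1 + A r^(sqrt(2)/mu))
    r = math.hypot(x, y)
    return v(x, y) + math.log(1.0 + A * r ** (math.sqrt(2) / MU))

def u_low(x, y):
    # subsolution: v_mu - log(1 + A r^(1/mu))
    r = math.hypot(x, y)
    return v(x, y) - math.log(1.0 + A * r ** (1.0 / MU))

def laplacian(f, x, y, h=1e-3):
    return (f(x+h, y) + f(x-h, y) + f(x, y+h) + f(x, y-h) - 4*f(x, y)) / (h*h)

for (x, y) in [(0.5, 0.7), (0.2, 0.3), (1.0, 1.5)]:
    assert laplacian(u_bar, x, y) <= math.exp(2 * u_bar(x, y)) + 1e-6
    assert laplacian(u_low, x, y) >= math.exp(2 * u_low(x, y)) - 1e-6
print("super/subsolution inequalities hold at the sample points")
```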
Next, we describe how solutions of - change under one-to-one holomorphic mappings.
\[lemma-solution\_under\_conformal\_trans\] Let $ \Omega_1$ and $\Omega_2$ be two domains in $\mathbb R^2$. Suppose $u_2\in C^{\infty}(\Omega_2)$ is a solution of in $\Omega_2$ and $f$ is a one-to-one holomorphic function from $\Omega_1$ onto $\Omega_2$. Then, $$u_1(z)=u_2(f(z))+\log|f'(z)|$$ is a solution of in $\Omega_1$.
Note that $g_2=e^{ 2u_2 }(dx\otimes dx+dy\otimes dy)$ is a complete metric with constant Gauss curvature $-1$ on $\Omega_{2}$. Since the Gauss curvature is preserved under pull-back by a conformal mapping, $g_1=f^{*}g_2=e^{ 2u_1 }(dx\otimes dx+dy\otimes dy)$ is a complete metric with constant Gauss curvature $-1$ on $\Omega_{1}$. Hence, $u_1$ solves in $\Omega_1$.
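Lemma \[lemma-solution\_under\_conformal\_trans\] can be illustrated with $\Omega_1$ the first quadrant, $\Omega_2$ the upper half-plane, $f(z)=z^2$ (a choice made here for illustration), and the half-plane solution $u_2(w)=-\log(\operatorname{Im} w)$. The snippet below (a numerical sanity check, not part of the proof) verifies that the transformed function $u_1(z)=u_2(f(z))+\log|f'(z)|$ satisfies the equation:

```python
import math

def u1(x, y):
    # u2 = -log(Im w) solves the equation in the upper half-plane;
    # f(z) = z^2 maps the open first quadrant onto it, with f'(z) = 2z
    z = complex(x, y)
    w = z * z
    return -math.log(w.imag) + math.log(abs(2 * z))

def laplacian(f, x, y, h=1e-3):
    return (f(x+h, y) + f(x-h, y) + f(x, y+h) + f(x, y-h) - 4*f(x, y)) / (h*h)

x, y = 0.8, 0.6                      # a point in the first quadrant
lhs = laplacian(u1, x, y)
rhs = math.exp(2 * u1(x, y))
assert abs(lhs - rhs) < 1e-4 * rhs
print("transformed function solves the equation at the sample point")
```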
Next, we prove that asymptotic expansions near singular boundary points are local properties.
\[lemma-Localization\] Let $\Omega_{1}$ and $\Omega_{2}$ be two domains which coincide in a neighborhood of the origin and let $V_\mu$ be the tangent cone of $\Omega_1$ and $\Omega_2$ at the origin, for some $\mu\in (0,2)$. Suppose $u_1$ and $u_2$ are the solutions of - in $\Omega_{1}$ and $\Omega_{2}$, respectively. Then, $$\label{sol-loc}
u_1=u_2+O(|z|^{\frac{1}{\mu}}).$$
Take $\widetilde{\mu}$ with $\mu<\widetilde{\mu}<\min \{\sqrt{2}\mu ,2\}$ and set $$\widetilde{V}_{\widetilde{\mu}}=\left\{(r, \theta)\in\mathbb R^2:\, 0<r<\infty,
\, -\frac{\widetilde{\mu}-\mu}{2}\pi<\theta<\frac{\widetilde{\mu}+\mu}{2}\pi\right\}.$$ For some constant $\delta_{1}>0,$ we have $$\Omega_1\cap B_{\delta_1}\subseteq \widetilde{V}_{\widetilde{\mu}}.$$ Set $$\widetilde \theta=\theta+\frac12(\widetilde \mu-\mu)\pi.$$ By Lemma \[lemma-ExteriorCone\], we have, for $A_1$ sufficiently large, $$u_1(z)\ge-\log\left(\widetilde{\mu } |z| \sin\frac{\widetilde{\theta}}{\widetilde{\mu}}\right)
-\log \left(1+A_1|z|^{\frac{1}{\widetilde{\mu}}}\right)
\quad\text{on } \Omega_1\cap \partial B_{\delta_1}.$$ The estimate above obviously holds on $\partial\Omega_1\cap B_{\delta_1}$. By Lemma \[lemma-Super-sub-solutions\] and the maximum principle, we have $$\label{eq-subsolution-3}
u_1(z)\ge-\log\left(\widetilde{\mu } |z| \sin\frac{\widetilde{\theta}}{\widetilde{\mu}}\right)
-\log \left(1+A_1|z|^{\frac{1}{\widetilde{\mu}}}\right)
\quad\text{in } \Omega_1\cap B_{\delta_1}.$$ In particular, we can take $\delta_2<\delta_1$ such that $$e^{2u_1}\ge\frac{1}{2\mu^{2}|z|^2}\quad\text{in } \Omega_1\cap B_{\delta_2}.$$ As in the proof of Lemma \[lemma-Super-sub-solutions\], we can verify that $u_1-\log \left(1+A|z|^{\frac{1}{\mu}}\right)$ is a subsolution of in $\Omega_1\bigcap B_{\delta_2}$. By Lemma \[lemma-ExteriorCone\] and the maximum principle, we have, for $A$ sufficiently large, $$u_1\leq u_2 + \log \left(1+A|z|^{\frac{1}{\mu}}\right)\quad\text{in } \Omega_1\cap B_{\delta_2}.$$ Similarly, we have $$u_2\leq u_1 + \log \left(1+A|z|^{\frac{1}{\mu}}\right)\quad\text{in } \Omega_1\cap B_{\delta_2}.$$ This implies the desired result.
Now we prove a simple calculus result.
\[curve-distance-x-x’-1\] Let $\sigma$ be a curve defined by a function $y=\varphi(x)\in C^{1,\alpha}([0, \delta])$, for some constants $\alpha\in (0,1]$ and $\delta> 0$, satisfying $\varphi(0)=0$ and $$|\varphi' (x)| \leq Mx^{\alpha},$$ for some positive constant $M$. For any given point $z=(x,y)$ with $0 < x <\delta$ and $y>\varphi(x)$, let $p=(x', \varphi(x'))$ be the closest point to $z$ on $\sigma$ with distance $d$. Then, for $|z|$ sufficiently small, $$x' \le 2|z|.$$ Moreover, if $|y| \leq x/4 $, then $$|x-x'|\le C d x^{\alpha},$$ where $C$ is a positive constant depending only on $M$ and $\alpha$.
First, we note $d\le |z|$ since $d$ is the distance of $z$ to $\sigma$. Then, $$x'\le |p|\le |z|+ |z-p|=|z|+d\le 2|z|.$$ Next, for $x'\in (0,\delta)$, $x'$ is characterized by $$\frac{d}{dt}[(x-t)^2 + (y- \varphi(t))^2 ]|_{t=x'}=0,$$ or $$x-x'= (y- \varphi(x'))\varphi' (x').$$ If $|y| \leq x/4 $, then $|z|\le 5x/4$ and hence $x'\le 5x/2$. Moreover, $|y-\varphi(x')|\le d$. Then, $$\label{curve -distance-x-x'-2}
|x-x'| \leq d | \varphi' (x')|.$$ This implies the desired result.
We are ready to discuss the case when the opening angle of the tangent cone of $\Omega$ at the origin is less than $\pi$.
\[thrm-SmallAngles\] Let $\Omega$ be a bounded domain in $\mathbb R^2$ and $\partial\Omega$ in a neighborhood of the origin consist of two $C^{2}$-curves $\sigma_1$ and $\sigma_2$ intersecting at the origin with an angle $\mu\pi$, for some constant $\mu\in (0,1)$. Suppose $ u \in C^{\infty}(\Omega)$ is a solution of -. Then, for any $z\in \Omega\cap B_\delta$, $$\label{eq-main_estimate_small}
\left|u(z)-f_\mu(z)\right| \leq Cd(z),$$ where $f_\mu$ is given by $$\label{eq-definition_f0}
f_\mu(z)=- \log \left(\mu |z| \sin \frac{\arcsin \frac{d(z)}{|z|}}{\mu} \right),$$ $d$ is the distance to $\partial\Omega$, and $\delta$ and $C$ are positive constants depending only on the geometry of $\partial\Omega$.
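When $\Omega$ is the exact cone $V_\mu$ with $\mu<1$, the distance to the boundary satisfies $\arcsin(d/|z|)=\min(\theta,\mu\pi-\theta)$, and $f_\mu$ coincides with the cone solution . The snippet below (a numerical sanity check with the sample value $\mu=0.6$, not part of the proof) confirms this identity:

```python
import math

MU = 0.6   # opening angle MU * pi < pi

def v_mu(r, theta):
    # exact cone solution
    return -math.log(MU * r * math.sin(theta / MU))

def f_mu(r, theta):
    # distance to the nearer bounding ray of the cone V_mu
    d = r * math.sin(min(theta, MU * math.pi - theta))
    return -math.log(MU * r * math.sin(math.asin(d / r) / MU))

for theta in [0.1, 0.4, 0.9, MU * math.pi - 0.1]:
    for r in [0.01, 0.5, 2.0]:
        assert abs(f_mu(r, theta) - v_mu(r, theta)) < 1e-12
print("f_mu coincides with the exact cone solution")
```

For $\theta>\mu\pi/2$ the identity uses $\sin((\mu\pi-\theta)/\mu)=\sin(\pi-\theta/\mu)=\sin(\theta/\mu)$.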
We denote by $d_1$ and $d_2$ the distances to $\sigma_1$ and $ \sigma_2$, respectively. We only consider the case $d_{1}=d\leq d_2$. We also denote by $M$ the $C^{2}$-norm of $\sigma_1$ and $\sigma_2$. In the following, $C$ and $\delta$ are positive constants depending only on the geometry of $\partial\Omega$. We will prove with $d=d_1$.
Consider the conformal homeomorphism $T: z\mapsto z^{\frac{1}{\mu}}$. For $$z=(x,y)=(|z| \cos \theta, |z| \sin \theta),$$ we write $$T(z)=\widetilde{z}=(\widetilde{x}, \widetilde y)
=\left(|z|^{\frac{1}{\mu}} \cos \frac{\theta}{\mu}, |z|^{\frac{1}{\mu}} \sin \frac{\theta}{\mu}\right).$$ By restricting to a small neighborhood of the origin, we assume $\sigma_1$ and $\sigma_2$ are curves over their tangent lines at the origin. Set $\widetilde \sigma_i=T(\sigma_i)$, $i=1, 2$, and $\widetilde \sigma=\widetilde\sigma_1\cup \widetilde\sigma_2$. We first study the regularity of $\widetilde\sigma$. By expressing $\widetilde \sigma$ by $\widetilde y=\widetilde\varphi(\widetilde x)$, we claim $$\label{eq-estimates_on_new_curve}
|\widetilde{\varphi} (\widetilde{x})| \leq \widetilde{M} \widetilde{x}^{1+\mu}, \quad
|\widetilde{\varphi}'(\widetilde{x})|\leq \widetilde{M}\widetilde{x}^{\mu}, \quad
|\widetilde{\varphi}''(\widetilde{x})|\leq \widetilde{M}\widetilde{x}^{\mu-1},$$ where $\widetilde M$ is a positive constant depending only on $M$ and $\mu$.
To prove , we assume $\sigma_1$ is given by the function $y=\varphi_1(x)$ satisfying $ \varphi_1(0) =0$, $\varphi_1'(0) =0$ and $$|\varphi_1'' (x)| \leq M.$$ Assume $\widetilde \sigma_1=T(\sigma_1)$ is given by $\widetilde{y} = \widetilde{\varphi}_1 (\widetilde{x})$. To prove the estimate of $\widetilde\varphi_1$, we note $|y|\le Cx^2$ on $\sigma_1$ and $|z|=O(\widetilde x^\mu)$ on $\widetilde\sigma_1$ for $|z|$ sufficiently small. Then, $$|\widetilde y|=|z|^{\frac1\mu-1}\left||z|\sin\frac\theta \mu\right|
\le C|z|^{\frac1\mu-1}|y|\le C|z|^{\frac1\mu-1}x^2
\le C|z|^{1+\frac1\mu}\le C\widetilde x^{1+\mu}.$$ This is the first estimate in . To prove estimates of derivatives of $\widetilde\varphi_1$, we first note that $(\widetilde x, \widetilde y)$ on $\widetilde \sigma_1$ is given by $$\widetilde{x}=(x^{2}+\varphi_{1}(x)^{2})^{\frac{1}{2\mu}}
\cos\frac{\arcsin \frac{\varphi_{1}(x)}{(x^{2}+\varphi_{1}(x)^{2})^{\frac{1}{2}}}}{\mu},$$ and $$\widetilde{y}=(x^{2}+\varphi_{1}(x)^{2})^{\frac{1}{2\mu}}
\sin\frac{\arcsin \frac{\varphi_{1}(x)}{(x^{2}+\varphi_{1}(x)^{2})^{\frac{1}{2}}}}{\mu}.$$ Straightforward calculations yield $$\begin{aligned}
\frac{d\widetilde{x}}{d x}&=\frac{1}{\mu}x^{\frac{1}{\mu}-1}(1+O(x)),\\
\frac{d^{2}\widetilde{x}}{d x^{2}}&=\frac{1}{\mu}\left(\frac{1}{\mu}-1\right)
x^{\frac{1}{\mu}-2}(1+O(x)),\end{aligned}$$ and $$\begin{aligned}
\left|\frac{d\widetilde{y}}{d x}\right| &\leq \frac{1}{\mu}
\left(\frac{1}{\mu}+1\right)M x^{\frac{1}{\mu}}(1+O(x)),\\
\left|\frac{d^{2}\widetilde{y}}{d x^{2}}\right|
&\leq \frac{1}{\mu^{2}}\left(\frac{1}{\mu}+1\right)M x^{\frac{1}{\mu}-1}(1+O(x)).\end{aligned}$$ With $x=O(\widetilde{x}^{\mu})$, we get the second and third estimates in . This finishes the proof of for $\widetilde x\ge 0$. A similar argument holds for $\widetilde x<0$.
We now discuss three cases for $z\in \Omega\cap B_\delta$ with $d_1(z)\le d_2(z)$, for $\delta$ sufficiently small. For simplicity, we set $$\begin{aligned}
\label{eq-Definition_Omega} \begin{split}
\Omega_1&=\{z\in\Omega:\, d_1(z)> c_0|z|\},\\
\Omega_2&=\{z\in\Omega:\, c_1|z|^2<d_1(z)< c_0|z|\},\\
\Omega_3&=\{z\in\Omega:\, d_1(z)< c_1|z|^2\},\end{split}\end{aligned}$$ and $$\begin{aligned}
\label{eq-Definition_gamma}\begin{split}
\gamma_1&=\{z\in\Omega:\, d_1(z)= c_0|z|\},\\
\gamma_2&=\{z\in\Omega:\, d_1(z)= c_1|z|^2\},\end{split}\end{aligned}$$ where $c_0$ and $c_1$ are appropriately chosen constants with $c_0 <\frac{1}{2} \mu \arctan \frac{1}{4}$.
[*Case 1.*]{} We consider $z\in \Omega_1\cap B_\delta$.
Set $$\Omega_{+}= \Omega\cap B_{\delta}, \quad
\Omega_{-}=\Omega \cup B_{\delta}^{c}.$$ Let $u_+$ and $u_-$ be the solutions of - in $\Omega_{+}$ and $\Omega_{-}$, respectively. Then, we have $$\label{case1-1}u_- \leq u \leq u_+ \quad \textrm{in }\Omega_+.$$ We take $\delta$ small so that $T$ is one-to-one on $\Omega_+$.
Set $\widetilde{\Omega}_{+}=T(\Omega_+)$ and let $\widetilde u_+$ be the solution of - in $\widetilde\Omega_+$. By , the curve $\widetilde\sigma$ given by $\widetilde y=\widetilde \varphi(\widetilde x)$ satisfies $$-\widetilde M|\widetilde x|^{1+\mu}\le \widetilde\varphi(\widetilde x)\le \widetilde M|\widetilde x|^{1+\mu}.$$ Theorem \[thrm-C-1,alpha-expansion\] implies, for $\widetilde z$ close to the origin, $$\label{case1-lower}
\widetilde{u}_+(\widetilde{z})\leq -\log\widetilde{ d}_{1} +O(\widetilde{d}_{1}^{\mu}),$$ and $$\label{case1-upper}
\widetilde{u}_+(\widetilde{z})\geq -\log \widetilde{ d}_{2} +O(\widetilde{d}_{2}^{\mu}),$$ where $\widetilde{d}_1$ and $\widetilde{d}_2$ are the distances from $\widetilde{z}$ to the curves $\widetilde{y}=\widetilde{M}|\widetilde{x}|^{1+\mu}$ and $\widetilde{y}=-\widetilde{M}|\widetilde{x}|^{1+\mu}$, respectively. Let $(\widetilde x', \widetilde y')$ be the point on $\widetilde{y}=\widetilde{M}|\widetilde{x}|^{1+\mu} $ realizing the distance from $\widetilde z$. Then, $$\widetilde y-\widetilde y'\le \widetilde d_1\le \widetilde y,$$ and hence $$|\widetilde d_1-\widetilde y|\le \widetilde y'=\widetilde\varphi(\widetilde x')\le
\widetilde M|\widetilde x'|^{1+\mu}
\le C\left(|z|^{\frac1\mu}\right)^{1+\mu}=C|z|^{1+\frac1\mu}.$$ By $d\ge c_0|z|$, we have $\theta\ge \theta_0$ for some positive constant $\theta_0$, for $|z|$ sufficiently small, and then $|z|=O(y)=O(\widetilde y^\mu)$. Hence, $|\widetilde{d}_1-\widetilde{y}|\le C\widetilde y|z|$ and then $\widetilde d_1=\widetilde y(1+O(|z|))$. Therefore, $$\log \widetilde d_1=\log \widetilde y+O(|z|)=\log \widetilde y+O(d).$$ Next, we note $$\widetilde d_1^\mu\le |\widetilde z|^\mu=|z|\le Cd.$$ Similar estimates hold for $\widetilde d_2$. Then, and imply $$\label{eq-estimate_u_+}
\widetilde u_+(\widetilde z)=- \log \widetilde y+O(d).$$ Let $V_\mu$ be the tangent cone of $\Omega$ at the origin given by and $v$ be the corresponding solution in $V_\mu$ given by . Then, $T(V_\mu)$ is the upper half-plane and the solution $\widetilde v$ of - in $T(V_\mu)$ is given by $$\widetilde{v}(\widetilde{z})= -\log \widetilde{ y}.$$ Hence, implies $$\widetilde u_+(\widetilde z)= \widetilde v(\widetilde z)+O(d).$$ By Lemma \[lemma-solution\_under\_conformal\_trans\], we have $$u_+(z)=\widetilde{u}_+(\widetilde{z})+ \log \left(\frac{1}{\mu}|z|^{\frac{1}{\mu}-1}\right),$$ and $$v(z)=\widetilde{v}(\widetilde{z})+ \log\left(\frac{1}{\mu}|z|^{\frac{1}{\mu}-1}\right).$$ Therefore, we obtain $$\label{case1-2}u_+(z)=v(z)+O(d).$$
Next, we fix a point $P\in \Omega_{-}^{c}$ and consider the conformal homeomorphism $\widehat T: z\mapsto \frac{1}{z-P}$. We assume that $\widehat T$ maps $\Omega_{-}$ to $\widehat{\Omega}_{-}$, $V_{\mu}$ to $\widehat{V}_{\mu}$, $\sigma_i$ to $\widehat{\sigma}_i$, and $ l_i$ to $\widehat{l}_i$. Then, $\widehat{\sigma}_i$ and $ \widehat{l}_i$ are $C^{2}$-curves with bounded $C^{2}$-norms in a small neighborhood of $\widehat T(0)$ since $\widehat T$ is smooth in $\overline{B}_{|0P|/2}$. The tangent cone of $\widehat{\Omega}_{-}$ at $\widehat T(0)$, denoted by $\underline{V}$, has an opening angle $\mu\pi$ since $\widehat T$ is conformal. Let $\widehat{u}_{-}$, $\widehat v$ and $\underline{v}$ be the solutions of - in $\widehat{\Omega}_{-}$, $\widehat V_\mu$ and $\underline{V}$, respectively. By Lemma \[lemma-solution\_under\_conformal\_trans\], we have $$u_{-}(z)=\widehat{u}_{-}(\widehat{z})- 2\log|z- P|,$$ and $$v(z)=\widehat{v}(\widehat{z})- 2\log|z- P|.$$ By applying (for $u_+$ in $\Omega_{+} $) to $\widehat{u}_{-}$ and $\widehat v$ in $\widehat{\Omega}_{-}$ and $\widehat V_\mu$, respectively, we have $$\widehat{u}_{-}(\widehat{z}) = \underline{v}(\widehat{z}) + O(\widehat{d}),$$ and $$\widehat{v}(\widehat{z}) = \underline{v}(\widehat{z}) + O(\widehat{d}).$$ We note that the distance $\widehat d$ from $\widehat z$ to $\partial\widehat\Omega_-$ is comparable to that from $\widehat z$ to $\partial\widehat V_\mu$. Therefore, $$\widehat u_{-}(\widehat z) = \widehat v(\widehat z) + O(\widehat d),$$ and hence $$\label{case1-3}u_{-}(z) = v(z) + O(d).$$
By combining , and , we have $$u(z) = v(z) + O(d).$$ By the explicit expression of $v$ in , it is straightforward to verify $$v(z) = -\log \left(\mu |z| \sin \frac{\arcsin \frac{d}{|z|}}{\mu} \right) + O(d).$$ We hence have .
[*Case 2.*]{} We consider $z\in \Omega_2\cap B_\delta$ and distinguish two cases.
[*Case 2.1.*]{} First, we assume $T$ is one-to-one in $\Omega$. Set $\widetilde \Omega=T(\Omega)$ and let $\widetilde u$ be the solution of - in $\widetilde \Omega$. Let $\widetilde{p}=(\widetilde{x}', \widetilde{y}')$ be the closest point to $\widetilde{z}$ on $\widetilde{\sigma}_1$ with the distance $\widetilde{d}$. If $c_0$ is small, then $|\widetilde y|\le c_*|\widetilde x|$ for some constant $c_*$ small. By Lemma \[curve-distance-x-x’-1\], we have $$\widetilde{x}'= \widetilde{x}+ O( \widetilde{x}'^{\mu}\widetilde{d}).$$ Note that $|z|$ is comparable with $x$ and $|\widetilde z|$ is comparable with $\widetilde x$. With $\widetilde{x}'^{\mu} \leq |z|$, we also have $$\label{eq-slope1}
|\widetilde\varphi'(\widetilde x')| \leq \widetilde M\widetilde x'^{\mu}\le C|z|.$$ Similarly, we have $$\label{eq-slope2}\frac{|\widetilde\varphi(\widetilde x')|}{|\widetilde x'|}\le C|z|.$$
Next, we claim, for any $\widehat x$ sufficiently small, $$\label{taylor-expansion_of_curve}
|\widetilde{\varphi}(\widehat{x})-\widetilde{\varphi}(\widetilde{x}')
-\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')|
\leq K |z|^{1-\frac{1}{\mu}}(\widehat{x} -\widetilde{x}' )^2,$$ where $K$ is a positive constant depending only on $M$ and $ \mu$. We prove in three cases. If $\widehat{x} \geq |z|^{\frac{1}{\mu}}/3$, then, with $\mu\in (0,1)$, $$|\widetilde{\varphi} ''(\widehat{x})|\leq \widetilde{M}\widehat{x}^{\mu-1}
\le C|z|^{1-\frac{1}{\mu}},$$ and holds by the Taylor expansion. If $ 0 \leq \widehat{x} \leq |z|^{\frac{1}{\mu}}/3$, we have $$\begin{aligned}
&|\widetilde{\varphi}(\widehat{x})|\leq \widetilde{M}\widehat {x}^{1+\mu} \leq C|z|^{1+\frac{1}{\mu}}, \\
&|\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')| \leq C|z|^{1+\frac{1}{\mu}}, \end{aligned}$$ and $$|z|^{1+\frac{1}{\mu}}=|z|^{1-\frac{1}{\mu}}(|z|^{\frac{1}{\mu}})^2
\leq C|z|^{1-\frac{1}{\mu}}(\widehat{x} - \widetilde{x}' )^2 .$$ Then, follows. If $ \widehat{x} \leq 0$, we have $$|\widetilde{\varphi}(\widehat{x})|\leq \widetilde{M}|\widehat{x}|^{1+\mu}
\leq C|z|^{1-\frac{1}{\mu}}(\widehat{x} - \widetilde{x}' )^2,$$ and $$|\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')|
\leq C|z|^{1-\frac{1}{\mu}}(\widehat{x} - \widetilde{x}' )^2.$$ Then, also holds.
By , it is easy to check, for some $R=C'|z|^{\frac{1}{\mu}-1}$, $$B_R( \widetilde{p}+R\vec{n})\subset\widetilde\Omega
\quad\text{and}\quad
B_R( \widetilde{p}-R\vec{n})\cap\widetilde \Omega=\emptyset,$$ where $\vec{n}$ is the unit inward normal vector of $\widetilde{\sigma}_1$ at $\widetilde{p}$ and $C'$ is some constant depending only on the geometry of $\partial\Omega$. Let $u_{R,\widetilde{p}+R\vec{n}}$ and $v_{R, \widetilde{p}-R\vec{n}}$ be the solutions of - in $B_R(\widetilde{p}+R\vec{n})$ and $\mathbb R^2\setminus B_R(\widetilde{p}-R\vec{n})$, respectively. Then, by the maximum principle, we have $$v_{R,\widetilde{p}-R\vec{n}}\le \widetilde u\le u_{R, \widetilde{p}+R\vec{n}}
\quad\text{in }B_R(\widetilde{p}+R\vec{n}),$$ and hence, at $\widetilde{z}$, $$-\log \widetilde{d}-\log\left(1+\frac{\widetilde{d}}{2R}\right)\le \widetilde u
\le -\log \widetilde d-\log\left(1-\frac{\widetilde{d}}{2R}\right).$$ Therefore, $$\label{error_term_tilde_d}
\widetilde{u} (\widetilde{z}) = -\log \widetilde{ d} + O\left(\frac{\widetilde{d}}{|z|^{\frac{1}{\mu}-1}}\right).$$ For $T: z\mapsto z^{\frac{1}{\mu}}$, if $z_1, z_2 \in B_{|z|/3}(z)$, we have $$|T(z_1)-T(z_2)|\leq \frac{1}{\mu}\max_{z' \in B_{|z|/3}(z)}
\{|z'|^{\frac{1}{\mu}-1}\}|z_1 -z_2|.$$ Let $q $ be the closest point to $z$ on $\sigma_1$. By $d_1 \le c_0 |z|$ for $c_0$ small, we have $q \in B_{|z|/3}(z)$ if $|z|$ is small. Therefore, $$\widetilde{d} \leq \textrm{dist}(\widetilde{z} ,\widetilde{q}) \leq C|z|^{\frac{1}{\mu}-1} d.$$ With , we obtain $$\widetilde{u} (\widetilde{z}) = -\log\widetilde{ d} + O(d).$$ Let $\widetilde{l}$ be the tangent line of $\widetilde{\sigma}_1$ at $\widetilde{p}$ and $\widetilde{l}'$ be the line passing the origin and intersecting $\widetilde\sigma_1$ at the point $\widetilde{p}$. Then, the slopes of these two straight lines are bounded by $C|z|$ by and . Therefore, the included angle $\widetilde\theta$ between $\widetilde{l} $ and $\widetilde{l} '$ is less than $C|z|$, and hence, $$\textrm{dist} (\widetilde{z} ,\widetilde{l}') = \widetilde{d}\cos\widetilde{\theta}
= \widetilde{d} (1+ O(\widetilde{\theta}^{2}) ) = \widetilde{d} (1+ O(|z|^{2}) ).$$ By $c_1 |z|^2 \leq d $, we obtain $$\textrm{dist} (\widetilde{z} ,\widetilde{l}') = \widetilde{d} (1+ O(d ) ).$$ Let $\widetilde{V}'$ be the region above the line $\widetilde{l}'$ and $\widetilde{v}'(z)$ be the solution of - in $\widetilde{V}'$. Then, $$\widetilde{v}'(\widetilde{z}) = - \log \widetilde{d} + O( d ),$$ and hence $$\label{expansion_case2}
\widetilde{u}(\widetilde{z}) = \widetilde{v}'(\widetilde{z}) + O( d ).$$ Let $V'$ be the image of $\widetilde{V}'$ under the conformal homeomorphism $\textit{T}^{-1} : \widetilde z\mapsto \widetilde z^{\mu}$, and $l'$ be the image of $\widetilde{l}'\bigcap \{\widetilde{x}>0\}$ under $\textit{T}^{-1}$. For the solution $v'$ of - in $V'$, we have $$v'(z)=\widetilde{v}'(\widetilde{z})+ \log\left(\frac{1}{\mu}|z|^{\frac{1}{\mu}-1}\right).$$ Combining with , we have $$u(z) =v'(z) + O ( d ).$$
By Lemma \[curve-distance-x-x’-1\], we have $$|\widetilde x'- \widetilde{x}| \leq C\widetilde{x}^{\mu }\widetilde{d},$$ and hence $$\operatorname{dist}(\widetilde{p}, (\widetilde{x}, \widetilde{\varphi}_{1}(\widetilde{x}) ))\le
C\widetilde{x}^{\mu }\widetilde{d}.$$ Under the conformal homeomorphism $\textit{T}^{-1}: \widetilde z\mapsto \widetilde z^{\mu}$, we assume $$\widetilde{p} \mapsto p, \quad \quad
(\widetilde{x}, \widetilde{\varphi}_{1}(\widetilde{x}) )\mapsto (x, \varphi_{1}(x) ).$$ Then, $$\begin{aligned}
\textrm{dist}(p, (x, \varphi_{1}(x) ))
&\leq C (|z|^{\frac{1}{\mu}})^{\mu-1}\textrm{dist}(\widetilde{p},
(\widetilde{x}, \widetilde{\varphi_{1}}(\widetilde{x}) ) )
\leq C (|z|^{\frac{1}{\mu}})^{\mu-1}\widetilde{x}^{\mu }\widetilde{d}\\
&\leq C (|z|^{\frac{1}{\mu}})^{\mu-1}(|z|^{\frac{1}{\mu}})^{\mu }|z|^{\frac{1}{\mu}-1} d
\leq C |z|d.\end{aligned}$$ Recall that $q$ is the closest point to $z$ on $\sigma_1$. By Lemma \[curve-distance-x-x’-1\], we have $$\operatorname{dist}(q, (x, \varphi_{1}(x) ))\le C |z|d.$$ By setting $p=(x', \varphi_1(x'))$ and $q=(\overline{x}, \varphi_1(\overline x))$, we have $$|x'-\overline{x}|\le \operatorname{dist}(p,q)\le C|z|d.$$ Denote by $l'$ and $\overline{l}$ the straight lines passing the origin and intersecting $\sigma_1$ at $p$ and $q$, respectively. Then, the difference of their slopes can be estimated by $$\left|\frac{\varphi_1(x')}{x'}-\frac{\varphi_1(\overline{x})}{\overline{x}}\right|\le C|z|d,$$ and a similar estimate holds for the angle between $l'$ and $\overline{l}$. With $c_1|z|^2\le d$, we have $$|\operatorname{dist}(z,l')-\operatorname{dist}(z,\overline{l})|\le |z|\cdot C|z|d=C|z|^2d\le Cd^2.$$ Denote by $\widehat \theta$ the angle between the line $\overline{l}$ and the tangent line of $\sigma_1$ at $q$. Then, $$\operatorname{dist}(z,\overline{l})=d\cos\widehat \theta=d(1+O(|z|^2))=d(1+O(d)),$$ and hence $$\operatorname{dist}(z,l') = d( 1+O(d) ).$$ By the explicit expressions of $v'$ in , it is straightforward to verify $$v'(z) = -\log \left(\mu r \sin \frac{\arcsin \frac{d}{|z|}}{\mu} \right) + O(d).$$ We hence have .
[*Case 2.2.*]{} Now we consider the general case that the map $T: z\mapsto z^{\frac1\mu}$ is not necessarily one-to-one on $\Omega$. Take $R>0$ sufficiently small such that $T$ is one-to-one on $D=\Omega\cap B_R$. Let $u_D$ be the solution of - in $D$. Then, the desired estimates for $u_D$ hold in $\Omega_1$ and $\Omega_2$ by Case 1 and Case 2.1. In the following, we denote the given solution $u$ in $\Omega$ by $u_\Omega$. Then, holds for $u_\Omega$ in $\Omega_1$. We now prove holds for $u_\Omega$ in $\Omega_2$. Since $D$ and $\Omega$ coincide in a neighborhood of the origin, we have, by , $$u_\Omega(z)= -\log \left(\mu |z| \sin \frac{\arcsin \frac{d}{|z|}}{\mu} \right) + O(d)+O(|z|^{\frac1\mu}).$$ We need to estimate $|z|^{\frac1\mu}$.
If $\frac{1}{\mu} \geq 2$, we have $|z|^{\frac1\mu}\le Cd$ and then for $u_\Omega$. For $\frac{1}{\mu} < 2$, we adopt notations in the proof of Lemma \[lemma-Localization\]. We take $\widetilde \mu>\mu$ sufficiently close to $\mu$ and set $$\widetilde \theta=\theta+\frac12(\widetilde \mu-\mu)\pi.$$ By , we have $$e^{2u_D}\ge\left(\frac{1}{\widetilde{\mu } |z|\sin\frac{\widetilde{\theta}}{\widetilde{\mu}}}\right)^2
\left(1+A|z|^{\frac{1}{\widetilde{\mu}}} \right)^{-2} \quad\text{in } \Omega\cap B_\delta,$$ for $\delta$ sufficiently small. Consider $$\widehat\Omega=\Omega_2\cup\gamma_2\cup\Omega_3=\{z\in \Omega: d_1(z)< c_0|z|\}.$$ For $c_0$ small, we have $$e^{2u_D}\ge\frac{2}{|z|^{2}}\quad\text{in }
\widehat\Omega\cap B_{\delta},$$ if $\delta$ is smaller. Then, it is straightforward to verify that $u_D +\log (1+A|z|^{2})$ is a supersolution of in $\widehat\Omega\cap B_\delta.$ By Case 1, we have $$\label{u_2-U_1-case1}
u_\Omega \leq u_D +Cd_1\quad\text{on } \gamma_1\cap B_\delta.$$ We set, for two constants $a$ and $ b,$ $$\phi(d_1)=a d_1 - bd_{1}^{2}.$$ Then, $$\Delta\phi = -a\kappa - 2b + O(d).$$ We can take positive constants $a$ and $b$ depending only on $M$ and $\mu$ such that $$\phi(d_1)>0, \quad \Delta\phi(d_1)<0
\quad\text{in }\widehat\Omega\cap B_{\delta},$$ and $$u_\Omega \leq u_D + \phi(d_1)
\quad\text{on }\gamma_1\cap B_\delta.$$ By Lemma \[lemma-ExteriorCone\] and the maximum principle, we have $$u_\Omega \leq u_D +\log (1+A|z|^{2}) + \phi(d_1)
\quad\text{in }\widehat\Omega\cap B_{\delta}.$$ Similarly, we obtain $$u_D \leq u_\Omega +\log (1+A|z|^{2}) + \phi(d_1)
\quad\text{in }\widehat\Omega\cap B_{\delta},$$ and hence $$u_\Omega = u_D +\log (1+A|z|^{2}) + \phi(d_1)
\quad\text{in }\widehat\Omega\cap B_{\delta}.$$ Noting $c_1|z|^2\le d_1$ in $\Omega_2$, we get $$\label{u_2-U_1-case2}
u_\Omega =u_D + O( d_1 )\quad\text{in }\Omega_2\cap B_\delta,$$ and hence for $u_\Omega$.
We note that $\mu<1$ is not used here. What we proved is the following statement: If $$u_\Omega = u_D + O( d_1 )\quad\text{in }\gamma_1\cap B_\delta,$$ then holds in $\Omega_2\cap B_{\delta}$.
[*Case 3.*]{} We consider $z\in \Omega_3\cap B_\delta$. We point out that we will not need the transform $T$ in this case.
Let $q$ be the closest point to $z$ on $\sigma_1$ and set $B=B_{\frac{1}{20c_1}}(q+\frac{1}{20c_1}\vec{n})$, where $\vec{n}$ is the unit inward normal vector of $\sigma_1$ at $q$. Denote by $Q$ the intersection point of $\partial B$ and the curve $\gamma_2$ with the larger distance to the origin. Then for $c_1= c_1 (M, \mu )$ large, we have $\operatorname{dist}(O, Q)<3|z|$. With $d_1\le c_1|z|^2$, we have $$\begin{aligned}
\label{eq-expansion_cubic}
\mu |z| \sin \frac{\arcsin \frac{d_{1}}{|z|}}{\mu}
= |z|\left[\frac{d_1}{|z|}+O \left(\left(\frac{d_1}{|z|}\right)^{3}\right) \right]= d_1 (1+ O(d_1) )
\quad\text{in }\Omega_3.\end{aligned}$$ By what we proved in Case 2, we have $$u = -\log d_1 +O (d_1)\quad\text{on }\gamma_2.$$ For some positive constants $a$ and $b$, set $$\phi(d_1)=ad_1-bd_1^2.$$ Then, $$\Delta\phi=-a\kappa-2b+O(d_1).$$ Let $u_B$ be the solution of - in $B$. By taking $a$ and $b$ depending only on $M$ and $\mu$, we have $$\phi(d_1)>0, \quad \Delta\phi(d_1)<0
\quad\text{in }\Omega_3\cap B_{\delta},$$ and $$u \leq u_B+ \phi(d_1)
\quad\text{on }\gamma_2\cap B.$$ By the maximum principle, we obtain $$u \leq u_B+ \phi(d_1)
\quad\text{in }\Omega_3\cap B.$$ With $u_{B} = -\log d_1 + O(d_1 )$, we have, at the fixed $z$, $$u \leq -\log d_1 + C d_1.$$ Since we can always put a ball outside $\Omega$ and tangent to $\partial\Omega$ at $q$ due to $\mu<1$, we get $$u \geq -\log d_1 - C d_1.$$ Therefore, we obtain $$u(z)=-\log d_1+O(d_1),$$ and hence by .
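For completeness, the expansion of $\mu |z| \sin \frac{\arcsin (d_1/|z|)}{\mu}$ used at the start of this case follows from the Taylor series $\arcsin t=t+O(t^{3})$ and $\sin s=s+O(s^{3})$: with $t=d_{1}/|z|$, $$\mu|z|\sin\frac{\arcsin t}{\mu}=\mu|z|\left[\frac{t}{\mu}+O(t^{3})\right]=d_{1}+O\left(\frac{d_{1}^{3}}{|z|^{2}}\right)=d_{1}\left(1+O\left(\frac{d_{1}^{2}}{|z|^{2}}\right)\right),$$ and $d_{1}^{2}/|z|^{2}\le c_{1}d_{1}$ in $\Omega_{3}$ by $d_{1}\le c_{1}|z|^{2}$.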
By combining Cases 1-3, we finish the proof of .
Now, we discuss the case when the opening angle of the tangent cone of $\Omega$ at the origin is larger than $\pi$. We first introduce the limit function. Let $\partial\Omega$ in a neighborhood of the origin consist of two $C^{2}$-curves $\sigma_1$ and $\sigma_2$ intersecting at the origin at an angle $\mu\pi$, for some constant $\mu\in (1,2)$. Define, for any $z\in \Omega$, $$\label{eq-definition_f}
f_{\mu}(z)=
\begin{cases}
-\log (\mu |z| \sin\frac{\arcsin\frac{d_{1}(z)}{|z|}}{\mu})
& \text{if } d_1(z) <d_2(z),\\
-\log(\mu |z| \sin\frac{\theta}{\mu})
& \text{if }d_1(z) = d_2(z),\\
-\log(\mu |z| \sin\frac{\arcsin\frac{d_{2}(z)}{|z|}}{\mu})
& \text{if }d_1(z)>d_2(z),\\
\end{cases}$$ where $d, d_1$ and $ d_2$ are the distances to $\partial\Omega, \sigma_1$ and $ \sigma_2$, respectively, $\theta$ is the angle anticlockwise from the tangent line of $\sigma_1$ at the origin to $\overrightarrow{Oz}$. We note that $\{z\in\Omega:\, d_1(z)=d_2(z)\}$ has a nonempty interior for $\mu\in (1,2)$ and that $f_\mu$ is well-defined for $z$ sufficiently small. It is straightforward to verify that $\partial\{z\in\Omega: d_1(z)< (\text{or}>) d_2(z)\}\cap \Omega$ near the origin is a line segment perpendicular to the tangent line of $\sigma_1$ (or $\sigma_2$) at the origin. Hence, $f_\mu$ is continuous in $\Omega\cap B_\delta$ for $\delta$ sufficiently small.
\[thrm-LargeAngles\] Let $\Omega$ be a bounded domain in $\mathbb R^2$ and $\partial\Omega$ in a neighborhood of the origin consist of two $C^{2}$-curves $\sigma_1$ and $\sigma_2$ intersecting at the origin at an angle $\mu\pi$, for some constant $\mu\in (1,2)$. Suppose $ u \in C^{\infty}(\Omega)$ is a solution of -. Then, for any $z\in\Omega\cap B_{\delta}$, $$|u(z)-f_\mu(z)|\le Cd(z),$$ where $f_\mu$ is the function defined by , and $ \delta$ and $C$ are positive constants depending only on the geometry of $\partial\Omega$.
We proceed similarly as in the proof of Theorem \[thrm-SmallAngles\] and adopt the same notations. We denote by $M$ the $C^{2}$-norms of $\sigma_1$ and $\sigma_2$, and define $\Omega_1$, $\Omega_2$, $\Omega_3$ and $\gamma_1, \gamma_2$ by and , respectively. Consider the map $T: z\mapsto z^{\frac{1}{\mu}}$.
We fix a point $z\in \Omega\cap B_\delta$ for some $\delta$ sufficiently small. Without loss of generality, we assume $d_{1}=d_{1}(z)=d(z)\leq d_2=d_{2}(z)$.
[*Case 1.*]{} We consider $z\in \Omega_1\cap B_\delta$, where $c_0$ is some small constant such that $c_0 <\frac{1}{2} \arctan \frac{1}{4}$.
Set $\Omega_{+}= \Omega\cap B_{\delta}$ and let $u_+$ be the solution of - in $\Omega_{+}$. We take $\delta$ small so that $T$ is one-to-one on $\Omega_+$. Set $\widetilde{\Omega}_{+}=T(\Omega_+)$ and let $\widetilde u_+$ be the solution of - in $\widetilde\Omega_+$. By , the curve $\widetilde\sigma$ given by $\widetilde y=\widetilde \varphi(\widetilde x)$ satisfies $$-\widetilde M|\widetilde x|^{1+\mu}\le |\widetilde\varphi(\widetilde x)|\le \widetilde M|\widetilde x|^{1+\mu}.$$ We note here $1+\mu>2$. Theorem \[thrm-C-2,alpha-expansion\] implies, for $\widetilde z$ close to the origin, $$\label{case1-lower1}
\widetilde{u}_+(\widetilde{z})\leq -\log\widetilde{ d}_{1}+\frac12\kappa_1\widetilde d_1 +O(\widetilde{d}_{1}^{\mu}),$$ and $$\label{case1-upper1}
\widetilde{u}_+(\widetilde{z})\geq -\log \widetilde{ d}_{2}+\frac12\kappa_2\widetilde d_1 +O(\widetilde{d}_{2}^{\mu}),$$ where $\widetilde{d}_1$ and $\widetilde{d}_2$ are the distances from $\widetilde{z}$ to the curves $\widetilde{y}=\widetilde{M}|\widetilde{x}|^{1+\mu}$ and $\widetilde{y}=-\widetilde{M}|\widetilde{x}|^{1+\mu}$, respectively, and $\kappa_1$ and $\kappa_2$ are the curvatures of the curves $\widetilde{y}=\widetilde{M}|\widetilde{x}|^{1+\mu}$ and $\widetilde{y}=-\widetilde{M}|\widetilde{x}|^{1+\mu}$, respectively. Recall, for $c_0|z|<d$, $$\log \widetilde d_i=\log \widetilde y+O(d),$$ and $$\widetilde d_i^\mu\le Cd.$$ Moreover, $$|\kappa_i|\le C|\widetilde{z}|^{\mu-1}=C|z|^{\frac{\mu-1}{\mu}}\le Cd^{\frac{\mu-1}{\mu}}.$$ Therefore, and imply $$\widetilde u_+(\widetilde z)=- \log \widetilde y+O(d).$$ This is . The rest of the proof for Case 1 is identical to that in the proof of Theorem \[thrm-SmallAngles\].
[*Case 2.*]{} We consider $z\in \Omega_2\cap B_\delta$.
Arguing similarly as in the proof of Theorem \[thrm-SmallAngles\], we have $$\label{taylor-expansion_of_curve2}
|\widetilde{\varphi}(\widehat{x})-\widetilde{\varphi}(\widetilde{x}')
-\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')|
\leq K (|z|^{1-\frac{1}{\mu}}+ |\widehat{x} -\widetilde{x}'|^{\mu -1} )
(\widehat{x} -\widetilde{x}' )^2.$$ This plays a similar role as . Then, we have $$\widetilde{u}(\widetilde{z})\leq -\log \widehat{ d}_{1}
+\frac12\kappa_{1}\widehat{d}_{1}+O(\widehat{d}_{1}^{\mu}),$$ and $$\widetilde{u}(\widetilde{z})\geq -\log \widehat{ d}_{2}
+\frac12\kappa_{2}\widehat{d}_{2} +O(\widehat{d}_{2}^{\mu}),$$ where $\widehat{d}_1$ is the distance from $\widetilde{z}$ to the curve $$\widehat{y}=\widetilde{\varphi}(\widetilde{x}')
+\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')
+ K (|z|^{1-\frac{1}{\mu}}+ |\widehat{x} -\widetilde{x}'|^{\mu -1} )
(\widehat{x} -\widetilde{x}' )^2,$$ and $\widehat{d}_2$ is the distance from $\widetilde{z}$ to the curve $$\widehat{y}=\widetilde{\varphi}(\widetilde{x}')
+\widetilde\varphi'(\widetilde x')(\widehat{x}-\widetilde{x}')
- K (|z|^{1-\frac{1}{\mu}}+ |\widehat{x} -\widetilde{x}'|^{\mu -1} )
(\widehat{x} -\widetilde{x}' )^2.$$ Then, we proceed similarly as in Case 2 in the proof of Theorem \[thrm-SmallAngles\].
[*Case 3.*]{} We consider $z\in \Omega_3\cap B_\delta$.
We take $q\in \sigma_1$ with the least distance to $z$, and denote by $l$ the tangent line of $\sigma_1$ at $q$. We put $q$ at the origin of the line $l$. A portion of $\sigma_1$ near $q$, including the part from the origin to $q$, can be expressed as a $C^{2}$-function $\varphi$ in $(-s_0, s_0)$, with $\varphi(-s_0)$ corresponding to the origin in $\mathbb R^2$ and $\varphi(0)$ corresponding to $q$, i.e., $\varphi(0)=0$. Then, $$\label{eq-boundaryC1alpha-Version2}
|\varphi(s)|\le \frac12M|s|^{2}\quad\text{for any }s\in (-s_0,s_0).$$ In the present case, $M$ is uniform, independent of $z$; however, $s_0$ depends on $z$. We first estimate $s_0$ in terms of $d_2$. We note, for $d_2$ sufficiently small, $$\label{eq-Estimate-d_2}\frac12|z|\sin\frac{(2-\mu)\pi}{2}\le d_2\le |z|.$$ By the triangle inequality and , we have $$s_0\le \frac12Ms_0^{2}+|z|+d_1,$$ and $$s_0\ge -\frac12Ms_0^{2}+|z|-d_1.$$ Then, $s_0/|z|\to 1$ as $|z|\to 0$. We take $|z|$ sufficiently small such that $s_0\ge 2|z|/3$.
By taking $|z|$ sufficiently small, implies $$B_{r_1}(q-r_1\vec{n})\cap \Omega=\emptyset,$$ where $\vec{n}$ is the unit inward normal vector of $\sigma_1$ at $q$ and $$r_1=\frac12|z|\sin\frac{(2-\mu)\pi}{8}.$$ By the maximum principle, we have $$u(z) \geq v_{r_1,q-r_1\vec{n}}\quad\text{in }\Omega.$$ Hence, $$\label{expansion_d_1/2}
u\ge -\log d- C|z| \quad\text{in }\Omega_3\cap B_{\delta}.$$
By taking $R = R(M,\mu)$ small, we have $$\textrm{dist}(z', \sigma_1) \leq \frac{1}{2}\textrm{dist}(z',\partial B_R(q-R\vec{n})).$$ By what we proved in Case 2, we get $$u(z) = -\log d_1 +O (d_1)\quad\text{on }\gamma_2\cap B_\delta.$$ Combining with , we have, for $|z|$ sufficiently small, $$u(z) \geq v_{R,q-R\vec{n}}\quad\text{in }\Omega_3\cap
\partial B_{3|z|} (z).$$ Set $$\phi(d_1)=a d_1 - bd_{1}^{2}.$$ We can take two positive constants $a$ and $b$ depending only on the geometry of $\Omega$ such that $$\phi(d_1)>0, \quad \Delta\phi(d_1)<0
\quad\text{in }\Omega_3\cap \partial B_{\delta},$$ and $$v_{R,q-R\vec{n}}\leq u + \phi(d_1)\quad\text{on }\gamma_2\cap B_{\delta}.$$ By the maximum principle, we obtain $$v_{R,q-R\vec{n}}\leq u + \phi(d_1)\quad\text{in }\Omega_3\cap B_{3|z|} (z).$$ By $$v_{R,q-R\vec{n}}(z) = -\log d_1 + O(d_1 ),$$ we have $$u(z) \geq -\log d_1 - C d_1.$$ Since we can always put a ball inside $\Omega$ and tangent to $\partial\Omega$ at $q$ due to $\mu>1$, we get $$u(z) \leq -\log d_1 + C d_1.$$ Therefore, $$u(z) = -\log d_1 + O( d_1),$$ and hence $$u(z) = -\log \left(\mu r \sin \frac{\arcsin \frac{d_{1}}{r}}{\mu} \right) + O(d_1).$$ This is the desired estimate.
We point out that the estimates in Theorem \[thrm-SmallAngles\] and Theorem \[thrm-LargeAngles\] are local; namely, they hold in $\Omega$ near the origin, independently of $\Omega$ away from the origin.
With a slightly more complicated argument, we can prove the following estimate: if $\sigma_1$ and $\sigma_2$ are $C^{1,\alpha}$-curves, for some $\alpha\in (0,1)$, then for any $z\in \Omega\cap B_\delta$, $$\left|u(z)-f_\mu(z)\right| \leq Cd^\alpha(z),$$ where $f_\mu$ is given by for $\mu\in (0,1]$ and by for $\mu\in (1,2)$, and $\delta$ and $C$ are positive constants depending only on the geometry of $\partial\Omega$. This estimate can be viewed as a generalization of Theorem \[thrm-C-1,alpha-expansion\].
Application to Kähler-Einstein metrics {#sec-app}
======================================
Cheng and Yau [@ChengYau1980CPAM] studied the following problem: $$\begin{aligned}
\label{eq-Eq} \det u_{i\bar{j }}& =e^{ (n+1)u } \quad\text{in }\Omega, \\
\label{eq-Boundary}u&=\infty\quad\text{on }\partial \Omega,\end{aligned}$$ where $\Omega\subset \mathbb{C}^{n}$ is a smooth bounded strictly pseudoconvex domain, $n \geq 2$. They proved that - admits a smooth strictly plurisubharmonic solution. Geometrically, if $u$ is a strictly plurisubharmonic solution to -, then $$\sum_{i,j=1}^n\frac{\partial^{2}u}{\partial z_{i}\partial z_{\bar{j}}} dz_i dz_{\bar{j}}$$ is a complete Kähler-Einstein metric on $\Omega$. Lee and Melrose [@LeeMelrose1982] discussed boundary expansions of $u$ in smooth bounded strictly pseudoconvex domains.
In this section, we discuss the asymptotic behavior in singular product domains. We note that - reduces to - upon a rescaling, for $n=1$.
\[thrm-ProductDomain\] Assume that $\Omega\subset\mathbb C^n$ has the form $$\Omega= \Omega_{1}^{1}\times ...\times\Omega_{k}^{1}\times\Omega_{k+1}^{n-k},$$ where $\Omega_{i}^1 \subset \mathbb{C}^{1}$ is a bounded Lipschitz domain bounded by finitely many $C^{2}$-curves, $i=1, \cdots, k$, for some $1\le k\le n$, and $\Omega_{k+1}^{n-k} \subset \mathbb{C}^{n-k}$ is a smooth bounded strictly pseudoconvex domain. Then, - admits a unique smooth strictly plurisubharmonic solution $u$ in the form $$\label{eq-expression_u}
u(z_{1},..,z_{n})=u_{1}(z_{1})+\cdots+u_{k}(z_{k})+u_{k+1}(z_{k+1},...,z_{n}),$$ where $u_{i}$, $i = 1,...,k$, is the unique solution of $$\begin{aligned}
\label{eq-Eq1} \Delta{u_{i}}& = 4 e^{ (n+1)u_{i} } \quad\text{in }\Omega_{i}^1, \\
\label{eq-Boundary1}u_{i}&=\infty\quad\text{on }\partial \Omega_{i}^1,\end{aligned}$$ and $u_{k+1}$ is the unique strictly plurisubharmonic solution of $$\begin{aligned}
\label{eq-Eq2} \det u_{k+1,\,i\bar{j }}& =e^{ (n+1)u_{k+1} } \quad\text{in }\Omega_{k+1}^{n-k}, \\
\label{eq-Boundary2}u_{k+1}&=\infty\quad\text{on }\partial \Omega_{k+1}^{n-k}.\end{aligned}$$
In , $\Delta$ is the standard Laplacian in $ \mathbb{R}^{2}$.
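The separable form of $u$ is consistent with the complex Monge-Ampère equation by a direct computation: the complex Hessian of $u$ is block-diagonal in the separated variables, and $\partial_{z_i}\partial_{\bar z_i}=\frac{1}{4}\Delta$ for a single complex variable, so $$\det u_{i\bar{j}}=\prod_{i=1}^{k}\frac{1}{4}\Delta u_{i}\cdot\det u_{k+1,\,i\bar{j}}=\prod_{i=1}^{k}e^{(n+1)u_{i}}\cdot e^{(n+1)u_{k+1}}=e^{(n+1)u},$$ which also explains the factor $4$ in the one-dimensional equations.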
Without loss of generality, we assume $\Omega$ contains the origin. Set, for $i=1, \cdots, k$, $$v_i(x) = \frac{n+1}{2}u_i\left(\sqrt{\frac{8}{n+1}}x\right).$$ Then, $v_i$ satisfies - for $\Omega =\sqrt{\frac{n+1}{8}}\Omega_i^1$. By Lemma \[lemma-ExteriorCone\], we have, $$\left|u_{i}(z_{i})-\frac{2}{n+1}\log d_{i}(z_i)\right|
\leq C\quad\text{in } \Omega_i^1,$$ where $C$ is a positive constant depending only on the geometry of $\Omega_i^1$, and $d_{i}(z_i)$ is the distance from $z_i$ to $\partial \Omega_{i}^1$. Set $$v_{k+1}(z_{k+1}, \cdots, z_n) =
\frac{n+1}{n-k+1}u_{k+1}\left(\left(\frac{n-k+1}{n+1}\right)^{\frac{1}{2(n-k)}}(z_{k+1}, \cdots, z_n)\right),$$ and $$\Omega^{n-k}=\left(\frac{n+1}{n-k+1}\right)^{\frac{1}{2(n-k)}}\Omega_{k+1}^{n-k}.$$ Then, $v_{k+1} $ satisfies $$\begin{aligned}
\label{eq-Eq3} \det v_{k+1,\,i\bar{j }}& =e^{ (n-k+1)v_{k+1} } \quad\text{in }\Omega^{n-k}, \\
\label{eq-Boundary3}v_{k+1}&=\infty\quad\text{on }\partial\Omega^{n-k}.\end{aligned}$$ We can get the asymptotic behavior of $v_{k+1} $ by applying the result in [@LeeMelrose1982] and scaling back. Hence, $u(z_{1},..,z_{n}) $ given by satisfies $$\begin{aligned}
\label{eq-Asymptotic_u}\begin{split}
\left|u(z) +\frac{2}{n+1}(\log d_{1}(z_1) + \cdots+\log d_k(z_k))\right.&\\
\left.+\frac{n-k+1}{n+1}\log d_{k+1}(z_{k+1},...,z_{n})\right|&\le C ,
\end{split}\end{aligned}$$ where $d_1, \cdots, d_k$ and $d_{k+1}$ are distances to $\partial\Omega_1^1, \cdots, \partial\Omega_k^1$ and $\partial\Omega_{k+1}^{n-k}$, respectively. In the following, we set $$c_1=\frac{2}{n+1},\quad c_2=\frac{n-k+1}{n+1}.$$
We now prove that $u$ given by is the only solution of -. Let $w$ be an arbitrary strictly plurisubharmonic solution of -. Then, $$u\left(\frac{z}{\varepsilon}\right)+\frac{2n}{n+1}\log\frac{1}{\varepsilon}$$ is a solution in $\varepsilon\Omega:=\{z:\,\frac{z}{\varepsilon}\in\Omega\}$, for $\varepsilon>0$. Hence, we may assume $\Omega\subset B_{r}\times\cdots\times B_r$. Since the solution $u_r$ of - in $B_{r}\times \cdots\times B_r$ satisfies $$u_{r}\ge \frac{2n}{n+1}\left( -\log r+\frac{1}{2}\log\frac{8}{n+1}\right),$$ we have, by the maximum principle, $$u\ge \frac{2n}{n+1}\left( -\log r+\frac{1}{2}\log\frac{8}{n+1}\right).$$ Therefore, we can assume $u$ is large enough since we can take $r$ sufficiently small. This also holds for $w$.
Set $$f(z)=\frac{w(z)}{u(z)}.$$ Then, it is easy to see $$f(z) \geq 1\quad \text{in } \Omega.$$ We now claim $$\label{eq-Boundary_f}f(z)\rightarrow 1\quad\text{as }z\rightarrow\partial\Omega.$$ To this end, we approximate $\Omega_1^1, \cdots, \Omega_k^1$ and $\Omega_{k+1}^{n-k}$ appropriately from their interiors by $\Omega_{1,m}^{1}, \cdots, \Omega_{k,m}^1$ and $\Omega_{k+1,m}^{n-k}$. Set $$\Omega_m=\Omega_{1,m}^{1}\times \cdots\times\Omega_{k,m}^{1}\times\Omega_{k+1,m}^{n-k}.$$ Assume $u_{i}^{m}$, $i=1, \cdots, k$, is the unique solution of $$\begin{aligned}
\Delta{u_{i}^{m}}& = 4 e^{ (n+1)u_{i}^{m} } \quad\text{in }\Omega_{i,m}^{1}, \\
u_{i}^{m}&=\infty\quad\text{on }\partial \Omega_{i,m}^{1},\end{aligned}$$ and $u_{k+1}^{m}$ is the unique strictly plurisubharmonic solution of $$\begin{aligned}
\det u^{m}_{k+1,\,i\bar{j }}& =e^{ (n+1)u^{m}_{k+1} } \quad\text{in }\Omega_{k+1,m}^{n-k}, \\
u^{m}_{k+1}&=\infty\quad\text{on }\partial \Omega_{k+1,m}^{n-k}.\end{aligned}$$ Set $$u_{m}(z_{1},..,z_{n})=u_{1,m}({z_{1}})+\cdots+u_{k,m}(z_{k})+u_{k+1,m}(z_{k+1},...,z_{n}).$$ Fix a point $z=(z_{1},..,z_{n}) \in \Omega$. Then for $m$ large, we have $z \in \Omega_m$. By the maximum principle, we have $w(z)\le u_m(z)$, and hence by , $$w(z) \le -c_1\left(\log d_{1,m}(z_1 )+ \cdots+\log d_{k,m}(z_k)\right)-c_2\log d_{k+1,m}(z_{k+1},...,z_{n}) +C_m ,$$ where $d_{i,m}$ is the distance to $\partial \Omega_{i,m}^{1}$, $i = 1,...,k$, $d_{k+1,m}$ is the distance to $\partial \Omega_{k+1,m}^{n-k}$, and $C_m$ is a positive constant. By the geometry of $\Omega_{i}^{1}$ and $\Omega_{k+1}^{n-k}$, we can choose $C_m$ independent of $m$. Letting $m\rightarrow \infty$, we have $$\label{eq-Upper_w} w(z) \leq -c_1\left(\log d_{1}(z_1) +\cdots+\log d_k(z_k)\right)
-c_2\log d_{k+1}(z_{k+1},...,z_{n}) +C,$$ where $d_{i}$ is the distance to $\partial \Omega_{i}^{1}$, $i = 1,...,k$, $d_{k+1}$ is the distance to $\partial \Omega_{k+1}^{n-k}$ and $C $ is a positive constant depending only on the geometry of $\Omega$. On the other hand, we have $$\label{eq-Lower_w} w(z) \geq u(z) >-c_1\left(\log d_{1}(z_1) + \cdots+\log d_k(z_k)\right)
-c_2\log d_{k+1}(z_{k+1},...,z_{n}) -C ,$$ for some constant $C$ depending only on the geometry of $\Omega$. By combining and , we obtain . If $f$ is not equal to 1 identically, $f$ must assume its maximum $f(z_0)>1$ at some $z_{0}\in\Omega$. It is easy to check $\det w_{i\bar j}\le f^n\det u_{i\bar j}$ at $z_0$, and hence, $$e^{(n+1)uf}\le f^n e^{(n+1)u}\quad\text{at }z_0.$$ Next, we set $h(s)= a^{s}-as^{n} $, for some constant $a$. Then, $h(1)=0$ and $h(s)>0$ for any $s>1$ if $a$ is large. This leads to a contradiction. Therefore, $f=1$ and then $u=w$ in $\Omega$.
For the solution $u$ in , we can apply Theorem \[thrm-SmallAngles\] and Theorem \[thrm-LargeAngles\] to get, near the singular point of $\partial \Omega_i^1$, $$\left|u_{i}(z_{i})-\frac{2}{n+1} \left(f_\mu(z_i) +\frac{1}{2}\log\frac{8}{n+1}\right)\right|
\leq Cd_{\Omega_{i}^1}(z_i),$$ where $f_\mu$ is given by for $\mu\in (0,1]$ and by for $\mu\in (1,2)$, and $d_{\Omega_{i}^1}(z_i)$ is the distance from $z_i$ to $\partial \Omega_{i}^1$. By applying the result in [@LeeMelrose1982], we get an expansion for $u_{k+1}$. By putting these estimates together, we get an expansion for $u$.
[DG]{}
L. Andersson, P. Chruściel, H. Friedrich, *On the regularity of solutions to the Yamabe equation and the existence of smooth hyperboloidal initial data for Einstein's field equations*, Comm. Math. Phys., 149(1992), 587-612.
L. Bieberbach, *$\Delta u=e^u$ und die automorphen funktionen*, Math. Ann., 77(1916), 173-212.
C. Brandle, M. Marcus, [*Asymptotic behaviour of solutions and their derivatives, for semilinear elliptic problems with blowup on the boundary*]{}, Annales de l’I. H. P., section C, 2(1995), 155-171.
S.-Y. Cheng, S.-T. Yau, *On the existence of a complete Kähler metric on non-compact complex manifolds and the regularity of Fefferman's equation*, Comm. Pure Appl. Math., 33(1980), 507-544.
M. del Pino, R. Letelier, *The influence of domain geometry in boundary blow-up elliptic problems*, Nonlinear Anal., 48(2002), 897-904.
G. Diaz, R. Letelier, *Explosive solutions of quasilinear elliptic equations: existence and uniqueness*, Nonlinear Anal. TMA, 20(1992), 97-125.
C. Fefferman, *Monge-Ampère equation, the Bergman kernel, and geometry of pseudoconvex domains*, Ann. Math., 103(1976), 395-416.
D. Gilbarg, N. Trudinger, [*Elliptic Partial Differential Equations of Second Order*]{}, Springer, Berlin, 1983.
Q. Han, X. Jiang, *Boundary expansions for minimal graphs in the hyperbolic space*, preprint, 2014.
J. Keller, *On solutions of $\Delta u=f(u)$*, Comm. Pure Appl. Math., 10(1957), 503-510.
S. Kichenassamy, *Boundary behavior in the Loewner-Nirenberg problem*, J. of Funct. Anal., 222(2005), 98-113.
H. Jian, X.-J. Wang, [*Bernstein theorem and regularity for a class of Monge-Ampère equations*]{}, J. Diff. Geom., 93(2013), 431-469.
H. Jian, X.-J. Wang, [*Optimal boundary regularity for nonlinear singular elliptic equations*]{}, Adv. Math., 251(2014), 111-126.
J. Lee, R. Melrose, *Boundary behavior of the complex Monge-Ampère equation*, Acta Math., 148(1982), 159-192.
F.-H. Lin, [*On the Dirichlet problem for minimal graphs in hyperbolic space*]{}, Invent. Math., 96(1989), 593-612.
F.-H. Lin, [*Erratum: On the Dirichlet problem for minimal graphs in hyperbolic space*]{}, Invent. Math., 187(2012), 755-757.
C. Loewner, L. Nirenberg, *Partial differential equations invariant under conformal or projective transformations*, Contributions to Analysis, 245-272, Academic Press, New York, 1974.
M. Marcus, L. Veron, *Uniqueness and asymptotic behavior of solutions with boundary blow-up for a class of nonlinear elliptic equations*, Ann. Inst. H. Poincare, 14(1997), 237-274.
R. Mazzeo, *Regularity for the singular Yamabe problem*, Indiana Univ. Math. Journal, 40(1991), 1277-1299.
Y. Tonegawa, *Existence and regularity of constant mean curvature hypersurfaces in hyperbolic space*, Math. Z., 221(1996), 591-615.
[^1]: The first author acknowledges the support of NSF Grant DMS-1404596.
---
abstract: 'A critical issue in evolutionary robotics is the transfer of controllers learned in simulation to reality. This is especially the case for small Unmanned Aerial Vehicles (UAVs), as the platforms are highly dynamic and susceptible to breakage. Previous approaches often require simulation models with a high level of accuracy; otherwise, significant errors may arise when the well-designed controller is deployed onto the targeted platform. Here we try to overcome the transfer problem from a different perspective, by designing a spiking neurocontroller which uses synaptic plasticity to cross the reality gap via online adaptation. Through a set of experiments we show that the evolved plastic spiking controller can maintain its functionality by self-adapting to model changes that take place after evolutionary training, and consequently exhibit better performance than its non-plastic counterpart.'
author:
- Huanneng Qiu
- Matthew Garratt
- David Howard
- Sreenatha Anavatti
title: Crossing the Reality Gap with Evolved Plastic Neurocontrollers
---
Introduction {#intro}
============
Unmanned Aerial Vehicles (UAVs) are challenging platforms for developing and testing advanced control techniques, because they are highly dynamic, with strong couplings between different subsystems [@Ng2004]. Controller design for these agile platforms is naturally difficult, as a poorly-performing controller can lead to catastrophic consequences, e.g., the UAV crashing. In addition, many learning approaches require large numbers of fitness evaluations. Therefore, a large body of aerial robotics research still relies on simulation as an intermediate step when developing control algorithms [@Kendoul2012].
When simulating, it is not uncommon to derive UAV models mathematically from first principles [@Pounds2010; @Alaimo2013]. However, such models are ill-suited to capturing every aspect of the system dynamics, because some effects cannot easily be modeled analytically, e.g., actuator kinematic nonlinearities, servo dynamics, etc. [@Garratt2012]. Ignoring these effects can significantly deteriorate the performance of the designed controller when it is deployed onto the targeted platform. To address this issue, a common practice is to develop control algorithms based on an ‘identified’ model that is a simulated representation of the real plant. This identified model is obtained by applying a data-driven process called ‘system identification’, which estimates the plant dynamics from measured input and output data. Such implementations have been successful in previous research [@Ng2004; @Ng2006; @Garratt2012; @Kendoul2012; @Hoffer2014].
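As a minimal sketch of what such a data-driven identification step produces (the model orders and the synthetic second-order plant below are illustrative assumptions, not the dynamics identified in this work), a discrete-time ARX model can be fitted to logged input/output data by ordinary least squares:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=1):
    """Least-squares fit of an ARX model:
    y[t] = a_1*y[t-1] + ... + a_na*y[t-na] + b_1*u[t-1] + ... + b_nb*u[t-nb]."""
    start = max(na, nb)
    # Each regression row holds the lagged outputs and inputs for one sample.
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    targets = y[start:]
    theta, *_ = np.linalg.lstsq(np.array(rows), targets, rcond=None)
    return theta[:na], theta[na:]

# Synthetic plant (hypothetical): y[t] = 1.5*y[t-1] - 0.6*y[t-2] + 0.5*u[t-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 1.5 * y[t - 1] - 0.6 * y[t - 2] + 0.5 * u[t - 1]

a, b = fit_arx(u, y, na=2, nb=1)   # recovers [1.5, -0.6] and [0.5]
```

With noise-free data the regression recovers the plant coefficients exactly; with real flight logs, the same procedure yields the closest linear approximation, i.e., the kind of simplified identified model discussed above.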
While much previous work has pursued a perfect model that fully characterizes UAV platforms, a key issue is that loss of performance is still likely when transferring the well-designed (in simulation) controller onto the real platform, whose dynamics differ somewhat – the well-known [*reality gap*]{}. In this work we demonstrate a novel approach to compensating for the reality gap across different platform representations, which works specifically with Spiking Neural Networks (SNNs) that exhibit online adaptation ability through Hebbian plasticity [@Gerstner2002]. We propose an evolutionary learning strategy for SNNs, which includes topology and weight evolution as per NEAT [@Stanley2002], and integration of biological plastic learning mechanisms. In recent decades, SNNs have been of great interest in the computational intelligence community. Applications have been both non-behavioral [@Abbott2016] and behavioral [@Vasu2017; @Qiu2018]. Unlike traditional Artificial Neural Networks (ANNs), which carry out feed-forward computation based on weighted summation of real values, information transmission in SNNs is by means of discrete *spikes* generated during a potential integration process. Such spatiotemporal dynamics are able to yield more powerful computation than non-spiking neural systems [@Maass1997]. Moreover, neuromorphic hardware implementations of SNNs are believed to provide fast and low-power information processing due to their event-driven sparsity [@Bouvier2019], which perfectly suits embedded applications such as UAVs. Plasticity of SNNs is modeled using Hebbian learning rules that update connection weights during neuron activations based on local neural interactions, involving changes of synaptic weights or even formation/removal of synapses. In our work, plastic behaviours are determined by leveraging evolutionary algorithms to optimize plastic rule coefficients, such that each connection is able to develop its own plastic rule.
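To make the plastic rule concrete, a common parameterization from the plasticity-evolution literature is the generalized Hebbian rule, in which each connection carries its own coefficient tuple. The sketch below is illustrative; the learning rate, clipping scheme and coefficient values are assumptions, not the rules evolved in this work:

```python
def hebbian_update(w, pre, post, coef, eta=0.1, w_max=1.0):
    """One plastic update of a single connection weight.
    coef = (A, B, C, D): correlation, presynaptic, postsynaptic and
    constant terms; each connection carries its own evolved tuple."""
    A, B, C, D = coef
    dw = eta * (A * pre * post + B * pre + C * post + D)
    return max(-w_max, min(w_max, w + dw))   # keep the weight bounded

# Purely correlational coefficients (A=1, B=C=D=0): repeated coincident
# pre/post activity strengthens the synapse until it saturates at w_max.
w = 0.2
for _ in range(100):
    w = hebbian_update(w, pre=0.8, post=0.9, coef=(1.0, 0.0, 0.0, 0.0))
```

Evolution would then search over the `coef` tuples (and possibly `eta`) per connection, so that selection, rather than hand-tuning, decides whether a synapse behaves Hebbian, anti-Hebbian or mostly static.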
Plasticity evolution has been used in conventional ANNs [@Urzelai2001; @Soltoggio2007; @Tonelli2011] and SNNs [@Howard2012a], where evolution takes place in the rules that govern synaptic self-organization instead of in the synapses themselves. In this work, we focus on the development of UAV height control. Our approach to resolve this problem is threefold. First, explicit mathematical modeling of the aircraft is not required. Instead, a simplified linear model is identified based on the measurement of the plant’s input and output data. In reality, such models are fast to run and simple to develop. Second, neuroevolution takes place as usual to search through solution space for the construction of high-performance networks. Finally, Hebbian plasticity is implemented by leveraging evolutionary algorithms to optimize plastic rule coefficients that describe how neural connections are updated. The evolved controller is able to exhibit online adaptation due to plasticity, which allows successful transfer to a more realistic model and indicates that transfer to reality would be similarly successful.
Organization of the rest of this paper is as follows. Section \[sect:snn\] introduces our SNN package that is utilized to develop our UAV controller, including descriptions of spiking neuron models, the mechanism of plasticity learning and evolutionary learning strategies. Section \[sect:sysmdl\] presents the plant model to be controlled in this work. Section \[sect:prob\_desc\], \[sect:sysid\] and \[sect:ctrller\] describe the controller development process in detail. Results and analysis are given in Section \[sect:results\]. Finally, conclusions are presented in Section \[sect:conclusion\].
eSpinn: Learning with Spikes {#sect:snn}
============================
Background
----------
The ANN neurons in wide use today follow a computation cycle of *multiply-accumulate-activate*. The neuron model consists of two components: a weighted sum of inputs and an activation function generating the output accordingly. Both the inputs and outputs of these neurons are real-valued. While ANN models have shown exceptional performance in the artificial intelligence domain, they are highly abstracted from their biological counterparts in terms of information representation, transmission and computation paradigms.
SNNs, on the other hand, carry out computation based on biological modeling of neurons and synaptic interactions. As shown in Fig. \[fig:illus\], spikes are fired at certain points in time, whenever the *membrane potential* of a neuron exceeds its threshold. They travel through synapses from the *presynaptic* neuron and arrive at all forward-connected *postsynaptic* neurons. The information carried by spikes is in the form of timing and frequency, rather than amplitude or intensity.
![Illustration of spike transmission in SNNs. Membrane potential $v$ accumulates as input spikes arrive and decays with time. Whenever it reaches a given threshold $\theta$, an output spike will be fired, and the potential will be reset to a resting value.[]{data-label="fig:illus"}](spiking_neuron){width="37.00000%"}
In order to assist the process of designing our spiking controller, we have developed the `eSpinn` software package. The `eSpinn` library stands for **E**volving **Spi**king **N**eural **N**etworks. It is designed to develop controller learning strategies for nonlinear control models by integrating biological learning mechanisms with neuroevolution algorithms. It is able to accommodate different network implementations (ANNs, SNNs and hybrid models) with specific dataflow schemes. `eSpinn` is written in C++ and has abundant interfaces to easily archive data through serialization. It also contains scripts for data visualization and integration with MATLAB and Simulink simulations.
Neuron Model
------------
To date, many kinds of spiking neuron models have been proposed. When implementing a neuron model, trade-offs must be made between biological realism and computational efficiency. In this work we use the two-dimensional Izhikevich model [@Izhikevich2003], because it reproduces rich and complex neuron firing behavior with only two ordinary differential equations:
$$\begin{aligned}
\dot{v} &= 0.04v^2 + 5v +140 -u + I \\
\dot{u} &= a(bv - u)
\end{aligned}
\label{eq:izhi}$$
with after-spike resetting following: $$\text{if } v \geq v_t \text{, then}
\left \{
\begin{array}{l}
v =c \\
u = u + d
\end{array}
\right.
\label{eq:spikereset}$$
Here $v$ represents the membrane potential of the neuron; $u$ represents a recovery variable; $\dot{v}$ and $\dot{u}$ denote their time derivatives, respectively. $I$ represents the synaptic current injected into the neuron. Whenever $v$ exceeds the membrane potential threshold $v_t$, a spike is fired and $v$ and $u$ are reset following Eq. \[eq:spikereset\]. $a, b, c$ and $d$ are dimensionless coefficients that can be tuned to produce different firing patterns [@Izhikevich2003]. The membrane potential response of an Izhikevich neuron to an injected current signal is given in Fig. \[fig:izhi\].
![Membrane potential response $v(t)$ to an external current signal $I(t)$ of an Izhikevich neuron with the following settings: $a$ = 0.02; $b$ = 0.2; $c$ = -65; $d$ = 2.[]{data-label="fig:izhi"}](izhi){width="47.00000%"}
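For concreteness, the dynamics of Eqs. \[eq:izhi\] and \[eq:spikereset\] can be integrated with a simple forward-Euler sketch (shown here in Python for illustration; `eSpinn` itself is written in C++). The parameter values follow the caption of Fig. \[fig:izhi\]; the step size `dt` is an assumption, not the value used in `eSpinn`.

```python
def izhikevich(I, T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=2.0, v_t=30.0):
    """Forward-Euler integration of the Izhikevich model under a constant
    input current I (Eq. eq:izhi), with after-spike resetting
    (Eq. eq:spikereset). Returns the list of spike times in ms."""
    v, u = c, b * c          # start from the resting state
    spikes = []
    t = 0.0
    while t < T:
        if v >= v_t:         # threshold crossed: record a spike and reset
            spikes.append(t)
            v, u = c, u + d
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv
        u += dt * du
        t += dt
    return spikes
```

With these settings a sufficiently large constant current produces tonic spiking, while zero input leaves the neuron at rest.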
A spike train is defined as a temporal sequence of firing times: $$s(t) = \sum_f \delta(t-t^{(f)})$$ where $\delta(t)$ is the Dirac $\delta$ function; $t^{(f)}$ represents the firing time, i.e., the moment of $v$ crossing threshold $v_t$ from below: $$t^{(f)}: v(t^{(f)}) = v_t \text{ and } \dot{v}(t^{(f)}) > 0$$
Network Structure
-----------------
We use a three-layer architecture with hidden-layer recurrent connections, illustrated in Fig. \[fig:snn\_topo\]. The input layer consists of *encoding* neurons that act as information converters. Hidden-layer spiking neurons are connected among themselves via unidirectional weighted synapses. This internal recurrence preserves a history of recent inputs within the network, giving it highly nonlinear functionality. Output neurons can be configured as either activation-based or spiking. In this work a linear unit is used to obtain real-valued outputs from a weighted sum of the outputs of the hidden-layer neurons. A bias neuron with a constant output value can connect to any neuron in the hidden and output layers. Connection weights are bounded within \[-1, 1\]. The NEAT topology and weight evolution scheme is used to form and update network connections and thus to seek functional network compositions.
In a rate coding scheme, neuron output is defined as the spike train frequency calculated within a given time window, a process that is prone to losing precision. `eSpinn` therefore configures a high-accuracy decoding method to derive continuous outputs from discrete spike trains: in addition to decoding spikes in a rate-based manner, intermediate membrane potentials are transferred directly.
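One way such a scheme can work is sketched below (illustrative only; the exact `eSpinn` implementation may differ): the spike count over the window gives a base rate, and the neuron's current sub-threshold membrane potential refines it, so values between two discrete rates can be represented.

```python
def decode_rate(spike_times, window, v, v_rest=-65.0, v_t=30.0):
    """Decode a continuous value from a spike train: the firing rate over
    the window, refined by the fraction of the neuron's progress from rest
    toward its next spike (read off the membrane potential v)."""
    rate = len(spike_times) / window              # spikes per second
    fraction = (v - v_rest) / (v_t - v_rest)      # progress to next spike
    return rate + max(0.0, min(1.0, fraction)) / window
```

A neuron that fired three times in a one-second window with its potential back at rest decodes to exactly 3.0, while residual charge adds a sub-spike correction.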
*\[Fig. \[fig:snn\_topo\]: topology of the three-layer recurrent SNN: encoding inputs $v_z$ and $e_z$, recurrently connected hidden neurons, a bias neuron $b$, and a linear output unit producing the thrust command $T = w_1 o_1 + w_2 o_2 + w_3 b$.\]*
Hebbian Plasticity
------------------
In neuroscience, studies have shown that synaptic strength in biological neural systems is not fixed but changes over time [@Kandel1992] – connection weights between pre- and postsynaptic neurons change according to their degree of causality. This phenomenon is referred to as Hebbian plasticity, inspired by Hebb’s postulate [@Hebb1949]. Modern Hebbian rules generally describe the weight change $\Delta w$ as a function of the joint activity of the pre- and postsynaptic neurons: $$\Delta w = f(w_{ij}, u_j, u_i)
$$ where $w_{ij}$ represents the weight of the connection from neuron $j$ to neuron $i$; $u_j$ and $u_i$ represent the firing activity of $j$ and $i$, respectively.
In a spike-based scheme, we consider synaptic plasticity at the level of individual spikes. This has led to a phenomenological temporal Hebbian paradigm, Spike-Timing-Dependent Plasticity (STDP) [@Gerstner2002], which modulates synaptic weights between neurons based on the temporal difference of their spikes.
While different STDP variants have been proposed [@Izhikevich2003a], the basic principle of STDP is that the weight change is driven by the causal correlation between pre- and postsynaptic spikes: the change is more significant when the two spikes fire closer together in time. The standard STDP learning window is formulated as:
$$W (\Delta t) =
\left \{
\begin{array}{ll}
A_+ e^{-\frac{\Delta t}{\tau_+}} & \Delta t > 0, \\
A_- e^{\frac{\Delta t}{\tau_-}} & \Delta t < 0.
\end{array}
\right.
\label{stdp}$$
where $A_+$ and $A_-$ are scaling constants for the strength of potentiation and depression; $\tau_+$ and $\tau_-$ are time decay constants; $\Delta t$ is the time difference between the pre- and postsynaptic firing times: $$\Delta t = t_{post} - t_{pre}$$
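The learning window of Eq. \[stdp\] translates directly into code. The sketch below uses the constants from Fig. \[fig:hebb\]; times are in seconds.

```python
import math

def stdp_window(dt, A_plus=0.1, A_minus=-0.1, tau_plus=0.02, tau_minus=0.02):
    """Standard exponential STDP window W(dt), dt = t_post - t_pre."""
    if dt > 0:      # pre fires before post: potentiation
        return A_plus * math.exp(-dt / tau_plus)
    if dt < 0:      # post fires before pre: depression
        return A_minus * math.exp(dt / tau_minus)
    return 0.0
```

The window is asymmetric in sign but symmetric in decay: a 5 ms causal pairing potentiates more strongly than a 50 ms one.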
In `eSpinn` we have introduced a rate-based Hebbian model derived from the nearest neighbor STDP implementation [@Izhikevich2003a], with two additional evolvable parameters: $$\dot{w} = u_i (\frac{A_+}{\tau_+^{-1}+u_i} + \frac{k_m (u_j-u_i+k_c)+A_-}{\tau_-^{-1}+u_i})
\label{eq:hebb}$$ where $k_m$ is a magnitude term that determines the amplitude of weight changes, and $k_c$ is a correlation term that determines the correlation between pre- and postsynaptic firing activity. These factors are made evolvable so that their best values can be located autonomously. Fig. \[fig:hebb\] shows the resulting Hebbian learning curve. The connection weight has a stable converging equilibrium at $u_{\theta}$, owing to the correlation term $k_c$; this equilibrium corresponds to a balance of pre- and postsynaptic firing.
![Hebbian learning curve with $A_+$ = 0.1, $A_-$ = -0.1, $\tau_+$ = 0.02s, $\tau_-$ = 0.02s[]{data-label="fig:hebb"}](hebb){width="47.00000%"}
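Eq. \[eq:hebb\] can be checked numerically with a direct transcription (Python sketch; the $k_m$, $k_c$ values below are arbitrary examples). With the constants of Fig. \[fig:hebb\] ($\tau_+ = \tau_-$ and $A_+ = -A_-$), the sign of $\dot w$ flips exactly at $u_i = u_j + k_c$, which is the equilibrium $u_\theta$ discussed above.

```python
def hebb_rate(u_pre, u_post, k_m, k_c,
              A_plus=0.1, A_minus=-0.1, tau_plus=0.02, tau_minus=0.02):
    """Rate-based Hebbian weight derivative of Eq. eq:hebb.
    u_pre = u_j and u_post = u_i are firing rates (Hz)."""
    return u_post * (A_plus / (1.0 / tau_plus + u_post)
                     + (k_m * (u_pre - u_post + k_c) + A_minus)
                       / (1.0 / tau_minus + u_post))
```

Below the equilibrium the weight grows (driving the postsynaptic rate up), above it the weight shrinks, so the rule is self-stabilizing.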
Learning in Neuroevolution {#neat}
--------------------------
While gradient methods have been very successful in training traditional MLPs [@Demuth2014], applying them to SNNs is not straightforward, because spike events are discrete and the required gradient information is not readily available. Instead, `eSpinn` implements its own version of a popular neuroevolution approach – NEAT [@Stanley2002], which can accommodate different network implementations and integrate with Hebbian plasticity, as the method to learn the best network controller.
NEAT is a popular neuroevolution algorithm that involves network topology and weight evolution. It enables an incremental network topological growth to discover the (near) minimal effective network structure.
The basis of NEAT is the use of *historical markings*, which are essentially gene IDs. They serve as a measure of the genetic similarity of network topologies, based on which genomes are clustered into species. NEAT then uses an explicit fitness sharing scheme [@Eiben2015] to preserve network diversity. Meanwhile, these markings are also used to line up genes from variant topologies, allowing divergent genomes to be crossed over in a rational manner.
`eSpinn` keeps a global list of innovations (e.g., structural variations), so that when an innovation occurs, we can tell whether it has already existed. This mechanism ensures that networks with the same topology have exactly the same innovation numbers, which is essential during network structural growth.
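The registry can be sketched as a map from a structural mutation to a stable ID (names below are hypothetical, not the `eSpinn` API): two genomes that independently add the same connection receive the same innovation number, so their genes line up for crossover.

```python
class InnovationRegistry:
    """Global list of structural innovations. Identical mutations (same
    kind, source and target neurons) get the same innovation number across
    the whole population."""
    def __init__(self):
        self._table = {}      # (kind, src, dst) -> innovation id
        self._next_id = 0

    def innovation(self, kind, src, dst):
        key = (kind, src, dst)
        if key not in self._table:     # a genuinely new innovation
            self._table[key] = self._next_id
            self._next_id += 1
        return self._table[key]
```

Distinct mutations get distinct IDs; repeating a known mutation returns its original ID rather than minting a new one.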
System Modeling {#sect:sysmdl}
===============
The experimental platform is a commercial hexacopter, Tarot 680 Pro, fitted with a Pixhawk 2 autopilot system. To assist the development and tests of our control paradigms, we have developed a Simulink model derived from first principles, which contains 6-DOF rigid body dynamics and non-linear aerodynamics. Many aspects of the hexacopter dynamics are modeled with C/C++ S-functions, which describe the functionalities of Simulink blocks in C/C++ with MATLAB built-in APIs.
The simulation system is based on a hierarchical architecture. The top-level diagram of the system is given in Fig. \[fig:hexa-diagram\]. The ‘Control Mixing’ block combines controller commands from the ‘Attitude Controller’, ‘Yaw Controller’ and ‘Height Controller’ to calculate appropriate rotor speed commands using a linear mixing matrix.
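A linear mixing matrix of the kind used in the ‘Control Mixing’ block can be sketched as follows. The rotor layout and sign conventions here are a hypothetical symmetric "X" hexacopter, not the actual Tarot 680 Pro geometry.

```python
import math

# Hypothetical symmetric layout: rotor arm angles from the body x-axis
# and spin directions (+1 CCW, -1 CW). The real geometry may differ.
ANGLES_DEG = [30, 90, 150, 210, 270, 330]
SPIN       = [ 1, -1,   1,  -1,   1,  -1]

def mix(thrust, roll, pitch, yaw):
    """Linear control mixing: combine the collective thrust command with
    roll/pitch/yaw torque commands into six per-rotor commands."""
    cmds = []
    for ang, s in zip(ANGLES_DEG, SPIN):
        a = math.radians(ang)
        cmds.append(thrust / 6.0          # equal share of collective thrust
                    - roll * math.sin(a)  # differential thrust across the y-axis
                    + pitch * math.cos(a) # differential thrust across the x-axis
                    + yaw * s)            # reaction torque of CW vs CCW rotors
    return cmds
```

Because the layout is symmetric, the roll, pitch and yaw terms cancel in the sum, so attitude commands redistribute thrust without changing the collective.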
In the ‘Forces & Moments’ block we take the rotor speeds and calculate the thrust and torque of each rotor based on the relative airflow through the blades. Then the yawing torque will be obtained by simply summing up the torque of each rotor. Rolling and pitching torques can also be calculated by multiplying the thrust of each rotor with corresponding torque arms. Meanwhile, we have also introduced a drag term on the fuselage caused by aircraft climb/descent, of which the direction is opposite to the vector sum of aircraft velocity. The collective thrust would be equal to the sum of thrust of each rotor combined with the drag effect.
Afterwards, the thrust and torques are fed to the ‘Hexacopter Dynamics’ block. Assuming the UAV is a rigid body, Newton’s second law of motion is used to calculate the linear and angular accelerations and hence the state of the drone will be updated. To convert the local velocities of the UAV to the earth-based coordinate we will need a rotation matrix, which is parameterized in terms of quaternion to avoid singularities caused by reciprocating trigonometric functions (gimbal lock).
{width="87.00000%"}
Finally, closed-loop simulations have been tested to validate the functionality of the Simulink model. Tuned PID controllers that display fast response and low steady output error are used in both the inner and outer loops as a challenging benchmark.
Problem Description {#sect:prob_desc}
===================
In this work, we are aiming to develop an SNN controller for height control of a hexacopter without explicit modeling of the UAV. Hebbian plasticity that is evolved offline enables online adaptation to cross the gap between the identified model and the targeted plant.
The controller takes some known states of the plant model (i.e., error in z-axis between the desired and current position as well as the vertical velocity) and learns to generate a functional action selection policy. The output is a thrust command that will be fed into the plant so that its status can be updated.
Our approach to resolve the problem is threefold. First, system identification is carried out to construct a heave model to loosely approximate the dynamics of the hexacopter. Then neuroevolution is used to search for functional SNN controllers to control the identified heave model. Network topology and initial weight configurations are determined. Finally, the fittest controller is selected for further evolution. Hebbian plasticity is activated so that the network is able to adapt connection weights according to local neural activations. An EA is used to determine the best plasticity rules by evolving the two parameters $k_m$ and $k_c$ in Eq. \[eq:hebb\]. Each connection can develop its own plasticity rule. The above-mentioned processes will be offline and only involve the identified model, and the dynamics of the hexacopter are unknown to the controller.
On completion of training, the champion network with the best plasticity rules will be deployed to drive the hexacopter model, which is a more true-to-life representation of the real plant. Note that although the goal is simulation-to-reality transfer, we here prove the concept in a time-efficient manner by transferring from a simpler to a more complex model, a transfer that encapsulates issues inherent in crossing the reality gap, i.e., incomplete capture of the true flight dynamics and oversimplification of real conditions.
Identification of Heave Model {#sect:sysid}
=============================
In this section we identify a heave model that resembles the dynamics of the hexacopter. The purpose of this process is not to obtain a model that captures the exact dynamics, but to build a loose approximation of the hexacopter model.
Essentially, this means modeling the relationship between the vertical velocity $v_z$, the collective thrust $T$ and the vertical acceleration $a_z$. Fig. \[fig:acc-tvz\] shows the nonlinear response of the vertical acceleration to a varying thrust command as the vertical speed ranges from -3 m/s to 3 m/s. Note that the acceleration is actually the net effect of the z-axis forces acting on the body, which are generated by the rotor thrust and by the vertical drag on the rotor downwash and the fuselage. The net acceleration $a_n$ is $a_z$ plus the gravitational acceleration $g$.
![Nonlinear relationship between vertical velocity $v_z$ (-3m/s to 3m/s), thrust command $T$ and vertical acceleration $a_z$.[]{data-label="fig:acc-tvz"}](hexa_dynamics){width="46.00000%"}
In our identified model, vertical acceleration $a_z$ is approximated as a linear combination of the thrust command $T$ and vertical speed $v_z$. $v_z$, on the other hand, is obtained by integrating the net acceleration of z-axis $a_n$:
$$\begin{aligned}
a_z &= k_T T + k_v v_z + b \\
a_{n} &= a_z + g \\
v_z &= \int a_{n}
\label{eq:idmodel}
\end{aligned}$$
where $k_T$ and $k_v$ are configurable coefficients; $b$ is a tunable bias ensuring that the linear function is expanded around the point where the net acceleration equals zero, i.e., $a_z = -g$.
![Acceleration curves of the identified model ($a_z^{id}$) and the hexacopter model ($a_z$) with varying thrust command. The identified curve is tangent with that of the hexacopter model at the point where the net acceleration $a_n$ is 0. $\alpha$ is the slope angle of the identified linear curve, from which $k_T$ is obtained. $k_v$ is calculated from the vertical distance between the two nonlinear curves. []{data-label="fig:idmodel"}](heave_dynamics){width="46.00000%"}
We take two of the acceleration curves from Fig. \[fig:acc-tvz\] (i.e., for $v_z$ = 0m/s and $v_z$ = 1m/s) to model the linear function. $k_T$ is identified as the slope of $a_z$ against $T$ when $v_z$ = 0 at the point where $a_n = 0$. $k_v$ is then calculated from the vertical distance between the two nonlinear curves. Finally, $b$ is set to shift the linear curve vertically, so that the identified model will be tangent with the hexacopter curve at the point where $a_n = 0$. The response of the resulting identified linear model is given in Fig. \[fig:idmodel\].
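Once identified, the linear model of Eq. \[eq:idmodel\] is cheap to simulate. The sketch below uses placeholder coefficients ($k_T$, $k_v$ and the hover command $T_0$ are assumptions, not the values identified from Fig. \[fig:idmodel\]), with $b$ chosen so that the model hovers ($a_n = 0$) at $T = T_0$, and with z taken positive downward.

```python
G = 9.81  # gravitational acceleration, m/s^2

def simulate_heave(thrust, steps=100, dt=0.02, k_T=-0.02, k_v=-0.1, T0=500.0):
    """Euler integration of the identified heave model (Eq. eq:idmodel).
    k_T, k_v, T0 are placeholder values; b is set so that a_z = -g when
    T = T0 and v_z = 0 (hover). Returns the final vertical velocity."""
    b = -G - k_T * T0
    v_z = 0.0
    for _ in range(steps):
        a_z = k_T * thrust + k_v * v_z + b   # linear approximation
        a_n = a_z + G                        # net vertical acceleration
        v_z += a_n * dt                      # v_z is the integral of a_n
    return v_z
```

At the hover command the velocity stays at zero; commands above or below it drive the model to climb or descend (negative $v_z$ is a climb in this z-down convention).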
Finally, the same random thrust command is fed to the two models to validate their functional similarity. The system responses are given in Fig. \[fig:valid\]. The response of the identified model clearly differs from that of the hexacopter model, which is intentional, while still approximating the original system.
![Validation of identified heave model. System response of the two models when fed with the same thrust command signal.[]{data-label="fig:valid"}](heave_valid){width="43.00000%"}
Controller Development and Deployment {#sect:ctrller}
=====================================
Evolution of Non-plastic Controllers
------------------------------------
With the identified model developed according to Eq. \[eq:idmodel\], we begin to search for optimal network compositions by evolving SNNs using our NEAT implementation. By ‘optimal’ we mean an SNN controller that can drive the plant model to follow a reference signal with minimal height error during the course of flight. Each simulation (flight) lasts 80 s and is updated every 0.02 s. To speed up the evolution process, the whole simulation in this part is implemented in C++ with our `eSpinn` package. At the beginning, a population of non-plastic networks is initialized and categorized into species. These networks are feed-forward and fully connected, with random connection weights. The initial topology is 2-4-1 (input-hidden-output layer neurons), with an additional bias neuron connected to all hidden- and output-layer neurons. The two inputs are the z-axis position error $e_z$ and the vertical velocity $v_z$; beyond these, the system’s dynamics are unknown to the controller. The output of the controller is the thrust command fed to the plant model.
Encoding of sensing data is done by the `encoding` neurons in the input layer. Input data are first normalized to the range \[0,1\], so that the standardized signal can be linearly converted into a current value (i.e., $I$ in Eq. \[eq:izhi\]). This so-called ‘current coding’ method is a common practice to provide a notional scale for the input metrics. After initialization, each network is iterated one by one and evaluated against the plant model. A fitness value is assigned to each network based on its performance. These networks are then ranked within their species by fitness in descending order. A new generation is formed from the best parent networks using NEAT: only the top 20% of parents in each species are allowed to reproduce, after which the previous generation is discarded and the newly created children form the next generation. During evolution, a hidden-layer neuron is added with a probability of 0.005 and a new connection with a probability of 0.01. Connection weights are bounded within \[-1, 1\].
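The ‘current coding’ step is a two-stage linear map, sketched below. The output current range is an assumed example, not the range used in `eSpinn`.

```python
def encode_current(x, x_min, x_max, I_min=0.0, I_max=20.0):
    """Current coding: normalize a sensor reading to [0, 1], then map it
    linearly onto an injected current I for an encoding neuron."""
    u = (x - x_min) / (x_max - x_min)
    u = max(0.0, min(1.0, u))        # clamp out-of-range readings
    return I_min + u * (I_max - I_min)
```

For instance, a vertical velocity of 0 m/s in a \[-3, 3\] m/s range lands in the middle of the current range, and out-of-range readings saturate at the bounds.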
The program terminates when the population’s best fitness has been stagnant for 12 generations or when the evolution reaches 50 generations[^1]. During the simulation, the outputs of the champion are saved to files for later visualization, together with the best fitness. Upon completion of the simulation, the data structure of the whole population is archived to a text file, from which it can be reconstructed in our later work.
Searching Solutions
-------------------
Note that the control system to be solved is a Constraint Problem [@Michalewicz1996], because the height of the UAV must stay within a certain range in the real world. However, constraint handling is not straightforward in NEAT – invalid solutions that violate the system’s boundary can be generated even if their parents satisfy the constraints. Therefore, in this paper we use the feasibility-first principle [@Michalewicz1996] to handle the constraints.
We divide the potential solution space into two disjoint regions, feasible and infeasible, according to whether the hexacopter stays within the bounded area during the entire simulation. For infeasible candidates, a penalized fitness function is introduced so that their fitness values are guaranteed to be smaller than those of feasible ones.
We define the fitness function of feasible solutions based on the mean absolute error during the simulation: $$f = 1- \bar{\lvert e_n \rvert} \label{fit_f}
$$ where $\bar{\lvert e_n \rvert}$ denotes the mean normalized absolute error between the actual and reference positions. Since the error is normalized, a desired solution will have a fitness value close to 1.
For infeasible solutions, we define the fitness based on the time the hexacopter stays in the bounded region: $$f = k (t_i / t_t) \label{fit_if}$$ where $t_i$ is the number of consecutive steps the hexacopter stays in the bounded region, and $t_t$ is the total number of steps in the simulation. The penalty is applied through the scaling factor $k = 0.2$.
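Eqs. \[fit\_f\] and \[fit\_if\] combine into a single feasibility-first evaluation: by construction, an infeasible candidate scores at most $k = 0.2$, which is below any feasible candidate whose mean normalized error is under 0.8.

```python
def fitness(feasible, mean_abs_err=0.0, t_in=0, t_total=1, k=0.2):
    """Feasibility-first fitness. Feasible flights score 1 - mean
    normalized |error| (Eq. fit_f); infeasible flights score
    k * (t_in / t_total) <= k (Eq. fit_if)."""
    if feasible:
        return 1.0 - mean_abs_err
    return k * (t_in / t_total)
```

A flight that tracks well scores near 1, while even an infeasible flight that stayed in bounds for its entire (truncated) run cannot outrank a mediocre feasible one.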
Enabling Plasticity
-------------------
Once the above step is done to discover the optimal network topology, we proceed to consider the plasticity rules.
The champion network from the previous step is loaded from file, with the Hebbian rule activated. It is spawned into a NEAT population, where each network connection has randomly initialized Hebbian parameters (i.e., $k_m$ and $k_c$ in Eq. \[eq:hebb\]).
Networks are evaluated as previously stated and the best parents are selected to reproduce. During this step, all evolution is disabled except for that of the plasticity rules, i.e., the EA is used only to determine the optimal configuration of the plasticity rules.
Transferring Controllers
------------------------
Upon completion of the previous steps, the final network controller is obtained and ready for deployment. To construct the controller in the Simulink hexacopter model, it is implemented as a C++ S-function block.
Results and Analysis {#sect:results}
====================
10 runs of the controller development process have been conducted to perform statistical analysis. Data are recorded to files and analyzed offline with MATLAB.
Adaptation in Progress
----------------------
Table \[tab:fit\] shows the fitness changes of the best controller during the course. From left to right are non-plastic networks controlling the identified model, plastic networks controlling the identified model and plastic networks controlling the hexacopter model, respectively. The fitness values are averaged among the 10 runs.
  Controller (plant)   Non-plastic (identified)   Plastic (identified)   Plastic (hexacopter)
  -------------------- -------------------------- ---------------------- ----------------------
  Mean fitness         0.9189                     0.9349                 0.9298

  : Mean fitness of the best controllers at each development stage[]{data-label="tab:fit"}
The best non-plastic networks have the lowest fitness overall. By enabling the plasticity rule, an increase in fitness can be clearly observed when controlling the same identified model. The plastic controllers demonstrate better performance even when transferred to control the hexacopter model that has slightly different dynamics.
Plastic vs. Non-plastic
-----------------------
Fitness Non-plastic Plastic
--------- ------------- ---------
Run 1 0.9188 0.9350
Run 2 0.9074 0.9271
Run 3 0.9261 0.9396
Run 4 0.9280 0.9465
Run 5 0.9053 0.9162
Run 6 0.9046 0.9166
Run 7 0.9174 0.9338
Run 8 0.9188 0.9256
Run 9 0.9219 0.9366
Run 10 0.9210 0.9207
Mean 0.9169 0.9298
: Non-Plastic vs. Plastic Controllers on the Hexacopter Model[]{data-label="tab:p-vs-np"}
A second comparison is conducted between non-plastic and plastic controllers on the hexacopter model. Results are given in Table \[tab:p-vs-np\]. For 9 out of the 10 runs we see a performance improvement when plasticity is enabled, and the single run without improvement still has a close fitness value. Statistical significance is assessed using the two-tailed Mann-Whitney *U*-test on the two sets of data. The $U$-value is 21, showing that the plastic controllers are significantly better than the non-plastic ones at $p < 0.05$.
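The reported $U$-value can be reproduced directly from Table \[tab:p-vs-np\] (for two samples of size 10, the two-tailed critical value at $\alpha = 0.05$ is 23, so $U = 21$ indicates significance):

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs: the number of pairs (x, y) with
    x > y, counting ties as 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

non_plastic = [0.9188, 0.9074, 0.9261, 0.9280, 0.9053,
               0.9046, 0.9174, 0.9188, 0.9219, 0.9210]
plastic     = [0.9350, 0.9271, 0.9396, 0.9465, 0.9162,
               0.9166, 0.9338, 0.9256, 0.9366, 0.9207]

U = mann_whitney_u(non_plastic, plastic)   # -> 21.0, as reported
```

The complementary statistic for the plastic sample is $100 - 21 = 79$, as the two must sum to $n_1 n_2 = 100$.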
![Height control using the non-plastic and plastic SNNs[]{data-label="fig:np-vs-p"}](np_vs_p){width="47.00000%"}
![Connection weight adaptation during simulation[]{data-label="fig:weight"}](weight_p){width="37.00000%"}
Fig. \[fig:np-vs-p\] shows a typical run using the non-plastic and the plastic controller. The plastic control system has a faster response as well as a smaller steady error. Plasticity is clearly a key component in bridging the gap between the two models. To visualize the adaptation process, a weight watcher is introduced to monitor connection weight changes during the simulation. The result is given in Fig. \[fig:weight\].
{width="96.00000%"}
Validation of Plasticity
------------------------
In order to verify the contribution of the proposed Hebbian plasticity, we have extracted the evolved best plastic rule and applied it to other networks that have sub-optimal performance. With plasticity enabled, a sub-optimal network is selected to repetitively drive the hexacopter model to follow the same reference signal. Fig. \[fig:np\_p123\] shows the progress of 4 consecutive runs when a) plasticity is disabled; b-d) plasticity is enabled.
In panel a), there is a considerable steady error in the system output. When plasticity is turned on, the connection weights begin to adjust themselves gradually, and the system follows the reference signal with a steady error that decreases to around 0.005 m. Meanwhile a fitness increase is witnessed from a) 0.921296 through b) 0.927559 and c) 0.932286 to d) 0.933918.
The same results are obtained when the rule is assigned to other near-optimal networks, while for networks with poor initial performance, plasticity learns worse patterns. This analysis justifies our evolutionary approach of searching for the optimal plastic function, demonstrating that plasticity narrows the reality gap for evolved spiking neurocontrollers.
Comparing with PID control
--------------------------
PID control is a classic linear control algorithm that has been dominant in engineering. The aforementioned PID height controller is taken for comparison. This is a challenging benchmark as the PID has been tuned for high performance. Note here the PID controller is designed directly based on the hexacopter model, whereas the SNN controller only relies on the identified model and utilizes Hebbian plasticity to adapt itself to the new plant model.
The system outputs of the two approaches are given in Fig. \[fig:pid-vs-p\]. Evidently our controller has a smaller overshoot and steady output error. The PID controller achieves a fitness value of 0.946319, while our plastic SNN controller achieves 0.955022.
![Height control using PID and plastic SNNs[]{data-label="fig:pid-vs-p"}](pid_vs_p){width="47.00000%"}
Conclusions {#sect:conclusion}
===========
Our work has presented a solution to applied evolutionary aerial robotics, where evolution is used not only in network initial construction, but also to formulate plasticity rules which govern synaptic self-modulation for online adaptation based on local neural activities. We have shown that plasticity can make the controller more adaptive to model changes in a way that evolutionary approaches cannot accommodate. We are currently in the process of applying this controller development strategy to a real hexacopter platform, and expanding from height control to encompass all degrees of freedom in the UAV.
[25]{}
[Abbott2016]{} . . ( ), .
[Alaimo2013]{} . . In . .
[Bouvier2019]{} . . , , Article ( ), pages.
[Demuth2014]{} . . , .
[Eiben2015]{} . . .
[Garratt2012]{} . . , ( ), .
[Gerstner2002]{} . . , .
[Hebb1949]{} . . .
[Hoffer2014]{} . . , ( ), .
[Howard2012a]{} . . , ( ), .
[Izhikevich2003]{} . . , ( ), .
[Izhikevich2003a]{} . . , (), .
[Kandel1992]{} . . , (), .
[Kendoul2012]{} . . , (), .
[Maass1997]{} . . , (), .
[Michalewicz1996]{} . . , (), .
[Ng2006]{} . . , , .
[Ng2004]{} . . In , (Eds.). , .
[Pounds2010]{} . . , (), .
[Qiu2018]{} . . In . .
[Soltoggio2007]{} . . In . .
[Stanley2002]{} . . , (), .
[Tonelli2011]{} . . In . , , .
[Urzelai2001]{} . . , (), .
[Vasu2017]{} . . In . , , .
[^1]: empirically determined
---
author:
- |
[Giuseppe Petracca$^{\star}$]{}\
[email protected]
- |
[Ahmad Atamli$^{\ast}$]{}\
[email protected]
- |
[Yuqiong Sun$^{\star}$]{}\
[email protected]
- |
[Jens Grossklags$^{\star}$]{}\
[email protected]
- |
[Trent Jaeger$^{\star}$]{}\
[email protected]\
$^{\star}$The Pennsylvania State University\
$^{\ast}$University of Oxford
title: '**: Controlling App Access to I/O Devices on Mobile Platforms**'
---
Introduction {#sect:intro}
============
Nowadays, mobile platforms are equipped with cameras, microphones and wide screens, which enable a variety of popular functions, ranging from audio/video recording to displaying information to users. Many apps now utilize these functions to provide services that leverage on-board Input/Output (I/O) devices. For example, many apps now support voice and video messages, as well as photo/video shooting and editing. Even insurance or banking apps now utilize mobile platforms’ cameras to collect sensitive information to expedite claim processing or check depositing [@esurance; @pnc]. Apps able to record the screen content are also available for remote screen sharing or tutorial editing.
However, uncontrolled access to on-board I/O devices can enable malicious apps with access to these devices to exfiltrate sensitive information. Adversaries have built malware apps, referred to as [*Remote Access Trojans*]{} (RATs), that abuse authorized access to such devices to extract audio, video, screen content, and more, from mobile platforms such as smartphones and tablets. Malware in the wild not only surreptitiously records data from a variety of mobile devices [@lipovsky; @rogers; @npr], but also performs directed attacks, such as constructing three-dimensional models of indoor environments [@templeman] and extracting credit card numbers and PINs [@schlegel] from screenshots or keyboard tones. Furthermore, uncontrolled access to on-board cameras and microphones becomes sensitive when apps can stealthily take photos or videos and record audio from a background service[^1]. Additionally, RAT apps can use social-engineering techniques [@soc_eng] to hijack user-requested activities, for example showing a soft button on screen that supposedly lets the user take a picture when in reality the user’s action triggers voice recording through the smartphone’s microphone.
Current defenses do not prevent malicious apps that happen to be granted access to I/O devices from exfiltrating sensitive data. Android, iOS, and Windows Phone OS all require users to authorize apps for access to I/O devices, such as the camera and microphone, at install time or at first use. In many cases, users may assume that the use of such devices will be important, if not fundamental, for the effective operation of such apps. Should the user authorize an app, the app can then choose when to use the device. In addition, access to screen content is not even mediated by any of the mobile operating systems. More restrictive security models, such as SE Android [@smalley2013security], cannot control apps access to such devices further, as they mainly protect the lower level Android system. Some research systems aim to prevent unauthorized access to resources, such as these devices, by empowering apps to assist in the decision making [@roesner2012user; @nadkarni2014asm; @backes2014android]. However, since apps may be malicious, attacks cannot be prevented by this method. Alternatively, researchers have explored auditing the use of such devices [@xu2015semadroid] and providing visible indication of app behaviors [@bianchi2015app], but the former only detects attacks retroactively after the data has been exfiltrated, whereas user notification alone requires users to pay attention to each operation’s security status, continuously, to avoid missing attacks.
In this work, we propose the framework for authorizing app requests to perform operations using I/O devices, which binds app requests to user intentions to make all uses of I/O devices explicit. To do this, we take the following steps. First, app requests for operations using I/O devices are intercepted by the -enabled services to mediate all security-sensitive operations. Second, makes each app request visible to the user, independently of the app, so that users can express their intent without being spoofed. Third, makes ongoing operations targeting I/O devices visible to users, so they may choose to terminate them. As opposed to previous solutions, does not depend on apps to govern users, but rather links app requests to user input. Also, maintains the security status of ongoing operations, so user actions are only necessary at operation initiation and completion, rather than requiring continuous user monitoring [@bianchi2015app].
We have implemented on Android OS (android-6.0.1\_r5 version) and found it to perform effectively, adding a maximum overhead of 4.79% (minimum 2.19%). We have performed a user study involving 74 human subjects. Without , only 18% of the study participants were able to identify attacks from tested RAT apps. systematically blocks all the attacks in the absence of user consent and enabled study participants to identify 82% of the social-engineering attacks tested to hijack approved requests, including some more sophisticated forms of social engineering not yet present in available RATs. This paper makes the following contributions:
- We reverse-engineer two real-world RAT apps and two proof-of-concept RAT apps to systematically study and categorize the different techniques used by adversaries in mounting attacks targeting on-board I/O devices. We identify five security properties that must be satisfied in order to ensure protection against stealthy operations from malicious apps targeting on-board I/O devices.
- We propose , a security framework, that introduces defense mechanisms for enforcing these five security properties by mediating app requests to I/O devices and matching those requests to user consent, which blocks all unapproved requests and maintains the security status of ongoing requests to the user to enable prevention of social-engineering attacks.
- We conduct an extensive user study involving 74 human subjects to evaluate: (1) users’ awareness of RAT attacks targeting I/O devices, (2) effectiveness of RAT attacks targeting on-board I/O devices, and (3) effectiveness of our proposed defense mechanisms in increasing users’ awareness and control over sensitive operations targeting I/O devices.
- We evaluate our approach on five RAT apps and eight widely-used apps, to show that it is possible to prevent attacks from RAT apps without compromising functionality or introducing significant performance overhead.
Problem Definition
==================
In this section, we describe Remote Access Trojans (RATs) to demonstrate attacks that exploit the use of I/O devices. We then examine the state-of-the-art in permission granting for mobile platforms to understand why such malware apps are capable of exploiting I/O devices on smartphones. Finally, we outline the challenges for defenses capable of blocking such attacks.
Remote Access Trojans (RATs) {#rats}
----------------------------
On smartphones, *Remote Access Trojans* (RATs) are malicious apps users may be tricked into installing on their smartphones that aim to violate users’ privacy and data confidentiality. RATs collect security-sensitive information through on-board I/O devices, such as cameras, microphones, and screen buffers using authorized app permissions to perform stealthy, malicious operations including taking photos and videos, recording audio, or capturing screenshots.
Researchers have designed and developed mobile RATs to demonstrate limitations of current access control models adopted in mobile OSs. Examples include: *PlaceRaider* [@templeman], which uses the camera and other sensors to construct rich, three-dimensional, models of indoor environments; and *Soundcomber* [@schlegel], which exfiltrates sensitive data, such as credit card and PIN numbers, from both tones and speech-based interaction with phone menu systems. Real-world RATs are also available online for purchase. Two popular ones, widely discussed in security blogs and anti-virus companies, are: *Dendroid* [@rogers], which takes photos using the phone’s camera, records audio and video, downloads existing photos, records calls, sends texts, etc. and *Krysanec* [@lipovsky], which takes photos, records audio through the microphone, locates victims via devices’ GPS, views victims’ contact list, and intercepts and sends text messages.
We reverse engineered and statically analyzed these RATs. The two real-world RATs were leaked online, whereas the two proof-of-concept RATs were shared by researchers. The analysis revealed how RATs work. We found that: (1) *all* analyzed RATs require a specific set of permissions to access on-board I/O devices, except when accessing the screen buffers; (2) *all* of them run a background service that stealthily performs malicious operations by abusing permissions granted at install time or first use; (3) *none* of their activities is shown on screen; (4) *all* of them need Internet access to leak the security-sensitive data collected; and (5) their stealthy operations are *never* associated with any user interaction with the smartphone.
Limits of Permission Granting
-----------------------------
Mobile OSes currently support two default mechanisms to grant apps permission to access on-board I/O devices.
First, in Android OS, users grant apps permission to access I/O devices at *install time*. Apps receiving permission at install time can then access I/O devices *at any time*, without further user approval, so users are unable to track *how* and *when* sensitive on-board I/O devices are accessed by apps at runtime. Table \[permission\_analysis\] summarizes the permission sets required by the analyzed RAT apps to perform stealthy operations. We found that: (1) *all* permissions used by RAT apps to perform stealthy operations are classified as *dangerous* by the official Android OS documentation [@AndroidDoc]; (2) users are *never* notified about accesses to security-sensitive on-board I/O devices[^2] at runtime, via on-screen prompts, or notifications; and (3) the *same* sets of permissions are used by well-known benign apps downloaded by millions of users worldwide. We performed an extensive analysis of permission sets used by apps by randomly selecting 74 apps from third-party app stores [@MobileApkWorld; @ApksFree] and 329 apps from the official Google Play [@GooglePlay]. The results are cause for concern: 83.89% of apps from the official Google Play store could potentially take stealthy screenshots, whereas 25.68% of apps from third-party app stores could potentially take stealthy photos (complete analysis results are reported in Appendix \[perm\_ineff\]).
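The permission-overlap concern described above can be approximated with a simple check: given the set of permissions granted to an app, flag which stealthy capabilities it could exercise. The following is an illustrative sketch, not the actual analysis tooling; the permission strings are real Android permission names, but the capability groupings are our own assumption.

```java
import java.util.*;

// Maps each stealthy capability to the Android permissions that enable it.
// The capability groupings are illustrative; the permission strings are the
// real Android manifest permission names.
public class StealthCapabilityCheck {
    private static final Map<String, Set<String>> CAPABILITIES = new HashMap<>();
    static {
        CAPABILITIES.put("stealthy photo/video",
            new HashSet<>(Arrays.asList("android.permission.CAMERA")));
        CAPABILITIES.put("stealthy audio recording",
            new HashSet<>(Arrays.asList("android.permission.RECORD_AUDIO")));
        CAPABILITIES.put("data exfiltration",
            new HashSet<>(Arrays.asList("android.permission.INTERNET")));
    }

    // Returns the capabilities whose required permissions are all granted.
    public static List<String> flag(Set<String> granted) {
        List<String> flagged = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : CAPABILITIES.entrySet()) {
            if (granted.containsAll(e.getValue())) {
                flagged.add(e.getKey());
            }
        }
        return flagged;
    }
}
```

A check like this over an app's manifest would show why the install-time model is coarse: the same permission sets legitimately requested by benign apps also enable the stealthy operations above.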
Second, starting from Android OS 6.0 (Marshmallow) and in other mobile operating systems, such as Apple iOS and Windows Phone OS, users are prompted with access requests *the first time* apps request access to I/O devices. Researchers have shown that, while these prompts attempt to verify users’ intent, in practice, they create an excessive burden on users, which leads to users ignoring these prompts eventually[^3]. In addition, these mobile operating systems allow users to manage permission grants at runtime by accessing a per app or per I/O device permission control panel. This feature allows for better flexibility in permission granting, since it is now possible to revoke, at runtime, permissions granted to apps at install time or first use.
Unfortunately, neither of these mechanisms ensures that sensitive operations targeting on-board I/O devices are performed *only* in response to users' interaction with running apps, which *must unmistakably bind the user's consent to specific security-sensitive operations targeting an I/O device*. In the absence of such binding, malicious apps are free to leverage I/O devices, even while running as background services, once they trick users into granting them permissions, as shown in Figure \[fig:timeline\].
![Effect of the lack of binding between users’ interaction and operations performed by running apps[]{data-label="fig:timeline"}](timeline){width="80mm"}
Challenges in Preventing RAT Exploits
-------------------------------------
Researchers have identified several attacks targeting on-board I/O devices [@templeman; @schlegel], and proposed various defense mechanisms [@roesner2012user; @roesner2014world; @smalley2013security; @AndroidEnh]. Unfortunately, significant attacks are still capable of circumventing the proposed defenses. Furthermore, most anti-malware tools available for smartphones are unable to identify apps behaving as RATs, especially if they are not advertised and commercialized as spying tools on the Web[^4]. Similarly, current static analysis tools [@VirusTotal; @chen2015finding; @tam2015copperdroid; @bouncer] designed to analyze apps' source code are often unable to find stealthy operations targeting I/O devices[^5].
On one hand, several attempts have been made, in the past, to include users or surrounding environments in the access control decision mechanism. Unfortunately, User-Driven Access Control (UDAC) [@roesner2012user] is subject to social-engineering attacks [@bianchi2015app], whereas World-Driven Access Control (WDAC) [@roesner2014world] limits the objects that may be recorded, but does not enable users to control when apps use I/O devices for recording. However, solutions to prevent social-engineering attacks [@bianchi2015app] currently require users to pay attention to the security status of an operation throughout its execution, rather than just at the beginning and end. On the other hand, researchers have also proposed methods to augment contemporary access control models based on the Android Permission mechanism [@AndroidDoc; @felt2012] and SE Android [@smalley2013security; @AndroidEnh]. Android Security Modules (ASM) [@nadkarni2014asm] enable apps to assist in security decision-making, but unfortunately the apps performing operations are the very adversary we must control; whereas the Android Security Framework (ASF) [@backes2014android], which operates at the kernel level, lacks the information about higher-level events required to detect security-sensitive operations performed by processes running apps. Additionally, solutions that leverage hardware support for app isolation, such as Samsung Knox [@knox], would prevent apps in one domain from stealing sensitive data from apps in other domains, but RAT apps can still perform stealthy operations within their own domain.
Despite the various efforts above, several open challenges remain to be addressed:
- Once apps obtain permission, at *install time* or at *first use*, they may stealthily access sensitive I/O devices *at any time*.
- Mobile operating systems lack a method to [*connect user interactions with security-sensitive operations*]{} targeting on-board I/O devices for controlling access to such operations.
- Apps may use *social-engineering* techniques [@soc_eng; @anderson2015supporting; @bianchi2015app] to hijack user-intended activities and trick users into authorizing undesired operations.
In addition, any effective solution to these problems must require only user input and attention consistent with normal application use.
Background
==========
In this paper, we focus our attention on Android OS due to its open-source nature, availability, and popularity [@market]. Similar considerations hold true for other mobile operating systems, such as Apple iOS and Windows Phone OS. In the following subsections, we provide background information useful to understand the mechanisms proposed in this paper.
I/O Device Management in Android {#io_background}
--------------------------------
We briefly describe how processes obtain access to on-board I/O devices in Android OS. For performance and security reasons, only the Linux kernel has direct access to on-board I/O device drivers. The Hardware Abstraction Layer (HAL) implements an interface that allows system services (privileged Android processes) to indirectly access on-board I/O devices via well-defined APIs. SE Linux for Android guarantees that only system services can access on-board I/O devices at runtime. Thus, apps must communicate with system services, through the Binder mechanism [@AndroidDoc], to request execution of specific operations targeting on-board I/O devices. The system services designed to handle requests from apps then execute the operations on behalf of the requesting processes if and only if the necessary permissions have been granted to the requesting apps by the user or operating system. Permissions are validated by the *Package Manager*, part of Android OS, each time apps request to perform operations targeting I/O devices. For instance, for operations targeting the microphone, the *Package Manager* checks the app's `AndroidManifest.xml` file to verify that the RECORD\_AUDIO permission has been granted to the requesting app. If the required permissions are not granted, the *Package Manager* throws a security exception to signal that the operation has been aborted. Android services do not require a permission check for apps to access the screen buffer.
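The permission-validation path above can be sketched in plain Java. This is a simplified model, not the actual Package Manager code: a gate consults the set of permissions granted in the app's manifest and throws a `SecurityException` when the required permission (here the real Android permission name `RECORD_AUDIO`) is missing.

```java
import java.util.Set;

// Simplified model of the permission check a system service performs before
// executing an operation on behalf of an app.
public class PermissionGate {
    private final Set<String> grantedPermissions;  // from AndroidManifest.xml

    public PermissionGate(Set<String> grantedPermissions) {
        this.grantedPermissions = grantedPermissions;
    }

    // Throws a SecurityException if the required permission was not granted,
    // mirroring how the real service signals that the operation is aborted.
    public void enforce(String requiredPermission) {
        if (!grantedPermissions.contains(requiredPermission)) {
            throw new SecurityException(
                "Permission denial: requires " + requiredPermission);
        }
    }
}
```

Note that, as the text observes, no analogous check exists for screen-buffer access, which is why screenshots can be taken without any permission at all.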
Android UI Graphical Elements
-----------------------------
The Android User Interface is composed of three main graphical elements, as depicted in Figure \[fig:screen\] (A). Different manufacturers may adopt different layouts, but all distributions of the Android OS share the same three main elements.
\[t!\] ![Android User Interface Graphical Elements and Gestures to bring back the Navigation and Status Bars into view when apps are in full screen mode[]{data-label="fig:screen"}](screen "fig:"){width="80mm"}
The *Status Bar* shows the device’s state, such as battery level and network connectivity. The *Navigation Bar* includes three navigation buttons to interact with all currently running apps and the home screen. The *Activity Window* is the only portion of the screen on which processes running apps can draw graphical elements, such as *Activities* and *Views* inside activities. All activities created by apps are drawn in the system-managed *Activity Window*. Activities are organized in a stack managed by the *ActivityManager* system service, the only process allowed to manage these three main graphical elements. The only activity shown to the user is the one on top of the stack, although previously displayed activities might remain visible if the new activity on top only partially covers the *Activity Window*.
The *Activity Window* becomes the only graphical element visible on screen when apps go in *Full Screen* mode. Starting from version 4.4, the Android OS offers apps two approaches to go full screen: *Lean Back*[^6] and *Immersive*[^7]. In both approaches, all persistent system bars are hidden. The difference between them is how the user brings the bars back into view, as shown in Figure \[fig:screen\] (B). In *Lean Back* mode, users can bring back the bars by simply touching the screen anywhere, whereas in *Immersive* mode they must swipe from any edge where a system bar is hidden. These gestures are easy and intuitive; an explanatory message appears on screen the first time the app goes full screen.
Security Model
==============
When designing , we focus on defending against any process running an app that has permissions to access security-sensitive, on-board I/O devices (e.g., camera, microphone, and touch screen) based on the following threat model.
We assume that adversaries have no control over the operating system (e.g., Linux kernel and Android OS) or Android system apps (e.g., SystemUI) and services (e.g., Binder, AudioSystem, MediaServer, SensorManager, InputManager and WindowManager). Next, we assume that SEAndroid enforces mandatory access control over access to on-board I/O devices, preventing unauthorized access from apps containing native code. Therefore, we assume that only system services can access on-board I/O devices, indirectly, through the Java Native Interface (JNI) [@JNI] to the Linux kernel. Further, we assume apps can access I/O devices *only* through APIs provided by the standard Android SDK [@AndroidDoc]. Finally, we assume that Android system services enforce access control policies only at the time data would be collected from an I/O device. We aim to control whether apps can receive data produced by on-board I/O devices, but provide no guarantees after an app has been granted access to the data itself.
Adversaries in control of apps may pose threats by executing the following operations. First, adversarial apps may request security-sensitive operations on on-board I/O devices, including accessing the device's camera to take pictures or record video, the microphone to record users' voices or the surrounding environment, and the screen to steal security-sensitive information displayed to users. Additionally, adversarial apps may alter the display to launch social-engineering attacks that trick users into consenting to operations they do not want, by overlaying a frame buffer over another app's or system component's display, or by displaying a request for one operation and then performing a different one.
Design Overview {#overview}
===============
Operation
---------
An overview of the design is shown in Figure \[fig:overview\]. In the overview description, we use Android OS as reference, similar considerations hold for other mobile operating systems available on the market. Typically, a process $Prc$, running an Android app, sends a request to perform an operation $Opr$ using a specific I/O device $Dev$ (step ). An example could be a request from an app ($Prc$) to access the front camera ($Dev$) and take a photo ($Opr$). The request is received by a system service $Srv$ (e.g., the MediaServer for the camera), one of the privileged processes in charge of authorizing and processing access requests to system resources. At first, the conventional access control mechanisms are activated (step ). For instance, in Android OS, the Android permissions and the SE Android access control policy checks are activated. If the result of the conventional access control enforcement is a denial, then the system service $Srv$ is notified of the security exception, which in turn notifies the requesting process $Prc$ of the access denial (step ). Otherwise, if the result of the conventional access control enforcement is a grant, is activated in order to implement its additional access control conditional checks.
\[t!\] ![Overview of the Design[]{data-label="fig:overview"}](overview "fig:"){width="75mm"}
The Conditional Engine collects information about the process $Prc$ requesting a certain operation $Opr$ over a target device $Dev$ (step ) through the system service $Srv$. Based on this information, the Conditional Engine then identifies and selects, from the Conditional Rule Store, the set of conditional rules $Cnds$ that must be satisfied to allow operation $Opr$ to be performed on behalf of process $Prc$, over the target device $Dev$, by the system service $Srv$ (step ). Subsequently, collects the measurements necessary to evaluate whether the selected conditional rules are satisfied (step ). Measurements can involve readings from on-board I/O devices and sensors, or system events necessary to verify specific environmental conditions. If and only if *all* the selected conditional rules $Cnds$ are satisfied, then generates an authorized session during which the system service $Srv$ is authorized to perform the security-sensitive operation $Opr$, over the target device $Dev$, on behalf of process $Prc$ (step ). During authorized sessions, verifies that all conditional rules remain satisfied during the entire session and notifies users about ongoing operations (step ). Whenever *any* of the selected conditional rules is not satisfied, aborts the operation $Opr$ and notifies the service $Srv$ about the set of unsatisfied conditional rules via a denial message (step ).
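The authorization lifecycle described above can be sketched as follows. All class and method names here are our own illustrative labels, not the framework's actual code: preconditions gate session creation, ongoing conditions are re-evaluated while the session runs, and any violation aborts the operation.

```java
import java.util.function.Supplier;

// Minimal sketch of the Conditional Engine lifecycle: a session is created
// only if every precondition holds; ongoing conditions are re-checked during
// the session; a violation aborts the session.
public class ConditionalEngine {
    public static final class Session {
        private boolean active = true;
        public boolean isActive() { return active; }
    }

    // Create an authorized session only if every precondition is satisfied.
    @SafeVarargs
    public final Session authorize(Supplier<Boolean>... preconditions) {
        for (Supplier<Boolean> c : preconditions) {
            if (!c.get()) return null;  // denial: unsatisfied precondition
        }
        return new Session();
    }

    // Re-evaluate ongoing conditions; abort the session on any violation.
    @SafeVarargs
    public final boolean recheck(Session s, Supplier<Boolean>... ongoing) {
        for (Supplier<Boolean> c : ongoing) {
            if (!c.get()) { s.active = false; return false; }
        }
        return true;
    }
}
```

Modeling the conditions as suppliers mirrors the design's separation between the rule store (which rules apply) and the measurements (whether they currently hold).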
\[tab:rules\]

*Preconditions*

- User interacts with process *Prc* to request operation *Opr* targeting device *Dev*
- Process *Prc* requests service *Srv* to perform operation *Opr* over device *Dev*
- User is aware of what operation *Opr*, targeting device *Dev*, is going to be performed by service *Srv* on behalf of process *Prc*
- User approves operation *Opr* on behalf of process *Prc* targeting device *Dev*

Security properties ensured: all app requests to perform security-sensitive operations targeting on-board I/O devices must be authorized; security-sensitive operations performed using on-board I/O devices match a user's consenting action.

*Ongoing Conditions*

- User has continuous visibility of operation *Opr* performed on behalf of process *Prc* over device *Dev*
- The authorized session *Ses*, for process *Prc* to execute operation *Opr* over device *Dev*, is logged to allow retroactive actions

Security properties ensured: ongoing security-sensitive operations targeting I/O devices are always visible to users; ongoing security-sensitive operations targeting I/O devices are always logged.

*Exit Conditions*

- The authorized session *Ses*, relative to process *Prc*, terminates
- Termination of session *Ses* is logged
- User has visibility that session *Ses* has been terminated

Security property ensured: all ongoing security-sensitive operations targeting I/O devices are visible to the user as long as they run.
Conditional Engine
------------------
Conditional Engine uses *conditional rules* to authorize app requests to I/O devices and maintain security state during any authorized sessions. To verify the satisfaction of conditional rules, is designed to collect inputs from I/O devices, sensors and system services at run-time, by using additional hooks placed inside the Android framework. conditional rules are of three types: (1) *Preconditions* that must be satisfied before an operation targeting I/O devices is authorized; (2) *Ongoing Conditions* that must be satisfied while the authorized operations targeting I/O devices are being performed, until their completion; and (3) *Exit Conditions* that must all be satisfied once authorized operations terminate due to users' actions or runtime exceptions.
*Preconditions* ensure two security properties. All app requests to perform security-sensitive operations targeting on-board I/O devices must be authorized. This security property prevents processes from performing such operations stealthily. Security-sensitive operations performed using on-board I/O devices match a user's consenting action. This property ensures that every such operation is initiated by a user action. *Ongoing Conditions* ensure two additional security properties. Ongoing security-sensitive operations targeting I/O devices are always visible to users. This security property enables users to check that the authorized operation is what they expected, reducing the possibility of undetected social-engineering attacks. Ongoing security-sensitive operations targeting I/O devices are always logged. Such logging enables users to examine the progress of ongoing security-sensitive operations, which may enable termination of an ongoing operation deemed suspicious or retrospective analysis of past operations. *Exit Conditions* ensure one more security property. All ongoing security-sensitive operations targeting I/O devices are visible to the user as long as they run. This property ensures that apps cannot keep operations running after the user terminates them and that the user interface removes only terminated operations from the display. The conditional rules and security properties mentioned above are summarized, for future reference, in Table \[rules\]. To express conditional rules, the Usage Control (UCON) model [@park2004ucon] could be adopted.
Design
======
We present the design in terms of four mechanisms necessary to fulfill the five security properties.
Mediation of Access Requests {#mediate}
----------------------------
Complete mediation of *all* requests to access I/O devices from processes running apps, and matching each request to a user input corresponding to the app request, is necessary to ensure that only authorized app requests are run, guaranteeing . Mediation involves several system services, such as those controlling gestures on screen or handling requests from running processes targeting on-board I/O devices and sensors. Users' interaction events have to be aggregated and mapped to requests instantiated by processes running apps. The SE Android reference monitor must be extended to ensure [*complete mediation*]{} of all security-sensitive operations targeting I/O devices [@monitor]. Identifying the right locations to place additional hooks, so that every access to I/O devices is mediated, is challenging.
Complete mediation of accesses to I/O devices can be achieved in two ways: (1) by placing hooks inside the Android framework and libraries, or (2) by placing hooks inside the Linux kernel and I/O device drivers. Since I/O devices are low-level system resources, the kernel and device drivers may seem the most appropriate place to add mediation hooks. However, two main issues arise from this approach: (1) low-level hooks would not have the information required to map requests to the processes running apps, because requests are always handled by system services on behalf of the requesting processes; and (2) mobile platforms are equipped with different I/O devices, which would require the operating system to support customized hooks defined for different drivers by driver vendors.
In , hooks are placed at the Android framework and libraries level to avoid the issues mentioned above. *Hooks* provide complete mediation because system services are the only path through which processes running apps can access I/O devices and sensors, due to the Android framework architecture and the MAC rules enforced by SE Android [@AndroidEnh]. We have dynamically analyzed the Android framework and libraries code relative to the SDK APIs handling accesses to I/O devices and sensors, to validate complete mediation and check that every access to the I/O devices and sensors is captured by one of the 18 hooks introduced. Retaining such logging could be used to detect errors, if any exist. Callbacks from hooks inform the Conditional Engine about users interacting with processes running apps and requesting operations over I/O devices. These callbacks are used to validate precondition. *Hooks* also capture resource acquisition and release by system services operating on behalf of processes running apps. Callbacks from these hooks are used to validate precondition and exit condition. Satisfying these conditions is sufficient to reliably bind users' interactions with app requests to operate over I/O devices, therefore guaranteeing .
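The binding between user interactions and app requests can be modeled as follows. This is a plain-Java sketch under our own assumptions (the class names and the 2-second freshness window are illustrative): UI hooks report user-initiated operations, and a request from a process is authorized only if a matching interaction was observed recently.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of request-to-interaction binding: UI hooks report user-initiated
// operations; service hooks then authorize an app request only if a matching
// interaction from the same process, for the same operation and device, was
// observed within a recent time window.
public class RequestBinder {
    private static final long WINDOW_MS = 2000;  // illustrative freshness window

    private static final class Interaction {
        final int pid; final String op; final String dev; final long timeMs;
        Interaction(int pid, String op, String dev, long timeMs) {
            this.pid = pid; this.op = op; this.dev = dev; this.timeMs = timeMs;
        }
    }

    private final List<Interaction> interactions = new ArrayList<>();

    // Called by UI hooks when the user initiates an operation.
    public void reportInteraction(int pid, String op, String dev, long nowMs) {
        interactions.add(new Interaction(pid, op, dev, nowMs));
    }

    // Called by service hooks on an app request: deny any request with no
    // recent matching user interaction (i.e., a stealthy request).
    public boolean authorize(int pid, String op, String dev, long nowMs) {
        for (Interaction i : interactions) {
            if (i.pid == pid && i.op.equals(op) && i.dev.equals(dev)
                    && nowMs - i.timeMs <= WINDOW_MS) {
                return true;
            }
        }
        return false;
    }
}
```

A background service that never received a user interaction simply has no matching entry, so its requests are denied regardless of which permissions were granted at install time.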
Visibility over Sensitive Operations {#secure_message}
------------------------------------
Secure display of operations targeting I/O devices when they are requested, when they are ongoing, and when they are terminated is necessary to fulfill multiple guarantees. First, by maintaining visibility of operations after they are authorized, users may identify undesired operations approved by mistake, guaranteeing . Furthermore, ensuring that operations are visible to users as long as they run guarantees that there are no stealthy ongoing operations on I/O devices . Visibility over accesses to I/O devices by running apps may be provided to users in four different ways: (1) via notification lights, similar to those used for cameras on laptops or external USB cameras; (2) by playing a distinctive sound, similar to the shutter sound produced when taking a photo; (3) by displaying notification icons, similar to the location icon shown on the status bar; and (4) by visualizing alert messages on screen. Unfortunately, notification lights, sounds, or notification icons can only alert users about accesses to sensitive I/O devices; they cannot convey exact information about the operations performed, the target devices, and the processes responsible for such operations. Furthermore, sounds might not be audible in silent or vibrate mode. A better way to convey complete information about operations performed over I/O devices by running processes is to display on-screen alert messages to users.
Solutions that make use of the *Activity Window* portion of the screen to display access notifications or alert messages are subject to user deception attacks, where screen overlays are used by malicious apps to surreptitiously replace, or mimic, the GUI of other apps and mount social-engineering attacks, or otherwise mislead the user's perception of ongoing operations. avoids this problem by displaying *Security Messages* to users on the *Status Bar*, a reserved portion of the screen drawable only by the *WindowManager* system service, a privileged process part of the Android OS. A similar approach has been adopted by Bianchi *et al.* [@bianchi2015app], where the *Navigation Bar* hosts a security indicator as a solution against User Interface (UI) deception. However, Bianchi's solution, as it is, cannot be adopted to provide visibility to users about operations targeting I/O devices, or to automatically prevent operations programmatically initiated by processes running apps without user interaction. In fact, Bianchi's solution does not provide the mechanisms necessary to bind user interaction to access requests for I/O devices from processes running apps.
uses *Security Messages* displayed on the *Status Bar* to convey, to users, two types of events: (1) *pending operations* initiated by users; and (2) status feedback about *ongoing operations* authorized by users. A *Security Message* includes the app identifier (e.g., app icon or name) and a text message specifying the target operation. The first type of message makes users aware of the operation resulting from the interaction with a soft-button displayed by an app on screen. For example, in Figure \[fig:messages\] (A), if the user presses the button depicting a camera, the *Security Message* specifies that the Instagram app will take a photo using the smartphone front camera[^8]. The second type of message informs users about ongoing authorized sessions targeting on-board I/O devices. For example, in Figure \[fig:messages\] (B), a *Security Message* informs the user that the Google Voice Search app is using the microphone to listen for voice commands. If multiple operations are simultaneously targeting different I/O devices, the *Security Message* alternates between them to make users aware of all the ongoing operations.
*Security Messages* are used to validate the precondition, ongoing condition and exit condition. Satisfying these conditions is sufficient to reliably provide users with visibility over sensitive operations targeting I/O devices, thereby guaranteeing the corresponding security properties.
![Security Messages displayed on the Status Bar[]{data-label="fig:messages"}](messages "fig:"){width="60mm"}\
*Security Messages* are always visible to users, even when apps run in full-screen mode. Upon user-initiated operations targeting I/O devices (e.g., pressing a soft-button to take a photo), Aware systematically reactivates the *Status Bar* to display a *Security Message* specifying the pending operation. Thus, any attempt by malicious apps to draw a fake *Status Bar* with a fake *Security Message* would fail, since the original *Status Bar* is always drawn on screen.
Eliciting User Input for Approval {#user_input}
---------------------------------
Requiring users to approve or abort pending app requests for operations on I/O devices, by providing input through GUI elements, guarantees that each access reflects explicit user consent. On-screen prompts could be used to request approval every time I/O devices are accessed by an app process in response to user-initiated interactions. However, while prompts attempt to verify users' intention, in practice they create an excessive burden on users, which eventually leads users to ignore them[^9]. Therefore, prompting users on every I/O device access seems unreasonable.
To avoid excessive burden on users and, at the same time, enforce per-access approval, Aware uses a *Gesture Identification* mechanism to identify specific sequences of gestures performed by users on the smartphone's screen. Gestures are intercepted and analyzed in real time to infer users' intention when interacting with apps. Captured gestures are mapped to the ongoing operations performed by apps running in the foreground. User-initiated interactions are combined with the *Security Message* on screen, as depicted in the state machine diagram in Figure \[fig:stm\]. With Aware, operations targeting I/O devices can *only* be initiated by the user pressing and holding down a soft-button on screen. Upon a user-initiated operation, Aware displays the pending operation in a *Security Message* for a preset time period, after which the operation is aborted in the absence of user interaction[^10]. After reading the *Security Message*, users can confirm the operation by simply releasing the soft-button, or abort the pending operation by sliding their finger out of the soft-button area, as shown in Figure \[fig:user\_input\] (A).
![State Transition Diagram for Security Messages based on User-Initiated Interactions[]{data-label="fig:stm"}](stm){width="80mm"}
![Alternative Approaches to retrieve Users Input[]{data-label="fig:user_input"}](user_input){width="80mm"}
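The press-hold-release flow just described can be modeled as a small state machine. The following Python sketch is illustrative only: the state names, the timeout value and the event vocabulary are our assumptions, not identifiers from the actual framework.

```python
# Simplified model of the user-initiated approval flow: an operation
# becomes PENDING when the user presses and holds a soft-button, is
# CONFIRMED when the finger is released over the button, and is
# ABORTED on slide-out or timeout.

IDLE, PENDING, CONFIRMED, ABORTED = "idle", "pending", "confirmed", "aborted"

TIMEOUT_S = 5.0  # illustrative preset; the real period is configurable

def step(state, event, elapsed=0.0):
    """Advance the approval state machine by one event."""
    if state == IDLE and event == "press_hold":
        return PENDING            # Security Message is displayed
    if state == PENDING:
        if elapsed > TIMEOUT_S:
            return ABORTED        # no interaction within the window
        if event == "release_on_button":
            return CONFIRMED      # user approves the pending operation
        if event == "slide_out":
            return ABORTED        # user explicitly denies
    return state                  # all other events are ignored

# A user confirms after reading the Security Message:
s = step(IDLE, "press_hold")
s = step(s, "release_on_button", elapsed=1.2)
assert s == CONFIRMED
```

Modeling the flow this way makes explicit that confirmation requires an affirmative release gesture, so an app can never drive the transition to CONFIRMED programmatically.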
We could have designed Aware to provide two different areas of the screen where users place their finger to either deny or allow operations, as shown in Figure \[fig:user\_input\] (B). This solution, however, would be subject to social-engineering attacks, because the two areas would appear in the *Activity Window*, allowing malicious apps to overlay fake messages and swap the deny area with the allow area to trick users into allowing an operation.
The *Gesture Identification* mechanism also supports an alternative method that uses the fingerprint scanner to authenticate users interacting with the smartphone. Users scan their finger to confirm pending operations displayed in *Security Messages*, as illustrated in Figure \[fig:user\_input\] (C). Aware interprets the absence of the specific sequence of gestures as an operation not matching the user's intention and volition, and therefore blocks and logs attempts by malicious apps to programmatically activate security-sensitive operations targeting on-board I/O devices. The *Gesture Identification* mechanism is used to validate the remaining precondition which, in conjunction with the conditions above, is sufficient to guarantee that operations performed over on-board I/O devices match users' intention and volition.
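The binding between a user gesture and an app's subsequent I/O request can be sketched as a per-access check: a request is allowed only if it matches the operation the user just initiated on screen. This is a minimal illustrative model; the per-app pending table, function names and app identifiers are our assumptions, not the framework's actual implementation.

```python
# Sketch: bind an app's I/O request to the operation the user actually
# initiated via a gesture on a soft-button.

pending = {}  # app_id -> operation the user initiated on screen

def on_user_gesture(app_id, operation):
    """Record the operation implied by the user's confirmed gesture."""
    pending[app_id] = operation

def authorize(app_id, requested_op):
    """Allow only requests matching the user-initiated operation."""
    if pending.get(app_id) == requested_op:
        del pending[app_id]       # one gesture authorizes one access
        return True
    return False                  # blocked and logged as a violation

on_user_gesture("com.example.camera", "take_photo")
assert authorize("com.example.camera", "take_photo") is True
# The same app then stealthily requests the microphone: blocked.
assert authorize("com.example.camera", "record_audio") is False
# A background app with no gesture at all: blocked.
assert authorize("com.example.rat", "take_photo") is False
```

The last two checks illustrate the two attack classes discussed above: a consented operation piggybacked with an unapproved one (SimpleFilters-style), and a purely programmatic request from a background service.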
For users willing to lower the security of their mobile platforms, Aware allows the *Gesture Identification* mechanism to be disabled per app, or when a remote controller (e.g., a Bluetooth selfie stick) is used. However, we discourage whitelisting apps, even after a certain period of usage, because apps can dynamically change their behavior over time due to automatic, periodic software updates. Furthermore, apps can ask other apps to perform specific operations targeting I/O devices through the intent mechanism; thus, a whitelisted app could be tricked into serving a request coming from a malicious app.
Supporting Retrospective Actions {#logs}
--------------------------------
The requirement to log actions occurring during the execution of operations targeting I/O devices enables retrospective auditing by users. To support such retrospective actions, Aware generates three types of access logs for security-sensitive operations targeting on-board I/O devices. First, Aware logs any failed attempt by running processes to access I/O devices due to the lack of the conditions required to allow the requested operation. These logs are accessible in the *Blocked Accesses* section, shown in Figure \[fig:logs\], and allow users to identify apps that attempt to perform stealthy operations while running as background services. Second, Aware logs any operation denied by users through the *Gesture Identification* mechanism. These logs are accessible in the *Denied Accesses* section, shown in Figure \[fig:logs\], and allow users to identify apps using social-engineering techniques to trick them into authorizing undesired operations. Third, Aware logs any operation over I/O devices authorized by users through the *Gesture Identification* mechanism, allowing users to track authorized operations.
To better catch users' attention, attempted access violations are signaled by Aware by producing a sound and showing a *Security Message* describing the undesired behavior of the running app. The *Logs* can be accessed by users at any time, from the app menu or by tapping on the *Security Message* displayed on the *Status Bar*. Each log entry reports the app ID, date, time and operation performed by the app, as shown in Figure \[fig:logs\].
![On-Board I/O Devices Access Logs[]{data-label="fig:logs"}](logs){width="75mm"}
*Logs* allow users to perform two retrospective actions: (1) uninstall apps identified as malicious, and (2) revoke granted permissions to prevent future undesired accesses[^11]. Retrospective actions can be taken by users either immediately, while the I/O device is still being used by an app, to block undesired operations, or later, after reviewing what apps have been doing over time.
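A minimal sketch of the three log categories and how they support retrospective review follows; the field names, entry structure and app identifiers are illustrative assumptions, not the framework's actual log format.

```python
from datetime import datetime

# The three access-log categories described above:
BLOCKED, DENIED, AUTHORIZED = "blocked", "denied", "authorized"

def log_entry(app_id, device, operation, category):
    """One access-log record: app ID, device, operation, category, time."""
    return {
        "app": app_id, "device": device, "operation": operation,
        "category": category, "time": datetime.now().isoformat(),
    }

logs = [
    log_entry("com.example.rat", "camera", "take_photo", BLOCKED),
    log_entry("com.example.filters", "microphone", "record_audio", DENIED),
    log_entry("com.example.social", "camera", "take_photo", AUTHORIZED),
]

def suspicious_apps(logs):
    """Apps with blocked or denied accesses are candidates for
    retrospective action (uninstall or permission revocation)."""
    return {e["app"] for e in logs if e["category"] in (BLOCKED, DENIED)}

assert suspicious_apps(logs) == {"com.example.rat", "com.example.filters"}
```

Separating blocked (no user interaction at all) from denied (user explicitly aborted) preserves the distinction between background stealthy attempts and social-engineering attempts that the user caught.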
The *Log* mechanism is used to validate the ongoing condition and the exit condition. Satisfying these conditions is sufficient to allow users to perform retrospective actions over sensitive operations targeting I/O devices.
User Study and Aware Evaluation
===============================
Study Objectives
----------------
We performed a comprehensive laboratory-based survey, user study, and set of system experiments with the following five objectives. First, we survey users' privacy and security attitudes as they pertain to the malicious use of I/O devices, and investigate users' awareness of RAT attacks. Second, we observe users' vigilance during a series of interactive tasks while RAT attacks targeting on-board I/O devices are deployed and the Aware defense mechanisms are not active. Third, we investigate the effectiveness of Aware from two perspectives. Initially, during another series of interactive tasks, we observe whether users notice and can adequately respond to customized social-engineering attacks when those can be thwarted with the user interface components of Aware. We then debrief users about their experience with Aware. Fourth, we measure whether Aware effectively and systematically shields users from practically deployed RAT attacks while they perform the second series of interactive tasks. Fifth, we measure the performance overhead Aware incurs on the critical paths of processing app requests.
\[task\]
[|c|l|l|l|l|l|c|]{} **Task** & **Task Description** & **App Used** & **Attack Source** & **Attack Description** & **Perceivable** & **Detection Rate**\
T1 & Take a picture & Instagram (B) & None & N/A & N/A & N/A\
T2 & Take a video & Fideo (B) & None & N/A & N/A & N/A\
T3 & Record a voice message & Messenger (B) & None & N/A & N/A & N/A\
T4 & Record a video message & Skype (B) & None & N/A & N/A & N/A\
T5 & Record the device screen & Rec. (B) & None & N/A & N/A & N/A\
T6-A1 & Navigate Internet & Chrome (B) & Krysanec (M) & Stealthy Photo & Camera Shutter Sound & 18%\
T7-A2 & Watch a Video & YouTube (B) & Soundcomber (M) & Stealthy Voice Recording & None & 0%\
T8-A3 & Add a new contact & Contacts (B) & Dendroid (M) & Stealthy Video Recording & UI Slow Down & 1%\
T9-A4 & Send email & Gmail (B) & PlaceRaider (M) & Stealthy Photos & UI Slow Down & 0%\
T10 & Take a screenshot & None\* & None & N/A & N/A & N/A\
T11-A5 & Record a video & SimpleFilters $\dagger$ & SimpleFilters $\dagger$ & Stealthy Screenshot & Security Message Mismatch & 82%\
T12-A6 & Take a photo & SimpleFilters $\dagger$ & SimpleFilters $\dagger$ & Stealthy Voice Record & Security Message Mismatch & 76%\
T13 & Analyze Logs & Logs (B) & None & N/A & N/A & N/A\
\
[(B) Benign App (M) Malicious App \* Performed by pressing the power and volume-down physical buttons at the same time\
$\dagger$ SimpleFilters appears as a benign app, but includes functionality to run additional I/O operations beyond those consented by users]{}
Study Components
----------------
The study has three survey components, two interactive user task sequences, and one group of measured tasks. We obtained IRB approval at our institution.
*Surveys:* Individuals completed an initial questionnaire with demographic questions and questions about their usage of mobile platforms. A second survey debriefed participants about the first series of interactive tasks, performed using an off-the-shelf Android smartphone, and investigated their privacy and security attitudes. A third survey debriefed participants about the second series of interactive tasks, performed using a smartphone running Aware, and their perceptions of Aware. The surveys included standard Likert-type psychometric scale questions (e.g., to measure attitudes) as well as open-ended question formats (e.g., to solicit participants' experiences during the interactive tasks). Surveys were implemented on Qualtrics and executed on a lab computer.
*Interactive Tasks:* In the first series of interactive tasks, we studied participants' potential reactions to practically deployed RAT attacks. Participants were asked to interact with a Nexus 5 smartphone, running a vanilla version of the Android OS (6.0.1\_r25), and to perform 9 tasks ranging from taking a picture with the smartphone's camera to sending an email. The first 5 tasks (T1-T5), summarized in Table \[task\], were not associated with any RAT attacks. Tasks T6-T9 were associated with 4 different, visibly noticeable attacks (A1-A4), also summarized in Table \[task\]. These attacks were carefully triggered by the experimenter while participants engaged in the interactive tasks, and varied in the degree to which they are perceivable, as highlighted in Table \[task\]. Note that individuals were not explicitly instructed about the presence of RAT attacks before the tasks; however, we asked them to report unusual behaviors verbally and in the survey.

Before the second series of interactive tasks, subjects were debriefed about the previously experienced attacks. We further familiarized them with the Aware system through instructional materials and by allowing them to inspect the user interface on a Nexus 5X smartphone. As before, the participants were engaged in several interactive tasks. First, participants engaged in tasks T1-T5 and T10, which did not present any *noticeable* RAT behaviors. Then, in tasks T11 and T12, we used a test RAT app to investigate users' responses to attacks where the user consented to one action but the app performed additional, unapproved actions. These social-engineering attacks (A5 and A6) aimed to trick users into executing unwanted I/O operations, such as recording voice when they consented to the app taking a photo. Using the security message and gesture mechanisms, we investigated whether participants could notice and thwart the attacks. The series of tasks concluded with T13, which did not include any attacks.
We did not brief individuals about which tasks were associated with attacks or which I/O devices would be targeted. We summarize all 13 tasks and associated attacks in Table \[task\]. We recorded participants' interactive behaviors and their survey responses. In addition, we applied a think-aloud protocol by encouraging participants to speak about their experiences while engaged in the tasks.
*Measurement Tasks:* During the second series of tasks, participants were still exposed to attacks programmatically and persistently triggered by the four RAT apps used in our study (i.e., Krysanec, Soundcomber, Dendroid, and PlaceRaider). The Aware system was expected to shield the user from these attacks, so no *noticeable* effects should have been observable by the participants. As such, the purpose of the measurement tasks is to evaluate whether Aware effectively and automatically shields participants *while they engage in realistic user interaction behaviors*. All participants completed the entire set of study components in about 25-35 minutes and were compensated with a \$10 gift card.
Results
-------
*Participants:* In total, 74 participants completed the whole set of surveys and tasks. The majority of the sample was between 20-29 years old (76%). We recruited predominantly undergraduate and graduate students, the majority having an international background (70%) and fields of study *different* from computer science (75%). Most participants actively used smartphones (99%) and additional devices associated with third-party apps, such as tablets (54%) and, to a lesser extent, smart watches and fitness bands (10%).
*Privacy and Security Attitudes:* We asked participants how concerned they are about threats to their personal privacy when using smartphones, and found that 43% were moderately or extremely concerned. Participants were even more concerned about privacy and security aspects related to third-party apps (57%). Most important to our study, concern levels were high for the misuse of smartphones' camera (62%) and microphone (55%).
*Awareness of RAT Attacks:* The majority of the participants stated that they were aware that apps could access the camera (56%) and the microphone (56%) of their smartphones at any time without repeatedly asking for consent. However, participants had little knowledge of specific RATs that exploit smartphones' I/O devices, such as Dendroid and SoundComber (4% each), or Krysanec and PlaceRaider (3% each). A small number of participants (8%) were able to articulate how malicious apps apply social-engineering techniques to mislead users into taking an action. Only 24% of the participants use a mobile anti-malware product, whereas 78% of subjects stated that they avoid downloading apps from unofficial app stores.
*Attack Detection without Aware:* The attacks in the first series of interactive tasks varied in the degree to which *risk signals* in the vanilla Android system are perceivable by a user. A1 was associated with the camera shutter sound emitted when the Krysanec malware took a stealthy photo while participants were browsing the Internet. Only 18% correctly noticed that a camera shutter sound was audible; 8% incorrectly thought that a screenshot was taken; 4% merely noticed a sound. Not a single participant stated any suspicion, in the survey or the think-aloud comments, that malware or a security problem could be responsible for the sound. Two participants questioned whether “something” or “someone was taking a screenshot”. Only one participant noticed the UI slow-down (i.e., a perceived glitch in the UI) in task T8-A3, when a stealthy video recording was initiated by the Dendroid app. None of the participants identified any suspicious activity when involved in attack tasks T7-A2 and T9-A4. An overview of the detection rates for the RAT attacks is available in Table \[task\].
In summary, audible risk signals (associated with the built-in camera in A1) are noticed by at least some participants, but their ability to interpret the risk and to identify, or even question, the cause of the unexpected behavior is highly limited. State-of-the-art malware apps engaging in stealthy voice and video recordings remained almost entirely unnoticed. PlaceRaider (A4), which also takes stealthy pictures but sidesteps audible risk signals, was not noticed by any participant.
*Effectiveness of Aware:* Using Aware substantially simplifies the problem space of unwanted I/O access in two ways. First, any access to an I/O device that is not associated with a user interaction is automatically blocked. Thereby, the attacks that participants encountered in the first series of interactive tasks would be automatically thwarted; we verify this aspect of Aware's effectiveness in the measurement results subsection below. Second, only user-intended interactions that explicitly trigger an I/O device are allowed to proceed. Note, however, that malicious apps could nevertheless try to hijack these user-intended activities. Aware also aims to thwart attacks that coincide with such user-intended I/O interactions. For this purpose, Aware provides user interface features (i.e., security messages and gesture identification) that help users *notice* and *block* such attacks. In addition, users can *inspect the log* to verify which wanted or unwanted I/O operations have taken place.
To investigate the effectiveness of Aware when users encounter such sophisticated attacks (which exploit the *context* of users' interactions), we customized a simple photographic filter app called SimpleFilters. The app was modified to stealthily attempt a screenshot (A5) and a voice recording (A6) when users intentionally engage in a different type of I/O interaction. Using Aware, 82% of the participants noticed the attack while performing task T11-A5, and 76% noticed the attack while performing task T12-A6. In contrast to the first series of user interactions, these participants were also able to clearly articulate what attacks took place. A typical debriefing message for attack A5 was: “It was taking a screenshot, while I asked it to take a video”, whereas for attack A6, messages were variations of: “message said the microphone was being accessed”.
Inspecting the system logs, we were also able to determine how often participants used the gestures to *block* the attacks that they noticed. For A5, all of the 82% of the participants who noticed the attack successfully used the gestures to abort the task. Similarly, all participants who noticed attack A6 succeeded in blocking the attempt to record audio instead of taking a picture. In the final task (T13), we asked individuals to inspect Logs to evaluate which I/O access operations had taken place during the second series of interactions. 88% of all participants found Logs helpful in identifying suspicious activities from running apps, and they were clearly able to articulate what attacks had taken place.
After the interaction tasks, we solicited further feedback from the participants. 90% of the participants found Aware more secure than the vanilla Android OS, and 80% found it as (or more) usable than the vanilla Android OS. These are encouraging results, since additional security mechanisms often meet with user resistance, for example because they may distract from the user's primary task. Further, 57% of the participants said they would prefer the notice-and-gesture mechanism over other notification options. For example, only 21% of the participants preferred to be prompted with a permission dialog at every access, only 10% stated that they would prefer to be asked for permission at install time, and 8% at first use. Most importantly, 99% of the participants would like Aware integrated into their current mobile OS.
*Measurement Results:* To evaluate the effectiveness of Aware in the presence of realistic user interactions, we allowed the 4 RAT apps used in the first series of user interaction tasks to remain active during the second series of interactive tasks. We also tested whether activities from the customized SimpleFilters app would be blocked if they did not coincide with the users' interactions with the I/O devices (i.e., in T11 and T12). To monitor whether any of the malicious activities of these RAT apps were successful, we used logcat [@AndroidDoc], the Android logging system, which provides a mechanism for collecting system debug output about activities from various apps and system services. Across the 74 sessions involving participants, we found that the RAT apps attempted 1,080 times to perform stealthy operations targeting I/O devices, but they *never* succeeded in accessing the on-board camera, microphone or screen content, as a result of Aware systematically validating the required preconditions. In other words, the absence of user interaction and consent prevented RAT apps from performing stealthy operations while running as background services. Furthermore, based on the logs, there were no run-time exceptions triggered by the Aware components that could have caused any of the 9 well-known legitimate apps to crash or unexpectedly terminate.
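The logcat-based verification amounts to scanning system log output for lines recording blocked attempts. The sketch below illustrates the idea; the log tag `AwareGuard` and the message format are hypothetical, since real logcat lines depend on how the framework tags its debug output.

```python
# Sketch: counting blocked RAT attempts from logcat-style output.
# The tag "AwareGuard" and line layout below are illustrative only.

sample = """\
03-06 10:12:01.123  1234  1234 I AwareGuard: BLOCKED com.example.rat camera take_photo
03-06 10:12:05.456  1234  1234 I AwareGuard: BLOCKED com.example.rat microphone record_audio
03-06 10:12:09.789  2345  2345 I ActivityManager: Displayed com.example.social/.MainActivity
"""

def count_blocked(logcat_text):
    """Count log lines that record a blocked I/O access attempt."""
    return sum(
        1 for line in logcat_text.splitlines()
        if "AwareGuard" in line and "BLOCKED" in line
    )

assert count_blocked(sample) == 2
```

Aggregating such counts per session is how a figure like the 1,080 blocked attempts across 74 sessions could be tallied from the collected logs.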
*Summary[^12]:* Aware prevented all attempts by RAT apps to perform stealthy operations that did not coincide with users' intended I/O access operations, and considerably reduced the success rate of social-engineering attacks without breaking any app's logic. Therefore, Aware significantly raises the bar compared to the detection rate of state-of-the-art static/dynamic analysis tools and anti-malware tools available to users for identifying malicious apps running on smartphones[^13]. We anticipate that with additional experience users will become even more proficient with the security messages and the gesture mechanism, which would further reduce the effectiveness of social-engineering techniques. The achieved results are encouraging given the sophisticated nature of the attacks tested with the SimpleFilters app[^14]. Aware automatically blocks all attacks that are not carefully socially engineered and significantly reduces the attention burden placed on users, thereby reducing habituation and notice fatigue [@Bohme11].
\[performance\]
[c|c|c|c|c| >c |]{} & & & & & **Overhead Max (Avg)**\
& 15.90$\pm$1.54 & 14.39$\pm$1.12 & 16.11$\pm$1.77 & 15.01$\pm$1.38 & 4.04% (2.21%)\
& 16.08$\pm$1.32 & 15.68$\pm$1.87 & 16.44$\pm$1.06 & 16.37$\pm$1.91 & 4.31% (2.57%)\
& 12.36$\pm$2.01 & 11.86$\pm$1.99 & 12.65$\pm$2.15 & 12.32$\pm$1.85 & 4.03% (2.19%)\
& 17.76$\pm$0.99 & 16.23$\pm$0.69 & 18.61$\pm$0.90 & 17.02$\pm$1.01 & 4.79% (2.94%)\
Performance Evaluation
----------------------
We measured the overhead introduced by Aware while handling each access request for operating on-board I/O devices, such as the camera to take photos and videos, the microphone to record audio, and the screen to capture screenshots. Due to the lack of publicly available benchmarks for the Android OS, we provide only a microbenchmark analysis of such operations for two phones, a Nexus 5 and a Nexus 5X, running Android OS (version android-6.0.1\_r5). The overhead is calculated by measuring the time interval from the moment the request is made by the process running the app to the moment the operation is granted or denied by Aware. Table \[performance\] reports the average time over 10,000 requests, the standard deviation and the maximum recorded overhead introduced by Aware. Overall, Aware introduces a negligible overhead, on the order of 1 $\mu$s per access. The maximum recorded overhead is 4.79%, while accessing the screen buffers.
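As an illustration of how the figures in Table \[performance\] can be derived, the sketch below computes the mean, standard deviation and relative overhead from two sets of timing samples. The numbers are synthetic placeholders, not measured data from the paper.

```python
import statistics

# Sketch: deriving mean +/- stddev and relative overhead from
# per-request latency samples (synthetic values, not measured data).

def overhead_stats(baseline_us, guarded_us):
    """Return (mean, stddev) of the guarded path and the relative
    overhead (%) versus the baseline path."""
    mean_b = statistics.mean(baseline_us)
    mean_g = statistics.mean(guarded_us)
    rel = (mean_g - mean_b) / mean_b * 100.0
    return mean_g, statistics.stdev(guarded_us), rel

baseline = [14.2, 14.5, 14.4, 14.3, 14.6]   # request handling, no checks
guarded  = [14.8, 15.0, 14.9, 15.1, 14.7]   # same path with access validation

mean_g, sd_g, rel = overhead_stats(baseline, guarded)
assert 0 < rel < 5.0   # comparable to the <5% overheads reported above
```

In the real measurement the same computation would run over 10,000 requests per device and operation, with the maximum across runs reported as the worst-case overhead.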
Conclusion
==========
In this paper, we presented Aware, a security framework for authorizing app requests to perform sensitive operations using I/O devices, which binds app requests to user intentions to make all uses of certain I/O devices explicit. We evaluated the proposed defense mechanisms through laboratory-based experimentation and a user study, involving 74 human subjects, whose ability to identify undesired operations targeting I/O devices increased significantly. Without Aware, only 18% of the participants were able to identify attacks from the tested RAT apps. Aware systematically blocked all attacks performed in the absence of user-initiated interaction, and supported users in identifying 82% of the more sophisticated attacks that used social-engineering techniques to hijack user-initiated operations. Aware introduced a maximum performance overhead of only 4.79% on operations targeting I/O devices.
[1]{}
Dendroid malware can take over your camera, record audio, and sneak into Google Play. https://blog.lookout.com/blog/2014/03/06/dendroid/
Smartphone [OS]{} Market Share. http://www.idc.com/prodserv/smartphone-os-market-share.jsp
Krysanec Trojan: Android backdoor lurking inside legitimate apps. http://www.welivesecurity.com/2014/08/12/krysanec-trojan-android
Official Android Documentation. http://developer.android.com
SEAndroid. http://seandroid.bitbucket.org
Android Open Source Project. https://source.android.com/
Java Native Interface. http://en.wikipedia.org/wiki/Java_Native_Interface
Google Play app store. https://play.google.com/
Android Central. http://www.androidcentral.com/stagefright
Apks Free app store. http://www.androidapksfree.com
Mobile Apk World app store. http://mobileapkworld.com
VirusTotal. Free Online Virus, Malware and URL Scanner. https://www.virustotal.com
Smartphones Are Used To Stalk, Control Domestic Abuse Victims. http://www.npr.org/sections/alltechconsidered/2014/09/15/346149979/smartphones-are-used-to-stalk-control-domestic-abuse-victims
Google Bouncer. http://blog.trendmicro.com/trendlabs-security-intelligence/a-look-at-google-bouncer/
Speed up your car insurance claim. https://www.esurance.com/photo-claims
Social-Engineering Attacks. https://en.wikipedia.org/wiki/Social_engineering_(security)
PNC Mobile Banking. https://www.pnc.com/en/personal-banking/banking/online-and-mobile-banking/mobile-banking.html
Samsung Knox White Papers. https://www.samsungknox.com/en/support/knox-workspace/white-papers
Yee, Ka-Ping. Aligning security and usability.
Park, Jaehong and Sandhu, Ravi. The UCON ABC usage control model.
Anderson, James P. Computer Security Technology Planning Study. Volume 2.
Robert Templeman and Zahid Rahman and David Crandall and Apu Kapadia. PlaceRaider: Virtual theft in physical spaces with smartphones.
Schlegel, Roman and Zhang, Kehuan and Zhou, Xiao-yong and Intwala, Mehool and Kapadia, Apu and Wang, XiaoFeng. Soundcomber: A Stealthy and Context-Aware Sound Trojan for Smartphones.
Felt, Adrienne Porter and Ha, Elizabeth and Egelman, Serge and Haney, Ariel and Chin, Erika and Wagner, David. Android permissions: [U]{}ser attention, comprehension, and behavior.
Roesner, Franziska and Kohno, Tohru and Moshchuk, Alexander and Parno, Bryan and Wang, Harry Jiannan and Cowan, Crispin. User-driven access control: [R]{}ethinking permission granting in modern operating systems.
Roesner, Franziska and Molnar, David and Moshchuk, Alexander and Kohno, Tadayoshi and Wang, Helen. World-driven access control for continuous sensing.
Anderson, Fraser and Grossman, Tovi and Wigdor, Daniel and Fitzmaurice, George. Supporting Subtlety with Deceptive Devices and Illusory Interactions.
Bianchi, Antonio and Corbetta, Jacopo and Invernizzi, Luca and Fratantonio, Yanick and Kruegel, Christopher and Vigna, Giovanni. What the App is That? Deception and Countermeasures in the Android User Interface.
Smalley, Stephen and Craig, Robert. Security Enhanced Android: Bringing Flexible MAC to Android.
Nadkarni, Adwait and Enck, William. ASM: A programmable interface for extending Android security.
Backes, Michael and Bugiel, Sven and Gerling, Sebastian and von Styp-Rekowsky, Philipp. Android Security Framework: Extensible multi-layered access control on Android.
Chen, Kai and Wang, Peng and Lee, Yeonjoon and Wang, XiaoFeng and Zhang, Nan and Huang, Heqing and Zou, Wei and Liu, Peng. Finding unknown malice in 10 seconds: [M]{}ass vetting for new threats at the Google-Play scale.
Tam, Kimberly and Khan, Salahuddin and Fattori, Aristide and Cavallaro, Lorenzo. CopperDroid: Automatic Reconstruction of Android Malware Behaviors.
Xu, Zhi and Zhu, Sencun. SemaDroid: [A]{} Privacy-Aware Sensor Management Framework for Smartphones.
Petracca, Giuseppe and Sun, Yuqiong and Atamli, Ahmad and Jaeger, Trent. AuDroid: Preventing Attacks on Audio Channels in Mobile Devices.

Primal Wijesekera and Arjun Baokar and Ashkan Hosseini and Serge Egelman and David Wagner and Konstantin Beznosov. Android Permissions Remystified: [A]{} Field Study on Contextual Integrity.
Sheng, Steve and Holbrook, Mandy and Kumaraguru, Ponnurangam and Cranor, Lorrie Faith and Downs, Julie. Who Falls for Phish? A Demographic Analysis of Phishing Susceptibility and Effectiveness of Interventions.
Böhme, Rainer and Grossklags, Jens. The Security Cost of Cheap User Interaction.
Appendices {#appendices .unnumbered}
==========
Android Permission Set Analysis {#perm_ineff}
-------------------------------
\[tab:app-an\]

*(Table \[tab:app-an\]: per-category counts and percentages of analyzed apps holding the permission sets required for each stealthy operation; the table body was lost in conversion, and its findings are summarized in the accompanying text.)*
Our analysis of 74 apps from third-party app stores [@MobileApkWorld; @ApksFree] and 329 apps from the official Google Play store [@GooglePlay] shows concerning results, summarized in Table \[tab:app-an\]. In particular, many of the analyzed apps could potentially behave as RATs, since they hold the permissions necessary to perform stealthy operations targeting on-board I/O devices. For example, 83.89% of the apps from Google Play could potentially take stealthy screenshots, and 25.68% of the apps from third-party app stores could potentially take stealthy photos. In each cell of Table \[tab:app-an\], the first value reports the number, and the second value the percentage, of apps among all those analyzed in the same category that hold the permissions required to perform the stealthy operation specified in the first column. We do not know whether these apps actually misuse their permissions, but we point out that such misuse is possible, and that statically analyzing the set of Android permissions used by apps is by no means sufficient to distinguish benign apps from malicious ones.
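The permission-set check itself is a simple subset test: an app is flagged as *potentially* capable of a stealthy operation if its granted permissions cover the set that operation requires. The mapping below is an illustrative assumption (e.g., screen capture via media projection requires no manifest permission on some API levels), not the exact rule set used in the analysis.

```python
# Sketch of the static permission-set check: an app holding all the
# permissions needed for a stealthy operation is flagged as
# *potentially* capable of it (holding permissions does not prove misuse).

STEALTHY_OPS = {
    "stealthy_photo": {"android.permission.CAMERA"},
    "stealthy_audio": {"android.permission.RECORD_AUDIO"},
    # Illustrative assumption: stealthy video needs both camera and audio.
    "stealthy_video": {"android.permission.CAMERA",
                       "android.permission.RECORD_AUDIO"},
}

def potential_ops(app_permissions):
    """Return the stealthy operations an app could perform, given the
    permissions declared in its manifest."""
    perms = set(app_permissions)
    return {op for op, needed in STEALTHY_OPS.items() if needed <= perms}

app = ["android.permission.CAMERA", "android.permission.INTERNET"]
assert potential_ops(app) == {"stealthy_photo"}
```

Running this test over each app's manifest, and counting matches per operation and per store, yields counts and percentages of the kind reported in Table \[tab:app-an\].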
Anti-Malware Tools Detection Analysis {#anti}
-------------------------------------
The analysis results for RAT detection by the 15 most popular anti-malware apps, available on Google Play [@GooglePlay] and used on smartphones by millions of users around the world, are summarized in Table \[tools\]. The *Installs* column indicates the number of installs performed by users on their smartphones. The *Reviews* column specifies how many people gave a personal review and a score for the anti-malware app on Google Play. Finally, the *Score* column reports the average score received by the app, on a scale of 5, from users reviewing it.
-- -- -- -- ----------- -- ----------- ------------ -----
100M-500M 8,092,733 4.6
100K-500K 5,383 4.1
10M-50M 283,332 4.5
100M-500M 3,481,194 4.5
100M-500M 4,079,893 4.4
1M-5M 33,909 4.3
$\circ$ 100M-500M 13,258,188 4.7
50M-100M 863,913 4.5
5M-10M 300,470 4.6
10M-50M 1,019,526 4.6
100M-500M 830,941 4.4
$\otimes$ 1M-5M 53,990 4.3
10M-50M 334,162 4.3
10M-50M 509,521 4.4
10M-50M 497,640 4.3
-- -- -- -- ----------- -- ----------- ------------ -----
All the anti-malware apps were updated with the most recent malware database before starting the scans; indeed, 3 anti-malware apps (marked with $\star$ in Table \[tools\]) detected the only recently discovered Stagefright vulnerability [@ACentral], which would allow malicious code to send fraudulent MMS messages. The analysis results are based on a first scan before malware installation and a second scan during the execution of the malware, with the anti-malware kept actively scanning for 10 minutes. Subsequently, three consecutive scans were performed after malware installation. After the first scan, the anti-malware was configured to keep scanning actively, and we made sure to select the full/deep scan option where available. The three successive scans were manually activated to force the anti-malware to rescan the entire system. For each new malware installation, the smartphone was flashed again with a clean copy of the OS and a fresh installation of the anti-malware software.
The analysis revealed that most anti-malware tools are able to detect well-known RATs (e.g., Dendroid and Krysanec). We believe this is because well-known RATs have been classified and a signature has been generated and distributed on the Web. On the other hand, proof-of-concept RATs (e.g., PlaceRaider, SoundComber and StealthyStalker) are unknown and go undetected by anti-malware tools, even though they use techniques similar to those of well-known RATs. The only exception is AVAST Mobile Security, which identifies some malice in both PlaceRaider and StealthyStalker.
On the Android OS side, an interesting finding was that, at install time, an alert was triggered to block the installation of Krysanec. At the second attempt, the Android OS asked the user whether to proceed with the installation anyway. Additionally, while being uninstalled, Krysanec attacked the operating system through a privilege escalation exploit.
Static and Dynamic Analysis Tools {#stealthystalker}
---------------------------------
The analysis results relative to RAT detection by four state-of-the-art static and dynamic analysis tools are reported in Table \[tools\].
*VirusTotal* [@VirusTotal], originally developed by Hispasec and now owned by Google, is a free service that analyzes suspicious files and URLs and facilitates the quick detection of viruses, worms, trojans, and all kinds of malware. It uses 56 different anti-malware products and 61 online scan engines to check for viruses. VirusTotal was selected by PC World as one of the best 100 products of 2007. As shown in Table \[tools\], VirusTotal detects well-known RATs (e.g., Dendroid and Krysanec). The two RATs are identified as malicious with scores of 22/56 and 20/56, respectively. The numerator in each score is the number of tools that identify the app as potentially malicious; the denominator is the total number of tools that analyzed the app.
*MassVet* [@chen2015finding] compares a submitted app with apps already on a market, focusing on the difference between those sharing a similar UI structure (indicating a possible repackaging relation), and the commonality among those seemingly unrelated. MassVet uses a “DiffCom” analysis on top of an efficient similarity comparison algorithm, which maps features of an app’s UI structure or a method’s control-flow graph to a value for a fast comparison. As shown in Table \[tools\], MassVet detects malicious code in 3 of the 5 RATs analyzed.
[l|c|c|c|c|]{} & &\
& [VirusTotal]{} & [MassVet]{} &
---------
Google
Bouncer
---------
: RAT Detection via Static and Dynamic Analysis[]{data-label="tool"}
& [CopperDroid]{}\
& & &N/A & N/A\
& & & N/A & N/A\
& & & & N/A\
& & & N/A &N/A\
& & & N/A & N/A\
\
*Legend:* Malware Detected Malware Undetected N/A Data Not Available
*Google Bouncer* [@bouncer] is the codename of a security service introduced by Google early in 2012 to keep malicious apps off the official Google Play[^15]. Bouncer quietly and automatically scans apps (both new and previously uploaded ones) and developer accounts in Google Play with its reputation engine and cloud infrastructure. To test the effectiveness of Bouncer in detecting RAT apps, we implemented a proof-of-concept testing app, called *StealthyStalker*[^16], able to take stealthy photos, videos and screenshots, record audio, and hijack user-initiated operations. To release an app through Google Play, a third-party developer has to participate in the Android developer program and submit the app to Google for review. The app is signed and published by Google only after it passes the review process. As shown in Table \[tools\] and Figure \[fig:bouncer\], the StealthyStalker app (submitted for publication under the fake name *SimpleFilters*) successfully passed the Google Play (Bouncer) review and was published after a couple of hours, despite the hidden malicious code. This means that Google Bouncer did not find any potential harm in the app. Following ethical hacking practice, we immediately removed the app from Google Play before any user could actually download it, as proved by the download statistics provided by Google. Results for the other 4 RATs are not available because we are not authorized to submit code written by other researchers or malicious developers.
![StealthyStalker RAT app published on Google Play under the name of SimpleFilters[]{data-label="fig:bouncer"}](bouncer.eps){width="80mm"}
*CopperDroid* [@tam2015copperdroid] is a tool that performs dynamic analysis of Android apps to characterize the behavior of Android malware. It automatically analyzes low-level OS-specific and high-level Android-specific behaviors of Android malware by observing and analyzing system call invocations, including IPC and RPC interactions carried out as system calls. Although CopperDroid is a powerful tool for dynamically analyzing Android apps, its analysis results do not provide any hint about the maliciousness of a given sample. Indeed, by manually analyzing the log files generated by CopperDroid (e.g., syscalls, pcap, logcat and basicbinder), we were not able to identify evidence of malice in the analyzed software.
Summary of Selected Results from our User Study and Aware Evaluation {#res}
--------------------------------------------------------------------
\
Demographic Distribution of Human Subjects participating in the Study\
\
Major/Minor’s Degree Distribution of Human Subjects participating in the Study
[^1]: 75% of operations requiring permissions are performed when the screen is off or apps are running in the background as services [@primal]
[^2]: Users could notice the Wi-Fi or cellular network icon in the phone's status bar, but they do not know which app is responsible for the network traffic or what data is flowing out through the network.
[^3]: Prompts are disruptive and cause excessive fatigue, conditioning users to simply accept any prompt query and thus undermining the usefulness of the prompts [@felt2012; @yee2004aligning; @primal].
[^4]: We have tested the 15 most popular Android anti-malware tools; complete results are reported in Appendix \[anti\].
[^5]: We have tested 2 static and 2 dynamic analysis tools currently adopted by researchers and the general mobile app community; complete results are reported in Appendix \[stealthystalker\].
[^6]: Used when users won’t be interacting heavily with the screen while consuming content, like while watching a video.
[^7]: Mainly intended for apps in which the user will be heavily interacting with the full screen as part of the primary experience, like while playing games, viewing images in a gallery, or reading a book.
[^8]: \(F) is used to indicate the front camera and (B) the back camera.
[^9]: On average, processes running apps make 8 requests per minute for permission to access sensitive resources [@primal].
[^10]: The timer is used to support apps that require users to keep pressing down a button to perform the operation (i.e., record a video).
[^11]: This mechanism supports the Android AppOps mechanism, reintroduced starting from Android OS 6.0 Marshmallow [@AndroidDoc], which allows users to revoke, from running apps, permissions granted at install time.
[^12]: See Appendix \[res\] for a summary of selected results.
[^13]: Detailed analysis results reported in Appendices \[anti\] and \[stealthystalker\].
[^14]: Research on carefully crafted Phishing attacks shows that, even with repeated security training, a significant share of users will fall for such attacks [@Sheng10].
[^15]: According to Google, Bouncer was responsible for a 40% drop in the number of malicious apps in its app store.
[^16]: Submitted to Google Play under the name of *SimpleFilters*
---
abstract: 'We report on the computation of invariants, covariants, and contravariants of cubic surfaces. All algorithms are implemented in the computer algebra system [magma]{}.'
address:
- |
Andreas-Stephan Elsenhans\
Institut für Mathematik\
Universität Würzburg
- |
Jörg Jahnel\
Department Mathematik\
Universität Siegen
author:
- 'Andreas-Stephan Elsenhans'
- Jörg Jahnel
title: Computing invariants of cubic surfaces
---
Introduction
============
Given two hypersurfaces of the same degree in projective space over an algebraically closed field, one may ask for the existence of an automorphism of the projective space that maps one of the hypersurfaces to the other. It turns out that if the hypersurfaces are stable [@MFK Def. 1.7] in the sense of geometric invariant theory, such an isomorphism exists if and only if all the invariants of the hypersurfaces coincide [@Mu].
Aside from cubic curves in ${\mathop{\text{\bf P}}\nolimits}^2$ and quartic surfaces in ${\mathop{\text{\bf P}}\nolimits}^3$, an isomorphism between smooth hypersurfaces of degree $d \geq 3$ always extends to an automorphism of the ambient projective space [@MM Th. 2]. Thus, the invariants may be used to test abstract isomorphy.
If the base field is not algebraically closed, two varieties with equal invariants can differ by a twist. A necessary condition for the existence of a non-trivial twist is that the variety has a non-trivial automorphism.
In this article, we focus on the case of cubic surfaces. For them, it was proven by Clebsch [@Cl] that the ring of invariants of even weight is generated by five invariants of degrees 8, 16, 24, 32, and 40. Later, Salmon [@Sa3] worked out explicit formulas for these invariants based on the pentahedral representation of the cubic surface, introduced by Sylvester [@Sy]. Using modern computer algebra, it is possible to compute the pentahedral representation of a given cubic surface and to deduce the invariants from this [@EJ1].
We describe a different approach to compute the Clebsch-Salmon invariants, linear covariants, and some contravariants of cubic surfaces, based on the Clebsch transfer principle. Using this, we also compute an invariant of degree 100 [@Do Sec. 9.4.5] and odd weight that vanishes if and only if the cubic surface has a non-trivial automorphism. The square of this invariant is a polynomial expression in Clebsch’s invariants.
This answers the question of isomorphy for all stable cubic surfaces over algebraically closed fields and, over non-closed fields, for all surfaces for which the degree 100 invariant does not vanish.
All algorithms are implemented in the computer algebra system [magma]{} [@BCP].
The Clebsch-Salmon invariants
=============================
\[def\_inv\] Let $K$ be a field of characteristic zero and $K[X_1,\ldots,X_n]^{(d)}$ the $K$-vector space of all homogeneous forms of degree $d$. Further, we fix the left group action $$\begin{aligned}
{\mathop{\text{\rm GL}}\nolimits}_n(K) \times K[X_1,\ldots,X_n] \rightarrow K[X_1,\ldots,X_n],\quad (M,f) \mapsto M \cdot f,\end{aligned}$$ with $(M \cdot f)(X_1,\ldots,X_n) := f((X_1,\ldots,X_n) \, M)$. Finally, on the polynomial ring $K[Y_1,\ldots,Y_n]$, we choose the action $$\begin{aligned}
{\mathop{\text{\rm GL}}\nolimits}_n(K) \times K[Y_1,\ldots,Y_n] \rightarrow K[Y_1,\ldots,Y_n], \quad
(M,f) \mapsto M \cdot f,\end{aligned}$$ given by $(M \cdot f)(Y_1,\ldots,Y_n) := f((Y_1,\ldots,Y_n) \left(M^{-1}\right)^\top)$.
1. An [*invariant $I$ of degree $D$ and weight $w$*]{} is a map $K[X_1,\ldots,X_n]^{(d)} \rightarrow K$ that may be given by a homogeneous polynomial of degree $D$ in the coefficients of $f$ and satisfies $$I(M \cdot f) = \det(M)^w \cdot I(f),$$ for all $M \in {\mathop{\text{\rm GL}}\nolimits}_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$.
2. A [*covariant $C$ of degree $D$, order $p$, and weight $w$*]{} is a map $$K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(p)}$$ such that each coefficient of $C(f)$ is a homogeneous degree $D$ polynomial in the coefficients of $f$ and that satisfies $$C(M \cdot f) = \det(M)^w \cdot M \cdot (C(f)),$$ for all $M \in {\mathop{\text{\rm GL}}\nolimits}_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$.
3. A [*contravariant $c$ of degree $D$, order $p$, and weight $w$*]{} is a map $$K[X_1,\ldots,X_n]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(p)}$$ such that each coefficient of $c(f)$ is a homogeneous degree $D$ polynomial in the coefficients of $f$ and that satisfies $$c(M \cdot f) = \det(M)^w \cdot M \cdot c(f),$$ for all $M \in {\mathop{\text{\rm GL}}\nolimits}_n(K)$ and all forms $f \in K[X_1,\ldots,X_n]^{(d)}$. Note that the right hand side uses the action on $K[Y_1,\ldots,Y_n]$.
<!-- -->
1. The set of all invariants is a commutative ring and an algebra over the base field.
2. The set of all covariants (resp. contravariants) is a commutative ring and a module over the ring of invariants.
3. Geometrically, the vanishing locus of $f$ or a covariant $C(f)$ is a subset of the projective space whereas the vanishing locus of a contravariant $c(f)$ is a subset of the dual projective space. Replacing the matrix by the transpose inverse matrix gives the action on the dual space in a naive way.
<!-- -->
1. The discriminant of binary forms of degree $d$ is an invariant of degree $2d - 2$ and weight $d(d-1)$ [@Ol Chap. 2].
2. Let $f$ be a form of degree $d > 2$ in $n$ variables. Then the [*Hessian*]{} $H$ defined by $$H(f) := \det \left(\frac{\partial^2 f}{\partial X_i \, \partial X_j} \right)_{i,j =1,\ldots,n}$$ is a covariant of degree $n$, order $(d-2) n$, and weight $2$.
3. Let a smooth plane curve $V \subset {\mathop{\text{\bf P}}\nolimits}^2$ be given by a ternary form $f$ of degree $d$. Mapping $f$ to the form that defines the dual curve [@Do Sec. 1.2.2] of $V$ is an example of a contravariant of degree $2d - 2$ and order $d(d-1)$.
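To make the transformation law in the first example concrete, the following sketch (plain Python, not part of the paper's Magma code) checks that the discriminant $b^2 - 4ac$ of a binary quadratic form has weight $2$: under the action $(M \cdot f)(X) = f(X\,M)$ it picks up the factor $\det(M)^2$.

```python
def act(M, f):
    """Action (M.f)(x, y) = f((x, y) M) on a binary quadratic form
    f = (a, b, c), representing a x^2 + b xy + c y^2."""
    a, b, c = f
    (m11, m12), (m21, m22) = M
    # Substitute x -> x*m11 + y*m21, y -> x*m12 + y*m22 and collect terms.
    return (a*m11**2 + b*m11*m12 + c*m12**2,
            2*a*m11*m21 + b*(m11*m22 + m12*m21) + 2*c*m12*m22,
            a*m21**2 + b*m21*m22 + c*m22**2)

def disc(f):
    """Discriminant of a binary quadratic form: degree 2, weight 2."""
    a, b, c = f
    return b*b - 4*a*c
```

For instance, with $M = \begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix}$ of determinant $6$, the discriminant of any transformed form is $36$ times the original one.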
Salmon’s formulas {#salmons-formulas .unnumbered}
-----------------
A cubic surface given by a system of equations of the shape $$a_0 X_0^3 + a_1 X_1^3 +a_2 X_2^3 +a_3 X_3^3 +a_4 X_4^3 = 0
, \quad X_0 + X_1 + X_2 + X_3 + X_4 = 0$$ is said to be in [*pentahedral form*]{}. The coefficients $a_0,\ldots,a_4$ are called the pentahedral coefficients of the surface. The cubic surfaces that have a pentahedral form are a Zariski open subset in the Hilbert scheme of all cubic surfaces. Thus, it suffices to give the invariants for these surfaces. For this, we denote by $\sigma_1,\ldots,\sigma_5$ the elementary symmetric functions of the pentahedral coefficients. Then the Clebsch-Salmon invariants (as mentioned in the introduction) of the cubic surface are given by [@Sa3 § 467], $$\begin{aligned}
I_8 = \sigma_4^2 - 4 \sigma_3 \sigma_5, \quad
I_{16} = \sigma_1 \sigma_5^3, \quad
I_{24} = \sigma_4 \sigma_5^4, \quad
I_{32} = \sigma_2 \sigma_5^6, \quad
I_{40} = \sigma_5^8\, .\end{aligned}$$ Further, Salmon lists four linear covariants of degrees 11, 19, 27, and 43 [@Sa3 § 468] $$\begin{aligned}
L_{11} &= \sigma_5^2 \sum_{i=0}^4 a_i x_i, &
L_{19} &= \sigma_5^4 \sum_{i=0}^4 \frac{1}{a_i} x_i, \\
L_{27} &= \sigma_5^5 \sum_{i=0}^4 a_i^2 x_i, &
L_{43} &= \sigma_5^8 \sum_{i=0}^4 a_i^3 x_i \,.\end{aligned}$$ Finally, the $4 \times 4$-determinant of the matrix formed by the coefficients of these linear covariants of a cubic surface in ${\mathop{\text{\bf P}}\nolimits}^3$ is an invariant $I_{100}$ of degree 100. It vanishes if and only if the surface has Eckardt points or equivalently a non-trivial automorphism group [@Do Sec. 9.4.5, Table 9.6]. The square of $I_{100}$ can be expressed in terms of the other invariants above. For a modern view on these invariants, we refer to [@Do Sec. 9.4.5].
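Salmon's formulas are straightforward to evaluate numerically. The sketch below is a hypothetical Python transcription (the paper's actual implementation is in Magma): it computes the five Clebsch-Salmon invariants from given pentahedral coefficients.

```python
from itertools import combinations
from math import prod

def elementary_symmetric(a):
    """sigma_1, ..., sigma_5 of the pentahedral coefficients a_0, ..., a_4."""
    return [sum(prod(c) for c in combinations(a, k)) for k in range(1, 6)]

def clebsch_salmon(a):
    """Salmon's invariants I_8, I_16, I_24, I_32, I_40 of the cubic surface
    a_0 X_0^3 + ... + a_4 X_4^3 = 0,  X_0 + ... + X_4 = 0."""
    s1, s2, s3, s4, s5 = elementary_symmetric(a)
    return (s4**2 - 4*s3*s5, s1*s5**3, s4*s5**4, s2*s5**6, s5**8)
```

For example, the choice $a_i = 1$ for all $i$ gives $(\sigma_1,\ldots,\sigma_5) = (5, 10, 10, 5, 1)$ and hence $I_8 = 25 - 40 = -15$.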
Transvection
============
One classical approach to write down invariants is to use the transvection (called Überschiebung in German). This is part of the so-called symbolic method [@We Chap. 8, §2], [@Hu App. B.2]. We illustrate it in the case of ternary forms.
Let $K[X_1,\ldots,X_n,Y_1,\ldots,Y_n,Z_1,\ldots,Z_n]$ be the polynomial ring in $3 n$ variables. For $i,j,k \in \{1,\ldots,n\}$, we denote by $(i\, j\, k)$ the differential operator $$\begin{aligned}
(i\, j\, k) :=
\det
\left(
\begin{array}{ccc}
\frac{\partial}{\partial X_i} &
\frac{\partial}{\partial X_j} &
\frac{\partial}{\partial X_k} \\
\frac{\partial}{\partial Y_i} &
\frac{\partial}{\partial Y_j} &
\frac{\partial}{\partial Y_k} \\
\frac{\partial}{\partial Z_i} &
\frac{\partial}{\partial Z_j} &
\frac{\partial}{\partial Z_k} \\
\end{array}
\right)
\, .\end{aligned}$$
Using this notation, the [*Aronhold invariants*]{} $S$ and $T$ of the ternary cubic form $f$ are given by $$\begin{aligned}
S(f) &:=
(1\, 2 \, 3) (2 \, 3 \, 4) (3 \, 4 \, 1) (4 \, 1 \, 2)
f(X_1,Y_1,Z_1) \cdots f(X_4,Y_4,Z_4), \\
T(f) &:=
(1 \, 2 \, 3) (1 \, 2 \, 4) (2 \, 3 \, 5) (3 \, 1 \, 6)
(4 \, 5 \, 6)^2 f(X_1,Y_1,Z_1) \cdots f(X_6,Y_6,Z_6)\, .\end{aligned}$$ The first one is of degree and weight $4$, the second one is of degree and weight $6$. Using $S$ and $T$, one can write down the discriminant of a ternary cubic as $\Delta := S^3 - 6 T^2$. The discriminant vanishes if and only if the corresponding cubic curve is singular.
See [@Sa2 Sec. V] for a historical and [@Do Sec. 3.4.1] for modern references concerning invariants of ternary cubic forms.
One can use the transvection to write down invariants of quaternary forms, as well. For example, if $f$ is a quartic form in four variables then $$(1\, 2\, 3\, 4)^4 f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4)$$ is an invariant of degree 4. Here, $(1\, 2\, 3 \, 4)$ denotes the differential operator $$(1\, 2\, 3 \, 4) :=
\det
\left(
\begin{array}{cccc}
\frac{\partial}{\partial X_1} &
\frac{\partial}{\partial X_2} &
\frac{\partial}{\partial X_3} &
\frac{\partial}{\partial X_4} \\
\frac{\partial}{\partial Y_1} &
\frac{\partial}{\partial Y_2} &
\frac{\partial}{\partial Y_3} &
\frac{\partial}{\partial Y_4} \\
\frac{\partial}{\partial Z_1} &
\frac{\partial}{\partial Z_2} &
\frac{\partial}{\partial Z_3} &
\frac{\partial}{\partial Z_4} \\
\frac{\partial}{\partial W_1} &
\frac{\partial}{\partial W_2} &
\frac{\partial}{\partial W_3} &
\frac{\partial}{\partial W_4}
\end{array}
\right)\, .$$ For a quaternary cubic form, one can apply this to its Hessian to get an invariant of degree 16. However, a direct evaluation of such formulas for forms in four variables is too slow in practice. The reason is that both the differential operators and the product $f(X_1,Y_1,Z_1,W_1) \cdots f(X_4,Y_4,Z_4,W_4)$ usually have many terms.
The Clebsch transfer principle
==============================
We refer to [@Do Sec. 3.4.2] for a detailed and modern description of the Clebsch transfer principle. The basic idea is to compute a contravariant of a form of degree $d$ in $n$ variables out of an invariant of a form of degree $d$ in $(n-1)$ variables.
1. We consider the vector space $V = K^n$ and choose the volume form given by the determinant. We have the following isomorphism $$\Phi \colon \Lambda^{n-1} V \rightarrow V^*,\quad
v_1 \wedge \dots \wedge v_{n-1} \mapsto
(v \mapsto \det (v,v_1,\ldots,v_{n-1}))\,.$$
2. Let $I$ be a degree $D$, weight $w$ invariant on $K[U_1,\ldots,U_{n-1}]^{(d)}$. Then the [*Clebsch transfer*]{} of $I$ is the contravariant $\tilde{I}$ of degree $D$ and order $w$ $$\tilde{I} \colon K[X_1,\ldots,X_{n}]^{(d)} \rightarrow K[Y_1,\ldots,Y_n]^{(w)},$$ given by $$\tilde{I}(f) \colon (K^n)^* \rightarrow K,\quad
l \mapsto I(f(U_1 v_1 + \cdots + U_{n-1} v_{n-1}))\, .$$ Here, $v_1,\ldots,v_{n-1}$ are given by $v_1 \wedge \ldots \wedge v_{n-1} = \Phi^{-1}(l)$. Note that $\tilde{I}(f)$, as defined, is indeed a polynomial mapping and homogeneous of degree $w$.
Denote by $S$ and $T$ the invariants of ternary cubic forms, introduced above. Then $\tilde{S}$ is a degree 4, order 4 contravariant of quaternary cubic forms. Further, $\tilde{T}$ is a contravariant of degree 6 and order 6.
The discriminant of a cubic curve is given by $\Delta = S^3 - 6 T^2$. It vanishes if and only if the cubic curve is singular. Thus, the dual surface of the smooth cubic surface $V(f)$ is given by $\tilde{\Delta}(f) = \tilde{S}(f)^3 - 6 \tilde{T}(f)^2 = 0$.
By definition, the dual surface of a smooth surface $V(f) \subset {\mathop{\text{\bf P}}\nolimits}^3$ is the set of all tangent hyperplanes of $V(f)$. A plane $P \in ({\mathop{\text{\bf P}}\nolimits}^3)^*$ is tangent if and only if the intersection $V(f) \cap P$ is singular. Thus, $P$ is a point on the dual surface if and only if $\tilde{\Delta}(f)(P) = 0$. Here, $\Delta$ is the discriminant of ternary forms of the same degree as $f$.
For a given cubic form $f \in K[X,Y,Z,W]$, we compute $\tilde{S}(f)$ by interpolation as follows:
1. Choose 35 vectors $p_1,\ldots,p_{35} \in \left(K^4\right)^*$ in general position.
2. Compute $\Phi^{-1}(p_i)$, for $i = 1,\ldots,35$.
3. Compute $s_i := S(f(U_1 v_1 + U_2 v_2 + U_3 v_3))$, for $v_1 \wedge v_2 \wedge v_3 = \Phi^{-1}(p_i)$ and all $i = 1,\ldots,35$.
4. Compute the degree $4$ form $\tilde{S}(f)$ by interpolating the arguments $p_i$ and the values $s_i$.
We can compute $\tilde{T}(f)$ in the same way. The only modification necessary is to increase the number of vectors, as the space of sextic forms is of dimension 84.
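The interpolation step is ordinary linear algebra: a homogeneous quartic in four variables has $\binom{7}{3} = 35$ coefficients, so 35 generic evaluations determine it uniquely. The sketch below (hypothetical Python with exact rational arithmetic; the paper's code is in Magma, and function names are our own) recovers a homogeneous form of given degree from a black-box evaluator.

```python
from fractions import Fraction
from itertools import product
from math import prod
import random

def monomials(n_vars, degree):
    """Exponent tuples of all homogeneous monomials of the given degree."""
    return [e for e in product(range(degree + 1), repeat=n_vars)
            if sum(e) == degree]

def solve(A, b):
    """Gaussian elimination over the rationals: solve A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            raise ValueError("singular point configuration")
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][-1] for r in range(n)]

def interpolate_form(evaluate, n_vars, degree, seed=1):
    """Recover the coefficient of each monomial of a homogeneous form
    from evaluations at random rational points, retrying on degeneracy."""
    mons = monomials(n_vars, degree)
    rng = random.Random(seed)
    for _ in range(10):
        pts = [[Fraction(rng.randint(-30, 30)) for _ in range(n_vars)]
               for _ in mons]
        A = [[prod(x**e for x, e in zip(p, ex)) for ex in mons] for p in pts]
        try:
            vals = [Fraction(evaluate(p)) for p in pts]
            return dict(zip(mons, solve(A, vals)))
        except ValueError:
            continue
    raise RuntimeError("no generic point set found")
```

The same routine with `degree=6` uses the 84 points mentioned above for the sextic contravariant.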
Action of contravariants on covariants and vice versa
=====================================================
1. Recall that the rings $K[X_1,\ldots, X_n]$ and $K[Y_1,\ldots, Y_n]$ are equipped with ${\mathop{\text{\rm GL}}\nolimits}_n(K)$-actions, as introduced in Definition \[def\_inv\].
2. The ring of differential operators $$K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right]$$ acts on ${K[X_1,\ldots,X_n]}$.
3. The ${\mathop{\text{\rm GL}}\nolimits}_n(K)$-action on ${K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial}{\partial X_n}\right]}$ given by $$M \cdot \left(\frac{\partial }{\partial v} \right) :=
\frac{\partial }{\partial (v \cdot M^{-1})} \mbox{ for all } v \in K^n$$ results in the equality $$M \cdot \left(\frac{\partial f}{\partial v}\right) =
\left( M \cdot \frac{\partial}{\partial v} \right) \left(M \cdot f \right),$$ for all $f \in K[X_1,\ldots,X_n]$ and all $v \in K^n$.
4. The map $$\psi
\colon K[Y_1,\ldots,Y_n] \rightarrow
K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial }{\partial X_n}\right], \quad
Y_i \mapsto \frac{\partial }{\partial X_i}$$ is an isomorphism of rings. Further, for each $M \in {\mathop{\text{\rm GL}}\nolimits}_n(K)$, we have the following commutative diagram $$\begin{aligned}
\diagram
K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} \dto_M & &
{K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial}{\partial X_n}\right]}\dto^{M} \\
K[Y_1,\ldots,Y_n] \rrto^{\psi~~~} & &
{K\left[\frac{\partial }{\partial X_1},\ldots,\frac{\partial}{\partial X_n}\right]}\enddiagram_{\displaystyle .}\end{aligned}$$ In other words, $\psi$ is an isomorphism of ${\mathop{\text{\rm GL}}\nolimits}_n(K)$-modules.
5. Let $C$ be a covariant and $c$ a contravariant on $K[X_1,\ldots,X_n]^{(d)}$. Denote the order of $C$ by $P$ and the order of $c$ by $p$. For $P \geq p$, we define $$\begin{aligned}
&c \vdash C \colon K[X_1,\ldots,X_n]^{(d)} \rightarrow K[X_1,\ldots,X_n]^{(P-p)},\quad
f \mapsto \psi(c(f)) \left(C(f)\right)\, .\end{aligned}$$ The notation $\vdash$ follows [@Hu p. 304].
6. Assume $c \vdash C$ not to be zero. If $p < P$ then $c \vdash C$ is a covariant of order $P - p$. If $p = P$ then $c \vdash C$ is an invariant. In both cases, the degree of $c \vdash C$ is the sum of the degrees of $c$ and $C$.
7. Similarly to $\psi$, one can introduce a map $$\widehat{\psi} \colon K[X_1,\ldots,X_n] \rightarrow
K\left[\frac{\partial }{\partial Y_1},\ldots,\frac{\partial }{\partial Y_n} \right],\quad
X_i \mapsto \frac{\partial }{\partial Y_i}\,.$$ As above, $\widehat{\psi}$ is an isomorphism of rings and ${\mathop{\text{\rm GL}}\nolimits}_n(K)$-modules. Let $C$ be a covariant and $c$ a contravariant on $K[X_1,\ldots,X_n]^{(d)}$. We define $C \vdash c$ by $$(C \vdash c)(f) := \widehat{\psi}(C(f)) \left(c(f)\right)\, .$$
8. Assume $C \vdash c$ not to be zero. If $p > P$ then $C \vdash c$ is a contravariant of order $p - P$. If $p = P$ then $C \vdash c$ is an invariant. In both cases, the degree of $C \vdash c$ is the sum of the degrees of $C$ and $c$.
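The pairing $\vdash$ is easy to implement once polynomials are stored as dictionaries mapping exponent tuples to coefficients: applying $\psi(c(f))$ means differentiating each monomial of $C(f)$ and summing the results. The following sketch (Python with integer coefficients, not the paper's Magma implementation) applies an operator $g(\partial/\partial X_1, \ldots, \partial/\partial X_n)$ to a polynomial.

```python
from math import factorial, prod

def apply_operator(op, poly):
    """Apply psi(op) = op(d/dX_1, ..., d/dX_n) to poly.
    Both arguments are dicts {exponent tuple: coefficient}."""
    out = {}
    for a, ca in op.items():
        for b, cb in poly.items():
            if all(bi >= ai for ai, bi in zip(a, b)):
                # (d/dX_i)^a_i applied to X_i^b_i gives the falling
                # factorial b_i!/(b_i - a_i)! times X_i^(b_i - a_i).
                falling = prod(factorial(bi) // factorial(bi - ai)
                               for ai, bi in zip(a, b))
                e = tuple(bi - ai for ai, bi in zip(a, b))
                out[e] = out.get(e, 0) + ca * cb * falling
    return {k: v for k, v in out.items() if v != 0}
```

For instance, applying $Y_1 Y_2$ to $X_1^2 X_2$ yields $2 X_1$; and when the order of the operator equals the order of the polynomial, the result is a constant, matching the statement that $c \vdash C$ is an invariant for $p = P$.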
Explicit invariants of cubic surfaces
=====================================
1. It is well known that the ring of invariants of quaternary cubic forms is generated by the six invariants of degrees 8, 16, 24, 32, 40, and 100 [@Do Sec. 9.4.5]. The first five generators are primary invariants [@DK Def. 2.4.6]. Thus, the vector spaces of all invariants of degrees 8, 16, 24, 32 and 40 are of dimensions 1, 2, 3, 5, and 7. In general, these dimensions are encoded in the Molien series, which can be computed efficiently using character theory [@DK Ch. 4.6].
2. In the lucky case that one is able to write down a basis of the vector space of all invariants of a given degree $d$, one can find an expression of a given invariant of degree $d$ by linear algebra. This requires that the invariant is known for sufficiently many surfaces. For cubic surfaces, this is provided by the pentahedral equation.
3. Applying the methods above, we can write down many invariants for quaternary cubic forms. We start with the form $f$, its Hessian covariant $H(f)$, and the contravariant $\tilde{S}(f)$. Then we apply known covariants to contravariants and vice versa. Further, one can multiply two covariants or contravariants to get a new one. For efficiency, it is useful to keep the orders of the covariants and contravariants as small as possible. This way, they will not consist of too many terms.
Let $f$ be a quaternary cubic form. With $$\begin{aligned}
C_{4,0,4} &:= \tilde{S}(f),
& C_{4,4} &:= H(f), \\
C_{6,2} &:= C_{4,0,4} \vdash f^2,
& C_{9,3} &:= C_{4,0,4} \vdash (f \cdot C_{4,4}), \\
C_{10,0,2} &:= C_{6,2} \vdash C_{4,0,4},
& C_{11,1a} &:= C_{10,0,2} \vdash f, \\
C_{13,0,1} &:= C_{9,3} \vdash C_{4,0,4},
& C_{14,2} &:= C_{10,0,2} \vdash C_{4,4}, \\
C_{14,2a} &:= C_{13,0,1} \vdash f,
& C_{19,1a} &:= C_{13,0,1} \vdash C_{6,2}, \end{aligned}$$ the following expressions $$\begin{aligned}
I_8 &:= \frac{1}{2^{11} \cdot 3^9} C_{4,0,4} \vdash C_{4,4},\\
I_{16} &:= \frac{1}{2^{30} \cdot 3^{22}} C_{6,2} \vdash C_{10,0,2}, \\
I_{24} &:= \frac{1}{2^{41} \cdot 3^{33}} C_{10,0,2} \vdash C_{14,2}, \\
I_{32a} &:= C_{10,0,2} \vdash C_{11,1a}^2, \\
I_{32} &:= \frac{2}{5}(I_{16}^2 - \frac{1}{2^{60} \cdot 3^{44}} \cdot I_{32a}), \\
I_{40a} &:= C_{4,0,4} \vdash (C_{11,1a}^2 \cdot C_{14,2}), \\
I_{40} &:= \frac{-1}{100} \cdot I_8 \cdot I_{32} - \frac{1}{50} \cdot I_{16} \cdot I_{24}
- \frac{1}{2^{72} \cdot 3^{53} \cdot 5^2} I_{40a},\end{aligned}$$ give the Clebsch-Salmon invariants $I_8,\ I_{16},\ I_{24},\ I_{32},$ and $I_{40}$. Further, with $$\begin{aligned}
C_{11,1} :=& \frac{1}{2^{20} 3^{15}} C_{11,1a}, \\
C_{19,1} :=& \frac{1}{2^{33} \cdot 3^{24} \cdot 5} (C_{19,1a} + 2^{32} \cdot 3^{24} \cdot I_8 \cdot C_{11,1a}), \\
C_{27,1a} :=& \frac{1}{2^{42} 3^{33}} C_{13,0,1} \vdash C_{14,2a}, \\
C_{27,1} :=& I_{16} \cdot C_{11,1} + \frac{1}{200}(C_{27,1a} - 2 \cdot I_8^2 \cdot C_{11,1} - 10 \cdot I_8 \cdot C_{19,1}), \\
C_{43,1a} :=& \frac{1}{2^{68} \cdot 3^{53}} C_{13,0,1} \vdash ( C_{13,0,1} \vdash (C_{13,0,1} \vdash C_{4,4})), \\
C_{43,1} :=& \frac{-1}{1000} C_{43,1a} - \frac{1}{200} \cdot I_8^2 \cdot C_{27,1} + I_{16} \cdot C_{27,1} \\
& + \frac{1}{1000} \cdot I_8^3 \cdot C_{19,1} -\frac{1}{10} \cdot I_8 \cdot I_{16} \cdot C_{19,1} - I_{24} \cdot C_{19,1} \\
& + \frac{1}{200} \cdot I_8^2 \cdot I_{16} \cdot C_{11,1} + \frac{3}{20} \cdot I_8 \cdot I_{24} \cdot C_{11,1},\end{aligned}$$ $C_{11,1},\ C_{19,1},\ C_{27,1},$ and $C_{43,1}$ are Salmon’s linear covariants. Here, we use the first index to indicate the degree of an invariant, covariant, or contravariant. The second index is the order of a covariant, whereas the third index is the order of a contravariant. Finally, we can compute $I_{100}$ as the determinant of the 4 linear covariants.
The following [magma]{} script shows in approximately one second of CPU time that the algorithm as described above coincides with Salmon’s formulas for the pentahedral family, as the last two comparisons result in [true]{}.
r5 := PolynomialRing(Integers(),5);
ff5<a,b,c,d,e> := FunctionField(Rationals(),5);
r4<x,y,z,w> := PolynomialRing(ff5,4);
lfl := [x,y,z,w,-x-y-z-w];
col := [ff5.i : i in [1..5]];
f := a*x^3 + b*y^3 + c*z^3 + d*w^3 + e*(-x-y-z-w)^3;
sy_f := [ElementarySymmetricPolynomial(r5,i) : i in [1..5]];
sigma := [Evaluate(sf,col) : sf in sy_f];
I_8 := sigma[4]^2 - 4 *sigma[3] * sigma[5];
I_16 := sigma[1] * sigma[5]^3;
I_24 := sigma[4] * sigma[5]^4;
I_32 := sigma[2] * sigma[5]^6;
I_40 := sigma[5]^8;
L_11 := sigma[5]^2 * &+[ col[i] * lfl[i] : i in [1..5]];
L_19 := sigma[5]^4 * &+[ 1/col[i] * lfl[i] : i in [1..5]];
L_27 := sigma[5]^5 * &+[ col[i]^2 * lfl[i] : i in [1..5]];
L_43 := sigma[5]^8 * &+[ col[i]^3 * lfl[i] : i in [1..5]];
inv := ClebschSalmonInvariants(f);
cov := LinearCovariantsOfCubicSurface(f);
inv eq [I_8, I_16, I_24, I_32, I_40];
cov eq [L_11, L_19, L_27, L_43];
Performance test
================
Computing the Clebsch-Salmon invariants, following the approach above, for 100 cubic surfaces chosen at random with two digit integer coefficients takes about 3 seconds of CPU time. Most of the time is used for the direct evaluation of the invariant $S$ of ternary cubics by transvection. Note that computing the contravariant $\tilde{S}$ by interpolation requires 35 evaluations of the invariant $S$ of a ternary cubic. Computing both contravariants $\tilde{S}$ and $\tilde{T}$ and the dual surface takes about 18 seconds of CPU time for the same 100 randomly chosen surfaces.
For comparison, the computation of the pentahedral form by inspecting the singular points of the Hessian takes about 10 seconds per example [@EJ1 Sec. 5.11].
All computations are done on one core of an Intel i5-2400 processor running at 3.1GHz.
[99]{}
Bosma, W., Cannon, J., and Playoust, C.: [*The Magma algebra system. I. The user language.*]{} J. Symbolic Comput. [**24**]{} (1997), 235–265.
Clebsch, A.: [*Ueber eine Transformation der homogenen Functionen dritter Ordnung mit vier Veränderlichen.*]{} J. für die Reine und Angew. Math. [**58**]{} (1861), 109–126.
Derksen, H. and Kemper, G.: [*Computational Invariant Theory.*]{} Springer (2002)
Dolgachev, I.: [*Classical Algebraic Geometry: A modern view.*]{} Cambridge Univ. Press (2012)
Elsenhans, A.-S. and Jahnel, J.: [*Moduli spaces and the inverse Galois problem for cubic surfaces.*]{} Transactions of the AMS [**367**]{} (2015), 7837–7861
Hunt, B.: [*The geometry of some special arithmetic quotients.*]{} Springer (1996)
Matsumura, H. and Monsky, P.: [*On the automorphisms of hypersurfaces.*]{} J. Math. Kyoto Univ. [**3**]{} (1963/1964) 347–361
Mumford, D.: [*Stability of projective varieties.*]{} Enseign. Math. [**23**]{} (1977), 39–110.
Mumford, D., Fogarty, J., and Kirwan, F.: [*Geometric invariant theory,*]{} Third edition, Ergebnisse der Mathematik und ihrer Grenzgebiete 34, Springer, Berlin 1994
Olver, P.J.: [*Classical invariant theory,*]{} London Mathematical Society Student Texts 44, Cambridge University Press, Cambridge 1999
Salmon, G.: [*A treatise on the analytic geometry of three dimensions.*]{} Hodges-Smith (1862)
Salmon, G.: [*A treatise on the higher plane curves.*]{} Second edition, Hodges-Foster (1873)
Salmon, G.: [*Lessons introductory to the modern higher algebra.*]{} Hodges, Figgis, and Co. (1885)
Silverman, J.: [*The arithmetic of elliptic curves.*]{} Springer (1987)
Sylvester, J. J.: [*Sketch of a memoir on elimination, transformation, and canonical forms.*]{} Cambridge and Dublin Mathematical Journal [**6**]{} (1851), 186–200.
Weyl, H.: [*The classical groups.*]{} Princeton Univ. Press (1973)
---
author:
- 'David Meidinger, Dhritiman Nandan, Brenda Penante, Congkao Wen'
bibliography:
- 'refs.bib'
title: |
A note on NMHV form factors from the\
Graßmannian and the twistor string
---
`HU-MATH-2017-05`\
`HU-EP-17/16`\
`CERN-TH-2017-139`\
`Edinburgh 2017/14`
**Abstract**\
In this note we investigate Graßmannian formulas for form factors of the chiral part of the stress-tensor multiplet in $\mathcal{N}\!=\!4$ superconformal Yang-Mills theory. We present an all-$n$ contour for the $G(3,n+2)$ Graßmannian integral of NMHV form factors derived from on-shell diagrams and the BCFW recursion relation. In addition, we study other $G(3,n+2)$ formulas obtained from the connected prescription introduced recently. We find a recursive expression for all $n$ and study its properties. For $n\geq 6$, our formula has the same recursive structure as its amplitude counterpart, making its soft behaviour manifest. Finally, we explore the connection between the two Graßmannian formulations, using the global residue theorem, and find that it is much more intricate compared to scattering amplitudes.
Introduction
============
Although the study of analytic properties of scattering amplitudes in general field theories is an old subject in physics, no theory has seen a rate of development as steep as maximally supersymmetric Yang-Mills theory (${\mathcal{N}}=4$ SYM) in the planar limit. Scattering amplitudes in ${\mathcal{N}}=4$ SYM became a subject of intense study in particular after a duality with a topological twistor string theory was proposed in [@Witten:GaugeAsStringInTwistor2003]. This sparked a tremendous amount of work which, among other results, allowed the hidden symmetries and the integrability [@Beisert:2010jr] of the theory to become apparent from the perspective of scattering amplitudes [@Drummond:2009fd; @Ferro:2013dga; @Chicherin:2013ora]. A key role in these developments was played by novel formulations of scattering amplitudes. Among the various streams of results in this regard is a representation of tree-level scattering amplitudes and loop-level leading singularities as contour integrals over a Graßmannian space [@ArkaniHamed:2009dn]. This representation led to the emergence of the on-shell diagram formalism [@ArkaniHamed:2012nw] and finally to the amplituhedron [@Arkani-Hamed:2013jha; @Arkani-Hamed:2013kca], providing a new, geometrical perspective on amplitudes, hidden in the usual space-time formulation.[^1]
A natural question one may ask is whether similar geometrical formulations hold for quantities which are more generic than on-shell scattering amplitudes, for instance form factors involving off-shell gauge invariant operators ${\mathcal{O}}(x)$, defined as the matrix element of an operator taken between the vacuum and an on-shell state of $n$ particles, $$\begin{aligned}
\begin{split}
\mathcal{F}_{{\mathcal{O}}}(1,\dots,n;q)\equiv &\int\! {\ensuremath{\mathrm{d}\xspace}}^4x\, e^{-iq x}{\langle}1\ldots n|\mathcal{O}(x)|0\rangle\ .
\end{split}
\end{aligned}$$ In addition to the on-shell momenta $p_i,\,i=1,\dots,n$ satisfying $p_i^2=0$, a form factor depends on the momentum $q$ conjugate to the position of the operator. This momentum, unlike those of the on-shell particles, is in general not light-like.
The operator with the best-studied form factors [@Brandhuber:2010ad; @Brandhuber:2011tv; @Bork:2012tt; @Bork:2014eqa; @Bork:2016hst; @Bork:2017qyh] is the chiral part of the stress-tensor multiplet ${\mathcal{T}}(x,\theta^+)$, which is a protected supersymmetric operator. It can be expanded in harmonic superspace[^2] Graßmann coordinates $\theta_\alpha^{+ a}$, with $\alpha, a = 1,2$, and contains the operator ${\text{Tr}}(\phi^2)$, with $\phi$ one of the scalars of the theory, as the top component and the on-shell Lagrangian of [${\mathcal{N}}\!\!=\!4$ SYM]{} as the coefficient of the highest power in $\theta^+$. Form factors of operators belonging to the same supersymmetric multiplet can be combined into a supersymmetric form factor as $$\begin{aligned}
{\mathcal{F}}_{{\mathcal{T}}}(1,\dots,n;q,\gamma^-) = \int\! {\ensuremath{\mathrm{d}\xspace}}^4 x \,{\ensuremath{\mathrm{d}\xspace}}^4 \theta^+ e^{-iq x -i\theta^{+a}_\alpha \gamma_a^{-\alpha}} \langle 1,\dots,n|{\mathcal{T}}(x,\theta^+)|0\rangle{\; , }\end{aligned}$$ where $\gamma^{- \alpha}_{a}$ is the variable conjugate to the superspace coordinate $\theta_\alpha^{+a}$. Like scattering amplitudes, this expression admits an expansion in MHV degrees, $ k $. In the following we denote by ${\mathsf{F}}_{n,k}$ the colour ordered N$^{k-2}$MHV form factor of ${\mathcal{T}}$ with $n$ on-shell states, and by ${\mathsf{A}}_{n,k}$ its amplitude counterpart.
In the Graßmannian formulation of [@ArkaniHamed:2009dn], ${\mathsf{A}}_{n,k}$ is represented as a contour integral over the Graßmannian $G(k,n)$, which is the space of $k$-dimensional planes in $\mathbb{C}^n$. In [@Frassek:2015rka] on-shell diagrams and an associated Graßmannian formula were presented for tree-level form factors of the operator ${\mathcal{T}}$, using a parametrization of the operator momentum as a sum of two on-shell momenta. From the Graßmannian integral, N$^{k-2}$MHV form factors of ${\mathcal{T}}$ can be obtained from a combination of residues in $G(k,n+2)$ [@Frassek:2015rka]. Compared to scattering amplitudes, some difficulties arise as a result of the operator being a colour singlet and not participating in the colour ordering of the external particles: for instance, there exist $n$ cyclically related top forms on the Graßmannian, and no single form contains all residues which build up the tree-level form factor. As a result, residues from different top forms must be combined in a way that was, until now, only known on a case-by-case basis.
In this note we address this matter, providing a general contour prescription for the Graßmannian formulation of [@Frassek:2015rka]. To this end we utilise the correspondence between cells of the Graßmannian and on-shell diagrams. We find a recursive solution to the Britto-Cachazo-Feng-Witten (BCFW) recursion relation [@Britto:2005fq], ensuring that the residues reproduce all factorization poles. This yields the analogue of the tree-level contour defined in [@ArkaniHamed:2009dn] for scattering amplitudes. While we focus on NMHV form factors, this technique can be applied for general MHV degree. A corollary of our finding is that no linear combination of top forms can be taken to reproduce the form factor when endowed with a joint contour prescription for all forms. Rather, individual residues from different top forms must be picked individually, but nevertheless systematically.[^3] We also discuss ambiguities concerning the choice of top form for each residue.
The Graßmannian formulation of scattering amplitudes was shown to be tightly related to the twistor string theory formalism [@Spradlin:2009qr; @Dolan:2009wf]. The connection between these two approaches was realised by expressing the Roiban-Spradlin-Volovich (RSV) formulas for tree-level amplitudes in $\mathcal{N}=4$ SYM [@Roiban:2004yf] in terms of the link variables introduced in [@ArkaniHamed:2009si]. The kinematic constraints in the RSV picture do not leave any free integration variables, and recasting the formulas as integrals over the Graßmannian [@Nandan:2009cc] via the link variables played an important role in the formulation of the general tree-level contour for the amplitude Graßmannian integral [@ArkaniHamed:2009dg; @Bourjaily:2010kw]. A generalization of the RSV prescription for form factors was developed in [@Brandhuber:2016xue] and [@He:2016jdg]. In particular, [@Brandhuber:2016xue] put forward a link representation. In this work, we perform the last step of lifting the link representation to the Graßmannian. This provides a different Graßmannian representation compared to the integral obtained from on-shell diagrams, with a fixed contour of integration. We find a recursive definition of the formula and show that it can be interpreted as the “inverse soft” addition of particles, identical to the structure of amplitudes [@ArkaniHamed:2009dg].
For scattering amplitudes, the fact that the two Graßmannian formulations—based on on-shell diagrams and on the connected prescription—lead to the same result can be shown through successive applications of the global residue theorem (GRT) [@Nandan:2009cc]. In addition, it is possible to define a family of Graßmannian formulas parametrised by a smooth parameter $t$ such that it returns the connected prescription for $t=1$ and the standard $G(k,n)$ formula endowed with the tree level contour for $t=0$ or $t=\infty$. In this work we investigate similar relations between the two Graßmannian formulas for NMHV form factors. In particular, we show that a smooth deformation between them is not available, although the application of successive GRTs can uncover the BCFW poles from the connected formula for four and five points. Starting from six points, the relation between the two representations becomes very subtle; we show that the Graßmannian integral from the connected prescription does not possess all BCFW factorization poles in a way accessible via the GRT.
This note is organised as follows. In Section \[sec:contour\] we study the BCFW recursion relation in terms of on-shell diagrams and derive a compact formula for the form-factor Graßmannian contour in the NMHV case. In Section \[sec:connected\] we lift the link representation of [@Brandhuber:2016xue] to a second Graßmannian formula for NMHV form factors. We study different representations of this formula and show that it can be written in a way which closely mirrors the corresponding representation of amplitudes. Section \[sec:relation\] is devoted to relating the two Graßmannian formulas for NMHV form factors by means of the GRT.
The NMHV contour for the form factor Graßmannian {#sec:contour}
================================================
The Graßmannian formulation for N$^{k-2}$MHV form factors was introduced in [@Frassek:2015rka], where a form factor top form in $G(k,n+2)$ was first written down. This formulation lacked a contour prescription, and the combination of residues that compose a given form factor—originating in general from different top forms related by cyclic symmetry—was worked out case by case. In this section, we present a closed formula for the tree-level contour for NMHV form factors. This provides a systematic way of computing form factors of the chiral part of the stress-tensor operator for any $n$.
Brief review of the Graßmannian integral for NMHV form factors {#sec:Grassmannian}
--------------------------------------------------------------
In [@Frassek:2015rka] it was shown that form factors of the chiral stress-tensor multiplet in [${\mathcal{N}}\!\!=\!4$ SYM]{} can be represented via a generalization of on-shell diagrams [@ArkaniHamed:2012nw]. These diagrams use the minimal, i.e. two-point, form factor as a vertex, in addition to the two three-point amplitudes, $$\label{eq:vertices}
\begin{aligned}
\begin{aligned}
\begin{tikzpicture}[scale=0.58]
\draw (1,1) -- (1,1+0.65);
\draw (1,1) -- (1-0.5,1-0.5);
\draw (1,1) -- (1+0.5,1-0.5);
\node [] at (1,1+0.65+\labelvdist) {\footnotesize$1$};
\node [] at (1-0.5-0.2,1-0.5-0.2) {\footnotesize$3$};
\node [] at (1+0.5+0.2,1-0.5-0.2) {\footnotesize$2$};
\node[db] at (1,1) {};
\end{tikzpicture}
\end{aligned}
&=
{\mathsf{A}}_{3,2}(1,2,3)=\frac{
\delta^4({{\smash{\lambda}}}_1{{\smash{\tilde{\lambda}}}}_1+{{\smash{\lambda}}}_2{{\smash{\tilde{\lambda}}}}_2+{{\smash{\lambda}}}_3{{\smash{\tilde{\lambda}}}}_3)
\delta^8({{\smash{\lambda}}}_1{{\smash{\tilde{\eta}}}}_1+{{\smash{\lambda}}}_2{{\smash{\tilde{\eta}}}}_2+{{\smash{\lambda}}}_3{{\smash{\tilde{\eta}}}}_3)
}{{\langle 12 \rangle}{\langle 23 \rangle}{\langle 31 \rangle}} {\; , }\\
\begin{aligned}
\begin{tikzpicture}[scale=0.58]
\draw (1,1) -- (1,1+0.65);
\draw (1,1) -- (1-0.5,1-0.5);
\draw (1,1) -- (1+0.5,1-0.5);
\node [] at (1,1+0.65+\labelvdist) {\footnotesize$1$};
\node [] at (1-0.5-0.2,1-0.5-0.2) {\footnotesize$3$};
\node [] at (1+0.5+0.2,1-0.5-0.2) {\footnotesize$2$};
\node[dw] at (1,1) {};
\end{tikzpicture}
\end{aligned}
&=
{\mathsf{A}}_{3,1}(1,2,3)
=\frac{
\delta^4({{\smash{\lambda}}}_1{{\smash{\tilde{\lambda}}}}_1+{{\smash{\lambda}}}_2{{\smash{\tilde{\lambda}}}}_2+{{\smash{\lambda}}}_3{{\smash{\tilde{\lambda}}}}_3)
\delta^4({\left[ 12 \right]}{{\smash{\tilde{\eta}}}}_3+{\left[ 23 \right]}{{\smash{\tilde{\eta}}}}_1+{\left[ 31 \right]}{{\smash{\tilde{\eta}}}}_2)
}{{\left[ 12 \right]}{\left[ 23 \right]}{\left[ 31 \right]}} {\; , }\\
\begin{aligned}
\begin{tikzpicture}[scale=0.58]
{
\draw[thick,double] (1-0.5,-0) -- (1-0.5,-0.5); \draw (1-0.5,-0.5) -- (1-1,-\vacuumheight);
\draw (1-0.5,-0.5) -- (1,-\vacuumheight);}
\node [] at (-\labelddist,-\vacuumheight-\labelddist) {\footnotesize$2$};
\node [] at (1+\labelddist,-\vacuumheight-\labelddist) {\footnotesize$1$};
\end{tikzpicture}
\end{aligned}
&=
{\mathsf{F}}_{2,2}(1,2;q,\gamma^-)=\frac{
\delta^4({{\smash{\lambda}}}_1{{\smash{\tilde{\lambda}}}}_1 + {{\smash{\lambda}}}_2{{\smash{\tilde{\lambda}}}}_2 - q)
\delta^4({{\smash{\lambda}}}_1{{\smash{\tilde{\eta}}}}_1^+ + {{\smash{\lambda}}}_2{{\smash{\tilde{\eta}}}}_2^+ )
\delta^4({{\smash{\lambda}}}_1{{\smash{\tilde{\eta}}}}_1^- + {{\smash{\lambda}}}_2{{\smash{\tilde{\eta}}}}_2^- - \gamma^-)
}{{\langle 12 \rangle}{\langle 21 \rangle}} {\; . }\end{aligned}$$ Generic form-factor on-shell diagrams are obtained by gluing the fundamental vertices above, i.e. by performing an integration over the one-particle on-shell phase space for each internal edge. The parametrization of the off-shell momentum $q$ and supermomentum $\gamma^-$ is done via the addition of two auxiliary on-shell particles. We label these particles by $x$ and $y$ in order to distinguish them from the $n$ on-shell states of the form factor. Concretely, let ${{\smash{\lambda}}}_x$ and ${{\smash{\lambda}}}_y$ be arbitrary (non-collinear) reference spinors, and define $$\begin{aligned}
{{\smash{\tilde{\lambda}}}}_{x}&=-\frac{\bra{y}q}{{\langle yx \rangle}}{\; , }\quad &
{{\smash{\tilde{\eta}}}}_{x}^-&=-\frac{\bra{y}\gamma^-}{{\langle yx \rangle}}{\; , }\quad &
{{\smash{\tilde{\eta}}}}_x^+&=0\ ,\\
{{\smash{\tilde{\lambda}}}}_{y}&=-\frac{\bra{x}q}{{\langle xy \rangle}}{\; , }&
{{\smash{\tilde{\eta}}}}_{y}^-&=-\frac{\bra{x}\gamma^-}{{\langle xy \rangle}}{\; , }&
{{\smash{\tilde{\eta}}}}_y^+&=0\ ,
\end{aligned}
\label{eq:kinematics}$$ such that ${{\smash{\lambda}}}_x{{\smash{\tilde{\lambda}}}}_x+{{\smash{\lambda}}}_y{{\smash{\tilde{\lambda}}}}_y=p_x+p_y=-q$, ${{\smash{\lambda}}}_x{{\smash{\tilde{\eta}}}}^-_x+{{\smash{\lambda}}}_y{{\smash{\tilde{\eta}}}}^-_y=-\gamma^-$ and ${{\smash{\lambda}}}_x{{\smash{\tilde{\eta}}}}^+_x+{{\smash{\lambda}}}_y{{\smash{\tilde{\eta}}}}^+_y=0$.
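The decomposition can be checked numerically. The sketch below fixes conventions of its own ($\langle ab\rangle = a_1 b_2 - a_2 b_1$, momenta as $2\times 2$ matrices $p_{\alpha\dot\alpha}=\lambda_\alpha\tilde\lambda_{\dot\alpha}$, so that $p^2=\det p$); the Schouten identity then guarantees $p_x+p_y=-q$ for arbitrary non-collinear reference spinors:

```python
import numpy as np

rng = np.random.default_rng(0)

def ang(a, b):              # <ab> = a_1 b_2 - a_2 b_1
    return a[0]*b[1] - a[1]*b[0]

def bra(a, q):              # the row vector <a|q
    return a[0]*q[1, :] - a[1]*q[0, :]

# a generic (off-shell) momentum q and two reference spinors
q  = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
lx = rng.normal(size=2) + 1j*rng.normal(size=2)
ly = rng.normal(size=2) + 1j*rng.normal(size=2)

ltx = -bra(ly, q)/ang(ly, lx)        # tilde-lambda_x = -<y|q / <yx>
lty = -bra(lx, q)/ang(lx, ly)        # tilde-lambda_y = -<x|q / <xy>
px, py = np.outer(lx, ltx), np.outer(ly, lty)

assert np.allclose(px + py, -q)       # p_x + p_y = -q
assert abs(np.linalg.det(px)) < 1e-9  # p_x^2 = 0
assert abs(np.linalg.det(py)) < 1e-9  # p_y^2 = 0
assert abs(np.linalg.det(q)) > 1e-8   # q itself is not light-like
```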
Using these variables, the Graßmannian formula for NMHV form factors is given by [@Frassek:2015rka] $${\mathcal{G}}_{n,3}^{[s]} =
{\langle xy \rangle}^2
\int\frac{{\ensuremath{\mathrm{d}\xspace}}^{3\times(n+2)}C}{\text{Vol}[GL(3)]}\;
\frac{
\delta^{2\times 3}(C\cdot{{\smash{\tilde{\lambda}}}}) \, \delta^{4\times 3}(C\cdot{{\smash{\tilde{\eta}}}}) \, \delta^{2\times(n-1)}(C^\perp\cdot{{\smash{\lambda}}})
}{ \big[
{(1)}\cdots{(n-2)} \;\; {(\underline{{1}})}{(\underline{{n}})} \;\; (xy\,(n{\! - \!}1{\;}n)\cap(12))
\big]}_{\smash{\sigma_s}
}\ .
\label{eq:gi}$$ The notation used here is as follows. $C$ is a $3\times(n+2)$ matrix parametrizing $G(3,n+2)$, $$\label{eq:vectorC}
\begin{pmatrix}
C_{1}, & C_2, & \cdots, & C_{n-1}, & C_{n}, & C_x, & C_y \end{pmatrix} \, ,$$ where each column $C_i$ is a $k$-dimensional vector, namely $C^{\rm T}_i\equiv (C_{1i} \, , C_{2i} \, , \cdots, C_{ki})$ (for NMHV $k=3$). We abbreviate minors of $C$ which are consecutive in the $n$ labels corresponding to the on-shell particles with a single label, as in [@ArkaniHamed:2009dg; @Bourjaily:2010kw], and use a similar notation for minors involving the columns with labels $x$ and $y$, $$\label{eq:poles}
{(i)} \equiv (i{\;}i{\! + \!}1{\;}i{\! + \!}2)
{\; , }\qquad
{(\underline{{i}})} \equiv (i {\;}x{\;}y)
{\; . }$$ Furthermore, we employ the standard notation $(ij)\cap(kl) \equiv C_i (jkl) - C_j (ikl)$ for the intersection of the lines $(ij)$ and $(kl)$.[^4] Finally, $\sigma_s$ is a cyclic shift of the on-shell labels by $s$ appearing in the integrand, $$\sigma_s = \begin{pmatrix}
1 & 2 & \cdots & n-1 & n & x & y \\
\downarrow &
\downarrow &
&
\downarrow &
\downarrow &
\downarrow &
\downarrow \\
1+s & 2+s & & n-1+s & n+s & x & y
\end{pmatrix}
\quad \text{with } i+n \simeq i{\; , }$$ reflecting the fact that the insertion of the colourless operator in the on-shell diagram artificially breaks the cyclic invariance in the on-shell labels. This leads to $n$ inequivalent top forms labelled by the shift $s$. The Graßmannian integral is the form factor analogue of the NMHV amplitude formula [@ArkaniHamed:2009dn] $$\label{eq:ampG}
{\mathcal{L}}^{\rm amp}_{n,3} = \int_{\Gamma^{\rm BCFW}_{n,3}} \frac{{\ensuremath{\mathrm{d}\xspace}}^{3\times n}C}{\text{Vol}[GL(3)]}\;
\frac{
\delta^{2\times 3}(C\cdot{{\smash{\tilde{\lambda}}}}) \, \delta^{4\times 3}(C\cdot{{\smash{\tilde{\eta}}}}) \, \delta^{2\times(n-3)}(C^\perp\cdot{{\smash{\lambda}}})
}{
{(1)} {(2)} \cdots{(n)}
}{\; , }$$ which is equipped with the BCFW contour $\Gamma^{\rm BCFW}_{n,3}$, whose general expression is known [@ArkaniHamed:2009dg]. For an $n$-point amplitude, there are $(n-5)$ free integration variables $ \tau_1,\ldots, \tau_{n-5} $. We employ the following notation for the residues: $$\{f_1,f_2,\dots,f_{n-5}\}\quad \leftrightarrow\quad \text{Residue of Gra\ss mannian integral around poles $|\tau_i - f_i| = \epsilon_i \rightarrow 0$\ . }
\label{residuenotation}$$ The tree-level contour can then be specified by $(n-5)$ vanishing minors $\{(i_1) \, ,(i_2) \, ,\cdots, (i_{n-5}) \}$. Using this notation, the NMHV BCFW contour takes an “odd-even” pattern, explicitly given by $$\label{eq:ampContour}
\Gamma^{\rm BCFW}_{n,3} = \mathscr{O} \star \mathscr{E} \star \mathscr{O} \star \mathscr{E} \star
\cdots \, ,$$ where $\mathscr{O}$ is the set of odd numbered particles and $\mathscr{E}$ is the set of even numbered particles, $$\mathscr{O} = \sum_{i \in {\rm Odd}} \{(i) \} \, , \quad
\mathscr{E} = \sum_{i \in {\rm Even}} \{(i) \} \,,$$ and the product $\star$ is defined as $$\{ (i)\} \star \{ (j) \} = \left\{ \begin{array}{rcl}
& \{ (i) , (j) \} & \mbox{for}
\quad i<j
\\
\\
& 0 & \mbox{for} \quad i>j
\end{array}\right. {\; . }$$ The aim of this section is to present a similar closed formula for the tree-level contour for NMHV form factors. Unlike amplitudes, in principle top forms with different values of shift parameter $s$ must be combined together in order to reproduce all factorization poles of the form factor.
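Expanding the $\star$ products, the “odd-even” contour is simply the set of all increasing sequences of $n-5$ minor labels whose parities alternate, starting with odd. A short sketch of this reading (the comparison with the number $\binom{n-3}{2}$ of NMHV BCFW terms is our sanity check):

```python
from itertools import product
from math import comb

def nmhv_contour(n):
    """Tree contour for the n-point NMHV amplitude: all increasing
    sequences of n-5 minor labels with parities O, E, O, E, ..."""
    pools = [range(1, n + 1, 2) if k % 2 == 0 else range(2, n + 1, 2)
             for k in range(n - 5)]
    return [s for s in product(*pools)
            if all(a < b for a, b in zip(s, s[1:]))]

assert nmhv_contour(6) == [(1,), (3,), (5,)]   # the three 6-point residues
for n in (6, 7, 8, 9):
    assert len(nmhv_contour(n)) == comb(n - 3, 2)
```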
Closed form of the contour {#sec:recursion}
--------------------------
In this section, we derive a closed formula for the NMHV tree-level contour for the form factor formula from the BCFW recursion relation. Due to the fact that multiple top forms have to be considered, it turns out that the contour cannot be thought of as a single domain of integration, but rather as a set of contours for the individual top forms. We express it as a list of poles which are in one-to-one correspondence with the BCFW terms, the residues of which add up to the tree-level form factor. These residues may come from distinct top forms (different values of the shift $s$), but we argue that every choice of $s$ produces the same residue, provided the corresponding form has a non-vanishing residue on the respective configuration.
After solving the kinematical constraints, the NMHV form factor with $n$ legs is a contour integral in $n-3$ variables $\tau_1,\dots,\tau_{n-3}$, expressed as $ \{f_1,f_2,\dots,f_{n-3}\}$ following the notation . Using , the tree-level NMHV $n$-point form factor is now given by the combination of residues $$\label{eq:FFcontour}
{\mathcal{C}^{\mathrm{BCFW}}}_{n,3} = \sum_{m=0}^{n-3}
\left[
R_m^{00}
+ \sum_{i_1=1}^m R^{i_10}_m
+ \sum_{i_2=1}^{n-m-3} R^{0i_2}_m
\right]\ ,$$ where each residue above reads $$R^{i_1 i_2}_m \coloneqq
\Bigg\{
\underbrace{
\underbrace{
{(\underline{{1}})},\ldots,{(\underline{{i_1}})}
}_{i_1}
,
{(i_1+1)},\ldots,{(m)}
}_m
,
\underbrace{
\underbrace{
{(\underline{{m+3}})},\ldots,{(\underline{{m+i_2+2}})}
}_{i_2}
,
{(m+i_2+3)},\ldots,{(n-1)}
}_{n-m-3}
\Bigg\}{\; . }$$ Note that makes no mention of the shift $s$ that labels the top form in . The reason is that for each term, one can take the residue from any top form (using any shift), as long as this form has a pole at the desired configuration. As we show shortly, each term in corresponds to a particular BCFW factorization, and the degeneracy in $s$ follows from the cyclic symmetry of a sub-form factor entering the recursion relation. To be explicit, we can summarise the possible choices for $s$: $$\begin{tabular}{lccccc}
\toprule
terms:\qquad\quad &
$R^{i_1 0}_m$ &
$R^{0 i_2}_m$ &
$R^{0 0}_0$ &
$R^{0 0}_{n-3}$&
$R^{0 0}_{m}$
\\
\midrule
shifts $s$: \qquad\quad&
$0,1,\ldots,i_1$ \quad&
$m+2,\ldots,m+2+i_2$ \quad&
$1,2$ \quad&
$0,n-1$ \quad&
$m+2$\\
\bottomrule
\end{tabular}$$ The closed formula for the contour follows from the BCFW recursion relation [@Britto:2005fq], which can be depicted graphically for NMHV form factors as [@Brandhuber:2010ad; @Brandhuber:2011tv] $$\label{eq:bcfwnmhv}
{\mathsf{F}}_{n,3}=
\sum_{n_l = 2}^{n-2}
\!
\begin{aligned}
\begin{tikzpicture}[scale=0.7]
\draw (0,-0) -- (2,-0); \draw (0,-1.5) -- (2,-1.5); \draw (0,-0) -- (0,-2.25); \draw (2,-0) -- (2,-2.25); \draw (0,-0) -- (-1.2,-0);
\draw (0,-0) -- (-0,+1.2);
\draw (2,-0) -- (+3.2,-0);
\draw (2,-0) -- (+2,+1.2);
\node[] at (-0.7,+0.7) {\rotatebox{45}{$\cdots$}};
\node[] at (+2.7,+0.7) {\rotatebox{-45}{$\cdots$}};
\node[dw] at (0,-1.5) {};
\node[db] at (2,-1.5) {};
\draw [thick,double] (0,0) -- (-1,-1);
\node[circle, black, fill=grayn, minimum width=5*\onshellradius, draw, inner sep=0pt] at (0,0) {$\scriptstyle {\mathsf{F}}_{n_l,2}$};
\node[circle, black, fill=grayn, minimum width=5*\onshellradius, draw, inner sep=0pt] at (2,0) {$\scriptstyle {\mathsf{A}}_{n_r,2}$};
\node[] at (0,-2.25-\labelvdist) {\footnotesize$1$};
\node[] at (2,-2.25-\labelvdist) {\footnotesize$n$};
\end{tikzpicture}
\end{aligned}
+\;
\sum_{n_l = 3}^{n}
\!\!\!
\begin{aligned}
\begin{tikzpicture}[scale=0.7]
\draw (0,-0) -- (2,-0); \draw (0,-1.5) -- (2,-1.5); \draw (0,-0) -- (0,-2.25); \draw (2,-0) -- (2,-2.25); \draw (0,-0) -- (-1.2,-0);
\draw (0,-0) -- (-0,+1.2);
\draw (2,-0) -- (+3.2,-0);
\draw (2,-0) -- (+2,+1.2);
\node[] at (-0.7,+0.7) {\rotatebox{45}{$\cdots$}};
\node[] at (+2.7,+0.7) {\rotatebox{-45}{$\cdots$}};
\node[dw] at (0,-1.5) {};
\node[db] at (2,-1.5) {};
\draw [thick,double] (2,0) -- (+3,-1);
\node[circle, black, fill=grayn, minimum width=5*\onshellradius, draw, inner sep=0pt] at (0,0) {$\scriptstyle {\mathsf{A}}_{n_l,2}$};
\node[circle, black, fill=grayn, minimum width=5*\onshellradius, draw, inner sep=0pt] at (2,0) {$\scriptstyle {\mathsf{F}}_{n_r,2}$};
\node[] at (0,-2.25-\labelvdist) {\footnotesize$1$};
\node[] at (2,-2.25-\labelvdist) {\footnotesize$n$};
\end{tikzpicture}
\end{aligned}
\;
+
\begin{aligned}
\begin{tikzpicture}[scale=0.7]
\draw (0,-0) -- (2,-0); \draw (0,-1.5) -- (2,-1.5); \draw (0,-0) -- (0,-2.25); \draw (2,-0) -- (2,-2.25); \draw (0,-0) -- (-1.2,-0);
\draw (0,-0) -- (-0,+1.2);
\draw (2,-0) -- (+2.5,+0.5);
\node[] at (-0.7,+0.7) {\rotatebox{45}{$\cdots$}};
\node[dw] at (0,-1.5) {};
\node[db] at (2,-1.5) {};
\draw [thick,double] (0,0) -- (-1,-1);
\node[circle, black, fill=grayn, minimum width=5.6*\onshellradius, draw, inner sep=0pt] at (0,0) {$\scriptstyle {\mathsf{F}}_{n-1,3}$};
\node[dw] at (2,0) {};
\node[] at (0,-2.25-\labelvdist) {\footnotesize$1$};
\node[] at (2,-2.25-\labelvdist) {\footnotesize$n$};
\end{tikzpicture}
\end{aligned} {\; , }$$ where $n_r=n-n_l+2$. Without loss of generality we choose to use the common BCFW shift at legs $n$ and $1$.
Recall that bipartite on-shell diagrams are associated with a decorated permutation $\sigma(i)\geq i$, which can be read off the diagram using left-right paths [@ArkaniHamed:2012nw]. The permutation $i\rightarrow \sigma(i)$ is obtained starting from the external leg labelled $i$ and then turning right/left when encountering a black/white vertex (three-point MHV/$\overline{\rm MHV}$ amplitude), ending finally on the external leg $\sigma(i)$. For the purpose of understanding the contours of the Graßmannian integral, we use the fact that this permutation encodes linear relations among the columns $C_i$ in the Graßmannian $G(k,n+2)$, when viewed as $k$-dimensional vectors. These linear relations are sufficient to determine the configuration of points in the Graßmannian $G(k,n+2)$ associated with any on-shell diagram, thus fixing the contour of integration for the associated Graßmannian integral.
In particular, $\sigma(i)= i+1$ leads to a linear relation between vectors $C_i$ and $C_{i+1}$, while $\sigma(i)= i+2$ gives a linear relation among $C_i, C_{i+1}$ and $C_{i+2}$, rendering these points collinear in projective space. For the NMHV amplitudes and form factors considered in this section, the vectors $C_i$ are three-dimensional. In this case, writing these linear relations in terms of minors, we obtain the following dictionary from permutations to vanishing minors, for any label $ a $, $$\begin{aligned}
\begin{split}
& \sigma(i)= i+1
\quad\implies\quad
(a {\;}i{\;}i{\! + \!}1 )=(i{\;}i{\! + \!}1 {\;}a)=0\\
& \sigma(i)= i+2
\quad\implies\quad
(i{\;}i{\! + \!}1 {\;}i{\! + \!}2)=0
{\; . }\label{eq:cond}
\end{split}\end{aligned}$$ In order to apply this strategy to form factors, we first map the form factor diagram to an amplitude diagram by replacing the minimal form factor with a four-point amplitude, as in [@Frassek:2015rka], $$\label{eq:ffamp}
\begin{aligned}
\begin{tikzpicture}[scale=0.8]
\draw[thick,double] (1.5,-0+1.5) -- (1.5,-0.5+1.5); \draw (1.5,-0.5+1.5) -- (2,-\vacuumheight+1.4);
\draw (1.5,-0.5+1.5) -- (1,-\vacuumheight+1.4);
\end{tikzpicture}
\end{aligned}
\quad
\longleftrightarrow
\quad
\begin{aligned}
\begin{tikzpicture}[scale=0.8]
\draw (1,1) -- (1,2) -- (2,2) -- (2,1) -- (1,1);
\draw (0.5,0.5) -- (1,1);
\draw (2.5,0.5) -- (2,1);
\draw (2.5,2.5) -- (2,2);
\draw (0.5,2.5) -- (1,2);
\node[dw] at (2,1) {};
\node[dw] at (1,2) {};
\node[db] at (1,1) {};
\node[db] at (2,2) {};
\end{tikzpicture}
\end{aligned}
{\; . }$$ This replacement works for reading off the configuration in the Graßmannian because any constraint which does not involve the two columns corresponding to the operator insertion is also present in the purely on-shell part of the form factor diagram, with the minimal form factor removed. This latter diagram, however, has two degrees of freedom fewer, which are restored by the auxiliary four-point amplitude.
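The dictionary can be illustrated with a toy linear-algebra example: a random $3\times 6$ matrix stands in for a point of $G(3,6)$, and the two types of constraints are imposed by hand (an illustration of ours, not tied to a particular on-shell diagram):

```python
import numpy as np

rng = np.random.default_rng(1)

def minor(C, i, j, k):        # the minor (i j k), labels 1-indexed
    return np.linalg.det(C[:, [i - 1, j - 1, k - 1]])

C = rng.normal(size=(3, 6))   # toy 3 x 6 matrix of column vectors C_i

# sigma(1) = 1 + 2: C_1, C_2, C_3 collinear in CP^2  =>  (123) = 0
C[:, 2] = 0.7*C[:, 0] - 1.3*C[:, 1]
assert abs(minor(C, 1, 2, 3)) < 1e-9
assert abs(minor(C, 2, 3, 4)) > 1e-6      # neighbouring minors stay generic

# sigma(4) = 4 + 1: C_5 proportional to C_4  =>  (a 4 5) = 0 for any a
C[:, 4] = 2.0*C[:, 3]
assert abs(minor(C, 1, 4, 5)) < 1e-9
assert abs(minor(C, 4, 5, 6)) < 1e-9
```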
[b[0.33]{}b[0.33]{}b[0.33]{}]{} (a) & (b) & (c)\
![Structure of left-right paths for the three types of terms contributing to the NMHV form factor in the BCFW recursion relations.[]{data-label="fig:terms"}](contourFA.pdf "fig:")& ![Structure of left-right paths for the three types of terms contributing to the NMHV form factor in the BCFW recursion relations.[]{data-label="fig:terms"}](contourAF.pdf "fig:")& ![Structure of left-right paths for the three types of terms contributing to the NMHV form factor in the BCFW recursion relations.[]{data-label="fig:terms"}](contourFrec.pdf "fig:")
\
We now consider the three types of terms in the BCFW recursion relation , depicted in Figure \[fig:terms\], in turn.
#### MHV form factor $\times$ MHV amplitude
We first work out the configurations for BCFW terms with an MHV form factor on the left side of the factorization and an MHV amplitude on the right. These on-shell diagrams have the form shown in Figure \[fig:terms\](a). Note that the additional four-point amplitude with labels $x$ and $y$ could have been added between any two of the external labels of the sub-form factor on the left of the diagram because the operator is a colour singlet, and the sub-form factor is therefore cyclically invariant. After the transformation ${\mathsf{F}}_{n,2}\rightarrow {\mathsf{A}}_{n+2,2}$, the two amplitudes in the diagram are MHV and thus the permutations associated with the sub-diagrams are given by $\sigma_{l/r}(i)=i+2$. Each sub-diagram therefore imposes a geometrical configuration for which the $C_i$ related to its external states all lie on the same line. More concretely, the sub-form factor ensures that $C_1$ up to $C_{n_l-1}$ all lie on the line in $\mathbb{CP}^2$ defined by $x$ and $y$, which we denote by $(xy)$. This results in the vanishing of the minors ${(\underline{{1}})}$ up to ${(\underline{{n_l-1}})}$: $$\begin{aligned}
\begin{tikzpicture}
\draw (0,0) -- (4,0);
\node[dot] at (0.5,0) {};
\node[] at (0.5,0.34) {$x$};
\node[dot] at (1.0,0) {};
\node[] at (1.0,0.3) {$y$};
\node[dot] at (1.5,0) {};
\node[] at (1.5,0.345) {$1$};
\node[] at (2.2,0.3) {$\cdots$};
\node[dot] at (3.3,0) {};
\node[] at (3.3,0.32) {$n_l-1$};
\end{tikzpicture}
\end{aligned}
\qquad \longrightarrow \qquad
{(\underline{{1}})},\ldots,{(\underline{{n_l-1}})}=0{\; . }$$ From the MHV amplitude, we can read off the collinearity of $C_{n_l}$ through $C_{n-1}$ which implies that the following minors vanish: $$\begin{aligned}
\begin{tikzpicture}
\draw (0,0) -- (5,0);
\node[dot] at (0.5,0) {};
\node[] at (0.5,0.3) {$n_l$};
\node[dot] at (1.5,0) {};
\node[] at (1.5,0.33) {$n_l+1$};
\node[] at (2.5,0.33) {$\cdots$};
\node[dot] at (3.3,0) {};
\node[] at (3.3,0.39) {$n-2$};
\node[dot] at (4.3,0) {};
\node[] at (4.3,0.395) {$n-1$};
\end{tikzpicture}
\end{aligned}
\qquad \longrightarrow \qquad
{(n_l)},\ldots,{(n-3)} = 0{\; . }$$ This gives us $n-3$ residues of the form $$\{{(\underline{{1}})},\ldots,{(\underline{{n_l-1}})},
{(n_l)},\ldots,{(n-3)}\}
\; ,\quad
\text{for $n_l=2,\ldots,n-2$ }
{\; , }$$ which are all the terms with $m=n-3$ in , namely $R_{n-3}^{00}+\sum\nolimits_{i_1=1}^{n-3} R^{i_10}_{n-3}$.\
As noted above, in order to fully specify a “contour”, we need to prescribe which of the cyclically related top forms to use. Since the MHV sub-form factor is cyclically invariant in its on-shell legs, for each term we can take the residue from any top form with a shift of $$s=0,1,\ldots,n_l-1
{\; . }$$ Note that a shift of $s=0$ appears to be incompatible with our choice of BCFW shift, as the BCFW bridge does not allow the minimal form factor to be between legs $n$ and $1$. The validity of this shift nevertheless follows from the consistency of all possible adjacent BCFW shifts. Moreover, we note that the top forms with these shifts are exactly those which contain a pole of the given form.
#### MHV amplitude $\times$ MHV form factor
The second type of term has the schematic form given in Figure \[fig:terms\](b), and the argument is similar to that for the terms just discussed. In particular, the four-point amplitude with $x$ and $y$ could have been attached in other positions for the sub-form factor on the right-hand side. In this case, the sub-amplitude and sub-form factor enforce $$\begin{aligned}
&\begin{aligned}
\begin{tikzpicture}
\draw (0,0) -- (3.6,0);
\node[dot] at (0.5,0) {};
\node[] at (0.5,0.3) {$1$};
\node[dot] at (1.0,0) {};
\node[] at (1.0,0.3) {$2$};
\node[] at (1.8,0.3) {$\cdots$};
\node[dot] at (2.8,0) {};
\node[] at (2.8,0.3) {$n_l-1$};
\end{tikzpicture}
\end{aligned}
\qquad \longrightarrow \qquad
{(1)},\ldots,{(n_l-3)} = 0
\qquad\text{(sub amplitude)}
{\; , }\\
&\begin{aligned}
\begin{tikzpicture}
\draw (0,0) -- (4,0);
\node[dot] at (0.5,0) {};
\node[] at (0.5,0.34) {$x$};
\node[dot] at (1.0,0) {};
\node[] at (1.0,0.3) {$y$};
\node[dot] at (1.5,0) {};
\node[] at (1.5,0.325) {$n_l$};
\node[] at (2.2,0.3) {$\cdots$};
\node[dot] at (3.3,0) {};
\node[] at (3.3,0.34) {$n-1$};
\end{tikzpicture}
\end{aligned}
\qquad \longrightarrow \qquad
{(\underline{{n_l}})},\ldots,{(\underline{{n-1}})} = 0
\;\qquad\text{(sub form factor)}
{\; . }\end{aligned}$$ This gives $n-2$ terms with poles $$\label{eq:MHVAxMHVFFterms}
\{{(1)},\ldots,{(n_l-3)},
{(\underline{{n_l}})},\ldots,{(\underline{{n-1}})}\}
\; ,\quad
\text{for $n_l=3,\ldots,n$ }
{\; , }$$ which are the terms of the form $\sum\nolimits_{m=0}^{n-3} \sum\nolimits_{i_2=1}^{n-m-3} R^{0i_2}_m$ in .\
The possible shifts for these configurations are $$s=\begin{cases}
n_l-1,\ldots,n-1 & \,\text{ for }n_l=3,\ldots,n-1 \\
0,n-1 & \,\text{ for }n_l=n
\end{cases}
{\; , }$$ which again follow from the cyclicity of the sub-form factor, except for the shift $s=0$, which is nevertheless valid and ensures that all top forms which contain the respective pole can be used to obtain the corresponding BCFW term.
#### Lower point NMHV form factor
The last term in is the most interesting one, since it contains the lower point NMHV form factor ${\mathsf{F}}_{n-1,3}$, which itself is given in terms of a sum of diagrams. It is the inverse soft limit of this $n-1$ point NMHV form factor, with a $k$-preserving inverse soft factor attached to the diagram as in Figure \[fig:terms\](c). For each term in the sub-form factor ${\mathsf{F}}_{n-1,3}$, the inverse soft factor imposes ${(n-1)}=0$, in addition to the vanishing minors of the lower point form factor: $$\sum_{\mathrm{subdiagrams}} \{\text{poles of subdiagram}\} \cup \{{(n-1)}\}
\label{eq:recursivepoles}{\; . }$$ These terms are the remaining ones in , namely $\sum\nolimits_{m=0}^{n-2}
\sum\nolimits_{i_1=1}^m R^{i_10}_m $. The poles of the sub-diagram are obtained in exactly the same way, meaning that the explicit knowledge of the BCFW poles[^5] for cases with low $n$ is enough to specify the contour for any number of legs recursively. Note that the possible shifts are simply inherited from the sub-diagram.
#### General structure of the NMHV contour
The recursive structure of the contour becomes clear if one arranges the residues on a grid, like those shown in Figure \[fig:contourgrid\] for $n=4,5,6$. In those pictures the poles corresponding to MHV form factor $\times$ MHV amplitude factorization channels are arranged in the first row and the poles corresponding to MHV amplitude $\times$ MHV form factor channels lie in the last column. Finally, the recursively constructed poles form a sub-grid which obeys the same pattern, but with one point fewer. Using the labels of , the rows are sorted with increasing value of $m$, and each row starts with the terms $R^{i_1 0}_m$ with decreasing values of $i_1$, followed by $R^{0 i_2}_m$ with increasing values of $i_2$. We also observe that this contour bears similarity to the tree-level contour for scattering amplitudes, reviewed in . Lastly, note that despite appearing as poles in the Graßmannian integral , the general formula for the contour never produces a residue at configurations involving an intersection of lines.
The contour grids of Figure \[fig:contourgrid\] contain the following entries (residue; admissible shifts $s$; factorization channel where shown).

For $n=4$:

- $\{{(\underline{{1}})}\}$, $s=0,1$: ${\mathsf{F}}_{2,2}\times{\mathsf{A}}_{4,2}$
- $\{{(1)}\}$, $s=0,3$: ${\mathsf{A}}_{4,2}\times{\mathsf{F}}_{2,2}$
- $\{{(3)}\}$, $s=1,2$: ${\mathsf{F}}_{3,3}\times{\mathsf{A}}_{3,1}$
- $\{{(\underline{{3}})}\}$, $s=2,3$: ${\mathsf{A}}_{3,2}\times{\mathsf{F}}_{3,2}$

For $n=5$:

- $\{{(\underline{{1}})},{(\underline{{2}})}\}$, $s=0,1,2$: ${\mathsf{F}}_{3,2}\times{\mathsf{A}}_{4,2}$
- $\{{(\underline{{1}})},{(2)}\}$, $s=0,1$: ${\mathsf{F}}_{2,2}\times{\mathsf{A}}_{5,2}$
- $\{{(1)},{(4)}\}$, $s=0,3$
- $\{{(\underline{{3}})},{(4)}\}$, $s=2,3$

For $n=6$:

- $\{{(\underline{{1}})},{(\underline{{2}})},{(\underline{{3}})}\}$, $s=0,1,2,3$: ${\mathsf{F}}_{4,2}\times{\mathsf{A}}_{4,2}$
- $\{{(\underline{{1}})},{(\underline{{2}})},{(3)}\}$, $s=0,1,2$: ${\mathsf{F}}_{3,2}\times{\mathsf{A}}_{5,2}$
- $\{{(\underline{{1}})},{(2)},{(3)}\}$, $s=0,1$: ${\mathsf{F}}_{2,2}\times{\mathsf{A}}_{6,2}$
- $\{{(\underline{{1}})},{(2)},{(5)}\}$, $s=0,1$
- $\{{(1)},{(2)},{(5)}\}$, $s=0,4$
- $\{{(1)},{(4)},{(5)}\}$, $s=0,3$
- $\{{(1)},{(\underline{{4}})},{(5)}\}$, $s=3,4$
- $\{{(\underline{{3}})},{(4)},{(5)}\}$, $s=2,3$
- $\{{(\underline{{3}})},{(\underline{{4}})},{(5)}\}$, $s=2,3,4$
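The three families of terms can also be generated mechanically. The following sketch (plain Python; the encoding of an underlined minor $(\underline{i})$ as `("u", i)` and of an ordinary minor $(i)$ as `("m", i)` is our own convention, and the minimal NMHV form factor is taken as a single term with no poles) reproduces the counts and the entries of the grids for small $n$.

```python
def contour(n):
    """BCFW contour of the n-point NMHV form factor, as a list of residues.

    Each residue is a frozenset of pole labels: ("u", i) denotes the
    underlined minor, ("m", i) the ordinary minor (i).
    """
    if n == 3:
        # minimal NMHV form factor: a single BCFW term with no poles
        return [frozenset()]
    terms = []
    # (a) MHV form factor x MHV amplitude channels
    for nl in range(2, n - 1):
        terms.append(frozenset([("u", i) for i in range(1, nl)] +
                               [("m", i) for i in range(nl, n - 2)]))
    # (b) MHV amplitude x MHV form factor channels
    for nl in range(3, n + 1):
        terms.append(frozenset([("m", i) for i in range(1, nl - 2)] +
                               [("u", i) for i in range(nl, n)]))
    # (c) inverse soft limit of the (n-1)-point NMHV form factor:
    #     every sub-residue acquires the extra pole (n-1)
    for sub in contour(n - 1):
        terms.append(sub | {("m", n - 1)})
    return terms


for n in (4, 5, 6):
    res = contour(n)
    assert len(res) == len(set(res))  # all residues are distinct
```

Running this yields $4$, $9$ and $16$ residues for $n=4,5,6$, matching the grids term by term.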
We also note that, although cannot generally be thought of as a contour in the real sense, a special case where *can* be interpreted as such is for $n=4$. Summing two top forms with shifts $s=0$ and $s=2$ subject to the contour we get $$\begin{gathered}
\Big[{\mathcal{G}}_{4,3}^{[0]} + {\mathcal{G}}_{4,3}^{[2]}\Big]\bigg|_{{\mathcal{C}^{\mathrm{BCFW}}}_{4,3}} =
{\langle xy \rangle}^2
\int_{{\mathcal{C}^{\mathrm{BCFW}}}_{4,3}}\frac{{\ensuremath{\mathrm{d}\xspace}}^{3\times 6}C}{\text{Vol}[GL(3)]}\;
\delta^{2\times 3}(C\cdot{{\smash{\tilde{\lambda}}}}) \, \delta^{4\times 3}(C\cdot{{\smash{\tilde{\eta}}}}) \, \delta^{2\times(n-1)}(C^\perp\cdot{{\smash{\lambda}}})
\\
\times \left[\frac{1}{\big[
{(1)}{(2)}{(\underline{{1}})}{(\underline{{4}})} (xy\,(34)\cap(12))
\big]}+\frac{1}{\big[
{(3)}{(4)}{(\underline{{3}})}{(\underline{{2}})} (xy\,(12)\cap(34))
\big]}\right]\ .
\label{eq:gi4}\end{gathered}$$ According to , the contour for $n=4$ is $${\mathcal{C}^{\mathrm{BCFW}}}_{4,3} = \{{(1)}\}+ \{{(3)}\}+\{{(\underline{{1}})}\}+\{{(\underline{{3}})}\}\, =\, -\big[\{{(2)} \} + \{{(4)} \}+\{{(\underline{{2}})} \} + \{{(\underline{{4}})} \} + \{{(xy\,(12)\cap(34))} \}\big]{\; , }$$ where in the last equality we have used Cauchy’s theorem. Interestingly, the combination $\{{(2)} \} + \{{(4)} \}+\{{(\underline{{2}})} \} + \{{(\underline{{4}})} \}$ gives the (P)BCFW contour, and for the residue $\{{(xy\,(12)\cap(34))} \}$ the contributions of the two top forms cancel out. The fact that the integrands can be combined in this way is accidental for $n=4$ since the top forms with $s=0$ and $s=2$ together contain all poles contributing to the BCFW representation. Therefore returns the form factor. For larger values of $n$, as can be seen by inspecting Figure \[fig:contourgrid\], there is no combination of top forms which contains all poles picked out by the contour the same number of times, and therefore a combination such as is not possible.
It is clear that the prescription given above for obtaining contours applies to general N$^{k-2}$MHV form factors. Just like for scattering amplitudes [@ArkaniHamed:2012nw], the contour of a given form factor is determined by the on-shell diagrams dictated by the BCFW recursion relation. We showed explicitly for NMHV form factors the general property that the decorated permutations of the corresponding bipartite on-shell diagrams provide the necessary information to select the lower-dimensional cells of the Graßmannian $G(k,n+2)$ which contribute to a general $n$-point N$^{k-2}$MHV form factor.
For scattering amplitudes, there exists a second way of obtaining the general tree-level contour for the Graßmannian integral in a compact closed form, namely the connected prescription [@Bourjaily:2010kw] derived from the twistor string. In the following sections, we study the analogous connected formula for form factors [@He:2016jdg; @Brandhuber:2016xue].
A Graßmannian formulation from the connected prescription {#sec:connected}
=========================================================
So far we have considered the $G(3,n+2)$ formulation of form factors which is analogous to the $G(3,n)$ amplitudes formula , namely a contour integral equipped with a tree-level contour [@ArkaniHamed:2009dn]. A dual $G(3,n)$ formulation for scattering amplitudes arises from the connected formula after the embedding of $G(2,n)$ into $G(3,n)$ [@Nandan:2009cc; @ArkaniHamed:2009dg]. This mapping returns a representation of the $G(3,n)$ integral which by construction inherits the contour of the connected formula. This section is devoted to studying the analogous connected formula for form factors. In particular, we present a lift from the link representation of [@Brandhuber:2016xue] to the Graßmannian valid for any value of $n$.
Brief review of the connected prescription and link representation {#sec:connected-to-link}
------------------------------------------------------------------
In analogy with the amplitude connected prescription [@Roiban:2004yf], in [@Brandhuber:2016xue] and [@He:2016jdg] a similar formula was obtained for form factors of the chiral part of the stress tensor operator. This representation was given an ambitwistor string interpretation in [@Brandhuber:2016xue] and [@Bork:2017qyh]. Here we review the derivation of [@Brandhuber:2016xue] for the form factor connected formula in the link representation. The kinematic setup is the same as for the Graßmannian integral: we add to the set of $n$ on-shell states two additional particles labelled by $x$ and $y$, representing the kinematics of the operator. Then, for a helicity sector with Graßmann degree $4k$ one chooses $k$ labels from the set $\{1,\dots,n\}$ to form the set ${\rm m}$, indexed by upper case letters $I=\{i_1,\dots, i_k\}$. The remaining $n+2-k$ labels (which always contain $x$ and $y$) form the set $\overline{\rm p}$, labelled by lower case letters $i$. The set $\rm p$ is the same as $\overline{\rm p}$ with $x$ and $y$ removed.\
Using this notation, the form factor connected formula reads $$\begin{aligned}
\label{eq:ff-connected}
\begin{split}
{\mathsf{F}}_{n,k} = &{\braket{xy}}^2 \int \frac{1}{\text{Vol}(GL(2))} \frac{{\ensuremath{\mathrm{d}\xspace}}^2\sigma_x {\ensuremath{\mathrm{d}\xspace}}^2\sigma_y}{(xy)^2}\prod_{a=1}^n \frac{{\ensuremath{\mathrm{d}\xspace}}^2\sigma_a}{(a\,a+1)}\\
&\times \prod_{i\in \overline{\rm p}} \delta^2 (\lambda_i - \lambda(\sigma_i)) \prod_{I \in {\rm m}} \delta^{2} ({{\smash{\tilde{\lambda}}}}_I - {{\smash{\tilde{\lambda}}}}(\sigma_I)) \delta^{4} ({{\smash{\tilde{\eta}}}}_I - {{\smash{\tilde{\eta}}}}(\sigma_I))\ ,
\end{split}\end{aligned}$$ where $(\sigma_a^1, \sigma_a^2)$ are homogeneous coordinates in $\mathbb{CP}^1$, $(ab)=\epsilon_{\alpha \beta} \sigma_a^\alpha \sigma_b^\beta$, and $$\begin{aligned}
\label{eq:Witten-RSV}
\lambda(\sigma_i) = \sum_{I \in {\rm m}} \frac{1}{(Ii)}\lambda^I,\quad {{\smash{\tilde{\lambda}}}}(\sigma_I)=-\sum_{i \in \overline{\rm p}} \frac{1}{(Ii)}{{\smash{\tilde{\lambda}}}}^i,\quad {{\smash{\tilde{\eta}}}}(\sigma_I)=-\sum_{i \in \overline{\rm p}} \frac{1}{(Ii)}{{\smash{\tilde{\eta}}}}^i\ .\end{aligned}$$
As is the case with scattering amplitudes, one can go from the connected prescription to the *link representation* by introducing a new set of variables $c_{Ij}$, termed *link variables* [@ArkaniHamed:2009si], and imposing the additional equations $c_{Ij} = \frac{1}{(Ij)}$ [@Spradlin:2009qr; @Dolan:2009wf]. The advantage of using these variables is that the equations become linear. In [@Brandhuber:2016xue], a generic expression for form factors in this representation was given: $$\begin{aligned}
\label{eq:ff-link}
{\mathsf{F}}_{n,k}= &{\braket{xy}}^2 \int \prod_{I \in {\rm m}, j \in \overline{\rm p}} {\ensuremath{\mathrm{d}\xspace}}c_{Ij} U(c_{Ij}) \times \prod_{i\in \overline{\rm p}} \delta^2 (\lambda_i - c_{Ii}\lambda_I) \prod_{I \in {\rm m}} \delta^{2} ({{\smash{\tilde{\lambda}}}}_I + c_{Ii} {{\smash{\tilde{\lambda}}}}_i) \delta^{4} ({{\smash{\tilde{\eta}}}}_I + c_{Ii} {{\smash{\tilde{\eta}}}}_i) \\
\label{eq:U}
&U(c_{Ii})= \int \frac{1}{\text{Vol}(GL(2))} \frac{{\ensuremath{\mathrm{d}\xspace}}^2\sigma_x {\ensuremath{\mathrm{d}\xspace}}^2\sigma_y}{(xy)^2} \prod_{a=1}^n \frac{{\ensuremath{\mathrm{d}\xspace}}^2\sigma_a}{(a\,a+1)} \prod_{I \in {\rm m}, i \in \overline{\rm p}} \delta\left(c_{Ii}-\frac{1}{(Ii)}\right) \ .\end{aligned}$$ Note that although carries the degrees of freedom of a $G(k,n+2)$ Graßmannian formula, all integration variables are fixed by the delta functions. Similarly to what was done for scattering amplitudes in [@ArkaniHamed:2009dn], we now lift this formulation in the NMHV case to a fully $GL(3)$ invariant Graßmannian formulation by performing the $\sigma$ integrations.
From the link representation to the Graßmannian
-----------------------------------------------
In the following we focus on our case of interest, namely NMHV form factors with $k=3$, and write with the integrand in the form of a $GL(3)$ invariant Graßmannian integral, with no free integration variables. Indeed, while the explicit delta functions of can only fix $2n$ out of the $3(n-1)$ integration variables $c_{Ii}$, the function $U(c_{Ij})$ provides precisely the additional $n-3$ constraints required to solve for all $c_{Ij}$. After solving $2n$ out of the $3n-3$ constraints imposed by the delta functions of , there are no integrations over the variables $\sigma_a$ left. It is then straightforward to restore the $GL(3)$ invariance.
The $n-3$ remaining delta functions, evaluated at the solutions of the others, generate constraints depending on six points each. These equations, when written in terms of $GL(3)$ minors, have the general form $\delta(S_{i_1 i_2 i_3 i_4 i_5 i_6})$, where[^6] $$\begin{aligned}
\label{eq:sextic}
S_{i_1 i_2 i_3 i_4 i_5 i_6} \equiv (i_1 i_2 i_3 )(i_3i_4i_5)(i_5i_6 i_1)( i_2 i_4 i_6)-( i_2 i_3 i_4)( i_4 i_5 i_6)( i_6 i_1 i_2)( i_3 i_5 i_1)\ .\end{aligned}$$ The equations $S=0$ are the same that feature for scattering amplitudes, and are in general polynomials of degree four in the link variables. Their geometric meaning was discussed in [@White:1915; @ArkaniHamed:2009dg]; the localization of N$^{k-2}$MHV scattering amplitudes on degree $(k-1)$-curves in twistor space, as in Witten’s twistor string theory, has a counterpart as a localization in the Graßmannian. Namely, by viewing each column in the matrix $C \in G(k,n+2)$ as a point in $\mathbb{CP}^{k-1}$, each column must be the image of a map $\mathbb{CP}^1\mapsto\mathbb{CP}^{k-1}$, generally given by the Veronese map $$\begin{aligned}
\label{eq:veronese}
(\sigma^1, \sigma^2)
\mapsto
\left(
(\sigma^1)^{k-1} , (\sigma^1)^{k-2}\sigma^2 , \cdots , \sigma^1(\sigma^2)^{k-2} , (\sigma^2)^{k-1}
\right)
\ .\end{aligned}$$ For $k=3$ this corresponds to a map of degree two, and therefore the constraints arising from must ensure that all $n+2$ points lie on the same curve. This is achieved by a combination of equations of the form , which impose that a sixth point lies on the degree-two curve generated by the other five. For this reason, we refer to these equations as *conic constraints*. It is straightforward to see that, if a matrix $C\in G(3,n+2)$ has all columns as in , all equations trivially vanish since the $3\times 3$ minors factorise in terms of $2\times 2$ minors formed of the $\sigma$ coordinates as $(abc)=(ab)(bc)(ca)$.
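Both statements are easy to check numerically. The sketch below (plain Python with exact integer arithmetic; all names are ours) builds $n+2=8$ columns via the degree-two Veronese map and verifies that every conic constraint $S$ vanishes, and that the $3\times 3$ minors factorise into worldsheet brackets, up to an overall sign fixed by the convention for $\epsilon_{\alpha\beta}$.

```python
import itertools
import random

rng = random.Random(7)

def bracket(sa, sb):
    # worldsheet bracket (ab) = eps_{alpha beta} sigma_a^alpha sigma_b^beta
    return sa[0] * sb[1] - sa[1] * sb[0]

def veronese(s):
    # degree-two Veronese map CP^1 -> CP^2 (the k = 3 case)
    return (s[0] ** 2, s[0] * s[1], s[1] ** 2)

def minor(C, a, b, c):
    # 3x3 minor (abc) of the matrix whose columns are C[0], C[1], ...
    u, v, w = C[a], C[b], C[c]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def sextic(C, i1, i2, i3, i4, i5, i6):
    # the conic constraint S_{i1...i6}
    m = lambda a, b, c: minor(C, a, b, c)
    return (m(i1, i2, i3) * m(i3, i4, i5) * m(i5, i6, i1) * m(i2, i4, i6)
            - m(i2, i3, i4) * m(i4, i5, i6) * m(i6, i1, i2) * m(i3, i5, i1))

# n + 2 = 8 random points on CP^1 and their Veronese images in CP^2
sigma = [(rng.randint(1, 100), rng.randint(1, 100)) for _ in range(8)]
C = [veronese(s) for s in sigma]

# every conic constraint vanishes on a Veronese configuration
assert all(sextic(C, *idx) == 0
           for idx in itertools.combinations(range(8), 6))

# minors factorise, (abc) = (ab)(bc)(ca) up to an overall sign convention
a, b, c = 0, 3, 5
lhs = minor(C, a, b, c)
rhs = (bracket(sigma[a], sigma[b]) * bracket(sigma[b], sigma[c])
       * bracket(sigma[c], sigma[a]))
assert lhs ** 2 == rhs ** 2
```

The cancellation in $S$ works exactly as in the text: each of the four minors in either term factorises into three brackets, and the two resulting products of twelve brackets agree identically.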
Performing an explicit lift of from the link representation to an integral over $GL(3)$ for low values of $n$ reveals a recursive structure in which the $n$-point form factor is obtained from the $(n-1)$-point as follows: $$\begin{aligned}
{\mathsf{F}}_{n,3} &= {\langle xy \rangle}^2 \int \frac{{\ensuremath{\mathrm{d}\xspace}}^{3\times (n+2)}C}{\text{Vol(GL(3))}}
\; I_{n,3} \;
\delta^{2\times 3}(C\cdot{{\smash{\tilde{\lambda}}}}) \, \delta^{4\times 3}(C\cdot{{\smash{\tilde{\eta}}}}) \, \delta^{2\times(n-1)}(C^\perp\cdot{{\smash{\lambda}}})
\ ,\\[10pt]
I_{4,3} &= \frac{(13x)(13y)}{(123)(134)(1xy)(3xy)} \delta(S_{1234xy})\ ,\\[10pt]
I_{n,3} &= I_{n-1,3} \times \left[(-1)^{n-1} \frac{(12n-1)(13n-1)(1xy)(23x)(23y)}{(1n-1n)(23n-1)} \delta(S_{123nxy})\right],\quad n\geq 5{\; , }\end{aligned}
\label{eq:Fn}$$ where we chose to display only the integrands with $n\geq 4$, which are genuinely NMHV. Although the integrands of this formulation no longer enjoy the manifest cyclic invariance of the connected formula, the conic constraints imposed by the delta functions ensure this symmetry is present.\
There are several ways of representing the integrand of , all coinciding on the support of the conic constraints. Likewise, the choice of equations appearing inside the delta functions is not unique, as the geometric constraint that the $n+2$ points lie on the same degree-two curve can be represented in various distinct ways. For the particular representation in , we consider the conic defined by the five points $\{1,2,3,x,y\}$, and each conic constraint imposes that one of the other points $\{4,\dots,n\}$ lies on the same curve, as can be seen from the additional constraints present in each recursive factor. The minors appearing in the numerator of the recursive factor are responsible for annihilating spurious solutions of the conic constraints. For instance, a configuration where four of the points belonging to the set $\{1,2,3,x,y\}$ are collinear would set to zero all conic constraints, but would not imply that all points lie on the same curve. The numerator factor $(13x)(13y)(23x)(23y)(1xy)$ precisely vanishes for every configuration of this sort. A special case where the cancellation of spurious solutions of the conic constraints does *not* happen is $n=5$, since the factor of $(1xy)$ cancels between $I_{4,3}$ and the recursive factor in . In this case, one needs to ensure that only the physical solutions of the conic constraints are taken into account. This situation is discussed in further detail in Section \[sec:5points\].
Formulation with inverse soft interpretation
--------------------------------------------
For scattering amplitudes, it is possible to interpret the recursive factors $I_n/I_{n-1}$ as the addition of a particle via an inverse soft factor [@ArkaniHamed:2009dg; @Bourjaily:2010kw]. The same should be true for form factors, as they are inverse soft constructible [@Nandan:2012rk]. In particular, one can show that for form factors with sufficiently many on-shell legs, namely six, the effect of the operator may be omitted and it is possible to write the recursive factor of in the same way as that for amplitudes. This is achieved by rewriting in a way more similar to the amplitude formulas presented in e.g. [@Bourjaily:2010kw] by means of the identity $$\delta(S_{ijkrst})\delta(S_{ijkrsu})
=
\frac{(jkt)(irt)}{(jks)(irs)}
\delta(S_{ijkrst})\delta(S_{ijkrtu})
{\; . }\label{eq:id}$$ We start by considering the ratio $I_{5,3}/I_{4,3}$, and trade $S_{123xy5}\rightarrow S_{123x45}$ on the support of $S_{1234xy}=0$ using , which results in $$I_{5,3}/I_{4,3}
=
\frac{(124)(134)(23x)(1x4)}{(145)}
\delta(S_{123x45})
{\; . }\label{eq:isf5}$$ This factor is already much more similar to the amplitude “soft factor”, but it is clear that either $x$ or $y$, representing the kinematics of the operator, has to be an index in the left-over $S$. Next we consider $I_{6,3}/I_{5,3}$. We first trade $y$ in $S_{123xy6}$ for $4$ using $S_{1234xy}$, and then $x\rightarrow 5$ using $S_{123x45}$, getting $$I_{6,3}/I_{5,3}\sim
\frac{(125)(135)(234)(145)}{(156)}
\delta(S_{123456}){\; , }$$ which is precisely the recursive factor which maps ${\mathsf{A}}_{5,3}$ to ${\mathsf{A}}_{6,3}$.
We can now proceed recursively, and find that also for higher point form factors the recursive structure of the integrand can be written in exactly the same way as for amplitudes, $$I_{n,3}/I_{n-1,3} =
\frac{(12n-1)(13n-1)(1n-2n-1)(23n-2)}{(1n-1n)}
\delta(S_{123n-2n-1n})
{\; , }\qquad n\geq 6
{\; . }\label{eq:isfactor}$$ This form of the recursive factor is the same as the one used in [@ArkaniHamed:2009dg], where it was shown that this factor ensures the correct soft limit for particle $n$. This representation was also important for matching the connected formula with the Graßmannian integral via applications of the GRT, as its integrand has singularities at all BCFW poles. In the next section we investigate this strategy for form factors.
From the connected prescription to BCFW via the GRT {#sec:relation}
===================================================
In the previous sections, we studied two different Graßmannian representations of form factors. On one side there is the formula associated with the BCFW recursion relation and on-shell diagrams, given in and equipped with the contour . On the other hand there is the formula that arises from the connected prescription, represented as in or , which does not require a separate specification of the contour.
These formulations are the form factor analogues of corresponding expressions for scattering amplitudes, whose NMHV Graßmannian formulas are related as shown below in Figure \[fig:rel\] [@Nandan:2009cc; @ArkaniHamed:2009dg].
![ Relations between different Graßmannian formulations of scattering amplitudes. Here $\mathcal{L}_{G(2,n)}^{\mathrm{amp}}$ denotes the amplitude connected formula, which can be understood as an integral over the Graßmannian $G(2,n)$. The Veronese map leads from $\mathcal{L}^{\rm amp}_{G(2,n)}$ to the Graßmannian integral with conic constraints, $\mathcal{L}_{G(3,n)}^{\mathrm{amp,conic}}$. There are different ways in which the Graßmannian integral with BCFW or (P)BCFW integration contour, $\mathcal{L}_{\Gamma_{n,3}}^{\mathrm{amp}}$, can be obtained from this representation: either via the smooth deformations of the conic constraints $\mathcal{L}_{G(3,n)}^{\mathrm{amp,conic}}(t)$, or via the application of GRTs. []{data-label="fig:rel"}](relationsAmp){width="0.95\linewidth"}
The Veronese map referred to in this diagram is given in . The $t$-deformation amounts to introducing $n-5$ parameters $t_j$ into the conic constraints in a systematic way, by defining $$\begin{aligned}
\label{eq:sextic-t_j}
S_{i_1 i_2 i_3 i_4 i_5 i_6}(t_j) \equiv (i_1 i_2 i_3 )(i_3i_4i_5)(i_5i_6 i_1)( i_2 i_4 i_6)-t_j\,( i_2 i_3 i_4)( i_4 i_5 i_6)( i_6 i_1 i_2)( i_3 i_5 i_1)\ .\end{aligned}$$ Note in particular that the BCFW contour can be recovered both from taking limits of the deformation parameters $t_j$ or through applications of the GRT starting from the formula with the conic constraints.
The aim of this section is to investigate the validity of similar relations between the corresponding formulas for form factors. A preliminary attempt to use the Veronese map to relate the Graßmannian integral based on on-shell diagrams directly to the connected formula was made in [@Brandhuber:2016xue], and found to be impossible. Based on the derivation of Section \[sec:contour\], we conclude that the BCFW contour contains poles originating from different top forms in such a way that no linear combination of top forms gives the tree-level form factor with a single contour of integration. Such a single integral would, however, be necessary for a direct application of the Veronese map.
In this section, we explore the possibility of relating the Graßmannian formulations directly using the GRT, focusing on low-point examples. Already at four points we find that there is no naive analogue of the $t$-deformation for the form-factor formulas. Moreover, we show that successive applications of the GRT lead from the Graßmannian formula with conic constraints to that with the BCFW contour for four and five points. However, this is no longer the case starting at six points. We furthermore highlight subtleties involved in the computation of the BCFW residues which do not appear for scattering amplitudes, such as the necessity of regularising residues with a 0/0 behaviour.
Four points
-----------
Consider the integral given in , which we repeat here for convenience: $$I_{4,3} = \frac{(13x)(13y)}{(123)(134)(1xy)(3xy)} \delta(S_{1234xy}){\; . }$$ The contour is defined by the equation $S_{1234xy}=0$. Applying the residue theorem one obtains a new combination of residues given by $$\label{eq:GRT-4points}
\{S_{1234xy}\} \rightarrow -\{(123)\}-\{(341)\}-\{(1xy)\}-\{(3xy)\} {\; . }$$ The locations of these poles are the same as those of the four-point BCFW contour, which can be read off from Figure \[fig:contourgrid\], cf. for the notation. For each of the residues on the right-hand side of , the factor of $S_{1234xy}$ in the denominator factorises into a product of four minors. It is straightforward to check that the value of each residue is the same as that stemming from the Graßmannian formula .
A lesson can be taken from this simple case. Consider the analogous example of the six-point scattering amplitude: $$\begin{aligned}
I^{\rm amp}_{6,3} = \frac{(135)}{(123)(345)(561)} \delta(S_{123456}) = \frac{(246)}{(234)(456)(612)} \delta(S_{123456}) . \end{aligned}$$ In this situation $S_{123456}$ always factorises in the same way for all three poles present in the integrand, both in the BCFW or (P)BCFW representations. This means that one can introduce a parameter $t$ to the term that vanishes as in , i.e. $S_{123456}(t)=
t(123)(345)(561)(246)-(234)(456)(612)(351)$, and the amplitude is independent of the value of $t$ [@ArkaniHamed:2009dg; @Nandan:2009cc]. In particular, a one-parameter family of dual Graßmannian theories is defined in this fashion, with the particular cases of the twistor string for $t=1$ and the BCFW and (P)BCFW cases for $t=0$ or $t=\infty$, respectively, as shown schematically in Figure \[fig:rel\].
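The equality of the two six-point representations on the support of the conic constraint amounts to the cross-multiplied identity $(135)(234)(456)(612)=(246)(123)(345)(561)$ when $S_{123456}=0$. A minimal numerical sketch (plain Python, names ours; points generated on a common conic via the Veronese map) confirms this:

```python
import random

rng = random.Random(1)

def bracket(sa, sb):
    # worldsheet bracket (ab)
    return sa[0] * sb[1] - sa[1] * sb[0]

def veronese(s):
    # degree-two Veronese map CP^1 -> CP^2
    return (s[0] ** 2, s[0] * s[1], s[1] ** 2)

def minor3(C, a, b, c):
    # 3x3 minor of the matrix with columns C[0], C[1], ...
    u, v, w = C[a], C[b], C[c]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# six points on a common conic, i.e. on the support of S_123456 = 0
C = [veronese((rng.randint(1, 100), rng.randint(1, 100)))
     for _ in range(6)]
m = lambda a, b, c: minor3(C, a - 1, b - 1, c - 1)  # 1-based labels

# (135)/[(123)(345)(561)] = (246)/[(234)(456)(612)], cross-multiplied
assert (m(1, 3, 5) * m(2, 3, 4) * m(4, 5, 6) * m(6, 1, 2)
        == m(2, 4, 6) * m(1, 2, 3) * m(3, 4, 5) * m(5, 6, 1))
```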
For form factors this is not possible: in the four-point example we see that $S_{1234xy}$ always factorises, but differently at each pole. Explicitly, using the permutation invariance of the conic constraints in its labels, $$\begin{aligned}
S_{1234xy} =
\begin{cases}
\phantom{-}S_{314yx2}\quad &\rightarrow\quad\phantom{-}(314)(4yx)(x23)(1y2) \qquad\text{ on }\{(123)\} \\
-S_{312yx4}\quad &\rightarrow\quad -(312)(2yx)(x43)(1y4) \qquad\text{ on }\{(341)\} \\
-S_{243yx1}\quad &\rightarrow\quad-(243)(3yx)(x12)(4y1) \qquad\text{ on }\{(1xy)\} \\
\phantom{-}S_{xy1423} \quad&\rightarrow\quad \phantom{-}(xy1)(142)(23x)(y43) \qquad\text{ on }\{(3xy)\}
\end{cases} . \end{aligned}$$ This means that there is no deformation—or at least no naive one—of $S_{1234xy}$ which could interpolate between the Graßmannian integral related to on-shell diagrams and the one based on the connected prescription.[^7]
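The first of these factorisations can be checked with exact integer arithmetic (plain Python sketch, names ours): we impose $(123)=0$ by making column $3$ a linear combination of columns $1$ and $2$, and compare $S_{1234xy}$ with the quoted product of minors.

```python
import random

rng = random.Random(3)

def minor3(C, a, b, c):
    # 3x3 minor of the matrix with columns C[0], C[1], ...
    u, v, w = C[a], C[b], C[c]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def sextic(C, i1, i2, i3, i4, i5, i6):
    # the conic constraint S_{i1...i6}
    m = lambda a, b, c: minor3(C, a, b, c)
    return (m(i1, i2, i3) * m(i3, i4, i5) * m(i5, i6, i1) * m(i2, i4, i6)
            - m(i2, i3, i4) * m(i4, i5, i6) * m(i6, i1, i2) * m(i3, i5, i1))

# columns 1, 2, 3, 4, x, y (indices 0..5); impose (123) = 0 by taking
# column 3 to be a linear combination of columns 1 and 2
c1, c2, c4, cx, cy = [tuple(rng.randint(-9, 9) for _ in range(3))
                      for _ in range(5)]
p, q = rng.randint(1, 9), rng.randint(1, 9)
c3 = tuple(p * a + q * b for a, b in zip(c1, c2))
C = [c1, c2, c3, c4, cx, cy]
I = {"1": 0, "2": 1, "3": 2, "4": 3, "x": 4, "y": 5}
m = lambda a, b, c: minor3(C, I[a], I[b], I[c])

assert m("1", "2", "3") == 0
# on the pole (123) = 0 the sextic factorises: S_1234xy -> (314)(4yx)(x23)(1y2)
S = sextic(C, I["1"], I["2"], I["3"], I["4"], I["x"], I["y"])
assert S == (m("3", "1", "4") * m("4", "y", "x")
             * m("x", "2", "3") * m("1", "y", "2"))
```

The other three factorisations can be verified in the same way by imposing the corresponding minor to vanish.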
Five points {#sec:5points}
-----------
We now consider the five-point form factor, for which the integrand in the inverse soft formulation reads $$\label{eq:i5}
I_{5,3}=\frac{(13x)(13y)(23x)(124)(14x)}{(123)(1xy)(3xy)(145)} \delta(S_{1234xy})\delta(S_{123x45})$$ As mentioned in Section \[sec:connected-to-link\], the integrand is finite for a spurious solution of $S_{1234xy} = S_{123x45} = 0$, namely that with particles 1,2,3 and 4 collinear, as the ratio $\frac{(124)}{(123)}$ does not vanish.
We denote $S_1 \equiv S_{1234xy} $ and $S_2 \equiv S_{123x45}$. Consider first the GRT $\{f_1,f_2\}=0$ with $f_1 \equiv S_{1} $ and $f_2 \equiv S_2 (123)(1xy)(3xy)(145)$. The residue theorem then implies $$\begin{aligned}
\label{eq:GRT16points}
\{S_1,S_2\} = - \{S_1,(123)\} - \{S_1,(1xy)\} - \{S_1,(3xy)\} - \{S_1,(145)\} = 0
{\; . }\end{aligned}$$ Note further that for $(123)=0$, $S_1$ factorises and thus $$\begin{aligned}
\label{eq:GRT26points}
\{S_1,(123)\} = \{(234),(123)\}+\{(4xy),(123)\}+\{(y12),(123)\}+\{(3x1),(123)\} .\end{aligned}$$ Plugging back into , we get $$\begin{aligned}
\label{eq:GRT36points}
\begin{split}
&\textcolor{purple}{\{S_1,S_2\}} = - \textcolor{purple}{\{(234),(123)\}} - \{(4xy),(123)\} -\{(y12),(123)\}\\
& -\{(3x1),(123)\} - \{S_1,(1xy)\} - \{S_1,(3xy)\} - \{S_1,(145)\} = 0
\end{split}\end{aligned}$$ Note the subtlety here: the two highlighted terms appear not to be distinct, since the configuration where $(123)=(234)=0$ is also a (spurious) solution of $S_1=S_2=0$. The fact that such a configuration appears after the application of the GRT follows from the requirement that the constraint $S_1=S_2=0$ in only includes non-spurious solutions, which for five points is not enforced by the numerator.
Interestingly, the term $\{(234),(123)\}$ also highlights another phenomenon which does not occur for amplitudes. For this term the integrand is given by $$\frac{(13y)(23x)(124)(14x)}{(1xy)(3xy)(145)(4xy)(y12)\;S_2} \delta\big((234)\big)\delta\big((123)\big)
{\; , }$$ and both the minor $(124)$ in the numerator as well as $S_{2}$ in the denominator approach zero linearly if one parametrises the constraints imposed by the delta function. Under such a parametrisation, one finds that the direction in which the limit is taken changes the result. To calculate the correct residue, we have to take the limit in a way that ensures that $S_{1}$ vanishes. We do so by setting $(123)=\varepsilon$ and $(234)=\frac{(34x)(xy1)(24y)}{(4xy)(y12)(3x1)}\varepsilon$, and then letting $\varepsilon\to 0$. Note that the term under consideration arises from factorizing $S_1$ in $\{S_1,(123)\}$; the limit ensures that $(123)=0$ is approached precisely from the surface $S_1=0$.
The other residues coming from can be calculated straightforwardly. Note that $S_1$ factorises for the terms $\{S_1,(1xy)\}$ and $\{S_1,(3xy)\}$; the resulting terms, together with those not involving $S_1$ in are in one-to-one correspondence with the MHV$\times$MHV factorization poles of the BCFW contour . For the term $\{S_1,(145)\}$ one applies the GRT again, after which the calculation is identical to the four point case, and results in all inverse soft contributions to the form factor. For all terms, gives the same residues as the corresponding poles of the Graßmannian integral.
Six points
----------
For the six point form factor, we checked numerically that the Graßmannian formula evaluated on the conic constraints gives the correct result for the form factor. However, when attempting to perform a one-to-one mapping of the poles of this Graßmannian integral to those obtained from the BCFW contour (see Figure \[fig:contourgrid\]) via the GRT, we find that it is impossible to identify all of them. We furthermore collected evidence that even by using the identity repeatedly, one might not be able to generate other representations which have all BCFW poles.
The six-point form-factor integrand in the inverse-soft-like representation is given by $$I_{6,3}=\frac{(13x)(13y)(23x)(124)(14x)(125)(135)(234)}{(123)(1xy)(3xy)(156)} \delta(S_{1234xy})\delta(S_{12345x}) \delta(S_{123456})
{\; , }\label{eq:i6}$$ and the poles contributing to the BCFW representation of the form factor can be found in Figure \[fig:contourgrid\]. Most of these poles can be recovered by successively applying the GRT to , in particular all poles with $(156)=0$, corresponding to the inverse soft limit of ${\mathsf{F}}_{5,3}$.
It is, however, impossible to find the poles $\{{(1)},{(2)},{(3)}\}$ and $\{{(1)},{(2)},{(\underline{{5}})}\}$, corresponding to the factorization channels ${\mathsf{A}}_{6,2}\times{\mathsf{F}}_{2,2}$ and ${\mathsf{A}}_{5,2}\times{\mathsf{F}}_{3,2}$. To see that these poles can never appear, it is sufficient to realise that, in the vicinity of these configurations, the integrand is not singular enough to produce a finite residue. Letting each of the vanishing minors at those poles approach zero as $\varepsilon\to 0$, we find that for the respective configurations the integrand behaves as $$\begin{aligned}
&\{{(1)},{(2)},{(3)}\} \colon\quad
\frac{(124)(125)(135)(234)}{(123)\;S_{1234xy}S_{12345x}S_{123456}}\sim \frac{1}{\varepsilon^2}{\; , }\\
&\{{(1)},{(2)},{(\underline{{5}})}\}\colon\quad
\frac{(124)(234)}{(123)\;S_{1234xy}S_{12345x}S_{123456}}\sim \frac{1}{\varepsilon^2}{\; , }\end{aligned}$$ while in order for a residue to exist, the integrand would have to scale as $\varepsilon^{-3}$. Since the GRT does not change this power counting, potential poles at these locations would be cancelled by numerator factors.
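As a compact illustration of the kind of argument used here, the one-variable analogue of the global residue theorem can be checked directly: for a rational function whose denominator degree exceeds the numerator degree by at least two, the residues at all simple poles sum to zero, which is what allows one residue to be traded for minus the sum of the remaining ones. The following sketch (our toy Python example, not part of the original multidimensional computation) verifies this with exact rational arithmetic:

```python
from fractions import Fraction

# Toy rational integrand f(z) = (z + 2) / ((z - 1)(z - 2)(z - 3)):
# denominator degree exceeds numerator degree by >= 2, the one-variable
# analogue of the conditions under which the GRT applies.
def numer(z):
    return z + 2

def denom_prime(z, poles):
    # Derivative of prod_i (z - p_i) evaluated at one of the poles p_j
    # reduces to prod_{i != j} (p_j - p_i).
    prod = Fraction(1)
    for p in poles:
        if p != z:
            prod *= (z - p)
    return prod

poles = [Fraction(1), Fraction(2), Fraction(3)]

# At a simple pole p, Res f = N(p) / D'(p): here 3/2, -4, 5/2.
residues = [numer(p) / denom_prime(p, poles) for p in poles]

# The residues sum to zero, so any one of them equals minus the sum of
# the others, exactly how residues are traded via the GRT above.
assert sum(residues) == 0
```

The multidimensional GRT used in the text works analogously, with the $\varepsilon$ power counting deciding whether a given configuration supports a non-zero residue at all.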
The identity can change the degree of divergence at configurations away from the support of the conic constraints, i.e. at positions reached by the GRT. In order to see if other representations of the integrand with the correct singularities at all BCFW poles exist, we generated a very high number (${\mathcal{O}}(10^6)$) of different representations of the integrand with a computer program, using the identity and cyclic symmetry, and taking both and as starting points. We then checked that none of these representations has the correct degree of divergence at all BCFW poles. This result is not conclusive, since we could only generate a finite number of representations due to computational constraints. In principle, the identity can be applied over and over again. Nevertheless, our result is a very strong indication that there may not be any $G(3,8)$ representation based on the connected formula from which we can identify all BCFW terms one by one, although we emphasise once again that the connected formula does produce the correct form factor, namely the sum of all BCFW terms.
Note however that some way of relating the formulations has to exist. We speculate that it is possible to apply a GRT to , and then to apply different identities to each of the resulting terms, effectively combining different representations. Of course there is a proliferation of such possibilities without a clear physical motivation, and several attempts did not lead to the identification of the expected residues. Since in any case the relation between the formulations is much more subtle compared to scattering amplitudes, it could be difficult to apply such a strategy systematically to find the BCFW contour prescription in closed form beyond NMHV. It remains to be investigated whether this tells us something about the physical properties of Graßmannian representations of (partially) off-shell observables. We leave this for future work.
Conclusions
===========
In this note we investigated the contours of integration of the Graßmannian formulation of form factors proposed in [@Frassek:2015rka], as well as the relation to the connected prescription for form factors [@Brandhuber:2016xue; @He:2016jdg]. To this end, we used the on-shell diagram representation of form factors. The permutations labelling the bipartite on-shell diagrams allowed us to obtain the linear relations among the minors in the Graßmannian formula for a given diagram, and thus to deduce the corresponding contour of integration. We applied this procedure explicitly to NMHV form factors, arriving at a compact form of the contour given in , which is the analogue of the odd-even form of the NMHV contour for amplitudes [@ArkaniHamed:2009dg]. As we emphasised, this method should apply to general form factors beyond the NMHV case. It would be of interest to investigate similarities and differences between the contours of integration for general N$^{k-2}$MHV amplitudes and form factors.
We then studied the connected prescription for form factors, lifting this formulation to the Graßmannian. In particular we provided a representation of this Graßmannian formula which has the same recursive structure as its amplitude counterpart. In this representation each additional particle is added via a factor which ensures the correct behaviour in the soft limit. Analysing this formulation using the global residue theorem, we were able to show that the connected prescription also non-trivially gives rise to the BCFW contour obtained from on-shell diagrams for four and five points. We found that a new feature arises already at five points, where a $0/0$ term appears. This requires a careful treatment, in particular regarding the direction in which the pole is approached. At six points, we first checked that the connected prescription formula gives the same results as the BCFW formula. Interestingly, we also found strong evidence that through a direct application of GRTs it may not be possible to perform a one-to-one mapping between the poles present in the connected prescription and in the BCFW contour. This situation is quite different from that of on-shell scattering amplitudes, for which the two formulas can be smoothly deformed into one another, and it may teach us important lessons about applying the Graßmannian formalism and the connected prescription to form factors or more general off-shell and/or non-planar objects. As a way forward it may be fruitful to note the role such smooth deformations play in showing the equivalence of similar integral formulas in the case of form factors of Wilson line operators [@Bork:2017qyh].
Form factors provide a bridge between on-shell scattering amplitudes and completely off-shell correlation functions, and thus they are ideal objects for a better understanding of how the Graßmannian integral and on-shell diagrams can be generalised to off-shell quantities. The recent progress in studying correlation functions in terms of amplituhedron-like geometries [@Eden:2017fow] raises hope that these methods are indeed more generally applicable for a variety of observables in ${\mathcal{N}}=4$ SYM. It would be interesting to see if form factors can interpolate between the geometries corresponding to amplitudes and correlation functions. Furthermore, form factors are intrinsically non-planar, even in the large-$N$ limit, which may be one of the main causes of the new features we found in the study of the connected prescription using the GRT. It would therefore be interesting to explore applications of the recent developments concerning non-planar on-shell diagrams [@Franco:2015rma; @Bourjaily:2016mnp] to form factors. Finally, it would be interesting to explore the interplay between ambitwistor strings and on-shell diagrams, studied in [@Farrow:2017eol] for amplitudes in ${\mathcal{N}}=4$ SYM and ${\mathcal{N}}=8$ supergravity, for form factors at loop level.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Andi Brandhuber, Ed Hughes, Rodolfo Panerai, Gregor Richter, Matthias Staudacher, Gabriele Travaglini and especially Matthias Wilhelm for very interesting discussions. We thank L.V. Bork and A.I. Onishchenko for clarifying various points regarding their recent work [@Bork:2017qyh]. DM received support from GK 1504 *“Masse, Spektrum, Symmetrie”*. The work of BP was supported by the ERC starting grant 637019 “*MathAm*”. The work of CW was supported in part by a DOE Early Career Award under Grant No. DE-SC0010255. DN is supported by the STFC consolidated grant “Particle Physics at the Higgs Centre” and by the National Science Foundation. BP would like to thank Humboldt University of Berlin, where part of this work was accomplished. DN and CW would like to acknowledge the support and hospitality of the KITP program *“Scattering Amplitudes and Beyond”* at UCSB, where the last stages of this work were carried out. DN and CW’s research was supported in part by the NSF under Grant No. NSF PHY-1125915. DN would also like to thank Walter Burke ITP at Caltech and QMAP at UC Davis for hospitality during the final stages of this work.
[^1]: See also [@Eden:2017fow] for a geometric picture of correlation functions in $\mathcal{N}=4$ SYM.
[^2]: We follow closely the notation and conventions of [@Eden:2011yp; @Eden:2011ku] for the harmonic projections, see also [@Brandhuber:2011tv]. Note that since we are studying the chiral part of the stress tensor multiplet the $ \theta^- $ is set to zero in this notation. The fermionic variables associated to the on-shell particles will be denoted as $\eta^{+a}$ and $\eta^{-a}$.
[^3]: This was already observed for amplitudes in ${\mathcal{N}}=8$ supergravity beyond the MHV case [@Farrow:2017eol].
[^4]: We observe that the occurrence of poles of the form $ (xy\,(a b)\cap(cd)) $ in is similar to those found in [@Franco:2015rma; @Bourjaily:2016mnp] for non-planar on-shell diagrams.
[^5]: We remark that we use the term “BCFW pole” to denote the pole in the Graßmannian integral the residue of which produces a term in the BCFW recursion relation.
[^6]: This expression is invariant under permutations of the six labels up to a sign of the signature of the permutation.
[^7]: Aspects of this deformation play an important role in the derivation of similar integrals for form factors of Wilson line operators from the ambitwistor string in [@Bork:2017qyh]. It would be very interesting to see if the approach of this work can shed more light on this issue.
---
abstract: 'We propose a method to grow high-quality twisted bilayer graphene epitaxially on SiC using borazine as a surfactant. With this method, closed layers with a constant orientation with respect to the substrate can be grown over mm-size samples. Using high-resolution electron diffraction, we find a twist angle distribution centered at $30^\circ$ with a standard deviation of $(0.46\pm 0.01)^\circ$, a compression of the top (rotated) graphene layer by 0.7% with respect to the bottom layer, and a $(N\times N)R0^\circ$ Moiré unit cell with $N=12.84\pm0.12$ with respect to the top graphene. The interlayer hopping resulting from a comparison of tight binding simulations with angle-resolved photoelectron spectroscopy agrees with values reported for other twist angles.'
author:
- 'Y.-R. Lin'
- 'N. Samiseresht'
- 'M. Franke'
- 'S. Parhizkar'
- 'S. Soubatch'
- 'B. Amorim'
- 'T.-L. Lee'
- 'C. Kumpf'
- 'F.S. Tautz'
- 'F.C. Bocquet'
title: 'Surfactant-Mediated Growth of Twisted Bilayer Graphene on SiC'
---
[^1]
[^2]
Since the isolation of the first two-dimensional material, graphene, by micromechanical cleavage of graphite in 2004 [@Novoselov2004], efforts have been made to control the electronic properties of graphene without affecting the lattice. Twisted bilayer graphene (tBLG) was identified as a promising candidate as early as 2007 [@Lopes2007]. In this material, the twist angle [@Rozhkov2016] as well as the relative strain [@Huder2018; @Naumis2017; @Kumar2015; @Beechem2014] between the top and the bottom layers are decisive parameters that determine the electronic properties of the system. Among the intriguing properties of tBLG are chirality [@Stauber2018; @Morell2017; @Kim2016], magnetism [@Sboychakov2018; @Gonzalez2017], and a tunable band gap [@Rozhkov2017; @Liu2015; @Muniz2012]. More recently, this system has attracted additional interest after unconventional superconductivity was discovered for twist angles of approximately 1.1$^\circ$ [@Cao2018_article; @Cao2018_letter].
Up to now, most experimental studies of tBLG have been performed on manually stacked, exfoliated graphene single layers. This method has the advantage that virtually any material can be stacked with any angle, but the moderate material quality, small sample size and limited twist angle control are its major shortcomings. For example, the presence of interlayer contaminants forming bubbles is a challenging issue [@Frisenda2018]. Moreover, this method is not scalable. In contrast, epitaxial growth is more limited in scope, but if the correct growth parameters for a given stack and a given twist angle have been identified, it is reproducible and readily scalable, and it offers unrivaled cleanness and control at the atomic scale.
tBLG can be grown epitaxially on metals. Examples are Pt(111) [@Yao2018], Pd foil [@Zuo2018], Pd(111) [@Murata2012], Cu foil [@Peng2017; @Hu2017; @Lu2013], Ni–Cu gradient foil [@Gao2018], Ir(111) [@Nie2011], and Ni(111) [@Iwasaki2014]. Thermal decomposition of 6$H$-SiC(000$\bar{1}$) is another route to obtain tBLG [@Lee2011; @Tejeda2012; @Razado2016]. However, in all of these cases one finds random twist angles and/or random orientation across the sample, and each domain of a given twist angle has a typical maximum diameter of 1 to 10 $\mu$m. This makes these samples unsuitable for any application that requires a definite twist angle and orientation with respect to the substrate over a large area. Furthermore, the limited domain size and random orientation make the use of non-local characterization techniques for these samples difficult. For example, measuring the electronic band structure by means of angle-resolved photoemission spectroscopy (ARPES) requires a highly focused photon beam (nano-ARPES) to address only one twist angle at a time [@Yao2018; @Razado2016; @Yin2016; @Peng2017; @Tan2016; @Gao2018].
In this Letter, we report a method for growing tBLG with a well-defined twist angle and orientation on a macroscopic scale. Specifically, we anneal SiC in a borazine (B$_3$N$_3$H$_6$) atmosphere. Thereby, borazine acts as a surface-active molecule (surfactant), enforcing a specific rotation of the top graphene layer. Since the growth of the rotated graphene layer is self-limiting and the growth temperature is below the one required for multilayer non-rotated graphene, only tBLG is formed. As a substrate we use the silicon-terminated 6$H$-SiC(0001) surface. Because of the very high quality of the obtained tBLG, leading to sharp diffraction spots, it is possible to detect its Moiré, and to accurately determine the azimuthal distribution of twist angles, the average strain and the average domain size for each of the two graphene layers.
Our method is based on two well-known facts. Firstly, it has long been established that on SiC(0001), large single-layer graphene domains can be obtained by annealing the sample in a high-pressure argon environment [@Emtsev2009; @Forti2011]. The same structure can be obtained by growing in ultra-high vacuum (UHV), with only the domains being smaller [@Emtsev2009]. The natural lattice orientation of such epitaxial monolayer graphene (EMLG) on SiC(0001) is 30$^\circ$ with respect to the SiC lattice, and its lattice is compressed by 0.22% relative to graphite [@Schumann2014]. Further annealing leads to the growth of a second graphene layer of the same orientation below the initial one. Secondly, it has recently been shown that single-layer hexagonal boron nitride (hBN) grows epitaxially on 6$H$-SiC(0001) in a borazine atmosphere [@Shin2015]. The hBN lattice vectors are aligned with the SiC substrate surface lattice vectors ([hBN-$R0^\circ$]{}) [^3]. By annealing [hBN-$R0^\circ$]{} to higher temperatures, a graphene single layer ([SLG-$R0^\circ$]{}) gradually forms in the [hBN-$R0^\circ$]{}, following its orientation and finally replacing it [@Shin2015]. This observation, in conjunction with the well-known fact that multilayer $R30^\circ$ graphene can be grown on SiC by thermal decomposition, suggests that [$30^\circ$-tBLG]{} may be grown on SiC(0001) with the help of hBN [@Ahn2018]. However, it is clear that in such a scheme, the quality of the final [$30^\circ$-tBLG]{} is limited by the quality of the formed hBN layer. In order to eliminate this influence, we propose using borazine as a surfactant molecule during the growth of [$30^\circ$-tBLG]{}, thus avoiding the stabilization of a static hBN lattice but keeping the orienting influence of hBN nuclei, which act as a template for the growth of rotated graphene.
This dynamic approach may offer the advantage that, because borazine molecules are provided continually during growth, small hBN nuclei are constantly present at the surface until a closed rotated graphene layer is formed. It also avoids the negative effects of domain boundaries in large-scale static hBN, lattice mismatch between graphene and hBN, etc. As we show in this Letter, the surfactant-mediated growth indeed provides [$30^\circ$-tBLG]{} of superior quality.
The SiC wafers were obtained from TankBlue Semiconductor Co. Ltd. The sample, a 5$\times$10 mm$^2$ N-doped 6$H$-SiC(0001) wafer piece, is cleaned by annealing in UHV while being exposed to a flux of silicon atoms. In this way, the Si-rich $(\sqrt{3}\times\sqrt{3})R30^\circ$ reconstruction is prepared [@Starke1999]. The temperature is controlled by direct current heating and measured by a pyrometer [^4]. After the cleaning process, we prepare the more Si-rich [$(3\times 3)$]{} reconstruction [@Schardt2000] and then anneal the sample directly to 1380[$^\circ$C]{} in a partial pressure of borazine ($1.5\times 10^{-6}$ mbar) to obtain [$30^\circ$-tBLG]{}. If one carries out the same process at a lower temperature (1225[$^\circ$C]{}), [SLG-$R0^\circ$]{} of poor crystalline quality forms. At an even lower temperature (1100[$^\circ$C]{}), [hBN-$R0^\circ$]{} grows.
All results reported in this Letter were obtained from three different UHV setups. All sample transfers between the setups were performed in UHV using a pumped suitcase. In the first setup, we prepared the samples and characterized them with low energy electron diffraction (LEED) and angle-resolved photoelectron spectroscopy (ARPES) using a monochromatized UV lamp ($h\nu=21.2$ eV) and a Scienta R4000 analyzer. Quantitative diffraction was performed in a second setup using spot profile analysis LEED (SPA-LEED). In these two setups, the sample was held at room temperature. Finally, ARPES at 105 eV photon energy was performed with a SPECS Phoibos 225 analyzer in an end-station of the I09 beamline at the Diamond Light Source in Didcot, UK, at a sample temperature of 20 K. All techniques used have an electron or photon beam footprint on the sample ranging from 0.05 to 3 mm$^2$ and thus provide an averaged signal from the sample.
![SPA-LEED patterns of (a) [hBN-$R0^\circ$]{}, (b) [SLG-$R0^\circ$]{}, (c) [$30^\circ$-tBLG]{} grown by annealing SiC(0001)-[$(3\times 3)$]{} in a borazine atmosphere and (d) [$30^\circ$-tBLG]{} grown by annealing [SLG-$R0^\circ$]{} in UHV. First order diffraction spots are assigned as follows: $\square$ – 6$H$-SiC(0001); $\bigtriangleup$ – [hBN-$R0^\circ$]{}; $\bigcirc$ – [G-$R0^\circ$]{}; $\Diamond$ – [G-$R30^\circ$]{}; Dotted diamond – [$30^\circ$-tBLG]{} Moiré. The dotted circle indicates the well-known apparent $(6\times6)$ periodicity of the $(6\sqrt{3}\times6\sqrt{3})R30^\circ$ buffer layer. The high symmetry directions of the SiC surface Brillouin zone are indicated in (a). All patterns have the same gray scale and were acquired with an impact energy of 165 eV.[]{data-label="fig:LEED"}](LEED_Lars_Fanny_Igel_Roya_165eV-13.pdf){width=".70\columnwidth"}
Fig. \[fig:LEED\] displays SPA-LEED patterns obtained at various growth stages. Fig. \[fig:LEED\](a) shows the diffraction pattern obtained by annealing the SiC(0001)-[$(3\times 3)$]{} reconstruction to 1100[$^\circ$C]{} in a borazine atmosphere. Around each first order $\langle 10 \rangle$ SiC diffraction spot [^5], marked by orange squares ($\square$), we observe a hexagonal arrangement of elongated diffraction spots. The lattice constant corresponding to the spots marked by black triangles ($\bigtriangleup$), $(2.576 \pm 0.01)~\mathrm{\AA}$, agrees with the value reported for single-layer hBN on SiC [@Shin2015], exhibiting an average tensile strain of 2.84% relative to the hBN bulk lattice parameter of $(2.5047\pm 0.0002)~\mathrm{\AA}$ [@Paszkowicz2002]. The position and elongation of these spots indicate that the hBN domains are aligned to the $\bar\Gamma\bar M$ direction of SiC within a small but significant azimuthal range. We therefore conclude that, at this annealing temperature, [hBN-$R0^\circ$]{} forms on the SiC surface. The other diffraction spots in Fig. \[fig:LEED\](a) are attributed to multiple electron scattering on different lattices. We note that the diffraction pattern of [hBN-$R0^\circ$]{} grown on SiC cannot be recovered after 48 hours of air exposure followed by a mild annealing in UHV. As hBN is known to be stable in air [@Yuan2018], our observation suggests that [hBN-$R0^\circ$]{} does not form a closed layer, thus allowing the oxidation of SiC.
Fig. \[fig:LEED\](b) shows the diffraction pattern obtained after annealing [hBN-$R0^\circ$]{} in UHV at 1225[$^\circ$C]{}. The same pattern can alternatively be obtained by annealing SiC(0001)-[$(3\times 3)$]{} directly to the same temperature in a borazine atmosphere. As for [hBN-$R0^\circ$]{}, the diffraction pattern shows a hexagonal arrangement of spots around the $\langle 10 \rangle$ spots of SiC ($\square$). However, the lattice constant of the spots marked by red circles ($\bigcirc$), $(2.524\pm 0.02)~\mathrm{\AA}$, is smaller than the one found for [hBN-$R0^\circ$]{}. The presence of Dirac cones at the $\bar K$ points of this lattice (not shown) proves that it corresponds to single-layer graphene aligned to the SiC substrate ([SLG-$R0^\circ$]{}), in agreement with Ref. [@Shin2015]. Note that [SLG-$R0^\circ$]{} in Fig. \[fig:LEED\](b) has an average tensile strain of 2.55% relative to graphite (2.461 Å [@Schumann2014; @Hattab2011]). Moreover, the diffraction pattern is characterized by very broad spots with low intensity and high background. This indicates that the graphene domains are small and accompanied by large areas without long-range order, and is consistent with the fact that its diffraction pattern cannot be recovered after 48 hours of air exposure.
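Both strain values quoted above follow from the same elementary relation, $\epsilon = (a_{\mathrm{meas}} - a_{\mathrm{ref}})/a_{\mathrm{ref}}$. As a quick numerical cross-check (our illustration; the lattice constants are those quoted in the text):

```python
def tensile_strain(a_measured, a_reference):
    """Relative strain of a measured lattice constant w.r.t. a reference."""
    return (a_measured - a_reference) / a_reference

# hBN-R0 on SiC vs. bulk hBN (values in Angstrom, as quoted above):
# ~2.85 %, matching the quoted 2.84 % to within rounding.
strain_hbn = tensile_strain(2.576, 2.5047)

# SLG-R0 vs. graphite: ~2.56 %, matching the quoted 2.55 %.
strain_slg = tensile_strain(2.524, 2.461)

assert abs(strain_hbn * 100 - 2.84) < 0.05
assert abs(strain_slg * 100 - 2.55) < 0.05
```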
Fig. \[fig:LEED\](c) displays the diffraction pattern obtained by annealing SiC(0001)-[$(3\times 3)$]{} directly to 1380[$^\circ$C]{} in a borazine atmosphere. The elongated diffraction spots marked by red circles ($\bigcirc$) correspond to a lattice parameter of $(2.439\pm 0.006)~\mathrm{\AA}$, and are assigned to a graphene layer aligned to the SiC substrate ([G-$R0^\circ$]{}), as evidenced by the observation of Dirac cones (see below). Additionally, diffraction spots corresponding to a similar lattice parameter, but aligned with $\bar\Gamma\bar K$ of SiC ($R30^\circ$), are present ($\Diamond$). They originate from a second type of graphene rotated by $30^\circ$ with respect to the substrate ([G-$R30^\circ$]{}). The following two observations suggest that these two graphene layers are stacked, [G-$R0^\circ$]{} on top of [G-$R30^\circ$]{}. First, the [G-$R30^\circ$]{} has a smaller intensity than [G-$R0^\circ$]{} – this can be explained by the attenuation of the electron diffraction intensity by the top layer. Second, unlike [G-$R0^\circ$]{}, [G-$R30^\circ$]{} has diffraction spots with a circular shape, which can be understood as a consequence of the rigid locking of the layer to the SiC substrate, yielding superior azimuthal order similar to EMLG [@Riedl2007]. Interestingly, the diffraction pattern of this [$30^\circ$-tBLG]{} is recovered after four months of air exposure and a mild annealing in UHV. This indicates that [$30^\circ$-tBLG]{} forms a closed layer across the SiC surface.
Fig. \[fig:LEED\](d) displays the diffraction pattern obtained by annealing [SLG-$R0^\circ$]{} at 1380[$^\circ$C]{} in UHV, an alternative route reported in Ref. [@Ahn2018] to prepare [$30^\circ$-tBLG]{}. The pattern is the same as in Fig. \[fig:LEED\](c); however, the background intensity is noticeably higher, the spots are much broader and we observe fewer higher-order diffraction spots. We conclude that this preparation method yields a [$30^\circ$-tBLG]{} with not only lower crystalline quality but also areas without long-range order. In the following, we concentrate exclusively on the high-quality [$30^\circ$-tBLG]{} in Fig. \[fig:LEED\](c).
![Electron diffraction spot profiles taken on [$30^\circ$-tBLG]{} prepared in a borazine atmosphere (see Fig. \[fig:LEED\](c)). (a) Typical radial profiles of $\langle 10 \rangle$ spots of [G-$R30^\circ$]{}, [G-$R0^\circ$]{} and SiC. (b) Radial and azimuthal profiles of the $\langle 10 \rangle$ spot of [G-$R0^\circ$]{}. The profiles have normalized intensity. Colored dots represent experimental data. Black lines are fits (only shown for broad profiles).[]{data-label="fig:Profile"}](LEED_Roya_LineProfiles-4.pdf){width="0.65\columnwidth"}
Deeper insight into the quality of the crystalline structures in Fig. \[fig:LEED\](c) is provided by the analysis of the respective $\langle 10 \rangle$ diffraction spot profiles (Fig. \[fig:Profile\]). The full width at half maximum (FWHM) $w$ of the radial spot profile arises from the combined effect of the finite instrumental resolution and the finite size of crystalline domains. $2\pi/w$ represents a lower limit to the average domain size. Moreover, an azimuthal profile broader than the radial profile is a direct indication of azimuthal disorder. To estimate the azimuthal distribution, we convolve the radial profile with a Gaussian to fit the azimuthal profile. The standard deviation $\sigma$ of the fitted Gaussian is a measure of the azimuthal disorder.
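The fitting logic described here can be sketched numerically (a schematic numpy illustration with made-up profile widths, not the actual analysis, which fits a Voigt radial profile): convolving the radial profile with a Gaussian of width $\sigma$ broadens it in quadrature, and $\sigma$ is adjusted until the convolution matches the measured azimuthal profile.

```python
import numpy as np

# Model the radial spot profile and the azimuthal disorder as Gaussians
# of widths sigma_r and sigma_az (arbitrary values for illustration).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

sigma_r, sigma_az = 0.8, 0.46
radial = gaussian(x, sigma_r)
azim_broadening = gaussian(x, sigma_az)

# Discrete approximation of the continuous convolution used in the fit.
azimuthal = np.convolve(radial, azim_broadening, mode="same") * dx

# The convolved profile has variance sigma_r^2 + sigma_az^2; inverting
# this quadrature relation is how sigma_az is extracted from the data.
var = np.sum(x**2 * azimuthal) * dx / (np.sum(azimuthal) * dx)
assert abs(np.sqrt(var) - np.hypot(sigma_r, sigma_az)) < 1e-2
```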
Radial $\langle 10 \rangle$ spot profiles are shown in Fig. \[fig:Profile\](a). The $\langle 10 \rangle$ SiC spots ($\square$) have a circular shape and the radial line profile is fitted by a Voigt function with a FWHM of $(0.525\pm 0.002)$ , or $(0.01237\pm 0.00005)~\mathrm{\AA^{-1}}$. In real space, this corresponds to an average domain size $> 508~\mathrm{\AA}$ [^6]. Next, we consider the $\langle 10 \rangle$ spots of [G-$R30^\circ$]{} ($\Diamond$). These spots have a circular shape that corresponds to an average domain size $> 147~\mathrm{\AA}$. In contrast, the $\langle 10 \rangle$ spots of [G-$R0^\circ$]{} ($\bigcirc$) are elongated in the azimuthal direction. Their radial profile corresponds to an average domain size $> 290~\mathrm{\AA}$, significantly larger than for [G-$R30^\circ$]{}. This difference in domain size within the [$30^\circ$-tBLG]{} layer is explained by the fact that [G-$R30^\circ$]{} can only grow if [G-$R0^\circ$]{} is already present locally, thus protecting the former from borazine. Therefore, the [G-$R30^\circ$]{} domains are necessarily smaller.
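The quoted real-space limits follow directly from the $2\pi/w$ relation introduced above; for instance, the SiC spot width reproduces the 508 Å bound (a trivial check we added):

```python
import math

# Lower limit on the average domain size from the radial FWHM w of a
# diffraction spot: finite-size broadening alone gives w >= 2*pi/L,
# so L >= 2*pi/w for the measured (instrument-convolved) width.
def min_domain_size(fwhm_inv_angstrom):
    return 2 * math.pi / fwhm_inv_angstrom

# SiC <10> spot, FWHM = 0.01237 1/Angstrom as quoted above -> ~508 A.
assert abs(min_domain_size(0.01237) - 508) < 1
```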
It is clear from Fig. \[fig:Profile\](b) that the azimuthal profile of the top graphene layer [G-$R0^\circ$]{} is broader than its radial profile. This is evidence that several azimuthal orientations of [G-$R0^\circ$]{} coexist on the surface. We find $\sigma=(0.46\pm 0.01)^\circ$ around the $R0^\circ$ direction. As the bottom graphene layer has an exact $R30^\circ$ orientation (circular spots), approximately 70% of the [$30^\circ$-tBLG]{} consists of bilayer graphene with twist angles ranging from $29.54^\circ$ to $30.46^\circ$.
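The quoted range is $\pm 1\sigma$ of the fitted Gaussian, which for a normal distribution contains about 68% of the layer, consistent with the "approximately 70%" figure (a quick check we added):

```python
import math

# Fraction of a Gaussian twist-angle distribution lying within one
# standard deviation of the mean: erf(1/sqrt(2)) ~ 0.683. With
# sigma = 0.46 deg around 30 deg, this is the fraction of the tBLG
# with twist angles between 29.54 deg and 30.46 deg.
fraction = math.erf(1 / math.sqrt(2))
assert abs(fraction - 0.683) < 0.001
```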
The narrow twist angle distribution in [$30^\circ$-tBLG]{} suggests a non-vanishing interaction between the two graphene layers. Additional indications of such an interaction come from a further analysis of the diffraction pattern, yielding two important observations. Firstly, close to (0,0), there are additional spots in Fig. \[fig:LEED\](c), one of them marked by a dotted circle, which cannot be explained by either of the two graphene layers ($\Diamond$, $\bigcirc$), the buffer layer (dotted diamonds) [^7], the SiC substrate ($\square$) or multiple scattering involving those individual lattices. Therefore, they must arise from a Moiré modulation of the complete [$30^\circ$-tBLG]{}. This corresponds to a $(N\times N)R0^\circ$ lattice with $N=12.84\pm0.12$ with respect to the unit cell of [G-$R0^\circ$]{}, or with $N=10.17\pm0.10$ with respect to SiC. Note that in the case of an electron density modulation forming the Moiré, the structural modulation may even be larger. An analogous effect is well known for EMLG on SiC [@Riedl2007; @Lauffer2008; @Sforzini2015]. Secondly, looking at the precise lattice constant of [G-$R0^\circ$]{}, we find that it is 0.7% contracted with respect to [G-$R30^\circ$]{}. Together with the azimuthal distribution of twist angles, these two observations show that [$30^\circ$-tBLG]{} grown in a borazine atmosphere relaxes locally around the perfect quasicrystalline order of two unstrained $30^\circ$-rotated graphene lattices [@Koren2016], in order to minimize its energy. This might be, at least partially, an effect of the SiC(0001) substrate. A detailed structural investigation requires microscopic real space methods and is beyond the scope of this Letter. Note that in the absence of the SiC substrate and the buffer layer, the structure of minimum energy of [$30^\circ$-tBLG]{} may differ from the one observed here [@Ahn2018].
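The two quoted Moiré periodicities describe the same real-space superlattice and can be cross-checked against each other. Taking the 6$H$-SiC(0001) surface lattice constant as 3.081 Å (our assumed literature value, not quoted in the text):

```python
# Consistency check of the two equivalent descriptions of the Moire
# periodicity quoted above: N = 12.84 w.r.t. the G-R0 lattice
# (a_G = 2.439 A, measured above) and N = 10.17 w.r.t. SiC.
# a_SiC = 3.081 A is an assumed literature value for 6H-SiC(0001).
a_G, N_G = 2.439, 12.84
a_SiC, N_SiC = 3.081, 10.17

moire_from_graphene = N_G * a_G    # ~31.3 A
moire_from_sic = N_SiC * a_SiC     # ~31.3 A

# The two real-space periods agree well within the quoted uncertainty
# of +-0.12 on N (i.e. +-0.12 * a_G in real space).
assert abs(moire_from_graphene - moire_from_sic) < 0.12 * a_G
```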
![(a) Experimental CBM of [$30^\circ$-tBLG]{}, at a binding energy of 0.39 eV, superposed with a zoom of the Brillouin zones of [G-$R0^\circ$]{} (red line) and [G-$R30^\circ$]{} (blue line). The dotted lines indicate where the EDM shown in the following panels are taken. (b) EDM at $\bar K_{30^\circ}$ in the $\bar K_{30^\circ}-\bar K_{30^\circ}$ direction, (c) similarly at $\bar K_{0^\circ}$. The intensities in panel (b) are multiplied by a factor of six in comparison to panel (c) in order to obtain comparable gray scales. (d)-(f) The three identical gray-scale images show the EDM along the $\bar K_{30^\circ}-\bar K_{0^\circ}$ direction (green dotted line in (a)). The size of the orange dots indicates the simulated ARPES intensity for freestanding [$30^\circ$-tBLG]{} with (d) $V_{pp\sigma}^0=0$ eV and (e) $V_{pp\sigma}^0=0.2$ eV (see text). The ARPES intensity induced by the six buffer layer replicas (dotted diamonds in panel (a)) is represented by red dots.[]{data-label="fig:ARPES"}](Fig_ARPES_with2cuts_55.pdf){width="\columnwidth"}
We now turn to the electronic properties of our [$30^\circ$-tBLG]{} sample. An ARPES constant binding energy map (CBM) taken close to the Dirac energies is presented in Fig. \[fig:ARPES\](a). The intensity found at the $\bar K$ points of the individual graphene layers in the CBM of Fig. \[fig:ARPES\](a) together with the linear band dispersion in the energy distribution maps (EDM) in Fig. \[fig:ARPES\](b)-(c) demonstrate the presence of two graphene layers with a difference in orientation of approximately $30^\circ$. The [G-$R30^\circ$]{} is $n$-doped with $E_\mathrm{D}=(0.37\pm0.01)$ eV, and [G-$R0^\circ$]{} with $E_\mathrm{D}=(0.41\pm0.01)$ eV. The intensity at $\bar K_{0^\circ}$ is approximately six times higher than at $\bar K_{30^\circ}$. The absorption of [G-$R30^\circ$]{} photoelectrons by [G-$R0^\circ$]{}, and the lower coverage of [G-$R30^\circ$]{} as found in SPA-LEED data, explain this difference [^8]. The Dirac cone replicas around $\bar K_{0^\circ}$, indicated by red dotted diamonds in Fig. \[fig:ARPES\](a), arise from the diffraction of photoelectrons from the top [G-$R0^\circ$]{} by the buffer layer lattice located below [G-$R30^\circ$]{}, as seen for EMLG [@Zhou2007]. This is the ultimate proof that we have indeed prepared [$30^\circ$-tBLG]{}. Due to the reduced intensity at $\bar K_{30^\circ}$, the replicas around this point (blue dotted diamonds) cannot be detected.
The high quality of our [$30^\circ$-tBLG]{} sample offers the possibility to search for interlayer coupling in the electronic band structure. If such a coupling is present, one expects the formation of band gaps at the position where Dirac cones of the two different layers cross. In Fig. \[fig:ARPES\](d)-(f), the EDM along the $\bar K_{0^\circ}-\bar K_{30^\circ}$ direction is shown. The crossing is found midway between the two Dirac cones at a binding energy of approximately 2.7 eV. To interpret our ARPES data, we simulate the electronic band structure within the tight binding approximation and the one-step model of photoemission, using the plane wave approximation for the final state (orange dots) [@Amorim2018] [^9]. Despite replicas (red dots) and possible areas with [SLG-$R0^\circ$]{} only, the simulations with $V_{pp\sigma}^0=0.2$ eV (Fig. \[fig:ARPES\](e)) [^10] reproduce the $\bar K_{0^\circ}$ split band measured in ARPES better than with $V_{pp\sigma}^0=0$ (Fig. \[fig:ARPES\](d)). This is in agreement with structural indications of an interlayer coupling.
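The expected signature of a finite interlayer coupling, namely an avoided crossing where the two Dirac-cone branches meet, can be illustrated with a minimal two-band model (our schematic stand-in, not the tight-binding simulation of [@Amorim2018]): two linearly dispersing bands coupled by a constant matrix element $t$ anticross with a gap of $2t$.

```python
import numpy as np

# Minimal two-band model for the band crossing midway between K_0 and
# K_30: two linear branches +-v*k coupled by a constant interlayer
# matrix element t (a schematic stand-in for the V_ppsigma-dependent
# hopping in the full tight-binding simulation).
v, t = 1.0, 0.1
k = np.linspace(-0.5, 0.5, 1001)

gaps = []
for kk in k:
    h = np.array([[v * kk, t],
                  [t, -v * kk]])
    e = np.linalg.eigvalsh(h)         # ascending eigenvalues
    gaps.append(e[1] - e[0])          # = 2*sqrt((v*kk)**2 + t**2)

# The branches anticross: the minimum splitting, found at the
# would-be crossing point k = 0, equals 2t.
assert abs(min(gaps) - 2 * t) < 1e-9
```

Setting $t=0$ instead recovers a genuine crossing with vanishing gap, which is the distinction probed by comparing panels (d) and (e) of Fig. \[fig:ARPES\].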
Finally, we briefly comment on the mechanism of tBLG growth. Because of the high temperature, [hBN-$R0^\circ$]{} does not stabilize *in spite of* the presence of borazine. Yet, *due to* the presence of the surfactant borazine molecule, the graphene layer which grows at this temperature is forced to adopt a lattice orientation close to $R0^\circ$. This is a self-limiting process, because the graphene layer underneath *is no longer* exposed to borazine and therefore grows in the orientation defined by the SiC substrate to which it *is* exposed. We believe that this new preparation method, in which a surfactant enables the growth of a graphene layer in an unusual orientation, will foster new approaches to produce large-scale tBLG, thereby bringing its intriguing properties closer to applications.
F.C.B., C.K. and F.S.T. acknowledge funding by the DFG through the SFB 1083 Structure and Dynamics of Internal Interfaces (project A 12). B.A. received funding from the European Union’s Horizon 2020 research and innovation program under the Grant Agreement No. 706538. We thank Diamond Light Source for access to beamline I09 (Proposals No. SI20855 and No. SI20810) that contributed to the results presented here. The research leading to this result has been supported by the project CALIPSOplus under the Grant Agreement No. 730872 from the EU Framework Programme for Research and Innovation HORIZON 2020.
[99]{} [[K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Electric Field Effect in Atomically Thin Carbon Films, Science **306**, 666 (2004).](http://dx.doi.org/10.1126/science.1102896)]{}
[[J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Graphene Bilayer with a Twist: Electronic Structure, Phys. Rev. Lett. **99**, 256802 (2007).](http://dx.doi.org/10.1103/PhysRevLett.99.256802)]{}
[[A. V. Rozhkov, A. O. Sboychakov, A. L. Rakhmanov, F. Nori, Electronic properties of graphene-based bilayer systems, Phys. Rep. **648**, 1 (2016).](http://dx.doi.org/10.1016/j.physrep.2016.07.003)]{}
[[L. Huder, A. Artaud, T. Le Quang, G. Trambly de Laissardière, A. G. M. Jansen, G. Lapertot, C. Chapelier, and V. T. Renard, Electronic Spectrum of Twisted Graphene Layers under Heterostrain, Phys. Rev. Lett. **120**, 156405 (2018).](http://dx.doi.org/10.1103/PhysRevLett.120.156405)]{}
[[G. G. Naumis, S. Barraza-Lopez, M. Oliva-Leyva, and H. Terrones, Electronic and optical properties of strained graphene and other strained 2D materials: a review, Rep. Prog. Phys. **80**, 096501 (2017).](http://dx.doi.org/10.1088/1361-6633/aa74ef)]{}
[[H. Kumar, D. Er, L. Dong, J. Li, and V. B. Shenoy, Elastic Deformations in 2D van der waals Heterostructures and their Impact on Optoelectronic Properties: Predictions from a Multiscale Computational Approach, Sci. Rep. **5**, 10872 (2015).](http://dx.doi.org/10.1038/srep10872)]{}
[[T. E. Beechem, T. Ohta, B. Diaconescu, and J. T. Robinson, Rotational Disorder in Twisted Bilayer Graphene, ACS Nano **8**, 1655 (2014).](http://dx.doi.org/10.1021/nn405999z)]{}
[[T. Stauber, T. Low, and G. G[ó]{}mez-Santos, Chiral Response of Twisted Bilayer Graphene, Phys. Rev. Lett. **120**, 046801 (2018).](http://dx.doi.org/10.1103/PhysRevLett.120.046801)]{}
[[E. S. Morell, L. Chico, and L. Brey, Twisting dirac fermions: circular dichroism in bilayer graphene, 2D Mater. **4**, 035015 (2017).](http://dx.doi.org/10.1088/2053-1583/aa7eb6)]{}
[[C.-J. Kim, A. Sánchez-Castillo, Z. Ziegler, Y. Ogawa, C. Noguez, and J. Park, Chiral atomically thin films, Nat. Nanotech. **11**, 520 (2016).](http://dx.doi.org/10.1038/NNANO.2016.3)]{}
[[A. O. Sboychakov, A. V. Rozhkov, A. L. Rakhmanov, and F. Nori, Externally Controlled Magnetism and Band Gap in Twisted Bilayer Graphene, Phys. Rev. Lett. **120**, 266402 (2018).](http://dx.doi.org/10.1103/PhysRevLett.120.266402)]{}
[[L. A. Gonzalez-Arraga, J. L. Lado, F. Guinea, and P. San-Jose, Electrically Controllable Magnetism in Twisted Bilayer Graphene, Phys. Rev. Lett. **119**, 107201 (2017).](http://dx.doi.org/10.1103/PhysRevLett.119.107201)]{}
[[A. V. Rozhkov, A. O. Sboychakov, A. L. Rakhmanov, and F. Nori, Single-electron gap in the spectrum of twisted bilayer graphene, Phys. Rev. B **95**, 045119 (2017).](http://dx.doi.org/10.1103/PhysRevB.95.045119)]{}
[[J.-B. Liu, P.-J. Li, Y.-F. Chen, Z.-G. Wang, F. Qi, J.-R. He, B.-J. Zheng, J.-H. Zhou, W.-L. Zhang, L. Gu, and Y.-R. Li, Observation of tunable electrical bandgap in large-area twisted bilayer graphene synthesized by chemical vapor deposition, Sci. Rep. **5**, 15285 (2015).](http://dx.doi.org/10.1038/srep15285)]{}
[[A. R. Muniz and D. Maroudas, Opening and tuning of band gap by the formation of diamond superlattices in twisted bilayer graphene, Phys. Rev. B **86**, 075404 (2012).](http://dx.doi.org/10.1103/PhysRevB.86.075404)]{}
[[Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Unconventional superconductivity in magic-angle graphene superlattices, Nature **556**, 43 (2018).](http://dx.doi.org/10.1038/nature26160)]{}
[[Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Correlated insulator behaviour at half-filling in magic-angle graphene superlattices, Nature **556**, 80 (2018).](http://dx.doi.org/10.1038/nature26154)]{}
[[R. Frisenda, E. Navarro-Moratalla, P. Gant, D. Pérez De Lara, P. Jarillo-Herrero, R. V. Gorbachev, and A. Castellanos-Gomez, Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials, Chem. Soc. Rev. **47**, 53 (2018).](http://dx.doi.org/10.1039/c7cs00556c)]{}
[[W. Yao, E. Wang, C. Bao, Y. Zhang, K. Zhang, K. Bao, C. K. Chan, C. Chen, J. Avila, M. C. Asensio, J. Zhu, and S. Zhou, Quasicrystalline 30$^\circ$ twisted bilayer graphene as an incommensurate superlattice with strong interlayer coupling, PNAS **115**, 6928 (2018).](http://dx.doi.org/10.1073/pnas.1720865115)]{}
[[W.-J. Zuo, J. B. Qiao, D. L. Ma, L.-J Yin, G. Sun, J.-Y. Zhang, L.-Y. Guan, and L. He, Scanning tunneling microscopy and spectroscopy of twisted trilayer graphene, Phys. Rev. B **97**, 035440 (2018).](http://dx.doi.org/10.1103/PhysRevB.97.035440)]{}
[[Y. Murata, S. Nie, A. Ebnonnasir, E. Starodub, B. B. Kappes, K. F. McCarty, C. V. Ciobanu, and S. Kodambaka, Growth structure and work function of bilayer graphene on Pd(111), Phys. Rev. B **85**, 205443 (2012).](http://dx.doi.org/10.1103/PhysRevB.85.205443)]{}
[[H. Peng, N. B. M. Schröter, J. Yin, H. Wang, T.-F. Chung, H. Yang, S. Ekahana, Z. Liu, J. Jiang, L. Yang, T. Zhang, C. Chen, H. Ni, A. Barinov, Y. P. Chen, Z. Liu, H. Peng, and Y. Chen, Substrate Doping Effect and Unusually Large Angle van Hove Singularity Evolution in Twisted Bi- and Multilayer Graphene, Adv. Mat. **29**, 1606741 (2017).](http://dx.doi.org/10.1002/adma.201606741)]{}
[[F. Hu, S. R. Das, Y. Luan, T.-F. Chung, Y. P. Chen, and Z. Fei, Real-Space Imaging of the Tailored Plasmons in Twisted Bilayer Graphene, Phys. Rev. Lett. **119**, 247402 (2017).](http://dx.doi.org/10.1103/PhysRevLett.119.247402)]{}
[[C.-C. Lu, Y.-C. Lin, Z. Liu, C.-H. Yeh, K. Suenaga, and P.-W. Chiu, Twisting Bilayer Graphene Superlattices, ACS Nano **7**, 2587 (2013).](http://dx.doi.org/10.1021/nn3059828)]{}
[[Z. Gao, Q. Zhang, C. H. Naylor, Y. Kim, I. H. Abidi, J. Ping, P. Ducos, J. Zauberman, M.-Q. Zhao, A. M. Rappe, Z. Luo, L. Ren, and A. T. C. Johnson, Crystalline Bilayer Graphene with Preferential Stacking from Ni–Cu Gradient Alloy, ACS Nano **12**, 2275 (2018).](http://dx.doi.org/10.1021/acsnano.7b06992)]{}
[[S. Nie, A. L. Walter, N. C. Bartelt, E. Starodub, A. Bostwick, E. Rotenberg, and K. F. McCarty, Growth from Below: Graphene Bilayers on Ir(111), ACS Nano **5**, 2298 (2011).](http://dx.doi.org/10.1021/nn103582g)]{}
[[T. Iwasaki, A. A. Zakharov, T. Eelbo, M. Waśniowska, R. Wiesendanger, J. H. Smet, and U. Starke, Formation and structural analysis of twisted bilayer graphene on Ni(111) thin films, Surf. Sci. **625**, 44 (2014).](http://dx.doi.org/10.1016/j.susc.2014.03.004)]{}
[[D. S. Lee, C. Riedl, T. Beringer, A. H. Castro Neto, K. von Klitzing, U. Starke, and J. H. Smet, Quantum Hall Effect in Twisted Bilayer Graphene, Phys. Rev. Lett. **107**, 216602 (2011).](http://dx.doi.org/10.1103/PhysRevLett.107.216602)]{}
[[A. Tejeda, A. Taleb-Ibrahimi, W. de Heer, C. Berger, and E. H. Conrad, Electronic structure of epitaxial graphene grown on the C-face of SiC and its relation to the structure, New J. Phys. **14**, 125007 (2012).](http://dx.doi.org/10.1088/1367-2630/14/12/125007)]{}
[[I. Razado-Colambo, J. Avila, J.-P. Nys, C. Chen, X. Wallart, M.-C. Asensio, and D. Vignaud, NanoARPES of twisted bilayer graphene on SiC: absence of velocity renormalization for small angles, Sci. Rep. **6**, 27261 (2016).](http://dx.doi.org/10.1038/srep27261)]{}
[[J. Yin, H. Wang, H. Peng, Z. Tan, L. Liao, L. Lin, X. Sun, A. L. Koh, Y. Chen, H. Peng, and Z. Liu, Selectively enhanced photocurrent generation in twisted bilayer graphene with van Hove singularity, Nat. Comm. **7**, 10699 (2016).](http://dx.doi.org/10.1038/ncomms10699)]{}
[[Z. Tan, J. Yin, C. Chen, H. Wang, L. Lin, L. Sun, J. Wu, X. Sun, H. Yang, Y. Chen, H. Peng, and Z. Liu, Building Large-Domain Twisted Bilayer Graphene with van Hove Singularity, ACS Nano **10**, 6725 (2016).](http://dx.doi.org/10.1021/acsnano.6b02046)]{}
[[K. V. Emtsev, A. Bostwick, K. Horn, J. Jobst, G. L. Kellogg, L. Ley, J. L. McChesney, T. Ohta, S. A. Reshanov, J. Röhrl, E. Rotenberg, A. K. Schmid, D. Waldmann, H. B. Weber, and T. Seyller, Towards wafer-size graphene layers by atmospheric pressure graphitization of silicon carbide, Nat. Mater. **8**, 203 (2009).](http://dx.doi.org/10.1038/NMAT2382)]{}
[[S. Forti, K. V. Emtsev, C. Coletti, A. A. Zakharov, C. Riedl, and U. Starke, Large-area homogeneous quasifree standing epitaxial graphene on SiC(0001): Electronic and structural characterization, Phys. Rev. B **84**, 125449 (2011).](http://dx.doi.org/10.1103/PhysRevB.84.125449)]{}
[[T. Schumann, M. Dubslaff, M. H. Oliveira, Jr., M. Hanke, J. M. J. Lopes, and H. Riechert, Effect of buffer layer coupling on the lattice parameter of epitaxial graphene on SiC(0001), Phys. Rev. B **90**, 041403(R) (2014).](http://dx.doi.org/10.1103/PhysRevB.90.041403)]{}
[[H.-C. Shin, Y. Jang, T.-H. Kim, J.-H. Lee, D.-H. Oh, S. J. Ahn, J. H. Lee, Y. Moon, J.-H. Park, S. J. Yoo, C.-Y. Park, D. Whang, C.-W. Yang, and J. R. Ahn, Epitaxial Growth of a Single-Crystal Hybridized Boron Nitride and Graphene Layer on a Wide-Band Gap Semiconductor, J. Am. Chem. Soc. **137**, 6897 (2015).](http://dx.doi.org/10.1021/jacs.5b03151)]{}
[[S. J. Ahn, P. Moon, T.-H. Kim, H.-W. Kim, H.-C. Shin, E. H. Kim, H. W. Cha, S.-J. Kahng, P. Kim, M. Koshino, Y.-W. Son, C.-W. Yang, and J. R. Ahn, Dirac electrons in a dodecagonal graphene quasicrystal, Science **361**, 782 (2018).](http://dx.doi.org/10.1126/science.aar8412)]{}
[[J. Park, W. C. Mitchel, S. Elhamri, L. Grazulis, J. Hoelscher, K. Mahalingam, C. Hwang, S.-K. Mo, and J. Lee, Observation of the intrinsic bandgap behaviour in as-grown epitaxial twisted graphene, Nat. Comm. **6**, 5677 (2015).](http://dx.doi.org/10.1038/ncomms6677)]{}
[[U. Starke, J. Schardt, J. Bernhardt, M. Franke, and K. Heinz, Stacking Transformation from Hexagonal to Cubic SiC Induced by Surface Reconstruction: A Seed for Heterostructure Growth, Phys. Rev. Lett. **82**, 2107 (1999).](http://dx.doi.org/10.1103/PhysRevLett.82.2107)]{}
[[J. Schardt, J. Bernhardt, U. Starke, and K. Heinz, Crystallography of the [$(3\times 3)$]{}surface reconstruction of $3C$-SiC(111), $4H$-SiC(0001), and $6H$-SiC(0001) surfaces retrieved by low-energy electron diffraction, Phys. Rev. B **62**, 10335 (2000).](http://dx.doi.org/10.1103/PhysRevB.62.10335)]{}
[[W. Paszkowicz, J. B. Pelka, M. Knapp, T. Szyszko, and S. Podsiadlo, Lattice parameters and anisotropic thermal expansion of hexagonal boron nitride in the 10-297.5 K temperature range, Appl. Phys. A **75**, 431 (2002).](http://dx.doi.org/10.1007/s003390100999)]{}
[[S. Yuan, C. Shen, B. Deng, X. Chen, Q. Guo, Y. Ma, A. Abbas, B. Liu, R. Haiges, C. Ott, T. Nilges, K. Watanabe, T. Taniguchi, O. Sinai, D. Naveh, C. Zhou, and F. Xia, Air-Stable Room-Temperature Mid-Infrared Photodetectors Based on hBN/Black Arsenic Phosphorus/hBN Heterostructures, Nano Lett. **18**, 3172 (2018).](http://dx.doi.org/10.1021/acs.nanolett.8b00835)]{}
[[H. Hattab, A. T. N’Diaye, D. Wall, C. Klein, G. Jnawali, J. Coraux, C. Busse, R. van Gastel, B. Poelsema, T. Michely, F.-J. Meyer zu Heringdorf, and M. Horn-von Hoegen, Interplay of Wrinkles, Strain, and Lattice Parameter in Graphene on Iridium, Nano Lett. **12**, 678 (2011).](http://dx.doi.org/10.1021/nl203530t)]{}
[[C. Riedl, U. Starke, J. Bernhardt, M. Franke, and K. Heinz, Structural properties of the graphene-SiC(0001) interface as a key for the preparation of homogeneous large-terrace graphene surfaces, Phys. Rev. B **76**, 245406 (2007).](http://dx.doi.org/10.1103/PhysRevB.76.245406)]{}
[[P. Lauffer, K. V. Emtsev, R. Graupner, T. Seyller, L. Ley, S. A. Reshanov, and H. B. Weber, Atomic and electronic structure of few-layer graphene on SiC(0001) studied with scanning tunneling microscopy and spectroscopy, Phys. Rev. B **77**, 155426 (2008).](http://dx.doi.org/10.1103/PhysRevB.77.155426)]{}
[[J. Sforzini, L. Nemec, T. Denig, B. Stadtmüller, T.-L. Lee, C. Kumpf, S. Soubatch, U. Starke, P. Rinke, V. Blum, F. C. Bocquet, and F. S. Tautz, Approaching Truly Freestanding Graphene: The Structure of Hydrogen-Intercalated Graphene on 6H-SiC(0001), Phys. Rev. Lett. **114**, 106804 (2015).](http://dx.doi.org/10.1103/PhysRevLett.114.106804)]{}
[[E. Koren and U. Duerig, Superlubricity in quasicrystalline twisted bilayer graphene, Phys. Rev. B **93**, 201404(R) (2016).](http://dx.doi.org/10.1103/PhysRevB.93.201404)]{}
[[S. Y. Zhou, G.-H. Gweon, A. V. Fedorov, P. N. First, W. A. De Heer, D.-H. Lee, F. Guinea, A. H. Castro Neto, and A. Lanzara, Substrate-induced bandgap opening in epitaxial graphene, Nat. Mater. **6**, 770 (2007).](http://dx.doi.org/10.1038/nmat2003)]{}
[[B. Amorim, General theoretical description of angle-resolved photoemission spectroscopy of van der Waals structures, Phys. Rev. B **97**, 165414 (2018).](http://dx.doi.org/10.1103/PhysRevB.97.165414)]{}
[[A. Bauer, J. Kräußlich, L. Dressler, P. Kuschnerus, J. Wolf, K. Goetz, P. Käckell, J. Furthmüller, and F. Bechstedt, High-precision determination of atomic positions in crystals: The case of 6$H$- and 4$H$-SiC, Phys. Rev. B **57**, 2647 (1998).](http://dx.doi.org/10.1103/PhysRevB.57.2647)]{}
[[M. Horn-von Hoegen, Growth of semiconductor layers studied by spot profile analysing low energy electron diffraction – Part I, Zeit. Kristall. **214**, 591 (1999).](http://dx.doi.org/10.1524/zkri.1999.214.10.591)]{}
[[M. Horn-von Hoegen, Growth of semiconductor layers studied by spot profile analysing low energy electron diffraction – Part II, Zeit. Kristall. **214**, 684 (1999).](http://dx.doi.org/10.1524/zkri.1999.214.11.684)]{}
[[G. Li, A. Luican, J. M. B. Lopes dos Santos, A. H. Castro Neto, A. Reina, J. Kong, and E. Y. Andrei, Observation of Van Hove singularities in twisted graphene layers, Nat. Phys. **6**, 109 (2010).](http://dx.doi.org/10.1038/NPHYS1463)]{}
[[Q. Yao, R. van Bremen, G. J. Slotman, L. Zhang, S. Haartsen, K. Sotthewes, P. Bampoulis, P. L. de Boeij, A. van Houselt, S. Yuan, and H. J. W. Zandvliet, Spatially resolved electronic structure of twisted graphene, Phys. Rev. B **95**, 245116 (2017).](http://dx.doi.org/10.1103/PhysRevB.95.245116)]{}
[[L. M. Malard, J. Nilsson, D. C. Elias, J. C. Brant, F. Plentz, E. S. Alves, A. H. Castro Neto, and M. A. Pimenta, Probing the electronic structure of bilayer graphene by Raman scattering, Phys. Rev. B **76**, 201401(R) (2007).](http://dx.doi.org/10.1103/PhysRevB.76.201401)]{}
[^1]: Y.R.L. and N.S. contributed equally to this work.
[^2]: Y.R.L. and N.S. contributed equally to this work.
[^3]: In this Letter, we use the following terminology. hBN stands for hexagonal boron nitride, G for graphene, SLG for single-layer graphene, and tBLG stands for twisted bilayer graphene. The angle $\beta$ between the reciprocal unit cell vector and the $\bar\Gamma\bar M$ direction of the SiC substrate is given with the suffix $R\beta$. In other words, $R0^\circ$ corresponds to the $\bar\Gamma\bar M$ direction of the SiC, and $R30^\circ$ corresponds to the $\bar\Gamma\bar K$ direction. The twist angle $\alpha$ in tBLG is given as a prefix, e.g., [$30^\circ$-tBLG]{}.
[^4]: The emissivity value used is $\epsilon=0.825$. The measured temperature depends on the SiC doping level and wafer thickness. The temperatures at which the $(\sqrt{3}\times\sqrt{3})R30^\circ$ and [$(3\times 3)$]{} reconstructions form are used to calibrate the temperature.
[^5]: The surface Brillouin zone has been calibrated with respect to the 6$H$-SiC lattice with a reference lattice parameter value of $(3.08129\pm 0.00004)~\mathrm{\AA}$ [@Bauer1998].
[^6]: Note that the instrumental resolution is approximately four times greater. Typically, a resolution-limited transfer width of $2000~\mathrm{\AA}$ is expected for a SPA-LEED instrument [@vonHoegen1999-1; @vonHoegen1999-2].
[^7]: It is well known that in the $(6\sqrt{3}\times6\sqrt{3})R30^\circ$ buffer layer reconstruction, the low-order diffraction spots of the $(6\times 6)$ sub-pattern are particularly intense [@Riedl2007; @Lauffer2008].
[^8]: A similar absorption effect has been reported in nano-ARPES on tBLG with various twist angles and substrates [@Yao2018; @Peng2017; @Yin2016].
[^9]: Compared to the calculation performed in Ref. [@Amorim2018], the value of the graphene intralayer nearest-neighbor hopping was adjusted ($t = 3.11$ eV for [G-$R0^\circ$]{} and 3.08 eV for [G-$R30^\circ$]{}) in order to reproduce the energy at which the avoided crossing is observed. This corresponds to a Fermi velocity of $0.998\times10^6$ m/s. The interlayer coupling parameter $V_{pp\pi}^0$ was set to $-2.7$ eV, although its precise value is not important, as the interlayer coupling is dominated by $V_{pp\sigma}^0$. Besides $t$, $V_{pp\pi}^0$, and $V_{pp\sigma}^0$, all other parameters are the same.
[^10]: For twisted and untwisted AB-stacked bilayer graphene, $V_{pp\sigma}^0$ ranges from 0.24 to 0.30 eV [@Yao2017; @Li2010; @Malard2007].
---
abstract: 'We study information theoretic methods for ranking biomarkers. In clinical trials there are two closely related types of biomarkers: predictive and prognostic, and disentangling them is a key challenge. Our first step is to phrase biomarker ranking in terms of optimizing an information theoretic quantity. This formalization of the problem will enable us to derive rankings of predictive/prognostic biomarkers, by estimating different, high dimensional, [*conditional mutual information*]{} terms. To estimate these terms, we suggest efficient low dimensional approximations, and we derive an empirical Bayes estimator, which is suitable for small or sparse datasets. Finally, we introduce a new visualisation tool that captures the [*prognostic*]{} and the [*predictive*]{} strength of a set of biomarkers. We believe this representation will prove to be a powerful tool in biomarker discovery.'
author:
- |
Konstantinos Sechidis\
School of Computer Science\
University of Manchester\
`[email protected]`\
Emily Turner\
School of Computer Science\
University of Manchester\
`[email protected]`\
Paul D. Metcalfe\
Advanced Analytics Centre\
Global Medicines Development, AstraZeneca\
`[email protected]`\
James Weatherall\
Advanced Analytics Centre,\
Global Medicines Development, AstraZeneca\
`[email protected]`\
Gavin Brown\
School of Computer Science\
University of Manchester\
`[email protected]`\
bibliography:
- './Bibliography.bib'
title: Ranking Biomarkers Through Mutual Information
---
Introduction
============
We present an information theoretic approach to disentangle predictive and prognostic biomarkers. In clinical trials, a [*prognostic biomarker*]{} is a clinical or biological characteristic that provides information on the likely outcome irrespective of the treatment. On the other hand, a [*predictive biomarker*]{} is a clinical or biological characteristic that provides information on the likely benefit from treatment. One of the key challenges in personalised medicine is to discover predictive biomarkers, which will guide the analysis for tailored therapies, while discovering prognostic biomarkers is crucial for general patient care [@Ruberg2015]. We should clarify that our work focuses on hypothesis generation (exploratory analysis), instead of hypothesis testing (confirmatory analysis) [@DmitrienkoEtAll2016]. In our work we will focus on a clinical dataset $\mathcal{D}=\{y_i,\x_i,t_i\}_{i=1}^n$, where $y$ is a realization of a binary target variable $Y,$ $t$ is a realization of the binary treatment indicator $T$ (i.e. $T=1$ if the patient received the experimental treatment, $0$ otherwise), and $\x$ is a $p$-dimensional realization of the feature vector $\X,$ which describes the joint random variable of the $p$ categorical features (or biomarkers). To make the distinction between prognostic and predictive biomarkers more formal, we will follow a strategy introduced in various previous works [@FosterEtAll2011; @LipkovichDmitrienko2014b]. Let us assume that the true underlying model is the following logistic regression with up to second order interaction terms: [ $$\begin{aligned}
\text{logit}P(\Ypos|t,\x) = \alpha + \sum_{i=1}^p \beta_{i} x_{i} + \sum_{i,j=1}^p \beta_{i,j} x_{i}x_{j}
+ \gamma t + \left( \sum_{i=1}^p \delta_{i} x_{i} + \sum_{i,j=1}^p \delta_{i,j} x_{i}x_{j} \right)t. \notag\end{aligned}$$ ]{} Covariates with non-zero $\beta$ coefficients are prognostic, while the ones with non-zero $\delta$ coefficients are predictive. Our work proposes an information theoretic framework for deriving two different rankings of the biomarkers, one that captures their [*prognostic*]{} strength, and one that captures their [*predictive*]{} strength. On top of that, we introduce a visualisation tool that captures both the [*prognosticness*]{} and the [*predictiveness*]{} of a set of biomarkers. This tool enables us to identify potentially undiscovered biomarkers, worthy of further investigation.
Background on Biomarker Ranking {#sec:Back_BiomarkerRanking}
===============================
This section connects the problem of biomarker discovery with the machine learning problem of feature selection and the clinical trials problem of subgroup identification.
Prognostic Biomarker Discovery and Feature Selection {#sec:Back_Prognostic}
----------------------------------------------------
We now demonstrate that the problem of selecting [*prognostic biomarkers*]{} is equivalent to feature selection using a supervised dataset $\{y_i,\x_i\}_{i=1}^n$. There are many different methods for feature selection, but we will focus on information theoretic approaches, where, firstly we [*rank*]{} the features and then we [*select*]{} the top-$k$ ones that contain most of the useful information. The underlying objective function is to find the smallest feature set $\X^*$ that maximizes $I(\X^*;Y)$, or in other words that the shared information between $\X^*$ and $Y$ is maximized: $$\begin{aligned}
\X^* = {\underset{\X_{\theta} \in \X}{\operatorname{arg}\,\operatorname{max}}\;} {I}(\X_{\theta};Y). \notag\end{aligned}$$ ]{} @BrownPocockZhaoLujan2012 derived a greedy optimization process which assesses features based on a simple scoring criterion on the utility of including a feature. At each step we select the feature $X_k$ that maximizes the conditional mutual information (CMI): $J^{\text{CMI}}(X_k) = \hat{I}(X_k;Y|{\bf{X}}_{\theta}),$ where ${\bf{X}}_{\theta}$ is the set of features already selected. As the number of selected features grows, the dimension of ${\bf{X}}_{\theta}$ also grows, and this makes our estimates less reliable. To overcome this problem, [*low order*]{} criteria have been derived. For example, by ranking the features independently by their mutual information with the class label, we derive a ranking that takes into account only the *relevancy* of each feature. Choosing the features according to this ranking corresponds to the [*Mutual Information Maximization*]{} (MIM) criterion, where the score of each feature $X_k$ is given by: $J^{\text{MIM}}(X_k)=I(X_k;Y).$ This approach does not take into account the *redundancy* between the features. By using more advanced techniques [@PengLongDing2005], we can take into account both the relevancy and the redundancy between the features themselves, [*without*]{} having to compute very high dimensional distributions. @BrownPocockZhaoLujan2012 showed that a criterion that controls relevancy, redundancy and conditional redundancy, and provides a very good tradeoff in terms of accuracy, stability and flexibility, is the [*Joint Mutual Information*]{} (JMI) criterion [@YangMoody1999]: $J^{\text{JMI}}(X_k) = \sum_{X_j \in {\bf{X}}_{\theta}}\hat{I}(X_k;Y|X_j).$ Heuristically, this criterion guarantees an increase in the likelihood at each step.
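To make the MIM and JMI criteria concrete, the following is a minimal sketch for integer-coded categorical data using plug-in (maximum-likelihood) estimates; the function names are ours, not from any specific library:

```python
import numpy as np

def entropy(*cols):
    """Plug-in joint entropy (in nats) of discrete variables given as 1-D arrays."""
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mi(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(x) + entropy(y) - entropy(x, y)

def cmi(x, y, z):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)."""
    return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

def jmi_rank(X, y, k):
    """Greedy JMI ranking: seed with the MIM winner, then repeatedly pick
    argmax_f sum_{j in selected} I(X_f; Y | X_j)."""
    remaining = list(range(X.shape[1]))
    selected = [max(remaining, key=lambda f: mi(X[:, f], y))]
    remaining.remove(selected[0])
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda f: sum(cmi(X[:, f], y, X[:, j]) for j in selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Here `jmi_rank` returns feature indices in ranked order; swapping the inner score for `mi(X[:, f], y)` alone recovers the MIM ranking.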
While the above framework has been suggested for supervised scenarios, our aim is to explore how it can be extended to be useful in clinical trial scenarios, i.e. datasets of the form $\mathcal{D}$. The extra treatment variable $T$ provides interesting dynamics, but before presenting our suggested extension, we briefly review the literature on predictive biomarkers and subgroup identification.
Predictive Biomarker Discovery and Subgroup Identification {#sec:Back_Predictive}
----------------------------------------------------------
The problem of deriving [*predictive biomarkers*]{} is closely related to the problem of subgroup identification [@DmitrienkoEtAll2016]. In clinical trials, patient populations cannot be considered homogeneous, and thus the effect of treatment will vary across different subgroups of the population. Exploring the heterogeneity of subject responses to treatment is critical for drug development, which is underlined by a draft Food and Drug Administration guidance [@Ruberg2015]. As a result, consideration of patient subgroups is necessary in multiple stages of trial development. @Berry1990 gives the following definition: subgrouping is a partition of the set of all patients into disjoint subsets or subgroups, and it is usually determined by a small number of measurable covariates, which are the predictive biomarkers. In the traditional subgroup identification problem the set of predictive biomarkers is relatively small, i.e. 2-3 biomarkers [@LipkovichEtAll2011].
In the literature there are many different methods for subgroup identification. A popular one is [*recursive partitioning*]{} of the covariate space, using criteria that capture the interaction between $T$ and $Y$ [@SuEtAll2009; @LipkovichEtAll2011; @LohEtAll2015]. Another solution builds upon the [*counterfactual modelling*]{} idea: firstly by deriving a new variable for each patient that captures the treatment effect and then using this variable to select or rank the covariates. For example, @FosterEtAll2011 can be seen as exploring the covariate space which maximizes the odds-ratio between $T$ and $Y$. In the following section, we will show that starting from a natural objective function, we can derive predictive biomarkers by exploring areas that maximize the mutual information between $T$ and $Y$.
An Information Theoretic View on Biomarker Ranking
==================================================
Our work extends the feature ranking framework from supervised to clinical trial data. The treatment variable $T$ provides extra useful information, and a natural way to capture this is by the following criterion: to maximize the shared mutual information between the target $Y$ and the joint random variable of the treatment $T$ and the optimal feature set $\X^*$, or in information theoretic notation: $
\X^* = \textrm{argmax}~I (\X_{\theta}T;Y).
$ By using the chain rule [@CoverThomas2006], this objective can be decomposed in the following way: [ $$\begin{aligned}
\X^* = {\underset{\X_{\theta} \in \X}{\operatorname{arg}\,\operatorname{max}}\;} {I}(\X_{\theta}T;Y) ={\underset{\X_{\theta} \in \X}{\operatorname{arg}\,\operatorname{max}}\;} \Big( \underbrace{{I}(\X_{\theta};Y)}_{\mathclap{\text{Prognostic term}}} + \underbrace{{I}(T;Y|\X_{\theta})}_{\mathclap{\text{Predictive term}}}\Big) \notag\end{aligned}$$ ]{} The first term captures the features with prognostic power, while the second captures the features with predictive power. By optimizing these two terms independently we can derive two different objectives for the two different feature sets: $\X^*_{\text{Prog}} = {\underset{\X_{\theta} \in \X}{\operatorname{arg}\,\operatorname{max}}\;} {I}(\X_{\theta};Y)$ and $\X^*_{\text{Pred}} = {\underset{\X_{\theta} \in \X}{\operatorname{arg}\,\operatorname{max}}\;} {I}(T;Y|\X_{\theta}).$ Similar to [@BrownPocockZhaoLujan2012], to optimize these two objectives, we can derive a greedy optimization process, where at each step we select the feature $X_k$ that maximizes the following terms: $$\begin{aligned}
J_{{Prog}}(X_k)=I(X_k;Y|\X_{{Prog}}),~~~~~~~~~~~~~~~~J_{{Pred}}(X_k)=I(T;Y|X_k\X_{{Pred}}). \notag\end{aligned}$$
where ${\bf{X}}_{{Prog}}$ is the set of features already ranked as prognostic, and ${\bf{X}}_{{Pred}}$ the set already ranked as predictive. As the number of selected features grows, the dimension of ${\bf{X}}_{{Prog}}$ and ${\bf{X}}_{{Pred}}$ also grows, and this makes the estimates less reliable. To overcome this issue, we derive low-order approximations, such as the one presented in Section \[sec:Back\_Prognostic\].
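The chain-rule decomposition $I(\X_{\theta}T;Y) = I(\X_{\theta};Y) + I(T;Y|\X_{\theta})$ is an identity of the joint distribution, so it also holds exactly for plug-in estimates computed from the same empirical distribution. A small numerical check on hypothetical discrete data (the data-generating rule here is ours, chosen only for illustration):

```python
import numpy as np

def entropy(*cols):
    """Plug-in joint entropy (in nats) of discrete variables given as 1-D arrays."""
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
x = rng.integers(0, 3, 300)                      # a (one-dimensional) feature set
t = rng.integers(0, 2, 300)                      # treatment indicator
y = (x + t + rng.integers(0, 2, 300)) % 2        # outcome depending on both

# Left side: I(X T; Y), treating the pair (X, T) as one joint variable
i_xt_y = entropy(x, t) + entropy(y) - entropy(x, t, y)
# Right side: prognostic term I(X;Y) plus predictive term I(T;Y|X)
prognostic = entropy(x) + entropy(y) - entropy(x, y)
predictive = entropy(t, x) + entropy(y, x) - entropy(t, y, x) - entropy(x)
assert abs(i_xt_y - (prognostic + predictive)) < 1e-12
```

The assertion passes up to floating-point rounding regardless of the sample drawn, since both sides are computed from the same empirical frequencies.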
Lower-order approximations
--------------------------
With the following theorem we present our main contribution – lower order approximations of $J_{\text{Prog}}(X_k)$ and $J_{\text{Pred}}(X_k)$:
\[thm\] The first- and second-order approximations for deriving prognostic (Prog.) and predictive (Pred.) rankings are given by:\
[ $$\begin{aligned}
J^{1^{st}}_{\text{Prog}}(X_k) & = I(X_k;Y), \notag \\
J^{2^{nd}}_{\text{Prog}}(X_k) & = \sum_{X_j \in \X_{\text{Prog}}}{I}(X_k;Y|X_j), \notag\end{aligned}$$ ]{}
[ $$\begin{aligned}
J^{1^{st}}_{\text{Pred}}(X_k) & = I(T;Y|X_k). \notag \\
J^{2^{nd}}_{\text{Pred}}(X_k) & = \sum_{X_j \in \X_{\text{Pred}}}{I}(T;Y|X_kX_j). \notag\end{aligned}$$ ]{}
[Proof sketches: For the prognostic criteria, the proof is identical to that of [@BrownPocockZhaoLujan2012], while for the predictive criteria the approximations follow by combining the results of @BrownPocockZhaoLujan2012 with the chain rule [@CoverThomas2006].]{}
For example, by making assumptions similar to those of MIM, we can derive the $1^{st}$-order criteria for the prognostic and predictive rankings respectively. These criteria do not take into account interactions between features, and as a result fail to capture the [*redundancy*]{}. To overcome this limitation, we can use higher order criteria, such as JMI, which explores $2^{nd}$-order interaction terms between features.
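A minimal sketch of the $1^{st}$-order criteria of Theorem \[thm\], assuming integer-coded (discretized) features and plug-in estimates; the helper names are ours:

```python
import numpy as np

def entropy(*cols):
    """Plug-in joint entropy (in nats) of discrete variables given as 1-D arrays."""
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def first_order_scores(X, y, t):
    """J1_Prog(Xk) = I(Xk;Y) and J1_Pred(Xk) = I(T;Y|Xk) for every feature k."""
    prog, pred = [], []
    for k in range(X.shape[1]):
        xk = X[:, k]
        prog.append(entropy(xk) + entropy(y) - entropy(xk, y))
        pred.append(entropy(t, xk) + entropy(y, xk)
                    - entropy(t, y, xk) - entropy(xk))
    return np.array(prog), np.array(pred)
```

Sorting the features by `prog` gives the $1^{st}$-order prognostic ranking and sorting by `pred` gives the predictive one; the $2^{nd}$-order criteria replace each score by a sum of such terms over the already-ranked features.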
Estimating Conditional Mutual Information Through an Empirical Bayes Approach
-----------------------------------------------------------------------------
In order to derive the above rankings we need to estimate conditional mutual information terms. In our work we focus on categorical data, and we derive an efficient way of estimating these terms through an empirical Bayes procedure. Due to space limitations we omit the technical details, but our analysis extends work on entropy estimation: @hausser2009 suggested an entropy estimator that employs James-Stein-type shrinkage at the level of cell frequencies. Building upon this, we derived an estimator for the conditional mutual information. Our proposed estimator achieves smaller mean squared error than maximum-likelihood, especially in “small $n$, large $p$” scenarios – which are common in microarray data. For example, Figure \[fig:MSE\] compares the performance of the maximum likelihood estimator against our proposed empirical Bayes approach, and as we observe, our estimator converges much faster.
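A condensed sketch of this kind of estimator: shrink the maximum-likelihood cell frequencies toward the uniform target with the closed-form James-Stein intensity of @hausser2009, then compute the conditional mutual information from the shrunk joint distribution. The implementation details below are illustrative, not the exact estimator of the paper:

```python
import numpy as np

def js_shrink(counts):
    """James-Stein shrinkage of ML cell frequencies toward the uniform
    target 1/K (Hausser & Strimmer, 2009). counts: array of table counts."""
    counts = counts.astype(float).ravel()
    n = counts.sum()
    p_ml = counts / n
    target = 1.0 / counts.size
    num = 1.0 - (p_ml ** 2).sum()
    den = (n - 1.0) * ((target - p_ml) ** 2).sum()
    lam = 1.0 if den == 0 else min(1.0, max(0.0, num / den))
    return lam * target + (1.0 - lam) * p_ml

def cmi_shrink(x, y, z):
    """I(X;Y|Z) computed from the shrunk joint distribution of (X,Y,Z)."""
    table = np.zeros((x.max() + 1, y.max() + 1, z.max() + 1))
    np.add.at(table, (x, y, z), 1)
    p = js_shrink(table).reshape(table.shape)

    def H(q):
        q = q[q > 0]
        return float(-(q * np.log(q)).sum())

    # I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)
    return H(p.sum(1)) + H(p.sum(0)) - H(p) - H(p.sum((0, 1)))
```

Because the shrunk cell probabilities form a valid distribution, the resulting estimate is always non-negative; the shrinkage intensity grows as the sample gets smaller or the table sparser, which is where the gain over maximum likelihood is largest.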
Predictive–Prognostic (PP) Graphs
=================================
We now present a visualisation tool that captures both the [*prognostic*]{} and the [*predictive*]{} power of a set of biomarkers (PP-graphs). We believe that this representation will provide useful information on both the prognostic and predictive power of each biomarker, and that it will be helpful for controlling false discoveries in clinical trials. For example, in subgroup identification (Section \[sec:Back\_Predictive\]), we define interesting subgroupings by using predictive biomarkers. Many methods, such as the counterfactual modelling approach of Virtual twins [@FosterEtAll2011], identify as predictive biomarkers that are in fact strongly prognostic. Using a PP-graph we get more insight into the prognostic and predictive power of each biomarker, and this may help in eliminating these types of errors.
We now illustrate these graphs through a motivating example, using the same data-generation model as in [@FosterEtAll2011]. We simulate randomized trials with $1000$ patients, where the $X$s are generated as independent $X_j \sim N(0,1),\ j=1...15$. We consider logit models for data generation $$\text{logit} P(\Ypos|t,\x) = -1 + 0.5x_1 + 0.5x_2 -0.5x_7 + 0.5x_2x_7 + 0.1 t + 1.5 t \mathbb{I}(x_1>0 \cap x_2<0 \cap x_3>0).$$ Thus, the patients with $\left( x_1>0 \cap x_2<0 \cap x_3>0 \right)$ have an enhanced treatment effect. As a result, the three variables $X_1, X_2$ and $X_3$ are the predictive biomarkers. Furthermore, $X_1, X_2$ and $X_7$ are the three prognostic biomarkers, and the other nine biomarkers are irrelevant. Figure \[fig:PP\] shows three PP-graphs. On the $x$-axis we plot the normalised score of each biomarker derived from a prognostic ranking; scores are normalised to take values in $[0,1]$, where $1$ is the score of the most prognostic biomarker. On the $y$-axis we plot the normalised scores for the predictive ranking. The red area (vertical shaded region) represents the top-$k$ prognostic biomarkers, while the green (horizontal shaded region) represents the top-$k$ predictive ones; for these specific PP-graphs we used $k=3$, which corresponds to the score cut-off value of $(p-k)/p=(15-3)/15=0.80$. The intersection of these two areas – the orange area (top right shaded corner) – should contain the biomarkers that are both prognostic and predictive. We plot the average predictive/prognostic rankings over $100$ sample datasets, using Virtual-twins [@FosterEtAll2011] and our two approaches suggested in Theorem \[thm\]. For estimating mutual information, the features were discretized into $4$ equal-width bins. As we observe, Virtual-twins [@FosterEtAll2011] tends to push a prognostic biomarker (i.e. $X_7$) into the predictive area – a [*false positive*]{}.
The $1^{st}$-order approach classifies $X_1$ only as prognostic and not as predictive – a [*false negative*]{} – while our $2^{nd}$-order criterion distinguishes perfectly between predictive and prognostic biomarkers.
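The data-generation model above is easy to reproduce. The sketch below simulates one such trial; the 1:1 randomised treatment assignment and the random seed are our assumptions, not stated in the original description.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.standard_normal((n, 15))            # X_1..X_15 ~ N(0,1), independent
t = rng.integers(0, 2, size=n)              # treatment indicator (assumed 1:1 randomisation)

x1, x2, x3, x7 = X[:, 0], X[:, 1], X[:, 2], X[:, 6]
subgroup = (x1 > 0) & (x2 < 0) & (x3 > 0)   # enhanced-treatment-effect subgroup

# logit P(Y=1 | t, x) as in the data-generation model above
logit = (-1 + 0.5 * x1 + 0.5 * x2 - 0.5 * x7
         + 0.5 * x2 * x7 + 0.1 * t + 1.5 * t * subgroup)
p = 1.0 / (1.0 + np.exp(-logit))            # inverse-logit
y = rng.binomial(1, p)                      # binary outcome
```

Repeating this generation $100$ times and averaging the rankings reproduces the setting of the PP-graphs in Figure \[fig:PP\].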
[0.32]{} ![[**P-P graphs**]{} when: $X_1,X_2$ and $X_3$ are truly predictive, $X_1,X_2$ and $X_7$ are truly prognostic, and the remaining nine biomarkers are irrelevant. Note that our $2^{nd}$-order approximation distinguishes perfectly between predictive and prognostic. []{data-label="fig:PP"}](./Counterfactual "fig:"){width="\textwidth"}
[0.32]{} ![[**P-P graphs**]{} when: $X_1,X_2$ and $X_3$ are truly predictive, $X_1,X_2$ and $X_7$ are truly prognostic, and the remaining nine biomarkers are irrelevant. Note that our $2^{nd}$-order approximation distinguishes perfectly between predictive and prognostic. []{data-label="fig:PP"}](./MIM "fig:"){width="\textwidth"}
[0.32]{} ![[**P-P graphs**]{} when: $X_1,X_2$ and $X_3$ are truly predictive, $X_1,X_2$ and $X_7$ are truly prognostic, and the remaining nine biomarkers are irrelevant. Note that our $2^{nd}$-order approximation distinguishes perfectly between predictive and prognostic. []{data-label="fig:PP"}](./JMI "fig:"){width="\textwidth"}
Conclusions and Future Work
===========================
In this work we focused on disentangling the rankings of biomarkers that quantify their predictive and their prognostic power. We presented an information-theoretic approach, where we started from a clearly specified objective function and suggested lower-order approximations. Furthermore, we suggested an efficient estimator for these approximations, using an empirical Bayes approach to estimate conditional mutual information. Lastly, we introduced a new graphical representation that captures the dynamics of biomarker ranking. For future work we plan to apply our methodologies to discovering cardiovascular events in patients undergoing hemodialysis [@CardiovascularDS2009]. This study contains numerical and categorical covariates. Since discretizing the numerical features may be a suboptimal solution, we should explore ways of handling them directly; one potential approach is to use the maximal information coefficient [@reshef2011detecting]. Another interesting direction is to improve the interpretability of the PP-graphs. For example, in the $1^{st}$-order approach, instead of plotting the ranking score of each biomarker, we can plot a $p$-value derived from a univariate test of whether the biomarker is predictive or prognostic.
---
abstract: 'We present an alternative scheme for finding apparent horizons, based on spectral methods applied to Robinson-Trautman spacetimes. We have considered distinct initial data, such as those representing spheroids of matter and the head-on collision of two non-rotating black holes. The evolution of the apparent horizon is presented. In some cases we have obtained a mass gap between the final Bondi and apparent horizon masses, whose implications are briefly discussed in the light of the thermodynamics of black holes.'
author:
- 'H. P. de Oliveira'
- 'E. L. Rodrigues'
- 'I. Damião Soares'
title: 'The dynamics of apparent horizons in Robinson-Trautman spacetimes'
---
Introduction
============
One of the most important problems in classical General Relativity is the evolution of apparent horizons. The apparent horizon [@AH] is defined as the outermost marginally trapped surface that can be located on each spacelike surface during the overall dynamics of the spacetime. According to the cosmic censorship hypothesis, an event horizon must exist outside the apparent horizon [@AH]; for this reason apparent horizons are the key structures that signal the formation of black holes in gravitational collapse, and they also play a relevant role in the merging of black holes [@coal_bh]. Besides the apparent horizon, another typical structure present in a spacetime that contains a black hole is the event horizon, but its determination depends on the whole history of the spacetime, since an event horizon is the boundary that separates those null geodesics that reach infinity from those that do not. In essence, while the event horizon is a global structure, the apparent horizon is local, meaning that it can be determined at each instant. Therefore, the construction of apparent horizon finders is a crucial issue in numerical relativity that has received a great deal of attention in recent years [@AH_finders]. Basically, these codes are built to solve numerically the apparent horizon equation, which is a nonlinear elliptic equation, simultaneously with the numerical evolution of the spacetime.
In general, most numerical strategies for solving the apparent horizon equation are based on finite-difference techniques. On the other hand, numerical codes based on spectral methods [@bonazzola; @review_sm] have become considerably more common in recent years, mainly due to the economy of computational resources required to achieve a desired accuracy. In this direction we present here a simple and efficient numerical strategy, using a convenient combination of Galerkin [@galerkin] and collocation methods [@boyd; @canuto], to determine the evolution of the apparent horizon of Robinson-Trautman spacetimes [@rt].
The Robinson-Trautman (RT) spacetimes are the simplest class of asymptotically flat geometries admitting gravitational waves, with two interesting basic features: (a) a RT spacetime can be interpreted as describing the exterior geometry of a bounded or isolated system emitting gravitational waves; (b) for regular initial data the RT spacetimes evolve asymptotically towards the Schwarzschild black hole [@chru]. For the sake of completeness, the line element of the Robinson-Trautman spacetimes can be conveniently written as
$$\begin{aligned}
ds^2 &=& \left(\lambda(u,\theta) - \frac{2 m_0}{r} + 2 r \frac{\dot{K}}{K}\right) d u^2 + 2 du dr - \nonumber \\
& & r^{2}K^{2}(u,\theta)(d \theta^{2}+\sin^{2}\theta d \varphi^{2}), \label{eq1}\end{aligned}$$
where the dot denotes the derivative with respect to the null coordinate $u$, $r$ is the radial coordinate, $(\theta,\varphi)$ are the usual angular coordinates, and $m_0$ is an arbitrary constant. The Einstein equations can be cast in the following form
$$\label{eq2} \lambda(u,\theta)=\frac{1}{K^2}-\frac{K_{\theta \theta}}{K^3}+\frac{K_{\theta}^{2}}{K^4}-\frac{K_{\theta}}{K^3}\cot
\theta$$
$$-6 m_{0}\frac{\dot{K}}{K}+\frac{(\lambda_{\theta} \sin
\theta)_{\theta}}{2 K^2 \sin \theta}=0. \label{eq3}$$
Here the subscript $\theta$ denotes the derivative with respect to the angle $\theta$; the function $\lambda(u,\theta)$ is the Gaussian curvature of the surfaces $(u={\rm{const.}},r={\rm{const.}})$. The structure of the field equations is typical of a characteristic problem [@winicour], in which the first equation is a hypersurface equation relating the functions $\lambda(u,\theta)$ and $K(u,\theta)$, whereas the second equation is the evolution equation. Accordingly, once the initial data $K(u_0,\theta)$ is prescribed on a given null surface $u=u_0$, the hypersurface equation fixes $\lambda(u_0,\theta)$, and the evolution equation determines $K(u,\theta)$ on the next null surface; the whole process repeats, providing the evolution of the spacetime.
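The characteristic marching just described can be summarised schematically. The following Python sketch is ours, not the authors' code: `hypersurface_eq` and `evolution_rhs` stand in for discretised versions of Eqs. (\[eq2\]) and (\[eq3\]), and the forward-Euler step is purely an illustrative choice of time integrator.

```python
def evolve_rt(K0, du, n_steps, hypersurface_eq, evolution_rhs):
    """Schematic characteristic marching for the RT system.

    At each null surface the hypersurface equation fixes lambda from K,
    then the evolution equation advances K to the next surface.
    `hypersurface_eq(K)` returns lambda on the current slice (Eq. (2));
    `evolution_rhs(K, lam)` returns dK/du from the evolution equation (Eq. (3)).
    """
    K = K0
    history = [K0]
    for _ in range(n_steps):
        lam = hypersurface_eq(K)              # fix lambda on this slice
        K = K + du * evolution_rhs(K, lam)    # advance K (illustrative Euler step)
        history.append(K)
    return history
```

In practice the slice data $K$ would be a vector of Galerkin modes and a higher-order integrator would replace the Euler step, but the slice-by-slice structure is the same.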
The only known analytical solutions of the RT field equations are the two forms of the Schwarzschild solution described by
$$\begin{aligned}
K = K_0 = \mathrm{constant}, \label{eq4}\\
\nonumber \\
K(\theta) = \frac{\bar{K_0}}{\cosh\gamma + \cos\theta \sinh\gamma}. \label{eq5}\end{aligned}$$
The first reproduces the Schwarzschild black hole with mass $M_{BH} = m_0 K_0^3$, while the second describes a boosted black hole with constant velocity $v = \tanh \gamma$ with respect to an inertial observer at infinity. In this case the total mass-energy content is given by
$$M_{BH} = \frac{m_0 \bar{K_0}^3}{\sqrt{1-v^2}}. \label{eq6}$$
Notice that the above expression corresponds to the total relativistic energy of a moving particle with rest mass $m_0 \bar{K_0}^3$ and velocity $v$.
This paper is organized as follows. In Section 2 we present the apparent horizon equation and the numerical strategy, based on spectral methods, adopted to solve it. In Section 3 we exhibit the numerical results, which consist of tests of the code along with the dynamics of the apparent horizon corresponding to initial data representing spheroids [@rt_radiation] and the collision of black holes [@rt_collision] in RT spacetimes. The final remarks are presented in Section 4.
Solving the apparent horizon equation using spectral methods
============================================================
As already mentioned, RT spacetimes have interesting features such as asymptotic flatness and the presence of gravitational waves, and can be interpreted as arising from a bounded distribution of matter evolving towards a Schwarzschild black hole, therefore providing a simple example of non-spherical collapse. However, these geometries do not have a future apparent horizon, characterized by the vanishing of the null expansion associated with outgoing future-directed rays, but only a past apparent horizon [@penrose; @tod; @chow_lun]. A past apparent horizon is the outermost boundary of past-trapped surfaces corresponding to a given value of $u$; more precisely, consider a hypersurface $S$ defined by $S = r-V(u,\theta) = 0$. If $S$ is a marginally past-trapped 2-surface, the ingoing normal null vector $n_\alpha=\partial_\alpha S$ has vanishing divergence
$$\theta_{-} = n^\alpha_{;\alpha}=0. \label{eq7}$$
From this equation it can be shown [@tod] that the function $V(u,\theta)$ satisfies the following equation at each slice $u=$ constant,
$$\frac{1}{\sin \theta}\,\left(\sin \theta \frac{V_\theta}{V}\right)_\theta - \lambda K^2 + \frac{2 m_0}{V}K^2 = 0, \label{eq8}$$
which is known as the apparent horizon equation. The dynamics of the apparent horizon is obtained after solving this equation at each hypersurface $u=\mathrm{constant}$, where the function $K(u,\theta)$ is determined from the evolution equation (\[eq3\]). There are few analytical results about the properties of past apparent horizons in RT spacetimes. Tod [@tod] has shown the validity of the isoperimetric inequality and the existence of a unique marginally past-trapped surface at each hypersurface $u$=constant.
The apparent horizon equation (\[eq8\]) will be solved here using a numerical scheme based on a suitable combination of Galerkin and collocation methods, similar to the one we have implemented to solve the field equations (\[eq2\]) and (\[eq3\]). For this reason we shall briefly outline our previous numerical scheme [@rt_prd; @rt_ijmpc] for solving the field equations and, in the sequel, the procedure employed to integrate the apparent horizon equation. According to Ref. [@rt_ijmpc] the first step is to establish the Galerkin expansion for the function $K(u,\theta)$,
$$K_a^2(u,x) = {\rm e}^{Q_a(u,x)}={\exp}\left(\sum_{k=0}^{N}\,b_k(u) P_k(x)\right),
\label{eq9}$$
where the subscript $a$ indicates an approximation of the exact $K(u,x)$. The angular coordinate $\theta$ is replaced by $x=\cos \theta$, $N$ is the truncation order that indicates where the series stops, and the $N+1$ modes $b_k(u)$ are unknown functions of $u$ to be determined; the Legendre polynomials $P_k(x)$ were chosen as the basis or trial functions. Next, an approximate expression for the function $\lambda(u,x)$ is obtained after substituting the above expansion into the constraint equation (\[eq2\]), namely
[$$\begin{aligned}
\lambda_a(u,x)=\mathrm{e}^{-Q_a(u,x)}\left(1 + \sum_{k=0}^{N}\frac{k(k+1)}{2} b_k(u) P_k(x)\right).
\label{eq10}\end{aligned}$$ ]{}
These last two equations are substituted into Eq. (\[eq3\]) to yield what is known as the residual equation associated to the evolution equation,
$$\begin{aligned}
\mathrm{Res}_K(u,x)=6\,m_0\,\sum_{k=0}^{N}\,\dot{b}_k(u) P_k(x) - \nonumber \\
{\rm e}^{-Q_a(u,x)}\,\Big[(1-x^2)\,\lambda_a^{\prime}\Big]^{\prime},
\label{eq11}\end{aligned}$$
where prime denotes derivative with respect to $x$. Notice that the residual equation does not vanish exactly due to the adopted approximations for the functions $K(u,x)$ and $\lambda(u,x)$, but as we have shown it converges to zero as the truncation order $N$ is increased [@rt_ijmpc]. Following the Galerkin method, the projections of the residual equation with respect to a suitable set of test functions ${\Psi_n(x)}$ vanish, namely
$$\begin{aligned}
\left<\mathrm{Res}_K(u,x),\Psi_n(x)\right> = \int_{-1}^1\,{\rm Res}_K(u,x) \Psi_n(x)\,dx = 0,\nonumber \\
\label{eq12}\end{aligned}$$
for $n=0,1,..N$. This means that the modes $b_j(u)$ are chosen in such a way that the residual equation is forced to be zero in an average sense [@finlayson]. Following the Galerkin method, we have selected the test functions for the above integrations to be the same as the trial functions, $\Psi_n(x)=P_n(x)$. At this point we have introduced an additional approximation for the exponential term given by
$$\exp(-Q_\mathrm{a}(u,x)) \approx \sum_{j=0}^{\bar{N}}\,c_j T_j(x), \label{eq13}$$
where $T_j(x)$ is the Chebyshev polynomial of order $j$ and $\bar{N}$ indicates the number of modes $c_j$. Basically, the motivation behind this approximation is to allow rapid and direct integrations of the residual equation. As a consequence of the above expansion, the $\bar{N}+1$ modes $c_j$ are related to the $N+1$ modes $b_k$ by requiring that the projections of the residual equation $\mathrm{Res}_Q(u,x) = \exp(-Q_\mathrm{a}(u,x)) - \sum_{j=0}^{\bar{N}}\,c_j T_j(x)$ with respect to the test functions $\Psi_n(x)=\delta(x-x_n)$ vanish, where $x_0,x_1,..,x_{\bar{N}}$ are the collocation points associated with the Chebyshev polynomials. The additional approximation (\[eq13\]) is introduced into Eq. (\[eq12\]) and, after performing the $N+1$ integrals, a set of ordinary differential equations for the modes $b_k(u)$ arises. Therefore, evolving these equations amounts to determining the dynamics of RT spacetimes, since the function $K(u,x)$ can be reconstructed at each $u$.
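Forcing the residual $\mathrm{Res}_Q$ to vanish at the Chebyshev collocation points is equivalent to interpolating $\exp(-Q_a)$ at those points. A minimal NumPy sketch of this collocation step (the sample function $Q$ below is a hypothetical example, not the paper's data):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_collocation_modes(f, nbar):
    """Determine the nbar+1 Chebyshev modes c_j by forcing the residual
    f(x) - sum_j c_j T_j(x) to vanish at the Chebyshev collocation points,
    i.e. by interpolation at the Chebyshev roots."""
    x = C.chebpts1(nbar + 1)          # collocation points (roots of T_{nbar+1})
    return C.chebfit(x, f(x), nbar)   # exact fit through nbar+1 points

# Example: approximate exp(-Q) for a simple illustrative Q(x)
Q = lambda x: 0.3 * x + 0.1 * x**2
c = cheb_collocation_modes(lambda x: np.exp(-Q(x)), 8)

# Check the approximation over the whole angular domain x in [-1, 1]
xs = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(xs, c) - np.exp(-Q(xs))))
```

For smooth functions such as the exponential of a low-order polynomial, the maximum error decays geometrically with $\bar{N}$, which is what makes this auxiliary expansion cheap.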
The past horizon equation (\[eq8\]) will be solved at each time level $u$ by applying a similar combination of spectral methods as described previously. We have followed Ref. [@tod] and introduced an auxiliary function $F(u,x)$ by setting
$$V=2m_0\exp(-F), \label{eq14}$$
in order to eliminate $m_0$ from the apparent horizon equation. A natural Galerkin expansion for $F(u,x)$ is given by
$$F_a(u,x) = \sum_{k=0}^M\,a_k(u) P_k(x), \label{eq15}$$
where $M$ is the truncation order, not necessarily the same as $N$ (see Eq. (\[eq9\])). The apparent horizon equation is rewritten in terms of $F(u,x)$, and after substituting the above expansion together with the approximate expressions for $K(u,x)$ and $\lambda(u,x)$ (Eqs. (\[eq9\]) and (\[eq10\])), we obtain the residual equation associated to the apparent horizon equation
$$\begin{aligned}
\mathrm{Res}_{\mathrm{AH}}(u,x) &=& \left[(1-x^2) F_a^\prime\right]^\prime + 1 +\sum_{k=0}^{N}\,\frac{1}{2} k(k+1) \times \nonumber \\
& & b_k(u) P_k(x) - \exp(F_a+Q_a).
\label{eq16}\end{aligned}$$
As we have described before, the next step is to impose that all projections of the residual equation (\[eq16\]) with respect to each basis function, $P_n(x)$, $n=0,1,...,M$, must vanish. Schematically, we have
$$\left<\mathrm{Res}_{\rm AH},P_n(x)\right>=\int_{-1}^1\mathrm{Res}_{\rm AH}(u,x)P_n(x) = 0.
\label{eq17}$$
Notice the presence of two exponential terms in the residual equation (\[eq16\]) that can be reexpressed using additional approximations as,
$$\begin{aligned}
\exp(F_\mathrm{a}(u,x)) \approx \sum_{k=0}^{\bar{M}}\,\alpha_k T_k(x), \\
\exp(Q_\mathrm{a}(u,x)) \approx \sum_{k=0}^{\bar{M}}\,\beta_k T_k(x),\end{aligned}$$
where $\alpha_k$ and $\beta_k$ are the modes associated with these new approximations, and $\bar{M}$ is the truncation order for both expansions. The projections of the corresponding residual equations, $\mathrm{Res}_{F} = \exp(F_{\mathrm{a}}(u,x)) - \sum_{k=0}^{\bar{M}}\,\alpha_k T_k(x)$ and $\mathrm{Res}_Q = \exp(Q_{\mathrm{a}}(u,x)) - \sum_{k=0}^{\bar{M}}\,\beta_k T_k(x)$, with respect to the test functions ${\delta(x-x_n)}$, with $x_n$, $n=0,1,..,\bar{M}$, being the collocation points of the Chebyshev polynomials, are forced to vanish. Consequently, two sets of $\bar{M}+1$ algebraic equations relating the modes $(\alpha_k,\beta_k)$ with $(a_j,b_k)$, respectively, are generated. These approximate expressions are then inserted into Eq. (\[eq17\]), yielding
[$$\begin{aligned}
& & \left<\mathrm{Res}_{\rm AH},P_n(x)\right>=\int_{-1}^1\,\{\left[(1-x^2) F_a^\prime\right]^\prime + 1 + \frac{1}{2} \times \nonumber \\
& & \sum_{k=0}^{N}\,k(k+1)b_k(u) P_k(x) - \sum_{k,j=0}^{\bar{M}}\,\alpha_k(u)\beta_j(u)T_k(x) \times \nonumber \\
& & T_j(x)\}P_n(x) = 0.\end{aligned}$$]{}
After performing the above integrals, a set of $M+1$ algebraic equations of the type $f_k(a_j,b_j,\alpha_i,\beta_i)=0$ is obtained. Since we can express the modes $\alpha_k$ and $\beta_k$ in terms of $a_j$ and $b_j$, and the modes $b_k$ are known at each $u$, we can in principle solve this set of algebraic equations for the modes $a_k$, thereby determining the apparent horizon as described by Eq. (\[eq15\]).
Numerical results
=================
In this section we present the numerical tests of our code, as well as the results on the dynamics of apparent horizons in RT spacetimes. We first need to specify the initial data $K(u=0,x)$ that determine the initial values of the $N+1$ modes $b_k(0)$ through
$$b_j(0)=\frac{2 \left<\ln K(0,x),P_j\right>}{\left<P_j,P_j\right>}.
\label{eq20}$$
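Equation (\[eq20\]) can be evaluated by Gauss-Legendre quadrature. The sketch below is illustrative (quadrature order is our choice); it uses the Legendre normalisation $\left<P_j,P_j\right> = 2/(2j+1)$, and the factor 2 comes from $Q = \ln K^2 = 2\ln K$.

```python
import numpy as np
from numpy.polynomial import legendre as L

def initial_modes(K0, N, nquad=64):
    """Project Q(0,x) = 2 ln K(0,x) onto the Legendre polynomials:
    b_j = 2 <ln K, P_j> / <P_j, P_j>, with <P_j, P_j> = 2/(2j+1)."""
    x, w = L.leggauss(nquad)                 # Gauss-Legendre nodes and weights
    lnK = np.log(K0(x))
    b = np.empty(N + 1)
    for j in range(N + 1):
        Pj = L.legval(x, np.eye(N + 1)[j])   # P_j evaluated at the nodes
        b[j] = 2.0 * np.sum(w * lnK * Pj) * (2 * j + 1) / 2.0
    return b
```

As a sanity check, for $K(0,x)=\exp[(1+x)/2]$ one has $Q = 1 + x = P_0(x) + P_1(x)$, so the routine should return $b_0=b_1=1$ and zero for the higher modes.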
We consider two initial data families in our numerical experiments. The first represents the exterior spacetime of a homogeneous oblate spheroid described by [@rt_radiation]
$$K(0,x)=\Big[1+\frac{B_0}{2}\Big(\alpha(\zeta_0)+\frac{1}{2}\beta(\zeta_0)P_2(x)\Big)\Big]^2,
\label{eq21}$$
where $\zeta_0$ and $B_0$ are free parameters and $\alpha(\zeta_0)=\arctan(1/\zeta_0)$, $\beta(\zeta_0)=(1+3\zeta_0^2)\arctan(1/\zeta_0)-3\zeta_0$. There is a clear astrophysical motivation for such a family of initial data, as pointed out in the works on the axisymmetric gravitational collapse of oblate gas spheroids satisfying the Vlasov equation, both in Newtonian theory [@lin] and in its relativistic generalization [@shapiro], and also in connection with the efficiency of the emission of gravitational waves [@eardley_spheroids]. The second initial data family describes two initially boosted Schwarzschild black holes with opposite velocities $v=\tanh \eta_0$ [@rt_collision], in which
$$\begin{aligned}
K(0,x)=\Big( \frac{A_{1}}{\sqrt{\cosh \eta_0-x\sinh \eta_0}} + \nonumber \\
+ \frac{A_{2}}{\sqrt{\cosh \eta_0+x\sinh \eta_0}} \Big)^2,
\label{eq22}\end{aligned}$$
where $A_{1}$ and $A_{2}$ are arbitrary positive constants associated with the mass of each black hole.
We first exhibit an important test of the spectral code used to integrate numerically the field equations (\[eq2\]) and (\[eq3\]). Although non-stationary analytical solutions of the field equations are not known (except in the linear regime), there exists a conserved quantity
$$I_0 = \int_{-1}^{1}\,K^2(u,x) dx,$$
which is derived from the field equations and interpreted as the area of the fundamental 2-sphere spanned by $(\theta,\varphi)$. The conservation of $I_0$ can be deduced by multiplying Eq. (\[eq3\]) by $K^2$ and integrating over the angular domain. Then, by specifying the initial data $K(0,x)$, the initial value of $I_0$ is fixed and must be kept constant until the asymptotic state is achieved. In order to test whether the numerically generated solution is accurate, we have evaluated the relative error between the numerical and exact values of $I_0$, $\sigma = |I_0-I_{\mathrm{numer}}|/I_0$, whose result is shown in Fig. 1, where we have included the influence of increasing truncation orders, $N=7,9,13$ (cf. Eq. (\[eq9\])). As can be observed, the conservation of $I_0$ is attained to about $10^{-8}\%$ accuracy for the smallest truncation order $N=7$, and to about $10^{-12}\%$ accuracy for $N=13$. Therefore, this result is a vivid proof of the accuracy of the numerical evolution scheme, in addition to the other tests described in Ref. [@rt_ijmpc].
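The $I_0$ diagnostic amounts to a single quadrature per slice; a minimal sketch of the monitor (quadrature order is our choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

def I0(K, nquad=64):
    """Conserved quantity I_0 = int_{-1}^{1} K^2(u,x) dx,
    evaluated by Gauss-Legendre quadrature on a given slice."""
    x, w = L.leggauss(nquad)
    return np.sum(w * K(x) ** 2)

def sigma(I0_exact, I0_numer):
    """Relative-error monitor sigma = |I_0 - I_numer| / I_0."""
    return abs(I0_exact - I0_numer) / I0_exact
```

Evaluating `sigma` along the evolution, with `I0_exact` fixed by the initial data, reproduces the convergence test of Fig. 1.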
We now proceed with the numerical tests of the algorithm used to solve the apparent horizon equation (\[eq8\]) at each time level $u$. Two steps are needed. The first is to evaluate the time evolution of all modes $b_k(u)$ by integrating the dynamical system resulting from (\[eq12\]). In the second step, these modes calculated at each $u$ are inserted into the system of $M+1$ algebraic equations derived from (20), which is solved to obtain the corresponding modes $a_k(u)$ that describe the apparent horizon through the function $V(u,x)$ (cf. Eqs. (\[eq14\]) and (\[eq15\])). In this way, the evolution of the apparent horizon is obtained until the stationary solution is attained. As an important piece of evidence of the accuracy of our numerical scheme, we have plotted in Fig. 2 the modulus of the residual equation (\[eq16\]) evaluated at the initial instant $u=0$ for both initial data families (\[eq21\]) and (\[eq22\]), taking into account distinct values of the truncation orders $N$ and $M$ associated with the functions $K(u,x)$ and $V(u,x)$ (cf. Eqs. (\[eq9\]) and (\[eq15\])). According to these plots the residual equation approaches zero as the truncation orders under consideration are increased.
A more enlightening experiment for depicting the convergence of the code is to exhibit the evolution of the $L_2$ norm corresponding to the residual equation (\[eq16\]) given by
$$L_2 = \sqrt{\frac{1}{2}\int^{1}_{-1}{{\rm Res_{\mathrm{AH}}}(u,x)^{2}dx}},
\label{eq23}$$
considering again distinct values of the truncation orders $N$ and $M$. From Fig. 3 it can be seen that the norm evaluated at $u=0.4$ decays exponentially as the truncation order $N$ is increased, which demonstrates the geometric convergence typical of spectral methods. In Fig. 4 the full evolution of $L_2$ is presented for increasing truncation orders $M,N$; as expected, a rapid decrease of the norm is observed until it reaches a value that is zero up to our numerical precision, and when the truncation order is increased, less time is necessary to reach that value.
The evolution of the apparent horizon is illustrated by a sequence of polar plots of $r=V(u,x)$ depicted in Fig. 5. We started at $u=0$ with the oblate spheroid initial data (\[eq21\]), and several plots are shown at subsequent instants until $u_f = 500$, where a circle is formed. Indeed, this is a consequence of the fact that the asymptotic state is the Schwarzschild configuration characterized by $K=\mathrm{constant}$, which according to Eq. (\[eq8\]) also implies $V=\mathrm{constant}$.
An interesting application of our code is to follow the behavior of the apparent horizon mass, which is basically the amount of mass enclosed by the apparent horizon. It is worth mentioning that the apparent horizon mass has thermodynamical properties similar to those associated with black holes [@chow_lun]. In the case of RT spacetimes the past apparent horizon can only decrease in area, and therefore its mass decreases, contrary to the monotonic increase of the future apparent horizon area. The apparent horizon area $S_{AH}$ is evaluated through the following expression
$$\begin{aligned}
S_{AH} &=& 2 \pi \int_{-1}^1\,r^2 K^2(u,x) dx = \nonumber \\
&=& 8 \pi m_0^2 \int_{-1}^1\,\mathrm{e}^{-2F(u,x)} K^2(u,x) dx,\label{eq24}\end{aligned}$$
where $r=V(u,x)=2m_0 \mathrm{e}^{-F(u,x)}$ describes the apparent horizon (cf. Eq. (\[eq14\])), and the apparent horizon mass is expressed as
$$M_{AH} = \sqrt{\frac{S_{AH}}{16\pi}}. \label{eq25}$$
In Fig. 6 we present the evolution of the apparent horizon mass and the Bondi mass [@rt_prd2; @kramer]
$$M_{B} = \frac{1}{2}m_0\,\int_{-1}^1 K^3(u,x) dx, \label{eq26}$$
for the first family of initial data (\[eq21\]). According to Ref. [@rt_radiation] the asymptotic configuration is the Schwarzschild black hole whose final mass assumes the value $M_{BH}=m_0K_0^3$, where $K_0=\lim_{u\rightarrow \infty}\,K(u,x)$. This amount is smaller than the mass associated with the initial data, since part of it is extracted by gravitational waves [@bondi] during the evolution of the spacetime, consequently producing the monotonic decrease of the Bondi mass shown in Fig. 6. The decay of the apparent horizon mass, on the other hand, is due to the decrease in area of the past apparent horizon, as we have mentioned before. Notice also that the final values of the apparent horizon and Bondi masses coincide. As a matter of fact, this result is expected from the asymptotic solution of the apparent horizon equation, $V_{\mathrm{asympt}}=2m_0K_0^2$, which together with the expression for the apparent horizon mass (\[eq25\]) yields $M_{AH}=M_{BH}=m_0K_0^3$.
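Equations (\[eq24\])-(\[eq26\]) translate directly into quadratures. A sketch (ours, not the authors' code): in the Schwarzschild limit $K=K_0$ and $F=-2\ln K_0$ (so that $V=2m_0K_0^2$), the routine should return $M_{AH}=M_B=m_0K_0^3$.

```python
import numpy as np
from numpy.polynomial import legendre as L

def masses(F, K, m0, nquad=64):
    """Apparent-horizon and Bondi masses by Gauss-Legendre quadrature:
    S_AH = 8 pi m0^2 int e^{-2F} K^2 dx,  M_AH = sqrt(S_AH / 16 pi),
    M_B  = (m0 / 2) int K^3 dx."""
    x, w = L.leggauss(nquad)
    S_AH = 8.0 * np.pi * m0**2 * np.sum(w * np.exp(-2.0 * F(x)) * K(x) ** 2)
    M_AH = np.sqrt(S_AH / (16.0 * np.pi))
    M_B = 0.5 * m0 * np.sum(w * K(x) ** 3)
    return M_AH, M_B
```

With $m_0=1$ and $K_0=1$ (hence $F=0$), both masses evaluate to $1$, matching the coincidence of the asymptotic values discussed above.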

The final task is to consider the second family of initial data (\[eq22\]), which represents the head-on collision of two Schwarzschild black holes, and a generalization of the initial data (\[eq21\]) that describes the exterior of an inhomogeneous oblate spheroid [@rt_radiation]. The common feature of both initial data is that the asymptotic configuration is a boosted black hole [@rt_collision; @rt_radiation] described by
$$\lim_{u \rightarrow \infty}\,K(u,x) = \frac{\bar{K_0}}{\cosh \mu + x \sinh \mu}, \label{eq27}$$
where the values of $\bar{K_0}$ and the boost parameter $\mu$ are fixed by the numerical solution of the RT equation (see Ref. [@rt_collision] for details), and the final Bondi mass is given by Eq. (\[eq6\]). It is worth mentioning that the imbalance in momentum of the initial gravitational wave distribution is responsible for the boost of the resulting black hole. In Figs. 7(a) and 7(b) we observe again the monotonic decay of both $M_{AH}$ and $M_B$ with the retarded time $u$, but there is a gap between their asymptotic values. In order to understand the origin of this gap, we have noticed that, according to the numerical experiments, the asymptotic solution of the apparent horizon equation (\[eq8\]) is the same as in the previous case, $V_{\mathrm{asympt}} = 2 m_0 \bar{K_0}^2$, in spite of $K_{\mathrm{asympt}}$ not being a constant (cf. Eq. (\[eq27\])). Therefore, the final value of the apparent horizon mass can be evaluated from Eq. (\[eq25\]) (note that in this situation $K=K(x)$), with the result $M_{AH}=m_0 \bar{K_0}^3$. In fact, this value is exactly the rest mass of the boosted black hole, and consequently the gap observed in both graphs of Fig. 7 is due to the kinetic energy of the resulting black hole.
The above results can be interpreted in the light of the so-called First Law of Black Hole Thermodynamics [@beken]-[@wald]. The final mass configurations displayed in each of Figs. 7 can actually be interpreted as two static black holes boosted with respect to each other, namely, they are connected by a $K$-transformation of the BMS group [@bondi] corresponding to a boost along the $z$-axis, as given by Eq. (\[eq27\]). This is the origin of the gap in Figs. 7, which has the value $M_{\rm B}/M_{AH}={\rm cosh} \mu$, where $\mu$ is the boost parameter of the $K$-transformation specified in (\[eq27\]). Now the entropy of each final black hole, considered as a thermodynamical system in equilibrium, is defined as proportional to the area $\mathcal{A}$ of its event horizon and is invariant under a $K$-transformation, as can easily be verified. This should be expected, since a possible definition of the black hole entropy through a counting of its microscopic states could not depend, in principle, on the state of motion of stationary black holes relative to inertial frames at infinity. Therefore we have $\delta \Big({\mathcal A}/4 \pi G \Big) = \delta M_{B}/T_{B}=\delta M_{AH}/T_{AH}$, so that this gap also defines the temperature transformation $T_{B} \rightarrow T_{B}~{\rm cosh} \mu$ between the two inertial rest frames of the black holes.
Final considerations
====================
In this paper we have implemented and tested a numerical scheme based on a combination of Galerkin and pseudo-spectral methods to solve the apparent horizon equation in RT spacetimes. This is a direct extension of the previous algorithm [@rt_ijmpc] used to integrate the field equations (\[eq2\]) and (\[eq3\]). The apparent horizon equation is reduced to a set of nonlinear algebraic equations for the modes $a_k$, whose solution at each instant determines the apparent horizon described by Eq. (\[eq15\]). The applications have consisted of solving the apparent horizon equation for initial data describing the exterior fields of oblate spheroids and the collision of two Schwarzschild black holes.
We have performed numerical tests that strongly indicate the convergence and accuracy of the code. In our numerical experiments two initial data families in RT spacetimes were considered: the first represents the gravitational field outside an oblate spheroid, while the second represents two initially boosted Schwarzschild black holes with opposite velocities. We have confirmed that the Bondi mass $M_B$ decreases monotonically, as a result of the mass extraction due to gravitational waves, towards an asymptotic value that coincides with the total mass of the resulting black hole. The apparent horizon mass $M_{AH}$ also decreases with respect to $u$, owing to the decrease in area expected for a past apparent horizon. For the first initial data family, the asymptotic values of $M_B$ and $M_{AH}$ coincide with the total mass of the resulting Schwarzschild black hole; in this case the apparent horizon mass is exactly the mass enclosed by the event horizon. On the other hand, for the second initial data family a gap between the asymptotic values of both masses is observed, similar to that noticed by Chow and Lun [@chow_lun]. The origin of the gap is associated with the final configuration being a boosted Schwarzschild black hole, for which the Bondi mass is the total mass-energy content, including the rest mass $m_0\bar{K}_0^3$ plus the kinetic energy, whereas the final apparent horizon mass is the rest mass. Finally, in spite of RT spacetimes being the simplest asymptotically flat radiating geometries, they can be used as simple but useful theoretical laboratories to study relevant features of bounded sources emitting gravitational waves (see Refs. [@rt_radiation], [@rt_collision], [@rt_prd] and [@rt_bremss]), and also to test new numerical schemes such as the one we have implemented here.
The natural next step in our research is to examine the evolution of apparent horizons in general RT spacetimes, and also in more realistic frameworks such as spacetimes with Brill waves.
The authors acknowledge the financial support of the Brazilian agencies CNPq and FAPERJ.
[99]{}
S. W. Hawking and G. F. R. Ellis, *The Large Scale Structure of Spacetime* (Cambridge University Press, Cambridge, England, 1973).
Frans Pretorius, Phys. Rev. Lett. **95**, 121101 (2005).
Jonathan Thornburg, *Event and Apparent Horizon Finders for 3+1 Numerical Relativity*, Living Rev. Relativity 10, (2007), 3. http://www.livingreviews.org/lrr-2007-3
S. Bonazzola, E. Gourgoulhon and J. A. Marck, J. Comput. Appl. Math. **109**, 433 (1999).
Philippe Grandclément and Jérôme Novak, *Spectral Methods for Numerical Relativity*, Living Rev. Relativity 12, (2009), 1. http://www.livingreviews.org/lrr-2009-1
P. Holmes. John L. Lumley and Gal Berkooz, [*Turbulence, Coherent Structures, Dynamical Systems and Symmetry*]{}, Cambridge University Press (Cambridge, 1998).
J. P. Boyd, [*Chebyshev and Fourier Spectral Methods*]{}, Dover (2001).
C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, [*Spectral Methods, Fundamentals in Single Domains*]{}, Springer (2006).
I. Robinson and A. Trautman, Phys. Rev. Lett. **4**, 431 (1960); Proc. Roy. Soc. A**265**, 463 (1962).
P. Chrusciel, Commun. Math. Phys. 137, 289 (1991); Proc. Roy. Soc. London **436**, 299 (1992); P. Chrusciel and D. B. Singleton, Commun. Math. Phys. **147**, 137 (1992).
Jeffrey Winicour, *Characteristic Evolution and Matching*, Living Rev. Relativity 8, (2005), 10. http://www.livingreviews.org/lrr-2005-10
H. P. de Oliveira and E. L. Rodrigues, Class. Quantum Grav. **25**, p. 205020 (2008).
R. Aranha, H. P. de Oliveira, I. D. Soares and E. V. Tonini, Int. J. Mod. Phys. D, **17**, 1 (2008)
R. Penrose, Ann. NY Acad. Sci. **224**, 115 (1973)
K. P. Tod, Class. Quantum Grav., **6**, 1159 (1989).
E. W. Chow and A. W. Lun, *Apparent Horizons in Vacuum Robinson-Trautman Spacetimes*, preprint gr-qc/9503065.
H. P. de Oliveira and I. Damião Soares, Phys. Rev. D**70**, 084041 (2004).
H. P. de Oliveira, E. L. Rodrigues, I. Damião Soares and E. V. Tonini, Int. J. Mod. Phys. C, **18**, 1853 (2007).
Bruce A. Finlayson, *The Method of Weighted Residuals and Variational Principles*, Academic Press (1972).
C. C. Lin, L. Mestel and F. H. Shu, Astrophys. J. **142**, 1431 (1965).
Stuart L. Shapiro and Saul L. Teukolsky, Phys. Rev. Lett., **66**, 994 (1991); Phys. Rev. D**45**, 2006 (1992).
D. M. Eardley, Phys. Rev. D **12**, 3072 (1975).
U. von der Gönna and D. Kramer, Class. Quant. Grav. **15**, 215 (1998).
H. P. de Oliveira and I. Damião Soares, Phys. Rev. D **71**, 124034 (2005).
H. Bondi, M. G. J. van der Burg and A. W. K. Metzner, Proc. R. Soc. London Ser. A**269**, 21 (1962); R. K. Sachs, Phys. Rev. **128**, 2851 (1962).
J. D. Bekenstein, Phys. Rev. D **7**, 2333 (1973).
R. M. Wald, [*General Relativity*]{} (University of Chicago Press, Chicago, 1984).
H. P. de Oliveira, I. Damião Soares and E. V. Tonini, Phys. Rev. D **78**, 044017 (2008).
|
---
abstract: 'Comparison is made of the electronic structure of the little-studied layered transition metal oxide LiNbO$_2$ with that of Na$_x$CoO$_2$, which has attracted tremendous interest since superconductivity was discovered in its hydrate. Although the active transition metal $d$ states are quite different due to different crystal fields and band filling, both systems show a strong change of electronic structure with changes in the distance between the transition metal ion layer and the oxygen layers. The niobate is unusual in having a large second-neighbor hopping amplitude, and a nearest neighbor hopping amplitude that is sensitive to the Nb-O separation. Li$_x$NbO$_2$ also presents the attractive simplicity of a single band triangular lattice system with variable carrier concentration that is superconducting.'
author:
- 'E. R. Ylvisaker, K.-W. Lee, and W. E. Pickett'
title: |
Comparison of the Electronic Structures of Two Non-cuprate\
Layered Transition Metal Oxide Superconductors
---
Motivation
==========
Among the various areas of research that were stimulated by the discovery of high temperature superconductors (HTS) nearly two decades ago is that of two-dimensional (2D) (or nearly so) transition metal oxides (TMOs). A second surprise appeared in 2001 with the discovery[@akimitsu] of T$_c$ = 40 K in MgB$_2$, where the physics is entirely different but the 2D character is crucial[@mazin; @pickett] for the surprisingly high value of critical temperature T$_c$. A further stimulus for study of superconductivity in 2D TMOs was provided in 2003 with the discovery of superconductivity[@takada] in hydrated Na$_x$CoO$_2$ at 4.5 K. These discoveries suggest a more general look at superconducting 2D TMOs besides the cuprates, to try to identify trends (or perhaps lack of trends).
Being isostructural to the first HTS (La,Sr)$_2$CuO$_4$, the ruthenate Sr$_2$RuO$_4$ has a special status in this class. Its electronic structure is quite distinct from that of HTS, however, and T$_c$ is only around 1 K. There is now a very large literature on Sr$_2$RuO$_4$. It is a different and very perplexing superconductor, but we will not pursue it in this paper.
What we focus on here is the little-noticed layered TMO superconductor Li$_x$NbO$_2$, with brief comparison with the cobaltate system Na$_x$CoO$_2$. This niobate was discovered[@Geselbract-Nature] in 1990, when the community was absorbed with the new HTS materials, and it has not yet attracted the attention that it deserves. While its T$_c$ = 5.5 K is quite close to that of the hydrated cobaltates (4.5 K), it is the contrasts that we will focus on. These differences revolve mainly around: $4d$ versus $3d$ ion, trigonal versus octahedral coordination by six oxygen neighbors, and single band versus multiband character. We expose one similarity: $z$-displacement of the oxygen layers, which modulates the TM-oxygen distance, has a strong influence on the electronic structure.
Layered Lithium Niobate
=======================
The compound LiNbO$_2$ itself is a band insulator with gap $\sim$2 eV. The de-lithiated phase Li$_x$NbO$_2$ was found by the Berkeley group to be superconducting,[@Geselbract] with the few reports to date suggesting superconductivity sets in around $x\approx 0.8$ ([*i.e.*]{} when 20% of the Li is removed), beyond which T$_c$ does not depend much on the Li content $x$. The structure of LiNbO$_2$ consists of a triangular lattice of both the Li cations and the transition metal (niobium) ions, separated by layers of oxygen atoms, similar to Na$_x$CoO$_2$ except for the TM coordination. The trigonal prismatic coordination of niobium atoms by oxygen ions provides a big distinction. The trigonal crystal field results in an energetic lowering of the Nb d$_{z^2}$ states with respect to the other $4d$ states by about 4 eV, leaving the system with only a single band per formula unit to consider. This valence-conduction band is also well separated from the O $2p$ bands below (see Fig. 1).
Removal of the lithium has the effect of adding holes to the conduction band made up of Nb $d_{z^2}$ states. Superconductivity appears, as it does when holes are introduced into NaCoO$_2$ (followed by hydration), and at a very similar temperature (5 K), but apparently at quite different carrier concentrations and for very different electronic structures. Since the Li content is variable, this compound becomes a clean representation of a single band triangular lattice system which can be compared rather directly with Hubbard model results. As part of our study of this system, we obtain a tight-binding (TB) representation of the band to allow the subsequent study of possible correlation effects within the Hubbard model. We return to these issues below.
[*Structure.*]{} LiNbO$_2$ takes on a hexagonal structure[@Mosh; @Geselbract; @Meyer] ($a$=2.90 Å, $c$=10.46 Å) having space group $P6_3/mmc$ (No. 194), with sites Li \[$2a$ (0,0,0), ${\overline 3}m$\], Nb \[$2d$ $(\frac23, \frac13, \frac14)$, ${\overline 6}m2$\], and O \[$4f$ $(\frac13, \frac23, z_O)$, $3m$\]. The oxygen internal parameter $z_O$ specifies the Nb-O bond length, and due to the stacking type there are two LiNbO$_2$ layers per cell. The distance between Nb atoms, $a$, is almost identical to the 2.86 Å bond length in elemental bcc Nb, so direct Nb-Nb overlap should be kept in mind. Experimental values[@Meyer; @Mosh; @Geselbract; @Tyut] of the internal parameter range from 0.125 to 0.129. Our optimization by energy minimization using the abinit code gives the value $z_O$=0.125 (lattice constants held at the experimental values).
\[fig:Band\]
[*Electronic structure and tight-binding representation.*]{} The band structure of LiNbO$_2$ pictured in Fig. 1 is similar to that given earlier by Novikov [*et al.*]{}[@Novikov94-2] and indicates a Nb $d_{z^2}$ bandwidth of 1.9 eV. The Nb $d_{z^2}$-O $2p$ bands can be fit straightforwardly to a TB model based on orthonormal Wannier functions on the two Nb atoms per cell (one Nb per layer). A full description of the results will be given elsewhere, but we provide a synopsis here. There are three important features of the TB fit that we emphasize here. First, a good fit requires rather long range hoppings, up to fourth neighbors within the layer and to three neighbors in the layers above and below. Second, with oxygen ions at their equilibrium position, the second neighbor (in-plane) hopping amplitude $t_2 \approx$ 100 meV is much larger than the nearest neighbor hopping $t_1 \approx$ 60 meV. The smaller value of $t_1$ may reflect interference between direct Nb-Nb interaction and the standard O-mediated Nb-O-Nb processes. The same trend has been observed for 2H-TaS$_2$,[@Wei-arxiv] where the small value of $t_1$ was traced to phase cancellation in the hopping integral when Wannier functions are on nearest neighbors. This $t_2 > t_1$ feature may have important implications for the microscopic understanding of the properties of Li$_x$NbO$_2$, since if $t_2$ were the only nonzero hopping, the system decomposes into three decoupled triangular lattices with lattice constant $\sqrt{3} a$; $t_1$ then becomes the “perturbation” that couples the three sublattices, breaks symmetry and removes degeneracy. Thirdly, the nearest neighbor hopping $t_1$ is very strongly modulated by oxygen displacement. We find that $t_1$ increases strongly as the O layers “squash” against the Nb layers, as in the $A_g$ Raman mode. This modulation may provide the largest contribution to electron-phonon coupling in this compound.
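To make the $t_2 > t_1$ discussion concrete, the following sketch evaluates a minimal in-plane dispersion with only first- and second-neighbor hoppings; the values $t_1 \approx 60$ meV and $t_2 \approx 100$ meV are taken from the fit above, while the real fit requires longer-range and interlayer terms. It verifies numerically that for $t_1 = 0$ the dispersion is periodic under a reciprocal vector of the $\sqrt{3}a$ sublattice — the signature of the three decoupled sublattices — and that a nonzero $t_1$ breaks this periodicity.

```python
import numpy as np

a = 1.0                                   # lattice constant (units of a)
a1 = np.array([a, 0.0])
a2 = np.array([a / 2, a * np.sqrt(3) / 2])
nn  = [a1, a2, a1 - a2]                   # nearest-neighbor vectors
nnn = [a1 + a2, 2 * a2 - a1, 2 * a1 - a2] # second-neighbor vectors, length sqrt(3)*a

def eps(k, t1, t2):
    """In-plane tight-binding dispersion (eV) with first (t1) and
    second (t2) neighbor hopping on the triangular lattice."""
    return (2 * t1 * sum(np.cos(k @ d) for d in nn)
            + 2 * t2 * sum(np.cos(k @ d) for d in nnn))

# reciprocal vector of the sqrt(3) x sqrt(3) sublattice spanned by the nnn vectors
G = np.array([4 * np.pi / (3 * a), 0.0])

k = np.array([0.37, 1.21])                # arbitrary k-point
# t1 = 0: three decoupled sublattices -> dispersion periodic under G (diff ~ 0)
print(eps(k, 0.0, 0.100) - eps(k + G, 0.0, 0.100))
# t1 != 0 couples the sublattices and breaks that periodicity (diff != 0)
print(eps(k, 0.060, 0.100) - eps(k + G, 0.060, 0.100))
```

In this picture $t_1$ is indeed the "perturbation" that couples the three $\sqrt{3}a$ sublattices, exactly as argued in the text.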
---------------------- ---------- ---------- ----------- ---- ---------- ---------- -----------
Li Nb O Na Co O
${\mathbf Z}^*_{xx}$ 1.10 2.26 -1.68 0.87 2.49 -1.68
${\mathbf Z}^*_{zz}$ 1.69 1.31 -1.50 1.37 0.87 -1.12
${\mathbf Z}^*_{av}$ 1.30 1.94 -1.62 1.04 1.95 -1.49
${\mathbf Z}^0$ +1 +3 -2 +1 +3 -2
---------------------- ---------- ---------- ----------- ---- ---------- ---------- -----------
: Born effective charges for LiNbO$_2$, together with a comparison with NaCoO$_2$ calculated by Li [*et al.*]{}[@Li-Yang] The angular average $Z^*_{av}$ is also displayed. Note the unexpected deviations from the formal values Z$^0$ of the effective charges for Li and Nb in the $z$-direction (larger for Li, smaller for Nb). For O in LiNbO$_2$, the effective charges are nearly isotropic. Overall, the anisotropies are rather similar in NaCoO$_2$, but somewhat more pronounced.
\[tbl:Born\]
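As a consistency check on Table \[tbl:Born\], the Born effective charges in each cell must obey the acoustic sum rule: summed over one formula unit (cation + transition metal + two oxygens) they vanish, reflecting overall charge neutrality. A few lines verify this for both compounds and both tensor components listed in the table.

```python
# Born effective charges from Table 1, per formula unit:
# [cation, transition metal, O, O]
Z = {
    "LiNbO2": {"xx": [1.10, 2.26, -1.68, -1.68],
               "zz": [1.69, 1.31, -1.50, -1.50]},
    "NaCoO2": {"xx": [0.87, 2.49, -1.68, -1.68],
               "zz": [1.37, 0.87, -1.12, -1.12]},
}

# Acoustic sum rule: the charges in a neutral cell must sum to zero
for compound, comps in Z.items():
    for comp, charges in comps.items():
        print(f"{compound} Z*_{comp}: sum = {sum(charges):+.2f}")
```

All four sums come out to 0.00, so the tabulated values are internally consistent to the quoted precision.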
\[fig:nxcobands\]
[*Effective charges.*]{} We have evaluated the Born effective charge tensor as described by Gonze and Lee[@Gonze-response-2] using the [*abinit*]{} code.[@abinit] Given in Table \[tbl:Born\] are the two distinct elements of the effective charge tensor for each atom type, calculated in the relaxed atomic structure, together with the formal charges. $Z^*_{xx}$(Li) ($Z^*_{yy}=Z^*_{xx}$) is close to the formal charge of Li indicating primarily ionic type bonding for motion in the $x-y$ plane, consistent with its propensity for de-intercalation. The charge tensor for Li shows similar anisotropy to that of LiBC[@Kwan-Woo], which is similar structurally and electronically (if some Li is de-intercalated) to the 40 K superconductor MgB$_2$. In LiBC $Z_{xx}^*$(Li)=0.81, $Z_{zz}^*$(Li)=1.46, and it was concluded that Li is involved in electronic coupling (not only ionic, but covalent) between consecutive B-C layers. Similar Li involvement might be expected in LiNbO$_2$, and indeed the band structure shows clear effects of interlayer coupling. The difference from the formal charges for the Nb ions (formally Nb$^{3+}$, O$^{2-}$) indicate substantial covalent character to the bonding, which appears to be especially strong for $z$ displacement of the Nb ion.
The Born effective charges have been reported[@Li-Yang] for NaCoO$_2$, and since we investigate O squashing in this compound in the next section, we have included the NaCoO$_2$ effective charges in Table \[tbl:Born\] for comparison. Indeed there are several similarities, as noted in the table caption.
\[fig:dos\]
Layered Sodium Cobaltates
=========================
There is already a substantial literature on the electronic structure of the Na$_x$CoO$_2$ system. Briefly: the $t_{2g}$ bands are broken in symmetry by the layered structure and by the squashing of the CoO$_2$ layers away from ideal cubic coordination by the six O ions. The bands are doped with $1-x$ holes, with all the evidence indicating the holes go, at least initially, into $a_g$ states rather than $e_g'$ states. Using the observed structure, it is found that this results from the somewhat larger $a_g$ bandwidth, because the band centers remain indistinguishable.
We address here the effect of the height of the O ions above/below the Co layer. In the calculations, the full-potential nonorthogonal local-orbital minimum-basis scheme (FPLO) was used.[@fplo] For specific doping levels and treatments of the Na ions, the height has been optimized by a number of groups[@PZhang; @MDJ; @JNi; @ZLi; @KWLee], revealing that there is some sensitivity of the O position to the environment. To clarify the question of the effect of squashing without reference to a specific doping level, we display in Fig. 2 the $t_{2g}$ bands for O height (from the Co layer) of 1.14 Å (corresponding to undistorted CoO$_6$ octahedra), 0.96 Å (typical value for intermediate values of $x$), and 0.88 Å (the smallest value reported). For orientation, we note that Johannes [*et al.*]{},[@MDJ] using the virtual crystal approximation for the Na concentrations $x$ = 0.3, 0.5, and 0.7, obtained the heights 0.88, 0.90, 0.93 Å respectively. The corresponding projected densities of states are shown in Fig. 3. For these calculations we used $x$=0.5, treated within the virtual crystal model. To avoid unphysical O-O interactions across the layers as the O layer position was varied, the $c$ axis was artificially increased by 20% for these calculations.
Simple crystal field arguments would suggest: (1) for the cubic octahedron $z_O$ = 1.14 Å, the $a_g$ and $e_g'$ DOS should be the same, and (2) as the O ions are squashed down, the $e_g'$ states should rise relative to the $a_g$ states. The first expectation is severely violated in the region just below $E_F$ due to the dispersion being only two-dimensional (presuming crystal fields from ions beyond the nearest O ions are negligible). In addition, the effects of squashing are much more complex than suggested by the crystal field model. There is only a minor change in the mean energies of the $a_g$ and $e_g'$ states (they remain essentially equal, see Fig. 3); the main change is an [*increase*]{} in the $a_g$ bandwidth compared to that of the $e_g'$ states upon squashing. For $z_O$ = 1.14 Å, doped holes initially would go equally into each band. At the highly squashed end, $\sim$0.4 holes per Co can go into the $a_g$ band before encountering the $e_g'$ states. We emphasize that this is a model, constrained result; self-consistency and geometrical relaxation will change the details. There is also the question of decreasing interaction with the O $2p$ states upon squashing. This change, which is of course also included in the changes shown in Figs. 2 and 3, may affect the $a_g$ and $e_g'$ states differently.
The changes in the band structure, Fig. 2, are more instructive. At $\Gamma$, the $a_g$ state is almost 0.5 eV below its maximum for the cubic octahedron $z_O$=1.14 Å, the maxima occurring midway along both $\Gamma$-M and $\Gamma$-K lines. The additional structure, and the associated decrease in bandwidth reflects longer range hopping, and most likely a strong change in the ratio $t_2/t_1$, analogous to the changes in LiNbO$_2$ but with additional complications due to the presence of the $e_g'$ bands. The shift with squashing motion in the $e_g'$ bands is noticeable not only at $\Gamma$, where the state increases in energy, but also in the degeneracy at the K point, which rises to the top of the $t_{2g}$ bands for the symmetric CoO$_6$ octahedron.
Summary
=======
In this paper we have briefly compared and contrasted the electronic structure of the little-studied layered TMO LiNbO$_2$ with that of Na$_x$CoO$_2$, which has attracted tremendous interest since superconductivity was discovered in its hydrate. Although the active states are quite different, both systems show a strong change of electronic structure with changes in the TM-oxygen distance. The niobate is unusual in having a large second-neighbor hopping amplitude, and it also presents the attractive simplicity of a single active band on a triangular lattice. One of the primary questions to address is whether electronic correlations are important in the delithiated system, and whether superconductivity is of electronic or of lattice origin.
Acknowledgments
===============
We acknowledge stimulating comments from D. Khomskii and R. J. Cava on the effect of oxygen “squashing” in the Na$_x$CoO$_2$ system, and clarification from M. D. Johannes on calculations relating to this question. This work was supported by National Science Foundation Grant DMR-0421810.
[10]{} J. Nagamatsu, N. Nakagawa, T. Muranaka, Y. Zenitani, and J. Akimitsu, Nature (London) [**410**]{}, 63 (2001).
I. I. Mazin and V. P. Antropov, Physica C [**385**]{}, 49 (2003).
W. E. Pickett, Brazilian J. Phys. [**33**]{}, 695 (2003).
K. Takada, Y. Sasago, E. Takayama-Muromachi, F. Izumi, R. A. Dilanian, and T. Sasaki, Nature (London) [**422**]{}, 53 (2003).
M. J. Geselbracht, T. J. Richardson, and A. M. Stacy, Nature [**345**]{}, 324 (1990).
M. J. Geselbracht, A. M. Stacy, A. R. Garcia, B. G. Slibernagel, and G. H. Kwei, J. Phys. Chem [**97**]{}, 7102 (1993).
E. G. Moshopoulou, P. Bordet, and J. J. Capponi, Phys. Rev. B [**59**]{}, 14 (1999).
G. Meyer and R. Hoppe, Angew. Chem. (Intl. Ed.) [**13**]{}, 11 (1974).
A. P. Tyutyunnik, V. G. Zubkov, D. G. Kellerman, V. A. Pereliaev, and A. E. Kar’kin, Eur. J. Solid State Inorg. Chem. [**33**]{}, 53 (1996).
D. L. Novikov, V. A. Gubanov, V. G. Zubkov, and A. J. Freeman, Phys. Rev. B [**49**]{}, 15830 (1994).
R. L. Barnett, A. Polkovnikov, E, Demler, W.-G. Yin, and W. Ku, cond-mat/0508590.
X. Gonze and C. Lee, Phys. Rev. B [**55**]{}, 10355 (1997).
X. Gonze, J.-M. Beuken, R. Caracas, F. Detraux, M. Fuchs, G.-M. Rignanese, L. Sindic, M. Verstraete, G. Zerah, F. Jollet, M. Torrent, A. Roy, M. Mikami, Ph. Ghosez, J.-Y. Raty, and D.C. Allan, Comput. Mater. Sci. [**25**]{}, 478 (2002); The [abinit]{} code is a common project of the Universit${\acute e}$ Catholique de Louvain, Corning Incorporated, and other contributors (URL http://www.abinit.org).
K.-W. Lee and W. E. Pickett, Phys. Rev. B [**68**]{}, 085308 (2003).
Z. Li, J. Yang, J. G. Hou, and Q. Zhu, Phys. Rev. B [**70**]{}, 144518 (2004).
K. Koepernik and H. Eschrig, Phys. Rev. B [**59**]{}, 1743 (1999).
P. Zhang, W. Luo, V. H. Crespi, M. L. Cohen, and S. G. Louie, Phys. Rev. B [**70**]{}, 085108 (2004).
M. D. Johannes, D. A. Papaconstantopoulos, D. J. Singh, and M. J. Mehl, Europhys. Lett. [**68**]{} 433 (2004).
J. Ni and G. Zhang, Phys. Rev. B [**69**]{}, 214503 (2004).
Z. Li, J. Yang, J. G. Hou, and Q. Zhu, Phys. Rev. B [**71**]{}, 024502 (2005).
K.-W. Lee and W. E. Pickett, Phys. Rev. B [**72**]{}, 115110 (2005).
|
---
author:
- 'Ondřej Chvála *for the NA49 Collaboration* [^1]'
date: 'Received: date / Revised version: date'
title: On the Importance of Isospin Effects for the Interpretation of Nuclear Collisions
---
Introduction {#intro}
============
The study of heavy ion collisions at the SPS and at RHIC attracts wide interest. However, it has become clear that the understanding of elementary nucleon-nucleon interactions is crucial for the correct interpretation of the more complex nuclear collisions.
One of the basic ingredients of that problem is the role played by isospin invariance. Since neutrons constitute 60% of the nucleons inside a heavy nucleus, and since even the spatial distributions of protons and neutrons are known to differ in heavy nuclei [@pnratnucl], the proper evaluation of isospin effects in proton and neutron fragmentation is of obvious interest.
The NA49 experiment [@RefNA49] was the first to measure the yields of identified hadrons from neutron fragmentation in the SPS energy range [@RefHGF]. In this article, some consequences of these new measurements for relativistic nuclear interactions will be presented.
$\pi^{+}/\pi^{-}$ ratios {#sec:1}
========================
The $\pi^{+}$ and $\pi^{-}$ yields from both proton and neutron fragmentation have been measured by NA49 [@RefHGF], [@RefHGF-isospin]. As expected from isospin symmetry, the $\pi^{+}$ and $\pi^{-}$ yields interchange when switching from proton to neutron projectiles. Consequently, the $\pi^{+}/\pi^{-}$ ratio from protons equals the $\pi^{-}/\pi^{+}$ ratio from neutron fragmentation. These expectations have been verified over a wide range of $x_F$ and for beam momenta of 40 and 160 GeV/c; see the upper panel of fig. \[fig:1\] for the latter. For details see [@RefHGF-isospin].
It is known that total and differential pion yields in $AA$ collisions differ only little from a linear superposition of nucleon–nucleon collisions according to the number of participant nucleon pairs [@ferencQM99]. It seems therefore reasonable to predict the evolution of the $(\pi^{+}/\pi^{-})^{A}$ ratio with the kinematic variables $x_F$ and $\sqrt s$ as a function of $(\pi^{+}/\pi^{-})^{p}$, if the detailed behavior of the latter is known:
$$\bigg(\frac{\pi^+}{\pi^-}\bigg)^{A} (x_F, \sqrt s) =
\frac{f^p \ (\pi^{+}/\pi^{-})^{p} + f^n}{f^p + f^n \ (\pi^{+}/\pi^{-})^{p}} \ (x_F, \sqrt s)
\label{eq:1}$$
where $f^p$ and $f^n$ are the relative protonic and neutronic contents of the nuclei – “isospin mixture”, $f^p +f^n = 1$.
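Equation \[eq:1\] is straightforward to evaluate. The sketch below uses illustrative input values ($f^p = 0.4$ corresponds roughly to a heavy nucleus with 60% neutrons, as quoted above); it shows that an isoscalar mixture pulls any $pp$ ratio back to unity, while a neutron excess pushes $(\pi^{+}/\pi^{-})^{A}$ below it.

```python
def pion_ratio_AA(r_p, f_p):
    """Eq. (1): predicted (pi+/pi-)^A for a nucleus with proton
    fraction f_p, given the measured pp ratio r_p = (pi+/pi-)^p.
    Assumes AA is a linear superposition of nucleon-nucleon collisions."""
    f_n = 1.0 - f_p
    return (f_p * r_p + f_n) / (f_p + f_n * r_p)

# isoscalar nucleus (f_p = 0.5): the ratio is driven back to unity
print(pion_ratio_AA(2.0, 0.5))
# heavy nucleus with ~60% neutrons (f_p = 0.4): ratio pushed below unity
print(pion_ratio_AA(2.0, 0.4))
```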
Evidently, deviations from $(\pi^{+}/\pi^{-})^{A} = 1$ are predicted, growing with $(\pi^{+}/\pi^{-})^{p}$. This ratio is a strong function of both $x_F$ and $\sqrt s$; see the upper plots of figures \[fig:1\] and \[fig:2\].
Whereas the measurements of $\pi^{+}/\pi^{-}$ dependence on $x_F$ (fig. \[fig:1\], bottom panel) in the symmetric $SiSi$ system and in central $PbPb$ collisions follow the expectation from the above prediction rather closely, the data indicate a substantially higher neutron content in peripheral $PbPb$ interactions, as has indeed been established with independent experimental methods [@pnratnucl].
There is a steep dependence of the total and midrapidity $\pi^+/\pi^-$ ratios on $\sqrt s$ in $pp$ interactions, from the pion production threshold down to values close to unity at ISR and RHIC energies. The midrapidity ratio is shown in the upper panel of fig. \[fig:2\] (the curve represents a parameterization of a large set of existing measurements), together with the prediction for $AA$ using equation \[eq:1\].
On the bottom plot, the above prediction is compared with existing measurements in central heavy ion collisions. Again the data (see [@RefAAsdep] for the data at lower energies) follow the simple superposition picture rather closely.
$K/\pi$ ratios {#sec:2}
==============
Contrary to pions, the charged kaon yields were measured to be the same for both proton and neutron projectile fragmentation. This experimental observation has important consequences for the $K/\pi$ ratios in $AA$ collisions, which can be exemplified on the basis of the double ratios $(K/\pi)^{A}/(K/\pi)^{p}$, since the kaon yields drop out of the double ratios. Simple relations for $K/\pi$ ratios from protons and neutrons, equations \[eq:2\] and \[eq:3\], and for arbitrary mixtures of these nucleons, equations \[eq:4\] and \[eq:5\], can therefore be established:
$$\frac{(K^{+}/\pi^{+})^n}{(K^{+}/\pi^{+})^p} = \frac{(\pi^{+})^p}{(\pi^{+})^n} = \bigg(\frac{\pi^{+}}{\pi^{-}}\bigg)^p
\label{eq:2}$$
$$\frac{(K^{-}/\pi^{-})^n}{(K^{-}/\pi^{-})^p} = \frac{(\pi^{-})^p}{(\pi^{-})^n} = \bigg(\frac{\pi^{-}}{\pi^{+}}\bigg)^p
\label{eq:3}$$
$$\frac{(K^{+}/\pi^{+})^A}{(K^{+}/\pi^{+})^p} = \frac{(\pi^{+}/\pi^-)^p}{f^n + f^p \ (\pi^{+}/\pi^-)^p}
\label{eq:4}$$
$$\frac{(K^{-}/\pi^{-})^A}{(K^{-}/\pi^{-})^p} = \frac{(\pi^{-}/\pi^+)^p}{f^n + f^p \ (\pi^{-}/\pi^+)^p}
\label{eq:5}$$
Corresponding predictions for $K/\pi$ ratios, assuming a linear superposition as used for the $\pi^{+}/\pi^{-}$ ratios discussed above, are shown in figure \[fig:3\]. Since there is a strong dependence of the $(\pi^{+}/\pi^{-})^p$ ratio on both $\sqrt s$ and $x_F$, we can use the double ratios to make predictions for the evolution of the $K/\pi$ ratios in $AA$ with these kinematic variables. Note the scales below the plot in fig. \[fig:3\].
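Under the same superposition picture, equations \[eq:4\] and \[eq:5\] can be evaluated directly; only the $pp$ pion ratio enters, since the kaon yields cancel in the double ratios. The numerical inputs below are illustrative.

```python
def kpi_double_ratio(r_p, f_p, charge="+"):
    """Eqs. (4)-(5): (K/pi)^A / (K/pi)^p for an isospin mixture with
    proton fraction f_p, using only the pp pion ratio
    r_p = (pi+/pi-)^p (kaon yields are equal for p and n projectiles)."""
    f_n = 1.0 - f_p
    r = r_p if charge == "+" else 1.0 / r_p
    return r / (f_n + f_p * r)

# a large (pi+/pi-)^p, as near threshold, enhances (K+/pi+)^A over pp...
print(kpi_double_ratio(4.0, 0.4, "+"))
# ...and suppresses (K-/pi-)^A
print(kpi_double_ratio(4.0, 0.4, "-"))
```

This directly reproduces the qualitative behavior discussed below: the correction for positives grows (eventually diverging) as $\sqrt s$ decreases, while that for negatives falls below unity.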
The important consequences of the above isospin effects for the interpretation of $K/\pi$ ratios in $AA$ as a function of $x_F$ have been demonstrated in [@RefHGF], [@RefHGF-isospin], [@strangeness]. It was concluded that the enhancements of strange particles in central $pA$ and $AA$ collisions become comparable once the isospin effects are corrected for.
The evolution of $K/\pi$ with $\sqrt s$ is presented in fig. \[fig:4\]. The upper panel shows the midrapidity $(\pi^{+}/\pi^{-})$ ratio from $pp$ collisions; note equation \[eq:2\], and the isospin-mixed prediction according to equation \[eq:4\]. Existing data are plotted in the bottom panel; they were fitted by a flat line with a threshold.
The isospin correction for positives diverges for decreasing $\sqrt s$. This divergence is, however, to be convoluted with the threshold behavior of kaon production. Depending on the detailed $\sqrt s$ dependence of the $K^{+}/\pi^{+}$ ratio in $pp$ collisions, the threshold cut-off tends to produce a spike (fig. \[fig:4\], lower panel) below about 10 GeV in $AA$ collisions. Such non-monotonic behavior is indeed observed in $PbPb$ interactions [@Alt:2003rn].
For the negatives, the prediction leads to further depletion below the threshold, changing its slope, as again observed in $AA$ collisions [@Alt:2003rn].
A similar phenomenon is predicted for the evolution of $\Lambda/\pi$ ratios, see figure \[fig:5\].
Conclusions {#sec:3}
===========
The fragmentation of neutron and proton projectiles into identified secondary hadrons has been measured at the CERN SPS using $np$ and $pp$ collisions. Based on these measurements and on the knowledge of the $\pi^+/\pi^-$ ratio in proton-proton collisions, predictions for $AA$ interactions (assuming that a nuclear collision can be pictured as a sum of independently fragmenting nucleons) have been formulated. The predicted evolutions of $\pi^+/\pi^-$, $K/\pi$ and $\Lambda/\pi$ were found to describe the gross features of the data.
This is especially important for strangeness production in the region of $\sqrt s \ <$ 20GeV, where a combination of isospin effects and threshold dependencies creates a pronounced, non–monotonic structure.
Improved datasets (in particular for $np$ interactions) are therefore mandatory before any conclusions on new phenomena in relativistic heavy ion collisions can be drawn.
S. V. Afanasiev [*et al.*]{} \[NA49 Collaboration\], Nucl. Instrum. Meth. A [**430**]{} (1999) 210-244.
H. G. Fischer [*et al.*]{} \[NA49 Collaboration\], Nucl. Phys. A [**715**]{} (2003) 118 \[arXiv:hep-ex/0209043\].
H. G. Fischer [*et al.*]{} \[NA49 Collaboration\], Heavy Ion Phys. [**17**]{} (2003) 369.
J. Bachler [*et al.*]{} \[NA49 Collaboration\], Nucl. Phys. A [**661**]{} (1999) 45.
A. Trzcinska, J. Jastrzebski, P. Lubinski, F. J. Hartmann, R. Schmidt, T. von Egidy and B. Klos, Phys. Rev. Lett. [**87**]{} (2001) 082501.
A. Rybicki [*et al.*]{} \[NA49 Collaboration\], Proc. 7th International Conference on Strange Quarks in Matter (SQM2003), to appear in J. Phys. [**G**]{}.
S. V. Afanasiev [*et al.*]{} \[NA49 Collaboration\], Phys. Rev. C [**66**]{}, 054902 (2002) \[arXiv:nucl-ex/0205002\].
C. Alt [*et al.*]{} \[NA49 Collaboration\], Proc. 7th International Conference on Strange Quarks in Matter (SQM2003), to appear in J. Phys. [**G**]{} \[arXiv:nucl-ex/0305017\].
[^1]: For a full author list of the NA49 Collaboration see [@Alt:2003rn]
|
---
abstract: 'Strategy changes are an essential part of evolutionary games. Here we introduce a simple rule that, depending on the value of a single parameter $w$, influences the selection of players that are considered as potential sources of the new strategy. For positive $w$ players with high payoffs will be considered more likely, while for negative $w$ the opposite holds. Setting $w$ equal to zero returns the frequently adopted random selection of the opponent. We find that increasing the probability of adopting the strategy from the fittest player within reach, *i.e.* setting $w$ positive, promotes the evolution of cooperation. The robustness of this observation is tested against different levels of uncertainty in the strategy adoption process and for different interaction networks. Since the evolution to widespread defection is tightly associated with cooperators having a lower fitness than defectors, the fact that positive values of $w$ facilitate cooperation is quite surprising. We show that the results can be explained by means of a negative feedback effect that increases the vulnerability of defectors although initially increasing their survivability. Moreover, we demonstrate that the introduction of $w$ effectively alters the interaction network and thus also the impact of uncertainty by strategy adoptions on the evolution of cooperation.'
author:
- 'Zhen Wang$^1$ and Matja[ž]{} Perc$^{2,}$[^1]'
title: 'Aspiring to the fittest and promotion of cooperation in the prisoner’s dilemma game'
---
Introduction
============
Cooperation within groups of selfish individuals is ubiquitous in human and animal societies. To explain and understand the origin of this phenomenon, evolutionary games, providing a suitable theoretical framework, have been studied extensively by many researchers from various disciplines over the past decades [@maynard_82; @weibull_95; @nowak_06]. The evolutionary prisoner’s dilemma game in particular, illustrating the social conflict between cooperative and selfish behavior, has attracted considerable attention both in theoretical as well as experimental studies [@axelrod_84]. In a typical prisoner’s dilemma [@hofbauer_98], two players simultaneously decide whether they wish to cooperate or defect. They will receive the reward R if both cooperate, and the punishment P if both defect. However, if one player defects while the other decides to cooperate, the former will get the temptation T while the latter will get the sucker’s payoff S. The ranking of these four payoffs is T$>$R$>$P$>$S, from which it is clear that players need to defect if they wish to maximize their own payoff, irrespective of the opponent’s decision. Resulting is a social dilemma, which typically leads to widespread defection. To overcome this unfortunate outcome, several mechanisms that support the evolution of cooperation have been identified (see [@nowak_s06] for a review).
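The payoff structure just described can be written down directly; the numbers below are hypothetical, chosen only to satisfy the ranking T$>$R$>$P$>$S. Defection is the dominant strategy, yet mutual defection pays less than mutual cooperation — precisely the dilemma.

```python
# Prisoner's dilemma with the ranking T > R > P > S (hypothetical values)
T, R, P, S = 1.5, 1.0, 0.1, 0.0
# payoff[my_strategy][opponent_strategy], 'C' = cooperate, 'D' = defect
payoff = {"C": {"C": R, "D": S}, "D": {"C": T, "D": P}}

# Defection is dominant: it pays more against either opponent...
for opp in ("C", "D"):
    assert payoff["D"][opp] > payoff["C"][opp]
# ...yet mutual cooperation beats mutual defection -> the social dilemma
assert payoff["C"]["C"] > payoff["D"]["D"]
```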
Of particular renown are the investigations of spatial prisoner’s dilemma games, which have turned out to be very inspirational over decades. In the first spatial prisoner’s dilemma game introduced by Nowak and May [@nowak_n92b], players were located on a square lattice, and their payoffs were gathered from the games with their neighbors. Subsequently, players were allowed to adopt the strategy of their neighbors, provided their fitness was higher. It was shown that the introduction of spatial structure enables cooperators to form clusters, thereby promoting the evolution of cooperation. Along this pioneering line of research, many different mechanisms aimed at sustaining cooperation were subsequently proposed and investigated. Examples include the reward mechanism [@jimenez_jtb08; @cuesta_jtb08], simultaneous adoption of different strategies depending on the opponents [@wardil_epl09], preferential selection of a neighbor [@wu_zx_pre06; @fu_pre08; @van-segbroeck_prl09; @chen_xj_pre09b; @shi_dm_pa09], the mobility of players [@vainstein_pre01; @vainstein_jtb07; @helbing_acs08; @helbing_pnas09; @meloni_pre09; @droz_epjb09; @wu_zx_pre09; @sicardi_jtb09; @jiang_ll_pre10], heterogeneous teaching activity [@szolnoki_epl07; @szabo_pre09], differences in evolutionary time scales [@roca_prl06; @wu_zx_pre09b], neutral evolution [@cremer_njp09], and coevolutionary selection of dynamical rules [@szolnoki_pre09d; @szabo_epl09], to name but a few. Looking at some examples more specifically, in a recent research paper [@fu_pre09], where players were allowed to either adjust their strategy or switch their defective partners, an optimal state that maximizes cooperation was reported. In [@helbing_acs08; @helbing_pnas09] it was shown that the mobility of players can lead to an outbreak of cooperation, even if the conditions are noisy and do not necessarily favor the spreading of cooperators.
Inspired by these successful research efforts, an interesting question poses itself, which we aim to address in what follows. Namely, if we consider a simple addition to the prisoner’s dilemma game that allows players to aspire to the fittest, *i.e.* introducing the propensity of designating the most successful neighbor as being the role model, is this beneficial for the evolution of cooperation or not? The answer is not straightforward since, as we have mentioned, defectors spread by means of their higher fitness. Thus, the modification we consider might give them higher chances of replication. In the early pioneering works, Nowak et al. [@nowak_pnas94; @nowak_ijbc94] have shown that increasing the probability to copy high payoff neighbors asymptotically leads to increased cooperation, yet this dependence was not monotonic over the whole parameter range. Here we aim to investigate this further in the presence of different levels of uncertainty by strategy adoptions and provide an interpretation of the reported results.
Aside from the progress in promoting cooperation described above, another very important development came from replacing the square lattice with more complex interaction topologies (see [@szabo_pr07] for a review), possibly reflecting the actual state in social networks more closely. Recently, many studies have attested to the fact that complex networks play a critical role in the maintenance of cooperation for a wide range of parameters [@abramson_pre01; @santos_prl05; @vukov_pre06; @santos_prslb06; @hauert_ajp05; @rong_pre07; @gomez-gardenes_prl07; @pusch_pre08; @poncela_eplp09; @perc_njp09]. Quite remarkably, in the early investigations, it has been discovered that the scale-free network can greatly elevate the survivability of cooperators if compared to the classical square lattice [@santos_prl05]. Following this discovery, many studies have built on it in order to extend the scope of cooperation on complex networks. For example, a high value of the clustering coefficient was found beneficial [@assenza_pre08], while payoff normalization was found to impair the evolution of cooperation [@tomassini_ijmpc07; @masuda_prsb07; @szolnoki_pa08]. Motivated by these studies, we examine also how aspiring to the fittest in the prisoner’s dilemma game fares on complex networks; in particular, whether it promotes or hinders the evolution of cooperation.
Here we thus study the prisoner’s dilemma game with the introduction of a mechanism that allows players to aspire to the fittest. Compared with previous works [@szabo_pre98; @hauert_ajp05], where a neighbor was chosen uniformly at random from all the neighbors, the propensity of designating the most successful neighbor as the role model is the most significant difference. Our aim is to study how this mechanism affects the evolution of cooperation on the square lattice, as well as on the scale-free network and the random regular graph, for different levels of uncertainty by strategy adoptions. By means of systematic computer simulations we demonstrate, similarly as was reported already by Nowak et al. [@nowak_pnas94; @nowak_ijbc94], that this simple mechanism can actually promote the evolution of cooperation significantly. We give an interpretation of the observed phenomena and examine the impact of different levels of uncertainty by strategy adoptions and the impact of different interaction networks on the outcome of the modified prisoner’s dilemma. In the remainder of this paper we will first describe the considered evolutionary game, subsequently we will present the main results, and finally we will summarize our conclusions.
Evolutionary game
=================
We consider an evolutionary prisoner’s dilemma game with the temptation to defect $T = b$ (the highest payoff received by a defector if playing against a cooperator), reward for mutual cooperation $R = b-c$, the punishment for mutual defection $P=0$, and the sucker’s payoff $S=-c$ (the lowest payoff received by a cooperator if playing against a defector). For positive $b>c$ we have $T>R>P>S$, thus strictly satisfying the prisoner’s dilemma payoff ranking. For simplicity, but without loss of generality, the payoffs can be rescaled such that $R=1$, $T=1+r$, $S=-r$ and $P=0$, where $r=c/(b-c)$ is the cost-to-benefit ratio [@hauert_ajp05]. Depending on the interaction network, the strategy adoption rule and other simulation details (see *e.g.* [@szabo_pr07; @roca_plr09; @perc_bs10]), there always exists a critical cost-to-benefit ratio $r=r_c$ at which cooperators die out. We will be interested in determining to what extent aspiring to the fittest, as introduced in what follows, affects this critical value under different circumstances.
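As a quick sanity check, the rescaling above preserves the strict prisoner's dilemma ranking for every positive cost-to-benefit ratio. A minimal sketch (the function name is ours, chosen for illustration):

```python
def rescaled_payoffs(r):
    """Rescaled prisoner's dilemma payoffs for cost-to-benefit ratio r = c/(b - c)."""
    R = 1.0        # reward for mutual cooperation
    T = 1.0 + r    # temptation to defect
    S = -r         # sucker's payoff
    P = 0.0        # punishment for mutual defection
    return R, T, S, P

# For any r > 0 the strict ranking T > R > P > S holds:
R, T, S, P = rescaled_payoffs(0.03)
assert T > R > P > S
```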
Throughout this work each player $x$ is initially designated either as a cooperator ($s_x=$C) or defector (D) with equal probability. As the interaction network, we use either a regular $L \times L$ square lattice, the random regular graph constructed as described in [@szabo_jpa04], or the scale-free network with $L^2$ nodes and an average degree of four generated via the Barab[á]{}si-Albert algorithm [@barabasi_s99]. The game is iterated forward in accordance with the sequential simulation procedure comprising the following elementary steps. First, player $x$ acquires its payoff $p_x$ by playing the game with all its neighbors. Next, we evaluate in the same way the payoffs of all the neighbors of player $x$ and subsequently select one neighbor $y$ via the probability $$\Pi_{y}=\frac{\exp(w p_{y})}{\sum_{z} \exp(w p_{z})},$$ where the sum runs over all the neighbors of player $x$ and $w$ is the newly introduced selection parameter. Evidently, for $w=0$ the most frequently adopted situation is recovered where player $y$ is chosen uniformly at random from all the neighbors of player $x$. For $w>0$, however, Eq. (1) introduces a preference towards those neighbors of player $x$ that have a higher payoff $p_y$. Conversely, for $w<0$ players with a lower payoff are more likely to be selected as potential strategy donors. Lastly, player $x$ adopts the strategy $s_y$ from the selected player $y$ with the probability $$W(s_y \rightarrow s_x)=\frac{1}{1+\exp[(p_x-p_y)/K]},$$ where $K$ denotes the amplitude of noise or its inverse ($1/K$) the so-called intensity of selection [@szabo_pre98]. Irrespective of the value of $w$ one full iteration step involves all players $x=1,2, \ldots, L^2$ having a chance to adopt a strategy from one of their neighbors once. Here the evolutionary prisoner’s dilemma game is thus supplemented by a selection parameter $w$, enabling us to tune the preference towards which neighbor will be considered more likely as a potential strategy donor.
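The two elementary steps above, neighbor selection via Eq. (1) and Fermi strategy adoption via Eq. (2), can be sketched as follows. This is an illustrative fragment of our own, not the authors' simulation code:

```python
import math
import random

def select_neighbor(neighbors, payoff, w):
    """Select neighbor y with probability exp(w * p_y) / sum_z exp(w * p_z), cf. Eq. (1)."""
    weights = [math.exp(w * payoff[y]) for y in neighbors]
    u = random.random() * sum(weights)
    for y, wt in zip(neighbors, weights):
        u -= wt
        if u <= 0.0:
            return y
    return neighbors[-1]  # guard against floating-point round-off

def adopts(p_x, p_y, K):
    """Fermi rule, cf. Eq. (2): player x copies s_y with
    probability 1 / (1 + exp((p_x - p_y) / K))."""
    return random.random() < 1.0 / (1.0 + math.exp((p_x - p_y) / K))
```

For $w=0$ all weights are equal and the selection is uniformly random; as $w$ grows the selection concentrates on the highest-payoff neighbor, so that together with $K \to 0$ in the adoption step the procedure approaches the "best takes all" limit.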
For positive values of $w$ the players are more likely to aspire to their fittest neighbors, while for negative values of $w$ the less successful neighbors will more likely act as strategy donors. This amendment seems reasonable and is easily justifiable with realistic examples. For example, people are, in general, much more likely to follow a successful individual than someone who is struggling to get by. This is taken into account by positive values of $w$. However, under certain (admittedly rare) circumstances, it is also possible that individuals will be inspired to copy their less successful partners. Indeed, the most frequently adopted random selection of a neighbor, retrieved in our case by $w=0$, seems in many ways like the least probable alternative. It is also informative to note that aspiring to the fittest becomes identical to the frequently adopted “best takes all” rule if $w \to \infty$ in Eq. (1) and $K \to 0$ in Eq. (2). This rule was adopted in the seminal work by Nowak and May [@nowak_n92b], as well as subsequently by Huberman and Glance [@huberman_pnas93] who showed that under certain circumstances asynchronous updating is substantially less successful in ensuring the survivability of cooperators than synchronous updating. Although in our simulations we never quite reach the “best takes all” limit, and thus a direct comparison is somewhat circumstantial, it is interesting to note that an additional uncertainty in the strategy adoption process via finite values of $K$ may alleviate the disadvantage that is due to asynchronous updating [@szabo_pre98].
Results of computer simulations presented below were obtained on populations comprising $100 \times 100$ to $400 \times 400$ individuals, whereby the fraction of cooperators $\rho_{{\rm C}}$ was determined within $10^5$ full iteration steps after sufficiently long transients were discarded. Moreover, since the preferential selection of neighbors may introduce additional disturbances, final results were averaged over up to $40$ independent runs for each set of parameter values in order to assure suitable accuracy.
Results
=======
We start by visually inspecting characteristic spatial distributions of cooperators and defectors for different values of the selection parameter $w$. Figure \[fig1\] features the results obtained for $r=0.022$ and $K=0.1$, whereat for $w=0$ (upper middle panel) a small fraction of cooperators can prevail on the square lattice by means of forming clusters, thereby protecting themselves against the exploitation by defectors [@hauert_n04]. As evidenced in the upper leftmost panel, for negative values of $w$ even this small fraction of cooperators goes extinct, thus yielding as a result the exclusive dominance of defectors. For positive values of $w$ (upper right panel), however, the cooperators start mushrooming, whereby clustering remains their mechanism of spreading and survivability. Interestingly, large enough values of $w$ can facilitate the evolution of cooperation to the point of near-complete cooperator dominance (bottom right panel), or at least equality with the defectors, as implied by $\rho_{{\rm C}} \geq \rho_{{\rm D}}$ in all lower panels of Fig. \[fig1\]. These results suggest that when players aspire to adopt the strategy from their fittest neighbor the evolution of cooperation thrives. In what follows we will systematically examine the validity of this claim.
To quantify the ability of particular values of the selection parameter to facilitate and maintain cooperation more precisely, we first calculate $\rho_{{\rm C}}$ in dependence on the cost-to-benefit ratio $r$ for different values of $w$. Results presented in the top panel of Fig. \[fig2\] clearly attest to the fact that positive values of $w$ promote the evolution of cooperation, while on the other hand, negative values of $w$ impede it. Note that the critical cost-to-benefit ratio $r=r_{c}$, marking the extinction of cooperators, increases by a full order of magnitude at $w=4.0$ (orange stars) if compared to the $w=0$ (black squares) case. Interestingly, the promotive effect on the survivability of cooperators becomes monotonically more potent with increasing $w$, thus suggesting that a universally applicable mechanism is underlying the observed behavior. Indeed, the monotonic increase of $r=r_{c}$ with increasing $w$ is obvious from the bottom panel of Fig. \[fig2\], showing concisely the extent to which aspiring to the fittest promotes the evolution of cooperation on the square lattice.
Importantly, qualitatively identical results can be obtained on interaction networks other than the square lattice. Results presented in Fig. \[fig3\] depict how cooperators fare on the random regular graph and the scale-free network for different values of $w$. Similarly as in Fig. \[fig2\], it can be observed that positive values of $w$ promote the evolution of cooperation. Conversely, negative values of $w$ promote the evolution of defection. This is in agreement with the observations made on the square lattice, thus designating $w>0$ as being universally effective in promoting the evolution of cooperation, in particular, working on regular lattices and graphs as well as highly heterogeneous networks. Since the latter have been identified as potent promoters of cooperation in their own right [@santos_prl05], this conclusion is all the more inspiring.
In order to explain the promotive impact of positive values of $w$ on the evolution of cooperation, we examine time courses of $\rho_{{\rm C}}$ for different values of the selection parameter. Figure \[fig4\] features results obtained for $r=0.03$ and $K=0.1$, whereat cooperators die out if $w=0$ (black line; see also Fig. \[fig2\]). For positive values of $w$, on the other hand, the stationary state is a mixed C+D phase with cooperators occupying the larger portion of the square lattice. Interestingly, however, in the earliest stages of the evolutionary process (note that values of $\rho_{{\rm C}}$ were recorded also in-between full iteration steps) it appears as if defectors would actually fare better for $w>0$. In fact, the larger the value of $w$, the deeper the initial downfall of cooperators. This is actually what one would expect, given that defectors are, as individuals, more successful than cooperators and will thus be chosen more likely as potential strategy donors if $w$ is positive. This in turn amplifies their chances of spreading and results in the decimation of cooperators (only slightly more than 20% survive). Quite surprisingly though, the tide changes fast, and as one can observe from the presented time courses, the more so the deeper the initial downfall of cooperators. For $w=4.0$, instead of cooperator extinction, we observe their near-dominance with $\rho_{{\rm C}}$ hovering comfortably over $0.8$ (orange line). We argue that for positive values of $w$ a negative feedback effect occurs, which halts and eventually reverts what appears to be a march of defectors towards their undisputed dominance. Namely, in the very early stages of the game defectors are able to plunder very efficiently, which quickly results in a state where there are hardly any cooperators left to exploit. Consequently, the few remaining clusters of cooperators start recovering lost ground against weakened defectors.
Crucial thereby is the fact that the clusters formed by cooperators are impervious to defector attacks even at high values of $r$ because of the positive selection towards the fittest neighbors acting as strategy sources (occurring for $w>0$). In a sea of cooperators this is practically always another cooperator rather than a defector trying to penetrate into the cluster. This newly identified mechanism ultimately results in widespread cooperation that goes beyond what can be warranted by the spatial reciprocity alone (see *e.g.* [@szabo_pr07]), and this irrespective of the underlying interaction network. As such, aspiration to the fittest, *i.e.* the propensity of designating the most successful neighbor as being the role model, may be seen as a universally applicable promoter of cooperation.
Lastly, it is instructive to examine the evolution of cooperation for $w>0$ in dependence on the uncertainty by strategy adoptions. The latter can be tuned via $K$, which acts as a temperature parameter in the employed Fermi strategy adoption function [@szabo_pre98]. Accordingly, when $K \to \infty$ all information is lost and the strategies are adopted by means of a coin toss. The phase diagram presented in the top panel of Fig. \[fig5\] is well-known, implying the existence of an optimal level of uncertainty for the evolution of cooperation, as was previously reported in [@perc_njp06a; @vukov_pre06]. In particular, note that the D $\leftrightarrow$ C+D transition line is bell shaped, indicating that $K \approx 0.37$ is the optimal temperature at which cooperators are able to survive at the highest value of $r$. This phenomenon can be interpreted as an evolutionary resonance [@perc_njp06b], albeit it can only be observed on interaction topologies lacking overlapping triangles [@szabo_pre05; @szolnoki_pre09c]. Interestingly, positive values of $w$ eradicate (as do interaction networks incorporating overlapping triangles) the existence of an optimal $K$, as can be observed from the phase diagram presented in the bottom panel of Fig. \[fig5\]. The latter was obtained for $w=2.0$ and exhibits an inverted bell-shaped D $\leftrightarrow$ C+D transition line, indicating the existence of the worst rather than an optimal temperature $K$ for the evolution of cooperation. This in turn implies that introducing a preference towards the fittest neighbors effectively alters the interaction network. While the square lattice obviously lacks overlapping triangles and thus enables the observation of an optimal $K$, trimming the likelihood of who will act as a strategy source seems to effectively enhance linkage among essentially disconnected triplets and thus precludes the same observation.
A similar phenomenon was observed recently in public goods games, where the joint membership in large groups was also found to alter the effective interaction network and thus the impact of uncertainty on the evolution of cooperation [@szolnoki_pre09c].
Summary
=======
In sum, we have shown that aspiring to the fittest promotes the evolution of cooperation in the prisoner’s dilemma game irrespective of the underlying interaction network and the uncertainty by strategy adoptions. The essence of the identified mechanism for the cooperation promotion has been attributed to a negative feedback effect, occurring because of the formation of extremely robust clusters (or groups on complex networks) of cooperators that are impervious to defector attacks even at high temptations to defect. Although initially the defectors appear to be heading to an undisputed victory, the fast exploitation and the consequent shortage of cooperators weakens the defectors and makes them susceptible to an overtake by the few remaining cooperators. It is further interesting that the introduction of a selection parameter, making the fittest neighbors more likely to act as sources of adopted strategies, effectively alters the interaction network. While in its absence there exists an intermediate uncertainty governing the process of strategy adoptions $K$ by which the largest cost-to-benefit ratio $r$ still warrants the survival of at least some cooperators, in its presence this feature vanishes and becomes qualitatively identical to what was observed previously on lattices that do incorporate overlapping triangles, such as the kagome lattice [@szolnoki_pre09c]. Since in fact the actual interaction topology remains unaffected by the different values of the selection parameter $w$, we have argued that the differences in the evolution of cooperation are due to an effective transition of the interaction topology, which is brought about by the fact that some players are more likely to act as strategy sources than others. Therefore, the bonds between certain player pairs appear stronger than average, although the interaction networks consist of links that are not weighted.
Since aspiring to the fittest, *i.e.* the propensity of designating the most successful neighbors as role models, appears to be both widely applicable as well as realistically justifiable, we hope it will inspire future studies, especially in terms of understanding the emergence of successful leaders in societies via a coevolutionary process [@perc_bs10]. An interesting interpretation of the selection parameter $w$ can also be obtained if the latter is considered as a measure of cognitive complexity of each individual. In particular, it is possible to argue that the more obtuse an individual is, the closer to random his choice of a role model will be. If individuals are to be able to aspire to the fittest, they should have some degree of information processing capabilities. On the other hand, negative values of $w$ can be interpreted as a choice that is based on moral values [@helbing_plos10], for example, when highly successful individuals are so by unethical actions and thus should not be imitated.
Matja[ž]{} Perc acknowledges support from the Slovenian Research Agency (Grant No. Z1-2032). Zhen Wang acknowledges support from the Center for Asia Studies of Nankai University (Grant No. 2010-5) and from the National Natural Science Foundation of China (Grant No. 10672081). Helpful discussions with Professor Lianzhong Zhang are gratefully acknowledged as well. This work has benefited substantially from the insightful comments of the Physical Review referees, and we are very grateful for their help.
[^1]: Corresponding author.\
Electronic address: [email protected]
---
abstract: 'Data stream learning has been largely studied for extracting knowledge structures from continuous and rapid data records. In the semantic Web, data is interpreted in ontologies and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift i.e., unexpected changes in data distribution, causing most models to be less accurate as time passes. To this end we revisited (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction with data from Dublin and Beijing.'
author:
- |
    Freddy Lécué\
    INRIA, France\
    Accenture Labs, Ireland
- |
    Jiaoyan Chen\
    Zhejiang University\
    China
- |
    Jeff Z. Pan\
    University of Aberdeen\
    United Kingdom
- |
    Huajun Chen\
    Zhejiang University\
    China
bibliography:
- 'ijcai17-stream.bib'
title: |
Learning from Ontology Streams with Semantic Concept Drift\
[^1]
---
Introduction and Related Work
=============================
Stream learning, or the problem of extracting and predicting knowledge from the temporal evolution of data, has been largely studied. Most techniques in Databases e.g., [@CheHNW96], adapting [Apriori]{} [@AgrMSTV96] for streams, focus on a syntactic representation of data to identify frequent associations and exploit them for prediction. [@LeeCL03] improved its scalability by partitioning all streams using sliding-window filtering. Approaches in Machine Learning e.g., [@GamK11] focus on learning decision rules for classifying data from streams in real-time. Although highly scalable, most approaches have been shown to be not robust to *concept drift* i.e., unexpected changes in data distribution [@coble2000real]. Indeed their models, built on old data and then inconsistent with new data, are less accurate as time passes. Towards this challenge [@ChuZLTT11] applied online active learning using customized weighting properties. Alternatively, [@gao2007general] prioritized recent data during the elaboration of the learning model through regular updates, assuming temporally adjacent data is the most representative information for prediction. [@cao2003support] trained an adaptive support vector machine by placing higher weight on the errors from recent training samples. [@Kolter2007] identify multiple candidate models learnt from different historical samples and adopt a dynamic weighted majority strategy. [@bifet2015efficient] go further by considering dynamic sliding windows. Although such approaches manage gradual changes, they fail in maintaining high accuracy for sudden, abrupt changes. This is mainly due to the inconsistent evolution of knowledge and the lack of metrics to understand the semantics of its changes and concept drifts.
Towards this issue we consider their representation in the semantic Web. Such streams, represented as ontology streams [@HuaS05], are evolutive versions of ontologies where OWL (Web Ontology Language), which is underpinned by Description Logics (DL) [@BaaN03], is used as a rich description language. From knowledge materialization [@DBLP:conf/ijcai/BeckDE16; @galarraga2013amie], to predictive reasoning [@DBLP:conf/ijcai/Lecue15], all are inferences where the dynamics and semantics of data are exploited for deriving a priori knowledge from pre-established (certain) statements. However concept drift is not handled, which limits the accuracy of prediction for highly changing streams.
Our approach, exploiting the semantics of data streams, tackles the problem of learning and prediction with concept drifts. Given some continuous knowledge, how to manage its changes and their inconsistent evolution to ensure accurate prediction? Semantic reasoning and machine learning have been combined by revisiting feature embeddings as semantic embeddings i.e., vectors capturing consistency and knowledge entailment in ontology streams. Such embeddings are then exploited in a context of supervised stream learning to learn models, which are robust to concept drifts i.e., sudden and inconsistent prediction changes. Our approach has been shown to be adaptable and flexible to basic learning techniques. The experiments have shown accurate prediction with live stream data from Dublin in Ireland and Beijing in China.
The next section reviews the adopted logic and the ontology stream learning problem. In Section 3 we study concept drift and its significance. Section 4 presents how semantic embeddings are elaborated and exploited to derive accurate prediction. Finally, we report experimental results on accuracy with data from Dublin and Beijing and draw some conclusions.
Background
==========
[\[sec:Background\]]{}
The semantics of data is represented using an ontology. We focus on Description Logic (DL) to define ontologies since it offers reasoning support for most of its expressive families and compatibility to W3C standards e.g., OWL 2. Our work is illustrated using DL $\mathcal{EL}^{++}$ [@BaaBL05], which supports polynomial time reasoning. We review (i) DL basics of $\mathcal{EL}^{++}$, (ii) ontology stream, (iii) stream learning problem.
Description Logics $\mathcal{EL}^{++}$
---------------------------------------
A signature $\Sigma$, noted $(\mathcal{N}_C, \mathcal{N}_R, \mathcal{N}_I)$ consists of $3$ disjoint sets of (i) atomic concepts $\mathcal{N}_C$, (ii) atomic roles $\mathcal{N}_R$, and (iii) individuals $\mathcal{N}_I$. Given a signature, the top concept $\top$, the bottom concept $\bot$, an atomic concept $A$, an individual $a$, an atomic role expression $r$, $\mathcal{EL}^{++}$ concept expressions $C$ and $D$ in $\mathcal{C}$ can be composed with the following constructs: $$\top\;|\;\bot\;|\;A\;|\;C\sqcap D\;|\;\exists r.C\;|\;\{a\}\nonumber$$ The DL ontology $\mathcal{O}\stackrel{.}{=}\langle\mathcal{T}, \mathcal{A}\rangle$ is composed of TBox $\mathcal{T}$, ABox $\mathcal{A}$. A TBox is a set of concept, role axioms. $\mathcal{EL}^{++}$ supports General Concept Inclusion axioms (GCIs e.g. $C \sqsubseteq D$), Role Inclusion axioms (RIs e.g., $r \sqsubseteq s$ ). An ABox is a set of concept assertion axioms e.g., $C(a)$, role assertion axioms e.g., $R(a, b)$, individual in/equality axioms e.g., $a \neq b$, $a = b$.
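The concept constructors listed above can be captured as a small abstract syntax. The following encoding is our own illustration (class names and the sample axiom are hypothetical, not part of any OWL API):

```python
from dataclasses import dataclass

# EL++ concept expressions: top, bottom, atomic concept A,
# conjunction C ⊓ D, existential restriction ∃r.C, and nominal {a}.
@dataclass(frozen=True)
class Top:
    pass

@dataclass(frozen=True)
class Bottom:
    pass

@dataclass(frozen=True)
class Atomic:
    name: str

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Exists:
    role: str
    filler: object

@dataclass(frozen=True)
class Nominal:
    individual: str

# Hypothetical reading of the DisruptedRoad description used in the running example:
# Road ⊓ ∃adjacentTo.DisruptiveEvent
disrupted_road = And(Atomic("Road"), Exists("adjacentTo", Atomic("DisruptiveEvent")))
```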
**(TBox and ABox Concept Assertion Axioms)**\
Figure \[fig:StaticOntology:Background:Knowledge\] presents (i) a TBox $\mathcal{T}$ where $DisruptedRoad$ denotes the concept of “[roads which are adjacent to an event causing high disruption]{}", (ii) concept assertions (\[eq:adjR1\]-\[eq:adjR2\]) with the individual $r_0$ having roads $r_1$ and $r_2$ as adjunct roads.
All completion rules, which are used to classify $\mathcal{EL}^{++}$ TBox $\mathcal{T}$ and entail subsumption, are described in [@BaaBL05]. Reasoning with such rules is PTime-Complete.
Ontology Stream
---------------
We represent knowledge evolution by a dynamic, evolutive version of ontologies [@HuaS05]. [Data (ABox), its inferred statements (entailments) are evolving over time while its schema (TBox) remains unchanged.]{}
[\[defn:ontologyStreamDef\]]{}**(DL $\mathcal{L}$ Ontology Stream)**\
A DL $\mathcal{L}$ ontology stream $\mathcal{P}_m^{n}$ from point of time $m$ to point of time $n$ is a sequence of (sets of) Abox axioms $(\mathcal{P}_m^{n}(m), \mathcal{P}_m^{n}(m\Plus1),\cdots, \mathcal{P}_m^{n}(n))$ with respect to a static TBox $\mathcal{T}$ in a DL $\mathcal{L}$ where $m, n\in \mathbb{N}$ and $m<n$.
$\mathcal{P}_m^{n}(i)$ is a snapshot of an ontology stream $\mathcal{P}_m^{n}$ at time $i$, referring to ABox axioms. Thus a transition from $\mathcal{P}_m^{n}(i)$ to $\mathcal{P}_m^{n}(i\Plus1)$ is seen as an ABox update. We denote by $\mathcal{P}_m^{n}[i,j]$ [ i.e., $\bigcup_{k=i}^{j} \mathcal{P}_m^n(k)$ ]{} a windowed stream of $\mathcal{P}_m^{n}$ between time $i$ and $j$ with $i \leq j$. [ Any window $[i,j]$ has a fixed length. $1$-length windows are denoted by $(i)$.]{} We consider streams $\mathcal{P}_{0}^{n}$ with $[\alpha] \doteq [i,j]$, $[\beta] \doteq [k,l]$ as windows in $[0,n]$ [and $i<k$.]{}
**(DL $\mathcal{EL}^{++}$ Ontology Stream)**\
Figure \[fig:DynamicOntologyStream\] illustrates $\mathcal{EL}^{++}$ streams $\mathcal{P}_{0}^{n}$, $\mathcal{Q}_{0}^{n}$, $\mathcal{R}_{0}^{n}$, related to events, travel time, buses, through snapshots at time $i\in\{0,1,2,3\}$ (i.e., a view on $[0,3]$). In our example $n$ is any integer greater than $5$. Their dynamic knowledge is captured by evolutive ABox axioms e.g., captures $e_1$ as “a social poetry event occurring in $r_2$" at time $1$ of $\mathcal{P}_0^n$.
By applying completion rules on static knowledge $\mathcal{T}$ and ontology streams $\mathcal{P}_0^n$, snapshot-specific axioms are inferred.
The evolution of a stream is captured along its changes i.e., *new*, *obsolete* and *invariant* ABox entailments from one windowed stream to another one in Definition \[defn:ontologyStreamChangesIJCAI2015\] [@DBLP:conf/ijcai/Lecue15].
[\[defn:ontologyStreamChangesIJCAI2015\]]{}**(ABox Entailment-based Stream Changes)**\
Let $\mathcal{S}_0^n$ be a stream; $[\alpha]$, $[\beta]$ be windows in $[0,n]$; $\mathcal{T}$ be axioms, $\mathcal{G}$ its ABox entailments. The changes occurring from $\mathcal{S}_{0}^{n}[\alpha]$ to $\mathcal{S}_{0}^{n}[\beta]$, denoted by $\mathcal{S}_{0}^{n}[\beta] \nabla \mathcal{S}_{0}^{n}[\alpha]$, are ABox entailments in $\mathcal{G}$ being $new$, $obsolete$, or $invariant$.
$$\begin{aligned}
{\mathcal{G}^{[\alpha],[\beta]}_{new}} &\doteq \{g\in\mathcal{G}\;|\;\mathcal{T}\cup\mathcal{S}_{0}^{n}[\beta]\models g\;\wedge \mathcal{T}\cup\mathcal{S}_{0}^{n}[\alpha]\not\models g\}\label{eq:newSubsumedIJCAI2015}\\
{\mathcal{G}^{[\alpha],[\beta]}_{obs}} &\doteq\{g\in\mathcal{G}\;|\;\mathcal{T}\cup\mathcal{S}_{0}^{n}[\beta]\not\models g\;\wedge \mathcal{T}\cup\mathcal{S}_{0}^{n}[\alpha]\models g\}\label{eq:obsoleteSubsumedIJCAI2015}\\
{\mathcal{G}^{[\alpha],[\beta]}_{inv}} &\doteq\{g\in\mathcal{G}\;|\;\mathcal{T}\cup\mathcal{S}_{0}^{n}[\beta]\models g\;\wedge \mathcal{T}\cup\mathcal{S}_{0}^{n}[\alpha]\models g\}\label{eq:invariantSubsumedIJCAI2015}\end{aligned}$$
${\mathcal{G}^{[\alpha],[\beta]}_{new}}$ reflects knowledge we gain by sliding the window from $[\alpha]$ to $[\beta]$, while ${\mathcal{G}^{[\alpha],[\beta]}_{obs}}$ and ${\mathcal{G}^{[\alpha],[\beta]}_{inv}}$ denote respectively lost and stable knowledge. All duplicates are supposed removed. Definition \[defn:ontologyStreamChangesIJCAI2015\] provides the basics, through ABox entailments, for understanding how knowledge evolves over time.
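The three change sets reduce to plain set operations, assuming the ABox entailments of each windowed stream have already been computed by a DL reasoner (a simplification; the entailment step itself is the PTime part):

```python
def stream_changes(ent_alpha, ent_beta):
    """ABox entailment-based stream changes from window [alpha] to [beta]:
    new       = entailed in [beta] but not in [alpha],
    obsolete  = entailed in [alpha] but not in [beta],
    invariant = entailed in both."""
    new = ent_beta - ent_alpha
    obsolete = ent_alpha - ent_beta
    invariant = ent_alpha & ent_beta
    return new, obsolete, invariant
```

On the running example, with entailments $\{ClearedRoad(r_2), with(r_2,b_0)\}$ over $[0,1]$ and $\{DisruptedRoad(r_2), with(r_2,b_0)\}$ over $[2,3]$, this yields $DisruptedRoad(r_2)$ as new, $ClearedRoad(r_2)$ as obsolete and $with(r_2,b_0)$ as invariant.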
[\[ex:ontologyStreamChangesIJCAI2013\]]{}**(ABox Entailment-based Stream Changes)**\
Table \[tab:ontologyStreamChangesIJCAI2015\] illustrates changes occurring from $(\mathcal{Q}\cup\mathcal{R})_{0}^{n}[0,1]$ to $(\mathcal{Q}\cup\mathcal{R})_{0}^{n}[2,3]$ through ABox entailments. For instance “$r_2$ as a disrupted road" in window $[2,3]$ of $(\mathcal{Q}\cup\mathcal{R})_{0}^{n}$ is $new$ with respect to knowledge in $[0,1]$. It is entailed by applying DL completion rules on the relevant stream axioms and $\mathcal{T}$.
[@[ ]{}c@[ ]{}|c|c|c]{}
Windowed Stream &\
Changes& $obsolete$ & $invariant $ & $new$\
$with(r_2,b_0)$ & & &\
$ClearedRoad(r_2)$ & & &\
$DisruptedRoad(r_2)$ & & &\
Ontology Stream Learning Problem
--------------------------------
Definition \[def:OSL\] revisits classic supervised learning [@domingos2000mining] for ontology stream as the problem of predicting knowledge (through entailment) in a future snapshot.
[\[def:OSL\]]{}**(Ontology Stream Learning Problem)**\
Let $\mathcal{S}_0^{n}$ be a stream; $\mathcal{T}$, $\mathcal{A}$ be respectively TBox, ABox; $g\in\mathcal{G}$ an ABox entailment. An Ontology Stream Learning Problem, noted OSLP$\langle \mathcal{S}_0^n, k, \mathcal{T}, \mathcal{A}, g\rangle$, is the problem of estimating whether $g$ can be entailed from $\mathcal{T}$ and $\mathcal{A}$ at time $k\in(0,n]$ of stream $\mathcal{S}_0^n$, given knowledge at time $t < k$ of $\mathcal{S}_0^n$.
This estimation is denoted as $p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_{0}^{n}(k)\models g)$ with values in $[0,1]$ [and $k\geq 1$. $g$ is a class assertion entailment in the form of $G(a)$, with $G$ a concept expression and $a$ an individual.]{} The [estimation]{}, adapted from [@DBLP:conf/icdm/GaoFH07], can be elaborated using knowledge from [previous snapshots of $\mathcal{S}_0^{k}$:]{}
$${\label{eq:conditionalProbability}}
p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_{0}^{n}(k)\models g) \doteq \frac{p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^{k\Minus 1}\models g)}{p_{|\mathcal{T}\cup\mathcal{A}}(a \in \mathcal{S}_0^{k\Minus 1})}$$
[Estimation $p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_{0}^{k \Minus 1}\models g)$ is the proportion of snapshots in $\mathcal{S}_0^{k \Minus 1}$ entailing $g$. The conditional probability of $a$ in $\mathcal{S}_0^{k\Minus 1}$ (noted $a \in \mathcal{S}_0^{k\Minus 1}$) given $\mathcal{S}_0^{k\Minus 1}$ entailing $g$, or $G(a)$, is 1.]{}
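Under the assumption that per-snapshot entailment checks are available, the estimation reduces to counting (a sketch; the function name is ours):

```python
def estimate(snapshot_entails_g, snapshot_mentions_a):
    """p(S(k) |= g): ratio of the proportion of past snapshots entailing g
    to the proportion of past snapshots asserting the individual a, per the
    conditional estimation above. Inputs are parallel lists of booleans
    for snapshots 0..k-1; the shared denominator k cancels."""
    entail = sum(snapshot_entails_g)
    mention = sum(snapshot_mentions_a)
    return entail / mention if mention else 0.0

# Toy run matching the DisruptedRoad(r_2) example: among snapshots 0..3,
# two entail g and three assert r_2, giving 2/3.
p = estimate([False, False, True, True], [False, True, True, True])
```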
[\[ex:OSL\]]{}**(Ontology Stream Learning Problem)**\
The problem of estimating whether class assertion $g$, defined as $DisruptedRoad(r_2)$, can be entailed from $\mathcal{T}$ and $\mathcal{A}$ at time $4$ of $(\mathcal{Q}\cup\mathcal{R})_0^n$ is defined as OSLP$\langle (\mathcal{Q}\cup\mathcal{R})_0^n, 4, \mathcal{T}, \mathcal{A}, g\rangle$. The estimation can be retrieved using the formula above, hence $p_{|\mathcal{T}\cup\mathcal{A}}((\mathcal{Q}\cup\mathcal{R})_0^n(4)\models DisruptedRoad(r_2)) \doteq \sfrac{2}{3}$.
Concept Drift in An Ontology Stream
===================================
[\[sec:Subsection:TODO\]]{}
We introduce semantic concept drift, as a basis for qualifying, quantifying sudden and abrupt changes in an ontology stream.
Semantic Concept Drift
----------------------
Definition \[def:SCD\] revisits *concept drift* [@gao2007general] for ontology streams as *prediction changes* (Definition \[def:SC\]) in ABox entailment, which are *sudden* and *abrupt* (Definitions \[def:SSC\], \[def:ASC\]).
[\[def:SC\]]{}**(Prediction Change)**\
Let $\mathcal{S}_0^n$ be a stream; $\mathcal{T}$, $\mathcal{A}$ and $\mathcal{G}$ be TBox, ABox and its entailments. A prediction change in $\mathcal{S}_0^n$ occurs between time $i$ and $j$ in $[0,n]$ with respect to $\mathcal{T}$, $\mathcal{A}$ and its entailments iff:
$${\label{eq:predictionChange}}
\exists g\in\mathcal{G} : \norm{p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n(i)\models g) - p_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n(j)\models g)} \geq \varepsilon$$
where $\varepsilon \in (0,1]$ is a variable bounding the difference of estimation, $\norm{v}$ refers to the absolute value of $v$, and $j > i$.
[ABox entailment $g$ is called an evidence entailment of the prediction change. We denote by $\mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, j, \varepsilon)$, the set of all evidence entailments of the prediction change with an $\varepsilon$ difference between time $i$ and $j$ of ontology stream $\mathcal{S}_0^n$.]{}
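The evidence-entailment set $\mathbb{C}$ can be sketched as a comparison of two estimation maps (ours; the per-entailment estimations are assumed precomputed):

```python
def evidence_entailments(p_at_i, p_at_j, eps):
    """Evidence entailments C(S, i, j, eps): all g whose estimation changes
    by at least eps between times i and j. p_at_i / p_at_j map each
    ABox entailment g to its estimation at that time."""
    return {g for g in p_at_i
            if abs(p_at_i[g] - p_at_j.get(g, 0.0)) >= eps}
```

On the running example, the estimation of $DisruptedRoad(r_2)$ moving from $0$ at time $2$ to $\sfrac{2}{3}$ at time $4$ makes it an evidence entailment for $\varepsilon = \sfrac{1}{3}$.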
[\[ex:SC\]]{}**(Prediction Change)**\
$g \doteq DisruptedRoad(r_2)$ can be entailed from $\mathcal{T}$ and $\mathcal{A}$ at time $2$ of $(\mathcal{Q}\cup\mathcal{R})_0^n$ with a zero probability following . Therefore a prediction change between times $2$ and $4$ (cf. Example \[ex:OSL\]) is captured with $g \in \mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}((\mathcal{Q}\cup\mathcal{R})_0^n, 2, 4,\sfrac{1}{3})$.
[\[def:SSC\]]{}**($\alpha$-Sudden Prediction Change)**\
A prediction change at point of time $i$ in stream $\mathcal{S}_0^n$, satisfying , is defined as $\alpha$-sudden, with $\alpha\in(0,n\Minus i]$ iff $j = i+\alpha$.
[\[def:ASC\]]{}**(Abrupt Prediction Change)**\
A prediction change, satisfying , is abrupt iff $\exists g'\in\mathcal{G}$ s.t.
$${\label{eq:APC}}
\mathcal{T} \cup \mathcal{A} \cup \{g\} \cup \{g'\} \cup \bigcup_{k=0}^{\max\{i,j\}}\mathcal{S}_0^n(k) \models \bot$$
where $\bigcup_{k=0}^{\max\{i,j\}}\mathcal{S}_0^n(k)$ captures all axioms from any snapshot $\mathcal{S}_0^n(k)$ of stream $\mathcal{S}_0^n$ with $k\in [0, \max\{i,j\}]$.
Suddenness characterises the proximity of prediction changes in streams i.e., the lower $\alpha$ the closer the changes. Abruptness captures disruptive changes from a semantic perspective i.e., conflicting knowledge among snapshots $\mathcal{S}_0^n(i)$, $\mathcal{S}_0^n(j)$ with respect to background knowledge $\mathcal{T} \cup \mathcal{A}$.
[\[def:SCD\]]{}**(Semantic Concept Drift)**\
A semantic concept drift in $\mathcal{S}_0^n$, is defined as a $1$-sudden and abrupt prediction change.
Evaluating if a concept drift occurs for a snapshot update is in the worst case polynomial time with respect to acyclic TBoxes and $\mathcal{S}_0^n$ in $\mathcal{EL}^{++}$, since subsumption and satisfiability can be checked in polynomial time [@BaaBL05].
[\[ex:SCD\]]{}**(Semantic Concept Drift)**\
Two prediction changes from time $i=2$ to $3$ and $3$ to $4$ (cf. Table \[tab:ontologyStreamChangesIJCAI2017\]) have occurred for $g \doteq DisruptedRoad(r_2)$ in $(\mathcal{Q}\cup\mathcal{R})_0^n$. They are semantic concept drifts as they are $1$-sudden and abrupt with $g' \doteq ClearedRoad(r_2)$ in $(\mathcal{Q}\cup\mathcal{R})_0^n(1)$.
[@[ ]{}c@[ ]{}|@[ ]{}c@[ ]{}|@[ ]{}c@[ ]{}|@[ ]{}c@[ ]{}|@[ ]{}c@[ ]{}]{} &\
Past Points & Time & $p_{|\mathcal{T}\cup\mathcal{A}}$& $g\in\mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}$ & Abrupt-\
of Time & $i$ & $((\mathcal{Q}\cup\mathcal{R})_0^n(i)\models g)$ &$((\mathcal{Q}\cup\mathcal{R})_0^n, i, i \Plus 1, \sfrac{1}{3})$ & ness\
$\{0\}$ & $1$ & $0$ & &\
$\{0,1\}$ & $2$ & $0$ & &\
$\{0,1,2\}$ & $3$ & $\sfrac{1}{2}$ & &\
$\{0,1,2,3\}$ & $4$ & $\sfrac{2}{3}$ & N/A & N/A\
Significance of Concept Drift
-----------------------------
The significance of a semantic concept drift (Definition \[def:Significance\]) is an indicator of its severity. It captures the homogeneity of the concept drift across ABox entailments as the proportion of ABox entailments from $\mathcal{S}_0^n(i)$ and $\mathcal{S}_0^n(i\Plus 1)$ causing the semantic concept drift. Significance values range over $[0,1]$.
[\[def:Significance\]]{}**(Semantic Concept Drift Significance)**\
The significance of a semantic concept drift, defined between points of time $i\in(0,n)$ and $i\Plus 1$ of $\mathcal{S}_0^n$ with $\varepsilon$, $\mathcal{T}$, $\mathcal{A}$, $\mathcal{G}$ as difference, TBox, ABox, and entailments, is:
$${\label{eq:Significance}}
\hspace{-0.2cm}\sigma_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, \varepsilon)\doteq \frac{|\mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, i \Plus 1, \varepsilon)|}{|\{g\in\mathcal{G}\;|\;\mathcal{S}_0^n(i)\models g \vee \mathcal{S}_0^n(i \Plus 1)\models g\;\}|}$$
where $|\cdot|$ denotes set cardinality.
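A sketch of the significance computation, assuming the evidence set and the entailments of the two snapshots are given:

```python
def significance(evidence, ent_i, ent_i1):
    """Semantic concept drift significance: the number of evidence
    entailments divided by the number of ABox entailments of snapshot i
    or snapshot i+1 (the denominator of the definition above)."""
    universe = ent_i | ent_i1
    return len(evidence) / len(universe) if universe else 0.0
```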
Evaluating significance is in the worst case polynomial time, cf. the complexity of Definition \[def:SCD\].
[\[ex:Significance\]]{}**(Semantic Concept Drift Significance)**\
Applying the significance measure to the concept drifts of Table \[tab:ontologyStreamChangesIJCAI2017\], we derive that $\sigma_{|\mathcal{T}\cup\mathcal{A}}((\mathcal{Q}\cup\mathcal{R})_0^n, 2, \sfrac{1}{3})$ is $\sfrac{4}{7}$ while $\sigma_{|\mathcal{T}\cup\mathcal{A}}((\mathcal{Q}\cup\mathcal{R})_0^n, 3, \sfrac{1}{3})$ is $0$, hence a more significant drift between times $2$ and $3$ than between $3$ and $4$. In other words, the conflicting facts $g \doteq DisruptedRoad(r_2)$ and $g'\doteq ClearedRoad(r_2)$ w.r.t. $\mathcal{T}$ and $\mathcal{A}$ have the most significant impact on prediction changes at times $2$ and $3$.
**(Semantic Concept Drift Evolution)**\[lemma:drift\]\
A semantic concept drift in any ontology stream $\mathcal{S}_0^n$ is more significant at time $i > 0$ than at time $i\Plus1$ if $|{\mathcal{G}^{[0,i],[0,i\Plus1]}_{new}}| = 0$.
(Sketch) Since $|{\mathcal{G}^{[0,i],[0,i\Plus1]}_{new}}| = 0$, $\mathcal{S}_0^n(i)$ and $\mathcal{S}_0^n(i\Plus1)$ are similar w.r.t $\models_{\mathcal{T}\cup\mathcal{A}}$. Thus, the set of all entailments, predicted at $i\Plus 1$ and $i\Plus 2$ from , are similar but with different prediction values $\forall \varepsilon \geq 0$. So $\sigma_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, \varepsilon)$ and $\sigma_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i\Plus1, \varepsilon)$ in have same denominators while $\mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i\Plus1, i \Plus2, \varepsilon) \subseteq \mathbb{C}_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, i \Plus 1, \varepsilon)$ hence $\sigma_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i\Plus1, \varepsilon) \leq \sigma_{|\mathcal{T}\cup\mathcal{A}}(\mathcal{S}_0^n, i, \varepsilon)$.
Algorithm \[algo:rulesInterestingnessUpdate\] retrieves significant concept drifts in $\mathcal{S}_0^n$ with minimal significance $\sigma_{\min}$. It iterates over all snapshot updates except those with no new ABox entailment (line \[algo:Drift:Lemma\] - Lemma \[lemma:drift\]), thus minimizing satisfiability and subsumption checking. Semantic concept drifts, as $1$-sudden and abrupt prediction changes, are retrieved (line \[algo:Drift:SCD\]). The process is completed (line \[algo:Drift:SSCD\]) by filtering drifts by significance $\sigma_{\min}$.
Computing a solution, given a polynomial input ($n$ and the number of axioms and entailments in $\mathcal{O}$ and $\mathcal{S}_0^n$), is in the worst case polynomial time, due to the complexity of evaluating a semantic drift, cf. the complexity of Definition \[def:SCD\]. However, computing significant $\alpha$-sudden, abrupt prediction changes is in the worst case NP with respect to the number of snapshots.
Ontology Stream Learning
========================
[\[sec:Subsection:OSL\]]{}
We tackle the ontology stream learning problem by (i) computing semantic embeddings, as mathematical objects exploiting the properties of concept drifts, (ii) applying all embeddings in model-based learning approaches (Algorithm \[algo:ConsistentPrediction\]).
Semantic Embeddings
-------------------
The semantics of streams exposes two levels of knowledge which are crucial for learning with concept drift: (i) (in-)consistency evolution of knowledge, and (ii) entailment of the forecasting target from stream assertions and axioms. They are semantic embeddings, captured as: *consistency vectors* (Definition \[def:ConsistencyVector\]) and *entailment vector* (Definition \[def:EntailmentVector\]).
[\[def:ConsistencyVector\]]{}**(Consistency Vector)**\
A consistency vector of snapshot $\mathcal{S}_0^n(i)$ in $\mathcal{S}_0^n$, denoted by ${\bf{c}}_{i}$, is defined $\forall j\in[0,n]$ by ${c}_{ij}$ if $i<j$; ${c}_{ji}$ otherwise such that:
$$\label{eq:ConsistencyVector}
\hspace{-0.1cm}c_{ij} \stackrel{.}{=}
\hspace*{-0.1cm}\left\{ \begin{array}{lcl}
\hspace{-0.1cm} \frac{|{\mathcal{G}^{i,j}_{inv}} |}{|{\mathcal{G}^{i,j}_{new}} | + |{\mathcal{G}^{i,j}_{inv}} | + |{\mathcal{G}^{i,j}_{obs}} |} &
\vspace*{0.2cm}\hspace{-0.18cm}\scriptsize{\text{if $\mathcal{T}\cup\mathcal{S}_0^n(i)\cup\mathcal{S}_0^n(j)\not\models \bot$}}\\
\hspace{-0.1cm} \frac{|{\mathcal{G}^{i,j}_{inv}} |}{|{\mathcal{G}^{i,j}_{new}} | + |{\mathcal{G}^{i,j}_{inv}} | + |{\mathcal{G}^{i,j}_{obs}} |} -1 & \hspace{-0.15cm}\text{otherwise}
\end{array} \right.$$
where the expressions in between $|$ refer to its cardinality i.e., the number of new , obsolete , invariant ABox entailments from $\mathcal{S}_{0}^{n}(i)$ to $\mathcal{S}_{0}^{n}(j)$. $c_{ij} = c_{ji}\;\forall i,j\in[0,n]$.
A consistency vector, with values in $[-1,1]^{n\Plus1}$, encodes (i) (in-)consistency with (negative) positive values, and (ii) similarity of knowledge among $\mathcal{S}_0^n(i)$ and any other snapshot $\mathcal{S}_0^n(j)_{j\in[0,n]}$ of stream $\mathcal{S}_0^n$ w.r.t. axioms $\mathcal{T}$ and $\mathcal{A}$. The number of invariant entailments has a positive influence on $c_{ij}$. On the contrary, the number of new and obsolete ABox entailments, capturing differentiators in knowledge evolution, has a negative impact. When an inconsistency occurs, the value $1$ is subtracted instead of considering its additive inverse. This ensures that the invariant factor always has a positive impact.
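A single consistency-vector entry reduces to the following computation (the entailment counts and the consistency flag are assumed given, e.g., from a reasoner):

```python
def consistency_entry(n_new, n_inv, n_obs, consistent):
    """c_ij from the counts of new / invariant / obsolete ABox entailments
    between snapshots i and j; 1 is subtracted when the union of the two
    snapshots with T is inconsistent, so values lie in [-1, 1]."""
    ratio = n_inv / (n_new + n_inv + n_obs)
    return ratio if consistent else ratio - 1.0
```

With one invariant entailment out of five changes and an inconsistency between the snapshots, the entry is $0.2 - 1 = -0.8$, matching $c_{13}$ in the example below (the counts themselves are illustrative).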
Evaluating is in worst case polynomial time with respect to $\mathcal{T}$ and $\mathcal{S}_0^n$ in $\mathcal{EL}^{++}$. Indeed its evaluation requires (i) ABox entailment, and (ii) basic set theory operations from Definition \[defn:ontologyStreamChangesIJCAI2015\], both in polynomial time [@BaaBL05].
[\[ex:ConsistencyVector\]]{}**(Consistency Vector)**\
Consistency vector ${\bf c}_3$ i.e., $(c_{03}, c_{13}, c_{23}, c_{33})$ of $(\mathcal{Q}\cup\mathcal{R})_0^n(3)$ is $(0,\Minus\;0.8,1,1)$. Knowledge at time $3$ is consistent / inconsistent / similar with knowledge at times $0$ / $1$ / $2$ and $3$.
An entailment vector (Definition \[def:EntailmentVector\]) is adapting the concept of feature vector [@bishop2006pattern] in Machine Learning to represent the (non-)presence of all ABox entailments (using $\models$ w.r.t. $\mathcal{T}$, $\mathcal{A}$) in a given snapshot. Each dimension captures whether a particular ABox entailment is in ($1$) or not ($0$).
[\[def:EntailmentVector\]]{}**(Entailment Vector)**\
Let $\mathcal{G}\doteq\{g_1,\ldots,g_m\}$ be all distinct ABox entailments in $\mathcal{S}_0^n$. An entailment vector of a snapshot $\mathcal{S}_0^n(i)$ in $\mathcal{S}_0^n$, denoted by ${\bf e_i}$, is a vector of dimension $m$ such that $\forall j\in[0,m]$
$$\label{eq:EntailmentVector}
e_{ij} \stackrel{.}{=} 1\;\;{\text{if $\mathcal{T}\cup\mathcal{A}\cup\mathcal{S}_0^n(i)\models g_j$}},\;0\;\;\text{otherwise}
$$
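Concretely, an entailment vector is a binary indicator vector over the stream's distinct ABox entailments (a sketch with illustrative names):

```python
def entailment_vector(snapshot_entailments, all_entailments):
    """Binary entailment vector e_i: one dimension per distinct ABox
    entailment g_j of the stream, set to 1 iff the snapshot entails g_j
    w.r.t. T and A (entailment is assumed precomputed)."""
    return [1 if g in snapshot_entailments else 0 for g in all_entailments]
```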
[\[rmk:fev\]]{}**(Feature vs. Entailment Vector)**\
Feature vectors are restricted to raw data, while entailment vectors, of much larger dimension, embed both data and the assertions inferred from $\mathcal{T}$ and DL completion rules. The latter ensures a larger and more contextual coverage.
Semantic Prediction
-------------------
Algorithm \[algo:ConsistentPrediction\] aims at learning a model (line \[algo:ConsistentPrediction:CLM\]) over ${\bf N}\leq n\Plus1$ snapshots of $\mathcal{S}_0^n$, noted $\mathcal{S}_0^n|_{\kappa}$, for prediction at $n\Plus1$. $\kappa$ refers to the proportion of snapshots with concept drift used for modelling. $\mathcal{S}_0^n|_{\kappa}$ is selected to capture (i) $\mathcal{S}_0^n(n)$, i.e., the snapshot closest (temporally) to $\mathcal{S}_0^n(n\Plus1)$ (line \[algo:ConsistentPrediction:Init\]), (ii) knowledge in the most significant concept drifts (Definition \[def:Significance\] - lines \[algo:ConsistentPrediction:F2\]-\[algo:ConsistentPrediction:F3\], \[algo:ConsistentPrediction:F1\]), (iii) any other snapshots needed to meet ${\bf N}$ (line \[algo:ConsistentPrediction:F5\]).
The model is trained, following the Stochastic Gradient Descent method [@Zhang2004], using samples of the form $\left\{({\bf e}_i, g_i)\;|\;{i\in\{1,\ldots,{\bf N}\}}\right\}$ where ${\bf e}_i$ is the entailment vector for $\mathcal{S}_0^n(i)$ and ${\bf v}(g_i)$ is the target variable in $[0,1]$, capturing the estimation of $g_i$ to be entailed. $g_i$ is determined by the entailment vector. The goal is to learn a linear scoring function $f({\bf e}_i)=a^T{\bf e}_i+b$ with model parameters $a\in\mathbf{R}^{\bf N}$ and $b\in {\bf R}$ which minimizes the following objective function $O_j$: $$\label{eq:loss}
\begin{aligned}
O_j(a,b) \doteq \sum_{i=1}^{\kappa} \omega_{ij} L({\bf v}(g_i),f({\bf e}_i)) + \alpha R(a),
\end{aligned}$$ where $L$ represents the loss function (e.g., Hinge for SVM or $\log$ for logistic regression). $R$ and $\alpha$ control the variance of the model in case of overfitting: $R$ is a regularization term and $\alpha > 0$ a regularization hyperparameter. Each sample $({\bf e}_i, g_i)$ is weighted by $\omega_{ij}$, defined below in two variants, for filtering out consistent, respectively inconsistent, historical snapshots w.r.t. the consistency vector. $\omega_{ij}$ controls the consistency level of models.
[2]{}$$\label{eq:g1}
\hspace*{-0.2cm}\omega_{ij} \doteq
\begin{cases}
0, & \mbox{if } {c}_{ij} > 0 \\
\Minus {c}_{ij} & \mbox{else},
\end{cases}$$ $$\label{eq:g2}
\hspace*{-0.6cm}\omega_{ij} \doteq
\begin{cases}
0, & \mbox{if } {c}_{ij} < 0 \\
{c}_{ij} & \mbox{else},
\end{cases}$$
Parameterized with low $\varepsilon$, $\sigma_{\min}$, a high $\kappa$ and the first weighting scheme (line \[algo:ConsistentPrediction:CLM\] (i)), the algorithm favours models with significant concept drifts for prediction, which supports diversity and prediction changes in the model. Parameterized with high $\varepsilon$, $\sigma_{\min}$, a low $\kappa$ and the second weighting scheme, it captures more consistent models. The linear scoring function $f$ has the following advantages compared to more complex structures such as artificial neural networks: (i) better handling of over-fitting with a reduced sample size - due to the filtering of snapshots not involved in significant concept drifts (lines \[algo:ConsistentPrediction:F2\]-\[algo:ConsistentPrediction:F3\]), (ii) efficient, scalable learning and prediction for online contexts.
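The weighted objective can be sketched as follows (our simplification: squared loss stands in for $L$, an L2 penalty for $R$, and batch gradient descent for SGD; all names are illustrative):

```python
import numpy as np

def fit_weighted_linear(E, v, w, alpha=0.1, lr=0.1, steps=2000):
    """Fit f(e) = a^T e + b minimizing sum_i w_i * L(v_i, f(e_i)) + alpha*R(a),
    with L the squared loss and R the L2 penalty (illustrative stand-ins)."""
    n, m = E.shape
    a, b = np.zeros(m), 0.0
    for _ in range(steps):
        r = E @ a + b - v                      # residuals f(e_i) - v(g_i)
        a -= lr * (E.T @ (w * r) + alpha * a)  # weighted loss + ridge gradient
        b -= lr * np.sum(w * r)
    return a, b

# Entailment vectors as rows, targets in [0,1], consistency-derived weights.
E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = np.array([1.0, 0.0, 1.0])
a, b = fit_weighted_linear(E, v, np.ones(3))
```

Setting a sample's weight to $0$, as $\omega_{ij}$ does for the filtered snapshots, simply removes it from the objective.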
Experimental Results {#sec:evaluation}
====================
We report accuracy by (i) studying the impact of and semantic embeddings on concept drift for *Dublin-Ireland*, *Beijing-China* applications, and (ii) comparing its results with state-of-the-art approaches. The system is tested on: 16 Intel(R) Xeon(R) CPU E5-2680, 2.80GHz cores, 32GB RAM.
$\bullet$ **Beijing Air Quality (BAQ) Context:** BAQ index, ranging from Good (value $5$), Moderate ($4$), Unhealthy ($3$), Very Unhealthy ($2$) to Hazardous ($1$), can be forecasted using data streams of $B_1$: air pollutants, and meteorology elements $B_2$: wind speed, $B_3$: humidity, observed in $12$ sites. The variation of context, characterising a concept drift problem, makes the BAQ index difficult to forecast, especially with potentially erroneous sensor data. The semantics of context is based on a DL $\mathcal{ALC}$ ontology, including $48$ concepts, $13$ roles, $598$ axioms. An average of $6,500$ RDF triples are generated at each update (i.e., every $600$ seconds) for all streams.
$\bullet$ **Dublin Bus Delay (DBD) Context:** DBD, classified as Free (value $5$), Low ($4$), Moderate ($3$), Heavy ($2$), Stopped ($1$) can be forecasted using reputable live stream contextual data (Table \[tab:DataSets\]) related to $D_1$: bus GPS location, delay, congestion status, $D_2$: weather conditions, $D_3$: road incidents. However bus delay is subject to major changes due to the high degree of context variation. The latter, responsible for the concept drift problem, impacts accuracy the most. We consider an extended setting by enriching data using a DL $\mathcal{EL}^{++}$ domain ontology ($55$ concepts, $19$ roles and $25,456$ axioms).
$\bullet$ **Validation:** Accuracy is measured by comparing predictions with real-time situations in cities, where results can be easily extracted and compared from all different approaches.
$\bullet$ **Semantic Impact:** Table \[res:table1\] reports the positive impact of using semantic embeddings (cf. columns with ) on all forecasting tasks, with an average improvement of $26.6\%$. The embeddings naturally identify semantically (dis-)similar contexts by capturing temporal (in-)consistency(ies). Thus, they help in building discriminating models, even for long-term-ahead forecasting as shown for $\Delta = 18$-hours with a $33.1\%$ gain. The difference of results between Beijing and Dublin confirms the importance of semantic expressivity i.e., $40$+ times more axioms with a $71.5\%$ gain of accuracy for Dublin.
$\bullet$ **Feature Impact:** Table \[res:table1\] emphasises an extra accuracy gain when increasing the number of features, i.e., an average gain of $68.5\%$ accuracy from $1$ to $3$ features. $\bullet$ **Concept Drift** is characterised by $48\%$ and $51\%$ of stream updates in respectively BAQ and DBD. We focus on $4$ levels of concept drifts, ranging from a $.2$ to $.8$ significance $\forall \Delta\in\{6, 12, 18\}$. Level $0$ does not capture any change. Figure \[res:conceptDrift\] reports the proportion of severity levels in concept drift for BAQ and DBD, e.g., $7\%$ are level-$.4$ for BAQ while $19\%$ are level-$.8$ for DBD. Although accuracy clearly declines as the severity level increases, e.g., from $96\%$ (level-$.2$) to $21\%$ (level-$.8$) in DBD, semantic embeddings have been shown to significantly boost accuracy. More interestingly, the higher the severity, the greater the improvement, i.e., (on average) $36\%$ to $56\%$ on levels $.4$ to $.8$. Thus integrating semantics is a way forward to build machine learning models which are robust to changes, potentially erroneous sensor data and concept drifts.
$\bullet$ **Model Consistency Impact:** Figures \[res:consistent\] and \[res:inconsistent\] report accuracy of forecasting tasks on a [**H**]{}igh and [**L**]{}ow [**C**]{}oncept [**D**]{}rift versions of the Dublin and Beijing problems, noted HCD and LCD. $85\%$ and $15\%$ of snapshots are impacted by concept drift respectively in HCD and LCD.
![Forecasting Accuracy vs. Drift Significance.[]{data-label="res:conceptDrift"}](./figure/conceptDrift.pdf)
The approach is evaluated with $3$ variants of $(\varepsilon, \sigma_{\min}, \kappa)$: (i) consistent model with $(.9, .9, .1)$, (ii) mixed model with $(.5, .5, .5)$, (iii) inconsistent model with $(.1, .1, .9)$. ${\bf N}=1,500$. Figure \[res:consistent\] (resp. \[res:inconsistent\]) reports that prediction with consistent (resp. inconsistent) samples outperforms models with inconsistent (resp. consistent) samples by about $318\%$ (resp. $456\%$) and $254\%$ (resp. $322\%$) in respectively Beijing and Dublin for LCD (resp. HCD). These results confirm the importance of semantic encoding, which supports the encoding of concept drift and consistency properties in our approach.
![Model Consistency & Forecasting Accuracy. Low Concept Drift. ($15\%$ of snapshots impacted by concept drift).[]{data-label="res:consistent"}](./figure/NoSignificantDrift.pdf)
![Model Consistency & Forecasting Accuracy. High Concept Drift. ($85\%$ of snapshots impacted by concept drift).[]{data-label="res:inconsistent"}](./figure/SignificantDrift.pdf)
$\bullet$ **Baseline:** We compare our approach ${\bf{\mathcal{B}_i}}, {\bf{\mathcal{D}_{i,1 \leq i \leq 4}}}$ in Table \[res:table1\] with (i) weighted [**S**]{}tochastic [**G**]{}radient [**D**]{}escent (SGD), (ii) [**A**]{}uto-[**R**]{}egressive [**I**]{}ntegrated [**M**]{}oving [**A**]{}verage (ARIMA), a standard time-series forecasting model [@saboia1977autoregressive], and two methods addressing concept drift: (iii) [**A**]{}daptive-[**S**]{}ize [**H**]{}oeffding [**T**]{}ree (ASHT), (iv) [**AD**]{}aptive [**WIN**]{}dowing bagging (ADWIN) [@Bifet2009; @Bifet2010]. ARIMA considers one stream variable: BAQ index for Beijing and DBD for Dublin, while SGD, ASHT and ADWIN use all features of ${\bf{\mathcal{B}_4}}, {\bf{\mathcal{D}_{4}}}$ and favour recent snapshots during learning. The forecasted real value in $[0,5]$ is discretised back using our categories. Results with optimum parameters are reported. Figure \[res:baselines\] emphasises that our approach (with $3$ levels of features: $\mathcal{B}_4$, $\mathcal{D}_4$) outperforms state-of-the-art methods. The more features, the more accurate the model. More interestingly, classic learning algorithms do not generalise as well as our semantics-aware approach, although SGD, ASHT and ADWIN integrate all features. Our approach shows to be very robust, with less variance. Experiments also demonstrate that semantic (in-)consistency matters more than recentness during learning.
$\bullet$ **Lessons Learnt:** Adding semantics to a classic learning model has a clear positive impact on accuracy, especially in the presence of concept drifts. Our approach also demonstrates that the more semantic axioms, the more robust the model and hence the higher the accuracy. Axiom numbers are critical as they drive and control the semantics of data in streams, which improves accuracy and concept drift detection but not scalability (not reported in the paper). Scalability is worse with more expressive DLs, due to consistency checks, and with limited impact on accuracy. Lightweight semantics such as RDF-S would highly limit the scope of our model given the omission of inconsistency checking, cf. Figures \[res:consistent\]-\[res:inconsistent\].
Conclusion {#sec:conclusion}
==========
Our approach, exploiting the semantics of data streams, tackles the problem of learning and prediction with concept drifts. Semantic reasoning and machine learning have been combined by revisiting feature embeddings as semantic embeddings, i.e., vectors capturing consistency and entailment of any snapshot in ontology streams. Such embeddings are then exploited in a context of supervised stream learning to learn models which are robust to concept drifts, i.e., sudden and abrupt (inconsistent) prediction changes. Our approach has been shown to be adaptable and flexible with respect to basic learning algorithms. In addition to demonstrating accurate prediction with concept drifts in the Dublin and Beijing forecasting applications, experiments have shown that encoding semantics in models is a way towards outperforming state-of-the-art approaches.
In future work we will investigate the impact of semantic embeddings in other Machine Learning models.
---
abstract: |
Two seemingly unrelated problems are intimately connected.
The first is the equisingularity problem in ${\mathbb{R}}^2$: For an analytic family $f_t:({\mathbb{R}}^2,0){\rightarrow}({\mathbb{R}},0)$, when should it be called an “equisingular deformation"? This amounts to finding a suitable trivialization condition (as strong as possible) and, of course, a criterion.
The second is on the Morse stability. We define ${\mathbb{R}}_*$, which is ${\mathbb{R}}$ “enriched" with a class of infinitesimals. How to generalize the Morse Stability Theorem to polynomials over ${\mathbb{R}}_*$?
The space ${\mathbb{R}}_*$ is much smaller than the space used in Non-standard Analysis. Our infinitesimals are analytic arcs, represented by fractional power series, *e.g.*, $x=y^3+\cdots$, $x=y^{5/2}+\cdots$, $x=y^{3/2}+\cdots$, are infinitesimals at $0\in {\mathbb{R}}$, in descending orders.
Thus, $p_t(x)\!:=f_t(x,y)\!:=x^4-t^2x^2y^2-y^4$ is a family of polynomials over ${\mathbb{R}}_*$. This family is not Morse stable: a triple critical point in ${\mathbb{R}}_*$ splits into three when $t\not=0$.
In our Theorem II, (B) is a trivialization condition which can serve as a definition for equisingular deformation; (A), and (A’) in Addendum \[BMorse\], are criteria, using the stability of “critical points" and the “complete initial form"; (C) is the Morse stability (Remark (\[MandZ\])). Theorem I consists of weaker conditions (a), (b), (c). The detailed proofs will appear later.
We were inspired by the intriguing discovery of S. Koike ([@ko]) that the Briançon-Speder family, while blow-analytically trivial, admits no contact order preserving trivialization. The notion of blow-analytic trivialization must be modified; (B) and (b) are options.
address: 'School of Mathematics, University of Sydney, Sydney, NSW, 2006, Australia '
author:
- 'Tzee-Char Kuo and Laurentiu Paunescu'
title: |
Equisingularity in $\textbf{\textit{R}}^{\textbf{2}}$ As Morse Stability\
In Infinitesimal Calculus
---
Results. {#Results}
========
As in the Curve Selection Lemma, by a *parameterized arc* at $0$ in ${\mathbb{R}}^2$ (resp.${\mathbb{C}}^2$) we mean a *real* analytic map germ ${\vec{\lambda}}: [0, {\epsilon}){\rightarrow}{\mathbb{R}}^2$ (resp.${\mathbb{C}}^2$), ${\vec{\lambda}}(0)=0$, ${\vec{\lambda}}(s)\not\equiv 0$. We call the image set, ${\pmb{\lambda}}\!:=Im({\vec{\lambda}})$, a (geometric) *arc* at $0$, or the *locus* of ${\vec{\lambda}}$; call ${\vec{\lambda}}$ *a* *parametrization* of ${\pmb{\lambda}}$.
Take ${\pmb{\lambda}}\not={\pmb{\mu}}$. The distance from $P\in {\pmb{\lambda}}$ to ${\pmb{\mu}}$ is a fractional power series in $s\!:=\overline{OP}$, $dist(P,{\pmb{\mu}})=as^h+\cdots$, where $a>0$, $h\in {\mathbb{Q}}^+$.
We call ${\mathcal{O}}({\pmb{\lambda}}, {\pmb{\mu}})\!:=h$ the ***contact order*** of ${\pmb{\lambda}}$ and ${\pmb{\mu}}$. Define ${\mathcal{O}}({\pmb{\lambda}},{\pmb{\lambda}})\!:=\infty$.
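For illustration (our example, not from the text): take ${\pmb{\lambda}}: x=y^{3/2}$ and ${\pmb{\mu}}: x=y^{3/2}+y^{2}$, $y>0$. For $P=(y^{3/2},y)\in{\pmb{\lambda}}$ one has $$s\!:=\overline{OP}=\sqrt{y^{2}+y^{3}}=y+\tfrac{1}{2}y^{2}+\cdots,\qquad dist(P,{\pmb{\mu}})=y^{2}+\cdots=s^{2}+\cdots,$$ hence ${\mathcal{O}}({\pmb{\lambda}},{\pmb{\mu}})=2$.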
Let ${\textbf{S}_*}^1$, or simply ${\textbf{S}_*}$, denote the set of arcs at $0$ in ${\mathbb{R}}^2$. This is called the *enriched* *unit circle* for the following reason. The tangent half line at $0$, $\pmb{l}$, of a given ${\pmb{\lambda}}$ can be identified with a point of the unit circle $\textbf{S}^1$. If $\pmb{\lambda}\not=\pmb{l}$, then $1<{\mathcal{O}}(\pmb{\lambda},\pmb{l})<\infty$. Hence we can regard $\pmb{\lambda}$ as an “*infinitesimal*" *at* $\pmb{l}$, and ${\textbf{S}_*}$ as $\textbf{S}^1$ *“enriched"* with infinitesimals.
Let $f:({\mathbb{R}}^2,0){\rightarrow}({\mathbb{R}},0)$ be analytic. Write $ {\textit{V}^{\,{\mathbb{C}}}_*}(f)\!:=\{{\pmb{\zeta}}\in {\textbf{S}_*}^3|f(z,w)\equiv 0 \;\text{on}\; {\pmb{\zeta}}\}$, where ${\textbf{S}_*}^3$ denotes the set of arcs at $0$ in ${\mathbb{C}}^2(={\mathbb{R}}^4)$, and $f(z,w)$ is the complexification of $f$.
For ${\pmb{\lambda}}\in {\textbf{S}_*}$, write ${\mathcal{O}}({\pmb{\lambda}},{\textit{V}^{\,{\mathbb{C}}}_*}(f))\!:=\max\{{\mathcal{O}}({\pmb{\lambda}},{\pmb{\zeta}})|\,{\pmb{\zeta}}\in{\textit{V}^{\,{\mathbb{C}}}_*}(f)\}$. Define the ***f-height*** of ${\pmb{\lambda}}$ by $h_f({\pmb{\lambda}})\!:={\mathcal{O}}({\pmb{\lambda}},{\textit{V}^{\,{\mathbb{C}}}_*}(f))$. Hence $h_f({\pmb{\lambda}})=\infty$ if $f(x,y)\equiv 0$ along ${\pmb{\lambda}}$.
For ${\pmb{\lambda}}_1$, ${\pmb{\lambda}}_2\in {\textbf{S}_*}$, define ${\pmb{\lambda}}_1 \thicksim_f {\pmb{\lambda}}_2$ *if and only if* $h_f({\pmb{\lambda}}_1)=h_f({\pmb{\lambda}}_2)<{\mathcal{O}}({\pmb{\lambda}}_1,{\pmb{\lambda}}_2)$. (In fact, $h_f({\pmb{\lambda}}_1)<{\mathcal{O}}({\pmb{\lambda}}_1,{\pmb{\lambda}}_2)$ implies $h_f({\pmb{\lambda}}_1)=h_f({\pmb{\lambda}}_2)$.) The equivalence class of ${\pmb{\lambda}}$ is denoted by ${\pmb{\lambda}}_f$.
We call ${\pmb{\lambda}}_f$ an ***f-truncated arc***, or simply an ***f-arc***. Write ${\textbf{S}_{*/f}}:={\textbf{S}_*}/\thicksim_f$, $h({\pmb{\lambda}_f})\!:=h_f({\pmb{\lambda}})$.
(Intuitively, once $f$ is given, arcs are “blurred" so that only the equivalence classes are “observable". We were tempted to call ${\pmb{\lambda}_f}$ an “$f$-observable".)
Define the ***contact** **order*** of ${\pmb{\lambda}}_f$ and ${\pmb{\mu}}_f$ by: if ${\pmb{\lambda}}_f\not ={\pmb{\mu}}_f$, ${\mathcal{O}}({\pmb{\lambda}}_f,{\pmb{\mu}}_f)\!:={\mathcal{O}}({\pmb{\lambda}},{\pmb{\mu}})$, ${\pmb{\lambda}}\in {\pmb{\lambda}}_f$, ${\pmb{\mu}}\in
{\pmb{\mu}}_f$; and ${\mathcal{O}}({\pmb{\lambda}}_f,{\pmb{\lambda}}_f)\!:=\infty$. This is well-defined. Write ${\mathcal{O}}({\pmb{\lambda}}_f,{\textit{V}^{\,{\mathbb{C}}}_*}(f))
\!:={\mathcal{O}}(\pmb{\lambda},\textit{V}^{{\mathbb{C}}}_*(f))$.
From now on we assume $f(x,y)$ is ***mini-regular*** in $x$, that is, regular in $x$ of order $m(f)$, the multiplicity of $f$. (Thus the positive and negative $x$-directions are not important.)
Let ${{\mathbb{R}}_*^+}$ (resp.${{\mathbb{R}}_{*/{f}}^+}$) denote those arcs of ${\textbf{S}_*}$ (resp.${\textbf{S}_{*/f}}$) in $y>0$, not tangent to the $x$-axis, and ${{\mathbb{R}}_*^-}$ (resp.${{\mathbb{R}}_{*/f}^-}$) denote those in $y<0$. Write ${\mathbb{R}}_{*}\!:={{\mathbb{R}}_*^+}\cup
{{\mathbb{R}}_*^-}$, ${\mathbb{R}}_{*/f}\!:={{\mathbb{R}}_{*/{f}}^+}\cup {{\mathbb{R}}_{*/f}^-}$.
Take ${\pmb{\lambda}_f}$, ${\pmb{\mu}_f}\in {{\mathbb{R}}_{*/{f}}^+}$, or $\in{{\mathbb{R}}_{*/f}^-}$. Define $\pmb{\lambda}_f \simeq \pmb{\mu}_f $ (read:“bar equivalent") *if and only if* $\text{either}\; {\pmb{\lambda}_f}={\pmb{\mu}_f}, \;\text{or
else} \; h({\pmb{\lambda}_f})=h({\pmb{\mu}_f})= {\mathcal{O}}({\pmb{\lambda}_f},{\pmb{\mu}_f})$. Call an equivalence class an ***f-bar***. The one containing ${\pmb{\lambda}_f}$ is denoted by $B({\pmb{\lambda}}_f)$, having ***height*** $h(B({\pmb{\lambda}}_f))\!:=h({\pmb{\lambda}}_f)$. (See [@Kuo-L], [@kuo-par], [@Kur-Pau].)
If $h({\pmb{\lambda}}_f)=\infty$ then $B({\pmb{\lambda}}_f)=\{{\pmb{\lambda}}_f\}$, a singleton, and conversely.
The given coordinates $(x,y)$ yield a coordinate on each bar of finite height, as follows.
Take $B$, say in ${\mathbb{R}}_{*/f}^+$, $h(B)<\infty$. Take $\pmb{\lambda}\in \pmb{\lambda}_f\in B$ with parametrization $\vec{\lambda}(s)$. Eliminating $s$ ($s\geq 0$) yields a *unique* fractional power series (as in [@walker]) $$\label{representation}
x=\lambda(y)=a_1y^{\frac{n_1}{d}}+a_2y^\frac{n_2}{d}+\cdots, \;
d\leq n_1<n_2<\cdots, \; (y\geq 0).$$ Here all $a_i\in {\mathbb{R}}$. Let $\lambda_B(y)$ denote $\lambda(y)$ with all terms $y^e$, $e\geq h(B)$, deleted. Observe that for any $\pmb{\mu}\in \pmb{\lambda}_f\in B$, $\mu(y)$ has the form $\mu(y)=\lambda_B(y)+uy^{h(B)}+\cdots$, where $u\in {\mathbb{R}}$ is *uniquely* determined by $\pmb{\lambda}_f$. We say $\pmb{\lambda}_f\in B$ has ***canonical coordinate*** $u$, writing ${\pmb{\lambda}_f}\!:=u$. We call $x=\lambda_B(y)$, which depends only on $B$, the ***canonical representation*** of $B$.
Take $B$, $h(B)<\infty$, and $u={\pmb{\lambda}_f}\in B$. Let us write $$f(\lambda_B(y)+uy^{h(B)}+\cdots, y)\!:=I^B_f(u)y^e+\cdots, \; I^B_f({\pmb{\lambda}_f})\!:=I^B_f(u)\not=0.$$
An important observation is that $e$ depends only on $B$, not on ${\pmb{\lambda}_f}$; $I^B_f(u)$ depends only on ${\pmb{\lambda}_f}$, not on ${\pmb{\lambda}}\in {\pmb{\lambda}_f}$, and is a polynomial (Lemma (\[I\]) below). We call $L_f(B)\!:=L_f({\pmb{\lambda}_f})\!:=e$ the ***Lojasiewicz exponent*** of $f$ on $B$.
Not every $u\in {\mathbb{R}}$ is a canonical coordinate. For example, $f(x,y)=x^2-y^3$ has a bar $B$ of height $3/2$, and $\pm 1$ are not canonical coordinates; $I^B_f(u)$ is not a priori defined at $\pm 1$. Since $I^B_f$ is a polynomial, we shall regard it *as defined for all* $u\in {\mathbb{R}}$.
In general, the canonical coordinate identifies $B$ with a copy of ${\mathbb{R}}$ minus the real roots of $I^B_f$. Hence $\bar{B}$, the metric space completion, is a copy of ${\mathbb{R}}$.
If $ B=\{{\pmb{\lambda}_f}\}$, a singleton, we define $I^B_f({\pmb{\lambda}_f})\!:= 0$, $L_f({\pmb{\lambda}_f})\!:=\infty$.
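The example above ($f=x^2-y^3$) can be checked by direct substitution: along $x=uy^{3/2}$ one has $f(uy^{3/2},y)=(u^2-1)y^3$, so $I^B_f(u)=u^2-1$, with $B$-roots $\pm 1$, and $L_f(B)=3$. A small numerical sketch (illustrative only):

```python
def f(x, y):
    return x**2 - y**3

# The bar B of height 3/2 has canonical representation x = 0; an f-arc
# with canonical coordinate u is x = u*y**(3/2).  Substituting,
#     f(u*y**(3/2), y) = (u**2 - 1)*y**3,
# so I_f^B(u) = u**2 - 1 (B-roots +-1) and the Lojasiewicz exponent is 3.
y = 1e-4
for u in (0.0, 0.5, 2.0, -3.0):
    assert abs(f(u * y**1.5, y) / y**3 - (u**2 - 1.0)) < 1e-9
```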
Now, take $l(x,y)\!:=x$, and consider ${\textbf{S}_{*/l}}$. If $\nu(y)=ay^e+\cdots$, $a\not=0$, $e\geq 1$, then the $l$-arc $\pmb{\nu}_l$ can be identified with $(a,e)\in ({\mathbb{R}}-\{0\})\times
{\mathbb{Q}}^{+1}$, ${\mathbb{Q}}^{+1}\!:=\{r\in {\mathbb{Q}}^+|\,r\geq 1\}$. If $\nu(y)\equiv
0$ then $h(\pmb{\nu}_l)=\infty$; we write $\pmb{\nu}_l
\!:=(0,\infty)$. We call $\mathcal{V}\!:=(({\mathbb{R}}-\{0\})\times
{\mathbb{Q}}^{+1})\cup \{(0,\infty)\}(={{\mathbb{R}}_{*/l}^{\pm}})$ the ***infinitesimal value space***. The given $f$, mini-regular in $x$, induces a $\mathcal{V}$-valued function $$f_*: {\mathbb{R}}_{*/f}\rightarrow \mathcal{V}, \;
f_*({\pmb{\lambda}_f})\!:=(I^B_f({\pmb{\lambda}_f}), L_f({\pmb{\lambda}_f}))\in \mathcal{V}, \; ({\pmb{\lambda}_f}\in
B).$$
Take $z\in {\mathbb{C}}$. We say $z$ is a $B$-***root*** of $f$ if $f$ has a Newton-Puiseux root of the form $\alpha(y)=\lambda_B(y)+zy^{h(B)}+\cdots$. The number of such roots is the *multiplicity* of $z$.
\[critical point\] Take $c\!:={\pmb{\gamma}_f}\in B$. If $h(B)<\infty$ and $c\,(\in {\mathbb{R}})$ is a $B$-root of $f_x$, say of multiplicity $k$, we say ${\pmb{\gamma}_f}$ is a (*real*) ***critical point*** of $f_*$ of multiplicity $m({\pmb{\gamma}_f})\!:=k$.
If $B=\{{\pmb{\gamma}_f}\}$, and $m(B)\geq 2$, we also call ${\pmb{\gamma}_f}$ a critical point of multiplicity $m(B)-1$.
Call $f_*(c)\!:=f_*({\pmb{\gamma}_f})\in \mathcal{V}$ the ***critical value*** at ${\pmb{\gamma}_f}$.
If $f_x$ has complex $B$-root(s), but no real $B$-root, then we take a *generic* real number $r$, put $\gamma(y)\!:=\lambda_B(y)+ry^{h(B)}$, and call ${\pmb{\gamma}_f}$ *the* real critical point in $B$ with multiplicity $m({\pmb{\gamma}_f})\!:=1$. (Convention: For different such $B$, we take *different* generic $r$.)
The above is the list of all (real) critical points. (If $f_x$ has no $B$-root, $B$ yields no critical point.) The number of critical points is finite (Lemma (\[I\])).
Now, let ${\mathbb{M}}$ be the maximal ideal of ${\mathbb{R}}\{s\}$, furnished with the point-wise convergence topology, that is, the smallest topology so that the projection maps $$\pi_N: {\mathbb{M}}\longrightarrow {\mathbb{R}}^N,
\quad a_1s+\cdots +a_Ns^N+\cdots \mapsto (a_1,\cdots,a_N), \quad
N\in {\mathbb{Z}}^+,$$are continuous. Furnish ${\textbf{S}_*}$, ${\textbf{S}_{*/f}}$ with the quotient topologies by the quotient maps $$p_*:{\mathbb{M}}^2-\{0\}{\rightarrow}{\textbf{S}_*},\quad
p_{*/f}:{\mathbb{M}}^2-\{0\}{\rightarrow}{\textbf{S}_{*/f}}.$$
Take ${\vec{\lambda}}\in {\mathbb{M}}^2$, and a real-valued function, $\alpha$, defined near ${\vec{\lambda}}$. We say $\alpha$ is *analytic* at ${\vec{\lambda}}$ if $\alpha =\varphi \circ \pi_N$, $\pi_N$ a projection, $\varphi$ an analytic function at $\pi_N(\vec{\lambda})$ in ${\mathbb{R}}^N$. This defines an analytic structure on ${\mathbb{M}}^2$. We furnish ${\textbf{S}_*}$ and ${\textbf{S}_{*/f}}$ with the quotient analytic structure.
In the following, let $I$ be a sufficiently small neighborhood of $0$ in ${\mathbb{R}}$. We write “*c*-" for “continuous", “*a*-" for “analytic", “*c/a*-" for “continuous (resp.analytic)".
Let $F(x,y;t)$ be a given $t$-parameterized $a$-deformation of $f(x,y)$. That is to say, $F(x,y;t)$ is real analytic in $(x,y,t)$, defined for $(x,y)$ near $0\in {\mathbb{R}}^2$, $t\in I$, with $F(x,y;0)=f(x,y)$, $F(0,0;t)\equiv 0$. When $t$ is fixed, we also write $F(x,y;t)$ as $f_t(x,y)$.
In ${\textbf{S}_*}\times I$ define $({\pmb{\lambda}},t)\sim_F({\pmb{\lambda}}^{\prime},t^{\prime})$ *if and only if* $ t=t^{\prime} \;\text{and}\;
{\pmb{\lambda}}\sim_{f_t}{\pmb{\lambda}}^{\prime}.$ Denote the quotient space by ${\textbf{S}_*}\times_F I$. Similarly, ${\mathbb{R}}_{*}^{\pm}\times_F
I\!:={\mathbb{R}}_{*}^{\pm}\times I/\sim_F$.
By a $t$-parameterized ***c/a*-*deformation*** of ${\pmb{\lambda}_f}$ we mean a family of $f_t$-arcs, ${\pmb{\lambda}}_{f_t}$, obtained as follows. Take a parametrization ${\vec{\lambda}}(s)$ of ${\pmb{\lambda}_f}$, and a $c/a$-map: $I{\rightarrow}{\mathbb{M}}^2$, $t\mapsto {\vec{\lambda}}_t$, ${\vec{\lambda}}_0={\vec{\lambda}}$. Then ${\pmb{\lambda}}_{f_t}\!:=p_{*/{f_t}}({\vec{\lambda}}_t)$. This is equivalent to taking a *c/a*-map: $I {\rightarrow}{\textbf{S}_*}\times_F I$, $t \mapsto
({\pmb{\lambda}}_{f_t},t)$. A ***c/a-deformation*** of a given $B$ is, *by definition*, a family $\{B_t\}$ obtained by taking any ${\pmb{\lambda}_f}\in B$, a $c/a$-deformation ${\pmb{\lambda}}_{f_t}$, and then $B_t\!:=B({\pmb{\lambda}}_{f_t})$.
\[main\] The following three conditions are equivalent.
(**a**) Each (real) critical point, ${\pmb{\gamma}_f}$, of $f_*$ is ***stable along*** $\{f_t\}$ in the sense that ${\pmb{\gamma}_f}$ admits a $c$-deformation ${\pmb{\gamma}}_{f_t}$, a critical point of $(f_t)_*$, such that $m({\pmb{\gamma}}_{f_t})$, $h({\pmb{\gamma}}_{f_t})$, $L_{f_t}({\pmb{\gamma}}_{f_t})$ are constants. (If ${\pmb{\gamma}_f}$ arises from the generic number $r$, we use the same $r$ for ${\pmb{\gamma}}_{f_t}$.)
(**b**) There exists a ($t$-level preserving) homeomorphism $$H: ({\mathbb{R}}^2\times I, 0\times I)\rightarrow ({\mathbb{R}}^2\times I, 0\times I),
\quad ((x,y),t)\mapsto (\eta_t(x,y),t),$$ which is bi-analytic off the $t$-axis $\{0\}\times I$, with the following five properties:
(b.1) $f_t(\eta_t(x,y))=f(x,y)$, $t\in I$, (trivialization of $F(x,y;t)$);
(b.2) Given any bar $B$, $\eta_t(\vec{\alpha}(s))$ is analytic in $(\vec{\alpha},s,t)$, $\vec{\alpha}\in
p_{*/f}^{-1}(B)$ (analyticity on each bar); in particular, $\eta_t$ is arc-analytic, for any fixed $t$;
(b.3) ${\mathcal{O}}(\pmb{\alpha},
\pmb{\beta})={\mathcal{O}}(\eta_t(\pmb{\alpha}), \eta_t(\pmb{\beta}))$ (contact order preserving); moreover, $\eta_t(\pmb{\alpha}_f)\in
{\textbf{S}_{*/{f_t}}}$ is well-defined (invariance of truncated arcs).
(b.4) The induced mapping $\eta_{t}:B\rightarrow B_t$ extends to an analytic isomorphism:$\bar{B}\rightarrow
\bar{B}_t$.
(b.5) If $c$ is a critical point of $f_*$, then $c_t\!=\eta_t(c)$ is one of $(f_t)_*$, $m(c)=m(c_t)$.
(***c***) There exists an **isomorphism** $H_*:{\mathbb{R}}_{*/f}\times I \rightarrow {\mathbb{R}}_*\times_F I$, $({\pmb{\alpha}_f},t)\mapsto (\eta_{t}({\pmb{\alpha}_f}), t)$, preserving critical points and multiplicities. That is to say, $H_*$ is a homeomorphism,
(c.1) Given $B$, $B_t\!:=\eta_{t}(B)$ is a bar, $h(B_t)=h(B)$, $m(B_t)=m(B)$;
(c.2) The restriction of $\eta_{t}$ to $B$ extends to an analytic isomorphism $\bar{\eta}_{t}:\,\bar{B}{\rightarrow}\bar{B}_t$;
(c.3) If $c$ is a critical point of $f_*$, then $c_t\!:=\eta_{t}(c)$ is one of $(f_t)_*$, $m(c)=m(c_t)$.
The following three conditions are equivalent.
(**A**) The function $f_*$ is ***Morse stable along*** $\{f_t\}$. That is, every critical point is stable along $\{f_t\}$, and for critical points $c\in B$, $c^{\,\prime}\in
B^{\prime}$, $f_*(c)=f_*(c^{\,\prime})$ implies $(f_t)_*(c_t)=(f_t)_*(c_t^{\,\prime})$.
(**B**) There exists $H$, as in (***b***), with an additional property:
(b.6) If $c$, $c^{\,\prime}$ are critical points, $f_*(c)=f_*(c^{\,\prime})$, then $(f_t)_*(c_t)=(f_t)_*(c_t^{\,\prime})$.
(**C**) There exist an isomorphism $H_*$ as in (***c***), and an isomorphism $K_*:\mathcal{V}\times
I{\rightarrow}\mathcal{V}\times I$, such that $K_*\circ (f_*\times
id)=\Phi\circ H_*$, where $\Phi({\pmb{\alpha}}_{f_t},t)\!:=((f_t)_*({\pmb{\alpha}}_{f_t}),t)$.
\[I\] Let $\{z_1,\cdots,z_q\}$ be the set of $B$-roots of $f$ ($z_i\in
{\mathbb{C}}$), $h(B)<\infty$. Then $$I^B_f(x)=a\prod_{i=1}^q(x -z_i)^{m_i},\;
0\not=a\in{\mathbb{R}},\,\text{a constant},\,\;m_i\;\text{the multiplicity
of}\;z_i.$$ In particular, $I^B_f(x)$ is a polynomial with real coefficients.
If $c\!:={\pmb{\gamma}_f}\in B$ is a critical point of $f_*$, then $\frac{d}{dx}I^B_f(c)=0\not =I^B_f(c)$, and conversely. The multiplicity of $c$ (as a critical point of the polynomial $I^B_f(x)$) equals $m({\pmb{\gamma}_f})$.
The number of critical points of $f_*$ in ${\mathbb{R}}_{*/f}^{+}$ (resp.${\mathbb{R}}_{*/f}^-$) is bounded by $m(f)-1$.
The degree of $I^B_f(x)$ is called the ***multiplicity*** of $B$, denoted by $m(B)$.
We say $B$ is a ***polar bar*** if $I^B_f(x)$ has at least two distinct roots (in ${\mathbb{C}}$), or $B$ is a singleton with $m(B)\geq 2$. Call $\mathcal{I}(f)\!:=\{(B,I^B_f)\,|\,B\;\text{polar}\}$ the ***complete initial form*** of $f$.
Each critical point belongs to a polar bar; each polar bar contains at least one critical point.
We recall Morse Theory. Take an $a$-family of real polynomials $p_t(x)=a_0(t)x^d+\cdots +a_d(t)$, $a_0(0)\not=0$, $t\in I$, as an $a$-deformation of $p(x)\!:=p_0(x)$. Let $c_0\in {\mathbb{R}}$ be a critical point of $p(x)$, of multiplicity $m(c_0)$. We say $c_0$ is *stable along* $\{p_t\}$, if it admits a $c$-deformation $c_t$, $\frac{d}{dx}p_t(c_t)=0$, $m(c_t)=m(c_0)$. (A $c$-deformation $c_t$, if it exists, is necessarily an $a$-deformation.)
We say $p(x)$ is ***Morse and zero stable*** along $\{p_t\}$ if:
\(i) Every (real) critical point of $p_0(x)$ is stable along $\{p_t\}$;
\(ii) For critical points $c_0$, $c^{\,\prime}_0$, $p_0(c_0)=p_0(c_0^{\,\prime})$ implies $p_t(c_t)=p_t(c_t^{\,\prime})$.
[(iii)]{} If $p_0(c_0)=\frac{d}{dx}p_0(c_0)=0$, then $p_t(c_t)=\frac{d}{dx}p_t(c_t)=0$.
\[MandZ\] Theorem II generalizes a version of the Morse Stability Theorem: If $p(x)$ is Morse and zero stable along $\{p_t\}$ then there exist analytic isomorphisms $H, K:\,{\mathbb{R}}\times
I\rightarrow {\mathbb{R}}\times I$, such that $K\circ (p\times id)=\Phi\circ
H$, $K(0,t)\equiv 0$, where $\Phi(x,t)\!:=(p_t(x),t)$.
That (a)$\Rightarrow$(c) reduces to the following. Given $x=f_i(t)$, $1\leq i\leq N$, analytic, $f_i(t)\not =f_j(t)$, for $i\not=j$, $t\in I$. There exists an analytic isomorphism $H:{\mathbb{R}}\times I\rightarrow {\mathbb{R}}\times I$, $(x,t)\mapsto
(\eta_t(x),t)$, $\eta_t(f_i(t))=const$, $1\leq i\leq N$. (Proved by Cartan’s Theorem A, or Interpolation.)
We say $\mathcal{I}(f)$ is ***Morse and zero stable*** along $\{f_t\}$ if each polar $B$ admits a $c$-deformation $B_t$, a polar bar of $f_t$, such that two of $h(B_t)$, $m(B_t)$, $L_{f_t}(B_t)$ are constants (we can then show all three are), and $\{I^{B}_f\}$ is Morse and zero stable along $\{I^{B_t}_{f_t}\}$, for each $B$.
\[BMorse\](***B***) is also equivalent to ($\textit{\textbf{A}}^{\,\prime}$): $\mathcal{I}(f)$ is Morse and zero stable along $\{f_t\}$.
\[acondition\] For $f_t(x,y)=x^3+3tx^2y+3t^2xy^2+t^3y^3-y^4$, $f(x,y)$ has critical point ${\pmb{\gamma}_f}$, $\gamma(y)\equiv 0$, with deformation $\gamma_t(y)=-ty$, found by a Tschirnhausen transform, satisfying (***A***). However, for $g_t(x,y)=x^3+3tx^2y+t^3y^3-y^4$, terms involving $t$ below the Newton Polygon of $f$ cannot be cleared, (***a***) is not satisfied. This idea is elaborated in §\[Newton\].
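Both claims in this example are easy to check by direct expansion: $f_t=(x+ty)^3-y^4$, so the shift $x=X-ty$ removes every $t$-dependent term, whereas for $g_t$ the same shift leaves the terms $-3t^2Xy^2+3t^3y^3$. A small numerical sketch (names `F`, `G` are ours):

```python
def F(x, y, t):   # f_t(x, y) = (x + t*y)**3 - y**4, written out
    return x**3 + 3*t*x**2*y + 3*t**2*x*y**2 + t**3*y**3 - y**4

def G(x, y, t):   # g_t(x, y): the middle term 3*t**2*x*y**2 is missing
    return x**3 + 3*t*x**2*y + t**3*y**3 - y**4

for t in (0.0, 0.3, -1.7):
    for X in (0.2, -1.1):
        for y in (0.5, -0.4):
            # Tschirnhausen shift x = X - t*y trivializes f_t completely
            assert abs(F(X - t*y, y, t) - (X**3 - y**4)) < 1e-12
            # ...but leaves t-dependent dots of g_t below the Polygon of f
            assert abs(G(X - t*y, y, t)
                       - (X**3 - 3*t**2*X*y**2 + 3*t**3*y**3 - y**4)) < 1e-12
```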
Relative Newton Polygons. {#Newton}
=========================
Take ${\pmb{\lambda}}$, say in ${\mathbb{R}}_*^+$, with $\lambda(y)$ as in (\[representation\]). Let us change variables: $X\!:=x-\lambda(y), \; Y\!:=y$, $$\mathcal{F}(X,Y)\!:=f(X+\lambda(Y),Y)
\!:=\sum a_{ij}X^iY^{j/d}, \quad i, j \geq 0,\; i+j> 0.$$
In the first quadrant of a coordinate plane we plot a dot at $(i,j/d)$ for each $a_{ij}\not=0$, called a (Newton) dot. The Newton polygon of $\mathcal{F}$ in the usual sense is called the *Newton Polygon* *of* $f$ *relative to* ${\pmb{\lambda}}$, denoted by ${\mathbb{P}}(f,{\pmb{\lambda}})$. (See [@kuo-par].) Write $m_0\!:=m(f)$. Let the vertices be $$V_0=(m_0,0),\dots,
V_k=(m_k,q_k),\; q_i\in {\mathbb{Q}}^+,\, m_i>m_{i+1},\,q_i<q_{i+1}.$$
The (Newton) *edges* are: $E_i=\overline{V_{i-1}V_i}$, with *angle* $\theta_i$, $\tan
\theta_i\!:=\frac{q_i-q_{i-1}}{m_{i-1}-m_i}$, $\pi/4\leq \theta_i<
\pi/2$; a vertical one, $E_{k+1}$, sitting at $V_k$, $\theta_{k+1}=\pi/2$; a horizontal one, $E_0$, which is unimportant.
If $m_k\geq 1$ then $f\equiv 0$ on ${\pmb{\lambda}}$. If $m_k\geq 2$, $f$ is singular on ${\pmb{\lambda}}$. If ${\pmb{\lambda}}\sim_f {\pmb{\lambda}}^{\prime}$ then ${\mathbb{P}}(f,{\pmb{\lambda}})={\mathbb{P}}(f,{\pmb{\lambda}}^{\prime})$, hence ${\mathbb{P}}(f,{\pmb{\lambda}_f})$ is well-defined.
**Notation**: $L(E_i)\!:=\overline{V_{i-1}V_i^{\prime}}$, $V_i^{\prime}\!:=(0,q_{i-1}+m_{i-1}\tan \theta_i)$, *i.e.* $E_i$ extended to the $y$-axis.
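The compact boundary of ${\mathbb{P}}(f,{\pmb{\lambda}})$ is the lower-left convex hull of the Newton dots. A small self-contained sketch of the vertex computation (the function name `newton_polygon` is ours, not from the paper; exponents may be ints or `fractions.Fraction`s):

```python
def newton_polygon(dots):
    """Vertices V_0=(m_0,q_0), ..., V_k=(m_k,q_k) of the Newton polygon
    of a dot set, i.e. of the convex hull of union(dot + first quadrant);
    dots are (i, j/d) pairs."""
    best = {}                        # minimal j for each i
    for i, j in dots:
        if i not in best or j < best[i]:
            best[i] = j
    pts, min_j = [], None            # drop dots dominated by one with smaller i
    for i in sorted(best):
        if min_j is None or best[i] < min_j:
            pts.append((i, best[i]))
            min_j = best[i]
    pts.reverse()                    # now i decreasing, j increasing
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] unless it lies strictly below segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) >= (y2 - y1) * (p[0] - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# f = x**2 - y**3 relative to lam = 0: dots (2,0), (0,3); the single
# compact edge has tan(theta) = 3/2, matching the bar of height 3/2
assert newton_polygon([(2, 0), (0, 3)]) == [(2, 0), (0, 3)]
assert newton_polygon([(3, 0), (2, 1), (1, 1), (0, 4)]) == [(3, 0), (1, 1), (0, 4)]
```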
\[flem\] Suppose each polar bar $B$ admits a $c$-deformation $B_t$ such that $h(B_t)$ and $m(B_t)$ are independent of $t$. Then each ${\pmb{\lambda}}_f\in {\mathbb{R}}_{*/f}$ admits an *a*-deformation ${\pmb{\lambda}}_{f_t}\in
{\mathbb{R}}_{*/f_t} $ such that ${\mathbb{P}}(f_t,{\pmb{\lambda}}_{f_t})$ is independent of $t$. The induced deformation $B_t\!:=B({\pmb{\lambda}}_{f_t})$ of $B_0\!:=B({\pmb{\lambda}_f})$, and hence the $a$-deformation $x=\lambda_{B_t}(y)$ of the canonical representation $x=\lambda_{B_0}(y)$, are uniquely defined; that is, if we take any $\pmb{\eta}_f\in B({\pmb{\lambda}_f})$, and a *c*-deformation $\pmb{\eta}_{f_t}$ with ${\mathbb{P}}(f_t,\pmb{\eta}_{f_t})={\mathbb{P}}(f,{\pmb{\lambda}}_f)$, then $B(\pmb{\eta}_{f_t})=B({\pmb{\lambda}}_{f_t})$.
Given $B$, $B^{\prime}$. The contact order ${\mathcal{O}}(B_t,B_t^{\prime})$, defined below, is independent of $t$.
For $B\not=B^{\prime}$, define ${\mathcal{O}}(B,B^{\prime})\!:={\mathcal{O}}({\pmb{\lambda}}_f,{\pmb{\lambda}}^{\prime}_f)$, ${\pmb{\lambda}}_f\in B$, ${\pmb{\lambda}}^{\prime}_f\in B^{\prime}$; and ${\mathcal{O}}(B,B)\!:=\infty$.
For $x^2+2xy-ty^2$, obviously equisingular, the usual Newton Polygon depends on $t$. This shows the relevance of considering Polygons relative to critical points.
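Indeed, completing the square gives $x^2+2xy-ty^2=(x+y)^2-(1+t)y^2$, so relative to the arc $x=-y$ the dots $(2,0)$, $(0,2)$ are the same for all $t$ near $0$. A quick numerical check (names ours):

```python
def f_t(x, y, t):
    return x**2 + 2*x*y - t*y**2

# Usual Newton dots: (2,0), (1,1) and, for t != 0, also (0,2) -- the
# polygon changes at t = 0.  Relative to the arc x = -y, however,
#     f_t(X - y, y, t) = X**2 - (1 + t)*y**2,
# whose dots (2,0), (0,2) are independent of t (for t near 0).
for t in (0.0, 0.5, -0.25):
    for X in (0.7, -1.3):
        for y in (0.4, -0.9):
            assert abs(f_t(X - y, y, t) - (X**2 - (1 + t)*y**2)) < 1e-12
```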
The Lemma is proved by a succession of Tschirnhausen transforms at the vertices, beginning at $V_0$, which represents $a_{m0}X^m$ in $\mathcal{F}(X,Y)$, $m\!:=m(f)$. Let us define $\mathcal{P}$ by $$\label{perturbation}
F(X+\lambda(Y),Y;t)\!:=\mathcal{F}(X,Y) +\mathcal{P}(X,Y;t),\;
\mathcal{P}(X,Y;t)\!:=\sum p_{ij}(t)X^iY^{j/d},$$ where $p_{ij}(t)$ are analytic, $p_{ij}(0)=0$. Take a root of $\frac{\partial^{m-1}}{\partial
X^{m-1}}[a_{m0}X^m+\mathcal{P}(X,Y;t)]=0$, $$X=\rho_t(Y)\!:=\sum
b_j(t)Y^{j/d},\, b_j(0)=0,\; b_j(t) \; \text{analytic}.\,
(\text{Implicit Function Theorem})$$ Thus, $\lambda(y)+\rho_t(y)$ is an $a$-deformation of $\lambda(y)$. Let $X_1\!:=X-\rho_t(Y)$, $Y_1\!:=Y$. Then $$F(X_1+\lambda(Y_1)+\rho_t(Y_1),Y_1;t)
\!:=\mathcal{F}(X_1,Y_1)+\mathcal{P}^{(1)}(X_1,Y_1;t),$$ where $\mathcal{P}^{(1)}\!:=\sum p_{ij}^{(1)}(t)X_1^iY_1^{j/d},$ $p_{ij}^{(1)}(0)=0$, and $p_{m-1,j}^{(1)}(t)\equiv 0$ (Tschirnhausen).
For brevity, we shall write the coordinates $(X_1,Y_1,t)$ simply as $(X,Y,t)$, abusing notation. That is, we now have $p_{m-1,j}(t)\equiv 0$ in (\[perturbation\]).
We claim that $\mathcal{P}$ in fact has no dot below $L(E_1)$. This is proved by contradiction.
Suppose it has. Take a generic number $s\in {\mathbb{R}}$. Let $\zeta(y)\!:=\lambda(y)+sy^e$, $e\!:=\tan \theta_1$, and $$F(\widetilde{X}+\zeta(\widetilde{Y}), \widetilde{Y};t)\!:
=\mathcal{F}(\widetilde{X},\widetilde{Y})+\widetilde{\mathcal{P}},
\quad \widetilde{\mathcal{P}}(\widetilde{X},\widetilde{Y};0)\equiv
0.$$
Since $s$ is generic, ${\mathbb{P}}(f,{\pmb{\zeta}}_f)$ has only one edge, which is $L(E_1)$, and $B({\pmb{\zeta}}_f)$ is polar. Below $L(E_1)$, $\widetilde{\mathcal{P}}$ has at least one dot (when $t\not=0$), but still no dot of the form $(m-1,q)$.
A $c$-deformation $B_t$ of $B({\pmb{\zeta}}_f)$ would either create new dot(s) of the form $(m-1,q)$ below $L(E_1)$, or else not change the existing dot(s) of $\widetilde{\mathcal{P}}$ below $L(E_1)$. (This is the spirit of the Tschirnhausen transformation.) Thus, as $t\not=0$, $h(B_t)$ or $m(B_t)$, or both, will drop. This contradicts the hypothesis of the Fundamental Lemma.
This argument can be repeated recursively at $V_1$, $V_2$, etc., to clear all dots under ${\mathbb{P}}(f,\pmb{\lambda}_f)$. More precisely, suppose in (\[perturbation\]), $\mathcal{P}$ has no dots below $L(E_i)$, $0\leq i\leq r$. By the Newton-Puiseux Theorem, there exists a root $\rho_t$ of $\frac{\partial^{m_r-1}}{\partial
X^{m_r-1}}[aX^{m_r}Y^{q_r}+\mathcal{P}]=0$ with ${\mathcal{O}}_y(\rho_t)\geq
\tan\theta_{r+1}$, where $aX^{m_r}Y^{q_r}$ is the term for $V_r$. A Tschirnhausen transform will then eliminate all dots of $\mathcal{P}$ of the form $(m_r-1, q)$. As before, all dots below $L(E_{r+1})$ also disappear.
We have seen the *only* way to clear dots below ${\mathbb{P}}(f,\pmb{\lambda}_f)$ is by the Tschirnhausen transforms. If ${\mathbb{P}}(f,\pmb{\eta}_{f_t})={\mathbb{P}}(f,\pmb{\lambda}_f)$, we must have ${\mathcal{O}}(\pmb{\lambda}_{f_t}, \pmb{\eta}_{f_t})\geq h(B_0)$. The uniqueness follows.
Define a partial ordering “$>$" by: $B>\hat{B} $ if and only if $
h(B)>h(\hat{B})={\mathcal{O}}({\pmb{\lambda}_f,\pmb{\mu}_f}),\,
\pmb{\lambda}_f\in B,\pmb{\mu}_f\in \hat{B}$. Let $\hat{B}$ be the largest bar so that $B\geq \hat{B}$, $B^{\prime}\geq\hat{B}$. We write $\lambda_B(y)=\lambda_{\hat{B}}(y)+ay^e+\cdots$, $\lambda_{B^{\prime}}(y)=\lambda_{\hat{B}}(y)+by^e+\cdots$, $e\!:=h(\hat{B})$. The uniqueness of $\hat{B}_t$ completes the proof.
Vector fields. {#v.f.}
==============
Assume ($\textit{\textbf{a}}$). We use a vector field $\vec{v}$ to prove $(\textbf{\textit{b}})$. The other implications are not hard.
Take a critical point ${\pmb{\gamma}_f}$, say in $B$, $\gamma(y)=\lambda_B(y)+cy^{h(B)}$. Let $B_t$ be the deformation of $B$. Let $c_t$ be the $a$-deformation of $c$, $\frac{d}{dx}I^{B_t}_{f_t}(c_t)=0$, $m(c_t)=m(c)$. (If $c$ is generic, take $c_t=c$.)
Let $\gamma_t(y)\!:=\lambda_{B_t}(y)+c_ty^{h(B_t)}$. Then ${\pmb{\gamma}}_t$ is a critical point of $f_t$ in $B_t$.
Now, let ${\pmb{\gamma}_f}^{(i)}$, $1\leq i\leq N$, denote all the critical points of $f$, for *all* (polar) $B$. For brevity, write ${\pmb{\gamma}}^{(i)}\!:={\pmb{\gamma}_f}^{(i)}$, with deformations ${\pmb{\gamma}}^{(i)}_t$, just defined.
We can assume $F(x,0;t)=\pm x^m$, and hence $\frac{\partial
F}{\partial t}(x,0;t)\equiv 0$. As $F(x,0;t)=a(t)x^m+\cdots,\,
a(0)\not=0$, a substitution $u=\sqrt[m]{|a(t)|}\cdot x+\cdots$ will bring $F(x,0,t)$ to this form.
We can also assume ${\pmb{\gamma}}^{(i)}\in {{\mathbb{R}}_{*/{f}}^+}$ for $1\leq i\leq r$, and ${\pmb{\gamma}}^{(i)}\in{{\mathbb{R}}_{*/f}^-}$ for $r+1\leq i\leq N$.
For each ${\pmb{\gamma}}^{(i)}\in {{\mathbb{R}}_{*/{f}}^+}$, we now construct a vector field ${\vec{v}}_i^+(x,y,t)$, defined for $y\geq 0$.
Write ${\pmb{\gamma}}_t\!:={\pmb{\gamma}}_t^{(i)}$. Let $X\!:=x-\gamma_t(y), Y\!:=y$. Then $\mathcal{F}(X,Y;T)\!:=F(X+\gamma_t(Y),Y;T)$ is analytic in $(X,Y^{1/d},T)$. As in [@F-Y], [@Pau], define $\vec{v}^{\,+}_i(x,y,t)\!:=\vec{V}(x-\gamma_t(y),y,t)$, $y\geq 0$, where $$\label{V}
\vec{V}(X,Y,t)\!:=\frac{X\mathcal{F}_X\mathcal{F}_t}
{(X\mathcal{F}_X)^2+(Y\mathcal{F}_Y)^2}\cdot
X\frac{\partial}{\partial X}+\frac{Y\mathcal{F}_Y\,\mathcal{F}_t}
{(X\mathcal{F}_X)^2+(Y\mathcal{F}_Y)^2}\cdot
Y\frac{\partial}{\partial Y}-\frac{\partial}{\partial t}.$$
In general, given $\pmb{\alpha}_i$, $x=\alpha_i(y)$, say in $
{\mathbb{R}}_{*}^+$, $1\leq i \leq r$. Let $q(x,y)\!:=\prod_{k=1}^r(x-\alpha_k(y))^2$, $$q_i(x,y)\!:=q(x,y)/(x-\alpha_i(y))^2, \quad p_i(x,y)\!:=q_i(x,y)/[q_1(x,y)+\cdots +q_r(x,y)].$$ We call $\{p_1,\cdots,p_r\}$ a ***partition of unity*** for $\{{\pmb{\alpha}}_1,\cdots,{\pmb{\alpha}}_r\}$.
Now, take $\{p_i\}$ for $\{{\pmb{\gamma}}^{(1)}_t, \cdots {\pmb{\gamma}}^{(r)}_t\}$. Define $\vec{v}^{\,+}(x,y,t)\!:=\sum_{i=1}^r p_i(x,y,t)\,
\vec{v}_i^{\,+}(x,y,t)$.
Similarly, ${\pmb{\gamma}}^{(i)}_f$, $r+1\leq i \leq N$, yield $\vec{v}^{\,-}(x,y,t)$, $y\leq 0$. We can then glue $\vec{v}^{\,\pm}(x,y,t)$ together along the $x$-axis, since $\vec{v}^{\, \pm}(x,0,t)\equiv -\frac{\partial}{\partial t}$. This is our vector field $\vec{v}(x,y,t)$, which, by (\[V\]), is clearly tangent to the level surfaces of $F(x,y;t)$, proving ($b.1$).
Sketch of Proof {#proof}
===============
\[Euler\] Let $W(X,Y)$ be a weighted form of degree $d$, $w(X)=h$, $w(Y)=1$. Take $u_0$, not a multiple root of $W(X,1)$. If $W(u_0,1)\not=0$ or $u_0\not =0$ then, with $X=uv^h$, $Y=v$, $$|XW_X|+|YW_Y|=unit\cdot \mid\!v\!\mid^d,\;\text{for}\;\, u\;\text{near}\;\, u_0.$$
For, by Euler’s Theorem, if $X-u_0Y^h$ divides $W_X$ and $W_Y$, then $u_0$ is a multiple root.
To show $(b.2)$, etc., take ${\pmb{\alpha}}$, say in ${{\mathbb{R}}_*^+}$. Take $k$, ${\mathcal{O}}({\pmb{\gamma}}^{(k)},{\pmb{\alpha}})=\max\{{\mathcal{O}}({\pmb{\gamma}}^{(j)},{\pmb{\alpha}})|1\leq j\leq r\}$.
We can assume ${\pmb{\alpha}}$ is not a multiple root of $f$, $e\!:={\mathcal{O}}({\pmb{\gamma}}^{(k)},{\pmb{\alpha}_f})<\infty$. (If ${\pmb{\alpha}}$ is, then ${\pmb{\gamma}}^{(k)}={\pmb{\alpha}_f}$, $h(B)=\infty$. This case is easy.)
Write $B\!:=B({\pmb{\alpha}_f})$ if $B({\pmb{\alpha}_f})\leq B({\pmb{\gamma}}^{(k)})$, and $B\!:=B({\pmb{\gamma}}^{(k)})$ if $B({\pmb{\alpha}_f})>B({\pmb{\gamma}}^{(k)})$.
Thus $\alpha(y)=\lambda_B(y)+ay^e+\cdots$, $\frac{d}{du}I^B_f(a)\not=0$. Let us consider the mapping $$\tau : (u,v,t)\mapsto
(x,y,t)\!:=(\lambda_{B_t}(v)+uv^e,v,t),\; u\in {\mathbb{R}},\;0\leq v
<\varepsilon, \; t\in I,$$ $B_t$ the deformation of $B$, and the liftings $\vec{\nu}_j^+\!:=(d\tau)^{-1}(p_j\vec{v}_j^+)$, $\vec{\nu}^+\!:=\sum_{j=1}^r\vec{\nu}_j^+.$
\[key\] The lifted vector fields $\vec{\nu}_j^+$, and hence $\vec{\nu}^+$, are analytic at $(u,v,t)$, if $u$ is not a multiple root of $I^{B_t}_{f_t}$. Moreover, $\vec{\nu}^+(u,0,t)$ is analytic for all $u\in {\mathbb{R}}$; that is, $\lim_{v\rightarrow
0^+}\vec{\nu}^+(u,v,t)$ has only removable singularities on the $u$-axis.
We analyze each $\vec{\nu}_i^+$, using (\[V\]). For brevity, write ${\mathbb{B}}\!:=B({\pmb{\gamma}}^{(i)})$, ${\mathbb{B}}_t\!:=B({\pmb{\gamma}}^{(i)}_t)$.
First, consider the case $B={\mathbb{B}}$. This case exposes the main ideas.
Now $I^B_f$ and ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$ are related as follows. Let $W(X,Y)=\sum_{i,j}a_{ij}X^iY^{j/d}$ be the (unique) weighted form such that $W(u,1)=I^B_f(u+c)$, $w(X)=h(B)$, $w(Y)=1$, where $c$ is the canonical coordinate of ${\pmb{\gamma}}^{(i)}$. The Newton dots on the highest compact edge of ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$ represent the non-zero terms of $W(X,Y)$; the highest vertex is $(0,L_f(B))$.
Thus $\frac{d}{du}W(0,1)=\frac{d}{du}I^B_f(c)=0$, $W(0,1)\not=0$. The weighted degree of $W(X,Y)$ is $L_f(B)$.
Hence, by Lemma (\[Euler\]), the substitution $X=x-\lambda_B(y)-cy^{h(B)}=(u-c)v^{h(B)}$, $Y=v$, yields ${\mathcal{O}}_v(|X\mathcal{F}_X|+|Y\mathcal{F}_Y|)= L_f({\mathbb{B}})$, if $u-c$ is not a multiple root of $W(u,1)$.
The Newton Polygon is independent of $t$: ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})={\mathbb{P}}(f_t,{\pmb{\gamma}}_t^{(i)})$. All Newton dots of ${\mathcal{F}}$, and hence those of $\mathcal{F}_T$, are contained in ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$. Hence ${\mathcal{O}}_v(\mathcal{F}_T((u-c)v^{h(B)},v;T))\geq L_f(B)$.
By the Chain Rule, we have $X\frac{\partial}{\partial
X}=(u-c)\frac{\partial}{\partial u}$, $Y\frac{\partial}{\partial
Y}=v\frac{\partial}{\partial v}-h(B)(u-c)\frac{\partial}{\partial
u}$.
It follows that $(d\tau)^{-1}(\vec{v}_i^+)$ and $\vec{\nu}_i$ are analytic at $(u,v,t)$, if $u$ is not a multiple root of $I^{B_t}_{f_t}$.
Next, suppose $B<{\mathbb{B}}$. Again we show $(d\tau)^{-1}(\vec{v}_i^+)$ has the required property.
Write $\gamma^{(i)}(y)\!:=\lambda_B(y)+c^{\,\prime}y^{h(B)}+\cdots$. Let $W(X,Y)$ denote the weighted form such that $W(u,1)=I^B_f(u+c^{\,\prime})$, $w(X)=h(B)$, $w(Y)=1$.
If $W(X,Y)$ has more than one term, its terms are dots on a compact edge of ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$, not the highest one. If $W(X,Y)$ has only one term, it is a vertex, say $(\bar{m}, \bar{q})$, $\bar{m}\geq 2$.
In either case, $u=0$ is a multiple root of $W(u,1)$. All Newton dots of $\mathcal{F}_T$ are contained in ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$. The rest of the argument is the same as above.
Finally, suppose $B\not \leq{\mathbb{B}}$. Here $p_i$ plays a vital role in analyzing $\vec{\nu}_i^+$.
Let $\bar{B}$ denote the largest bar such that $B>\bar{B}\leq {\mathbb{B}}$.
Let $U\!:=x-\lambda_{B_t}(y)$, $V\!:=y$. The identity $p_i={p_kq_i}/{q_k}$, and the Chain Rule yield $$p_i\cdot X\frac{\partial}{\partial X}=p_k\frac{(U+\varepsilon)^2}
{(U+\delta)^2}(U+\delta)\frac{\partial}{\partial U},\quad p_i\cdot
Y\frac{\partial}{\partial
Y}=p_k\cdot\frac{(U+\varepsilon)^2}{(U+\delta)^2}[V\frac{\partial}{\partial
V} -V\delta^{\prime}(V)\frac{\partial}{\partial U}],$$ where $\delta\!:=\delta(y,t)\!:=\lambda_{B_t}(y)-\gamma_t^{(i)}(y)$, $\varepsilon\!:=\lambda_{B_t}(y)-\gamma^{(k)}_t(y)$, ${\mathcal{O}}_y(\delta)=h(\bar{B})<h(B)\leq {\mathcal{O}}_y(\varepsilon)$.
The substitution $U=uv^{h(B)}$, $V=v$ lifts both to analytic vector fields in $(u,v,t)$.
It remains to study $\Psi\!:=\mathcal{F}_T/(|X\mathcal{F}_X|+|Y\mathcal{F}_Y|)$ when $X=\delta(v,t)+uv^{h(B)}$, $Y=v$.
Let $\mathcal{G}(U,V,T)\!:=\mathcal{F}(U+\delta(V,T),V,T)$. The Chain Rule yields $$\label{Dominating}
X\mathcal{F}_X=(U+\delta)\mathcal{G}_U,\;
Y\mathcal{F}_Y=V(\mathcal{G}_V-\delta_V \mathcal{G}_U)
,\;\mathcal{F}_T=\mathcal{G}_T-\delta_T\mathcal{G}_U.$$ Let us compare ${\mathbb{P}}(f,{\pmb{\gamma}}^{(i)})$ and ${\mathbb{P}}(\mathcal{G}, U=0)$, the (usual) Newton Polygon of $\mathcal{G}$. Let $E^{\,\prime}_i$, $\theta_i^{\,\prime}$ and $V_i^{\,\prime}$ denote the edges, angles and vertices of the latter. Then $E_i=E_i^{\prime}$, for $1\leq i\leq l$, where $l$ is the largest integer such that $\tan
\theta_l<h(\bar{B})$. Moreover, $E^{\,\prime}_{l+1}$ may be different.
Consider the vertex $V^{\,\prime}_{l+1}\!:=(m_{l+1}^{\prime},q_{l+1}^{\prime})$, $m_{l+1}^{\prime}\geq 2$. It yields a term $\mu\!:=a(T)U^pV^q$ of $\delta \mathcal{G}_U$, $a(0)\not=0$, $p\!:=m_{l+1}^{\prime}-1$, $q\!:=q_{l+1}^{\prime}+\tan \theta_{l+1}$. With the substitution $U=uv^{h(B)}$, ($u\not=0$,) $V=v$, $\mu$ is the dominating term in (\[Dominating\]). That is, ${\mathcal{O}}_v(\mu)<{\mathcal{O}}_v(\mu^{\prime})$, for all terms $\mu^{\prime}$ in $U\mathcal{G}_U$, $V\mathcal{G}_V$, etc., (and for all terms $\mu^{\prime}\not=\mu$ in $\delta
\mathcal{G}_U$), since ${\mathcal{O}}_Y(\delta)=\tan\theta_{l+1}$.
It follows that $\Psi$ is analytic. That $\lim \vec{\nu}^+_i$ has only removable singularities also follows.
Conditions $(b.2)$ etc. can be derived from the Key Lemma.
|
---
abstract: 'We analyse the identified hadron multiplicity predictions of the modified thermodynamical model of multiparticle production processes with non-extensive statistics. The replacement of the standard Boltzmann exponential factor by the much more slowly falling Tsallis one is suggested by the analysis of the transverse momentum distributions measured at high energies. The increase of high transverse momenta should be accompanied by an increased abundance of heavy secondary particles, in particular multistrange baryons. The introduction into the thermodynamical model of suppression factors similar to those used in quark jet fragmentation models is discussed.'
author:
- 'T. Wibig'
date: 'Received: date / Accepted: date'
title: 'Constraints for non-standard statistical models of particle creation from identified hadron multiplicity results at LHC energies'
---
Introduction
============
The identified hadron ratios have been measured by all the LHC detectors, and the results have been compared with the high-energy event generators available on the market [@atlas; @lhcb; @aliceA; @aliceE; @aliceB]. The comparison, in general, is not very satisfactory.
In the present paper we use data from the ALICE experiment, obtained in $pp$ interactions at $\sqrt{s}=7$ TeV [@aliceA; @aliceE; @aliceB; @omegadopi; @aliceF; @aliceG], to test the particle creation description based on the thermodynamical approach.
The standard statistical picture is known to work well in the soft, low-$\pt$, sector of the particle creation process, where an exponential fall of the transverse momentum distribution is observed. Hard inelastic scattering leads to quark jet fragmentation with power-law transverse momentum (transverse mass) distributions. Detailed studies of the measured charged particle transverse momentum (transverse mass) distributions suggested already some time ago that a very good description of the invariant differential cross section in the whole transverse momentum range can be obtained with “an empirical formula inspired by QCD” from [@hage1983] $$\label{qcdinpired}
E~{ {\rm d ^3} \sigma \over {\rm d} p ^3} ~=~ A\: \left( \frac {p_0} {\pt+p_0} \right) ^{n}$$ (see, e.g., [@wongwilk] for further discussion and references). It has been shown [@twkurp] that not only does the fit of the simple form of Eq.(\[qcdinpired\]) work well, but the whole theoretical model of particle creation which stands behind it can be successfully applied to the highest available energy data on charged particle transverse momenta [@jpg-q].
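The low-$\pt$/high-$\pt$ behaviour of Eq. (\[qcdinpired\]) is easy to verify numerically. The short sketch below is illustrative only: the parameter values are assumptions chosen so that the effective temperature $T=p_0/n$ is close to the one used later in the text, not the fitted values of [@twkurp; @jpg-q].

```python
import math

# Illustrative sketch (not the fit of the text): the QCD-inspired power law
# of Eq. (1) behaves like exp(-n*pT/p0) at low pT, i.e. like a Boltzmann
# factor with effective temperature T ~ p0/n, but falls much more slowly
# at high pT.  Parameter values below are assumptions, GeV units.
A, p0, n = 1.0, 1.3, 8.5
T = p0 / n            # effective temperature, ~0.153 GeV

def power_law(pt):
    return A * (p0 / (pt + p0)) ** n

def exponential(pt):
    return A * math.exp(-pt / T)

for pt in (0.1, 0.5, 2.0, 10.0):
    print(pt, power_law(pt), exponential(pt))
```

At low $\pt$ the two forms nearly coincide, while at $\pt=10$ GeV the power law is larger by many orders of magnitude.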
The model parameters found in [@jpg-q] define the occupation of phase space for a given charged particle transverse momentum. If the picture is self-consistent, the same set of parameters should give correct yields of the different kinds of created particles. It is known that the multiplicities of newly created heavy particles are described to some extent by the Boltzmann statistical model (e.g., [@redlich; @bacatt97]). The Tsallis modification undoubtedly enhances the high-$\pt$ tail and, obviously, the abundances of high transverse mass particles. This should lead to an overabundance of heavy particles. We look for a possibility to suppress this effect in a consistent way, and check whether satisfactory results can be obtained.
Thermodynamical model
=====================
The thermodynamical picture was the first, and quite successful, attempt to describe the particle creation process in hadronic collisions. An elaborated and complete theory was presented in a series of papers by Hagedorn (see [@hage] and references therein). The idea of the fireball, together with the proposition that “all fireballs are equal”, gives substantial predictions concerning the produced particle spectra.
One of the predictions was that the temperature of the “hadronic soup” (precisely defined) could not exceed a universal constant $T_0$ of the order of 160 MeV. This value comes not as a result of a parameter-adjustment procedure using multiparticle production (e.g., transverse momentum) data, but from an examination of the elementary particle mass spectrum.
The Hagedorn theory was abandoned for some time, when more sophisticated, jet or QCD based ideas appeared [@feynman]. One of the reasons was the failure of the high transverse momentum description. The temperature of the fireball is defined as the parameter in the classical Boltzmann exponential term of the probability weights for the phase space average occupation numbers. This gives the (asymptotic) form of the distribution of the transverse momentum of particles created from decaying fireballs. It was found that at high and very high interaction energies the predicted exponential fall does not agree with the observed high-$p_\bot$ behaviour. The success of the QCD based description of hard processes gave deep insight into the nature of the physics involved, and the belief that QCD is just the right theory of strong interactions made the thermodynamical approach look like a very approximate, simple and naive tool of limited applicability, and thus of limited significance. On the other hand, the simplicity of the theory and the persistent lack of an effective QCD theory of soft hadronization processes give hope that the fireball idea can be enriched, modified, and become important again.
The Hagedorn idea was used again to describe the identified particle multiplicities in hadronization, both in $e^+e^-$ annihilation and in hadronic collisions. The [*grand canonical*]{} formalism of Hagedorn was replaced, in a series of papers by Becattini and co-workers [@becattiniheinz], by the [*canonical*]{} one, more relevant for studies of small systems like primarily created fireballs, for which the requirement of exact conservation of some quantum numbers seems important.
In general, thermodynamics of the system is determined by the partition function which can be written as $$\Z \left( Q^0 \right) ~=~\sum \limits_{
{Q}} \delta(Q-Q^0)
\: \prod \limits_{j,k} \p_{jk}^{\nu _{jk}} \ ,
\label{z0}$$
where $\p$ is the classical Boltzmann factor, $j$ and $k$ enumerate particle types and momentum cells, $Q^0$ is the initial fireball quantum number vector, $Q$ is the respective vector of the particular state, and $\nu_{jk}$ is the occupation number. Introducing the Fourier transform of the $\delta$ (and reducing the vector $Q$ to three dimensions: charge, baryon number and strangeness), Eq.(\[z0\]) becomes $$\begin{aligned}
\Z\left(Q^0\right)~=~{1 \over (2 \pi)^3}\:
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
{\rm d}^3 \phi
\ {\rm e}^{iQ^0\phi} \times \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\
\ \ \ \ \ \ \
\times \exp \left\{
\sum _{j=1}^{n_B} \:w_j
\: \sum \limits_{k} \log \left( 1- \p_{jk} {\rm e}^{-iq_j\phi}\right)^{-1} \:
\ + \ \right.\nonumber \\ + \left. \
\sum \limits_{j=1}^{n_F} \:w_j
\: \sum \limits_{k} \log \left( 1+ \p_{jk} {\rm
e}^{-iq_j\phi} \right) \:\right\} \label{z1}
,
\label{zq0}\end{aligned}$$ where $q_{j}$ is the quantum number vector of the particle $j$ and $w_j$ is the weight factor associated with particles of type $j$. The first guess is that it should be equal to $(2 J_j + 1)$ and count spin states. However, this does not seem to be so simple (see, e.g., [@jetset]): other solutions, introducing factors responsible for some wave-function normalization which disfavour heavier states, were found to be preferred by the measurements. We will discuss this point later on.
With Eq.(\[zq0\]) we are ready for detailed numerical calculations.
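As an aside, the role of the angular integrals in Eq. (\[zq0\]), which enforce exact conservation of the quantum numbers, can be illustrated with a toy model: a single conserved charge, classical Boltzmann statistics, and assumed single-particle partition functions $z_\pm$ for the charge $\pm 1$ species. In that case the projected partition function has the known closed form $(z_+/z_-)^{Q^0/2} I_{Q^0}(2\sqrt{z_+z_-})$, which the numerical integral reproduces.

```python
import cmath, math

def z_canonical(Q0, zp, zm, steps=4000):
    """Project the grand partition function onto fixed net charge Q0:
    Z(Q0) = (1/2pi) int_0^{2pi} dphi e^{i Q0 phi}
            * exp(zp e^{-i phi} + zm e^{+i phi}),
    where zp (zm) is the single-particle partition function of the
    charge +1 (-1) species; Boltzmann statistics assumed."""
    s = 0.0 + 0.0j
    for i in range(steps):
        phi = 2.0 * math.pi * (i + 0.5) / steps
        s += cmath.exp(1j * Q0 * phi + zp * cmath.exp(-1j * phi)
                       + zm * cmath.exp(1j * phi))
    return (s / steps).real

# For zp = zm = 1 the exact answer is the modified Bessel function
# I_Q0(2): I_0(2) ~ 2.2796 and I_1(2) ~ 1.5906.
print(z_canonical(0, 1.0, 1.0), z_canonical(1, 1.0, 1.0))
```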
Average multiplicities
----------------------
With the known partition function $\Z$ the average characteristics of the system can be obtained in the usual way. For the average multiplicity we have $$\begin{aligned}
\langle n_j \rangle~=~
w_j\ { V \over \left(2\pi \right)^3 }\:
\:{1 \over \left(2\pi \right)^3}
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
\d^3 \phi \: \times \ \ \ \ \ \ \ \ \nonumber \\
\times \int \d ^3 p
\:\left[ \e^{E/T}\:\e^{i \:\q_j \phi} \pm 1 \right]^{-1}~,
\label{mult1}\end{aligned}$$ where the upper sign is for fermions and the lower one for bosons. Because the $\e^{-E/T}$ factor is expected to be small (for all particles except pions), we have $$\begin{aligned}
\langle n_j \rangle~
\approx~
{\Z(\Q^0 -\q_j) \over \Z(\Q^0)}
\:w_j
{ V \over \left(2\pi \right)^3 }\:
\int \d ^3 p
\:\e^{-E/T}~.
\label{mult2}\end{aligned}$$
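As a minimal numerical sketch of Eq. (\[mult2\]), dropping the chemical factor $\Z(\Q^0-\q_j)/\Z(\Q^0)$, the weight $w_j$ and the $V/(2\pi)^3$ prefactor, and using standard meson masses with an illustrative temperature, the Boltzmann suppression of a heavier species relative to a lighter one can be computed directly:

```python
import math

def boltzmann_momentum_integral(m, T, pmax=5.0, steps=20000):
    """4*pi * int_0^pmax p^2 exp(-sqrt(p^2 + m^2)/T) dp  (GeV units):
    the momentum integral of Eq. (mult2) without the chemical factor,
    the weight w_j and the V/(2pi)^3 prefactor; simple midpoint rule."""
    dp = pmax / steps
    s = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp
        s += p * p * math.exp(-math.sqrt(p * p + m * m) / T)
    return 4.0 * math.pi * s * dp

T = 0.150                    # temperature in GeV (illustrative)
m_pi, m_K = 0.140, 0.494     # pion and kaon masses in GeV

# The heavier kaon is strongly Boltzmann-suppressed relative to the pion.
print(boltzmann_momentum_integral(m_K, T) / boltzmann_momentum_integral(m_pi, T))
```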
The conventional Boltzmann-Gibbs description shown above can, in principle, be modified to allow the description of systems of not-completely-free particles: the correlation “strength”, however defined, is introduced with the help of a new [*non-extensivity*]{} parameter and a new statistics, which in this case has to be [*non-extensive*]{} as well. In the limit of the absence of correlations the new description approaches the Boltzmann form.
There could be infinitely many “generalized” statistics which fulfill such requirements. We choose one which is simple and has a well-defined theoretical background. In the present paper we test the possibility, proposed by Tsallis [@Tsallis:1988eu], based on the replacement of the classical entropy definition $$S_{\rm BG}~=~-k \sum \limits_{i}^{W} \:\p_i\ln \p_i
\label{entrobg}$$ by the new one $$S_{q}~=~k ~{1 \over q-1}\left({1-\sum \limits_{i}^{W} \:\p_i^q }\right)~
\label{entrots}$$ with the new parameter $q$ called the non-extensivity parameter. This modification has been adopted in other physical applications (see, e.g., [@beck]).
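A quick numerical check with an arbitrary toy distribution confirms that the Tsallis entropy of Eq. (\[entrots\]) reduces to the Boltzmann-Gibbs entropy of Eq. (\[entrobg\]) as $q \to 1$:

```python
import math

def s_bg(p):
    # Boltzmann-Gibbs entropy, Eq. (entrobg), with k = 1
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def s_q(p, q):
    # Tsallis entropy, Eq. (entrots), with k = 1 and q != 1
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

probs = [0.5, 0.3, 0.2]   # toy distribution (any normalized one works)
for q in (1.5, 1.1, 1.01, 1.0001):
    print(q, s_q(probs, q))
print("BG limit:", s_bg(probs))
```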
Maximization of the entropy with the total energy constraint leads to the probability of realization of the state $i$ (with energy $E_i$) given by
$$\p_i~=~{1\over \Z_q}\:
\left[1-(1-q)\,(E_i-E_0)/T_q \right]^{1/(1-q)}~,
\label{peq0}$$
where $\Z_q$ is the normalization constant related to $\Z(\Q^0)$ of Eq.(\[z0\]), in which the Boltzmann terms are replaced by probabilities of the form of Eq.(\[peq0\]).
Eq. (\[peq0\]) can be rewritten by introducing a new symbol, $\eq$, defined as $
\eq^{x} =
\left[ 1+(1-q)x\right]^{1/(1-q)}
$ $$\p_i~\sim~
\eq ^{ -E_i/T_q}
\label{peq}$$ and the modified partition function can be written then in the form $$\Zq(\Q)=\sum \limits_{\rm states}
{w_j \over \left(2\pi \right)^3}
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
\int \limits_{0}^{2\pi}
\d^3 \phi \:
\eq^{-E/T} \:
\e^{i (\Q_0-\Q)\:\phi} .
\label{zqdef}$$ Eq.(\[mult2\]) with this modification of the partition function gives the abundances of the particles initially created in the hadronization process described by the modified, non-extensive statistics.
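Numerically, the difference between the Tsallis weight $\eq^{-E/T}$ and the Boltzmann factor is striking already at moderate energies; the sketch below uses $T=150$ MeV and $q=1.12$, the values adopted later in the text.

```python
import math

def exp_q(x, q):
    """q-exponential e_q^x = [1 + (1-q)x]^(1/(1-q)); reduces to exp(x)
    as q -> 1, and for q > 1, x < 0 falls off as a power law."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

T, q = 0.150, 1.12   # GeV; parameter values used in the text
for E in (0.5, 1.0, 3.0):   # energies in GeV
    print(E, exp_q(-E / T, q), math.exp(-E / T))
# The Tsallis weight increasingly dominates the Boltzmann one at high E.
```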
Results
=======
We have evaluated the $\Zq$ functions (and $\Z=\Z_1$ for the standard Boltzmann thermodynamics) for a variety of values of the thermodynamical model parameters $T$ and $V$, and for a number of $\Q$ values covering the production of over 100 hadrons with masses below 2 GeV/c$^2$. All decays of short-lived particles were then taken into account.
Measurements of identified particle ratios, performed mostly by the ALICE Collaboration, give the opportunity to test the modified statistical model of particle flavour creation in a new, higher energy region. The lower energy jet hadronization results have been analyzed in a series of papers by Becattini and others (see, e.g., [@redlich; @bacatt97]). It has been shown that the micro-canonical Boltzmann description works well for $e^+e^-$ from $\sqrt{s}\approx$ 10 GeV [@becatpesa] to 91 GeV [@redlich] and for $pp$ and $p \bar p$ interactions up to SPS energies, $\sqrt{s}\approx$ 900 GeV [@bacatt97].
![Relative particle multiplicities, $f = \left( n_i/n_{ch}\right)$, obtained for the Tsallis statistics (with $T=150$ MeV, $V=100$ fm$^3$ and $q=1.12$) compared with the same ratios for the Boltzmann statistics (with $T=150$ MeV, $V=100$ fm$^3$ and $q=1$).[]{data-label="f1"}](fig01.eps){width="7.3cm"}
The comparison of the results obtained with the Boltzmann statistics (non-extensivity parameter $q=1$) and with the Tsallis statistics for $q = 1.12$, the value obtained in Refs. [@twkurp; @jpg-q], is shown in Fig. \[f1\]. We can see a clear enhancement of the exponential ’tail’ for the modified statistics model. The non-extensivity parameter values recently adjusted to high energy data [@cleymans] are 1.1–1.15 ($\sim1.17$ in Ref. [@wongwilk]). We show the effect of a change of the non-extensivity parameter in Fig. \[f2\]. The difference between the Boltzmann and Tsallis statistics results is given for three values of $q$: 1.10, 1.12, and 1.15. The biggest difference is seen for $\Omega$. For lighter particles, even for $\Xi$, the effect of a small change of $q$ is not significant. Concluding, we can say that the relative particle multiplicities are not the appropriate observable for adjusting the non-extensivity parameter.
![Enhancement of the relative particle multiplicity obtained for the Tsallis statistics (with $T=150$ MeV, $V=80$ fm$^3$) with respect to the standard Boltzmann model, for non-extensivity parameter values equal to 1.10, 1.12, and 1.15 (squares, circles and triangles, respectively).[]{data-label="f2"}](fig02.eps){width="7.3cm"}
We have also tested the particle multiplicity dependence on the thermodynamical hadronization parameters $T$ and $V$. The effect seems to be almost negligible within the limits of possible changes allowed by the data on transverse momentum distributions and total multiplicities. However, the effect of the volume $V$ has to be treated with care, because the spatial and temporal history of the hadronization process is not exactly known. It is expected that the canonical picture should take into account the multichain idea (e.g., [@twgmc; @becattiniheinz]) of decomposition of the ’hadronic soup’ into chains of independently hadronized objects/fireballs. For the Boltzmann statistics, by the definition of extensivity, the sum of many hadron sources is equivalent to one big source [@becatpesa]. This is, in general, not the case for the Tsallis non-extensive statistics. But we can say that the strength of the non-extensivity is still not large, and the effect of a subdivision of the hadronization volume does not change the conclusions about the identified particle ratios much. Another important point to be mentioned here is the effect of the canonical treatment of small fireballs, which leads to the suppression of strange quark (and diquark, or strange diquark) production, as was mentioned already in [@hage]. Additionally, the importance of the reaction volume in hadronic collisions in the canonical picture, especially for multistrange particles like $\Omega$, is discussed extensively in [@rafelski]. We have to say that changes of the hadronization volume do not act strongly on the total multiplicities. The possible small changes which we can study do not affect the particle ratios in a significant way. Detailed studies, however, are needed to answer all the questions here.
We can say that the thermodynamical model predictions are, in a sense, very robust. They cannot be adjusted to the measured ratios, at least with reasonable changes of the hadronization parameters $T$, $V$ and $q$. This situation is, on the other hand, very fortunate. Their comparison with the experimental results could be the [*experimentum crucis*]{} for the model in general.
We show the results for particle ratios in comparison with the ALICE Collaboration data in Tab. \[t1\]. The thermodynamical parameter values ($T$ and $q$) were taken from the literature, and $V$ was adjusted to reproduce roughly the charged particle multiplicities. We applied the simple counting of spin states to calculate the weight factor $w_i=(2J_i+1)$, and a strangeness suppression factor $\gamma_s = 0.5$ acting on the strange quark content of the particles.
[|cc|ccc|]{} particle ratio& ALICE measurement& $\begin{smallmatrix}\\ Boltzmann \\ \\ V=50\ {\rm fm}^3 \\T=160\ {\rm MeV}\\q=1.00\\ \\ \end{smallmatrix}$ & $\begin{smallmatrix}\\ Tsallis \\ \\ V=80\ {\rm fm}^3\\T=170\ {\rm MeV} \\q=1.12\\ \\ \end{smallmatrix}$ & $\begin{smallmatrix}\\ Tsallis \\ \\ V=80\ {\rm fm}^3\\T=150\ {\rm MeV} \\q=1.15\\ \\ \end{smallmatrix}$\
$\rho / \omega$ & $1.15 \pm 0.2 \pm 0.12^a$ &0.985& 0.855&0.848\
$\phi /(\rho + \omega) $&$0.084 \pm 0.013 \pm 0.012^a$ &0.042 &0.035&0.033\
$K^{*0} / K^- $ &$0.35 \pm 0.001 \pm 0.04^b$ &0.337 &0.466&0.466\
$\phi / K^{*0} $&$0.33 \pm 0.004 \pm 0.05^b$ &0.268 &0.215&0.207\
$\phi / \pi^{-}$&$0.014 \pm 0.0002\pm 0.002^b$ &0.0063 &0.0080&0.0077\
$\phi / K^{-}$&$0.11 \pm 0.001 \pm 0.02^b$ &0.090& 0.100 & 0.097\
$\omega/\pi^0$&$ 0.6 \pm 0.1^c$ & 1.36& 0.861 &0.704\
$\Omega /\Xi$&$0.067\pm 0.01^d$ & 0.068 &0.237 &0.240\
$\Omega /\phi$ &$0.04 \pm .008^e$ & 0.119 &0.362 &0.403\
$\eta / \pi^0$ &$0.1067 \pm 0.0259 \pm 0.0212^f$ & 0.206 &0.092 &0.081\
Problems can be seen when comparing the ALICE results with the predictions of the listed hadronization models. Some ratios, especially those involving strange and multi-strange hadrons, look unexplainable. As discussed above (Fig. \[f2\]), formal fits or readjustments of the model parameters ($T$, $V$ and even $q$) cannot help. It should be mentioned here again that the model parameters are related to other interaction properties measured extensively at the LHC, e.g., the transverse momentum distributions and total multiplicities, and their values are rather fixed. Any significant change of $V$, $T$ (and $T$ together with $q$) would disturb the fits made to the charged particle inclusive spectra, measured with very high accuracy and over a large range of transverse momentum space.
There is, however, in Eqs. (\[mult1\],\[mult2\]) the weight factor $w_j$, which gives us some hope and freedom to get closer to the data. The simple, obvious form $(2J+1)$ is, in general, modified, since a suppression of $K$ mesons with respect to non-strange mesons has been found experimentally. The general statement is that the strange phase space is not fully available for particle production, which can be realized by multiplying the partition function by a special factor for each strange valence quark in the particle in question.
The strangeness suppression factor is also one of the basic parameters in the jet fragmentation model introduced by Feynman and developed finally by the Lund group [@jetset]. In the Lund jet fragmentation process new hadrons appear when the colour field string stretched between quarks moving apart breaks via the production of a new pair of quarks (sometimes diquarks). If there is enough energy left, further breaks may occur, and eventually only on-mass-shell hadrons remain. The creation of a new quark-antiquark pair in the Lund model is a kind of quantum tunneling process, so it is expected that heavy quark creation is suppressed. It is usually assumed that $u : d : s \sim 1 : 1 : 0.3$ [@wroblewski]. Additionally, the $w_j$ weight factor is related to the spin states of the newly created hadrons: for mesons, pseudoscalar and vector states. The suppression here is not defined within the Lund fragmentation model. Counting the spin states gives a $1:3$ ratio, but in the JETSET model this ratio is effectively close to $1:1$, according to the ’tunnelling normalisation’.
The situation with baryon creation in the Lund model is much more complicated. The tunneling mechanism is also adopted here. We have the probability of string breakup via the diquark mode and the subsequent combination of a quark and a diquark. If we take into account the pop-corn mechanism of diquark breakups and the lack of general rules, the number of parameters to be adjusted to the data becomes comparable with the number of measured ratios to be used for this adjustment. The number of parameters describing the production of the baryons measured with good accuracy in the experiments at the LHC is at the moment higher than the number of such baryons itself [@jetset].
The Lund model, and in particular the JETSET hadronization generator, is also used by the PHOJET [@phojet] program package for recent theoretical examinations and comparisons of the LHC data description. Some parameters in PHOJET differ from the default Lund model values.
We first discuss the possibility of introducing the strangeness suppression factor $\gamma_s$ via $$w_j~ =~ (2 J\:+\:1) \times \left( \gamma_s\right)^{N_j}~~~~.
\label{eqwjgamma}$$ Here $N_j$ is the ’degree of strangeness’, which is, in fact, not yet defined. It should be related to the strange quark content of the particle $j$. Three possibilities are rather natural.
$$N_j~=~\left\{
{
\begin{array}{ll}
S & {\rm strangeness~of~the~particle~of~type~}j\\
n_s & {\rm number~of~strange~(or~antistrange)~valence~quarks~of~}j\\
n_{s \bar s} & {\rm number~of~}s \bar s{\rm ~pairs~involved~in~creating~the~particle~}j\\
\end{array}
}
\right.
\label{eq16}$$
The difference can be seen by comparing the $K$ and $\phi$ weights. For a direct $K$ the suppression factors are $\gamma_s$, $\gamma_s$, and $\gamma_s$ for the three possibilities in Eq.(\[eq16\]), respectively, while for a direct $\phi$ they are 1, $\gamma_s^2$, and $\gamma_s$ for the first, the second and the third possibility in Eq.(\[eq16\]). The actual situation is more complicated because of the effect of decays of heavy resonances. To see the final results, complete calculations have to be performed.
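The counting just described can be made explicit in a short sketch (direct $K$ and $\phi$ only, standard quark content, the Feynman value $\gamma_s=0.5$ mentioned below; decay feed-down is ignored):

```python
gamma_s = 0.5   # strangeness suppression factor (Feynman's value)

# (|S|, n_s, n_ssbar) for a direct K and a direct phi:
# K: strangeness 1, one s valence quark, one s-sbar pair needed;
# phi: strangeness 0, two s valence quarks, one s-sbar pair needed.
strange_content = {
    "K":   {"S": 1, "n_s": 1, "n_ssbar": 1},
    "phi": {"S": 0, "n_s": 2, "n_ssbar": 1},
}

for name, N in strange_content.items():
    factors = {c: gamma_s ** N[c] for c in ("S", "n_s", "n_ssbar")}
    print(name, factors)
# K   -> 0.5, 0.5, 0.5   (the three definitions of N_j coincide)
# phi -> 1.0, 0.25, 0.5  (1, gamma_s^2, gamma_s: the definitions differ)
```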
Some examples are given in Tab. \[t2\]. We show there only those ratios which are sensitive to the choice of $N_j$ in Eq. (\[eq16\]).
particle ratio ALICE measured ratios $n_{s}$ $n_{s \bar s}$ $S$
-------------------------- -------------------------------- --------- ---------------- -------
$\phi /(\rho + \omega) $ $0.084 \pm 0.013 \pm 0.012^a$ 0.0360 0.0718 0.142
$\phi / K^{*0} $ $0.33 \pm 0.004 \pm 0.05^b$ 0.219 0.440 0.751
$\phi / \pi^{-}$ $0.014 \pm 0.0002\pm 0.002^b$ 0.008 0.016 0.030
$\phi / K^{-}$ $0.11 \pm 0.001 \pm 0.02^b$ 0.101 0.186 0.287
$\Omega /\phi$ $0.04 \pm .008^e$ 0.329 0.135 0.082
It can be seen from the calculated ratios shown in Tab. \[t2\] that the case $N_j=n_{s}$ works well for strange mesons, and with the simple $(2J+1)$ factor the results are not far from the measurements. The multiplicity of $\phi$ is crucial here, as could be expected.
Further model ’fine tuning’ can involve the adjustment of the value of the strangeness suppression factor $\gamma_s$. We did not, however, go very far. We checked three values which have been used in the literature: 0.5, originally proposed by Feynman in the jet fragmentation model [@feynman] and still in use [@becattiniheinz]; 2/3, used successfully by Becattini [@becattni066] for the $e^+e^-$ data and the $p \bar p$ results from the SPS; and $\gamma_s=1$ as the limit of no strangeness suppression. Some results for the $n_{s}$ choice in Eq.(\[eq16\]) and spin-state counting $w_i=(2J_i+1)$ are shown in Tab. \[t3\] ($T=150\ {\rm MeV}$, $V=80\ {\rm fm}^3$, $q=1.12$).
particle ratio ALICE results $\gamma_s=1$ $\gamma_s=2/3$ $\gamma_s=1/2$
-------------------------- ---------------------------------- -------------- ---------------- ----------------
$\phi /(\rho + \omega) $ $0.084 \pm 0.013 \pm 0.012^a$ 0.137 0.062 0.036
$\phi / K^{*0} $ $0.33 \pm 0.004 \pm 0.05^b$ 0.433 0.289 0.219
$\phi / \pi^{-}$ $0.014 \pm 0.0002\pm 0.002^b$ 0.023 0.013 0.008
$\phi / K^{-}$ $0.11 \pm 0.001 \pm 0.02^b$ 0.186 0.131 0.101
$\omega/\pi^0$ $ 0.6 \pm 0.1^c$ 0.787 0.857 0.894
$\Omega /\Xi$ $0.067\pm 0.01^d$ 0.443 0.303 0.233
$\Omega /\phi$ $0.04 \pm .008^e$ 0.585 0.416 0.329
$\eta / \pi^0$ $0.1067 \pm 0.0259 \pm 0.0212^f$ 0.084 0.094 0.161
The agreement for $\gamma_s=2/3$ seems slightly better than for 1/2. The no-suppression case ($\gamma_s=1$) is, in general, the worst.
A discrepancy still exists in the ratios involving the baryons $\Omega$ and $\Xi$. As has been said, there is a great degree of freedom in modifying the created baryon multiplicities. The diquark suppression factor $\gamma_{qq}$ is the one possibility which we used; another factor, $\gamma_{ss}$, related to the creation of a doubly strange diquark, was introduced specially for $\Omega$ baryons.
Our final results for the ratios of particle multiplicities, calculated with the strangeness suppression factor $\gamma_s=2/3$ and the extra diquark suppressions $\gamma_{qq}=\gamma_{ss}=1/2$, are shown in Tab. \[t4\] ($T=150\ {\rm MeV}$, $V=80\ {\rm fm}^3$ and $q=1.12$).
particle ratio ALICE results calculated
-------------------------- ---------------------------------- ------------
$\rho / \omega$ 1.15 $\pm$ 0.2 $\pm 0.12^a$ 0.867
$\phi /(\rho + \omega) $ $0.084 \pm 0.013 \pm 0.012^a$ 0.062
$K^{*0} / K^- $ $0.35 \pm 0.001 \pm 0.04^b$ 0.453
$\phi / K^{*0} $ $0.33 \pm 0.004 \pm 0.05^b$ 0.302
$\phi / \pi^{-}$ $0.014 \pm 0.0002\pm 0.002^b$ 0.0147
$\phi / K^{-}$ $0.11 \pm 0.001 \pm 0.02^b$ 0.136
$\omega/\pi^0$ $ 0.6 \pm 0.1^c$ 0.872
$\Omega /\Xi$ $0.067\pm 0.01^d$ 0.103
$\Omega /\phi$ $0.04 \pm .008^e$ 0.045
$\eta / \pi^0$ $0.1067 \pm 0.0259 \pm 0.0212^f$ 0.105
: Ratios of particle multiplicities calculated with the strangeness suppression factor $\gamma_s=0.66$ and the extra diquark suppressions $\gamma_{qq}=\gamma_{ss}=0.5$, compared with the measurement results (description as in Tab. \[t1\]).\[t4\]
Because of the relatively limited amount of data, we do not wish at the moment to go further with ’tuning’ the suppression parameters ($\gamma_s$, $\gamma_{qq}$ and $\gamma _{ss}$). We would like to show the general possibility of improving the data description in the thermodynamical model by introducing diquark suppression factors. The additional suppression of heavy, strange baryons required by the modified thermodynamical model can be naturally realized this way.
With all the modifications described above, the $\chi^2$ for the values listed in Tab. \[t4\] drops from the enormous values (thousands) obtained for the predictions shown in Tab. \[t1\] to about 30. This value has a chance probability of $p=0.0004$, equivalent to a ’3.5 $\sigma$’ deviation. It is, in fact, still a disagreement, but it also gives hope of being reduced further with more sophisticated calculations and model improvements.
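The quoted correspondence between $p=0.0004$ and a 3.5 $\sigma$ deviation can be checked independently from the two-sided Gaussian tail probability (a consistency check of the numbers only, not a refit):

```python
import math

# Two-sided Gaussian tail probability for a 3.5 sigma deviation:
# p = erfc(3.5 / sqrt(2)) ~ 4.7e-4, consistent with the quoted p = 0.0004.
p = math.erfc(3.5 / math.sqrt(2.0))
print(p)
```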
{width="7.3cm"} {width="7.3cm"}
{width="7.3cm"} {width="7.3cm"}
The ALICE Collaboration has also published data showing particle ratios as a function of the particle transverse momentum. Taking into account that the modification of the statistics of the multiparticle production process was developed primarily for the transverse momentum description, this kind of data could be valuable for verifying the model. A comparison of our final model predictions and the data is shown in Fig. \[f5\]. The solid lines represent predictions of the discussed modified statistical model with the final suppressions as presented in Tab. \[t4\]: $\gamma_s = 0.66$ and both diquark suppression factors $\gamma_{qq}=\gamma_{ss}=0.5$. The Boltzmann statistics results (dotted lines) are also given for comparison. As seen in Fig. \[f5\], the standard statistics does not work very well for the LHC ALICE data shown, and neither does the modified, Tsallis statistics without the additional diquark suppression (dashed lines). Only introducing the diquark suppression effect, with our chosen first-guess values of the suppression factors, reproduces the data better. This is of course the effect of adding two new parameters and adjusting the model to match the points, but the question whether a similar modification of the standard Boltzmann picture would give a similar result is still open.
Conclusions
===========
The modified thermodynamical model parameters found by analyzing the transverse momentum distributions measured at 7 TeV were used, without any re-adjustment, for the calculations of identified particle multiplicities, together with the standard strangeness suppression factor $\gamma_s$ of about 2/3 and additional suppressions of diquark and strange diquark production. We have shown that the introduction of non-extensive statistics into the thermodynamical theory of multiparticle production in hadronic collisions opens an interesting possibility for the description of the hadronization process.
[99]{}
G. Aad [*et al.*]{} (ATLAS Collaboration), Phys. Rev. D[**85**]{}, 012001 (2012).
R. Aaij [*et al.*]{} (LHCb Collaboration), Eur. Phys. J. C[**72**]{}, 2168 (2012).
B. Abelev [*et al.*]{} (The ALICE Collaboration), Eur. Phys. J. C[**72**]{}, 2183 (2012).
B. Abelev [*et al.*]{} (The ALICE Collaboration), Phys. Lett. B[**717**]{}, 162 (2012).
B. Abelev [*et al.*]{} (The ALICE Collaboration), Phys. Lett. B[**712**]{}, 309 (2012).
D. Peresunko [*et al.*]{} (The ALICE Collaboration), arXiv:1210.5749 (2012).
B. Abelev [*et al.*]{} (The ALICE Collaboration), Phys. Lett. B[**710**]{}, 557 (2012).
R. Preghenella [*et al.*]{} (The ALICE Collaboration), Acta Physica Polonica B[**43**]{}, 555 (2012).
R. Hagedorn, Riv. Nuovo Cimento [**6**]{}, 1 (1983).
C.Y. Wong and G. Wilk, Phys. Rev. D[**87**]{}, 114007 (2013).
T. Wibig and I. Kurp, JHEP [**12**]{}, 039 (2003).
T. Wibig, J. Phys. G: Nucl. Part. Phys. [**37**]{}, 115009 (2010).
K. Redlich, A. Andronic, F. Beutler, P. Braun-Munzinger, J. Stachel, J.Phys.G [**36**]{} 064021 (2009).
F Becattini, J. Phys. G: Nucl. Part. Phys. [**23**]{}, 1933 (1997).
R. Hagedorn, Nucl. Phys. B [**24**]{}, 93 (1970); R. Hagedorn and K. Redlich, Z. Phys. C [**27**]{}, 541 (1985); R. Hagedorn, in: Hot Hadronic Matter: Theory and Experiment, J. Letessier et al., Eds., NATO ASI Series [**346**]{}, Plenum, New York (1995).
R.P. Feynman and R.D. Field, Nucl. Phys. B[**136**]{}, 1 (1978).
F. Becattini and U. Heinz, Z. Phys. C[**76**]{}, 269 (1997).
B. Andersson, G. Gustafson and C. Peterson, Z. Phys. C [**1**]{}, 105 (1979); T. Sjöstrand, LU-TP-95-20, CERN-TH-7112-93-REV, e-Print: hep-ph/9508391 (1995); H.U. Bengtson and T. Sjöstrand, Comput. Phys. Commun. [**46**]{}, 43 (1987).
C. Tsallis, J. Stat. Phys. [**52**]{}, 479 (1988).
Ch. Beck, Physica A [**286**]{}, 164 (2000); G. Wilk and Z. Włodarczyk, Eur. Phys. J. A[**40**]{}, 299 (2009); G. Wilk and Z. Włodarczyk, Cent. Eur. J. Phys. [**8**]{}, 726 (2010).
F. Becattini and G. Passaleva, Eur. Phys. J. C [**23**]{}, 551 (2002).
J. Cleymans, G.I. Lykasov, A.S. Parvan, A.S. Sorin, O.V. Teryaev, D. Worku, Physics Letters B, [**723**]{}, 351 (2013); J. Cleymans, Journal of Physics: Conference Series [**455**]{}, 012049 (2013).
T. Wibig, Phys. Rev. D[**56**]{}, 4350 (1997).
J. Rafelski and J. Letessier, J. Phys. G, [**28**]{}, 1819 (2002).
A. Wróblewski, Acta Phys. Pol. B[**16**]{}, 379 (1985).
S. Ritter and J. Ranft, Acta Phys. Polon. B[**11**]{}, 259 (1980); R. Engel, Univ. Siegen preprint 95-05 (1997); R. Engel and J. Ranft, Phys. Rev. D[**54**]{}, 4244 (1996).
F. Becattini, P. Castorina, A. Milov and H. Satz, Eur. Phys. J. C[**66**]{}, 377 (2010); J. Phys. G: Nucl. Part. Phys. [**38**]{} 025002 (2011).
|
---
abstract: 'We consider the clustering of Lennard-Jones particles by using an energetic connectivity criterion proposed long ago by T.L. Hill \[J. Chem. Phys. **32**, 617 (1955)\] for the bond between pairs of particles. The criterion establishes that two particles are bonded (directly connected) if their relative kinetic energy is less than minus their relative potential energy. Thus, in general, it depends on the direction as well as on the magnitude of the velocities and positions of the particles. An integral equation for the pair connectedness function, proposed by two of the authors \[Phys. Rev. E **61**, R6067 (2000)\], is solved for this criterion and the results are compared with those obtained from molecular dynamics simulations and from a connectedness Percus-Yevick like integral equation for a velocity-averaged version of Hill’s energetic criterion.'
author:
- 'Luis A. Pugnaloni'
- 'Guillermo J. Zarragoicoechea'
- Fernando Vericat
title: 'Cluster pair correlation function of simple fluids: energetic connectivity criteria'
---
Introduction
============
The concepts of clustering and percolation have been widely used in order to explain several phenomena in very diverse areas including Physics, Chemistry, Biology, Geology, Sociology and Economics. In particular, with reference to chemical physics, phenomena such as nucleation,[@Senger1] hydrogen bonding,[@Starr1] insulator–conductor, sol–gel and glass transitions [@Simon1; @Chen1; @Coniglio1; @Butler1; @Stanley1; @Grest1; @Wittmann1] as well as bridging in granular materials [@Pugnaloni1] are currently studied from this point of view. In all these cases, the system under study can be thought of as a collection of individuals (atoms, molecules, grains, etc.) that, with generality, we call particles. Most of the efforts have been based on lattice representations of the systems. The relative simplicity of lattice models allows for a wide variety of treatments, which extend from almost heuristic [@Sahimi2] to quite rigorous.[@Grimmett1] Whatever the treatment is, the concept of connectivity between the particles plays an important role.
Sometimes, however, a continuous description—where particles can occupy any point in a continuum phase space—is needed to reach a more realistic picture of the phenomena under consideration. In this context, the concept of connectivity has been generalized and adapted to describe clustering and percolation in continuum systems. The main ideas have been established in the pioneering works of Hill [@Hill1] and Coniglio *et al.*[@Coniglio2] Hill considers a partition of the whole system into subsystems of particles (the clusters) that satisfy some linking properties. The concept of cluster is thus directly related to the idea of bonded pairs. A bonded pair is a set of two particles that are linked by some direct mechanism. A cluster is then defined as a set of particles such that any pair of particles in the set is connected through a path of bonded pairs. We call these clusters *chemical clusters* to distinguish them from the non-pair-bonded clusters we have introduced in a previous work [@Pugnaloni2]—note, however, that this does not mean that clusters are necessarily formed through a true chemical bonding. A system is said to be in a percolated configuration if it contains a cluster that spans the system volume.
From Hill’s theory, we see that a connectivity criterion is needed in order to decide whether two particles are bonded or not. This connectivity criterion has to be defined in accordance with the phenomenon under study. [@comment1; @Chen1; @Pugnaloni3] In the search for stable atomic clusters, which mark the onset of a phase transition in a monatomic gas, Hill proposed a simple energetic criterion (HE): two particles are bonded if their relative kinetic energy is less than the negative of their relative potential energy.[@Hill1] In principle, this criterion takes into account the relative positions and velocities of the relevant pair of particles. For molecular fluids, the potential energy could in general depend not only on the distance between the two involved particles but also on the direction and magnitude of their position vectors and on their relative orientations.
From a theoretical point of view, a criterion that involves the velocity of the particles prevents the straightforward integration of the momenta in the partition function, which is the great advantage of classical statistical mechanics. To avoid this obstacle, Hill [@Hill1] himself has proposed a velocity-averaged (VA) version of his criterion giving effective potentials between bound and unbound particles.
The VA and the complete HE criteria have been used in computer simulations as well as in integral equations studies. For the VA criterion only the particle positions come into play, so it is suitable for both Monte Carlo (MC) and molecular dynamics (MD) calculations. With respect to the integral equations approach, Coniglio *et al.*[@Coniglio2] have obtained a connectedness Ornstein–Zernike (OZ) relationship for the pair connectedness function $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ (see also the review by Stell).[@Stell1] This function is proportional to the joint probability density of finding two particles belonging to the same cluster and at positions $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$, respectively. Therefore, by integrating $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$, the mean cluster size $S$ and the percolation density $\rho _{p}$—*i.e.* the value of $\rho $ for which $S(\rho )$ diverges—can be obtained. Since Coniglio’s theory deals only with the positions of the particles, the HE criterion cannot be implemented. Instead, the VA criterion was used by Coniglio *et al.* [@Coniglio2] to analytically calculate the percolation loci, for a potential made up of a hard core plus an attractive interaction, in a crude mean-field approximation.
It is worth mentioning that most of the theoretical studies [@Chiew1; @DeSimone1; @Laria1; @Carlevaro1] on connectivity and percolation in continuum systems based on Coniglio’s type equations have focused on the rather simple Stillinger connectivity criterion. [@Stillinger1] This criterion states that two particles are bonded if they are separated by a distance shorter than a given connectivity distance $d$. In this case, $d$ is an *ad hoc* parameter, which must be chosen on physical grounds. Although this criterion might be sensible in the study of certain insulator–conductor transitions, it is unrealistic regarding clustering in saturated vapors.
A general theory which is appropriate for bonding criteria involving both the momenta and positions of a pair of particles has been developed by two of us. [@Pugnaloni4] The main object in our theory is the pair connectedness function $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$ which is proportional to the joint probability density of finding two particles at positions $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$ with momenta $\mathbf{p}_{1}$ and $\mathbf{p}_{2}$, respectively, and belonging to the same cluster. This function also verifies an OZ-like relationship. In a previous paper [@Pugnaloni5] (hereafter denoted as I) we applied our general theory to study the complete HE criterion for the same model fluid considered by Coniglio *et al.* [@Coniglio2] under the VA criterion. We also used the same simple closure relation proposed by Coniglio *et al.* More recently, we have reported [@Zarragoicoechea1] the solution of our generalized connectedness OZ type relation for a Lennard–Jones fluid closed with a connectedness Percus–Yevick (PY) condition. We implemented a connectivity criterion which generalizes Stillinger’s criterion in that a lifetime $\tau $ for the bonds is required. [@Pugnaloni2]
In Ref. I we have also performed MD simulations of the Lennard–Jones fluid and have used both criteria (HE and VA) to define the clusters. We concluded that the VA criterion strongly overestimates percolation densities. We will partially revise these results here and will discuss some subtleties related to the identification of percolating clusters. Notice that MD simulations are convenient when the HE criterion is used to identify clusters since MC algorithms do not provide *per se* the velocities of the particles. [@comment2] It should be mentioned that the HE criterion has been considered in MD studies of small clusters and the critical percolation behavior of Lennard–Jones fluids by several authors. [@Soto1; @Soto2; @Campi1] It has been suggested that the percolation line—the line that separates the temperature–density phase diagram into percolated and non-percolated states—might be experimentally observable. [@Campi1; @Coniglio3] Moreover, cluster analysis based on this criterion seems to be useful in locating the gas–liquid coexistence curve. [@Campi1]
The main purpose of this work is to apply the generalized connectedness OZ type relationship closed with a connectedness PY condition for the Lennard–Jones fluid in order to obtain the pair connectedness function $g_{%
\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})$ for HE clusters and thus the related cluster pair correlation function:
$$g_{\text{HE}}^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2})=\int\rho(\mathbf{r}%
_{1},\mathbf{p}_{1})\rho(\mathbf{r}_{2},\mathbf{p}_{2})g_{\text{HE}%
}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})d%
\mathbf{p}_{1}d\mathbf{p}_{2}, \label{1}$$
where $\rho(\mathbf{r}_{1},\mathbf{p}_{1})$ is $N$ times the probability density of finding a particle at the phase space configuration $(\mathbf{r}%
_{1}$, $\mathbf{p}_{1})$.
We compare $g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ so obtained with the function $g_{\text{VA}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ for the VA criterion calculated using the integral equation that results when the OZ type relationship of Coniglio *et al.* [@Coniglio2] is closed with a PY-like condition. Both functions—$g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ and $g_{\text{VA}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$—are compared with the corresponding curves given by MD simulations.
The paper is organized as follows. In Sec. II we present the model system and the two connectivity criteria, i.e. HE and VA, that we will work with. We also use this section to discuss some aspects of the MD simulations. The continuum clustering theories suitable for each criterion are sketched in Sec. III. There, we briefly describe the integral equation for $g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$ and its solution following Lado’s orthogonal polynomials method. [@Lado1; @Zarragoicoechea1] Finally, in Sec. IV we compare the theoretical results with those obtained from simulations. We then summarize and give the conclusions.
Model system and energetic criteria
===================================
We consider a system of $N$ particles whose configurations are given by their positions and momenta $(\mathbf{r}_{i},\mathbf{p}_{i})$ ($i=1,...,N$). The canonical ($NVT$) ensemble will be used throughout. We assume that particles interact via the Lennard–Jones pair potential
$$v(r_{ij})=4\varepsilon\left[ \left( \frac{\sigma}{r_{ij}}\right)
^{12}-\left( \frac{\sigma}{r_{ij}}\right) ^{6}\right] , \label{2}$$
where $r_{ij}=\left\vert \mathbf{r}_{ij}\right\vert $ with $\mathbf{r}_{ij}=%
\mathbf{r}_{j}-\mathbf{r}_{i}$.
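As a concrete reference, the pair potential of Eq. (\[2\]) can be coded directly; the minimal sketch below uses reduced units ($\varepsilon =\sigma =1$ by default):

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential v(r) of Eq. (2)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)
```

The potential vanishes at $r=\sigma $ and reaches its minimum value $-\varepsilon $ at $r=2^{1/6}\sigma $.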
The clustering criteria are expressed in terms of the bond conditional probability density $P(\mathbf{r}_{i,j},\mathbf{p}_{i,j})$, that is, the probability density that two particles $i$ and $j$ are bonded under the condition that their positions and momenta are $(\mathbf{r}_{i},\mathbf{p}%
_{i})$ and $(\mathbf{r}_{j},\mathbf{p}_{j})$, respectively.
HE criterion
------------
Hill’s original criterion (HE) identifies clusters by defining: [@Hill1]
$$P_{\text{HE}}(\mathbf{r}_{i,j},\mathbf{p}_{i,j})=\left\{
\begin{array}{ll}
1 & \mathbf{p}_{i,j}^{2}/4m<-v(\mathbf{r}_{i},\mathbf{r}_{j})\text{ \ and \ }%
r_{i,j}\leq d \\
0 & \mathbf{p}_{i,j}^{2}/4m\geqslant -v(\mathbf{r}_{i},\mathbf{r}_{j})\text{
\ or \ }r_{i,j}>d%
\end{array}%
\right. \label{3}$$
with $\mathbf{p}_{i,j}$ the relative momentum: $\mathbf{p}_{i,j}=\mathbf{p}%
_{j}-\mathbf{p}_{i}$. A maximum connectivity distance $d$ has been added to the criterion in order to avoid unrealistic bonding.
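A minimal sketch of the bond test of Eq. (\[3\]), assuming the Lennard–Jones potential of Eq. (\[2\]) in reduced units and equal particle masses:

```python
import numpy as np

def he_bonded(r_i, r_j, p_i, p_j, m=1.0, d=3.0):
    """HE criterion, Eq. (3): particles i and j are bonded if their
    relative kinetic energy p_ij^2/(4m) is less than -v(r_ij),
    subject to the maximum connectivity distance d."""
    r_ij = np.linalg.norm(np.asarray(r_j) - np.asarray(r_i))
    if r_ij > d:
        return False
    p_ij = np.asarray(p_j) - np.asarray(p_i)
    sr6 = (1.0 / r_ij) ** 6
    v = 4.0 * (sr6 ** 2 - sr6)  # Lennard-Jones in reduced units
    return float(np.dot(p_ij, p_ij)) / (4.0 * m) < -v
```

For instance, a pair at the potential minimum with zero relative momentum is bonded, while the same pair with a large relative momentum is not.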
VA criterion
------------
By integrating the relative momenta weighted by the Maxwell distribution over the region where the relative kinetic energy is less than minus the pair potential, the momenta are eliminated and the VA bond conditional probability density is obtained: [@Hill1; @Coniglio2] $$P_{\text{VA}}(r_{i,j})=\left\{
\begin{array}{cc}
0 & v(r_{i,j})>0\text{ \ or \ }r_{i,j}>d \\
\gamma \lbrack 3/2,-v(r_{i,j})/k_{B}T]/\Gamma \lbrack 3/2] & v(r_{i,j})\leq 0%
\text{ \ and \ }r_{i,j}\leq d,%
\end{array}%
\right. \label{4}$$where $\Gamma \lbrack a]$ is the gamma function and $\gamma \lbrack a,x]$ is the incomplete gamma function.
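Equation (\[4\]) has a closed form in terms of the error function, since $\gamma \lbrack 3/2,x]/\Gamma \lbrack 3/2]=\operatorname{erf}(\sqrt{x})-2\sqrt{x}e^{-x}/\sqrt{\pi }$. A sketch in reduced units, again assuming the Lennard–Jones potential:

```python
import math

def va_bond_probability(r_ij, kT=1.0, d=3.0):
    """VA bond probability of Eq. (4), using the identity
    gamma(3/2, x)/Gamma(3/2) = erf(sqrt(x)) - 2*sqrt(x)*exp(-x)/sqrt(pi)."""
    if r_ij > d:
        return 0.0
    sr6 = (1.0 / r_ij) ** 6
    v = 4.0 * (sr6 ** 2 - sr6)  # Lennard-Jones in reduced units
    if v > 0.0:
        return 0.0
    x = -v / kT
    return math.erf(math.sqrt(x)) - 2.0 * math.sqrt(x) * math.exp(-x) / math.sqrt(math.pi)
```

Deeper wells (or lower temperatures) give bond probabilities closer to one, as expected for the velocity average.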
Molecular dynamics calculations
-------------------------------
We consider a system of Lennard–Jones particles in a cubic box with periodic boundary conditions in the $NVT$ ensemble and use a leap-frog algorithm with velocity correction. [@Allen1] The time step is chosen as $\Delta t^{\ast }=\Delta t\sigma ^{-1}\sqrt{k_{B}T/(\varepsilon m)}=0.01$. Quantities are averaged over $10^{3}$ configurations chosen every 100 $%
\Delta t$ after equilibration. A cutoff distance equal to $2.5\sigma $ was used in the pair potential.
In the VA case, for each pair of particles that satisfies $v(r_{i,j})\leq 0$ and $r_{i,j}\leq d$, we generate a random number $z$ between $0$ and $1$. If $z<\gamma \lbrack 3/2,-v(r_{i,j})/k_{B}T]/\Gamma \lbrack 3/2]$ we consider that the particles form a bonded pair, otherwise we do not. Note that this criterion can also be used in MC simulations because it does not require information on the particle velocities. To identify the clusters from the list of bonded pairs we use Stoddard’s algorithm. [@Allen1; @Stoddard1]
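The cluster decomposition step can be sketched with a simple union–find pass over the list of bonded pairs (a stand-in for Stoddard’s algorithm; function and variable names are ours):

```python
def clusters_from_bonds(n_particles, bonded_pairs):
    """Group particle indices 0..n_particles-1 into clusters,
    given the list of bonded pairs produced by the HE or VA test."""
    parent = list(range(n_particles))

    def find(i):
        # follow parent pointers to the root, halving the path as we go
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in bonded_pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj  # merge the two clusters

    clusters = {}
    for i in range(n_particles):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Because the VA bond test is stochastic, the random numbers used to build `bonded_pairs` must be stored (or the generator reseeded) whenever the same configuration is re-analyzed.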
A system is said to be in a percolated state if a cluster that spans the replicas is present 50 percent of the time. [@Seaton1] It is known that this criterion yields results that are marginally affected by finite size effects. [@Lee1] Then, a percolation transition curve, which separates the percolated from the non-percolated states of the system, can be drawn above the coexistence curve in the $T-\rho $ phase diagram.
In Fig. 1, the percolation loci for HE and VA connectivity criteria are presented for $d=3\sigma $. These simulations were performed with $N=1372$ particles. The gas–liquid coexistence curve obtained by Panagiotopoulos [@Panagio1] and the MC liquid–solid coexistence curve of Hansen and Verlet [@Hansen1] are also shown. The MD results for the HE criterion are similar to those obtained by Campi *et al.* [@Campi1] These authors consider that the system is on the percolation line if the second moment of the cluster size distribution that excludes the largest cluster $%
n^{\prime }(s)$ reaches its maximum. In Fig. 2 we show the dependence of the calculated percolation density on system size and connectivity distance $d$. Extrapolation to infinite systems can be obtained by fitting a straight line to a plot of $\rho _{p}$ versus $L^{-1/\nu }$. [@Seaton1] We have used the universal value of $\nu =0.88\pm 0.02$ reported by Gaunt and Sykes [@Gaunt1] for three-dimensional systems. The error due to finite size effects in the calculated value of $\rho _{p}$ for the $1372$-particle system is $1.0\%$. This correction is smaller than the size of our symbols in Fig. 1. Also from Fig. 2, we see that the effect of the connectivity distance $d$ on $\rho _{p}$ is negligible for $d>2.5\sigma $.
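The finite-size extrapolation just described amounts to a linear fit; a sketch with illustrative numbers (not the simulation data):

```python
import numpy as np

def extrapolate_percolation_density(L, rho_p, nu=0.88):
    """Extrapolate rho_p(L) to the infinite system by fitting a
    straight line to rho_p versus L**(-1/nu); the intercept is the
    L -> infinity limit."""
    x = np.asarray(L, dtype=float) ** (-1.0 / nu)
    slope, intercept = np.polyfit(x, np.asarray(rho_p, dtype=float), 1)
    return intercept
```

On synthetic data of the form $\rho _{p}(L)=\rho _{p}(\infty )+aL^{-1/\nu }$ the fit recovers $\rho _{p}(\infty )$ exactly.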
As we can see from Fig. 1, the VA criterion is a very good approximation to the full HE criterion as far as the percolation locus is concerned. In Ref. I, we reported a VA percolation line that was located at much larger densities, and concluded that the approximation was rather poor. The revised results reported here show that this is not the case. The source of the error in Ref. I comes from the way percolating clusters are detected according to the Seaton–Glandt prescription. [@Seaton1] All clusters in a given configuration are first identified by Stoddard’s algorithm, then each cluster is analyzed separately to determine if its replicas are connected with one another. Since the VA criterion implies the use of random numbers to decide whether two particles are connected, the second step where each separate cluster is analyzed for percolation needs to reuse the same random numbers generated when it was first identified. This subtlety was overlooked in Ref. I, which led to the incorrect identification of actual percolating clusters as disconnected replicas.
Cluster pair correlation functions
==================================
In the remainder of the paper we restrict our attention to the cluster pair correlations for the HE and VA energetic criteria. We calculate them by using the above mentioned integral equations and MD simulations. Thus, this section will be devoted to pose the integral equations for the cluster correlation functions $g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}%
_{2})$ and $g_{\text{VA}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ and briefly discuss their solutions.
VA criterion
------------
In order to study clustering in a system composed of $N$ classical particles interacting via a pair potential $v(\mathbf{r}_{1},\mathbf{r}_{2})$, Hill separated the Boltzmann factor $e(\mathbf{r}_{1},\mathbf{r}_{2})=\exp
[-\beta v(\mathbf{r}_{1},\mathbf{r}_{2})]$, into bonded $(\dagger )$ and unbonded $(\ast )$ terms: [@Hill1] $e(\mathbf{r}_{1},\mathbf{r}%
_{2})=e^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})+e^{\ast }(\mathbf{r}_{1},%
\mathbf{r}_{2})$. As usual $\beta =1/k_{B}T$, with $k_{B}$ the Boltzmann constant. Since $e^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ represents the basic probability density of finding two particles bonded and at positions $%
\mathbf{r}_{1}$ and $\mathbf{r}_{2}$, this separation yields a diagrammatic expansion for the partition function in terms of chemical clusters. We express Hill’s separation as follows
$$e^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2})=P(\mathbf{r}_{1},\mathbf{r}_{2})%
\exp[-\beta v(\mathbf{r}_{1},\mathbf{r}_{2})] \label{5}$$
$$e^{\ast}(\mathbf{r}_{1},\mathbf{r}_{2})=[1-P(\mathbf{r}_{1},\mathbf{r}_{2})]%
\exp[-\beta v(\mathbf{r}_{1},\mathbf{r}_{2})] \label{6}$$
where $P(\mathbf{r}_{1},\mathbf{r}_{2})=P_{\text{VA}}(r_{1,2})$ is given by Eq. (\[4\]) in the case of the VA energetic criterion.
Fugacity and density expansions have been found, within Hill’s formalism, by Coniglio and co-workers [@Coniglio1] for the pair connectedness function $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})\equiv g_{\text{VA}}^{\dagger }(%
\mathbf{r}_{1},\mathbf{r}_{2})$. As it was already mentioned, this function is proportional to the joint probability density of finding two particles belonging to the same cluster and at positions $\mathbf{r}_{1}$ and $\mathbf{%
r}_{2}$, respectively. Moreover, by collecting nodal and non-nodal diagrams in these expansions an OZ-type relationship is obtained
$$g^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2})=c^{\dagger}(\mathbf{r}_{1},%
\mathbf{r}_{2})+\rho\int c^{\dagger}(\mathbf{r}_{1},\mathbf{r}%
_{3})g^{\dagger}(\mathbf{r}_{3},\mathbf{r}_{2})d\mathbf{r}_{3}. \label{7}$$
The function $c^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ is the direct pair connectedness function. By posing a closure relation, an integral equation for $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2})$ is obtained. Here, we use the most reliable closure available, *i.e.* the PY-like relation [@Coniglio1]
$$g^{\dagger }(r_{1,2})=[f^{\ast }(r_{1,2})+1][g^{\dagger
}(r_{1,2})-c^{\dagger }(r_{1,2})]+\exp [\beta
v(r_{1,2})]g(r_{1,2})f^{\dagger }(r_{1,2}). \label{8}$$
In Eq. (\[8\]), $f^{\ast }(r_{1,2})=e^{\ast }(r_{1,2})-1=\exp [-\beta
v(r_{1,2})][1-P_{\text{VA}}(r_{1,2})]-1$ is the unbound Mayer function and $%
g(r_{1,2})$ is the thermal pair distribution function (PDF).
In order to solve the integral equation given by Eqs. (\[7\]) and (\[8\]), for the Lennard–Jones potential we have implemented Labik’s numerical algorithm. [@Labik1]
HE criterion
------------
### The integral equation
We summarize here the basic theory that we have developed [@Pugnaloni4] to describe the clustering and percolation for clusters whose bond definition depends on the positions and momenta of the two particles under consideration.
For a system of $N$ classical particles that interact through a pair potential $v(\mathbf{r}_{i},\mathbf{r}_{j})$, we define a density correlation function $\rho(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},%
\mathbf{p}_{2})$ that is $N(N-1)$ times the probability density of finding two particles at the phase space configurations $(\mathbf{r}_{1}$, $\mathbf{p%
}_{1})$ and $(\mathbf{r}_{2}$, $\mathbf{p}_{2})$ respectively:
$$\begin{aligned}
\rho (\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})& =\frac{%
N(N-1)}{h^{3N}N!Q(N,V,T)} \notag \\
& \times \int \prod_{i=1}^{N}\exp [-\beta \frac{\mathbf{p}_{i}^{2}}{2m}%
]\prod_{i=1}^{N}\prod_{j>i}^{N}\exp [-\beta v(\mathbf{r}_{i},\mathbf{r}%
_{j})]d\mathbf{r}^{N-2}d\mathbf{p}^{N-2}. \label{9}\end{aligned}$$
Here $h$ is Planck’s constant and $Q(N,V,T)$ the canonical partition function of the system. Then, in the same spirit of Hill and Coniglio *et al.,* [@Hill1; @Coniglio1] we separate $\exp [-\beta v(\mathbf{r%
}_{i},\mathbf{r}_{j})]$ into connecting and blocking parts,
$$\exp [-\beta v(\mathbf{r}_{i},\mathbf{r}_{j})]=f^{\dagger }(\mathbf{r}_{i},%
\mathbf{r}_{j},\mathbf{p}_{i},\mathbf{p}_{j})+f^{\ast }(\mathbf{r}_{i},%
\mathbf{r}_{j},\mathbf{p}_{i},\mathbf{p}_{j})+1. \label{10}$$
Here $f^{\dagger }(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{p}_{i},\mathbf{p}%
_{j})$ represents the basic probability density that two particles in configuration $(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{p}_{i},\mathbf{p}_{j})$ are bonded. We will sometimes use the shorthand notation $f^{\gamma }(%
\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{p}_{i},\mathbf{p}_{j})\equiv
f_{i,j}^{\gamma }$, where $\gamma $ can be either $\dagger $ or $\ast $. Substitution of Eq. (\[10\]) in Eq. (\[9\]) yields $$\begin{aligned}
\rho (\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})& =\frac{%
N(N-1)}{h^{3N}N!Q(N,V,T)}\exp [-\beta v(\mathbf{r}_{1},\mathbf{r}_{2})]
\notag \\
& \times \int \prod_{i=1}^{N}\exp [-\beta \frac{\mathbf{p}_{i}^{2}}{2m}]\sum
\{\prod f_{i,j}^{\dagger }f_{k,l}^{\ast }\}dr^{N-2}dp^{N-2}, \label{10b}\end{aligned}$$where the sum is carried out over all possible arrangements of products of functions $f_{i,j}^{\dagger }$ and $f_{k,l}^{\ast }$.
We note that the functions $f_{i,j}^{\dagger }$ and $f_{i,j}^{\ast }$ can depend on the momenta as well as on the positions of the two particles, but the sum of $f_{i,j}^{\dagger }$ and $f_{i,j}^{\ast }$ must be momentum independent in order to conform to Eq. (\[10\]). Except for this last condition, the functions $f_{i,j}^{\dagger }$ and $f_{i,j}^{\ast }$ are otherwise arbitrary for thermodynamic purposes. Of course, we choose them in such a way that the desired definition of bonded particles for HE clusters is achieved, *i.e.*,
$$f_{i,j}^{\dagger}=\exp[-\beta v(r_{i,j})]P(\mathbf{r}_{i,j},\mathbf{p}_{i,j})
\label{11}$$
$$f_{i,j}^{\ast}=\exp[-\beta v(r_{i,j})][1-P(\mathbf{r}_{i,j},\mathbf{p}_{i,j})%
]-1 \label{12}$$
where $P(\mathbf{r}_{i,j},\mathbf{p}_{i,j})=P_{\text{HE}}(\mathbf{r}_{i,j},%
\mathbf{p}_{i,j})$ is given in Eq. (\[3\]).
Each term in the integrand of Eq. (\[10b\]) can be represented as a diagram consisting of two white $e_{1}$ and $e_{2}$ points, $N-2$ black $%
e_{i}$ points and some $f_{i,j}^{\dagger }$ and $f_{i,j}^{\ast }$ connections except between the white points. Here we take $e_{i}\equiv \exp
[-\beta \frac{\mathbf{p}_{i}^{2}}{2m}]$. White points are not integrated over whereas black points are integrated over both their positions and momenta. All the machinery normally used to handle standard diagrams in classical liquid theory [@Hansen2] can now be extended to treat these new type of diagrams. By following Coniglio’s recipe to separate connecting and blocking parts in the PDF, $g(\mathbf{r}_{1},\mathbf{r}_{2})=g^{\dagger
}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})+g^{\ast }(%
\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$, we obtain an OZ-like integral equation for $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},%
\mathbf{p}_{1},\mathbf{p}_{2}),$$$\begin{aligned}
g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})&
=c^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})
\notag \\
& +\int \rho (\mathbf{r}_{3},\mathbf{p}_{3})c^{\dagger }(\mathbf{r}_{1},%
\mathbf{r}_{3},\mathbf{p}_{1},\mathbf{p}_{3})g^{\dagger }(\mathbf{r}_{3},%
\mathbf{r}_{2},\mathbf{p}_{3},\mathbf{p}_{2})d\mathbf{r}_{3}d\mathbf{p}_{3}.
\label{13}\end{aligned}$$Here $\rho (\mathbf{r}_{1},\mathbf{p}_{1})\rho (\mathbf{r}_{2},\mathbf{p}%
_{2})g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})$ is $N(N-1)$ times the joint probability density of finding two particles at positions $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$ with momenta $%
\mathbf{p}_{1}$ and $\mathbf{p}_{2}$, respectively, and belonging to the same cluster, where the bonding criterion is given by Eqs. (\[11\]), (\[12\]) and (\[3\]), while $$\rho (\mathbf{r}_{1},\mathbf{p}_{1})=\frac{1}{N-1}\int \rho (\mathbf{r}_{1},%
\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})d\mathbf{r}_{2}d\mathbf{p}_{2}.
\label{14}$$The function $c^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},%
\mathbf{p}_{2})$ denotes the sum of all the non-nodal diagrams in the diagrammatic expansion of $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},%
\mathbf{p}_{1},\mathbf{p}_{2}).$ We recall here that a nodal diagram contains at least one black point through which all paths between the two white points pass. For a homogeneous system, we have
$$\begin{aligned}
g^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2}) & =c^{\dagger }(%
\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})+\frac{\rho}{(2\pi
mk_{B}T)^{3/2}} \notag \\
& \times\int\exp[-\beta\frac{p_{3}^{2}}{2m}]c^{\dagger}(\mathbf{r}_{13},%
\mathbf{p}_{1},\mathbf{p}_{3})g^{\dagger}(\mathbf{r}_{32},\mathbf{p}_{3},%
\mathbf{p}_{2})d\mathbf{r}_{3}d\mathbf{p}_{3}. \label{15}\end{aligned}$$
To obtain a closed integral equation with Eq. (\[13\]) or Eq. (\[15\]), we need a closure relation between $g^{\dagger }(\mathbf{r}_{1},\mathbf{r}%
_{2},\mathbf{p}_{1},\mathbf{p}_{2})$ and $c^{\dagger }(\mathbf{r}_{1},%
\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$. Here we will use the PY approximation $g(\mathbf{r}_{1},\mathbf{r}_{2})\exp [\beta v(\mathbf{r}_{1},%
\mathbf{r}_{2})]=1+N(\mathbf{r}_{1},\mathbf{r}_{2}),$ where the function $N(%
\mathbf{r}_{1},\mathbf{r}_{2})$ is the sum of the nodal diagrams in the expansion of $g(\mathbf{r}_{1},\mathbf{r}_{2})$. Separation into connecting and blocking parts, $g(\mathbf{r}_{1},\mathbf{r}_{2})=g^{\dagger }(\mathbf{r}%
_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})+g^{\ast }(\mathbf{r}_{1},%
\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$ and $N(\mathbf{r}_{1},\mathbf{%
r}_{2})=N^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})+N^{\ast }(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$, yields
$$\begin{aligned}
g^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2}) &
=[f^{\ast}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})+1][g^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})-c^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2})] \notag \\
& +\exp[\beta v(\mathbf{r}_{1},\mathbf{r}_{2})]g(\mathbf{r}_{1},\mathbf{r}%
_{2})f^{\dagger}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}%
_{2}), \label{16a}\end{aligned}$$
or, for a homogeneous system,
$$\begin{aligned}
g^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2}) & =[f^{\ast }(%
\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})+1][g^{\dagger}(\mathbf{r}%
_{12},\mathbf{p}_{1},\mathbf{p}_{2})-c^{\dagger}(\mathbf{r}_{12},\mathbf{p}%
_{1},\mathbf{p}_{2})] \notag \\
& +\exp[\beta v(\mathbf{r}_{12})]g(\mathbf{r}_{12})f^{\dagger}(\mathbf{r}%
_{12},\mathbf{p}_{1},\mathbf{p}_{2}). \label{16b}\end{aligned}$$
Equation (\[13\]) joined with Eq. (\[16a\]) or Eq. (\[15\]) joined with Eq. (\[16b\]) gives a closed set of equations for $g^{\dagger}(\mathbf{%
r}_{1},\mathbf{r}_{2},\mathbf{p}_{1},\mathbf{p}_{2})$.
From the function $g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}_{2},%
\mathbf{p}_{1},\mathbf{p}_{2})\equiv g^{\dagger }(\mathbf{r}_{1},\mathbf{r}%
_{2},\mathbf{p}_{1},\mathbf{p}_{2})$ we define the pair correlation function for energetic clusters $g_{\text{HE}}^{\dagger }(\mathbf{r}_{1},\mathbf{r}%
_{2})$ according to Eq. (\[1\]).
### Solution of the integral equation
##### Equivalence with an integral equation for polarizable fluids
Our problem consists in solving Eq. (\[15\]) for $g^{\dagger }(\mathbf{r}%
_{12},\mathbf{p}_{1},\mathbf{p}_{2})$ closed by the connectedness PY relation (\[16b\]) with $f^{\dagger }(\mathbf{r}_{i},\mathbf{r}_{j},%
\mathbf{p}_{i},\mathbf{p}_{j})$ and $f^{\ast }(\mathbf{r}_{i},\mathbf{r}_{j},%
\mathbf{p}_{i},\mathbf{p}_{j})$ given by Eqs. (\[11\]) and (\[12\]). In the closure relation (\[16b\]), $g(\mathbf{r}_{12})$ is the thermal PDF of the system. In this work we take $g(\mathbf{r}_{12})$ from the solution of the thermal OZ equation in the PY approximation. [@Hansen2]
An equation mathematically equivalent to Eq. (\[15\]) has been previously solved by Lado [@Lado1] in the study of nonpolar polarizable molecules. Explicitly, the equation considered there, which is a generalized OZ equation, relates the fluid total correlation function (TCF) $h(\mathbf{r}%
_{12},\mathbf{p}_{1},\mathbf{p}_{2})=g(\mathbf{r}_{12},\mathbf{p}_{1},%
\mathbf{p}_{2})-1$ (with $g(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})$ the PDF) and the direct correlation function (DCF) $c(\mathbf{r}_{12},%
\mathbf{p}_{1},\mathbf{p}_{2})$,
$$\begin{aligned}
h(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2}) & =c(\mathbf{r}_{12},%
\mathbf{p}_{1},\mathbf{p}_{2}) \notag \\
& +\rho\int f\left( p_{3}\right) c(\mathbf{r}_{13},\mathbf{p}_{1},\mathbf{p}%
_{3})h(\mathbf{r}_{32},\mathbf{p}_{3},\mathbf{p}_{2})d\mathbf{r}_{3}d\mathbf{%
p}_{3}, \label{16c}\end{aligned}$$
where $\mathbf{p}_{i}$ denotes the instantaneous dipolar moment induced on molecule $i$ by the remaining molecules of the system. The function $f\left(
p\right) $ gives the instantaneous dipolar moment thermal distribution which is assumed to have a Gaussian form
$$f\left( p\right) =\frac{1}{\left( 2\pi\alpha/\beta\right) ^{3/2}}\exp\left( -%
\frac{\beta p^{2}}{2\alpha}\right) ,$$
where $\alpha$ is the effective polarizability of the molecules.
We observe that Eqs. (\[15\]) and (\[16c\]) are the same if we identify $%
h$ with $g^{\dagger }$, $c$ with $c^{\dagger }$, the induced dipolar moment $%
\mathbf{p}_{i}$ with the kinetic momentum $\mathbf{p}_{i}$ and the polarizability $\alpha $ with the particle mass $m$. There are, however, some differences between the connectivity problem and the polarizable-molecule problem. The form of $f(p)$ does not need to be Gaussian for polarizable molecules; moreover, $f(p)$ is coupled to the TCF. Therefore, the value of the effective polarizability $\alpha $ depends on the density and temperature of the system. In the connectivity problem, however, the equivalent of $f(p)$, $\rho (\mathbf{r},\mathbf{p})/\rho $, is intrinsically Gaussian and independent of the thermodynamic macrostate of the system.
Another difference between the connectivity problem here and the problem described by Lado is that our closure relation must be complemented with the condition given by Eqs. (\[11\]) and (\[12\]). In addition, the closures are different. Here we consider the connectedness version of PY whereas an *almost* exact relation between DCF and TCF (van Leeuwen–Groeneveld–De Boer [@vanLeeuwen1] exact relation with approximate bridge function) is used by Lado. [@Lado1] Nevertheless, these differences do not affect the general method of solution developed by Lado and we can apply the same principle of expansions in orthogonal functions.
Thus, following Lado, [@Lado1; @Zarragoicoechea1] we start by reassigning the unknown function to be the indirect correlation function
$$\gamma^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})=g^{\dagger }(%
\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})-c^{\dagger}(\mathbf{r}_{12},%
\mathbf{p}_{1},\mathbf{p}_{2}), \label{17}$$
rather than $g^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})$, and rewriting Eq. (\[15\]) in Fourier representation,
$$\begin{aligned}
\tilde{\gamma}^{\dagger}(\mathbf{k},\mathbf{p}_{1},\mathbf{p}_{2}) & =\frac{%
\rho}{(2\pi mk_{B}T)^{3/2}}\dint d\mathbf{p}_{3}\exp[-\beta\frac{p_{3}^{2}}{%
2m}] \notag \\
& \left[ \tilde{\gamma}^{\dagger}(\mathbf{k},\mathbf{p}_{1},\mathbf{p}_{3})+%
\tilde{c}^{\dagger}(\mathbf{k},\mathbf{p}_{1},\mathbf{p}_{3})\right] \tilde{c%
}^{\dagger}(\mathbf{k},\mathbf{p}_{3},\mathbf{p}_{2}). \label{18}\end{aligned}$$
The closure given by the PY relation \[Eq. (\[16b\])\] together with the conditions (\[11\]), (\[12\]) and (\[3\]) yields
$$c^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})=\left\{
\begin{array}{cc}
g(\mathbf{r}_{12})-\gamma^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}%
_{2}) & \mathbf{p}_{1,2}^{2}/4m<-v(r_{12})\text{ \ and \ }r_{12}\leq d \\
\left( \exp[-\beta v(r_{12})]-1\right) \gamma^{\dagger}(\mathbf{r}_{12},%
\mathbf{p}_{1},\mathbf{p}_{2}) & \mathbf{p}_{1,2}^{2}/4m\geq -v(r_{12})\text{
\ or \ }r_{12}>d%
\end{array}
\right. \label{19}$$
The connectivity part of the PDF is then computed from $\gamma^{\dagger}$ as
$$g^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2})=\left\{
\begin{array}{cc}
g(\mathbf{r}_{12}) & \mathbf{p}_{1,2}^{2}/4m<-v(r_{12})\text{ \ and \ }%
r_{12}\leq d \\
\exp[-\beta v(r_{12})]\gamma^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},%
\mathbf{p}_{2}) & \text{ }\mathbf{p}_{1,2}^{2}/4m\geq-v(r_{12})\text{ \ or \
}r_{12}>d%
\end{array}
\right. \label{20}$$
The Fourier transform in Eq. (\[18\]) and its inverse are defined as
$$\tilde{f}\left( \mathbf{k}\right) =\dint d\mathbf{r}f\left( \mathbf{r}%
\right) e^{-i\mathbf{k.r}}, \label{21}$$
$$f\left( \mathbf{r}\right) =\frac{1}{\left( 2\pi\right) ^{3}}\dint d\mathbf{k}%
\tilde{f}\left( \mathbf{k}\right) e^{i\mathbf{k.r}}. \label{22}$$
The standard method for solving Eqs. (\[18\]) and (\[19\]) is to explicitly break out the angular dependence of all functions in the form of expansions in spherical harmonics. [@Gray1]
##### Expansion of the pair functions in orthogonal polynomials
The essential point in the integral equation solution method [@Lado1] is the expansion of all the pair functions, like $\gamma^{\dagger}(\mathbf{r}%
_{12},\mathbf{p}_{1},\mathbf{p}_{2})$, in terms of orthogonal polynomials. First we expand
$$\begin{aligned}
\gamma^{\dagger}(\mathbf{r}_{12},\mathbf{p}_{1},\mathbf{p}_{2}) &
=\gamma^{\dagger}(r,p_{1},p_{2},\omega_{1},\omega_{2}) \notag \\
& =4\pi\tsum
\limits_{l_{1},l_{2},m}\gamma_{l_{1}l_{2}m}^{%
\dagger}(r,p_{1},p_{2})Y_{l_{1}m}\left( \omega _{1}\right) Y_{l_{2}\overline{%
m}}\left( \omega_{2}\right) , \label{23}\end{aligned}$$
where $\omega_{1}$ and $\omega_{2}$ are the directions of the momenta $%
\mathbf{p}_{1}$ and $\mathbf{p}_{2}$, $\overline{m}=-m$, and $%
m=-l,-l+1,...,l $. In this and similar expressions, the vector $\mathbf{r}%
_{12}$ has been implicitly chosen as the $z$ direction in the specification of the Euler angles $\omega=\left( \theta,\phi\right) $. The spherical harmonics satisfy the orthogonality condition
$$\dint d\omega Y_{lm}\left( \omega\right)
Y_{l^{^{\prime}}m^{^{\prime}}}^{\ast }\left( \omega\right)
=\delta_{ll^{^{\prime}}}\delta_{mm^{^{\prime}}}, \label{24}$$
so that the coefficients of the expansion (\[23\]) are immediately obtainable as $$\gamma_{l_{1}l_{2}m}^{\dagger}(r,p_{1},p_{2})=\frac{1}{4\pi}\dint
d\omega_{1}d\omega_{2}\gamma^{\dagger}(r,p_{1},p_{2},\omega_{1},\omega
_{2})Y_{l_{1}m}\left( \omega_{1}\right) Y_{l_{2}\overline{m}}^{\ast}\left(
\omega_{2}\right) . \label{25}$$
Similarly, we can break out the kinetic momentum dependence in the form of expansions in polynomials of $p$,
$$\gamma_{l_{1}l_{2}m}^{\dagger}(r,p_{1},p_{2})=\tsum
\limits_{n_{1},n_{2}}\gamma_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left(
r\right) Q_{n_{1}l_{1}}\left( p_{1}\right) Q_{n_{2}l_{2}}\left( p_{2}\right)
, \label{26}$$
which are constructed to be orthogonal with Gaussian weight function
$$f\left( p\right) =\frac{1}{(2\pi m/\beta)^{3/2}}\exp[-\beta p^{2}/2m],
\label{27}$$
namely,
$$4\pi\dint \limits_{0}^{\infty}dpp^{2}f\left( p\right) Q_{nl}\left( p\right)
Q_{n^{\prime}l}\left( p\right) =\delta_{nn^{\prime}}. \label{28}$$
The coefficients of the expansion are then again obtainable by quadratures,
$$\begin{aligned}
\gamma_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( r\right) & =\dint
\limits_{0}^{\infty}dp_{1}dp_{2}\left[ 4\pi p_{1}^{2}f\left( p_{1}\right) %
\right] \left[ 4\pi p_{2}^{2}f\left( p_{2}\right) \right] \notag \\
& \times\gamma_{l_{1}l_{2}m}^{\dagger}(r,p_{1},p_{2})Q_{n_{1}l_{1}}\left(
p_{1}\right) Q_{n_{2}l_{2}}\left( p_{2}\right) . \label{29}\end{aligned}$$
Given the Gaussian form of the weight function $f\left( p\right) $, the associated polynomials are [@Morse1]
$$Q_{nl}\left( p\right) =\left[ \frac{\Gamma\left( \frac{1}{2}\left(
n-l\right) +1\right) \Gamma\left( \frac{3}{2}\right) }{\Gamma\left( \frac{1}{%
2}\left( n+l\right) +\frac{3}{2}\right) }\right] ^{1/2}\left( \frac{\beta
p^{2}}{2m}\right) ^{l/2}L_{\left( n-l\right) /2}^{l+1/2}\left( \frac{\beta
p^{2}}{2m}\right) , \label{30}$$
where $L_{n}^{b}\left( t\right) $ are the associated Laguerre polynomials [@Abramowitz1] and $\Gamma\left( z\right) $ is the gamma function.
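The polynomials of Eq. (\[30\]) and the orthonormality condition (\[28\]) can be checked directly. In the variable $t=\beta p^{2}/2m$ the measure $4\pi p^{2}f(p)\,dp$ becomes $(2/\sqrt{\pi})\sqrt{t}\,e^{-t}\,dt$, so generalized Gauss–Laguerre quadrature with $\alpha=1/2$ integrates Eq. (\[28\]) exactly. A short sketch (illustrative; index pairs are arbitrary choices):

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gammaln

def Q(n, l, t):
    """The polynomials of Eq. (30), written in the variable t = beta p^2 / 2m."""
    lognorm = 0.5 * (gammaln((n - l) / 2 + 1) + gammaln(1.5)
                     - gammaln((n + l) / 2 + 1.5))
    return np.exp(lognorm) * t ** (l / 2) * eval_genlaguerre((n - l) // 2, l + 0.5, t)

# nodes/weights for the weight t^{1/2} e^{-t}; the factor 2/sqrt(pi)
# normalizes the measure coming from 4*pi p^2 f(p) dp
t, w = roots_genlaguerre(20, 0.5)

def inner(n1, n2, l):
    return np.sum(w * (2.0 / np.sqrt(np.pi)) * Q(n1, l, t) * Q(n2, l, t))

assert abs(inner(0, 0, 0) - 1) < 1e-10  # normalization
assert abs(inner(2, 2, 0) - 1) < 1e-10
assert abs(inner(0, 2, 0)) < 1e-10      # orthogonality
assert abs(inner(1, 3, 1)) < 1e-10
```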
Accordingly, all the functions in $\mathbf{r}$-space are expanded in the form
$$\gamma^{\dagger}(\mathbf{r},\mathbf{p}_{1},\mathbf{p}_{2})=4\pi\dsum
\limits_{n_{1},\text{ }n_{2},\text{ }l_{1},\text{ }l_{2},\text{ }%
m}\gamma_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( r\right)
Q_{n_{1}l_{1}}\left( p_{1}\right) Q_{n_{2}l_{2}}\left( p_{2}\right)
Y_{l_{1}m}\left( \omega_{1}\right) Y_{l_{2}\overline{m}}\left( \omega
_{2}\right) , \label{31}$$
where the $z$ axis is along $\mathbf{r}$ and the summation indices satisfy the constraints
$$\begin{aligned}
n & =0,1,2,..., \notag \\
l & =n,\text{ }n-2,\text{ }n-4,...,1\text{ or }0\text{,} \label{32} \\
m & =0,\pm1,\pm2,...,\pm l. \notag\end{aligned}$$
The coefficients of Eq. (\[31\]) can be obtained as
$$\begin{aligned}
\gamma_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( r\right) & =4\pi\dint
d\mathbf{p}_{1}d\mathbf{p}_{2}f\left( p_{1}\right) f\left( p_{2}\right)
\gamma^{\dagger}(\mathbf{r},\mathbf{p}_{1},\mathbf{p}_{2}) \notag \\
& \times Q_{n_{1}l_{1}}\left( p_{1}\right) Q_{n_{2}l_{2}}\left( p_{2}\right)
Y_{l_{1}m}^{\ast}\left( \omega_{1}\right) Y_{l_{2}\overline {m}%
}^{\ast}\left( \omega_{2}\right) \label{33}\end{aligned}$$
with $f\left( p\right) $ given by Eq. (\[27\]). The complete orthonormality condition is
$$4\pi\dint d\mathbf{p}f\left( p\right) Q_{nl_{{}}}\left( p\right)
Q_{n^{^{\prime}}l^{^{\prime}}}\left( p\right) Y_{lm}\left( \omega\right)
Y_{l^{^{\prime}}m^{^{\prime}}}^{\ast}\left( \omega\right)
=\delta_{nn^{^{\prime}}}\delta_{ll^{^{\prime}}}\delta_{mm^{^{\prime}}}.
\label{34}$$
The functions in $\mathbf{k}$ can be expanded in a similar way. Setting the $%
z$ axis along $\mathbf{k}$, we write
$$\tilde{\gamma}^{\dagger }(\mathbf{k},\mathbf{p}_{1},\mathbf{p}_{2})=4\pi
\dsum\limits_{n_{1},\text{ }n_{2},\text{ }l_{1},\text{ }l_{2},\text{ }m}%
\tilde{\gamma}_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( k\right)
Q_{n_{1}l_{1}}\left( p_{1}\right) Q_{n_{2}l_{2}}\left( p_{2}\right)
Y_{l_{1}m}\left( \omega _{1}\right) Y_{l_{2}\overline{m}}\left( \omega
_{2}\right) . \label{35}$$
However, the angles $\omega _{1}$ and $\omega _{2}$ are referred to different axes in Eqs. (\[31\]) and (\[35\]), so that the coefficients in these expansions are not themselves mutual Fourier transforms.
Introducing the expansion for $\tilde{\gamma}^{\dagger }(\mathbf{k},\mathbf{p%
}_{1},\mathbf{p}_{2})$ and the corresponding expansion for $\tilde{c}%
^{\dagger }(\mathbf{k},\mathbf{p}_{1},\mathbf{p}_{2})$, one finds that the OZ-like equation in Fourier space \[Eq. (\[18\])\] goes over into a set of matrix equations for the respective coefficients,
$$\tilde{\gamma}_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( k\right)
=\left( -1\right) ^{m}\rho\tsum \limits_{n_{3},l_{3}}\left[ \tilde{\gamma}%
_{l_{1}l_{3}m}^{\dagger\text{ }n_{1}n_{3}}\left( k\right) +\tilde{c}%
_{l_{1}l_{3}m}^{\dagger\text{ }n_{1}n_{3}}\left( k\right) \right] \tilde{c}%
_{l_{3}l_{2}m}^{\dagger\text{ }n_{3}n_{2}}\left( k\right) . \label{36}$$
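At each fixed $k$ and $m$, Eq. (\[36\]) is a linear matrix relation in the flattened index $(n,l)$ and can be solved in closed form for the $\tilde{\gamma}^{\dagger}$ coefficients: writing $s=(-1)^{m}$, $G=s\rho\,(G+C)C$ gives $G=s\rho\, C^{2}(I-s\rho C)^{-1}$. A minimal sketch of this step (the matrix contents here are arbitrary test data):

```python
import numpy as np

def gamma_from_c(c_mat, rho, m):
    """Solve Eq. (36) at fixed k and m for the gamma coefficients:
    G = s*rho*(G + C) C, s = (-1)^m, hence G = s*rho * C @ C @ inv(I - s*rho*C).
    The pair (n, l) is flattened into a single matrix index."""
    s = (-1) ** m
    size = c_mat.shape[0]
    return s * rho * c_mat @ c_mat @ np.linalg.inv(np.eye(size) - s * rho * c_mat)

# the 1x1 case reduces to the familiar scalar OZ relation rho*c^2/(1 - rho*c)
C = np.array([[0.5]])
G = gamma_from_c(C, 0.24, 0)
assert abs(G[0, 0] - 0.24 * 0.5 ** 2 / (1 - 0.24 * 0.5)) < 1e-12

# and the defining relation of Eq. (36) holds for a generic block with m odd
C2 = np.array([[0.3, 0.05], [0.05, 0.2]])
G2 = gamma_from_c(C2, 0.24, 1)
assert np.allclose(G2, -0.24 * (G2 + C2) @ C2)
```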
##### Numerical procedure
To obtain a numerical solution for the set of equations (\[15\]) and (\[16b\]) one needs the discrete versions of the expansion for $\gamma
^{\dagger }(\mathbf{r},\mathbf{p}_{1},\mathbf{p}_{2})$ \[Eq. (\[31\])\] and the quadratures for the coefficients $\gamma _{l_{1}l_{2}m}^{\dagger \text{ }%
n_{1}n_{2}}\left( r\right) $ \[Eq. (\[33\])\]; these are
$$\begin{aligned}
\gamma^{\dagger}(r,i_{1},i_{2},k_{1},k_{2},j) & =4\pi\dsum \limits_{n_{1},%
\text{ }n_{2},\text{ }l_{1},\text{ }l_{2},\text{ }m}\gamma_{l_{1}l_{2}m}^{%
\dagger\text{ }n_{1}n_{2}}\left( r\right) Q_{n_{1}l_{1}}\left( i_{1}\right)
Q_{n_{2}l_{2}}\left( i_{2}\right) \notag \\
& \times\mathcal{P}_{l_{1}m}\left( k_{1}\right) \mathcal{P}_{l_{2}\overline{m%
}}\left( k_{2}\right) \nu_{m}T_{m}\left( j\right) \label{37}\end{aligned}$$
and
$$\begin{aligned}
\gamma_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( r\right) & =\tsum
\limits_{i_{1},i_{2},k_{1},k_{2},j=1}^{N_{p}}w\left( i_{1}\right) w\left(
i_{2}\right) w\left( k_{1}\right) w\left( k_{2}\right) w\left( j\right)
\gamma^{\dagger}(r,i_{1},i_{2},k_{1},k_{2},j) \notag \\
& \times Q_{n_{1}l_{1}}\left( i_{1}\right) Q_{n_{2}l_{2}}\left( i_{2}\right)
\mathcal{P}_{l_{1}m}\left( k_{1}\right) \mathcal{P}_{l_{2}\overline{m}%
}\left( k_{2}\right) \left( -1\right) ^{m}T_{m}\left( j\right) . \label{38}\end{aligned}$$
In Eq. (\[37\]), $\nu_{0}=1$ and $\nu_{m}=2$ for $m>0$. In Eq. (\[38\]), Gaussian quadratures are being used, with the argument $i$ standing for $%
t_{i}=\beta p_{i}^{2}/2m$, the $i$th root of $L_{N_{p}}^{1/2}\left( t\right)
$, $k$ for $x_{k}=\cos\theta_{k}$, the $k$th root of $P_{N_{p}}\left(
x\right) $, and $j$ for $y_{j}=\cos\phi_{j}$, the $j$th root of $%
T_{N_{p}}\left( y\right) $, where $L_{N_{p}}^{1/2}\left( t\right) $, $%
P_{N_{p}}\left( x\right) ,$ and $T_{N_{p}}\left( y\right) $ are the associated Laguerre, Legendre, and Chebyshev polynomials, respectively, all of order $N_{p}$; here the associated Legendre functions $\mathcal{P}%
_{lm}(x) $ are normalized to 2. The $w$ are the corresponding Gaussian weights,
$$w\left( i\right) =\left\{ t_{i}\left[ L_{N_{p}}^{1/2\prime}\left(
t_{i}\right) \right] ^{2}\right\} ^{-1}, \label{39}$$
$$w\left( k\right) =\left\{ \left( 1-x_{k}^{2}\right) \left[
P_{N_{p}}^{\prime}\left( x_{k}\right) \right] ^{2}\right\} ^{-1}, \label{40}$$
$$w\left( j\right) =N_{p}^{-1}, \label{41}$$
where the prime denotes derivative.
The solution follows an iterative procedure. The preparatory stages of the calculation consist of (i) computing the thermal PDF $g\left( r_{12}\right) $ for the Lennard–Jones fluid over a suitable mesh using the PY equation, (ii) reducing the momentum space to the discrete set of points $\mathbf{p}%
_{i,k,j}\equiv \left( p_{i}\text{, }\theta _{k}\text{, }\phi _{j}\right) $ with $i,k,j=1,2,...,N_{p}$, and (iii) identifying the subset of states —within all possible configurational states $(r_{12},\mathbf{p}_{1},%
\mathbf{p}_{2})$ of a pair of particles— that correspond to a bonded pair.
We construct a logical array $\text{B}(r_{12},\mathbf{p}_{1;i,k,j},\mathbf{p}%
_{2;i,k,j})$ of dimension seven whose value is TRUE if the configurational state of the pair of particles corresponds to a bonded state, *i.e.,* if $\ \mathbf{p}_{1,2}^{2}/4m<-v(r_{12})$ and $\ r_{12}\leq d$. If instead this condition is not satisfied, then $\text{B}(r_{12},\mathbf{p}_{1;i,k,j},%
\mathbf{p}_{2;i,k,j})\text{ is FALSE.}$
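The bond test that fills the array B can be sketched as follows. This is an illustration only; the reduced units ($\varepsilon=\sigma=m=1$) and the connectivity distance $d$ are our own choices, not the paper's values.

```python
import numpy as np

eps, sigma, m, d = 1.0, 1.0, 1.0, 1.5  # LJ reduced units; d is illustrative

def v_lj(r):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def bonded(r12, p1, p2):
    """TRUE if the pair is bonded in the HE sense: relative kinetic energy
    |p1 - p2|^2 / 4m below -v(r12), and within the shell r12 <= d."""
    p_rel_sq = np.sum((np.asarray(p1) - np.asarray(p2)) ** 2)
    return bool((p_rel_sq / (4.0 * m) < -v_lj(r12)) and (r12 <= d))

# two slow particles at the potential minimum are bonded ...
assert bonded(2 ** (1 / 6), np.zeros(3), np.zeros(3))
# ... but not if their relative kinetic energy exceeds the well depth,
assert not bonded(2 ** (1 / 6), np.array([2.0, 0.0, 0.0]), np.array([-2.0, 0.0, 0.0]))
# ... nor outside the connectivity shell r12 > d
assert not bonded(2.0, np.zeros(3), np.zeros(3))
```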
The iterative solution of Eqs. (\[36\]) and (\[19\]) starts by guessing the initial values of the coefficients $\gamma_{l_{1}l_{2}m}^{\dagger\text{ }%
n_{1}n_{2}}\left( r_{12}\right) $. Then, if $\text{B}(r_{12},\mathbf{p}%
_{1;i,k,j},\mathbf{p}_{2;i,k,j})\text{ is TRUE}$, following Eq. (\[20\]) we take
$$g_{l_{1}l_{2}m}^{\dagger\text{ }n_{1}n_{2}}\left( r_{12}\right) =\left\{
\begin{array}{cc}
g\left( r_{12}\right) & \text{if }n_{1}=n_{2}=l_{1}=l_{2}=m=0 \\
0 & \text{otherwise}%
\end{array}
\right. \label{42}$$
If instead $\text{B}(r_{12},\mathbf{p}_{1;i,k,j},\mathbf{p}_{2;i,k,j})\text{
is FALSE}$ then, following Eq. (\[20\]), we take
$$g_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) =\exp
[-\beta v(r_{12})]\gamma _{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left(
r_{12}\right) . \label{43}$$
Knowing $g_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) $ and $\gamma _{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) $ for all the mesh points and allowed indices, we can calculate \[see Eqs. (\[17\]) or (\[19\])\] $$c_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right)
=g_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) -\gamma
_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) . \label{44}$$
We now need to transform the coefficients $c_{l_{1}l_{2}m}^{\dagger \text{ }%
n_{1}n_{2}}\left( r_{12}\right) $ in real space into coefficients $\tilde{c}%
_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( k\right) $ in Fourier space. However, as we have mentioned, they are not themselves Fourier transforms of each other. Thus, we have to assemble the complete function first using the equation analogous to (\[37\]) for $c^{\dagger
}(r,i_{1},i_{2},k_{1},k_{2},j)$ and then use a generalized fast-transform algorithm [@Lado1] to calculate $\tilde{c}^{\dagger
}(k,i_{1},i_{2},k_{1},k_{2},j)$. Using the equation analogous to (\[38\]) in $\mathbf{k}$-space we then have the coefficients $\tilde{c}%
_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( k\right) $ for the complete set of indices and all the values of $k$ on an adequate mesh. The coefficients $\tilde{\gamma}_{l_{1}l_{2}m}^{\dagger \text{ }%
n_{1}n_{2}}\left( k\right) $ are then easily calculated by using the OZ-like equation in Fourier space \[see Eq. (\[36\])\]. Again, we assemble the complete function $\tilde{\gamma}^{\dagger }(k,i_{1},i_{2},k_{1},k_{2},j)$ \[using the Fourier space version of Eq. (\[37\])\]. The inverse transform $%
\gamma ^{\dagger }(r_{12},i_{1},i_{2},k_{1},k_{2},j)$ is calculated with the fast-transform algorithm and so new coefficients $\gamma
_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) $ \[obtained from Eq. (\[38\])\] are available to start again the iterative cycle. The iterations end when convergence is reached, as measured by $$\left\vert \left[ \gamma _{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left(
r_{12}\right) \right] _{\left( s+1\right) \text{th iteration}}-\left[ \gamma
_{l_{1}l_{2}m}^{\dagger \text{ }n_{1}n_{2}}\left( r_{12}\right) \right] _{s%
\text{th iteration}}\right\vert <\epsilon \label{45}$$ for the complete set of indices. The tolerance $\epsilon$ is set to $10^{-4}$.
The pair correlation function for an energetic cluster \[see Eq. (\[1\])\] is finally given by
$$g_{\text{HE}}^{\dagger }(r_{12})=g_{000}^{\dagger \text{ }00}\left(
r_{12}\right) , \label{46}$$
where the orthonormality condition \[see Eq. (\[34\])\] has been used.
Results
=======
Firstly, as a complement to Fig. 1, we show in Fig. 3 the cluster pair correlation functions $g_{\text{HE}}^{\dagger }(r_{12})$ and $g_{\text{VA}%
}^{\dagger }(r_{12})$ obtained from MD using $N=4000$ particles for ($\rho
^{\ast }$,$T^{\ast }$) = ($0.24$,$1.4$), ($\rho ^{\ast }$,$T^{\ast }$) = ($%
0.42$,$1.4$) and ($\rho ^{\ast }$,$T^{\ast }$) = ($0.429$,$1.4$) where $%
\rho ^{\ast }=\rho \sigma ^{3}$ and $T^{\ast }=k_{B}T/\varepsilon $. The last two points correspond, respectively, to the VA and HE percolation loci for $T^{\ast }=1.4$. The main peak in the cluster correlation functions is higher for the VA criterion than for the HE criterion, which implies that the VA criterion has a stronger tendency to regard two neighboring particles as directly connected. However, as can be seen in Fig. 3, $%
g_{\text{VA}}^{\dagger }(r_{12})$ falls faster than $g_{\text{HE}}^{\dagger
}(r_{12})$ for larger $r$. This contrasting behavior can be understood by analyzing the cluster size distribution function $n(s)$, which gives the number of clusters in the system consisting of $s$ particles. Figure 4 shows $n(s)$ for the same three state points as Fig. 3. We can see here that the VA criterion identifies a larger number of clusters than the HE criterion up to a certain size, which depends on the density. However, the HE criterion always identifies some clusters larger than the largest clusters identified by the VA criterion. This means that the cluster correlation function is more long ranged for the HE clusters.
Figs. 5 and 6 show the theoretical cluster correlation functions $g_{\text{HE%
}}^{\dagger }(r_{12})$ and $g_{\text{VA}}^{\dagger }(r_{12})$, calculated by solving the corresponding integral equations \[Eqs.(\[15\]),(\[16b\]) and (\[7\]),(\[8\]), respectively\] following the methods indicated in the previous section, together with the corresponding MD simulation results. We show the curves at temperatures $T^{\ast }=1.4$ and $3.0$ and densities $%
\rho ^{\ast }=0.24$ and $0.55$, respectively. These values are rather far from the percolation loci, since so far we have been unable to obtain convergence of the numerical algorithms at higher densities when the HE criterion is considered.
Overall, Figs. 5 and 6 show that the theoretical results follow the trends of the corresponding simulations quite well for each criterion.
Conclusions
===========
We have shown that, contrary to our previous results (Ref. I), the VA energetic criterion is, in general, a good approximation to the full HE criterion for estimating the percolation loci in a Lennard–Jones fluid. However, the cluster correlation functions are somewhat different in the VA case. We have obtained the cluster pair correlation functions for both energetic criteria through the numerical integration of connectedness OZ integral equations. In particular, we have used a generalization of the integral equations that allows the implementation of the HE criterion. The theoretical results agree rather well with the simulations.
This work was supported by CONICET, CICPBA and UNLP (Argentina). We thank F. Lado for providing the source code of his algorithm for the solution of the nonpolar polarizable Lennard–Jones fluid.
[99]{} B. Senger, P. Schaaf, D. S. Corti, R. Bowles, J. C. Voegel and H. Reiss, J. Chem. Phys. **110**, 6421 (1999).
F.W. Starr, J.K. Nielsen and H.E. Stanley, Phys. Rev. Lett. **82**, 2294 (1999).
S. H. Simon, V. Dobrodavjević, and R. M. Stratt, J. Chem. Phys. **94**, 7360 (1991).
S. H. Chen, J. Rouch, F. Sciortino and P. Tartaglia, J. Phys.: Condens. Matter **6**, 10855 (1994).
A. Coniglio, H. E.Stanley, and W. Klein, Phys. Rev. B **25**, 6805 (1982).
B. D. Butler, H. J. M. Hanley, D. Hansen and D. J. Evans, Phys. Rev. Lett. **74**, 4468 (1995).
H. E. Stanley, R. L. Blumberg, and A. Geiger, Phys. Rev. B, **28**, 1626 (1983).
G. S. Grest and M. H. Cohen, in *Percolation Structures and Processes,* edited by G. Deutscher, R. Zallen and J. Adler (Adam Hilger, Bristol, 1983).
H.-P. Wittmann, K. Kremer, and K. Binder, J. Chem. Phys. **96**, 6291 (1992).
L. A. Pugnaloni, G. C. Barker and A. Mehta, Adv. Complex Systems **4**, 289 (2001).
M. Sahimi, *Applications of Percolation Theory* (Taylor and Francis, London, 1994).
G. Grimmett, *Percolation* (Springer-Verlag, Berlin, 1999).
T. L. Hill, J. Chem. Phys. **23**, 617 (1955).
A. Coniglio, U. De Angelis, and A. Forlani, J. Phys. A: Math. Gen. **10**, 1123 (1977).
L. A. Pugnaloni, and F. Vericat, J. Chem. Phys. **116**, 1097 (2002).
For example, in the conductor–insulator transition of water in oil microemulsions, two micelles are considered bonded if they can share charge carriers. [@Chen1] However, in the identification of bridges formed by grains within granular materials, two grains are bonded if they stabilize each other. [@Pugnaloni1]
L. A. Pugnaloni, arXiv:cond-mat/0406713 (2004).
G. Stell, J. Phys. A: Math. Gen., **17**, L885 (1984).
Y. C. Chiew and E. D. Glandt, J. Phys. A: Math. Gen. **16**, 2599 (1983).
T. DeSimone, S. Demoulini, and R. M. Stratt, J. Chem. Phys. **85**, 391 (1986).
D. Laría and F. Vericat, Phys. Rev. A **43**, 1932 (1991).
C. M. Carlevaro, C. O. Stoico, and F. Vericat, J. Phys.: Condens. Matter **8**, 1857 (1996).
F. H. Stillinger, J. Chem. Phys. **38**, 1486 (1963).
L. A. Pugnaloni and F. Vericat, Phys. Rev. E **61**, R6067 (2000).
L. A. Pugnaloni, I. F. Marquez, and F. Vericat, Physica A **321**, 398 (2003).
G. J. Zarragoicoechea, L. A. Pugnaloni, F. Lado, E. Lomba, and F. Vericat, Phys. Rev. E **71**, 031202 (2005).
Interestingly, Campi *et al.* [@Campi1] found that picking up velocities at random from a Boltzmann distribution instead of using the real velocities of the particles yields the same results for the percolation properties of the Lennard–Jones fluid in the HE criterion.
R. Soto and P. Cordero, Phys. Rev. E **56**, 2851 (1997).
R. Soto and P. Cordero, J. Chem. Phys. **108**, 8989 (1998).
X. Campi, H. Krivine, N. Sator, Physica A **296**, 24 (2001).
A. Coniglio, J. Phys: Condens. Matter **13**, 9039 (2001).
F. Lado, Phys. Rev. E **55**, 426 (1997).
M. P. Allen and D. J. Tildesley, *Computer Simulation of Liquids* (Clarendon Press, Oxford, 1987).
S. D. Stoddard, J. Comp. Phys. **27**, 291 (1978).
N.A. Seaton and E.D. Glandt, J. Chem. Phys. **86**, 4668 (1987).
S.B. Lee and S. Torquato, Phys. Rev. A **41**, 5338 (1990).
A. Z. Panagiotopoulos, Mol. Phys. **61**, 813 (1987).
J. P. Hansen and L. Verlet, Phys. Rev. **184**, 161 (1969).
D. S. Gaunt and M. F. Sykes, J. Phys. A: Math. Gen. **16**, 783 (1983).
S. Labík, A. Malijevský, and P. Voňka, Mol. Phys. **56**, 709 (1985).
J. P. Hansen and I. R. McDonald, *Theory of Simple Liquids* (Academic Press, London, 1976).
J. M. J. van Leeuwen, J. Groeneveld and J. De Boer, Physica **25**, 792 (1959).
C. G. Gray and K. E. Gubbins, *Theory of Molecular Fluids* (Clarendon, Oxford, 1984).
P. M. Morse and H. Feshbach, *Methods of Theoretical Physics* (McGraw-Hill, New York, 1953).
*Handbook of Mathematical Functions*, edited by M. Abramowitz and I. A. Stegun (Dover, New York, 1965), Chap. 22.
**Figure Captions**
**Figure 1.** Coexistence and percolation curves for the Lennard–Jones fluid. Open diamonds correspond to the gas–liquid coexistence. [@Panagio1] Open squares correspond to the fluid–solid coexistence. [@Hansen1] The percolation loci for the HE criterion (open circles) and for the VA criterion (open triangles) are compared. Solid circles correspond to the percolation loci for the HE criterion from Campi *et al.* [@Campi1] Lines are only to guide the eye.
**Figure 2.** Percolation density for the HE criterion as a function of $L^{-1/\nu }$. $L$ is the simulation box length and $\nu =0.88$. [@Gaunt1] The largest system corresponds to $4000$ particles. The system size used for the calculation of the percolation curves in Fig. 1 is indicated by an arrow. The inset shows the percolation threshold as a function of $d$ for the $1372$-particle system. The arrow shows the value of $d$ used in the rest of the paper.
**Figure 3.** Connectedness correlation function for the HE criterion (solid line) and for the VA criterion (dotted line) at $T^{\ast }=1.4$ for various densities. The results for $\rho ^{\ast }=0.42$ and $0.429$ correspond to the percolation loci for the VA and the HE criterion, respectively.
**Figure 4.** Cluster size distribution for the HE criterion (black solid line) and for the VA criterion (red dotted line) at $T^{\ast }=1.4$ for the same densities as in Fig. 3.
**Figure 5.** Connectedness correlation function for the HE criterion (solid line and open circles) and for the VA criterion (dotted line and open triangles) at $T^{\ast }=1.4$ and $\rho ^{\ast }=0.24$. Lines correspond to the solution of the connectedness OZ equations closed with the connectedness PY relation. Symbols correspond to MD.
**Figure 6.** Connectedness correlation function for the HE criterion (solid line and open circles) and for the VA criterion (dotted line and open triangles) at $T^{\ast }=3.0$ and $\rho ^{\ast }=0.55$.
---
author:
- Stefano Gualandi and Giuseppe Toscani
title: 'Human Behavior And Lognormal Distribution. A Kinetic Description'
---
Introduction
============
Random variations in the data from many scientific disciplines often show more or less skewed probability distributions. Skewed distributions are particularly common when the measured values are positive, as happens, for example, with species abundance [@Hir; @Lop], lengths of latent periods of infectious diseases [@Kon; @Sar1; @Sar2], and the distribution of mineral resources in the Earth’s crust [@Ahr; @Mal; @Raz]. Skewed distributions often closely fit the lognormal distribution [@Aic; @Cro; @Lim]. The list of phenomena which fit the lognormal distribution in the natural sciences is quite long, and the interested reader can get an almost complete picture of them from the exhaustive review paper by Limpert, Stahel and Abbt [@Lim].
In addition to samples from physical and biological sciences, a relevant number of phenomena involving measurable quantities of a population and fitting lognormal distribution comes from social sciences and economics, areas where it can be reasonably assumed that the appearance of this distribution is a consequence of a certain human behavior.
Among others, a good fitting has been observed while looking at the distribution of body weight [@BC], at women’s age at first marriage [@Pre], at drivers behavior [@JJ], or, from the economic world, when looking at consumption in a western society [@BBL], at the size of cities [@BRS], and at call-center service times [@Brown].
Most of the scientific work in this area of research is mainly devoted to understand at best the possible reasons which justify the underlying lognormal distribution, and to estimate its parameters, while dynamical mathematical models trying to explain the formation of lognormal distribution are usually not dealt with.
In this paper we will try to close this gap by showing that the aforementioned phenomena in social sciences and economics can be reasonably well described in terms of a partial differential equation of Fokker–Planck type for the density $f = f(w,t)$ of agents which have the (positive) value of the hallmark under consideration equal to $w$ at time $t\ge 0$. This Fokker–Planck equation, which will be the main object to study, takes the form
$$\label{FPori}
\frac{\partial f(w,t)}{\partial t} = \frac \lambda 2 \frac{\partial^2 }{\partial w^2}
\left(w^2 f(w,t)\right )+ \frac \gamma 2
\frac{\partial}{\partial w}\left( w\, \log \frac w{\bar w_L} f(w,t)\right).$$
In [(\[FPori\])]{} $\lambda, \gamma$ and $\bar w_L$ are positive constants closely related to the typical quantities of the phenomenon under study. In view of the fact that the independent variable $w$ is non-negative, the initial value problem for the Fokker–Planck equation [(\[FPori\])]{} is usually coupled with suitable boundary conditions at the point $w=0$ [@PT13; @FPTT]. The equilibrium density of the Fokker–Planck equation [(\[FPori\])]{} is given by the lognormal density $$f_\infty(w) = \frac{1}{\sqrt{2\pi\sigma}\,w}\exp\left\{ -\frac{(\log w-\mu)^2}{2\sigma}\right\}, \label{equili}$$ where $$\sigma = \frac\lambda\gamma, \qquad \mu = \log\bar w_L - \frac\lambda\gamma. \label{pa}$$ Moreover, for any given initial distribution $f_0(w)$ of agents, convergence to the lognormal equilibrium is shown to hold exponentially fast in time with explicit rate (cf. Section \[trend\]).
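The lognormal stationary state, with $\sigma=\lambda/\gamma$ and $\mu=\log\bar w_L-\lambda/\gamma$, can be checked by verifying that it annihilates the stationary flux of Eq. [(\[FPori\])]{}, namely $\frac{\lambda}{2}\partial_w(w^2 f)+\frac{\gamma}{2}w\log(w/\bar w_L)f=0$. A short symbolic sketch (an independent check, not part of the paper; $\log(w/\bar w_L)$ is written as $\log w-\log\bar w_L$, legitimate since both quantities are positive):

```python
import sympy as sp

w, lam, gam, wbar = sp.symbols('w lam gam wbar', positive=True)
sigma = lam / gam
mu = sp.log(wbar) - lam / gam
f_inf = sp.exp(-(sp.log(w) - mu) ** 2 / (2 * sigma)) / (sp.sqrt(2 * sp.pi * sigma) * w)

# stationary flux: (lambda/2) d/dw (w^2 f) + (gamma/2) w (log w - log wbar) f
flux = lam / 2 * sp.diff(w ** 2 * f_inf, w) \
     + gam / 2 * w * (sp.log(w) - sp.log(wbar)) * f_inf

# the flux vanishes identically; check numerically at a few sample points
F = sp.lambdify((w, lam, gam, wbar), flux, 'math')
assert all(abs(F(wv, 1.3, 0.9, 2.0)) < 1e-12 for wv in (0.3, 1.0, 2.5))
```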
The Fokker–Planck equation [(\[FPori\])]{} has been first derived in the economic context in [@GT17]. There, this equation has been obtained starting from a detailed explanation of the possible motivations behind the forming of a lognormal (steady) distribution in agents service times of call centers (previously observed by Brown and others [@Brown]), and resorting to the well-consolidated methods of statistical mechanics [@BKS; @CFL; @NPT; @PT13].
The approach used in [@GT17] has its roots in kinetic theory. This approach is robust in particular when modeling socio-economic problems. Indeed, mathematical modeling of social and economical phenomena in multi-agent systems became in the last twenty years a challenging and productive field of research involving both applied mathematicians and physicists. In economics, the formation of Pareto curves in wealth distribution of western countries has been one of the main issues studied in various aspects [@ChaCha00; @CCM; @ChChSt05; @CoPaTo05; @DY00; @GSV; @SGD]. Likewise, in social sciences, the investigation of statistical mechanics of opinion formation played a leading role [@BN2; @BN3; @BN1; @BeDe; @Bou; @Bou1; @Bou2; @CDT; @DMPW; @GGS; @GM; @Gal; @GZ; @SW; @To1].
Connections of kinetic modeling of social and economical phenomena with classical kinetic theory of rarefied gases are well-established. A recent discussion can be found in [@GT-ec]. There, both the appearance of the classical Fokker–Planck equation and of its equilibrium, given by a normal density, are justified by analyzing the Kac caricature of Maxwell molecules [@Kac59].
Among others, previous research in the field of statistical mechanics, which revealed unexpected similarities with the problem of service times duration in call-centers [@GT17], was the statistical description of agents acting in a simple financial market, by taking into account specific behavioral aspects. A kinetic approach to this leading problem in economics has been proposed by Pareschi and Maldarella in [@MD]. There the authors, to investigate the price formation of a good in a multi-agent market, introduced a kinetic model for a multi-agent system consisting of two different trader populations, following different trading rules, and possibly changing their point of view. The kinetic description was inspired by the microscopic Lux–Marchesi model [@LMa; @LMb] (cf. also [@LLS; @LLSb]).
The connection with the present problem is mainly related to the trading rules, which were assumed to depend on the opinion of traders through a kinetic model of opinion formation recently introduced in [@To1]. Also, psychological and behavioral components of the agents, like the way they interact with each other and perceive risks, which may produce non-rational behaviors, were taken into account. This has been done by resorting, in agreement with the prospect theory by Kahneman and Tversky [@KT; @KT1], to interactions containing a suitable *value function*.
The analysis of [@MD] enlightens the importance of the human behavior pioneered in [@Zipf] (cf. [@BHT; @BCKS]), which is there reproduced by the nonlinear value function, in the microscopic mechanism leading to the underlying kinetic equation, able to reproduce the macroscopic evolution of the market. Also, *mutatis mutandis*, it suggests how the presence of a suitable value function can justify at a microscopic level the mechanism of formation of the service time distribution [@GT17].
The leading idea in [@GT17] can be expressed by the following general principle. For a certain specific hallmark of the population of agents, measured in terms of a positive value $w\in {\mathbb R}_+$, agents have the objective to reach a target value, corresponding to a certain fixed value $\bar w$. This value can be reached by repeated upgrades, which correspond to microscopic interactions. However, the upgrade of the actual value towards the target value is perceived differently depending on the actual state of the agents. If the value $w$ is less than the target value $\bar w$, getting closer to it is much more satisfying than in the opposite situation.
To clarify the meaning of the previous assertion, consider the case in which the target value $\bar w$ to be reached is the departure time of a train, which often has some delay. Then, given a time interval of five minutes, the mood of a traveller heading to the train station will be completely different once he realizes that he will reach the station five minutes before, rather than five minutes after, the departure time. Likewise, referring to the problem of the characterization of call center service times treated in [@GT17], an agent is more relaxed if he completes a service staying below the expected mean time fixed by the firm to close the operation than in the opposite situation.
In the forthcoming Section \[model\], we shall introduce a linear kinetic model for a multi-agent system, in which agents can be characterized in terms of a certain hallmark that can be measured by a nonnegative quantity (the weight in grams, the age of first marriage in years, and so on), and are subject to microscopic interactions which describe the microscopic rate of change of the value of the hallmark itself, according to the previous general principle. As we shall see, the relevant mechanism of the microscopic interaction is given by resorting to a suitable value function, in the spirit of the analysis of Kahneman and Tversky [@KT; @KT1], which best reproduces human behavior.
Then in Section \[quasi\] we will show that in a suitable asymptotic procedure (hereafter called *quasi-invariant* limit) the solution to the kinetic model tends towards the solution of the Fokker-Planck type equation [(\[FPori\])]{}.
A similar asymptotic analysis was performed in [@CPP; @DMTb] for a kinetic model for the distribution of wealth in a simple market economy subject to microscopic binary trades in the presence of risk, showing formation of steady states with Pareto tails, in [@TBD] on kinetic equations for price formation, and in [@To1] in the context of opinion formation in the presence of self-thinking. A general view of this asymptotic passage from kinetic equations based on general interactions towards Fokker–Planck type equations can be found in [@FPTT]. Other relationships of this asymptotic procedure with the classical problem of the *grazing collision limit* of the Boltzmann equation in the kinetic theory of rarefied gases have recently been clarified in [@GT17].
Once the Fokker–Planck equation [(\[FPori\])]{} has been derived, the main examples quoted in this introduction will be collected in Section \[examples\], together with a detailed explanation of the relevant mechanism which leads to the typical microscopic interaction in terms of the value function.
In Section \[trend\] we will further discuss various mathematical results concerned with the large-time behavior of the solution to this Fokker–Planck equation. In particular, we will show that convergence to equilibrium is exponentially fast in time, thus justifying *a posteriori* the consistency of the model.
Last, resorting to some examples taken from real data sets, we will verify in Section \[numer\] that the kinetic approach provides an accurate description of these skewed human phenomena.
The kinetic model {#model}
=================
Let us consider a population of agents, which can be considered homogeneous with respect to some hallmark that can be measured in terms of some standard positive measure. To fix ideas, suppose that the hallmark to be studied is the weight of a population, and that the unit of measure is the gram. In this case, to have a homogeneous population, we have to restrict it, for example, with respect to age, sex, social class and so on. On the basis of statistical mechanics, to construct a model able to study the evolution of some selected hallmark of the multi-agent system, the fundamental assumption is that agents are indistinguishable [@PT13]. This means that an agent’s state at any instant of time $t\ge 0$ is completely characterized by the measure $w \ge0$ of his hallmark. The unknown is the density (or distribution function) $f = f(w, t)$, where $w\in {\mathbb R}_+$ and $t\ge 0$ denotes time. Its time evolution is described, as shown later on, by a kinetic equation of Boltzmann type. The precise meaning of the density $f$ is the following. Given the system of agents to study, and given an interval or a more complex sub-domain $D \subseteq {\mathbb R}_+$, the integral $$\int_D f(w, t)\, dw$$ represents the number of individuals who are characterized by a measure $w \in D$ of the hallmark at time $t > 0$. It is assumed that the density function is normalized to one, that is $$\int_{{\mathbb R}_+} f(w, t)\, dw = 1.$$ The change in time of the density is due to the fact that agents continuously upgrade the measure $w$ of their hallmark in time by some action. To maintain the connection with the classical kinetic theory of rarefied gases, we will always refer to a single upgrade of the measure as an *interaction*.
In the problem we want to study, the result depends on some human behavior, which can be translated by saying that agents tend to increase the value $w$ by interactions. Referring to the example of the weight, this behavior is clearly satisfied, since it is more pleasant to eat (and to gain weight) than to fast (and to lose it). Because of this human tendency, and to avoid problems related to an abnormal growth, the agents are given an ideal measure $\bar w$ of the hallmark relative to the homogeneous class, as well as a second value $\bar w_L$, with $\bar w_L > \bar w$, a threshold value that it would be better not to exceed. Consequently, the human tendency to increase the value $w$ by interactions has to be coupled with the existence of this limit value $\bar w_L$, which it would be better not to overcome. This is the classical situation excellently described by Kahneman and Tversky in their pioneering paper [@KT], devoted to decisions under risk. Inspired by this idea, we will describe an agent’s interaction as $$\label{coll}
w_* = w - \Phi\left(\frac w{\bar w_L}\right) w + \eta\, w.$$ The function $\Phi$ plays the role of the *value function* in the prospect theory of Kahneman and Tversky [@KT]. In [@KT] a classical value function is positive and concave above the reference value $1$ ($w > \bar w_L$), while negative and convex below it ($w < \bar w_L$). At difference with the choice of Kahneman and Tversky we will assume as value function the increasing concave function $$\label{vf}
\Phi(s) = \mu\, \frac{s^\delta - 1}{s^\delta + 1}, \qquad s \ge 0.$$ In [(\[vf\])]{} $0<\mu < 1$ and $0< \delta < 1$ are suitable constants characterizing the agents' behavior. In particular, the value $\mu$ denotes the maximal amount of variation that agents are able to obtain in a single interaction. Note indeed that the value function $\Phi(s)$ is such that $$\label{bounds}
|\Phi(s)| \le \mu.$$
The function in [(\[vf\])]{} maintains most of the physical properties of the value function of prospect theory required in [@KT], and is particularly well adapted to the present situation. The presence of the minus sign in front of the value function $\Phi$ is due to the obvious fact that an agent will follow his tendency to increase the value $w$ when $w < \bar w_L$, while he will be induced to decrease it if $w >\bar w_L$. Note moreover that the function $\Phi(s)$ is such that, given $0 < s < 1$, $$- \Phi\left(1-s \right) > \Phi\left(1+s \right).$$ Therefore, given two agents starting at the same distance from the limit value $\bar w_L$, one from below and one from above, the agent starting below will move closer to the limit value than the agent starting above.
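These properties of the value function can be verified numerically. The following minimal Python sketch is illustrative: the parameter values $\mu = 0.1$, $\delta = 0.5$ are arbitrary choices, not taken from the text. It checks the bound [(\[bounds\])]{}, the monotonicity of $\Phi$, and the asymmetry $-\Phi(1-s) > \Phi(1+s)$.

```python
import numpy as np

# Minimal sketch: properties of the value function (vf). The parameter values
# mu = 0.1, delta = 0.5 are illustrative, not taken from the text.
mu, delta = 0.1, 0.5

def phi(s):
    sd = s ** delta
    return mu * (sd - 1.0) / (sd + 1.0)

s = np.linspace(0.0, 10.0, 2001)
assert np.all(np.abs(phi(s)) <= mu)    # the bound (bounds): |Phi(s)| <= mu
assert np.all(np.diff(phi(s)) > 0)     # Phi is increasing

# asymmetry: -Phi(1-s) > Phi(1+s) for 0 < s < 1, i.e. starting below the
# limit value moves an agent more than starting above at the same distance
t = np.linspace(0.01, 0.99, 99)
assert np.all(-phi(1.0 - t) > phi(1.0 + t))
```

The asymmetry follows analytically from $(1-t)^\delta(1+t)^\delta = (1-t^2)^\delta < 1$ for $0<t<1$.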
Last, to take into account a certain amount of human unpredictability in the outcome of an interaction, it is reasonable to assume that any result can have random variations (expressed by $\eta w$ in [(\[coll\])]{}) which are negligible in the mean, and in any case are not significant enough to produce a sensible variation of the value $w$. Also, to be consistent with the necessary positivity of the value $w_*$, it is assumed that the random variable $\eta$ takes values in the interval $(-1 +\mu, +\infty)$, while $\langle \eta\rangle = 0$. Here and in the following, $\langle \cdot \rangle$ denotes mathematical expectation. It will further be assumed that the variance $\langle \eta^2\rangle = \lambda$, where clearly $\lambda >0$.
Clearly, the choice of the value function [(\[vf\])]{} is only one of the possible choices. For example, to differentiate the maximal percentage of increase of the value from the maximal percentage of decrease, and to outline the difficulty of acting against the natural tendency, one can consider the value function $$\label{diff}
\Phi_\nu(s) = \mu\, \frac{s^\delta - 1}{\nu s^\delta + 1}, \qquad s \ge 0,$$ where the constant $ \nu >1$. In this case, [(\[bounds\])]{} modifies to $$\label{bound2}
-\mu \le \Phi_\nu(s) < \frac\mu\nu.$$ In this case, the possibility of going against the natural tendency is slowed down. As we shall see in Section \[quasi\], this choice will modify the parameters of the steady state distribution.
Given the *interaction* [(\[coll\])]{}, the study of the time-evolution of the distribution of the values of the hallmark under study can be obtained by resorting to kinetic collision-like models [@Cer; @PT13]. The variation of the density $f(w,t)$ obeys a linear Boltzmann-like equation. This equation is usually and fruitfully written in weak form. This corresponds to saying that the solution $f(w,t)$ satisfies, for all smooth functions $\varphi(w)$ (the observable quantities) $$\label{kin-w}
\frac{d}{dt}\int_{{\mathbb R}_+}\varphi(w)\,f(w,t)\,dw = \frac 1\tau
\Big \langle \int_{{\mathbb R}_+} \bigl( \varphi(w_*)-\varphi(w) \bigr) f(w,t)
\,dw \Big \rangle.$$ Here expectation $\langle \cdot \rangle$ takes into account the presence of the random parameter $\eta$ in [(\[coll\])]{}. The positive constant $\tau$ measures the interaction frequency.
The right-hand side of equation [(\[kin-w\])]{} represents the difference in density between agents that modify their value from $w$ to $w_* $ (loss term with negative sign) and agents that change their value from $w_*$ to $w$ (gain term with positive sign).
Because of the nonlinearity (in the hallmark variable $w$) of the interaction [(\[coll\])]{}, it is immediate to verify that the only conserved quantity of equation [(\[kin-w\])]{} is obtained by setting $\varphi = 1$. This conservation law implies that the solution to [(\[kin-w\])]{} remains a probability density for all subsequent times $t >0$. The evolution of other moments is difficult to follow. As a main example, let us take $\varphi(w) = w$, which gives the evolution of the mean value $$m(t) = \int_{{\mathbb R}_+}w\, f(w, t)\, dw.$$ Since $$\langle w_* - w \rangle = -\mu\,\frac{w^\delta -\bar w_L^\delta}{w^\delta +\bar w_L^\delta}\, w,$$ we obtain $$\label{evo-m}
\frac{d\, m(t)}{dt} = -\frac\mu\tau \int_{{\mathbb R}_+} \frac{w^\delta -\bar w_L^\delta}{w^\delta +\bar w_L^\delta}\, w\, f(w,t)\, dw.$$ Note that equation [(\[evo-m\])]{} is not explicitly solvable. However, in view of condition [(\[bounds\])]{} the mean value of the solution to equation [(\[kin-w\])]{} remains bounded at any time $t >0$, provided it is bounded initially, with the explicit upper bound $$m(t) \le m_0\exp \left\{\frac\mu\tau \, t \right\}.$$ An analogous result holds for the evolution of the second moment, which corresponds to the choice $\varphi(w) = w^2$. In this case, since $$\langle w_*^2 -w^2\rangle = \big[ \Phi\left( w/\bar w_L \right)^2 -2 \Phi\left( w/\bar w_L\right) + \lambda \big] w^2 \le [\mu^2 + 2\mu + \lambda]w^2,$$ the boundedness of the initial second moment implies the boundedness of the second moment of the solution at any subsequent time $t>0$, with the explicit upper bound $$m_2(t) \le m_{2,0}\exp \left\{\frac{\mu^2 + 2\mu +\lambda}\tau \, t \right\}.$$
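The boundedness of the mean can also be observed in a direct Monte Carlo simulation of the interaction [(\[coll\])]{}. The sketch below is illustrative only: the parameter values, the uniform law chosen for $\eta$ (supported inside $(-1+\mu,+\infty)$, with zero mean and variance $\lambda$) and the unit interaction frequency $\tau = 1$ are all arbitrary choices, not prescribed by the text.

```python
import numpy as np

# Illustrative Monte Carlo sketch of the interaction (coll): w* = w - Phi(w/w_L) w + eta w.
# All parameter values, the uniform law for eta and tau = 1 are arbitrary choices.
rng = np.random.default_rng(0)
mu, delta, lam, w_L = 0.05, 0.5, 0.01, 80.0

def phi(s):
    sd = s ** delta
    return mu * (sd - 1.0) / (sd + 1.0)

w = rng.uniform(40.0, 120.0, size=20000)   # initial hallmark values of the population
m0 = w.mean()
n_steps = 200
a = np.sqrt(3.0 * lam)                     # uniform on [-a, a]: zero mean, variance lam
for _ in range(n_steps):
    eta = rng.uniform(-a, a, size=w.size)  # note: -a > -1 + mu, so positivity is preserved
    w = w - phi(w / w_L) * w + eta * w
assert np.all(w > 0)
# the sample mean respects the a priori bound m(t) <= m0 exp(mu t / tau), with tau = 1
assert w.mean() <= m0 * np.exp(mu * n_steps)
```

Each update multiplies $w$ by $1-\Phi(w/\bar w_L)+\eta \ge 1-\mu-a > 0$, so positivity is preserved deterministically.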
Quasi-invariant limit and the Fokker-Planck equation {#quasi}
====================================================
The linear kinetic equation [(\[kin-w\])]{} describes the evolution of the density consequent to interactions of type [(\[coll\])]{}, and it is valid for any choice of the parameters $\delta, \mu$ and $\lambda$. In real situations, however, it happens that a single interaction (any meal in the case of weight) determines only an extremely small change of the value $w$. This situation is well-known in the kinetic theory of rarefied gases, where interactions of this type are called *grazing collisions* [@PT13; @Vi]. The presence of this smallness can be easily achieved by setting in [(\[coll\])]{}, for some value $\e$, with $\e \ll 1$, $$\label{scal}
\delta \to \e\,\delta, \qquad \lambda \to \e\,\lambda.$$ This scaling allows to retain the effect of all parameters in the forthcoming limit procedure. An exhaustive discussion on these scaling assumptions can be found in [@FPTT]. Using [(\[scal\])]{}, for any time $t >0$ we can write [(\[evo-m\])]{} as $$\frac{d }{dt}m(t) = -\e\delta\,\bar w_L \, \frac\mu\tau\int_{{\mathbb R}_+} \frac 1{\e\delta}\left[ \left(\frac w{\bar w_L}\right)^{\e\delta} -1\right] \, \frac w{\bar w_L}\, \frac 1{\left(w/\bar w_L\right)^{\e\delta} + 1}\, f(w,t)\, dw.$$ Moreover, for $s \ge 1$, independently of the value of the small parameter $\e$, $$\frac 1{\e\delta}\left[ s^{\e\delta} -1\right] \le s,$$ while for $s \le 1$ $$\frac 1{\e\delta}\left[ s^{\e\delta} -1\right] s \ge -1.$$ Hence, the scaling [(\[scal\])]{} is such that, for any given fixed time $t >0$, the consequent variation of the mean value $m(t)$ is small when $\e$ is small. In this situation it is clear that, if we want to observe an evolution of the average value independent of $\e$, we need to resort to a scaling of the frequency $\tau$.
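The elementary limits and bounds used in this argument are easy to check numerically. In the sketch below (the value of $\delta$ is illustrative), the scaled quantity $q(s,\e) = (s^{\e\delta}-1)/(\e\delta)$ converges to $\log s$ as $\e \to 0$ and satisfies the two bounds for $s\ge 1$ and $s \le 1$.

```python
import numpy as np

# Illustrative check of the scaled quantity q(s, eps) = (s^(eps*delta) - 1)/(eps*delta):
# it converges to log(s) as eps -> 0 and obeys the two bounds used in the text.
delta = 0.5                      # illustrative value
s = np.linspace(0.05, 5.0, 500)

def q(s, eps):
    a = eps * delta
    return (s ** a - 1.0) / a

for eps in (1e-2, 1e-4, 1e-6):
    assert np.max(np.abs(q(s, eps) - np.log(s))) < 10 * eps  # linear rate in eps

eps = 1e-3
s_hi = s[s >= 1.0]
assert np.all(q(s_hi, eps) <= s_hi)          # for s >= 1
s_lo = s[s <= 1.0]
assert np.all(q(s_lo, eps) * s_lo >= -1.0)   # for s <= 1
```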
If we set $\tau \to \e \tau $, and denote by $f_\e(w, t)$ the density corresponding to the scaled interaction and frequency, then the evolution of the average value of $f_\e(w, t)$ satisfies $$\frac{d}{dt}\int_{{\mathbb R}_+}w \,f_\e(w,t)\,dw = -\bar w_L\, \frac{\mu\delta}\tau\int_{{\mathbb R}_+} \frac 1{\e\delta}\left[ \left(\frac w{\bar w_L}\right)^{\e\delta} -1\right] \, \frac w{\bar w_L}\, \frac 1{\left(w/\bar w_L\right)^{\e\delta} + 1}\, f_\e(w,t)\, dw,$$ namely a computable evolution law for the average value of $f_\e$, which remains bounded even in the limit $\e \to 0$, since pointwise $$\label{AA}
A_\e(w) = \frac 1{\e\delta}\, \frac{\left(w/\bar w_L\right)^{\e\delta} -1}{\left(w/\bar w_L\right)^{\e\delta} + 1} \longrightarrow \frac 12\, \log \frac w{\bar w_L}.$$ The reason behind the scaling of the frequency of interactions is clear. Since the scaled interactions produce a very small change in the value $w$, a finite observable variation of the density can only be observed because each agent has a huge number of interactions in a fixed period of time. By using the same scaling [(\[scal\])]{} one can easily obtain the evolution equation for the second moment of $f_\e(w,t)$, which will be well-defined also in the limit $\e \to 0$ (cf. the analysis in [@FPTT]). The previous discussion about moments enlightens the main motivations and the mathematical ingredients that justify the passage from the kinetic model [(\[kin-w\])]{} to its continuous counterpart given by a Fokker–Planck type equation. Given a smooth function $\varphi(w)$, and an interaction [(\[coll\])]{} that produces a small variation of the difference $w_*-w$, we can expand $\varphi(w_*)$ in Taylor series around $\varphi(w)$. Using the scaling [(\[scal\])]{} one obtains $$\langle w_* -w \rangle = - \e\delta \, \mu \,A_\e(w)\,w; \quad \langle (w_* -w)^2\rangle = \left(\e^2 \, \mu^2\, \delta^2 \,A_\e^2(w) + \e \lambda\right) w^2.$$ Therefore, equating powers of $\e$, we obtain the expression $$\langle \varphi(w_*) -\varphi(w) \rangle = \e \left( - \varphi'(w)\frac {\mu\delta} 2\,w \log \frac w{\bar w_L}
+ \frac \lambda 2 \, \varphi''(w) w^2 \right) + R_\e (w),$$ where the remainder term $R_\e$, for a suitable $0\le \theta \le 1$, is given by
$$\label{rem}
\begin{aligned}
R_\e(w) = &\, \frac 12 (\e\mu\delta)^2\, \varphi''(w)\, A_\e^2(w)\, w^2 - \e\mu\delta\, \varphi'(w) \left( A_\e(w) - \frac 12 \log \frac w{\bar w_L}\right) w \\
& + \frac 16\, \left\langle \varphi'''(w+\theta(w_* -w))\, (w_* -w)^3 \right\rangle,
\end{aligned}$$
and it is such that $$\frac 1\e \, R_\e(w) \to 0$$ as $\e \to 0$. Therefore, if we set $\tau \to \e \tau$, we obtain that the evolution of the (smooth) observable quantity $\varphi(w)$ is given by $$\begin{aligned}
& \frac{d}{dt}\int_{{\mathbb R}_+}\varphi(w) \,f_\e(w,t)\,dw = \\
& \int_{{\mathbb R}_+} \left( - \varphi'(w)\, \frac {\mu\delta} 2 \, w\, \log \frac w{\bar w_L} + \frac \lambda 2 \varphi''(w) w^2 \right) f_\e(w,t)\, dw \ + \frac 1\e \mathcal R_\e(t) ,
\end{aligned}$$ where $$\label{rem3}
\mathcal R_\e(t) = \int_{{\mathbb R}_+ } R_\e(w) f_\e(w,t)\, dw,$$ and $R_\e$ is given by [(\[rem\])]{}. Letting $\e \to 0$ shows that, in consequence of the scaling [(\[scal\])]{}, the weak form of the kinetic model [(\[kin-w\])]{} is well approximated by the weak form of a linear Fokker–Planck equation (with variable coefficients)
$$\label{m-13}
\begin{aligned}
& \frac{d}{dt}\int_{{\mathbb R}_+}\varphi(w)\, g(w,t)\,dw = \\
& \qquad \int_{{\mathbb R}_+} \left( - \varphi'(w)\, \frac \gamma 2\, w\, \log \frac w{\bar w_L} + \frac \lambda 2\, \varphi''(w)\, w^2 \right) g(w,t)\, dw,
\end{aligned}$$
in which we defined $\gamma = \delta\mu$. Provided the boundary terms produced by the integration by parts vanish, equation [(\[m-13\])]{} coincides with the weak form of the Fokker–Planck equation $$\label{FP2}
\frac{\partial g(w,t)}{\partial t} = \frac \lambda 2 \frac{\partial^2 }{\partial w^2}
\left(w^2 g(w,t)\right )+ \frac \gamma 2
\frac{\partial}{\partial w}\left( w\, \log \frac w{\bar w_L} g(w,t)\right).$$ Equation [(\[FP2\])]{} describes the evolution of the distribution density $g(w,t)$ of the hallmark $w \in {\mathbb R}_+$, in the limit of *grazing* interactions. As often happens with Fokker-Planck type equations, the steady state density can be explicitly evaluated, and it turns out to be a lognormal density, with parameters linked to the details of the microscopic interaction [(\[coll\])]{}.
The steady state is a lognormal density
=======================================
The stationary distribution of the Fokker–Planck equation [(\[FP2\])]{} is easily found by solving the differential equation $$\label{sd}
\frac \lambda 2\, \frac{d}{dw}\left(w^2 g(w)\right) + \frac \gamma 2\, w\, \log \frac w{\bar w_L}\, g(w) = 0.$$ Solving [(\[sd\])]{} with respect to $h(w)= w^2 g(w)$ by separation of variables gives as unique solution to [(\[sd\])]{} of unit mass the density $$\label{equilibrio}
g_\infty(w) = \frac 1{\sqrt{2\pi\sigma}\, w}\, \exp\left\{ - \frac{(\log w - \kappa)^2}{2\sigma} \right\},$$ where $$\label{para}
\sigma = \frac\lambda\gamma, \qquad \kappa = \log \bar w_L - \sigma.$$ Hence, the equilibrium distribution [(\[equilibrio\])]{} takes the form of a lognormal density with mean and variance given respectively by $$\label{moments}
m(g_\infty) = \bar w_L\, e^{-\sigma/2}, \qquad \mathrm{Var}(g_\infty) = \bar w_L^2 \left( 1 - e^{-\sigma}\right).$$ Note that the moments are expressed in terms of the parameters $\bar w_L$, $\lambda$ and $\gamma = \delta\mu$, denoting respectively the limit value, the variance of the random effects and the strength of the value function $\Phi$. Note moreover that, since $\delta<1$ in [(\[coll\])]{}, the constant $\gamma <\mu$, namely less than the maximal relative amount of variation allowed in a single interaction.
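A direct numerical check of these formulas is immediate. The Python sketch below (all parameter values are illustrative) verifies that the lognormal density [(\[equilibrio\])]{} with $\sigma = \lambda/\gamma$ and $\kappa = \log\bar w_L - \sigma$ annihilates the stationary flux in [(\[sd\])]{} up to discretization error, and that its mean agrees with [(\[moments\])]{}.

```python
import numpy as np

# Illustrative check of the stationary state: parameters lam, gamma, w_L are
# arbitrary choices, not taken from the text.
lam, gamma, w_L = 0.2, 0.5, 80.0
sigma = lam / gamma                 # sigma = lambda / gamma
kappa = np.log(w_L) - sigma         # kappa = log(w_L) - sigma

def g_inf(w):
    # lognormal density (equilibrio)
    return np.exp(-(np.log(w) - kappa) ** 2 / (2 * sigma)) / (np.sqrt(2 * np.pi * sigma) * w)

w = np.linspace(1.0, 2000.0, 200001)
dw = w[1] - w[0]
h = w ** 2 * g_inf(w)
# stationary flux of (sd): (lam/2) d/dw (w^2 g) + (gamma/2) w log(w/w_L) g
flux = 0.5 * lam * np.gradient(h, dw) + 0.5 * gamma * w * np.log(w / w_L) * g_inf(w)
assert np.max(np.abs(flux)) < 1e-5

def trapezoid(y, dx):
    return float(np.sum(y[1:] + y[:-1]) * 0.5 * dx)

mass = trapezoid(g_inf(w), dw)
mean = trapezoid(w * g_inf(w), dw)
assert abs(mass - 1.0) < 1e-6                       # unit mass
assert abs(mean - w_L * np.exp(-sigma / 2)) < 1e-3  # mean of (moments)
```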
In particular, both the mean value and the variance of the steady state density depend only on the ratio $\sigma = \lambda/\gamma$ between the variance $\lambda$ of the allowed random variation and the portion $\gamma = \delta\mu$ of the maximal percentage $\mu$ of variation of the value allowed in a single interaction.
If the value $\sigma$ satisfies $$\label{www}
\sigma \ge 2\, \log \frac{\bar w_L}{\bar w},$$ the mean value is lower than the fixed ideal value $\bar w$, which represents a very favorable situation for the population of agents.
If the value function [(\[diff\])]{} is considered, then [(\[AA\])]{} is substituted by $$\label{AB}
A_\e(w) = \frac 1{\e\delta}\, \frac{\left(w/\bar w_L\right)^{\e\delta} -1}{\nu\left(w/\bar w_L\right)^{\e\delta} + 1} \longrightarrow \frac 1{1+\nu}\, \log \frac w{\bar w_L},$$ where the constant $ \nu >1$. In this case, the drift term in the Fokker–Planck equation [(\[FP2\])]{} modifies to $$\label{drift2}
D(g)(w) = \frac \gamma{1+\nu}\, \frac{\partial}{\partial w}\left( w\, \log \frac w{\bar w_L}\, g(w,t)\right).$$ In this case, by setting $$\tilde \gamma = \gamma \, \frac 2 {1+\nu} < \gamma,$$ the steady state [(\[equilibrio\])]{} remains a lognormal density, with $\sigma$ substituted by $\tilde\sigma = \sigma(1+\nu)/2 > \sigma$.
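The limit behind the modified drift can be verified numerically as well; in the sketch below the values of $\delta$ and $\nu$ are illustrative choices.

```python
import numpy as np

# Illustrative check of the limit (AB): the scaled drift factor of the modified
# value function tends to log(s)/(1 + nu). Values of delta and nu are arbitrary.
delta, nu = 0.5, 3.0

def A(s, eps):
    a = eps * delta
    return (s ** a - 1.0) / (a * (nu * s ** a + 1.0))

s = np.linspace(0.1, 10.0, 500)
for eps in (1e-3, 1e-5):
    assert np.max(np.abs(A(s, eps) - np.log(s) / (1.0 + nu))) < 10 * eps
```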
Examples {#examples}
========
Body weight distribution {#weight}
------------------------
The current emphasis on probabilistic approaches to risk assessment makes information on the complete body weight distribution, not just the average weight, important [@PTR]. In addition, these distributions are needed not just for the overall population, but for different groupings, including age and sex subgroups.
Various studies based on data periodically collected on the health and nutrition status of U.S. residents by NHANES (the National Health and Nutritional Examination Survey) show that body weights tend to follow a lognormal distribution [@BB; @BC]. To confirm this observation with different data collected by NHANES at various times, in [@PTR] both graphical and formal goodness-of-fit tests were performed within each sex for each age group. These tests indicated that the lognormal assumption provides a reasonable fit (results not shown due to the sheer volume of analysis results). For the most part, the lognormal distribution is adequate for each age and sex category, while the fit is poorer for data sets where a wide range of adjacent age categories are combined for the assessment. This reduced fit has been explained by age-based changes in log-body-weight mean and standard deviation.
In the case of body weight, the appearance of the lognormal distribution can be fully justified on the basis of the previous kinetic modeling. Let us fix a population of agents, homogeneous with respect to age, sex and other possible classes. Agents have a precise indication about the ideal weight $\bar w$ of their homogeneous class, through media advertisements, the web and so on. Also, agents know perfectly well that remaining below this ideal weight $\bar w$ has a lot of advantages in terms of health. However, this bound is in conflict with the pleasure of eating. In the end, it can be reasonably assumed that agents will really be afraid of the consequences of their weight only when it overcomes a certain limit value $\bar w_L$, with $\bar w_L > \bar w$. Consequently, an agent with a weight $w > \bar w_L$ will in most cases assume a reduced number of calories with the aim of reducing his weight. On the other hand, in the case in which the agent is lucky enough to be below the limit, expressed by a value $w < \bar w_L$, he will fully enjoy his meal, which could naturally lead to a progressive increase of his weight. It is clear that the two situations are completely different, since in the former an agent will be worried about his weight, while in the latter the agent will be fully relaxed. Therefore, given two agents starting at the same distance from the limit weight $\bar w_L$, one from below and one from above, it is easier (and more pleasant) for the agent starting below to move closer to the optimal weight than for the agent starting above. Indeed, the perception an agent will have of his weight will be completely different depending on the sign of the value function.
Last, it is clear that there is a certain amount of unpredictability in weight variations. This random variability can be easily recognized to be consequence of diseases or dysfunctions, as well as consequence of a diffuse ignorance of the caloric content or of the glycemic index of foods.
Hence, an interaction of type [(\[coll\])]{} fully agrees with the case of body weight. Also, the grazing limit considered in Section \[quasi\] is highly justified. Indeed, it is clear that single interactions, which are here represented generically by meals, produce only an extremely small variation of body weight, while sensible variations of weight can be observed only after a certain period of time (weeks or more). The evident consequence of this observation is that the Fokker–Planck equation [(\[FPori\])]{}, once the relevant parameters are properly identified, provides a good description of the phenomenon of lognormal distribution of body weight in a homogeneous population of agents.
The age of first marriage
-------------------------
It is known that in western countries the number of women marrying for the first time tends to be distributed quite closely lognormally by age. As documented in [@Pre], the lognormal fit was fairly good for women in the United Kingdom in 1973 (as it is for many other years and western countries). In many situations, moreover, there is a concentration of marriages around a certain age, which is commonly hypothesized to be due to social pressure. In many cases, like the data of the United Kingdom that Preston analyzed [@Pre], the data variations are difficult to interpret, since women’s age at marriage was not necessarily the age at first marriage.
The mathematical modeling of Sections \[model\] and \[quasi\] allows to justify the lognormal variations of the age of first marriage of women in western countries. In this case, the hallmark measured by $w$ will indicate the age of first marriage, and the characteristic microscopic *interaction* is the change in $w$ that results from a statistical control made at regular fixed intervals of time. The starting assumption is that women are under social pressure to be married around a certain age $\bar w$, in any case preferably remaining below a certain age $\bar w_L$, with $\bar w_L > \bar w$. It is reasonable to assume that most women will take the value $\bar w_L$ (instead of $\bar w$) as the significant age of marriage to respect. Indeed, one can assume that a woman will really feel the necessity to be married only when her age moves above the value $\bar w_L$. The reason lies in the existence of a natural age bound to respect, which is related to the desire for motherhood. Consequently, the number of women that are not married at an age $w > \bar w_L$ will tend to decrease. Likewise, in the case in which a woman is not married at an age $w < \bar w_L$, she will in general enjoy her freedom, and will consider it preferable to postpone the age of first marriage. Note that these motivations are in agreement with the choice of the value function [(\[vf\])]{}.
Also in this situation, given human nature, one is forced to introduce a certain amount of unpredictability in the variation of the age of first marriage, which can be anticipated in the case in which a woman meets a new boyfriend, or postponed when she loses the old one. Consequently an interaction of type [(\[coll\])]{} is in agreement with the variation in time of the age of first marriage. Once again, let us discuss the grazing limit assumption leading to the analysis of Section \[quasi\]. This assumption follows by noting that the number of women who get married in a unit of time (for example a day) is extremely small with respect to the whole population. Therefore, deterministic variations of the age of first marriage tend to be negligible between subsequent observations, while the period of time needed to observe a sensible variation has to be very long. According to the analysis of Section \[weight\], the evident consequence of this behavior is that the Fokker–Planck equation [(\[FPori\])]{}, once the relevant parameters are properly identified, provides a good description of the phenomenon of the lognormal distribution of the age of first marriage.
Modeling drivers in traffic
---------------------------
The typical driving behavior for a vehicle on a busy road often follows well-established rules. The driver of the following vehicle will repeatedly adjust the velocity to track the leading vehicle and meanwhile keep a safe distance between the two vehicles. The following drivers may brake to increase the time headway or accelerate to decrease the time headway. However, this is not as easy as free driving, since the vehicles are moving as a queue and the spacing between each other can be small. Because the leading vehicle’s movement is often unpredictable (at least not fully predictable), the accelerating and braking actions of the driver are often overdue. In particular, the behavior of drivers tends to differ between accelerations and decelerations. On average, the absolute magnitudes of actual accelerations are typically smaller than those of actual decelerations, because accelerations are constrained by the separation distance to the leading vehicle.
The previous discussion clarifies the possible reasons behind the appearance of lognormal distributions in situations of crowded traffic. One of these situations has been detailed in [@JJ], by a precise fitting of the detailed distribution of the departure headway obtained by analyzing the video traffic data collected from various road intersections in Beijing during the years $2006$ and $2007$. The data from the various intersections were shown to be consistent with lognormal distributions, though with different mean and variance values. This suggested intuitively that such distributions should be interpreted as the outcome of the interactions between the vehicles in the discharging queue. To verify this conjecture, the authors introduced a new car-following model, designed to simulate these interactions. In this model [@JJ], drivers update their position according to a two-step rule which takes into account acceleration and deceleration rates. Results showed consistency between the observed empirical distribution and the simulated departure headway given by the model.
Also in this context, the appearance of the lognormal distribution can be justified on the basis of the kinetic modeling assumptions of Section \[model\]. Let us fix a population of drivers, who behave according to the normal rules of safety. Given a certain mean speed of cars on the traffic line, drivers have a precise indication about the ideal distance $\bar w$ to maintain from the vehicle in front, to avoid possible car accidents. However, this ideal bound is in conflict with the usual rush to arrive as soon as possible. In the end, it can be reasonably assumed that drivers will really be afraid of the consequences only when the distance from the vehicle in front reaches a certain limit value $\bar w_L$, with $\bar w_L < \bar w$. Consequently, when a driver recognizes that he is at a distance $w < \bar w_L$, he will soon decelerate in order to increase his distance from the vehicle in front. On the other hand, in the case in which the distance from the vehicle in front has a value $w > \bar w_L$, he will increase his speed, which could naturally lead to a shorter distance from the vehicle in front. It is clear that the two situations are completely different, since, as discussed in [@JJ], in the former a driver will be worried about his safety and his deceleration will be more pronounced, while in the latter the driver will be relaxed and his acceleration will be less pronounced. Therefore, given two drivers starting at the same distance from the limit $\bar w_L$, one from below and one from above, the perception they will have about safety will be completely different depending on the sign of the value function.
In this situation, random effects are fully justified because the leading vehicle’s movement is not fully predictable. Consequently an interaction of type [(\[coll\])]{} is in agreement with the variation of the distance.
Last, the grazing limit assumption leading to the analysis of Section \[quasi\] follows by considering that, in a crowded lane, most of the drivers continuously update their distance from the vehicle in front. According to the analysis of Section \[quasi\], the evident consequence of this behavior is that the Fokker–Planck equation [(\[FPori\])]{}, once the relevant parameters are properly identified, provides a good description of the distance distribution.
It is important to remark that, unlike the situations studied before, here the sign of the inequalities is reversed. In the case of body weight, an agent is relaxed when his weight is below the ideal one, while in this case a driver is relaxed when his distance from the car in front is above the ideal safety distance. To maintain the same direction, we can fix $v = 1/w$ as the hallmark to be studied. Then, the analysis of Sections \[model\] and \[quasi\] leads to the Fokker–Planck equation [(\[FP2\])]{}, with equilibrium distribution given by the lognormal density [(\[equilibrio\])]{}.
On the other hand, it is well-known that, if a random phenomenon $X$ is lognormal with parameters $\kappa$ and $\sigma$, as given in [(\[equilibrio\])]{}, then $1/X$ is lognormal with parameters $-\kappa$ and $\sigma$. This shows that the human behavior of drivers justifies the formation of a lognormal distribution of distances among vehicles.
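This reciprocal property is easy to confirm by sampling. In the sketch below the parameters are illustrative; recall that here $\sigma$ denotes the variance of $\log X$, so the standard deviation passed to the sampler is $\sqrt\sigma$.

```python
import numpy as np

# Illustrative sampling check: if X is lognormal with parameters (kappa, sigma),
# then 1/X is lognormal with parameters (-kappa, sigma). Here sigma is the
# variance of log X, so the sampler receives sqrt(sigma).
rng = np.random.default_rng(1)
kappa, sigma = 0.7, 0.25
x = rng.lognormal(mean=kappa, sigma=np.sqrt(sigma), size=200000)
y = 1.0 / x
# log(1/X) = -log X, hence sample mean ~ -kappa and sample variance ~ sigma
assert abs(np.log(y).mean() + kappa) < 0.01
assert abs(np.log(y).var() - sigma) < 0.01
```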
Consumption is more lognormal than income
-----------------------------------------
The classic explanation for the lognormality of income is Gibrat’s law [@Gib], which essentially models income as an accumulation of random multiplicative shocks. In reference [@BBL] a detailed critical analysis of data from the income distribution in countries including the United States and the United Kingdom revealed that the shape of income is close to, but not quite, lognormal, while the distribution of consumption is much closer to lognormal than income. These findings were supported in [@BBL] by an economic explanation of why the lognormal distribution is better adapted to consumption. The effective distribution of consumption is in any case very difficult to fit. Recent attempts claim that, while the distribution of consumption is commonly approximated by the lognormal distribution, consumption is better described by a double Pareto-lognormal distribution, a distribution which has a lognormal body with two Pareto tails and arises as the stationary distribution in dynamic general equilibrium models [@Toda].
On the basis of the analysis of the present paper, it can be easily argued that, together with economic explanations, the formation of a lognormal distribution in consumption could reasonably be a consequence of the human tendency to prefer to spend than to earn by work.
Let us consider a multi-agent system of consumers belonging to a homogeneous set, represented by a fixed value of income, denoted by $w_0$. In this case, the characteristic microscopic *interaction* consists in the variation of the allowed consumption expenditures. The basic assumption is that consumers have a precise idea about the percentage of income to be devoted to expenditures, denoted by $\bar w$. However, since in general spending money gives great satisfaction, the barrier $\bar w$ is often exceeded, and the worry about possible consequences of excessive consumption begins only above a certain limit, denoted by $\bar w_L$, with $\bar w_L > \bar w$ but, to avoid the unpleasant possibility of running into debt, $\bar w_L \le w_0$. Consequently, if a consumer finds himself having spent a quantity $w > \bar w_L$, he will be careful to reduce his forthcoming budget. Likewise, in the case in which the consumer realizes that he did not use the whole amount of money allotted for expenditure, so that $w < \bar w_L$, he will behave leisurely, having the possibility to spend more money on the next occasion.
In this situation, to take into account the possible risks linked to financial operations, it is necessary to introduce a certain amount of unpredictability into the variation of consumption. Consequently, an interaction of type [(\[coll\])]{} is in agreement with the variation of consumption.
The grazing limit assumption leading to the analysis of Section \[quasi\] follows by considering that most consumption expenses have a value which is extremely small with respect to the budget at the disposal of the consumer. Therefore, *grazing* interactions prevail. According to the analysis of Section \[weight\], the evident consequence of this behavior is that the Fokker–Planck equation [(\[FPori\])]{}, once the relevant parameters are properly identified, provides a good description of the phenomenon of lognormal distribution of consumption.
Even if these modeling assumptions are very elementary, and do not have a strong economic content, they nevertheless give a satisfactory answer to the lognormal fitting of real consumption data. Economic effects can obviously be taken into account, and it can be hypothesized that variations in the lognormal shape are a consequence of additional effects (cf. the discussion of Section \[city\]).
The size of cities {#city}
------------------
The debate about city size distributions has known renewed interest in recent years. While older studies, focussed only on large cities, argued that sizes follow a Pareto-tailed distribution, or even follow exactly the famous rank-size rule known as Zipf’s law [@Zipf], in the influential article [@Eech] Eeckhout showed that the Pareto law does not hold when taking into account all the populated places of a country. This conclusion raises at least the important question of characterizing at best the appropriate distribution for city sizes. In his model cities grow stochastically, and this growth process, in the pure form of Gibrat’s law, asymptotically generates a lognormal size distribution. Eeckhout then shows that the lognormal distribution delivers a good fit to actual city sizes in the US (cf. also [@GZS; @GRSC; @PR; @Ram]). As a matter of fact, even if the lognormal distribution does not follow a power law in the upper tail and is, hence, strictly speaking not compatible with Pareto and Zipf, the different distributions have similar properties in the upper tail and can become virtually indistinguishable.
As discussed in [@BRS], beside the specific intellectual curiosity to properly define the size distribution, there are theoretical reasons for investigating the matter, as competing models yield different implications. Indeed, while the seminal paper by Gabaix [@Ga99] predicts a Zipf’s law, Eeckhout [@Eech] proposes an equilibrium theory to explain the lognormal distribution of cities.
In the recent paper [@GT2], we used mathematical modeling analogous to the one presented in this paper to obtain a Fokker–Planck like equation for the size distribution of cities, by introducing interactions based on some migration rule among cities. In this picture of formation of city size, it was assumed that the rate of migration depends on the size of the city, and is inversely proportional to the size itself [@GT2]. Then, the resulting steady state of the Fokker–Planck equation is close to a Pareto law.
Among others, it seems indeed established that the main phenomenon leading to the formation of cities is the tendency of inhabitants to migrate, a tendency which relies on both cultural and socio-economic reasons, first and foremost the search for new and better living conditions. As discussed in [@MZ], this is a relatively recent behavior. In very primitive times a small community (even a family) was able to perform all necessary activities to survive, and there was no need to aggregate with other families beyond the size of a tribe. This is no longer true in modern times, where mutual cooperation and competition bring people to live together. Clearly this tendency to aggregate does not work in a single direction: while a large part of the population prefers to live in a big city, another part prefers to move towards smaller cities with the hope of reaching a better quality of life. Note that migration of population across cities can be justified on the basis of other motivations, including the possibility to adjust for resources [@BGS; @GCCC]. In any case, as it happens in all social phenomena, there is a certain degree of randomness in the process, which takes into account the possibility that part of the variation in size of cities could be generated by unforeseeable reasons.
In [@GT2] it was considered that each elementary variation of the size $v$ of a city is the result of three different contributions $$\label{cit}
v^* = v - \Phi(v)\,v + I_E(v)\,z + \eta\, v.$$ In [(\[cit\])]{} the variable $z \in {\mathbb R}_+$ indicates the amount of population which can migrate towards a city from the environment, while $\eta$ is a centered random variable accounting for unpredictable fluctuations. It is usual to assume that the value of $z$ is sampled from a certain known distribution function, which characterizes the environment itself.
The functions $\Phi(v)$ and $I_E(v)$ describe the rates of variation of the size $v$ consequent to internal (respectively external) mechanisms. Always maintaining migration as the main phenomenon justifying the distribution of city sizes, let us consider the case in which people have a precise idea about the ideal size $\bar w$ of a city, in terms of quality of life, possibility of work and so on. This ideal size is today suggested in western countries by the rankings of the most livable cities, made available every year through media advertisements, the web and other channels. Coupling this with the fact that very big cities still remain attractive, it is reasonable to assume that citizens are preferably willing to migrate to another city when the size of their own is below a certain limit value $\bar w_L$, with $\bar w_L > \bar w$. In conclusion, we can assign different values to the intention to migrate from a small city to a bigger one, rather than from a big city to a smaller one. Therefore, given two citizens living in cities whose sizes are at the same distance from the limit size $\bar w_L$, from below and above respectively, it is more probable for the citizen living in the city of smaller size to move closer to the size $\bar w_L$ than for the citizen living in the city of bigger size. Indeed, the perception a citizen has of his advantages will be completely different depending on the sign of the value function. This justifies the choice of the *internal* rate of variation $\Phi(\cdot)$ in the form $$\label{rata}
\Phi^\delta(v) = \mu\, \frac{(v/\bar w_L)^\delta - 1}{(v/\bar w_L)^\delta + 1},$$ with $\mu$ and $\delta$ positive constants, that, for small values of the parameter $\delta$, produces formula [(\[AA\])]{}.
Therefore, if we assume that the dominant effect in migration is given by a variation of type [(\[cit\])]{}, with a negligible contribution of the external immigration term $I_E(v)$, in view of the analysis of Sections \[model\] and \[quasi\] we conclude that the size distribution of cities has the form of a lognormal distribution.
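A rate with the asymmetry described above can be sketched as a bounded value function vanishing at $\bar w_L$: negative (growth) below the limit size, positive (decline) above it, and stronger from below at equal distance. The form and the parameters $\mu$, $\delta$, $\bar w_L$ in the sketch below are illustrative assumptions:

```python
import numpy as np

# Illustrative value-function rate for migration; mu bounds the rate,
# delta controls how slowly it saturates, wL is the limit size.
mu, delta, wL = 0.1, 0.1, 100.0

def phi(v):
    """Internal rate of size variation: negative below wL (the city grows),
    positive above wL (the city shrinks), bounded by mu in absolute value."""
    r = (v / wL) ** delta
    return mu * (r - 1.0) / (r + 1.0)

d = 40.0
below, above = abs(phi(wL - d)), abs(phi(wL + d))
# at equal additive distance from wL, the pull towards wL is stronger from below
print(below > above)  # True
```

This asymmetry encodes the behavioral assumption that small cities approach the limit size faster than big cities recede from it.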
On the other hand, the reasons behind migration are very complex, and it is quite difficult to select one or another reason as dominant. This clearly justifies the fact that the kinetic interaction is a mixture of effects (and reasons), which give in the limit a distribution that can be closer to a Pareto or Zipf law, or to a mixture of lognormal ones. In any case, the kinetic modeling considered in this paper (or in [@GT2]) is enough to clarify the coexistence of various distributions in terms of various different microscopic interactions.
Service times in a call center {#service}
------------------------------
Call centers constitute an increasingly important part of the business world, employing millions of agents across the globe and serving as a primary customer-facing channel for firms in many different industries. For this reason, many issues associated with human resources management, sales, and marketing have also become increasingly relevant to call center operations and associated academic research. Mathematical models are built up by taking into account statistics concerning system primitives, such as the number of agents working, the rate at which calls arrive, the time required for a customer to be served, and the length of time customers are willing to wait on hold before they hang up the phone and abandon the queue. Outputs are performance measures, such as the distribution of time that customers wait *on hold* and the fraction of customers that abandon the queue before being served [@AAM; @Brown]. A deep insight into service times in a call center with hundreds of operators was the object of a statistical research reported by Brown et al. in [@Brown]. The analysis of the huge amount of data provided by the company, covering a one-month period, made evident the almost perfect fitting of the statistical distribution of service times to a lognormal one. Among others, the same phenomenon was noticed before [@Brown], even if the conclusion there was that the true distribution is very close to lognormal, but is not exactly lognormal. After excluding short service times, the strong resemblance to a lognormal distribution was shown to hold in different months.
Lognormal shape of processing times has been occasionally recognized by researchers in telecommunications and psychology. Empirical results suggesting that the distribution of the logarithm of call duration is normal for individual telephone customers and a mixture of normals for *subscriber-line* groups was discussed in [@Bol]. Also, theoretical arguments to justify the lognormal curve of reaction times using models from mathematical psychology were introduced in [@Bre; @UM].
Taking into account these attempts, in [@GT17] we outlined the mathematical modeling of Sections \[model\] and \[quasi\] to justify the lognormal variations of service times in a call center. In this case, the population of agents consists of call center employees, and the characteristic microscopic *interaction* is the change in the future service time of any agent who concluded a single operation in a certain time $w$. The starting assumption is that agents have precise instructions from the service manager to conclude the service in a certain ideal time $\bar w$, in any case remaining below a certain limit time for the service, denoted by $\bar w_L$, with $\bar w_L > \bar w$. It is reasonable to assume that most agents will take the value $\bar w_L$ (instead of $\bar w$) as the significant time to respect. Indeed, agents will really be afraid of consequences for their delays in serving only when the service time is above the value $\bar w_L$. Consequently, if an agent concluded a service in a time $w > \bar w_L$, he will accelerate to conclude his forthcoming service in a shorter time. Likewise, in the case in which the agent was so quick (or lucky) as to conclude a service in a time $w < \bar w_L$, he will work leisurely to conclude his forthcoming service in a better way, by using a longer time.
Also in this situation, one needs to introduce a certain amount of unpredictability into any future realization of the service, which can be unexpectedly solved quickly in some cases, or present additional difficulties due to an improperly formulated request of the customer or to accidental problems in accessing the data relative to the request.
Consequently, an interaction of type [(\[coll\])]{} is in agreement with the agent’s behavior in a call center. Last, let us discuss the grazing limit assumption leading to the analysis of Section \[quasi\]. This assumption follows by noting that any agent knows very well the work to be done to conclude a service. Therefore, deterministic variations of the service time relative to a well-known service tend to be negligible, while the number of services needed to produce a sizable variation has to be very high. According to the analysis of Section \[weight\], the evident consequence of this behavior is that the Fokker–Planck equation [(\[FPori\])]{}, once the relevant parameters are properly identified, provides a good description of the phenomenon of lognormal distribution of service times.
Mathematical aspects of the Fokker–Planck equation {#trend}
==================================================
In the physical literature, Fokker–Planck equations with logarithmic factors in the diffusion and drift terms have been considered and studied before in [@Lo; @Pes]. However, the presence of the logarithm in the diffusion coefficient allows one to find analytical closed-form solutions only in special cases. It is however remarkable that this type of equation proved to be interesting in the field of econophysics, to model the exchange rate dynamics in a target zone [@Lo].
In social sciences and economics, the kinetic description of phenomena, extensively treated in the recent book [@PT13], rarely leads to the appearance of lognormal distributions. To our knowledge, lognormal densities have been found in [@CPP] as self-similar solutions of a linear Fokker–Planck equation, with time-depending coefficients of diffusion and drift, describing the behavior of a financial market where a population of homogeneous agents can create their own portfolio between two investment alternatives, a stock and a bond. The model was closely related to the Levy–Levy–Solomon microscopic model in finance [@LLS; @LLSb].
More recently [@To3], a kinetic description of the density $\Phi(v,t)$ of a multi-agent system of agents interacting through a linear interaction reminiscent of Gibrat’s law [@Gib], led in the grazing limit considered in Section \[quasi\] to a linear diffusion equation with non constant diffusion coefficient, given by $$\frac{\partial \Phi}{\partial t} = \frac{\partial^2\left( v^2 \Phi\right)}{\partial v^2} ,$$ well-known to people working in probability theory and finance, since it describes a particular version of the geometric Brownian motion [@Oks]. Also in this case, the lognormal density appears as self-similar solution of the diffusion equation. It is interesting to remark that the analytical derivation of self-similar solutions in [@To3] (respectively in [@CPP]) suggests that the right way to look at the mathematical analysis of the diffusion equation and to the Fokker–Planck equation with time depending coefficients, is to enlighten their relationships with the linear diffusion, and, respectively, with the classical Fokker–Planck equation.
This idea has been developed in [@To4]. Applying this strategy, it is immediate to show that the study of the initial-boundary value problem for the Fokker–Planck equation [(\[FP2\])]{}, and of the large-time behavior of its solution, takes advantage of its strong connections with the classical one-dimensional Fokker–Planck equation for the one-dimensional density $f=f(v,t)$, with $v \in {\mathbb R}$ and $t \ge 0$ (cf. [@Ch43]) $$\label{FPcla}
\frac{\partial f}{\partial t} = \frac{\partial^2 f}{\partial v^2} + \frac 1T\, \frac{\partial}{\partial v}\left( \left( v - m\right) f\right),$$ where $m$ and $T>0$ are suitable constants related to the mean and variance of the stationary solution. Indeed, the unique steady solution of equation [(\[FPcla\])]{} of unit mass is the Gaussian density (the Maxwellian equilibrium) $$\label{Max-cla}
M(v) = \frac 1{\sqrt{2\pi T}}\, \exp\left\{ - \frac{(v-m)^2}{2T} \right\}.$$ Let us briefly describe the main steps of the method developed in [@To4]. Let $g_0(w)$ denote a probability density on ${\mathbb R}_+$. To avoid inessential difficulties, let us suppose that both the initial datum and the corresponding solutions are smooth enough to justify the computations. To study the initial-boundary value problem for equation [(\[FP2\])]{} one needs to specify the boundary condition at the boundary $w =0$. If mass conservation is imposed on the solution to equation [(\[FP2\])]{} (cf. the discussion in [@FPTT]), the natural boundary condition is given by the so-called *no-flux* condition, expressed by $$\label{bu}
\left.\left( \frac\lambda 2\, \frac{\partial}{\partial w}\left(w^2 g(w,t) \right) + \frac\gamma 2\, w\log \frac w{\bar w_L}\, g(w,t)\right)\right|_{w=0} = 0, \qquad t>0.$$ Therefore, at least formally, the Fokker–Planck equation [(\[FP2\])]{}, for a given initial density $g_0(w)$ and boundary condition [(\[bu\])]{}, has a solution $g(w,t)$ that, in consequence of mass conservation, remains a probability density in ${\mathbb R}_+$ for all times $t >0$.
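As a numerical sanity check of the no-flux structure, one can verify that a lognormal density annihilates the flux of equation [(\[FP2\])]{}. The sketch below assumes the parameter identifications $\sigma = \lambda/\gamma$ and $\kappa = \log \bar w_L - \sigma$ for the equilibrium lognormal, with illustrative values of $\lambda$, $\gamma$, $\bar w_L$:

```python
import numpy as np

lam, gam, wL = 0.5, 1.0, 2.0        # illustrative parameters
sigma = lam / gam                   # assumed variance of the log-variable
kappa = np.log(wL) - sigma          # assumed mean of the log-variable

def g_inf(w):
    """Candidate stationary state: lognormal density with parameters (kappa, sigma)."""
    return np.exp(-(np.log(w) - kappa) ** 2 / (2 * sigma)) / (w * np.sqrt(2 * np.pi * sigma))

w = np.linspace(0.2, 8.0, 400)
h = 1e-6
# flux: (lam/2) d/dw (w^2 g) + (gam/2) w log(w/wL) g, via central differences
d_w2g = ((w + h) ** 2 * g_inf(w + h) - (w - h) ** 2 * g_inf(w - h)) / (2 * h)
flux = 0.5 * lam * d_w2g + 0.5 * gam * w * np.log(w / wL) * g_inf(w)
print(np.max(np.abs(flux)))   # numerically ~ 0
```

The flux vanishes on the whole grid, confirming that the lognormal density with these parameters is the stationary state singled out by the no-flux condition.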
To start with, we show that equation [(\[FP2\])]{} admits several equivalent formulations, which contain the quotient $G(w,t)= g(w,t)/g_\infty(w)$, each of them useful for various purposes. Since the lognormal density $g_\infty$, the stationary solution of equation [(\[FP2\])]{}, satisfies the differential equation [(\[sd\])]{}, which can be rewritten as $$\label{stazio}
\frac{\partial}{\partial w}\left(w^2 g_\infty\right) = - \frac\gamma\lambda\, w\log \frac w{\bar w_L}\, g_\infty,$$ we obtain $$\lambda \frac{\partial }{\partial w}\left(w^2
g \right) + \gamma \, w\,\log \frac w{\bar w_L}\,g = \lambda w^2\, g \left( \frac{\partial }{\partial w} \log(w^2 g) + \frac\gamma\lambda \,\frac 1w\,\log \frac w{\bar w_L}\right)=$$ $$\lambda w^2\, g \left( \frac{\partial }{\partial w} \log(w^2 g)- \frac{\partial }{\partial w} \log(w^2 g_\infty) \right) = \lambda w^2\, g \frac{\partial }{\partial w} \log\frac g{g_\infty}= \lambda w^2 g_\infty \frac{\partial }{\partial w}\frac g{g_\infty}.$$ Hence, we can write the Fokker–Planck equation [(\[FP2\])]{} in the equivalent form $$\label{FPalt}
\frac{\partial g}{\partial t} = \frac\lambda 2\, \frac{\partial}{\partial w}\left( w^2 g\, \frac{\partial }{\partial w} \log\frac g{g_\infty}\right),$$ which enlightens the role of the logarithm of $G$, or in the form $$\label{FPal2}
\frac{\partial g}{\partial t} = \frac\lambda 2\, \frac{\partial}{\partial w}\left( w^2 g_\infty\, \frac{\partial }{\partial w} \frac g{g_\infty}\right).$$ In particular, recalling [(\[stazio\])]{}, we can extract from [(\[FPal2\])]{} the evolution equation for $G(w,t)= g(w,t)/g_\infty(w)$. Indeed $$\frac{\partial g}{\partial t} = g_\infty \frac{\partial G}{\partial t} = \frac\lambda{2} w^2 g_\infty \frac{\partial^2 G}{\partial w^2} + \frac\lambda{2} \frac{\partial }{\partial w} (w^2 g_\infty) \frac{\partial G}{\partial w}=$$ $$\frac\lambda{2} w^2 g_\infty \frac{\partial^2 G }{\partial w^2} -\frac\gamma{2} w\,\log \frac w{\bar w_L}\, g_\infty \frac{\partial G}{\partial w},$$ which shows that $G = G(w,t)$ satisfies the equation $$\label{quo}
\frac{\partial G}{\partial t} = \frac\lambda 2\, w^2\, \frac{\partial^2 G}{\partial w^2} - \frac\gamma 2\, w\log \frac w{\bar w_L}\, \frac{\partial G}{\partial w}.$$ Also, the boundary condition [(\[bu\])]{} modifies accordingly. For the two equivalent forms [(\[FPalt\])]{} and [(\[FPal2\])]{} of the Fokker–Planck equation [(\[FP2\])]{}, the boundary condition at $w=0$ takes the forms $$\label{BCalt}
\left. w^2\, g(w, t)\, \frac{\partial }{\partial w} \log G(w,t) \right|_{w=0} = 0, \qquad t >0,$$ and $$\label{BCal2}
\left. w^2\, g_\infty(w)\, \frac{\partial G(w,t)}{\partial w} \right|_{w=0} =0, \qquad t >0.$$ Note that boundary condition [(\[BCal2\])]{} can be used for equation [(\[quo\])]{} as well.
Let us introduce the transformation of variables $$\label{chiave}
v= v(w) = \log w, \qquad \tau = \tau(t) = \frac\gamma 2\, t,$$ which is well-defined and invertible for $w > 0$. In addition, let us consider for $t \ge 0$ the new function $F= F(v, \tau)$, defined by $$\label{inv}
F(v, \tau) = G(w,t),$$ with $v,\tau$ defined as in [(\[chiave\])]{}. Clearly it holds $$\label{newt}
\frac{\partial G}{\partial t} = \frac{\partial F}{\partial \tau}\, \frac{d\tau}{dt} = \frac\gamma 2\, \frac{\partial F}{\partial \tau},$$ while $$\label{der1}
\frac{\partial G}{\partial w} = \frac{\partial F}{\partial v}\, \frac{dv}{dw} = \frac 1w\, \frac{\partial F}{\partial v},$$ and $$\label{der2}
\frac{\partial^2 G}{\partial w^2} = \frac 1{w^2}\, \frac{\partial^2 F}{\partial v^2} - \frac 1{w^2}\, \frac{\partial F}{\partial v}.$$ Substituting into [(\[quo\])]{} we obtain that, if $G(w,t)$ satisfies equation [(\[quo\])]{}, then, in terms of the variables $v=v(w)$ and $\tau = \tau(t)$, the function $F(v,\tau)=G(w,t)$ satisfies the equation $$\label{new-quo}
\frac{\partial F}{\partial \tau} = \sigma\, \frac{\partial^2 F}{\partial v^2} - (v - \kappa)\, \frac{\partial F}{\partial v},$$ where the constants $\sigma$ and $\kappa$ are defined as in [(\[para\])]{}. Moreover, the boundary condition [(\[BCal2\])]{} becomes $$\label{new-bu}
\left. f_\infty(v)\, \frac{\partial F(v,\tau)}{\partial v} \right|_{v= -\infty}= 0,$$ where $f_\infty(v)$ is the Gaussian function defined in [(\[Max-cla\])]{}, that is $$\label{Max}
f_\infty(v) = \frac 1{\sqrt{2\pi\sigma}}\, \exp\left\{ - \frac{(v-\kappa)^2}{2\sigma} \right\},$$ of mean $\kappa$ and variance $\sigma$. Now, let $f_0(v)$ be a probability density on the whole space ${\mathbb R}$, and let $f(v,\tau)$ be the unique solution to the initial value problem for the classical one-dimensional Fokker–Planck equation $$\label{FP}
\frac{\partial f}{\partial \tau} = \sigma\, \frac{\partial^2 f}{\partial v^2} + \frac{\partial}{\partial v}\left( (v-\kappa) f\right).$$ Then, by setting $F(v,\tau) = f(v,\tau)/f_\infty(v)$, and repeating step by step the previous computations, we conclude that $F(v,\tau)$ satisfies [(\[new-quo\])]{}. Hence, through equations [(\[quo\])]{} and [(\[new-quo\])]{}, which are obtained from each other by means of the transformation [(\[chiave\])]{}, we have established an easy-to-handle connection between the classical Fokker–Planck equation [(\[FP\])]{} and the Fokker–Planck equation with logarithmic drift [(\[FP2\])]{}. To appreciate the importance of this connection, given the initial value $g_0(w)$, $w \in {\mathbb R}_+$, of equation [(\[FP2\])]{}, let us fix as initial value for the Fokker–Planck equation [(\[FP\])]{} the function $$\label{init}
f_0(v) = w\, g_0(w), \qquad v= \log w.$$ Clearly, if $g_0(w)$ is a probability density in ${\mathbb R}_+$, then $f_0(v)$ is a probability density function in ${\mathbb R}$.
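The change of variables can be probed numerically: under $v=\log w$, a lognormal density on ${\mathbb R}_+$ is mapped onto a Gaussian on ${\mathbb R}$ through $f(v) = w\, g(w)$. A short sketch with illustrative parameters:

```python
import numpy as np

sigma, kappa = 0.5, 0.3    # illustrative variance and mean of the Gaussian

def g_inf(w):
    # lognormal density on (0, inf) with log-mean kappa and log-variance sigma
    return np.exp(-(np.log(w) - kappa) ** 2 / (2 * sigma)) / (w * np.sqrt(2 * np.pi * sigma))

def f_inf(v):
    # Gaussian density with mean kappa and variance sigma
    return np.exp(-(v - kappa) ** 2 / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)

w = np.linspace(0.05, 10.0, 300)
v = np.log(w)
# the change of variables v = log w gives f(v) = w g(w)
print(np.max(np.abs(w * g_inf(w) - f_inf(v))))   # ~ 0 up to rounding
```

The two curves coincide up to floating-point rounding, which is exactly the identity used to transfer initial data from the logarithmic to the classical equation.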
Moreover, the boundary condition [(\[new-bu\])]{} reduces to require that the solution to the Fokker–Planck equation [(\[FP\])]{} satisfies a suitable decay at $v = -\infty$, condition that is shown to hold by assuming that the initial density $f_0$ has some moment bounded. Consequently, in view of the relationship [(\[inv\])]{}, any result for the Fokker–Planck equation [(\[FP\])]{} with initial density $f_0(v)$ translates into a result for the Fokker–Planck equation [(\[FP2\])]{} with initial density $g_0(w)$.
The main fact is that Fokker–Planck equations with constant diffusion and linear drift have been extensively studied, and many mathematical results are available. The interested reader can refer to the seminal paper [@OV] by Otto and Villani. In this paper, the Fokker–Planck structure has been utilized to obtain various inequalities in sharp form, including inequalities of logarithmic Sobolev type, thus generalizing the approach of [@To3; @To99]. In particular, use has been made of the form [(\[new-quo\])]{}. For the solution to the Fokker–Planck equation [(\[FP\])]{} it is well-known that the relative Shannon entropy $$\label{sha}
H(f(\tau)/f_\infty) = \int_{\mathbb R} f(v, \tau)\, \log \frac{f(v,\tau)}{f_\infty(v)}\, dv$$ converges exponentially fast towards zero in time [@CT14; @OV; @To3; @To99], provided it is initially bounded, and the initial density has finite moments up to order two. The result follows by studying the decay in time of the relative entropy, and using the logarithmic Sobolev inequality to get a lower bound on the entropy production in terms of the relative entropy. At the end, for the solution to equation [(\[FP\])]{} one gets the bound [@OV; @To3; @To99] $$\label{dec7}
H(f(\tau)/f_\infty) \le H(f_0/f_\infty)\, \exp\left\{ - 2\tau \right\}.$$ Consider now that the relative entropy [(\[sha\])]{} can be rewritten as $$\label{sha1}
H(f(\tau)/f_\infty) = \int_{\mathbb R} f_\infty(v)\, \frac{f(v,\tau)}{f_\infty(v)}\, \log \frac{f(v,\tau)}{f_\infty(v)}\, dv.$$ Hence, changing variable in the integral on the right-hand side of [(\[sha1\])]{} according to [(\[chiave\])]{}, and using [(\[inv\])]{}, one obtains the equality $$\label{equ3}
H(g(t)/g_\infty) = H(f(\tau)/f_\infty), \qquad \tau = \frac\gamma 2\, t,$$ which implies, thanks to the time transformation [(\[chiave\])]{}, $$\label{dec8}
H(g(t)/g_\infty) \le H(g_0/g_\infty)\, \exp\left\{ - \gamma\, t \right\}.$$ It is important to point out that the boundedness of the second moment of the initial value of the Fokker–Planck equation [(\[FP\])]{}, required for the validity of the decay [(\[dec7\])]{}, translates, in view of [(\[init\])]{}, into the condition $$\label{ini7}
\int_{{\mathbb R}_+} |\log w|^2\, g_0(w)\, dw < +\infty.$$
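For Gaussian initial data the solution to the classical equation remains Gaussian, with mean and variance relaxing exponentially to the equilibrium values, so the exponential decay of the relative entropy at rate $2$ can be checked in closed form. The sketch below assumes the unit drift normalization of the transformed equation, with illustrative parameters:

```python
import numpy as np

sigma, kappa = 1.0, 0.0            # equilibrium Gaussian: mean kappa, variance sigma
m0, s0 = 2.0, 3.0                  # Gaussian initial datum: mean m0, variance s0

def rel_entropy(m, s):
    """Closed-form relative entropy H(N(m,s) | N(kappa,sigma))."""
    return 0.5 * (s / sigma - 1.0 - np.log(s / sigma)) + (m - kappa) ** 2 / (2 * sigma)

taus = np.linspace(0.0, 5.0, 101)
# Gaussian data stay Gaussian: mean relaxes at rate 1, variance at rate 2
m = kappa + (m0 - kappa) * np.exp(-taus)
s = sigma + (s0 - sigma) * np.exp(-2.0 * taus)

H = rel_entropy(m, s)
bound = rel_entropy(m0, s0) * np.exp(-2.0 * taus)
print(np.all(H <= bound + 1e-12))   # True: decay at exponential rate 2
```

The computed entropy stays below the exponential envelope at every time, in agreement with the general bound (which holds for arbitrary, not only Gaussian, initial data).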
Even if this is not a strong condition on the initial density $g_0(w)$, it implies at least that the initial value has to decay to zero at some rate as $w \to 0$.
Using inequality [(\[dec8\])]{} one can easily recover the time decay in some more standard distance. Indeed, the Csiszár–Kullback inequality [@Csi; @Kul] implies that $$\left( \int_{{\mathbb R}_+} \left| g(w,t) - g_\infty(w)\right|\, dw \right)^2 \le 2 H(g(t)/g_\infty),$$ which gives exponential convergence in $L_1({\mathbb R}_+)$-distance at the suboptimal rate $\gamma/2$.
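The Csiszár–Kullback inequality itself is easy to probe numerically. The following sketch checks, on two Gaussian densities with illustrative parameters, that the squared $L_1$ distance stays below twice the relative entropy:

```python
import numpy as np

# Csiszar-Kullback (Pinsker-type) inequality on two Gaussians:
# (L1 distance)^2 <= 2 * relative entropy
grid = np.linspace(-10.0, 10.0, 20_001)
dv = grid[1] - grid[0]

def gauss(v, m, s):
    # Gaussian density with mean m and variance s
    return np.exp(-(v - m) ** 2 / (2 * s)) / np.sqrt(2 * np.pi * s)

g = gauss(grid, 0.5, 1.2)
g_eq = gauss(grid, 0.0, 1.0)

l1 = np.sum(np.abs(g - g_eq)) * dv
H = np.sum(g * np.log(g / g_eq)) * dv
print(l1 ** 2, 2 * H)   # the first value is below the second
```

Since the entropy bound decays exponentially, the same inequality converts it into exponential $L_1$ convergence, at half the entropy rate.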
It is interesting to remark that, in contrast to the case of the Fokker–Planck equation [(\[FP\])]{}, the decay rate of the relative entropy of the Fokker–Planck equation [(\[FP2\])]{} does not depend on the parameter $\lambda$, which measures the intensity of the diffusion process. Going back to the physical meaning of the Fokker–Planck equation [(\[FP2\])]{}, which describes the random variation of measured data in social and economic phenomena, this simply means that the lognormal equilibrium is reached exponentially in time, independently of the intensity of the random effects in the microscopic interaction.
This exponential in time convergence towards the stationary solution also clarifies that the multi-agent system, even if subject to perturbation, quickly returns to equilibrium, and the data we observe fit the lognormal distribution.
Lognormal distributions from real data {#numer}
======================================
The following subsections show that the theoretical analysis presented in this paper, which leads to a Fokker–Planck type equation with a universal lognormal equilibrium density, is in good agreement with real data in various situations described in Section \[examples\]. We present results of numerical fitting in some selected examples, in which it was possible to extract almost complete data from the pertinent web sites: (i) women’s age at first marriage, (ii) the distribution of service times in a call center, and (iii) the distribution of city sizes. In all cases the findings are in good agreement with the theoretical modeling.
The data analysis is performed using the open source statistical software R. The fitting of the lognormal distribution has been obtained resorting to the [*fitdist*]{} package. This package provides functions that plot the empirical probability distribution function, the empirical cumulative distribution function, the quantile-quantile plot (Q-Q plot), and the probability-probability plot (P-P plot). We recall here that the Q-Q plot and the P-P plot constitute a perfect tool to visualize the qualitative goodness of fit of the model to the data. The closer the fitted data are to the straight line $y=x$, the better the quality of the fit.
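For readers preferring Python, a minimal analogue of the [*fitdist*]{} workflow can be sketched as a Gaussian maximum-likelihood fit on the logarithms, followed by a Q-Q style comparison of empirical and fitted quantiles. This is an illustration on synthetic data, not the R code used for the actual analysis:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
mu_true, sigma_true = 3.4, 0.3          # illustrative lognormal parameters
data = rng.lognormal(mu_true, sigma_true, size=50_000)

# maximum-likelihood fit of a lognormal: Gaussian MLE on log(data)
log_d = np.log(data)
mu_fit, s_fit = log_d.mean(), log_d.std()

# Q-Q style check: empirical quantiles against fitted theoretical quantiles
p = np.linspace(0.01, 0.99, 99)
emp_q = np.quantile(data, p)
th_q = np.exp([mu_fit + s_fit * NormalDist().inv_cdf(q) for q in p])
print(np.max(np.abs(emp_q / th_q - 1.0)))   # small relative deviation: good fit
```

When the fitted quantiles track the empirical ones (points near the line $y=x$ in a Q-Q plot), the lognormal model is adequate, which is the same diagnostic used throughout this section.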
In all cases in which there was a need to fit a multimodal lognormal distribution, we made use of the [mixtools]{} software package.[^1] For full details on the [mixtools]{} package we refer the interested reader to reference [@mixtools].
Women’s Age at First Marriage
----------------------------
Our first example of a lognormal distribution refers to the distribution of women’s age at their first marriage. We reproduce, with a different data set, the lognormal distribution already observed by Preston in his pioneering paper [@Pre]. We use the open data published by the municipality of the city of Milan.[^2] These data contain public information about the $36\,081$ marriages celebrated during the period 2003–2015. In agreement with the analysis of Preston, we selected from all the marriages available in the data set those of women denoted by the Italian word “nubile”, namely women who got married for the first time.
![Women’s age at first marriage: data for the city of Milan, period 2003-2016.[]{data-label="fig:marriage"}](matrimonio_eta.png){width="\textwidth"}
Figure \[fig:marriage\] shows the results obtained by fitting the data with a lognormal distribution using the [*fitdist*]{} package mentioned above. The four subplots give, in order, the density kernel of the fitted lognormal distribution, the empirical and theoretical cumulative distribution function, Q-Q plot, and the P-P plot. Note that the horizontal axis reports the exponent in log scale of the age measured in years. The scale is $10^x$, with $x$ ranging from 1.2 (16 years) to 1.9 (80 years).
The mean of the lognormal distribution is close to the age of 31 years, with a standard deviation of approximately 14 months. The Q-Q plot and the P-P plot give a visual impression of the goodness of fit, which appears very good. It is remarkable that the Q-Q plot shows a small deviation of the empirical distribution (vertical axis) from the theoretical distribution (horizontal axis), essentially due to a small number of women who get married for the first time above the age of 60 years, which corresponds to the value 1.8 in the plot. Overall, our results are in accordance with the results already observed in [@Pre] for women in the United Kingdom.
Service Times in Call Centers
-----------------------------
As noticed in a number of papers [@AAM; @Brown] and recently discussed in [@GT17] from the modelling point of view, the lognormal distribution arises when analyzing the distribution of service times in a call center.
![Evaluation of fitting the log of the service time $log(w)$ with a Gaussian distribution.[]{data-label="fig3"}](fig_4_lognorm_fitting.png){width="\textwidth"}
The analysis of reference [@GT17] is relative to the call center of an Italian national communication company. In this call center, every day more than 300 operators handle a number of jobs that ranges between $10\,000$ and $20\,000$. On the basis of the specific requests, jobs are classified into 270 different types.
Figure \[fig3\] shows the quality of fitting with a Gaussian distribution of the logarithm of the service times, using 280’000 samples provided by the industrial partner. As in the previous subsection, the results have been obtained using the [*fitdist*]{} package. The four subplots show in order: (i) the histogram of the empirical observations along with the kernel density (red line); (ii) the empirical and theoretical Cumulative Distribution Functions (CDF); (iii) the Q-Q plot, and (iv) the P-P plot. Note that in particular the Q-Q plot and the P-P plot clearly show the goodness of fit. In both cases the data follow a straight line, with a precision even better than the one observed in Figure \[fig:marriage\].
Once it is verified that the service time distribution follows a lognormal distribution, one can perform a deeper analysis by fitting a lognormal distribution to each job type. The goal of this analysis is to observe how the distribution of the service times behaves with respect to the ideal time $\bar{w}$, given by the service manager, and the limit $\bar{w}_L$, given by the time constraints related to the Quality of Service (QoS), which are clearly different for each job type. As in the previous figures, the blue empirical density function refers to the real data, while the red probability density function shows the lognormal distribution with mean $\mu$ and deviation $\sigma$ detailed in each sub-figure. In addition, the plots show with vertical lines the values of $\log{(\bar{w})}$ and $\log{(\bar{w}_L)}$.
For instance, the first subplot, which refers to Job Type 1, illustrates the distribution of $28\,425$ service times. The blue dotted vertical line marks the log of the ideal time $\log{(\bar{w})}=6$ (i.e., 400 seconds), and the dashed green vertical line marks the log of the time limit $\log{(\bar{w}_L)}=7.3$ (i.e., 1’500 seconds). The fitted lognormal distribution has mean $\mu=4.9$ and standard deviation $\sigma=1.2$.
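From the fitted parameters of Job Type 1 one can recover interpretable time scales through the standard lognormal formulas (a small worked example; the values $\mu=4.9$ and $\sigma=1.2$ are those reported above):

```python
import math

mu, sigma = 4.9, 1.2     # fitted parameters for Job Type 1 (from the text)

median = math.exp(mu)                      # lognormal median: e^mu
mean = math.exp(mu + sigma ** 2 / 2)       # lognormal mean: e^(mu + sigma^2/2)
print(f"median ~ {median:.0f} s, mean ~ {mean:.0f} s")
```

The heavy right tail pulls the mean (about 276 seconds) well above the median (about 134 seconds), with both remaining below the QoS limit of 1’500 seconds.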
![Fitted lognormal distributions over the six most frequent job types. The vertical dashed lines illustrate the reference times $\bar{w}_L$ for the Quality of Service constraint.[]{data-label="fig4"}](fig_5_lognorm_by_type_with_reference.png){height="0.8\textheight"}
Figure \[fig4\] also shows that the service times of some job types are not perfectly described by a simple lognormal distribution; this can be observed, for example, for types 5 and 6. A satisfactory answer comes from a multi-modal analysis of the data, in which a multi-modal lognormal distribution is fitted. Figure \[fig5\] shows the fit of the data used for the last subplot of Figure \[fig4\], which corresponds to job type 6, with a bimodal lognormal distribution. We remark that this bimodal model is able to capture a behavior of the call center operators, who tend to have two ways of working out a job: either to reject the job in a short time (represented by the first mode, with an average of 30 seconds, given by the red area in Figure \[fig5\]), or to accept to work “hard” on the job and, as a consequence of this decision, to conclude the service in a longer time, with an average of more than 10 minutes.
We highlight once more the good fit of the lognormal distribution also when it is used as the basic kernel in multi-modal data fitting. By using only a mixture of two lognormal distributions we capture the human behavior also in this rather complex situation. We remark that we have run numerous tests using different models and different kernels, or employing a larger number of modes. In all cases, “only” two lognormal distributions gave the simplest and most robust fit.
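A minimal version of this multi-modal fit can be sketched in Python with a hand-written EM algorithm for a two-component Gaussian mixture on the log of the service times (equivalently, a bimodal lognormal model for the times themselves). The data below are synthetic, with a short mode around 30 seconds and a long mode around 10 minutes as in the description above; the sample sizes and spreads are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bimodal service times: a short "reject" mode around 30 s and a
# long "work hard" mode around 10 min; sizes and spreads are illustrative.
n = 20_000
short = rng.lognormal(np.log(30.0), 0.5, size=n // 4)
long_ = rng.lognormal(np.log(600.0), 0.6, size=3 * n // 4)
x = np.log(np.concatenate([short, long_]))   # work on log(service time)

# Plain EM for a two-component Gaussian mixture on the log data, i.e. a
# bimodal lognormal model for the service times themselves.
lam = np.array([0.5, 0.5])
mu = np.array([x.min(), x.max()])
var = np.array([x.var(), x.var()])
for _ in range(200):
    # E-step: responsibilities of each component for each observation
    dens = lam / np.sqrt(2 * np.pi * var) * \
        np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and variances
    nk = r.sum(axis=0)
    lam = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

order = np.argsort(mu)
print("weights:", lam[order], "modes (s):", np.exp(mu[order]))
```

In the paper the fit is of course performed on the real call-center data; the EM scheme above is the same one implemented by standard mixture-fitting packages.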
![Bimodal lognormal distribution: the red line is the fitted bimodal distribution, the dashed black line represents the kernel density, while the red and blue areas represent the two modes of the bimodal distribution.[]{data-label="fig5"}](bimodal_callcenter.png){width="0.7\linewidth"}
Size Distribution of Cities
---------------------------
Our last example is concerned with the problem of understanding the size distribution of cities [@GT2]. The results that follow are extracted from the open data published by the Italian National Institute of Statistics[^3] and by the Swiss Federal Statistical Office.[^4] The first data set contains the size distribution of $8\,006$ Italian cities, ranging from the smallest village to the largest city, Rome, with $2\,873\,494$ citizens; it refers to the last official Italian census of 2016. The second data set enumerates the size distribution of $2\,289$ Swiss cities, from the smallest one to the largest city, Zurich, with $396\,955$ citizens; it refers to the last official Swiss census of 2014. Table \[tab:summary\] reports the basic statistics of the two data sets, giving in order the minimum, the first quartile, the median, the mean, the third quartile and the maximum values of city size. Clearly, the basic statistics alone reveal little about the real distribution of city sizes.
Min 1st Quart. Median Mean 3rd Quart. Max
------------- ----- ------------ ---------- ---------- ------------ ---------------
Italy 30 $1\,019$ $2\,452$ $7\,571$ $6\,218$ $2\,873\,494$
Switzerland 13 642 $1\,425$ $3\,638$ $3\,513$ $396\,955$
: Basic statistics of the Italian and Swiss distributions of city size.\[tab:summary\]
In the literature, data sets on the size distribution of cities are usually studied and fitted using Zipf’s law [@Ga99; @GCCC]. However, if we just take the logarithm of every city size and plot the resulting distribution, we get what looks like a classical Gaussian distribution. This can be verified through the examination of Figures \[fig:1\](a) and \[fig:2\](a), which refer respectively to the Italian and Swiss data sets. In addition, and surprisingly, it is almost impossible to distinguish between the shapes of the two distributions. Even if we look at the inverse cumulative functions, plotted in Figures \[fig:1\](b) and \[fig:2\](b), it is pretty hard to distinguish the resulting function from a Gaussian cumulative function. However, if we analyze the inverse cumulative functions in bi-logarithmic plots, it is possible to notice that a single Gaussian does not capture the trend of the tails of the distribution. This appears evident by looking at the red lines in Figures \[fig:1\](c), \[fig:1\](d), \[fig:2\](c), and \[fig:2\](d). On the contrary, a single Gaussian is able to perfectly fit the lower tails, which are never captured by the celebrated Zipf’s law.
In order to improve the fit of the distributions also on the upper tails, it is enough to fit the distribution of city sizes using a multi-modal Gaussian model, resorting to the [mixtools]{} software package,[^5] available in the R statistical programming language. For full details on the [mixtools]{} package we refer the interested reader to \[Z\]. Basically, using [mixtools]{} one is able to fit the distribution of city size with a mixture of only two Gaussians $$\label{eq:bimodal}
g(x) = \lambda_1 N(x; \mu_1, \sigma_1) + \lambda_2 N(x; \mu_2, \sigma_2).$$ Table \[tab:bimodal\] reports the parameters fitted by [mixtools]{} for both data sets, and Figures \[fig:3\] and \[fig:4\] show the respective probability density functions. It is evident that for both data sets there is a *dominating* Gaussian, since $\lambda_1 = 0.945$ for Italy and $\lambda_1 = 0.967$ for Switzerland. In addition, there are two small Gaussians (characterized by the small values of $\lambda_2$) that capture the behavior of the upper tails, and which have both larger means and larger deviations. We remark that the blue solid line on top of the histograms represents the corresponding bimodal distribution. Finally, by looking at the green lines in Figures \[fig:1\](c), \[fig:1\](d), \[fig:2\](c), and \[fig:2\](d), the goodness of fit of city size distributions with a mixture of two Gaussians is strikingly evident.
$\lambda_1$ $\mu_1$ $\sigma_1$ $\lambda_2$ $\mu_2$ $\sigma_2$
------------- ------------- --------- ------------ ------------- --------- ------------
Italy 0.945 3.371 0.563 0.054 3.993 0.731
Switzerland 0.967 3.162 0.533 0.032 3.483 0.896
: Mixture of two Gaussians: Model Parameters.\[tab:bimodal\]
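The parameters of Table \[tab:bimodal\] can be plugged back into (\[eq:bimodal\]) to inspect the mixture directly. The following Python sketch (ours; the paper itself uses R's [mixtools]{}) rebuilds $g(x)$ with the Italian parameters (here $x$ is, as we infer from the magnitudes, the base-10 logarithm of the city size) and checks numerically two statements made above: that the weights sum to one up to the rounding of the table, and that the small second component dominates the upper tail.

```python
import numpy as np
from scipy.stats import norm

# Italian parameters from Table [tab:bimodal]; x is (we infer) log10(size).
lam1, mu1, s1 = 0.945, 3.371, 0.563
lam2, mu2, s2 = 0.054, 3.993, 0.731

def g(x):
    # Mixture of two Gaussians, eq. (eq:bimodal)
    return lam1 * norm.pdf(x, mu1, s1) + lam2 * norm.pdf(x, mu2, s2)

# Total mass: the published weights sum to 0.999, so g integrates to ~1
# up to the rounding of the table.
xs = np.linspace(-3.0, 11.0, 20001)
mass = g(xs).sum() * (xs[1] - xs[0])

# Upper tail (cities above 10^6 inhabitants): the small second component,
# with its larger mean and deviation, carries far more mass there than the
# dominating one -- this is what fixes the fit of the upper tail.
tail1 = lam1 * norm.sf(6.0, mu1, s1)
tail2 = lam2 * norm.sf(6.0, mu2, s2)
print(f"mass = {mass:.3f}, tails: {tail1:.2e} (comp. 1) vs {tail2:.2e} (comp. 2)")
```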
![Probability distribution function and inverse cumulative functions of Italian cities.[]{data-label="fig:1"}](italia_all.png){width="\textwidth"}
![Probability distribution function and inverse cumulative functions of Swiss cities.[]{data-label="fig:2"}](swiss_all.png){width="\textwidth"}
![Log Size Distribution of 8006 Italian Cities, Census 2016.[]{data-label="fig:3"}](italia_mixed.png){width="60.00000%"}
![Log Size Distribution of 2289 Swiss Cities, Census 2014.[]{data-label="fig:4"}](swiss_mixed.png){width="60.00000%"}
Conclusions
===========
In the present paper we introduced and discussed a number of social and economic phenomena characterized by data that are well fitted by the lognormal distribution, and we tried to explain this common feature in terms of human behavior. In all cases, agents want to achieve a well-defined goal, characterized by a certain fixed value, while the rate of change in their approach differs depending on the side from which an agent looks at that fixed value. The kinetic modeling is based on microscopic variations of the quantity under consideration, obtained by resorting to a strong analogy with the arguments of prospect theory [@KT; @KT1], suitable to model an agent’s decision under risk. Well-known arguments of kinetic theory then allow one to model these phenomena by means of a Fokker–Planck equation with variable coefficients of diffusion and drift, which exhibits a lognormal density at equilibrium. Interestingly enough, this Fokker–Planck equation can be exhaustively studied from the mathematical point of view, since it turns out to be linked to the classical Fokker–Planck equation, for which a number of results are available in the pertinent literature.
It is clear that the examples considered in this paper cover only partially the huge number of phenomena in which human activity is mainly subject to a skewed perspective. Also, the numerical evidence for the appearance of the lognormal distribution in these phenomena is not restricted to the few cases treated here. In any case, we believe that our analysis constitutes a reasonable and easily generalizable modeling approach to the lognormal world of human behavior.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work has been written within the activities of GNFM group of INdAM (National Institute of High Mathematics), and partially supported by MIUR project “Optimal mass transportation, geometrical and functional inequalities with applications”.
This research was partially supported by the Italian Ministry of Education, University and Research (MIUR): Dipartimenti di Eccellenza Program (2018–2022) - Dept. of Mathematics “F. Casorati”, University of Pavia.
[10]{}
L.H. Ahrens, The log-normal distribution of the elements (A fundamental law of geochemistry and its subsidiary), *Geochimica et Cosmochimica Acta* **5** (1954) 49–73.
J. Aitchison and J.A.C. Brown, *The Log-normal Distribution*, Cambridge University Press, Cambridge, UK 1957.
Z. Aksin, M. Armony and V. Mehrotra, The modern call center: a multi- disciplinary perspective on operations management research, *Production And Operations Management POMS* **16** (6) (2007) 665–688.
E. Battistin, R. Blundell and A. Lewbel, Why is consumption more log normal than income? Gibrat’s law revisited, *Journal of Political Economy* **117** (6) (2009) 1140–1154.
P. Beaudry, D.A. Green and B.M. Sand, Spatial equilibrium with unemployment and wage bargaining: Theory and estimation, *J. Urban Econ.* **79** (2014) 2–19.
M. Bee, M. Riccaboni and S. Schiavo, The size distribution of US cities: not Pareto, even in the tail, *Economics Letters* **120** (2013) 232–237.
N. Bellomo, M. A. Herrero and A. Tosin, On the dynamics of social conflicts looking for the Black Swan, *Kinet. Relat. Models* **6** (2013) 459–479.
N. Bellomo, D. Knopoff and J. Soler, On the difficult interplay between life, complexity, and mathematical sciences, (2013) 1861–1913.
N. Bellomo, F. Colasuonno, D. Knopoff and J. Soler, From a systems theory of sociology to modeling the onset and evolution of criminality, *Netw. Heterog. Media* **10** (2015) 421–441.
T. Benaglia, D. Chauveau, D. Hunter and D. Young, Mixtools: An R package for analyzing finite mixture models, *Journal of Statistical Software* **32** (6) (2009) 1–29.
E. Ben-Naim, P.L. Krapivski and S. Redner, Bifurcations and patterns in compromise processes, *Physica D* **183** (2003) 190–204.
E. Ben-Naim, P.L. Krapivski, R. Vazquez and S. Redner, Unity and discord in opinion dynamics, *Physica A* **330** (2003) 99–106.
E. Ben-Naim, Opinion dynamics: rise and fall of political parties, *Europhys. Lett.* **69** (2005) 671–677.
M.L. Bertotti and M. Delitala, On a discrete generalized kinetic approach for modelling persuader’s influence in opinion formation processes, *Math. Comp. Model.* **48** (2008) 1107–1121.
V. Bolotin, Telephone circuit holding time distributions, In *14th International Tele-Traffic Conference (ITC-14)*, 125–134, Elsevier, Amsterdam, (1994).
L. Boudin and F. Salvarani, The quasi-invariant limit for a kinetic model of sociological collective behavior, *Kinetic Rel. Mod.* **2** (2009) 433–449.
L. Boudin and F. Salvarani, A kinetic approach to the study of opinion formation, *ESAIM: Math. Mod. Num. Anal.* **43** (2009) 507–522.
L. Boudin, A. Mercier and F. Salvarani, Conciliatory and contradictory dynamics in opinion formation, *Physica A* **391** (2012) 5672–5684.
J. Brainard and D.E. Burmaster, Bivariate distributions for height and weight of men and women in the United States, *Risk Analysis* **12** (1992) 267–275.
G. Breukelen, Theoretical note: parallel information processing models compatible with lognormally distributed response times, *Journal of Mathematical Psychology* **39** (1995) 396–399.
L. Brown, N. Gans, A. Mandelbaum, A. Sakov, H. Shen, S. Zeltyn and L. Zhao, Statistical analysis of a telephone call center: a queueing-science perspective, *Journal of the American Statistical Association* **100**, No. 469, Applications and Case Studies, March 2005.
D.E. Burmaster and E.A. Crouch, Lognormal distributions for body weight as a function of age for males and females in the United States, 1976–1980, *Risk Anal.* **17** (4) (1997) 499–505.
J.A. Carrillo and G. Toscani, Renyi entropy and improved equilibration rates to self-similarity for nonlinear diffusion equations, *Nonlinearity* **27** (2014) 3159–3177.
C. Castellano, S. Fortunato and V. Loreto, Statistical physics of social dynamics, *Rev. Mod. Phys.* **81** (2009) 591–646.
C. Cercignani, *The Boltzmann equation and its applications*, Springer Series in Applied Mathematical Sciences, Vol.**67** Springer–Verlag, New York 1988.
A. Chakraborti and B.K. Chakrabarti, Statistical Mechanics of Money: Effects of Saving Propensity, [*Eur. Phys. J. B*]{} **17** (2000) 167–170.
S. Chandrasekhar, Stochastic problems in physics and astronomy, *Rev. Modern Phys.* **15** (1943) 1–110.
A. Chatterjee, B.K. Chakrabarti and S.S. Manna, Pareto law in a kinetic model of market with random saving propensity, *Physica A* [**335**]{} (2004) 155–163.
A. Chatterjee, B.K. Chakrabarti and R.B. Stinchcombe, Master equation for a kinetic model of trading market and its analytic solution, [*Phys. Rev. E*]{} **72** (2005) 026126.
V. Comincioli, L. Della Croce and G. Toscani, A Boltzmann-like equation for choice formation, *Kinetic Rel. Mod.* **2** (2009) 135–149.
S. Cordier, L. Pareschi and C. Piatecki, Mesoscopic modelling of financial markets, *J. Stat. Phys.* **134** (1) (2009) 161–184.
S. Cordier, L. Pareschi and G. Toscani, On a kinetic model for a simple market economy, [*J. Stat. Phys.*]{} **120** (2005) 253–277.
E.L. Crow and K. Shimizu eds., *Log-normal distributions: theory and application*. Marcel Dekker, New York NY 1988.
I. Csiszar, Information-type measures of difference of probability distributions and indirect observations, *Stud. Sci. Math. Hung.* **2** (1967) 299–318.
A. Drǎgulescu and V.M. Yakovenko, [Statistical mechanics of money]{}, [*Eur. Phys. Jour. B*]{} **17** (2000) 723–729.
B. D[ü]{}ring, P.A. Markowich, J-F. Pietschmann and M-T. Wolfram, Boltzmann and [F]{}okker-[P]{}lanck equations modelling opinion formation in the presence of strong leaders, (2009) 3687–3708.
B. Düring, D. Matthes and G. Toscani, Kinetic equations modelling wealth redistribution: a comparison of approaches, *Phys. Rev. E* **78** (2008) 056103.
J. Eeckhout, Gibrat’s law for (all) cities, *American Economic Review* **94** (2004) 1429–1451.
G. Furioli, A. Pulvirenti, E. Terraneo and G. Toscani, Fokker–Planck equations in the modelling of socio-economic phenomena, *Math. Mod. Meth. Appl. Scie.* **27** (1) (2017) 115–158.
X. Gabaix, Zipf’s law for cities: an explanation, *Quart. J. Econom.* **114** (1999) 739–767.
S. Galam, Y. Gefen and Y. Shapir, Sociophysics: A new approach of sociological collective behavior. I. Mean-behaviour description of a strike, [*J. Math. Sociology*]{} **9** (1982) 1–13.
S. Galam and S. Moscovici, Towards a theory of collective phenomena: consensus and attitude changes in groups, *Euro. J. Social Psychology* **21** (1991) 49–74.
S. Galam, Rational group decision making: A random field Ising model at $T= 0$. *Physica A* **238** (1997) 66–80.
S. Galam and J.D. Zucker, From individual choice to group decision-making. *Physica A* **287** (2000) 644–659.
U. Garibaldi, E. Scalas and P. Viarengo, Statistical equilibrium in simple exchange games II. The redistribution game, *Eur. Phys. Jour. B* **60**(2) (2007) 241–246.
A. Ghosh, A. Chatterjee, A.S. Chakrabarti and B.K. Chakrabarti, Zipf’s law in city size from a resource utilization model, *Phys. Rev. E* **90** (2014) 042815.
R. Gibrat, *Les inegalites economiques*, Librairie du Recueil Sirey, Paris 1931.
K. Giesen, A. Zimmermann and J. Suedekum, The size distribution across all cities-Double Pareto lognormal strikes, *Journal of Urban Economics* **68** (2010) 129–137.
R. González–Val, A. Ramos, F. Sanz–Gracia and M. Vera–Cabello, Size distributions for all cities: Which one is best?, *Papers in Regional Sciences* **94** (2015) 177–196.
S. Gualandi and G. Toscani, Pareto tails in socio-economic phenomena: a kinetic description. *Economics: The Open-Access, Open-Assessment E-Journal*, 12 (2018-31): 1–17.
S. Gualandi and G. Toscani, Call center service times are lognormal. A Fokker–Planck description, *Math. Mod. Meth. Appl. Scie.* **28** (8) (2018) 1513–1527.

S. Gualandi and G. Toscani, The size distribution of cities: a kinetic explanation. Available at http://arxiv.org/abs/1807.00496 (2018).
S.S. Hirano, E.V. Nordheim, D.C. Arny and C.D. Upper, Log-normal distribution of epiphytic bacterial populations on leaf surfaces, *Applied and Environmental Microbiology* **44** (1982) 695–700.
X. Jin, Y. Zhang, F. Wang, L. Li, D. Yao, Y. Su and Z. Wei, Departure headways at signalized intersections: A log-normal distribution model approach, *Transportation Research Part C* **17** (2009) 318–327.
M. Kac, *Probability and related topics in physical sciences*, With special lectures by G. E. Uhlenbeck, A. R. Hibbs, and B. van der Pol. Lectures in Applied Mathematics. Proceedings of the Summer Seminar, Boulder, Colorado. Interscience Publishers, London-New York, 1959.
D. Kahneman and A. Tversky, Prospect theory: an analysis of decision under risk, *Econometrica* **47** (2) (1979) 263–292.
D. Kahneman and A. Tversky, *Choices, values, and frames*, Cambridge University Press, Cambridge, UK 2000.
K. Kondo, The log-normal distribution of the incubation time of exogenous diseases, *Japanese Journal of Human Genetics* **21** (1977) 217–237.
S. Kullback, A lower bound for discrimination information in terms of variation, *IEEE Trans. Inf. The.* **4** (1967) 126–127.
M. Levy, H. Levy and S. Solomon, A microscopic model of the stock market: Cycles, booms and crashes, *Econ. Lett.* **45** (1994) 103–111.
M. Levy, H. Levy and S. Solomon, [*Microscopic simulation of financial markets: from investor behaviour to market phenomena*]{}, Academic Press, San Diego, CA 2000.
E. Limpert, W.A. Stahel and M. Abbt, Log-normal distributions across the sciences: keys and clues, *BioScience* **51** (5) (2001) 341–352.
C.F. Lo, Dynamics of Fokker–Planck equation with logarithmic coefficients and its application in econophysics, *Chin. Phys. Lett.* **27** (8) (2010) 080503.
J.E. Loper, T.V. Suslow and M.N. Schroth, Log-normal distribution of bacterial populations in the rhizosphere, *Phytopathology* **74** (1984) 1454–1460.
T. Lux and M. Marchesi, Volatility clustering in financial markets: a microscopic simulation of interacting agents, *International Journal of Theoretical and Applied Finance* **3** (2000) 675–702.
T. Lux and M. Marchesi, Scaling and criticality in a stochastic multi-agent model of a financial market, *Nature* **397** (11) (1999) 498–500.
A. Malanca, L. Gaidolfi, V. Pessina and G. Dallara, Distribution of 226-Ra, 232-Th, and 40-K in soils of Rio Grande do Norte (Brazil), *Journal of Environmental Radioactivity* **30** (1996) 55–67.
D. Maldarella and L. Pareschi, Kinetic models for socio–economic dynamics of speculative markets, *Physica A* **391** (2012) 715–730.
M. Marsili and Yi-Cheng Zhang, Interacting individuals leading to Zipf’s Law, *Phys. Rev. Lett.* **80** (1998) 2741–2744.
G. Naldi, L. Pareschi and G. Toscani eds., *Mathematical modeling of collective behavior in socio-economic and life sciences*, Birkhauser, Boston 2010.
B. Oksendal, *Stochastic differential equations. an introduction with applications*, Springer-Verlag, Heidelberg 2013.
F. Otto and C. Villani, Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality, *J. Funct. Anal.* **173** (2000) 361–400.
L. Pareschi and G. Toscani, *Interacting multiagent systems: kinetic equations and Monte Carlo methods*, Oxford University Press, Oxford 2014.
K. Pesz, A class of Fokker-Planck equations with logarithmic factors in diffusion and drift terms, *Journal of Physics A* **35** (8) (2002) 1827–1832.
K. Portier, J.K. Tolson and S.M. Roberts, Body weight distributions for risk assessment, *Risk Analysis* **27** (1) (2007) 11–26.
F.W. Preston, Pseudo-lognormal distributions, *Ecology* **62** (2) (1981) 355–364.
M. Puente-Ajovín and A. Ramos, On the parametric description of the French, German, Italian and Spanish city size distributions. *Ann. Reg. Sci.* **54** (2015) 489–509.
A. Ramos, Are the log-growth rates of city sizes distributed normally? Empirical evidence for the USA. *Empir. Econ.* **53** (2017) 1109–1123.
N.K. Razumovsky, Distribution of metal values in ore deposits, *Comptes Rendus (Doklady) de l’Académie des Sciences de l’URSS* **9** (1940) 814–816.
P.E. Sartwell, The distribution of incubation periods of infectious disease, *American Journal of Hygiene* **51** (1950) 310–318.
P.E. Sartwell, The incubation period and the dynamics of infectious disease, *American Journal of Epidemiology* **83** (1966) 204–216.
E. Scalas, U. Garibaldi and S. Donadio, Statistical equilibrium in the simple exchange games I. Methods of solution and application to the Bennati–Dragulescu–Yakovenko (BDY) game, *Eur. Phys. J. B* **53** (2006) 267–272.
K. Sznajd–Weron and J. Sznajd, Opinion evolution in closed community, *Int. J. Mod. Phys. C* **11** (2000) 1157–1165.
A.A. Toda, A note on the size distribution of consumption: more double Pareto than lognormal. *Macroeconomic Dynamics*, **21** (6) (2017) 1508–1518.
G. Toscani, Sur l’inégalité logarithmique de Sobolev, *CRAS* **324**, Série I (1997) 689–694.
G. Toscani, Entropy production and the rate of convergence to equilibrium for the Fokker-Planck equation, *Quarterly of Appl. Math.* **LVII** (1999) 521–541.
G. Toscani, Kinetic models of opinion formation, *Commun. Math. Sci.* **4** (2006) 481–496.
G. Toscani, C. Brugna and S. Demichelis, Kinetic models for the trading of goods, *J. Stat. Phys*, **151** (2013) 549–566.
G. Toscani, Kinetic and mean field description of Gibrat’s law, *Physica A* **461** (2016) 802–811.
G. Toscani, Sharp weighted inequalities for probability densities on the real line, in preparation.
R. Ulrich and J. Miller, Information processing models generating lognormally distributed reaction times, *Journal of Mathematical Psychology* **37** (1993) 513–525.
C. Villani, Contribution [à]{} l’[é]{}tude math[é]{}matique des [é]{}quations de [B]{}oltzmann et de [L]{}andau en th[é]{}orie cin[é]{}tique des gaz et des plasmas. [*PhD thesis, Univ. Paris-Dauphine*]{} (1998).
G.K. Zipf, *Human behavior and the principle of least effort: An introduction to human ecology* Addison-Wesley, Reading, MA 1949.
[^1]: https://cran.r-project.org/web/packages/mixtools, last visited, July, 17th, 2018.
[^2]: [http://dati.comune.milano.it/data set/ds138-popolazione-matrimoni-celebrati-2003-2015](http://dati.comune.milano.it/data set/ds138-popolazione-matrimoni-celebrati-2003-2015)
[^3]: http://www.istat.it, last visited June, 20th, 2018.
[^4]: http://www.bfs.admin.ch, last visited June, 20th, 2018.
[^5]: https://cran.r-project.org/web/packages/mixtools, last visited, June, 20th, 2018.
---
abstract: 'A collector wishes to collect $m$ complete sets of $N$ distinct coupons. The draws from the population are considered to be independent and identically distributed with replacement, and the probability that a type-$j$ coupon is drawn is denoted by $p_{j}$. Let $T_{m}(N)$ be the number of trials needed for this problem. We present the asymptotics of the expectation (five terms plus an error), the second rising moment (six terms plus an error), and the variance of $T_{m}(N)$ (leading term), as well as its limit distribution as $N\rightarrow \infty$, when $$p_{j}=\frac{a_{j}}{\sum_{j=2}^{N+1} a_{j}}, \,\,\,\text{where}\,\,\, a_{j}=\left(\ln j\right)^{-p}, \,\,p>0.$$ These “log-Zipf" classes of coupon probabilities are not covered by the existing literature, and the present paper comes to fill this gap. We therefore enlarge the classes of coupon probabilities for which the collector’s problem is solved (moments, variance, distribution).'
author:
- |
Aristides V. Doumas$^{1}$ and Vassilis G. Papanicolaou$^{2}$\
Department of Mathematics\
National Technical University of Athens\
Zografou Campus\
157 80 Athens, GREECE\
$^{1}[email protected] $^{2}[email protected]
title: 'The logarithmic Zipf version of the coupon collector’s problem'
---
**Keywords.** Urn problems; coupon collector’s problem; double Dixie cup problem; Gumbel distribution; Laplace method for integrals - Determination of higher order terms; Generalized Zipf law, Eulerian logarithmic integral.\
\
**2010 AMS Mathematics Classification.** 60F05; 60F99; 60G70.
Introduction and Motivation
===========================
The coupon collector’s problem (CCP) is a classic urn problem of probability theory. It refers to a population whose members are of $N$ different *types* (e.g., baseball cards, viruses, fish, words, etc). For $1 \leq j \leq N$ we denote by $p_j$ the probability that a member of the population is of type $j$, where $p_j > 0$ and $\sum_{j=1}^{N}p_{j}=1$. The members of the population are sampled independently *with replacement* and their types are recorded. Naturally, the main object of study is the number $T(N)$ of trials needed until all $N$ types are detected (at least once). The simple case where all the $p_{j}$’s are equal has a long history. It began with A. De Moivre in the eighteenth century and continued with P.S. Laplace (see [@Ho], [@D-H]).\
Later, D.J. Newman and L. Shepp studied the more general problem where the collector’s goal is to complete $m$ sets of all $N$ different coupons (still uniformly distributed), [@N-S]. This problem is known as the double Dixie cup problem, due to a successful marketing policy of the Dixie Cup Company (see [@Ma]). Let $T_{m}(N)$ be the number of trials needed in this case. The main result of [@N-S] was that for any fixed $m$ $$E\left[\, T_m(N)\,\right]= N \ln N + \left(m-1\right) N \ln \ln N + N C_m + o(N)
\label{1}$$ as $N\rightarrow \infty$, where $C_{m}$ is a constant depending on $m$. Soon after, P. Erdős and A. Rényi went a step further and determined the limit distribution of $T_{m}(N)$, as well as the exact value of the constant $C_{m}$; see [@E-R]. They proved that $$C_m = \gamma - \ln \left(m-1\right)!,\label{2}$$ where $\gamma=0.5772\cdots$ is the Euler-Mascheroni constant, and that for every real $y$ the following limiting result holds: $$\lim_{N \rightarrow \infty} P\left\{\frac{T_m(N) - N \ln N - (m - 1) N \ln \ln N + N \ln (m-1)!}{N} \leq y\right\}
= e^{-e^{-y}}
\label{333}$$ (the right-hand side of (\[333\]) is the standard Gumbel distribution function). For the case of unequal coupon probabilities, R.K. Brayton (1963), under the quite restrictive assumption of “nearly equal coupon probabilities", namely $$\lambda(N) :=
\frac{\max_{1\leq j\leq N}{\left\{p_{j}\right\}}}{\min_{1\leq j\leq N}{\left\{p_{j}\right\}}}\leq M < \infty, \qquad\text{independently of $N$,}$$ employed the formulae $$\begin{aligned}
E[T_m(N)]&=\int_{0}^{\infty}\left\{1-\prod_{j=1}^{N}\left[1-S_{m}(p_{j}t) e^{-p_{j}t} \right]\right\} dt,
\label{5}
\\
E\left[T_m(N)\left(T_{m}(N)+1\right)\right]&=2\int_{0}^{\infty}
\left\{1-\prod_{j=1}^{N}\left[1-S_{m}(p_{j}t) e^{-p_{j}t}\right]\right\} t dt
\label{5A}\end{aligned}$$ and obtained [@B] detailed asymptotics of the expectation $E[T_m(N)]$ and the second rising moment $E\left[T_m(N)\left(T_{m}(N)+1\right)\right]$. Here and in what follows $S_{m}(y)$ denotes the $m$-th partial sum of $e^{y}$, namely $$S_{m}(y) := 1+y+\frac{y^{2}}{2!}+\cdots+\frac{y^{m-1}}{\left(m-1\right)!}=\sum_{l=0}^{m-1}\frac{y^l}{l!}\,.
\label{7}$$ As for the asymptotics of the variance, he treated only the case $m=1$, where he found the formula $$V\left[\, T_1(N)\,\right]
= N^{2}\left[\frac{\pi^{2}}{6} + O\left(\frac{\ln \ln \ln N}{\ln \ln N}\right)\right] \quad \text{as}\quad N\rightarrow \infty.$$ For the case of unequal coupon probabilities and for $m=1$, general results have been published in [@DP] and [@DPM], while for general (but fixed) values of $m$ a paper of ours has recently been uploaded to the *arXiv*, [@MSETS]. Since our motivation arises from these works, we briefly present their results. Let $\alpha =\{a_{j}\}_{j=1}^{\infty }$ be a sequence of strictly positive numbers. Then, for each integer $N > 0$, one can create a probability measure $\pi _N =\{p_1,...,p_N\}$ on the set of types $\{1,...,N\}$ by taking $$p_j = \frac{a_j}{A_N},
\qquad \text{where}\quad
A_N = \sum_{j=1}^N a_j.
\label{8}$$ Notice that $p_j$ depends on $\alpha $ and $N$, thus, given $\alpha $, it makes sense to consider the asymptotic behavior of $E\left[\, T_m(N)\,\right]$, $E\left[\,T_m(N)\left(T_{m}(N)+1\right)\,\right]$, and $V\left[\, T_m(N)\,\right]$ as $N\rightarrow \infty$. It follows that $$E\left[\, T_m(N)\,\right] =A_{N}\,E_{m}(N;\alpha),
\label{12}$$ $$E\left[\,T_m(N)\left(T_{m}(N)+1\right)\,\right] =A^{2}_{N}\,Q_{m}(N;\alpha),
\label{15}$$ where $$\begin{aligned}
E_{m}(N;\alpha ):&=\int_{0}^{\infty}\left[1-\prod_{j=1}^{N}\bigg(1-e^{-a_{j}t}\,S_{m}\left(a_{j}t\right)\bigg)\right]dt, \label{9} \\
Q_{m}(N;\alpha ):&=2\int_{0}^{\infty}t\left[1-\prod_{j=1}^{N}\bigg(1-e^{-a_{j}t}\,S_{m}\left(a_{j}t\right)\bigg)\right]dt. \label{13}\end{aligned}$$ Let $$L_{1}(\alpha;m ):=\lim_{N}E_{m}(N;\alpha )\,\,\,\text{and}\,\,\,L_{2}(\alpha;m ):=\lim_{N}Q_{m}(N;\alpha ).
\label{17}$$ The sequences $\alpha=\left\{a_{j}\right\}_{j=1}^{\infty}$ were separated as follows: $$\text{\textbf{(Case I)}}\,\,\,\sum_{j = 1}^\infty e^{-a_j \tau} < \infty\,\,\,\,\text{for some}\,\, \tau>0.$$ Notice that Case I is equivalent to $L_1 (\alpha;m)<\infty$ and $L_2 (\alpha;m)< \infty$. As it turned out the leading term of both the expectation and the second (rising) moment of $T_{m}(N)$ is enough to obtain the leading asymptotics of its variance. As for the distribution of $T_m(N)$, for all $s \in [0, \infty)$ one has $$P\left\{\frac{T_m(N)}{A_N} \leq s \right\}\rightarrow F(s) := \prod_{j=1}^{\infty}\left[1 - S_{m}(a_j s) e^{-a_j s} \right],
\qquad
N \rightarrow \infty,$$ where $S_m(\,\cdot \,)$ is given by (\[7\]).
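Formulae (\[5\]) and (\[12\]) lend themselves to a direct numerical check. The Python sketch below is our own illustration: it evaluates $E[T_m(N)]$ through the integral (\[5\]), compares the result for equal probabilities and $m=1$ with the classical closed form $N H_N$ ($H_N$ the $N$-th harmonic number), and cross-checks the linear case $a_j=j$ against a Monte Carlo simulation of the collection process; the values of $N$ and the number of runs are arbitrary choices.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def S(m, y):
    # m-th partial sum of e^y, eq. (7); works elementwise on arrays
    return sum(y**l / factorial(l) for l in range(m))

def expected_T(m, p):
    # E[T_m(N)] via the integral formula (5); p = coupon probabilities
    p = np.asarray(p, dtype=float)
    integrand = lambda t: 1.0 - np.prod(1.0 - S(m, p * t) * np.exp(-p * t))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

# Check 1: equal probabilities and m = 1 -> the classical answer N * H_N.
N = 10
p_eq = np.full(N, 1.0 / N)
harmonic = N * sum(1.0 / k for k in range(1, N + 1))

# Check 2: linear case a_j = j (Case I), m = 1 -> compare formula (12),
# E[T_1(N)] = A_N * E_1(N; alpha), with a direct simulation.
a = np.arange(1, 31, dtype=float)
A_N = a.sum()
p_lin = a / A_N

def collect_once(rng):
    # draw coupons until every one of the N types has been seen once
    seen = np.zeros(len(p_lin), dtype=bool)
    t = 0
    while True:
        for d in rng.choice(len(p_lin), size=1024, p=p_lin):
            t += 1
            if not seen[d]:
                seen[d] = True
                if seen.all():
                    return t

rng = np.random.default_rng(42)
sim_mean = np.mean([collect_once(rng) for _ in range(1500)])
print(expected_T(1, p_eq), harmonic)      # should agree
print(sim_mean, expected_T(1, p_lin))     # should agree up to MC error
```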
Examples of sequences falling in this case are $a_{j}=j^{p}$, $p>0$ (for $p=1$ we have the so-called *linear case*), $b_{j}=e^{p j}$, $p>0$, and $c_{j}=j!$. $$\text{\textbf{(Case II)}}\,\,\,\sum_{j = 1}^\infty e^{-a_j \tau} = \infty\,\,\,\,\text{for all}\,\, \tau>0,$$ which is equivalent to $L_1 (\alpha;m)=L_2 (\alpha;m)=\infty$. In order to make some progress one has to make some assumptions for the sequence $\alpha=\left\{a_{j}\right\}_{j=1}^{\infty}$. If we write $a_{j}$ as $$a_{j}=f(j)^{-1},
\label{aj}$$ where $$f(x) > 0 \qquad \text{and} \qquad f'(x) > 0,$$ and assume that $f(x)$ possesses three derivatives satisfying the following conditions as $x\rightarrow \infty$: $$\begin{aligned}
\text{(i) }& f(x)\rightarrow \infty,
& &\text{(ii) } \frac{f^{\prime }(x)}{f(x)}\rightarrow 0,\nonumber \\
\text{(iii) } &\frac{f^{\prime \prime}(x)/f^{\prime }(x)}{f'(x)/f(x)} = O\left(1\right),
& &\text{(iv) } \frac{f^{\prime \prime\prime}(x)\;f(x)^{2}}{ f^{\prime }(x)^{3}} = O\left(1\right),
\label{C1}\end{aligned}$$ then the asymptotics of the expectation of $T_{m}(N)$ (up to the fifth term) and of its second rising moment (up to the sixth term) were obtained. These results were needed in order for the leading asymptotics of the variance $V[\,T_{m}(N)\,]$ to appear. As for the limiting distribution, it turned out that the random variable $T_{m}(N)$ (under the appropriate normalization) converges in distribution to a Gumbel random variable.
**Remark 1.** Roughly speaking, $f(\cdot)$ belongs to the class of positive and strictly increasing functions, which grow to $\infty$ (as $x \rightarrow \infty$) *slower than exponentials, but faster than powers of logarithms*.
In particular, $(ii)$ is a subexponential growth condition. Conditions $(iii)$ (mainly) and $(iv)$ make the above remark on the growth of $f(\cdot)$ precise. These conditions are satisfied by a variety of commonly used functions. For example, $$f(x) = x^p (\ln x)^q, \quad p > 0,\ q \in \mathbb{R},\qquad \qquad
f(x) = \exp(x^{r}),\quad 0 < r < 1,$$ or various convex combinations of products of such functions.\
In particular, when $$f(x)=x^{p},\,\, p>0,$$ that is, when the coupon probabilities are $$p_{j}=\frac{a_{j}}{\sum_{j=1}^{N}a_{j}},\,\,\,\,a_{j}=\frac{1}{j^{p}},\,\,\, p>0 \label{Z}$$ we have the so-called *generalized Zipf distribution*, a surprising law which has attracted the interest of many researchers, mainly due to its applications in computer science and linguistics (the literature on the Zipf law is extensive). In the context of the CCP, the leading asymptotics of the expectation of $T_{1}(N)$ under the standard Zipf distribution (that is, the case $p=1$) with $m=1$ were first studied by Flajolet *et al*, see [@F-G-T].\
To summarize, we have an answer for the asymptotics of the expectation and the second rising moment of $T_{m}(N)$, as well as the leading asymptotics of the variance $V[\,T_{m}(N)\,]$, and its limiting distribution for rich classes of coupon probabilities. Moreover, even exponential sequences belong to the classes for which we are able to solve our problem. For example, the sequence $\beta = \{e^{-pj} \}_{j=1}^{\infty}$, $p > 0$ falls into Case II; but condition $(ii)$ of (\[C1\]) is violated. However, if one considers the sequence $\alpha = \{e^{pj} \}_{j=1}^{\infty}$ it is immediate that $\alpha$ and $\beta$ produce the same coupon probabilities, and since $\alpha$ falls into Case I, a solution to our problem exists.\
A question arises naturally: can we extend the classes of functions $f(\cdot)$? What happens if our function grows as a power of a logarithm?
**Problem.** What can be said about the moments, the variance, and the distribution of the random variable $T_{m}(N)$ when $f(x)= \ln x$, or more generally when $f(x)= (\ln x)^{p}$, $p>0$? In other words, what can be said in the case where the coupon probabilities satisfy $$p_{j}=\frac{a_{j}}{\sum_{j=2}^{N+1} a_{j}}, \,\,\,\text{where}\,\,\, a_{j}=\left(\ln j\right)^{-p}, \,\,p>0. \label{cc}$$ **Remark 2.** Formulae (\[Z\]) and (\[cc\]) explain the title of this paper.
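The quantity in question is straightforward to simulate. A minimal Monte Carlo sketch of $T_{m}(N)$ under the weights of (\[cc\]) (all parameter values below are merely illustrative):

```python
import math
import random
from itertools import accumulate

def sample_T(N, m, p, rng):
    # one realization of T_m(N): draw coupons with probabilities
    # proportional to (ln j)^{-p}, j = 2..N+1, until every one of the
    # N types has been collected at least m times
    weights = [math.log(j) ** (-p) for j in range(2, N + 2)]
    cum = list(accumulate(weights))
    counts = [0] * N
    missing = N          # types still short of m copies
    trials = 0
    while missing:
        trials += 1
        i = rng.choices(range(N), cum_weights=cum)[0]
        counts[i] += 1
        if counts[i] == m:
            missing -= 1
    return trials

rng = random.Random(0)
t = sample_T(50, 2, 1.0, rng)   # one sample with N = 50, m = 2, p = 1
```

Precomputing the cumulative weights lets `random.choices` draw each coupon by bisection rather than re-normalizing the weights at every trial.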
Discussion and main results
===========================
Consider the case $a_{j}=\left(\ln j\right)^{-p}, \,\,p>0$. Clearly, $$\sum_{j=2}^{\infty}e^{-\tau\left(\ln j\right)^{-p}}=\infty \quad \text{for all}\quad \tau>0.$$ Therefore, these sequences fall into Case II. However, conditions $(iii)$ and $(iv)$ of (\[C1\]) are violated. In view of (\[cc\]), (\[12\]), and (\[5\]) we get $$\begin{aligned}
E[\,T_m(N)\,]&=\left(\sum_{j=2}^{N+1}\left(\ln j\right)^{-p}\right)\int_{0}^{\infty}\left\{1-\prod_{j=2}^{N+1}
\left[1-S_{m}\bigg(t \left(\ln j\right)^{-p}\bigg) e^{-t\, \left(\ln j\right)^{-p}} \right]\right\} dt.
\label{exp}\end{aligned}$$ **Remark 3.** From here and in what follows we replace $N+1$ by $N$, in both the sum and the integral above, without loss of information regarding the asymptotics of $E[\,T_m(N)\,]$.\
\
The sum $\sum_{j=2}^{N}\left(\ln j\right)^{-p}$ in (\[exp\]) is easy to handle. In fact, one may obtain its full asymptotic expansion by using the Euler-Maclaurin summation formula, which reduces the sum to the associated integral $\int_{2}^{N}\left(\ln x\right)^{-p}dx$, and then applying repeated integration by parts (see [@B-O]). In particular, for $p=1$ we get the so-called *offset logarithmic integral* or *Eulerian logarithmic integral*, which is a very good approximation to the number of prime numbers less than $N$ (i.e., $\pi (N)\sim \int_{2}^{N}\left(\ln x\right)^{-1}dx$). We get $$A_N = \sum_{j=2}^N \frac{1}{(\ln j)^p} = \frac{N}{(\ln N)^p} + \frac{p\,N}{(\ln N)^{p+1}}
+\frac{p\left(p+1\right)\,N}{(\ln N)^{p+2}} + O\left( \frac{N}{(\ln N)^{p+3}} \right).
\label{SD4}$$ The integral appearing in (\[exp\]) is $E_{m}(N;\alpha )$ of (\[13\]) and is our main task. Our approach lies in three steps.\
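As a quick sanity check before proceeding, the expansion (\[SD4\]) is easy to confirm numerically (the choices $p=1$, $N=10^{5}$ below are illustrative):

```python
import math

N, p = 10**5, 1
A = sum(math.log(j) ** (-p) for j in range(2, N + 1))   # A_N of (SD4)
L = math.log(N)
# first three terms of the expansion (SD4)
approx = N / L**p * (1 + p / L + p * (p + 1) / L**2)
rel_err = abs(A - approx) / A
```

The omitted terms are all positive, so the three-term approximation undershoots slightly; at $N=10^{5}$ the relative error is already below one percent.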
**Step 1** is a change of variables $$t=g(N)\,s$$ where $$\lim_{N} g(N)=\infty.$$ There are infinitely many possible choices for $g(N)$, but a convenient one is $$g(N)=(\ln N)^{p+1},$$ which makes things simpler by invoking (\[SD4\]). Thus, $$\begin{aligned}
E[\,T_m(N)\,]&=\bigg(N\ln N + p\,N+p\left(p+1\right)\frac{N}{\ln N}+ O\left( \frac{N}{\left(\ln N\right)^{2}}\right)\bigg) \nonumber\\
&\times \int_{0}^{\infty}\left\{1-\exp \Bigg( \sum_{j=2}^{N}\ln
\left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right]\Bigg)\right\} ds.
\label{EXP}\end{aligned}$$ **Step 2** concerns the asymptotics (as $N\rightarrow \infty$) of the integral $$I_{k}(N):=\int_{2}^{N} \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln x\right)^{p}}\,s\bigg)}
\frac{dx}{\left(\ln x\right)^{kp}}, \quad k=0,1,\cdots,m-1,\,\,\,p>0.
\label{LL2}$$
**Lemma 2.1.** As $N\rightarrow \infty$, $$I_{k}(N)=N^{1-s}\left(\ln N\right)^{-k p}\left[\frac{1}{1+ps}+\frac{k p}{\left(1+ps\right)^{2}\ln N}-\frac{p\left(p+1\right)s}{\left(1+ps\right)^{3}\ln N}\left(1+O\left(\frac{1}{\ln N}\right)\right)\right],$$
uniformly in $s\in [s_{0},\infty)$, for any fixed $s_{0} > 0 $.
All the proofs of this paper are gathered in Section 3. For now we only wish to note that the main tool to estimate the integral above is the Laplace method for integrals, adapted for the determination of higher-order terms. Hence, $$\lim_{N}\int_{2}^{N}\exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln x\right)^{p}}s\bigg)} S_{1}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln x\right)^{p}}s\bigg)dx =\left\{
\begin{array}{rcc}
\infty,& \text{if } s<1, \\
(1+p)^{-1},& \text{if } s=1, \\
0,& \text{if } s> 1,
\end{array}
\right. \label{LL2a}$$ while for $m\geq 2$ $$\lim_{N}\int_{2}^{N}\exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln x\right)^{p}}s\bigg)} S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln x\right)^{p}}s\bigg)dx
=\left\{
\begin{array}{rcc}
\infty,& \text{if } s\leq 1, \\
0,& \text{if } s> 1.
\end{array}
\right.
\label{LL1}$$ Now from the comparison of sums and integrals it follows that the limits above are valid, if the integral is replaced by the associated sum. Moreover, from the Taylor expansion for the logarithm, namely $\ln(1-x)\sim-x$ as $x\rightarrow0$, one gets the corresponding limits, e.g. for all $m\geq 2$ $$\lim_{N}\sum_{j=2}^{N}\ln \left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right] =\left\{
\begin{array}{rc}
-\infty,& \text{if } s<1 \\
0,& \text{if } s\geq1.
\end{array}
\right. \label{SL1A}$$ The limit above drives us to **Step 3**. This is actually a method we proposed recently in [@DP]. We do not claim that this method is new; moreover, even though there is *no guarantee* that it can be applied to our problem (since conditions (\[C1\]) are violated), it turns out that it leads to a solution. We will briefly discuss it here and complete the proof in the next section. Let us denote by $\tilde{E}_{m}(N;\alpha)$ the integral appearing in (\[EXP\]). For any given $\varepsilon \in (0,1)$ one has $$\begin{aligned}
\tilde{E}_{m}(N;\alpha)= \left[\,1+\varepsilon -I_1 (N)-I_2 (N)+I_3 (N)\,\right],
\label{b3a}\end{aligned}$$ where $$\begin{aligned}
I_1 (N):&= \int_0^{1-\varepsilon} e^{M_{m}(N;s)}\, ds,\label{I1}\\
I_{2}(N): &= \int_{1-\varepsilon}^{1+\varepsilon } e^{M_{m}(N;s)}\, ds,\label{I2}\\
I_{3}(N): &=\int_{1+\varepsilon}^{\infty } \left[1-e^{M_{m}(N;s)}\right] ds,\label{I3}\end{aligned}$$ and $$M_{m}(N;s) :=\sum_{j=2}^N \ln \left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right].
\label{AsN}$$ The heart of Step 3 is that $I_{3}(N)$ and $I_{1}(N)$ are dominated by the sixth term in the asymptotics of $I_{2}(N)$ as $N\rightarrow \infty$. Intuitively one expects that the main contribution to $\tilde{E}_{m}(N;\alpha)$ should come from $I_{2}(N)$ (due to the limit in (\[SL1A\])), and this indeed turns out to be the case. The analysis of $I_{2}(N)$ relies on Lemma 2.1 (which supplies the critical contribution), as well as on classical techniques of asymptotic analysis. The computations needed are often quite involved.
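The decomposition (\[b3a\]) can be probed numerically at moderate $N$. The sketch below (crude trapezoidal quadrature; the choices $N=10^{4}$, $m=2$, $p=1$, $\varepsilon=0.3$ are illustrative, and $S_m$ is the partial exponential sum of (\[7\])) already shows $I_{2}(N)$ dominating:

```python
import math

def S(m, y):
    # partial exponential sum S_m(y) = sum_{k=0}^{m-1} y^k / k!
    total, term = 0.0, 1.0
    for k in range(m):
        total += term
        term *= y / (k + 1)
    return total

def M(N, m, p, s, logs):
    # M_m(N; s) of (AsN)
    L = math.log(N) ** (p + 1)
    tot = 0.0
    for lj in logs:
        b = L * s / lj ** p
        tot += math.log(1.0 - S(m, b) * math.exp(-b))
    return tot

def trapz(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

N, m, p, eps = 10**4, 2, 1.0, 0.3
logs = [math.log(j) for j in range(2, N + 1)]
g = lambda s: math.exp(M(N, m, p, s, logs))
I1 = trapz(g, 0.05, 1 - eps, 13)              # the integrand vanishes near s = 0
I2 = trapz(g, 1 - eps, 1 + eps, 12)
I3 = trapz(lambda s: 1.0 - g(s), 1 + eps, 3.0, 34)
```

At this modest $N$ the separation is already visible: $I_{1}$ is numerically zero and $I_{3}$ is noticeably smaller than $I_{2}$.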
**(Main result I)**\
Let $T_m(N)$ be the number of trials a collector needs to complete $m$ sets of $N$ different types of coupons with replacement. If the coupon probabilities satisfy $$p_{j}=\frac{a_{j}}{\sum_{j=2}^{N}a_{j}}, \quad\text{where}\quad a_{j}=\left(\ln j\right)^{-p}, \,\,\,p>0,$$ then the asymptotics of the average of $T_m(N)$ (as $N\rightarrow \infty$) satisfy $$\begin{aligned}
E\left[\, T_m(N)\,\right] = N \ln N &+ \left(m-1\right)N \ln\ln N + \left[\,p+\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)\,\right]\,N \nonumber \\
&-(m-1)\left[\frac{p}{p+1}-\left(m-1\right)-p\right]\,\frac{\ln\ln N}{\ln N}\,N\nonumber \\
&+N\left[p\left(p+1\right)-p\,\bigg(\ln\left(m-1\right)!+\ln\left(p+1\right)-\gamma \bigg)\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\bigg(\frac{p}{p+1}-\left(m-1\right)\bigg)\times\left[\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)\right.\right.\nonumber\\
&\left.\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\frac{1}{\left(p+1\right)^{2}}\left(\frac{m-1}{p+1}-\frac{p+1}{p}-3\left(\frac{p}{p+1}\right)^{2}\right)\right]\right]\nonumber\\
&+O\left(\frac{\ln\ln N}{\left(\ln N\right)^{2}}\,N\right),
\label{R1}\end{aligned}$$ where $\gamma$ is, as usual, the Euler-Mascheroni constant.
**Remark 4.** Notice that the expected value in (\[R1\]) is slightly bigger than the corresponding expected value for the case of equal coupon probabilities (recall –), due to the term $p - \ln(p+1)$, which is strictly positive for all $p > 0$. This is in accordance with the statement that, for fixed positive integers $m$ and $N$, the case of equal probabilities is the one with the stochastically smallest $T_m(N)$. This result is due to [@MO].
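The stochastic comparison can be illustrated by simulation; below is a sketch contrasting sample means of $T_{1}(N)$ under equal and log-Zipf probabilities (the parameters $N=100$, $p=1$ and the number of runs are purely illustrative):

```python
import math
import random
from itertools import accumulate

def collect_once(cum, N, rng):
    # trials needed to see all N types at least once,
    # drawing with the given cumulative weights
    seen, trials = set(), 0
    while len(seen) < N:
        trials += 1
        seen.add(rng.choices(range(N), cum_weights=cum)[0])
    return trials

N, p, runs = 100, 1.0, 300
rng = random.Random(7)
cum_eq = list(range(1, N + 1))                    # equal weights 1, 1, ..., 1
cum_lz = list(accumulate(math.log(j) ** (-p) for j in range(2, N + 2)))
mean_eq = sum(collect_once(cum_eq, N, rng) for _ in range(runs)) / runs
mean_lz = sum(collect_once(cum_lz, N, rng) for _ in range(runs)) / runs
```

With these (hypothetical) parameters the log-Zipf sample mean exceeds the equal-probabilities one by a comfortable margin, in line with the stochastic-ordering result of [@MO].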
**(Main result II)**\
For the second (rising) moment of the random variable $T_m(N)$ we have the following asymptotic expression as $N\rightarrow \infty$ $$\begin{aligned}
E\left[\,T_m(N)\left(T_{m}(N)+1\right)\,\right]& = N^{2} \left(\ln N\right)^{2} +2\left(m-1\right)N^{2}{\ln N}\left(\ln\ln N\right)\nonumber \\
&+2 \left[\,p+\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)\,\right]\,N^{2}\ln N \nonumber \\
&+\left(m-1\right)^{2}\,N^{2}\left(\ln\ln N\right)^{2}\nonumber \\
&-2\left(m-1\right)\left(\frac{p}{p+1}-\left(m-1\right)-\gamma-2p\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\ln\left(m-1\right)!+\ln\left(p+1\right)\right)N^{2}\ln\ln N\nonumber \\
&+N^{2}\left[p^{2}+2p\left(p+1\right)-2\left(2p+\gamma\right)\,\bigg(\ln\left(m-1\right)!+\ln\left(p+1\right)\bigg)\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+4p\gamma-\bigg(\ln\left(m-1\right)!+\ln\left(p+1\right)\bigg)^{2}+\gamma^{2}+\frac{\pi^{2}}{6}\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-2\bigg(\frac{p}{p+1}-\left(m-1\right)\bigg)\times\left[\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)\right.\right.\nonumber\\
&\left.\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\frac{1}{\left(p+1\right)^{2}}\left(\frac{m-1}{p+1}-\frac{p+1}{p}-3\left(\frac{p}{p+1}\right)^{2}\right)\right]\right]\nonumber\\
&+O\left(\frac{\left(\ln\ln N\right)^{2}}{\ln N}\,N^{2}\right).
\label{R2}\end{aligned}$$
**(Main result III)**\
Let $T_m(N)$ be the number of trials a collector needs to complete $m$ sets of $N$ different types of coupons with replacement ($m$ is a fixed positive integer). When the coupon probabilities satisfy $$p_{j}=\frac{a_{j}}{\sum_{j=2}^{N}a_{j}}, \quad\text{where}\quad a_{j}=\left(\ln j\right)^{-p}, \,\,\,p>0,$$ we have, as $N\rightarrow \infty$, $$V\left[\,T_m(N)\,\right] \sim \frac{\pi^2}{6}\;N^2
\label{FINAL}$$ [independently]{} of the value of the positive integer $m$.
Having detailed asymptotics for $E\left[\,T_m(N)\,\right]$ and the leading asymptotics for the variance $V\left[\,T_m(N)\,\right]$, we take advantage of a well-known and very general limit theorem of P. Neal (see Section 3), and present the following
**(Main result IV)**\
Suppose the coupon probabilities $p_{j}$ come from the sequence $\alpha = \{a_j = (\ln j)^{-p}\}_{j=2}^{\infty}$ for some $p > 0$, $p_{j}=a_{j}/\sum_{j=2}^{N}a_{j}$. Then, for all $y \in \mathbb{R}$ and for every positive integer $m$ we have as $N \rightarrow \infty$ $$P\left\{\frac{T_m(N) - N \ln N - (m-1) N \ln\ln N - \left[\gamma + p - \ln\bigg((p+1)(m-1)!\bigg) \right] N }{N} \leq y \right\}\rightarrow e^{-e^{-y}}.
\label{SD1a}$$ That is, the random variable $T_{m}(N)$ (under the normalization above) converges in distribution to a Gumbel random variable.
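Main result IV can be read as a plug-in approximation for the distribution function of $T_{m}(N)$; a small helper (a sketch, transcribing the normalization of (\[SD1a\]); the evaluation points below are arbitrary):

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def approx_cdf(x, N, m, p):
    # Gumbel approximation of P{T_m(N) <= x} from Main result IV
    shift = (N * math.log(N)
             + (m - 1) * N * math.log(math.log(N))
             + (GAMMA + p - math.log((p + 1) * math.factorial(m - 1))) * N)
    y = (x - shift) / N
    return math.exp(-math.exp(-y))

vals = [approx_cdf(x, 365, 1, 1.0) for x in (1500, 2500, 3500, 4500)]
```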
Final comments
--------------
The main task of this paper is to enlarge the classes of coupon probabilities for which we have an answer to the collector’s problem (and, in general, to the Dixie cup problem) for the average, the variance and the limiting distribution. Since the full asymptotic expansion of $\sum_{j=2}^{N}\left(\ln j\right)^{-p}$ is available, our approach is analytic (continuous): we approximate sums by integrals. For example, a key formula is the following limit, involving $M_{m}(N;s)$ of (\[AsN\]) and valid for $m\geq2$: $$\lim_{N}\sum_{j=2}^{N}\ln \left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right] =\left\{
\begin{array}{rc}
-\infty,& \text{if } s<1 \\
0,& \text{if } s\geq1.
\end{array}
\right.$$ As for the corresponding integrals, we apply the Laplace method for the determination of higher-order terms. The analysis of these integrals is complicated. We build on the method proposed in previous works of ours, even though the original conditions are violated, so that a priori this approach was not guaranteed to lead to a solution. We believe that this method could be valuable for future researchers seeking to further enlarge the classes of distributions for this problem.\
Let us now comment on the moments of the random variable $T_{m}(N)$. In view of (\[I2\]) and (\[I5\]) (see Section 3), the key integral for the $r$-th rising moment of $T_{m}(N)$ should be $$\begin{aligned}
I(N): &= \int_{1-\varepsilon}^{1+\varepsilon } s^{r-1}\,e^{M_{m}(N;s)}\, ds,\end{aligned}$$ where $$M_{m}(N;s) :=\sum_{j=2}^N \ln \left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right].$$ To close, let us illustrate a concrete instance of Main result IV, motivated by the following example from Feller [@F] (which also appears in Durrett [@D]):
**Example.** What is the probability that in a village of $2190\, (=6 \cdot 365)$ people all birthdays are represented? Is the answer much different for $1825\, (=5\cdot 365)$ people?
We will answer this for both the uniform and the log-Zipf distribution.\
In the case of equal probabilities we apply the result of P. Erdős and A. Rényi, see (\[333\]) and get (since $m=1$) $$\begin{aligned}
P\left(T_{\text{equal}}(365)\leq 2190\right)&=&P\left(\left(T_{\text{equal}}(365)-2153\right)/365\leq37/365\right) \\
&\approx&\exp(-e^{-0.1014})=\exp(-0.9036)=0.4051.\end{aligned}$$ On the other hand $$\begin{aligned}
P\left(T_{\text{equal}}(365)\leq 1825\right)&=&P\left(\left(T_{\text{equal}}(365)-2153\right)/365\leq -328/365\right) \\
&\approx&\exp(-e^{0.8986})=\exp(-2.4562)=0.085.\end{aligned}$$ For the case $$p_{j}=\frac{a_{j}}{\sum_{j=2}^{366}a_{j}}, \quad\text{where}\quad a_{j}=\left(\ln j\right)^{-1},$$ we apply Main result IV (with $m=1$ and $p=1$, so that the constant in the normalization is $\gamma+1-\ln 2$). We have $N=365$, $N\ln N=2153$, $\left(\gamma+1-\ln 2\right)365=322.685$ and get $$\begin{aligned}
P\left(T_{\text{Log Zipf}}(365)\leq 2190\right)&=&P\left(\left(T_{\text{Log Zipf}}(365)-2153-322.685\right)/365\leq(-285.685)/365\right) \\
&\approx&\exp(-e^{0.78270})=0.112\end{aligned}$$ and $$\begin{aligned}
P\left(T_{\text{Log Zipf}}(365)\leq 1825\right)&=&P\left(\left(T_{\text{Log Zipf}}(365)-2153-322.685\right)/365\leq(-650.685)/365\right) \\
&\approx&\exp(-e^{1.78270})=0.00261\end{aligned}$$ Notice that for the equal case, we have the following ratio $$P\left(T_{\text{equal}}(365)\leq 2190\right)/P\left(T_{\text{equal}}(365)\leq 1825\right)=4.77$$ while for the Log Zipf case we get $$P\left(T_{\text{Log Zipf}}(365)\leq 2190\right)/P\left(T_{\text{Log Zipf}}(365)\leq 1825\right)=42.9.$$
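The equal-probabilities numbers above can be reproduced in a few lines from the Erdős–Rényi limit, which for $m=1$ reads $P\{T_{\text{equal}}(N)\le x\}\approx\exp\left(-e^{-(x-N\ln N)/N}\right)$:

```python
import math

def er_cdf(x, N):
    # Erdos-Renyi approximation of P{T_1(N) <= x} under equal probabilities
    return math.exp(-math.exp(-(x - N * math.log(N)) / N))

p_2190 = er_cdf(2190, 365)
p_1825 = er_cdf(1825, 365)
```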
Proofs
======
**Proof of Lemma 2.1**. From (\[LL2\]) we easily have $$I_{k}(N)=\int_{\ln 2}^{\ln N} \exp{\bigg(-\frac{(\ln N)^{p+1}}{ y^{p}}\,s\bigg)}
\frac{e^{y}}{y^{kp}}dy.$$ The substitution $y=\left(s^{1/(p+1)} \ln N\right) t $ yields $$I_{k}(N)=\frac{s^{\frac{1-k p}{p+1}}\,\ln N}{\left(\ln N\right)^{k p}}\int_{a\, s^{-1/(p+1)}}^{s^{-1/(p+1)}}
\exp{\bigg(s^{1/(p+1)}\ln N \left(t-t^{-p}\right)\bigg)}
\frac{dt}{t^{kp}},$$ where $a=\ln2/\ln N$. For convenience we denote the integral above by $\tilde{I}_{k}(N)$. Now, as long as $s\geq s_{0}>0$ for any fixed $s_{0}$, we have $$\lim_{N} s^{1/(p+1)}\ln N=\infty,\,\,\, \text{for all}\,\, p>0.$$ Moreover, the function $$\phi (t):=t-t^{-p}$$ attains its maximum value at the endpoint $t_{0}=s^{-1/(p+1)}$. Hence, only the immediate neighborhood of $t_{0}$ contributes to the full asymptotic expansion of $\tilde{I}_{k}(N)$. Set $h(t):=t^{-k p}$. Careful application of Laplace’s method for integrals (for the determination of higher-order terms) drives us to approximate $\phi(t)$ by $\phi(t_{0})+(t-t_{0})\phi^{\prime}(t_{0})+\frac{1}{2} (t-t_{0})^{2}\phi^{\prime\prime}(t_{0})$ and $h(t)$ by $h(t_{0})+(t-t_{0})h^{\prime}(t_{0})+\frac{1}{2} (t-t_{0})^{2}h^{\prime\prime}(t_{0})$. Then, $$\begin{aligned}
\tilde{I}_{k}(N)\sim&\int_{t_{0}-\epsilon}^{t_{0}} \left[h(t_{0})+(t-t_{0})h^{\prime}(t_{0})+\frac{1}{2} (t-t_{0})^{2}h^{\prime\prime}(t_{0})\right]\\
&\times\exp{\bigg(s^{1/(p+1)}\ln N \left[\phi(t_{0})+(t-t_{0})\phi^{\prime}(t_{0})+\frac{1}{2} (t-t_{0})^{2}\phi^{\prime\prime}(t_{0})\right]\bigg)}
dt.\end{aligned}$$ Because $\epsilon$ may be chosen small, we Taylor expand the term $$\exp\left[s^{1/(p+1)}\ln N \frac{1}{2} (t-t_{0})^{2}\phi^{\prime\prime}(t_{0})\right].$$ Substituting this expansion in the above, then collecting powers of $(t-t_{0})$, and finally, extending the range of integration to $(-\infty,t_{0}]$, yields $$\begin{aligned}
\tilde{I}_{k}(N)\sim\,&e^{s^{1/(p+1)}\ln N\,\phi(t_{0})}\int_{-\infty}^{t_{0}}e^{s^{1/(p+1)}\ln N\,(t-t_{0})\,\phi^{\prime}(t_{0})}\\
&\times\left[h(t_{0})+(t-t_{0})h^{\prime}(t_{0})+\frac{1}{2} (t-t_{0})^{2}\left(h^{\prime\prime}(t_{0})+s^{1/(p+1)}\ln N\, h(t_{0})\,\phi^{\prime\prime}(t_{0})
\right)+\cdots\right] dt,\end{aligned}$$ and the proof is completed by evaluating the above integral. For more details on this method, see, e.g., [@B-O].\
\
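The leading term of Lemma 2.1 is easy to check against direct quadrature. A sketch for the (illustrative) choices $k=0$, $p=1$, $s=1.5$, $N=10^{6}$, working with the transformed integral $\int_{\ln 2}^{\ln N} e^{\,u-B/u^{p}}\,u^{-kp}\,du$, $B=s(\ln N)^{p+1}$, from the proof above:

```python
import math

N, p, s, k = 10**6, 1, 1.5, 0
L = math.log(N)
B = s * L ** (p + 1)

# composite trapezoidal rule over u = ln x
a, b, n = math.log(2.0), L, 20000
h = (b - a) / n
f = lambda u: math.exp(u - B / u**p) / u ** (k * p)
quad = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# leading term of Lemma 2.1
leading = N ** (1 - s) * L ** (-k * p) / (1 + p * s)
ratio = quad / leading
```

The ratio comes out slightly below $1$, consistent with the negative $O(1/\ln N)$ correction in the lemma for $k=0$.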
**Proof of main result I**. To analyse (\[b3a\]) we will start from $I_{2}(N)$ (see (\[I2\])) and obtain the first five terms in its asymptotic expansion (plus an error). Then we will calculate the leading term of $I_{3}(N)$ and prove that it is negligible compared to the sixth term of $I_{2}(N)$ as $N\rightarrow \infty$. Finally, we will estimate the leading term of $I_{1}(N)$, which, as we will see, is negligible compared to the leading term of $I_{3}(N)$.\
Since $\ln (1-x) = -x + O(x^{2})$ as $x\rightarrow 0$, it follows from (\[AsN\]) and (\[7\]) that $$\begin{aligned}
M_{m}(N;s)=&-\sum_{k=0}^{m-1}\frac{\left(\ln N\right)^{k(p+1)}\,s^{k}}{k!}
\left(\sum_{j=2}^{N}\left(\ln j\right)^{-k p}\,\exp\left(-\frac{\left(\ln N\right)^{p+1}}{\left(\ln j\right)^{p}}\,s\right)\right) \nonumber \\
&+\sum_{j=2}^{N}O\left(e^{-\frac{2\left(\ln N\right)^{p+1}}{\left(\ln j\right)^{p}}s}\left[S_{m}\left(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\right)\right]^{2}\right).\label{40}\end{aligned}$$ From the comparison of sums and integrals and Lemma 2.1 (remember that we are interested in $I_{2}(N)$, where $s$ is strictly positive, and hence we are able to apply Lemma 2.1) we get $$\begin{aligned}
M_{m}(N;s)=-N^{1-s}\,\sum_{k=0}^{m-1}\frac{\left(\ln N\right)^{k}\,s^{k}}{k!}&\left[\frac{1}{1+ps}+\frac{k p}{\left(1+ps\right)^{2}\ln N}\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\frac{p\left(p+1\right)s}{\left(1+ps\right)^{3}\ln N}\left(1+O\left(\frac{1}{\ln N}\right)\right)\right]. \label{qw}\end{aligned}$$ Next, we substitute (\[qw\]) into (\[I2\])) and apply the change of variables $s=1-t$. Thus, $$\begin{aligned}
I_{2}(N)=\int_{-\varepsilon}^{\varepsilon}\exp&\left\{-N^{t}\left(\ln N\right)^{m-1}\left(1-t\right)^{m-1}\frac{\left(1-b\right)}{\left(m-1\right)!}\sum_{n=0}^{\infty}
\left(b\,t\right)^{n}+\left(\ln N\right)^{m-2}\frac{\left(1-t\right)^{m-2}}{\left(m-1\right)!}\right.\\
&\left.\times\left[\left(m-1\right)\left(1-b\right)\sum_{n=0}^{\infty}\left(b\,t\right)^{n}+\left(m-1\right)\left(1-b\right)\left(1-t\right)\sum_{n=1}^{\infty} n b^{n}t^{n-1}\right.\right.\\
&\left.\left.-\frac{1-b}{2b}\left(1-t\right)^{2}\sum_{n=2}^{\infty}n\left(n-1\right)b^{n}t^{n-2}\left(1+O\left(\frac{1}{\ln N}\right)\right)\right]\right\}\,dt,\end{aligned}$$ where $$b=\frac{p}{p+1}, \label{beta}$$ and we have used that $$\begin{aligned}
\left(1-bt\right)^{-1}=\sum_{n=0}^{\infty}\left(b\,t\right)^{n},\,\,\,\left(1-bt\right)^{-2}=b^{-1}\sum_{n=1}^{\infty}nb^{n}\,t^{n-1},\\
\,\,\,\left(1-bt\right)^{-3}=2b^{-2}\sum_{n=2}^{\infty}n\left(n-1\right)b^{n}\,t^{n-2},\end{aligned}$$ since $\varepsilon \in (0,1)$, $b \in (0,1)$, and $t\in[-\varepsilon,\varepsilon]$. If we change the variables as $N^{t}=u\,\omega^{m-1}$, where $\omega:=\left(\ln N\right)^{-1}$, and apply the binomial theorem, after some careful computations we get $$\begin{aligned}
I_{2}(N)=\,\omega\int_{\omega^{1-m}\exp\left(-\varepsilon /\omega\right)}^{\omega^{1-m}\exp\left(\varepsilon /\omega\right)}
&\exp\left\{-\frac{\left(1-b\right)u}{\left(m-1\right)!}\left[\,1+\left(b-\left(m-1\right)\right)\omega\,\ln\left(u \omega^{m-1}\right)\right.\right.\\
&\left.\left.+ O\left(\omega\,\ln\left(u \omega^{m-1}\right)\right)^{2}\right]\right\}\\
&\times \exp\left\{-\frac{\omega\, u}{\left(m-1\right)!}\left[d_{1}+O\left(\omega\,\ln\left(u \omega^{m-1}\right)\right)\right]\right\}\frac{du}{u},\end{aligned}$$ where $$d_{1}=\left(1-b^{2}\right)\left(m-1\right)-\frac{1-b}{b}-3b^{2}\left(1-b\right).\label{d1}$$ Notice that, $N\rightarrow \infty$ implies $\omega \rightarrow 0^+$. We claim that we can replace the upper limit in the above expression by $\infty$. Let us rewrite $I_{2}(N)$ as $$I_{2}(N)=\omega \left(\int_{\omega^{1-m}\exp\left(-\varepsilon /\omega\right)}^{1/\sqrt{\omega}}
+ \int_{1/\sqrt{\omega}}^{\omega^{1-m}\exp\left(\varepsilon /\omega\right)}\right). \label{eint5}$$ The second integral of (\[eint5\]) is easily bounded by $O\left(\sqrt{\omega}\; e^{-(1-b)/\left(m-1\right)!\sqrt{\omega}}\right)$. Let us denote $I_{21}(\omega)$ the first integral of (\[eint5\]). We expand the exponentials and get $$\begin{aligned}
I_{21}(\omega)=\int_{\omega^{1-m}\exp\left(-\varepsilon /\omega\right)}^{1/\sqrt{\omega}}\frac{e^{-\left(1-b\right)u/\left(m-1\right)!}}{u}
&\left[1-\frac{1-b}{\left(m-1\right)!}\left(b-\left(m-1\right)\right)u\omega\,\ln\left(u\omega^{m-1}\right)\right.\nonumber\\
&\left.\,\,\,\,\,\,-\frac{d_{1}}{\left(m-1\right)!}\,u\,\omega\,\left(1+O\left(\omega\ln\left(u\,\omega^{m-1}\right)\right)\right)\right]du.\end{aligned}$$ We write the integral above as $$I_{21}(\omega)=\int_{\omega^{1-m}\exp\left(-\varepsilon /\omega\right)}^{\infty}-\int_{1/\sqrt{\omega}}^{\infty}.\label{20}$$ Again, the second integral of (\[20\]) is easily bounded by $O\left(\sqrt{\omega}\,e^{-\left(1-b\right)/\left(m-1\right)!\sqrt{\omega}}\right)$ as $\omega\rightarrow 0^{+}$, and our claim is proved. It is now an easy exercise to evaluate $I_{2}(N)$. We have $$\begin{aligned}
I_{2}(N)=&\,\varepsilon+\left(m-1\right)\omega \ln \omega+\left[\,\ln\left(m-1\right)!+\ln\left(p+1\right)- \gamma\,\right] \omega\nonumber\\
&-\left(m-1\right)\left(b-\left(m-1\right)\right) \omega^{2}\ln \omega\nonumber\\
&+\left[\left(b-\left(m-1\right)\right)\left(\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)-d_{1}\left(1-b\right)\right)\right]\omega^{2}\nonumber\\
&+O\left(\omega^{3}\left(\ln \omega\right)^{2}\right),\end{aligned}$$ (where $b$ and $d_{1}$ are as defined in (\[beta\]) and (\[d1\]), respectively). Notice that the error term in the above dominates the previously mentioned term $O\left(\sqrt{\omega}\,e^{-\left(1-b\right)/\left(m-1\right)!\sqrt{\omega}}\right)$ as $\omega\rightarrow 0^{+}$.\
Now, we turn our attention to $I_{3}(N)$ of (\[I3\]). As we will see the leading term is enough. The idea is that one can replace the integrand of (\[I3\]) with $\left[\,-M_{m}(N;s)\,\right]$ and then by the quantity $$N_{m}(N;s):=\sum_{j=2}^N \left[S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right].$$ For a rigorous approach see, [@DP]. Hence as $N\rightarrow \infty$ $$I_{3}(N)=\int_{1+\varepsilon}^{\infty} N_{m}(N;s)\left[1+O\left(N_{m}(N;s)\right)\right] ds.$$ From the comparison of sums and integrals and Lemma 2.1 one easily arrives at $$I_{3}(N)=\sum_{k=0}^{m-1}\frac{\left(\ln N\right)^{k}}{k!}\,\int_{1+\varepsilon}^{\infty} \frac{s^{k}N^{1-s}}{1+ps}
\left[1+O\left(\frac{1}{\ln N}\right)\right]ds.$$ Substituting $s=1-t$ and applying the Laplace method for integrals yields $$I_{3}(N)=\frac{\left(1+\varepsilon\right)^{m-1}}{\left(1+p\right)\left(m-1\right)!\,\omega^{m-2}}\,e^{-\varepsilon/\omega}
\left[1+O\left(\omega\right)\right]
\label{I3RESULT}$$ as $\omega \rightarrow 0^{+}$ and as we have set $\omega=\left(\ln N\right)^{-1}$. The reader now observes that the leading term of $I_{3}(N)$ is dominated by the sixth term of $I_{2}(N)$ as $N\rightarrow \infty$. We finish our approach by estimating the integral $I_{1}(N)$ of (\[I1\]). For any given $\varepsilon \in (0,1)$ it is easy to see that $$\begin{aligned}
I_{1}(N)&= \int_0^{1 - \varepsilon} \exp \left[
\sum_{j=2}^{N}\ln \left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)}\right] \right] ds\nonumber\\
&< \exp \left[-\sum_{k=0}^{m-1}\left[\frac{\left(1-\varepsilon\right)^{k}\left(\ln N\right)^{k\,(p+1)}}{\left(m-1\right)!}
\left( \sum_{j=2}^{N} \left(\ln j\right)^{-kp}e^{-\left(1 - \varepsilon \right)\frac{\left(\ln N\right)^{p+1}}{\left(\ln j\right)^{p}}}\right)\right]\right].\end{aligned}$$ From the comparison of sums and integrals it follows that (as $N\rightarrow \infty$) $$\sum_{j=2}^{N} \left(\ln j\right)^{-kp}e^{-\left(1 - \varepsilon \right)\frac{\left(\ln N\right)^{p+1}}{\left(\ln j\right)^{p}}}\sim
\int_{2}^{N} \left(\ln x\right)^{-kp}e^{-\left(1 - \varepsilon \right)\frac{\left(\ln N\right)^{p+1}}{\left(\ln x\right)^{p}}}dx.$$
I_{1}(N)&
< \exp\left[-\sum_{k=0}^{m-1}\frac{\left(1-\varepsilon\right)^{k}}{\left(1+p\left(1-\varepsilon\right)\right)\left(m-1\right)!}\,\frac{e^{\varepsilon/\omega}}{\omega^{k}}\,\left(1 + M_1\,\omega \right) \right]\nonumber\\
&=\exp\left[-\frac{1}{\left(1+p\left(1-\varepsilon\right)\right)\left(m-1\right)!}\,\frac{\omega^{m}-\left(1-\varepsilon\right)^{m}}{\omega^{m-1}\left(\omega-\left(1-\varepsilon\right)\right)}\,e^{\varepsilon/\omega}\,\left(1 + M_1\,\omega \right) \right],\end{aligned}$$ where $M_1$ is a positive constant. Since $\omega \rightarrow 0^{+}$ and $\varepsilon \in (0,1)$ we have $$I_{1}(N) << \frac{\left(1+\varepsilon\right)^{m-1}}{\left(1+p\right)\left(m-1\right)!\,\omega^{m-2}}\,e^{-\varepsilon/\omega},$$ for sufficiently large $N$, $m=1,2,3,\cdots.$\
Now **Main result I** follows immediately. It is notable that the *third term* of $A_{N}=\sum_{j=2}^{N}\left(\ln j\right)^{-p}$ contributes to the average of $T_{m}(N)$.
**Proof of main result II.** From (\[15\]), (\[13\]), and (\[SD4\]) we have $$\begin{aligned}
&E[\,T_m(N)\left(T_m(N)+1\right)\,]=2\,N^{2}\bigg(\left(\ln N\right)^{2} +2 p\,\ln N+\left(p^{2}+2p\left(p+1\right)\right)+ O\left( \frac{1}{\ln N}\right)\bigg)\nonumber \\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\times \int_{0}^{\infty}s\left\{1-\exp \Bigg( \sum_{j=2}^{N}\ln
\left[1-S_{m}\bigg(\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg) \exp{\bigg(-\frac{(\ln N)^{p+1}}{ \left(\ln j\right)^{p}}s\bigg)} \right]\Bigg)\right\} ds.\label{RII}\end{aligned}$$ Let us denote $\tilde{Q}_{m}(N;\alpha)$ the integral above. Then, for any given $\varepsilon \in (0,1)$ we have $$\begin{aligned}
\tilde{Q}_{m}(N;\alpha)= \left[\,\frac{1}{2}+\varepsilon+\frac{\varepsilon^{2}}{2} -I_4 (N)-I_5 (N)+I_6 (N)\,\right],\end{aligned}$$ where $$\begin{aligned}
I_4 (N):&= \int_0^{1-\varepsilon} s\,e^{M_{m}(N;s)}\, ds,\nonumber\\
I_{5}(N): &= \int_{1-\varepsilon}^{1+\varepsilon } s\,e^{M_{m}(N;s)}\, ds,\label{I5} \\
I_{6}(N): &=\int_{1+\varepsilon}^{\infty } s\left[1-e^{M_{m}(N;s)}\right]\, ds,\nonumber\end{aligned}$$ and $M_{m}(N;s)$ is given in (\[AsN\]). If we treat $I_5(N)$ as we treated $I_2(N)$, then with a little patience and paper one finally arrives at $$\begin{aligned}
I_{5}(N)=&\varepsilon+\frac{\varepsilon^{2}}{2}+\left(m-1\right)\omega\ln \omega+\left[\ln\left(m-1\right)!+\ln\left(p+1\right)-\gamma\right]\omega
-\frac{\left(m-1\right)^{2}}{2} \omega^{2}\ln^{2} \omega\nonumber\\
&\,\,\,+\left(m-1\right)\left[\left(m-1\right)-\frac{p}{p+1}-\ln\left(m-1\right)!-\ln\left(p+1\right)+\gamma\right]\omega^{2}\ln \omega\nonumber\\
&\,\,\,+\left[\left(b-\left(m-1\right)\right)\left(\gamma-\ln\left(m-1\right)!-\ln\left(p+1\right)-d_{1}\left(1-b\right)\right)\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,-\frac{1}{2}\left(\gamma^{2}+\frac{\pi^{2}}{6}\right)+\gamma \left(\ln\left(m-1\right)!+\ln\left(p+1\right)\right)\right.\nonumber\\
&\left.\,\,\,\,\,\,\,\,\,+\frac{1}{2}\left(\ln\left(m-1\right)!+\ln\left(p+1\right)\right)^{2}\right]\omega^{2}+O\left(\omega^{3}\left(\ln\omega\right)^{2}\right),\end{aligned}$$ (where $b$ and $d_{1}$ are as defined in (\[beta\]) and (\[d1\]), respectively). With similar steps as in Main result I one sees that $I_{4}(N)$ and $I_{6}(N)$ are negligible compared to the *eighth term* of $I_{5}(N)$. Now Main result II follows immediately by invoking (\[RII\]).
**Proof of main result III.** The proof follows immediately from the identity $$V[\,T_m(N)\,]=E[\,T_m(N)\left(T_m(N)+1\right)\,]-E[\,T_m(N)\,]-E[\,T_m(N)\,]^{2}$$ by invoking Main results I and II.
**Proof of main result IV.** P. Neal [@N] has established a general theorem regarding the limit distribution of $T_m(N)$ (appropriately normalized) as $N \to \infty$, where $\pi_N = \{p_{N1}, p_{N2},...,p_{NN} \}$, $N = 1, 2,...$, is a sequence of (sub)probability measures, not necessarily of the form (\[8\]).
**Theorem N.** Suppose that there exist sequences $\{b_N\}$ and $\{k_N\}$ such that $k_N / b_N \rightarrow 0$ as $N \rightarrow \infty$ and that, for $y \in \mathbb{R}$, $$\Lambda_N(y\,;m) := \frac{b_N^{m-1}}{\left(m-1\right)!} \sum_{j=1}^N p_{Nj}^{m-1}\exp\bigg(-p_{Nj} \left(b_N + y k_N\right) \bigg) \rightarrow g(y),
\quad
N \rightarrow \infty,
\label{N1}$$ for a nonincreasing function $g(\cdot)$ with $g(y) \rightarrow \infty$ as $y \rightarrow -\infty$ and $g(y) \rightarrow 0$ as $y \rightarrow \infty$. Then $$\frac{T_{m}(N) - b_N}{k_N} \overset{D}{\longrightarrow} Y,
\qquad
N \rightarrow \infty,
\label{N2}$$ where $Y$ has distribution function $$F(y) = P\{ Y \leq y \} = e^{-g(y)},
\qquad
y \in \mathbb{R}.
\label{N222a}$$
Theorem N *does not indicate at all* how to choose the sequences $\{b_N\}$ and $\{k_N\}$. Here our asymptotic formulas can help. In particular, we will choose $$b_N = N \ln N + (m-1) N \ln\ln N
\qquad \text{and} \qquad
k_N = N
\label{SD0}$$ and for all $y \in \mathbb{R}$ we will prove that $$P\left\{\frac{T_m(N) - N \ln N - (m-1) N \ln\ln N}{N} \leq y \right\}\rightarrow \exp\left(-\frac{e^{-(y-p)}}{(p+1) (m-1)!}\right)
\label{SD1}$$ as $N \rightarrow \infty$, which is equivalent to Main result IV. Under the choice of (\[SD0\]), $\Lambda_N(y\,;m)$ of (\[N1\]) satisfies, as $N \rightarrow \infty$, $$\Lambda_N(y\,; m) \sim \frac{(N \ln N)^{m-1}}{(m-1)!}
\sum_{j=2}^N \left(\frac{a_j}{A_N}\right)^{m-1} e^{-(a_j / A_N) (N \ln N + (m-1) N \ln\ln N + N y)}
\label{SD2}$$ where $$a_j = \frac{1}{(\ln j)^p}\quad\text{and}\quad
A_N = \sum_{j=2}^N \frac{1}{(\ln j)^p} = \frac{N}{(\ln N)^p} + \frac{pN}{(\ln N)^{p+1}} + O\left( \frac{N}{(\ln N)^{p+2}} \right).$$ Hence, (\[SD2\]) yields $$\Lambda_N(y\,; m) \sim \frac{(\ln N)^{(p+1)(m-1)}}{(m-1)!} \, S_N(y),
\label{SD5}$$ where $$S_N(y) := \sum_{j=2}^N \frac{1}{(\ln j)^{p(m-1)}} \exp\left(-\frac{(\ln N)^p (1 - p / \ln N) (\ln N + (m-1) \ln\ln N + y)}{(\ln j)^p}\right).
\label{SD6}$$ Now, $$S_N(y) \sim I_N(y)
\label{SD7}$$ where $$I_N(y) := \int_2^N \frac{1}{(\ln x)^{p(m-1)}} \exp\left(-\frac{(\ln N)^p (1 - p / \ln N) (\ln N + (m-1) \ln\ln N + y)}{(\ln x)^p}\right) dx.
\label{SD8}$$ By substituting $u = \ln x$ in the above integral we get $$I_N(y) = \int_{\ln 2}^{M} \frac{1}{u^{p(m-1)}} \exp\left(-\frac{B}{u^p} + u\right) du,
\label{SD9}$$ where for typographical convenience we have set $$M := \ln N,
\qquad
B := \omega^{-\left(p+1\right)} \left(1 - p\,\omega\right) \left(1 - (m-1)\,\omega \ln \omega + y\,\omega\right)
\quad \text{and} \quad
\omega:= \left(\ln N\right)^{-1},
\label{SD10}$$ so that $B \rightarrow \infty$ and $\omega \rightarrow 0^{+}$ as $N \rightarrow \infty$.
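The two-term expansion of $A_N$ used above can be checked numerically; a small Python sketch (the choices $p=2$ and $N=10^4, 10^6$ are illustrative):

```python
import math

def A(N, p):
    """Partial sum A_N = sum_{j=2}^N 1/(ln j)^p."""
    return sum(1.0 / math.log(j) ** p for j in range(2, N + 1))

def A_two_term(N, p):
    """Two leading terms of the expansion: N/(ln N)^p + p N/(ln N)^(p+1)."""
    L = math.log(N)
    return N / L ** p + p * N / L ** (p + 1)

p = 2
rel4 = abs(A(10**4, p) - A_two_term(10**4, p)) / A(10**4, p)
rel6 = abs(A(10**6, p) - A_two_term(10**6, p)) / A(10**6, p)
# The relative error is a few percent at N = 10^6 and shrinks as N grows,
# consistent with the O(N/(ln N)^(p+2)) remainder.
```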
Next, in the integral of (\[SD9\]) we substitute $u = B^{1 / (p+1)} t$ and obtain $$I_N(y) \sim B^{1 - \frac{pm}{p+1}} \int_0^{\theta} \frac{1}{t^{p(m-1)}} \, e^{B^{1 / (p+1)} \phi(t)} dt,
\label{SD11}$$ where $$\theta := \omega^{-1} B^{-1 / (p+1)}
\quad \text{and} \quad
\phi(t) := t - \frac{1}{t^p}.
\label{SD12}$$ The integral in the right-hand side of (\[SD11\]) can be treated as a Laplace integral [@B-O], where the large parameter is $B^{1 / (p+1)}$. Since $\phi(t)$ is strictly increasing, the main contribution to the asymptotics of this integral comes from the endpoint $\theta$ (notice that $\theta \sim 1$ as $N \to \infty$). Thus, by applying the standard analysis of Laplace integrals, after some straightforward algebraic manipulations (\[SD11\]) becomes $$I_N(y) \sim M^{-(p+1)(m-1)} \frac{e^{-(y-p)}}{(p+1)}.
\label{SD13}$$ Finally, by combining (\[SD5\]), (\[SD7\]) and (\[SD13\]), and recalling that $M = \ln N$, we obtain $$\Lambda_N(y\,; m) \sim \frac{e^{-(y-p)}}{(p+1) (m-1)!}
\label{SD14}$$ and the proof is finished by invoking Theorem N.
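The endpoint-dominated Laplace approximation used in the last step can be tested numerically on the model integral $\int_0^{\theta} t^{-p(m-1)} e^{\lambda\phi(t)}\,dt$ with $\theta=1$; a Python sketch, where the value $\lambda=400$ stands in for the large parameter $B^{1/(p+1)}$:

```python
import math

def endpoint_laplace_demo(lam, p, m):
    """Compares brute-force evaluation of int_0^1 t^(-p(m-1)) e^(lam*phi(t)) dt,
    with phi(t) = t - t^(-p), against the endpoint approximation
    f(1) e^(lam*phi(1)) / (lam*phi'(1)), where phi'(1) = 1 + p and phi(1) = 0."""
    f = lambda t: t ** (-p * (m - 1))
    phi = lambda t: t - t ** (-p)
    # The integrand is utterly negligible below t ~ 0.7 for large lam,
    # so a fine trapezoid rule on [0.5, 1] suffices.
    a, b, steps = 0.5, 1.0, 200_000
    h = (b - a) / steps
    total = 0.5 * (f(a) * math.exp(lam * phi(a)) + f(b) * math.exp(lam * phi(b)))
    for i in range(1, steps):
        t = a + i * h
        total += f(t) * math.exp(lam * phi(t))
    numeric = h * total
    approx = f(1.0) / (lam * (1 + p))
    return numeric, approx

numeric, approx = endpoint_laplace_demo(400.0, 2, 2)
```

For moderately large $\lambda$ the two values already agree to well under a percent, reflecting the $O(1/\lambda)$ accuracy of the endpoint formula.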
[4]{}
C.M. Bender and S.A. Orszag, *Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory*, Springer-Verlag, New York, 1999.
S. Boneh and V.G. Papanicolaou, General Asymptotic Estimates for the Coupon Collector Problem, *Journal of Computational and Applied Mathematics* **67** (2) (Mar. 1996) 277–289.
R.K. Brayton, On the asymptotic behavior of the number of trials necessary to complete a set with random selection, *Journal of Mathematical Analysis and Applications* **7** (1963) 31–61.
P. Diaconis and S. Holmes, A Bayesian peek into Feller volume I, *Sankhyā*, Special issue in memory of D. Basu, **64** Ser. A (3, part 2) (2002) 820–841.
A.V. Doumas and V.G. Papanicolaou, The Coupon Collector’s Problem Revisited: Asymptotics of the Variance, *Adv. Appl. Prob.* **44** (1) (2012) 166–195.
A.V. Doumas and V.G. Papanicolaou, Asymptotics of the rising moments for the Coupon Collector’s Problem, *Electron. J. Probab.* **18** (Article no. 41) (2012) 1–15.
A.V. Doumas and V.G. Papanicolaou, The Coupon Collector’s Problem Revisited: Generalizing the Double Dixie Cup Problem of Newman and Shepp, *http://arxiv.org/abs/1412.3626* (submitted).
R. Durrett, *Probability: Theory and Examples*, Third Edition, Duxbury Advanced Series, Brooks/Cole—Thomson Learning. Belmont, CA, USA, 2005.
P. Erdős and A. Rényi, On a classical problem of probability theory, *Magyar. Tud. Akad. Mat. Kutató Int. Közl.*, **6** (1961), 215–220.
W. Feller, *An Introduction to Probability Theory and Its Applications*, Vol. I & II, John Wiley & Sons, Inc., New York, 1966.
P. Flajolet, D. Gardy and L. Thimonier, Birthday paradox, coupon collectors, caching algorithms and self-organizing search, *Discrete Applied Mathematics* **39** (1992) 207–229.
L. Holst, On Birthday, Collectors’, Occupancy and other classical Urn problems, *International Statistical Review* **54** (1986) 15–27.
H.M. Mahmoud, *Pólya urn models*, CRC Press, New York, 2008.
A.W. Marshall, I. Olkin, and B. Arnold, *Inequalities: Theory of Majorization and Its Applications*, Springer, 2nd Ed., 2009.
P. Neal, The Generalised Coupon Collector Problem, *J. Appl. Prob.* **45** (2008) 621–629.
D.J. Newman and L. Shepp, The double Dixie cup problem, *Amer. Math. Monthly* **67** (1960) 58–61.
---
abstract: 'We present the first contemporaneous 43 GHz and 86 GHz VLBI images of the v=1 J=2$\rightarrow$1 and J=1$\rightarrow$0 SiO masers in the Orion-KL nebula. Both maser species exhibit the same general morphology as earlier J=1$\rightarrow$0 maser images, which appear to trace the edges of a bi-polar conical outflow. Surprisingly, the J=2$\rightarrow$1 masers form further from the central protostar than the J=1$\rightarrow$0 masers, a fact not readily explained by current SiO maser pumping models. The average magnitude of offsets between corresponding regions of the two masing transitions is approximately 14% of the total radial extent of the SiO maser emission. This offset indicates that each transition must trace different physical conditions.'
author:
- 'Sheperd S. Doeleman, Colin J. Lonsdale'
- 'Paul T. Kondratko'
- 'C. Read Predmore[^1]'
title: 'Using VLBI to Probe the Orion-KL Outflow on AU Scales'
---
Introduction
============
Young massive stars spend a considerable fraction (10-20%) of their lifetimes embedded within the molecular cloud cores from which they formed (Wood & Churchwell 1989). Throughout this period, however, they can have a dominant effect on the interstellar dynamics in the surrounding region by forming complex and large-scale molecular outflows. The exact mechanism by which these stars drive such outflows remains unclear, but models generally involve a stellar wind entraining molecular material (Richer et al 2000). Study of these objects at early phases of evolution, before they have emerged from their parental clouds, is difficult due to the naturally large opacities in the IR and optical. Furthermore, angular resolutions of instruments in these wavebands are insufficient to image the very beginning of an outflow close to the young stellar object. Connected element radio interferometry relieves the opacity problem, but still cannot probe angular scales much less than 0.5 arcseconds. In some cases, high brightness and compact SiO maser emission is seen towards massive star forming regions and can be imaged with 0.1 milliarcsecond resolution (Greenhill et al 1998, Doeleman et al 1999 (Paper I), Eisner et al 2000). The pumping requirements of these masers ($\rho_H\sim10^9\mbox{cm}^{-3}$, $T\sim1200K$ (Elitzur 1992)) require that they originate very close to a high luminosity source and thus offer a method of exploring the immediate circumstellar regions of select massive young stars with angular resolutions much smaller than the stellar disk.
The Orion BN/KL region is undeniably a site of intense molecular outflow activity. Bow shocks seen in $\mbox{H}_2$ form along “fingers" extending up to 2 arc minutes that collectively trace back to the center of the BN/KL region (Gezari, Backman & Werner 1998, Stolovy et al. 1998). A high velocity and weakly bipolar CO outflow is also centered there (Chernin & Wright 1996). Water maser features out to 15 arc seconds exhibit proper motions consistent with a common center of expansion that is also near the BN/KL center (Genzel et al 1979). A number of objects in the region probably contribute to powering this complex dynamical picture, but one in particular, the radio continuum source I (Churchwell et al 1987), is associated with powerful SiO masers. Its inverted radio spectrum makes it likely that Source I marks the position of an HII region or stellar jet associated with a young massive star, and precise astrometry places I at the exact centroid of the Orion BN/KL SiO maser features (Menten & Reid 1995, Gezari et al 1998). No definitive optical or IR counterpart to Source I has been found, a fact attributable to the visible extinction towards this object for which estimates yield values of $A_{\nu}\sim60$ (Gezari et al 1998).
Compact v=1 J=1-0 SiO maser emission extends only $\sim70$AU from Source I, well within the extent of the larger scale outflows described above. Early single dish polarimetry of this emission, combined with connected element array observations, led to a model in which the SiO masers formed in a rotating and expanding circumstellar disk (Barvainis 1984, Plambeck et al 1990). These efforts, however, were tainted by spectral blending. Because the angular extent of the maser emission is smaller than the synthesized beams of connected arrays, widely separated maser features at similar radial velocities cannot be distinguished and the maps of Plambeck et al (1990) show only the centroids of emission in each observing frequency channel. These maps show the centroids to form two arcs that appear to encircle Source I, a morphology consistent with a circumstellar disk.
Recent high resolution VLBI imaging reveals that the SiO maser emission does not form in two arcs, but resolves into four main regions that appear to trace the outlines of a bipolar conical outflow oriented in the NW-SE direction of the larger CO outflow (Greenhill et al 1998, Doeleman et al 1999). The SiO emission, however, is redshifted to the NW and blueshifted to the SE, opposite the weak polarity of the CO outflow. This apparent conflict can be accommodated if the outflow is oriented close to our line of sight which would magnify the effects of small direction changes in the outflow (Doeleman et al 1999). Alternatively, limb brightening effects on the conical surfaces may allow blue maser emission from a generally red shifted cone of emission (and vice-versa) if the outflow alignment is close to the plane of the sky (Greenhill et al 1998). In either case, it is important to investigate the physical conditions and spatial extent of the maser emitting region in an effort to forge a link between the small scale SiO structures and the larger scale outflows.
In general, multi-transition SiO maser imaging holds the promise of revealing small scale temperature and density gradients in the host environment. Conclusions from previous multi-line SiO maser studies, though, have necessarily been somewhat uncertain due to their heavy reliance on data from single dish monitoring. The particular difficulty with this approach stems from the spectral blending that may occur when two spatially separated maser features emit at nearly the same frequency. Barvainis and Predmore (1985), for example, used single dish polarimetry of the v=1 J=2$\rightarrow$1 and the J=1$\rightarrow$0 transitions towards a set of evolved stars and inferred a low degree of spatial overlap between the lines. McIntosh and Predmore (1987), however, used similar observations of SiO masers surrounding the variable star Mira to conclude that the same transitions [*were*]{} cospatial. Comparison of single dish SiO maser spectra cannot definitively address the question of relative maser positions. Important connected-element interferometry work exemplified by Baudry, Herpin & Lucas (1999), Morita et al. (1992) and Colomer et al. (1996) provides inter-line comparisons of centroids of emission at a given velocity but no detailed spatial information. The first multi-line VLBI comparison of SiO masers was the claim by Miyoshi et al (1995) that the v=1 and v=2 J=1$\rightarrow$0 transitions were largely cospatial towards the evolved star VYCMa. They concluded that only a collisional pumping mechanism could account for the overlap. The angular resolution of their observations, however, was still much larger than the smallest SiO maser feature sizes ($\sim 0.2$ mas), and thus their observations were insufficient to definitively prove cospatiality. Indeed, more recent and higher resolution observations (Desmurs et al. 2000) suggest that on 0.2 mas scales, these two J=1$\rightarrow$0 transitions are offset from each other. Phillips et al. 
(2003) have registered VLBI maps of the v=1 J=2$\rightarrow$1 and J=1$\rightarrow$0 maser emission in the envelope of the evolved star RCas. For this source, some maser features from both transitions appear to arise in the same volumes of gas, but the overall morphology of emission in the two transitions differs.
Here, we report on results of mapping the Orion-KL SiO masers in both the v=1 $J=2\rightarrow1$ ($\nu_{\mbox{rest}}=86243.442$ MHz) and $J=1\rightarrow0$ ($\nu_{\mbox{rest}}=43122.027$ MHz) transitions. The accuracy of relative astrometry in the images allows comparison at the sub-AU level and we find that, at the resolution of our maps, the brightest masers in each transition are not cospatial. Spatial offsets between the two transitions place the $J=1\rightarrow0$ emission slightly closer to the central exciting source, leading us to conclude that they trace and occur in different physical conditions.
Observations
============
We observed the Orion-KL SiO masers using the position determined by Wright et al (1990) of $\alpha=5^h35^m14\fs505$, $\delta=-05^\circ22'30.45''$ (J2000). Observations at $\lambda7$ mm took place on 13 Dec. 1997 using seven antennas of the VLBA and one element of the VLA, both run by the NRAO[^2]. Paper I describes the $\lambda$7 mm observations and the resulting calibration and imaging steps used to generate high resolution maps. Coordinated Millimeter VLBI Array[^3] (CMVA) observations at 86 GHz covered a time range from 13 Dec. to 15 Dec. 1997 and included the following antennas: Haystack (Westford, MA), FCRAO 14 m (Amherst, MA)[^4], Kitt Peak 12 m (Kitt Peak, AZ), the phased BIMA (Redding, CA) and the VLBA site at Pie Town, NM. The overlap in time between the 43 GHz and 86 GHz observing sessions ensures that variability of source structure has a negligible effect on comparisons between the two maser transitions. The array was split into two sub-arrays, a ’low-resolution’ array consisting of the relatively short (85 km) Haystack–Quabbin baseline and a ’high-resolution’ array comprising Kitt Peak, BIMA and Pie Town. There were very few interferometric detections between the short-baseline array and the high-resolution array, forcing us to separate the analysis of the two data sets. A technical problem at the Pie Town VLBA site rendered data from the ’high-resolution’ array unsuitable for imaging. In this letter, we report on results only from the Haystack–Quabbin baseline. Data were recorded in three partially overlapping IF channels of the MKIII VLBI system, each of 4 MHz bandwidth, yielding a total effective velocity coverage at 86 GHz of $\sim34$ km/s. The IF overlap was sufficient to avoid band edge effects and also allowed removal of instrumental phase shifts between IFs. Correlation at Haystack Observatory yielded 112 spectral channels in each IF for a velocity resolution of 0.12 km/s, which closely matches the resolution of the 43 GHz results (0.11 km/s) of Paper I.
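The quoted velocity figures follow directly from the Doppler relation $\Delta v = c\,\Delta\nu/\nu$; a short Python sketch using the setup above:

```python
# Doppler conversion for the 86 GHz observing setup described above.
C_KMS = 299792.458       # speed of light, km/s
NU_REST = 86243.442e6    # v=1 J=2->1 SiO rest frequency, Hz
BW_IF = 4.0e6            # bandwidth of one IF, Hz
NCHAN = 112              # correlator channels per IF

v_per_if = C_KMS * BW_IF / NU_REST           # ~13.9 km/s per 4 MHz IF
v_chan = C_KMS * (BW_IF / NCHAN) / NU_REST   # ~0.12 km/s per channel
```

Each 4 MHz IF spans about 13.9 km/s, so three partially overlapping IFs give the quoted $\sim34$ km/s total, and 4 MHz over 112 channels gives the 0.12 km/s resolution.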
Calibration and Imaging
=======================
Single baseline 86 GHz VLBI has already been used to map SiO masers around the evolved giant star VXSgr (Doeleman, Lonsdale & Greenhill 1998) and the associated difficulties are well understood. The most important aspect of the reduction is the need to find a spectrally isolated component which is point-like. Phase referencing the entire data set to this reference feature simultaneously removes atmospheric effects and shifts the reference feature to the map origin. Without a point source as reference, the data set would be corrupted by the emission structure in the reference channel which cannot be satisfactorily determined in the absence of calibrated phases. To identify a point source with a single baseline, one must search through all spectral channels for one in which the visibility amplitudes are constant as a function of baseline length and orientation. This signature can easily be masked by amplitude variations due to antenna gain fluctuations and high quality antenna gain calibration is therefore essential.
Gain calibration for both sites was obtained using spectral template fitting. A 90 second total power spectrum of the maser emission at the Quabbin telescope was calibrated assuming a 40 Jy/K gain and served as a template. Spectra at all other times from both sites were fitted to the template, and relative gains as a function of time for both antennas were determined (Fig. \[fig:gaincurves\]) for each 90 second interval. Relative calibration errors are at the 5% level due to the high quality of the total power spectra while we estimate the absolute calibration of the template spectrum to be within 10%. Once calibrated, the channel at $V_{\mbox{\footnotesize
LSR}}$ = 0.84 km/s had amplitudes constant to within 20%, and we adopted it as the reference. For comparison, Fig. \[fig:amplitudes\] shows the amplitudes as a function of time for the reference channel, as well as amplitudes at a nearby velocity which exhibit dramatic “beating" indicative of complex structure.
Strong fringe detections on a single 390 second VLBI scan of the continuum source 3C273 provided the delay calibration for both days allowing removal of linear phase slopes as a function of frequency across the bandpass. Phase offsets between adjacent IFs were checked using maser features in the overlapping portions of the three IF channels and shown to be good to within a few degrees. In addition, visibility phase versus frequency was differenced for two segments of data at the same LST on both days and showed a negligible shift in delay. The lack of phase slope in this difference validates use of the single 3C273 scan to calibrate the delay for both days.
Fringe rate solutions from the reference channel were applied to the data and synthesis image cubes constructed using standard techniques implemented in the NRAO AIPS software package. Resulting images covered $0.64\arcsec\times0.64\arcsec$ on the sky, sufficient to completely map the known extent of the maser emission (Paper I). Due to high beam sidelobe levels (71%), CLEAN deconvolution was applied conservatively in each velocity channel using a loop gain of 0.05 with a limit of 100 iterations. Deconvolution tests on these data showed that the adopted CLEAN parameters prevented divergence in the algorithm and minimized imaging artifacts. The restoring beam was $9\times66$ milliarcseconds with a PA of $-18^\circ$.
Image Analysis
==============
With only two antennas and the relatively sparse baseline coverage that results, the fidelity of the channel maps is limited by sidelobes of bright emission regions caused by imperfect deconvolution of the synthesized beam. Bright isolated maser emission, for example, is often accompanied by negative sidelobes which distort nearby faint emission. Faint emission that is far removed from bright map features can be more clearly distinguished against the residual map background. Setting a threshold above which map features are taken to represent maser emission is thus position dependent and requires that each channel map be considered separately.
Composite maps for three velocity ranges corresponding to the three observed 4MHz passbands were formed by selecting the maximum intensity of all velocity channels at each image pixel (Fig. \[fig:overlay\]a-c). The lowest 3mm contours in the composite maps mark the cutoff below which sidelobe features begin to appear.
For a more detailed analysis of the 3mm image, maser spot maps were made at each velocity, where ’spot’ (or component) will be defined as an isolated region of maser emission within a single frequency channel. This was done by fitting 2-D elliptical Gaussians to each map feature to determine size and integrated flux density. In almost all cases, the high spectral resolution and relatively coarse angular resolution of the observations ensured that each identified map feature could be well modeled as an unresolved source convolved with the restoring beam. Restrictive criteria were applied to these features to identify maser emission components. These included feature persistence in at least two adjacent velocity channels and a flux density threshold set by the largest negative sidelobe within 3 beamwidths. In Fig. \[fig:overlay\]d, all 3 mm maser spots meeting our selection criteria are plotted as circles with area proportional to total flux density.
The positional accuracy of maser components relative to each other within the 3mm map depends on a number of factors. Foremost among these is the effect of non-point like structure in the reference channel which will contaminate phases for all other channels resulting in position errors. In the Orion 3mm data (Fig. \[fig:amplitudes\]) the observed variation in reference channel visibility amplitudes is $\sim$20%. The corresponding phase error can be estimated by assuming this amplitude variation is due to two spatially separated maser spots in the reference channel. The observed range of amplitudes would then imply visibility phase variations up to $\pm6^\circ$ or $\pm0.02$ of the synthesized beam width.
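The quoted phase error can be reproduced with a two-component visibility model; a Python sketch, where the 10% component ratio is inferred from the $\sim$20% peak-to-peak amplitude variation:

```python
import math

# If the reference channel contains a dominant point source (amplitude a)
# plus a weak secondary component (amplitude b), the visibility amplitude
# swings between a - b and a + b.  A ~20% peak-to-peak variation therefore
# implies b/a ~ 0.1, and the worst-case phase excursion is asin(b/a).
ratio = 0.1                                     # b/a inferred from the data
max_phase_deg = math.degrees(math.asin(ratio))  # ~5.7 deg, i.e. +/- 6 deg
beam_fraction = max_phase_deg / 360.0           # ~0.016 of the beam width
```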
A secondary concern is the effect of combining the two days of 3mm VLBI data on relative maser positions. The data were combined primarily to increase signal to noise ratio, but if the masers exhibit very high proper motions, data between the two days could be inconsistent. Assuming the proper motions are comparable to the radial velocity range observed ($\sim 30$km/s), the two day separation corresponds to motions of 0.1 mas, a negligible offset compared to the 3mm VLBI resolution. Even if the SiO outflow were $\sim10^\circ$ out of the plane of the sky, the corresponding proper motions would be $\sim150$ km/s, leading to 0.5 mas positional offsets between the two days.
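These angular offsets follow from the adopted distance and the two-day time baseline; a Python sketch (450 pc taken from Paper I):

```python
AU_KM = 1.495978707e8   # km per AU
DAY_S = 86400.0         # seconds per day
D_PC = 450.0            # adopted distance to Orion-KL (Paper I)

def offset_mas(v_kms, days, d_pc=D_PC):
    """Angular offset (mas) of a feature moving at v_kms for the given number
    of days at distance d_pc; 1 AU at d pc subtends (1/d) arcsec."""
    au_moved = v_kms * days * DAY_S / AU_KM
    return au_moved / d_pc * 1000.0

slow = offset_mas(30.0, 2.0)    # ~0.08 mas for 30 km/s over two days
fast = offset_mas(150.0, 2.0)   # ~0.4 mas for 150 km/s
```

The results come out near 0.08 and 0.39 mas, consistent with the $\sim$0.1 and $\sim$0.5 mas round figures quoted above.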
Mis-calibration of the interferometer delay due to geometrical and station clock errors can also lead to uncertainty in relative astrometry (Genzel et al 1981, Thompson, Moran & Swenson 1986). Such delay errors cause phase slopes across the bandpass leading to astrometry errors whose magnitudes vary as a function of frequency. The high signal to noise ratio 3C273 detections limit clock errors to $\sim5$ns or 18 degrees of phase across the observing bandpass. The maximum geometrical delay error, in turn, is set by the uncertainty in the position of the Orion BN/KL SiO masers and the baseline length. The Orion BN/KL maser position is known to within $0.6\arcsec$ ($3\sigma$) which, coupled with the relatively short Quabbin - Haystack baseline, produces a delay error of only $0.75$ns. Thus, the total delay error contributes positional errors of no more than 21 degrees of phase, or $\pm 0.06$ synthesized beam widths.
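The delay-error budget can be sketched numerically; here the 10 MHz frequency span of the three partially overlapping IFs is an assumed value, chosen to match the quoted 18 degrees of phase for a 5 ns clock error:

```python
import math

C = 2.99792458e8                 # speed of light, m/s
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

clock_err = 5e-9        # s, clock error bound from the 3C273 detection
span_bw = 10e6          # Hz, assumed span of the three overlapping 4 MHz IFs
baseline = 85e3         # m, Haystack-Quabbin
pos_err = 0.6 * ARCSEC  # rad, maser position uncertainty

clock_phase = clock_err * span_bw * 360.0    # 18 deg across the bandpass
geom_delay = baseline * pos_err / C          # ~0.8 ns geometric delay error
total_phase = (clock_err + geom_delay) * span_bw * 360.0  # ~21 deg
```

The geometric term comes out near 0.8 ns, close to the 0.75 ns quoted above, and the total phase budget lands at roughly 21 degrees.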
Further errors in maser spot positions (Fig. \[fig:overlay\]d) can be attributed to uncertainty in identification of individual features and fitting them with elliptical Gaussians, but in all cases, these uncertainties are a small fraction of the synthesized beam. Combining all sources of relative positional error above, we conservatively limit relative errors in maser spot positions of 0.1 of the synthesized beam. For the main purposes of the present work, the combination of all relative positional errors are small compared to the errors in aligning the 3mm and 7mm maser transitions discussed below.
Comparison of 3mm and 7mm Emission
==================================
Characterization of 3mm Maser Emission
--------------------------------------
Maps of the 3mm maser emission show it to be broadly similar in structure to that observed in the 7mm transition. Regions A, B, G and H marking the outlines of opposing conical outflows at 7mm in Paper I have corresponding regions at 3mm. Even region E, which does not conform to the simple bi-conical picture, is populated by both SiO transitions. At 3mm, each region comprises multiple components whose velocity widths range from 0.5 km/s to over 2 km/s. These are larger than spectral widths of typical maser features in the 7mm image, but this is likely due to the larger beam size at 86 GHz which subtends multiple closely spaced features at similar velocities. As at 7mm, there is obvious velocity overlap between region pairs (A,B) and (G,H); strong emission often appears in both paired regions at the same velocity.
3mm and 7mm Map Alignment
-------------------------
Because the 3mm and 7mm data were obtained using different instruments, it is impossible to register the maps using phase referencing techniques. We note that use of these techniques may be possible in the future as experience with VLBA operation at 86 GHz increases and if the frequency switching time remains below the coherence time of the atmosphere. Instead, we necessarily explored registration techniques involving map comparisons and certain assumptions about maser structure.
Comparison of the two maps reveals that a pure translation of one map relative to the other will not allow all four main emission regions (A, B, H, (F+G)) to coincide. Exactly superposing region B, for example, at each frequency leaves the remaining regions grossly misaligned. If, however, we assume that emission in both transitions originates in similar interactions of a bi-polar outflow with the surrounding cloud, then the center of symmetry in both transitions should be coincident and reflect the position of the protostar. An approximate center of symmetry in each map can be defined as the intersection of lines that connect the emission centroids of regions A with H and B with (F+G). Registering the 3mm and 7mm emission in this manner (Fig. \[fig:overlay\]), shows that J=2$\rightarrow$1 masers appear to form farther from the central proto-star than the J=1$\rightarrow$0 masers.
After registration, total offsets between the 3 mm and 7 mm centroids for each emission region were measured to be: A (6.6 AU), B (7.5 AU), F+G (7 AU), H (17 AU) assuming a 450 pc distance to the source (Paper I). Variation in the offsets is directly related to the extent of emission in each region. Region H, for example, exhibits the largest offset, due primarily to the large area over which 3mm emission is found. In all cases, the 3 mm emission appears further from the presumed location of the protostar than the 7 mm emission.
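The mean of these centroid offsets, compared against the overall radial extent of the SiO emission, reproduces the approximately 14% figure quoted in the abstract; a Python sketch (the 67 AU outer radius is taken from the Region A discussion later in this letter):

```python
# Centroid offsets between the 3 mm and 7 mm emission regions (this section),
# compared with the outermost maser radius quoted for Region A.
offsets_au = {"A": 6.6, "B": 7.5, "F+G": 7.0, "H": 17.0}
extent_au = 67.0  # outer radius of the SiO emission, in AU

mean_offset = sum(offsets_au.values()) / len(offsets_au)  # ~9.5 AU
fraction = mean_offset / extent_au                        # ~0.14
```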
Spectra
-------
Total power spectra of each transition in Fig. \[fig:total\_spectra\] show that the 3mm and 7mm masers share the same basic double peaked form with a central velocity near 5.3 km/s. This indicates a common bulk flow affecting both maser lines. In fact, individual spectra of emission in regions A-H of the 3mm map cover roughly the same velocity ranges as the corresponding 7mm spectra : Region A (11 $\rightarrow$ 17.5 km/s), Region B (13.5 $\rightarrow$ 19 km/s), Region H (2.5 $\rightarrow$ -9 km/s), Region G ( -2.5 $\rightarrow$ -11 km/s). Both the red and blue shifted peaks contain multiple spectral components, a few of which have matching peaks in both transitions that correspond to within 0.5 km/s. A one-to-one correspondence in spectral features between the two transitions, though, is clearly absent. The observed spatial offset between the lines underlines the pitfalls of deriving spatial coincidence information solely from single dish spectra.
Discussion
==========
Outflow Model and Dynamics
--------------------------
Impetus for the biconical outflow model of the Orion-KL SiO masers originated with a clear “X” morphology of the v=1 J=1$\rightarrow$0 transition, presumed to trace the outflow boundary. Though generally not coincident with the J=1$\rightarrow$0 masers, the J=2$\rightarrow$1 emission displays a remarkably similar general structure consistent with the outflow model. This general structure is consistent with both SiO maser transitions inhabiting a zone of shocks and overdensities where an outflow interacts with the surrounding molecular medium (Paper I). The resolution of the J=2$\rightarrow$1 image does not allow us to compare the two transitions on the smallest scales seen in the J=1$\rightarrow$0 maps. We cannot rule out some overlap on scales smaller than the 3 mm beam and future high resolution work at this frequency is needed to address this. One caveat to make clear is that in both transitions, roughly half the flux density is undetected by VLBI. This “missing flux" must exist in structures larger than the 5 milliarcsecond synthesized beam of the J=2$\rightarrow$1 data. The smallest baselines in the VLBA array also correspond to this size scale. Detection and location of this large scale emission will require connected element arrays with baselines of order 100 km.
The combined spatial and velocity information from both 3 mm and 7 mm VLBI now make it possible to explore velocity patterns within each main emission region. In region A, the combined 3 mm and 7 mm maser emission covers a range of radii from 40 to 67 AU as measured from the central registration point. Over this range, the average radial velocity across region A steadily decreases from 26 to 13 km/s implying a smooth velocity gradient of $\sim 0.5$km/s/AU due North. Similar calculations for the other three regions yield no clear velocity gradients, arguing for a local explanation of the pattern observed in Region A.
One possibility is that in Region A, maser features closer to the protostar are also located closer to the central stellar jet. Molecular gas closer to the jet will accelerate more quickly and to generally higher terminal velocities, giving rise to the observed velocity gradient. In this scenario, the thickness of the conical shell where masers occur is sufficient to generate the velocity difference among Region A maser features. Similarly, a local gradient in the density of the medium surrounding the stellar jet could have the same effect on the radial maser velocities by altering the efficiency of entrainment as a function of distance from the central protostar. Constraining these possibilities is not possible with the measurements described herein, but future proper motion measurements of individual maser features will allow a much more detailed examination of the SiO maser dynamics and directly address this issue. For completeness, we note that one could also attribute the velocity gradient to a gravitational deceleration of a ballistic flow in which the masers form. The central mass required would then be $M_\star \sim
28\;[\cos^2(\theta_{\mbox{los}})\sin(\theta_{\mbox{los}})]^{-1}\; M_\odot$ where $M_\star$ is the enclosed central mass and $\theta_{\mbox{los}}$ is the angle made by the direction of propagation of the Region A SiO masers to our line of sight. At its minimum, this expression yields a central mass of $M_\star \sim 73 M_\odot$. In the case of gravitational deceleration, though, one would expect similar velocity gradients in all maser regions.
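The quoted minimum mass corresponds to maximizing $\cos^2(\theta_{\mbox{los}})\sin(\theta_{\mbox{los}})$, which occurs at $\sin(\theta_{\mbox{los}}) = 1/\sqrt{3}$; a Python sketch:

```python
import math

def enclosed_mass(theta):
    """Central mass (solar masses) required for gravitational deceleration,
    as a function of the angle theta between the Region A maser propagation
    direction and the line of sight (expression from the text)."""
    return 28.0 / (math.cos(theta) ** 2 * math.sin(theta))

# cos^2(theta)*sin(theta) peaks at sin(theta) = 1/sqrt(3) (theta ~ 35 deg),
# giving the minimum enclosed mass of 42*sqrt(3) ~ 72.7 solar masses.
theta_min = math.asin(1.0 / math.sqrt(3.0))
m_min = enclosed_mass(theta_min)
```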
The new J=2$\rightarrow$1 images also highlight a region not directly addressed in Paper I. Region E is the weakest of all the labeled maser complexes but appears clearly in J=2$\rightarrow$1 and more faintly in J=1$\rightarrow$0. This region differs importantly from the others in that the spatial relationship between the transitions is reversed, with J=1$\rightarrow$0 slightly farther from the center. The mere existence of this maser cluster in both transitions is difficult to understand in the context of a pure biconical outflow model. It cannot correspond in any simple way to a conical edge or tangent line. Since Region E emission velocities are near the centerpoint between the red and blue lobes ($V_{\mbox{\footnotesize LSR}}\sim5$ km/s), it may result from outflow shocks propagating to a central disk or torus oriented perpendicularly to the outflow, as observed by Wright et al (1995) in thermal SiO. Another possibility is that Region E forms in an equatorial outflow modeled by Greenhill et al. (1998) to explain the morphology of the $\mbox{H}_2\mbox{O}$ masers. Whatever its genesis, the 86 GHz confirmation of strong SiO emission in Region E must be accounted for in future dynamical models.
Maser Pumping
-------------
Most treatments of SiO maser pumping model the case of a spherically symmetric evolved star. Elitzur (1982) has outlined a possible explanation for the special case of the Orion-BN/KL SiO masers assuming they form in the bulk of an expanding wind from the protostar. Models of this type are not generally workable in the evolved stellar case as mass loss rates in those objects are too small to sustain maser amplification within a smooth wind. The validity of the Elitzur (1982) model, which also dealt with the specific radiation field in the region, is now called into question by the filamentary maser features seen in J=1$\rightarrow$0 (Paper I). These high aspect ratio structures indicate maser formation in shocks and local density enhancements rather than in a smooth wind.
Many stellar SiO maser models incorporate the effects of dust grain formation and stellar pulsation driven shocks, effects that likely play a role in the Orion SiO masers. Both radiative and collisional pump mechanisms have been used to explain SiO masers in these models, but a general consensus remains elusive. VLBI results of Miyoshi et al. (1994) showing the J=1$\rightarrow$0, v=1 and v=2 masers in VY CMa and W Hya to be coincident within 2 mas strengthened the case for collisional pumping, which allows these transitions to be cospatial over a wide range of physical parameters (Lockett & Elitzur 1992). Radiative pumping schemes are much more restrictive and one would not typically expect such coincidence (Bujarrabal 1994). Desmurs et al. (2000) have mapped TX Cam with a resolution of 0.2 mas and claim that centroids of J=1$\rightarrow$0, v=1 and v=2 maser features are offset by $\sim$1.5 mas. Their refutation of the Miyoshi et al. (1994) results must be tempered, though, by the fact that their maps show a high degree of overlap between the transitions despite the measured mean offset.
The positional offset now observed between the J=2$\rightarrow$1 and J=1$\rightarrow$0 masers is in stark disagreement with predictions of most maser pumping models for SiO masers around evolved stars. Radiative and collisional excitation theories both predict multiple SiO maser lines among rotational levels within a given vibrational state. These rotational “chains" are due to the monotonic decrease in radiative decay rates with increasing rotational level (J) when ro-vibrational transitions become optically thick (Elitzur 1992). This produces a natural inversion between rotational states given a J-independent pump mechanism. In such a case, one would expect v=1, J=2$\rightarrow$1 and J=1$\rightarrow$0 masers to generally inhabit the same volumes of gas. Our results show that this is not the case for the brightest maser features at the resolution of our maps.
Some SiO maser models do allow for spatial separation of rotational masers within a vibrational state, but require specialized conditions for this to take place. Lockett & Elitzur (1992) show that collisional pumps tuned for conditions around evolved stars selectively quench higher J level transitions as SiO column densities rise, so that the J=1$\rightarrow$0 maser is the only one to survive above $10^{20}\mbox{cm}^{-2}$. These results are consistent with J=1$\rightarrow$0 masers occurring closer to the protostar where higher neutral $H_2$ densities would be found. Larger SiO column densities might also result from an increase in SiO abundance due to the liberation of SiO into a gaseous state from dust grains in shocks at the interface between the Orion outflow and the surrounding medium (Caselli, Hartquist & Havnes 1997). Such interpretations would, however, require very high column densities to suppress the higher J transition. Radiative pumps typically show little change in J=1$\rightarrow$0 and J=2$\rightarrow$1 maser intensities as $H_2$ density and SiO abundance are varied (Bujarrabal 1994) and offer no clear explanation of the findings reported here.
Given the large number of SiO maser transitions possible, saturation and competitive gain effects can be important. Doel et al. (1995) argue that SiO maser spot sizes and flux densities imply that much of the maser emission is in the saturated regime. These authors have extended SiO maser models for evolved stars to include these effects and find that the J=2$\rightarrow$1 maser gains can exceed those of J=1$\rightarrow$0 in a very narrow range of $H_2$ density around $5\times10^9\mbox{cm}^{-3}$. It is unlikely, however, that this narrow density range holds throughout the large SiO maser region in Orion BN/KL.
Potentially of more relevance to the Orion BN/KL case is the work of Humphreys et al. (2002) to include the effects of hydrodynamic shocks on SiO maser formation in evolved stellar envelopes. In this model, stellar pulsations drive shocks which enhance collisions and give rise to pockets of SiO maser emission. Simulations show that the v=1 J=2$\rightarrow$1 emission occurs at slightly larger radii from the central star than the v=1 J=1$\rightarrow$0 emission. The size of this effect ($\sim2$% of the maser radii) is much smaller than the $\sim14$% offset between the two transitions we have observed here. Inclusion of shocks in SiO models, though, will likely be important in future specific application to the Orion BN/KL case.
In general, the relevance of these SiO maser models in the context of hard radiation fields and shocks in the environment surrounding Source I is questionable. Perhaps the clearest statement that can be made regarding the offset between the two rotational maser transitions discussed in this work is that they appear to require distinct physical conditions for maser amplification. Use of these transitions as probes of the Orion-KL environment depends heavily on the specific predictions of theoretical models, which currently cannot adequately explain our observations.
Conclusions
===========
We have made the first contemporaneous spectral line VLBI observations of 3mm and 7mm wavelength SiO maser transitions towards the Orion BN/KL region. Images of the 3mm v=1 J=2$\rightarrow$1 transition show the masers to be grouped in four main emission regions along the arms of an 'X', a morphology similar to that previously reported for corresponding observations of the 7mm J=1$\rightarrow$0 transition. These results reinforce a scenario in which the SiO masers form in the interface region between a bi-conical protostellar outflow and the surrounding medium. Long tangential maser gain paths along the edges of the outflow result in the masers appearing along the outline of the outflow. Significant SiO maser emission outside the outflow cones defined by the main maser regions is easily identified in the new J=2$\rightarrow$1 image (Region E) and exists, but is less distinct, in the J=1$\rightarrow$0 map. These maser features are inconsistent with the simple bi-conical picture and may indicate the presence of a protostellar disk or other dense outflowing material in the plane orthogonal to the stellar outflow.
In contrast with predictions of SiO maser models, we observe a positional offset between the centroids of J=2$\rightarrow$1 and J=1$\rightarrow$0 maser emission with the higher rotational transition occurring farther from the central protostar. This offset indicates a preference of the two transitions for distinct physical environments. The J=2$\rightarrow$1 masers extend the maximum radius at which SiO masers are seen in Orion-BN/KL to 67 AU and agree with a velocity gradient observed in J=1$\rightarrow$0 emission along the Northern outflow limbs. This is almost certainly due to an effect local to this region of emission as the remaining three maser regions exhibit no clear velocity gradients.
A more complete picture of the SiO masers and their use as tracers at the origins of the Orion BN/KL outflow will require further observations. Proper motion studies of individual maser features will produce three dimensional velocity information and allow detailed study of dynamics in the maser region. High resolution (0.2 mas) simultaneous observations of multiple maser lines are needed to make progress toward understanding the maser pump models. Indeed, VLBI study of SiO masers has now outpaced theoretical maser efforts. The utility of multi-line studies highlights the need to extend the spectral line VLBI technique to higher frequencies to reach other SiO maser transitions. Imaging of higher rotational level maser lines may reveal unexpected effects similar to those discussed in this work.
Barvainis, R. E. 1984, , 279, 358
Barvainis, R. E. & Predmore, C. R. 1985, , 288, 694
Baudry, A., Herpin, F., & Lucas, R. 1998, , 335, 654
Boboltz, D. A., Diamond, P. J., & Kemball, A. J. 1997, , 487, L147
Bujarrabal, V. 1994, , 285, 953
Caselli, P., Hartquist, T. W., & Havnes, O. 1997, , 322, 296
Chernin, L. M. & Wright, M. H. 1996, , 467, 676
Churchwell, E., Wood, D. O. S., Felli, M., & Massi, M. 1987, , 321, 516
Colomer, F., Baudry, A., Graham, D. A., Booth, R. S., de Vicente, P., Krichbaum, T. P., Gomez-Gonzalez, J., & Schalinski, C. 1996, , 312, 950
Desmurs, J.-F., Bujarrabal, V., Colomer, F., & Alcolea, J. 2000, , 360, 189
Diamond, P. J., Kemball, A. J., Zensus, A., Benson, J., & Dhawan, V. 1994, , 430, L61
Diamond, P. J. & Kemball, A. J. 1998, BAAS, 30(4), 1349
Doel, R. C., Gray, M. D., Humphreys, E. M. L., Braithwaite, M. F., & Field, D. 1995, , 302, 797
Doeleman, S. S., Lonsdale, C. J., & Greenhill, L. J. 1998, , 494, 400
Doeleman, S. S., Lonsdale, C. J., & Pelkey, S. 1999, , 510, L55 (Paper I)
Eisner, J. A., Greenhill, L. J., Hernstein, J. R., Moran, J. M., & Menten, K. M. 2002, , 569, 334
Elitzur, M. 1982, , 262, 189
Elitzur, M. 1992, Astronomical Masers (Dordrecht: Kluwer)
Gaume, R. A., Wilson, T. L., Vrba, F. J., Johnston, K. J., & Schmid-Burgk, J. 1998, , 493, 940
Genzel, R., Moran, J., Lane, A. P., Predmore, C. R., Ho, P. T. P., Hansen, S. S., & Reid, M. J. 1979, , 231, L73
Genzel, R., Reid, M. J., Moran, J. M., & Downes, D., , 244, 884
Gezari, D. Y., Backman, D. E., & Werner, M. W. 1998, , 509, 283
Greenhill, L. J., Gwinn, C. R., Schwartz, C., Moran, J. M., & Diamond, P. J. 1998, Nature, 396, 650
Humphreys, E. M. L., Gray, M. D., Yates, J. A., Field, D., Bowen, G. H., & Diamond, P. J. 2002, , 386, 256
Lockett, P. & Elitzur, M. 1992, , 399, 704
McIntosh, G. C. & Predmore, C. R. 1993, , 404, L71
Menten, K. M. & Reid, M. J. 1995, , 445, L157
Miyoshi, M., Matsumoto, K., Kameno, S., Takaba, H., & Iwata, T. 1994, Nature, 371, 395
Morita, K. I., Hasegawa, T., Ukita, N., Okumura, S. K., & Ishiguro, M. 1992, , 44, 373
Pardo, J. R., Cernicharo, J., Gonzalez-Alfonso, E., & Bujarrabal, V. 1998, , 329, 219
Phillips, R. B., Straughn, A. H., Doeleman, S. S., & Lonsdale, C. J. 2003, , 588, L105
Plambeck, R. L., Wright, M. C. H., & Carlstrom, J. E. 1990, , 348, L65
Richer, J. S., Shepherd, D. S., Cabrit, S., Bachiller, R., & Churchwell, E. 2000, Protostars and Planets IV, 867
Rodríguez-Fernández, N., Martín-Pintado, J., & Wilson, T. L. 1999, , 344, 57
Shepherd, D. S., Watson, A. M., Sargent, A. I., & Churchwell, E. 1998, , 507, 861
Wood, D. O. S. & Churchwell, E. 1989, , 340, 265
Wright, M. C. H., Carlstrom, J. E., Plambeck, R. L., & Welch, W. J. 1990, , 99, 1299
Wright, M. C. H., Plambeck, R. L., Mundy, L. G., & Looney, L. W. 1995, , 455, L185
[^1]: Current address: Predmore Associates, 120 Pulpit Rd., Suite 22, Amherst, MA 01002
[^2]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
[^3]: Support for the Coordinated Millimeter VLBI Array work at the Haystack Observatory is provided under a grant from the NSF to the Northeast Radio Observatory Corporation
[^4]: This work was supported by NSF grant AST 97-25951 to the Five College Radio Astronomy Observatory
---
author:
- 'Rohan R. Poojary$^{1,2}$'
bibliography:
- 'bulk\_syk\_soft\_modes.bib'
title: 'BTZ dynamics and chaos'
---
Introduction
============
A very intriguing phenomenon of strongly coupled thermal systems is chaos. In a classical sense, phase space trajectories which differ in their initial values by a small amount tend to grow exponentially far apart at later times. A good quantum mechanical analogue of this would be the time scale at which $$C(t)=\langle[W(t),V(0)]^2\rangle_\beta
\label{otoc1}$$ becomes equal to $ 2 \langle WW\rangle_\beta\langle VV\rangle_\beta$. Here, $W$ & $V$ are simple Hermitian operators with $\mathcal{O}(1)$ degrees of freedom. The exponent $\lambda_L$ in the exponential growth $C(t)\approx e^{\lambda_L t}$ can be regarded as the Lyapunov index generically associated with chaotic systems [@Kitaev:2014talk]. The chaotic behaviour of thermal large $N$ CFTs is related to the chaotic behaviour of black holes $via$ the gauge-gravity duality; the latter are conjectured to be the fastest “scramblers” of information [@Sekino:2008he]. Shenker and Stanford [@Shenker:2014cwa] first computed the $out$-$of$-$time$-$ordered$ (otoc) term in (\[otoc1\]) $$\langle W(t)V(0)W(t)V(0) \rangle_\beta
\label{otoc2}$$ holographically using the eikonal approximation. In [@Shenker:2014cwa] the first correction beyond the probe approximation in $G_N$ was computed for a $2\rightarrow2$ scattering of 2 minimally coupled scalars in $AdS_d$-Schwarzschild, interacting with each other only $via$ gravity. The Lyapunov index thus obtained was $\lambda_L=2\pi/\beta$, $\beta$ being the inverse temperature of the $AdS_d$-Schwarzschild black hole. This led Maldacena, Shenker and Stanford [@Maldacena:2015waa] to propose a bound on the Lyapunov index of large $N$ thermal QFTs, $\lambda_L \leq 2\pi/\beta$, using generic arguments of unitarity and analyticity of Wightman functions on the complex plane. It was assumed that holographic CFTs saturate this bound as evidenced by [@Shenker:2014cwa; @Shenker:2013pqa].\
\
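As a toy numerical illustration of the classical notion of a Lyapunov index described above (a standard textbook example, unrelated to the holographic computations in this paper), one can extract the exponent for the logistic map, whose value at $r=4$ is known analytically to be $\ln 2$:

```python
import math

# Classical Lyapunov index of the logistic map x -> r*x*(1-x):
# lambda = lim_{N->inf} (1/N) * sum_n ln|f'(x_n)|, with f'(x) = r*(1 - 2*x).
r, x = 4.0, 0.2
N, acc = 100_000, 0.0
for _ in range(N):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
    x = r * x * (1.0 - x)
lyap = acc / N
print(lyap)  # close to ln 2 ~ 0.693
```

Nearby trajectories thus separate as $e^{\lambda t}$ with $\lambda=\ln 2$ per iteration, the discrete analogue of $C(t)\approx e^{\lambda_L t}$.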
Further interest in chaotic systems was heightened by the study of the SYK [@Sachdev:2010um] and SYK-like models, initiated first in [@Polchinski:2016xgd] and [@Maldacena:2016hyu]; in [@Maldacena:2016hyu] Maldacena and Stanford found that the otoc for the fermions has $\lambda_L=2\pi/\beta$. This computation was done in the strong coupling (zero temperature) limit of the SYK model, where the model is conformal. In order to compute the leading contribution to the 4pt. function they had to break the conformal invariance at zero temperature. The modes responsible for maximal chaos were shown to be the modes related by diffeomorphisms, which acquire an action due to the breaking of conformal invariance. Their effective action was computed and found to be the Schwarzian derivative of reparametrizations of the thermal circle. Many interesting properties of the SYK model have since been uncovered [@Kitaev:2017awl; @Sonner:2017hxc; @Eberlein:2017wah; @Gross:2017vhb; @Dartois:2017xoe; @Garcia-Garcia:2016mno; @Gross:2017aos; @Stanford:2017thb], to quote a few.\
\
There have been many variations of the original SYK problem, which had relied upon averaging over a space of couplings. A unitary model proposed by Gurau [@Gurau:2016lzk] and Witten [@Witten:2016iux] showed behaviour similar to the SYK model at large $N$. There have also since been higher-dimensional and super-symmetric avatars of this model; [@Choudhury:2017tax; @Gonzalez:2018enk; @Bhattacharya:2017vaz; @Krishnan:2017lra; @Yoon:2017nig; @Bulycheva:2017uqj; @Murugan:2017eto; @Li:2017hdt; @Davison:2016ngz; @Klebanov:2016xxf; @Berkooz:2016cvq; @Fu:2016vas; @Gross:2016kjj; @Gu:2016oyy] study their interesting properties.\
\
This led to investigations to ascertain the bulk degrees of freedom responsible for similar chaotic behaviour. The dynamics of near extremal black holes was found to be captured by the 2d dilaton-gravity theory of Jackiw [@Jackiw:1984je] and Teitelboim [@Teitelboim:1983ux] in [@Almheiri:2014cka]. The $nAdS_2$ dynamics of the Jackiw-Teitelboim (JT) action is essentially dictated by its asymptotic symmetries, since the theory possesses zero propagating degrees of freedom. The effective action for these modes was captured by the Schwarzian action for the $AdS_2$ boundary diffeomorphisms [@Jensen:2016pah; @Maldacena:2016upp]; similar ideas were pursued in [@Engelsoy:2016xyb]. Explicit computations on near extremal RN $AdS_4$ black holes [@Nayak:2018qej] corroborated this understanding from a higher dimensional perspective.\
\
The dynamics of the 2d gravity theory reproducing the Schwarzian effective action has since been studied [@Mertens:2018fds; @Taylor:2017dly]. Apart from the JT action, the Polyakov action with a cosmological constant was also studied and found to describe the $AdS_2$ bulk dynamics dual to the soft modes of SYK [@Mandal:2017thl]. This was done by analysing the action for the co-adjoint orbits of the Virasoro group, which can be thought of as describing the soft modes. Different aspects of $AdS_2$ gravity were also covered in [@Haehl:2017pak; @Grumiller:2017qao; @Dubovsky:2017cnj; @Eling:2017txo; @Kyono:2017pxs; @Forste:2017kwy; @Cvetic:2016eiv; @Almheiri:2016fws; @Engelsoy:2016xyb; @Gaikwad:2018dfc]. The effect of $AdS_2$ arising in rotating horizons was also studied in [@Anninos:2017cnw]; here a large $N$ SYK-like system was modelled to mimic the near horizon near extremal symmetries of Kerr-Newman black holes in $AdS_4$. For past works the reader may refer to [@Gouteraux:2011qh; @Castro:2008ms] and references therein.\
\
There have also been efforts to realize the SYK model completely by providing a holographic description [@Jevicki:2016bwu; @Jevicki:2016ito; @Das:2017pif; @Das:2017hrt; @Das:2017wae]. These have also been studied for the SYK tensor models in some detail [@Forste:2017apw; @Halmagyi:2017leq; @Cai:2017nwk; @Caputa:2017yrh; @Krishnan:2017txw; @Gross:2017hcz; @Krishnan:2016bvg].\
\
It is also worth noting that a theory of open strings governed by the Nambu-Goto action probing an $AdS$-Schwarzschild geometry also exhibits maximal chaos [@deBoer:2017xdk]. In such a system the scrambling time is governed by the string tension. A Schwarzian effective action has also been uncovered for such systems, comprised of the reparametrizations of the world sheet [@Banerjee:2018twd; @Banerjee:2018kwy].\
\
It would be an interesting question to ask if such modes can be found in thermal large $N$ CFTs such that their effective actions govern the chaotic behaviour of the system, like in the SYK model studied in [@Maldacena:2016hyu]. The present holographic understanding of this phenomenon allows one to visualize these modes close to extremality in the near horizon region, at least for non-rotating geometries. The near extremal geometries possess a near horizon $AdS_2$ throat, and bulk scattering of the form studied in [@Shenker:2014cwa] excites graviton modes which can be described by a JT theory confined to this throat region [@Maldacena:2015waa]. It is worth noting that the holographic computations in [@Shenker:2014cwa; @Shenker:2013pqa] do not assume extremality. It would therefore be worthwhile to understand how these modes behave away from extremality and also in the entirety of a black hole in $AdS$.\
\
To this end we address a simpler problem, that in $AdS_3$, which like the dilaton-gravity theory in $AdS_2$ has only boundary degrees of freedom. In fact in $AdS_3$ these have been well studied and are called the Brown-Henneaux modes [@Brown:1986nw], which are in one-to-one correspondence with the 2d infinite conformal symmetries of the boundary CFT$_2$. In section 2, using the gauge-gravity prescription, we first formally equate the computation of eikonal scattering in the bulk done in [@Shenker:2014cwa] to computing correlators in the boundary CFT up to linear order in $G_N$. This we do by computing the effective action for conformal transformations on the boundary obtained from the bulk on-shell path integral. We then (section 3) compute the effective action for the Brown-Henneaux modes about a rotating BTZ and find it to be the product of square-root Schwarzian derivatives, one each for the left- and right-moving conformal transformations of the boundary.\
\
In section 4 we proceed to compute the correction, to linear order in $G_N$, to the 4pt function of 2 boundary operators computed in the probe approximation. We thus reproduce the answer of [@Shenker:2014cwa] for the non-rotating BTZ case, $\lambda_L=2\pi/\beta$. The same procedure applied to the rotating BTZ yields $\lambda_L=2\pi/\beta_+$ where $\beta_\pm=\beta(1\mp\mu_L)$, with $\mu_L=r_-/r_+$ being the chemical potential associated with angular momentum. We thus find that for the rotating BTZ $\lambda_L=2\pi/\beta_+>2\pi/\beta$.\
\
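The inequality just quoted is elementary to check; a minimal numerical sketch, with hypothetical values of $\beta$ and $\mu_L$ chosen purely for illustration:

```python
import math

beta = 2.0  # hypothetical inverse temperature, for illustration only
for mu_L in (0.1, 0.4, 0.9):          # mu_L = r_-/r_+ lies in (0, 1)
    beta_plus = beta * (1.0 - mu_L)   # beta_+ = beta*(1 - mu_L)
    beta_minus = beta * (1.0 + mu_L)  # beta_- = beta*(1 + mu_L)
    lam = 2.0 * math.pi / beta_plus   # lambda_L for the rotating BTZ
    assert lam > 2.0 * math.pi / beta  # always exceeds the non-rotating value
print("lambda_L = 2*pi/beta_+ exceeds 2*pi/beta for all sampled mu_L")
```

Since $0<\mu_L<1$ implies $\beta_+<\beta$, the rotating value $2\pi/\beta_+$ always exceeds $2\pi/\beta$.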
We end in section 5 with some conclusions and a discussion of the possible implications of the result. In particular we point out a possible modification of a part of the proof given in [@Maldacena:2015waa] so as to allow for a modified bound in the presence of a chemical potential for angular momentum.
Bulk Computation
================
In this section we heuristically equate the eikonal calculation of [@Shenker:2014cwa] to the one generally done when introducing the AdS/CFT correspondence, $i.e.$ equating the bulk on-shell (small $G_N$) path-integral to the generating function of boundary correlators: $$\underset{\phi\rightarrow\phi_0}{\underset{g\rightarrow\eta}{\int}}\mathcal{D}[g]\mathcal{D}[\phi_i]\,\,e^{i(S_{grav}+S_{matter})}=Z_{CFT}[\phi_0]=\langle e^{i\int_{\partial}\phi_0\mathcal{O}}\rangle_{CFT}
\label{adscft}$$ where $\phi_0$ is the boundary value of the scalar field in the bulk which sources a scalar operator $\mathcal{O}$ in the boundary CFT. Like in [@Shenker:2014cwa] we will consider 2 minimally coupled scalar fields in the bulk with masses $m_1\&\,m_2$[^1], with no interaction terms between them $$S_{matter}=-\int\sqrt{-g}\tfrac{1}{2}\left[(\partial\phi_i)^2-m^2_i\phi_i^2\right],\,\hspace{1cm}\,S_{grav}=-\frac{1}{16\pi G_N}\int\sqrt{-g}(R-2\Lambda).$$ Here we have not written down the boundary terms for the actions which make the variational problem well defined and render the on-shell action finite.\
\
We will concern ourselves with the computation of the bulk 4pt. function $\langle\phi_1\phi_1\phi_2\phi_2\rangle$, which is equal to the boundary 4pt. function $$\langle\mathcal{O}_1\mathcal{O}_1\mathcal{O}_2\mathcal{O}_2\rangle\approx \lim_{r\rightarrow\infty}r^{-2(2d-\Delta_1-\Delta_2)}\langle\phi_1\phi_1\phi_2\phi_2\rangle
\label{bulkbndy1}$$ where each of the bulk coordinates is taken to the boundary[^2]. Using the bulk path integral expression for the 4pt. function gives $$\langle\phi_1\phi_1\phi_2\phi_2\rangle=\int\mathcal{D}[g]\mathcal{D}[\phi_i]\,\,\phi_1\phi_1\phi_2\phi_2 \,\,e^{i(S_{grav}+S_{matter})}
\label{bulkpi}$$ Here, for simplicity of notation, we have not written the spacetime dependence. Since we are concerned with the limit in which classical gravity dominates, we are interested in the saddle-point evaluation of the above path-integral. Further, one usually considers the probe approximation in which the scalars act as probes of a given metric satisfying the vacuum Einstein’s equations with $\Lambda$. We will denote this solution as $\bar{g}_{\mu\nu}$. In this limit, since the matter action is quadratic, the answer is readily computed $$\langle\phi_1\phi_1\rangle_{\bar{g}}\langle\phi_2\phi_2\rangle_{\bar{g}} =\int\mathcal{D}[\phi_i]\,\,\phi_1\phi_1\phi_2\phi_2 \,\,e^{i(S_{grav}[\bar{g}]+S_{matter}[\bar{\phi_i}])}
\label{bulkpi0}$$ where $\bar{\phi_i}$ solves the Klein-Gordon equation in the background $\bar{g}_{\mu\nu}$[^3]. Let us consider the effect of first-order backreaction in $G_N$ by considering $$R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu}=4\pi G_N T_{\mu\nu}
\label{bulkeom}$$ where $g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}$ and $T_{\mu\nu}$ is determined entirely in terms of $\bar{\phi_i}$s. Therefore we can rewrite (\[bulkpi\]) as $$\begin{aligned}
\langle\phi_1\phi_1\phi_2\phi_2\rangle&=&\int\mathcal{D}[h]\mathcal{D}[\phi_i]\,\,\phi_1\phi_1\phi_2\phi_2\,\, e^{i\left\lbrace S_{grav}[\bar{g}]+\delta S_{grav}[h]+S_{matter}[\bar{g},\bar{\phi_i}]+\delta S_{matter}[h]\right\rbrace}\cr
&=&\int\mathcal{D}[h]\langle\phi_1\phi_1\rangle_{\bar{g}+h}\langle\phi_2\phi_2\rangle_{\bar{g}+h}\,\, e^{i\delta S_{grav}[h]}\cr
&=&\int\mathcal{D}[h]\langle\phi_1\phi_1\rangle_{\bar{g}+h}\langle\phi_2\phi_2\rangle_{\bar{g}+h}\,\, {\rm exp}\left[\tfrac{il}{16\pi G_N}\int \tfrac{1}{2}h{\rm D^2}h\right]
\label{bulkpi2}\end{aligned}$$ where $\langle\phi_1\phi_1\rangle_{\bar{g}+h}$ denotes the 2pt. function evaluated in the background metric $\bar{g}_{\mu\nu}+h_{\mu\nu}$, for an $h_{\mu\nu}$ determined by (\[bulkeom\]) and constrained by boundary conditions on the bulk metric. Note that here we have only considered diagrams in which the graviton lines attach to different scalar legs; corrections to the scalar propagator due to gravitons attaching to the same scalar leg have been ignored.\
\
In the eikonal approximation, $i.e.$ in the limit when the scalar field momenta are taken to be large and light-like, (\[bulkpi2\]) reduces to [@Kabat:1992tb] $$\int\mathcal{D}[h]\,\,{\rm exp}\left[\frac{il}{16\pi G_N}\int \tfrac{1}{2}hD^2h +h_{\mu\nu}T^{\mu\nu}\right]$$ where we have assumed that the $T_{\mu\nu}$ due to the matter fields are like shockwaves with light-like momenta. In this approximation the incoming and the out-going momenta are the same and we get the effect of infinite graviton ladder exchanges between the 2 scalar propagators. This is precisely the origin of the $e^{i\delta(s)}$ factor in [@Shenker:2013pqa]. Here $h_{\mu\nu}$ is the response to the shockwave generated by the high momentum scalar propagators in $T_{\mu\nu}$. This approximation is justified in the shock wave analysis of [@Shenker:2013pqa] since the scattering is arranged to have a maximum contribution from the near horizon region. By the time any scalar perturbation reaches the horizon region it is blue-shifted exponentially, thus giving the leading contribution to the correction to the 4pt function. The eikonal approximation used in [@Shenker:2014cwa] effectively means that the contribution comes from the bifurcate horizon. One can then use bulk-to-boundary propagators to compute the correlation function on the boundary.\
\
Let us now try to rephrase the same computation in $AdS_3$. Let us write the probe-approximation 4pt function as $$\langle\mathcal{O}_1\mathcal{O}_1\rangle_{\bar{g}}\langle\mathcal{O}_2 \mathcal{O}_2\rangle_{\bar{g}}=\lim_{r\rightarrow\partial}\int\mathcal{D}[\phi_i]\,\,\phi_1\phi_1\phi_2\phi_2\,\, e^{iS_{matter}[\bar{g},\bar{\phi_i}]}
\label{4pt1}$$ here we have not included $S_{grav}[\bar{g}]$ since it would be a constant. The correction it would receive from the gravity path integral would be captured in $$\langle\mathcal{O}_1\mathcal{O}_1\mathcal{O}_2\mathcal{O}_2\rangle=\int \mathcal{D}[g] \langle\mathcal{O}_1\mathcal{O}_1\rangle_{g}\langle\mathcal{O}_2 \mathcal{O}_2\rangle_{g}\,\,{\rm exp}\left[\tfrac{il}{16\pi G_N}\int\sqrt{-g}(R-2\Lambda)\right].
\label{4pt2}$$ We can now consider the above path integral to be dominated by the gravity saddle in the small $G_N$ limit. Therefore we need only seek contributions from geometries satisfying the vacuum Einstein’s equations. This way we should be able to reproduce the leading correction in the small $G_N$ limit.\
In the above metric path integral only boundary degrees of freedom contribute in $AdS_3$. These are in one-to-one correspondence with the boundary conformal transformations[^4]. Further, $\langle\mathcal{O}_1\mathcal{O}_1\rangle_{g}\approx\langle\mathcal{O}_1\mathcal{O}_1\rangle_{\bar{g}+h}$ would simply correspond to the change in the 2pt. functions due to conformal transformations. Therefore, at least in $AdS_3$, we must be able to capture the effect of chaos $via$ (\[4pt2\]).
$AdS_3$ story from the boundary
===============================
In this section we will derive the effective action for the soft modes by calculating the bulk gravity on-shell action about an arbitrary BTZ geometry. As justified in the previous section, this amounts to finding the effective action for the diffeomorphisms in the bulk respecting Dirichlet boundary conditions. It is known that such bulk configurations are dual to states in the boundary CFT created by the action of 2 commuting copies of the Virasoro algebra. We begin by writing the most general bulk configuration with a flat boundary metric, using a Lorentzian metric in the Fefferman-Graham gauge [@Banados:1998gg; @Henningson:1998ey] $$\frac{ds^2}{l^2}=\frac{dr^2}{r^2}-r^2dx^+dx^-+(T_{++}dx^{+2}+T_{--}dx^{-2})-\frac{T_{++}T_{--}}{r^2}dx^+dx^-,\label{Banados_metric}$$ where $T_{++}=T_{++}(x^+),\,\,T_{--}=T_{--}(x^-)$ and $x^\pm=t\pm\phi$. One can cast the BTZ [@Banados:1992wn] metric in $AdS_3$ $$\frac{ds^2}{l^2}=-\frac{(r^2-r_+^2)(r^2-r_-^2)}{r^2}dt^2+\frac{r^2\,dr^2}{(r^2-r_+^2)(r^2-r_-^2)}+r^2\Big(d\phi-\frac{r_+r_-}{r^2}dt\Big)^2,\qquad M=r_+^2+r_-^2,\quad J=2lr_+r_-\label{BTZ_metric}$$ in the Fefferman-Graham gauge with $T_{\pm\pm}=(r_+\pm r_-)^2$. It is worthwhile to notice that the radial coordinate in (\[Banados\_metric\]) sees the horizon at $r_h=\sqrt{r_+^2-r_-^2}\,$[^5].
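As a quick consistency check (a sketch assuming the horizon of the Fefferman-Graham form sits where the $dx^+dx^-$ coefficient degenerates, $r^4=T_{++}T_{--}$), the value $r_h=\sqrt{r_+^2-r_-^2}$ follows symbolically from $T_{\pm\pm}=(r_+\pm r_-)^2$ as stated in the text:

```python
import sympy as sp

# Parametrize r_+ = r_- + d with d > 0 so that r_+ > r_- is built in.
rm, d = sp.symbols('r_minus d', positive=True)
rp = rm + d
Tpp, Tmm = (rp + rm) ** 2, (rp - rm) ** 2  # T_{++}, T_{--} for BTZ (from the text)
rh_sq = sp.sqrt(Tpp * Tmm)                 # horizon: r_h^2 = sqrt(T_{++} T_{--})
assert sp.simplify(rh_sq - (rp ** 2 - rm ** 2)) == 0
print("r_h^2 = r_+^2 - r_-^2")
```

For the non-rotating case $r_-=0$ this reduces to $r_h=r_+$, as expected.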
The bulk action with the relevant boundary (counter) terms is [@Henningson:1998ey; @deHaro:2000vlm] $$16\pi G_N\, S_{bulk}=\int d^3x\,\sqrt{-g}\,(R-2\Lambda)+2\int_\partial d^2x\,\sqrt{-\gamma}\,\Big(K+\frac{1}{l}\Big),\label{bulk_action}$$ where $\Lambda=-1/l^2$ and $l$ is the length of $AdS_3$. The $1/l$ term in the boundary action is used to make the on-shell action finite. Therefore the on-shell value of (\[bulk\_action\]) for arbitrary metrics of the form (\[Banados\_metric\]) is $$S^{on\text{-}shell}_{bulk}=\frac{l}{32\pi G_N}\int_\partial d^2x\,\big(r_h^2+\dots\big)=\frac{l}{32\pi G_N}\int_\partial d^2x\,\sqrt{T_{++}T_{--}}\,.\label{bulk_onshell_action}$$ Since gravity in 3 dimensions is non-dynamical in the bulk, all solutions to the bulk action can be obtained from one another $via$ diffeomorphisms. In fact the Fefferman-Graham theorem [@Henningson:1998ey] allows one to express any solution in the form (\[Banados\_metric\]) as a diffeomorphism of another, and we thus only need to concern ourselves with diffeomorphisms that preserve this form to generate all solutions. We would therefore like to know the action associated with such a diffeomorphism having started from a particular solution, for e.g. one of the form of the BTZ metric (\[BTZ\_metric\]). To this end we would like to know how $T_{\pm\pm}$ depend on these diffeomorphisms.\
\
We notice that for infinitesimal diffeomorphisms which maintain the form of (\[Banados\_metric\]), the change in $T_{\pm\pm}$ is given by [@Balasubramanian:1999re; @Brown:1986nw] $$\delta T_{\pm\pm}=\xi^\pm_{(0)}T'_{\pm\pm}+2{\xi_{(0)}^\pm}'T_{\pm\pm}-2{\xi_{(0)}^\pm}'''\,,\label{delta_T}$$ where[^6] $$\xi^\mu\partial_\mu=\xi^r\partial_r+\xi_{(0)}^+(x^+)\partial_+ + \xi_{(0)}^-(x^-)\partial_-+\mathcal{O}(1/r)\,,\qquad \xi^r=-\frac{r}{2}\big({\xi_{(0)}^+}'+{\xi_{(0)}^-}'\big).\label{FG_diffeomorphism}$$ We also note that the change in a Schwarzian derivative $\{T(u),u\}$ due to a diffeo $u\rightarrow u+ \epsilon(u)$ is: $$\{T(u)+\epsilon(u)T'(u),u\}=\{T,u\}+\epsilon(u)\partial_u\{T,u\}+2\epsilon'(u)\{T,u\}+\epsilon'''(u).\label{delta_Sch}$$ One can find the full non-linear completion of (\[FG\_diffeomorphism\]) [@Roberts:2012aq] which takes Poincaré $AdS_3$ (with $T_{\pm\pm}=0$) to (\[Banados\_metric\]). Under such a diffeomorphism the stress tensor is proportional to the Schwarzian of the boundary conformal transformations. Comparing (\[delta\_T\]) and (\[delta\_Sch\]) we can deduce that under $x^\pm\rightarrow X^\pm(x^\pm)$ $$T_{\pm\pm}=-2\{X^\pm,x^\pm\}\,,\qquad \{X,x\}=\frac{X'''}{X'}-\frac{3}{2}\Big(\frac{X''}{X'}\Big)^2\,,$$ where for infinitesimal diffeomorphisms $X^\pm\equiv x^\pm + \xi_{(0)}^\pm$. One notices that $X^\pm=x^\pm\implies$ $T_{\pm\pm}=0$. This value can be shifted by defining $$T_{\pm\pm}=-2\{X^\pm,x^\pm\}+L_\pm{X^\pm}'^2\,,$$ where the choice of ${X^\pm}'^2$ makes sure that the terms linear in $T_{\pm\pm}$ in (\[delta\_T\]) remain the same. Here, $L_\pm$ define the charge of the BTZ metric about which the change in the parameters $T_{\pm\pm}$ is measured[^7].\
\
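The infinitesimal transformation rule of the Schwarzian derivative stated above can be verified symbolically; a small sympy sketch expanding the Schwarzian of $T+\epsilon\,T'$ to first order in $\epsilon$:

```python
import sympy as sp

u, a = sp.symbols('u a')  # a is a bookkeeping parameter for the expansion
T = sp.Function('T')(u)
eps = sp.Function('epsilon')(u)

def schwarzian(f):
    # {f, u} = f'''/f' - (3/2) (f''/f')^2
    return sp.diff(f, u, 3) / sp.diff(f, u) \
        - sp.Rational(3, 2) * (sp.diff(f, u, 2) / sp.diff(f, u)) ** 2

S = schwarzian(T)
lhs = schwarzian(T + a * eps * sp.diff(T, u))
lhs_lin = lhs.series(a, 0, 2).removeO()  # keep terms through O(a)
rhs = S + a * (eps * sp.diff(S, u) + 2 * sp.diff(eps, u) * S + sp.diff(eps, u, 3))
assert sp.simplify(lhs_lin - rhs) == 0
print("delta{T,u} = eps {T,u}' + 2 eps' {T,u} + eps'''")
```

The $\epsilon'''$ inhomogeneity is the familiar anomalous piece of the Virasoro transformation of the stress tensor.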
Therefore the on-shell action in (\[bulk\_onshell\_action\]) is $$S^{on\text{-}shell}_{bulk}=\frac{l}{32\pi G_N}\int_\partial d^2x\,\sqrt{\big(-2\{X^+,x^+\}+L_+{X^+}'^2\big)\big(-2\{X^-,x^-\}+L_-{X^-}'^2\big)}\,,\label{bulk_onshell_action_explicit}$$ where $L_\pm$ decides which bulk configuration one measures the change from. The above action is defined on the boundary of $AdS_3$; the integral in the $AdS$ radial direction receives contributions from $r=r_h$ and the boundary $r=\infty$. The divergent contribution from the boundary at $r=\infty$ is cancelled by the holographic boundary counter-terms; the only finite contribution comes from the horizon. Under infinitesimal diffeomorphisms $X^\pm\rightarrow x^\pm + \epsilon^\pm(x^\pm)$ the quadratic action takes the form $$S^{on\text{-}shell}_{bulk}=-\frac{l}{64\pi G_N}\int_\partial d^2x\,\bigg(\frac{\sqrt{L_-}}{L_+^{3/2}}\Big({\epsilon^+}'''(x^+)^2+L_+{\epsilon^+}''(x^+)^2\Big)+\frac{\sqrt{L_+}}{L_-^{3/2}}\Big({\epsilon^-}'''(x^-)^2+L_-{\epsilon^-}''(x^-)^2\Big)\bigg)\label{bulk_onshell_action_quad}$$ where we have ignored boundary terms. Since we will be interested in computing OTOCs later, we Euclideanize the above action $$S^{on\text{-}shell}_{bulk,E}=\frac{l}{64\pi G_N}\int_\partial d^2x\,\bigg(\frac{\sqrt{L_-}}{L_+^{3/2}}\Big({\epsilon^+}'''(x^+)^2-L_+{\epsilon^+}''(x^+)^2\Big)+\frac{\sqrt{L_+}}{L_-^{3/2}}\Big({\epsilon^-}'''(x^-)^2-L_-{\epsilon^-}''(x^-)^2\Big)\bigg)\label{bulk_onshell_action_quad_euclid}$$ The quadratic action divides itself into left and right sectors. The action (\[bulk\_onshell\_action\_explicit\]) evidently has the symmetries of the Schwarzian, the infinitesimal versions of which are manifested in (\[bulk\_onshell\_action\_quad\_euclid\]). We would correspondingly have got the above action by working in the bulk in a Euclidean setting to begin with. This would have invariably required the angular momentum associated with the BTZ metric to be imaginary so as to have a real Euclidean metric.\
\
One can in principle derive the one-loop exact action for the boundary gravitons in $AdS_3$ as was done recently by [@Cotler:2018zff] using the Chern-Simons prescription. The authors obtain a theory of re-parametrizations which encodes the loop contributions of boundary gravitons as a perturbative expansion in $1/c$[^8].
Propagators
-----------
We are now in the Euclidean setting wherein the time $\tau$ runs along the imaginary direction while the space-like coordinate $\phi$ is real. The $x^\pm$ coordinates of the Euclideanised BTZ metric can be regarded as having complex periodicities [@KeskiVakkuri:1998nw] $$x^\pm\simeq x^\pm+i\beta_\pm,\qquad \beta_\pm=\beta(1\mp i\mu)$$ \[periodicity\] We regard the integral in (\[bulk\_onshell\_action\_quad\_euclid\]) to be over one such periodic interval, therefore the above action splits into two 1-dim actions $$S_+[\epsilon^+]=\alpha_+\int\epsilon^+\left(\bar{\partial}^{(6)}+L_+\bar{\partial}^{(4)}\right)\epsilon^+,\qquad S_-[\epsilon^-]=\alpha_-\int\epsilon^-\left(\partial^{(6)}+L_-\partial^{(4)}\right)\epsilon^-,$$ where $\alpha_\pm = \frac{-2\pi l}{64\pi G_N L_\pm^{3/2}}$. For convenience we have defined $$\bar{z}=-ix^+\simeq\bar{z}+\beta_+,\qquad z=-ix^-\simeq z+\beta_-$$ \[periodicity1\] and we analyse the propagator for $\epsilon^+$, for which we evaluate the Green’s function $G_+$ of the operator $$\mathcal{O}_+=\bar{\partial}^{(4)}\left(\bar{\partial}^{(2)}+\left(\tfrac{2\pi}{\beta_+}\right)^2\right),\qquad \mathcal{O}_+G_+=\delta(\bar{z}).$$ \[propagatoreq\] Here, we observe that $G_+$ depends on $\bar{z}$ through the ratio $\bar{z}/\beta_+$. The zero modes themselves look like $\{1,e^{\tfrac{2\pi i\bar{z}}{\beta_+}},e^{-\tfrac{2\pi i\bar{z}}{\beta_+}}\}$. We will first compute $G_+$ for the Schwarzschild case and then generalize to the rotating BTZ case.\
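As a quick numerical sanity check on the zero modes (a sketch only: $\beta_+$ is set to an arbitrary real value, the garbled sign convention is read as $\mathcal{O}_+=\bar{\partial}^{(4)}(\bar{\partial}^{(2)}+(2\pi/\beta_+)^2)$, and the modes are written as plain exponentials $e^{k\bar{z}}$), the Fourier symbol $k^4\left(k^2+(2\pi/\beta_+)^2\right)$ of the operator vanishes exactly on the three quoted modes:

```python
import math

beta = 2.0  # illustrative (assumed) numerical value of beta_+

def symbol(k: complex) -> complex:
    """Symbol of O_+ = d^4 (d^2 + (2*pi/beta)^2) acting on exp(k * zbar)."""
    return k**4 * (k**2 + (2 * math.pi / beta) ** 2)

# the three SL(2)-type zero modes: the constant and e^{±2*pi*i*zbar/beta}
for k in (0, 2j * math.pi / beta, -2j * math.pi / beta):
    assert abs(symbol(k)) < 1e-12  # annihilated by the operator

# a generic periodic mode (n = 2) is not annihilated
assert abs(symbol(4j * math.pi / beta)) > 1.0
```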
\
For $AdS_3$ Schwarzschild, $\beta_+=\beta_-=\beta\in{\mathbb{R}}$. Assuming $G_+$ to be a function of $\bar{z}/\beta$ we solve (\[propagatoreq\]) for real values of $\bar{z}/\beta$ $i.e.$ $G_+(\tau/\beta)$ and then, using Schwarz’s theorem, analytically continue it to arbitrary complex values of $\bar{z}/\beta$ [@Streater:1964]. This allows us to express both sides of (\[propagatoreq\]) as a discrete sum, thus G\_+=, , where the prime on the sum denotes $n\notin\{-1,0,1\}$. Doing the relevant Matsubara summation and analytically continuing in the complex $\bar{z}/\beta$ plane yields ()G\_+(|[z]{})&=& (2-)\^4 -(2-)\^2+&&+(2-)(2)+ +a+b(2).\[propagator\] Here, $$\Big\|\frac{\bar{z}}{\beta}\Big\|=
\begin{cases}
\frac{\bar{z}}{\beta}, & \text{if Re} \left[\frac{\bar{z}}{\beta}\right]>0\\&\\
\frac{-\bar{z}}{\beta}, & \text{if Re} \left[\frac{\bar{z}}{\beta}\right]<0.
\end{cases}
$$ The last 2 terms[^9] in (\[propagator\]) consist of the zero modes we neglected in the sum and would drop out of any computation which respects the bulk isometries. An identical expression exists for $G_-$ in terms of $z/\beta$.
Rotating BTZ propagators
------------------------
One could naively extend the above method to the rotating BTZ case. This would require solving (\[propagatoreq\]) for real values of $\bar{z}/\beta_+$ $$\frac{\bar{z}}{\beta_+}=\frac{\tau-i\phi}{\beta-i\Omega}\in\mathbb{R},$$ \[realsection\] where $\Omega=\mu\beta$. The same holds true for $z/\beta_-\in{\mathbb{R}}$. It is quite clear from the outset that one could define Euclidean coordinates $\{\tilde{\tau},\tilde{\phi}\}$. $$\tilde{\tau}=\frac{\tau+\mu\phi}{(1+\mu^2)},\hspace{1cm}\tilde{\phi}=\frac{\phi-\mu\tau}{(1+\mu^2)}.
\label{tildecoord}$$ Therefore $\bar{z}/\beta_+=\bar{\tilde{z}}/\beta$, where $\bar{\tilde{z}}=\tilde{\tau}-i\tilde{\phi}$. This is a conformal transformation of the boundary metric: $$ds^2=d\tau^2+d\phi^2\rightarrow(1+\mu^2)(d\tilde{\tau}^2+d\tilde{\phi}^2).$$ We do not however perform such a transformation on the propagator; we merely use (\[tildecoord\]) to make the coordinate dependence look simple. Therefore solving (\[propagatoreq\]) for real values of $\bar{\tilde{z}}/\beta$ and analytically continuing we get ()G\_+(|[z]{})&=& (2-)\^4 -(2-)\^2+&&+(2-)(2)+ +a+b(2).\[propagatorBTZ\] where $$\Big\|\frac{\bar{z}}{\beta_+}\Big\|=
\begin{cases}
\frac{\bar{z}}{\beta_+}, & \text{if Re} \left[\frac{\bar{z}}{\beta_+}\right]>0\\&\\
\frac{-\bar{z}}{\beta_+}, & \text{if Re} \left[\frac{\bar{z}}{\beta_+}\right]<0.
\end{cases}
$$ Note that the conformal transformation (\[tildecoord\]) isn’t one of the $SL(2,\mathbb{R})$ zero modes.
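The conformal factor quoted above can be checked by elementary algebra: the differentials of (\[tildecoord\]) satisfy $d\tau^2+d\phi^2=(1+\mu^2)(d\tilde{\tau}^2+d\tilde{\phi}^2)$. A minimal numerical sketch (sample values only):

```python
import random

random.seed(0)
for _ in range(100):
    mu, dtau, dphi = (random.uniform(-2, 2) for _ in range(3))
    # differentials of tilde-tau = (tau + mu*phi)/(1+mu^2), tilde-phi = (phi - mu*tau)/(1+mu^2)
    dttil = (dtau + mu * dphi) / (1 + mu**2)
    dptil = (dphi - mu * dtau) / (1 + mu**2)
    lhs = dtau**2 + dphi**2                    # ds^2 in {tau, phi}
    rhs = (1 + mu**2) * (dttil**2 + dptil**2)  # conformal factor times ds^2 in tilde coordinates
    assert abs(lhs - rhs) < 1e-9
```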
4pt correlator
==============
In this section we will use the propagators obtained in the last section to compute the next to leading order in $G_N$ corrections to the 4pt. function. We first consider the leading contribution to the Euclidean 4pt. function of four scalars [@KeskiVakkuri:1998nw] V\_1V\_2W\_3W\_4= \[4pt0\] where $z_{12}=z_1-z_2$ and $V_1\equiv V(\bar{z}_1,z_1)$. We would be interested in seeing how they depend on the bulk on-shell metrics in the path integral (\[4pt2\]). As explained before, this corresponds to computing the change in (\[4pt0\]) due to conformal transformations parametrized by $\epsilon^+(\bar{z})$ & $\epsilon^-(z)$. Under $\bar{z}\rightarrow\bar{z}+\epsilon^+(\bar{z})$ & $z\rightarrow z+\epsilon^-(z)$ we have && (\^\_1,\^\_2),(\^\_1,\^\_2)&=& |[h]{}+[c.c]{} \[2ptchange\] It can be seen that $\mathcal{B}$ above is invariant under the $SL(2,{\mathbb{R}})$ zero modes of $\epsilon^+=\{1,e^{\pm 2\pi i \bar{z}/\beta_+} \}$ & $\epsilon^-=\{1,e^{\pm 2\pi i z/\beta_-} \}$. The correction to (\[4pt0\]) is obtained by Wick contracting the $\epsilon^\pm$s with each other using the propagator (\[propagator\]) and its complex conjugate. We will first analyse this around $AdS_3$ Schwarzschild and then in a generic rotating BTZ background.
$AdS_3$ Schwarzschild {#non_rotating_btz}
--------------------
For the case of $AdS_3$ Schwarzschild we have $\beta_\pm=\beta$. The reality condition of (\[realsection\]) implies $\phi=0$, $i.e.$ we compute the propagators $G_\pm$ along the real $\tau$ line and then analytically continue. To proceed we first have to order the Euclidean times of the operators in question and then use the appropriate propagator value for Wick contraction. We then add arbitrary Lorentzian time arguments corresponding to each operator and read off the answer. The Euclidean answer for the expression = (\^\_1,\^\_2)(\^\_3,\^\_4)\[BB\] looks like h\_1h\_2&&()\^6(z\_[13]{}\^4+z\_[24]{}\^4-z\_[14]{}\^4-z\_[23]{}\^4)()().&&+()\^5&&+()\^4&&+()\^3&&.+()\^2(\^2-3)+ [c.c.]{}|\_[h\_1|[h]{}\_1,h\_2|[h]{}\_2]{} \[EuclideanTOSch\] Introducing Lorentzian time coordinates for each operator, $i.e.$ $z\rightarrow \tau +i \phi -it$, we fix $\tau_1=\beta,\tau_2=\beta/4,\tau_3=\beta/2,\tau_4=3\beta/4$ and further fix $t_1=t_2=t, t_3=t_4=0$ & $\phi_1=\phi_2=0,\phi_3=\phi_4=\phi$. Having done this, (\[EuclideanTOSch\]) shows a polynomial growth in time.\
\
For the case of the OTOC, contracting $\epsilon_i$ in (\[BB\]) we similarly get &&()\^6(z\_[13]{}\^4+z\_[24]{}\^4-z\_[14]{}\^4-z\_[23]{}\^4)()()&&+()\^5&&+()\^4&&+()\^3, &&+()\^2+ |\_[h\_1|[h]{}\_1,h\_2|[h]{}\_2]{} \[EuclideanOTOSch\] Similarly, after introducing Lorentzian time coordinates for each operator, $i.e.$ $z\rightarrow \tau +i \phi -it$, we fix $\tau_1=\beta,\tau_3=\beta/4,\tau_2=\beta/2,\tau_4=3\beta/4$ and further fix $t_1=t_2=t, t_3=t_4=0$ & $\phi_1=\phi_2=0,\phi_3=\phi_4=\phi$. Here one clearly sees the exponential behaviour of the correlator in both the boundary null coordinates $t\pm\phi$ as \~\[SchcldChaos\] Thus we see that the Schwarzian action (\[bulk\_onshell\_action\_explicit\]) associated with the Brown-Henneaux modes (\[FG\_diffeomorphism\]) is responsible for the maximal Lyapunov index, at least in non-rotating BTZ. Note that since we have been cavalier about causality in our propagators we do not reproduce the $(t-|\phi|)$ behaviour in the exponent as in [@Shenker:2014cwa][^10].
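Spelling out the late-time behaviour (under the assumption, as in the rotating case, that the growing terms come with hyperbolic cosines of the null coordinates): since $$\cosh\left(\frac{2\pi(t\pm\phi)}{\beta}\right)\sim\frac{1}{2}\,e^{\frac{2\pi}{\beta}(t\pm\phi)}\quad\text{for }t\gg\beta,$$ growth at rate $2\pi/\beta$ in both null coordinates translates into a Lyapunov index $\lambda_L=2\pi/\beta$ for the OTOC, saturating the chaos bound of [@Maldacena:2015waa].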
Rotating BTZ {#rotating_btz}
------------
Let us do a similar exercise for rotating BTZ, where $\beta_\pm$ are complex parameters. Here, in order to use the propagator in (\[propagator\]) we have to use the shifted Euclidean time of (\[tildecoord\]), $\tilde{\tau}=(\tau+\mu \phi)/(1+\mu^2)$, for ordering the different Euclidean times, $i.e.$ for the time-ordered correlator we arrange $\tilde{\tau}_1>\tilde{\tau}_2>\tilde{\tau}_3>\tilde{\tau}_4$ while for the out-of-time-ordered one $\tilde{\tau}_1>\tilde{\tau}_3>\tilde{\tau}_2>\tilde{\tau}_4$. We then fix the $\tilde{\tau}$ on a circle of period $\beta$.\
\
For the time-ordered case the exact Euclidean answer is given in the appendix (\[EuclideanTObtz\]) for the sake of brevity. As before we introduce a Lorentzian time $\tilde{t}$, this time for the shifted coordinate $\tilde{\tau}$. We will infer the Lorentzian equivalent of (\[tildecoord\]) later. We fix the Euclidean times to $\tilde{\tau}_1=\beta,\tilde{\tau}_2=\beta/4,\tilde{\tau}_3=\beta/2,\tilde{\tau}_4=3\beta/4$ and further fix $\tilde{t}_1=\tilde{t}_2=\tilde{t}, \tilde{t}_3=\tilde{t}_4=0$ & $\tilde{\phi}_1=\tilde{\phi}_2=0,\tilde{\phi}_3=\tilde{\phi}_4=\tilde{\phi}$. Having done this, (\[EuclideanTObtz\]) shows a polynomial growth in time.\
\
Similarly, the out-of-time-ordered Euclidean answer is (\[EuclideanOTObtz\]). Introducing Lorentzian times and fixing the Euclidean times to $\tilde{\tau}_1=\beta,\tilde{\tau}_3=\beta/4,\tilde{\tau}_2=\beta/2,\tilde{\tau}_4=3\beta/4$ so as to compute the OTOC, we then fix $\tilde{t}_1=\tilde{t}_2=\tilde{t}, \tilde{t}_3=\tilde{t}_4=0$ & $\tilde{\phi}_1=\tilde{\phi}_2=0,\tilde{\phi}_3=\tilde{\phi}_4=\tilde{\phi}$. Here we find the exponentially growing term in $\tilde{t}$ as \~ \[btzChaos\] Let us convert this back to $\{t,\phi\}$ by the Lorentzian version of (\[tildecoord\]), $i.e.$ $$\tilde{x}^\pm=\tilde{t}\pm\tilde{\phi}=\frac{x^\pm}{(1\mp\mu_L)}\implies \phi=\tilde{\phi}-\mu_L\tilde{t},\,\,\,t=\tilde{t}-\mu_L\tilde{\phi}
\label{tildecoordLor}$$ where we define the Lorentzian angular velocity as $\mu_L=i\mu=r_-/r_+$, as it would have arisen in a Lorentzian bulk geometry, thus yielding $$\frac{G_N\beta}{l}
\left[h_1h_2\cosh\left(\tfrac{2\pi(t-\phi)}{\beta_-}\right)+\bar{h}_1\bar{h}_2\cosh\left(\tfrac{2\pi(t+\phi)}{\beta_+}\right)\right]$$ Note that the transformation (\[tildecoordLor\]) is a conformal transformation of the boundary in the metric (\[Banados\_metric\]). The proper boundary coordinates along the lines of section 2 of [@Maldacena:2016upp] are $\{t,\phi\}$, which is what one must use to measure correlators in the boundary. It is clear from the last expression that the Lyapunov index for each of the left and right moving modes is governed by $\beta_\pm=\beta(1\mp\mu_L)$ instead of $\beta$. Thus the Lyapunov index for the 4pt OTOC would be $\lambda_L=2\pi/\beta_+>2\pi/\beta$ as it governs the fastest growth.\
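The bookkeeping in the last statement can be made concrete with a small numerical sketch ($\beta$ and $\mu_L$ are arbitrary sample values):

```python
import math

beta, mu_L = 2.0, 0.5        # sample inverse temperature and Lorentzian angular velocity r_-/r_+
beta_p = beta * (1 - mu_L)   # beta_+ : governs the left-moving (x^+) growth
beta_m = beta * (1 + mu_L)   # beta_- : governs the right-moving (x^-) growth

lyapunov = 2 * math.pi / min(beta_p, beta_m)   # the fastest growth wins
assert math.isclose(lyapunov, 2 * math.pi / beta_p)
assert lyapunov > 2 * math.pi / beta           # exceeds the naive 2*pi/beta rate
# the surface gravity is set by the average: (beta_+ + beta_-)/2 == beta
assert math.isclose((beta_p + beta_m) / 2, beta)
```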
\
A somewhat similar conclusion was reached in [@Stikonas:2018ane], where mutual information was computed between the left and right intervals of $|\left.\text{TFD}\right\rangle$ corresponding to a rotating BTZ. This was computed both $via$ Rényi entropy in the 2D CFT with a chemical potential $\mu$ perturbed by a heavy operator, and from the bulk by employing the Ryu-Takayanagi prescription of minimal surfaces in a shock wave background. In [@Stikonas:2018ane] the symmetry between $\beta_\pm$ is broken by the spatial arrangement of the heavy operator relative to the entangling interval in question. This arrangement is such that only the mode with the smaller temperature affects the entangling region for positive times. For a different spatial arrangement one may find that the scrambling time computed is governed by the higher of the 2 temperatures.
Results and Discussions
=======================
The bulk understanding of how the Lyapunov index comes to be $2\pi/\beta$ has so far relied on near-extremal $AdS$ black holes exhibiting an $AdS_2$ throat. In some sense the back reaction of the scalars in the bulk gets most of its contribution from this region. Any deviation in the $AdS_2$ geometry is captured by an action like the Jackiw-Teitelboim one, thus giving rise to a Schwarzian action. What we have demonstrated here, at least in $AdS_3$, is that the Schwarzian arises even when one is far away from extremality. Moreover, we find this as an effective action at the boundary of $AdS_3$ rather than at some screen in the interior. It would be interesting to investigate how such an action can be arrived at for black holes in $AdS_{d>3}$; this would indeed give some understanding of the soft modes in higher dimensional large $N$ CFTs.\
\
In the probe approximation there is an inherent conformal symmetry in the bulk emanating from the asymptotic symmetries of $AdS_3$. This corresponds to the two copies of the Virasoro algebra in the boundary CFT. An arbitrary solution to Einstein’s equations with matter would not have such a symmetry. Expanding perturbatively about the probe approximation in orders of $G_N$ ($i.e.$ back reaction) breaks this symmetry spontaneously. The action (\[bulk\_onshell\_action\_explicit\]) can therefore be seen as the action cost associated with conformal transformations at the boundary of $AdS_3$ when one tries to go away from the probe approximation to linear order in $G_N$.\
\
Extremality can be reached in the simplest possible manner by turning on charges for the $AdS$ black hole; the top-down understanding of [@Maldacena:2016upp] in such a setting was explained in [@Nayak:2018qej] in $AdS_4$. Here the authors studied a probe uncharged scalar in the bulk, thus having no dynamics for the gauge field. It would be interesting to analyse how the near horizon picture of [@Maldacena:2016upp] is reached for rotating geometries close to extremality in $AdS_{d>3}$. In [@Castro:2018ffi] 5d rotating Kerr geometries were analysed close to their near-extremal limit in the near horizon throat region. There the authors discovered a generalized JT action consisting of a dilaton and an additional scalar.\
\
In (\[btzChaos\]) we take the view that $\beta_\pm$ are complex to begin with. The complex value of $\beta_\pm=\beta \mp i \Omega$ is required to make sense of the Euclidean BTZ metric as a real quantity. Further, the $i\epsilon$[^11] prescription that we use requires us to first compute a Euclidean correlator and then analytically continue it to the desired Lorentzian times. This is similar to the technique employed in [@Stikonas:2018ane] for computing the mutual entanglement from the 2D CFT, as computing the Rényi entropy involves analytically continuing the Euclidean 4pt correlator $\langle \psi \sigma \tilde{\sigma} \psi^\dagger\rangle $[^12] to obtain the Lorentzian answer. This requires making the left and right moving temperatures real: $\beta_\pm=\beta\mp\Omega$[^13] when all Euclidean times have been put to zero. (\[btzChaos\]) would therefore yield a growth that violates the chaos bound of $2\pi/\beta$ for the Lyapunov index, due to the left moving (anti-holomorphic) modes for $x^+$, $i.e.$ $\frac{2\pi}{\beta-\Omega}$.\
\
The presence of rotation in the bulk implies a CFT with a chemical potential corresponding to angular momentum. 1d SYK and gauged-SYK models have been studied in the presence of a chemical potential [@Bhattacharya:2017vaz]. There the Lyapunov index computed was found to be bounded by $2\pi/\beta$. This accords with the intuition that holding other charges fixed makes the system less chaotic. However, the chemical potential present in such cases was associated with an internal symmetry and not a space-time symmetry.\
\
The analysis of section (4.2) for the case of rotating BTZ seems to yield a result in contradiction with the mathematical proof in [@Maldacena:2015waa]. The proof in section (4.1) of [@Maldacena:2015waa] is based on the maximum modulus principle for a bounded holomorphic function. Here we try to give a simple understanding of how one may reconcile the result of section (4.2) of this paper with that of [@Maldacena:2015waa]. The proof in section (4.1) of [@Maldacena:2015waa] relies crucially upon mapping the half strip of width $\beta$ in Euclidean time $\tau$ to a disk $via$ a conformal map w=. This map has a periodicity under $\tau\rightarrow\tau+\beta$ which is exhibited by $\frac{\langle V_1 V_2 W_3 W_4 \rangle}{\langle V_1 V_2 \rangle\langle W_3 W_4\rangle}$. For the case of rotating BTZ we must demand periodicity in the light-like directions (\[periodicity1\]) $$\bar{z}\simeq\bar{z}+\beta_+,\qquad z\simeq z+\beta_-.$$ This is borne out from the probe 2-pt functions (\[4pt0\]) computed entirely from the bulk [@KeskiVakkuri:1998nw] and also in the effective actions (\[bulk\_onshell\_action\_explicit\]) and (\[bulk\_onshell\_action\_quad\_euclid\]). Further, since at least to linear order in $G_N$ the left and right moving modes do not talk to each other, the functional dependence of $\frac{\langle V_1 V_2 W_3 W_4 \rangle}{\langle V_1 V_2 \rangle\langle W_3 W_4\rangle}$ can be assumed to be a sum of right and left movers; this is also evident from the infinitesimal action (\[bulk\_onshell\_action\_quad\_euclid\]). Therefore one can define a conformal map w\^+=,w\^-=\[LR\_strip\_to\_disk\]for each strip corresponding to the left and right moving modes. 
Now, arguments similar to those in section (4.1) of [@Maldacena:2015waa] yield that the growth on the real axis, $i.e.$ in Lorentzian time $t$, is bounded by $2\pi/\beta_+$ for the left moving contribution to $\frac{\langle V_1 V_2 W_3 W_4 \rangle}{\langle V_1 V_2 \rangle\langle W_3 W_4\rangle}$ and similarly by $2\pi/\beta_-$ for the right moving contribution. In other words, demanding periodicity of the kind (\[periodicity1\]) generalizes the analysis of [@Maldacena:2015waa] to the case at hand. Here we take the view that $\beta_\pm$ are complex to begin with and are analytically continued to the real values $\beta_\pm=\beta(1\mp\mu_L)$ in the end, after the growth in the Lorentzian time has been deduced.\
\
One could very well have guessed such a result simply by observing that the effective action (\[bulk\_onshell\_action\_explicit\]) does not mix the left and right movers, each of which has its own inverse temperature $\beta_\pm$. Therefore the maximal growth would be governed by the smaller of the two, $i.e.$ $\min[\beta_+,\beta_-]$, while the surface gravity of the bulk would be related to the average $\frac{\beta_++\beta_-}{2}$. It would be very interesting to see how these considerations would have to be modified when analysing rotating geometries in $AdS_{d>3}$ as, unlike in $AdS_3$, the bulk degrees of freedom of the metric would also participate in the dynamics.\
\
The result of section (4.2) is also validated by the analysis of mutual information for late times computed in the BTZ geometry subjected to a shock wave [@Stikonas:2018ane]. There the authors found the Lyapunov index to be related to the smaller of the two temperatures, $i.e.$ $\lambda_L=\frac{2\pi}{\beta_-}$ in the conventions of this paper. The mutual information in the $|TFD\rangle$ state corresponding to an eternal BTZ subjected to a shock wave is computed in [@Stikonas:2018ane] on the boundary by taking the limit of the Rényi entropy, and in the bulk by employing the Ryu-Takayanagi prescription of minimal area. However, the spatial arrangement of the heavy operator in [@Stikonas:2018ane] $w.r.t.$ the entangling region under consideration is such that the region only sees the effect of one of the modes.\
\
The techniques used in this paper seem particularly well suited to $AdS_3$. As mentioned before, generalizing this to higher dimensional $AdS$ black holes would be interesting; it would nonetheless be easier to analyse the rotating BTZ along the lines of [@Shenker:2014cwa] by computing bulk eikonal scattering, which seems to be an analysis suited to all dimensions[^14].\
\
To conclude, this work also suggests that if the Lyapunov index associated with rotating $AdS$ black holes in Einstein-Hilbert theory exhibits maximal chaos, then for large $N$ thermal CFTs with a chemical potential associated with angular momentum the chaos bound is $\lambda_L=\frac{2\pi}{\beta(1-\mu_L)}$. It would be interesting to find a more thorough generalization of the proof in [@Maldacena:2015waa] for large-$N$ CFTs with a chemical potential associated with angular momentum.
Acknowledgements {#acknowledgements .unnumbered}
================
The author is indebted to Gautam Mandal for fruitful discussions throughout the duration of this work. The author also benefited from earlier work with Gautam Mandal, Pranjal Nayak, Nemani Suryanarayana and Spenta Wadia. The author also acknowledges Adwait Gaikwad, Anurag Kaushal, R. Loganayagam, Pranjal Nayak, Ronak Soni, Shiraz Minwalla & M. V. Vishal for their numerous discussions during the completion of this work. The author also acknowledges Abijit Gadde for giving inputs on this work. The author also found the academic environment of the Indian Strings Meet 2018, held at IISER Thiruvananthapuram, conducive to the completion of this work.
Appendix
========
The time ordered Euclidean answer for (\[BB\]) corresponding to the rotating BTZ is &&|\_[TO]{}=&&h\_1h\_2()\^6(z\_[14]{}\^4+z\_[23]{}\^4-z\_[13]{}\^4-z\_[24]{}\^4)()().&&+()\^5&&+()\^4&&+()\^3&&.+()\^2(\^2-3)+ [c.c.]{}|\_[h\_1|[h]{}\_1,h\_2|[h]{}\_2]{} \[EuclideanTObtz\] The above expression yields a polynomial in the coordinates after analytically continuing to Lorentzian times. Similarly, the out of time ordered Euclidean answer for the rotating BTZ case is &&|\_[OTO]{}=&&h\_1h\_2()\^6(z\_[14]{}\^4+z\_[23]{}\^4-z\_[13]{}\^4-z\_[24]{}\^4)()().&&+()\^5&&+()\^4&&+()\^3&&.-()\^2&&+[c.c.]{}|\_[h\_1|[h]{}\_1,h\_2|[h]{}\_2]{} \[EuclideanOTObtz\] Introducing Lorentzian times, fixing $\tau_1=\tilde{\beta}-\mu \phi_1,\tau_3=\tilde{\beta}/4-\mu\phi_3,\tau_2=\tilde{\beta}/2-\mu\phi_2,\tau_4=3\tilde{\beta}/4-\mu\phi_4$, and further fixing $t_1=t_2=t, t_3=t_4=0$ & $\phi_1=\phi_2=0,\phi_3=\phi_4=\phi$, we find the exponentially growing term in $t$ as \~
[^1]: There is summation in $i$ and the space time integrals are suppressed for brevity.
[^2]: $\Delta_i=\tfrac{d}{2}+\sqrt{\tfrac{d^2}{4}+m_i^2l^2}$, where $l$ is the $AdS$ radius.
[^3]: This is basically the expression for the propagator in the form of a path integral for a free theory.
[^4]: Barring the truly small diffeomorphisms which we ignore.
[^5]: Throughout the text $r_\pm$ refer to the outer and inner horizons of the metric, and we will be casual in the use of $r$ as the radial coordinate in any metric.
[^6]: The primes denote derivatives $w.r.t.$ the respective coordinate dependence.
[^7]: $T_{\pm\pm}$ are components of the Brown-York stress tensor for the bulk metric, which are also the CFT$_2$ stress tensor components.
[^8]: $c$ being the central charge.
[^9]: Explicitly: $a=\left(1+\tfrac{\pi^2}{6}+\tfrac{7\pi^4}{360}\right)$ & $b=9/2$.
[^10]: If one does consider this then there will be possibly step functions multiplying each of the terms above.
[^11]: Here Euclidean time is $\tau$ instead of $\epsilon$.
[^12]: $\psi$ is the heavy operator generating the shock-wave in the BTZ while $\sigma$ is the twist operator.
[^13]: In our analysis $\beta_\pm\sim\frac{1}{\sqrt{L_\pm}}\sim\frac{1}{\sqrt{M\pm J/l}}$ are associated with $x^\pm$ respectively.
[^14]: Barring the difficulty of computing bulk to boundary propagators for rotating black holes in $AdS_{d>3}$.
---
bibliography:
- 'main.bib'
---
**6G White Paper on Machine Learning in Wireless Communication Networks**
Samad Ali[^1], Walid Saad[^2], Nandana Rajatheva, Kapseok Chang[^3], Daniel Steinbach[^4], Benjamin Sliwa[^5], Christian Wietfeld, Kai Mei, Hamid Shiri, Hans-Jürgen Zepernick[^6], Thi My Chinh Chu, Ijaz Ahmad[^7], Jykri Huusko, Jaakko Suutala[^8], Shubhangi Bhadauria[^9], Vimal Bhatia[^10], Rangeet Mitra[^11], Saidhiraj Amuru[^12], Robert Abbas[^13], Baohua Shao[^14], Michele Capobianco[^15], Guanghui Yu[^16], Maelick Claes[^17], Teemu Karvonen, Mingzhe Chen[^18], Maksym Girnyk[^19], Hassan Malik[^20]
Abstract
========
The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require a lot of innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of the vision of how ML will impact wireless communication systems. We first give an overview of the ML methods that have the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect that is discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.
Introduction {#sec:introduction}
============
Today’s technological aspirations will represent tomorrow’s reality with technologies such as holographic telepresence, eHealth and wellness applications, pervasive connectivity in smart environments, industry 4.0 and massive robotics, massive unmanned mobility in three dimensions, and augmented reality (AR) and virtual reality (VR), to name a few. Each of these is expected to require more effective and efficient wireless communications than ever before, and 6G wireless networks must provide broadband, near-instant, and reliable connectivity to enable massive data exchange at different frequencies and by using a large variety of technologies. Moreover, the evolution of technologies is towards more intelligent devices in the internet of things (IoT), which will require more reliable, efficient, resilient and secure connectivity. When the connected objects are more intelligent it becomes difficult to deal with their complexity by using the communication network in a static, simplistic and rigid manner. The same need will likely emerge for other “traditional” services such as phone calls or video streaming, where the wireless communication network will no longer just provide a connection between two or more people, but will bring the need to properly authenticate both parties, guarantee the security of data flows, and recognize possible abnormal behaviors and events. Data exchange will be, in practice, much more than just pure data exchange and will become the exchange of information, knowledge, experience, and also past, present, and possibly future properties of the data. What we can easily anticipate is the fact that larger and larger amounts of data will be transferred through future wireless communication networks, and more added-value applications and services will heavily depend on such data exchanges. 
Machine learning (ML) will represent a basic functionality for guaranteeing the efficiency of future wireless communication networks and, at the same time, can be the enabling technology for several added-value applications and services. ML on the wireless communication nodes can enable several advanced services and quality-of-service functionalities for the proposed applications.
Current wireless networks heavily rely on mathematical models that define the structure of the communication system. Such mathematical models often do not represent the systems accurately. Moreover, there are no mathematical models for some of the building blocks of wireless networks and devices, and as a result, modeling such blocks becomes challenging. On the other hand, the optimization of wireless networks also requires heavy mathematical solutions that are often not efficient in terms of computational time and complexity, and also consume a lot of energy. The above-mentioned mathematical models and solutions will most likely fall short in improving the capacity and performance of wireless networks that are expected to meet the stringent requirements that will be set by 6G applications [@walid6g]. ML, therefore, will play a crucial role in 6G wireless networks as it is capable of modeling systems that cannot be represented by a mathematical equation. Moreover, it is expected that ML tools can be used to replace heuristic or brute-force algorithms to optimize certain localized tasks. Meanwhile, it is envisioned that ML will enable real-time analysis and automated zero-touch operation and control in 6G networks. Such intelligence will rely on the availability of data streamed in a timely manner from wireless devices, especially in extreme applications, such as real-time video monitoring and extended reality (XR). To fully leverage these capabilities, the network should support ML-native agents that can be freely placed and moved to the required network locations.
Furthermore, additional ML actions or predictions could be performed by mobile devices and reported to the network to assist in decision making in resource management, making mobile devices an integral part of the infrastructure resource. 6G networks are expected to employ ML agents for multiple functions, including optimization of the radio interface, adaptive beamforming strategies, network management, and orchestration. Such functionality will require data from different domains and sources in the network. This poses additional requirements on the efficiency of data transfer to avoid the transmission and storage of massive amounts of data that may never be utilized over network management interfaces.
ML algorithms should be deployed and trained at different levels of the network: the management layer, the core, radio base stations, as well as in mobile devices, possibly with the assistance of the network itself (e.g., via configuration and/or device programmability). These new paradigms may drive the need for an ML-native and data-driven network architecture, as network functions in the network and management domains may require data from different sources. Meanwhile, physical-layer algorithms (e.g., link adaptation), as well as higher layer algorithms (e.g., mobility), can be optimized with ML agents deployed in a controlled and predictable manner. Currently, such algorithms tend to be deployed statically, whereas allowing them to change dynamically would open up enhanced performance and utilization. Moreover, allowing network configurations to be automated reduces the need for expensive hands-on human work.
This white paper provides a vision for the role of ML in wireless communications by discussing the various network problems that can utilize learning methods. A detailed look at the problems at different layers of the communications protocol stack is provided. Moreover, ML in the security of wireless networks as well as standardization activities are also discussed.
\[t!\] ![The role of ML in 6G networks.[]{data-label="general"}](general.png "fig:"){width="100.00000%"}
Machine Learning Overview
=========================
ML models are computing systems that are used to learn the characteristics of a system that cannot be represented by an explicit mathematical model. These models are used in tasks such as classification, regression, and the interactions of an intelligent agent with an environment. Once a model learns the characteristics of a system, at which point it is known as a trained model, it can efficiently perform the task using some basic arithmetic calculations. ML spans three paradigms: a) supervised learning, where the model is trained by presenting input samples and their associated outputs; b) unsupervised learning, in which there are no output labels and the model learns to classify samples of the input; and c) reinforcement learning, where an agent interacts with an environment and learns to map any input to an action. A general overview of some ML methods is provided in the following.
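To make the supervised paradigm concrete, the following minimal sketch (pure Python, with made-up data) fits a one-dimensional linear model from labelled input-output samples by closed-form least squares; once trained, prediction is indeed just basic arithmetic:

```python
# Minimal supervised learning: fit y ≈ w*x + b by closed-form least squares.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # inputs (hypothetical training samples)
ys = [1.0, 3.1, 4.9, 7.2, 8.8]   # labelled outputs, roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def predict(x):
    """The trained model generalizes to unseen inputs with simple arithmetic."""
    return w * x + b

assert abs(w - 2.0) < 0.1 and abs(b - 1.0) < 0.2
assert abs(predict(5.0) - 11.0) < 0.5
```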
Deep learning
-------------
Deep learning methods based on artificial neural networks (ANNs) have recently been able to solve many learning problems. The rise of the deep learning paradigm has mainly been fueled by the availability of sufficient computational power and access to large datasets. Many deep learning architectures exist for various tasks; in this section, we mention some of the most important ones for problems in wireless communications. Multi-layer perceptrons (MLPs) are the basic models used in many learning tasks. Convolutional neural networks (CNNs), which use convolution operations to exploit local structure and reduce the number of parameters, are often used in image recognition tasks. For learning tasks that require sequential models, recurrent neural networks (RNNs) are most suitable. Autoencoder-based deep learning models are used for dimensionality reduction, and generative adversarial networks (GANs) are used to generate samples similar to the available dataset.
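As an illustration of the most basic of these architectures, the forward pass of a small MLP can be sketched in a few lines of plain Python (the layer sizes and weights below are arbitrary illustrative values, not trained ones):

```python
def mlp_forward(x, weights, biases):
    """Forward pass of a small MLP: ReLU hidden layers, linear output layer."""
    a = x
    for layer, (W, b) in enumerate(zip(weights, biases)):
        # affine step z = W a + b
        z = [sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i
             for row, b_i in zip(W, b)]
        last = layer == len(weights) - 1
        a = z if last else [max(0.0, z_i) for z_i in z]  # ReLU on hidden layers
    return a

# a tiny 2-1-1 network: hidden unit h = relu(x0 - x1), output = 2h + 1
out = mlp_forward([3.0, 1.0], [[[1.0, -1.0]], [[2.0]]], [[0.0], [1.0]])
```

A trained network would of course learn `weights` and `biases` from data; this sketch only shows the inference arithmetic that remains once training is done.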
In the wireless communications domain, the amount of training data is still far from comparable to the huge datasets used by the big industry players for core applications of deep learning such as computer vision and speech recognition. Due to the curse of dimensionality [@Zappone/etal/2020a], deep learning models require very large training datasets in order to achieve significant performance gains over simpler models. Another limiting factor is the network heterogeneity implied by the coexistence of different mobile network operators. Although standardized dataset formats could help to establish interoperability, implementations of network management functions might differ significantly between operators. Moreover, driven by business considerations, operators might prefer to keep their acquired data confidential. These factors lead to the conclusion that deep learning alone will not be the optimal solution for all data-analysis tasks within 6G networks. Instead, a variety of application- and platform-dependent models will be required in order to enable cognitive decision making even on highly resource-constrained platforms such as ultra-low-power microcontrollers.
Probabilistic methods
---------------------
There have been many recent advances in probabilistic ML and Bayesian inference [@Ghahramani2015] that could potentially be used in 6G wireless networks. Compared to more classical frequentist methods, they provide a fundamental, probability-theory-based framework for handling prior knowledge and quantifying uncertainty, which is needed in noisy real-world learning and modeling scenarios such as data-rich 6G applications and services. Owing to their flexibility in handling small, limited, or incrementally growing datasets, non-parametric Bayesian methods such as Gaussian processes provide promising, interpretable techniques for modeling complex spatio-temporal and high-dimensional sensing and prediction problems in 6G networks. Because non-parametric models grow with the data, their computational complexity is their biggest disadvantage compared to parametric models. However, there has been a lot of work on approximation methods, such as variational Bayes, expectation propagation, and sampling approaches based on Markov chain Monte Carlo, to scale these techniques to the distributed big-data challenges of wireless communication systems.
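As a minimal sketch of how a Gaussian process interpolates observations, the snippet below computes the GP posterior mean $k_*^\top (K + \sigma^2 I)^{-1} y$ with an RBF kernel in pure Python (the kernel choice, length scale, and noise level are illustrative assumptions):

```python
import math

def rbf(x1, x2, ls=1.0):
    """Squared-exponential kernel (illustrative hyperparameter choice)."""
    return math.exp(-0.5 * (x1 - x2) ** 2 / ls ** 2)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(X, y, x_star, ls=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP regressor at test input x_star."""
    K = [[rbf(a, b, ls) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)                      # alpha = (K + noise*I)^{-1} y
    k_star = [rbf(x_star, a, ls) for a in X]
    return sum(k * al for k, al in zip(k_star, alpha))
```

Note how the posterior mean passes through the training points and reverts to the prior mean (zero) far from the data, which is the kind of calibrated extrapolation behaviour the text highlights.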
Reproducing Kernel Hilbert Space (RKHS) Methods
-----------------------------------------------

Massive connectivity requirements in 6G will result in high interference, which will become a significant performance bottleneck. Massive connectivity will also involve serving a wide range of devices of various manufacturing qualities, introducing impairments due to diverse artefacts of non-ideal hardware (nonlinear characteristics, I/Q imbalance, and others) and high mobility, especially in the context of varied industry verticals where a fixed solution may not be applicable. To fulfill the promise of 10-100 times data-rate improvement over 5G in these scenarios, reproducing kernel Hilbert space (RKHS) based solutions are particularly useful: RKHS-based methods are computationally simple, scalable, and have a significantly lower approximation error than contemporary polynomial-filtering-based approaches, even in the high-interference non-Gaussian environments that will potentially be encountered in 6G. Recently, RKHS-based approaches have emerged as an effective remedy for a variety of impairments in the context of several applications in next-generation communication systems. As a consequence, several RKHS-based methods have been proposed for problems such as detection, tracking, and localization [@rkhs1; @8966596].
Over the last few years, we have also seen deep learning heavily used in wireless communications problems, although there is a well-known concern regarding the sensitivity of deep learning based approaches to hyperparameters. In this active area of research, recent advances further improve the performance of RKHS-based approaches by extracting features through Monte-Carlo sampling of the RKHS and feeding these features to deep learning models, enhancing the performance of models used in 6G. In addition, RKHS-based deep learning approaches are found to deliver improved performance compared to classical deep learning algorithms, as these features are intrinsically regularized and supported by a strong analytical framework. Lastly, RKHS-based solutions generally have fewer hyperparameters to tune, and several “rules of thumb” exist for setting them.
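The Monte-Carlo feature extraction mentioned above can be illustrated with random Fourier features in the style of Rahimi and Recht: a randomized finite-dimensional map whose inner products approximate a Gaussian kernel. The feature dimension and bandwidth below are illustrative choices:

```python
import math, random

random.seed(0)
D = 2000                        # number of random features (illustrative)
omegas = [random.gauss(0.0, 1.0) for _ in range(D)]       # frequencies ~ N(0, 1)
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def features(x):
    """Map x to z(x) with z(x)·z(y) ≈ exp(-(x - y)^2 / 2), the Gaussian kernel."""
    return [math.sqrt(2.0 / D) * math.cos(w * x + p)
            for w, p in zip(omegas, phases)]

def approx_kernel(x, y):
    zx, zy = features(x), features(y)
    return sum(a * b for a, b in zip(zx, zy))
```

Such feature vectors can then be fed to an ordinary linear model or a deep network, which is the regularized hybrid approach described in the paragraph above.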
Federated Learning
------------------
Traditional centralized ML algorithms [@8755300] require mobile devices to transmit their collected data to a data center for training purposes. Due to privacy concerns and communication overhead, it is impractical for all wireless mobile devices to transmit their local data for training ML models. Federated Learning (FL) is a distributed ML approach that enables mobile devices to collaboratively learn a shared ML model without exchanging data. In FL, each mobile device and the data center have their own ML models. The ML model of each mobile device is called the local FL model, while the ML model of the data center is called the global FL model. The training process of FL can be summarized as follows:
- Each mobile device uses its collected data to train its local FL model and sends the trained local FL model to the data center.

- The data center aggregates the local FL models to generate the global FL model and broadcasts it back to all mobile devices.

- The previous two steps are repeated until the FL loss function is minimized and the optimal FL models are found.
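The steps above can be sketched as a FedAvg-style loop; here each "local FL model" is just a one-parameter linear regressor, an illustrative stand-in for a real neural model:

```python
def local_train(w, data, lr=0.1, epochs=5):
    """Step 1: a device fits its local FL model (here y ≈ w·x) by gradient descent."""
    for _ in range(epochs):
        grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, devices):
    """Steps 2-3: the data center averages the local models into the global model."""
    local_models = [local_train(w_global, d) for d in devices]
    return sum(local_models) / len(local_models)

# two devices whose local data follow the same underlying rule y = 3x
devices = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (1.5, 4.5)]]
w_global = 0.0
for _ in range(30):
    w_global = federated_round(w_global, devices)
```

Only the model parameter travels between devices and the data center; the raw `(x, y)` samples never leave the device, which is the privacy property FL is built around.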
From the FL training process, we can see that mobile devices must transmit the training parameters over wireless links. Hence, imperfect wireless transmission, dynamic wireless channels, and limited wireless resources (e.g., bandwidth) will significantly affect the performance of FL. Consequently, a number of existing works such as [@chen2019joint] and [@chen2020convergence] have studied the optimization of wireless networks for the implementation of FL. Meanwhile, since FL enables mobile devices to collaboratively train a shared ML model without data transmission, it has been studied for solving wireless communication problems such as intrusion detection [@ferdowsi2019generative], orientation and mobility prediction, and extreme event prediction.
Reinforcement Learning
----------------------
In a reinforcement learning problem, an agent interacts with an environment and learns how to take actions. At every step of the learning process, the agent observes the state of the environment, takes an action from the set of available actions, receives a numeric reward, and moves to the next state. The agent aims to maximize the long-term cumulative reward. Many wireless problems, such as resource allocation, can be formulated as reinforcement learning problems. Neural networks can be used in reinforcement learning as function approximators to learn the rewards generated by the environment or the values of each state. Various deep reinforcement learning architectures can be used to solve problems in wireless networks such as power control, beamforming, and modulation and coding scheme selection. One major limitation of RL is its heavy reliance on training. However, there have been some recent advances towards reducing this reliance, particularly when dealing with extreme network situations. In particular, the concept of *experienced deep reinforcement learning* was proposed in [@MM], in which RL is trained using GANs that generate synthetic data to complement a limited, existing real dataset. This work has shown that, by gaining experience, deep RL can become more robust to extreme events and faster to recover and converge.
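The agent loop described above can be illustrated with tabular Q-learning on a toy chain environment; states, rewards, and hyperparameters are invented for the sketch (a deep RL agent would replace the Q-table with a neural network):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain MDP: action 1 moves right, action 0 moves
    left; reward 1 is given only on reaching the rightmost (terminal) state."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
greedy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(4)]  # learned policy
```

After training, the greedy policy moves right in every state, i.e., the agent has discovered the shortest path to the reward purely from interaction.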
In this section, we have tried to answer the following research questions:
- Which ML methods will play a major role in 6G wireless networks?
- Which areas of 6G wireless networks will use deep learning?
- Why will deep reinforcement learning be one of the major components of the automation of 6G wireless networks?
- How can the goal for open data access be brought together with business-oriented mobile network operator interests?
- How can models be efficiently transferred to highly resource-constrained platforms?
- How to dynamically select and deploy application- and platform-dependent models?
ML at the Physical Layer {#phymac}
========================
In recent years, ML has begun to penetrate all walks of life, including wireless communication. The physical layer of traditional wireless communication is generally designed based on mathematical models, with several major modules modeled and optimized separately. This design method can adapt to the fast time-varying characteristics of the physical layer, but some nonlinear factors in the physical layer often cannot be modeled. Research on using ML in the physical layer of wireless communication has been carried out in recent years [@dunduzair] and has made some progress, so it is necessary to integrate ML into the physical layer of 6G wireless systems. There are several levels at which ML can be integrated into a 6G wireless communication system. The first level is ML for specific functions: we should first consider using ML to replace functions that are not well solved at present, for example interference detection and mitigation, uplink-downlink reciprocity in FDD, and channel prediction. These problems persist in current communication systems due to the lack of accurate models or due to non-linearity. The second level is to upgrade the existing discrete modules. The traditional design of each module is generally based on a linear model, so once the system encounters strong non-linear factors, its performance declines sharply. The third level is the joint optimization of the modules in the physical layer. As mentioned above, the traditional physical layer is divided into modules that are optimized separately; for example, coding, modulation, and waveform are designed separately. Once the three are considered together, the complexity of the receiver is often too high for it to be optimized as a whole. With ML, however, it is not necessary to carefully design all kinds of coding schemes, nor to enumerate all kinds of constellations: the best end-to-end mapping can be obtained automatically by learning. Which modules in the physical layer to jointly optimize with ML is a future direction worth exploring. The fourth level is the integration of ML with existing model-based methods. Although the traditional model-based method is sometimes over-idealized, it can still describe the main characteristics of a process. If existing model features are used in the design of the ML system and fed to it as additional information, it is likely that some inherent shortcomings of ML can be overcome, such as the need for huge training data, underfitting or overfitting, and slow convergence.
The above discussion provides an overview of how ML will be used in the physical layer of the communication systems. In the following, we provide a detailed view of some of the problems in the physical layer that can benefit from ML methods.
Research Areas
--------------
Some of the major research areas of the ML-driven PHY layer include channel coding, synchronization, positioning, and channel estimation. This section defines each of these items and discusses their technical trends and prospects.
### Channel Coding
Channel coding is needed to overcome wireless channel imperfections, i.e., to correct errors that may occur on the channel. Since the discovery of the Turbo code, which approaches the Shannon limit, channel codes such as low-density parity-check (LDPC) and polar codes have been actively studied. The recent research direction for channel codes has been moving towards enabling rapid encoding and decoding to support low-latency services, while also providing high-fidelity error-correction capability. To tackle complex channel coding problems, studies have applied deep learning to channel coding [@NachmaniLearning], [@AskriDNN]. To replace the channel coding portion of a communication system with deep learning, learning is required for codeword lengths of at least hundreds of bits (a control-channel assumption), while the lengths handled so far remain at tens of bits (16 or 32 for polar codes). In other words, it is difficult to predict whether such code lengths can actually be learned, and how large the benefits will be in terms of computational complexity and time compared to the currently commercialized state of the art. The difficulty of increasing the code length is that the space of codes to learn grows exponentially with the code length (e.g., to learn a length-$n$ codeword, $2^n$ cases must be learned). There are several attempts to overcome this problem (e.g., learning on the all-zero codeword), but there are no clear findings yet. Moreover, in order to be applied to actual systems, schemes should be reviewed not only for performance but also for the time spent in decoding and other aspects. For example, it is necessary to chart optimal and sub-optimal schemes in terms of both performance and time. In addition, these trade-offs may vary not only over time but also with the service using the channel code or with the parameters a system considers important (such as computational complexity).
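The exponential blow-up discussed above is easy to see in a brute-force maximum-likelihood decoder for a toy linear block code (the generator matrix below is an arbitrary example, not a standardized code): decoding searches all $2^k$ messages, which is the same space a learned decoder must implicitly cover.

```python
from itertools import product

def encode(u, G):
    """Systematic encoding c = uG over GF(2)."""
    return [sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G)]

# generator matrix of a toy (6, 3) linear block code with minimum distance 3
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]

def ml_decode(r, G, k=3):
    """Brute-force ML decoding over a hard-decision channel: try all 2^k
    messages and pick the codeword closest in Hamming distance. This 2^k
    enumeration is the blow-up that makes learning long codes hard."""
    return list(min(product([0, 1], repeat=k),
                    key=lambda u: sum(a != b for a, b in zip(encode(list(u), G), r))))
```

With $k = 3$ the search visits 8 candidates; at $k = 32$ it would already be $2^{32} \approx 4.3$ billion, which is why exhaustive coverage of long codes is infeasible for both brute-force decoders and naively trained neural ones.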
### Synchronization
In general, all devices must go through time/frequency and cell synchronization procedures without exception, so synchronization that meets system requirements is the starting point for all standards, including 4G long-term evolution (LTE) and 5G new radio (NR). Accordingly, it is pivotal to have synchronization technology that meets the system requirements on synchronization accuracy even in the worst radio channel environment, the fastest mobile environment, and the highest carrier frequency offset (CFO) environment. An end-to-end autoencoder (AE) based communication system is likely to achieve globally optimal performance, with the possibility of implementing the communication system as an end-to-end deep neural network including the transmitter, channel model, synchronization using a sync signal (SS) as reference, and receiver [@dunduzair]. However, in the presence of sampling timing offset (STO) and sampling frequency offset (SFO) between transmitter and receiver, it is still too early to perform both synchronization and signal detection/decoding with a single AE-based deep neural network. Instead, recent research has introduced deep learning techniques that use the SS separately from signal detection/decoding [@WuDeep], [@SchmitzA], forward error correction (FEC) based synchronization [@ChadovMachine], and classification-based synchronization [@LiUnsupervised].
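The SS-based correlation that such methods build on can be sketched as a simple sliding correlator over a noise-free toy signal (the sync sequence and offset below are illustrative):

```python
def estimate_timing_offset(rx, ss):
    """Slide the sync signal over the received samples and return the lag
    with the highest correlation (noise-free toy example)."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(len(rx) - len(ss) + 1):
        corr = sum(r * s for r, s in zip(rx[lag:lag + len(ss)], ss))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

ss = [1, -1, 1, 1, -1]          # illustrative sync sequence
rx = [0, 0, 0] + ss + [0, 0]    # SS embedded at sample offset 3
```

A learned synchronizer would replace or augment this correlator, e.g., by classifying the offset directly from the received samples, but the correlation over the SS length is the baseline it must beat.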
### Positioning
Current mobile positioning technology identifies the location of users in indoor or outdoor environments based on various signals received from mobile devices or wireless channels, using a mathematical approach. However, a fundamental problem with the mathematical approach is that non-line-of-sight (NLoS) multi-paths cause high positioning errors. As a means of solving this problem, most recent ML methods have been based on deep neural networks. To date, the deep learning technology applied to positioning has mostly targeted indoor environments and builds on existing fingerprinting methods, training a deep learning model on fingerprints and then applying it. Received signal strength (RSS), channel state information (CSI), or hybrid information is used as input data for fingerprint-based deep learning. The training and evaluation of most deep learning-based positioning techniques have been carried out in ideal, fixed experimental environments. There is no guarantee that a deep learning model that performs best in space A will also perform well in space B. Therefore, it is necessary either to develop a learning model that is robust to changes in the environment, or to build the best learning model for each environment. In real-world environments, input data may not be as clean as during training (e.g., missing RSS information, a turned-off bulb when using a light sensor, temperature changes, or environmental changes caused by people and objects not considered during training). Therefore, it is necessary to develop and analyze learning models that still operate when the input data change. Moreover, most positioning systems have been evaluated considering only a single target; in practice, the actual system will experience interference in environments with multiple people or multiple devices. 
Therefore, the performance difference of a given deep learning-based positioning technique between the experimental and the actual environment must be analyzed in the course of the research. Through this analysis, a deep learning technique with a large positioning gap should evolve by means of adaptive techniques (e.g., combined with signal processing) to adapt to the actual environment, while a technique without a significant gap in itself should evolve by coupling online learning with offline learning.
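A minimal nearest-fingerprint baseline, of the kind such deep models extend, might look as follows (the radio map and RSS values are hypothetical):

```python
import math

def locate(rss, radio_map):
    """Return the surveyed position whose stored fingerprint is closest
    (Euclidean distance) to the measured RSS vector."""
    pos, _ = min(radio_map.items(), key=lambda kv: math.dist(kv[1], rss))
    return pos

# hypothetical radio map: position -> RSS (dBm) from three access points
radio_map = {(0, 0): [-40, -70, -80],
             (0, 5): [-55, -50, -75],
             (5, 5): [-75, -55, -45]}
```

This nearest-neighbour lookup is exactly where real-world fragility shows up: a missing RSS entry or a shifted propagation environment silently changes the distances, which motivates the robustness analysis called for above.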
### Channel Estimation
In many communications standards, including LTE and 5G NR, channel estimation is an indispensable module that provides information about how the channel distorts the transmitted signal. Linear minimum mean square error (LMMSE) estimation has optimal performance under the condition that the channel is linear and stationary, but real channels may be non-linear and non-stationary. Under such complicated channel conditions, the analytical form of the optimal estimator is hard to derive. On the other hand, deep learning based channel estimation can be optimized by training the neural network even in complicated channel environments. Moreover, channel estimation can be realized together with other modules, e.g., equalization [@8052521], in a single DNN. Hence, the separate modules of conventional communication systems can be jointly optimized to achieve better performance. Nevertheless, the existing deep learning based channel estimation techniques share one common shortcoming: since the DNN has to be trained offline because of the long training period and large training data required, mismatches between the real channels and the channels seen in the training phase may cause performance degradation. In future research, online training and the construction of training data that match real-world channels may be promising approaches to overcome this problem.
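For the scalar real-valued pilot model $y = hp + n$, the least-squares and LMMSE estimators reduce to one line each; the sketch below shows how LMMSE shrinks the LS estimate as the noise grows (the numeric values are illustrative):

```python
def ls_estimate(y, p):
    """Least-squares estimate of h from one pilot observation y = h*p + n."""
    return y / p

def lmmse_estimate(y, p, var_h, var_n):
    """Scalar LMMSE estimate: the LS solution shrunk according to the SNR.
    var_h is the prior channel variance, var_n the noise variance."""
    return (var_h * p / (var_h * p ** 2 + var_n)) * y
```

At zero noise the two coincide; as `var_n` grows, the LMMSE estimate is pulled toward the prior mean (zero), which is the optimal linear trade-off only when the assumed statistics hold — the mismatch that deep learning based estimators try to sidestep.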
### Beamforming
At the physical level, *intelligent* beamforming and smart antenna solutions can also contribute greatly to guaranteeing performance, stabilizing throughput, reducing sensitivity to interference, extending coverage, enabling highly mobile applications, and reducing energy consumption. We have already witnessed the evolution of antenna technologies from relatively *dumb* and basic antennas to more advanced *active* antennas that include progressively more *intelligence* to map knowledge of the *environment* and guarantee the optimization of the radio links. This evolution is already an integral part of 5G communication and will be boosted further in 6G, where all elements in the communication chain will be characterized by some level of intelligence, or at least the capacity to operate in an optimal manner after some degree of training. Again, at this level, ML (and more specifically deep learning) can represent the optimal solution to support adaptive and real-time massive MIMO beamforming, follow mobility patterns to capture structural information of the radio channels, coordinate beams with neighboring base stations, properly allocate power, adapt emission patterns for mobile devices, and exploit beamforming for added-value services. Dedicated hardware, in addition to dedicated algorithms, can help implement efficient machine learning solutions to support a new generation of intelligent beamforming and smart antennas.
### Physical Layer Optimization with ML
At the physical layer, many optimization problems are non-convex, e.g., maximizing throughput by means of power control, multiuser spectrum optimization in multicarrier systems, optimization of spectrum sensing for cognitive radios, and optimal beamforming formulated as a sum-rate maximization problem under a total power constraint, to name only a few. This type of problem may be solved using dual decomposition techniques that require iterative algorithms, which in turn often cannot be computed in real time due to their high computational load. To alleviate the high computational complexity and resulting latency of existing iterative algorithms, heuristic solutions have been proposed for some physical layer problems such as beamforming design. Although heuristic solutions can be obtained with low computational delay, this benefit comes at the expense of performance loss. On the other hand, deep learning techniques have great potential to find solutions to those problems in real time while maintaining good performance. As such, deep learning is a powerful technique for designing, enhancing, and optimizing one or multiple functions in the physical layer for 6G, including CNNs for signal classification and DNNs for channel estimation and signal detection.
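As a concrete example of such a physical-layer power-control problem, the classic water-filling allocation (the convex benchmark an ML agent might be trained to approximate) can be solved by bisection on the water level:

```python
def water_filling(gains, total_power, iters=100):
    """Allocate p_i = max(0, mu - 1/g_i) to maximize sum_i log2(1 + g_i * p_i)
    subject to sum_i p_i <= P, by bisecting on the water level mu."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        # if this water level spends more than the budget, lower it
        if sum(max(0.0, mu - 1.0 / g) for g in gains) > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - 1.0 / g) for g in gains]

p = water_filling([1.0, 0.5], 1.0)   # the stronger channel receives more power
```

An ML-based controller would be trained to emit allocations close to this optimum but in a single forward pass, trading the iterative search for inference-time speed.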
Recent research on physical layer optimization that exploit ML includes a deep learning framework for optimization of multi-input multi-output downlink beamforming [@hans1]. The CNN-based solution takes expert knowledge into account such as uplink-downlink duality as well as the known structure of the optimal solutions. The proposed beamforming neural network (BNN) is shown to achieve a good trade-off between performance and computational complexity. Open questions in this context include providing solutions for imperfect CSI and multi-cell scenarios.
In cases where joint optimization of functional blocks at the physical layer is considered and the channels are too complex for modeling, deep learning models are the best solutions for achieving performance improvement. Conventionally, pilot-based channel estimation and channel-estimate-based signal detection are executed separately, one after the other. In [@hans2], by considering the channel as a black box, a fully connected DNN with five layers is implemented for joint channel estimation and detection. The received signals corresponding to both the transmit signals and the pilots are taken as inputs of the DNN, which recovers the transmit signals as outputs. This DNN has been shown to be more robust to the number of pilots than conventional methods and is able to address complicated channel distortions. Future directions in physical layer optimization with ML center around the autoencoder paradigm introduced in [@TimOSheaAutoEnc], which aims at a deep learning-based end-to-end physical layer architecture. In this approach, transmitter and receiver components are jointly optimized in the presence of a given channel: designing the communication system is treated as an end-to-end reconstruction optimization task, and the autoencoder jointly learns transmitter and receiver realizations without the need for expert knowledge or hand-crafted modules. Given the complexity of building end-to-end physical layers, it is currently more feasible to exploit deep learning techniques for designing, enhancing, and optimizing one or multiple functions in the physical layer for 6G.
Implementation of ML At the Physical Layer
------------------------------------------
While power, cost, and size are always considerations in implementations of neural networks, they are of extreme importance when implementing ML algorithms in user equipment (UE) or at the cell edge. Additional considerations during simulation and prototyping of ML in UE devices need to be taken into account to optimize the physical realization of designs. Implementations may be software-centric early in the design phase, but the only way to achieve the expected battery life while processing real-time data is to migrate to a hardware-centric solution. The following sections outline the three main phases of development and the expected requirements of an artificial neural network (ANN) during those stages. It is expected that training will occur during the simulation and prototype phases. For a final product, the feed-forward network will be ported to the physical device, where weights are still software-defined but other features may be fixed in the hardware design.
### Simulation {#simulation .unnumbered}
The first stage of development of a wireless modem is typically a software simulation of the physical layer transmitter and receiver. The air interface is simulated with a channel model that tries to recreate real-world conditions such as noise, fading, multipath, Doppler spread, and path loss. Various aspects of the receiver can be implemented in an ANN, as discussed in this paper. At this point, ML will take place in ANNs where the number of nodes, layers, and connections, the activation functions, and the back-propagation loss functions all need to be flexible while the network trains. During this initial stage, the many parameters and characteristics of the ANN will need to be identified, with trade-offs between performance and physical resources. Even though training of an ANN is not performed in real time, performance considerations are still important since there are practical limits on how long simulations can run. Offloading ML algorithms from a Central Processing Unit (CPU) to a Graphics Processing Unit (GPU) can increase performance by 10 to 100 times [@Kayid2018]. In addition, dedicated ANN accelerators can improve performance even more, but they are not always suited to supporting the back-propagation required for training [@Chen2014].
In order to train an ANN, many different channel models need to be employed and run in a Monte-Carlo style simulation with multiple trials. Each trial, run with a different random seed, can be fairly complex to generate and take hours to run, since the model simulates impairments at the symbol rate. How well the ANN models real-world conditions depends upon the quality and diversity of the channel models. For illustrative purposes, if we have 30 channel models, each run 20 times with randomized data, and each simulation takes 8 hours, the result is 4800 hours or 200 days of run time. This shows that these simulations need to run in parallel on a high-end grid or cloud-based engine, and that we want to reduce simulation time by offloading the ANN to specialized hardware. One big task during simulation is to identify the structure and features of the neural network. If we want to compare the performance of several activation functions or vary the number of connected nodes in each layer, it is clear that the computing resources required in the simulation stage are vast.
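The back-of-the-envelope calculation above is simply:

```python
def total_runtime_days(n_models, trials_per_model, hours_per_trial):
    """Sequential wall-clock time for the Monte-Carlo training campaign."""
    return n_models * trials_per_model * hours_per_trial / 24.0

days = total_runtime_days(30, 20, 8)   # the example in the text: 200 days
parallel = days / 100.0                # e.g., fanned out over 100 cloud nodes
```

Even a modest degree of parallelism (the 100-node fan-out is an illustrative assumption) brings the campaign from months down to days, which is why grid or cloud execution is treated as a requirement rather than an optimization.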
A big part of the design of any ML algorithm in the physical layer is determining the inputs to the ANN. Filtered outputs such as channel estimator data, FFT output, pilot symbols, or possibly uniquely filtered data are all candidates as inputs to the ANN. Raw I/Q samples would likely overwhelm any reasonably sized ANN and cause convergence to take far too long, if it happens at all. Hooks into the processing stack are required to bring out any raw data that is needed as input to the ANN. Outputs such as BLER, BER, SINR, and CRC validation will also need to be fed back into the loss function.
### Prototyping {#prototyping .unnumbered}
After simulation, a prototype platform will typically be developed utilizing a Field Programmable Gate Array (FPGA) as the main processing engine [@Shawahna2019]. It is desirable to be able to run the platform in real time, or at least at a scaled-down rate such as 1/2 or 1/4 of the real-time sampling rate. We want to be able to transmit and receive over the air in the band of interest so that we are not limited to training with predefined channel models as in the simulation stage. In this case, ANNs can be trained over a wide set of conditions including varying distance, rural or urban environments, speed, and weather. It is important to be careful that, when training in one scenario, the ANN does not “forget” previous scenarios. For example, the system may adapt well to a rural environment, but after subsequently training in an urban environment, the performance in the rural environment may suffer [@Kirkpatrick2016].
There are IP cores that can be synthesized into an FPGA to implement a DNN [@PG338]. These cores, such as Xilinx’s Deep Learning Processor Unit (DPU), are highly configurable, allowing the user to allocate resources such as DSP slices, block RAM, UltraRAM, and the convolutional architecture. However, these settings only allow choosing from a fixed set of possible architectures, so an extremely efficient design tailored to exactly what is required is not possible. There are also now chips such as the Xilinx Versal [@Foxton2019] with up to 400 inference engines built into the FPGA, which will allow for a lot of flexibility and speed in the design.
There is also an open-source framework for accelerating deep neural networks on FPGAs called DnnWeaver (dnnweaver.org). The framework lets a developer specify the ANN architecture at a high level, and the tool then automatically generates Verilog code. It is also platform independent, so it is not tied to any one manufacturer.
With the end goal of an efficient ASIC, once acceptable performance is found, the ANN has to be analyzed for optimization. It has been shown [@Chen2014] that reducing the number of bits in fixed-point multipliers, even from 32 bits to 16, can result in only a very small performance loss while using almost 1/8th of the power and die area. Even quantization to 8 bits can result in little inference loss [@Chen2020]. Weights that are close to zero can be pruned, saving memory in addition to computational resources, with minimal accuracy loss [@Chen2020]. The assumption is that the number of nodes and layers in the ANN will not change significantly when porting the design to an Application Specific Integrated Circuit (ASIC).
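Uniform 8-bit quantization of weights, as discussed above, can be sketched as follows (the weight values are arbitrary, and real fixed-point designs would also quantize activations and accumulators):

```python
def quantize(w, bits=8, w_max=1.0):
    """Uniform symmetric fixed-point quantization of a weight in [-w_max, w_max]."""
    levels = 2 ** (bits - 1) - 1          # 127 representable steps for 8 bits
    q = round(w / w_max * levels)
    q = max(-levels, min(levels, q))      # clip to the representable range
    return q * w_max / levels

weights = [0.731, -0.252, 0.004, 0.999, -0.618]   # arbitrary example weights
q8 = [quantize(w) for w in weights]
max_err = max(abs(w - q) for w, q in zip(weights, q8))
```

The worst-case in-range error is half a quantization step (about 0.004 here), which is small relative to typical weight magnitudes and explains why 8-bit inference loses so little accuracy while shrinking multiplier power and area.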
### Product Phase {#product-phase .unnumbered}
Any final product with an ANN facilitating physical-layer processing will have to place hard limits on the number of nodes, layers and bits in fixed-point MAC operations. Once the design is ported to an ASIC, a fully trained ANN will be imported into the design. However, some flexibility in updating the network must remain so that weights and some connection information can be updated through software downloads.
Design considerations have to be made regarding which inputs and outputs will be available to and from the ANN. Placing the ANN on a separate co-processor requires moving data off chip, which can consume more than the available timing budget. Any ANN would therefore have to be treated like any other physical-layer processing block, where the data is readily available and the neural net is part of the processing chain.
Future Directions
-----------------
Over the next 10 years, deep learning technology is expected to be employed in the wireless transmission of 6G mobile communication infrastructure, with practical online learning carried out, as part of the model trimming approach, to overcome the performance gap between the wireless channel models used for training and the actual wireless channel environment.
\[t!\] ![Impact and uncertainty regarding deep learning-driven PHY-layer technologies.[]{data-label="fig:1"}](chang2.pdf "fig:"){width="10.1cm"}
More specifically, we first predict the impact and uncertainty of deep learning-driven PHY-layer technologies, as shown in Fig. \[fig:1\]. The classification criteria in this figure are as follows. Performance degradation due to the mismatch between offline learning and the actual wireless channel environment is expected to be relatively high for positioning. For channel coding, which assumes the radio channel has been compensated, the residual effect of the radio channel is reduced to a form of colored noise. Synchronization does not perform channel estimation itself but correlates over a span longer than the synchronization signal for a given radio channel, and so may be less affected by environmental mismatch. Positioning, on the other hand, is directly tied to the nature of the radio channel and is therefore more likely to be affected by environmental mismatch.
\[ht bp\] ![Research direction regarding deep learning-driven PHY-layer technologies.[]{data-label="fig:2"}](chang1.pdf "fig:"){width="10.1cm"}
Secondly, as shown in Fig. \[fig:2\], we outline the research directions for deep learning-driven PHY-layer technologies. From now until $x$ years[^21], two directions are expected. The first is Multi-Offline Learning and Adaptation (MOLA), which performs offline learning on a number of channel models in advance, stores the resulting models in the system, and monitors the actual radio channel characteristics to apply the most appropriate offline-learned model. The second is Single Offline Learning and Online Learning (SOLO), which identifies the performance sensitivity of each radio channel characteristic, applies offline learning based on the least sensitive factors to the actual system, and uses online learning to adapt to the actual radio channel characteristics. After $y$ years, either MOLA or SOLO is expected to be used depending on the radio channel situation. The classification criteria in this figure are as follows. MOLA will require vast databases and memory and is therefore expected to take a long time, but it should be effective in some wireless channel environments. In radio channel environments not covered by MOLA, SOLO is expected to be applied in a semi-optimal manner; however, it should not be ruled out that, if SOLO can match MOLA in both performance and implementation terms, the field may eventually converge on SOLO alone.
In this section, we have tried to answer the following research questions:
- How will ML methods impact the physical layer of communication systems?
- Which problems in the physical layer could be solved by ML and what methods are most suitable?
- Is end-to-end communication system design possible using deep learning?
- How will deep learning be used to optimize the physical layer?
- What are the implementation issues of using ML methods in the physical layer?
- How does deep learning-based physical layer optimization perform on real data obtained from test-beds that operate in actual physical environments?
- How does deep learning-based physical layer optimization perform under imperfect CSI, channel correlation, and other imperfections?
- How can deep learning-based physical layer optimization be combined with the intelligence in upper layers?
- How can the training workload of deep learning models for physical layer optimization be reduced by using recent advances in deep learning such as domain adaptation and transfer learning?
- How can training data requirements be reduced by applying advances in generative adversarial networks to generate artificial data in the physical layer?
ML at the Medium Access Control Layer {#mlmac}
=====================================
The medium access control (MAC) layer of a cellular network performs tasks such as user selection, user pairing for MIMO systems, resource allocation, modulation and coding scheme selection, power control of uplink transmissions, random access, and handover control. Given the complexity of these problems, several heuristic algorithms are currently in place, and no optimal solutions exist for real environments. ML tools must therefore be leveraged to significantly enhance the MAC scheduler and provide substantial gains in real environments. Since optimal solutions are not available for supervision, careful thought must go into how to train ML models for these problems. Reinforcement learning frameworks are most suitable, as they let the network adapt to varying user conditions, such as channel conditions, and learn optimal strategies. For example, a scheduler must learn to predict buffer traffic characteristics, speed, and channel variations over time, and use these predictions to make intelligent scheduling decisions. Care must be taken because the state-action space can grow very large in such a situation. Intelligent deep reinforcement learning algorithms that can deal with combinatorial action spaces and multi-agent environments must be explored for these problems. In the following, we provide some use cases for which ML can be used in MAC-layer communications.
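As a minimal illustration of this reinforcement learning framing, the sketch below runs tabular Q-learning on a toy scheduler with two users, where the state is which user currently has a good channel and the reward is the achieved rate. All quantities are hypothetical; a realistic scheduler faces the far larger state-action spaces discussed above and would need deep RL.

```python
import numpy as np

# Tabular Q-learning for a toy two-user scheduler: state = index of the
# user with a good channel, action = which user to schedule, reward = the
# achieved rate. All numbers are illustrative.
rng = np.random.default_rng(1)
N_STATES, N_ACTIONS = 2, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def rate(state, action):
    return 1.0 if action == state else 0.1  # good channel gives a high rate

s = int(rng.integers(N_STATES))
for _ in range(5000):
    # epsilon-greedy exploration over which user to schedule
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = rate(s, a)
    s_next = int(rng.integers(N_STATES))    # channels vary independently
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

policy = np.argmax(Q, axis=1)  # learned: schedule the good-channel user
```

The learned greedy policy schedules whichever user has the good channel, recovering the opportunistic-scheduling intuition without any explicit model of the channel.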
FL for Orientation and Mobility Prediction in Wireless Virtual Reality Networks
-------------------------------------------------------------------------------
An elegant and interesting use of FL for solving a wireless communication problem is presented in [@8851408] for minimizing breaks in presence (BIPs) that can detach virtual reality (VR) users from their virtual world. The model in [@8851408] considers a set of base stations (BSs) serving a set of wireless VR users over both uplink and downlink. The VR users can operate at both mmWave and sub-6 GHz frequencies: the sub-6 GHz band carries the uplink tracking information, while the mmWave band carries the downlink VR images. Unlike existing VR works such as [@8717714; @8648419; @8395443], which assume static VR users, the users’ locations and orientations in [@8851408] affect the BIPs of each VR user. Since the mmWave band is used for VR image transmission, the blockage effect caused by the human body is considered. The goal of [@8851408] is therefore to minimize the BIP of each VR user by adjusting user association, which requires proactively determining each user’s orientation and mobility.
A federated echo state network (ESN) prediction algorithm is used to proactively determine the users’ orientations and mobility. The input of the federated ESN is the historical orientations and mobility of each user; the output is the user’s future orientations and locations. At each training iteration, each BS only needs to transmit the ESN parameters to the other BSs, without transmitting the users’ orientation and mobility data. Once training is complete, the federated ESN can predict the locations and orientations of each user, and the BSs can then optimize user association so as to minimize the BIP of each user and enhance VR quality-of-experience.
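The parameter-only exchange at the heart of this scheme can be sketched as follows: each BS fits a local predictor on its own users' traces, and only the fitted parameters are averaged across BSs (FedAvg-style), never the raw mobility data. A linear least-squares readout stands in for the ESN here; all names and numbers are illustrative assumptions.

```python
import numpy as np

# Federated averaging sketch: each BS fits a local mobility predictor and
# only model parameters are exchanged and averaged; raw user traces stay
# local. A linear readout stands in for the ESN.
rng = np.random.default_rng(2)

def local_fit(history, future):
    """Least-squares readout mapping past positions to the next one."""
    w, *_ = np.linalg.lstsq(history, future, rcond=None)
    return w

def federated_round(bs_weights):
    """Equal-weight parameter averaging across BSs (FedAvg style)."""
    return np.mean(bs_weights, axis=0)

true_w = np.array([0.7, 0.3])  # next position = weighted past positions
bs_weights = []
for _ in range(2):  # two BSs with private traces of the same motion model
    h = rng.normal(size=(100, 2))
    f = h @ true_w + 0.01 * rng.normal(size=100)
    bs_weights.append(local_fit(h, f))
global_w = federated_round(bs_weights)
```

Each BS thus contributes to a shared mobility model while its users' location traces never leave the BS.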
Predictive Resource Allocation in Machine-Type Communications
-------------------------------------------------------------
Most Internet-of-Things (IoT) applications have devices that are stationary or of low mobility and the traffic originating from such IoT devices has specific patterns, therefore, ML-based predictive resource allocation is possible by using the so-called “fast uplink grant” [@LTE14Outlook]. Such a predictive resource allocation will decrease the latency of the network and alleviate problems associated with the random access process for machine-type communications (MTC) [@samad_commag]. Some initial results and directions on the predictive resource allocation for MTC are presented in [@samad_globecom]. However, there are many open problems to be solved. The first is to study various types of data traffic originating from MTC and to find the right tools to solve the source traffic prediction problem. For example, event-driven traffic prediction requires sophisticated ML tools for event detection and traffic prediction. Second is optimal predictive resource allocation using online decision-making tools in various systems such as Non-Orthogonal Multiple Access (NOMA), massive MIMO, and cell-free massive MIMO. Therefore, it is clear that ML plays a big role in enabling predictive resource allocation mechanisms in future MTC networks.
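For the common case of periodic sensor reports, a fast-uplink-grant predictor can be as simple as estimating the reporting period from past arrivals and pre-allocating the next grant accordingly; the sketch below is a hypothetical minimal example, not the method of the cited works.

```python
import numpy as np

# Hypothetical minimal predictor for periodic MTC traffic: estimate the
# reporting period from observed arrivals and schedule the next uplink
# grant just before the predicted arrival.
def predict_next_arrival(arrivals):
    period = np.median(np.diff(arrivals))  # robust to jittered reports
    return arrivals[-1] + period

arrivals = np.array([0.0, 10.1, 20.0, 29.9, 40.05])  # ~10 s sensor reports
next_grant_time = predict_next_arrival(arrivals)      # about 50 s
```

Event-driven traffic, as noted above, would instead require learned models that detect and anticipate events rather than a simple period estimate.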
Predictive Power Management
---------------------------
Energy consumption is one of the crucial factors in the design of wireless networks. Besides the environmental factor, the requirement that IoT devices have a long battery life is a key driver for the continued exploration of energy conservation techniques in future 6G networks. Energy conservation can be performed at different layers of the system; at the MAC layer it is considered most effective because of the direct control of the radio, which consumes the most power. Using ML techniques to predict traffic and segregate packets by priority can therefore improve the performance of adaptive power-saving mechanisms.
Moreover, current wireless networks employ transmit power control or retransmissions to improve system performance in high-interference scenarios. Such an approach has a detrimental impact on energy efficiency. Predicting the transmit power based on actual network conditions could improve both the energy and spectrum efficiency of the overall system. Naturally, reinforcement learning techniques are most suitable for power control problems.
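A minimal sketch of such learned power control is an epsilon-greedy bandit over a few candidate power levels whose reward trades delivery success against energy spent. The success model and cost weight below are hypothetical placeholders for real network feedback (e.g. HARQ ACK/NACK statistics).

```python
import numpy as np

# Epsilon-greedy bandit over candidate transmit power levels. The success
# model and the 0.4 energy-cost weight are hypothetical placeholders.
rng = np.random.default_rng(3)
powers = np.array([0.2, 0.5, 1.0])            # candidate power levels (W)
q = np.zeros(len(powers))                     # estimated reward per level
n = np.zeros(len(powers))                     # pull counts

def reward(p):
    success = rng.random() < min(1.0, p / 0.5)  # success saturates at 0.5 W
    return float(success) - 0.4 * p             # rate reward minus energy cost

for _ in range(3000):
    a = int(rng.integers(len(powers))) if rng.random() < 0.1 else int(np.argmax(q))
    r = reward(powers[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]                   # incremental mean estimate

best_power = powers[int(np.argmax(q))]  # lowest power that still succeeds
```

The bandit settles on the lowest power level that still delivers reliably, which is exactly the energy-spectrum trade-off described above.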
By deploying ML-based algorithms to study cell-level traces collected in a real network, it is possible to devise models of predicted traffic patterns, contextual data, and the corresponding load metrics for different cells. These models can learn the behavior of neighboring interfering users and adjust sleep scheduling and transmit power accordingly. Moreover, such models can be used to dynamically switch BSs on and off to conserve energy at higher levels as well.
Asymmetric Traffic Accommodation
--------------------------------
Wireless data traffic is asymmetric in nature, which is why current wireless networks employ duplexing techniques such as time division duplexing (TDD) or frequency division duplexing (FDD). In TDD systems, addressing asymmetric traffic is simple and can be managed based on the traffic load in downlink and uplink. In FDD systems, however, the downlink and uplink frequency bands are separated by a frequency gap to provide isolation from self-interference. Although much progress has been made toward efficient cancellation of such interference to enable a true full-duplex system, the technology is still not mature. In FDD systems, the symmetric allocation of resources between uplink and downlink results in under-utilization of resources. ML techniques can help provide intelligent solutions to such problems by seamlessly integrating data traces collected from cells, enabling proactive rather than traditionally reactive MAC functions.
Recently, the concept of flexible duplexing has been introduced, which allows resources to be allocated dynamically in both the time and frequency domains, rather than statically as in TDD and FDD, to address asymmetric traffic. Flexible duplexing allows matching resources to the traffic pattern even in single-paired FDD, while the use of TDD allows resource allocation at symbol-level granularity rather than the carrier-level granularity of FDD. Deploying flexible duplexing for broadband communication might be possible where downlink traffic dominates, with uplink resources used for downlink transmission. In this case, the downlink traffic will experience interference from neighboring-cell uplink users with low transmit power, and the corresponding downlink power can be adjusted. ML-based algorithms can drive such techniques proactively, since the entire concept rests on traffic patterns and network activity data, and would increase system performance.
In this section, we have tried to answer the following research questions:
- What is the role of predictive models in MAC layer?
- How will ML help in resource allocation in wireless networks?
- How could asymmetric traffic prediction benefit from ML?
- What ML methods could be used for MTC networks?
- How will FL be used to address mobility for virtual reality?
ML for Security of Wireless Networks {#mlsecurity}
====================================
This section gives a brief discussion of the role of ML in the security of future wireless systems. First, a general roadmap towards 6G security is provided, followed by security aspects of the wireless medium.
The Road Towards 6G Security
----------------------------
The integration of massive IoT and the provision of new services in 5G, for example for smart homes, hospitals, transport, and electric grid systems, will exacerbate security challenges. Among the prominent solutions for governing network security, ML-based solutions have attracted the most attention due to the enormous amount of traffic expected in 5G. In 6G, speeds will grow many-fold, latency will become imperceptible, connectivity is poised to be ubiquitous, and critical infrastructures will be automated using the underlying network infrastructure. The network security paradigm will therefore shift further toward extremely agile, dynamic and autonomous systems, and ML-based security solutions will be inevitable.
With the conglomeration of diverse IoT devices and services, UAVs, V2X, and smart home appliances within 6G networks, differentiating between a security attack and legitimate traffic will be practically impossible or unmanageable without ML tools [@7943477]. Analyzing the enormous volumes of traffic-in-transit for security monitoring will require proactive, self-aware and self-adaptive ML techniques. Since critical infrastructures such as electricity smart grids, transportation and health-care systems will be connected, proactive security measures, which require continuous intelligence gathering and the use of that intelligence to mitigate security risks and lapses, will need storage and computing resources in zero-latency vicinity. Such systems would thus use ML to proactively move ML-based security functions to different network perimeters, and to scale the required resources dynamically without delay. Hence, ML will be a stepping stone to predicting and providing the needed security systems at those perimeters on the one hand, and to extending the necessary resources by scaling up from the pool of available virtual resources on the other.
ML-based security approaches need to be considered from an end-to-end network perspective in 6G. Currently, ML is used in different services, parts of the networks, and networked nodes. In 6G, the use of ML must be synchronized across the network. Consider, for example, intelligent ML-based spectrum sharing, which requires spectrum information to be securely shared among peers competing for the same frequency slot. The first task is to secure the sharing of information among the contending and provider peers; ML can be used to recognize the legitimacy of the contending peers and to share information securely. However, adjusting the upper layers after hopping to the available spectrum, as agreed by the providing and contending peers, will also need security to be in place, for example adjusting secure routing procedures in the network layer after a decision taken in the physical layer. Systems employing ML in each layer, e.g. for secure spectrum sharing in the physical layer and for secure route establishment and payload security in the network layer, require synchronized ML procedures.
Wireless Security
-----------------
The inherent openness of the wireless medium makes it susceptible to interference, which can be either intentional or unintentional. Unintentional interference could be caused by nearby devices that transmit at higher power levels as instructed by their controllers. Intentional interference, on the other hand, corresponds to adversarial attacks on a system. Adversarial attacks are detrimental because they may hamper the communication among various nodes and may potentially stop important communication. Two aspects of wireless security must be studied: defense and attack. Defensive mechanisms include cryptography and the like, while attacks refer to mechanisms where an attack such as jamming or eavesdropping is performed proactively to secure future transmissions. Such security-related studies not only allow analysis of a system's vulnerabilities but also make it possible to undermine an enemy system's capabilities.
The fast pace of ML research can potentially give every device some form of intelligence, which can be used in either a positive or a negative manner. If such capabilities exist in malicious devices, i.e. those that want to cause intentional interference, they threaten the security of the various devices that co-exist in the same environment. It is thus highly important that devices be intelligent enough to learn about the adversary and limit the effectiveness of attacks.
Typically, such problems have been addressed via game-theoretic or optimization frameworks. While these give good insights, they often assume static environments or a static action space for the adversary, which may not hold when the adversary itself possesses ML capabilities. Therefore, we must study these systems from both the attack [@AmuruPHY1] and defense [@TugbaPHY] perspectives. From an attack perspective, we need to design ML models that can learn the environment in real time and stop the adversary from communicating with or interfering with the network of interest. From a defense perspective, we need to design a communication system that is robust against any kind of attack, and adversarial ML mechanisms can be used to design such robust techniques.
In this section, we have tried to answer the following research questions:
- What is the role of Machine Learning in 6G Security (beyond ML-based security in 5G)?
- Which aspects of security (physical layer, MAC layer, network layer) can be addressed via machine learning?
- Where does machine learning in security find use cases, for example in defense applications?
ML at the Application Layer {#applicationmanagement}
===========================
ML solutions embedded directly in wireless communication nodes at the lower layers, with advanced features such as context awareness, performance optimization, and multi-agent reinforcement learning, will enable more reliable and stable per-user and per-application data rates, peak data rate, air-interface latency, spectrum efficiency, and energy efficiency. At the same time, embedded ML solutions at the transport or application layer, with sensor fusion techniques and the capacity to run ML as a service, will improve experience sharing, remote control capacity, seamless connectivity, and services.
Context-aware systems, in particular, provide the capacity to implement services that maximize an application's safety while minimizing its explicit interaction with the environment. In general, a set of rules has to be specified for the possible contextual configurations, with each rule assigned to a specific service in the context-aware system; determining and limiting the set of possible context configurations is a common problem. Instead of a rule-based approach, ML can be used to predict all possible and meaningful context configurations. Such an approach can use previous service choices and adapt itself using user/application feedback on new choices. A variety of ML techniques can help develop general-purpose context-aware applications without defining a priori rules and context elements. Such an application can provide services proactively by using different learning algorithms in a smart, continuously changing operational environment. User preferences (choices of services) may also change over time, so an adaptive learning algorithm is certainly preferable. The middleware layer plays a vital role in a context-aware system: it is responsible for context modeling, context reasoning, and controlling sensors, data sources, appliances and devices based on decisions from the context-aware application layer.
Making ML available as a service on wireless communication nodes will add flexibility and power to communication networks. Four key trends are making ML more accessible to users and companies: (1) improved processing power, (2) reduced costs for data storage and processing, (3) expanding data availability, and (4) improved techniques, such as the emergence of cloud-based deep learning solutions. Hybrid cloud and fog computing will likely further extend this accessibility by making ML available as a service to users and applications in the application layer of wireless communication nodes.
ML for 6G Network Performance Management Automation
---------------------------------------------------
5G-advanced/6G mobile networks have increased complexity, which demands smarter network features and approaches to handle Key Performance Indicator (KPI) degradation, anomaly detection, and trend prediction so as to keep KPIs within the required thresholds [@lam2020machine]. This can be achieved by applying ML and software-defined networking (SDN) solutions. ML will enhance the decision-making process needed to maintain excellent KPI network service levels. For 6G, a new approach is required for the management and implementation of Radio Access Networks (RAN). Example ideas include adding ML to baseband processes, using a virtualized container-based RAN compute architecture, and running the containers close to the Mobile Edge Computing (MEC) servers to achieve latency as low as 1 ms. Both RAN and core virtualization in 6G are moving from OpenStack VM-based to container-based applications for efficiency and security. ML enables anomaly detection in KPI trend spikes, success and failure rates, handover KPIs, accessibility KPIs, and availability KPIs, as well as integrity, privacy, and security KPIs.
Enabling ML modeling of accessibility, availability, mobility, and traffic performance using real-time 6G network data extracted from UE measurement reports will enhance and automate network performance management so as to keep KPIs within predefined thresholds. ML enables the management automation of dynamic 6G mobile networks with smart adaptive cells. This could enhance the performance of coverage, throughput, QoS prediction, automatic network configuration, power control, operation, maintenance, fault management, power saving, and beam management. Fig. \[figabbas1\] shows ML enhancing aspects of 6G network performance management.
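A minimal sketch of the anomaly detection mentioned above is a rolling z-score test on a single KPI time series; real deployments would use richer models over many KPIs simultaneously, and the threshold and synthetic data below are illustrative assumptions only.

```python
import numpy as np

# Rolling z-score anomaly detector for a KPI time series; threshold and
# synthetic data are illustrative only.
def detect_anomalies(kpi, window=20, z_thresh=4.0):
    flags = np.zeros(len(kpi), dtype=bool)
    for i in range(window, len(kpi)):
        ref = kpi[i - window:i]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        flags[i] = abs(kpi[i] - mu) / sigma > z_thresh
    return flags

rng = np.random.default_rng(4)
handover_success = 0.98 + 0.002 * rng.normal(size=200)  # stable KPI
handover_success[150] = 0.60                            # sudden degradation
flags = detect_anomalies(handover_success)              # flags index 150
```

The same windowed statistic can feed a trend-prediction model, so that KPIs drifting toward a threshold are flagged before they cross it.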
{width="12cm"}
ML-Aided UAV Control
--------------------
One of the major ultra-reliable low-latency communication (URLLC) applications is controlling unmanned aerial vehicles (UAVs) over wireless links in new mission-critical scenarios such as providing emergency wireless networks over disaster areas and delivering first-aid kits in rescue missions [@b40]. The control of UAVs requires stability guarantees, which are only possible if the wireless link can assure case-specific targets of ultra-reliability and low latency. In 5G URLLC, transmission techniques such as short-packet transmission and spatial/frequency/temporal diversity are considered to achieve the $99.999\%$ reliability and $1$ ms latency targets. However, applying control techniques that guarantee (physical) stability allows the latency and reliability requirements on transmission to be relaxed in mission-critical applications [@shiri2019massive; @park2020extreme]. Additionally, due to various (communication and/or control) constraints, communication and control co-design (CoCoCo) in real-time UAV use cases, as well as in many other automation applications, can become a very complex problem. To overcome this complexity, the regression/adaptation/learning capabilities of ML methods can be utilized. Two CoCoCo use cases are described briefly below.
In the first exemplary use case, a single UAV is controlled by a ground controller to reach a target destination. At each control cycle, the UAV state (velocity and distance to the destination at each time instant) is downloaded to the controller, and the optimal action (acceleration), computed by an ANN in the controller, is uploaded (UL) to the UAV within a time deadline. The UL transmission power can be tuned based on the download latency to meet the deadline. When the environment dynamics have been learned and the transmission cost becomes high, the UAV switches to autonomous mode by receiving the ANN model from the ground controller [@shiri2019remote]. As a result, the UAV is always controlled by a well-trained ANN to complete the desired mission.
As another example use case, consider a swarm of autonomous UAVs dispatched from a source to a target destination. Each autonomous UAV is controlled by a pair of ANNs: one obtains a mean-field (MF) approximation of the other UAVs' states (the MF neural network), and the other computes its optimal action (the action ANN). To reduce the risk of collision, the action ANN is adjusted when the relative distance between UAVs becomes small or their relative speed becomes large. The stability of this control method is guaranteed when the initial states of the UAVs are exchanged. Moreover, this ANN-based control method can reduce transmission power [@shiri2019massive; @shiri2020communicationefficient].
In both examples, ML and communication are considered jointly, improving the reliability, safety, and energy consumption of UAV control. This style of control, where both ML training and communications benefit from each other, is extensively studied in [@EdgeML]. Other ML and communication co-design use cases, such as [@Elbamby_2018], intelligently utilize communication resources with the help of ML-provided predictions, and [@elgabli2019gadmm] solves a distributed ML problem in a communication-efficient way. Based on these research examples, co-designing communication with ML/control can provide many advantages. However, this co-design remains a challenging issue that needs to be addressed further in 6G.
Opportunistic Data Transfer in Vehicular Networks
-------------------------------------------------
Parallel to the technological advancements that drive the development of 6G networks, road vehicles are subject to a step-wise evolution process that aims to improve traffic safety and efficiency by introducing means of connectivity and automation. As a side-effect of this development, the manifold sensing capabilities of modern cars will allow exploiting vehicles as moving sensor nodes that can cover large areas and provide highly-accurate measurements. Crowdsensing-enabled services such as the distributed generation of high-definition environment maps will then be available to improve the situation awareness of the vehicles themselves.
Data transfer in vehicular networks is a challenging task since the channel dynamics depend on a large number of external and environment-specific impact factors. Vehicular communications systems have to cope with very high velocities on highways and with sporadic line-of-sight situations in inner cities. As a result, moving vehicles frequently encounter low-connectivity regions where link loss and packet errors are highly probable, which creates a need for retransmissions.
Client-based context-aware network optimization techniques such as opportunistic data transfer and multi-connectivity offer the potential for relief without extending the actual network infrastructure. Here, ML-based data rate prediction allows selecting network interfaces and scheduling data transmissions based on the anticipated resource efficiency, within a delay-tolerance window derived from application-specific requirements on the age of information of the sensor measurements. This approach allows highly resource-consuming transmissions to be proactively detected and avoided.
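A minimal sketch of such data rate prediction, under an assumed linear relation between synthetic context features and the achieved rate, fits a least-squares model and gates transmissions on the predicted rate. The features, coefficients and threshold are assumptions for illustration, not measured values.

```python
import numpy as np

# Fit a linear data rate predictor from synthetic context features (RSRP,
# velocity) and gate transmissions on the predicted rate.
rng = np.random.default_rng(5)
rsrp = rng.uniform(-110, -70, size=300)              # dBm
speed = rng.uniform(0, 30, size=300)                 # m/s
rate = 0.8 * (rsrp + 110) - 0.5 * speed + rng.normal(size=300)  # Mbit/s

X = np.column_stack([rsrp + 110, speed, np.ones_like(rsrp)])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)

def should_transmit(rsrp_now, speed_now, threshold=15.0):
    """Transmit only if the predicted rate beats the efficiency threshold."""
    pred = coef @ np.array([rsrp_now + 110.0, speed_now, 1.0])
    return bool(pred > threshold)

go_now = should_transmit(-75.0, 5.0)      # strong signal, slow: transmit
defer = not should_transmit(-108.0, 25.0) # weak signal, fast: defer
```

Within the delay-tolerance window, deferred data is simply retried when the predicted rate recovers, which is what avoids the highly resource-consuming transmissions mentioned above.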
Although first feasibility studies [@Sliwa/etal/2019d] using passive downlink indicators have practically demonstrated the achievable benefits of this approach, purely client-based techniques are largely unaware of the network load and the resources potentially available within the connected cell, which ultimately limits the achievable data rate prediction accuracy.
Within 6G networks, these limitations could be overcome through a cooperative approach where the network infrastructure actively shares its load information with the clients via control channels.
![Opportunistic data transfer in vehicular networks.[]{data-label="smart_traffic"}](smart_traffic){width="80.00000%"}
Software Development Aspects
----------------------------
For real-world usage, choosing an ML model for a specific problem cannot be decided solely on prediction performance metrics. Limitations on computation and energy resources, and requirements on response time, affect the software technologies used to extract data, store it, train ML models, and make predictions. Moreover, relying on ML for networking and wireless communications will have a profound impact on the software development practices needed to deliver results while ensuring quality.
Research on engineering ML solutions in wireless communication must also address challenges in development practices. If wireless systems software development shifts increasingly towards ML-based methods, the main challenge will be the engineering paradigm change from deterministic, classic requirements-driven projects and processes towards data-driven monitor, extract-data, learn, predict cycles in the development of systems and services. Consequently, one of the first steps is to evaluate existing engineering tools, methods and processes for their adaptability to the ML-driven development loop described above. This gives an overarching understanding of the magnitude of the changes and investments required across industry domains.
In parallel, with data science and ML gaining in popularity, software problems specific to the field also became apparent. Systems relying heavily on ML not only share the same issues that other software systems encounter, but have additional long-term shortcomings that can incur high maintenance costs for real-world usage. In the past years, DataOps, a movement inspired by DevOps, has emerged to better deal with those problems specific to data science and ML [@ereth2018dataops]. This movement aims at providing development processes and software technologies to improve quality when delivering data-based solutions in a fast-changing world. To provide ML solutions at a large scale for wireless systems, 6G will have to embrace development practices from Agile software development, DevOps, and DataOps. Moreover, movements like DevOps and DataOps are relatively new and in an ever-evolving state. Thus, because networking and wireless communications have their specificities and requirements of their own, people with an active role in the development of 6G might also have to take an active role in these movements.
In this section, we have tried to answer the following research questions:
- What is the vision for 6G Network Management?
- How will ML enable, enhance and automate the network performance management for 6G Mobile networks?
- How will ML enable, enhance and automate the 6G mobile network optimization?
- What existing software development practices, processes, and technologies will be needed in order to incorporate ML in large scale real-world networking and wireless communication technologies?
- What are specificities of 6G that will require to adapt existing or create new Agile, DevOps, or DataOps practices, processes, and technologies?
Standardization Activities
==========================
Various standardization bodies like 3GPP and the International Telecommunication Union (ITU), but also the 5GAA (5G Automotive Association), have started evaluating ML in 5G and future networks. From a standardization perspective, the ML models and algorithms themselves will not be standardized [@S1_193479]. Other bodies such as the O-RAN Alliance have started defining open interfaces in order to exchange relevant information between various parts of the protocol stack. Specifically, they have defined two entities, a real-time RAN intelligent controller (RIC) and a non-real-time RIC. The non-real-time RIC is where the ML models are trained using the data captured by the lower layers. This learning happens very slowly, hence the term non-real-time.
This learned model is fed into the real-time RIC which uses this model on real-time data and makes real-time decisions in an online fashion. Such systems can be deployed in core networks or in RAN based on the type of data that can be collected.
The discussion of introducing ML capabilities into the 3GPP RAN is still at a preliminary stage in standardization. The autonomous network is an important topic for the RAN considering the complexity of future networks. Six levels of automation are proposed for the RAN, starting with a manually operated network at level zero (L0) and ending at L5 with a fully autonomous network involving no human at any stage. The levels are summarized in Table \[tab:my-table\] along with the tasks [@3gpp28_810].
Additionally, it is also required to define
- signaling support for ML training and execution,
- data required by the ML algorithms either reported by the user equipment (UE) or collected from an NG-RAN node, and
- outputs generated by the algorithms to be delivered to the network including the network functions and core network.
Also, if the UE is capable of supporting at least part of the ML inference on board, it becomes relevant to study how the ML-enabled UE obtains an updated ML model and intermediate output in response to dynamic environment changes and application needs. It is infeasible to pre-load all possible models on board because of the limited storage space in UEs. Therefore, ML model downloading or transfer learning is needed. ITU-T Rec. Y.3172 defines a technology-agnostic logical architecture model for the high-level machine learning requirements – such as interfaces, support for heterogeneous data sources, machine learning mechanisms – in future networks. The actual underlay network technology (e.g., 4G, 5G, 6G, IEEE 802.11) is virtually mirrored by a digital twin – referred to as *closed-loop subsystem* – which is utilized to safely explore the outcomes of different machine learning-enabled acting options.
Acknowledgement {#acknowledgement .unnumbered}
===============
This draft white paper has been written by an international expert group, led by the Finnish 6G Flagship program (6gflagship.com) at the University of Oulu, within a series of twelve 6G white papers to be published in their final format in June 2020.
[^1]: \[cwc\]Centre for Wireless Communications, University of Oulu, Finland, (emails:{samad.ali, nandana.rajatheva, hamid.shiri, kai.mei}@oulu.fi).
[^2]: \[vt\] Wireless@VT, Bradley Department of Electrical and Computer Engineering, Virginia Tech, USA, (email: [email protected]).
[^3]: Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea (email: [email protected]).
[^4]: InterDigital, Inc., USA, (email: [email protected])
[^5]: \[tudortmund\]TU Dortmund University, Germany, (emails: {benjamin.sliwa, christian.wietfeld}@tu-dortmund.de)
[^6]: \[bk\] Blekinge Institute of Technology, Sweden, (emails: {hans-jurgen.zepernickm, thi.my.chinh.chu}@bth.se)
[^7]: \[vtt\]VTT Technical Research Center, Finland, (emails: {ijaz.ahmad, jyrki.huusko}@vtt.fi)
[^8]: Biomimetics and Intelligent Systems (BISG), University of Oulu, Finland, (email: [email protected])
[^9]: Fraunhofer Institute for Integrated Circuits IIS, Germany, (email: [email protected])
[^10]: \[IITindoore\] IIT Indore, India, (email: [email protected])
[^11]: University of Quebec, Montreal, Canada, (email:[email protected])
[^12]: IIT Hyderabad, India, (email: [email protected])
[^13]: Macquarie University, Australia, (email: [email protected])
[^14]: Warwick Institute for the Science of Cities, UK, (email: [email protected])
[^15]: Capobianco - Business Innovation Management, Pordenone, Italy, (email: [email protected])
[^16]: ZTE Corporation, China, (email:[email protected])
[^17]: \[oulusfotware\] Empirical Software Engineering in Software, Systems and Services (M3S), University of Oulu, (emails: {maelick.claes, teemu.karvonen}@oulu.fi)
[^18]: Princeton University, USA, (email: [email protected])
[^19]: Ericsson Research, Sweden, (email: [email protected])
[^20]: Prontominds OÜ, Estonia, (email: [email protected])
[^21]: Depending on the degree of performance deterioration caused by offline learning and actual wireless channel environment mismatch, $x$ may differ for each item. If it is based on Fig. \[fig:2\], channel coding < synchronization < positioning is listed by item with the smallest $x$ value.
---
abstract: 'DDoS attacks are increasingly used by ‘hackers’ and ‘hacktivists’ for various purposes. A number of on-line tools are available to launch an attack of significant intensity. These attacks lead to a variety of losses at the victim’s end. We analyse the impact of Distributed Denial-of-Service (DDoS) attack announcements over a period of 5 years on the stock prices of the victim firms. We propose a method for event studies that does not assume the cumulative abnormal returns to be normally distributed, instead we use the empirical distribution for testing purposes. In most cases we find no significant impact on the stock returns but in cases where a DDoS attack creates an interruption in the services provided to the customer, we find a significant negative impact.'
author:
-
-
-
bibliography:
- 'Bibliography.bib'
title: Analysing The Impact Of A DDoS Attack Announcement On Victim Stock Prices
---
Abnormal Returns, Event Study, Cyber Security, DDoS Attacks.
Introduction
============
The trend of significant growth in the magnitude of high intensity DDoS attacks has been consistent in the past years [@WISR2015] and these attacks have resulted in heavy losses for firms [@PI2015]. The rise in the number of attacks being encountered can be attributed to the ample availability of online tools for launching DDoS attacks. Booter websites have become successful in creating a market for themselves and as a consequence technical knowledge is no longer a prerequisite for launching a DDoS attack [@Santanna2014].
The losses encountered by firms due to these cyber assaults can be divided into direct and indirect ones [@Anderson2012]. Financial damages due to infrastructural downtime, loss of online traffic, paid ransoms and customer compensation are accounted as direct losses. Indirect losses include damage to the company’s reputation and the impact on stock prices. We examine the indirect loss due to the decrease in the market value of a firm as a result of an announcement of getting hit by a DDoS attack. In Section \[previous works1\] we discuss several studies on the impact of information security breaches on stock prices [@Spanos2016].
In our study, we consider all DDoS attacks reported after 2010. We do this in order to understand the effects caused by these announcements. Unlike earlier studies we will study the impact of DDoS attack announcements only, because these attacks do not lead to any form of information leaks and do not pose any danger to customer data. Hence, in our sample we do not consider any of the events where DDoS has been used as a smoke screen.
Previous Work {#previous works1}
=============
[c c c c c >m[4.4 cm]{} c]{} & **Author** & **Estimation Model** & **Sample Size** & **Breach Type** & **Conclusion** & **Sample Period**\
[@HovavAnatandDArcy2003] & Hovav & D’Arcy (2003) & Market Model & 23 & DoS & No significant impact of DoS attacks on the capital market. Some indication of impact on firms that rely on the web for their business. & 1998-2002\
[@Campbell2003] & Campbell [*et al.*]{}(2003) & Market Model & 43 & Generic & Some negative stock market impact to reported information security breaches. & 1995-2000\
[@Garg2003] & Garg [*et al.*]{}(2003) & N/A & 22 & Generic & Average fall in the stock price was approximately 2.9% over a 2-day and 3.6% over 3-day period. & 1996-2002\
[@Cavusoglu2004] & Cavusoglu [*et al.*]{}(2004) & Market Model & 66 & Generic & Security breach announcements affect the values of the announcing firms and also the Internet security developers. & 1996-2001\
[@Kannan2007] & Kannan [*et al.*]{}(2007) & Market Model & 102 & Generic & Drop of 1.4% in the market valuation relative to the control group of firms. & 1997-2003\
[@Gordon2011] & Gordon [*et al.*]{}(2011) & Fama-French Model & 258 & Generic & Pre 9/11 information security breaches showed significant negative stock market returns but the results for the post 9/11 period were not significant. & 1995-2007\
Event studies measure the impact of company related events on the market value of the firm. MacKinlay [@Mackinlay1997] discusses the procedure for conducting an event study and also the various models that can be used for estimation of normal behaviour of the market. In the past many researchers have studied the impact of information technology related events on the market value of the firm. Santos [*et al.*]{}[@Santos1993] examined the impact of information technology investment announcements on the market value of the firm and suggested that there is no significant impact of these investment announcements on the market value.
Previous studies [@HovavAnatandDArcy2003; @Campbell2003; @Cavusoglu2004; @Kannan2007] have used a one-factor market model for the estimation of stock prices as shown in Equation \[market model\], where $r_{it}$ represents the rate of return of stock $i$ and $r_{mt}$ represents the rate of return of the market index on day $t$. For instance, $r_{it}$ can be calculated as $(P_{it}-P_{it-1})/P_{it-1}$, where $P_{it}$ is the price of the stock on day $t$.
$$\label{market model}
r_{it}={\alpha_i}+{\beta_i}r_{mt} + \epsilon_{it}$$
The parameters $\alpha$ and $\beta$ are firm dependent coefficients. $\hat{\alpha}$ and $\hat{\beta}$ are their ordinary least square (OLS) estimators. The stochastic variable $\epsilon_{it}$ is the error term with $\operatorname*{\mathbb{E}}{[\epsilon_{it}]}=0$. Gordon [*et al.*]{}[@Gordon2011] use a Fama-French three-factor model [@famafrench] to predict the stock prices, the three factors being company size, company price-to-book ratio and market risk. The three-factor model is shown in Equation \[famafrench\].
$$\label{famafrench}
r_{it}={a_i}+{b_i}r_{mt}+{s_i}SMB_t+{h_i}HML_t+\epsilon_{it},$$
$SMB_t$ is the difference between the return on a portfolio of small stocks and the return on a portfolio of large stocks on day $t$, and $HML_t$ is the difference between the return on a portfolio of high-book-to-market stocks and the return on a portfolio of low-book-to-market stocks on day $t$. The parameters ${a_i}$, ${b_i}$, ${s_i}$ and ${h_i}$ are firm dependent coefficients estimated for the Fama-French three-factor model. The stochastic variable $\epsilon_{it}$ is the error term with $\operatorname*{\mathbb{E}}{[\epsilon_{it}]}=0$.
These studies [@Kannan2007; @HovavAnatandDArcy2003; @Campbell2003] use abnormal returns (additive) and cumulative abnormal returns (additive) as a measure of event impact. Equations \[addar\] and \[addcar\] show the relations used to compute abnormal returns and cumulative abnormal returns respectively. As they assume the $CAR$ values to be normally distributed, they use a Z statistic to test their hypotheses.
$$\label{addar}
AR_{it}=r_{it}-(\hat{\alpha_i}+\hat{\beta_i}r_{mt})$$
$$\label{addcar}
CAR_n=\sum_{t=-1}^{n}AR_{it}$$
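The one-factor estimation and additive aggregation used in these earlier studies can be sketched as follows. This is an illustrative reimplementation (not code from any of the cited papers), using NumPy and synthetic returns in place of real stock data; all variable names are ours.

```python
import numpy as np

def fit_market_model(r_i, r_m):
    """OLS fit of the one-factor market model r_it = alpha + beta * r_mt."""
    beta, alpha = np.polyfit(r_m, r_i, 1)  # polyfit returns highest degree first
    return alpha, beta

def additive_car(r_i, r_m, alpha, beta):
    """AR_it = r_it - (alpha + beta * r_mt), summed over the event window."""
    ar = r_i - (alpha + beta * r_m)
    return ar.sum()

# Synthetic estimation-period returns (stand-in for 200 days of real data)
rng = np.random.default_rng(0)
r_m = rng.normal(0.0, 0.01, 200)                       # market index returns
r_i = 0.001 + 1.2 * r_m + rng.normal(0.0, 0.005, 200)  # stock tracks the market

alpha_hat, beta_hat = fit_market_model(r_i, r_m)
car = additive_car(r_i[:3], r_m[:3], alpha_hat, beta_hat)  # a 3-day window
```

With enough estimation days the OLS estimates recover the coefficients used to generate the synthetic stock, and the additive $CAR$ over a short window stays close to zero in the absence of an event.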
Past studies have been conducted on evaluating the impact of information security breaches on the prices of the victim firm’s shares. Table \[Previous Works\] lists selected works and their conclusions. In this table we also take a look on the sample size and period of the sample considered by these studies.
Previous studies have reached mixed conclusions on the impact of denial-of-service attacks on the stock returns of the victim firms. Some studies like Garg [*et al.*]{}[@Garg2003] and Hovav & D’Arcy [@HovavAnatandDArcy2003] suggest that DDoS attack announcements lead to negative abnormal returns, while Gordon [*et al.*]{}[@Gordon2011] deny the effect of these attacks on the market value of the firm. Spanos & Angelis [@Spanos2016] conducted a systematic literature review on the impact of information security events on the stock market and concluded that the events examined created a significant impact on the stock price of the firms.
Method
======
(data) at (-2,0) [Historical stock\
Prices($R_i$)\
(200 days)]{}; (data1) at (2,0) [S&P 500\
Index Values ($R_m$)]{}; (rate) at (0,-2) [Calculation of rates ($r_i,r_m$)]{}; (market) at (0,-4) [*Multiplicative model*]{}; (abnormal) at (0,-6) [Calculation of abnormal returns $AR_i$]{}; (scenario) at (0,-8) [*Generation of random scenarios*]{}; (actual scenario) at (0,-10) [Determining the position of actual scenario]{}; (data)->(rate); (data1)–(rate); (rate)–(market); (market)–(abnormal); (abnormal)–(scenario); (scenario)–(actual scenario); (-3.5,-1) – (-3.5,0.75) node \[black,midway,xshift=-0.6cm\]; (-3.5,-10) – (-3.5,-1) node \[black,midway,xshift=-0.6cm\];
To analyse the impact of DDoS attack announcements on stock returns we use the method as shown in Figure \[Method Diagram\]. We can broadly divide the method into two sections:
1. Data collection.
2. Analysis
Our contribution to the analysis is at two instances. Firstly, we use a *multiplicative model* for the estimation of return rates and secondly, we use the empirical distribution of abnormal returns by *generation of random scenarios* for the analysis. In Section \[Sec: Data Collection\] we explain the approach for data collection. Section \[Sec: Analysis\] deals with the identification of the impact caused by the announcements.
Data Collection {#Sec: Data Collection}
---------------
**Organisation** **Announcement Date** **Source** **Infrastructure** **Firm Type**
--------------------------------- ----------------------- --------------------------- ------------------------- --------------------
Master Card 8-12-2010 spiegel.de Website Financial Services
Visa 8-12-2010 spiegel.de Website Financial Services
Bank of America 27-12-2010 infosecisland.com Website Financial Services
Vodafone 5-10-2011 infosecurity-magazine.com None Telecommunications
Apple 29-5-2012 att-iphone-unlock.com Website IT
AT&T 16-8-2012 pcworld.com None Telecommunications
Wells Fargo 20-12-2012 technologybanker.com DNS Financial Services
JP Morgan Chase 13-3-2013 scmagazine.com Website Financial Services
TD Canada Trust 21-3-2013 thestar.com E Services Financial Services
American Express Company 28-3-2013 bankinfosecurity.com Website Financial Services
International Netherlands Group 9-4-2013 nrc.nl Payment Services Financial Services
LinkedIn 21-6-2013 news.softpedia.com Website Social Networking
Microsoft 27-11-2013 scmagazine.com DNS IT/Gaming
Royal Bank of Scotland 4-12-2013 theguardian.com Banking Services Financial Services
JP Morgan Chase 30-1-2014 bobsguide.com Online Banking Services Financial Services
Bank of America 30-1-2014 bobsguide.com Online Banking Services Financial Services
Facebook 21-2-2014 nos.nl Messaging Services Social Networking
Activision Blizzard 29-3-2014 ign.com Gaming Services Gaming
Danske Bank 10-7-2014 ddosattacks.net Website Financial Services
Storebrand 10-7-2014 ddosattacks.net Website Insurance Company
Gjensidige Forsikr 10-7-2014 ddosattacks.net Website Insurance Company
Sony Corporation 24-8-2014 techcrunch.com Gaming Services IT
Amazon 27-8-2014 shacknews.com Twitch Streamers E-commerce
Activision Blizzard 14-11-2014 eurogamer.net Gaming Services Gaming
Sony Corporation 26-11-2014 wiwo.de Gaming Services IT
Rackspace 22-12-2014 welivesecurity.com DNS Hosting
Microsoft 24-12-2014 krebsonsecurity.com Gaming Services IT/Gaming
Sony Corporation 24-12-2014 krebsonsecurity.com Gaming Services IT
Alibaba Group 25-12-2014 ddosattacks.net Cloud Services E-commerce
Nordea Bank 4-1-2015 ddosattacks.net Online Banking Services Financial Services
Facebook 27-1-2015 gizmodo.com.au Website Social Networking
Amazon 16-3-2015 scmagazineuk.com Twitch Streamers E-commerce
EA Sports 18-3-2015 ibtimes.com Gaming Services Gaming
Ziggo 18-8-2015 emerce.nl DNS Telecommunications
Overstock.com 3-9-2015 ddosattacks.net DNS E-commerce
\[sample\]
In this study we consider all DDoS attack announcements that were made on the web since ‘Operation Payback’, launched by Anonymous in December, 2010. Table \[sample\] shows the final list of all announcements that we analysed. For each attack we collected the date of announcement, the company type and also the services disrupted. The initial list consisted of 43 announcements.
We further filtered the list using the following criteria:
1. If multiple announcements were made on consecutive days, then the earliest date was considered.
2. All announcements regarding companies that were not publicly traded at the time of attack were eliminated.
3. All attack announcements in which a DDoS attack was coupled with information theft were also not considered for analysis. This was done in order to isolate the effect of a DDoS attack announcement on the firm’s stock price.
The stock prices for all the firms in the sample were collected by using the Yahoo! finance API. For measuring the market rate we collected the S&P 500 index values. The final sample consisted of 35 announcements.
Analysis {#Sec: Analysis}
--------
We depart from the familiar research strategy for event studies for the following reasons. First, we wish to avoid the widespread practice of approximating multi-day returns by simply adding up the corresponding single-day returns[^1] and instead use the exact ones. Secondly, we want to avoid the equally widespread assumption that short-term returns are (approximately) distributed according to a normal, i.e., Gaussian, distribution. We refrain from imposing as an alternative one of the better-known distributions such as the Weibull or the Erlang distributions, as the problem generally is not only skewness (asymmetry) but fatness of both tails, i.e., realisations quite far from the average are more common than, for instance, in the normal distribution with the same mean and variance. Another route not taken is to use the sample of returns for the 200 days prior to the event and fit one of these alternative distributions to the data. Instead we assume that the one-day returns follow an unknown distribution which we approximate by the empirical distribution, i.e., the distribution of the 200-day-sample returns.
We acknowledge the considerable merits of the widespread research strategy involving these two approximations as they subsequently allow the construction of test variables which follow the Student’s t-distribution in order to engage in the testing of hypotheses.
As we do not use the approximations central to this research strategy, we are faced with the challenge of establishing a pertinent distribution for similar hypothesis testing. We do this by the technique of bootstrapping (e.g., Efron [@efron1992bootstrap]) which in our case involves generating a sufficiently large number of multi-day returns by drawing from the empirical distribution a number of consecutive one-day returns corresponding to the number desired. With such a series of one-day returns we compute the exact multi-day returns, and proceed in the same fashion to obtain a large number (in our case 5 million) of such multi-day returns. The relative frequencies of this large population of exact multi-day returns are then employed as the relevant distribution for hypothesis testing. Note that the standard approach in event studies is to take as the null hypothesis that the event has no influence at all, meaning in statistical terms that the distribution of returns before the event and the one after the event are identical. So, under this assumption the sample returns can be indeed used to generate the relevant distributions of multi-day returns.
We consider a multiplicative model to represent the normal behaviour of the market. According to the model if $r_{it}$ represents the rate of return of the stock $i$ on day $t$ and $r_{mt}$ represents the rate of return of the market index on day $t$, then the model can be represented mathematically by Equation \[model\]. Rate of return can be calculated as shown in Equation \[rate\], where $R_{it}$ and $R_{mt}$ represent the stock price and market index for day $t$. The value of the market index shows the average of returns of all the firms included in the market index.
$$\label{rate}
\begin{split}
r_{it} &= \dfrac{R_{it}-R_{i(t-1)}}{R_{i(t-1)}}\\
r_{mt} &= \dfrac{R_{mt}-R_{m(t-1)}}{R_{m(t-1)}}\\
\end{split}$$
$$\label{model}
(1+r_{it}) = \alpha_i(1+r_{mt})^{\beta_i}\\$$
A multiplicative model is used to estimate the returns on a firm’s stock. In this study we use the Standard and Poor’s (S&P) 500 as the market index; it has served as the market index in many previous event studies. The parameters $\alpha_i$ and $\beta_i$ are firm dependent and will be estimated.
Equation \[model\] is linearised in Equation \[log model\]. The stochastic variable $\epsilon_{it}$ is the error term with $\operatorname*{\mathbb{E}}{[\epsilon_{it}]}=0$. We use ordinary least square (OLS) estimation to obtain estimations $\widehat{\ln{\alpha_i}}$ and $\hat{\beta_i}$ for $\ln{\alpha_i}$ and $\beta_i$ by considering daily returns over a period of 200 days. This period starts 201 days before the date of announcement and ends 2 days before the announcement. In this study we will call this the *estimation period*. This length of the estimation period is consistent with the previous event studies [@Gordon2011; @HovavAnatandDArcy2003; @Santos1993].
$$\label{log model}
\ln(1+r_{it}) = \ln(\alpha_i)+\beta_i\ln(1+r_{mt})+\epsilon_{it}$$
The abnormal return measures the deviation that the stock shows from the model we calculate. $AR$ is calculated for the estimation period and is given by Equation \[abnormal returns\]. Hence, abnormal returns can be calculated by using Equation \[transformed\].
$$\label{abnormal returns}
\ln(1+AR_{it})=[\ln(1+r_{it})-[\widehat{\ln(\alpha_i)}+\hat{\beta_i}\ln(1+r_{mt})]]\\$$
$$\label{transformed}
AR_{it} = \frac{(1+r_{it})}{\hat{\alpha_i}(1+r_{mt})^{\hat{\beta_i}}}-1\\$$
The estimator[^2] $\widehat{\ln(\alpha)}$ is a good estimator for $\ln(\alpha)$, but exponentiating it does not directly yield a good estimator $\hat{\alpha}$ for $\alpha$. Hence, for estimating $\hat{\alpha}$ we make use of Equation \[alpha estimator\], which is derived from Equation \[log model\].
$$\label{alpha estimator}
\hat{\alpha_i}=\dfrac{\sum_{t=1}^{T}(1+r_{it})}{\sum_{t=1}^{T}(1+r_{mt})^{\hat{\beta_i}}},$$
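Equations \[log model\], \[transformed\] and \[alpha estimator\] combine into a short estimation routine. The following is a minimal sketch under the model stated above, not the authors' code; it uses NumPy, synthetic data in place of real stock prices, and variable names of our choosing.

```python
import numpy as np

def fit_multiplicative(r_i, r_m):
    """Estimate beta_i by OLS on the linearised model
    ln(1+r_it) = ln(alpha_i) + beta_i * ln(1+r_mt)   (Eq. [log model]),
    then recover alpha_i via Eq. [alpha estimator]."""
    beta, _log_alpha = np.polyfit(np.log1p(r_m), np.log1p(r_i), 1)
    alpha = (1.0 + r_i).sum() / ((1.0 + r_m) ** beta).sum()
    return alpha, beta

def abnormal_returns(r_i, r_m, alpha, beta):
    """AR_it = (1+r_it) / (alpha_i * (1+r_mt)^beta_i) - 1   (Eq. [transformed])."""
    return (1.0 + r_i) / (alpha * (1.0 + r_m) ** beta) - 1.0

# Synthetic 200-day estimation period following the multiplicative model
rng = np.random.default_rng(1)
r_m = rng.normal(0.0, 0.01, 200)
r_i = 1.001 * (1.0 + r_m) ** 0.9 * np.exp(rng.normal(0.0, 0.005, 200)) - 1.0

alpha_hat, beta_hat = fit_multiplicative(r_i, r_m)
ar = abnormal_returns(r_i, r_m, alpha_hat, beta_hat)
```

By construction of the $\hat{\alpha}_i$ estimator, the abnormal returns over the estimation period average out close to zero.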
where $T$ is the total number of days in the estimation period. In order to measure the impact of the announcements on the stock return we define five event periods as shown in Figure \[periods\]. These are:
1. One day prior to the announcement to the day of announcement $[t-1,t]$.
2. One day prior to the announcement to 1 days after it $[t-1,t+1]$.
3. One day prior to the announcement to 3 days after it $[t-1,t+3]$.
4. One day prior to the announcement to 5 days after it $[t-1,t+5]$.
5. One day prior to the announcement to 10 days after it $[t-1,t+10]$.
(Figure \[periods\]: timeline showing the 200-day estimation period and the five event periods $[-1,0]$, $[-1,1]$, $[-1,3]$, $[-1,5]$ and $[-1,10]$ around the announcement day $t$.)
We consider the starting point of the event period one day prior to the announcement so as to accommodate for information leaks. We randomly draw 2, 3, 5, 7 and 12 abnormal returns from the estimation period to represent the abnormal returns for the event periods $[-1,0],[-1,1],[-1,3],[-1,5]$ and $[-1,10]$ respectively, and generate five million random scenarios for each event period. Recall that we do not assume the abnormal returns to be normally distributed at any point; this improves the accuracy of our results. We consider short event periods of 2, 3, 5, 7 and 12 days in accordance with the results of previous studies [@Garg2003; @HovavAnatandDArcy2003; @Gordon2011].
For evaluating the combined effect over a certain number of days we also calculate cumulative abnormal returns for the randomly generated scenarios. $CAR$ is calculated using relation shown in Equation \[CAR\]. Where, $N_1$ and $N_2$ represent the start and ending days of the event period. The actual $AR$s and $CAR$s for the event period are calculated using Equations \[abnormal returns\] and \[CAR\] respectively on the real stock data for the event periods. It is important to note that previous studies have assumed these cumulative abnormal returns to be normally distributed for strategic convenience. We use the empirical distribution of $CAR$ for analytical purposes, i.e. for hypothesis testing.
$$\label{CAR}
CAR= \prod_{t=N_1}^{N_2}(1+AR_{it})-1$$
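The scenario generation and the multiplicative $CAR$ of Equation \[CAR\] can be sketched as below. This is an illustrative reimplementation rather than the authors' code: we use 100,000 scenarios instead of the paper's five million to keep the example fast, and synthetic abnormal returns in place of a real firm's.

```python
import numpy as np

def bootstrap_car(ar_sample, window_len, n_scenarios=100_000, seed=0):
    """Draw window_len abnormal returns (with replacement) from the empirical
    estimation-period distribution and compound them: CAR = prod(1+AR) - 1."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(ar_sample, size=(n_scenarios, window_len), replace=True)
    return np.prod(1.0 + draws, axis=1) - 1.0

def percentile_of(car_dist, actual_car):
    """Position (in percent) of the observed CAR within the simulated distribution."""
    return 100.0 * np.mean(car_dist <= actual_car)

# Synthetic estimation-period abnormal returns (stand-in for a real firm's ARs)
rng = np.random.default_rng(2)
ar_sample = rng.normal(0.0, 0.01, 200)

car_dist = bootstrap_car(ar_sample, window_len=2)  # event period [-1, 0]
p = percentile_of(car_dist, actual_car=-0.05)      # a strongly negative CAR
```

An observed two-day $CAR$ of $-5\%$ lands deep in the left tail of this simulated distribution, which the decision rule reads as a negative impact.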
![Empirical distribution of two-day $CAR$ values for ING.[]{data-label="fig:distribution"}](car2){width="50.00000%"}
Finally, to determine the effect of the announcement on the daily stock return rates we check where the actual cumulative abnormal returns lie in the distribution of simulated cumulative abnormal returns (multiplicative). Figure \[fig:distribution\] shows an example of the distribution of two-day $CAR$ values for *International Netherlands Group*. A highly unlikely negative $CAR$ value represents a negative impact of the announcement, with the actual scenario falling at the extreme left of the probability distribution. For our analysis we take the bottom 10 percent of scenarios to be representative of a negative impact and the top 10 percent to represent a positive impact. Hence, for the evaluation of the results we use the decision rule shown in Figure \[Rule\].
(Figure \[Rule\]: decision rule. A $CAR$ in the bottom 10% of simulated scenarios indicates a negative impact; one in the top 10% indicates a positive impact; anything in between indicates no significant impact.)
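The decision rule of Figure \[Rule\] amounts to a simple percentile threshold. A plain-Python sketch, where the function name and the 10% default are ours but mirror the rule just described:

```python
def classify_impact(percentile, threshold=10.0):
    """Decision rule: bottom decile of simulated CARs -> negative impact,
    top decile -> positive impact, otherwise no significant impact."""
    if percentile < threshold:
        return "Negative"
    if percentile > 100.0 - threshold:
        return "Positive"
    return "None"

# ING's two-day CAR sits at the 2.96th percentile (Table [Full Results])
print(classify_impact(2.96))   # -> Negative
```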
Results
=======
The complete results for our study are shown in Table \[Full Results\] in Appendix \[Results Table\]. According to the results of our analysis we observe a significant negative impact in the case of *International Netherlands Group* and *EA Sports*, whereas a delayed negative effect is noticeable in the case of *Bank of America*, *Storebrand* and *Nordea Bank*. In most cases we do not see a negative effect on the victim stock prices.
In cases where the announcements state that the availability of the infrastructure under attack did not affect the customers, no significant impact was noticed. For example, in the case of *Visa* and *MasterCard* the infrastructure under attack was their *website* but the customers were still able to use their cards for payment purposes. Whereas in the case of *International Netherlands Group*, customers had troubles using the payment services. Similarly, in the case of *EA Sports*, gamers were not able to log onto their on-line gaming accounts.
In the case of *Ziggo*, the customers did face troubles due to the unavailability of internet services but as the firm is a part of a bigger conglomerate *Liberty Global*, we were unable to spot any significant impact on the stock prices.
Conclusion
==========
As a conclusion, we can say that there is a noticeable negative impact on the stock prices of the victim firm whenever the attack causes interruptions to the services provided by the firm to its customers. This drop is consistent with the results of the previous studies [@Garg2003; @HovavAnatandDArcy2003]. However, it is not possible to comment on the intensity of the impact because it is firm dependent.
Complete Results. {#Results Table}
=================
**Company** **$CAR$** **Percentile** **Impact** **Event Period**
------------------ -------------- ---------- ------------ ------------------
-0.015882584 26.9273 None \[-1,0 \]
-0.027930361 19.52314 None \[-1,1 \]
MasterCard -0.025912927 27.83942 None \[-1,3 \]
0.107126511 97.48828 Positive \[-1,5 \]
0.135604103 97.36842 Positive \[-1,10 \]
-0.028139803 12.72802 None \[-1,0 \]
-0.040441177 9.26076 Negative \[-1,1 \]
Visa -0.047573044 11.8319 None \[-1,3 \]
0.140838043 99.09936 Positive \[-1,5 \]
0.112939429 94.97662 Positive \[-1,10 \]
-0.024699949 19.20136 None \[-1,0 \]
-0.024230469 24.77702 None \[-1,1 \]
Bank of America -0.031326586 24.99882 None \[-1,3 \]
-0.092847266 3.46072 Negative \[-1,5 \]
-0.128977999 2.344 Negative \[-1,10 \]
0.000824461 51.7293 None \[-1,0 \]
0.00794087 65.27714 None \[-1,1 \]
Vodafone 0.004882324 57.58806 None \[-1,3 \]
-0.012009277 35.3259 None \[-1,5 \]
-0.011377693 39.66936 None \[-1,10 \]
-0.027029116 11.10264 None \[-1,0 \]
-0.023728852 18.3817 None \[-1,1 \]
Apple -0.005504079 42.55964 None \[-1,3 \]
-0.005196594 44.34158 None \[-1,5 \]
0.001828533 51.35376 None \[-1,10 \]
0.00547585 73.01768 None \[-1,0 \]
0.014332099 89.90238 None \[-1,1 \]
AT&T 0.024144556 94.53076 Positive \[-1,3 \]
0.014879076 80.10654 None \[-1,5 \]
0.027903421 88.6526 None \[-1,10 \]
0.002251211 57.74708 None \[-1,0 \]
0.002846928 58.18658 None \[-1,1 \]
Wells Fargo 0.006975269 64.68026 None \[-1,3 \]
0.010258418 67.66082 None \[-1,5 \]
0.006787068 60.10972 None \[-1,10 \]
-0.007305775 30.4058 None \[-1,0 \]
0.013550503 77.18096 None \[-1,1 \]
JP Morgan Chase 0.031340549 90.29374 Positive \[-1,3 \]
0.053073337 96.23408 Positive \[-1,5 \]
0.09046202 98.84216 Positive \[-1,10 \]
-0.001108209 43.57652 None \[-1,0 \]
-0.009229913 14.02556 None \[-1,1 \]
TD Canada Trust -0.005118312 32.22566 None \[-1,3 \]
-0.013999826 13.95322 None \[-1,5 \]
0.023281975 91.56936 Positive \[-1,10 \]
-0.00041386 51.144 None \[-1,0 \]
-0.003047091 43.91128 None \[-1,1 \]
American Express 0.006055721 63.35356 None \[-1,3 \]
0.025735512 86.15674 None \[-1,5 \]
0.050253429 94.5455 Positive \[-1,10 \]
: Results of analysis.[]{data-label="Full Results"}
**Company** **Impact** **Event Period**
--------------------------------- -------------- ---------- ------------ ------------------
-0.053961291 2.95846 Negative \[-1,0 \]
-0.06072401 4.1384 Negative \[-1,1 \]
International Netherlands Group -0.020229264 34.1261 None \[-1,3 \]
-0.029848597 30.17358 None \[-1,5 \]
-0.081706673 12.26994 None \[-1,10 \]
0.016841037 74.30582 None \[-1,0 \]
-0.002526331 46.45178 None \[-1,1 \]
LinkedIn -0.017375856 34.262 None \[-1,3 \]
-0.026284125 30.81402 None \[-1,5 \]
-0.045621561 26.63628 None \[-1,10 \]
-0.017162556 18.47456 None \[-1,0 \]
-0.024200351 16.51522 None \[-1,1 \]
Microsoft -0.034090835 15.74556 None \[-1,3 \]
-0.015471464 36.68258 None \[-1,5 \]
0.034940648 76.79054 None \[-1,10 \]
0.007847170 62.39716 None \[-1,0 \]
-0.007792594 44.4221 None \[-1,1 \]
Royal Bank of Scotland -0.014817017 40.55122 None \[-1,3 \]
0.048606528 79.92352 None \[-1,5 \]
0.026742735 65.6532 None \[-1,10 \]
0.004977842 64.09092 None \[-1,0 \]
0.014513902 80.37984 None \[-1,1 \]
JP Morgan Chase 0.000234979 49.82888 None \[-1,3 \]
-0.014250438 29.04242 None \[-1,5 \]
-0.031128703 18.06696 None \[-1,10 \]
-0.000184061 46.84568 None \[-1,0 \]
0.016315624 80.3436 None \[-1,1 \]
Bank of America 0.017620533 76.0839 None \[-1,3 \]
0.004226565 55.21462 None \[-1,5 \]
0.025992899 75.85016 None \[-1,10 \]
-0.007029030 29.70018 None \[-1,0 \]
-0.009490565 28.69322 None \[-1,1 \]
Facebook 0.024868377 73.9342 None \[-1,3 \]
0.047184864 86.43622 None \[-1,5 \]
0.092061897 95.16446 Positive \[-1,10 \]
0.001928050 54.8177 None \[-1,0 \]
0.001096061 52.14442 None \[-1,1 \]
Activision Blizzard -0.016714484 24.34886 None \[-1,3 \]
-0.006333017 41.35782 None \[-1,5 \]
-0.062767985 4.28474 Negative \[-1,10 \]
-0.000274242 47.75408 None \[-1,0 \]
-0.016656729 24.76632 None \[-1,1 \]
Danske Bank -0.014954993 31.98618 None \[-1,3 \]
0.008732713 58.03892 None \[-1,5 \]
-0.007350568 44.4036 None \[-1,10 \]
-0.004439445 35.08586 None \[-1,0 \]
-0.018229923 15.0597 None \[-1,1 \]
Storebrand -0.063078035 0.4896 Negative \[-1,3 \]
-0.061395155 1.39984 Negative \[-1,5 \]
-0.049772166 7.76122 Negative \[-1,10 \]
0.000963002 49.70118 None \[-1,0 \]
0.003381149 54.0087 None \[-1,1 \]
Gjensidige Forsikr -0.010505422 37.04708 None \[-1,3 \]
-0.040286066 15.8641 None \[-1,5 \]
-0.028966577 29.2466 None \[-1,10 \]
0.002407147 60.15424 None \[-1,0 \]
0.004563586 62.29666 None \[-1,1 \]
Sony Corporation 0.001822970 58.2152 None \[-1,3 \]
-0.021498560 37.79418 None \[-1,5 \]
0.014746326 63.87318 None \[-1,10 \]
: TABLE \[Full Results\] continued.
**Company** **Impact** **Event Period**
--------------------- -------------- ---------- ------------ ------------------
0.021753872 88.64916 None \[-1,0 \]
0.016990920 78.1646 None \[-1,1 \]
Amazon -0.036447006 11.17068 None \[-1,3 \]
-0.043888322 11.2798 None \[-1,5 \]
-0.038297272 21.30828 None \[-1,10 \]
-0.000709556 48.81262 None \[-1,0 \]
-0.007735669 39.30916 None \[-1,1 \]
Activision Blizzard 0.004492289 56.1743 None \[-1,3 \]
-0.002983879 48.68772 None \[-1,5 \]
0.087616411 92.7746 Positive \[-1,10 \]
-0.003403517 44.67244 None \[-1,0 \]
-0.010883831 36.797 None \[-1,1 \]
Sony Corporation -0.000200726 50.39402 None \[-1,3 \]
0.026361475 68.49948 None \[-1,5 \]
0.026080234 64.358 None \[-1,10 \]
0.014832688 88.2169 None \[-1,0 \]
0.025240246 94.41064 Positive \[-1,1 \]
Rackspace 0.043310538 97.7079 Positive \[-1,3 \]
0.040679243 94.7607 Positive \[-1,5 \]
0.040587064 89.53928 None \[-1,10 \]
-0.018099374 21.19232 None \[-1,0 \]
-0.012956852 32.89316 None \[-1,1 \]
Microsoft 0.019714934 71.0867 None \[-1,3 \]
0.028748607 74.6767 None \[-1,5 \]
-0.021794832 37.10918 None \[-1,10 \]
0.017723392 92.3236 Positive \[-1,0 \]
0.026038429 94.92672 Positive \[-1,1 \]
Sony Corporation 0.029414053 92.04924 Positive \[-1,3 \]
0.04572172 96.4814 Positive \[-1,5 \]
0.037697846 87.84194 None \[-1,10 \]
0.003252883 55.78888 None \[-1,0 \]
0.006038200 58.02368 None \[-1,1 \]
Alibaba Group 0.027994135 73.14122 None \[-1,3 \]
0.028993768 71.19542 None \[-1,5 \]
0.064338831 82.87406 None \[-1,10 \]
-0.010079453 26.95334 None \[-1,0 \]
0.002073589 55.11578 None \[-1,1 \]
Nordea Bank -0.030816091 12.2077 None \[-1,3 \]
-0.061577132 2.58652 Negative \[-1,5 \]
-0.174724652 0.0002 Negative \[-1,10 \]
0.012003977 73.88932 None \[-1,0 \]
-0.007382446 39.45172 None \[-1,1 \]
Facebook 0.034738393 85.06418 None \[-1,3 \]
0.030894154 78.7436 None \[-1,5 \]
0.028141425 71.76988 None \[-1,10 \]
-0.001261847 48.43654 None \[-1,0 \]
-0.007662918 39.32008 None \[-1,1 \]
Amazon -0.014995234 34.24676 None \[-1,3 \]
-0.003671321 48.09398 None \[-1,5 \]
0.002219653 53.17424 None \[-1,10 \]
-0.026102614 10.94388 None \[-1,0 \]
-0.044632962 5.67572 Negative \[-1,1 \]
EA Sports -0.052965749 7.55248 Negative \[-1,3 \]
-0.020028623 30.5396 None \[-1,5 \]
-0.034627666 26.39366 None \[-1,10 \]
0.009412519 73.84972 None \[-1,0 \]
0.037493964 94.77486 Positive \[-1,1 \]
Liberty Global 0.099809937 99.5675 Positive \[-1,3 \]
0.101846496 99.21738 Positive \[-1,5 \]
0.106292715 98.23724 Positive \[-1,10 \]
0.000637514 52.07072 None \[-1,0 \]
-0.018705128 30.38584 None \[-1,1 \]
Overstock.com -0.023468622 31.49814 None \[-1,3 \]
-0.003278425 49.49494 None \[-1,5 \]
0.010503096 58.08244 None \[-1,10 \]
: TABLE \[Full Results\] continued.
[^1]: Note that a 10% increase, followed by a 10% decrease imply a total decrease of 1% according to the multiplicative formula $(1.1)(0.9)=0.99$. The additive approximation would yield a 0% change, an overestimation of 1%.
[^2]: Note that $\hat{\alpha}$ is not $e^{\widehat{\ln(\alpha)}}$ as $\operatorname*{\mathbb{E}}{[\alpha]}\neq\operatorname*{\mathbb{E}}{[\ln{\alpha}]}$.
---
author:
- |
[^1]\
Baruch College, The City University of New York, 17 Lexington Avenue, New York, NY 10010, U.S.A. ,\
\
The Graduate School and University Center, The City University of New York, 365 Fifth Avenue, New York, NY 10016, U.S.A. and\
\
Scuola Internazionale Superiore di Studi Avanzati, via Bonomea 265, 34136 Trieste, Italy.\
E-mail:
title: 'The Hadronic Spectrum and Confined Phase in (1+1)-Dimensional Massive Yang-Mills Theory'
---
Introduction
============
In this talk we discuss the spectrum of Massive Yang-Mills theory in 1+1 dimensions. This model is renormalizable, in contrast to the same model in higher dimensions [@renormalizable]. Renormalizability is proven by noticing that the Massive Yang-Mills action is equivalent to a gauged principal chiral sigma model (PCSM), which is asymptotically free.
The action of the PCSM is $$\begin{aligned}
S=\frac{N}{2g^2}\int d^2x\,{\rm Tr}\partial_\mu U^\dag(x)\partial^\mu U(x),\label{pcsm}\end{aligned}$$ with $U\in SU(N)$. We review key aspects of the PCSM, including its integrability, in the next section. The action (\[pcsm\]) has an $SU(N)\times SU(N)$ global symmetry given by $U(x)\to V^L U(x) V^R$, with $V^{L,R}\in SU(N)$. We call the Noether currents associated with these symmetries $J^L(x)$ and $J^{R}(x)$.
We promote one of the $SU(N)$ global symmetries (the left-handed $V^L$) of (\[pcsm\]) to a local gauge symmetry. The action of the gauged sigma model is $$\begin{aligned}
S=\int d^2x -\frac{1}{4}{\rm Tr} F_{\mu\nu} F^{\mu\nu} +\frac{1}{2g_0^2} {\rm Tr}D_\mu U^{\dag} D^\mu U,\label{gaugedpcsm}\end{aligned}$$ with $D_\mu=\partial_\mu+i e A_\mu^{L}$. It can be seen that the action (\[gaugedpcsm\]) is that of massive Yang-Mills theory by looking at the unitary gauge $U(x)=1$, where $$\begin{aligned}
S=\int d^2x -\frac{1}{4}{\rm Tr}F_{\mu\nu}F^{\mu\nu} -\frac{e^2}{2g_0^2}{\rm Tr} A_\mu A^\mu.\label{unitarygauge}\end{aligned}$$
Looking at the action (\[unitarygauge\]), one would naively guess that the particles of this theory are gluons with mass $e/g_0$. However, the asymptotic freedom of the PCSM forces the bare coupling $g_0$ to vanish. The gluon mass then diverges, and one cannot observe this gluon at low energies.
The spectrum of massive Yang-Mills theory in two dimensions therefore does not consist of massive gluons. The question we address in this talk is: what, then, are the particles of this model?
The main point we present is that two-dimensional massive Yang-Mills theory is not in a Higgs-like phase, but in a confined phase. The physical particles of this model are not massive gluons, but hadron-like bound states of sigma-model particles. This differs from 2+1 and 3+1 dimensions, where both phases are present [@phases].
Review of the Principal Chiral Sigma Model
==========================================
In this section, we recall previous exact results that have been obtained using the integrability of the PCSM. Integrability in quantum field theories implies that all scattering events are elastic and factorizable. There is no particle creation, the set of particle momenta is conserved, and any scattering can be written as the product of two-particle S-matrices. These properties have been used extensively to calculate exact S-matrices [@zamolodchikov]. In particular, the S-matrix of two PCSM particles is known [@wiegmann]: $$\begin{aligned}
&&\left._{\rm out}\langle P,\theta_1',c_1,d_1;P,\theta_2',c_2,d_2|P,\theta_1,a_1,b_1;P,\theta_2,a_2,b_2\rangle\right._{\rm in}=S(\theta,N)^{c_1, d_1; c_2, d_2}_{a_2, b_2; a_1, b_1}\langle \theta_1^\prime\vert \theta_1\rangle \langle \theta_2^\prime\vert \theta_2\rangle \nonumber\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=S(\theta,N)\left(\delta_{a_1}^{c_1}\delta_{a_2}^{c_2}-\frac{2\pi i}{N\theta}\delta_{a_1}^{c_2}\delta_{a_2}^{c_1}\right)\times\left(\delta_{b_1}^{d_1}\delta_{b_2}^{d_2}-\frac{2\pi i}{N\theta}\delta_{b_1}^{d_2}\delta_{b_2}^{d_1}\right)\langle \theta_1'|\theta_1\rangle\langle \theta_2'|\theta_2\rangle,\label{smatrix}\end{aligned}$$ where $P$ labels a particle, $A$ labels an antiparticle and $\theta_i$ is the rapidity of the $i$-th particle, defined by the parametrization of energy and momentum: $E_i=m\cosh\theta_i,\,\,p_i=m\sinh\theta_i$, and $\theta=\theta_1-\theta_2$. The $i$-th particle has a left-color index $a_i$ and a right-color index $b_i$, with $$\begin{aligned}
S(\theta)=\frac{\sinh(\frac{\theta}{2}-\frac{\pi i}{N})}{\sinh(\frac{\theta}{2}+\frac{\pi i}{N})}\left[\frac{\Gamma(i\theta/2\pi +1)\Gamma(-i\theta/2\pi-\frac{1}{N})}{\Gamma(i\theta/2\pi+1-\frac{1}{N})\Gamma(-i\theta/2\pi)}\right]^2. \nonumber\end{aligned}$$ The particle-antiparticle S-matrix can be found by crossing symmetry, $\theta\to\pi i-\theta$.
Knowledge of the exact S-matrix is the starting point of the integrable bootstrap program. One can use the S-matrix to calculate exact form factors (matrix elements of local operators). Form factors of the PCSM in the ’t Hooft large-$N$ limit for different operators have been found in References [@orland], [@multiparticle], [@correlation]. At finite $N$, only the first nontrivial form factors have been found (involving only two-particle states). Of particular interest to us is the form factor of the Noether current operator [@multiparticle] $$\begin{aligned}
\langle 0 \vert \!\!\!&&\!\!\!j_\mu^L(0)_{a_0c_0}\vert A,\theta_1,b_1,a_1;P,\theta_2,a_2,b_2\rangle=(p_1-p_2)_\mu\left( \delta_{a_0a_2}\delta_{c_0a_1}-\frac{1}{N}\delta_{a_0c_0}\delta_{a_1a_2}\delta_{b_1b_2}\right)
\nonumber\\
&&\times\frac{2\pi i}{(\theta+\pi i)}\exp \int_0^\infty \frac{dx}{x}\left[\frac{-2\sinh\left(\frac{2x}{N}\right)}{\sinh x}+\frac{4e^{-x}\left(e^{2x/N}-1\right)}{1-e^{-2x}}\right]\frac{\sin^2[x(\pi i-\theta)/2\pi]}{\sinh x}.\label{formfactor}\end{aligned}$$ for $N>2$. The form factors in the case $N=2$ are equivalent to the form factors of the isovector-valued $O(4)$ nonlinear sigma model, by virtue of $SU(2)\times SU(2)\simeq O(4)$. The two-particle form factors of the $O(N)$ sigma model have been found in Ref. [@karowskiweisz].
The Bound-state Spectrum in Massive Yang-Mills
==============================================
In this section, we will omit writing color indices, for simplicity and clarity. The Hamiltonian of massive Yang-Mills theory in the completely-fixed axial gauge, $A_0=0,\,A_1(t=0)=0$, is derived in detail in Ref. [@dynamicalmass], and reads $$\begin{aligned}
H=H_{\rm PCSM} -\frac{e^2}{2g_0^4}\int dx^1 \int dy^1\vert x^1-y^1\vert j_0^L(x^1)j_0^L(y^1).\label{axialhamiltonian}\end{aligned}$$ The Hamiltonian (\[axialhamiltonian\]) appears nonlocal, but this is a natural consequence of the axial gauge. It can be made local again by re-introducing the temporal component of the field $A_0$.
The Hamiltonian (\[axialhamiltonian\]) describes sigma-model particles confined by a linear potential with string tension $\sigma=e^2C_N$, where $C_N$ is the smallest eigenvalue of the Casimir operator of $SU(N)$. This can be seen by interpreting the temporal component of the current $J_\mu^L$ as the left-color charge density. The physical eigenstates of (\[axialhamiltonian\]) are meson-like particle-antiparticle bound states of mass $M=2m+E$, or baryon-like $N$-particle bound states. One can in principle find the bound-state spectrum by calculating the wave function and eigenvalues of (\[axialhamiltonian\]). We have found this wave function for the meson-like bound states in the nonrelativistic limit, in Reference [@dynamicalmass]. A relativistic approach for treating confinement in integrable field theories with nonintegrable deformations has been introduced in [@ffpt], which goes beyond the level of our analysis of this model.
We now present our results for the nonrelativistic limit, where $m\gg e$. The meson-like state consists of a particle at position $x^1$ and an antiparticle at $y^1$. In terms of the relative coordinate $x=x^1-y^1$, the meson wave function, $\Psi(x)$, satisfies the nonrelativistic Schrödinger equation $$\begin{aligned}
-\frac{1}{m}\frac{d^2}{dx^2}\Psi(x)+\sigma \left\vert x \right\vert \,\Psi(x)=E\Psi(x).\label{schroedinger}\end{aligned}$$ The solution to Eq. (\[schroedinger\]) is $$\begin{aligned}
\Psi(x)=
C Ai\left[(m\sigma)^{\frac{1}{3}}\left(\vert x\vert-\frac{E}{\sigma}\right)\right],\label{airy}\end{aligned}$$ where $Ai(x)$ is an Airy function of the first kind, and $C$ is a normalization constant.
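As a quick numerical consistency check, the Airy profile (\[airy\]) can be verified against Eq. (\[schroedinger\]) away from $x=0$ using the identity $\mathrm{Ai}''(z)=z\,\mathrm{Ai}(z)$. The sketch below does this in Python; the values of $m$, $\sigma$ and $E$ are arbitrary, since for $x\neq0$ the equation is satisfied for any $E$ (quantization only enters through the matching condition at $x=0$ discussed next):

```python
import numpy as np
from scipy.special import airy

# Check that Psi(x) = Ai[(m*sigma)^(1/3) (|x| - E/sigma)] solves
#   -(1/m) Psi'' + sigma |x| Psi = E Psi     for x > 0,
# using Ai''(z) = z Ai(z).  m, sigma, E are arbitrary illustrative values.
m, sigma, E = 1.0, 1.0, 2.3

x = np.linspace(0.1, 5.0, 200)                 # x > 0 branch
z = (m * sigma) ** (1 / 3) * (x - E / sigma)
Ai = airy(z)[0]

psi = Ai
psi_pp = (m * sigma) ** (2 / 3) * z * Ai       # Psi'' via Ai''(z) = z Ai(z)
residual = -psi_pp / m + sigma * x * psi - E * psi
print(np.abs(residual).max())                  # ~ 0 up to round-off
```

The residual vanishes identically in exact arithmetic; numerically it sits at the level of floating-point round-off.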
A quantization condition for the binding energy $E$ arises from the requirement that the wave function (\[airy\]) reduces to that of a free particle-antiparticle pair as $x\to0$ (since the linear potential vanishes there). The wave function of a free particle-antiparticle pair is $$\begin{aligned}
\Psi(x^1,y^1)=\left\{\begin{array}{c}
e^{ip_1x^1+ip_2y^1},\,\,\,\,\,\,\,\,\,\,\,\,\,{\rm for}\,\,x^1<y^1\\
\,\\
e^{ip_2x^1+ip_1y^1}S(\theta),\,\,\,\,\,\,{\rm for}\,\,x^1>y^1\end{array}\right.,\label{freewave}\end{aligned}$$ where $S(\theta)$ is the particle-antiparticle S-matrix.
By requiring that (\[airy\]) and (\[freewave\]) coincide as $x\to0$, we find the quantized energy spectrum $$\begin{aligned}
E_n=\left\{\left[\epsilon_n+\left(\epsilon_n^2+\beta_N^3\right)^{\frac{1}{2}}\right]^{\frac{1}{3}}+\left[\epsilon_n-\left(\epsilon_n^2+\beta_N^3\right)^{\frac{1}{2}}\right]^{\frac{1}{3}}\right\}^{\frac{1}{2}},\nonumber \end{aligned}$$ where $$\begin{aligned}
\epsilon_n=\frac{3\pi}{4}\left(\frac{\sigma}{m}\right)^{\frac{1}{2}}\left(n+\frac{1}{2}\pm \frac{1}{4}\right),\,\,\,\,\,\,\,\,\,\,\,\,\,\nonumber\end{aligned}$$ $$\begin{aligned}
\beta_N=\frac{\sigma^{\frac{1}{2}}}{2\pi m}\int_0^\infty \frac{d\xi}{\sinh \xi}\left[2(e^{2\xi/N}-1)-\sinh(2\xi/N)\right], \nonumber\end{aligned}$$ The meson states have masses $M_n=2m+E_n$.
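The spectrum above can be tabulated directly: $E_n=\sqrt{t_n}$, where $t_n$ is the real Cardano root of the cubic $t^3+3\beta_N t-2\epsilon_n=0$ (this follows from $(\epsilon_n+s)(\epsilon_n-s)=-\beta_N^3$ with $s=(\epsilon_n^2+\beta_N^3)^{1/2}$). A minimal sketch; the values of $m$, $\sigma$ and $N$ are arbitrary illustrative choices in the nonrelativistic regime $m\gg e$, not taken from any fit:

```python
import numpy as np
from scipy.integrate import quad

# Tabulate the meson levels E_n = sqrt(t_n), with t_n the real Cardano root
# of t^3 + 3*beta*t - 2*eps_n = 0.  Illustrative parameters only.
m, sigma, N = 10.0, 0.5, 3

# beta_N: the integrand is finite as xi -> 0 and decays like exp(-xi/3)
# for N = 3, so an upper cutoff of 100 is adequate.
integrand = lambda xi: (2.0 * (np.exp(2.0 * xi / N) - 1.0)
                        - np.sinh(2.0 * xi / N)) / np.sinh(xi)
beta = np.sqrt(sigma) / (2.0 * np.pi * m) * quad(integrand, 1e-12, 100.0)[0]

n = np.arange(5)
eps = 0.75 * np.pi * np.sqrt(sigma / m) * (n + 0.5 + 0.25)  # "+" branch
s = np.sqrt(eps**2 + beta**3)
t = np.cbrt(eps + s) + np.cbrt(eps - s)  # np.cbrt handles eps - s < 0
E = np.sqrt(t)
M_meson = 2.0 * m + E                    # meson masses M_n = 2m + E_n
print(M_meson)
```

The levels come out real, positive and monotonically increasing in $n$, as expected for a confining potential.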
The baryon mass spectrum can in principle be found by the same method. The only difficulty is that one has to solve an $N$-body Schrödinger equation with potential $$\begin{aligned}
V(x_1,\dots, x_N)=\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \sigma \vert x_i^1-x_j^1\vert.\nonumber\end{aligned}$$ The $N$-body wave function is much harder to find exactly, and numerical methods are necessary. Recently, there have been some efforts in this direction for the related problem of the baryon spectrum in the three-state Potts model in an external magnetic field [@potts].
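For numerical work, evaluating the pairwise potential itself is trivial; the hard part is the $N$-body eigenvalue problem. A minimal helper, for illustration only:

```python
from itertools import combinations

def baryon_potential(xs, sigma):
    """Pairwise linear confining potential V = sigma * sum_{i<j} |x_i - x_j|."""
    return sigma * sum(abs(xi - xj) for xi, xj in combinations(xs, 2))

# Three "quarks" at positions 0, 1, 2 with unit string tension:
# the pairs (0,1), (0,2), (1,2) contribute 1 + 2 + 1 = 4.
print(baryon_potential([0.0, 1.0, 2.0], 1.0))  # 4.0
```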
Correlation Functions
=====================
With knowledge of the bound-state wave function (\[airy\]) and the two-particle form factor (\[formfactor\]), we can find approximate correlation functions that are accurate at large distances. The first step is to find the form factor of an operator $\mathcal{A}(x)$ with a bound state of mass $M_n$: $$\begin{aligned}
\langle 0\vert\mathcal{A}(x)\vert B, \phi, n\rangle,\label{mesonformfactor}\end{aligned}$$ where $B$ denotes that the excitation is a meson bound state, $\phi$ is the meson’s rapidity, and $n$ is the meson’s energy level.
In the nonrelativistic limit, we can evaluate (\[mesonformfactor\]) using the “two-quark” approximation originally presented in Ref. [@zamolodchikovfonseca], $$\begin{aligned}
\vert B, \phi, n\rangle\approx e^{ix^1M_n \sinh\phi}\frac{1}{\sqrt{m}}\int_{-\infty}^\infty\frac{d\theta}{4\pi} \Psi_n(\theta)\,\vert A,\theta,a_1,b_1;P,-\theta,a_1,b_1\rangle,\label{twoquark}\end{aligned}$$ where $\Psi_n(x)$ is given by substituting $E_n$ in Eq. (\[airy\]), and $\Psi_n(\theta)$ is its Fourier transform. There are relativistic corrections to (\[twoquark\]) from states with a higher number of particles which we ignore. The one-meson form factor of an operator $\mathcal{A}$ is written in terms of the two-sigma-model-particle form factor: $$\begin{aligned}
\langle 0\vert \mathcal{A}(x)\vert B,\phi,n\rangle&=&e^{s\phi}e^{ix^1M_n \sinh\phi}\int dz\int\frac{d\theta}{4\pi} e^{izm\sinh\theta}\frac{1}{\sqrt{m}}\left(\frac{E_n}{\sigma^H}\right)^{\frac{1}{4}} {\rm Ai}\left[(m\sigma^H)^{\frac{1}{3}}\left(\vert z\vert -\frac{E_n}{\sigma^H}\right)\right]\nonumber\\
&&\times\langle 0\vert \mathcal{A}(x)\vert A,\theta,a_1,b_1;P,-\theta,a_1,b_1\rangle\nonumber\end{aligned}$$ We find a two-point correlation function, $D^{\mathcal{A}}(x)=\langle0\vert \mathcal{A}(x)\mathcal{A}(0)\vert0\rangle$, by summing over intermediate particle states: $$\begin{aligned}
&&\mathcal{D}^{\mathcal{A}}(x)=\langle 0\vert \mathcal{A}(x)\vert 0\rangle \langle 0\vert \mathcal{A}(0)\vert 0\rangle\nonumber\\
&&\,\,+\sum_{n=1}^{n_s}\int\frac{d\phi}{4\pi}e^{-ix^0M_n\cosh\phi+ix^1M_n\sinh \phi}\left|\int dz\int\frac{d\theta}{4\pi} e^{izm\sinh\theta}\frac{1}{\sqrt{m}}\left(\frac{E_n}{\sigma^H}\right)^{\frac{1}{4}} {\rm Ai}\left[(m\sigma^H)^{\frac{1}{3}}\left(\vert z\vert -\frac{E_n}{\sigma^H}\right)\right]\right.,\nonumber\\
&&\times\left.\langle 0\vert \mathcal{A}(x)\vert A,\theta,a_1,b_1;P,-\theta,a_1,b_1\rangle\right|^2+\dots,\label{correlationfunction}\end{aligned}$$ where we have omitted contributions from higher particle states, which is a good approximation at large distances, $x$. If the operator $\mathcal{A}$ in (\[correlationfunction\]) is the Noether current $J^L$, we use the two-particle form factor from Eq. (\[formfactor\]). A similar calculation of correlation functions using intermediate meson states was done for the Ising model in a magnetic field [@tsvelik], and for anisotropic (2+1)-dimensional Yang-Mills theory [@twoplusoneform].
Comments on Numerical Results and Finite Size Effects
=====================================================
The model we present in this talk has been recently studied numerically by Gongyo and Zwanziger [@gongyozwanziger]. The quantities studied in their paper were the expectation value of the Wilson loop, the massive gluon propagator and the order parameter (proportional to the integral over space of the PCSM field, $U$). These objects were evaluated using different lattice spacings, space-time volumes, and values of the coupling constant. From this analysis it was found that at weak Yang-Mills coupling (equivalent to our nonrelativistic limit) the system is in a confined phase, which completely agrees with our results. Their results suggest the existence of a Higgs-like phase as the coupling is increased, which we do not observe. This Higgs-like phase, however, seems to disappear as the volume is increased. We believe this phase to be a finite-volume effect that does not exist at infinite volume.
We can examine the action (\[gaugedpcsm\]) in the axial gauge, $A_1=0$ (note that this is different from the completely-fixed axial gauge we previously discussed), and find $$\begin{aligned}
S=\int d^{2}x \left[\frac{1}{2}{\rm Tr}\,(\partial_{1}A_{0})^{2}+
\frac{1}{2g_{0}^{2}}{\rm Tr}\,(\partial_{0}U^{\dagger}+{\rm i}eU^{\dagger}A_{0})(\partial_{0}U-{\rm i}eA_{0}U)-\frac{1}{2g_{0}^{2}}{\rm Tr}\,\partial_{1}U^{\dagger}\partial_{1}U
\right].\label{axialgaugeaction}
\end{aligned}$$ We now integrate out the gauge field $A_0$ and find $$\begin{aligned}
S=\int d^{2}x \left(\frac{1}{2g_0^2} {\rm Tr}\,\partial_\mu U^{\dag} \partial^\mu U +\frac{1}{2} \,{j_{0}^{L}}\,\frac{1}{-\partial_{1}^{2}+e^{2}/g_{0}^{2}U^\dag U}
\, {j_{0}^{L}}\right).\label{integrateazero}\end{aligned}$$ Naively, one would think that since the PCSM field is unitary, $U^\dag U=1$, the charges are screened, yielding a Higgs-like phase instead of a confined phase. This reasoning is wrong, because the physical renormalized field is not unitary. The physical field is $\Phi(x)\sim Z(g_0,\Lambda)^{-1/2}U(x)$, where $Z(g_0,\Lambda)$ is a renormalization constant that diverges as $\Lambda\to\infty$ [@orland],[@asymptotic]. The quantity $\Phi^\dag(x)\Phi(x)$ diverges in the continuum limit and at infinite volume. In this sense, the completely-fixed axial gauge is a much more practical way to see the actual spectrum of the theory.
There is still a possibility that a Higgs phase could arise at finite volume. We would need to examine the quantity $\lim_{x\to0}\langle 0\vert \Phi^\dag(x)\Phi(0)\vert 0\rangle$ at finite volume. The calculation of finite-volume form factors and correlation functions has been discussed in Refs. [@finitevolume]. Finite-volume effects could regularize this correlation function and lead to a screened potential between sigma-model particles. We would like to point out that the spectrum of the PCSM at finite volume has recently been calculated in Ref. [@kazakov] by studying its Hirota dynamics.
[**Acknowledgements:**]{} I would like to thank P. Orland for his collaboration and many discussions throughout this project, and S. Gongyo for explaining some of their work to me during this conference.
[99]{}
W.A. Bardeen and K. Shizuya, Phys. Rev. [**D18**]{} (1978) 1969.
E. Fradkin and S. Shenker, Phys. Rev. [**D19**]{} (1979) 3682.
A.B. Zamolodchikov and Al. B. Zamolodchikov, Nucl. Phys. [**B 133**]{} (1978) 525.
A.M. Polyakov and P.B. Wiegmann, Phys. Lett. [**131 B**]{} (1983) 121; E. Abadalla, M.C.B. Abadalla and M. Lima-Santos, Phys. Lett. [**140 B**]{} (1984) 71; P.B. Wiegmann, Phys. Lett. [**141 B**]{} (1984) 217; Phys. Lett. [**142 B**]{} (1984) 173.
P. Orland, Phys. Rev. [**D 84**]{} (2011) 105005; Phys. Rev. [**D 86**]{} (2012) 045023.
A. Cortés Cubero, Phys. Rev. [**D 86**]{} (2012) 025025.
A. Cortés Cubero and P. Orland, Phys. Rev. [**D 88**]{} (2013) 025044.
M. Karowski and P. Weisz, Nucl. Phys. [**B139**]{} (1978) 455.
A. Cortés Cubero and P. Orland, Phys. Rev. [**D 89**]{} (2014) 085027.
G. Delfino, G. Mussardo and P. Simonetti, Nucl. Phys. [**B 473**]{} (1996) 469.
S.B. Rutkevich, arXiv:1408.1818 (2014).
P. Fonseca and A.B. Zamolodchikov, J. Stat. Phys. [**110**]{} (2003) 527.
M.J. Bhaseen and A.M. Tsvelik, in From Fields to Strings: Circumnavigating Theoretical Physics, Ian Kogan memorial volumes, Vol. 1 (2004), p. 661, arXiv:cond-mat/0409602.
A. Cortés Cubero, Phys. Rev. [**D 90**]{} (2014) 065002.
S. Gongyo and D. Zwanziger, arXiv:1402.7124 (2014).
P. Orland, arXiv:1410.2627 (2014).
A. Leclair and G. Mussardo, Nucl. Phys. [**B552**]{} (1999) 624; G. Delfino, J. Phys. [**A34**]{} (2001).
V. Kazakov and S. Leurent, arXiv:1007.1770 (2010).
[^1]: This talk includes work done in collaboration with Peter Orland
---
author:
- 'Maciej Trzetrzelewski [^1] [^2]'
title: 'Relativistic Black-Scholes model'
---
Introduction
============
Among the many unrealistic assumptions made in the Black-Scholes model [@BS], one is particularly problematic - constant volatility $\sigma$. When current market data are tested against the Black-Scholes formula, one finds that $\sigma$ must in fact depend on the strike $K$ and the time to expiry $T$ in order to make the pricing formula work. Therefore the market data imply that $\sigma$ is not constant but a function $\sigma_I(K,T)$ - called the implied volatility. The curve $\sigma_I(K,T)$ at fixed $T$ is often U-shaped, so it has become standard practice to call it a volatility smile. However, the shape can also look more like a skew (a smirk) or a frown, depending on the data/market one considers.
Clearly, the fact that $\sigma_I(K,T)$ is not constant falsifies the Black-Scholes model. However, it is also well known that the situation was completely different before the market crash of late 1987. In the equity market before 1987 the implied volatility was indeed fairly constant - why is it not constant nowadays [@Derman]?
One could explain this problem by blaming everything on yet another unrealistic assumption of the Black-Scholes model - that the underlier $S_t$ undergoes geometric Brownian motion $$\label{gbm}
dS_t/S_t = \mu dt + \sigma dW_t, \ \ \ \ \mu\in \mathbb{R},\sigma >0$$ (where $W_t$ is a Wiener process). It follows from (\[gbm\]) that log-returns (i.e. returns of $\ln S_t$) have a Gaussian distribution. However, it is very well known [@Mandelbrot] that actual log-returns are not distributed like that - instead they exhibit fat tails (Figure 1a).
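The Gaussian character of the log-returns under (\[gbm\]) is easy to see by simulation: the exact solution of (\[gbm\]) gives i.i.d. normal log-return increments with mean $(\mu-\sigma^2/2)\,dt$ and standard deviation $\sigma\sqrt{dt}$, i.e. no fat tails. A minimal sketch (the parameter values are arbitrary):

```python
import numpy as np

# Simulate the log-return increments implied by eq. (1):
#   d ln S_t = (mu - sigma^2/2) dt + sigma dW_t   (exact, no Euler bias).
rng = np.random.default_rng(0)
mu, sigma, dt, n = 0.05, 0.2, 1.0 / 252, 100_000

dW = rng.normal(0.0, np.sqrt(dt), n)
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * dW

# Under (1) these are exactly Gaussian: mean (mu - sigma^2/2) dt,
# standard deviation sigma sqrt(dt), zero excess kurtosis.
print(log_returns.mean(), log_returns.std())
```

Any realistic daily return series has markedly positive excess kurtosis, which this model cannot produce.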
Therefore a rather natural way to generalize (\[gbm\]) is to replace $W_t$ with a process whose PDF exhibits fat tails corresponding to those observed in the markets. However, a careful inspection shows that this cannot be the main reason for the volatility smile observed today. The point is that even before 1987 the log-return distribution revealed fat tails (see Figure 1b; note that Mandelbrot’s paper [@Mandelbrot] was published in 1963), yet at the same time the Black-Scholes model was working well. This is clearly an issue: if fat tails are the reason for all these discrepancies, then why was the constant-volatility assumption correct before 1987?
For practical reasons the models that generalize $W_t$ are not very popular, and development in this subject has gone in a completely different direction. Instead of changing $W_t$, financial practitioners prefer to leave $W_t$ unchanged and assume that $\sigma$ is a function $\sigma=\sigma(S,t)$ - called local volatility [@localvol]. The smile is then explained by assuming that $\sigma$ increases for large $|\ln S_t|$ - if this is the case, the tails of the Gaussian distribution become fatter. There exists a way to extract the function $\sigma=\sigma(S,t)$ directly from the market data [@Dupire]. However, it turns out that this model also has its drawbacks: while the smile can be accommodated, its dynamics (the dynamics of the smile when the strike changes) is not captured correctly. This brings us to a further generalization, in which $\sigma$ itself is assumed to be a stochastic process [@Heston] $$\label{svol}
d\sigma_t = \alpha(\sigma_t,t)dt + \beta(\sigma_t,t) dW_t$$ (here $\alpha$ and $\beta$ are deterministic functions). This generalization is counterintuitive: the amplitude $\sigma$, which multiplies the random factor $dW_t$, is now itself stochastic - but shouldn’t $dW_t$ contain all the randomness? Moreover, stochastic volatility models also fail in certain situations, e.g. in the limit $T\to 0$, where $T$ is the time to maturity [@Gatheral]. This could be a motivation to generalize further and introduce jumps, i.e. discontinuous moves of the underlying $S_t$ [@jumps].[^3]
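In practice, dynamics of the type (\[svol\]) are simulated with an Euler-Maruyama scheme. The sketch below uses illustrative mean-reverting choices $\alpha(\sigma,t)=\kappa(\bar\theta-\sigma)$ and $\beta(\sigma,t)=\xi$; these particular functions and parameter values are assumptions for illustration, not taken from any cited model:

```python
import numpy as np

# Euler-Maruyama sketch of eq. (2) with illustrative choices
#   alpha(sigma, t) = kappa * (theta - sigma),   beta(sigma, t) = xi.
rng = np.random.default_rng(1)
kappa, theta, xi = 2.0, 0.2, 0.1      # assumed parameters, illustration only
dt, n_steps = 1.0 / 252, 252 * 10     # daily steps over ten years

sig = np.empty(n_steps + 1)
sig[0] = 0.3
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    sig[k + 1] = sig[k] + kappa * (theta - sig[k]) * dt + xi * dW
    sig[k + 1] = max(sig[k + 1], 0.0)  # crude floor: keep volatility non-negative

# The path mean-reverts toward theta = 0.2.
print(sig[-252:].mean())
```

Note how easily yet more randomness could be injected here (stochastic $\xi$, jumps, and so on) - which is exactly the falsifiability concern raised below.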
It is clear that this way of making the models more general is likely to have little explanatory power. These models may fit the market data very well, but in, say, 10 years from now they will most probably fail in some situations, and one will have to make other generalizations to fit the market data again. This suggests that stochastic volatility models are not falsifiable.
For example, if we agree that volatility $\sigma$ is a stochastic process satisfying (\[svol\]), then there is a priori no reason not to go further and assume that $\beta$ is also stochastic. This would make our model even better calibrated to the market data. The possibilities are, quite frankly, unlimited, and if it weren’t for the fact that Monte Carlo simulations are time consuming, they would certainly be investigated. Because one can always augment the model so that it is consistent with the data, it follows that the model cannot be falsified.
Nevertheless, most financial practitioners prefer stochastic volatility models because one can still use Ito calculus and obtain analytical, robust results (otherwise, when $dW_t$ is not a Wiener process, few exact results/methods are known [@nonWiener]). It may seem unusual, from the scientific point of view, that the robustness of a model is used as a criterion for its applicability. However, quantitative finance, unlike physics, is not about predicting future events but about pricing financial instruments today. Therefore, as long as our models are calibrated to the market, minimize arbitrage opportunities, and are stable against small fluctuations of the data, there is a priori no problem with the existence of a plethora of possible models in this subject.
In physics the situation is quite different. There, we care about predictions, and recalibration is not allowed. A theory that contains parameters and degrees of freedom in such abundance that it can explain any experimental data, simply by fitting them appropriately, cannot be falsified and hence is physically useless[^4]. For every theory it is absolutely crucial to have an example of an experiment whose outcome may, in principle, disagree with the results of the theory. This way of thinking is in fact opposite to the way one proceeds in finance.
Stochastic volatility models are clearly very successful, but just like fat-tail distributions they cannot explain why the Black-Scholes model worked well before 1987. In fact, if one assumes that volatility is stochastic, then clearly it must have been stochastic before 1987 as well - which seems not to be the case (one could still object that before 1987 the volatility was stochastic but with a tiny mean-reversion amplitude, so that the model could be approximated by one with constant volatility).
In this paper we approach these issues from a different perspective. It is well known that algorithmic trading became more and more popular in the 1980s, increasing the number of price changes per second[^5]. However, there exists a concrete underlying limitation on market movements: the change of any price $S(t)$ cannot be arbitrarily large per unit of time, i.e. there exists a maximal speed $c_M$ such that $\dot{S}(t) < c_M$ (the market speed of light, $[c_M]=s^{-1}$). An obvious proof of this assertion comes from the fact that the speed of information exchange is limited by the speed of light. This limitation seems not very restrictive, since light travels about $30$cm per nanosecond ($ns$). Assuming that the servers of two counterparties are, say, $30$cm from each other, it takes at least $1ns$ to send an order. Therefore we should not see any relativistic effects unless we are considering a situation with at least billions ($10^9$) of orders per second sent to a single server. At this point it is clear that future development of high-frequency trading may in principle influence the situation considerably.
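These order-of-magnitude estimates are easy to reproduce. The sketch below checks the light-travel-time figure and also evaluates the implied bound on $|\dot{x}|=|\dot{S}|/S$ for an underlier of order 100$\$$ and $c_M=10^9\,s^{-1}$ (all inputs are the illustrative numbers used in this paper, not market data):

```python
# Minimal signalling time between two servers 30 cm apart.
c_light = 3.0e8             # speed of light, m/s
distance = 0.30             # m
t_min = distance / c_light  # one-way travel time, seconds
print(t_min)                # 1e-09, i.e. one nanosecond per order

# Bound on the log-price velocity |d ln S / dt| < c_M / S.
c_M = 1.0e9                 # assumed market "speed of light", 1/s
S = 100.0                   # underlier price of order 100 dollars
x_dot_bound = c_M / S
print(x_dot_bound)          # 1e7 per second
```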
However there is one feature of every liquid market whose consequences are already visible, and hence we would like to discuss it in more detail. Any price $S(t)$ going (say) up from $S(t)$ to $S(t+\Delta t)>S(t)$ must overcome all the offers made in the interval $[S(t),S(t+\Delta t)]$. This introduces a natural concept of friction/resistance in the markets, simply because there is always somebody who thinks that the price is too high. This situation is similar to what happens in physical systems, e.g. to electrons in conductors. An electron can a priori move with arbitrary velocity (below $c$, the speed of light). However, due to constant collisions with the atoms of the conductor, the effective velocity is in fact much smaller; the drift velocity of electrons can be as small as e.g. $1m/h$. Perhaps a better physical example is light traveling in a dense medium, where the effective speed of light is $c/n$ with $n$ the refractive index (e.g. $n=1.3, \ \ 1.5, \ \ 2.4$ for water, glass and diamond respectively). In extreme situations, when light travels through a Bose-Einstein condensate, the effective speed of light can be as small as $1m/s$ [@condensate].
To see that this resistance effect is large in the markets let us consider the logarithm $x(t)=\ln S(t)$ and the corresponding bound on the derivative of $x(t)$ $$|\dot{x}(t)| = |\dot{S}|/S < c_M/S.$$ If we assume that the price of the underlying is of order 100$\$$ and that $c_M$ is at least $10^9 s^{-1}$ then we obtain $|\dot{x}|<10^7s^{-1}$. On a daily basis this implies that the difference $$\Delta x:=|x(\hbox{day}) - x(\hbox{previous day})|$$ can a priori be as big as $10^7 \cdot 3600 \cdot 24 = 8.64 \cdot 10^{11}$. However nothing of the kind is observed in the market. The value of $\Delta x$ for any asset has, to our knowledge, never been bigger than $1$. We have analyzed the top 100 companies (by market capitalization as of March 2012) of the SP500 index, ordered by decreasing maximal absolute value of their log-returns. The first 15 of them are listed below.
| Company | log-return | market move (close) | date |
|---------|------------|---------------------|------|
| WMT  | -0.735707 | 0.0192 $\to$ 0.0092 | Dec 1974 |
| AAPL | -0.730867 | 26.18 $\to$ 12.60 | Sep 2000 |
| INTC | 0.698627  | 0.0091 $\to$ 0.0183 | Jan 1972 |
| C    | -0.494691 | 24.53 $\to$ 14.96 | Feb 2009 |
| ORCL | -0.382345 | 0.61 $\to$ 0.42 | Mar 1990 |
| PG   | -0.360296 | 40.43 $\to$ 28.20 | Mar 2000 |
| MSFT | -0.356939 | 0.37 $\to$ 0.26 | Oct 1987 |
| BAC  | -0.34205  | 7.10 $\to$ 5.04 | Jan 2009 |
| QCOM | 0.327329  | 8.47 $\to$ 11.76 | Apr 1999 |
| JPM  | -0.3241   | 6.03 $\to$ 4.36 | Oct 1987 |
| MRK  | -0.311709 | 40.90 $\to$ 29.95 | Sep 2004 |
| AMZN | -0.296181 | 16.03 $\to$ 12.06 | Jul 2001 |
| KO   | -0.283731 | 2.37 $\to$ 1.78 | Oct 1987 |
| WFC  | 0.283415  | 19.37 $\to$ 25.72 | Jul 2008 |
| IBM  | -0.268241 | 32.35 $\to$ 24.74 | Oct 1987 |
In the table we also give the corresponding movement of the stock price (close) and the date, for reference. The biggest changes of the log-return are due to Walmart and Apple (note that the historical data we use are subject to adjustments for stock splits), whose shares dropped resulting in almost the same loss in terms of log-returns. In any case we see that the magnitude of log-returns may be of order $10^0$, not $10^{11}$.
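The scan behind such a table is straightforward to reproduce. Below is a minimal sketch in pure Python; the price series is hypothetical (a made-up crash day), and in practice one would substitute real split-adjusted closes from a data vendor:

```python
import math

# Hypothetical daily closing prices with one crash day in the middle;
# real (split-adjusted) closes would be substituted here.
closes = [100.0, 101.5, 99.0, 47.9, 48.5]

# daily log-returns x(day) - x(previous day)
log_returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]

worst = min(log_returns)                        # most negative daily move
largest_abs = max(abs(r) for r in log_returns)  # max |log-return|

print(f"worst log-return: {worst:.4f}")
print(f"max |log-return|: {largest_abs:.4f}")   # of order 10^0, not 10^11
```

Ranking many tickers by `largest_abs` reproduces orderings of the kind shown in the table above.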
This implies that there is a huge resistance in the market against the price moving up or down (notice that some of the log-returns in our table are positive, e.g. for Wells Fargo). Therefore one may conclude that the effective maximal velocity of $S(t)$ is much smaller than $c_M$. For completeness we performed the same analysis for other markets. Below we present the results for the Forex majors, some precious metals and major indices.
| Forex major | log-return | market move (close) | date |
|-------------|------------|---------------------|------|
| AUDUSD | -0.192451 | 1.2304 $\to$ 1.015 | Nov 1976 |
| USDCHF | 0.103529 | 1.9949 $\to$ 2.2125 | Dec 1982 |
| EURUSD | 0.0619917 | 0.6192 $\to$ 0.6588 | Feb 1973 |
| GBPUSD | 0.0459699 | 1.1819 $\to$ 1.2375 | Mar 1985 |
| USDCAD | -0.0388492 | 1.2688 $\to$ 1.2218 | Oct 2008 |
| USDJPY | -0.0950101 | 293.26 $\to$ 266.68 | Feb 1973 |
| *Commodity* | | | |
| XAGUSD | -0.222755 | 12.87 $\to$ 10.20 | Feb 1983 |
| XPTUSD | -0.221841 | 594.6 $\to$ 476.3 | Mar 1980 |
| XAUUSD | -0.203157 | 809.9 $\to$ 661 | Jan 1980 |
| *Index* | | | |
| DJI | -0.256325 | 2246.7 $\to$ 1738.7 | Oct 1987 |
| SPX | -0.228997 | 282.7 $\to$ 224.84 | Oct 1987 |
| NKX | 0.200503 | 135.89 $\to$ 166.06 | Aug 1951 |
| NDX | 0.17203 | 2128.78 $\to$ 2528.38 | Jan 2001 |
| DAX | -0.137061 | 1589.28 $\to$ 1385.72 | Oct 1989 |
| FTM | -0.119613 | 2528.55 $\to$ 2243.49 | Oct 1987 |
Again, all the log-returns are small. This confirms our claim that the maximal value of $|\dot{x}|$ is smaller than $1$ per day. We will use the notation $c_m$ for the upper bound of $|\dot{x}|$.
In the next section we present the basic idea investigated in this paper: the existence of a bound on log-returns implies that the corresponding PDF, $p(x,t)$, cannot be positive everywhere but must vanish for $|x| > x_{max}:=c_m t$. This generically introduces a skew/smirk of the volatility when comparing to the Gaussian distribution. Based on the market data analyzed above we claim that this effect can in fact be noticeable. The main question is then how to generalize the Black-Scholes model so that the finiteness of $c_m$ is taken into account. In this direction it seems natural to study the relativistic generalization of the diffusion equation. One could object that such a relativistic extension is a bit artificial. After all, using the analogy of an electron in a conductor, the electron is only slowed down to the drift velocity, and no relativistic effects occur at this speed. This argument is of course true in generic cases. However there are examples of conductors (graphene surfaces; for a review see e.g. [@graphene]) for which the effective description of electrons is given by the massless Dirac equation, i.e. the description is relativistic even though the electron’s speed is still not even close to the speed of light $c$ (it is about $1\%$ of $c$). This is due to the particular honeycomb lattice structure of graphene. It is therefore a physical example of a non-relativistic process whose effective description nevertheless requires relativistic equations, due to the specific structure of the environment. We see no reason why a phenomenon of this kind could not take place in financial markets.
In Section 3 we review the correspondence between the relativistic diffusion equation, the telegraphers equation and the Dirac equation, found a few decades ago [@Goldstein; @Kac; @GJKS; @JS; @Orsingher1; @Orsingher2]. The diffusion equation can be obtained from the telegrapher equation in the limit $v \to \infty$, where $v$ is the velocity of the particle. Since the Black-Scholes equation is equivalent to the diffusion equation in the $x=\ln S$ variable, we propose that its proper relativistic extension is given by the telegrapher equation with $v$ replaced by $c_m$ (Section 4). As a result, in Section 5, we arrive at a pricing formula for options and present a numerical analysis of option prices, put-call parity and implied volatility. In particular we find that the proposed formula allows for arbitrage opportunities. In the region of parameters where put-call parity is not violated significantly we calculate the implied volatility and find a volatility-frown-like effect. Lastly we perform the $1/c_m$ expansion and find an exact formula for the $1/c_m^2$ corrections ($1/c_m$ terms give no contribution). This result can then be used to evaluate the implied volatility analytically when $c_m$ is large.
Basic idea
==========
Suppose we are considering a model that takes into account the finite maximal speed of propagation of information (locality in the market). The speed of $S(t)$, and hence of $x=\ln S(t)$, is bounded. Let $p(x,t)$ be the corresponding probability density and let us expand it about the normal distribution as follows $$\label{expansion}
p(x,t) = \frac{e^{-\frac{x^2}{2\sigma^2 t}}}{\sqrt{2\pi \sigma^2 t}}\left(1+\frac{1}{c_m^2}f(x,t,\sigma)+\ldots \right),$$ where $\sigma$ is the volatility in the Black-Scholes model and where $f(x,t,\sigma)$ is of compact support, corresponding to the $1/c_m^2$ corrections of this expansion (anticipating results from Section 5, we do not consider $1/c_m$ corrections). Note that $f(x,t,\sigma)$ must be such that the distribution $p(x,t)$ is $0$ for $|x|\ge x_{max}:=c_m t$ (i.e. $f$ is $\frown$ shaped) - a result following simply from locality.
We are interested in the $x$- and $t$-dependent volatility $\sigma_{DI}(x,t,\sigma)$ (density-implied volatility) such that $$\label{expansion1}
p(x,t)= \frac{e^{-\frac{x^2}{2\sigma^2_{DI} t}}}{\sqrt{2\pi \sigma_{DI}^2 t}}.$$ The density-implied volatility $\sigma_{DI}$ is of course a different concept than the implied volatility (which we denote by $\sigma_I$). In this section we would like to make a simple, model-independent observation using $\sigma_{DI}$.
We will look for the solution of (\[expansion1\]) in the form $$\sigma_{DI}^2=\sigma^2\cdot (1+s(x,t,\sigma)), \ \ \ |s(x,t,\sigma)|<1.$$ Expanding (\[expansion1\]) and comparing the appropriate terms we find that one should take $$\label{frown}
\sigma_{DI}^2=\sigma^2\cdot \left(1-\frac{2}{c_m^2(1-\frac{x^2}{\sigma^2 t})}f(x,t,\sigma)+\ldots \right).$$ Therefore, since $f(x,t,\sigma)$ is $\frown$ shaped, $\sigma_{DI}$ will in general also be $\frown$ shaped in the variable $x$. However in the Black-Scholes model $x$ is given roughly by the log of the moneyness, $x \sim \log S/K$, where $S$ is the underlying and $K$ is the strike[^6]. Therefore, in terms of the underlying $S$, the frown $\frown$ turns into a skew (see Figure 2).
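For concreteness, the density-implied volatility (\[frown\]) is easy to evaluate numerically. The sketch below uses the same illustrative choice $f=-1-10x^4$ and parameters $c_m=3$, $t=0.5$, $\sigma=0.15$ as the figure; the boundaries $x \approx 0.11$ and $x \approx 6.32$ of the region $|s|<1$ come out directly:

```python
import math

# Evaluate sigma_DI^2 = sigma^2 * (1 + s(x)) with
#   s(x) = -2 f(x) / (c_m^2 (1 - x^2/(sigma^2 t))),
# for the illustrative choice f(x) = -1 - 10 x^4 (as in the figure).
c_m, t, sigma = 3.0, 0.5, 0.15

def s(x):
    f = -1.0 - 10.0 * x**4
    return -2.0 * f / (c_m**2 * (1.0 - x**2 / (sigma**2 * t)))

def sigma_di(x):
    # meaningful only in the region |s(x)| < 1
    return sigma * math.sqrt(1.0 + s(x))

for x in (0.12, 1.0, 3.0, 6.32):
    print(f"x = {x:5.2f}   s = {s(x):+.4f}   sigma_DI = {sigma_di(x):.4f}")
```

Since $s(x)<0$ throughout this region, $\sigma_{DI}$ stays below $\sigma$, illustrating the deformation of the Gaussian width discussed above.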
![Density implied volatility for $f=-1-10 x^4$ in $x$ variable (left) and in $S=\exp x$ variable (right) for $c_m=3$, $t=0.5$, $\sigma=0.15$. The choice of $f$ is related to the actual result in Section 5 of this paper. Note the spike near the origin due to the singularity at $x^2=\sigma^2 t$ in (\[frown\]). The region plotted corresponds to the condition $|s(x,t,\sigma)|<1$ i.e. $x\in (-6.32,-0.11) \cup (0.11,6.32)$.](diVolEx.pdf){width="120mm"}
This is therefore qualitative evidence that the volatility smile (which often takes the form of a skew) can be fairly easily explained by introducing causality into the Black-Scholes model. A study of a concrete realization of this idea is the main aim of this paper.
Heat, Dirac and the telegraphers
================================
In this section we discuss relations between the heat, Dirac and telegraphers equations in 1+1 dimensions [^7]. The material is well known to physicists, and this section can be omitted by the reader familiar with [@Goldstein; @Kac; @GJKS; @JS; @Orsingher1] and [@Orsingher2].
Euclidean time
--------------
It is a simple observation that the Schrödinger equation of a free particle turns into the diffusion equation upon multiplying time by the imaginary unit, $t \to -it$. In the physics literature this procedure has many names: Wick rotation, analytic continuation, Euclidean time. The correspondence is only formal, as the map $t \to -it$ has no physical justification (in fact, one may as well make the mass imaginary, $m \to i m$, also arriving at the heat equation - clearly there is no physical content here). On the other hand, that map immediately suggests that if one would like to generalize the diffusion equation to the relativistic case, one should use the Euclidean version of the Dirac equation. This reasoning can be captured in the following diagram $$\renewcommand{\arraystretch}{1.5}
\begin{array}{ccc}
\hbox{Dirac equation} &\xrightarrow{v \ll c}& \hbox{Schr\"odinger equation} \\
\Big\downarrow\rlap{$t \to -it$} & &\Big\downarrow\rlap{$t \to -it$} \\
\hbox{Euclidean Dirac equation} & & \hbox{Heat equation}
\end{array}$$ (where $v$ is a velocity and $c$ is the speed of light), provided one assumes that the diagram closes.
However, by blindly using the Euclidean Dirac equation in this way, one loses understanding of the underlying stochastic process. For example, the object satisfying the Dirac equation is a spinor, which has several components (depending on the dimensionality of the problem - in our case of 1+1 dimensions the spinor has two real components). Therefore the question of how to interpret these components in financial applications is not answered as such. Nevertheless there exists a clear and rigorous connection between stochastic processes and the Euclidean Dirac equation, which follows from [@Kac] and [@JS].
Underlying Poisson process
-------------------------
Let us first start with the well known fact that the Wiener process, $W_t$, underlies the heat equation (this observation, of course, dates back to the beginning of the last century [@Einstein; @Smoluchowski]). We consider a particle on a line that follows a simple random walk (probability $1/2$ of going to the right or left). Given time $t$, define $p(x,t)$ as the probability density of the particle being at point $x$. It follows that for small $\Delta t$ and $\Delta x$ this density must satisfy $$p(x,t+\Delta t) \approx \frac{1}{2}p(x-\Delta x,t) + \frac{1}{2}p(x+\Delta x,t).$$ Performing the Taylor expansion we observe that the first-order derivative terms in $x$ cancel, and hence the continuum limit $\Delta t \to 0$, $\Delta x \to 0$ is nontrivial only if $(\Delta x)^2/\Delta t$ has a nonzero limit. The resulting equation is $$\label{diff}
\partial_t p = \frac{1}{2}\sigma^2\partial_{x}^2p$$ where $\sigma^2:=\lim_{\Delta t \to 0}(\Delta x)^2/\Delta t$, i.e. the heat equation. On the other hand, in the limit considered, a simple random walk becomes the Brownian motion. In particular the coordinate of the particle is given by $$\label{wiener}
X(t) = X(0) +\sigma \int_0^t dW_s$$ hence $W_t$ underlies the heat equation. This derivation is a bit sketchy (for a rigorous treatment see e.g. [@Shreve] or [@Oksendal]) however it is very intuitive and useful for further generalizations/modifications.
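The continuum limit above can be checked by direct simulation. A quick sketch (pure Python, illustrative parameters): the variance of the scaled random walk at time $t$ should approach $\sigma^2 t$, in agreement with (\[diff\]) and (\[wiener\]):

```python
import random
import statistics

# Simple random walk with step dx = sigma*sqrt(dt); its variance at time t
# is n_steps * dx^2 = sigma^2 * t, matching the diffusion limit.
random.seed(0)
sigma, t = 0.3, 1.0
n_steps, n_paths = 200, 10000
dt = t / n_steps
dx = sigma * dt**0.5

finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += dx if random.random() < 0.5 else -dx
    finals.append(x)

print("empirical Var:", statistics.pvariance(finals))
print("expected sigma^2 t:", sigma**2 * t)
```
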
Let us now consider a stochastic process in which a particle travels along the line with constant speed and reverses its direction during a time interval $\Delta t$ with probability $\lambda \Delta t$, where $\lambda$ is some constant. It follows that the particle keeps its direction during $\Delta t$ with probability $1-\lambda\Delta t$. Let us now consider two probability densities related to this process
- $P_+(x,t)$ - a particle at time $t$, point $x$ with velocity to the right
- $P_-(x,t)$ - a particle at time $t$ , point $x$ with velocity to the left.
It follows that these densities satisfy (for small $\Delta t$ and $\Delta x$) $$P_\pm(x,t+\Delta t) \approx P_\pm(x \mp \Delta x,t)(1-\lambda\Delta t) + P_\mp(x \pm \Delta x,t)\lambda \Delta t$$ which, after expanding the l.h.s. and taking the $\Delta t \to 0$ limit, implies that $P_+$ and $P_-$ satisfy a system of coupled first-order PDEs $$\label{master}
\partial_tP_\pm = \mp v \partial_x P_\pm \pm \lambda (P_- - P_+)$$ where $v:=\lim_{\Delta t \to 0}\Delta x / \Delta t$. Differentiating (\[master\]) over $t$ or $x$ we find that $P_{\pm}$ decouple and satisfy the telegrapher equation $$\label{telegraph}
\frac{2\lambda}{v^2}\partial_t P_{\pm} = \left(\partial_x^2 - \frac{1}{v^2}\partial_t^2\right)P_{\pm}.$$ The same equation is satisfied by the probability density $p(x,t):=P_+(x,t) + P_-(x,t)$ (a particle at time $t$, point $x$, with either velocity) and by the flow density $w(x,t):=P_+(x,t) - P_-(x,t)$.
On the other hand, in the limit $\Delta t \to 0$ the coordinate of the particle is given by $$\label{tele}
X(t) = X(0)+v \int_0^t (-1)^{N(s)} ds$$ where $N(s)$ is the number of events of the homogeneous Poisson process at time $s$. Therefore the stochastic process underlying the telegrapher equation is the Poisson process [@Goldstein; @Kac].
Let us observe that in the large $v$ limit, keeping $\lambda/v^2=1/\sigma^2$ fixed, one arrives at the diffusion equation (\[diff\]). In this sense the telegraphers equation generalizes the diffusion equation to the case of finite $v$. At the same time we see that in this limit $\lambda \to \infty$, so the Wiener process is recovered from the Poisson one when the average number of flips per second becomes infinite.
1+1 Dirac equation
------------------
As pointed out in [@JS], equations (\[master\]) are in fact equivalent to the Euclidean version of the Dirac equation in $1+1$ dimensions. To see this let us write the Dirac equation in $1+1$ dimensions $$\label{Dirac2d}
i\hbar \partial_t \Psi = -ic\hbar \sigma_3 \partial_x \Psi + mc^2 \sigma_1 \Psi.$$ Here we keep all the constants (the Planck constant $\hbar$ and the speed of light $c$) explicitly, even though these constants have little physical meaning in the case of $1+1$ dimensional space-time. The wave function $\Psi$ has two components $\Psi= (\psi_+, \psi_-)^T$ while $\sigma_1$, $\sigma_3$ are the usual Pauli matrices $$\sigma_1 =\left(\begin{matrix}
0 & 1 \cr
1 & 0 \end{matrix}\right), \ \ \
\sigma_3= \left(\begin{matrix}
1 & 0 \cr
0 & -1 \end{matrix}\right).$$ Let us introduce the Euclidean time $t_E = i t$ (consequently the Euclidean speed of light $c_E = -i c$), and define new spinor components $u_\pm(t_E,x) = e^{\frac{mc_E^2}{\hbar}t_E}\psi_\pm$. We find, from (\[Dirac2d\]), that $u_\pm$ satisfy $$\partial_{t_E} u_{\pm} =\mp c_E \partial_x u_\pm \pm \frac{mc_E^2}{\hbar}(u_- -u_+)$$ which is equivalent to (\[master\]) provided we make the following identification: $t \leftrightarrow t_E$, $P_\pm \leftrightarrow u_\pm$, $v \leftrightarrow c_E$, $\lambda \leftrightarrow \frac{mc_E^2}{\hbar}$.
Therefore one may conclude that the diagram discussed at the beginning of this section is not just formal. The Euclidean version of the Dirac equation can be derived from the underlying Poisson process - the components $\psi_\pm$ of the spinor $\Psi$ correspond to the probability densities $P_\pm$ multiplied by the factor $e^{-\lambda t}$.
Fundamental solution
--------------------
As indicated in [@Kac], the telegraphers equation becomes the heat equation in the $v \to \infty$ limit with $\lambda/v^2$ kept fixed. Therefore the solutions of the telegraphers equation should converge to the solutions of the heat equation in that limit. Since the telegraphers equation is second order in time derivatives, one needs to fix both the function and its first time derivative at (say) $t=0$. Setting $$p(x,t)=\delta(x), \ \ \ \ \partial_t p(x,t) = 0 \ \ \ \ \hbox{for} \ \ \ \ t=0$$ one can prove that the solution is [@Goldstein; @Orsingher1; @Orsingher2] $$p(x,t)= \frac{e^{-\lambda t}}{2v}\left[\delta(|x|-vt)+\lambda G(x,t) + \partial_t G(x,t)\right],$$ $$\label{solution}
G(x,t) = \begin{cases} I_0\left(\frac{\lambda}{v}\sqrt{v^2t^2-x^2}\right), & \mbox{for } |x|\le vt \\ 0, & \mbox{otherwise} \end{cases}$$ where $I_0(z)$ is the modified Bessel function of the first kind of order zero, $ I_0(z)=\sum_{k=0}^{\infty}\frac{1}{(k!)^2}\left(\frac{z}{2}\right)^{2k}$. Note that $p(x,t)$ is zero outside of the light-cone (i.e. for $|x|>vt$). In financial terms this means that the log-returns cannot be arbitrarily large or small - as expected.
As shown in [@Orsingher2], this solution indeed converges to the fundamental solution of the heat equation $$\lim_{\substack{v \to \infty \\ v^2/\lambda \to \sigma^2}}p(x,t) = \frac{1}{\sqrt{2\pi \sigma^2 t}}e^{-\frac{x^2}{2\sigma^2t}}.$$ Moreover the variance of the process (\[tele\]) is $$Var[X(t)] = \frac{1}{2}v^2\left[\frac{2t}{\lambda}-\frac{1-e^{-2\lambda t}}{\lambda^2}\right]$$ which in the limit coincides with the result for the Wiener process (this result can be obtained from the solution (\[solution\]) or directly from definition (\[tele\]), [@Orsingher2]).
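These properties can be verified numerically. The sketch below evaluates the continuous part of (\[solution\]) using a pure-Python power series for the Bessel functions (illustrative parameters $v=1$, $\lambda=2$, $t=1$): its mass should be $1-e^{-\lambda t}$, since the delta functions at $|x|=vt$ carry the remaining weight $e^{-\lambda t}$, and the total second moment should match the variance formula above:

```python
import math

# Continuous part of the fundamental solution:
#   p_c(x,t) = e^{-lam t}/(2v) * [lam I0(z) + lam^2 t I1(z)/z],
#   z = (lam/v) sqrt(v^2 t^2 - x^2).

def bessel_i(nu, z, terms=100):
    # power series of the modified Bessel function I_nu (nu = 0 or 1),
    # summed via a term recurrence to avoid huge intermediates
    term = (z / 2.0)**nu / math.factorial(nu)
    total = term
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * (k + nu))
        total += term
    return total

def p_cont(x, t, v, lam):
    z = (lam / v) * math.sqrt(v * v * t * t - x * x)
    # lim_{z->0} I1(z)/z = 1/2 handles the light-cone boundary
    dG = lam**2 * t / 2.0 if z == 0.0 else lam**2 * t * bessel_i(1, z) / z
    return math.exp(-lam * t) / (2.0 * v) * (lam * bessel_i(0, z) + dG)

v, lam, t = 1.0, 2.0, 1.0
n = 4000
h = 2.0 * v * t / n
xs = [-v * t + (i + 0.5) * h for i in range(n)]        # midpoint rule
mass = sum(p_cont(x, t, v, lam) for x in xs) * h
second = sum(x * x * p_cont(x, t, v, lam) for x in xs) * h
exact_var = 0.5 * v**2 * (2 * t / lam - (1 - math.exp(-2 * lam * t)) / lam**2)

print("continuous mass:", mass, " expected:", 1 - math.exp(-lam * t))
print("total variance:", second + math.exp(-lam * t) * (v * t)**2,
      " formula:", exact_var)
```

The deltas at $\pm vt$ contribute $e^{-\lambda t}(vt)^2$ to the second moment, which is added in by hand above.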
Generalizing Black-Scholes equation
===================================
Ideally one would like to use the Poisson process and its relation to the Wiener process (cf. the previous section) to derive the generalization of the Black-Scholes equation using the standard hedging argument. Comparing the corresponding stochastic processes (\[wiener\]) and (\[tele\]), it seems reasonable to assume that a good starting point for the process describing the underlying asset $S(t)$ would be $$\label{rgbm}
\frac{dS_t}{S_t} = \mu dt + c_m(-1)^{N_t}dt,$$ where we have replaced $v$ with the maximal log-market velocity $c_m$. In the $c_m \to \infty$ limit, with $c_m/\sqrt{\lambda} = \sigma$ fixed, the term $c_m(-1)^{N_t}dt$ can be replaced by $\sigma dW_t$ (in the sense that the process (\[tele\]) converges to (\[wiener\])) and one recovers geometric Brownian motion. However it is not clear what the analog of the Ito lemma is for a process like (\[rgbm\]).
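Even without an Ito-type calculus, the process (\[rgbm\]) can be simulated exactly, since between flips the evolution is deterministic: $d(\ln S_t) = (\mu + c_m(-1)^{N_t})dt$ (no Ito correction arises, the paths being of finite variation), so $|\ln S_t - \ln S_0 - \mu t| \le c_m t$ by construction. A quick sketch with illustrative parameters:

```python
import random

# Exact simulation of dS/S = mu dt + c_m (-1)^{N_t} dt: between Poisson
# flips, log S grows linearly at rate mu +/- c_m.
random.seed(2)
mu, c_m, lam, t = 0.05, 0.5, 10.0, 1.0

def simulate_log_return():
    log_s, elapsed, direction = 0.0, 0.0, 1.0
    while True:
        wait = random.expovariate(lam)
        step = min(wait, t - elapsed)
        log_s += (mu + direction * c_m) * step
        elapsed += step
        if elapsed >= t:
            return log_s
        direction = -direction

samples = [simulate_log_return() for _ in range(1000)]
worst = max(abs(x - mu * t) for x in samples)
print("max |log-return - mu t|:", worst, " bound c_m t:", c_m * t)
```

The hard bound on log-returns, in contrast to the unbounded support of geometric Brownian motion, is exactly the locality property discussed in Section 2.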
In this section we will use a different route to arrive at the “relativistic” equation for pricing options. We shall take advantage of the $f$ map $$(V,S,t) \xrightarrow{ \ \ f \ \ } (u,x,\tau)$$ $$\label{map}
x= \ln S/K + \left(r-\frac{1}{2}\sigma^2\right) \tau, \ \ \
\tau = T-t, \ \ \ u(x,\tau) = e^{-rt}V(S,t)$$ which one uses to bring the Black-Scholes equation to the form of the heat equation[^8] $$V_t + rSV_S +\frac{1}{2}\sigma^2 S^2 V_{SS} = r V \ \ \ \xrightarrow{ \ \ f \ \ } \ \ \ u_\tau = \frac{1}{2}\sigma^2 u_{xx}.$$ Since the relativistic counterpart of the heat equation is the telegraphers equation, by applying the inverse map, $f^{-1}$, to the coordinates of the latter, one arrives at a relativistic extension of the Black-Scholes equation (see the diagram below). $$\renewcommand{\arraystretch}{1.5}
\begin{array}{ccccc}
\hbox{Black-Scholes}& \xrightarrow{ \ \ f \ \ }& \hbox{Heat equation} \\
\Big\uparrow\rlap{$c_m \to \infty$} & & \Big\uparrow\rlap{$c_m \to \infty$} \\
\hbox{relativistic Black-Scholes}& \xleftarrow{ \ f^{-1}} & \hbox{Telegraphers equation}
\end{array}$$ This method leaves a certain degree of ambiguity: e.g. instead of the inverse $f^{-1}$ one could use the inverse of a different map, $f_{c_m}$, such that $f_{c_m} \to f$ as $c_m \to \infty$. Therefore the above reasoning should not be understood as a derivation but rather as a proposal for the relativistic Black-Scholes equation.
A straightforward calculation shows that the inverse of the map (\[map\]) applied to the telegrapher equation results in $$\label{rbs}
V_t + rSV_S +\frac{1}{2}\sigma^2 S^2 V_{SS} = r V + \frac{\sigma^2}{2c_m^2} R$$ with $$R = r^2 V -2rV_t +V_{tt} +2S\left(r-\frac{1}{2}\sigma^2\right)(V_{tS}-rV_S)$$ $$+ \left(r-\frac{1}{2}\sigma^2\right)^2 (S V_S +S^2 V_{SS}).$$ Clearly, in the $c_m \to \infty$ limit the Black-Scholes equation is recovered.
Let us comment on the non-Markovian character of equation (\[rbs\]). That equation is indeed non-Markovian, since being second order in time derivatives generically implies a non-Markovian process [@review]. Indeed, one can verify that the fundamental solution (\[solution\]) does not satisfy the Kolmogorov-Smoluchowski condition [@review] $$\label{smolu}
p(x-z,\tau) = \int_{\mathbb{R}} p(x-y,\tau)p(y-z,\tau)dy.$$ Is our model non-Markovian then? The answer is no. As shown above, equation (\[rbs\]) is derived from the system (\[master\]), which is in fact Markovian. The contradiction appears when one forgets that the complete information about the system is given by the pair of PDFs $(p(x,\tau),w(x,\tau))$, not just the single $p(x,\tau)$. Therefore instead of (\[smolu\]) one should check its generalization in which $p$ is replaced by the $2 \times 2$ matrix kernel of the process. That kernel satisfies the generalized Kolmogorov-Smoluchowski condition, since the system is essentially equivalent to relativistic quantum mechanics.
Plain vanilla options
=====================
Using the discussion in the previous section we are now able to write down the formula for European Calls and Puts. Formula (\[solution\]) is the fundamental solution of the telegraphers equation. However if $p(x,\tau)$ is a solution then clearly so is $$V(S,t)=\int_{\mathbb{R}} p(x-y,\tau)f(y)dy$$ for an arbitrary function $f(y)$. In our case $x$ and $\tau$ are given in terms of $S$ and $t$ according to (\[map\]). Because the boundary condition is that at expiry $V(S,t)$ equals the payoff of the derivative instrument: $$V(S,T)=\mathcal{P}(S) = \begin{cases} \max(S-K,0), & \mbox{for a call option} \\ \max(K-S,0), & \mbox{for a put option} \end{cases}$$ and because at $\tau=0$ (i.e. $t=T$) we have $p(x,\tau=0)=\delta(x)$, we find that $f(y)$ is equal to the payoff function. Therefore in the original coordinates $S,t$ the solution reads $$\label{vanilla}
V(S,t)= e^{-r(T-t)}\int_0^\infty p(X(S'),T-t) \mathcal{P}(S')\frac{dS'}{S'},$$ $$X(S')= \ln \frac{S}{S'} + \left(r-\frac{1}{2}\sigma^2\right) (T-t).$$ If $p(x,\tau)$ were given by the fundamental solution of the heat equation, then (\[vanilla\]) would reproduce the Black-Scholes formula for Puts and Calls. In our case $p(x,\tau)$ is the more complicated expression (\[solution\]), written in terms of the modified Bessel function.
Exact evaluation of the integral (\[vanilla\]) seems difficult, however numerical evaluation is fairly straightforward (see Figure 3; we take $S=100$, $T-t=0.5$, $r=0.05$, $\sigma=0.15$ and evaluate the integral for various strikes $K$).
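As an indication of how such a numerical evaluation might look, the sketch below integrates (\[vanilla\]) with the continuous part of the density (\[solution\]) (the delta terms carry weight $e^{-\lambda\tau}$, utterly negligible here, since $\lambda\tau\approx 139$ for $c_m=2.5$) and compares with the Black-Scholes price. Pure Python, midpoint quadrature, parameters as in the text:

```python
import math

def bessel_i(nu, z, terms=300):
    # series for the modified Bessel function I_nu, via a term recurrence
    term = (z / 2.0)**nu / math.factorial(nu)
    total = term
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * (k + nu))
        total += term
    return total

def p_cont(x, tau, v, lam):
    # continuous part of the fundamental solution (zero outside |x| < v*tau)
    if abs(x) >= v * tau:
        return 0.0
    z = (lam / v) * math.sqrt(v * v * tau * tau - x * x)
    dG = lam**2 * tau / 2.0 if z == 0.0 else lam**2 * tau * bessel_i(1, z) / z
    return math.exp(-lam * tau) / (2.0 * v) * (lam * bessel_i(0, z) + dG)

def call_relativistic(S, K, tau, r, sigma, c_m):
    lam = c_m**2 / sigma**2
    x = math.log(S / K) + (r - 0.5 * sigma**2) * tau
    lo, hi = max(0.0, x - c_m * tau), x + c_m * tau   # payoff and support
    n = 1500
    h = (hi - lo) / n
    acc = sum((math.exp(lo + (i + 0.5) * h) - 1.0)
              * p_cont(x - (lo + (i + 0.5) * h), tau, c_m, lam)
              for i in range(n))
    return math.exp(-r * tau) * K * acc * h

def call_bs(S, K, tau, r, sigma):
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / sq
    d2 = d1 - sq
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * tau) * N(d2)

S, tau, r, sigma, c_m = 100.0, 0.5, 0.05, 0.15, 2.5
results = {K: (call_relativistic(S, K, tau, r, sigma, c_m),
               call_bs(S, K, tau, r, sigma)) for K in (90.0, 100.0, 110.0)}
for K, (rel, bs) in results.items():
    print(f"K={K}: relativistic {rel:.4f}  Black-Scholes {bs:.4f}")
```

The two prices should be close for this value of $c_m$, in line with the differences reported below.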
![Black-Scholes Call minus a Call form (\[vanilla\]), as a function of $K$ for $S=100$, $T-t=0.5$, $r=0.05$, $\sigma=0.15$. ](optDiff.pdf){width="100mm"}
We observe that for $c_m=0.1$ the Call prices differ significantly from the Black-Scholes ones. For $c_m=0.5$ these differences are already of order $0.1$, while for $c_m=2.5$ and $c_m=10$ they are smaller than $0.006$ and $0.00035$ respectively.
An important test of the formula (\[vanilla\]) is the verification of put-call parity, i.e. whether the formula allows for arbitrage opportunities (see Figure 4). For $c_m=0.1$ we see that put-call parity is significantly violated for all (but one) values of the strike. When $c_m$ is increased the arbitrage opportunities slowly disappear, but they are nevertheless present. For example, for $c_m=2.5$ put-call parity is satisfied up to $0.002$ for strikes $K \in [0,155]$; above $K=155$ the departures from put-call parity become noticeable. This is a serious drawback of the formula (\[vanilla\]).
{width="100mm"}
However there are regions where put-call parity is not violated significantly (which is definitely the case for larger $c_m$) and therefore it is reasonable to calculate the implied volatility from these new prices. The corresponding numerical results are presented in Figure 5.
{width="100mm"}
The plots are made only for those strikes for which the implied volatility can be found. For example, for $c_m=0.5$ the implied volatility can be found in the range $K \in [84,166]$; outside this range there is no $\sigma$ for which the Black-Scholes formula is equal to (\[vanilla\]). The results for $c_m=0.1$ and $c_m=0.5$ exist only in regions of $K$ where arbitrage opportunities are present, therefore we will not discuss them further. However for $c_m=2.5$ and $c_m=10$ we observe the left side of a volatility frown (a spike near $K=70$). This is expected, considering the general remarks made in Section 2. However the right side of the volatility frown is not seen in this range, therefore it is hard to argue, as we did in Section 2, that the skew effect emerges naturally (although a delicate skew can be observed for $c_m=2.5$ for $K>70$).
$1/c_m$ expansion
-----------------
Since in the $c_m \to \infty $ limit the exact solution (\[solution\]) becomes the normal distribution, it is instructive to see what the $1/c_m$ corrections are before the limit is performed.
Following [@Orsingher2] we observe that in the large $c_m$ limit the argument of the Bessel function $I_0(\cdot)$ in (\[solution\]) is large, hence we can take advantage of the asymptotic expansion [@Abramowitz] $$\label{exp1}
I_0(z) = \frac{e^{z}}{\sqrt{2\pi z}}\left(1+\frac{1}{8z}+ \ldots \right), \ \ \ \ z \gg 1.$$ The argument $z$ in our case can also be expanded as $$\label{exp2}
z= \frac{\lambda}{c_m}\sqrt{c_m^2t^2-x^2}= \lambda t - \frac{\lambda x^2}{2c_m^2 t} - \frac{\lambda x^4}{8c_m^4 t^3 }+ \ldots \ \ .$$ Note that since we have $\lambda=c_m^2/\sigma^2$, all the terms in (\[exp1\]) and (\[exp2\]) are necessary to capture all the $1/c_m^2$ contributions. On the other hand, to prove that $p(x,\tau)$ converges to the normal distribution, as it is done in [@Orsingher2], one does not need the $1/8z$ term in (\[exp1\]) and the $x^4$ term in (\[exp2\]). Substituting (\[exp1\]) and (\[exp2\]) to (\[solution\]) and using $\lambda=c_m^2/\sigma^2$ we find that the solution (\[solution\]) resolves as $$\label{exp3}
p(x,\tau) = \frac{e^{-\frac{x^2}{2\sigma^2 \tau}}}{\sqrt{2\pi \sigma^2 \tau}}\left(1+\frac{1}{c_m^2}f(x,\tau)+\ldots \right),$$ $$\label{corr}
f(x,\tau):= -\frac{\sigma^2}{8 \tau}+\frac{x^2}{2\tau^2}-\frac{x^4}{8\sigma^2 \tau^3}.$$
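The quality of this expansion can be checked numerically against the exact solution (\[solution\]) (with $v=c_m$ and $\lambda = c_m^2/\sigma^2$). In the sketch below (illustrative parameters $\sigma=1$, $\tau=1$, $c_m=5$) the corrected Gaussian should be markedly closer to the exact density than the plain Gaussian:

```python
import math

# Compare the exact density (continuous part of (solution), lam = c_m^2/sigma^2)
# with the plain Gaussian and with the 1/c_m^2-corrected Gaussian (exp3)/(corr).

def bessel_i(nu, z, terms=120):
    # series for the modified Bessel function I_nu, via a term recurrence
    term = (z / 2.0)**nu / math.factorial(nu)
    total = term
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * (k + nu))
        total += term
    return total

sigma, tau, c_m = 1.0, 1.0, 5.0
lam = c_m**2 / sigma**2

def p_exact(x):
    z = (lam / c_m) * math.sqrt(c_m**2 * tau**2 - x * x)
    dG = lam**2 * tau / 2.0 if z == 0.0 else lam**2 * tau * bessel_i(1, z) / z
    return math.exp(-lam * tau) / (2.0 * c_m) * (lam * bessel_i(0, z) + dG)

def gauss(x):
    return math.exp(-x * x / (2 * sigma**2 * tau)) / math.sqrt(2 * math.pi * sigma**2 * tau)

def f_corr(x):
    return (-sigma**2 / (8 * tau) + x * x / (2 * tau**2)
            - x**4 / (8 * sigma**2 * tau**3))

for x in (0.0, 0.5, 1.0, 1.5):
    exact = p_exact(x)
    plain = gauss(x)
    corr = gauss(x) * (1 + f_corr(x) / c_m**2)
    print(f"x={x}: plain err {abs(plain-exact):.2e}, corrected err {abs(corr-exact):.2e}")
```
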
A crosscheck
------------
An independent way to verify (\[corr\]) is to start with the telegraphers equation (\[telegraph\]) and search for the solutions of the form of (\[exp3\]). Substituting (\[corr\]) to (\[exp3\]) we verify that the result satisfies the telegrapher equation up to the terms of order $1/c_m^2$ - as expected.
A more systematic way to see this is as follows. Using only the expansion (\[exp3\]) we find that the telegraphers equation implies $$\frac{1}{2}\tau^2 \sigma^2 \partial_x^2f -\tau x\partial_xf -\tau^2 \partial_\tau f- \frac{x^4}{8\sigma^2\tau^2}+\frac{3x^2}{4\tau}-\frac{3\sigma^2}{8}=0$$ where we have neglected terms of order $1/c_m^4$ and smaller. Now we observe that the substitution $f(x,\tau)=w(\xi)/\tau$, $\xi=x^2/\tau$ results in an ordinary differential equation for $w(\xi)$ $$\label{corr1}
2\sigma^2 \xi w''(\xi)+(\sigma^2-\xi)w'(\xi)+w(\xi)+\frac{3\xi}{4}-\frac{\xi^2}{8\sigma^2}-\frac{3\sigma^2}{8}=0$$ for which the most general solution quadratic in $\xi$ is $$w(\xi) = -\frac{1}{8\sigma^2}\xi^2+\left(\frac{3}{8}-\frac{a}{\sigma^2}\right)\xi +a, \ \ \ \ a\in \mathbb{R}.$$ Taking $a=-\sigma^2/8$ we see that $w(x^2/\tau)/\tau$ coincides with (\[corr\]). Therefore we have shown that the $1/c_m^2$ corrections (\[corr\]) are consistent with the expansion (\[exp3\]).
The general solution of (\[corr1\]) can be obtained by finding the general solution of the corresponding homogeneous equation and adding it to the particular solution $w(\xi)$. The result is $$w_{gen}(\xi)=\frac{\xi-\sigma^2}{2\sigma}c_1 + \left[ \frac{2\sqrt{\xi}}{\pi}+\sqrt{\frac{2}{\pi}}\operatorname{erfi}\left(\frac{\sqrt{\xi}}{\sqrt{2}\sigma}\right)\left(\sigma-\frac{\xi}{\sigma}\right)\right]c_2 + w(\xi)$$ where $\operatorname{erfi}(x)$ is the imaginary error function $\operatorname{erfi}(x):=-i \operatorname{erf}(ix)$. The fact that the above solution is not unique is reasonable, since we did not specify the boundary conditions for $w(\xi)$.
Black-Scholes formula with $1/c_m^2$ corrections
------------------------------------------------
A complete treatment of the problem requires calculating the exact value of e.g. the Call, which we shall do now. Substituting (\[exp3\]) and (\[corr\]) into (\[vanilla\]) one finds that the Call option is $$\label{integral}
V(S,\tau)=\frac{K e^{-r \tau}}{\sqrt{2\pi \sigma^2 \tau}} \int_0^{y_{max}} (e^y-1)e^{-\frac{(x-y)^2}{2\sigma^2\tau}}\left[1+\frac{1}{c_m^2}f(y,\tau)\right]dy,$$ $$x= \ln S/K + \left(r-\frac{1}{2}\sigma^2\right) \tau$$ where we changed the integration variables for convenience. The upper integration limit $y_{max}$, at which the bracket vanishes, is given implicitly by $f(y_{max},\tau)=-c_m^2$, which has four solutions; however only one of them is always real and positive, $$y_{max}= \sqrt{2\sigma^2 \tau +\sigma \tau \sqrt{3\sigma^2+8c_m^2 \tau} }.$$ In the limit $c_m\to \infty$ we have $y_{max}\to \infty$ and the integral (\[integral\]) reproduces the Black-Scholes formula. For finite $c_m$ the integration is more complicated. Because of the exponential damping of the integrand we approximate the integral by setting $y_{max}=\infty$. By doing so we introduce an error negligible compared to the $1/c_m^2$ corrections already present in the integrand. Now, however, the integral is elementary, since $f(y,\tau)$ is a (quartic) polynomial in $y$. The final result is relatively simple in terms of the standard $d_1$, $d_2$ parameters $$\label{bscorr}
V(S,\tau)=S N(d_1)-K e^{-r\tau}N(d_2) +\frac{1}{c_m^2}v,$$ $$d_1=\frac{\sigma^2 \tau+x}{\sigma \sqrt{\tau}}, \ \ \ \ d_2 = \frac{x}{\sigma \sqrt{\tau}}$$ where $$v=-\frac{\sigma^2}{8\tau} [S M(d_1)-K e^{-r\tau}M(d_2) ]
-\frac{S \sigma^2}{8\sqrt{2\pi \tau}} e^{-\frac{d_1^2}{2}}\left(1+\frac{3}{2}d_1^2+\frac{3}{2}d_2^2-\frac{1}{2}\sigma^2 \tau \right)$$ with $$M(z):=N(z)z^2(z^2+2), \ \ \ \ N(z)=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^z e^{-t^2/2}dt.$$ Having derived the $1/c_m^2$ corrections to the Black-Scholes formula we are now in a position to find the corresponding implied volatility. To this end we perform an analysis similar to that of Section 2. We examine how the Black-Scholes formula changes when $\sigma \to \sigma \cdot (1 + s)$ where $s$ is small. The $d_1$ and $d_2$ parameters become $$d_1 \to d_1 +s \left(\sigma \sqrt{\tau} - \frac{x}{\sigma\sqrt{\tau}} \right), \ \ \ \ d_2 \to d_2 -\frac{ s x}{\sigma \sqrt{\tau}}$$ and hence the Black-Scholes formula $V_{BS}=S N(d_1)-K e^{-r\tau}N(d_2)$ is $$V_{BS}\to V_{BS} + s \bar{v},$$ $$\label{bsvar}
\bar{v} = \frac{1}{\sigma\sqrt{2\pi \tau}} \left[ S (\sigma^2 \tau -x) e^{-\frac{d_1^2}{2}} + K x e^{-r \tau-\frac{d_2^2}{2}} \right].$$ Comparing (\[bsvar\]) with (\[bscorr\]) we find that $s=v/(\bar{v}c_m^2)$ and hence the implied volatility is $$\label{implVol}
\sigma_I = \sigma \cdot \left(1+ \frac{v}{c_m^2\bar{v}}\right).$$ This result is plotted in Figure 6.
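The corrected price and implied volatility are straightforward to evaluate numerically. Below is a minimal Python sketch of (\[bscorr\]) and (\[implVol\]); the parameter values in the usage note are the illustrative ones from the figure, and $\bar v$ is computed through the equivalent vega form $\bar v = \sigma S\sqrt{\tau}\,\phi(d_1)$, where $\phi$ is the standard normal density.

```python
import math

def call_with_correction(S, K, r, sigma, tau, c_m):
    """Black-Scholes call with the 1/c_m^2 correction v, and the implied volatility.

    Uses x = ln(S/K) + (r - sigma^2/2) tau, d1 = (sigma^2 tau + x)/(sigma sqrt(tau)),
    d2 = x/(sigma sqrt(tau)).  bar_v is evaluated through the vega identity
    bar_v = sigma * S * sqrt(tau) * phi(d1), equivalent to the closed form in the text.
    """
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    M = lambda z: N(z) * z**2 * (z**2 + 2.0)

    x = math.log(S / K) + (r - 0.5 * sigma**2) * tau
    st = sigma * math.sqrt(tau)
    d1, d2 = (sigma**2 * tau + x) / st, x / st

    bs = S * N(d1) - K * math.exp(-r * tau) * N(d2)                 # plain Black-Scholes
    v = (-sigma**2 / (8.0 * tau) * (S * M(d1) - K * math.exp(-r * tau) * M(d2))
         - S * sigma**2 / (8.0 * math.sqrt(2.0 * math.pi * tau)) * math.exp(-0.5 * d1**2)
           * (1.0 + 1.5 * d1**2 + 1.5 * d2**2 - 0.5 * sigma**2 * tau))
    bar_v = sigma * S * math.sqrt(tau) * phi(d1)

    price = bs + v / c_m**2
    sigma_implied = sigma * (1.0 + v / (c_m**2 * bar_v))
    return price, sigma_implied
```

For $S=K=100$, $r=0.05$, $\sigma=0.15$, $\tau=0.5$ the uncorrected call price is about $5.5$, and both the price and $\sigma_I$ revert to their Black-Scholes values as $c_m\to\infty$.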
![$1/c_m^2$ corrections to the implied volatility given by (\[implVol\]), for $S=100$, $T-t=0.5$, $r=0.05$, $\sigma=0.15$. ](implVol.pdf){width="100mm"}
Summary and Outlook
===================
Relativistic extensions of the Black-Scholes model seem very natural, considering the future development of high-frequency trading. However, the physical bound on the maximal speed of the asset is, to our understanding, still too high to give noticeable effects in the market. On the other hand, as we argued in the introduction, the effective maximal speed of log-returns, $c_m$, is much smaller due to the “resistance” of the market; an analogous phenomenon appears in some physical situations. Therefore relativistic extensions with such an effective velocity, instead of the real one, seem reasonable.
In this paper we considered a certain relativistic extension of the Black-Scholes model, based on the observation that the Black-Scholes equation, in particular coordinates, becomes a heat equation. The latter is clearly non-relativistic and therefore it is a good starting point for relativistic extensions. The stochastic process behind the heat equation is a Brownian motion, which implies that an appropriate extension should be related to a process such that in the $c_m \to \infty$ limit the Wiener process is recovered. A very well known process which satisfies this condition is the telegrapher process. Not only does it converge to the Wiener process in the above limit, but it also incorporates the features of relativity in a very clever way: the system of PDEs describing the probability densities of the telegrapher process is equivalent to the Euclidean version of the Dirac equation in $1+1$ dimensions. Therefore it provides an extremely elegant framework. Our most important finance-related conclusion based on these remarks is that the geometric Brownian motion should be replaced by its relativistic counterpart (\[rgbm\]) $$dS_t/S_t = \mu dt + c_m (-1)^{N(t)}dt,$$ where $N(t)$ is the number of events in the homogeneous Poisson process with rate parameter $\lambda$. This SDE becomes the geometric Brownian motion with volatility $\sigma$ when the $c_m \to \infty$ limit is performed (keeping $\lambda = c_m^2/\sigma^2$). It is not an Ito process and therefore one cannot use the Ito lemma to derive the corresponding equation for a derivative instrument. We circumvent this problem by claiming that in order to price a vanilla option one should replace the Gaussian probability distribution by its relativistic counterpart. If this is the case then the pricing formula is given by Eq. (\[vanilla\]). By performing numerical integration we have found that equation (\[vanilla\]) in general violates put-call parity.
However, there is a region of parameters (in particular for large $c_m$) for which arbitrage possibilities are small. In these cases the volatility frown effect is observed, as expected. We then evaluated the $1/c_m^2$ corrections to the Black-Scholes formula, using Eq. (\[vanilla\]), and found that the corresponding implied volatility resembles the frown shape, in accordance with the previous numerical analysis.
There are several directions in which one can improve our results and the model itself. One is to perform thorough Monte Carlo simulations based on the SDE (\[rgbm\]), which could then be compared with the numerical results of Section 5 as well as with (\[bsvar\]). Formula (\[vanilla\]) has not been proven to be the solution of option pricing based on (\[rgbm\]). It may very well be that the true solution is different from (\[vanilla\]), and that it does not violate put-call parity as (\[vanilla\]) does. Still, it is desirable to bring the integral (\[vanilla\]), for arbitrary $c_m$, to a form similar to the Black-Scholes pricing formula. This seems possible as the integrand involves the Bessel function and its time derivative, which have many special properties.
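A minimal Monte Carlo sketch of such a simulation is shown below (Python; an illustration, not a calibrated pricing engine). Between Poisson flips the SDE is an ordinary differential equation, so each path can be integrated exactly; the initial sign of the telegrapher term is randomized so that the expected log-return is $\mu T$.

```python
import math
import random

def rgbm_path(S0, mu, sigma, c_m, T, rng):
    """One exact draw of S_T under dS/S = mu dt + c_m (-1)^{N(t)} dt.

    N(t) is a Poisson process with rate lam = c_m^2 / sigma^2, so the
    c_m -> infinity limit reproduces a GBM with volatility sigma.
    Between flips d(ln S)/dt = mu +/- c_m is constant, hence the exact update.
    """
    lam = c_m**2 / sigma**2
    t, log_s = 0.0, math.log(S0)
    sign = rng.choice([-1.0, 1.0])        # symmetric start: E[ln(S_T/S_0)] = mu*T
    t_flip = rng.expovariate(lam)         # first Poisson event time
    while t_flip < T:
        log_s += (mu + sign * c_m) * (t_flip - t)
        t, sign = t_flip, -sign
        t_flip += rng.expovariate(lam)
    log_s += (mu + sign * c_m) * (T - t)  # final segment up to maturity
    return math.exp(log_s)
```

With $\sigma=0.2$, $c_m=1$ (so $\lambda=25$) and $T=1$, the sample mean and variance of $\ln S_T/S_0$ over a few thousand paths come out close to $\mu T$ and $\sigma^2 T$, as expected from the diffusive limit.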
Second, it would be very interesting to derive a counterpart of the Ito lemma for the process (\[rgbm\]) as it could be used to derive the pricing PDE from first principles.
Lastly, one could generalize the process (\[rgbm\]) by using a non-constant effective velocity $c_m$ (since $c_m$ is an effective quantity, there is a priori no reason to assume it is constant). Clearly one could also consider a stochastic process for $c_m$ (e.g. some mean-reverting process) $$dc_m^2 = \alpha(c_m,t)dt + \beta(c_m,t) dW_t$$ which, together with (\[rgbm\]) and the constraint $c_m^2 = \lambda \sigma^2$, would result in a certain generalization of the stochastic volatility models. The randomness of volatility would then be explained by the randomness of $c_m$ since $d\sigma^2 = \lambda^{-1} d c_m^2$. Furthermore, one can also consider a non-homogeneous Poisson process (i.e. with non-constant $\lambda$), thereby adding one more degree of freedom to the model.
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank G. Araujo, T. Banerjee, N. Butt, A. Gomez, M. Kust, M. Maurette and S. Mercuri for reading the manuscript and comments. I also thank M. Kuźniak for email correspondence and B. Trzetrzelewska for encouragement.
[^1]: CRISIL Global Research and Analytics, Av. Libertador 1969, Olivos, Buenos Aires, Argentina, e-mail: [email protected]
[^2]: Opinions expressed in this document are only personal views of the author.
[^3]: The reader will note that the line of reasoning presented here differs from the chronological way these ideas were considered. Jumps were introduced in 1976, three years after the Black-Scholes paper, stochastic volatility in 1993, local volatility in 1994.
[^4]: At this point it is worth noting that in theoretical physics there are constructions (such as string theory) which suffer from making no predictions in this sense.
[^5]: There is a common belief that the crash in the 80’ was due to algorithmic trading. We do not share this point of view. Following R. Roll’s argument: if the algorithmic trading was to blame the crash would not have started in Hong-Kong where program trading was not allowed yet.
[^6]: Exact dependence is of course $x=\log S/K +\left(r -\frac{1}{2}\sigma^2\right) (T-t)$ where $r$ is the interest rate, $\sigma$ is the volatility, $T-t$ is time to maturity.
[^7]: In quantitative finance the notation $1+1$ is rarely used. It refers to one space and one time direction. In finance this means one underlying (or $\ln$ of the underlying) and one time variable.
[^8]: We use standard notation: $S$ - the underlying, $K$ - strike, $T$ - maturity, $r$ - interest rates, $\sigma$ - volatility.
---
abstract: 'We explore the fundamental limits to which reionization histories can be constrained using only large-scale cosmic microwave background (CMB) anisotropy measurements. The redshift distribution of the fractional ionization $x_e(z)$ affects the angular distribution of CMB polarization. We project constraints on the reionization history of the universe using low-noise full-sky temperature and E-mode measurements of the CMB. We show that the measured TE power spectrum, [$\hat C_\ell^\mathrm{TE}$]{}, has roughly one quarter of the constraining power of [$\hat C_\ell^\mathrm{EE}$]{} on the reionization optical depth $\tau$, and its addition improves the precision on $\tau$ by 20% over using [$\hat C_\ell^\mathrm{EE}$]{} only. We also use a two-step reionization model with an additional high redshift step, parametrized by an early ionization fraction $x_e^\mathrm{min}$, and a late reionization step at $z_\mathrm{re}$. We find that future high signal-to-noise measurements of the multipoles $10\leqslant\ell<20$ are especially important for breaking the degeneracy between $x_e^\mathrm{min}$ and $z_\mathrm{re}$. In addition, we show that the uncertainties on these parameters determined from a map with sensitivity $10\,\mathrm{\mu K\,arcmin}$ are less than 5% larger than the uncertainties in the noiseless case, making this noise level a natural target for future large sky area E-mode measurements.'
author:
- 'D. J. Watts'
- 'G. E. Addison'
- 'C. L. Bennett'
- 'J. L. Weiland'
bibliography:
- 'reio.bib'
- 'Planck\_bib.bib'
title: 'Beyond optical depth: Future determination of ionization history from the CMB'
---
Introduction {#sec:intro}
============
Cosmic reionization is a poorly understood part of standard $\Lambda$CDM cosmology. Reionization, when neutral hydrogen and helium in the intergalactic medium (IGM) becomes ionized, creates a plasma that scatters CMB photons [@rees; @basko; @bond]. This reduces the amplitude of the CMB anisotropy at angular scales $\ell\gtrsim10$ and creates additional polarized power that dominates at scales $\ell\lesssim10$ [@zaldarriaga]. We illustrate the separate effects of reionization and recombination on the E-mode power spectrum in . Because the temperature and E-mode polarization angular power spectra ($C_\ell^\mathrm{TT}$, $C_\ell^\mathrm{TE}$, and $C_\ell^\mathrm{EE}$) depend on the redshift of scattering, their characterization at high signal-to-noise can be used to constrain ionization histories.
It is known from observations that, after becoming neutral at the epoch of recombination, the universe was once again ionized by $z=6$ [e.g., @gunnpeterson; @becker; @fan]. Determinations of the ionization fraction of the IGM have been made at redshifts $6\lesssim z\lesssim 8$ [@bouwens; @greig; @banados; @mason; @davies] by probing the epoch of reionization via measurements of Lyman $\alpha$ emission, but these data are sparse and do not yet constrain the free electron fraction during the epoch of reionization [see e.g., @plancklegacy Figure 36].
Commonly, CMB constraints on the reionization history of the universe are derived assuming a sharp transition from a neutral to fully ionized IGM. Measurements of the large-scale CMB polarization constrain the ionization history by inferring the optical depth to the last scattering surface of the CMB, $\tau\equiv \int_{t_\mathrm{lss}}^{t_0} c\sigma_\mathrm Tn_e(t){\,\mathrm d}t$, where $c$ is the speed of light, $\sigma_\mathrm T$ is the Thomson scattering cross section, $n_e(t)$ is the free electron number density, $t_0$ is the current age of the universe, and $t_\mathrm{lss}$ is the last time photons interacted with matter during the epoch of recombination. Determining the free electron density $n_e(t)$ is then an inverse problem that relies on assumptions and priors. For example, a tanh-like reionization history [e.g., @tanh Equation B3] with a transition from neutral to ionized at a single reionization redshift $z_\mathrm{re}$ with width $\delta z_\mathrm{re}=0.5$ has been used [e.g., @wmapparams; @planckparams18 Section 3.3]. Observations from the *Wilkinson Microwave Anisotropy Probe* ([*WMAP*]{}) satellite were used to make a measurement of the optical depth from the surface of last scattering $\tau=0.089\pm0.014$ [@wmapparams], although this decreases to $\tau=0.067\pm 0.013$ when using [*Planck*]{} 353 GHz data as a template to remove Galactic dust emission [@plancklike16]. @plancklegacy increased the precision of this measurement to $\tau=0.0544\pm0.0073$. @pagano claim to have further reduced large scale [*Planck*]{}systematics, reporting $\tau=0.059\pm0.006$.
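As an illustration of the $z_\mathrm{re}\leftrightarrow\tau$ mapping, the Python sketch below evaluates the optical depth integral for a sharp transition at $z_\mathrm{re}$, using fiducial flat-$\Lambda$CDM parameters (the value $h=0.6736$ is an assumption, not taken from the text). It neglects the tanh smoothing, helium's second ionization, and residual electrons from recombination, so it lands slightly below the published $\tau(z_\mathrm{re}=7.67)\simeq0.054$.

```python
import math

C_LIGHT = 2.99792458e8      # m s^-1
SIGMA_T = 6.6524587e-29     # Thomson cross section, m^2
M_P = 1.6726219e-27         # proton mass, kg
RHO_CRIT_H2 = 1.87834e-26   # critical density / h^2, kg m^-3
MPC_KM = 3.0856776e19       # km per Mpc

def tau_sharp(z_re, h=0.6736, omega_b_h2=0.02237, omega_c_h2=0.1200, x_e=1.08):
    """Optical depth for instantaneous reionization completing at z_re.

    x_e = 1.08 approximates hydrogen plus singly ionized helium
    (f_He = n_He/n_H ~ 0.08 for Y_p ~ 0.24).  Illustrative sketch only.
    """
    n_h0 = 0.76 * omega_b_h2 * RHO_CRIT_H2 / M_P     # hydrogen number density today, m^-3
    H0 = 100.0 * h / MPC_KM                          # Hubble constant, s^-1
    om = (omega_b_h2 + omega_c_h2) / h**2            # matter density (neutrinos ignored)
    ol = 1.0 - om                                    # flat universe
    # tau = int_0^{z_re} c sigma_T x_e n_H0 (1+z)^2 / H(z) dz, by the trapezoid rule
    n, acc = 20000, 0.0
    dz = z_re / n
    for i in range(n + 1):
        z = i * dz
        w = 0.5 if i in (0, n) else 1.0
        acc += w * (1.0 + z)**2 / math.sqrt(om * (1.0 + z)**3 + ol)
    return C_LIGHT * SIGMA_T * x_e * n_h0 / H0 * acc * dz
```

Running `tau_sharp(7.67)` gives roughly $0.053$; the residual difference from $0.0544$ comes mostly from the neglected helium second ionization and the smoothed transition.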
As a cross-check, it is possible to obtain competitive constraints without using CMB polarization. [*Planck*]{} temperature measurements combined with [*Planck*]{} weak lensing and baryon acoustic oscillation (BAO) data give ${\tau=0.067 \pm 0.016}$ [@planck2014-a15], consistent with results using [*WMAP*]{} temperature, [*Planck*]{} weak lensing, and BAO data, ${\tau=0.066\pm0.020}$ [@weilandtau]. @weilandtau include a compilation of $\tau$ measurements, and conclude that the measured values are all consistent with $\tau=0.07\pm0.02$. Unlike the Hubble constant $H_0$ (e.g., @bernal, @freedman, @addisonH0, and @riess), the issue with reionization is not tension between measurements but a lack of desired precision.
![Effect of reionization on the $C_\ell^\mathrm{EE}$ power spectrum. We take the difference between an E-mode signal with $\tau=0$ and one with $\tau=0.06$ with fixed $A_s e^{-2\tau}$ to demonstrate the effects of tanh-like reionization on $C_\ell^\mathrm{EE}$ versus those from recombination. The black dashed line is the total $C_\ell^\mathrm{EE}$ spectrum when $\tau=0.06$. The E-mode signal from recombination dominates above $\ell\gtrsim20$, whereas the reionization signal emerges at multipoles $\ell\lesssim20$.[]{data-label="fig:reio_reco"}](fig1.pdf){width="\columnwidth"}
Using the one-to-one mapping of $\tau\leftrightarrow z_\mathrm{re}$ in tanh-like reionization, @plancklegacy use the low-$\ell$ polarization power spectra to infer ${z_\mathrm{re}=7.67\pm0.73}$ ([*Planck*]{}likelihood `Plik` best fit), while measurements of the kinetic Sunyaev-Zel’dovich effect at arcminute scales by the South Pole Telescope (SPT) and the Atacama Cosmology Telescope (ACT) can be used to limit the duration of inhomogeneous reionization to ${\delta z_\mathrm{re}<2.8}$ at the 95% C.L. with the prior that reionization ends by $z=6$ [@sptksz; @actksz; @planckreio].
It is typically assumed that the universe was ionized by ultraviolet photons from massive stars escaping from galaxies. However, indirect measurements using absorption spectra from gamma-ray bursts suggest that star formation and gamma-ray bursts are somehow decoupled [@2006Natur.441..463F], that the escape fraction of star forming galaxies is $\lesssim1\%$ rather than the 10–20% required to ionize the IGM, or that the nature of star forming galaxies changes significantly at $z\gtrsim6$ [@1997ApJ...476..458H; @2014arXiv1405.7400C]. Other potential mechanisms with different redshift dependence have also been put forward. In particular, binary black hole collisions can be a source of X-rays at $z\gtrsim30$, which can raise the ionizing fraction with less fractional contribution from star formation [@2016MNRAS.461.2722I]. Quasars and annihilating particles have also been proposed as ionizing mechanisms [@2008mgm..conf..979M; @2015ApJ...813L...8M; @2016MNRAS.457.4051K; @2018MNRAS.473.1416M].
As we look to the future with more sensitive data, we would like to make quantitative statements about a more detailed physical model for reionization. In this work, we explore potential future CMB constraints on the reionization history as parametrized by both instantaneous and extended redshift scenarios, focusing on a history that consists of the usual late-time reionization plus a second, early high-redshift period that partially ionizes the universe.
This paper is organized as follows. In , we quantify the relative constraining power for parameter likelihoods based on [$\hat C_\ell^\mathrm{EE}$]{} alone, [$\hat C_\ell^\mathrm{TE}$]{} alone, and [${\hat C_\ell^\mathrm{TT}+\hat C_\ell^\mathrm{TE}+\hat C_\ell^\mathrm{EE}}$]{}. We define the different likelihoods in and obtain constraints on the nearly instantaneous tanh-like reionization model in . In we explore a toy reionization history model that consists of the usual instantaneous reionization and a second early (high redshift) period of reionization that partially ionizes the universe. We then quantify the projected limits the CMB can impose on a reionization history of this type with free parameters of reionization redshift $z_\mathrm{re}$ and high-redshift ionization fraction $x^\mathrm{min}_e$. We describe this modification to the standard reionization history in . We then forecast sensitivity to this model’s parameters as a function of noise and multipole range in , and demonstrate that most of the parameter space can be precisely constrained with the map sensitivity ${w_p^{-1/2}\lesssim 10\,\mathrm{\mu K\,arcmin}}$ using the multipole range ${10\lesssim\ell\lesssim20}$. We summarize our findings in .
Throughout this paper our default model is flat $\Lambda$CDM with the @planckparams18 `Plik` TT,TE,EE+lowE+lensing mean parameters ${\Omega_bh^2=0.02237}$, ${\Omega_ch^2=0.1200}$, ${100\theta_\mathrm{MC}=1.04092}$, $\ln(10^{10}A_s e^{-2\tau})=2.9352$, and ${n_s=0.9649}$. When $\tau$ is varied, $\ln(10^{10}A_s)$ is set to $2.9352+2\tau$.
Maximizing information used\
in power spectrum analysis {#sec:emodes}
============================
In this section, we develop a formalism for extracting reionization information from a full-sky map of the intensity and linear polarization of the CMB. In , we define the three likelihoods we use for different subsets of data; Wishart (for [${\hat C_\ell^\mathrm{TT}+\hat C_\ell^\mathrm{TE}+\hat C_\ell^\mathrm{EE}}$]{}), $\chi^2$ (for [$\hat C_\ell^\mathrm{EE}$]{}), and variance-gamma (for [$\hat C_\ell^\mathrm{TE}$]{}). In , we characterize these likelihoods for the case of instantaneous tanh-like reionization.
Likelihoods for power spectra {#subsec:like_ps}
-----------------------------
In standard $\Lambda$CDM, the CMB Stokes parameters ${\boldsymbol m=(I,Q,U)}$ are a realization of a Gaussian random process. The spherical harmonic transforms of these maps $\boldsymbol a_{\ell m}=(a_{\ell m}^\mathrm T,a_{\ell m}^\mathrm E,a_{\ell m}^\mathrm B)$ are therefore also Gaussian distributed. Neglecting B-modes, the $\boldsymbol a_{\ell m}$ are distributed as a complex Gaussian $\boldsymbol a_{\ell m}\sim\mathcal N\left(\boldsymbol 0,\boldsymbol{\mathsf C}_\ell\right)$ with mean $\boldsymbol 0$ and covariance $$\boldsymbol {\mathsf C}_\ell=\begin{pmatrix}
C_\ell^\mathrm{TT}&C_\ell^\mathrm{TE}
\\
C_\ell^\mathrm{TE}&C_\ell^\mathrm{EE}
\end{pmatrix}.$$
As demonstrated in @hamimeche, the sample covariance matrix of measured power spectra $\hat{\boldsymbol{\mathsf C}}_\ell$ drawn from a theory covariance matrix $\boldsymbol{\mathsf C}_\ell$ is given by a Wishart distribution, $$(2\ell+1)\hat{\boldsymbol{\mathsf C}}_\ell\equiv\sum_m\boldsymbol a_{\ell m}^\dagger\boldsymbol a_{\ell m}^{\phantom{\dagger}}
\sim W_n(2\ell + 1, \boldsymbol{\mathsf C}_\ell)$$ where $n$ is the number of dimensions in $\boldsymbol a_{\ell m}$. A Wishart distribution is a multivariate gamma distribution. A gamma distribution is a two-parameter probability distribution of which the $\chi^2$ distribution is a special case. When considered as a likelihood $\mathcal L({\boldsymbol{\mathsf C}}_\ell)\equiv P({\hat{\boldsymbol{\mathsf C}}}_\ell|{\boldsymbol{\mathsf C}}_\ell)$, this is often normalized such that ${\chi^2_{\mathrm{eff},\ell}\equiv -2\ln \mathcal L({\boldsymbol{\mathsf C}}_\ell)=0}$ when ${\boldsymbol{\mathsf C}}_\ell={\hat{\boldsymbol{\mathsf C}}}_\ell$, i.e., $$-2\ln \mathcal L({\boldsymbol{\mathsf C}}_\ell)=(2\ell+1)\Big[\operatorname{Tr}[{\hat{\boldsymbol{\mathsf C}}}_\ell {\boldsymbol{\mathsf C}}_\ell^{-1}]-\ln|{\hat{\boldsymbol{\mathsf C}}}_\ell{\boldsymbol{\mathsf C}}^{-1}|
-n\Big].
\label{eq:chi2_eff}$$ In the single-dimensional case, this reduces to the more familiar $\chi^2$ distribution, $$-2\ln\mathcal L(C_\ell)=(2\ell+1)\left[\frac{\hat C_\ell}{C_\ell}-\ln\frac{\hat C_\ell}{C_\ell} -1\right],$$ in agreement with Equation 8 of @hamimeche when normalized such that $\ln\mathcal L=0$ when ${C_\ell=\hat C_\ell}$.
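For concreteness, the effective $\chi^2$ above and its single-spectrum reduction can be coded directly. The Python sketch below works with the $2\times2$ TT/TE/EE covariance per multipole and is normalized to vanish when the model matches the measurement.

```python
import math

def chi2_eff(chat, cth, ell):
    """-2 ln L for a 2x2 spectrum covariance [[TT, TE], [TE, EE]] at multipole ell.

    chat, cth: measured and model matrices as ((tt, te), (te, ee)) nested tuples.
    Normalized so the result is 0 when chat == cth (Wishart likelihood, n = 2).
    """
    (htt, hte), (_, hee) = chat
    (tt, te), (_, ee) = cth
    det_th = tt * ee - te * te
    # Tr[chat cth^{-1}] and the determinant ratio |chat|/|cth|, written out explicitly
    trace = (htt * ee - 2.0 * hte * te + hee * tt) / det_th
    det_ratio = (htt * hee - hte * hte) / det_th
    return (2 * ell + 1) * (trace - math.log(det_ratio) - 2.0)

def chi2_eff_1d(chat, cth, ell):
    """Single-spectrum (e.g. EE-only) chi^2 reduction of the same likelihood."""
    r = chat / cth
    return (2 * ell + 1) * (r - math.log(r) - 1.0)
```

Both functions are non-negative and reach zero only at ${\boldsymbol{\mathsf C}}_\ell={\hat{\boldsymbol{\mathsf C}}}_\ell$, which is the normalization convention used above.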
We also use the distribution of $\hat C_\ell^\mathrm{TE}$, i.e., the mean of the product of correlated Gaussian random variables. This was derived in @mangilli and independently in @corrgauss and @variancegamma, and is given by a variance-gamma distribution (also called a generalized Laplace distribution or a Bessel function distribution) with functional form $$\label{eq:vargam}
P(\hat C_\ell^\mathrm{TE}|\boldsymbol\theta)
=\frac{N^{(N+1)/2}|\hat{c}|^{(N-1)/2}e^{N\rho\hat c/\xi}K_{\ell}\left(\frac{N|\hat c|}\xi\right)}
{2^{(N-1)/2}\sqrt\pi\Gamma(N/2)\sqrt \xi(\sigma_\ell^\mathrm{TT}\sigma_\ell^\mathrm{EE})^{N/2}},$$ where $\boldsymbol\theta=\{C_\ell^\mathrm{TT},C_\ell^\mathrm{TE},C_\ell^\mathrm{EE}\}$, $\hat c=\hat C_\ell^\mathrm{TE}$, $\rho=C_\ell^\mathrm{TE}/(\sigma_\ell^\mathrm{EE}\sigma_\ell^\mathrm{TT})$ is the correlation coefficient between the two noisy vectors, ${\sigma_\ell^\mathrm{XX}=\sqrt{C_\ell^\mathrm{XX}+N_\ell^\mathrm{XX}}}$ is the total uncertainty on the power spectrum $C_\ell^\mathrm{XX}$, $N_\ell^\mathrm{XX}$ is the noise power spectrum, $N=2\ell+1$ is the number of modes per multipole, $\xi=(1-\rho^2)\sigma_\ell^\mathrm{TT}\sigma_\ell^\mathrm{EE}$ is a useful auxiliary variable, $\Gamma$ is the gamma function, and $K_\nu$ is the modified Bessel function of the second kind of order $\nu$.
To better understand the variance-gamma distribution, we show how it reduces to the $\chi^2$ distribution when taking a cross spectrum of identical vectors, i.e., $\rho\to1$. This distribution $P(x)$ is proportional to $|x|^{(N-1)/2} e^{N\rho x/\xi}K_\ell\left(\frac{N|x|}{\xi}\right)\xi^{-1/2}$. The modified Bessel function of the second kind decays exponentially, and its zeroth order expansion is given by $K_\nu(x)\approx \sqrt{\frac{\pi}{2x}}e^{-x}$ [@abramowitz+stegun]. In the limit of large $x$, the functional form of the variance-gamma distribution goes to $$\begin{aligned}
P(x)&\propto |x|^{(N-1)/2}
\exp\left(\frac{N\rho x}{\xi}\right)
\exp\left(-\frac{N|x|}{\xi}\right)\sqrt{\frac{\pi \xi}{2N|x|}}\xi^{-1/2}
\\
&\propto |x|^{N/2-1}
\exp\left(\frac{\rho x-|x|}{1-\rho^2}\right).\end{aligned}$$ For perfectly correlated variables, the correlation $\rho=1$ and the data are positive definite with $x\geqslant0$, giving $P(x)\propto x^{N/2-1}e^{-x/2}$, the $\chi^2$ distribution with $N$ degrees of freedom.
This parametrization of the variance-gamma distribution has mean and variance per multipole $$\begin{aligned}
\langle\hat C_\ell^\mathrm{TE}\rangle&=C_\ell^\mathrm{TE}\\
\mathrm{var}(\hat C_\ell^\mathrm{TE})&=\frac1{2\ell+1}\left[(C_\ell^\mathrm{TE})^2+C_\ell^\mathrm{TT}
C_\ell^\mathrm{EE}\right],
\label{eq:varTE}\end{aligned}$$ in agreement with the mean and variance of the off-diagonal component of the Wishart distribution and the Gaussian distribution of $a_{\ell m}^\mathrm T$ and $a_{\ell m}^\mathrm E$. We have also validated the functional form using $10^4$ realizations of $\boldsymbol a_{\ell m}$ vectors, and find that the distribution of $\hat C_\ell^\mathrm{TE}$ agrees with .
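A simplified version of that validation can be sketched as follows (Python, treating the $2\ell+1$ modes as real Gaussian pairs rather than complex harmonic coefficients): draw correlated pairs, form $\hat C_\ell^\mathrm{TE}$, and compare its sample mean and variance with the expressions above.

```python
import math
import random

def sample_clhat_te(cl_tt, cl_te, cl_ee, ell, n_real, rng):
    """Draw n_real realizations of the measured TE cross spectrum at multipole ell."""
    rho = cl_te / math.sqrt(cl_tt * cl_ee)     # correlation coefficient
    s_t, s_e = math.sqrt(cl_tt), math.sqrt(cl_ee)
    n_modes = 2 * ell + 1
    out = []
    for _ in range(n_real):
        acc = 0.0
        for _ in range(n_modes):
            g1, g2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
            a_t = s_t * g1                                          # temperature mode
            a_e = s_e * (rho * g1 + math.sqrt(1.0 - rho**2) * g2)   # correlated E mode
            acc += a_t * a_e
        out.append(acc / n_modes)
    return out
```

For $C_\ell^\mathrm{TT}=4$, $C_\ell^\mathrm{EE}=1$, $C_\ell^\mathrm{TE}=1$ and $\ell=10$, the sample mean converges to $C_\ell^\mathrm{TE}$ and the sample variance to $[(C_\ell^\mathrm{TE})^2+C_\ell^\mathrm{TT}C_\ell^\mathrm{EE}]/(2\ell+1)$.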
Likelihood for instantaneous reionization {#subsec:like_zreio}
-----------------------------------------
To demonstrate the relative constraining power of the Wishart, $\chi^2$, and variance-gamma likelihoods, we start with the theoretical power spectra as a function of the reionization optical depth $\tau$ in the case of instantaneous reionization, $C_\ell^\mathrm{TT/TE/EE}=f(\tau, A_s)$, with $A_s e^{-2\tau}$ fixed. Additionally, we include a white noise component that is uncorrelated between $I$, $Q$, and $U$ and whose amplitude $w_p^{-1/2}$ varies between 0–230 $\mathrm{\mu K\,arcmin}$. Using this formalism allows us to make predictions for the best-case constraining power on $\tau$ for future experiments, assuming instantaneous reionization.
![Normalized product of 50000 likelihood distributions of $\hat{\mathbf C}_\ell$ realizations with input $\tau=0.06$. We plot the likelihood from the variance-gamma distribution for [$\hat C_\ell^\mathrm{TE}$]{} (red), the likelihood from the $\chi^2$ distribution for [$\hat C_\ell^\mathrm{EE}$]{} (orange), and the likelihood from the Wishart distribution for [${\hat C_\ell^\mathrm{TT}+\hat C_\ell^\mathrm{TE}+\hat C_\ell^\mathrm{EE}}$]{} (blue). The standard deviations of these distributions for input $\tau=0.06$ are $\sigma_\tau=\{0.0072, 0.0021, 0.0017\}$ respectively. []{data-label="fig:neglect_te"}](fig2.pdf){width="\columnwidth"}
We characterize the likelihood of $\tau$ by evaluating $\mathcal L(\tau|\{\hat C_\ell^\mathrm{TT/TE/EE}\})$ for many realizations of the CMB sky. We create 50000 realizations of $\boldsymbol a_{\ell m}$ with $2\leqslant\ell\leqslant100$ to test this formalism using the `HEALPix`[^1] routine `synalm`. In , we show the averaged likelihood of these different spectra in the case of a full-sky cosmic variance-limited measurement, and obtain $\sigma_\tau^\mathrm{TT+TE+EE}=0.0017$, $\sigma_\tau^\mathrm{EE}=0.0021$, and $\sigma_\tau^\mathrm{TE}=0.0072$. The TE-only constraint is comparable to the uncertainty from [*Planck*]{}, $\sigma_\tau^\mathit{Planck}=0.007$, which only includes E-mode data. The distribution for [$\hat C_\ell^\mathrm{TE}$]{} in is visibly skewed. This is a manifestation of the underlying skewed distributions that the $\hat C_\ell^\mathrm{TT/TE/EE}$ are themselves drawn from.
Since the uncertainty on $\hat C_\ell^\mathrm{TE}$ in is a function of $(C_\ell^\mathrm{EE,th}+N_\ell^\mathrm{EE})(C_\ell^\mathrm{TT,th}+N_\ell^\mathrm{TT})$, it is reasonable to ask whether there is a combination of uncertainties that makes $\sigma_\tau^\mathrm{TE}$ competitive with $\sigma_\tau^\mathrm{EE}$. demonstrates that at a given polarized white noise level $w_p^{-1/2}$, the constraining power on $\tau$ from $\hat C_\ell^\mathrm{TE}$ alone is a factor of $\sim3.5$ weaker than the $\hat C_\ell^\mathrm{EE}$ constraint. This means that using [${\hat C_\ell^\mathrm{TT}+\hat C_\ell^\mathrm{TE}+\hat C_\ell^\mathrm{EE}}$]{} results in an approximately $20\%$ increase in precision compared to using [$\hat C_\ell^\mathrm{EE}$]{} data alone.
The white noise temperature component is functionally negligible for this analysis. We can see this by looking at the terms contributing to the white noise in $\hat C_\ell^\mathrm{TE}$, ${(C_\ell^\mathrm{TT,th}+N_\ell^\mathrm{TT})(C_\ell^\mathrm{EE,th}+N_\ell^\mathrm{EE})}$. The theory-noise cross-terms are comparable when $N_\ell^\mathrm{TT}C_\ell^\mathrm{EE,th}\approx N_\ell^\mathrm{EE}C_\ell^\mathrm{TT,th}$. Since $C_\ell^\mathrm{TT,th}/C_\ell^\mathrm{EE,th}\simeq10^4$ for $\ell\lesssim100$, the polarization sensitivity $w_p^{-1/2}$ would have to be $\mathcal O(10^{-2})$ times that of temperature for the temperature spectrum’s white noise component to noticeably contribute to the [$\hat C_\ell^\mathrm{TE}$]{} variation.
![Uncertainty on $\tau$ as as a function of white noise amplitude in polarization for a full-sky measurement. Using $\hat C_\ell^\mathrm{TE}$ alone is always less constraining than $\hat C_\ell^\mathrm{EE}$ by a factor of $\sim3.5$. Including [$\hat C_\ell^\mathrm{TE}$]{}and $\hat C_\ell^\mathrm{TT}$ data improves the precision of a $\tau$ measurement by 20% over using $\hat C_\ell^\mathrm{EE}$ alone. []{data-label="fig:compare_sigma"}](fig3.pdf){width="\columnwidth"}
The CMB’s sensitivity to varying reionization histories {#sec:fisher}
=======================================================
We begin discussing a specific simple model for early reionization in , then discuss quantitative forecasts in .
A simple model for early reionization {#subsec:reio}
-------------------------------------
We explore the constraining power of low-multipole CMB polarization data using a specific parametrization of the reionization history. We parametrize the global reionization history $x_e(z)$ using the ratio of free electrons to hydrogen nuclei as a function of time, $x_e\equiv n_e/n_\mathrm H$,[^2] and write the contribution to the reionization optical depth between two redshifts $z_1$ and $z_2$ as $$\label{eq:tauz}
\tau(z_1,z_2)\equiv\int_{t(z_1)}^{t(z_2)}c\sigma_\mathrm T x_e\big[z(t)\big] n_\mathrm H\big[z(t)\big]{\,\mathrm d}t.$$ We parametrize the reionization history using a similar model to that used in Equation A3 of @heinrich, $$\begin{aligned}
\label{eq:xe}
x_e(z)={}&\frac{1+f_\mathrm{He}-x_e^\mathrm{min}}{2}
\left\{1+\tanh\left[\frac{y_\mathrm{re}-y}{\delta y}\right]\right\}
\nonumber
\\
&+\frac{x_e^\mathrm{min}-x_e^\mathrm{rec}}{2}
\left\{1+\tanh\left[\frac{y_\mathrm t-y}{\delta y}\right]\right\}
+x_e^\mathrm{rec}\end{aligned}$$ where $y(z)\equiv (1+z)^{3/2}$ and $\delta y = \frac32(1+z)^{1/2}\delta z_\mathrm{re}$. The ionization fraction from recombination alone is $x_e^\mathrm{rec}$, the second transition step occurs at redshift $z_\mathrm t$, the amplitude of reionization from the second transition is $x_e^\mathrm{min}$, and the fraction of electrons from singly ionized helium is given by $f_\mathrm{He}\equiv n_\mathrm{He}/n_\mathrm H$. We use this form because it parametrizes a small but non-zero early ionization fraction. An upper limit on $x_e(15\leq z\leq30)$ was first inferred by @millea and further constrained by @planckparams18. Figure 45 of @planckparams18 shows that above $z\gtrsim10$, [*Planck*]{} measurements do not rule out $x_e^\mathrm{min}\approx10\%$. Motivated by this result, we choose a fiducial value of $x_e^\mathrm{min}=0.05$ to demonstrate the potential effects this ionization fraction can have on CMB measurements. We also choose $z_\mathrm{re}=6.75$ so that the total optical depth $\tau=0.06$ is consistent with the [@planckparams18] values. We highlight the parameters of this model in , with $x_e^\mathrm{min}$ set to $0.2$ for visibility purposes.
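As a numerical cross-check of these fiducial choices, the two-step tanh history and the optical-depth integral can be sketched directly. The cosmological constants below ($\Omega_m$, $H_0$, $n_{\mathrm H,0}$, and $f_\mathrm{He}$ for $Y_p=0.25$) are our own assumed Planck-like values, not quantities taken from the text:

```python
import numpy as np

# Parameters from the text; the cosmology values (Om, H0, n_H0, F_HE)
# are assumed Planck-like numbers, not taken from the source.
Z_RE, DZ_RE = 6.75, 0.5        # late-reionization midpoint and width
Z_T = 30.0                     # redshift where the early step begins
XE_MIN, XE_REC = 0.05, 2e-4    # early plateau and recombination leftover
F_HE = 0.0811                  # n_He/n_H for Y_p = 0.25 (assumed)

def x_e(z):
    """Two-step tanh ionization history (the x_e(z) equation above)."""
    z = np.asarray(z, dtype=float)
    y = (1.0 + z) ** 1.5
    dy = 1.5 * np.sqrt(1.0 + z) * DZ_RE
    step = lambda y0: 0.5 * (1.0 + np.tanh((y0 - y) / dy))
    return ((1.0 + F_HE - XE_MIN) * step((1.0 + Z_RE) ** 1.5)
            + (XE_MIN - XE_REC) * step((1.0 + Z_T) ** 1.5) + XE_REC)

def tau(z1, z2, n=20000):
    """Optical depth between z1 < z2, using dt = -dz / [(1+z) H(z)]."""
    sigma_T, c = 6.652e-25, 2.998e10   # Thomson cross section [cm^2], c [cm/s]
    n_H0 = 1.88e-7                     # present hydrogen density [cm^-3]
    H0 = 67.7e5 / 3.086e24             # Hubble constant [1/s]
    Om = 0.31
    z = np.linspace(z1, z2, n)
    H = H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))
    f = c * sigma_T * n_H0 * x_e(z) * (1.0 + z) ** 2 / H
    dz = z[1] - z[0]
    return float(dz * (f.sum() - 0.5 * (f[0] + f[-1])))  # trapezoid rule

print(round(tau(0.0, 100.0), 3))  # close to the fiducial tau = 0.06
```

With these assumed constants the integrated optical depth lands near the fiducial $\tau=0.06$, confirming the consistency of the $(z_\mathrm{re}, x_e^\mathrm{min})$ choices quoted above.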
![Visualization of our toy model for early reionization. We indicate the central redshift of late reionization $z_\mathrm{re}$ (red), the amplitude of the early reionization fraction $x_e^\mathrm{min}$ (blue), the redshift $z_\mathrm t$ where early reionization begins (cyan), and the width $\delta z_\mathrm{re}$ of these transitions (brown). We inflate $x_e^\mathrm{min}\to0.2$ to illustrate this parameter’s effect in the model.[]{data-label="fig:model"}](fig4.pdf){width="\columnwidth"}
We show the dependence of $C_\ell^\mathrm{EE}$ and $C_\ell^\mathrm{TE}$ on these reionization histories in . We choose the ranges of the parameters such that they induce roughly equivalent changes in the amplitude of the output power spectrum. The equivalent white noise powers are labeled on the right-hand side of . We also vary $\delta z_\mathrm{re}$ to show that although this parameter does affect the power spectra, unphysically large widths $\delta z_\mathrm{re}\gtrsim5$ are needed to affect the power spectra as much as $z_\mathrm{re}$ and $x_e^\mathrm{min}$. We fix the width of these transitions to $\delta z_\mathrm{re}=0.5$ because it is weakly constrained by E-mode power spectra for a reionization history that is complete by $z=0$.
{width="0.8\paperwidth"}
Constraints on high-redshift reionization {#subsec:fisher_forecasts}
-----------------------------------------
In the parametrization of , it is natural to define and constrain the parameters ${\tau_{\mathrm{lo}}\equiv\tau(0, z_\mathrm{split})}$ and $\tau_{\mathrm{hi}}\equiv \tau(z_\mathrm{split},z_\mathrm{dark})$. We choose $z_\mathrm{dark}=100$ as a redshift sufficiently far removed from both recombination and reionization effects. We define ${z_\mathrm{split}\equiv z_\mathrm{re}+1}$. This parametrization essentially allows a one-dimensional mapping such that ${\tau_\mathrm{lo}=f(z_\mathrm{re})}$ and ${\tau_\mathrm{hi}=g(x_e^\mathrm{min})}$. In the case of standard tanh-like reionization, $\tau_{\mathrm{lo}}\to\tau$ and $\tau_{\mathrm{hi}}\to0$, or equivalently $x_e^\mathrm{min}\to x_e^\mathrm{rec}$.
The primary effect of adding a second component to $x_e(z)$ is an increase in the total reionization optical depth $\tau$, and therefore the rough amplitudes of the polarized power spectra, specifically $C_\ell^\mathrm{EE}\propto\tau^2$ and $C_\ell^\mathrm{TE}\propto\tau$ at the lowest multipoles $\ell\lesssim10$. The second and more distinguishing effect is that both of these power spectra change shape due to the different angular sizes of local quadrupoles at the primary and secondary reionization redshifts. This provides an opportunity to go beyond $\tau$ in probing the nature of reionization. We demonstrate the effects of varying $x_e^\mathrm{min}$ and $z_\mathrm{re}$ on the polarized power spectra (see ) using the Boltzmann code `CLASS` [@CLASS]. For every reionization history, we compute $\tau$ and vary $A_s$ such that $A_s e^{-2\tau}$ is held constant.
Using the `CLASS` code, we set `reio_parameterization` equal to `reio_many_tanh` with $\delta z_\mathrm{re}=0.5$, fixing ${z_\mathrm t=30}$, ${f_\mathrm{He}=1.324}$ using the fiducial helium mass fraction ${Y_p=0.25}$, and ${x_e^\mathrm{rec}=2\times10^{-4}}$. We vary $z_\mathrm{re}$ and $x_e^\mathrm{min}$ to write the cosmological power spectrum as a function of two parameters, $C_\ell^\mathrm{TT/TE/EE}=f(z_\mathrm{re}, x_e^\mathrm{min})$.
![Effective goodness-of-fit as a function of multipole. The top subplot varies $z_\mathrm{re}$ and the bottom varies $x_e^\mathrm{min}$. Using an observed set of power spectra $\{\hat C_\ell^\mathrm{TT}, \hat C_\ell^\mathrm{TE}, \hat C_\ell^\mathrm{EE}\}$ that are identical to their theory values $\{C_\ell^\mathrm{TT}, C_\ell^\mathrm{TE}, C_\ell^\mathrm{EE}\}$ with $z_\mathrm{re}=6$ and $x_e^\mathrm{min}=0$, we calculate the global goodness-of-fit while varying $z_\mathrm{re}$ and $x_e^\mathrm{min}$ independently of each other. We note that the $2\lesssim\ell\lesssim20$ range of angular scales contains most of the effective constraining power of polarized CMB measurements. []{data-label="fig:chi2_per_ell"}](fig6_a.pdf "fig:"){width="\columnwidth"} ![Effective goodness-of-fit as a function of multipole. The top subplot varies $z_\mathrm{re}$ and the bottom varies $x_e^\mathrm{min}$. Using an observed set of power spectra $\{\hat C_\ell^\mathrm{TT}, \hat C_\ell^\mathrm{TE}, \hat C_\ell^\mathrm{EE}\}$ that are identical to their theory values $\{C_\ell^\mathrm{TT}, C_\ell^\mathrm{TE}, C_\ell^\mathrm{EE}\}$ with $z_\mathrm{re}=6$ and $x_e^\mathrm{min}=0$, we calculate the global goodness-of-fit while varying $z_\mathrm{re}$ and $x_e^\mathrm{min}$ independently of each other. We note that the $2\lesssim\ell\lesssim20$ range of angular scales contains most of the effective constraining power of polarized CMB measurements. []{data-label="fig:chi2_per_ell"}](fig6_b.pdf "fig:"){width="\columnwidth"}
In , we plot $\chi^2_{\mathrm{eff},\ell}$ using . By varying $z_\mathrm{re}$ and $x_e^\mathrm{min}$ separately, we can observe a few noteworthy features. First, although there is more variation in the power spectra at the very largest scales, the constraining power peaks at $\ell\simeq10$, corresponding to fluctuations on scales of tens of degrees. Second, the two different reionization histories have notably different $\chi^2_{\mathrm{eff},\ell}$, demonstrating that the partial degeneracy between these two modifications to reionization history can be broken with high signal-to-noise measurements across this range of angular scales. The very largest scales $\ell<10$ are much more constraining for $z_\mathrm{re}$ than $x_e^\mathrm{min}$, whereas the $10\leqslant\ell<20$ range is very sensitive to both parameters.
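A per-multipole statistic of this kind can be sketched with the standard Wishart-based effective $\chi^2$ for the joint $(T,E)$ spectral covariance, $(2\ell+1)\,[\mathrm{Tr}(C^{-1}\hat C)-\ln\det(C^{-1}\hat C)-n]$. This is our own illustrative implementation, not necessarily the exact statistic used in the paper:

```python
import numpy as np

def chi2_eff_ell(ell, C_hat, C_th, f_sky=1.0):
    """Effective chi^2 contribution of one multipole for the 2x2
    (T, E) spectral covariance matrices C = [[TT, TE], [TE, EE]].
    Vanishes exactly when the observed C_hat equals the model C_th."""
    X = np.linalg.solve(C_th, C_hat)       # C_th^{-1} C_hat
    sign, logdet = np.linalg.slogdet(X)
    return f_sky * (2 * ell + 1) * (np.trace(X) - logdet - X.shape[0])

# Toy example at ell = 10: a model that over-predicts EE power by 10%.
# The matrix entries are illustrative numbers, not measured spectra.
C_obs = np.array([[1000.0, 5.0], [5.0, 0.05]])
C_mod = np.array([[1000.0, 5.0], [5.0, 0.055]])
print(chi2_eff_ell(10, C_obs, C_obs))  # ~0 up to rounding
print(chi2_eff_ell(10, C_obs, C_mod))  # positive
```

Summing this quantity over $\ell$ and over the model grid reproduces the qualitative behaviour in the figure: multipoles where the model spectra differ most, relative to cosmic variance, dominate the total.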
![Fisher forecasts for cosmic variance limited [${\hat C_\ell^\mathrm{TT}+\hat C_\ell^\mathrm{TE}+\hat C_\ell^\mathrm{EE}}$]{}data over subsets of multipole ranges. The high opacity and low opacity ellipses are $1\sigma$ and $2\sigma$ contours, respectively. When considering the two-parameter reionization model, the $10\leqslant\ell<20$ range is most important for distinguishing between alternate models of reionization. This range has the maximum $\chi^2_\mathrm{eff}$ variation due to its relatively strong model dependence compared to higher multipoles and relatively small cosmic variance compared to lower multipoles.[]{data-label="fig:contours_ellranges"}](fig7.pdf){width="\columnwidth"}
![Constraints on $\tau$ uncertainty as a function of white noise. In all cases, the uncertainty saturates at white noise level $w_p^{-1/2}\sim 10\,\mathrm{\mu K\,arcmin}$. Here we display the uncertainty on the optical depth from various components of reionization, as well as the total reionization optical depth $\tau_\mathrm{tot}$. []{data-label="fig:sig_versus_noise"}](fig8.pdf){width="\columnwidth"}
We quantify this multipole dependence by performing Fisher forecasts on subsets of cosmic variance-limited data in . As expected, there is relatively little constraining power in the $30\lesssim\ell\lesssim100$ multipole range, but the majority of constraining power comes from the $10\lesssim\ell\lesssim20$ range, in agreement with the shape of the $\chi^2_{\mathrm{eff},\ell}$ curves in . While there is significant constraining power in the $2\lesssim\ell\lesssim10$ and $20\lesssim\ell\lesssim30$ ranges, the 10–20 range is by far the most important for quantitative assessment of this reionization scenario.
We show the uncertainty on the optical depth parameters $\tau_\mathrm{lo}$, $\tau_\mathrm{hi}$, $\tau_\mathrm{tot}\equiv\tau_\mathrm{lo}+\tau_\mathrm{hi}$, and $\tau$ from tanh-like reionization as a function of white noise level in . This demonstrates that the optical depth from high-redshift reionization can be meaningfully constrained with relatively high white noise levels, and that the uncertainty on any additional optical depth from high-redshift sources can be improved by an order of magnitude above current measurements with white noise levels as high as $10\,\mathrm{\mu K\,arcmin}$.
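The forecasting machinery behind these curves can be sketched for a single spectrum as $F_{ij}=\sum_\ell f_\mathrm{sky}\,\frac{2\ell+1}{2}\,\partial_i C_\ell\,\partial_j C_\ell/(C_\ell+N_\ell)^2$ with $\sigma_i=\sqrt{(F^{-1})_{ii}}$. The model below is a deliberately toy EE-like spectrum whose amplitude scales as $\tau^2$, purely to exercise the machinery; it is not the paper's forecast pipeline:

```python
import numpy as np

def fisher(model, theta0, ells, noise, f_sky=1.0, eps=1e-4):
    """Fisher matrix for a single-field (e.g. EE) power spectrum.
    `model(theta)` returns C_ell over `ells`; `noise` is N_ell.
    Derivatives are two-sided finite differences with step `eps`."""
    p = len(theta0)
    derivs = []
    for i in range(p):
        up, dn = np.array(theta0, float), np.array(theta0, float)
        up[i] += eps
        dn[i] -= eps
        derivs.append((model(up) - model(dn)) / (2 * eps))
    Ctot = model(np.array(theta0, float)) + noise
    F = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            F[i, j] = np.sum(f_sky * (2 * ells + 1) / 2
                             * derivs[i] * derivs[j] / Ctot ** 2)
    return F

# Toy "reionization bump": a log-normal shape peaking near ell ~ 5,
# with amplitude proportional to tau^2 (illustrative only).
ells = np.arange(2, 101)
cl_shape = np.exp(-((np.log(ells) - np.log(5.0)) / 1.0) ** 2)
model = lambda th: th[0] ** 2 * cl_shape          # th = [tau]

F = fisher(model, [0.06], ells, noise=np.zeros_like(ells, float))
sigma_tau = np.sqrt(np.linalg.inv(F)[0, 0])       # cosmic-variance limit
```

Restricting the sum over `ells` to sub-ranges, or adding a white-noise $N_\ell$, reproduces the qualitative trends of the multipole-range and noise-level forecasts discussed above.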
The constraining power of low-$\ell$ polarization data can most clearly be seen in the Fisher contours in . At current noise levels, the constraints on the reionization redshift are relatively weak, and the presence of high-redshift reionization cannot be distinguished from instantaneous reionization. As expected, there is a negative degeneracy between $x_e^\mathrm{min}$ and $z_\mathrm{re}$ that is most pronounced at high noise levels where only the lowest multipoles contribute to the variation of the power spectra, while the degeneracy becomes less severe as the noise level decreases.
demonstrates the possible advances in our understanding of reionization from the CMB. The ultimate sensitivity to $(z_\mathrm{re},x_e^\mathrm{min})=(6.75, 0.05)$ — equivalent to $\tau_\mathrm{tot}=0.06$ — from the CMB is shown in blue, using a Fisher forecast with zero instrumental noise. This noiseless measurement represents the fundamental limits for constraining these reionization parameters with large-scale CMB polarization measurements. We plot Fisher contours for noise levels ${w_p^{-1/2}=\{10, 60, 100\}\,\mathrm{\mu K\,arcmin}}$. The smallest number corresponds to the projected Cosmology Large Angular Scale Surveyor (CLASS) white noise level over 70% of the sky in @essinger-hileman using all four of its observing bands. This value is a benchmark for detecting primordial gravitational waves with a tensor-to-scalar ratio $r\sim0.01$, a goal for the current generation of ground-based CMB measurements. The $w_p^{-1/2}=60\,\mathrm{\mu K\,arcmin}$ white noise level corresponds to the sensitivity of the CLASS Q-band ($40\,\mathrm{\mu K\,arcmin}$) cleaned using the [*WMAP*]{}K-band ($280\,\mathrm{\mu K\,arcmin}$) as a synchrotron template. The $100\,\mathrm{\mu K\,arcmin}$ value corresponds to the geometric mean of the 100 GHz and 143 GHz white noise levels reported in Table 4 of @plancklegacy.
We transform the contours in to the integrated quantities $\tau_\mathrm{lo}$ and $\tau_\mathrm{hi}$, both approximately single-variable functions of $z_\mathrm{re}$ and $x_e^\mathrm{min}$, respectively, in . We also plot lines of constant $\tau=\tau_\mathrm{lo}+\tau_\mathrm{hi}$ to show the total integrated contribution of this two-parameter reionization model.
We summarize the results of this section in . We highlight data rows that are particularly constraining. This includes the full resolution cosmic variance measurement, the noiseless measurement with $10\leqslant\ell<20$, and the $w_p^{-1/2}=10\,\mathrm{\mu K\,arcmin}$ measurements. We highlight these to emphasize the relative importance of future data with these properties to constrain reionization histories.
With our fiducial parameters, the ultimate sensitivity to this model with large-scale CMB anisotropy measurements is $\sigma_{z_\mathrm{re}}=0.3$ and $\sigma_{x_e^\mathrm{min}}=0.005$. Remarkably, this constraint does not weaken appreciably either when examining only the multipole range $10\leqslant\ell<20$, or when the data are contaminated by white noise at the $10\,\mathrm{\mu K\,arcmin}$ level.
![Fisher forecasts for $x_e^\mathrm{min}$ and $z_\mathrm{re}$ as a function of white noise level. The high opacity and low opacity ellipses are $1\sigma$ and $2\sigma$ contours, respectively. A noiseless measurement (blue) represents the fundamental limits of constraining these reionization parameters with large-scale CMB polarization measurements. For comparison, a $10\,\mathrm{\mu K\,arcmin}$ white noise level is shown (orange) and is almost completely hidden under the blue $0\,\mathrm{\mu K\, arcmin}$ contour. We also plot the projected white noise contribution for a CLASS Q-band foreground cleaned map (red), and the white noise contribution in the [*Planck*]{} 2018 $\hat C_\ell^{100\times143}$ data (cyan). []{data-label="fig:contours"}](fig9.pdf){width="\columnwidth"}
![Fisher forecasts for $\tau_\mathrm{lo}$ and $\tau_\mathrm{hi}$ as a function of white noise level. The color of the contours is the same as in . Additionally, we plot dashed lines of constant $\tau=\tau_\mathrm{lo}+\tau_\mathrm{hi}$ in five equal steps from $\tau=0.04$ to $\tau=0.08$, denoted by increasing dash length. The optimal $\tau_\mathrm{hi}$ uncertainty is $0.001$, while Fisher forecasts using $100\,\mathrm{\mu K\,arcmin}$ white noise project an uncertainty of $\sigma_{\tau_\mathrm{hi}}=0.008$, which is degraded further by its strong negative degeneracy with $\tau_\mathrm{lo}$. []{data-label="fig:contours_tau"}](fig10.pdf){width="\columnwidth"}
[|r|c||r|r|r|r|]{} $w_p^{-1/2}\,[\mathrm{\mu K\,arcmin}]$ & $\ell$ range & $\sigma_{z_\mathrm{re}}$ & $\sigma_{x_e^\mathrm{min}}$ & $\sigma_{\tau_\mathrm{lo}}$ & $\sigma_{\tau_\mathrm{hi}}$\
**0** &$\boldsymbol{\phn2\leqslant\ell<100}$ & **0.3** & **0.005** &**0.003**&**0.001**\
0 &$\phn2\leqslant\ell<\phn10$ & 0.8&0.020&0.007&0.005\
**0** &$\boldsymbol{10\leqslant\ell<\phn20}$ & **1.0** &**0.005**&**0.009**&**0.001**\
0 &$20\leqslant\ell<\phn30$ & 5.9& 0.024&0.054&0.006\
0 &$30\leqslant\ell<100$ & 10.8 & 0.152&0.991&0.038\
**10** & $\boldsymbol{\phn2\leqslant\ell<100}$ & **0.4**&**0.005**&**0.003**&**0.001**\
60 & $\phn2\leqslant\ell<100$ & 0.7 &0.018&0.006&0.004\
100 & $\phn2\leqslant\ell<100$ & 1.0 & 0.031&0.009&0.008
Conclusions {#sec:conclusions}
===========
In this work we have explored the constraining power of the CMB temperature and E-mode polarization on reionization history.
- We have demonstrated the potential for a $20\%$ improvement on the precision of the reionization optical depth $\tau$ by using [$\hat C_\ell^\mathrm{TT}$]{} and [$\hat C_\ell^\mathrm{TE}$]{} in addition to [$\hat C_\ell^\mathrm{EE}$]{} when the white noise level drops below $10\,\mathrm{\mu K\,arcmin}$.
- We have shown that in the case of an early 5% ionization fraction, a scenario allowed by measurements in @planckparams18, the maximum precision from CMB large-scale measurements is $\sigma_{z_\mathrm{re}}=0.3$ and $\sigma_{x_e^\mathrm{min}}=0.005$. This constraint is very nearly met when the white noise level is $10\,\mathrm{\mu K\,arcmin}$, with $\sigma_{z_\mathrm{re}}=0.4$ and $\sigma_{x_e^\mathrm{min}}=0.005$. We have also shown that a key multipole range for this scenario is $10\leqslant\ell\leqslant20$, where $\sigma_{z_\mathrm{re}}=1.0$ and $\sigma_{x_e^\mathrm{min}}=0.005$.
Future measurements of the large-scale polarized CMB will be made by CLASS [@essinger-hileman currently observing] and *LiteBIRD*[^3] [@litebird expected launch late 2020s]. *LiteBIRD*’s goal of measuring primordial B-modes with $\sigma_r=0.001$ through ${w_p^{-1/2}=2\,\mathrm{\mu K\,arcmin}}$ observations of the whole sky will be able to constrain the reionization model presented in this paper to its cosmic variance limit. CLASS, now operating, has a projected sensitivity of $w_p^{-1/2}=10\,\mathrm{\mu K\,arcmin}$ over 70% of the sky; although less sensitive, this will, as we have demonstrated here, be more than sufficient to constrain a period of early reionization to its cosmic variance limit.
This work has focused on the ultimate sensitivity to a specific toy model of reionization with an early high-redshift contribution. Processes that ionize the IGM across different epochs of cosmic time will generate different $x_e(z)$ profiles. Constraints on our model can therefore help discriminate between physical mechanisms that ionized the IGM. Reionization history constraints from CMB measurements will both inform and complement future tomographic measurements of 21 cm emission and of the first generation of galaxies designed to characterize $x_e(z)$. Knowledge of the ionization history is also important for understanding the large scale B-modes in the reionization peak, whose fluctuations are created during the same epoch as large scale E-modes. Fluctuations attributed to deviations from single-field slow roll inflation can also be induced by deviations from the standard tanh-like reionization model, and these effects must be taken into account when analyzing large angular scale B-modes.
[^1]: **H**ierarchical **E**qual **A**rea iso**L**atitude **Pix**elation<https://healpix.sourceforge.io/>
[^2]: The free electron fraction $x_e$ is greater than one at low redshifts because of the free electrons corresponding to helium. When helium is singly ionized, the electron number density is $n_e=n_\mathrm H+n_\mathrm{He}=(1+f_\mathrm{He})n_\mathrm H$, and when it is doubly ionized $n_e=(1+2f_\mathrm{He})n_\mathrm H$.
[^3]: **Lite** (Light) satellite for the studies of **B**-mode polarization and **I**nflation from cosmic background **R**adiation **D**etection
|
---
abstract: 'Besides Standard Model measurements and other Beyond Standard Model studies, the ATLAS and CMS experiments at the LHC will search for Supersymmetry, one of the most attractive explanations for dark matter. The SUSY discovery potential with early data is presented here, together with some first results obtained with 2010 collision data at 7 . Emphasis is placed on the measurements and parameter determinations that can be performed to disentangle possible SUSY models from SUSY look-alikes, and on the interpretation of a possible positive supersymmetric signal as an explanation of dark matter.'
address:
- |
$^1$ Instituto de Física Corpuscular (IFIC), CSIC – Universitat de València,\
Parc Científic, Apartado de Correos 22085, E-46071, Valencia, Spain
- '$^2$ CERN, PH Department, CH-1211 Geneva 12, Switzerland'
author:
- 'Vasiliki A Mitsou$^{1,2}$'
title: Dark matter searches at LHC
---
Introduction {#sc:intro}
============
Unveiling the nature of dark matter (DM) [@dm-review] is a quest in both Astrophysics and Particle Physics. Among the list of well-motivated candidates, the most popular particles are cold and weakly interacting, and typically predict missing energy signals at particle colliders. Supersymmetry (SUSY) and models with extra dimensions are theoretical scenarios that inherently provide such a dark matter candidate. The Large Hadron Collider (LHC) [@lhc], currently in operation at CERN in Geneva, Switzerland, is an ideal machine for discovering DM at colliders and for exploring both phenomenological and purely theoretical aspects of DM.
The exploration of dark matter at colliders is complemented by other types of searches for particle dark matter: direct detection in low background underground experiments [@direct] and indirect detection of neutrinos, gamma-rays and antimatter with terrestrial and space-borne detectors [@indirect]. Experiments at future colliders, such as the ILC [@ilc] and CLIC [@clic], are expected to further constrain such models and make a key step in understanding dark matter.
The structure of this paper is as follows. Section \[sc:dm\] provides a brief introduction to dark matter properties as defined by the current cosmological data. In section \[sc:lhc\] the features of collider experiments that play a central role in exploring DM are highlighted. In section \[sc:susy\] we discuss the strategy for discovering supersymmetry at the LHC, some recent results and prospects for the near future. Studies on methods to constrain dark matter parameters at the LHC, such as particle masses and spins, are reviewed in section \[sc:dmlhc\]. Some alternative theoretical models yielding a modified DM density and its implications for SUSY searches are discussed in section \[sc:alt\]. The paper concludes with an outlook in section \[sc:out\].
Dark matter evidence {#sc:dm}
====================
The nature of the dark sector of our Universe constitutes one of the major mysteries of fundamental physics. According to observations over the past two decades, most of our Universe’s energy budget consists of unknown entities: $\sim\!23\%$ is dark matter and $\sim\!72\%$ is dark energy, a form of ground-state energy. Dark energy is believed to be responsible for the current-era acceleration of the Universe. Dark matter, on the other hand, is matter inferred to exist from gravitational effects on visible matter, but is undetectable by emitted or scattered electromagnetic radiation. A possible explanation then —other than the introduction of a new particle— is to ascribe the observed effects to modified Newtonian dynamics (MOND) [@mond]. There is a variety of theoretical proposals predicting DM particles that interact only weakly, as discussed in detail below; however, the possibility of charged dark matter still remains open [@khlopov].
The energy budget of the Cosmos (fig. \[fg:budget\]) has been obtained by combining a variety of astrophysical data, such as type-Ia supernovae [@snIa], cosmic microwave background (CMB) [@wmap], baryon oscillations [@bao] and weak lensing data [@lensing]. The most precise measurement comes from anisotropies of the cosmic microwave background [@wmap; @cmb]. The third peak in the temperature power spectrum, shown in fig. \[fg:cmb\], can be used to extract information about the dark matter contribution to the Universe energy budget.
![\[fg:cmb\]Temperature power spectrum from WMAP7. The third acoustic peak is sensitive to the dark matter density [@cmb].](budget){width="\textwidth"}
![\[fg:cmb\]Temperature power spectrum from WMAP7. The third acoustic peak is sensitive to the dark matter density [@cmb].](cmb){width="\textwidth"}
Evidence from the formation of large-scale structure (galaxies and their clusters) strongly favour cosmologies where non-baryonic DM is entirely composed of cold dark matter (CDM), i.e. non-relativistic particles.[^1] CDM particles, in turn, may be axions [@axion], superheavy non-thermal relics (wimpzillas, cryptons) [@shdm] or weakly interacting massive particles (WIMPs). The latter class of DM candidates arises naturally in models which attempt to explain the origin of electroweak symmetry breaking and this is precisely where the connection between Cosmology and Particle Physics lies. Furthermore the typical (weak-scale) cross sections characterizing these models are of the same order of magnitude as the WIMP annihilation cross section, thus establishing the so-called *WIMP miracle*.
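The *WIMP miracle* can be made concrete with the textbook order-of-magnitude freeze-out relation $\Omega_\chi h^2 \approx 3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$. This is a standard rough estimate, not a result of this paper; real calculations solve the Boltzmann equation through freeze-out:

```python
def relic_density(sigma_v):
    """Rough freeze-out estimate of the relic abundance:
    Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.  Order of magnitude only;
    proper treatments integrate the Boltzmann equation."""
    return 3e-27 / sigma_v

# A weak-scale annihilation cross section, <sigma v> ~ 3e-26 cm^3/s,
# lands near the observed Omega_CDM h^2 ~ 0.1 -- the "WIMP miracle".
print(relic_density(3e-26))  # 0.1
```

The point of the estimate is that a generic weak-scale cross section, with no tuning, yields roughly the observed cold dark matter density.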
WIMPs and colliders {#sc:lhc}
===================
WIMP dark matter candidates include the lightest neutralino in models with weak-scale supersymmetry [@susy], while Kaluza-Klein photons arise in scenarios with universal extra dimensions (UED) [@ued], and lightest $T$-odd particles are predicted in Little Higgs models [@little] with a conserved $T$-parity. The common denominator in these theories is that they all predict the existence of an electrically neutral, colourless and *stable* particle, whose decay is prevented by a kind of symmetry: -parity, connected to baryon and lepton number conservation in SUSY models; $KK$-parity, the four-dimensional remnant of momentum conservation in the extra dimensions; and a $Z_2$ discrete symmetry called $T$-parity in Little Higgs models. The origin of DM can be attributed to more than one particle, even within the same theoretical framework, e.g. in the degenerate scenario of the next-to-minimal supersymmetric standard model (NMSSM) [@grigoris].
In this paper, we focus on SUSY signatures, although these may be very similar to the other afore-mentioned models. -parity is defined as: $R = (-1)^{3(B-L)+2S}$, where $B$, $L$ and $S$ are the baryon number, lepton number and spin, respectively. Hence $R=+1$ for all Standard Model particles and $R=-1$ for all SUSY particles. It is stressed that the conservation of -parity is an *ad hoc* assumption. The only firm restriction comes from the proton lifetime: non-conservation of both $B$ and $L$ leads to rapid proton decay. -parity conservation has serious consequences in SUSY phenomenology in colliders: the SUSY particles are produced in pairs and the lightest SUSY particle (LSP) is absolutely stable.
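The parity assignment above can be illustrated with a few lines of code (our own sketch; the quantum numbers used are the standard ones for each particle):

```python
from fractions import Fraction

def r_parity(B, L, S):
    """R = (-1)^{3(B-L)+2S}: +1 for Standard Model particles,
    -1 for their superpartners.  B, L, S may be ints or fractions."""
    exponent = 3 * (Fraction(B) - Fraction(L)) + 2 * Fraction(S)
    assert exponent.denominator == 1, "exponent must be an integer"
    return 1 if exponent % 2 == 0 else -1

print(r_parity(0, 1, 0.5))              # electron:   +1 (SM)
print(r_parity(0, 0, 0.5))              # neutralino: -1 (superpartner)
print(r_parity(Fraction(1, 3), 0, 0))   # squark:     -1 (superpartner)
```

Because every superpartner flips the spin by a half-unit relative to its Standard Model counterpart while keeping $B$ and $L$, the exponent changes parity, which is why sparticles are always produced in pairs when $R$-parity is conserved.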
A brief aside is in order here on the issue of (not necessarily cold) dark matter in SUSY models with -parity violation (RPV) [@rpv]. These seemingly incompatible concepts *can* be reconciled in models with a gravitino [@rpv-grav] or an axino [@rpv-axino] LSP with a lifetime exceeding the age of the Universe. In both cases, RPV is induced by bilinear terms in the superpotential that can also explain current data on neutrino masses and mixings without invoking any GUT-scale physics. Decays of the next-to-lightest superparticle occur rapidly via the RPV interaction, and thus they do not upset Big-Bang nucleosynthesis, unlike the -parity conserving case. -violating couplings can be sufficiently large to lead to interesting expectations for collider phenomenology; these will be the standard signatures of -parity conserving supersymmetry, with multi-lepton or multi-jet events and the possibility of explicit lepton number violation in the final state [@lola]. Nevertheless, determining whether -parity is conserved or broken may not be trivial, as WIMPs, whether absolutely stable or quasi-stable, cannot be detected directly in collider experiments.
Indeed, weakly interacting massive particles interact neither electromagnetically nor hadronically with matter and thus, once produced, they traverse the various detector layers without leaving a trace (just like neutrinos do). However, by exploiting the hermeticity of the experiments, we can get a hint of the WIMP presence through the balance of the energy/momentum measured in the various detector components, the so-called *missing energy*. In hadron colliders, in particular, since the (longitudinal) momenta of the colliding partons are unknown, only the *transverse missing energy*, , can be reliably used to ‘detect’ DM particles.
SUSY(-like) searches {#sc:susy}
====================
Here we focus on searches for dark matter in the two general-purpose experiments of the LHC, namely ATLAS [@atlas-det] and CMS [@cms-det]. We review recent studies[^2] performed on up to 300 of LHC proton-proton collisions data at a centre-of-mass energy of 7 . For comprehensive accounts of the envisaged analyses, based on Monte Carlo simulations, the reader is referred to the ATLAS ‘CSC Book’ [@atlas-csc] and the CMS Physics Technical Design Report [@cms-tdr].
At the LHC, supersymmetric particles are expected to be predominantly produced hadronically, i.e. through gluino-pair, squark-pair and squark-gluino production. Each of these (heavy) sparticles is going to decay into lighter ones in a cascade decay that finally leads to an LSP, which in most of the scenarios considered is the lightest neutralino , as depicted in fig. \[fg:cascade\]. The two LSPs would escape detection giving rise to high transverse missing energy, which is rigorously defined in fig. \[fg:met\]. Such a simulated event as it will appear in the ATLAS detector is illustrated in fig. \[fg:etmiss\].
![\[fg:etmiss\]Transverse view of a simulated SUSY event in the ATLAS detector [@alan]. Note the imbalance of the deposited energy distribution towards the right-hand side. Two (stable) lightest neutralinos (not shown) escape detection towards the left-hand side.](cascade){width="\textwidth"}
$$\begin{aligned}
\met &\equiv& \left\Vert -\sum_{i}\vec{p}_{T,i} \right\Vert \\
&=& \sqrt{\Big(\sum_{i}p_{x,i}\Big)^{2} + \Big(\sum_{i}p_{y,i}\Big)^{2}}\end{aligned}$$
![\[fg:etmiss\]Transverse view of a simulated SUSY event in the ATLAS detector [@alan]. Note the imbalance of the deposited energy distribution towards the right-hand side. Two (stable) lightest neutralinos (not shown) escape detection towards the left-hand side.](etmiss){width="\textwidth"}
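The definition above amounts to a simple vector sum over the visible objects; a minimal sketch, assuming each object is represented by its transverse momentum components $(p_x, p_y)$ in GeV:

```python
import math

def met(particles):
    """Missing transverse energy: magnitude of the negative vector sum
    of the visible transverse momenta (px, py)."""
    px = sum(p[0] for p in particles)
    py = sum(p[1] for p in particles)
    return math.hypot(px, py)  # |-v| has the same magnitude as |v|

# A balanced dijet event has no MET; dropping one particle from the
# visible sum (an "invisible" WIMP) leaves its pT as missing energy.
dijet = [(50.0, 0.0), (-50.0, 0.0)]
print(met(dijet))       # 0.0
print(met(dijet[:1]))   # 50.0
```

This is exactly why escaping LSPs show up as an imbalance of the deposited energy, as in the event display of fig. \[fg:etmiss\].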
The search strategy followed in the inclusive channels is based on the detection of high , many jets and possibly energetic leptons. The analyses make extensive use of data-driven Standard Model background measurements. Detailed studies have been carried out for various signatures using Monte Carlo data-sets fully simulated with Geant4 [@geant4] for specific SUSY signal parameters and for the relevant Standard Model backgrounds. Although various SUSY-breaking mechanisms have been considered by the two collaborations, the early results and projections for higher integrated luminosity highlighted here have been obtained in the context of the minimal Supergravity (mSUGRA) model. By assumption the mSUGRA model avoids both flavour-changing neutral currents and extra sources of $CP$ violation. For masses in the TeV range, it typically predicts too much cold dark matter unless something enhances the annihilation, as discussed in section \[sc:alt\]. The specific points mentioned here, denoted SU$n$ for ATLAS and LM$n$ for CMS, have been chosen to be roughly consistent with the observed CDM and represent a variety of different decay modes.
SUSY searches require careful control over backgrounds from standard model processes. Several methods for data-driven background determinations were developed and tested on early LHC $pp$ collision data. Such a method allowed CMS to study QCD backgrounds, to control jet-energy mismeasurement, and to measure background contributions from processes producing non-prompt leptons or hadrons misidentified as leptons [@cms-at]. It is based on the discriminating variable for the all-hadronic channel, defined for two jets j1 and j2 as: $$\at \equiv \frac{E{\rm _{T}^{j2}}}{M{\rm _{T}^{j1,j2}}},
\label{eq:atone}$$ where $M{\rm _{T}^{j1,j2}}$ is the transverse mass of the two-jet system; for massless jets this reduces to $$\at = \frac{\sqrt{E{\rm _{T}^{j2}} / E{\rm _{T}^{j1}}}}{\sqrt{2(1-\cos\Delta\phi)}},
\label{eq:attwo}$$ with $\Delta\phi$ the azimuthal angle between the two jets. The method is roughly based on the concept of reversing a cut on a control variable ($H_{\rm T}\equiv\sum_{\text{jets}\,j}p_{{\rm T}j}$) to check the (ideally) signal-free region of the discriminating variable (). The MC-versus-data agreement in a control sample ($80<H_{\rm T}<120~\gev$), far from the signal region ($H_{\rm T}>350~\gev$), is illustrated in fig. \[fg:cms-at\]. A study of the $\at > 0.55$ rejection power as the $H_{\rm T}$ threshold increases demonstrated the robust behaviour of the cut [@cms-at]. A detailed account of recent SUSY studies performed at CMS with LHC early data is presented in ref. [@widl], whereas a search for long-lived (‘stopped’) gluinos is given in ref. [@azzurri] among other searches for New Physics.
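The dijet kinematics of eq. \[eq:attwo\] can be sketched directly (our own illustrative code, not the CMS implementation):

```python
import math

def alpha_t(et1, et2, dphi):
    """Dijet alpha_T = sqrt(ET2/ET1) / sqrt(2 (1 - cos dphi)),
    with et1 >= et2 by convention and dphi the angle between jets."""
    return math.sqrt(et2 / et1) / math.sqrt(2.0 * (1.0 - math.cos(dphi)))

# Perfectly balanced back-to-back QCD dijets give alpha_T = 0.5 exactly;
# jet-energy mismeasurement only pushes alpha_T *below* 0.5, while genuine
# MET (acoplanar jets recoiling against invisibles) can push it above 0.5,
# motivating the alpha_T > 0.55 signal selection.
print(alpha_t(100.0, 100.0, math.pi))  # 0.5
print(alpha_t(100.0, 70.0, math.pi))   # < 0.5 (mismeasured QCD-like)
```

The robustness of the cut follows from this kinematic bound: ordinary QCD events cannot exceed $\at=0.5$ without real missing transverse energy.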
![\[fg:atlas-met\]Distribution of the missing transverse momentum for ATLAS events in the 0-lepton, three-jet channel after basic preselection [@atlas-met]. Note the long tail in the prediction for the SU4 mSUGRA point.](cms-at){width="\textwidth"}
![\[fg:atlas-met\]Distribution of the missing transverse momentum for ATLAS events in the 0-lepton, three-jet channel after basic preselection [@atlas-met]. Note the long tail in the prediction for the SU4 mSUGRA point.](atlas-met){width="\textwidth"}
The first LHC collision data at 7 TeV collected by ATLAS and CMS also allowed testing the robustness of the missing-transverse-momentum-based strategy for discovering SUSY. This is illustrated in fig. \[fg:atlas-met\], where the Monte Carlo distributions for various SM processes that may fake a SUSY-like signal are drawn. The estimates are predominantly based on Monte Carlo simulations, although in some cases they have been scaled by factors deduced from data. The data agree well with the simulation in the 0-lepton plus three-jet selection [@atlas-met]. In the same plot, the prediction for the (low-mass, high-cross-section) mSUGRA point SU4[^3] is superposed. The characteristic long tail at high missing transverse momentum from the production of DM particles, discussed in section \[sc:lhc\], is clearly visible. More insight into current DM searches performed by ATLAS, either related to supersymmetry or to universal extra dimensions, can be found in ref. [@siragusa].
The LHC data processed and analyzed so far amount to $\mathcal{O}(100~\inb)$, not sufficient to supersede the exclusion limits set by LEP and the Tevatron. Nevertheless, with the $\sim35~\ipb$ already recorded by ATLAS and CMS and the few inverse femtobarns expected to be delivered in the next 1–2 years, a discovery will be within reach. This is demonstrated in fig. \[fg:cms-reach\] for CMS [@cms-2010-008] and in fig. \[fg:atlas-reach\] for ATLAS [@atl-phys-pub-2010-010] for integrated luminosities from 100 pb$^{-1}$ to 1 fb$^{-1}$. The largest coverage is achieved through the all-hadronic channel, while comparable reach is provided by the 1-lepton mode. Other channels, e.g. the 2-lepton one, apart from providing cross checks, will play a central role —in the event of a discovery— in constraining sparticle masses, as discussed in section \[sc:dmlhc\].
![\[fg:cms-reach\] Estimated 95% C.L. exclusion limits for the all-hadronic SUSY search, based on simulated events, expressed in mSUGRA parameter space [@cms-2010-008].](cms-reach){width="70.00000%"}
![\[fg:atlas-reach\] $5\sigma$ discovery reach as a function of $m_0$ and $m_{1/2}$ for a $\tan\beta=10$ mSUGRA scan for channels with 0, 1 and 2 leptons. The assumed integrated luminosity is 1 fb$^{-1}$ [@atl-phys-pub-2010-010].\
](atlas-reach){width="45.00000%"}
Pinning down dark matter at LHC {#sc:dmlhc}
===============================
Once clear evidence for a possible supersymmetric signal is established, the question arises of whether it implies the existence of SUSY or of one of its lookalikes. In addition, if we assume that it is a SUSY signal, we need to pin down the exact SUSY-breaking mechanism and measure its theoretical parameters. Both issues are discussed here, giving some examples of sparticle mass measurements and of spin and parameter determination.
Constraining sparticle masses {#sc:masses}
-----------------------------
As an example of the methods aiming at constraining sparticle-mass relations in sparticle cascade decays, we present here a dilepton analysis studied at CMS with simulated data [@cms-dilepton-one; @cms-dilepton-two]. Similar analyses with ATLAS are documented in refs. [@atlas-phys-tdr; @atlas-ued]. The analysis targets an integrated luminosity of 200–300 pb$^{-1}$ at $\sqrt{s}=10~\tev$ and its objectives are: (i) to observe a significant excess of opposite-sign same-flavour leptons over the various backgrounds, and (ii) to measure the endpoint in the dilepton invariant mass distribution. The latter is directly related to sparticle-mass differences, being sensitive to opposite-sign same-flavour (OS-SF) dileptons coming from the last stages of the decay chain of sparticles: $$\t{q} \to \t{\chi}_{2}^{0} \, q \to \t{\ell}^{\pm} \ell^{\mp} q \to \X\,\ell^{\pm} \ell^{\mp} q, \qquad \ell = e \text{ or } \mu
\label{eq:dilepton}$$ The shape of the distribution largely depends on the exact decay chain, so various mSUGRA benchmark points have been considered. For instance at LM0,[^4] the mass difference of the two lightest neutralinos is smaller than the $Z$ boson mass and any slepton mass. Two opposite-sign same-flavour leptons come from the three-body decay $\t{\chi}_2^0\to\X\ell^{\pm}\ell^{\mp}$, hence the edge position equals this mass difference: $$m_{\ell\ell,{\rm max}} = m_{\t{\chi}_2^0} - m_{\X}.
\label{eq:dilepton-diff}$$
At the LM1 point,[^5] on the other hand, the mass difference of the two lightest neutralinos is larger than the mass of the lightest slepton, so a slepton can appear as an intermediate product in the neutralino decay chain $\t{\chi}_2^0\to\t{\ell}_R\ell\to\X\ell^{\pm}\ell^{\mp}$. The equation connecting the position of the edge with the sparticle masses takes the form: $$(m_{\ell\ell,{\rm max}})^2 = \frac{(m_{\t{\chi}_2^0}^{2} - m_{\t{\ell}}^{2})(m_{\t{\ell}}^{2} - m_{\X}^{2})}{m_{\t{\ell}}^{2}}.
\label{eq:dilepton-differ}$$
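The two endpoint relations above can be sketched directly in code; the mass values used in the comments and tests are invented round numbers, not benchmark-point masses:

```python
import math

def edge_three_body(m_chi2, m_chi1):
    # chi2 -> chi1 l+ l- (three-body decay): the edge is the mass difference
    return m_chi2 - m_chi1

def edge_two_body(m_chi2, m_slep, m_chi1):
    # chi2 -> slepton l -> chi1 l+ l- (two sequential two-body decays);
    # requires the mass ordering m_chi2 > m_slep > m_chi1
    return math.sqrt((m_chi2**2 - m_slep**2) * (m_slep**2 - m_chi1**2)) / m_slep
```

Note that a measured edge alone does not fix the individual masses: it constrains one combination of two (or three) masses, which is why several endpoints have to be combined.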
The main sources of physics background are uncorrelated supersymmetric decays and SM processes: $t\bar{t}$, dibosons, and associated production of $W$/$Z$ bosons and jets. A data-driven strategy has been developed to estimate these background processes, based on different-flavour dilepton ($e\mu$) pairs, which are used to predict the background in the $ee$ and $\mu\mu$ combinations. It has been demonstrated that the background estimate is reliable for flavour-symmetric background processes.
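A minimal sketch of this flavour-subtraction idea: for backgrounds whose two leptons are flavour-uncorrelated, the $e\mu$ yield predicts the $ee+\mu\mu$ yield, so the same-flavour excess can be estimated by a simple subtraction. The efficiency-correction factor `r_emu` is an assumption added for illustration:

```python
def sf_excess(n_ee, n_mumu, n_emu, r_emu=1.0):
    """Same-flavour dilepton excess over flavour-symmetric backgrounds.

    n_ee, n_mumu, n_emu: observed OS dilepton counts per flavour combination
    r_emu: relative e/mu efficiency correction (1.0 = identical efficiencies)
    """
    background = r_emu * n_emu          # e-mu sample predicts the SF background
    excess = n_ee + n_mumu - background
    # statistical (Poisson-only) uncertainty on the excess
    sigma = (n_ee + n_mumu + r_emu**2 * n_emu) ** 0.5
    return excess, sigma
```

A genuine SUSY signal of the kind in the decay chain above would show up as a significant positive `excess`, since its leptons are strictly same-flavour.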
The position of the endpoint in the invariant mass distribution of the two leptons is then extracted via a maximum-likelihood fit with components describing the flavour-symmetric background processes (see fig. \[fg:cms-edge-1\], left) and the characteristic triangular shape of the supersymmetric signal (see fig. \[fg:cms-edge-1\], right). The latter may be fitted with a function corresponding to either a two- or a three-body decay.
![*Left:* The fit of the background function to the $e\mu$ invariant mass distribution [@cms-dilepton-two]. *Right:* The combined fit at LM0 for 200 pb$^{-1}$. The green curve represents the SUSY signal model, the red curve is the background function and the light green dashed line the contribution. The black points represent the MC events [@cms-dilepton-two]. []{data-label="fg:cms-edge-1"}](cms-edge-1b-bis "fig:"){width="49.00000%"}![*Left:* The fit of the background function to the $e\mu$ invariant mass distribution [@cms-dilepton-two]. *Right:* The combined fit at LM0 for 200 pb$^{-1}$. The green curve represents the SUSY signal model, the red curve is the background function and the light green dashed line the contribution. The black points represent the MC events [@cms-dilepton-two]. []{data-label="fg:cms-edge-1"}](cms-edge-1a-bis "fig:"){width="49.00000%"}
The number of signal events derived from the fit agrees with the true number of signal events [@cms-dilepton-two]. The theoretical endpoint value is reproduced by the fit with the three-body decay model, while it is underestimated if a two-body decay model is fitted. At the benchmark points LM1 and LM9,[^6] a higher integrated luminosity is necessary to measure the endpoint: the integrated luminosity needed to obtain a $5\sigma$ discovery using shape information is 250 pb$^{-1}$ at LM1 and 350 pb$^{-1}$ at LM9 [@cms-dilepton-two].
Further constraints can be set in more complex combinations of sparticle masses if jets are added to the invariant mass calculation [@atlas-csc]. If sufficient endpoints are known, then the masses themselves can be deduced, e.g. by using a numerical $\chi^2$ minimization based on the MINUIT package [@minuit] to extract the SUSY particle masses from a combination of endpoints. A first look at sparticle masses is possible with early data, although with large uncertainties. Appropriate model assumptions and additional information will probably have to be used to constrain the fits.
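The endpoint-to-mass extraction can be sketched as follows; a crude greedy random search stands in for the MINUIT-based $\chi^2$ minimization, the endpoint values and errors are invented toy numbers, and the $\ell\ell q$ expression used is only one kinematic case of a piecewise formula:

```python
import math, random

def edge_ll(msq, mx2, msl, mx1):
    # dilepton edge for the sequential two-body chain
    return math.sqrt((mx2**2 - msl**2) * (msl**2 - mx1**2)) / msl

def edge_llq(msq, mx2, msl, mx1):
    # one kinematic case of the llq endpoint (the full formula is piecewise)
    return math.sqrt((msq**2 - mx2**2) * (mx2**2 - mx1**2)) / mx2

# toy "measurements": (endpoint function, measured value, uncertainty) in GeV
MEAS = [(edge_ll, 100.0, 1.5), (edge_llq, 500.0, 10.0)]

def chi2(masses):
    return sum(((f(*masses) - v) / e) ** 2 for f, v, e in MEAS)

random.seed(1)
best = [600.0, 230.0, 160.0, 120.0]   # (m_squark, m_chi2, m_slepton, m_chi1) guess
best_c = start_c = chi2(best)
for _ in range(20000):                # greedy random search in place of MINUIT
    trial = [m + random.gauss(0.0, 5.0) for m in best]
    if not (trial[0] > trial[1] > trial[2] > trial[3] > 0.0):
        continue                      # keep the mass ordering of the decay chain
    c = chi2(trial)
    if c < best_c:
        best, best_c = trial, c
```

With only two endpoints and four masses the fit is under-constrained: many mass sets give a near-perfect $\chi^2$, which illustrates why model assumptions or additional endpoints are needed before the masses themselves can be quoted.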
Parameter determination
-----------------------
The next step after discovery will be to select specific supersymmetric decay chains to measure the properties of the new particles. Here we focus on how a selected set of early studies can be combined to obtain the first measurements of supersymmetric masses and of the parameters of the mSUGRA model with 1 fb$^{-1}$ of ATLAS data [@atlas-csc] at a centre-of-mass LHC energy of 10 TeV. Specific benchmark points in parameter space have been used to demonstrate the precision that can be expected from these measurements (such as the SU3[^7] point here), but the same (or similar) techniques can be applied to a considerable portion of the SUSY parameter space accessible with LHC data.
A first glimpse of the possible parameter space can be obtained by performing a Markov-chain analysis. With this technique it is possible to efficiently explore a high-dimensional parameter space and to check whether several topologically disconnected parameter regions are favoured by a given set of measurements.
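A minimal Metropolis sketch of such a Markov-chain scan over a two-parameter toy likelihood; the Gaussian "measurements" (central values and widths) are invented purely for illustration:

```python
import math, random

def log_like(m0, m12):
    # toy log-likelihood with invented central values and uncertainties
    return (-0.5 * ((m0 - 100.0) / 15.0) ** 2
            - 0.5 * ((m12 - 300.0) / 10.0) ** 2)

random.seed(2)
point = (150.0, 250.0)                 # arbitrary starting point
lp = log_like(*point)
chain = []
for _ in range(5000):
    prop = (point[0] + random.gauss(0.0, 10.0),
            point[1] + random.gauss(0.0, 10.0))
    lp_prop = log_like(*prop)
    # Metropolis acceptance: always accept uphill, sometimes downhill
    if math.log(random.random()) < lp_prop - lp:
        point, lp = prop, lp_prop
    chain.append(point)

burned = chain[1000:]                  # drop burn-in
mean_m0 = sum(p[0] for p in burned) / len(burned)
mean_m12 = sum(p[1] for p in burned) / len(burned)
```

The density of chain points maps the favoured region(s); disconnected islands of accepted points would signal topologically separate solutions of the kind mentioned above.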
A stringent constraint on the SUSY model can be achieved by fitting theoretical calculations for a given set of parameters —performed by spectrum calculators like SPheno [@spheno], SoftSUSY [@softsusy] or the ISASUSY [@isasusy] decay package of ISAJET [@isajet]— to the mass combinations acquired by measuring endpoints in invariant-mass distributions. This fitting can be performed by specialized parameter-fitting packages, such as Fittino [@fittino] or SFitter [@sfitter]. In order to estimate the expected precision of such measurements, a number of toy fits for a fixed ${\rm sgn}\,\mu$ have been performed by ATLAS [@atlas-csc]. The four-dimensional distribution of parameters obtained from these toy fits is used to derive the parameter uncertainties and their correlations. The mean values and uncertainties of the results for the parameters $m_0$, $m_{1/2}$, $\tan\beta$ and $A_0$ are listed in table \[tb:parameters\]. The parameters $m_0$ and $m_{1/2}$ can be derived reliably with uncertainties of $\mathcal{O}(10~\gev)$, whereas for $\tan\beta$ and $A_0$ only the order of magnitude can be derived from these measurements. The $\chi^2$ distribution of the fits can be used to evaluate the toy-fit performance. The observed mean $\chi^2= 12.6\pm0.2$ for ${\rm sgn}\,\mu=+1$ is compatible with the expected value of $N_{\rm dof} = 11$. The solutions for the wrong assumption ${\rm sgn}\,\mu=-1$, also reported in table \[tb:parameters\], cannot however be ruled out, as the observed mean $\chi^2=15.4\pm0.3$ is also acceptable.
  ----------------------- ----------- -------------- -------------
  Parameter               SU3 value   Fitted value   Uncertainty
  *${\rm sgn}\,\mu=+1$:*
  $\tan\beta$             6           7.4            4.6
  $m_0$ \[GeV\]           100         98.5           9.3
  $m_{1/2}$ \[GeV\]       300         317.7          6.9
  $A_0$ \[GeV\]           300         445            408
  *${\rm sgn}\,\mu=-1$:*
  $\tan\beta$                         13.9           2.8
  $m_0$ \[GeV\]                       104            18
  $m_{1/2}$ \[GeV\]                   309.6          5.9
  $A_0$ \[GeV\]                       489            189
  ----------------------- ----------- -------------- -------------

  : Mean values and uncertainties of the mSUGRA parameters obtained from the toy fits for both ${\rm sgn}\,\mu$ hypotheses [@atlas-csc].[]{data-label="tb:parameters"}
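The toy-fit idea behind such uncertainty estimates can be sketched simply: smear the "measured" observable within its uncertainty many times, refit each time, and take the spread of the results as the parameter uncertainty. The linear mapping below is a purely hypothetical stand-in for a full spectrum calculator (SPheno, SoftSUSY, ...), and all numbers are invented:

```python
import random, statistics

def predict_edge(m12):
    # hypothetical parameter -> observable mapping (stand-in for a
    # spectrum calculator); slope and offset are invented
    return 0.35 * m12 - 5.0

def fit_m12(edge):
    # invert the prediction to "fit" the parameter to one measured edge
    return (edge + 5.0) / 0.35

random.seed(3)
edge_meas, edge_err = 100.0, 1.5      # invented toy measurement (GeV)
toys = [fit_m12(edge_meas + random.gauss(0.0, edge_err)) for _ in range(2000)]
m12_hat = statistics.mean(toys)       # central value from the toy ensemble
m12_unc = statistics.stdev(toys)      # spread of refits = parameter uncertainty
```

In the real analysis the same logic runs over all measured endpoints simultaneously, and the multi-dimensional spread of the toy results yields the uncertainties and correlations quoted in the table.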
Hence with 1 fb$^{-1}$ the reconstruction of part of the supersymmetric mass spectrum will only be possible for favourable SUSY scenarios and under some assumptions about the decay chains involved. A larger integrated luminosity will help to overcome these limitations, as more measurements become possible and the precision of each increases. Furthermore, the mass-spectrum constraints, in conjunction with precision observables such as $(g-2)_{\mu}$ and $b\to s\gamma$, will illuminate the flavour mixing and possibly the $CP$ properties of the supersymmetric model [@neil].
Spin measurement
----------------
Measurements of the number of new particles and their masses will provide enough information to extract the parameters of one of the SUSY models. However, mass information alone will not be enough to distinguish between different new-physics scenarios. For example, universal extra dimensions [@ued] with Kaluza-Klein parity can have a mass spectrum very similar to that of certain SUSY models. The spin of the new particles is different, however, and can be used to discriminate between models [@Smillie:2005ar]. Another method, based on robust ratios of inclusive counts of simple physics objects, has also been proposed [@Hubisz:2008gg].
In order to measure the spin of a newly discovered particle, one possibility is to use two-body slepton decay chains such as the ones described earlier in this section. In particular the cascade decay of the $\t{q}_L$ to $\t{\chi}_2^0$, which further decays to a slepton (fig. \[fg:atl-spin-cascade\]), is very convenient for such measurements [@Barr:2004ze].
![\[fg:atl-spin-cascade\] Schematic view of $L$-type squark decay. The lepton from the $\t{\chi}_2^0$ decay is called *near*; the lepton from $\t{\ell}_{L,R}$ decay is called *far*.](atl-spin-cascade){width="35.00000%"}
The charge asymmetry of $\ell q$ pairs, for instance, can be used to measure the spin of $\t{\chi}_2^0$, while the shape of the dilepton invariant mass spectrum measures the slepton spin [@Biglietti:2007mj]. The first lepton in the decay chain is called the *near* lepton while the other is called the *far* lepton. The charge asymmetry $A$ of the invariant mass $m(q\ell_{\rm near})$ is defined as: $$\label{eq:asymmetry}
A \equiv \frac{s^+-s^-}{s^++s^-}\, ,$$ where $s^{\pm} = {\rm d}\sigma/{\rm d}m(q\ell_{\rm near}^{\pm})$.
In general it is not possible to distinguish between the near and the far lepton, so only $m(\bar{q}\ell_{\rm near})$ can be measured, diluting $A$. The expected asymmetry for the mSUGRA benchmark point SU3, as estimated by ATLAS for $\sqrt{s}=14~\tev$, is shown in fig. \[fg:atl-spin\] for an integrated luminosity of 30 fb$^{-1}$ [@Biglietti:2007mj].
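The asymmetry of eq. (\[eq:asymmetry\]) can be computed bin-by-bin from the two charge-separated spectra. A small sketch (the plain count-list inputs are an assumption made for illustration):

```python
def charge_asymmetry(hist_plus, hist_minus):
    """Bin-by-bin charge asymmetry A = (s+ - s-)/(s+ + s-).

    hist_plus / hist_minus: counts of q l+ / q l- pairs per m(q l) bin.
    Empty bins return 0.0 (the asymmetry is undefined there).
    """
    out = []
    for p, m in zip(hist_plus, hist_minus):
        tot = p + m
        out.append((p - m) / tot if tot > 0 else 0.0)
    return out
```

When the near and far leptons cannot be distinguished, each event contributes to both lepton-jet combinations, and the measured asymmetry is a washed-out version of the near-lepton one, which is precisely the dilution of $A$ mentioned above.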
![\[fg:atl-spin\]Charge asymmetry for lepton-jet invariant mass after SFOS-OFOS subtraction using both near and far leptons in SU3 point [@Biglietti:2007mj].\
](atl-spin){width="48.00000%"}
Results show that, in a fast-simulation approach without taking into account systematic effects coming from a realistic detector description, an integrated luminosity of at least 100 fb$^{-1}$ is needed in the case of the SU1[^8] point to observe a non-zero charge asymmetry with a confidence level of about 99%, while in the more favourable case of the SU3 point 10 fb$^{-1}$ would be sufficient [@Biglietti:2007mj]. It therefore becomes evident that even if the LHC experiments observe a SUSY-like signal during the next two years (2011–2012) of operation at the ‘low’ LHC energies of $7-8~\tev$, much more data at the higher energy of $14~\tev$ will be necessary to establish the identity of the underlying theory.
A significant role in this context will be played by the high precision measurements expected to be performed at the ILC [@ilc; @dm-colliders]. If the determination of the properties of the DM particle by collider experiments [@dm-colliders] matches cosmological observations to high precision, then (and only then) we will be able to claim to have determined what DM is. Such an achievement would be a great success of the Particle-Physics/Cosmology connection and would give us confidence in our understanding of the Universe.
Interplay between the dark sector and LHC: Alternative scenarios {#sc:alt}
================================================================
The simplest proposal to explain the origin of dark energy is to add a cosmological constant to Einstein’s equation [@lcdm]. However, the reason why the dark matter content is comparable to the dark energy content at the present time remains a puzzle. Modifications to general relativity, braneworld scenarios, and topological defects are some of the proposals attempting to explain this fundamental issue. In string theory, the dilaton can play the role of dark energy [@dm]. In this section we review experimental signatures of SUSY as consequences of a rolling dilaton in the Q-cosmology scenario [@dm; @mm1], an alternative framework known as Supercritical (or non-critical) String Cosmology (SSC) [@dm]. Such a dilaton modifies the Boltzmann equation, thus diluting the supersymmetric dark matter density (of neutralinos) by a factor $\mathcal{O}(10)$ [@dm-modified; @lahanas]; consequently, parameter-space regions excluded in the standard scenario are allowed in the SSC. Similar deviations in the DM relic abundance also arise in the context of space-time (D-particle) foam in string/brane theories [@mavro]. In such cases, however, the effects of the D-particle foam on the relic abundance are opposite to those of the dilaton in the SSC models, and their magnitude depends on the string scale, so they can be relevant to the LHC only for low string scales.
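The dilution mechanism can be illustrated with a toy Boltzmann integration. The extra $-(k/x)\,Y$ term below is NOT the actual SSC dilaton source term, and all numbers (`lam`, `k`) are invented; the point is only that an additional drain term in the Boltzmann equation lowers the surviving relic yield:

```python
import math

def relic_yield(lam, k, x_end=50.0, h=0.01):
    """Toy freeze-out: dY/dx = -(lam/x^2) (Y^2 - Yeq^2) - (k/x) Y.

    k = 0 reproduces the standard scenario; k > 0 is a schematic dilution
    term standing in for the rolling-dilaton source. A backward-Euler
    update keeps the stiff early-time region stable.
    """
    x = 1.0
    y = x ** 1.5 * math.exp(-x)            # start on the equilibrium curve
    while x < x_end:
        x += h
        yeq = x ** 1.5 * math.exp(-x)      # toy equilibrium abundance
        a = h * lam / x ** 2
        b = 1.0 + h * k / x
        # positive root of  a*Y^2 + b*Y - (y + a*yeq^2) = 0
        y = (-b + math.sqrt(b * b + 4.0 * a * (y + a * yeq * yeq))) / (2.0 * a)
    return y

y_standard = relic_yield(lam=1.0e4, k=0.0)   # standard freeze-out
y_diluted = relic_yield(lam=1.0e4, k=2.0)    # with schematic dilution
```

Comparing `y_standard` and `y_diluted` shows the diluted yield falling below the standard one, which is why parameter regions that overproduce dark matter in the standard scenario can become viable in the SSC.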
The mSUGRA final states at the LHC favoured by supercritical string cosmology have been studied in depth in ref. [@dutta]. It becomes evident by inspecting the two panels in fig. \[fg:dutta\] that the dark-matter-allowed region extends to larger values of $m_{1/2}$ than in the standard cosmology case. Thus, the final states in the SSC scenario are different from those of the standard cosmology. For example, in the case of standard cosmology at smaller values of $m_{1/2}$ (also allowed by the $(g-2)_{\mu}$ constraint), we have low-energy taus in the final state due to the proximity of the stau mass to the neutralino mass in the stau-neutralino coannihilation region. On the other hand, in the SSC case the final states contain $Z$ bosons, Higgs bosons or high-energy taus.
![\[fg:dutta\] WMAP3-allowed parameter space in mSUGRA $(\mz,\,\mh)$ plane for the standard cosmology (left) and the SSC (right) for $A_0=0$ and $\tan\beta=10$: regions where the neutralino relic density is within the WMAP3 limits (green stripe) and where it is lower than this (hatched region) are shown. Also shown are the $h^0$ mass boundary (dash-dotted blue), the $(g-2)_{\mu}$ $1\sigma$ (dashed red) and $2\sigma$ (dotted red) boundaries and the stau-LSP region (lower solid red) [@dm-modified].](dutta-normal "fig:"){width="48.00000%"} ![\[fg:dutta\] WMAP3-allowed parameter space in mSUGRA $(\mz,\,\mh)$ plane for the standard cosmology (left) and the SSC (right) for $A_0=0$ and $\tan\beta=10$: regions where the neutralino relic density is within the WMAP3 limits (green stripe) and where it is lower than this (hatched region) are shown. Also shown are the $h^0$ mass boundary (dash-dotted blue), the $(g-2)_{\mu}$ $1\sigma$ (dashed red) and $2\sigma$ (dotted red) boundaries and the stau-LSP region (lower solid red) [@dm-modified].](dutta-reduced "fig:"){width="48.00000%"}
In fact these final states dominate in most of the allowed mSUGRA parameter space. Therefore, by analyzing the parameter space of the SSC model, most regions of the mSUGRA parameter space at the LHC are explored. The following final states have been studied [@dutta]:
  ------------------- -------------------------------------------------------------
  Higgs decays:       ($h^{0}\to$) $b\bar{b}$ + jets + $E_{\rm T}^{\rm miss}$
  $Z$ boson decays:   ($Z\to$) $\ell^{\pm}\ell^{\mp}$ + jets + $E_{\rm T}^{\rm miss}$
  Ditau channel:      $2\tau$ + jets + $E_{\rm T}^{\rm miss}$
  ------------------- -------------------------------------------------------------
as well as constructed observables such as the endpoints of the invariant mass distributions $M_{bbj}$, $M_{\ell\ell j}$, and $M_{\tau\tau j}$ and the peak position of $M_{\tau\tau}$. All of these final states have been studied with ATLAS and/or CMS on simulated data at $\sqrt{s}=10\;\text{or}\;14~\tev$: the $h^{0}\to b\bar{b}$ channel [@atlas-phys-tdr; @thesis; @atlas-csc; @cms-tdr], the $Z\to\ell^{\pm}\ell^{\mp}$ mode [@cms-tdr] and the $2\tau$ mode [@atlas-csc; @cms-tdr]. In the future, when $pp$ collision data at an energy of 14 TeV are accumulated, searches for SUSY with these signatures will be possible.
It is worth remarking that the SSC scenario is consistent with a smoothly evolving dark energy at least for $0<z<1.6$, in accordance with observations of supernovae [@mm1], of the Hubble rate measured from galaxy ages [@mm2] and of baryon acoustic oscillations [@mm3]. Hence it offers a cosmologically viable solution.
Outlook {#sc:out}
=======
The origin of dark matter remains one of the most compelling mysteries in our understanding of the Universe today, and the Large Hadron Collider, already delivering $pp$ collision data at CERN at an unprecedentedly high energy, is going to play a central role in constraining some of its parameters. A deviation from the SM in inclusive signatures like missing energy plus jets (plus leptons) will hint at a discovery of DM, but exclusive studies are required to determine, at least roughly, the new-particle properties and model parameters. Although the scheme is developed with SUSY in mind, it is applicable to other beyond-the-standard-model scenarios such as UED and $T$-parity Little Higgs.
Should the LHC discover generic WIMP dark matter, it will be non-trivial to prove that it has the right properties. Future $e^+e^-$ colliders (ILC, CLIC) are expected to extend the LHC discovery potential and improve the identification of the underlying DM model. By providing a more precise determination of model parameters, they will consequently put bounds on the relic density, the direct detection rate and WIMP annihilation processes.
The complementarity between LHC and cosmo/astroparticle experiments lies in the uncorrelated systematics and the measurement of different model parameters. In the following years we expect a continuous interplay between particle physics experiments and astrophysical/cosmological observations.
The author is grateful to the DISCRETE2010 organizers and especially Antonio Di Domenico for the kind invitation and support. Thanks to them, a warm, friendly and intellectually stimulating atmosphere was enjoyed by the speakers and participants throughout the Symposium. This work was supported in part by the Spanish Ministry of Science and Innovation (MICINN) under the project FPA2009-13234-C04-01 and by the Spanish Agency of International Cooperation for Development under the PCI projects A/023372/09 and A/030322/10. The author acknowledges support by the CERN Corresponding Associate Programme.
References {#references .unnumbered}
==========
[99]{}
For a pedagogical introduction, see e.g.: Hooper D 2009 TASI 2008 Lectures on Dark Matter [*Preprint*]{} arXiv:0901.4090 \[hep-ph\] Evans L and Bryant P 2008 LHC Machine [*J. Instrum.*]{} [**3**]{} S08001 Akimov D 2011 Techniques and Results for the Direct Detection of Dark Matter (Review) [*Nucl. Instrum. Meth.*]{} A [**628**]{} 50–58 [*and references therein*]{} Morselli A 2011 Indirect detection of dark matter, current status and recent results [*Prog. Part. Nucl. Phys.*]{} [**66**]{} 208–215 [*and references therein*]{} Battaglia M 2009 The role of an $e^+e^-$ linear collider in the study of cosmic dark matter [*New J. Phys.*]{} [**11**]{} 105025\
Aarons G [*et al*]{} \[ILC Collaboration\] 2007 International Linear Collider Reference Design Report Volume 2: Physics at the ILC [*Preprint*]{} arXiv:0709.1893 \[hep-ph\] Accomando E [*et al*]{} \[CLIC Physics Working Group\] Physics at the CLIC multi-TeV linear collider [*Preprint*]{} arXiv:hep-ph/0412251 For an review on MOND, see e.g.: Sanders R H and McGaugh S S 2002 Modified Newtonian Dynamics as an Alternative to Dark Matter [*Ann. Rev. Astron. Astrophys.*]{} [**40**]{} 263–317 ([*Preprint*]{} arXiv:astro-ph/0204521) Glashow S L 2005 A Sinister extension of the standard model to $SU(3)\times SU(2)\times SU(2)\times U(1)$ [*Preprint*]{} arXiv:hep-ph/0504287\
Khlopov M Y and Kouvaris C 2008 Strong Interactive Massive Particles from a Strong Coupled Theory [*Phys. Rev.*]{} D [**77**]{} 065002 ([*Preprint*]{} arXiv:0710.2189 \[astro-ph\]) [*and references therein*]{}
Amanullah R [*et al*]{} 2010 Spectra and Light Curves of Six Type Ia Supernovae at $0.511<z<1.12$ and the Union2 Compilation [*Astrophys. J.*]{} [**716**]{} 712–738 ([*Preprint*]{} arXiv:1004.1711 \[astro-ph.CO\]) Komatsu E [*et al*]{} 2011 Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation [*Astrophys. J. Suppl.*]{} [**192**]{} ([*Preprint*]{} arXiv:1001.4538 \[astro-ph.CO\]) Eisenstein D J [*et al*]{} \[SDSS Collaboration\] 2005 Detection of the Baryon Acoustic Peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies [*Astrophys. J.*]{} [**633**]{} 560–574 ([*Preprint*]{} astro-ph/0501171) [*and references therein*]{} Broadhurst T, Umetsu K, Medezinski E, Oguri M and Rephaeli Y 2008 Comparison of Cluster Lensing Profiles with Lambda CDM Predictions [*Astrophys. J.*]{} [**685**]{} L9–L12 ([*Preprint*]{} arXiv:0805.2617 \[astro-ph\]) [*and references therein*]{} Larson D [*et al*]{} 2011 Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-Derived Parameters [*Astrophys. J. Suppl.*]{} [**192**]{} 16 ([*Preprint*]{} arXiv:1001.4635 \[astro-ph.CO\]) Carroll S M, Press W H and Turner E L 1992 The Cosmological constant [*Ann. Rev. Astron. Astrophys.*]{} [**30**]{} 499–542 Zioutas K, Tsagri M, Papaevangelou T, Dafni T and Anastassopoulos V 2009 Axion Searches with Helioscopes and astrophysical signatures for axion(-like) particles [*New J. Phys.*]{} [**11**]{} 105020 ([*Preprint*]{} arXiv:0903.1807 \[astro-ph.SR\]) Chung D J H, Kolb E W and Riotto A 1999 Superheavy dark matter [*Phys. Rev.*]{} D [**59**]{} 023501 ([*Preprint*]{} arXiv:hep-ph/9802238) Kazakov D I 2010 Supersymmetry on the Run: LHC and Dark Matter [*Nucl. Phys. Proc. Suppl.*]{} [**203-204**]{} 118–154 ([*Preprint*]{} arXiv:1010.5419 \[hep-ph\]) Hooper D and Profumo S 2007 Dark matter and collider phenomenology of universal extra dimensions [*Phys. 
Rept.*]{} [**453**]{} 29–115 ([*Preprint*]{} arXiv:hep-ph/0701197) Birkedal A, Noble A, Perelstein M and Spray A 2006 Little Higgs dark matter [*Phys. Rev.*]{} D [**74**]{} 035002 ([*Preprint*]{} arXiv:hep-ph/0603077) Panotopoulos G 2011 The degenerate scenario in the NMSSM: Direct singlino-like neutralino searches with a gravitino LSP [*Preprint*]{} arXiv:1103.0140 \[hep-ph\] [*in these proceedings*]{} Barbier R [*et al*]{} 2005 R-parity violating supersymmetry [*Phys. Rept.*]{} [**420**]{} 1–202 ([*Preprint*]{} arXiv:hep-ph/0406039) Takayama F and Yamaguchi M 2000 Gravitino dark matter without -parity [*Phys. Lett.*]{} B [**485**]{} 388–392 ([*Preprint*]{} arXiv:hep-ph/0005214)\
Hirsch M, Porod W and Restrepo D 2005 Collider signals of gravitino dark matter in bilinearly broken -parity [*J. High Energy Phys.*]{} JHEP03(2005)062 ([*Preprint*]{} arXiv:hep-ph/0503059)\
Buchmuller W, Covi L, Hamaguchi K, Ibarra A and Yanagida T 2007 Gravitino dark matter in -parity breaking vacua [*J. High Energy Phys.*]{} JHEP03(2007)037 ([*Preprint*]{} arXiv:hep-ph/0702184) Chun E J and Kim H B 2006 Axino Light Dark Matter and Neutrino Masses with -parity Violation [*J. High Energy Phys.*]{} JHEP10(2006)082 ([*Preprint*]{} arXiv:hep-ph/0607076) Lola S 2009 Gravitino dark matter, neutrino masses and lepton flavor violation from broken -parity [*AIP Conf. Proc.*]{} [**1115**]{} 318–323 [*and references therein*]{} Aad G [*et al*]{} \[ATLAS Collaboration\] 2008 The ATLAS Experiment at the CERN Large Hadron Collider [*J. Instrum.*]{} [**3**]{} S08003 Adolphi R [*et al*]{} \[CMS Collaboration\] 2008 The CMS experiment at the CERN LHC [*J. Instrum.*]{} [**3**]{} S08004 Aad G [*et al*]{} \[ATLAS Collaboration\] 2009 Expected Performance of the ATLAS Experiment – Detector, Trigger and Physics [*CERN Report*]{} CERN-OPEN-2008-020 [*Preprint*]{} arXiv:0901.0512 \[hep-ex\] Bayatian G L [*et al*]{} \[CMS Collaboration\] 2007 CMS technical design report, volume II: Physics performance [*J. Phys.*]{} G [**34**]{} 995–1579
Agostinelli S [*et al*]{} \[GEANT4 Collaboration\] 2003 GEANT4: A simulation toolkit [*Nucl. Instrum. Meth.*]{} A [**506**]{} 250–303\
Allison J [*et al*]{} 2006 Geant4 developments and applications [*IEEE Trans. Nucl. Sci.*]{} [**53**]{} 270–278 CMS Collaboration 2010 Performance of Methods for Data-Driven Background Estimation in SUSY Searches [*CMS Physics Analysis Summary*]{} CMS-PAS-SUS-10-001
Widl E 2011 Searches for Supersymmetry with the CMS detector at the LHC [*in these proceedings*]{}
Azzurri P 2011 First Results of Searches for New Physics at $\sqrt{s}=7$ TeV with the CMS detector [*Preprint*]{} arXiv:1103.1048 \[hep-ex\] [*in these proceedings*]{} ATLAS Collaboration 2010 Early supersymmetry searches in channels with jets and missing transverse momentum with the ATLAS detector [*ATLAS Note*]{} ATLAS-CONF-2010-065
Siragusa G 2011 New Physics with ATLAS: experimental prospects [*ATLAS Note*]{} ATL-PHYS-PROC-2011-011 [*in these proceedings*]{}
CMS Collaboration 2010 The CMS physics reach for searches at 7 TeV [*CMS Note*]{} CMS-NOTE-2010-008 ATLAS Collaboration 2010 Prospects for Supersymmetry discovery based on inclusive searches at a 7 TeV centre-of-mass energy with the ATLAS detector [*ATLAS Note*]{} ATL-PHYS-PUB-2010-010
CMS Collaboration 2009 Dilepton + Jets + MET channel: Observation and Measurement of $\t{\chi}_2^0\to\X\ell\ell$ [*CMS Physics Analysis Summary*]{} CMS-PAS-SUS-08-001
CMS Collaboration 2009 Discovery potential and measurement of a dilepton mass edge in SUSY events at $\sqrt{s}=10~\tev$ [*CMS Physics Analysis Summary*]{} CMS-PAS-SUS-09-002
Airapetian A [*et al*]{} \[ATLAS Collaboration\] 1999 [*ATLAS: Detector and physics performance technical design report*]{} vol 1 (Geneva: CERN) 475 p ([*CERN Report*]{} CERN-LHCC-99-14)\
Airapetian A [*et al*]{} \[ATLAS Collaboration\] 1999 [*ATLAS: Detector and physics performance technical design report*]{} vol 2 (Geneva: CERN) 519 p ([*CERN Report*]{} CERN-LHCC-99-15) ATLAS Collaboration 2009 Prospects for Supersymmetry and Universal Extra Dimensions discovery based on inclusive searches at a 10 TeV centre-of-mass energy with the ATLAS detector [*ATLAS Note*]{} ATL-PHYS-PUB-2009-084
James F and Roos M 1975 MINUIT: A System for Function Minimization and Analysis of the Parameter Errors and Correlations [*Comput. Phys. Commun.*]{} [**10**]{} 343–367 Porod W 2003 SPheno, a program for calculating supersymmetric spectra, SUSY particle decays and SUSY particle production at $e^+e^-$ colliders [*Comput. Phys. Commun.*]{} [**153**]{} 275–315 ([*Preprint*]{} arXiv:hep-ph/0301101) Allanach B C 2002 SOFTSUSY: a program for calculating supersymmetric spectra [*Comput. Phys. Commun.*]{} [**143**]{} 305–331 ([*Preprint*]{} arXiv:hep-ph/0104145) Baer H, Paige F E, Protopopescu S D and Tata X 1993 Simulating Supersymmetry With ISAJET 7.0 / ISASUSY 1.0 [*Preprint*]{} arXiv:hep-ph/9305342 Paige F E, Protopopescu S D, Baer H and Tata X 2003 ISAJET 7.69: A Monte Carlo event generator for $pp$, $\bar{p}p$, and $e^+e^-$ reactions [*Preprint*]{} arXiv:hep-ph/0312045 Bechtle P, Desch K and Wienemann P 2006 Fittino, a program for determining MSSM parameters from collider observables using an iterative method [*Comput. Phys. Commun.*]{} [**174**]{} 47–70 ([*Preprint*]{} arXiv:hep-ph/0412012) Lafaye R, Plehn T and Zerwas D 2004 SFITTER: SUSY parameter analysis at LHC and LC [*Preprint*]{} arXiv:hep-ph/0404282 Hodgkinson R N 2011 Decoding new physics at 1 LHC with flavour and CP observables [*in these proceedings*]{}
Smillie J M and Webber B R 2005 Distinguishing Spins in Supersymmetric and Universal Extra Dimension Models at the Large Hadron Collider [*J. High Energy Phys.*]{} JHEP10(2005)069 ([*Preprint*]{} arXiv:hep-ph/0507170)\
Datta A, Kong K and Matchev K T 2005 Discrimination of supersymmetry and universal extra dimensions at hadron colliders [*Phys. Rev.*]{} D [**72**]{} 096006, Erratum-ibid. 2005 [**72**]{} 119901 ([*Preprint*]{} arXiv:hep-ph/0509246) Hubisz J, Lykken J, Pierini M and Spiropulu M 2008 Missing energy look-alikes with 100 pb$^{-1}$ at the LHC [*Phys. Rev.*]{} D [**78**]{} 075008 ([*Preprint*]{} arXiv:0805.2398 \[hep-ph\]) Barr A J 2004 Using lepton charge asymmetry to investigate the spin of supersymmetric particles at the LHC [*Phys. Lett.*]{} B [**596**]{} 205–212 ([*Preprint*]{} arXiv:hep-ph/0405052) Biglietti M [*et al*]{} 2007 Study of second lightest neutralino $\t{\chi}_2^0$ spin measurement with ATLAS detector at LHC [*ATLAS Note*]{} ATL-PHYS-PUB-2007-004 Baltz E A, Battaglia M, Peskin M E and Wizansky T 2006 Determination of dark matter properties at high-energy colliders [*Phys. Rev.*]{} D [**74**]{} 103521 ([*Preprint*]{} arXiv:hep-ph/0602187) Mavromatos N E 2008 LHC Physics and Cosmology [*Fundamental Interactions: Proc. Lake Louise Winter Institute 2007 (Lake Louise)*]{} ed A Astbury [*et al*]{} (Singapore: World Scientific) pp 80–127 ([*Preprint*]{} arXiv:0708.0134 \[hep-ph\]) Ellis J R, Mavromatos N E, Mitsou V A and Nanopoulos D V 2007 Confronting dark energy models with astrophysical data [*Astropart. Phys.*]{} [**27**]{} 185–198 ([*Preprint*]{} arXiv:astro-ph/0604272)\
Mitsou V A 2008 Constraints on Dissipative Non-Equilibrium Dark Energy Models from Recent Supernova Data [*Fundamental Interactions: Proc. Lake Louise Winter Institute 2007 (Lake Louise)*]{} ed A Astbury [*et al*]{} (Singapore: World Scientific) pp 363–367 ([*Preprint*]{} arXiv:0708.0113 \[astro-ph\])\
Lahanas A B, Mavromatos N E and Nanopoulos D V 2007 Smoothly evolving Supercritical-String Dark Energy relaxes Supersymmetric-Dark-Matter Constraints [*Phys. Lett.*]{} B [**649**]{} 83–90 ([*Preprint*]{} arXiv:hep-ph/0612152)\
Diamandis G A, Georgalas B C, Lahanas A B, Mavromatos N E and Nanopoulos D V 2006 Dissipative Liouville cosmology: A case study [*Phys. Lett.*]{} B [**642**]{} 179–186 ([*Preprint*]{} arXiv:hep-th/0605181)\
Lahanas A B, Mavromatos N E and Nanopoulos D V 2007 Dilaton and off-shell (non-critical string) effects in Boltzmann equation for species abundances [*PMC Phys.*]{} A [**1**]{} 2 ([*Preprint*]{} arXiv:hep-ph/0608153)\
Mavromatos N E, Sarkar S and Vergou A 2011 Stringy Space-Time Foam, Finsler-like Metrics and Dark Matter Relics [*Phys. Lett.*]{} B [**696**]{} 300–304 ([*Preprint*]{} arXiv:1009.2880 \[hep-th\])\
Mavromatos N E, Mitsou V A, Sarkar S and Vergou A 2010 Stochastic Finsler D-particle Space-Time Foam Enhances Dark Matter Relics [*Preprint*]{} arXiv:1012.4094 \[hep-ph\]\
Dutta B, Gurrola A, Kamon T, Krislock A, Lahanas A B, Mavromatos N E and Nanopoulos D V 2009 Supersymmetry Signals of Supercritical String Cosmology at the Large Hadron Collider [*Phys. Rev.*]{} D [**79**]{} 055002 ([*Preprint*]{} arXiv:0808.1372 \[hep-ph\])\
Mitsou V A 2002 Prospects on Higgs boson discovery and physics beyond the standard model at the LHC and development of the transition radiation tracker of the ATLAS experiment [*PhD Thesis*]{} CERN-THESIS-2002-005\
Mitsou V A 1999 Observability of the $h\to b\bar{b}$ channel in cascade decay of SUSY particles within the SUGRA model [*ATLAS Note*]{} CERN-ATL-PHYS-99-017
Mavromatos N E and Mitsou V A 2008 Observational Evidence for Negative-Energy Dust in Late-Times Cosmology [*Astropart. Phys.*]{} [**29**]{} 442–452 ([*Preprint*]{} arXiv:0707.4671 \[astro-ph\])\
Mavromatos N E and Mitsou V A 2007 Relaxation dark energy in non-critical string cosmologies and astrophysical data [*Proc. IDM 2006: 6th Int. Workshop on the Identification of Dark Matter (Island of Rhodes)*]{} ed M Axenides [*et al*]{} (Singapore: World Scientific) pp 623–634 ([*Preprint*]{} arXiv:astro-ph/0611788)\
Mitsou V A 2010 Constraining super-critical string/brane cosmologies with astrophysical data [*J. Phys. Conf. Ser.*]{} [**203**]{} 012054 ([*Preprint*]{} arXiv:0909.5095 \[astro-ph.CO\])
[^1]: The interpretation of the ‘raw’ astrophysical data in terms of the energy/matter content of the Universe as presented above is based on the Standard Cosmological Model ($\Lambda$CDM) [@lcdm], involving cold DM as the dominant DM species and a positive cosmological constant $\Lambda>0$. Nevertheless, as discussed in section \[sc:alt\], the possibility remains open for different theoretical scenarios that modify the estimated DM relic abundance for given cosmological observations.
[^2]: As of December 2010.
[^3]: SU4: $m_0=200$ GeV, $m_{1/2}=160$ GeV, $\tan\beta=10$, $A_0=-400$ GeV, $\mu>0$.
[^4]: LM0: $m_0=200$ GeV, $m_{1/2}=160$ GeV, $\tan\beta=10$, $A_0=-400$ GeV, $\mu>0$.
[^5]: LM1: $m_0=60$ GeV, $m_{1/2}=250$ GeV, $\tan\beta=10$, $A_0=0$, $\mu>0$.
[^6]: LM9: $m_0=1450$ GeV, $m_{1/2}=175$ GeV, $\tan\beta=50$, $A_0=0$, $\mu>0$.
[^7]: SU3: $m_0=100$ GeV, $m_{1/2}=300$ GeV, $\tan\beta=6$, $A_0=-300$ GeV, $\mu>0$.
[^8]: SU1: $m_0=70$ GeV, $m_{1/2}=350$ GeV, $\tan\beta=10$, $A_0=0$, $\mu>0$.
---
address: 'School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332'
bibliography:
- 'database.bib'
---
Introduction {#introduction .unnumbered}
============
These are the notes from a course of five lectures at the 2009 Park City Math Institute. The focus is on [*elliptic curves*]{} over function fields over [*finite fields*]{}. In the first three lectures, we explain the main classical results (mainly due to Tate) on the Birch and Swinnerton-Dyer conjecture in this context and its connection to the Tate conjecture about divisors on surfaces. This is preceded by a “Lecture 0” on background material. In the remaining two lectures, we discuss more recent developments on elliptic curves of large rank and constructions of explicit points in high rank situations.
A great deal of this material generalizes naturally to the context of curves and Jacobians of [*any genus*]{} over function fields over [*arbitrary ground fields*]{}. These generalizations were discussed in a course of 12 lectures at the CRM in Barcelona in February, 2010, and will be written up as a companion to these notes, see [@UlmerCRM]. Unfortunately, theorems on unbounded ranks over function fields are currently known only in the context of finite ground fields.
Finally, we mention here that very interesting theorems of Gross-Zagier type exist also in the function field context. These would be the subject of another series of lectures and we will not say anything more about them in these notes.
It is a pleasure to thank the organizers of the 2009 PCMI for the invitation to speak, the students for their interest, enthusiasm, and stimulating questions, and the “elder statesmen”—Bryan Birch, Dick Gross, John Tate, and Yuri Zarhin—for their remarks and encouragement. Thanks also to Keith Conrad for bringing the fascinating historical articles of Roquette [@Roquette06] to my attention. Last but not least, thanks are due as well to Lisa Berger, Tommy Occhipinti, Karl Rubin, Alice Silverberg, Yuri Zarhin, and an anonymous referee for their suggestions and TeXnical advice.
This “Lecture 0” covers definitions and notations that are probably familiar to many readers and that were reviewed very quickly during the PCMI lectures. Readers are invited to skip it and refer back as necessary.
Terminology
===========
Throughout, we use the language of schemes. This is necessary to be on firm ground when dealing with some of the more subtle aspects involving non-perfect ground fields and possibly non-reduced group schemes. However, the instances where we use any hard results from this theory are isolated and students should be able to follow readily the main lines of discussion, perhaps with the assistance of a friendly algebraic geometer.
Throughout, a [*variety*]{} over a field $F$ is a separated, reduced scheme of finite type over $\operatorname{Spec}F$. A [*curve*]{} is a variety purely of dimension 1 and a [*surface*]{} is a variety purely of dimension 2.
Function fields and curves {#s:ffs}
==========================
Throughout, $p$ will be a prime number and ${{\mathbb{F}_q}}$ will denote the field with $q$ elements with $q$ a power of $p$. We write ${\mathcal{C}}$ for a smooth, projective, and absolutely irreducible curve of genus $g$ over ${{\mathbb{F}_q}}$ and we write $K={{\mathbb{F}_q}}({\mathcal{C}})$ for the function field of ${\mathcal{C}}$ over ${{\mathbb{F}_q}}$. The most important example is when ${\mathcal{C}}={\mathbb{P}}^1$, the projective line, in which case $K={{\mathbb{F}_q}}({\mathcal{C}})={{\mathbb{F}_q}}(t)$ is the field of rational functions in a variable $t$ over ${{\mathbb{F}_q}}$.
We write $v$ for a closed point of ${\mathcal{C}}$, or equivalently for an equivalence class of valuations of $K$. For each such $v$ we write ${\mathcal{O}}_{(v)}$ for the local ring at $v$ (the ring of rational functions on ${\mathcal{C}}$ regular at $v$), ${\mathfrak{m}}_v\subset{\mathcal{O}}_{(v)}$ for the maximal ideal (those functions vanishing at $v$), and $\kappa_v={\mathcal{O}}_{(v)}/{\mathfrak{m}}_v$ for the residue field at $v$. The extension $\kappa_v/{{\mathbb{F}_q}}$ is finite and we set $\deg(v)=[\kappa_v:{{\mathbb{F}_q}}]$ and $q_v=q^{\deg(v)}$ so that $\kappa_v\cong{\mathbb{F}}_{q_v}$.
For example, in the case where ${\mathcal{C}}={\mathbb{P}}^1$, the “finite” places of ${\mathcal{C}}$ correspond bijectively to monic irreducible polynomials $f\in{{\mathbb{F}_q}}[t]$. If $v$ corresponds to $f$, then ${\mathcal{O}}_{(v)}$ is the set of ratios $g/h$ where $g,h\in{{\mathbb{F}_q}}[t]$ and $f$ does not divide $h$. The maximal ideal ${\mathfrak{m}}_v$ consists of ratios $g/h$ where $f$ does divide $g$, and the degree of $v$ is the degree of $f$ as a polynomial in $t$. There is one more place of $K$, the “infinite” place $v=\infty$. The local ring consists of ratios $g/h$ with $g,h\in{{\mathbb{F}_q}}[t]$ and $\deg(g)\le\deg(h)$. The maximal ideal consists of ratios $g/h$ where $\deg(g)<\deg(h)$ and the degree of $v=\infty$ is 1. The finite and infinite places of ${\mathbb{P}}^1$ give all closed points of ${\mathbb{P}}^1$.
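The bookkeeping in this example is easy to verify by machine. The following sketch (plain Python, not part of the notes; ${\mathbb{F}}_2$ is chosen for speed) lists the monic irreducible polynomials over ${\mathbb{F}}_2$ of degree up to $4$ — i.e., the finite places of ${\mathbb{F}}_2(t)$ of those degrees — by sieving out products, and checks the standard identity $\sum_{d\mid n} d\,a_d=q^n$ (coming from the factorization of $x^{q^n}-x$), where $a_d$ is the number of finite places of degree $d$.

```python
from itertools import product

p = 2          # work over F_2 for speed
maxdeg = 4

def polymul(f, g):
    """Multiply two polynomials over F_p (coefficient lists, lowest degree first)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def monic(d):
    """All monic polynomials of degree d over F_p."""
    return [list(tail) + [1] for tail in product(range(p), repeat=d)]

# Sieve: every product of two monic polynomials of positive degree is reducible.
reducible = set()
for d1 in range(1, maxdeg):
    for d2 in range(d1, maxdeg - d1 + 1):
        for f in monic(d1):
            for g in monic(d2):
                reducible.add(tuple(polymul(f, g)))

irred = {d: [f for f in monic(d) if tuple(f) not in reducible]
         for d in range(1, maxdeg + 1)}
counts = {d: len(irred[d]) for d in irred}

# x^{q^n} - x is the product of all monic irreducibles of degree dividing n:
for n in range(1, maxdeg + 1):
    assert sum(d * counts[d] for d in counts if n % d == 0) == p ** n
print(counts)   # {1: 2, 2: 1, 3: 2, 4: 3}
```

Together with the single infinite place (of degree 1), this gives all closed points of ${\mathbb{P}}^1$ of degree at most 4 over ${\mathbb{F}}_2$.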
We write $K^{sep}$ for a separable closure of $K$ and let $G_K=\operatorname{Gal}(K^{sep}/K)$. We write ${{\overline{\mathbb{F}}_q}}$ for the algebraic closure of ${{\mathbb{F}_q}}$ in $K^{sep}$. For each place $v$ of $K$ we have the decomposition group $D_v$ (defined only up to conjugacy), its normal subgroup the inertia group $I_v\subset D_v$, and $\operatorname{Fr}_v$ the (geometric) Frobenius at $v$, a canonical generator of the quotient $D_v/I_v\cong\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$ that acts as $x\mapsto x^{q_v^{-1}}$ on the residue field at a place $w$ dividing $v$ in a finite extension $F\subset K^{sep}$ unramified over $v$.
General references for this section and the next are [@GoldschmidtAFPC], [@RosenNTFF], and [@StichtenothAFFC].
Zeta functions {#s:zetas}
==============
Let ${\mathcal{X}}$ be a variety over the finite field ${{\mathbb{F}_q}}$. Extending the notation of the previous section, if $x$ is a closed point of ${\mathcal{X}}$, we write $\kappa_x$ for the residue field at $x$, $q_x$ for its cardinality, and $\deg(x)$ for $[\kappa_x:{{\mathbb{F}_q}}]$.
We define the $Z$ and $\zeta$ functions of ${\mathcal{X}}$ via Euler products: $$Z({\mathcal{X}},T)=\prod_x\left(1-T^{\deg(x)}\right)^{-1}$$ and $$\zeta({\mathcal{X}},s)=Z({\mathcal{X}},q^{-s})=\prod_x\left(1-q_x^{-s}\right)^{-1}$$ where the products are over the closed points of ${\mathcal{X}}$. It is a standard exercise to show that $$Z({\mathcal{X}},T)=\exp\left(\sum_{n\ge1}N_n\frac{T^n}{n}\right)$$ where $N_n$ is the number of ${\mathbb{F}}_{q^n}$-valued points of ${\mathcal{X}}$. It follows from a crude estimate for the number of ${\mathbb{F}}_{q^n}$ points of ${\mathcal{X}}$ that the Euler product defining $\zeta({\mathcal{X}},s)$ converges in the half plane $\operatorname{Re}(s)>\dim {\mathcal{X}}$.
If ${\mathcal{X}}$ is smooth and projective, then it is known that $Z({\mathcal{X}},T)$ is a rational function of the form $$\frac{\prod_{i=0}^{\dim{\mathcal{X}}-1}P_{2i+1}(T)}{\prod_{i=0}^{\dim{\mathcal{X}}}P_{2i}(T)}$$ where $P_0(T)=(1-T)$, $P_{2\dim{\mathcal{X}}}(T)=(1-q^{\dim{\mathcal{X}}}T)$, and for all $0\le i\le2\dim{\mathcal{X}}$, $P_i(T)$ is a polynomial with integer coefficients and constant term 1. We denote the inverse roots of $P_i$ by $\alpha_{ij}$, so that $$P_i(T)=\prod_j(1-\alpha_{ij}T).$$
The inverse roots $\alpha_{ij}$ of $P_i(T)$ are algebraic integers that have absolute value $q^{i/2}$ in every complex embedding. (We say that they are [*Weil numbers of size $q^{i/2}$*]{}.) It follows that $\zeta({\mathcal{X}},s)$ has a meromorphic continuation to the whole $s$ plane, with poles on the lines $\operatorname{Re}s\in\{0,\dots,\dim{\mathcal{X}}\}$ and zeroes on the lines $\operatorname{Re}s\in\{1/2,\dots,\dim{\mathcal{X}}-1/2\}$. This is the analogue of the Riemann hypothesis for $\zeta({\mathcal{X}},s)$.
It is also known that the set of inverse roots of $P_i(T)$ (with multiplicities) is stable under $\alpha_{ij}\mapsto q/\alpha_{ij}$. Thus $\zeta({\mathcal{X}},s)$ satisfies a functional equation when $s$ is replaced by $\dim{\mathcal{X}}-s$.
In the case where ${\mathcal{X}}$ is a curve, $P_1(T)$ has degree $2g$ ($g=$ the genus of ${\mathcal{C}}$) and has the form $$P_1(T)=1+\cdots+q^gT^{2g}=\prod_{j=1}^{2g}(1-\alpha_{1j}T).$$ Thus $\zeta({\mathcal{C}},s)$ has simple poles for $s\in\frac{2\pi i}{\log q}{\mathbb{Z}}$ and $s\in1+\frac{2\pi i}{\log q}{\mathbb{Z}}$ and its zeroes lie on the line $\operatorname{Re}s=1/2$.
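As a concrete instance (our own example, not from the notes): take $E:y^2=x^3+x+1$ over ${\mathbb{F}}_5$, a curve of genus 1, so $P_1(T)=1-aT+qT^2$ with $a=q+1-N_1$. Writing $\alpha,\beta$ for the inverse roots, one has $\alpha+\beta=a$, $\alpha\beta=q$, hence $N_2=q^2+1-(\alpha^2+\beta^2)=q^2+1-(a^2-2q)$, and the Riemann hypothesis amounts to the Hasse bound $a^2\le4q$. The sketch below verifies all of this by brute-force counting, realizing ${\mathbb{F}}_{25}$ as ${\mathbb{F}}_5[u]/(u^2-2)$ (a field, since $2$ is a non-square modulo $5$).

```python
p = 5

def f(x):                      # right-hand side x^3 + x + 1 over F_p
    return (x ** 3 + x + 1) % p

# N_1: affine solutions of y^2 = f(x), plus the point at infinity
N1 = 1 + sum(1 for x in range(p) for y in range(p) if (y * y) % p == f(x))
a = p + 1 - N1                 # trace of Frobenius

# F_25 = F_5[u]/(u^2 - 2); elements are pairs (c0, c1) = c0 + c1*u
def mul(s, t):
    (a0, a1), (b0, b1) = s, t
    return ((a0 * b0 + 2 * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def add(s, t):
    return ((s[0] + t[0]) % p, (s[1] + t[1]) % p)

def f2(x):                     # x^3 + x + 1 evaluated in F_25
    return add(add(mul(mul(x, x), x), x), (1, 0))

F25 = [(c0, c1) for c0 in range(p) for c1 in range(p)]
N2 = 1 + sum(1 for x in F25 for y in F25 if mul(y, y) == f2(x))

assert N1 == 9 and a == -3
assert a * a <= 4 * p                       # |a| <= 2 sqrt(q): Riemann hypothesis
assert N2 == p ** 2 + 1 - (a * a - 2 * p)   # alpha^2 + beta^2 = a^2 - 2q
print(N1, N2)   # 9 27
```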
For a fascinating history of the early work on zeta functions and the Riemann hypothesis for curves over finite fields, see [@Roquette06] and parts I and II of that work.
Cohomology {#s:cohomology}
==========
Assume that ${\mathcal{X}}$ is a smooth projective variety over $k={{\mathbb{F}_q}}$. We write ${\overline{\mathcal{X}}}$ for ${\mathcal{X}}\times_{{{\mathbb{F}_q}}}{{\overline{\mathbb{F}}_q}}$. Note that $G_k=\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$ acts on ${\overline{\mathcal{X}}}$ via the factor ${{\overline{\mathbb{F}}_q}}$.
Choose a prime $\ell\neq p$. We have $\ell$-adic cohomology groups $H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$ which are finite-dimensional ${{\mathbb{Q}_\ell}}$-vector spaces and which vanish unless $0\le i\le 2\dim{\mathcal{X}}$.
Functoriality in ${\overline{\mathcal{X}}}$ gives a continuous action of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$. Since the geometric Frobenius ($\operatorname{Fr}_q(a)=a^{q^{-1}}$) is a topological generator of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$, the characteristic polynomial of $\operatorname{Fr}_q$ on $H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$ determines the eigenvalues of the action of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$; in fancier language, it determines the action up to semi-simplification.
An important result (inspired by [@Weil49] and proven in great generality in [@SGA5]) says that the factors $P_i$ of $Z({\mathcal{X}},T)$ are characteristic polynomials of Frobenius: $$\label{eq:P-cohom}
P_i(T)=\det(1-T\operatorname{Fr}_q|H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})).$$ From this point of view, the functional equation and Riemann hypothesis for $Z({\mathcal{X}},T)$ are statements about duality and purity.
To discuss the connections, we need more notation. Let ${{\mathbb{Z}_\ell}}(1)=\varprojlim_n\mu_{\ell^n}({{\overline{\mathbb{F}}_q}})$ and ${{\mathbb{Q}_\ell}}(1)={{\mathbb{Z}_\ell}}(1){\otimes}_{{{\mathbb{Z}_\ell}}}{{\mathbb{Q}_\ell}}$, so that ${{\mathbb{Q}_\ell}}(1)$ is a one-dimensional ${{\mathbb{Q}_\ell}}$-vector space on which $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$ acts via the $\ell$-adic cyclotomic character. More generally, for $n>0$ set ${{\mathbb{Q}_\ell}}(n)={{\mathbb{Q}_\ell}}(1)^{{\otimes}n}$ ($n$-th tensor power) and ${{\mathbb{Q}_\ell}}(-n)=\operatorname{Hom}({{\mathbb{Q}_\ell}}(n),{{\mathbb{Q}_\ell}})$, so that for all $n$, ${{\mathbb{Q}_\ell}}(n)$ is a one-dimensional ${{\mathbb{Q}_\ell}}$-vector space on which $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$ acts via the $n$th power of the $\ell$-adic cyclotomic character.
We have $H^0({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})\cong{{\mathbb{Q}_\ell}}$ (with trivial Galois action) and $H^{2\dim{\mathcal{X}}}({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})\cong{{\mathbb{Q}_\ell}}(\dim{\mathcal{X}})$. The functional equation follows from the fact that we have a canonical non-degenerate, Galois equivariant pairing $$H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})\times H^{2\dim{\mathcal{X}}-i}({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})\to
H^{2\dim{\mathcal{X}}}({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})\cong{{\mathbb{Q}_\ell}}(\dim{\mathcal{X}}).$$ Indeed, the non-degeneracy of this pairing implies that if $\alpha$ is an eigenvalue of $\operatorname{Fr}_q$ on $H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$, then $q^{\dim{\mathcal{X}}}/\alpha$ is an eigenvalue of $\operatorname{Fr}_q$ on $H^{2\dim{\mathcal{X}}-i}({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$.
The Riemann hypothesis in this context is the statement that the eigenvalues of $\operatorname{Fr}_q$ on $H^i({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$ are algebraic integers with absolute value $q^{i/2}$ in every complex embedding.
See [@SGA4.5] or [@MilneEC] for an overview of étale cohomology and its connections with the Weil conjectures.
Jacobians
=========
Picard and Albanese properties {#ss:pic-alb}
------------------------------
We briefly review two (dual) universal properties of the Jacobian of a curve that we will need. See [@Milne86jv] for more details.
We assume throughout that the curve ${\mathcal{C}}$ has an ${{\mathbb{F}_q}}$-rational point $x$, i.e., a closed point with residue field ${{\mathbb{F}_q}}$. If $T$ is another connected variety over ${{\mathbb{F}_q}}$ with an ${{\mathbb{F}_q}}$-rational point $t$, a [*divisorial correspondence*]{} between $({\mathcal{C}},x)$ and $(T,t)$ is an invertible sheaf ${\mathcal{L}}$ on ${\mathcal{C}}\times_{{\mathbb{F}_q}}T$ such that ${\mathcal{L}}|_{{\mathcal{C}}\times t}$ and ${\mathcal{L}}|_{x\times T}$ are trivial. Two divisorial correspondences are equal when they are isomorphic as invertible sheaves. Note that the set of divisorial correspondences between $({\mathcal{C}},x)$ and $(T,t)$ forms a group under tensor product and is thus a subgroup of $\operatorname{Pic}({\mathcal{C}}\times T)$. We write $$\operatorname{DivCorr}(({\mathcal{C}},x),(T,t))\subset\operatorname{Pic}({\mathcal{C}}\times T)$$ for this subgroup. One may think of a divisorial correspondence as giving a family of invertible sheaves on ${\mathcal{C}}$: $s\mapsto{\mathcal{L}}|_{{\mathcal{C}}\times s}$.
Let $J=J_{\mathcal{C}}$ be the Jacobian of ${\mathcal{C}}$ and write $0$ for its identity element. Then $J$ is a $g$-dimensional abelian variety over ${{\mathbb{F}_q}}$ and it carries the “universal divisorial correspondence with ${\mathcal{C}}$.” More precisely, there is a divisorial correspondence ${\mathcal{M}}$ between $({\mathcal{C}},x)$ and $(J,0)$ such that if $S$ is another connected variety over ${{\mathbb{F}_q}}$ with ${{\mathbb{F}_q}}$-rational point $s$ and ${\mathcal{L}}$ is a divisorial correspondence between $({\mathcal{C}},x)$ and $(S,s)$, then there is a unique morphism $\phi:S\to J$ sending $s$ to $0$ such that ${\mathcal{L}}=\phi^*{\mathcal{M}}$. (Of course ${\mathcal{M}}$ depends on the choice of base point $x$, but we omit this from the notation.)
It follows that there is a canonical morphism, the Abel-Jacobi morphism, $AJ:{\mathcal{C}}\to J$ sending $x$ to $0$. Intuitively, this corresponds to the family of invertible sheaves parameterized by ${\mathcal{C}}$ that sends $y\in{\mathcal{C}}$ to ${\mathcal{O}}_{\mathcal{C}}(y-x)$. More precisely, let $\Delta\subset {\mathcal{C}}\times {\mathcal{C}}$ be the diagonal, let $$D=\Delta-x\times {\mathcal{C}}-{\mathcal{C}}\times x,$$ and let ${\mathcal{L}}={\mathcal{O}}_{{\mathcal{C}}\times{\mathcal{C}}}(D)$ which is a divisorial correspondence between $({\mathcal{C}},x)$ and itself. The universal property above then yields the morphism $AJ:{\mathcal{C}}\to J$. It is known that $AJ$ is a closed immersion and that its image generates $J$ as an algebraic group.
The second universal property enjoyed by $J$ (or rather by $AJ$) is the Albanese property: it is universal for maps to abelian varieties. More precisely, if $A$ is an abelian variety and $\phi:{\mathcal{C}}\to A$ is a morphism sending $x$ to 0, then there is a unique homomorphism of abelian varieties $\psi:J\to A$ such that $\phi=\psi{\circ}AJ$.
Combining the two universal properties gives a useful connection between correspondences and homomorphisms: Suppose ${\mathcal{C}}$ and ${\mathcal{D}}$ are curves over ${{\mathbb{F}_q}}$ with rational points $x\in{\mathcal{C}}$ and $y\in{\mathcal{D}}$. Then we have an isomorphism $$\label{eq:divcor-hom}
\operatorname{DivCorr}(({\mathcal{C}},x),({\mathcal{D}},y))\cong\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}}).$$ Intuitively, given a divisorial correspondence on ${\mathcal{C}}\times{\mathcal{D}}$, we get a family of invertible sheaves on ${\mathcal{D}}$ parameterized by ${\mathcal{C}}$ and thus a morphism ${\mathcal{C}}\to
J_{\mathcal{D}}$. The Albanese property then gives a homomorphism $J_{\mathcal{C}}\to
J_{\mathcal{D}}$. We leave the precise version as an exercise, or see [@Milne86jv]\*[6.3]{}. We will use this isomorphism later to understand the Néron-Severi group of a product of curves.
The Tate module
---------------
Let $A$ be an abelian variety of dimension $g$ over ${{\mathbb{F}_q}}$, for example the Jacobian of a curve of genus $g$. (See [@Milne86av] for a brief introduction to abelian varieties and [@MumfordAV] for a much more complete treatment.) Choose a prime $\ell\neq p$. Let $A[\ell^n]$ be the set of ${{\overline{\mathbb{F}}_q}}$ points of $A$ of order dividing $\ell^n$. It is a group isomorphic to $({\mathbb{Z}}/\ell^n{\mathbb{Z}})^{2g}$ with a linear action of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$. We form the inverse limit $$T_\ell A=\varprojlim_n A[\ell^n]$$ where the transition maps are given by multiplication by $\ell$. Let $V_\ell A=T_\ell A{\otimes}_{{{\mathbb{Z}_\ell}}}{{\mathbb{Q}_\ell}}$, a $2g$-dimensional ${{\mathbb{Q}_\ell}}$-vector space with a linear action of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}/{{\mathbb{F}_q}})$. It is often called the [*Tate module*]{} of $A$.
According to Roquette, what we now call the Tate module seems to have first been used in print by Deuring [@Deuring40] as a substitute for homology in his work on correspondences on curves. It appears already in a letter of Hasse from 1935, see [@Roquette06]\*[p. 36]{}.
The following proposition is the modern interpretation of the connection between homology and torsion points.
Let $A$ be an abelian variety over a field $k$ and let $\ell$ be a prime not equal to the characteristic of $k$. Let $V_\ell A$ be the Tate module of $A$ and $(V_\ell A)^*$ its dual as a $G_k=\operatorname{Gal}(k^{sep}/k)$-module.
- There is a canonical isomorphism of $G_k$-modules $$(V_\ell A)^*\cong H^1(A\times{{\overline{k}}},{{\mathbb{Q}_\ell}}).$$
- If $A$ is the Jacobian of a curve ${\mathcal{C}}$ over $k$, then $$H^1(A\times{{\overline{k}}},{{\mathbb{Q}_\ell}})\cong
H^1({\mathcal{C}}\times{{\overline{k}}},{{\mathbb{Q}_\ell}}).$$
For a proof of part 1, see [@Milne86av]\*[15.1]{} and for part 2, see [@Milne86jv]\*[9.6]{}.
These exercises are meant to make the Proposition more plausible.
1. Show that if $A({\mathbb{C}})$ is a complex torus ${\mathbb{C}}^g/\Lambda$, then the singular homology $H_1(A({\mathbb{C}}),{{\mathbb{Q}_\ell}})$ is canonically isomorphic to $V_\ell A({\mathbb{C}})$. (Hint: Use the universal coefficient theorem to show that $H_1(A({\mathbb{C}}),{\mathbb{Z}}/\ell^n{\mathbb{Z}})\cong\Lambda/\ell^n\Lambda$.)
2. (Advanced) Let ${\mathcal{C}}$ be a smooth projective curve over an algebraically closed field $k$. Let $\ell$ be a prime not equal to the characteristic of $k$. Use geometric class field theory (as in [@SerreAGCF]) to show that unramified Galois covers ${\mathcal{C}}'\to{\mathcal{C}}$ equipped with an isomorphism $\operatorname{Gal}({\mathcal{C}}'/{\mathcal{C}})\cong{\mathbb{Z}}/\ell{\mathbb{Z}}$ are in bijection with elements of $\operatorname{Hom}(J_{\mathcal{C}}[\ell],{\mathbb{Z}}/\ell{\mathbb{Z}})$. (Make a convention to deal with the trivial homomorphism.) This suggests that $H^1({\mathcal{C}},{\mathbb{Z}}/\ell{\mathbb{Z}})$ “should be” $\operatorname{Hom}(J_{\mathcal{C}}[\ell],{\mathbb{Z}}/\ell{\mathbb{Z}})$ and $H_1(C,{\mathbb{Z}}/\ell{\mathbb{Z}})$ “should be” $J_{\mathcal{C}}[\ell]$. The reason we only have “should be” rather than a theorem is that a non-trivial Galois cover ${\mathcal{C}}'\to{\mathcal{C}}$ is never locally constant in the Zariski topology. This is a prime motivation for introducing the étale topology.
Tate’s theorem on homomorphisms of abelian varieties {#s:Tate-thm}
====================================================
As usual, let $k$ be a finite field and let $A$ and $B$ be two abelian varieties over $k$. Choose a prime $\ell$ not equal to the characteristic of $k$ and form the Tate modules $V_\ell A$ and $V_\ell
B$. Any homomorphism of abelian varieties $\phi:A\to B$ induces a homomorphism of Tate modules $\phi_*:V_\ell A\to V_\ell B$ and this homomorphism commutes with the action of $G_k=\operatorname{Gal}({{\overline{k}}}/k)$ on the Tate modules. We get an induced homomorphism $\operatorname{Hom}_k(A,B){\otimes}{{\mathbb{Q}_\ell}}\to\operatorname{Hom}_{G_k}\left(V_\ell A,V_\ell B\right)$. Tate’s famous result [@Tate66a] asserts that this is an isomorphism:
\[thm:TateIsogThm\] The map $\phi\mapsto\phi_*$ induces an isomorphism of ${{\mathbb{Q}_\ell}}$-vector spaces: $$\operatorname{Hom}_k(A,B){\otimes}{{\mathbb{Q}_\ell}}{\tilde{\to}}\operatorname{Hom}_{G_k}\left(V_\ell A,V_\ell B\right).$$
We also mention [@Zarhin08] which gives a different proof and a strengthening with finite coefficients.
We will use Tate’s theorem in Theorem \[thm:products\] of Lecture 2 to understand the divisors on a product of curves in terms of homomorphisms between their Jacobians.
In this lecture we discuss the basic facts about elliptic curves over function fields over finite fields. We assume the reader has some familiarity with elliptic curves over global fields such as ${\mathbb{Q}}$ or number fields, as explained, e.g., in [@SilvermanAEC], and we will focus on aspects specific to characteristic $p$. The lecture ends with statements of the main results known about the conjecture of Birch and Swinnerton-Dyer in this context.
Elliptic curves {#s:ell-curves}
===============
Definitions
-----------
We write $k={{\mathbb{F}_q}}$ for the finite field of cardinality $q$ and characteristic $p$ and we let $K$ be the function field of a smooth, projective, absolutely irreducible curve ${\mathcal{C}}$ over $k$.
An [*elliptic curve*]{} over $K$ is a smooth, projective, absolutely irreducible curve of genus 1 over $K$ equipped with a $K$-rational point $O$ that will serve as the origin of the group law.
All the basic geometric facts, e.g., of [@SilvermanAEC]\*[Ch. III and App. A]{}, continue to hold in the context of function fields. We review a few of them to establish notation, but will not enter into full details.
Using the Riemann-Roch theorem, an elliptic curve $E$ over $K$ can always be presented as a projective plane cubic curve defined by a Weierstrass equation, i.e., by an equation of the form $$\label{eq:cubic}
Y^2Z+a_1XYZ+a_3YZ^2=X^3+a_2X^2Z+a_4XZ^2+a_6Z^3$$ where $a_1,\dots,a_6\in K$. The origin $O$ is the point at infinity $[0:1:0]$. We often give the equation in affine form: $$\label{eq:cubica}
y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$ where $x=X/Z$ and $y=Y/Z$.
The quantities $b_2,\dots,b_8,c_4,c_6,\Delta,j$ are defined by the usual formulas ([@SilvermanAEC]\*[III.1]{} or [@Deligne75]). Since $E$ is smooth, by the following exercise $\Delta\neq0$.
The word “smooth” in the definition of an elliptic curve means that the morphism $E\to\operatorname{Spec}K$ is smooth. Smoothness of a morphism can be tested via the Jacobian criterion (see, e.g., [@HartshorneAG]\*[III.10.4]{} or [@LiuAGAC]\*[4.3.3]{}). Show that the projective plane cubic (\[eq:cubic\]) is smooth if and only if $\Delta\neq0$. Because the ground field $K$ is not perfect, smoothness is strictly stronger than the requirement that $E$ be regular, i.e., that its local rings be regular local rings (cf. [@LiuAGAC]\*[4.2.2]{}). For example, show that the projective cubic defined by $Y^2Z=X^3-tZ^3$ over $K={{\mathbb{F}_p}}(t)$ with $p=2$ or $3$ is a regular scheme, but is not smooth over $K$.
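The “usual formulas” for $b_2,\dots,b_8,c_4,c_6,\Delta,j$ invoked above are easy to transcribe. The sketch below (an illustration with an arbitrarily chosen curve, not from the notes) computes them for $y^2=x^3+x+1$ and checks the two standard identities $4b_8=b_2b_6-b_4^2$ and $1728\Delta=c_4^3-c_6^2$, which catch most transcription errors.

```python
from fractions import Fraction

def weierstrass_data(a1, a2, a3, a4, a6):
    """b- and c-invariants, discriminant, and j for y^2+a1xy+a3y = x^3+a2x^2+a4x+a6."""
    b2 = a1 * a1 + 4 * a2
    b4 = 2 * a4 + a1 * a3
    b6 = a3 * a3 + 4 * a6
    b8 = a1 * a1 * a6 + 4 * a2 * a6 - a1 * a3 * a4 + a2 * a3 * a3 - a4 * a4
    c4 = b2 * b2 - 24 * b4
    c6 = -b2 ** 3 + 36 * b2 * b4 - 216 * b6
    disc = -b2 * b2 * b8 - 8 * b4 ** 3 - 27 * b6 * b6 + 9 * b2 * b4 * b6
    assert 4 * b8 == b2 * b6 - b4 * b4          # standard identity
    assert 1728 * disc == c4 ** 3 - c6 * c6     # standard identity
    j = Fraction(c4 ** 3, disc) if disc else None   # j defined iff Delta != 0
    return disc, j

disc, j = weierstrass_data(0, 0, 0, 1, 1)       # y^2 = x^3 + x + 1
print(disc, j)   # -496 6912/31
```

Since $\Delta=-496\neq0$ (in characteristic $0$, and indeed modulo any $p\neq2,31$), this Weierstrass equation defines a smooth cubic, consistent with the exercise above.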
Let $E$ be an elliptic curve over $K$.
1. We say $E$ is [*constant*]{} if there is an elliptic curve $E_0$ defined over $k$ such that $E\cong E_0\times_kK$. Equivalently, $E$ is constant if it can be defined by a Weierstrass cubic (\[eq:cubic\]) where the $a_i\in k$.
2. We say $E$ is [*isotrivial*]{} if there exists a finite extension $K'$ of $K$ such that $E$ becomes constant over $K'$. Note that a constant curve is isotrivial.
3. We say $E$ is [*non-isotrivial*]{} if it is not isotrivial. We say $E$ is [*non-constant*]{} if it is not constant.
Show that $E$ is isotrivial if and only if $j(E)\in k$. Suppose that $E$ is isotrivial, so that $E$ becomes constant over a finite extension $K'$ and let $k'$ be the field of constants of $K'$ (the algebraic closure of $k$ in $K'$). [*A priori*]{}, the definition of isotrivial says that there is an elliptic curve $E_0$ over $k'$ such that $E\times_KK'\cong E_0\times_{k'}K'$. Show that we may take $K'$ to have field of constants $k$ and $E_0$ to be defined over $k$. Show also that we may take $K'$ to be separable and of degree dividing 24 over $K$.
For any elliptic curve $E$ over $K$, the functor on $K$-algebras $L\mapsto\operatorname{Aut}_L(E\times L)$ is represented by a group scheme $\underline{\operatorname{Aut}}(E)$. (Concretely, this means there is a group scheme $\underline{\operatorname{Aut}}(E)$ such that for any $K$-algebra $L$, $\operatorname{Aut}_L(E\times L)$ is $\underline{\operatorname{Aut}}(E)(L)$, the group of $L$-valued points of $\underline{\operatorname{Aut}}(E)$.) Show that $\underline{\operatorname{Aut}}(E)$ is an étale group scheme. Equivalently, show that any element of $\operatorname{Aut}_{\overline K}(E)$ is defined over a separable extension of $K$. (This is closely related to the previous exercise.)
Examples {#ss:examples}
--------
Let $K={{\mathbb{F}_p}}(t)$ with $p>3$ and define elliptic curves $$\begin{aligned}
E_1:\quad&y^2=x^3+1\\
E_2:\quad&y^2=x^3+t^6\\
E_3:\quad&y^2=x^3+t\\
E_4:\quad&y^2=x^3+x+t.\end{aligned}$$ Then $E_1\cong E_2$ over $K$ and both are constant, $E_3$ is isotrivial and non-constant, whereas $E_4$ is non-isotrivial.
For more examples, let $K={{\mathbb{F}_p}}(t)$ (with $p$ restricted as indicated) and define $$\begin{aligned}
(p\neq3)\qquad&E_5:\quad y^2+ty=x^3\\
(p\neq2)\qquad&E_6:\quad y^2=x^3+tx\\
(p~\text{arbitrary})\qquad&E_7:\quad y^2+xy+ty=x^3\\
(p~\text{arbitrary})\qquad&E_8:\quad y^2+xy=x^3+tx\\
(p~\text{arbitrary})\qquad&E_9:\quad y^2+xy=x^3+t.\end{aligned}$$ Then $E_5$ and $E_6$ are isotrivial and non-constant whereas $E_7$, $E_8$, and $E_9$ are non-isotrivial.
Frobenius
=========
If $X$ is a scheme of characteristic $p$, we define the [*absolute Frobenius*]{} morphism $\operatorname{Fr}_X:X\to X$ as usual: It is the identity on the underlying topological space and raises functions to the $p$-th power. When $X=\operatorname{Spec}K$, $\operatorname{Fr}_X$ is just the map of schemes induced by the ring homomorphism $K\to K$, $a\mapsto a^p$.
Suppose as usual that $K$ is a function field and let $E$ be an elliptic curve over $K$. Define a new elliptic curve $E^{(p)}$ over $K$ by the fiber product diagram: $$\xymatrix{\mathllap{E^{(p)}=}\operatorname{Spec}K\times_{\operatorname{Spec}K}E\ar[r]\ar[d]&E\ar[d]\\
\operatorname{Spec}K\ar[r]^{\operatorname{Fr}}&\operatorname{Spec}K}$$ More concretely, if $E$ is presented as a Weierstrass cubic as in equation (\[eq:cubica\]), then $E^{(p)}$ is given by the equation with $a_i$ replaced by $a_i^p$. The universal property of the fiber product gives a canonical morphism $\operatorname{Fr}_{E/K}$, the [*relative Frobenius*]{}: $$\xymatrix{E\ar[r]^{\operatorname{Fr}_{E/K}}\ar[dr]&E^{(p)}\ar[r]\ar[d]&E\ar[d]\\
&\operatorname{Spec}K\ar[r]^{\operatorname{Fr}}&\operatorname{Spec}K}$$ By definition $\operatorname{Fr}_{E/K}$ is a morphism over $K$. In terms of Weierstrass equations for $E$ and $E^{(p)}$ as above, it is just the map $(x,y)\mapsto(x^p,y^p)$.
It is evident that $\operatorname{Fr}_{E/K}$ is an isogeny, i.e., a surjective homomorphism of elliptic curves, and that its degree is $p$. We define $V=V_{E/K}$ to be the dual isogeny, so that $V_{E/K}\circ\operatorname{Fr}_{E/K}=[p]$, multiplication by $p$ on $E$.
Note that $j(E^{(p)})=j(E)^p$ so that if $E$ is non-isotrivial, $E$ and $E^{(p)}$ are not isomorphic. Thus, using Frobenius and its iterates, we see that there are infinitely many non-isomorphic elliptic curves isogenous to any non-isotrivial $E$. This is in marked contrast to the situation over number fields (cf. [@Faltings86]).
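When the coefficients $a_i$ already lie in ${{\mathbb{F}_q}}$ we have $E^{(p)}=E$, and the relative Frobenius becomes an endomorphism $(x,y)\mapsto(x^p,y^p)$ of $E$. A quick numerical illustration (our own, not from the notes): for $y^2=x^3+x+1$ with coefficients in ${\mathbb{F}}_5$, the map $(x,y)\mapsto(x^5,y^5)$ permutes the affine ${\mathbb{F}}_{25}$-points of $E$, and its fixed points are exactly the affine ${\mathbb{F}}_5$-points.

```python
p = 5

# F_25 = F_5[u]/(u^2 - 2); elements are pairs (c0, c1) = c0 + c1*u
def mul(s, t):
    (a0, a1), (b0, b1) = s, t
    return ((a0 * b0 + 2 * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def add(s, t):
    return ((s[0] + t[0]) % p, (s[1] + t[1]) % p)

def pow5(s):                       # the p-th power map, p = 5
    r = (1, 0)
    for _ in range(5):
        r = mul(r, s)
    return r

F25 = [(c0, c1) for c0 in range(p) for c1 in range(p)]

def on_curve(x, y):                # y^2 = x^3 + x + 1 in F_25
    return mul(y, y) == add(add(mul(mul(x, x), x), x), (1, 0))

pts = [(x, y) for x in F25 for y in F25 if on_curve(x, y)]
frob = {(x, y): (pow5(x), pow5(y)) for (x, y) in pts}

# Frobenius maps the point set to itself (E^{(p)} = E since the a_i lie in F_5)...
assert set(frob.values()) == set(pts)
# ...and its fixed points are exactly the points with both coordinates in F_5:
fixed = [P for P, Q in frob.items() if P == Q]
assert fixed == [(x, y) for (x, y) in pts if x[1] == 0 and y[1] == 0]
print(len(pts), len(fixed))   # 26 8  (N_2 = 27 and N_1 = 9, counting infinity)
```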
\[l:(p)\] Let $E$ be an elliptic curve over $K$. Then $j(E)$ is a $p$-th power in $K$ if and only if there exists an elliptic curve $E'$ over $K$ such that $E\cong E^{\prime(p)}$.
We sketch a fancy argument and pose as an exercise a more down-to-earth proof. Obviously if there is an $E'$ with $E\cong
E^{\prime(p)}$, then $j(E)=j(E^{\prime(p)})=j(E')^p\in K^p$. Conversely, suppose $j(E)\in K^p$ and choose an elliptic curve $E''$ such that $j(E'')^p=j(E)$. Since $j(E^{\prime\prime(p)})=j(E'')^p=j(E)$, the curve $E^{\prime\prime(p)}$ is isomorphic to $E$ over a finite separable extension of $K$. In other words, $E$ is the twist of $E^{\prime\prime(p)}$ by a cocycle in $H^1(G_K,\operatorname{Aut}_{K^{sep}}(E^{\prime\prime(p)}))$. But there is a canonical isomorphism $\operatorname{Aut}_{K^{sep}}(E^{\prime\prime(p)})\cong\operatorname{Aut}_{K^{sep}}(E^{\prime\prime})$ and twisting $E''$ by the corresponding element of $$H^1(G_K,\operatorname{Aut}_{K^{sep}}(E^{\prime\prime}))\cong
H^1(G_K,\operatorname{Aut}_{K^{sep}}(E^{\prime\prime(p)}))$$ we obtain an elliptic curve $E'$ with $E^{\prime(p)}\cong E$.
Use explicit equations, as in [@SilvermanAEC]\*[Appendix A]{}, to prove the lemma.
The Hasse invariant
===================
Let $F$ be a field of characteristic $p$ and $E$ an elliptic curve over $F$. Let ${\mathcal{O}}_E$ be the sheaf of regular functions on $E$ and let $\Omega^1_E$ be the sheaf of Kähler differentials on $E$. The coherent cohomology group $H^1(E,{\mathcal{O}}_E)$ is a one-dimensional $F$-vector space and is Serre dual to the space of invariant differentials $H^0(E,\Omega^1_E)$. Choose a non-zero differential $\omega\in H^0(E,\Omega^1_E)$ and let $\eta$ be the dual element of $H^1(E,{\mathcal{O}}_E)$. The absolute Frobenius $\operatorname{Fr}_E$ induces a ($p$-linear) homomorphism: $$\operatorname{Fr}_E^*:H^1(E,{\mathcal{O}}_E)\to H^1(E,{\mathcal{O}}_E).$$ We define an element $A=A(E,\omega)$ of $F$ by requiring that $\operatorname{Fr}_E^*(\eta)=A(E,\omega)\eta$. This is the [*Hasse invariant*]{} of $E$. It has weight $p-1$ in the sense that $A(E,\lambda^{-1}\omega)=\lambda^{p-1}A(E,\omega)$ for all $\lambda\in
F^\times$.
Suppose $E$ is given by a Weierstrass equation (\[eq:cubica\]) and $\omega=dx/(2y+a_1x+a_3)$. If $p=2$, then $A(E,\omega)=a_1$. If $p>2$, choose an equation with $a_1=a_3=0$. Then $A(E,\omega)$ is the coefficient of $x^{p-1}$ in $(x^3+a_2x^2+a_4x+a_6)^{(p-1)/2}$. These assertions follow from [@KatzMazurAM]\*[12.4]{}, where several other calculations of $A$ are also presented.
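For odd $p$ the recipe above is easy to carry out by brute force. The following Python sketch expands $(x^3+a_2x^2+a_4x+a_6)^{(p-1)/2}$ modulo $p$ and reads off the coefficient of $x^{p-1}$; the test curve $y^2=x^3+x$ is an illustrative choice, for which it detects the classical fact that the curve is supersingular exactly when $p\equiv3\pmod4$:

```python
def hasse_invariant(p, a2, a4, a6):
    """Coefficient of x^(p-1) in (x^3 + a2 x^2 + a4 x + a6)^((p-1)/2) mod p (p odd)."""
    cubic = [a6 % p, a4 % p, a2 % p, 1]   # coefficient list, index = degree
    result = [1]
    for _ in range((p - 1) // 2):
        prod = [0] * (len(result) + 3)
        for i, c in enumerate(result):
            for j, d in enumerate(cubic):
                prod[i + j] = (prod[i + j] + c * d) % p
        result = prod
    return result[p - 1]

# y^2 = x^3 + x: ordinary when p = 1 (mod 4), supersingular when p = 3 (mod 4)
for p in [5, 7, 11, 13]:
    A = hasse_invariant(p, 0, 1, 0)
    print(p, A, "supersingular" if A == 0 else "ordinary")
```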
Recall that $E/K$ is [*ordinary*]{} if the group of $p$-torsion points $E(\overline K)[p]\neq 0$ and [*supersingular*]{} otherwise. It is known that $E$ is supersingular if and only if $A(E,\omega)=0$ (e.g., [@KatzMazurAM]\*[12.3.6 and 12.4]{}) and in this case $j(E)\in{\mathbb{F}}_{p^2}$ (e.g., [@KatzMazurAM]\*[proof of 2.9.4]{}). (Alternatively, one may apply [@SilvermanAEC]\*[V.3.1]{} to $E$ over $\overline K$.) In particular, if $E$ is supersingular, then it must also be isotrivial.
Endomorphisms
=============
The classification of endomorphism rings in [@SilvermanAEC]\*[III.9]{} goes over verbatim to the function field case: $\operatorname{End}_{\overline K}(E)$ is either ${\mathbb{Z}}$, an order in an imaginary quadratic number field, or an order in a quaternion algebra over ${\mathbb{Q}}$ ramified exactly at $\infty$ and $p$. The quaternionic case occurs if and only if $E$ is supersingular, and the imaginary quadratic case occurs if and only if $j(E)$ is in ${{\overline{\mathbb{F}}_p}}$ and $E$ is not supersingular ([@SilvermanAEC]\*[V.3.1 and Exer. V.5.8]{}).
In particular, if $E$ is non-isotrivial, then $\operatorname{End}_{\overline
K}(E)=\operatorname{End}_K(E)={\mathbb{Z}}$.
The Mordell-Weil-Lang-Néron theorem
===================================
We write $E(K)$ for the group of $K$-rational points of $E$ and we call $E(K)$ the Mordell-Weil group of $E$ over $K$. Lang and Néron (independently) generalized the classical Mordell-Weil theorem to the function field context:
Assume that $K={{\mathbb{F}_q}}({\mathbb{C}})$ is the function field of a curve over a finite field and let $E$ be an elliptic curve over $K$. Then $E(K)$ is a finitely generated abelian group.
(The theorems of Lang and Néron apply much more generally to any abelian variety $A$ over a field $K$ that is finitely generated over its “constant field” $k$, but one has to take care of the “constant part” of $A$. See [@UlmerCRM] for details.)
We will not give a detailed proof of the MWLN theorem here, but will mention two strategies. One is to follow the method of proof of the Mordell-Weil (MW) theorem over a number field. Choose a prime number $\ell\neq p$. By an argument very similar to that in [@SilvermanAEC]\*[Ch. VIII]{} one can show that $E(K)/\ell E(K)$ is finite (the “weak Mordell-Weil theorem”) by embedding it in a Selmer group and showing that the Selmer group is finite by using the two fundamental finiteness results of algebraic number theory (finiteness of the class group and finite generation of the unit group) applied to Dedekind domains in $K$. One can then introduce a theory of heights exactly as in [@SilvermanAEC] and show that the MW theorem follows from the weak MW theorem and finiteness properties of heights. See the original paper of Lang and Néron [@LangNeron59] for the full details. A complete treatment in modern language has been given by Conrad [@Conrad06].
One interesting twist in the function field setting comes if one takes $\ell=p$ above. It is still true that the Selmer group for $p$ is finite, but one needs to use the local restrictions at all places; the maximal abelian extension of exponent $p$ unramified outside a finite but non-empty set of places is not finite and so one needs some control on ramification at every place. See [@Ulmer91] for a detailed account of $p$-descent in characteristic $p$.
A second strategy of proof, about which we will say more in Lecture 3, involves relating the Mordell-Weil group of $E$ to the Néron-Severi group of a closely related surface ${\mathcal{E}}$. In fact, finite generation of the Néron-Severi group (known as the “theorem of the base”) is equivalent to the Lang-Néron theorem. A direct proof of the theorem of the base was given by Kleiman in [@SGA6]\*[XIII]{}. See also [@MilneEC]\*[V.3.25]{}.
The constant case {#s:constant}
=================
It is worth pausing in the general development to look at the case of a constant curve $E$. Recall that $K$ is the function field $k({\mathcal{C}})$ of the curve ${\mathcal{C}}$ over $k={{\mathbb{F}_q}}$. Suppose $E_0$ is an elliptic curve over $k$ and let $E=E_0\times_kK$.
We have a canonical isomorphism $$E(K)\cong\operatorname{Mor}_k({\mathcal{C}},E_0)$$ where $\operatorname{Mor}_k$ denotes morphisms of varieties over $k$ (equivalently, morphisms of $k$-schemes). Under this isomorphism, $E(K)_{tor}$ corresponds to the subset of constant morphisms.
By definition, $E(K)$ is the set of $K$-morphisms $$\operatorname{Spec}K\to E=E_0\times_kK.$$ By the universal property of the fiber product, these are in bijection with $k$-morphisms $\operatorname{Spec}K\to E_0$. Since ${\mathcal{C}}$ is a smooth curve, any $k$-morphism $\operatorname{Spec}K\to E_0$ extends uniquely to a $k$-morphism ${\mathcal{C}}\to E_0$. This establishes a map $E(K)\to\operatorname{Mor}_k({\mathcal{C}},E_0)$. If $\eta:\operatorname{Spec}K\to{\mathcal{C}}$ denotes the canonical inclusion, composition with $\eta$ ($\phi\mapsto\phi\circ\eta$) induces a map $\operatorname{Mor}_k({\mathcal{C}},E_0)\to
E(K)$ inverse to the map above. This establishes the desired bijection and this bijection is obviously compatible with the group structures.
Since $k$ is finite, it is clear that a constant morphism goes over to a torsion point. Conversely, if $P\in E(K)$ is torsion, say of order $n$, then the image of the corresponding $\phi:{\mathcal{C}}\to E_0$ must lie in the set of $n$-torsion points of $E_0$, a discrete set, and this implies that $\phi$ is constant.
For example, if $K$ is rational (i.e., ${\mathcal{C}}={\mathbb{P}}^1$ so that $K=k(t)$), then $E(K)=E_0(k)$.
Let $J_{\mathcal{C}}$ be the Jacobian of ${\mathcal{C}}$. We have canonical isomorphisms $$E(K)/E(K)_{tor}\cong\operatorname{Hom}_{k-av}(J_{\mathcal{C}},E_0)\cong\operatorname{Hom}_{k-av}(E_0,J_{\mathcal{C}}).$$
The Albanese property of the Jacobian of ${\mathcal{C}}$ (Subsection \[ss:pic-alb\] of Lecture 0) gives a surjective homomorphism $$\operatorname{Mor}_{k}({\mathcal{C}},E_0)\to\operatorname{Hom}_{k-av}(J_{\mathcal{C}}, E_0).$$ This homomorphism sends non-constant (and therefore surjective) morphisms to non-constant (surjective) homomorphisms, so its kernel consists exactly of the constant morphisms. The second isomorphism in the statement of the corollary follows from the fact that Jacobians are self-dual.
By Poincaré complete reducibility [@Milne86av]\*[12.1]{}, $J_{\mathcal{C}}$ is isogenous to a product of simple abelian varieties. Suppose $J_{\mathcal{C}}$ is isogenous to $E_0^m\times A$ and $A$ admits no non-zero morphisms to $E_0$. We say that “$E_0$ appears in $J_{\mathcal{C}}$ with multiplicity $m$.” Then it is clear from the corollary that $E(K)/E(K)_{tor}\cong\operatorname{End}_k(E_0)^m$ and so the rank of $E(K)$ is $m$, $2m$, or $4m$.
Tate and Shafarevich used these ideas to exhibit isotrivial elliptic curves over $F={\mathbb{F}}_p(t)$ of arbitrarily large rank. Indeed, using Tate’s theorem on isogenies of abelian varieties over finite fields (reviewed in Section \[s:Tate-thm\] of Lecture 0) and a calculation of zeta functions in terms of Gauss sums, they were able to produce a hyperelliptic curve ${\mathcal{C}}$ over ${{\mathbb{F}_p}}$ whose Jacobian is isogenous to $E_0^m\times A$ where $E_0$ is a supersingular elliptic curve and the multiplicity $m$ is as large as desired. If $K={{\mathbb{F}_p}}({\mathcal{C}})$, $E$ is the constant curve $E=E_0\times F$, and $E'$ is the twist of $E$ by the quadratic extension $K/F$, then $\operatorname{Rank}E'(F)=\operatorname{Rank}E(K)$ and so $E'(F)$ has large rank by the analysis above. See the original article [@TateShafarevich67] for more details and a series of articles by Elkies (starting with [@Elkies94]) for a beautiful application to the construction of lattices with high packing densities.
Torsion
=======
An immediate corollary of the MWLN theorem is that $E(K)_{tor}$ is finite. In fact, $E(K)_{tor}$ is isomorphic to a group of the form $${\mathbb{Z}}/m{\mathbb{Z}}\times{\mathbb{Z}}/n{\mathbb{Z}}$$ where $m$ divides $n$ and $p$ does not divide $m$. (See for example [@SilvermanAEC]\*[Ch. 3]{}.) One can also see using the theory of modular curves that every such group appears for a suitable $K$ and $E$.
In another direction, one can give uniform bounds on torsion that depend only on crude invariants of the field $K$.
Indeed, in the constant case, $E(K)_{tor}\cong E_0({{\mathbb{F}_q}})$, which has order bounded by $(q^{1/2}+1)^2$. In the isotrivial case, there is a finite extension $K'$ with the same field of constants $k={{\mathbb{F}_q}}$ over which $E$ becomes constant. Thus $E(K)_{tor}\subset E(K')_{tor}$ again has cardinality bounded by $(q^{1/2}+1)^2$.
We now turn to the non-isotrivial case.
Assume that $E$ is non-isotrivial and let $g_{\mathcal{C}}$ be the genus of ${\mathcal{C}}$. Then there is a finite and effectively calculable list of groups—depending only on $g_{\mathcal{C}}$ and $p$—such that for any non-isotrivial elliptic curve $E$ over $K$, $E(K)_{tor}$ appears on the list.
(Sketch) First consider the prime-to-$p$ torsion subgroup of $E(K)$. It has the form $G={\mathbb{Z}}/m{\mathbb{Z}}\times{\mathbb{Z}}/n{\mathbb{Z}}$ where $m|n$ and $p{\not|}m$. There is a modular curve $X(m,n)$, irreducible and defined over ${{\mathbb{F}_p}}(\mu_m)$, that is a coarse moduli space for elliptic curves with subgroups isomorphic to $G$. We get a morphism ${\mathcal{C}}\to X(m,n)$ which is non-constant (because $E$ is non-isotrivial) and therefore surjective. The Riemann-Hurwitz formula then implies that $g_{\mathcal{C}}\ge
g_{X(m,n)}$. But the genus of $X(m,n)$ goes to infinity with $n$. Indeed, $g_{X(m,n)}\ge g_{X(1,n)}$ and standard genus formulae ([@MiyakeMF]\*[4.2]{}) together with crude estimation show that the latter is bounded below by $$1+\frac{n^2}{24\zeta(2)}-\frac{n\log_2n}{4}.$$ This shows that for a fixed value of $g_{\mathcal{C}}$, only finitely many groups $G$ as above can appear as $E(K)_{tor}$.
The argument for $p$-torsion is similar, except that one uses the Igusa curves $Ig(p^n)$ (cf. [@KatzMazurAM]\*[Ch. 12]{}). If $E(K)$ has a point of order $p^n$, we get a non-constant morphism ${\mathcal{C}}\to
Ig(p^n)$ and the genus of $Ig(p^n)$ is asymptotic to $p^{2n}/48$ [@Igusa68]\*[p. 96]{}.
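The displayed lower bound on $g_{X(1,n)}$ is crude, but it suffices because it tends to infinity with $n$. A few lines of Python make the growth explicit; the cutoff values printed below are a numerical illustration, not part of the proof:

```python
import math

def genus_lower_bound(n):
    # 1 + n^2/(24*zeta(2)) - n*log2(n)/4, with zeta(2) = pi^2/6
    zeta2 = math.pi ** 2 / 6
    return 1 + n * n / (24 * zeta2) - n * math.log2(n) / 4

# The bound tends to infinity, so for each genus g only finitely many
# levels n escape it.  Print the largest level not excluded for small g:
for g in [0, 1, 2]:
    not_excluded = [n for n in range(2, 5000) if genus_lower_bound(n) <= g]
    print(g, max(not_excluded))
```

(The bound is not monotone for small $n$, but it is eventually increasing, which is all the finiteness argument needs.)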
This proposition seems to have been rediscovered repeatedly over the years. The first reference I know of is [@Levin68].
Since the genus of a function field is an analogue of the discriminant (more precisely $q^{2g-2}$ is an analogue of the absolute value of the discriminant of a number field), the proposition is an analogue of bounding $E(K)_{tor}$ in terms of the discriminant of a number field $K$. One could ask for a strengthening where torsion is bounded by “gonality”, i.e., by the smallest degree of a non-constant map ${\mathcal{C}}\to{\mathbb{P}}^1$. This would be an analogue of bounding $E(K)_{tor}$ in terms of the degree of a number field $K$, as in the theorems of Mazur, Kamienny, and Merel [@Merel96]. This is indeed possible and can be proven by mimicking the proof of the proposition, replacing bounds on the genus of the modular curve with bounds on its gonality. See [@Poonen07] for the best results currently known on gonality of modular curves.
Compute the optimal list mentioned in the proposition for $g=0$. (This is rather involved.) Note that the optimal list in fact depends on $p$. Indeed, ${\mathbb{Z}}/11{\mathbb{Z}}$ is on the list if and only if $p=11$.
One can be very explicit about $p$-torsion:
Suppose that $E$ is a non-isotrivial elliptic curve over $K$. Then $E(K)$ has a point of order $p$ if and only if $j(E)\in K^{p}$ and $A(E,\omega)$ is a $(p-1)$st power in $K^\times$.
Note that whether $A(E,\omega)$ is a $(p-1)$st power is independent of the choice of the differential $\omega$.
Let $E\xrightarrow{\operatorname{Fr}}E^{(p)}\xrightarrow{V}E$ be the standard factorization of multiplication by $p$ into Frobenius and Verschiebung. Recall (e.g., [@Ulmer91]\*[2.1]{}) that $A(E,\omega)$ is a $(p-1)$st power in $K$ if and only if $\ker \operatorname{Fr}\cong\mu_p$ if and only if $\ker
V\cong{\mathbb{Z}}/p{\mathbb{Z}}$ if and only if there is a non-trivial $p$-torsion point in $E^{(p)}(K)$.
Now suppose that $P\in E(K)$ is a non-trivial $p$-torsion point. Then $\operatorname{Fr}(P)$ is a non-trivial $p$-torsion point in $E^{(p)}(K)$ and so $A(E,\omega)$ is a $(p-1)$st power in $K$. Let $E'$ be the quotient of $E$ by the cyclic subgroup generated by $P$: $E'=E/{\langle}P{\rangle}$. Since ${\langle}P{\rangle}$ is in the kernel of multiplication by $p$, we have a factorization of multiplication by $p$: $$[p]:E\to E'\to E.$$ Since $E\to E'$ is étale of degree $p$ and $[p]$ is inseparable of degree $p^2$, we have that $E'\to E$ is purely inseparable of degree $p$. But an elliptic curve in characteristic $p$ has a unique inseparable isogeny of degree $p$ (namely the quotient by the unique connected subgroup of order $p$, the kernel of Frobenius) so we have an identification $E=E^{\prime(p)}$. By \[l:(p)\], $j(E)\in K^p$.
Conversely, suppose $A(E,\omega)$ is a $(p-1)$st power and $j(E)\in
K^p$. Let $E'$ be the elliptic curve such that $E^{\prime(p)}\cong
E$. Given a differential $\omega$ on $E$, there is a differential $\omega'$ on $E'$ such that $A(E,\omega)=A(E',\omega')^p$ (as can be seen for example by using Weierstrass equations). It follows that $A(E',\omega')$ is also a $(p-1)$st power in $K$. Thus we have a non-trivial point of order $p$ in $E^{\prime(p)}(K)=E(K)$.
Part of the proposition generalizes trivially by iteration: if $E(K)$ has a point of order $p^n$, then $j(E)\in K^{p^n}$. A full characterization of $p^n$ torsion seems harder—the condition that $A(E,\omega)$ be a $(p-1)$st power is closely related to the equations defining the Igusa curve $Ig(p)$ ([@KatzMazurAM]\*[12.8]{}), but we do not have such explicit equations for $Ig(p^n)$ when $n>1$.
Local invariants {#s:local-invs}
================
Let $E$ be an elliptic curve over $K$ and let $v$ be a place of $K$. A model (\[eq:cubica\]) for $E$ with coefficients in the valuation ring ${\mathcal{O}}_{(v)}$ is said to be [*integral at $v$*]{}. The valuation of the discriminant $\Delta$ of an integral model is a non-negative integer and so there are models where this valuation takes its minimum value. Such models are [*minimal integral models at $v$*]{}.
Choose a model for $E$ that is minimal integral at $v$: $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6.$$ Let $\overline a_i\in\kappa_v$ be the reductions of the coefficients and let $E_v$ be the plane cubic $$\label{eq:reduction}
y^2+\overline a_1xy+\overline a_3y
=x^3+\overline a_2x^2+\overline a_4x+\overline a_6$$ over the residue field $\kappa_v$. It is not hard to check using Weierstrass equations that the isomorphism type of the reduced cubic (\[eq:reduction\]) is independent of the choice of minimal model.
If the discriminant of a minimal integral model at $v$ has valuation zero, i.e., is a unit at $v$, then the reduced equation defines an elliptic curve over $\kappa_v$. If the minimal valuation is positive, then the reduced curve is singular. We distinguish three cases according to the geometry of the reduced curve.
1. If $E_v$ is a smooth cubic, we say $E$ has [*good reduction at $v$*]{}.
2.  If $E_v$ is a nodal cubic, we say $E$ has [*multiplicative reduction at $v$*]{}. If the tangent lines at the node are rational over $\kappa_v$ we say the reduction is [*split multiplicative*]{} and if they are rational only over a quadratic extension, we say the reduction is [*non-split multiplicative*]{}.
3. If $E_v$ is a cuspidal cubic, we say $E$ has [*additive reduction*]{}.
Define an integer $a_v$ as follows: $$\label{eq:a_v}
a_v=\begin{cases}
q_v+1-\#E_v(\kappa_v)&\text{if $E$ has good reduction at $v$}\\
1&\text{if $E$ has split multiplicative reduction at $v$}\\
-1&\text{if $E$ has non-split multiplicative reduction at $v$}\\
0&\text{if $E$ has additive reduction at $v$}
\end{cases}$$
To make this definition less [*ad hoc*]{}, note that in the good reduction case, the numerator of the $\zeta$-function of the reduced curve is $1-a_vq_v^{-s}+q_v^{1-2s}$. Show that in the bad reduction cases, the $\zeta$-function of the reduced curve is $$\frac{1-a_vq_v^{-s}}{(1-q_v^{-s})(1-q_v^{1-s})}.$$
In the good reduction case, the results about zeta functions and étale cohomology reviewed in Lecture 0, Sections \[s:zetas\] and \[s:cohomology\] imply the “Hasse bound”: $|a_v|\le2\sqrt{q_v}$.
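In the good reduction case $a_v$ can also be computed by naive point counting over the residue field. A minimal Python sketch, using the illustrative reduced curve $y^2=x^3+x+1$ over $\kappa_v=\mathbb{F}_5$ (an assumed example, not from the text):

```python
def a_v(p, a4, a6):
    """q_v + 1 - #E_v(F_p) for the reduced curve y^2 = x^3 + a4*x + a6 over F_p."""
    assert (4 * a4**3 + 27 * a6**2) % p != 0, "curve must be smooth (good reduction)"
    count = 1  # the point at infinity
    for x in range(p):
        r = (x**3 + a4 * x + a6) % p
        count += sum(1 for y in range(p) if (y * y - r) % p == 0)
    return p + 1 - count

p = 5
a = a_v(p, 1, 1)
print(a)               # a_v for y^2 = x^3 + x + 1 at a place with q_v = 5
assert a * a <= 4 * p  # Hasse bound |a_v| <= 2*sqrt(q_v)
```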
There are two more refined invariants in the bad reduction cases: the Néron model and the conductor. The local exponent of the conductor at $v$, denoted $n_v$ is defined as $$n_v=\begin{cases}
0&\text{if $E$ has good reduction at $v$}\\
1&\text{if $E$ has multiplicative reduction at $v$}\\
2+\delta_v&\text{if $E$ has additive reduction at $v$}
\end{cases}$$ Here $\delta_v$ is a non-negative integer that is $0$ when $p>3$ and may be positive when $p=2$ or $3$. We refer to [@Tate75] for a definition and an algorithm to compute $\delta_v$.
The (global) conductor of $E$ is defined to be the divisor ${\mathfrak{n}}=\sum_v
n_v[v]$. Its degree is $\deg{\mathfrak{n}}=\sum_vn_v\deg v$.
The Néron model will be discussed in Lecture 3 below.
Mimic [@SilvermanAEC]\*[Ch. VII]{} to define a filtration on the points of $E$ over a completion $K_v$ of $K$. Show that the prime-to-$p$ part of $E(K)_{tor}$ maps injectively into $E(K_v)/E(K_v)_1$. Relate $E(K_v)/E(K_v)_1$ to the special fiber of the Néron model of $E$ at $v$. As in the classical case, this gives an excellent way to bound the prime-to-$p$ part of $E(K)_{tor}$.
The $L$-function
================
We define the $L$-function of $E/K$ as an Euler product: $$\label{eq:Ldef}
L(E,T)=\prod_{\text{good }v}\left(1-a_vT^{\deg v}+q_vT^{2\deg
v}\right)^{-1}
\prod_{\text{bad }v}\left(1-a_vT^{\deg v}\right)^{-1}$$ and $$L(E,s)=L(E,q^{-s}).$$ (Here $T$ is a formal indeterminate and $s$ is a complex number. Unfortunately, there is no standard reasonable parallel of the notations $Z$ and $\zeta$ to distinguish the function of $T$ from the function of $s$.) Because of the Hasse bound on the size of $a_v$, the product converges absolutely in the region $\operatorname{Re}s>3/2$, and as we will see below, it has a meromorphic continuation to all $s$.
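For $K={{\mathbb{F}_q}}(t)$, the local data entering the Euler product at a degree-one place $v=(t-c)$ comes from the reduced curve at $t=c$. A Python sketch of this, using the illustrative curve $y^2=x^3+x+t$ over $\mathbb{F}_5(t)$ (an assumed example with good reduction at every finite degree-one place; the place at infinity is ignored here):

```python
p = 5  # E: y^2 = x^3 + x + t over F_p(t)

def a_v(c):
    """a_v at the place t - c, from the reduced curve y^2 = x^3 + x + c over F_p."""
    assert (4 + 27 * c * c) % p != 0, "bad reduction"  # disc factor 4*a4^3 + 27*a6^2
    count = 1  # point at infinity
    for x in range(p):
        r = (x**3 + x + c) % p
        count += sum(1 for y in range(p) if (y * y - r) % p == 0)
    return p + 1 - count

# Local Euler factor at v = (t - c), as coefficients of 1 - a_v*T + p*T^2:
factors = {c: (1, -a_v(c), p) for c in range(p)}
print([a_v(c) for c in range(p)])
```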
When $E$ is constant it is elementary to calculate $L(E,s)$ in terms of the zeta-functions of $E_0$ and ${\mathcal{C}}$.
\[exer:L-const\] Suppose that $E=E_0\times_kK$. Write the $\zeta$-functions of $E_0$ and ${\mathcal{C}}$ as rational functions: $$\zeta(E_0,s)=\frac{\prod_{i=1}^{2}(1-\alpha_iq^{-s})}{(1-q^{-s})(1-q^{1-s})}$$ and $$\zeta({\mathcal{C}},s)=\frac{\prod_{j=1}^{2g_{\mathcal{C}}}(1-\beta_jq^{-s})}{(1-q^{-s})(1-q^{1-s})}.$$ Prove that $$L(E,s)=\frac{\prod_{i,j}(1-\alpha_i\beta_jq^{-s})}
{\prod_{i=1}^{2}(1-\alpha_iq^{-s})\prod_{i=1}^{2}(1-\alpha_iq^{1-s})}.$$
Thus $L(E,s)$ is a rational function in $q^{-s}$ of degree $4g_{\mathcal{C}}-4$, it extends to a meromorphic function of $s$, and it satisfies a functional equation for $s\leftrightarrow 2-s$. Its poles lie on the lines $\operatorname{Re}s=1/2$ and $\operatorname{Re}s=3/2$ and its zeroes lie on the line $\operatorname{Re}s=1$.
Although the proofs are much less elementary, these facts extend to the non-constant case as well:
Suppose that $E$ is a non-constant elliptic curve over $K$. Let ${\mathfrak{n}}$ be the conductor of $E$. Then $L(E,s)$ is a polynomial in $q^{-s}$ of degree $N=4g_{\mathcal{C}}-4+\deg{\mathfrak{n}}$, it satisfies a functional equation for $s\leftrightarrow2-s$, and its zeroes lie on the line $\operatorname{Re}s=1$. More precisely, $$L(E,s)=\prod_{i=1}^{N}(1-\alpha_iq^{-s})$$ where each $\alpha_i$ is an algebraic integer of absolute value $q$ in every complex embedding. The collection of $\alpha_i$ with multiplicities is invariant under $\alpha_i\mapsto q^2/\alpha_i$.
The theorem is a combination of results of Grothendieck, Deligne, and others. We will sketch a proof of it in Lecture 4.
Note that in all cases $L(E,s)$ is holomorphic at $s=1$. In the non-constant case, its order of vanishing at $s=1$ is bounded above by $N$ and it equals $N$ if and only if $L(E,s)=(1-q^{1-s})^N$.
The basic BSD conjecture
========================
This remarkable conjecture connects the analytic behavior of the function $L(E,s)$, constructed from local data, to the Mordell-Weil group, a global invariant.
$$\operatorname{Rank}E(K) = \operatorname{ord}_{s=1}L(E,s)$$
The original conjecture was stated only for elliptic curves over ${\mathbb{Q}}$ [@BSD65] but it is easily seen to make sense for abelian varieties over global fields. There is very strong evidence in favor of it, especially for elliptic curves over ${\mathbb{Q}}$ and abelian varieties over function fields. See [@GrossPCMI]\*[Lecture 3, §4]{} for a summary of the best theoretical evidence in the number field case. We will discuss what is known for elliptic curves in the function field case later in this course. See Section \[s:results\] for statements of the main results and [@UlmerCRM] for a discussion of the case of higher dimensional abelian varieties over function fields.
The Tate-Shafarevich group
==========================
We define the Tate-Shafarevich group of $E$ over $K$ as $${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)=\ker\left(H^1(K,E)\to\prod_v H^1(K_v,E)\right).$$ Here the cohomology groups can be taken to be Galois cohomology groups: $$H^1(K,E)=H^1(G_K,E(K^{sep}))$$ and similarly for $H^1(K_v,E)$; or they can be taken as étale or flat cohomology groups of $\operatorname{Spec}K$ with coefficients in the sheaf associated to $E$. The flat cohomology definition is essential for proving finer results on $p$-torsion in ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$.
Show that the group $H^1(K,E)$ (and therefore ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$) is torsion. Hint: Show that given a class $c\in H^1(K,E)$, there is a finite Galois extension $L/K$ such that $c$ vanishes in $H^1(L,E)$.
The refined BSD conjecture relates the leading coefficient of $L(E,s)$ at $s=1$ to invariants of $E$ including heights, Tamagawa numbers, and the order of ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$. In particular, the conjecture that ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$ is finite is included in the refined BSD conjecture. We will not discuss that conjecture in these lectures, so we refer to [@GrossPCMI] and [@UlmerCRM] for more details.
Statements of the main results {#s:results}
==============================
Much is known about the BSD conjecture over function fields. We start with general results.
\[thm:BSD1\] Let $E$ be an elliptic curve over a function field $K$. Then we have:
1. \[item:inequality\] $\operatorname{Rank}E(K)\le\operatorname{ord}_{s=1}L(E,s)$.
2. \[item:sha\] The following are equivalent:
- $\operatorname{Rank}E(K)=\operatorname{ord}_{s=1}L(E,s)$
- ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$ is finite
- for any one prime number $\ell$ ($\ell=p$ is allowed), the $\ell$-primary part ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)_{\ell^\infty}$ is finite.
3. \[item:descent\] If $K'/K$ is a finite extension and if the BSD conjecture holds for $E$ over $K'$, then it holds for $E$ over $K$.
The theorem was proven by Tate [@Tate66b] and Milne [@Milne75] and we will sketch a proof in Lecture 3. When the equivalent conditions of Item \[item:sha\] hold, it turns out that the refined BSD conjecture automatically follows. (This is also due to Tate and Milne and will be discussed in detail in [@UlmerCRM].)
We now state several special cases where the conjecture is known to be true. As will be seen in the sequel, they all ultimately reduce either to Tate’s theorem on isogenies of abelian varieties over finite fields (Theorem \[thm:TateIsogThm\] of Lecture 0) or to a theorem of Artin and Swinnerton-Dyer on $K3$ surfaces [@ArtinSwinnertonDyer73].
\[thm:BSD2\] If $E$ is an isotrivial elliptic curve over a function field $K$, then the BSD conjecture holds for $E$.
Recall that a constant curve is also isotrivial.
To state the next result, we make an [*ad hoc*]{} definition. If $E$ is an elliptic curve over $K={{\mathbb{F}_q}}(t)$ we define the [*height*]{} $h$ of $E$ to be the smallest non-negative integer such that $E$ can be defined by a Weierstrass equation (\[eq:cubica\]) where the $a_i$ are all polynomials and $\deg(a_i)\le hi$. For example, the curves $E_1$ and $E_2$ in Subsection \[ss:examples\] have height $h=0$ and the other curves $E_3,\dots,E_9$ there all have height $h=1$. See Section \[s:height\] of Lecture 3 below for a more general definition.
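Given one polynomial Weierstrass model, the smallest $h$ it certifies is $\max_i\lceil\deg(a_i)/i\rceil$; the height itself is the minimum of this quantity over all models, so the quick Python computation below only gives an upper bound in general. The example degrees are those of the model $y^2+xy+t^dy=x^3+t^dx^2$ appearing later in this section:

```python
import math

def model_height(degrees):
    """degrees: dict {i: deg(a_i)} over the nonzero Weierstrass coefficients a_i.
    Returns the smallest h with deg(a_i) <= h*i for this particular model."""
    return max((math.ceil(d / i) for i, d in degrees.items()), default=0)

# y^2 + x*y + t^d*y = x^3 + t^d*x^2: a1 = 1, a2 = t^d, a3 = t^d
for d in [1, 2, 5]:
    print(d, model_height({1: 0, 2: d, 3: d}))
```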
\[thm:BSD-low-height\] Suppose that $K=k(t)$ and that $E$ is an elliptic curve over $K$ of height $h\le2$. Then the BSD conjecture holds for $E$.
Note that this case overlaps the preceding one since an elliptic curve over $k(t)$ is constant if and only if its height is zero (cf. Proposition \[prop:h=0\] in Lecture 3).
The following case is essentially due to Shioda [@Shioda86]. To state it, consider a polynomial $f$ in three variables with coefficients in $k$ which is the sum of exactly 4 non-zero monomials, say $$f=\sum_{i=1}^4c_i\prod_{j=1}^3x_j^{e_{ij}}$$ where the $c_i\in k$ are non-zero. Set $e_{i4}=1-\sum_{j=1}^3e_{ij}$ and let $A$ be the $4\times4$ integer matrix $A=(e_{ij})$. If $\det
A\not\equiv0\pmod p$, we say that $f$ [*satisfies Shioda’s condition*]{}. Note that the condition is independent of the order of the variables $x_j$.
\[thm:4-monos\] Suppose that $K=k(t)$ and that $E$ is an elliptic curve over $K$. Suppose that $E$ is birational to a plane curve $V(f)\subset{\mathbb{A}}^2_K$ where $f$ is a polynomial in $k[t,x,y]\subset
K[x,y]$ which is the sum of exactly 4 non-zero monomials and which satisfies Shioda’s condition. Then the BSD conjecture holds for $E$.
For example, the theorem applies to the curves $E_4$, $E_7$, $E_8$, and $E_9$ of Subsection \[ss:examples\] over $K={{\mathbb{F}_q}}(t)$ for any prime power $q$. It applies more generally to these curves when $t$ is replaced by $t^d$ for any $d$ prime to $p$. Note that when $d$ is large, the height of the curve is also large, and so we get cases of BSD not covered by Theorem \[thm:BSD-low-height\].
Finally we state another more recent and ultimately much more flexible special case due to Lisa Berger [@Berger08].
\[thm:berger\] Suppose that $K=k(t)$ and that $E$ is an elliptic curve over $K$. Suppose that $E$ is birational to a plane curve of the form $$f(x)=t^dg(y)$$ where $f$ and $g$ are rational functions of one variable and $d$ is prime to $p$. Then the BSD conjecture holds for $E$.
Here one should clear denominators to interpret the equation $f=t^dg$ (or work in a Zariski open subset of the plane). For example, if $f(x)=x(x-1)$ and $g(y)=y^2/(1-y)$ then we have the plane curve over $K=k(t)$ defined by $$x(x-1)(1-y)=t^dy^2$$ which turns out to be birational to $$y^2+xy+t^dy=x^3+t^dx^2.$$
The rest of the course
======================
The remainder of these lectures will be devoted to sketching the proofs of most of the main results and applying them to construct elliptic curves of large rank over function fields.
More precisely, in Lecture 2 we will review facts about surfaces and the Tate conjecture on divisors. This is a close relative of the BSD conjecture.
In Lecture 3 we will explain the relationship between the BSD and Tate conjectures and use it to prove the part of Theorem \[thm:BSD1\] related to $\ell\neq p$ as well as most of the other theorems stated in the previous section.
In Lecture 4 we will recall a general result on vanishing of $L$-functions in towers and combine it with the results above to obtain many elliptic curves of arbitrarily large rank.
In Lecture 5, we will give other applications of these ideas to ranks of elliptic curves and explicit points.
Motivation
==========
Consider an elliptic curve $E/K$ with $K=k(t)$, and choose an equation for $E$ as in Lecture 1, equation (\[eq:cubica\]), where the $a_i$ are in $k[t]$. Then (\[eq:cubica\]), viewed in $K[x,y]$, defines an affine open subset of $E$. But if we view it as an equation in $k[t,x,y]$, it defines an affine surface with a projection to the affine $t$-line. The generic fiber of this projection is the affine curve just mentioned.
With a little more work (discussed in the next lecture), for any $E$ over $K=k({\mathcal{C}})$ we can define a smooth projective surface ${\mathcal{E}}$ over $k$ with a morphism $\pi:{\mathcal{E}}\to{\mathcal{C}}$ whose generic fiber is $E$. Obviously there will be a close connection between the arithmetic of ${\mathcal{E}}$ and that of $E$. Although ${\mathcal{E}}$ has higher dimension than $E$, it is defined over the finite field $k$ and as a result we have better control over its arithmetic. Pursuing this line of inquiry leads to the main theorems stated at the end of the previous section.
In this lecture, we discuss the relevant facts and conjectures about surfaces over finite fields. In the next lecture we will look carefully at the connections between ${\mathcal{E}}$ and $E$ and deduce the main classical theorems.
There are many excellent references for the general theory of surfaces, including [@BeauvilleCAS], [@BarthHulekPetersVandeVenCCS], and [@BadescuAS]. We generally refer to [@BadescuAS] below since it works throughout over a field of arbitrary characteristic.
Surfaces
========
Let $k={{\mathbb{F}_q}}$ be a finite field of characteristic $p$. As always, by a surface over $k$ we mean a purely 2-dimensional, separated, reduced scheme of finite type over $k$. Such a scheme is automatically quasi-projective and is projective if and only if it is complete [@BadescuAS]\*[1.28]{}. Since $k$ is perfect, a surface ${\mathcal{X}}$ is a regular scheme if and only if ${\mathcal{X}}\to\operatorname{Spec}k$ is a smooth morphism (e.g., [@LiuAGAC]\*[4.3.3, Exer. 3.24]{}). We sloppily say that “${\mathcal{X}}$ is smooth” if these conditions hold. Resolution of singularities is known for surfaces: For any surface ${\mathcal{X}}$, there is a proper birational morphism $\tilde{\mathcal{X}}\to{\mathcal{X}}$ with $\tilde{\mathcal{X}}$ smooth. (We may even take this morphism to be a composition of normalizations and blow ups at closed points [@Lipman78]. See also [@Artin86] for a nice exposition.) Therefore, every surface is birational to a smooth projective surface. In the cases of interest to us, this can be made very explicit in an elementary manner.
Throughout we assume that ${\mathcal{X}}$ is a smooth, projective, absolutely irreducible surface over $k$ and we assume that ${\mathcal{X}}(k)$ is non-empty, i.e., ${\mathcal{X}}$ has a $k$-rational point.
Divisors and the Néron-Severi group
===================================
We give a lightning review of divisors and equivalence relations on divisors. See, for example, [@HartshorneAG]\*[V.1]{} for more details.
Divisor classes
---------------
A (Weil) [*divisor*]{} is a finite formal ${\mathbb{Z}}$-linear combination of reduced, closed, codimension 1 subvarieties of ${\mathcal{X}}$: $$D=\sum a_ZZ.$$ In other words, the set of divisors is the free abelian group on the reduced, closed, codimension 1 subvarieties on ${\mathcal{X}}$.
If $Z$ is a reduced, closed subvariety of ${\mathcal{X}}$ of codimension 1, there is an associated valuation $$\operatorname{ord}_Z:k({\mathcal{X}})^\times\to{\mathbb{Z}}$$ that sends a rational function to its order of zero or pole along $Z$.
A rational function $f$ on ${\mathcal{X}}$ has a divisor: $$\operatorname{\rm Div}(f)=\sum_Z\operatorname{ord}_Z(f)Z.$$
A divisor $D$ is said to be [*linearly equivalent to zero*]{} if there is a rational function $f$ such that $\operatorname{\rm Div}(f)=D$. Two divisors $D$ and $D'$ are linearly equivalent if their difference $D-D'$ is linearly equivalent to zero.
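For example, on ${\mathcal{X}}={\mathbb{P}}^2$ with homogeneous coordinates $X,Y,Z$, the ratio $f=X/Y$ of two forms of the same degree is a rational function, and $$\operatorname{\rm Div}(X/Y)=L_X-L_Y,\qquad L_X=\{X=0\},\quad L_Y=\{Y=0\},$$ so any two lines in ${\mathbb{P}}^2$ are linearly equivalent.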
The group of divisors modulo those linearly equivalent to zero is the [*divisor class group*]{} $DivCl({\mathcal{X}})$. It is a fundamental invariant of ${\mathcal{X}}$.
The Picard group
----------------
Let $\operatorname{Pic}({\mathcal{X}})$ be the Picard group of ${\mathcal{X}}$, i.e., the group of isomorphism classes of invertible sheaves on ${\mathcal{X}}$ with group law given by the tensor product. There is a cohomological calculation of $\operatorname{Pic}({\mathcal{X}})$: $$\operatorname{Pic}({\mathcal{X}})\cong H^1({\mathcal{X}},{\mathcal{O}}_{\mathcal{X}}^\times).$$
The map sending a divisor $D$ to the invertible sheaf ${\mathcal{O}}_{\mathcal{X}}(D)$ induces an isomorphism $DivCl({\mathcal{X}}){\tilde{\to}}\operatorname{Pic}({\mathcal{X}})$.
The Néron-Severi group
----------------------
As usual, we write ${\overline{\mathcal{X}}}$ for ${\mathcal{X}}\times_k{{\overline{k}}}$. We first introduce the notion of algebraic equivalence for divisors on ${\overline{\mathcal{X}}}$. Intuitively, two divisors $D$ and $D'$ are algebraically equivalent if they lie in a family parameterized by a connected variety (which we may take to be a smooth curve). More precisely, if $T$ is a smooth curve over ${{\overline{k}}}$ and ${\mathcal{D}}\subset {\mathcal{X}}\times_{{\overline{k}}}T$ is a divisor that is flat over $T$, then we get a family of divisors on ${\mathcal{X}}$ parameterized by $T$: $t\in T$ corresponds to ${\mathcal{X}}\times\{t\}\cap{\mathcal{D}}$. Two divisors $D_1$ and $D_2$ on ${\mathcal{X}}$ are algebraically equivalent if they lie in such a family, i.e., if there is a curve $T$ and a divisor ${\mathcal{D}}$ as above and two points $t_1$ and $t_2\in T({{\overline{k}}})$ such that $D_i={\mathcal{X}}\times_{{\overline{k}}}\{t_i\}\cap{\mathcal{D}}$. ([*A priori*]{}, to ensure transitivity of this relation we should use chains of equivalences (see [@HartshorneAG]\*[Exer. V.1.7]{}) but see [@FultonIT]\*[10.3.2]{} for an argument that shows the definition works as is.) Note that linear equivalence is algebraic equivalence where $T$ is restricted to be ${\mathbb{P}}^1$ ([@HartshorneAG]\*[Exer. V.1.7]{}) and so algebraic equivalence is weaker than linear equivalence.
The group of divisors on ${\overline{\mathcal{X}}}$ modulo those algebraically equivalent to zero is the [*Néron-Severi*]{} group $\operatorname{NS}({\overline{\mathcal{X}}})$. A classical (and difficult) theorem, the “theorem of the base,” says that $\operatorname{NS}({\overline{\mathcal{X}}})$ is finitely generated. See [@LangNeron59] and [@SGA6]\*[XIII.5.1]{} for proofs and Lecture 3 below for more discussion. See also [@Conrad06] for a modern discussion of the results in [@LangNeron59].
Since algebraic equivalence is weaker than linear equivalence, $\operatorname{NS}({\overline{\mathcal{X}}})$ is a quotient of $\operatorname{Pic}({\overline{\mathcal{X}}})$.
We define $\operatorname{NS}({\mathcal{X}})$ to be the image of $\operatorname{\rm Div}({\mathcal{X}})$ in $\operatorname{NS}({\overline{\mathcal{X}}})$ or equivalently the image of $\operatorname{Pic}({\mathcal{X}})$ in $\operatorname{NS}({\overline{\mathcal{X}}})$. Thus $\operatorname{NS}({\mathcal{X}})$ is again a finitely generated abelian group. As we will see, it is of arithmetical nature.
Exercise: Let $G_k=\operatorname{Gal}({{\overline{k}}}/k)$. Show that $\operatorname{NS}({\mathcal{X}})$ is the group of invariants $\operatorname{NS}({\overline{\mathcal{X}}})^{G_k}$. You will need to use that $k$ is a finite field.
The Picard scheme
=================
We define $\operatorname{Pic}^0({\mathcal{X}})$ as the kernel of the surjection $\operatorname{Pic}({\mathcal{X}})\to\operatorname{NS}({\mathcal{X}})$. In order to understand this group better, we will introduce more structure on the Picard group. The main fact we need to know is that the group $\operatorname{Pic}^0({\mathcal{X}}\times{{\overline{k}}})$ is the set of points on an abelian variety and is therefore a divisible group. (I.e., for every class $c\in\operatorname{Pic}^0({\mathcal{X}}\times{{\overline{k}}})$ and every positive integer $n$, there is a class $c'$ such that $n\,c'=c$.) Readers willing to accept this assertion can skip the rest of this section.
The Picard group $\operatorname{Pic}({\mathcal{X}})$ is the set of $k$-points of a group scheme. More precisely, under our hypotheses on ${\mathcal{X}}$ there is a group scheme called the [*Picard scheme*]{} and denoted $\operatorname{\underline{Pic}}_{{\mathcal{X}}/k}$ which is locally of finite type over $k$ and represents the relative Picard functor. This means that if $T\to S=\operatorname{Spec}k$ is a morphism of schemes and $\pi_T:{\mathcal{X}}_T:={\mathcal{X}}\times_{\operatorname{Spec}k}T\to T$ is the base change then $$\operatorname{\underline{Pic}}_{{\mathcal{X}}/k}(T)=\frac{\operatorname{Pic}({\mathcal{X}}_T)}{\pi_T^*\operatorname{Pic}(T)}.$$ Here the left hand side is the group of $T$-valued points of $\operatorname{\underline{Pic}}_{{\mathcal{X}}/k}$. See [@Kleiman05] for a thorough and detailed overview of the Picard scheme, and in particular [@Kleiman05]\*[9.4.8]{} for the proof that there is a scheme representing the relative Picard functor as above.
We write $\operatorname{\underline{Pic}}^0_{{\mathcal{X}}/k}$ for the connected component of $\operatorname{\underline{Pic}}_{{\mathcal{X}}/k}$ containing the identity. Under our hypotheses, $\operatorname{\underline{Pic}}^0_{{\mathcal{X}}/k}$ is a geometrically irreducible projective group scheme over $k$ [@Kleiman05]\*[9.5.3, 9.5.4]{}. It may be non-reduced. (See examples in [@Igusa55] and [@Serre58] and a full analysis of this phenomenon in [@MumfordLCAS].) We let $\operatorname{PicVar}_{{\mathcal{X}}/k}=\left(\operatorname{\underline{Pic}}^0_{{\mathcal{X}}/k}\right)_{red}$, the [*Picard variety*]{} of ${\mathcal{X}}$ over $k$, which is an abelian variety over $k$.
If $k'$ is a field extension of $k$, we have $$\operatorname{Pic}^0({\mathcal{X}}_{k'})=\operatorname{\underline{Pic}}^0_{{\mathcal{X}}/k}(k')=\operatorname{PicVar}_{{\mathcal{X}}/k}(k')$$ so that $\operatorname{Pic}^0({\mathcal{X}}_{k'})$ is the set of points of an abelian variety.
By [@Kleiman05]\*[9.5.10]{}, $\operatorname{\underline{Pic}}^0_{{\mathcal{X}}/k}(k)=\operatorname{Pic}^0({\mathcal{X}})$; in other words, the class of a divisor in $\operatorname{Pic}({\mathcal{X}})$ lies in $\operatorname{Pic}^0({\mathcal{X}})$ if and only if the divisor is algebraically equivalent to 0.
Intersection numbers and numerical equivalence
==============================================
There is an intersection pairing on the Néron-Severi group: $$\operatorname{NS}({\mathcal{X}})\times\operatorname{NS}({\mathcal{X}})\to{\mathbb{Z}}$$ which is bilinear and symmetric. If $D$ and $D'$ are divisors, we write $D.D'$ for their intersection number.
There are two approaches to defining the pairing. In the first approach, one shows that given two divisors, there are divisors in the same classes in $\operatorname{NS}({\mathcal{X}})$ (or even the same classes in $\operatorname{Pic}({\mathcal{X}})$) that meet transversally. Then the intersection number is literally the number of points of intersection. The work in this approach is to prove a moving lemma and then show that the resulting pairing is well defined. See [@HartshorneAG]\*[V.1]{} for the details.
In the second approach, one uses coherent cohomology. If ${\mathcal{L}}$ is an invertible sheaf on ${\mathcal{X}}$, let $$\chi({\mathcal{L}})=\sum_{i=0}^2(-1)^i \dim_k H^i({\mathcal{X}},{\mathcal{L}})$$ be the coherent Euler characteristic of ${\mathcal{L}}$. Then define $$D.D'=\chi({\mathcal{O}}_{\mathcal{X}})-\chi({\mathcal{O}}_{\mathcal{X}}(-D))-\chi({\mathcal{O}}_{\mathcal{X}}(-D'))
+\chi({\mathcal{O}}_{\mathcal{X}}(-D-D')).$$ One checks that if $C$ is a smooth irreducible curve on ${\mathcal{X}}$, then $C.D=\deg{\mathcal{O}}_{\mathcal{X}}(D)|_{C}$ and that if $C$ and $C'$ are two distinct irreducible curves on ${\mathcal{X}}$ meeting transversally, then $C.C'$ is the sum of local intersection multiplicities. See [@BeauvilleCAS]\*[I.1-7]{} for details. (Nowhere is it used in this part of [@BeauvilleCAS] that the ground field is ${\mathbb{C}}$.)
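As a quick check on the cohomological definition, take ${\mathcal{X}}={\mathbb{P}}^2$ and let $D$ and $D'$ be curves of degrees $d$ and $d'$. Using $\chi({\mathcal{O}}_{{\mathbb{P}}^2}(n))=(n+1)(n+2)/2$, the formula gives $$D.D'=1-\frac{(d-1)(d-2)}{2}-\frac{(d'-1)(d'-2)}{2}+\frac{(d+d'-1)(d+d'-2)}{2}=dd',$$ recovering Bézout's theorem.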
Two divisors $D$ and $D'$ are said to be [*numerically equivalent*]{} if $D.D''=D'.D''$ for all divisors $D''$. If $\operatorname{Num}({\mathcal{X}})$ denotes the group of divisors on ${\mathcal{X}}$ up to numerical equivalence, then we have surjections $$\operatorname{Pic}({\mathcal{X}}){\twoheadrightarrow}\operatorname{NS}({\mathcal{X}}){\twoheadrightarrow}\operatorname{Num}({\mathcal{X}})$$ and so $\operatorname{Num}({\mathcal{X}})$ is a finitely generated group. It is clear from the definition that $\operatorname{Num}({\mathcal{X}})$ is torsion-free and so we can insert $\operatorname{NS}({\mathcal{X}})/{tor}$ (Néron-Severi modulo torsion) into this chain: $$\operatorname{Pic}({\mathcal{X}}){\twoheadrightarrow}\operatorname{NS}({\mathcal{X}}){\twoheadrightarrow}\operatorname{NS}({\mathcal{X}})/{tor}{\twoheadrightarrow}\operatorname{Num}({\mathcal{X}}).$$
Cycle classes and homological equivalence {#s:cycle}
=========================================
There is a general theory of cycle classes in $\ell$-adic cohomology, see for example [@SGA4.5]\*[\[Cycle\]]{}. In the case of divisors, things are much simpler and we can construct a cycle class map from the Kummer sequence.
Indeed, consider the short exact sequence of sheaves on ${\overline{\mathcal{X}}}$ for the étale topology: $$0\to\mu_{\ell^n}\to{\mathbb{G}}_m\stackrel{\ell^n}{\longrightarrow}{\mathbb{G}}_m\to0.$$ (The sheaves $\mu_{\ell^n}$ and ${\mathbb{G}}_m$ are perfectly reasonable sheaves in the Zariski topology on ${\mathcal{X}}$, but the arrow on the right is not surjective in that context. We need to use the étale topology or a finer one.) Taking cohomology, we get a homomorphism $$\operatorname{Pic}({\overline{\mathcal{X}}})/\ell^n=H^1({\overline{\mathcal{X}}},{\mathbb{G}}_m)/\ell^n\to
H^2({\overline{\mathcal{X}}},\mu_{\ell^n}).$$ Since $\operatorname{Pic}^0({\overline{\mathcal{X}}})$ is a divisible group, we have $\operatorname{NS}({\overline{\mathcal{X}}})/\ell^n=\operatorname{Pic}({\overline{\mathcal{X}}})/\ell^n$ and so taking an inverse limit gives an injection $$\operatorname{NS}({\overline{\mathcal{X}}}){\otimes}{{\mathbb{Z}_\ell}}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1)).$$
Composing with the natural homomorphism $\operatorname{NS}({\mathcal{X}})\to\operatorname{NS}({\overline{\mathcal{X}}})$ gives our cycle class map $$\label{eq:cycle}
\operatorname{NS}({\mathcal{X}})\to\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1)).$$
We declare two divisors to be ($\ell$-)[*homologically equivalent*]{} if their classes in $H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))$ are equal. (We will see below that this notion is independent of $\ell$.) The group of divisors modulo homological equivalence will (temporarily) be denoted $\operatorname{Homol}({\mathcal{X}})$. It will turn out to be a finitely generated free abelian group.
The intersection pairing on $\operatorname{NS}({\mathcal{X}})$ corresponds under the cycle class map to the cup product on cohomology. This means that a divisor that is homologically equivalent to zero is also numerically equivalent to zero. Thus we have a chain of surjections: $$\operatorname{Pic}({\mathcal{X}}){\twoheadrightarrow}\operatorname{NS}({\mathcal{X}}){\twoheadrightarrow}\operatorname{NS}({\mathcal{X}})/{tor}
{\twoheadrightarrow}\operatorname{Homol}({\mathcal{X}}){\twoheadrightarrow}\operatorname{Num}({\mathcal{X}}).$$
Comparison of equivalence relations on divisors
===============================================
A theorem of Matsusaka [@Matsusaka57] asserts that the surjection $$\operatorname{NS}({\mathcal{X}})/{tor}{\twoheadrightarrow}\operatorname{Num}({\mathcal{X}})$$ is in fact an isomorphism. Thus $$\operatorname{NS}({\mathcal{X}})/{tor}\cong\operatorname{Homol}({\mathcal{X}})\cong\operatorname{Num}({\mathcal{X}})$$ and these groups are finitely generated, free abelian groups. Since $\operatorname{NS}({\mathcal{X}})$ is finitely generated, $\operatorname{NS}({\mathcal{X}})_{tor}$ is finite.
In all of the examples we will consider, $\operatorname{NS}({\mathcal{X}})$ is torsion free. (In fact, for an elliptic surface with a section, the surjection $\operatorname{NS}({\mathcal{X}})\to\operatorname{Num}({\mathcal{X}})$ is always an isomorphism, see [@SchuttShiodaES]\*[Theorem 6.5]{}.) So to understand $\operatorname{Pic}({\mathcal{X}})$ we have only to consider the finitely generated free abelian group $\operatorname{NS}({\mathcal{X}})$ and the group $\operatorname{Pic}^0({\mathcal{X}})$, which is (the set of points of) an abelian variety.
Exercise: In the case of a surface ${\mathcal{X}}$ over the complex numbers, use the cohomology of the exponential sequence $$0\to{\mathbb{Z}}\to{\mathcal{O}}_{\mathcal{X}}\stackrel{\exp}{\longrightarrow}{\mathcal{O}}^\times_{\mathcal{X}}\to0$$ to analyze the structure of $\operatorname{Pic}({\mathcal{X}})$.
Examples {#examples}
========
${\mathbb{P}}^2$
----------------
It is well known (e.g., [@HartshorneAG]\*[II.6.4]{}) that two curves on ${\mathbb{P}}^2$ are linearly equivalent if and only if they have the same degree. It follows that $\operatorname{Pic}({\mathbb{P}}^2)=\operatorname{NS}({\mathbb{P}}^2)\cong{\mathbb{Z}}$.
${\mathbb{P}}^1\times{\mathbb{P}}^1$
------------------------------------
By [@HartshorneAG]\*[II.6.6.1]{}, two curves on ${\mathbb{P}}^1\times{\mathbb{P}}^1$ are linearly equivalent if and only if they have the same bi-degree. It follows that $\operatorname{Pic}({\mathbb{P}}^1\times{\mathbb{P}}^1)=\operatorname{NS}({\mathbb{P}}^1\times{\mathbb{P}}^1)\cong{\mathbb{Z}}^2$.
Abelian varieties
-----------------
If ${\mathcal{X}}$ is an abelian variety (of any dimension $g$), then $\operatorname{Pic}^0({\mathcal{X}})$ is the dual abelian variety and $\operatorname{NS}({\mathcal{X}})$ is a finitely generated free abelian group of rank between 1 and $4g^2$. See [@MumfordAV] for details.
Products of curves {#ss:products}
------------------
Suppose that ${\mathcal{C}}$ and ${\mathcal{D}}$ are smooth projective curves over $k$ with $k$-rational points $x\in{\mathcal{C}}$ and $y\in{\mathcal{D}}$. By definition (see Subsection \[ss:pic-alb\] of Lecture 0), the group of divisorial correspondences between $({\mathcal{C}},x)$ and $({\mathcal{D}},y)$ is a subgroup of $\operatorname{Pic}({\mathcal{C}}\times{\mathcal{D}})$ and it is clear that $$\begin{aligned}
\operatorname{Pic}({\mathcal{C}}\times{\mathcal{D}})&\cong\operatorname{Pic}({\mathcal{C}})\times\operatorname{Pic}({\mathcal{D}})\times
\operatorname{DivCorr}\left(({\mathcal{C}},x),({\mathcal{D}},y)\right)\\
&\cong\operatorname{Pic}^0({\mathcal{C}})\times\operatorname{Pic}^0({\mathcal{D}})\times{\mathbb{Z}}^2\times
\operatorname{DivCorr}\left(({\mathcal{C}},x),({\mathcal{D}},y)\right).\end{aligned}$$ Moreover, as we saw in Lecture 0, $$\operatorname{DivCorr}\left(({\mathcal{C}},x),({\mathcal{D}},y)\right)\cong\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}})$$ is a discrete group. It follows that $$\label{eq:Pic0(CxD)}
\operatorname{Pic}^0({\mathcal{C}}\times{\mathcal{D}})\cong\operatorname{Pic}^0({\mathcal{C}})\times\operatorname{Pic}^0({\mathcal{D}})$$ and $$\label{eq:NS(CxD)}
\operatorname{NS}({\mathcal{C}}\times{\mathcal{D}})\cong{\mathbb{Z}}^2\times\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}}).$$ This last isomorphism will be important for a new approach to elliptic curves of high rank over function fields discussed in Lecture 5.
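For example, if ${\mathcal{C}}={\mathcal{D}}=E$ is an elliptic curve, then $$\operatorname{NS}(E\times E)\cong{\mathbb{Z}}^2\times\operatorname{End}(E),$$ which has rank 3 when $\operatorname{End}(E)={\mathbb{Z}}$, rank 4 when $E$ has complex multiplication by an order in an imaginary quadratic field, and rank 6 when $E$ is supersingular (where $\operatorname{End}(E)$ has rank 4), the maximum allowed by $b_2(E\times E)=6$.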
Blow ups {#ss:blow-ups}
--------
Let ${\mathcal{X}}$ be a smooth projective surface over $k$ and let $\pi:{\mathcal{Y}}\to{\mathcal{X}}$ be the blow up of ${\mathcal{X}}$ at a closed point $x\in{\mathcal{X}}$ so that $E=\pi^{-1}(x)$ is a rational curve on ${\mathcal{Y}}$. Then we have canonical isomorphisms $$\operatorname{Pic}({\mathcal{Y}})\cong\operatorname{Pic}({\mathcal{X}})\oplus{\mathbb{Z}}\quad\text{and}\quad
\operatorname{NS}({\mathcal{Y}})\cong\operatorname{NS}({\mathcal{X}})\oplus{\mathbb{Z}}$$ where in both groups the factor ${\mathbb{Z}}$ is generated by the class of $E$. See [@HartshorneAG]\*[V.3.2]{}.
Fibrations {#ss:fibrations}
----------
Let ${\mathcal{X}}$ be a smooth projective surface over $k$, ${\mathcal{C}}$ a smooth projective curve over $k$, and $\pi:{\mathcal{X}}\to{\mathcal{C}}$ a non-constant morphism. Assume that the induced extension of function fields $k({\mathcal{C}}){\hookrightarrow}k({\mathcal{X}})$ is separable and $k({\mathcal{C}})$ is algebraically closed in $k({\mathcal{X}})$. Then for every closed point $y\in{\mathcal{C}}$, the fiber $\pi^{-1}(y)$ is connected, and it is irreducible for almost all $y$. Write $F$ for the class in $\operatorname{NS}({\mathcal{X}})$ of the fiber over a $k$-rational point $y$ of ${\mathcal{C}}$. (This exists because we assumed that ${\mathcal{X}}$ has a $k$-rational point.) We write ${\langle}F{\rangle}$ for the subgroup of $\operatorname{NS}({\mathcal{X}})$ generated by $F$.
It is clear from the definition of $\operatorname{NS}({\mathcal{X}})$ that if $y'$ is another closed point of ${\mathcal{C}}$, then the class in $\operatorname{NS}({\mathcal{X}})$ of $\pi^{-1}(y')$ is equal to $(\deg y')F$.
Now suppose that $z\in{\mathcal{C}}$ is a closed point such that $\pi^{-1}(z)$ is reducible, say $$\pi^{-1}(z)=\sum_{i=1}^{f_z}n_iZ_i$$ where the $Z_i$ are the irreducible components of $\pi^{-1}(z)$ and the $n_i$ are their multiplicities in the fiber. Then a consideration of intersection multiplicities (see for example [@SilvermanAT]\*[III.8]{}) shows that for any integers $m_i$, $$\sum_im_iZ_i\in{\langle}F{\rangle}\subset\operatorname{NS}({\mathcal{X}})$$ if and only if there is a rational number $\alpha$ such that $m_i=\alpha n_i$ for all $i$. More precisely, the intersection pairing restricted to the part of $\operatorname{NS}({\mathcal{X}})$ generated by the classes of the $Z_i$ is negative semi-definite, with a one-dimensional kernel spanned by integral divisors that are rational multiples of the whole fiber. It follows that the subgroup of $\operatorname{NS}({\mathcal{X}})/{\langle}F{\rangle}$ generated by the classes of the $Z_i$ has rank $f_z-1$. It is free of this rank if the gcd of the multiplicities $n_i$ is 1.
It also follows that if $D$ is a divisor supported on a fiber of $\pi$ and $D'$ is another divisor supported on other fibers, then $D=D'$ in $\operatorname{NS}({\mathcal{X}})/{\langle}F{\rangle}$ if and only if $D=D'=0$ in $\operatorname{NS}({\mathcal{X}})/{\langle}F{\rangle}$.
Define $L^2\operatorname{NS}({\mathcal{X}})$ to be the subgroup of $\operatorname{NS}({\mathcal{X}})$ generated by all components of all fibers of $\pi$ over closed points of ${\mathcal{C}}$. By the above, it is the direct sum of ${\langle}F{\rangle}$ and the subgroups of $\operatorname{NS}({\mathcal{X}})/{\langle}F{\rangle}$ generated by the components of the various fibers. Thus we obtain the following computation of the rank of $L^2\operatorname{NS}({\mathcal{X}})$.
For a closed point $y$ of ${\mathcal{C}}$, let $f_y$ denote the number of irreducible components in the fiber $\pi^{-1}(y)$. Then the rank of $L^2\operatorname{NS}({\mathcal{X}})$ is $$1+\sum_y(f_y-1).$$ If for all $y$ the greatest common divisor of the multiplicities of the components in the fiber of $\pi$ over $y$ is 1, then $L^2\operatorname{NS}({\mathcal{X}})$ is torsion-free.
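The rank formula above is simple enough to mechanize. Here is a minimal Python sketch (the function name `rank_L2NS` is ours, purely illustrative): given the component counts $f_y$ of the fibers of $\pi$, it returns $1+\sum_y(f_y-1)$:

```python
def rank_L2NS(fiber_component_counts):
    """Rank of L^2 NS(X), the subgroup of NS(X) generated by all
    components of all fibers of pi.

    fiber_component_counts: iterable of f_y, the number of irreducible
    components of the fiber over each closed point y.  Irreducible
    fibers have f_y = 1 and contribute 0, so they may be omitted.
    """
    return 1 + sum(f - 1 for f in fiber_component_counts)

# A fibration with two reducible fibers, of 3 and 5 components:
# rank = 1 + (3-1) + (5-1) = 7.
print(rank_L2NS([3, 5]))  # prints 7
```

For instance, a fibration all of whose fibers are irreducible gives rank 1, i.e., $L^2\operatorname{NS}({\mathcal{X}})={\langle}F{\rangle}$.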
Tate’s conjectures $T_1$ and $T_2$
==================================
Tate’s conjecture $T_1$ for ${\mathcal{X}}$ (which we denote $T_1({\mathcal{X}})$) characterizes the image of the cycle class map:
For any prime $\ell\neq p$, the cycle class map induces an isomorphism $$\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}$$
We will see below that $T_1({\mathcal{X}})$ is equivalent to the apparently stronger integral statement that the cycle class induces an isomorphism $$\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))^{G_k}$$
We will also see that $T_1({\mathcal{X}})$ is independent of $\ell$ which is why we have omitted $\ell$ from the notation.
Since $G_k$ is generated topologically by $Fr_q$, we have $$H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}=H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{Fr_q=1}
=H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})^{Fr_q=q}.$$ The injectivity of the cycle class map implies that $$\operatorname{Rank}\operatorname{NS}({\mathcal{X}})\le \dim_{{{\mathbb{Q}_\ell}}}H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})^{Fr_q=q}$$ and $T_1({\mathcal{X}})$ is the statement that these two dimensions are equal.
The second Tate conjecture relates the zeta-function to divisors. Recall that $\zeta({\mathcal{X}},s)$ denotes the zeta function of ${\mathcal{X}}$, defined in Lecture 0, Section \[s:zetas\].
We have $$\operatorname{Rank}\operatorname{NS}({\mathcal{X}}) = -\operatorname{ord}_{s=1}\zeta({\mathcal{X}},s)$$
Note that by the Riemann hypothesis, the poles of $\zeta({\mathcal{X}},s)$ at $s=1$ come from $P_2({\mathcal{X}},q^{-s})$. More precisely, using the cohomological formula (\[eq:P-cohom\]) of Lecture 0 for $P_2$, we have that the order of pole of $\zeta({\mathcal{X}},s)$ at $s=1$ is equal to the multiplicity of $q$ as an eigenvalue of $Fr_q$ on $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$.
Thus we have a string of inequalities $$\label{prop:T-ineqs}
\operatorname{Rank}\operatorname{NS}({\mathcal{X}})\le \dim_{{{\mathbb{Q}_\ell}}}H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})^{Fr_q=q}
\le -\operatorname{ord}_{s=1}\zeta({\mathcal{X}},s).$$ Conjecture $T_1({\mathcal{X}})$ is that the first inequality is an equality and conjecture $T_2({\mathcal{X}})$ is that the leftmost and rightmost integers are equal. It follows trivially that $T_2({\mathcal{X}})$ implies $T_1({\mathcal{X}})$. Tate proved the reverse implication.
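As a toy example where everything can be computed, take ${\mathcal{X}}={\mathbb{P}}^2$, where $$\zeta({\mathbb{P}}^2,s)=\frac{1}{(1-q^{-s})(1-q^{1-s})(1-q^{2-s})}.$$ The factor $(1-q^{1-s})$ contributes a simple pole at $s=1$, so $-\operatorname{ord}_{s=1}\zeta({\mathbb{P}}^2,s)=1=\operatorname{Rank}\operatorname{NS}({\mathbb{P}}^2)$, both inequalities in (\[prop:T-ineqs\]) are equalities, and $T_1$ and $T_2$ hold for ${\mathbb{P}}^2$.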
\[prop:T1->T2\] The conjectures $T_1({\mathcal{X}})$ and $T_2({\mathcal{X}})$ are equivalent. In particular, $T_1({\mathcal{X}})$ is independent of $\ell$.
First note that the intersection pairing on $\operatorname{NS}({\mathcal{X}})$ is non-degenerate, so we get an isomorphism $$\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}\cong\operatorname{Hom}(\operatorname{NS}({\mathcal{X}}),{{\mathbb{Q}_\ell}}).$$ On the other hand, the cup product on $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))$ is also non-degenerate (by Poincaré duality), so we have $$H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))\cong\operatorname{Hom}(H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1)),{{\mathbb{Q}_\ell}}).$$ If we use a superscript $G_k$ to denote invariants and a subscript $G_k$ to denote coinvariants, then we have a natural homomorphism $$H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))_{G_k}$$ which is an isomorphism if and only if the subspace of $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))$ where $\operatorname{Fr}_q$ acts by 1 is equal to the whole of the generalized eigenspace for the eigenvalue 1. As we have seen above, this holds if and only if we have $$\dim_{{{\mathbb{Q}_\ell}}}H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})^{Fr_q=q}= -\operatorname{ord}_{s=1}\zeta({\mathcal{X}},s).$$
Now consider the diagram $$\xymatrix{
\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}\ar[d]^h\ar@{=}[rr]&&\operatorname{Hom}(\operatorname{NS}({\mathcal{X}}),{{\mathbb{Q}_\ell}})\\
H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}\ar[r]^f&H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))_{G_k}\ar@{=}[r]&
\operatorname{Hom}(H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k},{{\mathbb{Q}_\ell}})\ar[u]_{h^*}.}$$ The lower right arrow is an isomorphism by elementary linear algebra. The maps $h$ and $h^*$ are the cycle map and its transpose and they are isomorphisms if and only if $T_1({\mathcal{X}})$ holds. One checks that the diagram commutes ([@Tate66b]\*[p. 24]{} or [@Milne75]\*[Lemma 5.3]{}) and so $T_1({\mathcal{X}})$ implies that $f$ is an isomorphism. Thus $T_1({\mathcal{X}})$ implies $T_2({\mathcal{X}})$.
We remark that the equality of $\dim_{{{\mathbb{Q}_\ell}}}H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})^{Fr_q=q}$ and $-\operatorname{ord}_{s=1}\zeta({\mathcal{X}},s)$ would follow from the semi-simplicity of $Fr_q$ acting on $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})$ (or even from its semisimplicity on the $Fr_q=q$ generalized eigenspace). This is a separate “standard” conjecture (see for example [@Tate94]); it does not seem to imply $T_1({\mathcal{X}})$.
$T_1$ and the Brauer group
==========================
We define the (cohomological) Brauer group $\operatorname{Br}({\mathcal{X}})$ by $$\operatorname{Br}({\mathcal{X}})=H^2({\mathcal{X}},{\mathbb{G}}_m)=H^2({\mathcal{X}},{\mathcal{O}}_{\mathcal{X}}^\times)$$ (with respect to the étale or finer topologies). Because ${\mathcal{X}}$ is a smooth proper surface over a finite field, the cohomological Brauer group is isomorphic to the usual Brauer group (defined in terms of Azumaya algebras) and it is known to be a torsion group. (See [@MilneEC]\*[IV.2]{} and also three fascinating articles by Grothendieck collected in [@Grothendieck68].) Artin and Tate conjectured in [@Tate66b] that $\operatorname{Br}({\mathcal{X}})$ is finite.
Similarly, define $$\operatorname{Br}({\overline{\mathcal{X}}})=H^2({\overline{\mathcal{X}}},{\mathbb{G}}_m)=H^2({\overline{\mathcal{X}}},{\mathcal{O}}_{{\overline{\mathcal{X}}}}^\times).$$ This group is torsion but need not be finite.
Taking the cohomology of the exact sequence $$0\to\mu_{\ell^n}\to{\mathbb{G}}_m\stackrel{\ell^n}{\longrightarrow}{\mathbb{G}}_m\to0$$ as in Section \[s:cycle\], we have an exact sequence $$\label{eq:Kummer-ell^n}
0\to\operatorname{NS}({\overline{\mathcal{X}}})/\ell^n\to
H^2({\overline{\mathcal{X}}},\mu_{\ell^n})\to\operatorname{Br}({\overline{\mathcal{X}}})_{\ell^n}\to0.$$ Taking $G_k$-invariants and then the inverse limit over powers of $\ell$, we obtain an exact sequence $$0\to\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}}\to H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))^{G_k}\to
T_\ell\operatorname{Br}({\mathcal{X}})\to0.$$ Since $\operatorname{Br}({\mathcal{X}})_\ell$ is finite, $T_\ell\operatorname{Br}({\mathcal{X}})$ is zero if and only if the $\ell$-primary part of $\operatorname{Br}({\mathcal{X}})$ is finite. It follows that the $\ell$ part of the Brauer group is finite if and only if $T_1({\mathcal{X}})$ for $\ell$ holds if and only if the integral version of $T_1({\mathcal{X}})$ for $\ell$ holds. In particular, since $T_1({\mathcal{X}})$ is independent of $\ell$, if $\operatorname{Br}({\mathcal{X}})[\ell^\infty]$ is finite for one $\ell$, then $\operatorname{Br}({\mathcal{X}})[\ell^\infty]$ is finite for all $\ell\neq p$. It is even true, although more difficult to prove, that $T_1({\mathcal{X}})$ is equivalent to the finiteness of $\operatorname{Br}({\mathcal{X}})$.
\[thm:T1-Br\] $T_1({\mathcal{X}})$ holds if and only if $\operatorname{Br}({\mathcal{X}})$ is finite, if and only if there is an $\ell$ ($\ell=p$ allowed) such that the $\ell$-primary part of $\operatorname{Br}({\mathcal{X}})$ is finite.
We sketch the proof of the prime-to-$p$ part of this assertion following [@Tate66b] and refer to [@Milne75] for the full proof. We already noted that the $\ell$-primary part of $\operatorname{Br}({\mathcal{X}})$ is finite for one $\ell\neq p$ if and only if $T_1({\mathcal{X}})$ holds. To see that almost all $\ell$-primary parts vanish, we consider the following diagram, which is an integral version of the diagram in the proof of Proposition \[prop:T1->T2\]: $$\xymatrix@C-15pt{
\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}}\ar[d]_h\ar[r]^>>>>{e}&
\operatorname{Hom}(\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}},{{\mathbb{Z}_\ell}})\ar@{=}[r]&
\operatorname{Hom}(\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}},{{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}})\\
H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))^{G_k}\ar[r]^f&H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))_{G_k}\ar@{=}[r]&
\operatorname{Hom}(H^2({\overline{\mathcal{X}}},({{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}})(1))^{G_k},{{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}})\ar[u]_{g^*}}$$
Here $e$ is induced by the intersection form, $h$ is the cycle class map, $f$ is induced by the identity map of $H^2({\overline{\mathcal{X}}},{{\mathbb{Z}_\ell}}(1))$ and $g^*$ is the transpose of a map $$g:\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}}\to H^2({\overline{\mathcal{X}}},({{\mathbb{Q}_\ell}}/{{\mathbb{Z}_\ell}})(1))$$ obtained by taking the direct limit over powers of $\ell$ of the first map in equation (\[eq:Kummer-ell\^n\]).
We say that a homomorphism $\phi:A\to B$ of ${{\mathbb{Z}_\ell}}$-modules is a [ *quasi-isomorphism*]{} if it has a finite kernel and cokernel. In this case, we define $$z(\phi)=\frac{\#\ker(\phi)}{\#\operatorname{coker}(\phi)}.$$ It is easy to check that if $\phi_3=\phi_2\phi_1$ (composition) and if two of the maps $\phi_1$, $\phi_2$, $\phi_3$ are quasi-isomorphisms, then so is the third and we have $z(\phi_3)=z(\phi_2)z(\phi_1)$.
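For example, multiplication by $\ell^n$ on ${{\mathbb{Z}_\ell}}$ is a quasi-isomorphism with trivial kernel and cokernel ${{\mathbb{Z}_\ell}}/\ell^n{{\mathbb{Z}_\ell}}$ of order $\ell^n$, so $$z\left(\ell^n:{{\mathbb{Z}_\ell}}\to{{\mathbb{Z}_\ell}}\right)=\ell^{-n},$$ and the multiplicativity is visible directly: multiplication by $\ell^a$ followed by multiplication by $\ell^b$ is multiplication by $\ell^{a+b}$, with $z=\ell^{-(a+b)}=\ell^{-a}\,\ell^{-b}$.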
In the diagram above, if we assume $T_1({\mathcal{X}})$, then $h$ is an isomorphism. The map $e$ is induced from the intersection pairing and is a quasi-isomorphism; $z(e)$ is (the $\ell$ part of) the order of the torsion subgroup of $\operatorname{NS}({\mathcal{X}})$ divided by (the $\ell$ part of) the discriminant of the intersection form. We saw above that under the assumption of $T_1({\mathcal{X}})$, the map $f$ is a quasi-isomorphism and it turns out that $z(f)$ is essentially (the $\ell$ part of) the leading term of the zeta function $\zeta({\mathcal{X}},s)$ at $s=1$. In particular, under $T_1({\mathcal{X}})$, $e$, $f$, and $h$ are isomorphisms for almost all $\ell$. The same must therefore be true of $g^*$. By taking $G_k$-invariants and a direct limit over powers of $\ell$ in equation (\[eq:Kummer-ell\^n\]), one finds that $z(g^*)$ is equal to the order of $\operatorname{Br}({\mathcal{X}})[\ell^\infty]$ and so this group is trivial for almost all $\ell$. This completes our sketch of the proof of the theorem.
The sketch above has all the main ideas needed to prove that the prime-to-$p$ part of the Artin-Tate conjecture on the leading coefficient of the zeta function at $s=1$ follows from the Tate conjecture $T_1({\mathcal{X}})$. The $p$-part is formally similar although more delicate. To handle it, Milne replaces the group in the lower right of the diagram with the larger group $\operatorname{Hom}(H^2({\mathcal{X}},({{\mathbb{Q}_p}}/{{\mathbb{Z}_p}})(1)),{{\mathbb{Q}_p}}/{{\mathbb{Z}_p}})$. The $z$ invariants of the maps to and from this group turn out to have more $p$-adic content that is related to the term $q^{\alpha({\mathcal{X}})}$ in the Artin-Tate leading coefficient conjecture. We refer to [@Milne75] for the full details and to [@UlmerCRM] for a discussion of several related points, including the case $p=2$ (excluded in Milne’s article, but now provable due to improved $p$-adic cohomology) and higher dimensional abelian varieties.
The descent property of $T_1$
=============================
If $\tilde{\mathcal{X}}\to{\mathcal{X}}$ is the blow up of ${\mathcal{X}}$ at a closed point, then $T_1(\tilde{\mathcal{X}})$ is equivalent to $T_1({\mathcal{X}})$. Indeed, under blowing up both the rank of $\operatorname{NS}(\cdot)$ and the dimension of $H^2(\cdot,{{\mathbb{Q}_\ell}}(1))^{G_k}$ increase by one. (See Example \[ss:blow-ups\] above.) In fact:
\[prop:T-descent\] $T_1({\mathcal{X}})$ is invariant under birational isomorphism. More generally, if ${\mathcal{X}}\to{\mathcal{Y}}$ is a dominant rational map, then $T_1({\mathcal{X}})$ implies $T_1({\mathcal{Y}})$.
We give a simple proof in the case where ${\mathcal{X}}$ and ${\mathcal{Y}}$ are surfaces. See [@Tate94] for the general case.
First, we may assume ${\mathcal{X}}{{\dashrightarrow}}{\mathcal{Y}}$ is a morphism. Indeed, let $\tilde{\mathcal{X}}\to{\mathcal{X}}$ be a blow up resolving the indeterminacy of ${\mathcal{X}}{{\dashrightarrow}}{\mathcal{Y}}$, i.e., so that the composition $\tilde{\mathcal{X}}\to{\mathcal{X}}{{\dashrightarrow}}{\mathcal{Y}}$ is a morphism. As we have seen above, $T_1({\mathcal{X}})$ implies $T_1(\tilde{\mathcal{X}})$, so we may replace ${\mathcal{X}}$ with $\tilde{\mathcal{X}}$ and show that $T_1({\mathcal{Y}})$ holds.
So now suppose that $\pi:{\mathcal{X}}\to{\mathcal{Y}}$ is a dominant morphism. Since the dimensions of ${\mathcal{X}}$ and ${\mathcal{Y}}$ are equal, $\pi$ must be generically finite, say of degree $d$. But then the push forward and pull-back maps on cycles present $\operatorname{NS}({\mathcal{Y}}){\otimes}{{\mathbb{Q}_\ell}}$ as a direct factor of $NS({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}$; they also present $H^2({\overline{\mathcal{Y}}},{{\mathbb{Q}_\ell}}(1))$ as a direct factor of $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))$. The cycle class maps and Galois actions are compatible with these decompositions and since by assumption $NS({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}{\tilde{\to}}H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}$, we must also have $NS({\mathcal{Y}}){\otimes}{{\mathbb{Q}_\ell}}{\tilde{\to}}H^2({\overline{\mathcal{Y}}},{{\mathbb{Q}_\ell}}(1))^{G_k}$, i.e., $T_1({\mathcal{Y}})$.
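To spell out the direct factor claim: since $\pi$ is generically finite of degree $d$, the projection formula gives $\pi_*\pi^*=d$ both on $\operatorname{NS}({\mathcal{Y}}){\otimes}{{\mathbb{Q}_\ell}}$ and on $H^2({\overline{\mathcal{Y}}},{{\mathbb{Q}_\ell}}(1))$, so $\frac{1}{d}\pi^*$ splits $\pi_*$ and $$\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Q}_\ell}}\cong\pi^*\left(\operatorname{NS}({\mathcal{Y}}){\otimes}{{\mathbb{Q}_\ell}}\right)\oplus\ker(\pi_*),$$ and likewise in cohomology. Note that this uses ${{\mathbb{Q}_\ell}}$-coefficients; the argument breaks down integrally when $\ell$ divides $d$.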
Note that the dominant rational map ${\mathcal{X}}{{\dashrightarrow}}{\mathcal{Y}}$ could be a ground field extension, or even a purely inseparable morphism.
Tate’s theorem on products
==========================
In this section we sketch how $T_1$ for products of curves follows from Tate’s theorem on endomorphisms of abelian varieties over finite fields.
\[thm:products\] Let ${\mathcal{C}}$ and ${\mathcal{D}}$ be curves over $k$ and set ${\mathcal{X}}={\mathcal{C}}\times_k{\mathcal{D}}$. Then $T_1({\mathcal{X}})$ holds.
Extending $k$ if necessary, we may assume that ${\mathcal{C}}$ and ${\mathcal{D}}$ both have rational points. Fix rational base points $x$ and $y$ (which we will mostly omit from the notation below). Recall from Subsection \[ss:products\] that $$\operatorname{NS}({\mathcal{C}}\times{\mathcal{D}})\cong{\mathbb{Z}}^2\times\operatorname{DivCorr}({\mathcal{C}},{\mathcal{D}})
\cong{\mathbb{Z}}^2\times\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}}).$$
By the Künneth formula, $$\begin{aligned}
H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}})&\cong
\left(H^2({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^0({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})\right)\oplus
\left(H^0({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^2({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})\right)\\
&\qquad\oplus \left(H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}})
{\otimes}H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})\right)\\
&\cong{{\mathbb{Q}_\ell}}(-1)\oplus{{\mathbb{Q}_\ell}}(-1)\oplus
\left(H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})\right)\end{aligned}$$ Twisting by ${{\mathbb{Q}_\ell}}(1)$ and taking invariants, we have $$H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}=
{{\mathbb{Q}_\ell}}^2\oplus\left(H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})(1)\right)^{G_k}.$$ Under the cycle class map, the factor ${\mathbb{Z}}^2$ of $\operatorname{NS}({\mathcal{X}})$ (corresponding to ${\mathcal{C}}\times\{y\}$ and $\{x\}\times{\mathcal{D}}$) spans the factor ${{\mathbb{Q}_\ell}}^2$ of $H^2({\overline{\mathcal{X}}},{{\mathbb{Q}_\ell}}(1))^{G_k}$ (corresponding to $H^2{\otimes}H^0$ and $H^0{\otimes}H^2$ in the Künneth decomposition). Thus what we have to show is that the cycle class map induces an isomorphism $$\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}}){\otimes}{{\mathbb{Q}_\ell}}{\tilde{\to}}\left(H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})(1)\right)^{G_k}.$$
But $H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})(1)\cong H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})^*\cong V_\ell(J_{\mathcal{D}})$ and $H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}})\cong V_\ell(J_{\mathcal{C}})^*$ ($*={{\mathbb{Q}_\ell}}$-linear dual). Thus $$\left(H^1({\overline{\mathcal{C}}},{{\mathbb{Q}_\ell}}){\otimes}H^1({\overline{\mathcal{D}}},{{\mathbb{Q}_\ell}})(1)\right)^{G_k}\cong
\left( V_\ell(J_{\mathcal{C}})^*{\otimes}V_\ell(J_{\mathcal{D}})\right)^{G_k}
\cong\operatorname{Hom}_{G_k}(V_\ell(J_{\mathcal{C}}),V_\ell(J_{\mathcal{D}})).$$
Thus the needed isomorphism is $$\operatorname{Hom}(J_{\mathcal{C}},J_{\mathcal{D}}){\otimes}{{\mathbb{Q}_\ell}}{\tilde{\to}}\operatorname{Hom}_{G_k}(V_\ell(J_{\mathcal{C}}),V_\ell(J_{\mathcal{D}}))$$ and this is exactly the statement of Tate’s theorem (Lecture 0, Theorem \[thm:TateIsogThm\]). This completes the proof of the theorem.
1. A variation of the argument above, using Picard and Albanese varieties, shows that $T_1$ for a product ${\mathcal{X}}\times{\mathcal{Y}}$ of varieties of any dimension follows from $T_1$ for the factors.
2. It is worth noting that Tate’s conjecture $T_1$ (and the proof of it for products of curves) only characterizes the image in $\ell$-adic cohomology of $\operatorname{NS}({\mathcal{X}}){\otimes}{{\mathbb{Z}_\ell}}$, not the image of $\operatorname{NS}({\mathcal{X}})$ itself. This should be contrasted with the Lefschetz $(1,1)$ theorem, which characterizes the image of $\operatorname{NS}({\mathcal{X}})$ in de Rham cohomology when the ground field is ${\mathbb{C}}$.
Products of curves and DPC
==========================
Assembling the various parts of this lecture gives the main result:
\[prop:T-DPC\] Let ${\mathcal{X}}$ be a smooth, projective surface over $k$. If there is a dominant rational map $${\mathcal{C}}\times_k{\mathcal{D}}{{\dashrightarrow}}{\mathcal{X}}$$ from a product of curves to ${\mathcal{X}}$, then the Tate conjectures $T_1({\mathcal{X}})$ and $T_2({\mathcal{X}})$ hold.
Indeed, by Theorem \[thm:products\], we have $T_1({\mathcal{C}}\times{\mathcal{D}})$ and then by Proposition \[prop:T-descent\] we deduce $T_1({\mathcal{X}})$. By Proposition \[prop:T1->T2\], $T_2({\mathcal{X}})$ follows as well.
We say that “${\mathcal{X}}$ is dominated by a product of curves (DPC).” The question of which varieties are dominated by products of curves has been studied by Schoen [@Schoen96]. In particular, over any field there are surfaces that are not dominated by products of curves. Nevertheless, as we will see below, the collection of DPC surfaces is sufficiently rich to give some striking results on the Birch and Swinnerton-Dyer conjecture.
\[l:BSD-T\] We keep our standard notations throughout this lecture: $p$ is a prime, $k={{\mathbb{F}_q}}$ is the finite field of characteristic $p$ with $q$ elements, ${\mathcal{C}}$ is a smooth, projective, absolutely irreducible curve over $k$, $K=k({\mathcal{C}})$ is the function field of ${\mathcal{C}}$, and $E$ is an elliptic curve over $K$.
Curves and surfaces {#s:curves-surfaces}
===================
In this section we will construct an elliptic surface ${\mathcal{E}}\to{\mathcal{C}}$ canonically associated to an elliptic curve $E/K$. More precisely, we give a constructive proof of the following result:
\[prop:model\] Given an elliptic curve $E/K$, there exists a surface ${\mathcal{E}}$ over $k$ and a morphism $\pi:{\mathcal{E}}\to{\mathcal{C}}$ with the following properties: ${\mathcal{E}}$ is smooth, absolutely irreducible, and projective over $k$, $\pi$ is surjective and relatively minimal, and the generic fiber of $\pi$ is isomorphic to $E$. The surface ${\mathcal{E}}$ and the morphism $\pi$ are uniquely determined up to isomorphism by these requirements.
Here “the generic fiber of $\pi$” means ${\mathcal{E}}_K$, the fiber product: $$\xymatrix{{\mathcal{E}}_K:=\eta\times_{{\mathcal{C}}}{\mathcal{E}}\ar[r]\ar[d]&{\mathcal{E}}\ar[d]^{\pi}\\
\eta=\operatorname{Spec}K\ar[r]&{\mathcal{C}}}$$ “Relatively minimal” means that if ${\mathcal{E}}'$ is another smooth, absolutely irreducible, projective surface over $k$ with a surjective morphism $\pi':{\mathcal{E}}'\to{\mathcal{C}}$, then any birational morphism ${\mathcal{E}}\to{\mathcal{E}}'$ commuting with $\pi$ and $\pi'$ is an isomorphism. Relative minimality is equivalent to the condition that there are no rational curves of self-intersection $-1$ in the fibers of $\pi$ (i.e., to the non-existence of curves in fibers that can be blown down).
The requirements on ${\mathcal{E}}$ and $\pi$ imply that $\pi$ is flat and projective and that all geometric fibers of $\pi$ are connected. These properties of $\pi$ will be evident from the explicit construction below. It follows that $\pi_*{\mathcal{O}}_{\mathcal{E}}\cong{\mathcal{O}}_{\mathcal{C}}$ and more generally that $\pi$ is “cohomologically flat in dimension zero,” meaning that for every morphism $T\to C$ the base change $$\pi_T:{\mathcal{E}}_T={\mathcal{E}}\times_{\mathcal{C}}T\to T$$ satisfies $\pi_{T*}{\mathcal{O}}_{{\mathcal{E}}_T}={\mathcal{O}}_T$.
Uniqueness in Proposition \[prop:model\] follows from general results on minimal models, in particular [@Lichtenbaum68]\*[Thm. 4.4]{}. See [@Chinburg86] and [@LiuAGAC]\*[9.3]{} for other expositions.
We first give a detailed construction of a (possibly singular) “Weierstrass surface” ${{\mathcal{W}}}\to{\mathcal{C}}$ and then resolve singularities to obtain ${\mathcal{E}}\to{\mathcal{C}}$.
More precisely, the proposition follows from the following two results.
\[prop:W\] Given an elliptic curve $E/K$, there exists a surface ${{\mathcal{W}}}$ over $k$ and a morphism $\pi_0:{{\mathcal{W}}}\to{\mathcal{C}}$ with the following properties: ${{\mathcal{W}}}$ is normal, absolutely irreducible, and projective over $k$, $\pi_0$ is surjective, each of its fibers is isomorphic to an irreducible plane cubic, and its generic fiber is isomorphic to $E$.
This proposition is elementary, but does not seem to be explained in detail in the literature, so we give a proof below.
\[prop:Tate-algo\] There is an explicit sequence of blow ups along closed points and curves in ${{\mathcal{W}}}$ yielding a proper birational morphism $\sigma:{\mathcal{E}}\to{{\mathcal{W}}}$ where the surface ${\mathcal{E}}$ and the composed morphism $\pi=\pi_0{\circ}\sigma:{\mathcal{E}}\to{{\mathcal{W}}}\to{\mathcal{C}}$ have the properties mentioned in Proposition \[prop:model\].
Choose a Weierstrass equation for $E$: $$\label{eq:model}
y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$ where the $a_i$ are in $K=k({\mathcal{C}})$. Recall that we have defined the notion of a minimal integral model at a place $v$ of $K$: the $a_i$ should be integral at $v$ and the valuation at $v$ of $\Delta$ should be minimal subject to the integrality of the $a_i$. Clearly, there is a non-empty Zariski open subset $U\subset{\mathcal{C}}$ such that for every closed point $v\in U$, the model (\[eq:model\]) is a minimal integral model.
Let ${{\mathcal{W}}}_1$ be the closed subset of ${\mathbb{P}}^2_U:={\mathbb{P}}^2_k\times_k U$ defined by the vanishing of $$\label{eq:EE}
Y^2Z+a_1XYZ+a_3YZ^2-\left(X^3+a_2X^2Z+a_4XZ^2+a_6Z^3\right)$$ where $X,Y,Z$ are the standard homogeneous coordinates on ${\mathbb{P}}^2_k$. Then ${{\mathcal{W}}}_1$ is geometrically irreducible and there is an obvious projection $\pi_1:{{\mathcal{W}}}_1\to U$ (the restriction to ${{\mathcal{W}}}_1$ of the projection ${\mathbb{P}}^2_U\to U$). The fiber of $\pi_1$ over a closed point $v$ of $U$ is the plane cubic $$Y^2Z+a_1(v)XYZ+a_3(v)YZ^2=X^3+a_2(v)X^2Z+a_4(v)XZ^2+a_6(v)Z^3$$ over the residue field $\kappa_v$ at $v$. The generic point $\eta$ of ${\mathcal{C}}$ lies in $U$ and the fiber of $\pi_1$ at $\eta$ is $E/K$.
There are finitely many points in ${\mathcal{C}}\setminus U$ and we must extend the model ${{\mathcal{W}}}_1\to U$ over each of these points. Choose one of them, call it $w$, and choose a model of $E$ that is integral and minimal at $w$. In other words, choose a model of $E$ $$\label{eq:model'}
y^{\prime2}+a'_1x'y'+a'_3y'=x^{\prime3}+a'_2x'^2+a'_4x'+a'_6$$ where the $a'_i\in K$ are integral at $w$ and the valuation at $w$ of the discriminant $\Delta$ is minimal. The new coordinates are related to the old by a transformation $$\label{eq:change-of-coords}
(x,y)=(u^2x'+r,u^3y'+su^2x'+t)$$ with $u\in K^\times$ and $r,s,t\in K$. Let $U'$ be a Zariski open subset of ${\mathcal{C}}$ containing $w$ on which all of the $a_i'$ are integral and the model (\[eq:model’\]) is minimal. Let ${{\mathcal{W}}}'$ be the geometrically irreducible closed subset of ${\mathbb{P}}^2_{U'}$ defined by the vanishing of $$Y^{\prime2}Z'+a'_1X'Y'Z'+a'_3Y'Z^{\prime2}-
\left(X^{\prime3}+a'_2X^{\prime2}Z'+a'_4X'Z^{\prime2}+a'_6Z^{\prime3}\right)$$ with its obvious projection $\pi':{{\mathcal{W}}}'\to U'$. On the open set $V=U\cap U'$, $u$ is a unit and the change of coordinates (\[eq:change-of-coords\]), or rather its projective version $$(X,Y,Z)=(u^2X'+rZ',u^3Y'+su^2X'+tZ',Z')$$ defines an isomorphism between $\pi_1^{-1}(V)$ and $\pi^{\prime-1}(V)$ compatible with the projections. Glueing ${{\mathcal{W}}}_1$ and ${{\mathcal{W}}}'$ along this isomorphism yields a new surface ${{\mathcal{W}}}_2$ equipped with a projection $\pi_2:{{\mathcal{W}}}_2\to U_2$ where $U_2=U\cup U'$. Note that $U_2$ is strictly larger than $U$. Moreover $\pi_2$ is surjective, its geometric fibers are irreducible projective plane cubics, and its generic fiber is $E$.
We now iterate this construction finitely many times to extend the original model over all of ${\mathcal{C}}$. We arrive at a surface ${{\mathcal{W}}}$ equipped with a proper, surjective morphism $\pi_0:{{\mathcal{W}}}\to{\mathcal{C}}$ whose geometric fibers are irreducible plane cubics and whose generic fiber is $E$. Since ${\mathcal{C}}$ is projective over $k$, so is ${{\mathcal{W}}}$. Since ${{\mathcal{W}}}$ is obtained by glueing reduced, geometrically irreducible surfaces along open subsets, it is also reduced and geometrically irreducible. Since it has only isolated singular points, by Serre’s criterion it is normal.
This completes the proof of Proposition \[prop:W\].
Note that the closure in ${{\mathcal{W}}}$ of the identity element of $E$ is a divisor on ${{\mathcal{W}}}$ which maps isomorphically to the base curve ${\mathcal{C}}$. We write $s_0:{\mathcal{C}}\to{{\mathcal{W}}}$ for the inverse morphism. This is the [ *zero section*]{} of $\pi_0$. In terms of the coordinates on ${{\mathcal{W}}}_1$ used in the proof above, it is just the map $t\mapsto([0,1,0],t)$.
The algorithm mentioned in the Proposition is the subject of Tate’s famous paper [@Tate75]. His article does not mention blowing up, but the steps of the algorithm nevertheless give the recipe for the blow ups needed. The actual process of blowing up is explained in detail in [@SilvermanAT]\*[IV.9]{} so we will not give the details here. Rather, we explain why there is a simple algorithm, following [@Conrad05].
First note that the surface ${{\mathcal{W}}}$ is reduced and irreducible and so has no embedded components. Also, it has isolated singularities. (They are contained in the set of singular points of fibers of $\pi_0$.) By Serre’s criterion, ${{\mathcal{W}}}$ is thus normal. Moreover, and this is the key point, its singularities are [*rational double points*]{}. (See [@Artin86] for the definition and basic properties of rational singularities and [@BadescuAS]\*[Chapters 3 and 4]{} for many more details. See [@Conrad05]\*[Section 8]{} for the fact that the singularities of a minimal Weierstrass model are rational.) This implies that the blow up of ${{\mathcal{W}}}$ at one of its singular points is again normal (so has isolated singularities) and again has at worst rational double points. An algorithm to desingularize is then simply to blow up at a singular point and iterate until the resulting surface is smooth. Given the explicit nature of the equations defining ${{\mathcal{W}}}$, finding the singular points and carrying out the blow ups is straightforward.
In fact, Tate’s algorithm also calls for blowing up along certain curves. (This happens at steps 6 and 7.) This has the effect of dealing with several singular points at the same time, so is more efficient, but it is not essential to the success of the algorithm.
This completes our discussion of Proposition \[prop:Tate-algo\]. See below for a detailed example covering a case not treated explicitly in [@SilvermanAT].
Conrad’s article [@Conrad05] also gives a coordinate-free treatment of integral minimal models of elliptic curves.
It is worth remarking that Tate’s algorithm and the possible structures of the bad fibers are essentially the same in characteristic $p$ as in mixed characteristic. On the other hand, for non-perfect residue fields $k$ of characteristic $p\le3$, there are more possibilities for the bad fibers, in both equal and mixed characteristics—see [@Szydlo04].
The zero section of ${{\mathcal{W}}}$ lifts uniquely to a section which we again denote by $s_0:{\mathcal{C}}\to{\mathcal{E}}$.
The bundle $\omega$ and the height of ${\mathcal{E}}$
=====================================================
We construct an invertible sheaf on ${\mathcal{C}}$ as follows, using the notation of the proof of Proposition \[prop:model\]. Take the trivial invertible sheaf ${\mathcal{O}}_U$ on $U$ with its generating section $1_U$. At each stage of the construction, extend this sheaf by glueing ${\mathcal{O}}_U$ and ${\mathcal{O}}_{U'}$ over $U\cap U'$ by identifying $1_U$ and $u^{-1}1_{U'}$ where $u$ is the function appearing in the change of coordinates (\[eq:change-of-coords\]).
The resulting invertible sheaf $\omega$ has several other descriptions. For example, the sheaf of relative differentials $\Omega^1_{{\mathcal{E}}/{\mathcal{C}}}$ is invertible on the locus of ${\mathcal{E}}$ where $\pi:{\mathcal{E}}\to{\mathcal{C}}$ is smooth (in particular in a neighborhood of the zero section) and, more or less directly from the definition, $\omega$ can be identified with $s_0^*(\Omega^1_{{\mathcal{E}}/{\mathcal{C}}})$. Using relative duality theory, $\omega$ can also be identified with the inverse of $R^1\pi_*{\mathcal{O}}_{\mathcal{E}}$. Finally, since ${{\mathcal{W}}}$ has only rational singularities, $\omega$ is also isomorphic to the inverse of $R^1\pi_{0*}{\mathcal{O}}_{{\mathcal{W}}}$.
One may identify the coefficients $a_i$ of the Weierstrass equation locally defining ${{\mathcal{W}}}$ with sections of $\omega^i$. Using this point of view, ${{\mathcal{W}}}$ can be identified with a closed subvariety of a certain ${\mathbb{P}}^2$-bundle over ${\mathcal{C}}$. Namely, let $V$ be the locally free ${\mathcal{O}}_{\mathcal{C}}$ module of rank three $$\label{eq:bundle}
V=\omega^2\oplus\omega^3\oplus{\mathcal{O}}_{\mathcal{C}}$$ (where the exponents denote tensor powers). If ${\mathbb{P}}V$ denotes the projectivization of $V$ over ${\mathcal{C}}$, a ${\mathbb{P}}^2$ bundle over ${\mathcal{C}}$, then ${{\mathcal{W}}}$ is naturally the closed subset of ${\mathbb{P}}V$ defined locally by the vanishing of Weierstrass equations as in (\[eq:EE\]).
Verify the identifications and assertions in this section. In the case where ${\mathcal{C}}={\mathbb{P}}^1$, so $K=k(t)$, check that $\omega={\mathcal{O}}_{{\mathbb{P}}^1}(h)$ where $h$ is the smallest positive integer such that $E$ has a model (\[eq:model\]) where the $a_i$ are in $k[t]$ and $\deg a_i\le hi$.
Check that $c_4$, $c_6$, and $\Delta$ define [*canonical*]{} sections of $\omega^4$, $\omega^6$, and $\omega^{12}$ respectively, independent of the choice of equation for $E$. If $p=2$ or $3$, check that $b_2$ defines a canonical section of $\omega^2$ and that $c_4=b_2^2$ and $c_6=-b_2^3$. If $p=2$, check that $a_1$ defines a canonical section of $\omega$ and that $b_2=a_1^2$. Note that since positive powers of $\omega$ have non-zero sections, the degree of $\omega$ is non-negative.
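These degrees are consistent with the classical identity $c_4^3-c_6^2=1728\Delta$, valid for any Weierstrass equation. The following sketch checks the identity numerically from the standard formulas for the $b$- and $c$-invariants (exact integer arithmetic, so each instance is verified on the nose):

```python
# Standard b- and c-invariants of a Weierstrass equation
# y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6,
# and a check of the identity c4^3 - c6^2 = 1728*Delta.

def weierstrass_invariants(a1, a2, a3, a4, a6):
    b2 = a1**2 + 4*a2
    b4 = 2*a4 + a1*a3
    b6 = a3**2 + 4*a6
    b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
    c4 = b2**2 - 24*b4
    c6 = -b2**3 + 36*b2*b4 - 216*b6
    delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
    return c4, c6, delta

# exact integer arithmetic: the identity holds exactly in each case
for coeffs in [(0, 1, 0, 1, 0), (1, -2, 3, -4, 5), (2, 0, 1, 7, -3)]:
    c4, c6, delta = weierstrass_invariants(*coeffs)
    assert c4**3 - c6**2 == 1728*delta
```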
The [*height*]{} of ${\mathcal{E}}$, denoted $h$, is defined by $h=\deg(\omega)$, the degree of $\omega$ as an invertible sheaf on ${\mathcal{C}}$.
Note that if $E/K$ is constant (in the sense of Lecture 1) then the height of the corresponding ${\mathcal{E}}$ is 0.
Examples {#examples-1}
========
The case when ${\mathcal{C}}={\mathbb{P}}^1$ is particularly simple. First of all, one may choose a model (\[eq:model\]) that is integral and minimal simultaneously at every finite $v$, i.e., for every $v\in{\mathbb{A}}^1_k$. Indeed, start with any model and change coordinates so that the $a_i$ are in $k[t]$. If $w$ is a finite place where this model is not minimal, it is possible (because $k[t]$ is a PID) to choose a change of coordinates $$(x,y)=(u^2x'+r,u^3y'+su^2x'+t)$$ where $r,s,t,u\in k[t][1/w]$ and $u$ is a unit, yielding a model that is minimal at $w$. Such a change of coordinates does not change the minimality at any other finite place. Thus after finitely many steps, we have a model integral and minimal at all finite places. (This argument would apply for any $K$ and any Dedekind domain $R\subset K$ which is a PID, yielding a model with the $a_i\in R$ that is minimal at all $v\in\operatorname{Spec}R$.)
Focusing attention at $t=\infty$, there is a change of coordinates (\[eq:change-of-coords\]) with $u=t^{-h}$ yielding a model integral and minimal at $\infty$. (Here $h$ is minimal so that $\deg(a_i)\le hi$.) So the bundle $\omega={\mathcal{O}}(h)={\mathcal{O}}(h\infty)$.
As a very concrete example, consider the curve $$y^2=x(x+1)(x+t^d)$$ over ${{\mathbb{F}_p}}(t)$ where $p>2$ and $d$ is not divisible by $p$. Since $\Delta=16t^{2d}(t^d-1)^2$, this model is integral and minimal at all non-zero finite places. It is also minimal at zero as one may see by noting that $c_4$ and $c_6$ are units at 0. At infinity, the change of coordinates $$(x,y)=(t^{2h}x',t^{3h}y')$$ with $h=\lceil d/2\rceil$ yields a minimal integral model. Thus $\omega={\mathcal{O}}(h)$.
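The claimed discriminant is easy to check. With $a_1=a_3=a_6=0$, the general formula collapses to $\Delta=16a_4^2(a_2^2-4a_4)$, and since both sides below are polynomials in $t$ of degree $4d$, verifying the identity at more than $4d$ integer points proves it. A sketch for the sample value $d=3$:

```python
# Discriminant of y^2 = x^3 + a2*x^2 + a4*x  (so a1 = a3 = a6 = 0)
def delta(a2, a4):
    b2, b4, b8 = 4*a2, 2*a4, -a4**2
    return -b2**2*b8 - 8*b4**3   # = 16*a4**2*(a2**2 - 4*a4)

# y^2 = x(x+1)(x+t^d) has a2 = 1 + t^d and a4 = t^d;
# both sides have degree 4d in t, so 21 sample points suffice
d = 3
for t in range(-10, 11):
    assert delta(1 + t**d, t**d) == 16*t**(2*d)*(t**d - 1)**2
```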
Working with Tate’s algorithm shows that $E$ has $I_2$ reduction at the $d$-th roots of unity, $I_{2d}$ reduction at $t=0$, and either $I_{2d}^*$ or $I_{2d}$ reduction at infinity depending on whether $d$ is odd or even.
Since the case of $I_n$ reduction is not treated explicitly in [@SilvermanAT], we give more details on the blow ups needed to resolve the singularity over $t=0$. In terms of the coordinates on ${{\mathcal{W}}}_1$ used in the proof of Proposition \[prop:Tate-algo\] we can consider the affine surface defined by $$x^3+(t^d+1)x^2+t^dx-y^2=0$$ which is an open neighborhood of the singularity at $x=y=t=0$. If $d=1$, then the tangent cone is the irreducible plane conic defined by $x^2+tx-y^2=0$. The singular point thus blows up into a smooth rational curve and it is easy to check that the resulting surface is smooth in a neighborhood of the fiber $t=0$. Now assume that $d>1$. Then the tangent cone is the reducible conic $x^2-y^2=0$ and so the singular point blows up into two rational curves meeting at one point. More precisely, the blow up is covered by three affine patches. In one of them, the surface upstairs is $$tx_1^3+(t^d+1)x_1^2+t^{d-1}x_1-y_1^2=0$$ and the morphism is $x=tx_1$, $y=ty_1$. The exceptional divisor is the reducible curve $t=x_1^2-y_1^2=0$ and the point of intersection of the components $t=x_1=y_1=0$ is again a double point. Considering the other charts shows that there are no other singular points in a neighborhood of $t=0$ and that the exceptional divisor meets the original fiber over $t=0$ in two points. We now iterate this process $d-1$ times, introducing two new components at each stage. After $d-1$ blow ups, the interesting part of our surface is given by $$t^{d-1}x_{d-1}^3+(t^d+1)x_{d-1}^2+tx_{d-1}-y_{d-1}^2=0.$$ At this last stage, blowing up introduces one more component meeting the two components introduced in the preceding step at one point each. The (interesting part of the) surface is now $$t^{d}x_{d}^3+(t^d+1)x_{d}^2+x_{d}-y_{d}^2=0$$ which is regular in a neighborhood of $t=0$. Thus we see that the fiber over $t=0$ in ${\mathcal{E}}$ is a chain of $2d$ rational curves, i.e., a fiber of type $I_{2d}$.
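The substitution in the first blow-up can be verified mechanically: both sides of $f(tx_1,ty_1)=t^2\,g(x_1,y_1)$ are polynomials of small degree, so checking the identity on a grid of integer points larger than those degrees proves it. A sketch for the sample value $d=3$:

```python
d = 3  # sample exponent d > 1 (the reducible tangent cone case)

def f(x, y, t):
    # affine equation of W near the singular point x = y = t = 0
    return x**3 + (t**d + 1)*x**2 + t**d*x - y**2

def g(x1, y1, t):
    # claimed strict transform in the chart x = t*x1, y = t*y1
    return t*x1**3 + (t**d + 1)*x1**2 + t**(d - 1)*x1 - y1**2

# total transform = t^2 * (strict transform) in this chart;
# the grid exceeds the degree in each variable, proving the identity
for t in range(-4, 5):
    for x1 in range(-4, 5):
        for y1 in range(-4, 5):
            assert f(t*x1, t*y1, t) == t**2 * g(x1, y1, t)
```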
The resolution of the singularities over points with $t^d=1$ is similar but simpler because only one blow up is required. At $t=\infty$, if $d$ is even then the situation is very similar to that over $t=0$ and the reduction is again of type $I_{2d}$. If $d$ is odd, the reduction is of type $I_{2d}^*$. We omit the details in this case since it is treated fully in [@SilvermanAT].
In the table in Tate’s algorithm paper [@Tate75] (and the slightly more precise version in [@SilvermanAEC]\*[p. 448]{}), the last three rows have restrictions on $p$. Give examples showing that these restrictions are all necessary for the discriminant and conductor statements, and for the statement about $j$ in the $I_n^*$, $p=2$ case. Show that the other assertions about the $j$-invariant are correct for all $p$.
${\mathcal{E}}$ and the classification of surfaces {#s:height}
==================================================
It is sometimes useful to know how ${\mathcal{E}}$ fits into the Enriques-Kodaira classification of surfaces. In this section only, we replace $k$ with ${{\overline{k}}}$ and write ${\mathcal{E}}$ for what elsewhere is denoted ${\overline{\mathcal{E}}}$.
Recall that the height of ${\mathcal{E}}$ is defined as $h=\deg\omega$.
\[prop:h=0\] $\omega\cong{\mathcal{O}}_{\mathcal{C}}$ if and only if $E$ is constant. If $h=\deg(\omega)=0$, then $E$ is isotrivial.
It is obvious that if $E$ is constant, then $\omega\cong{\mathcal{O}}_{\mathcal{C}}$. Conversely, suppose $\omega\cong{\mathcal{O}}_{\mathcal{C}}$. Then the construction of $\pi_0:{{\mathcal{W}}}\to{\mathcal{C}}$ in Proposition \[prop:W\] yields an irreducible closed subset of ${\mathbb{P}}^2_{\mathcal{C}}$ (because the ${\mathbb{P}}^2$-bundle ${\mathbb{P}}V$ in (\[eq:bundle\]) is trivial): $${{\mathcal{W}}}\subset{\mathbb{P}}^2_{\mathcal{C}}={\mathbb{P}}^2_k\times_k{\mathcal{C}}.$$ Let $\sigma:{{\mathcal{W}}}\to{\mathbb{P}}^2_k$ be the restriction of the projection ${\mathbb{P}}^2_{\mathcal{C}}\to{\mathbb{P}}^2_k$. Then $\sigma$ is not surjective (since most points in the line at infinity $Z=0$ are not in the image) and so its image has dimension $<2$. Considering the restriction of $\sigma$ to a fiber of $\pi_0$ shows that the image of $\sigma$ is in fact an elliptic curve $E_0$. Thus ${{\mathcal{W}}}\subset E_0\times{\mathcal{C}}$, and since both are irreducible surfaces, dimension considerations give $${{\mathcal{W}}}=E_0\times{\mathcal{C}}.$$ It follows that $E$, the generic fiber of $\pi_0$, is isomorphic to $E_0\times\operatorname{Spec}K$, i.e., that $E$ is constant.
Now assume that $h=0$. Then $\Delta$ is a non-zero global section of the invertible sheaf $\omega^{12}$ on ${\mathcal{C}}$ of degree 0. Thus $\omega^{12}$ is trivial. It follows that there is a finite unramified cover of ${\mathcal{C}}$ over which $\omega$ becomes trivial and so by the first part, $E$ becomes constant over a finite extension, i.e., $E$ is isotrivial.
Note that $E$ being isotrivial does not imply that $h=0$.
Give an example of a non-constant $E$ of height zero. Hint: Consider the quotient of a product of elliptic curves by a suitable free action of a group of order two.
The canonical bundle of ${\mathcal{E}}$ is $\Omega^2_{\mathcal{E}}\cong\pi^*\left(\Omega^1_{\mathcal{C}}{\otimes}\omega\right)$.
Here we are using that ${\mathcal{E}}\to{\mathcal{C}}$ has a section and therefore no multiple fibers. The proof, which we omit, proceeds by considering $R^1\pi_*{\mathcal{O}}_{\mathcal{E}}$ and using relative duality. See for example [@BadescuAS]\*[7.15]{}.
We now consider several cases:
If $2g_{\mathcal{C}}-2+h>0$, then it follows from the Proposition that the dimension of $H^0({\mathcal{E}},(\Omega^2)^{{\otimes}^n})$ grows linearly with $n$, so ${\mathcal{E}}$ has Kodaira dimension 1.
If $2g_{\mathcal{C}}-2+h=0$, then the Kodaira dimension of ${\mathcal{E}}$ is zero and there are two possibilities: (1) $g_{\mathcal{C}}=1$ and $h=0$; or (2) $g_{\mathcal{C}}=0$ and $h=2$. In the first case, there is an unramified cover of ${\mathcal{C}}$ over which ${\mathcal{E}}$ becomes constant and so ${\mathcal{E}}$ is the quotient of a product of two elliptic curves. These surfaces are sometimes called “bi-elliptic.” In the second case, $\Omega^2_{\mathcal{E}}={\mathcal{O}}_{\mathcal{E}}$ and $H^1({\mathcal{E}},{\mathcal{O}}_{\mathcal{E}})=H^0({\mathcal{C}},\omega^{-1})=0$ and so ${\mathcal{E}}$ is a K3 surface.
If $2g_{\mathcal{C}}-2+h<0$, then the Kodaira dimension of ${\mathcal{E}}$ is $-\infty$ and there are again two possibilities: (1) $g_{\mathcal{C}}=0$ and $h=1$, in which case ${\mathcal{E}}$ is a rational surface by Castelnuovo’s criterion; or (2) $g_{\mathcal{C}}=0$ and $h=0$, in which case $E$ is constant and ${\mathcal{E}}$ is a ruled surface $E_0\times{\mathcal{C}}=E_0\times{\mathbb{P}}^1$.
Points and divisors, Shioda-Tate {#s:Shioda-Tate}
================================
If $D$ is an irreducible curve on ${\mathcal{E}}$, then its generic fiber $$D.E:=D\times_{{\mathcal{C}}}E$$ is either empty or is a closed point of $E$. The former occurs if and only if $D$ is supported in a fiber of $\pi$. In the latter case, the residue degree of $D.E$ is equal to the generic degree of $D\to{\mathcal{C}}$. Extending by linearity, we get a homomorphism $$\operatorname{\rm Div}({\mathcal{E}})\to\operatorname{\rm Div}(E)$$ whose kernel consists of divisors supported in the fibers of $\pi$.
There is a set-theoretic splitting of this homomorphism, induced by the map sending a closed point of $E$ to its scheme-theoretic closure in ${\mathcal{E}}$. However, this is not in general a group homomorphism.
Let $L^1\operatorname{\rm Div}({\mathcal{E}})$ be the subgroup of divisors $D$ such that the degree of $D.E$ is zero and let $L^2\operatorname{\rm Div}({\mathcal{E}})$ be the subgroup of those $D$ with $D.E=0$. We write $L^i\operatorname{Pic}({\mathcal{E}})$ and $L^i\operatorname{NS}({\mathcal{E}})$ ($i=1,2$) for the images of $L^i\operatorname{\rm Div}({\mathcal{E}})$ in $\operatorname{Pic}({\mathcal{E}})$ and $\operatorname{NS}({\mathcal{E}})$ respectively.
The Shioda-Tate theorem relates the Néron-Severi group of ${\mathcal{E}}$ to the Mordell-Weil group of $E$:
\[thm:Shioda-Tate\] If ${\mathcal{E}}\to{\mathcal{C}}$ is non-constant, $D\mapsto D.E$ induces an isomorphism $$\frac{L^1\operatorname{NS}({\mathcal{E}})}{L^2\operatorname{NS}({\mathcal{E}})}\cong E(K)$$ If ${\mathcal{E}}\to{\mathcal{C}}$ is constant, we have $$\frac{L^1\operatorname{NS}({\mathcal{E}})}{L^2\operatorname{NS}({\mathcal{E}})}\cong E(K)/E(k)$$
This theorem seems to have been known to the ancients (Lang, Néron, Weil, ...) and was stated explicitly in [@Tate66b] and in papers of Shioda. A detailed proof in a more general context is given in [@Shioda99]. Note however that in [@Shioda99] the ground field is assumed to be algebraically closed. See [@UlmerCRM] for the small modifications needed to treat finite $k$.
It is obvious that $NS({\mathcal{E}})/L^1NS({\mathcal{E}})$ is infinite cyclic. We saw in Example \[ss:fibrations\] of Lecture 2 that $L^2NS({\mathcal{E}})$ is free abelian of rank $1+\sum_v(f_v-1)$. So as a corollary of the theorem, we have the following rank formula, known as the Shioda-Tate formula: $$\label{eq:STformula}
\operatorname{Rank}E(K)=\operatorname{Rank}NS({\mathcal{E}})-2-\sum_v(f_v-1)$$
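The formula is easy to experiment with. Here is a small Python sketch; it uses the standard fact that a rational elliptic surface has $\operatorname{Rank}NS({\mathcal{E}})=10$, and the fiber configurations below are illustrative choices, not computed from a particular surface.

```python
# Numerical sketch of the Shioda-Tate rank formula
#   Rank E(K) = Rank NS(E) - 2 - sum_v (f_v - 1),
# where f_v is the number of irreducible components of the fiber at v
# that are rational over the residue field.

def shioda_tate_rank(rank_ns, fiber_components):
    """rank_ns: rank of the Neron-Severi group of the elliptic surface.
    fiber_components: list of the f_v, one entry per bad fiber."""
    return rank_ns - 2 - sum(f - 1 for f in fiber_components)

# A rational elliptic surface has Rank NS = 10, so the Mordell-Weil rank
# is 8 - sum_v (f_v - 1).  With twelve irreducible (I_1) bad fibers
# (a hypothetical generic configuration), every f_v = 1 and the rank is 8.
print(shioda_tate_rank(10, [1] * 12))      # 8

# With one split I_9 fiber and three I_1 fibers, sum_v (f_v - 1) = 8,
# forcing Mordell-Weil rank 0.
print(shioda_tate_rank(10, [9, 1, 1, 1]))  # 0
```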
For more on the geometry of elliptic surfaces and elliptic curves over function fields, with an emphasis on rational and K3 surfaces, I recommend [@SchuttShiodaES].
$L$-functions and Zeta-functions
================================
We are going to relate the $L$-function of $E$ and the zeta function of ${\mathcal{E}}$. We note that from the definition, $Z({\mathcal{E}},T)$ depends only on the underlying set of closed points of ${\mathcal{E}}$ and we may partition this set using the map $\pi$.
We have $$\begin{aligned}
Z({\mathcal{E}},T)&=\prod_{\text{closed }x\in{\mathcal{E}}}\left(1-T^{\deg(x)}\right)^{-1}\\
&=\prod_{\text{closed }y\in{\mathcal{C}}}
\prod_{x\in\pi^{-1}(y)}\left(1-T^{\deg(x)}\right)^{-1}\\
&=\prod_{\text{closed }y\in{\mathcal{C}}}Z(\pi^{-1}(y),T^{\deg(y)})\end{aligned}$$
For $y$ such that $\pi^{-1}(y)$ is a smooth elliptic curve, we know that $$Z(\pi^{-1}(y),T)=\frac{(1-a_yT+q_yT^2)}{(1-T)(1-q_yT)}$$ and the numerator here is the factor that enters into the definition of $L(E,T)$.
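One can check this numerically. The following Python sketch brute-forces the toy curve $y^2=x^3+x+1$ over ${\mathbb{F}}_5$ (an illustrative choice, not a curve from the text) and verifies the consequence $\#E({\mathbb{F}}_{q^2})=q^2+1-(a^2-2q)$ of the displayed factorization.

```python
# Check of Z(fiber, T) = (1 - a T + q T^2)/((1-T)(1-qT)) for a good fiber,
# on the toy curve y^2 = x^3 + x + 1 over F_5.  Writing
# 1 - aT + qT^2 = (1 - alpha T)(1 - beta T), the zeta function predicts
# #E(F_{q^m}) = q^m + 1 - alpha^m - beta^m; for m = 2 this reads
# #E(F_{q^2}) = q^2 + 1 - (a^2 - 2q).

q = 5

def count_f_q():
    # affine points plus the point at infinity
    pts = sum(1 for x in range(q) for y in range(q)
              if (y * y - (x**3 + x + 1)) % q == 0)
    return pts + 1

def count_f_q2():
    # realize F_25 as F_5(s) with s^2 = 2 (2 is a non-residue mod 5);
    # elements are pairs (a, b) representing a + b*s
    def mul(u, v):
        a, b = u; c, d = v
        return ((a * c + 2 * b * d) % q, (a * d + b * c) % q)
    def add(u, v):
        return ((u[0] + v[0]) % q, (u[1] + v[1]) % q)
    elems = [(a, b) for a in range(q) for b in range(q)]
    pts = 0
    for x in elems:
        rhs = add(add(mul(mul(x, x), x), x), (1, 0))  # x^3 + x + 1
        pts += sum(1 for y in elems if mul(y, y) == rhs)
    return pts + 1

n1 = count_f_q()
a = q + 1 - n1
n2 = count_f_q2()
print(n1, a, n2)                       # 9 -3 27
assert n2 == q**2 + 1 - (a**2 - 2*q)   # trace relation from the zeta function
```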
To complete the calculation, we need an analysis of the contribution of the bad fibers. We consider the fiber $\pi^{-1}(y)$ as a scheme of finite type over the residue field $\kappa_y$, the field of $q_y$ elements. As such, it has irreducible components. Its “geometric components” are the components of the base change to $\overline{\kappa}_y$; these are defined over some finite extension of $\kappa_y$.
For certain reduction types ($I_n$, $I_n^*$ ($n\ge0$), $IV$ and $IV^*$) it may happen that all the geometric components are defined over $\kappa_y$, in which case we say the reduction is “split”, or it may happen that some geometric components are only defined over a quadratic extension of $\kappa_y$, in which case we say the reduction is “non-split.” This agrees with the standard usage in the case of $I_n$ reduction and may be non-standard in the other cases.
The zeta function of a singular fiber of $\pi$ has the form $$\begin{aligned}
Z(\pi^{-1}(y),T)&=
\frac{(1-T)^{a}(1+T)^b}{(1-q_yT)^{f}(1+q_yT)^g}\\
&=\frac{1}{(1-T)(1-q_yT)}\frac{(1-T)^{a+1}(1+T)^b}{(1-q_yT)^{f-1}(1+q_yT)^g}\end{aligned}$$ where the integers $a$, $b$, $f$, and $g$ are determined by the reduction type at $y$ and are given in the following table: $$\vbox{\offinterlineskip\hrule
\halign{&\vrule#&\strut\quad\hfil#\hfil\quad\cr
&\hfill &&$a$&&$b$&&$f$&&$g$&\cr
\noalign{\hrule}
&\hfill split $I_n$&&$0$&&$0$&&$n$&&$0$&\cr
\noalign{\hrule}
&\hfill non-split $I_n$, $n$ odd&&$-1$&&$1$&&$(n+1)/2$&&$(n-1)/2$&\cr
\noalign{\hrule}
&\hfill non-split $I_n$, $n$ even&&$-1$&&$1$&&$n/2+1$&&$(n-2)/2$&\cr
\noalign{\hrule}
&\hfill split $I_n^*$&&$-1$&&$0$&&$5+n$&&$0$&\cr
\noalign{\hrule}
&\hfill non-split $I_n^*$&&$-1$&&$0$&&$4+n$&&$1$&\cr
\noalign{\hrule}
&\hfill $II$&&$-1$&&$0$&&$1$&&$0$&\cr
\noalign{\hrule}
&\hfill $II^*$&&$-1$&&$0$&&$9$&&$0$&\cr
\noalign{\hrule}
&\hfill $III$&&$-1$&&$0$&&$2$&&$0$&\cr
\noalign{\hrule}
&\hfill $III^*$&&$-1$&&$0$&&$8$&&$0$&\cr
\noalign{\hrule}
&\hfill split $IV$&&$-1$&&$0$&&$3$&&$0$&\cr
\noalign{\hrule}
&\hfill non-split $IV$&&$-1$&&$0$&&$2$&&$1$&\cr
\noalign{\hrule}
&\hfill split $IV^*$&&$-1$&&$0$&&$7$&&$0$&\cr
\noalign{\hrule}
&\hfill non-split $IV^*$&&$-1$&&$0$&&$3$&&$4$&\cr
\noalign{\hrule}
}}$$
Use an elementary point-counting argument to verify the proposition. In particular, check that the number of components of $\pi^{-1}(y)$ that are rational over $\kappa_y$ is $f$ and that the order of pole at $T=q_y^{-1}$ of $$Z(\pi^{-1}(y),T)\,(1-T)(1-q_yT)$$ is $f-1$.
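As a partial check in the spirit of this exercise, the following Python sketch compares the point counts predicted by the table, namely $\#\pi^{-1}(y)({\mathbb{F}}_{q^m})=fq^m+g(-q)^m-a-b(-1)^m$ (obtained by expanding $\log Z$), with direct geometric counts for three reduction types.

```python
# Consistency check of the table via point counts: expanding log Z gives
#   #fiber(F_{q^m}) = f*q^m + g*(-q)^m - a - b*(-1)^m.
# We compare this against standard geometric counts for three reduction
# types, with q and m ranging over small values.

def n_m(a, b, f, g, q, m):
    return f * q**m + g * (-q)**m - a - b * (-1)**m

for q in (2, 3, 5):
    for m in (1, 2, 3, 4):
        Q = q**m
        # split I_2: two rational P^1's meeting in two rational points:
        # 2(Q+1) - 2 = 2Q points
        assert n_m(0, 0, 2, 0, q, m) == 2 * Q
        # non-split I_1: nodal cubic whose branches at the node are
        # conjugate; the two points of P^1 above the node become rational
        # exactly when m is even
        expected = Q if m % 2 == 0 else Q + 2
        assert n_m(-1, 1, 1, 0, q, m) == expected
        # type II: cuspidal cubic; the normalization P^1 -> fiber is a
        # bijection on points, so Q + 1 points
        assert n_m(-1, 0, 1, 0, q, m) == Q + 1
print("table rows consistent with point counts")
```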
Using the Proposition and the definition of the $L$-function (in Lecture 1, equation (\[eq:Ldef\])) we find that $$\label{eq:Z-L}
L(E,T)=\frac{Z({\mathcal{C}},T)Z({\mathcal{C}},qT)}{Z({\mathcal{E}},T)}
\prod_{\text{bad }v}
\frac{(1-T^{\deg(v)})^{a_v+1}(1+T^{\deg(v)})^{b_v}}{(1-q_vT^{\deg(v)})^{f_v-1}(1+q_vT^{\deg(v)})^{g_v}}$$ where $a_v$, $b_v$, $f_v$ and $g_v$ are the invariants defined in the Proposition at the place $v$. Using the Weil conjectures (see Section \[s:zetas\] of Lecture 0), we see that the orders of $L(E,s)$ and $\zeta({\mathcal{E}},s)$ at $s=1$ are related as follows: $$\label{eq:Z-L-ords}
\operatorname{ord}_{s=1}L(E,s)=-\operatorname{ord}_{s=1}\zeta({\mathcal{E}},s)
-2-\sum_v(f_v-1).$$
This simple approach to evaluating the order of zero of the $L$-function does not yield the important fact that $L(E,T)$ is a polynomial in $T$ when $E$ is non-constant, nor does it yield the Riemann hypothesis for $L(E,T)$.
For a slightly more sophisticated (and less explicit) comparison of $\zeta$-functions and $L$-functions in a more general context, see [@Gordon79].
The Tate-Shafarevich and Brauer groups
======================================
The last relationship between $E$ and ${\mathcal{E}}$ we need concerns the Tate-Shafarevich and Brauer groups.
\[thm:Sha-Br\] Suppose that $E$ is an elliptic curve over $K=k({\mathcal{C}})$ and ${\mathcal{E}}\to{\mathcal{C}}$ is the associated elliptic surface as in Proposition \[prop:model\]. Then there is a canonical isomorphism $$\operatorname{Br}({\mathcal{E}})\cong{{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K).$$
The proof of this result, which is somewhat involved, is given in [@Grothendieck68]\*[Section 4]{}. The main idea is simple enough: one computes $\operatorname{Br}({\mathcal{E}})=H^2({\mathcal{E}},{\mathbb{G}}_m)$ using the morphism $\pi:{\mathcal{E}}\to{\mathcal{C}}$ and a spectral sequence. Using that the Brauer group of a smooth, complete curve over a finite field vanishes, one finds that the main term is $H^1({\mathcal{C}},R^1\pi_*{\mathbb{G}}_m)$. Since $R^1\pi_*{\mathbb{G}}_m$ is the sheaf associated to the relative Picard group, it is closely related to the sheaf on ${\mathcal{C}}$ represented by the Néron model of $E$. This provides a connection with the Tate-Shafarevich group which leads to the theorem.
See [@UlmerCRM] for more details about this and the closely related connection between $H^2({\overline{\mathcal{E}}},{{\mathbb{Z}_\ell}}(1))^{G_k}$ and the $\ell$-Selmer group of $E$.
The main classical results
==========================
We are now in a position to prove the theorems of Section \[s:results\] of Lecture 1. For convenience, we restate Theorem \[thm:BSD1\] and a related result.
\[thm:BSD-Tate\] Suppose that $E$ is an elliptic curve over $K=k({\mathcal{C}})$ and ${\mathcal{E}}\to{\mathcal{C}}$ is the associated elliptic surface as in Proposition \[prop:model\].
1. BSD holds for $E$ if and only if $T_2$ holds for ${\mathcal{E}}$.
2. $\operatorname{Rank}E(K)\le\operatorname{ord}_{s=1}L(E,s)$.
3. The following are equivalent:
- $\operatorname{Rank}E(K)=\operatorname{ord}_{s=1}L(E,s)$
- ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$ is finite
- for any one prime number $\ell$ ($\ell=p$ is allowed), the $\ell$-primary part ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)_{\ell^\infty}$ is finite.
4. If $K'/K$ is a finite extension and if the BSD conjecture holds for $E$ over $K'$, then it holds for $E$ over $K$.
Comparing (\[eq:STformula\]) and (\[eq:Z-L-ords\]), we have that $$\operatorname{Rank}E(K)-\operatorname{ord}_{s=1}L(E,s)=\operatorname{Rank}NS({\mathcal{E}})+\operatorname{ord}_{s=1}\zeta({\mathcal{E}},s).$$ Since BSD is the assertion that the left hand side is zero and $T_2$ is the assertion that the right hand side is zero, these conjectures are equivalent.
By Theorem \[prop:T-ineqs\] of Lecture 2, the right hand side is $\le0$ and therefore so is the left. This gives the inequality $\operatorname{Rank}E(K)\le\operatorname{ord}_{s=1}L(E,s)$.
The statements about ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$ follow from Theorem \[thm:Sha-Br\] (${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)\cong\operatorname{Br}({\mathcal{E}})$), the equivalence of BSD and $T_2({\mathcal{E}})$, and Theorem \[thm:T1-Br\] of Lecture 2.
The last point follows from the equivalence of BSD and $T_2({\mathcal{E}})$ and Proposition \[prop:T-descent\] of Lecture 2.
Theorem \[thm:BSD2\] of Lecture 1 concerns isotrivial elliptic curves. By the last point of Theorem \[thm:BSD-Tate\] above, it suffices to show that BSD holds for constant curves. But if $E$ is constant, then ${\mathcal{E}}$ is a product of curves, so the Tate conjecture for ${\mathcal{E}}$ follows from Theorem \[thm:products\] of Lecture 2. The first point of Theorem \[thm:BSD-Tate\] above then gives BSD for $E$.
Theorem \[thm:BSD-low-height\] of Lecture 1 concerns elliptic curves over $k(t)$ of low height. By the discussion in Section \[s:height\], if $E/k(t)$ has height $\le2$ then ${\mathcal{E}}$ is a rational or K3 surface. (Strictly speaking, this is true only over a finite extension of $k$, but the last point of Theorem \[thm:BSD-Tate\] allows us to make this extension without loss of generality.) But $T_2({\mathcal{X}})$ for a rational surface follows from Proposition \[prop:T-DPC\] of Lecture 2. For $E$ such that ${\mathcal{E}}$ is a K3 surface, Artin and Swinnerton-Dyer proved the finiteness of ${{\hbox to 10pt{\rlap{\hskip2.8pt\vrule
height6pt\hskip1.6pt\vrule height6pt\hskip1.6pt
\vrule height6pt}\hskip1pt\vrule height0.8pt width 8pt\hskip1pt}}}(E/K)$ (and therefore BSD) in [@ArtinSwinnertonDyer73].
Domination by a product of curves
=================================
Combining part 1 of Theorem \[thm:BSD-Tate\] with Proposition \[prop:T-DPC\] of Lecture 2, we have the following.
\[thm:DPC-BSD\] Let $E$ be an elliptic curve over $K$ with associated surface ${\mathcal{E}}$. If ${\mathcal{E}}$ is dominated by a product of curves, then BSD holds for $E$.
Theorem \[thm:4-monos\] (“four monomials”) and Berger’s theorem \[thm:Berger\] are both corollaries of Theorem \[thm:DPC-BSD\], as we will explain in the remainder of this lecture.
Four monomials
==============
We recall Shioda’s conditions. Suppose that $f\in R=k[x_1,x_2,x_3]$ is the sum of exactly four non-zero monomials: $$f=\sum_{i=1}^4c_i\prod_{j=1}^3 x_j^{e_{ij}}$$ where $c_i\in k$ and the $e_{ij}$ are non-negative integers. Let $e_{i4}=1-\sum_{j=1}^3e_{ij}$ and form the $4\times4$ matrix $A=(e_{ij})$. Assuming that $\det(A)\neq0$ (in ${\mathbb{Z}}$), let $\delta$ be the smallest positive integer such that there is a $4\times4$ integer matrix $B$ with $AB=\delta I_{4\times4}$. We say that $f$ [ *satisfies Shioda’s 4-monomial condition*]{} if $\delta\neq0$ in $k$, i.e., if $p{\not|}\delta$. The following exercise shows that this is equivalent to the definition in Lecture 1.
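Computing $\delta$ is mechanical: it is the least common multiple of the denominators of the entries of $A^{-1}$. The following Python sketch does this with exact arithmetic for the illustrative 4-nomial $f=y^2-x^3-t$ (not an example from the text).

```python
# Sketch of computing Shioda's delta for a 4-nomial, here for the
# illustrative example f = y^2 - x^3 - t (monomials y^2, x^3, t, 1, with
# x_1 = t, x_2 = x, x_3 = y).  delta is the smallest positive integer
# such that delta * A^{-1} is an integer matrix, i.e. the lcm of the
# denominators of the entries of A^{-1}.
from fractions import Fraction
from math import lcm

def inverse(A):
    """Exact inverse of a square matrix via Gauss-Jordan elimination."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def shioda_delta(exponents):
    """exponents: the 4x3 matrix (e_ij); builds A with e_i4 = 1 - sum_j e_ij."""
    A = [row + [1 - sum(row)] for row in exponents]
    return lcm(*[x.denominator for row in inverse(A) for x in row])

# rows: exponents of (x_1, x_2, x_3) in the monomials y^2, x^3, t, 1
delta = shioda_delta([[0, 0, 2], [0, 3, 0], [1, 0, 0], [0, 0, 0]])
print(delta)  # 6 -- so the 4-monomial condition holds whenever p does not divide 6
```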
Show that a prime $\ell$ divides $\delta$ if and only if it divides $\det(A)$. Show that if we change the definition of $e_{i4}$ to $e_{i4}=d-\sum_{j=1}^3e_{ij}$ for some other non-zero integer $d$ and define $\delta_d$ using the new $A=(e_{ij})$, then $\delta_1$ divides $\delta_d$ for all $d$. I.e., $d=1$ is the optimal choice to minimize $\delta$.
\[exer:change-of-coords\] With $c_i$ and $e_{ij}$ as above, show that the system of equations $$\prod_{j=1}^4d_{j}^{e_{ij}}=c_i^{-1}\qquad i=1,\dots,4$$ has a solution with $d_{j}\in{{\overline{\mathbb{F}}_q}}$, $j=1,\dots,4$.
Briefly, the hypotheses imply that the associated elliptic surface ${\mathcal{E}}\to{\mathbb{P}}^1$ is dominated by a Fermat surface (of degree $\delta$) and thus by a product of Fermat curves (of degree $\delta$). Thus Theorem \[thm:DPC-BSD\] implies that BSD holds for $E$.
In more detail, note that ${\mathcal{E}}$ is birational to the affine surface $V(f)\subset{\mathbb{A}}^3_k$. So it will suffice to show that $V(f)$ is dominated by a product of curves. To that end, it will be convenient to identify $k[t,x,y]$ and $R=k[x_1,x_2,x_3]$ by sending $t\mapsto
x_1$, $x\mapsto x_2$ and $y\mapsto x_3$, so that $f$ becomes $$f=\sum_{i=1}^4c_i\prod_{j=1}^3 x_j^{e_{ij}}.$$
Exercise \[exer:change-of-coords\] implies that, after extending $k$ if necessary, we may change coordinates ($x_j\mapsto d_jx_j$) so that the coefficients $c_i$ are all $1$. Then the matrix $A$ defines a rational map $\phi$ from $V(f)$ to the Fermat surface of degree $1$ $$F^2_1 =\{y_1+y_2+y_3+y_4=0\}\subset{\mathbb{P}}^3_k,$$ namely $\phi^*(y_i)=\prod_{j=1}^4 x_j^{e_{ij}}$. Similarly, the matrix $B$ defines a rational map $\psi$ from the Fermat surface of degree $\delta$ $$F^2_\delta=\{z_1^\delta+z_2^\delta+z_3^\delta+z_4^\delta=0\}
\subset{\mathbb{P}}^3_k$$ to $V(f)$, namely $\psi^*(x_i)=\prod_{j=1}^4 z_j^{B_{ij}}$. The composition of these maps is the standard projection from $F^2_\delta$ to $F^2_1$, namely $y_i\mapsto z_i^\delta$ and so both maps are dominant.
Finally, Shioda and Katsura [@ShiodaKatsura79] showed that $F^2_\delta$ is dominated by the product of Fermat curves $F^1_\delta\times F^1_\delta$. Thus, after extending $k$, ${\mathcal{E}}$ is dominated by a product of curves and Theorem \[thm:DPC-BSD\] finishes the proof.
As we will explain below, this Theorem can be combined with results on analytic ranks to give examples of elliptic curves over ${{\mathbb{F}_p}}(t)$ with arbitrarily large Mordell-Weil rank. (In fact, similar ideas can be used to produce Jacobians of every dimension with large rank. For this, see [@Ulmer07b] and also [@UlmerCRM].)
Unfortunately, Theorem \[thm:4-monos\] is very rigid—as one sees in the proof, varying the coefficients in the 4-nomial $f$ does not vary the isomorphism class of ${\mathcal{E}}$ over ${{\overline{\mathbb{F}}_q}}$ and so we get only finitely many non-isomorphic elliptic curves over ${{\overline{\mathbb{F}}_p}}(t)$. Berger’s construction, explained in the next subsection, was motivated by a desire to overcome this rigidity and give [*families*]{} of examples of curves where one knows the BSD conjecture.
Berger’s construction {#s:Berger}
=====================
Berger gave a much more flexible construction of surfaces that are dominated by a product of curves in a tower. More precisely, we note that if ${\mathcal{E}}\to{\mathbb{P}}^1$ is an elliptic surface and $\phi:{\mathbb{P}}^1\to{\mathbb{P}}^1$ is the morphism with $\phi^*(t)=u^d$ (corresponding to the field extension $k(u)/k(t)$ with $u^d=t$), then it is not in general the case that the base changed surface $$\xymatrix{{\mathcal{E}}'={\mathcal{E}}\times_{{\mathbb{P}}^1_k}{\mathbb{P}}^1_k\ar[r]\ar[d]&{\mathbb{P}}^1_k\ar[d]\\
{\mathbb{P}}^1_k\ar^\phi[r]&{\mathbb{P}}^1_k}$$ is dominated by a product of curves. Berger’s construction gives a rich class of curves for which DPC [*does*]{} hold in every layer of a tower of coverings. We restate Theorem \[thm:berger\] from Lecture 1 in a slightly different (but visibly equivalent) form.
\[thm:Berger\] Let $E$ be an elliptic curve over $K=k(t)$ and assume that there are rational functions $f(x)$ and $g(y)$ on ${\mathbb{P}}^1_k$ such that $E$ is birational to the curve $V(f(x)-tg(y))\subset{\mathbb{P}}^1_K\times{\mathbb{P}}^1_K$. Then the BSD conjecture holds for $E$ over the field $k(u)=k(t^{1/d})$ for all $d$ prime to $p$.
Clearing denominators we may interpret $f(x)-tg(y)$ as defining a hypersurface ${\mathcal{X}}$ in the affine space ${\mathbb{A}}^3$ with coordinates $x$, $y$, and $t$ and it is clear that the elliptic surface ${\mathcal{E}}\to{\mathbb{P}}^1$ associated to $E$ is birationally isomorphic to ${\mathcal{X}}$. On the other hand, ${\mathcal{X}}$ is visibly birational to ${\mathbb{P}}^1\times{\mathbb{P}}^1$ since we may eliminate $t$. Thus ${\mathcal{X}}$ and ${\mathcal{E}}$ are dominated by a product of curves. This checks the case $d=1$.
For larger $d$, note that the elliptic surface ${\mathcal{E}}_d\to{\mathbb{P}}^1$ associated to $E/k(u)$ is birational to the hypersurface ${\mathcal{X}}_d$ in ${\mathbb{A}}^3_k$ defined by $f(x)-u^dg(y)$. Berger showed by a fundamental group argument, generalizing [@Schoen96], that ${\mathcal{X}}_d$ is dominated by a product of curves, more precisely, by a product of covers of ${\mathbb{P}}^1$. (For her argument to be correct, $\pi_1$ should be replaced by the prime-to-$p$ fundamental group $\pi_1^{p'}$ throughout.) This was later made more explicit in [@UlmerDPCT], where it was observed that ${\mathcal{X}}_d$ is dominated by a product of two explicit covers of ${\mathbb{P}}^1$.
More precisely, let ${\mathcal{C}}_d$ and ${\mathcal{D}}_d$ be the covers of ${\mathbb{P}}^1_k$ defined by $z^d=f(x)$ and $w^d=g(y)$. Then there is a rational map from ${\mathcal{C}}_d\times{\mathcal{D}}_d$ to the hypersurface ${\mathcal{X}}_d$, namely $$(x,z,y,w)\mapsto (x,y,u=z/w).$$ This is clearly dominant and so ${\mathcal{X}}_d$ and ${\mathcal{E}}_d$ are dominated by products of curves.
Applying Theorem \[thm:DPC-BSD\] finishes the proof.
Note that there is a great deal of flexibility in the choice of data for Berger’s construction. As an example, take $f(x)=x(x-a)/(x-1)$ and $g(y)=y(y-1)$ where $a\in{{\mathbb{F}_q}}$ is a parameter. Then if $a\neq1$, the curve $f(x)=tg(y)$ in ${\mathbb{P}}^1\times{\mathbb{P}}^1$ has genus 1 and a rational point. A simple calculation shows that it is birational to the Weierstrass cubic $$y^2+txy-ty=x^3-tax^2+t^2ax.$$ Theorem \[thm:Berger\] implies that this curve satisfies the BSD conjecture over ${\mathbb{F}}_{q^n}(t^{1/d})$ for all $n$ and all $d$ prime to $p$. Varying $q$ and $a$ we get infinitely many curves for which BSD holds at every layer of a tower.
We will give more examples and discuss further applications of the idea behind Berger’s construction in Lectures 4 and 5.
In order to prove results on analytic ranks in towers, we need a more sophisticated approach to $L$-functions. In this lecture we explain Grothendieck’s approach to $L$-functions over function fields and then use it and a new linear algebra lemma to find elliptic curves with unbounded analytic and algebraic ranks in towers of function fields.
Grothendieck’s analysis of $L$-functions
========================================
Galois representations {#ss:gal-reps}
----------------------
As usual, we let $K=k({\mathcal{C}})$ be the function field of a curve over a finite field $k$ and $G_K=\operatorname{Gal}(K^{sep}/K)$ its Galois group. As in Lecture 0, Section \[s:ffs\], we write $D_v$, $I_v$, and $\operatorname{Fr}_v$ for the decomposition group, inertia group, and (geometric) Frobenius at a place $v$ of $K$.
We fix a prime $\ell\neq p$ and consider a representation $$\label{eq:rho}
\rho:G_K\to{\mathrm{GL}}(V)\cong{\mathrm{GL}}_n({{\overline{\mathbb{Q}}_\ell}})$$ on a finite-dimensional ${{\overline{\mathbb{Q}}_\ell}}$ vector space. We make several standing assumptions about $\rho$.
First, we always assume $\rho$ is continuous and unramified away from a finite set of places of $K$. By a compactness argument (see [@KatzSarnakRMFEM]\*[9.0.7]{}), it is possible to define $\rho$ over a finite extension $L$ of ${{\mathbb{Q}_\ell}}$, i.e., there is a representation $$\rho':G_K\to{\mathrm{GL}}_n(L)$$ isomorphic to $\rho$ over ${{\overline{\mathbb{Q}}_\ell}}$. Nothing we say will depend on the field of definition of $\rho$ and we will generally not distinguish between $\rho$ and isomorphic representations defined over subfields of ${{\overline{\mathbb{Q}}_\ell}}$.
We also always assume that $\rho$ is pure of integral weight $w$, i.e., for all $v$ where $\rho$ is unramified, the eigenvalues of $\rho(\operatorname{Fr}_v)$ are Weil numbers of size $q_v^{w/2}$.
Finally, we sometimes assume that $\rho$ is “symplectically self-dual of weight $w$.” This means that on the space $V$ where $\rho$ acts, there is a $G_K$-equivariant, alternating pairing with values in ${{\overline{\mathbb{Q}}_\ell}}(-w)$.
Conductors
----------
The Artin conductor of $\rho$ is a divisor on ${\mathcal{C}}$ (a formal sum of places of $K$) and is a measure of its ramification. We write $\operatorname{Cond}(\rho)={\mathfrak{n}}=\sum_vn_v[v]$. To define the local coefficients, fix a place $v$ of $K$ and let $G_i\subset I_v$ be the higher ramification groups at $v$ (in the lower numbering). Then define $$n_v=\sum_{i=0}^\infty\frac1{[G_0:G_i]}\dim V/V^{G_i}.$$ Here $V^{G_i}$ denotes the subspace of $V$ invariant under $G_i$. It is clear that $n_v=0$ if and only if $\rho$ is unramified at $v$. If $\rho$ is tamely ramified at $v$ (i.e., $G_1$ acts trivially), then $n_v=\dim V/V^{G_0}=\dim V/V^{I_v}$. In general, the first term of the sum above is the [*tame conductor*]{} and the rest of the sum is the [*Swan conductor*]{}. We refer to [@MilneEC]\*[V.2]{} and also [@SerreLRFG]\*[§19]{} for an alternative definition and more discussion about the conductor, including the fact that the local coefficients $n_v$ are integers.
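The local recipe is easy to implement. The Python sketch below evaluates the sum for some sample ramification data; the wild example is hypothetical, chosen only to illustrate the fractional weights, and is not derived from an actual elliptic curve.

```python
# Sketch of the local conductor exponent
#   n_v = sum_i (1/[G_0:G_i]) dim V/V^{G_i},
# from ramification data: for each higher ramification group G_i we
# record the index [G_0:G_i] and dim V^{G_i}.  (The wild data below is
# made up for illustration, not computed from an actual elliptic curve.)
from fractions import Fraction

def conductor_exponent(dim_v, groups):
    """groups: list of (index [G_0:G_i], dim V^{G_i}) for i = 0, 1, ...,
    stopping once the G_i act trivially (dim V^{G_i} = dim_v)."""
    n = sum(Fraction(dim_v - inv, idx) for idx, inv in groups)
    assert n.denominator == 1, "conductor exponents are integers"
    return int(n)

# tame case: G_1 trivial, so only i = 0 contributes and n_v = dim V/V^{I_v};
# e.g. multiplicative reduction of an elliptic curve has n_v = 1
print(conductor_exponent(2, [(1, 1)]))                  # 1
# additive, tamely ramified: V^{G_0} = 0 and n_v = 2
print(conductor_exponent(2, [(1, 0)]))                  # 2
# a hypothetical wild case: G_0 = G_1 without invariants, G_2 of index 2,
# also without invariants, G_3 trivial: n_v = 2 + 2 + 2/2 = 5
print(conductor_exponent(2, [(1, 0), (1, 0), (2, 0)]))  # 5
```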
$L$-functions
-------------
Let us fix an isomorphism ${{\overline{\mathbb{Q}}_\ell}}\cong{\mathbb{C}}$ so that we may regard eigenvalues of Frobenius on $\ell$-adic representations as complex numbers. Having done this, a representation (\[eq:rho\]) gives rise to an $L$-function, defined as an Euler product: $$\label{eq:L-def}
L(\rho,T)=\prod_v\det\left(1-T^{\deg v}\operatorname{Fr}_v|V^{I_v}\right)^{-1}$$ and $L(\rho,s)=L(\rho,q^{-s})$. The product is over the places of $K$, the exponent $I_v$ denotes the subspace of elements invariant under the inertia group $I_v$, and $\operatorname{Fr}_v$ is a Frobenius element at $v$.
Because of our assumption that $\rho$ is pure of weight $w$, the product defining $L(\rho,s)$ converges absolutely and defines a holomorphic function in the region $\operatorname{Re}s>w/2+1$.
It is clear from the definition that if $\rho$ and $\sigma$ are Galois representations then $L(\rho\oplus\sigma,s)=L(\rho,s)L(\sigma,s)$ and $L(\rho(n),s)=L(\rho,s-n)$.
It is also clear that $L(\rho_{triv},s)=\zeta({\mathcal{C}},s)$ and so $L(\rho_{triv}(n),s)=\zeta({\mathcal{C}},s-n)$.
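For the trivial representation, the equality $L(\rho_{triv},T)=Z({\mathcal{C}},T)$ can be tested numerically against the Euler product. The Python sketch below does this for ${\mathcal{C}}={\mathbb{P}}^1$ over ${\mathbb{F}}_3$, counting places via the usual Möbius-inversion count of monic irreducible polynomials.

```python
# Numerical check that Z(P^1, T) = 1/((1-T)(1-qT)) agrees with its Euler
# product over the places of F_q(t): the q+1 places of degree 1 (monic
# linear polynomials and infinity) and, for d >= 2, the monic irreducible
# polynomials of degree d.  We expand both sides as power series up to
# degree N.
from math import comb

q, N = 3, 6

def mobius(n):
    primes, m, p = set(), n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            primes.add(p)
        else:
            p += 1
    if m > 1:
        primes.add(m)
    return (-1) ** len(primes)

def num_places(d):
    """Number of places of degree d of F_q(t)."""
    irred = sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d
    return irred + (1 if d == 1 else 0)   # the place at infinity has degree 1

def mul(f, g):
    h = [0] * (N + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= N:
                h[i + j] += a * b
    return h

# Euler product: prod_d (1 - T^d)^{-M_d}, using the binomial series
# (1 - T^d)^{-M} = sum_k C(M+k-1, k) T^{dk}
series = [1] + [0] * N
for d in range(1, N + 1):
    M = num_places(d)
    factor = [0] * (N + 1)
    for k in range(0, N // d + 1):
        factor[d * k] = comb(M + k - 1, k)
    series = mul(series, factor)

# 1/((1-T)(1-qT)) = sum_m (1 + q + ... + q^m) T^m
target = [sum(q ** i for i in range(m + 1)) for m in range(N + 1)]
print(series == target)  # True
```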
Prove that if $\rho$ factors through $G_K\to G_k$, so that $\operatorname{Fr}_v$ goes to $\alpha^{\deg v}$, then $$L(\rho,T)=Z({\mathcal{C}},\alpha T)$$ is a twisted version of the zeta function of ${\mathcal{C}}$. Compare with Exercise \[exer:L-const\] of Lecture 1. Note that a representation factors through $G_K\to G_k$ if and only if it is trivial on $G_{{{\overline{k}}}K}$, so this exercise fills in the missing cases in the following theorem.
\[thm:GA\] Suppose that $\rho$ is a representation of $G_K$ satisfying the standing hypotheses of Subsection \[ss:gal-reps\] that contains no copies of the trivial representation when restricted to $G_{{{\overline{k}}}K}$. Then there is a canonically defined ${{\overline{\mathbb{Q}}_\ell}}$-vector space $H(\rho)$ with continuous $G_k$ action such that $$L(\rho,s)=\det\left(1-q^{-s}\operatorname{Fr}_q|H(\rho)\right).$$ The dimension of $H(\rho)$ is $\deg(\rho)(2g_{\mathcal{C}}-2)+\deg{\mathfrak{n}}$ where ${\mathfrak{n}}$ is the conductor of $\rho$.
(Sketch) The representation $\rho:G_K\to{\mathrm{GL}}(V)$ gives rise to a constructible sheaf ${\mathcal{F}}_\rho$ on ${\mathcal{C}}$. In outline: $\rho$ is essentially the same thing as a lisse sheaf ${\mathcal{F}}_U$ on the open subset $j:U{\hookrightarrow}{\mathcal{C}}$ over which $\rho$ is unramified. We define ${\mathcal{F}}_\rho$ as the push-forward $j_*{\mathcal{F}}_U$. For each closed point $v$ of ${\mathcal{C}}$, the stalk of ${\mathcal{F}}_\rho$ at $v$ is $V^{I_v}$.
Let $H^i({\overline{\mathcal{C}}},{\mathcal{F}})$ be the étale cohomology groups of ${\mathcal{F}}$. They are finite dimensional ${{\overline{\mathbb{Q}}_\ell}}$ vector spaces and give continuous representations of $G_k$.
The Grothendieck-Lefschetz fixed point formula says that for each finite extension ${{\mathbb{F}_{q^n}}}$ of $k\cong{{\mathbb{F}_q}}$, we have $$\sum_{x\in{\mathcal{C}}({{\mathbb{F}_{q^n}}})} \operatorname{Tr}(Fr_x|{\mathcal{F}}_x)=
\sum_{i=0}^2(-1)^i\operatorname{Tr}\left(Fr_{q^n}|H^i({\overline{\mathcal{C}}},{\mathcal{F}})\right).$$ On the left hand side, the sum is over points of ${\mathcal{C}}$ with values in ${{\mathbb{F}_{q^n}}}$ and the summand is the trace of the action of the Frobenius at $x$ on the stalk of ${\mathcal{F}}$ at a geometric point over $x$.
Multiplying both sides by $T^n/n$, summing over $n\ge1$, and exponentiating, one finds that $$L(\rho,T)=
\prod_{i=0}^2\det\left(1-T\operatorname{Fr}_q|H^i({\overline{\mathcal{C}}},{\mathcal{F}})\right)^{(-1)^{i+1}}.$$
Now $H^0({\overline{\mathcal{C}}},{\mathcal{F}})$ and $H^2({\overline{\mathcal{C}}},{\mathcal{F}})$ are isomorphic respectively to the invariants and coinvariants of $V$ under $G_{{{\overline{k}}}K}$ and so under our hypotheses on $\rho$, $H^i({\overline{\mathcal{C}}},{\mathcal{F}})$ vanishes for $i=0,2$. Thus we have $$L(\rho,s)=\det\left(1-q^{-s}\operatorname{Fr}_q|H(\rho)\right)$$ where $H(\rho)=H^1({\overline{\mathcal{C}}},{\mathcal{F}})$.
The dimension formula comes from an Euler characteristic formula proven by Raynaud and sometimes called the Grothendieck-Ogg-Shafarevich formula. It says $$\sum_{i=0}^2(-1)^i\dim H^i({\overline{\mathcal{C}}},{\mathcal{F}})
=\deg(\rho)(2-2g_{\mathcal{C}})-\deg(\operatorname{Cond}(\rho)).$$ Since $H^0$ and $H^2$ vanish, this gives the desired dimension formula.
Obviously we have omitted many details. I recommend [@MilneEC]\*[V.1 and V.2]{} as a compact and readable source for several of the key points, including passing from $\ell$-torsion sheaves to $\ell$-adic sheaves, the conductor, and the Grothendieck-Ogg-Shafarevich formula. See [@MilneEC]\*[VI.13]{} for the Grothendieck-Lefschetz trace formula.
If we are willing to use a virtual representation of $G_k$ in place of a usual representation, then the Theorem has a more elegant restatement which avoids singling out representations that are trivial when restricted to $G_{{{\overline{k}}}K}$. State and prove this generalization.
\[exer:Artin\] Check that we have the Artin formalism formula: if $F/K$ is a finite separable extension and $\rho$ is a representation of $G_F$, then $$L(\rho,s)=L(\operatorname{Ind}^{G_K}_{G_F}\rho,s).$$ Note that the left hand side is an Euler product on $F$ with almost all factors of some degree, say $N$, whereas the right hand side is an Euler product on $K$, with almost all factors of degree $N[F:K]$. The equality can be taken to be an equality of Euler products, where that on the left is grouped according to the places of $K$.
Functional equation and Riemann hypothesis
------------------------------------------
Theorem \[thm:GA\] shows that the $L$-function of $\rho$ has an analytic continuation to the entire $s$ plane (meromorphic if we allow $\rho$ to have trivial factors over ${{\overline{k}}}K$). In this section we deduce other good analytic properties of $L(\rho,s)$.
Suppose in addition to the standing hypotheses that $\rho$ is symplectically self-dual of weight $w$. Then $L(\rho,s)$ satisfies a functional equation $$L(\rho,w+1-s)=\pm q^{N(s-(w+1)/2)}L(\rho,s)$$ where $N=(2g_{\mathcal{C}}-2)\deg(\rho)+\deg(\operatorname{Cond}(\rho))$. The zeroes of $L(\rho,s)$ lie on the line $\operatorname{Re}s=(w+1)/2$.
(Sketch) We use the notation of the proof of Theorem \[thm:GA\]. The functional equation comes from a symmetric pairing $$H(\rho)\times H(\rho)\to H^2({\overline{\mathcal{C}}},{{\overline{\mathbb{Q}}_\ell}}(-w))\cong{{\overline{\mathbb{Q}}_\ell}}(-w-1).$$ (Symmetric because $\rho$ is skew-symmetric and $H=H^1$.) That there is such a pairing is not as straightforward as it looks, because we defined the sheaf ${\mathcal{F}}$ as a push forward $j_*{\mathcal{F}}_U$ where $j:U{\hookrightarrow}{\mathcal{C}}$ is a non-empty open set over which $\rho$ is unramified and ${\mathcal{F}}_U$ is the lisse sheaf on $U$ corresponding to $\rho$. It is well-known that $j^*$ identifies $H^1({\overline{\mathcal{C}}},{\mathcal{F}})$ with the image of the “forget supports” map $$H^1_c(\overline{U},{\mathcal{F}}_U)\to H^1(\overline{U},{\mathcal{F}}_U)$$ from compactly supported cohomology to usual cohomology. (This is often stated, but the only proof I know of in the literature is [@Ulmer05]\*[7.1.6]{}.) The cup product $$H^1_c(\overline{U},{\mathcal{F}}_U)\times H^1(\overline{U},{\mathcal{F}}^*_U)\to
H^2_c(\overline{U},{{\overline{\mathbb{Q}}_\ell}})\cong{{\overline{\mathbb{Q}}_\ell}}(-1)$$ then induces a pairing on $H^1({\overline{\mathcal{C}}},{\mathcal{F}})$ via the above identification. Poincaré duality shows that the pairing is non-degenerate and so $H(\rho)$ is orthogonally self-dual of weight $w+1$.
The location of the zeroes is related to the eigenvalues of Frobenius on $H(\rho)=H^1({\overline{\mathcal{C}}},{\mathcal{F}})$ and these are Weil numbers of size $q^{(w+1)/2}$ by Deligne’s purity theorem [@Deligne80]. I recommend the Arizona Winter School 2000 lectures of Katz (published as [@Katz01]) for a streamlined proof of Deligne’s theorem in the generality needed here.
The case of an elliptic curve
=============================
Next, we apply the results of the previous section to elliptic curves. Throughout, $E$ will be an elliptic curve over a function field $K=k({\mathcal{C}})$ over a finite field $k$ of characteristic $p$.
The Tate module
---------------
We consider the Tate module of $E$. More precisely, fix a prime $\ell\neq p$ and let $$T_\ell E=\varprojlim_n E({{\overline{K}}})[\ell^n]
\quad\text{and}\quad V_\ell E=T_\ell E{\otimes}_{{\mathbb{Z}_\ell}}{{\mathbb{Q}_\ell}}.$$ Let $\rho_E$ be the representation of $G_K$ on the dual vector space $V_\ell^*=\operatorname{Hom}(V_\ell E,{{\mathbb{Q}_\ell}})\cong H^1(\overline{E},{{\mathbb{Q}_\ell}})$. Then $\rho_E$ is two-dimensional and continuous and (by the criterion of Néron-Ogg-Shafarevich, see [@SerreTate68]\*[Thm. 1]{}) it is unramified outside the (finite) set of places where $E$ has bad reduction.
At every place $v$ of $K$ where $E$ has good reduction, we have $$\det(1-\rho(\operatorname{Fr}_v)T)=1-a_vT+q_vT^2$$ where $a_v$ is defined as in (\[eq:a\_v\]) by $\#E_v(\kappa_v)=1-a_v+q_v$. This follows from the smooth base change theorem [@MilneEC]\*[VI.4]{} and the cohomological description of the zeta function of the reduction, as in Section \[s:cohomology\] of Lecture 0. Thus $\rho$ is pure of weight $w=1$.
The Weil pairing induces an alternating, $G_k$-equivariant pairing $V_\ell E\times V_\ell E\to{{\mathbb{Q}_\ell}}(-1)$ and so $\rho$ is symplectically self-dual of weight 1.
If $E$ is constant, then $\rho_E$ factors through $G_K\to G_k$ and since $G_k$ is abelian, $\rho_E$ is the direct sum of two characters. More precisely, if $E\cong E_0\times_kK$ and $1-aT+qT^2=(1-\alpha_1T)(1-\alpha_2T)$ is the numerator of the $Z$-function of $E_0$, then $\rho_E$ is the sum of the two characters that send $\operatorname{Fr}_v$ to $\alpha_i^{\deg v}$.
If $E$ is non-isotrivial, then $\rho_E$ restricted to $G_{{{\overline{k}}}K}$ has no trivial subrepresentations. One way to see this is to use a slight generalization of the MWLN theorem, according to which $E({{\overline{k}}}K)$ is finitely generated (when $E$ is non-isotrivial). Thus its $\ell$-power torsion is finite and this certainly precludes a trivial subrepresentation in $\rho|_{G_{{{\overline{k}}}K}}$. In fact, by a theorem of Igusa [@Igusa59], the image of $\rho|_{G_{{{\overline{k}}}K}}$ contains an open subgroup of ${\mathrm{SL}}_2({\mathbb{Z}}_\ell)$, so $\rho|_{G_{{{\overline{k}}}K}}$ is certainly irreducible, even absolutely irreducible.
Show that if $E$ is isotrivial but not constant, then $\rho_E$ restricted to $G_{{{\overline{k}}}K}$ has no trivial subrepresentation. Hint: $E$ is a twist of a constant curve $E'=E_0\times_k K$. Relate the action of $G_K$ on the Tate module of $E$ to its action on that of $E'$ and show that there exists an element $\sigma\in G_{{{\overline{k}}}K}$ that acts on $V_\ell E$ via a non-trivial automorphism of $E$. But a non-trivial automorphism has only finitely many fixed points.
We can summarize this discussion as follows.
Let $\rho$ be the action of $G_K$ on the Tate module $V_\ell E$ of $E$. Then $\rho$ is continuous, unramified outside a finite set of places of $K$, and is pure and symplectically self-dual of weight $1$. If $E$ is non-constant, then $\rho|_{G_{{{\overline{k}}}K}}$ has no trivial subrepresentations.
The conductor of $\rho_E$ as defined in the previous section is equal to the conductor of $E$ as mentioned in Section \[s:local-invs\] of Lecture 1. This was proven by Ogg in [@Ogg67].
The $L$-function
----------------
Applying the results of the previous section, we get a very satisfactory analysis of the $L$-function of $E$. Since we know everything about the constant case by an elementary analysis (cf. exercise \[exer:L-const\] of Lecture 1), we restrict to the non-constant case.
Let $E$ be a non-constant elliptic curve over $K=k({\mathcal{C}})$ and let $q$ be the cardinality of $k$. Let ${\mathfrak{n}}$ be the conductor of $E$. Then $L(E,s)$ is a polynomial in $q^{-s}$ of degree $N=4g_{\mathcal{C}}-4+\deg({\mathfrak{n}})$. Its inverse roots are Weil numbers of size $q$ and it satisfies a functional equation $$L(E,2-s)=\pm q^{N(s-1)} L(E,s).$$
Combining the Theorem with Theorem \[thm:BSD1\], we obtain the following.
\[cor:rankbound\] The rank of $E(K)$ is bounded above by $N=4g_{\mathcal{C}}-4+\deg({\mathfrak{n}})$. If equality holds, then $L(E,s)=(1-q^{1-s})^N$.
The sign in the functional equation can be computed as a product of local factors. This can be seen via the connection with automorphic forms (a connection which is outside the scope of these lectures) or, because we are in the function field situation, directly via cohomological techniques. See [@Laumon84] for the latter.
Large analytic ranks in towers
==============================
Statement of the theorem {#ss:analytic-ranks}
------------------------
We give a general context in which one obtains large analytic ranks by passing to layers of a suitable tower of function fields.
As usual, let $p$ be a prime and $q$ a power of $p$. Let $K={{\mathbb{F}_q}}(t)$, for each $d$ not divisible by $p$, set $F_d={{\mathbb{F}_q}}(t^{1/d})\cong{{\mathbb{F}_q}}(u)$, and $K_d={{\mathbb{F}_q}}(\mu_d)(t^{1/d})\cong{{\mathbb{F}_q}}(\mu_d)(u)$.
Suppose that $E$ is an elliptic curve over $K$. Let ${\mathfrak{n}}$ be the conductor of $E$ and let $${\mathfrak{n}}'={\mathfrak{n}}-\dim (V_\ell E/V_\ell E^{I_0})[0]-\dim (V_\ell E/V_\ell E^{I_\infty})[\infty].$$ This is the conductor of $E$ except that we have removed the tame part at $t=0$ and $t=\infty$.
\[thm:analytic-ranks\] Let $E$ be an elliptic curve over $K$ and define ${\mathfrak{n}}'$ as above. Suppose that $\deg{\mathfrak{n}}'$ is odd. Then the analytic rank of $E$ over $F_d$ and $K_d$ is unbounded as $d$ varies. More precisely, there exists a constant $c$ depending only on $E$ such that if $d$ has the form $d=q^n+1$, then $$\operatorname{ord}_{s=1}L(E/F_d,s)\ge \frac{d}{2n}-c=\frac{q^n+1}{2n}-c$$ and $$\operatorname{ord}_{s=1}L(E/K_d,s)\ge d-c=q^n+1-c.$$
This theorem is proven in detail in [@Ulmer07b]\*[§2-4]{}. We will sketch the main lines of the argument below.
A linear algebra lemma
----------------------
Our analytic rank results ultimately come from the following odd-looking result of linear algebra.
\[prop:la\] Let $V$ be a finite-dimensional vector space with subspaces $W_i$ indexed by $i\in{\mathbb{Z}}/a{\mathbb{Z}}$ such that $V=\oplus_{i\in{\mathbb{Z}}/a{\mathbb{Z}}}W_i$. Let $\phi:V\to V$ be an invertible linear transformation such that $\phi(W_i)= W_{i+1}$ for all $i\in{\mathbb{Z}}/a{\mathbb{Z}}$. Suppose that $V$ admits a non-degenerate, $\phi$-invariant symmetric bilinear form ${\langle},{\rangle}$. Suppose that $a$ is even and ${\langle},{\rangle}$ induces an isomorphism $W_{a/2}\cong W_0^*$, the dual vector space of $W_0$. Suppose also that $N=\dim W_0$ is odd. Then the polynomial $1-T^{a}$ divides $\det(1-\phi T|V)$.
We omit the proof of this proposition, since it is not hard and it appears in two forms in the literature already. Namely, embedded in [@Ulmer05]\*[7.1.11ff]{} is a matrix-language proof of the proposition, and a coordinate-free proof is given in [@Ulmer07b]\*[§2]{}.
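A small numerical sketch may help fix ideas. We take the simplest case $a=2$ and $N=3$ (odd), over the real numbers rather than the $\ell$-adic setting, with a random invertible $A$ standing in for $\phi|_{W_0}$; $\phi$-invariance of the form then forces $\phi|_{W_1}=(A^T)^{-1}$, and the conclusion $1-T^2\mid\det(1-\phi T|V)$ means $\pm1$ are both eigenvalues of $\phi$:

```python
# Numerical illustration of the linear algebra proposition with a = 2, N = 3:
# V = W_0 + W_1, phi swaps the summands, and the symmetric form G pairs
# W_1 with the dual of W_0. Then both +1 and -1 are eigenvalues of phi.
import numpy as np

rng = np.random.default_rng(0)
N = 3                                  # dim W_0, odd
A = rng.standard_normal((N, N))        # phi restricted to W_0 -> W_1
B = np.linalg.inv(A.T)                 # phi on W_1 -> W_0, forced by invariance
phi = np.block([[np.zeros((N, N)), B], [A, np.zeros((N, N))]])
G = np.block([[np.zeros((N, N)), np.eye(N)], [np.eye(N), np.zeros((N, N))]])
assert np.allclose(phi.T @ G @ phi, G)        # the form is phi-invariant
# 1 - T^2 divides det(1 - phi*T): phi has eigenvalues +1 and -1.
assert abs(np.linalg.det(np.eye(2 * N) - phi)) < 1e-6
assert abs(np.linalg.det(np.eye(2 * N) + phi)) < 1e-6
```

With $N$ even the two determinants are generically non-zero, which shows the parity hypothesis is not an artifact of the proof.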
Sketch of the proof of Theorem \[thm:analytic-ranks\]
-----------------------------------------------------
For simplicity, we assume that $E$ is non-isotrivial. (If $p>3$ and $E$ is isotrivial, then the theorem is vacuous because all of the local conductor exponents $n_v$ are even.) Let $\rho$ be the representation of $G_K$ on $V=H^1(\overline{E},{{\mathbb{Q}_\ell}})=(V_\ell E)^*$ and let $\rho_d$ be the restriction of $\rho$ to $G_{F_d}$. Then by Grothendieck’s analysis, we have $$L(E/F_d,s)=\det\left(1-\operatorname{Fr}_qq^{-s}|H(\rho_d)\right).$$ Here $H(\rho_d)$ is an $H^1$ on the rational curve over ${{\overline{\mathbb{F}}_q}}$ whose function field is ${{\overline{\mathbb{F}}_q}}(u)={{\overline{\mathbb{F}}_q}}(t^{1/d})$.
The projection formula in cohomology (a parallel of the Artin formalism \[exer:Artin\]) implies that $$H(\rho_d)\cong H(\operatorname{Ind}_{G_{F_d}}^{G_K}\rho)\cong
H(\rho{\otimes}\operatorname{Ind}_{G_{F_d}}^{G_K}{\bf 1})$$ where $\bf 1$ denotes the trivial representation. Since the cohomology $H$ is computed on $\overline{{\mathbb{P}}}^1_u$ (the ${\mathbb{P}}^1$ with coordinate $u$, with scalars extended to ${{\overline{\mathbb{F}}_q}}$) and $\overline{{\mathbb{P}}}^1_u\to\overline{{\mathbb{P}}}^1_t$ is Galois with group $\mu_d$, we have $$H(\rho_d)\cong\bigoplus_{j=0}^{d-1}H(\rho{\otimes}\chi^j)$$ where $\chi$ is a character of $\operatorname{Gal}({{\overline{\mathbb{F}}_q}}(u)/{{\overline{\mathbb{F}}_q}}(t))$ of order exactly $d$.
Now the decomposition displayed above is not preserved by Frobenius. Indeed $\operatorname{Fr}_q$ sends $H(\rho{\otimes}\chi^j)$ to $H(\rho{\otimes}\chi^{qj})$. Thus we let $o\subset{\mathbb{Z}}/d{\mathbb{Z}}$ denote an orbit for multiplication by $q$ and we regroup: $$H(\rho_d)\cong\bigoplus_{o\subset{\mathbb{Z}}/d{\mathbb{Z}}}
\left(\bigoplus_{j\in o}H(\rho{\otimes}\chi^j)\right).$$
We write $V_o$ for the summand indexed by an orbit $o\subset{\mathbb{Z}}/d{\mathbb{Z}}$ in the last display and $a_o$ for the cardinality of $o$. As we will see presently, the hypotheses of the theorem imply that Proposition \[prop:la\] applies to most of the $V_o$ and for each one where it does, we get a zero of the $L$-function. Before we do that, there is one small technical point to take care of: The linear algebra proposition requires that $V$ be literally self-dual (not self-dual with a weight) and it implies that $1$ is an eigenvalue of $\phi$ on $V$. To get the eigenvalue $q$ that we need, we should twist $\rho$ by $-1/2$ (which is legitimate once we have fixed a choice of a square root of $q$) so that it has weight 0, apply the lemma, and twist back to get the desired zero. We leave the details of these points to the reader.
Assuming we have made the twist just mentioned, we need to check which $V_o$ are self-dual. Since $\rho$ is self-dual, Poincaré duality gives a non-degenerate pairing on $H(\rho_d)$ which puts $H(\rho{\otimes}\chi^j)$ in duality with $H(\rho{\otimes}\chi^{-j})$. Thus if $d=q^n+1$ for some $n>0$, then all of the orbits $o$ will yield a self-dual $V_o$. Possibly two of these orbits have odd order (those through $0$ and $d/2$, which have order $1$) and all of the others have $a_o$ even. Moreover, for the orbits of even order, setting $W_{o,i}=H(\rho{\otimes}\chi^{q^ij_o})$ for some fixed $j_o\in
o$, we have $$V_o\cong\bigoplus_{i=0}^{a_o-1}W_{o,i}$$ with $W_{o,i}$ and $W_{o,i+a_o/2}$ in duality.
The last point that we need is that $W_{o,i}$ should be odd-dimensional. The hypothesis on ${\mathfrak{n}}'$ implies that for all characters $\chi^j$ of sufficiently high order (depending only on $E$), the conductor of $\rho{\otimes}\chi^j$ is odd. The Grothendieck-Ogg-Shafarevich dimension formula (mentioned at the end of the proof of Theorem \[thm:GA\]) then implies that for all orbits $o$ consisting of characters of high order, $H(\rho{\otimes}\chi^{j_o})$ has odd dimension.
The linear algebra proposition \[prop:la\] now implies that for $d=q^n+1$ and for most orbits $o\subset{\mathbb{Z}}/d{\mathbb{Z}}$, $1$ is an eigenvalue of $\operatorname{Fr}_q$ on $V_o$ (and $q$ is an eigenvalue of $\operatorname{Fr}_q$ on the corresponding factor of $H(\rho_d)$). Since each of these orbits has size $\le 2n$, there are at least $d/2n$ orbits, and all but a bounded number of them are “good.” Thus $$\operatorname{ord}_{s=1}L(E/F_d,s)\ge\frac{d}{2n}-c$$ for a constant $c$ depending only on $E$.
To get the assertions over $K_d$, note that in passing from $F_d$ to $K_d$, each factor $(1-q^{a_o}T^{a_o})$ of $L(E/F_d,T)$ becomes $(1-qT)^{a_o}$ and so $$\operatorname{ord}_{s=1}L(E/K_d,s)\ge d-c$$ for another constant $c$ depending only on $E$.
This completes our discussion of Theorem \[thm:analytic-ranks\]. We refer to [@Ulmer07b]\*[§2-4]{} for more details.
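The orbit combinatorics underlying the count above are easy to check directly. The following sketch takes $q=3$, $n=2$ (so $d=q^n+1=10$), computes the orbits of multiplication by $q$ on ${\mathbb{Z}}/d{\mathbb{Z}}$, and verifies that each orbit has size at most $2n$, is stable under negation (since $q^n\equiv-1\bmod d$, which is what makes each $V_o$ self-dual), and that at most two orbits have odd order:

```python
# Orbits of multiplication by q on Z/dZ for d = q^n + 1.
def orbits(q, d):
    seen, out = set(), []
    for j in range(d):
        if j in seen:
            continue
        o, x = [], j
        while x not in seen:
            seen.add(x)
            o.append(x)
            x = (x * q) % d
        out.append(o)
    return out

q, n = 3, 2
d = q**n + 1                       # d = 10
orbs = orbits(q, d)                # [[0], [1,3,9,7], [2,6,8,4], [5]]
assert all(len(o) <= 2 * n for o in orbs)
assert all({(-x) % d for x in o} == set(o) for o in orbs)  # self-dual orbits
assert sum(1 for o in orbs if len(o) % 2 == 1) <= 2        # only 0 and d/2
assert len(orbs) >= d // (2 * n)
```

Here the two odd-order orbits are exactly $\{0\}$ and $\{d/2\}$, as in the proof sketch.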
Examples {#examples-2}
--------
It is easy to see that the hypotheses in Theorem \[thm:analytic-ranks\] are not very restrictive and that high analytic ranks are in a sense ubiquitous. The following rephrasing of the condition in the theorem should make this clear.
Prove that if $p>3$ and $E$ is an elliptic curve over $K$, then Theorem \[thm:analytic-ranks\] guarantees that $E$ has unbounded analytic rank in the tower $F_d$ if the number of geometric points of ${\mathbb{P}}^1_{{{\mathbb{F}_q}}}$ over which $E$ has multiplicative reduction is odd.
\[cor:a-r-examples\] Let $p$ be any prime number, $K={{\mathbb{F}_p}}(t)$, and let $E$ be one of the curves $E_7$, $E_8$, or $E_9$ defined in Subsection \[ss:examples\] of Lecture 1. Then $$\operatorname{ord}_{s=1}L(E/{{\mathbb{F}_p}}(t^{1/d}),s)$$ is unbounded as $d$ varies through integers prime to $p$.
If $p>3$, then one sees immediately by considering the discriminant and $j$-invariant that $E$ has one finite, non-zero place of multiplicative reduction and is tame at 0 and $\infty$, thus it satisfies the hypotheses of Theorem \[thm:analytic-ranks\]. If $p=2$ or 3, one checks using Tate’s algorithm that $E$ has good reduction at all finite non-zero places and is tame at zero, but the wild part of the conductor at $\infty$ is odd and so the theorem again applies.
For another example, take the Legendre curve $$y^2=x(x-1)(x-t)$$ over ${{\mathbb{F}_p}}(t)$, $p>2$. It is tame at 0 and $\infty$ and has exactly one finite, non-zero place of multiplicative reduction, so Theorem \[thm:analytic-ranks\] applies.
Large algebraic ranks
=====================
Examples via the four-monomial theorem
--------------------------------------
Noting that the curves $E_7$, $E_8$, and $E_9$ are defined by equations involving exactly four monomials, we get a very nice result on algebraic ranks.
Let $p$ be any prime number, $K={{\mathbb{F}_p}}(t)$, and let $E$ be one of the curves $E_7$, $E_8$, or $E_9$ defined in Subsection \[ss:examples\] of Lecture 1. Then for all $d$ prime to $p$ and all powers $q$ of $p$, the Birch and Swinnerton-Dyer conjecture holds for $E$ over $K_d={{\mathbb{F}_q}}(t^{1/d})$. Moreover, the rank of $E({{\mathbb{F}_p}}(t^{1/d}))$ is unbounded as $d$ varies.
This follows immediately from Corollary \[cor:a-r-examples\] and Theorem \[thm:4-monos\] of Lecture 1 as soon as we note that $E/K_d$ is defined by an equation satisfying Shioda’s conditions.
Similar ideas can be used to show that for every prime $p$ and every genus $g>0$, there is an explicit hyperelliptic curve $C$ over ${{\mathbb{F}_p}}(t)$ such that the Jacobian of $C$ satisfies BSD over ${{\mathbb{F}_q}}(t^{1/d})$ for all $q$ and $d$ and has unbounded rank in the tower ${{\mathbb{F}_p}}(t^{1/d})$. This is the main theorem of [@Ulmer07b].
Examples via Berger’s construction
----------------------------------
As we pointed out in Lecture 3, the Shioda 4-monomial construction is rigid—varying the coefficients does not lead to families that vary geometrically. Berger’s thesis developed a new construction with parameters that leads to families of curves for which the BSD conjecture holds in a tower of fields. This together with the analytic ranks result \[thm:analytic-ranks\] gives examples of families of elliptic curves with unbounded ranks.
To make this concrete, we quote the first example with parameters from [@Berger08] that, together with the analytic rank construction \[thm:analytic-ranks\], gives rise to unbounded analytic and algebraic ranks.
Let $k={{\mathbb{F}_q}}$ be a finite field of characteristic $p$ and let $a\in{{\mathbb{F}_q}}$ with $a\neq0,1,2$. Let $E$ be the elliptic curve over $K={{\mathbb{F}_q}}(t)$ defined by $$y^2+a(t-1)xy+a(t^2-t)y=x^3+(2a+1)tx^2+a(a+2)t^2x+a^2t^3.$$ Then for all $d$ prime to $p$ the BSD conjecture holds for $E$ over ${{\mathbb{F}_q}}(t^{1/d})$. Moreover, for every $q$ and $a$ as above, the rank of $E({{\mathbb{F}_q}}(t^{1/d}))$ is unbounded as $d$ varies.
This is an instance of Berger’s construction (Theorem \[thm:Berger\] of Lecture 3). Indeed, let $f(x)=x(x-a)/(x-1)$ and $g(y)=y(y-a)/(y-1)$. Then $V(f-tg)\subset{\mathbb{P}}^1_K\times{\mathbb{P}}^1_K$ is birational to $E$, which is a smooth elliptic curve for all $a\neq0,1$. Berger’s Theorem \[thm:Berger\] of Lecture 3 shows that $E$ satisfies BSD over the fields ${{\mathbb{F}_q}}(t^{1/d})$.
The discriminant of $E$ is $$\Delta=a^2(a-1)^4t^4(t-1)^2\left(a^2t^2-(2a^2-16a+16)t+a^2\right).$$ Assume first that $p>3$. One checks that $\Delta$ is relatively prime to $c_4$ so that the zeroes of $\Delta$ are places of multiplicative reduction. Since the discriminant (in $t$) of the quadratic factor $a^2t^2-(2a^2-16a+16)t+a^2$ is $-64(a-1)(a-2)^2$ we see that there are three finite, non-zero geometric points of multiplicative reduction. Since $p>3$, the reduction at 0 and $\infty$ is tame and so ${\mathfrak{n}}'$ (defined as in Subsection \[ss:analytic-ranks\] of Lecture 4) has degree 3. Thus by Theorem \[thm:analytic-ranks\] of Lecture 4, $E$ has unbounded analytic ranks in the tower ${{\mathbb{F}_q}}(t^{1/d})$ and thus also unbounded algebraic ranks by the previous paragraph on BSD.
If $p=2$ or 3, one needs to use Tate’s algorithm to compute ${\mathfrak{n}}'$, which again turns out to have degree 3. We leave the details of this computation as a pleasant exercise for the reader.
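The displayed discriminant can be verified symbolically from the standard $b$-invariant formulas for a general Weierstrass equation; the following sketch does the check with sympy:

```python
# Verify the discriminant of Berger's curve
#   y^2 + a(t-1)xy + a(t^2-t)y = x^3 + (2a+1)tx^2 + a(a+2)t^2x + a^2t^3
# using the standard b-invariants of a Weierstrass equation.
import sympy as sp

a, t = sp.symbols('a t')
a1, a2, a3 = a*(t - 1), (2*a + 1)*t, a*(t**2 - t)
a4, a6 = a*(a + 2)*t**2, a**2*t**3
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
disc = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
claimed = a**2*(a - 1)**4*t**4*(t - 1)**2 \
    * (a**2*t**2 - (2*a**2 - 16*a + 16)*t + a**2)
assert sp.expand(disc - claimed) == 0
```

The same script, with the $c_4$-invariant added, confirms that $\Delta$ and $c_4$ have no common zeroes for $p>3$, as used above.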
In the last part of Lecture 4, we chose special curves $E$ and used a domination ${\mathcal{C}}\times{\mathcal{D}}{{\dashrightarrow}}{\mathcal{E}}$ of the associated surface to deduce the Tate conjecture for ${\mathcal{E}}$ and thus the BSD conjecture for $E$. This yields an [*a priori*]{} equality of analytic and algebraic ranks. We then used other, cohomological, methods (namely the analytic ranks theorem) to compute the analytic rank.
It turns out to be possible to use domination by a product of curves and geometry to prove directly results about algebraic ranks and explicit points. We sketch some of these applications in this lecture.
More on Berger’s construction {#s:moreBerger}
=============================
Let $k$ be a field (not necessarily finite), $K=k(t)$, and $K_d=k(t^{1/d})=k(u)$. Recall that in Berger’s construction we start with rational curves ${\mathcal{C}}={\mathbb{P}}^1_k$ and ${\mathcal{D}}={\mathbb{P}}^1_k$ and rational functions $f(x)$ on ${\mathcal{C}}$ and $g(y)$ on ${\mathcal{D}}$. We get a curve in ${\mathbb{P}}^1_K\times{\mathbb{P}}^1_K$ defined by $f(x)-tg(y)=0$ and we let $E$ be the smooth proper model over $K$ of this curve. (Some hypotheses are required for this to exist, but they are weaker than our standing hypotheses below.) The genus of $E$ was computed by Berger in [@Berger08]\*[Theorem 3.1]{}. All the examples we consider will be of genus 1 and will have a $K$-rational point.
We establish more notation to state a precise result. Let us assume for simplicity all the zeroes and poles of $f$ and $g$ are $k$-rational. Write $$\label{eq:divs}
\operatorname{div}(f)=\sum_{i=1}^k a_iP_i-\sum_{i'=1}^{k'} a'_{i'}P'_{i'}
\quad\text{and}\quad \operatorname{div}(g)=\sum_{j=1}^\ell
b_jQ_j-\sum_{j'=1}^{\ell'} b'_{j'}Q'_{j'}$$ with $a_i,a'_{i'},b_j,b'_{j'}$ positive integers and $P_i$, $P'_{i'}$, $Q_j$, and $Q'_{j'}$ distinct $k$-rational points. Let $$m=\sum_{i=1}^ka_i=\sum_{i'=1}^{k'} a'_{i'}
\quad\text{and}\quad
n=\sum_{j=1}^\ell b_j=\sum_{j'=1}^{\ell'} b'_{j'}.$$
As standing hypotheses, we assume that: (i) all the multiplicities $a_i$, $a'_{i'}$, $b_j$, and $b'_{j'}$ are prime to the characteristic of $k$; and (ii) $\gcd(a_1,\dots,a_k,a'_1,\dots,a'_{k'})=\gcd(b_1,\dots,b_\ell,b'_1,\dots,b'_{\ell'})=1$.
Under these hypotheses, Berger computes that the genus of $E$ is $$\label{eq:genus}
g_E=(m -1)(n-1)-\sum_{i,j}\delta(a_i,b_j)-\sum_{i',j'}\delta(a'_{i'},b'_{j'})$$ where $\delta(a,b)=(ab-a-b+\gcd(a,b))/2$.
From now on we assume that we have chosen the data $f$ and $g$ so that $E$ has genus 1. Two typical cases are where $f$ and $g$ are quadratic rational functions with simple zeroes and poles, or where $f$ and $g$ are cubic polynomials. There is always a $K$-rational point on $E$; for example, we may take a point where $x$ and $y$ are zeroes of $f$ and $g$.
Let ${\mathcal{E}}_d\to{\mathbb{P}}^1$ be the elliptic surface over $k$ attached to $E/K_d$. It is clear that ${\mathcal{E}}_d$ is birational to the closed subset of ${\mathbb{P}}^1_k\times{\mathbb{P}}^1_k\times{\mathbb{P}}^1_k$ (with coordinates $x,y,u$) defined by the vanishing of $f(x)-u^dg(y)$. We saw in Section \[s:Berger\] of Lecture 3 that ${\mathcal{E}}$ is dominated by a product of curves and we would now like to make this more precise.
Recall that we defined covers ${\mathcal{C}}_d\to{\mathcal{C}}={\mathbb{P}}^1$ and ${\mathcal{D}}_d\to{\mathcal{D}}={\mathbb{P}}^1$ by the equations $z^d=f(x)$ and $w^d=g(y)$. Note that there is an action of $\mu_d$, the $d$-th roots of unity, on ${\mathcal{C}}_d$ and on ${\mathcal{D}}_d$.
\[prop:quotient\] The surface ${\mathcal{E}}_d$ is birationally isomorphic to the quotient surface $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$ where $\mu_d$ acts diagonally.
We have already noted that ${\mathcal{E}}_d$ is birational to the zero set ${\mathcal{X}}$ of $f(x)-u^dg(y)$ in ${\mathbb{P}}^1_k\times{\mathbb{P}}^1_k\times{\mathbb{P}}^1_k$. Define a rational map from ${\mathcal{C}}_d\times{\mathcal{D}}_d$ to ${\mathcal{X}}$ by sending $(x,z,y,w)$ to $(x,y,u=z/w)$. It is clear that this map factors through the quotient $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$. Since the map is generically of degree $d$, it induces a birational isomorphism between $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$ and ${\mathcal{X}}$. Thus $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$ is birationally isomorphic to ${\mathcal{E}}_d$.
In the next section we will explain how this birational isomorphism can be used to compute the Néron-Severi group of ${\mathcal{E}}_d$ and the Mordell-Weil group $E(K_d)$.
A rank formula {#s:rankformula}
==============
We keep the notation and hypotheses of the preceding subsection. Consider the base ${\mathbb{P}}^1_k$, the one corresponding to $K$, with coordinate $t$. For each geometric point $x$ of this ${\mathbb{P}}^1_k$, let $f_x$ be the number of components in the fiber of ${\mathcal{E}}\to{\mathbb{P}}^1$ over $x$. For almost all $x$, $f_x=1$ and its value at any point can be computed using Tate’s algorithm.
Define two constants $c_1$ and $c_2$ by the formulae $$c_1=\sum_{x\neq0,\infty}(f_x-1)$$ and $$c_2=(k-1)(\ell-1)+(k'-1)(\ell'-1).$$ Here the sum is over geometric points of ${\mathbb{P}}^1_k$ except $t=0$ and $t=\infty$ and $k$, $k'$, $\ell$, and $\ell'$ are the numbers of distinct zeroes and poles of $f$ and $g$ (cf. equation (\[eq:divs\])). Note that $c_1$ and $c_2$ depend only on the data defining $E/K$, not on $d$.
\[thm:rank-formula\] Suppose that $k$ is algebraically closed and that $d$ is relatively prime to all of the multiplicities $a_i$, $a'_{i'}$, $b_j$, and $b'_{j'}$ and to the characteristic of $k$. Then we have $$\operatorname{Rank}E(K_d)=\operatorname{Rank}\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}-c_1d+c_2.$$ Here $\operatorname{Hom}(\cdots)^{\mu_d}$ signifies the homomorphisms commuting with the actions of $\mu_d$ on the two Jacobians induced by its action on the curves.
In brief, we use the birational isomorphism $$({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d{{\dashrightarrow}}{\mathcal{E}}_d$$ to compute the rank of the Néron-Severi group of ${\mathcal{E}}_d$ and then use the Shioda-Tate formula to compute the rank of $E(K_d)$.
More precisely, we saw in Lecture 2, Subsection \[ss:products\] that the Néron-Severi group of the product ${\mathcal{C}}_d\times{\mathcal{D}}_d$ is isomorphic to ${\mathbb{Z}}^2\times\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})$. It follows easily that the Néron-Severi group of the quotient $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$ is isomorphic to ${\mathbb{Z}}^2\times\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}$.
One then keeps careful track of the blow-ups needed to pass from $({\mathcal{C}}_d\times{\mathcal{D}}_d)/\mu_d$ to ${\mathcal{E}}_d$. The effect of blow-ups on Néron-Severi is quite simple and was noted in Subsection \[ss:blow-ups\] of Lecture 2. This is the main source of the term $c_2$ in the formula.
Finally, one computes the rank of $E(K_d)$ using the Shioda-Tate formula, as in Section \[s:Shioda-Tate\] of Lecture 3. This step is the main source of the term $c_1d$.
The hypothesis that $k$ is algebraically closed is not essential for any of the above, but it avoids rationality questions that would greatly complicate the formula.
For full details on the proof of this theorem (in a more general context) see [@UlmerDPCT]\*[Section 6]{}.
First examples {#s:firstexample}
==============
One of the first examples is already quite interesting. We give a brief sketch and refer to [@UlmerDPCT] for more details.
With notation as in Section \[s:moreBerger\], we take $f(x)=x(x-1)$ and $g(y)=y^2/(1-y)$. The genus formula (\[eq:genus\]) shows that $E$ has genus 1. In fact, the change of coordinates $x=-y/(x+t)$, $y=-x/t$ brings it into the Weierstrass form $$y^2+xy+ty=x^3+tx^2.$$
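Since the change of coordinates reuses the letters $x,y$, it is worth making explicit: writing $X,Y$ for the Weierstrass coordinates, the substitution is $x=-Y/(X+t)$, $y=-X/t$, and a direct computation gives $f(x)-tg(y)=(Y^2+XY+tY-X^3-tX^2)/(X+t)^2$. This is easily checked by machine:

```python
# Check that the coordinate change x = -Y/(X+t), y = -X/t transforms
# f(x) - t*g(y) = 0 into the Weierstrass form y^2 + xy + ty = x^3 + tx^2.
import sympy as sp

X, Y, t = sp.symbols('X Y t')
x_old = -Y/(X + t)               # old coordinates in terms of X, Y
y_old = -X/t
f = x_old*(x_old - 1)            # f(x) = x(x-1)
g = y_old**2/(1 - y_old)         # g(y) = y^2/(1-y)
W = Y**2 + X*Y + t*Y - X**3 - t*X**2
# f - t*g equals W/(X+t)^2, so the curve f - t*g = 0 maps to W = 0.
assert sp.cancel((f - t*g)*(X + t)**2 - W) == 0
```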
We remark in passing that if the characteristic of $k$ is not $2$, $E$ has multiplicative reduction at $t=1/16$ and good reduction elsewhere away from $0$ and $\infty$. Thus by the analytic rank result of Lecture 4, when $k$ is finite, say $k={{\mathbb{F}_p}}$ and $p>3$, we expect $E$ to have unbounded analytic rank in the tower ${{\mathbb{F}_p}}(t^{1/d})$. (In fact a more careful analysis gives the same conclusion for every $p$.)
Now assume that $k$ is algebraically closed. To compute the constant $c_1$, one checks that (for $k$ of any characteristic) the fiber of ${\mathcal{E}}\to{\mathbb{P}}^1$ over each geometric point of ${\mathbb{P}}^1_k$ is irreducible. Thus $c_1=0$. It is immediate from the definition that $c_2=0$. Thus our rank formula yields $$\operatorname{Rank}E(K_d)=\operatorname{Rank}\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}.$$
Next we note that there is an isomorphism $\phi:{\mathcal{C}}_d\to{\mathcal{D}}_d$ sending $(x,z)$ to $(y=1/x,w=1/z)$. This isomorphism [*anti-commutes*]{} with the $\mu_d$ action: Let $\zeta_d$ be a primitive $d$-th root of unity and write $[\zeta_d]$ for its action on curves or Jacobians. Then $\phi{\circ}[\zeta_d]=[\zeta_d^{-1}]{\circ}\phi$. Using $\phi$ to identify ${\mathcal{C}}_d$ and ${\mathcal{D}}_d$, our rank formula becomes $$\operatorname{Rank}E(K_d)=\operatorname{Rank}\operatorname{End}(J_{{\mathcal{C}}_d})^{anti-\mu_d}$$ where “$\operatorname{End}(\cdots)^{anti-\mu_d}$” denotes those endomorphisms anti-commuting with $\mu_d$ in the sense above.
Suppose that $k$ has characteristic zero. Then a consideration of the (faithful) action of $\operatorname{End}(J_{{\mathcal{C}}_d})$ on the differentials $H^0(J_{{\mathcal{C}}_d},\Omega^1)$ shows that $\operatorname{End}(J_{{\mathcal{C}}_d})^{anti-\mu_d}=0$ for all $d$ (see [@UlmerDPCT]\*[7.6]{}). We conclude that for $k$ of characteristic zero, the rank of $E(K_d)$ is zero for all $d$.
Now assume that $k$ has characteristic $p$ (and is algebraically closed). If we take $d$ of the form $p^f+1$ then we get many elements of $\operatorname{End}(J_{{\mathcal{C}}_d})^{anti-\mu_d}$. Namely, we consider the Frobenius $\operatorname{Fr}_{p^f}$ and compute that $$\operatorname{Fr}_{p^f}{\circ}[\zeta_d]=[\zeta_d^{p^f}]{\circ}\operatorname{Fr}_{p^f}=
[\zeta_d^{-1}]{\circ}\operatorname{Fr}_{p^f}.$$ The same computation shows that $\operatorname{Fr}_{p^f}{\circ}[\zeta_d^i]$ anticommutes with $\mu_d$ for all $i$. It turns out that there are two relations among these endomorphisms in $\operatorname{End}(J_{{\mathcal{C}}_d})$ if $p>2$ and just one relation if $p=2$ (see [@UlmerDPCT]\*[7.8-7.10]{}). Thus we find that, for $d$ of the special form $d=p^f+1$, $$\operatorname{Rank}E({{\overline{\mathbb{F}}_p}}(t^{1/d}))=\begin{cases}
d-2&\text{if $p>2$}\\
d-1&\text{if $p=2$.}
\end{cases}$$ The reader may enjoy checking that this is in exact agreement with what the analytic rank result (Theorem \[thm:analytic-ranks\] of Lecture 4) predicts.
Somewhat surprisingly, there are [*more*]{} values of $d$ for which we get high ranks. A natural question is to identify all pairs $(p,d)$ such that $E({{\overline{\mathbb{F}}_p}}(t^{1/d}))$ has “new” rank, i.e., points of infinite order not coming from smaller values of $d$. The exact set of pairs $(p,d)$ for which we get high rank is mysterious. There are “systematic” cases (such as $(p,p^f+1)$, as above, or $(p,2(p-1))$) and other cases that may be sporadic. This is the subject of ongoing research so we will not go into more detail, except to note that the example in Section \[s:2ndexample\] below is relevant to this question.
Explicit points {#s:explicitpoints}
===============
The main ingredients in the rank formula of Section \[s:rankformula\] are the calculation of the Néron-Severi group of a product of curves in terms of homomorphisms of Jacobians and the Shioda-Tate formula. Tracing through the proof leads to a homomorphism $$\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}\cong\operatorname{DivCorr}({\mathcal{C}}_d,{\mathcal{D}}_d)
\to L^1\operatorname{NS}({\mathcal{E}}_d)\to
\frac{L^1\operatorname{NS}({\mathcal{E}}_d)}{L^2\operatorname{NS}({\mathcal{E}}_d)}\cong E(K_d).$$
For elements of $\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}$ where we can find an explicit representation in $\operatorname{DivCorr}({\mathcal{C}}_d,{\mathcal{D}}_d)$, the geometry of Berger’s construction leads to explicit points in $E(K_d)$. This applies notably to the endomorphisms $\operatorname{Fr}_{p^f}{\circ}[\zeta_d^i]$ appearing in the analysis of the first example above. Indeed, these endomorphisms are represented in $\operatorname{DivCorr}({\mathcal{C}}_d,{\mathcal{D}}_d)$ by the graphs of Frobenius composed with the automorphisms $[\zeta_d^i]$ of ${\mathcal{C}}_d$.
Tracing through the geometry leads to remarkable explicit expressions for points in $E(K_d)$. The details of the calculation are presented in [@UlmerDPCT]\*[§8]{} so we will just state the results here, and only in the case $p>2$.
Let $p>2$, $k={{\overline{\mathbb{F}}_p}}$ and $K=k(t)$. Let $E$ be the elliptic curve $$y^2+xy+ty=x^3+tx^2$$ over $K$. Let $q=p^f$, $d=q+1$, $K_d=k(t^{1/d})$, and $$P(u)=\left(\frac{u^q(u^q-u)}{(1+4u)^q},
\frac{u^{2q}(1+2u+2u^q)}{2(1+4u)^{(3q-1)/2}}
-\frac{u^{2q}}{2(1+4u)^{q-1}}\right).$$ Then the points $P_i=P(\zeta_d^it^{1/d})$ for $i=0,\dots,d-1$ lie in $E(K_d)$ and they generate a finite index subgroup of $E(K_d)$, which has rank $d-2$. The relations among them are that $\sum_{i=0}^{d-1}P_i$ and $\sum_{i=0}^{d-1}(-1)^iP_i$ are torsion.
It is elementary to check that the points lie in $E(K_d)$. To check their independence and the relations by elementary means, one may compute the height pairing on the lattice they generate. It turns out to be a scaling of the direct sum of two copies of the $A_{(d-2)/2}^*$ lattice. Since we know from the previous section that $E(K_d)$ has rank $d-2$, the explicit points generate a subgroup of finite index. As another check that they have finite index, we could compute the conductor of $E$—it turns out to have degree $d+2$—and apply Corollary \[cor:rankbound\] of Lecture 4. All this is explained in detail in [@UlmerDPCT]\*[§8]{}.
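The membership claim can also be verified by machine. The following sketch treats the smallest case $p=q=3$ (so $d=4$): we replace the coefficient $1/2$ by its inverse $2$ modulo $3$ so that everything has integer coefficients, and check that the numerator of the Weierstrass relation vanishes identically modulo $3$:

```python
# Verify that P(u) lies on y^2 + xy + ty = x^3 + tx^2 over F_3(u),
# with t = u^d, q = 3, d = q + 1 = 4.
import sympy as sp

u = sp.symbols('u')
q = 3                      # q = p^f with p = 3, f = 1
d = q + 1
t = u**d
inv2 = 2                   # the inverse of 2 in F_3
X = u**q*(u**q - u)/(1 + 4*u)**q
Y = inv2*u**(2*q)*(1 + 2*u + 2*u**q)/(1 + 4*u)**((3*q - 1)//2) \
    - inv2*u**(2*q)/(1 + 4*u)**(q - 1)
expr = Y**2 + X*Y + t*Y - X**3 - t*X**2
num, den = sp.fraction(sp.together(expr))
# The relation holds in characteristic 3: every coefficient of the
# numerator is divisible by 3 (it does NOT vanish over Q).
assert all(c % 3 == 0 for c in sp.Poly(sp.expand(num), u).all_coeffs())
```

In fact, working modulo $3$ by hand one finds the tidy closed forms $x=u^4(u-1)/(u+1)^2$ and $y=u^8/(u+1)^3$, which make the verification a two-line computation.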
Another example {#s:2ndexample}
===============
We keep the notation and hypotheses of Sections \[s:moreBerger\] and \[s:rankformula\]. For another example, assume that $k={{\overline{\mathbb{F}}_p}}$ with $p>2$. Let $f(x)=x/(x^2-1)$ and $g(y)=y(y-1)$. The curve $f(x)-tg(y)=0$ has genus 1 and the change of coordinates $x=(x'+t)/(x'-t)$, $y=-y'/2tx'$ brings it into the Weierstrass form $$y^{\prime2}+2tx'y'=x^{\prime3}-t^2x'.$$ This curve, call it $E$, has multiplicative reduction of type $I_1$ at the places dividing $t^2+4$, good reduction at other finite, non-zero places, and tame reduction at $t=0$ and $t=\infty$. We find that the constants $c_1$ and $c_2$ are both zero and that $$\operatorname{Rank}E({{\overline{\mathbb{F}}_p}}(t^{1/d}))=\operatorname{Rank}\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}.$$
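The reduction claims can be read off from the discriminant, which for this model works out to $\Delta=16t^6(t^2+4)$; here is a quick symbolic check via the standard $b$-invariants:

```python
# Discriminant of y'^2 + 2tx'y' = x'^3 - t^2 x' via the b-invariants.
import sympy as sp

t = sp.symbols('t')
a1, a2, a3, a4, a6 = 2*t, 0, 0, -t**2, 0
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
disc = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
# Delta = 16 t^6 (t^2 + 4): bad reduction at t = 0 and at the places
# dividing t^2 + 4, where the text asserts multiplicative type I_1.
assert sp.expand(disc - 16*t**6*(t**2 + 4)) == 0
```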
Recall that the curves ${\mathcal{C}}_d$ and ${\mathcal{D}}_d$ are defined by the equations $$z^d=f(x)=\frac x{x^2-1}\quad\text{and}\quad w^d=g(y)=y(y-1).$$ Consider the morphism $\phi:{\mathcal{C}}_d\to{\mathcal{D}}_d$ defined by $\phi^*(y)=1/(1-x^2)$ and $\phi^*(w)=z^2$. It is obviously not constant and so induces a surjective homomorphism $\phi_*:J_{{\mathcal{C}}_d}\to J_{{\mathcal{D}}_d}$.
The homomorphism $\phi_*$ clearly does not commute with the action of $\mu_d$. Indeed, if $\zeta_d$ denotes a primitive $d$-th root of unity and $[\zeta_d]$ its action on one of the Jacobians, we have $\phi_*{\circ}[\zeta_d]=[\zeta_d^2]{\circ}\phi_*$. (This formula already holds at the level of the curves ${\mathcal{C}}_d$ and ${\mathcal{D}}_d$.)
Now let us assume that $d$ has the form $d=2p^f-1$ and consider the map $\phi{\circ}\operatorname{Fr}_{p^f}:{\mathcal{C}}_d\to{\mathcal{D}}_d$. Then we find that $$(\phi{\circ}\operatorname{Fr}_{p^f})_*{\circ}[\zeta_d]=
[\zeta_d^{2p^f}]{\circ}(\phi{\circ}\operatorname{Fr}_{p^f})_*=
[\zeta_d]{\circ}(\phi{\circ}\operatorname{Fr}_{p^f})_*$$ in $\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})$, in other words that $(\phi{\circ}\operatorname{Fr}_{p^f})_*$ commutes with the $\mu_d$ action. Similarly $([\zeta_d^i]{\circ}\phi{\circ}\operatorname{Fr}_{p^f})_*$ commutes with the $\mu_d$ action for all $i$.
Further analysis of the homomorphisms $([\zeta_d^i]{\circ}\phi{\circ}\operatorname{Fr}_{p^f})_*$ in $\operatorname{Hom}(J_{{\mathcal{C}}_d},J_{{\mathcal{D}}_d})^{\mu_d}$ (along the lines of [@UlmerDPCT]\*[7.8]{}) shows that they are almost independent; more precisely, they generate a subgroup of rank $d-1$. Thus we find (for $d$ of the form $d=2p^f-1$) that the rank of $E(k(t^{1/d}))$ is at least $d-1$.
The reader may find it a pleasant exercise to write down explicit points in this situation, along the lines of the discussion in Section \[s:explicitpoints\] and [@UlmerDPCT]\*[§8]{}.
Further developments
====================
There have been further developments in the area of rational points on curves and Jacobians over function fields. To close, we mention three of them.
In the examples of Sections \[s:firstexample\] and \[s:2ndexample\], the set of $d$ that are “interesting,” i.e., for which we get high rank over $K_d$, depends very much on $p$, the characteristic of $k$. In his thesis (University of Arizona, 2010), Tommy Occhipinti gives, for every $p$, remarkable examples of elliptic curves $E$ over ${{\mathbb{F}_p}}(t)$ such that for [*all*]{} $d$ prime to $p$ we have $$\operatorname{Rank}E({{\overline{\mathbb{F}}_p}}(t^{1/d}))\ge d.$$ The curves come from Berger’s construction where $f$ and $g$ are generic degree two rational functions. The rank inequality comes from the rank formula in Theorem \[thm:rank-formula\] and the Honda-Tate theory of isogeny classes of abelian varieties over finite fields.
In the opposite direction, the author and Zarhin have given examples of curves of every genus over ${\mathbb{C}}(t)$ such that their Jacobians have bounded rank in the tower of fields ${\mathbb{C}}(t^{1/\ell^n})$ where $\ell$ is a prime. See [@UlmerZarhin10].
Finally, after some encouragement by Dick Gross at PCMI, the author produced explicit points on the Legendre curve over the fields ${{\mathbb{F}_p}}(\mu_d)(t^{1/d})$ where $d$ has the form $p^f+1$ and proved in a completely elementary way that they give Mordell-Weil groups of unbounded rank. In fact, this construction is considerably easier than that of Tate and Shafarevich [@TateShafarevich67] and could have been found in the 1960s. See [@UlmerLegendre].
It appears that this territory is rather fertile and that there is much still to be discovered about high ranks and explicit points on curves and Jacobians over function fields. Happy hunting!
---
abstract: 'We consider the approximation of an elliptic eigenvalue problem with an immersed interface. The main aim of this paper is to prove the stability and convergence of an immersed finite element method (IFEM) for eigenvalues using the Crouzeix-Raviart $P_1$-nonconforming approximation. We show that the spectral analysis for the classical eigenvalue problem can be easily applied to our model problem. We analyze the IFEM for the elliptic eigenvalue problem with an immersed interface and derive the optimal convergence of eigenvalues. Numerical experiments demonstrate our theoretical results.'
author:
- '[Seungwoo Lee]{}'
- '[Do Y. Kwak]{}'
- Imbo Sim
title: Immersed Finite Element Method for Eigenvalue Problem
---
eigenvalue, finite elements, immersed interface
15A15, 15A09, 15A23
Introduction
============
In this paper, we consider the approximation of an elliptic eigenvalue problem with an immersed interface. Interface problems are often encountered in fluid dynamics, electromagnetics, and materials science. In particular, elastic waves propagating in heterogeneous media with interfaces occur in materials science [@Deak-Ahmed; @Zhang-Leveque], and electromagnetic problems with different conductivities or permeabilities often arise in optical waveguides [@Badia-Codina; @Hiptmair-Li-Zou]. The main difficulty in solving such problems stems from the non-smoothness of the solution across the interface. One way to overcome it is to use finite element methods based on meshes fitted to the interface. Another is to use meshes independent of the interface geometry. In the latter case, LeVeque and Li [@LeVeque-Li] introduced the immersed interface method, based on the finite difference method, in which the jump conditions are properly incorporated in the scheme. However, the resulting linear system of equations may not be symmetric and positive definite [@Li-Lin-Wu]. On the other hand, the immersed finite element method (IFEM), in which the local basis functions are constructed to satisfy the jump conditions, has been developed in [@Li-Lin-Wu] and its variants have been analyzed [@Chou-Kwak-Wee; @Hou-Liu; @Kwak-W-C; @Li-Lin-Lin-Rogers]. Related work in this direction can be found in [@Chang-Kwak; @Gong-Li-Li; @Lin-Sheen-Zhang] and the references therein.
The purpose of this paper is to prove the stability and convergence of an immersed finite element method for eigenvalues using Crouzeix-Raviart $P_1$-nonconforming approximation [@Kwak-W-C]. As a model problem, we consider the eigenvalue problem with an immersed interface, i.e. $$\begin{aligned}
-\nabla \cdot (\beta \nabla u) &= \lambda u \quad \; \text{in} \quad \;\Omega^{+} \cup \Omega^{-}, \nonumber\\\, [u]_\Gamma &= 0, \quad \left[\beta\frac{\partial u}{\partial n} \right]_\Gamma = 0, \label{eq:modelEq} \\
u &= 0 \qquad \text{on} \quad \partial\Omega \nonumber,
\end{aligned}$$ where $\Omega$ is a convex polygonal domain in $\mathbb{R}^2$ which is separated into two subdomains $\Omega^+ $ and $\Omega^-$ by a $C^2$-interface $\Gamma = \partial \Omega^- \subset
\Omega$ with $\Omega^+ = \Omega \setminus \Omega^-$. The symbol $[\,\cdot\,]_\Gamma$ denotes the jump across $\Gamma$. The coefficient $\beta(x)$ is a discontinuous function bounded below and above by two positive constants. For the sake of simplicity, we assume that the coefficient $\beta$ is a positive piecewise constant, that is, $$\beta(x) = \left\{
\begin{aligned}
\beta^- \quad \text{for} \; x \in \Omega^-, \\
\beta^+ \quad \text{for} \; x \in \Omega^+.
\end{aligned}
\right.$$
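As a concrete illustration (not taken from the paper: the circular interface of radius $r_0$ and the values $\beta^-=1$, $\beta^+=10$ are arbitrary choices), such a piecewise-constant coefficient can be evaluated as:

```python
import numpy as np

def beta(x, y, beta_minus=1.0, beta_plus=10.0, r0=0.5):
    """Piecewise-constant coefficient: beta_minus on Omega^- = {x^2+y^2 < r0^2},
    beta_plus on Omega^+ (the circular interface and the values are illustrative)."""
    return np.where(x**2 + y**2 < r0**2, beta_minus, beta_plus)

print(beta(0.0, 0.0), beta(0.8, 0.0))
```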
The $P_1$-nonconforming FEM is widely used in solving elliptic equations and is shown to be useful in solving the mixed formulation of elliptic problems [@Arnold-Brezzi] and the Stokes equations [@Crouzeix-Raviart]. Recently, Kwak et al. [@Kwak-W-C] introduced an IFEM based on the piecewise $P_1$-nonconforming polynomials and they proved optimal orders of convergence in the $H^1$ and $L^2$-norm.
There have been various mathematical studies of finite element methods for eigenvalue problems. A unified approach to a posteriori and a priori error analysis for finite element approximations of self-adjoint elliptic eigenvalue problems is presented in [@Larson]. The convergence of an adaptive method for elliptic eigenvalue problems is proved in [@Giani-Graham]. For a nonconforming approximation, Dari et al. [@Dari-Duran-Padra] provide a posteriori error analysis of the eigenvalue. Studies of mixed eigenvalue problems can be found in [@Boffi2007; @Boffi-Brezi-Gastaldi; @Mercier-Osborn-Rappaz-Raviart]. To the best of our knowledge, the spectral and convergence analysis of IFEM for eigenvalue problems with an immersed interface has not been done so far. It is worth emphasizing that the spectral properties of eigenvalue problems with an immersed interface play key roles in the analysis and simulation of more complicated problems, such as fluid-structure interactions, moving interfaces and the numerical stability of PDEs.
In this work, we analyze the IFEM for elliptic eigenvalue problems with an immersed interface and derive the optimal convergence of eigenvalues. Furthermore, we show that the spectral analysis for the classical eigenvalue problem can be easily applied to our model problem. In particular, the spectral approximation of Galerkin methods can be proved by using fundamental properties of compact operators in Banach spaces. Such an investigation originates from a series of papers of Osborn and Babuška [@Babuska-Osborn; @Osborn]. It has been extended in [@Descloux-Nassif-Rappaz1978-1; @Descloux-Nassif-Rappaz1978-2] to estimate Galerkin approximations for noncompact operators. Further application to discontinuous Galerkin approximations has been developed by Buffa et al. [@Antonietti-Buffa-Perugia]. We formulate the eigenvalue problem with an immersed interface in terms of compact operators in order to understand the spectral behavior. The analysis presented in this paper is carried out along the lines of the references [@Descloux-Nassif-Rappaz1978-1; @Descloux-Nassif-Rappaz1978-2].
The paper is structured as follows. In the next section, we give a brief review of the $P_1$-nonconforming IFEM [@Kwak-W-C]. In Section 3, we introduce a modified version of the IFEM with an additional term and formulate the eigenvalue problem with the immersed interface. Section 4 contains the analysis of the spectral approximation, which is proved to be spurious-free. The approximation is proved by means of basic results from the theory of compact operators in Banach spaces. In Section 5, we derive the convergence rate of eigenvalues based on the $P_1$-nonconforming IFEM. In the final section, we demonstrate numerical experiments for the model problem which corroborate the theoretical results in the preceding sections.
Preliminaries
=============
We consider an elliptic interface problem corresponding to the model problem (\[eq:modelEq\]): $$\begin{aligned}
\label{eq}
-{\nabla\cdot}(\beta{\nabla}u) &=& f ~~ \mathrm{ in}~ \;\Omega^{+} \cup \Omega^{-}, \\
\,[u]_\Gamma&=&0, ~~~ \left[\,\beta\pd un\,\right]_\Gamma=0, \label{flux}\\
u &=& 0 ~~ \mathrm{ on}~ {\partial}\Omega. \label{BC}\end{aligned}$$ The weak formulation of the problem (\[eq\]) - (\[BC\]) is to find $u\in H^1_0(\Omega)$ such that $$\label{op}
\int_\Omega \beta \nabla u\cdot \nabla v dx = \int_\Omega f v dx , ~~~ \forall v
\in H^1_0(\Omega)$$ with $f \in L^2(\Omega)$.
*(Figure: the convex domain $\Omega$ with the subdomains $\Omega^+$, $\Omega^-$ separated by the interface $\Gamma$.)*
We begin by introducing a Sobolev space which is convenient for describing the regularity of the solution of the elliptic interface problem (\[eq\]) - (\[BC\]). For a bounded domain $D$, we let $H^m(D) = W^m_2(D)$ be the usual Sobolev space of order $m$ with semi-norm and norm denoted by $|\cdot|_{m,D}$ and $\|\cdot\|_{m,D}$, respectively. We define the space $$\begin{aligned}
{\widetilde}{H}^m(\Omega) := \{ u\in H^{m-1}(\Omega)\, :\, u\in H^m(\Omega^s),
s=+,- \}\end{aligned}$$ equipped with the norm $$\begin{aligned}
\|u\|^2_{{\widetilde}{H}^m(\Omega)} :=
\|u\|^2_{H^{m-1}(\Omega)}+\|u\|^2_{H^m(\Omega^+)} +
\|u\|^2_{H^m(\Omega^-)},~~ \forall\, u\in{\widetilde}{H}^m(\Omega).\end{aligned}$$ By the Sobolev embedding theorem, for any $u \in H^2(\Omega)$ we have $u \in W^1_s(\Omega), \;\forall s > 2$. Then we have the following regularity theorem for the weak solution $u$ of the variational problem (\[op\]); see [@Bramble-King] and [@Ladyzenskaja-Rivkind-Uralceva].
\[thm:reg\] The variational problem (\[op\]) has a unique solution $u\in{\widetilde}{H}^2(\Omega)$ which satisfies for some constant $C>0$ $$\begin{aligned}
\|u\|_{{\widetilde}{H}^2(\Omega)} \leq C \|f\|_{0,\Omega}. \nonumber\end{aligned}$$
We now describe an immersed finite element method (IFEM) based on the Crouzeix-Raviart element [@Kwak-W-C]. Let $\{\mathcal{K}_h\}$ be the usual quasi-uniform triangulations of the domain $\Omega$ into triangles of maximum diameter $h$. Note that we do not require an element $K \in \mathcal K_h$ to be aligned with the interface $\Gamma$. We make the following assumptions:
- the interface intersects the edges of an element at no more than two points
- the interface intersects each edge at most once, except possibly when it passes through two vertices.
For a smooth interface, those assumptions are satisfied if $h$ is sufficiently small. We call an element $K\in\mathcal{K}_h$ an *interface element* if the interface $\Gamma$ passes through the interior of $K$, otherwise $K$ is a *non-interface element*. We denote by $\mathcal K_h^*$ the collection of all interface elements. We may replace $\Gamma\cap K$ by the line segment joining two intersection points on the edges of each $K\in \cK_h$.
For each $K\in \mathcal{K}_h$ and non-negative integer $m$, let $$\begin{aligned}
{\widetilde}{H}^m(K) &:=& \{\,u\in L^2(K) : \,u|_{K\cap \Omega^s}\in
H^m(K\cap \Omega^s), s = +,-\,\},\end{aligned}$$ equipped with norms $$\begin{aligned}
|u|^2_{m,K} &:=& |u|^2_{m,K\cap \Omega^+} + |u|^2_{m,K\cap \Omega^-},\\
\|u\|^2_{m,K} &:=& \|u\|^2_{m,K\cap \Omega^+} + \|u\|^2_{m,K\cap \Omega^-}.\end{aligned}$$ To deal with the interface conditions in the model problem (\[eq:modelEq\]), we introduce the following spaces, $$\begin{aligned}
{\widetilde}{H}^2_{\Gamma}(K) &:=& \left\{\,u\in H^1(K) : \, u|_{K\cap
\Omega^s}\in H^2(K\cap \Omega^s),\,s = +,-~\, \text{and} \left[\beta \pd un\right]_\Gamma = 0 \text{ on } \Gamma\cap K\, \right\},\\
{\widetilde}{H}^2_{\Gamma}(\Omega) &:=& \left\{\,u\in H^1_0(\Omega) :
\, u|_{K}\in{\widetilde}{H}^2_{\Gamma}(K),\,\, \forall K\in \mathcal{K}_h \right\}.\end{aligned}$$ Clearly, ${\widetilde}{H}^2_{\Gamma}(K)$ and ${\widetilde}{H}^2_{\Gamma}(\Omega)$ are subspaces of ${\widetilde}{H}^2(K)$ and ${\widetilde}{H}^2(\Omega)$, respectively.
As usual, we construct local basis functions on each element $K$ of the triangulation $\mathcal{K}_h$. We let $$\overline v|_e= \frac1{|e|}\int_{e}{v}\,ds$$ denote the average of a function $v\in H^1(K)$ along an edge $e$. For a non-interface element $K\in\mathcal{K}_h$, we simply use the standard linear shape functions whose degrees of freedom are determined by average values on the edges. Let $N_h(K)$ denote the linear space spanned by the three basis functions $\phi_i$ satisfying $\overline{\phi_i}|_{e_j} = \delta_{ij}$ for $i,j=1,2,3$. The $P_1$-nonconforming space $N_h(\Omega)$ is given by $$N_h(\Omega)= \left\{
\begin{aligned}
\phi: \phi|&_K\in P_1(K) \mbox{ for } K\in\cK_h\setminus \cK_h^*;\ \mbox{if $K_1,K_2 \in \mathcal K _h$ share an edge $e$,} \\
&\text{then} \int_{e}{\phi}|_{\partial K_1} ds= \int_{e}{\phi}|_{\partial K_2} ds; \mbox{ and }
\int_{\partial K \cap \partial\Omega}{\phi}\,ds=0
\end{aligned}
\right\}.$$
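On a non-interface element the basis functions can be written down explicitly: with barycentric coordinates $\lambda_i$, the function $\phi_i = 1 - 2\lambda_i$ has edge average $1$ on the edge opposite vertex $i$ and $0$ on the other two edges. A minimal numerical check (0-based indexing and the unit reference triangle are illustrative conventions; since each $\phi_i$ is linear, its edge average equals its value at the edge midpoint):

```python
import numpy as np

# reference triangle with vertices A = (0,0), B = (1,0), C = (0,1);
# edge e_i is taken opposite vertex i
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def lam(i, p):
    """Barycentric coordinate of vertex i at the point p = (x, y)."""
    x, y = p
    return [1.0 - x - y, x, y][i]

def phi(i, p):
    """Crouzeix-Raviart shape function associated with edge e_i."""
    return 1.0 - 2.0 * lam(i, p)

# a linear function's average over an edge equals its value at the midpoint
mids = [(V[1] + V[2]) / 2, (V[2] + V[0]) / 2, (V[0] + V[1]) / 2]
avgs = np.array([[phi(i, m) for m in mids] for i in range(3)])
print(avgs)  # the 3x3 identity: average of phi_i over e_j is delta_ij
```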
*(Figure \[fig:interel\]: a reference interface element $K$ with vertices $A$, $B$, $C$ and edges $e_1$, $e_2$, $e_3$; the interface $\Gamma$ meets the edges at $D$ and $E$, splitting $K$ into $K^+$ and $K^-$.)*
Now we consider a reference interface element $K$ and assume that the interface $\Gamma$ intersects the edges of $K$ at $D$ and $E$ as in Figure \[fig:interel\]. Consider a linear function $\phi = V_1\phi_1 + V_2\phi_2 + V_3\phi_3$ on $K$, where $V_i \in \mathbb{R}$ and $\phi_i\,(i=1,2,3)$ are the standard basis functions [@Crouzeix-Raviart]. We construct a new basis function $\hat{\phi}$ having the same degrees of freedom as $\phi$. Additionally, the function $\hat{\phi}$ should be linear on $K^+$ and $K^-$, and satisfy the jump conditions in (\[flux\]). Since the edge $e_1$ does not intersect the interface, the function $\hat{\phi}$ on the interface element $K$ can be conveniently described as follows: $$\begin{aligned}
\label{def:basis}
\hat{\phi} = \left\{
\begin{array}{cc}
c_1^-\phi_1+ c_2^-\phi_2 + c_3^-\phi_3 & \text{in $K^-$,}\\
V_1\phi_1+ c_2^+\phi_2 + c_3^+\phi_3 & \text{in $K^+$},
\end{array}\right.\end{aligned}$$ satisfying $$\begin{aligned}
&&\hat{\phi}^-(D) = \hat{\phi}^+(D),~\hat{\phi}^-(E) =
\hat{\phi}^+(E), \label{conti}\\
&& \frac1{|e_i|}\int_{e_i}{\hat\phi}\,ds=V_i,\ i=2,3,\\
&& \beta^-\pd{\hat{\phi}^-}{n\;\;\,}|_{\overline{DE}} =
\beta^+\pd{\hat{\phi}^+}{n\;\;\,}|_{\overline{DE}} . \label{fconti}\end{aligned}$$ It turns out that $\hat{\phi}$ automatically satisfies $ \frac1{|e_1|}\int_{e_1}{\hat\phi}\,ds=V_1$, by the second equation in (\[def:basis\]). The modified function $\hat{\phi}$ is uniquely determined by (\[def:basis\]) - (\[fconti\]) (see [@Kwak-W-C]).
We denote by ${\widehat}N_h(K)$ the local finite element space on the interface element $K$ whose basis functions $\hat{\phi}_i,\,
i=1,2,3$ are defined by above construction. We define the [*immersed finite element space*]{} ${\widehat}N_h(\Omega)$ as the collection of functions $\hat\phi \in L^2(\Omega)$ such that
- $ \hat\phi|_K\in {\widehat}N_h(K) \mbox{ if } K\in\cK_h^*$
- $ \hat{\phi}|_K \in N_h(K) \mbox{ if } K\in \cK_h\setminus \cK_h^*$
- $ \int_{e}{\hat\phi}|_{\partial K_1} ds= \int_{e}{\hat\phi}|_{\partial K_2} ds \;\mbox{ if $K_1,K_2 \in \mathcal K _h$ share an edge $e$}$
- $ \int_{\partial K \cap \partial\Omega}{\hat\phi}\,ds=0.$
Let $H_h(\Omega) := H^1_0(\Omega) + {\widehat}N_h(\Omega)$ be endowed with the broken $H^1$-norm $\|v\|^2_{1,h} := \sum_{K\in \mathcal{K}_h}\|v\|^2_{1,K}$. Next, we need an interpolation operator. For any $v\in {H}^1(K)$, $ I_h v\in {\widehat}N_h(K)$ is determined by the average values of $v$ on each edge: $$\overline{(I_h v)}|_{e_i} = \bar{v}|_{e_i},~~ i=1,2,3.$$ We call $I_h v$ the local *interpolant* of $v$ in ${\widehat}N_h(K)$. We naturally extend it to $ {H}^1(\Omega)$ by $(I_hv)|_{K} =I_h (v|_K)$ for each $K \in \mathcal{K}_h$. Then we have the following approximation property of the interpolation in ${\widetilde}{H}_{\Gamma}^2(\Omega)$ [@Kwak-W-C].
\[thm:apperror\] There exists a constant $C>0$ such that $$\begin{aligned}
\|v-I_h v\|_{0,\Omega} + h\|v- I_h v\|_{1,h} \leq C h^2
\|v\|_{{\widetilde}{H}^2(\Omega)}, \quad \forall v \in {\widetilde}{H}_{\Gamma}^2(\Omega).\end{aligned}$$
Variational formulation
=======================
In this section, we consider a variational formulation for the model problem (\[eq:modelEq\]). Let $\Omega$, $\Gamma$ and $\beta$ be the same as in the previous section. Multiplying by $v \in H^1_0(\Omega)$ and integrating by parts in $\Omega^\pm$, we obtain $$\begin{aligned}
\sum_{s=\pm} \int_{\Omega^s} -\nabla\cdot(\beta\nabla u) \cdot v\, dx &= \sum_{s=\pm} \int_{\Omega^s} \beta\nabla u\cdot\nabla v \, dx - \int_{\Gamma}\left[\beta\frac{\partial u}{\partial n}\right] v \, ds \label{eq:weakpro1}\nonumber\\
&= \int_{\Omega} \beta \nabla u\cdot \nabla v \, dx. \nonumber\end{aligned}$$ Hence the weak formulation of the problem (\[eq:modelEq\]) is to find the eigenvalues $\lambda \in \mathbb{C}$ and the eigenfunctions $u \in H^1_0(\Omega)$ such that $$a(u,v) = \lambda(u,v), \quad \forall v \in H^1_0(\Omega),
\label{eq:dweakform}$$ where $$a(u,v) = \int_{\Omega} \beta\nabla u \cdot \nabla v\, dx, \quad \forall u,v \in H^1_0(\Omega).$$ Using the solution operator $T:L^2(\Omega) \rightarrow H^1_0(\Omega)$, the eigenvalue problem (\[eq:dweakform\]) can be treated as the variational form $$\label{eq:T_operator}
a(Tf, v) = (f,v), \quad \forall v \in H^1_0(\Omega) \nonumber$$ with $f\in L^2(\Omega)$. Note that if $(\lambda, u) \in \mathbb{C}\setminus\{0\} \times H^1_0(\Omega)$ is an eigenpair of (\[eq:dweakform\]), then $(\lambda^{-1},\, u)$ is an eigenpair for the operator $T$.
For the application of the IFEM to eigenvalue problems, we construct an IFEM with a penalty term. We start by presenting a modified $P_1$-nonconforming IFEM for the elliptic problem (\[eq\]) - (\[BC\]). As additional notation, let the collection of all the edges of $K\in \mathcal K_h$ be denoted by $\mathcal{E}_h$. We split $\mathcal{E}_h$ into two disjoint sets $\mathcal{E}_h = \mathcal{E}_h^o\cup \mathcal{E}_h^b$, where $\mathcal{E}_h^o$ is the set of edges lying in the interior of $\Omega$, and $\mathcal{E}_h^b$ is the set of edges on the boundary of $\Omega$.
The IFEM (modified by a penalty term) for (\[op\]) is to find $\hat{u}_h \in {\widehat}N_h(\Omega)$ such that $$a_h^\sigma(\hat{u}_h, \hat{\phi}) = (f, \hat{\phi}), \quad \forall \hat{\phi} \in {\widehat}N_h(\Omega),
\label{eq:dsweakform}$$ where $$\begin{aligned}
a_h^\sigma(u,v) &:=& a_h(u,v) + j_\sigma(u,v),\\
a_h(u,v) &:=& \sum_{K\in \mathcal{K}_h}\int_K \beta \nabla u \cdot \nabla v\, dx, \\
j_\sigma(u,v) &:=& \sum_{e\in \mathcal E^o_h} \int_{e}\frac \sigma h [u]_{e}[v]_{e}\, ds,
\mbox { for some } \sigma>0.\end{aligned}$$ We define the mesh-dependent norm $\|\cdot\|_{1,J}$ on the space $H_h(\Omega)$ by $$\|v\|^2_{1,J} := \sum_{K \in \mathcal{K}_h} \|v\|^2_{0,K} + \sum_{K \in \mathcal{K}_h} \|\nabla v\|^2_{0,K} + \sum_{e \in \mathcal E^o_h} h^{-1}\|[v]\|^2_{0,e}.$$ By the trace inequality [@Brenner-Scott], this norm is equivalent to $\|\cdot\|_{1,h}$. The bilinear form $ a_h^\sigma(\cdot,\cdot)$ is bounded and coercive:
There exist positive constants $C_b$ and $C_c$ such that $$\begin{aligned}
{2}
|a_h^{\sigma}(u,v)| &\leq C_b\|u\|_{1,J}\|v\|_{1,J}, \quad &\forall\, u,v \in H_h(\Omega), \\
a_h^{\sigma}(v,v) &\geq C_c\|v\|^2_{1,J}, &\forall\, v \in {\widehat}N_h(\Omega).\end{aligned}$$
The following error estimate for (\[eq:dsweakform\]) can be obtained by a slight modification of the proof in [@Kwak-W-C], by noting that $j_\sigma(u,v) =0$ for any $u\in {H}^1(\Omega)$ and $v\in {\widehat}N_h(\Omega)$.
\[thm:energyerror\] Let $u\in {\widetilde}{H}^2(\Omega),~ \hat u_h\in {\widehat}N_h(\Omega)$ be the solutions of (\[op\]) and (\[eq:dsweakform\]), respectively. Then there exists a constant $C>0$ such that $$\begin{aligned}
\|u- \hat u_h\|_{0,\Omega}+h\|u- \hat u_h\|_{1,J}\leq C h^2\|u\|_{{\widetilde}{H}^2(\Omega)}.\end{aligned}$$
The IFEM for the eigenvalue problem (\[eq:modelEq\]) is to find the pairs $(\lambda_h,\hat{u}_h) \in \mathbb{C} \times {\widehat}{N}_h(\Omega)$ such that $$a_h^\sigma(\hat{u}_h, \hat{\phi}) = \lambda_h(\hat{u}_h, \hat{\phi}), \quad \forall \hat{\phi}
\in {\widehat}N_h(\Omega).
\nonumber$$ Let us define the discrete solution operator $T_h:L^2(\Omega) \rightarrow {\widehat}N_h(\Omega)$ by $$a_h^\sigma(T_h f, \hat\phi) = (f, \hat\phi), \quad \forall \hat\phi \in {\widehat}N_h(\Omega)
\label{eq:revariform} \nonumber$$ with $f \in L^2(\Omega)$. In view of the definition of the discrete solution operator $T_h$, the eigenvalues $\mu_h$ of the operator $T_h$ are given by $\mu_h =1/ \lambda_h$.
Spectral approximation
======================
Now we are concerned with the spectral approximation that can be proved by using some properties of compact operators in Banach space. We follow the approaches given in [@Descloux-Nassif-Rappaz1978-1; @Descloux-Nassif-Rappaz1978-2].
Clearly, the operator $T$ is self-adjoint and bounded. Similarly, the operator $T_h$ is self-adjoint in the sense that $$a_h^\sigma(T_hf,\phi) = a_h^\sigma(f,T_h\phi), \quad \forall f,\phi \in {\widehat}N_h(\Omega). \nonumber$$ Next, the boundedness of the operator $T_h$ can be shown by using the coerciveness of the bilinear form $a_h^\sigma(\cdot, \cdot)$. For any $f\in L^2(\Omega)$, it holds that $$\begin{aligned}
\|T_h f\|^2_{1,J} &\leq Ca_h^\sigma(T_h f,T_h f) \\
&= C(f,T_h f) \\
& \leq C\|f\|_{0,\Omega}\|T_h f\|_{0,\Omega} \\
&\leq C\|f\|_{0,\Omega}\|T_h f\|_{1,J}.\end{aligned}$$ Therefore, $\|T_h f\|_{1,J} \leq C\|f\|_{0,\Omega} $.
The operator $T$ is compact in $H^1_0(\Omega)$ due to the boundedness of $T$ and the Rellich-Kondrachov theorem, i.e., the compact embedding $H^1_0(\Omega) \subset L^2(\Omega)$ [@Adams-Fournier]. Clearly, the operator $T_h$ is compact in $H_h(\Omega)$, since its range ${\widehat}N_h(\Omega)$ is finite-dimensional.
Let $\sigma(T)$ and $\rho(T)$ be the spectrum and resolvent set of $T$, respectively. The spectrum $\sigma(T)$ is a countable set with no accumulation points different from zero and consists of positive real eigenvalues with finite multiplicity. The algebraic multiplicity of each eigenvalue $\mu \in \sigma(T)$ is equal to the geometric multiplicity due to the self-adjointness and compactness of the operator $T$ [@Kato]. For any $z\in \rho(T)$, the resolvent operator $R_z(T)$ is defined by $R_z(T) = (z-T)^{-1}$ from $L^2(\Omega)$ to $L^2(\Omega)$ or from $H^1_0(\Omega)$ to $H^1_0(\Omega)$. Following the references [@Descloux-Nassif-Rappaz1978-1; @Descloux-Nassif-Rappaz1978-2], we prove the non-pollution of the spectrum $\sigma(T)$. To do so, we need the following results.
For $z\in \rho(T)$, $z\neq 0$, there is a constant $C>0$ depending only on $\Omega$ and $|z|$ such that $$\|(z-T)f\|_{1,J} \geq C\|f\|_{1,J}, \quad \forall f \in H_h(\Omega).$$ \[lem:contiresol\]
Let $g = (z- T)f$. We need to show $\|f\|_{1,J} \leq C\|g\|_{1,J}$. From the definition of $T$ and $g$, we have the equalities, $$a(Tf, v) = a(zf - g, v) = (f,v), \quad \forall v\in H^1_0(\Omega).
\label{eq:lemweak1}$$ Reformulating the second equality in (\[eq:lemweak1\]), we obtain $$a(zf-g,v) - \frac{1}{z}(zf-g,v) = \frac{1}{z}(g,v), \quad \forall v\in H^1_0(\Omega) .
\label{eq:lemweak}$$ Since $z \in \rho(T)$, the inverse $z^{-1}$ is not an eigenvalue of $a(\cdot, \cdot)$. Hence $zf -g$ is the solution of the weak formulation (\[eq:lemweak\]). By using Theorem \[thm:reg\], we have $$\|zf - g\|_{1,J} \leq C\frac{1}{|z|}\|g\|_{0,\Omega} \leq C\frac{1}{|z|}\|g\|_{1,J}.
\label{eq:contiresol_eq1}$$ From the triangle inequality and (\[eq:contiresol\_eq1\]), it follows immediately that $$\begin{aligned}
\|f\|_{1,J} &\leq \frac{1}{|z|}(\|zf - g\|_{1,J} + \|g\|_{1,J}) \\
&\leq \frac{1}{|z|}(C\frac{1}{|z|}\|g\|_{1,J} + \|g\|_{1,J}) \\
& \leq C(|z|) \|g\|_{1,J}\; ,\end{aligned}$$ where $C(|z|)$ is a constant depending on $|z|$.
For $z\in \rho(T)$, $z\neq 0$, there is a constant $C>0$ depending only on $\Omega$ and $|z|$ such that for $h$ small enough $$\|(z-T_h)f\|_{1,J} \geq C\|f\|_{1,J}, \quad \forall f\in H_h(\Omega).$$ In other words, the resolvent operator $R_z(T_h) = (z-T_h)^{-1}$ is bounded. \[thm:disResol\]
By Theorem \[thm:energyerror\] and Lemma \[lem:contiresol\], we get $$\begin{aligned}
\|(z-T_h)f\|_{1,J} &\geq \|(z-T)f\|_{1,J} - \|(T-T_h)f\|_{1,J} \\
&\geq (C_1(|z|) - C_2 h)\|f\|_{1,J} \\
&\geq C(|z|)\|f\|_{1,J},\end{aligned}$$ for $h$ small enough.
Before stating the following corollary, we define the operator norm $\|L\|_{\mathscr{L}(X,Y)}$ of a bounded linear operator $L : X \to Y$ by $$\|L\|_{\mathscr{L}(X,Y)} = \sup_{0 \neq x \in X} \frac{\|Lx\|_Y}{\|x\|_X}. \label{eq:oper-norm}$$
Let $F \subset \rho(T)$ be closed, then $$\|R_z(T_h)\|_{\mathscr{L}(H_h(\Omega), H_h(\Omega))} \leq C, \quad \forall z \in F,$$ for some constant $C$. \[lem:disResolBdd\]
The following result is a direct consequence of Corollary \[lem:disResolBdd\]. We note that the proof is analogous to Theorem 1 in [@Descloux-Nassif-Rappaz1978-1].
(Non-pollution of the spectrum) Let $A \subset \mathbb{C}$ be an open set containing $\sigma(T)$. Then for sufficiently small $h$, $\sigma(T_h) \subset A$. \[thm:nonpollspec\]
This implies that there are no discrete spurious eigenvalues of the solution operator $T_h$.
Now we turn to showing the non-pollution and completeness of the eigenspace [@Descloux-Nassif-Rappaz1978-1; @Descloux-Nassif-Rappaz1978-2]. Let $\mu$ be an eigenvalue of $T$ with algebraic multiplicity $n$. We define the spectral projection $E(\mu)$ from $L^2(\Omega)$ into $H^1_0(\Omega)$ by $$E(\mu) = \frac{1}{2\pi i}\int_{\Lambda} R_z(T)\, dz,$$ where $\Lambda$ is a Jordan curve in $\mathbb{C}$ containing $\mu$, which lies in $\rho(T)$ and does not enclose any other points of $\sigma(T)$ [@Kato]. By Corollary \[lem:disResolBdd\], the discrete resolvent operator $R_z (T_h)$ is bounded. Therefore, we can define the discrete spectral projection $E_h(\mu)$ from $L^2(\Omega)$ into $H_h(\Omega)$ by $$E_h(\mu) = \frac{1}{2\pi i} \int_{\Lambda} R_z(T_h)\, dz.$$ The projections $E(\mu)$ and $E_h(\mu)$ are simply denoted by $E$ and $E_h$, respectively. The following theorem provides the uniform convergence of the spectral projections.
It holds that $$\lim_{h \to 0}\|E - E_h\|_{\mathscr{L}(L^2(\Omega), H_h(\Omega))} = 0. \nonumber$$ \[thm:specConv\]
By using the resolvent identity $$R_z(T) - R_z(T_h) = R_z(T_h)(T-T_h)R_z(T),$$ we obtain for $f \in L^2(\Omega)$, $$\begin{split}
\|(E - E_h)f\|_{1,J} & \leq C\|(R_z(T) - R_z(T_h))f\|_{1,J} \\
& = C\| R_z(T_h) (T-T_h) R_z(T) f\|_{1,J} \\
& \leq C\| R_z(T_h) \|_{\mathscr{L}(H_h(\Omega), H_h(\Omega))}\|T-T_h\|_{\mathscr{L}(L^2(\Omega), H_h(\Omega))} \\
&\qquad \cdot \|R_z(T)\|_{\mathscr{L}(L^2(\Omega), L^2(\Omega))}\|f\|_{L^2(\Omega)}.
\end{split}$$ For $h$ small enough, $\|R_z(T_h)\|_{\mathscr{L}(H_h(\Omega), H_h(\Omega))}$ and $\|R_z(T)\|_{\mathscr{L}(L^2(\Omega), L^2(\Omega))}$ are bounded by Theorem \[thm:disResol\] and Fredholm alternative [@Conway], respectively. The operator norm $\|T-T_h\|_{\mathscr{L}(L^2(\Omega), H_h(\Omega))}$ goes to zero as $h \to 0$. The proof is now complete.
We are now in a position to show the boundedness of the distance between eigenspaces. Such a distance for any closed subspaces of $H_h(\Omega)$ may be evaluated by means of distance functions $$\begin{aligned}
\mathrm{dist}_h(x, Y) &:= \inf_{y \in Y}\|x - y\|_{1,J}, \qquad \mathrm{dist}_h(Y,Z) := \sup_{y \in Y,\, \|y\|_{1,J} = 1}\mathrm{dist}_h(y,Z), \\
\mathrm{dist}(Y,Z) &:= \max(\mathrm{dist}_h(Y,Z), \, \mathrm{dist}_h(Z,Y)).\end{aligned}$$ The following results are analogous to Theorem \[thm:specConv\], whose proofs can be obtained as in [@Descloux-Nassif-Rappaz1978-1].
- (Non-pollution of the eigenspace) $$\lim_{h \to 0} \mathrm{dist}_h(E_h(H_h(\Omega)), E(H^1_0(\Omega)))=0.$$
- (Completeness of the eigenspace) $$\lim_{h \to 0} \mathrm{dist}_h(E(H^1_0(\Omega)), E_h(H_h(\Omega)))=0.$$ \[n.p.e\]
It remains to show that the distance between the spectra of $T$ and $T_h$ vanishes as $h$ goes to zero.
(Completeness of the spectrum) For all $z \in \sigma(T)$, $$\lim_{h \to 0} \mathrm{dist}_h (z, \sigma(T_h)) = 0.$$ \[c.s\]
The proof follows from Theorem $6$ in [@Descloux-Nassif-Rappaz1978-1].
Convergence analysis
====================
In this section, we present the convergence analysis of eigenvalues. By using the spectral properties of compact operators in the previous section, we show the convergence rate of eigenvalues.
\[thm:convEig\] Let $\mu$ be an eigenvalue of $T$ with multiplicity $n$. Then for $h$ small enough there exist $n$ eigenvalues $\{\mu_{1,h}, ... , \mu_{n,h}\}$ of $T_h$ which converge to $\mu$ as follows $$\sup_{1\leq i \leq n}|\mu - \mu_{i,h}| \leq C h^2,$$ where a positive constant $C$ is independent of $\mu$ and $h$.
The existence of $\mu_{i,h}$ is a direct consequence of the previous section. Now we estimate the convergence rate of $\mu_{i,h}$. Let $\Phi_h$ be the restriction of $E_h$ to $E(L^2(\Omega))$: $$\Phi_h=E_h|_{E(L^2(\Omega))} : E(L^2(\Omega)) \to E_h(H_h(\Omega)).$$ Following the arguments in [@Babuska-Osborn; @Osborn], we can show that the inverse $\Phi_h^{-1} : E_h(H_h(\Omega)) \to E(L^2(\Omega))$ is bounded for $h$ small enough. To show $\Phi_h^{-1}$ is defined, let $\Phi_h f = 0$ with $f \in E(L^2(\Omega))$. Then by Theorem \[thm:specConv\], we have $$\|f\|_{0,\Omega} = \|f - \Phi_h f\|_{0,\Omega} = \|E f - E_h f\|_{0,\Omega} \leq \|E - E_h\|_{\mathscr{L}(L^2(\Omega), H_h(\Omega))}\|f\|_{0,\Omega}.$$ Thus $\Phi_h$ is one-to-one. By Theorem \[n.p.e\], $\Phi_h$ is onto such that the inverse $\Phi_h^{-1}$ is defined. Now we show that $\Phi_h^{-1}$ is bounded. For $f \in E(L^2(\Omega))$ and $h$ small enough, $$\begin{aligned}
\|\Phi_h f \|_{0,\Omega} &\geq \|f\|_{0,\Omega} - \|f - \Phi_h f \|_{0,\Omega} \\
&= \|f\|_{0,\Omega} - \|E f - E_h f\|_{0,\Omega} \\
&\geq (1 - \|E- E_h\|_{\mathscr{L}(L^2(\Omega), H_h(\Omega))}) \|f\|_{0,\Omega} \\
&\geq \frac{1}{2} \|f\|_{0,\Omega}.\end{aligned}$$ Hence the inverse $\Phi_h^{-1}$ is bounded.
Let ${\widetilde}T$ be the restriction of $T$ to $E(L^2(\Omega))$ and define ${\widetilde}T_h := \Phi^{-1}_h T_h \Phi_h$. Setting $S_h = \Phi_h^{-1}E_h : L^2(\Omega) \to E(L^2(\Omega))$ (see Figure \[fig:operator\]), we see that $S_h$ is bounded and $S_h f = f$ for any $f \in E(L^2(\Omega))$. By definitions of ${\widetilde}T$, ${\widetilde}T_h$, $S_h$ and $\Phi_h$, and the property $T_h E_h = E_h T_h$, we have for any $f \in E(L^2(\Omega))$, $$\begin{aligned}
({\widetilde}{T}-{\widetilde}{T}_h)f &= Tf - \Phi_h^{-1}T_h \Phi_h f \nonumber \\
&= S_h T f - \Phi_h^{-1} T_h E_h f \nonumber \\
&= S_h T f - \Phi_h^{-1}E_hT_h f \nonumber \\
&= S_h(T - T_h)f. \nonumber\end{aligned}$$ By definition of operator norm (\[eq:oper-norm\]) and Theorem \[thm:energyerror\], we have $$\begin{aligned}
\sup_{1\leq i \leq n} |\mu - \mu_{i,h}| &\leq C \| {\widetilde}T - {\widetilde}T_h\|_{\mathscr{L}(E(L^2(\Omega)), E(L^2(\Omega)))} \\
&= C \sup_{f \in E(L^2(\Omega))} \frac{\|({\widetilde}T - {\widetilde}T_h)f\|_{0,\Omega}}{\|f\|_{0,\Omega}} \\
&= C \sup_{f \in E(L^2(\Omega))} \frac{\|S_h(T - T_h) f\|_{0,\Omega}}{\|f \|_{0,\Omega}} \\
&\leq C \sup_{f \in E(L^2(\Omega))} \frac{\|(T - T_h) f\|_{0, \Omega}}{\|f \|_{0,\Omega}} \\
&\leq C h^2\;.\end{aligned}$$
[Figure \[fig:operator\]: commutative diagrams of the operators. Left: $S_h = \Phi_h^{-1} E_h$ maps $L^2(\Omega)$ into $E(L^2(\Omega))$ via $E_h : L^2(\Omega) \to E_h(H_h)$ and $\Phi_h^{-1} : E_h(H_h) \to E(L^2(\Omega))$. Right: ${\widetilde}T_h = \Phi_h^{-1} T_h \Phi_h$ acts on $E(L^2(\Omega))$ via $\Phi_h$, $T_h : E_h(H_h) \to E_h(H_h)$, and $\Phi_h^{-1}$.]
Theorem \[thm:convEig\] can be expressed in terms of the eigenvalues $\lambda = \mu^{-1}$ and $\lambda_{i,h} = \mu_{i,h}^{-1}$ as $$\frac{|\lambda - \lambda_{i,h}|}{|\lambda_{i,h}|} \leq C_1 h^2.$$ For $h$ small enough, this yields the estimate for the relative error $$\frac{|\lambda - \lambda_{i,h}|}{|\lambda|} \leq \frac{C_1 h^2}{1 - C_1 h^2} \leq C h^2.$$
Numerical results
=================
We present numerical experiments for the problem (\[eq:modelEq\]). In the first example, we test an elliptic eigenvalue problem with a circular interface for which the exact eigenvalues are known. Next, we perform an experiment for the case of a star-shaped interface. In both cases we observe the optimal order of convergence of the numerical eigenvalues. In our computations we use the package ARPACK [@Lehoucq-Sorensen-Yang], which is designed for solving large sparse eigenvalue problems.
**Example 1**. Let a circular computational domain be $\Omega = \{ (r,\theta) : 0 \leq r \leq R_O, 0 \leq \theta < 2\pi\}$ with an interface $\Gamma = \{(r, \theta) : r = R_I, 0 \leq \theta < 2\pi\}$. The eigenpairs $(\lambda,u)$ of the model problem (\[eq:modelEq\]) are given by $u(x,y) = R(r)\Theta(\theta)$, $$\begin{aligned}
&\Theta(\theta) = d_1\cos m \theta + d_2\sin m\theta, \\
& R(r) = \left\{\begin{array}{l l} c_1^{+}J_{m}(\sqrt{\frac{\lambda}{\beta^{+}}}r) + c_2^{+}Y_{m}(\sqrt{\frac{\lambda}{\beta^{+}}}r), & R_I < r \leq R_O, \\
c_1^{-}J_{m}(\sqrt{\frac{\lambda}{\beta^{-}}}r) , & 0 \leq r \leq R_I,
\end{array}
\right.\end{aligned}$$ where $c^{\pm}_i$ and $d_i$ are constants, and $J_{m}$ and $Y_{m}$ are the Bessel functions of the first and second kind of order $m$, respectively. In the Appendix, we explain in more detail how the coefficients $c^{\pm}_i$, $d_i$ can be determined. We set $R_O=1$, $R_I=0.38$ and $(\beta^-, \beta^+) = (1,1000), (1000,1)$. It is natural to choose $\sigma$ dependent on $\beta$, say $\sigma = \kappa \beta$ for some $ \kappa >0$. The triangulation of the circular domain consists of quasi-regular triangles with maximal diameter $h$, which may intersect the interface $\Gamma$ as shown in Figure \[fig:mesh1\]. Tables \[table:circle1-\] and \[table:circle1+\] show the first ten eigenvalues and their rates of convergence. The first column contains the exact values and the other columns the eigenvalues of IFEM for varying $h$. From the second to the sixth column, the meshes are generated so that the degrees of freedom quadruple, thus $h$ nearly halves. The numbers in parentheses in each column show the order of convergence. We observe that the order of convergence is quadratic and there are no spurious eigenvalues. Figure \[fig:circle-eigV\] illustrates two eigenfunctions corresponding to eigenvalues $\lambda_1$ and $\lambda_2$ in the case of $\beta^- = 1, \, \beta^+ = 1000$. The cases with other values of $R_I$ and $\beta$ show similar results, although we do not present them here.
We examine the influence of the penalty parameter $\sigma$ in (\[eq:dsweakform\]). The results are shown in Figure \[fig:sigma\]. We notice that the case $\kappa = 0.1$ shows some deterioration in the order of convergence, whereas the cases $\kappa \in [1,100]$ achieve the desired convergence order (see Theorem \[thm:convEig\]). In Tables \[table:circle1-\] and \[table:circle1+\], we choose $\kappa = 1$.
![Example of mesh generation. The inner broken line represents the interface $\Gamma$.[]{data-label="fig:mesh1"}](mesh1.eps){width="50.00000%" height="50.00000%"}
--------- ---------------- ---------------- ---------------- ---------------- ----------------
39.972 40.018 (2.11) 39.982 (2.15) 39.974 (1.99) 39.972 (2.08) 39.972 (1.99)
101.523 101.744 (2.40) 101.566 (2.33) 101.533 (2.02) 101.525 (2.12) 101.523 (2.17)
101.523 101.772 (2.52) 101.573 (2.31) 101.534 (2.18) 101.525 (2.13) 101.523 (2.18)
182.473 183.212 (2.64) 182.626 (2.27) 182.507 (2.18) 182.481 (2.20) 182.475 (2.25)
182.473 183.522 (2.56) 182.636 (2.68) 182.507 (2.27) 182.481 (2.16) 182.475 (2.27)
210.604 211.120 (1.82) 210.723 (2.11) 210.635 (1.96) 210.612 (2.00) 210.606 (2.00)
281.713 283.846 (2.54) 282.098 (2.46) 281.792 (2.29) 281.730 (2.18) 281.716 (2.28)
281.713 284.413 (2.67) 282.140 (2.66) 281.798 (2.32) 281.731 (2.25) 281.717 (2.29)
340.329 341.615 (2.00) 340.625 (2.11) 340.404 (1.98) 340.347 (2.06) 340.333 (2.08)
340.329 341.799 (2.16) 340.643 (2.22) 340.405 (2.04) 340.347 (2.05) 340.333 (2.09)
--------- ---------------- ---------------- ---------------- ---------------- ----------------
: Eigenvalues by IFEM in Figure \[fig:mesh1\] when $\beta^- = 1, \beta^+=1000$ and $\kappa = 1$.[]{data-label="table:circle1-"}
-------- --------------- --------------- --------------- --------------- ---------------
6.047 6.049 (2.01) 6.047 (1.99) 6.047 (2.00) 6.047 (1.98) 6.047 (1.98)
27.355 27.380 (2.35) 27.360 (2.33) 27.356 (2.19) 27.355 (2.13) 27.355 (2.46)
27.355 27.382 (2.42) 27.360 (2.36) 27.356 (2.26) 27.355 (2.12) 27.355 (2.46)
34.126 34.175 (2.49) 34.135 (2.44) 34.128 (2.24) 34.126 (2.13) 34.126 (2.33)
34.126 34.183 (2.42) 34.136 (2.54) 34.128 (2.28) 34.126 (2.16) 34.126 (2.35)
39.742 39.766 (2.08) 39.748 (1.99) 39.744 (2.03) 39.743 (1.96) 39.742 (2.01)
45.091 45.171 (2.12) 45.104 (2.54) 45.094 (2.21) 45.091 (2.11) 45.091 (2.23)
45.091 45.176 (2.62) 45.106 (2.44) 45.094 (2.27) 45.091 (2.17) 45.091 (2.27)
59.871 59.968 (2.40) 59.890 (2.31) 59.875 (2.17) 59.872 (2.09) 59.871 (2.17)
59.871 59.990 (2.21) 59.892 (2.50) 59.875 (2.20) 59.872 (2.14) 59.871 (2.17)
-------- --------------- --------------- --------------- --------------- ---------------
: Eigenvalues by IFEM in Figure \[fig:mesh1\] when $\beta^- = 1000, \beta^+=1$ and $\kappa = 1$. []{data-label="table:circle1+"}
![The log-log plot of $h$ versus the relative error of eigenvalues for $\lambda_i,\; 1\leq i \leq 4$ with $\kappa = 0.1$ (asterisk), $\kappa = 1.0$ (circle), $\kappa = 10$ (plus sign) and $\kappa = 100$ (cross). The broken line represents the convergence rate.[]{data-label="fig:sigma"}](lam1.eps "fig:"){width="48.00000%" height="48.00000%"} ![The log-log plot of $h$ versus the relative error of eigenvalues for $\lambda_i,\; 1\leq i \leq 4$ with $\kappa = 0.1$ (asterisk), $\kappa = 1.0$ (circle), $\kappa = 10$ (plus sign) and $\kappa = 100$ (cross). The broken line represents the convergence rate.[]{data-label="fig:sigma"}](lam2.eps "fig:"){width="48.00000%" height="48.00000%"} ![The log-log plot of $h$ versus the relative error of eigenvalues for $\lambda_i,\; 1\leq i \leq 4$ with $\kappa = 0.1$ (asterisk), $\kappa = 1.0$ (circle), $\kappa = 10$ (plus sign) and $\kappa = 100$ (cross). The broken line represents the convergence rate.[]{data-label="fig:sigma"}](lam3.eps "fig:"){width="48.00000%" height="48.00000%"} ![The log-log plot of $h$ versus the relative error of eigenvalues for $\lambda_i,\; 1\leq i \leq 4$ with $\kappa = 0.1$ (asterisk), $\kappa = 1.0$ (circle), $\kappa = 10$ (plus sign) and $\kappa = 100$ (cross). The broken line represents the convergence rate.[]{data-label="fig:sigma"}](lam4.eps "fig:"){width="48.00000%" height="48.00000%"}
![Eigenfunctions corresponding to eigenvalues $\lambda_1$ and $\lambda_2$ in [Example 1]{} in the case of $(\beta^-, \beta^+)=(1,1000)$.[]{data-label="fig:circle-eigV"}](lam1-1.eps "fig:"){width="48.00000%" height="48.00000%"} ![Eigenfunctions corresponding to eigenvalues $\lambda_1$ and $\lambda_2$ in [Example 1]{} in the case of $(\beta^-, \beta^+)=(1,1000)$.[]{data-label="fig:circle-eigV"}](lam2-1.eps "fig:"){width="48.00000%" height="48.00000%"}
**Example 2**. Let the computational domain be $\Omega = [-1,1]^2$ with a star-shaped interface given by $\Gamma = \{(x,y) : \sqrt{x^2+y^2} - 0.2\sin(5\theta - \pi/5) - 0.5 =0\}$, where $\theta = \tan^{-1}(y/x)$. Our computation is performed on uniform meshes as in Figure \[fig:mesh2\]. Since the exact eigenvalues are not available, we use the numerical results on a sufficiently refined mesh with mesh size $h = 2^{-10}$ as the reference eigenvalues for the purpose of estimating the orders. Tables \[table:star1-\] and \[table:star1+\] contain the errors of the eigenvalues $\lambda_h$ for various mesh sizes $h$ for the interface problem with the coefficients $(\beta^-, \beta^+) = (1,1000), (1000,1)$. We display some eigenfunctions in Figure \[fig:star-eigV\].
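For implementation purposes, the star interface is conveniently encoded as a level-set function. The sketch below is an illustration, not the authors' code: it encodes the curve $r = 0.5 + 0.2\sin(5\theta - \pi/5)$ (the sign of the constant term is chosen so that the zero set is nonempty, since $r \geq 0$), uses `atan2` as a robust version of $\tan^{-1}(y/x)$, and classifies a mesh triangle as cut by $\Gamma$ with a simple vertex-based sign test:

```python
import math

def star_levelset(x, y):
    # Level-set description of the star interface: phi = 0 on Gamma,
    # phi < 0 in Omega^- (inside), phi > 0 in Omega^+ (outside).
    theta = math.atan2(y, x)  # robust arctan(y/x) covering all quadrants
    return math.hypot(x, y) - 0.2 * math.sin(5.0 * theta - math.pi / 5.0) - 0.5

def is_interface_triangle(vertices):
    # Vertex-based test: a triangle is flagged as cut by Gamma when the
    # level set changes sign over its three vertices.
    signs = [star_levelset(px, py) > 0.0 for px, py in vertices]
    return any(signs) and not all(signs)
```

A triangle entirely inside $\Omega^-$ or $\Omega^+$ yields uniform signs and is treated as a regular element.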
![Star-shaped interface with $h=1/2^{3}$ and $h= 1/2^{4}$.[]{data-label="fig:mesh2"}](mesh2-4.eps "fig:"){width="45.00000%" height="45.00000%"} ![Star-shaped interface with $h=1/2^{3}$ and $h= 1/2^{4}$.[]{data-label="fig:mesh2"}](mesh2-5.eps "fig:"){width="45.00000%" height="45.00000%"}
$\lambda_{ref}$
----------------- ---------------- ---------------- ---------------- ---------------- ----------------
43.206 46.794 (2.14) 44.313 (1.70) 43.465 (2.09) 43.253 (2.46) 43.218 (1.88)
97.442 103.736 (1.82) 99.028 (1.99) 97.805 (2.12) 97.531 (2.02) 97.461 (2.17)
97.442 105.890 (2.01) 100.338 (1.55) 98.030 (2.30) 97.553 (2.40) 97.472 (1.88)
128.947 139.100 (1.74) 131.402 (2.05) 129.423 (2.36) 129.061 (2.05) 128.972 (2.15)
128.955 141.697 (2.10) 132.611 (1.80) 129.465 (2.84) 129.069 (2.16) 128.980 (2.14)
144.481 153.170 (2.67) 146.479 (2.12) 144.916 (2.19) 144.583 (2.09) 144.503 (2.17)
172.374 187.286 (2.39) 175.982 (2.04) 173.131 (2.25) 172.545 (2.14) 172.412 (2.15)
172.374 190.745 (2.47) 176.841 (2.04) 173.385 (2.14) 172.564 (2.41) 172.420 (2.04)
219.650 247.605 (2.09) 226.494 (2.03) 220.993 (2.35) 219.963 (2.10) 219.723 (2.10)
219.652 248.667 (2.45) 227.195 (1.94) 221.279 (2.21) 219.977 (2.32) 219.728 (2.09)
: First ten eigenvalues by IFEM in Figure \[fig:mesh2\] in the case of $\beta^- = 1, \beta^+=1000$. The reference eigenvalues $\lambda_{ref}$ in the first column are computed with $h = 1/2^{10}$. The numbers in parentheses show convergence rates.[]{data-label="table:star1-"}
$\lambda_{ref}$
----------------- --------------- --------------- --------------- --------------- ---------------
6.052 6.096 (1.91) 6.062 (2.14) 6.054 (2.02) 6.052 (2.49) 6.052 (2.38)
30.527 31.250 (1.79) 30.677 (2.26) 30.567 (1.88) 30.534 (2.44) 30.528 (2.21)
32.388 32.862 (2.09) 32.498 (2.11) 32.413 (2.14) 32.393 (2.32) 32.389 (2.61)
34.867 35.532 (1.80) 35.035 (1.98) 34.905 (2.12) 34.875 (2.23) 34.868 (2.35)
42.751 44.057 (1.61) 43.073 (2.02) 42.827 (2.07) 42.766 (2.30) 42.754 (2.29)
45.710 46.864 (1.61) 45.999 (2.00) 45.770 (2.27) 45.722 (2.22) 45.712 (2.33)
54.380 56.156 (1.82) 54.807 (2.05) 54.466 (2.30) 54.399 (2.15) 54.384 (2.33)
57.901 59.472 (1.87) 58.286 (2.03) 57.987 (2.15) 57.920 (2.13) 57.904 (2.37)
62.358 64.218 (2.00) 62.821 (2.00) 62.462 (2.14) 62.381 (2.14) 62.363 (2.21)
66.220 69.027 (1.58) 66.831 (2.20) 66.384 (1.89) 66.249 (2.49) 66.226 (2.18)
: First ten eigenvalues by IFEM in Figure \[fig:mesh2\] in the case of $\beta^- = 1000, \beta^+=1$. The reference eigenvalues $\lambda_{ref}$ in the first column are computed with $h = 1/2^{10}$. The numbers in parentheses show convergence rates.[]{data-label="table:star1+"}
![Eigenfunctions corresponding to eigenvalues $\lambda_1$ and $\lambda_4$ in [Example 2]{} in the case of $(\beta^-, \beta^+)=(1000,1)$.[]{data-label="fig:star-eigV"}](star-1000-1-lam1-1.eps "fig:"){width="48.00000%" height="48.00000%"} ![Eigenfunctions corresponding to eigenvalues $\lambda_1$ and $\lambda_4$ in [Example 2]{} in the case of $(\beta^-, \beta^+)=(1000,1)$.[]{data-label="fig:star-eigV"}](star-1000-1-lam4-1.eps "fig:"){width="48.00000%" height="48.00000%"}
Appendix {#appendix .unnumbered}
========
We show how the eigenvalues of Example 1 in Section 6 can be determined analytically. Recall the domain $\Omega = \{ (r,\theta) : 0 \leq r \leq R_O, 0 \leq \theta < 2\pi\}$ and the interface $\Gamma = \{(r, \theta) : r = R_I, 0 \leq \theta < 2\pi\}$. The eigenfunction $u(x,y)$ is determined by separation of variables, i.e., $u(x,y) = R(r)\Theta(\theta)$. The model problem (\[eq:modelEq\]) is rewritten in polar coordinates as follows: $$\begin{aligned}
\label{eq:model1}
\frac{\partial^2 R}{\partial r^2}\Theta + \frac{1}{r}\frac{\partial R}{\partial r}\Theta + \frac{1}{r^2} R \frac{\partial^2 \Theta}{\partial \theta^2} &= - \frac{\lambda}{\beta} R\Theta ~~~\text{in}~~ \Omega^s,\quad s=\pm, \\ [R(r)]_{\Gamma} &= 0, ~~~~\left[\beta r \pd {R(r)}{r} \right]_{\Gamma} = 0, \label{eq:JumpCond} \\
R(r) &= 0 ~~~~~~\text{on}~~\partial \Omega. \label{eq:BoundCond}\end{aligned}$$ A reformulation of the equation (\[eq:model1\]) is $$\frac{r^2R^{''} + rR^{'} + \frac{\lambda}{\beta}r^2R}{R} = -\frac{\Theta^{''}}{\Theta} = m^2 ~~\text{in}~~\Omega^s,\quad s=\pm. \label{eq:model2}$$ The second relation in (\[eq:model2\]) gives $\Theta(\theta) = d_1 \cos m\theta + d_2 \sin m\theta$. It also establishes that $m$ is an integer since we must have the same value at $\theta = 0$ and $\theta = 2\pi$. The first relation in (\[eq:model2\]) is the Bessel equation $$r^2 R^{''} + r R^{'} +\left(\frac{\lambda}{\beta} r^2 - m^2\right) R = 0 ~~\text{in}~~\Omega^s, \quad s=\pm.$$ Recall that $\beta$ and $\lambda$ are positive by the properties of the model problem (\[eq:modelEq\]). We obtain $R(r)$ as follows: $$R(r) = \left\{\begin{array}{l l} c_1^{+}J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right) + c_2^{+}Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right), & \text{in}~\Omega^+, \\
c_1^{-}J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}r\right) + c_2^{-}Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}r\right), & \text{in}~\Omega^-,
\end{array}\right.$$ where $J_{m}$ and $Y_{m}$ are the Bessel functions of the first and second kind of order $m$. Since $J_{m}$ is analytic and $Y_{ m }$ is singular at the origin, we have $c_2^{-} = 0$. The coefficients $c_1^+, c_2^+, c_1^-$ are determined by (\[eq:JumpCond\]) and (\[eq:BoundCond\]). The condition (\[eq:BoundCond\]) leads to the equation $$c_1^{+}J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_O\right) + c_2^{+}Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_O\right) = 0 .
\label{eq:bd1}$$ By using the first relation of (\[eq:JumpCond\]), we obtain the equation $$c_1^{+}J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_I\right) + c_2^{+}Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_I\right) = c_1^{-}J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}R_I\right) .
\label{eq:bd2}$$ The second part of (\[eq:JumpCond\]) gives $$\begin{aligned}
\, &&\beta^{+}\left(c_1^{+}\frac{d}{d r}\left(J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right)\right) + c_2^{+}\frac{d}{d r}\left(Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right)\right)\right) \nonumber\\
= \, && \beta^{-}c_1^{-}\frac{d}{d r}\left(J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}r\right)\right)\;\;\;\;\; \text{on}\;\;\;r = R_I.
\label{eq:bd3}\end{aligned}$$ From the equations (\[eq:bd1\]),(\[eq:bd2\]), and (\[eq:bd3\]), we have a homogeneous matrix equation $$A \mathbf{c} = {0},
\label{eq:Det}$$ where $$A = \left[
\begin{smallmatrix}
J_m\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_O\right) & Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_O\right) & \qquad 0 \\
J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_I\right) & Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}R_I\right) & -J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}R_I\right) \\
\beta^{+}\frac{d}{d r}\left(J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right)\right)|_{r=R_I} & \beta^{+}\frac{d}{d r}\left(Y_{ m }\left(\sqrt{\frac{\lambda}{\beta^{+}}}r\right)\right)|_{r=R_I} & -\beta^{-}\frac{d}{d r}\left(J_{ m }\left(\sqrt{\frac{\lambda}{\beta^{-}}}r\right)\right)|_{r=R_I}
\end{smallmatrix}
\right]$$ and $\mathbf c = [c_1^{+}, c_2^{+}, c_1^{-}]^{T}$. A nonzero solution of (\[eq:Det\]) exists when the determinant of the matrix $A$ is zero. For each index $m$, the eigenvalues $\lambda$ from (\[eq:model1\]) coincide with the roots of the determinant of the matrix $A$, which can be easily calculated by any root-finding method such as the bisection method.
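The root-finding procedure just described can be sketched in a few lines. The following is a stdlib-only illustration, not the authors' code: it uses truncated power series for the integer-order Bessel functions $J_m$ and $Y_m$ (in practice one would call a library such as SciPy's `scipy.special`), central differences for the radial derivatives, and plain bisection; the bracketing interval must be supplied, e.g., by scanning the determinant for sign changes.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def jn(n, x, terms=40):
    # Bessel function of the first kind of integer order n (power series).
    return sum((-1.0) ** k * (x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def psi(m):
    # Digamma at a positive integer: psi(m) = -gamma + H_{m-1}.
    return -GAMMA + sum(1.0 / j for j in range(1, m))

def yn(n, x, terms=40):
    # Bessel function of the second kind of integer order n
    # (Abramowitz & Stegun 9.1.11).
    half = x / 2.0
    s = (2.0 / math.pi) * jn(n, x, terms) * math.log(half)
    for k in range(n):
        s -= (math.factorial(n - k - 1) / math.factorial(k)) / math.pi \
             * half ** (2 * k - n)
    for k in range(terms):
        s -= ((-1.0) ** k * (psi(k + 1) + psi(n + k + 1))
              / (math.factorial(k) * math.factorial(n + k))) / math.pi \
             * half ** (2 * k + n)
    return s

def d_dr(bessel, n, a, r, eps=1e-6):
    # Central-difference derivative of r -> bessel(n, a * r).
    return (bessel(n, a * (r + eps)) - bessel(n, a * (r - eps))) / (2.0 * eps)

def det_A(lam, m, beta_minus, beta_plus, R_I, R_O):
    # Determinant of the 3x3 matrix A; its roots in lambda are eigenvalues.
    ap = math.sqrt(lam / beta_plus)
    am = math.sqrt(lam / beta_minus)
    A = [[jn(m, ap * R_O), yn(m, ap * R_O), 0.0],
         [jn(m, ap * R_I), yn(m, ap * R_I), -jn(m, am * R_I)],
         [beta_plus * d_dr(jn, m, ap, R_I), beta_plus * d_dr(yn, m, ap, R_I),
          -beta_minus * d_dr(jn, m, am, R_I)]]
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def eigenvalue(m, bracket, beta_minus, beta_plus, R_I=0.38, R_O=1.0, tol=1e-10):
    # Bisection on det(A) over a bracket containing one sign change.
    a, b = bracket
    fa = det_A(a, m, beta_minus, beta_plus, R_I, R_O)
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = det_A(c, m, beta_minus, beta_plus, R_I, R_O)
        if fa * fc <= 0.0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)
```

As a sanity check, for equal coefficients $\beta^- = \beta^+$ the interface disappears and the determinant vanishes exactly at the squares of the zeros of $J_m$ (for $R_O = 1$), e.g. at $j_{0,1}^2 \approx 5.7832$.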
[10]{}
, [*Sobolev Spaces*]{}, 2nd ed., Elsevier, Amsterdam, 2003.
, [*Discontinuous Galerkin approximation of the Laplace eigenproblem*]{}, Comput. Methods Appl. Mech. Engrg. 195 (2006), pp. 3483–3503.
, [*Mixed and nonconforming finite element methods: implementation, postprocessing and error estimates*]{}, RAIRO Modél. Math. Anal. Numér. 19 (1985), pp. 7–32.
, [*Eigenvalue problems*]{}, Handb. Numer. Anal. II, North-Holland, Amsterdam, 1991.
, [*A combined nodal continuous-discontinuous finite element formulation for the Maxwell problem*]{}, Appl. Math. Comput. 218 (2011), pp. 4276–4294.
, [*Approximation of eigenvalues in mixed form, discrete compactness property, and application to hp mixed finite elements*]{}, Comput. Methods Appl. Mech. Engrg. 196 (2007), pp. 3672–3681.
, [*On the problem of spurious eigenvalues in the approximation of linear elliptic problems in mixed form*]{}, Math. Comp. 69 (2000), pp. 121–140.
,[ *A finite element method for interface problems in domains with smooth boundaries and interfaces*]{}, Adv. Comput. Math. 6 (1996), pp. 109–138.
, [*The mathematical theory of finite element methods*]{}, 3rd ed., Texts Appl. Math. 15, Springer, New York, 2008.
, [*Discontinuous bubble scheme for elliptic problems with jumps in the solution*]{}, Comput. Methods Appl. Mech. Engrg. 200 (2011), pp. 494–508.
, [*Optimal convergence analysis of an immersed interface finite element method*]{}, Adv. Comput. Math. 33 (2010), pp. 149–168.
, [*A course in functional analysis*]{}, 2nd ed., Springer-Verlag, Berlin, 1990.
, [*Conforming and nonconforming finite element methods for solving the stationary Stokes equations I*]{}, Rev. Fr. Autom. Inf. Rech. Oper. 7 (1973), pp. 33–75.
, [*A posteriori error estimates for non-conforming approximation of eigenvalue problems*]{}, Appl. Numer. Math. 62 (2012), pp. 580–591.
, [*Convergence of finite element method for linear second-order wave equations with discontinuous coefficients*]{}, Numer. Methods Partial Differential Equations 29 (2013), pp. 1522–1542.
, [*On spectral approximation. I. The problem of convergence*]{}, RAIRO Anal. Numér. 12 (1978), pp. 97–112.
, [*On spectral approximation. II. Error estimates for the Galerkin method*]{}, RAIRO Anal. Numér. 12 (1978), pp. 113–119.
, [*A convergent adaptive method for elliptic eigenvalue problems*]{}, SIAM J. Numer. Anal. 47 (2009), pp. 1067–1091.
, [*Immersed-interface finite-element methods for elliptic interface problems with nonhomogeneous jump conditions*]{}, SIAM J. Numer. Anal. 46 (2008), pp. 472–495.
, [*Convergence analysis of finite element methods for $H(curl; \Omega)$-elliptic interface problems*]{}, Numer. Math. 122 (2012), pp. 557–578.
, [*A numerical method for solving variable coefficient elliptic equation with interfaces*]{}, J. Comput. Phys. 202 (2005), pp. 411–445.
, [*Perturbation theory for linear operators*]{}, Classics in Mathematics, Springer-Verlag, Berlin, 1995.
, [*An analysis of a broken $P_1$-nonconforming finite element method for interface problems*]{}, SIAM J. Numer. Anal. 48 (2010), pp. 2117–2134.
, [*Solvability of diffraction problems in the classical sense*]{}, Trudy Mat. Inst. Steklov. 92 (1966), pp. 116–146.
, [*A posteriori and a priori error analysis for finite element approximations of self-adjoint elliptic eigenvalue problems*]{}, SIAM J. Numer. Anal. 38 (2000), pp. 608–625.
, [*ARPACK users’ guide: solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods*]{}, SIAM, Philadelphia, 1998.
, [*The immersed interface method for elliptic equations with discontinuous coefficients and singular sources*]{}, SIAM J. Numer. Anal. 31 (1994), pp. 1019–1044.
, [*An immersed finite element space and its approximation capability*]{}, Numer. Methods Partial Differential Equations 20 (2004), pp. 338–367.
, [*New Cartesian grid methods for interface problems using the finite element formulation*]{}, Numer. Math. 96 (2003), pp. 61–98.
, [*A locking-free immersed finite element method for planar elasticity interface problems*]{}, J. Comput. Phys. 247 (2013), pp. 228–247.
, [*Eigenvalue approximation by mixed and hybrid methods*]{}, Math. Comp. 36 (1981), pp. 427–453.
, [*Spectral approximation for compact operators*]{}, Math. Comput. 29 (1975), pp. 712–725.
, [*The immersed interface method for acoustic wave equations with discontinuous coefficients*]{}, Wave Motion 25 (1997), pp. 237–263.
---
abstract: 'Recently, several authors have advocated the use of rule learning algorithms to model multi-label data, as rules are interpretable and can be comprehended, analyzed, or qualitatively evaluated by domain experts. Many rule learning algorithms employ a heuristic-guided search for rules that model regularities contained in the training data and it is commonly accepted that the choice of the heuristic has a significant impact on the predictive performance of the learner. Whereas the properties of rule learning heuristics have been studied in the realm of single-label classification, there is no such work taking into account the particularities of multi-label classification. This is surprising, as the quality of multi-label predictions is usually assessed in terms of a variety of different, potentially competing, performance measures that cannot all be optimized by a single learner at the same time. In this work, we show empirically that it is crucial to trade off the consistency and coverage of rules differently, depending on which multi-label measure should be optimized by a model. Based on these findings, we emphasize the need for configurable learners that can flexibly use different heuristics. As our experiments reveal, the choice of the heuristic is not straightforward, because a search for rules that optimize a measure locally usually does not result in a model that maximizes that measure globally.'
author:
- |
***Preprint version**. To appear in Proceedings of the 22nd International Conference on Discovery Science, 2019*\
\
Michael Rapp
- Eneldo Loza Mencía
- Johannes Fürnkranz
bibliography:
- 'bibliography.bib'
title: 'On the Trade-off Between Consistency and Coverage in Multi-label Rule Learning Heuristics'
---
Introduction {#sec_introduction}
============
As many real-world classification problems require assigning more than one label to an instance, multi-label classification (MLC) has become a well-established topic in the machine learning community. There are various applications of MLC such as text categorization [@lewis1992; @klimt2004], the annotation of images [@boutell2004; @li2008] and music [@trohidis2008; @turnbull2008], as well as use cases in bioinformatics [@diplaris2005] and medicine [@pestian2007].
Rule learning algorithms are a well-researched approach to solve classification problems [@furnkranz2012]. In comparison to complex statistical methods, such as support vector machines or artificial neural networks, their main advantage is the interpretability of the resulting models. Rule-based models can easily be understood by humans and form a structured hypothesis space that can be analyzed and modified by domain experts. Ideally, rule-based approaches are able to yield insight into the application domain by revealing patterns and regularities hidden in the data and allow one to reason about why individual predictions have been made by a system. This is especially relevant in safety-critical domains, such as medicine, power systems, or financial markets, where malfunctions and unexpected behavior may entail the risk of health damage or financial harm.
### Motivation and goals. {#sec_motivation}
To assess the quality of multi-label predictions in terms of a single score, several commonly used performance measures exist. Even though some of them originate from measures used in binary or multi-class classification, different ways to aggregate and average the predictions for individual labels and instances — most prominently *micro-* and *macro-averaging* — exist in MLC. Some measures like *subset accuracy* are even unique to the multi-label setting. No studies that investigate the effects of using different rule learning heuristics in MLC and discuss how they affect different multi-label performance measures have been published so far.
In accordance with previous publications in single-label classification, we argue that all common rule learning heuristics basically trade off between two aspects, *consistency* and *coverage* [@furnkranz2005]. Our long-term goal is to better understand how these two aspects should be weighed to assess the quality of candidate rules during training if one is interested in a model that optimizes a certain multi-label performance measure. As a first step towards this goal, we present a method for flexibly creating rule-based models that are built with respect to certain heuristics. Using this method, we empirically analyze how different heuristics affect the models in terms of predictive performance and model characteristics. We demonstrate how models that aim to optimize a given multi-label performance measure can deliberately be trained by choosing a suitable heuristic. By comparing our results to a state-of-the-art rule learner, we emphasize the need for configurable approaches that can flexibly be tailored to different multi-label measures. Due to space limitations, we restrict ourselves to micro-averaged measures, as well as to Hamming and subset accuracy.
### Structure of this work. {#sec_structure}
We start in Section \[sec\_preliminaries\] by giving a formal definition of multi-label classification tasks as well as an overview of inductive rule learning and the rule evaluation measures that are relevant to this work. Based on these foundations, in Section \[sec\_algorithm\], we discuss our approach for flexibly creating rule-based classifiers that are built with respect to said measures. In Section \[sec\_evaluation\], we present the results of the empirical study we have conducted, before we provide an overview of related work in Section \[sec\_related\_work\]. Finally, we conclude in Section \[sec\_conclusion\] by recapitulating our results and giving an outlook on planned future work.
Preliminaries {#sec_preliminaries}
=============
MLC is a supervised learning problem in which the task is to associate an instance with one or several labels $\lambda_i$ out of a finite label space $\mathbb{L} = \left \{ \lambda_1, \dots, \lambda_n \right \}$, with $n = \left| \mathbb{L} \right|$ being the total number of predefined labels. An individual instance $\boldsymbol{x}_j$ is represented in attribute-value form, i.e., it consists of a vector $\boldsymbol{x}_j = \left( v_1, \dots, v_l \right) \in \mathbb{D} = A_1 \times \dots \times A_l$, where $A_i$ is a numeric or nominal attribute. Additionally, each instance $\boldsymbol{x}_j$ is associated with a binary label vector $\boldsymbol{y}_j = \left( y_1, \dots, y_n \right) \in \left \{ 0, 1 \right \}^n$, where $y_i$ indicates the presence ($1$) or absence ($0$) of label $\lambda_i$. Consequently, the training data set of a MLC problem can be defined as a set of tuples $T = \left\{ \left( \boldsymbol{x}_1, \boldsymbol{y}_1 \right), \dots, \left( \boldsymbol{x}_m, \boldsymbol{y}_m \right) \right\}$, with $m = \left| T \right|$ being the number of available training instances. The classifier function $g \left( . \right)$, which is deduced from a given training data set, maps an instance $\boldsymbol{x}$ to a predicted label vector $\boldsymbol{\hat{y}} = \left( \hat{y}_1, \dots, \hat{y}_n \right) \in \left \{ 0, 1 \right \}^n$.
Classification rules {#sec_rules}
--------------------
In this work, we are concerned with the induction of conjunctive, propositional rules $\boldsymbol{r}: H \leftarrow B$. The body $B$ of such a rule consists of one or several conditions that compare an attribute-value $v_i$ of an instance to a constant by using a relational operator such as $=$ (in case of nominal attributes), or $<$ and $\geq$ (in case of numerical attributes). On the one hand, the body of a conjunctive rule can be viewed as a predicate $B: \boldsymbol{x} \rightarrow \left \{ \text{true}, \text{false} \right \}$ that states whether an instance $\boldsymbol{x}$ satisfies all of the given conditions, i.e., whether the instance is *covered* by the rule or not. On the other hand, the head $H$ of a (single-label head) rule consists of a single label assignment ($\hat{y}_i = 0$ or $\hat{y}_i = 1$) that specifies whether the label $\lambda_i$ should be predicted as present ($1$) or absent ($0$).
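The body of a conjunctive rule is simply a conjunction of attribute tests and can be implemented directly as a predicate $B: \boldsymbol{x} \rightarrow \{\text{true}, \text{false}\}$. The following minimal sketch is an illustration (the operator symbols and the tuple encoding of conditions are assumptions of this sketch, not part of the paper):

```python
import operator

# Relational operators allowed in rule bodies: "=" for nominal attributes,
# "<" and ">=" for numerical attributes.
OPS = {"=": operator.eq, "<": operator.lt, ">=": operator.ge}

def make_body(conditions):
    """Build the cover predicate B from (attribute index, operator, constant)
    conditions; an instance is covered iff it satisfies all conditions."""
    def covers(x):
        return all(OPS[op](x[a], c) for a, op, c in conditions)
    return covers
```

For example, `make_body([(0, ">=", 1.0), (1, "=", "red")])` yields a predicate that covers exactly the instances whose first attribute is at least 1.0 and whose second attribute equals "red".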
Binary relevance method {#sec_binary_relevance}
-----------------------
In the present work, we use the *binary relevance* transformation method (cf. [@boutell2004]), which reduces MLC to binary classification by treating each label $\lambda_i \in \mathbb{L}$ of a MLC problem independently. For each label $\lambda_i$, we aim at learning rules that predict the minority class $t_i \in \left \{ 0, 1 \right \}$, i.e., rules that contain the label assignment $\hat{y}_i = t_i$ in their head. We define $t_i = 1$ if the corresponding label $\lambda_i$ is associated with less than 50% of the training instances, and $t_i = 0$ otherwise.
A rule-based classifier — also referred to as a *theory* — combines several rules into a single model. In this work, we use (unordered) rule sets containing all rules that have been induced for the individual labels. Such a rule set can be considered as a disjunction of conjunctive rules (DNF). At prediction time, all rules that cover a given instance are taken into account to determine the predicted label vector $\boldsymbol{\hat{y}}$. An individual element $\hat{y}_i \in \boldsymbol{\hat{y}}$, that corresponds to the label $\lambda_i$, is set to the minority class $t_i$ if at least one of the covering rules contains the label assignment $\hat{y}_i = t_i$ in its head. Otherwise, the element is set to the majority class $1 - t_i$. As all rules that have been induced for a label $\lambda_i$ have the same head, no conflicts may arise in the process.
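The prediction scheme just described can be sketched as follows. The rule and data representations (cover predicates paired with label indices, a `minority` vector holding the $t_i$) are illustrative assumptions of this sketch:

```python
def predict(rule_set, x, minority):
    """Unordered rule set prediction under binary relevance.

    rule_set: list of (covers, i) pairs; each rule predicts the minority
    class t_i = minority[i] for label lambda_i whenever its body covers x.
    """
    y_hat = [1 - t for t in minority]       # default: majority class 1 - t_i
    for covers, i in rule_set:
        if covers(x):
            y_hat[i] = minority[i]          # at least one covering rule fires
    return y_hat
```

Since all rules induced for a label share the same head, overwriting `y_hat[i]` multiple times is harmless, mirroring the conflict-freeness noted above.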
Bipartition evaluation functions {#sec_bipartitions}
--------------------------------
To assess the quality of individual rules, usually bipartition evaluation functions $\delta: \mathbb{N}^{2 \times 2} \rightarrow \mathbb{R}$ are used [@tsoumakas2009]. Such functions — also called *heuristics* — map a two-dimensional confusion matrix to a heuristic value $h \in \left[ 0, 1 \right]$. A confusion matrix consists of the number of *true positive* (${\textit{TP}}$), *false positive* (${\textit{FP}}$), *true negative* (${\textit{TN}}$), and *false negative* (${\textit{FN}}$) labels that are predicted by a rule. We calculate the example-wise aggregated confusion matrix $C_{\boldsymbol{r}}$ for a rule $\boldsymbol{r}: \hat{y}_i \leftarrow B$ as $$\label{eq_confusion_matrix}
\begin{split}
C_{\boldsymbol{r}} & \coloneqq \left( \begin{array}{cc}
{\textit{TP}}& {\textit{FP}}\\
{\textit{FN}}& {\textit{TN}}\end{array} \right) = C_i^1 \oplus \dots \oplus C_i^j \oplus \dots \oplus C_i^m
\end{split}$$ where $\oplus$ denotes the cell-wise addition of atomic confusion matrices $C_i^j$ that correspond to label $\lambda_i$ and instance $\boldsymbol{x}_j$.
Further, let $y_i^j$ and $\hat{y}_i^j$ denote the absence ($0$) or presence ($1$) of label $\lambda_i$ for an instance $\boldsymbol{x}_j$ according to the ground truth and a rule’s prediction, respectively. Based on these variables, we calculate the elements of $C_i^j$ as $$\label{eq_atomic_confusion_matrix}
\begin{array}{cc}
{\textit{TP}}_i^j = \llbracket y_i^j = t_i \wedge \hat{y}_i^j = t_i \rrbracket \quad & {\textit{FP}}_i^j = \llbracket y_i^j \neq t_i \wedge \hat{y}_i^j = t_i \rrbracket \\
{\textit{FN}}_i^j = \llbracket y_i^j = t_i \wedge \hat{y}_i^j \neq t_i \rrbracket \quad & {\textit{TN}}_i^j = \llbracket y_i^j \neq t_i \wedge \hat{y}_i^j \neq t_i \rrbracket
\end{array}$$ where $\llbracket x \rrbracket = 1$, if $x$ is true, $0$ otherwise.
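The atomic confusion matrices and their cell-wise aggregation can be sketched as follows (illustrative only; the function names are our own):

```python
def atomic_confusion(y, y_hat, t):
    """(TP, FP, FN, TN) for one label/instance pair, per the definitions
    above: a prediction counts as positive iff it equals the minority t."""
    tp = int(y == t and y_hat == t)
    fp = int(y != t and y_hat == t)
    fn = int(y == t and y_hat != t)
    tn = int(y != t and y_hat != t)
    return (tp, fp, fn, tn)

def aggregate(matrices):
    """Cell-wise sum (the oplus operator) of atomic confusion matrices."""
    return tuple(sum(cells) for cells in zip(*matrices))

# A rule with t_i = 1, ground truth [1, 0, 1], predictions [1, 1, 0]:
ms = [atomic_confusion(1, 1, 1), atomic_confusion(0, 1, 1),
      atomic_confusion(1, 0, 1)]
print(aggregate(ms))  # (1, 1, 1, 0)
```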
Rule learning heuristics {#sec_heuristics}
------------------------
A good rule learning heuristic should (among other aspects) take both the *consistency* and the *coverage* of a rule into account [@janssen2010; @furnkranz2012]. On the one hand, rules should be consistent, i.e., their prediction should be correct for as many of the covered instances as possible. On the other hand, rules with great coverage, i.e., rules that cover a large number of instances, tend to be more reliable, even though they may be less consistent.
The *precision* metric exclusively focuses on the consistency of a rule. It calculates as the fraction of correct predictions among all covered instances: $$\label{eq_precision}
\begin{split}
\delta_{prec} \left( C \right) \coloneqq & \frac{{\textit{TP}}}{{\textit{TP}}+ {\textit{FP}}}
\end{split}$$
In contrast, *recall* focuses on the coverage of a rule. It measures the fraction of covered instances among all — covered and uncovered — instances for which the label assignment in the rule’s head is correct: $$\label{eq_recall}
\begin{split}
\delta_{rec} \left( C \right) \coloneqq & \frac{{\textit{TP}}}{{\textit{TP}}+ {\textit{FN}}}
\end{split}$$
The *F-measure* calculates as the (weighted) harmonic mean of precision and recall. It allows trading off the consistency and coverage of a rule via the user-configurable parameter $\beta$: $$\label{eq_fmeasure}
\begin{split}
\delta_F \left( C \right) \coloneqq & \frac{\beta^2 + 1}{\frac{\beta^2}{\delta_{rec} \left( C \right)} + \frac{1}{\delta_{prec} \left( C \right)}} \text{, with } \beta \in \left[ 0, +\infty \right)
\end{split}$$
As an alternative to the F-measure, we use different parameterizations of the *m-estimate* in this work. It is defined as $$\label{eq_mestimate}
\begin{split}
\delta_m \left( C \right) \coloneqq & \frac{{\textit{TP}}+ m \cdot \frac{P}{P + N}}{{\textit{TP}}+ {\textit{FP}}+ m} \text{, with } m \geq 0
\end{split}$$ where $P = {\textit{TP}}+ {\textit{FN}}$ and $N = {\textit{FP}}+ {\textit{TN}}$. Depending on the parameter $m$, this measure trades off precision and *weighted relative accuracy* (WRA). If $m = 0$, it is equivalent to precision and therefore focuses on consistency. As $m$ approaches $+\infty$, it converges to WRA, which puts more emphasis on coverage [@furnkranz2012].
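The heuristics above follow directly from the confusion matrix cells; a sketch (not tied to any particular rule learning library):

```python
def precision(tp, fp, fn, tn):
    return tp / (tp + fp)

def recall(tp, fp, fn, tn):
    return tp / (tp + fn)

def f_measure(tp, fp, fn, tn, beta=1.0):
    p, r = precision(tp, fp, fn, tn), recall(tp, fp, fn, tn)
    return (beta**2 + 1) / (beta**2 / r + 1 / p)

def m_estimate(tp, fp, fn, tn, m):
    p_total, n_total = tp + fn, fp + tn
    return (tp + m * p_total / (p_total + n_total)) / (tp + fp + m)

C = (6, 2, 4, 8)  # TP, FP, FN, TN
print(precision(*C), recall(*C))  # 0.75 0.6
# m = 0 reproduces precision; large m approaches P / (P + N) = 0.5 here.
```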
Induction of rule-based theories {#sec_algorithm}
================================
For our experimental study, we implemented a method that generates a large number of rules for a given training data set in a short amount of time (cf. Section \[sec\_rule\_generation\]).[^1] The rules should ideally be unbiased, i.e., not biased in favor of a certain heuristic, and they should be diverse, i.e., general rules should be included as well as specific ones. Given that these requirements are met, we consider the generated rules to be representative samples of the space of all possible rules, which is far too large to be explored exhaustively. We use the generated candidate rules as a starting point for building different theories. Each theory consists of a subset of rules that are selected with respect to a specific heuristic (cf. Section \[sec\_candidate\_selection\]) and filtered according to a threshold (cf. Section \[sec\_thresholding\]). Whereas the first step yields a theory with great coverage, the threshold selection aims at improving its consistency.
Generation of candidate rules {#sec_rule_generation}
-----------------------------
As noted in Section \[sec\_binary\_relevance\], we consider each label $\lambda_i \in \mathbb{L}$ of a MLC problem independently. For each of the labels we train multiple random forests [@breiman2001], using varying configuration parameters, and extract rules from their decision trees.[^2] As illustrated in Algorithm \[alg\_rule\_generation\], we repeat the process until a predefined number of rules $\gamma$ has been generated.
Each random forest consists of a predefined number of decision trees (we specify $I = 10$). To ensure that we are able to generate diverse rules later on, we vary the configuration parameter $depth \in \left[ 0, 8 \right]$ that specifies the maximum depth of trees (unrestricted, if $depth = 0$) (cf. Algorithm \[alg\_rule\_generation\], `trainForest`). For building individual trees, we only take a subset of the available training instances and attributes into account, which guarantees a diverse set of trees. Bagging is used for sampling the training instances, i.e., if $m$ instances are available in total, $m \cdot P$ instances ($P = 100\%$, by default) are drawn randomly with replacement. Additionally, each time a new node is added to a decision tree, only a random selection of $K$ out of $l$ attributes ($K = \log_2 \left( l - 1 \right) + 1$, by default) is considered.
\[alg\_rule\_generation\] $R \gets \emptyset$; while $\left| R \right| < \gamma$: train a random forest with a randomly chosen maximum $depth$ (`trainForest`) and add the rules extracted from its trees to $R$ (`extractRules`); finally, return $R$.
To extract rules from a random forest (cf. Algorithm \[alg\_rule\_generation\], `extractRules`), we traverse all paths from the root node to a leaf in each of its decision trees. We only consider paths that lead to a leaf where the minority class $t_i$ is predicted. As a consequence, all rules that are generated with respect to a certain label $\lambda_i$ have the same head $\hat{y}_i = t_i$. The body of a rule consists of a conjunction of all conditions encountered on the path from the root to the corresponding leaf.
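The path-to-rule extraction can be sketched as follows, assuming a toy decision tree represented as nested dicts (the authors' actual tree data structure is not specified here):

```python
def extract_rules(node, t_i, conds=()):
    """Enumerate root-to-leaf paths, keeping only bodies whose leaf
    predicts the minority class t_i. Each condition is a triple
    (attribute, comparison, threshold)."""
    if "leaf" in node:  # leaf node stores the predicted class
        return [list(conds)] if node["leaf"] == t_i else []
    attr, thr = node["attr"], node["thr"]
    left = extract_rules(node["le"], t_i, conds + ((attr, "<=", thr),))
    right = extract_rules(node["gt"], t_i, conds + ((attr, ">", thr),))
    return left + right

tree = {"attr": "temp", "thr": 38.0,
        "le": {"leaf": 0},
        "gt": {"attr": "age", "thr": 60,
               "le": {"leaf": 1}, "gt": {"leaf": 0}}}
print(extract_rules(tree, t_i=1))  # [[('temp', '>', 38.0), ('age', '<=', 60)]]
```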
Candidate subset selection {#sec_candidate_selection}
--------------------------
Like many traditional rule learning algorithms, we use a *separate-and-conquer* (SeCo) strategy for selecting candidate rules, i.e., new rules are added to the theory until all training instances are covered (or until the theory describes the training data sufficiently according to some stopping criterion). Whenever a new rule is added to the theory, the training instances it covers are removed (“separate” step), and the next rule is chosen according to its performance on the remaining instances (“conquer” step).
To create different theories, we select subsets of the rules that have been generated earlier (cf. Section \[sec\_rule\_generation\]). We apply the SeCo strategy for each label independently, i.e., for each label $\lambda_i$ we take all rules with head $\hat{y}_i = t_i$ into account. Among these candidates we successively select the best rule according to a heuristic $\delta$ (cf. Section \[sec\_heuristics\]) until all *positive* training instances $P_i = \left \{ \left( \boldsymbol{x}, \boldsymbol{y} \right) \in T \mid y_i = t_i \right \}$, with respect to label $\lambda_i$, are covered. To measure the quality of a candidate $\boldsymbol{r}$ according to $\delta$, we only take yet uncovered instances into account for computing the confusion matrix $C_{\boldsymbol{r}}$. If two candidates evaluate to the same heuristic value, we prefer the one that

1.  covers more true positives, or
2.  contains fewer conditions in its body.
Whenever a new rule is added, the overall coverage of the theory increases, as more positive training instances are covered. However, the rule may also cover some of the *negative* instances $N_i = T \setminus P_i$. As the rule’s prediction is incorrect in such cases, the consistency of the theory may decrease.
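The SeCo selection loop can be sketched as follows (simplified: rules are represented by the set of positive instances they cover, and the tie-breaking by true positives or body length is left out):

```python
def seco_select(candidates, positives, covers, delta):
    """Greedy SeCo loop: repeatedly pick the best rule with respect to
    the yet-uncovered positives until all positives are covered."""
    theory, uncovered = [], set(positives)
    remaining = list(candidates)
    while uncovered and remaining:
        best = max(remaining, key=lambda r: delta(r, uncovered))
        theory.append(best)
        remaining.remove(best)
        uncovered -= {x for x in uncovered if covers(best, x)}  # "separate"
    return theory

# Hypothetical rules, each given as the set of positives it covers:
rules = [{1, 2}, {2, 3, 4}, {5}]
covers = lambda r, x: x in r
delta = lambda r, uncovered: len(r & uncovered)  # newly covered positives
print(seco_select(rules, {1, 2, 3, 4, 5}, covers, delta))
```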
Threshold selection {#sec_thresholding}
-------------------
As described in Section \[sec\_candidate\_selection\], we use a SeCo strategy to select rules until all positive training instances are covered for each label. In this way, the coverage of the resulting theory is maximized at the expense of consistency, because each rule contributes to the overall coverage but might introduce wrong predictions for some instances. To trade off between these aspects, a threshold $\phi$ can optionally be specified that aims at diminishing the effects of inconsistent rules. It is compared to a heuristic value that is calculated for each rule according to the heuristic $\delta$. For calculating the heuristic value, the rule’s predictions on the entire training data set are taken into account. This is different from the candidate selection discussed in Section \[sec\_candidate\_selection\], where instances that are already covered by previously selected rules are not considered. Because the candidate selection aims at selecting non-redundant rules that cover the positive training instances as uniformly as possible, it considers rules in the context of their predecessors. In contrast, the threshold $\phi$ is applied at prediction time, when no order is imposed on the rules, i.e., all rules whose heuristic value exceeds the threshold contribute equally to the prediction.
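A sketch of the thresholding step, including the fraction-based choice of $\phi$ used later in the experiments (the helper names are our own):

```python
import math

def threshold_for_fraction(values, fraction):
    """Choose phi such that at least `fraction` of the rules have a
    heuristic value h >= phi (cf. the experimental setup)."""
    ranked = sorted(values, reverse=True)
    k = max(1, math.ceil(fraction * len(ranked)))
    return ranked[k - 1]

def filter_rules(rules_with_h, phi):
    """Keep only rules whose heuristic value on the full training set
    reaches the threshold phi."""
    return [rule for rule, h in rules_with_h if h >= phi]

h_values = [0.9, 0.8, 0.5, 0.2]
phi = threshold_for_fraction(h_values, 0.5)  # phi = 0.8
print(filter_rules(list(zip("abcd", h_values)), phi))  # ['a', 'b']
```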
Evaluation {#sec_evaluation}
==========
In this section, we present an empirical study that emphasises the need to use varying heuristics for candidate selection and filtering to learn theories that are tailored to specific multi-label measures. We further compare our method to different baselines to demonstrate the benefits of being able to flexibly adjust a learner to different measures, rather than employing a general-purpose learner.
Experimental setup {#sec_experimental_setup}
------------------
We applied our method to eight different data sets taken from the Mulan project.[^3] We set the minimum number of rules to be generated to 300,000 (cf. Algorithm \[alg\_rule\_generation\], parameter $\gamma$). For candidate selection according to Section \[sec\_candidate\_selection\], we used the m-estimate (cf. Equation \[eq\_mestimate\]) with $m = 0, 2^1, 2^2, \dots, 2^{19}$. For each of these variants, we applied varying thresholds $\phi$ according to Section \[sec\_thresholding\]. The thresholds have been chosen such that they are satisfied by at least $100\%, 95\%, \dots, 5\%$ of the selected rules. All results have been obtained using 10-fold cross validation.
In addition to the m-estimate, we also used the F-measure (cf. Equation \[eq\_fmeasure\]) with varying $\beta$-parameters. As the conclusions drawn from these experiments are very similar to those for the m-estimate, we focus on the latter at this point.
Among the performance measures that we report are micro-averaged precision and recall. Given a global confusion matrix $C \coloneqq C_1^1 \oplus \dots \oplus C_i^j \oplus \dots \oplus C_n^m$ that consists of the ${\textit{TP}}$, ${\textit{FP}}$, ${\textit{TN}}$, and ${\textit{FN}}$ aggregated over all test instances $\boldsymbol{x}_j$ and labels $\lambda_i$, these two measures are calculated as defined in Equations \[eq\_precision\] and \[eq\_recall\]. Moreover, we report the micro-averaged F1 score (cf. Equation \[eq\_fmeasure\] with $\beta = 1$) as well as Hamming and subset accuracy. Hamming accuracy calculates as $$\label{eq_hamming_accuracy}
\begin{split}
\delta_{Hamm} \left( C \right) & \coloneqq \frac{{\textit{TP}}+ {\textit{TN}}}{{\textit{TP}}+ {\textit{FP}}+ {\textit{TN}}+ {\textit{FN}}}
\end{split}$$ whereas subset accuracy differs from the other measures, because it is computed instance-wise. Given true label vectors $Y = \left( \boldsymbol{y}_1, \dots, \boldsymbol{y}_m \right)$ and predicted label vectors $\hat{Y} = \left( \boldsymbol{\hat{y}}_1, \dots, \boldsymbol{\hat{y}}_m \right)$, it measures the fraction of perfectly labeled instances: $$\label{eq_subset_accuracy}
\begin{split}
\delta_{acc} \left( Y, \hat{Y} \right) & \coloneqq \frac{1}{m} \sum_j \llbracket \boldsymbol{y}_j = \hat{\boldsymbol{y}}_j \rrbracket
\end{split}$$
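Both measures are straightforward to compute from binary label matrices (an illustrative sketch):

```python
def hamming_accuracy(Y, Y_hat):
    """Fraction of correctly predicted individual labels."""
    correct = sum(y == p for row_y, row_p in zip(Y, Y_hat)
                  for y, p in zip(row_y, row_p))
    total = sum(len(row) for row in Y)
    return correct / total

def subset_accuracy(Y, Y_hat):
    """Fraction of instances whose label vector is predicted perfectly."""
    return sum(y == p for y, p in zip(Y, Y_hat)) / len(Y)

Y = [[1, 0], [0, 1]]
P = [[1, 0], [1, 1]]
print(hamming_accuracy(Y, P), subset_accuracy(Y, P))  # 0.75 0.5
```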
Analysis of different parameter settings {#sec_analysis}
----------------------------------------
For a broad analysis, we trained $20^2=400$ theories per data set using the same candidate rules, but selecting and filtering them differently by using varying combinations of the parameters $m$ and $\phi$ as discussed in Section \[sec\_experimental\_setup\]. We visualize the performance and characteristics of the resulting models as two-dimensional matrices of scores (cf. e.g. Figure \[fig\_evaluation\_avg\_hamm\_subs\]). One dimension corresponds to the used $m$-parameter, the other refers to the threshold $\phi$, respectively.
![Ranks and standard deviation of average ranks over all data sets according to Hamming and subset accuracy using different parameters $m$ (horizontal axis) and $\phi$ (vertical axis). Best parameters for different data sets specified by red signs.[]{data-label="fig_evaluation_avg_hamm_subs"}](img/m-estimate/avg_hamming_accuracy.pdf)
![Ranks and standard deviation of average ranks over all data sets according to micro-averaged precision, recall, and F1-measure. Best parameters for different data sets specified by red signs.[]{data-label="fig_evaluation_avg_micro_measures"}](img/m-estimate/avg_micro_f1.pdf)
Some of the used data sets (<span style="font-variant:small-caps;">cal500</span>, <span style="font-variant:small-caps;">flags</span>, and <span style="font-variant:small-caps;">yeast</span>) contain very frequent labels for which the minority class is $t_i = 0$. This is rather atypical in MLC and causes the unintuitive effect that the removal of individual rules results in a theory with greater recall and/or lower precision. To be able to compare different parameter settings across multiple data sets, we worked around this effect by altering the affected data sets, i.e., inverting all labels for which $t_i = 0$.
### Predictive performance. {#sec_predictive_performance}
Figures \[fig\_evaluation\_avg\_hamm\_subs\] and \[fig\_evaluation\_avg\_micro\_measures\] depict the average ranks of the tested configurations according to different performance measures. The rank of each of the 400 parameter settings was determined for each data set separately and then averaged over all data sets. The depicted standard deviations show that the optimal parameter settings for a respective measure may vary depending on the data set. However, for each measure there is an area in the parameter space where a good setting can be found with high certainty.
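The rank-averaging procedure can be sketched as follows (our own helper names; ties are ignored for simplicity):

```python
def ranks(scores):
    """Convert scores to ranks, where rank 1 is the highest score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0.0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def average_ranks(score_table):
    """Rank the parameter settings per data set (rows), then average
    the ranks of each setting (columns) across all data sets."""
    per_dataset = [ranks(row) for row in score_table]
    n = len(per_dataset)
    return [sum(col) / n for col in zip(*per_dataset)]

# Two data sets, three parameter settings (higher score = better):
table = [[0.9, 0.7, 0.8], [0.6, 0.8, 0.7]]
print(average_ranks(table))  # [2.0, 2.0, 2.0]
```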
As can clearly be seen, precision and recall are competing measures. The former is maximized by choosing small values for $m$ and filtering extensively, whereas the latter benefits from large values for $m$ and no filtering. Interestingly, setting $m = 0$, i.e., selecting candidates according to the precision metric, does not result in the models with the highest overall precision. This is in accordance with Figure \[fig\_evaluation\_avg\_f1\], where the models with the highest F1 score do not result from using the F1-measure for candidate selection. Instead, optimizing the F1 score requires choosing small values for $m$ to trade off between consistency and coverage. The same applies to Hamming and subset accuracy, albeit both of these measures demand putting even more weight on consistency and filtering more extensively than F1.
![Ranks and standard deviation of average ranks over all data sets according to micro-averaged F1-measure, when using the F-measure with varying $\beta$-parameters (horizontal axis) instead of the m-estimate for candidate selection. Best parameters for different data sets specified by red signs.[]{data-label="fig_evaluation_avg_f1"}](img/f-measure/avg_micro_f1.pdf)
![Ranks and standard deviation of average ranks over all data sets regarding the number of rules and conditions. A smaller rank means more rules or conditions.[]{data-label="fig_evaluation_avg_number_rules_conditions"}](img/m-estimate/stats/avg_number_rules.pdf)
*Figure \[fig\_example\_rulesets\]:* Rule sets for the label $Cough$. Top: $m = 16, \phi = 0.3$, Mi. Precision = 74.07%, Mi. Recall = 78.26% (five rules of the form $Cough \leftarrow \dots$). Bottom: $m = 262144, \phi = 1.0$, Mi. Precision = 65.61%, Mi. Recall = 89.57% (four rules of the form $Cough \leftarrow \dots$).
### Model characteristics. {#sec_model_characteristics}
Besides the predictive performance, we are also interested in the characteristics of the theories. Figure \[fig\_evaluation\_avg\_number\_rules\_conditions\] shows how the number of rules in a theory, as well as the average number of conditions, is affected by varying parameter settings. The number of rules declines when using greater values for the parameter $m$ and/or smaller values for $\phi$, resulting in less complex theories that humans can comprehend more easily. The average number of conditions is mostly affected by the parameter $m$.
Figure \[fig\_example\_rulesets\] provides an example of how different parameters affect the model characteristics. It shows the rules for predicting the same label as induced by two fundamentally different approaches. The first approach ($m = 16, \phi = 0.3$) reaches high scores according to the F1-measure, Hamming accuracy, and subset accuracy, whereas the second one ($m = 262144, \phi = 1.0$) results in high recall.
Baseline comparison {#sec_baseline_comparison}
-------------------
Although the goal of this work is not to develop a method that generally outperforms existing rule learners, we want to ensure that we achieve competitive results. For this reason, we compared our method to JRip, Weka’s re-implementation of Ripper [@cohen1995], using the binary relevance method. By default, Ripper uses *incremental reduced error pruning* (IREP) and post-processes the induced rule set. Although our approach could make use of such optimizations, this is beyond the scope of this work. For a fair comparison, we also report the results of JRip without using IREP ($P = \textit{false}$) and/or with post-processing turned off ($O = 0$).
Note that we do not consider the random forests from which we generate rules (cf. Section \[sec\_rule\_generation\]) to be relevant baselines. This is because random forests use voting to make a prediction, which is fundamentally different from rule learners that model a DNF. Also, we train random forests consisting of a very large number of trees with varying depths to generate diverse rules. In our experience, these random forests perform poorly compared to commonly used configurations.
We tested three different configurations of our approach. The parameters $m$ and $\phi$ used by these approaches have been determined on a validation set by using nested 5-fold cross validation on the training data. For the approach $M_F$, the parameters have been chosen such that the F1-measure is maximized. The approaches $M_H$ and $M_S$ were tuned with respect to Hamming and subset accuracy, respectively.
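The nested tuning scheme described above can be sketched as follows. The `evaluate` callback and the candidate grid are hypothetical stand-ins for training the rule learner with given parameters and scoring it on held-out data; they are not part of the actual implementation.

```python
import itertools
import random

def nested_cv_select(data_indices, param_grid, evaluate,
                     outer_folds=5, inner_folds=5, seed=0):
    """Tune (m, phi) via nested cross-validation.

    `evaluate(train, test, params)` is a hypothetical stand-in for training
    the rule learner with `params` on the examples indexed by `train` and
    returning a score (e.g. micro-averaged F1) on `test`. Returns the mean
    outer-fold score obtained with the per-fold tuned parameters.
    """
    rng = random.Random(seed)
    indices = list(data_indices)
    rng.shuffle(indices)
    folds = [indices[i::outer_folds] for i in range(outer_folds)]
    keys = list(param_grid)
    outer_scores = []
    for k in range(outer_folds):
        test_set = set(folds[k])
        train = [i for i in indices if i not in test_set]
        # Inner CV: choose the parameter combination with the best mean score.
        best_params, best_score = None, float("-inf")
        for values in itertools.product(*(param_grid[key] for key in keys)):
            params = dict(zip(keys, values))
            inner = [train[i::inner_folds] for i in range(inner_folds)]
            scores = []
            for j in range(inner_folds):
                val_set = set(inner[j])
                inner_train = [i for i in train if i not in val_set]
                scores.append(evaluate(inner_train, inner[j], params))
            mean_score = sum(scores) / len(scores)
            if mean_score > best_score:
                best_params, best_score = params, mean_score
        # Score on the outer test fold with the tuned parameters.
        outer_scores.append(evaluate(train, folds[k], best_params))
    return sum(outer_scores) / len(outer_scores)
```

The grid and the scoring measure (F1, Hamming accuracy, or subset accuracy) determine which of the variants $M_F$, $M_H$, and $M_S$ this procedure produces.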
According to Table \[table\_baselines\], our method is able to achieve reasonable predictive performance. With respect to the measure they try to optimize, our approaches generally rank before JRip with optimizations turned off ($R_3$), which is the competitor that is conceptually closest to our method. Although IREP definitely has a positive effect on the predictive performance, our approaches also tend to outperform JRip with IREP enabled, but without using post-processing ($R_2$). Despite the absence of advanced pruning and post-processing techniques, our approaches are even able to surpass the fully fledged variant of JRip ($R_1$) on some data sets. We consider these results a clear indication that the ability to flexibly adapt the heuristic used by a rule learner, which JRip does not provide, is indispensable if one aims at deliberately optimizing a specific multi-label performance measure.
Related work {#sec_related_work}
============
Several rule-based approaches to multi-label classification have been proposed in the literature. On the one hand, there are methods based on descriptive rule learning, such as association rule discovery [@thabtah2004; @thabtah2006; @li2008; @lakkaraju2016], genetic algorithms [@allamanis2013; @cano2013], or evolutionary classification systems [@arunadevi2011; @avila2010]. On the other hand, there are algorithms that adopt the separate-and-conquer strategy used by many traditional rule learners for binary or multi-class classification, e.g., by Ripper [@cohen1995], and transfer it to MLC [@mencia2016; @rapp2018]. Whereas in descriptive rule learning one usually does not aim at discovering rules that minimize a certain (multi-label) loss, the latter approaches employ a heuristic-guided search for rules that optimize a given rule learning heuristic and hence could benefit from the results of this work.
Similar to our experiments, empirical studies aimed at discovering optimal rule learning heuristics have been published in the realm of single-label classification [@janssen2008; @janssen2010]. Moreover, to investigate the properties of bipartition evaluation functions, ROC space isometrics have been proven to be a helpful tool [@flach2003; @furnkranz2003]. They have successfully been used in the literature to study the effects of using different heuristics in separate-and-conquer algorithms [@furnkranz2005], or for ranking and filtering rules [@furnkranz2004].
Conclusions {#sec_conclusion}
===========
In this work, we presented a first empirical study that thoroughly investigates the effects of using different rule learning heuristics for candidate selection and filtering in the context of multi-label classification. As commonly used multi-label measures, such as micro-averaged F1, Hamming accuracy, or subset accuracy, require putting more weight on the consistency of rules than on their coverage, models that perform well with respect to these measures are usually small and tend to contain specific rules. This is beneficial in terms of interpretability, as less complex models are assumed to be easier for humans to understand.
As our main contribution, we emphasise the need to flexibly trade off the consistency and coverage of rules, e.g., by using parameterized heuristics like the m-estimate, depending on the multi-label measure that should be optimized by the model. Our study revealed that the choice of the heuristic is not straightforward, because selecting rules that minimize a certain loss function locally does not necessarily result in that loss being optimized globally. For example, selecting rules according to the F1-measure does not necessarily result in the overall F1 score being maximized. For optimal results, the trade-off between consistency and coverage should be fine-tuned depending on the data set at hand. However, our results indicate that, even across different domains, the optimal settings for maximizing a measure can often be found in the same region of the parameter space.
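The m-estimate referred to above interpolates between precision (pure consistency) and the a-priori positive rate as $m$ grows. A minimal sketch of the standard single-label form of the measure (the multi-label aggregation used in the experiments above is not reproduced here):

```python
def m_estimate(tp, fp, pos, neg, m):
    """m-estimate of a rule covering `tp` positive and `fp` negative
    examples, with `pos`/`neg` total positives/negatives in the data.
    m = 0 recovers precision; large m pulls the value towards the
    a-priori positive rate, rewarding coverage over consistency."""
    prior = pos / (pos + neg)
    return (tp + m * prior) / (tp + fp + m)
```

Tuning $m$ thus moves the selection criterion smoothly along the consistency-coverage axis, which is exactly the degree of freedom our study argues a multi-label rule learner should expose.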
In this work, we restricted our study to DNFs, i.e., models that consist of non-conflicting rules that all predict the same outcome for an individual label. On the one hand, this restriction simplifies the implementation and comprehensibility of the learner, as no conflicts may arise at prediction time. On the other hand, we expect that including both rules that model the presence of labels and rules that model their absence could be beneficial in terms of robustness, and could have similar, positive effects on the consistency of the models as the threshold selection used in this work. Furthermore, we leave the empirical analysis of macro-averaged performance measures for future work.
[^1]: Source code available at <https://github.com/mrapp-ke/RuleGeneration>.
[^2]: We use the random forest implementation provided by Weka 3.9.3, which is available at <https://www.cs.waikato.ac.nz/ml/weka>.
[^3]: Data sets and detailed statistics available at <http://mulan.sourceforge.net/datasets-mlc.html>.
---
abstract: 'We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer an infinite-dimensional parameter from experimental data observed at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. Specifically, we seek an optimal subset of sensors from among a fixed set of candidate sensor locations. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. To cope with the infinite-dimensional character of the parameter field, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator, which is defined only implicitly. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions. Variational adjoint methods are used to efficiently compute the gradient of the PDE-constrained OED objective function. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the coefficient of an elliptic PDE. Furthermore, we provide numerical results for inference of the log permeability field in a porous medium flow problem. 
Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.'
author:
- Alen Alexanderian
- Noemi Petra
- Georg Stadler
- 'Omar Ghattas'
title: 'A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems'
---
Optimal experimental design, A-optimal design, Bayesian inference, sensor placement, nonlinear inverse problems, randomized trace estimator, sparsified designs.
62K05, 35Q62, 62F15, 35R30, 35Q93, 65C60.
Introduction
============
We address the problem of optimal design of experiments for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). Our goal is to determine sensor locations, at which experimental data are collected, in such a way that the uncertainty in the inferred parameter field is minimized, in a sense made precise below. The numerical solution of a Bayesian inverse problem, which is just a subproblem of the optimal experimental design (OED) problem, is challenging, in particular for problems with infinite-dimensional (high-dimensional upon discretization) parameters and expensive-to-evaluate parameter-to-observable (forward) maps. Computing optimal experimental designs requires repeated solution of the underlying Bayesian inverse problem; hence, the OED problem inherits all of the challenges of solving the Bayesian inverse problem, which in turn inherits the computational difficulties of solving the PDEs describing the forward problem. These challenges necessitate algorithms that maximally exploit the problem structure to make OED tractable for problems that are of large scale—in the state, parameter, and data dimensions.
#### Related work
Standard references for OED include [@Ucinski05; @AtkinsonDonev92; @Pukelsheim93; @Pazman86]. While most of these classical developments concern OED for inverse problems of low parameter dimension, and consider well-posed inverse problems, recently there has been an increased interest in OED for large-scale problems governed by expensive-to-solve forward models. In particular, the authors of [@HaberHoreshTenorio10; @HoreshHaberTenorio10; @ChungHaber12] present numerical methods for OED for nonlinear ill-posed inverse problems governed by large-scale models. In these papers, a frequentist point of view is taken. In particular, the OED objective function is defined as an empirical estimate of the Bayes risk of the point estimator—the solution to a Tikhonov-regularized deterministic inverse problem—for a finite-dimensional inference parameter. This amounts to solving an optimization problem for the OED that is constrained by first-order optimality conditions representing solution of an inverse problem for each member of a set of training models. There are two main differences between the work in [@HaberHoreshTenorio10; @HoreshHaberTenorio10] and that proposed here. First, we address the mathematical and computational challenges stemming from the problem of OED for [*infinite-dimensional*]{} inverse problems. In particular, the choice of the prior, of the discretization, and of the discrete inner products is such that the discrete problems are all approximations of the same infinite-dimensional inverse problem. Second, in the OED objective, we explicitly incorporate the covariance operator of (a Gaussian approximation of) the Bayesian posterior measure, thus directly capturing the uncertainty in the inferred parameters in the objective function. 
This entails a more complex and difficult OED optimization problem, since now it is constrained not only by the first-order optimality conditions for the inverse problem (i.e., gradients), but also by second-order information (i.e., Hessians). Nevertheless, we demonstrate that we can construct scalable algorithms (those whose cost measured in forward PDE solves is independent of problem dimension) to solve these OED optimization problems.
Other efforts in the area include [@BauerBockKorkelEtAl00; @KorkelKostinaBockEtAl04]. In [@BauerBockKorkelEtAl00], the authors use sequential quadratic programming (SQP) to compute optimal designs with different OED criteria for finite-dimensional inverse problems governed by nonlinear systems of differential–algebraic equations (DAEs). In [@KorkelKostinaBockEtAl04], the design of robust experiments for inverse problems governed by nonlinear DAEs is addressed; see also the review article [@BockKoerkelSchloeder13]. While the inverse problems discussed in these papers are governed by nonlinear DAEs, they usually have a small to moderate number of parameters. Another idea, mainly aimed at nonlinear inverse problems with low to moderate parameter dimension, is that of [@HuanMarzouk13; @HuanMarzouk14] in which the authors use a generalized polynomial chaos surrogate for the forward model, and utilize techniques of stochastic optimization to compute experimental designs that maximize the expected information gain as measured by the Kullback-Leibler divergence from posterior to prior. Since no closed form expression for the expected information gain is available for nonlinear Bayesian inverse problems, one must resort to computationally expensive sampling approaches. The paper [@LongScavinoTemponeEtAl13] offers an alternate approach through a methodology based on a Laplace approximation, i.e., a Gaussian approximation, of the posterior distribution to accelerate the numerical computation of the expected information gain.
#### Contributions
In this work we address the OED problem for infinite-dimensional Bayesian inverse problems, and seek scalable algorithms for its solution. We retain the infinite-dimensional structure of the problem during the development of solution methods, which not only leads to elegant mathematical formulations but also is of practical importance: studying the problem in infinite dimensions guides the choice of prior measures that are meaningful for infinite-dimensional parameters and forces one to use appropriate discretizations of the Bayesian inverse problem that avoid mesh artifacts. Moreover, the infinite-dimensional formulation provides, via the Lagrangian formalism, a straightforward way to derive adjoint-based expressions for derivatives of the OED objective. The main contributions of our work are as follows: (1) We propose a method for A-optimal experimental design for infinite-dimensional Bayesian nonlinear inverse problems; the proposed formulation aims at minimizing the expected average posterior variance. (2) We employ several approximations, which, when combined with structure-exploiting algorithms, render OED for large-scale inverse problems computationally tractable. In particular, we formulate the OED problem as a bilevel PDE-constrained optimization problem. (3) We use the problem of inferring a coefficient field in an elliptic PDE to elaborate our approach for A-optimal sensor placement. For the resulting PDE-constrained OED problem, we derive efficient adjoint-based expressions for the gradient and assess the computational complexity of the objective function evaluation and the gradient computation. (4) We present a comprehensive numerical study of the effectiveness of the OED method for optimal sensor placement for a subsurface flow inverse problem and demonstrate scalability of our framework in terms of the number of forward (and adjoint) PDE solves as the parameter and sensor dimensions increase.
#### Description of the method
Following an A-optimal design strategy, we seek to minimize the average posterior variance of the parameter estimates, which is given by the trace of the posterior covariance operator. For a linear inverse problem with Gaussian prior and noise distributions, a closed form expression for the posterior covariance operator is available and is independent of the experimental data [@Tarantola05]. For nonlinear inverse problems, however, such a closed form expression is not available and the posterior covariance operator depends on the experimental data. Since the data cannot be measured before the experiment is conducted, formally this would not lead to a meaningful OED problem. To cope with the dependence of the posterior covariance $\Cpost$ on the experimental data $\obs$, we consider the average of the trace of the posterior covariance operator over all possible experimental data: $$\label{eq:intro1}
\ave_\obs \{ \trace(\Cpost(\obs)) \},$$ where $\ave_\obs$ is the expectation over data. For nonlinear inverse problems, no closed form expressions for $\Cpost(\obs)$ are available and the computation of $\trace(\Cpost(\obs))$ typically requires sampling-based methods (e.g., MCMC sampling), which are particularly expensive in high dimensions. To permit applicability to large-scale problems, we use a Gaussian approximation of the posterior measure, with mean given by the maximum a posteriori probability (MAP) point $\iparmap = \iparmap(\obs)$ and covariance given by the inverse of the Hessian operator $\H$ of the regularized data misfit functional, whose minimizer is the MAP point. This Hessian is evaluated at the MAP point, i.e., $\H = \H(\iparmap(\obs),\obs)$. Notice that this approximation to the posterior is exact when the parameter-to-observable map is linear. Moreover, a Gaussian is often a good approximation to the posterior when a nonlinear parameter-to-observable map is well approximated by a linearization over the set of parameters with significant posterior probability. Using this Gaussian approximation, the objective above is replaced by $$\label{eq:intro2}
\ave_\obs \{ \trace(\H^{-1}(\iparmap(\obs),\obs)) \}.$$ The expectation in this expression is approximated by averaging over a sample set $\{\obs_1,\ldots,\obs_\Nd\}$, where each $\obs_i$ is specified according to the noise model $$\label{eq:intro3}
\obs_i=\ff(\ipar_i) + \vec\eta_i,$$ where $\ff(\cdot)$ is the parameter-to-observable map, and $\ipar_i$ and $\vec\eta_i$ are draws from the prior and the noise distributions, respectively. These approximations result in a formulation of the A-optimal design problem as a PDE-constrained optimization problem with constraints given by the optimality conditions of the *inner* optimization problem that determines the MAP point, as well as PDEs describing the application of the inverse of the Hessian.
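For intuition, the sample-averaged objective can be sketched on a small linear-Gaussian toy problem, where the Gaussian approximation is exact and the posterior covariance does not depend on the data; all dimensions and operators below are illustrative and not those of the paper's test problems.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, nd = 20, 5, 10             # parameter dim, data dim, number of data samples
F = rng.standard_normal((q, n))  # linear parameter-to-observable map
C0 = np.eye(n)                   # prior covariance (identity for simplicity)
Gamma = 0.1 * np.eye(q)          # noise covariance

def posterior_trace(y):
    # Gaussian approximation: H = F^T Gamma^{-1} F + C0^{-1}; for a linear
    # map this is exact and independent of the data y.
    H = F.T @ np.linalg.inv(Gamma) @ F + np.linalg.inv(C0)
    return np.trace(np.linalg.inv(H))

# Sample average over likely data: y_i = F m_i + eta_i with m_i ~ prior.
traces = []
for _ in range(nd):
    m_i = rng.multivariate_normal(np.zeros(n), C0)
    eta_i = rng.multivariate_normal(np.zeros(q), Gamma)
    traces.append(posterior_trace(F @ m_i + eta_i))
oed_objective = np.mean(traces)
```

In the nonlinear case, `posterior_trace` would instead require finding the MAP point for each $\obs_i$ and applying the inverse Hessian at that point, which is where the PDE constraints enter the OED problem.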
The OED objective function involves traces of inverses of operators that are implicitly defined through solutions of PDEs. We address this difficulty by using randomized trace estimators, whose use for infinite-dimensional operators is also addressed in this paper. The experimental design is introduced in the Bayesian inverse problem through a vector of non-negative weights for possible locations where experimental data can be collected: a weight of 0 indicates absence of a sensor, and a weight of 1 means that a sensor is placed at that location. To enable use of gradient-based optimization methods for an otherwise combinatorial problem, we relax the binary assumptions on the weights and allow them to take on any value in $[0,1]$. To control the number of nonzero weights, and thus the number of sensors in the experimental design, we use a sparsifying penalty [@HaberHoreshTenorio08] that also favors binary weights [@AlexanderianPetraStadlerEtAl14]. Each evaluation of the OED objective requires the solution of an inner optimization problem to find the MAP point (solved using an inexact Newton-CG method), and applications of the inverse Hessian to vectors. Gradients of the OED objective with respect to the weights are computed efficiently using adjoint equations, which are derived through a Lagrangian formalism.
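A randomized trace estimator of the kind used here requires only the action of the operator on vectors, which in our setting corresponds to Hessian solves. A minimal Hutchinson-type sketch with Rademacher probe vectors follows; the callback `apply_A` is a placeholder for the implicitly defined inverse-Hessian apply.

```python
import numpy as np

def hutchinson_trace(apply_A, n, n_samples=100, seed=0):
    """Estimate trace(A) using only matrix-vector products z -> A z,
    as needed when A (here, the inverse Hessian) is available only
    implicitly through PDE solves. Uses Rademacher probe vectors z,
    for which E[z^T A z] = trace(A)."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        estimate += z @ apply_A(z)
    return estimate / n_samples
```

The variance of the estimator depends on the off-diagonal mass of the operator, so a modest number of probes often suffices when only the trend of the OED objective matters.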
We elaborate the proposed OED method for the problem of inferring the log coefficient field in an elliptic PDE. Physically this can be interpreted as a subsurface flow problem in which we seek well locations at which pressure data are collected so that the uncertainty in the inferred log permeability field is minimized. We first consider a model problem in which we conduct a comprehensive numerical study of the quality of the optimal design as compared to various suboptimal designs. In these tests, we compare the designs by assessing their impact on the statistical quality of the solution of the Bayesian inverse problem. To this end, we compare the designs with respect to the average posterior variance as well as the quality of the MAP estimator which, respectively, indicate the ability of the designs to reduce uncertainty and to reconstruct “truth” log permeability fields. These tests show that optimal designs result in significant improvements over suboptimal designs with the same number of sensors. We also examine the computational complexity, in terms of the number of forward/adjoint PDE solves, of the components of our method, and numerically study its scalability. Finally, we compute an optimal experimental design for a larger-scale subsurface flow test problem with the setup and the “truth” log permeability field taken from the Society of Petroleum Engineers’ 10th Comparative Solution Project (SPE10).
Preliminaries
=============
In this section, we summarize the background material required for the formulation and solution of OED problems for infinite-dimensional Bayesian inverse problems.
Probability measures on Hilbert spaces {#sec:borelmeas}
--------------------------------------
Let $\hilb$ denote an infinite-dimensional separable real Hilbert space with inner product $\ip{\cdot\,}{\cdot}_\hilb$ and induced norm $\|\cdot\|_{\hilb}$, and $\borel(\hilb)$ the Borel $\sigma$-algebra on $\hilb$. A probability measure on $(\hilb, \borel(\hilb))$ is called a Borel probability measure. We consider a Borel probability measure $\mu$ on $\hilb$ with finite first and second moments, with mean $\bar m \in \hilb$ and covariance operator $\C:\hilb \to \hilb$. The operator $\C$ must be positive, self-adjoint, and of trace class [@Prato06], and it satisfies $$\int_\hilb \|m - \bar m \|_\hilb^2 \,\mu(dm) = \trace( \C).$$ A Borel probability measure $\mu$ on $\hilb$ is said to be Gaussian if and only if for each $x \in \hilb$, the functional $u\mapsto\ip{x}{u}_\hilb
\in \R$, viewed as a real-valued random variable on $(\hilb, \borel(\hilb), \mu)$, is Gaussian [@PratoZabczyk92; @Prato06]. We denote by $\GM{\bar m}{\C}$ a Gaussian measure on $\hilb$ with mean $\bar m$ and covariance operator $\C$.
In the present work, $\hilb = L^2(\D)$ with the standard $L^2$-inner product $\ip{\cdot\,}{\cdot}$ and induced norm $\|\cdot\|$, where $\D \subset \R^d$ ($d=2,3$) is a bounded domain with sufficiently regular boundary. Let $(\Omega, \Sigma, \mathsf{P})$ be a probability space and let $\ipar:(\Omega, \Sigma, \mathsf{P}) \to (\hilb, \borel(\hilb))$ be an $\hilb$-valued random variable with law $\mu$, i.e., $\mu(E) = \mathsf{P}(\ipar \in E), \text{ for } E \in \borel(\hilb)$. Notice that for each $\omega \in \Omega$, $\ipar(\cdot,\omega):\D\to\R$ is a function. Alternatively, we may consider $\ipar$ as real-valued function defined on $\D \times \Omega$, where for each $\vec{x} \in \D$, $\ipar(\vec{x}, \cdot)$ is a real-valued random variable, i.e., $\ipar$ is a random field. In this paper, we consider random fields that are jointly measurable on $(\hilb, \borel(\hilb)) \otimes
(\Omega, \Sigma)$ and have finite second moment. Invoking Tonelli’s theorem, the pointwise variance $\var\{\ipar(\vec{x})\}$, $\vec{x} \in \D$, satisfies, $$\label{eq:avvar}
\begin{split}
\int_\D \var\{ \ipar(\vec{x}) \}\, d\vec{x}
&= \int_\D \int_\Omega \big(\ipar(\vec{x}, \omega) - \bar\ipar(\vec{x})\big)^2 \, \mathsf{P}(d\omega) \, d\vec{x} \\
&= \int_\Omega \int_\D \big(\ipar(\vec{x}, \omega) - \bar\ipar(\vec{x})\big)^2 \, d\vec{x} \, \mathsf{P}(d\omega) \\
&= \int_\Omega \norm{ \ipar(\cdot, \omega) - \bar\ipar(\cdot) }^2 \, \mathsf{P}(d\omega) \\
&= \int_\hilb \norm{ \ipar - \bar\ipar }^2 \, \mu(d\ipar)
= \trace(\C),
\end{split}$$ where as before $\bar \ipar$ denotes the mean of $\ipar$. This shows that the trace of the covariance operator is proportional to the average of the pointwise variance over the physical domain $\D$—a relation that is central to our formulation of A-optimal experimental design in an infinite-dimensional Hilbert space.
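The identity derived above, namely that the integrated pointwise variance equals the trace of the covariance operator, has a finite-dimensional analogue that can be verified by Monte Carlo; the dimension and the covariance below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
L = rng.standard_normal((n, n))
C = L @ L.T + np.eye(n)          # an arbitrary positive definite covariance
samples = rng.multivariate_normal(np.zeros(n), C, size=200_000)

# Sum of pointwise variances: the discrete analogue of integrating
# var{m(x)} over the physical domain D.
sum_pointwise_var = samples.var(axis=0).sum()
exact = np.trace(C)
```

Up to sampling error, `sum_pointwise_var` matches `exact`, mirroring the Tonelli argument in the displayed derivation.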
Bayesian inversion in an infinite-dimensional Hilbert space {#sec:HilbertBayes}
-----------------------------------------------------------
We consider the problem of inferring the law of the parameter $m$, modeled as an $\hilb$-valued random variable, from observations. Here, we describe the main ingredients of a Bayesian inverse problem.
#### The prior distribution law
We use a Gaussian prior distribution law $\priorm=\GM{\iparpr}{\Cprior}$ for the inference parameter, where the prior mean $\iparpr$ is a sufficiently regular element of $\hilb$ and $\Cprior:\hilb \to \hilb$ a strictly positive self-adjoint trace-class operator given by the inverse of a differential operator. To be precise, following [@Bui-ThanhGhattasMartinEtAl13; @Stuart10], we use $\C = \A^{-2}$, where $\A$ is a Laplacian-like operator; this choice ensures that in two and three space dimensions, $\C$ is a trace-class operator and, thus, the distribution is well-defined. The measure $\priorm$ induces the Cameron-Martin space $\CM
= \ran(\Cprior^{1/2}) = \dom(\A)$ which is a dense subspace of $\hilb$ and is endowed with the inner product, $$\cip{x}{y} = \ip{\A x}{\A y}, \quad x, y \in \CM.$$ In what follows, we assume that the prior mean $\iparpr$ is an element of $\CM$.
Note that the choice of a prior that is meaningful in a function space setting is a known challenge and an active field of research [@Stuart10; @LassasSaksmanSiltanen09; @DashtiHarrisStuart12; @DashtiStuart15]. Gaussian priors are a common choice for infinite-dimensional Bayesian inverse problems. From a practical point of view, the use of a Gaussian prior is a modeling choice. The prior mean describes our best guess about the uncertain parameter, which could be obtained from existing measurements or from other available information. The covariance operator allows modeling of the correlation lengths and of the pointwise variance. The choices for the mean and the covariance might depend on the properties that are relevant for the parameter-to-observable map. For instance, for the subsurface flow problems considered in sections \[sec:example1\] and \[sec:example2\], the pore-scale rock features only influence the flow in an averaged sense. Thus, considering smoother permeability fields that describe different types of rocks is sufficient and an effective permeability field is all one can hope to infer from observations. For the prior defined above, the Green’s function of the differential operator $\A$ describes the correlation between the parameter values at different spatial points, and so one can choose $\A$ such that it incorporates the desired correlation information. We also mention the article [@LindgrenRueLindstroem11], where a detailed study of this relation between explicitly specified Mat[é]{}rn-type Gaussian random fields and PDE operators is presented.
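A one-dimensional finite-difference sketch of such a prior, with $\C = \A^{-2}$ and $\A$ a Laplacian-like operator, can make the construction concrete. The mesh, the homogeneous Dirichlet boundary conditions, and the coefficients below are illustrative choices, not those used in the paper.

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
gamma, delta = 1.0, 0.1
# A = -delta * Laplacian + gamma * I on a 1-D mesh (Dirichlet boundaries).
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
A = -delta * lap + gamma * np.eye(n)
C = np.linalg.inv(A @ A)          # prior covariance C = A^{-2}

# Draw a prior sample m ~ N(0, C) via m = A^{-1} xi with white noise xi:
# since A is symmetric, Cov(m) = A^{-1} A^{-T} = A^{-2} = C.
rng = np.random.default_rng(0)
m_sample = np.linalg.solve(A, rng.standard_normal(n))
```

The rapid decay of the eigenvalues of $\C$ reflects its trace-class property, which is what makes the Gaussian measure well defined in the function space limit.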
#### The parameter-to-observable map and the data likelihood
Next, we introduce the data likelihood, which describes the distribution of experimental data $\obs$ for a given parameter $\ipar \in \hilb$. Here, we consider finite-dimensional observations $\obs \in \R^q$, and denote by $\like(\obs | m)$ the likelihood probability density function (pdf). Let $\ff: \hilb \to \R^q$ denote a *parameter-to-observable map*, which is a sufficiently regular (see [@Stuart10]) deterministic function that maps a parameter $\ipar \in \hilb$ to experimental data $\obs$. In the problems we target, an evaluation of $\ff(\ipar)$ requires a forward solve (typically a PDE solve) followed by the application of an observation operator. We consider an additive Gaussian noise model $$ \obs = \ff(\ipar) + \vec{\eta}, \quad \eeta \sim \GM{\vec{0}}{\ncov},$$ where $\ncov\in \R^{q\times q}$ is the noise covariance matrix. Note that $\vec{\eta}$ is independent of $\ipar$ and thus $\obs | \ipar
\sim \GM{\ff(\ipar)}{\ncov}$ and the likelihood is given by $$\like(\obs | \ipar) \propto \exp\left\{ -\frac12 \big(\ff(\ipar) - \obs\big)^T \ncov^{-1} \big(\ff(\ipar) - \obs\big)\right\}.$$
#### The Bayes formula in infinite dimensions
The solution of a Bayesian inverse problem is the posterior measure, which describes the probability law of the parameter $\ipar$ conditioned on observed data $\obs$. The relationship between the prior measure, the data likelihood, and this posterior measure is described by the Bayes formula, which in the infinite-dimensional Hilbert space settings is given by [@Stuart10], $$\frac{d\postm}{d\priorm} \propto \like(\obs | \ipar).$$ Here, the left hand side is the Radon-Nikodym derivative [@Williams1991] of the posterior probability measure $\postm$ with respect to the prior measure $\priorm$. See [@Stuart10] for conditions on the parameter-to-observable map $\ff$ that ensure that the above Bayes formula holds.
The maximum a posteriori probability (MAP) point {#sec:map_point}
------------------------------------------------
For a finite-dimensional inference problem, the MAP point is a point in the parameter space at which the posterior pdf is maximized. While this notion does not extend directly to infinite dimensions, one can define the MAP point $\iparmap$ as the point $\ipar \in \hilb$ that maximizes the posterior probability of balls of radius $\eps$ centered at $\ipar$, as $\eps\to 0$. Analogous to the finite-dimensional case, the MAP point can be found by minimizing the functional $\J:\CM \to \R$ given by [@DashtiLawStuartEtAl13], $$\J(\ipar) \defeq \frac 12 \eip{\ff(\ipar) - \obs}{\ncov^{-1}(\ff(\ipar) - \obs)} +
\frac12 \cip{\ipar - \iparpr}{\ipar - \iparpr}.$$ That is, $$\label{equ:inner-opt}
\iparmap = \operatorname*{arg\,min}_{\ipar \in \CM} \J(\ipar).$$ The existence of solutions to the above optimization problem follows from standard arguments [@Stuart10]. We point out that this minimization problem is equivalent to a deterministic inverse problem, where inner products in the regularized data misfit functional $\J$ are weighted according to the statistical description of the problem, i.e., with the noise and prior covariance operators. Note that the MAP point $\iparmap$ depends on the experimental data $\obs$. This is a challenge in the context of OED, where data are not available a priori. Moreover, the solution of this problem is not guaranteed to be unique.
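On a small discretized toy problem with a nonlinear parameter-to-observable map, the MAP point can be computed by minimizing a discrete analogue of $\J$. Plain gradient descent stands in here for the inexact Newton-CG solver used later in the paper; all dimensions, maps, and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 10, 6                      # parameter and observation dimensions
B = rng.standard_normal((q, n))
noise_var, prior_var = 1.0, 1.0
m_prior = np.zeros(n)

def forward(m):
    # A mildly nonlinear parameter-to-observable map (purely illustrative).
    return np.tanh(B @ m)

m_true = 0.5 * rng.standard_normal(n)
y = forward(m_true) + 0.1 * rng.standard_normal(q)

def J(m):
    # Discrete analogue of the regularized data misfit functional.
    misfit = forward(m) - y
    return (0.5 * misfit @ misfit / noise_var
            + 0.5 * (m - m_prior) @ (m - m_prior) / prior_var)

def grad_J(m):
    t = np.tanh(B @ m)
    return (B.T @ ((1.0 - t**2) * (t - y)) / noise_var
            + (m - m_prior) / prior_var)

# Plain gradient descent stands in for the inexact Newton-CG solver.
m_map = m_prior.copy()
for _ in range(5000):
    m_map = m_map - 0.005 * grad_J(m_map)
```

Since the map is nonlinear, $\J$ need not be convex, which is the source of the non-uniqueness caveat noted above.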
Experimental design in a Bayesian inverse problem {#sec:oed-basic}
-------------------------------------------------
Next, we define what we mean by an *experimental design*, and describe how an experimental design enters the Bayesian inverse problem formulation. We consider the problem of optimal placement of sensors that measure experimental data. We fix a collection of *candidate sensor locations*, $\vec{x}_1,
\ldots, \vec{x}_\Ns$ in $\D$ and assign to each location a non-negative weight $w_i$, which controls whether experimental data are gathered at location $\vec x_i$, for $i=1,\ldots,\Ns$. Thus, a design is fully specified by a weight vector $\vec{w}:=(w_1,\ldots,w_\Ns) \in \R^\Ns_{\scriptscriptstyle\ge 0}$. Since an experimental design determines the subset of the set of candidate sensor locations at which data are collected, $\vec w$ enters the Bayesian inverse problem through the data likelihood, amounting to a weighted data likelihood: $$\label{equ:w-likelihood}
\like(\obs | \ipar; \vec{w}) \propto \exp\left\{ -\frac12 \big(\ff(\ipar) - \obs\big)^T
\W^{1/2} \ncov^{-1} \W^{1/2}\big(\ff(\ipar) - \obs\big)\right\},$$ where $\W = \diag({w_1,\ldots,w_\Ns})$. Notice that this formulation assumes that the dimension of the data vector equals the number of candidate sensor locations, i.e., $q = \Ns$.
Here, we consider uncorrelated observations, that is, the noise covariance is diagonal, $\ncov = \diag(\sigma^{2}_1, \ldots, \sigma^2_\Ns)$. Thus, $$\label{equ:Wn}
\Wn:= \W^{1/2} \ncov^{-1} \W^{1/2} = \diag(w_1/\sigma^2_1, \ldots, w_\Ns / \sigma^2_\Ns).$$ The solution of the Bayesian inverse problem with the weighted likelihood now additionally depends on the design $\vec w$. For example, the MAP point (or estimator) $\iparmap$ is the minimizer, with respect to $\ipar$, of the weighted cost functional, $$\label{equ:w-costJ}
\J(\ipar, \vec{w}; \obs) := \frac 12 \eip{\ff(\ipar) - \obs}{\Wn(\ff(\ipar) - \obs)} +
\frac12 \cip{\ipar - \iparpr}{\ipar - \iparpr},$$ i.e., $$\label{equ:w-opt}
\iparmap(\vec{w}; \obs) = \operatorname*{arg\,min}_{\ipar \in \CM} \J(\ipar, \vec{w}; \obs).$$ Other statistics of the posterior, such as the mean and the covariance operator, also depend on $\vec w$.
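A quick numerical illustration of the weighted noise precision (all values hypothetical): a zero weight removes the corresponding sensor's contribution to the data misfit, while intermediate weights rescale it.

```python
import numpy as np

# Hypothetical example: 4 candidate sensors with per-sensor noise std.
sigma = np.array([0.1, 0.2, 0.1, 0.5])
w = np.array([1.0, 0.0, 0.5, 1.0])            # relaxed design weights in [0, 1]

# W^{1/2} Gamma_noise^{-1} W^{1/2} is diagonal with entries w_i / sigma_i^2.
W_half = np.diag(np.sqrt(w))
Wn = W_half @ np.diag(1.0 / sigma**2) @ W_half
assert np.allclose(np.diag(Wn), w / sigma**2)

# A zero weight removes that sensor's contribution to the weighted misfit:
r = np.ones(4)                                # residual f(m) - y, for example
misfit = 0.5 * r @ Wn @ r                     # sensor 2 contributes nothing
```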
In classical OED formulations [@Pazman86; @AtkinsonDonev92; @Pukelsheim93; @Ucinski05], one commonly interprets the components of a design vector $\vec{w}$ as probability masses for candidate sensor locations, i.e., $w_i \geq 0$ and $\sum w_i = 1$. A practitioner might place sensors at the candidate locations whose weights are large, or use the weights to decide which experiments to perform and how often to perform them (if experiments can be repeated), thereby reducing the experimental noise level through repeated experiments. An alternate point of view is to drop the constraint $\sum w_i = 1$ and to incorporate a penalty function $P(\vec{w})$ instead, which associates a cost to each sensor placed [@AlexanderianPetraStadlerEtAl14; @HaberMagnantLuceroEtAl12; @HaberHoreshTenorio08]. The simplest-to-interpret weight vector $\vec w$ contains 0’s where no sensor is placed and 1’s in locations where sensors are placed. This leads to a binary optimization problem, which can be challenging to solve. Thus, we relax the binary assumption on the components of the weight vector, allowing the weights to take values in the interval $[0, 1]$, and enforce binary weights through properly chosen sparsifying penalty functions, or through continuation with a family of penalty functions (see section \[sec:sparsity\]).
Randomized trace estimation {#sec:randomized-trace-estimation}
---------------------------
We address A-optimal experimental design problems, which require minimization of traces of large dense covariance matrices that are defined implicitly through their applications to vectors. In our OED method, we approximate traces of covariance matrices using randomized trace estimators. These estimators approximate the trace of a matrix $\mat{A}
\in \R^{n \times n}$ via Monte-Carlo estimates of the form $\trace(\mat{A}) \approx \frac{1}{\Ntr} \sum_{k = 1}^\Ntr
\ip{\vec{z}_k}{\mat{A} \vec{z}_k}_{\R^n}$, where the vectors $\vec{z}_k$ are random $n$-vectors. Reasonably accurate estimation of traces of high-dimensional covariance matrices is possible with a small number of random vectors; see, e.g., [@AvronToledo11; @Roosta-KhorasaniAscher13] for descriptions of different trace estimators and their convergence properties, and [@AlexanderianPetraStadlerEtAl14; @HaberHoreshTenorio08; @HaberMagnantLuceroEtAl12] for discussions regarding the use of randomized trace estimators for high-dimensional implicitly defined covariance operators. There are several possibilities for the choice of the random vectors $\vec{z}_k$. The Hutchinson estimator [@Hutchinson90] uses random vectors with $\pm 1$ entries, each with probability $1/2$. Another possibility, used in this paper, is the Gaussian trace estimator, which uses Gaussian random vectors with independent standard normal entries. In our numerical computations, we estimate traces of matrices that are discretizations of covariance operators defined on an infinite-dimensional Hilbert space. Thus, we next briefly justify randomized trace estimation in infinite dimensions. In particular, to define the infinite-dimensional analog of the Gaussian trace estimator, we consider an $\hilb$-valued random variable $Z_\delta$ whose law is given by $\mu_\delta = \GM{0}{\C_\delta}$, where $\C_\delta = (-\delta \Delta +
I)^{-2}$; here, $\Delta$ denotes the Laplacian operator with homogeneous Neumann boundary conditions, and $\delta$ is a positive real number. Note that $\C_\delta$ so constructed is positive, self-adjoint, and of trace-class on $L^2(\D)$, with $\D \subseteq \R^d$, $d = 2, 3$. Let $\A$ be a positive self-adjoint trace-class operator on $\hilb$. First, note that $$\label{equ:quadform}
\ave\{{\ip{Z_\delta}{\A Z_\delta}}\} = \int_\hilb \ip{z}{\A z} \, \mu_\delta(dz)
= \trace(\A\C_\delta).$$ Moreover, as shown in Appendix \[apdx:trace\_estimator\], $\trace(\A) = \lim_{\delta \to 0} \trace(\A\C_\delta)$. Hence, choosing small values of $\delta$ provides reasonable estimates for $\trace(\A)$. Therefore, one is justified to use Monte Carlo estimates of the form, $$\trace(\A) \approx \frac{1}{\Ntr} \sum_{i = 1}^{\Ntr} \ip{z_i}{\A z_i},$$ where $z_i$ are realizations of $Z_\delta$ for a sufficiently small $\delta$ (in the finite-dimensional case, we can take $\delta = 0$).
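A small self-contained sketch of both estimators on a dense SPD matrix (a stand-in for an implicitly defined covariance operator; the matrix construction is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_tr = 200, 2000

# Hypothetical SPD test matrix with known spectrum (trace = 300).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 2.0, n)) @ Q.T

def gaussian_trace(A, n_tr, rng):
    """Gaussian estimator: E[z^T A z] = tr(A) for z with i.i.d. N(0,1) entries."""
    z = rng.standard_normal((n_tr, A.shape[0]))
    return np.mean(np.einsum("ij,jk,ik->i", z, A, z))

def hutchinson_trace(A, n_tr, rng):
    """Hutchinson estimator: i.i.d. +/-1 entries, each with probability 1/2."""
    z = rng.choice([-1.0, 1.0], size=(n_tr, A.shape[0]))
    return np.mean(np.einsum("ij,jk,ik->i", z, A, z))

tr_exact = np.trace(A)
err_g = abs(gaussian_trace(A, n_tr, rng) - tr_exact) / tr_exact
err_h = abs(hutchinson_trace(A, n_tr, rng) - tr_exact) / tr_exact
assert err_g < 0.05 and err_h < 0.05
```

In practice only matrix-vector products $\mat{A}\vec{z}_k$ are needed, so the estimator applies directly to operators available only through their action.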
A-optimal design for Bayesian *linear* inverse problems {#sec:linAoptimal}
=======================================================
The classical definition of an A-optimal design is for inverse problems where the parameter-to-observable map $\ff$ is linear and one assumes an additive Gaussian noise model. In this case, the posterior covariance operator does not depend on the experimental data. Denoting by $\Cpost(\vec{w})$ the covariance operator of the posterior measure $\postm$ for a given design vector $\vec{w}$, an A-optimal design is one that minimizes the average posterior variance. This is equivalent to minimizing $\trace\big(\Cpost(\vec{w})\big)$. Denoting the linear parameter-to-observable map by $\mat{F}:\hilb \to \R^q$ and assuming a Gaussian prior $\priorm=\GM{\cdot}{\Cprior}$, the posterior covariance operator is $\Cpost(\vec{w}) = ( \mat{F}^* \Wn \mat{F} +
\Cprior^{-1})^{-1}$, with $\Wn$ as defined above. Notice that $\mat F$ is independent of the parameter $\ipar$ and the experimental data $\obs$. Using a low-rank singular value decomposition of the prior-preconditioned parameter-to-observable map $\mat{F}\Cprior^{1/2}$, computed *once* upfront, enables evaluation of the A-optimal objective function and its gradient without further PDE solves; see [@AlexanderianPetraStadlerEtAl14; @HaberMagnantLuceroEtAl12].
This A-optimal design approach leads to the following optimization problem: $$\min_{\vec{w} \in [0, 1]^\Ns} \trace( \Cpost(\vec{w}) ) + \upgamma P(\vec{w}),$$ where $\upgamma P(\vec{w})$ controls the sparsity of the design $\vec{w}$. There are various options for choosing a sparsifying penalty function $P(\vec{w})$. One possibility is to use $P(\vec{w}) = \sum_i w_i$, which amounts to an $\ell^1$ penalty. Here, we use a continuation strategy with a sequence of penalty functions that asymptotically approximate the $\ell^0$-“norm”; see section \[sec:sparsity\] and [@AlexanderianPetraStadlerEtAl14].
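For a small dense problem (hypothetical sizes; the true operators are implicit and high-dimensional), the A-optimal objective with an $\ell^1$-type penalty can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)
n_param, n_sens = 20, 8

F = rng.standard_normal((n_sens, n_param))   # stand-in linear map
sigma2 = 0.05**2                             # common noise variance
C_prior_inv = np.eye(n_param)

def aopt_objective(w, gamma=0.0):
    """trace(Cpost(w)) + gamma * sum(w): average posterior variance
    plus an l1-type sparsifying penalty on the relaxed weights."""
    Wn = np.diag(w / sigma2)
    C_post = np.linalg.inv(F.T @ Wn @ F + C_prior_inv)
    return np.trace(C_post) + gamma * w.sum()

# With no sensors the posterior equals the prior (trace = n_param);
# activating sensors can only decrease the average posterior variance.
assert np.isclose(aopt_objective(np.zeros(n_sens)), n_param)
assert aopt_objective(np.ones(n_sens)) < n_param
```

The penalty weight $\upgamma$ then trades the variance reduction against the number of active sensors.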
A-optimal design for Bayesian *nonlinear* inverse problems {#sec:oed-formulation}
==========================================================
In this section, we present a formulation of the A-optimal experimental design criterion for infinite-dimensional Bayesian nonlinear inverse problems. To make the resulting OED problem computationally tractable, we introduce a series of approximations, such that the formulation culminates in a *Hessian constrained* bilevel optimization problem.
Formulation
-----------
For a design vector $\vec{w}$ and experimental data $\obs$, the Bayesian inverse problem with the weighted data likelihood is given by $$\frac{d\postm}{d\priorm} \propto \like(\obs | \ipar; \vec{w}).$$ Following an A-optimal design criterion, we seek to minimize the average posterior variance of the inferred parameter over all possible design vectors $\vec w$. It follows that the average variance is given by $\trace\left[\Cpost(\vec{w};
\obs)\right]$, where $\Cpost$ is the covariance operator corresponding to the posterior measure. Note that for a fixed experimental design vector $\vec{w}$, the result of the inference still depends on the experimental data $\obs$. Since experimental data is, in general, not available a priori, we average $\trace\left[\Cpost(\vec{w};
\obs)\right]$ over the experimental data $\obs$, which, for given $\ipar \in \hilb$, are distributed according to $\GM{\ff(\ipar)}{\ncov}$, as specified by the data likelihood. Notice that this distribution of $\obs$ is conditioned on $\ipar$, the parameter in the Bayesian inverse problem. To address this issue, we rely on our prior knowledge of the parameter $\ipar$ as described by the prior measure, and define the *expected* average posterior variance $\obj$ as follows: $$\label{equ:oed-objective-general}
\obj(\vec{w}) := \ave_{\priorm}\ave_{\obs | \ipar}\left\{ \trace\left[\Cpost(\vec{w}; \obs)\right]\right\}
\!= \!\int_\hilb\!\! \int_{\R^q} \!
\trace\left[\Cpost(\vec{w}; \obs)\right] \, \mu_{\obs|\ipar}(d\obs) \, \priorm(d\ipar),$$ where $\mu_{\obs|\ipar} = \GM{\ff(\ipar)}{\ncov}$.
Gaussian approximation of the posterior measure
-----------------------------------------------
If the parameter-to-observable map $\ff$ is linear, and given a Gaussian prior distribution and an additive Gaussian noise model, the posterior is also Gaussian, with mean and covariance given by closed form expressions, namely the MAP point and the inverse of the Hessian of the functional $\mathcal J$, respectively [@Tarantola05; @Stuart10]. However, if $\ff$ is nonlinear, the posterior is not Gaussian and there exists no closed-form expression for the posterior covariance operator. As a consequence, one has to rely on techniques such as Markov chain Monte Carlo sampling to compute the average posterior variance [@RobertCasella05]. This requires a large number of statistically independent samples, and hence many evaluations of the parameter-to-observable map $\ff$; this can make sampling computationally extremely expensive, in particular for high-dimensional problems and expensive-to-evaluate parameter-to-observable maps. Thus, to make the problem at hand tractable, we consider a Gaussian approximation of the posterior measure at the MAP point. That is, given an experimental design $\vec{w}$ and a realization of the data $\obs$, we compute the MAP point $\iparmap = \iparmap(\vec{w}; \obs)$ and define the Gaussian approximation of $\postm$ as $$ \postmGauss \defeq \GM{\iparmap(\vec{w}; \obs)}{\H^{-1}\big(\iparmap(\vec{w}; \obs), \vec{w}; \obs\big)},$$ where $\H\big(\iparmap(\vec{w}; \obs), \vec{w}; \obs\big)$ is the Hessian of $\J$ (or an approximation thereof, e.g., the Gauss-Newton approximation). Note that, in general, $\H$ depends on the design $\vec{w}$ and data $\obs$ both explicitly, and implicitly through the MAP point. Using this Gaussian approximation, we proceed to define the following approximation $\objG$ of the OED objective function $\obj$: $$\label{equ:oed-objective-Gaussian}
\objG(\vec{w}) = \ave_{\priorm}\ave_{\obs | \ipar}\left\{ \trace\left[\H^{-1}\big(\iparmap(\vec{w}; \obs), \vec{w}; \obs\big) \right]\right\}.$$
To ensure that the Gaussian approximation $\postmGauss$ is well defined, we make the following assumption:
\[ass1\] For every realization of the experimental data $\obs$ and every design vector $\vec w$ from the admissible set of designs, the inverse of the Hessian $\H^{-1}\big(\iparmap(\vec{w}; \obs),
\vec{w};\obs\big)$ exists and is a positive trace-class operator.
Sample averaging and randomized trace estimation {#subsec:sample}
------------------------------------------------
The evaluation of $\objG$ involves integration over an infinite-dimensional (upon discretization, high-dimensional) space. To approximate this integration, we replace $\objG$ by the Monte Carlo sum $$\label{eq:objGnd}
\objG_{\!\Nd}(\vec{w}) = \frac1\Nd \sum_{i = 1}^\Nd \trace\left[\H^{-1}\big(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i\big) \right].$$ The data samples $\obs_i$ are given by $\obs_i = \ff(\ipar_i) + \vec{\eta}_i$, where $\{(\ipar_i, \vec{\eta}_i)\}_{i=1}^\Nd$ is a sample set from the product space $(\hilb, \priorm) \times (\R^q,
\GM{\vec{0}}{\ncov})$. Note that in practical computations usually only a moderate number of data samples can be afforded, for reasons that will become clear later in the paper. From a frequentist’s perspective, the draws $\ipar_i$ from the prior can be considered as training models. Note that the draws $\obs_i$ enter the objective through the MAP point and the Hessian at the MAP point. This incorporates the physical properties of the parameter-to-observable map $\ff$ into the OED objective function. For instance, if $\ff$ damps highly oscillatory modes of the parameters, $\objG$ is insensitive to the highly oscillatory modes of the $\ipar_i$ used to compute $\obs_i$. This indirect dependence of the OED objective on “training” draws from the prior is in contrast to the OED approach for nonlinear inverse problems proposed in [@HaberHoreshTenorio10; @HoreshHaberTenorio10], in which training models enter the OED objective function directly.
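The construction of the data samples can be sketched as follows (the map `f` and all sizes are hypothetical stand-ins for the PDE-based parameter-to-observable map):

```python
import numpy as np

rng = np.random.default_rng(3)
n_param, n_obs, n_d = 10, 6, 4
sigma = 0.05

A = rng.standard_normal((n_obs, n_param)) / np.sqrt(n_param)

def f(m):
    """Hypothetical nonlinear parameter-to-observable map."""
    return np.tanh(A @ m)

# Draw (m_i, eta_i) pairs from the product of prior and noise distributions
# and synthesize the training data y_i = f(m_i) + eta_i.
L_prior = np.linalg.cholesky(np.eye(n_param))        # C_prior^{1/2}
data_samples = []
for _ in range(n_d):
    m_i = L_prior @ rng.standard_normal(n_param)     # m_i ~ N(0, C_prior)
    eta_i = sigma * rng.standard_normal(n_obs)       # eta_i ~ N(0, Gamma_noise)
    data_samples.append(f(m_i) + eta_i)
```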
The objective function involves the trace of $\H^{-1}_i = \H^{-1}(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i)$. This trace is given by $\trace[\H^{-1}_i] = \sum_{k=1}^\infty
\ip{e_k}{\H^{-1}_i e_k}$, where $\{e_k\}$ is a complete orthonormal set in $\hilb$. Thus, we can write the objective as follows: $$\label{equ:oed-objective-mc}
\objG_{\!\Nd}(\vec{w}) =
\frac1\Nd \sum_{i = 1}^\Nd \sum_{k=1}^\infty \ip{e_k}{y_{ik}},$$ where for $i \in\{ 1, \ldots, \Nd\}$ and $ k \in \N$: $$\begin{aligned}
{2}
\iparmap(\vec{w}; \obs_i) &= \displaystyle \operatorname*{arg\,min}_\ipar \J\big(\ipar, \vec{w}; \obs_i \big)&& \\ \H\big(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i\big) y_{ik} &= e_k. &&\end{aligned}$$ Notice that for each $i \in \{1, \ldots, \Nd\}$, we obtain a MAP point $\iparmap(\vec{w}; \obs_i)$, which is used to define the corresponding Hessian operator $\H_i = \H\big(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i\big)$.
The computation of the trace based on a complete orthonormal basis as above is not practical. We thus use a randomized trace estimator (see section \[sec:randomized-trace-estimation\]) to obtain an expression that can be computed efficiently. This final approximation step results in a computationally tractable OED objective function, which is used in the formulation of an A-optimal experimental design problem below.
The resulting A-optimal experimental design problem
---------------------------------------------------
The definitions and approximations discussed above result in the following formulation of an A-optimal design objective function for a nonlinear Bayesian inverse problem: $$\begin{aligned}
\label{equ:psihat}
\hat\obj(\vec w)&:=
\frac1{\Nd\,\Ntr} \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr \ip{z_k}{y_{ik}}, \end{aligned}$$ where $z_k$, $k \in \{1,\ldots,\Ntr\}$, are random vectors as discussed in section \[sec:randomized-trace-estimation\], and for $i \in\{ 1, \ldots,
\Nd\}$, $y_{ik}$ is defined through $$\begin{aligned}
\iparmap(\vec{w}; \obs_i) &= \displaystyle \operatorname*{arg\,min}_\ipar \J\big(\ipar, \vec{w}; \obs_i \big), \nonumber\\
\H\big(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i\big) y_{ik} &= z_k.\nonumber\end{aligned}$$ The corresponding A-optimal experimental design optimization problem, with a sparsifying penalty term (as discussed in section \[sec:oed-basic\]) is given by $$\label{equ:oed-optim-problem}\tag{$\mathcal P$}
\min_{\vec{w}\in [0,1]^{\Ns}} \hat\obj(\vec w) + \upgamma P(\vec{w}).$$ Since we rely on gradient-based methods to solve this optimization problem, in addition to Assumption \[ass1\], we require the following assumption to hold.
The OED objective $\hat\obj(\cdot)$ is continuously differentiable with respect to the weight vector $\vec w$ for all $\vec{w}\in
[0,1]^{\Ns}$.
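For intuition, the entire objective evaluation can be sketched on a dense toy problem, with `scipy.optimize` standing in for the inner Newton solver and a Gauss-Newton Hessian formed explicitly (in the PDE setting the Hessian is only applied matrix-free); all names and sizes are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_param, n_sens, n_d, n_tr = 8, 5, 3, 50
A = rng.standard_normal((n_sens, n_param))
sigma2 = 0.1**2

def f(m):                         # hypothetical nonlinear forward map
    return np.tanh(A @ m)

def f_jac(m):                     # its Jacobian, for a Gauss-Newton Hessian
    return (1.0 - np.tanh(A @ m) ** 2)[:, None] * A

def oed_objective(w):
    """Sample-averaged randomized-trace estimate of E[trace(H^{-1})]."""
    Wn = np.diag(w / sigma2)
    zs = rng.standard_normal((n_tr, n_param))        # Gaussian trace vectors
    total = 0.0
    for _ in range(n_d):
        # Synthesize one training data sample y_i = f(m_i) + eta_i.
        m_i = rng.standard_normal(n_param)
        y = f(m_i) + np.sqrt(sigma2) * rng.standard_normal(n_sens)
        # Inner problem: MAP point for this design and data sample.
        J = lambda m: 0.5 * (f(m) - y) @ Wn @ (f(m) - y) + 0.5 * m @ m
        m_map = minimize(J, np.zeros(n_param), method="BFGS").x
        # Gauss-Newton Hessian at the MAP point; solves H y_ik = z_k.
        Fp = f_jac(m_map)
        H = Fp.T @ Wn @ Fp + np.eye(n_param)
        total += sum(z @ np.linalg.solve(H, z) for z in zs)
    return total / (n_d * n_tr)

psi_none = oed_objective(np.zeros(n_sens))   # no sensors: prior variance
psi_all = oed_objective(np.ones(n_sens))     # all sensors reduce the variance
assert psi_all < psi_none
```

Each evaluation of `oed_objective` performs $\Nd$ inner optimizations and $\Nd \Ntr$ Hessian solves, mirroring the cost structure analyzed later in the paper.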
OED for coefficient field inference in an elliptic PDE {#sec:ellipticOED}
======================================================
Next, we specialize our approach to A-optimal design of experiments for the inference of the log coefficient field in an elliptic partial differential equation, i.e., we consider the forward model, $$\label{equ:poi}
\begin{split}
-\grad \cdot (\Exp{m} \grad u) &= f \quad \text{ in }\D, \\
u &= g \quad \text{ on } {\ensuremath{\Gamma_{\!\!D}}}, \\
\Exp{m} \grad{u} \cdot \vec{n} &= h \quad \text{ on } {\ensuremath{\Gamma_{\!\!N}}},
\end{split}$$ where $\D \subset \R^d$ ($d=2,3$) is an open bounded domain with sufficiently smooth boundary $\Gamma = {\ensuremath{\Gamma_{\!\!D}}}\cup {\ensuremath{\Gamma_{\!\!N}}}$, ${\ensuremath{\Gamma_{\!\!D}}}\cap {\ensuremath{\Gamma_{\!\!N}}}=
\emptyset$. Here, $u$ is the state variable, $f\in L^2(\D)$ is a source term, and $g\in H^{1/2}({\ensuremath{\Gamma_{\!\!D}}})$ and $h\in L^2({\ensuremath{\Gamma_{\!\!N}}})$ are Dirichlet and Neumann boundary data, respectively. The prior distribution for $\ipar$ ensures that, almost surely, realizations of $\ipar$ are continuous in $\bar{\D}$. Hence, $\Exp{m}$ is positive and bounded, ensuring existence of a solution of the forward problem. Define the spaces, $$ \Vg = \{ v \in H^1(\D) : \restr{v}{{\ensuremath{\Gamma_{\!\!D}}}} = g\}, \quad
\V = \{ v \in H^1(\D) : \restr{v}{{\ensuremath{\Gamma_{\!\!D}}}} = 0\},$$ where $H^1(\D)$ is the Sobolev space of functions in $L^2(\D)$ with square integrable derivatives. Then, the weak form of the forward problem reads as follows: Find $u \in \Vg$ such that $$\ip{\Exp{m} \grad{u}}{\grad{p}} = \ip{f}{p} + \ip{h}{p}_{{\ensuremath{\Gamma_{\!\!N}}}}, \quad \forall p \in \V.$$ In the following subsections, we specialize the OED problem for the inference of $\ipar$ from pointwise observations of the state variable $u$. For theoretical aspects of the Bayesian approach to estimating the coefficient field in elliptic PDEs we refer to [@Stuart10; @DashtiStuart15].
In sections \[sec:MAP\] and \[sec:Hessian-mat-vec\], we derive expressions for the first and second derivatives of the “inner” problem, i.e., the inverse problem whose solution is the MAP point. In section \[sec:poi-oed-formulation\] we formulate the OED problem as a bilevel optimization problem, constrained by PDEs characterizing the MAP point and PDEs defining the action of the inverse Hessian. Then, in section \[sec:oed-adjoint-grad\], we formulate the OED objective, resulting in the “outer” OED optimization problem, and derive expressions for the gradient of the OED objective using associated adjoint equations. A discussion of the complexity of evaluating the OED objective and its gradient, in terms of the number of forward PDE solves, is provided in section \[sec:complexity\].
Optimality system for the MAP point {#sec:MAP}
-----------------------------------
We first specialize the (weighted) cost functional , whose minimizer is the MAP point, for the problem of inferring $\ipar$ in from observations $\B u$, where $\B$ is a linear observation operator that extracts measurements from $u$: $$\label{equ:poi-inner-opt}
\J(\ipar, \vec{w}; \obs) = \frac12 \eip{\B u - \obs}{\Wn(\B u - \obs)} +
\frac{1}{2} \cip{m - \iparpr}{m-\iparpr}.$$ Here, for a given $\ipar$, the state variable $u$ is the solution of the forward problem, $\iparpr$ is the prior mean of the log coefficient field, and $\obs\in \R^q$ is a given data vector. Note that every evaluation of the OED objective function with a given design $\vec w$ requires minimization of this PDE-constrained data misfit cost functional. Hence, in what follows, we refer to this minimization as the *inner* optimization problem.
We use the standard variational approach to derive optimality conditions for the inner optimization problem with fixed design $\vec w$. The Lagrangian functional $\LI: \Vg \times \CM \times \V \to \R$ is given by $$\label{eq:model:L}
\LI(u,m,p):= \J(\ipar, \vec{w}; \obs)
+ \ip{\Exp{m}\grad u}{\grad p} - \ip{f}{p} - \ip{p}{h}_{{\ensuremath{\Gamma_{\!\!N}}}}.$$ Here, $p \in \V$ is the Lagrange multiplier and we use the superscript $I$ to emphasize that the Lagrangian corresponds to the inner optimization problem. The formal Lagrange multiplier method [@Troltzsch10] yields that, at a minimizer, variations of the Lagrangian functional with respect to all variables vanish, which yields
$$\begin{aligned}
\ip{\Exp{m} \grad u}{\grad \ut{p}} -
\ip{f}{\ut{p}} - \ip{\ut{p}}{h}_{{\ensuremath{\Gamma_{\!\!N}}}} & = 0, \label{eq:firststate}\\
\ip{\Exp{m} \grad \ut u}{\grad p}
+\ip{\B^*\Wn(\B u - \obs)}{\ut{u}} &= 0, \label{eq:firstadj}\\
\cip{m - \iparpr}{\ut{m}}
+ \ip{\ut{m} \Exp{m}\grad u}{\grad p} &= 0, \label{eq:firstcontrol}
\end{aligned}$$
for all variations $(\ut{u}, \ut{m}, \ut{p}) \in \V \times \CM \times \V$. Note that these are the weak forms of the state, the adjoint, and the gradient equations, respectively. The left hand side of the last equation is the gradient of the cost functional, provided that $u$ and $p$ are solutions to the state and adjoint equations, respectively [@Troltzsch10; @BorziSchulz12].
Hessian-vector application {#sec:Hessian-mat-vec}
--------------------------
To evaluate the OED objective function, systems of the form $\H y = z$ have to be solved, where $\H$ is the Hessian with respect to $\ipar$ of the regularized data misfit functional $\J$. Using second variations of $\LI$ allows derivation of expressions for the application of $\H$. For $z\in\hilb\subset \CM'$, the solution $y\in \CM$ of $\H y =
z$ is obtained by solving a coupled system of PDEs: Find $(v, q, y)
z$ is obtained by solving a coupled system of PDEs: Find $(v, q, y)
\in \V\times \V\times \CM$ such that for all $(\ut{p}, \ut{u},
\ut{y}) \in \V \times \V \times \CM$ the following equations are satisfied:
\[eq:incrementals\] $$\begin{aligned}
\ip{ \Exp{m} \grad v}{\grad \ut p} + \ip{y \Exp{m}\grad u}{\grad
\ut p} &= 0, \label{eq:incrementals1}\\
\ip{ \B^* \Wn \B v}{\ut u} + \ip{ y \Exp{m}\grad \ut u}{\grad p}
+ \ip{\Exp{m} \grad \ut u}{\grad q} &= 0,\label{eq:incrementals2}\\
\ip{\ut{y} \Exp{m} \grad v}{\grad p}
+ \cip{y}{\ut{y}} + \ip{\ut{y} y \Exp{m} \grad u}{\grad p}
+ \ip{ \ut{y} \Exp{m} \grad u}{\grad q} &= \ip{z}{\tilde{y}}. \label{eq:incrementals3}
\end{aligned}$$
The first two equations are sometimes called the incremental state and incremental adjoint equations, respectively, and the left hand side of the third equation describes the application of the Hessian to a vector $y$. In practice, $\H y = z$ is solved iteratively using a Krylov method, which requires only the application of $\H$ to vectors. This application can be computed by first solving the incremental state equation for $v$, then the incremental adjoint equation for $q$, and then using these solutions in the third equation. Next, we provide explicit expressions for the OED problem for the inference of the log coefficient field in the elliptic PDE.
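A sketch of this matrix-free pattern with SciPy (the dense matrix `F` is a hypothetical stand-in for the pair of incremental PDE solves, which would replace the matvec body in an actual implementation):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(5)
n = 100
F = rng.standard_normal((30, n)) / np.sqrt(n)   # stand-in for the PDE-based map
Wn = np.eye(30) / 0.01                          # weighted noise precision

def hess_matvec(y):
    """Apply H y = F^* Wn F y + C_prior^{-1} y without forming H.

    In the PDE setting, F y costs one incremental state solve and
    F^* (.) one incremental adjoint solve; C_prior^{-1} = I here."""
    return F.T @ (Wn @ (F @ y)) + y

H = LinearOperator((n, n), matvec=hess_matvec)
z = rng.standard_normal(n)
y, info = cg(H, z)                              # Krylov solve of H y = z
assert info == 0
assert np.allclose(hess_matvec(y), z, atol=1e-3)
```

Since $\H$ here is the identity plus a low-rank positive semidefinite term, CG converges in a number of iterations bounded by the rank plus one, a preview of the complexity discussion below.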
The OED problem as a PDE-constrained optimization problem {#sec:poi-oed-formulation}
---------------------------------------------------------
Specializing the A-optimal experimental design problem to the inference of the log coefficient field in the elliptic PDE, we obtain,
\[equ:OEDpoi\] $$\begin{aligned}
&\min_{\vec{w}\in [0, 1]^\Ns} \, \frac{1}{\Nd \Ntr} \sum_{i=1}^\Nd \sum_{k = 1}^\Ntr \ip{z_k}{y_{ik}} + \upgamma P(\vec{w}) \label{equ:outeropt}
\end{aligned}$$ where for $ i = 1, \ldots, \Nd$ and $ k = 1, \ldots, \Ntr \nonumber$ $$\begin{aligned}
\ip{\Exp{m_i} \grad u_i}{\grad \ut{p}} - \ip{f}{\ut{p}} - \ip{\ut{p}}{h}_{{\ensuremath{\Gamma_{\!\!N}}}} &=0, &&\forall \ut{p} \in \V, \label{equ:state}\\
\ip{\Exp{m_i} \grad \ut u}{\grad p_i} +\ip{\B^*\Wn(\B u_i - \obs_i)}{\ut{u}} &=0, &&\forall \ut{u} \in \V, \label{equ:adjoint}\\
\!\cip{m_i - \iparpr}{\ut{m}} + \ip{\ut{m} \Exp{m_i}\grad u_i}{\grad p_i} &=0, &&\forall \ut{m} \in \!\CM, \label{equ:grad}\\
\ip{\B^* \Wn \B v_{ik}}{\ut u} + \ip{ y_{ik} \Exp{m_i}\grad \ut u}{\grad p_i}+\ip{\Exp{m_i} \grad \ut u}{\grad q_{ik}} &= 0, &&\forall \ut u \in \V, \label{equ:incadjoint}\\
\ip{\ut{y} \Exp{m_i} \grad v_{ik}}{\grad p_i} + \cip{y_{ik}}{\ut{y}} + \ip{\ut{y} y_{ik} \Exp{m_i} \grad u_i}{\grad p_i} \nonumber \\
\tab\tab+\ip{ \ut{y} \Exp{m_i} \grad u_i}{\grad q_{ik}} &= \ip{z_k}{\tilde{y}}, &&\forall \ut y \in \CM,\label{equ:incgrad}\\
\ip{\Exp{m_i} \grad v_{ik}}{\grad \ut p} + \ip{y_{ik} \Exp{m_i}\grad u_i}{\grad \ut p} &=0, &&\forall \ut p \in \V. \label{equ:incstate}
\end{aligned}$$
The first three PDE constraints are the optimality system characterizing the MAP point $\ipar_i = \iparmap(\vec{w}; \obs_i)$. The last three equations are the PDE constraints that describe $\H\big(\iparmap(\vec{w}; \obs_i), \vec{w}; \obs_i\big) y_{ik} = z_k$ for $z_k \in \hilb$. Note also that, compared to the previous section, we have re-ordered the Hessian equations. There, the ordering follows the order in which the Hessian application is computed in practice; here, the ordering is such that the linear (block-)operator on the left hand side is symmetric. In summary, the above is a PDE-constrained optimization problem, where the constraints are the first-order optimality conditions of a PDE-constrained inverse problem, and a set of PDEs describing the application of the inverse Hessian to vectors.
Evaluation and gradient computation of the OED objective {#sec:oed-adjoint-grad}
--------------------------------------------------------
Evaluating the OED objective function involves the following steps: (1) find $(u_i, \ipar_i, p_i)$ that satisfy the optimality system characterizing the MAP point, and (2) find $(v_{ik}, y_{ik}, q_{ik})$ that satisfy the Hessian system, for $i \in \{1, \ldots, \Nd\}$ and $k \in \{1, \ldots, \Ntr\}$.
To solve the OED optimization problem, we rely on gradient-based optimization methods. Thus we need efficient methods for computing the gradient of $\hat \Psi$ with respect to the design vector $\vec{w}$. Again we follow a Lagrangian approach, and employ adjoint variables (i.e., Lagrange multipliers) to enforce the PDE constraints in the OED problem. The derivation of expressions for the gradient is rather involved, and is deferred to Appendix \[appdx:oed-gradient\]. Below, we simply present the final expression for the gradient, which takes the form: $$\label{equ:oed-grad}
\hat\obj'(\vec{w})\! =\! \sum_{i = 1}^\Nd \ncov^{-1}(\B u_i - \obs_i) \odot
\B\ad{p}_i - \frac{1}{\Nd\Ntr} \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr
\ncov^{-1}\B v_{ik} \odot \B v_{ik}$$ where $u_i, v_{ik}$ are available from the evaluation of $\hat\obj$ as described above, $\odot$ denotes the Hadamard product,[^1] and, for $i \in \{1, \ldots, \Nd\}$, the $\ad p_i$ are obtained by solving the following systems for the *OED adjoint variables* $(\ad p_i, \ad m_i, \ad
u_i)\in \V\times \CM\times \V $:
\[equ:outer-adj\] $$\begin{aligned}
\!\!\ip{\B^*\Wn\B\ad p_i}{\ut u}
\!+\! \ip{\ad{m}_i \Exp{m_i}\grad \ut{u}}{\grad p_i}
\!+\! \ip{\Exp{m_i} \grad \ut{u}}{\grad \ad{u}_i}
&\!=\! \ip{b^1_i}{\ut u},
\label{equ:outer-adj4-simp}
\\
\!\!\!\ip{\ut{m}\Exp{m_i}\grad{p}_i}{\grad\ad{p}_i}
\!+\! \cip{\ad{m}_i}{\ut{m}} \!+\! \ip{\ut m \ad{m}_i \Exp{m_i}\grad u_i}{\grad p_i}
\!+\! \ip{\ut{m}\Exp{m_i}\grad u_i}{\grad\ad{u}_i}
&\!=\!\ip{b_i^2}{\ut m},
\label{equ:outer-adj5-simp}
\\
\!\!\ip{\Exp{m_i} \grad \ut{p}}{\grad \ad{p}_i}
\!+\! \ip{\ad{m}_i \Exp{m_i}\grad u_i}{\grad \ut{p}}
&\!=\!\ip{b_i^3}{\ut p},
\label{equ:outer-adj6-simp}\end{aligned}$$
for all $(\ut u, \ut m, \ut p) \in \V \times \CM \times \V$, with the right hand sides given by $$\label{equ:rhs-nice}
\begin{aligned}
\ip{b^1_i}{\ut u} &= \frac{1}{\Nd\Ntr}\sum_{k = 1}^\Ntr \big[2 \ip{y_{ik}\Exp{m_i}\grad\ut{u}}{\grad q_{ik}} + \ip{y_{ik}^2 \Exp{m_i} \grad \ut u}{\grad p_i}\big],
\\
\ip{b^2_i}{\ut m} &=
\frac{1}{\Nd\Ntr} \sum_{k = 1}^\Ntr \big[2 \ip{ \ut{m} \Exp{m_i} \grad v_{ik}}{\grad q_{ik}}
+ 2 \ip{\ut m y_{ik} \Exp{m_i} \grad u_i}{\grad q_{ik}}\\
&\qquad\qquad\qquad+ 2 \ip{\ut m y_{ik} \Exp{m_i} \grad v_{ik}}{\grad p_i}
+ \ip{\ut m y_{ik}^2 \Exp{m_i} \grad u_i}{\grad p_i}\big],
\\
\ip{b^3_i}{\ut p} &= \frac{1}{\Ntr\Nd} \sum_{k = 1}^\Ntr \big[ 2\ip{y_{ik} \Exp{m_i}\grad\ut{p}}{\grad v_{ik}} + \ip{y_{ik}^2 \Exp{m_i} \grad u_i}{\grad \ut p}\big].
\end{aligned}$$ Note that the linear operator on the left hand side of this adjoint system coincides, after proper identification of variables, with the left hand side operator of the Hessian system. The fact that the system for the OED adjoint variables coincides with the system describing the Hessian of the inner optimization problem, i.e., the Hessian of $\mathcal J$ with respect to $\ipar$, can be exploited in numerical computations. In particular, if a Newton solver for the inner optimization problem is available, the implementation can easily be adapted to perform the computations required to evaluate the OED objective, and to compute the OED gradient. We summarize the steps for computing the OED objective function and its gradient in Algorithm \[alg:aopt\].
\[alg:aopt\] **Input:** design vector $\vec{w}$, trace estimator vectors $\{z_k\}_1^\Ntr$, data samples $\{\obs_i\}_1^\Nd$. **Output:** $\hat\obj = \hat\obj(\vec{w})$ and $\hat\obj' = \hat\obj'(\vec{w})$.

Initialize $\hat \obj = 0$ and $\hat\obj' = 0$. `/* Evaluation of the objective function */` For each $i$: compute $\iparmap(\vec{w}; \obs_i)$; for each $k$, solve $\H_i y_{ik} = z_k$; update $\hat \obj \gets \hat\obj + \frac{1}{\Nd\Ntr}\sum_{k=1}^\Ntr \ip{z_k}{y_{ik}}$. `/* Evaluation of the gradient */` For each $i$: compute ${v}_{ik}$ and $q_{ik}$; solve $\H_i\ad{m}_i = \bar{b}_i$; compute $\ad{p}_i$; update $\hat\obj' \gets \hat\obj' + \ncov^{-1} (\B u_i - \obs_i) \odot \B\ad{p}_i - \frac{1}{\Nd\Ntr}\sum_{k = 1}^\Ntr \ncov^{-1} \B v_{ik} \odot \B v_{ik}$.
Scalability of the OED solver {#sec:complexity}
-----------------------------
Here, we provide a discussion of the computational complexity and resulting scalability of solving the OED problem. Although this discussion is qualitative in nature, we do provide numerical evidence of the scalability of our OED solver in section \[sec:example1\]. The cost of solving the OED problem is measured in terms of the number of required forward-like PDE solves, i.e., solves of the forward problem or its adjoint or incremental variants. We measure cost in this way to remain agnostic to the specific governing forward PDEs and the particular PDE solver employed. These forward-like PDE solves constitute the kernel component of the OED optimization solver, and for any non-trivial PDE forward problem, the PDE solves overwhelmingly dominate the overall cost; the remaining linear algebra is negligible in comparison. Having defined cost in this manner, scalability then requires that the number of forward-like PDE solves is independent of the problem dimensions, which for the OED problem are the (discretized) parameter dimension and the sensor dimension $\Ns$ (the state dimension is hidden within the forward-like PDE solver).
To assert scalability of the OED solver, we have to argue that (1) the evaluation of the OED objective $\hat\obj$, (2) the evaluation of the gradient of the OED objective $\hat\obj'$, and (3) the number of OED optimization iterations are all independent of the parameter and sensor dimensions. To make this argument, we begin by identifying a property of the Hessian systems that are solved at each OED optimization iteration. These Hessian systems include those arising at each iteration of the inner optimization problem (i.e., minimizing $\mathcal{J}$), as well as the Hessian solves characterizing the posterior covariance in the OED objective evaluation and those arising in the OED gradient computation. Consider the Hessian $\H$ evaluated at the MAP point and notice that $\H$ can be written as $\H = \HM + \Cprior^{-1}$, with $\HM$ representing the Hessian of the first term (i.e., the data misfit term) in $\J$. As discussed in [@FlathWilcoxAkcelikEtAl11; @Bui-ThanhGhattasMartinEtAl13], the numerical rank $r$ of the prior-preconditioned data misfit Hessian, $\HMt
= \Cprior^{1/2} \HM \Cprior^{1/2}$, is independent of the parameter dimension and, for many inverse problems, small. Moreover, the rank is independent of the sensor dimension as well. This parameter/sensor dimension-independence of $r$ reflects the fact that (1) the data are often finite-dimensional, (2) the parameter-to-observable map is often smoothing, and (3) the prior covariance operator is of smoothing type. The numerical rank $r$ depends on the parameter-to-observable map, the smoothing properties of the prior, and the true information content of the data. The rank grows initially with parameter and sensor dimensions until all information contained in the data about the parameters has been resolved. Beyond this, the rank $r$ of $\HMt$ is insensitive to further increases in parameter and sensor dimension (e.g., through mesh refinement).
Next, we analyze the computational cost (again, measured in forward-like PDE solves) of evaluation of the OED objective function and its gradient as detailed in Algorithm \[alg:aopt\]. We rely on inexact Newton-CG with Armijo line search to solve the inner optimization problems in step 4 of the algorithm. The computational cost of each Newton step is dominated by the conjugate gradient iterations. Using the prior covariance as a preconditioner for CG, the number of CG iterations will be $\O(r)$ (see [@CampbellIpsenKelleyEtAl96] for mesh invariance properties of CG for operators that are compact perturbations of the identity). Each CG iteration involves an application of the data misfit Hessian, which in turn involves a pair of incremental forward/adjoint PDE solves; therefore, the cost, in terms of forward-like PDE solves, of each inner optimization problem is $\O(n_\text{newton} \times 2 \times r)$, where $n_\text{newton}$ is the total number of Newton iterations. Note that here we do not take into account the inexactness of Newton-CG. If in earlier iterations the Newton system is solved only approximately, $n_\text{newton}$ can be replaced with a smaller number. Next, we note that for each data sample $\obs_i$, $i = 1 \ldots \Nd$, we perform $\Ntr$ Hessian solves in steps 5–7 of the algorithm, where we solve for $y_{ik}$, $k = 1, \ldots, \Ntr$. Thus, since we use CG to solve these systems, it follows that the computational cost, measured in forward-like PDE solves, of evaluating the OED objective function is $$\label{equ:OED-obj-cost}
\O(\Nd \times n_\text{newton} \times 2 \times r) + \O(\Nd \times \Ntr \times 2 \times r).$$ Note also that by the mesh invariance properties of the Newton method for nonlinear optimization [@Deuflhard04], $n_\text{newton}$ is independent of the parameter dimension.
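To make the solve-count estimate concrete, the bookkeeping can be sketched as follows (a hypothetical helper of our own; the names are illustrative and not from the paper's implementation):

```python
# Hypothetical bookkeeping helper (illustrative only): the number of
# forward-like PDE solves for one OED objective evaluation, following the
# estimate O(Nd * n_newton * 2 * r) + O(Nd * Ntr * 2 * r).
def oed_objective_solve_count(n_data, n_newton, n_tr, rank):
    inner_opt = n_data * n_newton * 2 * rank  # Newton-CG solves for each MAP point
    trace_est = n_data * n_tr * 2 * rank      # Hessian solves y_ik for the trace estimator
    return inner_opt + trace_est
```

For instance, with $\Nd = 5$ data samples, $10$ Newton iterations, $\Ntr = 20$ trace vectors, and numerical rank $r = 30$, the count is $5\cdot10\cdot2\cdot30 + 5\cdot20\cdot2\cdot30 = 9{,}000$ solves; the key point is that neither term grows with the parameter or sensor dimension.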
To compute the gradient, we need to perform the computations in step 11, which entail $2 \times \Nd \times \Ntr$ PDE solves, as well as the Hessian solves in step 13 of Algorithm \[alg:aopt\], whose cost is $\O(\Nd \times 2 \times r)$ PDE solves. Thus, the cost of evaluating the OED gradient is $$2 \times \Ntr \times \Nd + \O(\Nd \times 2 \times r)$$ forward-like PDE solves.
Observe that step 6 of Algorithm \[alg:aopt\] involves $\Ntr$ systems with the same Hessian operator and different right hand sides. Thus, it is possible to further reduce the complexity of the algorithm. For instance, precomputing a low rank approximation of the prior-preconditioned data misfit Hessian $\HMt$ (after solving the inner optimization problem) provides an efficient method for applications of the inverse Hessian that is free of PDE solves [@Bui-ThanhGhattasMartinEtAl13; @FlathWilcoxAkcelikEtAl11]. Using this low rank approximation of $\HMt$ allows us to remove the factor $\Ntr$ in the second term of .
The final argument to make is that the number of OED optimization iterations is parameter/sensor dimension-independent. If one solves the OED problem using a Newton method, we would expect this to be the case. In the example of section \[sec:example1\], we employ a quasi-Newton method. It is difficult to make a dimension-independence argument for quasi-Newton for the OED problem; however, in that section we do observe dimension independence of OED optimization iterations.
Sparsity control {#sec:sparsity}
----------------
Here we briefly comment on the sparsity enforcing penalty method used in the present work, which is based on the approach in [@AlexanderianPetraStadlerEtAl14]. In particular, considering the problem , we first solve the problem with $P(\vec{w}) = \vec{1}^T\vec{w}$, amounting to an $\ell^1$ penalty to obtain the minimizer $\vec{w}^*_0$. Subsequently, we consider a sequence of penalty functions, $P_\eps(\vec{w})$ such that as $\eps
\to 0$, $P_\eps$ approaches the $\ell^0$ norm. To cope with the non-convexity of these penalty functions, we follow a continuation strategy, i.e., we solve over a decreasing sequence $\{\eps_i\}$: for $\eps_1$, we solve with penalty function $P_{\eps_1}$ and the initial guess (for the optimization algorithm) given by $\vec{w}^*_0$. Subsequently, for each $i \geq 2$ the problem is solved with $P_{\eps_i}$ as the penalty function and the initial guess given by the solution of the preceding optimization problem corresponding to $\eps_{i-1}$. The precise definition of the penalty functions $P_\eps$ used follows [@AlexanderianPetraStadlerEtAl14]. In practice, we observe that a few continuation iterations are sufficient to attain an optimal weight vector with a 0/1 structure.
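The continuation loop can be sketched as follows; the solver handle `solve_penalized` is a placeholder for the interior-point solve with a given penalty and initial guess, and the penalty family shown, $P_\eps(\vec w) = \sum_j w_j/(w_j + \eps)$, is one illustrative choice that approaches the number of nonzeros as $\eps \to 0$ for $w_j \in [0,1]$ (the paper uses the penalty functions of [@AlexanderianPetraStadlerEtAl14]):

```python
import numpy as np

# Sketch of the continuation strategy (our own; `solve_penalized` stands in
# for the interior-point solve with a given penalty and initial guess).
# Illustrative penalty: P_eps(w) = sum_j w_j / (w_j + eps), which approaches
# the number of nonzeros of w as eps -> 0 for w_j in [0, 1].
def continuation_sparsify(solve_penalized, w_l1, eps_schedule):
    """Warm-started continuation over a decreasing sequence of eps values."""
    w = w_l1                             # minimizer of the convex l1-penalized problem
    for eps in eps_schedule:             # eps_1 > eps_2 > ... -> 0
        penalty = lambda v, eps=eps: np.sum(v / (v + eps))
        w = solve_penalized(penalty, w0=w)
    return w
```

The warm start is what makes the non-convex solves tractable: each problem is only a small perturbation of the previous one.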
Example 1: Idealized subsurface flow {#sec:example1}
====================================
In this section, we study the effectiveness of our OED approach applied to the parameter estimation problem considered in section \[sec:ellipticOED\]. We interpret as a subsurface flow problem and thus refer to $u$ as the pressure and to $m$ as the log permeability.
Setup of forward problem {#sec:prob1_forward}
------------------------
To detail the forward problem , we consider the domain $\D := (0, 1) \times (0,
1)\subset\mathbb R^2$ and no volume forcing, i.e., $f = 0$. We assume no-outflow conditions on ${\ensuremath{\Gamma_{\!\!N}}}:=\{0,1\}\times (0,1)$, i.e., the homogeneous Neumann conditions $\Exp{\ipar} \grad u \cdot \vec{n} = 0$ on ${\ensuremath{\Gamma_{\!\!N}}}$. The flow is driven by a pressure difference between the top and the bottom boundary, i.e., we use $u = 1$ on $(0,1)\times \{1\}$ and $u = 0$ on $(0,1)\times \{0\}$. This Dirichlet part of the boundary is denoted by ${\ensuremath{\Gamma_{\!\!D}}}:=(0,1)\times\{0,1\}$. In Figure \[fig:forwardprob\], we show the “truth” permeability used in our numerical tests, the corresponding pressure and the Darcy velocity field.
Prior and noise model {#sec:prob1_prior_and_noise}
---------------------
We assume given estimates $\ipart^1,\ldots,\ipart^5$ of the log permeability at five points, i.e., $N = 5$, in $\D$, namely $\vec{x}_1 = (0.1,0.1)$, $\vec{x}_2 = (0.1,0.9)$, $\vec{x}_3 = (0.9,0.1)$, $\vec{x}_4 =
(0.9,0.9)$, and $\vec{x}_5 = (0.5,0.5)$. Based on this knowledge, we compute $\iparpr$, the mean of the prior measure, as a regularized least-squares fit of these point observations by solving $$\label{equ:prior_mean_prob}
\iparpr = \operatorname*{arg\,min}_{\ipar \in \CM}
\frac12 \ip{\ipar}{\A \ipar} + \frac\alpha2 \sum_{i = 1}^N
\int_\D
\delta_i(\vec{x}) \big[\ipar(\vec{x}) - \ipart(\vec{x})\big]^2 \, d\vec{x}.$$ Here, $\A[\ipar]= -\grad \cdot (\mat{\Theta} \grad \ipar)$, where the positive definite matrix $\mat\Theta$ allows us to control the prior covariance. We define the prior covariance as $\Cprior := \mathcal{L}^{-2}$, where $\mathcal{L} = \A + \alpha \sum_{i = 1}^N \delta_i$, with the following parameter values: $$\label{equ:prior_parameters}
\alpha = 1, \quad \mat{\Theta} = 5\times 10^{-2}\begin{pmatrix} 1/2 & 0\\ 0 & 2\end{pmatrix}.$$ In Figure \[fig:prior\], we show the prior mean $\iparpr$, obtained by solving , and three random draws from the prior distribution. Note that our choice for $\mat\Theta$ corresponds to a prior distribution with stronger correlation in the $y$-direction. It remains to specify the noise covariance matrix, for which we choose $\ncov = \sigma^2 I$, with $\sigma = 0.05$. We use a linear triangular finite element mesh with $\Nm = 1{,}121$ degrees of freedom to discretize the state, adjoint, and parameter variables. The discrete inference parameters are the coefficients in the finite element expansion of the parameter field.
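To make the construction of $\Cprior$ concrete, the following one-dimensional analogue (our own sketch, not the paper's code; the white-noise and point-mass scalings are handled only heuristically) draws a sample with covariance $\mathcal{L}^{-2}$ by solving $\mathcal{L}m = s$ for white noise $s$: since $\mathcal{L}$ is symmetric, $\operatorname{Cov}(\mathcal{L}^{-1}s) = \mathcal{L}^{-1}\mathcal{L}^{-T} = \mathcal{L}^{-2}$.

```python
import numpy as np

def sample_prior_1d(n, theta, alpha, obs_idx, rng):
    """Draw a sample with covariance L^{-2}, L = -theta*Laplacian + alpha*sum_i delta_i.

    1D unit interval, symmetric finite-difference Neumann Laplacian; the
    point masses delta_i are lumped onto grid nodes (heuristic scaling).
    """
    h = 1.0 / (n - 1)
    lap = (np.diag(np.r_[-1.0, -2.0 * np.ones(n - 2), -1.0])
           + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
    L = -theta * lap                  # symmetric positive semidefinite
    for i in obs_idx:
        L[i, i] += alpha / h         # lumped point mass at node i (makes L SPD)
    s = rng.standard_normal(n)       # discrete white noise
    return np.linalg.solve(L, s)     # m = L^{-1} s  =>  Cov(m) = L^{-2}
```

Without the point-mass terms, the pure-Neumann operator would be singular on constants; the $\alpha$-terms both regularize the operator and pin down the variance near the observation points.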
Effectiveness of A-optimal design {#sec:prob1_effectivness}
---------------------------------
We solve the OED problem with $\Nd = 5$ experimental data samples $\obs_i$, and use $\Ntr =
20$ random vectors in the trace estimator. We employ $\ell^0$-sparsification using the continuation process described in section \[sec:sparsity\]. We obtain an optimal sensor configuration with $10$ sensors for the penalty parameter $\upgamma = 0.008$, and an optimal design with $20$ sensors for $\upgamma = 0.005$. As a first test of the effectiveness of the resulting designs, we solve the inference problem with the “truth” parameter field given in Figure \[fig:forwardprob\]a). Using data obtained at the A-optimal sensor configuration (with $10$ sensors), we compute the MAP point by solving and the Gaussian approximation of the posterior measure at the MAP point. The results are shown in Figure \[fig:oed10\], where the posterior standard deviation field is also compared with the prior standard deviation field.
To study the effectiveness of the optimal designs, we first report the error with respect to the “truth” permeability field $\ipar_\text{true}$. In Figure \[fig:cloudplots\] we show a comparison of the relative error of the MAP estimator, $$E_\text{rel}(\vec{w}) = \frac{\norm{\iparmap(\vec{w}) - \ipar_{\text{true}}}}{\norm{\ipar_{\text{true}}}},$$ and of $\trace(\H(\vec{w})^{-1})=\trace(\postcov)$ for the optimal design $\vec{w}_\text{opt}$ and for random designs with the same number of sensor locations, where $\|\cdot\|$ is the $L^2$-norm. From Figure \[fig:cloudplots\], we draw the following conclusions: (1) The optimal design with $10$ sensors improves over randomly selected designs more significantly than the optimal design with $20$ sensors; this indicates that as sensors become more scarce, computing an optimal design becomes more important. (2) There is a correlation between minimizing the average variance and minimizing the $L^2$-error of the MAP estimator. This is interesting but not entirely surprising, because for a Bayesian linear inverse problem with Gaussian prior and noise, it can be shown that minimizing the average posterior variance is equivalent to minimizing the average mean square error of the MAP estimator [@AlexanderianPetraStadlerEtAl14].
Note that the results shown in Figure \[fig:cloudplots\] study the effectiveness of the OED with respect to a specific “truth” model. A natural question is how effective the design would be if we were trying to recover a different underlying truth. To address this issue, we conduct a statistical test of the effectiveness of the optimal designs as follows. We draw samples $\{ m_1', \ldots, m_{\Ndp}'\}$ from the prior measure and generate corresponding data vectors $\obs_i' = f(m_i') + \vec{\eta}_i'$, with $\vec{\eta}_i'$ drawn from $\GM{\vec{0}}{\ncov}$, $i = 1, \ldots, \Ndp$. For a given design, $\vec{w}$, we compute an expected error $\overline E_\text{rel}$ and an expected average variance $\overline{V}$: $$\begin{aligned}
\overline{V}(\vec{w}) = \frac1{\Ndp} \sum_{i = 1}^{\Ndp} \trace\big(\H^{-1}(\vec{w}, \obs_i')\big), \qquad
\overline{E}_\text{rel}(\vec{w}) = \frac1{\Ndp} \sum_{i = 1}^{\Ndp} \frac{\norm{\iparmap(\vec{w}; \obs_i') - m_i'}}{\norm{m_i'}}.
\end{aligned}$$ For the purpose of this numerical test, we let $\Ndp$ be larger than the number $\Nd$ of the data samples used in computing the optimal design, and the samples $\{ m_1', \ldots, m_{\Ndp}'\}$ are drawn independently of the samples used in the sample average defining the OED objective function (see section \[subsec:sample\]). Hence, $\overline V$ is essentially a more accurate estimate of the objective function we sought to minimize when solving the OED problem. This allows us to assess how well an optimal design, computed based on a small set of data $\{\obs_1,\ldots,\obs_\Nd\}$, performs in minimizing the more accurate estimate $\overline V$. For designs with $10$ and $20$ sensors, we compute $\overline{V}(\vec w)$ and $\overline{E}_\text{rel}(\vec w)$ with $\Ndp = 50$ for the optimal designs and for $\Nw = 30$ randomly chosen designs $\vec{w}_1,
\ldots, \vec{w}_\Nw$. The results, shown in Figure \[fig:ubercloudplots\], indicate that the A-optimal designs computed with a relatively small number of data samples not only minimize the average posterior variance, but also result in a minimal expected error between the true parameter and the MAP point.
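Schematically, this statistical test computes the two sample averages as follows (every callable is a placeholder for the corresponding component described above):

```python
import numpy as np

# Schematic validation loop (all callables are placeholders for the paper's
# solvers): for a candidate design w, estimate the expected average posterior
# variance V_bar and the expected relative L2 error E_bar of the MAP point
# over fresh prior samples m'.
def validate_design(w, prior_samples, simulate_data, solve_map, trace_inv_hessian):
    V_bar, E_bar = 0.0, 0.0
    for m_true in prior_samples:
        d = simulate_data(m_true, w)          # y' = f(m') + noise at active sensors
        m_map = solve_map(w, d)               # inner (MAP) inverse problem
        V_bar += trace_inv_hessian(w, d)      # trace(H^{-1}(w, y'))
        E_bar += np.linalg.norm(m_map - m_true) / np.linalg.norm(m_true)
    n = len(prior_samples)
    return V_bar / n, E_bar / n
```

Each pass through the loop costs one full Bayesian inverse solve plus the trace estimation, which is why $\Ndp$ is kept moderate in the experiments.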
Scalability and performance {#sec:prob1_scalability}
---------------------------
Finally, we examine the convergence behavior of our method as the number of parameters and the number of sensor candidate locations increases. Specifically, we study the computational cost in terms of the number of solves of , its adjoint, or the associated incrementals. These elliptic PDE solves are the main building block of our method.
First, we consider the cost of computing the OED objective function and its gradient. As seen in Algorithm \[alg:aopt\] and the discussion in section \[sec:complexity\], a significant part of the computational cost of evaluating the OED objective function amounts to solving the inner optimization problem for the MAP point using an inexact Newton-CG method. Here, the computational cost is dominated by the CG iterations needed in each Newton step. Hence, as a measure of the computational cost, we report the total number of “inner” CG iterations. We also report the number of “outer” CG iterations in steps 6 and 13 of Algorithm \[alg:aopt\], which are required for computing the OED objective function and the gradient, respectively. For this numerical study, we focus on the evaluation of the OED cost function and its gradient at $\vec{w} = (1, 1, \cdots, 1) \in \R^\Ns$ (i.e., with all sensors active) and with $\Ntr = \Nd = 1$. The results shown in Figure \[fig:scalability\] indicate that the computational costs of evaluating the OED objective function and its gradient are insensitive to increasing the parameter dimension, and depend only weakly on the number of sensor candidate locations. Figure \[fig:scalability\] also shows the number of interior point quasi-Newton iterations required for solving the OED optimization problem, as parameter and sensor dimensions increase. As can be seen, the number of iterations for solving the OED optimization problem is insensitive to both parameter and sensor dimensions.
Example 2: Subsurface flow based on SPE10 model {#sec:example2}
===============================================
In this section, we consider a more realistic example, using permeability field data from the Society of Petroleum Engineers’ 10th SPE Comparative Solution Project (SPE10).[^2]
Bayesian inverse problem setup
------------------------------
We define the physical domain $\D = (0, 2.2) \times (0, 1.2)$ (with unit of length in 1000’s of feet) and use as the “truth” permeability field a vertical slice[^3] of the three-dimensional SPE10 permeability data. Following the setup of the SPE10 model, we consider an injection well in the center of the domain, and four production wells at the corners of the domain. The injection well is modeled as a mollified point source, and enters through the right hand side function $f$ given by $
f(\vec{x}) = {C}/({2\pi L}) \exp \left\{ -{1}/({2L}) (\vec{x} - \vec{x}_0)^T(\vec{x} - \vec{x}_0) \right\},
$ with $L = 10^{-4}$ and $C = 50$, and $\vec{x}_0 = (1.1, 0.6)$. To model the production wells, we fix the pressure at zero at the four corners of the domain. Specifically, we cut circular regions from the four corners of the domain (modeling the boundaries of wells) and impose zero Dirichlet boundary conditions on the resulting quarter circles. Homogeneous Neumann boundary conditions are used on the remainder of the boundary. In Figure \[fig:spe\_forward\], we show the “truth” log permeability field, as well as the Darcy velocity field and the pressure obtained by solving the state equation with the true permeability field.
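For reference, the injection-well source term above can be implemented directly; only the vectorization details below are our own:

```python
import numpy as np

def injection_source(x, x0=(1.1, 0.6), L=1e-4, C=50.0):
    """Mollified point source f(x) = C/(2*pi*L) * exp(-|x - x0|^2 / (2L))."""
    x = np.atleast_2d(x)                        # accept a single point or an array of points
    r2 = np.sum((x - np.asarray(x0)) ** 2, axis=1)
    return C / (2.0 * np.pi * L) * np.exp(-r2 / (2.0 * L))
```

At the well center the source attains its peak $C/(2\pi L) \approx 7.96 \times 10^4$ and decays to numerical zero within a few multiples of $\sqrt{L} = 10^{-2}$, so the well is effectively a point source on the scale of the domain.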
The prior construction is similar to that in the previous test problem. We assume estimates $\ipart^1,\ldots,\ipart^5$ of the log permeability at $N=5$ points, one at the injection well in the center of $\D$, and the others near each of the four corners of the domain (at the production well boundaries). Based on these data, we compute the mean of the prior measure, as a regularized least-squares fit of these point observations as in ; see Figure \[fig:spe\_prior\](a). As before, the prior covariance is $\C_0 = \mathcal{L}^{-2}$ where $\mathcal{L} = -\theta\Delta + \alpha \sum_{i = 1}^N \delta_i$, with parameter values in given by $\theta =
3.54\times10^{-2}$ and $\alpha = 1.25\times10^1$.
Linear triangular finite elements with $\Nm = 10{,}202$ degrees of freedom are used to discretize the state, adjoint and the parameter variables.
A-optimal design of experiments
-------------------------------
We use a grid of $128$ candidate sensor locations in the domain $\D$, and compute an A-optimal design based on one data sample, computed using one random draw from the prior, depicted in Figure \[fig:spe\_prior\](b). For the OED objective function, given in , we use a trace estimator with $\Ntr = 20$ random vectors. After six continuation iterations, our method converged to a 0/1 design vector. In each continuation step we terminated the interior-point iterations when either the relative residual fell below $10^{-5}$ or we reached a maximum of 100 interior-point BFGS iterations.
We solve the Bayesian inverse problem using experimental data at the A-optimal sensor locations for the “truth” log permeability $\ipart$. To capture the extreme variations in the permeability field, we solve the forward problem using quadratic triangular elements on a finer mesh with $n = 237{,}573$ degrees of freedom, and record pressure measurements at the sensor sites. This data vector is subsequently used in the solution of the Bayesian inverse problem. After solving the Bayesian inverse problem with the A-optimal sensor configuration, in Figure \[fig:spe\_prior\](c), we show the MAP point, and in Figure \[fig:spe\_aopt\], compare the prior and posterior standard deviation fields.
Finally, to assess the effectiveness of the computed A-optimal sensor placement, we compare the relative error of the MAP point, as well as the average posterior variance, obtained by solving the Bayesian inverse problem with the optimal design against those obtained with randomly generated designs having the same number of sensors. Note that the A-optimal sensor placement outperforms the random designs.
Conclusions and remarks
=======================
We have developed a scalable method for computing A-optimal experimental designs for infinite-dimensional Bayesian nonlinear inverse problems governed by PDEs. By scalable, we mean that the cost (measured in forward-like PDE solves) of solving the OED problem is independent of the parameter and sensor dimensions. The OED formulation results in a bilevel optimization problem that features an inverse problem as the inner optimization problem, and additional forward-like PDEs representing the action of the inverse Hessian of the inverse problem as constraints for the outer optimization problem. We specialize this OED formulation to the problem of determining the sensor placement that optimally infers the coefficient of an elliptic PDE in the sense that the uncertainty in the recovered coefficient is minimized over a set of prior model samples. For the resulting PDE-constrained OED problem, we derive adjoint-based expressions for the gradient, which enables use of efficient gradient-based optimization algorithms. Computing the gradient of the OED objective function requires differentiating expressions involving the Hessian, which requires third derivatives of the parameter-to-observable map. These are made tractable via a variational formulation of the OED problem. Numerical studies of the performance of our OED method for the inference of the log permeability field in a porous media flow problem indicate that the computational cost of computing an A-optimal experimental design, measured in the number of forward-like PDE solves, is insensitive to the dimension of the discretized parameter field and to the sensor dimension.
A potential limitation of our method is defining the OED objective in terms of a Gaussian approximation to the posterior distribution of the parameter field. However, as mentioned in the introduction, a Gaussian provides a good approximation to the posterior in cases where a linear approximation to the parameter-to-observable map over the set of parameters with significant posterior probability is sufficiently accurate. Relaxing the Gaussian approximation of the posterior for large-scale Bayesian inverse problems with expensive-to-evaluate parameter-to-observable maps is extremely challenging. The fact that the Bayesian inverse problem is merely an inner problem for computing OEDs compounds these challenges.
A related consideration is the influence of the prior on the OED obtained from our formulation. In cases where one has limited prior information, samples from the prior may have rather different features. Since data computed from these vastly different prior samples are used as “training data” in our OED formulation, the resulting design might be suboptimal for the “truth” parameter as we are searching for an A-optimal design that accommodates a wide range of data. In such cases, an effective strategy could be an iterative process: namely, one conducts initial field experiments and obtains a Bayesian update, which better constrains the uncertain parameter field. This field is then used as prior in the computation of an OED, whose target is to collect additional experimental data.
Another limitation of our approach is that our sparsification strategy provides only indirect control on the number of sensors in the optimal configuration. In practice, solving multiple OED problems may be required to determine an appropriate penalty parameter experimentally. This, however, is the price we pay to render an otherwise combinatorial sensor placement problem computationally tractable.
Computing optimal experimental designs still requires a large number of forward (or adjoint or incremental) PDE solves. However, as discussed in Section \[sec:complexity\], a number of systems characterized by the same Hessian operator must be solved at each OED step, which suggests that using low rank Hessian approximations as discussed in [@AlexanderianPetraStadlerEtAl14; @Bui-ThanhGhattasMartinEtAl13; @FlathWilcoxAkcelikEtAl11] can mitigate this computational cost. Moreover, our OED method contains important coarse-grained parallelism: the inverse problems corresponding to each data sample can be solved independently.
In future work, we intend to study the sensitivity of the optimal sensor placement to the number of data samples in the OED problem. The data samples are generated by sampling the prior model; their number is dictated by the need to solve an additional inverse problem for each sample at each OED iteration. For this reason, the numerical experiments in this paper have been limited to a small number of data samples. However, we speculate that increasing the number of data samples leads to diminishing returns, since the goal is not to fully sample the prior, but to determine optimal sensor locations, and we expect that they will be sensitive to only a limited number of directions in the parameter space. Thus, an interesting extension of this work is to determine how many data samples are needed.
An infinite-dimensional trace estimator {#apdx:trace_estimator}
=======================================
Let $\mu_\delta = \GM{0}{\C_\delta}$ and $\tilde{\mu}_\delta = \GM{0}{\A^{1/2} \C_\delta \A^{1/2}}$ with $\A$ and $\C_\delta$ as in the paragraph preceding ; the final equality in follows by noting that $$\int_\hilb \ip{z}{\A z} \, \mu_\delta(dz)
\!=\! \int_\hilb \|\A^{1/2}z\|^2\, \, \mu_\delta(dz)
\!=\! \int_\hilb \norm{y}^2 \, \tilde{\mu}_\delta(dy)
\!=\! \trace(\A^{1/2} \C_\delta\A^{1/2}) = \trace(\A\C_\delta).$$ The following result justifies taking the limit as we let $\delta \to 0$.
Let $\D$ be a bounded domain with Lipschitz boundary and consider the operator $\C_\delta = (-\delta \Delta + I)^{-2}$ defined on $L^2(\D)$, where $\delta$ is a positive real and $\Delta$ is the Laplacian operator on $\D$ with the natural boundary condition. Suppose $\A$ is a positive self-adjoint trace-class operator on $L^2(\D)$. Then, $$ \lim_{\delta \to 0} \trace(\A\C_\delta) = \trace(\A).$$
Let us consider the difference, $\trace(\A) - \trace(\A\C_\delta) = \trace(\A(I - \C_\delta))$. Denote by $\{e_i\}_{i = 1}^\infty$ the eigenvectors of $I - \C_\delta$ (independent of $\delta$) and by $\lambda_i^\delta$ the respective eigenvalues. By the definition of the operator $\C_\delta$ we have $\lambda_i^\delta = 1 - 1/(1 + \delta \nu_i)^2$ where $\nu_i$ are the (unbounded) eigenvalues of $-\Delta$. Using the fact that $0 \leq \nu_i \to \infty$, we know $0 \leq \lambda_i^\delta < 1$ for all $\delta > 0$. Next, we note $$\trace(\A(I - \C_\delta)) = \sum_{i = 1}^\infty \ip{e_i}{\A(I - \C_\delta)e_i}
= \sum_{i = 1}^\infty \lambda_i^\delta \ip{e_i}{\A e_i} < \infty.$$ Let $\eps > 0$ be fixed but arbitrary and note that we can fix $N_0 \in \N$ such that, $
\sum_{i = N_0 + 1}^\infty \lambda_i^\delta \ip{e_i}{\A e_i} \leq \sum_{i = N_0 + 1}^\infty \ip{e_i}{\A e_i} < \eps/2
$. Also, we can choose $\delta > 0$ sufficiently small so that, $
\sum_{i = 1}^{N_0} \lambda_i^\delta \ip{e_i}{\A e_i} \leq \norm{\A} \sum_{i = 1}^{N_0} \lambda_i^\delta < \eps/2,
$ and hence the assertion of the proposition follows.
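In the discretized setting, the practical counterpart of the Gaussian identity above is the randomized trace estimator used in evaluating the OED objective: for i.i.d. $z_k \sim \mathcal{N}(0, I)$, $\mathbb{E}[z^T \mat{A} z] = \trace(\mat{A})$. A minimal matrix-free sketch (our own):

```python
import numpy as np

def gaussian_trace_estimate(apply_A, n, n_tr, rng):
    """Estimate trace(A) as (1/n_tr) * sum_k z_k^T A z_k with z_k ~ N(0, I).

    apply_A is a matrix-free operator; for the OED Hessian, each application
    costs a pair of incremental PDE solves per CG iteration.
    """
    zs = rng.standard_normal((n_tr, n))
    return sum(z @ apply_A(z) for z in zs) / n_tr
```

The variance of this estimator decays like $1/\Ntr$, which helps explain why modest values of $\Ntr$ suffice for optimization purposes in the numerical examples.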
Gradient derivation of OED objective function $\hat\obj$ {#appdx:oed-gradient}
========================================================
Here, we summarize the derivation of the gradient of the OED objective function presented in . To derive the expression for the gradient, we employ a formal Lagrangian approach [@Troltzsch10], which uses a Lagrangian function composed of the objective function with the PDE constraints – enforced through Lagrange multiplier functions. This Lagrangian function $\LOED$ for the OED problem is given by: $$\begin{aligned}
{1}
\LOED&\left(\vec{w}, \{u_i\}, \{m_i\}, \{p_i\}, \{v_{ik}\}, \{q_{ik}\},\{ y_{ik}\},
\{\ad{u}_i\}, \{\ad{m}_i\}, \{\ad{p}_i\}, \{\ad{v}_{ik}\}, \{\ad{q}_{ik}\}, \{\ad{y}_{ik}\}\right) \\
=&\frac{1}{\Nd\Ntr}\sum_{i=1}^\Nd \sum_{k = 1}^\Ntr \ip{ z_k }{y_{ik}} \\
&+ \sum_{i = 1}^\Nd \big[ \ip{\Exp{m_i} \grad u_i}{\grad \ad{u}_i} - \ip{f}{\ad{u}_i} - \ip{h}{\ad{u}_i}_{{\ensuremath{\Gamma_{\!\!N}}}}\big]\\
&+ \sum_{i = 1}^\Nd \big[\ip{\Exp{m_i} \grad p_i}{\grad \ad{p}_i} + \ip{\B^*\Wn(\B u_i - \obs_i)}{\ad{p}_i}\big]\\
&+ \sum_{i = 1}^\Nd \big[\cip{m_i - \iparpr}{ \ad{m}_i} + \ip{\ad{m}_i \Exp{m_i}\grad u_i}{\grad p_i}\big]\\
&+ \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr\big[\ip{ \Exp{m_i}\grad v_{ik}}{\grad \ad{v}_{ik}} + \ip{y_{ik}\Exp{m_i}\grad{u_i}}{\grad \ad{v}_{ik}}\big]\\
&+ \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr\big[\ip{ \Exp{m_i}\grad q_{ik}}{\grad \ad q_{ik}} + \ip{y_{ik} \Exp{m_i}\grad p_i}{\grad \ad{q}_{ik}} + \ip{\B^*\Wn\B v_{ik}}{\ad{q}_{ik}}\big]\\
&+ \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr\big[\ip{\ad{y}_{ik} \Exp{m_i}\grad v_{ik}}{\grad p_i} + \cip{\ad{y}_{ik}}{y_{ik}}
+ \ip{\ad{y}_{ik}\Exp{m_i}\grad u_i}{\grad q_{ik}}\\
&\qquad\qquad+ \ip{\ad y_{ik} y_{ik} \Exp{m_i} \grad u_i}{\grad p_i} - \ip{z_k}{\ad{y}_{ik}}\big].\end{aligned}$$ The variables $(u_i, m_i, p_i) \in \Vg\times \CM\times \V$, for $i \in
\{1, \ldots,\Nd\}$, and $(v_{ik}, q_{ik}, y_{ik})\in \V\times \V \times
\CM$, with $(i, k) \in \{1,
\ldots, \Nd\} \times \{1, \ldots, \Ntr\}$ are the *OED state variables*. The *OED adjoint variables* $\ad{u}_i, \ad{m}_i, \ad{p}_i, \ad{v}_{ik}, \ad{q}_{ik}$, and $\ad{y}_{ik}$ belong to the test function spaces corresponding to their state counterparts.
The gradient for is given by the derivative of $\LOED$ with respect to the weight vector $\vec w$, provided that variations of $\LOED$ with respect to the OED state and adjoint variables vanish. The weight vector enters the Lagrangian through the weight matrix $\Wn = \sum_{j = 1}^\Ns w_j \mat{E}_j$, where $\mat{E}_j =
\sigma_j^{-2} \vec{e}_j \vec{e}_j^T$. (Here $\vec{e}_j$ denotes the $j$th standard basis vector in $\R^\Ns$.) Using this notation, it is straightforward to compute derivatives of the Lagrangian function with respect to $w_j$, the $j$th component of the weight vector $\vec w$: $$\LOED_{w_j} = \sum_{i = 1}^\Nd \ip{\B^* \mat{E}_j (\B u_i - \obs_i)}{ \ad{p}_i} +
\sum_{i=1}^\Nd \sum_{k = 1}^\Ntr \ip{\B^* \mat{E}_j \B v_{ik}}{\ad{q}_{ik}}, \quad \mbox{ for } j = 1, \ldots, \Ns.$$ Recalling the definition of $\mat{E}_j$ and using a vector form for the gradient, we obtain $$\label{equ:gradw}
\hat\obj' =
\sum_{i = 1}^\Nd \ncov^{-1}(\B u_i - \obs_i) \odot \B\ad{p}_i +
\sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr \ncov^{-1} \B v_{ik} \odot \B \ad{q}_{ik},$$ provided appropriate state and adjoint equations are satisfied. These equations are computed next.
Requiring that variations of $\LOED$ with respect to the OED adjoint variables vanish, we recover the OED state equations –. The variables $\ad{p}_i$ and $\ad{q}_{ik}$ are defined through adjoint equations, obtained by requiring that variations of $\LOED$ with respect to the OED state variables vanish. That is, for each $i \in \{1, \ldots, \Nd\}$ and $k \in \{1, \ldots, \Ntr\}$, $$\begin{aligned}
\LOED_{v_{ik}}[\ut{v}] &= \ip{\B^*\Wn\B\ut{v}}{\ad{q}_{ik}} + \ip{\ad{y}_{ik} \Exp{m_i}\grad \ut{v}}{\grad p_i} + \ip{\Exp{m_i}\grad\ut{v}}{\grad \ad{v}_{ik}} = 0,
\label{equ:outer-adj1}
\\
\LOED_{q_{ik}}[\ut{q}] &= \ip{\Exp{m_i} \grad \ut q}{\grad \ad q_{ik}} \!+\! \ip{\ad{y}_{ik} \Exp{m_i}\grad u_i}{\grad \ut{q}} \!=\! 0,
\label{equ:outer-adj2}
\\
\LOED_{y_{ik}}[\ut{y}] &= \ip{\ut{y} \Exp{m_i}\grad p_i}{\grad \ad{q}_{ik}}
\!+\! \cip{\ad{y}_{ik}}{\ut{y}} \!+\! \ip{\ut y \ad y_{ik} \Exp{m_i} \grad u_i}{\grad p_i}
\!+\! \ip{\ut{y} \Exp{m_i}\grad u_{i}}{\grad \ad{v}_{ik}}\nonumber
\\&\tab\tab\!+\!\frac1{\Nd\Ntr}\ip{z_k}{\ut{y}} \!=\! 0,
\label{equ:outer-adj3}
\\
\LOED_{u_i}[\ut u] &=
\ip{\B^*\Wn\B\ut u}{\ad p_i}
\!+\! \ip{\ad{m}_i \Exp{m_i}\grad \ut{u}}{\grad p_i}
\!+\! \ip{\Exp{m_i} \grad \ut{u}}{\grad \ad{u}_i}
\!-\! \ip{b^{(1)}_i}{\ut u} \!=\! 0,
\label{equ:outer-adj4}
\\
\LOED_{m_i}[\ut m] &=
\ip{\ut{m}\Exp{m_i}\grad{p}_i}{\grad\ad{p}_i}
\!+\! \cip{\ad{m}_i}{\ut{m}} \!+\! \ip{\ut m \ad{m}_i \Exp{m_i}\grad u_i}{\grad p_i}
\nonumber\\
&\tab\tab\!+\! \ip{\ut{m}\Exp{m_i}\grad u_i}{\grad\ad{u}_i}
\!-\! \ip{b_i^{(2)}}{\ut m} \!=\! 0,
\label{equ:outer-adj5}
\\
\LOED_{p_i}[\ut p] &=
\ip{\Exp{m_i} \grad \ut{p}}{\grad \ad{p}_i}
\!+\! \ip{\ad{m}_i \Exp{m_i}\grad u_i}{\grad \ut{p}}
\!-\! \ip{b_i^{(3)}}{\ut p} \!=\! 0,
\label{equ:outer-adj6}
\end{aligned}$$ for all $(\ut{v}, \ut{q}, \ut{y}, \ut{u}, \ut{m}, \ut{p}) \in \V \times \V \times \CM \times \V \times \CM \times \V$. Here, $b_i^{(1)}$, $b_i^{(2)}$, and $b_i^{(3)}$ are $$\label{equ:rhs-ugly}
\begin{aligned}
\ip{b^{(1)}_i}{\ut u} = &-\sum_{k = 1}^\Ntr \big[\ip{y_{ik} \Exp{m_i} \grad\ut{u}}{\grad \ad v_{ik}}
+\ip{\ad{y}_{ik} \Exp{m_i} \grad \ut{u}}{\grad q_{ik}}
+\ip{\ad y_{ik} y_{ik} \Exp{m_i} \grad \ut u}{\grad p_i}\big],
\\
\ip{b^{(2)}_i}{\ut m} = &-\sum_{k = 1}^\Ntr \big[\ip{ \ut{m} \Exp{m_i} \grad v_{ik}}{\grad \ad v_{ik}}
+\ip{\ut{m} \Exp{m_i} \grad q_{ik}}{\grad \ad{q}_{ik}}
+\ip{\ut m y_{ik} \Exp{m_i} \grad u_i}{\grad \ad v_{ik}}
\\
&\tab\tab\tab+\ip{\ut m y_{ik} \Exp{m_i} \grad p_i}{\grad \ad q_{ik}}
+\ip{\ut m \ad y_{ik} \Exp{m_i} \grad v_{ik}}{\grad p_i}
+\ip{\ut m \ad y_{ik} \Exp{m_i} \grad u_i}{\grad q_{ik}}
\\
&\tab\tab\tab+\ip{\ut m \ad y_{ik} y_{ik} \Exp{m_i} \grad u_i}{\grad p_i}\big],
\\
\ip{b^{(3)}_i}{\ut p} = &-\sum_{k = 1}^\Ntr \big[\ip{y_{ik}\Exp{m_i}\grad\ut{p}}{\grad\ad{q}_{ik}}
+\ip{\ad{y}_{ik} \Exp{m_i}\grad v_{ik}}{\grad \ut{p}}
+\ip{\ad y_{ik} y_{ik} \Exp{m_i} \grad u_i}{\grad \ut p}\big].
\end{aligned}$$ Upon inspecting the OED adjoint equations – and comparing them to the system of equations –, we notice that the OED adjoint equations inherit structure from the OED state equations. Specifically, notice that after rearranging and identifying terms, the system – for $(\ad{q}_{ik}, \ad{v}_{ik}, \ad{y}_{ik})$ is the same as the system –, except for the right hand sides, which coincide up to a constant. This reveals the following relations: $$ \ad{q}_{ik} = -\frac{1}{\Nd\Ntr}v_{ik}, \quad \ad{y}_{ik} =
-\frac{1}{\Nd\Ntr}y_{ik}, \quad \ad{v}_{ik} =
-\frac{1}{\Nd\Ntr}q_{ik},$$ for $i \in \{1, \ldots, \Nd\}$ and $k \in \{1, \ldots, \Ntr\}$. Thus, the OED adjoint variables $\ad{q}_{ik}$, $\ad{y}_{ik}$, and $\ad{v}_{ik}$ can be eliminated from the system; the right hand sides $b^{(1)}_{i}, b^{(2)}_{i}, b^{(3)}_{i}$ defined in simplify and result in .

Discretization and computational details {#appdx:discretization}
========================================
We use a finite-element discretization of the parameter field and the state and adjoint variables, and we denote by boldfaced letters the discretized versions of the variables and operators appearing in the expressions. Next, we describe the numerical computation of the OED objective function in and of its gradient, where we again consider that $\upgamma = 0$. The discrete OED function is $$\label{equ:oed-obj}
\hat\obj_h(\vec{w}) = \frac{1}{\Nd\Ntr} \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr \mip{\vec{z}_k}{\vec{y}_{ik}}.$$ Note that, to discretize the infinite-dimensional Hilbert space, we use a mass-weighted inner product in . This is necessary since the finite-dimensional inference parameters are the coefficients of the finite element approximation, and helps to ensure that the discrete problems are appropriate discretizations of the infinite-dimensional problem. We rely on a Gaussian trace estimator and let $\vec{z}_k = \mat{M}^{-1/2} \vec{\nu}_k$, $k = 1, \ldots, \Ntr$, where $\vec{\nu}_k$ are draws from $\GM{\vec{0}}{\vec{I}}$. See [@AlexanderianPetraStadlerEtAl14] for a justification of the form of the mass-weighted trace estimator and also an efficient procedure for computing the application of $\mat{M}^{-1/2}$ to a vector.
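The trace estimator underlying the discrete OED objective can be illustrated in isolation. The NumPy sketch below applies the plain Gaussian estimator $\operatorname{tr}(\mat{A}) \approx \frac{1}{\Ntr}\sum_k \vec{z}_k^T \mat{A} \vec{z}_k$ to a small symmetric matrix (the mass-weighting $\mat{M}^{-1/2}$ is omitted here; the matrix and sample count are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([1.0, 2.0, 3.0, 4.0])    # stand-in for the symmetric operator; tr(A) = 10
ntr = 200_000                        # number of trace-estimator vectors

# Gaussian trace estimator: tr(A) ≈ (1/ntr) Σ_k z_k^T A z_k,  z_k ~ N(0, I)
zs = rng.standard_normal((ntr, A.shape[0]))
tr_est = np.mean(np.einsum('ki,ij,kj->k', zs, A, zs))

assert abs(tr_est - np.trace(A)) < 0.2
```

The estimator variance per sample is $2\|\mat{A}\|_F^2$, so the accuracy improves like $\Ntr^{-1/2}$; in practice a modest $\Ntr$ suffices because the objective is only used to rank designs.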
For a given design $\vec{w}$ and data samples $\obs_i$, $i \in \{1, \ldots, \Nd\}$, we solve the inner optimization problem – for the MAP point $\dpar_i = \dparmap(\vec{w}; \obs_i)$; we also evaluate the state $\vec{u}_i$ and adjoint $\vec{p}_i$ variables (for the inner optimization) at the MAP point. Next, we need to solve for $\vec{y}_{ik}$ and the variables $\vec{v}_{ik}$ and $\vec{q}_{ik}$ in –. This is accomplished by solving a linear system of the following block form
$$\label{equ:KKTy-fd}
\begin{bmatrix}
{\mat{D}}& {\mat{S}^T}& \mat{A}^T\\
{\mat{S}}& {\mat{Q}}& \mat{C}^T\\
\mat{A} & \mat{C} & \mat{0}
\end{bmatrix}
\begin{bmatrix}
\vec{v}_{ik}\\
\vec{y}_{ik}\\
\vec{q}_{ik}
\end{bmatrix}
=
\begin{bmatrix}
\vec{0} \\
\vec{z}_k \\
\vec{0}
\end{bmatrix}.$$
In the above system, ${\mat{D}}= \mat{B}^T \Wn \mat{B}$, where $\mat{B}$ is the discretization of the observation operator $\B$. The remaining blocks in the system are discretizations of the differential operators appearing in –, evaluated at $(\vec{u}_i, \dpar_i, \vec{p}_i)$; we refer to [@PetraStadler11] for more details on the discretization of the Hessian system for an inverse coefficient problem with an elliptic PDE. To solve the system , we first block eliminate $\vec{v}_{ik}$ and $\vec{q}_{ik}$, namely $$\begin{aligned}
\vec{v}_{ik} = -\mat{A}^{-1}\mat{C} \vec{y}_{ik}, \quad
\vec{q}_{ik} = -\mat{A}^{-T} ({\mat{D}}\vec{v}_{ik} + {\mat{S}^T}\vec{y}_{ik}),\end{aligned}$$ for $i \in \{1, \ldots, \Nd\}$ and $k \in \{1, \ldots, \Ntr\}$, and solve $\mat{H} \vec{y}_{ik} = \vec{z}_{k}$ with $$\label{equ:reduced-hess}
\mat{H} = \mat{C}^T \mat{A}^{-T} ({\mat{D}}\mat{A}^{-1} \mat{C} - {\mat{S}^T})
- {\mat{S}}\mat{A}^{-1} \mat{C}
+ {\mat{Q}}.$$ Once $\vec{y}_{ik}$ is available for $i \in \{1, \ldots, \Nd\}$ and $k \in \{1, \ldots, \Ntr\}$, we can compute the OED objective function . To compute the gradient we also need the OED adjoint variables $\adfd{p}_i$, $i = 1, \ldots, \Nd$, which are computed by solving a linear system similar to , for $(\adfd{p}_i, \adfd{m}_i, \adfd{u}_i)$, where the blocks in the system right hand side are replaced by $\vec{b}^{(1)}_i, \vec{b}^{(2)}_i$, and $\vec{b}^{(3)}_i$ which are discretizations of the expressions in . Thus, we solve $\mat{H} \adfd{m}_i = \bar{\vec{b}}_i$, where $\mat{H}$ is as in , and $\bar{\vec{b}}_i$ is given by $$\bar{\vec{b}}_i = \vec{b}^{(2)}_i
- \mat{C}^T \mat{A}^{-T}\vec{b}^{(1)}_i
- {\mat{S}}\mat{A}^{-1} \vec{b}^{(3)}_i
+ \mat{C}^T \mat{A}^{-T}{\mat{D}}\mat{A}^{-1} \vec{b}^{(3)}_i,$$ for $i \in \{1, \ldots, \Nd\}$. Next, we solve for $\adfd{p}_i$, $$\adfd{p}_i = \mat{A}^{-1} (\vec{b}^{(3)}_i - \mat{C} \adfd{m}_i), \quad i \in \{1, \ldots, \Nd\}.$$ Subsequently, we have all the quantities required in the expression for the (discretized) gradient: $$ \nabla \hat \obj_h(\vec{w}) = \sum_{i = 1}^\Nd \ncov^{-1}(\mat{B} \vec{u}_i - \obs_i) \odot \mat{B}\adfd{p}_i -
\frac{1}{\Nd\Ntr} \sum_{i = 1}^\Nd \sum_{k = 1}^\Ntr \ncov^{-1}\mat{B} \vec{v}_{ik} \odot \mat{B} \vec{v}_{ik}.$$
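The block elimination leading to the reduced Hessian can be verified numerically. The NumPy sketch below assembles a small dense analogue of the $3\times 3$ block system, solves it directly, and checks that the reduced-Hessian solve with $\mat{H} = \mat{C}^T\mat{A}^{-T}(\mat{D}\mat{A}^{-1}\mat{C} - \mat{S}^T) - \mat{S}\mat{A}^{-1}\mat{C} + \mat{Q}$ recovers the same $(\vec{v}, \vec{y}, \vec{q})$; the random blocks are stand-ins, not discretized PDE operators.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4                                       # state/adjoint size n, parameter size m
D0 = rng.standard_normal((n, n)); D = D0 @ D0.T   # symmetric, like B^T W B
Q0 = rng.standard_normal((m, m)); Q = Q0 @ Q0.T + m * np.eye(m)
S = rng.standard_normal((m, n))
A = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible "PDE" block
C = rng.standard_normal((n, m))
z = rng.standard_normal(m)

# Full 3x3 block KKT system for (v, y, q)
K = np.block([[D, S.T, A.T],
              [S, Q, C.T],
              [A, C, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(n), z, np.zeros(n)])
v, y, q = np.split(np.linalg.solve(K, rhs), [n, n + m])

# Block elimination: solve the reduced Hessian system H y = z, then back-substitute
Ainv_C = np.linalg.solve(A, C)
H = C.T @ np.linalg.solve(A.T, D @ Ainv_C - S.T) - S @ Ainv_C + Q
y_red = np.linalg.solve(H, z)
v_red = -Ainv_C @ y_red
q_red = -np.linalg.solve(A.T, D @ v_red + S.T @ y_red)

assert np.allclose(y, y_red) and np.allclose(v, v_red) and np.allclose(q, q_red)
```

In the PDE-constrained setting, $\mat{A}^{-1}$ and $\mat{A}^{-T}$ applications are of course realized by (in)direct solves rather than explicit inverses, and $\mat{H}$ is applied matrix-free inside a Krylov method.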
[^1]: For vectors $\vec{x}$ and $\vec{y}$ in $\R^n$, the Hadamard product, $\vec{x} \odot \vec{y}$, is a vector in $\R^n$ with components $(\vec{x} \odot \vec{y})_i = x_i y_i$, $i = 1, \ldots, n$.
[^2]: See <http://www.spe.org/web/csp/datasets/set02.htm> for the description of the dataset.
[^3]: We use the 70th slice, counted from the top.
---
abstract: 'In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates the gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.'
author:
- |
Cheng Wang\
NEC Laboratories Europe, Heidelberg, Germany\
[email protected]\
bibliography:
- 'references.bib'
title: 'RRA: Recurrent Residual Attention for Sequence Learning'
---
Introduction
============
Deep neural networks (DNN) have shown significant improvements in several application domains including image recognition [@krizhevsky2012imagenet], natural language processing [@mikolov2013distributed] and speech recognition [@hinton2012deep]. Recurrent neural networks (RNNs), a particular type of DNN, have a powerful capability for processing complicated sequential data. By using recurrent connections, previous context information can be captured and used to predict the next hidden state output. However, training RNNs remains a difficult task due to the gradient vanishing and exploding problems [@pascanu2013difficulty], especially when the RNN needs to learn very long dependencies from sequential inputs. The main issue is that training an RNN using back-propagation through time (BPTT) [@williams1986learning] entails multiplying gradients a large number of times (specifically, once for each time step) with the weight matrix $\mathbf{W}$. If $\mathbf{W}$ contains small values (namely, if the largest eigenvalue of $\mathbf{W}$ is less than 1), then gradient contributions from "far away" states become zero and have no influence on future states; this is the *gradient vanishing problem*. On the other hand, if the weights in the matrix are large, the gradient signal grows without bound and learning diverges; this is the *gradient exploding problem*. To alleviate the effects of gradient vanishing, many methods have been proposed; Long Short-Term Memory (LSTM) [@hochreiter1997long] can be seen as the most successful among them. The memory cell introduced in LSTM has its own input, forget and output gates that control whether to store context information or remove it from memory. This allows LSTM networks to capture longer-range relational dependencies from input sequences than a regular RNN.
![Learning recurrent residual attention. The interaction with hidden states that are far apart can be enhanced by residual connections. The attention over residual connections decides how far the RRA cell can look back at a given timestep and controls the individual contribution of previous hidden states. In this example, each RRA cell is able to look back at the past 5 time steps, so the semantic dependency between the words "girl" and "her" can be explicitly captured.[]{data-label="fig:RRA_framework"}](RRA_framework.png){width="50.00000%"}
The gradient vanishing problem is not limited to recurrent neural networks; it can also appear in feedforward neural networks, particularly when training very deep networks. If we view an RNN in its unfolded form, a shallow RNN with multiple timesteps is equivalent to a very deep network. Residual learning [@he2016deep] provides a novel learning scheme for ultra-deep convolutional neural networks (CNN) (e.g. more than 1000 layers) by introducing residual connections across layers. These *shortcut connections* connect far-away layers to ensure that the training error signal can be back-propagated directly from higher layers to lower layers, alleviating the gradient vanishing problem. Inspired by the success of residual learning in CNNs on computer vision tasks, this work reformulates residual learning in recurrent networks for learning ultra-long-range dependencies across timesteps in sequence learning.
Different from residual learning [@he2016deep], where an identity shortcut connection adds the input to the outputs of stacked layers (i.e. $\mathcal{F}(\mathbf{x})$+$\mathbf{x}$, where $\mathcal{F}$ is the residual function), in the context of sequence learning we reformulate the recurrent residual connection to have attention over multiple preceding steps. This results in a residual function with attention across timesteps: $\mathcal{M}(\mathbf{x}_t,\mathbf{h}_{t-1})$+$\mathcal{F}(\mathbf{h}_{t-2}, \mathbf{h}_{t-3},..., \mathbf{h}_{t-K-1};\mathbf{W}_a)$, where $\mathcal{M}$ is a recurrent model and $\mathbf{W}_a$ are the attention weights. At each timestep $t$, in computing the current state $\mathbf{h}_t$, this reformulation ensures that recurrent units have the ability to look back as far as $K$+$1$ past timesteps and to control the relative contribution of each hidden state $\mathbf{h}_{t-2}, \mathbf{h}_{t-3},..., \mathbf{h}_{t-K-1}$ to the current state $\mathbf{h}_t$.
Attention mechanisms have been widely studied in machine translation [@bahdanau2014neural], image captioning [@xu2015show], object detection [@ba2014multiple] and generative models [@mnih2014recurrent; @gregor2015draw]. However, these attention models are either layer-based or network-based: they only receive attended information from a previous layer or a separate network. By casting the attention mechanism onto a recurrent residual connection, the recurrent unit provides a more natural approach to sequence learning, because it explicitly looks back at multiple preceding steps and automatically decides how much previous information should be "seen" by weighting it. For a specific sequential pattern (e.g. an English or German sentence $w_1,...w_{T}$), the semantic dependencies between words that are far apart (e.g. $w_{t}$ and $w_{t-k}$, 1$<$$k$$<$$t$) can be stronger than those between two adjacent words (e.g. $w_t$ and $w_{t-1}$). Figure \[fig:RRA\_framework\] gives an example that intuitively supports this assumption. While the word "*drawing*" is explicitly involved in predicting the word "*her*", the word "*girl*" clearly also makes a significant semantic contribution. Essentially, the sentence says: "*The girl is beautiful*"; however, regular RNNs have difficulty capturing this meaning. Thus, it is reasonable to explicitly consider information that is several steps apart when learning the semantic meaning of sequential data. In this work, we address this problem by casting an attention mechanism onto the residual connections over timesteps in a recurrent network.
The benefits of recurrent residual attention (RRA) are twofold: (1) RRA enhances the interactions between hidden states that are several steps apart; that is, RRA allows training errors to be back-propagated across multiple timesteps. (2) The attention over residual connections gives a more natural way in which past hidden states can selectively "attend" to future states in sequence learning.
Our main contributions are summarized as follows:
- We propose a novel learning scheme for sequential data that reformulates residual learning with attention in a recurrent network. The code will be made publicly available soon.
- A new gate, the *attention gate*, is defined in the LSTM RNN to control the individual contribution of context information from multiple previous hidden states.
- Our proposed RRA shows promising performance compared to a standard LSTM network on three benchmark tasks: the adding problem, pixel-by-pixel MNIST and sentiment analysis. RRA also outperforms or matches the state-of-the-art methods.
The rest of this paper is structured as follows. Section \[sec:relate\_work\] reviews related work. In Section \[sec:models\], we elaborate the reformulation of residual learning with attention in a recurrent manner. We describe our experiments and discussion in Section \[sec:experiment\] and conclude this work in Section \[sec:conclusion\].
Related Work {#sec:relate_work}
============
**Recurrent Neural Network (RNN)** The RNN is a powerful network architecture for processing sequential data and has been widely used in natural language processing [@socher2011parsing], speech recognition [@graves2013speech] and handwriting recognition [@graves2009novel] in recent years. An RNN allows cyclical connections and reuses the weights across different instances of neurons, each associated with a different time step. This explicitly enables the network to learn from the entire history of previous states and map it to the current state. With this property, an RNN is able to map an arbitrary-length sequence to a fixed-length vector. However, RNNs are known to be difficult to train due to the gradient vanishing problem.
The vanishing problem was originally identified in [@hochreiter1997long], where LSTM (Long Short-Term Memory) was proposed to prevent gradients from vanishing during training. Compared to a traditional RNN, LSTM therefore has the ability to learn long-term dependencies between inputs and outputs. LSTM has recently become very popular in machine translation [@cho2014learning], speech recognition [@graves2013speech] and sequence learning [@sutskever2014sequence]. Another special type of RNN is the Gated Recurrent Unit (GRU) [@cho2014learning], which simplifies LSTM by removing the memory cell and provides a different way to prevent the vanishing gradient problem. Our work falls into this category and aims to alleviate gradient vanishing when learning ultra-long dependencies.
**Residual Learning** Previous work [@simonyan2014very; @szegedy2015going] has shown that network depth is of crucial importance to neural network architectures, but training deeper networks is more challenging. Residual learning [@he2016deep] paves the way for training such networks. The residual mapping between layers enables networks to be substantially deep (e.g. with hundreds of layers), leads to more efficient optimization and, most importantly, yields better performance. Shortcut skip connections across multiple layers force a direct information flow in both the forward and backward passes, so that feedforward signals as well as feedback errors can be passed easily. Adding residual connections across layers has shown powerful capability in computer vision [@he2016deep; @szegedy2017inception]. Inspired by this, our work incorporates residual connections across multiple preceding steps to learn long and complex dependencies from sequential data.
![Overview of proposed methods. (a) Standard RNN and its unfolded form. (b) RNN with residual connections. (c) Recurrent network with attention mechanism (over layers). (d) Recurrent residual attention (over timesteps): at each timestep $t$, units are able to look back at the past $K$+$1$ states in computing the current state $\mathbf{h}_t$, with $\sum_{k=1}^{K}\mathbf{W}_a^{(k)}=1$. []{data-label="fig:framework"}](framework.png){width="50.00000%"}
**Attention Mechanism** Attention in neural networks [@bahdanau2014neural] is designed to assign weights to different inputs instead of treating all input sequences equally, as original neural networks do. It can be seen as an additional network that is now widely incorporated into different neural networks, leading to a new variety of models [@xu2015show; @ba2014multiple; @mnih2014recurrent; @gregor2015draw]. Formally, an attention model takes $k$ arguments, e.g. $h_1$,...,$h_k$, and context information $c$. It returns a weighted output $z$ that summarizes the $h_i$ based on how each is related to the context $c$. The weights correspond to the relevance between each $h_i$ and $c$ and sum to 1, e.g. the weights $a_k$ in Figure \[fig:framework\] (c); this determines the relative contribution of each $h_i$ to the final output. However, the current state-of-the-art attention methods are either layer- or network-based and are not well studied in a recurrent manner. This work reformulates attention over residual connections in a recurrent network.
Models {#sec:models}
======
This section describes our proposed approach to learning recurrent residual attention from sequential data. We first introduce the existing approach to sequence learning with recurrent networks and explain our intuition for extending recurrent networks to learn more complex dependencies. Then we describe how to reformulate the residual connection in an RNN, followed by casting the attention mechanism onto the recurrent residual connection. Here, we use LSTM as the base recurrent network to elaborate our approach, but it can easily be generalized to a plain RNN or GRU.
Recurrent Networks for Sequence Learning
----------------------------------------
A recurrent network generalizes a feedforward network to learning from sequential data. The goal of recurrent models is to estimate the conditional probability $p(\mathbf{y}_1,...\mathbf{y}_{T'}|\mathbf{x}_1,..\mathbf{x}_T)$ by: $$\begin{aligned}
p(\mathbf{y}_1,...\mathbf{y}_{T'}|\mathbf{x}_1,...\mathbf{x}_T)=\prod\nolimits_{t=1}^{T'}p(\mathbf{y}_t|\mathbf{y}_1,...,\mathbf{y}_{t-1})\\
p(\mathbf{y}_t|\mathbf{y}_1,...,\mathbf{y}_{t-1})=p(\mathbf{y}_t|\mathbf{h}_t)\\
\mathbf{h}_t=\mathcal{M}(\mathbf{h}_{t-1},\mathbf{x}_t)
\label{equ:recu}\end{aligned}$$ where $(\mathbf{x}_1,...\mathbf{x}_T)$ and $(\mathbf{y}_1,...\mathbf{y}_{T'})$ are the input sequence and target sequence, respectively. The input sequence length $T$ may differ from the target sequence length $T'$. $\mathbf{h}_t$ is the hidden state from a model $\mathcal{M}$ for a given hidden state $\mathbf{h}_{t-1}$ and a new input $\mathbf{x}_t$, where $\mathcal{M}$ is a recurrent model that can be a standard RNN or one of its variants. Equation (\[equ:recu\]) can be viewed as a general form of the recurrent learning algorithm, which is able to capture semantic dependencies across timesteps. For example, the hidden state $\mathbf{h}_{t-1}$ is explicitly used for outputting $\mathbf{h}_{t}$, while the past hidden states before $\mathbf{h}_{t-1}$ are only implicitly involved.
This challenges existing RNNs on tasks that need the model to explicitly capture long-range semantic dependencies between states that are several timesteps apart, such as the task described in Figure \[fig:RRA\_framework\]. Adding a shortcut connection that skips one or multiple timesteps and enforces a direct information flow across timesteps is a way to explicitly use the previous hidden states ($\mathbf{h}_{2}$,...,$\mathbf{h}_{t-k-1}$) in computing future states. This entails recurrent residual learning.
Recurrent Residual Learning
---------------------------
An overview of reformulating the recurrent network to have a residual connection is illustrated in Figure \[fig:framework\] (b), in which a shortcut connection is designed to impose a fluent information flow across timesteps. With a residual connection in the recurrent network, at a given timestep $t$, the hidden state $\mathbf{h}_t$ can be computed as: $$\mathbf{h}_t=\mathcal{M}(\mathbf{h}_{t-1},\mathbf{x}_{t};\mathbf{W}_m)+\mathcal{F}(\mathbf{h}_{t-k};\mathbf{W}_f)
\label{equ:1}$$ where $\mathcal{M}$ is an RNN model with weights $\mathbf{W}_m$ that receives $\mathbf{h}_{t-1}$ and $\mathbf{x}_{t}$ as a regular RNN does. We keep $\mathcal{M}$ receiving $\mathbf{h}_{t-1}$ so as to form a residual skip connection across timesteps. $\mathcal{F}$ approximates the residual function with weights $\mathbf{W}_f$; $\mathcal{F}$ can be an identity function such that $\mathcal{F}(\mathbf{h}_{t-k};\mathbf{W}_f)$ = $\mathbf{h}_{t-k}$, where $\mathbf{h}_{t-k}$ is the hidden state at time step $t$-$k$. With this formulation, when computing a hidden state $\mathbf{h}_{t}$, besides $\mathbf{h}_{t-1}$ and $\mathbf{x}_t$, $\mathbf{h}_{t-k}$ can be explicitly considered. If $\mathbf{W}_f$ approaches 0, equation (\[equ:1\]) reduces to a plain RNN.
Making $\mathcal{F}$ weight multiple previous hidden states, i.e. $\mathbf{h}_{t-2}$,...,$\mathbf{h}_{t-k}$, leads to recurrent residual learning with attention over timesteps: $$\mathbf{h}_t=\mathcal{M}(\mathbf{h}_{t-1},\mathbf{x}_{t};\mathbf{W}_m)+\mathcal{F}(\mathbf{h}_{t-2},...,\mathbf{h}_{t-k};\mathbf{W}_a)
\label{equ:2}$$ where $\mathbf{W}_a$$\in$$\mathbb{R}^{1 \times (k-1)}$ is the attention weight matrix that controls the relative contributions of the past hidden states, with $\sum_{i=1}^{k-1}\mathbf{W}_a^{(i)}$=$1$.
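The residual function $\mathcal{F}$ in equation (\[equ:2\]) is just a normalized weighted sum of the past states. A minimal NumPy sketch (with illustrative sizes and names, not the released implementation) makes the constraint $\sum_{i=1}^{k-1}\mathbf{W}_a^{(i)}$=$1$ explicit:

```python
import numpy as np

rng = np.random.default_rng(3)
dh, k = 8, 6                          # hidden size; attention spans h_{t-2}..h_{t-k}

W_a = rng.uniform(size=k - 1)
W_a = W_a / W_a.sum()                 # enforce sum_i W_a^(i) = 1

def residual_attention(past_h):
    """F(h_{t-2},...,h_{t-k}; W_a): attention-weighted sum of k-1 past states."""
    return W_a @ past_h               # past_h: (k-1, dh) stack, row 0 = h_{t-2}

past_h = rng.standard_normal((k - 1, dh))
a = residual_attention(past_h)
assert a.shape == (dh,) and np.isclose(W_a.sum(), 1.0)
```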
![RRA cell. An attention gate is defined to control how much information from hidden state $\mathbf{h}_{t-2}$ to $\mathbf{h}_{t-k}$ should be considered in computing current state $\mathbf{h}_t$. []{data-label="fig:RRA_cell"}](RRA.png){width="40.00000%"}
Learning Recurrent Residual Attention
-------------------------------------
Figure \[fig:framework\] (d) shows our design for reformulating attention over the residual connections in a recurrent network. The recurrent residual attention is considered at each timestep; this can be viewed as a sliding attention window of size $K$ over timesteps. To make the past states selectively "attend" to future states, we let the residual attention affect the memory cell directly: a new gate, the *attention* gate, is added to the LSTM cell, giving the LSTM residual attention. Equation (\[equ:2\]) is then reformulated as $$\mathbf{h}_t=\mathcal{M}((\mathbf{h}_{t-1},\mathbf{x}_{t}, \mathbf{a}_t);\mathbf{W}_m)
\label{equ:3}$$ where $\mathbf{a}_t$=$\mathcal{F}(\mathbf{h}_{t-2},...,\mathbf{h}_{t-k};\mathbf{W}_a)$. Figure \[fig:RRA\_cell\] demonstrates the internal gates of the RRA cell, where the attention gate controls the relative contributions of the past $K$ states. The hidden state of each gate within RRA can be computed as: $$\begin{aligned}
\label{equ:input}
%\mathbf{i}_t=\sigma (\mathbf{W}_{xi}\mathbf{x}_t+\mathbf{W}_{hi}\mathbf{h}_{t-1}+\mathbf{b}_i)\\
%\mathbf{f}_t=\sigma (\mathbf{W}_{xf}\mathbf{x}_t+\mathbf{W}_{hf}\mathbf{h}_{t-1}+\mathbf{b}_f)\\
%\mathbf{o}_t=\sigma (\mathbf{W}_{xo}\mathbf{x}_t+\mathbf{W}_{ho}\mathbf{h}_{t-1}+\mathbf{b}_o)\\
%\mathbf{g}_t=\phi (\mathbf{W}_{xc}\mathbf{x}_t+\mathbf{W}_{hc}\mathbf{h}_{t-1}+\mathbf{b}_c)\\
\begin{pmatrix}\mathbf{i}_t
\\ \mathbf{f}_t
\\ \mathbf{o}_t
\\ \mathbf{g}_t
\end{pmatrix}= \begin{pmatrix} \sigma
\\ \sigma
\\ \sigma
\\ \tanh
\end{pmatrix}\mathbf{W}\begin{pmatrix} \mathbf{x}_t
\\ \mathbf{h}_{t-1}
\end{pmatrix}\\
\label{equ:cell}
\mathbf{c}_t=\mathbf{f}_t\odot\mathbf{c}_{t-1}+\mathbf{i}_t\odot\mathbf{g}_t\\
\label{equ:att}
\mathbf{a}_t=\mathbf{W}_{a}
\begin{pmatrix} \mathbf{h}_{t-2}
\\ \mathbf{h}_{t-3}
\\ ...
\\ ...
\\ \mathbf{h}_{t-k}
\end{pmatrix}\\
\label{equ:res}
\mathbf{h}_t=\mathbf{o}_t\odot\tanh(\mathbf{c}_t+\mathbf{a}_t)\end{aligned}$$ where $\mathbf{i}_t$, $\mathbf{f}_t$ and $\mathbf{o}_t$ are the input, forget and output gates, respectively; $\mathbf{c}_t$ is the memory cell and $\sigma(\cdot)$ is the sigmoid function. Equations (\[equ:input\]) - (\[equ:cell\]) are from the original LSTM; $\mathbf{a}_t$ in equation (\[equ:att\]) is the defined attention gate, which summarizes the relative contributions of the states from $\mathbf{h}_{t-2}$ to $\mathbf{h}_{t-k}$. The hidden state $\mathbf{h}_{t-1}$ is used in the original way and attended at each step so as to form a residual (shortcut) connection across timesteps. The attention weights $\mathbf{W}_{a}$ are normalized by $\mathbf{W}_{a}^{(i)}$=$\frac{\mathbf{W}_{a}^{(i)}}{\sum_{j}^{K}\mathbf{W}_{a}^{(j)}}$[^1]. In equation (\[equ:res\]), following residual networks [@he2016deep], element-wise addition is used to form the residual function of the attention $\mathbf{a}_t$, which directly affects the memory cell $\mathbf{c}_t$ when outputting $\mathbf{h}_{t}$.
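To make the cell update concrete, the following is a minimal NumPy sketch of one RRA forward step under the notation above; the fused weight matrix `W`, the window size `K`, the bias-free gates and the buffer handling are our illustrative assumptions, not the authors' released code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(4)
dx, dh, K = 3, 5, 4                    # input size, hidden size, attention window

W = rng.standard_normal((4 * dh, dx + dh)) * 0.1   # fused LSTM weights (no biases)
W_a = rng.uniform(size=K)
W_a = W_a / W_a.sum()                  # normalized attention weights

def rra_step(x_t, h_prev, c_prev, past_h):
    """One RRA cell step: standard LSTM gates plus an attention gate a_t
    over the stack of earlier hidden states (h_{t-2}, ..., h_{t-k})."""
    z = W @ np.concatenate([x_t, h_prev])
    i, f, o = (sigmoid(z[j * dh:(j + 1) * dh]) for j in range(3))
    g = np.tanh(z[3 * dh:])
    c_t = f * c_prev + i * g                       # memory cell update
    a_t = W_a @ past_h                             # attention gate, past_h: (K, dh)
    h_t = o * np.tanh(c_t + a_t)                   # residual attention enters here
    return h_t, c_t

h, c = np.zeros(dh), np.zeros(dh)
past_h = np.zeros((K, dh))                         # buffer of earlier hidden states
for t in range(10):
    x_t = rng.standard_normal(dx)
    h_new, c = rra_step(x_t, h, c, past_h)
    past_h = np.vstack([h[None, :], past_h[:-1]])  # shift h_{t-1} into the buffer
    h = h_new

assert h.shape == (dh,) and np.all(np.abs(h) <= 1.0)
```

Note that the output magnitude stays bounded by the outer $\tanh$ and the sigmoid output gate, exactly as in a standard LSTM.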
By defining an attention gate in the RNN, only $K$ additional differentiable parameters over the residual connections are introduced. Optimization can be performed using standard back-propagation through time (BPTT) [@williams1986learning], as in regular RNNs.
Experiments {#sec:experiment}
===========
In this section, we explore the performance of proposed RRA in multiple tasks including the adding problem, pixel-by-pixel MNIST image classification and sentiment analysis on the IMDB dataset.
Our implementation was based on Theano[^2]. We conducted all our experiments on a single Titan Xp with 12 GB of memory. The weights for the input-to-hidden and hidden-to-output layers were initialized by drawing from the uniform distribution $\left [ -\sqrt{\frac{6}{N_{in}+N_{out}}},\sqrt{\frac{6}{N_{in}+N_{out}}} \right ]$ ($N$: number of units). The RNN internal weights $\mathbf{W}$ were orthogonally initialized [@saxe2013exact], and the attention weights $\mathbf{W}_a$ were randomly initialized. By default, the attention window size is $K$=10, which means the past hidden states from $\mathbf{h}_{t-1}$ to $\mathbf{h}_{t-11}$ are considered at every timestep. The initial learning rate was set to 0.0001 and a dropout rate of 0.5 was used after the recurrent layer. Gradients were clipped to 1 to prevent exploding gradients. All models were configured to have only one recurrent layer and were trained for a given number of iterations without early stopping. All experimental settings for LSTM and RRA are the same.
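The two initialization schemes above can be sketched as follows; this is a NumPy illustration with hypothetical helper names (the uniform limit matches the Glorot-style formula used for the input/output layers, and the recurrent weights are made orthogonal via a QR decomposition as in [@saxe2013exact]).

```python
import numpy as np

rng = np.random.default_rng(5)

def glorot_uniform(n_in, n_out):
    """Uniform init on [-sqrt(6/(n_in+n_out)), +sqrt(6/(n_in+n_out))]."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def orthogonal(n):
    """Orthogonal init via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))     # sign fix keeps the distribution uniform

W_in = glorot_uniform(784, 256)        # input-to-hidden weights
W_rec = orthogonal(256)                # recurrent (hidden-to-hidden) weights

assert np.allclose(W_rec.T @ W_rec, np.eye(256))
```

Orthogonal recurrent weights keep all singular values at 1 initially, which complements the residual attention in avoiding early gradient decay.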
Adding Problem
--------------
This task was originally defined in [@hochreiter1997long] to test the ability of an RNN to capture long dependencies in sequential data. The task is to add two numbers $x_i$ and $x_j$ that are randomly selected from a sequence. For a given sequence of length $S$, each element of the sequence is a pair of two components $(x, m)$: the first is a number $x$ uniformly sampled from $\mathcal{U}[0,1]$, and the second is an indicator $m$ that decides whether to add $x$ (if $m$=$1$) or ignore it (if $m$=$0$). Only two numbers ($x_i$ and $x_j$) in each sequence are marked with 1 for addition: the first number $x_i$ is placed in the first 10% of the sequence, i.e. $i\in[0,\left \lfloor \frac{S}{10} \right \rfloor]$, and the second number $x_j$ in the last 50%, i.e. $j\in [\left \lfloor \frac{S}{2} \right \rfloor, S]$. This yields sequences with a long-range dependency, in which only two significant but remote inputs matter. A naive strategy is to always predict the target output as 1 regardless of the input sequence [@le2015simple; @arjovsky2016unitary]; it gives an expected mean squared error (MSE) of 0.167, which is used as the baseline to beat.
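The sampling scheme and the naive baseline can be sketched as follows (a NumPy sketch with our own function names; the naive MSE equals the variance of the sum of two independent $\mathcal{U}[0,1]$ draws, $2/12 \approx 0.167$):

```python
import numpy as np

rng = np.random.default_rng(6)

def adding_example(S):
    """One adding-problem sequence of length S: pairs (x, m) with exactly two
    markers m=1, one in the first 10% and one in the last 50% of the sequence."""
    x = rng.uniform(0.0, 1.0, size=S)
    m = np.zeros(S)
    i = rng.integers(0, S // 10)          # first marked position
    j = rng.integers(S // 2, S)           # second marked position
    m[i] = m[j] = 1.0
    return np.stack([x, m], axis=1), x[i] + x[j]

# Naive baseline: always predict 1 -> MSE = Var(x_i + x_j) = 2/12 ≈ 0.167
targets = np.array([adding_example(100)[1] for _ in range(20000)])
mse_naive = np.mean((targets - 1.0) ** 2)
assert 0.15 < mse_naive < 0.19
```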
![The performance on the adding problem for sequence length $\textit{S}$=$100$ (top) and $\textit{S}$=$500$ (bottom). []{data-label="fig:adding"}](adding_100_1.png "fig:"){width="\linewidth"} ![The performance on the adding problem for sequence length $\textit{S}$=$100$ (top) and $\textit{S}$=$500$ (bottom). []{data-label="fig:adding"}](adding_500_1.png "fig:"){width="\linewidth"}
We used 128 hidden units for both LSTM and RRA, the batch size was set to 50 and the models were optimized with ADADELTA [@zeiler2012adadelta]. We generated 100,000 training examples and 10,000 test examples. Figure \[fig:adding\] presents the performance of LSTM and RRA on the test dataset as we varied the sequence length $\textit{S}$. For $\textit{S}$=$100$, LSTM consistently beats the baseline from around 4,400 iterations, while RRA beats it at approximately 2,200 iterations. As we increased $\textit{S}$ to 500, the task gets harder because the dependency between the target output and the two relevant sequence inputs becomes more remote, requiring the model to capture longer dependencies. In the first 40,000 iterations, both LSTM and RRA struggled to minimize the MSE; RRA started to beat the baseline after 43,000 iterations, significantly faster than LSTM, which started to beat the baseline only after around 92,000 iterations.
Although this task works against the advantage of RRA, since there are only two significant numbers in each sequence, RRA still demonstrates good performance in learning long-range dependencies.
![Performance on Pixel-by-Pixel MNIST. Normal MNIST (left) and Permuted MNIST (right).[]{data-label="fig:mnist"}](mnist.png){width="\linewidth"}
![Performance on Pixel-by-Pixel MNIST. Normal MNIST (left) and Permuted MNIST (right).[]{data-label="fig:mnist"}](mnist_premute.png){width="\linewidth"}
Pixel-by-Pixel MNIST
--------------------
This task is to classify MNIST digits [@lecun1998gradient], as suggested by [@le2015simple]. Each 28-by-28 image in MNIST is treated as sequential data and fed to the recurrent network, which leads to pixel sequences of length 784. Two versions of pixel-by-pixel MNIST were considered: (1) normal MNIST, where the pixel sequence is read in order from left to right, top to bottom; (2) permuted MNIST, where the pixel sequence is randomly permuted. We configured both networks to have 256 hidden units; the optimizer is replaced with RMSprop, which provides steadier improvement on this task for both networks. The training batch size is 50. LSTM is used as the baseline to beat, as plain RNNs have been shown to perform poorly on such tasks [@le2015simple; @arjovsky2016unitary].
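The image-to-sequence conversion can be sketched as follows (our own illustration; the function name and the choice of seed are assumptions, not from the original setup):

```python
import numpy as np

def to_pixel_sequence(image, permutation=None):
    """Flatten a 28x28 image into a length-784 pixel sequence.

    Reading order is left to right, top to bottom; for the permuted
    variant, one fixed permutation is shared across all images.
    """
    seq = image.reshape(-1)           # (784,) row-major flattening
    if permutation is not None:
        seq = seq[permutation]
    return seq[:, None]               # (784, 1): one pixel per timestep

# One fixed permutation, applied identically to every image.
rng = np.random.default_rng(42)
fixed_perm = rng.permutation(28 * 28)
```

The key point is that the permutation is fixed once and reused for every image, so the task remains learnable but the pixel dependencies are stretched out over the sequence.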
Figure \[fig:mnist\] reports the test accuracy against iterations. On normal pixel-by-pixel MNIST (Figure \[fig:mnist\](left)), similar to previous work [@arjovsky2016unitary], both LSTM and RRA show good performance. RRA achieves 98.58%, which beats the 97.66% of LSTM. Moreover, RRA yields faster convergence and more stable improvement than the standard LSTM.
The task becomes more challenging when the order of pixels in each image is randomly permuted. By applying the same permutation to every image, the dependencies across pixels become longer than in the original pixel order. This requires models to learn and remember more complicated dependencies across different timesteps. As shown in Figure \[fig:mnist\](right), RRA shows superior capability in capturing such long and complicated dependencies. It achieves 95.84% against 91.2% for LSTM, again with faster convergence.
We further compared RRA with recently proposed methods: IRNN [@le2015simple], URNN [@arjovsky2016unitary] and RWA [@ostmeyer2017machine] in Table \[tab:mnist\_comp\]. RRA achieves state-of-the-art performance on both normal and permuted pixel-by-pixel MNIST. It should be noted that neither URNN nor RWA is able to beat LSTM on normal MNIST in their reported configurations. RRA, in contrast, achieves slightly better performance than LSTM on normal MNIST and outperforms it on permuted MNIST by a clear margin.
Models   Normal MNIST   Permuted MNIST
-------- -------------- ----------------
IRNN     97%            82%
URNN     95.1%          88%
RWA      98.1%          93.5%
LSTM     97.66%         91.2%
RRA      98.58%         95.84%
: Test accuracy on pixel-by-pixel MNIST[]{data-label="tab:mnist_comparison"}
\[tab:mnist\_comp\]
Models Reported Error Rate
----------------------------------------------------------- ---------------------
BoW (bnc)[@maas-EtAl:2011:ACL-HLT2011] 12.20%
BoW(b$\Delta$ t$\acute{c}$) [@maas-EtAl:2011:ACL-HLT2011] 11.77%
LDA [@maas-EtAl:2011:ACL-HLT2011] 32.58%
LSA [@maas-EtAl:2011:ACL-HLT2011] 17.04%
Full+BoW [@maas-EtAl:2011:ACL-HLT2011] 11.67%
Full+unlabelled+BoW [@maas-EtAl:2011:ACL-HLT2011] 11.11%
WRRBM [@dahl2012training] 12.58%
WRRBM+BoW(bnc) [@dahl2012training] 10.77%
MNB-uni [@wang2012baselines] 16.45%
MNB-bi [@wang2012baselines] 13.41%
SVM-uni [@wang2012baselines] 13.05%
SVM-bi [@wang2012baselines] 10.84%
NBSVM-uni [@wang2012baselines] 11.71%
NBSVM-bi [@wang2012baselines] 8.78%
seq2-bown-CNN[@johnson2014effective] 14.70%
Paragraph Vector [@le2014distributed] 7.42%
LSTM with tuning and dropout [@dai2015semi] 13.50%
LSTM initialized with word2vec embeddings [@dai2015semi] 10.00%
LM-LSTM [@dai2015semi] 7.64%
SA-LSTM [@dai2015semi] 7.24%
SA-LSTM with linear gain [@dai2015semi] 9.17%
SA-LSTM with joint training [@dai2015semi] 14.70%
TS-ATT[@yuan2016learning] 13.75%
SS-ATT[@yuan2016learning] 13.26%
LSTM 11.63%
RRA(K=5) 11.27%
RRA(K=10) 11.59%
RRA(K=20) 12.22%
Bidirectional RRA (K=5) 9.05%

: Performance comparison on the IMDB review dataset[]{data-label="tab:imdb_comparison"}
Sentiment Analysis
------------------
To evaluate the performance of RRA on sentiment analysis, we conducted experiments on the IMDB review dataset [@maas-EtAl:2011:ACL-HLT2011][^3]. This dataset consists of 100,000 movie reviews from IMDB, split into 75% for training and 25% for testing. Only 25,000 of the training reviews are labeled; the remaining 50,000 are unlabeled, while all test reviews are labeled. In this task, we used the 25,000 labeled training reviews and the 25,000 test reviews for binary sentiment classification (positive or negative); random guessing thus yields 50% accuracy. Unlike some previous approaches, e.g. Bag-of-Words (BoW) and Latent Dirichlet Allocation (LDA) [@blei2003latent], the review sentences are treated as sequential data. This task is particularly challenging because the average review length is 281 words and the longest review reaches 2,956 words, which requires the model to have a strong ability to capture long-range semantic dependencies among words.
![Performance on IMDB Review Dataset.[]{data-label="fig:imdb_test"}](imdb.png){width="40.00000%"}
In our experiments, we limited the vocabulary size to 10,000 words; all other words were mapped to the “*unk*” token. We used 128 units for embeddings and 128 units for both LSTM and RRA with the ADADELTA [@zeiler2012adadelta] optimizer; the batch size was set to 16. We tested RRA with different attention window sizes $K$=$5$, $K$=$10$ and $K$=$20$. Figure \[fig:imdb\_test\] presents the test error against iterations for the original LSTM and for RRA with different $K$. Each model was trained for around 15 epochs without early stopping. We can see that RRA fits the dataset quite well from 4,000 iterations on, considerably faster than LSTM. Varying the attention window size $K$, we found that the test error is not very sensitive to $K$; RRA obtains slightly better results when $K$=$5$. We conjecture that for a given pattern of sequence (e.g. English sentences in this task), the semantic contributions from the previous $K$ hidden states are sufficient to compute the current state.
To compare RRA with recent methods, we add more recently reported baselines. Table \[tab:imdb\_comparison\] shows the performance comparison. It shows that RRA can effectively learn good representations from the input word sequence for sentiment classification, compared to previous non-sequential representations such as BoW, LDA and LSA with SVM classifiers. RRA is also highly competitive with the recent approaches LM-LSTM and SA-LSTM (which used 1024 units for memory cells, 512 embedding units, and 50,000 unlabeled reviews for pre-training). It should be noted that our models were based solely on the proposed RRA with only 128 hidden units, without additional unlabeled data for pre-training or word2vec embeddings. With bidirectional RRA, the performance of our model comes close to the state-of-the-art.
![Visualization of normalized attention weights for $K$=5 (top) and $K$=10 (bottom). The attention unit index 0 corresponds to $\mathbf{W}_a^{(1)}$, the weight assigned to $\mathbf{h}_{t-2}$.[]{data-label="fig:vis"}](imdb_vis_5.png "fig:"){width="\linewidth"} ![Visualization of normalized attention weights for $K$=5 (top) and $K$=10 (bottom). The attention unit index 0 corresponds to $\mathbf{W}_a^{(1)}$, the weight assigned to $\mathbf{h}_{t-2}$.[]{data-label="fig:vis"}](imdb_vis_10.png "fig:"){width="\linewidth"}
We also visualized the attention weights for $K$=$5$ and $K$=$10$ in Figure \[fig:vis\]. The evolution of the normalized attention weights suggests that the attention gate learns to control the relative contributions of the previous hidden states $\mathbf{h}_{t-2}$ to $\mathbf{h}_{t-K-1}$. They are explicitly considered in predicting $\mathbf{h}_t$, in contrast to the standard RNN/LSTM and other variants, where history information is considered only indirectly via $\mathbf{h}_{t-1}$.
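The recurrence being visualized can be sketched as follows. This is our own simplified reconstruction from the description above, not the paper's exact cell: in particular, softmax normalization of the scalar attention weights is used here for concreteness, whereas the paper's footnote notes that a simpler normalization was actually used.

```python
import numpy as np

def rra_step(x_t, history, W_x, W_h, W_a, b):
    """One RRA-style update (simplified sketch).

    history = [h_{t-1}, h_{t-2}, ..., h_{t-K-1}]: the most recent state
    h_{t-1} feeds the regular recurrent path, while the K older states
    are combined through normalized scalar attention weights W_a and
    added back as a residual shortcut.
    """
    h_prev = history[0]
    candidate = np.tanh(W_x @ x_t + W_h @ h_prev + b)  # plain-RNN path
    a = np.exp(W_a) / np.exp(W_a).sum()                # normalize the K weights
    residual = sum(w * h for w, h in zip(a, history[1:]))
    return candidate + residual
```

Note that the attention introduces only $K$ scalar parameters per layer, which is why the parameter overhead relative to a plain RNN is tiny.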
Discussion
----------
**RRA alleviates gradient vanishing** In BPTT, the gradient vanishes when $\frac{\partial L}{\partial \mathbf{W}}$=$\sum\frac{\partial L}{\partial z}\frac{\partial z}{\partial \mathbf{h}_T}\frac{\partial \mathbf{h}_T}{\partial \mathbf{h}_{T-1}}\cdots \frac{\partial \mathbf{h}_0}{\partial \mathbf{W}}$ is close to 0. Because it sums the gradient contributions from every timestep, the dependency across timesteps cannot be captured if a gradient contribution is 0. RRA explicitly enforces short-cut connections across timesteps and directly passes the error signal from $\mathbf{h}_T$ to $\mathbf{h}_{T-K}$. The attention over the residual connection controls the relative contribution across multiple timesteps and keeps the gradient from decaying to 0, particularly when learning dependencies from long and complex sequences. Our experiments in Figures \[fig:adding\], \[fig:mnist\] and \[fig:imdb\_test\] demonstrate the stability of RRA in learning long and complex sequences.
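The effect of the shortcut on the backpropagated signal can be illustrated with a scalar toy computation (ours, not the paper's model): if each recurrent step contributes a derivative factor of magnitude $g<1$, the signal through $T$ plain steps decays like $g^T$, while a shortcut every $K$ steps shortens the longest multiplicative path to roughly $T/K$ factors.

```python
# Toy scalar illustration of why shortcuts help gradients survive.
T, K, g = 100, 5, 0.9

plain_path = g ** T          # plain RNN: product of T per-step factors
shortcut_path = g ** (T // K)  # hopping along shortcuts: ~T/K factors
```

With these numbers the shortcut path carries a signal several thousand times larger than the plain path, which is the qualitative behavior the experiments reflect.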
**Relation to related work** Several RNN variants have recently been proposed to address the gradient vanishing problem in recurrent networks. IRNN [@le2015simple] is an RNN composed of ReLUs and initialized with an identity weight matrix; URNN [@arjovsky2016unitary] uses a unitary hidden-to-hidden matrix, generalizing orthogonal matrices to the complex domain. In contrast, this work focuses on explicitly using multiple previous hidden states via a residual connection with attention. Higher order RNNs (HORNNs) [@soltani2016higher], proposed for language modeling, are similar to our work, but there are key differences: (1) RRA uses $\mathbf{h}_{t-1}$ as in a regular RNN and forms a residual connection with attention over the earlier states, while HORNN directly considers $\mathbf{h}_{t-1}$ to $\mathbf{h}_{t-K}$; (2) RRA introduces far fewer parameters, e.g., when each unit considers the past 3 states, RRA introduces only 2 additional parameters, while HORNN introduces 0.3 million more weights than a plain RNN. Recurrent Weighted Average (RWA) [@ostmeyer2017machine] also explores attention in RNNs, but RWA performs a weighted average over $\mathbf{h}_{1}$ to $\mathbf{h}_{t-1}$ when computing each $\mathbf{h}_t$; RRA is more flexible, considering the $K$+$1$ past states with residual attention. **Limitation of RRA** Although RRA captures long-range dependencies across timesteps with faster convergence and more stable training than a standard LSTM on multiple tasks, it also has a limitation: training speed is slightly slower than a standard LSTM. For example, on permuted MNIST, LSTM took on average 394s per epoch, while RRA ($K$=$5$) took 760s and RRA ($K$=$10$) took 773.6s. We conjecture that the additional time is spent computing the derivative of the residual attention and passing the error signal directly from current states to states several steps away.
However, it should be noted that none of our experiments used early stopping; when early stopping is applied to both RRA and LSTM, RRA finishes training and stops much earlier than LSTM.
Conclusion {#sec:conclusion}
==========
In this paper we introduced RRA to learn long-term dependencies from sequential data. The residual shortcut connections effectively pass the error signal across timesteps that are several steps apart, preventing the gradient vanishing problem. The attention mechanism defined over timesteps provides a natural way to summarize the individual contributions of past hidden states when predicting future hidden states. We compared RRA to a standard implementation of LSTM: RRA shows superior performance, more stable training and faster convergence on the adding problem, pixel-by-pixel MNIST classification and sentiment analysis. Even without additional mechanisms, e.g. word2vec embeddings or pre-training with unlabeled data, RRA demonstrates competitive performance compared to recent methods. Future work will extend RRA to other sequence learning scenarios, including machine translation and speech recognition.
Acknowledgements
================
We thank Mathias Niepert and Brandon Malone for their discussions and suggestions on this work.
[^1]: while softmax is more often used here, we found this is more straightforward and faster in BPTT without losing performance.
[^2]: <http://www.deeplearning.net/software/theano/>
[^3]: <http://ai.stanford.edu/~amaas/data/sentiment/>
---
abstract: 'The reductions of the Heun equation to the hypergeometric equation by polynomial transformations of its independent variable are enumerated and classified. Heun-to-hypergeometric reductions are similar to classical hypergeometric identities, but the conditions for the existence of a reduction involve features of the Heun equation that the hypergeometric equation does not possess; namely, its cross-ratio and accessory parameters. The reductions include quadratic and cubic transformations, which may be performed only if the singular points of the Heun equation form a harmonic or an equianharmonic quadruple, respectively; and several higher-degree transformations. This result corrects and extends a theorem in a previous paper, which found only the quadratic transformations. \[See K. Kuiken, “Heun’s equation and the hypergeometric equation”, [*SIAM Journal on Mathematical Analysis*]{} 10 (3) (1979), 655–657.\]'
address: 'Depts. of Mathematics and Physics, University of Arizona, Tucson AZ 85721, USA'
author:
- 'Robert S. Maier'
title: On Reducing the Heun Equation to the Hypergeometric Equation
---
Heun equation, hypergeometric equation, hypergeometric identity, Lamé equation, special function, Clarkson–Olver transformation.
Introduction {#sec:intro}
============
Consider the class of linear second-order differential equations on the Riemann sphere $\mathbb{CP}^1$ which are Fuchsian, i.e., have only regular singular points [@Hille76]. Any such equation with exactly three singular points can be transformed to the hypergeometric equation by appropriate changes of the independent and dependent variables. Similarly, any such equation with exactly four singular points can be transformed to the Heun equation. (See [@Erdelyi53 Chapter 15],[@Ronveaux95; @Snow52].)
Solutions of the Heun equation are much less well understood than hypergeometric functions [@Arscott81]. No general integral representation for them is known, for instance. Such solutions have recently been used in fluid dynamics [@Craster98; @Schmitz94] and drift–diffusion theory [@Debosscher98]. They also arise in lattice combinatorics [@Guttmann93; @Joyce94]. But it is difficult to carry out practical computations involving them. An explicit solution to the two-point connection problem for the general Heun equation is not known [@Schafke80a], though the corresponding problem for the hypergeometric equation has a classical solution. Most work on solutions of the Heun equation has focused on special cases, such as the Lamé equation [@Erdelyi53; @Maier04].
Determining which Heun equation solutions are expressible in terms of more familiar functions would obviously be useful: it would facilitate the solution of the two-point connection problem, and the computation of Heun equation monodromies. A significant result in this direction was obtained by Kuiken [@Kuiken79]. It is sometimes possible, by performing a quadratic change of the independent variable, to reduce the Heun equation to the hypergeometric equation, and thereby express its solutions in terms of hypergeometric functions. Kuiken’s quadratic transformations are not so well known as they should be. The useful monograph edited by Ronveaux [@Ronveaux95] does not mention them explicitly, though it lists Ref. [@Kuiken79] in its bibliography. One of Kuiken’s transformations was recently rediscovered by Ivanov [@Ivanov2001], in a disguised form.
Unfortunately, the main theorem of Ref. [@Kuiken79] is incorrect. The theorem asserts that a reduction to the hypergeometric equation, by a rational change of the independent variable, is possible only if the singular points of the Heun equation form a harmonic quadruple in the sense of projective geometry; in which case the change of variables must be quadratic. In this paper, we show that there are many alternatives. A reduction may also be possible if the singular points form an equianharmonic quadruple, with the change of variables being cubic. Additional singular point configurations permit changes of variable of degrees $3$, $4$, $5$, and $6$. Our main theorem (Theorem \[thm:main\]) and its corollaries classify all such reductions, up to affine automorphisms of the Heun and hypergeometric equations. It replaces the theorem of Ref. [@Kuiken79].
It follows from Theorem \[thm:main\] that in a suitably defined ‘nontrivial’ case, the local Heun function ${\mathop{{}\it Hl}\nolimits}$ can be reduced to the Gauss hypergeometric function ${}_2F_1$ by a formula of the type ${\mathop{{}\it Hl}\nolimits}(t)={}_2F_1(R(t))$ only if the pair $(d,q/\alpha\beta)$, computed from the parameters of ${\mathop{{}\it Hl}\nolimits}$, takes one of exactly $23$ values. These are listed in Theorem \[thm:culmination\]. A representative list of reductions is given in Theorem \[thm:useful0\]. These theorems should be of interest to special function theorists and applied mathematicians. We were led to our correction and expansion of the theorem of Ref. [@Kuiken79] by a discovery of Clarkson and Olver [@Clarkson96]: an unexpected reduction of the Weierstrass form of the equianharmonic Lamé equation to the hypergeometric equation. In §\[sec:CO\], we explain how this is a special case of the cubic Heun-to-hypergeometric reduction.
The new reductions are similar to classical hypergeometric transformations. (See [@Andrews99 Chapter 3]; also [@Erdelyi53 Chapter 2].) But reducing the Heun equation to the hypergeometric equation is more difficult than transforming the hypergeometric equation to itself, since conditions involving its singular point location parameter and accessory parameter, as well as its exponent parameters, must be satisfied. Actually, the reductions classified in this paper are of a somewhat restricted type, since unlike many classical hypergeometric transformations, they involve no change of the dependent variable. A classification of reductions of the more general type is possible, but is best phrased in algebraic-geometric terms, as a classification of certain branched covers of the Riemann sphere by itself. A further extension would allow the transformation of the independent variable to be algebraic rather than polynomial or rational, since at least one algebraic Heun-to-hypergeometric reduction is known to exist [@Joyce94]. Extended classification schemes are deferred to one or more further papers.
Preliminaries
=============
The Equations {#sec:defs}
-------------
The Gauss hypergeometric equation is $$\label{eq:hyper}
\frac{\d^2 y}{\d z^2} + \left(\frac{c}{z} + \frac{a+b-c+1}{z-1}\right)
\frac{\d y}{\d z} + \frac{ab}{z(z-1)}\,y = 0,$$
where $a,b,c\in\mathbb{C}$ are parameters. It and its solution space are specified by the Riemann $P$-symbol
$$\label{eq:HeunP}
P\left\{
\begin{array}{cccc}
0 & 1 & \infty & \\
0 & 0 & a & ;z \\
1-c & c-a-b & b &
\end{array}
\right\}$$
where each column, except the last, refers to a regular singular point. The first entry is its location, and the final two are the characteristic exponents of the solutions there. The exponents at each singular point are obtained by solving an indicial equation [@Hille76]. In general, each finite singular point $z_0$ has $\zeta$ as an exponent if and only if the equation has a local (Frobenius) solution of the form $(z-z_0)^\zeta h(z)$ in a neighborhood of $z=z_0$, where $h$ is analytic and nonzero at $z=z_0$. If the exponents at $z=z_0$ differ by an integer, this statement must be modified: the solution corresponding to the smaller exponent may have a logarithmic singularity at $z=z_0$. The definition extends in a straightforward way to $z_0=\infty$, and also to ordinary points, each of which has exponents $0,1$.
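As a worked instance of the indicial computation (standard material, included here for concreteness): substituting $y=z^{\zeta}h(z)$, with $h$ analytic and nonzero at $z=0$, into (\[eq:hyper\]) and collecting the lowest power $z^{\zeta-2}$ gives
$$\zeta(\zeta-1)+c\,\zeta=0,\qquad\text{so}\qquad \zeta\in\{0,\;1-c\},$$
which are precisely the exponents listed at $z=0$ in the $P$-symbol.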
There are $2\times3=6$ local solutions of ($\mathfrak{h}$) in all: two per singular point. If $c$ is not a nonpositive integer, the solution at $z=0$ belonging to the exponent zero will be analytic. When normalized to unity at $z=0$, it will be the Gauss hypergeometric function ${}_2F_1(a,b;c;z)$ [@Erdelyi53]. This is the sum of a hypergeometric series, which converges in a neighborhood of $z=0$. In general, ${}_2F_1(a,b;c;z)$ is not defined when $c$ is a nonpositive integer.
The Heun equation is usually written in the form $$\label{eq:Heun}
\frac{\d^2 u}{\d t^2}
+ \left( \frac\gamma t + \frac\delta{t-1} + \frac\epsilon{t-d}
\right)\frac{\d u}{\d t} + \frac{\alpha\beta t - q}{t(t-1)(t-d)}\,u = 0.$$
Here $d\in\mathbb{C}$, the location of the fourth singular point, is a parameter ($d\neq0,1$), and $\alpha,\beta,\gamma,\delta,\epsilon\in\mathbb{C}$ are exponent-related parameters. The $P$-symbol is
$$\label{eq:Psymbol}
P\left\{
\begin{array}{ccccc}
0 & 1 & d & \infty & \\
0 & 0 & 0 & \alpha & ;t \\
1-\gamma & 1-\delta & 1-\epsilon & \beta &
\end{array}
\right\}.$$
This does not uniquely specify the equation and its solutions, since it omits the accessory parameter $q\in\mathbb{C}$. The exponents are constrained by $$\label{eq:Pconstraint}
\alpha+\beta-\gamma-\delta-\epsilon+1 = 0.$$ This is a special case of Fuchs’s relation, according to which the sum of the $2n$ characteristic exponents of any second-order Fuchsian equation on $\mathbb{CP}^1$ with $n$ singular points must equal $n-2$ [@Poole36].
There are $2\times4=8$ local solutions of ($\mathfrak{H}$) in all: two per singular point. If $\gamma$ is not a nonpositive integer, the solution at $t=0$ belonging to the exponent zero will be analytic. When normalized to unity at $t=0$, it is called the local Heun function, and is denoted ${\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$ [@Ronveaux95]. It is the sum of a Heun series, which converges in a neighborhood of $t=0$ [@Ronveaux95; @Snow52]. In general, ${\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$ is not defined when $\gamma$ is a nonpositive integer.
If $\epsilon=0$ and $q=\alpha\beta d$, the Heun equation loses a singular point and becomes a hypergeometric equation. Similar losses occur if $\delta=0$, $q=\alpha\beta$, or $\gamma=0$, $q=0$. This paper will exclude the case when the Heun equation has fewer than four singular points, since reducing ($\mathfrak{h}$) to itself is a separate problem, leading to the classical hypergeometric transformations. The following case, in which the solution of (\[eq:Heun\]) can be reduced to quadratures, will be initially excluded.
\[def:triviality\] If $\alpha\beta=0$ and $q=0$, the Heun equation (\[eq:Heun\]) is said to be trivial. Triviality implies that one of the exponents at $t=\infty$ is zero (i.e., $\alpha\beta=0$), and is implied by absence of the singular point at $t=\infty$ (i.e., $\alpha\beta=0$, $\alpha+\beta=1$, $q=0$).
The transformation to ($\mathfrak{H}$) or ($\mathfrak{h}$) of a linear second-order Fuchsian differential equation with singular points at $t=0,1,d,\infty$ (resp. $z=0,1,\infty$), and with arbitrary exponents, is accomplished by certain linear changes of the dependent variable, called F-homotopies. (See [@Erdelyi53] and [@Ronveaux95 §[A]{}2 and Addendum, §1.8].) If an equation with singular points at $t=0,1,d,\infty$ has dependent variable $u$, carrying out the substitution $\tilde u(t)=t^{-\rho}(t-1)^{-\sigma}(t-d)^{-\tau} u(t)$ will convert the equation to a new one, with the exponents at $t=0,1,d$ reduced by $\rho,\sigma,\tau$ respectively, and those at $t=\infty$ increased by $\rho+\sigma+\tau$. By this technique, one exponent at each finite singular point can be shifted to zero.
In fact, the Heun equation has a group of F-homotopic automorphisms isomorphic to $({\mathbb Z}_2)^3$, since at each of $t=0,1,d$, the exponents $0,\zeta$ can be shifted to $-\zeta,0$, i.e., to $0,-\zeta$. Similarly, the hypergeometric equation has a group of F-homotopic automorphisms isomorphic to $({\mathbb Z}_2)^2$. These groups act on the $6$ and $3$-dimensional parameter spaces, respectively. For example, one of the latter actions is $(a,b;c)\mapsto(c-a,c-b;c)$, which is induced by an F-homotopy at $z=1$. From this F-homotopy follows Euler’s transformation [@Andrews99 §2.2] $$\label{eq:flip}
{}_2F_1(a,\,b;\,c;\,z)= (1-z)^{c-a-b}{}_2F_1(c-a,\,c-b;\,c;\,z),$$ which holds because ${}_2F_1$ is a local solution at $z=0$, rather than at $z=1$.
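Euler's transformation (\[eq:flip\]) is easy to check numerically from the series definition of ${}_2F_1$. The following self-contained Python check uses the standard Pochhammer recursion for the series terms; the truncation at 200 terms and the sample parameter values are our own choices.

```python
def hyp2f1(a, b, c, z, terms=200):
    """Truncated Gauss hypergeometric series
    sum_n (a)_n (b)_n / ((c)_n n!) z^n, valid for |z| < 1."""
    total, term = 1.0, 1.0
    for n in range(terms):
        # (a)_{n+1}/(a)_n = a + n, etc.; divide by (n+1) for the factorial.
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

# Euler: 2F1(a,b;c;z) = (1-z)^(c-a-b) * 2F1(c-a, c-b; c; z)
a, b, c, z = 0.3, 0.7, 1.5, 0.4
lhs = hyp2f1(a, b, c, z)
rhs = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
```

At $z=0.4$ the series terms decay geometrically, so 200 terms are far more than enough for the two sides to agree to machine precision.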
If the singular points of the differential equation are arbitrarily placed, transforming it to the Heun or hypergeometric equation will require a Möbius (i.e., projective linear or homographic) transformation, which repositions the singular points to the standard locations. A unique Möbius transformation maps any three distinct points in $\mathbb{CP}^1$ to any other three; but the same is not true of four points, which is why ($\mathfrak{H}$) has the singular point $d$ as a free parameter.
The Cross-Ratio {#subsec:crossratio}
---------------
The characterization of Heun equations that can be reduced to the hypergeometric equation will employ the cross-ratio orbit of $\{0,1,d,\infty\}$, defined as follows. If $A,B,C,D\in\mathbb{CP}^1$ are distinct, their cross-ratio is $$(A,B;C,D){\stackrel{\rm{def}}{=}}\frac{(C-A)(D-B)}{(D-A)(C-B)}\in\mathbb{CP}^1\setminus\{0,1,\infty\},$$ which is invariant under Möbius transformations. Permuting $A,B,C,D$ yields an action of the symmetric group $S_4$ on $\mathbb{CP}^1\setminus\{0,1,\infty\}$. The cross-ratio is invariant under interchange of $A,B$ and $C,D$, and also under simultaneous interchange of the two points in each pair. So each orbit contains no more than $4!/4=6$ cross-ratios. The possible actions of $S_4$ on $s\in\mathbb{CP}^1\setminus\{0,1,\infty\}$ are generated by $s\mapsto1-s$ and $s\mapsto 1/s$, and the orbit of $s$ comprises $$s,\quad 1-s,\quad 1/s,\quad 1/(1-s),\quad s/(s-1),\quad (s-1)/s,$$ which may not be distinct. This is called the cross-ratio orbit of $s$; or, if $s=(A,B;\allowbreak C,D)$, the cross-ratio orbit of the unordered set $\{A,B,C,D\}\subset\mathbb{CP}^1$. Two sets of distinct points $\{A_i,B_i,C_i,D_i\}$ ($i=1,2$) have the same cross-ratio orbit iff they are related by a Möbius transformation.
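The orbit computation is elementary and can be sketched directly in exact rational arithmetic (the function names are ours; the formulas are those just given, for four distinct finite points):

```python
from fractions import Fraction

def cross_ratio(A, B, C, D):
    """(A,B;C,D) = (C-A)(D-B) / ((D-A)(C-B)) for distinct finite points."""
    return Fraction((C - A) * (D - B), (D - A) * (C - B))

def cross_ratio_orbit(s):
    """The S_4 images of a cross-ratio s, generated by
    s -> 1 - s and s -> 1/s (at most six distinct values)."""
    return {s, 1 - s, 1 / s, 1 / (1 - s), s / (s - 1), (s - 1) / s}
```

For instance, the harmonic value $s=-1$ yields the three-element orbit $\{-1,\frac12,2\}$, while a generic value yields six distinct cross-ratios.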
Cross-ratio orbits generically contain six values, but there are two exceptional orbits: one with three and one with two. If $(A,B;\allowbreak
C,D)=-1$, the cross-ratio orbit of $\{A,B,C,D\}$ will be $\{-1,\frac12,2\}$. The value $-1$ for $(A,B;\allowbreak C,D)$ defines a so-called harmonic configuration: $A,B$ and $C,D$ are said to be harmonic pairs. More generally, if $\{A,B,C,D\}$ has cross-ratio orbit $\{-1,\frac12,2\}$, it is said to be a harmonic quadruple. It is easy to see that if $C=\infty$ and $A,B,D$ are distinct finite points, then $A,B$ and $C,D$ will be harmonic pairs iff $D$ is the midpoint of the line segment $\overline{AB}$. In consequence, $\{A,B,\infty,D\}$ will be a harmonic quadruple iff $\{A,B,D\}\subset\mathbb{C}$ comprises collinear, equally spaced points. So, $\{A,B,C,D\}\subset\mathbb{CP}^1$ will be a harmonic quadruple iff it can be mapped by a Möbius transformation to a set consisting of three equally spaced finite points and the point at infinity; equivalently, to the vertices of a square in $\mathbb{C}$.
The cross-ratio orbit containing exactly two values is $\{\frac12\pm\ri\frac{\sqrt3}2\}$. Any set $\{A,B,C,D\}$ with this as cross-ratio orbit is said to be an equianharmonic quadruple. $\{A,B,\infty,D\}$ will be an equianharmonic quadruple iff $A,B,D$ are the vertices of an equilateral triangle in $\mathbb{C}$. If $\mathbb{CP}^1$ is interpreted as a sphere via the usual stereographic projection, then by an affine transformation (a special Möbius transformation), this situation reduces to the case when $A,B,\infty,D$ are the vertices of a regular tetrahedron. So, $\{A,B,C,D\}\subset\mathbb{CP}^1$ will be an equianharmonic quadruple iff it can be mapped by a Möbius transformation to the vertices of a regular tetrahedron in $\mathbb{CP}^1$.
Cross-ratio orbits are of two sorts: real orbits such as the harmonic orbit, and non-real orbits such as the equianharmonic orbit. All values in a real orbit are real, and in a non-real orbit, all have a nonzero imaginary part. So, $\{A,B,C,D\}$ will have a specified real orbit as its cross-ratio orbit iff it can be mapped by a Möbius transformation to a set consisting of three specified collinear points in $\mathbb{C}$ and the point at infinity; equivalently, to the vertices of a specified quadrangle (generically irregular) in $\mathbb{C}$. Similarly, it will have a specified non-real orbit as its cross-ratio orbit iff it can be mapped to a set consisting of a specified triangle in $\mathbb{C}$ and the point at infinity; equivalently, to the vertices of a specified tetrahedron (generically irregular) in $\mathbb{CP}^1$.
The cross-ratio orbit of $\{0,1,d,\infty\}$ will be the harmonic orbit iff $d$ equals $-1$, $\frac12$, or $2$, and the equianharmonic orbit iff $d$ equals $\frac12\pm \ri\frac{\sqrt{3}}2$. In contrast, it will be a specified generic orbit iff $d$ takes one of six orbit-specific values. The cross-ratio orbit of $\{0,1,d,\infty\}$ being a specified orbit is equivalent to its being the same as the cross-ratio orbit of some specified quadruple of the form $\{0,1,D,\infty\}$, i.e., to there being a Möbius transformation that maps $\{0,1,d,\infty\}$ onto $\{0,1,D,\infty\}$. The possibilities are $$\label{eq:firstlist}
d=D,\quad 1-D,\quad 1/D,\quad 1/(1-D),\quad D/(D-1),\quad (D-1)/D.$$ By examination, this is equivalent to $\{0,1,d\}$ being mapped onto $\{0,1,D\}$ by some [*affine*]{} transformation, i.e., to $\triangle01d$ being similar to $\triangle01D$. The corresponding affine transformations $t\mapsto A_1(t)$ are $$\label{eq:secondlist}
A_1(t)= t,\quad 1-t,\quad Dt,\quad (D-1)t+1,\quad (1-D)t+D,\quad D(1-t).$$ This interpretation gives a geometric significance to the possible values of $d$.
For the equianharmonic orbit, in which the six values degenerate to two, the triangle $\triangle01D$ may be taken to be any equilateral triangle. For any real orbit, $\triangle01D$ must be taken to be degenerate, with collinear vertices. For example, the cross-ratio orbit of $\{0,1,d,\infty\}$ being the harmonic orbit is equivalent to $\triangle01d$ being similar to $\triangle012$, i.e., to $\{0,1,d\}$ consisting of three equally spaced collinear points. This will be the case iff $d$ equals $-1$, $\frac12$, or $2$, in agreement with the definition of a harmonic quadruple.
Automorphisms {#subsec:auto}
-------------
According to the theory of the Riemann $P$-function, any Möbius transformation $M$ of the independent variable will preserve characteristic exponents. For the hypergeometric equation (\[eq:hyper\]), this implies that if $M$ is one of the $3!$ Möbius transformations that permute the singular points $z=0,1,\infty$, the exponents of the transformed equation ($\tilde{\mathfrak{h}}$) at its singular points $M(0),M(1),M(\infty)$ will be those of (\[eq:hyper\]) at $0,1,\infty$. But if $M$ is not affine, i.e., $M(\infty)\neq\infty$, then ($\tilde{\mathfrak{h}}$) will not in general be a hypergeometric equation, since its exponents at $M(\infty)$ may both be nonzero. To convert ($\tilde{\mathfrak{h}}$) to a hypergeometric equation, the permutation must in this case be followed by an F-homotopic transformation of the form $\tilde y(z)=[z-M(\infty)]^{-a} y(z)$ or $\tilde y(z)=[z-M(\infty)]^{-b}
y(z)$.
${\rm Aut}(\mathfrak{h})$, the automorphism group of the hypergeometric equation (\[eq:hyper\]), is the group of changes of variable (Möbius of the independent variable, linear of the dependent) which leave (\[eq:hyper\]) invariant, up to parameter changes. Similarly, ${\rm Aut}(\mathfrak{H})$ is the automorphism group of ($\mathfrak{H}$).
${\rm Aut}(\mathfrak{h})$ acts on the 3-dimensional parameter space of (\[eq:hyper\]). It contains the symmetric group $S_3$ of permutations of singular points as a subgroup, and the group $({\mathbb Z}_2)^2$ of F-homotopies as a normal subgroup. So ${\rm Aut}(\mathfrak{h})\simeq
({\mathbb Z}_2)^2 \rtimes S_3$, a semidirect product. It is isomorphic to $S_4$, the octahedral group [@Dwork84].
Within ${\rm Aut}(\mathfrak{h})$, the Möbius automorphism subgroup is the group ${\mathcal M}(\mathfrak{h}){\stackrel{\rm{def}}{=}}\{1\}\times S_3$, which permutes the singular points $z=0,1,\infty$. The subgroup of affine automorphisms is ${\mathcal A}(\mathfrak{h}){\stackrel{\rm{def}}{=}}\{1\}\times S_2$, which permutes the [*finite*]{} singular points $z=0,1$, and fixes $\infty$. (It is generated by the involution $z\mapsto1-z$.) The F-homotopic automorphism subgroup is $({\mathbb Z}_2)^2\times\{1\}$.
The action of ${\rm Aut}(\mathfrak{h})$ on the $2\times3=6$ local solutions is as follows. $\left|{\rm Aut}(\mathfrak{h})\right|=2^2\times3!=24$, and applying the transformations in ${\rm Aut}(\mathfrak{h})$ to any single local solution yields $24$ solutions of (\[eq:hyper\]). Applying them to ${}_2F_1$, for instance, yields the $24$ series solutions of Kummer [@Dwork84]. However, the $24$ solutions split into six sets of four, since for each singular point $z_0\in\{0,1,\infty\}$ there is a subgroup of ${\rm Aut}(\mathfrak{h})$ of order $2^1\times2!=4$, each element of which fixes $z=z_0$ and performs no F-homotopy there; so it leaves each local solution at $z=z_0$ invariant.
For example, the four transformations in the subgroup associated to $z=0$ yield four equivalent expressions for ${}_2F_1(a,b;c;z)$; one of which is ${}_2F_1(a,b;c;z)$ itself, and another of which appears above in (\[eq:flip\]). The remaining two are $(1-z)^{-a}{}_2F_1\left(a,c-b;c;z/(z-1)\right)$ and $(1-z)^{-b}{}_2F_1\left(b,c-a;c;z/(z-1)\right)$. The five remaining sets of four are expressions for the five remaining local solutions. One that will play a role is the ‘second’ local solution at $z=0$, which belongs to the exponent $1-c$. One of the four expressions for it, in terms of ${}_2F_1$, is [@Erdelyi53] $$\label{eq:tilde1}
\widetilde{{}_2F_1}(a,\,b;\,c;\,z){\stackrel{\rm{def}}{=}}z^{1-c}{}_2F_1(a-c+1,\,b-c+1;\,2-c;\,z).$$ $\widetilde{{}_2F_1}(a,b;c;z)$ is defined if $c\neq2,3,4,\dotsc$ The second local solution must be specified differently if $c=2,3,4,\dotsc$, and also if $c=1$, since in that case, $\widetilde{{}_2F_1}$ reduces to ${{}_2F_1}$. (Cf. [@Abramowitz65 §15.5].) When $\widetilde{{}_2F_1}$ is defined, it may be given a unique meaning by choosing the principal branch of $z^{1-c}$.
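These four equivalent expressions can be spot-checked numerically with mpmath's built-in `hyp2f1`. The sketch below verifies the two Pfaff-type expressions just quoted, together with the standard Euler transformation (presumably the identity (\[eq:flip\]) referred to above):

```python
from mpmath import mp, hyp2f1, almosteq

mp.dps = 30
a, b, c, z = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.4'), mp.mpf('0.2')

f = hyp2f1(a, b, c, z)
# the two expressions quoted in the text:
pfaff_a = (1 - z)**(-a) * hyp2f1(a, c - b, c, z / (z - 1))
pfaff_b = (1 - z)**(-b) * hyp2f1(b, c - a, c, z / (z - 1))
# the standard Euler transformation:
euler = (1 - z)**(c - a - b) * hyp2f1(c - a, c - b, c, z)

for g in (pfaff_a, pfaff_b, euler):
    assert almosteq(f, g, rel_eps=mp.mpf('1e-20'))
```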
The automorphism group of the Heun equation is slightly more complicated. There are $4!$ Möbius transformations $M$ that map the singular points $t=0,1,d,\infty$ onto $t=0,1,d',\infty$, for some $d'\in\mathbb{CP}^1\setminus\{0,1,\infty\}$. The possible $d'$ constitute the cross-ratio orbit of $\{0,1,d,\infty\}$. Of these $4!$ transformations, $3!$ fix $t=\infty$, i.e., are affine. All values $d'$ are obtained by affine transformations, i.e., a mapping is possible iff $\triangle01d$ is similar to $\triangle01d'$. (Cf. the discussion in §\[subsec:crossratio\].) If $M$ is not affine, it must be followed by an F-homotopic transformation of the form $\tilde
u(t)=[t-M(\infty)]^{-\alpha} u(t)$ or $\tilde u(t)=[t-M(\infty)]^{-\beta}
u(t)$.
${\rm Aut}(\mathfrak{H})$ acts on the 6-dimensional parameter space of (\[eq:Heun\]). It contains the group $S_4$ of singular point permutations as a subgroup, and the group $({\mathbb Z}_2)^3$ of F-homotopies as a normal subgroup. So ${\rm Aut}(\mathfrak{H})\simeq
({\mathbb Z}_2)^3 \rtimes S_4$. It turns out to be isomorphic to the Coxeter group $\mathcal{D}_4$ [@Grove85].
Within ${\rm Aut}(\mathfrak{H})$, the Möbius automorphism subgroup is the group ${\mathcal M}(\mathfrak{H}){\stackrel{\rm{def}}{=}}\{1\}\times S_4$, which maps between sets of singular points of the form $\{0,1,d',\infty\}$. The subgroup of affine automorphisms is ${\mathcal A}(\mathfrak{H}){\stackrel{\rm{def}}{=}}\{1\}\times S_3$, which maps between sets of [*finite*]{} singular points of the form $\{0,1,d'\}$, and fixes $\infty$. The F-homotopic automorphism subgroup is $({\mathbb Z}_2)^3\times\{1\}$.
The action of ${\rm Aut}(\mathfrak{H})$ on the $2\times4=8$ local solutions is as follows. $\left|{\rm Aut}(\mathfrak{H})\right|=2^3\times4!=192$, and applying the transformations in ${\rm Aut}(\mathfrak{H})$ to any single local solution yields $192$ solutions of (\[eq:Heun\]). However, the $192$ solutions split into eight sets of $24$, since for each singular point $t_0\in\{0,1,d,\infty\}$ there is a subgroup of ${\rm
Aut}(\mathfrak{H})$ of order $2^2\times3!=24$, each element of which fixes $t=t_0$ and performs no F-homotopy there; so it leaves each local solution at $t=t_0$ invariant. This statement must be interpreted with care: selecting $t_0=d$ selects not a single singular point, but rather a cross-ratio orbit.
The $24$ transformations in the subgroup associated to $t_0=0$ yield $23$ equivalent expressions for ${\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$, one of which, the only one with no F-homotopic prefactor, appears on the right in the identity [@Ronveaux95; @Snow52] $$\label{eq:newguy1}
\quad{\mathop{{}\it Hl}\nolimits}\left(d,\,q;\,\alpha,\,\beta,\,\gamma,\,\delta;\,t\right)
={\mathop{{}\it Hl}\nolimits}\left(1/d,\,q/d;\,\alpha,\,\beta,\,\gamma,\,\alpha+\beta-\gamma-\delta+1;\,t/d\right).$$ (The two sides are defined if $\gamma$ is not a nonpositive integer.) The remaining seven sets of 24 are expressions for the remaining seven local solutions. One that will play a role is the ‘second’ solution at $t=0$, which belongs to the exponent $1-\gamma$. One of the $24$ expressions for it, in terms of ${\mathop{{}\it Hl}\nolimits}$, is [@Snow52] $$\label{eq:newguy2}
\quad\widetilde{\mathop{{}\it Hl}\nolimits}(d,\,q;\,\alpha,\,\beta,\,\gamma,\,\delta;\,t){\stackrel{\rm{def}}{=}}t^{1-\gamma}
{\mathop{{}\it Hl}\nolimits}(d,\,\tilde q;\,\alpha-\gamma+1,\,\beta-\gamma+1,\,2-\gamma,\,\delta;\,t),$$ where the transformed accessory parameter $\tilde q$ equals $q+(1-\gamma)(\epsilon+d\delta)$. The quantity $\widetilde{{\mathop{{}\it Hl}\nolimits}}(d,q;\alpha,\beta,\gamma,\delta;t)$ is defined if $\gamma\neq2,3,4,\dotsc$ The second local solution must be specified differently if $\gamma=2,3,4,\dotsc$, and also if $\gamma=1$, since in that case, $\widetilde{{\mathop{{}\it Hl}\nolimits}}$ reduces to ${\mathop{{}\it Hl}\nolimits}$. When $\widetilde{\mathop{{}\it Hl}\nolimits}$ is defined, it may be given a unique meaning by choosing the principal branch of $t^{1-\gamma}$.
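Identity (\[eq:newguy1\]) lends itself to a numerical spot-check. In the sketch below, the helper `heun_local` (our name) sums the Maclaurin series of ${\mathop{{}\it Hl}\nolimits}$ via the standard three-term coefficient recurrence $d(j{+}1)(j{+}\gamma)\,c_{j+1}=[\,j((j{-}1{+}\gamma)(1{+}d)+d\delta+\epsilon)+q\,]c_j-(j{-}1{+}\alpha)(j{-}1{+}\beta)c_{j-1}$, convergent for $|t|<\min(1,|d|)$:

```python
def heun_local(d, q, al, be, ga, de, t, n=80):
    """Truncated local Heun series at t = 0 (the exponent-0 solution),
    valid for |t| < min(1, |d|); here eps = al + be - ga - de + 1."""
    ep = al + be - ga - de + 1
    c_prev, c_cur, s, tj = 0.0, 1.0, 1.0, 1.0
    for j in range(n):
        Rj = d * (j + 1) * (j + ga)
        Qj = j * ((j - 1 + ga) * (1 + d) + d * de + ep)
        Pj = (j - 1 + al) * (j - 1 + be)
        c_prev, c_cur = c_cur, ((Qj + q) * c_cur - Pj * c_prev) / Rj
        tj *= t
        s += c_cur * tj
    return s

d, q, al, be, ga, de, t = 3.0, 0.4, 0.5, 1.2, 0.9, 0.7, 0.2
lhs = heun_local(d, q, al, be, ga, de, t)
rhs = heun_local(1/d, q/d, al, be, ga, al + be - ga - de + 1, t/d)
print(lhs, rhs)   # the two sides of (eq:newguy1)
```

The parameter values are arbitrary test data, not taken from the paper.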
In general, transformations in ${\rm Aut}(\mathfrak{H})$ will alter not merely $d$ and the exponent parameters, but also the accessory parameter $q$. This is illustrated by (\[eq:newguy1\]) and (\[eq:newguy2\]). The general transformation law of $q$ is rather complicated. Partly for this reason, no satisfactory list of the 192 solutions has appeared in print. The original paper of Heun [@Heun1889] tabulates 48 of the 192, but omits the value of $q$ in each. His table also unfortunately contains numerous misprints and cannot be used in practical applications [@Schmitz94 §6.3]. Incidentally, one sometimes encounters a statement that there are only 96 distinct solutions [@Babister67; @Exton93; @Ronveaux95]. This is true only if one uses (\[eq:newguy1\]) to identify the 192 solutions in pairs.
Polynomial Heun-to-Hypergeometric Reductions {#sec:main}
============================================
We now state and prove Theorem \[thm:main\], our corrected and expanded version of the theorem of Kuiken [@Kuiken79].
The theorem will characterize when a homomorphism of rational substitution type from the Heun equation (\[eq:Heun\]) to the hypergeometric equation (\[eq:hyper\]) exists. It will list the possible substitutions, up to affine automorphisms of the two equations. It is really a characterization of the ${\mathcal A}(\mathfrak{H})$-orbits that can be mapped by homomorphisms of this type to ${\mathcal
A}(\mathfrak{h})$-orbits. The possible substitutions, it turns out, are all polynomial.
For ease of understanding, the characterization will be concrete: it will state that $\triangle01d$ must be similar to one of five specified triangles of the form $\triangle01D$. By the remarks in §\[subsec:crossratio\], similarity occurs iff $d$ belongs to the cross-ratio orbit of $D$, i.e., iff $D$ can be generated from $d$ by repeated application of $d\mapsto 1-d$ and $d\mapsto1/d$. The two exceptional cross-ratio orbits, namely $\{-1,\frac12,2\}$ (harmonic) and $\{\frac12\pm\ri\frac{\sqrt3}2\}$ (equianharmonic), will play a prominent role. It is worth noting that if ${\rm Re}\, D=\frac12$, the orbit of $D$ is closed under complex conjugation.
For each value of $D$, the polynomial map from $\mathbb{CP}^1\ni t$ to $\mathbb{CP}^1\ni z$, which is denoted $R$, will be given explicitly when $d=D$. If $d$ is any other value on the cross-ratio orbit of $D$, as listed in (\[eq:firstlist\]), the polynomial map would be computed by composing with the corresponding affine transformation $A_1$ of $\mathbb{C}$ that takes $\triangle01d$ to $\triangle01D$; which is listed in (\[eq:secondlist\]). So if $d\neq D$, statements in the theorem dealing with singular points, characteristic exponents, and the accessory parameter must be altered. For example, case 1 of the theorem refers to a distinguished singular point $d_0$, the mandatory value of which is given when $d=D$. If $d\neq D$, its mandatory value would be computed as the preimage of that point under $A_1$. Similarly, a statement “the exponents of $t=0$ must be $0,1/2$”, valid when $d=D$, would be interpreted if $d\neq D$ as “the exponents of the preimage of $t=0$ under $A_1$ must be $0,1/2$”. And a statement that $q/\alpha\beta$ must take some value would be interpreted if $d\neq D$ as a statement that $q/\alpha\beta$ must equal the preimage of that value under $A_1$.
In the statement of the theorem and what follows, $S{\stackrel{\rm{def}}{=}}1-R$.
\[thm:main\] A Heun equation [(]{}\[eq:Heun\][)]{}, which has four singular points and is nontrivial [(]{}i.e., $\alpha\beta\neq0$ or $q\neq0$[)]{}, can be transformed to the hypergeometric equation [(]{}$\mathfrak{h}$[)]{} by a rational substitution $z=R(t)$ if and only if $R$ is a polynomial, $\alpha\beta\neq0$, and one of the following two conditions is satisfied.
1. $\triangle01d$ is similar to $\triangle01D$, for one of the values of $D$ listed in subcases [1a]{}–[1c]{}; each of which is real, so the triangle must be degenerate. Also, the normalized accessory parameter $q/\alpha\beta$ must equal one of $0,1,d$, which may be denoted $d_0$. Each subcase lists the value of $d_0$ when $d=D$.
2. $\triangle01d$ is similar to $\triangle01D$, for one of the values of $D$ listed in subcases [2a]{}–[2d]{}; each of which is non-real and has real part equal to $\frac12$, so the triangle must be isosceles. Each subcase lists the value of $q/\alpha\beta$ when $d=D$.
Besides specifying $D$ and the value of $q/\alpha\beta$ when $d=D$, each subcase imposes restrictions on the characteristic exponent parameters at the singular points $0,1,d$. The subcases of the ‘real’ case [1]{} are the following.
1a. [\[]{}Harmonic [(]{}equally spaced collinear points[)]{} case.[\]]{} $D=2$. Suppose $d=D$. Then $d_0$ must equal $1$, and $t=0,d$ must have the same characteristic exponents, i.e., $\gamma=\epsilon$. In general, either $R$ or $S$ will be the degree-$2$ polynomial $t(2-t)$, which maps $t=0,d$ to $z=0$ and $t=1$ to $z=1$ [(]{}with double multiplicity[)]{}. There are special circumstances in which $R$ may be quartic, which are listed separately, as subcase [1c]{}.
1b. $D=4$. Suppose $d=D$. Then $d_0$ must equal $1$, the point $t=1$ must have characteristic exponents that are double those of $t=d$, i.e., $1-\delta=2(1-\epsilon)$, and $t=0$ must have exponents $0,1/2$, i.e., $\gamma=1/2$. Either $R$ or $S$ will be the degree-$3$ polynomial $(t-1)^2(1-t/4)$, which maps $t=0$ to $z=1$ and $t=1,d$ to $z=0$ [(]{}the former with double multiplicity[)]{}.
1c. [\[]{}Special harmonic case.[\]]{} $D=2$. Suppose $d=D$. Then $d_0$ must equal $1$, and $t=0,d$ must have the same characteristic exponents, i.e., $\gamma=\epsilon$. Moreover, the exponents of $t=1$ must be twice those of $t=0,d$, i.e., $1-\delta=2(1-\gamma)=2(1-\epsilon)$. Either $R$ or $S$ will be the degree-$4$ polynomial $4[t(2-t)-\frac12]^2$, which maps $t=0,1,d$ to $z=1$ [(]{}$t=1$ with double multiplicity[)]{}.
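The mapping data of subcase 1b can be verified symbolically. In the sketch below, note that the map also has an ordinary critical point at $t=3$, sent to $z=1$; that is, an additional double zero of $S=1-R$, permitted because the nonzero exponent at the image point is $1-\gamma=1/2$:

```python
import sympy as sp

t = sp.symbols('t')
R = (t - 1)**2 * (1 - t/4)          # subcase 1b, with d = D = 4

assert R.subs(t, 0) == 1            # t = 0 is sent to z = 1
assert R.subs(t, 1) == 0            # t = 1 is sent to z = 0 ...
assert R.subs(t, 4) == 0            # ... as is t = d = 4
dR = sp.diff(R, t)
assert dR.subs(t, 1) == 0           # t = 1 with double multiplicity
assert dR.subs(t, 4) != 0           # t = d with single multiplicity
# the only other finite critical point is the ordinary point t = 3,
# which is sent to z = 1 (a double zero of S = 1 - R):
assert set(sp.solve(dR, t)) == {1, 3}
assert R.subs(t, 3) == 1
```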
The subcases of the ‘non-real’ case [2]{} are the following.
2a. [\[]{}Equianharmonic [(]{}equilateral triangle[)]{} case.[\]]{} $D=\frac12+ \ri\frac{\sqrt{3}}2$. $q/\alpha\beta$ must equal the mean of $0,1,d$, and $t=0,1,d$ must have the same characteristic exponents, i.e., $\gamma=\delta=\epsilon$. Suppose $d=D$. Then $q/\alpha\beta$ must equal $\frac12+\ri\frac{\sqrt3}6$. In general, either $R$ or $S$ will be the degree-$3$ polynomial $\left[1-t/(\frac12+\ri\frac{\sqrt3}6)\right]^3$, which maps $t=0,1,d$ to $z=1$ and $t=q/\alpha\beta$ to $z=0$ [(]{}with triple multiplicity[)]{}. There are special circumstances in which $R$ may be sextic, which are listed separately, as subcase [2d]{}.
2b. $D=\frac12+\ri\frac{5\sqrt2}4$. Suppose $d=D$. Then $q/\alpha\beta$ must equal $\frac12+\ri\frac{\sqrt2}4$, $t=d$ must have characteristic exponents $0,1/3$, i.e., $\epsilon=2/3$, and $t=0,1$ must have exponents $0,1/2$, i.e., $\gamma=\delta=1/2$. Either $R$ or $S$ will be the degree-$4$ polynomial $\left[1-t/(\frac12+\ri\frac{5\sqrt2}4)\right]\left[1-t/(\frac12+\ri\frac{\sqrt2}4)\right]^3$, which maps $t=d,q/\alpha\beta$ to $z=0$ [(]{}the latter with triple multiplicity[)]{} and $t=0,1$ to $z=1$.
2c. $D=\frac12+\ri\frac{11\sqrt{15}}{90}$. Suppose $d=D$. Then $q/\alpha\beta$ must equal $\frac12+\ri\frac{\sqrt{15}}{18}$, $t=d$ must have characteristic exponents $0,1/2$, i.e., $\epsilon=1/2$, and $t=0,1$ must have exponents $0,1/3$, i.e., $\gamma=\delta=2/3$. Either $R$ or $S$ will be the degree-$5$ polynomial $At(t-1)\left[t-(\frac12+\ri\frac{\sqrt{15}}{18})\right]^3$, which maps $t=0,1,q/\alpha\beta$ to $z=0$ [(]{}the last with triple multiplicity[)]{}. The factor $A$ is chosen so that it maps $t=d$ to $z=1$, as well; explicitly, $A=-\ri\frac{2025\sqrt{15}}{64}$.
2d. [\[]{}Special equianharmonic case.[\]]{} $D=\frac12+\ri\frac{\sqrt{3}}2$. $q/\alpha\beta$ must equal the mean of $0,1,d$, and $t=0,1,d$ must have characteristic exponents $0,1/3$, i.e., $\gamma=\delta=\epsilon=2/3$. Suppose $d=D$. Then $q/\alpha\beta$ must equal $\frac12+\ri\frac{\sqrt3}6$. Either $R$ or $S$ will be the degree-$6$ polynomial $4\left\{[1-t/(\frac12+\ri\frac{\sqrt3}6)]^3-\frac12\right\}^2$, which maps $t=0,1,d,q/\alpha\beta$ to $z=1$ [(]{}the last with triple multiplicity[)]{}.
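The exact constants appearing in subcases 2a, 2b, and 2c, including the normalization $A$, can be verified directly with a computer algebra system; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
half, I = sp.Rational(1, 2), sp.I

# subcase 2a: the cubic sends t = 0, 1, d to z = 1
pa, da = half + I*sp.sqrt(3)/6, half + I*sp.sqrt(3)/2
Ra = (1 - t/pa)**3
assert all(sp.simplify(Ra.subs(t, x) - 1) == 0 for x in (0, 1, da))

# subcase 2b: the quartic sends t = 0, 1 to z = 1 (and t = d, p to z = 0)
pb, db = half + I*sp.sqrt(2)/4, half + I*5*sp.sqrt(2)/4
Rb = (1 - t/db) * (1 - t/pb)**3
assert Rb.subs(t, 0) == 1 and sp.simplify(Rb.subs(t, 1) - 1) == 0

# subcase 2c: the stated normalization A indeed gives R(d) = 1
pc, dc = half + I*sp.sqrt(15)/18, half + I*11*sp.sqrt(15)/90
A = -I*2025*sp.sqrt(15)/64
Rc = A * t * (t - 1) * (t - pc)**3
assert sp.simplify(Rc.subs(t, dc) - 1) == 0
```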
The origin of the special harmonic and equianharmonic subcases is easy to understand. In subcase 1c, $t\mapsto R(t)$ or $S(t)$ is the composition of the quadratic map of subcase 1a with the map $z\mapsto4(z-\frac12)^2$. In subcase 2d, $t\mapsto R(t)$ or $S(t)$ is similarly the composition of the cubic map of subcase 2a with $z\mapsto4(z-\frac12)^2$. In both 1c and 2d, the further restrictions on exponents make possible the additional quadratic transformation of $z$, which transforms the hypergeometric equation into itself (see [@Andrews99 §3.1] and [@Erdelyi53]).
\[rem:rem2\] $R$ is determined uniquely by the choices enumerated in the theorem. There is a choice of subcase, a choice of $d$ from the cross-ratio orbit of $D$, and a binary choice between $R$ and $S$. The final two choices amount to choosing affine maps $A_1\in{\mathcal A}(\mathfrak{H})$ and $A_2\in{\mathcal A}(\mathfrak{h})$, i.e., $A_2(z)=z$ or $1-z$, which precede and follow a canonical substitution.
In the harmonic case 1a, in which the ${\mathcal A}(\mathfrak{H})$-orbit includes three values of $d$, there are accordingly $3\times2=6$ possibilities for $R$; namely, $$R=t^2,\,1-t^2;\quad (2t-1)^2,\,1-(2t-1)^2;\quad t(2-t),\,1-t(2-t),$$ corresponding to $d=-1,-1;\frac12,\frac12;2,2$, respectively. These are the quadratic transformations of Kuiken [@Kuiken79]. In the equianharmonic case 2a, in which the orbit includes only two values of $d$, there are $2\times2=4$ possibilities; namely, $$R=[1-t/(\tfrac12\pm \ri\tfrac{\sqrt3}6)]^3,
\qquad
1-[1-t/(\tfrac12\pm \ri\tfrac{\sqrt3}6)]^3,$$ corresponding to $d=\frac12\pm \ri\frac{\sqrt{3}}2$. The remaining subcases, with the exception of 1c and 2d, correspond to generic cross-ratio orbits: each value of $D$ specifies six values of $d$. In each of those subcases, there are $6\times2=12$ possibilities. So in all, there are 56 possibilities for $R$.
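The six quadratic substitutions can be checked mechanically. The sketch below verifies, for the three $R$ representatives (the partners $S=1-R$ follow at once), that the stated pair of singular points coalesces and that the double point $d_0=q/\alpha\beta$ is a simple critical point of the map:

```python
import sympy as sp

t = sp.symbols('t')
# the three 'R' representatives of the harmonic case, as (d, R, d0, pair),
# where d0 is the double point and the two points in `pair` coalesce:
cases = [
    (-1, t**2, 0, (1, -1)),
    (sp.Rational(1, 2), (2*t - 1)**2, sp.Rational(1, 2), (0, 1)),
    (2, t*(2 - t), 1, (0, 2)),
]
for d, R, d0, pair in cases:
    assert sp.degree(R, t) == 2
    assert len({R.subs(t, p) for p in pair}) == 1   # the pair coalesces
    assert sp.diff(R, t).subs(t, d0) == 0           # d0 is a critical point
```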
\[rem:jumpahead\] The characteristic exponents of the singular points $z=0,1,\infty$ of (\[eq:hyper\]) can be computed from those of the singular points $t=0,1,d,\infty$ of (\[eq:Heun\]), together with the formula for $R$. The computation relies on Proposition \[thm:basicprop\] below, which may be summarized thus. If $t=t_0$ is not a critical point of the map $t\mapsto z=R(t)$, then the exponents of $z=R(t_0)$ will be the same as those of $t_0$. If, on the other hand, $t=t_0$ is mapped to $z=z_0{\stackrel{\rm{def}}{=}}R(t_0)$ with multiplicity $k>1$, i.e., $t=t_0$ is a $(k-1)$-fold critical point of $R$ and $z=z_0$ is a critical value, then the exponents of $z_0$ will be $1/k$ times those of $t_0$.
For example, in the harmonic case 1a, the map $t\mapsto z$ takes two of $t=0,1,d$ to either $z=0$ or $z=1$, and by examination, the coalesced point is not a critical value of the map; so the characteristic exponents of those two points are preserved, and must therefore be the same, as stated in the theorem. On the other hand, the characteristic exponents of the third point of the three, $t=d_0$, are necessarily halved when it is mapped to $z=1$ or $z=0$, since by examination, $R$ always has a simple critical point at $t=d_0$, i.e., $z\sim{\rm const} + C(t-d_0)^2$ for some nonzero $C$. (These statements follow by considering the canonical $d=D$ case.) So if $\delta_0$ denotes the parameter (out of $\gamma,\delta,\epsilon$) corresponding to $t=d_0$, the characteristic exponents of $z=1$ or $z=0$ will be $0,(1-\delta_0)/2$. $R$, being a quadratic polynomial, also has a simple critical point at $t=\infty$, so the characteristic exponents of $z=\infty$ are one-half those of $t=\infty$, i.e., $\alpha/2,\beta/2$. It follows that in the harmonic case, the Gauss parameters $(a,b;c)$ of the resulting hypergeometric equation will be $(\alpha/2,\beta/2;(\delta_0+1)/2)$ or $(\alpha/2,\beta/2;(\alpha+\beta-\delta_0+1)/2)$.
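The whole harmonic reduction can be tested numerically: taking the second parameter choice $c=(\alpha+\beta-\delta_0+1)/2$, equivalently $\gamma=\epsilon=c$ and $\delta=1-2(c-a-b)$, the exponent-0 local Heun function at $t=0$ should coincide with ${}_2F_1(a,b;c;t(2-t))$. A sketch (the series helper `heun_local` is our name, summing the standard Heun coefficient recurrence):

```python
from mpmath import mp, hyp2f1

mp.dps = 25

def heun_local(d, q, al, be, ga, de, t, n=120):
    # truncated local Heun series at t = 0 (standard recurrence)
    ep = al + be - ga - de + 1
    c_prev, c_cur, s, tj = mp.mpf(0), mp.mpf(1), mp.mpf(1), mp.mpf(1)
    for j in range(n):
        Rj = d * (j + 1) * (j + ga)
        Qj = j * ((j - 1 + ga) * (1 + d) + d * de + ep)
        Pj = (j - 1 + al) * (j - 1 + be)
        c_prev, c_cur = c_cur, ((Qj + q) * c_cur - Pj * c_prev) / Rj
        tj *= t
        s += c_cur * tj
    return s

# hypergeometric data, and the induced Heun data for R = t(2 - t):
a, b, c = mp.mpf('0.3'), mp.mpf('0.5'), mp.mpf('1.1')
al, be, ga = 2*a, 2*b, c        # alpha = 2a, beta = 2b, gamma = eps = c
de = 1 - 2*(c - a - b)          # exponents of t = 1 are halved by the map
q = al*be                       # q/(alpha*beta) = d0 = 1

t = mp.mpf('0.15')
u1 = heun_local(2, q, al, be, ga, de, t)
u2 = hyp2f1(a, b, c, t*(2 - t))
print(u1, u2)
```

The hypergeometric parameters are arbitrary test values; agreement of the two series to working precision is the check.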
In the equianharmonic case 2a, the map $t\mapsto z$ takes $t=0,1,d$ to either $z=0$ or $z=1$; and by examination, the coalesced point is not a critical value of the map; so the characteristic exponents of those three points are preserved, and must therefore be the same, as stated in the theorem. On the other hand, at $t=q/\alpha\beta$, which is mapped to $z=1$ or $z=0$, $R$ has, by examination, a double critical point, i.e., $z\sim{\rm const}+ C(t-q/\alpha\beta)^3$ for some nonzero $C$. So the characteristic exponents of $z=1$ or $z=0$, since $t=q/\alpha\beta$ is an ordinary point of the Heun equation and effectively has characteristic exponents $0,1$, are $0,1/3$. $R$, being a cubic polynomial, also has a double critical point at $t=\infty$, so the characteristic exponents of $z=\infty$ are one-third those of $t=\infty$, i.e., $\alpha/3,\beta/3$. It follows that in the equianharmonic case, the parameters $(a,b;c)$ of the resulting hypergeometric equation will be $(\alpha/3,\beta/3;2/3)$ or $\left(\alpha/3,\beta/3;(\alpha+\beta+1)/3\right)$.
A rational map $R:\mathbb{CP}^1\to\mathbb{CP}^1$ is said to map the characteristic exponents of the Heun equation (\[eq:Heun\]) to the characteristic exponents of the hypergeometric equation ($\mathfrak{h}$) if, for all $t_0\in\mathbb{CP}^1$, the exponents of $t=t_0$ according to the Heun equation, divided by the multiplicity of $t_0\mapsto z_0{\stackrel{\rm{def}}{=}}R(t_0)$, equal the exponents of $z=z_0$ according to the hypergeometric equation.
For example, if $t_0$ and $z_0$ are both finite, this says that if $z\sim
z_0+C(t-t_0)^k$ to leading order, for some nonzero $C$, then the exponents of $z=z_0$ must be those of $t=t_0$, divided by $k$. If $t=t_0$ is an ordinary point of the Heun equation, then the exponents of $z=z_0$ will be $0,1/k$. This implies that if $k>1$, $z=z_0$ must be one of the three singular points of the hypergeometric equation.
\[thm:basicprop\] A Heun equation of the form [(]{}\[eq:Heun\][)]{} can be reduced to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by a rational substitution $z=R(t)$ of its independent variable only if $R$ maps exponents to exponents.
Proposition \[thm:basicprop\], which was already used in Remark \[rem:jumpahead\] above, is a special case of a basic fact in the theory of the Riemann $P$-function: if a rational change of the independent variable transforms one Fuchsian equation to another, then the characteristic exponents are transformed multiplicatively. It can be proved by examining the effects of the change of variable on each local (Frobenius) solution. It also follows immediately from Lemma \[thm:newlemma\] below.
This lemma begins the study of [*sufficient*]{} conditions for the existence of a Heun-to-hypergeometric transformation. Finding them requires care, since an accessory parameter is involved. Performing the substitution $z=R(t)$ explicitly is useful. Substituting $z=R(t)$ into (\[eq:hyper\]) ‘pulls it back’ (cf. [@Kuiken79]) to $$\label{eq:substituted}
\frac{\d^2 y}{\d t^2} + \left\{ -\frac{\ddot R}{\dot R} + \frac{\dot
R}{R(1-R)} [c-(a+b+1)R]\right\} \frac{\d y}{\d t} - \frac{ab\dot
R^2}{R(1-R)}\,y = 0.$$ To save space here, $\d R/\!\d t,\d^2 R/\!\d t^2$ are written as $\dot
R,\ddot R$.
\[thm:newlemma\] The coefficient of the $\d y/\!\d t$ term in the pulled-back hypergeometric equation [(\[eq:substituted\])]{}, which may be denoted $W(t)$, equals the coefficient of the $\d u/\!\d t$ term in the Heun equation [(]{}\[eq:Heun\][)]{}, i.e., $\gamma/t+\delta/(t-1)+\epsilon/(t-d)$, if and only if $R$ maps exponents to exponents. That is, the transformation at least partly ‘works’ if and only if $R$ maps exponents to exponents.
This follows by elementary, if tedious, calculations. Suppose that $R$ maps $t=t_0$ to $z=z_0{\stackrel{\rm{def}}{=}}R(t_0)$ with multiplicity $k$; if $t_0$ and $z_0$ are finite, this means that to leading order $R(t)\sim z_0+C(t-t_0)^k$. By direct computation, the leading behavior of $W$ at $t=t_0$ is the following. In the case when $t_0$ is finite, $W(t)\sim (1-k)(t-t_0)^{-1}$ if $z_0\neq0,1,\infty$; $[1-k(1-c)](t-t_0)^{-1}$ if $z_0=0$; $[1-k(c-a-b)](t-t_0)^{-1}$ if $z_0=1$, and $[1-k(a+b)](t-t_0)^{-1}$ if $z_0=\infty$. In the case when $t_0=\infty$, $W(t)\sim (1+k)t^{-1}$ if $z_0\neq0,1,\infty$; $[1+k(1-c)]t^{-1}$ if $z_0=0$; $[1+k(c-a-b)]t^{-1}$ if $z_0=1$, and $[1+k(a+b)]t^{-1}$ if $z_0=\infty$.
This may be restated as follows. At $t=t_0$, for finite $t_0$, the leading behavior of $W$ is $W(t)\sim(1-k\eta)(t-t_0)^{-1}$, where $k$ is the multiplicity of $t_0\mapsto z_0{\stackrel{\rm{def}}{=}}R(t_0)$ and $\eta$ is the sum of the two characteristic exponents of the hypergeometric equation at $z=z_0$; if the coefficient $1-k\eta$ equals zero then $W$ has no pole at $t=t_0$. Likewise, the leading behavior of $W$ at $t=\infty$ is $W(t)\sim(1+k\eta)t^{-1}$, where $k$ is the multiplicity of $\infty\mapsto
z_0{\stackrel{\rm{def}}{=}}R(\infty)$ and $\eta$ is the sum of the two exponents at $z=z_0$; if the coefficient $1+k\eta$ equals zero then $W$ has a higher-order zero at $t=\infty$.
By the definition of ‘mapping exponents to exponents’, it follows that the leading behavior of $W$ at $t=t_0$, for all $t_0$ finite, is of the form $W(t)\sim(1-\eta')(t-t_0)^{-1}$, and also at $t_0=\infty$, is of the form $W(t)\sim(1+\eta')t^{-1}$, where in both cases $\eta'$ is the sum of the exponents of the Heun equation at $t=t_0$, iff $R$ maps exponents to exponents.
That is, the rational function $W$ has leading behavior $\gamma t^{-1}$ at $t=0$, $\delta(t-1)^{-1}$ at $t=1$, $\epsilon(t-d)^{-1}$ at $t=d$, $(1+\alpha+\beta)t^{-1}=(\gamma+\delta+\epsilon)t^{-1}$ at $t=\infty$, and is regular at all $t$ other than $0,1,d,\infty$, iff $R$ maps exponents to exponents.
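For the harmonic quadratic map the lemma can be confirmed symbolically: with $\gamma=\epsilon=c$ and $\delta=1-2(c-a-b)$, the exponent relations worked out in Remark \[rem:jumpahead\], the coefficient $W$ reduces exactly to the Heun form:

```python
import sympy as sp

t, a, b, c = sp.symbols('t a b c')
R = t*(2 - t)                       # harmonic quadratic map, d = 2
Rd, Rdd = sp.diff(R, t), sp.diff(R, t, 2)
# the d y/dt coefficient of the pulled-back hypergeometric equation:
W = -Rdd/Rd + Rd*(c - (a + b + 1)*R)/(R*(1 - R))

# mapping exponents to exponents forces gamma = eps = c, delta = 1 - 2(c-a-b)
gamma, eps = c, c
delta = 1 - 2*(c - a - b)
assert sp.simplify(W - (gamma/t + delta/(t - 1) + eps/(t - 2))) == 0
```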
The following two propositions characterize when the pulled-back hypergeometric equation (\[eq:substituted\]) is, in fact, the Heun equation (\[eq:Heun\]). The first deals with Heun equations that are trivial in the sense of Definition \[def:triviality\], and will be used in §\[sec:trivial\]. The second will be applied to prove Theorem \[thm:main\].
\[thm:trivialprop\] A Heun equation of the form [(]{}\[eq:Heun\][)]{}, which is trivial [(]{}i.e., $\alpha\beta=0$ and $q=0$[)]{}, will be reduced to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by a specified rational substitution $z=R(t)$ of its independent variable if and only if $R$ maps exponents to exponents.
The ‘if’ half is new, and requires proof. By Lemma \[thm:newlemma\], the coefficients of $\d y/\!\d t$ and $\d u/\!\d t$ agree iff $R$ maps exponents to exponents, so it suffices to determine whether the coefficients of $y$ and $u$ agree. But by triviality, the coefficient of $u$ in [(\[eq:Heun\])]{} is zero. Also, $t=\infty$ has zero as one of its exponents, so all points $t\in\mathbb{CP}^1$ have zero as an exponent. By the mapping of exponents to exponents, $z=\infty$ must also have zero as an exponent, i.e., $ab=0$. So the coefficient of $y$ in (\[eq:substituted\]) is also zero.
\[thm:mainproposition\] A Heun equation of the form [[(]{}\[eq:Heun\][)]{}]{}, which has four singular points and is nontrivial [(]{}i.e., $\alpha\beta\neq0$ or $q\neq0$[)]{}, will be reduced to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by a specified rational substitution $z=R(t)$ of its independent variable if and only if $R$ maps exponents to exponents, and moreover, $R$ is a polynomial, $\alpha\beta\neq0$, and one of the following two conditions on the normalized accessory parameter $p\equiv q/\alpha\beta$ is satisfied.
1. $p$ equals one of $0,1,d$. Call this point $d_0$, and the other two singular points $d_1$ and $d_2$. In this case, $d_0$ must be a double zero of $R$ or $S$, and each of $d_1,d_2$ must be a simple zero of $R$ or $S$.
2. $p$ does not equal any of $0,1,d$. In this case, each of $0,1,d$ must be a simple zero of $R$ or $S$, and $p$ must be a triple zero of either $R$ or $S$.
In both cases, $R$ and $S$ must have no additional simple zeroes or zeroes of order greater than two. Also, if $1-c$ [(]{}the nonzero exponent at $z=0$[)]{} does not equal $1/2$, then $R$ must have no additional double zeroes; and if $c-a-b$ [(]{}the nonzero exponent at $z=1$[)]{} does not equal $1/2$, then $S$ must have no additional double zeroes.
Moreover, in both cases no additional double zero, if any, must be mapped by $R$ to the point [(]{}out of $z=0,1$[)]{} to which $p$ is mapped. [(]{}So additional double zeroes, if any, must all be zeroes of $R$, or all be zeroes of $S$.[)]{}
Like Proposition \[thm:trivialprop\], this follows by comparing the pulled-back hypergeometric equation (\[eq:substituted\]) to the Heun equation (\[eq:Heun\]). By Lemma \[thm:newlemma\], the coefficients of $\d y/\!\d t$ and $\d u/\!\d t$ agree iff $R$ maps exponents to exponents, so it suffices to characterize when the coefficients of $y$ and $u$ agree.
The coefficient of $y$ in (\[eq:substituted\]) is to equal the coefficient of $u$ in (\[eq:Heun\]). It follows that $ab=0$ is possible iff $\alpha\beta=0$ and $q=0$, which is ruled out by nontriviality. So $ab\neq0$, and equality of the coefficients can hold iff $$\label{eq:cond}
U\equiv\frac{\d R/\!\d t}{R}\,\frac{\d S/\!\d t}{S} =
\frac{(\alpha\beta t - q)/ab}{t(t-1)(t-d)}
\equiv\frac{C_0}{t} + \frac{C_1}{t-1} + \frac{C_d}{t-d},$$ where $S\equiv1-R$, and at least two of $C_0,C_1,C_d\in\mathbb{C}$ are nonzero.
Both $R^{-1}\d R/\!\d t$ and $S^{-1}\d S/\!\d t$ are sums of terms of the form $n(t-\lambda)^{-1}$, where $n$ is a nonzero integer and $\lambda$ is a zero or a pole of $R$ or $S$. Poles are impossible, since $\lambda$ is a pole of $R$ iff $\lambda$ is a pole of $S$, and there are no double poles on the right-hand side of (\[eq:cond\]). So $R$ must be a polynomial.
By examining the definition of $U$ in terms of $R$ and $S$, one sees the following is true of any $\lambda\in\mathbb{C}$: if $R$ or $S$ has a simple zero at $t=\lambda$, then $U$ will have a simple pole at $t=\lambda$; if $R$ or $S$ has a double zero at $t=\lambda$, then $U$ will have an ordinary point (non-zero, non-pole) at $t=\lambda$, and if $R$ or $S$ has a zero of order $k>2$ at $t=\lambda$, then $U$ will have a zero of order $k-2$ at $t=\lambda$.
Most of what follows is devoted to proving the ‘only if’ half of the proposition in the light of these facts, by examining the consequences of the equality (\[eq:cond\]). In the final paragraph, the ‘if’ half will be proved.
There are exactly three ways in which the equality (\[eq:cond\]) can hold.
0. $\alpha\beta=0$, but due to nontriviality, $q\neq0$. $U$ has three simple poles on $\mathbb{C}$, at $t=0,1,d$. It has no other poles, and no zeroes. So each of $0,1,d$ must be a simple zero of either $R$ or $S$; also, $R$ and $S$ can have no other simple zeroes, and no zeroes of order $k>2$. Except for possible double zeroes, the zeroes of $R$ and $S$ are determined. The degree of $R$ must equal the number of zeroes of $R$, and also equal the number of zeroes of $S$, counting multiplicity. But irrespective of how many double zeroes are assigned to $R$ or $S$, either $R$ or $S$ will have an odd number of zeroes, and the other an even number, counting multiplicity. So case 0 cannot occur.
1. $\alpha\beta\neq0$ and $\alpha\beta t-q$ is a nonzero multiple of $t-d_0$, where $d_0=0$, $1$, or $d$, so exactly one of $C_0,C_1,C_d$ is zero. $U$ has two simple poles on $\mathbb{C}$, at $t=d_1,d_2$ (the two singular points other than $d_0$); it has no other poles, and no zeroes. So each of $d_1,d_2$ must be a simple zero of either $R$ or $S$; also, $R$ and $S$ can have no other simple zeroes, and no zeroes of order $k>2$. Since by assumption ($\mathfrak{H}$) has four singular points, each of $0,1,d$ must be a singular point, so the coefficient of $\d y/\!\d t$ in (\[eq:substituted\]) must have a pole at $t=d_0$, which implies that $R$ or $S$ must have a zero at $d_0$ of the only remaining type: a double zero.
2. $\alpha\beta\neq0$ but $\alpha\beta t-q$ is not a multiple of $t$, $t-1$, or $t-d$, so none of $C_0,C_1,C_d$ is zero. $U$ has three poles on $\mathbb{C}$, and exactly one zero, at $t=p\equiv q/\alpha\beta$, which is simple. So each of $0,1,d$ must be a simple zero of either $R$ or $S$, and $q/\alpha\beta$ must be a triple zero of either $R$ or $S$. Also, $R$ and $S$ can have no other simple zeroes, and no other zeroes of order $k>2$.
In cases 1,2, what remain to be determined are the (additional) double zeroes of $R$ and $S$, if any. That is, it must be determined if any [*ordinary*]{} point of the Heun equation can be mapped to $z=0$ or $z=1$ with double multiplicity. But by Proposition \[thm:basicprop\], $R$ can map an ordinary point $t=t_0$ to $z=0$ (resp. $z=1$) in this way only if the exponents of $z=0$ (resp. $z=1$) are $0,1/2$.
Suppose this occurs. In case 1, if the exponents of $t=p=d_0$ are denoted $0,\gamma_0$, the exponents of $R(p)$ will be $0,\gamma_0/2$, since $t=p$ will be mapped with double multiplicity to $z=R(p)$. So if $R(t_0)=R(p)$ then $\gamma_0$ must equal $1$, which, since $q=\alpha\beta
d_0$, is ruled out by the assumption that each of $0,1,d$, including $d_0$, is a singular point. It follows that in case 1, $R(t_0)\neq R(p)$. A related argument applies in case 2. In case 2, the point $p$ is an ordinary point of ($\mathfrak{H}$), and a double critical point of the $t\mapsto z$ map. So as a singular point of ($\mathfrak{h}$), $R(p)$ must have exponents $0,1/3$. It follows that $R(t_0)=R(p)$ is impossible.
The ‘only if’ half of the proposition has now been proved; the ‘if’ half remains. Just as (\[eq:cond\]) implies the stated conditions on $R$, so the stated conditions must be shown to imply (\[eq:cond\]). But the conditions on $R$ are equivalent to the left and right-hand sides having the same poles and zeroes, i.e., to their being the same up to a constant factor. To show the constant is unity, it is enough to consider the limit $t\to\infty$. If $\deg R=n$, then $R^{-1}\d R/\!\d t\sim n/t$ and $S^{-1}\d S/\!\d t\sim -n/t$, so $U$, i.e., the left-hand side, has asymptotic behavior $-n^2/t^2$. This will be the same as that of the right-hand side if $(\alpha\beta)/(ab)=n^2$. But $a=\alpha/n$ and $b=\beta/n$ follow from the assumption that $R$ maps exponents to exponents.
Finally, we can prove the main theorem, with the aid of the polynomial manipulation facilities of the [Macsyma]{} computer algebra system.
[Proof (of Theorem \[thm:main\])]{} By Proposition \[thm:mainproposition\], the preimages of $z=0,1$ under $R$ must include $t=0,1,d$, and in case 2, $t=p\equiv q/\alpha\beta$. They may also include $l$ (additional) double zeroes of $R$ or of $S$, which will be denoted $t=a_1,\dotsc,a_l$. Cases 1,2 of the theorem correspond to cases 1,2 of the proposition, and the subcases of the theorem correspond to distinct choices of $l$.
Necessarily $\deg R=|R^{-1}(0)|=|R^{-1}(1)|$, where the inverse images are defined as multisets rather than sets, to take multiplicity into account. This places tight constraints on $l$, since each of $0,1,d$ (and $p$, in case 2) may be assigned to either $R^{-1}(0)$ or $R^{-1}(1)$, but by the proposition, all of $a_1,\dotsc,a_l$ must be assigned, twice, to one or the other. In case 1, one of $0,1,d$ (denoted $d_0$ in the proposition) has multiplicity $2$, and the other two (denoted $d_1,d_2$) have multiplicity $1$. It follows that $0\leq l\leq 2$, with $\deg R=l+2$. In case 2, each of $0,1,d$ has multiplicity $1$, and $p$ has multiplicity $3$. It follows that $0\leq l\leq 3$, with $\deg R=l+3$. Subcases are as follows.
1. Case 1, $l=0$, $\deg R=2$. Necessarily $R^{-1}(0),R^{-1}(1)$ are $\{d_0,d_0\},\allowbreak\{d_1,d_2\}$, or vice versa. Without loss of generality (w.l.o.g.), assume the latter, and assume $d_0=1$. Then $R^{-1}(0)=\{0,d\}$ and $R^{-1}(1)=\{1,1\}$, i.e., $S^{-1}(0)=\{1,1\}$ and $S^{-1}(1)=\{0,d\}$. Since $t=1$ is a double zero of $S$, $S(t)=C(t-1)^2$ for some $C$. But $S(0)=1$, which implies $C=1$, and $S(d)=1$, which implies $d=2$. So $S(t)=(t-1)^2$ and $R(t)=t(2-t)$. Since $t=0,d$ are both mapped singly to the singular point $z=0$ by $R$, their exponents must be those of $z=0$, and hence must be identical.
2. Case 1, $l=1$, $\deg R=3$. Necessarily $R^{-1}(0),R^{-1}(1)$ are $\{d_0,d_0,d_1\},\allowbreak\{d_2,a_1,a_1\}$, or vice versa. W.l.o.g., assume the former, and also assume $d_0,d_1,d_2$ equal $1,d,0$, respectively. Then $R^{-1}(0)=\{1,1,d\}$ and $R^{-1}(1)=\{0,a_1,a_1\}$. It follows that $R(t)=(t-1)^2(1-t/d)$, where $d$ is determined by the condition that the critical point of $R$ other than $t=1$ (i.e., $t=a_1$) be mapped to $1$. Solving $\d R/\!\d t=0$ yields $a_1=(2d+1)/3$, and substitution into $R(a_1)-1=0$ yields $d=4$ or $-1/2$. But the latter is ruled out: it would imply $a_1=0$, which is impossible. So $d=4$ and $a_1=3$. Since $t=1,d$ are mapped to the singular point $z=0$, doubly and singly respectively, the exponents of $t=1$ must be twice those of $z=0$, and the exponents of $t=d$ must be the same as those of $z=0$.
3. Case 1, $l=2$, $\deg R=4$. Necessarily $R^{-1}(0)$ and $R^{-1}(1)$ are $\{d_0,d_0,d_1,d_2\}$ and $\{a_1,a_1,a_2,a_2\}$, or vice versa. W.l.o.g., assume the latter, and assume ${d_0=1}$. Then $R^{-1}(0)=\{a_1,a_1,a_2,a_2\}$ and $R^{-1}(1)=\{0,1,1,d\}$, i.e., $S^{-1}(0)=\{0,1,1,d\}$ and $S^{-1}(1)=\{a_1,a_1,a_2,a_2\}$. So $S(t)$ equals $At(t-1)^2(t-d)$ for some constant $A$, where $d$ is determined by the condition that $S$ must have two critical points other than $t=1$, i.e., $t=a_1,a_2$, which are mapped by $S$ to the same critical value (in fact, to $z=1$). Computation yields $\d S/\!\d t=A(t-1)\left[4t^2-(3d+2)t+d\right]$, so $a_1,a_2$ must be the roots of $4t^2-(3d+2)t+d$. If the corresponding critical values are $Aw_1,Aw_2$, then $w_1,w_2$ are the roots of the polynomial in $w$ obtained by eliminating $t$ between $w-S(t)/A$ and $4t^2-(3d+2)t+d$. Its discriminant turns out to be proportional to $(d-2)^2(9d^2-4d+4)^3$, so the criterion for equal values is that $d=2$ or $9d^2-4d+4=0$. But the latter can be ruled out, since by examination it would result in $a_1,a_2$ being equal. So $d=2$; $a_1,a_2=1\pm\sqrt2/2$; and $S(t)=At(t-1)^2(t-2)$ with $A=-4$, so that $S(a_i)=1$. Hence $R(t)=4\left[t(2-t)-\frac12\right]^2$. Since $t=0,d$ are mapped simply to $z=1$ and $t=1$ is mapped doubly, the exponents of $t=0,d$ must be the same, and half those of $t=1$.
4. Case 2, $l=0$, $\deg R=3$. Necessarily $R^{-1}(0),R^{-1}(1)$ are $\{p,p,p\},\allowbreak\{0,1,d\}$, or vice versa. W.l.o.g., assume the former. Then $R(t)=A(t-p)^3$ for some $A$. Since $t=0,1,d$ are to be mapped singly to $1$, they must be the vertices of an equilateral triangle, with mean $p$, so $A=-1/p^3$ and $R=(1-t/p)^3$. W.l.o.g., take $d=\frac12+\ri\frac{\sqrt3}2$, so $p=\frac12+\ri\frac{\sqrt3}6$. The exponents at $t=0,1,d$ must be equal, since they all equal the exponents at $z=1$.
5. Case 2, $l=1$, $\deg R=4$. Assume w.l.o.g. that $R^{-1}(0)=\{p,p,p,d\}$ and $R^{-1}(1)=\{0,1,a_1,a_1\}$, i.e., $S^{-1}(0)=\{0,1,a_1,a_1\}$ and $S^{-1}(1)=\{p,p,p,d\}$. It follows that $R(t)=(1-t/d)(1-t/p)^3$, but to determine $d$ and $p$, it is best to focus on $S$. Necessarily $S(t)=At(t-1)(t-a_1)^2$, and $p$ can be a triple zero of $R$ iff it is a double critical point of $S$ as well as $R$. The condition that $S$ have a double critical point determines $a_1$. $\d
S/\!\d t=A(t-a_1)\left[4t^2-(3+2a_1)t+a_1\right]$, so the polynomial $4t^2-(3+2a_1)t+a_1$ must have a double root. Its discriminant is $4a_1^2-4a_1+9$, which equals zero iff $a_1=\frac12\pm\ri\sqrt{2}$. The corresponding value of the double root, i.e., the mandatory value of $p$, is $\frac12\pm \ri\frac{\sqrt{2}}4$. The requirement that $S$ map $p$ to $1$ implies $A=1/\left[p(p-1)(p-a_1)^2\right]$. $d$ is determined as the root of $R=1-S$ other than $p$; some computation yields $\frac12\pm
\ri\frac{5\sqrt2}4$. W.l.o.g. the ‘$\pm$’ in the expressions for $p$ and $d$ can be replaced by ‘$+$’. Since $t=p$ is an ordinary point and $R$ maps $t=p$ triply to $z=0$, $z=0$ must have exponents $0,1/3$. Since $R$ maps $t=d$ simply to $z=0$, $t=d$ must also have exponents $0,1/3$. Similarly, since $R$ maps the ordinary point $t=a_1$ doubly to $z=1$, $z=1$ must have exponents $0,1/2$; so $t=0$ and $t=1$, which are mapped simply to $z=1$, must also.
6. Case 2, $l=2$, $\deg R=5$. Assume w.l.o.g. that $R^{-1}(0)=\{p,p,p,0,1\}$ and $R^{-1}(1)=\{d,a_1,a_1,a_2,a_2\}$. Then $R(t)=At(t-1)(t-p)^3$, where $p$ is determined by $R$ having two critical points other than $t=p$, i.e., $t=a_1,a_2$, which are mapped to the same critical value (i.e., to ${z=1}$). $\d R/\!\d
t=A(t-p)^2\left[5t^2-(2p+4)t+p\right]$, so $a_1,a_2$ must be the two roots of $5t^2-(2p+4)t+p$. If the corresponding critical values are $Aw_1,Aw_2$, then $w_1,w_2$ are the roots of the polynomial in $w$ obtained by eliminating $t$ between $w-R(t)/A$ and $5t^2-(2p+4)t+p$. Its discriminant turns out to be proportional to $(p^2-p+4)^3(27p^2-27p+8)^2$, so the criterion for equal values is that $p^2-p+4=0$ or $27p^2-27p+8=0$. But the former can be ruled out, since by examination it would result in $a_1,a_2$ being equal. The latter is true iff $p=\frac12\pm
\ri\frac{\sqrt{15}}{18}$. W.l.o.g., the plus sign may be used. This yields $a_1,a_2=\frac12 \pm \frac{2\sqrt3}9 + \ri\frac{\sqrt{15}}{90}$. From the condition $R(a_i)=1$, it follows that $A=-\ri\frac{2025\sqrt{15}}{64}$. $d$ is determined as the root of $R(t)-1$ other than $a_1,a_2$; computation yields $d=\frac12+\ri\frac{11\sqrt{15}}{90}$. Since $t=p$ is an ordinary point mapped triply to $z=0$, $z=0$ must have exponents $0,1/3$. Similarly, since $R$ maps the ordinary points $t=a_i$ to $z=1$, $z=1$ must have exponents $0,1/2$, so $t=d$, which is mapped singly to it, must also.
7. Case 2, $l=3$, $\deg R=6$. Necessarily $R^{-1}(0)$ and $R^{-1}(1)$ are $\{p,p,p,0,1,d\}$ and $\{a_1,a_1,a_2,a_2,a_3,a_3\}$, or vice versa. W.l.o.g., assume the latter. Then $R(t)=A(t-a_1)^2(t-a_2)^2(t-a_3)^2$ and $S(t)=Bt(t-1)(t-d)(t-p)^3$. Since $t=p$ is a triple zero of $S$, $R(t)\sim1-C(t-p)^3$ for some nonzero $C$. So $\sqrt{R(t)}$, defined to equal $+1$ at $t=p$, will have a similar Taylor series: $\sqrt{A}(t-a_1)(t-a_2)(t-a_3)\sim1-C(t-p)^3/2$. This is possible only if $a_1,a_2,a_3$ are the vertices of an equilateral triangle, and $p$ is their mean. It follows that the roots of $S$ other than $t=p$, i.e., $t=0,1,d$, are also the vertices of an equilateral triangle centered on $p$. W.l.o.g., choose $d=\frac12+\ri\frac{\sqrt3}2$ and $p=\frac12+\ri\frac{\sqrt3}6$. With a bit of algebra, $R$ can be rewritten in the form given in the theorem. Since $t=p$ is an ordinary point and $R$ maps it triply to $z=0$, $z=0$ must have exponents $0,1/3$. Since $R$ maps $t=0,1,d$ simply to $z=0$, $t=0,1,d$ must also have exponents $0,1/3$.
The theorem is proved.
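The polynomial eliminations carried out in the subcases above (performed with Macsyma in the original computations) can be replayed in any modern computer algebra system. The following SymPy sketch spot-checks the key values of $d$, $p$, $a_i$, and $A$ obtained in subcases 2, 3, 5, and 6:

```python
import sympy as sp

t, d = sp.symbols('t d')

# --- Subcase 2 (deg R = 3): R = (t-1)^2 (1 - t/d) has critical points
# t = 1 and t = (2d+1)/3; forcing R((2d+1)/3) = 1 gives d = 4 or d = -1/2.
R2 = (t - 1)**2*(1 - t/d)
crits = sp.solve(sp.diff(R2, t), t)
assert any(sp.simplify(c - (2*d + 1)/3) == 0 for c in crits)
dvals = sp.solve(sp.together(R2.subs(t, (2*d + 1)/3) - 1), d)
assert set(dvals) == {sp.Integer(4), sp.Rational(-1, 2)}

# --- Subcase 3 (deg R = 4): the w-discriminant factors as claimed, and for
# d = 2 the final maps satisfy R + S = 1 with S(a_i) = 1 at a_i = 1 +- sqrt(2)/2.
w = sp.symbols('w')
res = sp.resultant(w - t*(t - 1)**2*(t - d), 4*t**2 - (3*d + 2)*t + d, t)
disc = sp.discriminant(res, w)
ratio = sp.cancel(disc/((d - 2)**2*(9*d**2 - 4*d + 4)**3))
assert ratio.free_symbols == set()     # proportional: the ratio is a constant

S3 = -4*t*(t - 1)**2*(t - 2)
R3 = 4*(t*(2 - t) - sp.Rational(1, 2))**2
assert sp.expand(R3 + S3 - 1) == 0
assert sp.expand(S3.subs(t, 1 + sp.sqrt(2)/2) - 1) == 0

# --- Subcase 5 (deg R = 4): a1 = 1/2 + i sqrt(2) kills the discriminant
# 4 a1^2 - 4 a1 + 9; the double root is p = 1/2 + i sqrt(2)/4; and S maps
# d = 1/2 + i 5 sqrt(2)/4 to 1.
a1 = sp.Rational(1, 2) + sp.I*sp.sqrt(2)
assert sp.expand(4*a1**2 - 4*a1 + 9) == 0
p5 = sp.Rational(1, 2) + sp.I*sp.sqrt(2)/4
assert sp.expand((4*t**2 - (3 + 2*a1)*t + a1).subs(t, p5)) == 0
S5 = t*(t - 1)*(t - a1)**2/(p5*(p5 - 1)*(p5 - a1)**2)
d5 = sp.Rational(1, 2) + 5*sp.I*sp.sqrt(2)/4
assert abs(sp.N(S5.subs(t, d5) - 1)) < 1e-9

# --- Subcase 6 (deg R = 5): p solves 27p^2 - 27p + 8 = 0, and with
# A = -i 2025 sqrt(15)/64 the map R = A t(t-1)(t-p)^3 sends a1 and
# d = 1/2 + i 11 sqrt(15)/90 to 1.
p6 = sp.Rational(1, 2) + sp.I*sp.sqrt(15)/18
assert sp.expand(27*p6**2 - 27*p6 + 8) == 0
A6 = -sp.I*sp.Rational(2025, 64)*sp.sqrt(15)
R6 = A6*t*(t - 1)*(t - p6)**3
a6 = sp.Rational(1, 2) + 2*sp.sqrt(3)/9 + sp.I*sp.sqrt(15)/90
d6 = sp.Rational(1, 2) + 11*sp.I*sp.sqrt(15)/90
assert abs(sp.N(R6.subs(t, a6) - 1)) < 1e-9
assert abs(sp.N(R6.subs(t, d6) - 1)) < 1e-9
```

The numeric tolerances are used only where nested radicals make exact simplification slow; each quantity checked is in fact exactly zero.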
Now that the main theorem is proved, we can proceed to derive explicit Heun-to-hypergeometric reduction formulæ. Theorem \[thm:culmination\] gives a necessary condition for the existence of a reduction, and Theorem \[thm:useful0\] presents a representative list.
\[thm:culmination\] Suppose the Heun equation [(\[eq:Heun\])]{} has four singular points and is nontrivial [(]{}$\alpha\beta\neq0$ or $q\neq0$[)]{}. Then its local solution ${\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$ can be reduced to a hypergeometric function ${}_2F_1(a,b;c;z)$ by a formula of the type ${\mathop{{}\it Hl}\nolimits}(t)={}_2F_1(R(t))$, with $R$ a rational function, only if its parameters $d,q;\alpha,\beta$ satisfy $q=\alpha\beta p$, with $(d,p)$ equal to one of the following $23$ pairs. If a reduction of this type exists, $R(t)$ will be a polynomial of the stated degree. $$\begin{array}{lll}
{\rm(1a)}& [\deg R=2\hbox{ \rm{or} }4]. & (-1,0),\,(\frac12,\frac12),\,(2,1). \\
{\rm(1b)}& [\deg R=3]. &
(-3,0),\,(-\frac13,0),\,(\frac14,\frac14),\,(\frac34,\frac34),\,(\frac43,1),\,(4,1). \\
{\rm(2a)}& [\deg R=3\hbox{ \rm{or} }6]. &
(\frac12\pm\ri\frac{\sqrt{3}}2, \frac12\pm\ri\frac{\sqrt{3}}6). \\
{\rm(2b)}& [\deg R=4]. & (\frac12\pm\ri\frac{5\sqrt2}4,\frac12\pm\ri\frac{\sqrt2}4), \\
& & (\frac4{27}\pm\ri\frac{10\sqrt2}{27},\frac7{27}\pm\ri\frac{4\sqrt2}{27}),\,(\frac{23}{27}\pm\ri\frac{10\sqrt2}{27},\frac{20}{27}\pm\ri\frac{4\sqrt2}{27}). \\
{\rm(2c)}& [\deg R=5]. &
(\frac12\pm\ri\frac{11\sqrt{15}}{90},\frac12\pm\ri\frac{\sqrt{15}}{18}), \\
& & (\frac{135}{128}\pm\ri\frac{33\sqrt{15}}{128},\frac{95}{128}\pm\ri\frac{9\sqrt{15}}{128}),\,(-\frac{7}{128}\pm\ri\frac{33\sqrt{15}}{128},\frac{33}{128}\pm\ri\frac{9\sqrt{15}}{128}).
\end{array}$$
The five subcases are taken from Theorem \[thm:main\], with (1c),(2d) respectively subsumed into (1a),(2a). Theorem \[thm:main\] supplies only a single pair $(D,p_0)$ for each subcase. By the remarks preceding that theorem, $d$ may be any value on the cross-ratio orbit of $D$. There are up to $6$ distinct possibilities, which were given in (\[eq:firstlist\]). The value of $p$ associated to $d$ is the preimage of $p_0$ under the corresponding affine map $t\mapsto A_1(t)$, given in (\[eq:secondlist\]).
\[thm:useful0\] Suppose a Heun equation has four singular points and is nontrivial [(]{}$\alpha\beta\neq0$ or $q\neq0$[)]{}. Then the only reductions of its local Heun function ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$ that can be performed by a rational transformation of the independent variable involve polynomial transformations of degrees $2$, $3$, $4$, $5$, and $6$. There are seven distinct types, each of which can exist only if $d$ lies on an appropriate cross-ratio orbit. The following list includes a representative reduction of each type. The ones with real $d$ [(]{}and $\deg R=2,3,4$[)]{} include
$$\begin{aligned}
\label{eq:generalharmonic}
\qquad&{\mathop{{}\it Hl}\nolimits}\left(2,\,\alpha\beta;\,\alpha,\,\beta,\,\gamma,\,\alpha+\beta-2\gamma+1;\,t\right)\\
\qquad&\qquad={}_2F_1\left(\tfrac{\alpha}2,\,\tfrac{\beta}2;\,\gamma;\,t(2-t)\right),
\nonumber\\[\jot]
\qquad&{\mathop{{}\it Hl}\nolimits}\left(4,\,\alpha\beta;\,\alpha,\,\beta,\,\tfrac12,\,\tfrac{2(\alpha+\beta)}3;\,t\right)\\
\qquad&\qquad={}_2F_1\left(\tfrac{\alpha}3,\,\tfrac{\beta}3;\,\tfrac12;\,1-(t-1)^2(1-t/4)\right),
\nonumber\\[\jot]
\label{eq:specialharmonic}
\qquad&{\mathop{{}\it Hl}\nolimits}\left(2,\,\alpha\beta;\,\alpha,\,\beta,\,\tfrac{\alpha+\beta+2}4,\,\tfrac{\alpha+\beta}2;\,t\right)\\
\qquad&\qquad={}_2F_1\left(\tfrac{\alpha}4,\,\tfrac{\beta}4;\,\tfrac{\alpha+\beta+2}4;\,1-4\bigl[t(2-t)-\tfrac12\bigr]^2\right),
\nonumber\end{aligned}$$
and the ones with non-real $d$ [(]{}and $\deg R=3,4,5,6$[)]{} include
$$\begin{aligned}
\label{eq:generalequianharmonic}
\qquad&{\mathop{{}\it Hl}\nolimits}\left(\tfrac12\pm\ri\tfrac{\sqrt3}{2},\,\alpha\beta(\tfrac12\pm\ri\tfrac{\sqrt3}6);\,\alpha,\,\beta,\,\tfrac{\alpha+\beta+1}3,\,\tfrac{\alpha+\beta+1}3;\,t\right)\\
\qquad&\qquad={}_2F_1\left(\tfrac{\alpha}3,\,\tfrac{\beta}3;\,\tfrac{\alpha+\beta+1}3;\,1-\bigl[1-t/(\tfrac12\pm\ri\tfrac{\sqrt3}6)\bigr]^3\right),
\nonumber\\[\jot]
\qquad&{\mathop{{}\it Hl}\nolimits}\left(\tfrac12\pm\ri\tfrac{5\sqrt2}4,\,\alpha({\tfrac23}-\alpha)(\tfrac12\pm\ri\tfrac{\sqrt2}4);\,\alpha,\,{\tfrac23}-\alpha,\,\tfrac12,\,\tfrac12;\,t\right)\\
\qquad&\qquad={}_2F_1\Bigl(\tfrac{\alpha}4,\,{\tfrac16}-\tfrac{\alpha}4;\,\tfrac12;\,
\nonumber\\[-4pt]
\qquad&\qquad\qquad\qquad 1-\bigl[1-t/(\tfrac12\pm\ri\tfrac{5\sqrt2}4)\bigr]\bigl[1-t/(\tfrac12\pm\ri\tfrac{\sqrt2}4)\bigr]^3\Bigr),
\nonumber
\displaybreak[0]
\\[\jot]
\label{eq:neednorm}
\qquad&{\mathop{{}\it Hl}\nolimits}\left(\tfrac12\pm\ri\tfrac{11\sqrt{15}}{90},\,
\alpha({\tfrac56}-\alpha)(\tfrac12\pm\ri\tfrac{\sqrt{15}}{18});\,
\alpha,\,{\tfrac56}-\alpha,\,{\tfrac23},\,{\tfrac23};\,t\right)\\
\qquad&\qquad={}_2F_1\Bigl(\tfrac{\alpha}5,\,{\tfrac16}-\tfrac{\alpha}5;\,{\tfrac23};\,
\nonumber\\[-4pt]
\qquad&\qquad\qquad\qquad (\mp\ri\tfrac{2025\sqrt{15}}{64})\,t(t-1)\bigl[t-(\tfrac12\pm\ri\tfrac{\sqrt{15}}{18})\bigr]^3\Bigr),
\nonumber\\[\jot]
\label{eq:specialequianharmonic}
\qquad&{\mathop{{}\it Hl}\nolimits}\left(\tfrac12\pm\ri\tfrac{\sqrt3}2,\,\alpha(1-\alpha)(\tfrac12\pm\ri\tfrac{\sqrt3}6);\,\alpha,\,1-\alpha,\,{\tfrac23},\,{\tfrac23};\,t\right)\\
\qquad&\qquad={}_2F_1\left(\tfrac{\alpha}6,\,\tfrac16-\tfrac{\alpha}6;\,{\tfrac23};\,1-4\left\{\bigl[1-t/(\tfrac12\pm\ri\tfrac{\sqrt3}6)\bigr]^3-\tfrac12\right\}^2\right).
\nonumber\end{aligned}$$
In the preceding reductions, $\alpha,\beta,\gamma$ are free parameters. Each of these equalities holds in a neighborhood of $t=0$ whenever the two sides are defined, e.g., whenever the fifth argument of ${\mathop{{}\it Hl}\nolimits}$ and the third argument of ${}_2F_1$ are not equal to a nonpositive integer.
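Identities of this kind can be spot-checked by series. Assuming the Heun equation in the standard normal form $t(t-1)(t-d)\,y''+\left[\gamma(t-1)(t-d)+\delta t(t-d)+\epsilon t(t-1)\right]y'+(\alpha\beta t-q)\,y=0$ with $\epsilon=\alpha+\beta+1-\gamma-\delta$, the following SymPy sketch substitutes the right-hand side of (\[eq:generalharmonic\]) into the Heun operator with $d=2$, $q=\alpha\beta$ and verifies that the residual vanishes through the order of truncation:

```python
import sympy as sp

t, al, be, ga = sp.symbols('t alpha beta gamma')

def f21(a, b, c, z, N):
    # Truncated Gauss series: sum_{k<=N} (a)_k (b)_k / ((c)_k k!) z^k
    return sum(sp.rf(a, k)*sp.rf(b, k)/(sp.rf(c, k)*sp.factorial(k))*z**k
               for k in range(N + 1))

N = 6
y = sp.expand(f21(al/2, be/2, ga, t*(2 - t), N))  # RHS of the reduction

# Heun parameters for the general harmonic reduction: d = 2, q = alpha*beta,
# delta = alpha+beta-2*gamma+1, so epsilon = alpha+beta+1-gamma-delta = gamma
d, q = 2, al*be
de = al + be - 2*ga + 1
ep = al + be + 1 - ga - de

res = (t*(t - 1)*(t - d)*sp.diff(y, t, 2)
       + (ga*(t - 1)*(t - d) + de*t*(t - d) + ep*t*(t - 1))*sp.diff(y, t)
       + (al*be*t - q)*y)
res = sp.expand(res)

# The residual must vanish through order t^(N-1); higher orders are
# truncation error from cutting off the 2F1 series at z^N.
assert all(sp.cancel(sp.together(res.coeff(t, k))) == 0 for k in range(N))
```

The same loop, run with the parameters of any other entry in the list, checks the remaining formulæ.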
The equalities of Theorem \[thm:useful0\] hold even if the Heun equation has fewer than four singular points, or is trivial; but in either of those cases, additional reductions are possible. For the trivial case, see §\[sec:trivial\].
The special harmonic reduction (\[eq:specialharmonic\]) is composite: it can be obtained from the case $\gamma=(\alpha+\beta+2)/4$ of the reduction (\[eq:generalharmonic\]) by applying Gauss’s quadratic hypergeometric transformation [@Andrews99 §3.1] $$\label{eq:apply}
\quad{}_2F_1\bigl(a,\,b;\,(a+b+1)/2;\,z\bigr) = {}_2F_1\bigl(a/2,\,b/2;\,(a+b+1)/2;\,1-4(z-\tfrac12)^2\bigr)$$ to the right-hand side. The special equianharmonic reduction (\[eq:specialequianharmonic\]) can be obtained in the same way from the case $\beta=1-\alpha$ of the reduction (\[eq:generalequianharmonic\]).
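The transformation (\[eq:apply\]) itself is easy to confirm as a formal power-series identity; since $1-4(z-\frac12)^2=4z(1-z)$ vanishes at $z=0$, composing truncated series is legitimate:

```python
import sympy as sp

a, b, z = sp.symbols('a b z')

def f21(a_, b_, c_, z_, N):
    # Truncated Gauss series: sum_{k<=N} (a)_k (b)_k / ((c)_k k!) z^k
    return sum(sp.rf(a_, k)*sp.rf(b_, k)/(sp.rf(c_, k)*sp.factorial(k))*z_**k
               for k in range(N + 1))

c = (a + b + 1)/2
N = 5
lhs = f21(a, b, c, z, N)
rhs = f21(a/2, b/2, c, 1 - 4*(z - sp.Rational(1, 2))**2, N)

# The two series agree coefficient-by-coefficient through order z^N
diffser = sp.expand(lhs - rhs)
assert all(sp.cancel(sp.together(diffser.coeff(z, k))) == 0
           for k in range(N + 1))
```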
One might think that by applying (\[eq:apply\]) to the right-hand sides of the remaining reductions in (\[eq:generalharmonic\])–(\[eq:specialequianharmonic\]), additional composite reduction formulæ could be generated. However, there are only a few cases in which it can be applied; and when it can, it imposes conditions on the parameters of ${\mathop{{}\it Hl}\nolimits}$ which require that the corresponding Heun equation have fewer than four singular points.
${\mathop{{}\it Hl}\nolimits}$ and ${}_2F_1$ are the local solutions of their respective equations which belong to the exponent zero at $t=0$ (resp. $z=0$), and are regular and normalized to unity there. So the theorem follows readily from Theorem \[thm:main\]: (\[eq:generalharmonic\])–(\[eq:specialharmonic\]) come from subcases 1a–1c, and (\[eq:generalequianharmonic\])–(\[eq:specialequianharmonic\]) from subcases 2a–2d. In each subcase, the Gauss parameters $(a,b;c)$ of ${}_2F_1$ are computed by first calculating the exponents at $z=0,1,\infty$, in the way explained in Remark \[rem:jumpahead\]. In some subcases, the polynomial map supplied in Theorem \[thm:main\] must be chosen to be $S=1-R$ rather than $R$, due to the need to map $t=0$ to $z=0$ rather than to $z=1$, so that the transformation will reduce ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$, and not to another local solution of the hypergeometric equation.
The list of Heun-to-hypergeometric reductions given in Theorem \[thm:useful0\] is representative rather than exhaustive. For each subcase of Theorem \[thm:main\], there is one reduction for each allowed value of $d$. Each reduction on the above list came from choosing $d=D$, but any other $d$ on the cross-ratio orbit of $D$ may be chosen. The orbit is defined by $\triangle01d$ being one of the triangles (at most six) similar to $\triangle01D$, i.e., by $\triangle01D$ being obtained from $\triangle01d$ by an affine transformation $A_1\in{\mathcal
A}(\mathfrak{H})$. So for any subcase of Theorem \[thm:main\] and choice of $d$, the appropriate polynomial map will be $z=A_2(R_1(A_1(t)))$, where $A_1$ is constrained to map $\triangle01d$ to $\triangle01D$ and is listed in (\[eq:secondlist\]), $R_1$ is the polynomial map given in the subcase, and $A_2\in{\mathcal A}(\mathfrak{h})$, i.e., $A_2(z)=z$ or $1-z$, is chosen so that $t=0$ is mapped to $z=0$ rather than to $z=1$.
As an example, consider the harmonic subcase 1a of Theorem \[thm:main\], in which $D=2$, the cross-ratio orbit of $D$ is $\{-1,\frac12,2\}$, and the polynomial map is $R_1(t)=t(2-t)$. Choosing $d=D$ yields the reduction (\[eq:generalharmonic\]). Choosing $d=1-D=-1$ yields an alternative reduction of ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$, namely $$\label{eq:extrathing}
\quad{\mathop{{}\it Hl}\nolimits}\bigl(-1,\,0;\,\alpha,\,\beta,\,\gamma,\,(\alpha+\beta-\gamma+1)/2;\,t\bigr)
={}_2F_1\left(\alpha/2,\,\beta/2;\,(\gamma+1)/2;\,t^2\right),$$ in which $A_1(t)=1-t$ according to (\[eq:secondlist\]), and $A_2(z)=1-z$.
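The reduction (\[eq:extrathing\]) admits the same series spot-check, again assuming the standard normal form of the Heun equation; note that here $q=0$ and $\delta=\epsilon=(\alpha+\beta-\gamma+1)/2$:

```python
import sympy as sp

t, al, be, ga = sp.symbols('t alpha beta gamma')

def f21(a, b, c, z, N):
    # Truncated Gauss series: sum_{k<=N} (a)_k (b)_k / ((c)_k k!) z^k
    return sum(sp.rf(a, k)*sp.rf(b, k)/(sp.rf(c, k)*sp.factorial(k))*z**k
               for k in range(N + 1))

N = 5
y = sp.expand(f21(al/2, be/2, (ga + 1)/2, t**2, N))  # RHS, composed with z = t^2

d, q = -1, 0
de = (al + be - ga + 1)/2
ep = al + be + 1 - ga - de            # equals delta here

res = (t*(t - 1)*(t - d)*sp.diff(y, t, 2)
       + (ga*(t - 1)*(t - d) + de*t*(t - d) + ep*t*(t - 1))*sp.diff(y, t)
       + (al*be*t - q)*y)
res = sp.expand(res)

# Since z = t^2 is quadratic, the residual vanishes through order t^(2N)
assert all(sp.cancel(sp.together(res.coeff(t, k))) == 0
           for k in range(2*N + 1))
```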
It is not difficult to check that in all, exactly $28$ Heun-to-hypergeometric reductions can be derived from Theorem \[thm:main\]. They exhibit the $23$ values of the pair $(d,q/\alpha\beta)$ listed in Theorem \[thm:culmination\]. Of the $28$, eleven were given in Theorem \[thm:useful0\], and (\[eq:extrathing\]) is a twelfth. With the exception of the two reductions with $(d,q/\alpha\beta)=(-1,0)$, one of which is (\[eq:extrathing\]), the $28$ split into pairs, each pair being related by the identity (\[eq:newguy1\]), which takes $d$ to $1/d$.
A Generalization
================
In applied mathematics, it is seldom the case that the four singular points of an equation of Heun type are located at $0,1,d,\infty$. But the main theorem, Theorem \[thm:main\], may readily be generalized. Consider the situation when three of the four have zero as a characteristic exponent, since this may always be arranged by applying an F-homotopy. There are two cases of interest: either the singular points include $\infty$ and each of the finite singular points has zero as a characteristic exponent; or the location of the singular points is unrestricted. The latter includes the former. They have respective $P$-symbols
$$\label{eq:generalizedP}
P\left\{
\begin{array}{ccccc}
d_1 & d_2 & d_3 & \infty & \\
0 & 0 & 0 & \alpha & ;s \\
1-\gamma & 1-\delta & 1-\epsilon & \beta &
\end{array}
\right\},
\qquad
P\left\{
\begin{array}{ccccc}
d_1 & d_2 & d_3 & d_4 & \\
0 & 0 & 0 & \alpha & ;s \\
1-\gamma & 1-\delta & 1-\epsilon & \beta &
\end{array}
\right\}.$$
In the nomenclature of Ref. [@Ronveaux95], they are canonical cases of the natural general-form and general-form Heun equations. They are transformed to the Heun equation ($\mathfrak{H}$) by the affine and Möbius transformations $$\label{eq:respmaps}
t=\frac{s-d_1}{d_2-d_1},
\qquad
t=\frac{(s-d_1)(d_2-d_4)}{(d_2-d_1)(s-d_4)},$$ respectively. Each of the $P$-symbols (\[eq:generalizedP\]) is accompanied by an accessory parameter. The equation specified by (\[eq:generalizedP\]a) can be written as $$\label{eq:genHeun}
\quad\frac{\d^2 u}{\d s^2}
+ \left( \frac\gamma{s-d_1} + \frac\delta{s-d_2} + \frac\epsilon{s-d_3}
\right)\frac{\d u}{\d s} + \frac{\alpha\beta s - q'}{(s-d_1)(s-d_2)(s-d_3)}\,u = 0,$$ where $q'$ is the accessory parameter [@Ronveaux95]. The equation specified by (\[eq:generalizedP\]b) with $d_4\neq\infty$ can be written as $$\begin{gathered}
\label{eq:gen2Heun}
\frac{\d^2 u}{\d s^2}
+ \left( \frac\gamma{s-d_1} + \frac\delta{s-d_2} + \frac\epsilon{s-d_3}
+ \frac{1-\alpha-\beta}{s-d_4}
\right)\frac{\d u}{\d s} \\
\qquad\qquad{}+ \frac{\alpha\beta\left.\left[{\displaystyle\prod_{i=1}^3(d_4-d_i)}\right]\right/(s-d_4) - q''}
{(s-d_1)(s-d_2)(s-d_3)(s-d_4)}\,u = 0,\end{gathered}$$ where $q''$ is the accessory parameter [@Ronveaux95].
The impediment to the generalization of Theorem \[thm:main\] to these two equations is the specification of the cases that should be excluded due to their being ‘trivial’, or having fewer than four singular points. The excluded cases should really be specified not in terms of the [*ad hoc*]{} parameters $q'$ and $q''$, but rather in an invariant way, in terms of an accessory parameter defined so as to be invariant under affine or Möbius transformations, respectively. Schäfke [@Schafke83] has defined new accessory parameters of second-order Fuchsian equations on $\mathbb{CP}^1$ that are invariant under affine transformations, but no extension to general Möbius transformations seems to have been developed.
In the absence of an invariantly defined accessory parameter, an [*ad hoc*]{} approach will be followed. It is clear that (\[eq:genHeun\]) is trivial, i.e., can be transformed to a trivial Heun equation by an affine transformation, iff $\alpha\beta=0$, $q'=0$. Also, it will have fewer than four singular points if $\gamma=0$, $q'=\alpha\beta d_1$; or $\delta=0$, $q'=\alpha\beta d_2$; or $\epsilon=0$, $q'=\alpha\beta d_3$. Likewise, it is fairly clear that (\[eq:gen2Heun\]) will be trivial, i.e., can be transformed to a trivial Heun equation by a Möbius transformation, iff $\alpha\beta=0$, $q''=0$. The conditions on the parameters for there to be a full set of singular points are, however, more complicated.
The first generalization of Theorem \[thm:main\] is Corollary \[thm:gen1\], which follows from Theorem \[thm:main\] by applying the affine transformation (\[eq:respmaps\]a). It mentions a polynomial transformation, which is the composition of the $s\mapsto t$ affine transformation with the $t\mapsto z$ polynomial map of Theorem \[thm:main\]. To avoid repetition, Corollary \[thm:gen1\] simply cites Theorem \[thm:main\] for the necessary and sufficient conditions on the exponent parameters and the accessory parameter.
\[thm:gen1\] A natural general-form Heun equation of the canonical type [(\[eq:genHeun\])]{}, which has four singular points and is nontrivial [(]{}i.e., $\alpha\beta\neq0$ or $q'\neq0$[)]{}, can be reduced to a hypergeometric equation of the form [[(]{}\[eq:hyper\][)]{}]{} by a rational substitution $z=R(s)$ iff $\alpha\beta\neq0$, $R$ is a polynomial, and the Heun equation satisfies the following conditions.
[(i)]{} $\triangle d_1d_2d_3$ must be similar to $\triangle 01D$, with $D$ equal to one of the five values $2$, $\frac12+\ri\tfrac{\sqrt3}2$, $4$, $\frac12+\ri\tfrac{5\sqrt2}4$, and $\frac12+\ri\frac{11\sqrt{15}}{90}$. That is, it must either be a degenerate triangle consisting of three equally spaced collinear points [(]{}the harmonic case[)]{}, or be an equilateral triangle [(]{}the equianharmonic case[)]{}, or be similar to one of three other specified triangles, of which one is degenerate and two are isosceles. [(ii)]{} The exponent parameters $\gamma,\delta,\epsilon$ must satisfy conditions that follow from the corresponding subcases of Theorem [\[thm:main\]]{}. [(iii)]{} The parameter $q'$ must take a value that can be computed uniquely from the parameters $\gamma,\delta,\epsilon$ and the choice of subcase.
In the harmonic case, the two endpoints of the degenerate triangle of singular points $\triangle d_1d_2d_3$ must have equal exponent parameters, and $q'$ must equal $\alpha\beta$ times the intermediate point. In this case, $R$ will typically be a quadratic polynomial. There are two possibilities: $R$ will map the two endpoints to $z=0$ and the intermediate point to $z=1$, or vice versa. If the characteristic exponents of the intermediate point are twice those of the endpoints, then $R$ may be quartic instead: the composition of either possible quadratic polynomial with a subsequent ${z\mapsto 4(z-\frac12)^2}$ or $z\mapsto 4z(1-z)$ map.
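The composition structure just described can be checked directly in the harmonic case: composing the quadratic map $t\mapsto t(2-t)$ with $z\mapsto4(z-\frac12)^2$ reproduces the quartic map $4\left[t(2-t)-\frac12\right]^2$ of subcase 1c, while the alternative postcomposition $z\mapsto4z(1-z)$ yields its complement:

```python
import sympy as sp

t, z = sp.symbols('t z')
quad = t*(2 - t)                        # harmonic degree-2 map, d = 2
post = 4*(z - sp.Rational(1, 2))**2     # postcomposition z -> 4(z - 1/2)^2

# Composing gives the quartic map of subcase 1c, R(t) = 4[t(2-t) - 1/2]^2
quartic = sp.expand(post.subs(z, quad))
assert quartic == sp.expand(4*(t*(2 - t) - sp.Rational(1, 2))**2)

# The alternative z -> 4z(1-z) produces the complementary map 1 - R(t)
alt = sp.expand((4*z*(1 - z)).subs(z, quad))
assert sp.expand(alt + quartic - 1) == 0
```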
In the equianharmonic case, all three exponent parameters $\gamma,\delta,\epsilon$ must be equal, and the accessory parameter $q'$ must equal $\alpha\beta$ times the mean of $d_1,d_2,d_3$. In this case, $R$ will typically be a cubic polynomial. There are two possibilities: $R$ will map $d_1,d_2,d_3$ to $z=0$ and their mean to $z=1$, or vice versa. If the exponent parameters $\gamma,\delta,\epsilon$ equal $2/3$, then $R$ may be sextic instead: the composition of either possible cubic polynomial with a subsequent $z\mapsto 4(z-\frac12)^2$ or $z\mapsto
4z(1-z)$ map.
The further generalization of Theorem \[thm:main\] is Corollary \[thm:gen2\], which follows from Theorem \[thm:main\] by applying the Möbius transformation (\[eq:respmaps\]b). It mentions a rational substitution, which is the composition of the $s\mapsto t$ Möbius transformation with the $t\mapsto z$ polynomial map of Theorem \[thm:main\].
\[thm:gen2\] A general-form Heun equation of the canonical type [(\[eq:gen2Heun\])]{}, which has four singular points and is nontrivial [(]{}i.e., $\alpha\beta\neq0$ or $q''\neq0$[)]{}, can be reduced to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by a rational substitution $z=R(s)$ iff $\alpha\beta\neq0$, and the Heun equation satisfies the following conditions.
[(i)]{} The cross-ratio orbit of $\{d_1,d_2,d_3,d_4\}$ must be that of $\{0,1,D,\infty\}$, where $D$ is one of the five values enumerated above. That is, it must be the harmonic orbit, the equianharmonic orbit, or one of three specified generic orbits, one real and two non-real. [(ii)]{} The exponent parameters $\gamma,\delta,\epsilon$ must satisfy conditions that follow from the corresponding subcases of Theorem [\[thm:main\]]{}. [(iii)]{} The parameter $q''$ must take a value that can be computed uniquely from the parameters $\gamma,\delta,\epsilon$ and the choice of subcase.
Suppose $d_1,d_2,d_3,d_4$ form a harmonic quadruple, i.e., can be mapped by a Möbius transformation to the vertices of a square in $\mathbb{C}$. Moreover, suppose two of $d_1,d_2,d_3$ have the same characteristic exponents, and are mapped to diagonally opposite vertices of the square. That is, of the three parameters $\gamma,\delta,\epsilon$, the two corresponding to a diagonally opposite pair must be equal. Then provided $q''$ takes a value that can be computed from the other parameters, a substitution $R$ will exist. It will typically be a degree-$2$ rational function, the only critical points of which are the third singular point (out of $d_1,d_2,d_3$) and $d_4$. Either $R$ will map the two distinguished singular points to $z=1$ and the third singular point to $z=0$, or vice versa; and $d_4$ to $z=\infty$. In the special case when the exponents of the third point are twice those of the two distinguished points, it is possible for $R$ to be a degree-$4$ rational function.
\[example:2\] Suppose $d_1,d_2,d_3,d_4$ form an equianharmonic quadruple, i.e., can be mapped by a Möbius transformation to the vertices of a regular tetrahedron in $\mathbb{CP}^1$. Moreover, suppose $d_1,d_2,d_3$ have the same characteristic exponents, i.e., $\gamma=\delta=\epsilon$. Then provided $q''$ takes a value uniquely determined by the other parameters, a substitution $R$ will exist. Typically, $R$ will be a degree-$3$ rational function, the only critical points of which are the mean of $d_1,d_2,d_3$ with respect to $d_4$, and $d_4$. Either $R$ will map $d_1,d_2,d_3$ to $z=1$ and the mean of $d_1,d_2,d_3$ with respect to $d_4$ to $z=0$, or vice versa; and $d_4$ to $z=\infty$. In the special case when the exponents of each of $d_1,d_2,d_3$ equal $0,1/3$, it is possible for $R$ to be a degree-$6$ rational function.
In Example \[example:2\], the concept of the mean of three points in $\mathbb{CP}^1$ with respect to a distinct fourth point was used. A projectively invariant definition is the following. If $T$ is a Möbius transformation that takes $d_4$ ($\neq\nobreak d_1,d_2,d_3$) to the point at infinity, the mean of $d_1,d_2,d_3$ with respect to $d_4$ is the point that would be mapped to the mean of $Td_1,Td_2,Td_3$ by $T$.
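This definition is readily implemented; the helper name `mean_wrt` below is ours, introduced only for illustration:

```python
import sympy as sp

def mean_wrt(d1, d2, d3, d4):
    # Send d4 to infinity by T(s) = 1/(s - d4), take the ordinary mean of
    # the three images, and pull back with the inverse map s = d4 + 1/w.
    w = sp.Rational(1, 3)*(1/(sp.S(d1) - d4) + 1/(sp.S(d2) - d4)
                           + 1/(sp.S(d3) - d4))
    return sp.simplify(d4 + 1/w)

# Example: the mean of 0, 1, 2 with respect to d4 = 10
assert mean_wrt(0, 1, 2, 10) == sp.Rational(130, 121)

# As d4 recedes to infinity, this tends to the ordinary arithmetic mean
d4 = sp.symbols('d4')
assert sp.limit(mean_wrt(0, 1, 2, d4), d4, sp.oo) == 1
```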
The Clarkson–Olver Transformation {#sec:CO}
=================================
The reduction discovered by Clarkson and Olver [@Clarkson96], which stimulated these investigations, turns out to be a special case of the equianharmonic Heun-to-hypergeometric reduction of §\[sec:main\]. Their reduction was originally given in a rather complicated form, which we shall simplify.
Recall that the Weierstrass function $\wp(u)\equiv\wp(u;g_2,g_3)$ with invariants $g_2,g_3\in\mathbb{C}$, which cannot both equal zero, has a double pole at $u=0$ and satisfies $$\begin{aligned}
\label{eq:Wode}
{\wp'}^2 &= 4\wp^3 - g_2\wp - g_3\\
&= 4(\wp - e_1)(\wp - e_2)(\wp - e_3).\nonumber\end{aligned}$$ Here $e_1,e_2,e_3$, the zeroes of the defining cubic polynomial, are the finite critical values of $\wp$, the sum of which is zero; they are required to be distinct. $\wp$ is doubly periodic on $\mathbb{C}$, with periods denoted $2\omega,2\omega'$. So it can be viewed as a function on the torus $\mathbb{T}{\stackrel{\rm{def}}{=}}\mathbb{C}/{\mathcal L}$, where ${\mathcal
L}=2\omega\mathbb{Z}\oplus2\omega'\mathbb{Z}$ is the period lattice. It turns out that the half-lattice $\{0,
\omega,\omega',\omega+\omega'\}+{\mathcal L}$ comprises the critical points of $\wp$. The map $\wp:\mathbb{T}\to\mathbb{CP}^1$ is a double branched cover of the Riemann sphere, but $\mathbb{T}$ is uniquely coordinatized by the pair $(\wp,\wp')$.
The modular discriminant $\Delta{\stackrel{\rm{def}}{=}}g_2^3-27g_3^2\neq0$ is familiar from elliptic function theory. If $g_2,g_3\in\mathbb{R}$ and ${\Delta>0}$ (the so-called real rectangular case, which predominates in applications), $\omega,\omega'$ can be chosen to be real and imaginary, respectively. If $\Delta<0$ (the less familiar real rhombic case), they can be chosen to be complex conjugates, so that the third critical point $\omega_2{\stackrel{\rm{def}}{=}}\omega+\omega'$ is real.
Clarkson and Olver considered the Weierstrass-form Lamé equation $$\label{eq:Lame}
\frac{\d^2\psi}{\d u^2} - \left[\ell(\ell+1)\wp(u) + B\right]\psi = 0,$$ which is a Fuchsian equation on $\mathbb{T}$ with exactly one singular point (at $(\wp,\wp')=(\infty,\infty)$) and a single accessory parameter, $B$. \[We have altered their exponent parameter $-36\sigma$ to $\ell(\ell+1)$, to agree with the literature, and have added the accessory parameter.\] In particular, they considered the case $g_2=0$, $g_3\neq0$, $B=0$. They mapped $u\in\mathbb{T}$ to $z\in\mathbb{CP}^1$ by the formal substitution $$\label{eq:COsubst}
u = \frac{\ri}{(16g_3)^{1/6}}
\int^{(1-z)^{1/3}}\frac{\d\tau}{\sqrt{1-\tau^3}},$$ and showed that the Lamé equation is reduced to $$\label{eq:hyperCO}
z(1-z)\frac{\d^2\psi}{\d z^2} + \left(\frac12 - \frac76 z\right)
\frac{\d\psi}{\d z}
+\frac{\ell(\ell+1)}{36}\,\psi = 0.$$ This is a hypergeometric equation with $(a,b;c)=\left(-\ell/6,(\ell+1)/6;1/2\right)$. It has exponents $0,1/2$ at $z=0$; $0,1/3$ at $z=1$; and $-\ell/6,(\ell+1)/6$ at $z=\infty$.
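As a quick consistency check (ours, not part of the original argument), the stated Gauss parameters can be matched symbolically against the coefficients of (\[eq:hyperCO\]), using the standard form $z(1-z)\psi''+[c-(a+b+1)z]\psi'-ab\,\psi=0$:

```python
import sympy as sp

ell = sp.symbols('ell')
a, b, c = -ell/6, (ell + 1)/6, sp.Rational(1, 2)

# Standard hypergeometric form: z(1-z) psi'' + [c - (a+b+1) z] psi' - a*b psi = 0.
# Matching against z(1-z) psi'' + (1/2 - (7/6) z) psi' + ell(ell+1)/36 psi = 0:
check_first_order = sp.simplify(a + b + 1 - sp.Rational(7, 6))   # z-coefficient in psi' term
check_zeroth_order = sp.simplify(-a*b - ell*(ell + 1)/36)        # psi coefficient
assert check_first_order == 0 and check_zeroth_order == 0
```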
In elliptic function theory the case $g_2=0$, $g_3\neq0$ is called equianharmonic, since the corresponding critical values $e_1,e_2,e_3$ are the vertices of an equilateral triangle in $\mathbb{C}$. If, for example, $g_3\in\mathbb{R}$, then $\Delta<0$; and by convention, $e_1,e_2,e_3$ correspond to $\omega,\omega_2,\omega'$, respectively. $e_1$ and $e_3$ are complex conjugates, and $e_2$ is real. The triangle $\triangle0\omega_2\omega'$ is also equilateral [@Abramowitz65 §18.13].
So, what Clarkson and Olver considered was the [*equianharmonic*]{} Lamé equation, the natural domain of definition of which is a torus $\mathbb T$ (i.e., a complex elliptic curve) with special symmetries. For the Lamé equation (\[eq:Lame\]) to be viewed as a Heun equation on $\mathbb{CP}^1$, it must be transformed by $s=\wp(u)$ to its algebraic form [@Hille76]. The algebraic form is $$\label{eq:algLame}
\quad\frac{\d^2 \psi}{\d s^2}
+ \left( \frac{1/2}{s-e_1} + \frac{1/2}{s-e_2} + \frac{1/2}{s-e_3}
\right)\frac{\d\psi}{\d s} + \frac{[-\ell(\ell+1)/4]s-B/4}{(s-e_1)(s-e_2)(s-e_3)}\,\psi = 0.$$ This is a special case of (\[eq:genHeun\]), the canonical version of the natural general-form Heun equation, with distinct finite singular points $d_1,d_2,d_3 = e_1,e_2,e_3$. Also, $\alpha,\beta=-\ell/2,(\ell+1)/2$, $\gamma=\delta=\epsilon=1/2$, and $q'=B/4$. It has characteristic exponents $0,1/2$ at $s=e_1,e_2,e_3$, and $-\ell/2,(\ell+1)/2$ at $s=\infty$.
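The first-order coefficient here is simply half the logarithmic derivative of ${\wp'}^2=4\prod_i(s-e_i)$, which is how the partial fractions arise from the chain rule $\psi_{uu}={\wp'}^2\psi_{ss}+\wp''\psi_s$ with $\wp''=\tfrac12\,\d({\wp'}^2)/\d s$. This identity is easily spot-checked symbolically (an illustrative check of ours):

```python
import sympy as sp

s, e1, e2, e3 = sp.symbols('s e1 e2 e3')

p = 4*(s - e1)*(s - e2)*(s - e3)       # wp'^2 expressed in s = wp(u)
wp2 = sp.diff(p, s)/2                  # wp'' = (1/2) d(wp'^2)/ds

# First-order coefficient of the algebraic form = wp''/wp'^2 = (log p)'/2:
lhs = wp2/p
rhs = sum(sp.Rational(1, 2)/(s - e) for e in (e1, e2, e3))
residual = sp.simplify(sp.together(lhs - rhs))
assert residual == 0
```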
Applying Corollary \[thm:gen1\] to (\[eq:algLame\]) yields the following.
The algebraic-form Lamé equation [(\[eq:algLame\])]{}, in the equianharmonic case $g_2=0$, $g_3\neq0$, can be reduced when $\ell(\ell+1)\neq0$ to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by a rational transformation $z=R(s)$ iff the accessory parameter $B$ equals zero. If this is the case, $R$ will necessarily be a cubic polynomial; $$z=4s^3/g_3,\qquad z=1-4s^3/g_3$$ will both work, and they are the only possibilities.
If $\ell(\ell+1)\neq0$, the Heun equation (\[eq:algLame\]) is nontrivial in the sense of Definition \[def:triviality\], with four singular points; by (\[eq:Wode\]), the $e_i$ are the cube roots of $g_3/4$, and are the vertices of an equilateral triangle. Since $\gamma=\delta=\epsilon$, the equianharmonic case of Corollary \[thm:gen1\] applies, and no other.
The mean of $e_1,e_2,e_3$ is zero. So the polynomial $4s^3/g_3$ is the cubic polynomial that maps each singular point to $1$, and their mean to zero; $1-4s^3/g_3$ does the reverse. These are the only possibilities for the map $s\mapsto z$, since the sextic polynomials mentioned in the equianharmonic case of Corollary \[thm:gen1\] can be employed only if $\gamma,\delta,\epsilon$ equal $2/3$, which is not the case here.
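For the substitution $z=4s^3/g_3$ the resulting hypergeometric equation has $(a,b;c)=(-\ell/6,(\ell+1)/6;2/3)$, i.e., (\[eq:hyperCO\]) with the singular points $z=0,1$ exchanged. The whole transformation can be verified symbolically; in this sketch (ours) the placeholders F0, F1, F2 stand for $F(z),F'(z),F''(z)$:

```python
import sympy as sp

s, g3, ell, F0, F1, F2 = sp.symbols('s g3 ell F0 F1 F2')

z = 4*s**3/g3
zp, zpp = sp.diff(z, s), sp.diff(z, s, 2)
p = 4*s**3 - g3                  # equals 4(s - e1)(s - e2)(s - e3) when g2 = 0

# Algebraic Lame equation (B = 0) multiplied through by p, with psi(s) = F(z);
# by the chain rule, psi'' = zp^2 F'' + zpp F':
lame = p*(zp**2*F2 + zpp*F1) + (sp.diff(p, s)/2)*zp*F1 - ell*(ell + 1)*s*F0

# Hypergeometric equation with (a,b;c) = (-ell/6, (ell+1)/6; 2/3), evaluated at z:
hyp = z*(1 - z)*F2 + (sp.Rational(2, 3) - sp.Rational(7, 6)*z)*F1 + ell*(ell + 1)*F0/36

# The transformed Lame equation is -36*s times the hypergeometric one:
residual = sp.simplify(lame + 36*s*hyp)
assert residual == 0
```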
[*Remark*]{}. Corollary \[thm:gen1\] (equianharmonic case) also applies to the equianharmonic algebraic-form Lamé equation with $\ell(\ell+1)=0$, $B\neq0$, and guarantees it cannot be transformed to the hypergeometric equation by any rational substitution; since in the sense used above, this too is a nontrivial Heun equation.
\[thm:cor\] The Weierstrass-form Lamé equation [(\[eq:Lame\])]{}, in the equianharmonic case $g_2=0$, $g_3\neq0$, can be reduced when $\ell(\ell+1)\neq0$ to a hypergeometric equation of the form [[(]{}\[eq:hyper\][)]{}]{} by a substitution of the form $z=R\left(\wp(u)\right)$, where $R$ is rational, iff the accessory parameter $B$ equals zero. In this case, $$\label{eq:Wsubsts}
z=4\wp(u)^3/g_3,\qquad z=1-4\wp(u)^3/g_3$$ will both work, and they are the only such substitutions.
Applying the substitution $z=1-4\wp(u)^3/g_3$ to the Lamé equation (\[eq:Lame\]) reduces it to the hypergeometric equation (\[eq:hyperCO\]), as is readily verified. (The other substitution $z=4\wp(u)^3/g_3$ yields a closely related hypergeometric equation, with the singular points $z=0,1$ interchanged.) The Clarkson–Olver substitution formula (\[eq:COsubst\]) contains a multivalued elliptic integral, but it may be inverted with the aid of (\[eq:Wode\]) to yield $z=1-4\wp(u)^3/g_3$. So their transformation fits into the framework of Corollary \[thm:cor\].
The most noteworthy feature of the Clarkson–Olver transformation is that it can be performed irrespective of the choice of exponent parameter $\ell$. Only the accessory parameter $B$ is restricted. As they remark, when $\ell=1$, $1/2$, $1/4$, or $1/10$, it is a classical result of Schwarz that all solutions of the hypergeometric equation (\[eq:hyperCO\]) are necessarily algebraic [@Hille76 §10.3]. This implies that if $B=0$, the same is true of all solutions of the algebraic Lamé equation (\[eq:algLame\]); which had previously been proved by Baldassarri [@Baldassarri81], using rather different techniques. But irrespective of the choice of $\ell$, the solutions of the $B=0$ Lamé equation reduce to solutions of the hypergeometric equation. This is quite unlike the other known classes of exact solutions of the Lamé equation, which restrict $\ell$ to take values in a discrete set [@Morales99 §2.8.4]. But it is typical of hypergeometric reductions of the Heun equation. As the theorems of $\S\,\ref{sec:main}$ make clear, in general it is possible to alter characteristic exponents continuously, without affecting the existence of a reduction to the hypergeometric equation.
It should be mentioned that the harmonic as well as the equianharmonic case of Corollary \[thm:gen1\] can be applied to the algebraic-form Lamé equation. One of the resulting quadratic transformations was recently rediscovered by Ivanov [@Ivanov2001], in a heavily disguised form. The case of quadratic rather than cubic changes of the independent variable will be considered elsewhere.
The Seemingly Trivial Case $\alpha\beta=0$, $q=0$ {#sec:trivial}
=================================================
If the Heun equation (\[eq:Heun\]) is trivial in the sense of Definition \[def:triviality\], it may be solved by quadratures. A basis of solutions is $$\quad u_1(t) = 1,\quad\,\,
u_2(t) = \int^t\exp\left[-\int^v\left(\frac\gamma w+\frac\delta{w-1} +
\frac\epsilon{w-d}\right)\,\d w\right]\,\d v.$$ In the trivial limit, the local Heun function ${\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$ degenerates to the former, and the solution belonging to the exponent $1-\gamma$ at $t=0$, denoted $\widetilde{\mathop{{}\it Hl}\nolimits}(d,q;\alpha,\beta,\gamma,\delta;t)$ here, to the latter. In applications, explicit solutions, if any, are what matter. It is nonetheless interesting to examine under what circumstances a trivial Heun equation can be reduced to a hypergeometric equation. This question was first considered by Kuiken [@Kuiken79].
The canonical polynomial substitutions of §\[sec:main\] give rise to many [*nonpolynomial*]{} rational reductions of trivial Heun equations to hypergeometric equations, by composing with certain Möbius transformations. To understand why, recall that Theorem \[thm:main\] characterized, up to affine automorphisms of the two equations, the polynomial substitutions that can reduce a nontrivial Heun equation to a hypergeometric equation. If $t\mapsto R_1(t)$ denotes a canonical polynomial substitution, the full set of polynomial substitutions derived from it comprises all $t\mapsto A_2\left(R_1(A_1(t))\right)$, where $A_1\in{\mathcal A}(\mathfrak{H})$ is an affine automorphism of the Heun equation, which maps $\{0,1,d\}$ onto $\{0,1,D\}$, and $A_2\in{\mathcal A}(\mathfrak{h})$ is an affine automorphism of the hypergeometric equation, which maps $\{0,1\}$ onto $\{0,1\}$. (The only two possibilities are $A_2(z)=z$ and $A_2(z)=1-z$.)
In the context of [*nontrivial*]{} Heun equations, Möbius automorphisms that are not affine could not be employed; essentially because, as discussed in §\[subsec:auto\], moving the point at infinity would require a compensating F-homotopy. But in the trivial case no such issue arises: by Proposition \[thm:trivialprop\], the Heun equation is reduced to a hypergeometric equation by a rational substitution of its independent variable, $z=R(t)$, iff the substitution maps exponents to exponents. And Möbius transformations that are not affine certainly preserve exponents.
\[thm:trivial\] A Heun equation of the form [(]{}\[eq:Heun\][)]{}, which has four singular points and is trivial [(]{}i.e., $\alpha\beta=0$ and $q=0$[)]{}, can be reduced to a hypergeometric equation of the form [(]{}\[eq:hyper\][)]{} by any rational substitution of the form $z=M_2\left(R_1(M_1(t))\right)$, where $z=R_1(t)$ is a polynomial that maps $\{0,1,D\}$ to $\{0,1\}$, listed [(]{}along with $D$[)]{} in one of the seven subcases of Theorem [\[thm:main\]]{}, and where $M_1\in{\mathcal M}(\mathfrak{H})$ and $M_2\in{\mathcal M}(\mathfrak{h})$. That is, $M_1$ maps $\{0,1,d,\infty\}$ onto $\{0,1,D,\infty\}$, and $M_2$ maps $\{0,1,\infty\}$ onto $\{0,1,\infty\}$. The necessary conditions on characteristic exponents stated in Theorem [\[thm:main\]]{} must be satisfied, the conditions on exponents at specified values of $t$ being taken to refer to the exponents at the preimages of these points under $M_1$.
As in the derivation of the ${\mathop{{}\it Hl}\nolimits}(t)={}_2F_1(R(t))$ reduction formulæ listed in Theorem \[thm:useful0\], the Gauss parameters $(a,b;c)$ of the resulting hypergeometric equation can be computed by first calculating the exponents at $z=R(t)=0,1,\infty$, using the mapping of exponents to exponents.
The following example shows how such nonpolynomial rational substitutions are constructed. In the harmonic subcase 1a of Theorem \[thm:main\], $D=2$ and the polynomial transformation is $t\mapsto z=R_1(t)=t(2-t)$; the necessary condition on exponents is that $t=0,d$ have identical exponents. Consider $d=-1$, which is on the cross-ratio orbit of $D$. $M_1(t)=(t-1)/t$ can be chosen; also, let $M_2(z)=1/z$. Then the composition $$z=R(t)\equiv M_2\left(R_1(M_1(t))\right)= t^2/(t^2-1)$$ maps $t=0$ to $z=0$ and $t=\infty$ to $z=1$ (both with double multiplicity), and $t=1,d$ to $z=\infty$. This substitution may be applied to any trivial Heun equation with $d=-1$ and identical exponents at $t=1,d$, i.e., with ${\delta=\epsilon}$.
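The composition can be checked mechanically (an illustrative sympy sketch):

```python
import sympy as sp

t = sp.symbols('t')
M1 = (t - 1)/t                 # sends t = 0, 1, -1, oo to oo, 0, 2, 1
R1 = M1*(2 - M1)               # subcase 1a polynomial w(2 - w), applied to w = M1(t)
R = 1/R1                       # post-compose with M2(z) = 1/z
residual = sp.simplify(R - t**2/(t**2 - 1))
assert residual == 0
```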
In this example, ${M}_1,{M}_2$ were selected with foresight, to ensure that $R$ maps $t=0$ to $z=0$. This makes it possible to regard the substitution as a reduction of ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$, or of $\widetilde{\mathop{{}\it Hl}\nolimits}$ to $\widetilde{{}_2F_1}$. By calculation of exponents, the reduction is $$\begin{aligned}
\label{eq:explicitred}
&\widetilde{\mathop{{}\it Hl}\nolimits}\left(-1,\,0;\,0,\,\beta,\,\gamma,(1+\beta+\gamma)/2;\,t\right)\\
&\qquad=(-1)^{(\gamma-1)/2}\,
\widetilde{{}_2F_1}\left(0,\,(1-\beta+\gamma)/2;\,(1+\gamma)/2;\,t^2/(t^2-1)\right).\nonumber\end{aligned}$$ The normalization factor $(-1)^{(\gamma-1)/2}$ is present because by convention $\widetilde{\mathop{{}\it Hl}\nolimits}(t)\sim t^{1-\gamma}$ and $\widetilde{{}_2F_1}(z)\sim z^{1-c}$ in a neighborhood of $t=0$ (resp. $z=0$), where the principal branches are meant. The corresponding reduction of ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$ is trivially valid (both sides are constant functions of $t$, and equal unity).
Working out the number of rational substitutions $z=R(t)$ that may be applied to trivial Heun equations, where $R$ is of the form $M_2\circ R_1\circ M_1$, is a useful exercise. There are seven subcases of Theorem \[thm:main\], i.e., choices for the polynomial $R_1$. Each subcase allows $d$ to be chosen from an orbit consisting of $m$ cross-ratio values: $m=3$ in the harmonic subcases 1a and 1c, $m=2$ in the equianharmonic subcases 2a and 2d, and $m=6$ in the others. In any subcase, the $4!$ choices for $M_1$ are divided equally among the $m$ values of $d$, and there are also $3!$ choices for $M_2$. So each subcase yields $(4!/m)3!$ rational substitutions for each value of $d$, but not all are distinct.
To count [*distinct*]{} rational substitutions for each value of $d$, note the following. $R$ will map $t=0,1,d,\infty$ to $z=0,1,\infty$. Each of the subcases of Theorem \[thm:main\] has a ‘signature’, specifying the cardinalities of the inverse images of the points $0,1,\infty$. For example, case 1a has signature $2;1;1$, which means that of those three points, one has two preimages and the other two have one. (Order here is not significant.) In all, subcases 1a,1b,2b,2c have signature $2;1;1$, and the others have signature $3;1;0$. By inspection, the number of distinct mappings of $t=0,1,d,\infty$ to $z=0,1,\infty$ consistent with the signature $2;1;1$ is $36$, and the number consistent with $3;1;0$ is $18$.
Kuiken [@Kuiken79] supplies a useful list of the $36$ rational substitutions arising from the harmonic subcase 1a, but states incorrectly that they are the only rational substitutions that may be applied to a trivial Heun equation. Actually, subcases 1a–1c and 2a–2d give rise to $36,36,18;\allowbreak18,36,36,18$ rational substitutions, respectively. By dividing by $m$, it follows that for each subcase, the number of distinct rational substitutions per value of $d$ is $12,6,6;\allowbreak9,6,6,9$. Of these, exactly one-third map $t=0$ to $z=0$, rather than to $z=1$ or $z=\infty$, and consequently yield reductions of ${\mathop{{}\it Hl}\nolimits}$ to ${}_2F_1$, or of $\widetilde{\mathop{{}\it Hl}\nolimits}$ to $\widetilde{{}_2F_1}$. So for each subcase, the number of such reductions per value of $d$ is $4,2,2;\allowbreak3,2,2,3$.
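Both the signature count and the division arithmetic can be reproduced mechanically; in this check (ours) the $3;1;0$ total of $18$ is taken from the inspection above rather than recomputed:

```python
from itertools import product

# Maps from the four special points {0,1,d,oo} to {0,1,oo} whose sorted
# preimage cardinalities match the given signature:
def count_maps(signature):
    return sum(sorted((f.count(v) for v in range(3)), reverse=True) == signature
               for f in product(range(3), repeat=4))

n_211 = count_maps([2, 1, 1])            # signature 2;1;1 gives 36

# Per-subcase totals (36 for signature 2;1;1; 18 for 3;1;0, by the inspection
# in the text) and cross-ratio orbit sizes m, for subcases 1a,1b,1c,2a,2b,2c,2d:
totals = [36, 36, 18, 18, 36, 36, 18]
orbits = [3, 6, 3, 2, 6, 6, 2]
per_d = [tot//m for tot, m in zip(totals, orbits)]
reductions = [n//3 for n in per_d]       # the third that map t=0 to z=0
```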
For example, the four reductions with $d=-1$ that arise from the harmonic subcase 1a are
$$\begin{aligned}
\label{eq:firstred}
&\widetilde{\mathop{{}\it Hl}\nolimits}\left(-1,\,0;\,0,\,\beta,\,\gamma,\,(1+\beta-\gamma)/2;\,t\right)\\
&\qquad=\widetilde{{}_2F_1}\left(0,\,\beta/2;\,(1+\gamma)/2;\,t^2\right)\nonumber\\
\label{eq:secondred}
&\widetilde{\mathop{{}\it Hl}\nolimits}\left(-1,\,0;\,0,\,\beta,\,\gamma,(1+\beta+\gamma)/2;\,t\right)\\
&\qquad=(-1)^{(\gamma-1)/2}\,\widetilde{{}_2F_1}\left(0,\,(1-\beta+\gamma)/2;\,(1+\gamma)/2;\,t^2/(t^2-1)\right)\nonumber
\displaybreak[0]
\\
\label{eq:newred1}
&\widetilde{\mathop{{}\it Hl}\nolimits}\left(-1,\,0;\,0,\,\beta,\,1-\beta,\,\delta;\,t\right)\\
&\qquad=4^{-\beta}\widetilde{{}_2F_1}\left(0,\,(1-2\beta+\delta)/2;\,1-\beta;\,4t/(t+1)^2\right)\nonumber\\
\label{eq:newred2}
&\widetilde{\mathop{{}\it Hl}\nolimits}\left(-1,\,0;\,0,\,\beta,\,1-\beta,\,\delta,\,t\right)\\
&\qquad=(-4)^{-\beta}\,\widetilde{{}_2F_1}\left(0,\,(1-\delta)/2;\,1-\beta;\,-4t/(t-1)^2\right)\nonumber\end{aligned}$$
The reduction (\[eq:firstred\]), which is the only one of the four in which the degree-$2$ rational function $R$ is a polynomial, is simply the trivial (i.e., $\alpha=0$) case of the quadratic reduction (\[eq:extrathing\]). The reduction (\[eq:secondred\]) was derived above as (\[eq:explicitred\]), but (\[eq:newred1\]) and (\[eq:newred2\]) are new. They are related by composition with $z\mapsto z/(z-1)$, i.e., by the involution in ${\mathcal M}(\mathfrak{h})$ that interchanges $z=1$ and $z=\infty$.
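The reduction (\[eq:firstred\]) is easy to test numerically: the left side is the exponent-$(1-\gamma)$ quadrature solution above (with $d=-1$ and $\epsilon=\delta=(1+\beta-\gamma)/2$ from Fuchs's relation), normalized so that $\widetilde{\mathop{{}\it Hl}\nolimits}(t)\sim t^{1-\gamma}$; the right side unpacks, with the convention $\widetilde{{}_2F_1}(z)=z^{1-c}\,{}_2F_1(a-c+1,b-c+1;2-c;z)$, to $t^{1-\gamma}\,{}_2F_1((1-\gamma)/2,\delta;(3-\gamma)/2;t^2)$. A numerical sketch (the parameter values are arbitrary choices of ours):

```python
from mpmath import mp, mpf, quad, hyp2f1

mp.dps = 30
beta, gamma = mpf('0.3'), mpf('0.4')
delta = (1 + beta - gamma)/2      # Fuchs relation makes epsilon = delta here
t = mpf('0.2')

# Left side: exponent-(1-gamma) solution of the trivial Heun equation with
# d = -1, by quadrature, normalized so that Hl-tilde ~ t^(1-gamma):
lhs = (1 - gamma)*quad(lambda v: v**(-gamma)*(1 - v)**(-delta)*(1 + v)**(-delta),
                       [0, t])

# Right side, after unpacking the 2F1-tilde:
rhs = t**(1 - gamma)*hyp2f1((1 - gamma)/2, delta, (3 - gamma)/2, t**2)
```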
Remarkably, many rational reductions of trivial Heun equations to the hypergeometric equation are [*not*]{} derived from the polynomial reductions of Theorem \[thm:main\]. The following curious degree-$4$ reduction is an example. The rational function $$z=Q(t){\stackrel{\rm{def}}{=}}1-\left(\frac{t-1-\ri}{t-1+\ri}\right)^4
=\frac{8\ri\,t(t-1)(t-2)}{(t-1+\ri)^4}$$ takes $t=0,1,d\equiv2,\infty$ to $z=0$; and $t=1\pm\ri$ to $z=1,\infty$ (both with quadruple multiplicity). By Proposition \[thm:trivialprop\], a trivial Heun equation with $d=2$ will be reduced by $Q$ to a hypergeometric equation iff $Q$ maps exponents to exponents. This constrains the singular points $t=0,1,d,\infty$ to have the same exponents; which by Fuchs’s relation (\[eq:Pconstraint\]) is possible only if each has exponents $0,1/2$; which must also be the exponents of $z=0$. Also, since $t=1\pm\ri$ are ordinary points of the Heun equation, with exponents $0,1$, the exponents of the hypergeometric equation at $z=1,\infty$ must be $0,1/4$. It follows that on the level of solutions, the reduction is $$\label{eq:tiring}
\widetilde{\mathop{{}\it Hl}\nolimits}(2,\,0;\,0,\,\tfrac12,\,\tfrac12,\,\tfrac12;\,t)
=(\ri/4)^{1/2}\,\widetilde{{}_2F_1}\left( 0,\,{\tfrac14};\,\tfrac12;\,
\frac{8\ri\,t(t-1)(t-2)}{(t-1+\ri)^4}\right),$$ where the normalization factor $(\ri/4)^{1/2}$ follows from the known behavior of the functions $\widetilde{\mathop{{}\it Hl}\nolimits}(t)$ and $\widetilde{{}_2F_1}(z)$ as $t\to0$ and $z\to0$.
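Both the algebraic identity for $Q$ and its mapping of the four singular points to $z=0$ can be confirmed symbolically (an illustrative check):

```python
import sympy as sp

t = sp.symbols('t')
i = sp.I
Q = 1 - ((t - 1 - i)/(t - 1 + i))**4

# The stated rational-function identity for Q:
residual = sp.simplify(Q - 8*i*t*(t - 1)*(t - 2)/(t - 1 + i)**4)
assert residual == 0

# Q sends all four singular points t = 0, 1, 2, infinity to z = 0:
values = [sp.simplify(Q.subs(t, v)) for v in (0, 1, 2)] + [sp.limit(Q, t, sp.oo)]
assert values == [0, 0, 0, 0]
```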
Each of the preceding rational reductions of a trivial $\widetilde{\mathop{{}\it Hl}\nolimits}$ to a $\widetilde{{}_2F_1}$ can be converted to a rational reduction of a [*nontrivial*]{} ${\mathop{{}\it Hl}\nolimits}$ to a ${}_2F_1$, by using the definitions (\[eq:tilde1\]),(\[eq:newguy2\]) of $\widetilde{{\mathop{{}\it Hl}\nolimits}},\widetilde{{}_2F_1}$. For example, (\[eq:tiring\]) implies $$\begin{aligned}
\label{eq:finalreduction}
&{\mathop{{}\it Hl}\nolimits}(2,\,\tfrac34;\,\tfrac12,\,1,\,\tfrac32,\,\tfrac12;\,t)\\
&\quad=
(1-t)^{1/2}(1-t/2)^{1/2}\left[1-t/(1-\ri)\right]^{-2}
{}_2F_1\left(
\tfrac12,\,\tfrac34;\,\tfrac32;\,
\frac{8\ri t(t-1)(t-2)}{(t-1+\ri)^4}
\right).\nonumber\end{aligned}$$ The equality (\[eq:finalreduction\]) holds in a neighborhood of $t=0$ (both sides are real when $t$ is real and sufficiently small). This reduction is not related to the previously derived harmonic reduction (\[eq:generalharmonic\]), in which $d=2$ also. The pair $(d,q/\alpha\beta)$ here equals $(2,\frac32)$, which is not listed in Theorem \[thm:culmination\].
The formula (\[eq:finalreduction\]) is a reduction of a nontrivial ${\mathop{{}\it Hl}\nolimits}$ to a ${}_2F_1$, but of a more general type than has been considered in this paper. The underlying reduction of the Heun equation (\[eq:Heun\]) to the hypergeometric equation (\[eq:hyper\]) includes a linear change of the [*dependent*]{} variable, resembling a complicated F-homotopy, in addition to a rational change of the independent variable.
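The identity (\[eq:finalreduction\]) can also be tested numerically. The sketch below (ours) implements the local Heun function by the standard three-term recurrence for its Frobenius series at $t=0$; note that the $\left[1-t/(1-\ri)\right]^{-2}$ factor is an integer power and the square roots are of positive reals, so there is no branch ambiguity for small real $t$:

```python
from mpmath import mp, mpf, mpc, hyp2f1, sqrt

mp.dps = 30

def heun_local(d, q, alpha, beta, gamma, delta, t, nmax=150):
    """Hl(d,q;alpha,beta,gamma,delta;t) via the three-term recurrence for the
    Frobenius series at t = 0; converges for |t| < min(1, |d|)."""
    eps = alpha + beta + 1 - gamma - delta        # Fuchs relation
    c_prev, c_cur = mpf(0), mpf(1)                # c_{-1}, c_0
    total, tn = mpf(1), mpf(1)
    for n in range(nmax):
        R = d*(n + 1)*(n + gamma)
        Q = n*((n - 1 + gamma)*(1 + d) + d*delta + eps)
        P = (n - 1 + alpha)*(n - 1 + beta)
        c_next = ((Q + q)*c_cur - P*c_prev)/R
        tn *= t
        total += c_next*tn
        c_prev, c_cur = c_cur, c_next
    return total

t = mpf('0.1')
lhs = heun_local(2, mpf(3)/4, mpf(1)/2, mpf(1), mpf(3)/2, mpf(1)/2, t)

i = mpc(0, 1)
z = 8*i*t*(t - 1)*(t - 2)/(t - 1 + i)**4
rhs = (sqrt(1 - t)*sqrt(1 - t/2)*(1 - t/(1 - i))**(-2)
       * hyp2f1(mpf(1)/2, mpf(3)/4, mpf(3)/2, z))
```

The right-hand side comes out real to working precision, as the text asserts for small real $t$.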
The author gratefully acknowledges the hospitality of the Texas Institute for Computational and Applied Mathematics (TICAM).
[10]{}
M. Abramowitz, I. A. Stegun (Eds.), Handbook of Mathematical Functions, Dover, New York, 1965.

G. E. Andrews, R. Askey, R. Roy, Special Functions, Vol. 71 of Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge, UK, 1999.

F. M. Arscott, The land beyond Bessel: A survey of higher special functions, in: Ordinary and Partial Differential Equations: Proceedings of the Sixth Dundee Conference, no. 846 in Lecture Notes in Mathematics, Springer-Verlag, 1980, pp. 26–45.

A. W. Babister, Transcendental Functions Satisfying Nonhomogeneous Linear Differential Equations, Macmillan, New York, 1967.

F. Baldassarri, On algebraic solutions of Lamé's differential equation, J. Differential Equations 41 (1) (1981) 44–58.

P. A. Clarkson, P. J. Olver, Symmetry and the Chazy equation, J. Differential Equations 124 (1) (1996) 225–246.

R. V. Craster, V. H. Hoàng, Applications of Fuchsian differential equations to free boundary problems, Proc. Roy. Soc. London Ser. A 454 (1972) (1998) 1241–1252.

A. Debosscher, A unification of one-dimensional Fokker–Planck equations beyond hypergeometrics: Factorization solution method and eigenvalue schemes, Phys. Rev. E 57 (1) (1998) 252–275.

B. Dwork, On Kummer's twenty-four solutions of the hypergeometric differential equation, Trans. Amer. Math. Soc. 285 (2) (1984) 497–521.

A. Erdélyi (Ed.), Higher Transcendental Functions, McGraw–Hill, New York, 1953–55; also known as The Bateman Manuscript Project.

H. Exton, Solutions of Heun's equation, Bull. Soc. Math. Belg. Sér. B 45 (1) (1993) 49–57.

L. C. Grove, C. T. Benson, Finite Reflection Groups, 2nd Edition, Springer-Verlag, New York/Berlin, 1985.

A. J. Guttmann, T. Prellberg, Staircase polygons, elliptic integrals, Heun functions, and lattice Green functions, Phys. Rev. E 47 (4) (1993) R2233–R2236.

K. Heun, Zur Theorie der Riemann'schen Functionen zweiter Ordnung mit vier Verzweigungspunkten, Math. Ann. 33 (1889) 161–179.

E. Hille, Ordinary Differential Equations in the Complex Domain, Wiley, New York, 1976.

P. Ivanov, On Lamé's equation of a particular kind, J. Phys. A 34 (39) (2001) 8145–8150; available as arXiv:math-ph/0008008.

G. S. Joyce, On the cubic lattice Green functions, Proc. Roy. Soc. London Ser. A 445 (1924) (1994) 463–477.

K. Kuiken, Heun's equation and the hypergeometric equation, SIAM J. Math. Anal. 10 (3) (1979) 655–657.

R. S. Maier, Algebraic solutions of the Lamé equation, revisited, J. Differential Equations 198 (1) (2004) 16–34; available as arXiv:math.CA/0206285.

J. J. Morales Ruiz, Differential Galois Theory and Non-Integrability of Hamiltonian Systems, Birkhäuser, Boston/Basel, 1999.

E. G. C. Poole, Linear Differential Equations, Oxford University Press, Oxford, 1936.

A. Ronveaux (Ed.), Heun's Differential Equations, Oxford University Press, Oxford, 1995.

F. W. Schäfke, Zur (konfluenten) Fuchsschen Differentialgleichungen 2. Ordnung, Analysis 3 (1–4) (1983) 101–122.

R. Schäfke, D. Schmidt, The connection problem for general linear ordinary differential equations at two regular singular points with applications to the theory of special functions, SIAM J. Math. Anal. 11 (5) (1980) 848–862.

F. Schmitz, B. Fleck, On the propagation of linear 3-D hydrodynamic waves in plane non-isothermal atmospheres, Astron. Astrophys. Suppl. Ser. 106 (1) (1994) 129–139.

C. Snow, Hypergeometric and Legendre Functions with Applications to Integral Equations of Potential Theory, 2nd Edition, no. 19 in Applied Mathematics Series, National Bureau of Standards, Washington, DC, 1952.
---
abstract: 'In a random laser (RL), a system possessing in itself both resonator and amplifying medium while lacking a macroscopic cavity, the feedback is provided by the scattering, which forces light to travel across very long random paths. Here we demonstrate that RL properties may be tuned by the topology of the scattering system while keeping the scattering strength and gain efficiency unchanged. This is possible in a system based on sparse clusters, possessing two relevant structural lengths: the macroscopic inter-cluster separation and the mesoscopic intra-cluster mean free path.'
author:
- Marco Leonetti
- Cefe Lopez
bibliography:
- 'lpr-demo.bib'
title: 'Random lasing in structures with multi-scale transport properties'
---
The understanding of the interaction of light with complex systems is of paramount importance for tailoring optical properties and for realizing the future light-driven photonic chip. Many techniques allow the fabrication of nano-sized photonic structures, such as direct laser writing [@Fischer:11], ion beam milling [@0960-1317-11-4-301], or electron beam lithography [@E_beam_lit], but in the last decades the self-assembly of micron-sized building blocks has proved to be among the most cost-effective strategies for light science [@ADMA:ADMA201000356]. This approach allows the fabrication of large-scale and cheap ordered and disordered photonic structures such as photonic crystals [@1464-4258-8-5-R01; @PhysRevA.83.023801] and photonic glasses [@ADMA:ADMA200900827].
The expertise in the fabrication of such structures now allows not only tailoring light propagation through Bloch modes in an ordered structure, but also engineering the regime of light diffusion [@ADFM:ADFM200902008; @Barthelemy2008] when disorder is prominent, because the material's properties define the propagation characteristics. The overall scattering strength, measured by the transport mean free path $\ell$, depends both on the size and on the spatial distribution of the scatterers, and may vary by many orders of magnitude, from colloidal structures to intergalactic dust [@1074123].
If optical amplification is introduced in a disordered structure, the balance between gain and losses defines a lasing threshold that, when surpassed, switches on a RL [@Wiersma_Rew]. The RL is the first lasing material possessing in itself both resonator and amplifying medium while lacking a macroscopic cavity; the feedback is provided by the scattering, which confines the light in the amplifying volume for very long times, forcing it onto very long paths. Optimization of such RLs has focused, up to now, on various aspects such as the gain efficiency [@Leonetti:09], the scattering strength [@PhysRevA.74.053812], and the nature of the building blocks [@Gottardo2008]. On the other hand, the RL is a very complex phenomenon, because both local (resonances hosted by microcavities of the structure [@PhysRevLett.82.2278]) and extended (the overall scattering efficiency [@PhysRevA.80.013833; @Fallert_coexistence_nature; @PhysRevLett.98.143901]) properties of the system concur in the emission process, while the role of the interplay between local and extended properties has been investigated only recently.
![The graph reports the A$_0$ parameter (squares, errors derived from the fit root mean squared error) and the average density (open circles) measured for samples with different ethanol concentration. Error bars for the density have been obtained by statistical averaging over measurements of 10 different portions, taken at different positions in five different samples with the same concentration. The absence of a bar indicates an error smaller than the marker size.[]{data-label="Fig1"}](Figure_1.eps){width="82.5"}
Here we demonstrate that an improvement of disordered lasing materials may be achieved in a system composed of clusters of titanium dioxide lying sparsely in a bath of dye-doped solution, which thus possesses two relevant lengths: the macroscopic inter-cluster separation and the mesoscopic cluster size. It is possible to tune RL properties by changing the topology of the scattering system while retaining unchanged extensive properties: the scatterer density and the gain efficiency.
In practical terms, the cluster size may be tuned starting from a titanium dioxide sol by controlling the inter-particle interaction, i.e., by changing the ratio between the polar (ethanol) and less polar (diethylene glycol, DEG) liquids composing the external phase [@Leonetti2011].
The actual sample consists of an almost cylindrical volume of dye solution confined between two microscopy coverslips (separated by plastic spacers and laterally sealed by cyanoacrylic glue) containing titania particles that self-assemble on the bottom surface. After one day the majority of the ethanol has evaporated, leaving clusters in a stable configuration embedded in a solution mainly composed of dye-doped diethylene glycol. The size of the clusters may be controlled by changing the ratio between ethanol and diethylene glycol in the starting solution without affecting the density (variations smaller than 2%) or the amount of titanium dioxide present in the sample, as shown in Figure \[Fig1\]. The area distribution of the clusters is non-Gaussian, as shown in figure \[Fig2\]: the relative abundance decreases exponentially with size, following the expression
$$P(A)= k\exp\left(-\frac{A}{A_0}\right).$$
We find that this is a good description of the size distribution, since such a model fits the data and yields a size parameter A$_0$ with a small fitting error. In the graph of figure \[Fig1\] we report A$_0$ (full squares, scale on the left) as a function of \[EtOH\], which has been probed in the range 0–0.8. At higher \[EtOH\] concentrations the samples have been found to be unstable (strong evaporation). Cluster area and cluster density have been evaluated by analyzing digitized microscope images (such as those reported in the insets of figure \[Fig2\]). The area has been obtained by counting the pixels composing each cluster once the image is rendered in black and white (BW) by selecting an appropriate intensity threshold, equal for all the measurements. Individual nanoparticles (structures with size much smaller than the average particle size) have not been taken into account in this analysis. The density has been estimated by calculating the ratio between the black (strongly scattering) and the white (transmitting) areas in the BW images. The cluster thickness (the vertical dimension) has been estimated by a low-depth-of-field optical microscope, which furnished values varying from a few micrometers up to 25 $\mu$m.
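The procedure just described can be sketched in a few lines; the following is an illustrative reconstruction using numpy/scipy (threshold value, minimum-size cutoff and function names are our assumptions, not the actual routine used). For an exponential distribution, the maximum-likelihood estimate of A$_0$ is simply the mean cluster area, which complements the histogram fit:

```python
import numpy as np
from scipy import ndimage

def cluster_analysis(img, threshold, px_area=1.0, min_px=5):
    """Cluster areas, scattering area fraction ('density') and the exponential
    size parameter A0 from a grayscale micrograph (bright pixels = titania)."""
    bw = img > threshold                          # black-and-white rendering
    labels, n = ndimage.label(bw)                 # connected clusters
    sizes = ndimage.sum(bw, labels, range(1, n + 1))
    sizes = sizes[sizes >= min_px]                # discard single-particle specks
    areas = sizes*px_area
    density = bw.sum()/bw.size                    # black / total area
    A0 = areas.mean() if len(areas) else 0.0      # MLE of A0 for P(A) ~ exp(-A/A0)
    return areas, density, A0
```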
![The top five graphs report cluster size distributions for different values of \[EtOH\], ranging from 0 to 0.8. The last graph reports the area fraction occupied by isolated particles as a function of the ethanol concentration \[EtOH\].[]{data-label="Fig2"}](Figure_2.eps){width="82.5"}
Images of the samples with different ethanol concentrations \[EtOH\], obtained with a 20$\times$ optical microscope, are reported in the insets of figure \[Fig2\], together with graphs detailing the normalized cluster size distributions P(A) of the area. Agglomerates with size smaller than a micrometer are excluded from the statistics, because they have a high probability of being composed of a single particle (SP). Cluster size distributions are derived from the analysis of ten images of each sample. The last graph of the figure, reporting the ratio SP between the area occupied by single particles and the total area occupied by titanium dioxide as a function of \[EtOH\], confirms that the increasing presence of hydroxyl improves the clustering process.
The inter-particle interaction is critical in defining the stability of a colloid [@ADMA:ADMA200900827; @Book_1]. A key role is played by the particle surface charge, which depends on the percentage and degree of polarity of the liquids composing the solution. The surface of titania particles is capped with hydroxyl groups, for which it is favorable to donate protons; if placed in a medium unable to accept protons, the particles cannot donate them and may even be forced to accept some, resulting in neutral or even positively charged particles. The addition of ethanol to the solution enables the particles to release protons and switches the surface charge to negative. The average effect of the presence of a polar liquid in the solution is to lower the zeta potential [@Chadwick2002229], thus favoring aggregation.
Having demonstrated that we can control the clusters' size, we now turn to study the effects on the laser action that may be obtained upon external pumping. The experimental setup we employed is capable of shining light on, and generating population inversion in, a predefined, computer-designed area. This is achieved by exploiting a spatial light modulator (SLM) in amplitude modulation configuration: vertically polarized light from the pump laser is reflected by an SLM (model Holoeye LC-R 1080, 1962x1200 pixels) in which each pixel, when set in the ON state, rotates the polarization of the reflected light by 90$^\circ$. An image of the SLM is produced on the sample by using two lenses, as described in reference [@Leonetti2011]. After the SLM, the vertical polarization is selected by a reflective polarizer, so that light from the activated pixels is not transmitted to the sample. With this approach, which allows diffraction-limited resolution to be achieved, we will measure the effects of the anomalous diffusion on the lasing efficiency.
![(a) A sketch of the sample under pumping and of the collection objective (the pumping optics, which lie below the sample, are not shown). The stripe-shaped inverted area may have different sizes, such as S1, S2 and S3 shown in the picture. Panels (b) and (c) show the spectra (obtained with a 303 mm focal length spectrograph, 0.25 nm resolution) from a single cluster placed at the edge of the stripe for small (b, S = 300 $\mu m$) and large (c, S = 600 $\mu m$) stripe lengths. Panel (d) reports the full width at half maximum as a function of the stripe length S. The horizontal dashed line indicates the threshold condition, while the vertical dashed line indicates the value of T.[]{data-label="setup"}](Figure_3.eps){width="82.5"}
We will demonstrate that the size distribution affects the overall efficiency of this lasing material. Light traveling in the sample plane experiences a particular propagation regime: it suffers very strong scattering inside a cluster while traveling in a straight line from one cluster to the next. Our system may thus be approximated as possessing two different relevant scattering mean free paths: an inter-cluster mean free path $\ell_f$ and an intra-cluster mean free path. An estimate of $\ell_f$ may be obtained directly from the microscope images. This is done through a computer routine that selects a single cluster and measures the distance to its first neighbors (counting the pixels separating the two clusters) in all directions. The operation is repeated for all the clusters in the image, and the average of the resulting set of distances is $\ell_f$.
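The averaging step can be sketched as follows. This is a simplified illustration, not the authors' actual image-analysis routine: it estimates $\ell_f$ from cluster centroid positions (a hypothetical input) rather than from pixel-counted surface-to-surface gaps, so distances are centre-to-centre.

```python
import numpy as np

def mean_free_path(centroids):
    """Estimate the inter-cluster mean free path l_f as the average
    nearest-neighbour distance between cluster centroids.

    centroids: (N, 2) array-like of cluster positions in micrometres.
    """
    pts = np.asarray(centroids, dtype=float)
    # Pairwise distance matrix between all clusters.
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # Ignore each cluster's zero distance to itself.
    np.fill_diagonal(dist, np.inf)
    # Average of each cluster's distance to its nearest neighbour.
    return dist.min(axis=1).mean()

# Four clusters on a 50 um square grid: every nearest neighbour is 50 um away.
grid = [(0, 0), (50, 0), (0, 50), (50, 50)]
print(mean_free_path(grid))  # -> 50.0
```

In the real analysis the pixel-level gap measurement lowers the estimate by roughly one cluster diameter relative to this centroid version.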
It has been shown recently that a particular size distribution of the particles composing a diffusive medium may give rise to diffusion regimes characterized by non-Gaussian path-length distributions. In particular, in systems showing a power-law distribution of the mean free path, light propagation may be modeled in the framework of Lévy statistics [@ADFM:ADFM200902008; @Barthelemy2008]. In our system, instead, the presence of an inter-particle space induces a twofold distribution of the mean free path; we will therefore study the effects of $\ell_f$ on the lasing threshold, one of the most important characteristics of a RL, as it quantifies the balance between gain and losses.
To study the effects of $\ell_f$ on the lasing threshold we prepared a setup in which a single cluster is placed at one of the edges of a stripe-shaped pump spot of thickness 10 $\mu$m and variable length S (see Fig. \[setup\]). The stripe generates a flux of stimulated emission that is used as a directional pump for the cluster [@Leonetti:11]. We make sure that the energy density over the whole stripe extension is kept constant at 0.5 $nJ/\mu m^2$. This is achieved by spatial filtering and magnification of the laser spot prior to the spatial modulation: in practice we exploit just the flattened top of the Gaussian profile in order to have a nearly homogeneous intensity on the reflective part of the SLM used for the spatial modulation.
Photons generated by stimulated emission in the inter-cluster space supply energy to the cluster. This is demonstrated in figure \[setup\], panels b and c: lasing emission from a cluster pumped with S = 300 $\mu m$ is less intense than the emission from the same cluster with a pumping stripe of S = 600 $\mu m$. We stress here that light is collected only from the cluster: no emission is retrieved when the optics is aligned to the inter-cluster space. However, the path followed by light before hitting the target cluster plays an important role: the chance that amplified stimulated emission reaches the titanium dioxide structure is lowered by the scattering suffered along that path. Therefore, in our system the inter-cluster separation distribution directly affects the threshold. In the following we demonstrate that the lasing efficiency directly depends on the $\ell_f$ parameter.
In standard experiments the threshold T is estimated by measuring the narrowing of the full width at half maximum (FWHM) of the emitted spectra as a function of the pump energy. A similar behavior of the FWHM is found if S is varied instead of the energy (see figure \[setup\] d). This happens because, as in variable stripe length experiments [@Leonetti:09; @PhysRevA.83.023801], the illuminated stripe acts as a one-dimensional amplifier at the edges of which amplified spontaneous emission (ASE) is channeled. Thus increasing the size S of the stripe results in an exponential increase of the ASE intensity pumping the cluster.
In our experimental protocol T is defined as the value of S for which the full width is the average of its maximum and minimum values: $$(Max(FWHM)+Min(FWHM))/2= FWHM(T)$$ (where Max and Min represent respectively the maximum and the minimum of all the FWHM values obtained). We report T in figure \[Fig4\] as a function of $\ell_f$. Five samples have been prepared for each \[EtOH\] value; due to different disorder realizations, they show slightly different values of $\ell_f$. Each point results from the measurement of a randomly chosen cluster with a diameter between 10 and 15 $\mu$m. The figure shows a clear decrease of the lasing threshold as a function of the inter-cluster transport mean free path. The last points of the graph (purple stars) have been obtained by performing the “isolation procedure” described in [@Leonetti_pra2011], which allows a single cluster to be isolated by exploiting optically generated hydrodynamical fluxes. In practice, the samples originally created with \[EtOH\]=0.4, possessing $\ell_f$ around 50 $\mu m$, have been used to obtain nearly isolated clusters. In this case an estimate (lower limit) for $\ell_f$ is given by the distance from the nearest cluster to the one under examination, which ranges between 800 and 1200 $\mu$m.
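The threshold criterion described above — the stripe length at which the FWHM crosses the midpoint between its maximum and minimum values — can be sketched numerically as follows. This is a minimal illustration with hypothetical synthetic data, not the measured narrowing curves; a linear interpolation between measured points stands in for whatever interpolation was actually used.

```python
import numpy as np

def threshold_from_fwhm(S, fwhm):
    """Locate the threshold T: the stripe length at which the FWHM
    crosses the average of its maximum and minimum values, found by
    linear interpolation between measured points.

    S, fwhm: equal-length 1D arrays; the FWHM is assumed to narrow
    (decrease) with increasing stripe length S.
    """
    S = np.asarray(S, dtype=float)
    fwhm = np.asarray(fwhm, dtype=float)
    target = 0.5 * (fwhm.max() + fwhm.min())
    # np.interp requires increasing x-coordinates, so invert the curve:
    # interpolate S as a function of FWHM sorted in ascending order.
    order = np.argsort(fwhm)
    return float(np.interp(target, fwhm[order], S[order]))

# Synthetic narrowing curve: FWHM falls from 10 nm to 2 nm as S grows.
S = [100, 200, 300, 400, 500, 600]       # stripe lengths in um
fwhm = [10.0, 9.5, 8.0, 6.0, 3.0, 2.0]   # FWHM in nm
print(threshold_from_fwhm(S, fwhm))  # -> 400.0 (midpoint 6.0 nm hit at S=400)
```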
We also confirmed that the presence of a variable percentage of ethanol does not influence the efficiency of the gain molecules. We measured the gain coefficient G using the variable stripe length technique, which allows optical amplification in nanostructures to be measured [@PhysRevA.83.023801]. We retrieved G = 100$\pm$4 cm$^{-1}$ for a solution containing only DEG and G = 106$\pm$8 cm$^{-1}$ for a DEG solution with \[EtOH\]=0.8.
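The variable stripe length analysis can be illustrated with a minimal fit. In a one-dimensional amplifier of length S the ASE output grows as $I(S)\propto(e^{GS}-1)/G$; well above $GS\sim1$, $\ln I$ is nearly linear in S with slope G. The sketch below, on synthetic data (not the measured curves), reads G off that slope; the linearization is an assumption that slightly overestimates G at moderate $GS$.

```python
import numpy as np

def gain_from_vsl(S_cm, intensity):
    """Estimate the gain coefficient G (cm^-1) from variable-stripe-length
    data, using the large-GS regime where I(S) ~ exp(G*S), so that
    ln I is approximately linear in S with slope G."""
    S_cm = np.asarray(S_cm, dtype=float)
    logI = np.log(np.asarray(intensity, dtype=float))
    slope, _intercept = np.polyfit(S_cm, logI, 1)
    return slope

# Synthetic data generated with G = 100 cm^-1 (stripe lengths in cm).
G_true = 100.0
S = np.linspace(0.02, 0.06, 9)            # 200-600 um stripes
I = (np.exp(G_true * S) - 1.0) / G_true   # ideal 1D-amplifier output
# Slightly above G_true, since ln(e^{GS}-1) = GS only in the limit GS >> 1.
print(gain_from_vsl(S, I))
```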
![Lasing threshold measured in samples with different $\ell_f$. Error bars result from the resolution of the threshold measurement. All measurements have been performed with an input fluence of 0.5 nJ/$\mu$m$^2$ per pulse. The samples indicated with “\[EtOH\]= 0.4 + I” (full stars) result from the isolation process.[]{data-label="Fig4"}](Figure_4.eps){width="82.5"}
To summarize, we demonstrated that it is possible to control the self-assembly topology of titania clusters by regulating the density of hydroxyl groups, which sets the inter-particle interaction. Samples containing clusters with different size distributions, embedded in a dye-doped solution, have been fabricated. We then proved that the inter-cluster mean free path affects the lasing efficiency. By acting on the fashion in which the building blocks of the disordered structure are arranged, the lasing threshold has been reduced and, even more strikingly, this has been done without changing the density of the scatterers or the gain of the active medium, thus effectively increasing the efficiency achievable from disordered lasing materials. This study paves the way for the fabrication of a cheap photonic material with increased emission capability, synthesizable at large scale. The next step for its improvement is the polymerization of the inter-cluster space, which will allow the fabrication of solid and stable chips with controllable lasing efficiency.
We thank Marta Ibisate for fruitful discussions. The work was supported by the EU FP7 NoE Nanophotonics4Energy Grant No. 248855, the Spanish MICINN CSD2007-0046 (Nanolight.es), MAT2009-07841 (GLUSFA) and Comunidad de Madrid S2009/MAT-1756 (PHAMA).
|
---
author:
- 'R. Hind'
- 'J. von Bergmann'
title: |
Existence and Stability of Foliations\
by $J$–Holomorphic Spheres
---
Introduction
============
The theory of pseudoholomorphic curves was introduced in Gromov’s seminal paper [@gromov]. There is a Fredholm theory showing that, for generic almost-complex structures $J$, pseudoholomorphic, or $J$-holomorphic, curves appear in finite dimensional families, with the dimension given by the Riemann-Roch theorem. Furthermore, in the presence of a taming symplectic form, suitable moduli spaces of $J$-holomorphic curves are compact modulo bubbling. These results have many important applications in symplectic topology. Notably they lead to Gromov-Witten invariants and Floer homology, which have been the main methods for establishing rigidity results in symplectic and contact topology.
Applied to symplectic manifolds of dimension $4$ the theory of pseudoholomorphic curves is especially powerful and it becomes possible to prove classification results which are as yet inaccessible in higher dimensions. For example, symplectic forms on $S^2 \times
S^2$ are classified, see [@SW_GW] and [@gromov], their symplectomorphism groups are well understood, see [@gromov] and [@abreu_mcduff], and the Lagrangian spheres are all known to be symplectically equivalent, see [@hind_spheres]. These results all rely on the existence of foliations by $J$-holomorphic spheres. More precisely, they utilize the following theorem of Gromov. We say that a homology class $A \in H_2(X)$ is $\omega$–minimal if $\omega(A) =
\min_{B \in H_2(X), \omega(B)>0} \omega(B)$.
\[thm:4d-foliation\] Let $(X,\omega)$ be a symplectic $4$-manifold with a tamed almost-complex structure $J$ and suppose that there exists an embedded symplectic sphere in a homology class $A$ satisfying $A
\bullet A =0$ and $A$ is $\omega$–minimal.
Then $X$ is foliated by the images of $J$-holomorphic spheres homologous to $A$. The foliations vary smoothly with the almost-complex structure $J$.
The aim of this paper is to investigate the extent to which this remains true when $X$ has higher dimension.
It turns out that in general Theorem \[thm:4d-foliation\] is false if $X$ is allowed to have dimension greater than $4$. The existence of $J$-holomorphic spheres in the class $A$ can be guaranteed at least for an open set of almost-complex structures by imposing an index constraint. But even if a foliation is known to exist for a particular $J$, it is unstable in the sense that varying $J$, even in the most generic fashion, can cause the foliation to degenerate.
To be more precise, we recall that given a family of tame almost-complex structures $J_t$, $0 \le t \le 1$, on the symplectic manifold $X^{2n}$ we can define the universal moduli space $$\begin{aligned}
{\cal M} = \{(u,t)\,|\,u:S^2 \to X\ \mathrm{is}\ J_t\mathrm{-holomorphic},\,[u(S^2)]=A
\}.\label{eq:M}\end{aligned}$$ Suppose that $c_1(A)=2$, then ${\cal M}$ has virtual dimension $2n+5$ and if the family $\{J_t\}$ is regular then ${\cal M}$ is a manifold (with boundary) of dimension $2n+5$. Furthermore, in the generic case the projection map $T:{\cal M} \to [0,1]$, $(u,t) \mapsto t$ is a Morse function and for all but finitely many $t$ the fiber ${\cal M}_t$ is a manifold of dimension $2n+4$ consisting of the $J_t$-holomorphic spheres in the class $A$. For such regular $t$ there is a smooth evaluation map $$e_t:{\cal M}_t \times _G S^2 \to X$$ where the equivalence relation $G$ is reparameterization of the holomorphic spheres. Both ${\cal M}_t \times _G S^2$ and $X$ are smooth $2n$-dimensional manifolds.
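For the reader's convenience, the dimension count behind these statements (our bookkeeping, using the standard Riemann–Roch index formula under the stated assumption $c_1(A)=2$) reads:

```latex
\begin{aligned}
% Index of the linearized operator at a parametrized sphere u in class A:
\operatorname{ind} D_u &= n\,\chi(S^2) + 2c_1(A) = 2n + 4, \\
% the family parameter t adds one dimension:
\dim {\cal M} &= (2n+4) + 1 = 2n + 5, \\
% quotient by G = PSL(2,\setC) (\dim_\setR G = 6) and cross with S^2:
\dim\bigl({\cal M}_t \times_G S^2\bigr) &= (2n+4) - 6 + 2 = 2n = \dim X,
\end{aligned}
```

so the evaluation map $e_t$ is indeed a map between manifolds of equal dimension.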
We say that $X$ is [*foliated*]{} by $J_t$-holomorphic spheres in the class $A$ if the map $e_t$, when restricted to some connected component of its domain, is a homeomorphism.
We say that $X$ is [*smoothly foliated*]{} by $J_t$-holomorphic spheres in the class $A$ if the map $e_t$, when restricted to some connected component of its domain, is a diffeomorphism.
When $X$ is $4$-dimensional, or when the almost-complex structure $J_t$ is integrable, these two notions coincide, but in higher dimensions there exist foliations (at least if we allow nonregular curves) for which the corresponding evaluation map is a smooth homeomorphism that is not a diffeomorphism. An example is given in Remark \[rmk:foln\].
The following result shows that Theorem \[thm:4d-foliation\] fails completely in dimension greater than four. Let $(M,\omega_M)$ be a symplectic manifold of dimension at least four.
\[thm:counterexample\] There exists a regular family $J_t$ of tame almost-complex structures on $(X,\omega)=(S^2\times
M,\sigma_0\oplus \omega_M)$ such that ${\cal M}$ has a component ${\cal N}$ where the curves in $t^{-1}(0) \cap {\cal N}$ form a foliation of $X$ but the curves in $t^{-1}(1) \cap {\cal N}$ do not: they are not disjoint.
In fact, we can take $J_0$ to be a product structure on $S^2\times M$ and so ${\cal M}_0$ has a single component consisting of curves with images $S^2 \times \{z\}$ for $z \in M$. Fixing a point $0 \in M$ we can further assume that the corresponding sphere $C_0 = S^2 \times
\{0\}$ is $J_t$-holomorphic and regular for all $t$. However there exists a two parameter family of curves $C_r$ in ${\cal M}_1$ which includes $C_0$ but with $C_r \cap C_0 \neq \emptyset$ for all $r$.
An analog of Theorem \[thm:4d-foliation\] does remain true if we impose restrictions on the $J_t$. In this paper we will explain how to guarantee the existence and stability of foliations in the case of integrable complex structures and additional restrictions on the curvature.
\[thm:integrable\] Let $(X,\omega,J)$ be Kähler with holomorphic bisectional curvature bounded from below by $c>-\pi/\omega(A)$, where $A\in
H_2(X;\setZ)$ is an $\omega$–minimal homology class with $GW^X_{0,1,A}(pt)=1$. Then $X$ is smoothly foliated by $J$–holomorphic spheres.
We remark that if $X$ is a product $(M,k\omega) \times (S^2, \sigma)$, where $(M,\omega,J)$ is Kähler and $\sigma$ is the area form on $S^2$, then a product complex structure will satisfy the hypotheses of Theorem \[thm:integrable\] whenever $k$ is sufficiently large, and so will any other integrable complex structure that is sufficiently close to the product one.
Of central importance to the stability of foliations is the notion of superregularity as defined in [@donaldson_discs].
\[def:superregular\] A real–linear Cauchy–Riemann operator $D$ on a complex vector bundle over $S^2$ is called [*regular*]{} if $D$ is surjective. It is called [ *superregular*]{} if $\ker D$ contains a collection of sections that are linearly independent over each point in $S^2$. A choice of such a collection of sections is called a [*superregular basis*]{} for $D$.
A $J$–holomorphic sphere $u$ is called regular if the induced real–linear Cauchy–Riemann operator $D_u$ on $u^\ast TX$ is regular. An immersed $J$–holomorphic sphere $u$ is called superregular if $D_u$ acting on sections of the normal bundle is superregular.
Note that regularity does not imply superregularity and vice versa. For example, no regular linearized operator at a $J$–holomorphic curve in a 4–manifold with self-intersection number $\ne 0$ is superregular. We will give an example of a superregular operator that is not regular in Section \[sec:constr-line-oper\].
Another way to understand what it means for a linearized operator to be regular and superregular is the following. Suppose $u$ is regular so that the moduli space of curves near $u$ is a smooth manifold. Then, in the language of Section 3.4 in [@mcduff2], the evaluation map from the moduli space of $J$–holomorphic curves near a map $u:S^2\rightarrow X$ is transverse to all $x\in {\operatorname{image}}(u)\subset
X$ if and only if $u$ is superregular. Hence all curves $u$ in a smooth foliation are superregular.
The paper is arranged as follows. We first establish the non-existence result Theorem \[thm:counterexample\] in Section \[sec:counterexample\]. Then we discuss the integrable case in Section \[sec:stab-foli-integr\] to prove Theorem \[thm:integrable\].
Non-stability of foliations {#sec:counterexample}
===========================
For clarity of exposition we will restrict ourselves to work in dimension 6. It is clear how to generalize this to higher dimension, e.g. by taking the product with another symplectic manifold with compatible almost complex structure. However, our construction does not work in dimension less than 6 since in that case Hirsch’s theorem about immersions does not apply, and consequently Lemma \[lem:D\_super\] does not hold.
Superregular Operator with Cokernel {#sec:constr-line-oper}
-----------------------------------
Here we will construct a superregular Cauchy-Riemann operator with nontrivial cokernel. This immediately gives examples of foliations by holomorphic spheres which are not superregular.
Throughout this section $N=S^2\times\setR^4$ denotes the trivial bundle. Let $\{\bar e_i\}_{i=1}^4$ be the canonical basis of $\setR^4$ and $J_0$ the canonical complex structure. Using the trivialization of $N$ we will frequently identify sections of $N$ with functions from $S^2$ into $\setR^4$.
Recall the structure of a real–linear Cauchy–Riemann operator $D$ acting on sections $\xi$ of the complex vector bundle $(N,J_0)$ with trivial connection $\nabla$ via $$\begin{aligned}
D\xi=\frac12\left(\nabla \xi+J_0\nabla\xi\circ j\right)+\frac12 Y\xi
={{\ensuremath{\bar\partial }}}_{J_0}\xi+\frac12Y\xi\end{aligned}$$ where $Y:N\rightarrow\Lambda^{0,1}(T^\ast S^2\otimes_\setC N)$ is a vector bundle homomorphism.
Recall Definition \[def:superregular\] of our use of the terms regular and superregular.
\[lem:J(z)-hol\] Let $D$ be a superregular real–linear Cauchy–Riemann operator on $(N,J_0)$ with superregular basis $\{e_i\}_{i=1}^4$. Let $\Phi:S^2\times
\setR^4\rightarrow N$ be the corresponding trivialization, i.e. $\Phi(z,x)=\sum_{i=1}^4 x_i\,e_i(z)$.
Then for a function $f:S^2\rightarrow \setR^4$ we have $D
\Phi(z, f(z))=\Phi_\ast{{\ensuremath{\bar\partial }}}_J f$, where $J:S^2\rightarrow End(\setR^4)$, is given by $\Phi^\ast J_0$.
$$\begin{aligned}
D\Phi(z,f(z))=D\sum_{i=1}^4 f_i(z)e_i(z)
=\sum_{i=1}^4\frac12\{df_i+J_0\,df_i\circ j\}e_{i}
=\Phi_\ast({{\ensuremath{\bar\partial }}}_J f).
\end{aligned}$$
We need the following elementary observation.
\[lem:Y\] Given any four sections $\{e_i\}_{i=1}^4$ of $N=S^2\times \setC^2$ that are linearly independent over each $z\in S^2$, there exists a unique real–linear Cauchy–Riemann operator $D={{\ensuremath{\bar\partial }}}_0+Y$, where ${{\ensuremath{\bar\partial }}}_0$ is the canonical complex Cauchy–Riemann operator, $Y\in
\Lambda^{0,1}T^\ast S^2\otimes \setC^2$, and $\{e_i\}_{i=1}^4$ is a superregular basis for $D$, i.e. so that $D e_i=0$ for $i=1,\ldots
,4$.
Let $\nu_i=D_0\,e_i$. Since the $\{e_i\}_{i=1}^4$ are linearly independent for all $z\in S^2$ we may define $Y$ via $Y_z\,e_i(z)=-\nu_i(z)$. Thus $D e_i=0$ so $\{e_i\}_{i=1}^4$ is a superregular basis. Conversely, if $D e_i=0$ for $i=1,\ldots,4$ then $Y_z(e_i(z))=-\nu_i(z)$, defining $Y$ uniquely.
We now aim to construct a superregular real–linear Cauchy–Riemann operator on $N$ that has a non–trivial cokernel. The following Lemma clears some topological obstructions.
\[lem:F\_0\] There exists a complex bundle monomorphism $F_0:TS^2\rightarrow
\underline{\setC^2}$.
Here $\underline{\setC^2}$ denotes the trivial $\setC^2$–bundle over $S^2=\setC\setP^1$.
Consider the diagram in Figure \[fig:1\]. Let $K\rightarrow \setC\setP^1$ be the tautological bundle, i.e. $$\begin{aligned}
K=\{(v,[z:w])\,|\,v\in \mathrm{span}_\setC(z,w),\quad
(z,w)\in\setC^2\}
\end{aligned}$$ and let $\tilde f_0:S^2\rightarrow \setC\setP^1$ be a degree 2 map. Then $TS^2$ and $\tilde f_0^\ast K$ have the same Chern class, so they are isomorphic complex vector bundles. Let $\tilde
F_0:TS^2\rightarrow K$ be a bundle isomorphism (covering $\tilde
f_0$).
Let $\iota:K\rightarrow \underline{\setC^2}$ be the standard inclusion, i.e. $\iota(v,[z:w])=(v,[z:w])\in\setC^2\times\setC\setP^1=\underline{\setC^2}$ and set $$\begin{aligned}
F_0:TS^2\rightarrow\underline{\setC^2},\qquad
F_0=\iota\circ\tilde F_0
\end{aligned}$$ $F_0$ is an injective complex vector bundle homomorphism because $\tilde F_0$ and $\iota$ are.
\[fig:1\]
Set $f_0:S^2\rightarrow \setC^2$, $f_0(z)=0$ and let $F_0:TS^2\rightarrow f_0^\ast T\setC^2=\underline{\setC^2}$ as in Lemma \[lem:F\_0\]. We aim to construct an actual immersion $f_1:S^2\rightarrow \setR^4$ so that $(f_1,F_1=df_1)$ has the same topological data as $(f_0,F_0)$.
By Theorem 6.1 of [@hirsch] (or alternatively by the $h$–principle) there exists an immersion $f_1:S^2\rightarrow \setR^4$ with $F_1=df_1:S^2\rightarrow f_1^\ast T\setR^4=\underline{\setR^4}$ together with a homotopy $f_t:S^2\rightarrow \setR^4$ connecting $f_0$ and $f_1$ covered by a homotopy of (real) monomorphisms $F_t:TS^2\rightarrow \underline{\setR^4}$ connecting $F_0$ and $F_1$. Here $\underline{\setR^4}$ is again the trivial bundle and we implicitly made use of the canonical (real) isomorphism $\underline{\setC^2}=\underline{\setR^4}$.
We need one more definition to construct a superregular operator with non–trivial cokernel. Let $\langle\cdot,\cdot\rangle$ denote the standard inner product on $\setR^4$, let $\J$ denote the space of complex structures on $\setR^4$, and let $\A$ be the set of injective (real–linear) homomorphisms from $\setC$ into $\setR^4$. We define the map $$\begin{aligned}
\label{eq:Phi}
\Phi:\A\rightarrow\J\end{aligned}$$ in the following way. $A$ defines a splitting $\setR^4=V\oplus W$, where $V={\operatorname{image}}A$ and $W=V^\perp$. Define $J=\Phi(A)$ to be the unique $J\in\J$ that leaves this splitting invariant, makes $A$ complex linear, and satisfies $$\begin{aligned}
\langle Je_1,e_2\rangle=1,\qquad \langle Je_1,e_1\rangle=0\end{aligned}$$ on an oriented orthonormal basis $e_1,e_2$ of $W$.
\[lem:D\_super\] There exists a superregular real–linear Cauchy–Riemann operator $D$ with non–trivial cokernel.
For each $z\in S^2$ define $J(z)=\Phi\circ F_1(z)$, where $\Phi$ is the map from Equation (\[eq:Phi\]). Note that $\Phi\circ
F_0(z)=J_0$, so $\Phi\circ F_0$ is covered by the map $G_0=\id:S^2\rightarrow GL(4,\setR)$, where the projection $\pi:GL(4,\setR)\rightarrow \J$ is given by $\pi g=g^\ast
J_0=g^{-1}\circ J_0\circ g$. The map $\pi:GL(4,\setR)\rightarrow \J$ is a bundle projection and thus has the homotopy lifting property. Let $G_t$ be a lift of the homotopy $\Phi\circ F_t$ to $GL(4,\setR)$.
For $i=1,\ldots,4$ define sections $e_i(z)=G_1(z)\bar e_i$, where $\{\bar e_i\}_{i=1}^4$ denotes the canonical basis of $\setR^4$. By definition these are linearly independent for each $z\in S^2$, so by Lemma \[lem:Y\] we can choose $Y$ so that these sections satisfy $D\,e_i=0$, where $D=D_0+Y$. Thus $D$ is superregular with superregular basis $\{e_i\}_{i=1}^4$.
Set $e_5(z)=G_1(z)f_1(z)$, and note that ${{\ensuremath{\bar\partial }}}_{J(z)}f_1=0$ by the definition of $J(z)$. So by Lemma \[lem:J(z)-hol\] $e_5$ satisfies $D\,e_5=0$.
Thus the kernel of $D$ is at least 5–dimensional, so $D$ has non–trivial cokernel.
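The final step implicitly uses the Riemann–Roch index of $D$ on the trivial rank–2 bundle $N$ over $S^2$; for completeness, our computation via the standard index formula for real–linear Cauchy–Riemann operators:

```latex
\operatorname{ind}_\setR D
  = \dim_\setR \ker D - \dim_\setR \operatorname{coker} D
  = \operatorname{rank}_\setC N \cdot \chi(S^2) + 2\langle c_1(N),[S^2]\rangle
  = 2\cdot 2 + 0 = 4,
```

so $\dim_\setR\ker D\ge 5$ forces $\dim_\setR\operatorname{coker}D\ge 1$.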
\[rmk:foln\] Adding a suitable multiple of $e_4$ to $e_5$ we may assume that the pointwise inner product $\langle
e_4(z),e_5(z)\rangle\ge 0$ while strict inequality fails at some point. Then consider the map $$e:\setR^4 \times S^2 \to
N,$$ $$(t_1,t_2,t_3,t_4,z)\mapsto \sum_{i=1}^3 t_i e_i + t_4 e_5 +
t_4^2 e_4.$$ This is clearly a smooth map giving a foliation of $N$ but its differential is not an isomorphism wherever $z$ satisfies $\langle e_4(z),e_5(z)\rangle = 0$.
Family of Almost Complex Structures {#sec:constr-family-almost}
-----------------------------------
Here we will extend the example from the previous section to construct a family $D_s$ of Cauchy-Riemann operators for $s \in [-1,2]$ such that $D_s$ is regular for all $s$, superregular for $s$ close to $2$, but not superregular for $s$ close to $-1$. The example can be globalized to produce the counterexample needed for Theorem \[thm:counterexample\].
Let $N\rightarrow S^2$ be a trivial complex rank 2 vector bundle. Fix an inner product $\langle\cdot,\cdot\rangle$ on N and let $$\begin{aligned}
D:\Gamma N\rightarrow \Omega^{0,1}(N)\end{aligned}$$ be a superregular real–linear Cauchy–Riemann operator with non–trivial cokernel as given by Lemma \[lem:D\_super\]. Let $K=\ker(D)$ and fix a superregular basis $\{e_i\}_{i=1}^4$ and let $e_5\in K$ be another section that is linearly independent from the $\{e_i\}_{i=1}^4$. Without loss of generality assume that $e_5$ is perpendicular to $e_1,e_2,e_3$ in $L^2$ and the pointwise inner product $h(z)=\langle e_4(z),e_5(z)\rangle\ge 0$ with $h(0)=0$. Assume that ${{\ensuremath{|\!|e_i|\!|}}}=1$, $i=1,\ldots,4$ and scale $e_5$ so that there exists $p_0\in S^2$ with $$\begin{aligned}
\label{eq:p_0}
h(p_0)=\langle e_4(p_0),e_5(p_0)\rangle=1.\end{aligned}$$
Let $C={\operatorname{coker}}(D)$ with orthonormal basis $\{\eta_i\}_{i=1}^n$. Let $\tilde D:K^\perp\rightarrow C^\perp$ be the restriction of $D$ to $K^\perp$ and let $P:C^\perp\rightarrow
K^\perp$ denote its inverse. Let $$\begin{aligned}
\pi_C:\Omega^{0,1}(N)\rightarrow C\end{aligned}$$ denote the orthogonal projection in $L^2$.
\[lem:L-Y\] There exists a family of vector bundle homomorphisms $Y_s:N\rightarrow
\Lambda^{0,1}(T^\ast S^2\otimes_\setC N)$, $s\in[-1,1]$ so that $$\begin{aligned}
L_s=\pi_C\circ Y_s:\Gamma(N)\rightarrow C
\end{aligned}$$ is surjective and $$\begin{aligned}
K_s=\ker(L_s)\cap K=span\{e_1,e_2,e_3,s\,e_4+(1-s)e_5\}.
\end{aligned}$$
In order to prove this we need the following three lemmas. We will make use of the notation introduced above. Also recall the point $p_0\in
S^2$ from Equation (\[eq:p\_0\]), so the family of sections $e_i^s$ defined via $e_i^s=e_i$ for $i=1,\ldots,3$ and $e_4^s=s\,e_4+(1-s)e_5$ consists of linearly independent vectors over $p_0$ for all $s\in[-1,1]$. Let $U\subset S^2$ be an open neighborhood of $p_0$ so that the $\{e^s_i(z)\}_{i=1}^4$ remain linearly independent over all $z\in \overline U$. Let $V\subset
C^\perp$ denote the subspace of smooth sections in $C^\perp$ that are supported in $U$. Then we can define a family of homomorphisms $Y_s$ from $N$ to $\Lambda^{0,1}(T^\ast S^2\otimes_\setC N)$ via functions $g_i^s:[-1,1]\times U\rightarrow V$ by $$\begin{aligned}
Y_s(e_i^s)=g_i^s.\end{aligned}$$ Set $L_s=\pi_C\circ Y_s:\Gamma(N)\rightarrow C$ and $K_s=\ker(L_s)$. Note that by construction $e^s_i\in K_s$ for all $i=1,\ldots,4$. We will prove Lemma \[lem:L-Y\] by finding suitable $g_i^s$ so that $L_s$ is surjective. Alternatively we can define a map $$\begin{aligned}
\label{eq:F}
F:[-1,1]\times V^4\rightarrow [-1,1]\times Hom(K,C),\qquad
F(g_1^s,\ldots,g_4^s)=L_s|_{K}\end{aligned}$$ and we need to show that the image of $F$ contains a family of surjective homomorphisms.
\[lem:1\] Suppose that $g_i^s$ are given so that $\mathrm{dim}(K_s) \le p+4$ for some $0\le p\le m$ and all $s\in[-1,1]$. Then there exists a smooth family of linearly independent sections $\{f_j^s\}_{j=1}^m\in
K$ that are orthogonal to $e_i^s$ for $i=1,\ldots,4$ such that $\{f_j^s\}_{j=p+1}^m \in K_s^\perp \cap K$ for all $s$, and $\langle L_s(f_j),L_s(f_k)\rangle=0$ for all $j \le p$ and $k>p$.
Define $S_{p+4} \subset S_{p+3} \subset ... \subset S_4 = [-1,1]$ where $S_r = \{s \in [-1,1] \,|\, \mathrm{dim}(K_s) \ge r\}$. Restricted to $S_{p+4}$ the vector spaces $K_s^{\perp} \cap K$ form a smooth vector bundle and so admit a smooth frame of $m-p$ sections $\{f_j^s\}_{j=p+1}^m$. Now suppose that we have defined an $m-p$–frame (that is, $m-p$ linearly independent sections) over $S_r$. The vector spaces $K_s^{\perp} \cap K$ also form a continuous vector bundle over $S_{r-1} \setminus S_r$ which extends to a continuous vector bundle over $S_r$. Thus by the Tietze extension theorem the sections $\{f_j^s\}_{j=p+1}^m$ extend to $S_{r-1}$, and we can conclude by induction, defining these sections over all of $[-1,1]$. The other sections $f_j^s$ can then be constructed smoothly over $s$, since the orthogonal complement of the sections already constructed forms a vector bundle over $[-1,1]$, which therefore admits a smooth frame field. We can arrange that their images are orthogonal by a Gram–Schmidt procedure.
\[lem:3\] Let $\hat K\subset K$ and $\hat C\subset C$ be $p$–dimensional vector spaces. Then the map $$\begin{aligned}
\hat F:V^4\rightarrow Hom(\hat K,\hat C),\qquad
\hat F(g_1,\ldots g_4)=\pi_{\hat C}\circ Y|_{\hat K},
\end{aligned}$$ where $Y:N\rightarrow \Lambda^{0,1}(T^\ast S\otimes_\setC N)$ is the homomorphism associated to the $g_i$, is nonzero.
Let $a_{ij}$ be such that $f_j^s=\sum_{i=1}^4 a_{ij}^se_{i}^s$ on $U$. Then $\hat F(g_1,\ldots g_4)(f_j^s)=\sum_{i=1}^4 g_i^s
a_{ij}^s$ on $U$.
If $\hat F$ were trivial, then the $L^2$ inner products would satisfy $$\begin{aligned}
\sum_{i=1}^4\langle g_i^s\cdot a_{ij}^s,\eta_k\rangle=0\quad
\forall\,k,j\quad\forall g_i^s\in V
\end{aligned}$$ Thus $(a_{ij}^s\eta_k)_{i=1}^4\in (V^4)^\perp$ for all $j,k$. Recall that $K$ and $C$ are the kernel and cokernel of a real–linear Cauchy–Riemann operator $D$, respectively. Then for fixed $j,k$, if $a_{ij}^s\eta_k\in V^\perp$ for all $i$, we have over $U$ $$\begin{aligned}
D^\ast (a_{ij}^s\eta_k)=({{\ensuremath{\bar\partial }}}^\ast a_{ij}^s)\otimes
\eta_k=0\quad
\forall\,i.
\end{aligned}$$ On the other hand $$\begin{aligned}
0=D\,f_j=\sum_{i=1}^4 D(a_{ij}^se_i^s)=\sum_{i=1}^4 ({{\ensuremath{\bar\partial }}}a_{ij}^s)\otimes e_i^s
\end{aligned}$$ and the two equations taken together imply that the $\{a_{ij}^s\}_{i=1}^4$ are constant functions over $U$. This means that $f_j^s$ is a linear combination of the $e_i^s$ on $U$. By unique continuation, using that $f_i^s$ and $e_i$ are in the kernel of the operator $D$, we conclude that $f_j^s$ is globally a linear combination of the $e_i$. But that contradicts that $f_j^s$ is linearly independent of the $e_i^s$.
\[lem:2\] Suppose that $Y_s$ and corresponding sections $f_j$ are given as in Lemma \[lem:1\]. Then there exist a smooth family $(g^s_i)_{i=1}^4
\in V^4$ such that for each of the corresponding maps $L_s$ there exists an $f^s \in \mathrm{span}\{f_j^s\}_{j=1}^p$ with $L_s(f^s)\ne
0$ and independent of the span of the $L_s(f_j)$ for all $j>p$.
As in the previous lemma we express the $\{f_j^s\}_{j=1}^p$ as linear combinations of functional multiples of the $\{e_i^s\}_{i=1}^4$, at least over the open set $U$. Then we redefine the above linear map $F$ as in Equation (\[eq:F\]) such that its range is paths of $p \times p$ matrices with entries determined by the $L^2$ inner products of the $\{f_j^s\}_{j=1}^p$ with $\{\eta_j^s\}_{j=1}^p$ orthogonal to the $L_s(f_j)$ for all $j>p$. This linear map is nonzero for all $s$ by Lemma \[lem:3\]. Therefore its kernel $M^s \subset V^4$ has positive codimension for all $s$, and the lemma follows if we can find a continuous section of $M^{\perp}$ over $[0,1]$. But the rank of these vector spaces is again lower semicontinuous in $s$, so such a section exists: first define it over the points of minimal rank and then extend as in the proof of Lemma \[lem:1\].
We prove Lemma \[lem:L-Y\] by induction, perturbing the $g^s_i$. Suppose that we have found $g_i^s$ such that $\mathrm{dim}(K_s) \le p+4$ for all $s$ and some $0<p\le m$. Then we can apply Lemma \[lem:1\] to find corresponding families of sections, and thus a perturbation of the $g_i^s$ using Lemma \[lem:2\]. If the perturbations are chosen sufficiently small then the $L_s(f_j)$ are still linearly independent for all $s$ and $j=p+1,\ldots,m$. However, for each $s$ there is now an $f^s \in \mathrm{span}(\{f_j^s\}_{j=1}^p)$ with $L_s(f^s)\ne 0$ and independent of the $L_s(f_j)$ for $j>p$. Hence for each $s$ we have $\mathrm{dim}(K_s) \le p+3$ and the proof follows.
Let $Y_s$ and $L_s$ be as in Lemma \[lem:L-Y\] and consider the family of real–linear Cauchy–Riemann operators $$\begin{aligned}
\label{eq:D_st}
D_{s,t}=D+t\,Y_s.\end{aligned}$$
The kernel of $D_{s,t}$ gets arbitrarily close to $K_s$ as $t$ gets small as described below. This result is well established in the literature (see e.g. [@kato]), but we give a proof here for the convenience of the reader. In the following ${{\ensuremath{|\!|\cdot|\!|}}}$ denotes the $L^2$–norm.
\[lem:D\_st\] There exists a constant $c>0$ so that for all $0<|t|<1/2c$, $s\in
[-1,1]$ and $v_s\in K_s$, $D_{s,t}$ is surjective and there exists a unique $\xi_{s,t}(v_s)\in K_s^\perp$ so that $$\begin{aligned}
v_s+\xi_{s,t}(v_s)\in\ker D_{s,t}.
\end{aligned}$$ Moreover ${{\ensuremath{|\!|\xi_{s,t}(v_s)|\!|}}}\le 2|t|c{{\ensuremath{|\!|v_s|\!|}}}$.
Let $V_s=K_s^\perp\cap K$ denote the orthogonal complement of $K_s$ in $K$ and set $$\begin{aligned}
\tilde L_s=L_s|_{V_s}:V_s\rightarrow C.
\end{aligned}$$ Then $\tilde L_s$ is an isomorphism.
Let $W_s=K_s^\perp\cap \ker L_s\subset L^2(N)$ and consider the compact operator $$\begin{aligned}
F_s:\ker L_s\rightarrow W_s,\qquad
F_s(\zeta)=\tilde L_s^{-1}\circ L_s(PY_s(\zeta))-PY_s(\zeta),
\end{aligned}$$ where $P:C^\perp\rightarrow K^\perp$ is the inverse of $D|_{K^\perp}:K^\perp\rightarrow C^\perp$.
Note that $L_s\circ F_s=0$ and $\tilde L_s^{-1}$ and $P$ have image in $K_s^\perp$, so $F_s$ is well defined. Let $c=\sup_{s\in[-1,1]}{{\ensuremath{|\!|F_s|\!|}}}_{L^2}$. For $|t|c\le \frac12$ and $v\in \ker L_s$ note that $$\begin{aligned}
{{\ensuremath{|\!|\sum_{n=1}^N t^n F_s^n(v)|\!|}}}
\le\sum_{n=1}^N |t|^n {{\ensuremath{|\!|F_s^n(v)|\!|}}}
\le\sum_{n=1}^N |t|^nc^n{{\ensuremath{|\!|v|\!|}}}
< 2|t|c{{\ensuremath{|\!|v|\!|}}}
\end{aligned}$$ so we may define $$\begin{aligned}
\xi_{s,t}(v)=\sum_{n=1}^\infty t^n F_s^n(v),
\end{aligned}$$ also satisfying ${{\ensuremath{|\!|\xi_{s,t}(v)|\!|}}}\le 2|t|c{{\ensuremath{|\!|v|\!|}}}$. Moreover, $$\begin{aligned}
D_{s,t}\circ F_s=D\circ F_s+tY_s\circ F_s=-Y_s+tY_s\circ F_s
\end{aligned}$$ and thus $$\begin{aligned}
D_{s,t}\left(v+\sum_{n=1}^N t^nF_s^n(v)\right)
&=&Dv+tY_s(v)+\sum_{n=1}^N\left(
t^{n+1}Y_sF_s^n(v)-t^nY_sF_s^{n-1}(v)\right)\\
&=&Dv+t^{N+1}Y_sF_s^N(v),
\end{aligned}$$ which converges strongly to $Dv$ (in $L^2$) as $N\rightarrow\infty$, so $$\begin{aligned}
D_{s,t}(v+\xi_{s,t}(v))=Dv,\qquad \forall v\in \ker L_s.
\end{aligned}$$ In particular $v+\xi_{s,t}(v)\in\ker D_{s,t}$ for all $v\in K_s$.
Next we show that $D_{s,t}$ is surjective. It suffices to show that the image of $D_{s,t}$ is dense, as $D_{s,t}$ is Fredholm and thus has a closed image. By Hahn–Banach, it suffices to show that there does not exist $0\ne\mu\in L^2(\Lambda^{0,1}(T^\ast
S^2\otimes_\setC N))$ that annihilates the image of $D_{s,t}$. Suppose to the contrary such a $\mu$ exists. Write $\mu=\mu_0+\mu_1$, where $\mu_0\in C$ and $\mu_1\in
C^\perp$. Without loss of generality assume that $\mu_1\ne0$, otherwise, for $\zeta=\tilde L_s^{-1}(\mu_0)\in V_s$, $$\begin{aligned}
\langle D_{s,t}\zeta,\mu\rangle
=\langle tY_s(\tilde L_s^{-1}(\mu_0)),\mu_0\rangle
=t\langle \tilde L_s(\tilde L_s^{-1}(\mu_0)),\mu_0\rangle
=t{{\ensuremath{|\!|\mu_0|\!|}}}^2=t{{\ensuremath{|\!|\mu|\!|}}}^2\ne 0.
\end{aligned}$$
Set $\zeta=P(\mu_1)-\tilde L_s^{-1}\circ L_s\circ P(\mu_1)\in \ker
L_s$ and consider $\zeta+\xi_{s,t}(\zeta)$. Then $$\begin{aligned}
\langle D_{s,t}(\zeta+\xi_{s,t}(\zeta)),\mu\rangle
=\langle D\zeta,\mu\rangle
=\langle \mu_1,\mu\rangle={{\ensuremath{|\!|\mu_1|\!|}}}^2\ne 0.
\end{aligned}$$ This shows that $D_{s,t}$ is surjective.
The uniqueness of $\xi_{s,t}(v)\in K_s^\perp$ satisfying $v+\xi_{s,t}(v)\in\ker D_{s,t}$ follows from the surjectivity of $D_{s,t}$.
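The construction of $\xi_{s,t}$ above is a geometric (Neumann-type) series, and the norm estimate is just the summed geometric tail. The following is a toy finite-dimensional illustration of that bound only; the matrix $F$, vector $v$, and constants are arbitrary choices satisfying $|t|c\le\frac12$, not objects from the lemma.

```python
# Toy model of the Neumann-series bound: xi = sum_{n>=1} t^n F^n v satisfies
# ||xi|| <= |t| c / (1 - |t| c) ||v|| <= 2 |t| c ||v||  whenever |t| c <= 1/2,
# where c bounds the operator norm of F.  Everything below is an arbitrary
# finite-dimensional stand-in for the compact operator F_s of the lemma.
def mat_vec(F, v):
    return [sum(F[i][j] * v[j] for j in range(len(v))) for i in range(len(F))]

def sup(v):  # sup-norm of a vector
    return max(abs(x) for x in v)

F = [[0.0, 0.3, 0.1],
     [0.2, 0.0, 0.3],
     [0.1, 0.2, 0.0]]          # row sums <= 0.5, so ||F||_sup <= c = 0.5
v = [1.0, -1.0, 0.5]
t, c = 0.4, 0.5                # |t| c = 0.2 <= 1/2

xi = [0.0, 0.0, 0.0]
Fn_v = v[:]
for n in range(1, 60):         # partial sums of the geometric series
    Fn_v = mat_vec(F, Fn_v)
    xi = [x + t ** n * w for x, w in zip(xi, Fn_v)]

print(sup(xi) <= 2 * abs(t) * c * sup(v))
```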
In particular the above Lemma guarantees that for any given $\delta>0$ there exists $t_0>0$ so that for all $t<t_0$ the regular operators $D_{s,t}$ are superregular for $s\in[\delta,1]$ and not superregular for $s\in[-1,-\delta]$. To see this, note that for $s\in[-1,-\delta]$ the quantity $e_4^s=s\,e_4+(1-s)e_5$ satisfies $\langle
e_4(0),e_4^s(0)\rangle<0$ and $\langle e_4(p_0),e_4^s(p_0)\rangle=1$ by Equation (\[eq:p\_0\]). Thus near $p_0$ the tuple $(e_1,e_2,e_3,e_4^s)$ forms an oriented basis of $\setR^4$ and at the point $0$ they form a basis with the opposite orientation. In particular there must be points in $S^2$ where the sections do not form a basis of $\setR^4$. This remains true under small perturbations of the tuple $(e_1,e_2,e_3,e_4^s)$.
A real–linear Cauchy–Riemann operator $D$ on $N$ gives rise to an $\setR$–invariant almost complex structure $J$ on the total space of $N$ in the following way. Choose a local complex trivialization $N=S^2\times \setC^2$ and write $D={{\ensuremath{\bar\partial }}}_0+\frac12 Y\circ j$, where $Y\in \mathrm{Hom}_\setR(\setC^2,\Lambda^{0,1}T^\ast
S^2\otimes\setC^2)$. Utilizing the projections to each factor $S^2$ and $\setC^2$ of $N$, referred to as the horizontal and vertical directions with complex structures $j$ and $i$, respectively, we define the almost complex structure $J$ at a point $x=(w,u)\in N$ acting on a vector $(h,v)\in T_xN$ via $$\begin{aligned}
J(h,v)=jh+Y_{(w,u)}h+iv.\end{aligned}$$ Note that $J$ is independent of the trivialization chosen and indeed satisfies $J^2=-\id$. Moreover, if $f:S^2\rightarrow N$ with $f(z)=(w(z),u(z))\in S^2\times \setC^2$ in the homology class of a section, then $$\begin{aligned}
{{\ensuremath{\bar\partial }}}_J f=\frac12\left\{df+J\,df\circ j_0\right\}
=\frac12\left\{dw+j\,dw\circ j_0\right\}
+\frac12\left\{du+i\,du\circ j_0+Y_{(w,u)} dw\circ j_0\right\}\end{aligned}$$ Thus for ${{\ensuremath{\bar\partial }}}_Jf=0$ it is necessary that $w(z)=z$ and $j=j_0$, up to a diffeomorphism of the domain $S^2$. In that case $$\begin{aligned}
{{\ensuremath{\bar\partial }}}_Jf=\frac12\left\{du+i\,du\circ j_0+Y_{(w,u)}\circ j\right\}
=D(u)\end{aligned}$$ so maps $f:S^2\rightarrow N$ in the class of a section are $J$–holomorphic if and only if they can be parametrized as a section $f(z)=(z,\xi(z))$ with $D\xi=0$. Moreover, note that the zero section is always a $J$–holomorphic section no matter what $D$ is, and that the linearization of ${{\ensuremath{\bar\partial }}}_J$ at the zero section is $D$.
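The identity $J^2=-\id$ in this construction rests on $Y$ being complex antilinear in the horizontal direction, which is exactly what taking values in $(0,1)$-forms means. A toy check with one complex dimension in each of the horizontal and vertical directions; the coefficient $a$ of the antilinear map is an arbitrary choice.

```python
# The almost complex structure J(h, v) = (j h, Y h + i v) squares to -id
# precisely because Y is complex ANTIlinear in h: Y(i h) = -i Y(h).
# Model base and fiber by C, and Y by h -> a * conj(h) for a hypothetical a.
a = 0.3 - 0.8j                      # arbitrary coefficient of the antilinear Y

def Y(h):
    return a * h.conjugate()        # complex antilinear in h

def J(h, v):
    return (1j * h, Y(h) + 1j * v)  # horizontal: j = i on C; vertical: i

h, v = 1.2 - 0.4j, -0.7 + 2.1j
h2, v2 = J(*J(h, v))
print(abs(h2 + h) < 1e-12 and abs(v2 + v) < 1e-12)  # J^2 = -id on this vector
```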
Let $\omega$ be the canonical product symplectic form on $N$ so that on each fiber it reduces to the Fubini-Study form and let $\tilde J$ be the canonical product complex structure on $N$ and $\tilde D$ the associated Cauchy–Riemann operator. Given any symplectic $4$-manifold $(M,\omega_M)$ there exists a symplectic embedding from $U$ into $(X,\omega)=(S^2 \times M, \sigma_0 \oplus \omega_M)$ preserving the $S^2$ factors, where $U$ is a suitable small neighborhood of the zero-section in $N$. Thus $\tilde J$ extends to a product complex structure on $X$ which is tamed by $\omega$, and $X$ is smoothly foliated by regular $\tilde J$-holomorphic spheres.
Let $D_s$, $s\in[-1,2]$ be a smooth family of real–linear Cauchy–Riemann operators on $N$ so that $D_s=D_{s,t}$ for some small fixed $t$ and $s\in[-1,1]$, where $D_{s,t}$ is the operator from Lemma \[lem:D\_st\], and $D_s$ interpolates between $D_1$ and $\tilde D$ for $s\in[1,2]$. Denote the associated family of almost complex structures by $J_s$. Note that $J_s$ are tamed by $\omega$ on a neighborhood $U$ of the zero section in $N$. We now modify the family $J_s$ to construct a family of almost complex structures $\tilde J_s$ on $N$ with the property that $\tilde J_s=\tilde J$ outside of $U$ and $\tilde J_s=J_s$ in an open neighborhood $V\subset U$ of the zero section so that $\tilde J_s$ is tamed by $\omega$. Using the above embedding we similarly construct the family $\tilde J_s$ on $X$.
The family $\tilde J_s$ is tamed by the canonical symplectic structure on $X=S^2\times M$, and $\tilde J_2$ is the product complex structure on $X$. Thus $\tilde J_2$ is regular (and superregular) and $X$ is foliated by $\tilde J_2$–holomorphic spheres. By construction $\tilde J_s$ is regular for all curves outside of $U$ and inside of $V$. By possibly adding a small perturbation to the family $\tilde
J_s$ over $U\setminus V\subset X$ we may assume that the family $\tilde J_s$ is a regular family of almost complex structures and that $\tilde J_{-1}$ is regular.
Since $\tilde J_s=\tilde J$ outside of $U$, the complement of $U$ is foliated by $\tilde J$–holomorphic spheres for all $s\in[-1,2]$. Moreover, the zero section is $\tilde J_s$–holomorphic for all $s\in[-1,2]$. But the linearized operator at the zero section is $D_s$, which is not superregular for $s=-1$ by construction, so the foliation does not persist to a $\tilde J_{-1}$–holomorphic foliation of $X$.
This proves Theorem \[thm:counterexample\] in the case that $(X,\omega)=(S^2\times M^4,\sigma_0\times \omega_M)$.
Stability of Foliations for Integrable Complex Structures {#sec:stab-foli-integr}
=========================================================
In this section we show that holomorphic foliations are stable under perturbations of complex structure so that the holomorphic bisectional curvature (see below or e.g. [@kobayashi_nomizu_2]) remains bounded.
Let $(M,J)$ be a complex manifold. The [*holomorphic bisectional curvature*]{} $H(p,p')$ of two $J$–invariant planes $p$ and $p'$ in $T_xM$ is $$\begin{aligned}
H(p,p')=R(X,JX,Y,JY)
\end{aligned}$$ where $R$ is the usual Riemannian curvature tensor and $X$ and $Y$ are unit vectors in $p$ and $p'$, respectively.
We say that $(M,J)$ has holomorphic bisectional curvature bounded from above (below) by a constant $c$ if $H(p,p')\le c$ ($H(p,p')\ge
c$) for all $x\in M$ and $J$–invariant planes $p,p'\subset T_xM$.
The following result is a computation from page 79 of [@griffiths_harris].
\[lem:quotient\_curvature\] Let $G\rightarrow M$ be a holomorphic vector bundle of (complex) rank at least 2 over a complex manifold $M$, and let $E\subset G$ be a holomorphic subbundle and $F=E^\perp$ the orthogonal complement of $E$ in $G$. Then the local curvature form of $F$ is greater than or equal to the curvature form of $G$ restricted to $F$.
\[lem:cur\_k\] Let $(X,\omega,J)$ be Kähler and let $u:S^2\rightarrow X$ be a $J$–holomorphic sphere. Assume that the holomorphic bisectional curvature of $(X,J)$ is bounded from below by $c>\pi k/\omega[u]$. Then the pullback bundle $u^\ast TX$ has no holomorphic line-subbundle with first Chern class less than or equal to $k$.
Let $F$ be a holomorphic line-subbundle of $u^\ast TX$, and let $E$ be a complementary holomorphic subbundle, which exists by a result of Grothendieck [@grothendieck] if the real dimension of $X$ is at least 4 and is taken to be empty otherwise. Let $E^\perp$ denote the orthogonal complement of $E$ in $u^\ast TX$ and denote the curvature of $E^\perp$ with respect to the connection induced by $u^\ast TX$ by $K$. By Lemma \[lem:quotient\_curvature\] we know that $K(\cdot)\ge u^\ast
H(\cdot,F)$ on any complex frame. Then $$\begin{aligned}
c_1(F)=c_1(E^\perp)=\frac1{2\pi}\int_{S^2}K
\ge\frac1{2\pi}\int_{S^2}u^\ast H(\cdot,F)
\ge\frac1{2\pi}c\int_{S^2}{{\ensuremath{|\!|du|\!|}}}^2\dvol
>k.
\end{aligned}$$
Recall Definition \[def:superregular\] for our use of the terms regular and superregular.
\[lem:superregular\] Let $(X,\omega,J)$ be Kähler so that the holomorphic bisectional curvature is bounded from below by $c>-2\pi/\omega(A)$, where $A\in
H_2(X;\setZ)$. Then any $J$–holomorphic sphere in the class of $A$ is regular.
If furthermore $u$ is immersed, $c_1(A)=2$, and the holomorphic bisectional curvature is bounded from below by $c>-\pi/\omega(A)$, then $u$ is also superregular.
Let $u:S^2\rightarrow X$ be a $J$–holomorphic curve representing $A$. By Lemma \[lem:cur\_k\], and using that $c>-2\pi/\omega(A)$, the pullback tangent bundle $u^\ast TX$ does not have a holomorphic line-subbundle with first Chern class less than $-1$. Thus $u$ is regular by Lemma 3.3.1 in [@mcduff2].
If furthermore $u$ is immersed and $c>-\pi/\omega(A)$, then every holomorphic line-subbundle of the normal bundle has first Chern class $\ge 0$. Since $u$ is immersed and $c_1(A)=2$, the first Chern class of the normal bundle is 0. Thus any holomorphic line-subbundle of the holomorphic normal bundle has first Chern class 0 and has a superregular basis. So the normal bundle to $u$ has a superregular basis and $u$ is superregular.
\[cor:integrable\_foln\] Let $\J_I^c$ be the space of integrable compatible complex structures on $(X^{2n},\omega)$ with holomorphic bisectional curvature bounded from below by $c>-2\pi/\omega(A)$, where $A\in H_2(X;\setZ)$ with $c_1(A)=2$. Further assume that there exists a $J_0$–holomorphic foliation of $X$ by spheres in the class of $A$ for some $J_0\in\J_I^c$.
Then $X$ is foliated by $J$–holomorphic spheres for any $J\in\J_I^c$ in the path–component of $J_0$.
Let $J=J_1\in\J_I^c$ be connected to $J_0$ via a path $\{J_t\}_{t\in
[0,1]}$ and let $\M$ denote the family space of $\{J_t\}_{t\in [0,1]}$–holomorphic spheres in the class of $A$ as in Equation (\[eq:M\]) and let $$\begin{aligned}
\M^1=\M\times_G S^2
\end{aligned}$$ denote the component of the one–pointed moduli space, modulo automorphisms.
By Lemma \[lem:superregular\] all curves in $\M^1$ are regular, so $\M^1$ is a smooth manifold (of dimension $2n+1$) and the projection onto the $[0,1]$–factor is a submersion.
Denote the connected component of $\M^1$ containing the initial $J_0$–holomorphic foliation by $\tilde M$ and let $\tilde
M_s=\{(u,p,t)\in \tilde M|\,t=s\}\subset \M^1$. Again, by Lemma \[lem:superregular\], all curves in $\tilde M_s$ are regular, so $\tilde M_s$ is a smooth manifold (of dimension $2n$). The evaluation map $ev_s:\tilde M_s\rightarrow X$ is holomorphic with respect to the natural complex structure on $\tilde M_s$. It has degree 1, since $ev_0$ is of degree 1 by assumption. Thus $ev_s$ is a diffeomorphism for all $s\in[0,1]$ and $\tilde M_s$ is a smooth foliation of $X$.
Note that any $J$–holomorphic sphere $u$ (for integrable $J$) that is part of a smooth foliation is automatically regular and superregular. Indeed, any line subbundle of the normal bundle of $u$ has non-negative first Chern class, since it has holomorphic sections induced by nearby curves. Since the first Chern class of the normal bundle at a leaf of a foliation is trivial all linear subbundles must have first Chern class 0, so the curve is regular and superregular.
Under more stringent curvature assumptions, we can prove the existence of foliations given conditions on a Gromov-Witten invariant. This is the substance of Theorem \[thm:integrable\] that we are now prepared to prove.
Let $J\in\J_I^c$ and let $\M$ be the space of $J$–holomorphic spheres in the class $A$. $\M$ is non-empty since the GW count is non-zero. Let $u\in\M$. By Lemma \[lem:cur\_k\] we know that all holomorphic linear subbundles of $u^\ast TX$ have first Chern class greater than or equal to 0 and $u$ is regular. We claim that $u$ is immersed. If not, then the first Chern class of the tangent bundle is at least 4. But by the dimension formula we know that $c_1(A)=2$, so the holomorphic normal bundle would contain a linear subbundle with first Chern class less than 0 which is impossible. Thus $u$ is superregular.
Since every $u\in\M$ is regular and superregular, the evaluation map is transverse to any $x\in X$ and the curves contribute positively to the GW count of a point class. Thus the evaluation map $ev:\M\rightarrow X$ has degree one and is holomorphic, so it is a diffeomorphism, showing that $X$ is foliated by embedded holomorphic spheres.
[Don02]{}
Miguel Abreu and Dusa McDuff, *Topology of symplectomorphism groups of rational ruled surfaces*, J. Amer. Math. Soc. **13** (2000), no. 4, 971–1009 (electronic). [MR ]{}[MR1775741 (2001k:57035)]{}
S. K. Donaldson, *Holomorphic discs and the complex [M]{}onge-[A]{}mpère equation*, J. Symplectic Geom. **1** (2002), no. 2, 171–196. [MR ]{}[MR1959581 (2003m:32037)]{}
P. Griffiths and J. Harris, *Principles of algebraic geometry*, John Wiley & Sons, 1978.
A. Grothendieck, *Sur la classification des fibrés holomorphes sur la sphère de [R]{}iemann*, Amer. J. Math. **79** (1957), 121–138. [MR ]{}[MR0087176 (19,315b)]{}
M. Gromov, *Pseudo holomorphic curves in symplectic manifolds*, Invent. Math. **82** (1985), no. 2, 307–347.
R. Hind, *Lagrangian spheres in [$S\sp 2\times S\sp 2$]{}*, Geom. Funct. Anal. **14** (2004), no. 2, 303–318. [MR ]{}[MR2060197 (2005g:53151)]{}
Morris W. Hirsch, *Immersions of manifolds*, Trans. Amer. Math. Soc. **93** (1959), 242–276. [MR ]{}[MR0119214 (22 \#9980)]{}
Tosio Kato, *Perturbation theory for linear operators*, Classics in Mathematics, Springer-Verlag, Berlin, 1995, Reprint of the 1980 edition. [MR ]{}[MR1335452 (96a:47025)]{}
Shoshichi Kobayashi and Katsumi Nomizu, *Foundations of differential geometry. [V]{}ol. [II]{}*, Wiley Classics Library, John Wiley & Sons Inc., New York, 1996, Reprint of the 1969 original, A Wiley-Interscience Publication. [MR ]{}[MR1393941 (97c:53001b)]{}
D. McDuff and D. Salamon, *${J}$-holomorphic curves and symplectic topology*, 2 ed., American Mathematical Society, 2004.
Clifford H. Taubes, *[${\rm SW}\Rightarrow{\rm Gr}$]{}: from the [S]{}eiberg-[W]{}itten equations to pseudo-holomorphic curves*, J. Amer. Math. Soc. **9** (1996), no. 3, 845–918. [MR ]{}[MR1362874 (97a:57033)]{}
---
abstract: 'For the purpose of searching for Lorentz-invariance violation in the minimal Standard-Model Extension, we perform a reanalysis of data obtained from the $^{133}$Cs fountain clock operating at SYRTE. The previous study led to new limits on eight components of the $\tilde{c}_{\mu \nu}$ tensor, which quantifies the anisotropy of the proton’s kinetic energy. We recently derived an advanced model for the frequency shift of hyperfine Zeeman transitions due to Lorentz violation and are now able to constrain the ninth component, the isotropic coefficient $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$, which is the least well-constrained coefficient of $\tilde{c}_{\mu \nu}$. This model is based on a second-order boost Lorentz transformation from the laboratory frame to the Sun-centered frame, and it yields an improvement of five orders of magnitude on $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$ compared to the state of the art.'
address:
- |
$^1$SYRTE, Observatoire de Paris, PSL Research University, CNRS\
Sorbonne Universités, UPMC Univ. Paris 06, LNE\
61 avenue de l’Observatoire, 75014 Paris, France
- |
$^2$Laboratoire Kastler Brossel, ENS-PSL Research University, CNRS\
UPMC-Sorbonne Universités, Collège de France, 75005 Paris, France
- '$^3$Embry-Riddle Aeronautical University, Prescott, Arizona 86301, USA'
author:
- 'H. Pihan-Le Bars,$^1$ C. Guerlin,$^{1,2}$ Q.G. Bailey,$^3$ S. Bize,$^1$ and P. Wolf$^1$'
title: |
Improved Tests of Lorentz Invariance in the Matter Sector using\
Atomic Clocks
---
![Schematic view of an atomic fountain.[@Bize2005]](lebars-fig1.eps){width="0.6\hsize"}
The $^{133}$Cs and $^{87}$Rb double fountain (see Fig. 1[@Bize2005]) was run in Cs mode on a combination of $\vert F=3, m_F \rangle \longleftrightarrow \vert F=4, m_F\rangle$ hyperfine transitions,[@Guena2012; @Guena2014] which have good sensitivity to the quadrupolar energy shift of the proton and a weak dependence on the first-order Zeeman effect. The combined observable $\nu_c$, built by measuring quasi-simultaneously the clock frequency for $m_F = + 3, -3, 0$, can be related to a model for hyperfine transitions in the minimal Standard-Model Extension (SME)[@Bluhm2003; @Kostelecky1999] and leads to the laboratory-frame SME model presented in Ref. . This observable depends on the proton’s laboratory-frame coefficient $\tilde{c}_q^p$, which is a combination of the $c_{\mu \nu}$ tensor components.
To search for a periodic modulation of the clock frequency, the laboratory coefficients must be expressed as functions of the Sun-centered frame coefficients.[@Kostelecky2002] This transformation is usually done via a first-order ($O(\beta)$) boost Lorentz transformation,[@Bluhm2003; @Kostelecky1999; @Wolf2006] but for the purpose of setting a limit on the isotropic coefficient $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$, which first appears at $O( \beta^2 )$ and is thus suppressed by a factor $\beta^2$, we develop an improved model using a second-order boost matrix (see also Ref. ). This model contains all the terms up to $O( \beta^2 )$, in contrast to Ref. , which kept the $O( \beta^2 )$ terms exclusively for $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$. We also include the annual frequency modulation, previously taken as constant.[@Wolf2006] The model now exhibits in total 13 frequency components (25 quadratures), instead of the 3 frequency components (5 quadratures) of the previous analysis.
We perform a complete least-squares adjustment of the $O( \beta^2 )$ model to the data used in Ref. . This model is fitted in the SME coefficient basis, which enables us to evaluate simultaneously the nine $\tilde{c}_{\mu \nu}$ coefficients for the proton and their respective correlations. It also avoids additional assumptions on parameter expectation values and underestimation of the uncertainties.[@HPB] The main systematic effects are related to the first- and second-order Zeeman effects. The second-order effect is responsible for an offset of the data from zero, assessed at $-2.2$ mHz, and the residual first-order Zeeman effect is calibrated via a least-squares fitting of the $O( \beta^2 )$ model to the time of flight of the atoms in the fountain.[@HPB; @Wolf2006]
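The least-squares adjustment amounts to linear regression of the frequency data onto cosine and sine quadratures at each modulation frequency, plus an offset. The following is only a schematic sketch of such a fit on synthetic data; the frequency, amplitudes, and sampling are invented for illustration and are not the SYRTE data or the actual 25-quadrature model.

```python
import math

# Synthetic residuals: offset + one modulation frequency (two quadratures).
# The real analysis fits 13 frequencies / 25 quadratures the same linear way.
omega = 2 * math.pi / 86164.0                        # sidereal rate (rad/s)
A_true, B_true, C_true = 1.5e-3, -0.7e-3, -2.2e-3    # invented amplitudes (Hz)

times = [i * 600.0 for i in range(500)]              # 500 points, 10 min apart
data = [A_true * math.cos(omega * t) + B_true * math.sin(omega * t) + C_true
        for t in times]

# Design-matrix columns: cos, sin, constant.  Solve the 3x3 normal equations.
cols = [[math.cos(omega * t) for t in times],
        [math.sin(omega * t) for t in times],
        [1.0] * len(times)]
N = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
rhs = [sum(c * y for c, y in zip(ci, data)) for ci in cols]

# Gaussian elimination (no pivoting needed: N is positive definite here).
for i in range(3):
    for j in range(i + 1, 3):
        f = N[j][i] / N[i][i]
        N[j] = [njk - f * nik for njk, nik in zip(N[j], N[i])]
        rhs[j] -= f * rhs[i]
x = [0.0, 0.0, 0.0]
for i in (2, 1, 0):                                  # back substitution
    x[i] = (rhs[i] - sum(N[i][j] * x[j] for j in range(i + 1, 3))) / N[i][i]

A_fit, B_fit, C_fit = x
print(abs(A_fit - A_true) < 1e-9, abs(B_fit - B_true) < 1e-9)
```

With noiseless synthetic data the quadrature amplitudes are recovered to machine precision; in the real analysis the same linear solve additionally yields the parameter covariance (and hence correlation) matrix.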
$\begin{array}{c c c c c c c}
\hline \hline
\text{Coefficient} & \text{ Measured value } & \multicolumn{3}{c}{\text{ Uncertainty }} & \text{Unit (GeV)}\\
& &\text{ Statistical} & \text{Systematic} &\text{ Total }&\\
\hline\vspace*{-3.2mm}\\
\tilde{c}_{\text{{\tiny \textsc{Q}}}}& -0.3 & 10^{-2} & 2.1 & 2.1 & 10^{-22} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{-}}}}& 1.4 & 0.7 & 8.9 & 9.0 & 10^{-24} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{X}}}}& -1.5 & 0.7 & 5.2 & 5.3 & 10^{-24} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{Y}}}}& 0.8 & 0.3 & 1.6 & 1.6 & 10^{-24} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{Z}}}}& 1.0 & 0.8 & 3.9 & 3.9 & 10^{-24} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{TX}}}}& -1.5 & 0.6 & 5.7 & 5.7 & 10^{-20} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{TY}}}}& 1.4 & 0.3 & 5.9 & 5.9 & 10^{-20} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{TZ}}}}& -1.1 & 0.2 & 3.5 & 3.5 & 10^{-20} \vspace*{-.6mm}\\
\tilde{c}_{\text{{\tiny \textsc{TT}}}}& 1.6 & 0.9 & 6.9 & 6.9 & 10^{-16} \vspace*{-.4mm}\\
\hline
\hline
\end{array}$
\[coeff\]
The bounds on $\tilde{c}_{\mu \nu}$ components obtained using the complete [$O( \beta^2 )$]{} model are presented in Table 1. They show an improvement by five orders of magnitude on $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$ compared to the state of the art.[@datatables] Despite our advanced model, the correlation matrix still contains large values (up to $0.95$), except for the $\tilde{c}_{\text{{\tiny \textsc{Q}}}}$ coefficient, which is almost decorrelated at this sensitivity level. This indicates that our marginalized uncertainties in Table 1 are dominated by those correlations, and could thus be significantly improved with more data spread over the year.
In conclusion, our improved model including $O\left( \beta^2 \right)$ terms and annual frequency modulations enables us to improve the present limits on the isotropic coefficient $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$ by 5 orders of magnitude. Furthermore, we expect that an additional data set would reduce the marginalized uncertainties and lead to an improvement by one extra order of magnitude on all the limits, bringing the constraint on $\tilde{c}_{\text{{\tiny \textsc{TT}}}}$ near one Planck-scale suppression, i.e. $10^{-17}$ GeV.
[xx]{}
S. Bize et al., J. Phys. B [**38**]{}, S449 (2005).
J. Guena et al., IEEE Trans. UFFC [**59**]{}, 391 (2012).
J. Guena et al., Metrologia [**51**]{}, 108 (2014).
P. Wolf et al., Phys. Rev. Lett. [**96**]{}, 060801 (2006).
V.A. Kostelecký and C.D. Lane, Phys. Rev. D [**60**]{}, 116010 (1999).
R. Bluhm et al., Phys. Rev. D [**68**]{}, 125008 (2003).
V.A. Kostelecký and M. Mewes, Phys. Rev. D [**66**]{}, 056005 (2002).
C. Guerlin et al., these proceedings.
M.A. Hohensee et al., Phys. Rev. Lett. [**111**]{}, 050401 (2013).
H. Pihan-Le Bars et al., in preparation.
V.A. Kostelecký and N. Russell, 2016 edition, arXiv:0801.0287v9.
---
abstract: 'In recent work, we considered the frequencies of patterns of consecutive primes ${\,\left(\text{mod }q\right)}$ and numerically found biases toward certain patterns and against others. We made a conjecture explaining these biases, the dominant factor in which permits an easy description but fails to distinguish many patterns that have seemingly very different frequencies. There was a secondary factor in our conjecture accounting for this additional variation, but it was given only by a complicated expression whose distribution was not easily understood. Here, we study this term, which proves to be connected to both the Fourier transform of classical Dedekind sums and the error term in the asymptotic formula for the sum of $\phi(n)$.'
address:
- 'Department of Mathematics, Tufts University'
- 'Department of Mathematics, Stanford University'
author:
- 'Robert J. Lemke Oliver'
- Kannan Soundararajan
bibliography:
- 'distribution.bib'
title: The distribution of consecutive prime biases and sums of sawtooth random variables
---
Introduction
============
Let $p_n$ denote the sequence of primes in ascending order. Given $q\ge 3$ and $\mathbf{a}=(a_1,\dots,a_r)$ satisfying $(a_i,q)=1$ for all $1\leq i \leq r$, in recent work [@LOS] we studied biases in the occurrence of the pattern $\mathbf{a}$ in strings of $r$ consecutive primes reduced ${\,\left(\text{mod }q\right)}$. Thus, we defined $$\pi(x;q,\mathbf{a}) := \#\{p_n\leq x: p_{n+i-1} \equiv a_i {\,\left(\text{mod }q\right)} \text{ for } 1\leq i \leq r\},$$ and conjectured that $$\label{eqn:bias}
\pi(x;q,\mathbf{a}) = \frac{\mathrm{li}(x)}{\phi(q)^r} \Big(1 + c_1(q;\mathbf{a}) \frac{\log\log x}{\log x} + c_2(q;\mathbf{a}) \frac{1}{\log x} + O((\log x)^{-7/4})\Big),$$ where $c_1(q;\mathbf{a})$ and $c_2(q;\mathbf{a})$ are certain explicit constants. The term $c_1(q;\mathbf{a})$ is easily described, $$c_1(q;\mathbf{a}) = \frac{\phi(q)}{2}\Big(\frac{r-1}{\phi(q)} - \#\{i\leq r-1: a_i \equiv a_{i+1} {\,\left(\text{mod }q\right)}\}\Big),$$ and it acts as a bias against immediate repetitions in the pattern $\mathbf{a}$. The term $c_2(q;\mathbf{a})$ is more complicated, and the goal of this paper is to understand its distribution. If $r\ge 3$ then $$c_2(q;\mathbf{a}) = \sum_{i=1}^{r-1} c_2(q;(a_i,a_{i+1})) + \frac{\phi(q)}{2}\sum_{j=1}^{r-2} \frac{1}{j}\Big(\frac{r-1-j}{\phi(q)} - \#\{i : a_i \equiv a_{i+j+1} {\,\left(\text{mod }q\right)}\}\Big),$$ so that it is sufficient to understand the case $r=2$; that is, $c_2(q;(a,b))$ with $(a,q)=(b,q)=1$.
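The bias encoded by $c_1(q;\mathbf{a})$ against immediate repetitions is easy to observe numerically. A minimal sketch for $q=3$ and $r=2$; the sieve bound of $2\times 10^5$ is an arbitrary choice.

```python
# Count patterns of pairs of consecutive primes mod 3.  The repeated
# patterns (1,1) and (2,2) occur noticeably less often than the
# alternating patterns (1,2) and (2,1), as the c_1(q; a) term predicts.
from collections import Counter

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

primes = [p for p in primes_up_to(200000) if p > 3]   # reduced residues mod 3
counts = Counter((primes[i] % 3, primes[i + 1] % 3)
                 for i in range(len(primes) - 1))
print(counts[(1, 2)] > counts[(1, 1)], counts[(2, 1)] > counts[(2, 2)])
```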
For the sake of simplicity, we shall confine ourselves to the case when $q$ is prime. For any character $\chi {\,\left(\text{mod }q\right)}$ we define $$\label{Adef}
A_{q,\chi} = \prod_{p\nmid q} \Big(1-\frac{(1-\chi(p))^2}{(p-1)^2}\Big).$$ Then the quantity $c_2(q;(a,b))$ is given by $$c_2(q;(a,a)) = \frac{q-2}{2} \log (q/2\pi),$$ and when $a\not \equiv b {\,\left(\text{mod }q\right)}$ by $$\label{eqn:non-diag}
c_2(q;(a,b)) = \frac{1}{2} \log \frac{2\pi}{q} + \frac{q}{\phi(q)} \sum_{\chi \neq \chi_0{\,\left(\text{mod }q\right)}} \Big( \overline{\chi}(b-a) +
\frac{1}{\phi(q)} (\overline{\chi}(b)-\overline{\chi}(a)) \Big) L(0,\chi)L(1,\chi) A_{q,\chi}.$$ The diagonal term $c_2(q;(a,a))$ is thus completely explicit, and of size $q\log q$. Our work here shows that the off-diagonal terms $c_2(q;(a,b))$ can also be large; usually they are of size about $q$, occasionally getting to size $q\log \log q$ (attaining both positive and negative values), which we believe is their maximal size.
Before stating our result, we make one more simplification. Define $$\label{Cdef}
C(k) =C(k;q)= \frac{1}{\phi(q)} \sum_{\chi\neq \chi_0 {\,\left(\text{mod }q\right)}} \overline{\chi(k)} L(0,\chi)L(1,\chi) A_{q,\chi}.$$ Since $A_{q,\chi} \ll 1$, $L(1,\chi) \ll \log q$, and (upon using the functional equation) $L(0,\chi) \ll \sqrt{q}\log q$, it follows that for $a\not\equiv b{\,\left(\text{mod }q\right)}$ $$\label{1.5}
\frac{c_2(q;(a,b))}{q} = C(b-a) + O\Big( \frac{(\log q)^2}{\sqrt{q}} \Big).$$ Thus for large $q$ it is enough to understand the distribution of $C(k)$ as $k$ varies over all non-zero residue classes ${\,\left(\text{mod }q\right)}$. Since $L(0,\chi) = 0$ for even characters $\chi$, only odd characters $\chi$ contribute to $C(k)$, and therefore $C(k) = - C(-k)$ is an odd function of $k$.
\[thm1\] (1) As $q\to \infty$ the distribution of $C(k)$ tends to a continuous probability distribution, symmetric around $0$. Precisely, there is a continuous function $\Phi_C$ with $\Phi_C(-x)+ \Phi_C(x) =1$ such that uniformly for all $x \in [-X,X]$ one has $$\frac{1}{q} \# \{ k{\,\left(\text{mod }q\right)}: \ C(k) \le \tfrac {e^{\gamma}}{2} x \} = \Phi_C(x) + o(1).$$
\(2) Uniformly for all $e \le x\le (\frac 12-\epsilon) \log \log q$ one has $$\exp( -A_1 e^x/x) \,\,\,\ge\,\,\, \frac{1}{q} \# \{ k {\,\left(\text{mod }q\right)}: \ C(k) \ge \tfrac{e^{\gamma}}{2} x \} \,\,\,\ge\,\,\, \exp(- A_2 e^{x}\log x)$$ for some positive constants $A_1$ and $A_2$.
\(3) For all large $q$, there exists $k {\,\left(\text{mod }q\right)}$ with $$-C(-k) = C(k) \ge \Big( \frac{e^{\gamma}}{4} -\epsilon\Big) \log \log q.$$
\(4) For all $k {\,\left(\text{mod }q\right)}$ we have $$C(k) \ll (\log q)^{\frac 23} (\log \log q)^2.$$
\(5) The values $C(k)$ have an “almost periodic" structure. Precisely, suppose $1\le m \le q/4$ is a multiple of every natural number below $B \ge 2$. Then $$\frac 1q \sum_{k {\,\left(\text{mod }q\right)}} |C(k) - C(k+m)|^2 \ll \frac{1}{B^{1-\epsilon}} + \frac{m}{q} \log B.$$
We make a few comments concerning Theorem \[thm1\] before proceeding to related results. In part 1, we believe that the distribution for $C(k)$ has a density, which is to say that $\Phi_C$ is in fact differentiable. Our proof falls just a little short of establishing this. In part 2, there is a gap between the upper and lower bounds for the tail frequencies. With a little more care, we can improve the lower bound there to $\exp(-A_3 e^x)$ for a suitable positive constant $A_3$, but there still remains a gap between the two bounds. The distribution of $C(k)$, and especially the double exponential decay seen in part 2, are reminiscent of the distribution of values of $L(1,\chi_d)$ (see [@GS]). Motivated by this analogy, or by extrapolating the lower bounds in part 2, we believe that in part 3 there should exist values of $C(k)$ as large as $(\frac{e^{\gamma}}{2} -\epsilon) \log \log q$. We also conjecture that $(\frac{e^{\gamma}}{2} +\epsilon) \log \log q$ should be the largest possible value of $C(k)$, which would be a substantial strengthening of part 4. Finally, in addition to the almost periodic structure given in part 5 (where $k$ varies), there should be an almost periodic structure as $q$ varies. That is, if $q_1$ and $q_2$ are two large random primes with $q_1-q_2$ being a multiple of the numbers below $B$, then $C(k;q_1)$ and $C(k;q_2)$ will be close to each other (on average over $k$). We hope that an interested reader will embrace some of these remaining problems.
While the quantity $C(k)$ is the main focus of this paper, closely related objects arise in two other seemingly unrelated contexts. The first of these concerns Dedekind sums. Let $\psi(x)$ denote the sawtooth function defined by $$\psi(x) = \begin{cases}
\{ x \} -1/2 &\text{ if } x \not \in \mathbf{Z}, \\
0 &\text{ if } x \in \mathbf{Z},
\end{cases}$$ which is an odd function, periodic with period $1$. If $q$ is prime and $a$ is a reduced residue ${\,\left(\text{mod }q\right)}$, then the Dedekind sum $s_q(a)$ is defined by $$\label{eqn:dedekind-def}
s_q(a) := \sum_{x{\,\left(\text{mod }q\right)}} \psi\Big(\frac{x}{q}\Big) \psi\Big(\frac{ax}{q}\Big).$$ The Dedekind sum arises naturally in number theory when studying the modular transformation properties of the Dedekind $\eta$-function, but it also appears in other contexts and satisfies many interesting properties [@Apostol; @Vardi]. We study here the discrete Fourier transform of the Dedekind sum $s_q(a)$. Thus for a prime $q$ and residue class $t{\,\left(\text{mod }q\right)}$ we define $$\label{eqn:fourier-transform-def}
\widehat{s}_q(t) := \frac{1}{q} \sum_{a{\,\left(\text{mod }q\right)}} s_q(a) e(at/q),$$ where $e(z) =e^{2\pi iz}$ throughout. In Lemma \[lem:dedekind-sum\] we shall see that $$\widehat{s}_q(t) = \frac{-1}{\pi i \phi(q)} \sum_{\chi\neq \chi_0{\,\left(\text{mod }q\right)}} \bar\chi(t) L(0,\chi) L(1,\chi),$$ so that $\widehat{s}_q(t)$ is indeed a simpler version of $C(k)$. An alternative useful expression is $$\label{eqn:dedekind-sawtooth}
\widehat{s}_q(t) = \frac{1}{\pi i} \sum_{\substack{n=1 \\ (n,q)=1}}^{\infty} \frac{\psi(t\overline{n}/q)}{n},$$ where $\overline n$ denotes the multiplicative inverse of the reduced residue class $n{\,\left(\text{mod }q\right)}$ and the sum converges since the partial sums $\sum_{n\le x, (n,q)=1} \psi(t\overline{n}/q)$ are bounded.
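These definitions are easy to exercise numerically. The following sketch (the function names and the truncation point are our own choices) computes $s_q(a)$ directly from the definition, forms the real number $\pi i\,\widehat{s}_q(t)$ via the discrete Fourier transform, and compares it against a truncation of the sawtooth series just displayed, which should agree up to an error of size $O(q/x)$:

```python
import cmath
from fractions import Fraction
from math import pi

def sawtooth(x: Fraction) -> Fraction:
    """psi(x) = {x} - 1/2 for non-integral x, and 0 at integers."""
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(a: int, q: int) -> Fraction:
    """s_q(a) = sum over x mod q of psi(x/q) psi(ax/q), exactly."""
    return sum(sawtooth(Fraction(x, q)) * sawtooth(Fraction(a * x, q))
               for x in range(q))

def pi_i_s_hat(t: int, q: int) -> float:
    """The real number pi*i*s_hat_q(t), where s_hat_q is the normalized
    discrete Fourier transform of the Dedekind sum."""
    total = sum(float(dedekind_sum(a, q)) * cmath.exp(2j * pi * a * t / q)
                for a in range(1, q))
    return (1j * pi * total / q).real

def sawtooth_series(t: int, q: int, x: int) -> float:
    """pi*i*s_hat_q(t) via the truncated series sum_{n<=x} psi(t*nbar/q)/n."""
    return sum(float(sawtooth(Fraction(t * pow(n, -1, q) % q, q))) / n
               for n in range(1, x + 1) if n % q != 0)
```

For small primes the transform and the truncated series agree to well within the truncation error, and the exact rational arithmetic also recovers classical evaluations such as $s_q(1) = (q-1)(q-2)/(12q)$.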
\[thm2\] (1) As $q\to \infty$ the distribution of $\pi i \widehat{s}_q(t)$ tends to a continuous probability distribution, symmetric around $0$. Precisely, there is a continuous function $\Phi_{s}$ with $\Phi_{s}(-x)+ \Phi_{s}(x) =1$ such that uniformly for all $x \in [-X,X]$ one has $$\frac{1}{q} \# \{ t{\,\left(\text{mod }q\right)}: \ \pi i \widehat{s}_q(t) \le \tfrac {e^{\gamma}}{2} x \} = \Phi_{ s}(x) + o(1).$$
\(2) Uniformly for all $e \le x\le (\frac 12-\epsilon) \log \log q$ one has $$\exp( -A_1 e^x/x) \,\,\, \ge\,\,\, \frac{1}{q} \# \{ t {\,\left(\text{mod }q\right)}: \ \pi i \widehat{s}_q(t) \ge \frac{e^{\gamma}}{2} x \} \,\,\,\ge\,\,\, \exp(- A_2 e^{x}\log x)$$ for some positive constants $A_1$ and $A_2$.
\(3) For all large $q$, there exists $t {\,\left(\text{mod }q\right)}$ with $$-\pi i \widehat{s}_q(-t) =\pi i \widehat{s}_q(t) \ge \Big( \frac{e^{\gamma}}{4} -\epsilon\Big) \log \log q.$$
\(4) For all $t {\,\left(\text{mod }q\right)}$ we have $$\widehat{s}_q(t) \ll (\log q)^{\frac 23} (\log \log q)^2.$$
\(5) The values $\widehat{s}_q(t)$ have an “almost periodic" structure. Precisely, suppose $1\le m \le q/4$ is a multiple of every natural number below $B \ge 2$. Then $$\frac 1q \sum_{t {\,\left(\text{mod }q\right)}} |\widehat{s}_q(t) - \widehat{s}_q(t+m)|^2 \ll \frac{1}{B^{1-\epsilon}} + \frac{m}{q} \log B.$$
Theorem \[thm2\] exactly parallels the results of Theorem \[thm1\], with the same deficiencies discussed there. The proofs of Theorems \[thm1\] and \[thm2\] are nearly identical, and so we give details only for Theorem \[thm1\].
Our third topic concerns the remainder term in the asymptotic for the mean value of Euler’s $\phi$-function. Define the quantity $R(x)$ by the relation $$\sum_{n\leq x} \phi(n) = \frac{3}{\pi^2} x^2 + R(x).$$ Simple arguments show that $R(x) \ll x \log x$, and Walfisz [@Walfisz] established that $R(x) \ll x (\log x)^{2/3} (\log\log x)^{4/3}$, which is presently the best known estimate. Montgomery [@Montgomery] conjectured that $R(x) \ll x \log\log x$ and $R(x) = \Omega_{\pm}(x \log\log x)$, and he showed that $R(x) = \Omega_\pm(x \sqrt{\log\log x})$. Key to Montgomery’s work is the expression $$R(x) = \frac{\phi(x)}{2} - x \sum_{n\leq x} \frac{\mu(n) \psi(x/n)}{n} + O\Big(x \exp(-c\sqrt{\log x})\Big)$$ for some positive constant $c$, where $\phi(x) = 0$ if $x\not\in\mathbf{Z}$. The sum in this expression is akin to the equation for $\widehat{s}_q(t)$ with $\overline{n}/q$ replaced by $1/n$ and with the weight $1/n$ replaced with $\mu(n)/n$. Accordingly, many of the techniques used to prove Theorems \[thm1\] and \[thm2\] apply to $R(x)$ as well, though unfortunately with less precision owing to the presence of $\mu(n)$. For convenience, we define $\widetilde{R}(x) = R(x)/x - \phi(x)/2x$.
\[thm3\] As $y\to \infty$ the distribution of $\widetilde{R}(u)$ for real $u\leq y$ tends to a probability distribution, symmetric around $0$. Precisely, there is a function $\Phi_R$ with $\Phi_R(-x)+ \Phi_R(x) =1$ such that uniformly for all $x \in [-X,X]$ one has $$\frac{1}{y} \mathrm{meas}(\{ u \leq y: \ \widetilde{R}(u) \le \tfrac {3e^{\gamma}}{\pi^2} x \}) = \Phi_R(x) + o(1),$$ where $\mathrm{meas}(I)$ denotes the Lebesgue measure of $I \subseteq \mathbf{R}$. Moreover, uniformly for all $e \le x\le (\frac 12-\epsilon) \log \log y$ one has $$\frac{1}{y} \mathrm{meas}(\{ u \leq y: \ \widetilde{R}(u) \ge \tfrac {3e^{\gamma}}{\pi^2} x \}) \leq \exp(-A_1 e^x/x)$$ for some positive constant $A_1$.
We prove Theorem \[thm3\] by showing that all positive integral moments of $\widetilde{R}(n)$ exist and are not too large. The moment calculation refines earlier work of Pillai and Chowla [@PillaiChowla] and Chowla [@Chowla], who computed the mean and variance respectively: $$\sum_{n\leq x} \widetilde{R}(n) = o(x) \quad \text{and} \quad \frac{1}{y}\int_0^y \widetilde{R}(u)^2 \,du \sim \frac{1}{2\pi^2}.$$ In Theorem \[thm3\], using Montgomery’s construction in his $\Omega$-result, we can obtain a lower bound for the frequency of large values of ${\widetilde R}(u)$ of the form $\exp(-e^{x^{2+\epsilon}})$, which is very far from the upper bound. We expect that there is a lower bound similar to that in Theorems \[thm1\] and \[thm2\] in this situation also, and this would be in keeping with Montgomery’s conjecture on the true size of ${\widetilde R}(u)$.
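As an empirical illustration of these statements (a sketch, not from the text: the sieve, the sampling scheme, and the tolerances are our own choices), one can sieve $\phi$, form $R(x)$, and observe both the Pillai–Chowla mean and the Chowla variance numerically. On each interval $(n,n+1)$ one has ${\widetilde R}(u) = S/u - cu$ with $S = \phi(1)+\cdots+\phi(n)$ and $c = 3/\pi^2$, so ${\widetilde R}(u)^2$ integrates in closed form piece by piece:

```python
from math import pi

def totients(limit: int) -> list:
    """Sieve Euler's phi function for 0..limit."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is prime
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

def rtilde_stats(limit: int):
    """Mean of Rtilde(n + 1/2) and (1/y) * integral_1^y Rtilde(u)^2 du for
    y = limit, where Rtilde(u) = R(u)/u at non-integral u and
    R(u) = sum_{m <= u} phi(m) - (3/pi^2) u^2.  On (n, n+1) we have
    Rtilde(u) = S/u - c*u, so the square integrates exactly per interval."""
    phi, c = totients(limit), 3 / pi ** 2
    S, mean, var = 0, 0.0, 0.0
    for n in range(1, limit):
        S += phi[n]
        u = n + 0.5
        mean += (S - c * u * u) / u
        var += (S * S * (1 / n - 1 / (n + 1)) - 2 * S * c
                + c * c * ((n + 1) ** 3 - n ** 3) / 3)
    return mean / (limit - 1), var / limit
```

At $y = 10^5$ the empirical mean is already close to $0$ and the empirical variance is close to the limiting value $1/(2\pi^2) \approx 0.0507$.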
Organization {#organization .unnumbered}
------------
Our main focus is the proofs of Theorems \[thm1\] and \[thm2\]. We establish preliminary results useful for both in Sections \[sec:prelim\] and \[sec:B\]. We then prove Theorem \[thm1\] in Sections \[sec:moments\]-\[sec:proofthm\]; since the proof of Theorem \[thm2\] follows along identical lines, we omit it. In Section \[sec:montgomery\], we discuss the modifications that lead to Theorem \[thm3\].
First steps {#sec:prelim}
===========
Here we establish some formulae for $\widehat{s}_q(t)$ and $C(k)$ which will be the basis for our subsequent work.
\[lem:dedekind-sum\] Let $q$ be prime. For any $(t,q)=1$, we have $$\widehat{s}_q(t) = \frac{-1}{\pi i \phi(q) }\sum_{\chi\neq \chi_0{\,\left(\text{mod }q\right)}} \bar\chi(t) L(0,\chi) L(1,\chi) = \frac{1}{\pi i} \sum_{\substack{n=1\\(n,q)=1}}^{\infty} \frac{\psi(t\overline{n}/q)}{n}.$$ Moreover, for any $x\ge 1$ we have $$\widehat{s}_q(t) = \frac{1}{\pi i} \sum_{\substack{n\le x \\ (n,q)=1}} \frac{\psi(t\overline{n}/q)}{n} + O\Big(\frac qx \Big).$$
For any non-principal character $\chi {\,\left(\text{mod }q\right)}$, we have (see, e.g., [@Washington Theorem 4.2]) $$\label{eqn:l-zero}
L(0,\chi) = -\sum_{a{\,\left(\text{mod }q\right)}} \chi(a) \psi(a/q).$$ Notice that $L(0,\chi)=0$ if $\chi$ is an even character, and that the right side of this formula evaluates to $0$ if $\chi$ is principal. The functional equation for odd characters gives $$L(1,\chi) = -\frac{\tau(\chi)\pi i}{q} L(0,\bar\chi),$$ where $\tau(\chi) = \sum_{m{\,\left(\text{mod }q\right)}} \chi(m) e(m/q)$ denotes the Gauss sum. Thus we obtain $$\begin{aligned}
\sum_{\chi\neq\chi_0{\,\left(\text{mod }q\right)}} \bar\chi(t)L(0,\chi)L(1,\chi)
&= -\frac{\pi i}{q} \sum_{\chi{\,\left(\text{mod }q\right)}} \tau(\chi) \Big|\sum_{a{\,\left(\text{mod }q\right)}} \chi(a)\psi\Big(\frac{a}{q}\Big)\Big|^2 \\
&= -\frac{\pi i}{q} \sum_{a,b,m {\,\left(\text{mod }q\right)}} e(m/q)\psi\Big(\frac{a}{q}\Big)\psi\Big(\frac{b}{q}\Big)\sum_{\chi{\,\left(\text{mod }q\right)}} \chi(am)\bar\chi(bt) \\
&= -\frac{\phi(q) \pi i}{q} \sum_{a,b \not\equiv 0 {\,\left(\text{mod }q\right)}} e\Big(\frac{tb\overline{a}}{q}\Big)\psi\Big(\frac{a}{q}\Big)\psi\Big(\frac{b}{q}\Big) \\
&= -\phi(q) \pi i\, \widehat{s}_q(t).\end{aligned}$$ The first identity in the lemma follows.
To obtain the second identity, note that, by the formula for $L(0,\chi)$ above and the orthogonality relation for characters, $$\begin{aligned}
\label{2.2}
-\sum_{\chi \neq\chi_0 {\,\left(\text{mod }q\right)}} \overline{\chi}(t) L(0,\chi) \sum_{n\le N}\frac{\chi(n)}{n}
&= \sum_{\chi {\,\left(\text{mod }q\right)}} \overline{\chi}(t) \sum_{a{\,\left(\text{mod }q\right)}} \chi(a) \psi(a/q) \sum_{n\le N} \frac{\chi(n)}{n} \nonumber \\
&= \phi(q) \sum_{\substack{n\le N \\ (n,q)=1}} \frac{1}{n} \psi(t\overline{n}/q). \end{aligned}$$ Letting $N \to \infty$, the second identity follows.
To obtain the truncated version, note that $$\Big| \sum_{\substack{n\le x \\ (n,q)=1} } \psi(t\overline{n}/q) \Big| \le q$$ trivially, and therefore $$\sum_{\substack{ n> x \\ (n,q)=1} } \frac{\psi(t\overline{n}/q)}{n} = \int_x^{\infty} \frac{1}{y^2} \sum_{x<n\le y} \psi(t\overline{n}/q) dy
\ll \frac{q}{x}.$$
Recall the definition of the Euler product $A_{q,\chi}$ given earlier. Expanding this product out, we find $$\label{Adef2}
A_{q,\chi}
= (2\chi(2)-\chi(2)^2) \prod_{p\nmid 2q} \Big(1-\frac{1}{(p-1)^2}\Big) \Big(1 + \frac{2\chi(p) - \chi(p)^2}{p^2-2p}\Big)
= C \sum_{n=1}^\infty a(n) \chi(2n).$$ Here $$\label{Adef3}
C = 2 \prod_{\substack{p\ge 3 \\ p \nmid q} } \Big(1- \frac{1}{(p-1)^2} \Big),$$ and $a(n)$ is a multiplicative function defined by $a(2)=-1/2$ and $a(2^v) =0$ for all $v\ge 2$, and for odd primes $p$ we have $$\label{Adef4}
a(p) = \frac{2}{p(p-2)}, \qquad a(p^2) = -\frac{1}{p(p-2)}, \qquad \text{and } \qquad a(p^v)= 0 \text{ for all } v \ge 3.$$ From the definition of $a(n)$ it is easy to check that $\sum_{n=1}^{\infty} |a(n)|n^{\sigma}$ converges for all $\sigma <1/2$ so that $$\label{Adef5}
\sum_{n\ge N} |a(n)| \ll N^{-\frac 12+\epsilon} \qquad \text{and } \qquad C \sum_{\substack{n\le N\\ (n,q)=1}} a(n) = 1 +O(N^{-\frac 12+\epsilon}).$$
\[lem2.2\] Define the multiplicative function $b(n)$ by setting $b(n) = \sum_{uv=n} a(u)/v$, so that $b(n)=0$ unless $n$ is odd and square-free, and $b(p) = 1/(p-2)$ for all odd primes $p$. Then for any natural number $N$ we have $$C(k) = - C \sum_{\substack{ n\le N \\ (n,q)=1}} b(n) \psi(k\overline{2n}/q) + O( q^{\frac 32+\epsilon} N^{-\frac 14+\epsilon}).$$
Arguing as in the proof of Lemma \[lem:dedekind-sum\] we find $$\label{2.65}
\frac{1}{\phi(q)} \sum_{\chi \neq \chi_0 {\,\left(\text{mod }q\right)}} \overline{\chi(k)} L(0,\chi) \sum_{\substack{n\le N \\ (n,q)=1}} b(n) \chi(2n) =
-\sum_{\substack{n\le N \\ (n,q)=1}} b(n) \psi(k \overline{2n}/q).$$ Now if $n =uv \le N$ then either $u\le \sqrt{N}$ or $v\le \sqrt{N}$ and $\sqrt{N} < u\le N/v$. Therefore $$\label{2.7}
\sum_{n\le N} b(n) \chi(2n) = \sum_{u\le \sqrt{N}} a(u) \chi(2u) \sum_{v\le N/u} \frac{\chi(v)}{v} + \sum_{v\le \sqrt{N}} \frac{\chi(v)}{v} \sum_{\sqrt{N} < u \le N/v} a(u) \chi(2u).$$ Bounding the partial sums of characters trivially, we find $$\label{2.9}
L(1,\chi) = \sum_{ n\le x} \frac{\chi(n)}{n} + \int_{x}^{\infty} \sum_{x<n\le y} \chi(n) \frac{dy}{y^2} = \sum_{n\le x} \frac{\chi(n)}{n} + O\Big( \frac{q}{x}\Big),$$ and so, using the tail estimates for $a(n)$, the first term in the decomposition above is $$\begin{aligned}
&\sum_{u\le \sqrt{N}} a(u) \chi(2u) \Big( L(1,\chi) + O\Big( \frac{qu}{N}\Big) \Big)\\
=&
C^{-1} A_{q,\chi}L(1,\chi) + O( (\log q)N^{-\frac 14+\epsilon}) + O(qN^{-\frac 12})\\
=& C^{-1} A_{q,\chi} L(1,\chi)+ O(qN^{-\frac 14+\epsilon}). \end{aligned}$$
As for the second term in the decomposition, using the tail estimate for $a(n)$ again we may bound it by $$\ll \sum_{v\le \sqrt{N}} \frac{1}{v} N^{-\frac 14+\epsilon} \ll N^{-\frac 14+\epsilon}.$$ We conclude that $$C\sum_{\substack{n\le N \\ (n,q)=1}} b(n) \psi(k \overline{2n}/q) = - \frac{1}{\phi(q)} \sum_{\chi\neq \chi_0 {\,\left(\text{mod }q\right)}} \overline{\chi(k)}
L(0,\chi) \Big( A_{q,\chi} L(1,\chi) + O(qN^{-\frac 14+\epsilon})\Big),$$ and since $L(0,\chi) \ll \sqrt{q} \log q$, the lemma follows.
Lemmas \[lem:dedekind-sum\] and \[lem2.2\] give crude approximations to $\widehat{s}_q(t)$ and $C(k)$ by long sums (for example taking $x=q^2$ in Lemma \[lem:dedekind-sum\], or taking $N= q^8$ in Lemma \[lem2.2\]). However, on average over $t$ or $k$, it is possible to approximate these quantities by very short sums.
\[lem2.3\] Let $1\le B< q$ be a real number. Then $$\frac{1}{\phi(q)} \sum_{k {\,\left(\text{mod }q\right)}} \Big| C(k) + C\sum_{n\le B} b(n) \psi(k\overline{2n}/q) \Big|^2 \ll B^{-1+\epsilon} ,$$ and $$\frac{1}{\phi(q)} \sum_{t {\,\left(\text{mod }q\right)}} \Big| \widehat{s}_q(t) - \frac{1}{\pi i} \sum_{n\le B} \frac{\psi(t\overline{n}/q)}{n} \Big|^2 \ll B^{-1+\epsilon} .$$
We shall content ourselves with proving the estimate for $C(k)$, the situation for ${\widehat{s}}_q(t)$ being entirely similar. Using the definition of $C(k)$ together with Lemma \[lem2.2\], we see that $$\begin{aligned}
&\frac{1}{\phi(q)} \sum_{k {\,\left(\text{mod }q\right)}} \Big| C(k) + C\sum_{n\le B} b(n) \psi(k\overline{2n}/q) \Big|^2 \\
=&
\frac{1}{\phi(q)} \sum_{k{\,\left(\text{mod }q\right)}} \Big| \frac{C}{\phi(q)} \sum_{\chi \neq \chi_0 {\,\left(\text{mod }q\right)}} \overline{\chi(k)} L(0,\chi) \sum_{\substack{
B< n \le q^{10} }} b(n) \chi(2n) + O(q^{-1+\epsilon}) \Big|^2.\end{aligned}$$ Using the orthogonality of characters to evaluate the sum over $k$, this is $$\ll \frac{1}{\phi(q)^2} \sum_{\chi \neq \chi_0 {\,\left(\text{mod }q\right)}} |L(0,\chi)|^2 \Big| \sum_{B < n \le q^{10}} b(n) \chi(n) \Big|^2 + q^{-2+\epsilon},$$ and using the truncated expansion of $L(1,\chi)$ together with the functional equation this is $$\ll \frac{1}{q} \sum_{\chi \neq \chi_0 {\,\left(\text{mod }q\right)}} \Big| \sum_{m\le q^2} \frac{\chi(m)}{m} \sum_{B< n \le q^{10}} b(n) \chi(n)\Big|^2 + q^{-2+\epsilon}.$$
Write temporarily $$\sum_{m\le q^2} \frac{\chi(m)}{m} \sum_{B< n \le q^{10}} b(n) \chi(n) = \sum_{B< n \le q^{12} } \frac{\alpha(n)}{n} \chi(n),$$ for some coefficients $\alpha(n) \ll n^{\epsilon}$. Then (including also the contribution of $\chi_0$ below) $$\frac{1}{q} \sum_{\chi \neq \chi_0 {\,\left(\text{mod }q\right)}} \Big|\sum_{B < n \le q^{12}} \frac{\alpha(n)}{n} \chi(n) \Big|^2
\ll \sum_{\substack{ B < n_1, n_2 \le q^{12} \\ n_1 \equiv n_2 {\,\left(\text{mod }q\right)}}} \frac{|\alpha(n_1) \alpha(n_2)|}{n_1 n_2}.$$ The terms with $n_1$, $n_2$ both below $q$ (so that $n_1 =n_2$) contribute $$\ll \sum_{B <n <q} \frac{n^{\epsilon}}{n^2} \ll B^{-1+\epsilon}.$$ The terms with $\max (n_1, n_2) \ge q$ contribute (assume without loss of generality that $n_2$ is the larger one) $$\ll q^{\epsilon} \sum_{B< n_1 \le q^{12}} \frac{1}{n_1} \sum_{\substack{ q < n_2 \le q^{12} \\ n_2 \equiv n_1{\,\left(\text{mod }q\right)}}} \frac{1}{n_2}
\ll q^{\epsilon} \log q \frac{\log q}{q} \ll q^{-1+\epsilon}.$$ Assembling these estimates, the lemma follows.
A key quantity {#sec:B}
==============
We shall study ${\widehat s}_q(t)$ and $C(k)$ by computing their moments, and the following key quantity will arise in this context. Let $\ell$ be a natural number, and suppose $n_1$, $\ldots$, $n_\ell$ are $\ell$ natural numbers. Then set $$\label{Bdef}
{\mathcal B}(n_1,\ldots, n_\ell) = \frac{1}{n_1\cdots n_\ell} \int_0^{n_1\cdots n_\ell} \prod_{j=1}^{\ell} \psi(x/n_j) dx.$$
\[propB\] The quantity $\mathcal{B}(n_1,\dots,n_\ell)$ satisfies the following properties.
\(1) If $\ell$ is odd then ${\mathcal B}(n_1,\ldots, n_\ell)=0$. For even $\ell$ we have $$\label{3.2}
{\mathcal B}(n_1,\ldots, n_\ell) = \Big( \frac{i}{2\pi}\Big)^{\ell} \sum_{\substack{k_1, \ldots , k_\ell \neq 0\\ \sum k_j/n_j =0 }} \frac{1}{k_1\cdots k_\ell},$$ where the sum is over all non-zero integers $k_j$, and this sum is absolutely convergent. In the case $\ell=2$ one has $${\mathcal B}(n_1,n_2) = \frac{(n_1,n_2)^2}{12 n_1n_2}.$$
\(2) If $p$ is a prime dividing $n_j$ and such that $p$ does not divide any other $n_i$, then $${\mathcal B}(n_1,\ldots,n_j,\ldots,n_\ell) = \frac{1}{p} {\mathcal B}(n_1,\ldots, n_j/p, \ldots, n_\ell).$$
\(3) If we write $n_1\cdots n_\ell = rs$ where $r$ and $s$ are coprime and $r$ is square-free while $s$ is square-full then $$|{\mathcal B}(n_1,\ldots, n_\ell)| \le 2^{-\ell}r^{-1}.$$
We begin by recalling the Fourier expansion of the sawtooth function. Note that ${\widehat \psi}(0)=0$ and for $k\neq 0$ we have $$\label{Fex1}
{\widehat \psi}(k) = \int_0^1 \psi(x) e(-kx) dx = \frac{1}{-2\pi i k} = \frac{i}{2\pi k},$$ and so $$\label{eqn:sawtooth-fourier}
\psi(x) = i \sum_{k\neq 0} \frac{e(kx)}{2\pi k}.$$ This series converges conditionally pointwise for each $x\not\in \mathbf{Z}$, and also in the $L^2$-sense. For any non-negative integer $N$, recall also the Fejer kernel $$\label{Fejer1}
K_N(x) = \sum_{j=-N}^{N} \Big( 1- \frac{|j|}{N+1} \Big) e(jx) = \frac{1}{N+1} \Big( \frac{\sin (\pi (N+1)x)}{\sin \pi x} \Big)^2.$$ We shall find it convenient to replace $\psi(x)$ by the approximation $\psi_N(x)$ defined by $$\label{Fex3}
\psi_N(x) = i \sum_{0<|k|\le N} \frac{e(kx)}{2\pi k}\Big(1-\frac{|k|}{N+1}\Big).$$ Note that $\psi_N$ is the convolution of $\psi$ with the Fejer kernel $K_N$ $$\psi_N(x) = \int_{0}^{1} \psi(y) K_N(x-y) dy,$$ and so $$\label{Fex4}
|\psi_N(x) - \psi(x)| \ll \min \Big( 1, \frac{1}{N \Vert x \Vert}\Big),$$ which implies that $$\label{Fex5}
\int_0^1 | \psi_N(x) - \psi(x)| dx \ll \frac{1+ \log N}{N}.$$ Note also that $|\psi_N(x)| \le 1/2$ always.
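The approximation properties of the Fejer-smoothed sawtooth are easy to confirm numerically; here is a quick sketch (function names ours), using the equivalent sine-series form of $\psi_N$ obtained by pairing $k$ with $-k$:

```python
import math

def psi(x: float) -> float:
    """Sawtooth {x} - 1/2 (x assumed non-integral here)."""
    return x - math.floor(x) - 0.5

def psi_fejer(x: float, N: int) -> float:
    """psi_N(x): the Fourier series of psi with Fejer weights 1-|k|/(N+1);
    pairing k with -k turns the complex exponential sum into a sine series."""
    return -sum(math.sin(2 * math.pi * k * x) * (1 - k / (N + 1))
                / (math.pi * k) for k in range(1, N + 1))
```

The uniform bound $|\psi_N| \le 1/2$ reflects that $\psi_N$ is $\psi$ convolved with the non-negative Fejer kernel, while away from the jumps $\psi_N$ tracks $\psi$ to within roughly $1/(N\Vert x\Vert)$.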
Since $\psi$ is an odd function, it is clear that ${\mathcal B}(n_1,\ldots, n_\ell) =0$ for odd $\ell$. Now suppose $\ell$ is even. By Parseval it follows that $$\label{Pars1}
\frac{1}{n_1\cdots n_\ell} \int_0^{n_1\cdots n_\ell} \psi_N(x/n_1)\cdots \psi_N(x/n_\ell) dx = \Big(\frac{i}{2\pi}\Big)^{\ell}
\sum_{\substack{0<|k_j| \le N \\ \sum k_j/n_j =0 }}
\frac{1}{k_1\cdots k_\ell} \prod_{j=1}^{\ell} \Big(1- \frac{|k_j|}{N+1}\Big).$$ For any complex numbers $\alpha_1$, $\ldots$, $\alpha_\ell$ and $\beta_1$, $\ldots$, $\beta_\ell$ note the simple identity $$\label{identity}
\alpha_1\cdots \alpha_\ell - \beta_1 \cdots \beta_\ell = (\alpha_1 -\beta_1)\alpha_2\cdots \alpha_\ell+ \beta_1 (\alpha_2-\beta_2) \alpha_3\cdots \alpha_\ell + \beta_1\cdots \beta_{\ell-1} (\alpha_\ell -\beta_\ell).$$ Applying this, we obtain $$|\psi(x/n_1) \cdots \psi(x/n_\ell) - \psi_{N}(x/n_1)\cdots \psi_{N}(x/n_{\ell})| \le \frac{1}{2^{\ell -1}} \sum_{j=1}^{\ell} |\psi(x/n_j) -
\psi_{N}(x/n_j)|,$$ and so by the $L^1$ bound for $\psi_N - \psi$ above we conclude that $$\begin{aligned}
\label{Pars2}
{\mathcal B}(n_1,\ldots,n_\ell) &=\frac{1}{n_1\cdots n_\ell} \int_0^{n_1\cdots n_\ell} \psi(x/n_1)\cdots \psi(x/n_\ell) dx \nonumber \\
&= \Big(\frac{i}{2\pi}\Big)^{\ell} \sum_{\substack{0<|k_j| \le N \\ \sum k_j/n_j =0 }}
\frac{1}{k_1\cdots k_\ell} \prod_{j=1}^{\ell} \Big(1- \frac{|k_j|}{N+1}\Big) + O\Big( \frac{1+\log N}{N}\Big). \end{aligned}$$
We now show that $$\sum_{\substack{0<|k_j| \le N \\ \sum k_j/n_j =0 }} \frac{1}{|k_1\cdots k_\ell|}$$ is bounded, so that the formula above will imply (letting $N\to \infty$) the stated formula for ${\mathcal B}(n_1,\ldots,n_\ell)$ and that the sum there converges absolutely. By Parseval $$\sum_{\substack{0<|k_j| \le N \\ \sum k_j/n_j =0 }} \frac{1}{|k_1\cdots k_\ell|}
= \frac{1}{n_1\cdots n_{\ell}} \int_0^{n_1 \cdots n_\ell} \prod_{j=1}^{\ell} \Big( \sum_{0< |k_j| \le N} \frac{e(k_j x/n_j)}{|k_j|} \Big) dx.$$ One may check that (with $\Vert x\Vert$ denoting the distance of $x$ from the nearest integer) $$\sum_{0< |k| \le N} \frac{e(k\theta)}{|k|} \ll \log \min \Big( N, \frac{1}{\Vert \theta \Vert}\Big) \ll \log \frac{N}{1+N\Vert \theta\Vert}.$$ Using this and the arithmetic-geometric mean inequality above, we find $$\begin{aligned}
\sum_{\substack{0<|k_j| \le N \\ \sum k_j/n_j =0 }} \frac{1}{|k_1\cdots k_\ell|} &\ll
\sum_{j=1}^{\ell} \frac{1}{n_1\cdots n_\ell} \int_0^{n_1 \cdots n_\ell} \Big( \log \frac{N}{1+N \Vert x/n_j\Vert} \Big)^{\ell} dx
\\
&\ll \int_0^1 \Big( \log \frac{N}{1+N\Vert x\Vert} \Big)^{\ell} dx \ll 1.\end{aligned}$$ This proves our claim, and establishes the expansion in Part 1. If $\ell =2$ then the condition $k_1/n_1+k_2/n_2 =0$ means that $k_1= r n_1/(n_1,n_2)$ and $k_2 = -r n_2/(n_1,n_2)$ for some non-zero integer $r$. Therefore $${\mathcal B}(n_1, n_2) = -\frac{1}{4\pi^2} \sum_{r\neq 0} \frac{-1}{r^2} \frac{(n_1,n_2)^2}{n_1n_2} = \frac{(n_1,n_2)^2}{12 n_1n_2}.$$
If $p$ divides $n_j$ and no other $n_i$, then, in , $k_j$ must necessarily be a multiple of $p$. Cancelling $p$ from $k_j$ and $n_j$, Part 2 follows. Part 3 follows from Part 2, and noting that $|{\mathcal B}(n_1,\ldots,n_\ell)| \le 2^{-\ell}$ always.
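The case $\ell=2$ can be verified by exact computation: between consecutive multiples of $n_1$ or $n_2$ the integrand is a quadratic polynomial, which can be integrated exactly from three interior sample points (chosen interior so that the jump convention of $\psi$ at integers does not matter). A sketch in rational arithmetic (function names ours):

```python
from fractions import Fraction

def psi(x: Fraction) -> Fraction:
    """Sawtooth {x} - 1/2 at non-integers, 0 at integers."""
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def B2(n1: int, n2: int) -> Fraction:
    """Exact B(n1, n2) = (1/(n1 n2)) int_0^{n1 n2} psi(x/n1) psi(x/n2) dx.
    Between consecutive multiples of n1 or n2 the integrand is quadratic,
    so it is integrated exactly from its values at the 1/4, 1/2 and 3/4
    points of each piece: int_a^b = (b-a)/3 * (2 q1 - q2 + 2 q3)."""
    L = n1 * n2
    pts = sorted(set(range(0, L + 1, n1)) | set(range(0, L + 1, n2)))
    total = Fraction(0)
    for a, b in zip(pts, pts[1:]):
        h = Fraction(b - a, 4)
        q1, q2, q3 = (psi((a + j * h) / n1) * psi((a + j * h) / n2)
                      for j in (1, 2, 3))
        total += Fraction(b - a, 3) * (2 * q1 - q2 + 2 * q3)
    return total / L
```

The output agrees exactly with the closed form $(n_1,n_2)^2/(12\,n_1 n_2)$.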
For computing the moments of ${\widehat{s}}_q(t)$ and $C(k)$ the following proposition, which connects correlations of the sawtooth function with ${\mathcal B}$, will be very useful.
\[propdiscrete\] Let $n_1$, $\ldots$, $n_\ell$ be positive integers. Define $K=n_1\cdots n_{\ell}/\min(n_1, \ldots, n_{\ell})$. If $K< q/\ell$ then $$\frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} \psi(k\overline{n_1}/q) \cdots \psi (k\overline{n_{\ell}}/q) = {\mathcal B}(n_1,\ldots, n_\ell) + O\Big( \frac{\ell K}{q} \log \Big(\frac{eq}{K}\Big) \Big).$$
Take $N= \lfloor q/(\ell K)\rfloor$. The product-difference identity above gives $$\begin{aligned}
\sum_{k{\,\left(\text{mod }q\right)}} | \psi(k\overline{n_1}/q) \cdots \psi (k\overline{n_{\ell}}/q) &-\psi_N(k\overline{n_1}/q) \cdots \psi_N(k\overline{n_\ell}/q)| \\
&\le \frac{1}{2^{\ell-1}} \sum_{j=1}^{\ell} \sum_{k{\,\left(\text{mod }q\right)}} |\psi(k\overline{n_j}/q) - \psi_N(k\overline{n_j}/q)|.\end{aligned}$$ Using now the pointwise bound for $\psi - \psi_N$, the above is $$\label{propdiscrete1}
\ll \frac{1}{2^{\ell}} \sum_{j=1}^{\ell} \sum_{k{\,\left(\text{mod }q\right)}} \min \Big( 1 ,\frac{1}{N \Vert k\overline{n_j}/q\Vert} \Big)
\ll \frac{q}{N} \log (eN).$$
By Parseval $$\label{propdiscrete2}
\frac{1}{q} \sum_{k {\,\left(\text{mod }q\right)}} \psi_N(k\overline{n_1}/q) \cdots \psi_N(k\overline{n_\ell}/q) = \Big(\frac{i}{2\pi}\Big)^{\ell} \sum_{\substack{0<|k_j|\le N \\ \sum_j k_j \overline{n_j} \equiv 0 {\,\left(\text{mod }q\right)}}} \frac{1}{k_1\cdots k_\ell} \prod_{j=1}^{\ell} \Big(1- \frac{|k_j|}{N+1}\Big),$$ which bears a striking resemblance to the expansion of ${\mathcal B}(n_1,\ldots,n_\ell)$ established in the proof of Proposition \[propB\]. With our choice for $N$, we claim that in fact the right side here is exactly equal to that expression. Multiplying through by $n_1\cdots n_\ell$, the congruence $\sum k_j \overline{n_j} \equiv 0 {\,\left(\text{mod }q\right)}$ becomes $\sum_j k_j (n_1\cdots n_\ell/n_j)
\equiv 0{\,\left(\text{mod }q\right)}$. Since $|k_j| < q/(\ell K)$ and $(n_1 \cdots n_\ell/n_j ) \le K$ for all $j$, it follows that $|\sum_{j} k_j (n_1\cdots n_\ell/n_j) | < q$ so that the congruence becomes the equality $\sum_j k_j (n_1\cdots n_\ell/n_j) =0$, which is the same as the criterion $\sum_j k_j/n_j=0$ there. Combining this observation with the two estimates above, our proposition follows.
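Proposition \[propdiscrete\] lends itself to a direct numerical check: for a largish prime $q$ the discrete correlation should be within the stated error of ${\mathcal B}(n_1,n_2) = (n_1,n_2)^2/(12\,n_1 n_2)$. A sketch (the prime and the tolerance are our choices):

```python
def sawtooth(num: int, den: int) -> float:
    """psi(num/den) = {num/den} - 1/2, and 0 when den divides num."""
    r = num % den
    return 0.0 if r == 0 else r / den - 0.5

def discrete_corr(n1: int, n2: int, q: int) -> float:
    """(1/q) * sum over k mod q of psi(k*inv(n1)/q) psi(k*inv(n2)/q)."""
    i1, i2 = pow(n1, -1, q), pow(n2, -1, q)
    return sum(sawtooth(k * i1, q) * sawtooth(k * i2, q)
               for k in range(q)) / q
```

In practice the agreement is far better than the worst-case error term in the proposition.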
The moments of ${\widehat s}_q(t)$ and $C(k)$ {#sec:moments}
=============================================
We now state our main result on computing the moments of ${\widehat s}_q(t)$ and $C(k)$.
\[thm4.1\] Let $q$ be a prime, and $\ell$ a natural number. Then, uniformly in the range $\ell\le \sqrt{\log q}/\log \log q$, $$\label{4.1}
\frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} C(k)^\ell = M_C(\ell) + O(q^{-1/(20\ell \log \ell)} ),$$ where $$M_C(\ell) = C^\ell \sum_{n_1, \ldots, n_\ell \ge 1} b(n_1) \cdots b(n_\ell) {\mathcal B}(n_1,\ldots, n_\ell).$$ The quantity $M_C(\ell)$ equals zero for all odd $\ell$, and for even $\ell$ satisfies $$\label{4.2}
\frac{e^{\gamma}}{2} (\log\ell - \log \log \ell +O(1)) \le M_C(\ell)^{\frac 1\ell} \le \frac{e^{\gamma}}{2} \log \ell + O(1).$$
\[thm4.2\] Let $q$ be a prime, and $\ell$ a natural number. Then, uniformly in $\ell$, $$\frac{1}{q} \sum_{t {\,\left(\text{mod }q\right)}} (\pi i {\widehat s}_q(t))^\ell = M_s(\ell) + O(q^{-1/(20\ell\log\ell)} ),$$ where $$M_s(\ell) = \sum_{n_1, \ldots, n_\ell \ge 1} \frac{ {\mathcal B}(n_1,\ldots, n_\ell)}{n_1\cdots n_\ell} .$$ The quantity $M_s(\ell)$ equals zero for all odd $\ell$, and for even $\ell$ satisfies $$\frac{e^{\gamma}}{2} (\log \ell - \log \log \ell +O(1)) \le M_s(\ell)^{\frac 1\ell} \le \frac{e^{\gamma}}{2} \log \ell + O(1).$$
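For orientation, the case $\ell=2$ of the quantity in Theorem \[thm4.2\] can be made completely explicit: $M_s(2) = \sum_{n_1,n_2} (n_1,n_2)^2/(12\, n_1^2 n_2^2)$, and splitting according to $d=(n_1,n_2)$ gives $M_s(2) = \zeta(2)^3/(12\,\zeta(4)) = 5\pi^2/144 \approx 0.3427$ (this evaluation is ours, included only as a sanity check). A short numerical confirmation:

```python
from math import gcd, pi

def second_moment_partial(N: int) -> float:
    """Partial sum over n1, n2 <= N of B(n1, n2)/(n1 n2), using the
    closed form B(n1, n2) = gcd(n1, n2)^2 / (12 n1 n2)."""
    total = 0.0
    for n1 in range(1, N + 1):
        for n2 in range(1, N + 1):
            g = gcd(n1, n2)
            total += g * g / (12.0 * n1 * n1 * n2 * n2)
    return total
```

The tail beyond $N$ is of size roughly $(\log N)/N$, so a few hundred terms already land close to $5\pi^2/144$.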
We confine ourselves to proving Theorem \[thm4.1\], and the proof of Theorem \[thm4.2\] follows along similar lines. In the rest of this section, we establish the asymptotic formula for the moments and the upper bound on $M_C(\ell)^{1/\ell}$; the lower bound needs more work, and will be treated in the next section.
Since $C(-k) = -C(k)$ the odd moments of $C(k)$ vanish. When $\ell$ is odd, ${\mathcal B}(n_1,\ldots, n_\ell) =0$ and so the quantity $M_C(\ell)$ is also zero here. In what follows, we may therefore assume that $\ell$ is an even natural number.
Let $1\le B\le q$ be a parameter to be chosen shortly. Note that $$\begin{aligned}
&\Big| C(k)^\ell - \Big( -C\sum_{n\le B} b(n)\psi\Big(\frac{k\overline{2n}}q\Big)\Big)^\ell \Big|\\
\le & \Big| C(k) +C \sum_{n\le B} b(n) \psi\Big(\frac{k\overline{2n}}q\Big)\Big| \cdot \sum_{j=0}^{\ell-1} |C(k)|^j \Big|C \sum_{n\le B}
b(n) \psi\Big(\frac{k\overline{2n}}q\Big)\Big|^{\ell-1-j} \\
\le &(C_0 \log q)^{\ell-1} \Big| C(k) + C \sum_{n\le B} b(n) \psi\Big(\frac{k\overline{2n}}q\Big) \Big| ,\end{aligned}$$ for some absolute constant $C_0$. By Cauchy-Schwarz and Lemma \[lem2.3\], $$\frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} \Big| C(k) +C \sum_{n\le B} b(n) \psi\Big(\frac{k\overline{2n}}q\Big) \Big| \ll B^{-\frac 12+\epsilon}.$$ We choose $B= q^{1/\ell}$, and (in the range $\ell \le \sqrt{\log q}/\log \log q$) deduce that $$\frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} C(k)^\ell = \frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} \Big(C\sum_{n\le B} b(n) \psi\Big(\frac{k\overline{2n}}{q} \Big) \Big)^\ell + O(q^{-\frac{1}{4\ell}}).$$ Expand out the main term above, replace $k {\,\left(\text{mod }q\right)}$ by $2k {\,\left(\text{mod }q\right)}$, and appeal to Proposition \[propdiscrete\] with $K$ there being $\le q^{(\ell-1)/\ell}$. It follows that $$\label{4.3}
\frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} C(k)^\ell = C^{\ell} \sum_{n_1,\ldots, n_\ell \le q^{1/\ell} } b(n_1)\cdots b(n_\ell) {\mathcal B}(n_1,\ldots,n_\ell)
+ O( q^{-\frac{1}{4\ell}}).$$
It remains now to bound the difference between the main term above and the expression for $M_C(\ell)$, which is $$\le C^{\ell} \sum_{n > q^{1/\ell}} \sum_{n_1\cdots n_\ell = n} b(n_1) \cdots b(n_\ell) | {\mathcal B}(n_1,\ldots, n_\ell)| \le
(C/2)^{\ell} \sum_{n > q^{1/\ell}} \sum_{n_1\cdots n_\ell = n} b(n_1) \cdots b(n_\ell) \frac{1}{\mathrm{sf}(n)},$$ where $\mathrm{sf}(n)$ is the largest squarefree divisor $d$ of $n$ that is coprime to $n/d$. We estimate the sum above by Rankin’s trick; with $\alpha= 1/(10\log \ell)$ the above is $$\begin{aligned}
&\le (C/2)^{\ell} q^{-\alpha/\ell} \sum_{n=1}^{\infty} \sum_{n_1\cdots n_\ell = n} b(n_1) \cdots b(n_\ell) \frac{n^{\alpha}}{\mathrm{sf}(n)}
\\
&\le e^{O(\ell)} q^{-\alpha/\ell} \prod_{p\ge 3} \Big( 1+ \frac{\ell p^{\alpha}}{p(p-2)} + \sum_{j=2}^{\ell} \binom{\ell}{j} \frac{p^{j\alpha}}{(p-2)^j}
\Big),\end{aligned}$$ upon recalling the definition of $b(n)$. The contribution of primes $p\le \ell$ to the product above is $$\le \prod_{3\le p\le \ell} \Big(1 +\frac{p^{\alpha}}{p-2} \Big)^\ell \le (\log \ell)^{\ell} e^{O(\ell)},$$ while the contribution of primes $p>\ell$ to the product above is $$\ll \prod_{p> \ell} \exp\Big( O\Big(\frac{\ell^2 p^{2\alpha}}{p^2} \Big) \Big) = e^{O(\ell)}.$$ We conclude that the difference between the main term above and the expression for $M_C(\ell)$ is $$\ll (\log \ell)^{\ell} e^{O(\ell)} q^{-\alpha/\ell} \ll q^{-1/(20\ell \log \ell)},$$ completing the proof of the asymptotic formula in Theorem \[thm4.1\].
Note that $$\begin{aligned}
M_C(\ell) &\le C^\ell \sum_{n_1, \ldots, n_\ell} b(n_1)\cdots b(n_{\ell})|{\mathcal B}(n_1,\ldots, n_\ell)|
\le (C/2)^{\ell} \sum_{n_1,\ldots, n_{\ell}} \frac{b(n_1)\cdots b(n_\ell)}{\mathrm{sf}(n)} \\
&\le (C/2)^{\ell} \prod_{p\ge 3} \Big( 1 +\frac{\ell}{p(p-2)} + \sum_{j=2}^{\ell} \binom{\ell}{j} \frac{1}{(p-2)^j} \Big). \end{aligned}$$ The contribution of primes $p\le \ell$ is $$\le \prod_{3 \le p \le \ell} \Big( 1+ \frac{1}{p-2} \Big)^{\ell} = \Big( \prod_{3 \le p \le \ell} \Big( 1 -\frac{1}{(p-1)^2}\Big)^{-1} \Big(1-\frac{1} {p}\Big)^{-1}\Big)^{\ell} = C^{-\ell} (e^{\gamma} \log \ell + O(1) )^{\ell},$$ upon using Mertens’s theorem. The contribution of primes $p> \ell$ is $$\exp \Big( \sum_{p > \ell} O \Big(\frac{\ell^2}{p^2} \Big) \Big) = \exp \Big( O\Big( \frac{\ell}{\log \ell}\Big)\Big),$$ and so the upper bound in Theorem \[thm4.1\] follows.
Completing the proof of Theorem \[thm4.1\]: the lower bound
===========================================================
To obtain the lower bound on $M_C(\ell)$ we take an indirect approach, working with a continuous model that has the same moments as $C(k)$. Let $B$ be a positive integer, and let $L(B)$ denote the least common multiple of the natural numbers $n\le B$. For a real number $x$, define $$C(x;B) = C \sum_{n\le B} b(n) \psi(x/n).$$ It follows readily that $$\frac{1}{L(B)} \int_0^{L(B)} C(x;B)^\ell dx = C^{\ell} \sum_{n_1, \ldots, n_{\ell} \le B} b(n_1) \cdots b(n_\ell) {\mathcal B}(n_1,\ldots, n_{\ell} ),$$ so that $$\label{5.1}
M_{C}(\ell) = \lim_{B\to \infty} \frac{1}{L(B)} \int_0^{L(B)} C(x;B)^{\ell} dx.$$ We shall obtain a lower bound for the right side of this limit; naturally, we may assume that $\ell$ is even and large.
Suppose that $B>\ell$, and put $\ell_0 = \ell/\log \ell$. Let ${\mathcal I}$ denote the subset of $[0,L(B)]$ consisting of points $x= k L(\ell_0) -y$ with $1\le k \le L(B)/L(\ell_0)$, and $0< y\le 1/10$. Let $\psi^+(t) = \psi(t)$ whenever $t$ is not an integer, and $\psi^+(t) = 1/2$ when $t$ is an integer. Then for $x=k L(\ell_0)-y\in {\mathcal I}$ note that $$C(x;B) = C \sum_{n\le B} b(n) \psi((k L(\ell_0) -y)/n) = C\sum_{n \le B} b(n) \Big( \psi^+ (kL(\ell_0)/n) - y/n\Big).$$ Since, for $n\le B$, $$\sum_{k=1}^{L(B)/L(\ell_0)} \psi^{+}\Big( \frac{kL(\ell_0)}{n} \Big) = \frac 12 \frac{L(B)}{L(\ell_0)} \frac{(n,L(\ell_0))}{n},$$ it follows that (note $|{\mathcal I}| = L(B)/(10L(\ell_0))$) $$\frac{1}{|{\mathcal I}|} \int_{\mathcal I} C(x;B) dx = \frac{C}{2} \sum_{n\le B} b(n) \frac{(n,L(\ell_0))-1/10}{n},$$ and therefore by Hölder’s inequality that $$\frac{1}{L(B)} \int_0^{L(B)} C(x;B)^\ell dx \ge \frac{1}{10L(\ell_0)} \frac{1}{|{\mathcal I}|} \int_{\mathcal I} C(x;B)^{\ell}dx \ge
\frac{1}{10L(\ell_0)} \Big(\frac C2 \sum_{n\le B} b(n) \frac{(n,L(\ell_0))-1/10}{n}\Big)^{\ell}.$$ Now letting $B\to \infty$, we find from the limit formula above that $$M_C(\ell) \ge \frac{1}{10 L(\ell_0)} \Big( \frac C2\sum_{n=1}^{\infty} \frac{b(n) (n,L(\ell_0))}{n} +O(1)\Big)^{\ell} \ge e^{-O(\ell_0)} \Big(\frac C2 \prod_{3\le p\le \ell_0}
\Big(1 + \frac{1}{p-2}\Big) + O(1) \Big)^{\ell},$$ upon using the prime number theorem to estimate $L(\ell_0)$, and recalling the definition of $b$. Now $$\frac C2 \prod_{3\le p\le \ell_0} \Big(1+\frac{1}{p-2}\Big) = \prod_{3 \le p\le \ell_0} \Big(1-\frac 1p\Big)^{-1} \Big( 1 + O\Big( \frac{1}{\ell_0}\Big)\Big) =
\frac{e^{\gamma}}{2} \log \ell_0 +O(1), $$ and therefore the lower bound in Theorem \[thm4.1\] follows.
Proof of Theorem \[thm1\] {#sec:proofthm}
=========================
Theorem \[thm4.1\] shows that all the moments of $C(k)$ exist, and do not grow too rapidly. The moment generating function $\sum_{\ell =0}^{\infty} x^\ell M_{C}(\ell)/\ell!$ converges for all $x$, and therefore the sequence of moments $M_C(\ell)$ uniquely determines a distribution, which is the limiting distribution for $C(k)$. Since $C(k) = -C(-k)$, the limiting distribution is clearly symmetric around $0$.
To gain an understanding of this limiting distribution, and to establish its continuity, it is helpful to think of the continuous model $C(x;B)$ discussed in Section 5. Consider the characteristic function (that is, Fourier transform) of $C(x;B)$; namely $${\Bbb E}( e^{it C(x,B)}) = \frac{1}{L(B)} \int_0^{L(B)} e^{itC(x,B)} dx.$$ Omit the measure zero set of integers $x$, and write $x=k-y$ with $1\le k\le L(B)$ and $0< y < 1$. Then, with $\psi^+$ as in Section 5 and $C^+(x;B) = C\sum_{n\leq B} b(n) \psi^+(x/n)$, we have $C(x;B) = C^+(k;B) - yC \sum_{n\le B} b(n)/n$, and so $$\label{6.0}
\frac{1}{L(B)} \int_0^{L(B)} e^{itC(x,B)} dx
=\frac{1}{L(B)} \sum_{k=1}^{L(B)} e^{it C^+(k,B)} \int_{0}^{1} e^{-ityC \sum_{n\le B} b(n)/n} dy \ll \frac{1}{1+|t|}.
Given an interval $I = (\alpha-\epsilon, \alpha+\epsilon)$ with $\epsilon <1/2$, we can readily find a majorant $\Psi(x)$ of the indicator function of $I$, with $|{\widehat \Psi}(x)| \ll \epsilon/ (1+(\epsilon x)^2)$. For example take $\Psi(x) =
\max(2 - |x-\alpha|/\epsilon, 0)$, which is a relative of the Fejer kernel. Then by Fourier inversion $$\begin{aligned}
\frac{1}{L(B)} \int_{\substack{x\in [0, L(B)] \\ C(x,B) \in I } } dx &\le
\frac{1}{L(B)} \int_{0}^{L(B)} \Psi(C(x,B)) dx \\
&= \int_{-\infty}^{\infty} {\widehat \Psi}(t) {\Bbb E}(e^{it C(x,B)}) dt
\ll \int_{-\infty}^{\infty} \frac{1}{1+|t|} \frac{\epsilon}{1+(\epsilon t)^2} dt \ll \epsilon \log (1/\epsilon).
\end{aligned}$$ Therefore $C(x,B)$ has a continuous distribution, and the continuity is uniform in $B$, so that letting $B\to \infty$, we conclude that the limiting distribution for $C(k)$ is also continuous.
Since Part 3 follows upon taking $x =( \frac 12-\epsilon)\log \log q$ in Part 2, it is enough to prove Part 2. For any even $\ell \le \sqrt{\log q}/\log \log q$, we see using Theorem \[thm4.1\] that $$\frac{1}{q} \# \{ k{\,\left(\text{mod }q\right)}: C(k) \ge \frac{e^{\gamma}}{2} x \} \,\,\le\,\, \Big(\frac{e^{\gamma}}{2} x\Big)^{-\ell} (M_{C}(\ell) +o(1))
\,\,\ll\,\, \Big( \frac{\log \ell +O(1)}{x} \Big)^\ell.$$ Choosing $\ell$ to be an even integer around $A e^{x}$ for a suitably small positive constant $A$, the upper bound in Part 2 follows.
To establish the lower bound in Part 2, note that for even $\ell \le \sqrt{\log q}/(2\log \log q)$, we have by Theorem \[thm4.1\] $$\label{6.1}
\Big( \frac{e^{\gamma}}{2} (\log \ell -\log \log \ell + O(1))\Big)^{\ell} \ll \frac 1q \sum_{k {\,\left(\text{mod }q\right)}} C(k)^\ell.$$ The contribution from terms $k$ with $|C(k)| \le \frac{e^{\gamma}}{2} (\log \ell -\log \log \ell -A)$ for a suitably large constant $A$ is clearly negligible compared to the right side of (6.1). The contribution from terms $k$ with $|C(k)|
\ge \frac{e^{\gamma}}{2} (\log \ell +\log \log \ell +A)$ for a suitably large constant $A$ is $$\begin{aligned}
&\le \Big( \frac{e^{\gamma}}{2} (\log \ell +\log \log \ell +A)\Big)^{-\ell} \frac{1}{q} \sum_{k {\,\left(\text{mod }q\right)}} C(k)^{2\ell}
\\
&\ll \Big( \frac{e^{\gamma}}{2} (\log \ell +\log \log \ell +A)\Big)^{-\ell} \Big(\frac{e^{\gamma}}{2} \log \ell +O(1) \Big)^{2\ell}, \end{aligned}$$ upon using Theorem \[thm4.1\] to estimate the $2\ell$-th moment. If $A$ is suitably large, then this too is negligible in comparison to the right side of (6.1). Therefore it is the terms with $|C(k)|$ lying between $ \frac{e^{\gamma}}{2} (\log \ell -\log \log \ell -A)$ and $ \frac{e^{\gamma}}{2} (\log \ell +\log \log \ell +A)$ that account for the bulk of the contribution to (6.1), and so $$\begin{aligned}
&\Big( \frac{e^{\gamma}}{2} (\log \ell +\log \log \ell +A)\Big)^\ell \,\,\frac{1}{q} \# \{ k: |C(k)| \ge \tfrac{e^{\gamma}}{2} (\log \ell -\log \log \ell -A)\}\\
\gg &\Big( \frac{e^{\gamma}}{2} (\log \ell -\log \log \ell + O(1))\Big)^\ell.\end{aligned}$$ Choosing $\ell$ of size $xe^{x}$, the lower bound in Part 2 follows.
First suppose that $C(k)$ is negative. From [@Montgomery10] (Chapter 1, page 6) we recall that for each natural number $K$ there is a trigonometric polynomial $$B_K(x) = \frac{1}{2(K+1)} + \sum_{1\le |j|\le K} c_j e(jx)$$ with $c_j \ll 1/j$, such that $B_K(x) \ge \psi(x)$ for all $x$. Using Lemma \[lem2.2\] with $N=q^8$ we obtain $$0 \le -C(k) = C\sum_{\substack{n\le q^8 \\ (n,q)=1}} b(n) \psi(k\overline{2n}/q) + O(1) \le C \sum_{n\le q^{8}} b(n) B_K(k\overline{2n}/q) + O(1).$$ Thus, for some positive constant $A$, $$\label{6.2}
-C(k) \le A \Big(1+ \frac{1}{K+1} \sum_{n\le q^{8}} b(n) + \sum_{1\le |j| \le K} \frac 1j \Big| \sum_{\substack{ n\le q^{8}\\ (n,q)=1}} b(n) e\Big( \frac{kj \overline{2n}}{q}\Big)\Big|\Big).$$ At this stage, we need the following result which follows from work of Bourgain and Garaev [@BG] (refining earlier work of Karatsuba [@Ka]; see also Korolev [@Ko]).
\[lem6.1\] Let $q$ be a prime, and $a$ be any integer coprime to $q$. Then for all $N\ge 1$ $$\Big| \sum_{\substack{n\le N \\ (n,q)=1}} \frac 1n e\Big(\frac{a\overline{n}}{q}\Big) \Big| \ll (\log q)^{\frac 23} (\log \log q)^2.$$
Theorem 16 of Bourgain and Garaev [@BG] gives $$\Big| \sum_{n\le x} e\Big( \frac{a\overline{n}}{q}\Big) \Big| \ll \frac{x}{(\log x)^{\frac 32}} \log q (\log \log q)^3.$$ Partial summation using this bound for $x\ge \exp((\log q)^{\frac 23} (\log \log q)^2)$, and the trivial bound (that the sum is at most $x$) for smaller $x$ yields the lemma.
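The cancellation in such sums over modular inverses can be observed directly. The following sketch (a numerical illustration only; the modulus $q=10007$, the residue $a=3$, and the threshold $x/2$ are ad hoc choices, not quantities from the text) computes an incomplete sum of $e(a\overline{n}/q)$ and checks that it is far below the trivial bound:

```python
import cmath

q, a = 10007, 3   # q a prime, a coprime to q (arbitrary illustrative choices)
x = q // 2

# Incomplete sum of e(a * nbar / q) over n <= x, where nbar is the inverse of n mod q.
S = sum(cmath.exp(2j * cmath.pi * a * pow(n, -1, q) / q) for n in range(1, x + 1))

# Square-root-type cancellation: |S| is much smaller than the trivial bound x.
assert abs(S) < x / 2
```

The three-argument `pow(n, -1, q)` computes the modular inverse (Python 3.8+); the observed modulus of $S$ is of size roughly $\sqrt{q}$ times logarithmic factors, in line with the completion method behind Lemma \[lem6.1\].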
Returning to (6.2), take $K=\lfloor \log q\rfloor$ there. Then the right side of (6.2) is (recalling the definition $b(n) = \sum_{uv=n} a(u)/v$) $$\ll 1 + \sum_{j\le K} \frac 1j \sum_{\substack{ u\le q^8 \\ (u,q)=1}} |a(u)| \Big| \sum_{\substack{v\le q^{8}/u \\ (v,q)=1}} \frac 1v e \Big(
\frac{kj\overline{2uv}}{q}\Big) \Big| \ll (\log q)^{\frac 23} (\log \log q)^3,$$ using Lemma \[lem6.1\] and since $\sum_{n} |a(n)| \ll 1$. This proves that $-C(k) \le A (\log q)^{\frac 23} (\log \log q)^3$, which is the desired bound in the case $C(k)$ negative. Arguing similarly with a minorant for $\psi(x)$ instead of a majorant, leads to the same bound for $C(k)$ in the case when it is positive.
Applying Lemma \[lem2.3\] we find that $$\frac{1}{q} \sum_{k {\,\left(\text{mod }q\right)}} |C(k) -C(k+m)|^2 \ll B^{-1+\epsilon} + \frac{1}{q} \sum_{k{\,\left(\text{mod }q\right)}} \Big| \sum_{n\le B} b(n) \Big(\psi\Big(\frac{(k+m)\overline{2n}}{q}
\Big) - \psi\Big(\frac{k\overline{2n}}{q}\Big)\Big)\Big|^2.$$ Using Cauchy-Schwarz the second term above is $$\label{6.4}
\ll \frac 1q \Big(\sum_{n\le B} b(n) \Big) \sum_{n\le B} b(n) \sum_{k{\,\left(\text{mod }q\right)}} \Big(\psi\Big( \frac{k + m\overline{2n}}{q} \Big) -\psi\Big(\frac{k}{q}\Big)\Big)^2,$$ where in the inner sum we replaced $k$ by $2kn$. Since $|\psi((k+a)/q) - \psi(k/q)|\le |a|/q$ unless there is an integer between $k/q$ and $(k+a)/q$, we may check that $$\frac 1q \sum_{k {\,\left(\text{mod }q\right)}} \Big( \psi \Big( \frac{k+a}{q}\Big) - \psi\Big( \frac kq\Big) \Big)^2 \ll \frac{a}{q}.$$ Since $m$ is a multiple of all numbers up to $B$ (and recalling that $b(n)=0$ unless $n$ is odd), we may write $m\overline{2n} = qr +a$ with $a=m/(2n)$. Therefore the quantity in (6.4) is $$\ll (\log B) \sum_{n\le B} \frac{m}{nq} \ll \frac{m}{q} \log B,$$ completing our proof.
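The elementary mean-square bound for the shifted sawtooth used above is easy to check numerically. In the sketch below (with the usual sawtooth $\psi(x)=\{x\}-\tfrac12$, an arbitrary prime $q=1009$, and the constant $2$ as one admissible choice for the implied constant, neither taken from the text):

```python
import math

def psi(x):
    # sawtooth function psi(x) = {x} - 1/2
    return x - math.floor(x) - 0.5

q = 1009  # a prime, chosen arbitrarily for the check
for a in (1, 5, 50, 200):
    s = sum((psi((k + a) / q) - psi(k / q)) ** 2 for k in range(q)) / q
    # mean square of the shift is O(a/q); 2 is an admissible constant here
    assert s <= 2 * a / q
```

The two contributions are visible in the computation: $q-a$ values of $k$ where the difference is exactly $a/q$, and $a$ wrap-around values where $\psi$ jumps by $1$, each group contributing at most $a/q$ to the mean square.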
Proof of Theorem \[thm3\] {#sec:montgomery}
=========================
As in the proofs of Theorems \[thm1\] and \[thm2\], the main step is to compute the moments of $\widetilde{R}(u)$. The proof of Theorem \[thm3\] then follows in exactly the same way as the corresponding parts of Theorem \[thm1\].
\[thm:R-moments\] There is a positive number $c<1$ such that uniformly for all natural numbers $\ell$ in the range $\ell \leq \frac{c}{9}{\sqrt{\log y}}/{\log\log y}$, we have $$\frac{1}{y} \int_0^y \widetilde{R}(u)^\ell \, du = M_R(\ell) + O\Big(\exp\Big(-\frac{c}{8}\sqrt{\log y}\Big)\Big),$$ where $$M_R(\ell) = \sum_{n_1,\dots,n_\ell} \frac{\mu(n_1)\dots\mu(n_\ell)}{n_1\dots n_\ell} \mathcal{B}(n_1,\dots,n_\ell).$$ For odd $\ell$, $M_R(\ell)=0$, while $M_R(2) = 1/2\pi^2$ and for even $\ell \geq 4$ we have $$M_R(\ell) \leq \Big(\frac{3e^\gamma}{\pi^2} \log \ell + O(1) \Big)^{\ell}.$$
We begin with a lemma, which will allow us to truncate ${\widetilde R}(u)$ by a short sum of sawtooth functions.
\[lem:mont\] For all $1\le N\le y$ we have $$\sum_{N < n_1, n_2 \le 2N } \Big| \frac 1y \int_0^y \psi(x/n_1) \psi(x/n_2) dx \Big| \ll (\log y)^2 \Big( N + N^2 \frac{\sqrt{N}}{\sqrt{y}}\Big).$$
Let $K \geq 2$ be a parameter to be chosen shortly, and let $\psi_K(x)$ be as in . First note that $$\frac 1y \int_0^y |\psi(x/n_1)\psi(x/n_2) - \psi_K(x/n_1)\psi_K(x/n_2)| dx
\le \frac 1y \int_0^y \sum_{j=1}^{2} |\psi(x/n_j)-\psi_K(x/n_j)| dx \ll \frac{1}{K},$$ upon using , and since $n_1$ and $n_2$ are at most $N\le y$. Next, from the Fourier expansion of $\psi_K$ (see ) it follows that $$\begin{aligned}
\frac{1}{y}\Big| \int_0^y \psi_K(x/n_1) \psi_K(x/n_2) dx \Big|
&\ll \sum_{0< |k_1|, |k_2| \le K} \frac{1}{|k_1k_2|} \Big|\frac 1y \int_0^y e\Big(x \Big(\frac{k_1}{n_1} + \frac{k_2}{n_2}\Big) \Big) dx \Big|
\\
&\ll \sum_{0< |k_1|, |k_2| \le K} \frac{1}{|k_1k_2|} \min \Big( 1, \frac{1}{y|k_1/n_1+k_2/n_2|}\Big). \end{aligned}$$ From these two estimates it follows that the sum to be bounded is $$\ll \frac{N^2}{K} + \sum_{0< |k_1|, |k_2| \le K} \frac{1}{|k_1 k_2|} \sum_{N < n_1, n_2 \le 2N} \min \Big( 1, \frac{1}{y|k_1/n_1 + k_2/n_2|}\Big).$$
To estimate the sum above, we split the terms into two groups: those with $|k_1/n_1+k_2/n_2| \ge K/y$ and those terms with $|k_1/n_1+k_2/n_2| < K/y$. The first group contributes $$\ll \frac {N^2}K \sum_{0< |k_1|, |k_2| \le K} \frac{1}{|k_1 k_2|} \ll \frac{N^2}{K} (\log K)^2.$$ Terms in the second group only exist for $k_1$ and $k_2$ of opposite sign, and here $|k_1n_2 + k_2 n_1| \ll KN^2/y$, so that if $k_1$, $n_1$, and $k_2$ are fixed, then $n_2$ has $\ll 1 + KN^2/y$ choices. Therefore the second group contributes $$\ll \Big(1 + \frac{KN^2}{y}\Big) N \sum_{0< |k_1|, |k_2| \le K} \frac{1}{|k_1 k_2|} \ll (\log K)^2 N \Big(1+ \frac{KN^2}{y}\Big).$$ Choosing $K= 2\lceil \sqrt{y/N} \rceil$, the lemma follows.
From Theorem 1 and Lemma 1 of [@Montgomery] (but beware of the changes in notation, especially that his sawtooth function differs from ours in sign) it follows that with $N= y\exp(-c\sqrt{\log y})$ for a suitable positive constant $c<1$, one has $${\widetilde R}(u) = -\sum_{n\le N} \frac{\mu(n)}{n} \psi(u/n) + O(\exp(-c\sqrt{\log y})),$$ for all $N\le u \le y$. Since ${\widetilde R}(u)$ and the sum over $n$ above are $\ll \log y$, it follows that for $\ell \le
\frac c9 \sqrt{\log y}/\log \log y$ $$\label{7.1}
\frac 1y \int_0^y {\widetilde R}(u)^{\ell} du = \frac{(-1)^{\ell}}{y} \int_0^y \Big( \sum_{n\le N} \frac{\mu(n)}{n}\psi(u/n)\Big)^{\ell} du +
O(\exp(-\tfrac c2 \sqrt{\log y} ) ).$$ Now applying we see that $$\begin{aligned}
\label{7.2}
\frac{1}{y} \int_0^y \Big( \sum_{n\le N} \frac{\mu(n)}{n}\psi(u/n)\Big)^{\ell} du
&= \frac 1y \int_0^y \Big( \sum_{n\le y^{1/(2\ell)}} \frac{\mu(n)}{n}\psi(u/n)\Big)^{\ell} du \nonumber\\
&+
O \Big( \frac{\ell (\log y)^{\ell-1}}{y} \int_0^y \Big| \sum_{y^{1/(2\ell)} \le n \le N} \frac{\mu(n)}{n} \psi(u/n) \Big| du \Big). \end{aligned}$$
Expanding out, the main term in (7.2) is $$\begin{aligned}
& \sum_{n_1, \ldots, n_\ell \le y^{1/(2\ell)}} \frac{\mu(n_1) \cdots \mu(n_\ell)}{n_1\cdots n_\ell} \frac 1y \int_{0}^y \prod_{j=1}^{\ell} \psi(u/n_j) du \\
= & \sum_{n_1, \ldots, n_\ell \le y^{1/(2\ell)}} \frac{\mu(n_1) \cdots \mu(n_\ell)}{n_1\cdots n_\ell} ({\mathcal B}(n_1,\ldots, n_\ell) + O(n_1\cdots n_\ell)).
\end{aligned}$$ Arguing as in the proof of Theorem \[thm4.1\], this may be seen to equal $M_R(\ell) + O(y^{-1/(40\ell \log \ell)})$.
As for the remainder term in (7.2), splitting the terms $y^{1/(2\ell)} \le n\le N$ into dyadic blocks, we may bound this by $$\ll \exp(\tfrac c8\sqrt{\log y}) \max_{\substack{ y^{1/(2\ell)} \le M \le N \\ I \subset [M,2M]} }
\frac{1}{y} \int_0^y \Big| \sum_{n\in I} \frac{\mu(n)}{n} \psi(u/n) \Big| du,$$ where the maximum is over subintervals $I$ of $[M,2M]$. By Cauchy-Schwarz and Lemma \[lem:mont\], this is $$\ll \exp(\tfrac c8\sqrt{\log y}) \max_{\substack{ y^{1/(2\ell)} \le M \le N \\ I \subset [M,2M]} } (\log y) \Big( \frac{1}{M} + \frac{\sqrt{M}}{\sqrt{y}}\Big)^{\frac 12}
\ll \exp(-\tfrac c8 \sqrt{\log y}).$$ This justifies the first claim of the theorem. It is also clear that $M_R(\ell) =0$ for odd $\ell$, and the formula for $M_R(2)$ follows from our knowledge of ${\mathcal B}(n_1,n_2)$. Lastly, the claimed upper bound on $M_R(\ell)$ follows exactly as the upper bound for $M_C(\ell)$ in Theorem \[thm4.1\].
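The value $M_R(2)=1/(2\pi^2)$ admits a numerical sanity check. Assuming ${\mathcal B}(n_1,n_2)=\gcd(n_1,n_2)^2/(12\,n_1 n_2)$, which follows from the Fourier expansion of the sawtooth $\psi$ (an input to this check, not restated in the text above), a truncation of the defining double sum already lands close to $1/(2\pi^2)\approx 0.0507$; the cutoff $N=800$ and tolerance $0.01$ are ad hoc:

```python
from math import gcd, pi

# Approximate M_R(2) = sum_{n1,n2} mu(n1) mu(n2)/(n1 n2) * B(n1,n2),
# assuming B(n1,n2) = gcd(n1,n2)^2 / (12 n1 n2).
N = 800

# Moebius function mu(n) for n <= N, via a sieve over primes
mu = [1] * (N + 1)
has_prime_factor = [False] * (N + 1)
for p in range(2, N + 1):
    if not has_prime_factor[p]:           # p is prime
        for m in range(p, N + 1, p):
            has_prime_factor[m] = True
            mu[m] *= -1
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0                     # not squarefree

total = 0.0
for n1 in range(1, N + 1):
    if mu[n1] == 0:
        continue
    for n2 in range(1, N + 1):
        if mu[n2]:
            total += mu[n1] * mu[n2] * gcd(n1, n2) ** 2 / (12 * n1 ** 2 * n2 ** 2)

assert abs(total - 1 / (2 * pi ** 2)) < 0.01
```

Equivalently, the full double sum equals $\frac{1}{12}\prod_p(1-p^{-2}) \cdot 12 /\zeta(2)^{-1}$-type Euler-product computations give $\frac{1}{12}\cdot\frac{6}{\pi^2}=\frac{1}{2\pi^2}$, which the truncation approaches.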
---
abstract: 'Let $M_{l,n}$ be the number of blocks with frequency $l$ in the exchangeable random partition induced by a sample of size $n$ from the Ewens-Pitman sampling model. We show that, as $n$ tends to infinity, $n^{-1}M_{l,n}$ satisfies a large deviation principle and we characterize the corresponding rate function. A conditional counterpart of this large deviation principle is also presented. Specifically, given an initial sample of size $n$ from the Ewens-Pitman sampling model, we consider an additional sample of size $m$. For any fixed $n$ and as $m$ tends to infinity, we establish a large deviation principle for the conditional number of blocks with frequency $l$ in the enlarged sample, given the initial sample. Interestingly, the conditional and unconditional large deviation principles coincide, namely there is no long lasting impact of the given initial sample. Potential applications of our results are discussed in the context of Bayesian nonparametric inference for discovery probabilities.'
address:
- |
Department of Economics and Statistics\
University of Torino\
Corso Unione Sovietica 218/bis, I–10134 Torino, Italy\
- |
Department of Mathematics and Statistics\
McMaster University\
Hamilton, Canada L8S 4K1\
author:
-
-
title: 'Large deviation principles for the Ewens-Pitman sampling model'
---
Introduction
============
The present paper focuses on exchangeable random partitions induced by the so-called Ewens-Pitman sampling model, first introduced in Pitman (1995). Let $\mathbb{X}$ be a Polish space and let $\nu$ be a nonatomic probability measure on $\mathbb{X}$. For any $\alpha\in[0,1)$ and $\theta>-\alpha$ let us consider a sequence $(X_{i})_{i\geq1}$ of $\mathbb{X}$-valued random variables such that ${\mathds{P}}[X_{1}\in\cdot]=\nu(\cdot)$, and for any $i\geq1$ $$\label{predict}
{\mathds{P}}[X_{i+1}\in\cdot\,|\,X_{1},\ldots,X_{i}]=\frac{\theta+j\alpha}{\theta+i}\nu(\cdot)+\frac{1}{\theta+i}\sum_{l=1}^{j}(n_{l}-\alpha)\delta_{X_{l}^{\ast}}(\cdot)$$ with $X_{1}^{\ast},\ldots,X_{j}^{\ast}$ being the $j$ distinct values in $(X_{1},\ldots,X_{i})$ with frequencies $\mathbf{n}=(n_{1},\ldots,n_{j})$. The predictive distribution is referred to as the Ewens-Pitman sampling model. Pitman [@Pit(95)] showed that the sequence $(X_{i})_{i\geq1}$ generated by this predictive scheme is exchangeable and its de Finetti measure $\Pi$ is the distribution of the two parameter Poisson-Dirichlet process $\tilde{P}_{\alpha,\theta,\nu}$ introduced in Perman et al. [@Per(92)], i.e. $$\begin{aligned}
\label{eq:bnpmodel}
X_i\,|\,\tilde P_{\alpha,\theta,\nu} & \quad\simiid\quad \tilde P_{\alpha,\theta,\nu}\qquad i=1,\ldots,n\\[4pt]
\notag\tilde P_{\alpha,\theta,\nu} & \quad\sim\quad \Pi,\end{aligned}$$ for any $n\geq1$. See Pitman and Yor [@Pit(97)] for details on $\tilde{P}_{\alpha,\theta,\nu}$. For $\alpha=0$ the Ewens-Pitman sampling model reduces to the celebrated sampling model by Ewens [@Ewe(72)]. The Ewens-Pitman sampling model plays an important role in several research areas such as population genetics, Bayesian nonparametrics, machine learning, combinatorics and statistical physics. See Pitman [@Pit(06)] for a detailed account.
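The predictive rule above translates directly into a sequential sampling scheme (a generalized Chinese restaurant process). The following sketch simulates the block frequencies of the induced random partition; the parameters $n=1000$, $\alpha=0.5$, $\theta=1$ and the seed are arbitrary illustrative choices:

```python
import random

def ewens_pitman_blocks(n, alpha, theta, rng=None):
    """Simulate the block frequencies (n_1, ..., n_j) of the random partition
    induced by n draws from the Ewens-Pitman predictive distribution."""
    rng = rng or random.Random(0)
    counts = []
    for i in range(n):  # i individuals observed so far
        j = len(counts)
        # a new block is created with probability (theta + j*alpha)/(theta + i)
        if i == 0 or rng.random() < (theta + j * alpha) / (theta + i):
            counts.append(1)
        else:
            # otherwise join block b with probability (n_b - alpha)/(theta + i)
            r = rng.random() * (i - j * alpha)
            acc = 0.0
            for b in range(j):
                acc += counts[b] - alpha
                if r < acc:
                    counts[b] += 1
                    break
            else:
                counts[-1] += 1  # guard against floating-point round-off
    return counts

counts = ewens_pitman_blocks(1000, alpha=0.5, theta=1.0)
assert sum(counts) == 1000 and min(counts) >= 1
```

Here $K_n$ is `len(counts)` and $M_{l,n}$ is the number of entries of `counts` equal to $l$, so the scheme can also be used to probe the asymptotics discussed below.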
According to the predictive distribution and the de Finetti representation above, a sample $(X_{1},\ldots,X_{n})$ from $\tilde{P}_{\alpha,\theta,\nu}$ induces an exchangeable random partition of $\{1,\ldots,n\}$ into $K_{n}$ blocks with frequencies $\mathbf{N}_{n}=(N_{1},\ldots,N_{K_{n}})$. See Pitman [@Pit(95)] for details. Such a random partition has been the subject of a rich and active literature and, in particular, there have been several studies on the large $n$ asymptotic behaviour of $K_{n}$. Specifically, for any $\alpha\in(0,1)$ and $q > -1$, let $S_{\alpha,q\alpha}$ be a positive random variable such that $$\label{eq:sdiversity}
{\mathds{P}}[S_{\alpha, q\alpha}\in dy]=\frac{\Gamma(q\alpha+1)}{\alpha\Gamma(q+1)}y^{q-1-1/\alpha}f_{\alpha}(y^{-1/\alpha})dy,$$ where $f_{\alpha}$ denotes the density function of a positive $\alpha$-stable random variable. For any $\alpha\in(0,1)$ and $\theta>-\alpha$ Pitman [@Pit(96)] established a fluctuation limit for $K_{n}$, namely $$\label{eq:asim_prior_2pd}
\lim_{n\rightarrow+\infty}\frac{K_{n}}{n^{\alpha}}=S_{\alpha,\theta}\qquad\text{a.s.}$$ Furthermore, let $M_{l,n}$ be the number of blocks with frequency $l\geq1$ such that $K_{n}=\sum_{1\leq l\leq n}M_{l,n}$ and $n=\sum_{1\leq l\leq n}lM_{l,n}$. Then, Pitman [@Pit(06)] showed that $$\label{eq:asim_prior_2pd_freq}
\lim_{n\rightarrow+\infty}\frac{M_{l,n}}{n^{\alpha}}=\frac{\alpha(1-\alpha)_{(l-1)}}{l!}S_{\alpha,\theta}\qquad\text{a.s.}$$ where $(x)_{(n)}=(x)(x+1)\cdots(x+n-1)$ denotes the rising factorial of $x$ of order $n$ with the proviso $(x)_{(0)}=1$. In contrast, for $\alpha=0$ and $\theta>0$, $K_{n}$ and $M_{l,n}$ have a different asymptotic behaviour. Specifically, Korwar and Hollander [@Kor(73)] and Arratia et al. [@Arr(92)] showed that $\lim_{n\rightarrow+\infty}K_{n}/\log n=\theta$ and $\lim_{n\rightarrow+\infty}M_{l,n}=P_{\theta/l}$ almost surely, where $P_{\theta/l}$ is distributed according to a Poisson distribution with parameter $\theta/l$. See Arratia et al. [@Arr(03)], Barbour and Gnedin [@Bar(09)] and Schweinsberg [@Sch(10)] for recent generalizations and refinements of these limiting results.
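Consistently with the identity $K_{n}=\sum_{l\geq1}M_{l,n}$, the limiting coefficients $p_{l}=\alpha(1-\alpha)_{(l-1)}/l!$ appearing in the display above must sum to $1$ over $l\geq1$. This can be checked numerically via the recursion $p_{l+1}/p_{l}=(l-\alpha)/(l+1)$, which avoids overflow in the factorials; the value $\alpha=0.7$ and the truncation point are ad hoc:

```python
alpha = 0.7  # any value in (0,1); larger alpha gives a smaller tail

# p_l = alpha * (1-alpha)_(l-1) / l!, via p_1 = alpha, p_{l+1} = p_l * (l - alpha)/(l + 1)
p, total = alpha, 0.0
for l in range(1, 200000):
    total += p
    p *= (l - alpha) / (l + 1)

# p_l ~ c * l^{-1-alpha}, so the truncated sum approaches 1 from below
assert 0.99 < total < 1.0
```

The polynomial tail $p_{l}\asymp l^{-1-\alpha}$ is the reason the truncation must be taken fairly large for small $\alpha$.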
Feng and Hoppe [@Fen(98)] further investigated the large $n$ asymptotic behaviour of the random variable $K_{n}$ and, in particular, they established a large deviation principle for $K_{n}$. Specifically, for any $\alpha\in(0,1)$ and $\theta>-\alpha$, they showed that $n^{-1}K_{n}$ satisfies a large deviation principle with speed $n$ and rate function $$\label{rate_prior_2pd_lamb}
I^{\alpha}(x)=\sup_{\lambda}\{\lambda x-\Lambda_{\alpha}(\lambda)\}$$ where $\Lambda_{\alpha}(\lambda)=-\log(1-(1-e^{-\lambda})^{1/\alpha})\mathbbm{1}_{(0,+\infty)}(\lambda)$. In contrast, for $\alpha=0$ and $\theta>0$, it was shown by Feng and Hoppe [@Fen(98)] that $(\log n)^{-1}K_{n}$ satisfies a large deviation principle with speed $\log n$ and rate function of the following form $$I_{\theta}(x)=\left\{\begin{array}{ll}
x\log \frac{x}{\theta}-x+\theta&\qquad\text{ }x>0\\[4pt]
\theta&\qquad\text{ }x=0\\[4pt]
+\infty&\qquad\text{ }x<0.
\end{array}\right.$$ It is worth pointing out that the rate function $I^{\alpha}$ depends only on the parameter $\alpha$, which displays the different roles of the two parameters, $\alpha$ and $\theta$, at different scales. We refer to Feng and Hoppe [@Fen(98)] for an intuitive explanation in terms of a Poisson embedding scheme for the Ewens-Pitman sampling model.
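The Legendre transform defining $I^{\alpha}$ can be evaluated numerically from $\Lambda_{\alpha}$. The crude grid sketch below (with the illustrative value $\alpha=0.5$ and an ad hoc grid, not part of the text) confirms the qualitative shape of the rate function: $I^{\alpha}(0)=0$ and $I^{\alpha}$ is finite and increasing on $(0,1)$:

```python
import math

alpha = 0.5  # illustrative choice of the parameter

def Lambda(lam):
    # Lambda_alpha(lam) = -log(1 - (1 - e^{-lam})^{1/alpha}) for lam > 0, and 0 otherwise
    if lam <= 0:
        return 0.0
    return -math.log(1.0 - (1.0 - math.exp(-lam)) ** (1.0 / alpha))

def I(x):
    # crude grid approximation of the Legendre transform sup_lam {lam*x - Lambda(lam)};
    # the cap lam <= 10 suffices for the x values tested below
    return max(k * 0.001 * x - Lambda(k * 0.001) for k in range(0, 10001))

assert I(0.0) == 0.0                      # I(0) = 0
vals = [I(x) for x in (0.2, 0.5, 0.9)]
assert 0.0 < vals[0] < vals[1] < vals[2]  # finite and increasing on (0, 1)
assert all(math.isfinite(v) for v in vals)
```

Since $\Lambda_{\alpha}(\lambda)\sim\lambda+\log\alpha$ as $\lambda\to+\infty$, the supremum is attained at a finite $\lambda$ for every $x<1$, which is why a bounded grid is adequate here.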
In this paper we establish a large deviation principle for $M_{l,n}$. Specifically, for any $\alpha\in(0,1)$ and $\theta>-\alpha$ we show that, as $n$ tends to infinity, $n^{-1}M_{l,n}$ satisfies a large deviation principle with speed $n$ and we characterize the corresponding rate function $I^{\alpha}_{l}$. We also present a conditional counterpart of this large deviation principle. To this end, with a slight abuse of notation, we write $X\,|\,Y$ to denote a random variable whose distribution coincides with the conditional distribution of $X$ given $Y$. Moreover, let $(X_{1},\ldots,X_{n})$ be an initial sample from $\tilde{P}_{\alpha,\theta,\nu}$ featuring $K_{n}=j$ blocks with corresponding frequencies $\mathbf{N}_{n}=\mathbf{n}$, and let $(X_{n+1},\ldots,X_{n+m})$ be an additional unobserved sample. Recently, Lijoi et al. [@Lij(07)], Favaro et al. [@Fav(09)] and Favaro et al. [@Fav(13)] derived and investigated the conditional distributions of the number $K_{m}^{(n)}$ of new blocks in $(X_{n+1},\ldots,X_{n+m})$ and of the number $M_{l,m}^{(n)}$ of blocks with frequency $l\geq1$ in $(X_{1},\ldots,X_{n+m})$, given $(X_{1},\ldots,X_{n})$. In particular, they showed that $$\label{eq:fluct_post}
\lim_{m\rightarrow+\infty}\frac{K_{m}^{(n)}}{m^{\alpha}}\,|\,(K_{n}=j,\mathbf{N}_{n}=\mathbf{n})=S_{\alpha,\theta}^{(n,j)}\qquad\text{a.s.}$$ and $$\label{eq:fluct_post_freq}
\lim_{m\rightarrow+\infty}\frac{M_{l,m}^{(n)}}{m^{\alpha}}\,|\,(K_{n}=j,\mathbf{N}_{n}=\mathbf{n})=\frac{\alpha(1-\alpha)_{(l-1)}}{l!}S_{\alpha,\theta}^{(n,j)}\qquad\text{a.s.}$$ where $S^{(n,j)}_{\alpha,\theta}\stackrel{\text{d}}{=}B_{j+\theta/\alpha,n/\alpha-j}S_{\alpha,\theta+n}$ with $B_{j+\theta/\alpha,n/\alpha-j}$ and $S_{\alpha,\theta+n}$ being independent and distributed according to a Beta distribution with parameter $(j+\theta/\alpha,n/\alpha-j)$ and according to the density above with $q=\theta+n$, respectively. Intuitively, as suggested by the fluctuations above, one may expect that $M_{l,n}$ and $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ have different asymptotic behaviours also in terms of large deviations, as $n$ and $m$ tend to infinity, respectively. Here we show that, for any fixed $n$ and as $m$ tends to infinity, $m^{-1}M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ satisfies a large deviation principle with speed $m$ and rate function $I_{l}^{\alpha}$. In other words, we show that there is no long-lasting impact of the given initial sample on the large deviations. A similar behaviour was recently observed in Favaro and Feng [@Fav(14)] with respect to the large deviation principles for $K_{n}$ and $K_{m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$.
The problem of studying conditional properties of exchangeable random partitions was first considered in Lijoi et al. [@Lij(08)]. Such a problem consists in evaluating, conditionally on the random partition $(K_{n},\mathbf{N}_{n})$ induced by a sample $(X_{1},\ldots,X_{n})$ from $\tilde{P}_{\alpha,\theta,\nu}$, the distribution of statistics from an additional sample $(X_{n+1},\ldots,X_{n+m})$. As observed in Lijoi et al. [@Lij(07)], these statistics have direct applications in Bayesian nonparametric inference for species sampling problems arising from ecology, biology, genetics, linguistics, etc. Indeed, from a Bayesian perspective, the hierarchical model above is a nonparametric model for the individuals $X_{i}$ from a population with infinitely many species, where $\Pi$ is the prior distribution on the composition of such a population. The aforementioned $M_{l,m}^{(n)}$ is a representative statistic of practical interest. See, e.g., Griffiths and Spanò [@Gri(07)] and Bacallado et al. [@Bac(13)] for other statistics. In particular ${\mathds{P}}[M_{l,m}^{(n)}=m_{l}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ takes on the interpretation of the posterior distribution of the number of species with frequency $l$ in the enlarged sample $(X_{1},\ldots,X_{n+m})$, given $(X_{1},\ldots,X_{n})$ features $j$ species with frequencies $\mathbf{n}$. Hence ${\mathds{E}}_{\alpha,\theta}[M_{l,m}^{(n)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ is the corresponding Bayesian nonparametric estimator under a squared loss function. In such a framework our conditional large deviation principle provides a large $m$ approximation of the estimator ${\mathds{P}}[m^{-1}M_{l,m}^{(n)}\geq x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$, for any $x\geq0$. For large $m$ this is the right tail of the posterior proportion of species with frequency $l$ in the enlarged sample.
A closer inspection of the fluctuations above reveals that for $l=1$ our conditional large deviation principle has a natural interpretation in the context of Bayesian nonparametric inference for discovery probabilities. Indeed, let ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ be the conditional, or posterior, distribution of the probability of discovering a new species at the $(n+m+1)$-th draw, given the random partition $(K_{n},\mathbf{N}_{n})$ induced by $(X_{1},\ldots,X_{n})$. The additional sample $(X_{n+1},\ldots,X_{n+m})$ is assumed to be unobserved. For large $m$, we show that ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}]$ and ${\mathds{P}}[m^{-1}M_{1,m}^{(n)}\in \cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ are approximately equal. Accordingly our conditional large deviation principle provides a large $m$ approximation of the Bayesian nonparametric estimator ${\mathds{P}}[D_{m}^{(n)}\geq x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$. Similarly, ${\mathds{E}}_{\alpha,\theta}[m^{-1}M_{1,m}^{(n)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ provides a large $m$ approximation of the estimator of the probability of discovering a new species at the $(n+m+1)$-th draw, namely ${\mathds{E}}_{\alpha,\theta}[D_{m}^{(n)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$, which first appeared in Lijoi et al. [@Lij(07)]. An illustration of these asymptotic estimators is presented by using a genomic dataset. The interest in ${\mathds{E}}_{\alpha,\theta}[D_{m}^{(n)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ and ${\mathds{P}}[D_{m}^{(n)}\geq x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$, as well as in their large $m$ approximations, is related to the problem of determining the optimal sample size in species sampling problems.
Indeed this problem is typically faced by setting a threshold $\tau$ for an exact or approximate mean functional of ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$, and then making inference on the sample size $m$ for which this mean functional falls below, or above, $\tau$. This introduces a criterion for evaluating the effectiveness of further sampling.
The paper is structured as follows. In Section 2 we present the main result of the paper, namely the large deviation principle for $M_{l,n}$. Section 3 contains the conditional counterpart of this large deviation principle. In Section 4 we discuss potential applications of our conditional large deviation principle in the context of Bayesian nonparametric inference for species sampling problems.
Large deviations for $M_{l,n}$
==============================
For any $\alpha\in(0,1)$ and $\theta>-\alpha$ the large deviation principle for $M_{l,n}$ is established through a detailed study of the moment generating function of $M_{l,n}$. This is in line with the approach originally adopted in Feng and Hoppe [@Fen(98)] for $K_{n}$. For any $\lambda>0$ let $y=1-e^{-\lambda}$ and $$\label{eq_genfun_prior}
G_{M_{l,n}}(y;\alpha,\theta)={\mathds{E}}_{\alpha,\theta}\left[\left(\frac{1}{1-y}\right)^{M_{l,n}}\right]=\sum_{i\geq0}\frac{y^{i}}{i!}\mathbb{E}_{\alpha,\theta}[(M_{l,n})_{(i)}]$$ be the moment generating function of the random variable $M_{l,n}$. Let $(y)_{[n]}=y(y-1)\cdots(y-n+1)$ be the falling factorial of $y$ of order $n$, with the proviso $(y)_{[0]}=1$. Proposition 1 in Favaro et al. [@Fav(13)] provides an explicit expression for $\mathbb{E}_{\alpha,\theta}[(M_{l,n})_{[r]}]$. Recalling that $(y)_{(n)}=\sum_{0\leq i\leq n}\sum_{0\leq j\leq i}|s(n,i)|S(i,j)(y)_{[j]}$, where $s$ and $S$ denote the Stirling number of the first type and the second type, an explicit expression for $\mathbb{E}_{\alpha,\theta}[(M_{l,n})_{(r)}]$ is obtained. Specifically, we have $$\label{prior_rising_1}
{\mathds{E}}_{\alpha,\theta}[(M_{l,n})_{(r)}]=r!\sum_{i=0}^{r}{r-1\choose r-i}\frac{\left(\alpha\frac{(1-\alpha)_{(l-1)}}{l!}\right)^{i}\left(\frac{\theta}{\alpha}\right)_{(i)}(n)_{[il]}(\theta+i\alpha)_{(n-il)}}{i!(\theta)_{(n)}}$$ and $$\label{prior_rising_2}
{\mathds{E}}_{\alpha,0}[(M_{l,n})_{(r)}]=(r-1)!\sum_{i=0}^{r}{r\choose i}\frac{ \left(\alpha\frac{(1-\alpha)_{(l-1)}}{l!}\right)^{i}(n)_{[il]}(i\alpha)_{(n-il)}}{\alpha\Gamma(n)},$$ where the sums over $i$ are nonnull only for $0\leq i\leq \min(r,\lfloor{n/l\rfloor})$. In the next lemma we provide an explicit expression for the moment generating function $G_{M_{l,n}}(y;\alpha,0)$. This result follows by combining the latter moment formula with the series expansion on the right-hand side of the moment generating function, and by means of standard combinatorial manipulations.
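For $r=1$ the second display reduces to ${\mathds{E}}_{\alpha,0}[M_{l,n}]=\frac{\alpha(1-\alpha)_{(l-1)}}{\alpha\Gamma(n)l!}(n)_{[l]}(\alpha)_{(n-l)}$, and this can be verified exactly for small $n$ by enumerating partition shapes under the $\theta=0$ exchangeable partition probability function. The sketch below is a check outside the argument; the EPPF used, $\alpha^{k-1}(k-1)!/(n-1)!\prod_j(1-\alpha)_{(n_j-1)}$, is the standard $\theta=0$ Ewens-Pitman EPPF (an assumption stated here, not in the text):

```python
from math import gamma, factorial, prod

def partitions(n, max_part=None):
    # integer partitions of n as non-increasing tuples
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def rising(x, m):
    return prod(x + j for j in range(m))  # (x)_(m), with (x)_(0) = 1

def exact_mean(n, l, alpha):
    # E_{alpha,0}[M_{l,n}] by exact enumeration over partition shapes
    total = 0.0
    for shape in partitions(n):
        k = len(shape)
        mult = {s: shape.count(s) for s in set(shape)}
        # number of set partitions of {1,...,n} with these block sizes
        count = factorial(n)
        for s, m in mult.items():
            count //= factorial(s) ** m * factorial(m)
        # EPPF of the Ewens-Pitman model with theta = 0
        eppf = alpha ** (k - 1) * factorial(k - 1) / factorial(n - 1)
        eppf *= prod(rising(1 - alpha, s - 1) for s in shape)
        total += mult.get(l, 0) * count * eppf
    return total

def closed_form(n, l, alpha):
    falling = prod(n - j for j in range(l))  # (n)_[l]
    return (alpha * rising(1 - alpha, l - 1) / (alpha * gamma(n) * factorial(l))
            * falling * rising(alpha, n - l))

for (n, l, a) in [(2, 1, 0.5), (6, 2, 0.4), (7, 3, 0.25)]:
    assert abs(exact_mean(n, l, a) - closed_form(n, l, a)) < 1e-10
```

For instance, for $n=2$, $l=1$ both sides equal $2\alpha$, since the sample is two singletons with probability $\alpha$ and one block of size two otherwise.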
\[lemma\_prior\] For any $\alpha\in(0,1)$ $$\begin{aligned}
\label{mgf_prior}
&G_{M_{l,n}}(y;\alpha,0)\\
&\notag\quad=\sum_{i=0}^{\lfloor{n/l\rfloor}}\left(\frac{y}{1-y}\right)^{i} \left(\alpha\frac{(1-\alpha)_{(l-1)}}{l!}\right)^{i}\frac{n}{n-il}{n-il+i\alpha-1\choose n-il-1}.\end{aligned}$$
In the next theorem, which is the main result of the paper, we exploit these moment formulas in order to establish the large deviation principle for $M_{l,n}$. The proof of this theorem is split into three main parts. The first two parts deal with the large deviation principle for $M_{l,n}$ under the assumption $\alpha\in(0,1)$ and $\theta=0$, whereas the third part deals with the general case $\alpha\in(0,1)$ and $\theta>-\alpha$.
\[teorema\_prior\] For any $\alpha\in(0,1)$ and $\theta>-\alpha$, as $n$ tends to infinity, $n^{-1}M_{l,n}$ satisfies a large deviation principle with speed $n$ and rate function $I_{l}^{\alpha}(x)=\sup_{\lambda}\{\lambda x-\Lambda_{\alpha,l}(\lambda) \}$ where $\Lambda_{\alpha,l}$ is specified in the proof below. In particular, for almost all $x>0$ $$\lim_{n\rightarrow+\infty}\frac{1}{n}\log\mathbb{P}\left[\frac{M_{l,n}}{n}> x\right]=-I^{\alpha}_{l}(x)$$ where $I^{\alpha}_{l}(0)=0$ and $I^{\alpha}_{l}(x)<+\infty$ for $x\in(0,1/l]$. Moreover $I^{\alpha}_{l}(x)=+\infty$ for $x\notin[0,1/l]$.
In the first part of the proof we show that, assuming $\alpha\in(0,1)$ and $\theta=0$, $n^{-1}M_{l,n}$ satisfies a large deviation principle with speed $n$ and we characterize the corresponding rate function $I_{l}^{\alpha}$. For large $n$, by means of the moment formulas above we have $$\label{s-eq6}
{\mathds{E}}_{\alpha,0}[M_{l,n}]=\frac{\alpha (1-\alpha)_{(l-1)}}{\alpha\Gamma(n)l!}(n)_{[l]}(\alpha)_{(n-l)}\approx n^{\alpha}$$ and $$\label{s-eq7}
G_{M_{l,n}}(y;\alpha,0)=\sum_{i=1}^{\lfloor n/l\rfloor}\tilde{y}^i\frac{n}{n-il}{n-il+\alpha i-1\choose n-il-1}\notag$$ where $\tilde{y}= \alpha y (1-\alpha)_{(l-1)}/(1-y)l!$. If $n/l$ is an integer, then the final term in the above expression is $n\tilde{y}^{n/l}$. By direct calculation we have that $\lim_{n\rightarrow +\infty}n^{-1}\log {\mathds{E}}_{\alpha,0}[e^{\lambda M_{l,n}}]=0$ for any $\lambda \leq 0 $. Also, for any $\lambda >0$ and $y=1-e^{-\lambda}$, $$\begin{aligned}
&\lim_{n\rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,0)\\
&\quad= \lim_{n\rightarrow +\infty}\frac{1}{n}\log \max \left\{\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }i=0, \ldots, \frac{n}{l}\right\}\notag\\
&\quad=\lim_{n\rightarrow +\infty} \max \left\{\frac{1}{n}\log\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }i =0, \ldots, \frac{n}{l}\right\}.\notag\end{aligned}$$ For $\alpha i < 1$, it is clear that $\lim_{n\rightarrow +\infty}n^{-1}\log \tilde{y}^i {n-(l-\alpha)i-1\choose n-li-1} =0$. For $i$ satisfying $0\leq n-il <1$, there are two possibilities: either $n=il$ or $i=\lfloor n/l\rfloor < n/l$. In both cases $\lim_{n\rightarrow +\infty}n^{-1}\log\tilde{y}^i {n-(l-\alpha)i-1\choose n-li-1} =l^{-1}\log \tilde{y}$. Next we consider the case for which $i$ satisfies $n-li \geq 1$ and $\alpha i \geq 1$. For $0<\epsilon<1/l $, set $\phi(\epsilon)= \epsilon\log\epsilon$ and $ \varphi(\epsilon)=\phi(1-(l-\alpha)\epsilon)-\phi(1-l\epsilon)-\phi(\alpha\epsilon) +\epsilon\log \tilde{y}$. Using $\Gamma(z)=\sqrt{2\pi}z^{z-1/2}e^{-z}[1+r(z)]$, where $|r(z)| \leq e^{1/(12z)}-1$ for any $z>0$, we have $$\begin{aligned}
&{n-il+\alpha i-1\choose n-il-1}\\
&\quad=\frac{\Gamma(n-(l-\alpha)i)}{\alpha i\Gamma(n-li)\Gamma(\alpha i)}\\
&\quad= \frac{(1+r(n-(l-\alpha)i))e}{\sqrt{2\pi}(1+r(n-li))(1+r(\alpha i))}\left(\frac{(n-li)}{\alpha i(n-(l-\alpha)i)}\right)^{1/2}\\
&\quad\quad \times \frac{(1-(l-\alpha)i/n)^{n-(l-\alpha)i}}{(1-li/n)^{n-li}(\alpha i/n)^{\alpha i}}\\
&\quad= \frac{(1+r(n-(l-\alpha)i))e}{\sqrt{2\pi}(1+r(n-li))(1+r(\alpha i))}\left(\frac{(n-li)(\alpha i+1)}{(n-(l-\alpha)i)}\right)^{1/2}\\
&\quad\quad \times \exp\left\{n \left[\phi\left(1-(l-\alpha)\frac{i}{n}\right)-\phi\left(1-l\frac{i}{n}\right)-\phi\left(\alpha \frac{i}{n}\right)\right]\right\}.\end{aligned}$$ The fact that $\alpha^{-1}\leq i \leq (n-1)/l$ implies that $(1+r(n-(l-\alpha)i))e/\sqrt{2\pi}(1+r(n-li))(1+r(\alpha i))$ is uniformly bounded from above by a constant $d_1>0$. Hence, $$\begin{aligned}
\label{eq:teo1_1}
&{n-il+\alpha i-1\choose n-il-1}\\
&\notag\quad \leq d_1 \sqrt{n} \exp\left\{ n \left[\phi\left(1-(l-\alpha)\frac{i}{n}\right)-\phi\left(1-l\frac{i}{n}\right)-\phi\left(\alpha \frac{i}{n}\right)\right]\right\},\end{aligned}$$ and $$\begin{aligned}
\label{eq:teo1_2}
&\lim_{n\rightarrow+\infty} \max \left\{\frac{1}{n}\log\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }\frac{1}{\alpha}\leq i \leq \frac{(n-1)}{l}\right\}\\
&\notag\quad\leq \max\left\{\varphi(\epsilon): 0<\epsilon <\frac{1}{l}\right\}.\end{aligned}$$ In particular, by combining the two inequalities just stated, we have $$\begin{aligned}
&\lim_{n\rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,0)\notag\\
&\quad= \lim_{n\rightarrow+ \infty}\frac{1}{n}\log \max \left\{\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }i =0, \ldots, \frac{n}{l}\right\}\notag\\
&\quad =\max \left\{\max \left\{\lim_{n\rightarrow+ \infty}\frac{1}{n}\log\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }i <\frac{1}{\alpha}\right\}\right.,\\
&\quad\quad\quad\quad \left.\max \left\{\lim_{n\rightarrow+ \infty}\frac{1}{n}\log\tilde{y}^i {n-il+\alpha i-1\choose n-il-1}\text{; }\frac{1}{\alpha}\leq i\leq \frac{n-1}{l}\right\}\text{, } \frac{1}{l}\log\tilde{y}\right\}\notag\\
&\quad= \max\left\{0, \max\left\{\varphi(\epsilon):0<\epsilon<\frac{1}{l}\right\}\text{, } \frac{1}{l}\log\tilde{y}\right\}\notag\\
&\quad \leq \max\left\{\varphi(\epsilon): 0< \epsilon < \frac{1}{l}\right\}.\notag\end{aligned}$$ On the other hand, for any $\epsilon$ in $(0,1/l)$, there exists a sequence $(i_n)_{n\geq1}$ such that $(i_n/n)_{n\geq1}$ converges to $\epsilon$ as $n$ tends to infinity. For this particular sequence $$\begin{aligned}
\label{s-eq9}
\varphi(\epsilon)&= \lim_{n\rightarrow +\infty}\frac{1}{n}\log\tilde{y}^{i_n} {n-i_nl+\alpha i_n-1\choose n-i_nl-1}\\
&\notag\leq \lim_{n\rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,0).\end{aligned}$$ Thus $$\label{s-eq10}
\lim_{n\rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,0)=\max\left\{\varphi(\epsilon): 0\leq \epsilon \leq\frac{1}{l}\right\}.$$ Noting that $$\varphi'(\epsilon)=-(l-\alpha)\log (1-(l-\alpha)\epsilon) +l\log(1-l\epsilon)-\alpha\log(\alpha\epsilon) +\log\tilde{y},$$ one has $$\varphi(\epsilon)= \log(1-(l-\alpha)\epsilon)-\log(1-l\epsilon)+\varphi'(\epsilon)\epsilon.$$ Moreover, since $\varphi'(0+)=+\infty$ and $\varphi'(1/l-)=-\infty$, the function $\varphi(\epsilon)$ reaches a maximum at a point $\epsilon_0$ in $(0,1/l)$ where $\varphi'(\epsilon_0)=0$. Clearly $\epsilon_0$ depends on $\alpha$, $l$ and $\lambda$. Moreover note that $\varphi''(\epsilon)= -\alpha/[\epsilon(1-(l-\alpha)\epsilon)(1-l\epsilon)]<0$, which implies that $\epsilon_0(\lambda)$ is unique and $\Lambda_{\alpha,l}(\lambda)= \log[1+\alpha\epsilon_0/(1-l\epsilon_0)]$. In particular, since $$\log \tilde{y}=\lambda +\log\frac{e^{\lambda}-1}{e^{\lambda}}+\log \frac{\alpha (1-\alpha)_{(l-1)}}{l!}$$ and $\varphi'(\epsilon_0)=-(l-\alpha)\log (1-(l-\alpha)\epsilon_0) +l\log(1-l\epsilon_0)-\alpha\log(\alpha\epsilon_0) +\log\tilde{y}=0$, one has $$\begin{aligned}
\label{s-eq12}
&\lambda +\log\frac{e^{\lambda}-1}{e^{\lambda}}+\log \frac{\alpha (1-\alpha)_{(l-1)}}{\alpha^{\alpha}l!}\\
&\quad= l\log\frac{1-(l-\alpha)\epsilon_0}{1-l\epsilon_0} +\alpha \log\frac{\epsilon_0}{1-(l-\alpha)\epsilon_0}.\end{aligned}$$ Set $$\label{eq:funch1}
h_1(\lambda)=\lambda +\log\frac{e^{\lambda}-1}{e^{\lambda}}+\log \frac{\alpha (1-\alpha)_{(l-1)}}{\alpha^{\alpha}l!}$$ and $$\label{eq:funch2}
h_2(\epsilon_0)= l\log\frac{1-(l-\alpha)\epsilon_0}{1-l\epsilon_0} +\alpha \log\frac{\epsilon_0}{1-(l-\alpha)\epsilon_0}.$$ Note that, since both $h_1$ and $h_2$ are strictly increasing functions with differentiable inverses, $\epsilon_0 = h_2^{-1}\circ h_1(\lambda)$ is a differentiable, strictly increasing function of $\lambda$; in particular, we have $\lim_{\lambda \rightarrow 0}\epsilon_0=0$ and $\lim_{\lambda\rightarrow +\infty}\epsilon_0 =1/l$. Now, if we set $\Lambda_{\alpha,l}(\lambda)$ to be zero for nonpositive $\lambda$, and for $\lambda>0$ $$\label{s-ldp-eq1}
\Lambda_{\alpha,l}(\lambda)= \log\left(1+\frac{\alpha h_2^{-1}\circ h_1(\lambda)}{1-l h_2^{-1}\circ h_1(\lambda)}\right),$$ then it is clear that $\{\lambda: \Lambda_{\alpha,l}(\lambda)<+ \infty\}=\mathbb{R}$ and $\Lambda_{\alpha,l}(\lambda)$ is differentiable for $\lambda \neq 0$. The left derivative of $\Lambda_{\alpha,l}(\lambda)$ at zero is clearly zero. On the other hand, for $\lambda >0$ $$\frac{d\Lambda_{\alpha,l}(\lambda)}{d \lambda}= \bigg[ \frac{\alpha-l}{1+(\alpha-l)\epsilon_0}+\frac{l}{1-l\epsilon_0}\bigg]\frac{d\epsilon_0}{d\lambda}.$$ Since $\epsilon_0$ converges to zero it follows from direct calculation that, as $\lambda \downarrow 0$ one has $$\frac{d\epsilon_0}{d\lambda}=\frac{(e^{h_1(\lambda)})'}{(e^{h_2(\epsilon)})'|_{\epsilon=\epsilon_0}}\rightarrow 0.$$ Accordingly $\Lambda_{\alpha,l}(\lambda)$ is differentiable everywhere. By the Gärtner-Ellis theorem (see Dembo and Zeitouni [@DZ98] for details), a large deviation principle holds for $n^{-1}M_{l,n}$ on space $\mathbb{R}$ as $n$ tends to infinity with speed $n$ and good rate function $I^{\alpha}_{l}(x)=\sup_{\lambda}\{\lambda x-\Lambda_{\alpha,l}(\lambda) \}$. This completes the first part of the proof. In the second part of the proof we further specify the rate function $I^{\alpha}_{l}$. In particular, let us rewrite $\Lambda_{\alpha,l}(\lambda)$ as $\Lambda_{\alpha,l}(\lambda)= \lambda/l+\tilde{\Lambda}_{\alpha,l}(\lambda)$, where we defined $$\tilde{\Lambda}_{\alpha,l}(\lambda)=-\lambda/l,$$ for $\lambda\leq 0$, and $$\tilde{\Lambda}_{\alpha,l}(\lambda)=\frac{1}{l}\log\frac{e^{\lambda}-1}{e^{\lambda}}+\frac{1}{l} \log \frac{\alpha (1-\alpha)_{(l-1)}}{\alpha^{\alpha}l!}-\frac{\alpha}{l} \log\frac{\epsilon_0}{1-(l-\alpha)\epsilon_0}$$ for $\lambda>0$. Since there exists a strictly positive constant $d_2>0$ such that $\epsilon_0 \geq d_2$ for $\lambda \geq 1$, then $\tilde{\Lambda}_{\alpha,l}$ is uniformly bounded for $\lambda\geq 1$. 
This implies that the rate function $I_l^{\alpha}(x)=\sup_{\lambda}\{\lambda \left(x-1/l\right)-\tilde{\Lambda}_{\alpha,l}(\lambda)\}$ is infinite for every $x>1/l$, which is consistent with the fact that $n^{-1}M_{l,n}\leq 1/l$. Additionally we have $$\label{rate_prec_1}
I_{l}^{\alpha}(x)=\left\{\begin{array}{ll}
0&\hspace{0.4cm}\text{ if }x=0\\[4pt]
<+\infty&\hspace{0.4cm}\text{ if } x\in(0,1/l]\\[4pt]
+\infty&\hspace{0.4cm}\text{ otherwise }.
\end{array}\right.$$ For this to hold, we need to verify that $I_l^{\alpha}(x)$ is finite for $x$ in $(0,1/l]$. By definition, $$\label{s-eq13a}
\sup_{0\leq \lambda \leq 1}\{\lambda x -\Lambda_{\alpha,l}(\lambda)\} \leq \sup_{0\leq \lambda \leq 1}\{\lambda x\} =x<+\infty$$ for any $x$ in $(0,1/l]$. For any $\lambda\geq 1$, let $d_2$ be the value of $\epsilon_0$ at $\lambda=1$. Then $\epsilon_0 \geq d_2$ for any $\lambda\geq 1$ and this implies that $\tilde{\Lambda}_{\alpha,l}(\lambda)$ is bounded for all $\lambda \geq 1$. Accordingly, we can write $\sup_{\lambda \geq 1}\{\lambda\left(x-1/l\right)-\tilde{\Lambda}_{\alpha,l}(\lambda)\} \leq \sup_{\lambda \geq 1}\{|\tilde{\Lambda}_{\alpha,l}(\lambda)|\}<+\infty$, which combined with implies . This completes the second part of the proof. Finally, in the third part of the proof we extend the large deviation principle to the case $\alpha\in(0,1)$ and $\theta>-\alpha$. By combining the definition with , and by means of standard combinatorial manipulations, one has $$\begin{aligned}
&G_{M_{l,n}}(y;\alpha,\theta)\label{s-may23-eq1}\\
&\quad=\sum_{i=0}^{\lfloor{n/l\rfloor}} D(\alpha,\theta,n,i) \left(\frac{y}{1-y}\,\alpha\frac{(1-\alpha)_{(l-1)}}{l!}\right)^{i}
\frac{n}{(n-il)}{n-il+i\alpha-1\choose n-il-1}, \notag\end{aligned}$$ where the function $D$ is such that $D(\alpha,\theta,n,0)=1$ and, for any $1\leq i \leq\lfloor n/l \rfloor$, $$\label{s-may27-eq6}
D(\alpha,\theta,n,i)= \frac{\Gamma(n)}{(\theta+1)_{(n-1)}}\frac{(\theta/\alpha+1)_{(i-1)}}{\Gamma(i)}\frac{(\theta+i\alpha)_{(n-il)}}{(i\alpha)_{(n-il)}}.$$ Since $\theta/\alpha >-1$, it follows from basic algebra that one can find positive constants, say $d_3$ and $d_4$, independent of $n$ and $i$, such that $$\label{s-may27-eq6}
d_3 n^{-2} \leq D(\alpha,\theta,n,i) \leq d_4 n^{k}$$ where $k$ is the smallest integer greater than $1+|\theta|+|\theta/\alpha|$. Accordingly, we have $$\begin{aligned}
&\lim_{n \rightarrow +\infty}\frac{1}{n}\log {\mathds{E}}_{\alpha,\theta}[e^{\lambda M_{l,n}}]\\
&\quad= \lim_{n \rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,\theta)\label{s-may23-eq2}\\
&\quad= \lim_{n \rightarrow +\infty}\frac{1}{n}\log G_{M_{l,n}}(y;\alpha,0)\notag\\
&\quad=\lim_{n \rightarrow +\infty}\frac{1}{n}\log {\mathds{E}}_{\alpha,0}[e^{\lambda M_{l,n}}]=\Lambda_{\alpha,l}(\lambda).\end{aligned}$$ Then, for any $\alpha\in(0,1)$ and $\theta>-\alpha$, $n^{-1}M_{l,n}$ satisfies a large deviation principle with speed $n$ and rate function $I^{\alpha}_{l}$. This completes the third part of the proof.
In general it is difficult to obtain a more explicit expression for $I_{l}^{\alpha}$. Indeed, $\Lambda_{\alpha,l}$ depends on $\lambda$ only implicitly, through $h_2^{-1}\circ h_1(\lambda)$, where $h_{1}$ and $h_{2}$ are in and respectively. However, under the assumption $\alpha=1/2$ and $l=1$, an explicit expression for $I^{\alpha}_{l}$ can be derived. For any $\alpha\in(0,1)$ and $\theta>-\alpha$, the rate function $I^{\alpha}_{l}$ displayed in can be evaluated by means of standard numerical techniques.
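To make the last remark concrete, the numerical evaluation proceeds in two steps: invert the strictly increasing function $h_2$ by bisection to obtain $\epsilon_0=h_2^{-1}\circ h_1(\lambda)$, and then maximize $\lambda x-\Lambda_{\alpha,l}(\lambda)$ over $\lambda$. The following Python sketch is ours and not part of the paper; the function names and the grid and bisection tolerances are ad hoc choices.

```python
import math

def rising(a, n):
    # rising factorial (a)_(n) = a (a+1) ... (a+n-1), with (a)_(0) = 1
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def h1(lam, alpha, l):
    # h1(lam) = lam + log((e^lam - 1)/e^lam) + log(alpha (1-alpha)_(l-1) / (alpha^alpha l!))
    return (lam + math.log1p(-math.exp(-lam))
            + math.log(alpha * rising(1.0 - alpha, l - 1)
                       / (alpha ** alpha * math.factorial(l))))

def h2(eps, alpha, l):
    # h2(eps) = l log((1-(l-alpha)eps)/(1-l eps)) + alpha log(eps/(1-(l-alpha)eps))
    return (l * math.log((1.0 - (l - alpha) * eps) / (1.0 - l * eps))
            + alpha * math.log(eps / (1.0 - (l - alpha) * eps)))

def eps0(lam, alpha, l):
    # invert the strictly increasing h2 by bisection on (0, 1/l)
    target = h1(lam, alpha, l)
    lo, hi = 1e-12, 1.0 / l - 1e-12
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if h2(mid, alpha, l) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Lambda(lam, alpha, l):
    # Lambda_{alpha,l}(lam) = log(1 + alpha eps0/(1 - l eps0)), zero for lam <= 0
    if lam <= 0.0:
        return 0.0
    e0 = eps0(lam, alpha, l)
    return math.log(1.0 + alpha * e0 / (1.0 - l * e0))

def rate(x, alpha, l, lam_max=20.0, grid=2000):
    # I_l^alpha(x) = sup_lam {lam x - Lambda(lam)}, crude grid search over [0, lam_max]
    return max(lam_max * k / grid * x - Lambda(lam_max * k / grid, alpha, l)
               for k in range(grid + 1))
```

For $\alpha=1/2$ and $l=1$ the inversion can be cross-checked against the closed form $\epsilon_0=1-1/\sqrt{(e^{\lambda}-1)^{2}+1}$ obtained later in this section.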
\[proposition\_prior\] Let $B_1$ be the function specified in . Then, for any $x\in [0, 1]$ $$I_{1}^{1/2}(x)=x\log [B_1(x)+1]+\log 2 -\log \left(1+\sqrt{B^2_1(x)+1}\right).$$
Under the assumption $\alpha=1/2$ and $l=1$, the equation $-(l-\alpha)\log (1-(l-\alpha)\epsilon_0) +l\log(1-l\epsilon_0)-\alpha\log\alpha\epsilon_0 +\log\tilde{y}=0$ reduces to $$-\frac{1}{2}\log\left(1-\frac{\epsilon_0}{2}\right) +\log(1-\epsilon_0)-\frac{1}{2}\log\epsilon_0 +\frac{1}{2}\log 2 +\log(e^{\lambda}-1) -\log 2=0.$$ Equivalently we have $(e^{\lambda}-1)^2 =(2-\epsilon_0)\epsilon_0/(1-\epsilon_0)^2$. Solving this equation we obtain $\epsilon_0 =1-1/\sqrt{B^2+1}$ with $B =e^{\lambda}-1$. Going back to the rate function, we have $$\begin{aligned}
I_{1}^{1/2}(x)&= \sup_{\lambda>0}\left\{\lambda x -\log \frac{1-\epsilon_0/2}{1-\epsilon_0}\right\}\\
&= \sup_{\lambda>0}\left\{\lambda x -\log \frac{2-\epsilon_0}{1-\epsilon_0}\right\}+\log 2\\
&= \sup_{\lambda>0}\left\{\lambda x -\log (1+\sqrt{B^2+1})\right\}+\log 2.\end{aligned}$$ It is known that $I_{1}^{1/2}(0)=0$. Moreover, for $x=1$, we have the following expression $$\begin{aligned}
&\sup_{\lambda>0}\left\{\lambda -\log (1+\sqrt{B^2+1})\right\}\\
&\quad= \sup_{\lambda>0}\left\{\log \frac{B+1}{1+\sqrt{B^2+1}}\right\}\\
&\quad= \lim_{\lambda \rightarrow +\infty}\log \frac{B+1}{1+\sqrt{B^2+1}}=0,\end{aligned}$$ which implies that $I^{1/2}_{1}(1)=\log2$. In general, for $0<x<1$, set $h(\lambda)= \lambda x - \log(1+\sqrt{B^2+1})$. Then $h'(\lambda)=x- B(B+1)/(B^2 +1 +\sqrt{B^2+1})$ and, in particular, the solution of the equation $h'(\lambda)=0$ satisfies the following identity $$\label{s-eq16}
(1-x)^2B^3 +2(1-x)B^2 +(1-x)^2B-2x=0,$$ and $$\begin{aligned}
\Delta &= 64 x(1-x)^3 +4(1-x)^6-36x(1-x)^5-4(1-x)^8-108x^2(1-x)^4\\
&=4(1-x)^6[1-(1-x)^2] + (1-x)^3x[64 - 36(1-x)^2 -108x(1-x)] \\
&\geq 4(1-x)^6[1-(1-x)^2] + (1-x)^3x[64-36-27]>0\end{aligned}$$ is the discriminant. Let $G(B)$ denote the left-hand side of . By a direct calculation it follows that $G'(B)=0$ has two negative roots. This, combined with the fact that $G(0)=-2x <0$, implies that one and only one of the three roots of is positive. Denote this root by $B_1(x)$. Then the rate function is $$\label{rate12}
I_{1}^{1/2}(x)= x\log [B_1(x)+1]+\log 2 -\log \left(1+\sqrt{B^2_1(x)+1}\right).$$ Making a change of variable in such that $C=B+2/(3(1-x))$ we obtain the following depressed form of the equation $C^3 +pC+q =0$ where $p= 1-4/(3(1-x)^2)<0$ and $q=[16-18(1-x)^2-54 x(1-x)]/(27(1-x)^3)$. Then $$\label{s-eq17}
B_1(x)= 2\sqrt{\frac{-p}{3}}\cos\bigg(\frac{1}{3}\arccos \bigg(\frac{3q}{2p}\sqrt{\frac{-3}{p}}\bigg)\bigg) -\frac{2}{3(1-x)}$$ follows by a direct application of Viète's trigonometric formula. The proof is completed by combining the rate function with the function $B_{1}$ in .
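The closed form above is easy to verify numerically: the trigonometric expression for $B_{1}$ can be checked by substituting it back into the cubic, and the resulting rate function against the endpoint values $I_{1}^{1/2}(0)=0$ and $I_{1}^{1/2}(1)=\log 2$. The Python sketch below is ours, purely illustrative, and not part of the paper.

```python
import math

def B1(x):
    # positive root of (1-x)^2 B^3 + 2(1-x) B^2 + (1-x)^2 B - 2x = 0, for 0 < x < 1,
    # via Viete's trigonometric formula applied to the depressed cubic C^3 + p C + q = 0
    p = 1.0 - 4.0 / (3.0 * (1.0 - x) ** 2)
    q = (16.0 - 18.0 * (1.0 - x) ** 2 - 54.0 * x * (1.0 - x)) / (27.0 * (1.0 - x) ** 3)
    c = 2.0 * math.sqrt(-p / 3.0) * math.cos(
        math.acos(1.5 * (q / p) * math.sqrt(-3.0 / p)) / 3.0)
    return c - 2.0 / (3.0 * (1.0 - x))

def rate_half(x):
    # I_1^{1/2}(x) = x log(B1(x)+1) + log 2 - log(1 + sqrt(B1(x)^2 + 1)), with I(0) = 0
    if x == 0.0:
        return 0.0
    b = B1(x)
    return (x * math.log(b + 1.0) + math.log(2.0)
            - math.log(1.0 + math.sqrt(b * b + 1.0)))
```

The branch of the arccosine used here returns the largest real root of the depressed cubic, which is precisely the unique positive root discussed in the proof.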
To some extent Theorem \[teorema\_prior\] provides a generalization of the large deviation principle for $K_{n}$ introduced in Theorem 1.2 of Feng and Hoppe [@Fen(98)], for any $\alpha\in(0,1)$ and $\theta>-\alpha$. Indeed, recall that one has the following relations between $K_{n}$ and $M_{l,n}$: $K_n=\sum_{1\leq l\leq n}M_{l,n}$ and $n=\sum_{1\leq l\leq n}lM_{l,n}$. However, so far it is not clear to us how to relate the large deviation principle for $M_{l,n}$ with the large deviation principle for $K_n$. In this respect we believe that the results in Dinwoodie and Zabell [@Din(92)] may be helpful in understanding such a relation.
Conditional large deviations {#sec3}
============================
For any $\alpha\in(0,1)$ and $\theta>-\alpha$, let $(X_1, \ldots,X_n)$ be an initial sample from $\tilde{P}_{\alpha,\theta,\nu}$ and let $(X_{n+1},\ldots,X_{n+m})$ be an additional sample, for any $m\geq1$. Furthermore, let $X^{\ast}_{1},\ldots,X^{\ast}_{K_{n}}$ be the labels identifying the $K_{n}$ blocks generated by $(X_{1},\ldots,X_{n})$ with corresponding frequencies $\mathbf{N}_{n}$, and let $L_{m}^{(n)}=\sum_{1\leq i\leq m}\prod_{1\leq k\leq K_{n}}\mathbbm{1}_{\{X_{k}^{\ast}\}^{c}}(X_{n+i})$ be the number of elements in the additional sample that do not coincide with elements in the initial sample. If we denote by $K_{m}^{(n)}$ the number of new blocks generated by these $L_{m}^{(n)}$ elements and by $X^{\ast}_{K_{n}+1},\ldots,X^{\ast}_{K_{n}+K_{m}^{(n)}}$ their labels, then $$\label{eq:new_freq}
S_{i}=\sum_{l=1}^{m}\mathbbm{1}_{\{X^{\ast}_{K_{n}+i}\}}(X_{n+l}),$$ for $i=1,\ldots,K_{m}^{(n)}$, are the frequencies of the $K_{m}^{(n)}$ blocks. The frequencies of the blocks generated by the remaining $m-L_{m}^{(n)}$ elements of the additional sample are $$\label{eq:old_freq}
R_{i}=\sum_{l=1}^{m}\mathbbm{1}_{\{X^{\ast}_{i}\}}(X_{n+l}),$$ for $i=1,\ldots,K_{n}$. The blocks generated by the $m-L_{m}^{(n)}$ elements of the additional sample are termed “old" to distinguish them from the $K_{m}^{(n)}$ new blocks generated by the $L_{m}^{(n)}$ elements of the additional sample. The random variables and , together with $L_{m}^{(n)}$ and $K_{m}^{(n)}$, completely describe the conditional random partition induced by $(X_{n+1},\ldots,X_{n+m})$ given $(X_{1},\ldots,X_{n})$. See Lijoi et al. [@Lij(08)] and Favaro et al. [@Fav(13)] for a comprehensive study on the conditional distributions of these random variables given the initial sample.
The random variables and lead to define the number $M_{l,m}^{(n)}$ of blocks with frequency $l$ in the enlarged sample $(X_{1},\ldots,X_{n+m})$. This is the number of new blocks with frequency $l$ generated by $(X_{n+1},\ldots,X_{n+m})$ plus the number of old blocks with frequency $l$ that arise by updating, via $(X_{n+1},\ldots,X_{n+m})$, the frequencies already induced by $(X_{1},\ldots,X_{n})$. Specifically, let $$\label{eq:freq_new_block}
N_{l,m}^{(n)}=\sum_{i=1}^{K_{m}^{(n)}}\mathbbm{1}_{\{S_{i}=l\}}$$ be the number of new blocks with frequency $l$. Specifically, these new blocks are generated by the $L_{m}^{(n)}$ elements of the additional sample. Furthermore, let $$\label{eq:freq_old_block}
O_{l,m}^{(n)}=\sum_{i=1}^{K_{n}}\mathbbm{1}_{\{N_{i}+R_{i}=l\}}$$ be the number of old blocks with frequency $l$. Specifically, these old blocks are generated by updating, via the $m-L_{m}^{(n)}$ elements of the additional sample, the frequencies of the random partition induced by the initial sample. Therefore, $M_{l,m}^{(n)}=O_{l,m}^{(n)}+N_{l,m}^{(n)}$. The conditional distribution of $M_{l,m}^{(n)}$, given the initial sample, has been recently derived and investigated in Favaro et al. [@Fav(13)]. Hereafter we present a large deviation principle, as $m$ tends to infinity, for $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$.
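The bookkeeping $M_{l,m}^{(n)}=O_{l,m}^{(n)}+N_{l,m}^{(n)}$ is easy to reproduce by forward simulation of the sequential (Chinese restaurant) construction of the Ewens-Pitman sampling model, in which the $(i+1)$-th draw starts a new block with probability $(\theta+k\alpha)/(\theta+i)$ and joins an existing block of size $n_c$ with probability $(n_c-\alpha)/(\theta+i)$. The following Python sketch is ours, purely illustrative, and not part of the paper; the function names are our own.

```python
import random

def extend_partition(sizes, start, draws, alpha, theta, rng):
    # grow the list of block sizes by `draws` sequential samples, with `start`
    # samples already allocated, following the Ewens-Pitman predictive rule
    for i in range(start, start + draws):
        k = len(sizes)
        u = rng.random() * (theta + i)
        if not sizes or u < theta + k * alpha:
            sizes.append(1)                  # a new block is created
        else:
            u -= theta + k * alpha
            c = 0
            while c < k - 1 and u > sizes[c] - alpha:
                u -= sizes[c] - alpha
                c += 1
            sizes[c] += 1                    # join an existing block
    return sizes

def conditional_counts(n, m, l, alpha, theta, seed=2024):
    # return (O_l, N_l, K_n, sizes): old and new blocks with frequency l in the
    # enlarged sample, the number of initial blocks, and all block sizes
    rng = random.Random(seed)
    sizes = extend_partition([], 0, n, alpha, theta, rng)
    k_n = len(sizes)
    extend_partition(sizes, n, m, alpha, theta, rng)
    old = sum(1 for s in sizes[:k_n] if s == l)
    new = sum(1 for s in sizes[k_n:] if s == l)
    return old, new, k_n, sizes
```

Blocks with index below $K_{n}$ are the “old" ones; by construction the old and new counts with frequency $l$ add up to the total number of blocks of frequency $l$, and the block sizes sum to $n+m$.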
The study of large deviations for $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ reduces to the study of large deviations for the conditional number of new blocks with frequency $l$, namely $N_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$. Indeed $N_{l,m}^{(n)}\leq M_{l,m}^{(n)}\leq N_{l,m}^{(n)}+n$. Hence, by means of a direct application of Corollary B.9 in Feng [@Feng10], the quantities $m^{-1}M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ and $m^{-1}N_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ satisfy the same large deviation principle. This large deviation principle is established through the study of the moment generating function of $N_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$. For any $\lambda>0$ and $y=1-\text{e}^{-\lambda}$, let $$\begin{aligned}
\label{eq_genfun_posterior}
&G_{N_{l,m}^{(n)}}(y;\alpha,\theta)\\
&\notag\quad={\mathds{E}}_{\alpha,\theta}\left[\left(\frac{1}{1-y}\right)^{N_{l,m}^{(n)}}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]\\
&\notag\quad=\sum_{i\geq0}\frac{y^{i}}{i!}\mathbb{E}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(i)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}].\end{aligned}$$ Theorem 1 in Favaro et al. [@Fav(13)] provides an explicit expression for the falling factorial moment $\mathbb{E}_{\alpha,\theta}[(N_{l,m}^{(n)})_{[r]}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$. Hence, by exploiting the aforementioned relation between falling factorials and rising factorials, an explicit expression for $\mathbb{E}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ is obtained. Specifically, $$\begin{aligned}
\label{fat_post_alpha}
&{\mathds{E}}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]\\
&\notag\quad=r!\sum_{i=0}^{r}{r-1\choose r-i}\frac{\left(\frac{\alpha(1-\alpha)_{(l-1)}}{l!}\right)^{i}\left(\frac{\theta}{\alpha}\right)_{(j+i)}(m)_{[il]}(\theta+i\alpha+n)_{(m-il)}}{i!(\theta+n)_{(m)}(\theta/\alpha)_{(j)}}\end{aligned}$$ and $$\begin{aligned}
\label{fat_post_0}
&{\mathds{E}}_{\alpha,0}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]\\
&\notag\quad=j(r-1)! \sum_{i=0}^{r}{r\choose i}{j+i-1\choose i-1}\frac{\left(\frac{\alpha(1-\alpha)_{(l-1)}}{l!}\right)^{i}(m)_{[il]}(i\alpha+n)_{(m-il)}}{(n)_{(m)}}\end{aligned}$$ where the sum over $i$ is nonnull for $0\leq i\leq \min(r,\lfloor{m/l\rfloor})$. Note that $\mathbb{E}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]={\mathds{E}}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j]$. In other terms the number $K_{n}$ of blocks in the initial sample is a sufficient statistic for $\mathbb{E}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$. This property of sufficiency was pointed out in Favaro et al. [@Fav(13)]. Along lines similar to Lemma \[lemma\_prior\], in the next lemma we provide an explicit expression for the moment generating function $G_{N_{l,m}^{(n)}}(y;\alpha,0)$.
\[lemma\_post\_0\] For any $\alpha\in(0,1)$ $$\begin{aligned}
\label{mom_gen_post_0}
&G_{N_{l,m}^{(n)}}(y;\alpha,0)\\
&\notag\quad =\frac{m!}{(n)_{(m)}}\sum_{i=0}^{\lfloor m/l\rfloor}\left(\frac{y}{1-y}\right)^{i}\left(\frac{\alpha(1-\alpha)_{(l-1)}}{l!}\right)^{i}\\
&\notag\quad\quad\times{j+i-1\choose i}\frac{(i\alpha+n)}{(m-il)}{n+m+i\alpha-il-1\choose m-il-1}.\end{aligned}$$
In the next theorem we exploit the moment generating function and the rising factorial moment in order to establish the large deviation principle for $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$. Such a result provides a conditional counterpart of Theorem \[teorema\_prior\].
\[teorema\_posterior\] For any $\alpha\in(0,1)$ and $\theta>-\alpha$, as $m$ tends to infinity, $m^{-1}M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ satisfies a large deviation principle with speed $m$ and rate function $I_{l}^{\alpha}(x)=\sup_{\lambda}\{\lambda x-\Lambda_{\alpha,l}(\lambda) \}$ where $\Lambda_{\alpha,l}$ is specified in . In particular, for almost all $x>0$ $$\lim_{m\rightarrow+\infty}\frac{1}{m}\log\mathbb{P}\left[\frac{M_{l,m}^{(n)}}{m}> x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]=-I^{\alpha}_{l}(x)$$ where $I^{\alpha}_{l}(0)=0$ and $I^{\alpha}_{l}(x)<+\infty$ for $x\in(0,1/l]$. Moreover $I^{\alpha}_{l}(x)=+\infty$ for $x\notin[0,1/l]$.
As we anticipated, in order to prove the theorem, it is sufficient to prove the large deviation principle for $N_{l,m}^{(n)}\,|\,K_{n}$, for any $\alpha\in(0,1)$ and $\theta>-\alpha$. We start with the assumption $\alpha\in(0,1)$ and $\theta=0$ and then we consider the general case. From the moment generating function we can write $$G_{N_{l,m}^{(n)}} (y;\alpha,0)=\sum_{i=0}^{\lfloor m/l\rfloor}\tilde{y}^i C(i,m; n,j,\alpha,l)$$ where $$\begin{aligned}
&C(i,m;n,j,\alpha,l)\\
&\quad=\frac{m!}{(n)_{(m)}}{j+i-1\choose i}\frac{i\alpha+n}{m-il}{n+m+i\alpha-il-1\choose m-il-1}\\
&\quad={n+m +i\alpha -il-1\choose n+m-il-1}\frac{m!}{(n)_{(m)}}{j+i-1\choose i} \frac{(m-il+1)_{(n-2)}}{(i\alpha+1)_{(n-2)}}\\
&\quad={n+m +i\alpha -il-1\choose n+m-il-1}\\
&\quad\quad\times\frac{(n-1)!}{(m+1)\cdots(m+n-1)}\frac{(m-il+1)_{(n-2)}}{(i\alpha+1)_{(n-2)}} {j+i-1\choose i} \end{aligned}$$ which is bounded below by $((n-1)!/(m+n)^{n-1})^2$, and from above by $(m+n)^{n+j-1}$. Hence, $$\label{s-may27-eq1}
G_{N_{l,m}^{(n)}}(y;\alpha,0)\leq (m+n)^{n+j-1}G_{M_{l,n+m}}(y;\alpha,0)$$ and $$\label{s-may27-eq2}
G_{N_{l,m}^{(n)}}(y;\alpha,0)\geq\frac{\left(G_{M_{l,n+m}}(y;\alpha,0)-\sum_{i=\lfloor m/l\rfloor+1}^{\lfloor (n+m)/l \rfloor}\tilde{y}^i {n+m +i\alpha -il-1\choose n+m-il-1}\right)}{\left(\frac{(n-1)!}{(m+n)^{n-1}}\right)^{-2}}.$$ Note that, for any index $i$ such that $\lfloor m/l\rfloor+1\leq i \leq \lfloor(m+n)/l\rfloor$, we can write the following inequalities $1\leq{n+m +i\alpha -il-1\choose n+m-il-1}= (n+m-il)\cdots (n+m-il-1 +i\alpha)/(i\alpha)!\leq (n+1)\cdots (n +i\alpha)/(i\alpha)!\leq (2n+m)^n$ and, in particular one has $$\label{s-may27-eq3}
\lim_{m\rightarrow +\infty}\frac{1}{m}\log \sum_{i=\lfloor m/l\rfloor+1}^{\lfloor (n+m)/l \rfloor}\tilde{y}^i {n+m +i\alpha -il-1\choose n+m-il-1}=0.$$ Accordingly, putting together , and , we obtain the following identity $$\begin{aligned}
&\lim_{m\rightarrow+ \infty}\frac{1}{m}\log G_{N_{l,m}^{(n)}}(y;\alpha,0)= \lim_{m\rightarrow+\infty}\frac{1}{n+m}\log G_{M_{l,n+m}}(y;\alpha,0) \end{aligned}$$ which, once combined with Theorem \[teorema\_prior\], implies that $m^{-1}N_{l,m}^{(n)}\,|\,K_{n}$ satisfies a large deviation principle with speed $m$ and rate function $I_{l}^{\alpha}$. In order to deal with the general case $\alpha\in(0,1)$ and $\theta>-\alpha$, one needs a term wise comparison between and . In particular, for any $i\leq m/l$ let us define $$\label{s-may27-eq7}
D(m,i;\alpha,\theta,n,j)=\frac{(n)_{(m)}}{(\theta+n)_{(m)}} \frac{(j-1)!(\frac{\theta}{\alpha})_{(j+i)}}{(j+i-1)!(\frac{\theta}{\alpha})_{(j)}}\frac{(\theta +n+i\alpha)_{(m-il)}}{(n+i\alpha)_{(m-il)}}.$$ Then, one has $$\begin{aligned}
&{\mathds{E}}_{\alpha,\theta}[(N_{l,m}^{(n)})_{(r)}\,|\,K_{n}=j]\\
&\notag\quad=\frac{j}{(n)_{(m)}}(r-1)! \sum_{i=0}^{r}D(m,i;\alpha,\theta,n,j){r\choose i}{j+i-1\choose i-1}(m)_{[il]}\\
&\notag\quad\quad\times\left(\frac{\alpha(1-\alpha)_{(l-1)}}{l!}\right)^{i}(i\alpha+n)_{(m-il)}.\end{aligned}$$ By an argument similar to those used in deriving it follows that one can find constants $d_5>0$ and $d_6>0$ and positive integers $k_1$ and $k_2$ independent of $m$ and $i$ such that $d_5 (n+m)^{-k_1}\leq D(m,i;\alpha,\theta,n,j)\leq d_6 (n+m)^{k_2}$ which leads to $$\begin{aligned}
&d_5 \left(\frac{1}{n+m}\right)^{k_1}G_{N_{l,m}^{(n)}}(y;\alpha,0)\\
&\quad\leq G_{N_{l,m}^{(n)}}(y;\alpha,\theta)\\
&\quad\leq G_{N_{l,m}^{(n)}}(y;\alpha,0) d_6 (n+m)^{k_2}.\end{aligned}$$ Such a result, combined with Theorem \[teorema\_prior\], implies that $m^{-1}N_{l,m}^{(n)}\,|\,K_{n}$ satisfies a large deviation principle with speed $m$ and rate function $I_{l}^{\alpha}$. Hence, by a direct application of Corollary B.9 in Feng [@Feng10], $m^{-1}M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ satisfies a large deviation principle with speed $m$ and rate function $I_{l}^{\alpha}$, and the proof is completed.
In contrast with the fluctuations and , Theorem \[teorema\_prior\] and Theorem \[teorema\_posterior\] show that, in terms of large deviations, the given initial sample $(X_{1},\ldots,X_{n})$ has no long-lasting impact. Specifically the large deviation principles for $M_{l,n}$ and $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ are equivalent when $n$ and $m$ tend to infinity, respectively. This is caused by the two different scalings involved, namely $m^{-1}$ for large deviations and $m^{-\alpha}$ for the fluctuations. According to Corollary 20 in Pitman [@Pit(96a)], the initial sample $(X_{1},\ldots,X_{n})$ modifies the parameter $\theta$ in the conditional distribution of $\tilde{P}_{\alpha,\theta,\nu}$ given $(X_{1},\ldots,X_{n})$. Hence we conjecture that the conditional and the unconditional large deviation results will be different if $n$ is allowed to grow, leading to a large parameter $\theta$. In the unconditional setting this kind of asymptotic behaviour is discussed in Feng [@Feng(07)], where the parameter $\theta$ and the sample size $n$ grow together and the large deviation result will depend on the relative growth rate between $n$ and $\theta$.
If $m$ depends on $n$ and both approach infinity then one can expect very different behaviours in terms of law of large numbers and fluctuations. The large deviation principle for $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ may not be easily derived, by means of a direct comparison argument, from the large deviation principle of $N_{l,m}^{(n)}\,|\,K_{n}$. In this respect, it is helpful to study the moment generating function $$\begin{aligned}
\label{eq:mom_gen_totale}
&G_{M_{l,m}^{(n)}}(y;\alpha,\theta)\\
&\notag\quad={\mathds{E}}_{\alpha,\theta}\left[\left(\frac{1}{1-y}\right)^{M_{l,m}^{(n)}}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]\\
&\notag\quad=\sum_{i\geq0}\frac{y^{i}}{i!}\mathbb{E}_{\alpha,\theta}[(M_{l,m}^{(n)})_{(i)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}].\end{aligned}$$ We intend to pursue this study further in a subsequent project. As in Lemma , an explicit expression for follows by combining the rising factorial moments of $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ with the series expansion on the right-hand side of , and by means of standard combinatorial manipulations. The rising factorial moments of $M_{l,m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ are obtained from Theorem 3 in Favaro et al. [@Fav(13)].
Discussion
==========
Our large deviation results contribute to the study of conditional and unconditional properties of the Ewens-Pitman sampling model. Theorem \[teorema\_posterior\] has potential applications in the context of Bayesian nonparametric inference for species sampling problems. Indeed, as we pointed out in the Introduction, in such a context ${\mathds{P}}[M_{l,m}^{(n)}\in \cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ takes on the interpretation of the posterior distribution of the number of species with frequency $l$ in a sample $(X_{1},\ldots,X_{n+m})$ from $\tilde{P}_{\alpha,\theta,\nu}$, given the initial observed sample $(X_{1},\ldots,X_{n})$ featuring $K_{n}=j$ species with corresponding frequencies $\mathbf{N}_{n}=\mathbf{n}$. The reader is referred to Favaro et al. [@Fav(13)] for a comprehensive account on this posterior distribution with applications to Bayesian nonparametric inference for the so-called rare, or local, species variety.
For large $m$, $m^{-1}M_{l,m}^{(n)}$ is the random proportion of species with frequency $l$ in $(X_{1},\ldots,X_{n+m})$. In Theorem \[teorema\_posterior\] we characterized the rate function $I^{\alpha}_{l}$ of a conditional large deviation principle associated with such a random proportion. The rate function $I_{l}^{\alpha}$ is nondecreasing over the set $[0,1/l]$. It follows that the number of discontinuity points of $I_{l}^{\alpha}$ is at most countable and therefore $\inf_{z\geq x}I_{l}^{\alpha}(z)=\inf_{z>x} I_{l}^{\alpha}(z)$ for almost all $x \in [0,1/l]$. Hence, for almost all $x>0$, $$\begin{aligned}
\label{eq1_discuss}
&\lim_{m\rightarrow+\infty}\frac{1}{m}\log{\mathds{P}}\left[\frac{M_{l,m}^{(n)}}{m}\geq x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]\\
&\notag\quad=\lim_{m\rightarrow+\infty}\frac{1}{m}\log{\mathds{P}}\left[\frac{M_{l,m}^{(n)}}{m}> x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]=-I_{l}^{\alpha}(x).\end{aligned}$$ Therefore identity provides a large $m$ approximation of the Bayesian nonparametric estimator ${\mathds{P}}[m^{-1}M_{l,m}^{(n)}\geq x\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$, for any $x\geq0$, namely $$\label{eq:tail_est}
\mathcal{T}_{l,m}^{(n)}(x)={\mathds{P}}\left[\frac{M_{l,m}^{(n)}}{m}\geq x \,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]\approx\exp\{-mI_{l}^{\alpha}(x)\}.$$ Hereafter we thoroughly discuss $\mathcal{T}_{1,m}^{(n)}$ in Bayesian nonparametric inference for discovery probabilities. In particular we introduce a novel approximation, for large $m$, of the posterior distribution of the probability of discovering a new species at the $(n+m+1)$-th draw. Such an approximation, then, induces a natural interpretation of $\mathcal{T}_{1,m}^{(n)}$ in the context of Bayesian nonparametric inference for the probability of discovering a new species at the $(n+m+1)$-th draw.
Discovery probabilities and large deviations
--------------------------------------------
Let $D_{m}^{(n)}$ be the probability of discovering a new species at the $(n+m+1)$-th draw. Since the additional sample $(X_{n+1},\ldots,X_{n+m})$ is not observed, $D_{m}^{(n)}\,|\,(K_{n},\mathbf{N}_{n})$ is a random probability, the randomness being determined by $(X_{n+1},\ldots,X_{n+m})$. In particular, by means of the predictive distribution , we observe that ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ is related to ${\mathds{P}}[K_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$ as follows $$\begin{aligned}
\label{rand_disc}
&{\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]\\
&\notag\quad={\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j]\\
&\notag\quad={\mathds{P}}\left[\frac{\theta+j\alpha+K_{m}^{(n)}\alpha}{\theta+n+m}\in\cdot\,|\,K_{n}=j\right]\\
&\notag\quad={\mathds{P}}\left[\frac{\theta+j\alpha+K_{m}^{(n)}\alpha}{\theta+n+m}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right],\end{aligned}$$ where the conditional, or posterior, distribution ${\mathds{P}}[K_{m}^{(n)}\in \cdot\,|\,K_{n}=j]$ was obtained in Lijoi et al. [@Lij(07)] and then investigated in Favaro et al. [@Fav(09)]. Specifically, let $\mathscr{C}(n,x;a,b)=(x!)^{-1}\sum_{0\leq i\leq x}(-1)^{i}{x\choose i}(-ia-b)_{(n)}$ be the noncentral generalized factorial coefficient. See Charalambides [@Cha(05)] for details. Then, for any $k=0,1,\ldots,m$, $$\label{post_dist_k}
{\mathds{P}}[K_{m}^{(n)}=k\,|\,K_{n}=j]=\frac{(\theta/\alpha+j)_{(k)}}{(\theta+n)_{(m)}}\mathscr{C}(m,k;\alpha,-n+\alpha j),$$ and $$\label{estim_dist}
{\mathds{E}}_{\alpha,\theta}[K_{m}^{(n)}\,|\,K_{n}=j]=\left(\frac{\theta}{\alpha}+j\right)\left(\frac{(\theta+n+\alpha)_{(m)}}{(\theta+n)_{(m)}}-1\right).$$ The distribution is the posterior distribution of the probability of discovering a new species at the $(n+m+1)$-th draw. An explicit expression for this distribution is obtained by means of . Also, $\mathcal{D}_{m}^{(n)}={\mathds{E}}_{\alpha,\theta}[D_{m}^{(n)}\,|\,K_{n}=j]$ is the Bayesian nonparametric estimator, with respect to a squared loss function, of the probability of discovering a new species at the $(n+m+1)$-th draw. An explicit expression of this estimator is obtained by combining with .
We introduce a large $m$ approximation of ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j]$ and a corresponding large $m$ approximation of the Bayesian nonparametric estimator $\mathcal{D}_{m}^{(n)}$. This approximation establishes a novel connection between the posterior distribution of the proportion of species with frequency $1$ in the enlarged sample and the posterior distribution ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j]$. Specifically, by simply combining the fluctuation limit with , one has the following fluctuation $$\label{eq:fluct_post_discov}
\lim_{m\rightarrow+\infty}\frac{D_{m}^{(n)}}{m^{\alpha-1}}\,|\,(K_{n}=j)=\alpha S_{\alpha,\theta}^{(n,j)}\qquad\text{a.s.}$$ where $S_{\alpha,\theta}^{(n,j)}$ has been defined in and . In particular ${\mathds{E}}[S_{\alpha,\theta}^{(n,j)}]=(j+\theta/\alpha)\Gamma(\theta+n)/\Gamma(\theta+n+\alpha)$. Then, for large $m$, the fluctuations and lead to $$\begin{aligned}
\label{eq:approx}
&{\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j]\\
&\notag\quad\approx{\mathds{P}}[m^{\alpha-1}\alpha S_{\alpha,\theta}^{(n,j)}\in\cdot\,|\,K_{n}=j]\\
&\notag\quad\approx{\mathds{P}}\left[\frac{M_{1,m}^{(n)}}{m}\in\cdot\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right] \end{aligned}$$ and $$\begin{aligned}
\label{eq4_discuss}
\mathcal{D}_{m}^{(n)}&=\frac{\theta+j\alpha}{\theta+n}\frac{(\theta+n+\alpha)_{m}}{(\theta+n+1)_{m}}\\
&\notag\approx m^{\alpha-1}(j\alpha+\theta)\frac{\Gamma(\theta+n)}{\Gamma(\theta+n+\alpha)}\\
&\notag\approx{\mathds{E}}_{\alpha,\theta}\left[\frac{M_{1,m}^{(n)}}{m}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}\right]\\
&\notag=\frac{m_{1}}{m}\frac{(\theta+n-1+\alpha)_{m}}{(\theta+n)_{m}}+(\theta+j\alpha)\frac{(\theta+n+\alpha)_{m-1}}{(\theta+n)_{m}}\end{aligned}$$ where the last identity of is obtained by means of Theorem 3 in Favaro et al. [@Fav(13)]. Interestingly, the second approximation of is somehow reminiscent of the celebrated Good-Turing estimators introduced in Good [@Goo(53)] and Good and Toulmin [@Goo(56)]. Indeed, it shows that the estimator of the probability of discovering a new species at the $(n+m+1)$-th draw is related to the estimator of the number of species with frequency $1$ in the enlarged sample.
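The agreement between the exact estimator $\mathcal{D}_{m}^{(n)}$ and its large $m$ approximation above is straightforward to check numerically, evaluating the rising factorials through log-gamma functions. The Python sketch below is ours, purely illustrative, and not part of the paper; the function names and the parameter values used in the check are hypothetical choices.

```python
import math

def log_rising(a, n):
    # log of the rising factorial (a)_(n) = Gamma(a + n) / Gamma(a)
    return math.lgamma(a + n) - math.lgamma(a)

def discovery_exact(m, alpha, theta, n, j):
    # exact estimator: (theta + j alpha)/(theta + n) * (theta+n+alpha)_(m)/(theta+n+1)_(m)
    return ((theta + j * alpha) / (theta + n)
            * math.exp(log_rising(theta + n + alpha, m)
                       - log_rising(theta + n + 1.0, m)))

def discovery_approx(m, alpha, theta, n, j):
    # large-m approximation: m^(alpha-1) (j alpha + theta) Gamma(theta+n)/Gamma(theta+n+alpha)
    return (m ** (alpha - 1.0) * (j * alpha + theta)
            * math.exp(math.lgamma(theta + n) - math.lgamma(theta + n + alpha)))
```

For instance, with $\alpha=1/2$, $\theta=1$, $n=10$ and $j=5$, the ratio of the two expressions tends to $1$ as $m$ grows, while for moderate $m$ the exact estimator falls below its asymptotic approximation, in line with the discussion that follows.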
Intuitively, when $\theta$ and $n$ are moderately large and not overwhelmingly smaller than $m$, the exact value of $\mathcal{D}_{m}^{(n)}$ given in is much smaller than its asymptotic approximation, which is much smaller than the exact value of $m^{-1}\mathcal{M}_{1,m}^{(n)}={\mathds{E}}_{\alpha,\theta}[m^{-1}M_{1,m}^{(n)}\,|\,K_{n}=j,\mathbf{N}_{n}=\mathbf{n}]$. This suggests that a finer normalization constant than $m^{\alpha}$ is to be used in the fluctuations and , respectively. Asymptotically equivalent, though less rough, normalization rates for and are $$\label{rate_corr1}
r_{M}(m;\alpha,\theta,n,j,m_{1})=\frac{\Gamma(\theta+\alpha+n+m-1)}{\Gamma(\theta+n+m)}\left(m_{1}\frac{\theta+\alpha+n-1}{\theta+j\alpha}+m\right),$$ and $$\label{rate_corr2}
r_{D}(m;\alpha,\theta,n,j)=\frac{\Gamma(\theta+\alpha+n+m)}{\Gamma(\theta+n+m+1)}$$ respectively. Obviously, in terms of asymptotics, $r_{M}(m;\alpha,\theta,n,j,m_{1})/m^{\alpha}\rightarrow1$ and $r_{D}(m;\alpha,\theta,n,j)/m^{\alpha-1}\rightarrow1$ as $m$ tends to infinity. These corrected normalization rates are determined in such a way that $\mathcal{D}_{m}^{(n)}$ and $m^{-1}\mathcal{M}_{1,m}^{(n)}$ coincide with the corresponding asymptotic moments. Of course different procedures may be considered. Note that the number $j$ of species and the number $m_{1}$ of species with frequency $1$ affect the corrected normalization rate .
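As a numerical check of the corrected rate, the following Python sketch (standard library only; the parameter values $\alpha=1/2$, $\theta=206.75$, $n=715$, $j=460$ are those of the non-normalized Mastigamoeba library used in the illustration below) compares the exact estimator $\mathcal{D}_{m}^{(n)}$ with its large $m$ approximations under the uncorrected rate $m^{\alpha-1}$ and the corrected rate $r_{D}$. By construction the corrected rate reproduces the exact estimator, while the uncorrected rate can be off by a sizable factor when $m$ is not much larger than $\theta+n$:

```python
import math

def log_poch(x, m):
    # log of the rising factorial (x)_m = Gamma(x + m) / Gamma(x)
    return math.lgamma(x + m) - math.lgamma(x)

def D_exact(m, a, t, n, j):
    # exact Bayesian nonparametric estimator of the discovery probability
    return (t + j * a) / (t + n) * math.exp(log_poch(t + n + a, m) - log_poch(t + n + 1, m))

def D_uncorrected(m, a, t, n, j):
    # large-m approximation with the rough normalization rate m^(a-1)
    return m ** (a - 1) * (t + j * a) * math.exp(math.lgamma(t + n) - math.lgamma(t + n + a))

def r_D(m, a, t, n, j):
    # corrected normalization rate r_D(m; alpha, theta, n, j)
    return math.exp(math.lgamma(t + a + n + m) - math.lgamma(t + n + m + 1))

def D_corrected(m, a, t, n, j):
    # corrected large-m approximation; it reproduces the exact estimator here
    return r_D(m, a, t, n, j) * (t + j * a) * math.exp(math.lgamma(t + n) - math.lgamma(t + n + a))

a, t, n, j = 0.5, 206.75, 715, 460
for m in (100, 1000, 100000):
    print(m, D_exact(m, a, t, n, j), D_uncorrected(m, a, t, n, j), D_corrected(m, a, t, n, j))
```

For $m=100$, which is an order of magnitude smaller than $\theta+n\approx922$, the uncorrected approximation overshoots the exact value by roughly a factor $\sqrt{(\theta+n+m)/m}\approx3$, while the corrected one agrees to machine precision.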
Besides being an interesting large $m$ approximation of ${\mathds{P}}[D_{m}^{(n)}\in\cdot\,|\,K_{n}=j]$, the result displayed in induces a natural interpretation of the conditional large deviation principle of Theorem \[teorema\_posterior\], with $l=1$, in the context of Bayesian nonparametric inference for discovery probabilities. Indeed by combining the approximations in and we can write the large $m$ approximation $$\begin{aligned}
\label{eq5_discuss11}
\mathcal{D}_{m}^{(n)}(x)&={\mathds{P}}[D_{m}^{(n)}\geq x\,|\,K_{n}=j]\\
&\notag\approx\mathcal{T}_{1,m}^{(n)}(x) \\
&\notag\approx\exp\{-mI_{1}^{\alpha}(x)\}.\end{aligned}$$ By exploiting the corrected normalization rates and , a corrected version of is $$\begin{aligned}
\label{eq5_discuss12}
\mathcal{D}_{m}^{(n)}(x)&={\mathds{P}}[D_{m}^{(n)}\geq x\,|\,K_{n}=j]\\
&\notag\approx\mathcal{T}_{1,m}^{(n)}\left(x\frac{r_{M}(m;\alpha,\theta,n,j,m_{1})}{mr_{D}(m;\alpha,\theta,n,j)}\right)\\
&\notag\approx\exp\left\{-mI_{1}^{\alpha}\left(x\frac{r_{M}(m;\alpha,\theta,n,j,m_{1})}{mr_{D}(m;\alpha,\theta,n,j)}\right)\right\}.\end{aligned}$$ In other terms Theorem \[teorema\_posterior\] with $l=1$ provides a large $m$ approximation of the Bayesian nonparametric estimator of the right tail of the probability of discovering a new species at the $(n+m+1)$-th draw, without observing $(X_{n+1},\ldots,X_{n+m})$. We point out that if $\alpha=1/2$ then the rate function in the approximations and can be exactly computed by means of Proposition \[proposition\_prior\].
Illustration
------------
We present an illustration of our results dealing with a well-known benchmark Expressed Sequence Tag (EST) dataset. This dataset is obtained by sequencing two cDNA libraries of the amitochondriate protist Mastigamoeba balamuthi: the first library is non-normalized, whereas the second library is normalized, namely it undergoes a normalization protocol which aims at making the frequencies of genes in the library more uniform so as to increase the discovery rate. See Susko and Roger [@Sus(04)] for a comprehensive account of the Mastigamoeba cDNA libraries. For the Mastigamoeba non-normalized library the observed sample consists of $n=715$ ESTs with $j=460$ distinct genes whose frequencies are $m_{i,715}=378, 33, 21, 9, 6, 1, 3, 1, 1, 1, 1, 5$ with $i\in\{1,2,\ldots,10\}\cup\{13,15\}$. For the Mastigamoeba normalized library the observed sample consists of $n=363$ ESTs with $j=248$ distinct genes whose frequencies are $m_{i,363}=200, 21, 14, 4, 3, 3, 1, 0, 1, 1$ with $i\in\{1,2,\ldots,9\}\cup\{14\}$. This means that we observe $m_{1,n}$ genes which appear once, $m_{2,n}$ genes which appear twice, etc.
Under the Bayesian nonparametric model , the first issue to face is the specification of the parameter $(\alpha,\theta)$ characterizing the prior $\Pi$. This is typically achieved by adopting an empirical Bayes procedure in order to obtain an estimate $(\hat\alpha,\hat\theta)$ of $(\alpha,\theta)$. Specifically, we fix $(\alpha,\theta)$ so as to maximize the likelihood function of the model under the observed sample, namely $$(\hat\alpha,\hat\theta)=\operatorname*{arg\,max}_{(\alpha,\theta)}\left\{\frac{\prod_{i=0}^{j-1}(\theta+i\alpha)}{(\theta)_{n}}\prod_{i=1}^j(1-\alpha)_{(n_{i}-1)}\right\}.$$ Alternatively, one could specify a prior distribution for $(\alpha,\theta)$. Here we adopt a less elaborate specification of the parameter $(\alpha,\theta)$: we choose $\alpha=1/2$ and then set $\theta$ such that ${\mathds{E}}_{1/2,\theta}[K_{n}]=(2\theta)(((\theta+2^{-1})_{n}/(\theta)_{n})-1)=j$. Empirical investigations with simulated data suggest that $\alpha=1/2$ is a good choice when no precise prior information is available. See Lijoi et al. [@Lij(07)] for details. This approach gives $(\alpha,\theta)=(1/2,206.75)$ for the Mastigamoeba non-normalized library and $(\alpha,\theta)=(1/2,132.92)$ for the Mastigamoeba normalized library.
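The moment condition ${\mathds{E}}_{1/2,\theta}[K_{n}]=j$ can be solved for $\theta$ numerically, since the expected number of distinct species is increasing in $\theta$. A minimal sketch (standard library only; the bisection bounds are an arbitrary choice) is:

```python
import math

def expected_Kn(theta, n, alpha=0.5):
    # E[K_n] = (theta/alpha) * ((theta+alpha)_n / (theta)_n - 1); alpha = 1/2 gives 2*theta*(...)
    log_ratio = (math.lgamma(theta + alpha + n) - math.lgamma(theta + alpha)
                 - math.lgamma(theta + n) + math.lgamma(theta))
    return (theta / alpha) * (math.exp(log_ratio) - 1.0)

def calibrate_theta(n, j, alpha=0.5, lo=1e-6, hi=1e6, iters=200):
    # bisection on theta: expected_Kn is monotone increasing in theta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if expected_Kn(mid, n, alpha) < j:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(calibrate_theta(715, 460))
```

For the non-normalized library ($n=715$, $j=460$) this recovers $\theta\approx206.75$, matching the value reported above.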
For the Mastigamoeba non-normalized and normalized cDNA libraries, Table 1 reports the exact estimate $\mathcal{D}_{m}^{(n)}$ and the corresponding large $m$ approximate estimates under the uncorrected normalization rate $m^{\alpha-1}$ and the corrected normalization rate . These are denoted by $\bar{\mathcal{D}}_{m}^{(n)}$ and $\tilde{\mathcal{D}}_{m}^{(n)}$, respectively. In a similar fashion, Table 2 reports the exact estimate $m^{-1}\mathcal{M}_{1,m}^{(n)}$ and the corresponding large $m$ approximate estimates under the uncorrected normalization rate $m^{\alpha}$ and the corrected normalization rate , respectively. These are denoted by $m^{-1}\bar{\mathcal{M}}_{1,m}^{(n)}$ and $m^{-1}\tilde{\mathcal{M}}_{1,m}^{(n)}$, respectively. See for details.
(Table 1 and Table 2 about here)
Table 1 and Table 2 clearly show that the corrected normalization rates and are of fundamental importance when the additional sample size $m$ is not much larger than the sample size $n$ and the parameter $\theta$. Figure 1 and Figure 2 show the large deviation approximations and of the estimate $\mathcal{D}_{m}^{(n)}(x)$.
(Figure 1 and Figure 2 about here)
<span style="font-variant:small-caps;">Arratia, R., Barbour, A.D. and Tavaré, S.</span> (1992). Poisson process approximations for the Ewens sampling formula. *Ann. Appl. Probab.* [**2**]{}, 519–535.
<span style="font-variant:small-caps;">Arratia, R., Barbour, A.D. and Tavaré, S.</span> (2003). *Logarithmic combinatorial structures: a probabilistic approach*. EMS Monograph in Mathematics.
<span style="font-variant:small-caps;">Bacallado, S., Favaro, S. and Trippa, L.</span> (2013). Looking-backward probabilities for Gibbs-type exchangeable random partitions. *Bernoulli*, to appear.
<span style="font-variant:small-caps;">Barbour, A.D. and Gnedin, A.V.</span> (2009). Small counts in the infinite occupancy scheme. *Electron. J. Probab.*, **13**, 365–384.
<span style="font-variant:small-caps;">Charalambides, C.A.</span> (2005). *Combinatorial methods in discrete distributions*. Wiley Interscience.
<span style="font-variant:small-caps;">Dembo, A. and Zeitouni, O.</span> (1998) *Large deviations techniques and applications*. Springer, New York.
<span style="font-variant:small-caps;">Dinwoodie, I.H. and Zabell, S.L.</span> (1992). Large deviations for exchangeable random vectors. *Ann. Probab.*, **20**, 1147–1166.
<span style="font-variant:small-caps;">Ewens, W.J.</span> (1972). The sampling theory of selectively neutral alleles. *Theor. Popul. Biol.*, **3**, 87–112.
<span style="font-variant:small-caps;">Favaro, S., Lijoi, A., Mena, R.H. and Prünster, I.</span> (2009). Bayesian nonparametric inference for species variety with a two parameter Poisson-Dirichlet process prior. *J. Roy. Statist. Soc. Ser. B*, **71**, 993–1008.
<span style="font-variant:small-caps;">Favaro, S., Lijoi, A. and Prünster, I.</span> (2013). Conditional formulae for Gibbs-type exchangeable random partitions. *Ann. Appl. Probab.*, **23**, 1721–1754.
<span style="font-variant:small-caps;">Favaro, S. and Feng, S.</span> (2014). Asymptotics for the conditional number of blocks in the Ewens-Pitman sampling model. *Electron. J. Probab.*, **19**, 1–15.
<span style="font-variant:small-caps;">Feng, S.</span> (2007). Large deviations associated with Poisson-Dirichlet distribution and Ewens sampling formula. *Ann. Appl. Probab.*, **17**, 1570–1595.
<span style="font-variant:small-caps;">Feng, S.</span> (2010). *The Poisson-Dirichlet distribution and related topics: models and asymptotic behaviors*, Springer, Heidelberg.
<span style="font-variant:small-caps;">Feng, S. and Hoppe, F.M.</span> (1998). Large deviation principles for some random combinatorial structures in population genetics and Brownian motion. *Ann. Appl. Probab.*, **8**, 975–994.
<span style="font-variant:small-caps;">Good, I.J.</span> (1953). The population frequencies of species and the estimation of population parameters. *Biometrika*, **40**, 237–264.
<span style="font-variant:small-caps;">Good, I.J. and Toulmin, G.H.</span> (1956). The number of new species, and the increase in population coverage, when a sample is increased. *Biometrika*, **43**, 45–63.
<span style="font-variant:small-caps;">Griffiths, R.C. and Spanò, D.</span> (2007). Record indices and age-ordered frequencies in exchangeable Gibbs partitions. *Electron. J. Probab.*, **12**, 1101–1130.
<span style="font-variant:small-caps;">Korwar, R.M. and Hollander, M.</span> (1973). Contribution to the theory of Dirichlet processes. *Ann. Probab.*, **1**, 705–711.
<span style="font-variant:small-caps;">Lijoi, A., Mena, R.H. and Prünster, I.</span> (2007). Bayesian nonparametric estimation of the probability of discovering a new species. *Biometrika*, **94**, 769–786.
<span style="font-variant:small-caps;">Lijoi, A., Prünster, I. and Walker, S.G.</span> (2008). Bayesian nonparametric estimators derived from conditional Gibbs structures. *Ann. Appl. Probab.*, **18**, 1519–1547.
<span style="font-variant:small-caps;">Perman, M., Pitman, J. and Yor, M.</span> (1992). Size-biased sampling of Poisson point processes and excursions. *Probab. Theory Related Fields*, **92**, 21–39.
<span style="font-variant:small-caps;">Pitman, J.</span> (1995). Exchangeable and partially exchangeable random partitions. *Probab. Theory Related Fields*, **102**, 145–158.
<span style="font-variant:small-caps;">Pitman, J.</span> (1996). Some developments of the Blackwell-MacQueen urn scheme. In [*Statistics, Probability and Game Theory*]{} (T.S. Ferguson, L.S. Shapley and J.B. MacQueen Eds.), Hayward: Institute of Mathematical Statistics, 245–267.
<span style="font-variant:small-caps;">Pitman, J.</span> (1997). Partition structures derived from Brownian motion and stable subordinators. *Bernoulli*, **3**, 79–96.
<span style="font-variant:small-caps;">Pitman, J. and Yor, M.</span> (1997). The two parameter Poisson-Dirichlet distribution derived from a stable subordinator. *Ann. Probab.*, **25**, 855–900.
<span style="font-variant:small-caps;">Pitman, J.</span> (2006). *Combinatorial stochastic processes.* Ecole d’Eté de Probabilités de Saint-Flour XXXII. Lecture Notes in Mathematics N. 1875, Springer-Verlag, New York.
<span style="font-variant:small-caps;">Schweinsberg, J.</span> (2010). The number of small blocks in exchangeable random partitions. *ALEA Lat. Am. J. Probab. Math. Stat*. **7**, 217–242.
<span style="font-variant:small-caps;">Susko, E. and Roger, A.J.</span> (2004). Estimating and comparing the rates of gene discovery and expressed sequence tag (EST) frequencies in EST surveys. *Bioinformatics*, **20**, 2279–2287.
Table 1. *Exact estimate $\mathcal{D}_{m}^{(n)}$ and corresponding asymptotic estimates under the uncorrected and corrected normalization rate*.\
Table 2. *Exact estimate $m^{-1}\mathcal{M}_{1,m}^{(n)}$ and corresponding asymptotic estimates under the uncorrected and corrected normalization rate*.\
Figure 1. *Mastigamoeba non-normalized. Large deviation approximations of the estimate $\mathcal{D}_{m}^{(715)}(x)$ under the uncorrected (blue line) and corrected (red line) normalization rate.*
Figure 2. *Mastigamoeba normalized. Large deviation approximations of the estimate $\mathcal{D}_{m}^{(363)}(x)$ under the uncorrected (blue line) and corrected (red line) normalization rate.*
---
abstract: 'We propose a unitary toy model of black hole evaporation, in which the entanglement between the interior and exterior degrees of freedom vanishes at late times. Our model possesses the information-free property and satisfies the niceness conditions discussed in the literature. A key feature of the model is that the Hilbert space of black hole internal states contains a vacuum state corresponding to the completely evaporated black hole, which can be reached from any initial state via the Hawking process. Our model suggests a novel quantum cosmological way in which information can get out of an evaporating black hole.'
---
Bart[ł]{}omiej Czech, Klaus Larjo, Moshe Rozali
*Department of Physics and Astronomy*
*University of British Columbia*
*6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada*
czech, larjo, [email protected]
Introduction
============
Black hole evaporation appears to lead to either loss of unitarity or highly entangled remnants carrying macroscopic entropy [@Hawking:1976ra]. A possible resolution of this ‘information paradox’ is that information about the internal state of the black hole escapes and is available to an observer outside the black hole via the Hawking radiation, or more precisely as small state-dependent corrections on top of the purely thermal radiation. In [@mathurtheorem; @mathurmodel] it was argued, subject to certain plausible assumptions, that this cannot be the case: neither semiclassical Hawking radiation nor small corrections to it can carry away a sufficient amount of information. Instead, the entanglement entropy between degrees of freedom internal and external to the black hole will increase indefinitely, leading inevitably to either a remnant or to a mixed state, signifying a loss of unitarity.
In this note we present a toy model that satisfies the same set of assumptions, yet avoids the conclusion above: the entanglement entropy turns around after an initial rise and starts decreasing toward zero, which allows the information to escape the black hole before it evaporates completely. The new ingredient in our toy model is the explicit accounting for pathways or histories whereby the evaporation process can terminate. This is implemented by designating a specific internal state to be ‘the vacuum’ corresponding to an evaporated black hole. While the emitted Hawking radiation is still completely independent of the internal state of the black hole, it may depend on the mass of the black hole. In our model such dependence takes the simplest possible form: we only stipulate that Hawking radiation ceases once the black hole has evaporated. This suggests a novel mechanism for the way information trapped inside a black hole can be released and made accessible to an outside observer, which we discuss further in the final section.
The plan of this note is as follows. We begin, in Section \[revhorizons\], by providing some background and explaining the conclusions of [@mathurtheorem; @mathurmodel]. We then proceed to contrast this with our models in Section \[models\]. Rather than introduce our complete model from the get go, we start by constructing a few trial toy models which incorporate some, but not all, features that may be expected from models of black hole evaporation. These models all share the property that information becomes accessible to an outside observer before the black hole has completely evaporated. We close Section \[models\] with the introduction of our final model, which combines all the desirable characteristics of the trial models of that section. After that, in Section \[information\], we discuss information retrieval and estimate the information retrieval time. We conclude with a discussion of our results and potential directions for future research.
Information-free horizons, niceness conditions and the growth of entanglement entropy {#revhorizons}
=====================================================================================
Ref. [@mathurmodel] analyzed and contrasted models of black hole evaporation and of burning paper. It found the following crucial difference: while outgoing Hawking radiation is entirely independent of the internal state of the black hole, the state of a burning material completely determines the outgoing particles.[^1] A model in which the outgoing particles are independent of the state of the system is said to have an ‘information-free horizon’. The information-free property is the first feature we shall demand of the models constructed in this note.
It was argued in [@mathurtheorem] that in any unitary black hole model, which has an information-free horizon and satisfies a set of ‘niceness conditions’, the entanglement entropy between the interior and exterior will always increase unboundedly. For our purposes, ‘niceness conditions’ mean that the black hole should have a good semiclassical description, the second feature we shall demand of our models. In [@mathurtheorem], the information-free horizon was implemented by choosing the time evolution to be of the form $$\psi_i \otimes \chi_i \to \psi_i \otimes \left( \frac{1}{\sqrt{2}} |0\rangle_{\rm int} |0\rangle_{\rm ext} + \frac{1}{\sqrt{2}}|1\rangle_{\rm int} |1\rangle_{\rm ext} \right) \otimes \chi_i, \label{mathurevolution}$$ where $\psi_i$ is the internal state of the black hole, $\chi_i$ is the state of previously emitted Hawking radiation, and the expression in parentheses represents the state of a newly created Hawking pair at the horizon. The negative energy quantum $|\cdot\rangle_{\rm int}$ falls inside the hole and is considered part of $\psi_i$ at the next step, while the other particle $|\cdot\rangle_{\rm ext}$ escapes and is considered part of $\chi_i$ in future steps. The time evolution entangles the interior and exterior, with the entanglement entropy growing by $\ln 2$ at every step.
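The $\ln 2$-per-step growth can be checked explicitly: after $k$ steps of the evolution (\[mathurevolution\]) the state is a product of $k$ Bell pairs, and each exterior record singles out exactly one interior string, so the reduced interior density matrix is diagonal and uniform over $2^k$ strings. A minimal Python sketch (our own encoding of the pair-creation step; bit strings stand in for the int/ext registers):

```python
import math
from collections import defaultdict

def evolve(steps):
    # state: dict mapping (interior_bits, exterior_bits) -> real amplitude
    state = {("", ""): 1.0}
    for _ in range(steps):
        new = defaultdict(float)
        for (i, e), a in state.items():
            # append one Hawking pair: (|0>_int |0>_ext + |1>_int |1>_ext) / sqrt(2)
            new[(i + "0", e + "0")] += a / math.sqrt(2)
            new[(i + "1", e + "1")] += a / math.sqrt(2)
        state = dict(new)
    return state

def interior_entropy(state):
    # reduced density matrix of the interior; it is diagonal here because
    # amplitudes are nonzero only when interior and exterior strings match
    probs = defaultdict(float)
    for (i, e), a in state.items():
        probs[i] += a * a
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

for k in (1, 2, 3):
    print(k, interior_entropy(evolve(k)))   # k * ln 2
```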
#### A note on small corrections:
In [@mathurtheorem; @mathurmodel] small corrections on top of the pair creation (\[mathurevolution\]) were analyzed in order to probe whether such corrections are sufficient for information retrieval. These corrections were implemented by allowing the pair to be created in a different state with a small probability amplitude $\epsilon$. It was found that such corrections were not enough to reverse the growth of the entanglement entropy. Hence, the hope that small state-dependent corrections to Hawking radiation could transmit information about the internal state was shown to be false.
As we show below, information retrieval in our models does not rely on such corrections. Our models can be easily extended to include such corrections and we have checked that the behavior of the entanglement entropy is not strongly affected by such extensions. Therefore, for the sake of simplicity of presentation, we shall only consider the basic Hawking pair creation process.
The models {#models}
==========
We start this section by presenting the physical reasons why it is possible to get around the argument of [@mathurtheorem]. We then devise three unitary trial models, which contain some strengths and weaknesses. In the final subsection we combine the trial models to construct the final model of black hole evaporation, which satisfies all the requirements – unitarity, niceness conditions and the information-free property.
The strategy {#strategy}
------------
It is crucial to realize that while the time evolution (\[mathurevolution\]) is sufficient for an information-free horizon, it is not implied by it. Hence, we can consider other types of time evolutions. It is also clear that the evolution (\[mathurevolution\]) can never lead to the evaporation of the black hole, because at every step a new particle is tensored onto the internal state $ \psi_i$, thus leading to an ever-increasing complexity of the internal state of the black hole.
We wish to modify (\[mathurevolution\]) in such a way that the outgoing particles are still tensored onto the previously escaped Hawking radiation, but the infalling particles act on the black hole microstate as operators, rather than just enlarging the Hilbert space: $$\psi_i \otimes \chi_i \to \psi_i \, \overleftarrow{S} \otimes \chi_i. \label{ourevolution}$$ The left-arrow refers to our typographical convention: the pair is created at the horizon, and the escaping particle moves out (right), while the other particle falls in (left). Thus $\overleftarrow{S}$ denotes an operator acting on the internal state $\psi_i$.
The physical motivation for this is simple: while the information-free property guarantees that the created pair is independent of the details of the internal microstate, the pair may still depend on one property of the black hole: the mass. After all, large black holes are colder than small ones and an evaporated black hole ceases to radiate. Hence, black hole states are graded by their mass, which in turn may affect the form of the Hawking radiation. In some of our models we shall not use the whole grading. Instead, we will learn that it is sufficient to demand the existence of a single vacuum state (evaporated black hole), which may be reached from any other state via transitions correlated with randomly generated Hawking radiation.
#### An example – the Rubik’s cube:
Think of the interior of the black hole as a Rubik’s cube. The interior Hilbert space is spanned by the $4\times 10^{19}$ configurations of the cube. We declare that the solved configuration of the cube is the internal vacuum and corresponds to the black hole having evaporated. When a fluctuation creates a particle-antiparticle pair at the horizon, the black hole is seen to emit a Hawking particle. The negative energy quantum trapped inside the black hole affects the internal degrees of freedom (cube configurations). We can think of this effect as being enacted by (a linear combination of) the basic Rubik moves, with the proviso that once the cube is solved (once the black hole evaporates), the Hawking process is discontinued. Of course, the way in which the Hawking process affects the internal degrees of freedom is entangled with the escaping particles. Each classical history of the black hole looks like a series of moves which eventually solves the cube. Quantum mechanically, the wavefunction of the black hole gradually concentrates on the internal vacuum, because any random set of moves eventually leads to a solution of the cube.
One could object that it is a considerable restriction to consider a finite-dimensional internal space, as then by construction the system will eventually concentrate on the vacuum. However, for a black hole of a given mass the number of microstates is finite and given by the exponential of the entropy. As Hawking radiation can only decrease the mass of a black hole, throughout the evaporation process the system can only access the finitely many states whose mass is no greater than the initial mass of the black hole. We believe that using the Rubik model sufficiently captures this. It is certainly possible to augment the model with infinitely many additional microstates of higher mass without affecting the evolution of the entanglement entropy.
Trial models
------------
### Model I – Full randomness
For the purpose of simulations, we replace the Rubik model with a simpler analogue, Model I. Let the internal Hilbert space be spanned by configurations of numbers $1, 2, 3, 4$ arranged in $2 \times 2$ tableaux: $$\psi_i \equiv
\begin{array}{|c|c|}
\hline
a & b \\
\hline
c & d \\
\hline
\end{array}
\label{intbasis}$$ We take these 24 states to be orthonormal. We single out the state $$|\rm{vac}\rangle \equiv
\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
3 & 4 \\
\hline
\end{array}
\label{intvacuum}$$ as the internal vacuum, corresponding to the black hole having completely evaporated. We define three elementary operations $$\begin{array}{|c|c|}
\hline
a & b \\
\hline
c & d \\
\hline
\end{array} \, \, \, \overleftarrow{L} = \begin{array}{|c|c|}
\hline
c & b \\
\hline
a & d \\
\hline
\end{array}\, \, , \qquad\quad
\begin{array}{|c|c|}
\hline
a & b \\
\hline
c & d \\
\hline
\end{array} \, \, \, \overleftarrow{R}= \begin{array}{|c|c|}
\hline
a & d \\
\hline
c & b \\
\hline
\end{array} \, \, , \qquad\quad
\begin{array}{|c|c|}
\hline
a & b \\
\hline
c & d \\
\hline
\end{array} \, \, \, \overleftarrow{U} = \begin{array}{|c|c|}
\hline
b & a \\
\hline
c & d \\
\hline
\end{array}\, \, \, . \label{deflru}$$ One can think of these three operations as moves on a puzzle whose objective is to arrange the four numbers as in expression (\[intvacuum\]). We also define a fourth operation $\overleftarrow{N}$, which leaves the state unchanged.[^2]
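One convenient encoding of these tableaux (illustrative, not taken from the paper) represents a configuration as a tuple $(a,b,c,d)$ and each move as an involutive transposition. A breadth-first closure then confirms that $\overleftarrow{L}$, $\overleftarrow{R}$, $\overleftarrow{U}$ connect all 24 configurations to the vacuum, so every microstate can be brought to the solved puzzle:

```python
from collections import deque

VAC = (1, 2, 3, 4)   # the internal vacuum, eq. (intvacuum)

def L(s): return (s[2], s[1], s[0], s[3])   # swap a and c (left column)
def R(s): return (s[0], s[3], s[2], s[1])   # swap b and d (right column)
def U(s): return (s[1], s[0], s[2], s[3])   # swap a and b (top row)

def reachable(start=VAC):
    # breadth-first closure of the configuration set under {L, R, U}
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for move in (L, R, U):
            t = move(s)
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(len(reachable()))   # 24
```

Each move is its own inverse, and the three transpositions generate the full permutation group of the four entries.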
The external states $\chi_i$ are ordered sequences of Hawking-radiated particles. We take the external particles to be of four[^3] basic types $\{n,l,r,u\}$, so $\chi_i$ are words built from these four letters, e.g. $unlrl$. All such states are taken to be orthonormal. Two states $\psi_i \otimes \chi_j$ and $\psi_k \otimes \chi_l$ are orthogonal unless the internal states $\psi_i$ and $\psi_k$ and the radiated Hawking words $\chi_j$ and $\chi_l$ both agree.
To define the time evolution (\[ourevolution\]) we must choose the operator $\overleftarrow{S}$ enacting the Hawking radiation. When acting on states whose interior component is *not* the vacuum, the action is chosen to be: $$\overleftarrow{S} = \frac{1}{2}\left( \overleftarrow{N} \otimes n + \overleftarrow{L} \otimes l + \overleftarrow{R} \otimes r+ \overleftarrow{U} \otimes u \right). \label{def1}$$ In these tensor expressions, the left component acts on the internal state (\[intbasis\]) according to (\[deflru\]) while the right component prepends the particle $n,l,r,u$ to the Hawking-radiated word $\chi_i$. An example of one time step would be: $$\begin{aligned}
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, \, |unlrl\rangle & \longrightarrow \,\,\,
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, \, \overleftarrow{S} |unlrl\rangle \label{exevolution} \\ & =
\frac{1}{2} \!
\left(\,
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, |nunlrl\rangle
+
\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
4 & 3 \\
\hline
\end{array} \, |lunlrl\rangle
+
\begin{array}{|c|c|}
\hline
4 & 3 \\
\hline
1 & 2 \\
\hline
\end{array} \, |runlrl\rangle
+
\begin{array}{|c|c|}
\hline
2 & 4 \\
\hline
1 & 3 \\
\hline
\end{array} \, |uunlrl\rangle
\!\right)\, . \nonumber\end{aligned}$$ On the other hand, when acting on a state whose internal component is the evaporated state $|{\rm vac} \rangle$, the state is left unchanged, e.g.: $$\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
3 & 4 \\
\hline
\end{array} \,\,\overleftarrow{S} \, |unlrl\rangle = \,
\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
3 & 4 \\
\hline
\end{array} \, |nunlrl\rangle \, .
\label{actonvacuum}$$ Recall that the prepended $n$ should be viewed as the absence of a particle. Eq. (\[actonvacuum\]) states that once the black hole evaporates, the Hawking process terminates and the state does not evolve further.
It is easy to check that this model is unitary. It also enjoys the information-free property, since from the viewpoint of an external observer the Hawking process randomly spits out particles $l,r,u$, each of them radiated an equal fraction of the time. The only exception to this is the vacuum state, in which the Hawking process ceases.
The time evolution eventually forces every internal state to be brought to the internal vacuum. Thus, if one considers the internal degrees of freedom in isolation, their time evolution is not unitary. Physically this is a trivial point, a consequence of the fact that black holes evaporate. However, it highlights a key difference between Model I and eq. (\[mathurevolution\]). In the latter, the dynamics of the internal degrees of freedom is unitary all by itself, without accounting for the exterior subsystem.
Because the black hole eventually evaporates, the entanglement entropy between the interior and exterior degrees of freedom cannot avoid turning around and approaching zero at late times. We have simulated Model I numerically and the entanglement entropy is plotted in Figure \[fig-model12\]. The initial state for the evolution was chosen to be $\begin{array}{|c|c|}
\hline
3 & 2 \\
\hline
1 & 4 \\
\hline
\end{array}\,$; other choices of initial state yield similar plots. The location of the turning point depends on how many moves away from the vacuum the initial state was. The model is too simple for deriving quantitative results about black holes.
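The simulation of Model I simplifies considerably: the initial microstate together with the radiated word determines the interior configuration uniquely (each letter enacts a fixed move, and only $n$'s are emitted after the vacuum is reached), so the reduced interior density matrix is diagonal and the entanglement entropy equals the Shannon entropy of the interior marginal. Under that observation the evolution reduces to an absorbing Markov chain on the 24 tableaux. A sketch (our own encoding, not the code behind Figure \[fig-model12\]):

```python
import math

VAC = (1, 2, 3, 4)
def L(s): return (s[2], s[1], s[0], s[3])
def R(s): return (s[0], s[3], s[2], s[1])
def U(s): return (s[1], s[0], s[2], s[3])
def N(s): return s

def step(p):
    # one Hawking emission: a non-vacuum state branches with weight 1/4
    # into its N, L, R, U images; the vacuum is absorbing
    q = {}
    for s, w in p.items():
        if s == VAC:
            q[VAC] = q.get(VAC, 0.0) + w
        else:
            for move in (N, L, R, U):
                t = move(s)
                q[t] = q.get(t, 0.0) + w / 4.0
    return q

def entropy(p):
    # Shannon entropy of the interior marginal = entanglement entropy here
    return -sum(w * math.log(w) for w in p.values() if w > 0)

p = {(3, 2, 1, 4): 1.0}          # the initial state used in the text
history = []
for t in range(2001):
    history.append(entropy(p))
    p = step(p)
# the entropy starts at zero, rises, turns around and decays back toward zero
print(history[0], max(history), history[-1])
```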
$
\begin{array}{cc}
\includegraphics[width=0.4\textwidth]{ModelI.pdf} \quad \qquad &
\includegraphics[width=0.4\textwidth]{ModelII.pdf}
\end{array}$
#### Strengths and weaknesses:
Model I incorporates an information-free horizon, but it does not satisfy the niceness conditions. One way of seeing this is to note that while an arbitrary puzzle is randomly solved in an average of roughly 40 moves, the standard deviation in the number is of the same order of magnitude. Thus, given an initial black hole it is not possible to accurately predict when it evaporates, which contradicts the expected behavior of a large semi-classical black hole.
A comparison with semiclassical black holes reveals one other deficit. The process of black hole evaporation is a gradient process, characterized by a number of macroscopic parameters – mass, entropy, inverse temperature – continually decreasing to zero. Model I has no viable counterpart to these quantities, because its time evolution draws on randomness and does not respect any grading of the internal Hilbert space. Our next model is designed to fix this deficiency.
### Model II – The hidden hand
We require that the black hole shrink with each particle emitted. To do so, we grade the basis of the internal Hilbert space (\[intbasis\]) by the minimal number of moves required to solve the puzzle. Then we modify the time evolution of Model I by the following rule: after applying the operator (\[def1\]) to an internal state $\psi_i$, we project out those wavefunction components that are further away from the solution of the puzzle than $\psi_i$ and normalize the remaining wavefunction. This rule ensures that the evolution proceeds ‘toward the vacuum’ and defines Model II.
As an example, consider again the time evolution (\[exevolution\]), which now becomes $$\begin{gathered}
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, \, |unlrl\rangle \longrightarrow \,\,\, \\
\cancel{\frac{1}{\sqrt{4}}}
\frac{1}{\sqrt{3}}\!
\left(\,
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, |nunlrl\rangle
+
\cancel{\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
4 & 3 \\
\hline
\end{array} \, |lunlrl\rangle}
+
\begin{array}{|c|c|}
\hline
4 & 3 \\
\hline
1 & 2 \\
\hline
\end{array} \, |runlrl\rangle
+
\begin{array}{|c|c|}
\hline
2 & 4 \\
\hline
1 & 3 \\
\hline
\end{array} \, |uunlrl\rangle
\!\right).
\label{model2ex}\end{gathered}$$ The initial state is four moves from the vacuum, and the second component is projected out because it is five moves from the vacuum; retaining it would be akin to the black hole expelling a negative energy Hawking particle. Figure \[fig-model12\] plots the entanglement entropy for Model II with the initial state $\begin{array}{|c|c|}
\hline
3 & 1 \\
\hline
4 & 2 \\
\hline
\end{array}\,$. The entropy starts decreasing sooner for Model II, because the evolution has access to fewer internal states than in Model I.[^4]
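The projection rule can be made concrete with the graded distance. In the sketch below (our own encoding of the tableaux as tuples, with the moves as transpositions), the distance to the vacuum is computed by breadth-first search in the move graph, and one Model II step keeps only branches whose distance does not increase, then renormalizes:

```python
import math
from collections import deque

VAC = (1, 2, 3, 4)
def L(s): return (s[2], s[1], s[0], s[3])
def R(s): return (s[0], s[3], s[2], s[1])
def U(s): return (s[1], s[0], s[2], s[3])
def N(s): return s

def distances():
    # BFS from the vacuum over the moves L, R, U gives the grading
    dist, queue = {VAC: 0}, deque([VAC])
    while queue:
        s = queue.popleft()
        for move in (L, R, U):
            t = move(s)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

DIST = distances()

def model2_step(interior):
    # keep the branches that do not move farther from the vacuum, renormalize
    branches = [(letter, move(interior))
                for letter, move in (("n", N), ("l", L), ("r", R), ("u", U))
                if DIST[move(interior)] <= DIST[interior]]
    amp = 1.0 / math.sqrt(len(branches))
    return [(amp, letter, state) for letter, state in branches]

for amp, letter, state in model2_step((4, 2, 1, 3)):
    print(round(amp, 4), letter, state)
```

Applied to the interior state of example (\[model2ex\]), which sits four moves from the vacuum, the branch $l$ lands five moves away and is projected out, leaving the three surviving branches with coefficient $1/\sqrt{3}$, as in the example.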
#### Strengths and weaknesses:
Model II is unitary and has the desirable feature that there is a quantity (distance from vacuum), which decreases in the process of black hole evaporation. An analogue of Model II, in which the $2 \times 2$ tableaux are replaced with more complicated puzzles to produce longer black hole lifetimes, will satisfy the niceness conditions. To see this, note that the lifetime of a Model II black hole is roughly given by the negative binomial distribution [@nbd] with parameters $R,P$ set to the distance of the initial state to the vacuum and the probability of emitting the particle $n$ at each evolution step, respectively. The standard deviation-to-mean ratio of the negative binomial distribution is $\sqrt{P/R}$; since $P \leq 1/2$ while $R$ is of the same order of magnitude as the lifetime of the black hole, this ratio is small for long-lived black holes.
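A quick Monte Carlo check of this lifetime statistic (our own sketch: each step is an independent trial, and every emission other than an $n$-particle is idealised as one unit of progress toward the vacuum):

```python
import math
import random

def lifetime(R, P, rng):
    """Steps until R units of progress toward the vacuum are made; each step
    fails to make progress (an n-emission) with probability P."""
    steps = progress = 0
    while progress < R:
        steps += 1
        if rng.random() >= P:
            progress += 1
    return steps

rng = random.Random(0)
R, P = 50, 0.25
samples = [lifetime(R, P, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
print(std / mean, math.sqrt(P / R))  # both ~0.07
```

The empirical standard deviation-to-mean ratio of the sampled lifetimes reproduces $\sqrt{P/R}$.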
However, Model II does not have an information-free horizon, because its Hawking process depends on the state of the black hole. Indeed, an outside observer obtains information about the internal state by observing the absence of certain particles in the radiation signal, such as particle $l$ in example (\[model2ex\]).
### Model III – The depository
We have now achieved both the niceness conditions and an information-free horizon, though not in the same model. Before combining the positive features of Models I and II, let us address another potential objection to time evolution (\[ourevolution\]). Assuming that the internal degrees of freedom of a black hole are in some way localized, either in the deep interior or spread across the horizon, one may be wary that a Hawking emission has an immediate, global effect on the internal state. In order to address this we introduce a depository of particles, into which infalling particles are placed until they have fallen far enough to affect the internal state. Hence, the states are of the form $$\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array} \, \{ \overleftarrow{L} \overleftarrow{N} \overleftarrow{U} \} \, |unlrl\rangle,$$ where the length of the depository is taken to be three particles. Each time a pair is created, the infalling particle is placed at the right end of the depository (the horizon). The other particles in the depository are then moved left by one unit and the leftmost particle ($\overleftarrow{L}$ in the example above) operates on the interior state.
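The depository is just a fixed-length conveyor. A minimal sketch (the queue mechanics are ours; the paper does not prescribe an implementation):

```python
from collections import deque

def step_depository(belt, new_particle, interior, act, length=3):
    """One emission step in Model III: the infalling partner enters at the
    horizon (right end); once the belt is full, the leftmost particle has
    fallen far enough to act on the interior state."""
    belt = deque(belt)
    belt.append(new_particle)
    if len(belt) > length:
        interior = act(interior, belt.popleft())
    return belt, interior

# Toy interior: a string recording which particles have acted so far.
act = lambda interior, p: interior + p
belt, interior = deque(), ""
for p in ["L", "N", "U", "R"]:
    belt, interior = step_depository(belt, p, interior, act, length=3)
```

With a length-3 depository, the first infalling particle only acts on the interior at the fourth emission step, consistent with the linear early-time entropy growth noted below.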
The entanglement entropy in this model again increases and decreases. It is perhaps noteworthy that for the first $p$ steps, $p$ being the length of the depository, the entropy increases linearly as in the models of [@mathurtheorem; @mathurmodel]. Hence those models can be interpreted as a special case of Model III with the length of the depository taken to infinity.
Model III is unitary, but it inherits the deficiencies of its predecessors. It violates the niceness conditions or lacks an information-free horizon, depending on whether one adds the depository to Model I or II.
The Final Model {#finalmodel}
---------------
We now define a model that incorporates the positive aspects of Models I and II without sharing their weaknesses. For simplicity we refrain from including the depository of Model III; adding it would be trivial.
Consider a model in which the interior of the black hole consists of a large number $E$ of unsolved Rubik’s cubes or puzzles of the type considered in Model I. The quantity $E$ will decrease in the process of black hole evaporation, analogous to energy, entropy or inverse temperature; in what follows we refer to this quantity as ‘energy’ in inverted commas. The basis states are of the form $$\overbrace{ \left\{ \begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array}\,\, , \ldots, \,\, \begin{array}{|c|c|}
\hline
2 & 4 \\
\hline
3 & 1 \\
\hline
\end{array} \right\}}^E \, |unlrl\rangle.
\label{exampstate}$$ The time evolution takes the form familiar from Model I except that the operators $\overleftarrow{N},\overleftarrow{L},\overleftarrow{R},\overleftarrow{U}$ act on each tableau individually. In order to guarantee unitarity, we further stipulate that each time a square is solved (brought to the vacuum state), an extra particle $q$ is emitted. Augmenting our model with the $q$-particle is physically justified, because an outside observer ought to be able to detect that the black hole has lost ‘energy’. The following example illustrates the time evolution involving a $q$-particle: $$\begin{gathered}
\left\{
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array}
\, , \,
\begin{array}{|c|c|}
\hline
2 & 1 \\
\hline
3 & 4 \\
\hline
\end{array}
\, \right\}\!
|\rangle \longrightarrow \\
\frac{1}{2}\!\left(
\left\{
\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array}
\, , \,
\begin{array}{|c|c|}
\hline
2 & 1 \\
\hline
3 & 4 \\
\hline
\end{array}
\, \right\}\!
|n\rangle
+
\left\{
\begin{array}{|c|c|}
\hline
1 & 2 \\
\hline
4 & 3 \\
\hline
\end{array}
\, , \,
\begin{array}{|c|c|}
\hline
3 & 1 \\
\hline
2 & 4 \\
\hline
\end{array}
\, \right\}\!
|l\rangle
+
\left\{
\begin{array}{|c|c|}
\hline
4 & 3 \\
\hline
1 & 2 \\
\hline
\end{array}
\, , \,
\begin{array}{|c|c|}
\hline
2 & 4 \\
\hline
3 & 1 \\
\hline
\end{array}
\, \right\}\!
|r\rangle
+
\begin{array}{|c|c|}
\hline
2 & 4 \\
\hline
1 & 3 \\
\hline
\end{array}\,\,
|qu\rangle
\right)
\label{finalmodelex}\end{gathered}$$ In the wavefunction component in the last term the second square is erased, because the transposition $\overleftarrow{U}$ solved that puzzle. An outside observer can detect the drop in ‘energy’ by seeing the outgoing particle $q$. The particle $q$ is necessary to preserve unitarity; if it were absent, the state obtained after one time step from $\begin{array}{|c|c|}
\hline
4 & 2 \\
\hline
1 & 3 \\
\hline
\end{array}\,\,|\rangle$ would have non-trivial overlap with (\[finalmodelex\]). A sample plot of the entanglement entropy in this model is presented in Figure \[fig-modelfinal\].
#### Properties and interpretation:
The model is unitary; the $q$-particles ensure unitarity in all cases that do not follow directly from Model I. Our model also has the desired gradient property that was missing in Model I, because the number of unsolved puzzles $E$ can only decrease in the course of the evolution.
The other two desirable characteristics, an information-free horizon and the niceness conditions, require some explanation. Because the production of particles $n,l,r,u$ follows Model I, it is by itself information-free, but $q$-particles inform the outside observer about the amount of ‘energy’ lost in each time step. However, recall that the rationale for the information-free property is the semiclassical picture of black holes, in which only *internal* degrees of freedom must remain invisible to outside observers. In contrast, *global* properties such as mass or temperature can and should be visible to outside observers and affect the form of Hawking radiation. Since the Final Model was motivated by a desire to construct a global quantity, analogous to energy, that decreases during evaporation, it is not surprising that this quantity becomes accessible to the outside observer as the count of outgoing $q$-particles. It is an interesting lesson that in our model this access is mandated by unitarity.
![\[fig-modelfinal\] Entanglement entropy versus time in the Final Model. For computational reasons, this plot was obtained in a simplified version of the model, in which Rubik’s cubes or square tableaux were replaced with copies of a simpler puzzle with only six discrete configurations.](ModelFinal.pdf){width="45.00000%"}
In order to establish the niceness conditions, we will require the system to be large in a certain quantifiable sense. As a first observation, notice that the number of $q$-particles emitted by our black hole as a function of time is determined by a random walk in the space of internal states, which is generated by the basic moves (definition (\[deflru\]) in our current model). Whenever the random walk takes one of the $E$ initial component puzzles to the vacuum, a $q$-particle is emitted. But when the random walk self-intersects, no $q$-particles are produced, because the puzzles which would be solved on that step had been solved and erased from the state description before. We shall now argue that in the regime of high $E$, the niceness conditions hold whenever the internal space is large and rich enough so that random walks typically do not self-intersect for long times. This, of course, does not hold for the $2 \times 2$ puzzles used in eqs. (\[exampstate\]-\[finalmodelex\]), but it does hold for Rubik’s cubes.
Take $E$ to be much larger than the size of the internal space $K$ (in the $2 \times 2$ model $K=24$ while in the Rubik’s cube $K=4\times 10^{19}$), so that almost all internal states contain unsolved puzzles in almost all configurations. Initially, the outside observer will see at every time step approximately the same number of $q$-particles, $E/K$, with typical relative deviation of order $\sqrt{K/E}$. The niceness conditions hold firmly so long as these deviations remain small, that is so long as random walks do not (frequently) self-intersect. When self-intersections begin to kick in, the wavefunction of the outgoing radiation develops components marked by conspicuous absences of $q$-particles. At this stage, the nice semiclassical description of the state is still maintained in a coarse-grained sense: the average number of detected $q$-particles per $k$ steps is well-behaved for sufficiently large $k$. But when the random walk has covered almost all the internal space, leaving only a few unsolved puzzles scattered over distant regions of the internal space, then large uncertainties take over and the niceness conditions are gone. This is easily interpreted: at advanced stages of evaporation, black holes are nearly Planck-sized and not well-described by semiclassical physics.
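The quoted initial-time statistics are easy to verify with an i.i.d. sketch (ours): spread $E$ puzzles uniformly over $K$ configurations and count, per step, how many sit at the vacuum. This checks only the early-time mean $E/K$ and relative deviation $\sqrt{K/E}$; it does not model the later correlations from self-intersecting walks.

```python
import math
import random
import statistics

rng = random.Random(7)
E, K = 24000, 24  # 'energy' E much larger than the internal space size K

# Per step, each puzzle sits at the vacuum with probability 1/K, so the
# number of emitted q-particles is binomially distributed.
counts = [sum(1 for _ in range(E) if rng.randrange(K) == 0) for _ in range(100)]

mean = statistics.fmean(counts)
rel_dev = statistics.pstdev(counts) / mean
print(mean, E / K)                # both ~1000
print(rel_dev, math.sqrt(K / E))  # both ~0.03
```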
At last, we remark that our model is too simple to reproduce the evaporation of black holes quantitatively. In particular, in our model large black holes evaporate faster while small black holes linger for longer times, the opposite of the standard relation $T \propto M^{-1}$.
Information retrieval {#information}
=====================
It is of interest to analyze how information implicit in the initial state of a black hole escapes, via the emitted Hawking radiation, from a system with an ‘information-free’ horizon. To make this quantitative we follow the discussion of [@hp], which quantified the issue of retrieval of quantum information from black holes.
We take the initial state of the black hole to be entangled with a reference system held by Charlie. Hence the quantum state lives in the Hilbert space $$\mathcal{H}_{\rm int} \otimes \mathcal{H}_{\rm ext} \otimes \mathcal{H}_{\rm Charlie},$$ where time evolution on $\mathcal{H}_{\rm Charlie}$ is taken to be trivial. For simplicity of presentation we work with the black holes of Model I, but the discussion applies equally well to our Final Model.
Clearly, one cannot demand that Bob, the outside observer, determine the exact quantum state of the radiation or the initial state of the black hole. Rather, we say that the information about the initial state has come out with accuracy $(1-\delta)$ when any measurement that Charlie can make on the reference system with outcome probabilities $\{p_i\}$ can be reproduced by Bob with outcome probabilities $\{p_i \pm \mathcal{O}(\delta)\}$. We know that this will eventually happen, because the interior wavefunction concentrates in time on the vacuum configuration so that we forget nothing by tracing over $\mathcal{H}_{\rm int}$. By unitarity, if the initial entanglement between Charlie and the interior is lost, a compensating entanglement between Charlie and Bob must have formed.
We illustrate this with the parity qubit,[^5] which in a sense explained below is the hardest observable to decode for Bob. Parity distinguishes even configurations (internal squares that require an even number of transpositions to be solved) from odd ones. A suitable reference system for this qubit is $\mathcal{H}_{\rm Charlie} = {\rm Span}\{ |{\rm even} \rangle, \,\, |{\rm odd} \rangle \}$, with the initial state of the form $$\psi(t=0) = \sum_{(\,\,)_i {\rm even}} a_i |(\,\,)_i \rangle_{\rm int} \otimes |\rangle_{\rm ext} \otimes |{\rm even}\rangle_{\rm Charlie} + \sum_{(\,\,)_i {\rm odd}} b_i |(\,\,)_i \rangle_{\rm int} \otimes |\rangle_{\rm ext} \otimes |{\rm odd}\rangle_{\rm Charlie},$$ where the sums are over even and odd basis configurations in $\mathcal{H}_{\rm int}$. Charlie can measure the parity of the reference system, finding ‘even’ and ‘odd’ with probabilities $$p^{\rm C}_{\rm even} = \sum_i |a_i|^2 \quad {\rm and} \quad p^{\rm C}_{\rm odd} = \sum_i |b_i|^2.$$ In order to quantify how fast Bob can learn about the parity of the black hole, we must understand how to translate his measurements into decisions about parity. This is because in contrast to Charlie, who measures properties of black hole microstates directly, the strings of Hawking particles measured by Bob correspond to [*paths*]{} between microstates. Bob can reason as follows: because for late times the internal state concentrates on the vacuum, the parity of the string of Hawking particles (whether the string consists of an even or odd number of particles) should on average reflect the parity of the initial state with increasing accuracy. This reasoning, which extends the definition of the qubit of interest to strings of outgoing radiation, is the most tenuous for parity, because on any string that has not yet brought the initial state to the vacuum Bob incurs a $50\%$ chance of mistake. 
It is in this sense that parity is the hardest qubit for decoding; qubits with lower entropy or more workable correlations give Bob extra headway.
After $m$ steps the state can be written as $$\begin{aligned}
\psi(t=m) &= \sum_{(\alpha) \,\, {\rm even}} \tilde{a}_{\alpha} |{\rm vac}\rangle_{\rm int} \otimes |(\alpha) \rangle_{\rm ext}\otimes |{\rm even} \rangle_{\rm Charlie} \nonumber \\ &+ \sum_{(\alpha) \,\, {\rm odd}} \tilde{b}_{\alpha} |{\rm vac}\rangle_{\rm int} \otimes |(\alpha) \rangle_{\rm ext}\otimes |{\rm odd} \rangle_{\rm Charlie} + \epsilon_m ({\rm remainder}),
\label{remaindereq}\end{aligned}$$ where $(\alpha)$ denotes strings of $m$ particles and (remainder) denotes the part of the quantum state where the internal microstate has not yet reached the vacuum, with $\epsilon_m \to 0$ as $m\to \infty$. By construction, Bob recovers even and odd radiation strings with probabilities $$\begin{aligned}
p^{\rm B}_{m,{\rm even}} &= \sum_\alpha |\tilde{a}_{\alpha}|^2 + \mathcal{O}(\epsilon_m^2) \stackrel{m\to \infty}{\longrightarrow} \sum_i |a_i|^2 = p^{\rm C}_{\rm even}, \\
p^{\rm B}_{m,{\rm odd}} &= \sum_\alpha |\tilde{b}_{\alpha}|^2 + \mathcal{O}(\epsilon_m^2) \stackrel{m\to \infty}{\longrightarrow} \sum_i |b_i|^2 = p^{\rm C}_{\rm odd}.\end{aligned}$$ As an illustration, start with the initial state $$\psi(t=0) =\sqrt{ \frac{1}{3}} \,\,
\begin{array}{|c|c|}
\hline
4 & 1 \\
\hline
2 & 3\\
\hline
\end{array} \otimes |{\rm odd}\rangle + \sqrt{\frac{2}{3}} \,\,
\begin{array}{|c|c|}
\hline
2 & 1 \\
\hline
4 & 3 \\
\hline
\end{array} \otimes |{\rm even}\rangle,
\label{exinitial}$$ whose components are five and six moves away from the vacuum, respectively. Figure \[fig-inf\] plots $p^{\rm B}_{m,\rm even}$. It approaches Charlie’s measurement $p^{\rm C}_{\rm even}=2/3$ at roughly $m \sim \mathcal{O}(100)$. This time scale, which is generic for most choices of qubit and initial states, can be recovered quantitatively with the following argument. The Model I time evolution (\[exevolution\]-\[actonvacuum\]) can be thought of as generating paths in the Cayley graph of the permutation group $S_4$ in the presentation generated by $\overleftarrow{L},\overleftarrow{R},\overleftarrow{U}$. To estimate the remainder term in eq. (\[remaindereq\]) we must evaluate the fraction of walks of length $m$ that do not visit the vacuum. This is easily done by constructing the adjacency matrix of the Cayley graph[^6] and raising it to power $m$. The largest eigenvalue of that matrix divided by 4 (because a random walk of length $m-1$ produces 4 offspring walks of length $m$) determines the rate at which $p^{\rm B}_{m,\rm even}$ approaches $p^{\rm C}_{\rm even}$. In Model I, this number turns out to be $0.98$, so that $$|p^{\rm B}_{m,\rm even} - p^{\rm C}_{\rm even}| \propto 0.98^m , \label{therate}$$ where the proportionality constant depends on the choice of qubit and initial state. Thus, Bob’s measurements reproduce those of Charlie with accuracy $\delta$ after $\ln{\delta} / \ln{0.98} \approx 52\ln{\delta^{-1}}$ steps. We did not attempt to evaluate the analogue of (\[therate\]) for a model based on the full Rubik’s cube.
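The eigenvalue computation for Model I is small enough to spell out. A Python sketch of the calculation described above and in footnote 6 (add the identity for $\overleftarrow{N}$, delete the vacuum row and column, divide the top eigenvalue by 4; variable names are ours):

```python
from itertools import permutations
import numpy as np

# Internal states of the 2x2 model: orderings of tiles 1..4 over positions
# (top-left, top-right, bottom-left, bottom-right).
states = list(permutations((1, 2, 3, 4)))
index = {s: i for i, s in enumerate(states)}

def swap(state, a, b):
    s = list(state)
    s[a], s[b] = s[b], s[a]
    return tuple(s)

# Generators: L swaps the left column, R the right column, U the top row.
transpositions = [(0, 2), (1, 3), (0, 1)]

n = len(states)
A = np.eye(n)  # ones on the diagonal account for the N (no-move) loops
for s in states:
    for a, b in transpositions:
        A[index[s], index[swap(s, a, b)]] += 1.0

# Remove the vacuum's row and column (the 'sink'); by vertex-transitivity
# of the Cayley graph, which vertex we delete does not matter.
vac = index[(1, 2, 3, 4)]
keep = [i for i in range(n) if i != vac]
A_sub = A[np.ix_(keep, keep)]

rate = float(np.linalg.eigvalsh(A_sub)[-1]) / 4.0
print(round(rate, 2))  # the text quotes 0.98
```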
$
\begin{array}{cc}
\includegraphics[width=0.5\textwidth]{plot20.pdf} \quad \qquad &
\includegraphics[width=0.4\textwidth]{plot200.pdf}
\end{array}$
It is interesting to contrast $p^{{\rm B}}_{m,{\rm even}}$ with the analogous quantity in the original model (\[mathurevolution\]), which we call $p^{{\rm B},\otimes}_{m,{\rm even}}$. In that case the likelihood of emitting any particle is $1/4$ at all times. Since the parity of a string is given by the number of non-trivial moves, simple combinatorics gives $$p^{{\rm B},\otimes}_{m,{\rm even}} = \frac{1}{{4^m}} \sum_{j=0}^{[m/2]} 3^{2j}{\binom{m}{2j}} \stackrel{m\to \infty}{\longrightarrow} \frac{1}{2}.$$ We note from the figure that $p^{\rm B}_{m,\rm even}$ and $p^{{\rm B},\otimes}_{m,{\rm even}}$ agree for the first five steps, that is until the wavefunction first develops an internal vacuum component. At that point, $p^{\rm B}_{m,\rm even}$ starts differing from $p^{{\rm B},\otimes}_{m,\rm even}$, because the vacuum no longer emits particles, which favors the production of $n$ over $\{l,r,u\}$. This is the point when information begins to leak out.
The rate (\[therate\]) should be contrasted with the ‘mirroring’ behavior of [@hp], where it was found that the information thrown into a black hole escapes almost immediately with Hawking radiation. The difference arises because the outside observer in [@hp] is assumed to have been observing the black hole for its whole history and, as a result, to be fully entangled with the black hole. In our case Bob has no initial knowledge of the history of the black hole and has to start his measurements from the beginning.
We also note that the information recovery time scale is longer than the time scale discussed by Page [@Page:1993df]. This is not unexpected since we are asking a slightly different question: while Page’s time scale is roughly the time when an external observer starts distinguishing the Hawking radiation from completely scrambled pure thermal radiation, we are asking for the external observer to be able to reproduce the initial state with a given precision.
Discussion
==========
We considered in this note a class of models for black hole evaporation which evade the conclusions of [@mathurtheorem]. The new ingredient that causes the entanglement entropy in our models to eventually decrease to zero is that the black hole evaporates rather than growing in complexity ad infinitum. Encoding this requires one to account for the way the negative energy quantum produced in the Hawking process acts on the internal degrees of freedom of the black hole.
Models I and II achieve this goal in two different ways. In Model I, we consider a finite dimensional internal Hilbert space, which guarantees that any random sequence of moves eventually hits the vacuum. This is somewhat unsatisfactory, because one would like to find some quantity that monotonically decreases in the course of the evaporation process, analogously to the mass or horizon area of real black holes. In Model II black holes evaporate for a different reason: the time evolution is engineered to decrease a certain quantity until the state hits the vacuum. This meets the objection to Model I and in principle allows one to consider infinite dimensional Hilbert spaces. However, the model does not possess an information-free horizon. In the end, our Final Model combines the attractive features of both predecessors: it satisfies the niceness conditions and has an information-free horizon.
As the details of the Hawking process in our model are independent of the internal state and the outside observer detects the same flux of radiated particles regardless of what sits inside the black hole, how does information get out? The mechanism is black hole evaporation. In our models, an outside observer can detect one non-trivial thing about the black hole, namely that it has ceased to emit radiation because it has evaporated. In time, the black hole wavefunction peaks more and more strongly on the evaporated configuration. An outside observer identifies the internal microstate based on the full wavefunction, which is a superposition of different Hawking emission products. In a sense, the information available to an outside observer is a weighted average of the different lifespans of the black hole. To our knowledge, this mechanism for preserving unitarity has not been discussed before in the literature.
The prominent role that superpositions of internal states play in this mechanism is not too surprising, because any solution to the information paradox must make use of quantum gravity effects (see [@review] for a recent survey of the relevant arguments). The idea of tracing properties of black holes to their being quantum superposition states was explored in [@superp], while the program of understanding black holes through coarse graining and statistical properties of underlying ensembles of microstates was initiated in [@babel]. One challenge that may be raised against our model is that over time the putative black holes may lose their good semiclassical descriptions before they evaporate. On the other hand, it is not clear why quantum gravity should be expected to maintain a good semiclassical description of an evaporating black hole toward the end of the evaporation process. A situation in which quantum fuzziness eventually blurs away a good semiclassical description of a macroscopic object may seem exotic, but perhaps not more so than the black hole information problem itself.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Philip Argyres, Joshua Davis, Patrick Hayden, Joanna Karczmarek, Thomas Levi and Mark Van Raamsdonk for discussions. We are supported in part by Natural Sciences and Engineering Research Council of Canada and KL is also supported by the Institute of Particle Physics.
[..]{}
S. W. Hawking, “Breakdown of Predictability in Gravitational Collapse,” Phys. Rev. [**D14**]{}, 2460-2473 (1976).
S. D. Mathur, “The information paradox: A pedagogical introduction,” Class. Quant. Grav. [**26**]{}, 224001 (2009) \[arXiv:0909.1038 \[hep-th\]\].
S. D. Mathur and C. J. Plumberg, “Correlations in Hawking radiation and the infall problem,” \[arXiv:1101.4899 \[hep-th\]\].
E. W. Weisstein, “Negative Binomial Distribution,” from MathWorld–A Wolfram Web Resource <http://mathworld.wolfram.com/NegativeBinomialDistribution.html>.
P. Hayden, J. Preskill, “Black holes as mirrors: Quantum information in random subsystems,” JHEP [**0709**]{}, 120 (2007) \[arXiv:0708.4025 \[hep-th\]\].
D. N. Page, “Average entropy of a subsystem,” Phys. Rev. Lett. [**71**]{}, 1291 (1993) \[arXiv:gr-qc/9305007\].
V. Balasubramanian, B. Czech, “Quantitative approaches to information recovery from black holes,” \[arXiv:1102.3566 \[hep-th\]\].
V. Balasubramanian, B. Czech, V. E. Hubeny, K. Larjo, M. Rangamani and J. Simon, “Typicality versus thermality: An analytic distinction,” Gen. Rel. Grav. [**40**]{} (2008) 1863 \[arXiv:hep-th/0701122\].
V. Balasubramanian, J. de Boer, V. Jejjala and J. Simon, “The Library of Babel: On the origin of gravitational thermodynamics,” JHEP [**0512**]{}, 006 (2005) \[arXiv:hep-th/0508023\].
[^1]: This difference between black holes and burning paper rules out hypothetical black hole analogues of spodomancy, because escaping Hawking particles are unrelated to the innards of a black hole.
[^2]: [**N**]{}o move, [**L**]{}eft, [**R**]{}ight, [**U**]{}pper.
[^3]: Although $n$ should really be seen as the absence of an emitted particle at that timestep, it is convenient to call it a ‘particle’.
[^4]: For computational reasons we chose different initial states for the two plots; hence one should not try to compare them too closely.
[^5]: By qubit we mean a single property of the system, an answer to a yes / no question. It is not implied that the system is necessarily a tensor product of localized binary degrees of freedom.
[^6]: in a slightly modified form: to account for the moves $\overleftarrow{N}$ one adds to the Cayley matrix the identity matrix and to account for the ‘sink’ in the vacuum configuration one removes its corresponding row and column.
---
abstract: 'We analyse high-quality *NuSTAR* observations of the local (*z*=0.011) Seyfert 2 active galactic nucleus (AGN) IC 3639, in conjunction with archival *Suzaku* and *Chandra* data. This provides the first broadband X-ray spectral analysis of the source, spanning nearly two decades in energy (0.5–30keV). Previous X-ray observations of the source below 10keV indicated strong reflection/obscuration on the basis of a pronounced iron fluorescence line at 6.4keV. The hard X-ray energy coverage of *NuSTAR*, together with self-consistent toroidal reprocessing models, enables direct broadband constraints on the obscuring column density of the source. We find the source to be heavily Compton-thick (CTK) with an obscuring column in excess of $3.6\times10^{24}$cm$^{-2}$, unconstrained at the upper end. We further find an intrinsic 2–10keV luminosity of $\textrm{log}_{10}(L_{\textrm{2\,--\,10\,keV}}\,\textrm{[erg\,s}^{-1}])\,=\,43.4^{+0.6}_{-1.1}$ to 90% confidence, almost 400 times the observed flux, and consistent with various multi-wavelength diagnostics. Such a high intrinsic to observed flux ratio in addition to an Fe-K$\alpha$ fluorescence line equivalent width exceeding 2keV is extreme amongst known *bona fide* CTK AGN, which we suggest are both due to the high level of obscuration present around IC 3639. Our study demonstrates that broadband spectroscopic modelling with *NuSTAR* enables large corrections for obscuration to be carried out robustly, and emphasises the need for improved modelling of AGN tori showing intense iron fluorescence.'
author:
- 'Peter G. Boorman, P. Gandhi, D. Alexander, A. Annuar, D. R. Ballantyne, F. Bauer, S. E. Boggs, W. N. Brandt, M. Brightman, F. E. Christensen, W. W. Craig, D. Farrah, C. J. Hailey, F. A. Harrison, S. F. Hönig, M. Koss, S. M. LaMassa, A. Masini, C. Ricci, G. Risaliti, D. Stern, W. W. Zhang'
bibliography:
- './bibliography.bib'
title: 'IC 3639 – A new bona fide Compton thick AGN unveiled by *NuSTAR*'
---
INTRODUCTION {#sec:introduction}
============
The origin of the cosmic X-ray background (CXB) has been under study ever since its discovery more than 60 years ago [@Giacconi1962]. Spanning from fractions of a keV (soft X-rays) up to several hundreds of keV (hard X-rays), the general consensus today is that the majority of the CXB arises from the integrated emission of discrete sources of radiation, with the most prominent contribution arising from active galactic nuclei (AGN) (e.g. @Mushotzky2000). The unified model of AGN [@Antonucci1993; @Netzer2015] predicts that the major differences seen between different classes of AGN can be attributed to an orientation effect, with the primary radiation source being surrounded by an obscuring torus inclined relative to our line-of-sight (LOS). This leads to effectively two types of AGN - those with a direct view to the nucleus (largely unobscured) and those with an obscured view to the nucleus from behind a putative torus (see @Marin2016 for a recent review on the orientation of AGN). In addition, obscured AGN have been required to fit the CXB, with @Setti1989 requiring a considerable contribution from this AGN population. Multiple studies have revealed this to be the case, although with a dependence on X-ray luminosity (e.g. @Lawrence2010). This suggests heavily obscured AGN may be a major contributor to the CXB, and there are many ongoing efforts to study this population (e.g. @Brandt2015, and references therein).
In the X-ray band, the two important interaction processes between photons and matter surrounding an AGN are photoelectric absorption and Compton scattering. Photoelectric absorption is dominant at lower energies, whereas Compton scattering dominates in hard X-rays above $\sim$10keV up to the Klein-Nishina decline. X-ray photons with energy greater than a few keV are visible if the LOS obscuring column density ($N_{\textrm H}$) is $\lesssim$1.5$\times\,10^{24}\,\textnormal{cm}^{-2}$, and such AGN are named *Compton-thin* (CTN) since the matter is optically thin to Compton scattering and a significant fraction of the photons with *E*>10keV escape after one or more scatterings. This leads to only slight depletion of hard X-rays for CTN sources. Sources with column densities greater than this value are classified as *Compton-thick* (CTK) since even high-energy X-rays *can* be diminished via Compton scattering, leading to the X-ray spectrum being depressed over the entire energy range. The hard X-ray spectrum of *typical* CTK AGN is characterised by three main components: a Compton reflection hump, peaking at $\sim$30keV; a strong neutral Fe-K$\alpha$ fluorescence line at $\sim$6.4keV [@Matt2000] (*strong* generally refers to an equivalent width EW$\gtrsim$1keV); and an underlying absorbed power law with an upper cut-off of several hundred keV (intrinsic to the AGN, arising from the Comptonisation of accretion disc photons in the corona). The ability to detect the absorbed power law in the spectrum of a source depends on the level of obscuration - in heavily CTK sources, this component is severely weakened and can be entirely undetectable. The Compton hump and Fe-K$\alpha$ line are both reflection features from the putative torus.
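As a sanity check on the CTN/CTK boundary quoted above, note that $N_{\textrm H}\approx1.5\times10^{24}\,\textnormal{cm}^{-2}$ is where the optical depth to Compton (Thomson) scattering reaches unity. The cross-section value below is the standard one, not taken from this paper:

```python
SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]
N_H = 1.5e24         # Compton-thick threshold column density [cm^-2]

tau = SIGMA_T * N_H  # optical depth to Compton (Thomson) scattering
print(tau)  # ~1: the medium becomes optically thick to scattering
```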
X-ray selection is one of the most effective strategies available for detecting CTK sources because hard X-ray photons stand a greater chance of escaping the enshrouding obscuring media due to their increased penetrating power. In addition, photons with initial propagation directions out of the LOS can also be detected through Compton scattering into our LOS. For this reason, the best energy range to observe CTK AGN is *E*>10keV. In general, many synthesis models formulated to date seem to agree that fitting to the peak flux of the CXB at $\sim$30keV requires a CTK AGN contribution in the range 10–25% [@Comastri1995; @Gandhi2003; @Gilli2007; @Treister2009; @Draper2010; @Akylas2012; @Ueda2014]. The actual number density of CTK AGN remains unclear, with various recent sample observations suggesting a fraction exceeding 20% [@Goulding2011; @Lansbury2015; @Ricci2015; @Koss2016b]. @Gandhi2007 and @Treister2009 discuss degeneracy between the different component parameters (e.g. reflection and obscuration) used to fit the CXB. This is why the shape of the CXB cannot be directly used to determine the number of CTK AGN, and further explains the large uncertainty associated with the CTK fraction.
Many X-ray missions to date have been capable of detecting photons above 10keV, such as *BeppoSAX*, *Swift*, *Suzaku* and *INTEGRAL*. However, due to issues including high background levels, relatively small effective areas and low angular resolution, few CTK sources have been identified. The *Nuclear Spectroscopic Telescope Array* (*NuSTAR*) [@Harrison2013] is the first mission in orbit capable of *true* X-ray imaging in the energy range $\sim$3–79keV. Since launch, *NuSTAR* has not only studied well known CTK AGN in detail [@Arevalo2014; @Bauer2015; @Marinucci2016], it has also helped to identify and confirm numerous CTK candidates in the local Universe [@Gandhi2014; @Balokovic2014; @Annuar2015; @Koss15; @Koss2016], as well as carry out variability studies focusing on *changing-look* AGN [@Risaliti2013; @Walton2014; @Rivers2015; @Marinucci2016; @Ricci2016; @Masini2016b]. Moreover, deep *NuSTAR* surveys have resolved a fraction of 35$\pm$5% of the total integrated flux of the CXB in the 8–24keV band [@Harrison2015].
Detailed modelling of individual highly obscured sources is the most effective way to understand the spectral components contributing to the missing fraction of the peak CXB flux. Here we carry out the first robust broad-band X-ray spectral analysis of the nearby Seyfert 2 and candidate CTK AGN IC 3639 (also called Tololo 1238-364). The source is hosted by a barred spiral galaxy (Hubble classification SBbc[^1]) with redshift *z*=0.011 and corresponding luminosity distance D=53.6Mpc. This is calculated for a flat cosmology with $H_{0}$=67.3 kms$^{-1}$Mpc$^{-1}$, $\Omega_{\Lambda}$=0.685 and $\Omega_{M}$=0.315 [@Planck2014]. All uncertainties are quoted at a 90% confidence level for one interesting parameter, unless stated otherwise. This paper uses *NuSTAR* and archival X-ray data from the *Suzaku* and *Chandra* satellites. The *Suzaku* satellite operated in the energy range $\sim$0.1–600keV and is thus capable of detecting hard X-rays. However, the hard X-ray energy range of this satellite is covered by a non-imaging detector, leading to potential complications for faint sources, as outlined in Section \[sec:obs\_SU\_HXD\]. *Chandra* has a high energy limit of $\sim$8keV, and very high angular resolution with a lower energy limit $\gtrsim$0.1keV. Consequently, the different capabilities of *NuSTAR*, *Suzaku* and *Chandra* complement each other so that a multi-instrument study provides a *broad-band* spectral energy range.
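The luminosity distance quoted above follows from the redshift and the flat cosmology. As a minimal illustration, the sketch below uses a second-order low-redshift expansion and assumes pure Hubble flow, so its result ($\sim$49Mpc) is close to, but not identical to, the quoted 53.6Mpc, which may additionally fold in corrections such as peculiar velocities.

```python
# Low-z luminosity distance for the flat cosmology quoted in the text
# (H0 = 67.3 km/s/Mpc, Omega_M = 0.315, Omega_Lambda = 0.685).
# Second-order Hubble-law expansion; a sketch only -- the quoted 53.6 Mpc
# may fold in additional velocity corrections not modelled here.
C_KMS = 2.998e5  # speed of light [km/s]

def luminosity_distance(z, h0=67.3, om=0.315, ol=0.685):
    """D_L in Mpc, to second order in z (valid for z << 1)."""
    q0 = 0.5 * om - ol  # deceleration parameter for a flat LCDM cosmology
    return (C_KMS * z / h0) * (1.0 + 0.5 * z * (1.0 - q0))

d_l = luminosity_distance(0.011)  # ~49 Mpc from pure Hubble flow
```
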
This paper is structured as follows: Section \[sec:target\_selection\] explains the target selection, and Section \[sec:observations\] describes the details of each X-ray observation of the source used, as well as the spectral extraction processes. The corresponding X-ray spectral fitting and results are outlined in Sections \[sec:x-ray\_fitting\] and \[sec:results\], respectively. Finally, Section \[sec:discussion\] discusses the broad-band spectral components determined from the fits, the intrinsic luminosity of the source, and a multi-wavelength comparison with other CTK sources. We conclude with a summary of our findings in Section \[sec:conclusions\].
THE TARGET {#sec:target_selection}
==========
The first published X-ray data of IC 3639 were reported by @Risaliti1999, who suggest the source to be CTK with column density $N_\textrm{H}$>$10^{25}\,\textnormal{cm}^{-2}$. This lower limit was determined from a soft X-ray spectrum provided by the *BeppoSAX* satellite, together with multi-wavelength diagnostic information (see @Risaliti1999b and references therein for further details on the modelling used). Additionally, the EW of the Fe-K$\alpha$ emission line was reported as $3.20^{\,+0.98}_{\,-1.74}$keV. Such high EWs are extreme, though not unheard of, as reported by @Levenson2002. Optical images of the source, as well as surrounding source redshifts, suggest IC 3639 is part of a triple merger system (e.g. Figure \[fig:1a\], upper left panel - IC 3639 is $\sim$15 away from its nearest galaxy neighbour to the North-East). However, @Barnes2001 use the HI detection in this group to suggest that it is free of any significant galaxy interaction or merger.
@Miyazawa2009 analysed the source as part of a sample of 36 AGN observed by *Suzaku*, including the higher-energy HXD PIN data. The source was found to have an obscuring column density of $7.47^{+4.81}_{-3.14}\,\times10^{23}\,\textnormal{cm}^{-2}$ with photon index $1.76^{+0.52}_{-0.44}$, suggesting a Compton-thin (CTN) nature. This *could* indicate variability between the 2007 *Suzaku* observation and the 1999 *BeppoSAX* observation reported by @Risaliti1999.
As outlined in Section \[sec:introduction\], CTK sources are notoriously hard to detect due to their low count rates in the soft X-ray band. For this reason, one must rely on particular spectral characteristics indicative of CTK sources. The first and most obvious indication is a prominent Fe-K$\alpha$ fluorescence line. This can occur when the fluorescing material is exposed to a greater X-ray flux than is directly observed, so that the line appears strong relative to the continuum emission [@Krolik1987].
Other CTK diagnostics are provided through multi-wavelength analysis. For example, by comparing the MIR and X-ray luminosities of the source, which have been shown to correlate for AGN [@Elvis1978; @Krabbe2001; @Horst2008; @Gandhi2009; @Levenson2009; @Mateos2015; @Stern2015; @Asmus2015]. X-ray obscuration is expected to significantly offset CTK sources from this relation. Indeed, IC 3639 shows an *observed* weak X-ray (2–10keV) luminosity compared to the *predicted* value from this correlation. Another multi-wavelength technique compares emission lines originating in the narrow line region (NLR) on larger scales than the X-ray emission, which arises close to the core of the AGN. Of the multitude of emission lines available for such analysis, one well studied correlation uses the optical \[OIII\] emission-line flux at $\lambda$=5007Å. @Panessa2006 and @Berney2015, among others, study a correlation between the observed \[OIII\] emission-line luminosity and X-ray (2–10keV) luminosity for a group of Seyfert galaxies, after correcting for obscuration. IC 3639 again shows a weak X-ray flux compared to the observed \[OIII\] luminosity. This indicates heavy obscuration depleting the X-ray luminosity. For a comparison between the ratios of MIR and \[OIII\] emission flux to X-ray flux with the average value for (largely unobscured) Seyfert 1s, see @LaMassa2010 [Figure 2].
@Dadin2007 reports that the *BeppoSAX* observations of IC 3639 in both the 20–100keV and 20–50keV bands had negligible detection significance, and places an upper bound on the 20–100keV flux of $F_{\textnormal{20\,--\,100\,keV}}\leqslant$9.12$\times$10$^{-12}$ergs$^{-1}$cm$^{-2}$. Unfortunately, IC 3639 lies below the *Swift*/BAT all-sky survey limit of $\sim$1.3$\,\times\,10^{-11}$ergs$^{-1}$cm$^{-2}$ in the 14–195keV band [@Baumgartner2013].
OBSERVATIONS & DATA REDUCTION {#sec:observations}
=============================
Archival observations for IC 3639 used in this paper were all extracted from the HEASARC archive[^2]. Together, we use *Suzaku* (XIS & HXD), *Chandra* and recent *NuSTAR* data in this study. Table \[tab:obs\_info\] shows the details of each of these observations.
[rllll]{} Satellite & Obs. ID & Date /<span style="font-variant:small-caps;">y-m-d</span> & Exp./ks & PI\
*NuSTAR* & & 2015-01-09 & & P. Gandhi\
*Suzaku* & & 2007-07-12 & & H. Awaki\
*Chandra* & & 2004-03-07 & & R. Pogge\
*NuSTAR* {#sec:obs_NU}
--------
Data from both focal plane modules (FPMA & FPMB) onboard the *NuSTAR* satellite were processed using the *NuSTAR* Data Analysis Software (<span style="font-variant:small-caps;">NuSTARDAS</span>) within the <span style="font-variant:small-caps;">heasoft</span> package. The corresponding <span style="font-variant:small-caps;">caldb</span> files were used with the <span style="font-variant:small-caps;">NuSTARDAS</span> task <span style="font-variant:small-caps;">nupipeline</span> to produce calibrated and cleaned event files. The spectra and response files were produced using the <span style="font-variant:small-caps;">nuproducts</span> task, after standard data screening procedures. The net count rates in the 3–79keV band for FPMA & FPMB were $(9.413\,\pm\,0.549)\times\,10^{-3}$counts s$^{-1}$ and $(8.018\,\pm\,0.544)\times\,10^{-3}$counts s$^{-1}$, for net exposures of 58.7ks and 58.6ks, respectively (this corresponds to total count rates of $(1.686\,\pm\,0.054)\times\,10^{-2}$counts s$^{-1}$ and $(1.648\,\pm\,0.053)\times\,10^{-2}$counts s$^{-1}$ for FPMA & FPMB, respectively). Circular source regions of radius 0.75$'$ were used to extract source counts from the corresponding event files. Background counts were extracted from annular regions of outer radius 2.5$'$ and inner radius 0.75$'$, centred on the source regions to avoid any cross-contamination between source and background counts. The background region was chosen to be as large as possible within the same module as the source.
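The net rates above follow from subtracting an area-scaled background, taken from the annular region, from the raw source-region counts. A minimal sketch of that step with Poisson error propagation is given below; the counts, radii and exposure used are illustrative placeholders, not the measured values.

```python
import math

# Sketch of background subtraction with region-area scaling: background
# counts from the annulus are rescaled by the ratio of source-region to
# background-region areas before subtraction, with Poisson errors added
# in quadrature. All input values below are illustrative only.
def net_rate(src_counts, bkg_counts, exposure, r_src, r_in, r_out):
    """Return (net rate, error) in counts/s for a circular source region
    of radius r_src and a background annulus spanning r_in to r_out."""
    scale = r_src ** 2 / (r_out ** 2 - r_in ** 2)  # circle vs annulus area
    net = src_counts - scale * bkg_counts
    err = math.sqrt(src_counts + scale ** 2 * bkg_counts)
    return net / exposure, err / exposure

# hypothetical counts over a 58.7 ks exposure, NOT the measured values
rate, err = net_rate(990, 4600, 58.7e3, r_src=0.75, r_in=0.75, r_out=2.5)
```
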
The extracted spectra for FPMA and FPMB were then analysed using the <span style="font-variant:small-caps;">xspec</span> version 12.9.0 software package[^3]. The energy range was constrained to the optimum energy range of *NuSTAR* and grouped so that each bin contained a signal-to-noise ratio (SNR) of at least 4. The resulting spectra are shown in count-rate units in Figure \[fig:f2\].
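The SNR grouping can be illustrated with a minimal sketch: ignoring background, a bin of $N$ counts has SNR $N/\sqrt{N}$, so requiring SNR$\geq$4 means accumulating at least 16 counts per bin. Real grouping tools (e.g. `grppha`) also propagate quality flags and grouping columns; this is not the pipeline actually used, just the idea.

```python
# Sketch of minimum-SNR binning: merge consecutive channels until the
# accumulated counts give SNR = counts/sqrt(counts) >= snr_min, which
# (ignoring background) reduces to requiring counts >= snr_min**2.
def group_min_snr(counts, snr_min=4.0):
    target = snr_min ** 2
    groups, acc = [], 0
    for c in counts:
        acc += c
        if acc >= target:
            groups.append(acc)
            acc = 0
    if acc:  # fold any leftover counts into the last complete bin
        if groups:
            groups[-1] += acc
        else:
            groups.append(acc)
    return groups

binned = group_min_snr([3, 5, 9, 2, 20, 1, 1], snr_min=4)
```
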
Figure \[fig:1a\] shows the comparison of the *NuSTAR* FPMA image with an optical *Digitised Sky Survey* (*DSS*) image. The blue regions highlight the counterparts of the merging triple, clearly visible in the optical. However, there is no detection of the separate galaxies in the *NuSTAR* image, with the primary emission originating from IC 3639.
*Suzaku* {#sec:obs_SU}
--------
When fully operational, *Suzaku* had four CCD X-ray imaging spectrometers (XIS) and a hard X-ray detector (HXD). The XIS covered an energy range of 0.4–10keV with typical resolution 120eV.[^4] During the lifetime of *Suzaku*, one of the four XIS detectors became non-operational, leaving two front illuminated (FI) detectors (XIS0 and XIS3), and one back illuminated (BI) detector (XIS1). HXD was a non-imaging instrument designed for observations in the energy range 10–700keV.
### XIS {#sec:obs_SU_XIS}
First, the <span style="font-variant:small-caps;">ximage</span> software package[^5] was used to create an image by summing over the three XIS cleaned event files. Next, source counts were extracted from a circular region of radius 2.6$'$, with background counts extracted from an annular region of inner radius 2.6$'$ and outer radius 5.0$'$. The background annular region was again centred on the source region to avoid contamination between source and background counts. <span style="font-variant:small-caps;">xselect</span> was then used to extract a spectrum for each XIS detector cleaned event file using the source and background regions defined above. Lastly, we used the <span style="font-variant:small-caps;">addascaspec</span> command to combine the two FI XIS spectra. The final result was two spectra: one for the FI cameras (XIS0 + XIS3, referred to as XIS03 herein) and one for the single BI camera (XIS1). The net exposure times for XIS03 and XIS1 were 107.8ks and 53.4ks, respectively. The data were again grouped with a minimum SNR of 4. Additionally, the XIS spectral data in the energy ranges 1.7–1.9keV and 2.1–2.3keV were ignored due to instrumental calibration uncertainties associated with the silicon and gold edges in the spectra[^6].
### HXD {#sec:obs_SU_HXD}
The corresponding spectrum for the HXD instrument was generated with the <span style="font-variant:small-caps;">ftools</span> command `hxdpinxbpi`. The data were then binned to a minimum of 500 counts per bin. The 10–700keV energy range of the HXD is covered by Gadolinium Silicate (GSO) counters above 50keV and PIN diodes in the range 15–50keV. The GSO instrument is significantly less sensitive than *NuSTAR* and is thus not used here. For the PIN instrument, a model has been designed to simulate the non-X-ray background (NXB). In the 15–40keV range, current systematic uncertainties in the modelled NXB are estimated to be $\sim$3.2%. A *tuned* NXB file for the particle background is provided by the *Suzaku* team, whereas the CXB is evaluated separately and added to the tuned background, resulting in a final *total* background. The modelled CXB is $\sim$5% of the total background for PIN. The `hxdpinxbpi` command then uses the total background to produce a dead-time corrected PIN source and background (NXB+CXB) spectrum. The net source counts for IC 3639 are shown in red in Figure \[fig:f3\]. The gross counts (source+background) are considerably higher than the net source counts, and are shown in black in the same figure for comparison. The source flux was calculated in the energy range 15–40keV for a simple power-law model. The corresponding fluxes for source+background ($F_{\textrm{B,15-40\,keV}}$) and source alone ($F_{\textrm{S,15-40\,keV}}$) are:\
$F_{\textrm{B,15-40\,keV}}=1.41\,\pm\,0.01\,\times\,10^{-10}$ ergs$^{-1}$cm$^{-2}$, and\
$F_{\textrm{S,15-40\,keV}}=8.20\,^{+0.47}_{-8.20}\,\times\,10^{-13}$ ergs$^{-1}$cm$^{-2}$.\
The CXB is known to vary between different instruments at the level of $\sim$10% in the energy range considered here. For this reason, as a consistency check, we compared the background uncertainty from *Suzaku* ($3-5\%$)[^7] to the error in the total background found when the CXB flux component carried a $10\%$ uncertainty. This altered the tuned background error to 2.9%–4.8%. Thus, within acceptable precision, the total background appears unaffected by potential CXB cross-instrument fluctuations. If the source spectrum is less than $\sim$5% of the tuned background, the detection is weak, and a source spectrum flux lower than $\sim$3% of the background would require careful assessment. The IC 3639 source counts are found to be $0.8\,^{+0.8}_{-0.9}\%$ of the tuned background counts in the 15–40keV range. For this reason, we do not use the HXD data in our spectral analysis of IC 3639. This value contradicts @Miyazawa2009, who report a 15–50keV flux of $F_{15-50\,keV}\,=\,1.0\,\times\,10^{-11}$ergs$^{-1}$cm$^{-2}$ – approximately two orders of magnitude higher than we find, and also greater than the upper limit attained from the *BeppoSAX* satellite mentioned in Section \[sec:target\_selection\].
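As a rough cross-check of the weak-detection argument, the quoted fluxes give a source-to-background ratio well below the background systematic uncertainty. Note this is a flux ratio, while the $\sim$0.8% figure in the text is a count ratio, so only order-of-magnitude agreement is expected.

```python
# Rough sanity check on the HXD PIN detection: fraction of the 15-40 keV
# band flux attributable to the source alone, using the values quoted
# above. A flux ratio of the same order as the ~0.8% count ratio is the
# expected outcome, not an exact match.
f_total = 1.41e-10   # source + background flux [erg/s/cm^2]
f_source = 8.20e-13  # net source flux [erg/s/cm^2]

frac = f_source / f_total  # ~0.6%, below the ~3-5% background systematic
```
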
We further find this result to be inconsistent with *NuSTAR*. Using a simple <span style="font-variant:small-caps;">powerlaw+gaussian</span> model fitted to the *NuSTAR* data, we obtain $F\,^{\textrm{FPMA}}_{15-50\,\textrm{keV}}=3.0\,^{+0.9}_{-0.5}\,\times\,10^{-12}$ergs$^{-1}$cm$^{-2}$ and $F\,^{\textrm{FPMB}}_{15-50\,\textrm{keV}}=3.1\,^{+1.0}_{-0.4}\,\times\,10^{-12}$ergs$^{-1}$cm$^{-2}$. Extrapolating these fluxes to the 20–100keV band gives $F\,_{20-100\,\textrm{keV}}\,\sim\,1.8\,\times\,10^{-12}$ergs$^{-1}$cm$^{-2}$ for both Focal Plane Modules, which is fully consistent with the upper limit found with the *BeppoSAX* satellite ($F^{BeppoSAX}_{\textnormal{20\,--\,100\,keV}}\leqslant$9.12$\times$10$^{-12}$ergs$^{-1}$cm$^{-2}$).
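For a bare power law of photon index $\Gamma$, band extrapolations of this kind reduce to a ratio of band integrals of $E^{1-\Gamma}$. The sketch below shows that conversion; it is illustrative only, since the extrapolation above used the full fitted model rather than an unabsorbed power law, so the numbers will differ.

```python
import math

# Band conversion for an unabsorbed power law of photon index gamma:
# the energy flux F(E1, E2) is proportional to the integral of E * E**-gamma,
# i.e. (E2**(2-gamma) - E1**(2-gamma)) / (2-gamma), with the gamma = 2
# case reducing to ln(E2/E1). A sketch only, not the fitted model.
def band_factor(e1, e2, gamma):
    if abs(gamma - 2.0) < 1e-9:
        return math.log(e2 / e1)
    p = 2.0 - gamma
    return (e2 ** p - e1 ** p) / p

def convert_flux(flux, band_from, band_to, gamma):
    return flux * band_factor(*band_to, gamma) / band_factor(*band_from, gamma)

# e.g. scale a 15-50 keV flux to the 20-100 keV band for gamma = 1.8
f_20_100 = convert_flux(3.0e-12, (15.0, 50.0), (20.0, 100.0), gamma=1.8)
```
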
Figure \[fig:f2\] shows the *Suzaku* XIS spectra over-plotted with the *NuSTAR* data. The spectral shapes of the *Suzaku* XIS and *NuSTAR* data are consistent with each other in the common energy range 3–10keV. The composite spectrum spans nearly two dex in energy, from 0.7–34keV, as a result of the minimum-SNR grouping procedures for each data set.
*Chandra* {#sec:obs_CHA}
---------
The *Chandra* level 2 event file was obtained from the <span style="font-variant:small-caps;">heasarc</span> database. A fraction of the total collecting area of the detector was used in the *timed exposure mode* setting, where the CCD collects data for a set frame time. Selecting a frame time shorter than the default value (3.2s for *Chandra*) reduces the probability of *pile-up*, in which two or more photons arriving in the same detector region within a single frame are read out as one event. An exposure time of 0.4s per frame was used in the observation of IC 3639, giving a reduced predicted pile-up fraction of $\sim$0.3%. This setting was chosen in the original *Chandra* observation proposal because the X-ray flux of the source was previously unknown, to minimise the risk associated with pile-up.
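The benefit of a shorter frame time can be sketched with simple Poisson statistics: if $\mu$ is the expected number of photons per frame within the PSF core, the probability that a frame piles up is $P(\geq2)=1-(1+\mu)e^{-\mu}$. The rate used below is a made-up value, and the $\sim$0.3% prediction quoted in the text comes from the instrument team's own, more detailed modelling.

```python
import math

# Poisson sketch of frame-time pile-up: for photons arriving at rate r
# [counts/s] within one PSF core, the expected counts per frame of length
# t is mu = r * t, and pile-up occurs when a frame catches two or more
# photons: P(>=2) = 1 - (1 + mu) * exp(-mu). Illustrative only.
def pileup_fraction(rate, frame_time):
    mu = rate * frame_time
    return 1.0 - (1.0 + mu) * math.exp(-mu)

# Shortening the frame from 3.2 s to 0.4 s cuts mu (and hence pile-up
# probability) by a factor of 8 for the same incident rate:
p_default = pileup_fraction(0.05, 3.2)  # hypothetical 0.05 counts/s
p_short = pileup_fraction(0.05, 0.4)
```
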
Spectral extraction from the *Chandra* data was carried out with the <span style="font-variant:small-caps;">ciao 4.7</span> software package[^8]. We first inspected the *Chandra* image of IC 3639 for potential contaminants located within the *Suzaku* and *NuSTAR* extraction regions; however, no particularly prominent contaminating sources were visible in the immediate vicinity of the AGN. A comparatively large circular source region of radius 2.6$'$ was used with the XIS image due to its larger point spread function (PSF) relative to *Chandra*. As a result, the XIS spectra will contain some flux from non-AGN related activity, unresolvable by that instrument. To account for this, the *Chandra* image was used to model as much of the unresolved non-AGN activity in the *Suzaku* XIS image as possible. An annular *Chandra* source region was created with an inner radius of 0.05$'$, excluding the central AGN, and an outer radius as close as possible to that of the circular XIS source region. A simple power law was fitted to this spectrum and added to the model used with the *Suzaku* XIS data. This power law is referred to as the *contamination power law* (CPL) hereafter.
Ideally, the outer radius of the annular extraction region used with the *Chandra* image would equal the radius of the circular source region used with the XIS image. However, as noted above, only a fraction of the detector was used: a custom 1/8 sub-array on ACIS-S3. This meant that *Chandra* only observed part of the sky covered by the other instruments. As such, the extracted region could not be wider than $\sim$0.5$'$, as opposed to the XIS source region of radius 2.6$'$. Accordingly, we used an annular region of outer radius 0.5$'$ for the *Chandra* image. The annular counts extraction region and circular background region of equal radius used with the *Chandra* image are shown in the right panel of Figure \[fig:1b\]. The $\tt specextract$ command was used to create the spectral and response files for use in <span style="font-variant:small-caps;">xspec</span>. The *Chandra* spectrum was grouped with greater than or equal to 20 counts per bin prior to the <span style="font-variant:small-caps;">cpl</span> modelling. Furthermore, the flux of the CPL in the *Chandra* energy band (0.5–8keV) was $F\,^{\textrm{CPL}}_{0.5-8.0\,\textrm{keV}}\,=\,7.17\,\times\,10^{-14}$ergs$^{-1}$cm$^{-2}$.
X-RAY SPECTRAL FITTING {#sec:x-ray_fitting}
======================
The resulting spectral data sets for *Suzaku* XIS and *NuSTAR* were used in the energy ranges 0.7–9.0keV and 3.0–34.0keV, respectively; data above these upper limits were excluded due to low SNR. Initially, the *Suzaku* XIS and *NuSTAR* data were fitted independently of each other with a simple <span style="font-variant:small-caps;">powerlaw+gaussian</span> model, giving the following fluxes in the 2–10keV energy band:\
*NuSTAR*:\
$F\,^{\textrm{FPMA}}_{2\,-\,10\,\textrm{keV}}=1.81\,^{+0.16}_{-0.41}\,\times\,10^{-13}$ergs$^{-1}$cm$^{-2}$,\
$F\,^{\textrm{FPMB}}_{2\,-\,10\,\textrm{keV}}=1.86\,^{+0.17}_{-0.40}\,\times\,10^{-13}$ergs$^{-1}$cm$^{-2}$.\
*Suzaku* XIS:\
$F\,^{\textrm{XIS03}}_{2\,-\,10\,\textrm{keV}}=1.84\,\pm\,0.24\,\times\,10^{-13}$ergs$^{-1}$cm$^{-2}$,\
$F\,^{\textrm{XIS1}}_{2\,-\,10\,\textrm{keV}}=2.12\,^{+0.34}_{-0.32}\,\times\,10^{-13}$ergs$^{-1}$cm$^{-2}$.\
The overall match between these fluxes implies that we can analyse the data sets together. A *Chandra* spectrum was extracted from a 3.9$''$ circular extraction region; however, only 35 counts were present in the 2–10keV band. Using the same <span style="font-variant:small-caps;">powerlaw+gaussian</span> model to determine the flux as for the other data sets, we get:\
$F\,^{\textrm{\textit{Chandra}}}_{2\,-\,10\,\textrm{keV}}=9.51\,^{+7.83}_{-9.51}\,\times\,10^{-14}$ergs$^{-1}$cm$^{-2}$.\
This flux is only mildly inconsistent with the other data sets, and is also consistent with zero, owing to the low SNR of the data. Given this low SNR, we do not use the *Chandra* AGN data for further spectral analysis, but the consistency found between this and the *Suzaku*/*NuSTAR* data sets supports the robustness of our classification of IC 3639 as a bona fide CTK AGN.
The power-law slope of the composite spectrum (*Suzaku*+*NuSTAR*) is hard, with photon index $\mathrm{\Gamma}\sim1.8$, and the EW of the Fe-K$\alpha$ line is very large (EW$\sim 2.4$keV). Both are characteristic of a heavily obscured AGN. Given the high EWs found for the Fe-K$\alpha$ line with a power-law+Gaussian model, we proceeded to fit more physically motivated models as follows.
A general model *structure* was used with each spectrum, given in Equation \[eq:template\]. However, not all models used in this study required all of the components listed in the template. $$\begin{gathered}
\textsc{\textbf{Template} = const $\times$ phabs[gal] $\times$ [apec + cpl + spl +}\\
\textsc{(refl + obsc $\times$ ipl + f\_lines)]}
\label{eq:template}\end{gathered}$$ Below, we give explicit details for each component in Equation \[eq:template\]:
- <span style="font-variant:small-caps;">const</span>: multiplying constant used to determine the cross-calibration between different instruments. The *NuSTAR* FPMA constant was frozen to unity and the other three constants left free (@Madsen2015 report cross-normalisation constants within 10% of FPMA).
- <span style="font-variant:small-caps;">phabs\[gal\]</span>: component used to account for photoelectric absorption through the Milky Way, based on HI measurements along the LOS [@Dickey1990]. This is represented as an obscuring column density in units of cm$^{-2}$, assumed constant between instruments (and so frozen for each data set) at the determined value of 5.86$\times\,10^{20}\,\textnormal{cm}^{-2}$.
- <span style="font-variant:small-caps;">apec</span> [@Smith2001]: model component used as a simple parameterisation of the softer energy X-ray emission associated with a thermally excited diffuse gas surrounding the AGN. Detailed studies of brighter local AGN indicate that photoionisation may provide a better description of the soft X-ray emission in AGN spectra (e.g. @Guainazzi2009 [@Bianchi2010]), but such modelling would require a higher SNR and better spectral resolution than currently available for IC 3639. The low-energy spectral shape for IC 3639 found with *Suzaku* XIS is far softer than for the higher energy portion of the spectrum.
- <span style="font-variant:small-caps;">cpl</span>: component referring to the *contamination power law*, used to account for the unresolved non-AGN emission *contaminating* the *Suzaku* XIS spectral counts. See Section \[sec:obs\_CHA\] for further details.
- <span style="font-variant:small-caps;">spl</span>: component referring to the *scattered power law*. This accounts for *intrinsic* AGN emission that has been scattered into our LOS from regions closer to the AGN, such as the NLR. The power law photon index and normalisation were tied to the intrinsic AGN emission as a simplification. However, a constant multiplying the <span style="font-variant:small-caps;">spl</span> component was left free to allow a variable fraction of observed flux arising from scattered emission.
- The final term in Equation \[eq:template\] collectively consists of three parts:
- <span style="font-variant:small-caps;">refl</span>: reflected component, arising from the primary nuclear obscurer and has been modelled in varying ways. In this work, we use slab models (<span style="font-variant:small-caps;">pexrav</span> and <span style="font-variant:small-caps;">pexmon</span>) as well as toroidal geometry models (<span style="font-variant:small-caps;">torus</span> and <span style="font-variant:small-caps;">mytorus</span>), described in Sections \[sec:slabs\], \[sec:T\] and \[sec:M\] respectively.
- <span style="font-variant:small-caps;">obsc $\times$ ipl</span>: Most models include the direct transmitted component (*intrinsic power law* or <span style="font-variant:small-caps;">ipl</span>), after accounting for depletion due to absorption through the obscurer via the multiplying <span style="font-variant:small-caps;">obsc</span> term.
- <span style="font-variant:small-caps;">f\_lines</span>: component describing fluorescence lines believed to arise from photon interactions with the circumnuclear obscurer.
Slab models: <span style="font-variant:small-caps;">pexrav</span> and <span style="font-variant:small-caps;">pexmon</span> {#sec:slabs}
--------------------------------------------------------------------------------------------------------------------------
Slab models describe X-ray reflection from a flat slab of infinite extent, illuminated by a central source. <span style="font-variant:small-caps;">pexrav</span> [@Magdziarz1995] comprises an exponentially cut-off power-law illuminating spectrum reflected from neutral material. To acquire the reflection component alone, with no direct transmitted component, the reflection scaling factor parameter is set to a value $R$<0. Other parameters of interest include the power-law photon index; cutoff energy; abundance of elements heavier than helium; iron abundance (relative to the previous abundance); and inclination angle of the slab (90$^{\circ}$ describes an edge-on configuration; 0$^{\circ}$ describes face-on). The model gave far better reduced chi-squared values for a reflection-dominated configuration, and as such the reflection scaling factor was frozen to $-1.0$, corresponding to a 50% covering factor. <span style="font-variant:small-caps;">pexrav</span> does not self-consistently include fluorescent line emission, so a basic Gaussian component was initially added to account for the strong Fe-K$\alpha$ line, resulting in an EW$\sim1.4-3.0$keV. Alternatively, the <span style="font-variant:small-caps;">pexmon</span> model [@Nandra2007] combines <span style="font-variant:small-caps;">pexrav</span> with approximated fluorescence lines and an Fe-K$\alpha$ Compton shoulder [@Yaqoob2011]. The fluorescence lines include Fe-K$\alpha$, Fe-K$\beta$ and nickel-K$\alpha$. All analysis with slab models hereafter refers to the <span style="font-variant:small-caps;">pexmon</span> model, denoted model **P**. Given the high EWs found for the Fe-K$\alpha$ line with the reflection-dominated <span style="font-variant:small-caps;">pexrav</span>+Gaussian model, in addition to the power-law+Gaussian model, we next considered more physically motivated, self-consistent obscured AGN models.
<span style="font-variant:small-caps;">bntorus</span> {#sec:T}
-----------------------------------------------------
Two tabular models are provided by @Brightman2011 to describe the obscurer self-consistently, including the intrinsic emission and reflected line components. The spherical version describes a covering fraction of unity, with a geometry completely enclosing the source. However, the presence and morphology of NLRs in a multitude of sources favour a covering factor <1, implying an anisotropic geometry for the obscurer in most Seyfert galaxies. For this reason, the spherical model was used only to develop preliminary results before the analysis was carried out with toroidal models. For further discussion of the NLR of IC 3639, see Section \[sec:dis\_spec\_comps\].
The second <span style="font-variant:small-caps;">bntorus</span> model (model **T** hereafter) was used extensively in this study. It models a toroidal obscurer surrounding the source, with varying opening and inclination angles. Here, the opening angle describes the conical segment extending from both poles of the source (i.e. the half-opening angle). Because the obscurer is a spherical section in this model, the column density does not vary with inclination angle. The range of opening angles that can be explored is restricted by the inclination angle, since for inclination angles less than the opening angle the source becomes unobscured. Thus, to allow exploration of the full range of opening angles, we fixed the inclination angle to the upper limit allowed by the model: 87$^{\circ}$ [@Brightman2015]. The tables provided for <span style="font-variant:small-caps;">bntorus</span> are valid in the energy range 0.1–320keV, up to obscuring column densities of $10^{26}$cm$^{-2}$. Equation \[eq:T\] describes the form of model T used in <span style="font-variant:small-caps;">xspec</span>: all properties associated with the absorber are contained in the <span style="font-variant:small-caps;">torus</span> term, and used collectively in the modelling process.
$$\begin{gathered}
\textsc{model \textbf{T} = const $\times$ phabs $\times$ (apec + spl +}\\
\textsc{cpl + torus)}
\label{eq:T}\end{gathered}$$
@Liu2015 report model T to over-predict the reflection component for edge-on geometries, resulting in uncertainties. However, varying the inclination angle did not drastically alter our fits and consistent results were acquired between both models T and M, described next.
<span style="font-variant:small-caps;">mytorus</span> {#sec:M}
-----------------------------------------------------
The <span style="font-variant:small-caps;">mytorus</span> model, developed by @Murphy2009, describes a toroidal-shaped obscurer with a fixed half-opening angle of 60$^{\circ}$, and free inclination angle. However, because the geometry here is a doughnut as opposed to a sphere, the LOS column density will always be less than or equal to the equatorial column density (with equality representing an edge-on orientation with respect to the observer).
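For the assumed doughnut geometry with its fixed 60$^{\circ}$ half-opening angle, the LOS column relates to the equatorial column through a simple geometric factor (see @Murphy2009). The sketch below encodes that commonly quoted relation; it is an illustration of the geometry, not part of the fitting.

```python
import math

# Geometric relation (assumed here; see Murphy & Yaqoob 2009) between the
# line-of-sight column through the doughnut and the equatorial column for
# the fixed 60-degree half-opening angle:
#   N_H(los) = N_H(eq) * sqrt(1 - 4*cos(i)**2), valid for i >= 60 deg,
# so the LOS column is always <= the equatorial column, with equality
# only for an exactly edge-on (i = 90 deg) orientation.
def nh_los(nh_eq, incl_deg):
    mu = math.cos(math.radians(incl_deg))
    if 4.0 * mu * mu >= 1.0:
        return 0.0  # sight line misses the torus (i < 60 deg)
    return nh_eq * math.sqrt(1.0 - 4.0 * mu * mu)

nh_edge_on = nh_los(1.0e24, 90.0)  # edge-on: equals the equatorial value
nh_grazing = nh_los(1.0e24, 65.0)  # grazing sight line: much lower column
```
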
The full computational form of this model is shown in Equation \[eq:M\], and encompasses the energy range 0.5–500keV, for column densities up to $10^{25}$cm$^{-2}$. Three separate tables are used to describe this model: the *transmitted absorption* or *zeroth order* continuum, altered by photoelectric absorption and Compton scattering; the *scattered component*, describing Compton scattering off the torus; and the *fluorescence emission* for neutral Fe-K$\alpha$ and Fe-K$\beta$ together with their associated Compton shoulders. This study uses the *coupled* mode for this model, where model parameters are tied between different table components (referred to as model **M** hereafter). For further details on the decoupled mode, which is often used for sources showing variability or non-toroidal geometries with high SNR data, refer to the publicly available <span style="font-variant:small-caps;">mytorus</span> examples,[^9] or see @Yaqoob2012.
$$\begin{gathered}
\textsc{model \textbf{M} = const $\times$ phabs $\times$ (apec + spl + cpl +}\\
\textsc{pow $\times$ etable\{trans. absorption\} +}\\
\textsc{atable\{scattered\} +}\\
\textsc{atable\{fluor\_lines\})}
\label{eq:M}\end{gathered}$$
Results from spectral fitting {#sec:results}
=============================
In this section, we present the results of our X-ray spectral fitting of IC 3639 together with model-specific parameters shown in Table \[tab:obs\_parameters\]. Figures \[fig:f4a\] and \[fig:f4b\] show the spectra and best-fit models attained for models T and M, respectively. First we consider the EW of the Fe-K$\alpha$ line. As previously mentioned, an EW of the order of 1keV can be indicative of strong reflection. @Risaliti1999 found the EW for IC 3639 to be $3.20^{+0.98}_{-1.74}$keV. In order to determine an EW for the Fe-K$\alpha$ line here, we modelled a restricted energy range of $\sim$3–9keV with a simple <span style="font-variant:small-caps;">(powerlaw+gaussian)</span> model. Here the power law was used to represent the underlying continuum, and the Gaussian was used as a simple approximation to the Fe-K$\alpha$ fluorescence line. Additionally, all four data sets were pre-multiplied by cross-calibration constants in the same way as described in the template model.
Due to low signal-to-noise, the continuum normalisation had a large uncertainty, so a robust error could not be determined directly on the EW using <span style="font-variant:small-caps;">xspec</span>. Instead, we carried out a four-dimensional grid search in <span style="font-variant:small-caps;">xspec</span>, stepping over all free parameters of the model except the line energy, which was well defined and frozen at 6.36keV in the observed frame. The EW was calculated at each grid point, and the corresponding confidence plot is shown in Figure \[fig:f5a\]. The horizontal black line represents the 90% confidence region for the chi-squared difference from the best-fit value, $\Delta\chi^2$. Here, the 90% confidence level refers to the chi-squared distribution for four free parameters, with value $\Delta\chi^2$=7.779. Figure \[fig:f5b\] shows the model used, fitted to the four data sets. This gave an EW of $2.94^{+2.79}_{-1.30}$keV, consistent with @Risaliti1999 and well above the approximate threshold of 1keV typically associated with the presence of CTK obscuration. However, @Gohil2015 find that the presence of dust in the obscuring medium can enhance the Fe-K$\alpha$ line detection even for CTN gas, a further reason why consistent modelling is important for determining the column density robustly. Additionally, the errors favour a high EW, with the upper limit fully encapsulating the most extreme cases reported by @Levenson2002.
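The EW computation itself is straightforward once the line and continuum components are separated: it is the integrated line flux divided by the continuum photon flux density at the line energy. Below is a minimal sketch with illustrative parameter values, not the fitted ones from the text.

```python
# Sketch of the equivalent-width calculation for a Gaussian line over a
# power-law continuum: EW = (integrated line photon flux) / (continuum
# photon flux density at the line energy). All values are illustrative.
def eq_width(line_norm, cont_norm, cont_gamma, e_line):
    """line_norm: total photons/cm^2/s in the line;
    cont_norm: photons/cm^2/s/keV at 1 keV; returns EW in keV."""
    cont_density = cont_norm * e_line ** (-cont_gamma)
    return line_norm / cont_density

# hypothetical normalisations giving an EW of order a couple of keV
ew = eq_width(line_norm=2.0e-6, cont_norm=3.0e-5, cont_gamma=1.8,
              e_line=6.36)
```
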
\[fig:f4\]
\[fig:f5\]
Both models T and M yield consistent cross-calibration constants between data sets (see Table \[tab:obs\_parameters\]), with the exception of the cross-calibration between *NuSTAR* FPMA and *Suzaku* XIS1. The cross-calibration constant between *NuSTAR* and *Suzaku* data deviated significantly from unity if the <span style="font-variant:small-caps;">cpl</span> component was removed, strongly indicating that the extra <span style="font-variant:small-caps;">cpl</span> component is necessary. The varying cross-calibration between *Suzaku* and *NuSTAR* may be due to instrumental differences unaccounted for by the <span style="font-variant:small-caps;">cpl</span> component, or perhaps a subtle signature of variability. To test these results, the cross-calibration constants were fixed to the similar values found by @Madsen2015. This resulted in comparable fit statistics ($\chi^2\mathbin{/}\textrm{dof}$) of 98/80 and 104/80 for models T and M respectively, together with marginally altered physical parameters from those presented in Table \[tab:obs\_parameters\].
The soft emission was modelled with <span style="font-variant:small-caps;">apec</span>. The values of *kT* found for either model show strong agreement, both being consistent with 0.8keV within errors. Varying other parameters did not significantly alter this value or its corresponding normalisation. Agreement for the <span style="font-variant:small-caps;">apec</span> component between models T and M is expected from Figures \[fig:f4a\] and \[fig:f4b\], since this dominates the other model components at soft enough energies for both models. The corresponding intrinsic soft luminosity (solely from the <span style="font-variant:small-caps;">apec</span> component) in the 0.5–2keV band was consistently found to be $2.0\,\times\,10^{40}$ergs$^{-1}$. Note the <span style="font-variant:small-caps;">apec</span> flux is $\sim$3 times the flux derived from the <span style="font-variant:small-caps;">cpl</span> component.
The scattering fraction (numerically represented by the constant multiplying the <span style="font-variant:small-caps;">spl</span> component) is comparable between models T and M. Even within the high upper limit found for either model, the total scattering fraction is $\lesssim$0.6%. Such values are not uncommon in previous CTK studies (e.g. @Gandhi2015 [@Annuar2015]), and suggest that a minor contribution of the total flux arises from scattered emission here, although proper modelling of higher SNR data describing the soft emission would be required to better constrain this.
Next we consider parameters relating to the absorber specifically. The equatorial column density for model T is the same as the column density along the LOS, whereas the LOS column density for model M is less than or equal to the equatorial column density. This is the reason for the two separate entries in Table \[tab:obs\_parameters\] for model M. Model T indicates a strongly CTK obscuring column density of $9.0 \times 10^{\textrm{24}}\,\textrm{cm}^{\textrm{-2}}$ along the LOS. For comparison, model M gives a similar LOS column density at $9.8 \times 10^{\textrm{24}}\,\textrm{cm}^{-2}$. In both models the column density is unconstrained at the upper limit, with a lower limit of $>3.0 \times 10^{\textrm{24}}\,\textrm{cm}^{\textrm{-2}}$, placing it consistently within the CTK regime (see Table \[tab:obs\_parameters\]).
Initially the inclination angle and opening angle were left free to vary in model T, but this led to the model diverging to the limits: the upper limit on inclination angle (describing an edge-on torus) and the lower limit on opening angle (describing a large covering fraction). The inclination angle for both models was tested by stepping over the parameter in <span style="font-variant:small-caps;">xspec</span> across the full allowable range, in addition to fixing the angle to intermediate values such as 60$^{\circ}$. This did not result in a significant improvement in $\Delta \chi^2$, and in some cases worsened the fit. As discussed in Section \[sec:T\], the inclination angle of model T was fixed to 87$^{\circ}$ to allow exploration of a full range of opening angles. In contrast, model M has a fixed half-opening angle (by default) and the inclination angle was left free. The inclination angle found for model M is lower than for model T, at $\sim$84$^{\circ}$, inconsistent with model T at the upper end. This could be affected by the model inconsistencies at edge-on inclinations for model T reported by @Liu2015; it still suggests a near edge-on torus inclination, however. In contrast, the opening angle for model T (29$^{\circ}$) is lower than the fixed value in model M. A reduced opening angle implies an increased covering factor surrounding the source and thus potentially a strengthened reflection component.
The intrinsic AGN spectrum can be studied via the continuum photon index. Both models consistently agree on a soft photon index of $\sim$2.5, far softer than the average value of $\sim$1.9 found in large surveys (e.g. @Mateos2005). However, our value is consistent with typical values within the uncertainties. To test this, the photon index was fixed to 1.9 in both models. The fit statistics ($\chi^2\mathbin{/}\textrm{dof}$) increased to 97/78 and 101/78, yielding F-test statistics of 3.05 and 1.72 for models T and M, respectively. These values suggest that a photon index of 1.9 is marginally less likely, but not immediately ruled out in either case. Such high photon indices have been found before from the torus models used with CTK sources [@Balokovic2014; @Brightman2015] and could imply accretion at a large fraction of the Eddington rate [@Brightman2016]. The Eddington ratio is discussed further in Section \[sec:dis\_L\]. Additionally, the absorber is likely more complex in reality than the geometrically smooth torus assumed in models T and M (coupled). This has been found in NGC 1068, by @Bauer2015 for example, where a multi-component reflector is comprised of several layers of differing column densities. We include in the Appendix a contour plot between the intrinsic photon index and column density for models T and M as an example. The plots show both the unconstrained nature of $N_{\textrm{H}}$ and the soft photon index favoured by either model.
Overall both models T and M give acceptable fit statistic $\chi^2\mathbin{/}\textrm{dof}$ values of 94/77 and 99/77, respectively. Initial testing with model P yielded a lower fit statistic of 85/76. Since the transmitted power law is not directly visible over any of the spectrum, constraining the reflection fraction (defined as the strength of the reflection component relative to a semi-infinite slab subtending $2\pi$steradians on the sky, fully reflecting the intrinsic power law) is highly uncertain. This was used as justification to fix the reflection scaling factor to -1.0. Other than the reflection-dominated nature of the source, there is little to be learnt from the oversimplified slab geometry of <span style="font-variant:small-caps;">pexmon</span>. Furthermore, slab models effectively give a lower limit on the intrinsic power of the source, since the slab subtends 2$\pi$ steradians on the sky, equivalent to a 50% unobscured covering factor, as opposed to the torus models, in which this solid angle is computed self-consistently with inclination. Model P did, however, appear to require a super-Solar iron abundance to explain the prominent iron line complex present in the spectra of IC 3639. The iron abundance (defined in units of the Solar abundance) and the abundance of elements heavier than helium (defined in units of the iron abundance) were tied to each other and left free. This yielded an abundance of $2.0^{+0.7}_{-0.5}$. We tested this outcome by freezing the abundance and iron abundance to Solar values, as is default in models M and T. This resulted in a considerable increase in the fit statistic to 102/78. Fixing either one independently of the other resulted in comparable best-fit statistics, but with the free parameter of the two significantly deviating from 1.0.
In comparison, the fits shown in Figures \[fig:f4a\] and \[fig:f4b\], using the toroidal models T and M respectively, show a slight residual around the iron line region. This suggests both models are insufficiently describing the iron fluorescence. Besides strong reflection, high iron abundance is one possible cause of prominent iron fluorescence and may be partly responsible for the extreme Fe-K$\alpha$ line EW observed for IC 3639. Alternatively, @Levenson2002 discuss how circumnuclear starbursts can also lead to strong iron emission. This is analysed further in Section \[sec:dis\_spec\_comps\], where the star formation rate (SFR) is considered.
| Component | Parameter | Model T | Model M | Units |
|---|---|---|---|---|
| Fe-K$\alpha$ fluorescence emission line | Equivalent width | | | keV |
| Cross-calibration constants | $\textrm{[FPMA$\mapsto$FPMB]}$ | $1.02^{+0.14}_{-0.15}$ | $1.02^{+0.15}_{-0.14}$ | – |
| | | $1.21^{+0.25}_{-0.21}$ | $1.25^{+0.25}_{-0.23}$ | – |
| | | $1.11^{+0.21}_{-0.16}$ | $1.14^{+0.20}_{-0.19}$ | – |
| Soft emission (<span style="font-variant:small-caps;">apec</span>) | *kT* | $0.79^{+0.13}_{-0.09}$ | $0.78^{+0.08}_{-0.10}$ | keV |
| | $L^{\textrm{int}}_{0.5-2\textrm{ keV}}$$^{\dagger}$ | $2.01$ | $2.06$ | $\times\, 10^{40}$erg s$^{-1}$ |
| Diffuse scattering fraction (<span style="font-variant:small-caps;">spl</span>) | $f_{\textrm{scatt}}$ | $0.97^{+3.39}_{-0.63}$ | $0.20^{+5.58}_{-0.15}$ | $\times\, 10^{-3}$ |
| Column densities | $N_{\textrm{H}}$(eq) | | $10.0^{+\textrm{u}}_{-4.1}$ | $\times\, 10^{24}\,\textrm{cm}^{-2}$ |
| | $N_{\textrm{H}}$(los) | | $9.76^{+\textrm{u}}_{-6.15}$ | $\times\, 10^{24}\,\textrm{cm}^{-2}$ |
| Orientation angle | $\theta_{\textrm{inc}}$ | $87.0^{\textrm{f}}$ | $83.8^{+1.9}_{-17.2}$ | deg |
| Half-opening angle | $\theta_{\textrm{tor}}$ | $28.5^{+26.1}_{-\textrm{u}}$ | $60.0^{\textrm{f}}$ | deg |
| AGN continuum | $\Gamma_{\textrm{int}}$ | $2.54^{+0.27}_{-0.33}$ | $2.46^{+\textrm{u}}_{-0.60}$ | – |
| | $L^{\textrm{int}}_{2-10\textrm{ keV}}$$^{\dagger}$ | $9.26$ | $45.7$ | $\times\, 10^{42}$erg s$^{-1}$ |
| | $L^{\textrm{int}}_{0.5-30\textrm{ keV}}$$^{\dagger}$ | $2.99$ | $14.0$ | $\times\, 10^{43}$erg s$^{-1}$ |
| $\chi^2\mathbin{/}\textrm{dof}$ | | $94/77$ | $99/77$ | – |

$^{\dagger}$intrinsic (absorption-corrected) luminosity; $^{\textrm{f}}$parameter fixed; $^{\textrm{u}}$unconstrained.
\[tab:obs\_parameters\]
DISCUSSION {#sec:discussion}
==========
Spectral components {#sec:dis_spec_comps}
-------------------
The LOS obscuring column densities for models M and T are consistent with one another, both well within the CTK regime and unconstrained at the upper limit. Our findings are also consistent with @Risaliti1999, arguing against source variability between the *NuSTAR* and *Suzaku* observations.
The column density determined here establishes IC 3639 as a CTK AGN in a face-on host-galaxy. Such a configuration is uncommon but not unheard of (e.g. @Annuar2015). However, @Fischer2013 find no correlation between the orientations of the NLR and host-galaxy disc suggesting that the obscurer thought to be responsible for shaping the NLR in many galaxies may be independent of the host disk. Furthermore, @Fischer2013 find IC 3639 to have ambiguous NLR kinematics. This is where targets display a symmetrical ionised gas component on either side of the nucleus, but uncertainty remains as to whether or not these represent each half of a NLR bicone. A non-biconical outflow is consistent with heavy obscuration and could indicate a high covering factor, restricting NLR emission.
@Levenson2002 find the highest Fe-K$\alpha$ EWs for sources with $N_\textrm{H}\sim6\times10^{24}\,\textrm{cm}^{-2}$, in combination with large inclination angles. However, from simulations, the authors found that the EW diminishes at even higher column densities (since the fluorescence photons cannot escape for such high optical depths), comparable with the $N_\textrm{H}$ values determined here. It should be noted that their simulations are for a more simplistic geometry formulation with a square torus cross section (although @Yaqoob2010 found CTK lines-of-sight gave Fe-K$\alpha$ line strengths considerably less than the maximum possible for a given geometry). So this may indicate a secondary source of strong iron fluorescence for IC 3639, such as super-Solar iron abundance. As already stated, this was found in model P to help fit the residuals present in the iron-line energy region. Since models T and M assume Solar abundance, the final residuals present in Figures \[fig:f4a\] and \[fig:f4b\] around the iron line complex may be due to a super-Solar iron abundance present in IC 3639, or perhaps a high SFR. Such high elemental abundances have been postulated to arise from different astrophysical events, such as high supernovae type Ia rates in the host-galaxy. Alternatively, @Ricci2014 find that as the column density for CTK AGN is increased, the EW of the iron fluorescence line is decreased. This indicates a suppression of the reflection component for *heavily* obscured systems, and suggests that the intrinsic iron line EW of IC 3639 could be even greater than we are observing here.
Regarding SFR, the intrinsic soft band (0.5–2keV) luminosity found from the <span style="font-variant:small-caps;">apec</span> component was $\sim2.0\,\times\,10^{40}\,\textnormal{erg\,s}^{\textnormal{-1}}$ for both models. @Mineo2012 detail a conversion between soft X-ray luminosity and host-galaxy SFR. The authors determined the soft X-ray luminosity through the <span style="font-variant:small-caps;">mekal</span> model component, but for our purposes using the <span style="font-variant:small-caps;">apec</span>-determined luminosity is sufficient to establish an order of magnitude estimate. By accounting for the dispersion in the @Mineo2012 relation of 0.34 dex, we find $\textrm{SFR}_{\textrm{X-ray}}\,=\,39^{+46}_{-21}\,\textrm{M}_{\odot}\,\textrm{yr}^{-1}$.
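The order-of-magnitude conversion above can be reproduced in a few lines. The coefficient $5.2\times10^{38}$erg s$^{-1}$ per M$_{\odot}$yr$^{-1}$ assumed below is the commonly quoted @Mineo2012 hot-gas normalisation and should be treated as an assumption of this sketch, not a value taken from this paper:

```python
# X-ray-derived SFR, assuming the Mineo et al. (2012) hot-gas relation
# L_X(0.5-2 keV) ≈ 5.2e38 erg/s per M_sun/yr, with 0.34 dex dispersion.
L_soft = 2.0e40            # intrinsic apec 0.5-2 keV luminosity, erg/s
sfr = L_soft / 5.2e38      # ~38.5 M_sun/yr

scatter = 10 ** 0.34       # 0.34 dex dispersion as a multiplicative factor
sfr_hi = sfr * scatter     # ~84 M_sun/yr
sfr_lo = sfr / scatter     # ~18 M_sun/yr

# Close to the quoted SFR_X = 39 (+46/-21) M_sun/yr
print(f"SFR_X = {sfr:.0f} (+{sfr_hi - sfr:.0f}/-{sfr - sfr_lo:.0f}) M_sun/yr")
```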
Using a total IR luminosity calculated from the *IRAS* catalogued fluxes[^10] of $L_{8-1000\,\mu m}\,=\,8.14 \times 10^{\textrm{10}}\,\textrm{L}_{\odot}$, we find an IR-derived SFR using the relation presented by @Murphy2011 to be $\textrm{SFR}_{\textrm{IR}}$$\sim$$12\,\textrm{M}_{\odot}\,\textrm{yr}^{-1}$.
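The IR conversion can be sketched the same way, assuming the @Murphy2011 coefficient of $3.88\times10^{-44}$ for $L_{\textrm{IR}}$ in erg s$^{-1}$ (the coefficient is an assumption of this sketch); the factor-of-$\sim$3 offset from the X-ray estimate noted in the text falls out directly:

```python
L_SUN = 3.828e33                  # solar luminosity, erg/s
L_IR = 8.14e10 * L_SUN            # IRAS 8-1000 um luminosity, erg/s

# IR-derived SFR, assuming the Murphy et al. (2011) calibration:
# SFR [M_sun/yr] ≈ 3.88e-44 * L_IR [erg/s]
sfr_ir = 3.88e-44 * L_IR
print(round(sfr_ir))              # → 12 M_sun/yr

# Ratio to the X-ray SFR from the soft apec luminosity (Mineo-type coefficient assumed)
sfr_x = 2.0e40 / 5.2e38
print(round(sfr_x / sfr_ir, 1))   # → 3.2, i.e. the factor ~2-3 quoted in the text
```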
Alternatively, polycyclic aromatic hydrocarbon (PAH) features are believed to be prominent in the spectra of starburst galaxies, with a good correlation found between PAH strength and IR luminosity. We use Equation (5) from @Farrah2007 to calculate a PAH-derived SFR (this equation uses an approximate scaling to account for the high rate of star formation observed in ultraluminous infrared galaxies). Using the 6.2 and 11.2$\mu$m luminosities for IC 3639 presented in @Wu2009 (based on *Spitzer*/IRS data), we find an IR(PAH)-derived SFR of $\textrm{SFR}_{\textrm{IR(PAH)}}\,=\,12\,\pm\,6\,\textrm{M}_{\odot}\,\textrm{yr}^{-1}$, fully consistent with the *IRAS*-derived value.
The X-ray SFR is higher than the IR(PAH) and IR(IRAS) SFRs by a factor of $\sim$2–3, but fully consistent within the uncertainties. All SFRs determined here for IC 3639 are comparable with typical starburst galaxy SFRs determined and studied by @Brandl2006. Furthermore, although @Barnes2001 find the interacting galaxy group hosting IC 3639 to be free of a strong merger, they still report the possibility of enhancing star formation via galaxy harassment. All of these factors are consistent with the hypothesis of @Levenson2002 that circumnuclear starbursts may lead to strong iron emission.
Intrinsic AGN luminosity {#sec:dis_L}
------------------------
The unobscured luminosity of the source in the 2–10keV band (the *intrinsic* emission) was calculated with the model-dependent photon index and normalisation of the <span style="font-variant:small-caps;">ipl</span> component. By stepping over the photon index and normalisation for either model in a two-dimensional grid, the intrinsic X-ray luminosity and corresponding $\Delta\chi^2$ value were determined. The *envelope* of all $\Delta\chi^2$ values for any given luminosity was then extracted, and is plotted in Figure \[fig:f6\] for models T and M, similar to the four-dimensional grid used in Figure \[fig:f5a\] to determine the EW of the iron line. This gives a luminosity range of $\textrm{log}_{10}(L_{\textrm{2\,--\,10\,keV}}\,\textrm{[erg\,s}^{-1}])\,=\,42.3$–$44.0$, with intrinsic average X-ray luminosity $\textrm{log}_{10}(L_{\textrm{2\,--\,10\,keV}}\,\textrm{[erg\,s}^{-1}])\,=\,43.4^{+0.6}_{-1.1}$. As can be seen in Figure \[fig:f6\], luminosities for model M completely encompass luminosities for model T at 90% confidence. Additional tests appear to show that this wide range of allowable model M luminosities is due to an uncertain inclination angle. For example, we fixed the inclination angle to intermediate values in the range 70–84$^{\circ}$ for model M (approximate lower and upper limits found for the best fit). The envelope presented in Figure \[fig:f6\] fully encompassed the intermediate fixed inclination angle results. Furthermore, a three-dimensional parameter space analysis between inclination angle, photon index and normalisation showed an increase of intrinsic X-ray luminosity with inclination angle, but also an increase in best-fit chi-squared value.
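The envelope extraction described above can be illustrated with a short sketch. The quadratic $\Delta\chi^2$ surface below is toy data standing in for the real grid of (photon index, normalisation) fits, and the bin count and ranges are arbitrary choices:

```python
import random

random.seed(0)

# Toy grid: each (photon index, normalisation) pair maps to an implied
# 2-10 keV luminosity and a delta chi-squared relative to the best fit.
grid = []
for _ in range(5000):
    logL = random.uniform(42.0, 44.5)
    dchi2 = (logL - 43.4) ** 2 / 0.2 + random.uniform(0, 2)  # illustrative surface
    grid.append((logL, dchi2))

# Envelope: the minimum delta chi-squared attained in each luminosity bin,
# taken over all parameter combinations falling in that bin.
nbins, lo, hi = 50, 42.0, 44.5
width = (hi - lo) / nbins
envelope = [float("inf")] * nbins
for logL, d in grid:
    i = min(int((logL - lo) / width), nbins - 1)
    envelope[i] = min(envelope[i], d)

# 90% confidence range for two free parameters: bins with delta chi2 < 4.61
allowed = [lo + i * width for i, e in enumerate(envelope) if e < 4.61]
print(min(allowed), max(allowed))
```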
{width="95.00000%"}
Recent works have demonstrated a correlation between X-ray luminosity and accretion disc luminosity. In particular, we use Equation (6) from @Marchese2012 to approximate the accretion disc luminosity of IC 3639. To use this relation consistently, since it is calibrated at the 1–$\sigma$ confidence level, we have derived the 2–10keV luminosity for IC 3639 from Figure \[fig:f6\] at the 1–$\sigma$ confidence level for the chi-squared distribution with two free parameters, $\Delta\chi^2$=2.30. This gives $\textrm{log}_{10}(L_{\textrm{2-10\,keV}}\,\textrm{[erg\,s}^{-1}])\,=\,43.4^{+0.6\,(1\sigma)}_{-0.8\,(1\sigma)}$, resulting in a disc luminosity:\
$\textrm{log}_{10}(L_{\textrm{disc}}\,\textrm{[erg\,s}^{-1}])\,=\,44.5^{+0.7(+0.1)}_{-0.9(-0.2)}$.\
The upper and lower bounds in brackets represent the intrinsic scatter from the @Marchese2012 relation, based on treating $L_{\textnormal{disc}}$ or $L_{\textnormal{2\,--\,10\,keV}}$ as the independent variable. The other uncertainty represents the error associated with the observed 2–10keV luminosity uncertainty.
To determine the black hole mass ($M_\textrm{BH}$), we used the stellar velocity dispersion from @Marinucci2012 of $99\,\pm\,5\,\textnormal{km}\,\textnormal{s}^{-1}$ with the M–$\sigma$ relation from @Gultekin2009 to give $\textrm{log}_{10}(M_{\textrm{BH}} [M_{\odot}])=6.8\,\pm\,0.2$, and thus $\textrm{log}_{10}(L_{\textrm{Edd}}\,\textrm{[erg\,s}^{-1}])\,=\,44.9\,\pm\,0.2$. This corresponds to an Eddington ratio of:\
$\textrm{log}_{10}(\lambda_{\textrm{Edd}})=-0.4^{+0.8}_{-1.1}$,\
to the 1–$\sigma$ confidence level. Here we have defined $\textrm{log}_{10}(\lambda_{\textrm{Edd}})=\textrm{log}_{10}\Bigg(\displaystyle \frac{L_{\textrm{disc}}}{L_{\textrm{Edd}}}\Bigg)$. Using the accretion disc luminosity as opposed to the bolometric luminosity is acceptable since $L_{\textnormal{disc}}$ should dominate the bolometric luminosity. The mean Eddington ratio corresponds to an Eddington rate of $\sim$40%. The uncertainty is rather large and dominated by the unknown obscurer geometry (cf. the broad model M contours in Figure \[fig:f6\]), but these are robust uncertainties incorporating all systematics. Furthermore, as we discuss in the next section, the implied luminosity is high even at the lower uncertainty limit, and is consistent with other multi-wavelength diagnostics.
To compare with a bolometric luminosity determined Eddington ratio, we use the bolometric correction factor of $\sim$10–30 from @Vasudevan2010 for converting X-ray to bolometric luminosity. This gives a slightly shifted range of Eddington ratios of $\textrm{log}_{10}(\lambda_{\textrm{Edd}})\,=\,-1.6$$\rightarrow$$0.6$, which corresponds to $\gtrsim$2.5% of the Eddington rate (with the upper end being considerably super-Eddington).
Comparison with other *bona fide* CTK sources {#sec:dis_BFcomparison}
---------------------------------------------
### Ratio of intrinsic to observed luminosity {#sec:BFratio}
The intrinsic parameters determined here with broad-band spectral fitting are consistent with multiple observations reported over almost two decades, showing a lack of extreme variability in the source. This allows us to designate IC 3639 a *bona fide* CTK source. To date, there exist just $\sim$30 *bona fide* CTK sources, the names of which are collated in Table \[BF\]. Here, a *bona fide* CTK source shows CTK column densities based on X-ray spectral analysis and lacks extreme variability in the X-ray band. The ID numbers presented in all bona fide CTK source plots herein correspond to the values shown in Table \[BF\].
IC 3639 appears to show a comparatively high ratio of intrinsic to observed luminosity $\big(\sfrac{L_{\textrm{int}}}{L_{\textrm{obs}}}\big)$ relative to other bona fide CTK sources. Here we again specify the X-ray luminosity in the 2–10keV band, and the intrinsic luminosity to be the absorption-corrected luminosity. Given the observed 2–10keV luminosity of $\textrm{log}_{10}(L_{\textrm{2-10\,keV}}^{\textrm{obs}}\,\textrm{[erg\,s}^{-1}])\,=\,40.79^{+0.04}_{-0.11}$, IC 3639 has:
$\textrm{log}_{10}\Bigg(\displaystyle \frac{L_{\textrm{int}}}{L_{\textrm{obs}}}\Bigg)=2.6^{+0.6}_{-1.1}$,
corresponding to a luminosity ratio of roughly 400. In comparison with the other bona fide sources listed in Table \[BF\], only one other source shows such a comparatively high ratio: NGC 1068. The distribution of this ratio amongst the bona fide CTK AGN is shown in Figure \[fig:f7\]. Such a high value of the ratio complements the high column density predicted for the source based on multi-wavelength indicators, discussed next.
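The ratio and its uncertainty can be reproduced assuming the asymmetric errors on the two luminosities combine in quadrature (the upper error of one with the lower error of the other):

```python
import math

log_L_int, int_up, int_dn = 43.4, 0.6, 1.1      # intrinsic 2-10 keV, log10 erg/s
log_L_obs, obs_up, obs_dn = 40.79, 0.04, 0.11   # observed 2-10 keV, log10 erg/s

log_ratio = log_L_int - log_L_obs               # 2.61

# Quadrature propagation of the asymmetric errors (an assumption of this sketch)
up = math.hypot(int_up, obs_dn)                 # ~0.6
dn = math.hypot(int_dn, obs_up)                 # ~1.1

print(f"log(L_int/L_obs) = {log_ratio:.1f} +{up:.1f}/-{dn:.1f}")  # → 2.6 +0.6/-1.1
print(f"linear ratio ≈ {10 ** log_ratio:.0f}")                    # → 407
```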
![Distribution of ratios of intrinsic to observed 2–10keV luminosity for the bona fide CTK AGN listed in Table \[BF\]. IC 3639 shows a comparatively large ratio, at $2.5^{+0.9}_{-1.3}$, and is represented as a red hatched patch in the distribution. The other source in this bin is NGC 1068.[]{data-label="fig:f7"}](f7.eps){width="1\columnwidth"}
### Multi-wavelength indicators {#sec:BFmulti_wavelength}
The large correction from observed to intrinsic X-ray luminosity for IC 3639 should be checked with independent methods, and for this we use multi-wavelength comparisons with the MIR and \[OIII\] luminosities. Using the published value of the reddening corrected \[OIII\] flux for IC 3639 [@LaMassa2010], we use a distance to the source of 53.6Mpc to calculate the \[OIII\] luminosity to be $\textrm{log}_{10}(L_{\textrm{[OIII]}}\,\textrm{[erg\,s}^{-1}])=42.0$. Furthermore, the MIR (rest-frame 12$\mu m$) luminosity for IC 3639 is $\textrm{log}_{10}(L_{\textrm{MIR}}\,\textrm{[erg\,s}^{-1}])\,=\,43.52\pm0.04$ using high-angular resolution MIR imaging performed with ground-based 8-m class telescopes, providing subarcsecond resolution $\lesssim 0\farcs4$, corresponding to a physical resolution of $\lesssim 100\,$pc for IC 3639 [@Asmus2014; @Asmus2015].
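As a consistency check on the \[OIII\] luminosity, inverting $L = 4\pi d^2 F$ at 53.6Mpc recovers the implied reddening-corrected flux (the published @LaMassa2010 flux itself is not reproduced here; this only inverts the quoted luminosity):

```python
import math

MPC_CM = 3.086e24                    # centimetres per megaparsec
d = 53.6 * MPC_CM                    # distance to IC 3639, cm

# log10 L_[OIII] = 42.0 implies a reddening-corrected flux of L / (4 pi d^2)
L_oiii = 10 ** 42.0                  # erg/s
flux = L_oiii / (4 * math.pi * d ** 2)
print(f"{flux:.1e}")                 # → 2.9e-12 erg/s/cm^2
```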
The \[OIII\] emission-line vs. X-ray luminosity relation from @Berney2015 is presented in Figure \[fig:f8\], with the shaded region corresponding to the 1–$\sigma$ confidence level from the original study. Over-plotted are all the *bona fide* CTK sources from Table \[BF\]. This plot illustrates the effect of correctly modelling the obscuration surrounding the sources to give a better estimate of the intrinsic X-ray luminosity in the 2–10keV energy band. Many of the sources have *intrinsic* X-ray luminosities in better agreement with the relation, IC 3639 being an example.
We also reproduce the relation between intrinsic X-ray luminosity and MIR luminosity from @Asmus2015 in Figure \[fig:f9\]. The shaded region shows the 1–$\sigma$ confidence region generated through Monte-Carlo generated uncertainties from uncertainties determined in the study. The MIR luminosities of bona fide CTK sources were either accumulated from @Asmus2015 or from the *Wide-field Infrared Survey Explorer (WISE)* all-sky survey[^11], both calculated at 12$\mu$m. Again, many bona fide CTK sources, including IC 3639, show improved agreement with the relation. The exception is NGC 4945, which has been scrutinised to explain its nature: for example, @Puccetti2014 suggest most of the high-energy emission is transmitted rather than scattered, whereas @Brightman2015 suggests the source to have a high covering factor. See @Gandhi2015 and @Asmus2015 for further discussion of recent studies of NGC 4945.
The @Berney2015 relation gives a predicted X-ray luminosity of $\textrm{log}_{10}(L_{\textrm{[2\,--\,10\,keV]}}\,\textrm{[erg\,s}^{-1}])\sim43.9\pm2.4$, whereas the @Asmus2015 relation gives a predicted X-ray luminosity for IC 3639 of $\textrm{log}_{10}(L_{\textrm{[2\,--\,10\,keV]}}\,\textrm{[erg\,s}^{-1}])\,=\,43.17\pm0.37$ [@Asmus2015]. Thus both *predicted* intrinsic X-ray luminosities are fully consistent with the directly modelled 2–10keV luminosity that we derive for IC 3639.
{width="100.00000%"}
{width="100.00000%"}
### The Fe-K$\alpha$ fluorescence line and the Future {#sec:BFironKa}
A high EW is indicative of strong reflection within a source, as detailed in Section \[sec:introduction\]. However, across the full set of known bona fide CTK sources there is a broad range of EWs, including values less than 1keV. The lowest EW determined to date for a bona fide source is reported by @Gandhi2016 for NGC 7674, with an Fe-K$\alpha$ line EW of 0.38$_{-0.09}^{+0.10}$keV. Figure \[fig:f10\] compares the Fe-K$\alpha$ strength relative to a power-law continuum for IC 3639 and NGC 7674; the data are from the combined *Suzaku* XIS03 detectors for both. IC 3639 shows a peak of the Fe-K$\alpha$ line consistent with ten times the continuum model, whereas NGC 7674 shows a peak around twice its corresponding continuum power-law model. This illustrates the broad range in EWs, as well as the need for improved diagnostics to confirm candidate CTK AGN.
A large EW could correlate with the large SFRs found here. @Levenson2002 suggest that the mechanical energy provided through periods of strong star formation could effectively *inflate* the torus, altering the covering factor and thus EW associated with the Fe-K$\alpha$ fluorescence line.
  ID   Name           ID   Name       ID   Name
  ---- -------------- ---- ---------- ---- -----------
  1    Arp 299B       11   NGC 1068   21   NGC 4945
  2    CGCG 420-15    12   NGC 1320   22   NGC 5194
  3    Circinus       13   NGC 2273   23   NGC 5643
  4    ESO 005-G004   14   NGC 3079   24   NGC 5728
  5    ESO 138-G001   15   NGC 3281   25   NGC 6240S
  6    ESO 565-G019   16   NGC 3393   26   NGC 7674
  7    IC 2560        17   NGC 4102
  8    IC 3639        18   NGC 424
  9    Mrk 3          19   NGC 4785
  10   Mrk 34         20   NGC 4939
: IDs corresponding to all currently known *bona fide* CTK AGN, including IC 3639, in reference to Figures \[fig:f7\], \[fig:f8\] and \[fig:f9\].[]{data-label="BF"}
Future missions such as *Athena* [@Nandra2013] hold the potential to resolve fluorescence complexes in much greater detail. In particular, resolved spectral imaging of the Compton shoulder could tell us more about how buried IC 3639 is in the surrounding obscuring shroud of dust. Figure \[fig:f11\] illustrates simulated data for the proposed *Athena* X-ray Integral Field Unit (XIFU), which will have a spectral resolution of $\sim$2.5eV at 6keV. We used the response and background files provided by the *Athena* website[^12] together with an exposure of 100ks. Over-plotted are the equivalent *NuSTAR* FPMA and FPMB data points from this work for the same region fitted with model T. A clear detection of the Compton shoulder and other fluorescence lines are visible with the *Athena* spectra, and could be used to investigate super-Solar abundances for IC 3639 in greater detail due to the higher SNR predicted (the current simulation assumes Solar abundances).
SUMMARY {#sec:conclusions}
=======
Recent *NuSTAR* observations were combined with archival *Suzaku* observations of the nearby type 2 Seyfert AGN IC 3639. Our key findings are enumerated below.
1. We used the <span style="font-variant:small-caps;">mytorus</span> and <span style="font-variant:small-caps;">bntorus</span> models to self-consistently fit the broadband spectral data available for IC 3639. These predominantly show a very high level of obscuration, favouring column densities of order $N_\textrm{H}\,\sim\,1.0 \times 10^{\textrm{25}}\,\textrm{cm}^{\textrm{-2}}$. This is consistent with previous results from the literature, suggesting a lack of variability over the past two decades between the *BeppoSAX* and *NuSTAR* observations. As a result, we classify IC 3639 as a *bona fide* CTK AGN.

2. We consider the *Suzaku* HXD observation of the source to be a non-detection after accounting for the high background level and its reproducibility. This contradicts a previous study of the same HXD data set.

3. The combined results of the two torus models give an intrinsic X-ray luminosity (2–10keV band) of $\textrm{log}_{10}(L_{\textrm{2-10\,keV}}\,\textrm{[erg\,s}^{-1}])\,=\,43.4^{+0.6}_{-1.1}$. We then predict a source Eddington ratio of $\textrm{log}_{10}(\lambda_{\textrm{Edd}})=-0.4^{+0.8}_{-1.1}$, to the 1–$\sigma$ confidence level.

4. We find an extreme EW of the Fe-K$\alpha$ fluorescence line for the source of $2.94^{+2.79}_{-1.30}$keV, consistent with @Risaliti1999, and one of the highest amongst bona fide CTK AGN. The source also shows a high ratio of intrinsic to observed 2–10keV luminosity.

5. A multi-wavelength comparison between the X-ray and MIR continuum and \[OIII\] emission line fluxes of IC 3639 and those of all known *bona fide* CTK AGN gives good agreement with known intrinsic correlations. This provides independent evidence that we are robustly measuring the absorption-corrected X-ray luminosity.
Further studies of other local CTK candidates are clearly vital to properly ascertain the cosmological processes behind the formation of different AGN classes, as well as to help resolve the peak of the CXB flux.
ACKNOWLEDGEMENTS {#sec:acknowledgements .unnumbered}
================
This work made use of data from the [*NuSTAR*]{} mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. We thank the [*NuSTAR*]{} Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the [*NuSTAR*]{} Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
*Facilities: NuSTAR, Suzaku, Chandra*.
We thank the anonymous referee for their invaluable comments which have helped to improve the paper.
The scientific results reported in this article are based on observations made by the *Chandra* X-ray Observatory.
This research has made use of data, software and/or web tools obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA/GSFC and of the Smithsonian Astrophysical Observatory’s High Energy Astrophysics Division.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This research has made use of data obtained from the Suzaku satellite, a collaborative mission between the space agencies of Japan (JAXA) and the USA (NASA).
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
P.B. thanks STFC and the RAS for funding.
P.G. thanks STFC for support (grant reference ST/J003697/2).
A.A. acknowledges financial support from Majlis Amanah Rakyat (MARA), Malaysia.
D.A. acknowledges the Science and Technology Facilities Council (DMA; ST/L00075X/1).
W.N.B. acknowledges Caltech NuSTAR subcontract 44A-1092750 and the V.M. Willaman Endowment.
S.F.H. acknowledges support from the European Research Council under Horizon 2020 grant ERC-2015-StG-677117.
M.K. acknowledges support from the Swiss National Science Foundation (SNSF) through the Ambizione fellowship grant PZ00P2154799/1.
S.M.L. acknowledges support by an appointment to the NASA Postdoctoral Program at the NASA Goddard Space Flight Center, administered by Universities Space Research Association under contract with NASA.
A.M. acknowledges support from the ASI/INAF grant I/037/12/0-011/13.
F.E.B. and C.R. acknowledge support from NASA NuSTAR A01 Award NNX15AV27G, CONICYT-Chile grants Basal-CATA PFB-06/2007, FONDECYT Regular 1141218 and 1151408, “EMBIGGEN” Anillo ACT1101, the China-CONICYT, and the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS.
ADDITIONAL CONTOUR PLOTS
========================
Here we include the contour plots of photon index against column density for both toroidal models T and M. The column density plotted for model T (Figure \[fig:fA1\], left panel) corresponds to the LOS column density, whereas the column used in the model M contour plot (Figure \[fig:fA1\], right panel) corresponds to the equatorial column density. To 99% confidence (blue contour line), the corresponding LOS obscuring column density for both models is $\gtrsim\,4\,\times\,10^{24}\,\mathrm{cm}^{-2}$, well into the CTK regime, and is unconstrained at the upper end allowed by either model. Although the model T contour plot illustrates a wider range in parameter space than the model M contour, the values found are clearly consistent between the two graphs.
[^1]: http://leda.univ-lyon1.fr
[^2]: https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl
[^3]: https://heasarc.gsfc.nasa.gov/xanadu/xspec/XspecManual.pdf
[^4]: isas.jaxa.jp/e/enterp/missions/suzaku/index.shtml
[^5]: https://heasarc.gsfc.nasa.gov/xanadu/ximage/ximage.html
[^6]: http://heasarc.gsfc.nasa.gov/docs/suzaku/analysis/abc/node8.html
[^7]: http://heasarc.gsfc.nasa.gov/docs/suzaku/analysis/abc/node10.html, §7.5.1
[^8]: http://cxc.harvard.edu/ciao/
[^9]: http://mytorus.com/mytorus-examples.html
[^10]: http://irsa.ipac.caltech.edu/applications/Gator/index.html
[^11]: http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-dd
[^12]: http://x-ifu-resources.irap.omp.eu/PUBLIC/BACKGROUND/5arcsec/
---
abstract: '[Systems with holes, such as colloidal handlebodies and toroidal droplets, have been studied in the nematic liquid crystal (NLC) 4-cyano-4’-pentylbiphenyl (5CB): both point and ring topological defects can occur within each hole and around the system, while conserving the system’s overall topological charge. However, what has not been fully appreciated is the ability to manipulate the hole geometry with homeotropic (perpendicular) anchoring conditions to induce complex, saddle-like deformations. We exploit this by creating an array of holes suspended in an NLC cell with oriented planar (parallel) anchoring at the cell boundaries. We study both 5CB and a binary mixture of bicyclohexane derivatives (CCN-47 and CCN-55). Through simulations and experiments, we study how the bulk saddle deformations of each hole interact to create novel defect structures, including an array of disclination lines, reminiscent of those found in liquid crystal blue phases. The line locations are tunable via the NLC elastic constants, the cell geometry, and the size and spacing of holes in the array. This research lays the groundwork for the control of complex elastic deformations of varying length scales via geometrical cues in materials that are renowned in the display industry for their stability and easy manipulability.]{}'
author:
- Lisa Tran
- 'Maxim O. Lavrentovich'
- 'Daniel A. Beller'
- Ningwei Li
- 'Kathleen J. Stebe'
- 'Randall D. Kamien'
bibliography:
- 'LassoK24Bib.bib'
title: Lassoing saddle splay and the geometrical control of topological defects
---
The investigation of mechanisms, both chemical and geometrical, to control and manipulate defects in liquid crystals (LCs) is essential for the use of these defects in the hierarchical self-assembly [@pp; @yada; @ska] of photonic and meta-materials [@bp-c; @phtnrvz], as well as for studies in low-dimensional topology [@ska; @bs-tc; @utknt; @tord; @tnk; @opal]. For instance, the disclination line networks characteristic of blue phases [@stbp; @bplat] have been proposed to organize colloidal inclusions [@bp-c; @znovbp]. But can similar three-dimensional disclination line networks be designed in the simpler nematic LC? The ubiquitous use of NLCs in the display industry is a testament to their efficacy in applications. Wide-ranging studies on the role of nematic elasticity in designing tailored defect structures have focused primarily on the familiar splay, twist, and bend deformations. Recently, however, there has been a renewed interest in exploiting saddle-splay deformations [@tord; @zsd; @2016rav]. By confining nematics in cells with properly-designed boundary conditions, we demonstrate an array of controlled, defect-riddled minimum energy states that form as a result of saddle-splay distortions, excitable by the system’s surfaces.
Energy Considerations {#energy-considerations .unnumbered}
=====================
We begin with the Frank free energy for a nematic [@dgplc; @rdkpr]: $$\begin{aligned}
F& = \int \mathrm{d}^3 x \left\{ \frac{K_1}{2}[\mathbf{n} (\nabla \cdot \mathbf{n})]^2+\frac{K_2}{2}[\mathbf{n} \cdot (\nabla \times \mathbf{n})]^2 \right. \nonumber \\
& \left. {}+\frac{K_3}{2}[(\mathbf{n} \cdot \nabla)\mathbf{n}]^2-K_{24} \nabla \cdot [(\mathbf{n} \cdot \nabla)\mathbf{n}-(\nabla \cdot \mathbf{n})\mathbf{n} ]\right\}, \label{eq:Frank}\end{aligned}$$
![A schematic of a substrate with a hole with homeotropic (perpendicular) anchoring conditions causing a saddle deformation in the bulk. On a hypothetical (yellow) surface, the boundary conditions along the hole’s inner wall favor a surface normal with a principal radius of curvature $R_2$. When moving from the inner wall to the top of the substrate, the boundary conditions favor the normal bending with another principal radius of curvature $R_1$ of opposite sign, indicating that the surface is a saddle. The thick black lines represent the nematic director. \[FigTheory\]](Figure1.pdf){width="35.00000%"}
where $\mathbf{n} \equiv \mathbf{n}(\mathbf{x})$ is the (unit) nematic director and $K_1$, $K_2$, and $K_3$ are elastic constants that measure the energy cost for splay, twist, and bend deformations, respectively. The final term with the elastic constant $K_{24}$ is the saddle-splay and, as a total derivative, is absent from the corresponding Euler-Lagrange equation. However, it contributes to the energy when there are defects, potentially stabilizing them by balancing the energy cost of creating a defect core and the concomitant director distortions [@np-bp-s]. The saddle-splay term can be rewritten as a surface term through Stokes’ theorem, explicitly demonstrating that the saddle-splay is imposed via the boundaries. With strong anchoring of the director at the boundaries, this term therefore offers the possibility of changing the stable or metastable states in the bulk by boundary geometry manipulation. We may rewrite the saddle-splay in terms of concrete geometric properties of the nematic director. When the director is normal to a surface with principal radii of curvature $R_1$ and $R_2$, the splay and saddle-splay terms in Eq. \[eq:Frank\] are $[\mathbf{n} (\nabla \cdot \mathbf{n})]^2 = \left[1/R_1+1/R_2\right]^2$ and $ - \nabla \cdot \left[(\mathbf{n} \cdot \nabla)\mathbf{n}-(\nabla \cdot \mathbf{n})\mathbf{n} \right]= 2/(R_1 R_2)$ [@rdkpr], where the splay energy is proportional to the square of the mean curvature and the saddle-splay energy is proportional to the Gaussian curvature. A saddle deformation in the bulk can be induced if the boundary enforces opposite signs of $R_1$ and $R_2$, that is, a negative Gaussian curvature. A positive curvature cannot reduce the splay contribution, but we see that negative curvature can – this is known as the principle of splay cancellation and can stabilize disclinations [@pa].
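As a quick symbolic check of these two geometric identities (not part of the original work), the following SymPy sketch evaluates the splay and saddle-splay densities for the simplest case of a director normal to concentric spheres, where $R_1 = R_2 = r$, so the expected values are $4/r^2$ and $2/r^2$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
coords = (x, y, z)
r2 = x**2 + y**2 + z**2

# Radial director: normal to spheres of radius r, so R1 = R2 = r.
n = sp.Matrix([x, y, z]) / sp.sqrt(r2)

div_n = sum(sp.diff(n[i], coords[i]) for i in range(3))

# Bend vector (n . grad) n  (vanishes for a radial field)
bend = sp.Matrix([sum(n[j] * sp.diff(n[i], coords[j]) for j in range(3))
                  for i in range(3)])

# Splay density [n (div n)]^2 = (div n)^2, since |n| = 1
splay = sp.simplify(div_n**2)

# Saddle-splay density  -div[(n . grad)n - (div n) n]
vec = bend - div_n * n
saddle_splay = sp.simplify(-sum(sp.diff(vec[i], coords[i]) for i in range(3)))

# Expect (1/R1 + 1/R2)^2 = 4/r^2 and 2/(R1 R2) = 2/r^2
print(splay, saddle_splay)
```

With positive Gaussian curvature ($R_1 R_2 > 0$, as here) both densities are positive; a saddle surface flips the sign of the saddle-splay term, which is the geometric content of splay cancellation.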
We develop a boundary that promotes these saddle distortions by creating a thin substrate with a hole removed and homeotropic anchoring on its surface. This is then suspended in the middle of the cell (Fig. \[FigTheory\]) (fabrication details to follow). The circular rim of the hole, and the slight rim rounding create principal curvatures of opposite signs, just as the inner half of a torus has negative Gaussian curvature. The anchoring aligns the director normal to this surface, and the saddle deformation propagates into the NLC bulk. The flat surfaces on the sample top and bottom provide further boundary conditions. When the flat surfaces have homeotropic anchoring, we find configurations with axial symmetry around the hole center. However, oriented (non-degenerate) planar anchoring breaks the azimuthal symmetry of the hole geometry, which is reflected in the director configurations. We find that a hole *array* causes the distortions from each hole to interact and create complex, but well-defined defect structures. We corroborated our experimental observations with numerical minimization and find that these are, at least, metastable minima.
Homeotropic Anchoring
=====================
We begin by studying hole arrays in cells with homeotropic anchoring on the top and bottom surfaces, as illustrated in Fig. \[FigSetup\]. A Mylar sheet is used as the hole substrate because of its controlled thickness, smoothness, and transparency, which aids in viewing defects via polarizing microscopy (PM). The LC cell fabrication and assembly are detailed in Materials and Methods. The LC cells were filled with two types of nematic LC: either the standard, highly birefringent 5CB or a binary mixture of CCN-47 and CCN-55. Separately, at room temperature, the two CCN-compounds are smectic, but their binary mixture is nematic. The CCN mixture is useful for its different elastic constants and its low birefringence, needed for fluorescent confocal polarizing microscopy (FCPM) [@fcpm]. When we anneal our samples, we heat them to the isotropic phase and allow them to cool to the nematic phase, all while a 12 V AC electric field is applied across the sample.
![ \[FigSetup\] Experimental setup: (a) Holes with a diameter of 50 $\mu$m are drilled with an excimer laser into a 25-$\mu$m thick Mylar sheet. The sheet is then coated with SiCl${}_4$ to be treated to have homeotropic surface anchoring. The sheet is suspended between two ITO coated glass cover slips with 25 $\mu$m Mylar spacers. These cover slips are treated to have either homeotropic or planar anchoring. (b) and (c): SEM micrographs of the SiCl${}_4$-coated Mylar hole array. ](Figure2.png){width=".48\textwidth"}
Because of the homeotropic anchoring on top and bottom, the net topological charge encoded in the director field vanishes. Since each hole in the Mylar sheet has a disclination ring that carries hedgehog charge, there must be a companion singularity to satisfy the topological constraint. Based on the geometry, the compensating defect in the LC bulk is expected to have a “$-1$” charge, giving either a hyperbolic hedgehog defect or a ring defect with “$-\nicefrac{1}{2}$” winding profile, the schematics of which are shown in Fig. \[FigHomeo\](g) and \[FigHomeo\](h), respectively. At this point, it is useful to recall [@rdkpr; @mkodl] that though a three-dimensional nematic director, taking values in $\mathbb{R}P^2$, has line defects in three spatial dimensions, these defects do not have a proper winding number as they are classified only by $\pi_1(\mathbb{R}P^2)=\mathbb{Z}_2$. Though one might be tempted to describe the imposed winding at the rim as “$+\nicefrac{1}{2}$” with an overall “$+1$” charge, that would be incorrect from a topological standpoint. To make this clear, we will describe this as “geometric winding”: For example, the rim enforces a geometric winding of $+\nicefrac{1}{2}$. We warn the reader that geometric winding is not always defined [@rdkpr; @mkodl] and is, for instance, not defined when the director has a true three-dimensional texture; the twist version of a defect cannot be assigned a geometric winding and, accordingly, switching from regions of $+\nicefrac{1}{2}$ to $-\nicefrac{1}{2}$ geometric winding requires some twist deformation. However, topology still plays a role: in the presence of the disclination loop around the rim, the $+\nicefrac{1}{2}$ geometric winding necessarily induces a true, topological point hedgehog charge [@rdkpr] of the companion defect. The simulated ring state is an example of this and is discussed below.
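The bookkeeping behind geometric winding can be illustrated numerically. This minimal Python sketch (ours, purely illustrative and not part of the original analysis) assumes a planar director cross-section sampled along a closed loop; because the director angle is defined only modulo $\pi$, each step takes the smallest director rotation, and the accumulated rotation divided by $2\pi$ can be half-integer:

```python
import numpy as np

def geometric_winding(angles):
    """Geometric winding of a nematic director around a closed loop.

    `angles` are director orientations (defined only mod pi) sampled in
    order along the loop.  Between consecutive samples we take the
    smallest director rotation; the accumulated rotation over 2*pi gives
    the winding, which for a director can be half-integer.
    """
    angles = np.asarray(angles, dtype=float)
    total = 0.0
    for a0, a1 in zip(angles, np.roll(angles, -1)):
        # map the director-angle difference into (-pi/2, pi/2]
        d = (a1 - a0 + np.pi / 2) % np.pi - np.pi / 2
        total += d
    return total / (2 * np.pi)

# A "+1/2" profile: director angle = phi/2 around the defect core
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(geometric_winding(phi / 2))    # ~ +0.5
print(geometric_winding(-phi / 2))   # ~ -0.5
```

As the text warns, this quantity only exists when the director stays in a plane along the loop; a genuinely three-dimensional (twisted) texture has no such assignment.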
Indeed, we observe point defects at the hole centers, as shown through PM and FCPM in Fig. \[FigHomeo\](a-d & f). The director field around the point defect has a twisted configuration, similar to previously observed defects in nematic droplets with radial configurations [@tw-r-st]. In addition to point defects, sometimes we observe ring defects, also seen in previous work on handlebodies [@bs-tc]. When the Mylar sheet is 25 $\mu$m thick, point defects occur significantly more often than ring defects. However, when the thickness is reduced to 6 $\mu$m, rings appear more frequently, as shown in Fig. \[FigHomeo\](e). This scale-dependence of the defect structure is analogous to the physics of spherical colloids with homeotropic anchoring – Saturn ring defects become more stable for smaller system sizes compared to those of companion point defects [@rvp; @ska]. In our case, a thinner hole substrate has a greater density of splay distortion near the hole rims, favoring the expansion of the central point defect into a ring, allowing the splay distortion to be canceled near the rim.
![\[FigHomeo\] (a, b & e) and (c, d & f) show the homeotropic system with two different NLCs: 5CB and a binary CCN mixture, respectively. All micrographs were captured via PM, except for (f), which was obtained by overlaying FCPM fluorescent intensities for two perpendicular polarizing directions, indicated by arrows marked in the corresponding color. For a substrate thickness of 25 $\mu$m, point defects are preferred (a, b, c, d & f), but for a substrate thickness of 6 $\mu$m (e), ring defects occur more often. (g) and (h) show the director configuration for different thicknesses. ](Figure3.png){width=".48\textwidth"}
Planar Anchoring and Domain Walls
=================================
{width=".8\textwidth"}
We investigate configurations that break the hole axial symmetry: anti-parallel-planar ($\pi$-planar) and 90${}^{\circ}$-twisted planar ($\nicefrac{\pi}{2}$-planar), where anti-parallel in experiments refers to the opposite rubbing directions on the top and bottom planar surfaces, coated with polyvinyl alcohol (PVA) (see Materials and Methods). The rubbed PVA does not lie perfectly flat on the surface, but instead the polymer has a slight *pretilt* angle, approximately 1-3$^{\circ}$, in the vertical direction [@pretilt; @pi-c], with the angle facing the direction of rubbing. The equilibrium state of the $\pi$-planar configuration is depicted in Fig. \[FigPi\]. Similar optical textures are seen in 5CB and the CCN mixture (Fig. \[FigPi\](a) and Fig. \[FigPi\](b)). To understand the textures, it is useful to consider a system with the same top and bottom anchoring conditions, but without the perforated Mylar sheet; we can replace the sheet’s anchoring conditions with an effective aligning field. In this case, the physics of the Fréedericksz transition, employed in the traditional twisted-nematic display, should be recalled [@meas]. In the lower or upper half-cell, the director either “bends to the left” or “bends to the right” from the midplane to the bottom or top boundary. This leads to four possibilities shown in Fig. \[FigPi\](c), with two “C” formations and two “S” formations. With perfect planar alignment, all four are degenerate and we see domain walls between them. The domain walls occur when the curve of the director changes from bending one way out of the homeotropic midplane ([*e.g.*]{} from a C-formation) into bending the other direction ([*e.g.*]{} into an S-formation), as shown in Fig. \[FigPi\]. 
In devices, these unwanted domain walls are inhibited through pre-tilting the top and bottom anchoring to bias the bend direction, similarly to our experiments in which the pretilt angle gives rise to a preferred domain after electric field annealing (see Supplementary Video 1).
With this background field structure in mind, we return to the perforated sheet (bottom of Fig. \[FigPi\](c)). The planar boundary conditions above and below the holes impose more distortion in certain areas of the holes than others, marked in the red boxes in Fig. \[FigPi\](c). These are regions where the larger-scale S or C director curvature is in conflict with the preferred anchoring direction at the hole rim. In the C-formation, most of the distortion will be located along the rubbing direction axis on the side where the C faces (Fig. \[FigPi\](d)). For the S-formation, the distortion will be along the rubbing direction on both sides (Fig. \[FigPi\](e)). The optical texture asymmetry in Fig. \[FigPi\](e) reflects how the distortion in the S-formation occurs only near the upper or lower hole edge (see bottom of Fig. \[FigPi\](c)). When the sample is flipped and viewed from the other side, the larger and smaller bright regions of the optical texture switch locations, showing that the texture asymmetry arises from the distortions’ different $z$-locations. Also, near the hole edges, the homeotropic anchoring condition induces a saddle-splay distortion (Fig. \[FigTheory\]) that favors splay in the $xz$ and $xy$ planes, competing with the tendency to follow the wall anchoring conditions in the $xz$ plane (Fig. \[FigPi\](d) and \[FigPi\](e)).
The domain walls in the $\nicefrac{\pi}{2}$-planar cell (Fig. \[FigPiO2\]) follow the same principle as those in the $\pi$-planar cell. Again, there are four possible domains, seen in experiment (Fig. \[FigPiO2\](a,b,c)). The main difference between the $\pi$ and $\nicefrac{\pi}{2}$-planar cells is the point defect location and the distortion within the hole. Because the distortion must accommodate the director in two different directions above and below the hole, the defect and director distortion will be located in between these two rubbing directions (at 45${}^{\circ}$) (Fig. \[FigPiO2\](d)). As with the $\pi$-cell, the pretilt angles of the planar surfaces pick out the corresponding domain after electric field annealing.
The dark brushes in the optical textures, usually associated with disclination textures in planar systems, do not appear to rotate when the polarizers are rotated. We believe that this is due to the sample thickness and the twist in the director. The twist might suppress the typical brushes seen in quasi-2D nematic systems. Our numerical results, described in the next section, show that we do expect twisted director configurations. We also checked via numerical minimization ([*e.g.*]{} Fig. \[FigPi\](f,g) and Fig. \[FigPiO2\](e)) that the experimental results are consistent with expected equilibrium states.
![\[FigPiO2\] (a) and (b) show the $\nicefrac{\pi}{2}$-planar cell in PM with two different NLCs: 5CB and a binary CCN mixture, respectively. Arrows on the top right represent the glass rubbing direction, with the top box representing the top glass, and likewise for the bottom box and bottom glass. Both 5CB and CCN show point defects in the holes and domain walls. In (c), the director bends continuously to point in/out to meet the upper planar boundary and left/right to meet the lower planar boundary. When a hole with homeotropic anchoring is placed into the midplane (c), some hole rim areas (marked in (d) by a red triangle) will impose more bend. These areas always occur at $45^\circ$ angles from the rubbing directions (d). Numerical results with a splay energy density colormap (e) (in units of $K_1 /(\Delta x)^2 = 3.3 \times 10^5$ J/m${}^3$, $\Delta x$ being the mesh spacing) show a ring defect wrapping around the hole, with the greatest distortion located at $45^{\circ}$ from the rubbing directions, in agreement with (c & d). Defects are marked in green.](Figure5.png){width=".48\textwidth"}
Numerical Free Energy Minimization
==================================
We employ a $\mathbf{Q}$-tensor based Landau-de Gennes (LdG) model of a nematic to study the defects in a cell with the suspended hole array. This model more accurately represents configurations with defects, but reduces to the Frank free energy, Eq. \[eq:Frank\], in the uniaxial limit where the tensor components $Q_{ij}$ are related to the director components $n_i$ via $Q_{ij} = 3S(n_i n_j-\delta_{ij}/3)/2$, where $S$ is the Maier-Saupe order parameter [@ms; @dgplc]. The LdG free energy was numerically minimized, establishing the director field and the locations of defects [@z-rav]. Defect regions are calculated by finding all places where $S<0.9S_0$, with $S_0$ the equilibrium value of the order parameter (see Materials and Methods). The three eigenvalues of the matrix $Q_{ij}$ may be written as $S$ and $-S/2\pm S_B$, where $S_B$ is the biaxial order parameter and measures the degree of biaxiality in the system. We found a maximum ratio $S_B/S_0 \sim 0.1$ outside of defect regions, with the majority of values on the order of $10^{-3}$, justifying our focus on the uniaxial limit. We used unequal elastic constants that match those of 5CB with a three-constant Landau-de Gennes free energy density. The energy density has an implicit saddle-splay term that is positive and equal to $K_2$ [@z-dr; @z-k24m].
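The uniaxial construction and the defect criterion above are easy to make concrete. The following NumPy sketch (illustrative, not the authors' minimization code) builds $Q_{ij} = 3S(n_i n_j - \delta_{ij}/3)/2$ from a director, recovers $(S, S_B)$ from the eigenvalues $S$ and $-S/2 \pm S_B$, and applies the $S < 0.9\,S_0$ defect test:

```python
import numpy as np

def q_tensor(n, S):
    """Uniaxial Q-tensor: Q_ij = (3S/2)(n_i n_j - delta_ij / 3)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return 1.5 * S * (np.outer(n, n) - np.eye(3) / 3.0)

def order_parameters(Q):
    """Eigenvalues of Q are S and -S/2 +/- S_B; recover (S, S_B)."""
    w = np.sort(np.linalg.eigvalsh(Q))       # ascending eigenvalues
    return w[2], (w[1] - w[0]) / 2.0

S0 = 0.533  # equilibrium order parameter quoted in the text

Q = q_tensor([0.0, 0.0, 1.0], S=S0)
S, S_B = order_parameters(Q)
print(S, S_B)          # uniaxial input: S recovered, S_B ~ 0
print(S < 0.9 * S0)    # defect criterion: False at a well-ordered site
```

In the actual minimization, $S$ is simply the leading eigenvalue of the relaxed $\mathbf{Q}$ at each mesh site, and mesh sites failing the criterion are flagged as defect cores.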
The hole array has homeotropic anchoring, and we set the surfaces 110 nm above and 110 nm below the circular hole array with either planar or homeotropic anchoring. The hole array itself is also 110 nm thick and the hole rims are rounded with radius 35 nm. We execute our numerics in a box, periodic in the $xy$-plane, with dimensions $713\times713$ nm${}^2$. The nematic director at each mesh site is oriented in the $z$-direction as an initial condition. We investigate systems for which the oriented directions of planar surfaces are either parallel or rotated 90${}^{\circ}$ relative to one another (twisted planar cell). We vary the value of $K_2$ (adjusting $L_{24}$ accordingly to keep $K_{24}$ the same) to probe the role of twist deformations on the resulting minimum energy state, as well as the hole diameter to see how geometrical changes alter the defect structures.
Our numerical results reproduce the observed state with defects localized inside each hole, the so-called *ring state*. However, the simulations also predict a surprising new state: a *line state* in which disclination lines with geometric $-\nicefrac{1}{2}$ winding form between rows of holes, perpendicular to the rubbing direction of the closest planar surface (Fig. \[FigSim\]). In this state, there are additional defects with geometric $+\nicefrac{1}{2}$ winding that wrap around the hole walls. The ordered arrangement of the undulating disclination lines and the defect lines that weave in and out of holes in the twisted planar case (Fig. \[FigSim\](b) and \[FigSim\](f)) is reminiscent of those seen in blue phases [@bplat; @znovbp].
Though $K_1\sim K_2$ for 5CB, we studied the effect of increased $K_2$ in simulations and found that the line state is preferred in the parallel configuration (Fig. \[FigSim\](e)). This suggests that the ring state has a greater amount of twist distortion than the line state. The geometry of the defect’s winding number profile sheds light on the energetic favorability of defect arrangements. For the ring state, cross sections of the ring defect in the $xz$-plane show geometric winding of $-\nicefrac{1}{2}$, while in the $xy$-plane the ring has a geometric winding of $+\nicefrac{1}{2}$, seen in Fig. \[FigSim\](c) through the saddle-splay colormap, where negative saddle-splay corresponds to positive geometric winding and vice versa (explained further below). To switch from one winding to the other, the nematic director must *twist* and, in this sample geometry, over a short length scale. Similar twisting ring defects were also observed in simulations of highly chiral LCs [@ring]. On the other hand, the defects in the line state do not change their geometric winding sign (Fig. \[FigSim\](d)). Cross-sections of the line state reveal that the geometric winding number of the long disclination lines is negative, while that of the rings between the holes is positive. Thus, we expect that when twisting is expensive, the line state will be favored over the ring state. Further analysis is necessary to determine whether lowering $K_2$ will lead to a stable ring state.
Moreover, we find that saddle-splay distortions help to elucidate the defect structure; compare the saddle-splay energy density of the two states depicted in Fig. \[FigSim\](c) and \[FigSim\](d). We plot the saddle-splay density for both the ring state (with 5CB elastic constants) and the line state (with $K_2$ doubled). In both states, regions with positive geometric winding have a negative saddle-splay and [*vice versa*]{}. Near the hole we observe that defects with a particular sign of saddle-splay prefer to nucleate near surfaces that induce saddle-splay of the opposite sign (Supplementary Figure 1). As we varied $K_{24}$ from $-2K_2$ to $2K_2$ in the simulations, the minimum energy state did not change. The saddle-splay distortions, independent of $K_{24}$ in Eq. \[eq:Frank\], help determine the optimal defect arrangement locally, in agreement with other studies [@tord; @zsd; @2016rav].
We also find that the ratio of the hole diameter to the inter-hole spacing alters the phases’ stability: Larger holes (from 132 nm to 220 nm) stabilize the line state because the larger hole area relative to the intra-hole, homeotropic region increases the influence of the boundary cues on the bulk, establishing director configurations which bend in the directions imposed by the hole edges. Conversely, for smaller holes, the director twists and there are ring defects in the holes (Fig. \[FigSim\](c)). The director in the bulk is then free to satisfy the homeotropic anchoring condition between the holes and to uniformly bend in a direction chosen either spontaneously or via the pretilt angle, as it would in a hybrid-anchored cell without holes.
![\[FigSim\] A network of disclination lines in a nematic formed with a hole array. Defects are marked in green. The periodic hole substrate has homeotropic anchoring and is suspended between two planar substrates, with arrows indicating the oriented planar anchoring direction. $3 \times 3$ and $2 \times 2$ hole arrays have diameters $d = 132$ nm and $220$ nm, respectively. (a) and (b) use elastic constants matching that of 5CB, and the line state is not stable for smaller diameters in the parallel planar case but is stable for larger diameters in the twisted planar case. (e) and (f) have elastic constants matching that of 5CB, but with a doubled $K_2$ value. The line state is always stable for this case. Ring state (c) and line state (d) vertical cross sections have saddle-splay energy density colormaps (in units of $|A| = 0.172 \times 10^6$ J/m${}^3$) and demonstrate that positive saddle-splay corresponds to negative geometric winding and vice versa. In the horizontal cross section of the ring (c-inset), areas that have positive geometric winding carry negative saddle-splay.](Figure6.png){width=".48\textwidth"}
Starting with the initial condition ${\bf n}=\hat z$, a system with parallel (top and bottom) anchoring and with large $K_2$ relaxes to the line state. Note that we can slightly alter these boundary conditions to nucleate the ring state by introducing a small pretilt angle ($3^\circ$ to match that of rubbed PVA [@pretilt]) into the $\hat z$-direction. This increases the energy of the alternating curving structure of the line state and thus favors the ring state. Alternatively, if we relax the numerics starting from the ring state, the line state never ensues and we find a lower total free energy, suggesting that the line state is metastable and is unstable relative to the ring state under boundary condition perturbations. Supplementary Video 2 shows how defects in the line state annihilate to make the ring state.
The planar substrate/pretilt angle arrangements also influence the defect locations in simulations. When the planar substrates are parallel, the defect is located on one side of the hole, pinning in areas of highest splay along the hole rim and following a C-formation (Fig. \[FigPi\](f)). When the planar substrates are anti-parallel (the $\pi$-cell), the defect is also pinned on portions of the hole rim edges that have the greatest amount of splay, but on both sides of the hole, following an S-formation (Fig. \[FigPi\](g)). This is consistent with our experimental observations of the defects in C- and S- configurations. We believe that the disparity in defect types, rings in simulations and points in experiments, is due to their large difference in scales (micron-scale for experiments and nano-scale for simulations).
Let us now return to the line state: is this state we predicted from simulations observable in *experiment*?
Disclination Line State
=======================
For 5CB samples, we can indeed grow the line state! The state appears when the cooling front of the isotropic to nematic phase transition closes on or near the hole array, as shown in Fig. \[FigLine\](c). When this annealing condition is engineered, disclination lines can be seen in the sample, regardless of whether or not an electric field is applied. Otherwise, we do not see the line state, so we conclude that this state is metastable in 5CB, consistent with its low ratio of $K_2/K_1$. However, this state is reproducibly achieved in samples filled with the CCN mixture after annealing with the electric field (Fig. \[FigLine\](b)) (see Supplementary Video 3), suggesting that the line state is stable for the CCN mixture under these conditions. Here, the line state can be reliably obtained in CCN samples in both the $\pi$-planar (Fig. \[FigLine\](b)) and $\nicefrac{\pi}{2}$-planar configurations (Fig. \[FigLine\](d)), after which the state persists for over 24 hours. Supplementary videos 3 and 4 show how the system relaxes after electric field annealing. Our numerical results suggest that this line state stability follows from the higher ratio of $K_2/K_1$ in CCN. There may be other factors, such as different anchoring strengths for CCN and 5CB. The low birefringence makes it difficult to calculate the CCN elastic constants and anchoring strength. Such a calculation would be an interesting focus of future work.
![\[FigLine\] Disclination lines (geometric winding $+\nicefrac{1}{2}$) confirmed in experiment with PM. Arrows on the top right represent the glass rubbing direction, with the top box representing the top glass, and likewise for the bottom box and bottom glass. A 12 V AC electric field is applied across a $\pi$-planar cell with a suspended homeotropic hole substrate. The system is then heated and cooled from the isotropic phase back to the nematic phase, after which the field is turned off. For 5CB (a), domain walls across the hole array are annealed away. For the CCN mixture in $\pi$-planar (b) and $\pi/2$-planar cells (d), undulating disclination lines running perpendicular to the rubbing direction form between the holes and are stable for over 24 hours. For 5CB, disclination lines form if the phase transition front closes on or near the hole array (c), with or without an applied electric field. The dashed white box highlights the coexistence of the domain walls and the undulating lines in the line state. Videos of these annealing processes are in Supplementary Materials.](Figure7.png){width=".47\textwidth"}
There is a relationship between how the director curves out of the homeotropic mid-plane to meet the planar surface (i.e. what determines the “four possible domains”), the planar anchoring strength, and whether or not a domain wall or defect line will form. With 5CB, the planar anchoring is strong: The majority domain (the remaining domain after annealing) is set by the planar surface arrangement. Any line discontinuity would likely be located near the planar surface to reduce the energy of disobeying the planar surface anchoring. We see domain walls form with or without a hole array. On the other hand, for the line state, the director curves out of the mid-plane in an alternating fashion with a periodicity set by the hole array (Fig. \[FigSim\](d), Fig. \[FigLine\](b)). This alternating curving leads to a discontinuity between rows of holes in the form of a disclination line in the bulk.
To conclude, we can “lasso up" three-dimensional networks of defect lines in NLCs, along with ordered arrays of point or ring defects, using a perforated sheet with homeotropic anchoring. Even with fixed system topology, a number of distinct equilibrium defect configurations are accessible by varying the boundaries’ geometrical parameters. Furthermore, we confirm that the boundary geometry and the geometric winding of defects are correlated; defects with certain saddle-splay distortions arrange near surfaces with oppositely-signed saddle-splay. This principle could be utilized to design surfaces with specific saddle-splay energies to precisely localize defects that have the corresponding geometric winding. The relative ease of inducing defect line networks, all with simple geometric cues, paves the way for more intricate blueprints of self-assembled structures in nematic LCs.
Materials {#materials .unnumbered}
=========
Numerical Modeling
------------------
We use a phenomenological LdG free energy of a nematic $\mathbf{Q}$-tensor field, based on the approach reviewed by Ravnik and Žumer [@z-rav; @sh2; @sh3]. The free energy is minimized in a finite difference scheme on a cubic mesh, on which a traceless, symmetric rank-2 tensor $\mathbf{Q}$ is defined. The nematic director can be deduced from $\mathbf{Q}$ as the eigenvector that corresponds to the leading eigenvalue $S$. The LdG free energy density is $f_{\mathrm{LdG}}=f_{\mathrm{phase}}+f_{\mathrm{grad}}$, where $f_{\mathrm{phase}}=A Q_{ij} Q_{ji}/2+ BQ_{ij} Q_{jk} Q_{ki}/3+C (Q_{ij} Q_{ji})^2/4$ and $f_{\mathrm{grad}} = L_1 \partial_k Q_{ij} \partial_k Q_{ij}/2+ L_2 \partial_j Q_{ij} \partial_k Q_{ik}/2+ L_3Q_{ij}\partial_i Q_{kl} \partial_j Q_{kl}$, where $\partial_i \equiv \frac{\partial}{\partial x_i}$ and we sum over repeated indices. In $f_{\mathrm{grad}}$, $L_1= 3.3 \times 10^{-12}$ N, $L_2 = 5.3 \times 10^{-12}$ N, and $L_3 = 3.5 \times 10^{-12}$ N to model 5CB with elastic constants $K_1 = 0.64 \times 10^{-11}$ N, $K_2 = 0.3 \times 10^{-11}$ N, $K_3 = 1 \times 10^{-11}$ N [@mkodl], and $K_{24} = K_2$ in the three-constant approximation. We also take typical values for the material constants of 5CB [@z-rav]: $A = -0.172 \times 10^6$ J/m${}^3$, $B = -2.12 \times 10^{6}$ J/m${}^3$, and $C = 1.73\times 10^6$ J/m${}^3$, giving a mesh spacing of 4.4 nm. Defects are identified as regions where $S<0.9S_0$, with $S_0 \equiv (-B+ \sqrt{B^2 - 24AC}) / 6C \approx 0.533$. The LdG free energy is minimized over $\mathbf{Q}(x)$ using a conjugate gradient algorithm from the ALGLIB package (<http://www.alglib.net/>).
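As a quick consistency check, the quoted equilibrium order parameter $S_0$ follows directly from the material constants listed above. A minimal sketch (the numerical values are exactly those stated in the text):

```python
# Equilibrium nematic order parameter S_0 from the LdG material constants
# of 5CB quoted in the text (A, B, C in J/m^3).
import math

A = -0.172e6
B = -2.12e6
C = 1.73e6

S0 = (-B + math.sqrt(B**2 - 24 * A * C)) / (6 * C)
print(round(S0, 3))        # 0.533, as quoted in the text

# Defect-detection threshold used in the text: S < 0.9 * S_0
print(round(0.9 * S0, 3))
```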
To model the anchoring, we use a Rapini-Papoular-type surface potential $\Phi_{\mathrm{surf}}=W_0^s \int_s \mathrm{d} A \operatorname{Tr} [(\mathbf{Q}-\mathbf{Q}^s)^2]$, where $Q^s_{ij} = 3S_0(\nu_{i} \nu_j - \delta_{ij}/3)/2$ is the locally preferred $\mathbf{Q}$-tensor at the anchoring surface $s$ ($\nu_i$ is the surface normal for homeotropic or the locally preferred director direction for oriented planar conditions). The potential strengths are $W_0^s=1\times10^{-2}$ J/m${}^2$ for homeotropic anchoring and $W_0^s=1.5 \times 10^{-5}$ J/m${}^2$ for oriented planar anchoring, to match the strengths of 5CB on a surface with DMOAP [@ska] and rubbed PVA [@meas2], respectively. Energy density colormaps were calculated by computing $f_{\mathrm{grad}}$ and $f_{24} \equiv -L_{24}(\partial_i Q_{ij} \partial_k Q_{jk}-\partial_i Q_{jk} \partial_k Q_{ij})/2$ for a given $\mathbf{Q}(x)$, with altered constants $L_i$ such that all the constants $K_i=0$, except for the component of interest.
LCs
---
We use 5CB (Kingston Chemicals Limited) and a 50/50 mixture of $4'$-butyl-$4$-heptyl-bicyclohexyl-$4$-carbonitrile (CCN-47) and $4$,$4'$-dipentyl-bicyclohexyl-$4$-carbonitrile (CCN-55) (Nematel, GmbH), both thermotropic LCs with a nematic phase at room temperature. 5CB has a positive dielectric anisotropy and the CCN mixture a negative one, so the molecules align parallel and perpendicular to the electric field, respectively.
Suspended hole array in LC cell
-------------------------------
A $10 \times 10$ array of holes with radius of 50 $\mu$m is prepared by repeatedly drilling a Mylar sheet using IPG Microsystem’s IX-255 UV excimer laser in the low fluence setting, provided by the University of Pennsylvania’s Quattrone Nanofabrication Facility (QNF). The Mylar is coated with silicon tetrachloride (SiCl$_4$) through vapor deposition so that the surface can be treated with N,N-dimethyl-N-octadecyl-3-aminopropyltrimethoxysilyl chloride (DMOAP; Sigma Aldrich) to obtain strong homeotropic anchoring [@ska; @sh2; @us-1]. Cover slips coated with indium tin oxide (ITO; SPI Supplies), for the application of an electric field across the sample, are treated to have oriented planar anchoring by spin coating a thin layer of polyvinyl alcohol (PVA; Sigma Aldrich), which is subsequently baked at 80${}^{\circ}$C for one hour, then rubbed with a velvet cloth in the desired direction [@us-1]. Additional 25 $\mu$m Mylar spacers are used to suspend the hole substrate between the two glass cover slips. An LC droplet is first placed on an ITO cover slip heated to 50 ${}^{\circ}$C. Next, the Mylar spacers are arranged on the cover slip, and then more LC is pipetted onto the hole array before the second cover slip is placed on top. Samples are then clamped and sealed with glue.
Optical Characterization
------------------------
PM micrographs are taken using an upright microscope in transmission mode furnished with crossed polarizers (Zeiss AxioImager M1m) and a high-resolution color camera (Zeiss AxioCam HRc). FCPM images are obtained using an inverted IX81 Olympus microscope with an FV300 Olympus confocal scan box and a half-wave plate between the objective and filter cubes to rotate the scanning laser polarization [@fcpm; @sh2]. 0.01 wt% of the dye N,N$'$-bis(2,5-di-tert-butylphenyl)-3,4,9,10-perylenedicarboximide (BTBP; Sigma Aldrich) is incorporated into the CCN mixture to allow LC director determination via FCPM [@fcpm; @fcpm2]. A scanning laser wavelength of 488 nm is used for dye excitation. The hole array is characterized by environmental scanning electron microscopy (ESEM) on an FEI Quanta 600 FEG ESEM at 10 kV, provided by the University of Pennsylvania’s Singh Center for Nanotechnology.
We thank O. Lavrentovich, B. Senyuk, Y. Xia, F. Serra, Z. Davidson, and U. Jagodič for helpful discussions. We thank T. Baumgart for access to FCPM. We also thank B. Peterson and E. Johnston of the QNF for help with hole array fabrication. This work was supported by NSF MRSEC Grant DMR11-20901 and NSF DMR12-62047. D.A.B. was supported by Harvard University through the George F. Carrier Fellowship. R.D.K. was partially supported by a Simons Investigator grant from the Simons Foundation.
---
abstract: 'We consider various problems related to finding points in ${\bbb{Q}}^{2}$ and in ${\bbb{Q}}^{3}$ which lie at rational distance from the vertices of some specified geometric object, for example, a square or rectangle in ${\bbb{Q}}^{2}$, and a cube or tetrahedron in ${\bbb{Q}}^{3}$.'
author:
- 'Andrew Bremner, Maciej Ulas'
title: Points at rational distances from the vertices of certain geometric objects
---
Introduction {#sec0}
============
Berry [@Be] showed that the set of rational points in the plane with rational distances to three given vertices of the unit square is infinite. More precisely, he showed that the set of rational parametric solutions of the corresponding system of equations is infinite; this generalizes some earlier work of Leech. In related work, he showed that for any given triangle $ABC$ in which the length of at least one side is rational and the squares of the lengths of all sides are rational, the set of points $P$ with rational distances $|PA|$, $|PB|$, $|PC|$ to the vertices of the triangle is dense in the plane of the triangle; see Berry [@Be1]. However, it is a notorious and unsolved problem to determine whether there exists a rational point in the plane at rational distance from the [*four*]{} corners of the unit square (see Problem D19 in Guy’s book [@Guy]). Because of the difficulty of this problem, one can ask a slightly different question: whether there exist rational points in the plane which lie at rational distance from the four vertices of the [*rectangle*]{} with vertices $(0,0)$, $(0,1)$, $(a,0)$, and $(a,1)$, for $a \in {\bbb{Q}}$. This problem is briefly alluded to in section D19 on p. 284 of Guy’s book. In section \[sec2\] we reduce this problem to the investigation of the existence of rational points on members of a certain family of algebraic curves $\cal{C}_{a,t}$ (depending on rational parameters $a$, $t$). We show that the set of $a \in {\bbb{Q}}$ for which the set of rational points on $\cal{C}_{a,t}$ is infinite is dense in ${\bbb{R}}$ (in the Euclidean topology). Richard Guy has pointed out that there are immediate solutions to the four-distance unit square problem if the point is allowed to lie in three-space ${\bbb{Q}}^3$. Indeed, $(\frac{1}{2}, \frac{1}{2}, \frac{1}{4})$ lies at rational distance to the four vertices $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,0)$ of the square.
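Guy’s observation is immediate to verify with exact rational arithmetic. A minimal sketch (the helper `rational_sqrt` is ours, introduced only for this check):

```python
# Check that (1/2, 1/2, 1/4) is at rational distance from the four
# vertices of the unit square in the plane z = 0.
from fractions import Fraction
from math import isqrt

def rational_sqrt(q):
    """Return sqrt(q) as a Fraction if q is the square of a rational, else None."""
    n, d = q.numerator, q.denominator
    rn, rd = isqrt(n), isqrt(d)
    return Fraction(rn, rd) if rn * rn == n and rd * rd == d else None

P = (Fraction(1, 2), Fraction(1, 2), Fraction(1, 4))
square = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)]
dists = [rational_sqrt(sum((p - v) ** 2 for p, v in zip(P, square_vertex)))
         for square_vertex in square]
print(dists)   # each distance equals Fraction(3, 4)
```

By symmetry, all four distances equal $\sqrt{1/4+1/4+1/16}=3/4$.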
This observation leads us to consider the more general problem, of points in ${\bbb{Q}}^3$ which lie at rational distance from the four vertices $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,0)$ of the unit square. In section \[sec3\] we show that such points are dense on the line $x=\frac{1}{2}$, $y=\frac{1}{2}$, and dense on the plane $x=\frac{1}{2}$. Further, there are infinitely many parameterizations of such points on the plane $x=y$. In section \[sec4\] we consider the general problem of finding points $(x,y,z) \in {\bbb{Q}}^3$ with rational distances to the vertices of a unit square lying in the plane $z=0$ without any assumptions on $x,y,z$. Attempts to show such points are dense in ${\bbb{R}}^3$ have been unsuccessful to date. However, we are able to show that the variety related to this problem is unirational over ${\bbb{Q}}$. In particular, this implies the existence of a parametric family of rational points with rational distances to the four vertices $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,0)$ of the unit square. Whether there exist points in ${\bbb{Q}}^3$ at rational distance from the eight vertices of the unit [*cube*]{} is another seemingly intractable problem which we leave as open and certainly worthy of further investigation.
In section \[sec5\] we consider the problem of finding points in ${\bbb{Q}}^3$ at rational distance from the vertices of a general tetrahedron (with rational vertices) and prove that the corresponding algebraic variety is unirational over ${\bbb{Q}}$. This is related to section D22 in Guy’s book. This result, together with the construction of a parameterized family of tetrahedra having rational edges, face areas, and volume (an independent investigation), leads to constructing a double infinity of sets of five points in ${\bbb{Q}}^3$ with the ten distances between them all rational.
Finally, in the last section we collect some numerical results and prove that under a certain symmetry assumption it is possible to find a parametric family of points in ${\bbb{Q}}^3$ with rational distances to the six vertices of the unit cube. Without symmetry, we found just one point with five of the distances rational.
Points in ${\bbb{Q}}^2$ with rational distances from the vertices of rectangles {#sec2}
===============================================================================
Let $a \in {\bbb{Q}}$. Consider the rectangle $\cal{R}_{a}$ in the plane with vertices at $P_1=(0,0)$, $P_2=(0,1)$, $P_3=(a,0)$, and $P_4=(a,1)$.
\[thm2-1\] The set of $a\in{\bbb{Q}}$ such that there are infinitely many rational points with rational distance to each of the corners $P_1,...,P_4$ of $\cal{R}_{a}$ is dense in ${\bbb{R}}$.
Let $M=(x,y)$ be a rational point with rational distance to each vertex $P_1,...,P_4$ of $\cal{R}$. This determines the following system of equations: $$\label{rectanglesys1}
\begin{cases}
\begin{array}{lll}
x^2+y^2 & = & P^2=|MP_{1}|^2, \\
x^2+(1-y)^2 & = & Q^2=|MP_{2}|^2, \\
(a-x)^2+y^2 & = & R^2=|MP_{3}|^2, \\
(a-x)^2+(1-y)^2 & = & S^2=|MP_{4}|^2.
\end{array}
\end{cases}$$ From the first and third equations, and the first and second equations, we deduce respectively $$\label{xy}
x=\frac{1}{2a}(a^2+P^2-R^2),\quad y=\frac{1}{2}(P^2-Q^2+1).$$ Eliminating $x,y$ from the system (\[rectanglesys1\]) we obtain $$\label{rectanglesys2}
\begin{cases}
\begin{array}{lll}
P^2-Q^2=R^2-S^2, \\
a^2 (R^4+a^2+1) + Q^4 + (1+a^2) S^4 = 2 Q^2 (a^2+S^2) + 2 a^2 R^2 (S^2+1).
\end{array}
\end{cases}$$ The first quadric may be parameterized by $$\label{RS}
R=\frac{(P+Q)t^2+P-Q}{2t},\quad S=\frac{(P+Q)t^2-P+Q}{2t}.$$ On homogenizing, by setting $P=X/Z$, $Q=Y/Z$, the second equation at (\[rectanglesys2\]) becomes: $$\begin{aligned}
&(1 - 4 t^2 + 6 t^4 + 16 a^2 t^4 - 4 t^6 + t^8) (X^4 + Y^4) +
4(t^2 - 1)^3(t^2 + 1) (X^2 + Y^2) X Y-\\
& 8 a^2 t^2 (1 + t^2)^2(X^2 + Y^2) Z^2 +
16 a^2 t^2 Z^2 ((1 - t^4) X Y + (1 + a^2) t^2 Z^2) +\\
&\quad 2 (3 - 4 t^2 + 2 t^4 - 16 a^2 t^4 - 4 t^6 + 3 t^8) X^2 Y^2=0.\end{aligned}$$ This equation defines a curve $\cal{C}_{a,t}$ of genus three over the field ${\bbb{Q}}(a,t)$. It is well known that a curve of genus at least $2$ defined over a function field has only finitely many points with coordinates in this field. Thus, in order to prove the theorem we must find some specialization $a_{0}$, $t_{0}$ of the rational parameters $a$, $t$, such that the corresponding curve $\cal{C}_{a_{0},t_{0}}$, has genus at most $1$. In particular, the curve $\cal{C}_{a,t}$ needs to have singular points. Denote the defining polynomial of $\cal{C}_{a,t}$ by $F=F(X,Y,Z)$. Now $\cal{C}_{a,t}$ has singular points when the system of equations $$\label{singsol}
F(X,Y,Z)=\partial_{X}F(X,Y,Z)=\partial_{Y}F(X,Y,Z)=\partial_{Z}F(X,Y,Z)=0$$ has rational solutions. In order to find solutions of this system, consider the ideal $$\op{Sing}=<F,\partial_{X}F,\partial_{Y}F,\partial_{Z}F>$$ and compute its Gröbner basis. The basis contains the polynomial $-a^2(1+a^2)t^6(1 + 2 a t - t^2) (-1 + 2 a t + t^2) Z^7$, and to obtain something non-trivial, we require $a=\pm (1-t^2)/2t$. We choose without loss of generality $a=(1-t^2)/2t$ (the other sign corresponds to solutions in which $x$ is replaced by $-x$). Now, $F=G^2$, where $$G(X,Y,Z)=(t^2-1)((t^2+1)X^2+2(t^2-1)X Y+(t^2+1)Y^2 -(t^2+1)Z^2)$$ and by abuse of notation we are working with the curve $\cal{C}_{a,t}:\;G(X,Y,Z)=0$ of degree 2 defined over the rational function field ${\bbb{Q}}(t)$. The genus of $\cal{C}_{a,t}$ is 0, and moreover, there is a ${\bbb{Q}}(t)$-rational point $(0,1,1)$ lying on $\cal{C}_{a,t}$. This point allows the parametrization of $\cal{C}_{a,t}$ in the following form: $$X=2u((1-t^2)u+(t^2+1)v), \; Y=(t^2+1)(u^2-v^2), \; Z=(t^2+1)(u^2+v^2)-2(t^2-1)u v.$$ Recalling that $P=X/Z, Q=Y/Z$ and using the expressions for $R,S$ at (\[RS\]), $x,y$ at (\[xy\]), we get that for $a=(1-t^2)/2t$ there is the following parametric solution of the system (\[rectanglesys1\]): $$\begin{aligned}
x=&\frac{4 t u(v^2-u^2)\left((t^2-1)u-(t^2+1)v\right)}{\left((t^2+1)u^2-2(t^2-1)u v+(1+t^2)v^2\right)^2},\\
y=&\frac{2 u\left((t-1)u-(t+1)v\right)\left((t+1)u-(t-1)v\right)\left((t^2-1)u - (t^2+1)v\right)}{\left((t^2+1)u^2-2(t^2-1)u v+(1+t^2)v^2\right)^2}.\end{aligned}$$ To finish the proof, note that the rational map $a:{\bbb{R}}\ni t\mapsto \frac{1-t^2}{2t}\in{\bbb{R}}$ is continuous and has the obvious property $$\lim_{t\rightarrow -\infty}a(t)=+\infty,\quad\quad \lim_{t\rightarrow +\infty}a(t)=-\infty.$$ The density of ${\bbb{Q}}$ in ${\bbb{R}}$ together with the properties of $a(t)$ immediately imply that the set $a({\bbb{Q}})\cap {\bbb{R}}_{+}$ is dense in ${\bbb{R}}_{+}$ in the Euclidean topology. The theorem follows.
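The parametric solution above can be checked with exact arithmetic. A minimal sketch, using the arbitrarily chosen specialization $t=2$, $u=2$, $v=1$ (any rationals avoiding the vanishing denominators would do):

```python
# Exact check of the parametric solution for a = (1 - t^2)/(2t):
# the four squared distances of (x, y) to the corners of R_a must be
# squares of rationals.
from fractions import Fraction
from math import isqrt

t, u, v = Fraction(2), Fraction(2), Fraction(1)
a = (1 - t**2) / (2 * t)

den = ((t**2 + 1) * u**2 - 2 * (t**2 - 1) * u * v + (1 + t**2) * v**2) ** 2
x = 4 * t * u * (v**2 - u**2) * ((t**2 - 1) * u - (t**2 + 1) * v) / den
y = (2 * u * ((t - 1) * u - (t + 1) * v) * ((t + 1) * u - (t - 1) * v)
       * ((t**2 - 1) * u - (t**2 + 1) * v)) / den

def is_rational_square(q):
    n, d = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

for d2 in (x**2 + y**2, x**2 + (1 - y)**2,
           (a - x)**2 + y**2, (a - x)**2 + (1 - y)**2):
    assert is_rational_square(d2)
print(a, x, y)   # -3/4 -48/169 -20/169
```

Here the four distances work out to $4/13$, $250/169\cdot\frac{169}{250}$-type rationals; concretely $P=4/13$, $Q=15/13$, $R=25/52$, $S=63/52$.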
Observe that the construction presented in the proof of Theorem \[thm2-1\] allows deduction of the following simple result.
\[sqrt[2]{}\] Let $K$ be a number field and suppose that $\sqrt{2}\in K$. Then the set of $K$-rational points with $K$-rational distances to the vertices of the square $\cal{R}_{1}$ is infinite.
Let $a=1$ and take $t=1+\sqrt{2}$. Then $1+2at-t^2=0$ and using the parametrization constructed at the end of Theorem \[thm2-1\] (with $v=1$) we get that for $$\begin{aligned}
x=&\frac{u(u-\sqrt{2})(1-u^2)}{(u^2-\sqrt{2}u+1)^2},\\
y=&\frac{(3-2\sqrt{2})u(\sqrt{2}(u-1)-2)(\sqrt{2}(u-1)+\sqrt{2})((1+\sqrt{2}) u-\sqrt{2}-2)}{2(u^2-\sqrt{2}u+1)^2}\end{aligned}$$ and any given $u\in K$ such that $\sqrt{2}u^2-2u+\sqrt{2}\neq 0$, the distance of the point $P=(x,y)$ to the vertices $P_{1}, P_{2}, P_{3}, P_{4}$ of $\cal{R}_{1}$ is $K$-rational.
The construction of $a$’s and the corresponding solutions $x,y$ of the system (\[rectanglesys1\]) presented in the proof of Theorem \[thm2-1\] has one aesthetic disadvantage. In order that $(x,y)$ lie [*inside*]{} the rectangle $\cal{R}$, it is necessary that $x$, $a-x$, $y$, $1-y$, all be positive. However, $$\begin{aligned}
x & (a-x)y(1-y) = -4 u^2(u^2-v^2)^2 \left( ((1-t)u+(1+t)v)((1+t)u+(1-t)v) \right)^2 \times \\
& \left( \frac{((1-t^2)u+(1+t^2)v)((1+2t-t^2)u+(1+t^2)v)((1-2t-t^2)u+(1+t^2)v)}{((1+t^2)u^2+2(1-t^2)u v+(1+t^2)v^2)^4} \right)^2\end{aligned}$$ which is evidently negative. Thus the point $(x,y)$ can never lie within the rectangle $\cal{R}$. A natural question arises therefore as to whether it is possible to find a positive rational number $a$ such that the system (\[rectanglesys1\]) has rational solutions $x, y$ with $x$, $a-x$, $y$, $1-y$, all positive? The answer is yes, on account of the family $$a=\frac{2t}{t^2-1},\quad x=\frac{t}{t^2-1},\quad y=\frac{1}{2}$$ where $x, a-x, y, 1-y$ are all positive when $t>1$; however, this family is rather uninteresting, in that correspondingly $P=Q=R=S$. An equivalent question was posed by Dodge in [@Dod] with an answer given by Shute and Yocom. They proved that if $p_{i}, q_{i}, r_{i}$ are Pythagorean triples for $i=1,2$, and $A=p_{1}q_{2}+p_{2}q_{1}, B=p_{1}p_{2}+q_{1}q_{2}$, then the point $M=(p_{1}q_{2}, q_{1}q_{2})$ lies inside the rectangle with vertices $(0,0)$, $(A,0)$, $(0,B)$, $(A,B)$, and, moreover, the distances of $M$ to the vertices of the rectangle are rational. Using their result one can prove that the set of those $a\in{\bbb{Q}}$, such that there are infinitely many rational points inside the rectangle $\cal{R}_{a}$ with rational distance to its vertices, is dense in ${\bbb{R}}_{+}$. Indeed, note that the point $$P=\left(\frac{p_{1}q_{2}}{B},\frac{q_{1}q_{2}}{B}\right)$$ lies inside the rectangle $\cal{R}_{a}$, with $a=A/B$. To finish the proof, it is enough to show that one can find infinitely many Pythagorean triples $p_{i}, q_{i}, r_{i}, i=1,2$, such that $a=A/B$ is constant. Put $$\begin{array}{lll}
p_{1}=1-U^2, & q_{1}=2U, & r_{1}=1+U^2, \\
p_{2}=1-V^2, & q_{2}=2V, & r_{2}=1+V^2
\end{array}$$ and then $$A(U,V)=2(U + V) (1-UV),\quad B(U,V)=(1+U-(1-U)V) (1 - U+ (1+U)V).$$ Since the rectangles $\cal{R}_{a}$ and $\cal{R}_{1/a}$ are equivalent under rotation by ninety degrees and scaling, we consider only the case $0<a<1$. Set $a=a(t)=\frac{2t}{1-t^2}$, with $0<t<\sqrt{2}-1$ (the transformation between $\cal{R}_{a}$ and $\cal{R}_{1/a}$ is now given by $t \leftrightarrow \frac{1-t}{1+t}$). Define $C_t$ to be the curve $A(U,V)=a(t)B(U,V)$: $$C_t: \; (U+V)(1-U V)(1-t^2) - t (1+U-(1-U)V)(1-U+(1+U)V) = 0.$$ The triple $(t,U,V)$ corresponds to a point $P$ with rational distances to the vertices of $\cal{R}_{a}$ (with $a=a(t)$) precisely when $$\label{tUVconditions}
0<\frac{p_{1}q_{2}}{B}<\frac{A}{B},\quad 0<\frac{q_{1}q_{2}}{B}<1,$$ that is, when $$\label{tUVconditions1}
\frac{V(1-U^2)}{\Delta}>0, \quad \frac{U(1-V^2)}{\Delta}>0, \quad \frac{U V}{\Delta}>0, \quad \frac{(1-U^2)(1-V^2)}{\Delta}>0,$$ where $$\Delta=(1+U-(1-U)V)(1-U+(1+U)V)=(1-U^2)(1-V^2)+4U V.$$ Our strategy is to show that the curve $C_t$ contains infinitely many rational points in the unit square $0<U<1$, $0<V<1$, when the inequalities (\[tUVconditions1\]) clearly hold, so that the inequalities (\[tUVconditions\]) will follow.\
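The construction of Shute and Yocom described above is easy to confirm numerically. A minimal sketch with the sample triples $(3,4,5)$ and $(5,12,13)$:

```python
# Shute-Yocom: with Pythagorean triples (p1,q1,r1), (p2,q2,r2), the point
# M = (p1*q2, q1*q2) has integer distances to the corners of the rectangle
# with vertices (0,0), (A,0), (0,B), (A,B), where A = p1*q2 + p2*q1 and
# B = p1*p2 + q1*q2.
from math import isqrt

p1, q1, r1 = 3, 4, 5       # 3^2 + 4^2 = 5^2
p2, q2, r2 = 5, 12, 13     # 5^2 + 12^2 = 13^2

A = p1 * q2 + p2 * q1      # 56
B = p1 * p2 + q1 * q2      # 63
M = (p1 * q2, q1 * q2)     # (36, 48)
assert 0 < M[0] < A and 0 < M[1] < B   # M lies strictly inside the rectangle

dists = []
for cx, cy in [(0, 0), (A, 0), (0, B), (A, B)]:
    d2 = (M[0] - cx) ** 2 + (M[1] - cy) ** 2
    assert isqrt(d2) ** 2 == d2        # each distance is an integer
    dists.append(isqrt(d2))
print(dists)   # [60, 52, 39, 25]
```

Indeed the distances are $q_2 r_1$, $q_1 r_2$, $p_1 r_2$, $p_2 r_1$ respectively.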
The equation for $C_t$ defines the hyperelliptic quartic curve: $$\cal{C}_{t}: \; W^2=((t^2-1)U^2-4t U+ 1-t^2)^2+4(t U^2-(t^2-1)U-t)^2,$$ where $W=2(t-U)(1+t U)V + ((t-1)U-t-1)((t+1)U+t-1)$. Now $\cal{C}_{t}$ contains the point $R=(0,\;t^2+1)$, and a cubic model $\cal{E}_{t}$ for $\cal{C}_{t}$ is given by $$\cal{E}_{t}: \; Y^2=X(X+(t^2-2 t-1)^2)(X+(t^2+2 t-1)^2).$$ The curve $\cal{E}_{t}$ contains the point $H(X,Y)=(-(1+t^2)^2, \; 4t(1-t^4))$, and it is readily checked that $H$ is of infinite order in $\cal{E}_{t}({\bbb{Q}}(t))$. We now apply theorems of Silverman [@Sil p. 368] and of Hurwitz [@Hur] (see also Skolem [@Sko p. 78]). Silverman’s theorem states that if $E_{t}$ is an elliptic curve defined over ${\bbb{Q}}(t)$ with positive rank, then for all but finitely many $t_{0}\in{\bbb{Q}}$, the curve $E_{t_{0}}$ obtained from the curve $E_{t}$ by the specialization $t=t_{0}$ has positive rank. From this result it follows that for all but finitely many $t_0\in{\bbb{Q}}$ the elliptic curve $\cal{E}_{t_0}$ is of positive rank. (Indeed, a straightforward computation shows that the specialization of $H$ at $t=t_0$ is of infinite order in $\cal{E}_{t_0}({\bbb{Q}})$ for all $t_0 \in {\bbb{Q}}$ with $t_0 \neq 0, \pm 1$, that is, for all $t_0$ giving a nonsingular specialization).
The theorem of Hurwitz states that if an elliptic curve $E$ defined over ${\bbb{Q}}$ has positive rank and one torsion point of order two (defined over the field ${\bbb{R}}$) then the set $E({\bbb{Q}})$ is dense in $E({\bbb{R}})$. The same result holds if $E$ has three torsion points (defined over the field ${\bbb{R}}$) of order two under the assumption that there is a rational point of infinite order on the bounded branch of the set $E({\bbb{R}})$. Here, for $0<t<1$, the point $H$ satisfies this latter condition, since for $0<t<1$ we have $$-(-1-2t+t^2)^2 < -(1+t^2)^2 < -(-1+2t+t^2)^2.$$
Applying the Hurwitz theorem we get that for all but finitely many $t_0\in{\bbb{Q}}$ the set $\cal{E}_{t_0}({\bbb{Q}})$ is dense in the set $\cal{E}_{t_0}({\bbb{R}})$ in the Euclidean topology. As a consequence we get that the set $\cal{C}_{t_0}({\bbb{Q}})$ is dense in the set $\cal{C}_{t_0}({\bbb{R}})$. This immediately implies that the image of the map $$\cal{C}_{t_0}({\bbb{Q}})\ni (U,W)\mapsto U\in{\bbb{R}}$$ is dense in ${\bbb{R}}$ for all but finitely many $t_0\in{\bbb{Q}}$ (which is a consequence of the positivity of the polynomial defining the quartic $\cal{C}_{t}$).
To finish the proof, therefore, we need to show that for a given rational $t\in (0,\sqrt{2}-1)$ we can find infinitely many rational points $(U,V)\in \cal{C}_{t}({\bbb{Q}})$ satisfying $0<U<1$ and $0<V<1$. Now $$V=V(U)=(W(U) -((t-1) U-(t+1))((t+1)U+t-1))/(2(t-U)(1+t U)),$$ and we consider the connected component of the curve passing through the point $(U,V)=(0,t)$, on which $V(U)$ is a continuous function on the interval $0<U<t$. Using $$\frac{dV}{dU} = \frac{-1+t^2-2t U+4t V+2U V-2t^2U V+V^2-t^2V^2+2t U V^2}{1-t^2-4t U-U^2+t^2U^2+2t V-2U V+2t^2U V-2t U^2V}$$ we compute that $\frac{dV}{dU}(0,t)= - \frac{(-1-2t+t^2)(-1+2t+t^2)}{(1+t^2)} < 0$. Writing $\frac{dV}{dU}$ in the form $$\frac{dV}{dU} = -\frac{(1+U^2)(t-V)V(1+t V)}{(1+V^2)(t-U)U(1+t U)},$$ we see that the derivative can vanish for $0<U<t$ only when $V=-1/t$ (forcing $U=0$), $V=0$ (with $U=-1/t,t$), or $V=t$ (with $U=0$). Accordingly, $\frac{dV}{dU}$ has constant (negative) sign for $0<U<t$, so that $V(U)$ is a decreasing function on the interval $0 \leq U < t$. Hence $0 \leq U < t$ implies $0 < V \leq t$ on this component of the curve. Thus the curve $C_t$ contains infinitely many rational points in the square $0<U<t$, $0<V<t$. The situation is graphed in Figure 1.
Summing up, for all but finitely many $t\in (0,\sqrt{2}-1)$ we can find infinitely many rational points satisfying the conditions (\[tUVconditions1\]) and the equation $A(U,V)=a(t)B(U,V)$. This implies that for all but finitely many rational numbers $t\in (0,\sqrt{2}-1)$, the corresponding point $P$ lies inside the rectangle $\cal{R}_{a(t)}$. By the continuity of the function $a=a(t)$, the set of values $a(t)$ with $t\in{\bbb{Q}}\cap (0,\sqrt{2}-1)$ having the required property is dense in the interval $(0,1)$.
The earlier remark about the equivalence of the rectangles $\cal{R}_{1/a}$ and $\cal{R}_a$ under rotation and scaling now gives the following theorem.
\[thm2-2\] The set of $a\in{\bbb{Q}}$ such that there are infinitely many rational points lying inside the rectangle $\cal{R}_{a}$ with rational distance to each of the corners $P_1,...,P_4$ of $\cal{R}_{a}$ is dense in ${\bbb{R}}_{+}$.
It is quite interesting that all $a$’s we have found above are of the form $(1-t^2)/2t$ or $2t/(1-t^2)$. A question arises as to whether we can find $a$’s which are not of this form and such that there is a rational point with rational distances to the vertices of the rectangle $\cal{R}_{a}$. A small numerical search for other such triples $(a,x,y) \in {\bbb{Q}}^3$ was undertaken. We wrote $(x,y)=(X/Z,Y/Z)$, $X,Y,Z>0$, and restricted the search to $a$ of height at most 20, and to $X+Y+Z \leq 1000$. The involutions $(a,x,y) \leftrightarrow (1/a,y/a,x/a)$, $(a,x,y) \leftrightarrow (a,a-x,y)$, and $(a,x,y) \leftrightarrow (a,x,1-y)$ mean that we can restrict attention to solutions satisfying $a>1$, $x \leq a/2$, $y \leq 1/2$. Of the solutions found in this range, fourteen have $x=0$; seventeen have $x=a/2$; and forty-five have $y=1/2$. These all imply some equalities between $P,Q,R,S$, and we list only those solutions found where $P,Q,R,S$ are distinct.
$a$ $x$ $y$ $a$ $x$ $y$ $a$ $x$ $y$
------- --------- -------- ------- ------- ------- ------- --------- --------
13/12 88/399 55/133 12/11 24/77 32/77 19/12 35/204 7/17
9/8 120/169 50/169 17/15 15/14 11/56 13/6 273/500 34/125
9/8 15/56 5/14
Table 1: points in ${\bbb{Q}}^2$ at rational distance to vertices of $\mathcal{R}_a$
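Each entry of Table 1 can be verified with exact arithmetic: all four squared distances to the corners of $\cal{R}_a$ must be squares of rationals. A minimal sketch (the helper `is_square` is ours):

```python
# Exact verification of the seven entries (a, x, y) of Table 1.
from fractions import Fraction as F
from math import isqrt

def is_square(q):
    """True if the positive rational q is the square of a rational."""
    n, d = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

table = [
    (F(13, 12), F(88, 399),  F(55, 133)),
    (F(12, 11), F(24, 77),   F(32, 77)),
    (F(19, 12), F(35, 204),  F(7, 17)),
    (F(9, 8),   F(120, 169), F(50, 169)),
    (F(17, 15), F(15, 14),   F(11, 56)),
    (F(13, 6),  F(273, 500), F(34, 125)),
    (F(9, 8),   F(15, 56),   F(5, 14)),
]
for a, x, y in table:
    for d2 in (x**2 + y**2, x**2 + (1 - y)**2,
               (a - x)**2 + y**2, (a - x)**2 + (1 - y)**2):
        assert is_square(d2)
print("all", len(table), "entries check out")
```

For instance, the first entry gives $(P,Q,R,S)=(187/399,\,250/399,\,509/532,\,555/532)$.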
Special points in ${\bbb{Q}}^3$ at rational distance to the vertices of the unit square {#sec3}
=======================================================================================
We normalize coordinates so that the unit square lies in the plane $z=0$, with vertices $A=\{(0,0,0),(0,1,0),(1,0,0),(1,1,0)\}$.
\[prop1\] Let $\lambda$ be the line in ${\bbb{R}}^{3}$ given by $\lambda: x=y=\frac{1}{2}$. Then the set $$\Lambda=\{P\in \lambda({\bbb{Q}}):\;\mbox{the distance } |PQ| \mbox{ is rational for all } Q\in A\}$$ is dense in $\lambda({\bbb{R}})$.
It is clear that $P=(\frac{1}{2},\frac{1}{2},z)\in \Lambda$ if and only if $$\frac{1}{4}+z^2=T^2,$$ for some rational $T$. This equation represents a conic with rational point $(z,T)=(0,\frac{1}{2})$, and so is parameterizable, for example, by: $$z=\frac{1-u^2}{4u},\quad T=\frac{1+u^2}{4u}.$$ To finish the proof, note that for the rational map $z:{\bbb{R}}\ni u\mapsto \frac{1-u^2}{4u}\in{\bbb{R}}$ we have $\overline{z({\bbb{Q}})}={\bbb{R}}$. This implies that $\Lambda = \{(\frac{1}{2},\frac{1}{2},z(u)):\;u\in{\bbb{Q}}\setminus\{0\}\}$ is dense in $\lambda({\bbb{R}})$.
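This parametrization is immediate to check with exact arithmetic. A minimal sketch, for a few arbitrary rational values of $u$:

```python
# Check that z = (1 - u^2)/(4u), T = (1 + u^2)/(4u) satisfies
# 1/4 + z^2 = T^2 for nonzero rational u.
from fractions import Fraction as F

for u in (F(1), F(2), F(1, 3), F(-5, 7)):
    z = (1 - u**2) / (4 * u)
    T = (1 + u**2) / (4 * u)
    assert F(1, 4) + z**2 == T**2
print("parametrization verified")
```

Indeed $T^2-z^2=\big((1+u^2)^2-(1-u^2)^2\big)/16u^2=4u^2/16u^2=1/4$ identically.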
\[prop2\] Let $\pi$ be the plane in ${\bbb{R}}^{3}$ given by $\pi : x=\frac{1}{2}$. Then the set $$\Pi=\{M\in \pi({\bbb{Q}}):\;\mbox{ the distance } |MR| \mbox{ is rational for all } R\in A\}$$ is dense in $\pi({\bbb{R}})$.
Points $(\frac{1}{2},y,z)$ which lie in $\Pi$ are in one to one correspondence with rational points on the intersection of the following two quadric surfaces in ${\bbb{R}}^{4}$: $$\label{sys1}
\frac{1}{4}+y^2+z^2=P^2,\quad \frac{1}{4}+(1-y)^2+z^2=Q^2.$$ Subtracting the second equation from the first gives $y=\frac{1}{2}(P^2-Q^2+1)$. So, on eliminating $y$, the problem of finding rational solutions of (\[sys1\]) is equivalent to finding rational points on the surface $S$ given by the equation $$\label{surfaceS}
S:\;4z^2=-2+2P^2-P^4+2(P^2+1)Q^2-Q^4=:H(P,Q).$$ From a geometric point of view, the (homogenized version of the) surface $S$ is a del Pezzo surface of degree two, which is the blowup of $\mathbb{P}^{2}$ at seven points lying in general position. In particular, the surface is geometrically rational, i.e. rational over ${\bbb{C}}$. Note that this immediately implies the potential density of rational points on $S$: there is a finite extension $K$ of ${\bbb{Q}}$ such that $S(K)$ is dense in the Zariski topology. However, we are interested in the density of rational points in the Euclidean topology, and there seems to be no way to use this property to address that question. We thus provide alternative reasoning.\
\
First, from (\[sys1\]) we have the inequalities $|P| \geq 1/2$, $|Q| \geq 1/2$, and because $H(P,Q)=H(\pm P, \pm Q)$ we may suppose without loss of generality that $P \geq 1/2$, $Q \geq 1/2$. We have the point on $\cal{S}$ defined by $(P_0(u,v),Q_0(u,v),z_0(u,v))=$ $$\left( \frac{u^4+1-4\frac{u(u^2-1)}{u^2+1}v+2 v^2}{4(u^2-1)v}, \quad \frac{u^4+1+4\frac{u(u^2-1)}{u^2+1}v+2v^2}{4(u^2-1)v}, \quad \frac{u^4+1-2v^2}{4(u^2+1)v} \right);$$ and in the domain $\cal{D}:=\{(u,v)\in{\bbb{R}}^{2}:\;u>1,v>0\}$, it is straightforward to verify that $P_0(u,v)$ has a single extremum at the point $$(u_0,v_0)=(\alpha+\alpha^2, \; (1+\alpha)(1+\alpha^2)), \qquad \alpha^2=\frac{1+\sqrt{5}}{2}.$$ This point is a local minimum, with minimum value $P_0(u_0,v_0)=\frac{1}{2}$. Since $P_0(u,v)$ is a continuous function in $\cal{D}$ and $\lim_{u \rightarrow 1^+} P_0(u,v) = \lim_{v \rightarrow 0^+} P_0(u,v)=\infty$, it follows that the set of values $\{P_0(u,v): u \in {\bbb{Q}}\cap (1,\infty), v \in {\bbb{Q}}\cap (0, \infty)\}$ is dense in the real interval $(\frac{1}{2}, \infty)$. Next, consider the equation $$C: \; 4Z^2=H(P_0(u,v),Q),$$ which we regard as defining a curve $C$ over ${\bbb{Q}}(u,v)$. The curve possesses the point $(Q,Z)=(Q_0(u,v), z_0(u,v))$, and has cubic model $$\begin{aligned}
E: & y^2 = x^3-\big((1+u^2)^2(1+u^4)^2+8u(1-u^8)v+4(5+6u^2-14u^4+6u^6+5u^8)v^2+ \\
& 16u(1-u^4)v^3+4(1+u^2)^2v^4 \big) x^2+16(u^4-1)^2v^2 \big( (1+u^2)(1+u^4)+2(1-u)(1+u)^3v+ \\
& 2(1+u^2)v^2 \big) \big( (1+u^2)(1+u^4)-2(1-u)^3(1+u)v+2(1+u^2)v^2 \big) x.\end{aligned}$$ It is easy to check that if $u',v'\in{\bbb{Q}}$, then the curve $E_{u',v'}$ obtained from $E$ by the specialization $u=u', v=v'$ is singular only when $u=1$ or $v=0$. However, the sets $\{1\}\times {\bbb{Q}}$, ${\bbb{Q}}\times\{0\}$ have empty intersection with $\cal{D}$. Thus, for all $(u',v')\in ({\bbb{Q}}\times{\bbb{Q}})\cap \cal{D}=:\cal{D}'$, the specialized curve $E_{u',v'}$ is an elliptic curve. Furthermore, we note that for each $u',v' \in {\bbb{Q}}$, $E_{u',v'}$ has three points of order 2 defined over ${\bbb{R}}$ (this is a simple consequence of the positivity of the discriminant of the polynomial defining the curve $E$), with $x$-coordinates $0<r_1<r_2$, so that $(0,0)$ lies on the bounded component of the curve.\
\
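As a numerical sanity check (not part of the proof), the claimed minimum of $P_0$ can be confirmed in floating point; the short script below evaluates $P_0$ at $(u_0,v_0)$ and at nearby points of $\cal{D}$.

```python
import math

def P0(u, v):
    # P_0(u, v) as defined in the proof above
    return (u**4 + 1 - 4*u*(u**2 - 1)/(u**2 + 1)*v + 2*v**2) / (4*(u**2 - 1)*v)

alpha = math.sqrt((1 + math.sqrt(5)) / 2)      # alpha^2 = (1 + sqrt 5)/2
u0 = alpha + alpha**2
v0 = (1 + alpha) * (1 + alpha**2)

assert abs(P0(u0, v0) - 0.5) < 1e-9            # minimum value 1/2
# nearby points of D give strictly larger values, consistent with a local minimum
for du, dv in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]:
    assert P0(u0 + du, v0 + dv) > 0.5
```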
The image $R_{u,v}$ on $E$ of the point $(-Q_0(u,v), z_0(u,v))$ is of infinite order as an element of the group $E({\bbb{Q}}(u,v))$. For any given $u'\in{\bbb{Q}}\cap (1,\infty)$ it is straightforward to compute the set of rational numbers $v'\in {\bbb{Q}}\cap (0,\infty)$ such that the point $R_{u',v'}$ is of finite order on $E_{u',v'}$; this set is finite in consequence of Mazur’s Theorem. Applying Silverman’s Theorem, for each given $u'\in{\bbb{Q}}\cap (1,\infty)$ the point $R_{u',v'}$ is of infinite order on the curve $E_{u',v'}$ for all but finitely many $v'\in {\bbb{Q}}$.
Now choose sequences $(u_{n})_{n\in{\bbb{N}}}$, $(v_{n})_{n\in{\bbb{N}}}$ of rational numbers such that $$u_{n} \in {\bbb{Q}}\cap (1,\infty), \lim_{n\rightarrow +\infty}u_{n}=\alpha+\alpha^2, \quad v_{n} \in {\bbb{Q}}\cap (0,\infty), \lim_{n\rightarrow +\infty} v_{n}=(1+\alpha)(1+\alpha^2),$$ so that $\lim_{n\rightarrow +\infty}P_{0}(u_{n},v_{n})=1/2$.
With $R_{u_{n},v'}$ of infinite order on $E_{u_{n},v'}$, either $R_{u_{n},v'}$ or $R_{u_{n},v'}+(0,0)$ lies on the bounded component of the curve, and we can apply the Hurwitz Theorem as before to deduce that the set $E_{u_{n},v'}({\bbb{Q}})$ is dense in the set $E_{u_{n},v'}({\bbb{R}})$. This immediately implies that the set $E({\bbb{Q}})$ is dense in the set $E({\bbb{R}})$. Because $E$ is birationally equivalent to $C$, we get that $C({\bbb{Q}})$ is dense in $C({\bbb{R}})$. Because $$\bigcup_{n\in{\bbb{N}}}\{P_0(u_{n},v'): v'\in {\bbb{Q}}\cap (0, \infty)\}$$ is dense in $(\frac{1}{2}, \infty)$, it follows that $\cal{S}({\bbb{Q}}) \cap \{(P,Q,z): P>1/2\}$ is dense in the Euclidean topology in the set $\cal{S}({\bbb{R}}) \cap \{(P,Q,z): P>1/2\}$. Our theorem follows.
[ The proof could be simplified if we were able to demonstrate finiteness of the set of $(u',v') \in \cal{D}'$ for which the point $R_{u',v'}$ is of finite order in $E_{u',v'}({\bbb{Q}})$, for then the theorems of Silverman and Hurwitz could be applied directly without the necessity of selecting limiting sequences. However, this computation is difficult. By Mazur’s Theorem the point $R_{u',v'}$ on $E_{u',v'}$ is of finite order provided $mR_{u',v'}=\cal{O}$ for some $m\in\{2,\ldots, 10,12 \}$. Let $$m R_{u,v}=\left(\frac{x_{m}}{d_{m}^2},\frac{y_{m}}{d_{m}^3}\right),$$ where $x_{m}, y_{m}, d_{m}\in{\bbb{Q}}[u,v]$ for $m=2,\ldots,10,12$. We consider the denominator of the $x$-coordinate of the point $mR_{u',v'}$ and define the curve $C_{m}:\;d_{m}(u,v)=0$. The set $C_{m}(\cal{D}')$ of points in $\cal{D}'$ lying on $C_{m}$ parameterizes those pairs $(u',v') \in \cal{D}'$ which lead to $R_{u',v'}$ of order (dividing) $m$. Consider the map $$\Phi:\;C_{m}(\cal{D}') \ni (u,v)\mapsto u\in{\bbb{Q}},$$ and put $$B:=\bigcup _{m=2}^{12}\Phi(C_{m}(\cal{D}')),$$ where for $m=7, 11$ we put $C_{m}(\cal{D}')=\emptyset$. Indeed, the case $m=7$ is impossible due to the existence of the rational point $(0,0)$ of order 2 on $E_{u',v'}$ and the fact that the torsion group of $E_{u',v'}$ cannot be isomorphic to ${\bbb{Z}}_{2}\times {\bbb{Z}}_{7}\simeq {\bbb{Z}}_{14}$; the case $m=11$ is excluded in the same way. From the definition of $B$, if $u'\not\in B$ then the point $R_{u',v'}$ is of infinite order on $E_{u',v'}$ for all rational $v'>0$. In theory at least it is possible to give a precise description of the set $B$. Indeed, for given $m$ the polynomial $d_{m}$ may be factorized as $d_{m}=f_{1,m}\cdot\ldots\cdot f_{k_{m},m}$ in ${\bbb{Q}}[u,v]$, where $f_{i,m}$ is irreducible in ${\bbb{Q}}[u,v]$. Thus $d_{m}(u,v)=0$ if and only if $f_{i,m}(u,v)=0$ for some $i\in\{1,2,\ldots,k_{m}\}$. 
The equation $f_{i,m}(u,v)=0$ defines an irreducible curve, say $C_{i,m}$, and thus $$B=\bigcup_{m=2}^{12}\bigcup_{i=1}^{k_{m}}\Phi(C_{i,m}(\cal{D}'))$$ (where we define $C_{i,7}(\cal{D}')=C_{i,11}(\cal{D}')=\emptyset$). For example, we have $d_{2}(u,v)=(u^2-1)(u^4 - 2v^2 + 1)f_{4,2}(u,v)f_{5,2}(u,v)$. The curve $C_{3,2}:\;u^4 - 2v^2 + 1=0$ is of genus 1 and the only rational points on $C_{3,2}$ satisfy $|u|=|v|=1$. The genus of $C_{4,2}$ is 3 and the genus of $C_{5,2}$ is 19; thus by Faltings’s Theorem these curves contain only finitely many rational points. However, we are unable to compute the corresponding sets. Matters are even worse for $m\geq 3$. It is a highly non-trivial task to compute the factorization of $d_{m}$ and even when this has been done, it is still necessary to compute the genus of the corresponding curves. When $m=3$ we were able to compute that $d_{3}(u,v)=(u^2-1)(u^4 - 2v^2 + 1)f_{4,3}(u,v)$, where $f_{4,3}$ is of degree 72. A rather long computation was needed in order to check that the genus of $C_{4,3}$ is $\geq 65$. To get this inequality we reduce the curve $C_{4,3}$ modulo 5 and observe that $f_{4,3}\in\mathbb{F}_{5}[u,v]$ is irreducible and $\op{deg}_{{\bbb{Q}}[u,v]}f_{4,3}=\op{deg}_{\mathbb{F}_{5}[u,v]}f_{4,3}$. We thus get the inequality $\op{genus}_{{\bbb{C}}}(C_{4,3})\geq \op{genus}_{\mathbb{F}_{5}}(C_{4,3})=65$, where the last equality was obtained via computation in Magma. When $m=4$ we have $d_{4}(u,v)=(u^2-1)(u^4 - 2v^2 + 1)f_{4,4}(u,v)f_{5,4}(u,v)$, where $\op{deg}f_{4,4}=36$ and $\op{deg}f_{5,4}=72$. Using similar reasoning as for $m=3$, the genus of $C_{4,4}$ is $\geq 29$ and the genus of $C_{5,4}$ is $\geq 113$ (in this case we performed calculations over $\mathbb{F}_{3}$). When $m=5$ we have $d_{5}(u,v)=(u^2-1)(u^4 - 2v^2 + 1)f_{4,5}(u,v)$, where $\op{deg}f_{4,5}=216$. We were unable to finish the genus calculations in this case: Magma was still running after three days. 
However, we expect that these computations can be performed, and we believe that in each case the genus of the corresponding curve is $\geq 2$, which would imply (via Faltings’s Theorem) that the set $B$ is finite. ]{}
[The combination of the theorems of Hurwitz and Silverman which allows proof of the density results is a very useful tool and can be used in other situations too; see [@Be1; @BrUl; @Ul]. ]{}
[It is clear that the same result as in Proposition \[prop2\] can be obtained for the plane given by the equation $y=\frac{1}{2}$. ]{}
We are able to prove the following result (which falls short of being a density statement) concerning the existence of rational points on the plane $x=y$ with rational distances to elements of $A$.
\[prop3\] Let $\pi$ be the plane in ${\bbb{R}}^{3}$ given by $\pi : x=y$. Then the set $$\Pi=\{P\in \pi({\bbb{Q}}):\;\mbox{ the distance } |PQ| \mbox{ is rational for all } Q\in A\}$$ contains images of infinitely many rational parametric curves.
For a point $(x,x,z)$ on $\pi$, let $P$, $Q$, $S$ denote its distances to $(0,0,0)$, to $(0,1,0)$ (which by symmetry equals the distance to $(1,0,0)$), and to $(1,1,0)$, respectively. We then have $$\label{prop3.4}
\begin{cases}
\begin{array}{lll}
2 x^2 + z^2 & = &P^2, \\
2 x^2-2x+1+z^2 & = &Q^2, \\
2 (x-1)^2 +z^2 & = & S^2.
\end{array}
\end{cases}$$ Thus $$P^2-2 Q^2+S^2= 0, \qquad x=1/2+(P^2-Q^2)/2, \qquad z^2 = P^2-2 x^2.$$ The former is parametrized by $$\tau \;P = m^2+2m-1, \quad \tau \;Q=m^2+1, \quad \tau \;S=m^2-2m-1,$$ giving $$x=1/2-2m(1-m^2)/\tau^2, \qquad (\tau^2z)^2 = 1/2 (\tau^2-8 m^2)(-\tau^2+2(1-m^2)^2).$$ Regard the latter as an elliptic quartic over ${\bbb{Q}}(m)$. Under the quadratic base change $m=4k/(2+k^2)$, the curve becomes, with $\tau=t/(2+k^2)^2$, $Z=t^2 z$, $$\cal{C}:\;Z^2 = -\frac{1}{2}(t^2-128k^2(2+k^2)^2)(t^2 - 2(4-12k^2+k^4)^2),$$ which has a point at $$\begin{aligned}
(t,Z) = \Big(& \frac{4(2+k^2)^2(4-12k^2+k^4)}{(12-4k^2+3k^4)}, \\
& \frac{4(4-k^4)(4-12k^2+k^4)(16-352k^2-104k^4-88k^6+k^8)}{(12-4k^2+3k^4)^2} \Big).\end{aligned}$$ A cubic model of the curve is $$\cal{E}:\; Y^2 = X(X - (4-16k-12k^2-8k^3+k^4)^2) (X - (4+16k-12k^2+8k^3+k^4)^2),$$ with point of infinite order $Q=(X,Y)$, where $$\begin{aligned}
X&=\frac{(2+k^2)^2(12-4k^2+3k^4)^2}{(-2+k^2)^2}, \\
Y&= \frac{8(2+k^2)(-16+20k^2+k^6)(-4-5k^4+k^6)(12-4k^2+3k^4)}{(-2+k^2)^3}.\end{aligned}$$ We do not present explicitly the map ${\varphi}:\;\cal{C}\rightarrow \cal{E}$ because the formula is unwieldy. Note that the existence of $Q$ of infinite order on $\cal{E}$ implies the Zariski density of rational points on the surface $\cal{E}$ (using the same reasoning as in the proof of the previous theorem). Computing ${\varphi}^{-1}(mQ)$ for $m\in{\bbb{Z}}$, and then the expressions for $x, z$, we get rational parametric solutions of the system (\[prop3.4\]). This observation finishes the proof.
[The simplest parametric solution of (\[prop3.4\]) that we find is $$x=y=\frac{(4+2k-2k^2+k^3)(2-2k+k^2+k^3)(4-16k-12k^2-8k^3+k^4)}{2(2+k^2)^3(4-12k^2+k^4)},$$ $$z=\frac{(2-k^2)(16-352k^2-104k^4-88k^6+k^8)}{4(2+k^2)^3(4-12k^2+k^4)}.$$ ]{}
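This solution is easy to check with exact rational arithmetic: for any admissible rational $k$, the three left-hand sides of (\[prop3.4\]) must be squares of rationals. A stdlib-only verification sketch:

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q):
    # a reduced fraction n/d is a rational square iff n >= 0 and both n, d are perfect squares
    n, d = q.numerator, q.denominator
    return n >= 0 and isqrt(n)**2 == n and isqrt(d)**2 == d

for k in [Fraction(1), Fraction(1, 2), Fraction(3), Fraction(-2, 5)]:
    den = (2 + k**2)**3 * (4 - 12*k**2 + k**4)
    x = ((4 + 2*k - 2*k**2 + k**3) * (2 - 2*k + k**2 + k**3)
         * (4 - 16*k - 12*k**2 - 8*k**3 + k**4)) / (2*den)
    z = ((2 - k**2) * (16 - 352*k**2 - 104*k**4 - 88*k**6 + k**8)) / (4*den)
    # the three left-hand sides of the system, with y = x
    for s in (2*x**2 + z**2, 2*x**2 - 2*x + 1 + z**2, 2*(x - 1)**2 + z**2):
        assert is_rational_square(s)
```

For instance, $k=1$ gives $x=y=155/189$, $z=527/756$, with distances $1023/756$, $825/756$, $561/756$.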
Points in ${\bbb{Q}}^3$ at rational distance to the vertices of the unit square {#sec4}
===============================================================================
We consider here the problem of finding points in ${\bbb{Q}}^3$ that lie at rational distance to the vertices of the unit square, i.e. we do not assume any additional constraints on the coordinates of the points. From the previous section we know that there is an infinite set $\cal{M}$ of rational curves lying in the plane $x=1/2$ (or in the plane $x=y$) with the property that each rational point on each curve $C\in\cal{M}$ has rational distance to the vertices of the unit square. A question arises whether in the more general situation considered here we can expect the existence of rational surfaces having the same property. Moreover, can any density result be obtained in this case? Unfortunately, we are unable to prove any density result. However, we show that there are many rational points in ${\bbb{Q}}^{3}$ lying at rational distance to the vertices of the unit square. More precisely, we show the following.
\[unirational\] Put $A=\{(0,0,0),\;(0,1,0),\;(1,0,0),\;(1,1,0)\}$ and consider the set $$\cal{F}:=\{P\in {\bbb{Q}}^3:\;\mbox{the distance}\;|PQ|\;\mbox{is rational for all}\;Q\in A\}.$$ Then the algebraic variety parameterizing the set $\cal{F}$ is unirational over ${\bbb{Q}}$.
It is clear that points in $\cal{F}$ are in one-to-one correspondence with rational points on the intersection in ${\bbb{R}}^{7}$ of the following four quadratic threefolds: $$\label{sys2}
\begin{cases}
\begin{array}{lll}
x^2+y^2+z^2 & = & P^2, \\
(1-x)^2+y^2+z^2 & = & Q^2, \\
x^2+(1-y)^2+z^2 & = & R^2, \\
(1-x)^2+(1-y)^2+z^2 & = & S^2.
\end{array}
\end{cases}$$ We immediately have $$\label{xysol}
x=\frac{1}{2}(P^2-Q^2+1),\quad y=\frac{1}{2}(P^2-R^2+1),$$ and $P^2-R^2=Q^2-S^2$. All rational solutions of the latter are given by $$P=uX+Y,\quad Q=uX-Y,\quad R=uY+X,\quad S=uY-X,$$ and then from (\[xysol\]), $$x=2 u X Y+\frac{1}{2},\quad y=\frac{1}{2}(u^2-1)(X^2-Y^2)+\frac{1}{2}.$$ Finding points on the system (\[sys2\]) now reduces to studying the algebraic variety $$\cal{S}:\;V^2=G(u,X,Y),$$ where $V=2z$ and the polynomial $G$ is given by $$G(u,X,Y)=-2+2(u^2+1)(X^2+Y^2)-(u^2-1)^2(X^2-Y^2)^2-16u^2X^2Y^2.$$ The dimension of $\cal{S}$ is 3. However, we can view the variety $\cal{S}$ as a del Pezzo surface of degree two defined over the field ${\bbb{Q}}(u)$. It is known then that the existence of a sufficiently general ${\bbb{Q}}(u)$-rational point on $\cal{S}$ implies ${\bbb{Q}}(u)$-unirationality, and in consequence ${\bbb{Q}}$-unirationality, of $\cal{S}$ (see Manin [@Man Theorem 29.4]). However, it seems that there is no general ${\bbb{Q}}(u)$-rational point on $\cal{S}$. Thus it is natural to ask how one can construct a rational base change $u={\varphi}(t)$ such that the surface $\cal{S}_{{\varphi}}:\;V^2=G({\varphi}(t),X,Y)$, defined over the field ${\bbb{Q}}(t)$, contains a ${\bbb{Q}}(t)$-rational point. We present the following approach to this problem. Suppose that $Q_{0}=(u_{0},X_{0},Y_{0},V_{0})$ is a rational point with non-zero coordinates lying on $\cal{S}$. We construct a parametric curve $\cal{L}$ lying on $\cal{S}$ as follows. Define $\cal{L}$ by equations $$\cal{L}:\;u=u_{0},\;X=T+X_{0},\;Y=pT+Y_{0},\;V=qT^2+tT+V_{0},$$ where $t$ is a rational parameter and $p,q,T$ are to be determined. With $u,X,Y,V$ so defined, $V^2-G(u,X,Y)=\sum_{i=1}^{4}A_{i}(p,q)T^i$. The expression $A_{1}$ is linear in $p$ and takes the form $A_{1}=pB_{1}+B_{0}+2tV_{0}$, where $B_{0}, B_{1}$ depend only on the coordinates of the point $Q_{0}$. In particular, $A_{1}$ is independent of $q$; so the equation $A_{1}=0$ can be solved for $p$ if and only if $B_{1}\neq 0$.
The expression for $B_1$ is $$B_{1}=4Y_{0}((-u_0^4+10u_0^2-1)X_0^2+(u_0^2-1)^2Y_0^2-u_0^2-1).$$ Next, observe that $A_{2}=C_{2}p^2+C_{1}p+C_{0}+2qV_{0}+t^2$, where $C_{i}$ depend only on the coordinates of the point $Q_{0}$ for $i=0,1,2$, and thus $A_2=0$ can be solved for $q$ precisely when $V_0$ is non-zero. To sum up, the system $A_{1}=A_{2}=0$ has a non-trivial solution for $p,q$ as rational functions in ${\bbb{Q}}(t)$ when $B_1V_0 \neq 0$. With $p,q$ computed in this way: $$V^2-G(u,X,Y)=T^3(A_{3}(p,q)+A_{4}(p,q)T).$$ If $A_{3}A_{4}\neq 0$ as a function in $t$ then the expression for $T$ that we seek is given by $T=-A_{3}(p,q)/A_{4}(p,q)$. Thus if the point $Q_{0}=(u_0,X_0,Y_0,V_0)$ satisfies certain conditions, then there exists a rational curve on the surface $\cal{S}_{u_{0}}:\;V^2=G(u_{0},X,Y)$. Moreover, the curve constructed in this manner can be used to produce rational expressions for $P,Q,R,S$ and in consequence rational expressions for $x,y,z$ satisfying the system (\[sys2\]).
Let $X=X'(t), Y=Y'(t)$ be parametric equations of the constructed curve. The polynomial $G$ is invariant under the mapping $(u,X,Y)\mapsto \Big(\frac{X}{Y},uY,Y\Big)$ and thus we can define a non-constant base change $u={\varphi}(t)=X'(t)/Y'(t)$ such that the surface $\cal{S}_{{\varphi}}:\;V^2=G({\varphi}(t),X,Y)$ contains the ${\bbb{Q}}(t)$-rational point $(X,Y)=(u_{0}Y'(t),Y'(t))$. Using the cited result of Manin we get ${\bbb{Q}}(t)$-unirationality of $\cal{S}_{{\varphi}}$ and in consequence ${\bbb{Q}}$-unirationality of $\cal{S}$.
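Both the reduction to $V^2=G(u,X,Y)$ and the invariance of $G$ under $(u,X,Y)\mapsto\big(\frac{X}{Y},uY,Y\big)$ are polynomial identities, so they can be spot-checked exactly at rational sample points; a small stdlib sketch:

```python
from fractions import Fraction as F

def G(u, X, Y):
    return (-2 + 2*(u**2 + 1)*(X**2 + Y**2)
            - (u**2 - 1)**2*(X**2 - Y**2)**2 - 16*u**2*X**2*Y**2)

for u, X, Y in [(F(2), F(1, 3), F(5, 7)), (F(3, 2), F(4), F(1, 2)), (F(7, 5), F(2, 3), F(3))]:
    # with P = uX + Y and x, y as in the text, z^2 = P^2 - x^2 - y^2, so (2z)^2 = G(u, X, Y)
    P = u*X + Y
    x = 2*u*X*Y + F(1, 2)
    y = (u**2 - 1)*(X**2 - Y**2)/2 + F(1, 2)
    assert 4*(P**2 - x**2 - y**2) == G(u, X, Y)
    # invariance of G under (u, X, Y) -> (X/Y, uY, Y)
    assert G(X/Y, u*Y, Y) == G(u, X, Y)
```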
Thus in order to finish the proof it suffices to find a suitable point $Q_0$ on the threefold $\cal{S}$. It is straightforward to check that all the required conditions on $Q_0$ are met on taking $$(u_{0},X_{0},Y_{0},V_{0})=\Big(2,\frac{1}{12},\frac{19}{36},\frac{7}{27}\Big).$$ With this choice of $Q_0$, the expressions for $x,y,z$ arising from the constructed parametric curve are as follows: $$\begin{aligned}
x = & 3(5522066829177276301427600 - 258403606687492419505600t \\
& + 24350105869790104153088t^2 - 930272613423360964576t^3 \\
& + 39295267680627366536t^4 - 1085485845235095088t^5 \\
& + 24133448660417792t^6 - 401146604231320t^7 + 3899504263625t^8)/\Delta^2, \\
y = & 30(3992136439221148602640 - 6939554120499388567712t \\
& + 117488065643083258096t^2 - 13393876262858078048t^3 \\
& + 411476041942299568t^4 - 13249457441223848t^5 \\
& + 681815047971100t^6 - 7562115944888t^7 + 337499289355t^8)/\Delta^2, \\
z = & 714(3779374597422498556400 + 529318935972209201600t \\
& - 977278343015269168t^2 + 1745565618326470736t^3 \\
& - 10290117484952896t^4 + 1635035001144368t^5 \\
& - 3620551914412t^6 + 458263598420t^7 + 118863425t^8)/\Delta^2,\end{aligned}$$ where $$\Delta=18(221769748580 - 3052768504t + 670128264t^2 - 6059132t^3 + 500425t^4).$$
[The point $(x,y,z)$ satisfies $0<x,y<1$ for values of $t$ satisfying $t<-10.9337$, or $t>28.2852$. ]{}
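The base point $Q_{0}=\big(2,\frac{1}{12},\frac{19}{36},\frac{7}{27}\big)$ used in the proof can be verified to lie on $\cal{S}$ by exact arithmetic:

```python
from fractions import Fraction as F

def G(u, X, Y):
    # the polynomial G defining S: V^2 = G(u, X, Y)
    return (-2 + 2*(u**2 + 1)*(X**2 + Y**2)
            - (u**2 - 1)**2*(X**2 - Y**2)**2 - 16*u**2*X**2*Y**2)

u0, X0, Y0, V0 = F(2), F(1, 12), F(19, 36), F(7, 27)
assert G(u0, X0, Y0) == V0**2   # Q_0 lies on S
```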
Notwithstanding the large coefficient size in the above parameterization, there seem to be many points in ${\bbb{Q}}^3$ at rational distance to the vertices $A$ of the unit square. A (non-exhaustive) search finds the following points $(x,y,z) \in {\bbb{Q}}^3$ of height at most $10^4$, $x \neq \frac{1}{2}$, $x \neq y$, having rational distances to the vertices $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,0)$ of the unit square, and which lie in the positive octant. We list only one point from each orbit under the symmetries $x \leftrightarrow 1-x, \;\; y \leftrightarrow 1-y, \;\; x \leftrightarrow y$.
$x$ $y$ $z$ $x$ $y$ $z$
----------- ----------- ----------- ----------- ----------- ----------
41/27 77/108 28/27 1/35 37/105 17/140
5/54 35/108 7/54 161/80 587/300 7/25
83/125 549/500 14/75 37/156 987/2704 119/676
1/189 283/756 31/189 232/189 493/756 59/189
113/190 2369/1900 287/2850 202/195 213/325 161/1300
383/348 5397/1682 2429/1682 571/476 2419/2975 94/425
203/594 119/1188 469/594 1589/594 985/1188 427/594
1/756 127/1512 307/756 1436/847 7967/3388 992/847
127/1029 341/1372 307/343 251/1029 401/1372 223/343
791/1210 5299/3630 2569/2420 1571/1210 7487/7260 509/1210
1906/2541 4019/3388 360/847 2185/2541 3819/3388 345/847
3059/2738 4487/5476 3059/8214
Table 2: points in ${\bbb{Q}}^3$ at rational distance to the vertices of the unit square
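Entries of Table 2 are straightforward to verify with exact arithmetic; the sketch below checks the first three listed points against all four vertices.

```python
from fractions import Fraction as F
from math import isqrt

def has_rational_distance(p, v):
    # the squared distance, as a reduced fraction, must have square numerator and denominator
    d2 = sum((a - b)**2 for a, b in zip(p, v))
    n, d = d2.numerator, d2.denominator
    return isqrt(n)**2 == n and isqrt(d)**2 == d

vertices = [(F(0), F(0), F(0)), (F(0), F(1), F(0)), (F(1), F(0), F(0)), (F(1), F(1), F(0))]
points = [(F(41, 27), F(77, 108), F(28, 27)),
          (F(1, 35), F(37, 105), F(17, 140)),
          (F(5, 54), F(35, 108), F(7, 54))]
for p in points:
    assert all(has_rational_distance(p, v) for v in vertices)
```

For example, $(41/27,\,77/108,\,28/27)$ has distances $213/108$, $201/108$, $147/108$, $129/108$ to the four vertices.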
We are motivated to make the following conjecture.
Put $A=\{(0,0,0),\;(0,1,0),\;(1,0,0),\;(1,1,0)\}$ and consider the set $$\cal{F}:=\{P\in {\bbb{Q}}^3:\;\mbox{the distance}\;|PQ|\;\mbox{is rational for all}\;Q\in A\}.$$ Then $\cal{F}$ is dense in ${\bbb{R}}^3$ in the Euclidean topology.
Points with rational distances from the vertices of a tetrahedron {#sec5}
=================================================================
Let $P_{0}, P_{1}, P_{2}, P_{3}$ be given points in ${\bbb{Q}}^{3}$, not all lying on a plane. Without loss of generality we may assume that $P_{0}=(0,0,0)$. Put $P_{i}=(a_{i1},a_{i2},a_{i3})$ for $i=1,2,3$, and define $d_{ij}=|P_{i}P_{j}|$ for $0 \leq i<j \leq 3$, i.e. $d_{ij}$ is the distance between the points $P_{i}, P_{j}$. The constraint on the points $P_i$ implies that the points define the vertices of a genuine tetrahedron with non-zero volume, so that the determinant of the matrix $[a_{ij}]_{1\leq i,j\leq 3}$ is non-zero. Let $P=(x,y,z)$ be a point in ${\bbb{Q}}^{3}$ with rational distance to each of the points $P_{0}$, $P_{1}$, $P_{2}$, $P_{3}$. The corresponding system of Diophantine equations is thus $$\begin{cases}
\begin{array}{lll}
x^2+y^2+z^2 & = & Q_{0}^2 \\
(x-a_{i1})^2+(y-a_{i2})^2+(z-a_{i3})^2 & = & Q_{i}^2,\quad \mbox{for}\; i=1,2,3,
\end{array}
\end{cases}$$ or equivalently, on replacing the second, third, and fourth equations by their differences with the first equation, $$\label{generalsys}
\begin{cases}
\begin{array}{lll}
x^2+y^2+z^2 & = & Q_{0}^2 \\
a_{i1}x+a_{i2}y+a_{i3}z & = & \frac{1}{2}(Q_{0}^2-Q_{i}^2+d_{0i}^{2}),\quad \mbox{for}\; i=1,2,3.
\end{array}
\end{cases}$$ Since the determinant of the matrix $A=[a_{ij}]_{1\leq i,j\leq 3}$ is non-zero, the (linear) system consisting of the last three equations from (\[generalsys\]) can be solved with respect to $x,y,z$. The solution takes the following form: $$\label{xyz}
x=\frac{\op{det}A_{1}}{\op{det}A},\quad y=\frac{\op{det}A_{2}}{\op{det}A},\quad z=\frac{\op{det}A_{3}}{\op{det}A}$$ where $A_{i}$, for $i=1,2,3$, is obtained from the matrix $A$ by replacing the $i$-th column by the column comprising the right hand sides of the last three equations from (\[generalsys\]). In particular, $x,y,z$ are (inhomogeneous) quadratic forms in four variables $Q_{0}$, $Q_{1}$, $Q_{2}$, $Q_{3}$, with coefficients in $\mathbb{K}:={\bbb{Q}}(\{a_{ij}:\;i,j\in\{1,2,3\}\})$. Putting these computed values of $x$,$y$,$z$ into the first equation, there results one inhomogeneous equation of degree four in four variables. We homogenize this equation by introducing new variables $Q_{i}=R_{i}/R_{4}$ for $i=0,1,2,3$, and work with the quartic threefold, say $\cal{X}$, defined by an equation of the form $\cal{F}({\bf R})=0$, where for ease of notation we put ${\bf R}=(R_{0},R_{1},R_{2},R_{3},R_{4})$. Using Mathematica, the set $\op{Sing}(\cal{X})$ of singular points of the variety $\cal{X}$ is computed to be $$\begin{aligned}
\op{Sing}(\cal{X})=\{&(0,\pm d_{01},\pm d_{02}, \pm d_{03},1), \; (\pm d_{01}, 0, \pm d_{12}, \pm d_{13},1), \\
&(\pm d_{02},\pm d_{12}, 0, \pm d_{23},1), \; (\pm d_{03},\pm d_{13}, \pm d_{23}, 0, 1), \; (1,\pm 1, \pm 1, \pm 1, 0)\}.\end{aligned}$$ Thus for generic choice of $P_{1}$, $P_{2}$, $P_{3}$, the variety $\cal{X}$ contains 40 isolated singular points.
We now prove that for generic choice of $P_{1}$, $P_{2}$, $P_{3}$, there is a solution depending on three (homogeneous) parameters of the equation defining the variety $\cal{X}$. We thus regard $a_{ij}$ as independent variables and work with $\cal{X}$ as a quartic threefold defined over the rational function field $\mathbb{K}$. In order to find a parameterization we will use the rational double point $P=(1,1,1,1,0)$ lying on $\cal{X}$ and the idea used in the proof of Theorem \[unirational\]. Put $$R_{0}=T+1,\quad R_{i}=(p_{i}+1)T+1,\quad\mbox{for}\;i=1,2,3,\mbox{ and}\quad R_{4}=p_{4}T,$$ where $p_{i}$ and $T$ are to be determined. On substituting these expressions into the equation $\cal{F}({\bf R})=0$, there results $T^2(C_{2}+C_{3}T+C_{4}T^2)=0$, where $C_{i}$ is a homogeneous form of degree $i$ in the four variables $p_{1},\ldots,p_{4}$. Certainly under the assumption on the points $P_{i}, i=0,1,2,3$ (namely, $\op{det}A\neq0$), the form $C_{2}$ is non-zero as an element of $\mathbb{K}[p_{1},p_{2},p_{3},p_{4}]$. Indeed, we have $C_{2}(0,0,0,p_{4})=-(\op{det}A)^2p_{4}^2$. We also checked that for a generic choice of the points $P_{1}$, $P_{2}$, $P_{3}$, the polynomial $C_{2}$ is genuinely dependent upon the variables $p_{1},\ldots,p_{4}$, in that there are no linear forms $L_{j}(p_1,...,p_4)$, $j=1,2,3$, such that $C_{2}(L_1,L_2,L_3)$ is a form in three or fewer variables.
Consider now the quadric $\cal{Y}:\;C_{2}(p_1,p_2,p_3,p_{4})=0$, regarded as a quadric defined over $\mathbb{K}$. There are $\mathbb{K}$-rational points on $\cal{Y}$, namely $Y_{j}=(a_{1j},a_{2j},a_{3j},1)$, $j=1,2,3$, and so in particular, $\cal{Y}$ can be rationally parameterized with parametrization of the form $p_{i}=X_{i}(q_1,q_2,q_3)$, for homogeneous quadratic forms $X_{i}$, $i=1,2,3,4$. Thus, after the substitution $p_{i}\rightarrow X_{i}$ there results an equation $T^3(C'_{3}+C'_{4}T)=0$, where $C'_{i}=C_{i}(X_1,X_2,X_3,X_4)$ and $C'_{i}\neq 0$ as an element of $\mathbb{K}[q_{1}, q_{2}, q_{3}]$ for $i=3,4$. This equation has a non-zero $\mathbb{K}$-rational root $T={\varphi}(q_1,q_2,q_3)=-C'_{3}/C'_{4}$ and accordingly we get a rational parametric solution in three (homogeneous) parameters of the equation defining $\cal{X}$, in the form $$Q_{0}=\frac{1}{X_{4}({\bf q})}\Big(1+\frac{1}{{\varphi}({\bf q})}\Big),\quad Q_{i}=\frac{1}{X_{4}({\bf q})}\Big(1+X_{i}({\bf q})+\frac{1}{{\varphi}({\bf q})}\Big), \quad i=1,2,3,$$ where we put ${\bf q}=(q_1,q_2,q_3)$. It is straightforward to check that the image of the map $\Phi:\;\mathbb{P}(\mathbb{K})^2\ni (q_{1},q_{2},q_{3})\mapsto (Q_{0},Q_{1},Q_{2},Q_{3})\in\cal{X}(\mathbb{K})$ is not contained in a curve lying on the variety $\cal{X}$. Using now the expressions for $Q_{0}$, $Q_{1}$, $Q_{2}$, $Q_{3}$, we can recover the corresponding expressions for $x,y,z$ given by (\[xyz\]).
It is possible to write down from the Jacobian matrix of ${\bf R}(q_1,q_2,q_3)$ all the conditions on $\{a_{ij}\}$, $i,j\in\{1,2,3\}$, which guarantee that the parameterization is genuinely dependent on three (homogeneous) parameters. However, we refrain from doing so, because the computation is massively memory-intensive, and the resulting equations complicated and unenlightening. If we choose particular values of $a_{ij}$, then this independence of $q_1,q_2,q_3$ is readily checked (as happens, for example, when $P_1=(1,0,0)$, $P_2=(0,1,0)$, $P_3=(0,0,1)$). In general, there results an explicit rational parameterization in three independent parameters. There may, however, be some choices of vertices $P_i$ for which this approach (with the particular rational double point chosen in the construction) results in the image of the map $\Phi$ being a curve lying on $\cal{X}$.\
To sum up, we have the following result.
\[unirationlity2\] Let $P_{0}=(0,0,0)$ and let $P_{i}=(a_{i1},a_{i2},a_{i3})$ be generic points in ${\bbb{Q}}^3$ for $i=1,2,3$. Then the variety parameterizing the points $P\in{\bbb{Q}}^3$ with rational distances to $P_{i}$, $i=0,1,2,3$ is a quartic threefold $\cal{X}$; and the set of rational parametric solutions of the equation defining $\cal{X}$ is non-empty.
We believe that much more is true.
\[uniconj\] Let $P_{0}=(0,0,0)$ and $P_{1}$, $P_{2}$, $P_{3}$ be generic rational points such that no three lie on a line and the points do not all lie on a plane. Then the variety, say $\cal{X}$, parameterizing those $P\in{\bbb{Q}}^3$ with rational distances to $P_{i}$, $i=0,1,2,3$, is unirational over ${\bbb{Q}}$.
One can also state the following natural question.
Let $\cal{X}$ be defined as in Conjecture \[uniconj\]. Is the set $\cal{X}({\bbb{Q}})$ dense in the Euclidean topology in the set $\cal{X}({\bbb{R}})$?
We expect that the answer is yes.
[The construction above finds a double infinity of points in ${\bbb{Q}}^3$ at rational distance from the four vertices of the tetrahedron. If we suppose that the initial tetrahedron has rational edges, then we thus deduce infinitely many sets of five points in ${\bbb{Q}}^3$ where the ten mutual distances are rational. We take as an example the tetrahedron with vertices $$P_{1}=(0,0,0),\quad P_{2}=(1,0,0),\quad P_{3}=\left(\frac{11}{200},\frac{117}{800},0\right),\quad P_{4}=\left(\frac{7}{25}, \frac{63}{325}, \frac{21}{260}\right).$$ This is chosen as an example of a tetrahedron, discovered by Rathbun, where the edges, face areas, and volume are all rational. It corresponds to the first example in the list in Section D22 of Guy [@Guy]. The explicit parametrization as computed above takes several computer screens to display, so we do not present it. However, on computing specializations, the point with smallest coordinates (minimizing the least common multiple of the denominators of $x,y,z$) that we could find is $$\left( \frac{617}{4900}, \; \frac{2553}{63700}, \; \frac{3}{25480} \right),$$ which in fact lies within the tetrahedron. ]{}
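Both the rationality of the six edges of this tetrahedron and of the four distances from the quoted point can be checked exactly:

```python
from fractions import Fraction as F
from math import isqrt

def is_rational_square(q):
    # a reduced fraction n/d is a rational square iff n >= 0 and both n, d are perfect squares
    n, d = q.numerator, q.denominator
    return n >= 0 and isqrt(n)**2 == n and isqrt(d)**2 == d

def dist_sq(a, b):
    return sum((s - t)**2 for s, t in zip(a, b))

P = [(F(0), F(0), F(0)), (F(1), F(0), F(0)),
     (F(11, 200), F(117, 800), F(0)), (F(7, 25), F(63, 325), F(21, 260))]
Q = (F(617, 4900), F(2553, 63700), F(3, 25480))

# all six edges of the tetrahedron are rational ...
for i in range(4):
    for j in range(i + 1, 4):
        assert is_rational_square(dist_sq(P[i], P[j]))
# ... as are the four distances from the point Q found above
for v in P:
    assert is_rational_square(dist_sq(Q, v))
```

For instance, $|P_1P_3|=5/32$, $|P_1P_4|=7/20$, and the distance from $Q$ to $P_1$ is $16835/127400$.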
The fact in the above proof that the matrix $A=[a_{ij}]_{1\leq i,j\leq 3}$ is non-singular follows from the assumption that the points $P_0,P_1,P_2,P_3$ define a genuine tetrahedron. A question arises as to what can be said in the situation when $\op{det}A=0$. We need to consider two cases: where the rank $\op{rk}(A)$ is 2 or 1, corresponding respectively to the four points being coplanar, and the four points being collinear. Consider first the case of $\op{rk}(A)=2$. Note that we encounter this situation in section \[sec4\]. The vectors $P_{1}, P_{2}, P_{3}$ are linearly dependent, and without loss of generality we can assume that $P_{1}, P_{2}$ are linearly independent, so that $P_{3}=pP_{1}+qP_{2}$ for some $p, q\in{\bbb{Q}}$. It follows that the linear forms in $x, y, z$ from the system (\[generalsys\]) are linearly dependent. Let $A_{ij}$ be the $2\times 2$ matrix obtained from $A$ by deleting the $i$-th row and the $j$-th column. Then at least one of $A_{31}$, $A_{32}$, $A_{33}$ has non-zero determinant. Without loss of generality, suppose $\op{det}A_{31} \neq 0$. Solving the first two equations at (\[generalsys\]) with respect to $y, z$: $$\begin{aligned}
y=&-\frac{\op{det}A_{32}}{\op{det}A_{31}}x-\frac{1}{2\op{det}A_{31}}(a_{23} d_{01}^2-a_{13} d_{02}^2+\left(a_{23}-a_{13}\right) Q_0^2-a_{23} Q_1^2+a_{13} Q_2^2),\\
z=&-\frac{\op{det}A_{33}}{\op{det}A_{31}}x-\frac{1}{2\op{det}A_{31}}(a_{22} d_{01}^2-a_{12} d_{02}^2+\left(a_{22}-a_{12}\right) Q_0^2-a_{22} Q_1^2+a_{12} Q_2^2).\end{aligned}$$ Moreover, $Q_{0}, Q_{1}, Q_{2}, Q_{3}$ need to satisfy the equation $$\label{quadric1}
\cal{Q}:\;(p+q-1)Q_{0}^2-pQ_{1}^2-qQ_{2}^2+Q_{3}^2=d_{03}^2-pd_{01}^2-qd_{02}^2.$$ The quadric $\cal{Q}$ may be viewed as a quadric defined over the function field $\mathbb{K}:={\bbb{Q}}(\{a_{ij}:\;i=1,2,j=1,2,3\})$. The quadric $\cal{Q}$ contains the point at infinity $(Q_{0}:Q_{1}:Q_{2}:Q_{3}:T)=(1:1:1:1:0)$ and thus $\cal{Q}$ can be parameterized by rational functions, say $Q_{i}=f_{i}({\bf R}) \in \mathbb{K}({\bf R})$, where ${\bf R}=(R_{0},R_{1},R_{2})$ are (non-homogeneous) coordinates.
Moreover, the numerator of $f_{i}$ is of degree $\leq 2$ for $i=0,1,2$; and the same is true for the common denominator of $f_{i}, i=0,1,2$. Using this parametrization we compute the expressions for $y, z$. Next, substitute the computed values of $y, z$ and $Q_{0}$ into the equation $x^2+y^2+z^2=Q_{0}^2$. This equation is a quadratic equation in $x$ of the form $$C_{2}x^2+C_{1}x+C_{0}=0,$$ where $C_{i}\in\mathbb{K}({\bf R})$ for $i=0,1,2$. We arrive at the problem of finding rational points on the threefold $$\cal{X}:\;V^2=C_{1}^2-4C_{0}C_{2}=:F({\bf R})$$ defined over the field $\mathbb{K}$. The polynomial $F$ is of degree 6. However, one can check that with respect to each $R_{i}, i=0,1,2$, the degree of $F$ is 4, and thus $\cal{X}$ can be viewed as a hyperelliptic quartic (of genus $\leq 1$) defined over the field $\mathbb{K}({\bf R}')$, where ${\bf R}'$ is a vector comprising exactly two variables from $R_{0},R_{1}, R_{2}$. We thus expect that for most rational points $P_{1}, P_{2}, P_{3}$ with $P_{3}=pP_{1}+qP_{2}$, there is a specialization of $R_{0}, R_{1}$ (say), to rational numbers such that $\cal{X}_{R_{0}, R_{1}}$ represents a curve of genus one with infinitely many rational points. Tracing back the reasoning in this case we will get infinitely many rational points with rational distances to the points $P_{0}, P_{1}, P_{2}, P_{3}$.
What can be done in the case when $\op{rk}(A)=1$ (which corresponds to the points $P_0,P_1,P_2,P_3$ being collinear)? In order to simplify the notation, put $P_{1}=(a,b,c)$ and $d_{01}=d$. Without loss of generality we can assume that $P_{2}=pP_{1}, P_{3}=qP_{1}$ for some $p, q\in{\bbb{Q}}\setminus\{0\}$. Then the system (\[generalsys\]) comprises just one linear form in $x, y, z$ which needs to be represented by three non-homogeneous quadratic forms. More precisely, $$\label{sys3}
ax+by+cz=\frac{1}{2}(Q_{0}^2-Q_{1}^2+d^2)=\frac{1}{2p}(Q_{0}^2-Q_{2}^2+p^2d^2)=\frac{1}{2q}(Q_{0}^2-Q_{3}^2+q^2d^2).$$ Let $\cal{V}$ be the variety defined by the last two equations. After homogenization by $Q_{i}\mapsto Q_{i}/T$ and simple manipulation, we get $$\cal{V}:\; \
\begin{cases}
\begin{array}{lll}
Q_{2}^2=(1-p)Q_{0}^2+pQ_{1}^2+p(p-1)d^2T^2, \\
Q_{3}^2=(1-q)Q_{0}^2+qQ_{1}^2+q(q-1)d^2T^2.
\end{array}
\end{cases}$$ The point $(Q_{0}:Q_{1}:Q_{2}:Q_{3}:T)=(1:1:1:1:0)$ lies on $\cal{V}$ and can be used to find parametric solutions of the system defining $\cal{V}$. However, observe that any point which lies on $\cal{V}$ with $T\neq 0$ allows us to compute the value of $z$ from equation (\[sys3\]). This expression for $z$ depends on $x, y$, and substituting into the first equation at (\[generalsys\]), namely $x^2+y^2+z^2=Q_{0}^2$, we are left with one equation of the form $$\cal{W}:\;C_{0}x^2+C_{1}xy+C_{2}y^2+C_{3}x+C_{4}y+C_{5}=0,$$ where $C_{i}$ depends on $p,q,a,b,c$ and the solution of the system defining the variety $\cal{V}$. In general, $\cal{W}$ is a conic and thus has genus 0. Thus, provided that we can find a rational point on $\cal{W}$, we can find infinitely many rational points (in fact a parameterized curve) with rational distances to the four collinear points $P_{0}, P_{1}, P_{2}, P_{3}$. As example here, assume that $d=\sqrt{a^2+b^2+c^2}$ is a rational number. Then the variety $\cal{V}$ contains the rational line $$(Q_{0}:Q_{1}:Q_{2}:Q_{3}:T)=(u-d/2:u+d/2:u - (1/2 - p)d:u - (1/2 - q)d:1).$$ In this case (\[sys3\]) reduces to the one equation $ax+by+cz=d(d-2u)/2$. Solving for $z$, and performing the necessary computations, the equation for the quadric $\cal{W}$ takes the following form: $$V^{2}=b^2 + c^2 - d^2+4adX-4(a^2 + b^2 + c^2)X^2=-a^2+4adX-4d^2X^2=-(a-2dX)^2,$$ where $V=(2(b^2+c^2)y-b(d^2-2du-2ax))/(c(d-2u))$ and $X=x/(d-2u)$, the last identity following from the equality $a^2+b^2+c^2=d^2$. From the assumption on rationality of $d$, we can find $x, y, z$ in the following form: $$x=\frac{a(d-2u)}{2d},\quad y=\frac{b(d-2u)}{2d},\quad z=\frac{c(d-2u)}{2d},$$ with $$Q_0=\frac{d-2u}{2}, \quad Q_1=\frac{d+2u}{2}, \quad Q_2=d p-\frac{d-2u}{2}, \quad Q_3=d q-\frac{d-2u}{2},$$ giving rational solutions of the original system.
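The closed-form solution for the collinear case can be tested with any rational $(a,b,c)$ whose norm $d$ is rational; the sketch below takes the sample values $(a,b,c)=(1,2,2)$, $d=3$, and arbitrary rational $u$, $p$, $q$ (all chosen purely for illustration).

```python
from fractions import Fraction as F

a, b, c, d = F(1), F(2), F(2), F(3)       # a^2 + b^2 + c^2 = d^2, so d is rational
assert a**2 + b**2 + c**2 == d**2

for u, p, q in [(F(1, 5), F(2), F(-1)), (F(3), F(1, 2), F(5))]:
    # the point and distances given at the end of the section
    x, y, z = (a*(d - 2*u)/(2*d), b*(d - 2*u)/(2*d), c*(d - 2*u)/(2*d))
    Q0, Q1 = (d - 2*u)/2, (d + 2*u)/2
    Q2, Q3 = d*p - (d - 2*u)/2, d*q - (d - 2*u)/2
    # squared distances to P0 = (0,0,0), P1 = (a,b,c), P2 = p*P1, P3 = q*P1
    assert x**2 + y**2 + z**2 == Q0**2
    assert (x - a)**2 + (y - b)**2 + (z - c)**2 == Q1**2
    assert (x - p*a)**2 + (y - p*b)**2 + (z - p*c)**2 == Q2**2
    assert (x - q*a)**2 + (y - q*b)**2 + (z - q*c)**2 == Q3**2
```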
Guy op. cit. gives one parameterized family of tetrahedra which have rational edges, face areas, and volume. He also lists nine examples due to John Leech of such tetrahedra comprised of four congruent acute-angled Heron triangles appropriately fitted together. The six edges of the tetrahedron thus fall into three equal pairs. We discover that it is straightforward to write down an infinite family of such tetrahedra as follows.
If the Heron triangle has sides $p,q,r$, then the area and volume conditions for the tetrahedron become $$\begin{aligned}
(p+q+r)(-p+q+r)(p-q+r)(p+q-r)= & \square, \\
2(-p^2+q^2+r^2)(p^2-q^2+r^2)(p^2+q^2-r^2)= & \square.\end{aligned}$$ Using the Brahmagupta parameterization of Heron triangles, we set $$(p,q,r)=((v+w)(u^2-v w), \; v(u^2+w^2), \; w(u^2+v^2) ),$$ reducing the two conditions above to the single demand $$-(u^2-v^2)(u^2-w^2)(u^2-u(v+w)-v w)(u^2+u(v+w)-v w) = \square.$$ Setting $W=w/u$, this is equivalent to $$-(1-W^2) \left( \frac{u+v}{u-v} -W \right) \left( \frac{u-v}{u+v} + W \right) = \square.$$ This elliptic quartic has cubic form $$Y^2 = X(X + v^2(u^2-v^2))(X - u^2(u^2-v^2)).$$ Demanding a point with $X=2u v^2(u+v)$ gives $$2(3u-v)(-u+2v) = \square,$$ parameterized by $$(u,v)=( m^2+4, \; 3 m^2+2), \mbox{ with } w=\frac{(2m^2+3)(m^2+4)}{4m^2+1}.$$ This in turn leads to the tetrahedron with vertices $$\begin{aligned}
P_1= & (0, \; 0,\; 0), \\
P_2= & (10(m^4-1)(m^4+3m^2+1), \; 0, \; 0), \\
P_3= & \big(\frac{2(m^2-1)(m^2+4)(3m^2+2)^2}{5}, \frac{(m^2+4)(2m^2+3)(3m^2+2)(4m^2+1)}{5}, \; 0\big), \\
P_4= & \big( \frac{2(m^2-1)(2m^2+3)^2(4m^2+1)}{5}, \\
& -\frac{(2m^2+3)(2m^2-5m-2)(2m^2+5m-2)(3m^2+2)}{5}, \\
& \; 4(m^2-1)m(2m^2+3)(3m^2+2) \big);\end{aligned}$$ edge lengths $(p,q,r)$ given by $$\begin{aligned}
p & = 10(m^4-1)(m^4+3m^2+1), \\
q & = (m^2+4)(3m^2+2)(2m^4+2m^2+1), \\
r & = (2m^2+3)(4m^2+1)(m^4+2m^2+2);\end{aligned}$$ face areas given by $$(m^4-1)(m^2+4)(4m^2+1)(2m^2+3)(3m^2+2)(1+3m^2+m^4);$$ and volume equal to $$\frac{4}{3} m(m^2-1)(m^4-1)(m^2+4)(4m^2+1)(2m^2+3)^2(3m^2+2)^2(1+3m^2+m^4).$$
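These closed forms can be verified by exact rational arithmetic. The sketch below (our own check, not part of the original derivation) takes $m=2$, confirms that opposite edges are equal with lengths $(p,q,r)$, that each face is a Heron triangle whose area matches the displayed polynomial, and computes the rational volume from the scalar triple product.

```python
from fractions import Fraction as F
from math import isqrt

def rat_sqrt(fr):
    """Exact square root of a Fraction that is a rational square."""
    n, d = fr.numerator, fr.denominator
    rn, rd = isqrt(n), isqrt(d)
    assert rn * rn == n and rd * rd == d, "not a rational square"
    return F(rn, rd)

m = F(2)
P1 = (F(0), F(0), F(0))
P2 = (10 * (m**4 - 1) * (m**4 + 3*m**2 + 1), F(0), F(0))
P3 = (2 * (m**2 - 1) * (m**2 + 4) * (3*m**2 + 2)**2 / 5,
      (m**2 + 4) * (2*m**2 + 3) * (3*m**2 + 2) * (4*m**2 + 1) / 5, F(0))
P4 = (2 * (m**2 - 1) * (2*m**2 + 3)**2 * (4*m**2 + 1) / 5,
      -(2*m**2 + 3) * (2*m**2 - 5*m - 2) * (2*m**2 + 5*m - 2) * (3*m**2 + 2) / 5,
      4 * (m**2 - 1) * m * (2*m**2 + 3) * (3*m**2 + 2))

def dist(A, B):
    return rat_sqrt(sum((ai - bi)**2 for ai, bi in zip(A, B)))

p = 10 * (m**4 - 1) * (m**4 + 3*m**2 + 1)
q = (m**2 + 4) * (3*m**2 + 2) * (2*m**4 + 2*m**2 + 1)
r = (2*m**2 + 3) * (4*m**2 + 1) * (m**4 + 2*m**2 + 2)

# opposite edges are equal: every face is the same Heron triangle (p, q, r)
assert dist(P1, P2) == dist(P3, P4) == p
assert dist(P1, P3) == dist(P2, P4) == q
assert dist(P1, P4) == dist(P2, P3) == r

# Heron area of each face matches the stated polynomial
s = (p + q + r) / 2
area = rat_sqrt(s * (s - p) * (s - q) * (s - r))
assert area == (m**4-1)*(m**2+4)*(4*m**2+1)*(2*m**2+3)*(3*m**2+2)*(1+3*m**2+m**4)

# rational volume from the scalar triple product (P1 at the origin)
vol = abs(P2[0] * (P3[1] * P4[2] - P3[2] * P4[1])) / 6
print(p, q, r, area, vol)
```

At $m=2$ this gives the tetrahedron with edges $(4350, 4592, 4862)$, face area $9110640$ and volume $11224308480$.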
The unit cube {#sec6}
=============
Finding an infinity of points in ${\bbb{Q}}^3$, if indeed such exist, that lie at rational distance from the [*eight*]{} vertices $(i,j,k)$, $i,j,k=0,1$, of the unit cube seems to be an intractable problem. If we restrict attention to the plane $x=y$, we are aware of the following two points (equivalent under the symmetry $x \leftrightarrow 1-x$) where distances to the vertices of the unit square are rational, and distances to the two cube vertices $(1,0,1), (0,1,1)$ are rational: $$\label{pts6}
(x,y,z)=\left( \frac{31}{108}, \frac{31}{108}, \frac{1519}{1080} \right), \qquad \left( \frac{77}{108}, \frac{77}{108}, \frac{1519}{1080} \right).$$ The defining system of equations for this situation is $$\begin{cases}
\begin{array}{lll}
2 x^2 + z^2 & = & P^2, \\
2 x^2-2x+1+z^2 & = & Q^2, \\
2 (x-1)^2 +z^2 & = & S^2, \\
(1-x)^2+x^2+(1-z)^2 & = & T^2.
\end{array}
\end{cases}$$ Then $1+Q^2-2z=T^2$, so we obtain $$z^2 = 1/2 (1-8 m^2/t^2)(-1+2(1-m^2)^2/t^2), \qquad 1+(1+m^2)^2/t^2 - 2 z = T^2.$$ Equivalently, $$2(t^2-8 m^2)(-t^2+2(1-m^2)^2) = (t^2 + (1+m^2)^2 - U^2)^2,$$ where $U=Tt$, $Z=t^2 z$. A search over this surface up to a height of $5000$ resulted in discovering only the point $(m,t,U)=(-24,360,313)$ and symmetries, leading to the points at (\[pts6\]).\
\
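The two points at (\[pts6\]) are easy to confirm by exact rational arithmetic; a short check (our own) using Python's `fractions`:

```python
from fractions import Fraction as F
from math import isqrt

def rational_distance(P, V):
    """True iff the distance |P - V| is rational (exact arithmetic)."""
    r2 = sum((a - b) ** 2 for a, b in zip(P, V))
    n, d = r2.numerator, r2.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0),  # unit square
            (1, 0, 1), (0, 1, 1)]                        # two cube vertices
for x in (F(31, 108), F(77, 108)):
    P = (x, x, F(1519, 1080))
    assert all(rational_distance(P, V) for V in vertices)
print("six rational distances from both points")
```

For instance, the distance from $(31/108, 31/108, 1519/1080)$ to the origin is $527/360$.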
A point on the plane $x=1/2$ at rational distance from the vertices of the cube results in four pairs of equal distance, with defining equations $$\label{symsys}
\begin{cases}
\begin{array}{lll}
1/4+y^2+z^2 & = & P^2, \\
1/4+(1-y)^2+z^2 & = & Q^2, \\
1/4+y^2+(1-z)^2 & = & R^2, \\
1/4+(1-y)^2+(1-z)^2 & = & S^2,
\end{array}
\end{cases}$$ and we found no solution. However, if we ask only that [*three*]{} pairs of distances to the cube vertices be rational, rather than four, e.g. by considering the system defined by the first three equations from (\[symsys\]), then we are able to prove the following result.
Let $\cal{A}$ be the set of rational curves lying on the plane $x=1/2$ with the property that each rational point on a curve $A\in\cal{A}$ has rational distances to six vertices of the unit cube. Then $\cal{A}$ is infinite.
We consider only the system of equations defined by the first three equations from the system (\[symsys\]) above (other cases are treated in the same manner). The six distances now fall into three equal pairs, requiring $$y=(P^2-Q^2+1)/2, \quad z=(P^2-R^2+1)/2,$$ together with the equation which can be written in homogeneous coordinates in the following form: $$\label{PQRT}
\cal{V}:\;(P^2 - R^2)^2 + (P^2 - Q^2)^2 -2 (Q^2+R^2) T^2 +3 T^4 = 0.$$ We prove that the set of rational curves lying on $\cal{V}$ is infinite. Consider the intersection of $\cal{V}$ with the family of planes $L_{a}:\;T=a(P-R)$. Remarkably, the intersection $\cal{V}\cap L_{a}$ defines a singular curve, say $\cal{C}$, in the projective plane $\mathbb{P}^{2}({\bbb{Q}}(a))$, with singular points $[P:Q:R]=[1:\pm 1:1]$. In fact, the curve $\cal{C}$ is of genus 1. By homogeneity we can assume that $R=1$. Making a change of variables $$(P,Q)=(p+1,\;pq+1)\quad\mbox{with inverse}\quad (p,q)=\Big(P-1,\;\frac{Q-1}{P-1}\Big)$$ the (inhomogeneous) equation of $\cal{C}$ takes the form $p^2H(p,q)=0$, where $$H(p,q)=(2+3a^4-2(a^2+1)q^2+q^4)p^2+4(2-(1+a^2)q-q^2+q^3)p+4(q^2-2q-a^2+2).$$ In other words, the curve $\cal{C}$ is the set-theoretic union of the (double) line $p=0$ and the curve of degree 6, given by the equation $\cal{C}':\;H(p,q)=0$. The equation for $\cal{C}'$ can be rewritten as $$\cal{C}':\;W^2=(a^2-1)q^4-2(a^2-1)q^3-(2a^2-1)^2q^2+2a^2(3a^2-2)q+a^2(3a^4-6a^2+2),$$ where we put $W=\frac{1}{2}(q^4-2(a^2+1)q^2+3a^4+2)p+q^3-q^2-(a^2+1)q+2$. In order to guarantee the existence of rational points on $\cal{C}'$ (and hence on $\cal{C}$) we consider a quadratic base change $a=(t^2+1)/2t$. Then $a^2-1=((t^2-1)/2t)^2$ and thus the curve $\cal{C}'$ contains a ${\bbb{Q}}(t)$-rational point at infinity. The birational model $\cal{E}'$ of the curve $\cal{C}'$ is given by the equation in short Weierstrass form $\cal{E}':\;Y^2=X^3+AX+B$, where $$\begin{aligned}
A&=-108(13t^{16}-20t^{12}+78t^8-20t^4+13),\\
B&=864(23t^{24}-132t^{20}+129t^{16}-296t^{12}+129t^8-132t^4+23).\end{aligned}$$ The curve $\cal{E}'$ contains the point of infinite order $$Z=(12(2t^8+3t^6-2t^4+3t^2+2), \; 108t(t^2 + 1)(t^8 - 1)).$$ The point $2Z$ leads to a non-trivial curve lying on $\cal{V}$ (the equations for this curve are too unwieldy to present explicitly here), and correspond to the following $y,z$ satisfying the first three equations of our system: $$\begin{aligned}
y=&(t^{48}-8 t^{47}+20 t^{46}+8 t^{45}-24 t^{44}-1528 t^{43}+6684 t^{42}-4872 t^{41}-69302 t^{40}\\
&+96040 t^{39}+771532 t^{38}-2467368 t^{37}-4047800 t^{36}+22047704 t^{35}+12635044 t^{34} \\
& -107433944 t^{33} -23948593 t^{32} +342788016 t^{31}+24622088 t^{30}-780080048 t^{29} \\
& -638000 t^{28}+1324015696 t^{27} -37969832 t^{26} -1716035152 t^{25}+57538508 t^{24} \\
& +1716035152 t^{23}-37969832 t^{22} -1324015696 t^{21} -638000 t^{20} +780080048 t^{19} \\
& +24622088 t^{18}-342788016 t^{17}-23948593 t^{16}+107433944 t^{15} +12635044 t^{14} \\
& -22047704 t^{13}-4047800 t^{12}+2467368 t^{11}+771532 t^{10}-96040 t^9-69302 t^8 \\
& +4872 t^7+6684 t^6 +1528 t^5-24 t^4-8 t^3+20 t^2+8 t+1)/(2 t \Delta), \\
z&=(3 t^{48}-16 t^{47}+56 t^{46}-32 t^{45}+1096 t^{44}-5696 t^{43}+ 15928 t^{42}+11472 t^{41} \\
& +51710 t^{40}-551056 t^{39}+1282392 t^{38}+3181248 t^{37}-11188440 t^{36}-701152 t^{35} \\
& +39387992 t^{34}-55013168 t^{33} -75669523 t^{32}+272885472 t^{31}+75471984 t^{30} \\
& -744371648 t^{29}+210064 t^{28}+1377115648 t^{27} -116092816 t^{26} -1850031968 t^{25} \\
& +173321252 t^{24}+1850031968 t^{23}-116092816 t^{22} -1377115648 t^{21} +210064 t^{20} \\
& +744371648 t^{19}+75471984 t^{18}-272885472 t^{17}-75669523 t^{16}+55013168 t^{15} \\
& +39387992 t^{14} +701152 t^{13} -11188440 t^{12}-3181248 t^{11}+1282392 t^{10} \\
& +551056 t^9 +51710 t^8 -11472 t^7 +15928 t^6 +5696 t^5+1096 t^4 +32 t^3 +56 t^2 \\
& +16 t+3)/((t^2-1)\Delta),\end{aligned}$$ where $$\begin{aligned}
\Delta & =4(t-1)(t+1)(t^2+1)^2(t^8-4 t^7+10 t^6+12 t^5-14 t^4-12 t^3+10 t^2+4 t+1) \; \times \\
& (t^{16}-4 t^{14}+168 t^{12} -492t^{10}+718 t^8-492 t^6+168 t^4-4 t^2+1) \times (t^{16}+4 t^{14} \\
& -32 t^{13} +232 t^{12}+160 t^{11}-756 t^{10}-320 t^9 +1102 t^8+320 t^7-756 t^6-160 t^5 \\
& +232 t^4+32 t^3+4 t^2+1).\end{aligned}$$ Computing the points $mZ$ for $m=3,4,\ldots$ and the corresponding points on $\cal{C}$, we get infinitely many rational curves lying on $\cal{V}$; and the result follows.
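The key computational claim above, that $Z$ lies on $\cal{E}'$, can be checked with exact integer arithmetic (a quick sanity check of our own). Since both sides of $Y^2=X^3+AX+B$ are polynomials in $t$ of degree at most 24, verifying the identity at more than 24 integer values actually proves it.

```python
def z_on_curve(t):
    """Check Y^2 = X^3 + A*X + B at the point Z for an integer value of t."""
    A = -108 * (13*t**16 - 20*t**12 + 78*t**8 - 20*t**4 + 13)
    B = 864 * (23*t**24 - 132*t**20 + 129*t**16 - 296*t**12
               + 129*t**8 - 132*t**4 + 23)
    X = 12 * (2*t**8 + 3*t**6 - 2*t**4 + 3*t**2 + 2)
    Y = 108 * t * (t**2 + 1) * (t**8 - 1)
    return Y * Y == X**3 + A * X + B

# 80 sample values: far more than the degree bound, hence a proof
assert all(z_on_curve(t) for t in range(80))
print("Z lies on E' identically in t")
```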
We know (up to symmetry) precisely two points $(x,y,z) \in {\bbb{Q}}^3$, with $x \neq \frac{1}{2}$, $x \neq y$, where the distances to the vertices $(0,0,0)$, $(0,1,0)$, $(1,0,0)$, $(1,1,0)$ of the unit square are rational, and where the distance to a fifth vertex $(0,0,1)$ of the unit cube is also rational:
$x$ $y$ $z$ $d_1$ $d_2$ $d_3$ $d_4$ $d_5$
-------- --------- -------- --------- --------- --------- --------- ---------
77/108 41/27 -28/27 71/36 67/36 49/36 43/36 95/36
83/125 -49/500 -14/75 389/300 349/300 209/300 119/300 409/300
Table 3: points in ${\bbb{Q}}^3$ with five rational distances to vertices of the unit cube
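Both rows of Table 3 are again easy to confirm exactly (our own check); the five distances come out as the listed values, up to the ordering of the square's vertices:

```python
from fractions import Fraction as F
from math import isqrt

def exact_distances(P, verts):
    """Exact distances from P to each vertex; asserts all are rational."""
    out = set()
    for V in verts:
        r2 = sum((a - b) ** 2 for a, b in zip(P, V))
        n, d = r2.numerator, r2.denominator
        assert isqrt(n) ** 2 == n and isqrt(d) ** 2 == d
        out.add(F(isqrt(n), isqrt(d)))
    return out

verts = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0), (0, 0, 1)]
row1 = (F(77, 108), F(41, 27), F(-28, 27))
row2 = (F(83, 125), F(-49, 500), F(-14, 75))
assert exact_distances(row1, verts) == {F(k, 36) for k in (71, 67, 49, 43, 95)}
assert exact_distances(row2, verts) == {F(k, 300) for k in (389, 349, 209, 119, 409)}
print("both rows of Table 3 verified")
```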
[100]{}
T. G. Berry, [*Points at rational distance from the corners of a unit square*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. [**17**]{} (4) (1990), 505–529.
T. G. Berry, [*Points at rational distance from the vertices of a triangle*]{}, Acta Arith. [**62**]{} (4) (1992), 391–398.
A. Bremner, M. Ulas, [*Rational points in geometric progressions on certain hyperelliptic curves*]{}, Publ. Math. Debrecen [**82**]{} (3-4) (2013), 669–683.
C. W. Dodge, [*Problem 966*]{}, Math. Mag. [**49**]{} (1976), 43; partial solution [**50**]{} (1977), 166–167; comment [**59**]{} (1986), 52.
R. K. Guy, [*Unsolved Problems in Number Theory*]{}, 3rd edition, Springer, 2004.
A. Hurwitz, [*Über ternäre diophantische Gleichungen dritten Grades*]{}, Vjschr. naturforsch. Ges. Zürich [**62**]{} (1917), 207–229.
Yu. I. Manin, [*Cubic forms: Algebra, geometry, arithmetic*]{}, Second Edition, North-Holland Mathematical Library, vol. 4, North-Holland Publishing Co., Amsterdam, 1986. Translated from the Russian by M. Hazewinkel.
J. H. Silverman, [*The Arithmetic of Elliptic Curves*]{}, Springer-Verlag, New York, 1986.
Th. Skolem, [*Diophantische Gleichungen*]{}, Chelsea Publishing Company, New York, 1950.
M. Ulas, [*Rational points in arithmetic progressions on $y^2=x^n+k$*]{}, Can. Math. Bull. [**55**]{} (1) (2012), 193–207.
School of Mathematical and Statistical Sciences, Arizona State University, Tempe AZ 85287-1804, USA; email:
Jagiellonian University, Faculty of Mathematics and Computer Science, Institute of Mathematics, [Ł]{}ojasiewicza 6, 30 - 348 Kraków, Poland; email: [[email protected]]{}
[^1]: The research of the second author is supported by the grant of the Polish National Science Centre no. UMO-2012/07/E/ST1/00185
---
author:
- 'René D. Oudmaijer'
- 'A.M. Parr'
- 'D. Baines'
- 'J.M. Porter [^1]'
bibliography:
- 'mnemonic.bib'
- 'RenesRefs.bib'
date: 'Received .. ; accepted ..'
title: 'Sub-milliarcsecond precision spectro-astrometry of Be stars'
---
[The origin of the disks around Be stars is still not known. Further progress requires a proper parametrization of their structure, both spatially and spectrally. This is challenging as the disks are very small. ]{} [Here we assess whether a novel method is capable of providing these data. ]{} [ We obtained spectro-astrometry around the Pa$\beta$ line of two bright Be stars, $\alpha$ Col and $\zeta$ Tau, to search for disk signatures. The data, with a pixel-to-pixel precision of the centroid position of 0.3..0.4 milliarcsecond, are the most accurate such data to date. Artefacts at the 0.85 mas level are present in the data, but these are readily identified as they were non-repeatable in our redundant datasets. This illustrates the need to take multiple exposures to avoid spurious detections. ]{} [ The data are compared with model simulations of the spectro-astrometric signatures due to rotating disks around Be stars. The upper limits we find for the disk radii correspond to disk sizes of a few dozen stellar radii if the disks are in Keplerian rotation. This is very close to observationally measured and theoretically expected disk sizes, and this paper therefore demonstrates that spectro-astrometry, of which this is the first such attempt, has the potential to resolve the disks around Be stars. ]{}
Introduction
==============
For decades it had been surmised that Be stars are surrounded by disk-like structures. At first this was inferred by indirect means such as the doubly peaked H$\alpha$ emission line profiles [@struve_1931] and polarization [@Poeckert:1975]. This notion was confirmed only much later by direct, interferometric observations at radio wavelengths [@dougherty_1992]. Later, dedicated long-baseline optical and near-infrared (NIR) interferometry resolved the disks at selected baselines (e.g. @quirrenbach_1997 [@tycner_2004; @meilland_2007] - for a general review on Be stars see @porter_review). So far, relatively few Be stars have been studied in this manner. This is because the disks are small: even the largest observed disks are typically of order a few milliarcseconds (mas) in diameter, so the observations remain challenging.
In this paper we investigate the potential of spectro-astrometry to detect disks around Be stars. This technique is a powerful tool; it enables us to investigate small scale structures with a standard instrumental set-up. In addition, since data are taken at high spectral resolution, it also allows kinematical studies to be performed at a spectral resolution superior to that of interferometry. Spectro-astrometry is a proven method to study otherwise unresolved structures in longslit spectra. It has been used to study binaries (@bailey_1998 [@baines_2004; @baines_2006; @schnerr_2006]), outflows from young objects [@takami_2003], disks around young objects [@ponto_2008] and even made possible the discovery of bi-polar jets from Brown Dwarfs [@whelan_2005]. Conceptually, this technique is straightforward: it measures the relative spatial position of spectral features from a longslit spectrum. For example, the red- and blueshifted emission of a rotating disk will be located on opposite sides of the continuum. Even when spatially unresolved, the centroid position of the spectrum will be offset from the continuum, and this can be determined very accurately to sub-pixel values (see e.g. @bailey_1998). The method has been shown to detect binaries at separations of 0.1 arcsec in conditions where the seeing was in excess of 2 arcsec, while brightness differences between the binary components of up to 6 magnitudes have been observed as well [@baines_2006]. Observationally, it is a comparatively cheap method, requiring only a stable spectrograph and a digital detector. It can therefore be applied to large samples of objects.
As for example demonstrated by @takami_2003, the positional accuracy of the centroid mainly depends on the number of photons and the seeing and can be expressed as $ \sigma = 0.5 \times {\rm FWHM}
\times N^{-\frac{1}{2}} $, with the error $\sigma$ and full width half maximum of the profile (typically the seeing) expressed in arcsec or milliarcsec, and $N$ the number of photons. For shot-noise dominated statistics, $N^{-\frac{1}{2}}$ equates to the inverse signal-to-noise ratio (SNR) of the total spectrum. Therefore, the requirements are proper sampling, high SNR, and a narrow instrumental point spread function (i.e. good seeing). @baines_2006 achieved a positional accuracy, as measured from the root-mean-square (rms) variations in the position, of 2 mas in 2 arcsec seeing. The aim of the present study is to significantly improve on this statistic to investigate whether we can detect the milliarcsecond scale disks around Be stars.
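The photon budget implied by this scaling can be made concrete with a small sketch (our own, using illustrative numbers rather than the actual observational values):

```python
import math

def centroid_precision_mas(fwhm_mas, n_photons):
    """sigma = 0.5 * FWHM / sqrt(N), the expression given above."""
    return 0.5 * fwhm_mas / math.sqrt(n_photons)

# 0.5 arcsec (500 mas) seeing: photons needed per spectral bin for 0.3 mas
fwhm, target = 500.0, 0.3
n_needed = (0.5 * fwhm / target) ** 2          # invert the formula
assert abs(centroid_precision_mas(fwhm, n_needed) - target) < 1e-12
print(f"N ~ {n_needed:.2e} photons, i.e. SNR ~ {math.sqrt(n_needed):.0f}")
```

For shot-noise statistics this corresponds to an SNR of roughly 800 per bin, of the same order as the co-added SNR values quoted in Table 1.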
This paper is organized as follows. In Sec. 2 we describe the observations of two bright, nearby Be stars and the data reduction procedure. In Sec. 3 we present the results of the study, introduce the sub-milliarcsecond spectro-astrometry and discuss the results in terms of a simple model. We conclude in Sec. 4.
-------------- ------- --------- --------- -------------- -------------- ---------- ------ ----------------- --------------- --------- --
Object HD/HR Sp type [*V*]{} vsin$i$ [PA]{} exp time SNR Pa$\beta$ EW $\Delta \, v$ rms pos
(mag) (kms$^{-1})$ ($^{\rm o}$) (s) ([$\rm \AA$]{}) (kms$^{-1}$) (mas)
$\alpha$ Col 37795 B7IVe 2.64 176 90-270 8 x 6 1200 $-$8 134 0.35
1956 180-360 8 x 6 0.40
$\zeta$ Tau 37202 B4IIIpe 3 310 58-238 8 x 10 1500 $-$5 226 0.25
1910 148-328 8 x 10 0.30
\[table\]
-------------- ------- --------- --------- -------------- -------------- ---------- ------ ----------------- --------------- --------- --
Observations and Data Reduction
===============================
For this experiment we selected two Be stars that were bright, close-by and had a track record of strong hydrogen recombination line emission. These factors should ensure that they are surrounded by comparatively large disks. Indeed, $\zeta$ Tau had been measured to have an H$\alpha$ diameter of 7 mas (e.g. @tycner_2004). $\alpha$ Col has no published interferometric data; estimates indicate a larger line emitting region than that of $\zeta$ Tau [@dachs_1992]. For the choice of telescope, we had to trade off between excellent sampling of the spectro-astrometry and the choice of target line. H$\alpha$ may be expected to form in larger regions, but the near-infrared instrumentation described below provided excellent sampling. For this pilot study, we decided to push the best equipment available to us and observed Pa$\beta$ at 1.28$\mu$m.
The spectro-astrometric data were obtained in service mode with the Phoenix instrument [@hinklephoenix] mounted on 8m Gemini South in Chile during the night of December 19 (UT) 2004. Phoenix is a high resolution near-infrared spectrometer operating in the wavelength region 1-5$\mu$m. The target line was the Pa$\beta$ hydrogen recombination line, a strong line in a region of the spectrum that is relatively unaffected by telluric absorption.
The grating was set such that the 1.28 $\mu$m Pa$\beta$ line was in the centre. The detector was an Aladdin 1024$\times$1024 InSb array with a pixel size of 5.9$\times10^{-6} \, \mu$m (in wavelength, corresponding to 1.4 kms$^{-1}$) spectrally, resulting in an unvignetted wavelength coverage of 1300 kms$^{-1}$. The pixel size in the spatial direction was 85 milliarcsec, and the slit was 4 pixels wide. The seeing during the observations hovered between 0.40 and 0.55 arcsec as measured from the central parts of the spectra. The set-up ensured that both the spatial and spectral resolution elements were sampled by 4-5 pixels. This is much better than Nyquist sampling; such fine sampling is crucial when dealing with spectro-astrometry. Although not strictly necessary with such bright targets at this wavelength, the observations were done in the standard a-b-b-a nodding on the slit to remove sky emission.
In addition to the usual East-West and North-South slit positions, which ensure that the position angle of any extended material can be measured, we observed at the opposite angles as well. The rationale is that the observations have to be repeatable and spurious effects should be identified by multiple observations. Real effects are each other’s mirror image (one is looking at the object upside down, as it were), whereas instrumental artefacts would be present in the same direction on the array (see for more details @bailey_1998 and @baines_2006). In the case of $\alpha$ Col, the slit position angles (PA) were set at 0$^{\rm o}$, 90$^{\rm o}$, 180$^{\rm o}$, and 270$^{\rm o}$. $\zeta$ Tau was observed at 58$^{\rm o}$, 148$^{\rm o}$, 238$^{\rm o}$, and 328$^{\rm o}$. The choice of different PAs rather than the usual EW-NS settings was made so that the slit would be aligned with the disk resolved in the interferometric data of [@quirrenbach_1997]. However, they report a position angle of $-58^{\rm o}$, and the omission of the minus sign in our instrumental set-up means that the data are somewhat less efficient than they could have been. At each slit position, the objects were observed 4 times with the nodding procedure, and the total spectra consist of 16 integrations with exposure times of 6s each for $\alpha$ Col and 10s for $\zeta$ Tau respectively. The overhead associated with rotating the slit was small and the total on-target time was less than 45 minutes in both cases.
The data were reduced in a standard manner for optical data using both the IRAF [@iraf] and Starlink software packages. Dark frames were subtracted from the original frames, which were then divided by a normalized flatfield. The intensity spectra were extracted and the individual spectra co-added to arrive at the total spectra that are discussed in the remainder of this paper. The observational details and some derived parameters are summarised in Table 1. The wavelength calibration was performed by identifying telluric absorption lines measured from the catalogue by @hinkle_1995 and finding the dispersion of the spectra. The resulting wavelength scale should be accurate to within 0.25 kms$^{-1}$. The FWHM of telluric lines that were assumed to be unresolved was measured to be in the range of 6..6.5 kms$^{-1}$, in agreement with the expected resolution $\lambda / \Delta \lambda \sim 50,000$.
Spectro-astrometric information was extracted from the 2-dimensional longslit data by fitting a Gaussian profile to the stellar flux at each pixel in the spatial direction. By visually inspecting the data we confirmed that the Gaussians are a good representation of the point spread function. However, as @porter_2004 pointed out, the precise shape of the fitting function is not very critical. The positions of the center were recorded as a function of pixel number, and later put on a wavelength scale.
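The extraction step is simple enough to sketch. The version below is our own minimal illustration with synthetic data: it uses a flux-weighted centroid rather than the paper's Gaussian fit (the two agree for a symmetric profile, and, as noted above, the precise fitting function is not critical), and all numbers are illustrative.

```python
import math

PIXSCALE_MAS = 85.0        # spatial pixel scale quoted above

def spatial_profile(n_rows, centre, sigma=3.0):
    """Noiseless Gaussian cut across the slit at one wavelength pixel."""
    return [math.exp(-0.5 * ((r - centre) / sigma) ** 2) for r in range(n_rows)]

def centroid(column):
    """Flux-weighted centre of a spatial cut, in pixels."""
    return sum(r * v for r, v in enumerate(column)) / sum(column)

# synthetic longslit frame: the trace centre drifts by +/- 0.01 pixel
true_centres = [32 + 0.01 * math.sin(2 * math.pi * i / 200) for i in range(200)]
frame = [spatial_profile(64, c) for c in true_centres]   # one column per pixel
position_spectrum = [centroid(col) * PIXSCALE_MAS for col in frame]

err = max(abs(m - c * PIXSCALE_MAS)
          for m, c in zip(position_spectrum, true_centres))
assert err < 1e-6   # the 0.85 mas drift is recovered essentially exactly
print(f"max centroid error on noiseless data: {err:.1e} mas")
```

With noise added, the recovered precision degrades towards the $0.5\,{\rm FWHM}/\sqrt{N}$ limit discussed in the introduction.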
Four position spectra were obtained at every position angle. The individual traces had different shapes on the larger (more than tens of pixels) scales. This is presumably because the traces were recorded at different locations on the array as a result of the nodding procedure. To eliminate these large scale fluctuations, while at the same time preserving the smaller scale properties, the position spectra were fitted by a high order polynomial. The position data taken at opposite position angles (0-180$^{\rm o}$ etc.) were subtracted from each other to minimize any remaining instrumental effects. Prior to combining, all eight traces per orientation were visually inspected in order to identify and remove outlying data. For both orientations of $\alpha$ Col, one trace each was discarded; for $\zeta$ Tau all traces at a PA of 58$^{\rm o}$ were retained, while 2 of the 8 traces perpendicular to this were excluded.
The precision in determining the photo-centre of the longslit spectra as measured from pixel-to-pixel deviations from the mean position was of order 0.3 mas (see Table 1). In terms of pixel-to-pixel statistics, these are the most precise such data ever published. However, in the current data set we also see variations, unrelated to the pixel-to-pixel variations, over larger scales, with excursions up to 1 mas. The largest such multi-pixel variations can be seen in the East-West direction for $\alpha$ Col (see Figure \[specast\]). A sine-wave type feature spanning many pixels with an amplitude of 0.85 mas, almost 3$\sigma$, appears visible. The dip corresponds to the blue peak of the Pa$\beta$ line profile, but the two local “maxima” around the dip do not correspond to any obvious feature in the emission line. Further inspection of the data revealed that the individual traces taken at opposite angles also show this pattern. Significantly however, the minima and maxima occur at different wavelengths, and are therefore not reproducible. A similar behaviour is also found in the data of $\zeta$ Tau, but in this case it is cancelled out after combination of the individual traces. We therefore conclude that the traces in $\alpha$ Col that seem to display a “periodic” signal with an amplitude of less than a hundredth of a pixel are artefacts, as they are not reproducible in our, redundant, data. The reasons for this are unclear and future observations are planned to investigate this issue. In summary, the data have a statistical precision of 0.3 mas, while larger scale artefacts of order slightly less than 1 mas are identified.
Results
========
The Pa$\beta$ lines
-------------------
The results for $\alpha$ Col and $\zeta$ Tau are plotted in Fig. \[specast\]. The top panels display the total intensity spectra while the bottom two panels represent the spectro-astrometry at the two orientations respectively. Let us first discuss the Pa$\beta$ profiles. Both stars have doubly peaked emission lines. $\alpha$ Col has a regular, symmetric line profile, while the blue peak of $\zeta$ Tau’s emission is much stronger than the red peak. The Equivalent Widths ($W_{\lambda}$) are $-$8 and $-$5 $\rm \AA$ and the line peak separations are 134 and 226 kms$^{-1}$ for $\alpha$ Col and $\zeta$ Tau respectively.
The emission at the wavelengths covered by the Pa$\beta$ line is due to three components: firstly, the emission line itself; secondly, continuum free-free emission; and thirdly, the stellar continuum, which is diluted by the underlying, photospheric, Pa$\beta$ absorption line. We can estimate the contribution of the free-free emission to the total flux. At shorter wavelengths it is fairly low, as the continuum excess due to free-free emission increases towards longer wavelengths (see e.g. the in-depth study by @dougherty_1991). For $\zeta$ Tau, @dougherty_1991 derive an excess of 0.1 magnitude at 1.25 $\mu$m, which roughly corresponds to 10% of the emission being due to the disk. @dougherty_1991 did not include $\alpha$ Col in their sample. @dachs_1988 observed the object in the optical and near-infrared one month apart and we derive a quasi-simultaneous [*V$-$J*]{} colour of $-0.18$. According to @koornneef_1983, [*(V$-$J$)_{0}$*]{} for a B7V object is $-$0.25 mag (he does not list values for sub-giants with luminosity class IV). Taken at face value, we would thus obtain a non-physical, negative excess. However, the difference is comparable to the uncertainty in spectral class and photometric error bars, and illustrates that the excess continuum emission at Pa$\beta$ must be very small. The depth of the underlying photospheric Pa$\beta$ absorption can be assessed using the data of @wallace_2000. They present medium resolution spectroscopy of 88 MK spectral standards, amongst which a number of B-type stars. The stars in their sample with spectral types closest to ours, B7III, B7V and B3IV (HR 1791, HR 3982 and HR 6588 respectively), span a wide range in spectral type. The central dip ranges from 0.6..0.65 of the continuum for the narrower lines (B3IV, B7III) to 0.78 (B7V) for the broader line.
At $-$100 and +100 kms$^{-1}$ from the line center, where the line emission peaks are found, the absorption for all three objects reaches down to 0.85..0.9 of the continuum, i.e. a depression of 10..15%.
Hence, given that the spectral types of our target objects, B4III and B7IV, are similar to those sampled by these MK standard stars, we assume that the photospheric absorption line underneath the line peaks is about the same fraction of the line free continuum. As the peak line emission is roughly twice that of the stellar continuum (1.98 and 1.84 for $\alpha$ Col and $\zeta$ Tau respectively), we find that the emission from the stellar photosphere and the hydrogen recombination line are approximately equal.
The spectro-astrometry
----------------------
The spectro-astrometric traces, at both orthogonal orientations, are shown in the middle and bottom panels of Fig. \[specast\]. The traces are normalized to the stellar continuum and the deviations from it are expressed in milliarcseconds. Several things are immediately apparent from the data. Firstly, the rms variations around the mean position are much smaller than 0.5 mas (Table 1). Thus, the data have a sub-milliarcsecond precision, and therefore constitute the most accurate spectro-astrometry of any object hitherto observed. As we are exploring unknown territory, it may not come as a surprise that we encounter new problems in the data. Besides the multi-pixel variations that stretch to slightly less than 1 mas in $\alpha$ Col, the second obvious finding in the spectro-astrometric traces is that the telluric features show strong, in fact the strongest, signals. These narrow absorption lines are unresolved and we suspect that pixellation effects give rise to these features. Checks on data taken at opposite angles revealed that the amplitude of the excursion varies with the location of the longslit spectrum on the array. It is always in the same direction, and is therefore an artefact. We have kept these features in the figures for illustration. Finally, there is no obvious excursion in position space associated with the Pa$\beta$ profiles.
We conclude that the present data show no evidence for significant features in the positional spectra down to sub-milliarcsecond levels. This also illustrates the need for multiple exposures to ensure consistency and to avoid artefacts in the data being interpreted as real. In the following we investigate the implications for the presence of disks around both objects and for the presence of binary companions.
On the stars’ binarity and their disks’ sizes
---------------------------------------------
Based on radial velocity measurements, $\zeta$ Tau is reported to be a single-lined, close binary [@jarad_1987]. The separation is 5 mas and, from the mass function, the primary is at least 5 magnitudes brighter than the secondary [@tycner_2004]. This large magnitude difference combined with the small separation makes detection of the secondary very difficult (see also @baines_2006), and explains why we do not see a binary signature in the data of $\zeta$ Tau. $\alpha$ Col has not been reported to be a binary, and the data do not show the presence of a binary companion either.
We can estimate the size of the Pa$\beta$ line emitting region from the spectrum. Most methods such as the $W_{\lambda}$ of the emission employ the entire line profile to do this (e.g. @grundstrom_2006); even interferometrically determined sizes are based on the total line emission. Here, we exploit the fact that we have spatial information available at high spectral resolution. If the disks are in Keplerian rotation, we can compute the distance from the star of the bulk of the orbiting material using its rotation velocity. Using values for the masses and radii for the spectral types (taken from @str_kur, and interpolated between B3 and B5 to arrive at a value for $\zeta$ Tau), we computed the Keplerian rotation speeds at the stellar surface (457 and 489 kms$^{-1}$ for $\alpha$ Col and $\zeta$ Tau). The observed velocities at the line peaks (half the peak separation in Table 1) combined with the distances to the objects provided by Hipparcos, yield the distance of the line peak forming regions from the star of 8.8 mas ($\alpha$ Col) and 3.9 mas ($\zeta$ Tau). However, the observed velocities are smaller than the true value by a factor sin$i$, and the distance from which the emission originates is smaller by (sin$i$)$^{2}$ (for Keplerian rotation). Taking the inclinations derived by @fremat_2005 [45$^{\rm o}$ and 66$^{\rm o}$] we obtain 4.4 and 3.2 mas for $\alpha$ Col and $\zeta$ Tau respectively. The amplitude of the excursion in the positional data associated with these separations can be calculated by simulating the data at the line-peak, convolving them with the seeing and determining the spectro-astrometric trace (cf. @baines_2006). In both cases, the photo-center is precisely halfway because the line peaks are as bright as the star. We therefore would expect, based on the above, that in the present data line excursions of up to 2.2 mas ($\alpha$ Col) and 1.6 mas ($\zeta$ Tau) can be observed.
This is the maximum observable separation, as the above computation assumes that all line emission arises from a thin ring with a rotation speed corresponding to the line peak.
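The ring-radius estimate can be reproduced in a few lines. The stellar parameters below are our own illustrative choices (a mass appropriate to a late-B subgiant and a Hipparcos-like parallax), not values quoted in this paper, so the result only needs to agree with the few-mas scale derived above.

```python
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
M_SUN = 1.989e30              # kg
PC_M = 3.0857e16              # m
MAS_PER_RAD = 180.0 / math.pi * 3.6e6

def peak_ring_radius_mas(m_star_msun, v_peak_kms, incl_deg, parallax_mas):
    """Angular radius of the Keplerian ring whose projected rotation speed
    equals the observed line-peak velocity (v_true = v_obs / sin i)."""
    v_true = v_peak_kms * 1e3 / math.sin(math.radians(incl_deg))
    r = G * m_star_msun * M_SUN / v_true ** 2     # Kepler: v^2 = GM/r
    dist = PC_M * 1e3 / parallax_mas
    return r / dist * MAS_PER_RAD

# alpha Col: v_peak = 134/2 km/s, i = 45 deg (from the text);
# assumed M ~ 4.5 Msun and parallax ~ 12.4 mas (illustrative)
theta = peak_ring_radius_mas(4.5, 67.0, 45.0, 12.4)
print(f"line-peak ring radius ~ {theta:.1f} mas")   # ~5.5 mas with these inputs
```

This is consistent with the 4.4 mas quoted above; the exact number depends on the adopted mass and parallax.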
![Toy model predictions of the spectro-astrometric signature of a disk surrounding a Be star with properties similar to the target objects. The top panel shows the flux spectra, the bottom panel shows the predicted astrometry for the set-up used in our observations for three different cases, an outer radius of 70 stellar radii (solid line), 30 stellar radii (short-dashed), 10 stellar radii (long-dashed). \[specastmod\]](./thick.ps){width="50.00000%"}
Not all emission at the line peak comes from a single ring however, as the projected velocities for smaller, faster, rings will be observed as well at the observed Doppler shifts. To assess this effect we performed some simple model calculations. We assume the star to be surrounded by a geometrically thin, Keplerian rotating disk reaching onto the stellar surface, with the line flux per unit area following a simple power law in radius. The main input parameters of the model are the stellar radius, rotation speed, the inclination and emission line strength (which are all fairly well known), the remaining free parameters are the disk’s outer radius and the exponent of the power law. The model produces a two dimensional position-velocity diagram, which is binned up and smoothed to represent our pixel sizes of 5 kms$^{-1}$, 85 mas and seeing of 500 mas respectively. From the resulting data, the spectro-astrometry is then measured. Changing the outer radius of the model disk increases the spectro-astrometric excursions, which then occur at lower velocities, as expected from Keplerian rotation. A stronger line flux will yield a larger spectro-astrometric excursion because the photo-centre shifts more in the direction of the emission line.
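A minimal version of this toy model can be sketched as follows. The grid resolutions are illustrative choices, and the seeing convolution and continuum dilution of our actual calculation are omitted for brevity, so the excursions come out in stellar radii for the line-emitting gas alone.

```python
import numpy as np

def disk_spectroastrometry(v_star=475.0, incl_deg=55.0, r_out=70.0,
                           p=2.0, n_r=400, n_phi=720, dv=5.0):
    """Photo-centre vs velocity for a geometrically thin Keplerian disk.
    v_star: Keplerian speed at the stellar surface (km/s); radii in stellar
    radii; line flux per unit area ~ r**-p.  Returns the velocity channels
    (km/s) and the flux-weighted offset along the major axis per channel."""
    i = np.radians(incl_deg)
    r = np.linspace(1.0, r_out, n_r)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    R, PHI = np.meshgrid(r, phi)
    v_los = v_star / np.sqrt(R) * np.cos(PHI) * np.sin(i)  # projected Keplerian speed
    flux = R**-p * R                                       # area element ~ r dr dphi
    x = R * np.cos(PHI)               # on-sky offset along the disk major axis
    edges = np.arange(v_los.min(), v_los.max() + dv, dv)
    centers = 0.5 * (edges[:-1] + edges[1:])
    photo = np.full(centers.size, np.nan)
    for k in range(centers.size):
        sel = (v_los >= edges[k]) & (v_los < edges[k + 1])
        if flux[sel].sum() > 0:
            photo[k] = np.average(x[sel], weights=flux[sel])
    return centers, photo
```

In the full calculation the stellar continuum is added and the position-velocity diagram is convolved with the seeing before the trace is measured, which dilutes these excursions to the milliarcsecond level discussed below.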
In the extreme case of optically thick emission, the powerlaw will have a flat slope and the emerging line flux will be dominated by the outer parts of the disk. In the other extreme, that of optically thin emission, the power law depends on the density distribution. For an isothermal, flaring Keplerian disk, the surface density, and by implication the flux per unit area, has an $r^{-2}$ powerlaw dependence (cf. @carciofi_2006). As a consequence, the line emission moves towards the inner parts of the disk. The main positional excursions will thus occur at higher velocities, closer to the star and therefore be smaller than in the optically thick case. Changing the exponent of the powerlaw also affects the shape of the emission line. A shallower exponent, more representative of the optically thick case, puts more flux at lower velocities, while a steeper power law, closer to the optically thin situation, results in narrower lines, with the line peak at higher velocities.
In general though, unless the exponent gets too steep, the excursions are of similar magnitude when the same line-to-continuum ratio is simulated. We performed a large parameter study, but for the purposes of this paper we will restrict ourselves to one illustrative example representative of both objects. We set the line-to-continuum ratio to be 2 (as per the spectra in Fig. \[specast\] and derived above), use a stellar rotational velocity of 475 kms$^{-1}$ (halfway between the values for the two objects), an inclination of 55$^{\rm o}$ (also roughly halfway between the two objects) and a stellar radius of 0.2 mas. For the outer radius of the disk we take 10, 30 and a maximum of 70 stellar radii (cf. @marlborough_1997). The resulting data are shown in Fig. \[specastmod\]. The top panel presents the resulting model line profiles. As expected, the lines are double-peaked, with peak separations that are larger for smaller disk radii. The separations range from $\sim$110 kms$^{-1}$ for the largest disk to 160 and 270 kms$^{-1}$ for the two smaller disks, respectively. This trend is explained by the fact that these velocities correspond to the Keplerian rotation speeds at the maximum possible radii, where most of the line flux originates if the emission is optically thick. The most notable differences between the model line profiles and the observed ones are the relative narrowness of the line peaks and the small amount of emission at low projected velocities. This is probably because the model disks are assumed to be geometrically thin, resulting in low projected emitting surface areas at low velocities. In reality the disks are flared, and the emitting area will therefore be much larger, in particular at these low velocities. In addition, line broadening is not taken into account here. For a proper treatment, radiative transfer models such as those by @carciofibjorkman_2006 will be an excellent tool. 
Using such advanced models is beyond the scope of this paper, in which we wish to obtain a rough figure for the excursions only.
We also note that the blue peak of $\zeta$ Tau is much stronger than the red peak. This is most likely due to one-armed oscillations in its disk, which give rise to such asymmetry, as for example detected for $\zeta$ Tau at the 0.7 mas level by @vakili_1998. Accordingly, the positional excursion of the blue peak would be larger than that of the red one, but its signature would not affect the overall appearance of the spectro-astrometry.
Moving to the spectro-astrometric signature in the simulations, we find that the positional excursions are smaller for smaller disks (1.5, 0.6 and 0.2 mas respectively), for the same reason that the peak separations are larger: most flux comes from the outer parts, and the largest model disk will naturally result in the largest detection. As the orientations of the disks are not aligned with the slit positions, we might in reality observe excursions smaller by up to a factor of 0.7 due to projection effects.
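The projection effect is simple geometry: only the component of the excursion along the slit is recorded. A sketch follows; reading the factor of 0.7 as the worst-case 45$^{\rm o}$ misalignment for two orthogonal slit positions is our interpretation, not stated explicitly above.

```python
import math

def slit_projected(excursion_mas, misalignment_deg):
    """Component of a photo-centre excursion along a slit rotated by
    misalignment_deg from the disk major axis."""
    return excursion_mas * abs(math.cos(math.radians(misalignment_deg)))

# With two orthogonal slits the worst case per slit is 45 deg, giving a
# factor cos(45 deg) ~ 0.71, i.e. the "factor of 0.7" quoted above.
print(slit_projected(1.5, 45.0))
```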
The bottom line of the exploratory model simulations is that disks with a size of order 70 stellar radii and a predicted excursion of 1.5 mas could just about have been observed at the 3-4 $\sigma$ level, while the 30 stellar radii disk would have been a 2 $\sigma$ detection. These limits are approaching the real sizes of the disks. According to @tycner_2004, the disk of $\zeta$ Tau has a diameter of 7 mas, and thus a radius of $\sim$18 stellar radii, whereas $\alpha$ Col's disk is assumed to be larger, mainly because of its larger line EW [@dachs_1992]. It is clear that our pixel-to-pixel precision, of order 0.35 mas, is reaching that needed to detect Keplerian disks. It will thus be possible to measure the disks and their kinematics, and therefore to properly constrain the disks, with future data. In order to achieve this, observations should result in a better SNR, or be targeted at stronger emission lines (in terms of line-to-continuum ratio), either from stars with larger disks or from intrinsically stronger lines at different wavelengths such as H$\alpha$.
High precision data such as these, combined with the latest radiative transfer models [@carciofibjorkman_2006], will put us in a position to fully constrain the kinematical structure of Be star disks, and reveal their origin.
Conclusion
==========
In conclusion, we employed high precision spectro-astrometry to assess the potential of the method to detect the disks around two Be stars. We achieved rms variations in the position spectra of order 0.3 mas, the highest precision spectro-astrometric data in the literature. We did not detect any features related to the Pa$\beta$ lines, but found artefacts at the 1 mas level. These were easily identified as they were non-repeatable in our redundant datasets.
Simple, robust estimates of the size of the line forming regions showed that the current set-up was on the limit of detecting the disks, if they are in Keplerian rotation. Indeed, the method has the potential to distinguish between Keplerian rotating disks and angular momentum conserving disks, which would be much smaller. This study, the first of its kind, has shown that the method has great potential for probing small, sub-milliarcsecond scale structures. Future observations are planned with an improved set-up, even higher SNR, and possibly the intrinsically brightest hydrogen recombination lines, which should reveal even larger disk signatures in the data.
Acknowledgments {#acknowledgments .unnumbered}
---------------
RDO is grateful for the support from the Leverhulme Trust for awarding a Research Fellowship. AMP acknowledges support from The Rothschild Community of Excellence Programme. This work is based on data from the Phoenix infrared spectrograph, developed and operated by the National Optical Astronomy Observatory. The observations are from programme GS-2004B-Q-92 obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership.
\[last page\]
[^1]: deceased
|
---
abstract: 'In this paper we analyze the tensor field (reducible gauge) theories in the context of very special relativity (VSR). Particularly, we study the VSR gauge symmetry as well as VSR BRST symmetry of Kalb-Ramond and Abelian 3-form fields involving a fixed null vector. We observe that the Kalb-Ramond and Abelian 3-form fields and the corresponding ghosts get masses in the VSR framework. The effective action in the VSR-type axial gauge is greatly simplified compared with the VSR-type Lorenz gauge. Further, we quantize these models using a Batalin-Vilkovisky (BV) formulation in VSR.'
author:
- Sudhaker Upadhyay
title: '**Reducible gauge theories in very special relativity**'
---
Introduction
============
Special relativity (SR) postulates that the laws of physics share many of the symmetries of Maxwell's equations and is valid at the largest energies attainable today [@pi]. The maximal symmetry group of Maxwell's equations is the conformal group $SU(2,2)$. However, the existence of particles with mass constrains the spacetime symmetry to be no greater than the Lorentz group together with spacetime translations (i.e. the Poincaré group). The Poincaré group is thus proposed as the symmetry of nature by SR principles. The possible violation of Lorentz symmetry has received much attention, with new experimental and theoretical challenges. For instance, many theories of quantum gravity predict the breaking of some symmetry groups [@alf]. Experiments and astrophysical observations set precise limits upon the parameters describing these violations. Spontaneous breaking of Lorentz symmetry is assumed in extensions of the minimal standard model [@col; @f]. On the other hand, nondynamical tensor fields can be introduced to determine the preferred directions that break Lorentz symmetry. Investigations of this kind include the Myers-Pospelov model [@rc] and QED in a constant axial vector background [@aa]. Lorentz-violating theories could emerge as effective theories from a more fundamental scheme which is invariant under VSR groups but not under the full Poincaré group; this is addressed in Ref. [@co].
VSR is the set of subgroups of the Poincaré group which preserve the constancy of the velocity of light. In this framework, it is proposed that the laws of physics are invariant under subgroups of the Lorentz group SO(1,3) (which has six parameters) rather than the full Lorentz group [@co; @co1]. The two most interesting subgroups of the Lorentz group, the four-parameter $SIM(2)$ group and the three-parameter $HOM(2)$ group, have the property of rescaling a fixed null vector. The remarkable property of these subgroups is that when they are supplemented with $T$, $P$, or $CP$ the whole Lorentz group is recovered. VSR has been studied in several respects. For instance, it has been generalized to include supersymmetry [@co2; @vo]. It admits the generation of a neutrino mass without lepton number violation or sterile neutrinos [@co1]. Further, it has been discussed in the case of curved spaces [@7; @8], noncommutativity [@11], the cosmological constant [@12], dark matter [@13], cosmology [@14], Abelian gauge fields [@15], Born–Infeld electrodynamics [@16], and non-Abelian gauge fields [@17]. In spite of this considerable volume of research on VSR, higher-form gauge theories in VSR are still unstudied. A basic motivation of this paper is to bridge this gap.
Higher-form gauge theories are generalizations of electromagnetism in which the vector potential, a one-form, is replaced by exterior forms of higher degree. Higher-form gauge theories are an important ingredient in supergravity and superstring theory. They are also important for other branches of physics [@green; @pol; @sud]. For instance, the low energy excitations in string theories contain states described by antisymmetric tensor fields [@h; @i]. Antisymmetric tensor fields help to describe various supergravity models. The Abelian rank-2 tensor field is relevant to classical string theories [@a], to the theory of vortex motion in an irrotational, incompressible fluid [@b; @c], and to the dual formulation of the Abelian Higgs model [@d; @e]. Such fields are also an ingredient of supergravity multiplets [@g] and play a role in anomaly cancellation of certain superstring theories [@degu]. The 3-form gauge fields are also important for supergravity theories. For instance, $N=1$ supergravity theory in $d=11$ dimensions includes a massless 3-form gauge field, and the study of solutions of this supergravity theory shows that there are two-branes that indeed provide sources for 3-form fields [@st].
In this paper we study the Kalb-Ramond (2-form) and Abelian rank-3 tensor (3-form) gauge theories in VSR. Specifically, we derive the gauge-invariant action for such theories in VSR. We notice that such an action is not invariant under the usual gauge transformation. However, it is invariant under a modified gauge transformation written in terms of the wiggle operator. The equations of motion for the Kalb-Ramond field are derived, from which the field is seen to acquire a mass in VSR. A gauge theory cannot be quantized without choosing an appropriate gauge. Therefore, we choose a VSR-type Lorenz gauge to quantize the theory. This gauge is incorporated in the theory by adding a corresponding gauge-fixing term to the gauge-invariant action. To keep the theory physically equivalent, the gauge-fixing term induces a ghost term in the path integral. A remarkable feature of this study is that the ghost fields and ghost-of-ghost fields also get mass in VSR. Since all these fields acquire a common mass, this cannot be used as an alternative to the Higgs mechanism. Further, we compute the BRST symmetry for the Kalb-Ramond theory in VSR. To quantize the theory a VSR-type axial gauge is also chosen, which has a simpler form than the VSR-type Lorenz gauge. We also quantize the theory utilizing the BV formulation, where we derive the extended quantum action of the model satisfying the quantum master equation. Further, we study an Abelian 3-form gauge theory in VSR. The 3-form gauge field together with the various ghost fields also acquires mass in the VSR framework. We then perform the BRST quantization of this model in VSR. We also shed light on the Abelian 3-form gauge theory in the BV formulation, with a similar outcome as in the case of the 2-form gauge theory.
The presentation of this paper is as follows. First we discuss the BRST quantization of an Abelian 2-form gauge theory in VSR in section II. In this section the BV formulation prospects are also studied. In section III, we analyze the Abelian 3-form gauge theory in VSR. The BRST quantization and BV formulation are also studied in this section. Finally, we conclude the results with remarks on future work in the last section.
Abelian 2-form fields in VSR
============================
The Maxwell theory is modified in VSR; the same must happen to the Abelian rank-2 tensor (Kalb-Ramond) field theory. To see this, we start with the field-strength tensor for the Kalb-Ramond field $B_{\mu\nu}$ in VSR, involving a fixed null vector $n_\mu$, as $$\begin{aligned}
F_{\mu\nu\rho}&=&\partial_\mu B_{\nu\rho}+\partial_\nu B_{\rho\mu}+\partial_\rho B_{\mu\nu}
+\frac{1}{2}m^2\left[n_\mu \frac{1}{(n\cdot\partial)^2} n^\alpha(\partial_\nu B_{\rho\alpha} +\partial_\rho B_{\nu\alpha}) \right.\nonumber\\
&+&\left. n_\nu \frac{1}{(n\cdot\partial)^2} n^\alpha(\partial_\rho B_{\mu\alpha} +\partial_\mu B_{\rho\alpha})+n_\rho \frac{1}{(n\cdot\partial)^2} n^\alpha(\partial_\mu B_{\nu\alpha} +\partial_\nu B_{\mu\alpha})\right].\label{fi}
\end{aligned}$$ The null vector $n^\mu$ transforms multiplicatively under a VSR transformation, so that the terms containing ratios of $n^\mu$ are invariant. This field-strength tensor is not invariant under the standard gauge transformation $ \delta B_{\mu\nu}= \partial_\mu \zeta_\nu - \partial_\nu\zeta_\mu$, where $\zeta_{\mu}(x)$ is a vector parameter. Rather, it remains invariant under the following modified (VSR-type) gauge transformation: $$\begin{aligned}
\delta B_{\mu\nu}&=&\tilde\partial_\mu \zeta_\nu -\tilde\partial_\nu\zeta_\mu,\nonumber\\
&=&\partial_\mu\zeta_\nu -\partial_\nu\zeta_\mu -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\zeta_\nu +
\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu\zeta_\mu,
\end{aligned}$$ where $\tilde\partial_\mu=\partial_\mu -\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\mu$ is known as the wiggle operator. To have the usual mass dimension for the wiggle operator, a constant $m$ has to be introduced which fixes the scale of VSR effects.
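Two properties of the wiggle operator are worth recording here, since they underlie the gauge invariance and the mass generation discussed below. Both follow from $n_\mu$ being a constant null vector, so that $1/(n\cdot\partial)$ commutes with $\partial_\nu$ and $n^\mu n_\mu = 0$:

```latex
% Wiggle derivatives commute and square to the Klein-Gordon operator:
\tilde\partial_\mu \tilde\partial_\nu
  = \partial_\mu\partial_\nu
  - \frac{m^2}{2}\,\frac{n_\mu\partial_\nu + n_\nu\partial_\mu}{n\cdot\partial}
  + \frac{m^4}{4}\,\frac{n_\mu n_\nu}{(n\cdot\partial)^2}
  = \tilde\partial_\nu \tilde\partial_\mu ,
\qquad
\tilde\partial_\mu \tilde\partial^\mu
  = \square - m^2 + \frac{m^4}{4}\,\frac{n_\mu n^\mu}{(n\cdot\partial)^2}
  = \square - m^2 .
```

The first identity shows that any field strength built by antisymmetrizing wiggle derivatives is invariant under the VSR-type gauge transformation, since the variation is proportional to $(\tilde\partial_\mu\tilde\partial_\nu - \tilde\partial_\nu\tilde\partial_\mu)\zeta_\rho$ and its cyclic partners; the second identity is the origin of the mass $m$ found below.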
The gauge-invariant action in VSR describing the massive Kalb-Ramond tensor field is given by $$\begin{aligned}
S^{(2)}_0=\frac{1}{12}\int d^4x\ \tilde F_{\mu\nu\rho}\tilde F^{\mu\nu\rho},\label{cl}
\end{aligned}$$ where the wiggle field-strength tensor has the following form: $$\begin{aligned}
\tilde F_{\mu\nu\rho}&=&\tilde \partial_\mu B_{\nu\rho}+\tilde\partial_\nu B_{\rho\mu}+\tilde\partial_\rho B_{\mu\nu},\nonumber\\
&=&\partial_\mu B_{\nu\rho}+\partial_\nu B_{\rho\mu}+\partial_\rho B_{\mu\nu}
-\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\mu B_{\nu\rho}-\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\nu B_{\rho\mu}-\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\rho B_{\mu\nu},\nonumber\\
&=&F_{\mu\nu\rho} -\frac{1}{2} m^2 \left(n_\mu\frac{1}{(n\cdot\partial)^2}n^\alpha F_{\nu\rho\alpha}+n_\nu\frac{1}{(n\cdot\partial)^2}n^\alpha F_{ \rho\mu\alpha}+n_\rho\frac{1}{(n\cdot\partial)^2}n^\alpha F_{\mu\nu\alpha} \right).
\end{aligned}$$ It is evident from the above relation that $\tilde F_{\mu\nu\rho}$ does not coincide with $ F_{\mu\nu\rho}$ given in (\[fi\]).
The equations of motion (EOM) for the Kalb-Ramond field are calculated as $$\begin{aligned}
\tilde\partial_{\mu}\tilde F^{\mu\nu\rho}=0.
\end{aligned}$$ For the VSR-type Lorenz gauge $\tilde \partial_\mu B^{\mu\nu}=0$, the EOM reduces to $$\begin{aligned}
[\square -m^2 ]B^{\nu\rho}=0,
\end{aligned}$$ which remarkably implies that the field $B_{\mu\nu}$ has mass $m$. The non-local terms are dealt with using the following relation [@ale]: $$\begin{aligned}
\frac{1}{n\cdot\partial}=\frac{1}{\partial_t +\partial_z}=\int dt_+,\end{aligned}$$ where $t_+=\frac{t+z}{2}$. Here we observe that our results are in agreement with [@15]. Next we will study the covariant quantization of Abelian 2-form gauge theory in VSR.
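The light-cone representation of the non-local operator can be checked symbolically; the test function below is an arbitrary illustrative choice.

```python
import sympy as sp

t, z = sp.symbols('t z')
t_plus, t_minus = (t + z) / 2, (t - z) / 2   # light-cone coordinates

# Arbitrary test function written in light-cone coordinates.
u, v = sp.symbols('u v')
f_uv = sp.exp(-u**2) * sp.cos(v)

# Antiderivative with respect to t_+ at fixed t_-, i.e. the action of
# 1/(n.partial) = int dt_+ on f.
F = sp.integrate(f_uv, u).subs({u: t_plus, v: t_minus})
f = f_uv.subs({u: t_plus, v: t_minus})

# In these coordinates (partial_t + partial_z) reduces to d/dt_+, so
# applying it to F must recover f.
assert sp.simplify(sp.diff(F, t) + sp.diff(F, z) - f) == 0
```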
Different gauges
----------------
In order to quantize a gauge theory we must add a gauge-fixing term and the corresponding Faddeev-Popov term to the invariant action. The gauge-fixing term breaks the local gauge symmetry, and thus the divergence of the functional integral disappears. The ghost term, in turn, corrects the integration measure so as to provide correct predictions for gauge-invariant observables. Therefore, for the so-called BRST quantization, it is necessary to introduce the following ghost and auxiliary fields for the reducible 2-form gauge theory: anticommuting vector fields $\rho_{\mu}$ and $\bar\rho_{\mu}$, a commuting vector field $\beta_{\mu}$, anticommuting scalar fields $\chi$ and $\bar\chi$, and commuting scalar fields $\sigma, \varphi,$ and $ \bar\sigma $. The gauge fixing and ghost action for the antisymmetric rank-2 tensor field in the VSR-type Lorenz gauge is given by $$\begin{aligned}
S_{gf+gh}^{(2)L}&=&\int d^4x\left[ i\bar\rho_\nu \tilde\partial_\mu(\tilde\partial^\mu\rho^\nu -
\tilde\partial^\nu\rho^\mu )-\bar\sigma\tilde\partial_\mu\tilde\partial^\mu\sigma +\beta_\nu(\tilde\partial_\mu B^{
\mu\nu} +\lambda_1\beta^\nu -\partial^\nu\varphi)\right.\nonumber\\
&-&\left. i\bar\chi(\tilde\partial_\mu\rho^\mu +\lambda_2 \chi) -i\bar\rho^\mu \tilde \partial_\mu \chi \right],\nonumber\\
&=&\int d^4x\left[i\bar\rho_\nu \left(\partial_\mu\partial^\mu \rho^\nu -\partial_\mu\partial^\nu
\rho^\mu -m^2\rho^\nu +\frac{1}{2} \frac{m^2}{n\cdot \partial}n^\nu\partial\cdot\rho
+ \frac{1}{2} \frac{m^2}{n\cdot \partial} \partial^\nu n\cdot\rho\right.\right.\nonumber\\
& -&\left.\left. \frac{1}{4}\frac{m^2}{(n\cdot\partial)^2}n^\nu n\cdot\rho\right) -\bar{\sigma}
(\partial_\mu\partial^\mu -m^2)\sigma +\beta_\nu\partial_\mu B^{
\mu\nu} -\frac{1}{2}m^2\beta_\nu\frac{1}{n\cdot\partial}n_\mu B^{\mu\nu}+\lambda_1\beta_\nu\beta^\nu \right.\nonumber\\
&-& \left. \beta_\nu\partial^\nu\varphi -i\bar\chi \partial_\mu\rho^\mu +\frac{i}{2}m^2\bar\chi\frac{1}{n\cdot\partial}n_\mu \rho^\mu-i\lambda_2\bar\chi\chi -i\bar\rho^\mu\partial_\mu\chi-\frac{i}{2}\frac{m^2}{n\cdot\partial}\bar\rho^\mu n_\mu\chi\right], \label{gfix}\end{aligned}$$ where $\lambda_1$ and $\lambda_2$ are gauge parameters. It is evident from the above expression that the ghost fields and ghost of ghost fields have mass $m$ in VSR. Since all the fields acquire a common mass, it cannot be used as a replacement for the Higgs mechanism. The ghost propagator and ghost of ghost propagator are computed, respectively, as $$\begin{aligned}
&&D_{\mu\nu}^{gh}(k) =-\frac{1}{k^2+m^2}\left[g_{\mu\nu}+\frac{k_\mu k_\nu}{m^2}\right],\nonumber\\
&&D^{ggh}(p) =-\frac{1}{p^2+m^2}.\end{aligned}$$ It can be seen that the propagators and vertices have the same large momentum behavior as in Lorentz-invariant theories. So the 2-form gauge theory in VSR is renormalizable.
The expression (\[gfix\]) can further be written in terms of BRST variation $\delta_b$ of gauge-fixing fermion $\psi^L$ as follows: $$\begin{aligned}
S_{gf+gh}^{(2)L}&=&\delta_b\int d^4x\ \psi^L,\nonumber\\
&=&\delta_b\int d^4x\left[-i\bar\rho_\nu(\tilde\partial_\mu B^{\mu\nu}+\lambda_1\beta^\nu-\tilde\partial^\nu\varphi)
-i\bar\sigma(\tilde\partial_\mu\rho^\mu +\lambda_2\chi)\right],\nonumber\\
&=&\delta_b\int d^4x\left[-i\bar\rho_\nu \partial_\mu B^{\mu\nu}+\frac{i}{2}m^2\bar\rho_\nu\frac{1}{n\cdot \partial}n_\mu B^{\mu\nu}-i\lambda_1\bar\rho_\nu\beta^\nu +i\bar\rho_\nu\partial^\nu\varphi
\right.\nonumber\\
&-&\left. \frac{i}{2}m^2\bar\rho_\nu\frac{1}{n\cdot \partial}n^\nu\varphi -i\bar\sigma\left(\partial_\mu\rho^\mu -\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\mu \rho^\mu +\lambda_2\chi\right)\right],\label{gff}\end{aligned}$$ where the BRST transformation of the fields is given by $$\begin{aligned}
\delta_b B_{\mu\nu} &=& -\left(\partial_\mu\rho_\nu -\partial_\nu\rho_\mu -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\rho_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu\rho_\mu\right)\Lambda, \nonumber\\
\delta_b\rho_\mu &=& -i\left( \partial_\mu\sigma - \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\sigma\right)\Lambda, \ \ \ \ \delta_b\sigma
= 0, \nonumber\\
\delta_b\bar\rho_\mu &=&i\beta_\mu \Lambda, \ \ \ \
\delta_b\beta_\mu = 0,\ \ \ \
\delta_b\bar\sigma =-\bar\chi\Lambda, \nonumber\\
\delta_b\varphi &=& -\chi\Lambda, \ \ \ \ \
\delta_b\bar\chi =0, \ \ \ \ \delta_b\chi =0.\label{sym}\end{aligned}$$ Here $\Lambda$ is an infinitesimal Grassmann parameter.
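It may help to note explicitly why these transformations are nilpotent on the tensor field. Writing $\delta_b B_{\mu\nu} = -(\tilde\partial_\mu\rho_\nu - \tilde\partial_\nu\rho_\mu)\Lambda$ and using $\delta_b\rho_\mu = -i\,\tilde\partial_\mu\sigma\,\Lambda'$, a second variation gives

```latex
\delta_b^{\,2} B_{\mu\nu}
  \;\propto\; \tilde\partial_\mu\,\delta_b\rho_\nu
            - \tilde\partial_\nu\,\delta_b\rho_\mu
  \;=\; -i\left(\tilde\partial_\mu\tilde\partial_\nu
        - \tilde\partial_\nu\tilde\partial_\mu\right)\sigma\,\Lambda'
  \;=\; 0 ,
```

since the wiggle derivatives commute; nilpotency on the remaining fields follows by inspection of (\[sym\]).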
The gauge fixing and ghost action for the antisymmetric rank-2 tensor field in VSR-type axial gauge (i.e. $\eta_\mu B^{\mu\nu}=0$) is given by $$\begin{aligned}
S_{gf+gh}^{(2)A}&=&\int d^4x\left[ i\bar\rho_\nu \eta_\mu(\tilde\partial^\mu\rho^\nu -
\tilde\partial^\nu\rho^\mu )-\bar\sigma\eta_\mu\tilde\partial^\mu\sigma +\beta_\nu(\eta_\mu B^{
\mu\nu} +\lambda_1\beta^\nu -\eta^\nu\varphi)\right.\nonumber\\
&-&\left. i\bar\chi(\eta_\mu\rho^\mu +\lambda_2 \chi) -i\bar\rho^\mu \eta_\mu \chi \right],\nonumber\\
&=&\int d^4x\left[ i\bar\rho_\nu \eta_\mu\left(\partial^\mu\rho^\nu -
\partial^\nu\rho^\mu -\frac{1}{2}\frac{m^2}{n\cdot \partial}n^\mu\rho^\nu +\frac{1}{2}\frac{m^2}{n\cdot \partial}n^\nu\rho^\mu\right)-\bar\sigma\eta_\mu\partial^\mu\sigma \right.\nonumber\\
&+&\left.\frac{1}{2}m^2
\bar\sigma \frac{1}{n\cdot\partial}\eta\cdot n\sigma +\beta_\nu(\eta_\mu B^{
\mu\nu} +\lambda_1\beta^\nu -\eta^\nu\varphi)
- i\bar\chi(\eta_\mu\rho^\mu +\lambda_2 \chi) -i\bar\rho^\mu \eta_\mu \chi \right].\end{aligned}$$ In terms of a gauge fixing fermion it can further be written as $$\begin{aligned}
S_{gf+gh}^{(2)A}&=&\delta_b\int d^4x\left[-i\bar\rho_\nu(\eta_\mu B^{\mu\nu}+\lambda_1\beta^\nu-\eta^\nu\varphi)
-i\bar\sigma(\eta_\mu\rho^\mu +\lambda_2\chi)\right].\end{aligned}$$ We see here that the gauge-fixed action in the VSR-type axial gauge has a simpler form than in the Lorenz gauge. Next we discuss the BV formulation of this model.
Batalin-Vilkovisky formulation
------------------------------
To analyze the BV formulation for Abelian rank-2 antisymmetric tensor field theory in VSR, we first define the generating functional in the VSR-type Lorenz gauge in field/antifield formulation by introducing an antifield corresponding to each field of the theory with opposite statistics, thus: $$\begin{aligned}
Z_{2-form}^L &=&\int\left[dBd\rho d\bar{\rho}d\sigma d\bar{\sigma}d\varphi d\chi d\bar{\chi}d
\beta\right]\exp\left[i\int d^4x\left\{\frac{1}{12}F_{\mu\nu\lambda}F^{\mu\nu\lambda}\right.\right. \nonumber\\
&-& \left.\left. B^{
\mu\nu\star}\left(\partial_\mu\rho_\nu-\partial_\nu\rho_\mu -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\rho^\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu\rho^\mu\right)
\right.\right. \nonumber\\
&-& \left.\left. i\rho^{\mu\star}\left(\partial_\mu\sigma - \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\sigma\right) +i{\bar{\rho}}^{\nu\star}\beta_\nu-\bar{
\sigma}^\star\bar\chi-\varphi^\star\chi\right\}\right].\end{aligned}$$ These antifields (starred fields) are identified with the help of the gauge fixed fermion given in (\[gff\]) as $$\begin{aligned}
\psi^L
&=& -i\bar\rho_\nu \partial_\mu B^{\mu\nu}+\frac{i}{2}m^2\bar\rho_\nu\frac{1}{n\cdot \partial}n_\mu B^{\mu\nu}-i\lambda_1\bar\rho_\nu\beta^\nu +i\bar\rho_\nu\partial^\nu\varphi -\frac{i}{2}m^2\bar\rho_\nu\frac{1}{n\cdot \partial}n^\nu\varphi
\nonumber\\
&-& i\bar\sigma\left(\partial_\mu\rho^\mu -\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\mu \rho^\mu +\lambda_2\chi\right).\end{aligned}$$ These identifications are $$\begin{aligned}
B^{\mu\nu\star }&=&\frac{\delta \psi^L}{\delta B_{\mu\nu}}=i\partial^\mu\bar\rho^\nu+
\frac{i}{2}m^2\bar\rho^\nu\frac{1 }{n\cdot\partial}n^\mu, \nonumber\\
\bar\rho^{\nu\star}&=&\frac{\delta \psi^L}{\delta \bar \rho_\nu}=-i\left(\partial_\mu B^{\mu\nu}-\frac{1}{2}m^2\frac{1}{n\cdot\partial}n_\mu B^{\mu\nu}+
\lambda_1\beta^\nu -\partial^\nu\varphi +\frac{1}{2}m^2\frac{1}{n\cdot\partial}n^\nu\varphi \right),\nonumber\\
\rho^{\mu\star }&=&\frac{\delta \psi^L}{\delta \rho_{\mu}}=i\partial^\mu\bar\sigma +\frac{i}{2}\bar\sigma\frac{m^2}{n\cdot\partial}n^\mu, \ \ \ \
\bar\sigma^\star =\frac{\delta \psi^L}{\delta\bar\sigma}=-i\left(\partial_\mu\rho^\mu -\frac{1}{2}\frac{m^2}{n\cdot \partial}n_\mu \rho^\mu +\lambda_2\chi\right),\nonumber\\
\sigma^\star &=&\frac{\delta \psi^L}{\delta \sigma}=0, \ \ \ \ \ \ \ \chi^\star =\frac{\delta \psi^L}{\delta\chi}=-i\lambda_2 \bar \sigma,\nonumber\\
\varphi^\star &=&\frac{\delta \psi^L}{\delta\varphi}=-i
\partial_\mu\bar\rho^\mu -\frac{i}{2}m^2\bar\rho_\nu\frac{1}{n\cdot\partial}n^\nu, \ \ \bar\chi^\star =\frac{\delta \psi^L}{
\delta\bar\chi}=0.\end{aligned}$$ This can further be expressed in a compact form as $$Z^L_{2-form} = \int {\cal D}\phi\ e^{iW_{2-form}(\phi,\phi^\star)},$$ where $W_{2-form}(\phi,\phi^\star)$ is the extended quantum action for the Abelian 2-form gauge theory in the VSR-type Lorenz gauge, written in terms of the generic field $\phi$ and antifield $\phi^\star$. It is well known that the value of the generating functional $Z^L_{2-form}$ does not depend on the choice of gauge-fixing fermion. This extended quantum action, $W_{2-form}(\phi,\phi^\star)$, is the solution of a rich mathematical relation called the quantum master equation, given by $$\Delta e^{iW_{2-form}[\phi, \phi^\star ]} =0,\ \
\Delta\equiv (-1)^{\epsilon}\frac{\partial_l}{
\partial\phi}\frac{\partial_l}{\partial\phi^\star } .$$ Corresponding to different choices of gauge condition, there will be many possible solutions of the quantum master equation. The ghost number and statistics of $\phi^\star$ are $$\begin{aligned}
\mbox{gh} [\phi^\star]=-\mbox{gh} [\phi]-1,\ \ \ \epsilon(\phi^\star)= \epsilon(\phi)+1\ (\mbox{mod}
\ 2).\end{aligned}$$ The quantum action can be extended up to the one-loop order correction as $$W_{2-form}[\phi, \phi^\star ]=S_0[\phi] +S^{(2)L}_{gf+gh} [\phi, \phi^\star ] +\hbar M_1[\phi, \phi^\star ],$$ where $S_0 +S^{(2)L}_{gf+gh} $ is the complete action given in Eqs. (\[cl\]) and (\[gff\]) and $M_1$ appears from nontrivial measure factors.
The behavior of $ W_{2-form}$ for BRST transformations can be given by $$\delta_b
W_{2-form} =i\hbar\Delta W_{2-form}.$$ For a (non-anomalous) gauge theory, up to the first-order correction $M_1$, the solution does not depend on antifields. In this situation, the BRST transformations of the complete action $S_0 +S^{(2)L}_{gf+gh}$ and of $M_1$ are given by $$\delta_b (S_0 +S^{(2)L}_{gf+gh})=0, \ \ \delta_b M_1 =i\Delta (S_0 +S^{(2)L}_{gf+gh}).$$ This result can further be generalized to higher orders in perturbation theory.
In the next section we will study the case of an Abelian rank-3 tensor field theory in VSR.
Abelian 3-form fields in VSR
============================
The Abelian 3-form gauge field is important for supergravity theory in higher spacetime dimensions, so it is natural to study such a gauge field in VSR. Let us start by writing the field-strength for the Abelian 3-form gauge theory in arbitrary $d$ dimensions in VSR as $$\begin{aligned}
H_{\mu\nu\eta\chi}&=&\partial_\mu B_{\nu\eta\chi} -\partial_\nu B_{\eta\chi\mu}+
\partial_{\eta}B_{\chi\mu\nu}-\partial_\chi B_{\mu\nu\eta}+\frac{1}{2}m^2 \left[n_\mu
\frac{1}{(n\cdot\partial)^2}n^\alpha (\partial_\nu B_{\eta\chi\alpha
}-\partial_\eta B_{\chi\alpha\nu}+\partial_\chi B_{\alpha\nu\eta})\right.\nonumber\\
&-&\left. n_\nu
\frac{1}{(n\cdot\partial)^2}n^\alpha (\partial_\eta B_{ \chi\mu\alpha
}-\partial_\chi B_{\mu\alpha\eta}+\partial_\mu B_{\alpha \eta\chi})+n_\eta
\frac{1}{(n\cdot\partial)^2}n^\alpha (\partial_\chi B_{\mu\nu\alpha
}-\partial_\mu B_{\nu\alpha\chi}+\partial_\nu B_{\alpha\chi\mu})\right.\nonumber\\
&-&\left. n_\chi
\frac{1}{(n\cdot\partial)^2}n^\alpha (\partial_\mu B_{ \nu\eta\alpha
}-\partial_\nu B_{\eta\alpha\mu}+\partial_\eta B_{\alpha\mu\nu})
\right].\end{aligned}$$ It is straightforward to check that this field-strength is not invariant under the standard gauge transformation, $\delta B_{\mu\nu\eta}=\partial_\mu\lambda_{\nu\eta}+\partial_\nu\lambda_{\eta\mu}+\partial_\eta
\lambda_{\mu\nu}$. Rather, this is invariant under the following modified (VSR-type) gauge transformation: $$\begin{aligned}
\delta B_{\mu\nu\eta}=\partial_\mu\lambda_{\nu\eta}+\partial_\nu\lambda_{\eta\mu}+\partial_\eta
\lambda_{\mu\nu}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\lambda_{\nu\eta}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu\lambda_{\eta\mu}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\eta\lambda_{\mu\nu},
\label{gau}\end{aligned}$$ where $\lambda_{\mu\nu}$ is a tensor parameter of transformation.
To describe a massive 3-form field we define the VSR-type gauge invariant action in $d$ dimensions as follows: $$\begin{aligned}
S_0= \kappa\int d^dx \ \tilde H_{\mu\nu\eta\chi}\tilde H^{\mu\nu\eta\chi},\label{ac}\end{aligned}$$ where $\kappa$ is some fixed constant and the wiggle field strength for the 3-form gauge field is given by $$\begin{aligned}
\tilde H_{\mu\nu\eta\chi}&=&\tilde \partial_\mu B_{\nu\eta\chi} -\tilde \partial_\nu B_{\eta\chi\mu}+
\tilde \partial_{\eta}B_{\chi\mu\nu}-\tilde \partial_\chi B_{\mu\nu\eta},\nonumber\\
&=&H_{\mu\nu\eta\chi}+\frac{1}{2}m^2\left[n_\mu\frac{1}{(n\cdot\partial)^2}n^\alpha H_{\nu\eta\chi\alpha}
- n_\nu\frac{1}{(n\cdot\partial)^2}n^\alpha H_{ \eta\chi\mu\alpha}\right.\nonumber\\
&+&\left. n_\eta\frac{1}{(n\cdot\partial)^2}n^\alpha H_{ \chi\mu\nu\alpha}-n_\chi\frac{1}{(n\cdot\partial)^2}n^\alpha H_{\mu\nu\eta\alpha} \right].\end{aligned}$$ From the above expression it is evident that the wiggle field strength does not coincide with the field strength $H_{\mu\nu\eta\chi}$. Now, the EOM for the 3-form gauge field is calculated as $$\tilde{\partial}_\mu\tilde{H}^{\mu\nu\eta\chi} =0,$$ which, in turn, in the VSR-type Lorenz gauge (i.e. $\tilde{\partial}_\mu B^{\mu\nu\eta}=0$) reduces to $$(\square -m^2)B^{\nu\eta\chi}=0.$$ This is the Klein-Gordon equation for a massive field, which implies that the 3-form gauge field $B^{\nu\eta\chi}$ has mass $m$.
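As a quick consistency check, the mass term can be traced to the wiggle derivative itself: using $\tilde \partial_\mu = \partial_\mu - \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu$ and the fact that the preferred VSR vector $n_\mu$ is null, one finds $$\tilde\partial^\mu \tilde\partial_\mu = \square - m^2 + \frac{m^4}{4}\frac{n^\mu n_\mu}{(n\cdot\partial)^2} = \square - m^2,$$ since $n^\mu n_\mu = 0$. This is precisely the massive Klein-Gordon operator.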
Since the action (\[ac\]) respects the VSR-type gauge symmetry, for a perturbative formulation we need to break the local gauge invariance by adding a gauge-fixing term. When constructing the effective action at higher orders, maintaining unitarity, one has to replace the local gauge symmetry by a (global) BRST symmetry. To make the gauge-fixing term BRST invariant, we need to add ghost terms to the effective action. We therefore fix a VSR-type Lorenz gauge (i.e. $\tilde{\partial}_\mu B^{\mu\nu\eta}=0$). Since this is a reducible gauge theory, further gauge-fixing is required for the other (ghost) fields. This gauge-fixing condition is incorporated by adding the following gauge-fixed action together with the induced ghost term: $$\begin{aligned}
S^L_{gf+gh}
&=& \int d^dx \left[ \tilde\partial_\mu B^{\mu\nu\eta}B_{\nu\eta} +
\frac{1}{2}B_{\mu\nu}\bar B^{\mu\nu}
+
(\tilde\partial_\mu \bar c_{\nu\eta} + \tilde\partial_\nu \bar c_{\eta\mu}
+ \tilde\partial_\eta \bar c_{\mu\nu})\tilde\partial ^\mu
c^{\nu\eta}\right.\nonumber\\
&-& \left.(\tilde\partial_\mu\bar \beta_\nu -\tilde\partial_\nu \bar\beta_\mu )\tilde\partial^\mu\beta^\nu -BB_2 -
\frac{1}{2} B_1^2 +(\tilde\partial_\mu \bar c^{\mu\nu}+\tilde\partial^\nu \bar c_1)f_\nu \right.
\nonumber\\
&-&\left.(\tilde\partial_\mu c^{\mu\nu}- \tilde{\partial}^\nu c_1)\bar F_\nu +\tilde\partial_\mu\bar c_2 \tilde\partial^\mu c_2
+ \tilde\partial_\mu\beta^\mu B_2 +\tilde\partial_\mu \phi^\mu B_1 -
\tilde\partial_\mu\bar\beta^\mu B\right],\end{aligned}$$ where antisymmetric ghost and antighost fields ($c_{\mu\nu}$ and $\bar c_{\mu\nu}$) are Grassmannian and the vector field $\phi_\mu$, antisymmetric auxiliary fields $B_{\mu\nu}, \bar B_{\mu\nu}$ and auxiliary fields $B,
B_1, B_2$ are bosonic in nature. The ghosts of ghosts ($\beta_\mu$ and $\bar\beta_\mu$) are bosonic in nature, whereas the ghosts of ghosts of ghosts ($c_2$ and $\bar{c}_2$) are fermionic. The rest of the Grassmannian fields ($c_1, \bar c_1, f_\mu$ and $\bar F_\mu$) are auxiliary fields. It can easily be seen here that the ghosts ($c_{\mu\nu}$ and $\bar c_{\mu\nu}$), ghosts of ghosts ($\beta_\mu$ and $\bar\beta_\mu$) and ghosts of ghosts of ghosts ($c_2$ and $\bar{c}_2$) have mass $m$.
Expanding the wiggle operators, the gauge-fixed action reduces to $$\begin{aligned}
S^L_{gf+gh} &=& \int d^dx \left[ \partial_\mu B^{\mu\nu\eta}B_{\nu\eta} -\frac{1}{2}m^2B_{\nu\eta}\frac{1}{n\cdot\partial}n_\mu B^{\mu\nu\eta}+
\frac{1}{2}B_{\mu\nu}\bar B^{\mu\nu}
+
( \partial_\mu \bar c_{\nu\eta} + \partial_\nu \bar c_{\eta\mu}
+ \partial_\eta \bar c_{\mu\nu}) \partial ^\mu
c^{\nu\eta}\right.\nonumber\\
&+& \left.m^2\bar c_{\nu\eta}c^{\nu\eta} -m^2\partial_\nu\bar c_{\eta\mu}\frac{1}{n\cdot\partial}
n^\mu c^{\nu\eta} -\frac{m^2}{n\cdot\partial}n_\nu\bar c_{\eta\mu}\partial^\mu c^{\nu\eta}+
\frac{1}{2}\frac{m^4}{n\cdot\partial}n_\nu\bar{c}_{\eta\mu}\frac{1}{n\cdot\partial}n^\mu c^{\nu\eta}
\right.\nonumber\\
&-& \left.( \partial_\mu\bar \beta_\nu - \partial_\nu \bar\beta_\mu ) \partial^\mu\beta^\nu -m^2\bar \beta_\nu\beta^\nu -\frac{1}{2}m^2\partial_\nu\bar\beta_\mu\frac{1}{n\cdot\partial}n^\mu\beta^\nu
-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu\bar\beta_\mu\partial^\mu\beta^\nu
\right.
\nonumber\\
&+&\left.\frac{1}{4}\frac{m^4}{n\cdot\partial}n_\nu\bar\beta_\mu \frac{1}{n\cdot\partial}n^\mu\beta^\nu
-BB_2 -
\frac{1}{2} B_1^2 +(\partial_\mu \bar c^{\mu\nu})f_\nu -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\bar c^{\mu\nu}f_\nu \right.
\nonumber\\
&+&\left. \partial^\nu \bar c_1 f_\nu - \frac{1}{2}\frac{m^2}{n\cdot\partial}n^\nu \bar c_1 f_\nu -( \partial_\mu c^{\mu\nu})\bar F_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu c^{\mu\nu}\bar F_\nu
+\partial^\nu c_1 \bar F_\nu
\right.
\nonumber\\
&-&\left. \frac{1}{2}\frac{m^2}{n\cdot\partial}n^\nu c_1 \bar F_\nu -\bar c_2( \partial_\mu\partial^\mu -m^2)c_2
+ \partial_\mu\beta^\mu B_2-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu\beta^\mu B_2 \right.
\nonumber\\
&+&\left. \partial_\mu \phi^\mu B_1-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \phi^\mu B_1-
\partial_\mu\bar\beta^\mu B+ \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \bar\beta^\mu B\right].\label{lag}\end{aligned}$$
The effective action together with (\[ac\]) and (\[lag\]) remains invariant under the following set of BRST transformations: $$\begin{aligned}
\delta_b B_{\mu\nu\eta} &=& -\left(\partial_\mu c_{\nu\eta}+\partial_\nu c_{\eta\mu} +\partial_\eta c_{\mu\nu}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu c_{\nu\eta}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu c_{ \eta\mu}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\eta c_{\mu\nu}\right) \Lambda,\nonumber\\
\delta_b c_{\mu\nu} &=& \left(\partial_\mu\beta_\nu -\partial_\nu \beta_\mu-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \beta_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu \beta_\mu\right) \Lambda,\
\delta_b\bar c_{\mu\nu}=B_{\mu\nu} \Lambda, \nonumber\\
\delta_b\bar B_{\mu\nu} &=&-\left(\partial_\mu f_\nu -\partial_\nu f_\mu-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu f_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu f_\mu\right) \Lambda, \ \
\delta_b\bar\beta_\mu = -\bar F_\mu \Lambda,\nonumber\\
\delta_b\beta_\mu &=&-\left(\partial_\mu c_2 -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu c_2\right)\Lambda, \ \ \
\delta_b\bar c_2 =B_2 \Lambda,\ \ \ \ \ \delta_b c_1=-B \Lambda,\nonumber\\
\delta_b \phi_\mu
&=&-f_\mu \Lambda,\ \
\delta_b \bar c_1= B_1 \Lambda,\ \ \
\delta_b {\cal M} =0,\ \ \ \
{\cal M} \equiv \{c_2, f_\mu, \bar F_\mu, B, B_1, B_2, B_{\mu\nu}\},\label{brst}\end{aligned}$$ where $\Lambda$ is the fermionic transformation parameter. The gauge fixing fermion is given by $$\begin{aligned}
\psi_L &=& -\partial_\mu\bar c_{\nu\eta}B^{\mu\nu\eta}+\frac{1}{2}\frac{m^2}{n\cdot\partial}
n_\mu \bar c_{\nu\eta} B^{\mu\nu\eta} -\frac{1}{2}\bar c_2 B
+\frac{1}{2}c_1B_2
- \frac{1}{2}\bar c_1 B_1 - c^{\mu\nu}
\partial_\mu\bar \beta_\nu +c_1\partial_\mu \bar{\beta}^\mu \nonumber\\
&+&\frac{1}{2}c^{\mu\nu}\frac{m^2}{n\cdot\partial}n_\mu\bar\beta_\nu - \partial_\mu \bar c_2 \beta^\mu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \bar c_2 \beta^\mu + \frac{1}{2}\bar c_{\mu\nu} \bar B^{\mu\nu} +\bar c_1\partial_\mu \phi^\mu -\frac{1}{2}\bar{c}_1 \frac{m^2}{n\cdot\partial}n_\mu \phi^\mu. \label{psi}\end{aligned}$$ This expression will play an important role in the next subsection to get identification for the antifields in VSR-type Lorenz gauge.
Batalin-Vilkovisky formulation
------------------------------
To describe 3-form gauge theory in BV formulation in VSR, we introduce the antifields corresponding to each field of the model with opposite statistics having non-vanishing BRST symmetry in the generating functional as follows: $$\begin{aligned}
Z_{3-form}^L &=&\int {\cal D}\phi \exp\left[i\int d^dx\left\{\frac{1}{24}\tilde H_{\mu\nu\eta\chi}\tilde H^{\mu\nu\eta\chi}
-
B_{\mu\nu\eta}^\star \left(
\partial^\mu c^{\nu \eta} + \partial^\nu c^{ \eta\mu} +
\partial^\eta c^{\mu\nu }\right. \right.\right. \nonumber\\
& -& \left.\left.\left. \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu c_{\nu\eta}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu c_{ \eta\mu}-\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\eta c_{\mu\nu} \right)+
{c}_{\mu\nu}^\star \left( \partial^\mu\beta^\nu -\partial^\nu\beta^\mu \right. \right.\right. \nonumber\\
& -& \left.\left.\left. \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \beta_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu \beta_\mu
\right)+\bar{c}_{\mu\nu}^\star
B^{\mu\nu} -\bar B_{\mu\nu}^\star \left(\partial^\mu f^\nu -\partial^\nu f^\mu \right. \right.\right. \nonumber\\
& -& \left.\left.\left. \frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu f_\nu +\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\nu f_\mu\right)
-\beta_\mu^\star \left(\partial^\mu c_2 -\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu c_2 \right) -\bar \beta_\mu^\star \bar F ^\mu \right.\right. \nonumber\\
& +& \left.\left.
\bar c_2^\star B_2
+\bar c_1^\star B_1 -c_1^\star B
-\phi_\mu ^\star f^\mu \right\}\right ].\end{aligned}$$ These antifields (starred fields) are evaluated with the help of the gauge-fixing fermion (\[psi\]) as follows $$\begin{aligned}
&&B_{\mu\nu\eta}^\star=-\partial_\mu \bar c_{\nu\eta}+\frac{1}{2}\frac{m^2}{n\cdot\partial}
n_\mu \bar c_{\nu\eta} \ \
c_{\mu\nu}^\star = -\partial_\mu\bar \beta_\nu+\frac{1}{2} \frac{m^2}
{n\cdot\partial}n_\mu\bar\beta_\nu,\ \
\bar B_{\mu\nu}^\star = \frac{1}{2}\bar c_{\mu\nu},\nonumber\\
&&\bar c_{\mu\nu}^\star =\frac{1}{2}\bar B_{\mu\nu} +
\partial^\eta B_{\mu\nu\eta}-\frac{1}{2}\frac{m^2}{n\cdot\partial}
n^\eta B_{\eta\mu\nu},\ \
\beta_{\mu}^\star =- \partial_\mu\bar c_2+\frac{1}{2}\frac{m^2}{n\cdot\partial}n_\mu \bar c_2,
\nonumber\\
&&\bar \beta_{\mu}^\star =-\partial_\mu c_1 +\partial^\nu c_{\nu\mu}-\frac{1}{2}c_{\nu\mu}\frac{m^2}
{n\cdot\partial}n^\nu,\ \ \bar c_2^{\star}=-\frac{1}{2}B+\partial_\mu\beta^\mu -\frac{1}{2} \frac{m^2}
{n\cdot\partial}n_\mu\beta^\mu,\nonumber\\
&&
c_1^{\star} =\frac{1}{2}B_2 +\partial_\mu \bar \beta^\mu,\ \
\bar c_1^{\star} =-\frac{1}{2}B_1 +\partial_\mu\phi^\mu -\frac{1}{2} \frac{m^2}{n\cdot\partial}n_\mu \phi^\mu,\nonumber\\
&&\phi_{\mu}^\star =-\partial_\mu \bar c_1 -\frac{1}{2}\frac{m^2}
{n\cdot\partial}n_\mu \bar c_1,\ \
B^{\star} =-\frac{1}{2}\bar c_2,\ \
B_1^{\star} =-\frac{1}{2}\bar c_1,\nonumber\\
&&B_2^{\star} = \frac{1}{2}c_1,\ \
\{ B_{\mu\nu}^\star, \bar F_{\mu}^\star, f_{\mu }^\star, c_2^{\star}\}=0.\end{aligned}$$ The generating functional can further be written in compact form as $$Z^L_{3-form} = \int {\cal D}\phi\ e^{iW_{3-form}(\phi,\phi^\star)},$$ where $W_{3-form}(\phi,\phi^\star)$ is an extended quantum action for Abelian 3-form gauge theory in the VSR-type Lorentz gauge written in terms of generic field $\phi$ and antifield $\phi^\star$. This extended quantum action, $W_{3-form}(\phi,\phi^\star)$, is the solution of a certain rich mathematical relation, which is called the quantum master equation, given by $$\Delta e^{iW_{3-form}[\phi, \phi^\star ]} =0,\ \
\Delta\equiv (-1)^{\epsilon}\frac{\partial_l}{
\partial\phi}\frac{\partial_l}{\partial\phi^\star } .
\label{mq}$$ Different gauge choices correspond to different solutions $W_{3-form}[\phi, \phi^\star ]$ of this quantum master equation, from which one can derive relations between different correlation functions.
Conclusion
==========
In this paper we have analyzed reducible gauge theories in VSR. To be more specific, we have studied the Kalb-Ramond field theory in VSR involving a fixed null vector. We have derived the classical action for such a theory in VSR. We have found that the action is not invariant under the standard gauge transformation for the Kalb-Ramond field. However, it remains invariant under the modified (VSR-type) gauge transformation written in terms of the wiggle operator. We have derived the equations of motion for the Kalb-Ramond field, which turn out to be Klein-Gordon equations for a massive field. This ensures that the Kalb-Ramond field in VSR gets a mass. Further, to quantize such a theory in VSR we have fixed a VSR-type Lorenz gauge which breaks the local (VSR) gauge symmetry. The propagators have also been calculated. This gauge-fixing term induces a ghost term in the path integral. Here we have observed that the ghost fields and ghost of ghost fields also get a common mass in VSR; therefore this cannot serve as an alternative to the Higgs mechanism. Further, we have demonstrated the BRST symmetry for Kalb-Ramond gauge theory in VSR. To break the gauge symmetry, an axial-type gauge has also been chosen, which has a simpler form than the Lorenz-type gauge. We have also quantized the theory utilizing the BV formulation, where we derive the extended quantum action of the model to first order in perturbation theory satisfying the quantum master equation.
Subsequently, we have considered an Abelian 3-form gauge theory (another reducible gauge theory) in VSR, which plays an important role in $11$-dimensional supergravity. It has been shown that this model also respects a VSR-type gauge invariance rather than the standard gauge symmetry. We have found that the gauge fields together with all ghost fields get a common mass for such a theory in VSR. We have also analyzed the BRST quantization of the 3-form gauge theory in VSR. Further, we have studied the model in the BV formulation. It would now be extremely interesting to evaluate the various identities following from the BRST symmetry of reducible gauge theories in VSR. This might be helpful in gaining a clearer understanding of the theory in VSR. It will also be interesting to explore such results in further models such as perturbative quantum gravity, super-Yang-Mills theory and supersymmetric Chern-Simons theory.
[99]{} Pierre Auger Collaboration, Phys. Rev. Lett. 101, 061101 (2008). J. Alfaro, H. Morales-Tecotl, and L. F. Urrutia, Phys. Rev. Lett. 84, 2318 (2000); Phys. Rev. D 65, 103509 (2002). D. Colladay and V. A. Kostelecky´, Phys. Rev. D 55, 6760 (1997); Phys. Rev. D 58, 116002 (1998). For a review, see, for example, Proceedings of the Meeting on CPT and Lorentz Symmetry, ed. by V. A. Kostelecky´ (World Scientific, Singapore, 1999); Proceedings of the Second, Third and Fourth Meeting on CPT and Lorentz Symmetry, ed. by V. A. Kostelecky´ (World Scientific, Singapore, 1999). R. C. Myers and M. Pospelov, Phys. Rev. Lett. 90, 211601 (2003); C.M. Reyes, L. F. Urrutia, and J. D. Vergara, Phys. Rev. D 78, 125011 (2008); Phys. Lett. B 675, 336 (2009). A. A. Andrianov, P. Giacconi, and R. Soldati, J. High Energy Phys. 02 (2002) 030; J. Alfaro, A. A. Andrianov, M. Cambiaso, P. Giacconi, and R. Soldati, Phys. Lett. B 639, 586 (2006); Int. J. Mod. Phys. A 25, 3271 (2010). A. G. Cohen, S. L. Glashow, Phys. Rev. Lett. 97, 021601 (2006). A. G. Cohen, S. L. Glashow, arXiv:hep-ph/0605036. A. G. Cohen, D.Z. Freedman, J. High Energy Phys. 0707, 039 (2007). J. Vohanka, Phys. Rev. D 85, 105009 (2012). G. Gibbons, J. Gomis, C. Pope, General very special relativity is Finsler geometry, Phys. Rev. D 76, 081701 (2007). W. Muck, Phys. Lett. B 670, 95 (2008). M. M. Sheikh-Jabbari and A. Tureanu, Phys. Rev. Lett. 101, 261601 (2008); S. Das, S. Ghosh, and S. Mignemi, Phys. Lett. A 375, 3237 (2011). E. Alvarez and R. Vidal, Phys. Rev. D 77, 127702 (2008). D. V. Ahluwalia and S. P. Horvath, J. High Energy Phys. 11, 078 (2010). Z. Chang, M.-H. Li, X. Li, and S. Wang, Eur. Phys. J. C 73, 2459 (2013). S. Cheon, C. Lee, and S. Lee, Phys. Lett. B 679, 73 (2009). R. Bufalo, Phys. Lett. B 746, 251 (2015). J. Alfaro and V. O. Rivelles, Phys. Lett. B, 734, 239 (2014).
M. Green, J. Schwarz and E. Witten, [*[Superstring Theory]{}*]{}, (Cambridge University Press, Cambridge, 1987). J. Polchinski, [*[ String Theory]{}*]{}, (Cambridge University Press, Cambridge, 1998). S. Upadhyay, EPL 103, 61002 (2013); S. Upadhyay, M. K. Dwivedi and B. P. Mandal, Int. J. Mod. Phys. A 28, 1350033 (2013); S. Upadhyay and B. P. Mandal, Eur. Phys. J. C 72, 2059 (2012); Mod. Phys. Lett. A 25, 3347 (2010). M. B. Green, J. H. Schwarz and E. Witten, [*[Superstring Theory]{}*]{}, (Cambridge University Press, Cambridge, 1987). J. Polchinski, [*[String Theory]{}*]{} (Cambridge University Press, Cambridge, 1998). M. Kalb and P. Ramond, [*[Phys. Rev.]{}*]{} [**[D 9]{}**]{}, 2273 (1974). F. Lund and T. Regge, [*[Phys. Rev.]{}*]{} [**[D 14]{}**]{}, 1524 (1976). M. Sato and S. Yahikozawa, [*[Nucl. Phys.]{}*]{} [**[B 436]{}**]{}, 100 (1995). A. Sugamoto, [*[Phys. Rev.]{}*]{} [**[D 19]{}**]{}, 1820 (1979). R. L. Davis and E. P. S. Shellard, [*[Phys. Lett.]{}*]{} [**[B 214]{}**]{}, 219 (1988). A. Salam and E. Sezgin, [*[Supergravities in Diverse Dimensions]{}*]{} (North-Holland/World Scientific, Amsterdam/Singapore, 1989). S. Deguchi, T. Mukai and T. Nakajima, [*Phys. Rev.*]{} [**D 59**]{}, 065003 (1999). M. J. Duff and K. Stelle, Phys. Lett. B 253, 113 (1991). A. C. Nayak, R. K. Verma and P. Jain, JCAP 07, 031 (2015).
---
abstract: |
Collatz Conjecture (also known as Ulam’s conjecture and 3x+1 problem) concerns the behavior of the iterates of a particular function on natural numbers. A number of generalizations of the conjecture have been subjected to extensive study. This paper explores Additive Collatz Trajectories, a particular case of a generalization of Collatz conjecture and puts forward a sufficient and necessary condition for looping of Additive Collatz Trajectories, along with two minor results. An algorithm to compute the number of equivalence classes when natural numbers are quotiented by the limiting behavior of their corresponding trajectories is also proposed.
**Keywords:** Collatz Conjecture, Multiplicative Groups, Computational Number Theory
author:
- 'Aalok Thakkar, Mrunmay Jagadale'
date: 'August 8, 2016'
title: Additive Collatz Trajectories
---
Introduction
============
The Collatz Conjecture
----------------------
The $3x + 1$ problem is most simply stated in terms of the Collatz function $C(x)$ defined on integers as:
$$C(x) = \begin{cases}
3x + 1 & \text{if } x \equiv 1 \mod 2\\
\frac{x}{2} &\text{if } x \equiv 0 \mod 2\\
\end{cases}$$
The Collatz Conjecture states that for every $m \in \mathbb{N}$, there exists $k \in \mathbb{N}$ such that the iterate $C^{(k)}(m) = 1$ [@lagarias]. One natural generalization of the Collatz function would be to consider an arbitrary affine linear map of $x$ instead of $3x + 1$.
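A minimal computational sketch of the Collatz iteration (function and variable names are our own, not from the paper):

```python
# Minimal sketch of the Collatz function C(x) and its iterates,
# used only to illustrate the conjecture numerically.
def C(x):
    return 3 * x + 1 if x % 2 == 1 else x // 2

def trajectory_to_one(m):
    """Iterate C starting from m until the value 1 is first reached."""
    traj = [m]
    while traj[-1] != 1:
        traj.append(C(traj[-1]))
    return traj
```

For instance, `trajectory_to_one(6)` yields 6, 3, 10, 5, 16, 8, 4, 2, 1.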
Generalization
--------------
We introduce the concept of a generalized Collatz function $C_{a,d,m}'(x)$ as:
$$C'_{a,d,m}(x) = \begin{cases}
mx + a & \text{if } x \not\equiv 0 \mod d\\
\frac{x}{d} &\text{if } x \equiv 0 \mod d\\
\end{cases}$$
where $x,a,d,m \in \mathbb{N}$. For an arbitrary choice of $(x, a, d, m)$ and for sufficiently large values of $k \in \mathbb{N}$, we would like to explore the nature of $C_{a,d,m}'^{(k)}(x)$ [@generalized]. In this paper, we look at a particular case of the generalized Collatz function, which we term the Additive Collatz function. An additive Collatz function $T_{a,d}(x)$ is defined as $C_{a,d,1}'(x)$.
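The generalized function and the additive case can be sketched directly (names are ours):

```python
# Sketch of the generalized Collatz function C'_{a,d,m} and the
# additive special case T_{a,d} = C'_{a,d,1}.
def C_gen(x, a, d, m):
    return x // d if x % d == 0 else m * x + a

def T(x, a, d):
    return C_gen(x, a, d, 1)

def additive_trajectory(x, a, d, steps):
    """First `steps` iterates of T_{a,d} starting from x."""
    out = [x]
    for _ in range(steps):
        out.append(T(out[-1], a, d))
    return out
```

For example, with $a=2$ and $d=3$ the trajectory of $5$ is 5, 7, 9, 3, 1, 3, 1, ..., which loops between 1 and 3.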
Terminology
-----------
We use the following definitions:
An additive Collatz trajectory $O_{a,d}$ of an integer $x$ is the infinite tuple
$$O_{a,d}(x) = (x, T_{a,d}(x), T_{a,d}^{(2)}(x), ... )$$
A trajectory $O = (o_0, o_1, o_2 ... )$ is said to loop if $\exists k, N \in \mathbb{N}$ such that $\forall n > N$, $o_n = o_{n+k}$.
Given $a, d \in \mathbb{N}$, two natural numbers $x_1$ and $x_2$ are said to be equivalent under the orbit equivalence relation if $\exists n_1, n_2, N \in \mathbb{N}$ such that $\forall k > N$:
$$T_{a,d}^{(k+n_1)}(x_1) = T_{a,d}^{(k+n_2)}(x_2)$$
Given $a, d \in \mathbb{N}$, an orbit is an element of partition of $\mathbb{N}$ under the orbit equivalence relation.
Analysis of Additive Collatz Trajectories
=========================================
The limiting behavior of an additive Collatz trajectory is identified based on the eventual formation of loops. It is a straightforward observation that if $a$ and $d$ are not co-prime, then the trajectory does not necessarily loop.
For non-coprime $a,d \in \mathbb{N}$, there exists $x \in \mathbb{N}$ such that $O_{a,d}(x)$ does not loop.
Consider $ r \not\equiv 0 \mod \gcd(a,d) $. Then, $O_{a,d}(r)$ does not loop. This can be proved by induction on natural numbers.\
Claim: $\forall k \in \mathbb{N}, T_{a,d}^{(k)}(r) = r + ak$\
Base Case: Trivially true for $k = 0$\
Induction Hypothesis: suppose $T_{a,d}^{(k)}(r) = r + ak$ for some $k \in \mathbb{N}$\
Induction Step: Let $\delta= \gcd(a,d)$.
$$T_{a,d}^{(k)}(r) = r + ak \equiv r \mod \delta \implies T_{a,d}^{(k)}(r) \not\equiv 0 \mod d$$ $$\implies T_{a,d}^{(k+1)}(r) = T_{a,d}^{(k)}(r) + a = r + ka + a = r + (k+1)a$$
Hence, the trajectory is an increasing progression, and does not loop.\
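Numerically, with the illustrative (our own) choice $a=2$, $d=4$ and $r=1$, the iterates never become divisible by $d$ and grow by $a$ at each step, exactly as the claim predicts:

```python
# With gcd(a, d) = 2 and r = 1 not a multiple of it, every iterate of
# T_{a,d} stays odd, so the division branch never fires and the
# trajectory is the arithmetic progression r + a*k.
def T(x, a, d):
    return x // d if x % d == 0 else x + a

a, d, r = 2, 4, 1
traj = [r]
for _ in range(10):
    traj.append(T(traj[-1], a, d))
# traj is [1, 3, 5, ..., 21]
```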
It can be observed that if $r$ is not a multiple of $\gcd(a,d)$, then $O_{a,d}(r)$ does not loop. We would now like to analyze the converse of this statement.
Given $a,d \in \mathbb{N}$, if $r \equiv 0 \mod \delta$, $$T_{a,d}^{(k)}(r) = \delta T_{\frac{a}{\delta},\frac{d}{\delta}}^{(k)}\Big( \frac{r}{\delta} \Big)$$ where $\delta = \gcd(a,d)$
If $r$ is a multiple of $\gcd(a,d)$, then Lemma 1 permits us to reduce our analysis to a case where $a$ and $d$ are co-prime. We now only consider the cases where $a$ and $d$ are co-prime and prove that for all natural numbers $x$, the additive Collatz trajectory $O_{a,d}(x)$ loops. This analysis is divided into three propositions.
For co-prime $a,d \in \mathbb{N}$ and given $x \in \mathbb{N}$, there exists $N(x) \in \mathbb{N}$ such that $T_{a,d}^{(N(x))}(x) \leq a$
Consider $(n_i)$ as a sub-trajectory of $O_{a,d}(x)$ such that $$n_0 = x$$ $$n_i = T_{a,d}^{(z_i)}(x)$$
where $T_{a,d}^{(z_i - 1)}(x) = dT_{a,d}^{(z_i)}(x)$. As $a$ and $d$ are co-prime, Bézout’s Lemma forces the existence of such a sub-trajectory. Let $y_i = z_{i+1} - z_i - 1$. Then,
$$\label{1}
n_{i+1} = \frac{n_i + ay_i}{d}$$
On solving the recursion,
$$n_{k+1} = \frac{n_0 + a\sum_{i = 0}^k y_id^i}{d^{k+1}}$$
By definition, $y_i$ is the least non-negative integer such that $n_i + ay_i$ is divisible by $d$, and hence $y_i$ is strictly less than $d$. Hence,
$$n_{k+1} \leq \frac{n_0 + a\sum_{i = 0}^k (d-1)d^i}{d^{k+1}}$$
Which simplifies to
$$n_{k+1} \leq \frac{n_0 - a}{d^{k+1}} + a$$
For sufficiently large $k$, $d^{k+1} > n_0 - a$. Hence, there is an element in the trajectory which is less than or equal to $a$.\
For co-prime $a,d \in \mathbb{N}$, $O_{a,d}(x)$ loops for all $x \leq a$.
Equation (1) and the fact that $y_i \leq (d-1)$ imply that if $n_i \leq a$, then $n_{i+1} \leq \dfrac{a + a(d-1)}{d} = a$. Thus if $x = n_0 \leq a$, then $n_j \leq a$ for all $j \in \mathbb{N}$. As there are only finitely many natural numbers less than or equal to $a$, the trajectory loops.
Given $a,d \in \mathbb{N}$, $O_{a,d}(r)$ loops for all $r \equiv 0 \mod \gcd(a,d)$.
By Lemma 1, $O_{a,d}(r)$ loops if and only if $O_{\frac{a}{\delta},\frac{d}{\delta}}(\frac{r}{\delta})$ loops where $\delta = \gcd(a,d)$.
Let $a' = a/\delta$, $d' = d/\delta$ and $r' = r/\delta$. By Proposition 2, there exists $N(r')$ such that $T_{a',d'}^{(N(r'))}(r') \leq a'$. Let $T_{a',d'}^{(N(r'))}(r') = k$. As each term of the trajectory depends only on the previous term, the tail of $O_{a',d'}(r')$ is equal to $O_{a',d'}(k)$. By Proposition 3, $O_{a',d'}(k)$ loops, hence $O_{a',d'}(r')$ loops. This implies that $O_{a,d}(r)$ loops.\
A straightforward implication of Proposition 4 is that if $a$ and $d$ are co-prime, then $\forall x \in \mathbb{N}$, $O_{a,d}(x)$ loops.
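This looping result is easy to confirm by brute force. The sketch below (our own code) detects a loop by watching for a repeated value, which suffices because the map $T_{a,d}$ is deterministic:

```python
# Brute-force check of the looping result: a trajectory loops
# iff it revisits some value, since T_{a,d} is deterministic.
from math import gcd

def T(x, a, d):
    return x // d if x % d == 0 else x + a

def loops(x, a, d, max_steps=10000):
    """True if the trajectory of x under T_{a,d} revisits a value."""
    seen = set()
    for _ in range(max_steps):
        if x in seen:
            return True
        seen.add(x)
        x = T(x, a, d)
    return False
```

In our experiments `loops(x, a, d)` returns `True` for every co-prime pair tried, and `False` for the non-looping example $a=2$, $d=4$, $x=1$ covered by Proposition 1.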
Orbit Counting
==============
For co-prime $a$ and $d$, we have shown that $O_{a,d}(x)$ loops. We would now like to characterize the number of unique loops possible. For this, the concept of orbit equivalence relation is proposed. Two natural numbers are said to be equivalent, if their trajectories are the same, eventually. Formally speaking, given $a, d \in \mathbb{N}$, two natural numbers $x_1$ and $x_2$ are said to be equivalent under the orbit equivalence relation if $\exists n_1, n_2, N \in \mathbb{N}$ such that $\forall k > N$:
$$T_{a,d}^{(k+n_1)}(x_1) = T_{a,d}^{(k+n_2)}(x_2)$$
Under the orbit equivalence relation, the set of natural numbers can be partitioned into equivalence classes. The number of equivalence classes is the same as the number of unique loops of trajectories possible.
Proposition 3 and Proposition 4 imply that the number of unique loops formed by $O_{a,d}(x)$ where $x \in \mathbb{N}$ is the same as the number of unique loops formed by $O_{a,d}(x)$ where $x \in \mathbb{Z}/a\mathbb{Z}$. In order to count the equivalence classes, we identify them with orbits of a group action and use Pólya’s Enumeration Theorem to count them.
Group Action
------------
Consider the sub-trajectory $(n_i)$ as defined in the proof of Proposition 2. We observe that given a trajectory $O_{a,d}(x)$, one can construct the sub-trajectory $(n_i)$ and vice-versa. Hence, the loops of the sub-trajectories are in one-to-one correspondence with the loops of the trajectories.
By definition of the sub-trajectory $(n_i)$
$$n_{i+1} = d^{-1}n_i \mod a$$
For $k$ large enough, all subsequent elements of the sub-trajectory are at most $a$. This allows us to identify the limiting behavior of the sub-trajectory with $$(n_k,\: d^{-1}n_k \mod a,\: d^{-2}n_k \mod a,\: d^{-3}n_k \mod a,\: ...)$$ where $n_k \leq a$. Each element of the sub-trajectory is a power of $d^{-1}$ multiplied by $n_k$ and hence, the sub-trajectory can be identified with the action of the group of negative powers of $d$ on $n_k$ under multiplication modulo $a$.
Under the binary operation of multiplication modulo a natural number $a$, the numbers co-prime to $a$ (modulo $a$) form an Abelian group denoted by $(\mathbb{Z}/a\mathbb{Z})^*$. For some $d$ in $(\mathbb{Z}/a\mathbb{Z})^*$, let $H$ be the subgroup generated by $d$.
$$H = \{ d^i : i \in \mathbb{Z} \}$$
Then, the number of orbits formed by quotienting natural numbers by the nature of limiting behavior of the additive Collatz trajectory $O_{a,d}(x)$ is given by the number of orbits under the action of $H$ on $\mathbb{Z}/a\mathbb{Z}$ under the binary operation multiplication modulo $a$.
Computation
-----------
Let $\xi(a,d)$ denote the number of orbits of $\mathbb{Z}/a\mathbb{Z}$ when $H$ acts on it. By Burnside's lemma (the unweighted case of Pólya's Enumeration Theorem), we have:
$$\xi(a,d) = \frac{1}{|H|} \sum_{x \in S} |H_x|$$
where $S = \mathbb{Z}/a\mathbb{Z}$ and $H_x = \{g \in H: gx \equiv x \mod a \}$ is the stabilizer of $x$. $|H_x|$ can be computed by finding the number of solutions $d^t \in H$ of the equation:
$$d^tx \equiv x \mod a$$
Let $m_x = \gcd(x,a)$, $p_x = \frac{a}{m_x}$ and $q_x = \frac{x}{m_x}$. On substitution in equation (2), we have:
$$d^tm_xq_x \equiv m_xq_x \mod (m_xp_x)$$
As $\gcd(p_x,q_x) = 1$, the number of solutions to equation (3) is same as the number of solutions to :
$$d^t \equiv 1 \mod p_x$$
Let the smallest positive solution of this equation be denoted $\alpha_{p_x}(d)$. Therefore, the total number of solutions to equation (2) is $\frac{|H|}{\alpha_{p_x}(d)}$.
Hence, we have:
$$\xi(a,d) = \frac{1}{|H|}\sum_{x \in S}\frac{|H|}{\alpha_{p_x}(d)} = \sum_{x \in S} \frac{1}{\alpha_{p_x}(d)}$$
For each $p_x$, $q_x$ takes values co-prime to $p_x$; hence, by counting repetitions, we get:
$$\xi(a,d) = \sum_{f | a} \frac{\phi(f)}{\alpha_{f}(d)}$$
where $\phi$ is the Euler-totient function.
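A direct implementation of this divisor-sum formula, cross-checked against a brute-force orbit count, is sketched below (all names are ours; we assume $\gcd(a,d)=1$ so that every $\alpha_f(d)$ is defined):

```python
# Orbit count xi(a, d) = sum over divisors f of a of phi(f)/alpha_f(d),
# where alpha_f(d) is the multiplicative order of d modulo f.
from math import gcd

def phi(n):
    """Euler totient (brute force, fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def alpha(f, d):
    """Multiplicative order of d modulo f, with alpha(1, d) = 1."""
    if f == 1:
        return 1
    t, power = 1, d % f
    while power != 1:
        power = power * d % f
        t += 1
    return t

def xi(a, d):
    """Orbit count via the divisor-sum formula (alpha(f,d) | phi(f))."""
    return sum(phi(f) // alpha(f, d) for f in range(1, a + 1) if a % f == 0)

def orbit_count(a, d):
    """Direct count of orbits of multiplication by d acting on Z/aZ."""
    seen, count = set(), 0
    for x in range(a):
        if x not in seen:
            count += 1
            while x not in seen:
                seen.add(x)
                x = x * d % a
    return count
```

For example $\xi(7,2) = 3$, matching the three orbits $\{0\}$, $\{1,2,4\}$ and $\{3,5,6\}$ of multiplication by $2$ on $\mathbb{Z}/7\mathbb{Z}$.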
Upper and Lower Bounds
----------------------
We can set a lower bound on $\xi(a,d)$ by considering the Carmichael function $\lambda$.
$$\lambda(m) = \max \{ \alpha_m(d) : d \in (\mathbb{Z}/m\mathbb{Z})^* \}$$
Hence,
$$\xi(a,d) = \sum_{f | a} \frac{\phi(f)}{\alpha_{f}(d)} \geq \sum_{f | a} \frac{\phi(f)}{\lambda(f)} = \xi_{inf}(a)$$
One can further claim that $\xi_{inf}(a)$ is a strong lower bound for $\xi(a,d)$ as for every $a$, there exists $d$ such that for all factors $f$ of $a$, $\alpha_{f}(d) = \lambda(f)$. The proof of this claim relies on the decomposition of $(\mathbb{Z}/n\mathbb{Z})^*$ into cyclic groups.
A sharp upper bound for $\xi(a,d)$ is $a$, which is attained when $d$ is $1$.
Applications
------------
The computation of $\xi(a,d)$ employs factorization as well as the discrete logarithm function (in the computation of $\alpha_{f}(d)$). There are no known efficient algorithms to compute either of them, making the computation of $\xi(a,d)$ difficult. This difficulty can be exploited for public-key cryptography. Knowing the prime factorization of $a$ can make the computation of $\xi(a,d)$ easier. Consider two primes $p$ and $q$. Then, for some $d$ co-prime to $pq$,
$$\xi(pq,d) = 1 + \frac{\phi(p)}{\alpha_{p}(d)} + \frac{\phi(q)}{\alpha_{q}(d)} + \frac{\phi(pq)}{\alpha_{pq}(d)}$$
On simplification, we have:
$$\xi(pq,d) = 1 + \frac{p-1}{\alpha_{p}(d)} + \frac{q-1}{\alpha_{q}(d)} + \frac{(p-1)(q-1)}{\alpha_{p}(d)\alpha_{q}(d)}\gcd(\alpha_{p}(d),\alpha_{q}(d))$$
Computing $\xi(pq,d)$ would be cumbersome without equation (9); using the prime factorization, however, it is much simpler. This suggests the possibility of designing a public-key cryptography algorithm or key-exchange system using additive Collatz trajectories.
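A quick numerical sanity check of this closed form with the small illustrative values $p=5$, $q=7$, $d=2$ (our own choice; cryptographic use would of course require large primes):

```python
# Evaluate the closed form for xi(p*q, d) using the multiplicative
# orders alpha_p(d) and alpha_q(d).
from math import gcd

def order(d, f):
    """Multiplicative order of d modulo f (assumes gcd(d, f) = 1)."""
    t, power = 1, d % f
    while power != 1:
        power = power * d % f
        t += 1
    return t

p, q, d = 5, 7, 2
ap, aq = order(d, p), order(d, q)  # ap = 4, aq = 3
xi_pq = (1 + (p - 1) // ap + (q - 1) // aq
         + (p - 1) * (q - 1) * gcd(ap, aq) // (ap * aq))
# xi_pq == 1 + 1 + 2 + 2 == 6
```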
Results
=======
This paper puts forward the concept of Additive Collatz Trajectories and provides an analysis of their limiting behavior. A necessary and sufficient condition is provided for eventual looping of the Additive Collatz trajectories, along with a formula to compute the number of unique trajectories possible up to the orbit equivalence relation.
Further Scope
=============
Generalized Collatz Trajectories
--------------------------------
The spirit and strategy of this paper can be used to deal with generalized Collatz trajectories. One immediate result is that if the equation:
$$m^{r+1} x + a ( m^r + m^{r-1} + ... +m +1) \equiv 0 \mod d$$
does not have a solution for any $r$, then for all $x \in \mathbb{N}$, the trajectory formed by iteration of $C_{a,d,m}'(x)$ would not loop. Also, if $ m \equiv 1 \mod d $, then the equation will always have a solution. We can then define a sub-trajectory similar to the one in the proof of Proposition 2 as: $$n_0 = x$$ $$n_{i} = C_{a,d,m}'^{(z_i)}(x)$$ where $C_{a,d,m}'^{(z_i - 1)}(x) = dC_{a,d,m}'^{(z_i)}(x)$. We will be able to show that:
If $d \nmid n_i$, $$n_{i+1} = \dfrac{m^{r_i}n_i + a( m^{r_i-1} + ... +m +1 )}{d}$$ where $r_i \equiv -a^{-1}n_i \mod d$ and $0 < r_i \leq (d-1)$;\
otherwise, $$n_{i+1} = \dfrac{n_i}{d}$$
Public-key Cryptography
-----------------------
As mentioned in subsection 3.4, there is hope for developing a public-key cryptography system that relies on Additive Collatz Trajectories, particularly on counting the number of equivalence classes formed under the orbit equivalence relation. Much effort and study is required to arrive at an implementable cryptographic design, as there are a number of challenges. Firstly, there is no trivial characteristic that is common among the elements of an orbit equivalence class. Secondly, the formula for counting the number of orbit equivalence classes employs a number of functions, hence there is no natural way to compute the inverse for decryption. Lastly, the formula uses the discrete logarithm function, which cannot practically be computed for larger cases. One must deal with these challenges in the process of designing an encryption algorithm using the results proved in this paper.
[9]{} J. C. Lagarias. *The $3x + 1$ problem:* An Annotated Bibliography, II (2000-2009)
Hayden R. Messerman, Joey LeBeau, Dominic Klyve. *Generalized Collatz Functions:* Cycle Lengths and Statistics, International Journal of Undergraduate Research and Creative Activities, Volume 4
---
abstract: 'We propose a multinomial logistic regression model for link prediction in a time series of directed binary networks. To account for the dynamic nature of the data we employ a dynamic model for the model parameters that is strongly connected with the fused lasso penalty. In addition to promoting sparseness, this prior allows us to explore the presence of change points in the structure of the network. We introduce fast computational algorithms for estimation and prediction using both optimization and Bayesian approaches. The performance of the model is illustrated using simulated data and data from a financial trading network in the NYMEX natural gas futures market. Supplementary material containing the trading network data set and code to implement the algorithms is available online.'
author:
- |
Brenda Betancourt\
Department of Statistics, UC Santa Cruz\
and\
Abel Rodr[í]{}guez\
Department of Statistics, UC Santa Cruz\
and\
Naomi Boyd\
Department of Finance, West Virginia University
bibliography:
- 'Allfinal.bib'
title: '**Bayesian Fused Lasso regression for dynamic binary networks**'
---
[*Keywords:*]{} multinomial logistic regression, network link prediction, Pólya-Gamma latent variables, Split Bregman method.
Introduction {#sec:intro}
============
Network data, in which observations correspond to the interactions among a group of nodes, has become pervasive in disciplines as diverse as the social, physical and biological sciences. Accordingly, there has been a growing interest in developing tools for the analysis of network data, particularly from a model-based perspective (for excellent reviews see [@Newman], [@GoldZhAi09] and [@Snij11]). The focus of this paper is on models for time series of binary directed networks that involve the same set of subjects at each time point. In particular, our work is motivated by the study of financial trading networks (FTNs), which capture the pattern of buy and sell transactions in a financial market. A primary goal in the analysis of this type of dynamic network data is link prediction at future times, going as far as predicting the structure of the whole network. An additional goal is to provide a simple model to explore the evolution of the network, and possibly identify change points in the network dynamics. To accomplish these goals we extend the idea of $p_1$ models initially proposed by @Holland for static binary networks.
Consider a directed binary network among $n$ nodes, ${\mathbf{Y}}= [ y_{i,j} ]$, where $y_{i,j} = 1$ if there is a link directed from node $i$ to node $j$, and $y_{i,j} = 0$ otherwise. Holland and Leinhardt’s model assumes conditional independence between pairs of nodes (dyads) and focuses on modeling the pairs $( y_{i,j},y_{j,i} )$ jointly for all $1 \le i < j \le n$ as follows $$\begin{aligned}
\label{eq:p1model}
p\left( y_{i,j}, y_{j,i} \right) \propto
\exp \left\{ \theta_{1} y_{i,j} + \theta_{2} y_{j,i}+\theta_{3} y_{i,j}y_{j,i} \right\}.\end{aligned}$$
This class of models has been extended to a dynamic setting by introducing Markov dependency upon past observations (e.g., see [@BaCa96]). In contrast, in the modeling approach discussed in this paper, the model parameters are time dependent, which adds flexibility and accounts for alterations in the network evolution over time. One challenging feature that is often present in model-based approaches to network data is high dimensionality. In particular, the number of parameters in our proposed model is larger than the number of available observations. To deal with this issue we resort to fused lasso regression by imposing an $L_1$ penalty on the difference between neighboring model parameters [@Tibshi05]. In a Bayesian setting, this is equivalent to assuming a double exponential prior on the differences of the coefficients in contiguous time points. Here, we explore two different computational approaches for our model. First, full Bayesian inference is presented and implemented using two different sampling schemes. However, the computational load of a full Bayesian analysis becomes heavy as the number of nodes in the network increases. As an alternative, we also carry out maximum a posteriori (MAP) estimation utilizing an optimization approach.
The remainder of the paper is organized as follows: Section \[sec:model\] describes our modeling approach. Section \[sec:estimation\] describes the computational algorithms for estimation and prediction from optimization and Bayesian perspectives. Section \[sec:related\] discusses other related work. Section \[sec:applications\] presents three illustrations, two based on simulated data and a third one that focuses on trading networks from the natural gas futures market in the New York Mercantile Exchange (NYMEX). Finally, a short discussion is presented in Section \[sec:discussion\].
Modeling Approach {#sec:model}
=================
Consider a sequence of binary directed networks ${\mathbf{Y}}_{1},\ldots,{\mathbf{Y}}_{T}$, each one observed over a common set of $n$ nodes. The adjacency matrix of the network at time $t$ is therefore an $n \times n$ binary matrix ${\mathbf{Y}}_{t} = [ y_{i,j,t} ]$, where $y_{i,j,t} = 1$ if there is a link directed from node $i$ to node $j$ at time $t$, and $y_{i,j,t} = 0$ otherwise. We adopt the convention $y_{i,i,t} \equiv 0$ so that there are no loops within the network. In the illustration we discuss in Section \[sec:NYMEX\], the nodes in the network correspond to traders in the NYMEX natural gas futures market, so that $y_{i,j,t} = 1$ if trader $i$ sold to trader $j$ at least once during week $t$.
We consider an extension of the $p_1$ model in which the pairs $\{ ( y_{i,j,t} , y_{j,i,t} ) : i < j \}$ are modeled independently using a logistic model of the form $$\label{eq:model}
p\left( y_{i,j,t},y_{j,i,t} \right) \propto
\exp \left\{\theta_{i,j,t,1} y_{i,j,t} + \theta_{i,j,t,2} y_{j,i,t} + \theta_{i,j,t,3} y_{i,j,t}y_{j,i,t} \right\} ,$$ where $\theta_{i,j,t,1}$ and $\theta_{i,j,t,2}$ represent the baseline probabilities of a directed link between nodes $i$ and $j$, and $\theta_{i,j,t,3}$ controls the level of dependence between $y_{i,j,t}$ and $y_{j,i,t}$. For example, $\theta_{i,j,t,3} = 0$ implies that $y_{i,j,t}$ and $y_{j,i,t}$ are conditionally independent with ${\mathsf{Pr}}( y_{i,j,t} = 1 ) = \exp\left\{ \theta_{i,j,t,1} \right\}/\left(1 + \exp\left\{ \theta_{i,j,t,1} \right\} \right)$ and ${\mathsf{Pr}}( y_{j,i,t} = 1 ) = \exp\left\{ \theta_{i,j,t,2} \right\}/\left(1 + \exp\left\{ \theta_{i,j,t,2} \right\} \right)$. On the other hand, $\theta_{i,j,t,3} > 0$ favors outcomes in which $y_{i,j,t} = y_{j,i,t}$ (a phenomenon often called positive reciprocity in the network literature), while $\theta_{i,j,t,3} < 0$ favors situations in which $y_{i,j,t} \ne y_{j,i,t}$ (often called negative reciprocity). Hence, by allowing the values of $y_{i,j,t}$ and $y_{j,i,t}$ to be potentially correlated the model can accommodate reciprocity.
The parameters in the multinomial logistic model we just described are time dependent. Hence, it is natural and useful to take into account the information about their temporal correlation structure in the estimation process. In particular, we are interested in a random walk model with double exponential priors of the form: $$\begin{aligned}
\theta_{i,j,t,r}& = \theta_{i,j,t-1,r}+\epsilon_{i,j,t,r}, & \epsilon_{i,j,t,r} &\sim {\mathsf{DE}}( 0,1/\lambda ),\end{aligned}$$ where ${\mathsf{DE}}$ represents the double exponential distribution, and $\lambda> 0$ is the parameter that controls the shrinkage level in the differences of the coefficients. A dynamic model of this type on the parameters leads to the joint prior $$\begin{aligned}
p \left( {\boldsymbol{\Theta}}_{i,j,r} \mid \lambda \right) \propto \exp\left\{ -\lambda\sum_{t=1}^{T}|\theta_{i,j,t,r}-\theta_{i,j,t-1,r}|\right\},\end{aligned}$$ where ${\boldsymbol{\Theta}}_{i,j,r}=\left(\theta_{i,j,0,r}, \theta_{i,j,1,r},\ldots, \theta_{i,j,T,r}\right)$ is the vector of parameters for class $r$ and pair of nodes $(i,j)$, and $\theta_{i,j,0,r}=\hat{\theta}_{r,0}$ is assumed known. This pairwise difference prior belongs to the class of Markov random fields and corresponds to a scale mixture of conditionally autoregressive (CAR) priors, which are frequently used in time series, spatial statistics and image processing (e.g., see [@rue2005gaussian]). By assuming that $\theta_{i,j,0,r}$ is known we ensure that the prior distribution, and therefore the associated posterior, is proper. Indeed, note that the more common choice of a flat (improper) prior on $\theta_{i,j,t,r}$ leads in this case to an improper posterior distribution [@sun1999posterior]. In addition, assuming double exponential priors is equivalent to imposing $L_1$ penalty functions on the differences of the parameters in contiguous time points. This penalty type is commonly referred to as the fused lasso with tuning parameter $\lambda$. An extensive review of the fused lasso and its theoretical properties is presented in @Rinal09.
We propose to set the hyperparameters $\hat{\theta}_{1,0}$, $\hat{\theta}_{2,0}$ and $\hat{\theta}_{3,0}$ using a procedure reminiscent of empirical Bayes. In particular, we assume values of $\hat{\theta}_{1,0}$, $\hat{\theta}_{2,0}$ and $\hat{\theta}_{3,0}$ so that the probabilities of the (unobserved) events $( y_{i,j,0}, y_{j,i,0} ) = ( 0, 0 )$, $( y_{i,j,0}, y_{j,i,0} ) = ( 1, 0 )$, $( y_{i,j,0}, y_{j,i,0} ) = ( 0, 1 )$ and $( y_{i,j,0}, y_{j,i,0} ) = ( 1, 1 )$ correspond to their time-average probabilities, i.e., $$\begin{aligned}
\hat{\theta}_{1,0} &= \log\frac{\hat{p}_{1,0}}{\hat{p}_{0,0}} & \hat{\theta}_{2,0} &= \log\frac{\hat{p}_{0,1}}{\hat{p}_{0,0}} & \hat{\theta}_{3,0} &= \log\frac{\hat{p}_{1,1}}{\hat{p}_{0,0}} - \log\frac{\hat{p}_{1,0}}{\hat{p}_{0,0}} - \log\frac{\hat{p}_{0,1}}{\hat{p}_{0,0}} ,\end{aligned}$$ where $$\begin{aligned}
\hat{p}_{0,0} &= \frac{2}{Tn( n - 1 )}\sum_{t = 1}^{T} \sum_{i = 1}^{n-1} \sum_{j = i + 1}^{n}\mathsf{I} ( y_{i,j,t} = 0, y_{j,i,t} = 0 ) , \\
\hat{p}_{1,0} &= \frac{2}{Tn( n - 1 )}\sum_{t = 1}^{T} \sum_{i = 1}^{n-1} \sum_{j = i + 1}^{n}\mathsf{I} ( y_{i,j,t} = 1, y_{j,i,t} = 0 ) , \\
\hat{p}_{0,1} &= \frac{2}{Tn( n - 1 )}\sum_{t = 1}^{T} \sum_{i = 1}^{n-1} \sum_{j = i + 1}^{n}\mathsf{I} ( y_{i,j,t} = 0, y_{j,i,t} = 1 ) , \\
\hat{p}_{1,1} &= \frac{2}{Tn( n - 1 )}\sum_{t = 1}^{T} \sum_{i = 1}^{n-1} \sum_{j = i + 1}^{n}\mathsf{I} ( y_{i,j,t} = 1, y_{j,i,t} = 1 ) ,\end{aligned}$$ and $\mathsf{I}(\cdot)$ represents the indicator function. Other appealing default alternatives are possible, and we use them to study the sensitivity of the model to the prior specification. For example, we could specify $\hat{\theta}_{1,0}$ as the logit of the average probability of an incoming link over the whole history of the network, $\hat{\theta}_{2,0}$ as the logit of the average probability of an outgoing link, and $\hat{\theta}_{3,0} = 0$ to reflect our assumption of no reciprocity a priori. Finally, we also tried setting $\theta_{1,0} = \theta_{2,0} = \theta_{3,0} = 0$, which is consistent with the idea that all categories have the same probability a priori at time 0.
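A minimal sketch of this empirical-Bayes initialization, with the time-averaged frequency ratios mapped through the multinomial-logit link so that the $\hat\theta$'s live on the log-odds scale (function and variable names are ours):

```python
import numpy as np

def init_thetas(Y):
    """Empirical-Bayes starting values (theta_{1,0}, theta_{2,0}, theta_{3,0})
    from a T x n x n binary adjacency array Y (no self-loops)."""
    T, n, _ = Y.shape
    counts = np.zeros((2, 2))
    for t in range(T):
        for i in range(n):
            for j in range(i + 1, n):
                counts[Y[t, i, j], Y[t, j, i]] += 1
    p = counts / counts.sum()          # time-averaged dyad-outcome frequencies
    th1 = np.log(p[1, 0] / p[0, 0])    # log-odds of (1,0) vs (0,0)
    th2 = np.log(p[0, 1] / p[0, 0])    # log-odds of (0,1) vs (0,0)
    th3 = np.log(p[1, 1] / p[0, 0]) - th1 - th2
    return th1, th2, th3
```

Note that a cell frequency of zero would make a log diverge; in practice one may add a small pseudo-count to each cell before taking ratios.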
Estimation and Prediction {#sec:estimation}
=========================
Let ${\boldsymbol{\Theta}}_{i,j}=\{{\boldsymbol{\Theta}}_{i,j,1}, {\boldsymbol{\Theta}}_{i,j,2}, {\boldsymbol{\Theta}}_{i,j,3} \}$ be the collection of all parameter paths for the pair of nodes $(i,j)$. The log-posterior distribution of the parameters is given by $$\begin{aligned}
\label{eq:posterior}
\sum_{i < j} \left\{ V_{i,j}({\boldsymbol{\Theta}}_{i,j}) - \lambda\sum\limits_{r=1}^{3}\|{\mathbf{L}}{\boldsymbol{\Theta}}_{i,j,r}\|_{1} \right\}\end{aligned}$$ where $$\begin{gathered}
V_{i,j}({\boldsymbol{\Theta}}_{i,j}) =
\sum_{t=1}^{T} \Big\{ y_{i,j,t} \theta_{i,j,t,1} + y_{j,i,t} \theta_{i,j,t,2} + y_{i,j,t}y_{j,i,t} \theta_{i,j,t,3} \\
- \log \left( 1 + \exp\left\{ \theta_{i,j,t,1} \right\} + \exp\left\{ \theta_{i,j,t,2} \right\} + \exp\left\{ \theta_{i,j,t,1} + \theta_{i,j,t,2} + \theta_{i,j,t,3} \right\} \right)
\Big\} \end{gathered}$$ is the (unpenalized) log-likelihood, $\| \cdot \|_{1}$ denotes the $L_{1}$-norm, and ${\mathbf{L}}$ is a pairwise difference matrix of dimension $T \times ( T + 1 )$ of the form
$$\begin{aligned}
{\mathbf{L}}=
\begin{bmatrix}
-1 & 1 & 0 & \cdots & 0 & 0 \\
0 & -1 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0& 0 & \cdots &-1 & 1
\end{bmatrix}.\end{aligned}$$
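As a quick illustration of the role of ${\mathbf{L}}$, the sketch below (our own, not from the paper's supplementary code) builds the $T \times (T+1)$ difference matrix and evaluates the fused penalty $\lambda\|{\mathbf{L}}{\boldsymbol{\Theta}}\|_1$ for a coefficient path:

```python
import numpy as np

def diff_matrix(T):
    """Pairwise difference matrix L of size T x (T+1), so that
    (L theta)_t = theta_t - theta_{t-1} for t = 1, ..., T."""
    L = np.zeros((T, T + 1))
    idx = np.arange(T)
    L[idx, idx] = -1.0
    L[idx, idx + 1] = 1.0
    return L

def fused_penalty(theta, lam):
    """lambda * ||L theta||_1 for a path theta = (theta_0, ..., theta_T)."""
    L = diff_matrix(len(theta) - 1)
    return lam * np.abs(L @ theta).sum()

# a piecewise-constant path pays only for its jumps: |0| + |1| + |0| + |2| = 3
pen = fused_penalty(np.array([0.0, 0.0, 1.0, 1.0, 3.0]), lam=2.0)
```

This makes explicit why the penalty promotes piecewise-constant paths: constant stretches contribute nothing, so the posterior mode tends to change only at a few time points.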
Given $\lambda$, the log-posterior can be broken down into $n( n - 1 )/2$ estimation problems, each one corresponding to fitting a multinomial regression for each pair of nodes in the network.
In the sequel we focus on algorithms that can be used to solve each of these independent problems, which can then be run trivially in a parallel environment. First, we describe two different sampling algorithms for full Bayesian inference. The two schemes yield equivalent estimates, but we are interested in comparing their efficiency (see Section \[sec:applications\]). We also present a faster optimization alternative for point estimation and prediction that allows implementation of the model in big data settings.
Full Bayesian Inference {#sec:Bayes}
-----------------------
In order to perform Bayesian inference with a multinomial likelihood, we exploit the data-augmentation method based on Pólya-Gamma latent variables proposed by @PolsonScott13. Using this approach, the multinomial likelihood can be represented as a mixture of normals with Pólya-Gamma mixing distribution. This approach allows for a full conjugate hierarchical representation of the model and posterior inference through relatively simple Markov chain Monte Carlo (MCMC) algorithms.
For the Bernoulli case, the contribution of the observation $y_{t} \in \{0,1\}$ to the likelihood can be written as $$\begin{aligned}
L(\psi_{t})=\dfrac{\exp(y_t \psi_{t})}{1+\exp(\psi_{t})}
\propto \exp(\kappa_{t}\psi_{t}) \int_{0}^{\infty}\exp\{-\omega_{t}\psi_{t}^{2}/2\}p(\omega_{t})d\omega_{t}\end{aligned}$$ where $\psi_{t}$ is the log odds of $y_{t} = 1$, $\kappa_{t} = y_{t} - 1/2$ and $p(\omega_{t})$ is the Pólya-Gamma density with parameters $(1,0)$. Hence, by augmenting the model with the latent variable $\omega_t$, conditional Gaussianity for the Bernoulli likelihood can be easily achieved.
Similarly, for the multinomial case, conditional on $\omega_{i,j,t,r}$, the full conditional *likelihood* of each $\theta_{t,r}$ is given by $$\begin{aligned}
L(\theta_{i,j,t,r} \mid \theta_{i,j,t,-r}) \propto \exp \left \{ -\frac{\omega_{i,j,t,r}}{2}(\theta_{i,j,t,r} + C_{i,j,t,r})^{2} +\kappa_{i,j,t,r}(\theta_{i,j,t,r}+C_{i,j,t,r}) \right\}\end{aligned}$$ with $$\begin{aligned}
C_{t,1} &= \log \frac{1 + \exp\left\{\theta_{t,2} + \theta_{t,3}\right\}}{1 + \exp\left\{\theta_{t,2}\right\}} & \kappa_{t,1} &= y_{i,j,t} - 1/2 \\
C_{t,2} &= \log \frac{1 + \exp\left\{\theta_{t,1} + \theta_{t,3}\right\}}{1 + \exp\left\{\theta_{t,1}\right\}} & \kappa_{t,2} &= y_{j,i,t} - 1/2 \\
C_{t,3} &= \log \frac{\exp\left\{\theta_{t,1} + \theta_{t,2}\right\}}{1 + \exp\left\{\theta_{t,1}\right\} + \exp\left\{\theta_{t,2}\right\}} & \kappa_{t,3} &= y_{i,j,t}y_{j,i,t} - 1/2 \end{aligned}$$ and $\omega_{i,j,t,r} \mid {\boldsymbol{\Theta}}\sim {\mathsf{PG}}\left(1,\theta_{i,j,t,r} + C_{i,j,t,r} \right)$. In the previous expression, ${\mathsf{PG}}$ denotes a Pólya-Gamma distribution. Hence, conditionally on the latent variable $\omega_{i,j,t,r}$ we obtain an augmented Gaussian likelihood with observations $y^{*}_{i,j,t,r} \sim {\mathsf{Normal}}(\theta_{i,j,t,r},\omega^{-1}_{i,j,t,r})$, where $y^{*}_{i,j,t,r}=\kappa_{i,j,t,r}/\omega_{i,j,t,r} - C_{i,j,t,r}$. Hereinafter, we simplify notation by dropping the subindices $i$ and $j$ associated with the subject pair.
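The offsets $C_{t,r}$ can be checked numerically: factoring the dyad normalizing constant should leave a purely logistic term in each parameter. The sketch below (our notation) does this for $C_{t,1}$ and $C_{t,3}$, taking $C_{t,3}=\theta_{t,1}+\theta_{t,2}-\log(1+e^{\theta_{t,1}}+e^{\theta_{t,2}})$, which is what the factorization of the normalizing constant yields:

```python
import numpy as np

def dyad_pmf(y1, y2, t1, t2, t3):
    """Joint pmf of (y_ij, y_ji) under the dyad model."""
    Z = 1.0 + np.exp(t1) + np.exp(t2) + np.exp(t1 + t2 + t3)
    return np.exp(t1 * y1 + t2 * y2 + t3 * y1 * y2) / Z

def C1(t2, t3):
    # from factoring Z = (1 + e^{t2}) * (1 + e^{t1 + C1})
    return np.log1p(np.exp(t2 + t3)) - np.log1p(np.exp(t2))

def C3(t1, t2):
    # from factoring Z = (1 + e^{t1} + e^{t2}) * (1 + e^{t3 + C3})
    return t1 + t2 - np.log(1.0 + np.exp(t1) + np.exp(t2))

def logistic(psi):
    return 1.0 / (1.0 + np.exp(-psi))
```

The check itself: the likelihood ratio in, say, $\theta_{t,1}$ at two values must coincide with the logistic ratio at $\theta_{t,1}+C_{t,1}$, since the remaining factor of $Z$ does not involve $\theta_{t,1}$.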
### Latent Variables Approach {#se:latent}
Using the fact that the double exponential distribution can be expressed as a scale mixture of normals with exponential mixing density ([@ParkCas08]) : $$\begin{aligned}
\frac{a}{2}\exp(-a|x|)=
\int_{0}^{\infty}\frac{1}{\sqrt{2\pi \tau}}\exp\left(\frac{x^2}{2\tau}\right)\frac{a^2}{2}\exp\left(-\frac{a^{2}\tau}{2}\right)d\tau, \end{aligned}$$ the proposed model can be expressed as a simple hierarchical extension of a dynamic linear model $$\begin{aligned}
y^{*}_{t,r}&=\theta_{t,r}+\epsilon_{t,r}, &\epsilon_{t,r}&\sim {\mathsf{Normal}}(0,\omega^{-1}_{t,r}),\\
\theta_{t,r}&=\theta_{t-1,r}+\varepsilon_{t,r}, & \varepsilon_{t,r}&\sim {\mathsf{Normal}}(0,\tau^{2}_{t,r}),\end{aligned}$$ for $2 \leq t \leq T$, where $\theta_{1,r} \sim {\mathsf{Normal}}\left( \hat{\theta}_{r,0}, \tau^2_{1,r} \right)$ and $\tau^2_{t,r}$ is exponentially distributed a priori with mean $\frac{2}{\lambda^2}$.
We rely on the dynamic linear model representation to update the parameters in a component-wise fashion using a forward filtering backward sampling (FFBS) algorithm ([@Sylvia94; @CarterKohn94]). Furthermore, the latent parameters $\tau_{t,r}$ for $t = 0,\ldots,T-1$ are independent a posteriori and updated as $$\begin{aligned}
\left(1/\tau^{2}_{t,r}| {\boldsymbol{\Theta}}_{r},\lambda\right) \sim {\mathsf{IGau}}\left(\sqrt{\frac{\lambda^{2}}{(\theta_{t,r}-\theta_{t-1,r})^{2}}},\lambda^{2}\right), \end{aligned}$$ where ${\mathsf{IGau}}$ denotes the Inverse Gaussian distribution ([@KyGiCa10]).
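A compact sketch of the FFBS step for one coefficient path under this local-level representation. This is our own illustrative implementation, assuming the pseudo-observations $y^{*}_{t,r}$ and the variances $\omega^{-1}_{t,r}$ and $\tau^{2}_{t,r}$ from the current Gibbs iteration are given:

```python
import numpy as np

rng = np.random.default_rng(0)

def ffbs(y, omega, tau2, m0):
    """Forward filtering, backward sampling for the local-level model
    y*_t ~ N(theta_t, 1/omega_t), theta_t = theta_{t-1} + N(0, tau2_t),
    with theta_1 ~ N(m0, tau2_1)."""
    T = len(y)
    m = np.zeros(T)                     # filtered means
    c = np.zeros(T)                     # filtered variances
    a, R = m0, tau2[0]                  # prior moments for theta_1
    for t in range(T):
        if t > 0:
            a, R = m[t - 1], c[t - 1] + tau2[t]     # one-step prediction
        K = R / (R + 1.0 / omega[t])                # Kalman gain
        m[t] = a + K * (y[t] - a)
        c[t] = (1.0 - K) * R
    theta = np.zeros(T)
    theta[-1] = rng.normal(m[-1], np.sqrt(c[-1]))   # draw theta_T
    for t in range(T - 2, -1, -1):                  # backward sampling
        B = c[t] / (c[t] + tau2[t + 1])
        theta[t] = rng.normal(m[t] + B * (theta[t + 1] - m[t]),
                              np.sqrt((1.0 - B) * c[t]))
    return theta, m, c
```

The forward pass is a standard Kalman filter for a random walk state; the backward pass draws each $\theta_t$ from its conditional given $\theta_{t+1}$ and the data up to time $t$.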
### Direct Sampling {#se:direct}
Note that the full conditional prior on $\theta_{t,r}$ only involves its two nearest neighbors, so that for $1 \leq t \leq T-1$: $$\begin{aligned}
\pi(\theta_{t,r}|\theta_{t-1,r},\theta_{t+1,r}) \propto
\exp \left \{-\lambda( |\theta_{t,r}-\theta_{t-1,r}|+|\theta_{t+1,r}-\theta_{t,r}|)\right\}.\end{aligned}$$ Hence, the full conditional posterior distribution of $\theta_{t,r}$ is a mixture of truncated normal distributions with three components: $$\begin{gathered}
(\theta_{t,r} \mid y^{*}_{t,r},\theta_{t-1,r},\theta_{t+1,r},\omega_{t,r}) \sim
w_{1}{\mathsf{TN}}(\mu^{(1)}_{t,r},\sigma_{t,r};\theta_{t,r}<\xi_{t,r}) +
w_{2}{\mathsf{TN}}(\mu^{(2)}_{t,r},\sigma_{t,r};\theta_{t,r}>\zeta_{t,r}) \\
+ w_{3}{\mathsf{TN}}(\mu^{(3)}_{t,r},\sigma_{t,r};\xi_{t,r}<\theta_{t,r}<\zeta_{t,r}) \end{gathered}$$ where $\sigma_{t,r}=1/\sqrt{\omega_{t,r}}$, $\xi_{t,r}=\min\{\theta_{t-1,r},\theta_{t+1,r}\}$, $\zeta_{t,r}=\max\{\theta_{t-1,r},\theta_{t+1,r}\}$, the means of the truncated normal distributions are given by the following expressions: $$\begin{aligned}
\mu^{(1)}_{t,r} &= y^{*}_{t,r}+\frac{2\lambda}{\omega_{t,r}}, & \mu^{(2)}_{t,r} &= y^{*}_{t,r}-\frac{2\lambda}{\omega_{t,r}}, & \mu^{(3)}_{t,r} &= y^{*}_{t,r},\end{aligned}$$ and the conditional posterior probabilities of the components of the mixture are given by $$\begin{gathered}
w_{1}=\exp\left\{\frac{\omega_{t,r}}{2}\mu^{(1)}_{t,r}-\lambda(\xi_{t,r}+\zeta_{t,r})\right\} \Phi\left(\frac{\xi_{t,r}-\mu^{(1)}_{t,r}}{\sigma_{t,r}}\right)\\
w_{2}=\exp\left\{\frac{\omega_{t,r}}{2}\mu^{(2)}_{t,r}+\lambda(\xi_{t,r}+\zeta_{t,r})\right\}\Phi\left(\frac{-\zeta_{t,r}-\mu^{(2)}_{t,r}}{\sigma_{t,r}}\right)\\
w_{3}=\exp\left\{\frac{\omega_{t,r}}{2}\mu^{(3)}_{t,r}-\lambda(\zeta_{t,r}-\xi_{t,r})\right\}\left[ \Phi\left(\frac{\zeta_{t,r}-\mu^{(3)}_{t,r}}{\sigma_{t,r}}\right)- \Phi\left(\frac{\xi_{t,r}-\mu^{(3)}_{t,r}}{\sigma_{t,r}}\right)\right] , \end{gathered}$$ where $\Phi$ represents the Gaussian cumulative distribution function.
The direct characterization of the posterior distribution for our model is similar to the work of @Hans09 on Bayesian lasso regression with Gaussian likelihoods. In principle, the efficiency of this algorithm is limited by the use of the full conditional distributions for posterior sampling. However, this approach avoids the introduction of the latent variables $\{\tau_{t,r}\}$ discussed in section \[se:latent\].
### Penalty parameter estimation {#se:penaltyMCMC}
The value of the penalty parameter $ \lambda$ has a direct impact on the quality of the estimates and predictions generated by the model. Hence, under the Bayesian version of our model we assign $\lambda$ a Gamma hyperprior, $\lambda \sim {\mathsf{Gam}}\left( a, b \right)$. This choice is conditionally conjugate and the full-conditional posterior is simply $$\lambda \mid \cdots \sim {\mathsf{Gam}}\left( a + \frac{ 3 (T-1) n (n-1) }{4} , b + \frac{1}{2}\sum_{t=2}^{T} \sum_{i=1}^{n}\sum_{j=i+1}^{n}\sum_{r=1}^{3} \left| \theta_{i,j,t,r} - \theta_{i,j,t-1,r} \right| \right) .$$
Because the parameters are on the logistic scale we select values of $a$ and $b$ such that, marginally, ${\mathsf{Var}}\left\{ \theta_{i,j,t,r} \mid \theta_{i,j,t-1,r} \right\}$ is no larger than 1 (so that we do not favor link probabilities that are very close to either 0 or 1). We suggest $a=1$ and $b=1/5$, so that the median of ${\mathsf{Var}}\left\{ \theta_{i,j,t,r} \mid \theta_{i,j,t-1,r} , \lambda \right\}$ is approximately 0.4, and perform a sensitivity analysis to investigate the effect of our choice in the quality of the predictions.
### Link Prediction
As we discussed in the introduction, one of the goals of our analysis is short-term link prediction. For either of our sampling algorithms, Monte Carlo posterior samples of the parameters at a future time $T+1$ can be obtained as: $$\begin{aligned}
\theta^{(b)}_{i,j,T+1,r} &\sim {\mathsf{DE}}\left( \theta^{(b)}_{i,j,T,r},1/\lambda^{(b)} \right) , & b&=1,\ldots,B.\end{aligned}$$ Hence, we can estimate (for $i<j$) the probability of a directed link from node $i$ to node $j$ at time $T+1$ as $$\begin{gathered}
\hat{p}\left(y_{i,j,T+1} = 1 \mid {\mathbf{Y}}_{T} \right) =
\dfrac{1}{B}\sum_{b=1}^{B}\left\{p\left[(y_{i,j,T+1},y_{j,i,T+1}) = (1,0) \mid {\boldsymbol{\theta}}^{(b)}_{i,j,T+1}\right] \right. \\
\left. + p\left[(y_{i,j,T+1},y_{j,i,T+1}) = (1,1) \mid {\boldsymbol{\theta}}^{(b)}_{i,j,T+1}\right] \right\},\end{gathered}$$ with a similar expression being valid for $\hat{p}\left(y_{j,i,T+1} = 1 \mid {\mathbf{Y}}_{T} \right)$.
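The one-step-ahead prediction can be sketched as follows. This is our own illustration: `theta_T` holds $B$ posterior draws of $(\theta_{i,j,T,1},\theta_{i,j,T,2},\theta_{i,j,T,3})$ for a single pair, `lam_draws` the matching draws of $\lambda$, and each draw is propagated with a double-exponential (Laplace) innovation:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_link(theta_T, lam_draws):
    """Monte Carlo estimate of Pr(y_{i,j,T+1} = 1 | Y_1, ..., Y_T) from
    B posterior draws theta_T (shape (B, 3)) and lambda draws (shape (B,))."""
    # propagate one step ahead with Laplace innovations of scale 1/lambda
    theta_next = theta_T + rng.laplace(0.0, 1.0 / lam_draws[:, None],
                                       size=theta_T.shape)
    t1, t2, t3 = theta_next.T
    Z = 1.0 + np.exp(t1) + np.exp(t2) + np.exp(t1 + t2 + t3)
    p10 = np.exp(t1) / Z                 # outcome (y_ij, y_ji) = (1, 0)
    p11 = np.exp(t1 + t2 + t3) / Z       # outcome (y_ij, y_ji) = (1, 1)
    return (p10 + p11).mean()
```

Averaging the two outcome probabilities that include $y_{i,j,T+1}=1$ over the propagated draws gives the predictive link probability.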
Posterior mode estimation {#sec:optim}
-------------------------
Markov chain Monte Carlo algorithms allow for full posterior inference on our model, but can be too slow to be of practical applicability in large datasets. This issue is particularly pronounced in the case of network data because the number of observations grows as the square of the number of nodes. As an alternative, we develop an optimization algorithm for maximum a posteriori estimation and prediction. Our algorithm is an extension of the Split Bregman method proposed by @YeXie11 to solve general optimization problems with convex loss functions and $L^{1}$ penalized parameters (see also [@GoldOsher09]). The algorithm is iterative and involves the reformulation of the optimization problem as a constrained one with the linear restriction ${\mathbf{L}}{\boldsymbol{\Theta}}={\mathbf{b}}$, and the introduction of a vector of dual variables ${\mathbf{v}}$ used to split the optimization problem into more tractable steps. Furthermore, we also rely on a second-order Taylor approximation to the multinomial likelihood for the implementation.
The proposed algorithm consists of repeating the following steps until convergence for each vector of parameters ${\boldsymbol{\Theta}}_{r}$:
1. $ {\boldsymbol{\Theta}}_{r}^{(m+1)}=\underset{{\boldsymbol{\Theta}}}{\arg\max} \quad V({\boldsymbol{\Theta}}^{(m)})
-\langle {\mathbf{v}}_{r}^{(m)},\mathbf{L}{\boldsymbol{\Theta}}_{r}^{(m)}-{\mathbf{b}}_{r}^{(m)} \rangle -\frac{\mu}{2}\|{\mathbf{L}}{\boldsymbol{\Theta}}_{r}^{(m)}-{\mathbf{b}}_{r}^{(m)}\|^{2}_2$
2. ${\mathbf{b}}_{r}^{(m+1)}=\mathfrak{T}_{\lambda\mu^{-1}}\left({\mathbf{L}}{\boldsymbol{\Theta}}_{r}^{(m+1)}+\mu^{-1}{\mathbf{v}}_{r}^{(m)}\right)$
3. ${\mathbf{v}}_{r}^{(m+1)}={\mathbf{v}}_{r}^{(m)}+\delta\left({\mathbf{L}}{\boldsymbol{\Theta}}_{r}^{(m+1)}-{\mathbf{b}}_{r}^{(m+1)}\right)$
where ${\mathbf{v}}_{r}$ is a vector of dual variables, and $\mathfrak{T}_{\lambda}(\mathbf{w})=[t_{\lambda}(w_1),t_{\lambda}(w_2),\ldots]^{'}$ is a thresholding operator with $t_{\lambda}(w_i)=\operatorname{sgn}(w_i)\max\{0,|w_i|-\lambda\}$, and $0<\delta \leq\mu$. We follow previous literature and set $\delta=\mu$ for our implementation noting that convergence of the algorithm is guaranteed for any value of $\mu$ [@YeXie11; @GoldOsher09].
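The thresholding operator in step (ii) is the usual soft-thresholding map; a one-function sketch (our own):

```python
import numpy as np

def soft_threshold(w, lam):
    """Elementwise soft-thresholding t_lam(w) = sgn(w) * max(0, |w| - lam),
    applied to the vector L Theta + v / mu in step (ii) of split Bregman."""
    return np.sign(w) * np.maximum(0.0, np.abs(w) - lam)
```

Entries smaller than `lam` in absolute value are set exactly to zero, which is what produces exactly fused (constant) blocks in the estimated paths.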
Efficiency of this algorithm is mainly constrained by the maximization over ${\boldsymbol{\Theta}}_{r}$ in the first step. To accelerate it, we replace $V({\boldsymbol{\Theta}})$ by its second-order Taylor expansion around the current iterate and proceed to perform component-wise optimization (e.g., see [@Krishna05]). Using this substitution, subproblem (i) is differentiable and the estimate of a component $\theta_{t,r}$ of ${\boldsymbol{\Theta}}_{r}$ for $1 < t < T-1$ is updated as:
$$\begin{gathered}
\hat{\theta}^{(m+1)}_{t,r}= \left(G^{(m)}_{t,r}-2\mu \right)^{-1}
\left[G^{(m)}_{t,r}\hat{\theta}^{(m)}_{t,r}-g^{(m)}_{t,r} - (v^{(m)}_{t,r}-v^{(m)}_{t-1,r})\right. \\ \left.
-\mu(\hat{\theta}^{(m)}_{t+1,r}+\hat{\theta}^{(m)}_{t-1,r}+b^{(m)}_{t-1,r}-b^{(m)}_{t,r})\right],\end{gathered}$$
where $g^{(m)}_{t,r} = \left. \frac{\partial V}{\partial \theta_{t,r}} \right|_{{\boldsymbol{\Theta}}^{(m)}_{r}}$ and $G^{(m)}_{t,r} = \left. - \frac{\partial^2 V}{\partial \theta^{2}_{t,r}} \right|_{{\boldsymbol{\Theta}}^{(m)}_{r}}$ are the gradient and the information in the direction of $\theta_{t,r}$ evaluated in the current iterate values. The updates for $t=T$ are obtained in a similar fashion with some minor adjustments.
Note that in the maximum a posteriori estimates obtained in this fashion, the coefficient differences (${\mathbf{b}}={\mathbf{L}}{\boldsymbol{\Theta}}$) can be exactly zero. This induces a block partition of the parameters that is suitable for change-point identification [@RojasWahl14; @HarLevy08].
### Selection of the penalty parameter {#se:select}
The penalty $\lambda$ can be selected through cross-validation by training the model on an observed sample ${\mathbf{Y}}_1, \ldots, {\mathbf{Y}}_t$, and performing a one-step-ahead prediction for ${\mathbf{Y}}_{t+1}$ for a grid of values of $\lambda$. This procedure can be repeated to obtain a set of predicted networks $\hat{{\mathbf{Y}}}_{t+1}, \ldots, \hat{{\mathbf{Y}}}_{t+m}$ for $t+m \leq T$. Each of these predictions is then compared against the corresponding observed network, the numbers of false and true positives are computed, and a receiver operating characteristic (ROC) curve is constructed. Finally, the optimal penalty parameter can be chosen as the value of $\lambda$ in the grid that provides the highest area under the curve (AUC), averaged over the $m$ predicted networks in the testing dataset.
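The AUC used in this selection step can be computed directly from the Mann–Whitney rank statistic, i.e., the probability that a randomly chosen true link receives a higher predicted score than a randomly chosen non-link. A sketch (our own helper; ties between scores are not handled with midranks here):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic.
    scores: predicted link probabilities; labels: observed 0/1 links."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Averaging this quantity over the $m$ held-out networks gives the criterion maximized over the grid of $\lambda$ values.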
One potential drawback of this approach is that selection of the optimal tuning parameter through cross-validation can be computationally expensive [@Tibshi05]. A popular alternative that can be used with our MAP estimation procedure is a model selection criterion (e.g., AIC or BIC). Our approach is to select the penalty $\lambda$ from a pre-specified grid of values by maximizing the Bayesian Information Criterion (BIC) $$\begin{aligned}
BIC_{\lambda} =\sum_{i < j} \left[ 2V_{i,j}(\hat{{\boldsymbol{\Theta}}}_{i,j}) - \mathcal{K}_{i,j}(\lambda) \log(T-1)\right],\end{aligned}$$ where $\mathcal{K}_{i,j}(\lambda)$ is an estimate of the number of degrees of freedom when the penalty parameter $\lambda$ is used to compute the MAP estimate. In the case of the fused lasso, @TibTay11 showed that the number of non-zero blocks of coefficients in $\hat{{\boldsymbol{\Theta}}}_{i,j}$ is a rough unbiased estimate of the degrees of freedom.
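The block-counting estimate of the degrees of freedom can be sketched as follows (our own helper; `tol` sets when two consecutive coefficients are treated as fused into one block):

```python
def fused_df(theta_hat, tol=1e-8):
    """Estimated degrees of freedom for the fused lasso: the number of
    non-zero blocks of (approximately) constant consecutive coefficients."""
    blocks = 0
    prev = None
    for v in theta_hat:
        # a new block starts when the value is non-zero and differs from
        # the previous coefficient; zero-valued blocks are not counted
        if abs(v) > tol and (prev is None or abs(v - prev) > tol):
            blocks += 1
        prev = v
    return blocks
```

For example, the path $(0, 0, 2, 2, 3, 0, 2)$ contains three non-zero blocks, so it contributes three degrees of freedom to $\mathcal{K}_{i,j}(\lambda)$.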
### Link Prediction
Given a point estimate $ \hat{{\boldsymbol{\theta}}}_{i,j,T}$ based on an observed sample ${\mathbf{Y}}_1, \ldots, {\mathbf{Y}}_T$, the probability of a directed link from node $i$ to node $j$ at time $T+1$ is estimated as $$\begin{gathered}
\hat{p}\left(y_{i,j,T+1} = 1 \mid {\mathbf{Y}}_{T} \right) =
p\left[(y_{i,j,T+1},y_{j,i,T+1}) = (1,0) \mid \hat{{\boldsymbol{\theta}}}_{i,j,T}\right]\\
+ p\left[(y_{i,j,T+1},y_{j,i,T+1}) = (1,1) \mid \hat{{\boldsymbol{\theta}}}_{i,j,T}\right],\end{gathered}$$ with a similar expression being valid for $\hat{p}\left(y_{j,i,T+1} = 1 \mid {\mathbf{Y}}_{T} \right)$.
Related Work {#sec:related}
============
Computation {#se:revcomput}
-----------
The literature on algorithms for parameter estimation in linear regression with a fused lasso penalty is extensive. This is a challenging problem because the fused lasso penalty is neither separable nor smooth, and traditional optimization methods fail under these conditions. In particular, some algorithms that provide a solution path for sequential increments of the regularization parameter have been developed for the Fused Lasso Signal Approximator (FLSA), where the design matrix is ${\mathbf{X}}={\mathbf{I}}$ [@Fried07; @Hoef10a], and for a general full rank design matrix ${\mathbf{X}}$ [@TibTay11], but only in the case of Gaussian regression.
In this work, we are interested in fused lasso penalized multiclass logistic regression. @FriHaTib10 explores coordinate descent regularization paths for logistic and multinomial logistic regression by using iteratively reweighted least squares (IRLS), but only for lasso, ridge and elastic net penalties (see also [@Krishna05]). @Hoef10b proposes a coordinate-wise algorithm for the fused lasso that can be extended to logistic regression using IRLS, but no path solution algorithms have been fully developed for the multinomial logistic regression setting that is the focus of this paper. Recently, @YuWon13 introduced a Majorization-Minimization (MM) algorithm for fused lasso penalized generalized linear models that benefits from parallel processing. They also present a thorough comparison with other existing algorithms, including regularization path and first-order methods. For a fixed set of penalization parameters, several optimization algorithms have been proposed for fused lasso problems with general smooth and convex loss functions, but not for the specific case of multinomial logistic regression. @LiuYuYe10 proposes an Efficient Fused Lasso Algorithm (EFLA) which solves a FLSA subproblem via a Subgradient Finding Algorithm. @GoldOsher09 use the split Bregman iteration method to deal with a set of image processing problems that can be treated as general $L_1$ penalized problems. Motivated by this idea, @YeXie11 developed the split Bregman based algorithm for the generalized fused lasso with Gaussian likelihoods. We further extend split Bregman algorithms by introducing a version of the approach for categorical and, in particular, dyadic data likelihoods. In our experience, these kinds of algorithms tend to converge faster and avoid local modes that pose difficulties for most of the other algorithms mentioned above.
From a Bayesian perspective, a general hierarchical model for penalized linear regression that includes the fused lasso penalty is presented in @KyGiCa10 for the Gaussian case (see also [@ParkCas08; @Hans09]). In contrast, the MCMC algorithms discussed in Section \[sec:Bayes\] are designed to deal with categorical data. Furthermore, the latent variable approach from Section \[se:latent\] exploits the particular Markovian structure of the problem at hand to generate a much more efficient algorithm than the naive implementation of @KyGiCa10 would suggest. On the other hand, and to the best of our knowledge, the direct sampling algorithm of Section \[se:direct\], which extends that of [@Hans09] from the regular to the fused lasso, has not been described in the literature before. It is also worth mentioning the work of @ScottPillow12, who used a data augmentation approach for full Bayesian inference of neural spike data counts observed over time by proposing a dynamic negative-binomial factor model with an autoregressive structure. Although both kinds of problems share time-dependent parameters and their algorithm shares some features with our latent variable sampler, the structure of our dyad-based likelihood is quite different, and the Pólya-Gamma augmentation scheme required for our network represents a non-trivial extension.
Alternative sparsity-inducing priors for time-varying parameters have been introduced in [@chan2012time], who discuss priors for model selection in dynamic contexts, and by [@fruhwirth2010stochastic], [@kalli2014time] and [@belmonte2014hierarchical], who derive continuous shrinkage priors that aggressively shrink small coefficients without explicitly zeroing them out. All these techniques were developed in the context of dynamic regression models. Although they could be adapted to identify change points by considering differences between parameter levels, implementing them would come at a significant additional computational cost.
Models for dynamic networks
---------------------------
@Sarkar12 presents a nonparametric link prediction algorithm for sequences of directed binary networks in which each observation in time is modeled using a moving window, and the corresponding function is estimated through kernel regression. The authors also incorporate pair-specific features, as well as a spatial dimension based on local neighborhoods for each node. @HuangLin09 present an autoregressive integrated moving average (ARIMA) model, and combine it with link occurrence scores based on similarity indices of network topology measures for link prediction in temporal weighted networks (see also [@daSilva12]). More recently, @BlissFrank14 proposed a method based on similarity indices and node attributes combined with a covariance matrix adaptation evolution strategy for link prediction in networks with a large number of nodes.
Other relevant approaches include the dynamic versions of the latent space model of @Hoff2 presented in @SarkarMoore05 and @SewChen15, and the work of @XiFuSo10 developing the temporal extension of the mixed membership blockmodel first introduced in @Airoldi for community identification in social networks. @BetaRodBoyd15 extend the Bayesian infinite-dimensional model of @Kemp by linking different time periods through a hidden Markov model. On the other hand, @HanFuXi10 proposes a temporal version of the exponential random graph model (tERGM) originally introduced in @Frank. This temporal model can be used to infer links, but its predictive ability is poor unless node attributes or dyadic covariates are included in the model in addition to traditional static network statistics (e.g. reciprocity, transitivity and popularity statistics). @CranDes11 present a more general temporal ERGM that includes node and dyad-level covariates with applications to political science (see also [@SnijSteBunt10]). In this extension, the square root of the indegree and outdegree are added as node attributes at every time point, and functions of past networks can be utilized as dyadic covariates.
A key feature of our model is its scalability and efficiency. Because the model structure is relatively simple and dyads are modeled as conditionally independent, estimation and prediction algorithms are fast and can be easily implemented in parallel environments. This means that our model can more easily be scaled to long series of large networks than those discussed above. Conditional independence does have some drawbacks. In particular, although the model directly models reciprocity, it does not explicitly account for transitivity. In spite of this, the illustrations we present in the following sections suggest that our model is at least competitive and, in some cases, superior from a predictive point of view to other state-of-the-art models.
Illustrations {#sec:applications}
=============
The purpose of this section is to evaluate the performance of our model and compare it with the temporal exponential random graph model (tERGM) in terms of link prediction ability. We used the `xergm` package in R to estimate the tERGM [@LeCranDes14]. More specifically, the tERGM is estimated with the package's fitting function, which implements the bootstrapped pseudolikelihood procedure presented in @DesCran12. The model we fit includes all the typical ERGM terms, the square root of the in- and out-degrees as node covariates, and the lagged network and delayed reciprocity to model cross-temporal dependencies.
We start by evaluating the performance of the two sampling schemes for Bayesian inference using the effective sample size (ESS) and execution time metrics, and comparing their efficiency with that of the optimization method for posterior mode estimation using simulated data. We then move on to evaluate the predictive capabilities of the models on both simulated and real data examples consisting of $n=71$ actors and $T=201$ observations in time. For this purpose, we carry out out-of-sample cross-validation exercises where we hold out the last ten weeks in the data set and make one-step-ahead predictions for the structure of the held-out networks. More specifically, for each $t=191,192,\ldots,200$ we use the information contained in ${\mathbf{Y}}_1, \ldots, {\mathbf{Y}}_{t}$ to estimate the model parameters and obtain predictions $\hat{{\mathbf{Y}}}_{t+1}$. Using a simple 0/1 utility function, a future link from node $i$ to node $j$ is predicted as $\hat{y}_{i,j,t+1} = \mathbb{I}\left(\hat{p}\left(y_{i,j,t+1}=1\mid {\mathbf{Y}}_{1},\ldots,{\mathbf{Y}}_{t} \right)> f\right)$, for some threshold $f$ that reflects the relative cost associated with false positive and false negative links. Each of these predictions is compared against the observed network ${\mathbf{Y}}_{t+1}$ to construct a receiver operating characteristic (ROC) curve. For the tERGM, these results are based on 1,000 MCMC simulations, with other function parameters left at their default values (see the package documentation for more details).
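As an illustration of this prediction rule, the following Python sketch (the helper functions are our own, not part of the paper's supplementary code) computes the thresholded 0/1 predictions and the area under the ROC curve from a matrix of estimated link probabilities:

```python
import numpy as np

def predict_links(p_hat, f=0.5):
    """0/1 utility rule: predict a link wherever the estimated link
    probability exceeds the threshold f."""
    return (p_hat > f).astype(int)

def roc_auc(p_hat, y_obs):
    """Area under the ROC curve via the rank (Mann-Whitney)
    formulation, restricted to off-diagonal dyads."""
    n = y_obs.shape[0]
    mask = ~np.eye(n, dtype=bool)
    p, y = p_hat[mask], y_obs[mask]
    pos, neg = p[y == 1], p[y == 0]
    # fraction of (link, non-link) pairs ranked correctly; ties count 1/2
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Sweeping the threshold $f$ over $(0,1)$ traces out the ROC curve, while the AUC summarizes predictive ability independently of the choice of $f$.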
MCMC Performance {#sec:MCMC}
----------------
In order to assess and compare the performance of the latent variable FFBS and the direct sampling MCMC algorithms, we simulated data from our model. The parameters for all pairs of nodes were randomly drawn from double exponential distributions as $\theta_{t,r} \sim {\mathsf{DE}}(\theta_{t-1,r},1/\lambda)$ with a true concentration parameter value of $\lambda=3$. As a measure of efficiency, we use the ESS computed as: $$\begin{aligned}
ESS=\dfrac{B}{1+2\sum_{k=1}^{K}\rho(k)}\end{aligned}$$ where $B$ is the number of post burn-in samples, $\rho(k)$ is the autocorrelation at lag $k$, and $K$ is the cutoff lag point according to the initial monotone sequence estimator ([@Geyer92]).
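As a concrete sketch (the function name is ours), the ESS of a one-dimensional MCMC trace can be computed as follows, with the cutoff $K$ chosen by keeping the sums of adjacent autocorrelation pairs while they are positive and capping them to be nonincreasing, as in Geyer's initial monotone sequence estimator:

```python
import numpy as np

def ess_imse(x):
    """Effective sample size B / (1 + 2 * sum_{k=1}^K rho(k)), with the
    cutoff K chosen via Geyer's initial monotone sequence estimator."""
    x = np.asarray(x, dtype=float)
    B = x.size
    xc = x - x.mean()
    var = np.dot(xc, xc) / B
    if var == 0.0:
        return float(B)
    rho = np.empty(B)
    rho[0] = 1.0
    for k in range(1, B):
        rho[k] = np.dot(xc[:-k], xc[k:]) / (B * var)
    tau, prev = 0.0, np.inf
    for m in range(B // 2):
        g = rho[2 * m] + rho[2 * m + 1]   # pair sum Gamma_m
        if g <= 0.0:
            break                          # initial positive sequence cutoff
        g = min(g, prev)                   # enforce monotonicity
        tau += 2.0 * g
        prev = g
    tau -= 1.0                             # tau = 1 + 2 * sum_{k>=1} rho(k)
    return B / max(tau, 1.0)
```

For an (approximately) independent trace the estimated ESS is close to $B$, while strong positive autocorrelation shrinks it substantially.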
We computed the effective sample size and the CPU run time in seconds for each pair of nodes based on 20,000 iterations after a burn-in period of 2,000 iterations. Table \[tab:ESS\] shows the results obtained by averaging over 5 runs for each sampling scheme, including the relative efficiency of the algorithms after standardizing for CPU run time. From these results it is clear that the latent variable scheme for the fused lasso is much more efficient than the direct sampler based on the mixture of truncated normals. Accordingly, in the following sections we perform time series cross-validation and prediction for the Bayesian approach using the latent variable FFBS algorithm.
  [**Scheme**]{}   [**ESS**]{}   [**CPU(s)**]{}   [**Rel.ESS**]{}
  ---------------- ------------- ---------------- -----------------
  Direct           350           1827.11          0.192
  FFBS             3171          818.77           3.878
: Average ESS and CPU times per pair of nodes for MCMC algorithms.[]{data-label="tab:ESS"}
It is also useful to contrast the execution time of the MCMC algorithms with that of the optimization method, which is only 8.03 seconds on average for each pair of nodes using a stopping criterion of $10^{-5}$ for the relative error. Hence, the execution time of the MAP algorithm appears to be at least two orders of magnitude smaller than that of the fastest version of our MCMC algorithms.
Simulation studies
------------------
We first evaluate our model using two simulations. In the first setting, the parameters across all the pairs of nodes were randomly drawn from double exponential distributions so that $\theta_{t,r} \sim {\mathsf{DE}}(\theta_{t-1,r},1/\lambda)$ with a true penalty parameter value of $\lambda=12$ and initial values $\theta_{r,0}=0$. Because the initial value $\theta_{r,0}=0$ implies a relatively high initial link probability and the evolution variance $1/\lambda$ is small, the resulting networks are relatively dense (on average 2682 links at each time point, out of 4970 possible ties). A simple descriptive analysis of the networks shows that they also tend to exhibit low reciprocity and high transitivity.
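A minimal sketch of this data-generating step, assuming a Laplace (double exponential) random walk for the per-dyad parameters; the function name and the logistic map to link probabilities below are our own illustrative choices, not the paper's multinomial dyad likelihood:

```python
import numpy as np

def simulate_theta(n_pairs, T, lam, theta0=0.0, seed=1):
    """Draw per-dyad parameter paths from the random walk
    theta_{t,r} ~ DE(theta_{t-1,r}, 1/lambda), i.e. Laplace increments
    with scale 1/lambda, started at theta0."""
    rng = np.random.default_rng(seed)
    steps = rng.laplace(loc=0.0, scale=1.0 / lam, size=(T, n_pairs))
    return theta0 + np.cumsum(steps, axis=0)

theta = simulate_theta(n_pairs=4970, T=201, lam=12.0)
# Illustrative logistic map from parameters to link probabilities
# (a generic placeholder, not the paper's dyad-based likelihood):
p_link = 1.0 / (1.0 + np.exp(-theta))
```

With $\lambda=12$ the increments have small scale, so the parameter paths drift slowly, which is what makes the simulated networks relatively stable over time.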
As discussed in Section \[se:select\], we evaluate two methods to select the penalty parameter $\lambda$ for the split Bregman optimization algorithm and evaluate the predictive ability of the model. Firstly, we use a setup similar to calibration cross-validation (CCV) by partitioning the data into three sets. The first set is used for modeling and consists of the first 181 observations. Selection of the optimal penalization parameter was performed on the calibration set corresponding to observations $t=182,\ldots,191$, by searching the value of $\lambda$ that maximizes the mean AUC over the predictions of these middle ten observations. The search for $\lambda$ was conducted over a grid of 31 values between 0.1 and 15; as shown in the left panel of Figure \[fig:lambdaSim\], the optimal value is 2.5. Finally, we report out-of-sample prediction accuracy on the validation set consisting of the last ten observations, $t=192,\ldots, 201$. Secondly, we used the first 191 observations to estimate the model and search the value of $\lambda$ that optimizes BIC over the same grid of 31 values between 0.1 and 15. The resulting optimal parameter value in this case is $\lambda=6$ (see right panel of Figure \[fig:lambdaSim\]). Again, we evaluate the out-of-sample prediction accuracy of the model on the last ten observations.
Following Section \[se:penaltyMCMC\], for the Bayesian scheme we used a prior $\lambda \sim {\mathsf{Gam}}(1, 1/5)$ (mean $5.0$, 95% prior symmetric credible interval $(0.12, 18)$, which is similar to the range of values used to select $\lambda$ under the optimization algorithm). The MCMC algorithm is first used to fit our model to the first 191 observations, and then an out-of-sample prediction for observation 192 is generated. This process is repeated by fitting 192 observations and then predicting observation 193, and so on. The posterior mean of $\lambda$ is around 9, and varies only very slightly over time.
![ Simulation 1: Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for simulated dataset. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambdaSim"}](meanAUCOldSim_F.pdf "fig:"){width="3in"} ![ Simulation 1: Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for simulated dataset. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambdaSim"}](BICOldSimNew_F.pdf "fig:"){width="3in"}
Figure \[fig:simula\] shows the ten operating characteristic curves associated with the out-of-sample predictions for the last ten observations using the full Bayesian approach of our model (FFBS algorithm). The right panel of Figure \[fig:simula\] shows the AUC values for the FFBS approach, the tERGM, and MAP predictions generated by using the optimal value of $\lambda$ obtained from cross-validation (denoted by Bregman-CV) and BIC (denoted by Bregman-BIC). The prediction accuracies for the FFBS algorithm and the Bregman optimization algorithm with cross-validation are almost identical and quite stable over time (both approaches show a good, roughly constant AUC around 83%). On the other hand, Bregman-BIC performs slightly worse than our two other approaches. Furthermore, in this scenario our model outperforms the tERGM, which shows only a fair predictive performance with an average AUC of 72%.
![Simulation 1: Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for simulated data. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:simula"}](ROCFFBSOldSim.pdf "fig:"){width="3in"} ![Simulation 1: Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for simulated data. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:simula"}](AUCS71FLOldSim_F.pdf "fig:"){width="3in"}
For our second simulation, we generated data with similar characteristics to the trading network dataset. The network is sparse with an average of 784 links over time, consistently shows relatively high reciprocity and includes a structural change around time 85, which can be seen in a shift from low to moderate transitivity (see left panel of Figure \[fig:simualteddata2\]).
![Left panel: Clustering coefficient for the networks in our second simulated dataset. Right panel: Time series of the estimated change-point probability for second simulated data set. The vertical line represents a structural change at time point 85.[]{data-label="fig:simualteddata2"}](transitivityNewSimT_F.pdf "fig:"){width="3in"} ![Left panel: Clustering coefficient for the networks in our second simulated dataset. Right panel: Time series of the estimated change-point probability for second simulated data set. The vertical line represents a structural change at time point 85.[]{data-label="fig:simualteddata2"}](ChangePointNewSimT_F.pdf "fig:"){width="3in"}
In this case, the search for the optimal penalization parameter for the optimization algorithm was performed by searching the value of $\lambda$ over a grid of 21 values between 0.1 and 10. Figure \[fig:lambdaSim2\] shows that the optimal values using the optimization algorithm are $\lambda=1.5$ for cross-validation, and $\lambda=3.5$ using BIC. For the Bayesian approach, assuming a hyperprior $\lambda \sim {\mathsf{Gam}}(1, 1/5)$, the posterior mean for $\lambda$ over all pairs of nodes is $3.7$.
![Simulation 2: Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for simulated dataset. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambdaSim2"}](meanAUCCrossNewSimT_F.pdf "fig:"){width="3in"} ![Simulation 2: Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for simulated dataset. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambdaSim2"}](BICNewSimT_F.pdf "fig:"){width="3in"}
Figure \[fig:simula2\] shows the ten operating characteristic curves associated with the out-of-sample predictions for the last ten observations using the full Bayesian approach of our model (FFBS algorithm). The right panel of Figure \[fig:simula2\] shows the AUC values for the tERGM and the different algorithms for our model. As before, the prediction accuracies for the FFBS algorithm and the Bregman optimization algorithm with cross-validation are very good (roughly 91% for both approaches), and Bregman-BIC performs just slightly worse. In this scenario our model again outperforms the tERGM, which shows a good predictive performance with an average AUC of 80%.
![Simulation 2: Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for simulated data. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:simula2"}](ROCFFBSNewSimT.pdf "fig:"){width="3in"} ![Simulation 2: Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for simulated data. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:simula2"}](AUCS71FLNewSimT_F.pdf "fig:"){width="3in"}
As we mentioned in Section \[sec:optim\], the maximum a posteriori estimates of the parameters in the fused lasso regression model can be used to identify changes in the network structure over time. In particular, we use an indicator variable that takes the value 1 if at least one of the three parameters for pair $(i,j)$ changes from time $t-1$ to time $t$, and 0 otherwise. The fraction of these indicators over all pairs of nodes provides a rough estimate of the probability that a change-point has occurred on a given week $t$. The right panel of Figure \[fig:simualteddata2\] shows how that proportion changes over time for our second simulation study, which includes a clear change-point around week 85. As expected, the proportion of dyads showing changes in their parameters peaks on the week the change-point occurs.
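This estimate is straightforward to compute from the MAP parameter paths; a sketch follows (the array layout and function name are our own assumptions):

```python
import numpy as np

def changepoint_fraction(theta, tol=1e-8):
    """theta: array of shape (T, n_pairs, 3) holding the MAP fused
    lasso estimates of the three per-pair parameters over time.
    Returns, for each t = 1, ..., T-1, the fraction of pairs for which
    at least one parameter changes between t-1 and t."""
    changed = np.any(np.abs(np.diff(theta, axis=0)) > tol, axis=2)
    return changed.mean(axis=1)
```

Because the fused lasso sets most consecutive differences exactly to zero, the tolerance `tol` only guards against numerical noise in the optimizer's output.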
Inference for financial trading networks {#sec:NYMEX}
----------------------------------------
In this section we analyze a sequence of $T=201$ weekly financial trading networks constructed from *proprietary* trades in the natural gas futures market on the New York Mercantile Exchange (NYMEX) between January 2005 and December 2008. The directed binary networks were constructed by setting $y_{i,j,t}$ = 1 if there was at least one transaction in which trader $i$ sold a contract to trader $j$ during week $t$.
One particularity of this market is that futures were traded on the NYMEX only through traditional open-outcry trades until September 5, 2006, and as a hybrid market that included electronic trading conducted via the CME Globex platform after that date. Our analysis focuses on 71 traders we identified as being present in the market (although not necessarily active) during the whole period. This trading network is sparse with an average of 826 links each week, and consistently shows very high reciprocity, moderate transitivity, mixing patterns and community structure [@BetaRodBoyd15].
![ Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for trading network. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambda"}](meanAUCCrossTradNew_F.pdf "fig:"){width="3in"} ![ Mean AUC over $t=182,183,\ldots,191$ (left panel), and BIC values (right) using the optimization method over a grid of values of $\lambda$ for trading network. The vertical lines indicate the optimal values of $\lambda$.[]{data-label="fig:lambda"}](BICTraderNew_F.pdf "fig:"){width="3in"}
Analogous to the previous section, selection of the optimal penalization parameter for the optimization algorithm was performed by searching over a grid of 21 values between 0.1 and 10. The value of $\lambda$ that maximizes the mean AUC over the predictions of ten weeks $t=182,\ldots,191$ (left panel of Figure \[fig:lambda\]) is $\lambda=1.5$, while the optimal value for BIC over the first 191 weeks is $\lambda=3$. Similarly, for the MCMC algorithm we employ the same ${\mathsf{Gam}}(1,1/5)$ prior we used in our simulations.
![Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for the trading network. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:traders"}](ROCFFBSNewSimT.pdf "fig:"){width="3in"} ![Plots of the ten operating characteristic curves associated with one-step-ahead out of sample predictions from the fused lasso model with FFBS algorithm (left panel). Area under the curves (AUC) for the temporal ERGM, and the fused lasso model with FFBS algorithm and Bregman optimization for the trading network. CV (cross-validation) and BIC represent the two methods for tuning parameter selection.[]{data-label="fig:traders"}](AUCSTrad71FLNew_F.pdf "fig:"){width="3in"}
The left panel in Figure \[fig:traders\] shows the operating characteristic curves associated with the out-of-sample predictions generated by our model fitted using the full Bayesian approach. Note that all the curves are very similar, showing that the performance of our model is quite stable over time. In the same spirit, the right panel of Figure \[fig:traders\] shows weekly AUCs for the FFBS algorithm, the tERGM, Bregman-CV and Bregman-BIC. The results show that our model performs quite well, with AUC values between 86% and 90% every week. However, in this particular case the tERGM performs slightly but consistently better, with AUC values around 2% higher. Furthermore, as in the simulations, the performance of the FFBS and Bregman-CV algorithms is very similar over all 10 weeks, while Bregman-BIC performs slightly worse, particularly during the first six weeks.
As discussed in our simulation study, the fraction of dyads for which at least one of the three parameters presents a change point at time $t$ provides a rough estimate of the chances that a change-point has occurred on a given week. Figure \[fig:change\] presents the time series of the fraction of pairs that show at least one parameter change each week under the Bregman-CV algorithm with the optimal cross-validated $\lambda=1.5$. The vertical line corresponds to the date of introduction of electronic trading. Note that the maximum of the time series over the 201 weeks appears right after the introduction of the electronic market and that a second, less marked peak appears around week 124. These results are consistent with previous analyses of this data [@BetaRodBoyd15].
![ Time series of the estimated change-point probability for the trading network. The vertical line represents the introduction of electronic trading in the market at week 85.[]{data-label="fig:change"}](ChangePointTrad_F.pdf){width="3in"}
Discussion {#sec:discussion}
==========
We have discussed a flexible and powerful model for prediction on dynamic networks. Indeed, the model we present shows competitive performance in the trading network dataset and superior performance in the simulation studies while being much more computationally efficient than alternatives available in the literature. Furthermore, the model can be easily extended to weighted networks by replacing the multinomial likelihood with an appropriate member of the exponential family. Similarly, a variation of the model can be devised for undirected networks.
Interestingly, the results on both the simulated and the trading network data showed that the prediction ability of the optimization approach is very similar to that of the Bayesian method, while the optimization approach is far more computationally efficient. In addition, the optimization approach also provides a way of exploring the presence of change points in the network dynamics. On the other hand, the cross-validation approach for tuning parameter selection provides slightly better results than the BIC method, but at a considerably higher computational cost.
[**SUPPLEMENTAL MATERIALS**]{}
[ **Code**]{}: Code to implement the algorithms for estimation and prediction described in this article. Please refer to the README file contained in the zip file for more details. (.cpp files)
[**Trading network data set:**]{} Data set used in the illustration in Section \[sec:NYMEX\]. (.txt file)
[**ACKNOWLEDGMENTS**]{}
This research was partially supported by NSF/DMS award number 1441433.
---
abstract: |
The symmetric maximum, denoted by ${\operatornamewithlimits{\varovee}}$, is an extension of the usual maximum $\vee$ operation so that 0 is the neutral element, and $-x$ is the symmetric (or inverse) of $x$, i.e., $x{\operatornamewithlimits{\varovee}}(-x)=0$. However, such an extension does not preserve the associativity of $\vee$. This failure calls for systematic ways of parenthesizing (or bracketing) the terms of a sequence (with more than two arguments) when using such an extended maximum. We refer to such systematic (predefined) ways of parenthesizing as computation rules.
As it turns out, there are infinitely many computation rules, each corresponding to a systematic way of bracketing the arguments of sequences. Essentially, computation rules reduce to deleting terms of sequences based on the condition $x{\operatornamewithlimits{\varovee}}(-x)=0$. This observation gives rise to a quasi-order on the set of such computation rules: say that rule 1 is below rule 2 if, for all sequences of numbers, rule 1 deletes more terms of the sequence than rule 2.
In this paper we present a study of this quasi-ordering of computation rules. In particular, we show that the induced poset of all equivalence classes of computation rules is uncountably infinite, has infinitely many maximal elements, has infinitely many atoms, and it embeds the powerset of natural numbers ordered by inclusion.
author:
- |
Miguel COUCEIRO$^{1}$ and Michel GRABISCH$^{2}$[^1]\
1. Mathematics Research Unit, FSTC, University of Luxembourg\
6, rue Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg\
2. Paris School of Economics, University of Paris I\
106-112, Bd de l’Hôpital, 75013 Paris, France\
Email: `[email protected], [email protected]`
date: Version of
title: On the poset of computation rules for nonassociative calculus
---
[**Keywords:**]{} symmetric maximum, nonassociative algebra, computation rule, partially ordered set
Introduction {#intro}
============
Among the wide variety of algebraic structures studied so far in the realm of aggregation theory, only a few have been considered with nonassociative fundamental operations; see e.g. [@BaeFodGra04; @Gra03; @Gra04; @Gra06; @PapStaj01], and see also [@GraMarMesPap09] for a recent reference. While commutativity, distributivity, the existence of a neutral element or of symmetric elements, etc., are often abandoned, associativity remains a desirable property in order to avoid ambiguities when assessing the outcome of composed computations within the algebraic structure. However, in certain situations nonassociative operations are both natural and necessary: this is the case of the symmetric maximum [@Gra03; @Gra04].
For a preliminary discussion, consider the set $\bN$ of nonnegative integers and the maximum operation $\vee$ defined on it. Let us try to build on $\bZ$ an operation ${\operatornamewithlimits{\varovee}}$ behaving like a group addition but coinciding with $\vee$ on the positive side, that is, for every $a,b\in\bZ$, $a{\operatornamewithlimits{\varovee}}0=a$ (neutral element), $a{\operatornamewithlimits{\varovee}}(-a)=0$ (symmetry), $a{\operatornamewithlimits{\varovee}}b = a\vee b$ if $a,b\geqslant
0$. If such an operation exists, it is necessarily nonassociative as shown below: $$\begin{aligned}
-3{\operatornamewithlimits{\varovee}}(3{\operatornamewithlimits{\varovee}}2) & = -3{\operatornamewithlimits{\varovee}}3=0\label{eq:0}\\
(-3{\operatornamewithlimits{\varovee}}3){\operatornamewithlimits{\varovee}}2 & = 0{\operatornamewithlimits{\varovee}}2 = 2.\label{eq:2}\end{aligned}$$ One can show [@Gra03] that the best definition (in the sense that it fails associativity on the smallest possible domain) of ${\operatornamewithlimits{\varovee}}$ is given by: $$\label{eq:3}
a{\operatornamewithlimits{\varovee}}b = \left\{ \begin{array}{ll}
-(|a| \vee |b|) & \mbox{ if } b \neq -a \mbox{ and } (|a| \vee |b| = -a \mbox{ or } |a| \vee |b| = -b)
\\
0 & \mbox{ if } b=-a \\
|a| \vee |b| & \mbox{ otherwise.}
\end{array}
\right.$$ Except for the case $b=-a$, $a {\operatornamewithlimits{\varovee}}b$ equals the element among the two that has the greatest absolute value.
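For readers who prefer code, the case definition above transcribes directly into a few lines of Python (the name `svee` is ours):

```python
def svee(a, b):
    """Symmetric maximum on the integers, following the case
    definition above: b == -a yields 0; otherwise the result is the
    argument of greatest absolute value, keeping its sign."""
    if b == -a:
        return 0
    return a if abs(a) > abs(b) else b
```

Composing `svee` in the two different orders reproduces the two conflicting evaluations (\[eq:0\]) and (\[eq:2\]), i.e. the failure of associativity.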
The main problem is how to interpret this nonassociative operation when evaluating expressions like ${\operatornamewithlimits{\varovee}}_{i=1}^na_i$, as was the case in [@Gra04]. The solution proposed in [@Gra04; @Gra03] was to define *computation rules*, that is, systematic ways of putting parentheses so that no ambiguity occurs. Since we deal with commutative operations, a simple example of a computation rule is the following: put parentheses around each pair of maximal symmetric terms. If we apply this to our example above, this rule corresponds to (\[eq:2\]). Another one is to make the computation separately on the positive and on the negative terms, and to aggregate the results: $({\operatornamewithlimits{\varovee}}_i a^+_i){\operatornamewithlimits{\varovee}}({\operatornamewithlimits{\varovee}}_i
a_i^-)$. This corresponds to (\[eq:0\]).
It is easy to see that there are many possible computation rules, but to study them, one needs to formalize the intuitive idea of a computation rule. The aim of this paper is twofold: to propose a formal definition of a computation rule, which was lacking in [@Gra03], and to study the set of all computation rules endowed with a very natural ordering. As we will see, the poset of computation rules induced by this ordering is uncountable; in fact, from Corollary \[cor:3\] below, it follows that this poset is equimorphic (equivalent with respect to embeddability) to the power set of positive integers ordered by inclusion. Moreover, we show that the poset of computation rules has infinitely many atoms and has infinitely many maximal elements; these are completely described in Subsections \[atoms\] and \[maximals\].
Throughout the paper, we adopt the following notation: if $Z$ is a set of symbols, then $\cL(Z)$ denotes the language (set of words, including the empty word $\varepsilon$) built on the alphabet $Z$.
The symmetric maximum {#sec:symmax}
=====================
In this section we recall basic concepts and preliminary results needed hereinafter (for further developments see [@Gra04; @Gra03] and [@GraMarMesPap09 §9.3]). However, we assume that the reader is familiar with elementary notions in the theory of ordered sets, and refer the reader, e.g., to [@Blyth05; @CasLecMon07; @DavPrie02] for basic background.
Let $C$ be a chain endowed with an order $\leqslant$ and least element $0$, and let $C^-:=\{-c: \, c\in C\}$ be its dually isomorphic copy, which we refer to as its symmetric counterpart.
We define $\tilde{C}:=C\cup C^-$, and set $0=-0$. Since we will only consider countable sequences of elements of $\tilde{C}$, without loss of generality, we may assume that $\tilde{C}=\mathbb{Z}$, or a finite symmetric interval of it.
Let us introduce a binary operation ${\operatornamewithlimits{\varovee}}$ on $\tilde{C}$ fulfilling the following independent conditions:
(I) ${\operatornamewithlimits{\varovee}}$ coincides with $\vee$ on $C^2$.

(II) $-a$ is the symmetric of $a$, i.e., $a{\operatornamewithlimits{\varovee}}(-a)=0$.

(III) $-(a{\operatornamewithlimits{\varovee}}b)= (-a){\operatornamewithlimits{\varovee}}(-b)$ for all $a,b\in C$.
As observed in Section \[intro\], (I) and (II) imply that ${\operatornamewithlimits{\varovee}}$ is not associative. Note also that from (III), it follows that ${\operatornamewithlimits{\varovee}}$ coincides with the minimum on $C^-$. The following results are not difficult to verify.
Under the conditions (I), (II) and (III) above, no operation is associative on a larger domain than that on which the symmetric maximum defined by (\[eq:3\]) is associative.
\[prop:1s\] The symmetric maximum has the following properties:

(i) ${\operatornamewithlimits{\varovee}}$ is commutative on $\tilde{C}$.

(ii) $0$ is the neutral element of ${\operatornamewithlimits{\varovee}}$.

(iii) ${\operatornamewithlimits{\varovee}}$ is associative on an expression involving $a_1,\ldots,a_n\in \tilde{C}$, $n>2$, if and only if $\bigvee_{i=1}^n a_i \neq -\bigwedge_{i=1}^n a_i$.

(iv) ${\operatornamewithlimits{\varovee}}$ is nondecreasing in each argument on $\tilde{C}$.
Property (iii) of Proposition \[prop:1s\] will be the basis for defining computation rules.
Computation rules {#sec:coru}
=================
The lack of associativity of ${\operatornamewithlimits{\varovee}}$ induces ambiguity when evaluating expressions like ${\operatornamewithlimits{\varovee}}_{i=1}^n a_i$. To overcome this difficulty, computation rules were proposed in [@Gra04; @Gra03]; these amount to eliminating the situations where nonassociativity occurs, as characterized by property (iii) in Proposition \[prop:1s\].
Given a sequence $(a_i)_{i\in I}$ with $I\subseteq \bN$, we say that it *fulfills associativity* if either $|I|\leqslant 2$ or $\bigvee_{i\in I}a_i\neq
-\bigwedge_{i\in I}a_i$. Hence ${\operatornamewithlimits{\varovee}}_{i\in I}a_i$ is well-defined if and only if $(a_i)_{i\in I}$ fulfills associativity. Informally speaking, a computation rule is a systematic (predefined) way to delete symbols in a sequence in order to make it associative, provided that this corresponds to some arrangement of parentheses.
Consider the following sequence in $\bZ$: $3,2,1,0,-2,-3,-3$. A possible way to make the sequence associative is to delete $3,-3$, which corresponds to the arrangement $$(3{\operatornamewithlimits{\varovee}}-3 ){\operatornamewithlimits{\varovee}}(-3{\operatornamewithlimits{\varovee}}2 {\operatornamewithlimits{\varovee}}-2 {\operatornamewithlimits{\varovee}}1 {\operatornamewithlimits{\varovee}}0) = -3.$$ Another possibility is to delete all occurrences of maximal symmetric symbols, that is, first $3,-3$ and then $2,-2$, which corresponds to: $$(3{\operatornamewithlimits{\varovee}}( -3 {\operatornamewithlimits{\varovee}}-3 )){\operatornamewithlimits{\varovee}}(2{\operatornamewithlimits{\varovee}}-2 ){\operatornamewithlimits{\varovee}}1{\operatornamewithlimits{\varovee}}0 = 1.$$ Even though deleting the 3 alone makes this sequence associative, it does not correspond to any arrangement of parentheses.
In this section we assemble these ideas into a formalism that makes the intuitive notion of a computation rule precise, and we show that this formalization fulfills our initial requirements.
Since 0 is the neutral element of ${\operatornamewithlimits{\varovee}}$, we deal with sequences (words) built on $Z:=\tilde{C}\setminus\{0\}$, including the empty sequence $\varepsilon$. Hence, we consider the language $\cL(Z)$. Nonempty words are denoted by $\sigma=(a_i)_{i\in I}$, where $I$ is a finite index set.
We are interested in computing expressions ${\operatornamewithlimits{\varovee}}_{i\in I}a_i$ unambiguously. Since ${\operatornamewithlimits{\varovee}}$ is commutative, the order of symbols in the word does not matter, and we can consider any particular ordering of the word, like the decreasing order of the absolute values of the elements in the sequence: $$(1,3,-2,-3,3,1,2) \rightarrow (3,3,-3,2,-2,1,1).$$ Hence, we do not deal with words, but with such ordered sequences. We denote by $\mathfrak{S}$ the set of all such sequences. We introduce a convenient and unambiguous encoding of sequences, based on two mappings. The mapping $\theta$ assigns to every $\sigma\in\mathfrak{S}$, the list of the absolute values in $\sigma$ in decreasing order: $$\theta(\sigma):= (n_1,\ldots,n_q).$$ We assume that $\theta(\sigma)$ is always a finite sequence of arbitrary length. The mapping $\psi:\mathfrak{S}\rightarrow\cup_{i\in \mathbb{N}}(\mathbb{N}_0^2)^i$ is defined by: $$\psi((a_i)_{i\in I}) = ((p_1,m_1),\ldots,(p_q,m_q))$$ where $p_k,m_k$ are the numbers of occurrences of the $k$-th greatest absolute value of elements in the sequence, $p_k$ being for the positive element, and $m_k$ for the negative one. In other words, for $\theta(\sigma)=(n_1,\ldots,n_q )$, the sequence $\sigma$ can be rewritten after reordering as: $$\sigma = (\underbrace{n_1,\ldots,n_1}_{p_1\text{
times}},\underbrace{-n_1,\ldots,-n_1}_{m_1\text{ times}},\ldots,\underbrace{n_q,\ldots,n_q}_{p_q\text{
times}},\underbrace{-n_q,\ldots,-n_q}_{m_q\text{ times}}).$$ Note that no pair in $\psi(\sigma)$ can be $(0,0)$.
Consider the sequence $\sigma=(1,3,-3,2,-2,-2,3,1,1,1)$. Then $$\begin{aligned}
\theta(\sigma) &= (3,2,1)\\
\psi(\sigma) &= ((2,1),(1,2),(4,0)).\end{aligned}$$
Note that $\theta(\sigma)$ and $\psi(\sigma)$ uniquely determine $\sigma$. Also, saying that $\sigma$ fulfills associativity means that either $p_1$ or $m_1$ is 0. We denote by $\mathfrak{S}_0$ the set of sequences which do *not* fulfill associativity.
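The encoding by $\theta$ and $\psi$ is purely mechanical; it can be sketched in Python as follows (a sketch of ours; the name `encode` is our own, and the input is assumed to be a sequence of nonzero integers, 0 being neutral).

```python
from collections import Counter

def encode(seq):
    """Compute theta(sigma) and psi(sigma) for a sequence of nonzero
    integers: theta lists the distinct absolute values in decreasing
    order, and psi gives, for the k-th greatest absolute value n_k,
    the pair (p_k, m_k) of occurrence counts of +n_k and -n_k."""
    counts = Counter(seq)
    theta = sorted({abs(a) for a in seq}, reverse=True)
    psi = [(counts[n], counts[-n]) for n in theta]
    return theta, psi

# the example sequence from the text
theta, psi = encode([1, 3, -3, 2, -2, -2, 3, 1, 1, 1])
```

On the example sequence, this reproduces $\theta(\sigma)=(3,2,1)$ and $\psi(\sigma)=((2,1),(1,2),(4,0))$.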
\[def:elem\] There exist five elementary rules $\rho_i:\gS\rightarrow\gS$, defined as follows. For any sequence $\sigma$ with $\psi(\sigma)=((p_k,m_k)_{k=1,\ldots,q})$:
1. elementary rule $\rho_1$: if $p_1>1$ and $m_1>0$, the number $p_1$ is changed into $p_1=1$;
2. elementary rule $\rho_2$: if $m_1>1$ and $p_1>0$, the number $m_1$ is changed into $m_1=1$;
3. elementary rule $\rho_3$: if $p_1>0$, $m_1>0$, the pair $(p_1,m_1)$ is changed into $(p_1-c,m_1-c)$, where $c:=p_1\wedge m_1$. If this results in the pair (0,0), then this pair is deleted, and all subsequent pairs $(p_k,m_k)$, $k=2,3,\ldots$, are renumbered as $(p_{k-1},m_{k-1})$.
4. elementary rule $\rho_4$: if $p_1>0$, $m_1>0$, and if $p_2>0$, the number $p_2$ is changed into $p_2=0$. If this results in the pair (0,0), then this pair is deleted, and all subsequent pairs $(p_k,m_k)$, $k=3,4,\ldots$, are renumbered as $(p_{k-1},m_{k-1})$.
5. elementary rule $\rho_5$: if $p_1>0$, $m_1>0$, and if $m_2>0$, the number $m_2$ is changed into $m_2=0$. If this results in the pair (0,0), then this pair is deleted, and all subsequent pairs $(p_k,m_k)$, $k=3,4,\ldots$, are renumbered as $(p_{k-1},m_{k-1})$.
Rules $\rho_1,\ldots,\rho_5$ have no action (i.e., $\rho_i(\sigma)=\sigma$) if the conditions of application are not satisfied.
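The five elementary rules act on $\psi(\sigma)$ only. The following Python sketch (ours) represents $\psi(\sigma)$ as a list of pairs $(p_k,m_k)$; each rule returns the transformed list, and leaves it unchanged when its conditions of application are not satisfied.

```python
def _clean(psi):
    # a pair (0, 0) is deleted; the remaining pairs are renumbered
    return [pm for pm in psi if pm != (0, 0)]

def rho1(psi):
    # if p1 > 1 and m1 > 0, set p1 = 1
    if psi and psi[0][0] > 1 and psi[0][1] > 0:
        psi = [(1, psi[0][1])] + psi[1:]
    return psi

def rho2(psi):
    # if m1 > 1 and p1 > 0, set m1 = 1
    if psi and psi[0][1] > 1 and psi[0][0] > 0:
        psi = [(psi[0][0], 1)] + psi[1:]
    return psi

def rho3(psi):
    # cancel symmetric occurrences of the greatest absolute value
    if psi and psi[0][0] > 0 and psi[0][1] > 0:
        c = min(psi[0])
        psi = _clean([(psi[0][0] - c, psi[0][1] - c)] + psi[1:])
    return psi

def rho4(psi):
    # if p1 > 0, m1 > 0 and p2 > 0, set p2 = 0
    if len(psi) > 1 and psi[0][0] > 0 and psi[0][1] > 0 and psi[1][0] > 0:
        psi = _clean([psi[0], (0, psi[1][1])] + psi[2:])
    return psi

def rho5(psi):
    # if p1 > 0, m1 > 0 and m2 > 0, set m2 = 0
    if len(psi) > 1 and psi[0][0] > 0 and psi[0][1] > 0 and psi[1][1] > 0:
        psi = _clean([psi[0], (psi[1][0], 0)] + psi[2:])
    return psi
```

For instance, $\rho_3$ applied to $((2,2),(1,0))$ deletes the first pair entirely, while $\rho_1$ applied to $((2,1))$ yields $((1,1))$.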
We define the (computation) alphabet as $\Psi:=\{\rho_1,\rho_2,\rho_3,\rho_4,\rho_5\}$.
A *computation rule* $R$ is any word built on $\Psi$, i.e., $R\in\cL(\Psi)$. We say that $R$ is a *well-formed computation rule* (*w.f.c.r.*) if for any sequence $\sigma\in\gS$ we have $R(\sigma)\in\gS\setminus\gS_0$. We denote by $\gR$ the set of well-formed computation rules.
For example, $\rho_2\rho_3\rho_1$, $\rho_4^*\rho_1$, $(\rho_1\rho_3)^*(\rho_4\rho_5)^*$ are computation rules, where as usual $w^*$ denotes the infinite concatenation $wwwww\cdots$ of the word $w$ (we recall that words are read from left to right). Observe that only the latter two rules are well-formed.
Note that from Definition \[def:elem\], we have $R(\sigma)=\sigma$ for any rule $R$ and any sequence $\sigma$ in $\gS\setminus\gS_0$. We give examples of w.f.c.r.’s which include those already proposed in [@Gra03] (we leave to the reader the proof that they are well-formed):
1. $\langle \cdot\rangle_0 = \rho_3^*$,
2. $\langle \cdot\rangle_= = (\rho_1\rho_2\rho_3)^*$,
3. $\langle \cdot\rangle_-^+ = (\rho_4\rho_5)^*\langle \cdot\rangle_==(\rho_4\rho_5)^*\rho_1\rho_2\rho_3$,
4. $\langle \cdot\rangle_{pess} = (\rho_4\rho_5)^*\rho_1\rho_3$,
5. $\langle \cdot\rangle_{opt} = (\rho_4\rho_5)^*\rho_2\rho_3$,
6. $\langle \cdot\rangle_L = (\rho_1\rho_3)^*$,
7. $\langle \cdot\rangle_R = (\rho_2\rho_3)^*$.
Note that $\langle\sigma\rangle^+_-=\varepsilon$ for all $\sigma\in\gS_0$.
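As an illustration, the rule $\langle \cdot\rangle_0=\rho_3^*$ can be executed mechanically on the encoding above. In the Python sketch below (ours), each pair of $\psi(\sigma)$ is tagged with its absolute value, as a triple $(n_k,p_k,m_k)$, so that the value of the expression can be read off once associativity holds.

```python
def rule0(pairs):
    """<.>_0 = rho3*: repeatedly cancel symmetric occurrences of the
    greatest remaining absolute value until associativity holds.
    pairs encodes psi(sigma) as triples (n_k, p_k, m_k)."""
    while pairs and pairs[0][1] > 0 and pairs[0][2] > 0:
        n, p, m = pairs[0]
        c = min(p, m)
        head = [] if (p - c, m - c) == (0, 0) else [(n, p - c, m - c)]
        pairs = head + pairs[1:]
    return pairs

def value(pairs):
    """Value of the (now associative) expression: the greatest
    absolute value still present, with its sign."""
    if not pairs:
        return 0
    n, p, m = pairs[0]
    return n if p > 0 else -n

# the sequence 3,2,1,-2,-3,-3 of the earlier example (0 is neutral)
result = value(rule0([(3, 1, 2), (2, 1, 1), (1, 1, 0)]))
```

On this sequence `rule0` cancels one pair $3,-3$ and returns $-3$, the value of the first arrangement of parentheses given in the example above.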
We use ${\operatornamewithlimits{\varovee}}(R(\sigma))$ to denote the value of ${\operatornamewithlimits{\varovee}}_{i\in I}a_i$ after applying the computation rule $R\in\gR$ to $\psi(\sigma)=\psi((a_i)_{i\in I}) $. To compute ${\operatornamewithlimits{\varovee}}(R(\sigma))$, one needs to delete symbols in the sequence $\theta(\sigma)$ exactly as they are deleted in $\psi(\sigma)$. We say that $R,R'\in\gR$ are *equivalent*, denoted by $R\sim R'$, if for any sequence $\sigma\in\gS$ we have ${\operatornamewithlimits{\varovee}}(R(\sigma)) = {\operatornamewithlimits{\varovee}}(R'(\sigma))$.
The next fundamental theorem shows that our setting covers all possible ways of putting parentheses on words in $\cL(Z)$ in order to make them associative[^2].
\[thm:computation rules\] Any computation rule applied to some $\sigma\in\gS$ corresponds to an arrangement of parentheses and a permutation on $\sigma$. Conversely, any arrangement of parentheses and permutation on some $\sigma\in\gS$ making the sequence associative is equivalent to a computation rule applied to $\sigma$.
Let us define 5 basic rules applied on any sequence $\sigma$ with $\psi(\sigma)=(p_k,m_k)_{k\in K}$ as follows:
1. basic rule ${\rho'}_1^k$, for a given $k\in K$: if $p_k>1$, the number $p_k$ is changed into $p_k-1$;
2. basic rule ${\rho'}_2^k$, for a given $k\in K$: if $m_k>1$, the number $m_k$ is changed into $m_k-1$;
3. basic rule ${\rho'}_3^k$, for a given $k\in K$: if $p_k>0,m_k>0$, the pair $(p_k,m_k)$ is changed into $(p_k-1,m_k-1)$;
4. basic rule ${\rho'}_4^k$, for $k>1$: if $p_{k}>0$, the number $p_k$ is changed into $p_k-1$;
5. basic rule ${\rho'}_5^k$, for $k>1$: if $m_{k}>0$, the number $m_k$ is changed into $m_k-1$,
For all these rules, if a pair (0,0) appears, it is immediately deleted. Observe that the elementary rules are concatenations of the above basic rules. Indeed, we have: $$\rho_1 = ({\rho'}_1^1)^*,\quad \rho_2 = ({\rho'}_2^1)^*,\quad \rho_3 =
({\rho'}_3^1)^*,\quad \rho_4 = ({\rho'}_4^2)^*,\quad \rho_5 =
({\rho'}_5^2)^*.$$
\[claim:1\] Any way of parenthesizing a word in $\cL(Z)$ corresponds to a word (rule) in $\cL(\{{\rho'}_1^k,\ldots,{\rho'}_5^k\}_{k\in\bN})$, and conversely.
Indeed, consider a word $w\in\cL(Z)$: parentheses are put around 2 consecutive elements, like $(a{\operatornamewithlimits{\varovee}}b)$, where $a$ or $b$ can be the result of a pair of parentheses too. Only three cases can occur:
1. either $a=b$, then $(a{\operatornamewithlimits{\varovee}}b) = a = b$. This corresponds to basic rules ${\rho'}_1^k$ (if $a>0$) or ${\rho'}_2^k$ (if $a<0$) for a suitable $k$;
2. or $a=-b$, then $(a{\operatornamewithlimits{\varovee}}b)= 0$. This corresponds to the basic rule ${\rho'}_3^k$ for a suitable $k$;
3. otherwise $|a|<|b|$ (or $|a|>|b|$). Then $(a{\operatornamewithlimits{\varovee}}b) = b$ and this corresponds to the basic rules ${\rho'}^k_4$ (if $a>0$) or ${\rho'}_5^k$ (if $a<0$) for a suitable $k$.
\[claim:2\] Given a sequence $\sigma\in\gS_0$, for any rule $\rho$ in $\cL(\{{\rho'}_1^k,\ldots,{\rho'}_5^k\}_{k\in\bN})$ making $\sigma$ associative, there exists a computation rule $R$ in $\cL(\Psi)$ such that ${\operatornamewithlimits{\varovee}}(\rho(\sigma)) = {\operatornamewithlimits{\varovee}}(R(\sigma))$.
We have already established that any elementary rule is a particular rule in $\cL(\{{\rho'}_1^k,\ldots,{\rho'}_5^k\}_{k\in\bN})$, and therefore this is true also for any computation rule in $\cL(\Psi)$.
Take then any rule $\rho$ in $\cL(\{{\rho'}_1^k,\ldots,{\rho'}_5^k\}_{k\in\bN})$ making $\sigma$ associative. The result ${\operatornamewithlimits{\varovee}}(\rho(\sigma))$ is some number in $\sigma$, say $\delta n_k$, with $\delta =1$ or $-1$ (i.e., the $k$th positive or negative symbol in $\theta(\sigma)$). Let us construct a computation rule $R$ such that ${\operatornamewithlimits{\varovee}}(R(\sigma))=\delta n_k$ as follows:
- Suppose $k=1,\delta=1$ (provided $p_1>1$). Then $R=\rho_2\rho_3$. For the case $\delta=-1$, we find $R=\rho_1\rho_3$.
- Suppose $k>1$, $\delta=1$ (provided $p_k>0$) or $\delta=-1$ (provided $m_k>0$). Apply the following algorithm:
- Initialization: $R\leftarrow \varepsilon$
- For $i=2$ to $k-1$, Do:
- If $p_i=0$ put $R\leftarrow R\rho_5$
- If $m_i=0$ put $R\leftarrow R\rho_4$
- Otherwise put $R\leftarrow R\rho_4\rho_5$
- Case $\delta=1$: if $m_k>0$, $R\leftarrow R\rho_5$.
- Case $\delta=-1$: if $p_k>0$, $R\leftarrow R\rho_4$.
- $R\leftarrow R\rho_1\rho_2\rho_3$
By construction, $R$ is equivalent to $\rho$ on $\sigma$, and the proof of the claim is now complete.
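The construction above is easily mechanized. The following Python sketch (ours) builds $R$ as a list of elementary-rule names, which is our own notation, from $\psi(\sigma)$ (a list of pairs $(p_k,m_k)$), the index $k$ and the sign $\delta$.

```python
def build_rule(pairs, k, delta):
    """Following the algorithm in the proof of Claim 2, construct a
    computation rule R (as a list of elementary-rule names) whose
    application to psi(sigma) = pairs yields the k-th positive
    (delta = +1) or negative (delta = -1) symbol."""
    if k == 1:
        # delta = +1 requires p_1 > 1; delta = -1 requires m_1 > 1
        return ["rho1", "rho3"] if delta == -1 else ["rho2", "rho3"]
    R = []
    for i in range(2, k):                 # i = 2, ..., k-1
        p_i, m_i = pairs[i - 1]
        if p_i == 0:
            R.append("rho5")
        elif m_i == 0:
            R.append("rho4")
        else:
            R += ["rho4", "rho5"]
    p_k, m_k = pairs[k - 1]
    if delta == 1 and m_k > 0:
        R.append("rho5")
    if delta == -1 and p_k > 0:
        R.append("rho4")
    R += ["rho1", "rho2", "rho3"]
    return R
```

For example, for $\psi(\sigma)=((2,1),(1,1))$, the target $+n_2$ yields the rule $\rho_5\rho_1\rho_2\rho_3$.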
Theorem \[thm:computation rules\] now follows from Claims \[claim:1\] and \[claim:2\].
Note that a well-formed rule in $\cL(\{{\rho'}_1^k,\ldots,{\rho'}_5^k\}_{k\in\bN})$ (i.e., making any $\sigma$ associative) is not necessarily equivalent to a w.f.c.r. in $\gR$. For instance, consider the well-formed rule $\rho={\rho'}_5^3(({\rho'}_1^1)^*({\rho'}_2^1)^*{\rho'}_3^1)^*$, and apply it on the sequences: $$\sigma=(2,3)(1,0)(0,1)(2,1), \quad \sigma'=(2,3)(1,1)(0,1)(2,0).$$ Then ${\operatornamewithlimits{\varovee}}(\rho(\sigma)) = n_2$ and ${\operatornamewithlimits{\varovee}}(\rho(\sigma'))=n_4$. Let us try to build an equivalent w.f.c.r. $R\in \gR$ . Since the second pair in $\sigma$ is the final result, one cannot touch it. Therefore, $R$ contains only $\rho_1,\rho_2,\rho_3$, and thus one finds $-n_3$ on $\sigma'$. Hence, compositions of basic rules may result in rules more general than our computation rules. However, those rules which are not computation rules are rather artificial.
Hereinafter, we will make use of the following “factorization scheme" for computation rules.
\[lem:1a\] Let $R$ be a w.f.c.r. in $\gR$.
1. [**Factorization:**]{} Rule $R$ can be factorized into a composition $$\label{eq:1}
R=T_1T_2\cdots T_i\cdots$$ where each term has the form $T_i:=\omega_i\rho_1^{a_i}\rho_2^{b_i}\rho_3$, with $\omega_i\in\cL(\{\rho_4,\rho_5\})$ (possibly empty), and $a_i,b_i\in\{0,1\}$.
2. [**Simplification:**]{} Suppose that in (\[eq:1\]) there exists $j\in \mathbb{N}$ such that $\omega_j=\omega \rho^*_4$ or $\omega \rho^*_5$ for some $\omega \in
\cL(\{\rho_4,\rho_5\})$, or that $\rho_4 $ and $
\rho_5$ alternate infinitely many times in $\omega_j$. Let $$k_1=\min\{j:\, \mbox{$\omega_j=\omega \rho^*_4$ or $\omega \rho^*_5$} \}, \mbox{ and}$$ $$k_2=\min\{j:\, \mbox{$\rho_4 $ and $ \rho_5$ alternate infinitely many times in $\omega_j$} \}.$$
- If $k_1<k_2$, then $R\sim T_1\cdots T_{k_1}$.
- Otherwise, $k_2\leqslant k_1$, and $R\sim T_1\cdots T'_{k_2}$, where $T'_{k_2}=(\rho_4\rho_5)^*\rho_1^{a_{k_2}}\rho_2^{b_{k_2}}\rho_3$.
Let $R$ be a w.f.c.r. Then $R$ is necessarily infinite, otherwise one can always construct a sequence $\sigma$ such that $R(\sigma)\not\in\gS\setminus\gS_0$. Also, $\rho_3$ necessarily belongs to $R$, otherwise the sequence $\sigma$ with $\psi(\sigma)=(2,1)$ would not be made associative by $R$. Therefore, the word $R$ can be cut into terms where $\rho_3$ acts as a separator, i.e., $R=R_1\rho_3R_2\rho_3\cdots$, with $R_i\in\cL(\{\rho_1,\rho_2,\rho_4,\rho_5\})$. Now observe that $\rho_1$ and $\rho_1^k$ are equivalent for any $k>1$, and the same holds for $\rho_2$. Moreover, the order between $\rho_1,\rho_2$ and $\omega_i$ is unimportant because none of these symbols can make the sequence $\sigma$ associative (i.e., the rule will not stop after applying these elementary rules), and each of them applies on a different symbol of $\sigma$. This proves that each term $T_i$ can be written in form (\[eq:1\]).
Observe that since $R$ is infinite, there can be infinitely many factors $T_i$ or finitely many, provided one factor $T_i$ has an infinite $\omega_i$. In the first case, there is no last factor and the proof of (i) is complete. In the second case, it remains to prove that the last factor $T_l$ has the same form, i.e., it ends with $\rho_3$. Suppose on the contrary that there are elementary rules $\rho_4,\rho_5$ after $\rho_3$. If $\sigma$ is made associative after applying $\rho_3$, then the rule stops and the remaining $\rho_4,\rho_5$ are useless. If not, it is because $\rho_3$ has acted on a pair $(p,p)$ with $p>0$. But if the next pair is, say, (1,1), $\sigma$ will not be made associative by the remaining $\rho_4,\rho_5$, contradicting the fact that $R$ is well-formed.
Let us prove (ii). Suppose first that $k_2\leqslant k_1$. Observe that any $\omega_i$ where $\rho_4,\rho_5$ alternate infinitely many times is equivalent to $(\rho_4\rho_5)^*$. Moreover, $(\rho_4\rho_5)^*$ deletes all pairs after the current one. Therefore, only the current pair remains, and $\rho_1^{a_{k_2}}\rho_2^{b_{k_2}}\rho_3$ necessarily stops on it, for any value of $a_{k_2},b_{k_2}$.
Suppose now that $k_1<k_2$, and $\omega_{k_1}=\omega\rho_4^*$ (the other case is similar). Then $\rho^*_4$ deletes all pairs after the current pair of the form $(p',0)$, and stops at the first pair of the form $(p',m')$ with $m'>0$, which is transformed into $(0,m')$. Then $\rho_1^{a_{k_1}}\rho_2^{b_{k_1}}\rho_3$ makes the current pair either of the form $(0,0)$, or $(p,0)$ or $(0,m)$. In the two last cases, $R$ stops. In the first case, the current term is deleted, and the next pair encountered is $(0,m')$, where the rule stops. The proof of (ii) is complete.
Note that (ii) of Lemma \[lem:1a\] does not refer to every $\omega$ containing a $\rho_4^*$ or a $\rho_5^*$. For instance, if $\omega=\rho_5\rho_4^*\rho_5$, then the subsequent terms of $R$ are relevant.
\[rem:2\] If $T_i:=\omega_i\rho_1^{a_i}\rho_2^{b_i}\rho_3$, where $\omega_i\neq (\rho_4\rho_5)^*, \omega\rho_4^*, \omega\rho_5^*$, for any $\omega \in \cL(\{\rho_4,\rho_5\})$, then there is $\sigma_i$ such that $T_i(\sigma_i)=\varepsilon$ and $T_i(\sigma_i\sigma)=\sigma$ for every $\sigma \in\gS$.
We refer to the compositions given in (ii) as *factorized irredundant forms* of computation rules. For instance, $\langle\cdot\rangle_-^+$ can be factorized into two equivalent compositions $$\langle\cdot\rangle_-^+=(\rho_4\rho_5)^*(\rho_1\rho_2\rho_3)^* = (\rho_4\rho_5)^*\rho_1\rho_2\rho_3,$$ but only the second is a factorized irredundant form. Note that our previous examples of w.f.c.r.’s are given in factorized irredundant forms.
Now it is natural to ask whether two equivalent rules necessarily have the same factorized irredundant form. The next proposition shows that there is a unique factorized irredundant form for each equivalence class of computation rules.
\[prop:ab\] Let $T=T_1\cdots T_n$ and $T'=T'_1\cdots T'_m$ be two rules in factorized irredundant form, where $n,m$ may be infinite. Then $T\sim T'$ if and only if $n=m$ and for every $1\leqslant i\leqslant n$, $T_i=T'_i$.
Clearly, the conditions are sufficient. So let us prove that they are also necessary.
First, we show that $n=m$. For a contradiction, suppose that $n\neq m$, say $n<m$. In particular, for every $j<m$, $\omega'_j$ is neither of the form $\omega \rho^*_4$ nor $\omega \rho^*_5$ for any $\omega \in \cL(\{\rho_4,\rho_5\})$, and $\omega'_j\neq (\rho_4\rho_5)^*$.
Note that $(a_1,b_1)=(a'_1,b'_1)$, otherwise $T\not\sim T'$ (just consider $(2,1)$, $(2,2)$, or $(1,2)$). Thus, to verify that $T_1=T'_1$, it suffices to show that $\omega_1=\omega'_1$. Let $p$ and $p'$ be the number of times that $\rho_4 $ and $ \rho_5$ alternate in $\omega_1$ and $\omega'_1$, respectively. It is easy to see that $p=p'$ (just consider sequences of the form $(1,1)(1,0)^a(0,1)[(1,0)(0,1)]^q(1,0)(0,1)^b$, for suitable $a,b\in \{0,1\}$ and $q\in \mathbb{N}$). Moreover, either both start with $\rho_4$ or both start with $\rho_5$ (just consider strings of the form $(1,1)[(1,0)(0,1)]^p$).
So suppose both start with $\rho_4$ and $p=2t-1$ (the case $p=2t$ is similar), say $$\omega_1=\rho_4^{l_1}\rho_5^{r_1}\cdots \rho_4^{l_{t}}\rho_5^{r_t}\, \mbox{ and }\, \omega'_1=\rho_4^{l'_1}\rho_5^{r'_1}\cdots \rho_4^{l'_t}\rho_5^{r'_t}$$ where $r_t,r'_t\neq 0$, and let $k=\min\{j:l_j\neq l'_j \, \mbox{or} \, r_j\neq r'_j\}$, say $l_k< l'_k$ (the other cases are dealt with similarly). Then, for $$\sigma=(1,1)[(1,0)(0,1)]^{k-1}(1,0)^{l_k+1}(0,1)[(1,0)(0,1)]^{t-k}$$ ${\operatornamewithlimits{\varovee}}(T(\sigma))> {\operatornamewithlimits{\varovee}}(T'(\sigma))$, which contradicts $T\sim T'$. Hence, $\omega_1=\omega'_1$, and we conclude $T_1=T'_1$. In fact, following exactly the same steps, one can verify that $T_i=T'_i$, for every $i<n$.
Now, as in the case above $(a_n,b_n)=(a'_n,b'_n)$, otherwise $T\not\sim T'$. Moreover, by assumption, we have that $\omega_n=\omega \rho^*_4$ or $\omega \rho^*_5$ for some $\omega \in \cL(\{\rho_4,\rho_5\})\setminus \{(\rho_4\rho_5)^*\}$, or that $\omega_n=(\rho_4\rho_5)^*$. Since $\omega'_n\neq (\rho_4\rho_5)^*$, $\omega_n\neq (\rho_4\rho_5)^*$. Hence, $\rho_4 $ and $ \rho_5$ must alternate the same number of times, say $$\omega_n=\rho_4^{l_1}\rho_5^{r_1}\cdots \rho_4^{l_{t}}\rho_5^{r_t}\, \mbox{ and }\, \omega'_n=\rho_4^{l'_1}\rho_5^{r'_1}\cdots \rho_4^{l'_t}\rho_5^{r'_t},$$ where either $l_t=*\neq l'_t$ and $r_t=r'_t=0$, or $r_t=*\neq r'_t$ and $l_t, l'_t>0$. Without loss of generality, suppose that the latter holds. Then, for $$\sigma=(1,1)[(1,0)(0,1)]^{n-1}(1,0)(0,1)^{r'_t+1}\,$$ $ {\operatornamewithlimits{\varovee}}(T(\sigma))< {\operatornamewithlimits{\varovee}}(T'(\sigma)),$ again a contradiction. Using Lemma \[lem:1a\] (ii), we see that all possible cases have been considered and, since each leads to a contradiction, we have $n=m$.
Now, by making use of (concatenations of) sequences of the form $$(1,1)(1,0)^a(0,1)[(1,0)(0,1)]^q(1,0)(0,1)^b(2,1)^{c}(2,2)^d(1,2)^e,$$ if both have infinitely many terms $T_i, T'_i$, then $T_i= T'_i$ for every $i\in \mathbb{N}$, and if $T$ and $T'$ have the same (finite) number of terms, say $n$, then $T_i=T'_i$ for every $i<n$, and $(a_n,b_n)=(a'_n,b'_n)$.
Thus, to complete the proof it remains to show that in the latter case, we have $\omega_n=\omega'_n$; in fact, both $\omega_n$ and $\omega'_n$ are $(\rho_4\rho_5)^*$, or $\omega \rho^*_4$ or $\omega \rho^*_5$ for some $\omega \in \cL(\{\rho_4,\rho_5\})\setminus \{(\rho_4\rho_5)^*\}$.
For the sake of a contradiction, suppose first that $\omega_n=(\rho_4\rho_5)^*$ but $\omega'_n=\omega \rho^*_4$ or $\omega'_n=\omega \rho^*_5$ where $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$, say $p$ times. Then $${\operatornamewithlimits{\varovee}}(T(\sigma(1,1)[(1,0)(0,1)]^{p+2}))< {\operatornamewithlimits{\varovee}}(T'(\sigma(1,1)[(1,0)(0,1)]^{p+2})),$$ where $\sigma$ is the concatenation of the sequences $\sigma_i$, $1\leqslant i<n$, given in Remark \[rem:2\].
Now suppose that $\omega_n=\omega \rho^*_4$ and $\omega'_n=\omega' \rho^*_5$ where $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$ and $\omega'$, say $p$ and $p'$ times, respectively. (The remaining cases can be dealt with similarly.) Without loss of generality, suppose that $p\leqslant p'$, then taking $a$ as the ceiling of $\frac{p}{2}$ we have
- $ {\operatornamewithlimits{\varovee}}(T(\sigma(1,1)[(1,0)(0,1)]^{a}(0,1)))> {\operatornamewithlimits{\varovee}}(T'(\sigma(1,1)[(1,0)(0,1)]^{a}(0,1))),$ if $\omega $ and $\omega'$ both start with $\rho_4$ or $\rho_5$, or $\omega $ and $\omega'$ start with $\rho_5$ and $\rho_4$, respectively, and
- $ {\operatornamewithlimits{\varovee}}(T(\sigma(1,1)(0,1)[(1,0)(0,1)]^{a}(0,1)))> {\operatornamewithlimits{\varovee}}(T'(\sigma(1,1)(0,1)[(1,0)(0,1)]^{a}(0,1))),$ if $\omega $ and $\omega'$ start with $\rho_4$ and $\rho_5$, respectively,
where again $\sigma$ is the concatenation of the sequences $\sigma_i$, $1\leqslant i<n$, given in Remark \[rem:2\].
Since both cases yield the desired contradiction, the proof is now complete.
The poset $(\gR/_\sim,\leqslant)$ of computation rules {#sec:struc}
======================================================
The above considerations allow us to focus on the quotient $\gR/_\sim$ of equivalence classes rather than on the whole set of w.f.c.r.’s. Moreover, by making use of Lemma \[lem:1a\], we can focus on factorized irredundant forms.
We consider the following order $\leqslant$ on $\gR/_\sim$ which was introduced in [@Gra03]. Let $R,R'$ be two computation rules in $\gR/_\sim$ and, for each sequence $\sigma=(a_i)_{i\in I}$, let $J_\sigma$ and $J'_\sigma$, $J_\sigma,J'_\sigma\subseteq I$, be the sets of indices of the terms in $\sigma$ deleted by $R$ and $R'$, respectively. Then, we write $R\leqslant R'$ if for all sequences $\sigma\in\gS$ we have $J_\sigma\supseteq J'_\sigma$. To simplify our exposition, we use $R(\sigma)\sqsubseteq R'(\sigma)$ to denote the fact that $J_\sigma\supseteq J'_\sigma$. If $J_\sigma= J'_\sigma$, then we simply write $R(\sigma)= R'(\sigma)$. Moreover, we may use the same notation for arbitrary substrings of w.f.c.r.’s.
It is easy to verify that $\leqslant$ is reflexive and transitive (but, as we will see, not linear). Also, it is antisymmetric: if two rules $R,R'$ delete exactly the same terms, i.e., $R\leqslant R'$ and $R'\leqslant R$, then they are equivalent. Conversely, it follows from Proposition \[prop:ab\] that if two rules are equivalent, then they have the same factorized irredundant form, therefore $R\leqslant R'$ and $R'\leqslant R$. Thus, $(\gR/_\sim,\leqslant)$ is a poset (partially ordered set). In what follows, we make no distinction between w.f.c.r.’s and the elements of $\gR/_\sim$ which will be always written in the factorized irredundant form.
Preliminary results
-------------------
In the sequel, let $\omega, \omega'\in\cL(\{\rho_4,\rho_5 \})$, and $a,b, c,d\in\{0,1\}$.
\[lem:4\] Let $T,T' \in\gR/_\sim$. If $T\geqslant T'$, then $\omega\rho_1^a \rho_2^b \rho_3T\geqslant \omega\rho_1^a \rho_2^b \rho_3T'$. Moreover, if $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$, then $\omega\rho_1^a \rho_2^b \rho_3T> \omega\rho_1^a \rho_2^b \rho_3T'$ (resp. $\omega\rho_1^a \rho_2^b \rho_3T\parallel \omega\rho_1^a \rho_2^b \rho_3T'$) if and only if $T> T'$ (resp. $T\parallel T'$).
From Lemma \[lem:1a\] (ii), if $\rho_4$ and $\rho_5$ alternate infinitely many times in $\omega$, then $$\omega\rho_1^a \rho_2^b \rho_3T=(\rho_4\rho_5)^*\rho_1^a \rho_2^b \rho_3 =\omega\rho_1^a \rho_2^b \rho_3T'.$$ So we may assume that $\omega:=\rho_4^{a_1}\rho_5^{b_1}\cdots\rho_4^{a_n}\rho_5^{b_n}$, with $a_i,b_i\in \bN \cup \{*\}$. We assume also that $a_i\neq 0$ for $2\leqslant
i\leqslant n$, and $b_i\neq 0$ for $1\leqslant i\leqslant n-1$. We treat the case $a_1\neq 0$, $b_n\neq 0$; the remaining cases follow similarly.
To see that $\omega\rho_1^a \rho_2^b \rho_3T\geqslant \omega\rho_1^a \rho_2^b \rho_3T'$, just note that for every string $\gamma =(p_1,m_1)\cdots (p_k,m_k)$, $(p_1,m_1)\geq (1,1)$, we have $$\begin{aligned}
\omega\rho_1^a \rho_2^b \rho_3T(\gamma)&=&(\rho_1^a \rho_2^b \rho_3)(p_1,m_1)T((p'_1,m'_1)\cdots (p'_{k'},m'_{k'}))\\
&\sqsupseteq &
(\rho_1^a \rho_2^b \rho_3)(p_1,m_1)T'((p'_1,m'_1)\cdots (p'_{k'},m'_{k'}))=\omega\rho_1^a \rho_2^b \rho_3T'(\gamma).\end{aligned}$$ Hence, $\omega\rho_1^a \rho_2^b \rho_3T\geqslant \omega\rho_1^a \rho_2^b \rho_3T'$, and by antisymmetry, the strict inequality occurs if and only if $T>T'$. Similarly, if $T\parallel T'$, then by taking
- $\sigma_>$ and $\sigma_<$ such that $T(\sigma_>)\sqsupset T'(\sigma_>)$ and $T(\sigma_<)\sqsubset T'(\sigma_<)$, respectively, and
- $\gamma_>=\sigma\sigma_>$ and $\gamma_<=\sigma\sigma_<, $ where $\sigma$ is given in Remark \[rem:2\] (for $T_i=\omega\rho_1^a \rho_2^b \rho_3$),
we can verify that $\omega\rho_1^a \rho_2^b \rho_3T\parallel \omega\rho_1^a \rho_2^b \rho_3T'$. This completes the proof of the lemma.
By repeated applications of Lemma \[lem:4\], we have the following corollary.
\[cor:5\] Let $T, T'\in \gR/_\sim$, and let $R=T_1T_2\cdots T_m\in \cL(\Psi)$, where $T_i=\omega_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$. If $T\geqslant T'$, then $RT\geqslant RT'$. Furthermore, if $\rho_4$ and $\rho_5$ alternate finitely many times in each $\omega_i$, then $RT> RT'$ (resp. $RT\parallel RT'$) if and only if $T> T'$ (resp. $T\parallel T'$).
\[rem:123\] In fact, by Corollary \[cor:5\] it follows that if $\rho_4$ and $\rho_5$ alternate finitely many times in each $\omega_i$, then $T\geqslant T'$ (resp. $T\parallel T'$) if and only if $RT\geqslant RT'$ (resp. $RT\parallel RT'$).
\[lem:1\] Let $T,T'\in\cL(\Psi)$ such that $T\geqslant T'$. Then $\omega\rho_1^a \rho_2^b \rho_3T\geqslant \omega\rho_1^c \rho_2^d \rho_3T'$ if and only if $(a,b)= (c,d)$ or $(c,d)=(1,1)$. Moreover, $\omega\rho_1^a \rho_2^b \rho_3T> \omega\rho_1^c \rho_2^d \rho_3T'$ if and only if $(a,b)\neq (c,d)=(1,1)$.
To see that the condition in the first claim is sufficient, observe that if $(a,b)=(c,d)$, then by Lemma \[lem:4\] $\omega\rho_1^a
\rho_2^b \rho_3T\geqslant \omega\rho_1^c \rho_2^d \rho_3T'$. If $(c,d)=(1,1)$, then for every nonassociative string $\sigma=(p_1,m_1)\cdots
(p_k,m_k)$ (i.e., $(p_1,m_1)\geqslant (1,1))$ we have $$\begin{aligned}
\omega\rho_1^a \rho_2^b \rho_3T(\sigma)&=&(\rho_1^a \rho_2^b \rho_3)(p_1,m_1)T((p'_1,m'_1)\cdots (p'_{k'},m'_{k'})\nonumber\\
&\sqsupseteq & T'((p'_1,m'_1)\cdots (p'_{k'},m'_{k'}))=\omega\rho_1^c \rho_2^d \rho_3T'(\sigma),\label{eq:ab}\end{aligned}$$ and hence $\omega\rho_1^a \rho_2^b \rho_3T\geqslant \omega\rho_1^c \rho_2^d \rho_3T'$. Moreover, if $(a,b)\neq(c,d)=(1,1)$, then by considering $(2,1)$ if $(a,b)$ equals $(0,1)$ or $(0,0)$, and $(1,2)$ if $(a,b)$ equals $(1,0)$, one can easily verify that $\omega\rho_1^a \rho_2^b \rho_3T> \omega\rho_1^c \rho_2^d \rho_3T'$, thus showing that the condition of the second claim is also sufficient.
To verify that the conditions in the first and second claims are also necessary, it suffices to show that if $(a,b), (c,d)\neq(1,1)$ and $(a,b)\neq (c,d)$, then $\omega\rho_1^a \rho_2^b \rho_3T\parallel \omega\rho_1^c \rho_2^d \rho_3T$. But this fact can be easily verified by making use of the strings $(2,1), (1,2)$ or $(2,2)$, and thus the proof of the lemma is now complete.
\[lem:2\] If $\rho_4$ and $\rho_5$ alternate infinitely many times in $\omega$ but not in $\omega'$, then $\omega\rho_1^a \rho_2^b \rho_3\sim\omega\rho_1^a \rho_2^b \rho_3T<\omega'\rho_1^a \rho_2^b \rho_3T'$, for every $T,T'\in\cL(\Psi)$.
Let $\omega=(\rho_4\rho_5)^*$ and $\omega':=\rho_4^{a_1}\rho_5^{b_1}\cdots\rho_4^{a_n}\rho_5^{b_n}$, with $a_i,b_i\in \bN \cup \{*\}$. (By Lemma \[lem:1a\], $\omega\rho_1^a \rho_2^b \rho_3\sim\omega\rho_1^a \rho_2^b \rho_3T$, for every $T\in\cL(\Psi)$.) Then for every string $\gamma =(p_1,m_1)\cdots (p_k,m_k)$, $(p_1,m_1)\geqslant (1,1)$, $$\omega\rho_1^a \rho_2^b \rho_3(\gamma)=(\rho_1^a \rho_2^b \rho_3)(p_1,m_1)\sqsubseteq
(\rho_1^a \rho_2^b \rho_3)(p_1,m_1)T'((p'_1,m'_1)\cdots (p'_{k'},m'_{k'}))=\omega'\rho_1^a \rho_2^b \rho_3T'(\gamma).$$ Hence, $\omega\rho_1^a \rho_2^b \rho_3\leqslant \omega'\rho_1^a \rho_2^b \rho_3T'$. For $1\leqslant i\leqslant n$, let $\alpha_i=0$ (resp. $\beta_i=0$) if $a_i=0$ (resp. $b_i=0$) and $\alpha_i=1$ (resp. $\beta_i=1$) otherwise. By considering $$\sigma=(1,1)(1,0)^{\alpha_1}(0,1)^{\beta_1}\cdots (1,0)^{\alpha_n}(0,1)^{\beta_n},$$ one can easily verify that $\omega\rho_1^a \rho_2^b \rho_3<\omega'\rho_1^a \rho_2^b \rho_3T'$.
\[lem:3\] Let $\omega:=\rho_4^{a_1}\rho_5^{b_1}\cdots\rho_4^{a_n}\rho_5^{b_n}$, $n\geq 0$, and let $\omega':=\rho_4^{a'_1}\rho_5^{b'_1}\cdots\rho_4^{a'_m}\rho_5^{b'_m}$, $m\geq 0$. For $T=\omega\rho_1^a \rho_2^b \rho_3\langle \cdot \rangle^+_-$ and $T'=\omega'\rho_1^a \rho_2^b \rho_3\langle \cdot \rangle^+_-$, the following assertions hold:
- If $n=m=1$, then $T\parallel T'$ if and only if $b_1\neq b'_1$, or $\big[b_1= b'_1=0$ and $a_1\neq a'_1\big]$.
- If $n=m>1$, then $T\parallel T'$ if and only if
- $b_n\neq b'_n$, or
- $b_n= b'_n=0$ and $a_n\neq a'_n$, or
- $b_n= b'_n\neq 0$ and there exists $1\leqslant j<n$ such that $(a_j,b_j)\neq (a'_j,b'_j)$, or
- $b_n= b'_n= 0$, $a_n= a'_n\neq 0$, and $a_{n-1}\neq a'_{n-1}$ or there exists $1\leqslant j<n-1$ such that $(a_j,b_j)\neq (a'_j,b'_j)$.
- If $n\neq m$, then $T\parallel T'$.
We may assume that $a_i\neq 0$ for $2\leqslant i\leqslant n$ and $b_i\neq 0$ for $1\leqslant i\leqslant n-1$, and that $a'_j\neq 0$ for $2\leqslant j\leqslant m$ and $b'_j\neq 0$ for $1\leqslant j\leqslant m-1$.
[**(i):**]{} To prove sufficiency, suppose first that $b_1\neq b'_1$, say $b_1>b'_1$. Then, by considering $\sigma=(1,1)(0,1)^{b'_1}(1,1)$ and $\sigma'=(1,1)(0,1)^{b'_1+1}$, we see that $T\not \leqslant T'$ and $T\not \geqslant T'$, respectively.
So suppose that $b_1= b'_1=0$ and $a_1\neq a'_1$, say $a_1>a'_1$. Then, by considering $\sigma=(1,1)(1,0)^{a'_1}(1,1)$ and $\sigma'=(1,1)(1,0)^{a'_1+1}$, we see that $T\not \leqslant T'$ and $T\not \geqslant T'$, respectively.
We prove necessity by contraposition. Observe first that if $(a_1,b_1)=(a'_1,b'_1)$, then $T=T'$. So suppose that $b_1= b'_1\neq 0$ and $a_1\neq a'_1$, say $a_1>a'_1$. We claim that $T<T'$. By making use of $\sigma=(1,1)(1,0)^{a'_1+1}$, we see that $T(\sigma)\sqsubset T'(\sigma)$. Thus, we only have to show that $T\leqslant T'$.
Let $\sigma=(p_1,m_1)(p_2,m_2)\cdots (p_k,m_k)$. If the action of $\rho_1^a \rho_2^b \rho_3$ does not delete all terms of $(p_1,m_1)$, then $T(\sigma)\sqsubseteq T'(\sigma)$ since the “subrule" $\langle \cdot \rangle^+_-$ in $T$ and $T'$ does not act on $\sigma$; hence, without loss of generality, we may further assume that $(p_1,m_1)=(1,1)$.
Now, if $m_2\neq 0$, then $T(\sigma)\sqsubseteq T'(\sigma)$, and if $p_2=0$, then $T(\sigma)= T'(\sigma)$; hence, we may assume $m_2=0$ and $p_2\neq 0$. In fact, we may suppose that $p_2\leqslant a'_1$ since, otherwise, $T(\sigma)\sqsubset T'(\sigma)$.
Under the assumption that $m_2=0$ and $p_2\leqslant a'_1$, and applying the same reasoning to $(p_3,m_3)$, we again derive that the only case to consider is when $m_3=0$ and $0<p_3\leqslant a'_1-p_2$. Proceeding in this way, we may eventually arrive at $(p_j,m_j)$ with $m_i=0$ and $p_i>0$ for $i=2,\ldots,j-1$, $m_j=0$ and $$a'_1-\sum_{i=2}^{j-1}p_i\leqslant 0.$$ The only case to consider reduces then to $p_j=0$, hence $(p_j,m_j)=(0,0)$, so this term disappears, and similarly all remaining terms till $(p_k,m_k)$. Otherwise, if $$a'_1-\sum_{i=2}^{k-1}p_i>0$$ we have to consider the case $(p_k,m_k)=(p_k,0)$ with $0<p_k\leqslant a'_1-\sum_{i=2}^{k-1}p_i$. But then clearly $T(\sigma)\sqsubset T'(\sigma)$. In any case we have $T\leqslant T'$ (hence, $T\not\parallel T'$), and the proof is now complete.
[**(ii):**]{} The proof of sufficiency in the case when $(a_j,b_j)= (a'_j,b'_j)$, for $1\leqslant j<n$, follows exactly the same steps as in the proof of (i), by adding $(0,1)[(1,0)(0,1)]^{n-2}$ (after the first $(1,1)$) to the strings used above. We consider the case when $b_n= b'_n\neq 0$ and there exists $1\leqslant j<n$ such that $(a_j,b_j)\neq (a'_j,b'_j)$, say $a_j>a'_j$. The case $b_n= b'_n= 0$, $a_{n-1}= a'_{n-1}\neq 0$, and $(a_j,b_j)\neq
(a'_j,b'_j)$ (say $b_j>b'_j$) for some $1\leqslant j<n-1$, follows similarly by interchanging the roles of $\rho_4$ and $\rho_5$, and those of $(1,0)$ and $(0,1)$ (and $a'_j$ and $b'_j$) in the strings below.
So let $\sigma $ be given by
- $\sigma=(1,1)(0,1)(1,1)^{j-2}(1,0)^{a'_j+1}(0,1)(1,1)^{n-j-1}(1,0)$ if $1<j$, and
- $\sigma=(1,1)(1,0)^{a'_j+1}(0,1)(1,1)^{n-1}(1,0)$ otherwise.
Then $T(\sigma)\neq \varepsilon=T'(\sigma)$ and thus $T\not\leqslant T'$. Now let $\sigma'$ be given by
- $\sigma'=(1,1)(0,1)(1,1)^{j-2}(1,0)^{a'_j+1}(0,1)(1,1)^{n-j-2}(1,0)$ if $1<j$ (with $(1,1)^{n-j-2}=(0,0)$ whenever $n-j-2\leqslant 0$), and
- $\sigma'=(1,1)(1,0)^{a'_j+1}(0,1)(1,1)^{n-2}(1,0)$ otherwise.
Then $T(\sigma')= \varepsilon\neq T'(\sigma')$ and thus $T\not\geqslant T'$.
The proof of necessity is similar to case (i). If none of the conditions of (ii) is satisfied, then we may assume that $b_n= b'_n\neq 0$, $(a_j,b_j)= (a'_j,b'_j)$ for every $1\leqslant j<n$, and focus on the case $a_n\neq a'_n$ (for otherwise $T=T'$). (The case when $b_n= b'_n= 0$, $a_n=a'_n$, $b_{n-1}=b'_{n-1}$, and $(a_j,b_j)= (a'_j,b'_j)$ for every $1\leqslant j<n-1$ follows similarly by the above mentioned substitutions.)
So suppose without loss of generality that $a_n>a'_n=t$. As in case (i), we show that $T<T'$. By making use of $\sigma=(1,1)(0,1)(1,1)^{n-2}(1,0)^{a'_n+1}$, we see that $T(\sigma)\sqsubset T'(\sigma)$.
So let $\sigma=(p_1,m_1)(p_2,m_2)\cdots (p_k,m_k)$. By reasoning as in (i), we may assume that $k>t+n$ and, since $(a_j,b_j)= (a'_j,b'_j)$ for every $1\leqslant j<n$, that $(p_2,m_2)=(0,1)$, $(p_j,m_j)=(1,1)$ for $3\leqslant j<n+1$, and that $p_j= 0$ for each $n+1\leqslant j \leqslant t+n$; for otherwise we reach the same conclusion $T(\sigma)\sqsubseteq T'(\sigma)$.
If $p_{t+n+1}= 0$ or $m_{t+n+1}= 0$, then $T(\sigma)= T'(\sigma)$ or $T(\sigma)\sqsubset T'(\sigma)$, respectively. If $p_{t+n+1}\neq 0$ and $m_{t+n+1}\neq 0$, then $T(\sigma)\sqsubseteq T'(\sigma)$, and the proof of (ii) is now complete.
[**(iii):**]{} Suppose that $n\neq m$, say $1\leqslant n<m$. First we consider the case $n=1$. Let $\sigma=(1,1)(0,1)(1,0)$. Then $T(\sigma)\neq \varepsilon =T'(\sigma)$ and thus $T\not\leqslant T'$. Let $\sigma'=(1,1)(0,1)^\alpha(1,1)^{m-1}(1,0)$ where $\alpha=0$ if $b_1=0$, and $\alpha=1$ otherwise. Then $T(\sigma')=\varepsilon\neq T'(\sigma')$ and thus $T\not\geqslant T'$.
Suppose now that $n>1$. Then, for $\sigma=(1,1)(0,1)(1,1)^{m-2}(1,0)$, we have $T(\sigma)\neq \varepsilon=T'(\sigma)$ and thus $T\not\leqslant T'$.
To show that $T\not\geqslant T'$, let $\sigma'$ be given by
- $\sigma'=(1,1)(0,1)(1,1)^{n-1}(1,1) (1,1)^{m-n-1}(1,0)$ if $b_n,b'_m\neq 0$,
- $\sigma'=(1,1)(0,1)(1,1)^{n-2}(1,0)(1,1)(0,1) (1,1)^{m-n-1}(1,0)$ if $b_n=0$ and $b'_m\neq 0$,
- $\sigma'=(1,1)(0,1)(1,1)^{n-1}(1,1) (1,1)^{m-n-2}(1,0)(0,1)$ if $b_n\neq 0 $ and $b'_m= 0$, and
- $\sigma'=(1,1)(0,1)(1,1)^{n-2}(1,0)(1,1)(0,1) (1,1)^{m-n-1}(1,0)$ if $b_n,b'_m= 0$.
In each case we get $T(\sigma')=\varepsilon\neq T'(\sigma')$. Thus $T\not\geqslant T'$, and the proof of Lemma \[lem:3\] is now complete.
\[Rem:1234\] Note that the proofs of (i) and (ii) of Lemma \[lem:3\] show that if $b_n=b'_n\neq 0$ and $(a_j,b_j)= (a'_j,b'_j)$ for $1\leqslant j<n$, then $T<T'$ if and only if $a_n>a'_n$. Similarly, if $b_n=b'_n= 0$, $a_n=a'_n\neq 0$, $a_{n-1}= a'_{n-1}$, and $(a_j,b_j)= (a'_j,b'_j)$ for $1\leqslant j<n-1$, then $T<T'$ if and only if $b_{n-1}>b'_{n-1}$.
Moreover, if $b_n=b'_n$, $a_n=a'_n$, and there exists $1\leqslant j< n$ such that $a_j>a'_j$ or $b_j>b'_j$, then we have that $\omega'\rho_1\rho_2\rho_3\langle \cdot \rangle^+_-\not\leqslant
\omega\rho_1^a\rho_2^b\rho_3T$ for every $T\in \gR$ and any $a,b\in \{0,1\}$. To illustrate, suppose $a_j>a'_j$. Then consider $\sigma $ given by
- $\sigma=(1,1)(0,1)(1,1)^{j-2}(1,0)^{a'_j+1}(0,1)[(1,0)(0,1)]^{n-j-1}(1,0)$ if $1<j$, and
- $\sigma=(1,1)(1,0)^{a'_j+1}(0,1)[(1,0)(0,1)]^{n-1}(1,0)$ otherwise.
\[Rem:123456\] By reasoning as in the proof of (iii) of Lemma \[lem:3\] and taking $\omega $ and $\omega'$ as above with $m<n$, one can show that $\omega'\rho_1\rho_2\rho_3\langle \cdot \rangle^+_-\not\leqslant
\omega\rho_1^a\rho_2^b\rho_3T$ for every $T\in \gR$ and any $a,b\in \{0,1\}$.
The subposet $\gR_{123}$.
-------------------------
Let $\gR_{123}:=\{R\in\gR/_\sim:R\in \cL(\{\rho_1,\rho_2,\rho_3\})\}.$ Written in factorized irredundant form, these rules read $R=T_1T_2\cdots$ where, for each $i\in \mathbb{N}$, $T_i=\rho_1^a \rho_2^b \rho_3$ for some $a,b\in\{0,1\}$. Note that, by Proposition \[prop:ab\], each such expression is indeed in $\gR/_\sim$, is in factorized irredundant form, and has infinite length.
For $T\in \gR_{123}$, and $a,b\in\{0,1\}$, set $\mathcal{I}^T_{ab}=\{i\in
\mathbb{N}:T_i=\rho_1^a \rho_2^b \rho_3\}$. Since $T$ is of infinite length, $(\mathcal{I}^T_{ab})_{a,b\in\{0,1\}}$ is a partition of $\mathbb{N}$. Moreover, $(\mathcal{I}^T_{ab})_{a,b\in\{0,1\}}$ uniquely determines $T$.
\[prop:1\] Let $T,T'\in \gR_{123}$. Then $T\leqslant T'$ if and only if $\mathcal{I}^T_{11}\supseteq \mathcal{I}^{T'}_{11}$ and $\mathcal{I}^T_{ab}\subseteq \mathcal{I}^{T'}_{ab}$, for any $(a,b)\neq(1,1)$. In particular,
- $T<T'$ whenever $\mathcal{I}^T_{11}\supset \mathcal{I}^{T'}_{11}$ and $\mathcal{I}^T_{ab}\subseteq \mathcal{I}^{T'}_{ab}$, for any $(a,b)\neq(1,1)$;
- $T\parallel T'$ whenever $\mathcal{I}^T_{ab}\parallel \mathcal{I}^{T'}_{ab}$ for some $a,b\in\{0,1\}$.
Clearly, the third claim is a consequence of the first. Since $(\mathcal{I}^T_{ab})_{a,b\in\{0,1\}}$ is a partition of $\mathbb{N}$, the second claim is also a consequence of the first.
To see that the conditions in the first claim are sufficient, note that, if $T$ acts on a string $\sigma =(p_1,m_1)\cdots (p_k,m_k)$, then its factor $T_i$ acts on the term $(p_i,m_i)$. Suppose that for some $\sigma$ we have $T(\sigma)\sqsupset T'(\sigma)$. Then, using the above remark, for some $i$ we have $T_i(p_i,m_i)\sqsupset T'_i(p_i,m_i)$, which means that $T'_i=\rho_1^a
\rho_2^b \rho_3$, $T_i=\rho_1^c \rho_2^d \rho_3$ with $(c,d)\leqslant (a,b)$ pointwise and $(c,d)\neq (a,b)$. There are two possibilities:
- $(a,b)=(1,1)$, and hence $\I^T_{11}\not\supseteq \I^{T'}_{11}$, or
- $(a,b)=(1,0)$ or $(a,b)=(0,1)$, in which case $(c,d)=(0,0)$, and thus $\I^T_{00}\not\subseteq \I^{T'}_{00}$.
To show that the conditions of the first claim are necessary, suppose first that $\mathcal{I}^T_{11}\not \supseteq \mathcal{I}^{T'}_{11}$. Let $i=\min\{j\in \mathcal{I}^{T'}_{11}: j\not \in\mathcal{I}^T_{11}\}$. If $i\in \mathcal{I}^T_{ab}$ for $ab= 00$ or $01$, consider $\sigma=(1,1)^{i-1}(2,1)$. Then $T(\sigma)=(1,0)\neq \varepsilon=T'(\sigma)$. If $i\in \mathcal{I}^T_{10}$, consider $\sigma=(1,1)^{i-1}(1,2)$. Then $T(\sigma)=(0,1)\neq\varepsilon=T'(\sigma)$. Thus $T\not\leqslant T'$.
So we may assume that $\mathcal{I}^T_{11}\supseteq \mathcal{I}^{T'}_{11}$. We treat the case $\mathcal{I}^T_{01}\not \subseteq \mathcal{I}^{T'}_{01}$; the remaining cases follow similarly. Let $i=\min\{j\in \mathcal{I}^{T}_{01}: j\not \in\mathcal{I}^{T'}_{01}\}$. If $i\in \mathcal{I}^{T'}_{10}$, consider $\sigma=(1,1)^{i-1}(2,1)$. Then $T(\sigma)=(1,0)\neq\varepsilon=T'(\sigma)$. If $i\in \mathcal{I}^{T'}_{00}$, consider $\sigma=(1,1)^{i-1}(2,2)$. Then $T(\sigma)=(1,0)\neq\varepsilon=T'(\sigma)$. Thus $T\not\leqslant T'$, and the proof is now complete.
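The order criterion of Proposition \[prop:1\] is easy to mechanize for rules that differ from the least rule $\langle\cdot\rangle_=$ at only finitely many indices. The Python sketch below is only an illustration of the criterion, not part of the formal development; it encodes such a rule as the finite map $i\mapsto(a,b)$ on the indices where $T_i\neq\rho_1\rho_2\rho_3$.

```python
# A finite-support rule T in R_123 is stored as {i: (a, b)} for the indices
# where T_i = rho_1^a rho_2^b rho_3 differs from (1, 1); every unlisted
# index belongs to I^T_11.

def leq(T, Tp):
    """T <= T' iff I^T_11 contains I^{T'}_11 and I^T_ab is contained in
    I^{T'}_ab for every (a, b) != (1, 1).

    With the finite-support encoding both conditions collapse to a single
    check: wherever T deviates from the least rule, T' deviates in exactly
    the same way."""
    return all(Tp.get(i) == ab for i, ab in T.items())

least = {}            # the least rule <.>_=  (I_11 = N)
T1 = {1: (0, 1)}      # an atom: deviates at a single index
T2 = {1: (1, 0)}

assert leq(least, T1) and leq(least, T2)    # <.>_= lies below everything
assert not leq(T1, T2) and not leq(T2, T1)  # T1 and T2 are incomparable
```

The two final assertions reproduce, on a toy instance, the incomparability predicted by the second bullet of Proposition \[prop:1\].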
As immediate corollaries we have the following results.
\[cor:1\] Let $T\in \gR_{123}$.
- $T$ is the least rule if and only if $\mathcal{I}^T_{11}=\mathbb{N}$, i.e., $T=\langle\cdot\rangle_=$.
- $T$ is an atom if and only if $\mathcal{I}^T_{11}=\mathbb{N}\setminus \{i\}$ for some $i\in \mathbb{N}$.
- $T$ is a maximal element if and only if $\mathcal{I}^T_{11}=\emptyset$.
\[cor:2\] Let $T, T'\in \gR_{123}$ where $T=T_1T_2\cdots$ and $T'=T'_1T'_2\cdots$. Then $T\wedge T'=S$ where $\mathcal{I}^S_{11}=\mathcal{I}^T_{11}\cup \mathcal{I}^{T'}_{11}\cup \bigcup_{(a,b)\neq(1,1)}\big(\mathcal{I}^T_{ab}\oplus \mathcal{I}^{T'}_{ab}\big)$, with $\oplus$ denoting the symmetric difference, and $\mathcal{I}^S_{ab}=\mathcal{I}^T_{ab}\cap \mathcal{I}^{T'}_{ab}$ for every $(a,b)\neq(1,1)$.
In other words, $\gR_{123}$ constitutes a $\wedge$-semilattice. Now, by Proposition \[prop:1\], if $T ,T' \leqslant R\in \gR_{123}$, then $\mathcal{I}^T_{11}, \mathcal{I}^{T'}_{11}\supseteq \mathcal{I}^{R}_{11}$ and $\mathcal{I}^T_{ab}, \mathcal{I}^{T'}_{ab}\subseteq \mathcal{I}^{R}_{ab}$ for every $(a,b)\neq(1,1)$. Hence, Corollary \[cor:2\] can be refined by considering intervals of the form $[\langle \cdot\rangle_=,R]$ for some $R\in \gR_{123}$.
\[cor:3\] Let $R\in \gR_{123}$. Then $([\langle \cdot\rangle_=,R], \leqslant)$ constitutes a lattice under $\wedge$ and $\vee$ defined by
1. $T\wedge T'=S$ where $\mathcal{I}^S_{11}=\mathcal{I}^T_{11}\cup \mathcal{I}^{T'}_{11}\cup \bigcup_{(a,b)\neq(1,1)}\mathcal{I}^T_{ab}\oplus \mathcal{I}^{T'}_{ab}$, and $\mathcal{I}^S_{ab}=\mathcal{I}^T_{ab}\cap \mathcal{I}^{T'}_{ab}$ for every $(a,b)\neq(1,1)$;
2. $T\vee T'=S$ where $\mathcal{I}^S_{11}=\mathcal{I}^T_{11}\cap \mathcal{I}^{T'}_{11}$, and $\mathcal{I}^S_{ab}=\mathcal{I}^T_{ab}\cup \mathcal{I}^{T'}_{ab}$ for every $(a,b)\neq(1,1)$,
for every $T ,T' \in [\langle \cdot\rangle_=,R]$, with $T=T_1T_2\cdots$ and $T'=T'_1T'_2\cdots$. Moreover, $([\langle \cdot\rangle_=,R], \leqslant)$ is order-isomorphic to $(\mathcal{P}(\bigcup_{(a,b)\neq(1,1)}\mathcal{I}^R_{ab}),\subseteq)$.
From Corollary \[cor:3\], it follows that $(\gR_{123}, \leqslant)$ embeds the power set of the natural numbers ordered by inclusion. Furthermore, for $R\in \gR_{123}$, if $|\bigcup_{(a,b)\neq(1,1)}\mathcal{I}^R_{ab}|=n$ is finite, then $|[\langle \cdot\rangle_=, R]|=2^n$.
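The meet and join of Corollary \[cor:3\], and the $2^n$ count of interval elements, can be checked concretely on the same finite-support encoding used above (a rule stored as $\{i:(a,b)\}$ for the indices where it deviates from $\langle\cdot\rangle_=$). The snippet below is only an illustration of the corollary, not part of the proofs.

```python
from itertools import combinations

def meet(T, Tp):
    # I^S_ab = I^T_ab ∩ I^{T'}_ab for (a, b) != (1, 1); indices where the
    # two rules disagree fall into a symmetric difference, hence into I^S_11.
    return {i: v for i, v in T.items() if Tp.get(i) == v}

def join(T, Tp):
    # Only taken inside an interval [<.>_=, R], where supports never conflict.
    assert all(Tp[i] == v for i, v in T.items() if i in Tp)
    return {**T, **Tp}

def interval(R):
    # [<.>_=, R] is order-isomorphic to the power set of the support of R:
    # one rule per subset of indices where R deviates from the least rule.
    idx = sorted(R)
    return [{i: R[i] for i in sub}
            for r in range(len(idx) + 1) for sub in combinations(idx, r)]

R = {1: (0, 1), 3: (1, 0), 7: (0, 0)}
assert len(interval(R)) == 2 ** len(R)       # |[<.>_=, R]| = 2^n
assert meet({1: (0, 1)}, {1: (1, 0)}) == {}  # meet of two atoms is <.>_=
```

The last assertion mirrors the fact that the meet of two incomparable atoms is the least rule.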
Least element and atoms {#atoms}
-----------------------
We turn to the study of the atoms of $\gR/_\sim$. The next proposition was presented in [@Gra03].
\[prop:3\] The rule $\langle \cdot\rangle_-^+$ is the least element of $\gR/_\sim$.
It follows immediately from the fact that $\langle \cdot\rangle_-^+$ deletes every term of a nonassociative string.
\[prop:3a\] Let $T=\omega \rho_1^a \rho_2^b \rho_3 T' $ be an element of $\gR/_\sim$, where $(a,b)\neq(1,1)$. Then $T$ is an atom if and only if $\rho_4$ and $\rho_5$ alternate infinitely many times in $\omega$ (and therefore $T'=\varepsilon$).
Note that if $\rho_4$ and $\rho_5$ alternate infinitely many times in $\omega$, then $T=\omega \rho_1^a \rho_2^b \rho_3 $. By Lemma \[lem:1\], the condition is sufficient.
To show that it is also necessary, let $T$ be an atom, and for the sake of a contradiction suppose that $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$. Let $T''=(\rho_4\rho_5)^*T$. Then $T''=(\rho_4\rho_5)^* \rho_1^a \rho_2^b \rho_3 > \langle \cdot\rangle_-^+ $. Moreover, by Lemma \[lem:2\] we have $T>T''$ which contradicts the fact that $T$ is an atom.
Consequently, for $(a,b)<(1,1)$, we have only three atoms, namely $(\rho_4\rho_5)^*\rho_3$, $(\rho_4\rho_5)^*\rho_1\rho_3$ and $(\rho_4\rho_5)^*\rho_2\rho_3$.
\[prop:4\] If $T=\omega\rho_1 \rho_2 \rho_3T'$ is an atom, then $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$.
If $\rho_4$ and $\rho_5$ alternate infinitely many times in $\omega$, then $T=\langle \cdot\rangle_-^+$.
\[prop:5\] Let $T=\omega\rho_1 \rho_2 \rho_3T'$ such that $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$. If $T$ is an atom, then $T'=\langle \cdot\rangle_-^+$.
Let $T=\omega\rho_1 \rho_2 \rho_3T'$ be an atom. Suppose for the sake of a contradiction that $T'\neq\langle \cdot\rangle_-^+$. By Proposition \[prop:3\], $T'>\langle \cdot\rangle_-^+$, and by Lemma \[lem:2\], $T''=\omega\rho_1 \rho_2\rho_3 \langle \cdot\rangle_-^+>\langle \cdot\rangle_-^+$. Moreover, by Lemma \[lem:4\], $T>T''$ which contradicts the fact that $T$ is an atom.
\[prop:6\] Let $T=\omega\rho_1 \rho_2 \rho_3 \langle \cdot\rangle_-^+$. Then $T$ is an atom if and only if $\omega:=\rho_4^{a_1}\rho_5^{b_1}\cdots\rho_4^{a_n}\rho_5^{b_n}$ with $a_i\neq 0$ for $2\leqslant i\leqslant n$ and $b_i\neq 0$ for $1\leqslant i\leqslant n-1$, and such that
- $b_n\neq 0$ and $a_n$ is infinite, or
- $b_n=0$, $a_n\neq 0$ and $b_{n-1}$ is infinite.
Necessity follows from Propositions \[prop:4\] and \[prop:5\], and Lemma \[lem:3\] and Remark \[Rem:1234\]. Sufficiency follows from Lemma \[lem:3\] and Remark \[Rem:1234\].
We can now explicitly describe the atoms of $\gR/_\sim$.
A w.f.c.r. $T$ is an atom of $\gR/_\sim$ if and only if $T=(\rho_4\rho_5)^* \rho_1^a \rho_2^b \rho_3$, for $(a,b)\neq(1,1)$, or $T=\omega\rho_1 \rho_2 \rho_3 \langle \cdot\rangle_-^+$ where $\omega:=\rho_4^{a_1}\rho_5^{b_1}\cdots\rho_4^{a_n}\rho_5^{b_n}$ with $a_i\neq 0$ for $2\leqslant i\leqslant n$ and $b_i\neq 0$ for $1\leqslant i\leqslant n-1$, and such that
- $b_n\neq 0$ and $a_n$ is infinite, or
- $b_n=0$, $a_n\neq 0$ and $b_{n-1}$ is infinite.
Maximal elements {#maximals}
----------------
We now focus on the maximal elements of $\gR/_\sim$. In [@Gra03], it was proved that $\langle\cdot\rangle_0$ is a maximal element of the set of well-formed computation rules.
\[prop:max1\] Let $T=\omega \rho_1^a \rho_2^b \rho_3T'$. If $T$ is maximal, then
- $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$,
- $(a,b)<(1,1)$, and
- $T'$ is maximal.
Conditions $(i)$ and $(ii)$ follow from Lemmas \[lem:2\] and \[lem:1\], respectively. Condition $(iii)$ follows from Lemma \[lem:4\].
As it turns out, every maximal element of $\gR_{123}$ is also maximal in $\gR/_\sim$.
\[prop:max123\] Let $T\in \gR_{123}$. If $\mathcal{I}^T_{11}=\emptyset$, then $T$ is a maximal element of $\gR/_\sim$.
It suffices to show that for every $T'=T'_1T'_2\cdots $ such that $T'\geqslant T=T_1T_2\cdots$, we have $T'\in \gR_{123}$.
For the sake of a contradiction, suppose $T'\not \in \gR_{123}$, and let $i=\min\{j\in \mathbb{N}: T'_j\not \in \cL(\{\rho_1, \rho_2, \rho_3\})\}$. Note that $T_i=\rho_1^a \rho_2^b \rho_3$ for $(a,b)<(1,1)$, for every $i\in \mathbb{N}$. Since $T'\geqslant T$, $T'_i=\omega \rho_1^a \rho_2^b \rho_3$ where $\rho_4$ and $\rho_5$ alternate finitely many times in $\omega$. Without loss of generality, suppose $\omega=\rho_4\omega'$. Consider $\sigma=(1,1)^{i-1}(1,0)^2$. Then $T(\sigma)=(1,0)^2\sqsupset(1,0)=T'(\sigma)$, thus yielding the desired contradiction.
Hence, $T'\in \gR_{123}$ and, by Corollary \[cor:1\], $T'=T$. Thus $T$ is maximal in $\gR/_\sim$.
Now we consider the maximal elements $T\in (\gR/_\sim)\setminus \gR_{123}$.
\[prop:notmax1\] Let $T=\omega\rho_1^a \rho_2^b \rho_3R$. If $\omega \not \in \cL(\rho_4)\cup\cL(\rho_5)$, then $T$ is not maximal.
Let $\omega = \rho_4^{a_1}\rho_5^{b_1}\cdots \rho_4^{a_n}\rho_5^{b_n}$, $n\geqslant 1$. Without loss of generality, suppose that $a_i\neq 0$, $b_i\neq 0$, for every $1\leqslant i\leqslant n$. Assume also that $a,b=0$; the other cases $(a,b)<(1,1)$ follow similarly.
Let $R'=(\rho_1 \rho_2 \rho_3)^nR$, and set $T'=\rho_1^a \rho_2^b \rho_3R'$. Let $\gamma=(1,1)(1,0)(0,1)$. Then $T(\gamma)=\varepsilon\sqsubset(1,0)(0,1)=T'(\gamma)$, and thus $T<T'$. Hence, $T$ is not maximal.
As an immediate corollary we get the following necessary condition for maximality.
\[cor:max4or5\] Let $T=T_1T_2\cdots$ with $T_i=\omega_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$. If $T$ is maximal, then for every $i\in \mathbb{N}$, $\omega_i\in \cL(\rho_4)\cup\cL(\rho_5)$.
Let $T=T_1T_2\cdots$ where $T_i=\omega_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$, with $\omega_i \in \cL(\rho_4)\cup\cL(\rho_5)$. If for some $i\in \mathbb{N}$, $\omega_i$ is $\rho_4^*$, then $T=T_1\cdots T_i$. Otherwise, $T=T_1T_2\cdots$, and for each $i$ there is a string $\gamma$ such that $T_i$ acts on $\gamma$.
\[prop:7\] Let $T=T_1T_2\cdots$ where $T_i=\omega_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$, and $T'=T'_1T'_2\cdots $ where $T'_i=\omega'_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$, with $\omega_i, \omega'_i \in \cL(\rho_4)\cup\cL(\rho_5)$. Then $T\parallel T'$ if and only if one of the following holds:
- each $\omega_i, \omega'_i$ has finite length, and $\omega_i\neq \omega'_i$ for some $i\in \mathbb{N}$, or
- $T=T_1\cdots T_i$ and neither $\rho_4^*$ nor $\rho_5^*$ occur in $T_j$, $1\leqslant j\leqslant i-1$, nor in $T'$, or
- $T=T_1\cdots T_i$ and $T'=T'_1\cdots T'_j$ where neither $\rho_4^*$ nor $\rho_5^*$ occur in $T_l$, $1\leqslant l\leqslant i-1$, nor in $T'_k$, $1\leqslant k\leqslant j-1$, or $\omega_t\neq \omega'_t$, for some $1\leqslant t\leqslant i\wedge j$.
To see that the conditions in (i)-(iii) are necessary, observe that if $T\parallel T'$ ($T$ and $T'$ in factorized irredundant forms), then we must have $T\neq T'$. Since each $\omega_i$ and each $\omega'_i$ is in $\cL(\rho_4)\cup\cL(\rho_5)$, one of (i)-(iii) must occur. To show that (i) is sufficient, assume that each $\omega_i, \omega'_i$ has finite length and, without loss of generality, suppose that $\omega_1\neq \omega'_1$. We consider three representative cases; the remaining cases follow similarly.
Suppose that $\omega_1\in \cL(\{\rho_4\})$ and $\omega'_1\in\cL(\{\rho_5\})$. Take $\sigma=(1,1)(0,1)$ and $\sigma'=(1,1)(1,0)$. Then $T(\sigma)=(0,1)\neq\varepsilon=T'(\sigma)$, but $T(\sigma')=\varepsilon \neq (1,0)=T'(\sigma')$.
Suppose that $\omega_1\in \cL(\{\rho_4\})$ and $\omega'_1=\varepsilon$. Take $\sigma=(1,1)(1,0)$ and $\sigma'=(1,1)(1,1)$. Then $T(\sigma)=\varepsilon\neq(1,0)=T'(\sigma)$, but $T(\sigma')=(0,1)\neq\varepsilon =T'(\sigma')$.
Suppose now that $\omega_1=\rho_4^n$ and $\omega'_1=\rho_4^m$, say $n<m$. Take $\sigma=(1,1)(1,0)^{n+1}$ and $\sigma'=(1,1)(1,0)^n(1,1)$. Then $T(\sigma)=(1,0)\neq\varepsilon=T'(\sigma)$, but $T(\sigma')=\varepsilon \neq(0,1)=T'(\sigma')$.
In all representative cases we conclude that $T\parallel T'$.
To show that (ii) is sufficient, suppose that $T=T_1\cdots T_i$ and neither $\rho_4^*$ nor $\rho_5^*$ occur in $T_l$, $1\leqslant l\leqslant i-1$, nor in $T'$. Let $k=\min\{j:\omega_j\neq \omega'_j\}$. If $k\leqslant i-1$, then the proof of (i) can be used to show that $T\parallel T'$.
So suppose that $k=i$ and, without loss of generality, suppose that $\omega_i= \rho_4^*$ and $\omega'_i=\rho_4^m$, $m>0$. Take $\sigma=(1,1)^i(1,0)^{m+1}$ and $\sigma'=(1,1)(1,0)^m(1,1)$. Then $T(\sigma)=\varepsilon\neq (1,0)=T'(\sigma)$, but $T(\sigma')=(0,1)\neq\varepsilon =T'(\sigma')$, and again we have that $T\parallel T'$.
Finally, to show that (iii) is sufficient, suppose that $T=T_1\cdots T_i$ and $T'=T'_1\cdots T'_j$ where neither $\rho_4^*$ nor $\rho_5^*$ occur in $T_l$, $1\leqslant l\leqslant i-1$, nor in $T'_k$, $1\leqslant k\leqslant j-1$, or $\omega_t\neq \omega'_t$, for some $1\leqslant t\leqslant i\wedge j$.
Now, as in case (i), we may assume that $i<j$ (the case $i>j$ is similar), and that $\omega_i\in \cL(\{\rho_4\})$ and $\omega'_i=\rho_4^m$, $m>0$. But then, as in case (ii), we again have $T\parallel T'$, and thus the proof is now complete.
From Lemma \[lem:1\], the above necessary condition and Propositions \[prop:max123\] and \[prop:7\], we obtain the following explicit description of the maximal elements of $\gR/_\sim$.
Let $T\in \gR/_\sim$. Then $T$ is maximal if and only if
- $T$ is a maximal element of $\gR_{123}$, or
- $T=T_1T_2\cdots$ where $T_i=\omega_i\rho_1^{a_i} \rho_2^{b_i} \rho_3$ with $\omega_i \in \cL(\rho_4)\cup\cL(\rho_5)$ and $(a_i,b_i)<(1,1)$.
Concluding remarks: An alternative ordering of $\gR/_\sim$
==========================================================
An alternative ordering of $\gR/_\sim$ was proposed in [@Gra03]; it is defined as follows. Given $R\in \gR/_\sim$, let $\mathrm{Ker}(R):=\{\sigma: R(\sigma)=\varepsilon\}.$ For $R, R'\in \gR/_\sim$, we write $R\leqslant_{\mathrm{Ker}}R'$ if $\mathrm{Ker}(R)\supseteq \mathrm{Ker}(R')$. Clearly, $\leqslant_{\mathrm{Ker}}$ is a partial ordering of $\gR/_\sim$, and if $R\leqslant R'$, then $R\leqslant_{\mathrm{Ker}}R'$; see [@Gra03]. As it turns out, the converse is also true.
\[prop:equivOrders\] Let $R, R'\in \gR/_\sim$. Then $R\leqslant R'$ if and only if $R\leqslant_{\mathrm{Ker}}R'$.
To prove Proposition \[prop:equivOrders\] it remains to show that if $R\parallel R'$, then $R\parallel_{\mathrm{Ker}}R'$, i.e., $R\not\leqslant_{\mathrm{Ker}}R'$ and $R'\not\leqslant_{\mathrm{Ker}}R$.
So suppose that $R\parallel R'$, that is, there exist $\sigma_1$ and $\sigma_2$ such that $R(\sigma_1) \sqsubset R'(\sigma_1)$ and $R(\sigma_2) \sqsupset R'(\sigma_2)$.
Let $\sigma'_1$ be the string obtained from $\sigma_1$ by removing the indices in $R(\sigma_1)$, so that $R(\sigma'_1)=\varepsilon \neq R'(\sigma'_1)$. Hence, $R'\not\leqslant_{\mathrm{Ker}}R$.
Similarly, let $\sigma'_2$ be the string obtained from $\sigma_2$ by removing the indices in $R'(\sigma_2)$, so that $R(\sigma'_2) \neq \varepsilon=R'(\sigma'_2)$. Hence, $R\not\leqslant_{\mathrm{Ker}}R'$.
Thus $R\parallel_{\mathrm{Ker}}R'$, and the proof is now complete.
We have presented a partial description of the poset $\gR/_\sim$; since it is uncountable, there is little hope of obtaining as explicit a description as for the subposet $\gR_{123}$, which was shown to embed the power set of the natural numbers.
Looking at directions for further research, we are inevitably drawn to the question of whether $\gR/_\sim$ constitutes a $\wedge$-semilattice and, if that is the case, whether its closed intervals constitute lattices, as is the case for the subposet $\gR_{123}$.
[10]{}
T. S. Blyth. [*Lattices and Ordered Algebraic Structures*]{}. Springer-Verlag (Universitext), London, UK, 2005.
B. De Baets, J. Fodor and M. Grabisch. The quest for rings on bipolar scales, [*International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems*]{}, 12(4):499–512, 2004.
N. Caspard, B. Leclerc, B. Monjardet. [*Ensembles ordonnés finis : concepts, résultats et usages*]{}. Springer, 2007.
B. A. Davey and H. A. Priestley. [*Introduction to Lattices and Order*]{}. Cambridge University Press, Cambridge, UK, 2002.
M. Grabisch. The symmetric Sugeno integral, [*Fuzzy Sets and Systems*]{}, 139:473–490, 2003.
M. Grabisch. The Möbius transform on symmetric ordered structures and its application to capacities on finite sets, [*Discrete Mathematics*]{}, 287(1-3):17–34, 2004.
M. Grabisch. Aggregation on bipolar scales, in [*Theory and Applications of Relational Structures as Knowledge Instruments II*]{}, H. de Swart, E. Orlowska, M. Roubens and G. Schmidt (eds), Springer, 355–371, 2006.
M. Grabisch, J.-L. Marichal, R. Mesiar, and E. Pap. [*Aggregation Functions*]{}. Encyclopedia of Mathematics and its Applications 127. Cambridge University Press, Cambridge, UK, 2009.
E. Pap and I. Štajner-Papuga. Pseudo-integral based on non-associative and non-commutative pseudo-addition and pseudo-multiplication, 9(2):159–167, 2001.
[^1]: Corresponding author. Tel (+33) 1-44-07-82-85, Fax (+33) 1-44-07-83-01, email `[email protected]`
[^2]: It is noteworthy to observe that this framework is suitable for any nonassociative operation which satisfies (ii) and (iii) of Proposition \[prop:1s\].
---
abstract: 'We are examining mass loss from globular-cluster giant stars, focussing on its metallicity dependence. We present three sets of observations: TIMMI-2 mid-IR spectra of 47 Tuc, UVES high-resolution optical spectra of several clusters, and an infrared atlas of $\omega$ Cen using the Spitzer Space Telescope.'
author:
- 'Iain McDonald$^1$, Jacco Th. van Loon$^1$'
title: 'Optical & Infrared Observations of Stellar Mass Loss in Globular Cluster Red Giants'
---
Introduction
============
Mass loss from giants is of great importance to stellar evolution — low-mass stars lose up to 0.2 M$_{\odot}$ on the giant branch. Giants in metal-poor globular clusters can probe the metallicity dependence of this mass loss, exposing the Universe’s chemical enrichment history, and revealing how low-metallicity stars can produce large quantities of dust (*e.g.* @BWvL+06 [-@BWvL+06]).
Mid-Infrared Spectroscopy in 47 Tucanae
=======================================
Eight giant-branch stars with infrared excess in 47 Tuc were observed in the mid-infrared with the ESO La Silla 3.6-m telescope and the TIMMI-2 spectrograph. Clear silicate emission is present in V1, corresponding to a mass-loss rate of 10$^{-6}$ M$_{\odot}$ yr$^{-1}$ (estimated using the [dusty]{} code — [@INE99]). This work was published as [@vLMO+06].
Optical Spectroscopy of Cluster Giants
======================================
Optical VLT/UVES spectra were taken of 47 giant stars in six clusters, shown in Fig. 1, in order to analyse differences between IR-excessive and normal stars at a range of metallicities. At $R >$ 100,000, these represent some of the highest resolution data ever taken of globular cluster giants.
The H$\alpha$ and near-IR Ca II triplet lines exhibit evidence for mass loss: core shifts of several km s$^{-1}$ exist in many stars, as does substantial emission. We are in the process of analysing the H$\alpha$ lines using the [sei]{} code [@LCSP87] and various other computational methods. We will then perform an abundance analysis and attempt to model the chromospheric emission.
Spitzer Space Telescope Infrared Atlas of $\omega$ Centauri
===========================================================
Spitzer imaging of $\omega$ Cen totalling 18.4 hours was taken in six bands from 3.6–70 $\mu$m. We will produce the first multi-wavelength atlas of $\omega$ Cen in the infrared. We can then use this to find dust-enshrouded objects in the cluster, and measure their mass loss rates and other characteristics.
We are thankful to JPL/NASA and ESO for the use of their facilities. Iain McDonald is supported by a PPARC studentship.
Boyer, M. L., Woodward, C. E., van Loon, J. Th., Gordon, K., Evans, A., Gehrz, R. D., Helton, L. A., Polomski, E. F. 2006, AJ, 132, 1415 Ivezić, Z., Nenkova, M., & Elitzur, M. 1999, User manual for [dusty]{}, University of Kentucky Internal Report Lamers, H. J. G. L. M., Cerruti-Sola, M., & Perinotto, M. 1987, Ap. J., 314, 726 van Loon, J. Th., McDonald, I., Oliveira, J. M., Evans, A., Boyer, M. L., Gehrz, R. D., Polomski, E., Woodward, C. E. 2006, A&A, 450, 339
---
abstract: 'We report drive-response experiments on individual superconducting vortices on a plane, a realization for a 1+1-dimensional directed polymer in random media. For this we use magnetic force microscopy (MFM) to image and manipulate individual vortices trapped on a twin boundary in YBCO near optimal doping. We find that when we drag a vortex with the magnetic tip it moves in a series of jumps. As theory suggests the jump-size distribution does not depend on the applied force and is consistent with power-law behavior. The measured power is much larger than widely accepted theoretical calculations.'
author:
- 'N. Shapira'
- 'Y. Lamhot'
- 'O. Shpielberg'
- 'Y. Kafri'
- 'B. J. Ramshaw'
- 'D. A. Bonn'
- Ruixing Liang
- 'W. N. Hardy'
- 'O. M. Auslaender'
bibliography:
- 'PowerLaw.bib'
title: 'Disorder induced power-law response of a superconducting vortex on a plane'
---
While the dynamics of driven systems in ordered media are well understood, disorder gives rise to much more elaborate behavior. Particularly interesting are phenomena arising from the interplay between disorder and elasticity [@agoritsas2012disordered; @kolton2006dynamics] such as the conformations of polyelectrolytes [@de1979scaling] (e.g. polypeptides and DNA [@bustamante2003ten]), kinetic roughening of driven interfaces (e.g. wetting in paper [@halpin1995kinetic; @herminghaus2012universal], magnetic and ferroelectric domain wall motion [@lemerle1998domain; @paruch2005domain; @kim2009interdimensional; @yamanouchi2007universality], the growth of bacterial colony edges [@halpin1995kinetic]), non-equilibrium effects that occur in randomly stirred fluids [@forster1977large] and more. Superconducting vortices, in materials in which they behave like elastic strings, are among the most important examples of such systems [@blatter1994vortices]. Despite a dearth of direct experimental proof, these quantized whirlpools of supercurrent are considered textbook examples of the theory of directed polymers in random media (DPRM) [@kardar2007statistical; @halpin20122+; @dotsenko2008joint], a foundation model for systems where disorder and elasticity compete. This model, that yields many results that are considered generic and universal, provides the backdrop for our experiment.
{width="7in"}
Here we concentrate on vortices that are trapped on a twin boundary (TB), a planar defect in YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO) [@nam2005twinning]. We cool the sample through the superconducting transition temperature $T_c$ in the presence of an external magnetic field $\vec{H}=H\hat{z}$, which directs the curve along which vortices cross the sample. Figure \[fig:fig1\]a depicts a vortex away from a TB (V in Fig. \[fig:fig1\]a) that is free to meander in the $d_{\perp}=2$ directions perpendicular to $\vec{H}$. For a vortex trapped on a TB (TBV in Fig. \[fig:fig1\]a) the meandering is limited to a plane, i.e. $d_{\perp}=1$. We concentrate on TB-vortices both because the reduced dimensionality makes data analysis simpler and, more importantly, because, unlike DPRM in higher dimensions, $1+1$-DPRM is a tractable model [@hwa1999mesoscopic].
The path of a vortex across a sample is determined by the competition between elasticity and disorder: while meandering allows a vortex to lower the energy of the system by locating its core near defects, the associated stretching is limited by finite line tension $\kappa$ [@blatter1994vortices]. As a result the unavoidable random disorder in a sample can make the optimal path for an isolated vortex elaborate. Despite this, DPRM theory provides many predictions for disorder-averaged quantities [@hwa1994anomalous]. For example, the thermal and disorder averaged offset distance from the field axis $\hat z$ ($\Delta$) scales like a power-law given by the wandering exponent $\zeta(d_\perp)$: $\overline{\left<\Delta\right>}\equiv\delta R\sim L^{\zeta(d_\perp)}$ for $L\gg a_z$ ($L$ is the sample thickness, $a_z$ is a sample-dependent lower-cutoff), which is a universal number. Theoretically $\zeta(d_\perp)$ describes a wide variety of systems [@kardar2007statistical] but there are only a few measurements of it [@lemerle1998domain; @paruch2005domain; @kim2009interdimensional; @yamanouchi2007universality; @bolle1999observation; @takeuchi2010universal; @takeuchi2011growing]. While a power-law also describes classical random walks ($\delta R\sim L^{\frac{1}{2}}$) disorder both enhances wandering ($\zeta(d_\perp)>\frac{1}{2}$) and stretches the distribution of offset distances $W(\Delta)$ from gaussian to $W(\Delta)\sim\Delta^{-\alpha_{theory}}$ ($\alpha_{theory}>0$), significantly increasing the prevalence of trajectories with large excursions [@hwa1994anomalous].
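The scaling just described can be illustrated with a standard zero-temperature transfer-matrix computation for a 1+1-dimensional directed polymer (a textbook construction along the lines of [@kardar2007statistical]; it is a generic illustration, not part of our measurement or analysis). The sketch propagates the minimal path energy through a lattice of independent Gaussian site energies and reads off the transverse offset of the optimal endpoint; averaging this offset over disorder realizations at several lengths $L$ yields an estimate of $\zeta$.

```python
import random

def dprm_endpoint_offset(L, width, seed):
    """Transverse offset of the minimal-energy directed path after L steps.

    E[x] holds the lowest energy of any directed path from the launch site
    (the column centre) to transverse position x; at each step the path may
    stay put or move one site sideways, and collects a Gaussian site energy.
    """
    rng = random.Random(seed)
    centre = width // 2
    E = [0.0 if x == centre else float("inf") for x in range(width)]
    for _ in range(L):
        noise = [rng.gauss(0.0, 1.0) for _ in range(width)]
        E = [min(E[max(x - 1, 0)], E[x], E[min(x + 1, width - 1)]) + noise[x]
             for x in range(width)]
    x_best = min(range(width), key=lambda x: E[x])
    return abs(x_best - centre)

# The disorder-averaged offset is expected to grow roughly like L^(2/3)
# in 1+1 dimensions, rather than the L^(1/2) of a thermal random walk.
samples = [dprm_endpoint_offset(64, 257, seed=s) for s in range(20)]
print(sum(samples) / len(samples))
```

The lattice size, disorder strength, and number of realizations here are arbitrary demonstration values; a quantitative exponent estimate requires much larger $L$ and heavier averaging.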
The power-law form of $W(\Delta)$ implies the absence of a characteristic length-scale and the existence of a significant number of vortex trajectories with a wide variety of $\Delta$’s and with free-energies almost as low as that of the optimal vortex path. Since these trajectories constitute the low-energy excitations of the system they are important for thermodynamics and response functions [@hwa1994anomalous]. While in thermal equilibrium the system has time to find these metastable states it is not clear what happens out of equilibrium, although one can expect that near equilibrium these trajectories remain important.
In this work we experimentally characterize the trajectories of individual vortices confined to move on a TB. Unlike most previous work we use a local probe (magnetic force microscopy, MFM) to measure individual vortices. The heart of MFM is a sharp magnetic tip situated at the end of a cantilever driven to oscillate in the $z$-direction normal to the sample surface at a resonant frequency $f$. A force $\vec{F}=F_x {\hat x}+F_y{\hat y}+F_z{\hat z}$ acting on the tip shifts $f$ by $\Delta f=f-f_0\approx-f_0/(2k)\partial F_z/\partial z$ ($f_0$ is the natural resonant frequency, $k$ is the cantilever spring constant, $z$ is the tip-sample distance) [@albrecht1991frequency]. For an image we record $\Delta f$ while rastering the tip at constant $z$. In addition we use the tip-vortex interaction to perturb vortices individually [@straver2008controlled]. Such perturbations show up as abrupt shifts of the signal from a vortex, which we dub “jumps”.
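As a rough illustration of the frequency-shift relation above, the following sketch evaluates $\Delta f\approx-f_0/(2k)\,\partial F_z/\partial z$; the cantilever numbers are illustrative assumptions, not values from this experiment.

```python
# Hedged sketch of the MFM frequency-shift relation quoted above:
# delta_f ~ -(f0 / (2k)) * dFz/dz. Cantilever numbers below are illustrative.
def mfm_frequency_shift(f0, k, dFz_dz):
    """Resonance shift (Hz) from a tip-sample force gradient dFz_dz (N/m)."""
    return -f0 / (2.0 * k) * dFz_dz

# Example: f0 = 60 kHz, k = 2 N/m, force gradient 1e-4 N/m -> shift of -1.5 Hz
delta_f = mfm_frequency_shift(60e3, 2.0, 1e-4)
```

An attractive interaction that stiffens with decreasing tip-sample distance gives a positive force gradient and hence a negative shift, which is why the vortex appears as a dip in $\Delta f$ in the scans discussed here.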
The sample we used is a nearly optimally doped YBCO single crystal ($T_c\approx91K$ [@TcNote]) grown from flux in a BaZrO$_3$ crucible for high purity and crystallinity [@liang1998growth]. The $L=80\mu m$-thick platelet-shaped sample has faces parallel to the crystal ab-plane and contains two parallel TBs (Fig. \[fig:fig1\]b). Field cooling was done with $\vec{H}=H\hat{z}$ parallel to the crystal $c$-axis and along the TB plane with the tip magnetized for attractive tip-vortex interactions.
Figure \[fig:fig1\]c is an MFM scan of vortex arrays on a TB and around it for $H\approx2mT$. Such a highly ordered Abrikosov lattice [@abrikosov1957magnetic; @kleiner1964bulk] at such a low field attests to the scarcity of strong defects other than the TB. Figure \[fig:fig1\]d is an MFM scan of a TB at $0\leq H\leq10\mu T$. In this near-zero field almost all of the vortices were trapped by the TBs, further attesting to the high quality of the sample and in agreement with early experiments showing that in YBCO TBs are strong vortex traps [@vinnikov1988direct]. Despite their relatively high density, many of the TB-vortices can be considered isolated since their nearest-neighbor distance is much larger than the penetration depth $\lambda_{ab}\approx120nm$ [@kiefl2010direct] (Fig. \[fig:fig1\]e).
We tested how strongly vortices are trapped by a TB by performing low-height (and hence strong lateral force, up to 20 pN) scans. However, even for our lowest passes across the TB and even for $T\approx0.85T_c$ we never observed a vortex dislodging from the TB. This experimentally verifies that for the range of forces we applied TB-vortices behave as one-dimensional (1D) objects in an effective $d=1+1$ geometry.
Next, we performed a series of raster scans over an isolated TB-vortex (Fig. \[fig:fig1\]e) in order to perturb it. The scan pattern consisted of line-scans in which the tip moved back and forth (Fwd/Bwd) at $125\frac{nm}{sec}$ along the $x$-axis parallel to the TB. After each line-scan we reduced $z$ and stepped the tip in the $y$-direction. Since the force the tip exerts on a vortex depends on both $z$ and the tip-vortex lateral distance $\rho=\sqrt{(x-x_v)^2+(y-y_v)^2}$ ($x_v{\hat x}+y_v{\hat y}$ is the vortex position in the scan, see [@ForceNote]), a complete scan series gives the response of a TB-vortex to a wide range of forces along the TB, $F_x$.
![Manipulation scans of TB-vortices at $T=15K$. [**(a)**]{} Forward (Fwd) and backward (Bwd) line-scans (taken along the dashed lines from the scans in the insets) containing a tip-induced vortex jump of size $\Delta_{jump}=|x^*-x_{jump}|$ that we associate with an abrupt change in the position of the upper part of the vortex. [**(b)**]{} Fwd and Bwd line-scans taken along the dashed lines from the scans in the insets. Numerous vortex jumps with a variety of $\Delta_{jump}$’s are apparent. The difference between the overall shapes of the Fwd and Bwd line-scans suggests that non-equilibrium effects may be involved. [**Insets:**]{} The scans from which the line-scans in (a) and (b) were taken. The scan height and the span of $\Delta f$ are indicated for each panel. The horizontal double arrows indicate the back or forth scan direction along the TB (the x-axis) and the large vertical arrows indicate the direction we step the tip after each back and forth cycle (the y-axis).[]{data-label="fig:fig2"}](Fig2.pdf){width="3.4in"}
Figure \[fig:fig2\]a shows typical line-scans for an almost static vortex. $\Delta f$ becomes increasingly negative as the tip approaches the vortex due to the increasing tip-vortex attraction until it passes the minimal $\rho$ in the line-scan. From there $\Delta f$ increases until the interaction becomes negligible again. The line-scans in Fig. \[fig:fig2\]a show one of the first jumps for this particular vortex - a shift in $\Delta f(x)$ at $x=x_{jump}$. We associate this shift with a tip-induced abrupt change in the position of the upper part of the vortex. We determine the jump length $\Delta_{jump}=|x_{jump}-x^*|$ from the first position after the jump satisfying $\Delta f(x^*)=\Delta f(x_{jump})$ [@AlgorithmNote]. In addition, we calculate the value of $F_x$ before each jump using an approximation for the magnetic field from a single vortex and a model for the tip [@ForceNote]. Figure \[fig:fig2\]b shows typical line-scans for a moving vortex. While the signal in the central region of the line-scan contains numerous sharp changes, the envelope resembles a stretched version of the signal expected from a static vortex at the same $z$. This indicates that in the central region the top of the vortex moves with the tip in a series of jumps. The observed asymmetry between the Fwd and Bwd line-scans is typical for a moving vortex and suggests that the system is not in thermal equilibrium.
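A minimal sketch of this jump-detection idea, run on synthetic line-scan data; the Lorentzian vortex profile, jump position and all numbers are illustrative assumptions, not the measured signal or the exact algorithm of [@AlgorithmNote].

```python
import numpy as np

# Synthetic line-scan: a Lorentzian dip from a vortex that jumps forward
# (from x_v = 0 to x_v = 0.703) as the tip passes x = 0.25. All values illustrative.
x = np.linspace(-1.0, 2.0, 301)                  # tip positions along the TB
x_v = np.where(x < 0.25, 0.0, 0.703)             # vortex position seen by the tip
df = -1.0 / (1.0 + ((x - x_v) / 0.1) ** 2)       # frequency-shift signal

j = int(np.argmax(np.diff(df)))                  # largest abrupt increase -> jump
x_jump, v = x[j], df[j]                          # last pre-jump position and value

# First position after the jump where the signal recovers its pre-jump value
after = np.nonzero(df[j + 1:] <= v)[0][0] + j + 1
delta_jump = abs(x[after] - x_jump)
```

With these parameters the jump is detected at $x_{jump}\approx0.24$ and the signal recovers its pre-jump value near $x^*\approx0.47$, giving $\Delta_{jump}\approx0.23$ in the grid units of the sketch.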
![Histograms binning all measured jump lengths ($\Delta_{jump}$) for different ranges of the force exerted along the TB by the tip ($F_x$). **Inset:** Normalized distributions of $\Delta_{jump}$ for the different $F_x$ ranges. All the distributions collapse onto each other revealing the independence of $\Delta_{jump}$ from $F_x$.[]{data-label="fig:fig3"}](Fig3.pdf){width="2.8in"}
Figure \[fig:fig3\] shows histograms containing all jumps of two vortices chosen for their large separation from their neighbors and each other (enough to safely consider their disorder environments independent). The histograms separate the jumps into three ranges of $F_x$. When we compare the distribution of $\Delta_{jump}$ within each $F_x$ range we find that the distributions collapse onto each other. Moreover, we find the same collapse when we consider jumps from each vortex separately [@SelfAVGNote]. This shows that for the range of forces we applied the distribution of $\Delta_{jump}$ does not depend on $F_x$ and justifies lumping all the jumps together regardless of the force.
Our main result is the force-independent distribution $\tilde{W}(\Delta_{jump})$ for both vortices together (Fig. \[fig:fig4\]). The most significant feature of $\tilde{W}(\Delta_{jump})$ is a long tail indicating that disorder is important - it is in complete disagreement with the gaussian distribution expected for a system where disorder is irrelevant [@hwa1994anomalous]. Another important feature is the saturation of $\alpha_{fit}$ obtained from best fits of $\tilde{W}(\Delta_{jump})$ to a power-law for different values of a lower cutoff $a_x$. The saturation is a strong indication that $\tilde{W}(\Delta_{jump})$ is a power-law for $\Delta_{jump}>a_0=49\pm3nm$ with the power given by $\alpha_{meas}=2.75\pm0.06$ ($80\%$ confidence level). We emphasize that we determined $\Delta_{jump}$ directly and without theoretical assumptions and that $\tilde{W}(\Delta_{jump})$, $\alpha_{meas}$ and $a_0$ are not sensitive to several important sources of systematic error (the independence of $\tilde{W}(\Delta_{jump})$ on $F_x$ implies that it is not sensitive to systematic errors in force estimates, the scale invariance of power-laws implies that $\alpha_{meas}$ is insensitive to errors in length calibration).
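The cutoff-scan procedure described above can be sketched as follows, here using a standard maximum-likelihood estimator for a continuous power-law tail applied to synthetic Pareto-distributed jump lengths; the estimator choice and the sample parameters are assumptions for illustration, not the authors' fitting code.

```python
import numpy as np

def powerlaw_mle(samples, a_x):
    """Maximum-likelihood exponent for p(x) ~ x**(-alpha), x >= a_x (continuous case)."""
    tail = samples[samples >= a_x]
    return 1.0 + tail.size / np.sum(np.log(tail / a_x))

rng = np.random.default_rng(0)
alpha_true, a0 = 2.75, 49.0                       # values quoted in the text (nm)
u = rng.random(20000)
samples = a0 * u ** (-1.0 / (alpha_true - 1.0))   # Pareto draws with tail exponent alpha_true

# Scanning the lower cutoff: the fitted exponent plateaus once a_x >= a0
alphas = [powerlaw_mle(samples, a_x) for a_x in (49.0, 60.0, 80.0, 100.0)]
```

For data that really are power-law distributed above $a_0$, the fitted exponent is stable against raising the cutoff $a_x$ beyond $a_0$; this plateau is the saturation criterion invoked in the text.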
According to the fluctuation-susceptibility relation [@hwa1994anomalous] the statistics of the jump length ($\Delta_{jump}$) gives information on the properties of the rare, large-scale, low-energy excitations of the system characterized by $\Delta$. One might worry that when driven out of equilibrium short jumps will occur more readily than the long jumps required to reach one of the more favorable paths farther away. However, the properties of the accessible vortex trajectories ensure that such behavior is unlikely [@hwa1994anomalous].
![Measured vortex jump lengths ($\Delta_{jump}$) and fits to the data. Although the data is consistent with a power-law distribution the exponent we obtain does not match $\alpha_{theory}=3/2$ predicted for a system in equilibrium. **Inset:** Values of a power-law exponent $\alpha_{fit}$ obtained by fitting the data in the main panel for different values of the lower cutoff $a_x$. $\alpha_{fit}$ saturates (arrow) for $a_x>a_0=49\pm3nm$, a clear indication that $\alpha_{meas}=2.75\pm0.06$ is the best-fit exponent for the distribution.[]{data-label="fig:fig4"}](Fig4.pdf){width="3.4in"}
The independence of $\tilde{W}(\Delta_{jump})$ on $F_x$ (Fig. \[fig:fig3\]), which at first glance may seem puzzling, is attributed by DPRM theory [@hwa1994anomalous] to *statistical tilt symmetry*. This symmetry is a manifestation of the absence of correlations in the disorder which means that for sufficiently large force [@hwa1994anomalous], as in this experiment [@MinForceNote], each time we tilt a vortex it samples a new random environment and is equivalent to an un-tilted vortex experiencing a new disorder realization. The observed statistical tilt symmetry implies that theoretically we could have obtained disorder-averaged quantities from measurements of just one vortex. Indeed, when we examine the force-independent distributions of $\Delta_{jump}$ for each vortex separately [@TwoVortexNote] we find that the distributions are statistically similar. This observed self-averaging corroborates the statistical tilt symmetry and means that the measured distribution of jump lengths is indeed equivalent to the distribution of rare, large-scale, low-energy excitations; i.e. $\tilde{W}(\Delta_{jump})=W(\Delta)$.
While DPRM predicts the power-law behavior of $W(\Delta)$, the value we extract disagrees with the theoretical value: $\alpha_{theory}=d_{\perp}+2-\zeta^{-1}(d_\perp)$ [@hwa1994anomalous]. The value of the wandering exponent $\zeta(d_\perp=1)=2/3$ has been theoretically found by various methods [@kardar1987replica; @huse1985huse; @gwa1992bethe] giving $\alpha_{theory}=3/2$, very different from $\alpha_{meas}\approx2.75$. This deviation could result from a variety of reasons; however, the asymmetry of the line traces in Fig. \[fig:fig2\]b suggests that non-equilibrium effects may be involved. The fact that we observe a response that remains power-law distributed even out of equilibrium is surprising. Whether or not non-equilibrium effects in fact explain the enhancement of $\alpha_{meas}$ is a question that requires further study.
The value of the cutoff $a_0$ provides a new way to characterize statistical properties of the disorder near a TB. This is due to general scaling arguments that hold both in and out of equilibrium [@hwa1994anomalous] and give a relationship between $a_0$ and the disorder strength $D$, that in $d=1+1$ is $D=(k_BT)^3/(a_0\kappa)$ ($k_B$ is the Boltzmann constant) [@hwa1994anomalous]. Using $T=15K$ and $\kappa=2.4eV/\mu m$ we find $\sqrt{D}\approx135\mu eV$ [@KappaNote]. Similar scaling relations give an estimate for the cutoff along $z$, i.e. $a_z=(a_0^2\kappa)/(k_BT)\approx4.5\mu m\ll L=80\mu m$, consistent with the experiment being in the thick sample regime.
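These two scaling estimates can be checked numerically; the short script below reproduces the quoted values $\sqrt{D}\approx135\,\mu eV$ and $a_z\approx4.5\,\mu m$ from the stated inputs.

```python
import math

# Numerical check of the d = 1+1 scaling estimates quoted above:
# D = (kB*T)^3 / (a0 * kappa)  and  a_z = a0^2 * kappa / (kB*T).
kB = 8.617333262e-5      # Boltzmann constant in eV/K
T = 15.0                 # temperature in K
kappa = 2.4              # line tension in eV/um
a0 = 0.049               # lower cutoff in um (49 nm)

kBT = kB * T                       # ~1.29 meV
D = kBT ** 3 / (a0 * kappa)        # disorder strength in eV^2
sqrt_D_ueV = math.sqrt(D) * 1e6    # ~135 micro-eV, as quoted
a_z = a0 ** 2 * kappa / kBT        # ~4.5 um, as quoted
```

The computed $a_z\approx4.5\,\mu m$ is indeed well below the sample thickness $L=80\,\mu m$, confirming the thick-sample consistency check stated in the text.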
To conclude, we have used the interaction between a magnetic tip and superconducting vortices on a TB to study the behavior of individual directed 1D objects. This provides an ideal setup for studying the interplay between elasticity and disorder, which is ubiquitous in nature. After experimentally showing that vortices on a TB behave as 1D objects in an effective $1+1$ random medium we proceeded to pull them one at a time along the TB and measured the distribution of jump lengths $\tilde{W}(\Delta_{jump})$. We find that $\tilde{W}(\Delta_{jump})$ is independent of the force applied by the tip and is the same for two widely separated vortices, confirming the predicted statistical tilt symmetry in the system. Our central result is the power-law form of $\tilde{W}(\Delta_{jump})$ that suggests that even out of equilibrium excitations do not have a characteristic length-scale beyond the sample-specific lower cutoff $a_0$. The direct measurement of $a_0$ provides a new characterization of the local disorder strength $D$ around the TB, complementing other measures such as the critical current [@larbalestier2001high; @wee2013engineering].
We thank Thierry Giamarchi, who encouraged us to focus on vortex motion along the TB, as well as Anatoli Polkovnikov and Daniel Podolsky for comments and Gad Koren for help with characterization. N.S. acknowledges support from the Gutwirth Fellowship and Posnansky Research Fund in High Temperature Superconductivity. O.M.A. is supported by an Alon Fellowship and as a Horev Fellow is supported by the Taub Foundation. The project has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n$^\circ$ 268294.
---
abstract: 'The states of a boson pair in a one-dimensional double-well potential are investigated. Properties of the ground and lowest excited states of this system are studied, including the two-particle wavefunction, momentum pair distribution and entanglement. The effects of varying both the barrier height and the effective interaction strength are investigated.'
author:
- 'D. S. Murphy'
- 'J. F. McCann'
bibliography:
- 'two\_part\_double\_well.bib'
title: 'Low-energy excitations of a boson pair in a double-well trap'
---
\[sect:intro\] Introduction
===========================
Ensembles of ultracold, trapped atoms provide an ideal test system for the study of fundamental quantum principles. The manipulation of atoms with photons, [@coh98], has given rise to the experimental realization of Bose-Einstein condensation (BEC) [@and95; @dav95; @dal99] and, more recently, the trapping and manipulation of condensates using optical lattice potentials [@and98; @cat01; @bloA05; @bloB05]. The weak coupling of neutral atoms to their environment means that this system of cold neutral atoms, confined by a periodic potential, may prove useful in the investigation of primitive quantum information processing [@mon02]. Indeed, such systems have already been used to carry out a two-qubit entangling operation [@jak99; @man03], thereby realizing the crucial CNOT gate. At the same time, the spatially periodic nature of the system makes it ideal for the detailed study of solid-state Hamiltonians [@fis89; @jak98; @gre02]. The benefit of this artificial system, in this regard, lies in the fact that the experimentalist can easily vary external control parameters (e.g. laser intensity or wavelength), thereby varying particular parameters of the system Hamiltonian, a degree of control that is not generally afforded to typical solid-state systems.
The dynamics of a system of ultracold atoms, confined by an optical lattice potential, can be accurately described within the framework of the Bose-Hubbard model [@fis89; @jak98]. In this model the system Hamiltonian is parameterized by the tunnelling strength between adjacent lattice sites, $J$, and the on-site interaction energy, $U$. The Hamiltonian describing the system dynamics can then be written as $$\label{eq:bose_hubbard_ham}
\hat{H} = J \sum_{\left< i,j \right>} \hat{b}_{i}^{\dagger} \hat{b}_{j} \;
+ \; \sum_{i} \epsilon_{i} \hat{n}_{i} \; + \; U \sum_{i}
\hat{n}_{i} \left( \hat{n}_{i} - 1 \right) \hspace*{0.3cm} ,$$ where $\hat{b}_{i}^{\left( \dagger \right)}$ is the annihilation (creation) operator for an atom at the lattice site $i$ and $\hat{n}_{i} =
\hat{b}_{i}^{\dagger} \hat{b}_{i}$ is the number operator for that site. The parameter $\epsilon_{i}$ is the single-particle energy at lattice site $i$ and will vary with $i$ for an inhomogeneous lattice. Implicit in this model is the assumption that the dynamics of the system is dominated by single- and two-particle effects. In this way, the system of two confined, interacting particles represents the fundamental building block for the understanding of these many-body systems. Furthermore, continual advancement in optical lattice technology means that it has become possible to confine small numbers of atoms (e.g. 1 or 2) at individual lattice sites, effectively realizing a system of two trapped atoms.
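As a toy illustration of this building block, the sketch below diagonalizes the Bose-Hubbard Hamiltonian, written in the convention of Eq. (\[eq:bose\_hubbard\_ham\]) (hopping $+J$, interaction $U\hat{n}_{i}(\hat{n}_{i}-1)$, $\epsilon_{i}=0$; note that many references instead write $-J$ and $U/2$, a difference of convention only), for the smallest nontrivial case of two bosons on two sites.

```python
import numpy as np

# Two bosons on two sites in the Fock basis {|2,0>, |1,1>, |0,2>}.
# The sqrt(2) hopping elements are the bosonic enhancement factors,
# e.g. <1,1| b1^dag b2 |0,2> = sqrt(2).
def two_site_hamiltonian(J, U):
    s2 = np.sqrt(2.0)
    return np.array([[2 * U, s2 * J, 0.0],
                     [s2 * J, 0.0, s2 * J],
                     [0.0, s2 * J, 2 * U]])

# Example parameters (illustrative): J = 1, U = 2.
# Analytic eigenvalues: 2U and U +/- sqrt(U**2 + 4*J**2).
evals = np.linalg.eigvalsh(two_site_hamiltonian(J=1.0, U=2.0))
```

Even this $3\times3$ problem exhibits the competition at the heart of the model: for $U\gg J$ the doubly occupied states $|2,0\rangle$ and $|0,2\rangle$ are pushed up by $2U$ and the ground state approaches the "insulating" configuration $|1,1\rangle$.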
For low-energy collisions the particle interactions can be accurately represented within the pseudopotential approximation [@hua_st]. The eigenstates for a system of two particles, interacting via a pseudopotential, can be determined analytically for both isotropic [@bus98] and anisotropic [@idz05] harmonic traps. Under such confinement, the ‘free-space’ pseudopotential approximation is found to be sufficiently accurate provided the length scale associated with the particle-particle interactions ($a$) is short compared to the length scale of the confining potential ($L$) [@blo02]. For the case in which $a$ and $L$ are comparable, one may introduce an energy-dependent scattering length and solve for the eigenenergies of the system self-consistently [@tie00; @bol02; @bol03].
In addition to providing small numbers of particles at individual lattice sites, optical lattice experiments also allow for the realization of quasi-one and -two dimensional systems [@kin04; @par04]. Simply increasing the confining potential steeply in one or two of the transverse directions will effectively ‘freeze out’ the corresponding degrees of freedom [@ket96; @gor01]. Such systems of reduced dimensionality can also be achieved using optical or magnetic atom waveguides. The theoretical treatment of the particle-particle interactions in such low-dimensional geometries has been previously considered. For a quasi-one dimensional (quasi-1D) system it was found that the scattering could be treated in terms of a 1D, zero-ranged $\delta$-potential, renormalized according to the confining potential [@ols98]. The physical realization of such quasi-1D trap geometries and recent advances in the tuning of atomic interactions using Feshbach resonances have permitted the study of previously inaccessible regimes. Notably, the 1D system of impenetrable bosons, or so-called Tonks-Girardeau gas [@ton36; @gir60], has commanded considerable experimental [@kin04; @par04] and theoretical [@yuk05; @gir00; @gir01; @bus03; @lin07; @murA07] interest in recent years.
In [@murA07] the detailed theoretical study of two interacting particles in a $\delta$-split harmonic potential was considered. The DVR techniques, [@bay86; @lig00], employed in [@murA07] to study the $\delta$-split trap potential can be easily adapted to other types of confining potentials. In the current article we utilize these same numerical techniques to study a prototypical two-well trap, defined by $ V\left( x \right) = A
\left[ x^{4} - \kappa x^{2} \right]$. The eigenspectrum for this two-particle system is studied and properties of the ground and lowest excited states are investigated as the barrier height (dictated by $\kappa$) and the strength of the particle-particle interactions are varied. Particular consideration is given to the similarities observed between the ground state structure in this prototypical two-well potential and that of the $\delta$-split potential [@murA07].
Similar numerical studies of ultracold few-boson systems have been recently reported [@zolA06; @zolB06; @zol07]. In those works the authors use a multi-configurational time-dependent Hartree (MCTDH) method to study systems of several bosons in a double-well trap, with narrow-width Gaussians used to represent both the central splitting potential and the interparticle potential. Where comparison is possible, the results of this numerical MCTDH study demonstrate qualitative similarity to the results of the present study.
The remainder of this paper is organized as follows. In Sec. \[sect:system\_hamiltonian\] we outline the Hamiltonian that shall be considered, for two particles confined by a quasi-1D double well potential. In Sec. \[sect:energy\_spec\] we present the energy level spectrum for the single and two-particle systems, illustrating how the spectrum is influenced by barrier height and interaction strength. In Sec. \[sect:ground\_state\] we examine various properties of the two-particle ground state. The properties considered include the ground state wavefunction, momentum distributions (Sec. \[subsect:mom\_dist\_ground\]) and von Neumann entropy (Sec. \[subsect:entropy\_ground\]). Particular emphasis is given to how these properties may be influenced by varying the ‘experimentally controllable’ parameters of barrier height and interaction strength. In Sec. \[sect:excited\_states\] we systematically examine these same properties for the lowest excited states of this system. Finally, in Sec. \[sect:summary\] we summarize our findings and make some concluding remarks.
\[sect:system\_hamiltonian\] System Hamiltonian
===============================================
Consider a system of two interacting particles confined in two dimensions by means of a ‘tight’ harmonic potential, having trapping frequency $\omega_{\perp}$ and associated length scale $d_{\perp} =
\sqrt{\hbar/m\omega_{\perp}}$. In the remaining third dimension, the confining potential is, relatively, ‘loose’ and has the form $$\label{eq:double_well_unscaled}
V \left( x \right) = A \left[ x^{4} - \kappa x^{2} \right]
\hspace*{0.3cm} .$$ The parameters $A$ and $\kappa$ determine the precise form of the double-well potential. It is straightforward to verify that the two minima of this double-well potential are located at $ x_{\textrm{min}} = \pm \sqrt{\kappa/2}$, with the potential at these minima being $ V \left( x_{\textrm{min}} \right) =
-A \kappa^{2}/4$. The well separation, $x_{\textrm{min}}$, and the barrier height, $V \left( x_{\textrm{min}} \right)$, are controlled by the parameter $\kappa$.
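A quick numeric sanity check of these extremum formulas, with arbitrary illustrative values of $A$ and $\kappa$:

```python
import numpy as np

# Check that V(x) = A*(x**4 - kappa*x**2) has minima at x = +/- sqrt(kappa/2)
# with V(x_min) = -A*kappa**2/4. Parameter values are illustrative only.
A, kappa = 1.0, 3.0
x = np.linspace(-3.0, 3.0, 200001)
V = A * (x ** 4 - kappa * x ** 2)

x_min_numeric = abs(x[np.argmin(V)])          # location of a minimum on the grid
x_min_analytic = np.sqrt(kappa / 2.0)
V_min_analytic = -A * kappa ** 2 / 4.0
```

Setting $V'(x)=A(4x^{3}-2\kappa x)=0$ gives $x^{2}=\kappa/2$ away from the local maximum at the origin, and substituting back yields $V=A(\kappa^{2}/4-\kappa^{2}/2)=-A\kappa^{2}/4$, which the grid search reproduces.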
As a result of the large energy level separation, associated with the transverse eigenstates ($\hbar \omega_{\perp}$), the transverse motion of the particles is ‘frozen out’. In this way the particles are confined to the lowest motional state in each of these transverse directions. In this case the system is quasi-1D and may be effectively described by $$\label{eq:two_part_ham_unscaled}
H = \sum_{i = 1,2} \left[- \frac{\hbar^{2}}{2m}
\frac{\partial^{2}}{\partial x_{i}^{2}} + A \left( x_{i}^{4} - \kappa
x_{i}^{2} \right) \right] + g_{1D} \delta \left( x_{2} - x_{1} \right)
\hspace*{0.3cm} .$$ Here, $m$ is the mass, and $x_{1}$ and $x_{2}$ are the coordinates of atoms 1 and 2, respectively. The quantity $g_{1D}$ represents the particle-particle interaction strength, and is related to the 1D s-wave scattering length ($a_{1D}$) through $ \hspace*{0.1cm} g_{1D} = -2\hbar^{2}/m a_{1D}
\hspace*{0.1cm}$. In turn, $a_{1D}$ is related to the 3D s-wave scattering length, $a_{3D}$, through $\hspace*{0.1cm} a_{1D} = -d_{\perp}^{2}/2a_{3D}(1 -
Ca_{3D}/d_{\perp}) \hspace*{0.1cm}$, where $C$ is a constant and has approximate value $C = 1.4603$ [@ols98].
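The chain of relations $a_{3D}\rightarrow a_{1D}\rightarrow g_{1D}$ can be sketched directly from the expressions above, here in units $\hbar=m=1$ with lengths measured in units of $d_{\perp}$; the chosen value of $a_{3D}$ is an arbitrary illustrative assumption.

```python
# Quasi-1D coupling from the 3D scattering length, following the two relations
# quoted above (units: hbar = m = 1, lengths in units of d_perp).
C = 1.4603  # constant from the text [Olshanii]

def g_1d(a_3d, d_perp=1.0, hbar=1.0, m=1.0):
    """g_1D = -2*hbar**2/(m*a_1D) with a_1D = -(d_perp**2/(2*a_3d))*(1 - C*a_3d/d_perp)."""
    a_1d = -(d_perp ** 2 / (2.0 * a_3d)) * (1.0 - C * a_3d / d_perp)
    return -2.0 * hbar ** 2 / (m * a_1d)

g = g_1d(a_3d=0.1)   # illustrative a_3D = 0.1 d_perp
```

Note that $g_{1D}$ diverges when $1-Ca_{3D}/d_{\perp}=0$, i.e. at $a_{3D}=d_{\perp}/C$, the confinement-induced resonance of [@ols98].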
In the limit of tight confinement the free-space pseudopotential approximation, for the particle-particle interactions, becomes compromised [@blo02; @tie00]. In this case, one may obtain the eigenenergies for the system by employing an energy-dependent scattering length and solving for the energy eigenvalues self-consistently [@bol02; @bol03; @bur02]. For current purposes it is supposed that we are in the regime for which the pseudopotential approximation is still valid and the 1D collisional coupling, $g_{1D}$, acts as a parameter for the system.
The aim is to study the eigenvalues and eigenvectors for the 2D Hamiltonian given in Eq. (\[eq:two\_part\_ham\_unscaled\]). To facilitate this we introduce the scaling $x_{i} = \alpha \bar{x_{i}}$ for $i = 1, 2$. Under this rescaling the time-independent Schrödinger equation (TISE) can be written as $$\label{eq:tise_scaled}
\bar{H} \Psi_{i} \left( \bar{x_{1}}, \bar{x_{2}} \right) = \bar{E_{i}}
\Psi_{i} \left( \bar{x_{1}}, \bar{x_{2}} \right) \hspace*{0.3cm} ,$$ where $$\label{eq:ham_scaled}
\bar{H} = \frac{m \alpha^{2}}{\hbar^{2}} H = \sum_{i = 1,2} \left[
- \frac{1}{2} \frac{\partial^{2}}{\partial \bar{x}_{i}^{2}} + \left(
\bar{x}_{i}^{4} - \bar{\kappa} \bar{x}_{i}^{2} \right) \right] +
\bar{g}_{1D} \delta \left( \bar{x}_{2} - \bar{x}_{1} \right)
\hspace*{0.3cm} .$$ Here the scaling factor, $\alpha$, has been chosen such that $$\frac{A m \alpha^{6}}{\hbar^{2}} = 1 \hspace*{0.3cm} .$$ Consequently, $$\label{eq:scaled_quantities}
\begin{array}{rcl}
\bar{\kappa} & = & \left( \frac{A m}{\hbar^{2}} \right)^{1/3} \kappa
\hspace*{0.3cm} , \\[0.2cm]
\bar{g}_{1D} & = & \frac{m}{\hbar^{2}} \left( \frac{\hbar^{2}}{A m}
\right)^{1/6} g_{1D} \hspace*{0.3cm} \textrm{and} \\[0.2cm]
\bar{E} & = & \frac{m}{\hbar^{2}} \left( \frac{\hbar^{2}}{A m}
\right)^{1/3} E \hspace*{0.3cm} .
\end{array}$$ For convenience we shall drop the ‘bar’ on all quantities and use, exclusively, the scaled quantities just described.
\[sect:energy\_spec\] Energy spectrum
=====================================
The eigenspectra for the single- and two-particle systems are obtained, subject to the scaling introduced in the previous section. A cartesian DVR [@bay86] is used to discretize the spatial coordinates $x_{1}$ and $x_{2}$, see [@murA07] for details. The discretization scheme used in these calculations employs $N = 61$ mesh points in each dimension with a mesh spacing of $h = 0.16$. Consideration is mainly limited to the four lowest eigenvalues of the two-particle system (i.e. the lowest band); in particular, we are interested in the behavior as the parameters $\kappa$ and $g_{1D}$ are varied.
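A minimal single-particle version of such a calculation is sketched below, using the sinc-DVR (Colbert-Miller) kinetic-energy matrix on the quoted mesh; this uniform-grid form is an assumption for illustration and need not coincide with the authors' cartesian DVR scheme.

```python
import numpy as np

# Sinc-DVR sketch for the scaled single-particle Hamiltonian
# -1/2 d^2/dx^2 + (x^4 - kappa*x^2) on the mesh quoted above (N = 61, h = 0.16).
N, h, kappa = 61, 0.16, 5.0
x = h * (np.arange(N) - (N - 1) / 2.0)      # grid centred on the barrier

i = np.arange(N)
dij = i[:, None] - i[None, :]
with np.errstate(divide='ignore'):
    T = (-1.0) ** dij / (h ** 2 * dij.astype(float) ** 2)
np.fill_diagonal(T, np.pi ** 2 / (6.0 * h ** 2))   # Colbert-Miller diagonal term

H = T + np.diag(x ** 4 - kappa * x ** 2)
E = np.linalg.eigvalsh(H)                   # single-particle spectrum
```

For $\kappa=5$ the two lowest eigenvalues form a nearly degenerate tunnelling doublet well separated from the next pair, the behavior discussed for the deep double well.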
\[subsect:sing\_part\_spec\] Single-particle spectrum
-----------------------------------------------------
Subject to the scaling introduced in Sec. \[sect:system\_hamiltonian\] the TISE for the single-particle system is simply $$\label{eq:sing_part_ham_scaled}
\left[ - \frac{1}{2} \frac{\partial^{2}}{\partial x^{2}} + \left( x^{4} -
\kappa x^{2} \right) \right] u_{i} \left( \kappa ; x \right) =
E^{\textrm{single}}_{i} \left( \kappa \right) u_{i} \left( \kappa ; x
\right) \hspace*{0.3cm} .$$ The single-particle spectrum is presented in Fig. \[fig:sin\_part\_eig\_spec\]. One can see that as the parameter $\kappa$ is increased the lowest eigenvalues are pulled downwards in energy as $V \left( x_{\textrm{min}} \right)$ becomes increasingly negative. At the same time, pairs of energy levels become degenerate as $\kappa$ is increased.
\[subsect:two\_part\_spec\] Two-particle spectrum
-------------------------------------------------
Extending consideration to the two-particle spectrum, we focus attention on the two-particle eigenstates belonging to the lowest band. Denoting the $i^{\textrm{th}}$ eigenstate of the two-particle system by $\Psi_{i}$, the eigenstates for the lowest band are then denoted by $\Psi_{0}, \Psi_{1},
\Psi_{2}$ and $\Psi_{3}$ (see Sec. \[sect:excited\_states\] for details). This lowest band corresponds to the four lowest levels in Fig. \[fig:two\_part\_eig\_spec\](a), representing the two-particle system in the absence of interactions. In this non-interacting regime the two-particle eigenstates, under exchange symmetry, are $$\begin{aligned}
\Psi^{\textrm{ni}}_{0} \left( \kappa ; x_{1}, x_{2} \right) = & u_{0}
\left( \kappa ; x_{1} \right) u_{0} \left( \kappa ; x_{2} \right)
\hspace*{0.3cm} , \nonumber \\[0.2cm]
\Psi^{\textrm{ni}}_{2,1} \left( \kappa ; x_{1}, x_{2} \right) = &
\frac{1}{\sqrt{2}} \left[ u_{0} \left( \kappa ; x_{1} \right) u_{1} \left(
\kappa ; x_{2} \right) \right. \nonumber \\[0.2cm]
& \left. \hspace*{0.5cm} \pm u_{1} \left( \kappa ; x_{1} \right) u_{0}
\left( \kappa ; x_{2} \right) \right] \hspace*{0.3cm} ,
\nonumber \\[0.2cm]
\Psi^{\textrm{ni}}_{3} \left( \kappa ; x_{1}, x_{2} \right) = & u_{1}
\left( \kappa ; x_{1} \right) u_{1} \left( \kappa ; x_{2} \right)
\label{eq:two_part_eigen_C00} \hspace*{0.3cm} ,
\end{aligned}$$ with the two-particle eigenenergies given by corresponding combinations of the single-particle energies, $E^{\textrm{single}}_{i} \left( \kappa \right)$. From Fig. \[fig:two\_part\_eig\_spec\](a), one notes that as $\kappa$ is increased all two-particle eigenstates in the lowest band become degenerate. This degeneracy follows automatically from the degeneracy of the states $u_{0} \left( \kappa ; x
\right)$ and $u_{1} \left( \kappa ; x \right)$ seen in the single-particle case (see Fig. \[fig:sin\_part\_eig\_spec\]). The states, which are symmetric (solid lines) and antisymmetric (dashed lines) under exchange, are indicated, corresponding to boson and fermion pairs.
The effect of introducing interactions between the two bosons is displayed in Fig. \[fig:two\_part\_eig\_spec\](b) - (d). In Fig. \[fig:two\_part\_eig\_spec\](b) a scaled interaction coupling of $g_{1D} = 1$ is considered. The symmetric states are shifted upwards in energy as a result of the repulsive interactions while the antisymmetric states remain unaltered. In the limit of large $\kappa$ one now observes two pairs of degenerate levels, as opposed to the set of four degenerate states seen in the non-interacting case. The energy separation of these two pairs of levels is monotonically increasing with increasing $\kappa$. Increasing the interaction coupling further, Fig. \[fig:two\_part\_eig\_spec\](c), leads to one of the symmetric states being promoted above the higher-lying antisymmetric state for small $\kappa$, but with increased $\kappa$ the normal ordering is restored. Finally, Fig. \[fig:two\_part\_eig\_spec\](d) depicts the same spectrum in the limit of strong repulsion: $g_{1D} = 10$. The lowest symmetric state now follows closely the energy profile of the lowest antisymmetric state. This feature is a universal property for a system of strongly interacting bosonic particles in 1D. In the limit of $g_{1D} \rightarrow \infty$ the bosonic particles become impenetrable, and one enters the so-called Tonks-Girardeau regime [@ton36; @gir60]. The Fermi-Bose mapping [@gir60; @yuk05] allows, for example, the ground state of the bosonic system to be given by $$\label{eq:ground_state_tg}
\hspace*{-0.3cm}
\Psi_{0} \left( x_{1}, x_{2} \right) = \left| \Psi^{\textrm{ni}}_{1}
\left( x_{1}, x_{2} \right) \right| \hspace*{0.3cm} .$$ The similarity of the energy of the symmetric ground state and the lowest antisymmetric state, seen in Fig. \[fig:two\_part\_eig\_spec\](d), is an indication that the system is approaching this Tonks-Girardeau regime.
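The Fermi-Bose mapping in Eq. (\[eq:ground\_state\_tg\]) can be illustrated on a grid: taking the modulus of the antisymmetric product state yields an exchange-symmetric, normalized wavefunction. The harmonic-oscillator-like orbitals below are illustrative stand-ins for $u_{0}$ and $u_{1}$, not the double-well eigenfunctions themselves.

```python
import numpy as np

# Toy check of the Fermi-Bose mapping |Psi_1^ni| with two orthonormal orbitals.
x = np.linspace(-4.0, 4.0, 401)
dx = x[1] - x[0]
u0 = np.exp(-x ** 2 / 2.0)           # even "ground" orbital (illustrative)
u1 = x * np.exp(-x ** 2 / 2.0)       # odd "excited" orbital (illustrative)
u0 /= np.sqrt(np.sum(u0 ** 2) * dx)
u1 /= np.sqrt(np.sum(u1 ** 2) * dx)

# Antisymmetric non-interacting state Psi_1^ni(x1, x2) and its Fermi-Bose map
psi_anti = (np.outer(u0, u1) - np.outer(u1, u0)) / np.sqrt(2.0)
psi_tg = np.abs(psi_anti)
```

The mapped state inherits the fermionic node along $x_{1}=x_{2}$, so the two bosons never overlap, while being symmetric under particle exchange as required for bosons.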
\[sect:ground\_state\] Ground-state properties
==============================================
The ground state wavefunction, $\Psi_{0} \left( x_{1}, x_{2} \right)$, for two interacting particles, is presented in Fig. \[fig:psi00\]. The individual color scale plots will be referenced using standard (*row, column*) matrix notation.
In the non-interacting case and for $\kappa = 0$, plot (1,1), the wavefunction is distributed fairly isotropically about the centre of the trap. Moving down this column, increasing $\kappa$, the wavefunction expands slightly in both dimensions and takes on a more rectilinear appearance, e.g. plot (3,1). For small values of $\kappa$, the potential resembles that of a square well. The $x^{4}$ term gives rise to a steep boundary and the distribution of the two (independent) particles will be quite uniform, leading to the distribution seen in (3,1). As the value of $\kappa$ is increased the wavefunction begins to segregate into four quadrants with suppression in the region of the barrier (i.e. along the lines $x_{1} = 0$ and $x_{2} =0$). This effect is seen, quite markedly, in plot (4,1). Considering the energy level spectrum in Fig. \[fig:two\_part\_eig\_spec\](a) one can see that for $\kappa = 2$ one has not yet reached the insulator limit, whereas for $\kappa = 5$ one is deep within this insulator regime, for which degeneracy is observed for the four lowest two-particle levels.
Turning to the fourth column of Fig. \[fig:psi00\], plot (1,4) shows the case of no barrier ($\kappa = 0$). The repulsive interaction precludes any overlap of the particles. The effect of increasing the barrier height to $\kappa = 1$, $\kappa = 2$ and $\kappa = 5$ can be seen in plots (2,4), (3,4) and (4,4), respectively. Again, for small values of $\kappa$, the wavefunction distribution expands slightly in ($x_{1}, x_{2}$) space, but now the presence of repulsive interactions distorts the wavefunction along the line $x_{1} = -
x_{2}$. In the insulator limit, as we have for (4,4), one sees that the wavefunction has split into two clear lobes.
The behavior observed in Fig. \[fig:psi00\] correlates closely with the behavior reported for the $\delta$-split potential in [@murA07]: the segregation of the wavefunction distribution into four quadrants, and the emptying of two of these quadrants owing to the introduction of repulsive interactions. These features are essentially generic for double-well systems.
\[subsect:mom\_dist\_ground\] Momentum distribution
---------------------------------------------------
The reduced single-particle density matrix (RSPDM) has proven to be an extremely useful mathematical construct in the analysis of pair correlations [@col_re]. For the two-particle system considered here, the RSPDM, $\rho_{i} \left( x, x' \right)$, for a given eigenstate, $\Psi_{i} \left(
x_{1}, x_{2} \right)$, is defined to be $$\label{eq:rspd_def}
\rho_{i} \left( x, x' \right) = \int_{- \infty}^{+ \infty} \Psi_{i}
\left( x, x_{2} \right) \Psi_{i} \left( x', x_{2} \right) d x_{2}
\hspace*{0.3cm} .$$ This object has been analyzed in detail for the ground state of two particles in a $\delta$-split potential [@murA07]. The behavior of $\rho_{0} \left( x,
x' \right)$ for the double-well, presented here, exhibits the same gross features as have been observed in [@murA07] for the $\delta$-split trap problem. In this section we therefore focus on the momentum distributions for this system.
The reciprocal momentum distribution for the $i^{th}$ eigenstate, $n_{i} \left(
k \right)$, is calculated from the corresponding reduced single-particle density, $\rho_{i} \left( x, x' \right)$, through Fourier transform $$\label{eq:mom_dist_int}
n_{i} \left( k \right) \equiv \left( 2 \pi \right)^{-1} \int_{- \infty}^{+
\infty} \int_{- \infty}^{+ \infty} \rho_{i} \left( x, x' \right)
\textrm{e}^{- \imath k \left( x - x' \right)} dx dx' \hspace*{0.3cm} ,$$ where $\, \int_{- \infty}^{+ \infty} n_{i} \left( k \right) dk = 1 \,$. Equivalently, one may obtain the momentum distribution for this eigenstate by considering the diagonalization of $\rho_{i} \left( x, x' \right)$. Specifically, the eigenvalue equation is $$\label{eq:rspdm_diag}
\int_{-\infty}^{+\infty}\rho_{i} \left( x, x' \right) \phi_{ij} \left( x'
\right) dx' = \lambda_{ij} \phi_{ij} \left( x \right) \hspace*{0.3cm} ,$$ where $\lambda_{ij}$ represents the fractional population of the ‘natural orbital’ $\phi_{ij} \left( x \right)$ such that $\sum_{j} \lambda_{ij} = 1$, for each $i$. Using numerical quadrature allows one to rewrite (\[eq:rspdm\_diag\]) as a matrix eigenvalue problem. The momentum distribution, $n_{i}
\left( k \right)$, may then be obtained from the relation $$\label{eq:mom_dist}
n_{i} \left( k \right) = \sum_{j} \lambda_{ij} \left| \mu_{ij} \left( k
\right) \right|^{2} \hspace*{0.3cm} ,$$ where $\mu_{ij} \left( k \right)$ denotes the Fourier transform of the natural orbital $\phi_{ij} \left( x \right)$, $$\label{eq:ft_nat_orb}
\mu_{ij} \left( k \right) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{+\infty}
\phi_{ij} \left( x \right) e^{- \imath k x} dx \hspace*{0.3cm} .$$ The momentum distribution for the ground state is presented in Fig. \[fig:mom\_dist\_00\]. The distributions presented correspond to $g_{1D} = 0$ (a), $g_{1D} = 1$ (b), $g_{1D} = 2$ (c) and $g_{1D} = 5$ (d). Also within each figure, the distributions arising for several different values for the barrier height ($\kappa$) are illustrated.
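The chain of steps above — diagonalize the quadrature-weighted RSPDM, Fourier transform the natural orbitals, and form the weighted sum of Eq. (\[eq:mom\_dist\]) — can be sketched numerically as follows. The harmonic-oscillator test state is an assumption made purely to provide a case with a known, Gaussian momentum distribution, $n(k) = e^{-k^{2}}/\sqrt{\pi}$.

```python
import numpy as np

def momentum_distribution(rho, x, k):
    """n(k) from a discretized RSPDM: diagonalize the quadrature-weighted
    kernel rho*dx for the natural orbitals, Fourier transform each orbital,
    and weight by its fractional population lambda_j."""
    dx = x[1] - x[0]
    lam, vecs = np.linalg.eigh(rho * dx)
    lam, vecs = lam[::-1], vecs[:, ::-1]         # descending populations
    phi = vecs / np.sqrt(dx)                     # normalized: int |phi_j|^2 dx = 1
    # mu_j(k) = (2 pi)^(-1/2) * int phi_j(x) exp(-i k x) dx (Riemann sum)
    mu = (dx / np.sqrt(2 * np.pi)) * np.exp(-1j * np.outer(k, x)) @ phi
    return (np.abs(mu)**2) @ np.clip(lam, 0.0, None)

# Sanity check on a pure state: rho(x, x') = phi_0(x) phi_0(x') for the
# harmonic-oscillator ground state, whose momentum distribution is Gaussian.
x = np.linspace(-8, 8, 512)
k = np.linspace(-8, 8, 512)
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)
n = momentum_distribution(np.outer(phi0, phi0), x, k)
```

For a pure state a single natural orbital carries $\lambda \approx 1$, so $n(k)$ reduces to $\left| \mu_{0}(k) \right|^{2}$, and the normalization $\int n(k)\, dk = 1$ is preserved by the quadrature.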
In the non-interacting case, Fig. \[fig:mom\_dist\_00\](a), one observes an initial peaked distribution for $\kappa = 0$. Increasing $\kappa$ enhances the peak and narrows the distribution. With increasing barrier, the ground-state wavefunction adapts to spread over the available interval, leading to this reciprocal narrowing in momentum space. Further increase in $\kappa$ means that the particles begin to experience the effect of the double-well. As the particles are non-interacting, the system displays a single-particle behavior. For a value of $\kappa = 5$ (insulator regime) each particle splits between the wells and the momentum distribution displays prominent second-order peaks, seen in Fig. \[fig:mom\_dist\_00\](a). The momentum distributions can be observed by scattering or free expansion of the particles in the absence of a confining potential. From this perspective the second-order peaks correspond to the interference fringes that arise from two coherent matter wave sources.
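The interference interpretation can be illustrated with a minimal single-particle model (the well separation $d$ and width $s$ below are arbitrary illustrative choices, not parameters of this system): a particle coherently split between two Gaussian wells has $n(k) \propto e^{-k^{2} s^{2}} \cos^{2}(kd/2)$, i.e. fringes of spacing $2\pi/d$ under a Gaussian envelope.

```python
import numpy as np

# A particle coherently delocalized over two wells separated by d produces
# interference fringes in momentum space, analogous to the second-order peaks.
x = np.linspace(-10, 10, 1024)
dx = x[1] - x[0]
d, s = 3.0, 0.5                                   # illustrative separation, width
phi = np.exp(-(x - d/2)**2 / (2*s**2)) + np.exp(-(x + d/2)**2 / (2*s**2))
phi /= np.sqrt(np.sum(phi**2) * dx)               # normalize int |phi|^2 dx = 1
k = np.linspace(-6, 6, 1024)
mu = (dx / np.sqrt(2*np.pi)) * np.exp(-1j * np.outer(k, x)) @ phi
n = np.abs(mu)**2                                 # fringes of spacing 2*pi/d
```

Localizing the particle into a single well removes the $\cos^{2}(kd/2)$ factor, which is the mechanism by which interactions wash out the secondary peaks discussed below.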
Introducing an interaction encourages localization and has the effect of removing these secondary peaks. Once again, for small values of $\kappa$, e.g. $\kappa = 1$ (dashed line) and $\kappa = 2$ (dot-dash line), the momentum distribution becomes increasingly peaked and narrower. In the presence of interactions the particles are restricted to separate wells and the interference effects are lost. In addition, the localization of the particles leads to a broadening of the momentum distribution, as observed in Fig. \[fig:mom\_dist\_00\](b), (c) and (d), for $\kappa = 5$ (dotted line). In the absence of any barrier, $\kappa = 0$ (solid line), one also observes the emergence of high-energy wings with increasing interaction, $g_{1D}$. Similar high-energy wings have been reported in the TG regime for free space, [@ols98], and harmonic confinement, [@gir01].
\[subsect:entropy\_ground\] Von Neumann entropy
-----------------------------------------------
Entanglement is a fundamental expression of information content and is responsible for the increased efficiency of some quantum algorithms over their classical counterparts. Previous authors have shown that the von Neumann entropy of the RSPDM is a good measure of entanglement for a system of two bosons [@pas01; @li01; @sun06]. For the case of two indistinguishable particles, determination of whether or not the two subsystems are entangled requires that one considers both the von Neumann entropy of the reduced single-particle density matrix, and the Schmidt number i.e. number of non-zero eigenvalues ($\lambda_{ij}$) obtained in the diagonalization of $\rho_{i}$, Eq. , [@ghi03; @ghiA04; @ghiB04]. We use the von Neumann entropy to quantify the entanglement in the position coordinates, $x_{1}$ and $x_{2}$, of the particle pair.
Following the diagonalization of the reduced single-particle density, Eq. (\[eq:rspdm\_diag\]), the von Neumann entropy for the $i^{\textrm{th}}$ eigenstate of the two-particle system ($S_{i}$) is obtained from, $$\label{eq:von_neumann}
S_{i} = -\sum_{j} \lambda_{ij} \log_{2} \lambda_{ij} \hspace*{0.3cm} .$$
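Numerically, $S_{i}$ follows directly from the natural-orbital populations $\lambda_{ij}$; a minimal implementation (using the usual convention $0 \log 0 = 0$) might read:

```python
import numpy as np

def von_neumann_entropy(lam, eps=1e-12):
    """S = -sum_j lambda_j log2(lambda_j) over the natural-orbital
    populations, dropping numerically zero values (0*log 0 -> 0)."""
    lam = np.asarray(lam, dtype=float)
    lam = lam[lam > eps]
    return float(-np.sum(lam * np.log2(lam)))

# A product state occupies one natural orbital: lam = (1,) gives S = 0.
# Two equally populated orbitals, lam = (1/2, 1/2), give S = 1, as for the
# antisymmetric state Psi_1.
```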
### \[subsubsect:entropy\_v\_interaction\_ground\] Variation of von Neumann entropy with interaction strength
Variation of the von Neumann entropy with $g_{1D}$, for the ground state of this system, is plotted in Fig. \[fig:entropy\_v\_int\_ground\].
Examining the lowest solid line ($\kappa = 0$): when no interactions are present ($g_{1D} = 0$), $S = 0$, as one expects. The product states (with correct symmetrization) given in Eq. (\[eq:two\_part\_eigen\_C00\]) represent the eigenstates of the non-interacting system. Switching on a small interaction generates correlations and results in a non-zero entropy. Increasing the interaction strength leads to an increasing entropy, saturating at $S \approx 1$, as for the harmonic potential [@murA07; @sun06]. This behavior can be related to fermionization. As the repulsive interactions increase, the system enters the TG regime. In this regime the ground state of the system can be represented by the corresponding system of two non-interacting fermions, with correct symmetrization. In terms of the eigenfunctions prescribed in Eq. (\[eq:two\_part\_eigen\_C00\]), the ground state of the system is given by $\left| \Psi^{\textrm{ni}}_{1} \left( \kappa ; x_{1}, x_{2} \right)
\right| = \left| \frac{1}{\sqrt{2}} \left[ u_{0} \left( \kappa ; x_{1} \right)
u_{1} \left( \kappa ; x_{2} \right) - u_{1} \left( \kappa ; x_{1} \right) u_{0}
\left( \kappa ; x_{2} \right) \right] \right|$. The antisymmetric state, $\Psi_{1}$, in the presence of point-like interactions, will always give $S =
1$. The ground state of the system becomes degenerate with this antisymmetric state in the limit of hard-core interactions.
Fig. \[fig:entropy\_v\_int\_ground\] also displays the effect of increasing the barrier height, $\kappa$. As the system tends towards the insulator regime, the entropy of the system becomes increasingly sensitive to changes in $g_{1D}$, about $g_{1D} = 0$. This effect was also reported in [@murA07] for the $\delta$-split trap, suggesting that this is another generic feature associated with double-well potentials. The increased barrier height reduces tunneling between the wells. For any increase in the interaction strength, the two-particle wavefunction will attempt to redistribute so as to minimize this interaction. However, with the increased barrier height the wavefunction is forced to remain more localized, and is restricted in its redistribution.
### \[subsubsect:entropy\_v\_barrier\_ground\] Variation of von Neumann entropy with barrier height
Variation of the von Neumann entropy with $\kappa$, for the ground state of this system, is plotted in Fig. \[fig:entropy\_v\_barrier\_ground\].
The basic trends bear a striking resemblance to those seen for the $\delta$-split trap, [@murA07]. Specifically, one observes that the initial entropy of the system (i.e. for $\kappa = 0$) is dictated by the interaction strength of the system, $g_{1D}$. The larger is $g_{1D}$, the larger is the initial value of $S$, as is consistent with Fig. \[fig:entropy\_v\_int\_ground\]. Increasing the height of the barrier then has the effect of increasing the entropy of the system towards $S = 1$. In the limit of large barrier heights the entropy of the system saturates at $S=1$, regardless of the value of interaction strength (the notable exception being the non-interacting case, for which $S$ is identically equal to zero for all $\kappa$). This saturation at $S = 1$ corresponds to the loss of entanglement.
When the system enters the insulator limit there is an implicit exchange uncertainty in the state of the system, arising from the indistinguishable nature of the particles. As such, these correlations cannot be exploited in any meaningful quantum information protocol and the system is regarded as non-entangled. This diagnosis also follows from the criteria set out in [@ghi03; @ghiA04; @ghiB04] since, in the limit $\kappa \rightarrow \infty$, $S \rightarrow 1$ and the Schmidt number can be seen to approach a value of 2 (not shown here). By the criteria outlined in [@ghiA04], any state for which $S = 1$ and with a Schmidt number of 2 must be regarded as non-entangled.
In contrast to the $\delta$-split trap, the entropy dependence $S \left( \kappa
\right)$ for the double-well system is quite sigmoidal. The separation of wells only becomes apparent for large values of $\kappa$ (i.e. $\kappa > 3$). From Fig. \[fig:entropy\_v\_barrier\_ground\] one can identify $2 < \kappa < 3$ as the interval over which the entropy makes its most rapid variation.
### \[subsubsect:bose\_hubbard\_entropy\] Von Neumann entropy in the Bose-Hubbard model
One may examine the von Neumann entropy of the ground state within the formalism of the Bose-Hubbard model presented in Eq. . Using a Fock basis for the two-particle system of the form $ \left. \left|
n_{L} n_{R} \right. \right>$, where $n_{L(R)}$ represents the number of particles in the left (right) well, leads to three basis states : $ \left. \left|
2 0 \right. \right>$, $ \left. \left| 1 1 \right. \right>$ and $ \left. \left| 0
2 \right. \right>$.
Thus, in terms of this basis the Hamiltonian may be written in matrix form as $$\label{eq:bose_hubbard_ham_matrix}
\hat{H} =
\left(
\begin{array}{ccc}
2 \epsilon + 2 U & \sqrt{2} J & 0 \\[0.2cm]
\sqrt{2} J & 2 \epsilon & \sqrt{2} J \\[0.2cm]
0 & \sqrt{2} J & 2 \epsilon + 2 U
\end{array}
\right) \hspace*{0.3cm} .$$ The eigenvalues of this Hamiltonian follow from simple algebra: $$\begin{aligned}
E_{-} = & 2 \epsilon + U - \sqrt{U^{2} + 4 J^{2}} \nonumber \\[0.2cm]
E_{\textrm{mid}} = & 2 \epsilon + 2 U \nonumber \\[0.2cm]
E_{+} = & 2 \epsilon + U + \sqrt{U^{2} + 4 J^{2}}
\label{eq:bose_hubbard_eig_val} \hspace*{0.3cm} .
\end{aligned}$$ The eigenvectors corresponding to the eigenvalues presented in Eq. are found to be $$\label{eq:bose_hubbard_eig_vec}
\left. \left| \Psi_{\textrm{mid}} \right. \right> = \frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
1 \\
0 \\
-1
\end{array}
\right) \hspace*{0.5cm}
\left. \left| \Psi_{\pm} \right. \right> = \mathcal{N}_{\pm}
\left(
\begin{array}{c}
1 \\
\frac{2 \sqrt{2} J}{U \pm \sqrt{U^{2} + 4 J^{2}}} \\
1
\end{array}
\right) \hspace*{0.3cm} ,$$ where $\mathcal{N}_{\pm}$ represent normalization factors and $\left. \left|
\Psi_{\textrm{mid}} \right. \right>$ has odd inversion symmetry.
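As a sketch (with arbitrary illustrative parameter values), the matrix of Eq. (\[eq:bose\_hubbard\_ham\_matrix\]) can be diagonalized numerically and checked against the closed-form eigenvalues of Eq. (\[eq:bose\_hubbard\_eig\_val\]):

```python
import numpy as np

def bh_hamiltonian(eps, U, J):
    """Two-particle Bose-Hubbard Hamiltonian in the Fock basis
    {|20>, |11>, |02>}."""
    r2J = np.sqrt(2.0) * J
    return np.array([[2*eps + 2*U, r2J,         0.0],
                     [r2J,         2*eps,       r2J],
                     [0.0,         r2J, 2*eps + 2*U]])

eps, U, J = 0.5, 1.0, 0.3                      # illustrative values
E_num = np.sort(np.linalg.eigvalsh(bh_hamiltonian(eps, U, J)))
root = np.sqrt(U**2 + 4 * J**2)
E_ana = np.sort([2*eps + U - root,             # E_-
                 2*eps + 2*U,                  # E_mid
                 2*eps + U + root])            # E_+
```

For repulsive interactions ($U \geq 0$) the ground state is always $E_{-}$, since $E_{-} - E_{\textrm{mid}} = -U - \sqrt{U^{2} + 4J^{2}} < 0$.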
Given the two-particle ground state in the Fock basis, $\left. \left| \Psi_{-}
\right. \right>$ , one may determine the reduced single-particle density matrix by tracing over the degrees of freedom of either particle. The single-particle basis states may be represented in the form $\left. \left| n_{L}, n_{R} \right.
\right>$ as $\left. \left| 1 0 \right. \right>$ and $\left. \left| 0 1 \right.
\right>$. In turn, the two-particle basis states can be written in the symmetric form: $$\begin{aligned}
\left. \left| 2 0 \right. \right> = & \left. \left| 1 0 \right.
\right>_{1} \otimes \left. \left| 1 0 \right. \right>_{2}
\nonumber \\[0.2cm]
\left. \left| 1 1 \right. \right> = & \frac{1}{\sqrt{2}} \left[ \left.
\left| 1 0 \right. \right>_{1} \otimes \left. \left| 0 1 \right.
\right>_{2}
+ \left. \left| 0 1 \right. \right>_{1} \otimes \left. \left| 1 0 \right.
\right>_{2} \right] \nonumber \\[0.2cm]
\left. \left| 0 2 \right. \right> = & \left. \left| 0 1 \right.
\right>_{1}
\otimes \left. \left| 0 1 \right. \right>_{2}
\label{eq:two_particle_basis_reexpressed} \hspace*{0.3cm} .
\end{aligned}$$ The eigenvalues of this RSPDM are found to be $$\begin{aligned}
\lambda_{1} = & \mathcal{N}_{-}^{2} \left[ 1 + \frac{4 J^{2}}{\left( U -
\sqrt{U^{2} + 4 J^{2}} \right)^{2}} - \frac{4 J}{U - \sqrt{U^{2} + 4
J^{2}}}\right] \nonumber \\[0.2cm]
\lambda_{2} = & \mathcal{N}_{-}^{2} \left[ 1 + \frac{4 J^{2}}{\left( U -
\sqrt{U^{2} + 4 J^{2}} \right)^{2}} + \frac{4 J}{U - \sqrt{U^{2} + 4
J^{2}}}\right] \label{eq:eigval_rspdm_bhm} \hspace*{0.3cm} .
\end{aligned}$$ The variation of the ground-state entropy ($S$) with the model parameters $J$ and $U$ is depicted as a surface plot in Fig. \[fig:bhm\_entropy\_surf\_JU\].
It is noted that the qualitative behavior of the entropy displayed in Figs. \[fig:entropy\_v\_int\_ground\] and \[fig:entropy\_v\_barrier\_ground\] for varying $g_{1D}$ and $\kappa$, respectively, is reflected in the Bose-Hubbard model with variation of the parameters $U$ and $J$. This tight-binding approximation is poor in the limit $U/J \rightarrow 0$, but is an accurate representation in the insulator limit.
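A compact numerical sketch of this entropy surface follows: diagonalize the $3 \times 3$ Hamiltonian, trace out one particle to obtain the $2 \times 2$ RSPDM in the single-particle basis $\left. \left| 1 0 \right. \right>$, $\left. \left| 0 1 \right. \right>$, and evaluate $S$. The limiting behaviors $S \rightarrow 0$ for $U/J \rightarrow 0$ and $S \rightarrow 1$ for $U/J \rightarrow \infty$ emerge directly.

```python
import numpy as np

def ground_state_entropy(U, J, eps=0.0):
    """Von Neumann entropy of the RSPDM for the two-particle
    Bose-Hubbard ground state |Psi_->."""
    r2J = np.sqrt(2.0) * J
    H = np.array([[2*eps + 2*U, r2J,         0.0],
                  [r2J,         2*eps,       r2J],
                  [0.0,         r2J, 2*eps + 2*U]])
    _, v = np.linalg.eigh(H)
    c20, c11, c02 = v[:, 0]                    # ground-state amplitudes
    # Trace out one particle using the symmetrized two-particle basis states
    off = (c20 + c02) * c11 / np.sqrt(2.0)
    rho = np.array([[c20**2 + c11**2 / 2, off],
                    [off, c02**2 + c11**2 / 2]])
    lam = np.clip(np.linalg.eigvalsh(rho), 1e-15, None)
    return float(-np.sum(lam * np.log2(lam)))
```

The eigenvalues of `rho` reproduce the analytic $\lambda_{1}$, $\lambda_{2}$ of Eq. (\[eq:eigval\_rspdm\_bhm\]), so the entropy surface of Fig. \[fig:bhm\_entropy\_surf\_JU\] can be regenerated by scanning $U$ and $J$.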
\[sect:excited\_states\] Excited states
=======================================
Attention is now turned to the three lowest excited states which, together with the ground state, represent the lowest energy band of the two-particle double-well system. In the non-interacting case, with spectrum depicted in Fig. \[fig:two\_part\_eig\_spec\](a), the three lowest excited states may be represented as given in Eq. , with one of these states being antisymmetric and two of them symmetric. Variation of parameters $\kappa$ and $g_{1D}$ can lead to reordering of the energy eigenvalues, as observed in Fig. \[fig:two\_part\_eig\_spec\]. However, in this section, the study of the excited states of this two-particle system will be restricted to these three states of the lowest band, identifiable through their symmetry. Henceforth the term ‘first-excited state’ refers to the lowest energy antisymmetric state ($\Psi_{1}$), ‘second-excited state’ refers to the second-lowest energy symmetric state ($\Psi_{2}$) and ‘third-excited state’ refers to the third-lowest lying symmetric state ($\Psi_{3}$).
\[subsect:wavefunction\_excited\] Two-particle excitations
----------------------------------------------------------
The wavefunctions for $\Psi_{1,2,3}$ are represented, by means of color scale plots, in Figs. \[fig:psi01\], \[fig:psi02\] and \[fig:psi03\], respectively. Again, the standard (*row, column*) notation is used to reference individual subplots. The color scale is consistent across all wavefunction plots, permitting direct comparison between Figs. \[fig:psi00\], \[fig:psi01\], \[fig:psi02\] and \[fig:psi03\].
Fig. \[fig:psi01\] represents the ground state for a system of two spin-aligned fermions ($\Psi_{1}$), which is identically zero along the line $x_{1} = x_{2}$ and, thereby, unaffected by the zero-ranged interaction. Considering Fig. \[fig:psi01\], moving along a given row (i.e. increasing repulsion for a fixed barrier), the wavefunction plots remain unchanged, illustrating the independence of this state with respect to interaction strength, $g_{1D}$. As $\kappa$ increases (i.e. down any column) the positive and negative lobes along the $x_{1} = -x_{2}$ diagonal become more widely separated indicating isolation into separate wells. Once again the wavefunction density in these two quadrants correspond to the situation where particle 1 is in the left well ($x_{1} < 0$) and particle 2 is in the right ($x_{2}> 0$), and vice versa. In the limit of a large barrier, the ground state becomes degenerate with this antisymmetric state. The wavefunction plots are almost identical (except for sign) for $\kappa = 5$ as seen, for example, by comparing Fig. \[fig:psi00\] (4,4) and Fig. \[fig:psi01\] (4,4). Furthermore, as already discussed, one expects the ground state to become degenerate with this antisymmetric state in the limit of $g_{1D} \rightarrow \infty$, for all $\kappa$. This degeneracy is evidenced by comparing the fourth column in Fig. \[fig:psi00\] to any column in Fig. \[fig:psi01\]. Even at this finite interaction strength ($g_{1D} = 5$) the equivalence of these two states is apparent. Finally, from each of the plots in Fig. \[fig:psi01\] it is clear that this eigenstate is of odd parity, such that $\Psi_{1} \left( x_{1}, x_{2}
\right) = - \Psi_{1} \left( -x_{1}, -x_{2} \right)$.
Fig. \[fig:psi02\] depicts the second-excited state for the system of two bosons in a double-well potential and, as with $\Psi_{1}$, this state exhibits odd parity: $\Psi_{2} \left( x_{1}, x_{2} \right) = - \Psi_{2} \left( -x_{1},
-x_{2} \right)$. The case of no barrier ($\kappa = 0$) and no interaction ($g_{1D} = 0$) is illustrated in Fig. \[fig:psi02\] (1,1). The eigenstate is composed of two lobes which correspond to both particles co-existing on the same side of the well. In the case of no interactions ($g_{1D} = 0$), illustrated in column 1, this symmetric eigenstate is degenerate with the antisymmetric state considered in Fig. \[fig:psi01\]. Repulsive interactions will tend to exclude the wavefunction from the line $x_{1} = x_{2}$ (e.g. compare (1,1) to (1,3) or (1,4)). In the Tonks limit this splits each of the upper right and lower left lobes. Considering the effect of the barrier in column 3 ($g_{1D} = 2$), the initial wavefunction demonstrates the double-lobe structure. As the barrier is increased to $\kappa = 1$, (2,3), and then $\kappa = 2$, (3,3), the wavefunction spreads out in ($x_{1}, x_{2}$) space. Further increase of the barrier height causes the system to move into the insulator limit, plot (4,3), forming two isolated lobes in the upper-right and lower-left quadrants. The eigenstate, in this case, corresponds to the physical situation of both particles residing in either the left well or the right well.
Finally, the third-excited state is illustrated in Fig. \[fig:psi03\]. In contrast to $\Psi_{1}$ and $\Psi_{2}$, this eigenstate is of even parity such that $\Psi_{3} \left( x_{1}, x_{2} \right) = \Psi_{3} \left( -x_{1}, -x_{2}
\right)$. Scanning down column 1: as the system moves into the insulator limit, the eigenstate is composed of four equally-weighted lobes in the four quadrants, equivalent to the corresponding ground-state eigenfunction, seen in Fig. \[fig:psi00\] (4,1). In fact from Fig. \[fig:two\_part\_eig\_spec\](a), in the non-interacting case, the four lowest eigenstates all become degenerate in the insulator limit ($\kappa \rightarrow \infty$). As a consequence, the eigenfunction plots (4,1) in Figs. \[fig:psi00\], \[fig:psi01\], \[fig:psi02\] and \[fig:psi03\] relate to four degenerate states.
This symmetric eigenstate is non-zero along the line $x_{1} = x_{2}$. As one increases $g_{1D}$ one again observes the exclusion of the wavefunction from this line (e.g. examining row 1 in Fig. \[fig:psi03\]). As barrier height is increased the wavefunction expands in ($x_{1}, x_{2}$) space and there is some suppression of the wavefunction in the region of the rising barrier (i.e. $x_{1}
= 0$ and $x_{2} = 0$). In the insulator limit, e.g. in plot (4,4) for which $\kappa = 5$, the wavefunction in the off-diagonal quadrants vanishes and one observes two double-lobes in the lower-left and upper-right quadrants, representing the physical situation where both particles reside in the same well. The degeneracy of $\Psi_{2}$ and $\Psi_{3}$, in the limit $\kappa \rightarrow
\infty$ (seen in Fig. \[fig:two\_part\_eig\_spec\]) is manifested in the corresponding wavefunction plots. This is demonstrated by comparing corresponding plots in the bottom rows ($\kappa = 5$) of Figs. \[fig:psi02\] and \[fig:psi03\].
In the limit of large $\kappa$ (and for any positive interaction), the ground and first-excited states correspond to the two particles in separate wells. By contrast, the second- and third-excited states, in the same limit, correspond to two particles in the same well. It follows that an increase in the repulsive interaction coupling will cause this second pair of levels to be shifted upwards in energy. In this way, in the $\kappa \rightarrow \infty$ limit, one observes the separation of these two pairs of levels to increase as $g_{1D}$ is increased (see Fig. \[fig:two\_part\_eig\_spec\]). The increasing separation of these levels with increasing $\kappa$ follows as a corollary. As $\kappa$ is increased the particles become more tightly confined to the individual wells. This increased confinement, for the upper pair of levels, will give rise to an increased interaction of the two particles and a subsequent increase in the energy of these eigenstates, relative to the lower pair.
\[subsect:mom\_dist\_excited\] Momentum distribution
----------------------------------------------------
The momentum distributions for the excited states are calculated as outlined in Sec. \[subsect:mom\_dist\_ground\]. The calculated distributions for the second-excited state ($\Psi_{2}^{\textrm{o}}$, where the superscript ‘o’ indicates the ‘odd’ inversion symmetry of this eigenstate) are displayed in Fig. \[fig:mom\_dist\_02\]. For $\kappa = 0$ one observes a double-humped distribution that becomes narrower with increasing $\kappa$ and, in the insulator limit, gives way to a single-peak distribution with high-energy tails. This is similar to the result for $\Psi_{1}^{\textrm{o}}$ (not shown). An increase in the interaction coupling has the effect of narrowing the momentum distribution. Fig. \[fig:psi02\] illustrates that increasing $g_{1D}$ will expand the wavefunction in ($x_{1}, x_{2}$) space, leading to this reciprocal narrowing in momentum space. At the same time the increased interaction leads to an accentuation of the double-peaked structure, observed for small $\kappa$.
Fig. \[fig:mom\_dist\_03\] illustrates the momentum distribution for the even-parity state $\Psi_{3}^{\textrm{e}}$. For the non-interacting case, $g_{1D}
= 0$ (a), a double-mode distribution arises with a node at $k = 0$. This node is accounted for by the separable nature of $\Psi_{3}^{\textrm{e}}$ in the non-interacting limit: $\Psi^{\textrm{ni}}_{3} \left( \kappa ; x_{1}, x_{2}
\right) = u_{1} \left( \kappa ; x_{1} \right) u_{1} \left( \kappa ; x_{2}
\right)$. Considerable narrowing of this distribution is noted as $\kappa$ is increased and in the insulator limit a second pair of smaller peaks emerges. This second pair of peaks may be viewed as interference fringes from each particle being distributed between the two wells - compare Fig. \[fig:psi00\] (4,1) and Fig. \[fig:psi03\] (4,1).
For increased interaction strength ($g_{1D}$) one continues to observe the narrowing of the distribution with increased barrier height. However, the presence of the interactions causes the node at $k=0$ to be removed, as one can no longer write the eigenfunction in the separable form given in Eq. . Instead one just observes a strong depression of the distribution about $k = 0$. At the same time, the introduction of the interactions has the effect of completely removing the double-peaked structure in the insulator limit, as is observed for the dotted line ($\kappa = 5$) in each of Fig. \[fig:mom\_dist\_03\](b), (c) and (d). As seen in Fig. \[fig:psi03\], in the presence of a finite interaction the wavefunction in the off-diagonal quadrants vanishes in the insulator limit, and this eigenstate describes a situation where both particles occupy one side of the double-well.
\[subsect:entropy\_excited\] Von Neumann entropy
------------------------------------------------
As for the ground state, one may obtain the von Neumann entropy for the excited states of the two-particle system via diagonalization of the reduced single-particle density matrix. In this section the dependence of the von Neumann entropy, $S$, of the four lowest two-particle states, on the interaction strength ($g_{1D}$) and the barrier height ($\kappa$) is considered.
### \[subsubsect:entropy\_v\_interaction\_excited\] Variation of von Neumann entropy with interaction strength
Fig. \[fig:entropy\_v\_int\_excited\] illustrates the dependence of $S$ on the interaction coupling, $g_{1D}$ ($> 0$). The dependence is examined for four different values of barrier height: $\kappa = 0$ (a), $\kappa = 2$ (b), $\kappa
= 4$ (c) and $\kappa = 5$ (d). For each value of the barrier height the entropy of the four lowest eigenstates is depicted: ground state (solid line), first-excited state (dashed line), second-excited state (dot-dash line) and third-excited state (dotted line). The dependence of the ground-state entropy on $g_{1D}$ has already been examined in Fig. \[fig:entropy\_v\_int\_ground\], however it is useful to replicate these plots here to help inform the examination of the excited-state plots.
Several important features are noted. In all cases the first-excited state (dashed line) shows no dependence on the interaction strength, as is expected owing to the symmetry of this eigenstate. Instead, this eigenstate exhibits a value of $S = 1$ for all $g_{1D}$. This value follows from the analytic form for this eigenstate, $\Psi^{\textrm{ni}}_{1}$, given by Eq. , which holds for all values of $g_{1D}$. At the same time, the analytic representations for the three remaining eigenstates are also given in Eq. , for $g_{1D} = 0$. From these representations it is clear that, in the non-interacting limit, the entropy for the ground state (solid line) and third-excited state (dotted line) is always zero, as these states may always be represented as direct-product states for $g_{1D} = 0$. In a similar way, the second-excited state (dot-dash line) always assumes a value of $S = 1$ in the non-interacting limit. Once again, this may be attributed to the symmetrized form for this state as given by $\Psi^{\textrm{ni}}_{2}$ in Eq. .
Considering the case of $\kappa = 0$, Fig. \[fig:entropy\_v\_int\_excited\](a), the ground state begins at $S = 0$ and increases monotonically with $g_{1D}$. As $g_{1D} \rightarrow \infty$ one enters the TG regime and this ground state (solid line) becomes degenerate with the first-excited (dashed line) state and $S \approx 1$. By contrast, the second-excited state begins with $S = 1$, as discussed, and increases with increasing $g_{1D}$, but at a much slower rate than that exhibited by the ground state. The third excited state (dotted line) begins, like the ground state, with $S = 0$ and increases rapidly with increasing interaction strength.
Increasing the height of the barrier to $\kappa = 2$, Fig. \[fig:entropy\_v\_int\_excited\](b), one observes qualitatively similar behavior from all four states except that each state exhibits a more marked variation in $S$ over the range of $g_{1D}$ examined. As one moves into the insulator regime, e.g. $\kappa = 4$, Fig. \[fig:entropy\_v\_int\_excited\](c), the behavior changes quite significantly. As discussed previously, the ground state exhibits a very drastic variation with $g_{1D}$, converging very rapidly to $S \approx 1$. The second-excited state still exhibits the same basic behavior as noted for smaller $\kappa$ but, once again, the increased barrier height leads to an increased sensitivity of this state to variation in $g_{1D}$. The third-excited state shows a distinct change in behavior for this increased barrier height. At small values of interaction coupling ($g_{1D} < 1$) the entropy of this state follows closely that of the ground state. As interaction strength is increased beyond this value then the ground-state entropy begins to plateau at $S \approx 1$, whilst that of third-excited state continues to increase. Increasing the barrier height to $\kappa = 5$, Fig. \[fig:entropy\_v\_int\_excited\](d), moves the system deeper into the insulator limit and the behavior demonstrated in (c) becomes even more striking. In this case the behavior of the ground-state entropy is more dramatic, with the entropy saturating at $S \approx 1$, already, for $g_{1D} \approx 0.5$. Again the entropy of the third-excited state follows this trend identically. However, where the entropy of the ground state plateaus at $S \approx 1$, the entropy of the third-excited state continues to increase and follows now, almost identically, the entropy of the second-excited state. A handle on this behavior is provided by the wavefunction plots of Figs. \[fig:psi00\], \[fig:psi02\] and \[fig:psi03\]. 
One observes that, in this insulator limit ($\kappa = 5$), the third-excited state, for small $g_{1D}$, as seen in Fig. \[fig:psi03\] (4,1), closely resembles the ground state in Fig. \[fig:psi00\] (4,1). For larger interaction couplings ($g_{1D} \ge 1$) the eigenfunction for this third-excited state, as seen in Fig. \[fig:psi03\] (4,2) - (4,4), closely resembles that of the second-excited state in Fig. \[fig:psi02\] (4,2) - (4,4).
### \[subsubsect:entropy\_v\_barrier\_excited\] Variation of von Neumann entropy with barrier height
The variation of von Neumann entropy with barrier height is illustrated in Fig. \[fig:entropy\_v\_barrier\_excited\], for the same four, lowest-energy two-particle states. In this case, four different values of interaction coupling are presented: $g_{1D} = 1$ (a), $g_{1D} = 2$ (b), $g_{1D} = 5$ (c) and $g_{1D} = 10$ (d). Again, in each plot the eigenstates are represented by the same line types used in Fig. \[fig:entropy\_v\_int\_excited\].
Some general features and behaviors can be noted from these plots. Again, the first-excited state is observed to have an entropy of unity for all $g_{1D}$ and $\kappa$. For $\kappa \rightarrow \infty$, the entropy of the ground state tends to a value of unity, regardless of the value of $g_{1D}$ (provided $g_{1D} > 0$). In this limit the ground state of the system is described by one particle in each half of the double-well potential, and corresponds to the Mott-insulator regime. On the other hand, the initial value of $S$ (when $\kappa = 0$) is sensitive to $g_{1D}$: the higher the value of $g_{1D}$, the larger the initial value of $S$. Since $S \rightarrow 1$ in the insulator limit, it follows that the ground-state entropy exhibits a less dramatic variation with $\kappa$ for larger values of the interaction strength. For all of the symmetric eigenstates, i.e. ground (solid line), second-excited (dot-dash line) and third-excited (dotted line), the entropy in general increases as the interaction strength is increased, consistent with Fig. \[fig:entropy\_v\_int\_excited\]. In particular, the entropy of these symmetric states in the absence of a barrier ($\kappa = 0$) increases with increasing $g_{1D}$. The second-excited state (dot-dash line) exhibits an entropy that increases monotonically with $\kappa$ over the range of parameter space considered. By contrast, the entropy of the third-excited state (dotted line) first increases and then decreases as the barrier is raised.
One will also note that in the limit of large barrier heights (i.e. $\kappa \rightarrow \infty$), the entropies of the second- and third-excited states tend to the same value. Although not obvious from Fig. \[fig:entropy\_v\_barrier\_excited\](d), this has also been verified for the case of $g_{1D} = 10$. Once again, a handle on why this happens can be obtained from the wavefunction plots for these eigenstates in Figs. \[fig:psi02\] and \[fig:psi03\]. One can see that in the presence of finite interactions, these two eigenstates become identical in the insulator limit, up to a phase (compare row 4 of these figures). Fig. \[fig:entropy\_v\_barrier\_excited\] also suggests that the value of $S$ to which these two states converge, in the insulator limit, is greater than one and increases with increasing interaction strength, $g_{1D}$.
This behavior of the entropy may be qualitatively understood as follows. Both the second- and third-excited states correspond, in the insulator limit, to the physical situation of two particles coexisting in either the right well or the left well. As such, these states may be roughly represented by Bell-type states of the form $\; 1/\sqrt{2} \left( \left. \left| 20 \right. \right> \pm \left. \left| 02 \right. \right> \right)$ - see Sec. \[subsubsect:bose\_hubbard\_entropy\] for the definition of these basis states. Such a Bell state carries one e-bit of entanglement, with a corresponding von Neumann entropy of unity. Beyond this, however, there are also correlations between the two particles coexisting in the same well, as can be seen, for example, from Fig. \[fig:psi03\] (4,4). Here the repulsive interaction between the particles occupying the same well leads to a partition of the wavefunction, within each well, into two lobes. Consider the double lobe seen in the upper-right quadrant, corresponding to both particles coexisting in the right well of the double-well. The upper half of the lobe represents the situation where particle 1 is on the left of this well and particle 2 is on the right; the lower half-lobe corresponds to the reverse of this situation ($x_{1} \leftrightarrow x_{2}$). In this case the correlations in the system are analogous to those observed for the ground state, $\Psi_{0}$, in the absence of any barrier ($\kappa = 0$). These correlations (and therefore $S$) are seen to increase as the interaction coupling is increased. One significant distinction exists between these ‘single-well’ correlations, seen in states $\Psi_{2,3}$, and the correlations seen in the ground state for $\kappa = 0$. On increasing $\kappa$, the second- and third-excited states become more confined and the two-particle wavefunction becomes increasingly localized in a single well. The particle-particle interactions, however, compete with this effect, acting to keep the two-particle wavefunction spread in space and, in particular, minimized along the line $x_{1} = x_{2}$. The ground state does not experience this particular single-well competition between $\kappa$ and $g_{1D}$. So, in the insulator limit, the second- and third-excited states have correlations arising from the realization of the Bell-type state, plus the ‘single-well’ correlations due to the two interacting particles coexisting in the same well. This combination of factors leads to an entropy greater than unity, with the contribution of the ‘single-well’ correlations, in general, increasing with increasing interaction strength.
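This decomposition of the entropy can be made concrete with a small numerical sketch (illustrative only, and not the DVR calculation used in this work): assuming Gaussian left/right well modes, the Schmidt spectrum of a Bell-type state $1/\sqrt{2}\left(\left|20\right> + \left|02\right>\right)$ gives $S = 1$ exactly, while suppressing the wavefunction along $x_1 = x_2$ within each well (a crude stand-in for the repulsive ‘correlation hole’) pushes $S$ above unity. The mode widths and the depth of the correlation hole are arbitrary choices of the example.

```python
import numpy as np

# Grid and two well-localized, numerically orthonormal Gaussian modes.
x = np.linspace(-8, 8, 400)
dx = x[1] - x[0]

def mode(center, width=0.5):
    phi = np.exp(-(x - center) ** 2 / (2 * width ** 2))
    return phi / np.sqrt(np.sum(phi ** 2) * dx)

L, R = mode(-3.0), mode(+3.0)

def entropy(psi):
    """von Neumann entropy of the one-particle RDM via Schmidt/SVD."""
    s = np.linalg.svd(psi * dx, compute_uv=False)  # Schmidt coefficients
    lam = s ** 2
    lam = lam[lam > 1e-14]
    return -np.sum(lam * np.log2(lam))

# Bell-type state (|20> + |02>)/sqrt(2): one e-bit, S = 1.
psi_bell = (np.outer(L, L) + np.outer(R, R)) / np.sqrt(2)
S_bell = entropy(psi_bell)

# Add repulsive 'single-well' correlations: suppress psi along x1 = x2.
x1, x2 = np.meshgrid(x, x, indexing="ij")
corr = 1.0 - 0.8 * np.exp(-(x1 - x2) ** 2)
psi_corr = psi_bell * corr
psi_corr /= np.sqrt(np.sum(psi_corr ** 2) * dx * dx)
S_corr = entropy(psi_corr)
```

Here the singular values of the discretized $\psi(x_1, x_2)$ are the Schmidt coefficients, so the Bell-type state yields $S_{\textrm{bell}} = 1$, and the correlated state yields $S_{\textrm{corr}} > 1$, in line with the two-contribution picture above.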
\[subsect:stimulating\_excitations\] Stimulating two-particle excitations
-------------------------------------------------------------------------
The previous results have clearly illustrated that manipulations of this two-particle system can be achieved through the variation of the control parameters $g_{1D}$ and $\kappa$ in some adiabatic manner. However, one could also consider time-dependent manipulation of the state. In the insulator limit, one may propose two methods of coupling these lowest levels: (a) shaking the trap from side to side, and (b) modulating the barrier height (see Fig. \[fig:bhm\_manipulations\]). To first order, the former represents a dipole excitation, capable of coupling $\left. \left| \Psi_{0}^{\textrm{e}} \right. \right>$ and $\left. \left| \Psi_{2}^{\textrm{o}} \right. \right>$. The latter scheme (to first order) corresponds to a quadrupole excitation, capable of coupling states $\left. \left| \Psi_{0}^{\textrm{e}} \right. \right>$ and $\left. \left| \Psi_{3}^{\textrm{e}} \right. \right>$. By employing such techniques it should therefore prove possible to exploit these three lowest eigenstates in order to engineer the two-particle state in a time-dependent fashion.
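The parity argument behind these two coupling schemes can be checked in a single-particle analogue (a sketch, not the two-particle calculation; the grid, the parameters $A = 1$, $\kappa = 6$, and units $\hbar = m = 1$ are choices of the example): a dipole operator $x$ connects only states of opposite parity, while a quadrupole operator $x^{2}$ connects states of equal parity.

```python
import numpy as np

# Single-particle analogue: diagonalize H = -1/2 d^2/dx^2 + A(x^4 - kappa x^2)
# by finite differences and test the parity selection rules behind the
# dipole (shaking) and quadrupole (barrier-modulation) couplings.
A, kappa = 1.0, 6.0
n = 600
x = np.linspace(-4.0, 4.0, n)
dx = x[1] - x[0]

V = A * (x ** 4 - kappa * x ** 2)
off = -0.5 / dx ** 2 * np.ones(n - 1)
H = np.diag(1.0 / dx ** 2 + V) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)

def parity(k):
    # +1 for even states, -1 for odd (the grid is symmetric about x = 0)
    return np.sign(psi[:, k] @ psi[::-1, k])

# Ground doublet: psi_0 even, psi_1 odd; k_e is the lowest excited even state.
k_e = next(k for k in range(1, 6) if parity(k) > 0)

dipole_01 = abs(psi[:, 0] @ (x * psi[:, 1]))        # opposite parity: allowed
dipole_0e = abs(psi[:, 0] @ (x * psi[:, k_e]))      # equal parity: forbidden
quad_0e = abs(psi[:, 0] @ (x ** 2 * psi[:, k_e]))   # equal parity: allowed
```

The dipole matrix element between equal-parity states vanishes (to numerical precision), while the quadrupole element between them does not; this is the single-particle version of the selection rules invoked for schemes (a) and (b).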
Further investigation of this idea of time-dependent manipulation of the two-particle state could prove a useful extension to the present study. In particular, a combination of time-dependent excitation processes and the adiabatic variation of control parameters, $g_{1D}$ and $\kappa$, should permit an impressive degree of control over the two-particle state, within this system.
\[sect:summary\] Summary
========================
The system of two interacting particles in a prototypical double-well potential of the form $V \left( x \right) = A \left[ x^{4} - \kappa x^{2} \right]$ has been considered. Using a cartesian DVR, the eigenspectrum for this system has been studied, and the four lowest eigenstates have been obtained and investigated for varying barrier height and interaction strength. For each state the two-particle eigenfunction, the momentum distribution and the von Neumann entropy have been examined. It was found that the ground state for this double-well system exhibits behavior that closely resembles that observed in a previous study of the $\delta$-split trap potential [@murA07]. In particular, the ground-state wavefunction is suppressed along the lines $x_{1} = 0$ and $x_{2} = 0$ as the barrier height is increased, leading to a quadrant separation of the wavefunction. In the presence of repulsive interactions ($g_{1D} > 0$) only the contributions in the off-diagonal quadrants remain in the insulator limit ($\kappa \rightarrow \infty$). In this limit the ground state of the system is composed of one particle in each half of the double-well. The momentum distributions display an initial narrowing with increasing barrier height, but broaden and develop high-energy wings in the insulator limit. Furthermore, the secondary peaks observed in the momentum distribution for the double well in the non-interacting regime are quickly suppressed in the presence of repulsive interactions. The variation in the von Neumann entropy ($S$) with interaction strength shows behavior remarkably similar to that of the $\delta$-split trap. In all cases $S = 0$ in the absence of interactions and, for $g_{1D} \rightarrow \infty$, $S$ saturates at a value close to unity. Increasing the height of the barrier, in each case, has the effect of making the entropy more sensitive to changes in the interaction strength around $g_{1D} = 0$. Similarly, the behavior of the entropy with varying barrier height exhibits generic features between the two double-well systems. In both cases the ground-state entropy saturates at a value of unity as $\kappa \rightarrow \infty$, regardless of the value of $g_{1D}$. The initial value of $S$ (i.e. the value of $S$ for $\kappa = 0$) is determined by the strength of the interaction, with larger interaction coupling leading to larger initial entropy. As such, the sensitivity of $S$ to $\kappa$ is reduced for double-well systems with larger interaction couplings ($g_{1D}$). This behavior of the ground-state entropy is also illustrated within a Bose-Hubbard model, wherein the controllable parameters are the on-site interaction ($U$) and the tunnelling strength ($J$).
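The Bose-Hubbard behavior of the ground-state entropy can be sketched for the minimal case of two bosons on two sites (an illustrative toy model, not this paper's calculation; the basis $\left|20\right>$, $\left|11\right>$, $\left|02\right>$, the identification of sites with left/right modes, and the choice $U/J = 200$ for the Mott limit are assumptions of this sketch):

```python
import numpy as np

# Two bosons on two sites in the basis (|20>, |11>, |02>):
# hopping -J and on-site pair energy U. The particle-partition entropy of
# the ground state follows from the one-particle reduced density matrix.
def ground_state_entropy(U, J=1.0):
    t = np.sqrt(2) * J
    H = np.array([[U, -t, 0.0],
                  [-t, 0.0, -t],
                  [0.0, -t, U]])
    w, v = np.linalg.eigh(H)
    a, b, c = v[:, 0]  # amplitudes of |20>, |11>, |02>
    # First-quantized form a LL + (b/sqrt(2))(LR + RL) + c RR,
    # written as a matrix M in the (L, R) x (L, R) mode basis.
    M = np.array([[a, b / np.sqrt(2)],
                  [b / np.sqrt(2), c]])
    rho1 = M @ M.conj().T          # one-particle RDM, trace 1
    lam = np.linalg.eigvalsh(rho1)
    lam = lam[lam > 1e-14]
    return -np.sum(lam * np.log2(lam))

S_free = ground_state_entropy(U=0.0)     # non-interacting: S = 0
S_mott = ground_state_entropy(U=200.0)   # U >> J: S -> 1
```

For $U = 0$ the ground state is a product of identical single-particle states and $S = 0$; for $U \gg J$ it approaches $\left|11\right>$, whose one-particle density matrix is maximally mixed over the two modes, so $S \rightarrow 1$, reproducing the saturation discussed above.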
As well as the ground state of this double-well system, some of the properties of the three lowest excited states have also been studied. Two of these states are found to be symmetric whilst one is antisymmetric, and together they constitute the lowest band of the two-particle, double-well system. The antisymmetric state is found to be completely independent of the interaction parameter ($g_{1D}$). However, this state displays a dependence on the barrier height and, in the limit of a high barrier, becomes degenerate with the ground state - corresponding, physically, to the situation of each particle residing in a separate, isolated well. The von Neumann entropy for this antisymmetric state is identically equal to one for all $\kappa$ and $g_{1D}$.
The second- and third-excited states are symmetric. In the insulator limit (provided $g_{1D} > 0$) these states become degenerate and correspond to the physical situation where both particles occupy the same well. Both eigenstates demonstrate momentum distributions that are double-humped, with the double hump giving way to a single peak in the insulator limit. For $g_{1D} = 0$ the third-excited state exhibits secondary peaks in the momentum distribution, similar to the ground state in the non-interacting regime. The entropy of both states increases with $g_{1D}$, with that of the third-excited state showing a more marked variation. As for the ground state, increasing the barrier height $\kappa$ has the effect of increasing the sensitivity of the entropy to variations in $g_{1D}$ (about $g_{1D} = 0$). In the insulator limit the entropy of the third-excited state is found to follow, almost identically, that of the ground state for small $g_{1D}$. As the ground-state entropy saturates at $S \approx 1$, the entropy of the third-excited state continues to increase and, for larger $g_{1D}$, follows, almost identically, that of the second-excited state. Indeed, in the insulator limit and for fixed interaction strength, the second- and third-excited states are found to have the same entropy (as follows from the physical equivalence of these states in this limit). The entropy, in this case, is proposed to have two contributions: (i) the realization of a Bell-type state with both particles co-occupying either the left *or* right well, and (ii) single-well correlations, owing to the repulsive interaction of the two particles occupying the same well.
\[subsect:outlook\] Outlook
---------------------------
The double-well arrangement studied in this work represents a more experimentally realizable system, compared to the $\delta$-split trap previously considered. Having characterized the properties of the ground and lowest-excited eigenstates, the foundation is laid for future investigation into state manipulations using this system. Future avenues may include the time-dependent manipulation of states through shaking of the trap, an oscillating barrier height or introduction of a constant, or oscillating, field gradient. These time-dependent manipulations, along with the adiabatic variation of the control parameters $g_{1D}$ and $\kappa$, should allow for comprehensive state engineering within the lowest band of this two-particle system.
The authors would like to thank John Goold, Thomas Busch and Mauro Paternostro for helpful discussions. DSM would like to acknowledge funding from the Department for Employment and Learning (NI) and the support of the Sorella Trust (NI).
[**The Irreducible Tensor Bases of Exceptional Lie Algebras \[1\]. $G_2$, $F_4$ and $E_6$** ]{}
Dong Ruan${}^{1,2,3}$, Hong-Zhou Sun${}^{1,3}$ and Qi-Zhi Han${}^{4}$
${}^{1}$ Department of Physics, Tsinghua University, Beijing 100084, P.R. China
${}^{2}$ Key Laboratory for Quantum Information and Measurements of MOE, Tsinghua University, Beijing 100084, P.R. China
${}^{3}$ Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou, 730000, P.R. China
${}^{4}$ Department of Physics, Peking University, Beijing 100871, P.R. China
[[**Abstract**]{}. The irreducible tensor bases of the exceptional Lie algebras $G_2$, $F_4$ and $E_6$ are built by grouping their Cartan-Weyl bases according to the respective chains $G_2$ $\supset$ SO(3) $\otimes$ SO(3), $F_4$ $\supset$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) and $E_6$ $\supset$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3). The explicit commutation relations of the irreducible tensor bases of these algebras are also given. ]{}
[**PACS:**]{} 03.65.Fd 03.65.-w 02.20.Sv
Introduction {#1}
============
It is well known that the representation theory of Lie groups is now established as an invaluable tool in modern physics, especially in those fields where symmetry plays an important role, such as atomic, molecular, nuclear, particle and solid-state physics.
The complex semisimple Lie algebras were classified completely by Cartan [@cartan] in his thesis of 1894. Cartan identified four great classes of Lie algebras, often referred to as the classical Lie algebras, $A_n$, $B_n$, $C_n$ and $D_n$, and five exceptional Lie algebras $G_2$, $F_4$, $E_6$, $E_7$ and $E_8$, where the subscripted integers are the ranks of the respective algebras. The traditional representation theory of Lie algebras, developed by Cartan, Weyl [@weyl], Chevalley [@che] and many others, may be found in numerous standard mathematical textbooks (for example, Refs. [@hum; @var]), in which two especially useful kinds of bases appear: the Cartan-Weyl basis and the Chevalley basis.
From the point of view of practical applications to physics, it has proved convenient to have explicit bases for Lie algebras in terms of physical group chains. For example, in order to classify the states of electrons in the atomic $f$-shell according to the chain SO(7) $\supset$ $G_2$ $\supset$ SO(3), Racah [@racah] first showed that it is possible to build tensor bases of SO(7) and $G_2$ by use of the irreducible tensor operators $v^k_q$ of rank $k$ of the three-dimensional rotation group SO(3). We call this kind of tensor-operator realization of a Lie algebra the irreducible tensor basis of the Lie algebra. Later, this idea of Racah was extended to the classical Lie algebras [@judd; @jeugt94], and to the exceptional Lie algebras $F_4$ [@jbm; @jeugt92; @berghe], $E_6$ [@berghe; @bmj] and $E_7$ [@berghe], by considering chains of these algebras ending at an SO(3) algebra, and to $F_4$ [@mbj] and $E_6$ [@bmj] by chains ending at the direct product of two SO(3) algebras, so that the irreducible tensor bases of $F_4$ and $E_6$ are made up of SO(3) $\otimes$ SO(3) irreducible tensor operators $v^{k_1\,k_2}_{\:q_1\,q_2}$ of rank $(k_1 k_2)$. Furthermore, based upon the irreducible tensor bases, the structural zeros of certain $6j$-coefficients of exceptional Lie algebras have been explained in Racah’s spirit. However, the relationship between the irreducible tensor basis and the standard Cartan-Weyl basis has not been revealed, and the explicit commutation relations satisfied by the irreducible tensor bases of these algebras have not been given; both are very important for problems of irreducible representations.
In practice the irreducible tensor bases of Lie algebras may be realized by other approaches. An alternative useful realization is based upon a chain of the algebra $G$ under consideration ending at a direct product of several SO(3) algebras, i.e., $G$ $\supset$ SO(3) $\otimes$ SO(3) $\otimes$ ... $\otimes$ SO(3). Initially, for the classical Lie algebras of rank 2, such as $A_2$, $B_2$, $C_2$ and $D_2$, irreducible tensor bases were built by many authors according to the chains $A_2$ $\supset$ SO(3) [@s1], $B_2$ $\supset$ SO(3) $\otimes$ SO(3) [@he; @kpw; @s2], $C_2$ $\supset$ SO(3) $\otimes$ SO(3) [@ph; @bernards] and $D_2$ $\supset$ SO(3) $\otimes$ SO(3) [@s3], respectively. Later, the irreducible tensor bases of the classical Lie algebras $A_n$, $B_n$, $C_n$ and $D_n$ of arbitrary rank $n$ were obtained systematically from their respective Cartan-Weyl bases by Sun and Han [@sh1], and the explicit commutation relations of the irreducible tensor bases were given as well. This type of irreducible tensor basis of $G$, different from Racah’s type, is made up of mutually commuting scalar operators, mutually commuting angular momentum operators, and multi-fold irreducible tensor operators of half-odd-integral ranks. The purpose of the present paper is to generalize the method applied in [@sh1] to construct the irreducible tensor bases of the five exceptional Lie algebras. Since the exceptional Lie algebras $F_4$, $E_6$, $E_7$ and $E_8$ contain the classical Lie algebras $B_4$, $A_5$, $A_7$ and $D_8$ as subalgebras respectively, constructing their irreducible tensor bases amounts to grouping the remaining generators, corresponding to the extra roots, by the approach used in [@sh1], so as to yield the extra scalar operators, angular momentum operators and multi-fold irreducible tensor operators.
In this paper, only the irreducible tensor bases of $G_2$, $F_4$ and $E_6$ are constructed, whereas those of $E_7$ and $E_8$ will be discussed in a subsequent paper.
This paper is arranged as follows. In Section \[2\], the basic definitions to be employed, such as angular momentum operator, scalar operator and (multi-fold) irreducible tensor operator, and the notation for the Cartan-Weyl basis of a Lie algebra are reviewed briefly. In Sections \[3\]-\[5\], the irreducible tensor bases of $G_2$, $F_4$ and $E_6$ are constructed respectively, and the explicit commutation relations satisfied by these bases are also calculated. A brief conclusion is given in the final section.
Definitions and notation {#2}
==========================
1\) [**Angular momentum operator**]{}
[**J**]{}(1), [**J**]{}(2), ..., [**J**]{}$(n)$ are $n$ mutually commuting angular momentum operators, if they satisfy $$\begin{aligned}
\begin{array}{l}
\left[ J_0(i)\, , \hspace{2mm}J_{\pm 1}(i) \right] = \pm J_{\pm 1}(i), \\
\left[ J_{+1}(i)\, , \hspace{2mm}J_{-1}(i) \right] = - J_0(i); \\
\left[{\bf J}(i)\, , \hspace{2mm} {\bf J}(j) \right] = 0,
\hspace{3mm} i \not= j ,
\end{array}
\label{angular-d}\end{aligned}$$ where $J_{+1}(i)$, $J_0(i)$ and $J_{-1}(i)$ are the spherical components of the $i$th angular momentum operator [**J**]{}$(i)$ or, in mathematical language, the infinitesimal generators of an SO(3) group.
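These relations are easily verified in a concrete representation; the following is a spin-1 sketch, assuming the standard convention $J_{\pm 1} = \mp (J_x \pm i J_y)/\sqrt{2}$, $J_0 = J_z$ (with $\hbar = 1$):

```python
import numpy as np

# Spin-1 matrices in the basis m = 1, 0, -1 and their spherical components.
r2 = np.sqrt(2)
J0 = np.diag([1.0, 0.0, -1.0]).astype(complex)                 # J_z
Jp = np.array([[0, r2, 0], [0, 0, r2], [0, 0, 0]], dtype=complex)  # J_+
Jm = Jp.conj().T                                                   # J_-
Jp1, Jm1 = -Jp / r2, Jm / r2   # spherical components J_{+1}, J_{-1}

def comm(A, B):
    return A @ B - B @ A

# Check Eq. (angular-d) component by component.
ok_plus = np.allclose(comm(J0, Jp1), +Jp1)
ok_minus = np.allclose(comm(J0, Jm1), -Jm1)
ok_pm = np.allclose(comm(Jp1, Jm1), -J0)
```

Note in particular the sign $[J_{+1}, J_{-1}] = -J_0$, which distinguishes the spherical components from the ladder operators $J_\pm$ (for which $[J_+, J_-] = 2J_z$).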
[2) [**Scalar operator** ]{} ]{}
$A(1)$, $A(2)$, ..., $A(n)$ are $n$ mutually commuting scalar operators, if they commute amongst themselves and with all angular momentum operators, i.e., $$\begin{aligned}
\begin{array}{l}
\left[ {\bf J}(i)\, , \hspace{2mm} A(j) \right] = 0 , \\
\left[ A(i)\, , \hspace{2mm} A(j) \right] = 0 .
\end{array}
\label{scalar-d}\end{aligned}$$
[3) [**Irreducible tensor operator**]{} [@ra; @edmonds; @wy; @bl] ]{}
${\bf U}^{r}(i)$ is an irreducible tensor operator of rank $r$ with respect to the $i$th angular momentum operator (or SO(3)), if its $2r+1$ components $U^{r}_{p}(i)$, together with the angular momentum operators, satisfy $$\begin{aligned}
\begin{array}{l}
\left[ J_0(i)\, , \hspace{2mm} U^{r}_{p}(i) \right] = p\,U^{r}_{p}(i), \\
\left[ J_{\pm 1}(i)\, , \hspace{2mm} U^{r}_{p}(i) \right]
= C_{\pm}(r p)\,U^{\;r}_{p\pm 1}(i); \\
\left[ {\bf J}(j)\, , \hspace{2mm} U^{r}_{p}(i) \right] = 0 ,
\hspace{3mm}
j\not= i,
\end{array}
\label{tensr-d}\end{aligned}$$ where $r$ may take nonnegative integer or half-odd-integer values and, for a fixed $r$, the component label $p$ may take $-r$, $-r+1$, ..., $r$, and $$C_{\pm}(r p) = \mp \sqrt{{\frac{1}{2}} (r\mp p)(r\pm p + 1)} .$$
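A simple check of this definition: the spherical components of ${\bf J}$ themselves form a rank-1 irreducible tensor, $U^{1}_{p} = J_{p}$. The following spin-1 sketch (the matrix representation is a choice of the example) verifies all the relations of definition 3), including the coefficients $C_{\pm}(r p)$:

```python
import numpy as np

# The spherical components of J form a rank-1 tensor U^1_p = J_p;
# verify Eq. (tensr-d) for spin 1 (hbar = 1).
r2 = np.sqrt(2)
J0 = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.array([[0, r2, 0], [0, 0, r2], [0, 0, 0]], dtype=complex)
Jp1, Jm1 = -Jp / r2, Jp.conj().T / r2
U = {1: Jp1, 0: J0, -1: Jm1}   # components U^1_p

def C(sign, r, p):
    # C_{+-}(r p) = -+ sqrt((1/2)(r -+ p)(r +- p + 1)), as defined above.
    return -sign * np.sqrt(0.5 * (r - sign * p) * (r + sign * p + 1))

def comm(A, B):
    return A @ B - B @ A

checks = []
for p in (-1, 0, 1):
    checks.append(np.allclose(comm(J0, U[p]), p * U[p]))
    for sign, Jpm in ((+1, Jp1), (-1, Jm1)):
        q = p + sign
        target = C(sign, 1, p) * U[q] if abs(q) <= 1 else 0 * J0
        checks.append(np.allclose(comm(Jpm, U[p]), target))
```

For instance, $[J_{+1}, U^{1}_{-1}] = C_{+}(1\,{-1})\,U^{1}_{0} = -J_0$, consistent with the commutator $[J_{+1}, J_{-1}] = -J_0$ of definition 1).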
[4) [**Multifold irreducible tensor operator**]{} ]{}
${\bf U}^{r_1, ..., r_m }(i_1, ..., i_m)$ is an $m$-fold irreducible tensor operator of rank $(r_1$, ..., $r_m)$ with respect to SO(3) $\otimes$ SO(3) $\otimes$ ... $\otimes$ SO(3) (i.e., the direct product of $m$ SO(3) groups), if its $(2 r_1 +1)$ $\times$ $(2 r_2 +1)$ $\times$...$\times$ $(2r_m + 1)$ components $U^{r_1, ..., r_m}_{\,p_1, ..., p_m}(i_1, ..., i_m)$, together with the angular momentum operators, satisfy the following relations $$\begin{aligned}
\begin{array}{l}
\left[ J_0(i_{\alpha})\, , \hspace{2mm}
U^{r_1 ... r_{\alpha}... r_m }
_{\,p_1 ... p_{\alpha}... p_m}(i_1 ...i_{\alpha}... i_m)
\right]
= p_{\alpha} \,
U^{r_1 ... r_{\alpha}... r_m }
_{\,p_1 ... p_{\alpha}... p_m}(i_1 ...i_{\alpha}... i_m), \\
\left[ J_{\pm 1}(i_{\alpha})\, , \hspace{2mm}
U^{r_1 ... r_{\alpha}... r_m }
_{\,p_1 ... p_{\alpha}... p_m}(i_1 ...i_{\alpha}... i_m)
\right]
= C_{\pm}(r_{\alpha}\,p_{\alpha}) \,
U^{r_1 ... \;\; r_{\alpha}\;\;...\, r_m }
_{\,p_1 ... p_{\alpha}\pm1... p_m}(i_1 ...i_{\alpha}... i_m); \\
\left[ {\bf J}(i_{\beta})\, , \hspace{2mm}
U^{r_1 ... r_m }_{\,p_1 ...\,p_m} (i_1, ..., i_m)
\right] = 0,
\hspace{5mm}
i_{\beta} \not= i_1,\;i_{\beta} \not= i_2,\, ..., \, i_{\beta} \not= i_m,
\end{array}
\label{mtensr-d}\end{aligned}$$ where $$C_{\pm}(r_{\alpha}\, p_{\alpha}) = \mp \sqrt{{\frac{1}{2}} (r_{\alpha} \mp
p_{\alpha}) (r_{\alpha} \pm p_{\alpha} + 1)} .$$ Since an $m$-fold irreducible tensor operator is in fact a direct product of $m$ mutually independent irreducible tensor operators, no restriction exists among the $r_i$’s in the rank $(r_1$, ..., $r_m)$; thus, as in definition (3), any $r_i$ ($i=1$, $2$, ..., $m$) may take nonnegative integer or half-odd-integer values and, for a fixed $r_i$, the corresponding component label $p_i$ may take $-r_i$, $-r_i+1$, ..., $r_i$.
We see from definitions (2)-(4) that only the concept of irreducible tensor operator is fundamental: a scalar operator is a special irreducible tensor operator of rank 0, and the concept of multi-fold irreducible tensor operator is a natural extension of the definition of irreducible tensor operator.
[*Notation*]{}: Let $\{ H_1$, $H_2$, ..., $H_n$; $E_{\pm \alpha}
$, $\alpha \in \sum^+ \}$ be the Cartan-Weyl basis of some exceptional Lie algebra of rank $n$ [@hum; @ra; @wy], where $\sum^+$ is its positive root system. In this paper, we will use the simple notation, for example, when $\alpha$ $=$ $e_i - e_j$, the corresponding generators $E_{\pm (e_i - e_j)}$ are replaced by $E_{\pm (i-j)}$.
The irreducible tensor basis of $G_2$ {#3}
=====================================
In order to see clearly how to construct the irreducible tensor basis from the corresponding Cartan-Weyl basis, let us begin with the simplest exceptional Lie algebra $G_2$.
As is known, $G_2$ has twelve nonnull roots [@ra; @wy] $$e_i - e_j, \hspace{6mm} \pm (e_i + e_j)\mp 2e_k, \hspace{6mm}
1\le i<j <k \le 3,$$ with the normalization constant $ K=\sqrt{24} $.
In terms of the symmetries and identities satisfied by the structure constants [@ra; @wy], we take the structure constants of the Cartan-Weyl basis of $G_2$ as $$N_{61}= N_{64}= N_{42}= N_{15}
= {1\over 2}\,\sqrt{1 \over 2}, \hspace{5mm}
N_{61}= \sqrt{1 \over 6}.$$ Then we may let $$\begin{aligned}
\begin{array}{l}
J_0(1) = 2\,\sqrt3 \, H_1, \\
J_{\pm 1}(1) = \mp \, 2\,\sqrt3 \, E_{\pm 3} ; \\
J_0(2) = 2\, H_2, \\
J_{\pm 1}(2) = \mp \, 2\, E_{\pm 6},
\end{array}
\label{g2-angular}\end{aligned}$$ and put $U^{{1\over 2}{3\over 2}}_{\,p\,q}(12)\equiv
U_{\,p\,q}(12)$ as
$p\backslash q$ ${3\over 2}$ ${1\over 2}$ $-{1\over 2}$ $-{3\over 2}$
----------------------------- ------------------------- ------------------------- -------------------------- -------------------------
$ \hspace{2.2mm}{1\over 2}$ $ 2\,\sqrt3 \, E_{5}$ $ 2\,\sqrt3 \, E_{4} $ $ 2\,\sqrt3 \, E_{2} $ $ 2\,\sqrt3 \, E_{1} $
$ -{1\over 2}$ $ -2\,\sqrt3 \, E_{-1}$ $ 2\,\sqrt3 \, E_{-2} $ $ -2\,\sqrt3 \, E_{-4} $ $ 2\,\sqrt3 \, E_{-5} $
The number of the above operators is $3+3+2\times 4 =14$, which is equal to the order of $G_2$. Hence, these operators form the irreducible tensor basis of $G_2$. The root diagram corresponding to the irreducible tensor basis of $G_2$ is given in Figure 1.
It is easy to find that the irreducible tensor operator ${\bf U}^{{1\over 2}{3\over 2}}(12)$ and its Hermitian conjugate $({\bf U}^{{1\over 2}{3\over 2}}(12))^{\dagger}$ satisfy the following relation $$\begin{aligned}
U_{-p\,-q}(12) = (-)^{p+q} \, U_{p\,q}^{\dagger}(12).\end{aligned}$$
By direct calculations, we can obtain the commutation relations satisfied by the irreducible tensor basis of $G_2$:
1\) [**J**]{}(1) and [**J**]{}(2) are two mutually commuting angular momentum operators, and satisfy the commutation relations (\[angular-d\]).
2\) ${\bf U}^{{1\over 2}{3\over 2}}(12)$ is a 2-fold irreducible tensor operator, hence it, together with [**J**]{}(1) and [**J**]{}(2), satisfies the commutation relations (\[mtensr-d\]).
3\) The nonzero commutation relations between eight components $U_{\,p\,q}(12)$ ($p= -{1\over 2}$, ${1\over 2}$, and $q= -{3\over 2}$, $- {1\over 2}$, ${1\over 2}$, ${3\over 2}$) of ${\bf U}^{{1\over 2}{3\over 2}}(12)$ are given in coupled irreducible tensor forms, which are more compact and symmetric than usual Lie bracket forms, as follows: $$\begin{aligned}
\begin{array}{l}
\left( {\bf U}^{{1\over 2}{3\over 2}}(12) \, {\bf U}^{{1\over 2}{3\over 2}}(12)
\right)^{1\,0}_{\mu\,0}
= \sqrt{9\over 2}\, J_{\mu}(1) , \cr
\left( {\bf U}^{{1\over 2}{3\over 2}}(12) \, {\bf U}^{{1\over 2}{3\over 2}}(12)
\right)^{0\,1}_{0\,\mu}
= \sqrt{5\over 2}\, J_{\mu}(2) , \cr
\mu = -1,\,0,\,1.
\end{array}\end{aligned}$$ Here (and afterwards) we use the definition of the coupled irreducible tensor operator [@fano]. For example, for two 2-fold irreducible tensor operators ${\bf U}^{k_1 l_1}(ij)$ and ${\bf U}^{k_2 l_2}(ij)$, which correspond to the common angular momenta ${\bf J}(i)$ and ${\bf J}(j)$ (i.e., labels $k_1$ and $k_2$ correspond to ${\bf J}(i)$, and labels $l_1$ and $l_2$ to ${\bf J}(j)$), we may couple ${\bf U}^{k_1 l_1}(ij)$ and ${\bf U}^{k_2 l_2}(ij)$ to produce a new 2-fold irreducible tensor operator, denoted $( {\bf U}^{k_1 l_1}(ij) \, {\bf U}^{k_2 l_2}(ij) )^{k\,l}$, whose components are $$\begin{aligned}
\begin{array}{rl}
\left( {\bf U}^{k_1 l_1}(ij) \, {\bf U}^{k_2 l_2}(ij) \right)^{k\,l}_{pq} =
& \sum_{p_1\,p_2\,q_1\,q_2}\: \langle k_1 \, p_1 \, k_2 \, p_2\,
\left| \right. k \, p \rangle \,
\langle l_1 \, q_1 \, l_2 \, q_2\,
\left| \right. l \, q \rangle \cr
{} & \times U^{k_1 l_1}_{\,p_1 \,q_1}(ij) \, U^{k_2 l_2}_{\,p_2 \,q_2}(ij) ,
\end{array}
\label{define}\end{aligned}$$ where the symbol $\langle k_1 \, p_1 \, k_2 \,p_2\, \left| \right. k \, p \rangle$ is the usual Clebsch-Gordan coefficient of SO(3) [@edmonds; @bl]; for given $k_1$ and $k_2$, $k$ may take $|k_1- k_2|$, $|k_1- k_2|+1$, ..., $k_1 + k_2$. In particular, if $k_1 = k_2$ and $k$ takes 0, then $( {\bf U}^{k_1 l_1}(ij) \, {\bf U}^{k_2 l_2}(ij) )^{0\,l}$ is just a (1-fold) irreducible tensor operator of rank $l$. If $k_1 = k_2$ and $l_1 = l_2$, and $k=l=0$, then $( {\bf U}^{k_1 l_1}(ij) \, {\bf U}^{k_2 l_2}(ij) )^{00}$ is just a scalar operator.
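The coupling definition (\[define\]) can be illustrated numerically in its 1-fold analogue: coupling the rank-1 tensor $U^{1}_{p} = J_{p}$ with itself to rank $k = 1$ reproduces ${\bf J}$ up to a factor, $({\bf U}^{1}\,{\bf U}^{1})^{1}_{\mu} = -J_{\mu}/\sqrt{2}$, in direct analogy with the $G_2$ relations above. The sketch below uses SymPy’s Clebsch-Gordan coefficients and a spin-1 matrix representation (both choices of the example):

```python
import numpy as np
from sympy.physics.quantum.cg import CG

# Spherical components of J for spin 1 (hbar = 1), basis m = 1, 0, -1.
r2 = np.sqrt(2)
J0 = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.array([[0, r2, 0], [0, 0, r2], [0, 0, 0]], dtype=complex)
U = {1: -Jp / r2, 0: J0, -1: Jp.conj().T / r2}   # U^1_p = J_p

def coupled(k, mu):
    # 1-fold analogue of Eq. (define): sum of CG-weighted products.
    T = np.zeros((3, 3), dtype=complex)
    for p1 in (-1, 0, 1):
        p2 = mu - p1
        if abs(p2) <= 1:
            cg = float(CG(1, p1, 1, p2, k, mu).doit())
            T += cg * U[p1] @ U[p2]
    return T

# (U^1 U^1)^1_mu = -(1/sqrt(2)) J_mu for every component mu.
ok = all(np.allclose(coupled(1, mu), -U[mu] / r2) for mu in (-1, 0, 1))
```

The antisymmetry of the rank-1 Clebsch-Gordan coefficients turns the sum into a commutator, which is why the coupled operator is proportional to ${\bf J}$ itself, just as the coupled $G_2$ tensors above reproduce ${\bf J}(1)$ and ${\bf J}(2)$.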
It is clear from the above commutation relations that the Cartan generators of $G_2$ in the irreducible-tensor-basis scheme are $\{ J_0(1)$, $J_0(2) \}$.
The irreducible tensor basis of $F_4$ {#4}
=====================================
It is known [@ra; @wy] that $F_4$ contains $B_4$ as a subalgebra; hence the nonnull roots of $F_4$ comprise those of $B_4$ $$\pm e_i, \hspace{6mm} \pm e_i \pm e_j, \hspace{6mm} i<j,
\hspace{6mm} i,\,j=1,\,2,\,3,\,4$$ and the extra roots $${1 \over 2} (\pm e_1 \pm e_2 \pm e_3 \pm e_4).$$
For convenience, let $$\begin{aligned}
\begin{array}{l}
\alpha = {1\over 2}\,\{ (e_1+ e_2) \pm (e_3+ e_4)\}, \\
\beta = {1\over 2}\,\{ (e_1+ e_2) \pm (e_4- e_3)\}, \\
\gamma = {1\over 2}\,\{ (e_2- e_1) \pm (e_3+ e_4)\}, \\
\epsilon = {1\over 2}\,\{ (e_2- e_1) \pm (e_4- e_3)\},
\end{array}\end{aligned}$$ with $$\begin{aligned}
y_1 = \{ (\cdots) + (\cdots)\} , \hspace{5mm}
y_2 = \{ (\cdots) - (\cdots)\} , \end{aligned}$$ where $y$ may take $\alpha$, $\beta$, $\gamma$, $\epsilon$.
The irreducible tensor basis of $B_4$ has been given in Ref. [@sh1]. Thus in terms of the symmetries and identities satisfied by the structure constants [@ra; @wy], we take the structure constants of the Cartan-Weyl basis of $F_4$ as $$N_{ij}= N_{j,\,i-j}= N_{i-k,\,j+k}= N_{i+k,\,j-k}
= N_{j+k,\,i-j}= N_{j-k,\,i-j}= -{1\over K} ,$$ $$i<j <k \leq 4; \hspace{10mm} N_{xy} = {G_{xy}\over K},$$ where $N_{ij}$, $N_{j,\,i-j}$, ..., $N_{j-k,\,i-j}$ are the structure constants of $B_4$ and $G_{xy}$ is given in Table 1.
Now we may let $$\begin{aligned}
\begin{array}{l}
J_0(i^{\prime}) = {K\over 2}(H_{i^{\prime}}
+ H_{i_1^{\prime}}) , \\
J_{\pm 1}(i^{\prime}) = \pm {K\over \sqrt{2}}
E_{\pm(i^{\prime} +i_1^{\prime})}; \\
J_0(i_1^{\prime}) = {K\over 2} (-H_{i^{\prime}}
+H_{i_1^{\prime}}) , \\
J_{\pm 1}(i_1^{\prime}) = \pm {K\over \sqrt{2}}
E_{\pm(-i^{\prime} +i_1^{\prime})} , \\
i^{\prime}=1, 3, \hspace{3mm}
i_1^{\prime}=i^{\prime} + 1,
\end{array}\end{aligned}$$ and put $U_{\;p\:q}^{{1\over 2}{1 \over 2}}(ij) \equiv U_{p\: q}(ij)$ as
$ p \backslash q $ $ {1\over 2} $ $-{1\over 2} $
-------------------- ---------------------------------------- ----------------------------------------
$ {1\over 2} $ $-\frac{K}{\sqrt{2}}E_{i_1^{\prime}} $ $\frac{K}{\sqrt{2}}E_{i^{\prime}} $
$ -{1\over 2}$ $\frac{K}{\sqrt{2}}E_{-i^{\prime}} $ $\frac{K}{\sqrt{2}}E_{-i_1^{\prime}} $
for $i\,j = i^{\prime}\, i_1^{\prime}$, or
$ p \backslash q $ $ {1\over 2} $ $-{1\over 2} $
-------------------- ------------------------------- -------------------------------
$ {1\over 2}$ $-\frac{K}{\sqrt{2}}E_{y_1} $ $\frac{K}{\sqrt{2}}E_{y_2} $
$ -{1\over 2}$ $\frac{K}{\sqrt{2}}E_{-y_2} $ $\frac{K}{\sqrt{2}}E_{-y_1} $
for $i\,j \not= i^{\prime}\, i_1^{\prime}$, where $$\begin{aligned}
i\, j = \left\{
\begin{array}{lll}
1\,3 , & \mbox{when} \hspace{2mm} y= \alpha; \cr
1\,4 , & \mbox{when} \hspace{2mm} y= \beta; \cr
2\,3 , & \mbox{when} \hspace{2mm} y= \gamma; \cr
2\,4 , & \mbox{when} \hspace{2mm} y= \epsilon,
\end{array}
\right.\end{aligned}$$ and put $U_{\;p\:q\:p'\:q'}^{{1\over 2}{1 \over 2}\,{1 \over 2}\:{1 \over 2}}(1234)
\equiv U_{p\: q\: p'\: q'}(1234)$ as
$p\,q \backslash p' \, q' $ ${1\over 2}\; {1\over 2} $ ${1\over 2}\; -{1\over 2} $ $-{1\over 2}\; {1\over 2} $ $-{1\over 2}\; -{1\over 2}$
--------------------------------------------------- -------------------------------- -------------------------------- -------------------------------- --------------------------------
${1\over 2}\hspace{5mm}{1\over 2} \hspace{5mm} $ $-\frac{K}{\sqrt{2}}E_{2+4} $ $ \frac{K}{\sqrt{2}}E_{2+3} $ $-\frac{K}{\sqrt{2}}E_{2-3} $ $-\frac{K}{\sqrt{2}}E_{2-4} $
${1\over 2}\; -\!{1\over 2} \hspace{5mm} $ $ \frac{K}{\sqrt{2}}E_{1+4} $ $-\frac{K}{\sqrt{2}}E_{1+3} $ $ \frac{K}{\sqrt{2}}E_{1-3} $ $ \frac{K}{\sqrt{2}}E_{1-4} $
$-{1\over 2}\hspace{5mm}{1\over 2} \hspace{5mm} $ $ \frac{K}{\sqrt{2}}E_{-1+4} $ $-\frac{K}{\sqrt{2}}E_{-1+3} $ $ \frac{K}{\sqrt{2}}E_{-1-3} $ $ \frac{K}{\sqrt{2}}E_{-1-4} $
$-{1\over 2}\;-\!{1\over 2} \hspace{5mm} $ $ \frac{K}{\sqrt{2}}E_{-2+4} $ $-\frac{K}{\sqrt{2}}E_{-2+3} $ $ \frac{K}{\sqrt{2}}E_{-2-3} $ $ \frac{K}{\sqrt{2}}E_{-2-4} $
It is easy to get $$\begin{aligned}
U_{-p\,-q}(ij) = (-)^{p+q}\, U_{p\,q}^{\dagger}(ij),\end{aligned}$$ $$\begin{aligned}
U_{-p\,-q\,-p'\,-q'}(1234) =
(-)^{1+p+q+p'+q'}\, U_{p\,q\,p'\,q'}^{\dagger}(1234).\end{aligned}$$
The number of the above operators is $ 3\times 4 + 4\times 2 + 4\times 4 + 16 = 52$, which is equal to the order of $F_4$. Hence, these operators form the irreducible tensor basis of $F_4$.
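The root-space counting behind this tally can be checked independently. The short Python sketch below (our own verification aid, not part of the derivation) enumerates the nonnull roots of $F_4$, namely the $B_4$ roots $\pm e_i \pm e_j$ ($i<j$) and $\pm e_i$ together with the 16 extra roots $\frac{1}{2}(\pm e_1 \pm e_2 \pm e_3 \pm e_4)$, and adds the rank for the Cartan generators.

```python
from itertools import combinations, product

# Nonnull roots of F_4: the B_4 roots +-e_i +- e_j (i < j) and +-e_i,
# plus the 16 extra roots (+-e_1 +- e_2 +- e_3 +- e_4)/2.
roots = set()
for i, j in combinations(range(4), 2):          # long roots +-e_i +- e_j
    for si, sj in product((1.0, -1.0), repeat=2):
        v = [0.0, 0.0, 0.0, 0.0]
        v[i], v[j] = si, sj
        roots.add(tuple(v))
for i in range(4):                              # short roots +-e_i
    for s in (1.0, -1.0):
        v = [0.0, 0.0, 0.0, 0.0]
        v[i] = s
        roots.add(tuple(v))
for signs in product((0.5, -0.5), repeat=4):    # half-sum roots
    roots.add(signs)

rank = 4  # Cartan generators H_1, ..., H_4
print(len(roots), len(roots) + rank)  # 48 52
```

The 48 nonnull roots plus the 4 Cartan generators reproduce the dimension 52 quoted in the text.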
By direct calculations, we can obtain the commutation relations satisfied by the irreducible tensor basis of $F_4$:
1\) [**J**]{}(1), [**J**]{}(2), [**J**]{}(3) and [**J**]{}(4) are the mutually commuting angular momentum operators, and satisfy the commutation relations (\[angular-d\]).
2\) ${\bf U}^{{1\over 2}{1\over 2}}(ij)$ ($i=1$, 2; $j=3$, 4) are the 2-fold irreducible tensor operators, hence they, together with ${\bf J}(i)$ and ${\bf J}(j)$, satisfy the commutation relations (\[mtensr-d\]).
3\) ${\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(1234)$ is a 4-fold irreducible tensor operator, hence it, together with [**J**]{}(1), [**J**]{}(2), [**J**]{}(3) and [**J**]{}(4), satisfies the commutation relations (\[mtensr-d\]).
4\) The nonzero commutation relations satisfied by the components of ${\bf U}^{{1\over 2}{1\over 2}}(ij)$ and ${\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)$ read: $$\begin{aligned}
\begin{array}{l}
[\, U_{p\,q}(12),\; U_{p'\,q'}(34)\, ] = \sqrt{1 \over 2}\,
U_{p\,q\,p'\,q'}(1234), \cr
[\, U_{p\,q}(13),\; U_{p'\,q'}(24)\, ] = - \sqrt{1 \over 2}\,
U_{p\,p'\,q\,q'}(1234), \cr
[\, U_{p\,q}(14),\; U_{p'\,q'}(23)\,] = - \sqrt{1 \over 2}\,
U_{p\,p'\,q'\,q}(1234); \cr
\left( {\bf U}^{{1\over 2}{1\over 2}}(ij)\, {\bf U}^{{1\over 2}{1\over 2}}(ij)
\right)^{10}_{\mu 0} = -{1\over 2} J_{\mu}(i), \\
\left( {\bf U}^{{1\over 2}{1\over 2}}(ij)\, {\bf U}^{{1\over 2}{1\over 2}}(ij)\
\right)^{01}_{0\mu } = -{1\over 2} J_{\mu}(j), \hspace{5mm}
\mu = -1,\,0,\,1, \\
\left\{ {\bf U}^{{1\over 2}{1\over 2}}(ij) \, {\bf U}^{{1\over 2}{1\over 2}}(jk)
\right\}^{{1\over 2}0{1\over 2}}_{\,p\:0\:q}
= (-)^{x+1} \sqrt{1\over 2}\,
U^{{1\over 2}{1\over 2}}_{\,p\,q}(ik), \\
x= \mbox{min}(i,j,k); \\
\left[ {\bf U}^{{1\over 2}{1\over 2}}(ij) \,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right] ^{00{1\over 2}{1\over 2}}_{00\,p\,q}
= -\sqrt{2}\, U^{{1\over 2}{1\over 2}}_{\,p\,q}(kl), \\
\left[ {\bf U}^{{1\over 2}{1\over 2}}(ik) \,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right] ^{0{1\over 2}0{1\over 2}}_{0\,p\,0q}
= (-)^{\, i+1} \sqrt{2}\,
U^{{1\over 2}{1\over 2}}_{\,p\,q}(jl); \\
\left( {\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl) \,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right)^{1000}_{\mu 000} = - J_{\mu}(i), \\
\left( {\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)\,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right)^{0100}_{0\mu 00} = - J_{\mu}(j), \\
\left( {\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)\,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right)^{0010}_{00\mu 0} = - J_{\mu}(k), \\
\left( {\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)\,
{\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(ijkl)
\right)^{0001}_{000\mu} = - J_{\mu}(l).
\end{array}
\label{f4-cr}\end{aligned}$$ Here we have used the following two convenient notations $$\begin{aligned}
\begin{array}{c}
\left\{ {\bf X}^{\eta_1} \, {\bf Y}^{\eta_2} \right\}^{\eta}_{\zeta}
\equiv
\left( {\bf X}^{\eta_1} {\bf Y}^{\eta_2} \right)^{\eta}_{\zeta}
+ \left( {\bf Y}^{\eta_2} {\bf X}^{\eta_1} \right)^{\eta}_{\zeta}, \cr
\left[ {\bf X}^{\eta_1} \, {\bf Y}^{\eta_2} \right]^{\eta}_{\zeta}
\equiv
\left( {\bf X}^{\eta_1} {\bf Y}^{\eta_2} \right)^{\eta}_{\zeta}
- \left({\bf Y}^{\eta_2} {\bf X}^{\eta_1} \right)^{\eta}_{\zeta} ,
\end{array}\end{aligned}$$ where $\left( {\bf X}^{\eta_1} {\bf Y}^{\eta_2} \right)^{\eta}_{\zeta}$ is a coupled irreducible tensor operator of rank $\eta$ (see Eq. (\[define\])) built from two irreducible tensor operators ${\bf X}$ of rank $\eta_1$ and ${\bf Y}$ of rank $\eta_2$. We note that the first three commutation relations in Eq. (\[f4-cr\]) are written in the usual Lie bracket form since the two irreducible tensor operators in the Lie brackets do not correspond to a common angular momentum.
We can find easily from the above commutation relations that the Cartan generators of $F_4$ in the irreducible tensor basis are $\{ J_0(1)$, $J_0(2)$, $J_0(3)$, $J_0(4) \}$.
The irreducible tensor basis of $E_6$ {#5}
=====================================
It is known [@ra; @wy] that $E_6$ contains $A_5$ as a subalgebra, hence, all nonnull roots of $E_6$ include those of $A_5$ $$e_i - e_j, \hspace{6mm} i\not= j, \hspace{6mm} i,\,j= 1,\,2,...,\, 6,$$ and the extra roots $$\pm \sqrt{2}\, e_7, \hspace{5mm}
{1 \over 2} (\pm e_1 \pm e_2 \pm e_3 \pm e_4 \pm e_5 \pm e_6)
\pm {1\over \sqrt{2}}\, e_7 ,$$ where three positive signs and three negative signs are taken in the above parentheses. The normalization constant of the root vectors is $K=\sqrt{144}$.
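The root count can again be verified numerically. The Python sketch below (an illustration we add, not part of the text) enumerates the 30 roots of $A_5$, the pair $\pm\sqrt{2}\,e_7$, and the 40 half-sum roots with three plus and three minus signs; all 72 roots have squared length 2 (as expected for the simply laced $E_6$), and adding the rank 6 gives the dimension 78.

```python
from itertools import combinations

s2 = 2 ** 0.5
roots = set()
# A_5 roots e_i - e_j (i != j), embedded in the 7 coordinates used here.
for i in range(6):
    for j in range(6):
        if i != j:
            v = [0.0] * 7
            v[i], v[j] = 1.0, -1.0
            roots.add(tuple(v))
# The pair +- sqrt(2) e_7.
for s in (1.0, -1.0):
    v = [0.0] * 7
    v[6] = s * s2
    roots.add(tuple(v))
# Half-sum roots: three + and three - among e_1..e_6, times +- e_7/sqrt(2).
for plus in combinations(range(6), 3):
    for s7 in (1.0, -1.0):
        v = [0.5 if i in plus else -0.5 for i in range(6)] + [s7 / s2]
        roots.add(tuple(v))

rank = 6
norms = {round(sum(x * x for x in r), 9) for r in roots}
print(len(roots), norms, len(roots) + rank)  # 72 {2.0} 78
```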
For convenience, let $$\begin{aligned}
\begin{array}{l}
\alpha = {1\over 2}\, \{ (e_1+ e_2- e_3- e_4) \pm (e_6- e_5)
\pm \sqrt{2}\, e_7 \}, \\
\beta =
{1\over 2}\,\{ (e_1+ e_2- e_5- e_6) \pm (e_4- e_3)
\pm \sqrt{2}\, e_7 \}, \\
\epsilon =
{1\over 2}\,\{ (e_3+ e_4- e_5- e_6) \pm (e_2- e_1)
\pm \sqrt{2}\, e_7 \}, \\
\lambda =
{1\over 2}\,\{ \pm (e_2- e_1)\pm (e_4- e_3) \pm (e_6- e_5)
\pm \sqrt{2}\, e_7 \}, \\
\end{array}\end{aligned}$$ with $$\begin{aligned}
\begin{array}{rl}
y_1 = \{ (\cdots) + (\cdots) + (\cdots)\} , &
y_2 = \{ (\cdots) + (\cdots) - (\cdots)\} , \\
y_3 = \{ (\cdots) - (\cdots) + (\cdots)\} , &
y_4 = \{ (\cdots) - (\cdots) - (\cdots)\} ; \\
x_1 = \{ + (\cdots)+ (\cdots)+ (\cdots)+ (\cdots)\} , &
x_2 = \{ + (\cdots)+ (\cdots)+ (\cdots)- (\cdots)\} , \\
x_3 = \{ + (\cdots)+ (\cdots)- (\cdots)+ (\cdots)\} , &
x_4 = \{ + (\cdots)+ (\cdots)- (\cdots)- (\cdots)\} , \\
x_5 = \{ + (\cdots)- (\cdots)+ (\cdots)+ (\cdots)\} , &
x_6 = \{ + (\cdots)- (\cdots)+ (\cdots)- (\cdots)\} , \\
x_7 = \{ + (\cdots)- (\cdots)- (\cdots)+ (\cdots)\} , &
x_8 = \{ + (\cdots)- (\cdots)- (\cdots)- (\cdots)\} ,
\end{array}\end{aligned}$$ where $ y = \alpha, \, \beta, \, \epsilon $ and $ x= \lambda $.
The irreducible tensor basis of $A_5$ has been given in Ref. [@sh1]. Thus in terms of the symmetries and identities satisfied by the structure constants [@ra; @wy], we take the structure constants of the Cartan-Weyl basis of $E_6$ as $$N_{i-j,\,j-k}= {1\over K} , \hspace{6mm}
i<j <k \leq 6 ; \hspace{6mm} N_{xy} = {S_{xy}\over K},$$ where $N_{i-j,\,j-k}$ is the structure constant of $A_5$ and $S_{xy}$ is given in Table 2.
Now we may let $$\begin{aligned}
\begin{array}{l}
J_0(i_1) = {K\over 2}\,(-H_i + H_{i_1}) , \\
J_{\pm 1}(i_1) = \pm {K\over \sqrt{2}} \,
E_{\pm(-i + i_1)}, \\
i_1 =i + 1, \hspace{6mm} i =1,\, 3,\,5; \\
A(i)= K(H_i+ H_{i_1}) , \\
\sum\limits_{i}A(i) =0; \\
J_0(8) = {K\over \sqrt{2}}\, H_7 , \\
J_{\pm 1}(8) = \pm {K\over \sqrt{2}} \,E_{\pm 7} ,
\end{array}\end{aligned}$$ where $E_{\pm 7}$ are the generators corresponding to the roots $\pm \sqrt{2}\,e_{{}_{7}}$, and put $V^{\hspace{1mm} {1\over 2} \hspace{3.5mm} {1\over 2}}_{1\,p\,-1\,q}
$ $(i\,i_1\,j\,j_1)$ $\equiv$ $V_{\,p\,q}(i_1\,j_1)$ and $W^{\hspace{3mm} {1\over 2} \hspace{2.25mm} {1\over 2}}_{-1\,p\,1\,q}
(i\,i_1\,j\,j_1)$ $\equiv$ $W_{\,p\,q}(i_1\,j_1)$ as
$ p \backslash q $ ${1\over 2}$ $-{1\over 2}$
------------------------ ------------------------------ ------------------------------------- -------------------------------------
$V_{\,p\,q}(i_1\,j_1)$ $ {1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{+i_1-\!j} $ $-\frac{K}{\sqrt{2}}E_{+i_1-\!j_1}$
$ -{1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{+i-\!j} $ $ \frac{K}{\sqrt{2}}E_{+i-\!j_1} $
$W_{\,p\,q}(i_1\,j_1)$ $ {1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{-i+\!j_1} $ $-\frac{K}{\sqrt{2}}E_{-i+\!j} $
$ -{1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{-i_1+\!j_1}$ $-\frac{K}{\sqrt{2}}E_{-i_1+\!j} $
where $i,\,j=1,\,3,\,5$, $i<j$, $i_1=i+1$, $j_1=j+1$;
$V_{\,p\,q}(1638)$ and $W_{\,p\,q}(1638)$ as
$ p \backslash q $ ${1\over 2}$ $-{1\over 2}$
-------------------- ------------------------------ -------------------------------------- ---------------------------------------
$V_{\,p\,q}(1638)$ $ {1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{\alpha_1} $ $-\frac{K}{\sqrt{2}}E_{\alpha_2} $
$ -{1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{\alpha_3} $ $ \frac{K}{\sqrt{2}}E_{\alpha_4} $
$W_{\,p\,q}(1638)$ $ {1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{-\alpha_4} $ $-\frac{K}{\sqrt{2}}E_{-\alpha_3} $
$ -{1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{-\alpha_2} $ $ -\frac{K}{\sqrt{2}}E_{-\alpha_1} $
$V_{\,p\,q}(1458)$ and $W_{\,p\,q}(1458)$ as
$ p \backslash q $ ${1\over 2}$ $-{1\over 2}$
-------------------- ----------------------------- ------------------------------------ ------------------------------------
$V_{\,p\,q}(1458)$ $ {1\over 2} \hspace{3mm} $ $ \frac{K}{\sqrt{2}}E_{\beta_1} $ $ \frac{K}{\sqrt{2}}E_{\beta_2} $
$-{1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{\beta_3} $ $-\frac{K}{\sqrt{2}}E_{\beta_4} $
$W_{\,p\,q}(1458)$ $ {1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{-\beta_4} $ $ \frac{K}{\sqrt{2}}E_{-\beta_3} $
$-{1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{-\beta_2} $ $ \frac{K}{\sqrt{2}}E_{-\beta_1} $
$V_{\,p\,q}(3258)$ and $W_{\,p\,q}(3258)$ as
$ p \backslash q $ ${1\over 2}$ $-{1\over 2}$
-------------------- ----------------------------- -------------------------------------- --------------------------------------
$V_{\,p\,q}(3258)$ $ {1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{\epsilon_1} $ $ \frac{K}{\sqrt{2}}E_{\epsilon_3} $
$-{1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{\epsilon_2} $ $ \frac{K}{\sqrt{2}}E_{\epsilon_4} $
$W_{\,p\,q}(3258)$ $ {1\over 2} \hspace{3mm} $ $\frac{K}{\sqrt{2}}E_{-\epsilon_4}$ $ \frac{K}{\sqrt{2}}E_{-\epsilon_2}$
$-{1\over 2} \hspace{3mm} $ $-\frac{K}{\sqrt{2}}E_{-\epsilon_3}$ $-\frac{K}{\sqrt{2}}E_{-\epsilon_1}$
$U_{\;p\:q\:p'\:q'}^{{1\over 2}{1 \over 2}\,{1 \over 2}\:{1 \over 2}}
(2468) \equiv U_{p\: q\: p'\: q'}(2468)$ as
------------------------------------------------------------- --------------------------------------- --------------------------------------- --------------------------------------- ---------------------------------------
$ p\,q \backslash p' \, q' $ ${1\over 2}\; {1\over 2}$ ${1\over 2}\; -{1\over 2}$ $-{1\over 2}\; {1\over 2}$ $-{1\over 2}\; -{1\over 2}$
\[1.5mm\] ${1\over 2}\hspace{5mm}{1\over 2} \hspace{4mm} $ $-\frac{K}{\sqrt{2}}E_{\lambda_1} $ $ \frac{K}{\sqrt{2}}E_{\lambda_2} $ $-\frac{K}{\sqrt{2}}E_{\lambda_3} $ $-\frac{K}{\sqrt{2}}E_{\lambda_4} $
${1\over 2}\; -\!{1\over 2} \hspace{4mm} $ $ \frac{K}{\sqrt{2}}E_{\lambda_5}$ $-\frac{K}{\sqrt{2}}E_{\lambda_6} $ $ \frac{K}{\sqrt{2}}E_{\lambda_7} $ $ \frac{K}{\sqrt{2}}E_{\lambda_8} $
$-{1\over 2}\hspace{5mm}{1\over 2} \hspace{4mm} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_8} $ $-\frac{K}{\sqrt{2}}E_{-\lambda_7} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_6} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_5} $
$-{1\over 2}\;-\!{1\over 2} \hspace{4mm} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_4} $ $-\frac{K}{\sqrt{2}}E_{-\lambda_3} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_2} $ $ \frac{K}{\sqrt{2}}E_{-\lambda_1} $
------------------------------------------------------------- --------------------------------------- --------------------------------------- --------------------------------------- ---------------------------------------
The number of the above operators is $ 3\times 3 + 2 + 3 + 6\times 8 + 16 = 78$, which is equal to the order of $E_6$. Hence, these operators form the irreducible tensor basis of $E_6$.
It is not difficult to find $$\begin{aligned}
V_{\,-p\,-q} = (-)^{1+p+q} \, W_{\,p\,q}^{\dagger},\end{aligned}$$ $$\begin{aligned}
W_{\,-p\,-q} = (-)^{1+p+q} \, V_{\,p\,q}^{\dagger};\end{aligned}$$ $$\begin{aligned}
U_{\,-p\,-q\,-p'\,-q'}
= (-)^{1+p+q+p'+q'}\, U_{\,p\,q\,p'\,q'}^{\dagger}.\end{aligned}$$
By direct calculations, we can obtain the commutation relations satisfied by the irreducible tensor basis of $E_6$:
1\) [**J**]{}(2), [**J**]{}(4), [**J**]{}$(6)$ and [**J**]{}$(8)$ are the mutually commuting angular momentum operators, and satisfy commutation relations (\[angular-d\]).
2\) $A(1)$, $A(3)$ and $A(5)$ (only two of them are independent) are the mutually commuting scalar operators, hence they, together with ${\bf J}(1)$, ${\bf J}(3)$ and ${\bf J}(5)$, satisfy commutation relations (\[scalar-d\]).
3\) Both ${\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ and ${\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ are the 2-fold irreducible tensor operators, hence they, together with [**J**]{}$(j_1)$ and [**J**]{}$(l_1)$, satisfy commutation relations (\[mtensr-d\]).
4\) The nonzero commutation relations satisfied by sixty-four components of ${\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ and ${\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ and two scalar operators are $$\begin{aligned}
\begin{array}{l}
[ A(i), \, V_{\,p\,q}(i\,j_1\,k\,l_1)]
= V_{\,p\,q}(i\,j_1\,k\,l_1), \cr
[ A(i), \, W_{\,p\,q}(i\,j_1\,k\,l_1)]
= - W_{\,p\,q}(i\,j_1\,k\,l_1), \cr
[ A(k), \, V_{\,p\,q}(i\,j_1\,k\,l_1)]
= -V_{\,p\,q}(i\,j_1\,k\,l_1), \cr
[ A(k), \, W_{p\,q}(i\,j_1\,k\,l_1)]
= W_{\,p\,q}(i\,j_1\,k\,l_1); \cr
\left[ {\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1) \,
{\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)
\right]^{\:\;1\;\;0}_{0\mu 00} = J_{\mu}(j_1), \cr
\left[ {\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)\,
{\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)
\right]^{\;\;0\;\:1}_{000\mu } = J_{\mu }(l_1) , \cr
\mu =-1,\,0,\,1; \cr
\left[ {\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)\,
{\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)
\right] ^{\;\:0\;\:0}_{0000} = {1\over 2} \, \left[ A(i)- A(k) \right] ; \cr
\left\{ {\bf V}^{{1\over 2}{1\over 2}}(i_1\,j_1) \,
{\bf V}^{{1\over 2}{1\over 2}}(j_1\,k_1)
\right\}^{\hspace{2.2mm}{1 \over 2} \hspace{2mm}0 \hspace{5mm}{1 \over 2}}
_{\,1\,p\,0\,0\,-1\,q}
= V^{{1\over 2}{1\over 2}}_{\,p\,q}(i_1\, k_1) , \cr
\left\{ {\bf W}^{{1\over 2}{1\over 2}}(i_1\, j_1) \,
{\bf W}^{{1\over 2}{1\over 2}}(j_1\, k_1)
\right\}^{\hspace{4.2mm}{1 \over 2} \hspace{2.5mm} 0 \hspace{2.2mm}{1 \over 2}}
_{\,-1\,p\,0\,0\,1\,q}
= W^{{1\over 2}{1\over 2}}_{\,p\,q}(i_1\,k_1) .
\end{array}
\label{12}\end{aligned}$$ We can conclude from the first four equations in Eq. (\[12\]) that ${\bf V}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ raises the eigenvalue of $A(i)$ and lowers that of $A(k)$ by $1$, while ${\bf W}^{{1\over 2}{1\over 2}}(i\,j_1\,k\,l_1)$ lowers the eigenvalue of $A(i)$ and raises that of $A(k)$ by $1$.
5\) ${\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(2468)$ is a 4-fold irreducible tensor operator, hence it, together with [**J**]{}(2), [**J**]{}(4), [**J**]{}$(6)$ and [**J**]{}$(8)$, satisfies the commutation relations (\[mtensr-d\]). The nonzero commutation relations between the components of ${\bf U}^{{1\over 2}{1\over 2}{1\over 2}{1\over 2}}(2468)$ are the last four equations in Eq. (\[f4-cr\]).
6\) The other nonzero commutation relations satisfied by these irreducible tensor operators are [$$\begin{aligned}
\begin{array}{ll}
[V(1638),\; W(1458)]= -\sqrt{1 \over 2}\, W(3456), &
[W(1638),\; V(1458)]= +\sqrt{1 \over 2}\, V(3456), \cr
[V(3456),\; W(1638)]= -\sqrt{1 \over 2}\, W(1458), &
[W(3456),\; V(1638)]= +\sqrt{1 \over 2}\, V(1458), \cr
[V(3456),\; W(1458)]= +\sqrt{1 \over 2}\, W(1638), &
[W(3456),\; V(1458)]= -\sqrt{1 \over 2}\, V(1638); \cr
[V(1638),\; W(3258)]= +\sqrt{1 \over 2}\, W(1256), &
[W(1638),\; V(3258)]= -\sqrt{1 \over 2}\, V(1256), \cr
[V(1256),\; W(1638)]= +\sqrt{1 \over 2}\, W(3258), &
[W(1256),\; V(1638)]= -\sqrt{1 \over 2}\, V(3258), \cr
[V(1256),\; W(3258)]= -\sqrt{1 \over 2}\, W(1458), &
[W(1256),\; V(3258)]= +\sqrt{1 \over 2}\, V(1458); \cr
[V(1458),\; W(3258)]= -\sqrt{1 \over 2}\, W(1234), &
[W(1458),\; V(3258)]= +\sqrt{1 \over 2}\, V(1234), \cr
[V(1234),\; W(1458)]= -\sqrt{1 \over 2}\, W(3258), &
[W(1234),\; V(1458)]= +\sqrt{1 \over 2}\, V(3258), \cr
[V(1234),\; W(3258)]= +\sqrt{1 \over 2}\, W(1458), &
[W(1234),\; V(3258)]= -\sqrt{1 \over 2}\, V(1458); \cr
[V(1234),\; W(1638)]= +\sqrt{1 \over 2}\, U(2468), &
[W(1234),\; V(1638)]= -\sqrt{1 \over 2}\, U(2468), \cr
[V(1234),\; U(2468)]= -\sqrt{1 \over 2}\, V(1638), &
[W(1234),\; U(2468)]= +\sqrt{1 \over 2}\, W(1638), \cr
[V(1638),\; U(2468)]= +\sqrt{1 \over 2}\, V(1234), &
[W(1638),\; U(2468)]= -\sqrt{1 \over 2}\, W(1234); \cr
[V(1256),\; W(1458)]= +\sqrt{1 \over 2}\, U(2468), &
[W(1256),\; V(1458)]= -\sqrt{1 \over 2}\, U(2468), \cr
[V(1256),\; U(2468)]= -\sqrt{1 \over 2}\, V(1458), &
[W(1256),\; U(2468)]= +\sqrt{1 \over 2}\, W(1458), \cr
[V(1458),\; U(2468)]= +\sqrt{1 \over 2}\, V(1256), &
[W(1458),\; U(2468)]= -\sqrt{1 \over 2}\, W(1256); \cr
[V(3456),\; W(3258)]= +\sqrt{1 \over 2}\, U(2468), &
[W(3456),\; V(3258)]= -\sqrt{1 \over 2}\, U(2468), \cr
[V(3456),\; U(2468)]= -\sqrt{1 \over 2}\, V(3258), &
[W(3456),\; U(2468)]= +\sqrt{1 \over 2}\, W(3258), \cr
[V(3258),\; U(2468)]= +\sqrt{1 \over 2}\, V(3456), &
[W(3258),\; U(2468)]= -\sqrt{1 \over 2}\, W(3456).
\end{array}
\label{e6-cr2}\end{aligned}$$ ]{} In Eq. (\[e6-cr2\]), we have utilized the simple expressions, for example, $$[ V(1638),\; W(1458) ]= -\sqrt{1 \over 2}\, W(3456)$$ means $$\left[ V^{{1\over 2}{1\over 2}}_{\,q\,q'}(1638), \;
W^{{1\over 2}\hspace{2mm} {1\over 2}}_{\,p\,-q'}(1458) \right]
= -(2q')\sqrt{1 \over 2}\, W^{{1\over 2}{1\over 2}}_{\,p\,q}(3456),$$ and so forth.
We can also find from the above commutation relations that the Cartan generators of $E_6$ in the irreducible tensor basis are $\{ A(1)$, $A(3)$, $J_0(2)$, $J_0(4)$, $J_0(6)$, $J_0(8) \}$.
Conclusions
===========
In this paper, we have obtained the irreducible tensor bases of the exceptional Lie algebras $G_2$, $F_4$ and $E_6$ by grouping their Cartan-Weyl bases according to the respective chains $G_2$ $\supset$ SO(3) $\otimes$ SO(3), $F_4$ $\supset$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) and $E_6$ $\supset$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3) $\otimes$ SO(3). The irreducible tensor basis of $G_2$ is made up of two mutually commuting angular momentum operators and one 2-fold irreducible tensor operator of rank (${1\over 2}{3\over 2}$). The irreducible tensor basis of $F_4$ is made up of four mutually commuting angular momentum operators, six 2-fold irreducible tensor operators of rank (${1\over 2}{1\over 2}$) and one 4-fold irreducible tensor operator of rank (${1\over 2}{1\over 2}{1\over 2}{1\over 2}$). The irreducible tensor basis of $E_6$ is made up of two independent, mutually commuting scalar operators, four mutually commuting angular momentum operators, twelve 2-fold irreducible tensor operators of rank (${1\over 2}{1\over 2}$) and one 4-fold irreducible tensor operator of rank (${1\over 2}{1\over 2}{1\over 2}{1\over 2}$). We remind the reader, however, that in constructing the angular momentum operators and the (multi-fold) irreducible tensor operators of the irreducible tensor basis, the structure constants of the Cartan-Weyl basis cannot be chosen arbitrarily, even though they obey the usual symmetries and identities. \[In the irreducible tensor bases of Racah's type for some exceptional Lie algebras, one or two free parameters exist [@jbm; @jeugt92; @berghe; @bmj; @mbj].\] The explicit commutation relations satisfied by the irreducible tensor bases of $G_2$, $F_4$ and $E_6$ have also been given. Thus, by means of the method used in Refs. 
[@sh2; @hs2; @hls; @sr1; @shzr] (especially the Wigner-Eckart theorem [@edmonds; @wy; @bl; @wigner]), the problem of the irreducible representations of the exceptional Lie algebras may be solved; this is under study. The irreducible tensor bases of the exceptional Lie algebras $E_7$ and $E_8$ will be discussed in a subsequent paper by considering suitable chains.
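The component bookkeeping summarized in the conclusions can be tallied mechanically: an $n$-fold irreducible tensor of ranks $(r_1,\dots,r_n)$ has $\prod_i (2r_i+1)$ components. The Python sketch below (an added illustration, not part of the paper) recovers the dimensions 14, 52 and 78 of $G_2$, $F_4$ and $E_6$ from the stated operator content.

```python
from math import prod

def components(*ranks):
    """Component count of an n-fold irreducible tensor of the given ranks."""
    return prod(int(2 * r + 1) for r in ranks)

# G_2: two angular momenta + one (1/2, 3/2) tensor.
dim_G2 = 2 * components(1) + components(0.5, 1.5)
# F_4: four angular momenta + six (1/2, 1/2) tensors + one (1/2)^4 tensor.
dim_F4 = 4 * components(1) + 6 * components(0.5, 0.5) + components(0.5, 0.5, 0.5, 0.5)
# E_6: two scalars + four angular momenta + twelve (1/2, 1/2) tensors
#      + one (1/2)^4 tensor.
dim_E6 = 2 + 4 * components(1) + 12 * components(0.5, 0.5) + components(0.5, 0.5, 0.5, 0.5)
print(dim_G2, dim_F4, dim_E6)  # 14 52 78
```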
Acknowledgments {#acknowledgments .unnumbered}
===============
This project was supported by the National Natural Science Foundation of China (19905005), the Major State Basic Research Development Programs (G2000077400 and G2000077604) and the Tsinghua Natural Science Foundation (985 Program).
[99]{} Cartan E 1894 “Sur la Structure des Groupes de Transformation Finis et Continus”, Thesis (Paris: Nony). Weyl H 1939 [*The Classical Groups*]{} (New York: Princeton University). Chevalley C 1946 [*Theory of Lie Groups*]{} (New York: Princeton University). Humphreys J E 1972 [*Introduction to Lie Algebras and Representation Theory*]{} (New York: Springer-Verlag). Varadarajan V S 1984 [*Lie Groups, Lie Algebras, and Their Representations*]{} (New York: Springer-Verlag) Racah G 1942 [*Phys. Rev.*]{} [**61**]{} 186; [**62**]{} 438 Judd B R 1963 [*Operator Techniques in Atomic Spectroscopy*]{} (New York: McGraw-Hill) Van der Jeugt J 1994 [*J. Math. Phys.*]{} [**35**]{} 4383 Van der Jeugt J, Van Berghe G and De Meyer H 1983 [*J. Phys.*]{} [**A16**]{} 1377 Van der Jeugt J 1992 [*J. Math. Phys.*]{} [**39**]{} 2417 Berghe G V 1994 [*J. Math. Phys.*]{} [**35**]{} 508 Van Berghe G, De Meyer H and Van der Jeugt J 1984 [*J. Math. Phys.*]{} [**25**]{} 2585 De Meyer H, Van Berghe G and Van der Jeugt J 1984 [*J. Math. Phys.*]{} [**25**]{} 751 Bremner M R, Moody R V and Patera J 1985 [*Tables of Dominant Weight Multiplicities for Representations of Simple Lie Algebras*]{} (New York: Dekker) Elliott J P 1958 [*Proc. Roy. Soc.*]{} (London) [**A245**]{} 128 Hecht K T 1965 [*Nucl. Phys.*]{} [**A102**]{} 177 Kemmer N, Pursey D L and Williams S A 1968 [*J. Math. Phys.*]{} [**9**]{} 1224 Sun H Z 1980 [*Phys. Energ. Fort. et Phys. Nucl.*]{} [**4**]{} 265 Peterson D R and Hecht K T 1980 [*Nucl. Phys.*]{} [**A344**]{} 361 Bernards E De S 1999 [*J. Phys.*]{} [**A32**]{} 6295 Sun H Z 1980 [*Phys. Energ. Fort. et Phys. Nucl.*]{} [**4**]{} 137 Sun H Z and Han Q Z 1980 [*Phys. Energ. Fort. et Phys. Nucl.*]{} [**4**]{} 588 Racah G 1951 [*Group Theory and Spectroscopy*]{} (New York: Princeton University) Edmonds A R 1957 [*Angular Momentum in Quantum Mechanics*]{} (New York: Princeton University) Wybourne B G 1970 [*Symmetry Principles in Atomic Spectroscopy*]{} (New York: Wiley) Biedenharn L C and Louck J D 1981 [*Angular Momentum in Quantum Physics*]{} (Massachusetts: Addison-Wesley) Fano U and Racah G 1959 [*Irreducible Tensorial Sets*]{} (New York: Academic) Sun H Z and Han Q Z 1981 [*Scien. Sinica.*]{} [**24**]{} 914 Han Q Z and Sun H Z 1983 [*Commu. Theor. Phys.*]{} [**2**]{} 1137 Han Q Z, Liu F S and Sun H Z 1984 [*Commu. Theor. Phys.*]{} [**3**]{} 529 Sun H Z and Ruan D 1998 [*J. Math. Phys.*]{} [**39**]{} 630 Sun H Z, Han Q Z, Zhang M and Ruan D 1998 [*Commu. Theor. Phys.*]{} [**30**]{} 541 Wigner E P 1965 “On the matrices which reduce the Kronecker products of representations of simply reducible groups" in [*Quantum Theory of Angular Momentum*]{} edited by Biedenharn L C and Louck J D (New York: Academic)
[**Captions**]{}
TABLE 1 $G_{xy}$ of $F_4$
TABLE 2 $S_{xy}$ of $E_6$
FIGURE 1 The root diagram corresponding to the irreducible tensor basis of $G_2$
[**TABLE 1**]{} $G_{xy}$ of $F_4$\
----------------------------------------------------------------- ------------------------------------------------------------- --------------------------------------------------------------- ------------------------------------------------------------------
$ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline
x \backslash y & \alpha_2 & -\alpha_2 \\ \hline x \backslash y & \beta_2 & -\beta_2 \\ \hline x \backslash y & \gamma_2 & -\gamma_2 \\ \hline x \backslash y & \epsilon_2 & -\epsilon_2 \\ \hline
\alpha_1 & +1 & +1 \\ \hline \beta_1 & +1 & +1 \\ \hline \gamma_1 & +1 & +1 \\ \hline \epsilon_1 & +1 & +1 \\ \hline
\end{array} \end{array} $ \end{array} $ \end{array} $
$
----------------------------------------------------------------- ------------------------------------------------------------- --------------------------------------------------------------- ------------------------------------------------------------------
---------------------------------------------------------------------- ----------------------------------------------------------------------- ----------------------------------------------------------------------
$ \begin{array}{rcc} \hline $ \begin{array}{rcc} \hline $ \begin{array}{rcc} \hline
x \backslash y x \backslash y x \backslash y
& -\beta_2 & -\beta_1 \\ \hline & -\epsilon_2 & -\epsilon_1 \\ \hline & \gamma_2 & -\gamma_1 \\ \hline
\alpha_1 & +\sqrt{1\over 2} & -\sqrt{1\over 2} \\ \gamma_1 & -\sqrt{1\over 2} & +\sqrt{1\over 2} \\ \alpha_1 & +\sqrt{1\over 2} & -\sqrt{1\over 2} \\
\alpha_2 & +\sqrt{1\over 2} & +\sqrt{1\over 2} \\ \hline \gamma_2 & -\sqrt{1\over 2} & -\sqrt{1\over 2} \\ \hline -\alpha_2 & +\sqrt{1\over 2} & +\sqrt{1\over 2} \\ \hline
\end{array} $ \end{array} $ \end{array} $
---------------------------------------------------------------------- ----------------------------------------------------------------------- ----------------------------------------------------------------------
----------------------------------------------------------------------- ------------------------------------------------------------------------------------- -------------------------------------------------------------------------------
$ \begin{array}{rcccc} \hline $ \begin{array}{rcccc} \hline $ \begin{array}{rcccc} \hline
x \backslash y x \backslash y x \backslash y
& \epsilon_2 & -\epsilon_1 \\ \hline & \epsilon_1 & -\epsilon_1 & \epsilon_2 & -\epsilon_2 \\ \hline & \gamma_1 & -\gamma_1 & \gamma_2 & -\gamma_2 \\ \hline
\beta_1 & +\sqrt{1\over 2} & -\sqrt{1\over 2} \\ \alpha_1 & +1 & -1 & +1 & +1 \\ \beta_1 & +1 & +1 & -1 & +1 \\
-\beta_2 & +\sqrt{1\over 2} & +\sqrt{1\over 2} \\ \hline -\alpha_2 & +1 & +1 & +1 & -1 \\ -\beta_1 & +1 & -1 & -1 & -1 \\
\end{array} $ \hline \hline
\end{array} $ \end{array} $
----------------------------------------------------------------------- ------------------------------------------------------------------------------------- -------------------------------------------------------------------------------
[**TABLE 2**]{} $S_{xy}$ of $E_6$\
------------------------------------------------------------- ------------------------------------------------------------- ------------------------------------------------------------- -----------------------------------------------------------
$ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline
x \backslash y & -\alpha_3 & -\alpha_2 \\ \hline x \backslash y & -\alpha_3 & -\alpha_1 \\ \hline x \backslash y & -\alpha_4 & -\alpha_2 \\ \hline x \backslash y & -\beta_3 & -\beta_2 \\ \hline
\alpha_1 & +1 & -1 \\ \beta_1 & +1 & +1 \\ \beta_2 & +1 & +1 \\ \beta_1 & +1 & -1 \\
\alpha_4 & -1 & +1 \\ \hline \beta_3 & +1 & +1 \\ \hline \beta_4 & +1 & +1 \\ \hline \beta_4 & -1 & +1 \\ \hline
\end{array} $ \end{array} $ \end{array} $ \end{array} $
------------------------------------------------------------- ------------------------------------------------------------- ------------------------------------------------------------- -----------------------------------------------------------
----------------------------------------------------------------- --------------------------------------------------------------- -------------------------------------------------------------- --------------------------------------------------------------
$ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline $ \begin{array}{rll} \hline
x \backslash y & -\epsilon_3 & -\epsilon_2 \\ \hline x \backslash y & -\lambda_3 & -\lambda_2 \\ \hline x \backslash y & \lambda_8 & -\lambda_5 \\ \hline x \backslash y & \lambda_7 & -\lambda_6 \\ \hline
\epsilon_1 & +1 & -1 \\ \lambda_1 & -1 & +1 \\ \lambda_1 & +1 & +1 \\ \lambda_2 & +1 & +1 \\
\epsilon_4 & -1 & +1 \\ \hline \lambda_4 & +1 & -1 \\ \hline -\lambda_4 & -1 & -1 \\ \hline -\lambda_3 & -1 & -1 \\ \hline
\end{array} $ \end{array} $ \end{array} $ \end{array} $
----------------------------------------------------------------- --------------------------------------------------------------- -------------------------------------------------------------- --------------------------------------------------------------
--------------------------------------------------------------- ------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
$ \begin{array}{rll} \hline $ \begin{array}{rllll} \hline $ \begin{array}{rllll} \hline
x \backslash y & -\lambda_7 & -\lambda_6 \\ \hline x \backslash y x \backslash y
\lambda_5 & -1 & +1 \\ & -\lambda_5 & -\lambda_1 & \lambda_4 & \lambda_8 \\ \hline & -\lambda_6 & -\lambda_2 & \lambda_3 & \lambda_7 \\ \hline
\lambda_8 & -1 & +1 \\ \hline \alpha_1 & -1 & -1 & -1 & +1 \\ \alpha_2 & +1 & +1 & +1 & -1 \\
\end{array} $ -\alpha_4 & +1 & -1 & -1 & -1 \\ \hline -\alpha_3 & +1 & -1 & -1 & -1 \\ \hline
\end{array} $ \end{array} $
--------------------------------------------------------------- ------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
---------------------------------------------------------------------------------- ----------------------------------------------------------------------------------
$ \begin{array}{rllll} \hline $ \begin{array}{rllll} \hline
x \backslash y x \backslash y
& -\lambda_3 & -\lambda_1 & \lambda_6 & \lambda_8 \\ \hline & -\lambda_4 & -\lambda_2 & \lambda_5 & \lambda_7 \\ \hline
\beta_1 & -1 & +1 & +1 & -1 \\ \beta_2 & -1 & -1 & +1 & +1 \\
-\beta_4 & +1 & +1 & +1 & +1 \\ \hline -\beta_3 & -1 & +1 & -1 & +1 \\ \hline
\end{array} $ \end{array} $
---------------------------------------------------------------------------------- ----------------------------------------------------------------------------------
---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------
$ \begin{array}{rllll} \hline $ \begin{array}{rllll} \hline
x \backslash y & \alpha_2 & \alpha_4 & -\beta_3 & -\beta_1 \\ \hline x \backslash y
\epsilon_1 & -1 & +1 & +1 & -1 \\ & \alpha_1 & \alpha_3 & -\beta_4 & -\beta_2 \\ \hline
\epsilon_3 & -1 & +1 & -1 & +1 \\ \hline \epsilon_2 & +1 & -1 & +1 & -1 \\
\end{array} $ \epsilon_4 & +1 & -1 & -1 & +1 \\ \hline
\end{array} $
---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------
[1.1]{}[1]{}
---------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
$ \begin{array}{rllll} \hline $ \begin{array}{rllll} \hline
x \backslash y x \backslash y
& -\lambda_7 & -\lambda_5 & -\lambda_3 & -\lambda_1 & -\lambda_8 & -\lambda_6 & \lambda_4 & \lambda_2 \\ \hline
\\ \hline \epsilon_2 & -1 & -1 & +1 & +1 \\
\epsilon_1 & -1 & +1 & +1 & -1 \\ -\epsilon_3 & +1 & -1 & +1 & -1 \\ \hline
-\epsilon_4 & -1 & -1 & -1 & -1 \\ \hline \end{array} $
\end{array} $
---------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------
|
---
abstract: 'We experimentally demonstrate phase-insensitive linear optical amplification which preserves the idler at the output. Since our amplification operation is unitary up to small excess noise, it is reversible beyond the classical limit. The entanglement between the two output modes is the resource for the reversibility. The amplification gain of $G=2.0$ is demonstrated. In addition, combining this amplifier with a beamsplitter, we also demonstrate approximate cloning of coherent states where an anticlone is present. We investigate the reversibility by reconstructing the initial state from the output correlations, and the results are slightly beyond the cloning limit. Furthermore, full characterization of the amplifier and cloner is given by using coherent states with several different mean values as inputs. Our amplifier is based on linear optics, offline preparation of nonclassical ancillas, and homodyne measurements followed by feedforward. Squeezed states are used as the ancillas, and nonlinear optical effects are exploited only for their generation. The ancillas introduce nonclassicality into the amplifying operation, making entanglement at the output.'
author:
- 'Jun-ichi Yoshikawa'
- Yoshichika Miwa
- Radim Filip
- Akira Furusawa
title: 'Demonstration of reversible phase-insensitive optical amplifier'
---
Introduction
============
Quantum optics is governed by rules imposed by commutation relations, which have to be preserved during time evolution. Optical amplification is no exception. Typically, the amplified output suffers from inevitable excess noise. This limitation is imposed quantum mechanically and thus does not depend on the specific realization method. Caves classified general linear amplification into phase-insensitive amplification (PIA) and phase-sensitive amplification (PSA) [@Caves(1982):PRD]. He also systematically derived the quantum limit of excess noise for such general linear amplification with arbitrary gain from the requirement that commutation relations be preserved. This excess noise originates from quantum fluctuations in the auxiliary system required for energy conservation.
We concentrate on PIA, taking the target of amplification to be the optical wave amplitude of a single mode, which we denote by the term “signal”. The classical counterpart of PIA is the conversion of an arbitrary complex wave amplitude $\alpha\in{\mathbb{C}}$ into $\sqrt{G}\alpha$, where $G\ge1$ is the gain of amplification. As found in ordinary textbooks, annihilation operators in quantum optics correspond to complex amplitudes in classical optics. Therefore, we describe the amplifying process by the transformation of annihilation operators. PIA that is quantum-mechanically optimal, in the sense that the excess noise is minimized, can be achieved by the following transformation [@Caves(1982):PRD]: $$\begin{aligned}
\hat{a}_\text{sig}^\text{out}=
\sqrt{G}\,\hat{a}_\text{sig}^\text{in}+e^{i\theta}\sqrt{G-1}\,(\hat{a}_\text{idl}^\text{in})^\dagger,
\label{eq:PiaSingle}\end{aligned}$$ where $\hat{a}_\text{sig}^\text{in}$ and $\hat{a}_\text{sig}^\text{out}$ are the signal mode’s annihilation operators before and after the amplification, respectively. The extra term $e^{i\theta}\sqrt{G-1}\,(\hat{a}_\text{idl}^\text{in})^\dagger$ is introduced in order to satisfy the commutation relation $[\hat{a}_\text{sig},\hat{a}_\text{sig}^\dagger]=1$ for both the input and output signal modes. Here, $\theta\in{\mathbb{R}}$ is an arbitrary phase factor, and $\hat{a}_\text{idl}^\text{in}$ is the annihilation operator of another mode in the auxiliary system. Throughout this paper, the ancilla mode represented by $\hat{a}_\text{idl}^\text{in}$ is denoted by the term “idler” and distinguished from other ancilla modes. Eq. \[eq:PiaSingle\] becomes the input-output relation of optimal PIA when the idler is in a vacuum state. The quantum fluctuation of the idler contaminates the amplified signal; this is the inevitable excess noise of PIA. Note that this penalty prevents amplification from being a loophole in the uncertainty relation for joint measurements [@Authurs(1965):BSTJ; @Authurs(1988):PRL]. In the high-gain limit, it leads to the famous $3$ dB cost in the noise figure for PIA of coherent states. In addition to this intrinsic excess noise, further nonintrinsic excess noise may be introduced by other ancilla modes in nonoptimal PIA.
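As an illustrative numerical sketch (our addition, not part of the original analysis), the intrinsic penalty can be quantified by the noise figure of PIA acting on a coherent state with a vacuum idler. We use the convention $\hat{a}=\hat{x}+i\hat{p}$ adopted later in this paper, for which the vacuum quadrature variance is $1/4$:

```python
import math

def noise_figure(G):
    """SNR_in / SNR_out for PIA of a coherent state (vacuum idler).

    Convention: a = x + i p, vacuum quadrature variance 1/4.
    Output quadrature: x_out = sqrt(G) x_sig + sqrt(G-1) x_idl,
    so Var(x_out) = G/4 + (G-1)/4 while the mean grows by sqrt(G).
    """
    var_in = 0.25
    var_out = G * 0.25 + (G - 1) * 0.25
    # SNR is mean-square over variance; the mean-square grows by G.
    return (1.0 / var_in) / (G / var_out)

for G in (1.0, 2.0, 10.0, 1e6):
    print(G, 10 * math.log10(noise_figure(G)))
```

The noise figure rises from $0$ dB at $G=1$ toward the $3$ dB limit as $G\to\infty$, in agreement with the statement above.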
There are numerous practical realizations of optical amplification. Doped fiber amplifiers (DFAs) and semiconductor optical amplifiers (SOAs) utilize stimulated emission [@Shimoda(1957):JPSJ], while Raman amplifiers (RAs) and optical parametric amplifiers (OPAs) utilize nonlinear optical effects. In principle, there is no quantum-mechanical reason preventing these realizations from achieving the optimal PIA in the form of Eq. \[eq:PiaSingle\]. However, real devices with current technology are accompanied by further excess noise.
Recently, PIA operating almost at the optimal level was experimentally demonstrated by Josse *et al.* by utilizing feedforward [@Josse(2006):PRL]. The high efficiency of Josse’s PIA stems from the fact that it requires neither inefficient nonclassical operations nor nonclassical ancillas. It uses as its ancilla a vacuum state, which is available everywhere, together with linear optics and homodyne measurements followed by feedforward, all of which are highly efficient.
Although Josse’s PIA is a good attainment, it is not the end of the story. The signal transformation in Eq. \[eq:PiaSingle\] is an irreversible thermalizing process. Complete PIA should have a *unitary* realization on an expanded Hilbert space. In order to unitarize PIA, a two-mode description is sufficient. The full input-output relation is as follows:
\[eq:PiaUnitary\] $$\begin{aligned}
\hat{a}_\text{sig}^\text{out}= &
\sqrt{G}\,\hat{a}_\text{sig}^\text{in}+e^{i\theta}\sqrt{G-1}\,(\hat{a}_\text{idl}^\text{in})^\dagger,
\label{seq:PiaSignal}\\
\hat{a}_\text{idl}^\text{out}= &
\sqrt{G}\,\hat{a}_\text{idl}^\text{in}+e^{i\theta}\sqrt{G-1}\,(\hat{a}_\text{sig}^\text{in})^\dagger.
\label{seq:PiaIdler}\end{aligned}$$
Note that the roles of the signal and idler are symmetric in this relation.
The significance of unitarization lies in reversibility. The inverse transformation is easily derived once we notice that Eq. \[eq:PiaUnitary\] is equivalent to a two-mode squeezing operation. A two-mode squeezing operation parametrized by $(G,\theta)$ is canceled by another two-mode squeezing operation in which the squeezing direction is opposite, i.e., $(G,\theta+\pi)$. Nonetheless, in many amplification schemes, including Josse’s experimental demonstration [@Josse(2006):PRL], the idler output is lost in the inextractable environment, making the process irreversible.
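As a numerical sanity check (our addition), the two-mode relation can be written as a $4\times4$ matrix acting on the quadratures $(\hat{x}_1,\hat{p}_1,\hat{x}_2,\hat{p}_2)$ for $\theta=0$. Symplecticity confirms that the commutation relations are preserved, and composing with the $(G,\theta+\pi)$ operation recovers the identity:

```python
import numpy as np

def pia_symplectic(G):
    """Quadrature-space matrix of ideal two-mode PIA (theta = 0) on
    (x1, p1, x2, p2):
      x1' = sqrt(G) x1 + sqrt(G-1) x2,  p1' = sqrt(G) p1 - sqrt(G-1) p2,
    and symmetrically for mode 2."""
    g, h = np.sqrt(G), np.sqrt(G - 1)
    return np.array([[g, 0, h, 0],
                     [0, g, 0, -h],
                     [h, 0, g, 0],
                     [0, -h, 0, g]])

G = 2.0
S = pia_symplectic(G)
# Commutation relations are preserved iff S is symplectic: S^T J S = J.
J = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
print(np.allclose(S.T @ J @ S, J))
# Reversal: the same two-mode squeezing with theta -> theta + pi
# (sign of the sqrt(G-1) terms flipped) undoes the amplification.
S_rev = np.array([[S[0, 0], 0, -S[0, 2], 0],
                  [0, S[1, 1], 0, -S[1, 3]],
                  [-S[2, 0], 0, S[2, 2], 0],
                  [0, -S[3, 1], 0, S[3, 3]]])
print(np.allclose(S_rev @ S, np.eye(4)))
```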
In order to realize idler-preserving and close-to-optimal PIA, we require some nonclassicality in the amplifier. This is in contrast to Josse’s idler-nonpreserving PIA, which does not require any nonclassicality. A typical strategy for introducing nonclassicality into feedforward-based quantum circuits is to use nonclassical states as ancillas. Continuous-variable (CV) quantum teleportation [@Furusawa(1998):Science] and CV error correction [@Aoki(2009):NatPhys] are good examples. In these examples, squeezed states are used as ancillas that support performance beyond the classical limit, and the complex operations after the state-preparation stage are efficiently implemented by linear optics.
In this paper, by employing the feedforward-based scheme proposed in Ref. [@Filip(2005):PRA], we demonstrate PIA of coherent states which preserves the idler output. The scheme basically relies on linear optics, including homodyne measurements and feedforward. Squeezed vacuum states are used as ancillas, which inject nonclassicality into our PIA. Only for generating the nonclassical ancilla states do we resort to nonlinear optical effects. Our demonstration is for the amplification gain of $G=2.0$, which is tuned via passive optical devices and feedforward electric circuits. Combining PIA for $G=2.0$ with a half beamsplitter, we also demonstrate $1\to2$ approximate cloning of coherent states, where an “anticlone” remains at the output. (Anticlones will be explained in Sec. \[sec:Clone\].) In principle, our amplifier and cloner become quantum-mechanically optimal in the limit of infinite squeezing of the ancillas. For finite squeezing, as is the real situation in experiments, further excess noise invades in accordance with the level of the squeezing. However, the degradation is small enough to retain nonclassical features. The behaviors of our amplifier and cloner are fully characterized by using several coherent states as inputs. Furthermore, we also pay much attention to the output correlations, because nonclassical properties appear clearly in them. For the PIA experiment, we check the Einstein-Podolsky-Rosen (EPR) correlation between the signal and idler outputs. For the cloning experiment, we check bipartite entanglement between each clone and the anticlone, which as a whole proves tripartite entanglement of class I [@Giedke(2001):PRA]. Moreover, for both experiments, the reversibility is investigated from the output correlations.
Our idler-preserving PIA is significant in several respects. First of all, the reversibility will pave the way to new schemes. Recently, a CV quantum interface has been proposed that in principle enables unit-fidelity state transfer using such reversible PIA [@Radim(2009):PRA]. Moreover, the reversibility in cloning is also advantageous. Cloning of unknown states is distribution of information, and its reversibility reserves the option to recover the distributed fragments of the information. This will be further discussed in Sec. \[sec:Clone\]. Secondly, our PIA would have applications as a two-mode squeezing operation. Note that a one-mode squeezing operation has already been demonstrated successfully with a similar approach in Ref. [@Yoshikawa(2007):PRA].
In this introduction, PIA has been described together with a brief historical review. In particular, we have discussed the nonclassical property of PIA, which is obscured in many amplification processes because the idler output is lost in the inextractable environment. The subsequent contents of this paper are as follows. In Sec. \[sec:FfPia\], feedforward-based PIA is described, explicitly showing the excess noise due to finite squeezing of the ancillas. In Sec. \[sec:Clone\], CV quantum state cloning and its connection with PIA are described. In Sec. \[sec:SetUp\], the experimental setup is described. In Sec. \[sec:ResultsPia\], the experimental results for PIA of coherent states with $G=2.0$ are shown. In Sec. \[sec:ResultsClone\], the experimental results for $1\to2$ approximate cloning of coherent states are shown. In Sec. \[sec:Summary\], our experimental achievements are summarized.
Feedforward-based Amplifier {#sec:FfPia}
===========================
In our definition, feedforward means that the operations after some measurements depend on the measurement outcomes, which in general are obtained randomly. In particular, in this paper it indicates phase-space displacement operations whose amounts are proportional to the results of homodyne measurements.
We know of two specific schemes for feedforward-based PIA that preserve the idler at the output. One scheme was proposed by Filip *et al.* in Ref. [@Filip(2005):PRA], in which PIA is composed of two feedforward-based single-mode squeezers proposed in the same paper. The other was proposed by Josse *et al.* in Ref. [@Josse(2006):PRL] as a modification of their idler-nonpreserving PIA. Note that Josse’s idler-preserving PIA is only a theoretical proposal; the idler-nonpreserving PIA alone was experimentally demonstrated.
Both Filip’s scheme and Josse’s scheme rely on linear optics, including homodyne measurements and feedforward, and require offline-prepared nonclassical states as ancillas. Moreover, in both schemes the gain of amplification is accurately and stably set via the choice of passive optical devices and the corresponding feedforward gains. As for the nonclassical ancillas, Filip’s scheme requires two single-mode squeezed states, whereas Josse’s scheme requires a two-mode squeezed state. Since two single-mode squeezed states can be converted into a two-mode squeezed state, and vice versa, by a half-beamsplitter interaction, the amounts of nonclassical resources required by the two schemes are the same.
For both schemes, the feedforward-based PIA coincides with the quantum-mechanically optimal PIA only in the limit of infinite squeezing of the ancillas. For finite squeezing, excess noise contaminates the output to some extent. Note that this is a common feature of feedforward-based CV deterministic processing [@Furusawa(1998):Science; @Aoki(2009):NatPhys; @Ukai(2010):QPh]. The difference between the two schemes proposed by Filip and Josse solely arises in this excess noise: in Filip’s scheme it appears symmetrically in the signal and idler outputs, whereas in Josse’s scheme it appears only in the idler output. The better choice between the schemes depends on the specific application.
We have chosen the symmetrical one. In the demonstration in Sec. \[sec:ResultsPia\], we confirm the symmetry of PIA by swapping the roles of the signal and idler.
Fig. \[sfig:AmpSchematic\] shows the schematic of our PIA, from which the symmetry between the signal and idler is obvious. Its details will be described in Sec. \[sec:SetUp\]. Here we give the input-output relation. In the following, the quadrature-phase amplitudes of each optical mode are denoted by $\hat{x}$ and $\hat{p}$, which correspond to the real and imaginary parts of the mode’s annihilation operator $\hat{a}$, i.e., $\hat{a}=\hat{x}+i\hat{p}$. The phase factor $\theta$ in Eq. \[eq:PiaUnitary\] can be changed arbitrarily by pre- and post-processing phase rotations of the idler. Therefore, we consider the case of $\theta=0$ without loss of generality. Explicitly showing the excess noise coming from finitely squeezed ancillas, the input-output relation becomes as follows [@Filip(2005):PRA]:
$$\begin{aligned}
\!\!\hat{x}_1^\text{out}\! = &
\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!+\!\sqrt{\!R}\bigr)\hat{x}_1^\text{in}
\!+\!\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!-\!\sqrt{\!R}\bigr)\hat{x}_2^\text{in}
\!-\!\sqrt{\tfrac{1-R}{2}}\hat{x}_\text{A}^\text{out}\!,\! \\
\!\!\hat{p}_1^\text{out}\! = &
\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!+\!\sqrt{\!R}\bigr)\hat{p}_1^\text{in}
\!-\!\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!-\!\sqrt{\!R}\bigr)\hat{p}_2^\text{in}
\!+\!\sqrt{\tfrac{1-R}{2}}\hat{p}_\text{B}^\text{out}\!,\! \\
\!\!\hat{x}_2^\text{out}\! = &
\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!+\!\sqrt{\!R}\bigr)\hat{x}_2^\text{in}
\!+\!\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!-\!\sqrt{\!R}\bigr)\hat{x}_1^\text{in}
\!+\!\sqrt{\tfrac{1-R}{2}}\hat{x}_\text{A}^\text{out}\!,\! \\
\!\!\hat{p}_2^\text{out}\! = &
\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!+\!\sqrt{\!R}\bigr)\hat{p}_2^\text{in}
\!-\!\tfrac{1}{2}\bigl(\tfrac{1}{\sqrt{\!R}}\!-\!\sqrt{\!R}\bigr)\hat{p}_1^\text{in}
\!+\!\sqrt{\tfrac{1-R}{2}}\hat{p}_\text{B}^\text{out}\!.\!\end{aligned}$$
The subscripts ‘1’ and ‘2’ represent the two main modes. They correspond to the signal and idler, though we do not specify which is which because the relation is symmetric. $\hat{x}_\text{A}$ and $\hat{p}_\text{B}$ denote the squeezed quadratures of the two ancilla modes. In the limit of infinite squeezing, these terms vanish, and the transformation above strictly coincides with the optimal PIA. The amplification gain $G$ is determined via a single parameter $R$, the common reflectivity of the two beamsplitters in Fig. \[sfig:AmpSchematic\], through the relation $$\begin{aligned}
G=\tfrac{1}{4}\bigl(\tfrac{1}{\sqrt{R}}+\sqrt{R}\bigr)^2.
\label{eq:Gain}\end{aligned}$$ One-to-one correspondence of $1\le{G}<\infty$ and $0<R\le1$ is easily checked. Note that the feedforward gain is also parameterized by $R$. It is chosen so that the antisqueezed noises from the ancillas are canceled out at the output.
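As an illustrative sketch (our addition), Eq. \[eq:Gain\] can be inverted numerically; $G(R)$ is monotonically decreasing on $0<R\le1$, so a simple bisection recovers the reflectivity for any target gain:

```python
import math

def gain(R):
    # G = (1/sqrt(R) + sqrt(R))**2 / 4, valid for 0 < R <= 1
    return 0.25 * (1 / math.sqrt(R) + math.sqrt(R)) ** 2

assert gain(1.0) == 1.0   # unit reflectivity gives unit gain

def reflectivity(G_target, lo=1e-12, hi=1.0):
    """Invert gain(R) by bisection, using its monotonicity on (0, 1]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gain(mid) > G_target:
            lo = mid     # gain too large -> reflectivity must increase
        else:
            hi = mid
    return 0.5 * (lo + hi)

R2 = reflectivity(2.0)
print(R2)   # ~0.1716, i.e. 3 - 2*sqrt(2)
```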
For the demonstration of $G=2$, the value of $R$ should be chosen as $3-2\sqrt{2}\approx0.17$. The resulting input-output relation is as follows:
\[eq:in-out\_G2\] $$\begin{aligned}
\hat{x}_1^\text{out} = &
\sqrt{2}\,\hat{x}_1^\text{in}
+\hat{x}_2^\text{in}
-\sqrt{\sqrt{2}-1}\,\hat{x}_\text{A}^\text{out},
\\
\hat{p}_1^\text{out} = &
\sqrt{2}\,\hat{p}_1^\text{in}
-\hat{p}_2^\text{in}
+\sqrt{\sqrt{2}-1}\,\hat{p}_\text{B}^\text{out}, \\
\hat{x}_2^\text{out} = &
\sqrt{2}\,\hat{x}_2^\text{in}
+\hat{x}_1^\text{in}
+\sqrt{\sqrt{2}-1}\,\hat{x}_\text{A}^\text{out}, \\
\hat{p}_2^\text{out} = &
\sqrt{2}\,\hat{p}_2^\text{in}
-\hat{p}_1^\text{in}
+\sqrt{\sqrt{2}-1}\,\hat{p}_\text{B}^\text{out}.\end{aligned}$$
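To illustrate the role of the ancilla squeezing, the following numerical sketch (our addition; the squeezing level is given in dB, and the vacuum quadrature variance is $1/4$ in the convention $\hat{a}=\hat{x}+i\hat{p}$) evaluates the variance of $\hat{x}_1^\text{out}$ for vacuum inputs:

```python
import math

def var_x1_out(r_dB):
    """Variance of x1_out in the G = 2 relation for vacuum inputs.

    The ancilla's squeezed quadrature has variance 10**(-r_dB/10) / 4.
    """
    v_vac = 0.25
    v_sq = 0.25 * 10 ** (-r_dB / 10)
    # x1_out = sqrt(2) x1 + x2 - sqrt(sqrt(2)-1) xA  (independent terms)
    return 2 * v_vac + 1 * v_vac + (math.sqrt(2) - 1) * v_sq

print(var_x1_out(0))      # no squeezing: 0.75 + 0.414*0.25 ~ 0.854
print(var_x1_out(6))      # ~6 dB squeezing: close to the ideal value
print(var_x1_out(1000))   # infinite-squeezing limit: 0.75, the optimal PIA
```

The excess-noise term shrinks in proportion to the squeezed-quadrature variance and vanishes in the infinite-squeezing limit, as stated above.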
Quantum State Cloning {#sec:Clone}
=====================
It is known as the no-cloning theorem that an unknown quantum state ${|{\psi}\rangle}$ cannot be perfectly duplicated as ${|{\psi}\rangle}{|{\psi}\rangle}$ [@Wootters(1982):Nature]. However, approximate cloning is possible, and it can in general go beyond the classical limit.
In this section, we discuss CV cloning and its connection with PIA. Furthermore, its reversibility is discussed by introducing the notion of the anticlone. In general, cloning can be described as a unitary operation supported by ancilla systems. The ancilla output system generally depends on the cloned state, and from it the anticlones are obtained. We show the equations for $1\to2$ cloning, which corresponds to the experimental demonstration in Sec. \[sec:ResultsClone\]. However, PIA allows general ${K}\to{L}$ cloning in principle, as described in Appendix \[sec:GenCln\]. Here, the notation ${K}\to{L}$ means that $L$ clones are created from $K$ identical originals.
First, we note that pragmatic cloning for CV is not universal cloning [@Braunstein(2001):PRA] with respect to the infinite-dimensional Hilbert space, because it is an unnatural situation for all the states in the noncompact space to appear with equal probability. In general, the choice of the appropriate cloner depends on how the information is embedded in the infinite-dimensional Hilbert space.
The typical situation is that the CV information is embedded as a displacement on some quantum state ${|{\psi}\rangle}$. Here, ${|{\psi}\rangle}$, which we refer to as a core state, is either known or unknown. Then, the set of possible original states is $S=\{\hat{D}(x_\text{d},p_\text{d}){|{\psi}\rangle}\!\mid\!(x_\text{d},p_\text{d})\in{\mathbb{R}}^2\}$, where $\hat{D}(x_\text{d},p_\text{d})\equiv\exp[-2i(x_\text{d}\hat{p}-p_\text{d}\hat{x})]$ is the displacement operator. \[The probability density $p(x_\text{d},p_\text{d})$ is omitted because we consider for simplicity the case where $(x_\text{d},p_\text{d})$ is uniformly distributed.\] As a special case of this, the set $S$ becomes all coherent states when the core state ${|{\psi}\rangle}$ is known to be a vacuum state. This way of embedding is found in ordinary CV quantum key distribution (QKD) protocols [@Grosshans(2002):PRL].
For such protocols, the role of cloning is distribution of the information rather than duplication of a quantum state. Therefore, the measure of cloning precision should be related to the estimation of $(x_\text{d},p_\text{d})$ instead of the traditional fidelity. Furthermore, asymmetric cloners are as significant as symmetric ones: an arbitrary share ratio of the information is achieved by a cloner with tunable asymmetry.
We suppose a simple picture of cloning where some noise is added to the original state as the penalty of cloning. Then, the quality of cloning is totally determined by this noise. For simplicity, we impose rotational symmetry on the noise added to each clone. This is naturally justified when the core state ${|{\psi}\rangle}$ is either known to be symmetric or unknown. The added noise is characterized by its variance $n_k\equiv({\Delta}x_{\text{cln-}k}^\text{noise})^2+({\Delta}p_{\text{cln-}k}^\text{noise})^2$ [@Fiurasek(2007):PRA], where $k\in\{1,2\}$ for $1\to2$ cloning. Note that $n_k$ corresponds to the mean photon number of thermalization in the clone.
The variances $n_k$ are directly connected to the mean square errors in the estimation of $(x_\text{d},p_\text{d})$. Therefore, a measure can be constructed from them. Given the desired asymmetry, the cost function is determined [@Fiurasek(2007):PRA]: $$\begin{gathered}
C(n_1,n_2)=c_1n_1+c_2n_2.
\label{eq:ClnCostFunc}\end{gathered}$$ The positive parameters $c_k$ determine the asymmetry ($c_1=c_2$ corresponds to symmetric cloning). The cloner that minimizes the cost function is optimal.
It is obvious that the optimal cloner is Gaussian when the cost function is set as a function of the noise variances as in Eq. \[eq:ClnCostFunc\]. In order to minimize it, the ancillas that support cloning are chosen in minimum-uncertainty states, which are Gaussian. This contrasts with evaluation by the fidelity: non-Gaussian cloning can slightly exceed the Gaussian fidelity for coherent states [@Cerf(2005):PRL]. We emphasize that evaluation by the variances is more practical; indeed, it has been pointed out that the optimal attack in QKD is Gaussian [@Grosshans(2004):PRL; @Leverrier(2010):PRA].
There is a restriction on the excess noises $n_k$ which is imposed by quantum mechanics [@Cerf(2000):PRL; @Fiurasek(2001):PRL]: $$\begin{aligned}
n_1n_2\ge(1/2)^2.
\label{eq:ClnNoiseIneq}\end{aligned}$$ The optimal cloner with respect to the cost function in Eq. \[eq:ClnCostFunc\] necessarily satisfies the equality in Eq. \[eq:ClnNoiseIneq\]. This noise penalty comes from consistency with the uncertainty relation: owing to this noise, the attainable information about the original state does not increase through cloning. Recall that the inevitable noise in PIA arises for the same reason. Indeed, the optimal cloner can be constructed from the optimal phase-insensitive amplifier and beamsplitters. For example, $1\to2$ cloning with arbitrary asymmetry is achieved by putting an amplifier in one arm of a Mach-Zehnder interferometer [@Fiurasek(2001):PRL]. In particular, for symmetric cloning the reflectivity of the first beamsplitter becomes unity, i.e., it is achieved by first amplifying the original with $G=2$ and then splitting the amplified signal in half. This procedure can be extended to ${K}\to{L}$ cloning [@Braunstein(2001):PRL; @Fiurasek(2007):PRA]. The optimality of this realization is proven with respect to the cost function in Eq. \[eq:ClnCostFunc\] [@Fiurasek(2007):PRA].
For Gaussian cloning of coherent states, the added noise variance $n_k$ and the fidelity $F_k$ are related by $F_k=1/(1+n_k)$. Using Eq. \[eq:ClnNoiseIneq\], the upper limit of fidelity is obtained for arbitrarily asymmetric Gaussian cloning. In particular, it becomes $F=2/3$ for the symmetric case. This is significantly higher than the classical limit of $F=1/2$, where we regard the limit of state estimation as the classical limit of symmetric cloning because the estimated state is classical information, which can be copied any number of times. Note that the equivalence of state estimation and asymptotic cloning, where the number of clones tends to infinity, has been proven for a general set $S$ of possible original states [@Bae(2006):PRL]. We refer to these fidelities only for consistency with previous works; we stress again that our actual interest is in the variances.
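These numbers can be verified in a few lines. The following sketch (our illustration of the amplifier-plus-beamsplitter construction, with vacuum quadrature variance $1/4$ in the convention $\hat{a}=\hat{x}+i\hat{p}$) recovers $n=1/2$ per clone, the saturated bound, and $F=2/3$:

```python
v = 0.25   # vacuum quadrature variance (a = x + i p)
G = 2.0    # amplifier gain for symmetric 1 -> 2 cloning

# Clone mode: a_cln = a_org + sqrt((G-1)/2) a_idl^dag +- a_vac / sqrt(2)
# (unit gain on the original, since sqrt(G)/sqrt(2) = 1 for G = 2).
var_added = (G - 1) / 2 * v + 0.5 * v      # added variance per quadrature
n = 2 * var_added                          # n_k = (dx)^2 + (dp)^2
print(n)                                   # 0.5 for each clone

# The quantum bound n1*n2 >= (1/2)^2 is saturated by the symmetric cloner:
print(n * n)                               # 0.25

# Fidelity for Gaussian cloning of coherent states: F = 1/(1+n)
print(1 / (1 + n))     # 2/3, vs the classical limit 1/(1+1) = 1/2
```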
We have seen above that the clones are made of the signal output of the amplifier. When cloning is unitarily realized, we still have the idler output, whose state is affected by the original state. Now we pay attention to this ancilla output system.
As mentioned at the beginning of this section, anticlones are byproducts of cloning obtained from the ancilla systems. In particular, for $1\to2$ cloning, the idler output itself is an anticlone. It is an approximation of the phase-conjugated original state, or in other words, the output of an approximate NOT gate. The qubit version of this gate was demonstrated in Ref. [@DeMartini(2002):Nature].
Anticlones are important when we are concerned with reversibility. The originals can in principle be perfectly reproduced only when all the clones and anticlones are present. For the reversibility, the essential resource is nonclassical correlation, i.e., entanglement. Conceptually, the excess noises in cloning are canceled by using the nonclassical correlations. Therefore, we discuss the existence of entanglement in the three-mode output system of $1\to2$ cloning. Clones are obtained by splitting the amplified signal, so there is no entanglement among the clones. However, there is entanglement between each clone and the anticlone. We stress that the resource for the recovery is not the anticlones themselves but the entanglement. Indeed, anticlones without entanglement with the clones can be obtained as follows. Suppose two independent cloners are running, with the same states used as their inputs. Then, the clones obtained from one cloner have no entanglement with the anticlones from the other, and in this case the originals cannot be recovered from the uncorrelated outputs.
For the recovery of the originals, the inverse unitary operation is not required. The optimal cloning can be fully reversed by a Bell measurement on a clone and an anticlone and subsequent feedforward to the remaining single clone [@Filip(2004):PRA]. Note that this recovery scheme works not only for coherent states but also for an arbitrary core state ${|{\psi}\rangle}$. This scheme is efficient from two points of view. One is on a technical level: the homodyne measurements and feedforward displacement operations are quite efficient with current technology. The other is on a conceptual level: the performer of the Bell measurement and the owner of the remaining clone, who wishes to recover the original, can be spatially separated. In this case, they only need classical channels for communication, never quantum channels. Note that even partial reversal is possible with a similar scheme based on local operations and classical communication (LOCC), which converts, e.g., symmetric clones to asymmetric clones [@Filip(2004):PRA].
We would like to discuss practical aspects of cloning and its reversibility assisted by classical communication. As described above, cloning of a quantum state is regarded as distribution of information among plural participants. The information of the original is to some extent accessible to the individual participants. This situation is clearly distinguished from that found in usual quantum error-correcting codes, where the quantum information is mapped onto a larger Hilbert space so that no information about the original is accessible from a localized system. Such sharing of information would play important roles in several scenarios, in which the reversibility would give a tactical aspect to information exchange. For example, cloning is a possible attack by an eavesdropper in QKD. In this example, the reversibility of cloning provides the opportunity for the communicators to negotiate with the eavesdropper once they detect the attack [@Filip(2004):PRA]. Since coherent states are a strong candidate for the information carrier in quantum communication, cloning of coherent states is of especially great significance.
There are several previous experiments demonstrating cloning of coherent states beyond the classical limit of $F=1/2$ in nonreversible ways, i.e., with the anticlones lost in the environment. In Ref. [@Andersen(2005):PRL], using the feedforward-based PIA of Ref. [@Josse(2006):PRL], almost quantum-limited $1\to2$ cloning was demonstrated. In Ref. [@Koike(2006):PRL], telecloning was demonstrated, where the original coherent state is teleported and cloned at the same time.
In Sec. \[sec:ResultsClone\], we demonstrate a $1\to2$ symmetric Gaussian cloner which preserves an anticlone at the output. As shown in Fig. \[sfig:ClnSchematic\], we apply half beamsplitting to the signal output of the feedforward-based PIA with gain $G=2$ described in Sec. \[sec:FfPia\]. In the demonstration, the reversibility is checked from the output correlations. To our knowledge, there is no previous experiment of this kind, even in the qubit regime. Although our demonstration is only for coherent states, our cloner should work equally well for an arbitrary core state ${|{\psi}\rangle}$, as discussed above.
We close this section by giving the input-output relation of optimal $1\to2$ symmetric cloning. By substituting $G=2$ and $\theta=0$ into the input-output relation in Eq. \[eq:PiaUnitary\] and splitting the signal output in half, we obtain
$$\begin{aligned}
\hat{a}_\text{cln-1}= &
\hat{a}_\text{org}+\tfrac{1}{\sqrt{2}}{\hat{a}^\dagger}_\text{idl}+\tfrac{1}{\sqrt{2}}\hat{a}_\text{vac}, \\
\hat{a}_\text{cln-2}= &
\hat{a}_\text{org}+\tfrac{1}{\sqrt{2}}{\hat{a}^\dagger}_\text{idl}-\tfrac{1}{\sqrt{2}}\hat{a}_\text{vac}, \\
\hat{a}_\text{a-cln}= &
{\hat{a}^\dagger}_\text{org}+\sqrt{2}\,\hat{a}_\text{idl},\end{aligned}$$
where the subscripts ‘org’, ‘cln-1’, ‘cln-2’, and ‘a-cln’ denote the original, first clone, second clone, and anticlone, respectively. The subscript ‘idl’ denotes the idler input of PIA, which is in a vacuum state. The annihilation operator with the subscript ‘vac’ indicates another ancilla in a vacuum state, which enters from the empty port of the final half beamsplitter. For the excess noise of the two clones, $n_1=n_2=1/2$ is easily checked. Therefore, this cloner is optimal when evaluated by the cost function in Eq. \[eq:ClnCostFunc\] with $c_1=c_2$. When PIA is realized with a feedforward-based scheme, as in Sec. \[sec:ResultsClone\], further excess noise contaminates the output in accordance with the squeezing levels of the ancillas.
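The consistency of these relations can be checked mechanically. The following sketch (our addition) treats each output as a linear combination of the three input modes and verifies that the outputs obey proper, mutually independent bosonic commutation relations:

```python
import numpy as np

# Each output mode is X = sum_k c[k] a_k + d[k] a_k^dag over the input
# modes (a_org, a_idl, a_vac); we store the coefficient pair (c, d).
def dag(X):
    c, d = X
    return (np.conj(d), np.conj(c))

def comm(X, Y):
    """[X, Y] for linear combinations, using [a_j, a_k^dag] = delta_jk."""
    cX, dX = X
    cY, dY = Y
    return np.dot(cX, dY) - np.dot(dX, cY)

s = 1 / np.sqrt(2)
cln1 = (np.array([1, 0, s]), np.array([0, s, 0]))        # a_org + a_idl^dag/sqrt2 + a_vac/sqrt2
cln2 = (np.array([1, 0, -s]), np.array([0, s, 0]))       # a_org + a_idl^dag/sqrt2 - a_vac/sqrt2
acln = (np.array([0, np.sqrt(2), 0]), np.array([1, 0, 0]))  # a_org^dag + sqrt2 a_idl

# Proper bosonic modes: [X, X^dag] = 1 for each output ...
for X in (cln1, cln2, acln):
    print(comm(X, dag(X)))   # each equals 1
# ... and the three outputs are independent: cross commutators vanish.
print(comm(cln1, dag(cln2)), comm(cln1, dag(acln)), comm(cln2, dag(acln)))
```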
Experimental Setup {#sec:SetUp}
==================
A schematic of the experimental setup for PIA is illustrated in Fig. \[sfig:AmpSchematic\], and that for approximate cloning in Fig. \[sfig:ClnSchematic\]. The light source is a Ti:sapphire laser, which has a continuous-wave single-mode output at $860$ nm in wavelength and about $1.5$ W in power. We treat the quantum states of narrow sidebands located $1.34$ MHz away from the optical carrier frequency.
The two main beams in Fig. \[sfig:AmpSchematic\] carry the quantum states which are the targets of PIA. The setup has the form of a Mach-Zehnder interferometer that holds a single-mode squeezer in each arm. This decomposition of unitary PIA into squeezers and beamsplitters is derived from the bosonic version of the Bloch-Messiah reduction shown in Ref. [@Braunstein(2005):PRA]. We note that this setup is almost the same as that for the quantum nondemolition (QND) interaction demonstrated in Ref. [@Yoshikawa(2008):PRL]. This fact shows the capability of our setup to realize many types of two-mode Gaussian interaction. Combining PIA for $G=2.0$ with another half beamsplitter, as shown in Fig. \[sfig:ClnSchematic\], $1\to2$ approximate cloning of coherent states is achieved.
The single-mode squeezers are feedforward-based squeezers, theoretically proposed in Ref. [@Filip(2005):PRA] and experimentally demonstrated in Ref. [@Yoshikawa(2007):PRA]. Each squeezer consumes an ancilla in a squeezed state, which is generated by an optical parametric oscillator (OPO).
Note that several essential optical elements are omitted from Fig. \[fig:Schematic\], such as a second harmonic generation (SHG) cavity to generate pump beams for OPOs, and three spatial-mode cleaning cavities (MCCs). One MCC is used for local oscillators (LOs) for homodyne measurements and auxiliary beams for feedforward displacements. The other two MCCs are used for individual input beams.
The experimental procedure is divided into three steps: Firstly, we prepare input coherent states and ancilla squeezed vacuum states. Secondly, we implement PIA and cloning via feedforward. Finally, the output states are homodyne measured for verification. In the following, we describe the experimental details of each step.
Preparation
-----------
At this step, we generate coherent states which are used as inputs, and squeezed vacuum states which are used as ancillas.
The nonzero mean values of the sideband coherent states at $1.34$ MHz are produced by appropriately modulating the optical carriers. In our setup, the relative phase of interference at each beamsplitter is fixed by active feedback control. Therefore, in order to make an arbitrary phase space displacement in the input modes, both amplitude modulation (AM) and phase modulation (PM) are utilized. AM and PM make nonzero mean values of $\hat{x}_1^\text{in}$ and $\hat{p}_1^\text{in}$ for the first input mode, and those of $\hat{p}_2^\text{in}$ and $\hat{x}_2^\text{in}$ for the second input mode, respectively. Each of the four electro-optic modulators (EOMs) placed before PIA in Fig. \[sfig:AmpSchematic\] corresponds to one of these four quadratures. On the other hand, in Fig. \[sfig:ClnSchematic\], only two EOMs are depicted before PIA, both located in the first input beam path. Therefore, the symmetry of the two input modes is broken in the cloning experiment. One input mode is the target of cloning, and the other input mode is kept in a vacuum state throughout. For both experiments, the modulations are switched on and off in order to use several coherent states as inputs. After these EOMs, the beams pass through the MCCs, which are omitted from Fig. \[fig:Schematic\].
Squeezed vacuum states are each generated by an OPO driven below threshold. Our OPO has a bow-tie shaped configuration with a round-trip length of about 500 mm. It contains a periodically-poled KTiOPO$_4$ (PPKTP) crystal as a nonlinear optical medium, which is commercially available from Raicol and is $10$ mm long with a $1$ mm by $1$ mm cross section. The experimental details of our OPO squeezing are found in Ref. [@Suzuki(2006):APL]. The squeezing level with a pump power of about $100$ mW is about $-5$ dB relative to the shot noise level at $1.34$ MHz. The pump beams for the OPOs are the second harmonic of a fundamental beam, generated by an SHG cavity. Most of the Ti:sapphire laser output is sent to the SHG cavity, whose output of about $300$ mW is divided into two to pump the individual OPOs. The SHG cavity has almost the same configuration as that of the OPOs, whereas a KNbO$_3$ (KN) crystal is used instead of the PPKTP crystal.
Modulation sidebands other than $1.34$ MHz are exploited for active feedback control of the optical interferences. A modulation at $13.5$ MHz is utilized for locking cavities, including the SHG cavity, the two OPOs, and the three MCCs. On the other hand, lower frequency modulations at $193$ kHz and $333$ kHz are utilized at the OPOs to lock the phases of the pump beams. Furthermore, the two input beams are modulated at $108$ kHz and $154$ kHz. These four low-frequency modulations are used to lock the downstream interferometric system in the subsequent steps, as described later.
Amplifier and Cloner
--------------------
The two input beams are combined at a first half beamsplitter and then sent to the two squeezers. After the squeezing operations, the two beams interfere again at a second half beamsplitter, which completes PIA. By splitting one of the two output beams at yet another half beamsplitter, the $1\to2$ cloner is obtained.
The squeezing procedure goes as follows. First, the main beam is combined at a beamsplitter with an ancilla beam coming from an OPO. Next, one of the two beams after the beamsplitter is homodyne measured. Finally, the measurement outcome is fed forward to the remaining beam. The two beamsplitters used for this purpose have a common reflectivity $R$. This parameter $R$ determines the degree of the feedforward-based squeezing and thus the gain of amplification $G$ with the relation shown in Eq. . As already mentioned, $R\approx0.17$ for our demonstration of $G=2.0$.
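The equation relating $R$ and $G$ is not reproduced in this excerpt. Under one common convention for the feedforward squeezer — each squeezer scales one quadrature by $\sqrt{R}$ and the conjugate by $1/\sqrt{R}$, an assumption on our part — the Bloch-Messiah squeezing parameter $r$, with $\cosh r=\sqrt{G}$, gives $R=e^{-2r}$, which indeed reproduces $R\approx0.17$ for $G=2.0$:

```python
import numpy as np

# Assumption (ours): the feedforward squeezer realizes x -> sqrt(R) x,
# p -> p / sqrt(R), i.e. its squeezing parameter r obeys e^{-r} = sqrt(R).
# Bloch-Messiah for PIA requires cosh(r) = sqrt(G), hence
#   sqrt(G) = (e^r + e^{-r}) / 2 = (1 + R) / (2 sqrt(R)).

def reflectivity_from_gain(G):
    r = np.arccosh(np.sqrt(G))
    return np.exp(-2.0 * r)

def gain_from_reflectivity(R):
    return ((1.0 + R) / (2.0 * np.sqrt(R))) ** 2

R = reflectivity_from_gain(2.0)
print(R)                           # ≈ 0.172, consistent with R ≈ 0.17
print(gain_from_reflectivity(R))   # recovers G = 2.0
```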
The feedforward operation is a phase space displacement whose amount is proportional to the random outcome of the homodyne measurement. The electric signal from the homodyne detector is sent to an EOM to be converted into an optical signal, where the gain and phase at $1.34$ MHz are carefully chosen. The auxiliary beam modulated by the feedforward EOM has a power of $150$ $\mu$W, $1\%$ of which subsequently enters the mainstream via an asymmetric beamsplitter (99:1).
The powers of the two input beams are $10$ $\mu$W, and those of the two ancilla beams are $2$ $\mu$W. These powers are considerably smaller than the $3$ mW of the LOs used for homodyne detection.
Each of the four beamsplitters of PIA is actually composed of two polarization beamsplitters and a half-wave plate, in the same manner as the QND experiment in Ref. [@Yoshikawa(2008):PRL]. Their reflectivities can be varied by rotating the half-wave plates. They enable us to measure the input states as well as the output states with the same homodyne detectors for verification. The propagation losses of the two main beams are measured to be $7\%$ on average, which mostly come from these variable beamsplitters.
In order to control the relative phases at the beamsplitters with active feedback, interferences between the carriers and the low-frequency modulations are monitored. This is typically done by picking up $1\%$ of the beam after the interference, though such details are omitted from Fig. \[fig:Schematic\]. For each locking point, an appropriate modulation sideband is chosen, and the error signal is extracted from the interference between the carrier and the sideband by demodulation. However, two locking points are exceptions, where the interference between the two modulation sidebands at $108$ kHz and $154$ kHz is exploited. The beat frequency of $46$ kHz is chosen as the reference signal of demodulation to obtain the error signals.
Verification
------------
PIA is characterized by measuring two-mode input states as well as two-mode output states using two homodyne detections. In the cloning experiment, on the other hand, three-mode output states are compared with single-mode input states. The input states are measured by setting the reflectivities of the four variable beamsplitters to unity and disabling the feedforward. The quantum efficiency of a homodyne detector is about $99\%$, and the dark noise is about $17$ dB below the optical shot noise produced by the LO. The interference visibilities to the LOs are $98\%$ on average.
The outcomes of the final homodyne measurements are analyzed in either of the two ways below.
In one way of analysis, the quadrature data are treated directly; they are obtained by lock-in detection of the $1.34$ MHz components of the homodyne outputs. A signal from a homodyne detector is mixed with the reference signal at $1.34$ MHz, and then low-pass filtered with a cutoff of $30$ kHz. Subsequently, it is analog-to-digital (A/D) converted for storage with a sampling rate of $300$ kHz and a resolution of $14$ bits (National Instruments Corporation). In this analysis, the phase of the homodyne detection is slowly scanned. The phase information is stored simultaneously with the quadrature values using the same A/D board. From the resulting marginal distributions, phase space distributions (i.e., Wigner functions) are reconstructed, where we assume that all the quantum states obtained in the experiments are Gaussian. The first and second moments are computed so that the likelihood is maximized.
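The moment estimation from phase-scanned lock-in data can be sketched as follows. This is a simplified stand-in for the full Gaussian maximum-likelihood fit: for a phase-insensitive Gaussian state (as the PIA outputs are), a least-squares sinusoid fit gives the first moments and the residual variance gives the second; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic homodyne record: the LO phase is scanned over 0..2pi and the
# quadrature q(phi) = xbar*cos(phi) + pbar*sin(phi) + noise is recorded
# (vacuum standard deviation 0.5; an ideal G=2 output has std 0.5*sqrt(3)).
phi = np.linspace(0.0, 2.0 * np.pi, 40_000)
xbar, pbar, std = 1.2, -0.7, 0.5 * np.sqrt(3)
q = xbar * np.cos(phi) + pbar * np.sin(phi) + rng.normal(0.0, std, phi.size)

# Least-squares sinusoid fit -> first moments; residual variance -> second.
A = np.column_stack([np.cos(phi), np.sin(phi)])
sol, *_ = np.linalg.lstsq(A, q, rcond=None)
mx, mp = sol
var = np.var(q - A @ sol)
print(mx, mp, var)   # ≈ 1.2, -0.7, 0.75
```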
The other way is the power analysis at $1.34$ MHz using a spectrum analyzer. In this analysis, the measured quadratures are set to either $\hat{x}$ or $\hat{p}$. Not only the powers of the output quadratures but also those of their correlations are measured for several input coherent states. The resolution bandwidth is $30$ kHz, the video bandwidth is $300$ Hz, the sweep time is $0.1$ s, and each trace is averaged $20$ times.
Note that the effect of the Hermitian conjugate term in Eq. is easily seen as a mirror image with the former way of analysis, whereas it cannot be seen with the latter.
Experimental Results for Phase-Insensitive Amplifier {#sec:ResultsPia}
====================================================
One of the two main modes is the “signal” and the other is the “idler”, which are initially in a coherent state and a vacuum state, respectively. By swapping the roles of the signal and the idler between the two modes, we check the symmetry of our PIA.
We first show the results of the lock-in detection, because they are intuitively easier to interpret. Figs. \[fig:Amp\_MD1\] and \[fig:Amp\_MD2\] show the experimental quadrature values at various phases of the LOs. For Fig. \[fig:Amp\_MD1\], the first mode is the signal and the second mode is the idler, whereas for Fig. \[fig:Amp\_MD2\] the roles are exchanged. There are three subfigures corresponding to the signal input (a), the signal output (b), and the idler output (c). Horizontal axes are the measurement phases $\phi$, which are scanned from $0$ to $2\pi$. The quadrature at $\phi=0$ corresponds to $\hat{x}$ and that at $\phi=\pi/2$ to $\hat{p}$. Vertical axes are normalized quadrature values where the standard deviation of the vacuum fluctuation is $0.5$. Each set of data is taken for about $0.2$ seconds. Quadrature data are plotted every $10$ points in the figures, whereas the whole data set is used for the analysis. The sinusoidal curve of the signal input represents the nonzero mean amplitude of a coherent state, and the fluctuation around the sinusoid represents the quantum noise. The fluctuation grows uniformly at both the signal and idler outputs; this uniformity is evidence of the phase insensitivity of our amplifier. On the other hand, the sinusoidal curves of the two output modes show different behaviors. At the signal output, the amplitude of the sinusoid is amplified from that of the signal input, maintaining the phase. At the idler output, the amplitude of the sinusoid is the same as that of the signal input, whereas the phase is flipped. This flip is due to the Hermitian conjugate term in Eq. . The same qualitative behaviors are observed in both figures.
Figs. \[fig:Amp\_PS1\] and \[fig:Amp\_PS2\] are phase space diagrams, which are computed from the quadrature data shown in Figs. \[fig:Amp\_MD1\] and \[fig:Amp\_MD2\], respectively. The experimental results (a) and the theoretical calculations for the optimal PIA (b) are depicted next to each other. In the theoretical calculations, the experimental value is used for the amplitude of the signal input. The first and second moments are expressed by ellipses, which correspond to the cross sections of the Wigner functions. Note that the theoretical ellipses in (b) are strictly circles. In each phase space diagram, there are three ellipses. The ellipse in green is the signal input. Its radius is almost $0.5$, which corresponds to the standard deviation of the vacuum fluctuation. The ellipse in red is the signal output. Its center is about $\sqrt{2}$ times farther away from the origin and its radius is about $\sqrt{3}$ times larger than those of the signal input. The ellipse in blue is the idler output. Its radius is almost the same as that of the signal output, whereas its center is flipped around the $x$-axis from that of the signal input, which again represents the Hermitian conjugate term in Eq. .
![ Output powers for vacuum inputs. Vertical axes are powers in dB scale normalized by shot noises. Blue: Output quadratures. Cyan: Shot noises. Red: Theory for optimal PIA outputs. Magenta: Theory for our PIA outputs with $-5$ dB squeezed ancillas. Green: Theory for our PIA outputs with vacuum ancillas. []{data-label="fig:Amp_V"}](Amp_Out_V.eps)
In principle, we can fully characterize our PIA with only the above analysis, which treats quadrature values directly. However, such treatment requires a large amount of data for good accuracy. Thus, in the following, we resort to power measurements using a spectrum analyzer. Not only the output quadratures (shown in Figs. \[fig:Amp\_V\] and \[fig:Amp\_C\]) but also their correlations (shown in Figs. \[fig:Amp\_EPR\], \[fig:Amp\_RV\], and \[fig:Amp\_RC\]) are measured for several input states. In each figure, the results for each quadrature are contained in one of the boxes. Vertical axes are powers in dB scale, normalized by the corresponding shot noises.
Fig. \[fig:Amp\_V\] shows the experimental results for vacuum inputs (fluctuating traces), together with their theoretical expectations (straight lines). There are four boxes corresponding to the four output quadratures, namely, $\hat{x}_1^\text{out}$, $\hat{p}_1^\text{out}$, $\hat{x}_2^\text{out}$, and $\hat{p}_2^\text{out}$. The traces in blue are the powers of the output quadratures. The traces in cyan around $0$ dB are the powers of the shot noises, which are used for normalization. Since the inputs are in vacuum states, the powers of the shot noises correspond to those of the input quadratures $\hat{x}_1^\text{in}$, $\hat{p}_1^\text{in}$, $\hat{x}_2^\text{in}$, and $\hat{p}_2^\text{in}$. We put three kinds of theoretical lines corresponding to three different conditions. For the optimal PIA of a coherent state with $G=2.0$, the output quadrature variances become three times larger than the initial shot-noise-limited variance: two units from amplification and one from contamination by the other mode. The corresponding $4.8$ dB is marked by the lines in red. Our PIA with finite ancilla squeezing suffers from further excess noise. Assuming $-5$ dB of squeezing for the ancilla states, we calculate theoretical values which are marked by the lines in magenta. We also show those for vacuum ancillas by the lines in green. The lines in red, magenta, and green are very close to each other, so in these results other experimental errors dominate over the effect of the ancilla squeezing levels. Therefore, it is hard to discuss the nonclassicality of our PIA from these results alone. As shown later, the effects of ancilla squeezing appear more clearly in the output correlations.
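These theoretical levels can be reproduced by simple bookkeeping in shot-noise units (vacuum variance 1). The excess-noise term below is our own model of the feedforward squeezer — each squeezer leaking $(1-R)/2$ of its ancilla's squeezed-quadrature variance into one output quadrature — not an equation from the text:

```python
import numpy as np

G = 2.0
shot = 1.0
# Ideal PIA output variance: G units from amplification of the input plus
# (G - 1) units of contamination from the conjugated idler.
ideal = G * shot + (G - 1) * shot          # = 3, i.e. 4.8 dB
print(10.0 * np.log10(ideal))

# Assumed excess-noise model for the feedforward implementation:
R = 0.17
for V_s in (10.0 ** (-0.5), 1.0):          # -5 dB squeezed / vacuum ancillas
    out = ideal + (1.0 - R) / 2.0 * V_s
    print(10.0 * np.log10(out))            # slightly above 4.8 dB
```

Under this model, all three levels fall within about $0.6$ dB of each other, consistent with the observation that the red, magenta, and green lines are very close.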
Next we use several coherent states as inputs. The results are shown in Fig. \[fig:Amp\_C\]. The four input quadratures $\hat{x}_1^\text{in}$, $\hat{p}_1^\text{in}$, $\hat{x}_2^\text{in}$, and $\hat{p}_2^\text{in}$ are displaced from zero mean values in turn, leaving the other three quadratures at the vacuum level. There are four subfigures labeled from (a) to (d) corresponding to these four excitations. For each subfigure, there are five boxes. The trace in magenta in the leftmost box shows the measured power of the excited input quadrature. The other four boxes correspond to the four output quadratures, namely, $\hat{x}_1^\text{out}$, $\hat{p}_1^\text{out}$, $\hat{x}_2^\text{out}$, and $\hat{p}_2^\text{out}$. The traces in red show the output quadrature powers with the input excitation. They are compared to those without the excitation, shown by the blue traces, which are replottings of the blue traces in Fig. \[fig:Amp\_V\]. The obtained results show the following features. When a quadrature $\hat{x}$ or $\hat{p}$ of an input mode is excited, the same quadratures of both output modes are excited, whereas the conjugate quadratures do not change from the nonexcited levels. The two increased output powers differ by about $3.0$ dB, where the larger one corresponds to the amplified signal and the smaller one to the phase-conjugated idler output. These features are exactly what is expected from Eq. for $G=2$ and $\theta=0$. Note that the coefficients $\sqrt{2}$ correspond to the $3.0$ dB.
The results in Figs. \[fig:Amp\_V\] and \[fig:Amp\_C\] are only for the five specific input states. However, the results for other input states can be predicted on the assumption of linearity. More precisely, the absolute values of the coefficients of $\hat{x}_1^\text{in}$, $\hat{p}_1^\text{in}$, $\hat{x}_2^\text{in}$, and $\hat{p}_2^\text{in}$ in Eq. are determined from these results. The signs of the coefficients are not determined from them; however, they are checked from the phase space diagrams shown in Figs. \[fig:Amp\_PS1\] and \[fig:Amp\_PS2\]. In this sense, the results shown so far give full information on the input-output relation as far as the output modes are concerned separately.
In order to fully characterize our amplifier, the individual behaviors of the output modes are not sufficient. In the following, we are concerned with the output correlations.
Since unitary PIA is equivalent to two-mode squeezing, the two output modes should be entangled and have an EPR type of correlation. The results for the EPR correlation are shown in Fig. \[fig:Amp\_EPR\]. Here the two input modes are both in vacuum states. There are two boxes corresponding to the $x$ and $p$ correlations. The lower traces in blue show the two-mode squeezing of $\hat{x}_1^\text{out}-\hat{x}_2^\text{out}$ and $\hat{p}_1^\text{out}+\hat{p}_2^\text{out}$, whereas the upper traces in blue show the two-mode antisqueezing of $\hat{x}_1^\text{out}+\hat{x}_2^\text{out}$ and $\hat{p}_1^\text{out}-\hat{p}_2^\text{out}$, respectively. They are compared with the summed shot noises of the two homodyne detections, shown by the traces in cyan. Several theoretical lines are plotted together. The lower and upper lines in red are the theoretical values of two-mode squeezing and antisqueezing for the optimal PIA, respectively. Our results of two-mode squeezing are degraded from the ideal case due to the finite squeezing of the ancillas. Assuming $-5$ dB of squeezing for the ancillas, the theoretical expectation is marked by the lines in magenta. That for vacuum ancillas is marked by the lines in green, which coincides exactly with the shot noise level. In contrast, the theoretical two-mode antisqueezing is ideal for arbitrary ancillas. The experimental results agree well with the theory assuming $-5$ dB of ancilla squeezing. Since the lower traces in blue are both below the traces in cyan, the existence of entanglement between the two output modes is verified via the Duan-Simon criterion [@Duan(2000):PRL; @Simon(2000):PRL].
![ Two-mode squeezing and antisqueezing for vacuum inputs. Vertical axes are powers in dB scale normalized by summed shot noises of two homodyne detections. Lower Blue: Two-mode squeezing in $\hat{x}_1^\text{out}-\hat{x}_2^\text{out}$ and $\hat{p}_1^\text{out}+\hat{p}_2^\text{out}$. Upper Blue: Two-mode antisqueezing in $\hat{x}_1^\text{out}+\hat{x}_2^\text{out}$ and $\hat{p}_1^\text{out}-\hat{p}_2^\text{out}$. Cyan: Summed shot noises. Lower Red: Theory of two-mode squeezing for optimal PIA outputs. Upper Red: Theory of two-mode antisqueezing for optimal PIA outputs. Magenta: Theory of two-mode squeezing for our PIA outputs with $-5$ dB squeezed ancillas. Green: Theory of two-mode squeezing for our PIA outputs with vacuum ancillas. []{data-label="fig:Amp_EPR"}](Amp_Corr_EPR.eps)
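The EPR combinations can be simulated directly. Under an assumed model of the feedforward squeezers (each squeezer with beamsplitter reflectivity $R$ leaks its ancilla's squeezed quadrature into the output; the closed forms below are our own derivation, not the paper's equations), the squeezed combinations for vacuum inputs reduce to $\sqrt{R}\,(\hat{x}_1^\text{in}-\hat{x}_2^\text{in})-\sqrt{2(1-R)}\,\hat{x}_\text{anc}$ and similarly for $\hat{p}_1+\hat{p}_2$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400_000
R = 0.17                 # squeezer beamsplitter reflectivity
V_s = 10.0 ** (-0.5)     # -5 dB squeezed ancilla variance (shot noise = 1)

def vac(v=1.0):
    """A quadrature sample with variance v (v = 1 is vacuum)."""
    return rng.normal(0.0, np.sqrt(v), N)

# Assumed closed form of the squeezed EPR combinations for vacuum inputs:
#   x1 - x2 = sqrt(R)*(x1in - x2in) - sqrt(2*(1-R)) * x_anc  (anc squeezed)
#   p1 + p2 = sqrt(R)*(p1in + p2in) - sqrt(2*(1-R)) * p_anc
u = np.sqrt(R) * (vac() - vac()) - np.sqrt(2.0 * (1.0 - R)) * vac(V_s)
v = np.sqrt(R) * (vac() + vac()) - np.sqrt(2.0 * (1.0 - R)) * vac(V_s)

summed_shot = 2.0        # two homodyne detections, one shot unit each
print(np.var(u), np.var(v))   # ≈ 0.86 each, below the summed shot noise
assert np.var(u) < summed_shot and np.var(v) < summed_shot
```

Setting `V_s = 1.0` (vacuum ancillas) in the same expressions gives exactly the summed shot noise, matching the statement that the green lines coincide with the shot noise level.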
From the nonclassical correlation between the two output modes, we investigate the reversibility of our PIA. For this purpose, we virtually realize the inverse transformation electrically and reconstruct the initial quadratures. Neglecting the excess noise from finite ancilla squeezing, the PIA that we demonstrate has the input-output relation $\hat{a}_1^\text{out}=\sqrt{2}\hat{a}_1^\text{in}+(\hat{a}_2^\text{in})^\dagger$, $\hat{a}_2^\text{out}=\sqrt{2}\hat{a}_2^\text{in}+(\hat{a}_1^\text{in})^\dagger$, which is obtained by substituting $G=2$ and $\theta=0$ into Eq. . The inverse transformation becomes $\hat{a}_1^\text{out}=\sqrt{2}\hat{a}_1^\text{in}-(\hat{a}_2^\text{in})^\dagger$, $\hat{a}_2^\text{out}=\sqrt{2}\hat{a}_2^\text{in}-(\hat{a}_1^\text{in})^\dagger$, or equivalently,
\[eq:PiaInv\] $$\begin{aligned}
\hat{x}_1^\text{out}= & \sqrt{2}\hat{x}_1^\text{in}-\hat{x}_2^\text{in}, &
\hat{x}_2^\text{out}= & \sqrt{2}\hat{x}_2^\text{in}-\hat{x}_1^\text{in}, \\
\hat{p}_1^\text{out}= & \sqrt{2}\hat{p}_1^\text{in}+\hat{p}_2^\text{in}, &
\hat{p}_2^\text{out}= & \sqrt{2}\hat{p}_2^\text{in}+\hat{p}_1^\text{in}. \end{aligned}$$
Therefore, by adding or subtracting the two homodyne outcomes with a $3.0$ dB difference in gain, the initial quadratures are reconstructed. The reconstructed quadratures are denoted by the superscript “rec” in the following. Note that the initial quantum state is not recovered in the experiment. In addition, note that only one of the two quadratures $\hat{x}$ or $\hat{p}$ can be reconstructed at a time, and never both simultaneously. The recovery of the initial state is possible only when either a quantum channel is available between the signal and idler outputs [@Filip(2004):PRA] or linear pre-processing is applied [@Radim(2009):PRA]. However, the demonstrated reconstruction of the initial quadratures shows that the correlations necessary for the recovery of the initial state are present.
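The electrical reconstruction can be checked against the ideal input-output relation (a sketch neglecting the excess noise from finite ancilla squeezing):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
# Input quadratures, shot-noise-normalized (vacuum variance 1).
x1, p1, x2, p2 = rng.normal(0.0, 1.0, (4, N))

# Ideal PIA with G = 2, theta = 0.
x1o, p1o = np.sqrt(2) * x1 + x2, np.sqrt(2) * p1 - p2
x2o, p2o = np.sqrt(2) * x2 + x1, np.sqrt(2) * p2 - p1

# Combine the two homodyne outcomes with a 3.0 dB gain difference
# (amplitude factor sqrt(2)) to reconstruct the initial quadratures.
x1r = np.sqrt(2) * x1o - x2o
p1r = np.sqrt(2) * p1o + p2o
assert np.allclose(x1r, x1) and np.allclose(p1r, p1)
print("initial quadratures recovered exactly in the ideal case")
```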
![ Powers of initial vacuum fluctuations reconstructed from output correlations. Vertical axes are powers in dB scale normalized by shot noises. Blue: Reconstructed quadratures. Cyan: Summed shot noises of two homodyne detections. Red: Theory for optimal PIA. Magenta: Theory for our PIA with $-5$ dB squeezed ancillas. Green: Theory for our PIA with vacuum ancillas. []{data-label="fig:Amp_RV"}](Amp_Corr_V.eps)
In Fig. \[fig:Amp\_RV\], the results of such reconstruction of initial vacuum fluctuations are shown. There are four boxes corresponding to the four reconstructed quadratures $\hat{x}_1^\text{rec}$, $\hat{p}_1^\text{rec}$, $\hat{x}_2^\text{rec}$, and $\hat{p}_2^\text{rec}$. The powers of the reconstructed quadratures are shown by the traces in blue. The traces in cyan are the powers of the summed shot noises of the two homodyne detections, which are taken with the same electric gains as the traces in blue. Note that the blue traces lie below the cyan traces due to the nonclassical correlation. From Eq. , the summed shot noises should have three times larger variances than those corresponding to the initial vacuum fluctuations. Thus, we infer the original vacuum level to be $4.8$ dB below the measured sum of the shot noises. All results shown here are normalized by the inferred vacuum level. For the optimal PIA, vacuum fluctuations are perfectly reconstructed; thus the theoretical expectation coincides with the vacuum level of $0$ dB, which is marked by the lines in red. The increase of the blue traces above $0$ dB shows the imperfection of our PIA. The theoretical values for $-5$ dB squeezing and for no squeezing of the ancillas are marked by the lines in magenta and green, respectively. The experimental traces in blue are in good agreement with the lines in magenta.
Next, we pay attention to the reconstruction of the mean amplitude, using coherent states as inputs. The results are shown in Fig. \[fig:Amp\_RC\]. The four input quadratures, namely $\hat{x}_1^\text{in}$, $\hat{p}_1^\text{in}$, $\hat{x}_2^\text{in}$, and $\hat{p}_2^\text{in}$, are excited one by one, as shown in the leftmost boxes of the four subfigures. For each excitation, the powers of the four reconstructed quadratures are measured, and the results are shown in the other four boxes on the right side. In the leftmost box, the trace in magenta is the power of the excited input quadrature, and the trace in cyan is the power of the shot noise of the corresponding homodyne detection. The increase of the magenta trace above the cyan trace indicates the excitation. In the other four boxes, the traces in red and blue show the powers of the reconstructed quadratures with and without the input excitation, respectively, and the traces in cyan are the summed shot noise powers of the two homodyne detections. Similarly to Fig. \[fig:Amp\_RV\], one third of the summed shot noise power is used for normalization. The reconstructed quadratures are excited almost to the same levels as the input quadratures, whereas the nonexcited quadratures remain unchanged from their nonexcited levels.
All the results shown above prove the success of our demonstration of PIA. They agree well with the theoretical calculations assuming $-5$ dB of squeezing for ancillas.
Experimental Results for Cloner {#sec:ResultsClone}
===============================
We show next the results of the cloning experiment in a manner similar to the PIA experiment; i.e., the quadrature data and the phase space diagrams reconstructed from them are used for intuitive understanding and for checking the mirror image found in the anticlone, while the full verification is given by the power analysis for various input states. In the following, since the representation and interpretation of the results are almost the same as those for the PIA experiment, we give only a short description of them. The signal input of PIA is denoted by the term “original”, and the resulting two clones and one anticlone are denoted by “clone-1”, “clone-2”, and “anticlone”, respectively. These terms are abbreviated as “org”, “cln-1”, “cln-2”, and “a-cln” in the figures and mathematical expressions. Occasionally, the term “input” is used to indicate the original, and “output” to indicate the two clones and the anticlone.
Fig. \[fig:Cln\_MD\] shows the quadrature data. From an original (a) in a coherent state, two clones (b) and (c) and an anticlone (d) are produced. We see that the original and the two clones have almost the same sinusoidal curves of the mean amplitudes, though the fluctuations are uniformly increased in the clones. On the other hand, the anticlone has the same sinusoid when the phase is flipped.
Fig. \[fig:Cln\_PS\] is the phase space diagram computed from the quadrature data shown in Fig. \[fig:Cln\_MD\]. The first and second moments of each distribution are represented by an ellipse. Next to the experimental diagram (a), the theoretical calculation for the optimal cloning is depicted (b). The two ellipses of the clones (red and magenta) in the experimental diagram almost overlap. The centers of the two clones are almost the same as that of the original (green), whereas the radii of the clones are larger than that of the original. On the other hand, the anticlone (blue) has a different center, where the sign of the $p$ quadrature is opposite to that of the original.
We move on to the power analysis. First we show the powers of the output quadratures (shown in Figs. \[fig:Cln\_OV\] and \[fig:Cln\_OC\]), and then their correlations (shown in Figs. \[fig:Cln\_EPR\], \[fig:Cln\_RV\], and \[fig:Cln\_RC\]). Fig. \[fig:Cln\_OV\] shows the cloning of a vacuum state, and Fig. \[fig:Cln\_OC\] shows the cloning of several coherent states.
In Fig. \[fig:Cln\_OV\], there are six boxes corresponding to the six output quadratures, namely, $\hat{x}_\text{cln-1}$, $\hat{p}_\text{cln-1}$, $\hat{x}_\text{cln-2}$, $\hat{p}_\text{cln-2}$, $\hat{x}_\text{a-cln}$, and $\hat{p}_\text{a-cln}$. For each box, there are two experimental traces. The traces in blue are the powers of the output quadratures. The traces in cyan are the shot noise powers used for normalization, which are equal to the powers of the input quadratures. There are also three kinds of theoretical lines. The lines in red are for the optimal cloning, and the lines in magenta and green are for our cloning using $-5$ dB squeezed and vacuum ancillas for PIA, respectively. Note that the lines in red at $3.0$ dB for the clones correspond to the cloning limit. From these results, the cloning fidelity is estimated at $F=0.63\pm0.01$ for a vacuum original, which is well above the classical limit of $F=1/2$ and very close to the cloning limit of $F=2/3$.
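The quoted fidelity limits follow from the standard overlap formula between a coherent state and a Gaussian state with the same mean (uncorrelated quadratures with variances $V_x, V_p$ in shot-noise units, vacuum $=1$) — a textbook relation, stated here as a sketch rather than the paper's own expression:

```python
import numpy as np

def coherent_fidelity(Vx, Vp):
    """Fidelity of a Gaussian state (same mean as the coherent original,
    quadrature variances Vx, Vp in shot-noise units) with that original."""
    return 2.0 / np.sqrt((1.0 + Vx) * (1.0 + Vp))

print(coherent_fidelity(1, 1))   # 1.0 : perfect copy
print(coherent_fidelity(2, 2))   # 2/3 : cloning limit (clones 3 dB above shot)
print(coherent_fidelity(3, 3))   # 0.5 : classical limit
```

Inverting the formula, the measured $F=0.63$ corresponds to clone variances of about $2.2$ shot units, slightly above the $3$ dB cloning limit.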
In Fig. \[fig:Cln\_OC\], there are two subfigures (a) and (b) corresponding to excitations in $\hat{x}_\text{org}$ and $\hat{p}_\text{org}$, respectively. Each subfigure is composed of seven boxes, where the leftmost one shows the input excitation in $\hat{x}_\text{org}$ or $\hat{p}_\text{org}$, and the other six boxes show the output quadratures of $\hat{x}_\text{cln-1}$, $\hat{p}_\text{cln-1}$, $\hat{x}_\text{cln-2}$, $\hat{p}_\text{cln-2}$, $\hat{x}_\text{a-cln}$, and $\hat{p}_\text{a-cln}$. The trace in magenta shows the power of the excited input quadrature. The traces in red and blue show the output powers with and without the input excitation, respectively. The traces in cyan are the shot noise powers used for normalization. When we excite one quadrature of the original, the same quadratures in the three output modes are excited to almost the same level, whereas the conjugate quadratures do not change.
![ Output powers for vacuum originals. Vertical axes are powers in dB scale normalized by shot noises. Blue: Output quadratures. Cyan: Shot noises. Red: Theory for optimal cloner. Magenta: Theory for our cloner with $-5$ dB squeezed ancillas. Green: Theory for our cloner with vacuum ancillas. []{data-label="fig:Cln_OV"}](Cln_Out_V.eps)
Our remaining concerns are the output correlations and the reversibility. First, in Fig. \[fig:Cln\_EPR\], we show the EPR correlation between each clone and the anticlone as a sufficient condition for entanglement. The correlation between clone-1 and the anticlone is shown in Fig. \[sfig:Cln\_EPR\_1\], and that between clone-2 and the anticlone is shown in Fig. \[sfig:Cln\_EPR\_2\]. By electrically adding or subtracting the two homodyne signals with the same electric gains, four observables are measured, namely, $\hat{x}_\text{cln-1}-\hat{x}_\text{a-cln}$, $\hat{p}_\text{cln-1}+\hat{p}_\text{a-cln}$, $\hat{x}_\text{cln-2}-\hat{x}_\text{a-cln}$, and $\hat{p}_\text{cln-2}+\hat{p}_\text{a-cln}$, which are separately contained in boxes and shown as the blue traces. These traces are all below the summed shot noises shown by the traces in cyan. From these results, we verify bipartite entanglement between each clone and the anticlone via the Duan-Simon criterion [@Duan(2000):PRL; @Simon(2000):PRL], and consequently tripartite entanglement of Class I, where none of the three partial systems is separable from the others [@Giedke(2001):PRA]. Theoretical lines in red, magenta, and green are plotted together, corresponding to infinite squeezing, finite squeezing of $-5$ dB, and no squeezing of the ancillas for PIA, respectively. The experimental traces in blue agree well with the lines in magenta.
![ Powers of initial vacuum fluctuations reconstructed from output correlations. Vertical axes are powers in dB scale normalized by shot noises. Blue: Reconstructed quadratures. Cyan: Summed shot noises of three homodyne detections. Red: Theory for optimal cloner. Magenta: Theory for our cloner with $-5$ dB squeezed ancillas. Green: Theory for our cloner with vacuum ancillas. []{data-label="fig:Cln_RV"}](Cln_Corr_V.eps)
Using the nonclassical correlations, we reconstruct the original quadratures. The reconstructed quadratures are denoted by $\hat{x}_\text{rec}$ and $\hat{p}_\text{rec}$. The results for a vacuum state are shown in Fig. \[fig:Cln\_RV\], and those for coherent states are shown in Fig. \[fig:Cln\_RC\]. For the reconstruction, the three homodyne signals are added with the same electric gains and appropriate signs. For the same reason as in the PIA experiment, the summed shot noise has a variance three times larger than that corresponding to the vacuum fluctuation of the original. Thus, one third of the summed shot noise power is used for normalization.
In Fig. \[fig:Cln\_RV\], the powers of the reconstructed vacuum fluctuations are plotted as the traces in blue, and compared to the summed shot noise plotted as the traces in cyan. The blue traces are below the cyan traces for both $\hat{x}_\text{rec}$ and $\hat{p}_\text{rec}$ due to the nonclassical correlations. Theoretical expectations are also shown as the lines in red, magenta, and green, corresponding to the three conditions of infinite squeezing, finite squeezing of $-5$ dB, and no squeezing of the ancillas, respectively. The perfect reconstruction corresponding to $0$ dB is marked by the red lines, which is not achieved in the experiment due to the finite squeezing of the ancillas. The results are degraded almost to the level of the magenta lines, as expected from the theory. However, they are still slightly below $3.0$ dB, which corresponds to the cloning limit. From these results, the fidelity of reconstruction is calculated for a vacuum state. A perfect unitary cloner allows a reconstruction fidelity of $F=1$. The experimental value is $F=0.74\pm0.01$, which is higher than the cloning limit of $F=2/3$. The cloning limit can be considered as the classical limit for the reproduction of the original state, because one can never obtain a better approximation of the original state than the clones if the nonclassical correlations between the clones and anticlones cannot be utilized.
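As a rough illustration of how the quoted noise levels translate into fidelities, one can use the standard expression $F = 2/(1+V)$ for the fidelity between the vacuum and a symmetric zero-mean Gaussian state with quadrature variance $V$ in shot-noise units. This is a sketch under the assumption of a symmetric Gaussian reconstructed state, not a reproduction of the paper's analysis:

```python
# Illustrative sketch: converting a quadrature noise power (in dB relative to
# shot noise) into a fidelity with the vacuum state, assuming a symmetric
# zero-mean Gaussian state with variances Vx = Vp = V (vacuum = 1).
def fidelity_from_db(power_db):
    """Fidelity with the vacuum, F = 2/(1+V), with V in shot-noise units."""
    v = 10.0 ** (power_db / 10.0)
    return 2.0 / (1.0 + v)

# Perfect reconstruction (0 dB) gives F = 1; the 3.0 dB level quoted in the
# text corresponds to the cloning limit F = 2/3.
print(fidelity_from_db(0.0))  # 1.0
print(fidelity_from_db(3.0))  # ~0.667
```

Under this assumption, noise slightly below 3.0 dB indeed yields a fidelity slightly above 2/3, consistent with the measured $F=0.74\pm0.01$.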
In Fig. \[fig:Cln\_RC\], the input quadratures $\hat{x}_\text{org}$ and $\hat{p}_\text{org}$ are excited one by one, and for each case they are reconstructed from the output correlations. The trace in magenta in the leftmost box is the power of the excited original quadrature, whereas the trace in cyan in the same box is the power of the corresponding shot noise. The traces in red and blue in the other two boxes on the right side are the powers of the reconstructed quadratures with and without the excitation, respectively, and the traces in cyan at $4.8$ dB are the powers of the summed shot noises of the three homodyne detections. At the excited quadrature, almost the same level of excitation is reconstructed. In contrast, at the conjugate quadrature, no effect of the excitation is observed.
Summary {#sec:Summary}
=======
We succeeded in phase-insensitive optical amplification in a reversible manner. Our amplifier preserves the idler output, and the entanglement between the signal and the idler is responsible for the reversibility. The scheme is based on linear optics, homodyne measurements, and feedforward; offline-prepared squeezed states used as ancillas provide the nonclassical properties of our PIA. We demonstrated the scheme for an amplification gain of $G=2.0$. By splitting the amplified output in half, we also demonstrated $1\to2$ approximate cloning of coherent states, where the remaining idler output was interpreted as the anticlone.
For both experiments, the full demonstration proceeded in the following sequence. First, we characterized the individual output modes. By treating the quadrature data directly, we visualized them as phase-space diagrams to aid intuitive understanding; in particular, the mirror image in the idler output (the anticlone) was shown. Then the input-output relation was examined more strictly by using several different coherent states as inputs. Finally, the output correlations were examined. They are important because the nonclassical properties are accessible only through them. Not only were the ordinary EPR correlations shown, but the possibility of the reverse operation was also presented directly through appropriate measurements of the correlations.
Our results demonstrate properties of an amplification process that have been known theoretically for decades but never fully demonstrated experimentally; in particular, the process is reversible as long as the idler is preserved. Such reversible amplification is significant from a practical point of view, as shown in Refs. [@Radim(2009):PRA; @Filip(2004):PRA]. We did not demonstrate the reverse operation itself but showed its possibility from the correlations. The recovery of the signal state only requires a Bell measurement and feedforward [@Radim(2009):PRA; @Filip(2004):PRA], which would be much less lossy and noisy than implementing the inverse transformation. Full and partial recovery of the distributed information via such a feedforward scheme is left for future experiments.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work was partly supported by SCF, GIA, G-COE, PFN and FIRST commissioned by the MEXT of Japan, the Research Foundation for Opt-Science and Technology, SCOPE program of the MIC of Japan, and, JSPS and ASCR under the Japan-Czech Republic Research Cooperative Program. R. F. acknowledges projects: MSM 6198959213 and ME10156 of the Czech Ministry of Education, grant 202/08/0224 of GA ČR and EU Grant FP7 212008 COMPAS.
General number of clones {#sec:GenCln}
========================
In this appendix, the discussion in Sec. \[sec:Clone\] is extended to ${K}\to{L}$ cloning.
The procedure of ${K}\to{L}$ symmetric cloning can be decomposed into three steps as follows [@Braunstein(2001):PRL; @Fiurasek(2001):PRL]. First, all the information of $(x_\text{d},p_\text{d})$ is put together into a single mode by a beamsplitter network; a state with larger amplitude $\hat{D}(\sqrt{K}x_\text{d},\sqrt{K}p_\text{d}){|{\psi}\rangle}$ is created from the $K$ identical originals $\hat{D}(x_\text{d},p_\text{d}){|{\psi}\rangle}$ in this step. Second, the combined signal is amplified with the gain $G=L/K$. Finally, the amplified signal is combined with $L-1$ ancillas by another beamsplitter network, creating $L$ clones. For asymmetric cloning, the procedure is essentially the same, but the amplification in the second step is applied to only a part of the combined signal, and the gain is changed correspondingly to $G=1+\sum_{k=1}^Ln_k$ [@Fiurasek(2007):PRA]. On the other hand, $L-K$ anticlones are obtained from the idler output by combining it with $L-K-1$ ancillas by yet another beamsplitter network. Therefore, as a whole, $L$ clones and $L-K$ anticlones, with complex mean amplitudes $\alpha$ and $\alpha^\ast$ respectively, are obtained from $K$ originals with complex mean amplitude $\alpha$. There is no entanglement among the clones or among the anticlones; however, there is entanglement between a clone and an anticlone.
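The mean-amplitude bookkeeping of the three-step procedure above can be sketched in a few lines. This is a minimal illustration that tracks only the classical mean amplitudes and omits all quantum noise terms:

```python
import math

# Sketch of the amplitude bookkeeping in the three-step K -> L symmetric
# cloner: combine K originals, amplify with gain G = L/K, split into L clones.
def clone_mean_amplitudes(alpha, K, L):
    combined = math.sqrt(K) * alpha          # step 1: beamsplitter network
    amplified = math.sqrt(L / K) * combined  # step 2: amplification, G = L/K
    return [amplified / math.sqrt(L)] * L    # step 3: split into L clones

# Each of the L clones ends up with the original mean amplitude alpha:
print(clone_mean_amplitudes(1.0, 2, 3))
```

The cancellation $\sqrt{K}\cdot\sqrt{L/K}/\sqrt{L}=1$ is exactly why every clone carries the input mean amplitude $\alpha$.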
The asymmetric clones have the following form: $$\begin{aligned}
\hat{a}_{\text{cln-}k}=\tfrac{1}{\sqrt{K}}\hat{a}_\text{org}^\prime+\sqrt{n_k}{\hat{a}^\dagger}_\text{idl}+\sum_{\ell=1}^{L-1}\kappa_{k\ell}\hat{a}_{\text{anc-}\ell},
\label{eq:GenClnAsym}\end{aligned}$$ where $\hat{a}_{\text{cln-}k}$ is the annihilation operator of the clone, $\hat{a}_\text{org}^\prime$ is that of the combined original after the first step, $\hat{a}_\text{idl}$ is that of the idler input, and $\hat{a}_{\text{anc-}\ell}$ is that of the ancilla input for the latter beamsplitter network. The coefficients $\sqrt{n_k}$ and $\kappa_{k\ell}$ are not independent in order to preserve the commutation relations. By setting all the ancillas in vacuum states, the added noises become rotationally symmetric and Gaussian, and their variances correspond to the parameters $n_k$ [@Note]. They satisfy the relation below [@Fiurasek(2007):PRA]: $$\begin{aligned}
\Bigl(\sum_{k=1}^L\sqrt{n_k}\Bigr)^2=(L-K)\Bigl(\sum_{k=1}^L n_k+1\Bigr).
\label{eq:GenClnAsymNoise}\end{aligned}$$ The optimality of Eq. (\[eq:GenClnAsymNoise\]) is proven with respect to the cost function constructed from the variances [@Fiurasek(2007):PRA]: $$\begin{gathered}
C(n_1,\dots,n_L)=\sum_{k=1}^Lc_kn_k. \end{gathered}$$ In particular, for the symmetric cloning, i.e., $n_1=\dots=n_L\equiv{n}$, Eq. (\[eq:GenClnAsymNoise\]) saturates the following inequality, which is obtained in Ref. [@Cerf(2000):PRA] from consistency with the uncertainty relation: $$\begin{aligned}
n \ge \bigl(\tfrac{1}{K}-\tfrac{1}{L}\bigr). \end{aligned}$$
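As a quick numerical sanity check (not part of the original analysis), one can verify that the symmetric choice $n_k = n = 1/K - 1/L$ saturates the noise relation above:

```python
# Check that n_k = n = 1/K - 1/L saturates the asymmetric-cloning noise
# relation (sum_k sqrt(n_k))^2 = (L - K)(sum_k n_k + 1) for all k.
def check_saturation(K, L):
    n = 1.0 / K - 1.0 / L
    lhs = (L * n ** 0.5) ** 2        # all L terms equal sqrt(n)
    rhs = (L - K) * (L * n + 1.0)
    return abs(lhs - rhs) < 1e-9

print(all(check_saturation(K, L)
          for K in range(1, 6) for L in range(K + 1, 10)))  # True
```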
The limit fidelity for symmetric Gaussian cloning is calculated as $F=KL/(KL-K+L)$. On the other hand, in the limit where $L$ goes to infinity, the classical limit of cloning (i.e., the limit of state estimation) is obtained as $F=K/(K+1)$.
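The two fidelity formulas can be checked with a few lines of code (an illustrative sketch only):

```python
from fractions import Fraction

# The symmetric Gaussian cloning fidelity F = KL/(KL - K + L) quoted above,
# evaluated exactly with rational arithmetic.
def cloning_fidelity(K, L):
    return Fraction(K * L, K * L - K + L)

print(cloning_fidelity(1, 2))        # 2/3, the 1 -> 2 cloning limit
print(cloning_fidelity(1, 10 ** 9))  # approaches K/(K+1) = 1/2 as L grows
```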
As with $1\to2$ cloning, optimal $1\to{L}$ cloning can be fully reversed by $L-1$ Bell measurements performed on each set of a clone and an anticlone and subsequent feedforward to the remaining single clone [@Filip(2004):PRA]. Even with the smaller number of Bell measurements, the original is partially recovered accordingly.
[99]{}
C.M. Caves, **26**, 1817 (1982).
E. Arthurs and J.L. Kelly Jr., Bell Syst. Tech. J. **44**, 725 (1965).
E. Arthurs and M.S. Goodman, **60**, 2447 (1988).
K. Shimoda, H. Takahasi, and C.H. Townes, J. Phys. Soc. Jpn. **12**, 686 (1957).
V. Josse, M. Sabuncu, N.J. Cerf, G. Leuchs, and U.L. Andersen, **96**, 163602 (2006).
A. Furusawa, J.L. S[ø]{}rensen, S.L. Braunstein, C.A. Fuchs, H.J. Kimble, E.S. Polzik, Science, **282**, 706 (1998).
T. Aoki, G. Takahashi, T. Kajiya, J. Yoshikawa, S.L. Braunstein, P. van Loock, and A. Furusawa, Nature Physics **5**, 541 (2009).
R. Filip, P. Marek, and U.L. Andersen, **71**, 042308 (2005).
G. Giedke, B. Kraus, M. Lewenstein, and J.I. Cirac, **64**, 052303 (2001).
R. Filip, **80**, 022304 (2009).
J. Yoshikawa, T. Hayashi, T. Akiyama, N. Takei, A. Huck, U.L. Andersen, and A. Furusawa, **76**, 060301(R) (2007).
R. Ukai, N. Iwata, Y. Shimokawa, S.C. Armstrong, A. Politi, J. Yoshikawa, P. van Loock, and A. Furusawa, arXiv:quant-ph/1001.4860 (2010).
W.K. Wootters and W.H. Zurek, Nature (London) **299**, 802 (1982).
S.L. Braunstein, V. Buzek, and M. Hillery, **63**, 052313 (2001).
For example, F. Grosshans and P. Grangier, **88**, 057902 (2002); F. Grosshans and P. Grangier, arXiv:quant-ph/0204127 (2002).
J. Fiurášek and N.J. Cerf, **75**, 052335 (2007).
N.J. Cerf, O. Krüger, P. Navez, R.F. Werner, and M.M. Wolf, **95**, 070501 (2005).
F. Grosshans and N.J. Cerf, **92**, 047905 (2004).
A. Leverrier and P. Grangier, **81**, 062314 (2010).
N.J. Cerf, A. Ipe, and X. Rottenberg, **85**, 1754 (2000).
J. Fiurášek, **86**, 4942 (2001).
S.L. Braunstein, N.J. Cerf, S. Iblisdir, P. van Loock, and S. Massar, **86**, 4938 (2001).
J. Bae and A. Acín, **97**, 030402 (2006).
F. De Martini, V. Bužek, F. Sciarrino, and C. Sias, Nature **419**, 815 (2002).
R. Filip, J. Fiurášek, and P. Marek, **69**, 012314 (2004).
U.L. Andersen, V. Josse, and G. Leuchs, **94**, 240503 (2005).
S. Koike, H. Takahashi, H. Yonezawa, N. Takei, S.L. Braunstein, T. Aoki, and A. Furusawa, **96**, 060504 (2006).
S.L. Braunstein, **71**, 055801 (2005).
J. Yoshikawa, Y. Miwa, A. Huck, U.L. Andersen, P. van Loock, and A. Furusawa, **101**, 250501 (2008).
S. Suzuki, H. Yonezawa, F. Kannari, M. Sasaki, and A. Furusawa, **89**, 061116 (2006).
L.-M. Duan, G. Giedke, J.I. Cirac, and P. Zoller, **84**, 2722 (2000).
R. Simon, **84**, 2726 (2000).
More strictly speaking, the situation is as follows. The simple picture of cloning, in which some noise is added to the original $\hat{D}(x_\text{d},p_\text{d}){|{\psi}\rangle}$, is valid for $1\to{L}$ cloning irrespective of the core state ${|{\psi}\rangle}$. However, for ${K}\to{L}$ cloning with $K\ge2$, it is not always valid. When the core state ${|{\psi}\rangle}$ is known, there may be a choice of ancilla states which makes the picture valid. Indeed, when the core state is known to be a vacuum state (i.e., the set $S$ of possible original states is all coherent states), we can choose vacuum states as the ancillas. When the core state is unknown, a part of the original quantum fluctuation is replaced by that of the ancillas, and therefore the added noises $n_k$ become ill-defined. Nonetheless, the cloner would still be optimal in some sense, from the standpoint of sharing the information of $(x_\text{d},p_\text{d})$.
N.J. Cerf and S. Iblisdir, **62**, 040301(R) (2000).
---
abstract: 'It has been proposed recently that, within the framework of split Supersymmetry, long lived gluinos generated in astrophysical sources could be detected using the signatures of the air showers they produce, thus providing a lower bound for their lifetime and for the scale of SUSY breaking. We present the longitudinal profile and lateral spread of $G$-hadron induced extensive air showers and consider the possibility of measuring them with a detector with the characteristics of the Pierre Auger Observatory.'
author:
- 'Javier G. Gonzalez, Stephen Reucroft, and John Swain'
title: Gluino Air Showers as a Signal of Split Supersymmetry
---
Introduction
============
In the inclusive cosmic ray spectrum, which falls steeply with energy and is otherwise almost structureless, three kinematic features have drawn considerable attention for a long time. These features, known as the knee, the ankle, and the ultraviolet cutoff, are the only ones at which the spectral index varies sharply as a function of energy, probably signaling some “new physics”. In particular, if cosmic ray sources are at cosmological distances, the cutoff is expected at about $10^{10.9}$ GeV, due to the GZK [@Greisen:1966jv] interactions of the primaries with the microwave background radiation. The existence of data beyond the GZK cutoff [@Takeda:1998ps] has puzzled theorists and experimentalists [@Abbasi:2002ta], but a clear and widely accepted explanation is yet to see the light of day. Proposed resolutions of this puzzle generally invoke physics from the most favored theories beyond the standard model (SM), like string/M theory, supersymmetry (SUSY), grand unified theories (GUTs), and TeV-scale gravity [@Anchordoqui:2002hs].
A novel beyond-SM proposal to break the GZK barrier is to assume that ultrahigh energy cosmic rays are not known particles but a new species of particle, generally referred to as the uhecron, $U$ [@Farrar:1996rg]. The meager information we have about super-GZK particles allows a naïve description of the properties of the $U$. The muonic content of the atmospheric cascades suggests that $U$’s should interact strongly. At the same time, if $U$’s are produced at cosmological distances, they must be stable, or at least remarkably long lived, with mean lifetime $\tau \gtrsim 10^6 \, (m_U/3~{\rm GeV})\, (d/ {\rm Gpc})\,{\rm s},$ where $d$ is the distance to the source and $m_U$ the uhecron’s mass. Additionally, since the threshold energy increases linearly with $m_U$, avoiding photo-pion production on the CMB requires $m_U \gtrsim 1.5$ GeV. Within the Minimal Supersymmetric (MS) extension of the SM, the allowed ranges for gluino masses are $m_{\tilde{g}} \leq 3~{\rm GeV}$ and $25~{\rm GeV} \leq m_{\tilde{g}} \leq 35~{\rm GeV}$. In this direction, it was noted in [@Berezinsky:2001fy] that light Supersymmetric baryons (made from a light gluino plus the usual quarks and gluons, $m_U \lesssim 3$ GeV) would produce atmospheric cascades very similar to those initiated by protons.
Recently, Arkani-Hamed and Dimopoulos [@Arkani-Hamed:2004fb] proposed an alternative to the MSSM in which the mass spectrum of the super-partners is split in two. In this theory, all the scalars, except for a fine-tuned Higgs, get a mass at a high scale of supersymmetry breaking, while the fermion masses remain near the electroweak scale, protected by chiral symmetry. Additionally, all corrections that involve loops of supersymmetric bosons are suppressed, thus removing most of the tunings required to reproduce $(g-2)_\mu$, $B-\bar{B}$ mixing and $b \rightarrow s\gamma$ [@Giudice:2004ss], while still allowing for radiative corrections to the Higgs mass. Moreover, it was shown very recently that there exists a realization of “split SUSY” in String Theory [@Antoniadis:2004dt; @Arkani-Hamed:2004ss].
An important feature of split SUSY is the long lifetime of the gluino, due to the high masses of the virtual scalars ($m_s$) that mediate its decay. Indeed, very strong limits on heavy isotope abundance require the gluino to decay on Gyr time scales, leading to an upper bound on the scale of SUSY breaking of ${\cal O} (10^{13})$ GeV. Additionally, it has been pointed out that the detection of gluinos coming from astrophysical sources (with $m_{\tilde g} \sim 500$ GeV) leads to a lower bound on their proper lifetime of the order of 100 yr, which translates into a lower bound on the scale of SUSY breaking of ${\cal O} (10^{11})$ GeV [@Anchordoqui:2004bd].
In light of this, it is of interest to explore the potential of forthcoming cosmic ray observatories to observe gluino-induced events. Some signatures of the air showers initiated by these long lived gluinos have been presented in [@Anchordoqui:2004bd; @Hewett:2004nw]. In this Brief Report, we carry out a more detailed analysis by generating gluino air showers through Monte Carlo simulations, and pave the way for a future study on the actual feasibility of measuring them at the Pierre Auger Observatory [@Abraham:2004dt]. The outline is as follows. In Sec. II we review the relevant aspects of cosmic ray air showers. In Sec. III we first carry out Monte Carlo simulations of gluino induced showers and then show their distinct signatures in the air shower profile and lateral spread at ground level. Section IV contains a summary of our results.
Cosmic Ray Air Showers
======================
When a high energy particle hits the atmosphere it generates a roughly conical cascade of secondary particles, an air shower. At any given time, the shower can be pictured as a bunch of particles, the shower front, traveling toward the ground at nearly the speed of light. The number of particles in the shower multiplies as the front traverses the atmosphere, until the particle energies fall below a threshold at which ionization losses dominate over particle creation; beyond this point the number of particles decreases. By the time the front hits the ground, its shape is similar to that of a “saucer” with a radius that can range from a few meters to a few kilometers. The shape of the shower front is actually closer to that of a spherical shell, with a radius of curvature of a few kilometers for almost vertical showers to more than a hundred kilometers for inclined ones. The number of particles as a function of the amount of atmosphere traversed is the “longitudinal profile” of the shower.
The general properties of the longitudinal profile can be understood with a simple model, and the profile can usually be parameterized by a Gaisser-Hillas function [@Anchordoqui:2004xb], $$N_e(X)=N_{e, {\rm max}} \left( \frac{X-X_{0}}{X_{\rm max}-X_{0}} \right)^{(X_{\rm max}-X_0)/\lambda} e^{(X_{\rm max}-X)/\lambda},
\label{gaiserhillas}$$ where $N_{e, {\rm max}}$ is the number of particles at shower maximum, $X_0$ is the depth of the first observed interaction, $X_{\rm max}$ is the depth at the maximum, and $\lambda = 70$ g/cm$^2$. The position of the maximum, $X_{\rm max}$, depends on the energy as well as on the nature of the primary particle. In real cosmic ray showers, however, there are fluctuations, associated mainly with the point where the primary first interacts, as well as statistical fluctuations in the development of the shower.
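As an illustration, the standard Gaisser-Hillas form with the parameters defined above can be implemented directly. This is a sketch; the parameter values used below are arbitrary examples, not fitted values:

```python
import math

# Sketch of the standard Gaisser-Hillas longitudinal profile, with
# lambda = 70 g/cm^2 as quoted in the text. X, X0, X_max in g/cm^2.
def gaisser_hillas(X, N_max, X0, X_max, lam=70.0):
    if X <= X0:
        return 0.0
    t = (X - X0) / (X_max - X0)
    return N_max * t ** ((X_max - X0) / lam) * math.exp((X_max - X) / lam)

# By construction the profile peaks at X = X_max with value N_max:
print(gaisser_hillas(750.0, 1e10, 0.0, 750.0))  # 1e10
```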
When the shower front reaches the ground it is spread over an area of up to a few kilometers across. It is then possible to study the density of energy deposited (or the particle densities) on the ground as a function of time and position. There are a number of parameterizations for such distributions, but they are mostly modified power laws of the form [@Anchordoqui:2004xb] $$\rho(r) = C \left(\frac{r}{r_M}\right)^{-\alpha} \left(1+\frac{r}{r_M}\right)^{-(\eta-\alpha)},$$ where $r$ is the distance to the point where the core hits the ground, $r_M$ is the Moliere radius at two radiation lengths above the observation level, $\alpha$ is an empirical parameter, and $\eta$ is a free parameter that depends on the zenith angle. The normalization constant $C$ depends on the energy.
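For illustration, a modified power law of this NKG type can be sketched as follows. Note that the specific functional form and the parameter values below are our assumptions for the sketch, not values taken from the text:

```python
# Sketch of an NKG-like lateral distribution of the modified-power-law type
# described above (assumed form: C * (r/r_M)^-alpha * (1 + r/r_M)^-(eta-alpha)).
def lateral_density(r, C, r_M, alpha, eta):
    x = r / r_M
    return C * x ** (-alpha) * (1.0 + x) ** (alpha - eta)

# Far from the core (r >> r_M) the density falls roughly as r^(-eta):
print(lateral_density(1000.0, 1.0, 100.0, 1.0, 3.0))
```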
If the primary particle is a hadron, the first interaction will be hadronic, and the number of hadrons, mostly pions, will increase with each interaction. Statistically, in each interaction, about 30% of the energy goes into neutral pions that decay, generating an electromagnetic cascade. In this way, the energy of the primary is split into an electromagnetic part and a “muonic” part that comes from the $\pi^{\pm}$ that managed to decay. The higher the energy of the incoming primary, the larger the number of interactions required to bring the energy per particle below the threshold at which the pions will most likely decay, which increases the fraction of the energy going into the electromagnetic part. As a result, the number of muons in a shower scales as $E^{0.94}$. This in turn implies that the number of muons for a nucleus of mass $A$ relates to the number of muons in a proton shower as $N_{\mu}^A = A^{0.06}N_{\mu}^{p}$ [@Anchordoqui:2004xb]. At this point it is worth noting that these numbers depend strongly on the particular hadronic interaction model used. There are three hadronic interaction models commonly used in air shower simulations, [sibyll]{} [@Fletcher:1994bd], [qgsjet]{} [@Kalmykov:1997te] and [dpmjet]{} [@Ranft:1994fd]. All of them are extrapolations of models that agree with the experimental data but show different behaviour at center of mass energies beyond 10 TeV [@Anchordoqui:1998nq].
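The quoted scaling follows from a simple superposition argument, which can be made concrete in a few lines (a sketch, not part of the simulation chain described here):

```python
# Superposition sketch behind N_mu^A = A^0.06 N_mu^p: a nucleus of mass
# number A and energy E is treated as A independent proton showers of energy
# E/A each, with N_mu proportional to E^0.94 for a single proton.
def muon_ratio(A, beta=0.94):
    # N_mu(A, E) / N_mu(p, E) = A * (E/A)^beta / E^beta = A^(1 - beta)
    return A * (1.0 / A) ** beta

print(round(muon_ratio(56), 3))  # iron: 56^0.06, about 1.27x more muons
```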
If the primary particle is a $\gamma$-ray, the interactions that occur are mostly pair production, Bremsstrahlung, ionization losses and Compton scattering. Also, at energies above $10^{10}$ GeV, the LPM effect suppresses the cross-sections for pair production and Bremsstrahlung, resulting in a more elongated shower.
Gluino air showers
==================
In this section we will study gluino induced showers. To carry out this study we use [aires]{}, a set of programs specifically designed to simulate the extensive air showers generated by ultra high energy cosmic rays interacting with the atmosphere. The [Aires]{} system is described elsewhere [@Sciutto:1999; @Sciutto:2001dn] and takes into account the relevant interactions, including electromagnetic and hadronic interactions and transport processes. The hadronic model used is [Sibyll]{}.
We use a feature of [aires]{} that allows for the definition of special primaries by providing a program that handles the first interactions of each primary until the main program ([aires]{}) can take over and simulate the rest of the shower until it strikes the ground. To model the gluino induced showers we first determine where each major interaction takes place and then inject a proton with energy equal to the energy deposited by the gluino. We then force each proton to have its first interaction at its corresponding point, giving rise to a hadronic shower that is then simulated by the standard [@Sciutto:1999; @Sciutto:2001dn] program.
The gluino-containing hadron (hereafter $G$) has a cross section of about half the pion-air cross section, and the inelasticity is $K \propto 1~{\rm GeV}/M_G$ [@Anchordoqui:2004bd; @Berezinsky:2001fy]. The gluino mass range is not constrained, so we could, in principle, study it at different scales. According to [@Hewett:2004nw], the masses accessible to neutrino detectors are $\leq 170$ GeV. In our case we try to probe higher masses, while keeping the fluxes within reach. In our simulations we adopt a gluino mass of $500$ GeV, which yields an inelasticity of $0.002$ [@Anchordoqui:2004bd]. This small inelasticity is precisely what allows one to model a $G$ shower as a series of proton sub-showers separated according to the $G$ mean free path, with each proton carrying about $0.002$ of the original $G$ energy.
Our program takes as input the energy, mean free path and inelasticity of the $G$-hadron. The nature of the particle chosen to be injected at each vertex determines the amount of energy channeled into the hadronic shower and the amount going into the electromagnetic shower. For a detector like the surface array of the Pierre Auger Observatory, the hadronic part (the muons) is enhanced over the electromagnetic part (also considering that a great part of the electromagnetic component has died away in flight). This means that, by injecting a proton at each vertex, we are underestimating the number of muons that can be sampled on the ground.
The proton sub-showers are generated following these simple steps, repeated until the $G$-hadron reaches ground level:
- Calculate the point of the next interaction.
- Decrease the energy of the $G$-hadron by the fraction given by the inelasticity.

- Inject a proton with energy equal to the decrease in the $G$-hadron’s energy, traveling in the same direction.
Once the initial string of protons is generated, the [aires]{} package takes over simulation of the standard physics interactions and transport through the atmosphere to produce the set of showers.
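The steps above can be sketched as follows. This is a simplified stand-in for the actual [aires]{} special-primary module: the mean free path value is an assumed placeholder, interaction points are drawn from an exponential distribution in slant depth, and energy losses other than the vertex inelasticity are ignored:

```python
import random

# Sketch of the sub-shower generation loop: at each vertex a proton carrying
# the G-hadron's energy loss is injected (depths in g/cm^2, energies in eV).
def generate_proton_subshowers(E_G, mfp, inelasticity, X_ground, seed=0):
    rng = random.Random(seed)
    X, vertices = 0.0, []
    while True:
        X += rng.expovariate(1.0 / mfp)   # slant depth of next interaction
        if X > X_ground:
            break
        dE = inelasticity * E_G           # energy transferred at this vertex
        E_G -= dE
        vertices.append((X, dE))          # inject a proton of energy dE here
    return vertices

# For a 5x10^18 eV G-hadron with K = 0.002, each sub-shower carries ~10^16 eV:
subs = generate_proton_subshowers(5e18, mfp=120.0, inelasticity=0.002,
                                  X_ground=2000.0)
```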
![\[fig:comparison\] Longitudinal profile for protons at $10^{17}$, $10^{18}$, $10^{19}$, $10^{20}$ eV and a gluino at $5\times10^{18}$ eV.](longitudinal.eps){width="37.00000%"}
In Fig. \[fig:comparison\] we present the average longitudinal profile of 100 $G$-hadron induced air showers, along with the longitudinal development of proton showers at different energies. Our shower simulations have been carried out for an incident zenith angle of $75^\circ$. It should be clear that the development of a $G$-hadron shower cannot be fitted by the Gaisser-Hillas function given in Eq. (\[gaiserhillas\]). One of the biggest sources of fluctuations in air showers is the first interaction point. In our case, the longitudinal development of any particular shower shows only small fluctuations in shape, since each shower is composed of about ten proton sub-showers. It should be noted that it might be possible to isolate these events from their background even for zenith angles as low as $60^\circ$, where the atmospheric depth is around 2000 g/cm$^2$, still more than three times the depth at which the shower reaches its maximum.
![\[fig:timeProfile\] Time of arrival versus distance to the core, along the symmetry axis of the LDF](timeProfile.eps){width="37.00000%"}
In Fig. \[fig:timeProfile\] we show the arrival time as a function of the distance to the core along the major symmetry axis of the lateral distribution function, again averaged over 100 showers. A fit to this plot, using the spherical front approximation to determine the radius of curvature of the shower front, leads to a value of around 78 km. For general cosmic ray air showers the front is not parameterized as a sphere, since the curvature is more pronounced near the core and flattens out at large radii.
In the spherical front approximation, the arrival time as a function of the distance to the core ($r$) is, up to terms of higher order in $r/R$, $$t(r)= - \frac{r}{c} \hat{u}_r \cdot \hat{u}_R + \frac{r^2}{cR}\left(1 - \left(\hat{u}_r \cdot
\hat{u}_R\right)^2\right) \,\,,$$ where $R$ is the radius of the front, $\hat{u}_R$ is the unit vector pointing in the direction of arrival of the shower, and $\hat{u}_r$ is the radial unit vector in the detector plane.
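The formula can be evaluated directly; the following sketch (with assumed example values) illustrates the quadratic growth of the delay away from the core:

```python
C_LIGHT = 0.299792458  # speed of light in km per microsecond

# Sketch evaluating the spherical-front arrival-time formula above for a
# front of radius R (km); cos_angle is u_r . u_R, the cosine of the angle
# between the radial direction on the ground and the arrival direction.
def arrival_time(r, R, cos_angle):
    """Arrival time (microseconds) at distance r (km) from the core."""
    return (-r / C_LIGHT) * cos_angle + (r ** 2 / (C_LIGHT * R)) * (1.0 - cos_angle ** 2)

# Perpendicular to the arrival direction the delay is purely the geometric
# term r^2/(cR); here r = 1 km and R = 78 km, the fitted radius quoted above.
print(arrival_time(1.0, 78.0, 0.0))
```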
![\[fig:ldf\] Average Lateral spread for $10^{18}$, $10^{19}$, $10^{20}$ eV proton showers and $5\times 10^{18}$ eV gluino showers.](ldf.eps){width="37.00000%"}
The lateral distribution function (LDF) of all particles also shows distinct behavior. In Fig. \[fig:ldf\] the total LDF in the shower plane is plotted, along with the corresponding ones for protons of different energies. A distinctive feature of the LDF of $G$-hadrons is that the slope depends on the distance to the core, as opposed to proton-generated ones. This is because $G$-hadron induced showers are a superposition of lower energy showers of different ages (hence different slopes); the younger showers, which are spread over a smaller area, show a steeper LDF. The particle densities are of the order of the densities from proton showers, and they need to be scaled according to the detector response. Also, for very inclined showers the steepness of the LDF depends strongly on the zenith angle, so a search for these events has to take into account the angular resolution of the experiment.
Summary
=======
Using a simple model to simulate the interaction of $G$-hadrons hitting the top of the atmosphere, we have shown that the longitudinal development of a $G$-hadron induced air shower is very distinct from that of the protons and gamma rays that compose the main background. Thanks to the low inelasticity of $G$-air interactions, the $G$-hadron produces a sequence of smaller showers of almost the same energy, and the value of the cross section, in agreement with previous works, gives a separation between them too small for the individual sub-showers to be resolved.
It should then be possible to differentiate between $G$-hadron induced showers and those of the background, since they display such a unique profile. In order to assess how difficult it will be to actually measure these showers, we need to consider the characteristics of the detector. In the case of fluorescence techniques, a careful study of the signal to noise ratio for the intensities associated with these showers will be needed in order to correctly estimate the aperture.
It was also shown that the lateral distribution of particles at ground level should be measurable with a ground array. The expected signature of a small front radius, inferred from the arrival times, does not seem to be realized, but the LDF does show a varying slope due to the superposition of showers of different ages. A more detailed study of the aperture and the discrimination power is also needed, since these will depend strongly on the characteristics of the detector.\
After this paper was finished new bounds on the $M_G$-SUSY breaking scale plane were reported [@Arvanitaki:2005fa]. Interestingly, a small window for high scale SUSY breaking and $M_G = 500$ GeV still remains open.
Acknowledgements {#acknowledgements .unnumbered}
================
We would like to thank Luis Anchordoqui and Carlos Nuñez for useful discussions. This work has been partially supported by NSF grant No. PHY-0140407.
[99]{}
K. Greisen, Phys. Rev. Lett. [**16**]{}, 748 (1966); G. T. Zatsepin and V. A. Kuzmin, JETP Lett. [**4**]{}, 78 (1966) \[Pisma Zh. Eksp. Teor. Fiz. [**4**]{}, 114 (1966)\].
M. Takeda [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 1163 (1998) \[arXiv:astro-ph/9807193\].
It should be stressed that the most recent results reported by the HiRes Collaboration describe a spectrum which is consistent with the expected GZK feature. R. U. Abbasi [*et al.*]{} \[HiRes Collaboration\], Phys. Rev. Lett. [**92**]{}, 151101 (2004) \[arXiv:astro-ph/0208243\].
For a review, see [*e.g.*]{}, L. Anchordoqui, T. Paul, S. Reucroft and J. Swain, Int. J. Mod. Phys. A [**18**]{}, 2229 (2003) \[arXiv:hep-ph/0206072\].
G. R. Farrar, Phys. Rev. Lett. [**76**]{}, 4111 (1996) \[arXiv:hep-ph/9603271\].
V. Berezinsky, M. Kachelriess and S. Ostapchenko, Phys. Rev. D [**65**]{}, 083004 (2002) \[arXiv:astro-ph/0109026\].
N. Arkani-Hamed and S. Dimopoulos, arXiv:hep-th/0405159.
G. F. Giudice and A. Romanino, arXiv:hep-ph/0406088.
I. Antoniadis and S. Dimopoulos, arXiv:hep-th/0411032; B. Kors and P. Nath, arXiv:hep-th/0411201.
For general aspects of split SUSY, see [*e.g.,*]{} N. Arkani-Hamed, S. Dimopoulos, G.F. Giudice and A. Romanino arXiv:hep-ph/0409232.
L. Anchordoqui, H. Goldberg and C. Nunez, arXiv:hep-ph/0408284.
J. L. Hewett, B. Lillie, M. Masip and T. G. Rizzo, JHEP [**0409**]{}, 070 (2004) \[arXiv:hep-ph/0408248\].
J. Abraham [*et al.*]{} \[Pierre Auger Collaboration\], Nucl. Instrum. Meth. A [**523**]{} (2004) 50.
L. Anchordoqui, M. T. Dova, A. Mariazzi, T. McCauley, T. Paul, S. Reucroft and J. Swain, Annals Phys. [**314**]{}, 145 (2004) \[arXiv:hep-ph/0407020\].
R. S. Fletcher, T. K. Gaisser, P. Lipari and T. Stanev, Phys. Rev. D [**50**]{}, 5710 (1994); R. Engel, T. K. Gaisser and T. Stanev, Proc. 26th ICRC (Utah), [**1**]{}, 415 (1999).
N. N. Kalmykov, S. S. Ostapchenko and A. I. Pavlov, Nucl. Phys. Proc. Suppl. [**52B**]{}, 17 (1997).
J. Ranft, Phys. Rev. D [**51**]{}, 64 (1995).
L. A. Anchordoqui, M. T. Dova, L. N. Epele and S. J. Sciutto, Phys. Rev. D [**59**]{}, 094003 (1999) \[arXiv:hep-ph/9810384\].
S. J. Sciutto, arXiv:astro-ph/0106044. S. J. Sciutto, arXiv:astro-ph/9911331.
A. Arvanitaki, C. Davis, P. W. Graham, A. Pierce and J. G. Wacker, arXiv:hep-ph/0504210.
|
---
abstract: 'We develop a theory of insertion and deletion tolerance for point processes. A process is insertion-tolerant if adding a suitably chosen random point results in a point process that is absolutely continuous in law with respect to the original process. This condition and the related notion of deletion-tolerance are extensions of the so-called finite energy condition for discrete random processes. We prove several equivalent formulations of each condition, including versions involving Palm processes. Certain other seemingly natural variants of the conditions turn out not to be equivalent. We illustrate the concepts in the context of a number of examples, including Gaussian zero processes and randomly perturbed lattices, and we provide applications to continuum percolation and stable matching.'
address:
- 'Microsoft Research, 1 Microsoft Way, Redmond, WA 98052, USA'
- 'Department of Mathematics and Statistics, University of Victoria, PO BOX 3060 STN CSC, Victoria, BC V8W 3R4, Canada'
author:
- 'Alexander E. Holroyd'
- Terry Soo
date: '14 July 2010; revised 14 February 2011'
title: |
Insertion and Deletion Tolerance\
of Point Processes
---
[^1]
Introduction
============
Let $\Pi$ be a point process on ${{\mathbb R}}^d$. Point processes will always be assumed to be simple and locally finite. Let $\prec$ denote absolute continuity in law; that is, for random variables $X$ and $Y$ taking values in the same measurable space, $X \prec Y$ if and only if ${{\mathbb P}}(Y \in {{\mathcal A}}) = 0$ implies ${{\mathbb P}}(X \in {{\mathcal A}}) = 0$ for all measurable ${{\mathcal A}}$. Let ${{ {\mathfrak B} }}$ denote the Borel $\sigma$-algebra on ${{\mathbb R}}^d$ and let ${{\mathcal L}}$ be Lebesgue measure. We say that $\Pi$ is [[****]{}[insertion-tolerant]{}]{} if for every $S \in {{ {\mathfrak B} }}$ with ${{\mathcal L}}(S) \in (0, \infty)$, if $U$ is uniformly distributed on $S$ and independent of $\Pi$, then $$\Pi + \delta_U \prec \Pi,$$ where $\delta_x$ denotes the point measure at $x \in {{\mathbb R}}^d$.
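As an informal illustration (not part of the formal development; the function names are ours), the operation $\Pi + \delta_U$ can be sampled directly: draw a homogeneous Poisson configuration on the unit square and adjoin one independent uniform point of $S = [0,1]^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson(intensity, d=2):
    # Homogeneous Poisson process restricted to [0,1]^d: a Poisson
    # number of points, placed independently and uniformly.
    n = rng.poisson(intensity)
    return rng.random((n, d))

def insert_uniform_point(points, d=2):
    # A sample of Pi + delta_U with U uniform on S = [0,1]^d,
    # independent of the configuration.
    u = rng.random((1, d))
    return np.vstack([points, u])

pi = sample_poisson(50)
pi_plus = insert_uniform_point(pi)
print(pi_plus.shape[0] - pi.shape[0])  # exactly one point was inserted
```

For the Poisson process, the law of the augmented configuration is indeed absolutely continuous with respect to the original (Example \[poi\]); for general processes this is precisely what insertion-tolerance asserts.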
Let ${{\mathbb M}}$ denote the space of simple point measures on ${{\mathbb R}}^d$. The support of a measure $\mu \in {{\mathbb M}}$ is denoted by $$[\mu]:= { \left\{ {y \in {{\mathbb R}}^d : \mu({ \left\{ {y} \right\} }) =1} \right\} }.$$ A [[****]{}[$\boldsymbol{\Pi}$-point]{}]{} is an ${{\mathbb R}}^d$-valued random variable $Z$ such that $Z \in [\Pi]$ a.s. A [[****]{}[finite subprocess]{}]{} of $\Pi$ is a point process ${ {\mathcal F} }$ such that ${ {\mathcal F} }({{\mathbb R}}^d) < \infty$ and $[{ {\mathcal F} }]
\subseteq { [\Pi] }$ a.s. We say that $\Pi$ is [[****]{}[deletion-tolerant]{}]{} if for any $\Pi$-point $Z$ we have $$\Pi -\delta_Z \prec \Pi.$$ For $S \in {{ {\mathfrak B} }}$ we define the restriction $\mu {{|}}_S$ of $\mu \in {{\mathbb M}}$ to $S$ by $$\mu {{|}}_S(A) := \mu(A \cap S), \quad \ A \in {{ {\mathfrak B} }}.$$
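On finite samples the restriction operation is straightforward to realize; the sketch below (our notation, for illustration only) represents a configuration as an array of points and the set $S$ by its indicator function.

```python
import numpy as np

def restrict(points, in_S):
    # The restriction mu|_S keeps exactly those points of the
    # configuration lying in S, given here by an indicator function.
    mask = np.array([bool(in_S(x)) for x in points], dtype=bool)
    return points[mask]

pts = np.array([[0.2, 0.1], [2.5, 0.0], [3.0, 4.0]])
inside = restrict(pts, lambda x: np.linalg.norm(x) < 1.0)    # mu|_{B(0,1)}
outside = restrict(pts, lambda x: np.linalg.norm(x) >= 1.0)  # mu|_{B(0,1)^c}
print(len(inside), len(outside))  # 1 2
```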
We will prove the following equivalences for insertion-tolerance and deletion-tolerance.
\[equiv\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$. The following are equivalent.
(i) The point process $\Pi$ is deletion-tolerant.
(ii) \[minusF\] For any finite subprocess ${ {\mathcal F} }$ of $\Pi$, we have $\Pi -{ {\mathcal F} } \prec \Pi$.
(iii) \[S\] For all $S \in {{ {\mathfrak B} }}$ with finite Lebesgue measure, $\Pi {{|}}_{S^c} \prec \Pi$.
\[thm-instol-eq\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$. The following are equivalent.
(i) The point process $\Pi$ is insertion-tolerant.
(ii) \[finiteadd\] For any Borel sets $S_1, \ldots, S_n$ of positive finite Lebesgue measure, if $U_i$ is a uniformly random point in $S_i$, with $U_1, \ldots, U_n$ and $\Pi$ all independent, then $\Pi
+ \sum_{i=1} ^n \delta_{U_i} \prec \Pi$.
(iii) \[weakcond\] If $(X_1, \ldots, X_n)$ is a random vector in $({{\mathbb R}}^d)^n$ that admits a conditional law given $\Pi$ that is absolutely continuous with respect to Lebesgue measure a.s., then $\Pi + \sum_{i=1} ^n
\delta_{X_i} \prec \Pi$.
In fact we will prove a stronger variant of Theorem \[thm-instol-eq\], in which (ii),(iii) are replaced with a condition involving the insertion of a [*random*]{} finite number of points.
We say that a point process is [[****]{}[translation-invariant]{}]{} if it is invariant in law under all translations of ${{\mathbb R}}^d$. In this case further equivalences are available as follows.
\[anyS\] A translation-invariant point process $\Pi$ on ${{\mathbb R}}^d$ is insertion-tolerant if and only if there exists $S\in {{ {\mathfrak B} }}$ with ${{\mathcal L}}(S)
\in (0, \infty)$ such that, if $U$ is uniformly distributed in $S$ and independent of $\Pi$, then $\Pi +\delta_U \prec \Pi$.
Let $\Pi$ be a translation-invariant point process with finite intensity; that is, ${{\mathbb E}}\Pi([0,1]^d) < \infty$. We let $\Pi^*$ be its [*Palm version*]{}. See Section \[palm\] or [@MR1876169 Chapter 11] for a definition. Informally, one can regard $\Pi^*$ as the point process $\Pi$ conditioned to have a point at the origin.
\[thm-instol-stat-eq\] Let $\Pi$ be a translation-invariant ergodic point process of finite intensity on ${{\mathbb R}}^d$ and let $\Pi ^{*}$ be its Palm version. The following are equivalent.
(i) The point process $\Pi$ is insertion-tolerant.
(ii) \[original\] $\Pi + \delta_0 \prec \Pi^*$.
Condition (\[palmd\]) below appears to be the natural analogue of Theorem \[thm-instol-stat-eq\] for deletion-tolerance. However, it is only sufficient and not necessary for deletion-tolerance.
\[suff\] Let $\Pi$ be a translation-invariant point process of finite intensity on ${{\mathbb R}}^d$ and let $\Pi^{*}$ be its Palm version. If $$\label{palmd}
\Pi^{*} -\delta_0 \prec \Pi,$$ then $\Pi$ is deletion-tolerant.
In Section \[examples\], Example \[site\] shows that a deletion-tolerant process need not satisfy , while Example \[counterS\] shows that the natural analogue of Proposition \[anyS\] fails for deletion-tolerance.
\[gen\] Invariant point processes and their Palm versions can be defined on more general spaces than ${{\mathbb R}}^d$. See [@MR2371524; @MR818219; @MR2322698; @newlast; @lastjthp] for more information. For concreteness and simplicity, we have chosen to state and prove Theorems \[equiv\], \[thm-instol-eq\], \[thm-instol-stat-eq\] and \[suff\] in the setting of ${{\mathbb R}}^d$, but they can easily be adapted to any complete separable metric space endowed with a group of symmetries that acts transitively and continuously on it, together with the associated Haar measure. We will make use of this generality when we discuss Gaussian zero processes on the hyperbolic plane in Proposition \[gausszeros\]. [$\Diamond$]{}
Next we will illustrate some applications of insertion-tolerance and deletion-tolerance in the contexts of continuum percolation and stable matchings. We will prove generalizations of earlier results.
The Boolean continuum percolation model for point processes is defined as follows (see [@roy]). Let $\| \cdot\|$ denote the Euclidean norm on ${{\mathbb R}}^d$. For $R > 0$ and $\mu \in {{\mathbb M}}$, consider the set ${ {\mathcal O } }(\mu):= \cup_{x \in { [\mu] } } B(x,R)$, where $B(x,R):=
{ \left\{ {y \in {{\mathbb R}}^d: \|x-y\| < R} \right\} }$ is the open ball of radius $R$ with center $x$. We call ${ {\mathcal O } }(\mu)$ the [[****]{}[occupied region]{}]{}. The connected components of ${ {\mathcal O } }(\mu)$ are called [[****]{}[clusters]{}]{}.
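Since two balls $B(x,R)$ and $B(y,R)$ overlap exactly when $\|x-y\| < 2R$, the clusters of a finite sample can be computed by a union-find pass over pairs of points. The following sketch (illustrative only; quadratic in the number of points) counts the clusters of ${ {\mathcal O } }(\mu)$ for a finite configuration.

```python
import numpy as np

def occupied_clusters(points, R):
    # Partition the points into clusters of the occupied region,
    # the union of open balls B(x, R): two balls overlap iff
    # ||x - y|| < 2R.  Plain union-find with path halving.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < 2 * R:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
print(occupied_clusters(pts, R=0.6))  # first two balls overlap: 2 clusters
```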
\[percuniq\] Let $\Pi$ be a translation-invariant ergodic insertion-tolerant point process on ${{\mathbb R}}^d$. For any $R > 0$, the occupied region ${ {\mathcal O } }(\Pi)$ has at most one unbounded cluster a.s.
The proof of Theorem \[percuniq\] is similar to the uniqueness proofs in [@roy Chapter 7] which in turn are based on the argument of Burton and Keane [@burtonkeane].
Next we turn our attention to stable matchings of point processes (see [@random] for background). Let ${ {\mathcal R} }$ and ${ {\mathcal B} }$ be (‘red’ and ‘blue’) point processes on ${{\mathbb R}}^d$ with finite intensities. A [[****]{}[one-colour matching scheme]{}]{} for ${ {\mathcal R} }$ is a point process ${ {\mathcal M} }$ of unordered pairs ${ \left\{ {x,y} \right\} } \subset {{\mathbb R}}^d$ such that almost surely $[{ {\mathcal M} }]$ is the edge set of a simple graph $([{ {\mathcal R} }],
[{ {\mathcal M} }])$ in which every vertex has degree exactly one. Similarly, a [[****]{}[two-colour matching scheme]{}]{} for ${ {\mathcal R} }$ and ${ {\mathcal B} }$ is a point process ${ {\mathcal M} }$ of unordered pairs ${ \left\{ {x,y} \right\} } \subset {{\mathbb R}}^d$ such that almost surely, $[{ {\mathcal M} }]$ is the edge set of a simple bipartite graph $([{ {\mathcal R} }], [{ {\mathcal B} }], [{ {\mathcal M} }])$ in which every vertex has degree exactly one. In either case we write ${ {\mathcal M} }(x)=y$ if and only if ${ \left\{ {x,y} \right\} } \in [{ {\mathcal M} }]$. In the one-colour case, we say that a matching scheme is [[****]{}[stable]{}]{} if almost surely there do not exist distinct points $x,y \in [{ {\mathcal R} }]$ satisfying $$\label{defstable}
\|x-y\| < \min{ \left\{ { \|x - { {\mathcal M} }(x)\|, \|y - { {\mathcal M} }(y)\|} \right\} },$$ while in the two-colour case we say that a matching scheme is [[****]{}[stable]{}]{} if almost surely there do not exist $x \in [{ {\mathcal R} }]$ and $y \in
[{ {\mathcal B} }]$ satisfying (\[defstable\]). These definitions arise from the concept of stable marriage as introduced by Gale and Shapley [@galeshapley].
It is proved in [@random] that stable matching schemes exist and are unique for point processes that satisfy certain mild restrictions, as we explain next. Let $\mu \in {{\mathbb M}}$. We say that $\mu$ has a [[****]{}[descending chain]{}]{} if there exist $x_1, x_2, \ldots \in [\mu]$ with $$\|x_{i-1} - x_i\| > \|x_i - x_{i+1}\| \ \text{for all} \ i.$$ We say that $\mu$ is [[****]{}[non-equidistant]{}]{} if for all $x,y,u,v
\in { [\mu] }$ such that ${ \left\{ {x,y} \right\} } \not = { \left\{ {u,v} \right\} }$ and $x \not = y$ we have $\|x-y\| \not = \|u-v\|$. The following facts are proved in [@random Proposition 9]. Suppose that ${ {\mathcal R} }$ is a translation-invariant point process on ${{\mathbb R}}^d$ with finite intensity that almost surely is non-equidistant and has no descending chains. Then there exists a one-colour stable matching scheme which is an isometry-equivariant factor of ${ {\mathcal R} }$; this matching scheme may be constructed by a simple procedure of iteratively matching, and removing, mutually-closest pairs of ${ {\mathcal R} }$-points; furthermore, any two one-colour stable schemes agree almost surely [@random Proposition 9]. In this case we refer to the above-mentioned scheme as [*the*]{} one-colour stable matching scheme. Similarly, in the two-colour case, let ${ {\mathcal R} }$ and ${ {\mathcal B} }$ be point processes on ${{\mathbb R}}^d$ of equal finite intensity, jointly invariant and ergodic under translations, and suppose that ${ {\mathcal R} } + { {\mathcal B} }$ is a simple point process that is almost surely non-equidistant and has no descending chains. Then there exists an almost surely unique two-colour stable matching scheme, which is an isometry-equivariant factor and may be constructed by iteratively matching mutually-closest ${ {\mathcal R} }$ / ${ {\mathcal B} }$ pairs.
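The iterative construction just described is easy to carry out on a finite non-equidistant configuration, using the observation that the globally closest pair is always mutually closest. The sketch below is an illustration of the one-colour procedure on a finite set only (the paper's scheme applies to infinite translation-invariant processes).

```python
import numpy as np

def stable_matching(points):
    # One-colour stable matching of a finite non-equidistant set:
    # repeatedly match and remove the globally closest pair, which is
    # necessarily mutually closest.  An odd number of points leaves
    # one point unmatched.
    pts = {i: np.asarray(p, dtype=float) for i, p in enumerate(points)}
    matches = []
    while len(pts) >= 2:
        keys = list(pts)
        best = min(
            ((i, j) for a, i in enumerate(keys) for j in keys[a + 1:]),
            key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]),
        )
        matches.append(best)
        del pts[best[0]], pts[best[1]]
    return matches

m = stable_matching([(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.3, 0.0)])
print(m)  # [(0, 1), (2, 3)]
```

No two matched pairs form a blocking pair in the sense of (\[defstable\]): points 0 and 1 are at distance $0.1$, strictly closer to each other than to anything else.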
Homogeneous Poisson processes are non-equidistant and have no descending chains (see [@haggstom-meester]). Descending chains are investigated in detail in [@jones], where it is shown in particular that they are absent in many well-studied point processes.
In this paper, our interest in stable matching lies in the typical distance between matched pairs. Let ${ {\mathcal M} }$ be the one-colour stable matching scheme for ${ {\mathcal R} }$. Consider the distribution function $$\label{defdist}
F(r):= \big({{\mathbb E}}{ {\mathcal R} }([0,1)^d) \big)^{-1}{{\mathbb E}}\#{ \left\{ {x \in [{ {\mathcal R} }] \cap [0,1)^d:
\|x - { {\mathcal M} }(x) \| \leq r} \right\} }.$$ As in [@random], let ${ { X} }$ be a random variable with probability measure ${{\mathbb P}}^{*}$ and expectation operator ${{\mathbb E}}^{*}$ such that ${{\mathbb P}}^{*}(X \leq
r) = F(r)$ for all $r \geq 0$. One may interpret ${ { X} }$ as the distance from the origin to its partner under the Palm version of $({ {\mathcal R} },
{ {\mathcal M} })$ in which we condition on the presence of an ${ {\mathcal R} }$-point at the origin; see [@random] for details. For the two-colour stable matching scheme of point processes ${ {\mathcal R} },{ {\mathcal B} }$ we define $X$, ${{\mathbb P}}^{*}$, and ${{\mathbb E}}^{*}$ in the same way.
\[onethm\] Let ${ {\mathcal R} }$ be a translation-invariant ergodic point process on ${{\mathbb R}}^d$ with finite intensity that almost surely is non-equidistant and has no descending chains. If ${ {\mathcal R} }$ is insertion-tolerant or deletion-tolerant, then the one-colour stable matching scheme satisfies ${{\mathbb E}}^{*} { { X} }^d = \infty$.
\[twothm\] Let ${ {\mathcal R} }$ and ${ {\mathcal B} }$ be independent translation-invariant ergodic point processes on ${{\mathbb R}}^d$ with equal finite intensity such that the point process ${ {\mathcal R} } +{ {\mathcal B} }$ is non-equidistant and has no descending chains. If ${ {\mathcal R} }$ or ${ {\mathcal B} }$ is deletion-tolerant or insertion-tolerant, then the two-colour stable matching scheme satisfies ${{\mathbb E}}^{*} { { X} }^d = \infty$.
Theorems \[onethm\] and \[twothm\] strengthen the earlier results in [@random] in the following ways. In [@random], Theorem \[onethm\] is proved in the case of homogeneous Poisson processes, but the same proof is valid under the condition that ${ {\mathcal R} }$ is [*both*]{} insertion-tolerant [*and*]{} deletion-tolerant. Similarly, in [@random], Theorem \[twothm\] is proved in the Poisson case, but the proof applies whenever ${ {\mathcal R} }$ or ${ {\mathcal B} }$ is insertion-tolerant. Related results also appear in [@Stable-PL Theorems 32,33].
The following complementary bound is proved in [@random] for Poisson processes, but again the proof given there applies more generally as follows.
\[thmfiverandom\] Let ${ {\mathcal R} }$ be a translation-invariant ergodic non-equidistant point process on ${{\mathbb R}}^d$ with no descending chains, and unit intensity. The one-colour stable matching scheme satisfies ${{\mathbb P}}^{*}(X>r) \leq Cr^{-d}$ for all $r > 0$, for some constant $C =C(d)$ that does not depend on ${ {\mathcal R} }$.
Thus, Theorems \[onethm\] and \[thmfiverandom\] provide strikingly close upper and lower bounds on $X$ for the one-colour stable matching schemes of a wide range of point processes. For two-colour stable matching, even in the case of two independent Poisson processes, the correct power law for the tail of $X$ is unknown in dimensions $d\geq 2$; for $d=1$ the bounds ${{\mathbb E}}^{*}
X^{\frac{1}{2}} = \infty$ and ${{\mathbb P}}^{*}(X > r) \leq Cr^{-1/2}$ hold. See [@random] for details.
The rest of the paper is organized as follows. In Section \[examples\] we present examples. In Section \[easy\] we prove some of the simpler results including Theorems \[equiv\] and \[thm-instol-eq\]. Despite the similarities between insertion-tolerance and deletion-tolerance, the proof of Theorem \[thm-instol-eq\] relies on the following natural lemma, whose analogue for deletion-tolerance is false (see Example \[nonmono\]).
\[monofinite\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$ and let $S \in
{{ {\mathfrak B} }}$ have finite nonzero Lebesgue measure. If $\Pi$ is insertion-tolerant, and $U$ is uniformly distributed in $S$ and independent of $\Pi$, then $\Pi + \delta_U$ is insertion-tolerant.
Section \[palm\] deals with Theorems \[thm-instol-stat-eq\] and \[suff\]. In Sections \[contperc\] and \[stableM\] we prove the results concerning continuum percolation and stable matchings. Section \[perproof\] provides proofs relating to some of the more elaborate examples from Section \[examples\].
Examples
========
First, we give examples of (translation-invariant) point processes that possess various combinations of insertion-tolerance and deletion-tolerance. We also provide examples to show that certain results concerning insertion-tolerance do not have obvious analogues in the setting of deletion-tolerance. Second, we give examples to show that the conditions in the results concerning continuum percolation and stable matching are needed. Finally, we provide results on perturbed lattice processes and Gaussian zeros processes on the Euclidean and hyperbolic planes.
Elementary examples
-------------------
\[poi\] [*[The homogeneous Poisson point process $\Pi$ on ${{\mathbb R}}^d$ is both insertion-tolerant and deletion-tolerant. This follows immediately from Theorem \[thm-instol-stat-eq\] (\[original\]) and Theorem \[suff\] and the relation $$\Pi ^{*} {\stackrel{d}{=}}\Pi + \delta_0.$$ It is also easy to give a direct proof of insertion-tolerance and to prove deletion-tolerance via Theorem \[equiv\] (\[S\]). [$\Diamond$]{}]{}*]{}
For $S \subseteq {{\mathbb R}}^d$ and $x \in {{\mathbb R}}^d$, write $x + S :={ \left\{ {x + z: z \in
S} \right\} }$.
\[firsteg\] [*[Let $U$ be uniformly distributed in $[0,1]^d$. Consider the point process given by $[\Lambda]:= U + {{\mathbb Z}}^d$. Clearly, $\Lambda$ is translation-invariant. Since no ball of radius $1/4$ can contain more than one $\Lambda$-point, by Theorem \[thm-instol-eq\] , $\Lambda$ is not insertion-tolerant. Also the cube $[0,1]^d$ must contain $\Lambda$-points, so by Theorem \[equiv\] , $\Lambda$ is not deletion-tolerant.]{}*]{} [$\Diamond$]{}
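The geometric fact underlying Example \[firsteg\] is that the nearest-neighbour distance in $\Lambda$ is identically $1$, so no ball of radius $1/4$ ever holds two points. A short numerical check (illustration only), on a finite window:

```python
import numpy as np

rng = np.random.default_rng(1)

def shifted_lattice(n, d=2):
    # Points of U + Z^d falling in the window [0, n)^d,
    # with U uniform on [0,1)^d.
    u = rng.random(d)
    grid = np.stack(
        np.meshgrid(*[np.arange(n)] * d, indexing="ij"), axis=-1
    ).reshape(-1, d)
    return grid + u

pts = shifted_lattice(5)
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
print(dists.min())  # ~1.0 up to floating-point rounding
```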
\[site\] [*[Let ${ \left\{ {Y_z} \right\} }_{z \in {{\mathbb Z}}^d}$ be i.i.d. ${ \left\{ {0,1} \right\} }$-valued random variables with ${{\mathbb E}}Y_0 =p \in(0,1)$. Consider the random set $W:={ \left\{ {z \in {{\mathbb Z}}^d: Y_z=1} \right\} }$. Let $U$ be uniformly distributed in $[0,1]^d$ and independent of $W$. From Theorem \[equiv\], it is easy to see that $\Lambda$ given by $[\Lambda]:= U + W$ is deletion-tolerant. Clearly, as in Example \[firsteg\], $\Lambda$ is not insertion-tolerant. Moreover, it is easy to verify that almost surely $[\Lambda] \cap {{\mathbb Z}}^d = \emptyset$, but $[\Lambda^*] \subset {{\mathbb Z}}^d$. Thus (\[palmd\]) is not satisfied. ]{}*]{} [$\Diamond$]{}
\[Superposition of a Poisson point process with a randomly shifted lattice\] [*[Let $\Pi$ be a Poisson point process on ${{\mathbb R}}^d$ and let $\Lambda$ be a randomly shifted lattice (as in Example \[firsteg\]) that is independent of $\Pi$. Consider the point process $\Gamma:= \Pi + \Lambda$. The insertion-tolerance of $\Pi$ is inherited by $\Gamma$, but $\Gamma$ is no longer deletion-tolerant. As in Example \[firsteg\], $[0,1]^d$ must contain $\Gamma$-points.]{}*]{} [$\Diamond$]{}
We show that in contrast with Lemma \[monofinite\], deleting a point from a deletion-tolerant process may destroy deletion-tolerance. Let $(N_i)_{i\in{{\mathbb Z}}}$ be i.i.d., taking values $0,1,2$ each with probability $1/3$, and let $\Pi$ have exactly $N_i$ points in the interval $[i,i+1)$, for each $i\in{{\mathbb Z}}$, with their locations chosen independently and uniformly at random in the interval. It is easy to verify that $\Pi$ is deletion-tolerant using Theorem \[equiv\] .
Consider the $\Pi$-point $Z$ defined as follows. If the first integer interval $[i, i+1)$ to the right of the origin that contains at least one $\Pi$-point contains exactly two $\Pi$-points, then let $Z$ be the point in this interval that is closest to the origin; otherwise, let $Z$ be the closest $\Pi$-point to the left of the origin. The point process $\Pi' = \Pi
-\delta_Z$ has the property that the first interval to the right of the origin that contains any $\Pi'$-points contains exactly one $\Pi'$-point.
Let $Z'$ be the first $\Pi'$-point to the right of the origin. The process $\Pi ^{\prime \prime} :=\Pi' - \delta_{Z'}$ has the property that with non-zero probability the first interval to the right of the origin that contains any $\Pi ^{\prime \prime}$-points contains exactly two $\Pi ^{\prime
\prime}$-points. Thus $\Pi'$ is not deletion-tolerant.
If desired, the above example can be made translation-invariant by applying a random shift $U$ as before.
[$\Diamond$]{}\[nonmono\]
\[One set $S$ satisfying $\Pi {{|}}_{S^c} \prec \Pi$ does not suffice for deletion-tolerance\] [*Let $\Lambda$ be a randomly shifted lattice in $d=1$ (as in Example \[firsteg\]) and let $\Pi$ be a Poisson point process on ${{\mathbb R}}$ of intensity 1 that is independent of $\Lambda$. Let $Y:=\cup_{x \in [\Pi]} B(x, 5)$, and consider $\Gamma:=\Lambda{{|}}_{Y^c}$. Let $Z$ be the first $\Gamma$-point to the right of the origin such that $Z + i \in [\Gamma]$ for all integers $i$ with $|i| \leq 20$. Clearly, $\Gamma -\delta_Z \not \prec \Gamma$ and thus $\Gamma$ is not deletion-tolerant. On the other hand, since $\Pi$ is insertion-tolerant, $\Gamma{{|}}_{B(0,5)^c} \prec \Gamma$. (Note the contrast with Proposition \[anyS\] for insertion-tolerance.)* ]{}[$\Diamond$]{}\[counterS\]
Continuum percolation and stable matching
-----------------------------------------
\[A point process that is neither insertion-tolerant nor deletion-tolerant and has infinitely many unbounded clusters\] [*[Let ${ \left\{ {Y_z} \right\} }_{z \in {{\mathbb Z}}}$ be i.i.d. ${ \left\{ {0,1} \right\} }$-valued random variables with ${{\mathbb E}}Y_0 = \frac{1}{2}$. Let $$W:= { \left\{ {(x_1, x_2) \in {{\mathbb Z}}^2: Y_{x_2} =1} \right\} }$$ and let $U$ be uniformly distributed in $[0,1]^2$ and independent of $W$. Consider the point process $\Lambda$ with support $U + W$. Thus $\Lambda$ is a randomly shifted lattice with columns randomly deleted. As in Example \[firsteg\], $\Lambda$ is neither insertion-tolerant nor deletion-tolerant. In the continuum percolation model with parameter $R=2$, the occupied region ${ {\mathcal O } }(\Lambda)$ has infinitely many unbounded clusters. [$\Diamond$]{}]{}*]{}
\[A point process that is not insertion-tolerant, but is deletion-tolerant and has infinitely many unbounded clusters\] [*[Let $\Lambda$ be a randomly shifted super-critical site percolation in $d=2$, as in Example \[site\]. Let ${ \left\{ {\Lambda_i} \right\} }_{i \in {{\mathbb Z}}}$ be independent copies of $\Lambda$. Let ${ \left\{ {Y_z} \right\} }_{z \in {{\mathbb Z}}}$ be i.i.d. ${ \left\{ {0,1} \right\} }$-valued random variables independent of $\Lambda$ with ${{\mathbb E}}Y_0 = \frac{1}{2}$. Consider the point process $\Gamma$ with support $$[\Gamma]=\bigcup_{i \in {{\mathbb Z}}: \,Y_i=1}
[\Lambda_i] \times { \left\{ {i} \right\} }.$$ Thus $\Gamma$ is a point process in ${{\mathbb R}}^3$, obtained by stacking independent copies of $\Lambda$. Clearly, the point process $\Gamma$ is deletion-tolerant, but not insertion-tolerant. With $R=2$, the occupied region ${ {\mathcal O } }(\Gamma)$ has infinitely many unbounded clusters. [$\Diamond$]{}]{}*]{}
[*[Let $W={ \left\{ {W_i} \right\} }_{i \in {{\mathbb Z}}^d}$ and $Y={ \left\{ {Y_i} \right\} }_{i \in {{\mathbb Z}}^d}$ be all i.i.d. random variables uniformly distributed in $B(0,1/4)$. Let $U$ be uniformly distributed in $[0,1]^d$ and independent of $W,Y$. Let ${ {\mathcal R} }$ be the point process with support $$[{ {\mathcal R} }] = U + { \left\{ {i + W_i,i + Y_i: i \in {{\mathbb Z}}^d} \right\} }.$$ It is easy to verify that ${ {\mathcal R} }$ is neither insertion-tolerant nor deletion-tolerant, and that ${ {\mathcal R} }$ has no descending chains and is non-equidistant. The one-colour stable matching scheme satisfies $\|x-{ {\mathcal M} }(x)\| < \tfrac12$ for all $x \in [{ {\mathcal R} }]$ (in contrast with the conclusion in Theorem \[onethm\]). ]{}*]{} [$\Diamond$]{}\[needone\]
[*Let ${ {\mathcal R} }$ and ${ {\mathcal B} }$ be two independent copies of the randomly shifted lattice ${{\mathbb Z}}$ in $d=1$ as defined in Example \[firsteg\]. Although ${ {\mathcal R} } + { {\mathcal B} }$ is not non-equidistant, it is easy to verify that there is an a.s. unique two-colour stable matching scheme for ${ {\mathcal R} }$ and ${ {\mathcal B} }$, and it satisfies $\|x -{ {\mathcal M} }(x)\| <
\tfrac{1}{2}$ for all $x \in [{ {\mathcal R} }]$.*]{} [$\Diamond$]{}
Perturbed lattices and Gaussian zeros
-------------------------------------
The proofs of the results stated below are given in Section \[perproof\].
\[pert\] [*[Let ${ \left\{ {Y_z} \right\} }_{z \in {{\mathbb Z}}^d}$ be i.i.d. ${{\mathbb R}}^d$-valued random variables. Consider the point process $\Lambda$ given by $$[\Lambda]:=
{ \left\{ {z + Y_z: z \in {{\mathbb Z}}^d} \right\} }.$$ Note that $\Lambda$ is invariant and ergodic under shifts of ${{\mathbb Z}}^d$. It is easy to see that (for all dimensions $d$) if $Y_0$ has bounded support, then $\Lambda$ is neither insertion-tolerant nor deletion-tolerant. Indeed, in this case we have $\Lambda(B(0,1)) \leq M$ for some constant $M<\infty$, so, by Theorem \[thm-instol-eq\] , $\Lambda$ is not insertion-tolerant (otherwise we could add $M+1$ random points in $B(0,1)$). Also, $\Lambda(B(0,N)) \geq 1$, for some $N<\infty$, so Theorem \[equiv\] shows that $\Lambda$ is not deletion-tolerant. [$\Diamond$]{}]{}*]{}
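The counting bound used in Example \[pert\] is easy to observe in a one-dimensional simulation: with perturbations bounded by $1/4$, only the lattice sites $z \in \{-1,0,1\}$ can land in the interval $(-1,1)$, so $B(0,1)$ never contains more than $3$ points. A sketch (illustration only; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def perturbed_lattice_1d(n, scale):
    # Points z + Y_z for z = -n..n, with Y_z i.i.d. uniform
    # on [-scale, scale] (a bounded perturbation).
    z = np.arange(-n, n + 1)
    return z + rng.uniform(-scale, scale, size=z.shape)

pts = perturbed_lattice_1d(1000, scale=0.25)
# With |Y_z| <= 1/4, only z in {-1, 0, 1} can land in (-1, 1).
count = int(np.sum(np.abs(pts) < 1))
print(count)  # at most 3
```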
For dimensions $1$ and $2$ we can say more.
\[pertone\] Let $[\Lambda]:= { \left\{ {z + Y_z: z \in {{\mathbb Z}}^d} \right\} }$ for i.i.d. ${ \left\{ {Y_z} \right\} }_{z \in {{\mathbb Z}}^d}$. For $d=1,2$, if ${{\mathbb E}}\|Y_0\|^d < \infty$, then $\Lambda$ is neither insertion-tolerant nor deletion-tolerant.
Does there exist a distribution for the perturbation $Y_0$ such that the resulting perturbed lattice is insertion-tolerant? In particular, in the case $d=1$, does this hold whenever $Y_0$ has infinite mean? What are the possible combinations of insertion-tolerance and deletion-tolerance for perturbed lattices? Allan Sly has informed us that he has made progress on these questions.
Perturbed lattice models were considered by Sodin and Tsirelson [@MR2121537] as simplified models to illustrate certain properties of Gaussian zero processes (which we will discuss next). Our proof of Proposition \[pertone\] is in part motivated by their remarks, and similar proofs have also been suggested by Omer Angel and Yuval Peres (personal communications).
The Gaussian zero processes on the plane and hyperbolic planes are defined as follows (see [@MR2552864; @MR2121537] for background). Let ${ \left\{ {a_n} \right\} }_{n=0}
^ {\infty}$ be i.i.d. standard complex Gaussian random variables with probability density $\pi^{-1}\exp(-|z|^2)$ with respect to Lebesgue measure on the complex plane. Firstly, consider the entire function $$\label{planef}
f(z) := \sum_{n=0} ^ {\infty} \frac{a_n}{\sqrt{n!}} z^n.$$ The set of zeros of $f$ forms a translation-invariant point process ${\Upsilon_{\mathbb{C}}}$ in the complex plane. Secondly, consider the analytic function on the unit disc $\mathbb{D}:={ \left\{ {z \in \mathbb{C} : |z| < 1} \right\} }$ given by $$\label{hypf}
g(z):= \sum_{n=0} ^ {\infty} a_n z^n.$$ The set of zeros of $g$ forms a point process ${\Upsilon_{\mathbb{D}}}$. We endow $\mathbb{D}$ with the hyperbolic metric $|dz| / (1 - |z|^2)$ and the group of symmetries $G$ given by the maps $z \mapsto (az + b) /(\bar{b}z + \bar{a})$, where $a,
b\in \mathbb{C}$ and $|a|^2 - |b|^2 =1$. Then ${\Upsilon_{\mathbb{D}}}$ is invariant in law under the action of $G$.
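Samples approximating ${\Upsilon_{\mathbb{C}}}$ can be produced by truncating the series to a polynomial and computing its roots; this yields only a finite-degree approximation of $f$, not the process itself. A numpy sketch (illustration only):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def gaf_zeros(n_terms):
    # Zeros of the degree-(n_terms - 1) truncation of the planar GAF
    # f(z) = sum_n a_n z^n / sqrt(n!), with a_n standard complex
    # Gaussians of density exp(-|z|^2)/pi.
    a = (rng.standard_normal(n_terms)
         + 1j * rng.standard_normal(n_terms)) / np.sqrt(2)
    coeffs = a / np.sqrt([float(math.factorial(k)) for k in range(n_terms)])
    return np.roots(coeffs[::-1])  # np.roots expects highest degree first

zeros = gaf_zeros(30)
print(len(zeros))  # a degree-29 polynomial has 29 zeros
```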
The following two facts were suggested to us by Yuval Peres, and are consequences of results of [@MR2121537] and [@MR2231337] respectively.
\[GAFplane\] The Gaussian zero process ${\Upsilon_{\mathbb{C}}}$ on the plane is neither insertion-tolerant nor deletion-tolerant.
\[gausszeros\] The Gaussian zero process ${\Upsilon_{\mathbb{D}}}$ on the hyperbolic plane is both insertion-tolerant and deletion-tolerant.
Basic results {#easy}
=============
In this section we prove elementary results concerning insertion-tolerance and deletion-tolerance. The following simple application of Fubini’s theorem will be useful. Recall that ${{\mathcal L}}$ denotes Lebesgue measure.
\[fubini\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$. If $S \in {{ {\mathfrak B} }}$ is a set of positive finite measure and $U$ is uniformly distributed in $S$ and independent of $\Pi$, then
$\displaystyle{{\mathbb P}}( \Pi + \delta_U \in \cdot) = \frac{1}{{{\mathcal L}}(S)}\int_S
{{\mathbb P}}(\Pi + \delta_x\in \cdot)\,dx.$ $\Diamond$
Let ${ {\mathfrak M} }$ be the product $\sigma$-field on ${{\mathbb M}}$. For ${{\mathcal A}}\in { {\mathfrak M} }$ and $x\in{{\mathbb R}}^d$, we set $${{\mathcal A}}^x := \{ \mu \in {{\mathbb M}}: \mu + \delta_x \in {{\mathcal A}}\}.$$ Thus ${{\mathcal A}}^{x}$ is the set of point measures for which adding a point at $x$ results in an element of ${{\mathcal A}}$.
Let $\Pi$ be insertion-tolerant. We first show that for almost all $x \in
{{\mathbb R}}^d$ the point process $\Pi + \delta_x$ is insertion-tolerant. The proof follows from the definition of ${{\mathcal A}}^x$. Let $V$ be uniformly distributed in $S' \in {{ {\mathfrak B} }}$ and independent of $\Pi$. Suppose ${{\mathcal A}}\in { {\mathfrak M} }$ is such that $${{\mathbb P}}(\Pi + \delta_x + \delta_V \in {{\mathcal A}}) = {{\mathbb P}}(\Pi + \delta_V \in {{\mathcal A}}^x) >
0.$$ Since $\Pi$ is insertion-tolerant, $0<{{\mathbb P}}(\Pi \in {{\mathcal A}}^x)={{\mathbb P}}(\Pi+ \delta_x
\in {{\mathcal A}})$.
Next, let $U$ be uniformly distributed in $S \in {{ {\mathfrak B} }}$ and independent of $(\Pi, V)$. Let ${{\mathbb P}}(\Pi + \delta_U \in {{\mathcal A}})=0$, for some ${{\mathcal A}}\in { {\mathfrak M} }$. By Remark \[fubini\], ${{\mathbb P}}(\Pi + \delta_x \in {{\mathcal A}}) = 0$ for almost all $x
\in S$, and since $\Pi + \delta_x$ is insertion-tolerant for almost all $x
\in {{\mathbb R}}^d$, we deduce that ${{\mathbb P}}(\Pi + \delta_x + \delta_V \in {{\mathcal A}}) =0$ for almost all $x \in S$. Applying Remark \[fubini\] to the process $\Pi +
\delta_V$, we obtain ${{\mathbb P}}(\Pi + \delta_U +\delta_V \in {{\mathcal A}}) =0$.
With Lemma \[monofinite\] we prove that insertion-tolerance implies the following stronger variant of Theorem \[thm-instol-eq\] in which we allow the number of points added to be random. If $(X_1, \ldots, X_n)$ is a random vector in $({{\mathbb R}}^d)^n$ with law that is absolutely continuous with respect to Lebesgue measure, then we say that the random (unordered) set ${ \left\{ {X_1,
\ldots, X_n} \right\} }$ is [[****]{}[nice]{}]{}. A finite point process ${ {\mathcal F} }$ is [[****]{}[nice]{}]{} if for all $n \in {{\mathbb N}}$, conditional on ${ {\mathcal F} }({{\mathbb R}}^d) = n$, the support $[{ {\mathcal F} }]$ is equal in distribution to some nice random set; we also say that the law of ${ {\mathcal F} }$ is nice if ${ {\mathcal F} }$ is nice.
\[weak\] Let $\Pi$ be an insertion-tolerant point process on ${{\mathbb R}}^d$ and let ${ {\mathcal F} }$ be a finite point process on ${{\mathbb R}}^d$. If ${ {\mathcal F} }$ admits a conditional law given $\Pi$ that is nice, then $\Pi + { {\mathcal F} }
\prec \Pi$.
Clearly, (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i). From Corollary \[weak\], it is immediate that (i) $\Rightarrow$ (iii).
Let $U$ be uniformly distributed in $[0,1]$ and independent of $\Pi$. Let $f:{{\mathbb M}}\times [0,1] \to {{\mathbb M}}$ be a measurable function such that for all $\pi
\in {{\mathbb M}}$ we have that $f(\pi, U)$ is a nice finite point process. It suffices to show that $\Pi + f(\Pi, U) \prec \Pi$.
Consider the events $$E_{n,k}:= \Big\{ f(\Pi, U)({{\mathbb R}}^d) = n \Big\} \
\cap \ \Big\{[f(\Pi, U)] \subset B(0,k)\Big\}.$$ Let ${ \left\{ {U_{r,k}} \right\} }_{r=1} ^n$ be i.i.d. random variables uniformly distributed in $B(0,k)$ and independent of $(\Pi, U)$. Let ${ {\mathcal F} }'_{n,k}:=
\sum_{r=1} ^n \delta_{U_{r,k}}$. By applying Lemma \[monofinite\] $n$ times, we see that $\Pi + { {\mathcal F} }_{n,k}' \prec \Pi$; thus it suffices to show that $\Pi + f(\Pi, U) \prec \Pi + { {\mathcal F} }_{n,k}'$ for some $n,k \geq
0$.
For each $\mathbf{x} \in ({{\mathbb R}}^d)^n$, let $(\mathbf{x}_1, \ldots,
\mathbf{x}_n) = \mathbf{x}$. If $S \subset {{\mathbb R}}^d$ has $n$ elements, then we write $\langle S \rangle:= (s_1, \ldots, s_n) \in ({{\mathbb R}}^d)^n,$ where $s_i$ are the elements of $S$ in lexicographic order. For each $n \geq 0$, let $g_n:({{\mathbb R}}^d)^n \times {{\mathbb M}}\to {{\mathbb R}}$ be a measurable function such that $g_n(\cdot, \pi)$ is the probability density function (with respect to $n$-dimensional Lebesgue measure) of $\langle [f(\pi, U)] \rangle $, conditional on $f(\pi, U)({{\mathbb R}}^d) =n$. Let $Q$ be the law of $\Pi$ and let ${{\mathcal A}}\in { {\mathfrak M} }$. Then $$\begin{aligned}
\label{nk}
\lefteqn{{{\mathbb P}}\big(\Pi + f(\Pi, U) \in {{\mathcal A}}, \ E_{n,k}\big)=} \nonumber\\
&& \int \bigg(
\int_{B(0,k)^n} {{\mathbf{1}}}\Big[\pi+ \sum_{i=1} ^n {\delta_{\mathbf{x}_i}} \
\in {{\mathcal A}}\Big]g_n(\mathbf{x}, \pi)d\mathbf{x}\bigg)dQ(\pi).\end{aligned}$$ On the other hand, $$\begin{aligned}
\label{nkprime}
\lefteqn{{{\mathbb P}}\big(\Pi + { {\mathcal F} }_{n,k}' \in {{\mathcal A}}\big) =} \nonumber \\ && \int
\frac{1}{{{\mathcal L}}(B(0,k))^n}\bigg( \int_{B(0,k)^n} {{\mathbf{1}}}\Big[\pi + \sum_{i=1} ^n
{\delta_{\mathbf{x}_i}} \ \in {{\mathcal A}}\Big] d\mathbf{x}\bigg)dQ(\pi).\end{aligned}$$ If ${{\mathbb P}}(\Pi + f(\Pi, U) \in {{\mathcal A}}) > 0$, then there exist $n,k \geq 0$ such that ${{\mathbb P}}(\Pi + f(\Pi, U) \in {{\mathcal A}}, \ E_{n,k}) >0$; moreover from (\[nk\]) and (\[nkprime\]), we deduce that ${{\mathbb P}}(\Pi + { {\mathcal F} }_{n,k}' \in {{\mathcal A}}) >0$.
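The augmentation used in this proof — a sample of $\Pi$ together with $n$ i.i.d. points uniform in $B(0,k)$ — is straightforward to simulate. The following Python sketch is illustrative only; the homogeneous Poisson sampler, the square window, and all function names are our own choices and not part of the argument.

```python
import math
import random

def sample_poisson(rate, half_width, rng):
    """Sample a homogeneous Poisson process of the given rate on the
    square window [-half_width, half_width]^2: a Poisson number of
    points, each placed uniformly in the window."""
    area = (2 * half_width) ** 2
    # The Poisson count is the number of exponential gaps fitting in [0, 1).
    n, t = 0, rng.expovariate(rate * area)
    while t < 1.0:
        n += 1
        t += rng.expovariate(rate * area)
    return [(rng.uniform(-half_width, half_width),
             rng.uniform(-half_width, half_width)) for _ in range(n)]

def add_uniform_points(points, n, k, rng):
    """Return the configuration augmented by n i.i.d. points uniform in
    the ball B(0, k), sampled by rejection from the bounding square."""
    added = []
    while len(added) < n:
        x, y = rng.uniform(-k, k), rng.uniform(-k, k)
        if math.hypot(x, y) <= k:
            added.append((x, y))
    return points + added
```

The augmented configuration plays the role of $\Pi + { {\mathcal F} }_{n,k}'$ above.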
The proof of Theorem \[equiv\] relies on the following lemma.
\[unipick\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$. If ${ {\mathcal F} }$ is a finite subprocess of $\Pi$, then there exists $S \in {{ {\mathfrak B} }}$ with ${{\mathcal L}}(S)
\in (0, \infty)$ such that $$\label{revS}
{{\mathbb P}}(\Pi{{|}}_{S} = { {\mathcal F} } ) >0.$$
A ball $B(x,r)$ is [[****]{}[rational]{}]{} if $x \in {{\mathbb Q}}^d$ and $r \in {{\mathbb Q}}^{+}$. Let $C$ be the collection of all unions of finitely many rational balls. Clearly $C$ is countable. We will show that there exists $S \in C$ satisfying (\[revS\]). Since $\Pi$ is locally finite, it follows that there exists a $C$-valued random variable $\mathbf{S}$ such that $\Pi{{|}}_{\mathbf{S}} = { {\mathcal F} }$ a.s. Since $$\sum_{S \in C
}{{\mathbb P}}(\Pi{{|}}_{S} ={ {\mathcal F} }, \ \ \mathbf{S} =S) ={{\mathbb P}}(\Pi{{|}}_{\mathbf{S}}
={ {\mathcal F} }) =1,$$ at least one of the terms of the sum is nonzero.
With Lemma \[unipick\] we first prove the following special case of Theorem \[equiv\].
\[minieq\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$. The following conditions are equivalent.
(i) The point process $\Pi$ is deletion-tolerant.
(ii) If ${ {\mathcal F} }$ is a finite subprocess of $\Pi$ such that ${ {\mathcal F} }({{\mathbb R}}^d)$ is a bounded random variable, then $\Pi -{ {\mathcal F} }
\prec \Pi$.
Clearly, (ii) implies (i).
We show by induction on the number of points of the finite subprocess that (i) implies (ii). Assume that $\Pi$ is deletion-tolerant. Suppose that (ii) holds for every finite subprocess ${ {\mathcal F} }$ of $\Pi$ such that ${ {\mathcal F} }({{\mathbb R}}^d) \leq n$. Let ${ {\mathcal F} }'$ be a finite subprocess of $\Pi$ with ${ {\mathcal F} }'({{\mathbb R}}^d) \leq n+1$. Observe that on the event that ${ {\mathcal F} }'({{\mathbb R}}^d) \not = 0$, we have ${ {\mathcal F} }'= { {\mathcal F} } +\delta_{Z},$ where ${ {\mathcal F} }$ is a finite subprocess of $\Pi$ with ${ {\mathcal F} }({{\mathbb R}}^d) \leq n$ and $Z$ is some $\Pi$-point. Let ${{\mathbb P}}( \Pi -{ {\mathcal F} }' \in {{\mathcal A}}) > 0$, for some ${{\mathcal A}}\in { {\mathfrak M} }$. If ${{\mathbb P}}(\Pi -{ {\mathcal F} }' \in {{\mathcal A}}, \ \
{ {\mathcal F} }'({{\mathbb R}}^d) = 0 ) > 0$, then clearly ${{\mathbb P}}(\Pi \in {{\mathcal A}}) > 0$. Thus we assume without loss of generality that ${ {\mathcal F} }'= { {\mathcal F} } +\delta_{Z}$ so that ${{\mathbb P}}(\Pi - { {\mathcal F} } - \delta_{Z} \in {{\mathcal A}}) > 0$. By applying Lemma [\[unipick\]]{} to the point process $\Pi - { {\mathcal F} }$, conditioned on $\Pi - { {\mathcal F} } - \delta_Z \in {{\mathcal A}}$, there exists $S \in
{{ {\mathfrak B} }}$ with finite Lebesgue measure, so that $$\label{addAS}
{{\mathbb P}}\Bigl( (\Pi-{ {\mathcal F} }){{|}}_{S} = \delta_Z \
\Big| \ \Pi - { {\mathcal F} } - \delta_Z \in {{\mathcal A}}\Bigr) > 0.$$ Let ${{\mathcal A}}^S:= { \left\{ { \mu + \delta_x : \mu \in {{\mathcal A}},\; x \in S} \right\} }$, so that by the definition of ${{\mathcal A}}^S$ and (\[addAS\]), we have ${{\mathbb P}}(\Pi -{ {\mathcal F} } \in {{\mathcal A}}^S)
> 0$. By the inductive hypothesis, ${{\mathbb P}}(\Pi \in {{\mathcal A}}^S) > 0$.
Observe that if $\Pi \in {{\mathcal A}}^S$, there is an $x \in { [\Pi] } \cap S$ such that $\Pi - \delta_x \in {{\mathcal A}}$. Define a $\Pi$-point $R$ as follows. If $\Pi
\in {{\mathcal A}}^S$, let $R$ be the point of ${ [\Pi] } \cap S$ closest to the origin (where ties are broken using lexicographic order) such that $\Pi -\delta_{R}
\in {{\mathcal A}}$, otherwise let $R$ be the $\Pi$-point closest to the origin. Hence $${{\mathbb P}}(\Pi-\delta_R \in {{\mathcal A}}) \geq {{\mathbb P}}(\Pi \in {{\mathcal A}}^S) > 0.$$ Since $\Pi$ is deletion-tolerant, ${{\mathbb P}}(\Pi \in {{\mathcal A}}) >0$.
We show that (\[S\]) $\Rightarrow$ (i) $\Rightarrow$ (\[minusF\]) $\Rightarrow$ (\[S\]).
Assume that (\[S\]) holds and that for some $\Pi$-point $Z$ and some ${{\mathcal A}}\in { {\mathfrak M} }$ we have ${{\mathbb P}}( \Pi - \delta_Z \in {{\mathcal A}}) > 0$. By Lemma [\[unipick\]]{}, $ {{\mathbb P}}(\Pi{{|}}_{S^c} \in {{\mathcal A}}) >
0$ for some $S \in {{ {\mathfrak B} }}$, with finite Lebesgue measure. From (\[S\]), ${{\mathbb P}}(\Pi \in {{\mathcal A}}) > 0$. Thus (i) holds and $\Pi$ is deletion-tolerant.
Assume that (i) holds. Let ${ {\mathcal F} }$ be a finite subprocess of $\Pi$ and suppose for some ${{\mathcal A}}\in { {\mathfrak M} }$ we have ${{\mathbb P}}(\Pi -{ {\mathcal F} } \in {{\mathcal A}}) >0$. Define ${ {\mathcal F} }_n$ as follows. Take ${ {\mathcal F} }_n ={ {\mathcal F} }$ if ${ {\mathcal F} }({{\mathbb R}}^d)=n$, otherwise set ${ {\mathcal F} }_n = 0$. Note that for some $n$, we have ${{\mathbb P}}(\Pi - { {\mathcal F} }_n \in {{\mathcal A}}) > 0$. Since $\Pi$ is deletion-tolerant, by Lemma [\[minieq\]]{}, ${{\mathbb P}}(\Pi \in {{\mathcal A}}) > 0$. Thus (\[minusF\]) holds.
Clearly (\[minusF\]) implies (\[S\]), since for any set $S \in {{ {\mathfrak B} }}$ with finite measure, the point process with support $[\Pi] \cap S$ is a finite subprocess of $\Pi$.
For a translation $\theta$ of ${{\mathbb R}}^d$ and a point measure $\mu \in {{\mathbb M}}$, we define $\theta\mu\in{{\mathbb M}}$ by $(\theta \mu)(S):= \mu(\theta^{-1} S)$ for all $S
\in {{ {\mathfrak B} }}$; for ${{\mathcal A}}\in { {\mathfrak M} }$, we write $\theta {{\mathcal A}}:= { \left\{ {\theta \mu :
\mu \in {{\mathcal A}}} \right\} }$. For $x \in {{\mathbb R}}^d$ let $\theta_x$ be the translation defined by $\theta_x(y):=y+x$ for all $y \in {{\mathbb R}}^d$.
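Concretely, since $(\theta_x \mu)(S) = \mu(S - x)$, the support of $\theta_x \mu$ is the support of $\mu$ shifted by $x$. A minimal Python sketch of this convention, together with the restriction $\mu{{|}}_S$, follows; the function names are ours and purely illustrative.

```python
def theta(x, points):
    """Translate the support of a simple point measure by x: since
    (theta_x mu)(S) = mu(S - x), every support point y becomes y + x."""
    return [tuple(p_i + x_i for p_i, x_i in zip(p, x)) for p in points]

def restrict(points, indicator):
    """The restriction mu|_S: keep exactly the support points lying in S,
    where S is specified by its indicator function."""
    return [p for p in points if indicator(p)]
```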
Let $U,V$ be uniformly distributed on $S,T \in {{ {\mathfrak B} }}$ respectively and let $U,V,\Pi$ be independent. Assume that $\Pi +\delta_U \prec \Pi$ and let ${{\mathcal A}}\in { {\mathfrak M} }$ be such that ${{\mathbb P}}(\Pi+ \delta_V \in {{\mathcal A}}) > 0$. We will show that ${{\mathbb P}}(\Pi \in {{\mathcal A}}) > 0$.
Since $\Pi$ is translation-invariant, for all ${{\mathcal A}}' \in { {\mathfrak M} }$ we have ${{\mathbb P}}(\Pi +\delta_{\theta U} \in {{\mathcal A}}')= {{\mathbb P}}(\Pi + \delta_U \in \theta^{-1}{{\mathcal A}}')$ and thus $\Pi +\delta_{\theta U} \prec \Pi$ for all translations $\theta$ of ${{\mathbb R}}^d$. By Remark \[fubini\], $T':={ \left\{ {w \in T: {{\mathbb P}}(\Pi + \delta_w \in {{\mathcal A}})
> 0} \right\} }$ has positive Lebesgue measure. By the Lebesgue density theorem [@MR1333890 Corollary 2.14], there exist $x \in T'$, $y \in S$, and ${\varepsilon}>0$ such that $$\begin{aligned}
{{\mathcal L}}(T' \cap B(x,{\varepsilon})) &> \tfrac12 {{\mathcal L}}B(x, {\varepsilon}); \\
{{\mathcal L}}(S \cap B(y, {\varepsilon})) &> \tfrac12 {{\mathcal L}}B(y, {\varepsilon}).\end{aligned}$$
Thus with $z=x-y$, the set $T' \cap \theta_{z} S$ has positive Lebesgue measure. Thus by Remark \[fubini\], ${{\mathbb P}}(\Pi +\delta_{\theta_z U} \in {{\mathcal A}}) >
0$. Since $\Pi +\delta_{\theta_z U} \prec \Pi$, we have ${{\mathbb P}}(\Pi \in {{\mathcal A}})>0$.
Palm equivalences {#palm}
=================
In this section, we discuss insertion-tolerance and deletion-tolerance in the context of Palm processes. We begin by presenting some standard definitions and facts. Let $\Pi$ be a translation-invariant point process with finite intensity $\lambda$. The Palm version of $\Pi$ is the point process $\Pi^*$ such that for all ${{\mathcal A}}\in { {\mathfrak M} }$ and all $S \in
{{ {\mathfrak B} }}$ with finite Lebesgue measure, we have $$\label{palmeq}
{{\mathbb E}}\#\bigl\{x\in[\Pi]\cap S
\text{ with } \Pi\in \theta_x {{\mathcal A}}\bigr\}=\lambda{{\mathcal L}}S \cdot {{\mathbb P}}(\Pi^*\in {{\mathcal A}}),$$ where $\#B$ denotes the cardinality of a set $B$. Sometimes (\[palmeq\]) is called the [*[Palm property]{}*]{}.
By a monotone class argument, a consequence of (\[palmeq\]) is that for all measurable $f:{{\mathbb M}}\times {{\mathbb R}}^d \to [0,\infty)$ we have $$\label{palmeqg}
{{\mathbb E}}\int_{{{\mathbb R}}^d} f(\theta_{-x}\Pi, x) \,d\Pi(x) = \lambda
\int_{{{\mathbb R}}^d}{{\mathbb E}}f( \Pi^{*}, x) \,dx;$$ see [@MR1876169 Chapter 11].
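For a homogeneous Poisson process the Palm version is explicit: by Slivnyak's theorem, $\Pi^* \stackrel{d}{=} \Pi + \delta_0$, which makes (\[palmeq\]) easy to check by Monte Carlo. The Python sketch below is illustrative only; the rate $\lambda = 1$, the window, $S = [0,1]^2$, and the event ${{\mathcal A}}= \{\mu : \mu(B(0,1)) \geq 2\}$ are our choices. For this event the right side of (\[palmeq\]) equals $\lambda \, {{\mathcal L}}(S) \, {{\mathbb P}}(\Pi(B(0,1)) \geq 1) = 1 - e^{-\pi} \approx 0.957$.

```python
import math
import random

def sample_poisson_square(rate, lo, hi, rng):
    """Homogeneous Poisson process on [lo, hi]^2: a Poisson number of
    points, each uniform on the square."""
    area = (hi - lo) ** 2
    n, t = 0, rng.expovariate(rate * area)
    while t < 1.0:
        n += 1
        t += rng.expovariate(rate * area)
    return [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n)]

def palm_lhs_estimate(rate, n_samples, rng):
    """Monte Carlo estimate of E #{x in [Pi] cap [0,1]^2 : Pi(B(x,1)) >= 2},
    i.e. the mean number of points of S having at least one other
    Pi-point within distance 1.  The window pads S by more than 1."""
    total = 0
    for _ in range(n_samples):
        pts = sample_poisson_square(rate, -2.0, 3.0, rng)
        for i, (x, y) in enumerate(pts):
            if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
                continue
            if any(j != i and math.hypot(x - u, y - v) <= 1.0
                   for j, (u, v) in enumerate(pts)):
                total += 1
    return total / n_samples
```

With enough samples the estimate should approach $1 - e^{-\pi}$.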
Let $\Pi$ have intensity $\lambda > 0$. Let $S\in {{ {\mathfrak B} }}$ have finite Lebesgue measure. By Theorem \[equiv\] it suffices to show that $\Pi{{|}}_{S^c} \prec \Pi$.
Let ${{\mathbb P}}(\Pi{{|}}_{S^c} \in {{\mathcal A}}) > 0$, for some ${{\mathcal A}}\in { {\mathfrak M} }$. Thus we may assume that $$\label{gzero}
{{\mathbb P}}(\exists x \in [\Pi] \cap S : \Pi -\delta_x \in {{\mathcal A}}) >0,$$ otherwise ${{\mathbb P}}(\Pi \in {{\mathcal A}}) > 0$. By applying (\[palmeqg\]) to the function $$(\mu, x) \mapsto { \mathbf{1}{ [ \mu -\delta_0 \in \theta_{-x}{{\mathcal A}}]} } { \mathbf{1}{ [x \in S]} },$$ we obtain $$\label{palmcons}
{{\mathbb E}}\#{ \left\{ {x \in [\Pi ]\cap S : \theta_{-x}(\Pi - \delta_x) \in \theta_{-x} {{\mathcal A}}} \right\} } \\
=\lambda \int_S {{\mathbb P}}(\Pi^{*} - \delta_0 \in \theta_{-x} {{\mathcal A}}) dx.$$ From (\[gzero\]) and (\[palmcons\]), we deduce that ${{\mathbb P}}(\Pi^{*} -
\delta_0 \in \theta_{-x} {{\mathcal A}})>0$, for some $x \in S$. By assumption, ${{\mathbb P}}(\Pi
\in \theta_{-x} {{\mathcal A}}) > 0$. Since $\Pi$ is translation-invariant, ${{\mathbb P}}(\Pi \in
{{\mathcal A}}) >0$.
Suppose that $\Pi + \delta_0$ is not absolutely continuous with respect to $\Pi^*$; then there exists ${{\mathcal A}}\in { {\mathfrak M} }$ such that $${{\mathbb P}}(\Pi^*\in {{\mathcal A}})=0 \quad\text{but}\quad {{\mathbb P}}(\Pi+\delta_0\in {{\mathcal A}})>0.$$ Without loss of generality, take ${{\mathcal A}}$ to be a set that does not care whether there is a point at $0$; that is if $\mu \in {{\mathcal A}}$, then $\mu' \in {{\mathcal A}}$, provided $\mu,\mu'$ agree on ${{\mathbb R}}^d\setminus\{0\}$. By translation-invariance, $$0<c:={{\mathbb P}}(\Pi+\delta_0\in {{\mathcal A}})={{\mathbb P}}(\Pi\in {{\mathcal A}})={{\mathbb P}}(\Pi\in \theta_x {{\mathcal A}})$$ for every $x\in{{\mathbb R}}^d$. Hence the translation-invariant random set $G:=\{x\in
{{\mathbb R}}^d: \Pi\in \theta_x {{\mathcal A}}\}$ has intensity ${{\mathbb E}}{{\mathcal L}}([0,1]^d \cap G) =c$. Moreover, if $U$ is uniformly distributed in $[0,1]^d$ and independent of $\Pi$, then ${{\mathbb P}}(U \in G) =c$. Therefore defining the set $${{\mathcal A}}':=\{\mu\in {{\mathbb M}}: \exists x\in[\mu]\cap [0,1]^d
\text{ with } \mu\in \theta_x {{\mathcal A}}\},$$ we deduce that ${{\mathbb P}}(\Pi+\delta_U\in
{{\mathcal A}}')>0$. (Recall that ${{\mathcal A}}$ does not care whether there is a point at $0$.) On the other hand by the Palm property (\[palmeq\]) we have $$\begin{aligned}
{{\mathbb P}}(\Pi\in {{\mathcal A}}') &\leq& {{\mathbb E}}\#\{x\in[\Pi]\cap [0,1]^d
\text{ with } \Pi\in \theta_x {{\mathcal A}}\} \\ &=& \lambda{{\mathcal L}}\bigl([0,1]^d\bigr) \cdot {{\mathbb P}}(\Pi^*\in {{\mathcal A}})=0.\end{aligned}$$ Thus $\Pi$ is not insertion-tolerant.
The following observations will be useful in the proof that (\[original\]) implies (i) in Theorem \[thm-instol-stat-eq\].
\[originalfubini\] Let $\Pi$ be a translation-invariant point process on ${{\mathbb R}}^d$ with finite intensity. If $Y$ is any ${{\mathbb R}}^d$-valued random variable, and $U$ is uniformly distributed in $S \in {{ {\mathfrak B} }}$ and independent of $(\Pi,
Y)$, then $\theta_U \theta_Y \Pi \prec \Pi$.
\[wextrahead\] Let $\Pi$ be a translation-invariant point process on ${{\mathbb R}}^d$ with finite intensity. There exists a $\Pi$-point $Z$ such that $\Pi^{*} \prec \theta_{-Z}\Pi$.
Suppose that $\Pi + \delta_0 \prec \Pi ^{*}$. Without loss of generality we may assume that $\Pi$ and $\Pi^{*}$ are defined on a common probability space. By Lemma \[wextrahead\], there exists a $\Pi$-point $Z$ such that $$\label{clearone}
\Pi^{*} \prec \theta_{-Z} \Pi.$$
Let $U$ be uniformly distributed in a Borel set $S$ and independent of $(\Pi, \Pi ^{*}, Z)$. By Lemma \[originalfubini\], it suffices to show that $\Pi + \delta_U \prec \theta_U \theta_{-Z} \Pi$. Since $U$ is independent of $(\Pi, \Pi^{*},Z)$, from (\[clearone\]) it follows that $\theta_U \Pi^{*} \prec \theta_U \theta_{-Z} \Pi$. Thus it remains to show that $\Pi + \delta_U \prec \theta_U \Pi^{*}$.
Since $\Pi$ is translation-invariant and $U$ is independent of $\Pi$ we have $$\label{insU}
\theta_U (\Pi + \delta_0) {\stackrel{d}{=}}\ \Pi + \delta_U.$$ Since we assume that $\Pi + \delta_0 \prec \Pi ^{*}$ and $U$ is independent of $(\Pi, \Pi^{*})$ we deduce from (\[insU\]) that $\Pi + \delta_U \prec
\theta_U \Pi ^{*}$.
Let $Q$ be the joint law of $\Pi$ and $Y$. Since $U$ is independent of $(\Pi, Y)$, by Fubini’s theorem, for all ${{\mathcal A}}\in {{ {\mathfrak M} }}$, we have $$\begin{aligned}
{{\mathbb P}}(\theta_U \theta_{Y} \Pi \in {{\mathcal A}})
&= \frac{1}{{{\mathcal L}}(S)}\int\left(\int_S { \mathbf{1}{ [\theta_{u+ y} \pi \in {{\mathcal A}}]} }
du \right) dQ(\pi, y) \\
& \leq \frac{1}{{{\mathcal L}}(S)}\int \left(\int_{{{\mathbb R}}^d} { \mathbf{1}{ [\theta_x \pi \in {{\mathcal A}}]} }
dx \right) dQ(\pi, y) \\
&= \frac{1}{{{\mathcal L}}(S)}\int_{{{\mathbb R}}^d} {{\mathbb P}}(\theta_x \Pi \in {{\mathcal A}}) dx\\
&= \frac{1}{{{\mathcal L}}(S)}\int_{{{\mathbb R}}^d} {{\mathbb P}}(\Pi \in {{\mathcal A}}) dx.
\qedhere\end{aligned}$$
Lemma \[wextrahead\] is an immediate consequence of a result of Thorisson [@Thorissontrv], which states that there exists a [*shift-coupling*]{} of $\Pi$ and $\Pi^{*}$; that is, a $\Pi$-point $Z$ such that $\Pi^{*} {\stackrel{d}{=}}\theta_{-Z} \Pi$. In fact, Holroyd and Peres [@Extra-Heads] prove that such a $Z$ may be chosen as a deterministic function of $\Pi$. Since Lemma \[wextrahead\] is a much weaker result, we can give the following simple self-contained proof.
Let $\{a_i\}_{i \in {{\mathbb N}}} = [\Pi]$ be an enumeration of the $\Pi$-points. Let $K$ be a random variable with support ${{\mathbb N}}$; also assume that $K$ is independent of $(a_i)_{i \in {{\mathbb N}}}$. Define the $\Pi$-point $Z := a_K$. We will show that $\Pi^{*} \prec \theta_{-Z}\Pi$.
Let ${{\mathcal A}}\in { {\mathfrak M} }$ be such that ${{\mathbb P}}( \Pi^{*} \in {{\mathcal A}}) > 0$. By the Palm property (\[palmeq\]), there exists a $\Pi$-point $Z'= Z'({{\mathcal A}})$ such that ${{\mathbb P}}(
\theta_{-Z'} \Pi \in {{\mathcal A}}) > 0$; moreover, there exists $i \in {{\mathbb N}}$ such that ${{\mathbb P}}( \theta_{-Z'} \Pi \in {{\mathcal A}}, \ Z'=a_i) > 0$. Since $K$ is independent of $(a_i)_{i \in {{\mathbb N}}}$, it follows from the definition of $Z$ that $${{\mathbb P}}(
\theta_{-Z'} \Pi \in {{\mathcal A}}, \; Z'=a_i,\; K=i,\; Z=a_i) > 0.$$ Therefore, ${{\mathbb P}}(
\theta_{-Z} \Pi \in {{\mathcal A}}) > 0$.
Continuum percolation {#contperc}
=====================
Theorem \[percuniq\] is an immediate consequence of the following. Consider the Boolean continuum percolation model for a point process $\Pi$. Let $W$ denote the cluster containing the origin. For $M > 0$, an [[****]{}[$\boldsymbol{M}$-branch]{}]{} is an unbounded component of $W \cap B(0,
M)^c$.
\[choices\] For a translation-invariant ergodic insertion-tolerant point process, the number of unbounded clusters is a fixed constant a.s. that is zero, one, or infinity.
\[three\] If an insertion-tolerant point process has infinitely many unbounded clusters, then with positive probability there exists $M >0$ so that there are at least three $M$-branches.
\[comb\] For all $M >0$, a translation-invariant ergodic point process has at most two $M$-branches.
For a proof of Theorem \[comb\] see [@roy Theorem 7.1].
From Lemma \[choices\], it suffices to show that there cannot be infinitely many unbounded clusters; this follows from Theorem \[comb\] and Lemma \[three\].
For $r > 0$, let $r{{\mathbb Z}}^d:= { \left\{ {rz : z\in {{\mathbb Z}}^d} \right\} }$.
Let $\Pi$ be a translation-invariant ergodic insertion-tolerant point process. Let the occupied region be given by a union of balls of radius $R
>0$. By ergodicity, if $K(\Pi)$ is the number of unbounded clusters, then $K(\Pi)$ is a fixed constant a.s. Assume that $K(\Pi) < \infty$. It suffices to show that ${{\mathbb P}}(K(\Pi) \leq 1) >0$. Since $K (\Pi)< \infty$, there exists $N >0$ so that every unbounded cluster intersects $B(0,N)$ with positive probability. Consider the finite set $S:=({ R /4 }){{\mathbb Z}}^d \cap
B(0,N)$. For each $x \in S$, let $U_x$ be uniformly distributed in $B(x,
R)$ and assume that the $U_x$ and $\Pi$ are independent. Let ${ {\mathcal F} }:=
\sum_{x \in S} \delta_{U_x}$. Since $B(0,N) \subset \cup_ {x \in S} B(U_x,
R)$, we have that ${{\mathbb P}}(K(\Pi + { {\mathcal F} }) \leq 1) >0$. By Theorem \[thm-instol-eq\], $\Pi + { {\mathcal F} } \prec \Pi$, so that ${{\mathbb P}}( K(\Pi) \leq 1) >0$.
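Although the number of unbounded clusters is a limiting notion, the cluster structure of the Boolean model on a finite sample is readily computable: the balls $B(x,R)$ and $B(y,R)$ overlap exactly when $\|x - y\| \leq 2R$. The following union-find sketch (ours, purely for illustration) counts the connected components of the occupied region for a finite configuration.

```python
def count_clusters(points, R):
    """Count connected components of the Boolean model union of balls
    B(x, R): balls about x and y overlap iff |x - y| <= 2R.
    Plain union-find with path halving."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= (2 * R) ** 2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(len(points))})
```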
The proof is similar to that of Lemma [\[choices\]]{}. Let $\Pi$ be an insertion-tolerant point process with infinitely many unbounded clusters. Let the occupied region be given by a union of balls of radius $R >0$. Choose $N$ large enough so that at least three unbounded clusters intersect $B(0,N)$ with positive probability. Define a finite point process ${ {\mathcal F} }$ exactly as in the proof of Lemma \[choices\]. The point process $\Pi +
{ {\mathcal F} }$ has at least three $(N+R)$-branches with positive probability and Theorem \[thm-instol-eq\] implies that $\Pi + { {\mathcal F} }
\prec \Pi$. Thus $\Pi$ has at least three $(N+R)$-branches with positive probability.
Stable matching {#stableM}
===============
Theorems \[onethm\] and \[twothm\] are consequences of the following lemmas. Let ${ {\mathcal R} }$ be a point process with a unique one-colour stable matching scheme ${ {\mathcal M} }$. Define
$$\label{H}
H = H({ {\mathcal R} }):=\bigl\{x \in [{ {\mathcal R} }] : \|x - { {\mathcal M} }(x)\| > \|x\| -1\bigr\}.$$
This is the set of ${ {\mathcal R} }$-points that would prefer some ${ {\mathcal R} }$-point in the ball $B(0,1)$, if one were present in the appropriate location, over their current partners. Also define $H$ by (\[H\]) for the case of two-colour stable matching.
A calculation given in [@random Proof of Theorem 5(i)] shows that, for one-colour and two-colour matchings, $$\label{Hthmfive}
{{\mathbb E}}\#H = c \, {{\mathbb E}}^{*} \big[({ { X} } +1)^d\big]$$ for some $c=c(d)\in(0,\infty)$.
\[oneH\] Let ${ {\mathcal R} }$ be a translation-invariant point process on ${{\mathbb R}}^d$ with finite intensity that almost surely is non-equidistant and has no descending chains. If ${ {\mathcal R} }$ is insertion-tolerant, then ${{\mathbb P}}(\#H =
\infty) =1$. If ${ {\mathcal R} }$ is deletion-tolerant, then ${{\mathbb P}}(\#H =
\infty)>0$.
\[twoH\] Let ${ {\mathcal R} }$ and ${ {\mathcal B} }$ be independent translation-invariant ergodic point processes on ${{\mathbb R}}^d$ with equal finite intensity, such that the point process ${ {\mathcal R} } +{ {\mathcal B} }$ is non-equidistant and has no descending chains. If ${ {\mathcal R} }$ is insertion-tolerant, then ${{\mathbb P}}(\# H = \infty) =1$. If ${ {\mathcal R} }$ is deletion-tolerant, then ${{\mathbb P}}(\#H = \infty) >0$.
\[prime\] Recall that in the case of two-colour stable matching we defined $X$ in terms of the distance from an ${ {\mathcal R} }$-point to its partner. If we instead define $X'$ by replacing ${ {\mathcal R} }$ with ${ {\mathcal B} }$ in the definition of $X$, then $X'{\stackrel{d}{=}}X$; see the discussion after [@random Proposition 7] for details. [$\Diamond$]{}
Use Lemma \[oneH\] together with (\[Hthmfive\]).
Use Lemma \[twoH\] together with (\[Hthmfive\]) and Remark \[prime\].
The following lemmas concerning stable matchings in a deterministic setting will be needed. A [[****]{}partial matching]{} of a point measure $\mu \in {{\mathbb M}}$ is the edge set $m$ of a simple graph $([\mu], m)$ in which every vertex has degree at most one. A partial matching is a [[****]{}perfect]{} matching if every vertex has degree exactly one. We write $m(x) = y$ if and only if ${ \left\{ {x,y} \right\} }
\in m$, and set $m(x) = \infty$ if $x$ is unmatched. We say a partial matching is [[****]{}[stable]{}]{} if there do not exist distinct points $x,y \in
[\mu]$ satisfying $$\label{defstableb}
\|x-y\| < \min{ \left\{ { \|x - m(x)\|, \|y - m(y)\|} \right\} },$$ where $\| x - m(x)\| = \infty$ if $x$ is unmatched. Note that in any stable partial matching there can be at most one unmatched point.
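For a finite non-equidistant configuration, the unique stable partial matching admits a greedy description: any mutually nearest pair must be matched (otherwise the pair witnesses the instability condition above), so one repeatedly matches such pairs and removes them. The Python sketch below is ours and purely illustrative; points are tuples and indices refer to the input list.

```python
import math

def stable_matching(points):
    """Unique stable partial matching of a finite non-equidistant point
    set, built by repeatedly matching mutually nearest pairs and
    removing them.  Returns matched index pairs (i, j) with i < j;
    for odd cardinality exactly one point stays unmatched."""
    alive = set(range(len(points)))
    matches = []
    while len(alive) >= 2:
        # Nearest living neighbour of each living point.
        nearest = {i: min((j for j in alive if j != i),
                          key=lambda j: math.dist(points[i], points[j]))
                   for i in alive}
        # Mutually nearest pairs; at least one exists (the closest pair).
        mutual = [(i, nearest[i]) for i in alive
                  if i < nearest[i] and nearest[nearest[i]] == i]
        for i, j in mutual:
            matches.append((i, j))
            alive.discard(i)
            alive.discard(j)
    return matches
```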
For each ${\varepsilon}>0$, set $$H_{{\varepsilon}} = H_{{\varepsilon}}(\mu) :={ \left\{ {x \in [\mu] : \|x - m(x)\| > \|x\| -{\varepsilon}} \right\} }.$$ For each $y\in {{\mathbb R}}^d$, set $$N(\mu, y) := { \left\{ {x \in [\mu] \setminus { \left\{ {y} \right\} }: \|x - m(x)\| > \|x - y\|} \right\} }.$$ This is the set of $\mu$-points that would prefer $y \in {{\mathbb R}}^d$ over their partners.
\[monohk\] If $\mu\in {{\mathbb M}}$ is non-equidistant and has no descending chains, then $\mu$ has a unique stable partial matching $m$. In addition, we have the following properties.
(i) \[del\] If ${ \left\{ {x,y} \right\} } \in m$ is a matched pair, then $m
\setminus{ \left\{ {{ \left\{ {x,y} \right\} }} \right\} }$ is the unique stable partial matching of $\mu
-\delta_x - \delta_y$.
(ii) \[ins\] Let ${\varepsilon}> 0$. If $m$ is a perfect matching and $\#H_{{\varepsilon}}=0$, then for all $x \in B(0, {\varepsilon})$ such that $\mu +
\delta_x$ is non-equidistant, $m$ is the unique stable partial matching of $\mu + \delta_x$; in particular, $x$ is unmatched in $m$.
(iii) \[Ndel\] If ${ \left\{ {x, y} \right\} } \in m$ is a matched pair and $\# N(\mu,
y) =0$, then $m\setminus { \left\{ {{ \left\{ {x,y} \right\} }} \right\} }$ is the unique stable partial matching of $\mu - \delta_x$, and in particular, $y$ is left unmatched.
The existence and uniqueness are given by [@random Lemma 15]. Thus for (i)–(iii) it suffices to check that the claimed matching is stable, which is immediate from the definition (\[defstableb\]).
The next lemma is a simple consequence of Lemma \[monohk\].
\[addremove\] Suppose that $\mu \in {{\mathbb M}}$ is non-equidistant and has no descending chains. Let $m$ be the unique stable matching of $\mu$. Suppose that ${ \left\{ {x,y} \right\} } \in m$ and $0 \not \in [\mu]$. There exists ${\varepsilon}>0$ such that for ${{\mathcal L}}$-a.a. $x' \in B(x, {\varepsilon})$ and $y' \in B(y, {\varepsilon})$: the unique stable matching $m'$ of $\mu + \delta_{x'} + \delta_{y'}$ is given by $$m' =(m \setminus { \left\{ {{ \left\{ {x,y} \right\} }} \right\} }) \cup { \left\{ {{ \left\{ {x,x'} \right\} } , { \left\{ {y,y'} \right\} }} \right\} },$$ and furthermore, $x, x', y, y' \not\in H_{{\varepsilon}}(\mu + \delta_{x'} + \delta_{y'})
\subseteq H_{{\varepsilon}}(\mu)$.
Consider $$d_v:= \min{ \left\{ { \|v-w\|: w \in [\mu] \cup { \left\{ {0} \right\} }, \ w \not = v } \right\} }.$$ Let $$\label{defep}
{\varepsilon}:= \tfrac15 \min { \left\{ { d_x, d_y, d_0} \right\} }$$ (any multiplicative factor less than $\tfrac14$ would suffice here). Let $A:= B(x, {\varepsilon}) \times B(y, {\varepsilon})$. It is easy to verify that for ${{\mathcal L}}$-a.a. $(x',y') \in A$ the measure $\mu + \delta_{x'} + \delta_{y'}$ is also non-equidistant and has no descending chains. Thus by Lemma \[monohk\], for ${{\mathcal L}}$-a.a. $(x',y') \in A$ the measure $\mu + \delta_{x'} +
\delta_{y'}$ has a unique stable perfect matching $m'$. Clearly, by (\[defstableb\]) and (\[defep\]), we have that ${ \left\{ {x,x'} \right\} }, { \left\{ {y,y'} \right\} } \in
m'$. On the other hand, by Lemma \[monohk\] (\[del\]), $m' \setminus
{ \left\{ {{ \left\{ {x,x'} \right\} },{ \left\{ {y,y'} \right\} }} \right\} }$ is the unique stable perfect matching of $\mu -
\delta_x - \delta_y$ and $m \setminus { \left\{ {{ \left\{ {x,y} \right\} }} \right\} }$ is also the unique stable perfect matching of $\mu - \delta_x - \delta_y$. Thus $$m'= (m \setminus { \left\{ {{ \left\{ {x,y} \right\} }} \right\} }) \cup { \left\{ {{ \left\{ {x,x'} \right\} } , { \left\{ {y,y'} \right\} }} \right\} }.$$ It also follows from (\[defep\]) that $$x, x', y, y' \not\in H_{{\varepsilon}}(\mu + \delta_{x'} + \delta_{y'}) \subseteq H_{{\varepsilon}}(\mu).
\qedhere$$
Let ${ {\mathcal R} }$ be insertion-tolerant. Note that $H_1({ {\mathcal R} }) =
H({ {\mathcal R} })$. First, we will show that $$\label{hkthing}
{{\mathbb P}}( \#H_{{\varepsilon}}({ {\mathcal R} }) > 0) =1 \ \text{for all} \ {\varepsilon}>0.$$
Second, we will show that if ${{\mathbb P}}( 0<\# H_1({ {\mathcal R} })<\infty)
>0$, then there exists a finite point process ${{\mathcal F}}$ such that ${{\mathcal F}}$ admits a nice conditional law given ${ {\mathcal R} }$, and $$\label{limithk}
\lim_{ {\varepsilon}\to 0} {{\mathbb P}}\bigl( \#H_{{\varepsilon}}({ {\mathcal R} } + {{\mathcal F}}) = 0\bigr)
= {{\mathbb P}}\bigl( 0<\# H_1({ {\mathcal R} })<\infty\bigr) > 0.$$
Finally, note that by Corollary \[weak\] and the insertion-tolerance of ${ {\mathcal R} }$, statements (\[hkthing\]) and (\[limithk\]) are in contradiction. Thus ${{\mathbb P}}( \#H_1({ {\mathcal R} }) = \infty) = 1$. It remains to prove the first two assertions.
The following definition will be useful. Let ${{\mathbb M}}'$ be the set of point measures $\mu \in {{\mathbb M}}$ such that $\mu$ has a unique stable perfect matching, has no descending chains, and is non-equidistant.
Let ${\varepsilon}> 0$. Let ${{\mathcal J}}$ be the set of point measures $\mu \in {{\mathbb M}}'$ such that $\#H_{{\varepsilon}}(\mu) = 0$. To show , it suffices to prove that ${{\mathbb P}}({ {\mathcal R} } \in {{\mathcal J}}) = 0$. Let $\mu \in {{\mathcal J}}$ and let $m$ be the unique stable perfect matching for $\mu$. By Lemma \[monohk\] (\[ins\]), for Lebesgue-a.a. $x \in B(0, {\varepsilon})$ the unique stable partial matching for $\mu
+ \delta_{x}$ is $m$ (and $x$ is unmatched). If ${{\mathbb P}}({ {\mathcal R} } \in {{\mathcal J}}) > 0$, then it follows from the insertion-tolerance of ${ {\mathcal R} }$ that with positive probability ${ {\mathcal R} }$ does not have a perfect stable matching, a contradiction.
Now let ${{\mathcal A}}$ be the set of point measures $\mu \in {{\mathbb M}}'$ such that $0<
\#H_1(\mu) < \infty$ and $0 \not \in [\mu]$. If ${ {\mathcal R} } \in {{\mathcal A}}$, then, by applying Lemma \[addremove\] repeatedly, there exists $\rho = \rho({ {\mathcal R} })$ such that if a point is added within distance $\rho$ of each point in $H_1({ {\mathcal R} })$ and each of their partners, then (for ${{\mathcal L}}$-a.a. choices of such points) the resulting process ${ {\mathcal R} }'$ satisfies $\#H_\rho({ {\mathcal R} }') = 0$. Let ${{\mathcal F}}$ be the finite point process whose conditional law given ${ {\mathcal R} }$ is given as follows. Take independent uniformly random points in each of the appropriate balls of radius $\rho$ provided ${ {\mathcal R} }\in{{\mathcal A}}$; otherwise take ${{\mathcal F}}= 0$. By the construction, $$\lim_{{\varepsilon}\to 0} {{\mathbb P}}\big( \#H_{\varepsilon}({ {\mathcal R} } + {{\mathcal F}})=0 \mid
{ {\mathcal R} } \in {{\mathcal A}}, \; \rho({ {\mathcal R} }) > {\varepsilon}\big)=1,$$ so (\[limithk\]) follows.
Suppose ${ {\mathcal R} }$ is deletion-tolerant. We will show that for any ${ {\mathcal R} }$-point $Z$ $$\label{N}
\#N({ {\mathcal R} },Z) = \infty \ \text{a.s.}$$ From (\[N\]) it follows that if ${ {\mathcal R} }(B(0,1))> 0$, then $\#H =
\infty$. Since ${ {\mathcal R} }$ is translation-invariant, ${{\mathbb P}}(
{ {\mathcal R} }(B(0,1)) > 0) > 0$ and ${{\mathbb P}}(\#H = \infty) > 0$.
It remains to show (\[N\]). Let $Z$ be an ${ {\mathcal R} }$-point. Let ${{\mathcal F}}_1$ be the point process with support $N({ {\mathcal R} },Z)$, and let ${{\mathcal F}}_2$ be the point process with support ${ \left\{ {{ {\mathcal M} }(y): y \in N({ {\mathcal R} },Z)} \right\} }$. Consider the point process ${{\mathcal F}}$ defined by $$\begin{aligned}
{{\mathcal F}}&:=& \begin{cases}
{{\mathcal F}}_1 + {{\mathcal F}}_2, \ \
\text{if} \ \ \#{N}({ {\mathcal R} },Z) < \infty \\
0, \ \ \text{otherwise}.
\end{cases}\end{aligned}$$ Let ${ {\mathcal M} }'$ be given by $$[{ {\mathcal M} }']:=[{ {\mathcal M} }] \setminus
\bigcup_{x \in [{{\mathcal F}}]} { \left\{ {{ \left\{ {x, { {\mathcal M} }(x)} \right\} }} \right\} }.$$ By Lemma \[monohk\] (\[del\]), ${ {\mathcal M} }'$ is the unique stable matching for ${ {\mathcal R} } -
{{\mathcal F}}$ a.s.
Towards a contradiction assume that ${{\mathbb P}}( \#{N}({ {\mathcal R} },Z) < \infty) > 0$. Thus, ${{\mathbb P}}(\#N({ {\mathcal R} } - {{\mathcal F}}, Z) = 0) > 0$. By Lemma \[monohk\] (\[Ndel\]), with positive probability, ${ {\mathcal R} } - {{\mathcal F}}-\delta_{{ {\mathcal M} }(Z)}$ has the unique stable partial matching given by ${ {\mathcal M} }'$ with the pair ${ \left\{ {Z, { {\mathcal M} }(Z)} \right\} }$ removed and $Z$ left unmatched. From Theorem \[equiv\] and the deletion-tolerance of ${ {\mathcal R} }$ we have ${ {\mathcal R} } - {{\mathcal F}}-\delta_{{ {\mathcal M} }(Z)} \prec { {\mathcal R} }$. Thus with positive probability, ${ {\mathcal R} }$ has a stable partial matching with an unmatched point, a contradiction.
We now turn to the two-colour case. Given two point measures $\mu, \mu' \in
{{\mathbb M}}$ such that $\mu + \mu'$ is a simple point measure, we say that $m$ is a [[****]{}[partial]{}]{} (respectively, [[****]{}[perfect)]{}]{} matching of $(\mu, \mu')$ if $m$ is the edge set of a simple bipartite graph $([\mu], [\mu'], m)$ in which every vertex has degree at most one (respectively, exactly one). We write $m(x) = y$ if and only if ${ \left\{ {x,y} \right\} } \in m$ and set $m(x)= \infty$ if $x$ is unmatched. We say that $m$ is [[****]{}[stable]{}]{} if there do not exist $x \in
[\mu]$ and $y \in [\mu']$ satisfying . If $\mu + \mu'$ is non-equidistant and has no descending chains then there exists a unique stable partial matching of $(\mu, \mu')$ [@random Lemma 15].
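The two-colour matching admits the same greedy description over red–blue pairs: a red point and a blue point that are each other's nearest point of the opposite colour must be matched in any stable matching, so iterating over such pairs yields the matching on finite configurations. The sketch below is ours, for illustration only.

```python
import math

def two_colour_stable_matching(red, blue):
    """Unique stable partial matching of (red, blue) on a finite
    non-equidistant configuration: repeatedly match red-blue pairs that
    are mutually nearest across colours, then remove them.
    Returns matched (red_index, blue_index) pairs."""
    r_alive, b_alive = set(range(len(red))), set(range(len(blue)))
    matches = []
    while r_alive and b_alive:
        near_r = {i: min(b_alive, key=lambda j: math.dist(red[i], blue[j]))
                  for i in r_alive}
        near_b = {j: min(r_alive, key=lambda i: math.dist(red[i], blue[j]))
                  for j in b_alive}
        mutual = [(i, near_r[i]) for i in r_alive if near_b[near_r[i]] == i]
        if not mutual:
            break  # cannot occur for finite non-equidistant configurations
        for i, j in mutual:
            matches.append((i, j))
            r_alive.discard(i)
            b_alive.discard(j)
    return matches
```

With unequal numbers of red and blue points, the surplus colour is left partially unmatched, as in the partial-matching definition above.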
\[transfer\] It is easy to verify that the two-colour analogues of Lemma \[monohk\] and Lemma \[addremove\] hold. [$\Diamond$]{}
We will need the following monotonicity facts about stable two-colour matchings. Similar results are proved in [@Stable-PL Proposition 21], [@galeshapley], and [@MR1415126].
\[monotrick\] Let $\mu, \mu' \in {{\mathbb M}}$ and assume that $\mu + \mu'$ is a simple point measure that is non-equidistant and has no descending chains. Let $m$ be the stable partial matching of $(\mu, \mu')$.
(i) \[previous\] Assume that $w \not \in [\mu']$ and $\mu' +
\delta_w$ is non-equidistant and has no descending chains. If $m'$ is the stable partial matching of $(\mu, \mu' + \delta_w)$, then $$\|z - m(z) \| \geq \|z - m'(z)\| \ \text{for all} \ z \in [\mu].$$
(ii) \[new\] Let $x \in [\mu]$. If $m'$ is the stable partial matching of $(\mu - \delta_x, \mu')$, then $$\label{monotwocol}
\|z - m(z) \| \geq \|z - m'(z)\| \ \text{for all} \ z \in [\mu - \delta_x].$$
Part follows from [@random Lemma 17]. For part , if $x$ is not matched under $m$, then $m' = m$, thus assume that $m(x) = y$. By Lemma \[monohk\] and Remark \[transfer\], $m \setminus { \left\{ {{ \left\{ {x,y} \right\} }} \right\} }$ is the unique stable partial matching for $(\mu - \delta_x, \mu' - \delta_y)$. Thus by part , $m'$, the unique stable matching for $(\mu -\delta_x, \mu')$, satisfies .
The proof for the case when ${ {\mathcal R} }$ is insertion-tolerant is given in [@random Theorem 6(i)]. In the case when ${ {\mathcal R} }$ is deletion-tolerant we proceed similarly to the proof of Lemma \[oneH\]. Recall that in the two-colour case, ${{\mathcal M}}$ denotes the two-colour stable matching scheme for ${ {\mathcal R} }$ and ${ {\mathcal B} }$. Let $Z$ be a ${ {\mathcal B} }$-point. Define $N({ {\mathcal R} }, Z)$ and ${{\mathcal F}}_1$ as in the proof of Lemma \[oneH\], so that $N({ {\mathcal R} }, Z)$ is the set of ${ {\mathcal R} }$-points that would prefer $Z$ over their partners and ${{\mathcal F}}_1$ is the point process with support $N({ {\mathcal R} }, Z)$.
Towards a contradiction assume that ${{\mathbb P}}( \#N({ {\mathcal R} }, Z) < \infty) >0$. There exists a unique stable partial matching for $({ {\mathcal R} }- {{\mathcal F}}_1,
{ {\mathcal B} })$ a.s.; denote it by ${{\mathcal M}}'$. From Lemma \[monotrick\] , it follows that $$\label{newNR}
{{\mathbb P}}( N({ {\mathcal R} } - {{\mathcal F}}_1, Z) = 0) >0.$$ From and Remark \[transfer\] with Lemma \[monohk\] , it follows that with positive probability, ${{\mathcal M}}' \setminus
{ \left\{ {{ \left\{ {Z, {{\mathcal M}}'(Z)} \right\} }} \right\} }$ is the unique stable partial matching for $({ {\mathcal R} } -
{{\mathcal F}}_1 - \delta_{{{\mathcal M}}'(Z)}, { {\mathcal B} })$ and the ${ {\mathcal B} }$-point $Z$ is left unmatched. By Lemma \[unipick\], there exists a Borel set $S$ with finite Lebesgue measure such that ${{\mathbb P}}({ {\mathcal R} } {{|}}_S = {{\mathcal F}}_1 + \delta_{{{\mathcal M}}'(Z)}) > 0$. By Theorem \[equiv\] and the deletion-tolerance of ${ {\mathcal R} }$, we have that ${ {\mathcal R} } {{|}}_{S^c} \prec { {\mathcal R} }$; furthermore, since ${ {\mathcal R} }$ and ${ {\mathcal B} }$ are independent, $({ {\mathcal R} } {{|}}_{S^c},
{ {\mathcal B} }) \prec ({ {\mathcal R} }, { {\mathcal B} })$. Thus with positive probability $({ {\mathcal R} }, { {\mathcal B} })$ has a stable partial matching with an unmatched ${ {\mathcal B} }$-point. This contradicts the fact that ${{\mathcal M}}$ is the two-colour matching scheme for ${ {\mathcal R} }$ and ${ {\mathcal B} }$.
Perturbed lattices and Gaussian zeros {#perproof}
=====================================
Low-fluctuation processes
-------------------------
Propositions \[pertone\] and \[GAFplane\] will be proved using the following more general result, which states that processes satisfying various “low-fluctuation” conditions are neither insertion-tolerant nor deletion-tolerant. For a point process $\Pi$ and a measurable function $h:{{\mathbb R}}^d \to {{\mathbb R}}$ write $$\Pi(h):= \int h(x) d\Pi(x) =\sum_{x \in [\Pi]} h(x).$$ Let $\overline{B}(0,1) := { \left\{ {x\in {{\mathbb R}}^d : \| x\| \leq 1} \right\} }$ denote the closed unit ball.
\[decayo\] Let $\Pi$ be a point process on ${{\mathbb R}}^d$ with finite intensity. Let $h: {{\mathbb R}}^d \to [0,1]$ be a measurable function with $h(x) = 1$ for all $x
\in B(0,1/2)$ and support in $\overline{B}(0,1)$. For each $n \in {{\mathbb Z}}^{+}$, set $h_n(x)
:= h(x/n)$ for all $x \in {{\mathbb R}}^d$.
(i) \[ch-decaya\] If $\Pi(h_n) - {{\mathbb E}}\Pi(h_n) \to 0$ in probability as $n \to \infty$, then $\Pi$ is neither insertion-tolerant nor deletion-tolerant.
(ii) \[orseq\] If there exists a deterministic sequence $(n_k)$ with $n_k \to \infty$ such that $$\label{ceas}
\frac{1}{K}\sum_{k=1} ^K
\big( \Pi(h_{n_k}) - {{\mathbb E}}\Pi (h_{n_k}) \big) {\xrightarrow{{{\mathbb P}}}}0 \quad\text{as }K\to\infty,$$ then $\Pi$ is neither insertion-tolerant nor deletion-tolerant.
(iii) \[ch-decayb\] Write $N_n = \Pi(h_n) - {{\mathbb E}}\Pi(h_n)$. If there exists a deterministic sequence $(n_k)$ with $n_k \to \infty$ and a discrete real-valued random variable $N$ such that for all $\ell \in
{{\mathbb R}}$, $$\label{ceastwo}
\frac{1}{K}\sum_{k=1} ^K \mathbf{1}[N_{n_k} \leq \ell] \
\stackrel{{{{\mathbb P}}}}{\to} \ {{\mathbb P}}(N \leq\ell) \quad \text{as } K \to \infty,$$ then $\Pi$ is neither insertion-tolerant nor deletion-tolerant.
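For contrast (an observation of ours, not used below): the homogeneous Poisson process fails the hypothesis of part (i), consistent with its insertion- and deletion-tolerance. If $\Pi$ is Poisson on ${{\mathbb R}}^d$ with intensity $\lambda>0$, then by Campbell's formula $$\operatorname{Var} \Pi(h_n) = \lambda \int_{{{\mathbb R}}^d} h_n(x)^2 \, dx,$$ which is at least $\lambda$ times the volume of $B(0,n/2)$ and hence grows like $n^d$; in particular $\Pi(h_n) - {{\mathbb E}}\Pi(h_n)$ is asymptotically normal at scale $n^{d/2}$ and does not tend to $0$ in probability.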
In our application of Proposition \[decayo\] , $N_n$ will be integer-valued (see below).
Let $m_n:= {{\mathbb E}}\Pi(h_n)$. Since $\Pi(h_n) - m_n \to 0$ in probability, there exists a (deterministic) subsequence ${n_k}$ such that $\Pi(h_{n_k}) -
m_{n_k} \to 0$ a.s. On the other hand, if $U$ is uniformly distributed in $B(0,1)$, then $(\Pi + \delta_U)(h_{n_k}) - m_{n_k} \to 1$ a.s. Therefore $\Pi$ is not insertion-tolerant. Similarly, if $Z$ is any $\Pi$-point, then $(\Pi - \delta_Z)(h_{n_k})-m_{n_k} \to -1$ a.s. So $\Pi$ is not deletion-tolerant.
Suppose that holds for some deterministic sequence $(n_k)$. Let $m_{n_k}:= {{\mathbb E}}\Pi(h_{n_k})$, and for each integer $K > 0$ define $S_K:{{\mathbb M}}\to {{\mathbb R}}$ by $$S_K(\mu):=\frac{1}{K}\sum_{k=1}^K (\mu(h_{n_k}) - m_{n_k}).$$ Thus $S_K(\Pi) \to 0$ in probability as $K \to \infty$, and there exists a subsequence $(K_i)$ so that $S_{K_i}(\Pi) \to 0$ a.s. However, if $U$ is uniformly distributed in $B(0,1)$, then $S_{K_i}(\Pi +
\delta_U)\to 1$ a.s. Thus $\Pi$ cannot be insertion-tolerant. Similarly, if $Z$ is a $\Pi$-point, then $S_{K_i}(\Pi - \delta_Z) \to -1$ a.s. Thus $\Pi$ cannot be deletion-tolerant.
Suppose that holds for some deterministic sequence $(n_k)$ and some discrete random variable $N$. Let $m_{n_k}:= {{\mathbb E}}\Pi(h_{n_k})$, and let $N_{n_k}(\mu) := \mu(h_{n_k}) - m_{n_k}$ for all $\mu \in {{\mathbb M}}$. For each integer $K>0$, define $F_K: {{\mathbb M}}\times {{\mathbb R}}\to [0,1]$ by $$F_K(\mu, \ell) := \frac{1}{K}\sum_{k=1} ^K \mathbf{1}[N_{n_k}(\mu) \leq \ell].$$ Thus $F_K(\Pi, \ell) \to {{\mathbb P}}(N \leq \ell)$ in probability as $K \to \infty$ for all $\ell \in {{\mathbb R}}$. Since $N$ is discrete and has countable support, by a standard diagonal argument, there exists a subsequence $(K_i)$ so that $F_{K_i}(\Pi, \ell) \to {{\mathbb P}}(N \leq \ell)$ a.s. for all $\ell \in {{\mathbb R}}$. Fix $a \in {{\mathbb R}}$ such that ${{\mathbb P}}(N \leq a) \not = {{\mathbb P}}(N \leq a+1)$. We have $F_{K_i}(\Pi, a) \to {{\mathbb P}}(N \leq a)$ a.s. and $F_{K_i}(\Pi, a+1) \to {{\mathbb P}}(N \leq
a+1) $ a.s. However, if $U$ is uniformly distributed in $B(0,1)$, then $F_{K_i}(\Pi + \delta_U, a+1)\to {{\mathbb P}}(N \leq a)$ a.s. Thus $\Pi$ cannot be insertion-tolerant. Similarly, if $Z$ is a $\Pi$-point, then $F_{K_i}(\Pi
- \delta_Z, a) \to {{\mathbb P}}(N \leq a+1)$ a.s. Thus $\Pi$ cannot be deletion-tolerant.
Gaussian zeros in the plane
---------------------------
Let ${\Upsilon_{\mathbb{C}}}$ be the Gaussian zero process on the plane. Sodin and Tsirelson [@MR2121537 Equation (0.6)] show that ${\Upsilon_{\mathbb{C}}}$ satisfies the conditions of Proposition \[decayo\] , with a twice differentiable function $h$; in particular they show that $\operatorname{Var}{\Upsilon_{\mathbb{C}}}(h_n) \to 0$ as $n \to
\infty$. Hence ${\Upsilon_{\mathbb{C}}}$ is neither insertion-tolerant nor deletion-tolerant.
Perturbed lattices in dimension $2$
-----------------------------------
The proof of Proposition \[pertone\] for the case $d=2$ relies on the following lemma.
\[masterlemma\] Let $(Y_z:z\in{{\mathbb Z}}^2)$ be i.i.d. ${{\mathbb R}}^2$-valued random variables with ${{\mathbb E}}Y_0=0$ and $\operatorname{Var}\|Y_0\|=\sigma^2<\infty$. Let $\Lambda$ be the point process given by $[\Lambda]:= { \left\{ {z + Y_z: z \in {{\mathbb Z}}^2} \right\} }$. Let $h:{{\mathbb R}}^2\to[0,1]$ have support in $B(0,1)$, and have Lipschitz constant at most $c<\infty$, and let $h(x) =1$ for all $x \in B(0,1/2)$. Define $h_r(x):=h(x/r)$ for $x\in{{\mathbb R}}^2$ and $r>0$. Set $m_r:= {{\mathbb E}}\Lambda(h_{r})$.
(i) \[finitevar\] For all $r>0$ we have $\operatorname{Var}\Lambda(h_r) \leq C,$ for some $C=C(\sigma^2,c)<\infty.$
(ii) \[covdecay\] For all $r >0$, we have $ \operatorname{Cov}(\Lambda(h_r), \Lambda(h_R)) \to 0$ as $R \to \infty$.
(iii) \[orseqtwo\] There exists a deterministic sequence $(n_k)$ with $n_k \to \infty$ such that is satisfied with $\Lambda$ in place of $\Pi$; that is, $$\frac{1}{K}\sum_{k=1} ^K
\big( \Lambda(h_{n_k}) - {{\mathbb E}}\Lambda (h_{n_k}) \big) {\xrightarrow{{{\mathbb P}}}}0 \quad\text{as }K\to\infty.$$
Lemma \[masterlemma\] parts and will allow us to use a weak law of large numbers to prove .
We may clearly assume without loss of generality that ${{\mathbb E}}Y_0=0$. Now apply Lemma \[masterlemma\] together with Proposition \[decayo\] .
Note that $$\label{form}
\Lambda(h_r) = \sum_{z\in{{\mathbb Z}}^2} h_r(z+Y_z).$$ Thus by the independence of the $Y_z$, we have $$\label{sumone}
\operatorname{Var}\Lambda(h_r) = \sum_{z\in{{\mathbb Z}}^2} \operatorname{Var}h_r(z+Y_z);$$ we will split this sum into two parts. We write $C_1,C_2$ for constants depending only on $\sigma^2$ and $c$.
Firstly, since $h_r$ has Lipschitz constant at most $c/r$, we have for all $z\in {{\mathbb Z}}^2$, $$\operatorname{Var}h_r (z+Y_z)\leq {{\mathbb E}}[(h_r (z+Y_z)-h_r (z))^2]\leq {{\mathbb E}}[(c\|Y_z\|/r)^2]=(c\sigma/r)^2,$$ therefore $$\label{sumtwo}
\sum_{z\in{{\mathbb Z}}^2:\\ \|z\|\leq 2r} \operatorname{Var}h_r(z+Y_z)\leq C_1.$$ Secondly, since $h_r$ has support in $B(0,r)$, $$\begin{aligned}
\operatorname{Var}h_r(z+Y_z) &\leq {{\mathbb E}}[h_r(z+Y_z)^2] \\ &\leq {{\mathbb P}}[z+Y_z\in B(0,r)] = {{\mathbb P}}[Y_0\in B(-z,r)],\end{aligned}$$ therefore $$\begin{aligned}
\label{sumthree}
\sum_{{z\in{{\mathbb Z}}^2: \|z\|> 2r}} \operatorname{Var}h_r(z+Y_z) &\leq
\sum_{{z\in{{\mathbb Z}}^2: \|z\|> 2r}} {{\mathbb P}}[Y_0\in B(-z,r)] \nonumber \\ &\leq
C_2 r^2 {{\mathbb P}}(\|Y_0\|>r)\leq C_2 \sigma^2.\end{aligned}$$ The result now follows by combining –.
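Part (i) can also be checked numerically. The sketch below is entirely illustrative (the choice of Lipschitz bump $h$, standard Gaussian perturbations, $r=10$, and $200$ trials are our assumptions): it estimates $\operatorname{Var}\Lambda(h_r)$ for a Gaussian-perturbed planar lattice, and the estimate stays $O(1)$, whereas a unit-intensity Poisson process would give a variance of order $r^2$.

```python
import math
import random

def h(x, y):
    # Lipschitz bump: 1 on B(0,1/2), 0 outside B(0,1), linear ramp between.
    r = math.hypot(x, y)
    return min(1.0, max(0.0, 2.0 * (1.0 - r)))

def lam_h_r(r, sigma=1.0):
    # One sample of Lambda(h_r) = sum_z h_r(z + Y_z), i.i.d. Gaussian Y_z.
    L = int(r) + 6  # sites with |z| > r + 6*sigma contribute negligibly
    total = 0.0
    for zx in range(-L, L + 1):
        for zy in range(-L, L + 1):
            px = zx + random.gauss(0.0, sigma)
            py = zy + random.gauss(0.0, sigma)
            total += h(px / r, py / r)
    return total

random.seed(0)
samples = [lam_h_r(10.0) for _ in range(200)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

Only the sites in the ramp annulus contribute appreciably to the variance, which is the content of the two sums bounded above.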
Note that by Lemma \[masterlemma\] , for all $r, R > 0$, we have that $\operatorname{Cov}(\Lambda(h_r), \Lambda(h_R)) < \infty.$ By and independence of the $Y_z$ we have $$\begin{aligned}
\operatorname{Cov}(\Lambda(h_r), \Lambda(h_R)) =&\,
{{\mathbb E}}\Big(\sum_{z \in {{\mathbb Z}}^2} h_r(z+Y_z) \; h_R(z + Y_z)\Big)
\\ &-\sum_{z \in {{\mathbb Z}}^2} {{\mathbb E}}h_r(z+Y_z) \;{{\mathbb E}}h_R(z + Y_z).\end{aligned}$$ Let $R > 2r$. If $h_r(z+ Y_z) >0$, then $h_R(z + Y_z) =1$; thus $$\operatorname{Cov}(\Lambda(h_r), \Lambda(h_R)) =
m_r - \sum_{z \in {{\mathbb Z}}^2} {{\mathbb E}}h_r(z+Y_z) \;{{\mathbb E}}h_R(z + Y_z).$$ Since $ h_R \uparrow 1$ as $R \to \infty$, for each $z \in {{\mathbb Z}}^2$ we have by the monotone convergence theorem that ${{\mathbb E}}h_R(z + Y_z) \uparrow 1$ as $R \to
\infty$. An additional application of the monotone convergence theorem shows that $$\lim_{R \to \infty} \sum_{z \in {{\mathbb Z}}^2} {{\mathbb E}}h_r(z+Y_z) \; {{\mathbb E}}h_R(z + Y_z)
= \sum_{z \in {{\mathbb Z}}^2} {{\mathbb E}}h_r(z+Y_z) =m_r.
\qedhere$$
We will employ the following weak law of large numbers for dependent sequences to prove Lemma \[masterlemma\] .
\[durrett\] Let $Z_1, Z_2, \ldots$ be real-valued random variables with finite second moments and zero means. If there exists a sequence $b(k)$ with $b(k) \to 0$ as $k \to \infty$ such that ${{\mathbb E}}(Z_n Z_m) \leq b(n-m)$ for all $n \geq m$, then $(Z_1 + \cdots + Z_n)/n \to 0$ in probability as $n \to
\infty$.
Lemma \[durrett\] is a straightforward generalization of the standard $L^2$ weak law. See [@MR1609153 Chapter 1, Theorem 5.2 and Exercise 5.2].
\[durrettm\] Let $Z_1, Z_2, \ldots$ be real-valued random variables with finite second moments and zero means. Suppose that there exists $C >0$, such that ${{\mathbb E}}|Z_m|^2 \leq C$ for all $m \in {{\mathbb Z}}^{+}$. If for all $m \in
{{\mathbb Z}}^{+}$ we have ${{\mathbb E}}(Z_mZ_n) \to 0$ as $n \to \infty$, then there exists an increasing sequence of positive integers $(r_n)$ such that $(Z_{r_1} +
\cdots+ Z_{r_n})/n \to 0$ in probability as $n \to \infty$. Furthermore, for any further subsequence $(r_{n_k})$ we have $(Z_{r_{n_1}} + \cdots
+Z_{r_{n_k}})/k \to 0$ in probability as $k \to \infty$.
Consider the sequence $b(k):=1/k$, where we set $b(0) = C$. We will show that there exists a sequence $r_k$ so that ${{\mathbb E}}( Z_{r_n} Z_{r_m}) \leq 1/n$ for all $n >m$. Thus $Z_{r_k}$ satisfies the conditions of Lemma \[durrett\] with $b(k)$. We proceed by induction. Set $r_1 = 1$. Suppose that $r_2,
\ldots, r_{k-1}$ have already been defined and satisfy ${{\mathbb E}}(Z_{r_n} Z_{r_m})
\leq 1/n$ for all $1 \leq m < n \leq k-1$. It follows from Lemma \[masterlemma\] that there exists an integer $R > 0$ such that ${{\mathbb E}}(Z_{r_m}Z_R) \leq 1/k$ for all $1 \leq m \leq k-1$; set $r_k:= R$. Furthermore, if $(r_{n_k})$ is a subsequence of $(r_n)$, we have that if $m
<k$, then ${{\mathbb E}}(Z_{r_{n_m}} Z_{r_{n_k}}) \leq { 1 /n_k } \leq
{ 1 /k }.$ Thus $Z_{r_{n_k}}$ satisfies the conditions of Lemma \[durrett\] with $b(k)$.
For each $n \in {{\mathbb Z}}^{+}$, set $Z_{n} := \Lambda(h_{n}) - m_{n}$. By Lemma \[masterlemma\] parts and , $Z_n$ satisfies the conditions of Corollary \[durrettm\].
Perturbed lattices in dimension $1$
-----------------------------------
The proof of Proposition \[pertone\] for the case $d=1$ relies on the following lemma.
\[mlemma\] Let $(Y_z: z \in {{\mathbb Z}})$ be i.i.d. ${{\mathbb R}}$-valued random variables. Let $\Lambda$ be the point process given by $[\Lambda]:= { \left\{ {z + Y_z: z\in
{{\mathbb Z}}} \right\} }$. Define $h(x) := {{{\mathbf 1}}_{(-1, 1]}}(x)$ for all $x \in {{\mathbb R}}$ and set $h_n(x)
:= h(x/n)$ for $x \in {{\mathbb R}}$ and $n \in {{\mathbb Z}}^{+}$. For each $n \in {{\mathbb Z}}^{+}$, let $N_n := \Lambda(h_n) - {{\mathbb E}}\Lambda(h_n).$ Assume that ${{\mathbb E}}|Y_0| <
\infty$.
(i) \[tight\] The family of random variables $(N_n)_{n\in {{\mathbb Z}}^+}$ is tight and integer-valued.
(ii) \[cov\] For any $k,\ell \in {{\mathbb R}}$ and $a \in {{\mathbb Z}}^{+}$, $${{\mathbb P}}(N_a \leq k, N_n \leq \ell) - {{\mathbb P}}(N_a \leq k) \;
{{\mathbb P}}(N_n \leq \ell) \to 0 \quad\text{as} \ n\to\infty.$$
(iii) \[done\] There exists a deterministic sequence $(n_k)$ with $n_k \to \infty$ and an integer-valued random variable $N$ such that is satisfied; that is, for all $\ell \in {{\mathbb R}}$, $$\frac{1}{K}\sum_{k=1} ^K \mathbf{1}[N_{n_k} \leq \ell] \
\stackrel{{{{\mathbb P}}}}{\to} \ {{\mathbb P}}(N \leq \ell) \quad \text{as } K \to \infty.$$
As in the case $d=2$, Lemma \[mlemma\] parts and will allow us to use a weak law of large numbers to prove . Let us note that the assumption that ${{\mathbb E}}|Y_0| < \infty$ is not necessary for Lemma \[mlemma\] part .
Apply Lemma \[mlemma\] together with Proposition \[decayo\] .
The following simple calculation (an instance of the ‘mass-transport principle’) shows that ${{\mathbb E}}\Lambda(0,1] =1$: $${{\mathbb E}}\Lambda (0,1]
= \sum_{ z \in {{\mathbb Z}}} {{\mathbb P}}\big(Y_z +z \in (0,1]\big)
= \sum_{z \in {{\mathbb Z}}} {{\mathbb P}}\big(Y_0 \in (-z, -z+1]\big)= 1.$$ Thus $$\label{integervalued}
N_n = \Lambda(-n, n] - 2n \ \text{for all} \ n \in {{\mathbb Z}}^{+}.$$
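Both the integrality and the tightness asserted in part (i) can be seen in a small simulation. In this sketch (ours; the bounded uniform perturbation law is an illustrative assumption) only the six sites within distance $1$ of $\pm n$ are uncertain, so $N_n$ takes integer values in $[-3,3]$ for every $n \geq 3$:

```python
import random

def perturbed_count(n, Y):
    # N_n = Lambda(-n, n] - 2n, where the Lambda-points are z + Y_z.
    return sum(1 for z, y in Y.items() if -n < z + y <= n) - 2 * n

random.seed(1)
M = 60
# Bounded perturbations (illustrative assumption): Y_z uniform on (-2, 2).
Y = {z: random.uniform(-2.0, 2.0) for z in range(-M - 3, M + 4)}
values = [perturbed_count(n, Y) for n in range(3, M + 1)]
```

With heavier-tailed $Y_0$ the window widens but, as long as ${{\mathbb E}}|Y_0| < \infty$, the family $(N_n)$ remains tight by the argument above.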
For $A, B \subseteq {{\mathbb R}}$, write $$T_A ^B:= \# { \left\{ {z \in A \cap {{\mathbb Z}}: z + Y_z \in
B} \right\} };$$ that is, the number of $\Lambda$-points in $B$ that originated from $A$. Observe that for $n \in {{\mathbb Z}}^{+}$, $$\label{fourterms}
N_n = T_{(n, \infty)} ^{(-n,n]} + T_{(-\infty, -n]} ^{(-n,n]}
- T_{(-n,n]} ^{(n, \infty)} - T_{(-n,n]} ^{(-\infty, -n]}.$$ On the other hand, ${{\mathbb E}}|Y_0| < \infty$ implies easily that $K_+:= {{\mathbb E}}T_{(-\infty, 0]} ^{[0, \infty)} < \infty$ and $K_-:= {{\mathbb E}}T_{[0, \infty)}
^{(-\infty, 0]} < \infty$. By translation-invariance, each term on the right side of is bounded in expectation by one of these constants; for instance: ${{\mathbb E}}T_{(n, \infty)} ^{(-n,n]}\leq {{\mathbb E}}T_{[n, \infty)}
^{(-\infty,n]}=K_-$. Hence ${{\mathbb E}}| N_n| \leq 2K_+ +2K_-$ for all $n\in
{{\mathbb Z}}^{+}$.
Let ${ {\mathfrak F} }_n:=\sigma(\{z+Y_z \in [-n, n]\}: z \in {{\mathbb Z}})$. We will show that for any event $E \in \sigma(Y_z:z\in{{\mathbb Z}})$, we have $$\label{asy}
{{\mathbb P}}(E \mid { {\mathfrak F} }_n) \to {{\mathbb P}}(E) \ \text{a.s. as} \ n \to \infty.$$ From , the result follows, since $\{N_n \leq \ell\}\in{ {\mathfrak F} }_n.$ It suffices to check for $E$ in the generating algebra of events that depend on only finitely many of the $Y_z$. But for such an event, say $E\in\sigma(Y_z:-m\leq z\leq m)$, we observe that ${{\mathbb P}}(E\mid{ {\mathfrak F_n} })$ equals the conditional probability of $E$ given the [*finite*]{} $\sigma$-algebra $\sigma(\{z+Y_z\in[-n,n]\}: -m\leq z\leq m)$, hence the required convergence follows from an elementary computation.
By Lemma \[mlemma\] we may choose an integer-valued $N$ and a subsequence $(c_n)$ so that $N_{c_n} \stackrel{d}{\to} N$ as $n \to \infty$. We will show that for all $\ell \in {{\mathbb Z}}$, there is a further subsequence $c_{n_k} = :r_k$ such that $$\label{almost}
\frac{1}{n}\sum_{k=1} ^n \Big[\mathbf{1}[N_{r_k} \leq \ell] - {{\mathbb P}}(N_{r_k} \leq \ell) \Big]
\ \stackrel{{{\mathbb P}}}{\to} \ 0 \ \text{as} \ n \to \infty.$$ Clearly, the result follows from and the fact that $N_{r_k}
\stackrel{d}{\to} N$ as $k \to \infty$.
We use Corollary \[durrettm\] in conjunction with a diagonal argument to prove . Consider an enumeration of the integers given by $\ell_1, \ell_2, \ldots$ For each $i \in {{\mathbb Z}}^{+}$, let $Z_{k} ^{i}:=
\mathbf{1}[N_{c_k} \leq \ell_i] - {{\mathbb P}}(N_{c_k} \leq \ell_i)$. By Lemma \[mlemma\] and Corollary \[durrettm\], there exists a subsequence $c^{1}_{n_k} := r^1_k$ such that holds with $r_k$ replaced by $r^1_k$, and $\ell$ replaced by $\ell_1$. Similarly, we may choose $(r^2_k)$ to be a subsequence of $(r^1_k)$ so that holds with $r_k$ replaced by $r^2_k$, and $\ell$ replaced by $\ell_2$; moreover Corollary \[durrettm\] assures us that holds with $r_k$ replaced by $r^2_k$, and $\ell$ replaced by $\ell_1$. Similarly define the sequence $(r^{i}_k)$ for each $i \in {{\mathbb Z}}^{+}$. By taking the diagonal sequence $r_k := r^k_k$, we see that holds for all $\ell \in {{\mathbb Z}}$.
Gaussian zeros in the hyperbolic plane
--------------------------------------
The proof of Proposition \[gausszeros\] uses the following consequence of a result of Peres and Virág.
\[mini\] If ${\Upsilon_{\mathbb{D}}}$ is the Gaussian zero process on the hyperbolic plane and ${\Upsilon_{\mathbb{D}}}^{*}$ is its Palm version, then ${\Upsilon_{\mathbb{D}}}^{*} \prec {\Upsilon_{\mathbb{D}}}+ \delta_0$ and ${\Upsilon_{\mathbb{D}}}+ \delta_0 \prec {\Upsilon_{\mathbb{D}}}^{*}$.
Let ${\Upsilon_{\mathbb{D}}}$ be the process of zeros of $\sum_{n=0} ^ {\infty} a_n z^n$, where the $a_n$’s are i.i.d. standard complex Gaussian random variables. Let $E_k$ be the event that ${\Upsilon_{\mathbb{D}}}(B(0,1/k )) >0$. Peres and Virág [@MR2231337 Lemma 18] prove that the conditional law of $(a_0,a_1,\ldots)$ given $E_k$ converges as $k\to\infty$ to the law of $(0,\widehat{a}_1,a_2,\ldots)$, where $\widehat{a}_1$ is independent of the $a_n$’s, and has a rotationally symmetric law with $|\widehat{a}_1|$ having probability density $2r^3 e^{-r ^2}$.
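As a quick sanity check (ours), the stated law of $|\widehat{a}_1|$ is indeed a probability density: substituting $u = r^2$, $$\int_0^\infty 2 r^3 \mathrm{e}^{-r^2}\, dr = \int_0^\infty u\, \mathrm{e}^{-u}\, du = 1.$$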
Let ${\widehat{\Upsilon}_{\mathbb{D}}}$ be the process of zeros of the power series with coefficients $(0,\widehat{a}_1,a_2,\ldots)$. Since the latter sequence is mutually absolutely continuous in law with $(0,a_1,a_2\ldots)$, we have that ${\widehat{\Upsilon}_{\mathbb{D}}}$ and ${\Upsilon_{\mathbb{D}}}+\delta_0$ are mutually absolutely continuous in law.
By Rouché’s theorem from complex analysis [@Gamelin Ch. 8, p. 229], the above convergence implies that the conditional law of ${\Upsilon_{\mathbb{D}}}$ given $E_k$ converges to the law of ${\widehat{\Upsilon}_{\mathbb{D}}}$ (the convergence is in distribution with respect to the vague topology for point processes). By [@MR818219 Theorem 12.8] it follows that ${\widehat{\Upsilon}_{\mathbb{D}}}{\stackrel{d}{=}}{\Upsilon_{\mathbb{D}}}^{*}.$
It follows from Proposition \[mini\] and Theorems \[thm-instol-stat-eq\] and \[suff\] with Remark \[gen\] that the Gaussian zero process on the hyperbolic plane is insertion-tolerant and deletion-tolerant.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Omer Angel and Yuval Peres for many valuable conversations. Terry Soo thanks the organizers of the 2010 PIMS Summer School in Probability.
[10]{}
R. M. Burton and M. Keane. Density and uniqueness in percolation. , 121(3):501–505, 1989.
D. J. Daley and G. Last. Descending chains, the lilypond model, and mutual-nearest-neighbour matching. , 37(3):604–628, 2005.
D. J. Daley and D. Vere-Jones. . Probability and its Applications (New York). Springer, New York, second edition, 2008. General theory and structure.
R. Durrett. . Duxbury Press, Belmont, CA, second edition, 1996.
D. Gale and L. Shapley. College admissions and stability of marriage. , 69:9–15, 1962.
T. W. Gamelin. . Undergraduate Texts in Mathematics. Springer-Verlag, New York, 2001.
O. Häggström and R. Meester. Nearest neighbor and hard sphere models in continuum percolation. , 9(3):295–315, 1996.
C. Hoffman, A. E. Holroyd, and Y. Peres. A stable marriage of Poisson and Lebesgue. , 34(4):1241–1272, 2006.
A. E. Holroyd, R. Pemantle, Y. Peres, and O. Schramm. Poisson matching. , 45(1):266–287, 2009.
A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. , 33(1):31–52, 2005.
J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. , volume 51 of [*University Lecture Series*]{}. American Mathematical Society, Providence, RI, 2009.
O. Kallenberg. . Akademie-Verlag, Berlin, third edition, 1983.
O. Kallenberg. . Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.
O. Kallenberg. Invariant measures and disintegrations with applications to Palm and related kernels. , 139(1-2):285–310, 2007.
D. E. Knuth. , volume 10 of [*CRM Proceedings & Lecture Notes*]{}. American Mathematical Society, Providence, RI, 1997. An introduction to the mathematical analysis of algorithms, Translated from the French by Martin Goldstein and revised by the author.
G. Last. Modern random measures: Palm theory and related models. In [*New perspectives in stochastic geometry*]{}. Clarendon Press, Oxford, 2008.
G. Last. Stationary random measures on homogeneous spaces. , 2009.
P. Mattila. , volume 44 of [*Cambridge Studies in Advanced Mathematics*]{}. Cambridge University Press, Cambridge, 1995. Fractals and rectifiability.
R. Meester and R. Roy. , volume 119 of [*Cambridge Tracts in Mathematics*]{}. Cambridge University Press, Cambridge, 1996.
Y. Peres and B. Virág. Zeros of the i.i.d. Gaussian power series: a conformally invariant determinantal process. , 194(1):1–35, 2005.
M. Sodin and B. Tsirelson. Random complex zeroes. I. Asymptotic normality. , 144:125–149, 2004.
H. Thorisson. Transforming random elements and shifting random fields. , 24:2057–2064, 1996.
[^1]: Funded in part by Microsoft Research (AEH) and NSERC (both authors)
---
author:
- |
Richard D. Kenway,\
The Higgs Centre for Theoretical Physics, School of Physics and Astronomy,\
University of Edinburgh\
Edinburgh, EH9 3FD, UK\
E-mail:
title: 'Five-dimensional Gauge Theories in a warped background'
---
Introduction
============
Extra-dimensional theories offer a solution to the hierarchy problem. Even though collider experiments have not provided evidence of the existence of extra dimensions, they have not excluded them either. All higher-dimensional theories must undergo dimensional reduction to be compatible with the observed four-dimensional world. This can be achieved by compactification or by localization, and our work focuses on a possible way of achieving the latter.
Higher-dimensional theories are perturbatively non-renormalizable, and therefore the techniques of lattice gauge theory provide a tool for their investigation. Usually, phase diagrams are obtained and one seeks regions where the system is dimensionally reduced and a continuum theory can be defined by means of a second-order phase transition. The phase diagram of the pure five-dimensional SU(2) lattice gauge theory has two phases (the 5D deconfining and confining phases), which are separated by a first-order phase transition; it is thus physically uninteresting.
In 1984, Fu and Nielsen showed that if an abelian anisotropic higher-dimensional lattice gauge theory is considered, there is an additional phase, called the layered phase [@Fu]. In this new phase, the 4D hyperplanes transverse to the extra dimensions can be seen as layers where the gauge fields are localized. The existence of a critical point in the non-abelian case, where a 4D continuum theory could be defined is still in doubt [@DelDebbio:2013rka]. However, results from [@MeanField1] suggest that interesting physics happens close to the transition line from the Coulomb to the layered phase.
A well-known class of five-dimensional models are the so-called Randall-Sundrum models [@Randall:1999ee; @Randall:1999vf], which are embedded in a warped background, given by $$\label{eq:warped_metric}
ds^2 = \mbox{e}^{-2k|y|} \eta_{\mu\nu}dx^\mu dx^\nu + dy^2$$ where $k$ is the curvature. We also define $f(y) = \mbox{e}^{-2k|y|}$, that is called the warp factor, for later use. In these models, the extra dimension has a finite extent, $L_5$, and 3-branes (or 4D layers) are placed at $y=0$ and $y=L_5$. Even though the dimensional reduction of all fields in these models is achieved by localization, the mechanism behind localization of gauge fields is still elusive, as charge universality is violated when the usual techniques are employed. In this work, we make a first attempt to investigate the gauge sector of the anisotropic SU(2) lattice gauge theory embedded in a warped metric. The presence of a layered phase might signal a possible non-perturbative way of localizing gauge fields.
The Mean-Field approach
=======================
The five-dimensional gauge action in the warped background in the continuum is given by $$S_{AdS_5} = \int d^4x \int dy \Big [ \frac{1}{4g_5^2}F_{\mu\nu}^2 + \frac{1}{2g_5^2}f(y)F_{\mu 5}^2 \Big ] \;; \;\;\;\;\; f(y) = \mbox{e}^{-2ky}$$ We call this action $S_{AdS_5}$ as the extra dimension is in a slice of the $AdS_5$ spacetime. Its discretized version imposing an anisotropy is given by $$S_{AdS_5} = \frac{\beta}{\gamma} \sum_{4D} \Big (1-\frac{1}{2} \Real \Tr U_{\mu\nu}(n,n_5)\Big) +\beta \gamma \sum_{5D} \Big (1-\frac{1}{2} \Real \Tr f(n_5) U_{\mu 5}(n,n_5)\Big)$$ where $\gamma$ is the anisotropy parameter and the plaquettes along the usual four dimensions and those extended in the extra dimension are given by Eq. (\[eq.Plaqs\])-(\[eq.Plaq5\]) respectively, where $\mu,\nu=0,1,2,3$ $$\begin{aligned}
&U_{\mu\nu}(n,n_5) = U_\mu(n,n_5)U_\nu(n+a_4\hat \mu,n_5) U^\dagger_\mu(n+ a_4\hat \nu,n_5)U^\dagger _\nu(n,n_5) \label{eq.Plaqs} \\
&U_{\mu5}(n,n_5) = U_\mu(n,n_5)U_5(n+a_4\hat \mu,n_5) U^\dagger_\mu(n, n_5+ a_5 \hat 5)U^\dagger _5(n,n_5).\label{eq.Plaq5}\end{aligned}$$ The warp factor in our lattice action is anticipated to affect the lattice spacing, leading to large finite-size effects. This suggests that a proper investigation of the system using Monte Carlo simulations will be computationally expensive and, as we have no previous studies to guide us to specific regions of parameter space, the first exploration was undertaken using the Mean-Field approximation, specifically employing the saddle-point approach.
Following the standard procedure that is described in [@Drouffe:1983fv], we found the effective action to be $$\begin{aligned}
\label{eq:SeffADS}
S_{\rm{eff}} = S_{AdS_5}[V_\mu,V_5] &+ \sum_{n;n_5} \bigg [ \sum_\mu u[H_\mu(n,n_5)] + u_5[H_5(n,n_5)] \nonumber \\
&+ \sum_\alpha h_{\alpha_\mu}(n,n_5) v_{\alpha_\mu}(n,n_5) + \sum_\alpha h_{\alpha_5}(n,n_5) v_{\alpha_5}(n,n_5) \bigg ]\end{aligned}$$ where $V$ and $H$ are $2\times 2$ matrices that are used to replace the group-constrained integration measure in the path integral with a flat measure and $v_\alpha$ and $h_\alpha$ are their components after parametrization $(\alpha=0,1,2,3)$. We also define $$\begin{aligned}
\mathrm{e}^{-u[H_M(n,n_5)]} &= \int_{{{\rm SU}}(2)} {\cal D} U \mbox{e}^{\frac{1}{2} \Real \Tr (UH_M)} \end{aligned}$$ which gives $$u(H_M) = -\ln \bigg( \frac{2}{\rho_M} I_1(\rho_M) \bigg); \;\;\;\;\; \rho_M = \sqrt{\big[ \mbox{Re}(h_{M_0}) \big]^2 + \sum_A \big[ \mbox{Re}(h_{M_A}) \big]^2}, \;\;\;\;\; M=\mu, 5.$$ where $I_1$ is the modified Bessel function of the first kind of order 1.
Then one usually finds the saddle-point equations and sets the fields to a constant value proportional to the identity matrix. In our case, as the background depends on the extra dimension, there is a mean-field value for each point along the extra dimension. This extra-dimensional dependence of the mean fields was also seen in the construction of the SU(2) theory in an orbifold [@Irges:2012ih] and for our convenience in first-order correction calculations, we made a scale transformation of the fields so that the $AdS_5$ action in Eq. (\[eq:SeffADS\]) will look like the flat SU(2) gauge action, i.e. without the factor $f(n_5)$ in front of the extra-dimensional plaquettes. The scaling of the fields is done only on the fields that involve the extra dimension, whereas the fields in the usual four dimensions remain the same $$\begin{aligned}
\label{eq:ReDfnv}
&V_\mu(n,n_5) = V'_\mu(n,n_5) \Rightarrow V_{\mu \nu}(n,n_5)= V'_{\mu \nu}(n,n_5) \nonumber \\
&V_\mu(n,n_5) = \sqrt{f(n_5)} V'_\mu(n,n_5) \Rightarrow V_{\mu 5}(n,n_5) = f(n_5) V'_{\mu 5}(n,n_5).\end{aligned}$$ Looking at the effective action in Eq. (\[eq:SeffADS\]) we see that we also need to rescale $H_5$ as $$\label{eq:ReDfnh}
H_5(n,n_5) = \frac{1}{\sqrt{f(n_5)}}H'_5(n,n_5)$$ so that we get $$h_{\alpha_5}(n,n_5) v_{\alpha 5}(n,n_5) = h'_{\alpha_5}(n,n_5) v'_{\alpha 5}(n,n_5).$$ Rescaling the external field in the fifth dimension though, changes the term $u_5[H_5(n,n_5)]$ that becomes $$\begin{aligned}
\mathrm{e}^{-u[H_5'(n,n_5)]}&=\int_{{{\rm SU}}(2)} {\cal D} U \mbox{e}^{\frac{1}{2}\sqrt{f(n_5)} \Real \Tr (U H_5)}.\end{aligned}$$ The extra factor that involves the warp factor does not affect the nature of the group integral so it can be evaluated as usual using character expansions which results in $$\label{eq:u5}
u_5(H_5') = -\ln \bigg ( \frac{2}{\rho_{5}(n_5) \sqrt{f(n_5)}}I_1\big (\rho_5(n_5)\sqrt{f(n_5)}\big ) \bigg)$$ where $$\rho_5(n_5) = \sqrt{\big [ \mbox{Re}(h_{5_0}(n_5)) \big ]^2 + \sum_A\big [ \mbox{Re}(h_{5_A}(n_5))\big ]^2}.$$ Next the saddle-point solutions are determined by setting the fields to a background value which, in contrast to the flat case, has an extra-dimensional dependence, i.e. $$\begin{aligned}
&V_\mu(n,n_5) = \bar v_4(n_5)\mathbb{1} \;\;\;\;\;\;\;\;\;\; H_\mu(n,n_5) = \bar h_4(n_5) \mathbb{1} \nonumber \\
&V_5(n,n_5) = \bar v_5(n_5) \mathbb{1} \;\;\;\;\;\;\;\;\; \;H_5(n,n_5) = \bar h_5(n_5) \mathbb{1}.\end{aligned}$$ This leads to the saddle-point equations given by $$\begin{aligned}
\label{eq:SaddlePointEqns}
&\bar v_4(n_5) = \frac{I_2(\bar h_4(n_5))}{I_1(\bar h_4(n_5))} \nonumber \\
&\bar v_5 (n_5) = \frac{I_2(\sqrt{f(n_5)}\bar h_5(n_5))}{I_1(\sqrt{f(n_5)}\bar h_5(n_5))} \nonumber \\
&\bar h_4(n_5) = 6 \frac{\beta}{\gamma} \bar v_4^3(n_5) + \beta \gamma \bar v_5^2 (n_5) \bar v_4(n_5+a_5) + \beta \gamma \bar v_5^2(n_5-a_5) \bar v_4(n_5-a_5) \nonumber \\
&\bar h_5(n_5) = 8 \beta \gamma \bar v_5(n_5) \bar v_4(n_5) \bar v_4(n_5+a_5). \end{aligned}$$
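As a concrete illustration, the coupled equations above can be solved by naive fixed-point iteration. The sketch below is ours and deliberately simplified: it uses a one-sided warp factor $f(n_5)=\mathrm{e}^{-2kn_5}$ with periodic neighbour indexing rather than the reflected boundaries used in our actual computation, and a truncated series for the Bessel functions, so it reproduces the qualitative phase structure only, not the quoted results.

```python
import math

def bessel_i(nu, x, terms=40):
    # Truncated power series for the modified Bessel function I_nu,
    # integer nu >= 0; adequate here since x stays below ~25.
    return sum((x / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.factorial(m + nu))
               for m in range(terms))

def ratio21(x):
    # I2(x)/I1(x); the small-argument limit is x/4.
    return x / 4.0 if x < 1e-8 else bessel_i(2, x) / bessel_i(1, x)

def solve_saddle(beta, gamma, k=0.10, n5=8, sweeps=200):
    """Fixed-point iteration of the mean-field saddle-point equations
    (simplified boundary conditions: periodic neighbour indexing)."""
    f = [math.exp(-2.0 * k * n) for n in range(n5)]
    v4, v5 = [1.0] * n5, [1.0] * n5
    for _ in range(sweeps):
        h4 = [6 * (beta / gamma) * v4[n] ** 3
              + beta * gamma * v5[n] ** 2 * v4[(n + 1) % n5]
              + beta * gamma * v5[n - 1] ** 2 * v4[n - 1]
              for n in range(n5)]
        h5 = [8 * beta * gamma * v5[n] * v4[n] * v4[(n + 1) % n5]
              for n in range(n5)]
        v4 = [ratio21(h4[n]) for n in range(n5)]
        v5 = [ratio21(math.sqrt(f[n]) * h5[n]) for n in range(n5)]
    return v4, v5
```

At small $\beta$ all mean fields iterate to zero (the strong-coupling phase), while at $\beta=2.50$, $\gamma=1.00$ the four-dimensional links stay ordered on every layer, in line with the deconfining behaviour discussed below.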
The phase-diagram
=================
The first thing we did was to investigate the phase diagram. We made a specific choice of boundary conditions, where we reflected the system in the negative $n_5$ direction and then repeated the system periodically. We call this Periodic Boundary Conditions (PBC) and we have checked with other choices that the system in the middle of the fifth dimension is not affected by the boundary conditions. Then we solved the coupled equations as given in Eq. (\[eq:SaddlePointEqns\]) and for each layer (i.e. each $n_5$) we identified three phases according to the following:
- [$v_4(n_5) = 0$, $v_5(n_5) = 0$ Strong-coupling phase (S) ]{}
- [$v_4(n_5) \neq 0$, $v_5(n_5) \neq 0$ Deconfining phase (D)]{}
- [$v_4(n_5) \neq 0$, $v_5(n_5) = 0$ Layered phase (L)]{}.
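Given a saddle-point solution, the per-layer classification above amounts to a threshold test on the order parameters; a minimal sketch, where the tolerance `eps` is an illustrative choice:

```python
def classify_layer(v4, v5, eps=1e-6):
    """Map a layer's (v4, v5) order parameters to a phase label:
    S = strong-coupling, D = deconfining, L = layered."""
    if abs(v4) < eps and abs(v5) < eps:
        return "S"
    if abs(v4) >= eps and abs(v5) < eps:
        return "L"
    if abs(v4) >= eps and abs(v5) >= eps:
        return "D"
    return "?"  # v4 = 0 with v5 != 0 does not occur in practice
```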
We computed the free energy at first order, in an analogous way to [@Irges:2012ih], to check the stability of the critical points and, as far as we could check, those presented in this phase diagram are stable.
We chose to keep the curvature fixed to the value $k=0.10$ and the lattice size in the positive $n_5$ direction to be $N_5=8$. The layers in the negative $n_5$ direction were matched with layers in the positive $n_5$ direction and thus we consider only the latter in the phase diagram given below. Even though the transition to the confining phase seems to happen at the same point for all layers, we observe that each layer goes from a deconfining phase to the layered phase at different values of $(\beta,\gamma)$. Therefore, we observe an extra phase, *a mixed phase*, where some layers are in the weak-coupling phase and some are in the layered one. This can be seen in Fig. \[fig:PhaseDiagram\_k010\] as the phase between the orange and the red points.
![The phase diagram obtained for each layer for fixed $k=0.10$. We observe three phases, the confining(S), the deconfining(D) and the layered(L). However, there is a new phase that appears, the mixed phase, in which some of the layers are in the layered phase and some are in the deconfining phase. The width of the mixed phase increases with increasing $k$. []{data-label="fig:PhaseDiagram_k010"}](PhaseDiagram_k010_N56.eps)
The static potential
====================
As the main focus of our work is to find evidence of localization of gauge fields, we measured the static potential for each layer at two points in parameter space to investigate its form. Keeping the value $k=0.10$ fixed and a lattice size of $T=L=32,N_5=8$, we chose values of $(\beta,\gamma)$ by inspecting the phase diagram of Fig. \[fig:PhaseDiagram\_k010\]. The first one was $(2.50,1.00)$ which is away from any phase transition and deep into the deconfining phase. We fitted the mean-field potential points to four different forms: 4D Coulomb, 4D Yukawa, 5D Coulomb and 5D Yukawa. Unfortunately, we could not unambiguously distinguish the form of the potential, as only the 4D Coulomb potential could be excluded, while the rest appeared to be good fits to the potential in all 8 layers. For the last few layers, both 4D and 5D Yukawa forms fitted the MF points well. Fits to all forms of the potential for the last layer $n_5=8$ can be seen in Fig. \[fig:Potential\_fits\].
![Fits to the static potential of the last layer $n_5=8$ using various potential forms for lattice sizes of $T=L=32, N_5=8$ for two different parameter space points: $\beta=2.50$, $\gamma=1.00, k=0.10$ (left) and $\beta=2.30$, $\gamma=0.505, k=0.10$ (right).[]{data-label="fig:Potential_fits"}](a4V4_b2500_g1000_k020_y-7.eps "fig:") ![Fits to the static potential of the last layer $n_5=8$ using various potential forms for lattice sizes of $T=L=32, N_5=8$ for two different parameter space points: $\beta=2.50$, $\gamma=1.00, k=0.10$ (left) and $\beta=2.30$, $\gamma=0.505, k=0.10$ (right).[]{data-label="fig:Potential_fits"}](a4V4_b2300_g0505_k020_y-7.eps "fig:")
The second point considered was $(2.30,0.505)$, which is close to the transition from the deconfining to the mixed phase. Here the potential behaves as a 4D Yukawa one for all layers. Starting from the first layer, $n_5=1$, the 5D Yukawa and Coulombic forms also fitted quite well, but at larger values of $n_5$ the quality of these fits deteriorates, so, at least for the last layers, we can say with confidence that the potential behaves as a 4D Yukawa one (Fig. \[fig:Potential\_fits\]).
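The fit comparison can be illustrated with a short script. The four ansätze below are standard forms for 4D/5D Coulomb and Yukawa potentials (the exact parametrizations we used are not reproduced here), applied to synthetic noiseless 4D Yukawa "data":

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard ansaetze for the static potential (c is a constant offset)
def coulomb_4d(r, c, a):    return c - a / r
def yukawa_4d(r, c, a, m):  return c - a * np.exp(-m * r) / r
def coulomb_5d(r, c, a):    return c - a / r**2
def yukawa_5d(r, c, a, m):  return c - a * np.exp(-m * r) / r**2

r = np.arange(1.0, 16.0)
v = yukawa_4d(r, 0.5, 0.3, 0.2)   # synthetic data: a 4D Yukawa potential

def chi2(model, p0):
    """Sum of squared residuals of the best fit of `model` to (r, v)."""
    popt, _ = curve_fit(model, r, v, p0=p0, maxfev=10000)
    return float(np.sum((model(r, *popt) - v) ** 2))

fits = {
    "4D Coulomb": chi2(coulomb_4d, [0.5, 0.3]),
    "4D Yukawa":  chi2(yukawa_4d,  [0.5, 0.3, 0.1]),
    "5D Coulomb": chi2(coulomb_5d, [0.5, 0.3]),
    "5D Yukawa":  chi2(yukawa_5d,  [0.5, 0.3, 0.1]),
}
```

On such input the 4D Yukawa fit is essentially exact while the 4D Coulomb form is clearly disfavoured, mirroring the exclusion pattern described above.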
All the above provide preliminary evidence that, since a non-zero Yukawa mass is obtained, the system close to the transition line is in a 4D Higgs-like phase and not in a Coulombic phase. To check that the Yukawa mass is not the result of the finite extent of our system and remains non-zero in the infinite-volume limit, we performed finite-size scaling on the Yukawa mass and indeed obtained a non-zero value for the infinite-volume Yukawa mass on each layer, as shown in Fig. \[fig:a4mY\_b2300\_g0505\_k020\]. This further supports our suspicion that the system is in a Higgs-like phase and not in a Coulombic phase.
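The extrapolation step can be sketched as follows, assuming a simple $m(L) = m_\infty + c/L$ ansatz and illustrative data (the actual fit form and data behind Fig. \[fig:a4mY\_b2300\_g0505\_k020\] are not reproduced here):

```python
import numpy as np

def infinite_volume_mass(L, m):
    """Linear fit of m(L) against 1/L; the intercept is m_infinity."""
    slope, intercept = np.polyfit(1.0 / np.asarray(L, dtype=float),
                                  np.asarray(m, dtype=float), 1)
    return intercept

L_sizes = np.array([24, 32, 48, 100])     # lattice extents used in the text
m_true, c = 0.15, 0.8                     # illustrative parameters
masses = m_true + c / L_sizes             # noiseless synthetic data
m_inf = infinite_volume_mass(L_sizes, masses)
```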
![The infinite-volume Yukawa mass in lattice spacing units on each 4D layer for $\beta=2.30, \gamma=0.505, k=0.10, N_5=8$ as found by finite-size scaling analysis using lattice sizes of $T=L=24,32,48,100$. All error bars are tiny except for the last layer.[]{data-label="fig:a4mY_b2300_g0505_k020"}](a4mY_b2300_g0505_k020_infVol_v2.eps)
Conclusions and Future work
============================
The mean-field calculations of the static potential show the existence of a Yukawa mass, suggesting the presence of a 4D Higgs-like phase close to the transition line in the phase diagram. This indicates that some symmetry breaking takes place, even though none is enforced here by the choice of boundary conditions, as was done in previous investigations [@Irges:2012ih; @Alberti:2015pha]. The only modification in our system from the flat case, where the Higgs-like phase is absent, is the introduction of the curvature along the transverse direction. Thus, we tentatively conclude that the warping breaks the symmetry everywhere in the deconfining phase, giving a Higgs-like phase there. This was not clear from the form of the potential away from the transition line, but was not excluded either. So further studies are necessary in order to clarify the nature of the phase in the weak-coupling regime.
We return to the question that motivated this project, i.e. whether there is a dimensionally reduced phase close to the layered phase. If there is a 5D Higgs-like phase away from the transition line, then we might have dimensional reduction via localization, analogous to the one found in [@Alberti:2015pha], where they explicitly broke the symmetry using the orbifold. If not, then the system, due to the warping, behaves as a four-dimensional one everywhere outside the strong-coupling phase. It is noteworthy that we have used a small extent of lattice points along the extra dimension, which restricts the region of the mixed phase to a small width. It appears likely that the pure deconfining phase is a finite-size effect of the fifth direction, and the infinite system is actually in a 4D Higgs-like phase everywhere in the weak-coupling regime.
There are a number of open questions that still need to be resolved by further work. One is whether this Higgs-like phase is physical, a lattice artefact, or an artefact of the mean-field approximation. Also, nothing can be said about the layered phase at the moment. Studies using Monte Carlo simulations are expected to show the true behaviour in this phase, which might be 4D Higgs-like, or Coulombic. All in all, our tentative conclusion that there is a 4D Higgs-like phase motivates a range of further tests and explorations, especially with numerical simulations, to clarify the effect of warping.
Acknowledgments {#acknowledgments .unnumbered}
===============
E.L. is supported by an STFC studentship. We thank F. Knechtli for the fruitful discussions during the conference.
[99]{} Y. Fu and H. B. Nielsen, [*Nucl. Phys. B*]{} [**236**]{} (1984) 167.
L. Del Debbio, R. D. Kenway, E. Lambrou and E. Rinaldi, [*Phys. Lett. B*]{} [**724**]{} (2013) 133 \[arXiv:1305.0752 \[hep-lat\]\].
N. Irges and F. Knechtli, [*Nucl. Phys. B*]{} [**822**]{} (2009) 1 \[arXiv:0905.2757\].
L. Randall and R. Sundrum, [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 3370 \[hep-ph/9905221\].
L. Randall and R. Sundrum, [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 4690 \[hep-th/9906064\].
J. M. Drouffe and J. B. Zuber, [*Phys. Rept.*]{} [**102**]{} (1983) 1.
N. Irges, F. Knechtli and K. Yoneyama, [*Nucl. Phys. B*]{} [**865**]{} (2012) 541 \[arXiv:1206.4907 \[hep-lat\]\].
M. Alberti, N. Irges, F. Knechtli and G. Moir, [*JHEP*]{} [**1509**]{} (2015) 159 \[arXiv:1506.06035 \[hep-lat\]\].
---
abstract: 'A light Higgs portal scalar could be abundantly produced in the earth’s atmosphere and decay in large-volume neutrino detectors. We propose broadening the purpose of the Hyper-Kamiokande detector to search for such a particle, which could account for recent KOTO measurements of rare kaon decays. The signal is electron-positron pair creation that manifests as a double-ring appearing from the same vertex. Most of the pairs arrive from zenith angles above the detector’s horizon. This search can be generalized to other new light states and is highly complementary to beam experiments.'
author:
- 'Paul Archer-Smith'
- Yue Zhang
bibliography:
- 'Atmospheric.bib'
title: 'Higgs Portal From The Atmosphere To Hyper-K'
---
A Standard Model gauge singlet scalar that mixes with the Higgs boson, sometimes also referred to as the “dark Higgs”, is a simple new physics candidate. It has been introduced for exploring the dark universe [@Patt:2006fw; @Weinberg:2013kea; @Wise:2014jva; @Wise:2014ola], facilitating baryogenesis mechanisms [@Anderson:1991zb; @Pietroni:1992in], precision physics of the Standard Model [@TuckerSmith:2010ra; @Chen:2015vqy], and, perhaps, naturalness [@Graham:2015cka]. In its minimal incarnation, the Higgs portal scalar is produced in laboratories and decays into Standard Model particles via the same mixing parameter with the Higgs boson. This makes it a well-motivated and well-defined target of searches in a number of experiments. Constraints have been set for a wide range of its mass [@Beacham:2019nyx; @Flacke:2016szy; @Clarke:2013aya]. In particular, if the scalar is lighter than $\sim$ GeV, leading constraints come from the measurement of rare $K$ and $B$ meson decays where the mixing parameter must be smaller than $\sim10^{-3}$.
Recently, the Higgs portal scalar has been revisited in light of a new experimental finding. In 2016-18, the KOTO experiment at J-PARC performed a search for the flavor-changing decay process $K_L \to \pi^0 \nu\bar\nu$, in final states with two energetic photons plus a missing transverse momentum. Three candidate events were identified while the Standard Model predicts nearly none [@KOTO]. Although this might simply be due to an underestimate of background, it has triggered the study of a variety of potential new physics candidates, heavy and light. Among them, a light Higgs portal scalar $\phi$ stands out as the simplest explanation [@Kitahara:2019lws; @Egana-Ugrinovic:2019wzj; @Dev:2019hho; @Liu:2020qgx]. The signal is explained as $K_L \to \pi^0 \phi$ decay where $\phi$ is long lived and escapes the detector. Viable parameter space corresponds to a $\phi$ mass between 100 and 200 MeV and a $\phi$-Higgs mixing parameter of a few $\times\, 10^{-4}$.
Given such a simple explanation, it is worthwhile exploring how the target parameter space could be tested in other experiments. An obvious place to check is the isospin related decay mode, $K^+\to \pi^+ \nu\bar\nu$. Indeed, this channel has been searched for at the E949 [@Artamonov:2009sz] and NA62 [@NA62] experiments where upper limits are set on the mixing parameter of the Higgs portal scalar. However, both limits feature a gap when the scalar mass is around the pion mass, due to the enormous $K^+\to \pi^+\pi^0$ background followed by $\pi^0\to\nu\bar\nu$. In this mass window, an upper limit on the mixing parameter is set by a very early beam dump experiment, CHARM [@Bergsma:1985qz], in the search for displaced decays of $\phi$, although this limit is not yet competitive. The above contrast points to the way forward. In order to cover the KOTO favored parameter space, one should resort to appearance experiments hunting the visible decay of long-lived $\phi$ particles rather than disappearance experiments searching for $\phi$ as missing momentum. As a further useful observation, the decay length of a KOTO favored Higgs portal scalar is of order hundreds of kilometers (even longer if boosted). This motivates considering experiments operating at length scales well beyond those of beam-based setups built entirely within a laboratory.
In this Letter, we propose using a nature-made experimental setup to probe the Higgs portal scalar $\phi$. It utilizes cosmic rays as the beam, earth’s atmosphere as the target, and earth itself as the shielding region. In this picture, $\phi$ particles originate from the decay of kaons, with the latter being abundantly produced in the cosmic-ray-atmosphere fixed-target collisions, together with charged pions that make the atmospheric neutrinos [@Fukuda:1998mi]. If long lived enough, the $\phi$ particles travel a long distance across the earth before decaying inside a human-made detector. We focus on the Hyper-Kamiokande (Hyper-K) experiment which, at least for the foreseeable future, has the largest detector volume and a suitably low energy threshold to capture the scalar decays.
The Higgs portal scalar is defined as a mass eigenstate and a linear combination of a Standard Model gauge singlet $s$ and the Higgs boson $h$, $$\phi = \cos\theta\, s + \sin\theta\, h \ ,$$ where $\theta$ is a real mixing parameter. The cosmic rays near us are dominated by protons while the elements in the earth’s atmosphere are dominated by nitrogen and oxygen, composed of equal numbers of protons and neutrons. We simulate fixed target proton-proton and proton-neutron collisions using [PYTHIA 8]{} [@Sjostrand:2014zea] for various incoming proton energies, which is further convolved with the incoming cosmic proton spectrum to derive the differential energy spectrum of kaons (most relevant for this study, $K^\pm$ and $K_L$), ${d \Phi}/{dE_K}$. Their sum is shown as the blue histogram in Fig. \[fig:fluxes\]. The ratio of $K^\pm$ and $K_L$ particles is about $2:1$, as expected.
![Energy distribution of atmospheric kaons ($K^\pm$ and $K_L$ added together) and $\phi$ particles, for $m_\phi=150\,$MeV, obtained from the atmospheric simulation described in the text. For illustration purpose, the flux of $\phi$ has been rescaled by assuming the $K\to\pi\phi$ decay branching ratios are equal to 1. []{data-label="fig:fluxes"}](diffPhiFluxPaper.pdf){width="45.00000%"}
The $\phi$ particles are produced from rare kaon decays, $K^\pm\to \pi^\pm \phi$ and $K_L \to \pi^0 \phi$. The corresponding branching ratios are [@Feng:2017vli; @Batell:2019nwo; @Gunion:1989we] $$\begin{aligned}
\label{Kpm}
&&{\rm Br}(K^\pm\to \pi^\pm \phi) \simeq \frac{9 \tau_{K^\pm} |V_{ts} V_{td}^*|^2 G_F^3 m_t^4 m_{K^\pm}^2 p_{\phi\rm CM} \theta^2}{2048\sqrt{2} \pi^5},\\
&&{\rm Br}(K_L \to \pi^0 \phi) \simeq \frac{9 \tau_{K_L} [{\rm Re}(V_{ts} V_{td}^*)]^2 G_F^3 m_t^4 m_{K^\pm}^2 p_{\phi\rm CM} \theta^2}{2048\sqrt{2} \pi^5},\nonumber\end{aligned}$$ where the decay momentum in the center-of-mass (CM) frame is $p_{\phi\rm CM} = \lambda^{1/2}(m_K^2, m_\pi^2, m_\phi^2)/(2m_{K})$, and $\lambda$ is the Källén function. In the small $m_\phi$ limit, ${\rm Br}(K_L \to \pi^0 \phi)/{\rm Br}(K^\pm\to \pi^\pm \phi) \simeq 3.7$ [@Grossman:1997sk]. In the lab frame, the ratio of the final state $\phi$ energy to that of the kaon is $$\label{EphiEK}
\frac{E_\phi}{E_K} =
\frac{E_{\phi\rm CM}}{m_K} + \frac{p_{\phi\rm CM}}{m_K} \sqrt{ 1 - \frac{m_K^2}{E_K^2}} \cos\vartheta_{\rm CM} \ ,$$ where $E_{\phi\rm CM}=\sqrt{p_{\phi\rm CM}^2 + m_\phi^2}$ and $\vartheta_{\rm CM}$ is the relative angle between $\phi$’s three-momentum in the kaon rest frame and the boost direction of the kaon. Because $K^\pm$ and $K_L$ are scalars, the angular $\phi$ distribution in their rest frame is isotropic. For given energy $E_K$, the values of $E_\phi$ distribute evenly between its extremes, corresponding to $\cos\vartheta_{\rm CM}=\pm1$. The resulting differential flux of $\phi$ can be calculated using $$\begin{aligned}
\label{dPhiphi}
\frac{d \Phi_\phi}{dE_\phi} &=& \sum_{K= K^\pm, K_L} {\rm Br}(K\to \pi \phi) \int_{E_{K\rm min} (E_\phi)}^{E_{K\rm max} (E_\phi)} dE_K \frac{d \Phi_K}{dE_K} \nonumber\\
&&\hspace{1cm}\times \frac{m_K}{2 p_{\phi\rm CM}\sqrt{E_K^2-m_K^2}} \ ,\end{aligned}$$ where $E_{K\rm max, min}$ is the largest (smallest) kaon energy that satisfies Eq. (\[EphiEK\]), for a given $E_\phi$. In the limit $E_K\gg m_K$, $E_{K\rm max, min}\simeq E_\phi m_K/(E_{\phi\rm CM} \mp p_{\phi\rm CM})$. In Fig. \[fig:fluxes\], the red histogram shows the energy distribution of atmospheric $\phi$ particles, for $m_\phi=150\,$MeV. Its energy spectrum peaks at $\sim700\,$MeV.
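The two-body kinematics above are easy to check numerically; a minimal sketch with PDG central values for the masses (in GeV) and the benchmark $m_\phi$:

```python
import numpy as np

def kallen(a, b, c):
    """Kallen function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2*a*b - 2*b*c - 2*c*a

def p_cm(mK, mpi, mphi):
    """CM decay momentum for K -> pi phi."""
    return np.sqrt(kallen(mK**2, mpi**2, mphi**2)) / (2 * mK)

def E_phi_range(EK, mK, mpi, mphi):
    """Extremes of the lab-frame phi energy, at cos(theta_CM) = -/+ 1."""
    pcm = p_cm(mK, mpi, mphi)
    E_cm = np.sqrt(pcm**2 + mphi**2)
    boost = np.sqrt(1 - mK**2 / EK**2)
    return (EK * (E_cm - pcm * boost) / mK,
            EK * (E_cm + pcm * boost) / mK)

mK, mpi, mphi = 0.4937, 0.1396, 0.150   # K+, pi+, benchmark phi
p = p_cm(mK, mpi, mphi)                 # ~0.2 GeV
```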
It is worth pointing out that the above is a conservative approach to simulating atmospheric $\phi$ production. In order for the parton picture used by [PYTHIA]{} to be valid, we restrict the CM energy of $pp$ and $pn$ scatterings to be above $\sim 6\,$GeV. We also neglected secondary reactions of kaons in the atmosphere before they decay, keeping in mind that the earth’s atmosphere is dilute. These approximations leave out lower energy processes that could also make kaons, and in turn, more $\phi$ particles.
After being produced in the atmosphere, the $\phi$ particles can travel through the earth to decay inside human-made detectors, provided they have sufficiently long lifetimes. Clearly, the larger the detector the better to capture such a signal. Its energy threshold should be low enough to see sub-GeV energy deposits from the $\phi$ decay. These requirements led us to consider Hyper-K.
![Geography of earth and detector. The blue box indicates the location of the Hyper-K detector. The dashed circle represents a sphere where the cosmic-ray-atmosphere reactions mainly occur that produce light $\phi$ particles. $h$ is given by the height of this sphere plus the depth of detector underground, and $\varphi$ is the zenith angle in view of the detector.[]{data-label="geometry"}](Geometry.pdf){width="31.00000%"}
{width="48.00000%"} {width="46.60000%"}
To calculate the $\phi$ flux at Hyper-K detector, we consider the geometric picture shown in Fig. \[geometry\]. We assume all cosmic-ray-atmosphere reactions occur on a sphere with fixed height above the ground. This height plus the depth of the underground Hyper-K detector, denoted by $h$, is taken to be 10km. The angles $\varphi$ and $\alpha$ are related by $$\begin{aligned}
\cos\alpha= [L(\varphi) \cos\varphi - R]/(R+h) \ , \end{aligned}$$ where $L(\varphi)$ is the distance $\phi$ travels, $$\begin{aligned}
\label{Lvarphi}
L(\varphi) = R \cos\varphi + \sqrt{h^2 + 2 R h + R^2 \cos^2\varphi} \ .\end{aligned}$$ An infinitesimal area on the source sphere is $$d \mathcal{S} = 2\pi (R+h)^2 d\cos\alpha= \frac{2 \pi (R+h)L(\varphi)^2}{L(\varphi) - R\cos\varphi} d \cos\varphi \ .$$ We assume cosmic ray showers on the earth atmosphere to be isotropic, and so is the resulting $\phi$ angular distribution within the hemisphere pointing towards the center of the earth. If the Hyper-K detector volume is denoted as $V$, the event rate of $\phi$ particles decaying inside this volume is, regardless of its shape, $$\begin{aligned}
\label{eq:rate}
R_{\rm event} &=& V \sum_\text{all $\phi$} \int_{0}^\pi \sin\varphi d \varphi \frac{R+h}{L(\varphi) - R\cos\varphi} \nonumber \\
&& \hspace{1.1cm }\times \int d E_\phi \frac{d \Phi_\phi/dE_\phi}{\gamma \beta \tau_\phi} e^{- \frac{L(\varphi)}{\gamma \beta \tau_\phi}} \ ,\end{aligned}$$ where $\gamma$ is the boost factor of $\phi$ with energy $E_\phi$ and $\beta$ is the corresponding velocity. The sum over $\phi$ is performed on an event-by-event basis in our simulation. $d \Phi_\phi/dE_\phi$ is given by Eq. (\[dPhiphi\]). The lifetime of $\phi$ is dictated by the Higgs portal. For mass of $\phi$ below twice the muon mass, it mainly decays into a $e^+e^-$ pair. The corresponding decay length without boost factor is (assuming $m_\phi \gg m_e$) $$\label{decaylength}
\begin{split}
c \tau_\phi &= \frac{8\pi}{\sqrt{2} G_F m_e^2 m_\phi \theta^2} \\
& \simeq 30\,{\rm km} \left( \frac{m_\phi}{0.15\,{\rm GeV}} \right) \left( \frac{\theta}{5\times10^{-4}} \right)^2 \ ,
\end{split}$$ where the benchmark values of $\theta$ and $m_\phi$ lie within the KOTO favored region. It is worth noting that the small electron mass appearing in the decay rate does not suppress the $\phi$ production rate (see Eq. (\[Kpm\])). Once produced in the atmosphere, the $\phi$ is able to penetrate the earth above deep underground detectors. In water Cherenkov detectors like Hyper-K, the final state $e^+e^-$ pair manifests as a double-ring signature, where the two rings originate from the same primary vertex of $\phi$ decay. We focus on fully contained events where the $\phi$ decay vertex lies inside the detector.
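Eq. (\[decaylength\]) can be verified numerically; a minimal sketch in natural units, converting with $\hbar c$:

```python
import math

G_F = 1.1663787e-5    # Fermi constant, GeV^-2
m_e = 0.000510999     # electron mass, GeV
hbarc_m = 1.9733e-16  # hbar * c in GeV * m

def ctau_phi_km(m_phi, theta):
    """Rest-frame decay length of phi -> e+ e- via the Higgs portal."""
    ctau_invGeV = 8 * math.pi / (math.sqrt(2) * G_F * m_e**2
                                 * m_phi * theta**2)
    return ctau_invGeV * hbarc_m / 1e3   # in km

ctau = ctau_phi_km(0.15, 5e-4)   # KOTO benchmark point: ~30 km
```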
{width="43.60000%"} {width="45.00000%"}
Our main result is shown in Fig. \[mainplot\], in the $\theta$ versus $m_\phi$ plane. In the left panel, the black solid, dashed, and dotted curves correspond to observing 10, 100, and 1000 $e^+e^-$ pair events due to $\phi$ decay in the Hyper-K detector, after 10 years of data taking. To derive these curves, the volume of the Hyper-K detector used is $2.16\times10^5\,{\rm m^3}$ (diameter = 70.8 m and height = 54.8 m) [@Hyper-K]. Here, we only present contours for fixed numbers of signal events. They indicate the region of parameter space that potentially could be covered with the Hyper-K detector. Once the background is understood, it is straightforward to derive an expected limit using our result. It is worth noting that in the $\phi$ decay signal, the invariant mass of the $e^+e^-$ pair is always given by the decaying $\phi$ mass. This feature provides a useful cut for suppressing the background. In the same plot, the red contours correspond to constant decay lengths of $\phi$ assuming it travels near the speed of light but with the boost factor neglected.
In the right panel of Fig. \[mainplot\], we zoom in toward the KOTO favored (blue shaded) parameter space. Regions already excluded by existing searches are shaded in gray, including the measurement of $K^\pm\to \pi^\pm \phi$ at E949 [@Artamonov:2009sz] and NA62 [@NA62], displaced visibly-decaying $\phi$ search at CHARM [@Bergsma:1985qz; @Egana-Ugrinovic:2019wzj], and searches for $B\to K\mu^+\mu^-$ at LHCb [@Aaij:2016qsm; @Aaij:2015tna]. Again, the Hyper-K coverage is indicated by the thick black curves, with solid, dashed and dotted corresponding to observing 10, 100 and 1000 $e^+e^-$ pair events, respectively. Remarkably, they enclose almost the entire parameter space of interest to KOTO.
Moreover, there is important information about the lifetime and mass of $\phi$ in the proposed signal, including the zenith angle and opening angle distributions of the final state $e^+e^-$ pairs. In the left panel of Fig. \[fig:kinematics\], we plot the distribution of the zenith angle of $\phi$ particles arriving at the Hyper-K detector, for two sets of parameters. They exhibit very different behaviors, which can be understood by comparing the $\phi$ decay length, Eq. (\[decaylength\]), and the distance it needs to travel before reaching the Hyper-K detector, $L(\varphi)$, given in Eq. (\[Lvarphi\]). The first set of parameters, $m_\phi=150\,$MeV, $\theta = 5\times10^{-4}$, lies in the center of the KOTO region. In this case, $\gamma \beta \tau_\phi \sim 100\,$km, for a typical boost factor (see Fig. \[fig:fluxes\]), whereas $L(\varphi) \sim 10^4,\, 300,\, 10\,$km for $\varphi=0, \pi/2, \pi$, respectively. Clearly, if a $\phi$ particle travels to the detector from directions well below the horizon ($0 < \varphi < \pi/2$), the distance $L(\varphi)$ is too long compared to $\gamma \beta \tau_\phi$ for it to survive. As a result, most of the $\phi$ particles are expected to arrive from above the Hyper-K detector’s horizon ($\pi/2<\varphi<\pi$). For comparison, the second set of parameters has a much smaller $\theta$ leading to a much longer lived $\phi$, $\gamma \beta \tau_\phi \sim 10^4\,$km, thus $\phi$ could also arrive from directions below the horizon. However, a smaller $\theta$ means fewer $\phi$ particles being produced in the atmosphere, and such a point is beyond the reach of Hyper-K. Similarly, as $m_\phi$ increases beyond twice the muon mass, it mainly decays into $\mu^+\mu^-$, via a much larger muon Yukawa coupling. The corresponding decay length is too short for $\phi$ to reach Hyper-K, unless $\theta$ is made much smaller, again resulting in a suppressed atmospheric production rate. In both of the latter cases, a larger detector would be needed.
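The competition between $\gamma\beta\tau_\phi$ and $L(\varphi)$ can be made concrete in a few lines, using Eq. (\[Lvarphi\]) with $h=10\,$km and an assumed Earth radius of 6371 km:

```python
import math

R_km, h_km = 6371.0, 10.0   # Earth radius (assumed) and production height

def L_of_varphi(varphi):
    """Path length from the production sphere to the detector, Eq. (Lvarphi)."""
    c = math.cos(varphi)
    return R_km * c + math.sqrt(h_km**2 + 2 * R_km * h_km + (R_km * c) ** 2)

def survival(varphi, gbctau_km):
    """Probability that phi survives the trip, for decay length gbctau_km."""
    return math.exp(-L_of_varphi(varphi) / gbctau_km)

# KOTO benchmark: gamma * beta * c * tau ~ 100 km
up    = survival(math.pi, 100.0)      # directly above: L = h = 10 km
horiz = survival(math.pi / 2, 100.0)  # horizon: L ~ 357 km
below = survival(0.0, 100.0)          # directly below: L ~ 2R + h
```

With $\gamma\beta c\tau_\phi\sim100\,$km, the survival probability from directly above is order one while from below the horizon it is negligible, reproducing the zenith-angle asymmetry discussed above.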
In the right panel of Fig. \[fig:kinematics\], we plot the final state electron-positron opening angle distribution from $\phi$ decays, for $m_\phi=150\,$MeV. The result peaks around $\theta_{e^+e^-} \sim 30^\circ$, which is expected from the peak of $\phi$ energy distribution in Fig. \[fig:fluxes\], using $\theta_{e^+e^-} \sim 2m_\phi/E_\phi$. We find a sizable fraction of events have sufficiently large ${e^+e^-}$ opening angle for the double ring signature to be resolved once they occur inside the Hyper-K detector.
To summarize, we propose broadening the purpose of the Hyper-Kamiokande experiment by using it to hunt down long-lived Higgs portal scalar particles produced in the atmosphere. The target parameter space of this search has a strong overlap with that favored by the recent KOTO anomaly in the $K_L$ rare decay measurement. The corresponding signal is electron-positron pair creation in the Hyper-K detector. We make approximations to the atmospheric production picture and derive a semi-analytical expression for the signal rate. In most events, the electron-positron opening angle is large enough for the double-ring signal to be resolved. If the double-rings are further used to reconstruct the decaying $\phi$ particles, one would find that most $\phi$ particles arrive from directions above the detector’s horizon. The Hyper-K reach reported here for the Higgs portal scalar similarly applies to light axion-like particles which couple to Standard Model fermions also through their masses. The presence of the small electron Yukawa coupling in the decay rates naturally makes these particles long lived and suitable for searches at earth-sized experiments.
It could be exciting to explore the proposed signal using the existing Super-Kamiokande data, although it is unlikely that Super-K fully probes the KOTO favored region given its smaller detector volume [@Fukuda:2002uc].
There have been recent proposals of searching for light particles such as the Higgs portal scalar at accelerator neutrino facilities [@Batell:2019nwo; @Berryman:2019dme; @Foroughi-Abari:2020gju], such as the DUNE near detector complex. In comparison, the atmospheric $\phi$ particles carry lower energies than their beam counterparts, thus the resulting $e^+e^-$ opening angles are wider and easier to detect. The background is also much lower in the absence of a nearby intense beam. The very large Hyper-K detector volume partially compensates for the relatively lower atmospheric luminosity. All in all, there is excellent complementarity between the searches for long-lived particles of atmospheric and beam origins.
[*Acknowledgement.*]{} We thank Razvan Gornea for helpful discussions on the Hyper-K experiment, and Paddy Fox and Roni Harnik for discussions at early stage of this work. Y.Z. is supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute.
---
abstract: |
**Abstract**
We aim at studying the asymptotic properties of typical *positive braids*, respectively *positive dual braids*. Denoting by $\mu_k$ the uniform distribution on positive (dual) braids of length $k$, we prove that the sequence $(\mu_k)_k$ converges to a unique probability measure $\mu_{\infty}$ on *infinite* positive (dual) braids. The key point is that the limiting measure $\mu_{\infty}$ has a Markovian structure which can be described explicitly using the combinatorial properties of braids encapsulated in the Möbius polynomial. As a by-product, we settle a conjecture by Gebhardt and Tawn (J. Algebra, 2014) on the shape of the Garside normal form of large uniform braids. **MSC (2010):** 20F36, 05A16, 60C05
author:
- Samy Abbes
- Sébastien Gouëzel
- Vincent Jugé
- Jean Mairesse
bibliography:
- 'biblio.bib'
title: |
Uniform measures on braid monoids\
and dual braid monoids
---
Introduction {#sec:introduction}
============
Consider a given number of strands, say $n$, and the associated positive braid monoid ${{B}^{+}_{n}}$ defined by the following monoid presentation, known as the [*Artin*]{} presentation: $$\begin{gathered}
\label{eq:0*}
\arraycolsep=0.3pt
{{B}^{+}_{n}} = \left\langle \sigma_1,\ldots,\sigma_{n-1} \left|
\begin{array}{ll} \sigma_i \sigma_j = \sigma_j \sigma_i &\quad \text{for $|i-j| \geq 2$} \\
\sigma_i \sigma_{j} \sigma_i = \sigma_{j} \sigma_i \sigma_{j} &\quad \text{for $|i-j|=1$}
\end{array} \right.\right\rangle^+\,.$$ The elements of ${{B}^{+}_{n}}$, the [*positive braids*]{}, are therefore equivalence classes of words over the alphabet $\Sigma= \{\sigma_1,\dots,
\sigma_{n-1}\}$. Alternatively, going back to the original geometric intuition, positive braids can be viewed as isotopy classes of [*positive*]{} braid diagrams, that is, braid diagrams in which the bottom strand always goes on top in a crossing, see Figure \[fig:braidsojf\].
$$\begin{aligned}
\begin{tikzpicture}
\draw[thick,green] (0,0) -- (.5,0); \draw[thick,green] (0.5,0) -- (1,.5);
\draw[thick,green] (1,.5) -- (1.25,.5);
\draw[thick,green] (1.25,0.5) -- (1.75,1);
\draw[thick,green] (1.75,1) -- (2,1);
\draw[thick,green] (2,1) -- (2.5,1.5);
\draw[thick,green] (2.5,1.5) -- (3.75,1.5);
\draw[thick,red] (0,.5) -- (.5,.5); \draw[thick,red] (.5,.5) -- (.7,.3);
\draw[thick,red] (.8,.2) -- (1,0);
\draw[thick,red] (1,0) -- (3.75,0);
\draw[thick,blue] (0,1) -- (1.25,1); \draw[thick,blue] (1.25,1) -- (1.45,.8);
\draw[thick,blue] (1.55,.7) -- (1.75,.5);
\draw[thick,blue] (1.75,.5) -- (2.75,.5);
\draw[thick,blue] (2.75,.5) -- (3.25,1);
\draw[thick,blue] (3.25,1) -- (3.75,1);
\draw[thick,orange] (0,1.5) -- (2,1.5); \draw[thick,orange] (2,1.5) -- (2.2,1.3);
\draw[thick,orange] (2.3,1.2) -- (2.5,1);
\draw[thick,orange] (2.5,1) -- (2.75,1);
\draw[thick,orange] (2.75,1) -- (2.95,.8);
\draw[thick,orange] (3.05,.7) -- (3.25,.5);
\draw[thick,orange] (3.25,.5) -- (3.75,.5);
\node at (-.5,0){$1$};
\node at (-.5,.5){$2$};
\node at (-.5,1){$3$};
\node at (-.5,1.5){$4$};
\node at (0.75,-.5){$\sigma_1$};
\node at (1.5,-.5){$\sigma_2$};
\node at (2.25,-.5){$\sigma_3$};
\node at (3,-.5){$\sigma_2$};
\end{tikzpicture}
&&
\begin{tikzpicture}
\node at (-.5,0){$1$};
\node at (-.5,.5){$2$};
\node at (-.5,1){$3$};
\node at (-.5,1.5){$4$};
\node at (0.75,-.5){$\sigma_3$};
\node at (1.5,-.5){$\sigma_1$};
\node at (2.25,-.5){$\sigma_2$};
\node at (3,-.5){$\sigma_3$};
\draw[thick,green] (0,0) -- (1.25,0); \draw[thick,green] (1.25,0) -- (1.75,.5);
\draw[thick,green] (1.75,.5) -- (2,.5);
\draw[thick,green] (2,0.5) -- (2.5,1);
\draw[thick,green] (2.5,1) -- (2.75,1);
\draw[thick,green] (2.75,1) -- (3.25,1.5);
\draw[thick,green] (3.25,1.5) -- (3.75,1.5);
\draw[thick,red] (0,.5) -- (1.25,.5); \draw[thick,red] (1.25,.5) -- (1.45,.3);
\draw[thick,red] (1.55,.2) -- (1.75,0);
\draw[thick,red] (1.75,0) -- (3.75,0);
\draw[thick,blue] (0,1) -- (.5,1); \draw[thick,blue] (.5,1) -- (1,1.5);
\draw[thick,blue] (1,1.5) -- (2.75,1.5);
\draw[thick,blue] (2.75,1.5) -- (2.95,1.3);
\draw[thick,blue] (3.05,1.2) -- (3.25,1);
\draw[thick,blue] (3.25,1) -- (3.75,1);
\draw[thick,orange] (0,1.5) -- (0.5,1.5); \draw[thick,orange] (0.5,1.5) -- (0.7,1.3);
\draw[thick,orange] (0.8,1.2) -- (1,1);
\draw[thick,orange] (1,1) -- (2,1);
\draw[thick,orange] (2,1) -- (2.2,.8);
\draw[thick,orange] (2.3,.7) -- (2.5,.5);
\draw[thick,orange] (2.5,.5) -- (3.75,.5);
\end{tikzpicture}
\end{aligned}$$
We want to address the following question:
> \[quote\] What does a typical complicated positive braid look like?
To make the question more precise, we need to clarify the meaning of “complicated” and “typical”. First, let the complexity of a positive braid be measured by the length (number of letters) of any representative word. This is natural since it corresponds to the number of crossings between strings in any representative braid diagram. Therefore, a positive braid is “complicated” if its length is large.
Second, let us define a “typical” braid as a braid being picked at random according to some probability measure. The two natural candidates for such a probability measure are as follows. Fix a positive integer $k$.
- The first option consists in running a simple random walk on ${{B}^{+}_{n}}$ : pick a sequence of random elements $x_i, i\geq 1,$ independently and uniformly among the generators $\Sigma= \{\sigma_1,\dots , \sigma_{n-1}\}$, and consider the “typical” braid $X=x_1\cdot x_2\cdot \dots \cdot x_k$. It corresponds to drawing a word uniformly in $\Sigma^k$ and then considering the braid it induces.
- The second option consists in picking a “typical” braid of length $k$ uniformly at random among all braids of length $k$.
The two approaches differ since the number of representative words varies among positive braids of the same length. For instance, in ${{B}^{+}_{3}}$ and for length 3, the braid $\sigma_1\cdot \sigma_2 \cdot \sigma_1$ $(=\sigma_2\cdot \sigma_1 \cdot \sigma_2)$ will be picked with probability 2/8 in the first approach, and with probability 1/7 in the second one, while all the other braids of length 3 will be picked with probabilities 1/8 and 1/7 respectively in the two approaches. The random walk approach has been studied for instance in [@MaMa06; @vershik00]; it is a special instance of random walks on (semi)groups, see [@woes]. In this paper, our focus is on the second approach, that is, on [*uniform measures on positive braids*]{}.
Let $\mu_k$ be the uniform probability measure on positive braids of ${{B}^{+}_{n}}$ of length $k$. The general question stated above can now be rephrased as follows: study $\mu_k$ for large $k$. Suppose that we are interested in some specific property, say, the number of occurrences of the Garside element $\Delta$ in a large random braid. To study it, a first approach consists in performing a numerical evaluation. To that purpose, the key ingredient is to have a [*sampling algorithm*]{}, that is, a random procedure which takes as input $k$ and returns as output a random braid of distribution $\mu_k$. Another, more intrinsic, approach consists first in defining a probability measure $\mu_{\infty}$ on [*infinite*]{} positive braids, encapsulating all the measures $\mu_k$, and then in studying the asymptotics of the property *via* $\mu_{\infty}$. Neither of these two paths is easy to follow. The difficulty is that the probability measures $(\mu_k)_k$ are not consistent with one another. For instance, in ${{B}^{+}_{3}}$, we have: $$\label{non-consistent}
1/4 = \mu_2(\sigma_1\cdot \sigma_1) \neq \mu_3(\sigma_1\cdot \sigma_1\cdot \sigma_1)
+\mu_3(\sigma_1\cdot \sigma_1\cdot \sigma_2)= 2/7 \:.$$ Therefore, there is no obvious way to design a dynamic process to sample braids. As another consequence, the Kolmogorov consistency theorem does not apply, and there is no simple way to define a uniform probability measure on infinite positive braids. This is in sharp contrast with the simpler picture for the random walk approach described above.
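Both computations above can be reproduced by brute force. Since the presentation of ${{B}^{+}_{3}}$ is homogeneous, the equivalence class of a positive word is obtained by applying the braid relation $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$ as a substring rewrite until closure. The following Python sketch is ours, for illustration only; it is not part of the formal development:

```python
from itertools import product

def braid_class(w):
    """Equivalence class of a positive word in B_3^+ (letters '1', '2'),
    computed by applying 121 <-> 212 as a substring rewrite until closure."""
    seen, todo = {w}, [w]
    while todo:
        u = todo.pop()
        for i in range(len(u) - 2):
            if u[i:i + 3] in ("121", "212"):
                v = u[:i] + ("212" if u[i:i + 3] == "121" else "121") + u[i + 3:]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
    return frozenset(seen)

def braids(k):
    """All positive 3-braids of length k, as equivalence classes of words."""
    return {braid_class("".join(w)) for w in product("12", repeat=k)}

# 8 words of length 3, but only 7 braids: s1.s2.s1 has two representative words
print(len(braids(3)))                                           # 7
words3 = ["".join(w) for w in product("12", repeat=3)]
print(sum(w in braid_class("121") for w in words3) / 8)         # 0.25 = 2/8

# non-consistency: mu_2(s1.s1) = 1/4, but the two length-3 braids extending
# s1.s1 weigh 2/7 under mu_3
print(1 / len(braids(2)), 2 / len(braids(3)))                   # 0.25  0.2857...
```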
To overcome the difficulties, the rich combinatorics of positive braids has to enter the scene. Going back to Garside [@garside1969braid] and Thurston [@ECHLPT], it is known that positive braids admit a *normal form*, that is, a selection of a unique representative word for each braid, which is *regular*, that is, recognized by a finite automaton. This so-called *Garside normal form* makes it possible to count positive braids effectively, see for instance Brazil [@braz], although not efficiently, since the associated automaton has a large number of states, exponential in the number of strands $n$, see Dehornoy [@dehornoy07]. A breakthrough is provided by Bronfman [@bronfman01] (see also [@albenque09]), who obtains, using an inclusion-exclusion principle, a simple recursive formula for counting positive braids. Based on this formula, a sampling algorithm whose time and space complexities are polynomial in both the number of strands $n$ and the length $k$ is proposed by Gebhardt and Gonzales-Meneses in [@gebhard13]. Using this sampling procedure, extensive numerical evaluations are performed by Gebhardt and Tawn in [@gebhardt14], leading to the [*stable region conjecture*]{} on the shape of the Garside normal form of large uniform braids.
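Bronfman's recursive counting formula specializes nicely to $n=3$: the generating series of ${{B}^{+}_{3}}$ is the inverse of its Möbius polynomial $1-2t+t^3$, so the number $b_k$ of positive $3$-braids of length $k$ satisfies $b_k=2b_{k-1}-b_{k-3}$. The following sketch (ours, not from the paper) cross-checks the recursion against brute-force enumeration of word classes:

```python
from itertools import product

def braid_class(w):
    """Class of a positive word in B_3^+ under the substring rewrite 121 <-> 212."""
    seen, todo = {w}, [w]
    while todo:
        u = todo.pop()
        for i in range(len(u) - 2):
            if u[i:i + 3] in ("121", "212"):
                v = u[:i] + ("212" if u[i:i + 3] == "121" else "121") + u[i + 3:]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
    return frozenset(seen)

def count_brute(k):
    """Number of positive 3-braids of length k, by exhaustive enumeration."""
    return len({braid_class("".join(w)) for w in product("12", repeat=k)})

# counting via the Moebius polynomial 1 - 2t + t^3 of B_3^+
b = [1, 2, 4]
for k in range(3, 8):
    b.append(2 * b[-1] - b[-3])

print([count_brute(k) for k in range(8)])   # [1, 2, 4, 7, 12, 20, 33, 54]
print(b)                                    # same sequence
```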
In the present paper, we complete the picture by proving the existence of a natural uniform probability measure $\mu_{\infty}$ on infinite positive braids. The measure induced by $\mu_{\infty}$ on braids of length $k$ is not equal to $\mu_k$, which is in line with the non-consistency illustrated in (\[non-consistent\]), but the sequence $(\mu_k)_k$ does converge weakly to $\mu_{\infty}$. The remarkable point is that the measure $\mu_{\infty}$ has a Markovian structure which can be described explicitly. It makes it possible to get precise information on $\mu_k$ for large $k$ by using the limit $\mu_{\infty}$. For instance, we prove that the number of $\Delta$ in a random braid of ${{B}^{+}_{n}}$ is asymptotically geometric of parameter $q^{n(n-1)/2}$ where $q$ is the unique root of smallest modulus of the Möbius polynomial of ${{B}^{+}_{n}}$. As another by-product of our results, we settle the stable region conjecture, proving one of the two statements in the conjecture, and refuting the other one. Our different results are achieved by strongly relying on refined properties of the combinatorics of positive braids, some of them new.
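For $n=3$ the geometric parameter can be made concrete (a numerical sketch of ours, not part of the paper): the Möbius polynomial is $1-2t+t^3=(t-1)(t^2+t-1)$, whose root of smallest modulus is $q=(\sqrt5-1)/2$. Since left multiplication by $\Delta$ is injective, exactly $b_{k-3}$ of the $b_k$ positive $3$-braids of length $k$ are left-divisible by $\Delta$, and this proportion converges to $q^{3}=q^{n(n-1)/2}=\sqrt5-2$:

```python
# b_k = number of positive 3-braids of length k, via the Moebius recursion
b = [1, 2, 4]
for _ in range(40):
    b.append(2 * b[-1] - b[-3])

q = (5 ** 0.5 - 1) / 2          # smallest-modulus root of 1 - 2t + t^3
for k in (10, 20, 40):
    print(k, b[k - 3] / b[k])   # proportion of Delta-divisible braids -> q**3
print(q ** 3, 5 ** 0.5 - 2)     # both 0.2360679...
```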
*Mutatis mutandis*, the results also hold in the Birman-Ko-Lee dual braid monoid [@birman1998new]. We present the results in a unified way, with notations and conventions that cover the braids and the dual braids at the same time. The prerequisites on these two monoids are recalled in Section \[sec:posit-dual-posit\], and the needed results on the combinatorics of braids are presented in Section \[sec:garside-normal-form\]. The main results are proved in Section \[sec:unif-meas-braid\], with applications in Section \[sec:appl-asympt-finite\], including the clarification of the stable region conjecture. In Section \[se-explicit\], we provide explicit computations of the uniform measure $\mu_{\infty}$ for the braid monoid and the dual braid monoid on 4 strands. Finally, analogs and extensions are discussed in Section \[se-ext\]. Indeed, our results on braid monoids form a counterpart to the results on trace monoids in [@abbes15a; @abbes15b], and, in a forthcoming paper [@opus2], we plan to prove results in the same spirit for Artin-Tits monoids, a family encompassing both braids and traces.
Positive and dual positive braid monoids {#sec:posit-dual-posit}
========================================
In this section we introduce some basics on the monoid of positive braids and the monoid of positive dual braids. We recall the notions of simple braids for these monoids, as well as combinatorial representations of them.
Two distinct braid monoids {#sec:posit-braid-mono}
--------------------------
### The braid group and two of its submonoids {#sec:presentations}
For each integer $n\geq2$, the *braid group* ${{B}_{n}}$ is the group with the following group presentation: $$\begin{gathered}
\label{eq:1*}
\arraycolsep=0.3pt
{{B}_{n}} = \left\langle \sigma_1,\ldots,\sigma_{n-1} \left|
\begin{array}{ll} \sigma_i \sigma_j = \sigma_j \sigma_i &\quad \text{for $|i-j| \geq 2$} \\
\sigma_i \sigma_{j} \sigma_i = \sigma_{j} \sigma_i \sigma_{j} &\quad \text{for $|i-j|=1$}
\end{array} \right.\right\rangle\,.$$
Elements of ${{B}_{n}}$ are called *braids*. Let ${\textbf{e}}$ and “$\cdot$” denote respectively the unit element and the concatenation operation in ${{B}_{n}}$. It is well known since the work of Artin that elements of ${{B}_{n}}$ correspond to isotopy classes of braid diagrams on $n$ strands, as illustrated in Figure \[fig:braidsojf\]; the elementary move where strand $i$ crosses strand $i+1$ from above corresponds to generator $\sigma_i$, and the move where strand $i$ crosses strand $i+1$ from behind to $\sigma_i^{-1}$.
We will be interested in two submonoids of ${{B}_{n}}$. The *positive braid monoid* ${{B}^{+}_{n}}$ is the submonoid of ${{B}_{n}}$ generated by $\{\sigma_1,\ldots,\sigma_{n-1}\}$; and the *positive dual braid monoid* ${{B}^{+*}_{n}}$ is the submonoid of ${{B}_{n}}$ generated by $\{\sigma_{i,j}\
|\ 1\leq i<j\leq n\}$, where $\sigma_{i,j}$ is defined by: $$\begin{aligned}
\sigma_{i,j}&=\sigma_i\,,&&\text{for $1\leq i<n$ and $j=i+1$,}\\
\sigma_{i,j}&=\sigma_i \sigma_{i+1} \ldots \sigma_{j-1}
\sigma_{j-2}^{-1} \sigma_{j-3}^{-1} \ldots
\sigma_i^{-1}\,,&&\text{for $1\leq i< n-1$ and $i+2\leq j\leq n$}\,.\end{aligned}$$ Observe the inclusion ${{B}^{+}_{n}}\subseteq{{B}^{+*}_{n}}$, since each generator $\sigma_i$ of ${{B}^{+}_{n}}$ belongs to ${{B}^{+*}_{n}}$. The elements $\sigma_{i,j}$ are often called *Birman-Ko-Lee generators* in the literature, while the elements $\sigma_i$ are called *Artin* generators.
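Under the natural morphism to the symmetric group sending $\sigma_i$ to the transposition $(i,i+1)$, one checks that the Birman-Ko-Lee generator $\sigma_{i,j}$ maps to the transposition $(i,j)$. The following sketch (ours, for illustration) verifies this numerically:

```python
def transposition(n, a, b):
    """The transposition (a, b) in S_n, 1-based, as a tuple of images."""
    p = list(range(n + 1))
    p[a], p[b] = b, a
    return tuple(p)

def image_of_word(n, word):
    """Image in S_n of a braid word given as signed Artin indices;
    transpositions are involutions, so signs can be ignored."""
    p = list(range(n + 1))
    for i in reversed(word):            # compose as functions, leftmost outermost
        s = transposition(n, abs(i), abs(i) + 1)
        p = [s[x] for x in p]
    return tuple(p)

def sigma_ij_word(i, j):
    """Artin word for the Birman-Ko-Lee generator sigma_{i,j}, i < j."""
    return list(range(i, j)) + [-m for m in range(j - 2, i - 1, -1)]

n = 5
for i in range(1, n):
    for j in range(i + 1, n + 1):
        assert image_of_word(n, sigma_ij_word(i, j)) == transposition(n, i, j)
print("sigma_{i,j} -> (i j) checked for n =", n)
```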
#### Running examples for $n=3$. {#sec:runn-exampl-n=3-1 .unnumbered}
Throughout the paper, we shall illustrate the notions and results on the simplest, yet non-trivial, examples of braid monoids, namely on ${{B}^{+}_{3}}$ and on ${{B}^{+*}_{3}}$: $$\begin{aligned}
{{B}^{+}_{3}}&=\langle\sigma_1,\;\sigma_2\rangle^+\,,\\
{{B}^{+*}_{3}}&=\langle
\sigma_{1,2},\;\sigma_{2,3},\;\sigma_{1,3}\rangle^+
&\text{with }&
\begin{aligned}
\sigma_{1,2}&=\sigma_1\,,&\sigma_{2,3}&=\sigma_2\,,
&\sigma_{1,3}&=\sigma_1\cdot\sigma_2\cdot\sigma_1^{-1}\,.
\end{aligned}\end{aligned}$$
### Presentations of the monoids {#sec:presentations-1}
Defining ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$ as submonoids of ${{B}_{n}}$, as we just did, is one way of introducing them. Another one is through generators and relations.
First, ${{B}^{+}_{n}}$ is isomorphic to the monoid with the monoid presentation (\[eq:1\*\]), that is, the same presentation as $B_n$ but viewed as a monoid presentation instead of a group presentation. Second, ${{B}^{+*}_{n}}$ is isomorphic to the monoid with $n(n-1)/2$ generators $\sigma_{i,j}$ for $1\leq i<j\leq n$ and the following relations, provided that the convention $\sigma_{j,i}=\sigma_{i,j}$ for $i<j$ is in force: $$\begin{gathered}
\label{eq:3*}
\begin{cases}
\sigma_{i,j} \sigma_{j,k} = \sigma_{j,k} \sigma_{k,i} = \sigma_{k,i} \sigma_{i,j} &
\text{for $ 1 \leq i < j < k \leq n$} \\
\sigma_{i,j} \sigma_{k,\ell} = \sigma_{k,\ell} \sigma_{i,j} & \text{for $ 1 \leq i < j < k < \ell \leq n$} \\
\sigma_{i,j} \sigma_{k,\ell} = \sigma_{k,\ell} \sigma_{i,j} & \text{for $ 1 \leq i < k < \ell < j \leq n $} \:.
\end{cases}\end{gathered}$$
Elements of ${{B}^{+}_{n}}$ are called *positive braids*; they correspond to isotopy classes of braid diagrams involving only crossings of strands in the same direction, see Figure \[fig:braidsojf\]. Elements of ${{B}^{+*}_{n}}$ are called *dual positive braids*. They correspond to isotopy classes of chord diagrams [@birman1998new]. This time, there are still $n$ strands, but they are arranged along a cylinder; the element $\sigma_{i,j}$ corresponds to a crossing of strands $i$ and $j$. See Figure \[fig:udalbraids\].
The inclusion ${{B}^{+}_{n}}\subseteq{{B}^{+*}_{n}}$ comes with the definition of ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$ as submonoids of the braid group ${{B}_{n}}$. It can be obtained as follows when considering ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$ as abstract monoids with generators and relations. Let $\iota:\{\sigma_1,\ldots,\sigma_{n-1}\}\to{{B}^{+*}_{n}}$ be defined by $\iota(\sigma_i)=\sigma_{i,i+1}$, and keep the notation $\iota$ for its natural extension to the free monoid, $\iota:\{\sigma_1,\ldots,\sigma_{n-1}\}^*\to{{B}^{+*}_{n}}$. Since $\iota$ is constant on congruence classes of positive braids, it factors through a morphism $\iota:{{B}^{+}_{n}}\to{{B}^{+*}_{n}}$. It can then be proved that this morphism is injective [@garside1969braid; @birman1998new].
\[rem:6\] We emphasize that all the notions that we are about to define on ${{B}^{+}_{n}}$ and on ${{B}^{+*}_{n}}$ may or may not coincide on ${{B}^{+}_{n}}\cap{{B}^{+*}_{n}}={{B}^{+}_{n}}$. Hence, it is probably clearer to keep in mind the point of view on these monoids through generators and relations, rather than as submonoids of ${{B}_{n}}$.
#### Running examples for $n=3$. {#sec:runn-exampl-n=3 .unnumbered}
The presentations of the monoids ${{B}^{+}_{3}}$ and ${{B}^{+*}_{3}}$ are the following: $$\begin{aligned}
{{B}^{+}_{3}}&=\langle
\sigma_1,\ \sigma_2 \mid
\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2\rangle^+\\
{{B}^{+*}_{3}}&=\langle \sigma_{1,2},\ \sigma_{2,3},\ \sigma_{1,3} \mid \sigma_{1,2}\sigma_{2,3} =
\sigma_{2,3}\sigma_{1,3} = \sigma_{1,3}\sigma_{1,2}\rangle^+\end{aligned}$$
### A common notation {#sec:simultaneous}
We will consider simultaneously the monoids ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$. Henceforth, we will denote by ${{B}^{?}_{n}}$ a monoid which, unless stated otherwise, may be either the monoid ${{B}^{+}_{n}}$ or ${{B}^{+*}_{n}}$. The statements that we will prove for the monoid ${{B}^{?}_{n}}$ will then hold for both monoids ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$.
In addition, we will denote by $\Sigma$ the set of generators of ${{B}^{?}_{n}}$, hence $\Sigma = \{\sigma_i {\;:\;}1 \leq i \leq n-1\}$ if ${{B}^{?}_{n}} = {{B}^{+}_{n}}$ and $\Sigma = \{\sigma_{i,j} {\;:\;}1 \leq i < j \leq n\}$ if ${{B}^{?}_{n}} = {{B}^{+*}_{n}}$.
### Length and division relations. Mirror mapping {#sec:length-divis-relat}
The above presentations (\[eq:1\*\]) and (\[eq:3\*\]) of ${{B}^{?}_{n}}$ are homogeneous, meaning that the relations involve words of the same length. Hence, the *length* of $x\in{{B}^{?}_{n}}$, denoted by ${|x|}$, is the length of any word in the equivalence class $x$, with respect to the congruence defining ${{B}^{?}_{n}}$.
\[rem:2\] The length is an example of a quantity which is defined on both ${{B}^{+}_{n}}$ and ${{B}^{+*}_{n}}$, and which is invariant on ${{B}^{+}_{n}}$. That is to say, the length ${|x|}$ of a positive braid $x\in{{B}^{+}_{n}}$ is the same whether $x$ is considered as an element of ${{B}^{+}_{n}}$ or as an element of ${{B}^{+*}_{n}}$. Indeed, if $x$ has length $k$ as an element of ${{B}^{+}_{n}}$, then it can be written as a product $x=\sigma_{\varphi(1)}\cdot\ldots\cdot \sigma_{\varphi(k)}$ for some function $\varphi:\{1,\ldots,k\}\to\{1,\ldots,n-1\}$. This entails that $x$, as an element of ${{B}^{+*}_{n}}$, can be written as $x=\sigma_{\varphi(1),\varphi(1)+1}\cdot\ldots\cdot\sigma_{\varphi(k),\varphi(k)+1}$, and thus $x$ also has length $k$ as an element of ${{B}^{+*}_{n}}$.
The monoid ${{B}^{?}_{n}}$ is equipped with the *left* and with the *right divisibility* relations, denoted respectively ${\leq_\text{l}}$ and ${\leq_{\text{r}}}$, which are both partial orders on ${{B}^{?}_{n}}$, and are defined by: $$\begin{aligned}
x{\leq_\text{l}}y&\iff\exists z\in{{B}^{?}_{n}}\quad y=x\cdot z,&
x{\leq_{\text{r}}}y&\iff\exists z\in{{B}^{?}_{n}}\quad y=z\cdot x.&\end{aligned}$$
The mirror mapping, defined on words by $a_1\ldots a_k\mapsto a_k\ldots a_1$, factorizes through ${{B}^{?}_{n}}$ and thus induces a *mirror mapping* on braids, denoted by $x\in{{B}^{?}_{n}}\mapsto x^*\in{{B}^{?}_{n}}$. It is an involutive anti-isomorphism of monoids; it preserves the length of braids and swaps the left and right divisibility relations: $$\begin{aligned}
\forall x\in{{B}^{?}_{n}}\quad{|x^*|}&={|x|}\,,&
\forall x,y\in{{B}^{?}_{n}}\quad x{\leq_{\text{r}}}y&\iff x^*{\leq_\text{l}}y^*\,.\end{aligned}$$
The mirror mapping being an isomorphism of partial orders $({{B}^{?}_{n}},{\leq_\text{l}})\to({{B}^{?}_{n}},{\leq_{\text{r}}})$, we shall focus on the left divisibility relation ${\leq_\text{l}}$ only.
\[rem:3\] Following Remark \[rem:2\], it is clear that the left divisibility is also invariant on ${{B}^{+}_{n}}$: if $x,y\in{{B}^{+}_{n}}$ are such that $x{\leq_\text{l}}y$ in ${{B}^{+}_{n}}$, then $x{\leq_\text{l}}y$ also holds in ${{B}^{+*}_{n}}$. Observe however that the converse is not true. For instance, consider the case $n=3$ and set $x=\sigma_2$ and $y=\sigma_1\cdot\sigma_2$. In ${{B}^{+}_{3}}$, clearly, $x{\leq_\text{l}}y$ does not hold. However, in ${{B}^{+*}_{3}}$, we have $x=\sigma_{2,3}$ and $y=\sigma_{1,2}\cdot\sigma_{2,3}=\sigma_{2,3}\cdot\sigma_{1,3}$, therefore $x{\leq_\text{l}}y$ does hold.
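The example of this remark can be checked mechanically: in a homogeneous presentation, word classes are finite, and $x{\leq_\text{l}}y$ holds if and only if some word representing $y$ begins with a word representing $x$. A brute-force Python sketch (ours, not part of the formal development):

```python
def cls(w, rels):
    """Equivalence class of a word under a homogeneous presentation:
    rels is a list of groups of pairwise-equal words, applied as
    substring rewrites until closure."""
    pairs = [(u, v) for grp in rels for u in grp for v in grp if u != v]
    seen, todo = {w}, [w]
    while todo:
        u = todo.pop()
        for a, b in pairs:
            i = u.find(a)
            while i != -1:
                v = u[:i] + b + u[i + len(a):]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
                i = u.find(a, i + 1)
    return frozenset(seen)

def left_divides(x, y, rels):
    """x <=_l y iff some representative of y starts with a representative of x."""
    cx = cls(x, rels)
    return any(w.startswith(p) for w in cls(y, rels) for p in cx)

artin = [("121", "212")]      # B_3^+   (letters: 1 = s1, 2 = s2)
dual = [("ab", "bc", "ca")]   # B_3^+*  (a = s_{1,2}, b = s_{2,3}, c = s_{1,3})

print(left_divides("2", "12", artin))   # False: s2 does not divide s1.s2 in B_3^+
print(left_divides("b", "ab", dual))    # True:  ab = bc = ca in B_3^+*
```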
Garside structure and simple braids {#sec:comb-repr-braids}
-----------------------------------
### Garside structure {#sec:simple-braids-gars}
The monoid ${{B}^{?}_{n}}$ is known to be a *Garside monoid* [@adian1984fragments; @bessis2003dual; @birman1998new]; that is to say:
(1) ${{B}^{?}_{n}}$ is a cancellative monoid;
(2) ${{B}^{?}_{n}}$ contains a *Garside element*, that is to say, an element whose set of left divisors coincides with its set of right divisors and contains the generating set $\Sigma$;
(3) Every finite subset $X$ of ${{B}^{?}_{n}}$ has a least upper bound in $({{B}^{?}_{n}},{\leq_\text{l}})$, and a greatest lower bound if $X\neq\emptyset$, respectively denoted $\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}X$ and $\operatorname{\mathchoice {\bigwedge{}_{\raisebox{-.8ex}{\scriptsize\!\text{l}}}} {\bigwedge{}_{\raisebox{-.3ex}{\scriptsize\!\text{l}}}} {\bigwedge{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigwedge{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}X$.
Let $\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}}}X$ denote the least upper bound in $({{B}^{?}_{n}},{\leq_{\text{r}}})$ of a subset $X\subseteq{{B}^{?}_{n}}$. Then, if $X$ is a subset of $\Sigma$, it is known [@birman1998new; @garside1969braid] that $\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}X$ and $\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}}}X$ coincide. We introduce therefore the notation $\Delta_X$ for: $$\Delta_X=\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}X=\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}}}X\,,\qquad\text{for $X\subseteq\Sigma$\,.}$$ Moreover, $\Delta_X$ has the same left divisors and right divisors in ${{B}^{?}_{n}}$: $\{x \in {{B}^{?}_{n}} {\;:\;}x {\leq_\text{l}}\Delta_X\} = \{x \in {{B}^{?}_{n}} {\;:\;}x {\leq_{\text{r}}}\Delta_X\}$.
The element $\Delta_X$ one obtains when considering $X=\Sigma$ plays a special role in Garside theory. Indeed, based on the above remarks, it is not difficult to see that $\Delta_\Sigma$ is a Garside element of ${{B}^{?}_{n}}$, and is moreover the *smallest* Garside element of ${{B}^{?}_{n}}$. Defining the elements $\Delta_n\in{{B}^{+}_{n}}$ and $\delta_n\in{{B}^{+*}_{n}}$ by: $$\begin{aligned}
\Delta_n&=(\sigma_1\cdot\ldots\cdot\sigma_{n-1})\cdot(\sigma_1\cdot\ldots\cdot\sigma_{n-2})\cdot\ldots\cdot(\sigma_1\cdot\sigma_2)\cdot\sigma_1\,,&
\delta_n&= \sigma_{1,2} \cdot \sigma_{2,3} \cdot \ldots \cdot
\sigma_{n-1,n}\,,\end{aligned}$$ we have $\Delta_\Sigma=\Delta_n$ if ${{B}^{?}_{n}}={{B}^{+}_{n}}$ and $\Delta_\Sigma=\delta_n$ if ${{B}^{?}_{n}}={{B}^{+*}_{n}}$. We adopt the single notation $\Delta=\Delta_\Sigma$ to denote either $\Delta_n$ or $\delta_n$ according to which monoid we consider.
### Definition of simple braids {#sec:defin-simple-braids}
The set of all divisors of $\Delta$ is denoted by ${\mathcal{S}}_n$, and its elements are called *simple braids*. It is a bounded subset of ${{B}^{?}_{n}}$, with ${\textbf{e}}$ as minimum and $\Delta$ as maximum, closed under $\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}$ and under $\operatorname{\mathchoice {\bigwedge{}_{\raisebox{-.8ex}{\scriptsize\!\text{l}}}} {\bigwedge{}_{\raisebox{-.3ex}{\scriptsize\!\text{l}}}} {\bigwedge{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigwedge{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}$. With the induced partial order, $({\mathcal{S}}_n,{\leq_\text{l}})$ is thus a lattice.
Consider the mapping $\Phi:\mathcal{P}(\Sigma)\to{\mathcal{S}}_n, \ X \mapsto
\Delta_X$, and its image: $$\label{eq:7}
{\mathcal{D}}_n=\{\Delta_X{\;:\;}X\subseteq\Sigma\}\,.$$ Then $\Phi:\bigl(\mathcal P (\Sigma),\subseteq\bigr)\to({\mathcal{S}}_n,{\leq_\text{l}})$ is a lattice homomorphism, and ${\mathcal{D}}_n$ is thus a sub-lattice of $({\mathcal{S}}_n,{\leq_\text{l}})$. If ${{B}^{?}_{n}}={{B}^{+}_{n}}$, the mapping $\Phi$ is injective, but *not onto* ${\mathcal{S}}_n$, and so $({\mathcal{D}}_n,{\leq_\text{l}})$ is isomorphic to $\bigl(\mathcal
P(\Sigma),\subseteq\bigr)$. If ${{B}^{?}_{n}}={{B}^{+*}_{n}}$, the mapping $\Phi$ is not injective, but it is onto ${\mathcal{S}}_n$, hence ${\mathcal{D}}_n={\mathcal{S}}_n$.
\[rem:5\] Contrasting with the length discussed in Remark \[rem:2\], the notion of simplicity is *not* invariant on ${{B}^{+}_{n}}$. For instance, the braid $\Delta_n$ is simple in ${{B}^{+}_{n}}$, but it is not simple as an element of ${{B}^{+*}_{n}}$ as soon as $n\geq3$. Indeed, since its length is ${|\Delta_n|}=n(n-1)/2$, it cannot be a divisor of $\delta_n$ which is of length ${|\delta_n|}=n-1$.
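For small $n$, simple braids can be enumerated by brute force: the divisors of $\Delta$ are exactly the classes of prefixes of the words representing $\Delta$. The following sketch (ours, illustrative only) recovers the cardinalities $3!=6$ and $\operatorname{Catalan}(3)=5$ for the two monoids on $3$ strands:

```python
def cls(w, rels):
    """Class of a word under a homogeneous presentation, via substring rewrites."""
    pairs = [(u, v) for grp in rels for u in grp for v in grp if u != v]
    seen, todo = {w}, [w]
    while todo:
        u = todo.pop()
        for a, b in pairs:
            i = u.find(a)
            while i != -1:
                v = u[:i] + b + u[i + len(a):]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
                i = u.find(a, i + 1)
    return frozenset(seen)

def simples(delta, rels):
    """Divisors of Delta = classes of prefixes of words representing Delta."""
    return {cls(u[:i], rels) for u in cls(delta, rels) for i in range(len(u) + 1)}

artin = [("121", "212")]      # B_3^+,  Delta_3 = s1.s2.s1
dual = [("ab", "bc", "ca")]   # B_3^+*, delta_3 = s_{1,2}.s_{2,3}

print(len(simples("121", artin)))   # 6 = 3!        (e, s1, s2, s1.s2, s2.s1, Delta)
print(len(simples("ab", dual)))     # 5 = Catalan(3) (e, the 3 generators, delta)
```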
#### Running examples for $n=3$. {#running-examples-for-n3. .unnumbered}
The Garside elements of ${{B}^{+}_{3}}$ and of ${{B}^{+*}_{3}}$ are: $$\begin{aligned}
\delta_3&=\sigma_{1,2}\cdot\sigma_{2,3}\,,&
\Delta_3&=\sigma_1\cdot\sigma_2\cdot\sigma_1\,.\end{aligned}$$
The Hasse diagrams of $({\mathcal{S}}_3,{\leq_\text{l}})$ are pictured in Figure \[fig:hessediagram3\]. For ${{B}^{+}_{3}}$, the lattice $({\mathcal{D}}_3,{\leq_\text{l}})$ consists of the following four elements: $$\begin{aligned}
\Delta_\emptyset&={\textbf{e}}&\Delta_{\sigma_1}&=\sigma_1
&\Delta_{\sigma_2}&=\sigma_2&\Delta_{\sigma_1,\sigma_2}&=\sigma_1\cdot\sigma_2\cdot\sigma_1=\Delta\end{aligned}$$ whereas the lattice $({\mathcal{S}}_3,{\leq_\text{l}})$ contains the two additional elements $\sigma_1\cdot\sigma_2$ and $\sigma_2\cdot\sigma_1$. For ${{B}^{+*}_{3}}$, the elements of ${\mathcal{D}}_3$ and ${\mathcal{S}}_3$ are: $$\begin{gathered}
\begin{aligned}
\Delta_\emptyset &={\textbf{e}}\qquad
&\Delta_{\{\sigma_{1,2}\}} &=\sigma_{1,2}\qquad
&\Delta_{\{\sigma_{2,3}\}} &=\sigma_{2,3}\qquad
&\Delta_{\{\sigma_{1,3}\}} &=\sigma_{1,3}
\end{aligned} \\
\Delta_{\{\sigma_{1,2},\sigma_{2,3}\}}=\Delta_{\{\sigma_{2,3},\sigma_{1,3}\}}=
\Delta_{\{\sigma_{1,2},\sigma_{1,3}\}}=\Delta_{\{\sigma_{1,2},\sigma_{2,3},\sigma_{1,3}\}}=\delta_3\end{gathered}$$
$$\begin{aligned}
\xymatrix@R=1.3em@C=1.5em{
&
{*+[F]{\makebox[\mylength]{\strut$\Delta_3$}}}\\
\strut \sigma_1\cdot\sigma_2\POS!U\ar@{-}[ur]!D!L
&&\strut\sigma_2\cdot\sigma_1\POS!U\ar@{-}[ul]!D!R\\
{*+[F]{\makebox[\mylength]{\strut$\sigma_1$}}}\ar@{-}[u]&&{*+[F]{\makebox[\mylength]{\strut$\sigma_2$}}}\ar@{-}[u]\\
&{*+[F]{\makebox[\mylength]{\strut${\textbf{e}}$}}}\POS!U!L\ar@{-}[ul]!D\POS!R\ar@{-}[ur]!D
}
&&
\xymatrix{
&{*+[F]{\makebox[\mylength]{\strut$\delta_3$}}}\\
{*+[F]{\makebox[\mylength]{\strut$\sigma_{1,2}$}}}\POS!U\ar@{-}[ur]!D!L
&{*+[F]{\makebox[\mylength]{\strut$\sigma_{2,3}$}}}\ar@{-}[u]
&{*+[F]{\makebox[\mylength]{\strut$\sigma_{1,3}$}}}\POS!U\ar@{-}[ul]!D!R\\
&{*+[F]{\makebox[\mylength]{\strut${\textbf{e}}$}}}\POS!U!L\ar@{-}[ul]!D\POS!R\ar@{-}[ur]!D\POS!L(0.5)\ar@{-}[u]
}
\end{aligned}$$
### Combinatorial representations of simple braids {#sec:comb-epr-simple}
The natural map that sends each generator $\sigma_i$ of the braid group ${{B}_{n}}$ to the transposition $(i,i+1)$ induces a morphism from ${{B}_{n}}$ to $\mathfrak{S}_n$, the set of permutations of $n$ elements. Hence, this map also induces morphisms from ${{B}^{+}_{n}}$ and from ${{B}^{+*}_{n}}$ to $\mathfrak{S}_n$.
In the case of the braid monoid ${{B}^{+}_{n}}$, this morphism induces a bijection from ${\mathcal{S}}_n$ to $\mathfrak{S}_n$. Thus, ${\mathcal{S}}_n$ has cardinality $n!$. The element ${\textbf{e}}$ corresponds to the identity permutation, and the element $\Delta_n$ to the mirror permutation $i\mapsto n+1-i$.
From the point of view of braid diagrams, such as the one pictured in Figure \[fig:braidsojf\], simple braids correspond to braids such that in any representative braid diagram, any two strands cross at most once.
In the case of the dual braid monoid ${{B}^{+*}_{n}}$, this morphism only induces an injection from ${\mathcal{S}}_n$ to $\mathfrak{S}_n$. It is thus customary to consider instead the following alternative representation. Recall that a partition $\{T^1,\ldots,T^m\}$ of $\{1,\ldots,n\}$ is called *non-crossing* if the sets $\{\exp(2 \mathbf{i}k \pi / n) {\;:\;}k \in T^i\}$ have pairwise disjoint convex hulls in the complex plane. For instance, the left of Figure \[fig:nocrosodoif\] illustrates that $\{ \{1\}, \{2,3\}, \{4,5,6\} \}$ is a non-crossing partition of $\{1,2,\dots, 6 \}$.
Now, for each subset $T$ of $\{1,\ldots,n\}$, let $x_T$ denote the dual braid $\sigma_{t_1,t_2} \cdot \sigma_{t_2,t_3} \cdot \ldots \cdot
\sigma_{t_{k-1},t_k}$, where $t_1 < t_2 < \ldots < t_k$ are the elements of $T$. Then, for each non-crossing partition $\mathbf{T} =
\{T^1,\ldots,T^m\}$ of $\{1,\ldots,n\}$, we denote by $x_\mathbf{T}$ the (commutative) product $x_{T^1} \cdot \ldots \cdot x_{T^m}$. It is known [@birman1998new; @bessis02] that the mapping $\mathbf{T} \mapsto
x_{\mathbf{T}}$ is a lattice isomorphism from the set of non-crossing partitions of $\{1,\ldots,n\}$ onto ${\mathcal{S}}_n$. Thus in particular, the cardinality of ${\mathcal{S}}_n$ is the Catalan number $\frac{1}{n+1} \binom{2n}{n}$. In this representation, ${\textbf{e}}$ corresponds to the finest partition $\{\{1\},\ldots,\{n\}\}$, and $\delta_n$ to the coarsest partition $\{\{1,\ldots,n\}\}$. See Figure \[fig:nocrosodoif\].
$$\begin{aligned}
\begin{tikzpicture}
\node at (1,0) {$\bullet$};
\node at (.5,.86) {$\bullet$};
\node at (-.5,-.86) {$\bullet$};
\node at (-1,0) {$\bullet$};
\node at (-.5,.86) {$\bullet$};
\node at (.5,-.86) {$\bullet$};
\node at (1.3,0) {$1$};
\node at (.5,1.2) {$2$};
\node at (-.5,1.2) {$3$};
\node at (-1.3,0) {$4$};
\node at (-.5,-1.2) {$5$};
\node at (.5,-1.2) {$6$};
\draw (0,0) circle (1);
\draw (.5,.86) -- (-.5,.86);
\draw (-1,0) -- (-.5,-.86);
\draw (-.5,-.86) -- (.5,-.86);
\draw (.5,-.86) -- (-1,0);
\end{tikzpicture}
&&
\begin{tikzpicture}
\node at (1,0) {$\bullet$};
\node at (.5,.86) {$\bullet$};
\node at (-.5,-.86) {$\bullet$};
\node at (-1,0) {$\bullet$};
\node at (-.5,.86) {$\bullet$};
\node at (.5,-.86) {$\bullet$};
\node at (1.3,0) {$1$};
\node at (.5,1.2) {$2$};
\node at (-.5,1.2) {$3$};
\node at (-1.3,0) {$4$};
\node at (-.5,-1.2) {$5$};
\node at (.5,-1.2) {$6$};
\draw (0,0) circle (1);
\draw (-.5,-.86) -- (.5,-.86);
\draw (.5,-.86) -- (1,0);
\draw (1,0) -- (-.5,-.86);
\end{tikzpicture}
&&\begin{tikzpicture}
\node at (1,0) {$\bullet$};
\node at (.5,.86) {$\bullet$};
\node at (-.5,-.86) {$\bullet$};
\node at (-1,0) {$\bullet$};
\node at (-.5,.86) {$\bullet$};
\node at (.5,-.86) {$\bullet$};
\node at (1.3,0) {$1$};
\node at (.5,1.2) {$2$};
\node at (-.5,1.2) {$3$};
\node at (-1.3,0) {$4$};
\node at (-.5,-1.2) {$5$};
\node at (.5,-1.2) {$6$};
\draw (0,0) circle (1);
\draw (.5,.86) -- (-.5,.86);
\draw (-1,0) -- (-.5,-.86);
\draw (-.5,-.86) -- (.5,-.86);
\draw (.5,-.86) -- (1,0);
\draw (1,0) -- (-1,0);
\end{tikzpicture}\end{aligned}$$
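The crossing condition can also be tested combinatorially: a partition is crossing if and only if there exist $a<b<c<d$ with $a,c$ in one block and $b,d$ in a different one. A short sketch (ours, not from the paper) confirming that non-crossing partitions are counted by the Catalan numbers:

```python
from math import comb

def set_partitions(n):
    """All set partitions of {1..n} as block-label lists (restricted growth)."""
    def rec(labels):
        if len(labels) == n:
            yield labels
            return
        top = max(labels, default=-1)
        for c in range(top + 2):
            yield from rec(labels + [c])
    yield from rec([])

def non_crossing(labels):
    """Crossing iff a < b < c < d exist with a,c in one block, b,d in another."""
    n = len(labels)
    for a in range(n):
        for b in range(a + 1, n):
            for c in range(b + 1, n):
                for d in range(c + 1, n):
                    if labels[a] == labels[c] != labels[b] == labels[d]:
                        return False
    return True

counts = [sum(non_crossing(p) for p in set_partitions(n)) for n in range(1, 7)]
print(counts)                                            # [1, 2, 5, 14, 42, 132]
print([comb(2 * n, n) // (n + 1) for n in range(1, 7)])  # Catalan numbers
```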
#### Running examples for $n=3$. {#running-examples-for-n3.-1 .unnumbered}
Let us consider the case $n=3$. For ${{B}^{+}_{3}}$, the correspondence between simple braids and permutations of $\{1,2,3\}$ is the following: $$\begin{aligned}
{\textbf{e}}&=\text{Id}&\sigma_1&=(1,2)&\sigma_2&=(2,3)\\
\sigma_1\cdot\sigma_2&=(1,2,3)&\sigma_2\cdot\sigma_1&=(1,3,2)&\Delta&=(3,1)\end{aligned}$$
Simple braids of ${{B}^{+*}_{3}}$ correspond to non-crossing partitions of $\{1,2,3\}$, which in this case are simply all the partitions of $\{1,2,3\}$. The correspondence is the following, where singletons are omitted: $$\begin{aligned}
{\textbf{e}}&=\bigl\{\bigr\}&
\sigma_{1,2}&=\bigl\{\{1,2\}\bigr\}&
\sigma_{2,3}&=\bigl\{\{2,3\}\bigr\}&
\sigma_{1,3}&=\bigl\{\{1,3\}\bigr\}&
\delta_3&=\bigl\{\{1,2,3\}\bigr\}\end{aligned}$$
Garside normal form and combinatorics of braids {#sec:garside-normal-form}
===============================================
Braids are known to admit normal forms, that is to say, a unique combinatorial representation for each braid. Normal forms are the standard tool for several algorithmic problems related to braids, for instance the word problem, to cite one of the most emblematic ones [@dehornor08]. Among the several normal forms for braids introduced in the literature, we shall focus in this work on the Garside normal form, which derives from the Garside structure attached to our braid monoids, as recalled above.
Garside normal form of braids {#sec:combinatorics-braids}
-----------------------------
In the monoid ${{B}^{?}_{n}}$, and regardless of whether ${{B}^{?}_{n}} = {{B}^{+}_{n}}$ or ${{B}^{?}_{n}} = {{B}^{+*}_{n}}$, a sequence $(x_1,\ldots,x_k)$ of *simple braids* is said to be *normal* if $x_j=\Delta {\wedge_{\text{l}}}(x_j\cdot\ldots\cdot x_k)$ holds for all $j=1,\ldots,k$. Intuitively, this is a maximality property, meaning that no left divisor of $x_{j+1} \dotsm x_k$ could be moved to $x_j$ while remaining in the world of simple braids. We recall the two following well known facts concerning normal sequences of braids:
1. \[item:2\] For $x,y\in{\mathcal{S}}_n$, let $x\to y$ denote the relation ${\mathsf{R}}(x)\supseteq{\mathsf{L}}(y)$, where the sets ${\mathsf{R}}(x)$ and ${\mathsf{L}}(y)$ are defined as follows: $$\begin{aligned}
\label{eq:9}
{\mathsf{R}}(x)& = \bigl\{ \sigma \in \Sigma {\;:\;}x\cdot \sigma \notin
{\mathcal{S}}_n\}\,,
&{\mathsf{L}}(y) &=\bigl\{\sigma \in \Sigma {\;:\;}\sigma{\leq_\text{l}}y\bigr\}\,.\end{aligned}$$ Then a sequence $(x_1,\ldots,x_k)$ is normal if and only if $x_j\to
x_{j+1}$ holds for all $j=1,\ldots,k-1$, again meaning that left divisors of $x_{j+1}$ are already present in $x_j$, and therefore cannot be pushed into $x_j$ while keeping it simple.
2. For every non-unit braid $x\in{{B}^{?}_{n}}$, there exists a unique normal sequence $(x_1,\ldots,x_k)$ of non-unit simple braids such that $x=x_1\cdot\ldots\cdot x_k$. This sequence is called the *Garside normal form* or *decomposition* of $x$.
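The defining maximality property translates into a greedy algorithm: repeatedly split off $\Delta{\wedge_{\text{l}}}x$, which, working brutally over word classes of a homogeneous presentation, is the longest simple prefix of any representative word. An illustrative Python sketch of ours (exponential in the length, for small examples only, and not part of the formal development):

```python
def cls(w, rels):
    """Class of a word under a homogeneous presentation, via substring rewrites."""
    pairs = [(u, v) for grp in rels for u in grp for v in grp if u != v]
    seen, todo = {w}, [w]
    while todo:
        u = todo.pop()
        for a, b in pairs:
            i = u.find(a)
            while i != -1:
                v = u[:i] + b + u[i + len(a):]
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
                i = u.find(a, i + 1)
    return frozenset(seen)

def normal_form(w, rels, delta):
    """Garside normal form by the greedy rule x_1 = Delta /\\_l x."""
    simple = {cls(u[:i], rels)
              for u in cls(delta, rels) for i in range(len(u) + 1)}
    out = []
    while w:
        # longest simple prefix over all words representing the braid
        best = max((u[:i] for u in sorted(cls(w, rels))
                    for i in range(1, len(u) + 1)
                    if cls(u[:i], rels) in simple),
                   key=len)
        out.append(best)
        w = next(u[len(best):] for u in sorted(cls(w, rels))
                 if u.startswith(best))
    return out

artin, dual = [("121", "212")], [("ab", "bc", "ca")]
print(normal_form("1212", artin, "121"))   # ['121', '2'] : (Delta, s2)
print(normal_form("2111", artin, "121"))   # ['21', '1', '1']
print(normal_form("aba", dual, "ab"))      # ['ab', 'a']  : (delta_3, delta_2)
```

The last line also illustrates the forthcoming Remark \[rem:4\]-type phenomenon: the braid $\sigma_1\sigma_2\sigma_1$ has height $1$ in ${{B}^{+}_{3}}$ but height $2$ in ${{B}^{+*}_{3}}$.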
In this work, the integer $k$ is called the *height* of $x$ (it is also called the *supremum* of $x$ in the literature [@ECHLPT]). We denote it by ${\tau}(x)$.
Regarding the special elements ${\textbf{e}}$ and $\Delta$, the following dual relations hold, meaning that ${\textbf{e}}$ is “final” whereas $\Delta$ is “initial”: $$\begin{aligned}
\forall x\in{\mathcal{S}}_n\quad x &\to {\textbf{e}}& \forall x\in{\mathcal{S}}_n\quad
{\textbf{e}}\to x &\iff x={\textbf{e}}\\
\forall x\in{\mathcal{S}}_n\quad \Delta &\to x & \forall x\in{\mathcal{S}}_n\quad x \to \Delta &\iff x=\Delta\end{aligned}$$
Therefore, the Garside decomposition starts with a finite (possibly zero) number of $\Delta$s, and is then given by a finite path in the finite directed graph $({\mathcal{S}}_n \setminus \{\Delta, {\textbf{e}}\},\to)$. By convention, we define the Garside normal form of the unit braid ${\textbf{e}}$ as the sequence $({\textbf{e}})$, and we put ${\tau}({\textbf{e}})=1$. (It might seem that ${\tau}({\textbf{e}})=0$ would be a more natural convention, but ${\tau}({\textbf{e}})=1$ turns out to be the right choice for a convenient formulation of several results below, as it encompasses the fact that, in normal forms, ${\textbf{e}}$ cannot be followed by any other letter. For instance, Lemma \[lem:full-visual-is-union-of-garside\] would not hold otherwise.) Then, it is a well-known property of Garside monoids that ${\tau}(x)$ is the least positive integer $k$ such that $x$ is a product of $k$ simple braids.
Moreover, it will be convenient to complete the normal form of a braid with infinitely many factors ${\textbf{e}}$. We call the infinite sequence $(x_k)_{k\geq1}$ of simple braids thus obtained the *extended Garside decomposition* of the braid. The directed graph $({\mathcal{S}}_n,\to)$ is then their accepting graph: extended Garside decompositions of braids correspond bijectively to infinite paths in $({\mathcal{S}}_n,\to)$ that eventually hit ${\textbf{e}}$, and then necessarily stay in ${\textbf{e}}$ forever.
\[rem:4\] Following up on Remark \[rem:5\], just as simplicity was observed not to be invariant on ${{B}^{+}_{n}}$, the height and the Garside normal forms are not invariant on ${{B}^{+}_{n}}$. For instance, the braid $\Delta_n$ has Garside normal form the sequence $(\Delta_n)$ itself in ${{B}^{+}_{n}}$, and the sequence $(\delta_n,\delta_{n-1},\ldots,\delta_2)$ in ${{B}^{+*}_{n}}$. Its height is $1$ in ${{B}^{+}_{n}}$ and $n-1$ in ${{B}^{+*}_{n}}$.
We gather in the following proposition some well-known properties of Garside normal forms [@dehornoy2013foundations] that we shall use later.
\[proposition:dehornoy2013foundations\] For all braids $x,y \in {{B}^{?}_{n}}$:
1. \[pro:garside-1\] the height ${\tau}(x)$ is the smallest integer $k\geq 1$ such that $x{\leq_\text{l}}\Delta^k$;
2. \[pro:garside-2\] if $(x_1,\ldots,x_k)$ is the normal form of $x$, then $x_1\cdot\ldots\cdot x_j=x{\wedge_{\text{l}}}\Delta^j$ for all $j\in\{1,\ldots,k\}$ ;
3. \[pro:garside-3\] $x{\leq_\text{l}}y\iff x\leq y{\wedge_{\text{l}}}\Delta^{{\tau}(x)}$.
\[rem:1\] We should stress that, while the normal form is very convenient to enumerate braids, it behaves poorly with respect to multiplication: consider $x$ with height $k$ and Garside decomposition $(x_1,\dotsc,x_k)$, and let $\sigma$ be a generator. Then, $y=x\cdot \sigma$ has height in $\{k, k+1\}$, but if it has height $k$ then the normal form $y=(y_1,\dotsc, y_k)$ might be completely different from that of $x$ (in the sense that $y_1\neq x_1,\dotsc, y_k \neq
x_k$), although it is algorithmically computable.
#### Running examples for $n=3$. {#running-examples-for-n3.-2 .unnumbered}
Let us describe explicitly the accepting graphs $({\mathcal{S}}_3,\to)$ for ${{B}^{+}_{3}}$ and for ${{B}^{+*}_{3}}$. Consider first the case of ${{B}^{+}_{3}}$. The subsets ${\mathsf{L}}(x)$ and ${\mathsf{R}}(x)$ are easily computed through their definition (\[eq:9\]), from which the relation $\to$ is derived. The results of these computations are depicted in Figure \[fig:acceptorB3\]. The analogous computations for ${{B}^{+*}_{3}}$ result in the data pictured in Figure \[fig:acceptordual\].
$$\begin{aligned}
\begin{array}[t]{rcll}
{\mathsf{L}}(x)&x\in{\mathcal{S}}_3&{\mathsf{R}}(x)&\{y\in{\mathcal{S}}_3{\;:\;}x\to y\}\\
\hline
\emptyset&{\textbf{e}}&\emptyset&\{{\textbf{e}}\}\\
\{\sigma_1\}&\sigma_1&\{\sigma_1\}&\{{\textbf{e}},\sigma_1,\sigma_1\cdot\sigma_2\}\\
\{\sigma_2\}&\sigma_2&\{\sigma_2\}&\{{\textbf{e}},\sigma_2,\sigma_2\cdot\sigma_1\}\\
\{\sigma_1\}&\sigma_1\cdot\sigma_2&\{\sigma_2\}&\{{\textbf{e}},\sigma_2,\sigma_2\cdot\sigma_1\}\\
\{\sigma_2\}&\sigma_2\cdot\sigma_1&\{\sigma_1\}&\{{\textbf{e}},\sigma_1,\sigma_1\cdot\sigma_2\}\\
\Sigma&\Delta_3&\Sigma&{\mathcal{S}}_3
\end{array}&&
\xymatrix{
&*+[F]{\strut\sigma_1}\POS!U!L\ar@(ul,ur)!U!R
\POS[]\ar[r]
\POS[]\POS!D!R\ar[drr]!U!L
&*+[F]{\strut\sigma_1\cdot\sigma_2}
\ar@{<->}[dd]
\POS!R\ar[dr]!U
\POS[]\POS!D!L(.5)\ar[ddl]!U
\\
*+[F]{\strut\Delta}
\POS!U!L\ar@(ul,dl)!D!L
\POS!R\POS!U\ar[ur]!D!L
\POS[]\POS!R!U(.5)\ar[urr]!D!L
\POS[]\ar[rrr]
\POS[]\POS!R\POS!D(.5)\ar[drr]!U!L
\POS[]\POS!R\POS!D\ar[dr]!U!L
&&&*+[F]{\strut{\textbf{e}}}
\POS!U!R\ar@(ur,dr)!D!R
\\
&*+[F]{\strut\sigma_2}
\POS!D!L\ar@(dl,dr)!D!R
\POS[]\ar[r]
\POS[]\POS!R!U\ar[urr]!D!L
&*+[F]{\strut\sigma_2\cdot\sigma_1}
\POS!R\ar[ur]!D
\POS[]\POS!U!L(.5)\ar[uul]!D
}\\
\end{aligned}$$
$$\begin{aligned}
\begin{array}[t]{rcll}
{\mathsf{L}}(x)&x\in{\mathcal{S}}_3&{\mathsf{R}}(x)&\{y\in{\mathcal{S}}_3{\;:\;}x\to y\}\\
\hline
\emptyset&{\textbf{e}}&\emptyset&\{{\textbf{e}}\}\\
\{\sigma_{1,2}\}&\sigma_{1,2}&\{\sigma_{1,2},\;\sigma_{1,3}\}&\{{\textbf{e}},\sigma_{1,2},\;\sigma_{1,3}\}\\
\{\sigma_{2,3}\}&\sigma_{2,3}&\{\sigma_{2,3},\; \sigma_{1,2}\}&\{{\textbf{e}},\sigma_{2,3},\; \sigma_{1,2}\}\\
\{\sigma_{1,3}\}&\sigma_{1,3}&\{\sigma_{1,3},\;\sigma_{2,3}\}&\{{\textbf{e}},\sigma_{1,3},\;\sigma_{2,3}\}\\
\Sigma&\delta_3&\Sigma&{\mathcal{S}}_3
\end{array} &&
\xymatrix{
*+[F]{\strut{\textbf{e}}}\POS!U!L\ar@(ul,ur)!U!R
&&*+[F]{\strut\sigma_{2,3}}\POS!D!R\ar@(dr,ur)!U!R\POS!L!U(.25)\ar[dll]!U!R\POS[]\ar[ll]\\
*+[F]{\strut\sigma_{1,2}}\POS!D!L\ar@(dl,ul)!U!L\POS[]\ar[rr]\ar[u]&&
*+[F]{\strut\sigma_{1,3}}\POS!D!R\ar@(dr,ur)!U!R\POS[]\ar[u]\POS!U!L\ar[ull]\\
&*+[F]{\strut\delta_3}\POS!D!L\ar@(dl,dr)!D!R\POS[]!U\ar[uul]\POS!U!L\ar[ul]!D!R
\POS!R(.75)\ar[uur]!D!L\POS!R\ar[ur]!D!L
}\\
\end{aligned}$$
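As a sanity check of these tables, the transition relation can be recomputed mechanically from the divisor sets, using the criterion that $x\to y$ holds if and only if ${\mathsf{L}}(y)\subseteq{\mathsf{R}}(x)$. The following sketch (the string labels `s1`, `s1.s2`, `D` are ad-hoc names for $\sigma_1$, $\sigma_1\cdot\sigma_2$ and $\Delta_3$, not notation used elsewhere) reproduces the last column of the first table:

```python
# Left/right divisor sets of the six simple braids of B_3^+, transcribed from
# the table above; the labels "s1", "s1.s2", "D" are ad-hoc names.
L = {"e": set(), "s1": {"s1"}, "s2": {"s2"},
     "s1.s2": {"s1"}, "s2.s1": {"s2"}, "D": {"s1", "s2"}}
R = {"e": set(), "s1": {"s1"}, "s2": {"s2"},
     "s1.s2": {"s2"}, "s2.s1": {"s1"}, "D": {"s1", "s2"}}

def arrow(x, y):
    """x -> y holds iff every left divisor of y is already a right divisor of x."""
    return L[y] <= R[x]

succ = {x: {y for y in L if arrow(x, y)} for x in L}
print(sorted(succ["s1"]))       # ['e', 's1', 's1.s2']
print(succ["D"] == set(L))      # True: Delta is followed by every simple braid
```

Running it recovers exactly the successor sets listed in the table, including the fact that ${\textbf{e}}$ is only followed by itself.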
Combinatorics of braids {#sec:combinatorics-braids-1}
-----------------------
How many braids $x\in{{B}^{?}_{n}}$ of length $k$ are there? Is there either an exact or an approximate formula? The aim of this subsection is to recall the classical answers to these questions, which we will do by analyzing the ordinary generating function, or growth series, of ${{B}^{?}_{n}}$. This series turns out to be rational, and the Garside normal form is an efficient tool for the study of its coefficients.
### Growth series of braid monoids and Möbius polynomial {#sec:growth-series-braid}
Let $G_n(t)$ be the growth series of ${{B}^{?}_{n}}$. It is defined by: $$G_n(t)=\sum_{x\in{{B}^{?}_{n}}}t^{{|x|}}\,.$$
According to a well-known result [@deligne72; @charney95; @bronfman01], the growth series of ${{B}^{?}_{n}}$ is rational, and is the inverse of a polynomial: $$\begin{aligned}
\label{eq:2}
G_n(t)&=\frac1{H_n(t)}\,,&
\text{with\quad} H_n(t)&=\sum_{X\subseteq \Sigma}(-1)^{{|X|}}\,t^{{|\Delta_X|}}\,.\end{aligned}$$
Explicit and recursive formulæ allow one to compute $H_n(t)$ effectively: $$\begin{aligned}
H_n(t)&=\sum_{k=1}^n(-1)^{k+1}t^{\frac{k(k-1)}{2}}H_{n-k}(t) &&
\text{ if } {{B}^{?}_{n}} = {{B}^{+}_{n}}\\
H_n(t)&= \sum_{k=0}^{n-1} (-1)^k \frac{(n-1+k)!}{(n-1-k)!k!(k+1)!} t^k && \text{ if } {{B}^{?}_{n}} = {{B}^{+*}_{n}}\end{aligned}$$ For reasons that will appear more clearly in a moment (see Subsection \[sec:two-mobi-transf\]), the polynomial $H_n(t)$ deserves the name of *Möbius polynomial of ${{B}^{?}_{n}}$*.
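Both formulæ are straightforward to implement. The sketch below is a minimal transcription, with polynomials stored as coefficient lists (index $=$ degree); it reproduces the two Möbius polynomials computed in the running example that follows:

```python
from math import factorial

def mobius_classic(n):
    # H_m for B_m^+ via H_m = sum_{k=1}^m (-1)^{k+1} t^{k(k-1)/2} H_{m-k},
    # with the convention H_0 = 1.
    H = [[1]]
    for m in range(1, n + 1):
        P = []
        for k in range(1, m + 1):
            shift, sign, Q = k * (k - 1) // 2, (-1) ** (k + 1), H[m - k]
            P += [0] * (shift + len(Q) - len(P))   # grow P if needed
            for d, c in enumerate(Q):
                P[shift + d] += sign * c
        H.append(P)
    return H[n]

def mobius_dual(n):
    # H_n for the dual monoid B_n^{+*}, via the closed formula above.
    return [(-1) ** k * factorial(n - 1 + k)
            // (factorial(n - 1 - k) * factorial(k) * factorial(k + 1))
            for k in range(n)]

print(mobius_classic(3))  # [1, -2, 0, 1], i.e. 1 - 2t + t^3
print(mobius_dual(3))     # [1, -3, 2],    i.e. 1 - 3t + 2t^2
```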
#### Running examples for $n=3$. {#running-examples-for-n3.-3 .unnumbered}
For ${{B}^{+}_{3}}$, the computation of the Möbius polynomial may be done as follows: $$\begin{aligned}
H_3(t)&=1-t^{|\sigma_1|}-t^{|\sigma_2|}+t^{|\sigma_1\vee\sigma_2|}=1-2t+t^3\qquad\text{since
$\sigma_1\vee\sigma_2=\Delta_3$}\\
\intertext{and similarly for ${{B}^{+*}_{3}}$:}
H_3(t)&=1-t^{|\sigma_{1,2}|}-t^{|\sigma_{2,3}|}-t^{|\sigma_{1,3}|}+t^{|\sigma_{1,2}\vee\sigma_{1,3}|}
+t^{|\sigma_{1,2}\vee\sigma_{2,3}|}+t^{|\sigma_{2,3}\vee\sigma_{1,3}|}-t^{|\delta_3|}\\
&=1-3t+2t^2\quad\qquad\text{since
$\sigma_{1,2}\vee\sigma_{1,3}=\sigma_{1,2}\vee\sigma_{2,3}=\sigma_{2,3}\vee\sigma_{1,3}=\delta_3$}\end{aligned}$$
### Connectivity of the Charney graph {#sec:charney-graph}
The growth series $G_n(t)$ is a rational series with non-negative coefficients and with a finite positive radius of convergence, say $q_n$, which, by the Pringsheim theorem [@flajolet09], is necessarily itself a pole of $G_n(t)$. Since $G_n(t)=1/H_n(t)$, as recalled in (\[eq:2\]), it follows that $q_n$ is a root of minimal modulus of the polynomial $H_n(t)$. In order to evaluate the coefficients of $G_n(t)$, we shall prove that $G_n(t)$ has no other pole of modulus $q_n$, or equivalently, that $H_n(t)$ has no other root of modulus $q_n$. This is stated in Corollary \[cor:6\] below.
To this end, we first study the connectivity of the *Charney graph*, which is the directed graph $\mathcal{G} = (V,E)$ with set of vertices $V = {\mathcal{S}}_n \setminus \{{\textbf{e}},\Delta\}$ and set of edges $E =
\{(x,y) \in V^2 {\;:\;}x \to y\}$. The connectivity of $\mathcal{G}$ is well known for ${{B}^{?}_{n}}={{B}^{+}_{n}}$ [@bestvina1999non; @caruso2013genericity; @gebhardt2014penetration], and actually the same result also holds for ${{B}^{?}_{n}}={{B}^{+*}_{n}}$, although it does not seem to appear in the literature. We thus obtain the following result.
\[prop:3\] For $n\geq3$, the Charney graph of ${{B}^{?}_{n}}$ is strongly connected and contains loops.
First, observe that the graph $\mathcal{G}$ contains the loop $\sigma \to
\sigma$ for every generator $\sigma \in \Sigma$. Since proofs of the strong connectivity of $\mathcal{G}$ are found in the literature when ${{B}^{?}_{n}} = {{B}^{+}_{n}}$, we focus on proving that $\mathcal{G}$ is strongly connected when ${{B}^{?}_{n}}
= {{B}^{+*}_{n}}$.
Recall that simple braids are in bijection with non-crossing partitions of $\{1,\ldots,n\}$. For each subset $T$ of $\{1,\ldots,n\}$, we denote by $x_T$ the braid $\sigma_{t_1,t_2} \cdot \sigma_{t_2,t_3} \cdot \ldots \cdot
\sigma_{t_{k-1},t_k}$, where $t_1 < t_2 < \ldots < t_k$ are the elements of $T$. Then, for each non-crossing partition $\mathbf{T} =
\{T^1,\ldots,T^m\}$ of $\{1,\ldots,n\}$, we denote by $x_\mathbf{T}$ the (commutative) product $x_{T^1} \cdot \ldots \cdot x_{T^m}$. It is known [@birman1998new] that
1. the mapping $\mathbf{T} \mapsto x_{\mathbf{T}}$ is a bijection from the set of non-crossing partitions of $\{1,\ldots,n\}$ to ${\mathcal{S}}_n$, as mentioned in Subsection \[sec:comb-repr-braids\];

2. the set ${\mathsf{L}}(x_{\mathbf{T}})$ is equal to $\{\sigma_{u,v} {\;:\;}\exists T \in \mathbf{T}\quad u,v \in T\}$;

3. the set ${\mathsf{R}}(x_{\mathbf{T}})$ is equal to $$\bigl\{\sigma_{u,v} {\;:\;}\exists T \in \mathbf{T} \quad T \cap \{u+1,\ldots,v\} \neq \emptyset \text{ and } T \cap \{1,\ldots,u,v+1,\ldots,n\} \neq \emptyset\bigr\}\,.$$
Hence, consider two braids $y,z \in {\mathcal{S}}_n \setminus\{{\textbf{e}},\Delta\}$, and let $m = \lfloor \frac{n}{2} \rfloor$, as well as the set $A =
\{\sigma_{1,2},\ldots,\sigma_{n-1,n},\sigma_{n,1}\}$. Since $z <_{\text{l}}
\Delta$, we know that $z = x_{\mathbf{Z}}$ where $\mathbf{Z}$ is a partition of $\{1,\ldots,n\}$ in at least two subsets. It follows that $\mathbf{Z}$ is a refinement of a non-crossing partition $\mathbf{V}$ of $\{1,\ldots,n\}$ in *exactly* two subsets. The map $\sigma_{i,j} \mapsto \sigma_{i+1,j+1}$ induces an automorphism of the dual braid monoid ${{B}^{+*}_{n}}$, hence we assume without loss of generality that $\mathbf{V} =
\bigl\{\{1,\ldots,i,n\},\{i+1,\ldots,n-1\}\bigr\}$ for some integer $i \in
\{0,\ldots,m-1\}$.
Finally, consider some generator $\sigma_{a,b} \in {\mathsf{L}}(y)$, with $a < b$. Since ${\mathsf{L}}(y) \subseteq {\mathsf{R}}(y)$, it follows that $y \to \sigma_{a,b} \to \sigma_{1,a} \to \sigma_{1,n}\to \sigma_{2,n}$ if $1 < a$, or $y \to \sigma_{1,b} \to \sigma_{1,n} \to \sigma_{2,n}$ if $a = 1$. Since $2 \leq m+1 < n$, we then observe that $\sigma_{2,n} \to \sigma_{1,m+1} \to x_{\mathbf{T}} \to x_{\mathbf{U}}
\to x_{\mathbf{V}} \to x_{\mathbf{Z}} = z$, where $$\begin{aligned}
\mathbf{T} &=
\begin{cases}
\bigl\{\{u,n+1-u\} {\;:\;}1 \leq u \leq m\bigr\} & \text{if $n$ is even,} \\
\bigl\{\{u,n+1-u\} {\;:\;}1 \leq u \leq m\bigr\} \cup
\bigl\{\{m+1\}\bigr\} & \text{if $n$ is odd;}
\end{cases}\\
\mathbf{U} &= \bigl\{\{1,\ldots,m\},\{m+1,\ldots,n\}\bigr\}.\end{aligned}$$ This completes the proof.
The connectivity of the Charney graph stated in the above proposition has several consequences on the combinatorics of braids, which we gather in the following corollary; these results seem to have gone unnoticed so far.
\[cor:6\] The Möbius polynomial $H_n(t)$ has a unique root of smallest modulus. This root, say $q_n$, is real and simple, and it lies in $(0,1)$. It coincides with the radius of convergence of the growth series $G_n(t)$.
Furthermore, for each integer $k\geq0$, put $\lambda_n(k)=\# \{x\in{{B}^{?}_{n}}{\;:\;}{|x|}=k\}\,.$ Then, for $n\geq3$, the following asymptotics hold for some constant $C_n>0$: $$\begin{gathered}
\label{eq:12}
\lambda_n(k)\sim_{k\to\infty} C_n q_n^{-k}\,.
\end{gathered}$$
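Before turning to the proof, the asymptotics can be illustrated concretely for $n=3$ and ${{B}^{?}_{n}}={{B}^{+}_{n}}$: here $H_3(t)=1-2t+t^3=(1-t)(1-t-t^2)$, so $q_3=(\sqrt5-1)/2$ and the ratio $\lambda_3(k+1)/\lambda_3(k)$ converges to $1/q_3$, the golden ratio. A short numerical sketch, specific to this small case:

```python
from math import sqrt

# For B_3^+ we have G_3(t) = 1/(1 - 2t + t^3), so its coefficients obey
# lam(k) = 2*lam(k-1) - lam(k-3), starting from lam(0..2) = 1, 2, 4.
lam = [1, 2, 4]
for _ in range(40):
    lam.append(2 * lam[-1] - lam[-3])

# Root of smallest modulus of H_3(t) = (1 - t)(1 - t - t^2):
q3 = (sqrt(5) - 1) / 2
print(lam[:7])                                   # [1, 2, 4, 7, 12, 20, 33]
print(round(lam[-1] / lam[-2], 6), round(1 / q3, 6))
```

The successive ratios approach $1/q_3\approx1.618034$ geometrically fast, in accordance with (\[eq:12\]).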
Recall that we have already defined $q_n$, at the beginning of Subsection \[sec:charney-graph\], as the radius of convergence of $G_n(t)$, and we know that $q_n$ is a root of smallest modulus of $H_n(t)$. We will now derive the two statements of the corollary through an application of Perron-Frobenius theory for primitive matrices (see, *e.g.*, [@seneta81]).
Let $I_\Delta$ and $I_{\neg\Delta}$ denote the sets $$\begin{aligned}
I_\Delta&=\{(i,\Delta) {\;:\;}1 \leq i \leq {|\Delta|}\}\,,&
I_{\neg \Delta}&=\{(i,\sigma) {\;:\;}\sigma \in {\mathcal{S}}_n \setminus \{{\textbf{e}},\Delta\} \text{
and } 1 \leq i \leq {|\sigma|}\}\,,\end{aligned}$$ and let $I$ denote the disjoint union $I_\Delta \cup I_{\neg \Delta}$. Let $M
= (M_{x,y})_{x,y \in I}$ be the non-negative matrix defined as follows: $$M_{(i,\sigma),(j,\tau)}=
\begin{cases}
1,&\text{if $j=i+1$ and $\sigma=\tau$,}\\
1,&\text{if $i=|\sigma|$ and $j=1$ and $\sigma\to\tau$,}\\
0,&\text{otherwise.}
\end{cases}$$
By construction, $M$ is a block triangular matrix $M=\left(\begin{smallmatrix}A&B\\0&C
\end{smallmatrix}
\right)$ where $A$, $B$ and $C$ are the restrictions of $M$ to the respective sets of indices $ I_\Delta \times I_\Delta$, $I_\Delta \times I_{\neg \Delta}$ and $I_{\neg \Delta} \times I_{\neg \Delta}$.
Since the Charney graph is strongly connected and contains loops according to Proposition \[prop:3\], and since it contains at least $n-1$ vertices (the elements of $\Sigma$), it follows that $C$ is a primitive matrix with Perron eigenvalue $\rho > 1$. By construction, we know that $A^{{|\Delta|}} =
\mathbf{Id}_{{|\Delta|}}$, hence that the eigenvalues of $A$ have modulus $1$. Consequently, $\rho$ is a simple eigenvalue of $M$, and has a strictly greater modulus than all other eigenvalues of $M$. Hence, there exist left and right eigenvectors $\mathbf{l}$ and $\mathbf{r}$ of $M$ for the eigenvalue $\rho$ with non-negative entries, whose restrictions $(\mathbf{l}_x)_{x \in I_{\neg \Delta}}$ and $(\mathbf{r}_x)_{x \in I_{\neg
\Delta}}$ only have positive entries, and such that $\mathbf{l} \cdot
\mathbf{r} = 1$.
Then, observe that $\lambda_n(k) = \mathbf{u} \cdot M^{k-1} \cdot \mathbf{v}$ for all $k \geq 1$, where $\mathbf{u}$ is the row vector defined by $\mathbf{u}_{(i,\sigma)} = {\mathbf{1}\bm(i = 1\bm)}$ and $\mathbf{v}$ is the column vector defined by $\mathbf{v}_{(i,\sigma)} = {\mathbf{1}\bm(i = {|\sigma|}\bm)}$. Indeed, this follows at once from the existence and uniqueness of the Garside normal form for braids, and from the construction of the matrix $M$.
Both vectors $\mathbf{u}$ and $\mathbf{v}$ have some non-zero entries in $I_{\neg \Delta}$, and therefore $$\begin{gathered}
\label{eq:8}
\lambda_n(k) = \mathbf{u} \cdot M^{k-1} \cdot \mathbf{v} \sim_{k\to\infty}
\rho^{k-1} (\mathbf{u} \cdot \mathbf{r}) (\mathbf{l} \cdot
\mathbf{v})\,.\end{gathered}$$ Hence, $\rho^{-1}$ is the radius of convergence of the generating series $G_n(t) = \sum_{k \geq 0} \lambda_n(k)\,t^k$, and thus $\rho^{-1}=q_n$.
To complete the proof, consider the decomposition of $H_n(t)$ as a product of the form: $$\begin{gathered}
H_n(t)=(1-t/q_n)\cdot(1-t/r_1)\cdot\ldots\cdot(1-t/r_i)\cdot(1-t/a_1)\cdot\ldots\cdot(1-t/a_j)\,,\end{gathered}$$ where $r_1,\ldots,r_i$ are the other complex roots of $H_n(t)$ of modulus $q_n$, including $q_n$ if its multiplicity is $>1$, and $a_1,\ldots,a_j$ are the remaining complex roots of $H_n(t)$, hence of modulus greater than $q_n$. Since we know that $G_n(t)=1/H_n(t)$, and considering the series expansion of $1/H_n(t)$, one sees that the asymptotic equivalence found in (\[eq:8\]) for the coefficients $\lambda_n(\cdot)$ of $G_n(t)$ cannot hold if $i>0$, whence the result.
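The matrix construction used in the proof can be replayed explicitly for ${{B}^{+}_{3}}$. The sketch below (pure Python, with the same ad-hoc labels as before) assembles $M$, $\mathbf{u}$ and $\mathbf{v}$ from the data of Figure \[fig:acceptorB3\] and recovers the coefficients $\lambda_3(k)$ of $G_3(t)$:

```python
# Garside-automaton matrix M for B_3^+. Indices are pairs (i, sigma) with
# 1 <= i <= |sigma|; the unit braid "e" is excluded, and "D" stands for Delta.
lengths = {"s1": 1, "s2": 1, "s1.s2": 2, "s2.s1": 2, "D": 3}
succ = {"s1": {"s1", "s1.s2"}, "s2": {"s2", "s2.s1"},
        "s1.s2": {"s2", "s2.s1"}, "s2.s1": {"s1", "s1.s2"},
        "D": set(lengths)}

idx = [(i, s) for s in lengths for i in range(1, lengths[s] + 1)]
pos = {x: r for r, x in enumerate(idx)}
size = len(idx)
M = [[0] * size for _ in range(size)]
for i, s in idx:
    if i < lengths[s]:
        M[pos[(i, s)]][pos[(i + 1, s)]] = 1      # advance inside the factor
    else:
        for t in succ[s]:
            M[pos[(i, s)]][pos[(1, t)]] = 1      # move on to a successor factor

u = [int(i == 1) for i, s in idx]                # entry points of factors
v = [int(i == lengths[s]) for i, s in idx]       # exit points of factors

lam, row = [], u
for k in range(1, 8):                            # lam[k-1] = u . M^(k-1) . v
    lam.append(sum(x * y for x, y in zip(row, v)))
    row = [sum(row[r] * M[r][c] for r in range(size)) for c in range(size)]
print(lam)  # [2, 4, 7, 12, 20, 33, 54]
```

These values agree with the series expansion of $G_3(t)=1/(1-2t+t^3)$, as the identity $\lambda_n(k) = \mathbf{u} \cdot M^{k-1} \cdot \mathbf{v}$ predicts.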
Two Möbius transforms {#sec:two-mobi-transf}
---------------------
This last subsection is devoted to the study of Möbius transforms. In § \[sec:mobius-transform\], we particularize to our braid monoids ${{B}^{?}_{n}}$ the classical Möbius transform, as defined for general classes of partial orders [@rota64; @stanley97]. We prove the Möbius inversion formula for our particular case, although it could be derived from more general results. Next, we introduce in § \[sec:grade-mobi-transf\] a variant, called graded Möbius transform, which will prove to be most useful later for the probabilistic analysis.
We will use extensively the notation ${\mathbf{1}\bm(A\bm)}$ for the characteristic function of $A$, equal to $1$ if $A$ is true and to $0$ if $A$ is false.
### The standard Möbius transform {#sec:mobius-transform}
In the framework of braid monoids, the usual Möbius transform is defined as follows and leads to the next proposition.
\[def:3\] Given a real-valued function $f: {{B}^{?}_{n}}\mapsto\bbR$, its *Möbius transform* is the function $h:{{B}^{?}_{n}}\mapsto\bbR$ defined by: $$\begin{gathered}
\label{eq:4}
h(x)=\sum_{X \subseteq \Sigma} (-1)^{|X|} f(x \cdot \Delta_X) \text{ for all } x \in {{B}^{?}_{n}}.\end{gathered}$$
\[prop:2\] Let $f,h:{{B}^{?}_{n}}\to\bbR$ be two functions such that the series $\sum_{x \in
{{B}^{?}_{n}}} |f(x)|$ and $\sum_{x \in {{B}^{?}_{n}}} |h(x)|$ are convergent. Then $h$ is the Möbius transform of $f$ if and only if $$\begin{gathered}
\label{eq:10}
\forall x\in{{B}^{?}_{n}}\quad f(x)=\sum_{y\in{{B}^{?}_{n}}} h(x \cdot y)\,.\end{gathered}$$
For every non-unit braid $z \in {{B}^{?}_{n}}$, the sets ${\mathsf{L}}(z)$ and ${\mathsf{L}}(z^*)$ are non-empty, hence the powersets $\mathcal{P} {\mathsf{L}}(z)$ and $\mathcal{P}{\mathsf{L}}(z^*)$ are non-trivial Boolean lattices. It follows from the equality $\Delta_X =
\operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}X$ that: $$\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X {\leq_\text{l}}z\bm)} = \sum_{X \subseteq {\mathsf{L}}(z)} (-1)^{|X|} = 0\,.$$ And, similarly, it follows from the equality $\Delta_X = \operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{r}}}}}X$ that: $$\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X {\leq_{\text{r}}}z\bm)} = \sum_{X \subseteq {\mathsf{L}}(z^*)} (-1)^{|X|} = 0\,
.$$
Consider now two functions $f$ and $h$ such that the series $\sum_{x \in {{B}^{?}_{n}}} |f(x)|$ and $\sum_{x \in {{B}^{?}_{n}}} |h(x)|$ are convergent. Assume first that $h$ is the Möbius transform of $f$. Using the change of variable $v = y \cdot
\Delta_X$, we derive from (\[eq:4\]) the following: $$\begin{aligned}
f(x) &= \sum_{v \in {{B}^{?}_{n}}} \Bigl(\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X {\leq_{\text{r}}}v\bm)}\Bigr) f(x \cdot v) \\
&=\sum_{y \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} (-1)^{|X|} f(x \cdot y \cdot \Delta_X)
= \sum_{y\in{{B}^{?}_{n}}} h(x \cdot y),\end{aligned}$$ proving (\[eq:10\]).
Conversely, if (\[eq:10\]) holds, we use the change of variable $u =
\Delta_X \cdot y$ to derive: $$\begin{aligned}
h(x) &= \sum_{u \in {{B}^{?}_{n}}} \Bigl(\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X {\leq_\text{l}}u\bm)}\Bigr) h(x \cdot u) \\
&=\sum_{y \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} (-1)^{|X|} h(x \cdot \Delta_X \cdot y)
= \sum_{X \subseteq \Sigma} (-1)^{|X|} f(x \cdot\Delta_X).\end{aligned}$$ This shows that $h$ is the Möbius transform of $f$, completing the proof.
In particular, observe that, if a function $f$ has support in ${\mathcal{S}}_n$, then so does its Möbius transform $h$. Hence, we also define the notion of Möbius transform of real-valued functions $f : {\mathcal{S}}_n \to \bbR$ in a natural way. In that narrower context, Proposition \[prop:2\] formulates as follows.
\[cor:5\] Let $f,h:{\mathcal{S}}_n\to\bbR$ be two functions. Then the two statements: $$\begin{aligned}
\label{eq:1}
\forall x\in{\mathcal{S}}_n\quad f(x)&=\sum_{y\in{\mathcal{S}}_n} {\mathbf{1}\bm(x \cdot y \in
{\mathcal{S}}_n\bm)} h(x \cdot y) \\
\label{eq:3}
\forall x\in{\mathcal{S}}_n\quad h(x)&=\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(x \cdot \Delta_X \in
{\mathcal{S}}_n\bm)} f(x \cdot \Delta_X)\end{aligned}$$ are equivalent.
In particular, by comparing the expressions (\[eq:2\]) of $H_n$ and (\[eq:3\]) of the Möbius transform of $f:{\mathcal{S}}_n\to\bbR$, we observe that if $p$ is a real number, and if $f:{\mathcal{S}}_n \to \bbR$ is defined by $f(x)=p^{{|x|}}$, then its Möbius transform $h$ satisfies: $$\begin{gathered}
\label{eq:19}
h({\textbf{e}})=H_n(p).\end{gathered}$$
#### Running examples for $n=3$. {#running-examples-for-n3.-4 .unnumbered}
We tabulate in Table \[tab:mobiustransform3\] the values of the Möbius transform of the function $p^{|x|}$ defined on ${\mathcal{S}}_3$, for ${{B}^{+}_{3}}$ and for ${{B}^{+*}_{3}}$. It is easily computed based on the elements found in Figures \[fig:acceptorB3\] and \[fig:acceptordual\] respectively.
$$\begin{aligned}
\begin{array}{ll}
x\in{\mathcal{S}}_3 &h(x)\\
\hline
{\textbf{e}}&1-2p+p^3=H_3(p)\\
\sigma_1&p-p^2\\
\sigma_2&p-p^2\\
\sigma_1\cdot\sigma_2&p^2-p^3\\
\sigma_2\cdot\sigma_1&p^2-p^3\\
\Delta_3&p^3
\end{array}
&&
\begin{array}{ll}
x\in{\mathcal{S}}_3 &h(x)\\
\hline
{\textbf{e}}&1-3p+2p^2=H_3(p)\\
\sigma_{1,2}&p-p^2\\
\sigma_{2,3}&p-p^2\\
\sigma_{1,3}&p-p^2\\
\delta_3&p^2\\
\phantom{\Delta_3}
\end{array}\end{aligned}$$
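These tables can be verified mechanically: by Corollary \[cor:5\], summing the $h$-values over the simple right-multiples of $x$ must return $f(x)=p^{|x|}$ for every $x\in{\mathcal{S}}_3$. A small numeric check for ${{B}^{+}_{3}}$, with an arbitrary test value of $p$ and the same ad-hoc labels as before:

```python
p = 0.3  # arbitrary test value in (0, 1)
length = {"e": 0, "s1": 1, "s2": 1, "s1.s2": 2, "s2.s1": 2, "D": 3}
# h-values of the left table, evaluated at p.
h = {"e": 1 - 2*p + p**3, "s1": p - p**2, "s2": p - p**2,
     "s1.s2": p**2 - p**3, "s2.s1": p**2 - p**3, "D": p**3}
# Simple right-multiples of x that remain simple, i.e. the simple braids z
# with x <=_l z, read off the lattice of simple braids of B_3^+.
above = {"e": set(length), "s1": {"s1", "s1.s2", "D"}, "s2": {"s2", "s2.s1", "D"},
         "s1.s2": {"s1.s2", "D"}, "s2.s1": {"s2.s1", "D"}, "D": {"D"}}

for x in length:
    assert abs(sum(h[z] for z in above[x]) - p ** length[x]) < 1e-12
print("inversion formula (eq. 1) verified on S_3")
```

In particular, the $h$-values sum to $f({\textbf{e}})=1$, and $h({\textbf{e}})=H_3(p)$ as noted in (\[eq:19\]).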
### The graded Möbius transform {#sec:grade-mobi-transf}
The above relation between real-valued functions $f : {{B}^{?}_{n}} \mapsto \bbR$ and their Möbius transforms works only when the Möbius transform is summable. In order to deal with all functions defined on ${{B}^{?}_{n}}$, we introduce a variant of those transforms, which is the notion of *graded Möbius transform*. To this end, for each finite braid $x\in{{B}^{?}_{n}}$, we define ${{B}^{?}_{n}}[x]$ as the following subset: $${{B}^{?}_{n}}[x] = \{y\in{{B}^{?}_{n}}{\;:\;}{\tau}(x \cdot y)={\tau}(x)\}
\,.$$
\[def:4\] Given a real-valued function $f: {{B}^{?}_{n}}\mapsto\bbR$, its *graded Möbius transform* is the function $h:{{B}^{?}_{n}}\mapsto\bbR$ defined by: $$\begin{gathered}
\label{eq:4**}
\forall x\in{{B}^{?}_{n}}\quad h(x)=\sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X \in {{B}^{?}_{n}}[x]\bm)} f(x \cdot \Delta_X)\,.\end{gathered}$$
For functions that vanish outside of ${\mathcal{S}}_n$, the notions of Möbius transform and graded Möbius transform coincide, while this is not the case in general.
The generalization of the summation formula (\[eq:10\]) is stated in the following result.
\[thr:3\] Let $f,h:{{B}^{?}_{n}}\to\bbR$ be two functions. Then $h$ is the graded Möbius transform of $f$ if and only if $$\begin{gathered}
\label{eq:11}
\forall x\in{{B}^{?}_{n}}\quad f(x)=\sum_{y\in{{B}^{?}_{n}}[x]}h(x \cdot y)\,.\end{gathered}$$
Note that, in formula (\[eq:11\]), the braids $x \cdot y$ for $y\in{{B}^{?}_{n}}[x]$ may have normal forms that differ completely from that of $x$. This relates to Remark \[rem:1\].
For a generic braid $x\in{{B}^{?}_{n}}$ of height $k={\tau}(x)$, we denote by $(x_1,\ldots,x_k)$ the Garside decomposition of $x$. Observe that, for all $x, y, z\in {{B}^{?}_{n}}$, we have: $$\begin{gathered}
\label{eq:equivalence}
y \in {{B}^{?}_{n}}[x] \wedge z \in {{B}^{?}_{n}}[x \cdot y] \iff
y \cdot z \in {{B}^{?}_{n}}[x].
\end{gathered}$$ Indeed, since ${\tau}$ is non-decreasing for ${\leq_\text{l}}$, we have $\tau(x) \leq \tau(x \cdot y) \leq \tau(x \cdot y \cdot z)$, and therefore $y \in {{B}^{?}_{n}}[x] \wedge z \in {{B}^{?}_{n}}[x \cdot y] \iff \tau(x) = \tau(x \cdot y) = \tau(x \cdot y \cdot z) \iff
\tau(x) = \tau(x \cdot y \cdot z) \iff y \cdot z \in {{B}^{?}_{n}}[x]$.
Hence, if $h$ is the graded Möbius transform of $f$, then: $$\begin{aligned}
\sum_{y\in{{B}^{?}_{n}}[x]} h(x \cdot y) &=\sum_{y \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(y \in {{B}^{?}_{n}}[x]\bm)} {\mathbf{1}\bm(\Delta_X \in {{B}^{?}_{n}}[x \cdot y]\bm)} f(x \cdot y \cdot \Delta_X)
&&\text{by~\eqref{eq:4**}}\\
&= \sum_{v \in {{B}^{?}_{n}}} \sum_{y \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(v \in {{B}^{?}_{n}}[x]\bm)} {\mathbf{1}\bm(v = y \Delta_X\bm)} f(x \cdot v)
&&\hspace{-0.9cm}\text{by~\eqref{eq:equivalence} with $z=\Delta_X$}\\
&= \sum_{v \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(v \in {{B}^{?}_{n}}[x]\bm)} {\mathbf{1}\bm(\Delta_X {\leq_{\text{r}}}v\bm)} f(x \cdot v) \\
&= \sum_{v \in {{B}^{?}_{n}}} \Bigl(\sum_{X \subseteq {\mathsf{L}}(v^*)} (-1)^{|X|}\Bigr) {\mathbf{1}\bm(v \in {{B}^{?}_{n}}[x]\bm)} f(x \cdot v) \\
&= f(x).\end{aligned}$$
Conversely, if (\[eq:11\]) holds, then: $$\begin{aligned}
\sum_{X \subseteq \Sigma} (-1)^{|X|} &{\mathbf{1}\bm(\Delta_X \in {{B}^{?}_{n}}[x]\bm)} f(x\cdot \Delta_X) \\
&= \sum_{X \subseteq \Sigma} \sum_{z \in {{B}^{?}_{n}}}(-1)^{|X|} {\mathbf{1}\bm(\Delta_X \in {{B}^{?}_{n}}[x]\bm)} {\mathbf{1}\bm(z \in {{B}^{?}_{n}}[x \cdot \Delta_X]\bm)} h(x \cdot \Delta_X \cdot z) \\
&= \sum_{u \in {{B}^{?}_{n}}} \sum_{X \subseteq \Sigma} \sum_{z \in {{B}^{?}_{n}}}(-1)^{|X|} {\mathbf{1}\bm(u \in {{B}^{?}_{n}}[x]\bm)} {\mathbf{1}\bm(u = \Delta_X \cdot z\bm)} h(x \cdot u) \\
&= \sum_{u \in {{B}^{?}_{n}}[x]} \sum_{X \subseteq \Sigma} (-1)^{|X|} {\mathbf{1}\bm(\Delta_X {\leq_\text{l}}u\bm)} h(x \cdot u) \\
&= \sum_{u \in {{B}^{?}_{n}}[x]} \Bigl(\sum_{X \subseteq {\mathsf{L}}(u)} (-1)^{|X|} \Bigr) h(x \cdot u) \\
&= h(x).\end{aligned}$$ This completes the proof.
### Additional properties of Möbius transforms {#sec:additional-results}
Finally, we state in this subsection a couple of lemmas which we will use in next section for the probabilistic study.
\[lem:2\] For $p$ a real number, let $f:{\mathcal{S}}_n\to\bbR$ be defined by $f(x)=p^{{|x|}}$, and let $h:{\mathcal{S}}_n\to\bbR$ be the Möbius transform of $f$. Let also $g:{\mathcal{S}}_n\to\bbR$ be defined by: $$\begin{gathered}
\label{eq:17}
g(x)=\sum_{y\in{\mathcal{S}}_n} {\mathbf{1}\bm(x \to y\bm)}h(y)\,.\end{gathered}$$ Then the identity $h(x)=f(x)g(x)$ holds for all $x\in{\mathcal{S}}_n$.
Let $P=\mathcal{P}(\Sigma)$, and consider the two functions $F,G:P\to\bbR$ defined, for $A\in P$, by: $$\begin{aligned}
F(A) &= \sum_{I \in P} (-1)^{{|I|}} {\mathbf{1}\bm(I \subseteq {\mathsf{L}}(\Delta_{\Sigma \setminus A})\bm)} f(\Delta_I)\,,&
G(A) &= \sum_{y \in {\mathcal{S}}_n} {\mathbf{1}\bm({\mathsf{L}}(y) \cap {\mathsf{L}}(\Delta_{\Sigma\setminus A}) = \emptyset\bm)} h(y)\,.\end{aligned}$$ Then we claim that $F=G$.
Let us prove the claim. For every $I\in P$ and for every $y\in{\mathcal{S}}_n$, we have: $$I\subseteq{\mathsf{L}}(y)\iff\Delta_I{\leq_\text{l}}y \iff {\mathsf{L}}(\Delta_I) \subseteq {\mathsf{L}}(y).$$ Therefore, according to the Möbius summation formula (\[eq:1\]), we have: $$f(\Delta_I)=\sum_{y\in{\mathcal{S}}_n}{\mathbf{1}\bm(I\subseteq{\mathsf{L}}(y)\bm)}h(y).$$ Substituting the right-hand side above into the sum defining $F(A)$, and inverting the order of summation, yields: $$F(A) =\sum_{y\in{\mathcal{S}}_n}\Bigl(\sum_{I\in
P}(-1)^{{|I|}}{\mathbf{1}\bm(I\subseteq {\mathsf{L}}(\Delta_{\Sigma\setminus A})
\cap {\mathsf{L}}(y)\bm)}\Bigr)h(y)
=\sum_{y\in{\mathcal{S}}_n}{\mathbf{1}\bm({\mathsf{L}}(\Delta_{\Sigma\setminus A}) \cap {\mathsf{L}}(y) =
\emptyset\bm)}h(y) = G(A),$$ which proves the claim.
Now observe that, for every $x\in{\mathcal{S}}_n$, we have $$x \cdot \Delta_{\Sigma \setminus {\mathsf{R}}(x)} = \operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}\{x \cdot \sigma {\;:\;}\sigma \in \Sigma \setminus {\mathsf{R}}(x)\} \leq \operatorname{\mathchoice {\bigvee{}_{\raisebox{-.8ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.3ex}{\scriptsize\!\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}} {\bigvee{}_{\raisebox{-.17ex}{\tiny\!\text{l}}}}}{\mathcal{S}}_n = \Delta_\Sigma,$$ and therefore ${\mathsf{L}}(\Delta_{\Sigma \setminus {\mathsf{R}}(x)}) = \Sigma \setminus
{\mathsf{R}}(x)$. This proves that $$\begin{aligned}
\label{eq:24}
x \cdot \Delta_I \in {\mathcal{S}}_n\ &
\iff\ \makebox[6em]{$I \cap {\mathsf{R}}(x) = \emptyset$}
\iff\ I \subseteq {\mathsf{L}}(\Delta_{\Sigma \setminus {\mathsf{R}}(x)})\\
\label{eq:26}
x \to y\ & \iff\ \makebox[6em]{${\mathsf{L}}(y) \subseteq {\mathsf{R}}(x)$}
\iff\ {\mathsf{L}}(y) \cap {\mathsf{L}}(\Delta_{\Sigma \setminus {\mathsf{R}}(x)}) = \emptyset.\end{aligned}$$ Hence, using the multiplicativity of $f$, we have simultaneously $$\begin{aligned}
h(x)&=\sum_{I\in P}(-1)^{{|I|}} {\mathbf{1}\bm(I \subseteq {\mathsf{L}}(\Delta_{\Sigma \setminus {\mathsf{R}}(x)})\bm)} f(x\cdot\Delta_I) = f(x)F\left({\mathsf{R}}(x)\right) \\
\text{and }g(x)&=\sum_{y\in{\mathcal{S}}_n}{\mathbf{1}\bm({\mathsf{L}}(y) \cap {\mathsf{L}}(\Delta_{\Sigma\setminus{\mathsf{R}}(x)}) = \emptyset\bm)}h(y)=G({\mathsf{R}}(x))\,.\end{aligned}$$ Since $F=G$, this implies $h(x)=f(x)g(x)$, which completes the proof of the lemma.
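On the running example, the identity of Lemma \[lem:2\] can also be confirmed numerically, using the transition sets of Figure \[fig:acceptorB3\] and the $h$-values of Table \[tab:mobiustransform3\] (ad-hoc string labels, arbitrary test value of $p$):

```python
# Check of Lemma [lem:2] on S_3 for B_3^+: with f(x) = p^{|x|} and
# g(x) = sum of h(y) over the successors y of x, one must have h = f * g.
p = 0.3
length = {"e": 0, "s1": 1, "s2": 1, "s1.s2": 2, "s2.s1": 2, "D": 3}
succ = {"e": {"e"},
        "s1": {"e", "s1", "s1.s2"}, "s2": {"e", "s2", "s2.s1"},
        "s1.s2": {"e", "s2", "s2.s1"}, "s2.s1": {"e", "s1", "s1.s2"},
        "D": set(length)}
h = {"e": 1 - 2*p + p**3, "s1": p - p**2, "s2": p - p**2,
     "s1.s2": p**2 - p**3, "s2.s1": p**2 - p**3, "D": p**3}

for x in length:
    f_x, g_x = p ** length[x], sum(h[y] for y in succ[x])
    assert abs(h[x] - f_x * g_x) < 1e-12
print("h = f * g holds on all of S_3")
```

For instance $g(\sigma_1)=h({\textbf{e}})+h(\sigma_1)+h(\sigma_1\cdot\sigma_2)=1-p$, and indeed $f(\sigma_1)g(\sigma_1)=p(1-p)=h(\sigma_1)$.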
\[lem:3b\] Let $(x_1,\ldots,x_k)$ be the Garside decomposition of a braid $x \in {{B}^{?}_{n}}$ and let $X$ be a subset of $\Sigma$. We have: $$\begin{gathered}
\label{eq:3b}
\Delta_X \in {{B}^{?}_{n}}[x] \iff \Delta_X \in {{B}^{?}_{n}}[x_k].\end{gathered}$$
The result is immediate if $x = {\textbf{e}}$. Moreover, if $x \neq {\textbf{e}}$ and if $\Delta_X \in {{B}^{?}_{n}}[x_k]$, we observe that $x_k \Delta_X$ is a simple braid, and therefore that $x_1 \cdot \ldots \cdot x_{k-1} \cdot (x_k \Delta_X)$ is a factorization of $x \cdot \Delta_X$ into $k$ simple braids, whence ${\tau}(x \Delta_X) \leq {\tau}(x)$. Since the height ${\tau}$ is non-decreasing for ${\leq_\text{l}}$, it follows that $\Delta_X \in {{B}^{?}_{n}}[x]$.
Conversely, if $x \neq {\textbf{e}}$ and if $\Delta_X \notin {{B}^{?}_{n}}[x_k]$, since the set ${\mathcal{S}}_n$ is closed under ${\vee_{\!\text{l}}}$, there must exist some generator $\sigma \in X \setminus {{B}^{?}_{n}}[x_k]$. Hence, we have $x_1 \to \ldots \to x_k \to \sigma$, and therefore ${\tau}(x \cdot \Delta_X) \geq {\tau}(x \cdot \sigma) = k+1$, i.e. $\Delta_X \notin {{B}^{?}_{n}}[x]$.
\[cor:cor\] Let $f:{{B}^{?}_{n}}\to\bbR$ be the function defined by $f(x)=p^{|x|}$. Then the graded Möbius transform $h:{{B}^{?}_{n}}\to\bbR$ of $f$ satisfies the following property: $$\begin{gathered}
\label{eq:mobtransf}
h(x)=p^{|x_1|+\ldots+|x_{k-1}|} h(x_k)\,,
\end{gathered}$$ where $(x_1,\ldots,x_k)$ is the Garside decomposition of $x$.
It follows directly from the definition of the graded Möbius transform, together with Lemma \[lem:3b\].
Uniform measures on braid monoids {#sec:unif-meas-braid}
=================================
We are now equipped with adequate tools to study uniform measures on braids. Consider the following (vague) questions: how can we pick a braid uniformly at random? How can we pick a large braid uniformly at random? What are the characteristics of such random braids?
Since there are countably many braids, these questions cannot immediately be given a consistent meaning. However, for each fixed integer $k\geq0$, there are finitely many braids of size $k$, and it is thus meaningful to pick uniformly a braid of size $k$ at random. Please notice the difference between picking a braid of size $k$ uniformly at random, and picking a word uniformly in $\Sigma^k$ and then considering the braid it induces. The latter corresponds to the uniform random walk on ${{B}^{?}_{n}}$, but not the former.
A possible way of picking a braid at random is thus the following: first pick the size $k$ at random, and then pick a braid uniformly among those of size $k$. The problem remains of how to draw $k$ in a “natural” way. It is the topic of this section to demonstrate that there is indeed a natural family, indexed by a real parameter $p$, of ways of conducting this random procedure. Furthermore, the parameter $p$ is bound to vary in the interval $(0,q_n)$, where $q_n$ is the root of $H_n(t)$ introduced earlier; and letting $p$ tend to $q_n$ from below, the induced distributions give more and more weight to large braids, so that in the limit we obtain a natural uniform measure on “infinite braids”. In turn, we shall derive in the next section information on large random braids, that is to say, on random braids of size $k$ when $k$ is large enough, based on the notion of uniform measure on infinite braids.
Generalized braids {#subsubsec:gen_braids}
------------------
Considering the extended Garside decomposition of braids, one sees that elements of ${{B}^{?}_{n}}$ are in bijection with *infinite paths in $({\mathcal{S}}_n, \to)$ that eventually hit ${\textbf{e}}$*. Therefore, it is natural to define a compactification ${\overline{{{B}^{?}_{n}}}}$ as the set of all infinite paths in this graph. As a subset of ${\mathcal{S}}_n^{\bbN^*}$, it is endowed with a canonical topology, for which it is compact. Moreover, the restriction of this topology to ${{B}^{?}_{n}}$ is the discrete topology, and ${\overline{{{B}^{?}_{n}}}}$ is the closure of ${{B}^{?}_{n}}$. This is the set of *generalized braids*. We endow the set ${\overline{{{B}^{?}_{n}}}}$ with its Borel $\sigma$-algebra. All measures we shall consider on ${\overline{{{B}^{?}_{n}}}}$ will be finite Borel measures.
We may refer to elements of ${{B}^{?}_{n}}$ as *finite braids*, to emphasize their status as elements of ${\overline{{{B}^{?}_{n}}}}$. We define the *boundary* ${\partial{{B}^{?}_{n}}}$ by: $${\partial{{B}^{?}_{n}}}={\overline{{{B}^{?}_{n}}}}\setminus{{B}^{?}_{n}}\,.$$ Elements of ${\partial{{B}^{?}_{n}}}$ correspond to infinite paths in $({\mathcal{S}}_n,\to)$ that never hit ${\textbf{e}}$; we may thus think of them as *infinite braids*.
If $(x_1,\ldots,x_p)$ is a finite path in the graph $({\mathcal{S}}_n,\to)$, the corresponding cylinder set ${\mathcal{D}}_{(x_1,\ldots,x_p)}$ is defined as the set of paths starting with vertices $(x_1,\ldots,x_p)$. Cylinder sets are both open and closed, and they generate the topology on ${\overline{{{B}^{?}_{n}}}}$.
\[def:2\] For $x\in{{B}^{?}_{n}}$ of Garside decomposition $(x_1,\ldots,x_p)$, we define the *Garside cylinder*, and we denote by ${\mathcal{C}}_x$, the cylinder subset of ${\overline{{{B}^{?}_{n}}}}$ given by ${\mathcal{C}}_x={\mathcal{D}}_{(x_1,\ldots,x_p)}$.
Garside cylinders account only for those cylinder sets of the form $D={\mathcal{D}}_{(x_1,\ldots,x_p)}$ with $x_p\neq{\textbf{e}}$. But if $x_p={\textbf{e}}$, then $D$ reduces to the singleton $\{x\}$, with $x=x_1\cdot\ldots\cdot x_p$. And then, denoting by $q$ the greatest integer with $x_q\neq{\textbf{e}}$ and writing $y=x_1\cdot\ldots\cdot x_q$, one has: $$\{x\}={\mathcal{C}}_{y}\setminus\bigcup_{z\in{\mathcal{S}}_n\setminus\{{\textbf{e}}\}{\;:\;}x_q\to z}{\mathcal{C}}_{y\cdot z}\,.$$ It follows that *Garside cylinders generate the topology on ${\overline{{{B}^{?}_{n}}}}$*, which implies the following result.
\[prop:5\] Any finite measure on the space ${\overline{{{B}^{?}_{n}}}}$ of generalized braids is entirely determined by its values on Garside cylinders. In other words, if $\nu$ and $\nu'$ are two finite measures on ${\overline{{{B}^{?}_{n}}}}$ such that $\nu({\mathcal{C}}_x)=\nu'({\mathcal{C}}_x)$ for all $x\in{{B}^{?}_{n}}$, then $\nu=\nu'$.
We have already seen that Garside cylinders generate the topology, and thus the Borel $\sigma$-algebra of ${\overline{{{B}^{?}_{n}}}}$. The collection of Garside cylinders, augmented with the empty set, is obviously stable under intersection: $$\forall x,y\in{{B}^{?}_{n}}\quad \text{ either }{\mathcal{C}}_x \subseteq {\mathcal{C}}_y
\text{ or } {\mathcal{C}}_x \supseteq {\mathcal{C}}_y \text{ or } {\mathcal{C}}_x \cap {\mathcal{C}}_y = \emptyset\,,$$ and thus forms a so-called $\pi$-system. The result follows from classical measure theory [@billingsley95 Th. 3.3].
Garside cylinders are very natural from the point of view of the normal forms; however, they are somewhat unnatural from the algebraic point of view, as they discard most of the divisibility information (*cf.* Remark \[rem:1\]). A more natural notion is that of *visual cylinder*, which corresponds, for a given finite braid $x\in{{B}^{?}_{n}}$, to the subset of those generalized braids which are “left divisible” by $x$. It will be useful to differentiate between generalized braids and infinite braids; we therefore introduce both the *full visual cylinder ${\,\Uparrow x}$* and the *visual cylinder ${\,\uparrow x}$*, as follows: $$\begin{aligned}
{\,\Uparrow x}&=\text{Closure}\bigl(\{x\cdot z\ :\ z\in{{B}^{?}_{n}}\}\bigr)\,,&
{\,\uparrow x}&={\,\Uparrow x}\cap{\partial{{B}^{?}_{n}}}\,,\end{aligned}$$ where $\text{Closure}(A)$ denotes the topological closure of the set $A$.
The relationship between Garside cylinders and visual cylinders is given by the following result.
\[lem:full-visual-is-union-of-garside\] For each finite braid $x \in {{B}^{?}_{n}}$, the full visual cylinder ${\,\Uparrow x}$ is the following disjoint union of Garside cylinders: $$\label{eq:defUp}
{\,\Uparrow x} = \bigcup_{y \in {{B}^{?}_{n}}[x]} {\mathcal{C}}_{x \cdot y}.$$
We first observe that ${\,\Uparrow x} \cap {{B}^{?}_{n}} = \bigcup_{y \in {{B}^{?}_{n}}[x]} ({{B}^{?}_{n}} \cap {\mathcal{C}}_{x
\cdot y})$. Indeed, the $\supseteq$ inclusion is obvious, while the converse one is a consequence of points \[pro:garside-2\] and \[pro:garside-3\] of Proposition \[proposition:dehornoy2013foundations\]. Since ${\,\Uparrow x}$ and $\bigcup_{y \in {{B}^{?}_{n}}[x]} {\mathcal{C}}_{x \cdot y}$ are the respective topological closures of ${\,\Uparrow x} \cap {{B}^{?}_{n}}$ and of $\bigcup_{y \in {{B}^{?}_{n}}[x]} ({{B}^{?}_{n}} \cap {\mathcal{C}}_{x \cdot y})$ in ${\overline{{{B}^{?}_{n}}}}$, the result follows.
Hence, ${\,\Uparrow x}$, as a finite union of Garside cylinders, is also open and closed in ${\overline{{{B}^{?}_{n}}}}$. In the same way, ${\,\uparrow x}$ is open and closed in ${\partial{{B}^{?}_{n}}}$.
Studying finite measures on generalized braids via the graded Möbius transform {#subsec:measures}
------------------------------------------------------------------------------
In this subsection, we study finite measures on the set ${\overline{{{B}^{?}_{n}}}}$ of generalized braids.
Assume that $\nu$ is some finite measure on ${\overline{{{B}^{?}_{n}}}}$. Then for practical purposes, we are mostly interested in the values of $\nu$ on Garside cylinders $\nu({\mathcal{C}}_x)$. However, most natural measures will enjoy good properties with respect to the full visual cylinders ${\,\Uparrow x}$, which is not surprising as these sets are most natural from the point of view of divisibility properties. For instance, the limits $\nu$ of uniform measures on the set of braids of length $k$ will satisfy $\nu({\,\Uparrow x}) = p^{{|x|}}$ for some $p$, see Definition \[def:1\] and Theorem \[thr:9\] below.
Henceforth, to understand these measures, we need to relate $\nu({\mathcal{C}}_x)$ and $\nu({\,\Uparrow x})$ in an explicit way, and this is where the graded Möbius transform of Subsection \[sec:grade-mobi-transf\] plays a key role, as shown by Proposition \[cor:1\] below. In turn, Proposition \[cor:1\] provides a nice probabilistic interpretation of the graded Möbius transform.
\[cor:1\] Let $\nu$ be a finite measure on ${\overline{{{B}^{?}_{n}}}}$. Let $f:{{B}^{?}_{n}}\to\bbR$ be defined by $f(x)=\nu({\,\Uparrow x})$, and let $h:{{B}^{?}_{n}}\to\bbR$ be the graded Möbius transform of $f$. Then, for every integer $k\geq1$ and every finite braid $y$ of height $k$, the following holds: $$\begin{gathered}
\label{eq:16}
\nu({\mathcal{C}}_y)=h(y).\end{gathered}$$
The decomposition of a full visual cylinder as a disjoint union of Garside cylinders shows that $$\nu({\,\Uparrow x}) = \sum_{y\in {{B}^{?}_{n}}[x]} \nu({\mathcal{C}}_{x\cdot y}).$$ Thus, the characterization of the graded Möbius transforms shows that $y\mapsto \nu({\mathcal{C}}_y)$ is the graded Möbius transform of $x \mapsto
\nu({\,\Uparrow x})$, as claimed.
\[cor:3\] A finite measure $\nu$ on ${\overline{{{B}^{?}_{n}}}}$ is entirely determined by its values $\nu({\,\Uparrow x})$ on the countable collection of full visual cylinders.
According to Proposition \[prop:5\], a finite measure $\nu$ is entirely determined by its values on Garside cylinders. Hence the result follows from Proposition \[cor:1\].
Uniform measures {#sec:uniform-measures}
----------------
Our ultimate goal is to understand the uniform measure $\mu_{n,k}$ on braids in ${{B}^{?}_{n}}$ of a given length $k$, when $k$ tends to infinity. We will see below in Theorem \[thr:9\] that this sequence of measures converges to a measure on ${\partial{{B}^{?}_{n}}}$ which behaves nicely on the visual cylinders ${\,\uparrow x}$ (this is not surprising as these are the natural objects from the point of view of the monoid structure on ${{B}^{?}_{n}}$).
Therefore, it is good methodology to study the general class of measures which do behave nicely on *full* visual cylinders. Our usual conventions and notations are in force throughout this subsection, in particular concerning ${{B}^{?}_{n}}$ which may be either ${{B}^{+}_{n}}$ or ${{B}^{+*}_{n}}$.
\[def:1\] A *uniform measure for braids* of parameter $p\geq0$ is a probability measure $\nu_p$ on ${\overline{{{B}^{?}_{n}}}}$ satisfying: $$\begin{gathered}
\forall x\in{{B}^{?}_{n}}\quad \nu_p({\,\Uparrow x})=p^{{|x|}}\,.\end{gathered}$$
Although not apparent from this definition, we will see in Theorem \[thr:1\] below that such a measure weights either ${{B}^{?}_{n}}$ or ${\partial{{B}^{?}_{n}}}$, but not both. Theorem \[thr:1\] will describe quite precisely all uniform measures. It will allow us to define the *uniform measure at infinity* as the unique nontrivial uniform measure supported by the boundary ${\partial{{B}^{?}_{n}}}$, see Definition \[def:7\]. A realization result for uniform measures will be the topic of Subsection \[sec:mark-chain-real\].
Before coming to the theorem, we state a key lemma.
\[lem:3\] Let $\nu$ be a uniform measure of parameter $p<1$. Assume that $\nu$ is concentrated at infinity, *i.e.*, $\nu({\partial{{B}^{?}_{n}}})=1$. Then $H_n(p)=0$.
Furthermore, let $B=(B_{x,x'})$ be the non-negative matrix indexed by pairs of simple braids $(x,x')$ such that $x,x'\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$, and defined by: $$B_{x,x'}={\mathbf{1}\bm(x\to x'\bm)}p^{{|x'|}}\,.$$ Then $B$ is a primitive matrix of spectral radius $1$. The Perron right eigenvector of $B$ is the restriction to ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$ of the vector $g$ defined in .
Let $f(x)=p^{{|x|}}$, and let $h:{\mathcal{S}}_n\to\bbR$ be the graded Möbius transform of $f$.
According to Proposition \[cor:1\], we have $h({\textbf{e}})=\nu({\mathcal{C}}_{\textbf{e}})=\nu(\{{\textbf{e}}\})$. Since it is assumed that $\nu({{B}^{?}_{n}})=0$, it follows that $h({\textbf{e}})=0$. But $H_n(p)=h({\textbf{e}})$, as previously stated in , hence $H_n(p)=0$, proving the first claim of the lemma.
Let $g$ be defined on ${\mathcal{S}}_n$ as in , and let $\widetilde g$ be the restriction of $g$ to ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$. It follows from Lemma \[lem:2\] that $h(x)=p^{{|x|}}g(x)$ holds on ${\mathcal{S}}_n$. Therefore the computation of $B\widetilde g$ goes as follows, for $x\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$: $$(B\widetilde g)_{x} =\sum_{y\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}}{\mathbf{1}\bm(x \to y\bm)}p^{{|y|}}g(y)=\sum_{y \in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}}{\mathbf{1}\bm(x \to y\bm)}h(y).$$
But $h({\textbf{e}})=0$ on the one hand; and on the other hand, $x\to\Delta$ does not hold since $x\neq\Delta$. Hence the above equality rewrites as: $$(B\widetilde g)_x =\sum_{y\in{\mathcal{S}}_n} {\mathbf{1}\bm(x \to y\bm)} h(y)=\widetilde g(x).$$
We have proved that $\widetilde g$ is right invariant for $B$. Let us prove that $\widetilde g$ is non identically zero. Observe that $h$ is non-negative, as a consequence of Proposition \[cor:1\]. Therefore $g$ is non-negative as well. If $\widetilde g$ were identically zero on ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$, so would be $h$ on ${\mathcal{S}}_n\setminus\{\Delta\}$. The Möbius summation formula would imply that $f$ is constant, equal to $f(\Delta)$ on ${\mathcal{S}}_n$, which is not the case since we assumed $p\neq1$. Hence $\widetilde g$ is not identically zero.
But $B$ is also aperiodic and irreducible, hence primitive. Therefore Perron-Frobenius theory [@seneta81 Chapter 1] implies that $\widetilde
g$ is actually *the* right Perron eigenvector of $B$, and $B$ is thus of spectral radius $1$.
\[thr:1\] For each braid monoid ${{B}^{?}_{n}}$, uniform measures $\nu_p$ on ${\overline{{{B}^{?}_{n}}}}$ are parametrized by the parameter $p$ ranging exactly over the closed set of reals $[0,q_n]\cup\{1\}$.
1. \[item:4\] For $p=0$, $\nu_0$ is the Dirac measure at ${\textbf{e}}$.
2. \[item:1\] For $p=1$, $\nu_1$ is the Dirac measure on the element $\Delta^\infty$ defined by its infinite Garside decomposition: $(\Delta\cdot\Delta\cdot\ldots)$.
3. \[item:6\] For $p\in(0,q_n)$, the support of $\nu_p$ is ${{B}^{?}_{n}}$, and it is equivalently characterized by: $$\begin{aligned}
\label{eq:13}
\nu_p\bigl(\{x\}\bigr)&=H_n(p)\cdot p^{{|x|}}&&\text{or}&
\nu_p({\,\Uparrow x})&=p^{{|x|}}
\end{aligned}$$ for $x$ ranging over ${{B}^{?}_{n}}$.
4. \[item:7\] For $p=q_n$, the support of $\nu_{q_n}$ is ${\partial{{B}^{?}_{n}}}$, and it is characterized by: $$\begin{gathered}
\label{eq:14}
\forall x\in{{B}^{?}_{n}}\quad\nu_{q_n}({\,\uparrow x})=q_n^{{|x|}}\,.
\end{gathered}$$
It follows from this statement that, except for the degenerate measure $\nu_1$, there exists a unique uniform measure on the boundary ${\partial{{B}^{?}_{n}}}$. It is thus natural to introduce the following definition.
\[def:7\] The uniform measure on ${\partial{{B}^{?}_{n}}}$ which is characterized by $\nu_{q_n}({\,\uparrow x})=q_n^{{|x|}}$ for $x\in{{B}^{?}_{n}}$, is called the *uniform measure at infinity*.
Statements \[item:4\]–\[item:7\] actually contain three parts: the existence of $\nu_p$ for each $p\in[0,q_n]\cup\{1\}$, the uniqueness of the measures satisfying the stated characterizations, and the fact that $[0,q_n]\cup\{1\}$ is the only possible range for $p$.
*Existence and uniqueness of $\nu_p$ for $p\in[0,q_n]\cup\{1\}$.* The cases $p=0$ and $p=1$ (points \[item:4\] and \[item:1\]) are trivial. For $p\in(0,q_n)$ (point \[item:6\]), let $\nu_p$ be the discrete distribution on ${{B}^{?}_{n}}$ defined by the left hand side of . Since $p<q_n$, the series $G_n(p)$ is convergent, which implies that the following identity is valid in the field of real numbers: $$G_n(p)\cdot H_n(p)=1\,.$$ In particular $H_n(p)>0$, and thus: $$\sum_{x\in{{B}^{?}_{n}}}\nu_p\bigl(\{x\}\bigr)=1\,,$$ and therefore $\nu_p$ is a probability distribution on ${{B}^{?}_{n}}$.
It remains to prove that $\nu_p$ is indeed uniform with parameter $p$. Since ${{B}^{?}_{n}}$ is left cancellative, we notice that, for each $x\in{{B}^{?}_{n}}$, the mapping $y\in{{B}^{?}_{n}}\mapsto x\cdot y$ is a bijection of ${{B}^{?}_{n}}$ onto ${\,\Uparrow x}\,\cap{{B}^{?}_{n}}$. Whence: $$\begin{aligned}
\nu_p({\,\Uparrow x})&=H_n(p)\cdot p^{{|x|}}\cdot\Bigl(\sum_{y\in{{B}^{?}_{n}}}p^{{|y|}}\Bigr)=p^{{|x|}}\,.\end{aligned}$$
Conversely, if $\nu$ is a probability measure on ${\overline{{{B}^{?}_{n}}}}$ such that $\nu({\,\Uparrow x})=p^{{|x|}}$, then $\nu$ and $\nu_p$ agree on full visual cylinders, hence $\nu=\nu_p$ according to Corollary \[cor:3\].
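The normalization just established can be checked numerically for ${{B}^{+}_{3}}$. The sketch below is our own illustration; it assumes, from the growth-series computations earlier in the paper, the Möbius polynomial $H_3(t)=1-2t+t^3$ and the induced recurrence $\lambda_3(k)=2\lambda_3(k-1)-\lambda_3(k-3)$ for the number of braids of size $k$, and evaluates $H_3(p)\sum_k\lambda_3(k)p^k$ for a parameter $p<q_3$.

```python
def lambda3(kmax):
    """Number of braids of B3^+ of size k, for k = 0..kmax, from the
    recurrence implied by G3(t) * H3(t) = 1 with H3(t) = 1 - 2t + t^3."""
    seq = [1, 2, 4]                  # e; sigma1, sigma2; the 4 braids of size 2
    for k in range(3, kmax + 1):
        seq.append(2 * seq[-1] - seq[-3])
    return seq[:kmax + 1]

p = 0.4                              # any parameter in (0, q3), q3 ~ 0.618
H = 1 - 2 * p + p ** 3               # H3(p)
total = H * sum(l * p ** k for k, l in enumerate(lambda3(200)))
print(round(total, 9))               # → 1.0, i.e. nu_p is a probability
```

The truncation at $k=200$ is harmless since the tail of the series decays geometrically at rate $p/q_3<1$.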
We now treat the case of point \[item:7\], corresponding to $p=q_n$. For this, let $(p_j)_{j\geq1}$ be any sequence of reals $p_j<q_n$ such that $\lim_{j\to\infty}p_j=q_n$, and such that $(\nu_{p_j})_{j\geq1}$ is a weakly convergent sequence of probability measures. Such a sequence exists since ${\overline{{{B}^{?}_{n}}}}$ is a compact metric space. Let $\nu$ be the weak limit of $(\nu_{p_j})_{j\geq1}$.
Obviously, for each braid $x$ fixed: $$\begin{aligned}
\lim_{j\to\infty} \nu_{p_j}({\,\Uparrow x})=q_n^{{|x|}}\,.\end{aligned}$$ But ${\,\Uparrow x}$ is both open and closed in ${\overline{{{B}^{?}_{n}}}}$, hence it has an empty topological boundary. The portmanteau theorem [@billingsley95] implies that the above limit coincides with $\nu({\,\Uparrow x})$, hence $\nu({\,\Uparrow x})=q_n^{{|x|}}$ for all $x\in{{B}^{?}_{n}}$. The same reasoning applied to every singleton $\{x\}$, for $x\in{{B}^{?}_{n}}$, yields: $$\begin{aligned}
\nu(\{x\})=\lim_{j\to\infty}\nu_{p_j}(\{x\})=\lim_{j\to\infty}\frac{p_j^{{|x|}}}{G_n(p_j)}=0\,,\end{aligned}$$ the latter equality holding since $\lim_{t\to q_n^-}G_n(t)=+\infty$. Since ${{B}^{?}_{n}}$ is countable, it follows that $\nu$ puts weight on ${\partial{{B}^{?}_{n}}}$ only, and thus: $$\forall x\in{{B}^{?}_{n}}\quad\nu({\,\uparrow x})=\nu({\,\Uparrow x})=q_n^{{|x|}}\,.$$
If $\nu'$ is a probability measure on ${\overline{{{B}^{?}_{n}}}}$ satisfying $\nu'({\,\uparrow x})=q_n^{{|x|}}$ for every $x\in{{B}^{?}_{n}}$, then we observe first that $\nu'$ is concentrated on the boundary, since $\nu'({\partial{{B}^{?}_{n}}})=\nu'({\,\uparrow {\textbf{e}}})=1$. And since $\nu$ and $\nu'$ coincide on all visual cylinders ${\,\uparrow x}$, for $x$ ranging over ${{B}^{?}_{n}}$, it follows from Corollary \[cor:3\] that $\nu=\nu'$.
*Range of $p$.* It remains only to prove that, if $\nu$ is a uniform probability measure on ${\overline{{{B}^{?}_{n}}}}$ of parameter $p$, then $p=1$ or $p\leq
q_n$. Seeking a contradiction, assume on the contrary that $p>q_n$ and $p\neq1$ holds.
We first show the following claim: $$\begin{gathered}
\label{eq:23}
\nu({\partial{{B}^{?}_{n}}})=1\,.\end{gathered}$$
Assume on the contrary $\nu({\partial{{B}^{?}_{n}}})<1$, hence $\nu({{B}^{?}_{n}})>0$. Then we claim that the inclusion-exclusion principle yields: $$\begin{gathered}
\label{eq:20}
\forall x\in{{B}^{?}_{n}}\quad\nu\bigl(\{x\}\bigr)=H_n(p)\cdot p^{{|x|}}\,.\end{gathered}$$
Indeed, for any braid $x\in{{B}^{?}_{n}}$, the singleton $\{x\}$ decomposes as: $$\{x\}={\,\Uparrow x}\setminus\bigcup_{\sigma \in \Sigma}{\,\Uparrow (}x\cdot\sigma)$$ and therefore: $$\nu\left(\{x\}\right) = \sum_{I \subseteq \Sigma} (-1)^{|I|} \nu \Bigl(\bigcap_{\sigma \in I} {\,\Uparrow (}x \cdot \sigma)\Bigr)
= \sum_{I \subseteq \Sigma} (-1)^{|I|} \nu \bigl({\,\Uparrow (}x \cdot \Delta_I)\bigr)\,.$$
Note that the above equality is valid for any finite measure on ${\overline{{{B}^{?}_{n}}}}$. Since $\nu$ is assumed to be uniform of parameter $p$, it specializes to the following: $$\nu\left(\{x\}\right)=\sum_{I\subseteq\Sigma}(-1)^{|I|}p^{{|x|}+{|\Delta_I|}}=p^{{|x|}}\cdot H_n(p)\,,$$ given the form (\[eq:2\]) for $H_n(p)$. This proves our claim (\[eq:20\]).
Together with $\nu({{B}^{?}_{n}})>0$, this implies $H_n(p)>0$. Consequently, summing up $\nu(\{x\})$ for $x$ ranging over ${{B}^{?}_{n}}$ yields $G_n(p)<\infty$. Hence $p<q_n$, which is a contradiction since we assumed $p>q_n$. This proves the claim .
Next, consider the two matrices $B$ and $B'$ indexed by all braids $x \in
{\mathcal{S}}_n \setminus \{{\textbf{e}},\Delta\}$ and defined by: $$B_{x,x'}={\mathbf{1}\bm(x\to x'\bm)}p^{{|x'|}} \quad\text{and}\quad B'_{x,x'}={\mathbf{1}\bm(x\to x'\bm)}q_n^{{|x'|}}.$$ They are both non-negative and primitive, and of spectral radius $1$ according to Lemma \[lem:3\] (which applies since $p\neq1$ by assumption). According to Perron-Frobenius theory [@seneta81 Chapter 1], there cannot exist a strict ordering relation between two primitive matrices with the same spectral radius. Yet $p>q_n$ implies $B_{x,x'}>B'_{x,x'}$ on every nonzero entry, which is such a strict ordering; hence $p>q_n$ is impossible. The proof is complete.
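The spectral statement of Lemma \[lem:3\] can be observed numerically for ${{B}^{+}_{3}}$ at $p=q_3=(\sqrt5-1)/2$. The sketch below is our own illustration: the state names are shorthand for $\sigma_1,\sigma_2,\sigma_1\sigma_2,\sigma_2\sigma_1$, and the adjacency lists encode the normality graph on ${\mathcal{S}}_3\setminus\{{\textbf{e}},\Delta\}$, read off the nonzero pattern of the transition matrix in the running example for $n=3$. The spectral radius of $B$ is estimated by power iteration.

```python
q = (5 ** 0.5 - 1) / 2                  # q3 for B3^+
# normality graph on S3 \ {e, Delta}: x -> y iff (x, y) is a normal pair
succ = {"s1": ["s1", "s12"], "s2": ["s2", "s21"],
        "s12": ["s2", "s21"], "s21": ["s1", "s12"]}
size = {"s1": 1, "s2": 1, "s12": 2, "s21": 2}
states = ["s1", "s2", "s12", "s21"]

# B_{x,x'} = 1(x -> x') * q^{|x'|}
B = [[q ** size[y] if y in succ[x] else 0.0 for y in states] for x in states]

v = [1.0, 2.0, 3.0, 4.0]                # arbitrary positive start vector
for _ in range(200):                    # power iteration
    w = [sum(B[i][j] * v[j] for j in range(4)) for i in range(4)]
    rho = max(w)
    v = [x / rho for x in w]
print(round(rho, 9))                    # → 1.0, the spectral radius
```

As predicted, the Perron eigenvector `v` is strictly positive; here it is in fact constant, consistent with $g$ taking the same value on these four simple braids.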
\[rem:7\]
Since the length of braids is additive, any uniform measure is multiplicative, *i.e.*, it satisfies: $\nu_p({\,\Uparrow (}x\cdot y))=\nu_p({\,\Uparrow x})\cdot\nu_p({\,\Uparrow y})$.
Conversely, assume that $\nu$ is a multiplicative probability measure on ${\overline{{{B}^{?}_{n}}}}$. Then $\nu$ is entirely determined by the values $p_\sigma=\nu({\,\Uparrow \sigma})$ for $\sigma \in \Sigma$.
If ${{B}^{?}_{n}} = {{B}^{+}_{n}}$, let us write $p_i$ instead of $p_{\sigma_i}$. The braid relations $\sigma_i\cdot\sigma_{i+1}\cdot\sigma_i =
\sigma_{i+1}\cdot\sigma_i\cdot\sigma_{i+1}$ entail: $p_i
p_{i+1}(p_i-p_{i+1})=0$. Hence, if any two consecutive $p_i,p_{i+1}$ are nonzero, they must be equal. Removing the generators $\sigma_i$ for which $p_i=0$, the braid monoid splits into a direct product of sub-braid monoids, each one equipped with a uniform measure.
Similarly, if ${{B}^{?}_{n}} = {{B}^{+*}_{n}}$, let us write $p_{i,j}$ instead of $p_{\sigma_i,\sigma_j}$. Then the dual braid relations $\sigma_{i,j} \cdot \sigma_{j,k} = \sigma_{j,k} \cdot
\sigma_{k,i}=\sigma_{k,i}\cdot\sigma_{i,j}$ (if $i < j < k$) yield the following three relations: $p_{j,k}(p_{i,j}-p_{k,i})= 0$, $p_{i,k}(p_{i,j}-p_{j,k})=0$ and $p_{i,j}(p_{j,k}-p_{i,k})=0$. Therefore the following implication holds for all $i<j<k$: $(p_{i,j}>0\wedge p_{j,k}>0)\implies p_{i,j}=p_{j,k}=p_{i,k}$. Removing the generators $\sigma_{i,j}$ for which $p_{i,j} = 0$, the dual braid monoid splits thus into a direct product of sub-dual braid monoids, each one equipped with a uniform measure.
Therefore, without loss of generality, the study of multiplicative measures for braid monoids reduces to the study of uniform measures. This contrasts with other kinds of monoids, such as heap monoids, see [@abbes15b] and the discussion in Section \[se-ext\].
Markov chain realization of uniform measures {#sec:mark-chain-real}
--------------------------------------------
Recall that generalized braids $\xi\in{\overline{{{B}^{?}_{n}}}}$ are given by infinite sequences of linked vertices in the graph $({\mathcal{S}}_n,\to)$. For each integer $k\geq1$, let $X_k(\xi)$ denote the $k^{\text{th}}$ vertex appearing in an infinite path $\xi\in{\overline{{{B}^{?}_{n}}}}$. This defines a sequence of measurable mappings $$X_k:{\overline{{{B}^{?}_{n}}}} \to{\mathcal{S}}_n\,,$$ which we may interpret as random variables when equipping ${\overline{{{B}^{?}_{n}}}}$ with a probability measure, say for instance a uniform measure $\nu_p$.
It turns out that, under any uniform measure $\nu_p$, the process $(X_k)_{k\geq1}$ has a quite simple form, namely that of a Markov chain. This *realization result* is the topic of the following theorem (the trivial cases $p=0$ and $p=1$ are excluded from the discussion).
\[thr:2\] Let $p\in(0,q_n]$, and let $\nu_p$ be the uniform measure of parameter $p$ on ${\overline{{{B}^{?}_{n}}}}$. Let $h:{\mathcal{S}}_n\to\bbR$ be the Möbius transform of $x\in{\mathcal{S}}_n\mapsto p^{{|x|}}$.
1. \[item:8\] Under $\nu_p$, the process $(X_k)_{k\geq1}$ of simple braids is a Markov chain, taking its values in ${\mathcal{S}}_n$ if $p<q_n$, and in ${\mathcal{S}}_n\setminus\{{\textbf{e}}\}$ if $p=q_n$.
2. \[item:5\] The initial measure of the chain coincides with $h$, which is a probability distribution on ${\mathcal{S}}_n$. The initial distribution puts positive weight on every non-unit simple braid, and on the unit ${\textbf{e}}$ if and only if $p<q_n$.
3. \[item:9\] The transition matrix $P$ of the chain $(X_k)_{k\geq1}$ is the following: $$\begin{aligned}
\label{eq:18}
P_{x,x'}&={\mathbf{1}\bm(x\to x'\bm)}p^{{|x|}}\frac{h(x')}{h(x)},
\end{aligned}$$ where $x$ and $x'$ range over ${\mathcal{S}}_n$ for $p<q_n$ or over ${\mathcal{S}}_n\setminus\{{\textbf{e}}\}$ for $p=q_n$.
Let $f:{{B}^{?}_{n}}\to\bbR$ be defined by $f(x)=p^{{|x|}}$.
We first show that $h>0$ on ${\mathcal{S}}_n$ if $p<q_n$, and that $h>0$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}}\}$ if $p=q_n$. Obviously, it follows from Proposition \[cor:1\] that $h$ is non-negative on ${\mathcal{S}}_n$, and even on ${{B}^{?}_{n}}$.
1. *Case $p<q_n$.* Then $H_n(p)>0$ and therefore, according to Theorem \[thr:1\] and Proposition \[cor:1\], we obtain: $$h(x) = \nu_p({\mathcal{C}}_x) \geq \nu_p(\{x\}) = H_n(p) \cdot p^{{|x|}} > 0 \text{ for all } x \in {\mathcal{S}}_n,$$ which was to be shown.
2. *Case $p=q_n$*. Consider the matrix $B$ indexed by pairs $(x,x')$ of simple braids distinct from ${\textbf{e}}$ and from $\Delta$, and defined by $B_{x,x'}={\mathbf{1}\bm(x\to x'\bm)}q_n^{{|x'|}}$. According to Lemma \[lem:3\], the restriction of $g$ to ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$ is the Perron right eigenvector of $B$, where $g$ has been defined in (\[eq:17\]). Therefore $g>0$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$. But $h(x)=q_n^{{|x|}}g(x)$ holds on ${\mathcal{S}}_n$ according to Lemma \[lem:2\], therefore $h>0$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$. As for $\Delta$, one has $h(\Delta)=p^{{|\Delta|}}>0$. Hence $h>0$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}}\}$, as claimed.
It follows in particular from the above discussion that the matrix $P$ defined in the statement of the theorem is well defined. Now, let $(x_1,\ldots,x_k)$ be any sequence of simple braids (including maybe the unit braid). Let $\delta$ and $\delta'$ denote the following quantities: $$\begin{aligned}
\delta&=\nu_p(X_1=x_1,\ldots,X_k=x_k)\\
\delta'&=h(x_1)\cdot P_{x_1,x_2}\cdot\ldots\cdot P_{x_{k-1},x_k}\,.\end{aligned}$$ We prove that $\delta=\delta'$.
We observe first that both $\delta$ and $\delta'$ are zero if the sequence $(x_1,\ldots,x_k)$ is not normal. Hence, without loss of generality, we restrict our analysis to the case where $(x_1,\ldots,x_k)$ is a normal sequence of simple braids.
Consider the braid $y=x_1\cdot\ldots\cdot x_k$. By the uniqueness of the Garside normal form, the following equality holds: $$\{X_1=x_1,\ldots,X_k=x_k\}=\{X_1 \cdot\ldots\cdot X_k =y\}\,.$$ Applying successively Proposition \[cor:1\] and Corollary \[cor:cor\], we have thus: $$\delta=h(y)
=p^{{|x_1|}+\ldots+{|x_{k-1}|}}h(x_k)\,.$$ On the other hand, we have: $$\delta'=h(x_1)\cdot p^{{|x_1|}}\frac{h(x_2)}{h(x_1)}\cdot\ldots\cdot
p^{{|x_{k-1}|}}\frac{h(x_k)}{h(x_{k-1})}=p^{{|x_1|}+\ldots+{|x_{k-1}|}}h(x_k)$$ which completes the proof of the equality $\delta=\delta'$. It follows that $(X_k)_{k\geq1}$ is indeed a Markov chain with the specified initial distribution and transition matrix.
If $p=q_n$, then we have already observed in the proof of Lemma \[lem:3\] that $h({\textbf{e}})=0$. This implies both $\nu(X_1={\textbf{e}})=0$ and $P_{x,{\textbf{e}}}=0$ for all $x\in{\mathcal{S}}_n\setminus\{{\textbf{e}}\}$. We conclude that $(X_k)_{k\geq1}$ never reaches ${\textbf{e}}$, which completes the proof of point \[item:8\], and of the theorem.
#### Running examples for $n=3$. {#running-examples-for-n3.-5 .unnumbered}
We characterize the uniform measure at infinity both for ${{B}^{+}_{3}}$ and for ${{B}^{+*}_{3}}$. For this, we first determine the root of smallest modulus of the Möbius polynomial computed in Subsection \[sec:growth-series-braid\]: $q_3=(\sqrt5-1)/2$ for ${{B}^{+}_{3}}$ and $q_3=1/2$ for ${{B}^{+*}_{3}}$.
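These roots can be recovered numerically by bisection. The sketch below is our own illustration; it assumes the Möbius polynomials $H_3(t)=1-2t+t^3$ for ${{B}^{+}_{3}}$ and $1-3t+2t^2$ for ${{B}^{+*}_{3}}$ from the growth-series computations.

```python
def smallest_root(h, hi):
    """Bisection for the smallest positive root of h, assuming h(0) > 0
    and h(hi) < 0 with a single sign change on (0, hi)."""
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q3_classical = smallest_root(lambda t: 1 - 2 * t + t ** 3, hi=0.99)
q3_dual = smallest_root(lambda t: 1 - 3 * t + 2 * t ** 2, hi=0.99)
print(round(q3_classical, 6))   # → 0.618034, i.e. (sqrt(5) - 1) / 2
print(round(q3_dual, 6))        # → 0.5
```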
The Markov chain of simple braids induced by the uniform measure at infinity takes its values in ${\mathcal{S}}_3\setminus\{{\textbf{e}}\}$, which has $5$ elements for ${{B}^{+}_{3}}$ and $4$ elements for ${{B}^{+*}_{3}}$. Since the Möbius transform of $q_3^{|x|}$ is tabulated in Table \[tab:mobiustransform3\], we are in a position to compute both the initial distribution and the transition matrix of the chain by an application of Theorem \[thr:2\], yielding the results given in Table \[tab:unfimre\].
$$\begin{gathered}
\begin{array}{c}
\Delta_3\\\sigma_1\\\sigma_2\\\sigma_1\cdot\sigma_2\\\sigma_2\cdot\sigma_1
\end{array}
\begin{pmatrix}
\sqrt5-2&\sqrt5-2&\sqrt5-2&(7-3\sqrt5)/2&(7-3\sqrt5)/2\\
0&(\sqrt5-1)/2&0&(3-\sqrt5)/2&0\\
0&0&(\sqrt5-1)/2&0&(3-\sqrt5)/2\\
0&0&(\sqrt5-1)/2&0&(3-\sqrt5)/2\\
0&(\sqrt5-1)/2&0&(3-\sqrt5)/2&0
\end{pmatrix}
\\[1em]
\begin{array}{c}
\delta_3\\\sigma_{1,2}\\\sigma_{2,3}\\\sigma_{1,3}
\end{array}
\begin{pmatrix}
1/4&1/4&1/4&1/4\\
0&1/2&0&1/2\\
0&1/2&1/2&0\\
0&0&1/2&1/2
\end{pmatrix}
\end{gathered}$$
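The first matrix above can be sanity-checked, and the chain simulated, with a few lines of Python. This is our own sketch: the state order follows the row labels $\Delta_3,\sigma_1,\sigma_2,\sigma_1\cdot\sigma_2,\sigma_2\cdot\sigma_1$, and the initial distribution equals the $\Delta_3$ row, since $P_{\Delta,x'}=q_3^{|\Delta|}h(x')/h(\Delta)=h(x')$.

```python
import random

r5 = 5 ** 0.5
# transition matrix for B3^+ under the uniform measure at infinity
P = [[r5 - 2, r5 - 2, r5 - 2, (7 - 3 * r5) / 2, (7 - 3 * r5) / 2],
     [0, (r5 - 1) / 2, 0, (3 - r5) / 2, 0],
     [0, 0, (r5 - 1) / 2, 0, (3 - r5) / 2],
     [0, 0, (r5 - 1) / 2, 0, (3 - r5) / 2],
     [0, (r5 - 1) / 2, 0, (3 - r5) / 2, 0]]

for row in P:
    assert abs(sum(row) - 1) < 1e-12            # each row is stochastic
assert all(P[i][0] == 0 for i in range(1, 5))   # Delta unreachable later on

def sample_prefix(j, rng):
    """First j states of the chain (X_k); initial law = the Delta row."""
    out, dist = [], P[0]
    for _ in range(j):
        u, acc = rng.random(), 0.0
        for s, w in enumerate(dist):
            acc += w
            if u < acc:
                break
        out.append(s)
        dist = P[s]
    return out

print(sample_prefix(10, random.Random(0)))      # a 10-letter prefix
```

As expected from the zero column, $\Delta_3$ can only occur as an initial run of a sampled trajectory.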
Applications to finite uniform distributions {#sec:appl-asympt-finite}
============================================
Weak convergence of finite uniform distributions {#sec:cons-furth-quest}
------------------------------------------------
The following result states a relationship between the finite uniform distributions and the uniform measure at infinity. Even for a reader interested only in finite uniform distributions, it justifies the study of uniform measures as defined previously.
\[thr:9\] The uniform measure at infinity $\nu_{q_n}$ is the weak limit of the sequence $(\mu_{n,k})_{k\geq0}$ as $k\to\infty$, where $\mu_{n,k}$ is for each integer $k\geq0$ the uniform distribution on the finite set ${{B}^{?}_{n}}(k)$ defined by: $${{B}^{?}_{n}}(k)=\{x\in{{B}^{?}_{n}}{\;:\;}{|x|}=k\}\,.$$
Recall that ${{B}^{?}_{n}}(k)$, as a subset of ${{B}^{?}_{n}}$, is identified with its image in ${\overline{{{B}^{?}_{n}}}}$, and thus $\mu_{n,k}$ identifies with a discrete probability distribution on ${\overline{{{B}^{?}_{n}}}}$. We denote $\lambda_n(k)=\#{{B}^{?}_{n}}(k)$. For a fixed braid $x\in{{B}^{?}_{n}}$, the map $y\mapsto x\cdot y$ is a bijection between ${{B}^{?}_{n}}$ and ${\,\Uparrow x}\cap{{B}^{?}_{n}}$, a fact that we already used in the proof of Theorem \[thr:1\], point \[item:7\]. Hence, for any $k\geq |x|$, and using the asymptotics found in Corollary \[cor:6\], one has: $$\begin{aligned}
\mu_{n,k}({\,\Uparrow x})&=\frac{\lambda_n(k-{|x|})}{\lambda_n(k)}\to_{k\to\infty}q_n^{{|x|}}\,.\end{aligned}$$
Invoking the portmanteau theorem as in the proof of Theorem \[thr:1\], we deduce that any weak cluster value $\nu$ of $(\mu_{n,k})_{k\geq0}$ satisfies $\nu({\,\Uparrow x})=q_n^{{|x|}}$ for any full visual cylinder ${\,\Uparrow x}$. Theorem \[thr:1\] implies $\nu=\nu_{q_n}$. By compactness of ${\overline{{{B}^{?}_{n}}}}$, it follows that $(\mu_{n,k})_{k\geq0}$ converges to $\nu_{q_n}$.
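The key ratio $\lambda_n(k-|x|)/\lambda_n(k)\to q_n^{|x|}$ can be observed numerically for ${{B}^{+}_{3}}$ by counting normal sequences as paths in the graph $({\mathcal{S}}_3,\to)$ avoiding ${\textbf{e}}$, weighted by size. This is our own sketch: the state names and adjacency lists are read off the running example for $n=3$, with `D` standing for $\Delta_3$.

```python
SIZE = {"D": 3, "s1": 1, "s2": 1, "s12": 2, "s21": 2}
SUCC = {"D": ["D", "s1", "s2", "s12", "s21"],
        "s1": ["s1", "s12"], "s2": ["s2", "s21"],
        "s12": ["s2", "s21"], "s21": ["s1", "s12"]}

def count_braids(kmax):
    """lambda_3(k) for k = 0..kmax: braids of B3^+ of size k, counted as
    normal sequences, via dynamic programming over the last letter."""
    f = [{s: 0 for s in SIZE} for _ in range(kmax + 1)]
    for k in range(1, kmax + 1):
        for t, d in SIZE.items():
            if k == d:
                f[k][t] += 1     # the one-letter normal form (t)
            elif k > d:          # append t after any s with s -> t
                f[k][t] += sum(f[k - d][s] for s in SIZE if t in SUCC[s])
    return [1] + [sum(f[k].values()) for k in range(1, kmax + 1)]

lam = count_braids(60)
print(lam[:7])                   # → [1, 2, 4, 7, 12, 20, 33]
print(round(lam[59] / lam[60], 6))   # → 0.618034, i.e. q3
```

The convergence of the ratio is geometric, so already at $k=60$ it agrees with $q_3$ to high precision.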
A practical interest of Theorem \[thr:9\] lies in the following corollary. Define $X_i:{\overline{{{B}^{?}_{n}}}}\to{\mathcal{S}}_n$ by $X_i(\xi)=x_i$, where $(x_k)_{k\geq1}$ is the extended Garside normal form of $\xi$.
\[cor:4\] Let $j\geq1$ be an integer. Then the joint distribution of the first $j$ simple braids appearing in the extended Garside decomposition of a uniformly random braid of size $k$ converges, as $k\to\infty$, toward the joint distribution of $(X_1,\ldots,X_j)$ under the uniform measure at infinity.
By definition of the topology on ${\overline{{{B}^{?}_{n}}}}$, the mapping $\xi\in{\overline{{{B}^{?}_{n}}}}\mapsto (X_1,\ldots,X_j)$ is continuous for each integer $j\geq1$. The result follows thus from Theorem \[thr:9\].
#### Example for $n=4$. {#sec:running-example-n=3 .unnumbered}
In anticipation of the computations to be performed in Section \[se-explicit\], we depict in Figure \[fig:ranazdopsq\] the beginning of a “truly random infinite braid” on $n=4$ strands, up to height $k=10$. These are “typical first 10 elements” in the decomposition of a large random braid on four strands. Observe the absence of $\Delta$; the numerical values found in the next subsection make this quite likely.
[Figure \[fig:ranazdopsq\]: a braid diagram on $4$ strands (numbered $1$–$4$), whose crossings read as the word $\sigma_3\,\sigma_3\,\sigma_2\,\sigma_1\,\sigma_1\,\sigma_2\,\sigma_2\,\sigma_2\,\sigma_2\,\sigma_2\,\sigma_1\,\sigma_3\,\sigma_3\,\sigma_2\,\sigma_2\,\sigma_1\,\sigma_3\,\sigma_3\,\sigma_2$.]
A geometric number of $\Delta$s. {#sec:geom-numb-delt}
--------------------------------
The $\Delta$ element only appears at the beginning of normal sequences of simple braids. Accordingly, under the uniform probability measure $\nu_p$, the occurrences of $\Delta$ in the Markov chain $(X_k)_{k\geq1}$ are only observed in the first indices, if any, and their number is geometrically distributed.
This behavior is quite easy to quantify, as the probabilistic parameters associated with $\Delta$ have simple expressions: $$\begin{aligned}
\nu_p(X_1=\Delta)&=h(\Delta)=p^{{|\Delta|}}=
\begin{cases}
p^{\frac{n(n-1)}2} & \text{ if } {{B}^{?}_{n}} = {{B}^{+}_{n}} \\
p^{n-1} & \text{ if } {{B}^{?}_{n}} = {{B}^{+*}_{n}}
\end{cases}
\,,&
P_{\Delta,\Delta}&=p^{{|\Delta|}}\,.\end{aligned}$$
It follows that the number of $\Delta$s appearing in the normal form of a random braid, possibly infinite and distributed according to a uniform measure of parameter $p\in(0,q_n]$, is geometric of parameter $p^{{|\Delta|}}$.
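In brief: since $\Delta$ can only occur in an initial run, the Markov property combined with $\nu_p(X_1=\Delta)=P_{\Delta,\Delta}=p^{{|\Delta|}}$ gives, writing $T$ for the number of $\Delta$s and $r=p^{{|\Delta|}}$: $$\nu_p(T\geq k)=\nu_p(X_1=\dots=X_k=\Delta)=h(\Delta)\,P_{\Delta,\Delta}^{\,k-1}=r^k\,,$$ so that $\nu_p(T=k)=r^k(1-r)$ for every $k\geq0$.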
As a consequence of Theorem \[thr:9\], we obtain this corollary.
\[cor:2\] Let $T_k:{{B}^{?}_{n}}(k)\to\bbN$ denote the number of $\Delta$s in the Garside decomposition of a random braid of size $k$. Then $(T_k)_{k\geq0}$ converges in distribution, as $k\to\infty$, toward a geometric distribution of parameter $q_n^{\frac{n(n-1)}2}$ if ${{B}^{?}_{n}} = {{B}^{+}_{n}}$, or $q_n^{n-1}$ if ${{B}^{?}_{n}} = {{B}^{+*}_{n}}$.
Authors are sometimes only interested in the elements of the Garside decomposition of a “large” braid that appear *after* the last occurrence of $\Delta$. The notion of uniform measure at infinity also allows to derive information on these, as we shall see next.
#### Examples for $n=3$ and $n=4$.
Exact or numerical values for the parameter of the geometric distribution are easily computed for $n=3$ and for $n=4$, based on our previous computations for $n=3$ and on the computations of Section \[se-explicit\] for $n=4$; see Table \[tab:pjkaozdja\].
$$\begin{gathered}
\begin{array}{l|cc|cc|}
\multicolumn{1}{l}{}&\multicolumn{2}{c}{\text{monoid ${{B}^{+}_{n}}$}}&\multicolumn{2}{c}{\text{monoid ${{B}^{+*}_{n}}$}}\\
&\text{parameter}&\text{prob. of occ.~of $\Delta$}
&\text{parameter}&\text{prob. of occ.~of $\Delta$}\\
\cline{2-5}
n=3&\Bigl(-\frac12+\frac{\sqrt5}2\Bigr)^3\approx0.236&\approx0.309&
\frac14&\frac13\\
n=4&\approx0.0121 &\approx 0.0122&\Bigl(\frac12-\frac{\sqrt5}{10}\Bigr)^3\approx0.021&
\approx0.022
\end{array}\end{gathered}$$
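The numerical entries of Table \[tab:pjkaozdja\] can be reproduced directly. The sketch below, in Python, takes the $n=3$ values from the closed forms in the table, and computes $q_4$ for ${{B}^{+}_{4}}$ as the smallest positive root of the non-trivial factor of its Möbius polynomial, anticipating the factorization derived in Section \[se-explicit\]:

```python
import numpy as np

# Characteristic parameters q_n: the n=3 values come from the closed forms
# in the table; q_4 for the positive braid monoid is the smallest positive
# root of 1 - 2t - t^2 + t^3 + t^4 + t^5 (see the explicit computations).
q3_pos = (np.sqrt(5) - 1) / 2           # B_3^+ : parameter is q3_pos**3
q3_dual = 0.5                           # B_3^{+*}: parameter is q3_dual**2 = 1/4
roots = np.roots([1, 1, 1, -1, -2, 1])  # t^5 + t^4 + t^3 - t^2 - 2t + 1
q4_pos = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
q4_dual = 0.5 - np.sqrt(5) / 10

# Parameters of the geometric law: q_n^{n(n-1)/2} resp. q_n^{n-1}
print(round(q3_pos ** 3, 3))    # ~0.236
print(round(q3_dual ** 2, 3))   # 0.25
print(round(q4_pos ** 6, 4))    # ~0.0121
print(round(q4_dual ** 3, 3))   # ~0.021
```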
On a conjecture by Gebhardt and Tawn {#sec:gebhardt--conjecture}
------------------------------------
This subsection only deals with positive braid monoids ${{B}^{+}_{n}}$. In their *Stable region Conjecture*, the authors of [@gebhardt14] suggest the following, based on a thorough experimental analysis. For each integer $i\geq1$, let $\lambda_{i*}(\mu_k)$ be the distribution of the $i^{\text{th}}$ factor of the extended Garside normal form that occurs after the last $\Delta$, for braids drawn uniformly at random among braids of length $k$. Then two facts are suspected to hold, according to [@gebhardt14 Conjecture 3.1]:
For each integer $i\geq1$, the sequence $(\lambda_{i*}(\mu_k))_{k\geq1}$ is convergent as $k\to\infty$.
There exists a probability measure, say $\mu$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$, and a constant $C>0$ such that the following holds: $$\begin{gathered}
\label{eq:6}
\forall i>C\quad \lambda_{i*}(\mu_k)\to\mu\quad\text{as $k\to\infty$\,.}
\end{gathered}$$
Theorems \[thr:9\] and \[thr:2\] translate the problem of the limiting behavior of factors of the normal form within the familiar field of Markov chains with a finite number of states. This brings a simple way of determining the status of the above conjecture.
It follows from Theorem \[thr:9\] that the distribution of the $k^{\text{th}}$ element in the extended Garside decomposition of a random braid (including all the starting $\Delta$s), distributed uniformly among finite braids of size $k$, converges toward the distribution of the $k^{\text{th}}$ element in the extended Garside decomposition of an *infinite* braid, distributed according to the unique uniform measure at infinity. And, according to Theorem \[thr:2\], this is the distribution of a Markov chain at time $k$, with the prescribed initial distribution and transition matrix.
As for $\lambda_{i*}(\mu_k)$, it thus converges toward the distribution of the same chain, $i$ steps after it has left the state $\Delta$. Hence we can affirm *the veracity of Fact 1*. Using the notations of Theorem \[thr:2\], we may also describe the limit, say $\lambda_{i*}=\lim_{k\to\infty}\lambda_{i*}(\mu_k)$, as follows: $$\forall s\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}\quad
\lambda_{1*}(s)=\frac{h(s)}{1-{q_n}^{\frac{n(n-1)}2}}\,,$$ where the denominator comes from the conditioning on $s\neq\Delta$. The next values for $\lambda_{i*}$ are obtained recursively: $$\begin{gathered}
\label{eq:25}
\forall i\geq 1\quad \lambda_{i*}=\lambda_{1*}P^{i-1}\,,\end{gathered}$$ where $P$ is the transition matrix of the chain described in Theorem \[thr:2\].
On the contrary, *Fact 2 is incorrect.* Indeed, keeping the notation $\lambda_{i*}=\lim_{k\to\infty}\lambda_{i*}(\mu_k)$, if (\[eq:6\]) were true, then $\mu=\lambda_{i*}$, for $i$ large enough, would be the invariant measure of the Markov chain according to (\[eq:25\]). But that would imply that the chain is stationary. We prove below that this is not the case when $n> 3$. What is true, however, is that $(\lambda_{i*})_{i\geq1}$ converges toward the stationary measure of the chain when $i\to\infty$.
Assume, seeking a contradiction, that the chain $(X_k)_{k\geq1}$ is stationary. That would imply that the Möbius transform $h$ of the function $f(x)=q_n^{{|x|}}$, identified with a vector indexed by ${\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$, is left invariant for the transition matrix $P$. Hence, for $y\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}$: $$h(y)=(hP)_y=\sum_{x\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}{\;:\;}x\to y}h(x)\frac{f(x)h(y)}{h(x)}\,.$$ We deduce, since $h>0$ on ${\mathcal{S}}_n\setminus\{{\textbf{e}}\}$: $$\begin{gathered}
\label{eq:32}
\forall y\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}\quad
\sum_{x\in{\mathcal{S}}_n\setminus\{{\textbf{e}},\Delta\}{\;:\;}x\to y}q_n^{{|x|}}=1\,.\end{gathered}$$
Consider $y=\sigma_1$ and $y'=\sigma_1\cdot\sigma_2\cdot\sigma_1$. One has: $${\mathsf{L}}(y)=\{\sigma_1\}\subsetneq{\mathsf{L}}(y')=\{\sigma_1,\sigma_2\}\,,$$ which entails: $$\{x\in{\mathcal{S}}_n{\;:\;}(x\neq{\textbf{e}},\,\Delta)\wedge x\to y\}\supsetneq
\{x\in{\mathcal{S}}_n{\;:\;}(x\neq{\textbf{e}},\,\Delta)\wedge x\to y'\}\,.$$ It follows that the equality stated in (\[eq:32\]) cannot hold both for $y$ and for $y'$, which is the sought contradiction.
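The contradiction can also be checked numerically for $n=4$: reading off from Table \[tab:mobiusB4\] the predecessors of $y=\sigma_1$ and of $y'=\sigma_1\cdot\sigma_2\cdot\sigma_1$, the two sums of $q_4^{{|x|}}$ evaluate differently, so they cannot both equal $1$. A short Python sketch (the Artin lengths below are transcribed from the table):

```python
import numpy as np

# q_4: smallest positive root of the factor 1 - 2t - t^2 + t^3 + t^4 + t^5
roots = np.roots([1, 1, 1, -1, -2, 1])
q4 = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

# Artin lengths |x| of the predecessors x -> y, read off Table of Mobius data:
# predecessors of y = s1: 1, 13, 21, 121, 213, 321, 1213, 1321, 2321, 12321, 21321
len_pred_y = [1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5]
# predecessors of y' = s1 s2 s1: 121, 1321, 21321
len_pred_yp = [3, 4, 5]

s_y = sum(q4 ** l for l in len_pred_y)
s_yp = sum(q4 ** l for l in len_pred_yp)
print(round(s_y, 3), round(s_yp, 3))  # the two sums differ, so both cannot be 1
```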
Explicit computations {#se-explicit}
=====================
We gather in this section the computations needed, for $n=4$, to characterize the uniform measure at infinity, both for ${{B}^{+}_{4}}$ and for ${{B}^{+*}_{4}}$.
Computations for ${{B}^{+}_{4}}$ {#sec:computations-br-4}
--------------------------------
The monoid ${{B}^{+}_{4}}$ has the following presentation: $$\begin{gathered}
{{B}^{+}_{4}}=\bigl\langle\sigma_1,\; \sigma_2,\;\sigma_3
\ \big|\
\sigma_1\sigma_3=\sigma_3\sigma_1,\
\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2,\
\sigma_2\sigma_3\sigma_2=\sigma_3\sigma_2\sigma_3\bigr\rangle^+\,.\end{gathered}$$ In order to shorten notations, we denote a product of generators $\sigma_i$ simply by the corresponding sequence of indices. So for instance, the Garside element is denoted: $\Delta_4=123121$.
The lattice ${\mathcal{D}}_4=\{\Delta_X\ |\ X\subseteq\Sigma_4\}$ has $2^3=8$ elements, and is isomorphic to the lattice of subsets of $\{1,2,3\}$, whereas ${\mathcal{S}}_4$ has $4!=24$ elements. The Hasse diagram of ${\mathcal{S}}_4$ is depicted in Figure \[fig:hessebr4\].
[Figure \[fig:hessebr4\]: Hasse diagram of ${\mathcal{S}}_4$, with ${\textbf{e}}$ at the bottom, the generators $\sigma_1,\sigma_2,\sigma_3$ on the next level, the simple braids of Artin length $2$ to $5$ above them, and $\Delta_4$ at the top.]
In order to compute the Möbius transform $h$ of the function $f(x)=p^{|x|}$ defined on ${\mathcal{S}}_4$, we refer to the expression (\[eq:3\]): $$\begin{gathered}
h(x)=\sum_{X\subseteq\Sigma{\;:\;}x\cdot\Delta_X\in{\mathcal{S}}_4}(-1)^{|X|}f(x\cdot\Delta_X)\end{gathered}$$
Furthermore, recalling the property $x\cdot\Delta_X\in{\mathcal{S}}_4\iff
X\subseteq{\mathsf{L}}(\Delta_{\Sigma\setminus{\mathsf{R}}(x)})$ proved earlier in (\[eq:24\]), the range of those $X\subseteq\Sigma$ such that $x\cdot\Delta_X\in{\mathcal{S}}_4$ is directly derived from the knowledge of the sets ${\mathsf{L}}(y)$ and ${\mathsf{R}}(y)$. All these elements are gathered in Table \[tab:mobiusB4\].
$$\begin{array}{rclll}
{\mathsf{L}}(x)&x\in{\mathcal{S}}_4&{\mathsf{R}}(x)&\{y\in{\mathcal{S}}_4{\;:\;}x\to y\}&h(x)\\
\hline
\rule{0em}{1em}\emptyset&\fbox{${\textbf{e}}$}&\emptyset&{\textbf{e}}&1-3p+p^2+2p^3-p^6\\
1&\fbox1&1&1,12,123&p-2p^2+p^4\\
2&\fbox2&2&2,21,23,213,2132&p-2p^2+p^3\\
3&\fbox3&3&3,32,321&p-2p^2+p^4\\
1&12&2&2,21,23,213,2132&p^2-2p^3+p^4\\
3,1&\fbox{13}&1,3&1,3,12,13,32,123,132,321,1232,1321,12321&p^2-p^3\\
2&21&1&1,12,123&p^2-2p^3+p^5\\
2&23&3&3,32,321&p^2-2p^3+p^5\\
3&32&2&2,21,23,213,2132&p^2-2p^3+p^4\\
2,1&\fbox{121}&1,2&1,2,12,21,23,121,123,213,1213,2132,21323&p^3-p^4\\
1&123&3&3,32,321&p^3-2p^4+p^6\\
3,1&132&2&2,21,23,213,2132&p^3-2p^4+p^5\\
2&213&1,3&1,3,12,13,32,123,132,321,1232,1321,12321&p^3-p^4\\
3,2&\fbox{232}&2,3&2,3,21,23,32,213,232,321,2132,2321,21321&p^3-p^4\\
3&321&1&1,12,123&p^3-2p^4+p^6\\
2,1&1213&1,3&1,3,12,13,32,123,132,321,1232,1321,12321&p^4-p^5\\
3,1&1232&2,3&2,3,21,23,32,213,232,321,2132,2321,21321&p^4-p^5\\
3,1&1321&1,2&1,2,12,21,23,121,123,213,1213,2132,21323&p^4-p^5\\
2&2132&2&2,21,23,213,2132&p^4-2p^5+p^6\\
3,2&2321&1,3&1,3,12,13,32,123,132,321,1232,1321,12321&p^4-p^5\\
3,1&12321&1,3&1,3,12,13,32,123,132,321,1232,1321,12321&p^5-p^6\\
3,2&21321&1,2&1,2,12,21,23,121,123,213,1213,2132,21323&p^5-p^6\\
2,1&21323&2,3&2,3,21,23,32,213,232,321,2132,2321,21321&p^5-p^6\\
3,2,1&\fbox{$\Delta_4$}&1,2,3&{\mathcal{S}}_4&p^6
\end{array}$$
The Möbius polynomial $H_4(t)$ can be obtained, for instance, by evaluating on ${\textbf{e}}$ the Möbius transform of the function $x\mapsto t^{{|x|}}$. From the first line of Table \[tab:mobiusB4\], we read: $$H_4(t)=(1-t)(1-2t-t^2+t^3+t^4+t^5)$$
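This factorization can be double-checked against the first line of Table \[tab:mobiusB4\]; a short Python sketch multiplying out the coefficient lists:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (ascending powers)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 - t) * (1 - 2t - t^2 + t^3 + t^4 + t^5)
H4 = poly_mul([1, -1], [1, -2, -1, 1, 1, 1])
print(H4)  # [1, -3, 1, 2, 0, 0, -1], i.e. 1 - 3t + t^2 + 2t^3 - t^6,
           # matching h(e) = 1 - 3p + p^2 + 2p^3 - p^6 in the table
```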
Let $p=q_4$ be the smallest root of $H_4(t)$. We illustrate on the example $x=12$ the computation of a line $P_{x,\text{\tiny$\bullet$}}$ of the transition matrix corresponding to the uniform measure at infinity. From Table \[tab:mobiusB4\], we read the list of states $y$ for which the entry $P_{x,y}$ is nonzero, which are in this case: $2$, $21$, $23$, $213$ and $2132$. According to Theorem \[thr:2\], for $x$ fixed, the entries $P_{x,y}$ of the matrix are proportional to $h(y)$, and the normalization factor is $p^{-{|x|}}h(x)$. Reading the values of $h$ in Table \[tab:mobiusB4\], we use the relation $1-2p-p^2+p^3+p^4+p^5=0$ to write the coefficients as polynomials in $p$, yielding: $$\begin{aligned}
P_{12,2}&=p\,,
&P_{12,213}&=-1+p+2p^2+2p^3+p^4\,,\\
P_{12,21}=P_{12,23}&=p(1-2p^2-2p^3-p^4)\,,
&P_{12,2132}&=p^4 \,.\end{aligned}$$
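These polynomial expressions can be verified numerically against the direct formula $P_{x,y}=p^{{|x|}}\,h(y)/h(x)$ at $p=q_4$; a short Python sketch (values of $h$ transcribed from Table \[tab:mobiusB4\]):

```python
import numpy as np

roots = np.roots([1, 1, 1, -1, -2, 1])  # 1 - 2t - t^2 + t^3 + t^4 + t^5 = 0
p = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

# Mobius transform h, transcribed from the table, for x = 12 and its successors
h = {'12': p**2 - 2*p**3 + p**4, '2': p - 2*p**2 + p**3,
     '21': p**2 - 2*p**3 + p**5, '23': p**2 - 2*p**3 + p**5,
     '213': p**3 - p**4, '2132': p**4 - 2*p**5 + p**6}

# Direct formula P_{x,y} = p^{|x|} h(y) / h(x), with |12| = 2
direct = {y: p**2 * h[y] / h['12'] for y in ('2', '21', '23', '213', '2132')}

# Polynomial expressions obtained after reduction by the relation H_4(p) = 0
poly = {'2': p, '21': p * (1 - 2*p**2 - 2*p**3 - p**4),
        '23': p * (1 - 2*p**2 - 2*p**3 - p**4),
        '213': -1 + p + 2*p**2 + 2*p**3 + p**4, '2132': p**4}

assert all(abs(direct[y] - poly[y]) < 1e-10 for y in direct)
assert abs(sum(direct.values()) - 1) < 1e-10  # the row is stochastic
```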
Computations for ${{B}^{+*}_{4}}$ {#sec:computations-dbr-4}
---------------------------------
We now treat the case of the dual monoid ${{B}^{+*}_{4}}$. In order to simplify the notations, we write $(ij)$ for the generator $\sigma_{i,j}$, so for instance: $\delta_4=(12)\cdot(23)\cdot(34)$. The monoid ${{B}^{+*}_{4}}$ has the six generators $(ij)$ for $1\leq i<j\leq4$, subject to the following relations: $$\begin{aligned}
(12)\cdot(23)&=(23)\cdot(13)=(13)\cdot(12)
&(12)\cdot(24)&=(24)\cdot(14)=(14)\cdot(12)\\
(13)\cdot(34)&=(34)\cdot(14)=(14)\cdot(13)
&(23)\cdot(34)&=(34)\cdot(24)=(24)\cdot(23)\\
(12)\cdot(34)&=(34)\cdot(12)
&(23)\cdot(14)&=(14)\cdot(23)\end{aligned}$$
The set of simple braids ${\mathcal{S}}_4$ has $14$ elements, which we organize below according to the type of partition of the integer $4$ that the associated non-crossing partition of $\{1,2,3,4\}$ induces (see Subsection \[sec:comb-epr-simple\]): $$\begin{aligned}
{\mathcal{S}}_4=&\bigl\{{\textbf{e}},&&\text{$1$ partition of type $1+1+1+1$}\\
&\delta_4,
&&\text{$1$ partition of type $4$}\\
&(12),\;(13),\;(14),\;(23),\;(24),\;(34),&&\text{$6$ partitions of type $2+1+1$}\\
&(12)\cdot(23),\;(12)\cdot(24),\;(13)\cdot(34),\;(23)\cdot(34),
&&\text{$4$ partitions of type $3+1$}\\
&(13)\cdot(24),\;(12)\cdot(34)\bigr\}
&&\text{$2$ partitions of type $2+2$}\\\end{aligned}$$
Following the same scheme as for ${{B}^{+*}_{3}}$, we gather in Table \[tab:oijaoijaopakla\] the characteristic elements for ${{B}^{+*}_{4}}$. In particular, the first line gives the Möbius polynomial, from which the characteristic value $q_4$ is derived: $$\begin{aligned}
H_4(t)&=(1-t)(1-5t+5t^2)\,,
&q_4&=\frac12-\frac{\sqrt5}{10}\,.\end{aligned}$$
$$\begin{gathered}
\begin{array}{clll}
x\in{\mathcal{S}}_4&\{y\in{\mathcal{S}}_4{\;:\;}x\to y\}&h(x)&\rho(x)\\
\hline
\rule{0em}{1em} {\textbf{e}}&{\textbf{e}}&1-6p+10p^2-5p^3&0\\ \delta_4& {\mathcal{S}}_4 &p^3&1/5-2\sqrt5/25\\
({12})&(12),\;(13),\;(14)
&p(1-3p+2p^2)&\sqrt5/25\\ ({13})&(13),\;(14),\;(23),\;(24),\;(14)\cdot(23)
&p(1-2p+p^2)&1/10+\sqrt5/50\\ ({14})&(14),\;(24),\;(34)
&p(1-3p+2p^2)&\sqrt5/25\\ ({23})&(12),\;(23),\;(24)
&p(1-3p+2p^2)&\sqrt5/25\\ ({24})&(12),\;(13),\;(24),\;(34),\;(12)\cdot(34)
&p(1-2p+p^2)&1/10+\sqrt5/50\\ ({34})&(13),\;(23),\;(34)
&p(1-3p+2p^2)&\sqrt5/25\\ ({12})\cdot({23})&{\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(34),\;(23)\cdot(34),\;(13)\cdot(34),\;(12)\cdot(34)\bigr\}
&p^2(1-p)&1/10-\sqrt5/50\\ ({12})\cdot ({24})&{\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(23),\;(12)\cdot(23),\;(23)\cdot(34),\;(14)\cdot(23)\bigr\}
&p^2(1-p)&1/10-\sqrt5/50\\ ({23})\cdot ({34})&{\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(14),\;(12)\cdot(24),\;(13)\cdot(34),\;(14)\cdot(23)\bigr\}&p^2(1-p)&1/10-\sqrt5/50\\ ({13})\cdot ({34})&{\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(12),\;(12)\cdot(23),\;(12)\cdot(24),\;(12)\cdot(34)\bigr\}
&p^2(1-p)&1/10-\sqrt5/50\\ ({14})\cdot ({23})&
{\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(13),\;(12)\cdot(23),\;(13)\cdot(34) \bigr\}
&p^2(1-p)&1/10-\sqrt5/50\\ ({12})\cdot ({34})& {\mathcal{S}}_4\setminus\bigl\{\delta_4,\;(24),\;(12)\cdot(24),\;(23)\cdot(34) \bigr\}
&p^2(1-p)&1/10-\sqrt5/50 \end{array}\end{gathered}$$
The computation of the transition matrix of the chain of simple braids induced by the uniform measure at infinity yields the values reported in Table \[tab:ojkazaaq\], where the line corresponding to $\delta_4$ has been omitted. This line, which also corresponds to the initial measure of the chain, is given by the function $\rho(x)$ tabulated in Table \[tab:oijaoijaopakla\].
$$\begin{gathered}
\begin{aligned}
\left.\begin{array}{CCCCCCc}
({12})&
({13})&
({14})&
({23})&
({24})&
({34})&
\cdots
\end{array}\right.
\\
\begin{array}{c}
({12})\\
({13})\\
({14})\\
({23})\\
({24})\\
({34})\\
({12})\cdot({23})\\
({12})\cdot ({24})\\
({23})\cdot ({34})\\
({13})\cdot ({34})\\
({14})\cdot ({23})\\
({12})\cdot ({34})
\end{array}
\left(\begin{array}{CCCCCCc}
1/2-{\theta}&2{\theta}&1/2-{\theta}&0&0&0&\cdots\\0&1/2-{\theta}&-1/2+3{\theta}&-1/2+3{\theta}&1/2-{\theta}&0&\cdots\\0&0&1/2-{\theta}&0&2{\theta}&1/2-{\theta}&\cdots\\1/2-{\theta}&0&0&1/2-{\theta}&2{\theta}&0&\cdots\\-1/2+3{\theta}&1/2-{\theta}&0&0&1/2-{\theta}&-1/2+3{\theta}&\cdots\\0&2{\theta}&0&1/2-{\theta}&0&1/2-{\theta}&\cdots\\-1/10+{\theta}&1/5&-1/10+{\theta}&-1/10+{\theta}&1/5&0&\cdots\\-1/10+{\theta}&1/5&-1/10+{\theta}&0&1/5&-1/10+{\theta}&\cdots\\-1/10+{\theta}&1/5&0&-1/10+{\theta}&1/5&-1/10+{\theta}&\cdots\\0&1/5&-1/10+{\theta}&-1/10+{\theta}&1/5&-1/10+{\theta}&\cdots\\-1/10+{\theta}&0&-1/10+{\theta}&-1/10+{\theta}&1/5&-1/10+{\theta}&\cdots\\-1/10+{\theta}&1/5&-1/10+{\theta}&-1/10+{\theta}&0&-1/10+{\theta}&\cdots\\\end{array}\right.\\
\end{aligned}
\\[1em]
\begin{aligned}
&\left. \begin{array}{cCCCCCCc}
\cdots&
({12})\cdot({23})&
({12})\cdot ({24})&
({23})\cdot ({34})&
({13})\cdot ({34})&
({14})\cdot ({23})&
({12})\cdot ({34})
\end{array}
\right.
\\
&
\left.
\begin{array}{cCCCCCCc}
\cdots& 0&0&0&0&0&0&\\ \cdots& 0&0&0&0&1-4{\theta}&0&\\ \cdots& 0&0&0&0&0&0&\\ \cdots& 0&0&0&0&0&0&\\ \cdots& 0&0&0&0&0&1-4{\theta}&\\ \cdots& 0&0&0&0&0&0&\\ \cdots& 3/10-{\theta}&3/10-{\theta}&0&0&3/10-{\theta}&0&\\ \cdots& 0&3/10-{\theta}&0&3/10-{\theta}&0&3/10-{\theta}&\\ \cdots& 3/10-{\theta}&0&3/10-{\theta}&0&0&3/10-{\theta}&\\ \cdots& 0&0&3/10-{\theta}&3/10-{\theta}&3/10-{\theta}&0&\\ \cdots& 0&3/10-{\theta}&3/10-{\theta}&0&3/10-{\theta}&3/10-{\theta}&\\ \cdots& 3/10-{\theta}&0&0&3/10-{\theta}&3/10-{\theta}&3/10-{\theta}&\\ \end{array}
\right)
\begin{array}{c}
({12})\\
({13})\\
({14})\\
({23})\\
({24})\\
({34})\\
({12})\cdot({23})\\
({12})\cdot ({24})\\
({23})\cdot ({34})\\
({13})\cdot ({34})\\
({14})\cdot ({23})\\
({12})\cdot ({34})
\end{array}
\end{aligned}\end{gathered}$$
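As a consistency check on Tables \[tab:oijaoijaopakla\] and \[tab:ojkazaaq\]: since $f(\delta_4)=h(\delta_4)=p^3$, the line of $\delta_4$ satisfies $\rho(x)=P_{\delta_4,x}=h(x)$ at $p=q_4$, and sums to $1$. A short Python sketch verifying the tabulated closed forms:

```python
from math import sqrt

p = 0.5 - sqrt(5) / 10   # q_4 for the dual monoid B_4^{+*}

# h(x) at p = q_4, one value per class of simple braids (from the table)
h_delta = p**3                       # delta_4
h_gen_a = p * (1 - 3*p + 2*p**2)     # (12), (14), (23), (34)
h_gen_b = p * (1 - 2*p + p**2)       # (13), (24)
h_prod  = p**2 * (1 - p)             # the six simple braids of length 2

# Closed forms of rho(x) tabulated in the same table
assert abs(h_delta - (1/5 - 2*sqrt(5)/25)) < 1e-12
assert abs(h_gen_a - sqrt(5)/25) < 1e-12
assert abs(h_gen_b - (1/10 + sqrt(5)/50)) < 1e-12
assert abs(h_prod - (1/10 - sqrt(5)/50)) < 1e-12

# rho is a probability vector on S_4 \ {e}  (rho(e) = 0)
total = h_delta + 4*h_gen_a + 2*h_gen_b + 6*h_prod
assert abs(total - 1) < 1e-12
```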
Extensions {#se-ext}
==========
There are various other questions of interest concerning the asymptotic behavior of random braids, uniformly distributed among braids of length $k$. For instance, what is the asymptotic value of the height of a large braid? In other words, how do the Garside length and the Artin length compare to each other for large braids?
The height of braids gives rise to a sequence of integer random variables ${\tau}_k:{{B}^{?}_{n}}(k)\to\bbN$, indexed by $k$, where ${{B}^{?}_{n}}(k)=\{x\in{{B}^{?}_{n}}{\;:\;}{|x|}=k\}$ is equipped with the uniform distribution. Since the ratio of height to length is uniformly bounded and bounded away from zero, the correct normalization leads to considering the sequence of real random variables $\rho_k:{{B}^{?}_{n}}(k)\to\bbR$ defined by: $$\begin{gathered}
\forall x\in{{B}^{?}_{n}}(k)\qquad \rho_k(x)=\frac{{\tau}(x)}{{|x|}}=\frac{{\tau}(x)}k\,,\end{gathered}$$ which takes values in the fixed interval $[1/{|\Delta|},1]$. Since all these random variables are defined on different probability spaces, the natural way of studying their asymptotic behavior is by studying their convergence in distribution.
A first result one may wish to establish is a *concentration result*: one aims to prove that $(\rho_k)_{k\geq1}$ converges in distribution toward a single value, say $\rho$. Hence one expects a convergence in distribution of the following form, where $\delta_\rho$ denotes the Dirac probability measure on the singleton $\{\rho\}$: $$\begin{gathered}
\label{eq:27}
\frac{{\tau}(\cdot)}k\xrightarrow[k\to\infty]{\mathrm{\quad d\quad}}\delta_\rho\,,\end{gathered}$$ where $\rho$ is some real number in the open interval $(1/{|\Delta|},1)$. The number $\rho$ would appear as a *limit average rate*: most braids of Artin size $k$ would have, for $k$ large enough, a Garside size close to $\rho k$. If $\rho$ can furthermore be simply related to the quantities we have introduced earlier, such as the characteristic parameter $q_n$, it is reasonable to expect that $\rho$ would be an algebraic number.
Once this is established, the next step consists in studying a Central Limit Theorem: upon normalization, is the distribution of $\rho_k$ Gaussian around its limit value $\rho$? Hence, one expects a convergence in distribution of the following form, for some constant $\sigma_n^2>0$ and where ${\mathcal{N}}(0,\sigma^2)$ denotes the Normal distribution of zero mean and variance $\sigma^2$: $$\begin{gathered}
\label{eq:21}
\sqrt
k\Bigl(\frac{{\tau}(\cdot)}k-\rho\Bigr)\xrightarrow[k\to\infty]{\quad\mathrm{d}\quad}
{\mathcal{N}}(0,\sigma_n^2)\end{gathered}$$
It turns out that both results (\[eq:27\]) and (\[eq:21\]) do indeed hold. Because of space constraints, we postpone their proofs to a forthcoming work [@opus2].
This concerns one extension of the results established in this paper. Generalizations to other monoids are also possible, which we intend to expose in [@opus2]. Braid monoids fall into the wider class of Artin-Tits monoids, investigated by several authors since the 1960s, including Tits, Deligne, Saito, Brieskorn, Garside, Charney and Dehornoy. Several results established in this paper for braid monoids admit generalizations to Artin-Tits monoids, and analogues of the convergences (\[eq:27\]) and (\[eq:21\]) also hold.
Among Artin-Tits monoids, one class in particular has attracted the attention of the authors: the class of trace monoids, also called partially commutative monoids [@cartier69]. In trace monoids, the only relations between generators are commutativity relations (there are no braid relations); they correspond to Viennot’s *heap monoids* [@viennot86]. Trace monoids differ from braid monoids in several respects: for instance, they lack the lattice structure, and their associated Coxeter group is not finite. From the point of view adopted in this paper, the main difference lies in the existence of a *continuum of multiplicative measures*, among which the uniform measure is a particular case. Recall that we have observed in Remark \[rem:7\] that the uniform measure for infinite braids is the only instance of a multiplicative measure, so the situation for braids contrasts sharply with that of trace monoids. The investigation of multiplicative measures for trace monoids has been the topic of [@abbes15a].
We shall prove in [@opus2] that, from this perspective, there are essentially only two types of Artin-Tits monoids: the trace type and the braid type, corresponding respectively to the type with a continuum of multiplicative measures, and the type where multiplicative measures reduce to the uniform measure only. For the trace type, multiplicative measures are parametrized by a sub-manifold of $\bbR^m$, diffeomorphic to the standard $(m-1)$-simplex, where $m$ is the minimal number of generators of the monoid.
---
abstract: 'Unsupervised feature extractors are known to perform an efficient and discriminative representation of data. Insight into the mappings they perform and human ability to understand them, however, remain very limited. This is especially prominent when multilayer deep learning architectures are used. This paper demonstrates how to remove these bottlenecks within the architecture of Nonnegativity Constrained Autoencoder (NCSAE). It is shown that by using both L1 and L2 regularization that induce nonnegativity of weights, most of the weights in the network become constrained to be nonnegative thereby resulting into a more understandable structure with minute deterioration in classification accuracy. Also, this proposed approach extracts features that are more sparse and produces additional output layer sparsification. The method is analyzed for accuracy and feature interpretation on the MNIST data, the NORB normalized uniform object data, and the Reuters text categorization dataset.'
author:
- 'Babajide O. Ayinde and Jacek M. Zurada, [^1] [^2] [^3]'
bibliography:
- 'autoencoder.bib'
title: 'Deep Learning of Nonnegativity-Constrained Autoencoders for Enhanced Understanding of Data'
---
©2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses. Cite: B. O. Ayinde and J. M. Zurada, “Deep Learning of Constrained Autoencoders for Enhanced Understanding of Data,” in IEEE Trans. on Neural Networks and Learning Systems, September 2018, Vol. 29, Issue 9, Pg. 3969–3979.
Sparse autoencoder, part-based representation, nonnegative constraints, white-box model, deep learning, receptive field.
Introduction
============
Deep learning (DL) networks take the form of heuristic and rich architectures that develop unique intermediate data representations. The complexity of architectures is reflected both by the sizes of the layers and, for a large number of data sets reported in the literature, by the processing involved. In fact, the architectural complexity and the excessive number of weights and units are often built into the DL data representation by design and are deliberate [@bengio2007scaling; @bengio2009learning; @Hinton2006reducing; @Deng2014tutorial; @Bengio2013Guest]. Although deep architectures are capable of learning highly complex mappings, they are difficult to train, and it is usually hard to interpret what each layer has learnt. Moreover, gradient-based optimization with random initialization used in training is susceptible to converging to local minima [@Bengio2007Greedy; @ayinde2016clustering].\
In addition, it is generally believed that humans analyze complex interactions by breaking them into isolated and understandable hierarchical concepts. The emergence of part-based representation in human cognition can be conceptually tied to the nonnegativity constraints [@lee1999learning]. One way to enable easier human understandability of concepts in neural networks is to constrain the network’s weights to be nonnegative. Note that such representation through nonnegative weights of a multilayer network perceptron can implement any shattering of points provided suitable negative bias values are used [@Chorowski2014Learning].\
Drawing inspiration from the idea of Nonnegative Matrix Factorization (NMF) and sparse coding [@Olshausen1996Emergence; @lee1999learning], the hidden structure of data can be unfolded by learning features that have the capability to model the data in parts. NMF enforces the encoding of both the data and the features to be nonnegative, thereby resulting in an additive data representation. However, incorporating sparse coding within NMF for the purpose of encoding data is computationally expensive, whereas with AEs this incorporation is learning-based and fast. In addition, the performance of a deep network can be enhanced using the Nonnegativity Constrained Sparse Autoencoder (NCAE) with part-based data representation capability [@Ehsan2015Deep; @Ranzato2007SparseFeature].\
Weight regularization is a concept that has been employed in both the understandability and the generalization contexts. It is used to suppress the magnitudes of the weights by reducing the sum of their squares. Enhancement in sparsity can also be achieved by penalizing the sum of absolute values of the weights rather than the sum of their squares [@ishikawa1996structural; @bartlett1998sample; @gnecco2010regularization; @moody1995simple; @ogundijo2017reverse]. In this paper, the work proposed in [@Ehsan2015Deep] is extended by modifying the cost function to extract more sparse features, encouraging nonnegativity of the network weights, and enhancing the understandability of the data. Another related model is the Nonnegative Sparse Autoencoder (NNSAE), trained with an online algorithm with tied weights and a linear output activation function to reduce the training burden [@Lemme2012OnlineLearning]. While [@Lemme2012OnlineLearning] uses a piecewise linear decay function to enforce nonnegativity and focuses on shallow architectures, the proposed approach uses a composite norm with a focus on deep architectures. Dropout is another recently introduced and widely used heuristic to sparsify AEs and prevent overfitting by randomly dropping units and their connections from the neural network during training [@hinton2012improving; @srivastava2014dropout].\
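To make the weight-regularization idea concrete, below is a minimal NumPy sketch of one plausible composite L1/L2 penalty that acts only on the negative part of the weights, so that nonnegative weights incur no cost. The function names and the coefficients `alpha` and `beta` are hypothetical placeholders; the exact formulation used by the proposed NCSAE is given in Section II.

```python
import numpy as np

def nonneg_penalty(W, alpha=1e-3, beta=1e-2):
    """Composite L1 + L2 penalty on the negative part of the weights.

    Nonnegative weights incur no cost; negative weights are penalized both
    by their absolute value (L1) and their square (L2). The coefficients
    alpha and beta are illustrative, not the paper's values.
    """
    neg = np.minimum(W, 0.0)                       # negative part of each weight
    return alpha * np.abs(neg).sum() + 0.5 * beta * (neg ** 2).sum()

def nonneg_penalty_grad(W, alpha=1e-3, beta=1e-2):
    """(Sub)gradient of the penalty: zero where W >= 0, and negative where
    W < 0, so a gradient-descent step pushes negative weights up toward zero."""
    return np.where(W < 0, -alpha + beta * W, 0.0)

W = np.array([[0.5, -0.2], [0.0, -1.0]])
assert nonneg_penalty(np.abs(W)) == 0.0            # no cost once weights are nonnegative
assert nonneg_penalty(W) > 0.0
g = nonneg_penalty_grad(W)
assert np.all(g[W >= 0] == 0) and np.all(g[W < 0] < 0)
```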
More recently, different paradigms of AEs that constrain the output of the encoder to follow a chosen prior distribution have been proposed [@kingma2013auto; @makhzani2015adversarial; @burda2015importance]. In variational autoencoding, the decoder is trained to reconstruct the input from samples that follow the chosen prior using variational inference [@kingma2013auto]. Realistic data points can be reconstructed in the original data space by feeding the decoder with samples from the chosen prior distribution. On the other hand, the adversarial AE matches the encoder’s output distribution to an arbitrary prior distribution using adversarial training with a discriminator and a generator [@makhzani2015adversarial]. Upon adversarial training, the encoder learns to map the data distribution to the prior distribution.\
The problem addressed here is three-fold: (i) The interpretability of AE-based deep architectures is fostered by enforcing a high degree of nonnegativity in the network weights. This improves on NCAE, which shows negative weights despite imposing nonnegativity constraints on the network’s weights [@Ehsan2015Deep]. (ii) It is demonstrated how the proposed architecture can be utilized to extract meaningful representations that unearth the hidden structure of high-dimensional data. (iii) It is shown that the resulting nonnegative AEs do not suffer deteriorated classification performance. This paper considerably expands the scope of the AE model first introduced in [@baba101] by: (i) introducing a smoothing function for the $L_1$ regularization for numerical stability, (ii) illustrating the connection between the proposed regularization and weight nonnegativity, (iii) drawing more insight from a variety of datasets, (iv) comparing the proposed method with recent AE architectures, and lastly (v) supporting the interpretability claim with new experiments on text categorization data. The paper is structured as follows: Section II introduces the network configuration and the notation for nonnegative sparse feature extraction. Section III discusses the experimental designs and Section IV presents the results. Finally, conclusions are drawn in Section V.
Nonnegative sparse feature extraction using Constrained Autoencoders
====================================================================
As shown in [@lee1999learning], one way of representing data is to shatter it into distinct pieces such that additively merging these pieces reconstructs the original data. Mapping this intuition to AEs, the idea is to sparsely disintegrate data into parts in the encoding layer and subsequently process the parts additively to recombine the original data in the decoding layer. This disintegration can be achieved by imposing a nonnegativity constraint on the network’s weights [@wright1999; @Nguyen2013Learning; @Ehsan2015Deep].

[[(a)]{}]{}

[[(b)]{}]{}

[[(c)]{}]{}

[[(d)]{}]{}
$L_1/L_2$-Nonnegativity Constrained Sparse Autoencoder ($L_1/L_2$-NCSAE)
------------------------------------------------------------------------
In order to encourage a higher degree of nonnegativity in the network’s weights, a composite penalty term is added to the objective function, resulting in the cost function expression for $L_1/L_2$-NCSAE: $$\label{MyEq8}
\begin{split}
J_{\text{$L_1/L_2$-NCSAE}}\big(\textbf{W},\textbf{b}\big) = J_{AE} &+\beta \sum^{n'}_{r=1} D_{KL}\bigg(p\bigg\Vert \frac{1}{m}\sum_{k=1}^mh_r(\textbf{x}^{(k)})\bigg)\\
& +\sum_{l=1}^{2}\sum_{i=1}^{s_{l}}\sum_{j=1}^{s_{l+1}}f_{L_1/L_2}\big(w_{ij}^{(l)}\big) \end{split}$$ where $\textbf{W} = \{\textbf{W}^{(1)},\textbf{W}^{(2)}\}$ and $\textbf{b} = \{\textbf{b}_x,\textbf{b}_h\}$ represent the weights and biases of encoding and decoding layers respectively; $s_l$ is the number of neurons in layer $l$. $w^{(l)}_{ij}$ represents the connection between $j$th neuron in layer $l-1$ and $i$th neuron in layer $l$ and for given input $\textbf{x}$, $$\label{MyEq9zz}
J_{AE} = \frac{1}{m}\sum^m_{k=1}{\left\lVert\sigma(\textbf{W}^{(2)} \sigma(\textbf{W}^{(1)}\textbf{x}^{(k)}+\textbf{b}_x)+\textbf{b}_h) - \textbf{x}^{(k)}\right\rVert}^2_2,$$ where $m$ is the number of training examples and $||\centerdot||_2$ is the Euclidean norm. $D_{KL}(\centerdot)$ is the Kullback-Leibler (KL) divergence for sparsity control [@ng2011ufldl], which compares the desired average activation $p$ with the empirical average activation of each hidden unit; $n'$ is the number of hidden units, $h_j(\textbf{x}^{(k)})= \sigma(\textbf{W}_j^{(1)}\textbf{x}^{(k)}+b_{x,j})$ denotes the activation of hidden unit $j$ due to input $\textbf{x}^{(k)}$, $\sigma(\centerdot)$ is the element-wise application of the logistic sigmoid, $\sigma(\textbf{x})=\sfrac{1}{(1+\exp(-\textbf{x}))}$, $\beta$ controls the sparsity penalty term, and $$\label{MyEq9}
f_{L_1/L_2}(w_{ij}) = \Bigg\{
\begin{array}{l l}
\alpha_1\Gamma(w_{ij},\kappa) + \frac{\alpha_2}{2}||w_{ij}||^2 & \quad w_{ij} <0\\
0 & \quad w_{ij} \geq 0
\end{array}$$ where $\alpha_1$ and $\alpha_2$ are the $L_1$ and $L_2$ nonnegativity-constraint weight penalty factors, respectively. $p$, $\beta$, $\alpha_1$, and $\alpha_2$ are experimentally set to $0.05$, $3$, $0.0003$, and $0.003$, respectively, using $9000$ randomly sampled images from the training set as a held-out validation set for hyperparameter tuning; the network is then retrained on the entire dataset. The weights are updated using error backpropagation as below: $$\label{MyEq10}
w_{ij}^{(l)} =w_{ij}^{(l)}-\xi \frac{\partial}{\partial w_{ij}^{(l)}}J_{\text{$L_1/L_2$-NCSAE}}(\textbf{W},\textbf{b})$$ $$\label{MyEq11}
b_{i}^{(l)}=b_{i}^{(l)}-\xi \frac{\partial}{\partial b_{i}^{(l)}}J_{\text{$L_1/L_2$-NCSAE}}(\textbf{W},\textbf{b})$$ where $\xi>0$ is the learning rate and the gradient of $L_1/L_2$-NCSAE loss function is computed as in (\[MyEq12\]). $$\label{MyEq12}
\begin{split}
\frac{\partial}{\partial w_{ij}^{(l)}}J_{\text{$L_1/L_2$-NCSAE}}(\textbf{W},\textbf{b}) &=\frac{\partial}{\partial w_{ij}^{(l)}}J_{\text{AE}}\big(\textbf{W},\textbf{b}\big)\\
&+\beta \frac{\partial}{\partial w_{ij}^{(l)}}D_{KL}\bigg(p\bigg\Vert \frac{1}{m}\sum_{k=1}^mh_j(\textbf{x}^{(k)})\bigg)\\
&+ g\big(w_{ij}^{(l)}\big)
\end{split}$$ where $g(w_{ij})$ is a composite function denoting the derivative of $f_{L_1/L_2}(w_{ij})$ with respect to $w_{ij}$ as in . $$\label{MyEq13}
g(w_{ij}) = \bigg\{
\begin{array}{l l}
\alpha_1\nabla_{\textbf{w}}{\left\lVert w_{ij}\right\rVert} + \alpha_2w_{ij} & \quad w_{ij} <0\\
0 & \quad w_{ij} \geq 0
\end{array}
$$\
\
Although the penalty function in is an extension of NCAE (obtained by setting $\alpha_1$ to zero), close scrutiny of the weight distributions of both the encoding and decoding layers in NCAE reveals that many weights remain negative despite the imposed nonnegativity constraints. The reason is that the original *$L_2$* norm used in NCAE penalizes negative weights of large magnitude more strongly than those of small magnitude, which forces a good number of the weights to take on small negative values. This paper adds an *$L_1$* penalty to counteract this effect, that is, the *$L_1$* penalty forces most of the negative weights to become nonnegative.
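To make the composite penalty concrete, the following minimal NumPy sketch (illustrative only; the function names `f_l1l2` and `g_l1l2` are ours, and the unsmoothed $|w|$ is used here in place of the smoothed $\Gamma$ introduced later) evaluates $f_{L_1/L_2}$ and its gradient $g$, which act only on negative weights:

```python
import numpy as np

def f_l1l2(W, alpha1=0.0003, alpha2=0.003):
    """Composite L1/L2 penalty applied only to negative weights.
    Nonnegative entries contribute zero, matching the piecewise definition."""
    neg = np.minimum(W, 0.0)                      # zero out nonnegative entries
    return alpha1 * np.abs(neg).sum() + 0.5 * alpha2 * (neg ** 2).sum()

def g_l1l2(W, alpha1=0.0003, alpha2=0.003):
    """Gradient of the penalty: alpha1*sign(w) + alpha2*w for w < 0, else 0."""
    mask = W < 0
    return mask * (alpha1 * np.sign(W) + alpha2 * W)

# A gradient step on the penalty alone pushes negative weights toward zero
# while leaving nonnegative weights untouched:
W = np.array([[-0.5, 0.2], [-0.01, 0.0]])
W_new = W - 1.0 * g_l1l2(W)
```

Because the $L_1$ term contributes a constant-magnitude gradient $\alpha_1$ for every negative weight, small negative weights are driven to zero as effectively as large ones, which is precisely the behavior the $L_2$-only NCAE penalty lacks.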
Implication of imposing nonnegative parameters with composite decay function
----------------------------------------------------------------------------
The relation between the weight distribution and the composite decay function is illustrated graphically in Fig. \[weight\_dist\_example\]. Ideally, adding the Frobenius norm of the weight matrix ($\alpha||\textbf{W}||_F^2$) to the reconstruction error in imposes a Gaussian prior on the weight distribution, as shown by curve $G_3$ in Fig. \[weight\_dist\_example\]a. However, using the composite function in results in the imposition of a positively-skewed, deformed Gaussian distribution, as in curves $G_1$ and $G_2$. The degree of nonnegativity can be adjusted using the parameters $\alpha_1$ and $\alpha_2$. Both have to be chosen carefully to enforce nonnegativity while simultaneously ensuring good supervised learning outcomes. The effects of the $L_1$ ($\alpha_2=0$), $L_2$ ($\alpha_1=0$) and $L_1/L_2$ ($\alpha_1\neq0$ and $\alpha_2\neq0$) nonnegativity penalty terms on weight updates for weight distributions $G_1$, $G_2$ and $G_3$ are shown in Fig. \[weight\_dist\_example\]c, d, and b, respectively. For all three distributions, $L_1/L_2$ regularization enforces a stronger weight decay than the individual $L_1$ and $L_2$ regularizations. Another observation from Fig. \[weight\_dist\_example\] is that the more positively-skewed the weight distribution becomes, the smaller the weight decay.\
The consequences of minimizing are that: (i) the average reconstruction error is reduced; (ii) the sparsity of the hidden layer activations is increased, because more negative weights are forced to zero; and (iii) the number of nonnegative weights is increased. The net effect of penalizing the weights simultaneously with the *$L_1$* and *$L_2$* norms is that large positive connections are preserved while their magnitudes are shrunk. However, the $L_1$ norm in is non-differentiable at the origin, which can lead to numerical instability during simulations. To circumvent this drawback, a well-known smoothing function that approximates the *$L_1$* norm as in is utilized. Given any finite-dimensional vector $\textbf{z}$ and positive constant $\kappa$, the following smoothing function approximates the *$L_1$* norm: $$\label{MyEq9d}
\begin{split}
\Gamma(\textbf{z},\kappa) = \Bigg\{
\begin{array}{l l}
||\textbf{z}|| & \quad ||\textbf{z}|| > \kappa \\ \\
\frac{||\textbf{z}||^2}{2\kappa} + \frac{\kappa}{2} & \quad ||\textbf{z}|| \leq \kappa
\end{array}
\end{split}$$ with gradient $$\label{MyEq9g}
\nabla_{\textbf{z}}\Gamma(\textbf{z},\kappa) = \Bigg\{
\begin{array}{l l}
\frac{\textbf{z}}{||\textbf{z}||} & \quad ||\textbf{z}|| > \kappa \\ \\
\frac{\textbf{z}}{\kappa} & \quad ||\textbf{z}|| \leq \kappa
\end{array}
$$ For convenience, this smoothing function is adopted for the $L_1$ penalty, and $\kappa$ is experimentally set to $0.1$.
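The smoothing function $\Gamma$ and its gradient can be sketched directly in NumPy (function names `gamma` and `gamma_grad` are ours; the quadratic branch inside the $\kappa$-ball is what removes the non-differentiability at the origin):

```python
import numpy as np

def gamma(z, kappa=0.1):
    """Huber-style smoothing of the L1 norm: quadratic inside a kappa-ball,
    linear outside, so the gradient is defined everywhere including 0."""
    n = np.abs(z)
    return np.where(n > kappa, n, n ** 2 / (2 * kappa) + kappa / 2)

def gamma_grad(z, kappa=0.1):
    """Gradient: z/|z| (i.e. sign) outside the ball, z/kappa inside.
    The two branches agree at |z| = kappa, so there is no jump."""
    n = np.abs(z)
    return np.where(n > kappa, np.sign(z), z / kappa)

# Both branches meet at |z| = kappa, and the gradient at the origin is 0:
z = np.array([-0.5, -0.1, 0.0, 0.05, 0.5])
vals, grads = gamma(z), gamma_grad(z)
```

The constant offset $\kappa/2$ in the quadratic branch makes $\Gamma$ continuous at $|z|=\kappa$, which is what keeps gradient-based training stable near zero-valued weights.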
Experiments
===========
{width="0.85\linewidth"}
In the experiments, three datasets are used: MNIST [@LeCun1998], NORB normalized-uniform [@lecun2004learning], and the Reuters-21578 text categorization dataset. The Reuters-21578 dataset comprises documents that appeared in the 1987 Reuters newswire. The ModApte split was employed to limit the dataset to the 10 most frequent categories. The bag-of-words format, stemmed and with stop words removed, was used; see http://people.kyb.tuebingen.mpg.de/pgehler/rap/ for further details. The dataset contains $11,413$ documents with $12,317$ dimensions. Two techniques were used to reduce the dimensionality of each document while preserving the most informative and least correlated words [@tan2006introduction]. First, words were sorted based on their frequency of occurrence in the dataset, and words with frequency below $4$ or above $70$ were eliminated. Second, the most informative words, those that do not occur in every topic, were selected based on their information gain with the class attribute: the remaining words (or features) were ranked by this measure and the less important features were removed to reach the desired document dimension. In this paper, the length of the feature vector for each document was reduced to 200.\
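The two-stage feature reduction above can be sketched as follows (illustrative only: the helper names are ours, "frequency" is interpreted here as document frequency, and the information-gain computation uses binary word presence):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(present, labels):
    """Information gain of the class labels given binary word presence."""
    h = entropy(labels)
    for v in (True, False):
        mask = present == v
        if mask.any():
            h -= mask.mean() * entropy(labels[mask])
    return h

def select_features(counts, labels, lo=4, hi=70, k=200):
    """counts: (docs x words) term-count matrix. Keep words whose document
    frequency lies in [lo, hi], then the top-k of those by information gain."""
    df = (counts > 0).sum(axis=0)
    keep = np.where((df >= lo) & (df <= hi))[0]
    ig = np.array([information_gain(counts[:, j] > 0, labels) for j in keep])
    return keep[np.argsort(ig)[::-1][:k]]
```

A word that appears in documents of only one topic has maximal information gain with the class attribute and survives the ranking, while words spread evenly across topics are dropped first.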
In the preliminary experiment, the subsets $1$, $2$ and $6$ of the MNIST handwritten digits were extracted for the purpose of understanding how a deep network constructed using $L_1/L_2$-NCSAE processes and classifies its input. For easy interpretation, a small deep network was constructed and trained by stacking two AEs with $10$ hidden neurons each and $3$ softmax neurons. The number of hidden neurons was chosen to obtain reasonably good classification accuracy while keeping the network small; the network is intentionally kept small because the full MNIST data would require a larger hidden layer, which would limit network interpretability. An image of digit $2$ is then filtered through the network, and it can be observed in Fig. \[deep\_receptive\] that sparsification of the weights in all the layers is one consequence of the nonnegativity constraints imposed on the network. Another observation is that most of the weights in the network have been confined to the nonnegative domain, which removes the opaqueness of the deep learning process. The fourth and seventh receptive fields of the first AE layer have dominant activations (with activation values $0.12$ and $0.13$, respectively) and capture most of the information about the test input; they are also able to filter distinct parts of the input digit. The outputs of the first-layer sigmoid constitute higher-level features extracted from the test image, with emphasis on the fourth and seventh features. Subsequently, in the second layer, the second, sixth, eighth, and tenth neurons have dominant activations (with activation values $0.0914$, $0.0691$, $0.0607$, and $0.0606$, respectively) because they have stronger connections with the dominant neurons in the first layer than the rest. Lastly, in the softmax layer, the second neuron is $99.62\%$ activated because it has the strongest connections with the dominant neurons in the second layer, thereby classifying the test image as “2”.\
{height="8cm" width="0.85\linewidth"}
The fostering of interpretability is also demonstrated using a subset of the NORB normalized-uniform dataset [@lecun2004learning] with class labels “four-legged animals”, “human figures”, and “airplanes”. A $1024$-$10$-$5$-$3$ network was trained on the subset of the NORB data using two stacked $L_1/L_2$-NCSAEs and a softmax layer. Fig. \[norb\_mag\]b shows randomly sampled test patterns, and the weights and activations of the first and second AE layers are shown in Fig. \[norb\_mag\]a. The bar charts indicate the activations of the hidden units for the sample input patterns. The features learned by the units in each layer are localized, sparse, and allow easy interpretation of isolated data parts. The features show mostly nonnegative weights, making it easier to visualize which input object patterns they respond to. Units in the network discriminate among objects in the images and react differently to input patterns. The third, sixth, eighth, and ninth hidden units of layer 1 capture features that are common to objects in class “2” and react mainly to them, as shown in the first-layer activations. The second-layer activations likewise reveal that the second and fifth hidden units are mainly stimulated by objects in class “2”.\
The outputs of the softmax layer represent the *a posteriori* class probabilities for a given sample and are denoted as softmax scores. An important observation from Fig. \[norb\_mag\]a, b, and c is that the hidden units in both layers did not capture significant representative features for the class “1” white color-coded test sample. This is one of the reasons why it is misclassified into class “3” with probability 0.57. The same argument applies to the class “1” dark-grey color-coded test sample, misclassified into class “3” with probability 0.60. In contrast, the hidden units in both layers capture significant representative features for class “2” test samples of all color codes, which is why all class “2” test samples are classified correctly with high probabilities, as shown in Fig. \[norb\_mag\]d. Lastly, the network contains a good number of representative features for class “3” test samples and was able to classify 4 out of 5 correctly, as given in Fig. \[norb\_mag\]e.
Results and Discussion
======================
Unsupervised Feature Learning of Image Data
-------------------------------------------
In the first set of experiments, three-layer $L_1/L_2$-NCSAE, NCAE [@Ehsan2015Deep], DpAE [@hinton2012improving], and conventional SAE networks with $196$ hidden neurons were trained on the MNIST dataset of handwritten digits, and their ability to discover patterns in high-dimensional data was compared. These experiments were run once and recorded. The encoding weights $\textbf{W}^{(1)}$, also known as receptive fields or filters in the case of image data, are reshaped, scaled, centered in a 28 $\times$ 28 pixel box, and visualized. The filters learned by $L_1/L_2$-NCSAE are compared with those learned by its counterparts, NCAE and SAE. It can easily be observed from the results in Fig. \[Receptive\_fields\_MNIST\] that $L_1/L_2$-NCSAE learned receptive fields that are sparser and more localized than those of SAE, DpAE, and NCAE. The black pixels in both the SAE and DpAE features result from negative weights, whose values and numbers are reduced in NCAE by the nonnegativity constraints and further reduced by the additional $L_1$ penalty term in $L_1/L_2$-NCSAE, as shown in the histograms on the right side of the figure. In the case of $L_1/L_2$-NCSAE, tiny strokes and dots, which constitute the basic parts of handwritten digits, are unearthed, unlike with SAE, DpAE, and NCAE. Most of the features learned by SAE are major parts of the digits or blurred versions of the digits, which are clearly not as sparse as those learned by $L_1/L_2$-NCSAE. The features learned by DpAE are also fuzzy compared to those of $L_1/L_2$-NCSAE, which are sparse and distinct. Therefore, the achieved sparsity in the encoding can be traced to the ability of the $L_1$ and $L_2$ regularization to enforce a high degree of weight nonnegativity in the network.\
{width="0.85\linewidth"}
{width="0.85\linewidth"}
{width="0.85\linewidth"}
{width="0.85\linewidth"}

[[(a)]{}]{}

[[(b)]{}]{}
{width="1.0\linewidth"}
{width="0.85\linewidth" height="2.0cm"}
{width="0.85\linewidth" height="2.0cm"}
{width="0.85\linewidth" height="2.0cm"}
{width="0.85\linewidth" height="2.0cm"}
{height="3.5cm"}
[[(a)]{}]{}
{height="3.5cm"}
[[(b)]{}]{}
{height="3.5cm"}
[[(c)]{}]{}
{width="85.00000%"}
Likewise, in Fig. \[hidden\_size\]a, $L_1/L_2$-NCSAE is compared with the other AEs in terms of reconstruction error while varying the number of hidden nodes. As expected, $L_1/L_2$-NCSAE yields a reasonably lower reconstruction error on the MNIST training set than SAE, DpAE, and NCAE. A closer scrutiny of the result reveals that the reconstruction error of $L_1/L_2$-NCSAE deteriorates relative to NCAE when the hidden size grows beyond $400$; on average, however, $L_1/L_2$-NCSAE reconstructs better than the other AEs considered. It can also be observed that DpAE with 50% dropout has a high reconstruction error when the hidden layer size is relatively small (100 or fewer), because the few neurons left are unable to capture the dynamics in the data, which results in underfitting; the reconstruction error improves as the hidden layer size is increased. The lower reconstruction error in the case of $L_1/L_2$-NCSAE and NCAE is an indication that the nonnegativity constraint facilitates the learning of parts of digits that are essential for reconstructing them. In addition, the KL-divergence sparsity measure reveals that $L_1/L_2$-NCSAE has sparser hidden activations than SAE, DpAE and NCAE for different hidden layer sizes, as shown in Fig. \[hidden\_size\]b. Again, averaging over all the training examples, $L_1/L_2$-NCSAE yields fewer activated hidden neurons than its counterparts.
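The KL-divergence sparsity measure used for this comparison can be sketched in NumPy (the function name `kl_sparsity` is ours; it sums, over hidden units, the KL divergence between the target activation $p$ and each unit's mean activation over the batch):

```python
import numpy as np

def kl_sparsity(H, p=0.05, eps=1e-8):
    """KL-divergence sparsity of a batch of hidden activations H
    (examples x hidden units). Larger values mean activations are
    further from the sparse target rate p."""
    p_hat = np.clip(H.mean(axis=0), eps, 1 - eps)   # mean activation per unit
    return np.sum(p * np.log(p / p_hat)
                  + (1 - p) * np.log((1 - p) / (1 - p_hat)))

# Units firing at exactly the target rate score ~0; dense units score higher:
sparse_H = np.full((100, 4), 0.05)
dense_H = np.full((100, 4), 0.9)
```

Because the measure depends only on per-unit mean activations, it rewards codes in which most hidden units stay near-silent on average, which is the sense in which $L_1/L_2$-NCSAE's activations are "sparser" in Fig. \[hidden\_size\]b.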
[[(a)]{}]{}
[[(b)]{}]{}
Also, using t-distributed stochastic neighbor embedding (t-SNE) to project the $196$-D representations of the MNIST handwritten digits to 2D space, the distributions of features encoded by the $196$ encoding filters of DpAE, NCAE, and $L_1/L_2$-NCSAE are visualized in Figs. \[tsne\_mnist\_196\_composite\]a, b, and c, respectively. A careful look at Fig. \[tsne\_mnist\_196\_composite\]a reveals that digits “$4$” and “$9$” overlap in DpAE, which will inevitably increase the chance of misclassifying these two digits. It can also be observed in Fig. \[tsne\_mnist\_196\_composite\]b, corresponding to NCAE, that digit “$2$” is projected onto two different landmarks. In sum, the manifolds of the digits under $L_1/L_2$-NCSAE are more separable than under its counterparts, as shown in Fig. \[tsne\_mnist\_196\_composite\]c, aiding the classifier in mapping out the separating boundaries among the digits.\
In the second experiment, SAE, NCAE, $L_1/L_2$-NCSAE, and DpAE with 200 hidden nodes were trained on the NORB normalized-uniform dataset. This dataset contains $24,300$ training images and $24,300$ test images of $50$ toys from $5$ generic categories: four-legged animals, human figures, airplanes, trucks, and cars. The training and testing sets consist of $5$ instances of each category. Each image consists of two channels, each of size $96\times 96$ pixels. The inner $64\times 64$ pixels of one of the channels were cropped out and resized using bicubic interpolation to $32\times 32$ pixels, forming a vector with $1024$ entries as the input. Randomly selected weights of $90$ out of the $200$ neurons are plotted in Fig. \[Receptive\_fields\_NORB\]. It can be seen that $L_1/L_2$-NCSAE learned sparser features than all the other AEs considered. The receptive fields learned by $L_1/L_2$-NCSAE capture the actual edges of the toys, while the edges captured by NCAE are fuzzy and those learned by DpAE and SAE are holistic. As shown in the weight distributions depicted in Fig. \[weight\_distribution\_norb\], $L_1/L_2$-NCSAE has both its encoding and decoding weights centered around zero with most of its weights positive, whereas DpAE and NCAE have weights distributed almost evenly on both sides of the origin.
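The NORB preprocessing step can be sketched as below (illustrative: the function name `preprocess_norb` is ours, and simple block averaging stands in for the bicubic interpolation used in the paper):

```python
import numpy as np

def preprocess_norb(channel, crop=64, out=32):
    """Crop the central crop x crop patch of one 96x96 NORB channel,
    downsample to out x out by block averaging, and flatten to a vector
    (1024 entries for the default sizes)."""
    h, w = channel.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = channel[top:top + crop, left:left + crop]
    f = crop // out                              # integer downsampling factor
    small = patch.reshape(out, f, out, f).mean(axis=(1, 3))
    return small.ravel()

# One synthetic 96x96 channel -> a 1024-dimensional input vector:
x = preprocess_norb(np.random.rand(96, 96))
```

Cropping before resizing discards the uniform background around the toy, so the $1024$ retained entries are dominated by object pixels rather than padding.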
\[table:result4\]
Unsupervised Semantic Feature Learning from Text Data
-----------------------------------------------------
In this experiment, DpAE, NCAE, and $L_1/L_2$-NCSAE are evaluated and compared based on their ability to extract semantic features from text data and to discover its underlying structure. For this purpose, the Reuters-21578 text categorization dataset with $200$ features is utilized to train all three types of AEs with $20$ hidden nodes. A subset of $500$ examples belonging to the categories “grain”, “crude”, and “money-fx” was extracted from the test set. The experiments were run three times, averaged, and recorded. In Fig. \[tsne\_reuters\_15\_compo1\], the 20-dimensional representations of the Reuters data subset produced by DpAE, NCAE, and $L_1/L_2$-NCSAE are visualized. It can be observed that $L_1/L_2$-NCSAE disentangles the documents into three distinct categories with more linear manifolds than NCAE. In addition, $L_1/L_2$-NCSAE groups documents that are closer in the semantic space into the same categories, whereas DpAE finds it difficult to group the documents into any distinct categories with little overlap.
Supervised Learning
-------------------
In the last set of experiments, a deep network was constructed using two stacked $L_1/L_2$-NCSAEs and a softmax classification layer, to test whether the enhanced ability of the network to shatter data into parts leads to improved classification. The entire deep network is eventually fine-tuned to improve the classification accuracy. In this set of experiments, the performance of a deep network pre-trained with $L_1/L_2$-NCSAE is compared with networks pre-trained with recent AE architectures. The MNIST and NORB datasets were utilized, and every run of the experiments was repeated ten times and averaged to combat the effect of random initialization. The classification accuracies of the deep networks pre-trained with NNSAE [@Lemme2012OnlineLearning], DpAE [@hinton2012improving], DAE [@vincent2008extracting], AAE [@makhzani2015adversarial], NCAE, and $L_1/L_2$-NCSAE on the MNIST and NORB data, respectively, are detailed in Table \[table:result4\]. The network architectures are 784-196-20-10 and 1024-200-20-5 for the MNIST and NORB datasets, respectively. It is remarked that for training AAE with two layers of 196 hidden units in the encoder, decoder, and discriminator, and other hyperparameters tuned as described in [@makhzani2015adversarial], the accuracy was $83.67$%. The AAE reported in Table \[table:result4\] used an encoder, decoder, and discriminator each with two layers of 1000 hidden units and was trained for 1000 epochs. Classification accuracy and speed of convergence are the figures of merit used to benchmark $L_1/L_2$-NCSAE against the other AEs.\
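The stacked architecture's forward pass can be sketched as follows (a sketch only: the pretraining and fine-tuning loops are omitted, the function names are ours, and random nonnegative weights stand in for pretrained ones in the 784-196-20-10 MNIST configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # shift for stability
    return e / e.sum(axis=1, keepdims=True)

def stack_forward(x, layers, W_out, b_out):
    """Feed a batch through the stacked (pretrained) encoders, then the
    softmax classifier; returns per-class probabilities."""
    h = x
    for W, b in layers:            # each AE contributes only its encoder
        h = sigmoid(h @ W + b)
    return softmax(h @ W_out + b_out)

# Placeholder nonnegative weights for the 784-196-20-10 network:
layers = [(rng.random((784, 196)) * 0.01, np.zeros(196)),
          (rng.random((196, 20)) * 0.01, np.zeros(20))]
W_out, b_out = rng.random((20, 10)) * 0.01, np.zeros(10)
probs = stack_forward(rng.random((5, 784)), layers, W_out, b_out)
```

During fine-tuning, the same forward pass is differentiated end-to-end, so the nonnegativity imposed during pretraining can only be altered by the supervised gradient, which is why the p-values discussed below matter.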
It is observed from the results that the $L_1/L_2$-NCSAE-based deep network gives improved accuracy before fine-tuning compared to NNSAE, DpAE, and NCAE. The performance in terms of classification accuracy after fine-tuning, however, is very competitive. In fact, it can be inferred from the p-values of the experiments conducted on MNIST and NORB in Table \[table:result4\] that there is no significant difference in accuracy after fine-tuning between NCAE and $L_1/L_2$-NCSAE, even though most of the weights in $L_1/L_2$-NCSAE are nonnegativity-constrained. It is therefore remarked that even though the interpretability of the deep network has been fostered by constraining most of the weights to be nonnegative and sparse, nothing significant has been lost in terms of accuracy. In addition, the network trained with $L_1/L_2$-NCSAE was observed to converge faster than its counterparts. NNSAE, on the other hand, also has nonnegative weights but suffers a deterioration in accuracy, which is most conspicuous before the fine-tuning stage. The improved accuracy before fine-tuning in the $L_1/L_2$-NCSAE-based network can be traced to its ability to decompose data into more distinguishable parts. Although the performance of $L_1/L_2$-NCSAE after fine-tuning is similar to those of DAE and NCAE, and better than NNSAE, DpAE, and AAE, $L_1/L_2$-NCSAE constrains most of the weights to be nonnegative and sparse, fostering transparency more than the other AEs. However, DpAE and NCAE performed slightly more accurately than $L_1/L_2$-NCSAE on NORB after network fine-tuning.\
To construct an interpretable deep network, an $L_1/L_2$-NCSAE pre-trained deep network was built with $10$ hidden neurons in the first AE layer, $5$ hidden neurons in the second AE, and 10 output neurons (one for each category) in the softmax layer. It was trained on the Reuters data and compared with a network pre-trained using DpAE. The encoding layer of the first AE is interpreted by listing the words associated with its $10$ strongest weights, and the encoding layer of the second AE is portrayed as images characterized by both the magnitude and sign of the weights. Compared to the AE with weights of both signs shown in Fig. \[reuters\]a, Fig. \[reuters\]b allows much better insight into the categorization of the topics.\
Topic *earn* in the output weight matrix resonates most with the 5th hidden neuron, less with the 3rd, and somewhat with the 4th. This resonance can happen only when the 5th hidden neuron reacts to input containing words of columns 1 and 4 and, to a lesser degree, when the 3rd hidden neuron reacts to input containing words of the 3rd column. So, in tandem, the dominant columns 1, 4 and then also 3 are the sets of words that trigger the category *earn*.\
Analysis of the term words for the topic *acq* leads to a similar conclusion. This topic also resonates with the two dominant hidden neurons 5 and 3, and somewhat with neuron 2. Neurons 5 and 3 are again driven by the columns of words 1, 4, and 3. The difference between the categories is that the category *acq* is also influenced, to a lesser degree, by the 6th column of words. An interesting point is the contribution of the 3rd column of words: the column connects only to the 4th hidden neuron, but the weights from this neuron in the output layer are smaller, and hence less significant, than for any other of the five neurons (or rows) of the output weight matrix. Hence this column is of least relevance in the topical categorization.
Experiment Running Times
------------------------
The training times of the networks with and without the nonnegativity constraints were compared. The constrained network converges faster and requires fewer training epochs. In addition, the unconstrained network requires more time per epoch than the constrained one. The running-time experiments were performed on the full MNIST benchmark dataset on an Intel(r) Core(TM) i7-6700 CPU @ 3.40GHz with 64GB of RAM running 64-bit Windows 10 Enterprise edition. The software implementation was in MATLAB 2015b with the batch gradient descent method, and LBFGS in minFunc [@Byrd1995] is used to minimize the objective function. The usage times of the constrained and unconstrained networks were also compared; usage time is defined as the time in milliseconds (ms) a fully trained deep network requires to classify all the test samples. The unconstrained network took 48 ms per epoch in the training phase, while the constrained counterpart took 46 ms. Also, the unconstrained network required 59.9 ms of usage time, whereas the network with nonnegative weights took 55 ms. From these observations, it is remarked that the nonnegativity constraint simplifies the resulting network.
Conclusion
==========
This paper addresses the concept and properties of a special regularization of deep learning AEs that takes advantage of nonnegative encodings. It has been shown that by using both $L_1$ and $L_2$ norms to penalize the negative weights, most of them are forced to be nonnegative and sparse, and hence the network interpretability is enhanced; in fact, most of the weights in the softmax layer also become nonnegative and sparse. In sum, it has been observed that encouraging nonnegativity in the NCAE-based deep architecture forces the layers to learn part-based representations of their input, and leads to comparable classification accuracy before fine-tuning the entire deep network and insignificant accuracy deterioration after fine-tuning. It has also been shown on selected examples that concurrent $L_1$ and $L_2$ regularization improves the network interpretability. The performance of the proposed method was compared in terms of sparsity, reconstruction error, and classification accuracy with the conventional SAE and NCAE, using the MNIST handwritten digits, Reuters documents, and the NORB dataset to illustrate the proposed concepts.
[Babajide Ayinde]{} (S’09) received the M.Sc. degree in Engineering Systems and Control from the King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. He is currently a Ph.D. student at the University of Louisville, Kentucky, USA and a recipient of University of Louisville fellowship. His current research interests include unsupervised feature learning and deep learning techniques and applications.
[Jacek M. Zurada]{} (M’82-SM’83-F’96-LF’14) Ph.D., has received his degrees from Gdansk Institute of Technology, Poland. He now serves as a Professor of Electrical and Computer Engineering at the University of Louisville, KY. He authored or co-authored several books and over 380 papers in computational intelligence, neural networks, machine learning, logic rule extraction, and bioinformatics, and delivered over 100 presentations throughout the world.
In 2014 he served as IEEE V-President, Technical Activities (TAB Chair). He also chaired the IEEE TAB Periodicals Committee, and TAB Periodicals Review and Advisory Committee and was the Editor-in-Chief of the IEEE Transactions on Neural Networks (1997-03), Associate Editor of the IEEE Transactions on Circuits and Systems, Neural Networks and of The Proceedings of the IEEE. In 2004-05, he was the President of the IEEE Computational Intelligence Society.
Dr. Zurada is an Associate Editor of Neurocomputing, and of several other journals. He has been awarded numerous distinctions, including the 2013 Joe Desch Innovation Award, 2015 Distinguished Service Award, and five honorary professorships. He has been a Board Member of IEEE, IEEE CIS and IJCNN.
[^1]: B. O. Ayinde is with the Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, 40292 USA (e-mail: [email protected]).
[^2]: J. M. Zurada is with the Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, 40292 USA, and also with the Information Technology Institute, University of Social Science,Łódz 90-113, Poland (Corresponding author, e-mail: [email protected]).
[^3]: This work was supported in part by the NSF under grant 1641042.
---
author:
- |
Gong-Bo Zhao $^{1,\ 2}$, David Bacon $^{2}$, Roy Maartens $^{3,\ 2}$, Mario Santos $^{3,\ 4}$, Alvise Raccanelli $^{5, \ 6, \ 7}$\
\
$^{1}$ National Astronomy Observatories, Chinese Academy of Science, Beijing, 100012, People’s Republic of China\
$^{2}$ Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX, United Kingdom\
$^{3}$ Department of Physics, University of the Western Cape, Cape Town 7535, South Africa\
$^{4}$ SKA SA, 4th Floor, The Park, Park Road, Pinelands, 7405, South Africa\
$^{5}$ Department of Physics & Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA\
$^{6}$ Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA 91109, USA\
$^{7}$ California Institute of Technology, Pasadena CA 91125, USA\
bibliography:
- 'PCA.bib'
title: 'Model-independent constraints on dark energy and modified gravity with the SKA'
---
Introduction
============
The physical origin of the acceleration of the universe has remained unknown since its discovery in 1998 using supernovae (SN) observations [@Riess; @Perlmutter]. It might imply that there exists a repulsive ‘dark energy’ component dominating the universe, or that we need a better understanding of the law of gravity, i.e., general relativity (GR) might need to be modified on cosmological scales (for a recent review of modified gravity theories, see @Clifton:2011jh). Although dark energy (DE) and modified gravity (MG) can accelerate the universe at the background level in the same way after the required tuning, the degeneracy can be broken when cosmic structure formation is investigated.
In this era of precision cosmology, a combination of multiple observational probes, including SN, cosmic microwave background (CMB), and large scale structure (LSS) surveys, is key to unveiling the mystery of the cosmic acceleration [@Weinberg:2012es]. This is because different kinds of surveys can be highly complementary: e.g., weak lensing (WL) and redshift surveys are able to probe $\gamma(k,z)$, quantifying the deviation of the photon trajectory from the GR geodesics, and $\mu(k,z)$, the time and spatial variation of Newton’s constant, respectively. These are two distinct effects predicted by a wide range of MG models, making the combination of WL with redshift surveys robust for GR tests, as well as for dark energy studies.
Given that the CMB and WL surveys of Planck [@Planck] and the Dark Energy Survey (DES) [^1] are accumulating data, we need large redshift surveys to complement them. The BOSS spectroscopic survey [@boss] of SDSS-III is currently the largest redshift survey worldwide, mapping 10,000 square degrees of sky up to $z=0.7$ by tracing 1.5 million luminous galaxies. It will be succeeded by eBOSS [^2], a multi-tracer spectroscopic survey of SDSS-IV, which will focus on a smaller patch of the sky (7500 square degrees) but go deeper. According to the forecast, it will achieve a 1-2% distance measurement from the baryon acoustic oscillations (BAO) in the range $0.6 < z < 2.5$. The Square Kilometre Array (SKA) [^3] HI galaxy redshift survey can provide us with accurate redshifts (using the 21cm line) of millions of sources over a wide range of redshifts, making it an ideal redshift survey for cosmological studies [@SKABAO; @SKARSD; @SKALSST; @SKAEuclid; @SKAsurvey; @SKAcosrev].
Traditionally, observational constraints on DE or MG using either current or future data are performed in a parameterised fashion: e.g., the equation of state of DE, $w(z)$, or the $\mu(k,z)$ and $\gamma(k,z)$ functions quantifying the effect of MG [@Zhao:2008bn] [^4], are parameterised using assumed functional forms, and then the observational constraints on these parameters are worked out. Simple as it is, this approach has its drawbacks:
- It may cause [*theoretical bias*]{}: the result largely depends on the functional form used for the parametrisation, which is chosen [*a priori*]{}. The functional forms are usually chosen for simplicity, for assumed theoretical consistency, or for both;
- The number of parameters is usually minimised; e.g., the CPL parametrisation [@CP; @L] of $w(z)$ has $2$ parameters, while the BZ parametrisation [@BZ] for MG has $5$ parameters. This can yield a reasonably good constraint on the reconstructed $w(z)$ or MG functions even when the data are weak, but it might underfit the data when the data are excellent.
In nonparametric methods, by contrast, including the principal component analysis (PCA), it is the assumptions, rather than the number of parameters, that are minimised; such methods therefore largely avoid theoretical bias.
In this chapter, we use the PCA method to perform the forecast of $w(z)$ and $\mu(k,z)$ using a SKA HI redshift galaxy survey.
Methodology
===========
In this section, we employ the standard Fisher matrix technique [@Fisher] to perform the forecast.
The Fisher matrix formalism
---------------------------
For a redshift survey, the Fisher matrix formalism reads [@FisherPk] [^5],
$$\begin{aligned}
\label{eq:Fisher}F_{ij}&=&\int_{-1}^{1} {\rm d}\mu \int_{k_{\rm min}}^{k_{\rm max}} {\rm d}k\, \frac{\partial \ln \tilde{P}(k,\mu)}{\partial p_{i}}\, \frac{\partial \ln \tilde{P}(k,\mu)}{\partial p_{j}}\, V_{\rm eff}(k,\mu)\, \frac{k^{2}}{8\pi^{2}}\, e^{-k^{2}\mu^{2}\Sigma^{2}} \\
\label{eq:Kaiser}\tilde{P}(k,\mu) &=& \left(b+f\mu^{2}\right)^{2} P(k) \\
\label{eq:Veff} V_{\rm eff}(k,\mu)&=&\left[\frac{\bar{n}\tilde{P}(k,\mu)}{\bar{n}\tilde{P}(k,\mu)+1}\right]^{2} V_{\rm sur}\end{aligned}$$ where $\tilde{P}(k,\mu)$ and $V_{\rm eff}$ denote the power spectrum in redshift space and the effective volume respectively, $\bar{n}$ is the mean galaxy number density, and $V_{\rm sur}$ is the actual volume of the redshift survey. We have used the Kaiser formula, i.e., Eq (\[eq:Kaiser\]), to evaluate $\tilde{P}(k,\mu)$, where $P(k)$ is the linear matter spectrum calculated using [CAMB]{} [@camb], and $b$ and $f$ are the linear bias and the linear growth rate respectively. To account for the Finger of God (FoG) effect, we have chosen $\Sigma$ to be $4$ Mpc, which is consistent with simulations.
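As a concrete illustration of the Fisher matrix integral above, the following sketch numerically integrates the Fisher matrix for the two Kaiser parameters $(b, f)$. It is not the paper's pipeline: the power-law $P(k)$, the number density `n`, the survey volume `Vsur` and the units are illustrative assumptions standing in for the CAMB spectrum and the SKA specifications.

```python
import numpy as np

def fisher_bf(b=1.5, f=0.7, n=3e-4, Vsur=1.0e9, Sigma=4.0,
              kmin=1e-3, kmax=0.2):
    """Toy Fisher matrix for the Kaiser parameters (b, f).

    Assumptions (not from the text): a power-law P(k) stands in for the
    CAMB linear spectrum; n, Vsur and the units are purely illustrative.
    """
    mu = np.linspace(-1.0, 1.0, 201)
    k = np.linspace(kmin, kmax, 400)
    K, MU = np.meshgrid(k, mu)                    # shape (201, 400)
    P = 1e4 * (K / 0.1) ** -1.5                   # toy linear matter spectrum
    Pt = (b + f * MU**2) ** 2 * P                 # Kaiser redshift-space spectrum
    Veff = (n * Pt / (n * Pt + 1.0)) ** 2 * Vsur  # effective survey volume
    damp = np.exp(-K**2 * MU**2 * Sigma**2)       # Finger-of-God damping
    # derivatives of ln P~(k, mu) with respect to b and f
    dlnP = [2.0 / (b + f * MU**2), 2.0 * MU**2 / (b + f * MU**2)]
    dk, dmu = k[1] - k[0], mu[1] - mu[0]
    F = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            integrand = dlnP[i] * dlnP[j] * Veff * K**2 * damp / (8.0 * np.pi**2)
            F[i, j] = integrand.sum() * dk * dmu  # simple 2D Riemann sum
    return F

F = fisher_bf()
errs = np.sqrt(np.diag(np.linalg.inv(F)))         # marginalised errors on (b, f)
```

Inverting `F` gives the marginalised errors; adding more parameters simply extends the `dlnP` list of logarithmic derivatives.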
The Fisher matrix formulae for CMB and WL surveys are elaborated in [@Zhao:2008bn].
Specifications of future SKA HI surveys
---------------------------------------
A future SKA HI redshift survey will trace the galaxies at radio wavelengths, and the redshifts will be measured precisely using the emission lines. In this work, we consider Phase 1 and Phase 2 of SKA HI surveys (dubbed SKA1 and SKA2 respectively). SKA1 will achieve an RMS flux sensitivity of $S_{\rm rms}\simeq 70 - 100 \mu$Jy with SKA1-MID or SUR, surveying over $5000$ deg$^2$ in 10,000 hours. The expected total number of galaxies in Phase 1 is roughly 5 million at redshift $z\lesssim0.5$ with a $5\sigma$ detection. In Phase 2, a 10,000 hours survey over $30,000$ deg$^2$ will detect one billion galaxies at a $10\sigma$ detection level. The expected galaxy distribution and bias for SKA1 (SKA2) is shown in Fig \[fig:nz\]. For more details of the survey specifications, see [@SKAsurvey]. Although SKA1 is not able to compete with the BOSS survey, SKA2 will surpass any planned spectroscopic surveys in the optical bands at $z\lesssim1.4$.
Cosmological parameters
-----------------------
To be generic, we parameterise the universe using the parameters $$\label{eq:para} P=\{\Omega_b h^2, \Omega_c h^2, h, \tau, n_s, A_s, w_i, \mu_{ij}, \gamma_{ij}\}$$ where $\Omega_b h^2$ and $\Omega_c h^2$ are the energy densities of baryons and cold dark matter respectively, $h$ is the Hubble constant, $\tau$ is the optical depth, and $n_s$ and $A_s$ are the spectral index and the amplitude of the primordial power spectrum respectively. $w$ denotes the equation of state of dark energy. In general, we treat $w(z)$ as an unknown function and determine how many of its degrees of freedom can be constrained using the PCA method [@wPCA1; @wPCA2; @wPCA3; @MGPCA1; @MGPCA2; @MGPCA3]. To do this, we bin $w$ in the late-time universe, namely $0 \leq z \leq 30$, using $M+1$ $z$-bins, and consider the value of $w$ in each bin as an independent parameter. Since the surveys we consider in this work will not be able to probe $z>3$ in detail, we use $M$ bins linear in $z$ for $0\leq z \leq 3$ and a single bin for $3 \leq z \leq 30$.
The $\mu$ and $\gamma$’s are modified gravity parameters and they are defined as follows.
In Newtonian gauge, the linear scalar perturbations to the flat Friedmann-Robertson-Walker metric read, $$\label{FRW}
ds^2=-a^2(\eta)[(1+2\Psi(\vec{x},\eta))d\eta^2-(1-2\Phi(\vec{x},\eta))d\vec{x}^2],
\nonumber$$ where $\eta$ is the conformal time and $a(\eta)$ the scale factor. In Fourier space, one can write [@Hu:2007pj; @BZ], $$\begin{aligned}
\label{parametrization-Poisson} k^2\Psi&=&-\mu(k,a) 4\pi G a^2\rho\Delta \\
\label{gamma}\Phi/\Psi&=&\gamma(k,a)\end{aligned}$$ where $\Delta$ is the comoving matter density perturbation. The function $\gamma$ describes anisotropic stresses, while $\mu$ describes a time- and scale-dependent rescaling of Newton’s constant $G$, as well as the effects of DE clustering or massive neutrinos. In $\Lambda$CDM, the anisotropic stress due to radiation is negligible during matter domination, thus $\mu=\gamma=1$.
Similar to $w(z)$, we treat $\mu(k,a)$ and $\gamma(k,a)$ as unknown functions and forecast how well we can constrain the eigenmodes of them using PCA. Since they are 2-variable functions in both $k$ and $a$, we have to [*pixelise*]{} them in the $(k,z)$ plane. We pixelise the late-time and large-scale universe ($0 \leq z \leq 30,10^{-5} \leq k \leq 0.2~{\rm
h}\,{\rm Mpc}^{-1}$) into $M+1$ $z$-bins and $N$ $k$-bins, with each of the $(M+1)\times N$ pixels having independent values of $\mu_{ij}$ and $\gamma_{ij}$. We consider $w(z)$ as another unknown function, allowing each of the $M+1$ $z$-bins to have an independent value of $w_i$. We use $M$ bins linear in $z$ for $0 \leq z \leq 3$ and a single bin for $3 \leq z \leq 30$. We choose $M=N=20$ and have checked that this pixelisation is fine enough to ensure the convergence of the results. We use logarithmic $k$-bins on superhorizon scales and linear $k$-bins on subhorizon scales, to optimize computational efficiency. As in [@Zhao:2008bn], we only consider information from scales well-described by linear perturbation theory, which is only a fraction of the $(k,z)$-volume probed by future surveys. Since the evolution equations [@Zhao:2008bn] contain time-derivatives of $\mu(k,z)$, $\gamma(k,z)$ and $w(z)$, we follow [@wPCA2] and [@MGPCA1] and use hyperbolic tangent functions to represent steps in these functions in the $z$-direction, while steps in the $k$-direction are left as step functions. The total number of free parameters in our forecast is therefore $(M+1)(2N+1)+17=878$.
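The bookkeeping of this pixelisation can be sketched in a few lines. The snippet below checks the quoted parameter count of 878 for $M=N=20$ and builds a tanh-smoothed binned $w(z)$; the transition width `width=0.05` is an illustrative assumption, since the text only states that hyperbolic tangents represent the $z$-direction steps.

```python
import numpy as np

M, N = 20, 20                    # z-bins (plus one high-z bin) and k-bins
n_w = M + 1                      # binned w(z)
n_mg = 2 * (M + 1) * N           # pixelised mu(k,z) and gamma(k,z)
n_other = 17                     # remaining cosmological parameters
n_total = n_w + n_mg + n_other   # = (M+1)(2N+1) + 17

def smooth_bins(z, z_edges, values, width=0.05):
    """Binned function of z with tanh-smoothed steps between bins.

    `width` is an illustrative assumption; steps in k stay step functions.
    """
    out = np.full_like(z, values[0], dtype=float)
    for z_e, lo, hi in zip(z_edges[1:-1], values[:-1], values[1:]):
        out += 0.5 * (hi - lo) * (1.0 + np.tanh((z - z_e) / width))
    return out

z = np.linspace(0.0, 3.0, 300)
# three w-bins with edges at z = 0, 1, 2, 3 and fiducial-like values
w_z = smooth_bins(z, np.linspace(0.0, 3.0, 4), np.array([-1.0, -0.9, -1.1]))
```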
![Upper panel: The expected galaxy distribution for SKA1 and SKA2; Lower panel: the corresponding bias as a function of redshift.[]{data-label="fig:nz"}](nz.eps)
The principal component analysis (PCA) method
---------------------------------------------
The PCA method is a traditional method in data analysis. It helps to identify the [*principal components*]{} (PCs) of the data by diagonalising the data covariance matrix. In cosmology, PCA has been used to determine the well-constrained combinations [^6] of cosmological parameters, e.g., the binned equation of state $w(z)$ of dark energy [@wPCA1; @wPCA2; @wPCA3] and the pixelised 2-variable functions $\mu(k,z)$ and $\gamma(k,z)$ [@MGPCA1; @MGPCA2; @MGPCA3], which quantify the deviation from general relativity on cosmological scales.
Generically, the PCA method can be formulated as follows. Let $F$ be an $N \times N$ Fisher information matrix for a parameter set $P=\{p_1,p_2,...,p_N \}$. We can find the [*eigenmodes*]{} of $F$ by matrix diagonalisation, namely, $$F = W^{T}\Lambda W,$$ where $\Lambda={\rm diag}(\lambda_1,\lambda_2,...,\lambda_N)$, and $W$ is the transformation matrix relating $P$ to $Q$, a set of new parameters $Q=\{q_1,q_2,...,q_N \}$. $P$ and $Q$ are related via $$Q=WP.$$ The matrices $\Lambda$ and $W$ store the eigenvalues and eigenvectors of $F$: $W$ tells us how to map the old correlated parameters, the $p$’s, to the new orthogonal ones, the $q$’s, and $\Lambda$ quantifies the uncertainty on the $q$’s. The best measured parameter is the $q$ with the minimal error (the one corresponding to the maximum entry in matrix $\Lambda$).
For dark energy, the $p$’s are the binned $w(z)$ in redshift $z$. $W$ helps to locate the ‘sweet-spots’ (the redshifts where the error on $w(z)$ is minimised), and $\Lambda$ quantifies how ‘sweet’ they are (the size of the errors when measuring these modes). For modified gravity, the $p$’s are the pixelised functions $\mu(k,z)$ and $\gamma(k,z)$ in the $(k,z)$ plane, and the eigenvectors in this case are 2D surfaces.
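A minimal numerical sketch of this diagonalisation, using a random positive-definite toy matrix in place of a survey Fisher matrix:

```python
import numpy as np

def principal_components(F):
    """Diagonalise F = W^T Lambda W; rows of W are the eigenvectors e_i.

    Returns the eigenvalues (best-measured mode first), W, and the errors
    sigma(q_i) = 1/sqrt(lambda_i) on the new uncorrelated parameters q = W p.
    """
    lam, vecs = np.linalg.eigh(F)        # F = vecs @ diag(lam) @ vecs.T
    order = np.argsort(lam)[::-1]        # sort: largest eigenvalue first
    lam, W = lam[order], vecs[:, order].T
    return lam, W, 1.0 / np.sqrt(lam)

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
F = A.T @ A                              # random positive-definite toy Fisher matrix
lam, W, sigma = principal_components(F)
```

The best measured mode is the first row of `W`, with error `sigma[0]`; errors grow monotonically down the list of modes.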
Results
=======
For a given set of parameter values, we use MGCAMB [@Zhao:2008bn; @mgcamb] to compute the observables. We generate numerical derivatives of observables with respect to parameters, and use the specifications for the experiments to compute the Fisher information matrix, which defines the sensitivity of the experiments to these parameters (see @Zhao:2008bn for computational details). Our fiducial values are in all cases $\Lambda$CDM: $\gamma_{ij} = \mu_{ij} = -w_i = 1$ for all $i$ and $j$, and the fiducial values of the other parameters are those of Planck.
Besides the SKA HI surveys, we consider the two-point correlations (both auto- and cross-) between weak lensing shear (WL), and cosmic microwave background (CMB) temperature anisotropy, plus the CMB E-mode polarization and its correlation with the CMB temperature. Detailed descriptions of our assumptions for each measurement are found in [@Zhao:2008bn]. WL is sourced by the sum of the potentials $(\Psi+\Phi)$. CMB data probe the Integrated Sachs-Wolfe effect (ISW) which depends on $\partial(\Phi+\Psi)/\partial\eta$. Thus, measuring WL over multiple redshift bins, along with CMB data, yields information about the relation between $\Psi$ and $\Phi$ and their response to matter density fluctuations. For our forecasts, we assume the following probes: Planck [@Planck] for CMB, and DES for WL.
In what follows in the section, we shall present the results of our forecast for dark energy and modified gravity respectively.
Dark Energy constraints
-----------------------
In this section, we focus on DE constraints in the framework of general relativity. Therefore we fix the $\mu$ and $\gamma$ pixels to be unity, but vary the remaining parameters in Eq (\[eq:para\]) simultaneously. After marginalising over other parameters, we perform a PCA on the $w$ bins.
The result is shown in Figs \[fig:w-eval\] and \[fig:w-evec\]. In Fig \[fig:w-eval\], we show the forecasted 68% CL errors on the coefficients of the principal components (PCs) for four different data combinations. As shown, at the level of $\sigma(\alpha_i)<0.5$, Planck alone can only constrain 1 mode (essentially the distance to the last scattering surface); Planck + DES can constrain 2 modes, while adding in SKA1 or SKA2 allows 3 or 5 eigenmodes, respectively, to be constrained to this level. In particular, the best measured modes using SKA1 and SKA2 (combined with Planck and DES) can be determined at the level of $\sigma(\alpha_1)=0.04$ and $\sigma(\alpha_1)=0.023$ respectively. This is a significant improvement given that $\sigma(\alpha_1)=0.25$ (Planck alone) and $\sigma(\alpha_1)=0.13$ (Planck + DES).
The eigenvectors for the best constrained modes are shown in Fig \[fig:w-evec\]. Roughly speaking, the $n$th best measured mode has $n-1$ nodes, corresponding to the $(n-1)$th time derivative of $w$. Having SKA helps in determining the higher derivatives of $w(z)$, which is key to probing dark energy dynamics.
[![The forecasted 68% CL measurement error on $\alpha_i$, the coefficient of the $i$th principal components of $w(z)+1$, namely, $w(z)+1=\sum_i \alpha_i e_i(z)$, using different data combinations illustrated in the legend. A weak prior of $\sigma(w(z))<1$ was assumed.[]{data-label="fig:w-eval"}](w_eval.eps "fig:")]{}
[![The best determined eigenvectors (with errors less than $0.5$) of $w(z)$ for different data combinations shown in the legends. The modes are shown, in order from better constrained to worse, as black solid, red dashed, blue dash-dot, purple dash-dot-dot and brown short dash-dot curves. The short dashed green horizontal line shows $e_i(z)=0$. []{data-label="fig:w-evec"}](w_evec.eps "fig:")]{}
Modified Gravity constraints
---------------------------
![The forecasted 68% CL error on the coefficients of the principal components of $\mu(k,z)$ for different data combinations shown in the legend.[]{data-label="fig:MG-eval"}](MG_eval.eps)
![Eigensurfaces for the first three best constrained modes of $\mu$ after marginalisation over all other cosmological parameters. Top row: Planck + DES; Middle: Planck + DES + SKA1; Bottom: Planck + DES + SKA2.[]{data-label="fig:MG-evec"}](MG_evec.eps)
Here we consider the most general case, in which we drop the assumption of general relativity. Therefore we vary all the parameters in Eq (\[eq:para\]) simultaneously and focus on the constraint on the $\mu$ and $\gamma$.
Let us study the expected errors on $\mu(k,z)$ [^7]. The error on any $\mu_{ij}$ is large, and the pixels have highly correlated errors. We take only the $\mu_{ij}$ block of the covariance matrix, thus marginalizing over all other parameters, including the [$w_i$]{} and [$\gamma_{ij}$]{}. We invert this block to obtain the Fisher matrix for our $\mu$ values, $F_{(\mu)}$, and diagonalize $F_{(\mu)}$ by writing $F_{(\mu)}=W^{T}\Lambda{W}$. We expect, from existing data, that variations in $\mu$ larger than $\mathcal{O}(1)$ are unlikely. We enforce this by applying a prior $\lambda_m>1$ to the matrix $F_{(\mu)}$. This procedure does not affect the well-measured modes, but gives a reference point with respect to which we define poorly constrained modes. Since we compute the full covariance matrix, then marginalize over all but the parameter(s) of interest, our procedure yields the results that we would get for $\mu$ if we simultaneously measured $w$, $\gamma$, and $\mu$. This analysis can be repeated for $\gamma$ or $w$.
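The block-marginalisation and prior step described above can be sketched as follows. The toy dimensions and the implementation of the prior $\lambda_m>1$ as an added identity matrix are assumptions for illustration; the actual $\mu_{ij}$ block has $(M+1)\times N$ pixels.

```python
import numpy as np

def fisher_for_block(F_full, idx, prior=1.0):
    """Fisher matrix for a parameter subset, others marginalised over.

    Invert the full Fisher matrix, keep the covariance block of the
    parameters of interest (this marginalises over everything else),
    re-invert, then add a weak diagonal prior so that all eigenvalues
    exceed `prior` (one way to implement lambda_m > 1; an assumption).
    """
    C = np.linalg.inv(F_full)
    F_block = np.linalg.inv(C[np.ix_(idx, idx)])
    return F_block + prior * np.eye(len(idx))

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 8))
F_full = A.T @ A                           # toy full Fisher matrix (8 parameters)
F_mu = fisher_for_block(F_full, idx=[0, 1, 2], prior=1.0)
lam = np.linalg.eigvalsh(F_mu)             # eigenvalues of the mu-block Fisher
```

Diagonalising `F_mu` as in the PCA section then yields the well- and poorly-constrained $\mu$ modes; the same routine applies to the $\gamma$ or $w$ blocks.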
Measurements of WL and CMB probe combinations of $\Phi$ and $\Psi$, so the effects of $\gamma$, which affects only $\Phi$, are mixed with those of $\mu$, which affects both potentials. This yields a degeneracy between $\mu$ and $\gamma$. The degeneracy can be broken, however, when SKA data are added, since the SKA measures $\Psi$ alone through the RSD.
From Fig. \[fig:MG-eval\], we see that Planck + DES cannot constrain any modes at the 10% level, but adding in SKA1 easily helps to constrain 7 modes to this level. SKA2 further increases this number to 20. In particular, 2 and 7 modes can be constrained at the sub-percent level using SKA1 and SKA2 respectively.
Fig. \[fig:MG-evec\] shows three best constrained eigenmodes for $\mu$ for different data combinations. A first observation is that the modes with more nodes (a node appears when eigensurfaces crosses zero) are less constrained. This is intuitive: noisy modes are worse constrained than the smooth modes. The best modes are mainly functions of $k$ and not $z$. This is partly because the total observable volume in the radial ($z$) direction is limited by the dimming of distant objects and, ultimately, the fact that structures only exist at relatively low $z$. Also, it is related to us considering only linear perturbations in our analysis, since at small $z$ the observable volume is too small to fit the small $k$-modes that are still in the linear regime. Hence, there is more volume available for studying the spatial distribution of structure than the radial distribution.
For Planck + DES, we see a clear degeneracy between the $k$ and $z$ dependences of the modes. This is because changing $\mu$ at some point $(k,z)$ has nearly the same impact on the observables as a change at a larger scale but later time. Interestingly, this degeneracy goes away when SKA data are added. This is simply because SKA constrains $\mu$ very well via the RSD effect, which means that the data can distinguish between variations of $\mu$ in $k$ and in $z$.
Conclusion and Discussions
==========================
In this work we have applied the PCA method to investigate the constraints on dark energy and modified gravity from future SKA HI redshift surveys, combined with CMB (Planck) and WL (DES) surveys. The PCA method is ideal for investigating dark energy and modified gravity in a nonparametric way, efficiently minimising the theoretical bias stemming from choosing [*ad hoc*]{} functional forms for unknown functions.
We study dark energy and modified gravity separately. For the dark energy equation of state, we find that SKA Phase 1 (2) can constrain $3$ ($5$) eigenmodes of $w(z)$ well. The errors on the best measured modes can be reduced to 0.04 and 0.023 for SKA1 and SKA2 respectively, making it possible to probe dark energy dynamics [@wrecon]. For modified gravity, SKA1 (2) can constrain $7$ ($20$) eigenmodes of $\mu(k,z)$ at the 10% sensitivity level. In particular, 2 and 7 modes can be constrained at the sub-percent level using SKA1 and SKA2 respectively.
Imaging and redshift surveys are highly complementary when constraining cosmological parameters, especially for modified gravity models [@IRx1; @IRx2; @IRx3]. The method developed in this work can be directly applied to future surveys of LSST [@LSST] and Euclid [@Euclid]. For synergy between SKA and LSST and Euclid, see [@SKALSST] and [@SKAEuclid].
[**Acknowledgement:**]{}\
GBZ is supported by Strategic Priority Research Program “The Emergence of Cosmological Structures” of the Chinese Academy of Sciences, Grant No. XDB09000000, by the 1000 Young Talents program in China, and by the 973 Program grant No. 2013CB837900, NSFC grant No. 11261140641, and CAS grant No. KJZD-EW-T01. All numeric calculations were performed on the SCIAMA2 supercomputer at University of Portsmouth. RM and MS are supported by the South African SKA Project and the National Research Foundation. DB and RM are supported by the UK Science & Technology Facilities Council (grant No. ST/K0090X/1). AR is supported by the Templeton Foundation. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
[^1]: More details of the Dark Energy Survey are available at http://www.dark-energysurvey.org/
[^2]: More details of the eBOSS survey are available at http://www.sdss.org/sdss-surveys/eboss/
[^3]: More details of the SKA survey are available at https://www.skatelescope.org/
[^4]: There are other ways to parameterise the effect of MG; e.g., see [@Baker:2012zs].
[^5]: Note that this is the Fisher matrix for a given redshift bin. The final Fisher matrix is the sum over the Fisher matrices of individual redshift bins.
[^6]: The PCA method discussed here identifies the best measured [*linear*]{} combinations of cosmological parameters, although extensions exist; e.g., the kernel PCA method [@kPCA] can optimise the constraint on [*nonlinear*]{} combinations of parameters.
[^7]: We do not show the $\gamma$ constraint here since redshift surveys do not constrain $\gamma$ directly.
In frustrated magnetism, the term [*spin ice*]{} was recently coined by Harris and coworkers [@Harris] to describe the analogy that exists between the statistical physics of certain geometrically frustrated Ising pyrochlore magnets and proton ordering in the hexagonal phase of ice (${\rm I_{h}}$) [@Bernal; @Pauling; @Anderson]. For the Ising pyrochlore systems ${\rm Ho_{2}Ti_{2}O_{7}}$ and ${\rm Dy_{2}Ti_{2}O_{7}}$, the ${\rm Ho^{3+}}$ and ${\rm Dy^{3+}}$ rare earth magnetic moments reside on a network of corner sharing tetrahedra (Fig. \[fig1\]). Each moment is forced by single-ion anisotropy to lie along the axis joining the centers of the two tetrahedra that it belongs to[@Harris; @Ramirez]. For a simple theoretical model considering only nearest neighbor ferromagnetic (FM) exchange, the groundstate is macroscopically degenerate, but is required to have two moments pointing in and two pointing out of every tetrahedron, a constraint that maps exactly onto the two short and two long proton bonds and the ice-rules for their arrangement in ${\rm I_{h}}$ [@Bramwell; @Harris2]. This nearest neighbor FM model shows no ordering and is characterized by a broad Schottky-like peak in the magnetic specific heat[@Harris2].
Both ${\rm Ho_{2}Ti_{2}O_{7}}$ [@Harris; @Bramwell2] and ${\rm Dy_{2}Ti_{2}O_{7}}$ [@Ramirez; @denHertog] show qualitative properties roughly consistent with the basic spin ice picture of the simple nearest neighbor FM model [@Bramwell; @Harris2]. However, it has been shown recently that rather than nearest neighbor FM exchange, it is surprisingly the large dipolar interaction present in these materials that is responsible for their spin ice behavior[@Bramwell2; @denHertog; @Gingras; @note2; @note]. For a model which we call [*dipolar spin ice*]{}, with the long range nature of the dipolar interaction properly handled using Ewald summation techniques, numerical results show a lack of magnetic ordering down to very low temperatures[@denHertog]. Furthermore, the dipolar spin ice model agrees quantitatively very well with the specific heat data for ${\rm Dy_{2}Ti_{2}O_{7}}$ [@Ramirez] and ${\rm Ho_{2}Ti_{2}O_{7}}$ [@Bramwell2], as well as with neutron scattering measurements on the latter material[@Bramwell2]. In other words, while the simple nearest neighbor FM model provides a simple and qualitative understanding of the spin ice phenomenon, there is now strong evidence that the dipolar spin ice model with its long range dipolar interactions provides a quantitatively accurate description of experimental results on real materials [@Bramwell2; @denHertog].
![The lower left ‘downward’ tetrahedron of the pyrochlore lattice shows Ising spins (arrows) whose axes meet at the middle of the tetrahedron. For clarity, black and white circles on the lattice points denote other spins. White represents a spin pointing into a downward tetrahedron while black is the opposite. The entire lattice is shown in an ice-rules state (two black and two white sites for every tetrahedron). The hexagon (thick gray line) shows a minimal loop move, which corresponds to reversing all colors (spins) on the loop to produce a new ice-rules state.[]{data-label="fig1"}](pyro_corel.eps){width="7.cm"}
As in the case of ${\rm I_{h}}$, it is unclear for dipolar spin ice whether the long range interaction should permit an absence of ordering (and a nonzero entropy) down to zero temperature. The dipolar interaction is itself FM at nearest neighbor, and is thus prone to spin ice correlations. However, [*a priori*]{} one might expect that its longer range component should lift the nearest neighbor degeneracy and induce the selection of an ordered state within the ice-rules manifold. We show in this work that this is precisely the case. Specifically, the dipolar spin ice model with long range interactions does possess a unique groundstate (apart from trivial global symmetry operations) which develops at very low temperature. However, for local dynamical processes (such as single spin fluctuations), the development of this ground state is completely dynamically inhibited. As we discuss below, this occurs because of high energy barriers separating quasi-degenerate ice-rules states. This allows paramagnetic spin ice behavior to occur despite the absence of any special spin or space symmetry in the system that would [*a priori*]{} prevent magnetic ordering. In this paper we explore the low temperature ordering properties of dipolar spin ice by taking advantage of ‘loop moves’ incorporated into a standard Metropolis Monte Carlo algorithm, a method considered previously in the context of two-dimensional square ice models[@Barkema]. Such moves allow us to explore degeneracy lifting effects within the ice-rules manifold in an efficient manner, something which is not possible via single spin flip fluctuations. We present strong numerical evidence for a first order phase transition at extremely low temperature in the dipolar spin ice model in zero field, a transition that recovers the entire residual low temperature magnetic entropy of the system.
For the pyrochlore lattice with Ising spins defined by local axes, the Hamiltonian with nearest neighbor exchange and long range dipolar interactions is [@Bramwell2; @denHertog; @Gingras]: $$\begin{aligned}
\label{eqn1}
H&=&-J\sum_{\langle ij\rangle}{\bf S}_{i}^{z_{i}}\cdot{\bf S}_{j}^{z_{j}}
\nonumber \\
&+& Dr_{{\rm nn}}^{3}\sum_{i>j}\left[\frac{{\bf S}_{i}^{z_{i}}\cdot{\bf S}_{j}^{z_{j}}}{|{\bf r}_{ij}|^{3}} - \frac{3({\bf S}_{i}^{z_{i}}\cdot{\bf r}_{ij})({\bf S}_{j}^{z_{j}}\cdot{\bf r}_{ij})}{|{\bf r}_{ij}|^{5}}\right] \; ,\end{aligned}$$ where the spin vector ${\bf S}_{i}^{z_{i}}$ labels the Ising moment of magnitude $|S|=1$ at lattice site $i$, oriented along the [*local*]{} Ising $[111]$ axis $z_{i}$ discussed earlier. Here $J$ represents the exchange energy and $D=(\mu_{0}/4\pi)g^{2}\mu^{2}/r_{\rm nn}^{3}$ sets the dipolar energy scale, with $r_{\rm nn}$ the nearest neighbor distance. However, because of the local Ising axes, the effective nearest neighbor energy scales are $J_{\rm nn}\equiv J/3$ and $D_{\rm nn}\equiv 5D/3$.
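The effective couplings quoted above follow directly from the geometry of the local $\langle 111\rangle$ axes. The following sketch verifies the factors $-1/3$ and $5/3$ for a single nearest neighbor bond of a tetrahedron (standard pyrochlore geometry, not the paper's full Ewald-summed lattice calculation):

```python
import numpy as np

# One nearest-neighbour bond of a tetrahedron with the standard local
# <111> Ising axes (a geometry check, not the Ewald lattice sum).
z1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)    # local axis at site 1
z2 = np.array([1.0, -1.0, -1.0]) / np.sqrt(3.0)  # local axis at site 2
r = np.array([0.0, -2.0, -2.0])                  # bond vector, site 1 -> site 2
rhat = r / np.linalg.norm(r)

axis_dot = z1 @ z2                               # -1/3, hence J_nn = J/3
dipolar_factor = axis_dot - 3.0 * (z1 @ rhat) * (z2 @ rhat)  # 5/3, hence D_nn = 5D/3
```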
As described in Ref. [@denHertog], the long range nature of the dipolar interactions can be handled conveniently by the Ewald method. In that work, extensive numerical analysis via single spin flip Monte Carlo simulations found no evidence of a transition to long range order. Rather, short range order dominated by ice-rules correlations was observed down to low temperatures, similar to that found in the nearest neighbor FM model[@note].
Qualitatively, the dynamics of the two models appear to be very similar. As the temperature is lowered, significant thermal barriers are created by the energy cost involved in fluctuating [*out*]{} of the ice-rules manifold. With single spin flips, fluctuations [*between*]{} states [*within*]{} the ice-rules manifold are also suppressed, as it is impossible to move between such states without first breaking the two-in/two-out ice-rules. Such thermal barriers produce non-trivial and extremely slow dynamics. If a unique groundstate exists within the plethora of ice-rules states ($\sim (3/2)^{N/2}$) of the dipolar spin ice model (Eq. 1), these thermal barriers make the probability of reaching it in a numerical simulation using conventional spin flips exceedingly small. Consequently, the question concerning the nature of the groundstate becomes difficult to answer using standard numerical techniques, and a different procedure must be applied[@Barkema]. Since we found in Ref.[@denHertog] that long range dipolar interactions give rise to spin ice behavior, we take the exactly degenerate ice-rules states as the starting point for identifying the low energy excitations (quasi zero modes) of Eq. (1). This is entirely analogous to the approach taken in considering the so-called ‘energetic ice models’ in two-dimensional square ice [@Barkema].
In Fig. \[fig1\] we denote each site of the pyrochlore lattice by a white or black circle which represents a spin pointing into or out of a ‘downward’ facing tetrahedron, respectively. In this particular example, the spin configuration shown forms an ice-rules state that can be transformed into another ice-rules state by reversing all the colors (spins) on the loop denoted by the gray hexagon. In general, six spins form the shortest loop, while larger loops are also possible. A loop can be constructed by simply choosing a starting lattice site and tracing out a closed path that involves tetrahedra which have exactly two spins on the path (see Fig. 1). Furthermore, each pair of spins which are neighbors on the path are such that one is pointing into and the other pointing out of their shared tetrahedron. As seen in Fig. \[fig1\], such a loop is constructed of alternating black and white circles.
For our numerical study of the dipolar spin ice model, this type of ‘loop move’ was utilized in conjunction with conventional single spin flip dynamics. Specifically, such loops are identified by allowing a wandering path to form a loop whenever it encounters any previously visited site and ignoring any ‘dangling’ spins in the path. This allows for a large number of short loops to be created, with an average length that tends to a finite value as the system size is increased. As explained above for the dipolar system, such ‘loop reversal’ moves are not true zero modes, but involve a small gain or lowering of the energy (small compared to $J_{nn}+D_{nn}$) which is handled by a standard Metropolis algorithm[@note4].
Our numerical simulations for the dipolar spin ice model were carried out on system sizes up to 2000 spins (of cubic unit cell length L=5) with $O(10^{5})$ spin flips per spin and $O(10^{5})$ loop moves. For all interaction parameters $J_{\rm nn}$ and $D_{\rm nn}$ which show spin ice behavior using single spin flip dynamics only ($J_{\rm nn}/D_{\rm nn} \gtrsim -0.91$)[@denHertog], we find that the acceptance ratio of the loop moves increases at low temperature as the system enters the spin ice regime, before dropping to zero just below the temperature at which the system appears to undergo a very sharp first order phase transition to a long range ordered state obeying the ice-rules.
In Fig. \[fig2\] we present specific heat data obtained for a system with interaction parameters $J_{\rm nn}$ and $D_{\rm nn}$ identified in Ref. [@denHertog] for the spin ice material ${\rm Dy_{2}Ti_{2}O_{7}}$. Using a single spin flip Monte Carlo algorithm, spin ice correlations develop over a large temperature regime (signified by the broad peak around 1.1 K), before the system dynamically slows down into a disordered ice-rules state at low temperature. Using the loop algorithm in combination with single spin flips, the higher temperature data is reproduced before a very sharp transition is observed at $T_{c}\simeq 0.18\; {\rm K}$, with a latent heat observed at the transition. The energy probability distribution displays a double-peak feature in a narrow temperature region close to $T_c$, another indicator that the transition is first order. To assess in a more quantitative way the nature of the phase transition, a finite-size scaling study was done (see inset of Fig. 2). Because of the extremely sharp nature of the specific heat at $T_{c}$, the method of slowly cooling in a Monte Carlo simulation with discrete temperature steps could not give sufficiently accurate data to resolve $C_{peak}$ within reasonable computer time. To avoid this problem, simulations were performed in a multicanonical ensemble [@Hansmann] at a single temperature near $T_{c}$. This data was then re-weighted using Ferrenberg and Swendsen’s technique [@Ferrenberg], which allowed us to obtain the appropriate thermodynamic quantities to any degree of temperature resolution required.
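The reweighting step admits a compact illustration. The snippet below is a minimal single-histogram reweighting routine in the spirit of the Ferrenberg-Swendsen technique, not the authors' code; the two-level toy system used to check it is our own assumption, chosen because its exact answer is known.

```python
import math
import random

def reweight_mean_energy(energies, beta0, beta):
    """Estimate <E> at inverse temperature beta from energy samples
    drawn in a canonical simulation at beta0 (single-histogram reweighting)."""
    dbeta = beta - beta0
    e0 = min(energies)  # shift energies for numerical stability of the exponentials
    weights = [math.exp(-dbeta * (e - e0)) for e in energies]
    norm = sum(weights)
    return sum(w * e for w, e in zip(weights, energies)) / norm

# Toy check on a two-level system (E = 0 or 1), where the exact answer
# at any beta is <E> = exp(-beta) / (1 + exp(-beta)).
random.seed(1)
beta0, beta = 1.0, 1.2
p1 = math.exp(-beta0) / (1.0 + math.exp(-beta0))  # P(E = 1) at beta0
samples = [1.0 if random.random() < p1 else 0.0 for _ in range(200000)]
estimate = reweight_mean_energy(samples, beta0, beta)
exact = math.exp(-beta) / (1.0 + math.exp(-beta))
```

The same weights can be applied to any observable recorded along with the energy, which is how a quantity such as $C_{peak}$ can in principle be resolved at arbitrarily fine temperature resolution around $T_c$ from a single run.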
The ordered phase is similar to that found in the order by disorder transition in the antiferromagnetic FCC Ising model[@Wengel]. In that frustrated system, an ordering of antiferromagnetically stacked FM planes is found. For the dipolar spin ice system considered here, the ordering vector ${\bf q}$ lies parallel to one of the cubic axes directions, specifically ${\bf q}=(0,0,2\pi/a)$ or its starred directions. To construct the ordered state, first consider a starting tetrahedron with its six possible ice-rules states. For a given ordering vector ${\bf q}$, this tetrahedron selects one of the four possible spin configurations (two independent configurations and their spin-reversals, ${\bf S}_i
\rightarrow -{\bf S}_i$), with a total magnetic moment for the tetrahedron perpendicular to ${\bf q}$. The entire ordered state may then be described by planes (perpendicular to ${\bf q}$) of such tetrahedra. The wavelength defined by ${\bf q}$ physically corresponds to antiferromagnetically stacked planes of tetrahedra, where a given plane has tetrahedra of opposite configuration to the plane above and below it. In the inset of Fig. 3 we show one such groundstate with ordering vector ${\bf q}=(0,0,2\pi/a)$.
The transition to such a groundstate structure can be characterized by the multi-component order parameter $${ \Psi}_{\alpha}^{m} =\frac{1}{N}\left|\sum_{j=1}^{N/4}\sum_{a=1}^{4}
\sigma^{j}_{a}
{\rm e}^{i\phi^{m}_{a}}{\rm e}^{i{\bf q}_{\alpha}.{\bf r}_{j}}\right| \; .$$ Such a labeling is natural given that the pyrochlore lattice can be viewed as an FCC lattice with a ‘downward’ tetrahedral basis (see Fig. 1). Thus $j$ labels the FCC lattice points of the pyrochlore lattice, and the index $a$ sums over the four spins comprising the basis attached to each $j$. The index $\alpha$ labels the three possible symmetry related ${\bf q}$ ordering vectors. For a given ${\bf q}_{\alpha}$, as described above, there are two ice-rules configurations and their reversals which can each form a groundstate. Thus $m=1,2$ labels these possibilities with the phase factors $\{\phi^{m}_{a}\}$ describing the given configuration $m$. Each Ising variable $\sigma_a^j$ has value 1 (-1) when a spin points into (out of) its downward tetrahedron labeled by $j$.
As written in Eq.(2), ${\Psi}_{\alpha}^{m}$ has six degenerate components, each of which can take on a value between 0 and 1. Upon cooling through the transition, the system selects a unique ordered configuration, causing the corresponding component of ${\Psi}_{\alpha}^{m}$ to rise to unity and all others to fall to zero. The component selected by the ordering is equally likely to be any one of the six. Fig. 3 is a plot of $\left<\Psi \right>$ for two system sizes, where $\left<\Psi \right>=\sqrt{\sum_{m=1}^{2}\sum_{\alpha =1}^{3}\left(
{\Psi}_{\alpha}^{m} \right)^{2}}
$ is the magnitude of the multi-component order parameter. For $T<T_{c}$ the two lattice sizes produce identical order parameters. By contrast, $\left<\Psi \right>$ for the smaller lattice shows a somewhat more pronounced rounding near $T_{c}$, and an increased residual value for large $T$. These results show a clear discontinuity of the order parameter at $T_c$, and hence a first order transition to the long range ordered phase we have identified.
For all values of $J_{\rm nn}/D_{\rm nn}$ within the dipolar spin ice regime [@denHertog], we find a low temperature phase transition to the state discussed above. The transition is driven by the long range dipolar interactions and, therefore, $T_c\sim 0.18$K is independent of the strength of the nearest neighbor exchange $J$ ($T_c/D_{nn} \sim 0.08$). The observation of a finite ordering temperature using the algorithm presented here demonstrates that long range dipolar interactions between Ising spins on the pyrochlore lattice have no special exact symmetry that allows a macroscopically degenerate ground state. This conclusion is also suggested within a mean field analysis[@denHertog; @Gingras], which shows that as the truncation of long range dipolar interactions is pushed out to further distances (up to $10^{4}$ nearest neighbors), the maximal eigenvalues of the normal mode spectrum become only [*quasi-degenerate*]{} throughout the Brillouin zone, as opposed to the completely flat spectrum (and macroscopic degeneracy) we find for the nearest neighbor spin ice model [@Gingras]. Furthermore, the quasi-degenerate eigenvalues of the mean field theory have a very weak dispersion which is maximal at the FCC zone boundary and, therefore, predict the same ordering wavevector ${\bf q}$ found here.
![ Temperature dependence of the order parameter $\left<\Psi
\right>$ defined above for system sizes L=3 (triangles) and L=4 (squares). Inset: The ${\bf q} = (0,0,2\pi /a)$ groundstate projected down the z axis. The four tetrahedra making up the cubic unit cell appear as dark grey squares. The light grey square does not represent a tetrahedron; however, its diagonally opposing spins occur in the same plane. The component of each spin parallel to the z axis is indicated by a $+$ and $-$ sign.[]{data-label="fig3"}](FIG3.ps){width="7.cm"}
The question remains as to what extent our conclusions apply to the real spin ice materials ${\rm Ho_{2}Ti_{2}O_{7}}$[@Harris] and ${\rm Dy_{2}Ti_{2}O_{7}}$[@Ramirez]. The dipolar spin ice model may be an accurate description of these materials even at extremely low temperatures, while it is also possible that perturbations, $H^{\prime}$, exist beyond Eq. \[eqn1\] which could induce another type of groundstate selection. As for ${\rm I_{h}}$, however, irrespective of the origin of any ordering, its actual observation may depend critically on the dynamical behavior of the materials. The inability of single spin fluctuations to connect different ice-rules states in phase space shows that at low temperatures relaxation via local dynamics is extremely slow. For both ${\rm Ho_{2}Ti_{2}O_{7}}$ and ${\rm Dy_{2}Ti_{2}O_{7}}$, the transition temperature for the ordered phase observed in our simulations is well below the temperature at which single spin fluctuations over extended length scales (and out of the ice-rules manifold) are thermally frozen out. Thus, while theoretically an ordered phase induced by long range dipolar interactions between Ising spins on the pyrochlore lattice does exist, its experimental observation will depend acutely on the dynamical processes of the real materials. Furthermore, one requires that perturbations $H^{\prime}$ are negligible, i.e. that $H^{\prime}/D_{\rm nn}\lesssim T_{c}/D_{\rm nn}\lesssim 0.08$.
In conclusion, we predict that in the dipolar spin ice model, which is in quantitative agreement with experimental data on real systems in the temperature regime investigated so far [@Bramwell2; @denHertog], a very low temperature transition to a zero total moment structure exists with recovery of all residual entropy. However, it is unlikely that such a phase can be arrived at via conventional local dynamics. These results suggest that Ising pyrochlore magnets with long range dipolar interactions provide an even deeper analogy with the proton ordering in hexagonal water ice ${\rm I_{h}}$ than previously suggested.
We thank S. Bramwell and P. Holdsworth for useful discussions. R.M. acknowledges financial support from NSERC of Canada. M.G. acknowledges financial support from NSERC, Research Corporation and the Province of Ontario.
M. J. Harris [*et al.*]{}, Phys. Rev. Lett. [**79**]{}, 2554 (1997).
J. D. Bernal and R. H. Fowler, J. Chem. Phys. [**1**]{}, 515 (1933).
L. Pauling, [*The Nature of the Chemical Bond*]{} (Cornell University Press, Ithaca, 1945), p. 301. Previously, Anderson identified a connection between binary ordering in the spinel structure and the ice problem: Phys. Rev. [**102**]{}, 1008 (1956).
A. P. Ramirez [*et al.*]{}, Nature [**399**]{}, 333 (1999).
S. T. Bramwell and M. J. Harris, J. Phys. Condens. Matter [**10**]{}, L215 (1998).
M. J. Harris [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 4496 (1998).
S. T. Bramwell [*et al.*]{}, cond-mat/0101114.
B. C. den Hertog and M. J. P. Gingras, Phys. Rev. Lett. [**84**]{}, 3430 (2000).
M. J. P. Gingras and B. C. den Hertog, cond-mat/0012275.
The nearest neighbor exchange in ${\rm Ho_{2}Ti_{2}O_{7}}$ and ${\rm Dy_{2}Ti_{2}O_{7}}$ is antiferromagnetic, which by itself would trivially cause long range Néel order. See Refs. [@Bramwell2; @denHertog].
There are subtle differences between the correlations in the dipolar spin ice model and those found in the nearest neighbor spin ice model. For example, the dipolar spin ice model reproduces a larger amount of detail in the structure factor of ${\rm Ho_{2}Ti_{2}O_{7}}$. See Ref. [@Bramwell2].
G. T. Barkema and M. E. J. Newman, Phys. Rev. E [**57**]{}, 1155 (1998).
As both loop moves simply provide a way of traversing a path from one part of the ice-rules phase space to another, they obey detailed balance[@Barkema]. The combination of loop moves and single spin flips restores ergodicity.
U. H. E. Hansmann and Y. Okamoto, Physica A [**212**]{}, 415 (1994).
A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. [**61**]{}, 2635 (1988).
C. Wengel, C. L. Henley and A. Zippelius, Phys. Rev. B [**53**]{}, 6543 (1996).
Faculty of Nuclear Sciences and Physical Engineering
Interference Phenomena in Quantum Information
Martin Štefaňák\
Supervisor: Prof. Ing. I. Jex, DrSc.
Prague, 2010
This thesis is the result of my own work, except where explicit reference is made to the work of others, and has not been submitted for another qualification to this or any other university.
Martin Štefaňák
Acknowledgement {#acknowledgement .unnumbered}
===============
First of all, I would like to thank prof. Igor Jex for his kind supervision during the past years.
The first part of my thesis results from our longstanding and fruitful collaboration with Dr. Tamas Kiss from the Department of Nonlinear and Quantum Optics of the Research Institute for Solid State Physics and Optics of the Hungarian Academy of Sciences. I would like to thank him for the numerous discussions we had.
The second part of my thesis follows from the results of my one-year stay as a Marie Curie fellow in the group of prof. Schleich at the Department of Quantum Physics of the University of Ulm. I would like to thank him and the people from his group, in particular Dr. Wolfgang Merkel, for stimulating discussions during my stay in Ulm. I also have to mention Dr. Daniel Haase from the Department of Number Theory and Probability Theory of the University of Ulm, who contributed substantially to the discussions.
I would like to thank my fellow students and post-docs from the Department of Physics, especially Dr. Jaroslav Novotný, Dr. Hynek Lavička, Dr. Aurél Gábris and Ing. Václav Potoček, for the very nice and stimulating atmosphere in our group.
The financial support from the Doppler Institute of the Faculty of Nuclear Sciences and Physical Engineering and the EU Marie Curie Research Network Training Project CONQUEST is gratefully acknowledged.
Last, but not least, I would like to thank my girlfriend, my family and friends for their support during my studies.
One of the key features of quantum mechanics is the interference of probability amplitudes. The reason for the appearance of interference is mathematically very simple: it is the linear structure of the Hilbert space which is used for the description of quantum systems. In terms of physics we usually talk about the superposition principle valid for individual and composed quantum objects. So, while the source of interference is easy to understand, it leads to many counter-intuitive physical phenomena which have puzzled physicists for almost a hundred years.
The present thesis studies interference in two seemingly disjoint fields of physics. However, both have strong links to quantum information processing and hence are related. In the first part we study the intriguing properties of quantum walks. In the second part we analyze a sophisticated application of wave packet dynamics in atoms and molecules for factorization of integers.
The main body of the thesis is based on the original contributions listed separately at the end of the thesis. The more technical aspects and brief summaries of the methods used are left for the appendices.
\[part:1\]
\[chap:1\]
\[chap:1a\]
The term random walk, first introduced by Pearson [@pearson] in 1905, refers to the mathematical formalization of a trajectory that consists of successive random steps. Shortly afterwards a paradigmatic application of the random walk - the explanation of Brownian motion [@brown] and diffusive processes - was found by Einstein [@einstein] and Smoluchowski [@smoluchowski]. Since then random walks have been used in many branches of science [@overview], ranging from physics, economy and ecology to social sciences. Among others, the random walk is one of the cornerstones of theoretical computer science [@rw:compsc1; @rw:compsc2]. Indeed, it can be employed for algorithmic purposes to solve problems such as graph connectivity [@graph:connect], 3-SAT [@3-sat] or approximating the permanent of a matrix [@matrix:perm].
Quantum walks have been proposed by Aharonov, Davidovich and Zagury [@aharonov] as a generalization of classical random walks to the quantum domain. The unitary time evolution governing the walk can be either discrete, as introduced by Meyer [@meyer1; @meyer2] and Watrous [@watrous], leading to coined quantum walks, or continuous, as introduced by Farhi and Gutmann [@farhi; @childs]. It is interesting to note that similar ideas can be found already in the works of Feynman [@feynman] and Bialynicki-Birula [@birula] in the context of discretization of the Dirac equation. Scattering quantum walks [@hillery:2003; @hillery:2004; @kosik:2005; @hillery:2007] were proposed by Hillery, Bergou and Feldman as a natural generalization of coined quantum walks based on an interferometric analogy. The connection between the coined quantum walks and the continuous time quantum walks has been established [@strauch; @chandra:08]. Recently, it has been shown that both continuous [@childs:09] and discrete time [@lovett:09] quantum walks can be regarded as a universal computational primitive. By now, quantum walks form a well established part of quantum information theory [@bruss:leuchs]. For a review see e.g. the article by Kempe [@kempe:ovw] or the books by Venegas-Andraca [@Venegas-Andraca] or Konno [@konno:book].
Continuous-time quantum walks are suitable for the description of coherent transport of excitation in networks [@muelken:prl; @muelken:pre1]. Recently, a coherent energy transfer in photosynthetic systems was observed [@engel]. This long-lived coherence, which can be described by a generalized continuous-time quantum walk [@mohseni], together with the environmental noise leads to a substantial increase in energy transfer efficiency [@caruso].
The coined quantum walk is well suited as an algorithmic tool [@kempe; @ambainis]. Several algorithms based on coined quantum walks showing speed-up over classical algorithms have been proposed [@shenvi:2003; @ambainis:2003; @childs:04; @kendon:2006; @aurel:2007; @magniez; @vasek]. Various properties of coined quantum walks have been analyzed, e.g. the effects of the coin and the initial state [@2dw1; @chandrashekar:2007; @miyazaki], absorbing barriers [@bach:2004], the hitting times [@kempe:2005; @krovi:2006a; @krovi:2006b] or the effect of decoherence [@aurel:2007; @kendon:2006b]. Hitting times for continuous quantum walks related to the quantum Zeno effect were considered in [@varbanov:2008]. Great attention has been paid to the asymptotics of quantum walks [@nayak; @carteret; @Grimmett; @konno:2002; @konno:2005b]. In particular, localization was found in 2-D quantum walks [@2dqw; @2dw1; @localization] and in 1-D for a generalized quantum walk [@1dloc; @sato:2008]. Several experimental schemes have been proposed to realize coined quantum walks, including cavity QED [@sanders], linear optics [@jeong; @pathak], optical lattices [@eckert; @dur], Bose-Einstein condensates [@chandrashekar:2006] and quantum rings [@Orsolya]. Recently, proof-of-principle experiments with neutral atoms [@karski], ions [@schmitz] and photons [@Schreiber] have been performed.
In comparison to classical random walks, coined quantum walks are considerably more flexible. The coin operator can in principle be an arbitrary unitary matrix. Moreover, one can choose the initial coin state. All of these influence the dynamics of the quantum walk. The diversity of quantum walks calls for a classification. Indeed, in order to exploit the full potential of quantum walks for algorithmic purposes one needs to know in which regimes they can be operated.
The present thesis focuses mainly on one particular quantity which is suitable for the classification of both classical and quantum walks, namely the probability to return to the origin. The recurrence probability is known as the Pólya number, after G. Pólya, who first discussed this property in the context of classical random walks on infinite lattices in 1921 [@polya]. Pólya pointed out the fundamental difference between walks in different dimensions. In three or higher dimensions the recurrence probability is less than one and depends exclusively on the dimension [@montroll:1956], whereas for walks in one or two dimensions the Pólya number equals unity. As a consequence, in three and higher dimensions the particle has a non-zero probability of escape [@domb:1954]. Recurrence in classical random walks is closely related to first passage times, as pointed out in a number of classic papers of statistical mechanics [@montroll:1964; @hughes]. A summary of the results on recurrence of classical random walks is left for Appendix \[app:a\].
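The classical criterion is easy to check numerically in the simplest case. The following sketch (our own illustration, not code from the thesis) uses the closed form $P(0,2t)=\binom{2t}{t}/4^t$ for the unbiased walk on a line: since $P(0,2t)\sim 1/\sqrt{\pi t}$, the series $\sum_t P(0,t)$ diverges and the walk is recurrent.

```python
from math import comb, pi, sqrt

def p0(t):
    """Probability that the unbiased walk on a line is at the origin after 2t steps."""
    return comb(2 * t, t) / 4 ** t  # integer arithmetic avoids float overflow

# P(0, 2t) ~ 1/sqrt(pi t), so the partial sums grow without bound,
# which is the hallmark of a recurrent walk (Polya number 1).
partial_sums = [sum(p0(t) for t in range(1, T + 1)) for T in (10, 100, 1000)]

# Check the asymptotic decay law at t = 1000.
asymptotic_ratio = p0(1000) * sqrt(pi * 1000)
```

The unbounded growth of the partial sums, together with `asymptotic_ratio` being close to one, illustrates the slow $1/\sqrt{t}$ decay that makes the one-dimensional classical walk recurrent.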
We extend the concept of recurrence and Pólya number to quantum walks in Chapter \[chap:2\], based on [@stef:prl], where a particular measurement scheme was considered. Other possible definitions of the quantum Pólya number are briefly discussed following [@kiss:recurrence]. As we show in Appendix \[app:b\], within the framework of our measurement scheme the criterion for recurrence of a quantum walk is the same as for the classical random walk - it is determined by the asymptotic behaviour of the probability at the origin. To be able to analyze the probability at the origin we first solve the time evolution equations. Since the quantum walks under consideration are translationally invariant we make use of the Fourier transformation and find a simple solution in the momentum picture. Probability amplitudes in the position representation are then obtained by performing the inverse Fourier transformation. Hence, they have the form of an integral over momenta where the time enters only in the rapidly oscillating phase. This allows us to perform the asymptotic analysis of the probability at the origin in a straightforward way by means of the method of stationary phase. Basic concepts of this method are reviewed in Appendix \[app:c\]. We find that the asymptotic scaling of the probability at the origin is affected by the additional degrees of freedom offered by quantum mechanics. Hence, the recurrence probability of a quantum walk depends in general on the topology of the walk, the choice of the coin and the initial state. This is in great contrast to classical random walks, where the Pólya number is characteristic for the given dimension.
Recurrence of unbiased quantum walks on infinite $d$-dimensional lattices is analyzed in Chapter \[chap:4\], which is based on [@stef:pra]. First, we show that for the quantum walk driven by the Hadamard tensor product coin, the Pólya number is independent of the initial conditions, thus resembling the property of the classical walks. We provide an estimation of the Pólya number for this quantum walk in dependence on the dimension of the lattice. Second, we examine the Grover walk on a plane, which exhibits localization and thus is recurrent, except for a particular initial state for which the walk is transient. We generalize the Grover walk to show that one can construct in arbitrary dimensions a quantum walk which is recurrent. This is in great contrast with classical random walks, which are recurrent only in the dimensions $d=1,2$. Finally, we analyze the recurrence of the Fourier walk on a plane. This quantum walk is recurrent except for a two-dimensional subspace of initial states. We provide an estimation of the Pólya number in dependence on the initial states.
In Chapter \[chap:5\] we extend our analysis of recurrence to biased quantum walks, following [@stef:njp]. As we illustrate in Appendix \[app:a2\], recurrence of a classical random walk on a line is extremely sensitive to the directional symmetry; any deviation from the equal probability to travel in each direction changes the character of the walk from recurrent to transient. Applying our definition of the Pólya number to quantum walks on a line we show that the recurrence character of quantum walks is more stable against bias. We determine the range of parameters for which biased quantum walks remain recurrent. We find that there exist recurrent genuinely biased quantum walks, which is a striking difference from classical random walks.
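For contrast with the quantum case, the classical sensitivity to bias is simple to verify. The sketch below is our own illustration (not code from the thesis); it relies on two standard results for the classical walk on a line stepping right with probability $p$: the renewal relation $P = 1 - 1/G$ with $G = \sum_t \binom{2t}{t}(pq)^t$, and the closed form $P = 1-|p-q|$.

```python
def classical_polya(p, tmax=5000):
    """Polya number of the classical walk on a line stepping right with
    probability p, via P = 1 - 1/G with G = sum_t C(2t,t) (p q)^t."""
    q = 1.0 - p
    term, G = 1.0, 0.0  # term = C(2t,t) * (p*q)**t, starting at t = 0
    for t in range(tmax):
        G += term
        # C(2(t+1), t+1) = C(2t, t) * 2(2t+1)/(t+1)
        term *= 2.0 * (2 * t + 1) / (t + 1) * p * q
    return 1.0 - 1.0 / G

# Any directional bias makes the walk transient; the closed form is 1 - |p - q|.
values = {p: classical_polya(p) for p in (0.5, 0.55, 0.6, 0.75)}
```

Only at $p=1/2$ does the partial sum $G$ keep growing (Pólya number approaching one); for any $p\neq 1/2$ the series converges and the Pólya number drops strictly below unity.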
Quantum walks involving more than one particle open up new possibilities: the initial state can be entangled, or the particles can be indistinguishable - either bosons or fermions. In Chapter \[chap:6\], which is based on [@stef:meeting], we study the motion of two non-interacting quantum particles performing a quantum walk on a line. We analyze the meeting problem, i.e. the probability that the two particles are detected at a particular position after a certain number of steps. The results are compared with the corresponding classical problem which we review in Appendix \[app:d\]. We derive analytical formulas for the meeting probability and find its asymptotic behaviour. We show that the decay of the meeting probability is faster than in the classical case, but not quadratically faster, as one could expect from the ballistic nature of a quantum walk. The effect of non-classical features offered by quantum mechanics on the meeting probability is analyzed. We summarize our results and present an outlook in the Conclusions.
\[chap:1b\]
Before we turn to the presentation of our results we briefly introduce the basic notions of quantum walks. For a more comprehensive review we refer to the literature [@kempe:ovw].
Let us begin with the classical random walk on a line. A random walk is a stochastic process where the particle moves on an integer lattice in discrete time steps. In each step the particle can move from its current location (say $m$) to the neighboring lattice points (i.e. $m\pm 1$) with equal probability. Suppose that the particle is at time $t=0$ at the origin of the lattice $m=0$. After the first step, we can find the particle at site $m=1$ or $m=-1$ with probability one-half. To calculate the probability that the particle is at position $m$ at a later time $t$ we can use the following recurrence relations $$\label{cl:walk:time:evol}
P(m,t) = \frac{1}{2} P(m-1,t-1) + \frac{1}{2} P(m+1,t-1),\qquad m\in\mathds{Z}.$$ The solution of the equations (\[cl:walk:time:evol\]) with the initial condition $P(0,0) = 1$ has the form $$\label{chap:1:crw:dist}
P(m,t) = \frac{1}{2^{t}} {t\choose \frac{t+m}{2}}.$$ Indeed, each random path has the same probability $2^{-t}$ and the number of paths leading to the lattice point $m$ is given by the well-known binomial distribution; the formula applies when $t+m$ is even, otherwise $P(m,t)$ vanishes. It is straightforward to calculate various attributes of the random walk, e.g. the mean value and the variance of the particle’s position. We find that the mean value vanishes, in agreement with the unbiasedness of the random walk we consider. On the other hand, the standard deviation grows with the square root of the number of steps. The random walk is thus a diffusive process.
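The recurrence relations (\[cl:walk:time:evol\]) and their binomial solution are easy to verify directly. The short sketch below (our own illustration) evolves the distribution numerically and checks it against the closed form and the diffusive variance:

```python
from math import comb

def classical_walk(tmax):
    """Evolve P(m,t) = P(m-1,t-1)/2 + P(m+1,t-1)/2 from P(0,0) = 1."""
    P = {0: 1.0}
    for _ in range(tmax):
        Q = {}
        for m, pm in P.items():
            Q[m - 1] = Q.get(m - 1, 0.0) + pm / 2.0
            Q[m + 1] = Q.get(m + 1, 0.0) + pm / 2.0
        P = Q
    return P

t = 10
P = classical_walk(t)
# Agreement with the binomial form P(m,t) = C(t,(t+m)/2)/2^t at m = 0 ...
exact_origin = comb(t, t // 2) / 2 ** t
# ... and the diffusive variance <m^2> = t (the mean vanishes by symmetry).
variance = sum(m * m * pm for m, pm in P.items())
```

The variance equals the number of steps, so the standard deviation indeed grows as $\sqrt{t}$.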
The quantum walk is a generalization of a classical random walk to a discrete unitary evolution of a quantum particle. Hence, there is no randomness in the time evolution itself in the quantum case. Nevertheless, randomness enters through the measurement. Indeed, if we want to know the position of the particle we have to measure it, and a particular result is found with the corresponding probability given by the standard quantum-mechanical formula. The particle can be found at any lattice point $m\in\mathds{Z}$. We denote the corresponding position eigenstates by $|m\rangle$. These vectors form an orthonormal basis of the [*position space*]{} $\mathcal{H}_P$ $$\mathcal{H}_P = {\rm Span}\left\{|m\rangle|m\in\mathds{Z}\right\},\quad \langle m|n\rangle = \delta_{mn},\quad \sum_{m}|m\rangle\langle m| = I.$$ As in the classical random walk, the particle moves from its current position to the neighboring lattice points, but instead of choosing the path randomly it travels all paths simultaneously, i.e. it evolves into a superposition $$|m\rangle \longrightarrow |m-1\rangle + |m+1\rangle.$$ However, we can easily see that such a time evolution is not unitary. Indeed, the two orthogonal vectors $|0\rangle$ and $|2\rangle$ evolve into the states $$|0\rangle\longrightarrow |-1\rangle + |1\rangle,\qquad |2\rangle\longrightarrow |1\rangle + |3\rangle,$$ which have non-zero overlap. To make the time evolution unitary we have to consider a particle which has an internal degree of freedom with two orthogonal states $|L\rangle$ and $|R\rangle$.
This additional degree of freedom is usually referred to as [*coin*]{} and its two orthogonal states $|L\rangle$, $|R\rangle$ form a basis of the corresponding [*coin space*]{} $\mathcal{H}_C$ $$\mathcal{H}_C = {\rm Span}\left\{|L\rangle,|R\rangle\right\}.$$ The state of the coin determines the next move of the particle according to $$|m\rangle|L\rangle\longrightarrow|m-1\rangle|L\rangle,\qquad |m\rangle|R\rangle\longrightarrow|m+1\rangle|R\rangle.$$ Such a transformation is performed by the [*conditional displacement operator*]{} $S$ $$S = \sum_m\left(|m-1\rangle\langle m|\otimes|L\rangle\langle L| + |m+1\rangle\langle m|\otimes|R\rangle\langle R|\frac{}{}\right),$$ which is indeed unitary. However, a time evolution according to $S$ itself would be rather trivial. Indeed, if the particle will start the quantum walk in a definite coin state, say $|L\rangle$, it will simply move on to the left. Hence, to obtain a non-trivial time evolution we first rotate the coin by the [*coin operator*]{} before the conditional displacement $S$ is applied. As the coin operator we can in principle choose an arbitrary unitary transformation on the coin space $\mathcal{H}_C$. Here, we consider a particular choice of the [*Hadamard coin*]{} $H$ which performs the following rotation $$H|L\rangle = \frac{1}{\sqrt{2}}\left(|L\rangle + |R\rangle\right),\qquad H|R\rangle = \frac{1}{\sqrt{2}}\left(|L\rangle - |R\rangle\right).$$ Finally, we can write the [*unitary propagator*]{} $U$ which performs a single step of the quantum walk $$\label{chap:1:U}
U = S\cdot\left(I\otimes H\right).$$ Suppose that the particle is initially at the origin with the coin state $|L\rangle$, i.e. $$\label{chap:1:init:state}
|\psi(0)\rangle = |0\rangle|L\rangle.$$ After the first step of the quantum walk it evolves into the state $$\label{chap:1:state:1}
|\psi(1)\rangle = U|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|-1\rangle|L\rangle + |1\rangle|R\rangle\frac{}{}\right).$$ Note that if we perform the measurement of the particle’s position, we find it with equal probability at the sites $\pm 1$. This is the same result as for the classical random walk. Moreover, after the measurement the state of the particle is projected onto the eigenstate corresponding to the measurement outcome. Hence, by performing position measurements after each step we obtain one classical random path. By collecting statistics over such paths we recover the classical random walk. To obtain different dynamics we have to let the quantum particle evolve unperturbed, i.e. without measurements, for a desired number of steps $t$, and perform the position measurement afterwards. In this way, each path acquires not a probability but a probability amplitude, which involves a phase. Different paths leading to the same lattice point will interfere. Hence, a quantum walk is an interference phenomenon.
As we have seen from (\[chap:1:state:1\]) the probability distribution of the quantum walk after the first step does not differ from the probability distribution of the classical random walk. Indeed, if the quantum particle is initially localized at the origin no interference can occur. The same applies to the second step and the state of the particle is given by $$|\psi(2)\rangle = U|\psi(1)\rangle = \frac{1}{2}\left(|-2\rangle|L\rangle + |0\rangle(|L\rangle + |R\rangle) - |2\rangle|R\rangle\frac{}{}\right).$$ The probability to find the particle at the position $m$ after two steps $P(m,2)$ is given by $$\begin{aligned}
\nonumber P(-2,2) & = & |\langle -2|\langle L|\psi(2)\rangle|^2 + |\langle -2|\langle R|\psi(2)\rangle|^2 = \frac{1}{4},\\
\nonumber P(0,2) & = & |\langle 0|\langle L|\psi(2)\rangle|^2 + |\langle 0|\langle R|\psi(2)\rangle|^2 = \frac{1}{2},\\
\nonumber P(2,2) & = & |\langle 2|\langle L|\psi(2)\rangle|^2 + |\langle 2|\langle R|\psi(2)\rangle|^2 = \frac{1}{4},\end{aligned}$$ which is the same as for the classical random walk. Finally, in the third step the interference occurs for the first time. The state of the particle after the third step has the form $$|\psi(3)\rangle = U|\psi(2)\rangle = \frac{1}{2\sqrt{2}}\left(|-3\rangle|L\rangle + |-1\rangle(2|L\rangle + |R\rangle) - |1\rangle|L\rangle + |3\rangle|R\rangle\frac{}{}\right),$$ and we see that the probability distribution $$\begin{aligned}
\nonumber P(-3,3) & = & |\langle -3|\langle L|\psi(3)\rangle|^2 + |\langle -3|\langle R|\psi(3)\rangle|^2 = \frac{1}{8},\\
\nonumber P(-1,3) & = & |\langle -1|\langle L|\psi(3)\rangle|^2 + |\langle -1|\langle R|\psi(3)\rangle|^2 = \frac{5}{8},\\
\nonumber P(1,3) & = & |\langle 1|\langle L|\psi(3)\rangle|^2 + |\langle 1|\langle R|\psi(3)\rangle|^2 = \frac{1}{8},\\
\nonumber P(3,3) & = & |\langle 3|\langle L|\psi(3)\rangle|^2 + |\langle 3|\langle R|\psi(3)\rangle|^2 = \frac{1}{8},\end{aligned}$$ differs from the classical one. As a consequence of the choice of the initial coin state (\[chap:1:init:state\]) the distribution is biased towards the left.
In general, the state of the particle at a later time $t$ is given by the successive application of the propagator $U$ on the initial state $|\psi(0)\rangle$ $$\label{chap:1:state:t}
|\psi(t)\rangle = U^t|\psi(0)\rangle.$$ Let us denote by $\psi_{L,(R)}(m,t)$ the probability amplitude of finding the particle at site $m$ with the coin state $|L(R)\rangle$ after $t$ steps of the quantum walk. These amplitudes are the coefficients of the decomposition of the state vector $|\psi(t)\rangle$ into the basis of the total Hilbert space $\mathcal{H} = \mathcal{H}_P\otimes\mathcal{H}_C$ $$|\psi(t)\rangle = \sum_m \left(\psi_{L}(m,t)|m\rangle|L\rangle + \psi_{R}(m,t)|m\rangle|R\rangle\frac{}{}\right).$$ Using the form of the propagator $U$ (\[chap:1:U\]) we find from the time evolution of the state vector (\[chap:1:state:t\]) the equations of motion for the probability amplitudes $$\begin{aligned}
\label{chap:1:amp:t}
\nonumber \psi_L(m,t) & = & \frac{1}{\sqrt{2}} \psi_L(m+1,t-1) + \frac{1}{\sqrt{2}} \psi_R(m+1,t-1),\\
\psi_R(m,t) & = & \frac{1}{\sqrt{2}} \psi_L(m-1,t-1) - \frac{1}{\sqrt{2}} \psi_R(m-1,t-1).\end{aligned}$$ These equations are reminiscent of the time evolution equations of the classical random walk (\[cl:walk:time:evol\]). However, in (\[chap:1:amp:t\]) we transform probability amplitudes instead of probabilities. The probability to find the quantum particle at a particular position $m$ is given by the standard quantum-mechanical formula $$P(m,t) = \left|\langle m|\langle L|\psi(t)\rangle\right|^2 + \left|\langle m|\langle R|\psi(t)\rangle\right|^2 = \left|\psi_L(m,t)\right|^2 +\left|\psi_R(m,t)\right|^2.$$ In Figure \[chap:1:fig2\] we display the probability distribution of the classical and quantum walk on a line obtained from the numerical simulation. For the classical random walk, depicted by the red points, we observe a symmetric Gaussian distribution with a rather small width. Indeed, the standard deviation of the classical random walk grows only with the square root of the number of steps, which is a typical signature of diffusion. The probability distribution of the quantum walk depicted by the blue points shows striking differences compared to the classical random walk. As we have already discussed, due to the choice of the initial coin state the distribution is biased to the left. A more important observation is that the width of the distribution is proportional to the number of steps. Indeed, due to the interference of the probability amplitudes (\[chap:1:amp:t\]) the standard deviation grows linearly in time [@nayak]. Hence, the quantum walk is a ballistic process, which is the key difference from the diffusive nature of the classical random walk. This quadratic speed-up of the spreading is at the heart of the fast algorithms based on quantum walks [@shenvi:2003; @ambainis:2003; @childs:04; @kendon:2006; @aurel:2007; @magniez; @vasek].
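The recursion (\[chap:1:amp:t\]) lends itself to a direct numerical check. The following Python/NumPy sketch (an illustration of ours; the function name and step counts are arbitrary choices) iterates the amplitudes for the initial coin state $|L\rangle$, reproduces the three-step distribution derived above, and exhibits the ballistic spreading.

```python
import numpy as np

def hadamard_walk(steps, coin0=(1.0, 0.0)):
    """Iterate Eq. (chap:1:amp:t) for the walk on a line.

    Returns P(m, steps) for m = -steps..steps; coin0 = (psi_L, psi_R)
    at the origin, i.e. (1, 0) is the coin state |L> used in the text."""
    n = 2 * steps + 1
    psi_L = np.zeros(n, dtype=complex)
    psi_R = np.zeros(n, dtype=complex)
    psi_L[steps], psi_R[steps] = coin0            # the origin sits at index `steps`
    s = 1.0 / np.sqrt(2.0)
    for _ in range(steps):
        new_L = np.zeros_like(psi_L)
        new_R = np.zeros_like(psi_R)
        new_L[:-1] = s * (psi_L[1:] + psi_R[1:])   # psi_L(m) <- psi_L(m+1) + psi_R(m+1)
        new_R[1:] = s * (psi_L[:-1] - psi_R[:-1])  # psi_R(m) <- psi_L(m-1) - psi_R(m-1)
        psi_L, psi_R = new_L, new_R
    return np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2

p3 = hadamard_walk(3)       # sites -3..3 -> indices 0..6
print(p3[[0, 2, 4, 6]])     # [0.125, 0.625, 0.125, 0.125], as derived above

p100 = hadamard_walk(100)
m = np.arange(-100, 101)
sigma = np.sqrt(np.sum(m**2 * p100) - np.sum(m * p100) ** 2)
print(sigma)                # ~0.45*t, i.e. linear in t, versus sqrt(t) = 10 classically
```

The width at $t=100$ is an order of magnitude above the classical $\sqrt{t}$, illustrating the ballistic spreading discussed above.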
![The probability distribution of the classical and quantum walk on a line after 100 steps. For the classical random walk illustrated by the red points we find that the probability distribution is peaked at the origin and symmetric. Indeed, the mean value vanishes. The width of the distribution is rather small, since the standard deviation of the classical random walk grows only with the square root of the number of steps. This is a typical signature of diffusion. In contrast, the probability distribution of the quantum walk described by the blue points shows striking differences. First, due to the choice of the initial coin state the distribution is biased towards the left. Second, the width of the distribution is proportional to the number of steps. Indeed, the standard deviation of the quantum walk grows linearly with time, which is a typical signature of a ballistic process.[]{data-label="chap:1:fig2"}](intro_fignewp.eps){width="70.00000%"}
Recurrence of Quantum Walks {#chap:2}
===========================
Classical random walks are defined as the probabilistic discrete time evolution of the position of a point-like particle on a discrete graph. Starting the walker from a well-defined graph point (the origin), one can ask whether the particle returns there at least once during the time evolution. The probability of this event is called the Pólya number [@polya]. Classical random walks are said to be [*recurrent*]{} or [*transient*]{} depending on whether their Pólya number equals one or is less than one, respectively.
The Pólya number of a classical random walk can be defined in the following way [@revesz] $$P\equiv\sum\limits_{t=1}^\infty q_0(t),
\label{polya:1:chap2}$$ where $q_0(t)$ is the probability that the walker returns to the origin for the [*first time*]{} after $t$ steps. A more practical expression for the Pólya number is in terms of the probability $p_0(t)$ that the particle can be found at the origin at any given time instant $t$. It is straightforward to show that $$P = 1-\frac{1}{\sum\limits_{t=0}^{+\infty}p_0(t)}.
\label{polya:def:crw}$$ From (\[polya:def:crw\]) we find that the recurrence behaviour of a random walk is determined solely by the infinite sum $${\cal S} \equiv \sum_{t=0}^{\infty}p_0(t).
\label{series}$$ Indeed, $P$ equals unity if and only if the series ${\cal S}$ diverges [@revesz]. In such a case the random walk is recurrent. On the other hand, if the series $\cal S$ converges, the Pólya number $P$ is strictly less than unity and the walk is transient. The well-known result found by Pólya [@polya] is that unbiased random walks in one and two dimensions are recurrent, while on higher-dimensional lattices they are transient. For a more detailed review of the recurrence of random walks see Appendix \[app:a\].
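The criterion (\[polya:def:crw\]) is easy to probe numerically. The sketch below (an illustration of ours, not part of the text) truncates the series $\cal S$ for a classical walk on $\mathds{Z}^d$ in which each coordinate independently hops by $\pm 1$, the same diagonal topology adopted for the quantum walks later on; for this walk $p_0(2n)=\left[\binom{2n}{n}2^{-2n}\right]^d$.

```python
def polya_estimate(d, nmax):
    """Truncate S = sum_t p_0(t) at t = 2*nmax and return 1 - 1/S.

    a_n = C(2n, n)/4**n is built up via the ratio a_{n+1}/a_n = (2n+1)/(2n+2)."""
    a, S = 1.0, 0.0
    for n in range(nmax + 1):
        S += a ** d                       # p_0(2n) of the d-dimensional diagonal walk
        a *= (2 * n + 1) / (2 * n + 2)
    return 1.0 - 1.0 / S

for d in (1, 2, 3):
    print(d, round(polya_estimate(d, 2000), 3))
# d = 1, 2: S diverges (like sqrt(n) and log(n)), so the estimate creeps towards 1;
# d = 3: S converges and the estimate saturates strictly below 1 (transience).
```

The truncated estimates illustrate Pólya's trichotomy: slow growth towards unity for $d=1,2$ and rapid saturation below unity for $d=3$.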
We define the Pólya number of a quantum walk in Section \[chap:2a\] by considering a specific measurement scheme. Other possible measurement schemes are briefly discussed. In accordance with the classical terminology we describe the quantum walk as recurrent or transient depending on the value of the Pólya number. We find a condition for the recurrence of a quantum walk which is given by the asymptotic behaviour of the probability at the origin. A general description of a quantum walk on an infinite $d$-dimensional lattice is left for Section \[chap:2b\]. In particular, we find a simple form of the time evolution equation for probability amplitudes. In Section \[chap:2c\] we employ the translational invariance of the problem which allows us to solve the equations of motion easily in the momentum representation. We find that the amplitudes in the position representation can be written in the form of an integral over momenta where the time enters only in the oscillating phase. This form of the solution allows a straightforward analysis of the asymptotic behaviour of the amplitudes by means of the method of stationary phase. We perform this analysis in Section \[chap:2d\] and discuss the consequences on the recurrence nature of the quantum walk. In particular, we find that the latter is affected by the choice of the coin and the initial coin state.
Pólya number of a quantum walk {#chap:2a}
------------------------------
For quantum walks we can keep the same definition of the Pólya number (\[polya:1:chap2\]) being the probability of returning to the origin at least once during the time evolution. However, to be able to talk about the position of a particle in quantum mechanics one must specify when and which type of measurement is performed. According to the definition (\[polya:1:chap2\]) we would have to continuously measure whether the particle is at the origin. However, such a radical interruption of the system ultimately leads to a loss of coherence which is a vital ingredient of a quantum walk. It can be anticipated that within the continuous measurement scheme most of the quantum effects become rather weak. The analysis we have performed in [@kiss:recurrence] supports this conclusion.
In order to preserve the quantum interference we have considered a different measurement scheme in [@stef:prl]. The recurrence is understood as a property of an ensemble of particles rather than of an individual particle. The measurement scheme is the following: Prepare an ensemble of quantum walk systems in an identical initial state. Take one such system, let it evolve for one step, perform the measurement at the origin and then discard the system. Take a second, identically prepared system, let it evolve for two steps, make a position measurement at the origin and then discard it. Continue until a positive outcome is obtained. In the $t$-th trial we do not find the particle at the origin with the probability $1-p_0(t)$. Since the individual trials are independent the product $$\overline{P}_n = \prod_{t=1}^n(1-p_0(t))$$ gives the probability that we have not found any particle at the origin in the first $n$ trials. In the complementary event, which occurs with the probability $$P_n = 1-\prod_{t=1}^n(1-p_0(t)),
\label{polya:approx}$$ we have found at least one particle at the origin. We define the Pólya number of a quantum walk by extending $n$ to infinity $$P = 1-\prod\limits_{t=1}^{+\infty}(1-p_0(t)).
\label{polya:def}$$ This definition resembles the expression of the Pólya number of a classical random walk in terms of the probability at the origin (\[polya:def:crw\]). However, the inverted sum of $p_0(t)$ is replaced by the product of $1-p_0(t)$. Nevertheless, we show in Appendix \[app:b\] that definition (\[polya:def\]) of the Pólya number of a quantum walk leads to the same criterion for recurrence in terms of the probability at the origin $p_0(t)$. Indeed, the infinite product in (\[polya:def\]) vanishes if and only if the series $\cal S$ (\[series\]) diverges [@jarnik]. In such a case the Pólya number of a quantum walk is unity and we call such quantum walks recurrent. If the series $\cal S$ converges, then the product in (\[polya:def\]) does not vanish and the Pólya number of a quantum walk is less than one. In accordance with the classical terminology we call such quantum walks transient.
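As a concrete numerical illustration (a sketch of ours, not taken from the text), the partial products $P_n$ of (\[polya:approx\]) can be evaluated from a simulated sequence $p_0(t)$; here we do so for the Hadamard walk on a line started in $|L\rangle$, for which the estimate grows monotonically towards the Pólya number.

```python
import numpy as np

def p0_sequence(tmax):
    """p_0(t), t = 1..tmax, for the Hadamard walk on a line started in |L>."""
    psi = np.zeros((2 * tmax + 1, 2), dtype=complex)   # columns: psi_L, psi_R
    psi[tmax, 0] = 1.0
    s = 1.0 / np.sqrt(2.0)
    out = []
    for _ in range(tmax):
        new = np.zeros_like(psi)
        new[:-1, 0] = s * (psi[1:, 0] + psi[1:, 1])
        new[1:, 1] = s * (psi[:-1, 0] - psi[:-1, 1])
        psi = new
        out.append(float(np.sum(np.abs(psi[tmax]) ** 2)))
    return out

def polya_partial(p0s):
    # P_n = 1 - prod_{t<=n} (1 - p_0(t)), Eq. (polya:approx)
    P, prod = [], 1.0
    for p in p0s:
        prod *= 1.0 - p
        P.append(1.0 - prod)
    return P

P = polya_partial(p0_sequence(200))
print(P[1], P[3], P[-1])   # P_2 = 1/2, P_4 = 1 - (1/2)(7/8) = 0.5625, then slow growth
```

Since $p_0(2)=1/2$ and $p_0(4)=1/8$, the first non-trivial partial products are exactly $1/2$ and $9/16$; the sequence keeps creeping upwards, consistent with recurrence of this walk.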
The convergence of the series $\cal S$ (\[series\]) is determined by the asymptotic behaviour of the probability at the origin. In the following Sections we develop the tools that allow us to perform this asymptotic analysis.
Description of quantum walks on $\mathds{Z}^d$ {#chap:2b}
----------------------------------------------
Let us first define quantum walks on an infinite $d$-dimensional lattice $\mathds{Z}^d$. The Hilbert space of the quantum walk can be written as a tensor product $$\mathcal{H} = \mathcal{H}_P\otimes\mathcal{H}_C$$ of the position space $$\mathcal{H}_P=\ell^2(\mathds{Z}^d)$$ and the coin space $\mathcal{H}_C$. The position space is spanned by the vectors $|\textbf{m}\rangle$ corresponding to the particle being at the lattice point $\textbf{m}$, i.e. $$\mathcal{H}_P=\text{Span}\left\{|\textbf{m}\rangle|\quad\textbf{m}=\left\{m_1,\ldots,m_d\right\}\in\mathds{Z}^d\right\}.$$ The coin space $\mathcal{H}_C$ is determined by the topology of the walk. In particular, its dimension $n$ is given by the number of possible displacements in a single step. We denote the displacements by vectors $$\mathbf{e}_i\in\mathds{Z}^d,\quad i=1,\ldots,n.$$ Hence, the particle can move from $\textbf{m}$ to any of the points $\textbf{m}+\textbf{e}_i, i=1,\ldots,n$ in a single step. We define an orthonormal basis in the coin space by assigning to every displacement $\mathbf{e}_i$ the basis vector $|\mathbf{e}_i\rangle$, i.e. $$\mathcal{H}_C = \textrm{Span}\left\{|\mathbf{e}_i\rangle|i=1,\ldots,n\right\}.$$ A single step of the quantum walk is given by $$U=S\cdot\left(I_P\otimes C\right).
\label{qw:time}$$ Here $I_P$ denotes the unit operator acting on the position space $\mathcal{H}_P$. The coin flip operator $C$ is applied on the coin state before the displacement $S$ itself. The coin flip $C$ can be in general an arbitrary unitary operator acting on the coin space $\mathcal{H}_C$.
The displacement itself is represented by the conditional step operator $S$ $$S = \sum\limits_{\mathbf{m},i}|\mathbf{m}+\mathbf{e}_i\rangle\langle\mathbf{m}|\otimes|\mathbf{e}_i\rangle\langle\mathbf{e}_i|,$$ which moves the particle from the site $\mathbf{m}$ to $\mathbf{m}+\mathbf{e}_i$ if the state of the coin is $|\mathbf{e}_i\rangle$.
Let the initial state of the particle be $$|\psi(0)\rangle \equiv \sum\limits_{\mathbf{m},i}\psi_i(\mathbf{m},0)|\mathbf{m}\rangle\otimes|\mathbf{e}_i\rangle.$$ Here $\psi_i(\mathbf{m},0)$ is the probability amplitude of finding the particle at time $t=0$ at the position $\mathbf{m}$ in the coin state $|\mathbf{e}_i\rangle$. The state of the particle after $t$ steps is given by successive application of the time evolution operator given by Eq. (\[qw:time\]) on the initial state $$|\psi(t)\rangle \equiv \sum\limits_{\mathbf{m},i}\psi_i(\mathbf{m},t)|\mathbf{m}\rangle\otimes|\mathbf{e}_i\rangle=U^t|\psi(0)\rangle.
\label{time:evol}$$ The probability of finding the particle at the position $\textbf{m}$ at time $t$ is obtained by summing over the coin states, i.e. $$p(\textbf{m},t) \equiv \sum_{i=1}^n|\langle\textbf{m}|\langle\mathbf{e}_i|\psi(t)\rangle|^2 = \sum_{i=1}^n|\psi_i(\mathbf{m},t)|^2 = ||\psi(\textbf{m},t)||^2.$$ Here we have introduced $n$-component vectors $$\psi(\mathbf{m},t)\equiv{\left(\psi_1(\mathbf{m},t),\psi_2(\mathbf{m},t),\ldots,\psi_n(\mathbf{m},t)\right)}^T$$ of probability amplitudes. We rewrite the time evolution equation (\[time:evol\]) for the state vector $|\psi(t)\rangle$ into a set of difference equations $$\psi(\mathbf{m},t) = \sum_l C_l\psi(\mathbf{m}-\mathbf{e}_l,t-1)
\label{time:evol2}$$ for probability amplitudes $\psi(\mathbf{m},t)$. Here the matrices $C_l$ have all entries equal to zero except for the $l$-th row which follows from the coin-flip operator $C$, i.e. $$\langle\mathbf{e}_i\left|C_l\right|\mathbf{e}_j\rangle = \delta_{il}\langle\mathbf{e}_i\left|C\right|\mathbf{e}_j\rangle.$$
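The difference equations (\[time:evol2\]) can be iterated directly. As an illustrative sketch (our code; the coin and the displacement set are free parameters), a dictionary mapping lattice sites to $n$-component amplitude vectors is convenient on the infinite lattice:

```python
import numpy as np

def step(psi, C, displacements):
    """One step of Eq. (time:evol2): component l of C.psi(m) is carried to m + e_l."""
    n = len(displacements)
    out = {}
    for m, v in psi.items():
        w = C @ v                                 # coin flip
        for l, e in enumerate(displacements):     # conditional shift
            target = tuple(mi + ei for mi, ei in zip(m, e))
            out.setdefault(target, np.zeros(n, dtype=complex))[l] += w[l]
    return out

# Hadamard walk on a line: |e_1> moves by -1 (|L>), |e_2> by +1 (|R>)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = {(0,): np.array([1.0, 0.0], dtype=complex)}   # start at the origin in |L>
for _ in range(3):
    psi = step(psi, H, [(-1,), (1,)])
prob = {m: float(np.sum(np.abs(v) ** 2)) for m, v in psi.items()}
print(prob[(-1,)])   # 0.625, in agreement with the three-step distribution of Chapter 1
```

The same `step` function works for any lattice dimension and any unitary coin, which is all that the asymptotic analysis below requires.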
Time evolution of quantum walks {#chap:2c}
-------------------------------
The quantum walks we consider are translationally invariant which manifests itself in the fact that the matrices $C_l$ on the right-hand side of Eq. (\[time:evol2\]) are independent of $\mathbf{m}$. Hence, the time evolution equations (\[time:evol2\]) simplify considerably with the help of the Fourier transformation $$\tilde{\psi}(\mathbf{k},t)\equiv\sum\limits_\mathbf{m}\psi(\mathbf{m},t) e^{i \mathbf{m}\cdot\mathbf{k}}, \quad \mathbf{k}\in\mathbb{K}^d.
\label{qw:ft}$$ The Fourier transformation defined by Eq. (\[qw:ft\]) is an isometry between $\ell^2(\mathds{Z}^d)$ and $L^2(\mathds{K}^d)$ where $\mathds{K}=(-\pi,\pi]$ can be thought of as the phase of a unit circle in $\mathds{R}^2$.
The time evolution in the Fourier picture turns into a single difference equation $$\tilde{\psi}(\mathbf{k},t)=\widetilde{U}(\mathbf{k})\tilde{\psi}(\mathbf{k},t-1).
\label{qw:te:fourier}$$ Here we have introduced the propagator in the momentum representation $$\widetilde{U}(\mathbf{k}) \equiv D(\mathbf{k})\cdot C,\quad D(\mathbf{k}) \equiv \textrm{Diag}\left(e^{i\mathbf{e}_1\cdot\mathbf{k}},\ldots,e^{i\mathbf{e}_n\cdot\mathbf{k}}\right).
\label{teopF}$$ We find that $\widetilde{U}(\mathbf{k})$ is determined both by the coin $C$ and the topology of the quantum walk through the diagonal matrix $D(\mathbf{k})$ containing the displacements $\mathbf{e}_i$.
We solve the difference equation (\[qw:te:fourier\]) by formally diagonalising the matrix $\widetilde{U}(\mathbf{k})$. Since it is a unitary matrix its eigenvalues can be written in the exponential form $$\lambda_j(\mathbf{k})=\exp{\left(i\ \omega_j(\mathbf{k})\right)},$$ where the phase is given by the eigenenergy $\omega_j(\mathbf{k})$. We denote the corresponding eigenvectors as $v_j(\mathbf{k})$. Using this notation the state of the particle in the Fourier picture at time $t$ reads $$\tilde{\psi}(\mathbf{k},t) = \sum_j e^{i\ \omega_j(\mathbf{k})t}\left(v_j(\mathbf{k}),\tilde{\psi}(\mathbf{k},0)\right)v_j(\mathbf{k}),
\label{sol:k}$$ where $\left(\ ,\ \right)$ denotes the scalar product in the $n$ dimensional coin space ${\mathcal H}_C$. Finally, we perform the inverse Fourier transformation and find the exact expression for the probability amplitudes $$\psi(\mathbf{m},t) = \int_{\mathds{K}^d}\frac{d\mathbf{k}}{(2\pi)^d}\ \widetilde{\psi}(\mathbf{k},t)\ e^{-i \mathbf{m}\cdot\mathbf{k}}
\label{inv:f}$$ in the position representation.
We are interested in the recurrence nature of quantum walks. As we have discussed in Section \[chap:2a\] the recurrence of a quantum walk is determined by the asymptotic behaviour of the probability at the origin $$p_0(t)\equiv p(\mathbf{0},t)=\left\|\psi(\mathbf{0},t)\right\|^2$$ as the number of steps approaches infinity. Hence, we set $\mathbf{m}=\mathbf{0}$ in Eq. (\[inv:f\]). Moreover, in analogy with the classical problem of Pólya we restrict ourselves to quantum walks which start at the origin. Hence, the initial condition reads $$\psi(\mathbf{m},0)=\delta_{\mathbf{m},\mathbf{0}}\psi, \quad \psi\equiv\psi(\mathbf{0},0)
\label{init:cond}$$ and its Fourier transformation $\tilde{\psi}(\mathbf{k},0)$ entering Eq. (\[sol:k\]) is identical to the initial state of the coin $$\tilde{\psi}(\mathbf{k},0)=\psi,$$ which is an $n$-component vector. We note that due to the Kronecker delta in Eq. (\[init:cond\]) the Fourier transformation $\tilde{\psi}(\mathbf{k},0)$ is independent of the momenta $\mathbf{k}$.
Using the above assumptions we find the exact expression for the probability at the origin $$p_0(t) = \left\|\sum_{j=1}^n I_j(t)\right\|^2$$ where $I_j(t)$ are given by the integrals $$I_j(t) = \int\limits_{\mathbb{K}^d}\frac{d\mathbf{k}}{(2\pi)^d}\ e^{i\ \omega_j(\mathbf{k})t}\ f_j(\mathbf{k}),\quad f_j(\mathbf{k}) = \left(v_j(\mathbf{k}),\psi\right)\ v_j(\mathbf{k}).
\label{psi:0}$$
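The momentum-space representation can be checked against the direct iteration by discretising the integral (an illustrative sketch of ours; the $N$-point Riemann sum is exact for $t<N$, since it corresponds to a walk on a ring of $N$ sites that the walker cannot wrap around):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def p0_momentum(t, N=256):
    """p_0(t) of the Hadamard walk on a line from psi(0,t) = (2pi)^-1 Int dk U(k)^t psi.

    The integral over K is replaced by an N-point grid sum, exact for t < N."""
    psi = np.array([1.0, 0.0], dtype=complex)            # initial coin state
    amp = np.zeros(2, dtype=complex)
    for k in 2.0 * np.pi * np.arange(N) / N:
        D = np.diag([np.exp(1j * k), np.exp(-1j * k)])   # displacements e_1 = +1, e_2 = -1
        amp += np.linalg.matrix_power(D @ H, t) @ psi
    return float(np.linalg.norm(amp / N) ** 2)

print(p0_momentum(2), p0_momentum(4))   # 0.5 and 0.125, matching the direct iteration
```

For odd $t$ the amplitude at the origin cancels exactly, as it must by parity.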
Asymptotics of the probability at the origin {#chap:2d}
--------------------------------------------
Let us discuss how the additional freedom we have at hand for quantum walks influences the asymptotics of the probability at the origin $p_0(t)$. We suppose that the functions $\omega_j(\mathbf{k})$ and $f_j(\mathbf{k})$ entering $I_j(t)$ are smooth. According to the method of stationary phase [@statphase] which we briefly review in Appendix \[app:c\] the major contribution to the integral $I_j(t)$ comes from the stationary points $\mathbf{k}^0$ of the eigenenergies $\omega_j(\mathbf{k})$, i.e. from the points where the gradient vanishes $$\left.\vec{\nabla}\omega_j(\mathbf{k})\right|_{\mathbf{k}=\mathbf{k}^0} = \mathbf{0}.$$ The asymptotic behaviour of $I_j(t)$ is then determined by the stationary point with the greatest degeneracy given by the dimension of the kernel of the Hessian matrix $$H^{(j)}_{m,n}(\mathbf{k})\equiv \frac{\partial^2 \omega_j(\mathbf{k})}{\partial k_m\partial k_n}$$ evaluated at the stationary point, i.e. by the flatness of $\omega_j(\mathbf{k})$. The function $f_j(\mathbf{k})$ entering the integral $I_j(t)$ determines only the pre-factor. We now discuss how the existence, configuration and number of stationary points affect the asymptotic behaviour of $I_j(t)$. As a rule of thumb, the decay of the probability at the origin $p_0(t)$ slows down as the number of stationary points increases. Let us briefly discuss the individual cases.
### No stationary points {#chap:2d1}
If $\omega_j(\mathbf{k})$ has no stationary points then $I_j(t)$ decays faster than any inverse polynomial in $t$. Consequently, the decay of the probability at the origin is exponential, $$p_0(t) \sim e^{- \gamma t}$$ with some positive rate $\gamma$. Quantum walks for which the probability at the origin decays so fast are clearly transient. Such a situation occurs e.g. for extremely biased quantum walks which we analyze in Chapter \[chap:5\].
### Finite number of stationary points {#chap:2d2}
Suppose that $\omega_j(\mathbf{k})$ has a finite number of non-degenerate stationary points, i.e. the determinant of the Hessian matrix $H$ is non-zero for all stationary points. If the function $f_j(\mathbf{k})$ does not vanish at the stationary points then the contribution from all stationary points to the integral $I_j(t)$ is of the order $t^{-d/2}$. Consequently, the probability at the origin behaves like $$p_0(t) \sim t^{-d}$$ as $t$ approaches infinity. Clearly, the sum $\cal S$ defined in (\[series\]) is convergent for $d>1$. Hence, the quantum walks for which the eigenenergies have only non-degenerate stationary points are recurrent only for the dimension $d=1$, i.e. on a line. This is e.g. the case of the Hadamard walk with tensor product coin studied in Chapter \[chap:4b\].
### Continuum of stationary points {#chap:2d3}
If $\omega_j(\mathbf{k})$ has a continuum of stationary points then the dimension of the continuum determines the decay of the integral $I_j(t)$. The case of 2-D integrals with curves of stationary points is treated in [@statphase]. It is shown that the contribution from the continuum of stationary points to the integral $I_j(t)$ is of the order $t^{-1/2}$. This is greater than the contribution arising from a discrete stationary point which is of the order $t^{-1}$. Hence, the continuum of stationary points has effectively slowed down the decay of the integral $I_j(t)$. Consequently, the leading order term of the probability at the origin is $$p_0(t)\sim t^{-1},$$ and we find that such a quantum walk is recurrent. We come across this situation in the case of the Fourier walk on a plane in Chapter \[chap:4d\]. Similar results can be expected for higher dimensional quantum walks where $\omega_j(\mathbf{k})$ have a continuum of stationary points.
A special case for a continuum of stationary points is when $\omega_j(\mathbf{k})$ does not depend on $n$ variables, say $k_1,\ldots k_n$, but has a finite number of stationary points with respect to the remaining $d-n$ variables $k_{n+1},\ldots, k_d$. Indeed, such an $\omega_j(k_{n+1},\ldots, k_d)$ obviously has a zero derivative with respect to $k_i,\ i=1,\ldots n$. Suppose that the function $f_j(\mathbf{k})$ factorizes $$f_j(\mathbf{k}) = g_j(k_1,\ldots,k_n)\cdot h_j(k_{n+1},\ldots,k_d).$$ In such a case $I_j(t)$ is given by the product of time-independent and time-dependent integrals over $n$ and $d-n$ variables $$I_j(t) = \left[\ \int\limits_{\mathbb{K}^n} \frac{d\mathbf{k}}{(2\pi)^n} g_j(k_1,\ldots, k_n)\right]\cdot \left[\ \int\limits_{\mathbb{K}^{d-n}} \frac{d\mathbf{k}}{(2\pi)^{d-n}} e^{i\ \omega_j(k_{n+1},\ldots, k_d) t} h_j(k_{n+1},\ldots, k_d)\right].$$ It is easy to find that if the time-independent integral does not vanish $I_j(t)$ behaves asymptotically like $t^{-(d-n)/2}$. Hence, the asymptotic behaviour of the probability at the origin is $$p_0(t)\sim{t^{-(d-n)}}.$$ The quantum walks of this kind would be recurrent if the eigenenergy $\omega_j$ depended on only a single component of the momenta $\mathbf{k}$. In the extreme case when $\omega_j(\mathbf{k})$ does not depend on $\mathbf{k}$ at all we can extract the time dependence out of the integral $I_j(t)$. If the remaining time-independent integral does not vanish then $p_0(t)$ converges to a non-zero value and we say that such a quantum walk exhibits [*localization*]{}. Note that since $p_0(t)$ has a non-vanishing limit the quantum walk is recurrent. Indeed, localization implies recurrence. We find localization in Chapter \[chap:4c\] for the Grover walk on a plane. Moreover, extending the 2-D Grover walk to $\mathds{Z}^d$ we find quantum walks where some of the eigenenergies are either constant or depend only on a single momentum component. As discussed above, such quantum walks are recurrent.
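Localization is easy to observe numerically. The following sketch (our illustration; it uses the four-dimensional Grover coin $G_{ij}=1/2-\delta_{ij}$ and the diagonal-step topology adopted in Chapter \[chap:4\]) shows that the probability at the origin of the 2-D Grover walk does not decay towards zero, in contrast to the $t^{-d}$ behaviour discussed above.

```python
import numpy as np

# 4x4 Grover coin: G_ij = 1/2 - delta_ij (unitary, since G = J/2 - I and J^2 = 4J)
G = 0.5 * np.ones((4, 4)) - np.eye(4)

def grover_p0(t, coin0=(1.0, 0.0, 0.0, 0.0)):
    """p_0(t) of the Grover walk on Z^2 with diagonal steps (+-1, +-1)."""
    L = 2 * t + 1                       # large enough that np.roll never wraps around
    psi = np.zeros((L, L, 4), dtype=complex)
    psi[t, t, :] = coin0                # walker at the origin (grid centre)
    moves = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(t):
        psi = psi @ G                   # coin flip on the internal index (G is symmetric)
        new = np.zeros_like(psi)
        for c, (dx, dy) in enumerate(moves):
            new[..., c] = np.roll(np.roll(psi[..., c], dx, axis=0), dy, axis=1)
        psi = new
    return float(np.sum(np.abs(psi[t, t, :]) ** 2))

print(grover_p0(20), grover_p0(40))   # settles at a finite value instead of ~ t^{-2}
```

The values at $t=20$ and $t=40$ stay well above the $t^{-2}$ level, signalling the constant eigenvalues of the Grover propagator responsible for localization.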
### Effect of the initial state {#chap:2d4}
So far we have assumed that the function $f_j(\mathbf{k})$ is non-vanishing for $\mathbf{k}$ values corresponding to the stationary points. However, the initial state $\psi$ can be orthogonal to the eigenvector $v_j(\mathbf{k})$ for $\mathbf{k}=\mathbf{k}^0$ corresponding to the stationary point. In such a case the function $f_j(\mathbf{k})$ vanishes for $\mathbf{k}=\mathbf{k}^0$ and the stationary point $\mathbf{k}^0$ does not contribute to the integral $I_j(t)$. Consequently, the decay of $p_0(t)$ can speed up. Hence, for quantum walks we might change the recurrence behaviour and the actual value of the Pólya number by altering the initial state $\psi$. Indeed, we find this non-trivial effect of the initial state for the Grover walk and the Fourier walk on a plane in Chapters \[chap:4c\] and \[chap:4d\].
Recurrence of Unbiased Quantum Walks on Infinite Lattices {#chap:4}
=========================================================
In the present Chapter we determine the recurrence behaviour and the Pólya number of several unbiased quantum walks. We concentrate on the effect of the coin operators and the initial states. For this purpose we fix the topology of the walks. We consider quantum walks where the displacements $\mathbf{e}_i$ have all entries equal to $\pm 1$ $$\mathbf{e}_1 = \left(1,\ldots,1\right)^T,\ldots, \mathbf{e}_{2^d} = \left(-1,\ldots,-1\right)^T.$$ In such a case the coin space has the dimension $n=2^d$ where $d$ is the dimension of the lattice. Moreover, the diagonal matrix $D(\mathbf{k})$ entering the propagator in the Fourier picture (\[teopF\]) can be written as a tensor product $$D(\textbf{k}) = D(k_1)\otimes\ldots\otimes D(k_d)
\label{dk2}$$ of $2\times 2$ diagonal matrices $$D(k_j)=\textrm{Diag}\left(e^{ik_j},e^{-ik_j}\right).$$ This fact allows us to extend some of the results for the quantum walks on a line or on a plane to quantum walks on a $d$-dimensional lattice.
First, in Section \[chap:4b\] we treat the Hadamard walk on $\mathds{Z}^d$ with an independent coin for each spatial dimension. We find that for this quantum walk the probability at the origin is independent of the initial coin state. Hence, a unique Pólya number can be assigned to this quantum walk for each dimension $d$. In contrast to the classical random walks, the Hadamard walk is recurrent only for $d=1$. In Section \[chap:4c\] we analyze the recurrence of the Grover walk on a plane. This quantum walk exhibits localization [@localization] and therefore is recurrent. However, for a particular initial state localization disappears and the Grover walk becomes transient. We find an approximation of the Pólya number for this particular initial state. We then employ the Grover walk on a plane to construct for arbitrary dimension $d$ a quantum walk which is recurrent. This is in great contrast with the classical random walks, which are recurrent only for the dimensions $d=1,2$. Finally, in Section \[chap:4d\] we analyze the Fourier walk on a plane. This quantum walk is recurrent except for a two-parameter family of initial states for which it is transient. For the latter case we find an approximation of the Pólya number depending on the parameters of the initial state. We summarize our results in Section \[chap:4e\].
Hadamard walk on $\mathds{Z}^d$ {#chap:4b}
-------------------------------
Let us start with the analysis of the recurrence behaviour of the Hadamard walk on a line which is driven by the coin $$H = \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
1 & 1 \\
1 & -1 \\
\end{array}
\right).$$ We find that the propagator in the Fourier picture $$\widetilde{U}_H(k) = D(k)\cdot H = \frac{1}{\sqrt{2}}\left(
\begin{array}{cc}
e^{ik} & e^{ik} \\
e^{-ik} & -e^{-ik} \\
\end{array}
\right)
\label{1d:Had}$$ has eigenvalues $e^{i\ \omega_i(k)}$ where the phases $\omega_i(k)$ are given by $$\omega_1(k) = \arcsin\left(\frac{\sin k}{\sqrt{2}}\right),\quad \omega_2(k) = -\pi-\arcsin\left(\frac{\sin k}{\sqrt{2}}\right).$$ Thus the derivatives of $\omega_i$ with respect to $k$ read $$\frac{d\omega_1(k)}{dk}=-\frac{d\omega_2(k)}{dk}=\frac{\cos k}{\sqrt{2 - \sin^2 k}}
\label{der:1d}$$ and we find that the phases $\omega_i(k)$ have common non-degenerate stationary points $k^0=\pm\pi/2$. It follows that the probability at the origin behaves asymptotically like $t^{-1}$. This asymptotic scaling is independent of the initial state. Indeed, no non-zero initial state $\psi$ exists which is orthogonal to both eigenvectors at the common stationary points $k^0=\pm\pi/2$. Hence, the Hadamard walk on a line is recurrent, i.e. the Pólya number equals one, independent of the initial coin state.
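These spectral statements are straightforward to verify numerically (an illustrative check of ours, not part of the derivation): diagonalising $\widetilde{U}_H(k)$ reproduces the phases $\omega_{1,2}(k)$, and a finite difference confirms that the group velocity vanishes only at $k^0=\pm\pi/2$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def eigenphases(k):
    # phases of the eigenvalues of U_H(k) = Diag(e^{ik}, e^{-ik}) . H, folded into (-pi, pi]
    lam = np.linalg.eigvals(np.diag([np.exp(1j * k), np.exp(-1j * k)]) @ H)
    return np.sort(np.angle(lam))

k = 0.7
w1 = np.arcsin(np.sin(k) / np.sqrt(2))        # omega_1(k)
# omega_2(k) = -pi - w1 equals pi - w1 modulo 2*pi, which is what np.angle returns
print(eigenphases(k), (w1, np.pi - w1))

def dw1(k, h=1e-6):                           # numerical derivative of omega_1
    return (np.arcsin(np.sin(k + h) / np.sqrt(2))
            - np.arcsin(np.sin(k - h) / np.sqrt(2))) / (2 * h)

print(dw1(np.pi / 2), dw1(-np.pi / 2))        # both vanish: the stationary points
```

Away from $k^0$ the finite difference matches $\cos k/\sqrt{2-\sin^2 k}$ in magnitude, as stated above.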
We turn to the Hadamard walk on a $d$-dimensional lattice. The coin flip operator has the form of the tensor product of $d$ $2\times 2$ Hadamard matrices $$H_d = H\otimes\ldots\otimes H.
\label{C:ind:x}$$ Hence, we have an independent coin for each spatial dimension. It follows that also the propagator in the Fourier picture has the form of the tensor product $$\widetilde{U}_{H_d}(\textbf{k}) = \widetilde{U}_H(k_1)\otimes\ldots\otimes \widetilde{U}_H(k_d)
\label{C:ind}$$ of $d$ time evolution operators given by Eq. (\[1d:Had\]) with different momenta $k_i$. Hence, the eigenenergies of the propagator (\[C:ind\]) have the form of the sum $$\omega_j(\textbf{k})=\sum_{l=1}^d \omega_{j_l}(k_l).
\label{eigenval:Cind}$$ Therefore we find that the asymptotic behaviour of this quantum walk follows directly from the asymptotics of the Hadamard walk on a line. Indeed, the derivative of the phase $\omega_j(\textbf{k})$ with respect to $k_l$ reads $$\frac{\partial \omega_j(\textbf{k})}{\partial k_l} = \frac{d \omega_{j_l}(k_l)}{d k_l},
\label{der:phase:Cind}$$ and so $\omega_j(\textbf{k})$ has a stationary point $\textbf{k}^0=\left(k_1^0,k_2^0,\ldots,k_d^0\right)$ if and only if for all $l=1,\ldots,d$ the point $k_l^0$ is a stationary point of $\omega_{j_l}(k_l)$. As we have found from Eq. (\[der:1d\]) the stationary points of $\omega_{j_l}$ are $k^0_l=\pm\pi/2$. Hence, all phases $\omega_j(\textbf{k})$ have $2^d$ common stationary points $\textbf{k}^0=\left(\pm\pi/2,\ldots,\pm\pi/2\right)$. It follows that the asymptotic behaviour of the probability $p_0(t)$ is given by $$p_0(t)\sim t^{-d}.
\label{asymp:Cind}$$ As follows from the results for the Hadamard walk on a line the asymptotic behaviour given by Eq. (\[asymp:Cind\]) is independent of the initial coin state. Compared to classical walks this is a quadratically faster decay of the probability at the origin which is due to the quadratically faster spreading of the probability distribution of the quantum walk.
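Because the propagator (\[C:ind\]) factorizes, the return amplitude for a product initial coin state factorizes as well, so $p_0^{(d)}(t)=\left[p_0^{(1)}(t)\right]^d$. The sketch below (an illustrative check of ours) computes $p_0^{(1)}(t)$ by direct iteration and verifies the values that enter the Pólya number estimate later in this Chapter.

```python
import numpy as np

def p0_line(tmax):
    """p_0(t), t = 0..tmax, of the 1-D Hadamard walk started in |L>."""
    psi = np.zeros((2 * tmax + 1, 2), dtype=complex)   # columns: psi_L, psi_R
    psi[tmax, 0] = 1.0
    s = 1.0 / np.sqrt(2.0)
    p0 = [1.0]
    for _ in range(tmax):
        new = np.zeros_like(psi)
        new[:-1, 0] = s * (psi[1:, 0] + psi[1:, 1])
        new[1:, 1] = s * (psi[:-1, 0] - psi[:-1, 1])
        psi = new
        p0.append(float(np.sum(np.abs(psi[tmax]) ** 2)))
    return p0

p0 = p0_line(60)
d = 2
print(p0[2] ** d, p0[4] ** d, p0[6] ** d)   # 1/2^d, 1/8^d, 1/8^d for the d-dimensional walk
```

The late-time values of $p_0^{(1)}(t)$ stay well below $0.05$, consistent with the $t^{-1}$ envelope, so $p_0^{(d)}(t)$ decays like $t^{-d}$.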
We illustrate the results for the Hadamard walk on a plane driven by the coin $$H_2 = \frac{1}{2}\left(
\begin{array}{rrrr}
1 & 1 & 1 & 1\\
1 & -1 & 1 & -1 \\
1 & 1 & -1 & -1\\
1 & -1 & -1 & 1\\
\end{array}
\right)$$ in [Figure \[had:2d\]]{}. Here we show the probability distribution for different choices of the initial state, together with the probability at the origin $p_0(t)$. The first two plots indicate that the initial state of the coin influences mainly the edges of the probability distribution. However, the probability $p_0(t)$ is unaffected and is exactly the same for all initial states. The lower plot confirms the asymptotic behaviour of the probability at the origin $p_0(t)\sim t^{-2}$.
![](unbiased_f1an.eps "fig:"){width="60.00000%"} ![](unbiased_f1bn.eps "fig:"){width="60.00000%"} ![Probability distribution of the Hadamard walk on a plane after 50 steps and the probability at the origin $p_0(t)$ for different choices of the initial state. In the upper plot we choose the initial state $\frac{1}{2}(1,i,i,-1)^T$ which leads to a symmetric probability distribution, whereas in the middle plot we choose the initial state $(1,0,0,0)^T$ resulting in a dominant peak of the probability distribution in the lower-left corner of the $(m,n)$ plane. However, the initial state influences the probability distribution only near the edges. The probability $p_0(t)$ is unaffected and is the same for all initial coin states. The lower plot confirms the asymptotic behaviour of the probability at the origin $p_0(t)\sim t^{-2}$ independent of the initial state.[]{data-label="had:2d"}](unbiased_f1cn.eps "fig:"){width="45.00000%"}
Since the probability at the origin $p_0(t)$ decays like $t^{-d}$ we find that the Hadamard walk on $\mathds{Z}^d$ is recurrent only for dimension $d=1$ and is transient for all higher dimensions $d\geq 2$. Moreover, the whole sequence of probabilities $p_0(t)$ is independent of the initial state. Hence, the Pólya number for this class of quantum walks depends only on the dimension of the walk $d$, thus resembling the property of the classical walks. The transience for $d\geq 2$ is a direct consequence of the faster decay of the probability at the origin, which in this case cannot be compensated for by interference.
Let us estimate the value of the Pólya number for the dimension $d\geq 2$. As depicted in the lowest plot of [Figure \[had:2d\]]{} the probability at the origin rapidly approaches its asymptotic form $$p_0(t)\approx\frac{1}{(\pi t)^{d}}.$$ Hence, already the first few terms of the product in Eq. (\[polya:approx\]) are sufficient to estimate the value of the Pólya number. Taking into account the first three terms of $p_0(t)$ which are found to be $$p_0(2)=\frac{1}{2^d},\quad p_0(4)=p_0(6)=\frac{1}{8^d},$$ we obtain the following approximation of the Pólya number $$P_{H_d}\approx 1 - \left(1-\frac{1}{2^d}\right)\left(1-\frac{1}{8^d}\right)^2.
\label{Polya:ind:est}$$ We compare the estimation in Eq. (\[Polya:ind:est\]) with the numerical results obtained from the simulation of the Hadamard walk with 1000 steps in [Table \[tab1\]]{} and find that they are in excellent agreement.
-- -- -- --
-- -- -- --
: Comparison of the Pólya number for the Hadamard walk on $\mathds{Z}^d$ obtained from the numerical simulation and the estimation of Eq. (\[Polya:ind:est\]).[]{data-label="tab1"}
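As a quick numerical check, the truncated-product estimate can be evaluated directly. A minimal sketch, using only the first three non-zero return probabilities quoted in the text:

```python
def polya_estimate(d):
    """Estimate the Polya number of the d-dimensional Hadamard walk from
    the first three non-zero return probabilities given in the text:
    p0(2) = 2**-d, p0(4) = p0(6) = 8**-d."""
    product = 1.0
    for p0 in (2.0 ** -d, 8.0 ** -d, 8.0 ** -d):
        product *= 1.0 - p0
    return 1.0 - product

# For d = 2 this evaluates to approximately 0.27325.
print(polya_estimate(2))
```

The estimate decreases rapidly with the dimension, reflecting the faster decay of $p_0(t)$ for larger $d$.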
Grover walk on a plane {#chap:4c}
----------------------
We turn to the Grover walk on a plane which is driven by the coin $$G = \frac{1}{2}\left(
\begin{array}{rrrr}
-1 & 1 & 1 & 1 \\
1 & -1 & 1 & 1 \\
1 & 1 & -1 & 1 \\
1 & 1 & 1 & -1 \\
\end{array}
\right).
\label{grover:coin}$$ It was identified numerically [@2dw1] and later proven analytically [@localization] that the Grover walk exhibits a localization effect, i.e. the probability $p_0(t)$ does not vanish but converges to a non-zero value except for a particular initial state $$\psi_G\equiv\psi_G(0,0,0) = \frac{1}{2}\left(1,-1,-1,1\right)^T.
\label{grover:nospike:state}$$
In order to explain the localization we analyze the eigenvalues of the propagator in the Fourier picture for the Grover walk $$\widetilde{U}_G(k_1,k_2) = \left(D(k_1)\otimes D(k_2)\right) G.
\label{gkl}$$ We find that they are given by $$\label{eigenval:Grover}
\lambda_{1,2} = \pm 1,\qquad \lambda_{3,4}(k_1,k_2) = e^{\pm i\ \omega(k_1,k_2)}$$ where the phase $\omega(k_1,k_2)$ reads $$\cos(\omega(k_1,k_2)) = -\cos{k_1}\cos{k_2}.
\label{phase:Grover}$$ The eigenvalues $\lambda_{1,2}$ are constant. As a consequence the probability at the origin is non-vanishing as discussed in detail in Chapter \[chap:2d3\], unless the initial state is orthogonal to the eigenvectors corresponding to $\lambda_{1,2}$ at every point $(k_1,k_2)$. By explicitly calculating the eigenvectors of the matrix $\widetilde{U}_G(k_1,k_2)$ it is straightforward to see that such a vector is unique and equals that in Eq. (\[grover:nospike:state\]), in agreement with the result derived in [@localization].
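The eigenvalue structure of Eq. (\[eigenval:Grover\]) is easy to verify numerically. A sketch, assuming the one-dimensional shift in the Fourier picture has the form $D(k)=\mathrm{diag}(e^{ik},e^{-ik})$ as for the walk on a line:

```python
import numpy as np

# Grover coin of Eq. (grover:coin)
G = 0.5 * np.array([[-1, 1, 1, 1],
                    [1, -1, 1, 1],
                    [1, 1, -1, 1],
                    [1, 1, 1, -1]], dtype=complex)

def D(k):
    # assumed form of the one-dimensional shift in the Fourier picture
    return np.diag([np.exp(1j * k), np.exp(-1j * k)])

def U_G(k1, k2):
    # propagator of the Grover walk in the Fourier picture, Eq. (gkl)
    return np.kron(D(k1), D(k2)) @ G

k1, k2 = 0.7, -1.3                      # a generic pair of momenta
lam = np.linalg.eigvals(U_G(k1, k2))
# two eigenvalues are the momentum-independent constants +1 and -1;
# the remaining pair has real part -cos(k1)cos(k2), cf. Eq. (phase:Grover)
print(np.sort_complex(np.round(lam, 6)))
```

Repeating the computation for other momenta confirms that $\pm 1$ appear in the spectrum for every $(k_1,k_2)$.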
It is easy to show that for the particular initial state given by Eq. (\[grover:nospike:state\]) the probability $p_0(t)$ decays like $t^{-2}$. Indeed, as the initial state of Eq. (\[grover:nospike:state\]) is orthogonal to the eigenvectors corresponding to $\lambda_{1,2}$ the asymptotic behaviour is determined by the remaining eigenvalues $\lambda_{3,4}(k_1,k_2)$, or more precisely by the stationary points of $\omega(k_1,k_2)$. From Eq. (\[phase:Grover\]) we find that $\omega(k_1,k_2)$ has only the non-degenerate stationary points $k_1^0,\ k_2^0=\pm \pi/2$, which yield the $t^{-2}$ decay of the return probability. We conclude that the Grover walk on a 2-D lattice is recurrent and its Pólya number equals one for all initial states except the one given in Eq. (\[grover:nospike:state\]) for which the walk is transient. We illustrate these results in [Figure \[grover:fig1\]]{} and [Figure \[grover:fig2\]]{}.
In [Figure \[grover:fig1\]]{} we show the probability distribution generated by the Grover walk and the probability at the origin for a symmetric initial state $$\psi_S=\frac{1}{2}(1,i,i,-1)^T.
\label{grover:psi:s}$$ This particular choice of the initial state results in a probability distribution with a dominant central spike, as depicted in the upper plot. The lower plot indicates that the probability at the origin has a non-vanishing limit.
In contrast, for the initial state $\psi_G$ given by (\[grover:nospike:state\]) the central spike in the probability distribution vanishes, as we illustrate in the upper plot of [Figure \[grover:fig2\]]{}. The lower plot indicates that the probability at the origin decays like $t^{-2}$.
![Probability distribution of the Grover walk after 50 steps and the probability at the origin for a symmetric initial state (\[grover:psi:s\]). This particular choice of the initial state leads to a symmetric probability distribution with a dominant central spike, as depicted in the upper plot. The lower plot indicates that the probability at the origin has a non-vanishing limit as $t$ approaches infinity. The results are qualitatively the same for all initial coin states except for $\psi_G$ given in (\[grover:nospike:state\]), as we illustrate in [Figure \[grover:fig2\]]{}.[]{data-label="grover:fig1"}](unbiased_f2an.eps "fig:"){width="70.00000%"} ![](unbiased_f2bn.eps "fig:"){width="60.00000%"}
![Probability distribution of the Grover walk after 50 steps and the probability at the origin for a particular initial state $\psi_G$ given by Eq. (\[grover:nospike:state\]). In contrast to [Figure \[grover:fig1\]]{} we find that the central spike vanishes and most of the probability is situated at the edges. Moreover, the probability at the origin vanishes as $t$ approaches infinity, as we illustrate in the lower figure. Here we plot the probability $p_0(t)$ multiplied by $t^2$ to unravel the asymptotic behavior of the probability at the origin. The plot confirms the analytic result of the scaling $p_0(t)\sim t^{-2}$.[]{data-label="grover:fig2"}](unbiased_f3an.eps "fig:"){width="70.00000%"} ![](unbiased_f3bn.eps "fig:"){width="60.00000%"}
Let us estimate the Pólya number of the Grover walk for the initial state of Eq. (\[grover:nospike:state\]). The numerical simulations indicate that the probability at the origin $p_0(t)$ for the initial state $\psi_G$ is the same as the probability at the origin of the 2-D Hadamard walk. Hence, their Pólya numbers coincide. With the help of the relation (\[Polya:ind:est\]) we can estimate the Pólya number of the Grover walk with the initial state of $\psi_G$ by $$P_G(\psi_G) \equiv P_{H_2}\approx 0.27325.$$
The results derived above allow us to construct a quantum walk which is recurrent for an arbitrary dimension $d$, except for a subspace of initial states. Let us first consider the case when the dimension of the walk is even and equals $2d$. We choose the coin as a tensor product $$G_{2d} = \otimes^d G
\label{c2d}$$ of $d$ Grover coins given by Eq. (\[grover:coin\]). As follows from Eqs. (\[teopF\]) and (\[dk2\]) the time evolution operator in the Fourier picture is also a tensor product $$\widetilde{U}_{G_{2d}}(\textbf{k}) = \widetilde{U}_G(k_1,k_2)\otimes\ldots\otimes \widetilde{U}_G(k_{2d-1},k_{2d})
\label{c2d:k}$$ of the matrices $\widetilde{U}_G$ defined by Eq. (\[gkl\]) with different Fourier variables $k_i$. Hence, the eigenvalues of $\widetilde{U}_{G_{2d}}(\textbf{k})$ are given by the products of the eigenvalues of $\widetilde{U}_G$. Since two eigenvalues of $\widetilde{U}_G$ are constant, as we have found in Eq. (\[eigenval:Grover\]), the $2^d$ products formed entirely from these constants yield eigenvalues of $\widetilde{U}_{G_{2d}}(\textbf{k})$ that are likewise independent of $\textbf{k}$. As we have discussed in Chapter \[chap:2d3\] the probability $p_0(t)$ converges to a non-zero value and therefore the quantum walk exhibits localization.
In the case of odd dimension $2d+1$ we augment the coin given by Eq. (\[c2d\]) by the Hadamard coin for the extra spatial dimension $$G_{2d+1} = G_{2d}\otimes H.
\label{c2d1}$$ Performing a similar analysis as in the case of even dimensions we find that for the quantum walk driven by the coin $G_{2d+1}$ the probability that the walk returns to the origin decays like $t^{-1}$ due to the Hadamard walk in the extra spatial dimension. Hence, this quantum walk is recurrent.
We note that due to the fact that the 2-D Grover walk is transient for the initial state $\psi_G$ the same statement holds for the above constructed quantum walks, provided that the initial state contains $\psi_G$ in its tensor product decomposition. Such vectors form a subspace of dimension $4^{d-1}$ for even dimensional walks and $2\times 4^{d-1}$ for odd dimensional walks.
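The counting of constant eigenvalues in the tensor-product construction can be checked numerically. A sketch for two Grover coins (a four-dimensional walk), again assuming the shift $D(k)=\mathrm{diag}(e^{ik},e^{-ik})$:

```python
import numpy as np

# Grover coin and its Fourier-picture propagator, as before
G = 0.5 * np.array([[-1, 1, 1, 1],
                    [1, -1, 1, 1],
                    [1, 1, -1, 1],
                    [1, 1, 1, -1]], dtype=complex)

def D(k):
    # assumed form of the one-dimensional shift in the Fourier picture
    return np.diag([np.exp(1j * k), np.exp(-1j * k)])

def U_G(k1, k2):
    return np.kron(D(k1), D(k2)) @ G

# propagator of the 4-D walk driven by the coin G (x) G, cf. Eq. (c2d:k)
U4 = np.kron(U_G(0.7, -1.3), U_G(0.4, 2.1))
lam = np.linalg.eigvals(U4)

# count the eigenvalues equal to the constants +1 or -1: these are the
# products of the constant eigenvalues of the factors, 2^d of the 4^d
# eigenvalues in total (here 4 of 16 at generic momenta)
n_const = sum(1 for z in lam if min(abs(z - 1), abs(z + 1)) < 1e-8)
print(n_const)
```

The existence of these momentum-independent eigenvalues is all that the localization argument requires.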
Fourier walk on a plane {#chap:4d}
-----------------------
We turn to the 2-D Fourier walk driven by the coin $$F = \frac{1}{2}\left(
\begin{array}{rrrr}
1 & 1 & 1 & 1 \\
1 & i & -1 & -i \\
1 & -1 & 1 & -1 \\
1 & -i & -1 & i \\
\end{array}
\right).$$ As we will see, the Fourier walk does not exhibit localization. However, the decay of the probability $p_0(t)$ is slowed down to $t^{-1}$, so the Fourier walk is recurrent except for a subspace of initial states.
We start our analysis of the Fourier walk with the propagator $$\widetilde{U}_F(k_1,k_2) = \left(D(k_1)\otimes D(k_2)\right) F,$$ which determines the time evolution in the Fourier picture. It appears difficult to determine the eigenvalues of $\widetilde{U}_F(k_1,k_2)$ analytically. However, we only need to determine the stationary points of their phases $\omega_j(k_1,k_2)$. For this purpose we consider the eigenvalue equation $$\Phi(k_1,k_2,\omega)\equiv\det{\left(\widetilde{U}_F(k_1,k_2)-e^{i\ \omega} I\right)}=0.$$ This equation gives us the eigenenergies $\omega_i(k_1,k_2)$ as the solutions of the implicit function $$\Phi(k_1,k_2,\omega) = 1 + \cos(2k_2)-2\cos(2\omega)+2\sin{2\omega} + 4\cos{k_2}\sin{\omega}\left(\sin{k_1}-\cos{k_1}\right) = 0.$$ Using implicit differentiation we find the derivatives of the phase $\omega$ $$\begin{aligned}
\nonumber \frac{\partial \omega}{\partial k_1} & = & -\frac{\cos{k_2}\sin{\omega}\left(\cos{k_1}+\sin{k_1}\right)}{\cos(2\omega)+\sin(2\omega)+\cos{k_2}\cos{\omega}\left(\sin{k_1}-\cos{k_1}\right)}\\
\frac{\partial \omega}{\partial k_2} & = & -\frac{2\sin{k_2}\sin{\omega}\left(\cos{k_1}-\sin{k_1}\right)-\sin(2k_2)}{2\left(\cos(2\omega)+\sin(2\omega)+\cos{k_2}\cos{\omega}\left(\sin{k_1}-\cos{k_1}\right)\right)}
\label{fourier:der}\end{aligned}$$ with respect to $k_1$ and $k_2$. Though we cannot eliminate $\omega$ on the RHS of Eq. (\[fourier:der\]), we can identify the stationary points $\textbf{k}^0=(k_1^0,k_2^0)$ $$\left.\frac{\partial\omega(\textbf{k})}{\partial k_i}\right|_{\textbf{k}=\textbf{k}^0}=0,\quad i=1,2$$ of $\omega(k_1,k_2)$ with the help of the implicit function $\Phi(k_1,k_2,\omega)$. We find the following:\
([*i*]{}) $\omega_{1,2}(k_1,k_2)$ have stationary lines $$\gamma_1=(k_1,0)\ \textrm{and}\ \gamma_2=(k_1,\pi)$$\
([*ii*]{}) all four phases $\omega_{i}(k_1,k_2)$ have stationary points for $$k_1^0=\frac{\pi}{4},\ -\frac{3\pi}{4}\quad \textrm{and}\quad k_2^0=\pm\frac{\pi}{2}$$
It follows from the discussion of Chapter \[chap:2d3\] that the two phases $\omega_{1,2}(k_1,k_2)$ with stationary lines $\gamma_{1,2}$ are responsible for the slow down of the decay of the probability $p_0(t)$ to $t^{-1}$ for the Fourier walk, unless the initial coin state is orthogonal to the corresponding eigenvectors $v_{1,2}(k_1,k_2)$ at the stationary lines. For such an initial state the probability $p_0(t)$ behaves like $t^{-2}$ as the asymptotics of the integral given by Eq. (\[psi:0\]) is determined only by the stationary points ([*ii*]{}).
Let us determine the states $\psi_F$ which lead to the fast decay $t^{-2}$ of the probability that the Fourier walk returns to the origin. The states $\psi_F$ have to be constant vectors fulfilling the conditions $$\left(v_{1,2}(\mathbf{k}),\psi_F\right)=0 \quad \forall\ \mathbf{k}\in\gamma_{1,2},$$ which implies that $\psi_F$ must be a linear combination of $v_{3,4}(\mathbf{k}\in\gamma_{1,2})$ forming a two-dimensional subspace in $\mathcal{H}_C$. For $k_2=0,\pi$ we can find the eigenvectors of the matrix $\widetilde{U}_F(k_1,k_2)$ explicitly $$\begin{aligned}
\nonumber v_1(k_1,0) & = & v_2(k_1,\pi) = \frac{1}{2}\left(e^{-ik_1},1,-e^{-ik_1},1\right)^T\\
\nonumber v_1(k_1,\pi) & = & v_2(k_1,0) = \frac{1}{2}\left(-e^{-ik_1},1,e^{-ik_1},1\right)^T\\
\nonumber v_3(k_1,0) & = & v_3(k_1,\pi) = \frac{1}{\sqrt{2}}(1,0,1,0)^T \\
\nonumber v_4(k_1,0) & = & v_4(k_1,\pi) = \frac{1}{\sqrt{2}}(0,1,0,-1)^T.\end{aligned}$$ The explicit form of $\psi_F$ reads $$\psi_F(a,b) = \left(a,b,a,-b\right)^T,
\label{psi:F}$$ where $a,b\in \mathds{C}$. We point out that the particular initial state $$\psi_F\left(a=\frac{1}{2},b=\frac{1-i}{2\sqrt{2}}\right) = \frac{1}{2}\left(1,\frac{1-i}{\sqrt{2}},1,-\frac{1-i}{\sqrt{2}}\right)^T
\label{sym:F}$$ which was identified in [@2dw1] as the state which leads to a symmetric probability distribution with no peak in the neighborhood of the origin belongs to the family described by Eq. (\[psi:F\]).
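The defining property of the family $\psi_F(a,b)$, namely orthogonality to $v_{1,2}$ along the stationary lines, can be verified directly. A sketch using the explicit eigenvectors listed above:

```python
import numpy as np

def v1(k1):
    # v1(k1, 0) = v2(k1, pi) from the list above
    return 0.5 * np.array([np.exp(-1j * k1), 1, -np.exp(-1j * k1), 1])

def v2(k1):
    # v1(k1, pi) = v2(k1, 0)
    return 0.5 * np.array([-np.exp(-1j * k1), 1, np.exp(-1j * k1), 1])

def psi_F(a, b):
    # the family of Eq. (psi:F)
    return np.array([a, b, a, -b])

psi = psi_F(0.3, 0.2 + 0.1j)            # an arbitrary member of the family
for k1 in np.linspace(-np.pi, np.pi, 7):
    # np.vdot conjugates its first argument, i.e. computes (v, psi)
    assert abs(np.vdot(v1(k1), psi)) < 1e-12
    assert abs(np.vdot(v2(k1), psi)) < 1e-12
print("psi_F is orthogonal to v1, v2 along the stationary lines")
```

A state outside the family, e.g. $(1,0,0,0)^T$, has a non-vanishing overlap with $v_{1,2}$ and therefore exhibits the slower $t^{-1}$ decay.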
![Probability distribution after 50 steps and the time evolution of the probability $p_0(t)$ for the Fourier walk with the initial state $\psi=(1,0,0,0)^T$. The upper plot of the probability distribution reveals a presence of the central peak. Indeed, $\psi$ is not a member of the family $\psi_F(a,b)$. However, in contrast to the Grover walk the peak vanishes. In the lower plot we illustrate this by showing the probability $p_0(t)$ multiplied by $t$ to unravel the asymptotic behaviour $p_0(t)\sim t^{-1}$.[]{data-label="f3d1"}](unbiased_f4an.eps "fig:"){width="70.00000%"} ![](unbiased_f4bn.eps "fig:"){width="60.00000%"}
![Probability distribution after 50 steps and the time evolution of the probability $p_0(t)$ for the Fourier walk with the initial state given by Eq. (\[sym:F\]). Since $\psi$ is a member of the family $\psi_F(a,b)$ the central peak in the probability distribution is not present, as depicted on the upper plot. The lower plot indicates that the probability $p_0(t)$ decays like $t^{-2}$.[]{data-label="f3d2"}](unbiased_f5an.eps "fig:"){width="70.00000%"} ![](unbiased_f5bn.eps "fig:"){width="60.00000%"}
We illustrate the results in [Figure \[f3d1\]]{} and [Figure \[f3d2\]]{}. In [Figure \[f3d1\]]{} we plot the probability distribution and the probability $p_0(t)$ for the Fourier walk with the initial state $\psi=(1,0,0,0)^T$. This vector is not a member of the family $\psi_F(a,b)$ defined by Eq. (\[psi:F\]) and, accordingly, a central peak is present. However, in contrast to the Grover walk, the peak vanishes with time, as shown by plotting the probability $p_0(t)$ multiplied by $t$, indicating a decay like $t^{-1}$ in agreement with the analytical result. In contrast, for [Figure \[f3d2\]]{} we have chosen the initial state given by Eq. (\[sym:F\]) which is a member of the family $\psi_F(a,b)$. The upper plot shows a highly symmetric probability distribution in which the central peak is absent, and the lower plot indicates that the probability $p_0(t)$ decays like $t^{-2}$.
We conclude that the Fourier walk is recurrent except for the two-dimensional subspace of initial states defined by Eq. (\[psi:F\]) for which the walk is transient.
We turn to the estimation of the Pólya numbers of the 2-D Fourier walk for the two-dimensional subspace of initial states given by Eq. (\[psi:F\]). We make use of the normalization condition and the fact that the global phase of a state is irrelevant. Hence, we can choose $a$ to be non-negative real and $b$ is then given by the relation $$b = \sqrt{\frac{1}{2}-a^2}e^{i\phi}.$$ Therefore, we parameterize the family of states defined by Eq. (\[psi:F\]) by two real parameters, $a$ ranging from $0$ to $\frac{1}{\sqrt{2}}$ and the mutual phase $\phi\in[0,2\pi)$. The exact expression for $p_0(a,\phi,t)$ can be written in the form $$p_0(a,\phi,t)=\frac{K_1(t)-K_2(t) a\sqrt{\frac{1}{2}-a^2}(\cos{\phi}-\sin{\phi})}{t^2},$$ where $K_{1,2}(t)$ have to be determined numerically. Nevertheless, the numerical simulation of $p_0(a,\phi,t)$ at two values of $(a,\phi)$ enables us to find the numerical values of $K_{1,2}(t)$ and we can then evaluate $p_0(a,\phi,t)$ at any point $(a,\phi)$. The probability $p_0(a,\phi,t)$ attains its maximum at $a=\frac{1}{2}$, $\phi=\frac{3\pi}{4}$ and its minimum for the same value of $a$ and the phase $\phi=\frac{7\pi}{4}$. Consequently, these points also represent the maximum and the minimum of the Pólya numbers.
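The extremal phases follow from the factor $\cos\phi-\sin\phi=\sqrt{2}\cos(\phi+\pi/4)$ alone, assuming $K_2(t)>0$. A small numerical sketch:

```python
import numpy as np

# p0(a, phi, t) depends on phi only through cos(phi) - sin(phi); for K2 > 0
# the maximum of p0 sits where this factor is minimal, and vice versa
phi = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
factor = np.cos(phi) - np.sin(phi)      # equals sqrt(2) cos(phi + pi/4)

phi_max_p0 = phi[np.argmin(factor)]     # -> 3*pi/4
phi_min_p0 = phi[np.argmax(factor)]     # -> 7*pi/4
print(phi_max_p0 / np.pi, phi_min_p0 / np.pi)
```

The $a$-dependence enters through $a\sqrt{1/2-a^2}$, which is maximal at $a=1/2$, consistent with the extremal points quoted above.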
![Approximation of the Pólya numbers for the 2-D Fourier walk and the initial states from the family of states defined by Eq. (\[psi:F\]) in their dependence on the parameters of the initial state $a$ and $\phi$. Here we have evaluated the first 100 terms of $p_0(a,\phi,t)$ exactly. The Pólya numbers cover the whole interval between the minimal value of $P_F^{min}\approx 0.314$ and the maximal value of $P_F^{max}\approx 0.671$. The extreme values are attained for $a=1/2$ and $\phi^{min}=7\pi/4$, respectively $\phi^{max}=3\pi/4$. On the lower plot we show the cut at the value $a=1/2$ containing both the maximum and the minimum.[]{data-label="polya:fourier"}](unbiased_f6an.eps "fig:"){width="70.00000%"} ![](unbiased_f6bn.eps "fig:"){width="60.00000%"}
In [Figure \[polya:fourier\]]{} we present the approximation of the Pólya number Eq. (\[polya:approx\]) in its dependence on $a$ and $\phi$ and a cut through the plot at the value $a=1/2$ containing both the global minimum and the global maximum. Here we have evaluated the first 100 terms of $p_0(a,\phi,t)$ exactly. We see that the values of the Pólya number vary from the minimum $P_F^{min}\approx 0.314$ to the maximal value of $P_F^{max}\approx 0.671$. We note that for the initial states that do not belong to the subspace defined by Eq. (\[psi:F\]) the Pólya number equals one.
Conclusions {#chap:4e}
-----------
Our results, summarized in [Table \[tab2\]]{}, demonstrate that there is a remarkable freedom in the value of the Pólya number for quantum walks, depending both on the initial state and the coin operator, in contrast to the classical random walk where the dimension of the lattice uniquely determines the recurrence probability. Hence, the quantum Pólya number is able to indicate physically different regimes in which a quantum walk can be operated.
-- -- -- -----------------------
$1$ for $d=1$
$<1$ for $d\geq 2$
independent of $\psi$
$<1$
dependent on $\psi$
-- -- -- -----------------------
: Summary of the main results. We list the types of studied quantum walks, the asymptotic behaviour of the probability at the origin and the Pólya number in the respective cases in its dependence on the initial state $\psi$.[]{data-label="tab2"}
Recurrence of Biased Quantum Walks on a Line {#chap:5}
============================================
\[chap:5a\]
Recurrence of classical random walks is a consequence of the walk’s symmetry. As we briefly review in the Appendix \[app:a2\], they are recurrent if and only if the mean value of the position of the particle vanishes. This is due to the fact that the spreading of the probability distribution of the position is diffusive while the mean value of the position propagates with a constant velocity. In contrast, for quantum walks both the spreading of the probability distribution and the propagation of the mean value are ballistic. In the present Chapter we show that this allows for maintaining recurrence even when the symmetry is broken.
The Chapter is organized as follows: In Section \[chap:5b\] we describe the biased quantum walk on a line, find the propagator in the momentum representation and solve the time evolution equation. The recurrence of the quantum walk is determined by the asymptotics of the probability at the origin. We perform this analysis in Section \[chap:5c\] and find the conditions under which the biased quantum walk on a line is recurrent. In Section \[chap:5d\] we analyze the recurrence of biased quantum walks from a different perspective. We find that the recurrence is related to the velocities of the peaks of the probability distribution of the quantum walk. The explicit form of the velocities leads us to the same condition derived in Section \[chap:5c\]. Finally, in Section \[chap:5e\] we derive the formula for the mean value of the position of the particle in dependence of the parameters of the walk and the initial state. We find that there exist genuine biased quantum walks which are recurrent. We summarize our results in the conclusions of Section \[chap:5f\].
Description of the walk {#chap:5b}
-----------------------
Let us consider biased quantum walks on a line where the particle has two possibilities — jump to the right or to the left. Without loss of generality we restrict ourselves to biased quantum walks where the jump to the right is of the length $r$ and the jump to the left has a unit size. We depict the biased quantum walk schematically in [Figure \[chap5:fig1\]]{}.
![Schematics of the biased quantum walk on a line. If the coin is in the state $|R\rangle$ the particle moves to the right to a point at distance $r$. With the coin state $|L\rangle$ the particle makes a unit length step to the left. Before the step itself the coin state is rotated according to the coin operator $C(\rho)$.[]{data-label="chap5:fig1"}](biased_fig1.eps){width="60.00000%"}
The Hilbert space of the particle has the form of the tensor product $${\cal H} = {\cal H}_P\otimes{\cal H}_C$$ of the position space $${\cal H}_P = \ell^2(\mathds{Z}) = \textrm{Span}\left\{|m\rangle|\ m\in\mathds{Z}\right\},$$ and the two dimensional coin space $${\cal H}_C = \mathds{C}^2 = \textrm{Span}\left\{|R\rangle,|L\rangle\right\}.$$ The propagator of the quantum walk in the position representation is $$U = S \left(I_P\otimes C\right),$$ where the displacement operator $S$ has the form $$S = \sum\limits_{m=-\infty}^{+\infty}|m+r\rangle\langle m|\otimes|R\rangle\langle R|+\sum\limits_{m=-\infty}^{+\infty}|m-1\rangle\langle m|\otimes|L\rangle\langle L|.$$ The coin flip $C$ can in general be an arbitrary unitary operator acting on the coin space $\mathcal{H}_C$. However, as has been discussed in [@2dw1] the probability distribution is not affected by the complex phases of the coin operator. Hence, it is sufficient to consider the one-parameter family of coins $$C(\rho) = \left(
\begin{array}{cc}
\sqrt{\rho} & \sqrt{1-\rho} \\
\sqrt{1-\rho} & -\sqrt{\rho} \\
\end{array}
\right).$$ From now on we restrict ourselves to this family of coins. The value of $\rho=1/2$ corresponds to the well known case of the Hadamard walk.
In the momentum representation the propagator has the form $$\widetilde{U}(k) = \textrm{Diag}\left(e^{ikr},e^{-ik}\right)\cdot C(\rho) = \left(
\begin{array}{cc}
\sqrt{\rho}e^{ikr} & \sqrt{1-\rho}e^{ikr} \\
\sqrt{1-\rho}e^{-ik} & -\sqrt{\rho}e^{-ik} \\
\end{array}
\right).$$ Since it is a unitary operator its eigenvalues are $e^{i\ \omega_{1,2}}$ where the phases are given by $$\begin{aligned}
\nonumber \omega_1(k) & = & \frac{r-1}{2}k+\arcsin\left(\sqrt{\rho}\sin\left(\frac{r+1}{2}k\right)\right),\\
\omega_2(k) & = & \frac{r-1}{2}k -\pi-\arcsin\left(\sqrt{\rho}\sin\left(\frac{r+1}{2}k\right)\right).
\label{omega}\end{aligned}$$ We denote the corresponding eigenvectors by $v_{1,2}(k)$. We give their explicit form in Section \[chap:5f\]. The solution of the time evolution equation in the Fourier picture has the standard form $$\widetilde{\psi}(k,t) = \sum_{j=1}^2 e^{i\ \omega_j(k)t}\left(v_j(k),\tilde{\psi}(k,0)\right)v_j(k),$$ where $\tilde{\psi}(k,0)$ is the Fourier transformation of the initial state. We restrict ourselves to the situation where the particle is initially localized at the origin as dictated by the nature of the problem we wish to study. Hence, the Fourier transformation of such an initial condition is equal to the initial state of the coin which we denote by $\psi$. Since $\psi$ can be an arbitrary normalized complex two-component vector we parameterize it by two parameters $a\in[0,1]$ and $\varphi\in[0,2\pi)$ in the form $$\psi = \left(
\begin{array}{c}
\sqrt{a} \\
\sqrt{1-a}e^{i\varphi} \\
\end{array}
\right).
\label{psi:init}$$ The solution in the position representation is obtained by performing the inverse Fourier transformation $$\psi(m,t) = \int_{-\pi}^\pi\frac{dk}{2\pi}\ \widetilde{\psi}(k,t)\ e^{-imk} = \sum_{j=1}^2\int_{-\pi}^\pi\frac{dk}{2\pi}e^{i(\omega_j(k)t-mk)}\ \left(v_j(k),\psi\right)v_j(k).
\label{chap5:inv:f}$$
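The evolution can equally well be carried out directly in the position representation. A minimal sketch; the grid size, parameter values and number of steps are illustrative (the number of steps is chosen divisible by $r+1$ so that a return to the origin is possible):

```python
import numpy as np

def biased_walk(r=3, rho=0.5, a=0.5, phi=0.0, steps=48):
    """Direct simulation of the biased walk on a line."""
    size = steps * (r + 1) + 1                  # large enough to avoid wrap-around
    origin = steps                              # leftmost reachable site is index 0
    psi = np.zeros((size, 2), dtype=complex)    # (position, coin) amplitudes
    psi[origin] = [np.sqrt(a), np.sqrt(1 - a) * np.exp(1j * phi)]  # Eq. (psi:init)
    C = np.array([[np.sqrt(rho), np.sqrt(1 - rho)],
                  [np.sqrt(1 - rho), -np.sqrt(rho)]])
    for _ in range(steps):
        psi = psi @ C.T                         # coin flip C(rho)
        psi[:, 0] = np.roll(psi[:, 0], r)       # |R>: jump of length r to the right
        psi[:, 1] = np.roll(psi[:, 1], -1)      # |L>: unit step to the left
    return psi, origin

psi, origin = biased_walk()
p0 = np.sum(np.abs(psi[origin]) ** 2)           # probability at the origin
print(p0, np.sum(np.abs(psi) ** 2))             # total probability stays 1
```

Tracking $p_0(t)$ over many steps in this way reproduces the asymptotic decay analyzed below.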
Asymptotics of the probability at the origin {#chap:5c}
--------------------------------------------
To determine the recurrence nature of the biased quantum walk we have to analyze the asymptotic behaviour of the probability at the origin. Exploiting (\[chap5:inv:f\]) the amplitude at the origin reads $$\psi(0,t) = \sum_{j=1}^2\int_{-\pi}^\pi\frac{dk}{2\pi}e^{i\ \omega_j(k)t}\ \left(v_j(k),\psi\right)v_j(k),
\label{chap5:psi:0}$$ which allows us to find the asymptotics of the probability at the origin with the help of the method of stationary phase [@statphase]. The important contributions to the integrals in (\[chap5:psi:0\]) arise from the stationary points of the phases (\[omega\]). We find that the derivatives of the phases $\omega_{1,2}(k)$ are $$\begin{aligned}
\nonumber \omega_1'(k) & = & \frac{r-1}{2}+\frac{\sqrt{\rho}(r+1)\cos\left(k\frac{r+1}{2}\right)}{\sqrt{4+2\rho\left[\cos(k(r+1))-1\right]}},\\
\omega_2'(k) & = & \frac{r-1}{2}-\frac{\sqrt{\rho}(r+1)\cos\left(k\frac{r+1}{2}\right)}{\sqrt{4+2\rho\left[\cos(k(r+1))-1\right]}}.
\label{phase:der}\end{aligned}$$ Using the method of stationary phase we find that the amplitude will decay slowly, like $t^{-\frac{1}{2}}$, if at least one of the phases has a vanishing derivative inside the integration domain. Solving the equations $\omega_{1,2}'(k) = 0$ we find that the possible stationary points are $$k_0 = \pm\frac{2}{r+1}\arccos\left(\pm\sqrt{\frac{(1-\rho)(r-1)^2}{4\rho r}}\right).
\label{k0}$$ The stationary points are real valued provided the argument of the arccosine in (\[k0\]) is less than or equal to unity $$\frac{(1-\rho)(r-1)^2}{4\rho r} \leq 1.$$ This inequality leads us to the condition for the biased quantum walk on a line to be recurrent $$\rho \geq \rho_R(r) \equiv \left(\frac{r-1}{r+1}\right)^2.
\label{crit:rec}$$ We illustrate this result in [Figure \[chap5:fig2\]]{} for a particular choice of the walk parameter $r=3$.
![The existence of stationary points of the phases $\omega_{1,2}(k)$ in dependence on the parameter $\rho$ and a fixed step length $r$. We plot the implicit functions $\omega_{1,2}'(k)\equiv
0$ for $r=3$. The plot indicates that for $\rho<\rho_R(3)=\frac{1}{4}$ the phases $\omega_{1,2}(k)$ do not have any stationary points. Consequently, the probability amplitude at the origin decays fast and such a biased quantum walk on a line is transient. For $\rho\geq\rho_R(3)$ the stationary points exist and the quantum walk is recurrent.[]{data-label="chap5:fig2"}](biased_fig2.eps){width="50.00000%"}
This simple result shows that there is an intimate nontrivial link between the length of the step of the walk and the bias of the coin. The parameter of the coin $\rho$ has to be at least equal to a threshold determined by the size of the step to the right $r$ for the walk to be recurrent. We note that the recurrence nature of the biased quantum walk on a line is determined only by the parameters of the walk itself, i.e. the coin and the step, not by the initial conditions. The parameters of the initial state $a$ and $\varphi$ have no effect on the rate of decay of the probability at the origin.
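The recurrence criterion is a one-liner to evaluate. A small sketch:

```python
def rho_R(r):
    # threshold of Eq. (crit:rec): the walk with steps (r, -1) is
    # recurrent precisely for rho >= rho_R(r)
    return ((r - 1) / (r + 1)) ** 2

print(rho_R(3))                  # 0.25, the value appearing in the figure
print(0.5 >= rho_R(3))           # True: the Hadamard coin keeps r = 3 recurrent
print(0.2 >= rho_R(3))           # False: this coin makes the walk transient
```

For $r=1$ the threshold vanishes, recovering the fact that the unbiased walk is recurrent for any coin of the family $C(\rho)$.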
Velocities of the peaks {#chap:5d}
-----------------------
We can determine the recurrence nature of the biased quantum walk on a line from a different point of view. This approach is based on the following observation. The well known shape of the probability distribution generated by the quantum walk consists of two counter-propagating peaks. In between the two dominant peaks the probability is roughly independent of $m$ and decays like $t^{-1}$, while outside this region it decays exponentially as we depart from the peaks. As has been found in [@nayak] the positions of the peaks vary linearly with the number of steps. Hence, the peaks propagate with constant velocities, say $v_L$ and $v_R$. For the biased quantum walk to be recurrent the origin of the walk has to remain in between the two peaks for all times. In other words, the biased quantum walk on a line is recurrent if and only if the velocity of the left peak is negative and the velocity of the right peak is positive.
The velocities of the left and right peak are easily determined. We rewrite the formula (\[chap5:inv:f\]) for the probability amplitude $\psi(m,t)$ into the form $$\psi(m,t) = \sum_{j=1}^2\int_{-\pi}^\pi\frac{dk}{2\pi}e^{i(\omega_j(k)-\alpha k)t}\ \left(v_j(k),\psi\right)v_j(k),$$ where we have introduced $\alpha = \frac{m}{t}$. Since we concentrate on the amplitudes at the positions $m\sim t$ we have to consider the modified phases $$\widetilde{\omega}_j(k) = \omega_j(k)-\alpha k.$$ The peak occurs at a position $m_0$ where both the first and the second derivative of $\widetilde{\omega}_j(k)$ vanish. The velocity of the peak is thus $\alpha_0 = \frac{m_0}{t}$. Hence, solving the equations $$\begin{aligned}
\nonumber \widetilde{\omega}_1'(k) & = & \frac{r-1}{2}+\frac{\sqrt{\rho}(r+1)\cos\left(k\frac{r+1}{2}\right)}{\sqrt{4+2\rho\left[\cos(k(r+1))-1\right]}} - \alpha = 0 ,\\
\nonumber \widetilde{\omega}_2'(k) & = & \frac{r-1}{2}-\frac{\sqrt{\rho}(r+1)\cos\left(k\frac{r+1}{2}\right)}{\sqrt{4+2\rho\left[\cos(k(r+1))-1\right]}} - \alpha = 0,\\
\nonumber \widetilde{\omega}_1''(k) & = & -\widetilde{\omega}_2''(k) = \frac{(\rho-1)\sqrt{\rho}(r+1)^2\sin\left(k\frac{r+1}{2}\right)}{\sqrt{2}\left[2-\rho+\rho\cos(k(r+1))\right]^\frac{3}{2}} = 0,\end{aligned}$$ for $\alpha$ determines the velocities of the left and right peak $v_{L,R}$. The third equation is independent of $\alpha$ and we easily find the solution $$k_0 = \frac{4n\pi}{r+1},\ k_0=\frac{2\pi(2n+1)}{r+1},\ n\in\mathds{Z}.$$ Inserting this $k_0$ into the first two equations we find the velocities of the left and right peak $$v_L = \frac{r-1}{2}-\frac{r+1}{2}\sqrt{\rho},\quad v_R = \frac{r-1}{2}+\frac{r+1}{2}\sqrt{\rho}.
\label{velocities}$$ We illustrate this result in [Figure \[chap5:fig3\]]{} where we show the probability distribution generated by the quantum walk for the particular choice of the parameters $r = 3,\ \rho = \frac{1}{\sqrt{2}}$. The initial state was chosen according to $a = \frac{1}{\sqrt{2}}$ and $\varphi = \pi$. Since the velocity of the left peak $v_L$ is negative this biased quantum walk is recurrent.
![Velocities of the left and right peak of the probability distribution generated by the biased quantum walk on a line and the recurrence. We have chosen the parameters $r=3$, $a=\rho=\frac{1}{\sqrt{2}}$ and $\varphi = \pi$. The left peak propagates with the velocity $v_L\approx -0.68$, the velocity of the right peak is $v_R\approx 2.68$. In between the two peaks the probability distribution behaves like $t^{-1}$ while outside the decay is exponential. Since the velocity $v_L$ is negative the origin of the walk remains in between the left and right peak. Consequently, this quantum walk is recurrent.[]{data-label="chap5:fig3"}](biased_fig3.eps){width="70.00000%"}
The peak velocities have two contributions. The first, $\frac{r-1}{2}$, is common to both velocities and independent of $\rho$; the second, $\frac{r+1}{2}\sqrt{\rho}$, depends on both $r$ and $\rho$ and enters the two velocities with opposite signs. The obtained results indicate that biasing the walk by setting the size of the step to the right equal to $r$ drags the whole probability distribution towards the direction of the larger step. This is manifested by the term $\frac{r-1}{2}$ which appears in both velocities $v_{L,R}$ with the same sign. On the other hand, the coin parameter $\rho$ does not bias the walk; as we can see from the second terms entering the velocities, it rather influences the rate at which the walk spreads.
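The peak velocities (\[velocities\]) can also be checked by direct simulation. The following is a minimal numerical sketch, not the derivation above: it assumes a coin of the form $C(\rho)=\bigl(\begin{smallmatrix}\sqrt{\rho}&\sqrt{1-\rho}\\ \sqrt{1-\rho}&-\sqrt{\rho}\end{smallmatrix}\bigr)$ and an initial coin state $\sqrt{a}\,|L\rangle+e^{i\varphi}\sqrt{1-a}\,|R\rangle$, which may differ from the conventions of (\[psi:init\]) by phases; the peak positions, however, are insensitive to this choice.

```python
import numpy as np

def biased_walk(t, r, rho, coin0):
    """Biased walk: |L> steps one site left, |R> steps r sites right.
    Assumed coin: C = [[sqrt(rho), sqrt(1-rho)], [sqrt(1-rho), -sqrt(rho)]]."""
    n = (r + 1) * t + 1          # positions range from -t to r*t
    off = t                      # array index of the origin
    psi = np.zeros((n, 2), dtype=complex)
    psi[off] = coin0
    C = np.array([[np.sqrt(rho), np.sqrt(1 - rho)],
                  [np.sqrt(1 - rho), -np.sqrt(rho)]])
    for _ in range(t):
        psi = psi @ C.T            # apply the coin to every site
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]   # |L> component: m -> m - 1
        new[r:, 1] = psi[:-r, 1]   # |R> component: m -> m + r
        psi = new
    return psi, off

t, r, rho = 200, 3, 1 / np.sqrt(2)
a, phi = 1 / np.sqrt(2), np.pi
# assumed parametrization of the initial coin state (hypothetical convention)
coin0 = np.array([np.sqrt(a), np.exp(1j * phi) * np.sqrt(1 - a)])
psi, off = biased_walk(t, r, rho, coin0)
P = (np.abs(psi) ** 2).sum(axis=1)

vL = (r - 1) / 2 - (r + 1) / 2 * np.sqrt(rho)
vR = (r - 1) / 2 + (r + 1) / 2 * np.sqrt(rho)
centre = off + (r - 1) * t // 2                  # bulk centre at position (r-1)/2 * t
mL = int(np.argmax(P[:centre])) - off            # left peak position
mR = int(np.argmax(P[centre:])) + centre - off   # right peak position
print(mL / t, vL, mR / t, vR)
```

The located peak positions divided by $t$ agree with $v_L\approx -0.68$ and $v_R\approx 2.68$ up to finite-size corrections that vanish as $t\to\infty$.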
As we have discussed above the biased quantum walk on a line is recurrent if and only if $v_L$ is negative and $v_R$ is positive. The form of the velocities (\[velocities\]) implies that this condition is satisfied if and only if the criterion (\[crit:rec\]) is fulfilled.
Mean value of the position {#chap:5e}
--------------------------
As we discuss in Appendix \[app:a2\], classical random walks are recurrent if and only if the mean value of the position vanishes. We show that this is not true for biased quantum walks, i.e. there exist biased quantum walks on a line which are recurrent but cannot produce a probability distribution with zero mean value. This is another unique feature of quantum walks compared to the classical ones.
Let us derive the formula for the mean value of the position of the particle for the biased quantum walk. With the help of the weak limit theorem [@Grimmett] we express the mean value after $t$ steps in the form $$\left\langle \frac{x}{t}\right\rangle \approx \sum_{j=1}^2\int_{-\pi}^\pi\frac{dk}{2\pi}\ \omega_j'(k)\ \left|\left(v_j(k),\psi\right)\right|^2,$$ up to the corrections of the order $O(t^{-1})$. Here $v_j(k)$ are eigenvectors of the unitary propagator $\widetilde{U}(k)$, $\omega_j'(k)$ are the derivatives of the eigenenergies and $\psi$ is the initial state expressed in (\[psi:init\]). The derivatives of the phases are given in (\[phase:der\]). We express the eigenvectors in the form $$\begin{aligned}
\nonumber v_1(k) & = & n_1(k)\left(\sqrt{1-\rho}, -\sqrt{\rho} + e^{i(\omega_1(k)-rk)}\right)^T,\\
\nonumber v_2(k) & = & n_2(k)\left(\sqrt{1-\rho}, -\sqrt{\rho} + e^{i(\omega_2(k)-rk)}\right)^T.\end{aligned}$$ The normalizations are given by $$\begin{aligned}
\nonumber n_1(u) & = & 2-2\sqrt{\rho}\cos\left(u-\arcsin\left[\sqrt{\rho}\sin u\right]\right),\\
\nonumber n_2(u) & = & 2+2\sqrt{\rho}\cos\left(u+\arcsin\left[\sqrt{\rho}\sin u\right]\right),\end{aligned}$$ where we have introduced $u = \frac{k(r+1)}{2}$ to shorten the notation. The mean value is thus given by the following integral $$\left\langle \frac{x}{t}\right\rangle \approx \int\limits_0^{(r+1)\pi}\frac{f(a,\varphi,\rho,r,u)du}{2(r+1)\pi \left[1 +\sqrt{\rho}\cos u_1\right] \left[1-\sqrt{\rho} \sin u_2\right]}+O(t^{-1}),$$ where $$u_1 = u + \arcsin(\sqrt{\rho}\sin u),\quad u_2 = u + \arccos(\sqrt{\rho}\sin u),$$ and the numerator reads $$\begin{aligned}
\nonumber f(a,\varphi,\rho,r,u) & = & (1-\rho) \left[r-1 + \rho \left(a + r (a-1)\right)\left(1+\cos(2u)\right)+\right.\\
\nonumber & & \left.+ \sqrt{a(1-a)}\sqrt{\rho(1-\rho)} (r+1) \left(\cos{\varphi}+\cos(\varphi+2u)\right)\right].\end{aligned}$$ Performing the integrations we arrive at the following formula for the position mean value $$\begin{aligned}
\nonumber \left\langle\frac{x}{t}\right\rangle & \approx & (1-\sqrt{1-\rho})(a(r+1)-1) + \frac{r-1}{2}\sqrt{1-\rho} + \\
& & + \frac{\sqrt{a(1-a)}(1-\sqrt{1-\rho})(1-\rho)(r+1)\cos\varphi}{\sqrt{\rho(1-\rho)}} + O(t^{-1}).
\label{mean}\end{aligned}$$
We see that for quantum walks the mean value is affected both by the fundamental walk parameters $r$ and $\rho$ and by the initial state parameters $a$ and $\varphi$. The mean value is typically non-vanishing even for unbiased quantum walks (with $r=1$). However, one easily finds [@2dw1] that the initial state with the parameters $a=1/2$ and $\varphi=\pi/2$ results in a symmetric probability distribution with zero mean independent of the coin parameter $\rho$. Indeed, quantum walks with $r=1$, i.e. with equal steps to the right and left, do not intrinsically distinguish left from right. On the other hand, quantum walks with $r>1$ treat the left and right directions differently. Nevertheless, for a given $r$ one can always find a coin parameter $\rho_0$ such that for all $\rho\geq\rho_0$ the quantum walk can produce a probability distribution with zero mean value. This is impossible for quantum walks with $\rho<\rho_0$ and we will call such quantum walks genuinely biased.
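The symmetric case $r=1$, $a=1/2$, $\varphi=\pi/2$ is easy to confirm numerically. A minimal sketch for the Hadamard walk ($\rho=1/2$), assuming the convention that $|L\rangle$ shifts one site left and $|R\rangle$ one site right (all names are ours):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin, rho = 1/2

def walk_mean(t, coin0):
    """Mean position after t steps of an unbiased (r = 1) coined walk."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t] = coin0                 # start at the origin
    for _ in range(t):
        psi = psi @ H.T            # coin
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]   # |L> moves one site left
        new[1:, 1] = psi[:-1, 1]   # |R> moves one site right
        psi = new
    P = (np.abs(psi) ** 2).sum(axis=1)
    return float((np.arange(-t, t + 1) * P).sum())

# a = 1/2, phi = pi/2, i.e. the state (|L> + i|R>)/sqrt(2): zero mean
m_sym = walk_mean(100, np.array([1, 1j]) / np.sqrt(2))
# a generic state, e.g. |L>, gives a non-vanishing mean
m_L = walk_mean(100, np.array([1, 0], dtype=complex))
print(m_sym, m_L)
```

The symmetric initial state yields a mean of zero to machine precision, whereas the walk started in $|L\rangle$ drifts away from the origin.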
Let us determine the minimal value of $\rho$ for a given $r$ for which the mean value vanishes. We first find the parameters of the initial state $a$ and $\varphi$ which minimize the mean value. Clearly the term on the second line in (\[mean\]) reaches its minimal value for $\varphi_0=\pi$. Differentiating the resulting expression with respect to $a$ and setting the derivative equal to zero gives us the condition $$2 + \frac{(2a-1)\sqrt{\rho(1-\rho)}}{\rho\sqrt{a(1-a)}} = 0$$ for the minimum with respect to $a$. This relation is satisfied for $a_0=\frac{1}{2}(1-\sqrt{\rho})$. The resulting formula for the mean value reads $$\left\langle\frac{x}{t}\right\rangle_{a_0,\varphi_0}
= \frac{r-1}{2}+\frac{\left(1-\sqrt{1-\rho}-\rho\right) (1+r)}{2
\sqrt{(1-\rho) \rho}}.
\label{mean:min}$$ This expression vanishes for $$\rho_0(r) = \left(\frac{r^2 - 1}{r^2 + 1}\right)^2.
\label{rho:0}$$ Since (\[mean:min\]) is a decreasing function of $\rho$ the mean value is always positive for $\rho<\rho_0$, independent of the choice of the initial state. For $\rho>\rho_0$ one can achieve zero mean value for different combinations of the parameters $a$ and $\varphi$.
The formula (\[rho:0\]) is reminiscent of the condition (\[crit:rec\]) for the biased quantum walk on a line to be recurrent. However, in (\[rho:0\]) $r$ is replaced by $r^2$. Therefore we find the inequality $\rho_R<\rho_0$. Hence, quantum walks with the coin parameter $\rho_R<\rho<\rho_0$ are recurrent but cannot produce a probability distribution with zero mean value. We conclude that there are genuinely biased quantum walks which are recurrent, in contrast to the situation found for classical walks.
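Both thresholds are elementary to check numerically. The sketch below takes the recurrence threshold as $\rho_R(r)=\left(\frac{r-1}{r+1}\right)^2$, which is exactly the condition $v_L\leq 0$ obtained from (\[velocities\]) and is consistent with $\rho_R(3)=\frac{1}{4}$ quoted earlier; it then verifies that the minimal mean value (\[mean:min\]) vanishes at $\rho_0(r)$ while $\rho_R(r)<\rho_0(r)$:

```python
import math

def rho_R(r):
    # recurrence threshold: v_L <= 0 in (velocities) gives rho >= ((r-1)/(r+1))^2
    return ((r - 1) / (r + 1)) ** 2

def rho_0(r):
    # zero-mean threshold (rho:0)
    return ((r * r - 1) / (r * r + 1)) ** 2

def min_mean(rho, r):
    # minimal mean value (mean:min), already minimized over the initial state
    return (r - 1) / 2 + (1 - math.sqrt(1 - rho) - rho) * (1 + r) / (
        2 * math.sqrt((1 - rho) * rho))

checks = [(rho_R(r) < rho_0(r),                  # recurrent yet genuinely biased walks exist
           abs(min_mean(rho_0(r), r)) < 1e-9,    # the minimal mean vanishes at rho_0
           min_mean(0.9 * rho_0(r), r) > 0)      # below rho_0 the mean stays positive
          for r in range(2, 8)]
print(all(all(c) for c in checks))  # True
```

For example, for $r=2$ one finds $\rho_R=\frac{1}{9}$ and $\rho_0=\frac{9}{25}$, so the whole interval $\frac{1}{9}<\rho<\frac{9}{25}$ consists of recurrent yet genuinely biased walks.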
Conclusions {#chap:5f}
-----------
We have analyzed one dimensional biased quantum walks. Classically, the bias leading to a non-zero mean value of the particle’s position can be introduced in two ways: unequal step lengths or an unfair coin. In contrast, for quantum walks on a line the initial state can introduce bias for any coin. On the other hand, for a symmetric initial state modifying only the unitary coin operator while keeping equal step lengths will not introduce bias. Finally, the bias due to unequal step lengths may be compensated for by the choice of the coin operator for some initial conditions. For this reason we have introduced the concept of the genuinely biased quantum walk, for which there does not exist any initial state leading to a vanishing mean value of the position.
We have determined the conditions under which one dimensional biased quantum walks are recurrent. This together with the condition of being genuinely biased gives rise to three different regions in the parameter space, which we depict as a “phase diagram” in [Figure \[chap5:fig4\]]{}.
![“Phase diagram” of biased quantum walks on a line. The horizontal axis represents the length of the step to the right $r$ and the vertical axis shows the coin parameter $\rho$. The dotted line corresponds to the recurrence criterion (\[crit:rec\]), while the squares represent the condition (\[rho:0\]) on the zero mean value of the particle’s position. The quantum walks in the white area are transient and genuinely biased. In between the two curves (light gray area) we find quantum walks which are recurrent but still genuinely biased. The quantum walks in the dark gray area are recurrent and for a particular choice of the initial state they can produce a probability distribution with vanishing mean value.[]{data-label="chap5:fig4"}](biased_fig4.eps){width="60.00000%"}
Meeting Problem in the Quantum Walk {#chap:6}
===================================
\[chap:6a\]
In this Chapter we study the evolution of two particles performing a quantum walk. The evolution of each of the two particles is subjected to the same rules. One of the interesting questions, when two particles are involved, is how the probability that the particles meet changes with time (or with the number of steps taken in the walk). Because the behavior of a single particle performing a quantum walk differs from its classical counterpart, it has to be expected that the same applies when two particles are involved. Interference, responsible for the unusual behavior of a single particle, should also play a considerable role when two particles are involved. The possibility to change the input states (in particular the possibility to choose entangled initial coin states) adds another interesting point to the analysis. In the following, we study the evolution of the meeting probability for two particles. We point out the differences to the classical case and discuss the influence of the input state on this probability.
Before we turn to the meeting problem we generalize in Sections \[chap:6b\] and \[chap:6c\] the quantum walk to two distinguishable and indistinguishable particles. The meeting problem for two distinguishable particles with initially factorized coin states is analyzed in Section \[chap:6d\]. We derive the asymptotic behavior of the meeting probability and compare it with the results for the classical random walk which are summarized in Appendix \[app:d\]. The effect of entanglement on the meeting probability is considered in Section \[chap:6e\]. Finally, in Section \[chap:6f\] we analyze the meeting probability for two indistinguishable bosons and fermions. We summarize our results in the conclusions of Section \[chap:6g\].
Quantum walk with two distinguishable particles {#chap:6b}
-----------------------------------------------
The Hilbert space of the two particles is given by a tensor product of the single particle spaces, i.e. $$\mathcal{H} = (\mathcal{H}_P\otimes\mathcal{H}_C)_1\otimes(\mathcal{H}_P\otimes\mathcal{H}_C)_2.$$ Each particle has its own coin which determines his movement on the line. Since we assume that there is no interaction between the two particles they evolve independently and the time evolution of the whole system is given by a tensor product of the single particle time evolution operators. We describe the state of the system by vectors $$\psi(m,n,t)= \left(\begin{array}{c}
\psi_{LL}(m,n,t) \\
\psi_{RL}(m,n,t) \\
\psi_{LR}(m,n,t) \\
\psi_{RR}(m,n,t) \\
\end{array}\right),$$ where e.g. the component $\psi_{RL}(m,n,t)$ is the amplitude of the state where the first particle is on $m$ with the internal state $|R\rangle$ and the second particle is on $n$ with the internal state $|L\rangle$. The state of the two particles at time $t$ is then given by $$\label{dist}
|\psi(t)\rangle = \sum_{m,n}\sum_{i,j=\ R,L}\psi_{ij}(m,n,t)|m,i\rangle_1|n,j\rangle_2.$$ The conditional probability that the first particle is on a site $m$ at time $t$, provided that the second particle is at the same time at site $n$, is defined by $$\label{P2}
P(m,n,t) = \sum_{i,j=L,R}|\langle m,i|\langle n,j|\psi(t)\rangle|^2 = \sum_{i,j=L,R}|\psi_{ij}(m,n,t)|^2.$$ Note that if we considered a single quantum particle on a two dimensional lattice, with an independent Hadamard coin for each spatial dimension, (\[P2\]) would give the probability distribution generated by such a two dimensional walk. This shows the relation between a one dimensional walk with two particles and a two dimensional walk. The reduced probabilities for the first and the second particle are given by $$P_1(m,t) = \sum_n P(m,n,t),\quad P_2(n,t) = \sum_m P(m,n,t).$$
The dynamics of the two particles is determined by the single particle motion. Since we can always decompose the initial state of the two particles into a linear combination of tensor products of single particle states, and because the time evolution is also given by a tensor product of two unitary operators, this form of the state remains unchanged. Thus we can fully describe the time evolution of the two quantum particles with the help of the single particle wave-functions. A similar relation holds for the probability distribution (\[P2\]). Moreover, when the two particles are initially in a factorized state $$\label{fact}
|\psi(0)\rangle = \left( \sum_{m,i}\psi_{1i}(m,0)|m,i\rangle_1\right)\otimes\left(\sum_{n,j}\psi_{2j}(n,0)|n,j\rangle_2\right),$$ the amplitudes factorize as $\psi_{ij}(m,n,0)=\psi_{1i}(m,0)\psi_{2j}(n,0)$, and the probability distribution remains a product of single particle probability distributions $$\begin{aligned}
\label{fp}
\nonumber P(m,n,t) &=& (|\psi_{1L}(m,t)|^2+|\psi_{1R}(m,t)|^2)(|\psi_{2L}(n,t)|^2+|\psi_{2R}(n,t)|^2) \\
&=& P_1(m,t)P_2(n,t).\end{aligned}$$ However, when the initial state of the two particles is entangled $$\label{ent}
|\psi(0)\rangle = \sum_\alpha\left\{\left( \sum_{m,i}\psi^\alpha_{1i}(m,0)|m,i\rangle_1\right)\otimes\left(\sum_{n,j}\psi^\alpha_{2j}(n,0)|n,j\rangle_2\right)\right\},$$ the probability distribution cannot be expressed in terms of single particle distributions but probability amplitudes $$\label{ment}
P(m,n,t) = \sum_{i,j=L,R} \left| \sum_\alpha\psi^\alpha_{1i}(m,t)\psi^\alpha_{2j}(n,t)\right|^2.$$
Notice that the correlations are present also in the classical random walk with two particles, if we consider initial conditions of the following form $$\label{clcor}
P(m,n,0) = \sum_\alpha P_1^\alpha(m,0)P_2^\alpha(n,0).$$ The difference between (\[ment\]) and (\[clcor\]) is that in the quantum case we have probability amplitudes not probabilities. The effect of the quantum mechanical dynamics is the interference in (\[ment\]).
Let us define the meeting problem. We ask for the probability that the two particles will be detected at the position $m$ after $t$ steps. This probability is given by the norm of the vector $\psi(m,m,t)$ $$\label{md}
M_D(m,t) = \sum_{i,j=L,R}|\psi_{ij}(m,m,t)|^2 = P(m,m,t).$$ As we have seen above, in the particular case when the two particles are initially in a factorized state of the form (\[fact\]) this can be further simplified to the product of the probabilities that the individual particles reach the site. However, this is not possible when the particles are initially entangled (\[ent\]). The entanglement introduced in the initial state leads to correlations between the particles’ positions and thus the meeting probability is no longer a product of the single particle probabilities.
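For initially factorized states the meeting probability is therefore just the overlap of two single particle distributions, which is straightforward to evaluate numerically. A minimal sketch for the Hadamard walk (assuming the convention that $|L\rangle$ shifts left and $|R\rangle$ shifts right; all names are ours) computes $M(t)=\sum_m P_1(m,t)\,P_2(m-2d,t)$ for particles started at $0$ and $2d$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def walk_distribution(t, coin0):
    """Single-particle Hadamard walk distribution after t steps.
    Index m + t corresponds to position m relative to the starting point."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t] = coin0
    for _ in range(t):
        psi = psi @ H.T
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]   # |L> moves one site left
        new[1:, 1] = psi[:-1, 1]   # |R> moves one site right
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

def meeting_probability(t, coin1, coin2, d):
    """M(t) = sum_m P1(m, t) P2(m - 2d, t) for initially factorized particles."""
    P1 = walk_distribution(t, coin1)
    P2 = walk_distribution(t, coin2)   # particle 2 starts 2d sites to the right
    M = 0.0
    for m in range(-t, t + 1):
        if -t <= m - 2 * d <= t:
            M += P1[m + t] * P2[m - 2 * d + t]
    return M

L = np.array([1, 0], dtype=complex)
R = np.array([0, 1], dtype=complex)
print(meeting_probability(1, R, L, 0))  # 0.5: each particle reaches +-1 with prob 1/2
```

The one-step value $1/2$ is easily checked by hand, since after a single step each particle occupies the sites $\pm 1$ with probability $\frac{1}{2}$ each.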
Quantum walk with two indistinguishable particles {#chap:6c}
-------------------------------------------------
We analyze the situation when the two particles are indistinguishable. Since we work with indistinguishable particles we use the Fock space and creation operators: $\hat{a}_{(m,i)}^\dagger$ for bosons and $\hat{b}_{(n,j)}^\dagger$ for fermions, where e.g. $\hat{a}_{(m,i)}^\dagger$ creates one bosonic particle at position $m$ with the internal state $|i\rangle$. The dynamics of the quantum walk with indistinguishable particles is defined on a one-particle level, i.e. a single step is given by the following transformation of the creation operators $$\hat{a}_{(m,L)}^\dagger \longrightarrow \frac{1}{\sqrt{2}}\left(\hat{a}_{(m-1,L)}^\dagger + \hat{a}_{(m+1,R)}^\dagger\right),\quad
\hat{a}_{(m,R)}^\dagger \longrightarrow \frac{1}{\sqrt{2}}\left(\hat{a}_{(m-1,L)}^\dagger - \hat{a}_{(m+1,R)}^\dagger\right),$$ for bosonic particles, similarly for fermions. The difference is that the bosonic operators fulfill the commutation relations $$\label{commut}
\left[\hat{a}_{(m,i)},\hat{a}_{(n,j)}\frac{}{}\right] = 0,\qquad \left[\hat{a}_{(m,i)},{\hat{a}_{(n,j)}}^\dagger\right] = \delta_{mn}\delta_{ij},$$ while the fermionic operators satisfy the anticommutation relations $$\label{anticommut}
\left\{\hat{b}_{(m,i)},\hat{b}_{(n,j)}\frac{}{}\right\} = 0,\qquad \left\{\hat{b}_{(m,i)},\hat{b}^\dagger_{(n,j)}\right\} = \delta_{mn}\delta_{ij}.$$ We will describe the state of the system by the same vectors of amplitudes $\psi(m,n,t)$ as for the distinguishable particles. The state of the two bosons and fermions analogous to (\[dist\]) for two distinguishable particles is given by $$\begin{aligned}
\label{psi:B:F}
\nonumber |\psi_B(t)\rangle & = & \sum_{m,n}\sum_{i,j=L,R}\psi_{ij}(m,n,t)\hat{a}_{(m,i)}^\dagger \hat{a}_{(n,j)}^\dagger|vac\rangle,\\
|\psi_F(t)\rangle & = & \sum_{m,n}\sum_{i,j=L,R}\psi_{ij}(m,n,t)\hat{b}_{(m,i)}^\dagger \hat{b}_{(n,j)}^\dagger|vac\rangle,\end{aligned}$$ where $|vac\rangle$ is the vacuum state. Note that in (\[psi:B:F\]) both summation indices $m$ and $n$ run over all possible sites, even though e.g. the vectors $\hat{a}_{(m,i)}^\dagger \hat{a}_{(n,j)}^\dagger|vac\rangle$ and $\hat{a}_{(n,i)}^\dagger \hat{a}_{(m,j)}^\dagger|vac\rangle$ correspond to the same physical state. Using the commutation (\[commut\]) and anticommutation (\[anticommut\]) relations we can restrict the sums in (\[psi:B:F\]) to ordered pairs $(m,n)$ with $m\geq n$. The resulting wave-function will be symmetric or antisymmetric.
The conditional probability distribution is given by $$P_{B,F}(m,n,t) = \sum_{i,j=L,R}\left|\langle 1_{(m,i)}1_{(n,j)}|\psi_{B,F}(t)\rangle\right|^2 =\sum_{i,j = L,R}\left|\psi_{ij}(m,n,t)\pm\psi_{ji}(n,m,t)\right|^2,$$ for $m\neq n$, and for $m=n$ $$\begin{aligned}
\label{mbf}
\nonumber P_B(m,m,t) & = & \left|\langle 2_{(m,L)}|\psi_B(t)\rangle\right|^2 + \left|\langle 2_{(m,R)}|\psi_B(t)\rangle\right|^2 + \left|\langle 1_{(m,L)}1_{(m,R)}|\psi_B(t)\rangle\right|^2 \\
\nonumber & = & 2\left|\psi_{LL}(m,m,t)\right|^2+2\left|\psi_{RR}(m,m,t)\right|^2 + \left|\psi_{LR}(m,m,t)+\psi_{RL}(m,m,t)\right|^2\\
\nonumber & = & M_B(m,t),\\
\nonumber P_F(m,m,t) & = & \left|\langle 1_{(m,L)}1_{(m,R)}|\psi_F(t)\rangle\right|^2 = \left|\psi_{LR}(m,m,t)-\psi_{RL}(m,m,t)\right|^2\\
& = & M_F(m,t).\end{aligned}$$ The diagonal terms of the probability distribution (\[mbf\]) define the meeting probability we wish to analyze.
Let us specify the meeting probability for the case when the probability amplitudes can be written in a factorized form $\psi_{ij}(m,n,t) = \psi_{1i}(m,t)\psi_{2j}(n,t)$, which for the distinguishable particles corresponds to the situation when they are initially not correlated. In this case the meeting probabilities are given by $$\begin{aligned}
\label{mb}
M_B(m,t) = 2\left|\psi_{1L}(m,t)\psi_{2L}(m,t)\right|^2+2\left|\psi_{1R}(m,t)\psi_{2R}(m,t)\right|^2 \nonumber \\
+\left|\psi_{1L}(m,t)\psi_{2R}(m,t)+\psi_{1R}(m,t)\psi_{2L}(m,t)\right|^2,\end{aligned}$$ for bosons and $$\label{mf}
M_F(m,t) = \left|\psi_{1L}(m,t)\psi_{2R}(m,t)-\psi_{1R}(m,t)\psi_{2L}(m,t)\right|^2,$$ for fermions. We see that they differ from the formulas for the distinguishable particles, except for the particular case when the two bosons start in the same state, i.e. $\psi_1(m,0) = \psi_2(m,0) =
\psi(m,0) $ for all integers $m$. For this initial state we obtain $$\begin{aligned}
\nonumber M_B(m,t) &=& |\psi_{L}(m,t)|^4+|\psi_{R}(m,t)|^4+2|\psi_{L}(m,t)\psi_{R}(m,t)|^2 \\
\nonumber &=& (|\psi_L(m,t)|^2+|\psi_R(m,t)|^2)^2 \\
\nonumber &=& P^2(m,t) ,\end{aligned}$$ which is the same as for the case of distinguishable particles starting at the same point with the same internal state.
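The contrast between (\[mb\]) and (\[mf\]), bosonic bunching versus fermionic antibunching, can be probed numerically. Pointwise one has $M_B(m,t)\geq M_F(m,t)$: with $x=\psi_{1L}\psi_{2R}$ and $y=\psi_{1R}\psi_{2L}$ the difference equals $2|\psi_{1L}\psi_{2L}|^2+2|\psi_{1R}\psi_{2R}|^2+4\,\mathrm{Re}(\bar{x}y)$, which is non-negative since $2|\psi_{1L}\psi_{2L}|^2+2|\psi_{1R}\psi_{2R}|^2\geq 4|x||y|$. A minimal sketch (Hadamard walk, our naming) evaluates both total meeting probabilities for particles starting at the origin with orthogonal coin states:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def walk_amplitudes(t, coin0):
    """psi[m + t] = (psi_L(m, t), psi_R(m, t)) for a single Hadamard walker."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t] = coin0
    for _ in range(t):
        psi = psi @ H.T
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]   # |L> moves one site left
        new[1:, 1] = psi[:-1, 1]   # |R> moves one site right
        psi = new
    return psi

def meeting_B_F(t, coin1, coin2):
    """Total meeting probabilities (mb) and (mf) for factorized amplitudes."""
    p1, p2 = walk_amplitudes(t, coin1), walk_amplitudes(t, coin2)
    cross = p1[:, 0] * p2[:, 1]    # psi_1L psi_2R
    swap = p1[:, 1] * p2[:, 0]     # psi_1R psi_2L
    MB = (2 * np.abs(p1[:, 0] * p2[:, 0]) ** 2
          + 2 * np.abs(p1[:, 1] * p2[:, 1]) ** 2
          + np.abs(cross + swap) ** 2).sum()
    MF = (np.abs(cross - swap) ** 2).sum()
    return MB, MF

L = np.array([1, 0], dtype=complex)
R = np.array([0, 1], dtype=complex)
MB, MF = meeting_B_F(10, L, R)
print(MB, MF)  # bosons meet more often than fermions (bunching)
```

As a sanity check, feeding identical single-particle amplitudes into (\[mf\]) gives a fermionic meeting probability of exactly zero, in accordance with the Pauli principle.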
Meeting problem for distinguishable particles {#chap:6d}
---------------------------------------------
Let us compare the meeting problem in the classical and quantum case. We study the following two probabilities: the total meeting probability after $t$ steps have been performed $$\label{m2}
M(t) = \sum_{m}M(m,t),$$ and the overall meeting probability during some period of steps $T$ defined as $$\label{ov}
{\cal M}(T) = 1-\prod_{t=1}^T\left(1-M(t)\right) .$$ The total meeting probability $M(t)$ is the probability that the two particles meet at time $t$ anywhere on the lattice, the overall meeting probability ${\cal M}(T)$ is the probability that they meet at least once anywhere on the lattice during the first $T$ steps.
We first concentrate on the influence of the initial state on the meeting probability for the distinguishable particles. We consider three situations in which the particles start localized with an initial distance of $2d$ (for an odd initial distance they can never meet; without loss of generality we assume that the first starts at position zero and the second at position $2d$), with the coin states:
([*i*]{}) right for the first particle and left for the second $$\psi_{RL}(0,2d,0) = 1 ,$$
([*ii*]{}) symmetric initial conditions $1/\sqrt{2}(|L\rangle+i|R\rangle)$ for both $$\psi(0,2d,0) = \frac{1}{2}\left( \begin{array}{c}
1 \\
i \\
i \\
-1
\end{array}\right) ,$$
([*iii*]{}) left for the first particle and right for the second $$\psi_{LR}(0,2d,0) = 1.$$ In the first case the probability distribution of the first particle is biased to the right and that of the second to the left, thus the particles move towards each other. In the second case the particles’ mean positions remain unchanged, as for this initial condition the single particle probability distribution is symmetric and unbiased. In the last case the particles move away from each other, as their probability distributions are biased to the left for the first one and to the right for the second.
Let us specify the meeting probabilities (\[m2\]). Since the two particles are initially in a factorized state it follows from (\[fp\]) and (\[md\]) that the meeting probability is fully determined by the single particle probability distribution. Let $$\begin{aligned}
|\psi^{(L)}(t)\rangle & = & \sum_m\left(\psi^{(L)}_L(m,t)|m,L\rangle+\psi^{(L)}_R(m,t)|m,R\rangle\right)\\
|\psi^{(R)}(t)\rangle & = & \sum_m\left(\psi^{(R)}_L(m,t)|m,L\rangle+\psi^{(R)}_R(m,t)|m,R\rangle\right)
\label{psi:LR}\end{aligned}$$ be the state of a single quantum particle after $t$ steps, under the assumption that the initial condition was $$|\psi^{(L)}(0)\rangle=|0,L\rangle,\qquad |\psi^{(R)}(0)\rangle=|0,R\rangle.$$ Let us denote by $P^{(L,R)}(m,t)$ the corresponding single particle probability distributions. The meeting probabilities for the three situations ([*i*]{})-([*iii*]{}) are then given by $$\begin{aligned}
\label{mq}
\nonumber M_{RL}(t,d) & = & \sum_m P^{(R)}(m,t)P^{(L)}(m-2d,t) \\
\nonumber M_{S}(t,d) & = & \sum_m\frac{P^{(L)}(m,t)+P^{(R)}(m,t)}{2}\frac{P^{(L)}(m-2d,t)+P^{(R)}(m-2d,t)}{2}\\
M_{LR}(t,d) & = & \sum_m P^{(L)}(m,t)P^{(R)}(m-2d,t).\end{aligned}$$
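The probabilities (\[mq\]) follow from the two single particle distributions $P^{(L)}$ and $P^{(R)}$ alone. A minimal numerical sketch (Hadamard walk; the distribution of the symmetric state $(|L\rangle+i|R\rangle)/\sqrt{2}$ equals $(P^{(L)}+P^{(R)})/2$ since the Hadamard coin is real, so the $|L\rangle$ and $i|R\rangle$ parts do not interfere in the probabilities):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def walk_distribution(t, coin0):
    """Single-particle Hadamard walk distribution; index m + t <-> position m."""
    psi = np.zeros((2 * t + 1, 2), dtype=complex)
    psi[t] = coin0
    for _ in range(t):
        psi = psi @ H.T
        new = np.zeros_like(psi)
        new[:-1, 0] = psi[1:, 0]
        new[1:, 1] = psi[:-1, 1]
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)

def overlap(P1, P2, t, d):
    """sum_m P1(m) P2(m - 2d); particle 2 starts 2d sites to the right."""
    return sum(P1[m + t] * P2[m - 2 * d + t]
               for m in range(-t, t + 1) if -t <= m - 2 * d <= t)

def meeting_RL_S_LR(t, d):
    PL = walk_distribution(t, np.array([1, 0], dtype=complex))
    PR = walk_distribution(t, np.array([0, 1], dtype=complex))
    PS = (PL + PR) / 2   # distribution of the symmetric state
    return overlap(PR, PL, t, d), overlap(PS, PS, t, d), overlap(PL, PR, t, d)

d = 5  # initial distance of 10 lattice points
vals = [meeting_RL_S_LR(t, d) for t in range(5, 16)]
m_rl, m_s, m_lr = (sum(v[i] for v in vals) for i in range(3))
print(m_rl, m_s, m_lr)
```

Summed over the early steps, the configuration ([*i*]{}), with the particles moving towards each other, dominates the receding configuration ([*iii*]{}), as expected.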
Figure \[fig:61\] shows the time evolution of the meeting probability for the three studied situations and compares it with the classical case. The initial distance is set to 0 and 10 lattice points. The plot clearly shows the difference between the quantum and the classical case.
![Time evolution of the meeting probability for the three types of initial states and the classical random walk with two particles. The initial distance is set to 0 (upper plot) and 10 lattice points (lower plot). The upper plot shows a faster decay of the meeting probability when the two particles are initially at the same lattice point. Indeed, quantum walk spreads quadratically faster compared to the classical random walk. Since both particles start the walk from the origin the results for the initial states $|LR\rangle$ and $|RL\rangle$ are identical. In the lower plot, where the particles are initially separated, we observe an increase in the meeting probability for quantum walk. On the other hand, on a long-time scale the meeting probability decays faster in the quantum case.[]{data-label="fig:61"}](meeting_f1anewp.eps "fig:"){width="70.00000%"} ![Time evolution of the meeting probability for the three types of initial states and the classical random walk with two particles. The initial distance is set to 0 (upper plot) and 10 lattice points (lower plot). The upper plot shows a faster decay of the meeting probability when the two particles are initially at the same lattice point. Indeed, quantum walk spreads quadratically faster compared to the classical random walk. Since both particles start the walk from the origin the results for the initial states $|LR\rangle$ and $|RL\rangle$ are identical. In the lower plot, where the particles are initially separated, we observe an increase in the meeting probability for quantum walk. On the other hand, on a long-time scale the meeting probability decays faster in the quantum case.[]{data-label="fig:61"}](meeting_f1bnewp.eps "fig:"){width="70.00000%"}
In contrast to the classical walk, in the quantum case the meeting probability is oscillatory. The oscillations arise from the single particle probability distribution. After some rapid oscillations in the beginning we get a periodic function with a characteristic period of about six steps, independent of the initial state. In the quantum case the maximum of the meeting probability is reached sooner than in the classical case: the number of steps needed to hit the maximum is linear in the initial distance $d$. This can be understood from the shape of the particles’ probability distributions. The maximum of the meeting probability is obtained when the peaks of the probability distributions of the first and second particle overlap. If the initial distance between the two particles is $2d$ then the peaks will overlap approximately after $\sqrt{2}d$ steps. The value of the maximum depends on the choice of the initial state.
We turn to the meeting probabilities on a long-time scale. We present a review of the results derived in Appendix \[app:d\]. For the classical random walk we find in Appendix \[app:d1\] that the meeting probability can be estimated by $$M_{cl}(t,d)\approx\frac{1}{\sqrt{\pi t}} \exp(-\frac{d^2}{t})\sim \frac{1}{\sqrt{\pi t}}(1-\frac{d^2}{t})
\label{apcl}$$ for a large number of steps $t$. We see that the asymptotic behaviour of the meeting probability is determined by $t^{-\frac{1}{2}}$. Concerning the quantum walk, in Appendix \[app:d2\] we approximate the single particle probability distribution according to [@nayak] and replace the sum in (\[mq\]) by an integral. We find that within this approximation the meeting probability can be expressed in terms of the elliptic integrals. Finally, using the asymptotic expansion of the elliptic integrals we find the behaviour of the meeting probability for a large number of steps $$\label{apq}
M_D(t,d) \sim \frac{\ln{\left(\frac{2\sqrt{2}t}{d}\right)}}{t}.$$ Hence, the meeting probability decays faster in the quantum case compared to the classical case (\[apcl\]). However, the decay is not quadratically faster, as one could expect from the fact that the single particle probability distribution spreads quadratically faster in the quantum walk. The peaks in the probability distribution of the quantum walk slow down the decay.
Note that the estimation (\[apq\]) holds for $d>0$, i.e. the initial distance has to be non-zero. As we mention in Appendix \[app:d2\], the continuous approximation of the single particle probability distribution is not quadratically integrable, and therefore we cannot use this approach for the estimation of the meeting probability when the two particles are initially at the same lattice point. There does not seem to be an easy analytic approach to the problem. However, from the numerical results, the estimation $$M_D(t) \sim \frac{\ln{t}}{t}
\label{mq:d0}$$ fits the data best.
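The estimate (\[mq:d0\]) can be probed numerically by comparing window-averaged values of $t\,M_D(t)$ at two different times: under a $\ln t/t$ scaling the later average must exceed the earlier one, whereas under a pure $1/t$ scaling the two would coincide. A sketch (our naming; windows of twelve steps average out the period-six oscillations):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def step(psi):
    """One Hadamard-walk step: coin, then shift L one site left, R one right."""
    psi = psi @ H.T
    new = np.zeros_like(psi)
    new[:-1, 0] = psi[1:, 0]
    new[1:, 1] = psi[:-1, 1]
    return new

t_max = 191
n = 2 * t_max + 1
psiL = np.zeros((n, 2), dtype=complex); psiL[t_max] = (1, 0)  # walker started in |L>
psiR = np.zeros((n, 2), dtype=complex); psiR[t_max] = (0, 1)  # walker started in |R>
tM = []
for t in range(1, t_max + 1):
    psiL, psiR = step(psiL), step(psiR)
    PL = (np.abs(psiL) ** 2).sum(axis=1)
    PR = (np.abs(psiR) ** 2).sum(axis=1)
    tM.append(t * float((PL * PR).sum()))  # t * M_D(t) for d = 0

# average t * M_D(t) over two oscillation periods at early and late times
early = float(np.mean(tM[59:71]))    # t = 60..71
late = float(np.mean(tM[179:191]))   # t = 180..191
print(early, late)
```

The later average exceeds the earlier one, consistent with the slow logarithmic growth of $t\,M_D(t)$.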
We illustrate these results in Figure \[fig:62\]. We plot the meeting probability multiplied by the number of steps to unravel the different scaling in the classical and quantum case. In the upper plot both particles start from the origin, whereas in the lower plot the initial distance is 10 lattice points. The numerical results are consistent with the analytical estimation of (\[apq\]) and support the approximation (\[mq:d0\]).
![Long-time behaviour of the meeting probability in the classical and quantum walk. In the upper plot both particles start the walk from the origin. In the lower plot the particles are initially separated by 10 lattice points. To highlight the asymptotic scaling of the meeting probability we plot the latter one multiplied by the number of steps. We can clearly see the difference between the classical and quantum walk. In the quantum case the re-scaled meeting probability shows a logarithmic increase. On the other hand, the growth is much faster (with a square root of $t$) for the classical case. The numerical results are in good agreement with the analytical estimation of Appendix \[app:d\] which are summarized in (\[apq\]).[]{data-label="fig:62"}](meeting_f2anewp.eps "fig:"){width="70.00000%"} ![Long-time behaviour of the meeting probability in the classical and quantum walk. In the upper plot both particles start the walk from the origin. In the lower plot the particles are initially separated by 10 lattice points. To highlight the asymptotic scaling of the meeting probability we plot the latter one multiplied by the number of steps. We can clearly see the difference between the classical and quantum walk. In the quantum case the re-scaled meeting probability shows a logarithmic increase. On the other hand, the growth is much faster (with a square root of $t$) for the classical case. The numerical results are in good agreement with the analytical estimation of Appendix \[app:d\] which are summarized in (\[apq\]).[]{data-label="fig:62"}](meeting_f2bnewp.eps "fig:"){width="70.00000%"}
We focus on the overall meeting probability defined by (\[ov\]). In Figure \[fig:63\] we plot the overall probability that the two particles will meet during the first $T=100$ steps.
![The overall meeting probability for two distinguishable quantum and classical particles during the first 100 steps as a function of the initial distance. The same plot is shown on the logarithmic scale. Only the values for even points are plotted, since for an odd initial distance the particles never meet.[]{data-label="fig:63"}](meeting_f4ap.eps "fig:"){width="70.00000%"} ![The overall meeting probability for two distinguishable quantum and classical particles during the first 100 steps as a function of the initial distance. The same plot is shown on the logarithmic scale. Only the values for even points are plotted, since for an odd initial distance the particles never meet.[]{data-label="fig:63"}](meeting_f4bp.eps "fig:"){width="70.00000%"}
The first plot shows the differences between the three quantum situations studied, whereas the second plot, with the meeting probability on a log scale, uncovers the difference between the quantum and the classical random walk. On the log scale we see that the overall meeting probability decays more slowly in the quantum case than in the classical case, up to an initial distance of $\sqrt{2}T$. This can be understood from the shape and the time evolution of the single-particle probability distribution. After $t$ steps the maxima of the probability distribution are around the points $s\pm\frac{t}{\sqrt{2}}$, where $s$ is the starting point of the quantum particle. For $t=100$ steps the peaks are around the points $s\pm 70$. When the two particles are initially more than 140 points apart, the peaks do not overlap, and the meeting probability is given by just the tails of the single-particle distributions, which behave almost classically.
Finally, we note that the overall meeting probability ${\cal M}(T,d)$ defined in (\[ov\]) converges to one as $T$ approaches infinity for both the classical and the quantum walk, independently of the initial distance. Indeed, to estimate ${\cal M}(T,d)$ we rewrite it in the form $${\cal M}(T,d) = 1 - \exp\left[\ln\left(\prod\limits_{t=1}^T\left(1-M(t,d)\right)\right)\right],
\label{appd:eq2}$$ and estimate the exponent with the first order Taylor expansion $$\ln\left(\prod_{t=1}^T\left(1-M(t,d)\right)\right)=\sum_{t=1}^T \ln{\left(1-M(t,d)\right)}\approx -\sum_{t=1}^T M(t,d).
\label{appd:eq3}$$ The scaling of the meeting probability $M(t,d)$ both in the classical case (\[apcl\]) and in the quantum case (\[apq\]) is slow enough that the sum in (\[appd:eq3\]) diverges to $-\infty$ as $T$ grows. Consequently, the exponential in (\[appd:eq2\]) vanishes as $T$ grows. Hence, the overall meeting probability converges to unity for both the classical and the quantum walk, i.e. the particles will meet with certainty during their time evolution.
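The argument above is straightforward to check numerically for the classical walk. The following Python sketch (an illustrative addition; all function names are ours) evaluates the classical meeting probability $M(t,d)$ exactly from the binomial distributions and accumulates the product formula for ${\cal M}(T,d)$:

```python
from math import comb

def p_line(m, t):
    """Probability that an unbiased classical walker started at 0 sits at site m after t steps."""
    if (m + t) % 2 != 0 or abs(m) > t:
        return 0.0
    return comb(t, (t + m) // 2) / 2.0**t

def meeting_prob(t, d):
    """M(t, d): both independent walkers occupy the same site after t steps; initial distance 2d."""
    return sum(p_line(m, t) * p_line(m - 2 * d, t) for m in range(-t, t + 1))

def overall_meeting_prob(T, d):
    """1 - prod_{t=1}^{T} (1 - M(t, d)) -- the probability of at least one meeting."""
    prod = 1.0
    for t in range(1, T + 1):
        prod *= 1.0 - meeting_prob(t, d)
    return 1.0 - prod

# M(t, d) decays only like 1/sqrt(t), so the product decays to zero and the
# overall meeting probability creeps towards one as T grows.
for T in (10, 100, 500):
    print(T, overall_meeting_prob(T, 5))
```

Because $M(t,d)$ decays only like $t^{-1/2}$, the printed values of ${\cal M}(T,5)$ increase steadily towards one as $T$ grows, in line with the estimate above.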
Effect of the entanglement {#chap:6e}
--------------------------
We now consider the case when the two distinguishable particles are initially entangled. According to (\[ment\]) the meeting probability is no longer given by a product of single-particle probability distributions. However, it can still be described using single-particle probability amplitudes. We consider the initial state of the form $$|\psi(0)\rangle = |0, 2d\rangle\otimes|\chi\rangle,$$ where $|\chi\rangle$ is one of the Bell states $$\begin{aligned}
\label{bell}
\nonumber |\psi^\pm\rangle &=&
\frac{1}{\sqrt{2}}\left(|LR\rangle\pm|RL\rangle\right),\\
|\phi^\pm\rangle & = &
\frac{1}{\sqrt{2}}\left(|LL\rangle\pm|RR\rangle\right).\end{aligned}$$ The corresponding probability distributions resulting from such initial states have the form $$\begin{aligned}
\label{pent}
\nonumber P_{\psi^\pm}(m,n,t) & = & \frac{1}{2} \sum_{i,j=L,R}\left|\psi_i^{(L)}(m,t)\psi_j^{(R)}(n-2d,t)\pm \psi_i^{(R)}(m,t)\psi_j^{(L)}(n-2d,t)\right|^2,\\
P_{\phi^\pm}(m,n,t) & = & \frac{1}{2}\sum_{i,j=L,R}\left|\psi_i^{(L)}(m,t)\psi_j^{(L)}(n-2d,t)\pm\psi_i^{(R)}(m,t)\psi_j^{(R)}(n-2d,t)\right|^2,\end{aligned}$$ where $\psi^{L(R)}(m,t)$ are the probability amplitudes from (\[psi:LR\]) which describe the state of a single particle after $t$ steps starting the quantum walk from the origin with the initial coin state $L(R)$. The meeting probabilities are given by the sum of the diagonal terms in (\[pent\]) $$\begin{aligned}
\nonumber M_{\psi^\pm}(t,d) & = & \frac{1}{2}\sum_m \sum_{i,j=L,R}\left|\psi_i^{(L)}(m,t)\psi_j^{(R)}(m-2d,t)\pm\psi_i^{(R)}(m,t)\psi_j^{(L)}(m-2d,t)\right|^2,\\
\nonumber M_{\phi^\pm}(t,d) & = & \frac{1}{2}\sum_m \sum_{i,j=L,R}\left|\psi_i^{(L)}(m,t)\psi_j^{(L)}(m-2d,t)\pm\psi_i^{(R)}(m,t)\psi_j^{(R)}(m-2d,t)\right|^2.\end{aligned}$$ The reduced density operators for both coins are maximally mixed for all four Bell states (\[bell\]). From this fact it follows that the reduced density operators of the particles are $$\begin{aligned}
\nonumber \rho_1(t) & = & \frac{1}{2}\left(|\psi^{(L)}(t)\rangle\langle\psi^{(L)}(t)|+|\psi^{(R)}(t)\rangle\langle\psi^{(R)}(t)|\right)\\
\nonumber \rho_2(t) & = & \frac{1}{2}\left(|\psi^{(L)}_d(t)\rangle\langle\psi^{(L)}_d(t)|+|\psi^{(R)}_d(t)\rangle\langle\psi^{(R)}_d(t)|\right),\end{aligned}$$ where the states $|\psi^{L(R)}_d(t)\rangle$ are analogous to $|\psi^{L(R)}(t)\rangle$ expressed in (\[psi:LR\]) but with the starting point shifted by $2d$, i.e. $$|\psi^{(L,R)}_d(t)\rangle = \sum_m\left(\psi^{(L,R)}_L(m-2d,t)|m,L\rangle+\psi^{(L,R)}_R(m-2d,t)|m,R\rangle\right).$$ The reduced probabilities are therefore $$\begin{aligned}
\label{red}
\nonumber P_1(m,t) & = & \frac{1}{2}(P^{(L)}(m,t)+P^{(R)}(m,t))\\
P_2(m,t) & = & P_1(m-2d,t),\end{aligned}$$ which are symmetric and unbiased. Notice that the product of the reduced probabilities (\[red\]) gives the probability distribution of the symmetric case studied in the previous section. Therefore, to capture the interference effect in the meeting problem, we compare the quantum walks with entangled coin states (\[bell\]) to the symmetric case $M_{S}$. Figure \[fig:65\] shows the meeting probabilities and the difference $M_\chi-M_{S}$; the initial distance between the two particles was chosen to be 10 points.
![Comparison of the meeting probability for the initially entangled coins and the symmetric case. The initial distance between the two particles is set to 10 points. As the initial coin states we choose the Bell states (\[bell\]). We observe that the effect of the entangled coin state on the meeting probability can be both positive or negative. In the lower plot we show the difference in the meeting probability with respect to the symmetric case. We find that the effect of $|\psi^-\rangle$ is opposite to $|\phi^+\rangle$ and $|\phi^-\rangle$ is opposite to $|\psi^+\rangle$.[]{data-label="fig:65"}](meeting_f6ap.eps "fig:"){width="70.00000%"} ![Comparison of the meeting probability for the initially entangled coins and the symmetric case. The initial distance between the two particles is set to 10 points. As the initial coin states we choose the Bell states (\[bell\]). We observe that the effect of the entangled coin state on the meeting probability can be both positive or negative. In the lower plot we show the difference in the meeting probability with respect to the symmetric case. We find that the effect of $|\psi^-\rangle$ is opposite to $|\phi^+\rangle$ and $|\phi^-\rangle$ is opposite to $|\psi^+\rangle$.[]{data-label="fig:65"}](meeting_f6bp.eps "fig:"){width="70.00000%"}
![Asymptotic behaviour of the meeting probability for the initially entangled coins. In order to unravel the asymptotic scaling of the meeting probability we multiply $M_\chi(t)$ by the number of steps $t$. We see that for the Bell states $|\psi^+\rangle$ (black dots) and $|\phi^{\pm}\rangle$ (open circles/diamonds) the rescaled meeting probability $M_\chi(t)\cdot t$ shows a logarithmic increase with $t$, while for $|\psi^-\rangle$ (stars) the value of $M_\chi(t)\cdot t$ levels. These results indicate that the asymptotic decay of the meeting probability is faster for the singlet state $|\psi^-\rangle$ compared to the other Bell states or factorized initial conditions.[]{data-label="fig:68"}](meeting_f8p.eps){width="70.00000%"}
We see that the effect of the entanglement can be either positive or negative. Notice that $$\begin{aligned}
\nonumber M_{\psi^-}(t,d)-M_S(t,d) & = & -\left(M_{\phi^+}(t,d)-M_S(t,d)\right)\\
\nonumber M_{\phi^-}(t,d)-M_S(t,d) & = & -\left(M_{\psi^+}(t,d)-M_S(t,d)\right),\end{aligned}$$ so the effect of $|\psi^-\rangle$ is opposite to $|\phi^+\rangle$ and $|\phi^-\rangle$ is opposite to $|\psi^+\rangle$. The main difference is around the point $t\approx\sqrt{2}d$, i.e., the point where for the factorized states the maximum of the meeting probability is reached. The peak value is nearly doubled for $M_{\psi^-}$, but significantly reduced for $M_{\phi^+}$. On the long time scale, however, the meeting probability $M_{\psi^-}$ decays faster than in the other situations. According to the numerical results presented in Figure \[fig:68\], the meeting probabilities for $|\psi^+\rangle$ and $|\phi^\pm\rangle$ maintain the asymptotic behavior $\ln{t}/t$, but for $|\psi^-\rangle$ it goes like $$M_{\psi^-}(t,d)\sim\frac{1}{t}.$$ The initial entanglement between the particles influences the height of the peaks giving the maximum meeting probability and affects also the meeting probability on the long time scale.
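These relations are easy to verify numerically. The sketch below (our own illustration; the step convention of the Hadamard walk — coin flip first, then $|L\rangle$ shifts left and $|R\rangle$ shifts right — and all function names are assumptions of this sketch) iterates the single-particle amplitudes and evaluates $M_\chi$ for the four Bell states together with the symmetric reference $M_S$:

```python
import numpy as np

def hadamard_amplitudes(coin0, steps):
    """Single-particle amplitudes (psi_L, psi_R) of a Hadamard walker started
    at the origin with initial coin state coin0 in {'L', 'R'}.
    Arrays are indexed by m + steps, positions run over -steps..steps."""
    n = 2 * steps + 1
    L = np.zeros(n)
    R = np.zeros(n)
    (L if coin0 == 'L' else R)[steps] = 1.0
    for _ in range(steps):
        Lc = (L + R) / np.sqrt(2.0)   # coin: new left component
        Rc = (L - R) / np.sqrt(2.0)   # coin: new right component
        L = np.roll(Lc, -1)           # left-movers shift one site left
        R = np.roll(Rc, 1)            # right-movers shift one site right
    return L, R

def bell_meeting_probs(t, d):
    """Meeting probabilities for the four Bell coin states and the symmetric
    reference M_S (product of the reduced distributions); distance 2*d."""
    ampsL = hadamard_amplitudes('L', t)   # (psi^{(L)}_L, psi^{(L)}_R)
    ampsR = hadamard_amplitudes('R', t)   # (psi^{(R)}_L, psi^{(R)}_R)

    def a(sup, i, m):                     # psi_i^{(sup)}(m, t); zero off-grid
        arr = (ampsL if sup == 'L' else ampsR)[0 if i == 'L' else 1]
        idx = m + t
        return arr[idx] if 0 <= idx < 2 * t + 1 else 0.0

    M = dict.fromkeys(('psi+', 'psi-', 'phi+', 'phi-'), 0.0)
    MS = 0.0
    for m in range(-t, t + 1):
        for i in 'LR':
            for j in 'LR':
                M['psi+'] += 0.5 * abs(a('L', i, m) * a('R', j, m - 2 * d)
                                       + a('R', i, m) * a('L', j, m - 2 * d))**2
                M['psi-'] += 0.5 * abs(a('L', i, m) * a('R', j, m - 2 * d)
                                       - a('R', i, m) * a('L', j, m - 2 * d))**2
                M['phi+'] += 0.5 * abs(a('L', i, m) * a('L', j, m - 2 * d)
                                       + a('R', i, m) * a('R', j, m - 2 * d))**2
                M['phi-'] += 0.5 * abs(a('L', i, m) * a('L', j, m - 2 * d)
                                       - a('R', i, m) * a('R', j, m - 2 * d))**2
        p1 = 0.5 * sum(a(s, i, m)**2 for s in 'LR' for i in 'LR')
        p2 = 0.5 * sum(a(s, i, m - 2 * d)**2 for s in 'LR' for i in 'LR')
        MS += p1 * p2
    return M, MS

M, MS = bell_meeting_probs(50, 5)
print({k: round(v, 5) for k, v in M.items()}, round(MS, 5))
```

Expanding the squares shows that $M_{\psi^-}+M_{\phi^+}=2M_S$ and $M_{\psi^+}+M_{\phi^-}=2M_S$, which is exactly the pairwise opposite behaviour stated above.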
Let us briefly comment on the overall meeting probability. As we have discussed in the previous section, the overall meeting probability converges to one provided the meeting probability does not decay faster than $\frac{1}{t}$. As we have seen, the entanglement can speed up the decay of the meeting probability, but it is never faster than $\frac{1}{t}$. Therefore we conclude that for initially entangled particles the overall meeting probability converges to one.
Meeting problem for indistinguishable particles {#chap:6f}
-----------------------------------------------
We turn to the meeting problem for two indistinguishable particles. As an example, we consider the initial state of the form $|1_{(0,R)}1_{(2d,L)}\rangle$, i.e. one particle starts at site zero with the right coin state and one starts at $2d$ with the left coin state. This corresponds to the case $M_{RL}$ for distinguishable particles. According to (\[mb\]) and (\[mf\]), the meeting probabilities are given by $$\begin{aligned}
\label{bfmp}
\nonumber M_B(t,d) & = & \sum_{m}\left(\frac{}{}2|\psi_L^{(R)}(m,t)|^2|\psi_L^{(L)}(m-2d,t)|^2 + 2|\psi_R^{(R)}(m,t)|^2|\psi_R^{(L)}(m-2d,t)|^2+\nonumber \right.\\
\nonumber & & \quad\quad \left.+|\psi_L^{(R)}(m,t)\psi_R^{(L)}(m-2d,t) + \psi_R^{(R)}(m,t)\psi_L^{(L)}(m-2d,t)|^2\frac{}{}\right),\\
\nonumber \\
M_F(t,d) & = & \sum_m\left(|\psi_L^{(R)}(m,t)\psi_R^{(L)}(m-2d,t)-\psi_R^{(R)}(m,t)\psi_L^{(L)}(m-2d,t)|^2\frac{}{}\right).\end{aligned}$$
In Figure \[fig:66\] we plot the meeting probabilities and the difference $M_{B,F}-M_{RL}$.
![Comparison of the meeting probability for bosons, fermions and distinguishable particles. The initial distance between the two particles is set to 10 points. We find that the maximum value of the meeting probability is almost unaffected. However, for longer times we observe an increase in the meeting probability for bosons and decrease for fermions. In the lower plot we show the difference in the meeting probability for bosons and fermions with respect to distinguishable particles. We find that the increase of the meeting probability for bosons is the same as the decrease for fermions.[]{data-label="fig:66"}](meeting_f7ap.eps "fig:"){width="70.00000%"} ![Comparison of the meeting probability for bosons, fermions and distinguishable particles. The initial distance between the two particles is set to 10 points. We find that the maximum value of the meeting probability is almost unaffected. However, for longer times we observe an increase in the meeting probability for bosons and decrease for fermions. In the lower plot we show the difference in the meeting probability for bosons and fermions with respect to distinguishable particles. We find that the increase of the meeting probability for bosons is the same as the decrease for fermions.[]{data-label="fig:66"}](meeting_f7bp.eps "fig:"){width="70.00000%"}
From the figure we infer that the peak value is only slightly changed in this case. Significant differences appear on the long time scale. The meeting probability is greater for bosons and smaller for fermions compared to the case of distinguishable particles. This behavior can be understood by examining the asymptotic properties of the expressions (\[bfmp\]). Numerical evidence presented in Figure \[fig:69\] indicates that the meeting probability for bosons has the asymptotic behavior of the form $\ln(t)/t$. However, for fermions the decay of the meeting probability is faster, having the form $$M_F(t,d)\sim\frac{1}{t}.$$ The Pauli exclusion principle simply works against an enhancement of the meeting probability.
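The expressions (\[bfmp\]) can be evaluated directly. The following sketch (our own illustration; the Hadamard-walk step convention and all names are assumptions) computes $M_B$, $M_F$ and the distinguishable reference $M_{RL}$:

```python
import numpy as np

def hadamard_amplitudes(coin0, steps):
    """Hadamard-walk amplitudes (psi_L, psi_R), indexed by m + steps.
    Convention (an assumption): coin first, then |L> moves left, |R> right."""
    n = 2 * steps + 1
    L = np.zeros(n)
    R = np.zeros(n)
    (L if coin0 == 'L' else R)[steps] = 1.0
    for _ in range(steps):
        Lc = (L + R) / np.sqrt(2.0)
        Rc = (L - R) / np.sqrt(2.0)
        L, R = np.roll(Lc, -1), np.roll(Rc, 1)
    return L, R

def boson_fermion_meeting(t, d):
    """M_B, M_F and the distinguishable reference M_RL for initial distance 2*d."""
    aLR, aRR = hadamard_amplitudes('R', t)   # psi_L^{(R)}, psi_R^{(R)} (particle from 0)
    aLL, aRL = hadamard_amplitudes('L', t)   # psi_L^{(L)}, psi_R^{(L)} (particle from 2d)

    def amp(arr, m):
        idx = m + t
        return arr[idx] if 0 <= idx < 2 * t + 1 else 0.0

    MB = MF = MRL = 0.0
    for m in range(-t, t + 1):
        a = amp(aLR, m)           # psi_L^{(R)}(m)
        c = amp(aRR, m)           # psi_R^{(R)}(m)
        b = amp(aLL, m - 2 * d)   # psi_L^{(L)}(m - 2d)
        e = amp(aRL, m - 2 * d)   # psi_R^{(L)}(m - 2d)
        MB += 2 * a**2 * b**2 + 2 * c**2 * e**2 + (a * e + c * b)**2
        MF += (a * e - c * b)**2
        MRL += (a**2 + c**2) * (b**2 + e**2)
    return MB, MF, MRL

MB, MF, MRL = boson_fermion_meeting(50, 5)
print(round(MB, 5), round(MF, 5), round(MRL, 5))
```

Expanding the squares in (\[bfmp\]) shows $M_B + M_F = 2 M_{RL}$, i.e. the bosonic enhancement equals the fermionic suppression, in line with Figure \[fig:66\].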
![Asymptotic behaviour of the meeting probability for bosons, fermions and distinguishable particles. In order to unravel the asymptotic scaling of the meeting probability we multiply $M(t)$ by the number of steps $t$. We see that for bosons (stars) and distinguishable particles (black dots) the rescaled meeting probability $M(t)\cdot t$ shows a logarithmic increase with $t$, while for fermions (open circles) the value of $M(t)\cdot t$ levels. These results indicate that the meeting probability decays faster for fermions.[]{data-label="fig:69"}](meeting_f9p.eps){width="70.00000%"}
For the overall meeting probability we can use the same arguments as in the previous section and conclude that it will converge to one for both bosons and fermions.
Conclusions {#chap:6g}
-----------
We have defined and analyzed the meeting problem in the quantum walk on an infinite line with two quantum particles. For distinguishable particles we have derived analytical formulas for the meeting probability. The asymptotic behavior following from these results shows that the meeting probability decays faster, but not quadratically faster, than in the classical random walk. This results in a slower convergence of the overall meeting probability; however, it still converges to one. This is due to the fact that the meeting probability does not decay faster than $\frac{1}{t}$. Such a situation might occur in higher-dimensional walks and could result in yet another difference between the classical and the quantum walks. We have studied the influence of the entanglement and the indistinguishability of the particles on the meeting probability. The influence is particularly visible for fermions and, in the case of distinguishable particles, for the initially entangled singlet state. Although the meeting probability decays faster in these cases, the overall meeting probability still converges to one, as the decay is never faster than the threshold $\frac{1}{t}$.
Quantum walks are a specialized field on the border between quantum information theory and statistical physics which attracted a lot of interest in recent years. A number of novel effects have been found and are still under investigation. In the present thesis we contributed to these investigations.
In particular, we extended the concept of recurrence and the Pólya number to quantum walks. The particular measurement scheme employed in our definition preserves the effect of the additional degrees of freedom offered by quantum mechanics on the Pólya number. We developed the tools needed for the analysis of the recurrence nature of quantum walks. The analysis revealed that quantum walks can be operated in physically different regimes. These regimes cover localization as well as ballistic spreading of the walker’s wave packets. We found that the free parameters we have at hand in a coined quantum walk have a crucial impact on its dynamics and are capable of changing its behaviour from recurrent to transient. A striking diversity of quantum walks, in contrast to classical random walks, was pointed out. The present results prove the usefulness of the Pólya number concept for quantum walks and support our expectation of its applicability in related domains.
The recurrence of quantum walks under the effect of bias was analyzed. For classical random walks, breaking the symmetry results in an immediate turnover from recurrence to transience. However, the ballistic nature of the quantum walk is able to compensate for the bias and the recurrence can be preserved. We identified the range of parameters for which the recurrence behaviour of biased quantum walks on a line differs from that of classical random walks.
Finally, we considered quantum walks with two particles. This makes the additional properties offered by quantum mechanics like entanglement or indistinguishability accessible. We analyzed the effect of these non-classical features on the meeting probability and pointed out the difference from the classical random walk.
The presented results provide a step in the classification of coined quantum walks, in particular on higher-dimensional lattices. We have identified several extreme modes of the dynamics of quantum walks. Our next goal is to exploit the free parameters of the coin operator which will allow us to shed light on the connection between these different regimes.
Within our definition the recurrence of a quantum walk describes the revival of a particular quantity, namely the probability at the origin, rather than the revival of a quantum state. Nevertheless and quite surprisingly, even full revivals are possible in quantum walk settings. This effect is closely related to localization. Indeed, for localizing quantum walks the propagator has a non-empty point spectrum which allows for stationary and oscillating states. However, the point spectra of the presently known localizing quantum walks are rather simple leading only to oscillations with a period of two steps. Finding quantum walks with a broader point spectrum will lead to novel features including full and fractional revival dynamics.
Our definition of the Pólya number of a quantum walk is connected to a specific measurement scheme. Needless to say, we can consider schemes where the measurements are performed in a different manner and define the Pólya number accordingly. It is interesting to analyze the influence of various measurement schemes on the recurrence nature of the quantum walk. Preliminary results indicate that our definition gives an upper limit for the Pólya number.
The meeting problem for two quantum walkers which we have studied presents a step towards quantum walks involving many particles. It is certainly worthwhile to analyze various other quantities available in multi-particle settings, e.g. the angular correlations among the particles’ positions. Moreover, to date the particles performing the quantum walk have been considered non-interacting. To investigate the effect of interactions between the particles on the dynamics of quantum walks is one of our next goals.
Recurrence of Random Walks {#app:a}
==========================
In this appendix we review the main results on the recurrence in classical random walks. First, we show how the recurrence is related to the probability at the origin. Then we discuss the recurrence of unbiased random walks on $d$-dimensional lattices. Finally, we analyze the recurrence of biased random walks on a line. For more comprehensive reviews we refer to the literature [@hughes; @revesz].
We begin with the problem analyzed by Pólya in 1921 [@polya]. Consider a particle performing a random walk on an infinite $d$-dimensional lattice. The particle is initially localized at the origin of the lattice. The probability $P$ that the particle returns to the origin during the time evolution is called the Pólya number of the walk. Random walks are classified as [*recurrent*]{} or [*transient*]{} depending on whether their Pólya number equals one or is less than one, respectively. If the random walk is recurrent the particle returns to the origin with certainty. On the other hand, for transient random walks there is a non-zero probability that the particle never returns to its starting point. In other words, there is a non-vanishing probability of escape.
The Pólya number of a classical random walk can be defined in the following way [@revesz]. Let $q_0(t)$ be the probability that the particle returns to the origin for the [*first time*]{} after $t$ steps. Since these events are mutually exclusive we can add up their probabilities and the series $$P\equiv\sum\limits_{t=1}^\infty q_0(t)
\label{polya:1}$$ gives the probability that the particle has returned to the origin at least once, i.e. the Pólya number. However, the definition (\[polya:1\]) is not very practical for determining the recurrence nature of a random walk. We can express the Pólya number in terms of the probability $p_0(t)$ that the particle can be found at the origin at a given time instant $t$. Indeed, it is easy to see that the probability at the origin $p_0(t)$ and the first return probability $q_0(t)$ fulfil the following relations $$\begin{aligned}
\nonumber p_0(0) & = & 1\\
\nonumber p_0(1) & = & q_0(1)\\
\nonumber p_0(2) & = & q_0(2)+q_0(1)p_0(1)\\
\nonumber p_0(3) & = & q_0(3)+q_0(2)p_0(1)+q_0(1)p_0(2)\\
\nonumber & \vdots & \\
\nonumber p_0(n) & = & q_0(n)+q_0(n-1)p_0(1)+\ldots+q_0(1)p_0(n-1).\end{aligned}$$ Simply adding all of these equations together might lead to a divergent series. Therefore, we first multiply the $n$-th equation by $z^n$ with $|z|<1$. Adding these modified equations we find the relation $$F(z) = 1 + F(z)G(z),
\label{ap1:eq1}$$ where we have defined the following functions $$\begin{aligned}
\nonumber F(z) & = & \sum\limits_{n=0}^\infty p_0(n)z^n\\
\nonumber G(z) & = & \sum\limits_{n=1}^\infty q_0(n)z^n.\end{aligned}$$ Both series are convergent for $|z|<1$. Moreover, the Pólya number $P$ can be evaluated from the function $G(z)$ by taking the limit $z\rightarrow 1^-$ $$P = \lim\limits_{z\rightarrow 1^-} G(z) = \sum\limits_{n=1}^\infty q_0(n).$$ From the relation (\[ap1:eq1\]) we express the function $G(z)$ in the form $$G(z) = 1-\frac{1}{F(z)}.$$ Finally, we take the limit $z\rightarrow 1^-$ and find the formula $$P = 1-\frac{1}{\sum\limits_{t=0}^{+\infty}p_0(t)},$$ which expresses the Pólya number $P$ in terms of the probability at the origin $p_0(t)$.
The recurrence behaviour of a random walk is determined solely by the infinite sum $${\cal S} \equiv \sum_{t=0}^{\infty}p_0(t).
\label{ap1:eq2}$$ We find that $P$ equals unity if and only if the series ${\cal S}$ diverges [@revesz]. In such a case the walk is recurrent. On the other hand, if the series $\cal S$ is convergent, the Pólya number $P$ is strictly less than unity and the walk is transient. The convergence of the series $\cal S$ is determined by the asymptotic behaviour of the probability at the origin $p_0(t)$. Indeed, we find that if $p_0(t)$ decays faster than $t^{-1}$ the sum is finite, while if the decay of $p_0(t)$ is slower the sum is divergent. Hence we find the following criterion for recurrence of random walks — the random walk is recurrent if and only if the probability at the origin decays like $t^{-1}$ or slower as $t$ approaches infinity.
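This criterion can be illustrated numerically. The sketch below (our own illustration) uses the variant of a $d$-dimensional walk whose coordinates perform independent unbiased walks on a line, so that $p_0(t)=\left(\binom{2t}{t}/4^t\right)^d\sim(\pi t)^{-d/2}$; this variant reproduces the same $t^{-d/2}$ scaling, and hence the same recurrence behaviour, as the standard lattice walk:

```python
def partial_sums(T, dims=(1, 2, 3)):
    """Partial sums sum_{t<=T} p_0(t) for a walker whose d coordinates each
    perform an independent unbiased walk on a line, so that
    p_0(t) = (C(2t,t)/4^t)^d ~ (pi*t)^(-d/2)."""
    sums = dict.fromkeys(dims, 0.0)
    c = 1.0                               # c = C(2t, t) / 4^t, starting at t = 0
    for t in range(T + 1):
        for d in dims:
            sums[d] += c**d
        c *= (2 * t + 1) / (2 * t + 2)    # central-binomial recurrence
    return sums

# d = 1, 2: the partial sums keep growing (divergent S, recurrent walk);
# d = 3: they settle, giving a Polya number estimate 1 - 1/S < 1.
for T in (10**3, 10**5):
    s = partial_sums(T)
    print(T, s[1], s[2], s[3], 1.0 - 1.0 / s[3])
```

For $d=3$ the printed Pólya estimate settles near $0.28$; this independent-coordinate variant is transient, like the standard three-dimensional walk, although its Pólya number differs from the lattice value in Table \[app:a:tab\].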
In the following we use the above mentioned criterion to analyze the recurrence of biased and unbiased random walks.
Unbiased random walks on $\mathds{Z}^d$ {#app:a1}
---------------------------------------
Let us begin with the unbiased random walk on a line. At each time step the particle has two possibilities — it can move to the right or to the left by a unit distance with equal probability $1/2$. The probability distribution generated by such a random walk is easily found to be $$P(m,t) = \frac{1}{2^t} {t\choose \frac{t+m}{2}}.$$ The probability at the origin (for an even number of steps $2t$) is thus given by $$p_0(t) = \frac{1}{4^t} {2t\choose t}.$$ Using Stirling’s formula $$n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n
\label{ap1:eq3}$$ we find that the asymptotic behaviour of the probability at the origin is determined by $$p_0(t) \approx \frac{1}{\sqrt{\pi t}}.$$ Hence, the series $\cal S$ defined in (\[ap1:eq2\]) is divergent. Consequently, we find that the unbiased random walk on a line is recurrent.
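As a quick numerical check of this asymptotics (an illustrative addition), the exact probability $p_0(t)=\binom{2t}{t}/4^t$ can be compared with the Stirling estimate $1/\sqrt{\pi t}$:

```python
from math import comb, pi, sqrt

def p0(t):
    """Return probability of the unbiased walk on a line after 2t steps."""
    return comb(2 * t, t) / 4**t          # exact big-integer arithmetic

# The ratio to the Stirling estimate 1/sqrt(pi*t) tends to one:
for t in (10, 100, 1000):
    print(t, p0(t) * sqrt(pi * t))
```

Note that the big-integer division `comb(2*t, t) / 4**t` avoids the floating-point overflow that `4.0**t` would cause for large $t$.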
Recurrence of unbiased random walks on higher-dimensional lattices can be analyzed in a similar way [@revesz]. One finds that the asymptotics of the probability at the origin is determined by the dimension of the lattice $d$ in the following form $$p_0(t) \sim t^{-\frac{d}{2}}.$$ It follows that the series $\cal S$ determining the recurrence of a random walk is divergent only for the dimensions $d=1,2$ and convergent for $d\geq 3$. We conclude that the random walks on a line and in the plane are recurrent while higher-dimensional random walks are transient, the result originally found by Pólya in 1921 [@polya].
Concerning the value of the Pólya number for the transient case, Montroll [@montroll:1956] showed that for the dimensions $d>2$ the following relation holds $$P(d) = 1-\frac{1}{u(d)},$$ where $u(d)$ can be expressed in terms of an integral of the modified Bessel function of the first kind [@abramowitzstegun] $$u(d) = \int\limits_0^\infty \left[I_0\left(\frac{t}{d}\right)\right]^d e^{-t} dt.$$ However, the closed form of the function $u(d)$ is known only for $d=3$ due to Watson’s triple integral [@watson:1939] with the result $$u(3) = \frac{\sqrt{6}}{32\pi^3}\Gamma\left(\frac{1}{24}\right)\Gamma\left(\frac{5}{24}\right)\Gamma\left(\frac{7}{24}\right)\Gamma\left(\frac{11}{24}\right)\approx 1.516.$$ For higher dimensions $d>3$ one has to evaluate the integral numerically. We present an overview of the numerical values of the Pólya number [@montroll:1956] for different dimensions $d$ in Table \[app:a:tab\].
  $d$      3        4        5        6        7        8
  -------- -------- -------- -------- -------- -------- --------
  $P(d)$   0.3405   0.1932   0.1352   0.1047   0.0858   0.0729
: Pólya number of a random walk on $\mathds{Z}^d$ in dependence of the dimension $d$.[]{data-label="app:a:tab"}
Biased random walks on a line {#app:a2}
-----------------------------
Let us consider biased random walks on a line. The bias can be introduced in two ways — the step in one direction can be longer than in the other, or the probability of a step to the right can differ from the probability of a step to the left (see [Figure \[fig5\]]{}).
![Schematics of the biased random walk on a line. The particle can move to the right by a distance $r$ with the probability $p$. The length of the step to the left is unity and the probability of this step is $1-p$.[]{data-label="fig5"}](biased_fig5.eps){width="60.00000%"}
Consider a random walk on a line such that the particle can make a jump of length $r$ to the right with probability $p$ or make a unit-size step to the left with probability $1-p$. As we have discussed, a random walk is recurrent if and only if the probability of finding the particle at the origin at a given time instant $t$ does not decay faster than $t^{-1}$. This probability is easily found to be $$p_0(t) = (1-p)^{\frac{t r}{r+1}}p^{\frac{t}{r+1}}{t\choose \frac{t r}{r+1}}.$$ With the help of Stirling’s formula (\[ap1:eq3\]) we find the asymptotic behaviour of the probability at the origin $$p_0(t)\approx \frac{r+1}{\sqrt{2\pi r t}}\left[(1-p)^{\frac{r}{r+1}}p^{\frac{1}{r+1}}\frac{r+1}{r^\frac{r}{r+1}}\right]^t.$$ The asymptotics of the probability $p_0(t)$ therefore depends on the value of $$q = (1-p)^{\frac{r}{r+1}}p^{\frac{1}{r+1}}\frac{r+1}{r^\frac{r}{r+1}}.$$ Since $q\leq 1$ the probability $p_0(t)$ decays exponentially unless the inequality is saturated. Hence, the random walk is recurrent if and only if $q$ equals unity. This condition is satisfied for $$\label{rw:cond}
p = \frac{1}{r+1},$$ i.e. the probability of the step to the right has to be inversely proportional to the length of the step.
This result can be well understood from a different point of view, as we illustrate in [Figure \[fig6\]]{}. The spreading of the probability distribution is diffusive, i.e. $\sigma\sim\sqrt{t}$. The probability in the $\sigma$ neighborhood of the mean value $\langle x\rangle$ behaves like $t^{-\frac{1}{2}}$ while outside this neighborhood the probability decays exponentially. Therefore, for the random walk to be recurrent the origin must lie in this $\sigma$ neighborhood for all times $t$. However, if the random walk is biased the mean value of the position $\langle x\rangle$ varies linearly in time, which is a faster process than the spreading of the probability distribution. In such a case the origin would lie outside the $\sigma$ neighborhood of the mean value after a finite number of steps, leading to the exponential asymptotic decay of the probability at the origin $p_0(t)$. Hence, the random walk is recurrent if and only if the mean value of the position equals zero. Since the individual steps are independent of each other, the mean value after $t$ steps is simply a $t$ multiple of the mean value after a single step, i.e. $$\langle x (t)\rangle = t \langle x(1)\rangle = t\left[p(r+1)-1\right].$$ We find that the mean value equals zero if and only if the condition (\[rw:cond\]) holds.
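The condition (\[rw:cond\]) can also be checked numerically. The sketch below (our own illustration; the log-space evaluation via `lgamma` is a choice made to avoid overflow of the binomial coefficient) computes $p_0(t)$ for $r=2$ at the recurrent value $p=1/(r+1)$ and at a transient value $p=0.45$:

```python
from math import lgamma, log, exp, sqrt, pi

def p0_biased(t, r, p):
    """Probability of returning to the origin after t steps of the biased walk.
    t must be a multiple of r + 1: the walk then contains t/(r+1) jumps of
    length r to the right and t*r/(r+1) unit steps to the left."""
    k = t // (r + 1)                      # number of jumps to the right
    logp = (t - k) * log(1.0 - p) + k * log(p) \
        + lgamma(t + 1) - lgamma(t - k + 1) - lgamma(k + 1)
    return exp(logp)

r = 2
for t in (30, 300, 3000):
    rec = p0_biased(t, r, 1.0 / (r + 1))  # p = 1/(r+1): recurrent
    tra = p0_biased(t, r, 0.45)           # any other p: transient
    print(t, rec * sqrt(t), tra)          # rec*sqrt(t) -> (r+1)/sqrt(2*pi*r)
```

At $p=1/(r+1)$ the rescaled probability $p_0(t)\sqrt{t}$ levels off at $(r+1)/\sqrt{2\pi r}$, the $t^{-1/2}$ decay of a recurrent walk, while for any other $p$ the probability collapses exponentially.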
![Spreading of the probability distribution versus the motion of the mean value of a biased classical random walk on a line. While the spreading is diffusive ($\sigma\sim\sqrt{t}$) the mean value propagates with a constant velocity ($\langle x\rangle\sim t$). The probability inside the $\sigma$ neighborhood of the mean value $\langle x\rangle$ behaves like $t^{-\frac{1}{2}}$. On the other hand, outside the $\sigma$ neighborhood the decay is exponential. Hence, if the mean value $\langle x\rangle$ does not vanish the origin of the walk leaves the $\sigma$ neighborhood of the mean value. In such a case the probability at the origin decays exponentially and the walk is transient.[]{data-label="fig6"}](biased_fig6.eps){width="60.00000%"}
Recurrence Criterion for Quantum Walks {#app:b}
======================================
Let us prove that the recurrence criterion for quantum walks is the same as for random walks, i.e. the Pólya number equals one if and only if the series $${\cal S} \equiv \sum_{t=0}^{\infty}p_0(t)$$ diverges.
According to the definition of the Pólya number Eq. (\[polya:def\]) for quantum walks we have to prove the equivalence $$\overline{P}\equiv\prod\limits_{t=1}^{+\infty}\left(1-p_0(t)\right) = 0 \Longleftrightarrow {\cal S}=+\infty.$$ We note that the convergence of both the sum ${\cal S}$ and the product $\overline{P}$ is unaffected if we omit a finite number of terms.
Let us first consider the case when the sequence $p_0(t)$ converges to a non-zero value $0<a\leq 1$. Obviously, in such a case the series ${\cal S}$ is divergent. Since $p_0(t)$ converges to $a$ we can find for any $\varepsilon>0$ some $t_0$ such that for all $t>t_0$ the inequalities $$1-a-\varepsilon\leq 1-p_0(t)\leq 1-a+\varepsilon$$ hold. Hence, we can bound the infinite product $$\lim\limits_{t\rightarrow +\infty}\left(1-a-\varepsilon\right)^t\leq\overline{P}\leq\lim\limits_{t\rightarrow +\infty} \left(1-a+\varepsilon\right)^t.
\label{app2:eq1}$$ Since we can choose $\varepsilon$ such that $$\left|1-a\pm\varepsilon\right|<1,$$ we find that the limits on both the left-hand side and the right-hand side of Eq. (\[app2:eq1\]) equal zero. Hence, the product $\overline{P}$ vanishes.
We turn to the case when $p_0(t)$ converges to zero. We denote the partial product $$\overline{P}_n = \prod\limits_{t=1}^n(1-p_0(t)).$$ Since $1-p_0(t)>0$ for all $t\geq 1$ we can consider the logarithm $$\ln{\overline{P}_n} = \sum\limits_{t=1}^n\ln\left(1-p_0(t)\right)
\label{app3}$$ and rewrite the infinite product as a limit $$\overline{P} = \lim\limits_{n\rightarrow +\infty}e^{\ln{\overline{P}_n}}
\label{app2}.$$ Since $p_0(t)$ converges to zero we can find some $t_0$ such that for all $t>t_0$ the value of $p_0(t)$ is less than or equal to $1/2$. With the help of the inequality $$-2x\leq\ln\left(1-x\right)\leq -x$$ valid for $x\in\left[0,1/2\right]$ we find the following bounds $$-2\sum\limits_{t=1}^n p_0(t)\leq\ln\overline{P}_n\leq -\sum_{t=1}^n p_0(t).$$ Hence, if the series ${\cal S}$ is divergent the limit of the sequence ${\left(\ln{\overline{P}_n}\right)}^\infty_{n=1}$ is $-\infty$ and according to Eq. (\[app2\]) the product $\overline{P}$ vanishes. If, on the other hand, the series ${\cal S}$ converges the sequence ${\left(\ln{\overline{P}_n}\right)}^\infty_{n=1}$ is bounded. According to Eq. (\[app3\]) the partial sums of the series $\sum\limits_{t=1}^{+\infty}\ln\left(1-p_0(t)\right)$ are bounded and since it is a series with strictly negative terms it converges to some negative value $b<0$. Consequently, the sequence ${\left(\ln{\overline{P}_n}\right)}^\infty_{n=1}$ converges to $b$ and according to Eq. (\[app2\]) the product equals $$\overline{P} = e^b>0.$$ This completes our proof.
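The dichotomy proved above can be illustrated with two model sequences whose partial products happen to telescope in closed form (an illustrative addition, not part of the proof): for $p_0(t)=1/(t+1)$ the sum diverges and $\prod(1-p_0(t))=1/(T+1)\rightarrow 0$, whereas for $p_0(t)=1/(t+1)^2$ the sum converges and $\prod(1-p_0(t))=(T+2)/(2(T+1))\rightarrow 1/2$:

```python
def survival_product(p0, T):
    """prod_{t=1}^{T} (1 - p0(t)); the Polya number is one minus its limit."""
    prod = 1.0
    for t in range(1, T + 1):
        prod *= 1.0 - p0(t)
    return prod

slow = lambda t: 1.0 / (t + 1)          # sum diverges  -> product -> 0
fast = lambda t: 1.0 / (t + 1) ** 2     # sum converges -> product -> 1/2

for T in (10, 1000, 100000):
    print(T, survival_product(slow, T), survival_product(fast, T))
```

The first column of products tends to zero (a "recurrent" sequence), while the second levels off at a positive value (a "transient" one), exactly as the criterion predicts.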
Method of Stationary Phase {#app:c}
==========================
In order to determine the recurrence nature of a quantum walk one has to analyze the asymptotic behaviour of the probability at the origin. As we have shown in Section \[chap:2d\] the probability amplitude of the particle being at the origin of the quantum walk after $t$ steps is given by a sum of integrals of the form $$I(t) = \int\limits_V e^{i\ \omega(\mathbf{k}) t} f(\mathbf{k})d\mathbf{k}.
\label{app:c:eq1}$$ The recurrence of a quantum walk is determined by the asymptotics of such integrals. The method of stationary phase is a suitable tool for such analysis.
In the following we briefly review the main concepts of the method of stationary phase. First, we treat the one-dimensional integrals. Then we turn to the multivariate integrals. We find that the crucial contribution to the integral (\[app:c:eq1\]) as $t$ approaches infinity arises from the [*stationary points*]{}, i.e. the points where the derivative of the phase $\omega(\mathbf{k})$ vanishes. We discuss how the number of stationary points and the “flatness” of the phase at the stationary point influence the asymptotic behaviour of the integral $I(t)$. For a more comprehensive analysis we refer to the literature [@statphase2; @statphase].
One-dimensional integrals {#app:c1}
-------------------------
Let us begin with the one-dimensional integral of the form $$I(t) = \int\limits_a^b e^{i\ \omega(k) t} f(k) dk,
\label{app:c:eq2}$$ where $f$ and $\omega$ are smooth functions and $\omega$ is real-valued. We see that in the region of $k$ where $\omega(k)$ changes considerably the exponential $e^{i\ \omega(k) t}$ oscillates rapidly as $t$ approaches infinity. Assuming that the function $f$ is slowly varying compared to these rapid oscillations we find that this region of integration does not contribute significantly to the integral $I(t)$. Obviously, the most important contributions to the integral (\[app:c:eq2\]) arise from the regions where the oscillations of the exponential are least rapid, which occur precisely at the stationary points $k_0$ of the phase $\omega$ $$\omega'(k_0) = \left.\frac{d\omega}{dk}\right|_{k_0} = 0.$$ The “flatness” of the phase at the stationary point determines the order of this contribution: the more derivatives of the phase vanish at the stationary point, the slower the contribution decays as $t$ approaches infinity. Here we assume that the function $f$ is non-zero at the stationary point, otherwise the contribution to the integral $I(t)$ vanishes.
### No stationary points {#no-stationary-points}
Let us first consider the case when the phase $\omega$ has no stationary points inside the integration domain. Then there exists $\varepsilon>0$ such that $$\left|\omega'(k)\right|>\varepsilon$$ for all $k$. Performing the integration in (\[app:c:eq2\]) by parts with $$\begin{aligned}
\nonumber u(k) & = & \frac{f(k)}{i t \omega'(k)}, \quad v'(k) = i t \omega'(k) e^{i\ \omega(k) t},\\
\nonumber u'(k) & = & \frac{1}{i t} \frac{f'(k)\omega'(k)-f(k)\omega''(k)}{\omega'(k)^2}, \quad v(k) = e^{i\ \omega(k) t},\end{aligned}$$ we find that $I(t)$ can be expressed in the form $$I(t) = \frac{1}{i t}\left[\frac{f(k)}{\omega'(k)}e^{i\ \omega(k)t}\right]_a^b-\frac{1}{it}\int\limits_a^b e^{i\ \omega(k)t}\frac{f'(k)\omega'(k)-f(k)\omega''(k)}{\omega'(k)^2}dk.
\label{app:c:eq3}$$ We see that $I(t)$ decays at least like $t^{-1}$ as $t$ approaches infinity. Moreover, the second term in (\[app:c:eq3\]) has the same form as the original integral (\[app:c:eq2\]). Hence, if in addition the first term in (\[app:c:eq3\]) vanishes, e.g. if the function $f$ equals zero at the boundaries of the integration domain, we find by repeated integration by parts that $I(t)$ decays faster than any inverse polynomial in $t$.
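These decay rates can be probed numerically. A sketch of our own (the phase $\omega(k)=k$ and the test functions are illustrative assumptions): for $f\equiv 1$ the boundary term makes $|I(t)|$ decay like $t^{-1}$, while for $f(k)=\sin^2(\pi k)$, whose value and first derivative vanish at both endpoints, the decay is markedly faster.

```python
import cmath, math

def oscillatory_integral(t, f, n=200001):
    """Composite Simpson rule for I(t) = int_0^1 exp(i*k*t) f(k) dk."""
    h = 1.0 / (n - 1)
    total = 0j
    for i in range(n):
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        total += w * cmath.exp(1j * i * h * t) * f(i * h)
    return total * h / 3

t = 100.0
print(abs(oscillatory_integral(t, lambda k: 1.0)) * t)   # stays bounded (here ~0.5)
print(abs(oscillatory_integral(t, lambda k: math.sin(math.pi * k) ** 2)))  # ~1e-5
```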
### First-order stationary points
We turn to the case of $\omega$ having a single stationary point coinciding with the left endpoint of the interval $k_0=a$ (any integral where the phase has more than one stationary point can be decomposed into a sum of such integrals) and assume that the stationary point is of the first order, i.e. $\omega'(a)=0$ but $\omega''(a)\neq 0$. We then expand the phase into a Taylor series $$\omega(k) \simeq \omega(a) + \frac{\omega''(a)}{2}(k-a)^2$$ around the stationary point $k_0=a$. Since we assume that the function $f$ is slowly varying we put it equal to its value at the stationary point $f(k)\approx f(a)$. With these approximations we find $$I(t) \simeq f(a)e^{i\ \omega(a)t}\int\limits_a^b e^{i\frac{\omega''(a)}{2}(k-a)^2 t} dk.
\label{app:c:eq4}$$ Let us estimate the remaining integral. We substitute for $y=k-a$ and extend the integration domain to $[0,+\infty)$ $$\int\limits_a^b e^{i\frac{\omega''(a)}{2}(k-a)^2 t} dk = \int\limits_{0}^{b-a} e^{i\frac{\omega''(a)}{2} y^2 t} dy \approx \int\limits_{0}^{+\infty} e^{i\frac{\omega''(a)}{2} y^2 t} dy.$$ This is the familiar Fresnel integral $$F(\gamma) = \int\limits_{0}^{+\infty} e^{i\gamma y^2} dy = \frac{\Gamma\left(\frac{1}{2}\right)}{2}|\gamma|^{-\frac{1}{2}} e^{i {\rm sign}\gamma\frac{\pi}{4}}.
\label{app:c:fresnel}$$ Inserting this result into the expression (\[app:c:eq4\]) we finally arrive at the formula $$I(t) \simeq \left[f(a)\frac{\Gamma\left(\frac{1}{2}\right)}{2}e^{i\ \omega(a)t\pm i\frac{\pi}{4}}\left(\frac{2}{|\omega''(a)|}\right)^\frac{1}{2}\right] t^{-\frac{1}{2}}
\label{app:c:eq5}$$ describing the behaviour of the integral $I(t)$ for large values of $t$. The plus (minus) sign in the exponential in (\[app:c:eq5\]) corresponds to the second derivative of the phase at the stationary point $\omega''(a)$ being positive (negative). To conclude, we find that if the phase $\omega(k)$ has a stationary point of the first order the integral $I(t)$ decays like $t^{-1/2}$ as $t$ approaches infinity.
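The $t^{-1/2}$ law (\[app:c:eq5\]) can be verified directly. With the illustrative choice $\omega(k)=k^2$, $f\equiv 1$ on $[0,1]$ (a first-order stationary point at $k_0=0$ with $\omega''(0)=2$) the prediction reads $|I(t)|\simeq\tfrac{\sqrt{\pi}}{2}\,t^{-1/2}$; a numerical sketch of our own:

```python
import cmath, math

def I(t, n=400001):
    """Composite Simpson rule for int_0^1 exp(i*t*k^2) dk."""
    h = 1.0 / (n - 1)
    total = 0j
    for i in range(n):
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        total += w * cmath.exp(1j * t * (i * h) ** 2)
    return total * h / 3

t = 500.0
print(abs(I(t)), math.sqrt(math.pi) / (2 * math.sqrt(t)))
# the two values agree up to the O(1/t) boundary contribution at k = 1
```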
### Higher-order stationary points
We close this section by the analysis of the integral $I(t)$ when the phase $\omega$ has a stationary point $k_0=a$ of the order of $p-1$, i.e. $$\omega'(a) = \omega''(a) = \ldots = \omega^{(p-1)}(a) = 0,\quad \omega^{(p)}(a)\neq 0.$$ In such a case the Taylor expansion of the phase reads $$\omega(k) \simeq \omega(a) + \frac{\omega^{(p)}(a)}{p!}(k-a)^p.$$ Performing similar approximations as above we find $$I(t) \simeq f(a)e^{i\ \omega(a)t}\int\limits_a^b e^{i\frac{\omega^{(p)}(a)}{p!}(k-a)^p t} dk.
\label{app:c:eq6}$$ In the remaining integral we substitute for $y = k-a$ and extend the upper limit of the integration to $+\infty$ $$\int\limits_a^b e^{i\frac{\omega^{(p)}(a)}{p!}(k-a)^p t} dk = \int\limits_{0}^{b-a} e^{i\frac{\omega^{(p)}(a)}{p!}y^p t} dy \approx \int\limits_{0}^{+\infty} e^{i\frac{\omega^{(p)}(a)}{p!}y^p t} dy.$$ We find a generalization of the Fresnel integral (\[app:c:fresnel\]) which is readily evaluated $$F_p(\gamma) = \int\limits_{0}^{+\infty} e^{i\gamma y^p} dy = \frac{\Gamma\left(\frac{1}{p}\right)}{p}|\gamma|^{-\frac{1}{p}} e^{i {\rm sign}\gamma\frac{\pi}{2p}}.
\label{app:c:fresnel:p}$$ Finally, inserting this result into the Eq. (\[app:c:eq6\]) we arrive at the estimation $$I(t) \simeq \left[f(a)\frac{\Gamma\left(\frac{1}{p}\right)}{p}e^{i\ \omega(a)t\pm i\frac{\pi}{2p}}\left(\frac{p!}{|\omega^{(p)}(a)|}\right)^\frac{1}{p}\right] t^{-\frac{1}{p}},
\label{app:c:eq7}$$ where the plus (minus) sign corresponds to positive (negative) value of $\omega^{(p)}(a)$. From (\[app:c:eq7\]) we find that the contribution of the stationary point of the order $p-1$ to the integral $I(t)$ behaves like $t^{-1/p}$ as $t$ approaches infinity. The flatness of the phase at the stationary point reduces the rate at which the integral $I(t)$ decays.
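The generalized Fresnel formula (\[app:c:fresnel:p\]) can be checked by rotating the integration contour: for $\gamma>0$ the substitution $y=e^{i\pi/(2p)}s$ turns $F_p(\gamma)$ into $e^{i\pi/(2p)}\int_0^{+\infty}e^{-\gamma s^p}ds$, so its modulus becomes a real, rapidly decaying integral. A sketch of our own (the step size and cutoff are ad hoc choices):

```python
import math

def F_p_modulus(p, gamma=1.0, n=200000, upper=30.0):
    """Trapezoidal rule for int_0^inf exp(-gamma*s^p) ds = |F_p(gamma)|."""
    h = upper / n
    total = 0.5 * (1.0 + math.exp(-gamma * upper ** p))
    for i in range(1, n):
        total += math.exp(-gamma * (i * h) ** p)
    return total * h

for p in (2, 3, 4):
    print(p, F_p_modulus(p), math.gamma(1.0 / p) / p)  # the two columns agree
```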
Multivariate integrals {#app:c2}
----------------------
We turn to the asymptotic analysis of the multidimensional integrals of the form $$I(t) = \int\limits_V e^{i\ \omega(\mathbf{k}) t} f(\mathbf{k})d\mathbf{k}.
\label{app:c2:eq1}$$ We assume that both functions $\omega(\mathbf{k})$ and $f(\mathbf{k})$ are smooth and $\omega$ is real-valued. Similarly to the one-dimensional case, the main contribution to the integral arises from the stationary points of the phase $\omega$, i.e. points $\mathbf{k}_0$ where the gradient of $\omega$ vanishes $$\left.\nabla \omega(\mathbf{k})\right|_{\mathbf{k}=\mathbf{k}_0} = \mathbf{0}.$$ As in the previous Section we approximate the phase around the stationary point by the Taylor expansion. In addition, we have to change the coordinates in such a way that the resulting integral factorizes into a product of one-dimensional integrals. Each of the one-dimensional integrals can then be estimated by the methods of the previous Section.
In the following we review the main results of the asymptotics of (\[app:c2:eq1\]) in dependence on the properties of the phase $\omega(\mathbf{k})$. For a more detailed analysis we refer to the literature [@statphase2; @statphase].
### No stationary points {#app:c2a}
Let us begin with the case when the gradient of $\omega$ is non-vanishing inside the integration domain $V$. From the divergence theorem we find $$I(t) = -\frac{i}{t}\int\limits_{\partial V} (\mathbf{u}\cdot\mathbf{n})e^{i\ \omega t}ds + \frac{i}{t}\int\limits_V (\nabla\cdot\mathbf{u})e^{i\ \omega t}d\mathbf{k},
\label{app:c2a:eq1}$$ where $\partial V$ is the boundary of $V$, $\mathbf{n}$ is the unit vector normal to the boundary and the vector function $\mathbf{u}(\mathbf{k})$ is given by $$\mathbf{u}(\mathbf{k}) = \frac{\nabla\omega(\mathbf{k})}{|\nabla\omega(\mathbf{k})|^2} f(\mathbf{k}).$$ The expression (\[app:c2a:eq1\]) indicates that $I(t)$ decays at least like $t^{-1}$. Suppose that the function $f(\mathbf{k})$ vanishes smoothly on the boundary of $V$. In such a case the contour integral in (\[app:c2a:eq1\]) equals zero. The remaining volume integral in (\[app:c2a:eq1\]) is of the same kind as the original integral $I(t)$. Hence, by repeating the same procedure as above we find that the integral $I(t)$ decays faster than any inverse polynomial in $t$.
### Non-degenerate stationary points {#app:c2b}
We turn to the case when the phase $\omega(\mathbf{k})$ has a single stationary point $\mathbf{k}_0$ inside the integration domain. We assume that $\mathbf{k}_0$ is non-degenerate, i.e. the Hessian matrix evaluated at the stationary point $$H_{ij}(\mathbf{k_0}) = \left.\left(\frac{\partial^2\omega}{\partial k_i\partial k_j}\right)\right|_{\mathbf{k}=\mathbf{k}_0}
\label{app:c2b:hessian}$$ is regular. We expand the phase around the stationary point into the second order $$\omega(\mathbf{k}) \simeq \omega(\mathbf{k}_0) + \frac{1}{2}\sum_{i,j} \left(k_i - {k_0}_i\right) H_{i,j}(\mathbf{k}_0) \left(k_j - {k_0}_j\right).$$ Assuming that $f(\mathbf{k})$ is slowly varying we can evaluate it at the stationary point and extract it from the integral. Substituting for $$\bm{\kappa} = \mathbf{k}-\mathbf{k}_0$$ and extending the integration from the finite volume $V$ to $\mathds{R}^n$ we arrive at the following estimation of the integral (\[app:c2:eq1\]) $$I(t) \simeq f(\mathbf{k}_0) e^{i\ \omega(\mathbf{k}_0) t} \int\limits_{\mathds{R}^n} \exp\left(\frac{i}{2}\sum_{i,j} \kappa_i H_{ij}(\mathbf{k}_0) \kappa_j t \right) d\bm{\kappa}.
\label{app:c2b:eq1}$$ The integral in (\[app:c2b:eq1\]) can be reduced into the product of $n$ one-dimensional Fresnel integrals (\[app:c:fresnel\]). Indeed, the Hessian matrix (\[app:c2b:hessian\]) is real and symmetric since we assumed $\omega(\mathbf{k})$ to be smooth. Hence, it can be diagonalized with the help of the orthogonal matrix $O$. In the new coordinate system $$\mu_i = \sum_j O_{ij}\kappa_j
\label{app:c2b:coor}$$ the bilinear form in (\[app:c2b:eq1\]) is given by the sum of purely quadratic terms $$\sum_{i,j} \kappa_i H_{ij}(\mathbf{k}_0) \kappa_j = \sum_i \lambda_i(\mathbf{k}_0) \mu_i^2,$$ where $\lambda_i(\mathbf{k}_0)$ are eigenvalues of the Hessian matrix (\[app:c2b:hessian\]) at the stationary point $\mathbf{k}_0$. Since the matrix $O$ is orthogonal the change of coordinates (\[app:c2b:coor\]) has a unit Jacobian. Hence, using the substitution (\[app:c2b:coor\]) we decompose the integral in (\[app:c2b:eq1\]) into the product of one-dimensional Fresnel integrals $$\int\limits_{\mathds{R}^n} \exp\left(\frac{i}{2}\sum_{i,j} \kappa_i H_{ij}(\mathbf{k}_0) \kappa_j t \right) d\bm{\kappa} = \prod\limits_{j=1}^n\int\limits_\mathds{R} \exp\left(\frac{i}{2}\lambda_j(\mathbf{k}_0) \mu_j^2 t\right) d\mu_j,$$ which are readily evaluated with the help of (\[app:c:fresnel\]). Finally, we arrive at the following approximation of the integral (\[app:c2:eq1\]) $$I(t) \simeq \left[f(\mathbf{k}_0) e^{i\ \omega(\mathbf{k}_0) t + i \nu(\mathbf{k}_0)\frac{\pi}{4}} \sqrt{\frac{(2\pi)^n}{\left|\det H_{ij}(\mathbf{k}_0)\right|}}\right] t^{-\frac{n}{2}},
\label{app:c2b:eq2}$$ where $\nu(\mathbf{k}_0)$ is the sum of the signs of the eigenvalues of the Hessian matrix $$\nu(\mathbf{k}_0) = \sum_j {\rm sign} \lambda_j(\mathbf{k}_0).$$ We find that contribution from the non-degenerate stationary points to the $n$-dimensional integral (\[app:c2:eq1\]) is of the order of $t^{-n/2}$.
### Continuum of stationary points {#app:c2c}
We close this Appendix by briefly discussing the asymptotic scaling of the integral (\[app:c2:eq1\]) when the phase $\omega(\mathbf{k})$ has a curve of stationary points $\gamma$, i.e. $$\forall\mathbf{k}\in\gamma\qquad \nabla\omega(\mathbf{k}) = \mathbf{0}.$$ Without loss of generality we assume that $\omega(\mathbf{k})=0$ at the stationary curve $\gamma$ which is considered to be smooth and without any loops. Moreover, we restrict ourselves to two-dimensional integrals, i.e. $n = 2$. As shown in [@statphase], Chapter VIII.9, the main contribution of the continuum of stationary points to the asymptotic expansion of the integral (\[app:c2:eq1\]) is $$I(t) \simeq \left[\sqrt{2\pi}e^{i\frac{\pi}{4}}\int\limits_\gamma \frac{f\left(k_1(s),k_2(s)\right)}{\sqrt{\frac{\partial^2\omega}{\partial k_1^2}+\frac{\partial^2\omega}{\partial k_2^2}}}ds\right] t^{-\frac{1}{2}},$$ where $s$ is the parametrization of the curve $\gamma$. We find that in comparison with the case of the isolated non-degenerate stationary point analyzed in Section \[app:c2b\] the continuum of stationary points slows the decay of the integral $I(t)$ by a factor of $\sqrt{t}$: we obtain $t^{-1/2}$ instead of $t^{-1}$.
Meeting Problem {#app:d}
===============
In this Appendix we analyze the meeting problem in classical and quantum walk. We derive analytical formulas for the asymptotic behaviour of the meeting probability.
Meeting problem in the classical random walk {#app:d1}
--------------------------------------------
Let us define the meeting problem on the classical level. We assume two particles which in each step of the process can perform randomly a step to the left or to the right on a one-dimensional lattice labelled by integers. The initial distance between the two particles is $2d$, since for an odd initial distance the two particles never meet. Due to the translational invariance we can assume that one particle starts at the origin and the other one at the vertex $2d$. We assume complete randomness, i.e. the probabilities for the step right or left are equal. We ask for the probability that the two particles meet after $t$ steps, either at a certain position $m$ or in total (the sum of the probabilities over all possible positions). A simple analysis reveals that the probability to meet at a certain position $m$ equals $$M_{cl}(t,m,d) = \frac{1}{2^{2t}} {t\choose \frac{t+m}{2}}
{t\choose \frac{t+m-2d}{2}}.$$ The total probability that the two particles are reunited after $t$ steps reads $$M_{cl}(t,d) = \sum\limits^t_{m=2d-t}\frac{1}{2^{2t}} {t\choose
\frac{t+m}{2}} {t\choose \frac{t+m-2d}{2}},$$ which simplifies to $$\label{a1}
M_{cl}(t,d) = \frac{1}{2^{2t}}{2t\choose t+d}.$$ To obtain the asymptotic behavior of the meeting probability we approximate the single-particle probability distribution by a Gaussian $$P_{cl}(x,t,d) = \frac{1}{\sqrt{\pi t}}\exp\left(-\frac{(x-2d)^2}{2t}\right),$$ which leads to the following estimate on the meeting probability $$M_{cl}(t,d) \approx \int\limits_{-\infty}^{+\infty}P_{cl}(x,t,0)P_{cl}(x,t,d)dx =
\frac{1}{\sqrt{\pi t}} \exp\left(-\frac{d^2}{t}\right).$$ Finally, for a fixed initial distance $d$ we get the long-time approximation for $t\gg d^2$ $$M_{cl}(t,d)\approx \frac{1}{\sqrt{\pi t}}\left(1-\frac{d^2}{t}\right).$$
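The exact expression (\[a1\]) and its Gaussian long-time approximation can be compared directly; a short sketch of our own, with the binomial evaluated at $t+d$:

```python
import math

def meeting_exact(t, d):
    """Exact total meeting probability, C(2t, t+d) / 4^t."""
    return math.comb(2 * t, t + d) / 4 ** t

def meeting_asympt(t, d):
    """Long-time Gaussian estimate exp(-d^2/t) / sqrt(pi*t)."""
    return math.exp(-d * d / t) / math.sqrt(math.pi * t)

t, d = 1000, 5
print(meeting_exact(t, d), meeting_asympt(t, d))  # agree to a fraction of a percent
```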
Meeting problem in the quantum walk {#app:d2}
-----------------------------------
Let us derive analytical formulas for the meeting probabilities in the quantum case. We consider the following initial states:
([*i*]{}) right for the first particle and left for the second $$\psi_{RL}(0,2d,0) = 1 ,$$
([*ii*]{}) symmetric initial conditions $1/\sqrt{2}(|L\rangle+i|R\rangle)$ for both $$\psi(0,2d,0) = \frac{1}{2}\left( \begin{array}{c}
1 \\
i \\
i \\
-1
\end{array}\right) ,$$
([*iii*]{}) left for the first particle and right for the second $$\psi_{LR}(0,2d,0) = 1.$$
For $t\geq\sqrt{2}d$ we consider the slowly varying part of the single particle probability distribution derived in [@nayak] which has the form $$P^{(L,R)}_{slow}(x,t) = \frac{2}{\pi t\left(1\pm\frac{x}{t}\right)\sqrt{1-\frac{2x^2}{t^2}}},$$ if the initial coin state was $|L\rangle$ or $|R\rangle$, while for the symmetric initial condition it reads $$P^{(S)}_{slow}(x,t) = \frac{1}{2}\left(P^{(L)}_{slow}(x,t)+P^{(R)}_{slow}(x,t)\right) = \frac{2}{\pi t\left(1-\frac{x^2}{t^2}\right)\sqrt{1-\frac{2x^2}{t^2}}}.$$ We then estimate the sums in (\[mq\]) defining the meeting probabilities by integrals $$\begin{aligned}
\label{M1}
\nonumber M_{RL}(t,d) & \approx & \frac{2}{\pi^2 t^2}\int\limits_{2d-\frac{t}{\sqrt{2}}}^{\frac{t}{\sqrt{2}}}\frac{dx}{(1-\frac{x}{t})(1+\frac{x-2d}{t})\sqrt{1-2\frac{x^2}{t^2}}\sqrt{1-2\frac{(x-2d)^2}{t^2}}}\\\nonumber\\
\nonumber M_{S}(t,d) & \approx & \frac{2}{\pi^2 t^2}\int\limits_{2d-\frac{t}{\sqrt{2}}}^{\frac{t}{\sqrt{2}}}\frac{dx}{(1-\frac{x^2}{t^2})(1-\frac{(x-2d)^2}{t^2})\sqrt{1-2\frac{x^2}{t^2}}\sqrt{1-2\frac{(x-2d)^2}{t^2}}}\\\nonumber\\
M_{LR}(t,d) & \approx & \frac{2}{\pi^2
t^2}\int\limits_{2d-\frac{t}{\sqrt{2}}}^{\frac{t}{\sqrt{2}}}\frac{dx}{(1+\frac{x}{t})(1-\frac{x-2d}{t})\sqrt{1-2\frac{x^2}{t^2}}\sqrt{1-2\frac{(x-2d)^2}{t^2}}}\end{aligned}$$ which can be evaluated in terms of elliptic integrals. Notice that the integrals diverge for $d=0$, i.e. for the case when the two particles start at the same point. For now we suppose that $d>0$. The formulas (\[M1\]) can be expressed in the form $$\begin{aligned}
\label{Mq2}
\nonumber M_{RL}(t,d) & \approx & F_+\left\{2(t-d)(t-(4-2\sqrt{2})d)K(a)+\frac{}{}\right.\\
\nonumber & & \left.+\sqrt{2}\left((t-(4+2\sqrt{2})d)(t-(4-2\sqrt{2})d)\Pi(b_+|a)-t^2\Pi(c_+|a)\frac{}{}\right)\right\}\\
\nonumber \\
\nonumber M_{S}(t,d) & \approx & \frac{ \pi^2 F_+F_-}{4}\left\{
16d(t^2-d^2)(t+(4+2\sqrt{2})d)(t-(4-2\sqrt{2})d)K(a)+\frac{}{}\right.\\
\nonumber & & +\sqrt{2}(t+(4+2\sqrt{2})d)(t-(4+2\sqrt{2})d)(t+(4-2\sqrt{2})d)\times \\
\nonumber & & \times(t-(4-2\sqrt{2})d)\left((t+d)\Pi(b_+|a)+(t-d)\Pi(b_-|a)\frac{}{}\right)-\\
\nonumber & & -\sqrt{2}t^2\left((t+d)(t+(4+2\sqrt{2})d)(t+(4-2\sqrt{2})d)\Pi(c_+|a)+\frac{}{}\right.\\
\nonumber & & \left.\left.+(t-d)(t-(4+2\sqrt{2})d)(t-(4-2\sqrt{2})d)\Pi(c_-|a)\frac{}{}\right)\right\}\\
\nonumber \\
\nonumber M_{LR}(t,d) & \approx &
F_-\left\{2(t+d)(t+(4+2\sqrt{2})d)K(a)-\frac{}{}\right.\\
\nonumber & & \left.-\sqrt{2}\left((t+(4+2\sqrt{2})d)(t+(4-2\sqrt{2})d)\Pi(b_-|a)-t^2\Pi(c_-|a)\frac{}{}\right)\right\}.\\\end{aligned}$$ Here $K(a)$ is the complete elliptic integral of the first kind and $\Pi(x|a),\Pi(y|a)$ are the complete elliptic integrals of the third kind (see e.g. [@abramowitzstegun], chapter 17). The coefficients $a,b_\pm,c_\pm$ and $F_\pm$ are given by $$\begin{aligned}
\nonumber F_\pm & = & \frac{2t}{\pi^2 d(t\mp
d)(t(2+\sqrt{2})\mp 4d)(t(2-\sqrt{2})\mp 4d))} \\
\nonumber a & = & i\sqrt{\frac{t^2}{2d^2}-1} \\
\nonumber b_\pm & = & \frac{(1\pm \sqrt{2})(t-\sqrt{2}d)}{d(\sqrt{2}\mp 2)}\\
\nonumber c_\pm & = & \frac{(t(\sqrt{2}\mp
2)+4d)(t-\sqrt{2}d)}{\sqrt{2}d(t(\sqrt{2}\pm 2)-4d)}.\end{aligned}$$
Let us analyze the asymptotic behavior of the meeting probability. We begin with the observation that the coefficients of the highest power of $t$ multiplying the elliptic integrals of the third kind are the same but of opposite sign for $\Pi(b|a)$ and $\Pi(c|a)$. Moreover, $b_\pm$ and $c_\pm$ go like $-t$ as $t$ approaches infinity, and thus all of the $\Pi$ functions have the same asymptotic behavior. Due to the opposite sign for $\Pi(b|a)$ and $\Pi(c|a)$ the leading-order terms cancel and the contribution from this part to the meeting probability is of higher order in $1/t$ compared to the contribution from the complete elliptic integral of the first kind $K(a)$. The asymptotics of the function $K(a)$ is given by $$K(a)\approx
\frac{d\sqrt{2}\ln{\left(\frac{2\sqrt{2}t}{d}\right)}}{t}.$$ Inserting this approximation into (\[Mq2\]) we find that the leading order term of the meeting probability in all the three studied situations is given by $$M_{D}(t,d) \sim \frac{\ln{\left(\frac{2\sqrt{2}t}{d}\right)}}{t}.$$
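The slow $\ln t/t$ decay can also be observed in a direct simulation. The following sketch is our own: two independent Hadamard walkers are evolved (case ([*i*]{}) above, coin $|R\rangle$ at the origin and $|L\rangle$ at $2d$) and the meeting probability is obtained by summing the products of their position distributions.

```python
import math

def hadamard_walk(steps, start, coin):
    """Position distribution of a Hadamard walk after `steps` steps;
    `coin` = (amplitude of |L>, amplitude of |R>) at position `start`."""
    L, R = {start: complex(coin[0])}, {start: complex(coin[1])}
    s = 1.0 / math.sqrt(2.0)
    for _ in range(steps):
        nL, nR = {}, {}
        for x in set(L) | set(R):
            aL, aR = L.get(x, 0j), R.get(x, 0j)
            # Hadamard coin, then conditional shift: |L> moves left, |R> right
            nL[x - 1] = nL.get(x - 1, 0j) + s * (aL + aR)
            nR[x + 1] = nR.get(x + 1, 0j) + s * (aL - aR)
        L, R = nL, nR
    return {x: abs(L.get(x, 0j)) ** 2 + abs(R.get(x, 0j)) ** 2
            for x in set(L) | set(R)}

def meeting_probability(t, d):
    p1 = hadamard_walk(t, 0, (0, 1))        # first particle, coin |R>
    p2 = hadamard_walk(t, 2 * d, (1, 0))    # second particle, coin |L>
    return sum(p * p2.get(x, 0.0) for x, p in p1.items())

print(meeting_probability(128, 1), meeting_probability(512, 1))
```

The meeting probability decays much more slowly than the classical $t^{-1/2}$ law would suggest for a single quantity of this kind, in line with the $\ln t/t$ estimate above.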
\[part:2\]
\[chap8\]
Factorization of integers is a famous NP problem [@Wegener; @Mertens] and the difficulty of decomposing a number into prime factors lies at the heart of several encryption schemes [@RSA; @Menezes]. However, Peter Shor found [@shor] that a quantum computer is capable of finding factors of a given number efficiently. The fundamental advantage of Shor’s algorithm compared to the classical algorithms is the massive use of quantum parallelism and entanglement. On the other hand, the physical realizations of Shor’s algorithm are very challenging and are so far limited to a proof-of-principle experiment [@vandersypen].
Recently, several schemes for integer factorization based on Gauss sums [@lang:1970; @davenport:1980; @schleich:2005:primes] were proposed [@clauser:1996; @harter:2001; @harter:2001b; @mack:2002; @mack:proc; @merkel:ijmpb:2006; @merkel:FP; @rangelov]. For a review see e.g. [@zubairy:science]. In contrast to Shor’s algorithm, factorization using Gauss sums consists of a feasible factor test based on a classical interference scheme. The proposals employ the so-called normalized truncated Gauss sum $${\cal A}_N^{(M)}(\ell) = \frac{1}{M+1}\sum\limits_{m=0}^{M}\exp\left(2\pi i\, m^2 \frac{N}{\ell}\right).$$ Here $N$ is the number to be factored, $\ell$ is a trial factor and $M$ is the truncation parameter. The capability of Gauss sums to factor numbers stems from the fact that the [*signal*]{}, i.e. the absolute value of the Gauss sum measured in the experiment, attains the maximal value only for a factor. For non-factors destructive interference yields a small signal. In the most elementary approach we have to perform this factor test for every number smaller than $\sqrt{N}$. As a consequence the method scales as $\sqrt{N}$ and is therefore exponential in the number of digits of $N$. On the other hand, the physical realizations of Gauss sums are less demanding than the implementations of Shor’s algorithm. Indeed, recent experiments based on NMR [@mehring:NMR:2006; @Suter; @suter2], cold atoms [@rasel], ultra-short pulses [@girard; @girard2; @girard3] and Bose-Einstein condensate [@sadgrove] have successfully demonstrated the possibility to find the prime factors of up to 17-digit numbers.
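The truncated Gauss sum is a one-liner to evaluate; a small self-contained check of our own (the choices of $N$ and $M$ are illustrative):

```python
import cmath

def gauss_sum(N, ell, M):
    """Normalized truncated Gauss sum A_N^(M)(ell)."""
    return sum(cmath.exp(2 * cmath.pi * 1j * m * m * N / ell)
               for m in range(M + 1)) / (M + 1)

N, M = 15, 10
for ell in range(2, 6):
    print(ell, abs(gauss_sum(N, ell, M)))  # |A| = 1 for the factors 3 and 5
```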
In the NMR settings [@mehring:NMR:2006; @Suter; @suter2] a sequence of RF pulses with linearly increasing relative phase shifts is applied to the ensemble of nuclear spins. After each pulse the echo, i.e. the polarization of the spins, is measured. Finally, all echoes are summed and for the proper choice of relative phase shifts of the RF pulses the resulting signal has the form of the Gauss sum.
The experiment with cold atoms presented in [@rasel] employs two long-living hyperfine ground states of rubidium. The atoms are launched by a system of magneto-optical traps and prepared in the atomic ground state by appropriate pulse sequence. After the preparation the atoms interact with a sequence of Raman pulses driving a transition between the hyperfine states. Similarly to the NMR experiments the individual pulses have to be properly phase shifted. Finally, after all pulses are applied a fluorescence detection measures the populations in both hyperfine states. The sum of these interference signals determines the Gauss sum.
In [@girard; @girard2; @girard3] the Gauss sum is implemented by a sequence of shaped femtosecond laser pulses. Individual laser pulses are properly phase shifted by a complex spectral mask. The interference produced by the pulse train is analyzed with a spectrometer. Due to the temporal Talbot effect the frequency component of the electric field is determined by a Gauss sum.
The experiment [@sadgrove] uses diffraction of the BEC on an optical lattice. One of the beams which create the optical lattice is designed with specific phase jumps. The pulse separates the atoms in the BEC into different momentum orders. In the absorption image a diffraction pattern determined by the Gauss sum is observed in which high-momentum atoms represent factors and low-momentum atoms represent non-factors.
As we have mentioned, the signal for a non-factor is suppressed and its value depends on the number of terms in the Gauss sum. In the experiment we have to take into account the limited resolution of the measured signal. Hence, to be able to distinguish factors from non-factors we have to add a sufficient number of terms in the Gauss sum. However, in all experiments performed so far the individual contributions to the Gauss sums are created by individual pulses. Hence, the total number of terms in the Gauss sum is limited by the decoherence time of the system used in the experiment. Because of these two antagonistic effects we have to find conditions under which the algorithm based on Gauss sums successfully finds the factors of a given number $N$. We answer these questions in the following Chapters.
Chapter \[chap9\] deals with truncated Gauss sums and is based on [@opttrunc]. We find that the truncated Gauss sums offer good discrimination of factors from non-factors since the gap between their corresponding signals can reach a value of almost $30\%$. Moreover, we show that to reach such a gap the number of terms in the Gauss sum $M$ we have to add, i.e. the number of laser pulses we have to apply in the experiment, has to be of the order of the fourth-root of $N$. The total number of the resources needed for the success of the factorization scheme based on the truncated Gauss sum is thus $${\cal R} \sim \sqrt[4]{N}\cdot \sqrt{N} = N^\frac{3}{4}.$$
In Chapter \[chap10\], which is based on [@stef:exp:sum], we extend the idea of factorization of integers from Gauss sums to exponential sums of the form $${\cal A}_N^{(M,j)}(\ell) \equiv \frac{1}{M+1} \sum_{m=0}^M \exp\left[2\pi i\, m^j\frac{N}{\ell}\right].$$ Here the power of the phase is no longer quadratic like in the case of the Gauss sum but is given by a positive integer $j$. The faster growth of the phase reduces the number of terms $M$ that have to be added to the $2j$-th root of $N$. The total number of resources necessary to factorize a number $N$ using exponential sums is given by $${\cal R}_j \sim \sqrt[2j]{N}\cdot \sqrt{N} = N^\frac{j+1}{2j}.$$ Hence, we can save experimental resources by applying exponential sums with a larger value of $j$. On the other hand, the gap between the signals of factors and non-factors shrinks as the power of the phase $j$ increases. This can make the experimental data inconclusive, unless a sufficient resolution is guaranteed. We summarize our results in the Conclusions.
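The exponential sum can be sketched in the same way (our own toy example; with a cubic phase, $j=3$, already $M=3$ terms separate the factors of $N=15$ from the non-factors):

```python
import cmath

def exp_sum(N, ell, M, j):
    """Normalized truncated exponential sum A_N^(M,j)(ell)."""
    return sum(cmath.exp(2 * cmath.pi * 1j * m ** j * N / ell)
               for m in range(M + 1)) / (M + 1)

for ell in (3, 4, 5):
    print(ell, abs(exp_sum(15, ell, 3, 3)))  # unity for 3 and 5, suppressed for 4
```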
Factorization with Gauss sums {#chap9}
=============================
Gauss sums [@lang:1970; @davenport:1980; @schleich:2005:primes] play an important role in many phenomena of physics ranging from the Talbot effect of classical optics [@talbot:1836] via the curlicues emerging in the context of the semi-classical limit of quantum mechanics [@berry:curlicues:1:1988; @berry:curlicues:2:1988], fractional revivals [@leichtle:PRL:1996; @leichtle:PRA:1996] and quantum carpets [@schleich:2001] to Josephson junctions [@schopohl]. Moreover, they build a bridge to number theory, especially to the topic of factorization. Indeed, they can be viewed as a discrimination function of factors versus non-factors for a given natural number. The essential tool of this factorization scheme [@merkel:FP] is the periodicity of the Gauss sum.
Usually Gauss sums extend over some period which leads to the [*complete Gauss sum*]{}. However, recent experiments based on NMR [@mehring:NMR:2006; @Suter; @suter2], cold atoms [@rasel], ultra-short pulses [@girard; @girard2; @girard3] and Bose-Einstein condensate [@sadgrove] have demonstrated the possibility of factoring numbers using a [*truncated Gauss sum*]{}, where the number of terms in the sum is much smaller than the period. Since the number of terms that can be realized is limited by the decoherence time of the system, factorization with truncated Gauss sums offers enormous experimental advantages. In the present Chapter we address how the number of terms needed in order to factor a given number depends on that number. In particular, we find an optimal truncation which preserves the discrimination property and at the same time minimizes the number of terms in the sum.
In order to factor a number $N$ we analyze the [*signal*]{}, i.e. the absolute value of the Gauss sum, for integer arguments $\ell=1,\ldots,\lfloor\sqrt{N}\rfloor$. We call the graphical representation of the signal data the [*factorization interference pattern*]{}. In order to gain information about the factors of $N$ we analyze the factorization interference pattern: Whenever the argument $\ell$ corresponds to a factor of $N$ we observe the maximal signal value of unity. For most non-factor arguments this signal value is significantly below unity. However, for [*ghost factors*]{} we observe signal values close to unity even though these arguments do not correspond to an actual factor of $N$. Thus ghost factors spoil the discrimination of factors from non-factors in such a factorization interference pattern. Fortunately, ghost factors can be suppressed below a given threshold by extending the upper limit of the summation in the Gauss sum. This goal of completely suppressing all ghost factors provides us with an upper bound on the truncation parameter. This upper bound represents a sufficient condition for the success of our Gauss sum factorization scheme. The analysis of the number of ghost factors evaluated by the [*ghost factor counting function*]{} $g(N,M)$, which depends on the number to be factorized $N$ and the truncation parameter $M$, reveals that this upper bound on the truncation parameter is also a necessary condition for the success of our Gauss sum factorization scheme.
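A concrete ghost factor (the numbers are our own illustrative choice): for $N=10403=101\cdot 103$ the trial factor $\ell=102$ gives $N/\ell=102-1/102$, so the quadratic phases grow slowly and a short truncated sum mimics a factor; extending the truncation suppresses the ghost.

```python
import cmath

def gauss_sum(N, ell, M):
    """Normalized truncated Gauss sum A_N^(M)(ell)."""
    return sum(cmath.exp(2 * cmath.pi * 1j * m * m * N / ell)
               for m in range(M + 1)) / (M + 1)

N, ghost = 10403, 102              # N = 101 * 103, so 102 is not a factor
print(abs(gauss_sum(N, ghost, 2)))    # short sum: deceptively close to unity
print(abs(gauss_sum(N, ghost, 20)))   # longer sum: the ghost is suppressed
print(abs(gauss_sum(N, 101, 20)))     # a true factor stays at unity
```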
The Chapter is organized as follows: We first briefly review in Section \[chap9:1\] the central idea of the factorization scheme based on the Gauss sums. In particular, we introduce complete and truncated Gauss sums and compare the resources necessary to factor a given number $N$. We find the first traces of ghost factors in the factorization interference pattern based on the truncated Gauss sum. Since the truncation of the Gauss sum weakens the discrimination of the factors from non-factors, we dedicate Section \[chap9:2\] to deriving a deeper understanding of this feature. We find four distinct classes of arguments $\ell$ which result in utterly different behaviours of the truncated Gauss sum. Rewriting the truncated Gauss sum in terms of the curlicue sum allows us to identify the class of problematic arguments: the ghost factors. Moreover, we identify a natural threshold which separates factors from non-factors. For a rigorous argument we refer to Appendix \[appendA\]. In Section \[chap9:3\] we obtain an upper bound on the truncation parameter of the Gauss sum needed to suppress the signal of all ghost factors below the natural threshold. Ghost factors appear whenever the ratio of the number to be factored and a trial factor is close to an integer. This fact allows us to replace the Gauss sum by an appropriate Fresnel integral. From this expression we find the scaling law $M\sim\sqrt[4]{N}$ for the truncation parameter $M$, which represents the sufficient condition for the success of our Gauss sum factorization scheme. We discuss the applicability of the Fresnel approximation in Appendix \[appendB\]. Finally, we analyze the ghost factor counting function in Section \[chap9:4\] and show that the fourth-root law is also necessary for the success of our factorization scheme, even if we relax the threshold value or allow limited error tolerance. We conclude in Section \[chap9:5\].
Factorization based on Gauss sums: appearance of ghost factors {#chap9:1}
--------------------------------------------------------------
To start our analysis we first consider the complete normalized quadratic Gauss sum $$\label{complete:GS}
{\cal A}_N^{(\ell-1)}(\ell) = \frac{1}{\ell}\sum\limits_{m=0}^{\ell-1}\exp\left(2\pi
i\, m^2 \frac{N}{\ell}\right),$$ which is frequently used in number theory. Here $N$ is the integer to be factorized and the integer argument $\ell$ scans through all numbers from 1 to $\lfloor\sqrt{N}\rfloor$ for factors of $N$. If $\ell$ is a factor then all terms in the sum contribute with a value of unity and thus the resulting signal value $|{\cal A}_N^{(\ell-1)}(\ell)|$ is one. However, for non-factor arguments the signal value is suppressed considerably, as illustrated in the upper plot of Figure \[truncated\]. Thus the absolute value of the Gauss sum allows one to discriminate factors from non-factors.
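As a minimal numerical sketch (an illustration in Python; the function name is ours and not part of the scheme), the complete Gauss sum can be evaluated with the phase reduced modulo $\ell$ in exact integer arithmetic, which avoids rounding problems for large $N$:

```python
import cmath

def complete_gauss_sum(N, ell):
    """Signal |A_N^{(ell-1)}(ell)| of the complete normalized Gauss sum.

    The phase 2*pi*m^2*N/ell is reduced modulo ell with exact integer
    arithmetic before it is converted to a float.
    """
    total = sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
                for m in range(ell))
    return abs(total) / ell

N = 559  # = 13 * 43
print(complete_gauss_sum(N, 13))  # factor: signal equals unity
print(complete_gauss_sum(N, 14))  # non-factor: signal strongly suppressed
```

For a factor every term equals one, so the signal is exactly unity; for the non-factor $\ell=14$ the terms interfere destructively and the signal nearly vanishes.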
Factorization based on the complete Gauss sum (\[complete:GS\]) has several disadvantages. First of all, the limit of the sum depends on the trial factor $\ell$. Thus the number of terms in the sum increases with $\ell$ up to $\sqrt{N}$. Hence, to obtain a complete factorization interference pattern, in total $$\sum_{\ell=1}^{\sqrt{N}} \ell=\frac{1}{2}\sqrt{N}(\sqrt{N}+1)\approx \frac{1}{2}N
\label{resource:complete}$$ terms have to be added.
In the recent experimental demonstrations [@mehring:NMR:2006; @Suter; @suter2; @rasel; @girard; @girard2; @girard3; @sadgrove] of our Gauss sum factorization scheme the number of terms in the sum translates directly into the number of pulses applied to the system, or the number of interfering light fields. Due to decoherence it is favorable to use as few pulses as possible. Hence the experiments employ a constant number $M$ of pulses for each argument $\ell$ to be tested. Thus the resulting signal has the form of the truncated Gauss sum $${\cal A}_N^{(M)}(\ell) = \frac{1}{M+1}\sum\limits_{m=0}^{M}\exp\left(2\pi
i\, m^2 \frac{N}{\ell}\right),
\label{gauss2}$$ rather than a complete Gauss sum of (\[complete:GS\]). Hence we have to add $$\sum_{\ell=1}^{\sqrt{N}} M = M\cdot\sqrt{N}
\label{resource}$$ terms to obtain the factorization pattern with the truncated Gauss sum. With this fact in mind we treat the number of terms in the Gauss sum as a resource for this factorization scheme.
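The truncated sum (\[gauss2\]) and the resource counts (\[resource:complete\]) and (\[resource\]) can be sketched as follows (illustrative Python, with names of our choosing):

```python
import cmath
import math

def truncated_gauss_sum(N, ell, M):
    """Signal |A_N^{(M)}(ell)| of the truncated normalized Gauss sum."""
    total = sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
                for m in range(M + 1))
    return abs(total) / (M + 1)

N = 9624687             # = 3 * 919 * 3491
M = int(math.log(N))    # logarithmic truncation, here M = 16
L = math.isqrt(N)       # largest trial factor, floor(sqrt(N))

terms_truncated = M * L            # about M * sqrt(N) terms in total
terms_complete = L * (L + 1) // 2  # about N / 2 terms in total
print(terms_truncated, terms_complete)
```

For this seven-digit $N$ the truncation reduces the total number of terms by roughly two orders of magnitude, while a factor such as $\ell=919$ still yields a signal of unity.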
The experiments impressively demonstrate that the truncated Gauss sums are also well suited to discriminate between factors and non-factors in the factorization interference pattern, even though the summation range does not cover a full period. As a drawback we find that the signal value at non-factor arguments is not suppressed as well as in the case of the complete Gauss sum.
In order to illustrate the effect of truncating the Gauss sum we compare in Figure \[truncated\] the factorization interference patterns for the complete Gauss sum ${\cal A}_N^{(\ell-1)}(\ell)$ (upper plot) and for the truncated Gauss sum ${\cal A}_N^{(M)}(\ell)$ (lower plot). As a first guess we choose the truncation parameter $M=\ln N$ to depend logarithmically on the number to be factorized. It is remarkable that the small number $M=16$ of terms in the truncated Gauss sum is sufficient to reveal the factors of a seven-digit number like $N=9624687$. On the other hand we observe a number of data-points with signal values close to one (stars), for example at the argument $\ell=2555$. In an experiment such points might lead us to wrong conclusions in the interpretation of a factorization interference pattern. Thus we call arguments $\ell$ resulting in such critical values of the signal [*ghost factors*]{}.
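The ghost factor quoted above can be reproduced directly (a Python sketch using the example values from the text): the factor $\ell=919$ yields unity, the typical non-factor $\ell=14$ is strongly suppressed, but the ghost factor $\ell=2555$ stays close to unity.

```python
import cmath

def signal(N, ell, M):
    """|A_N^{(M)}(ell)| with the phase reduced modulo ell exactly."""
    s = sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
            for m in range(M + 1))
    return abs(s) / (M + 1)

N, M = 9624687, 16
print(signal(N, 919, M))   # factor: exactly unity
print(signal(N, 14, M))    # typical non-factor: well below 1/sqrt(2)
print(signal(N, 2555, M))  # ghost factor: above 1/sqrt(2), yet 2555 does not divide N
```

Indeed $2N/2555 = 7534 + 4/2555$ lies very close to an even integer, which is exactly the situation analyzed in the next section.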
![Influence of the truncation parameter $M$ of the incomplete Gauss sum ${\cal A}_{N}^{(M)}(\ell)$ defined in (\[gauss2\]) on the contrast of the factorization interference pattern for the example $N=9624687=3\cdot 919 \cdot 3491$. The upper picture shows the corresponding pattern for the complete Gauss sum defined in (\[complete:GS\]) where $M= \ell-1$. For the lower plot we have truncated the Gauss sum after $M=\ln N=16$ terms. At factors of $N$ indicated by vertical lines the Gauss sum assumes the value of unity marked by black diamonds. The complete Gauss sum enjoys an impressive contrast due to a suppressed signal value at all non-factors. However, also the truncated Gauss sum with a relatively small number of terms allows one to discriminate factors from non-factors. Nevertheless, we also observe several ghost factors marked by stars.[]{data-label="truncated"}](ghost_fig1lp.eps "fig:"){width="65.00000%"} ![Influence of the truncation parameter $M$ of the incomplete Gauss sum ${\cal A}_{N}^{(M)}(\ell)$ defined in (\[gauss2\]) on the contrast of the factorization interference pattern for the example $N=9624687=3\cdot 919 \cdot 3491$. The upper picture shows the corresponding pattern for the complete Gauss sum defined in (\[complete:GS\]) where $M= \ell-1$. For the lower plot we have truncated the Gauss sum after $M=\ln N=16$ terms. At factors of $N$ indicated by vertical lines the Gauss sum assumes the value of unity marked by black diamonds. The complete Gauss sum enjoys an impressive contrast due to a suppressed signal value at all non-factors. However, also the truncated Gauss sum with a relatively small number of terms allows one to discriminate factors from non-factors. Nevertheless, we also observe several ghost factors marked by stars.[]{data-label="truncated"}](ghost_fig1rp.eps "fig:"){width="65.00000%"}
Classification of trial factors {#chap9:2}
-------------------------------
The frequency of appearance of ghost factors is the central question of our study: how many terms in the truncated Gauss sum are needed in order to suppress the occurrence of ghost factors? However, we first need to identify the class of arguments which results in ghost factors.
Figure \[truncated\] already indicates that there are different classes of trial factors: (i) factors, with a constant value of $|{\cal A}_N^{(M)}(\ell)|$, (ii) typical non-factors, at which already a few terms $M$ are sufficient to suppress the value of $|{\cal A}_N^{(M)}(\ell)|$ considerably, (iii) ghost factors, at which a larger summation range is needed to suppress the value of $|{\cal A}_N^{(M)}(\ell)|$, and finally (iv) threshold non-factors, at which the value of $|{\cal A}_N^{(M)}(\ell)|$ [*cannot*]{} be suppressed by increasing $M$.
To illustrate this we plot in [Figure \[arguments\]]{} the truncated Gauss sum of (\[gauss2\]) for $N=9624687$ and various arguments $\ell$ characteristic for each one of the class (i-iv) as a function of the truncation parameter $M$.
![Emergence of four different classes of arguments $\ell$ of the truncated Gauss sum of (\[gauss2\]) from its dependence on its truncation parameter $M$ for $N=9624687=3\cdot 919\cdot 3491$. We show the signal value $|{\cal A}_N^{(M)}(\ell)|$ for four arguments $\ell$ as a function of the truncation parameter $M$. For factors of $N$, such as $\ell=919$ depicted by the black diamonds, the signal is constant and equal to unity. For typical non-factors, such as $\ell=14$ depicted by the gray dots, the signal is suppressed considerably already for small values of the truncation parameter $M$. However, for ghost factors, such as $\ell=2555$ depicted by stars, many more terms in the sum (\[gauss2\]) are needed to suppress the signal. For arguments such as $\ell=12$ depicted by the black triangles, the signal levels off at a non-vanishing threshold determined by $1/\sqrt{2}$ and it is impossible to suppress it further by increasing the truncation parameter $M$.[]{data-label="arguments"}](ghost_fig2p.eps){width="60.00000%"}
To which class of arguments (i-iv) the given $\ell$ belongs is determined by the relation between the argument $\ell$ and the number we are factorizing $N$, namely on the value of the fraction $2N/\ell$ which enters the Gauss sum (\[gauss2\]). Indeed, for the number $N=9624687$ and the arguments $\ell$ used in [Figure \[arguments\]]{} we find the following: (i) for a factor $\ell=919$ the fraction $2N/\ell$ is an even integer, (ii) for a typical non-factor $\ell=14$ the fraction $2N/\ell$ is close to an odd integer, (iii) for a ghost factor $\ell=2555$ the fraction $2N/\ell$ is close to an even integer, (iv) for a threshold non-factor $\ell=12$ the fraction $2N/\ell$ is an even integer plus one-half. Thus we see that the class of $\ell$ is given by the fractional part of the fraction $2N/\ell$. Hence, in order to bring out these classes most clearly, we represent the truncated Gauss sum (\[gauss2\]) in a different form. For any argument $\ell$ we decompose the fraction $2N/\ell$ into the closest even integer $2k$ and the fractional part $\rho(N,\ell)=p/q$ with $|\rho|<1$ and $p$, $q$ being coprime, i.e. $$\rho(N,\ell)=\frac{2N}{\ell}-2k.
\label{fracpart}$$ Since $\exp\left(2\pi i\, m^2 \cdot k\right)=1$ the Gauss sum (\[gauss2\]) reads $$\label{gauss3}
{\cal A}_N^{(M)}(\ell)= s_M\left(\rho(N,\ell)\right)$$ where we have introduced the normalized curlicue function [@berry:curlicues:1:1988],[@berry:curlicues:2:1988] $$s_M(\tau)\equiv\frac{1}{M+1}\sum\limits_{m=0}^M \exp\left(i \pi\, m^2 \tau \right)
\label{curl}$$ which we consider for a real argument $\tau$ with $-1\leq\tau\leq 1$.
The connection (\[gauss3\]) between the truncated Gauss sum ${\cal A}_N^{(M)}(\ell)$ defined in (\[gauss2\]) and the normalized curlicue sum $s_M(\tau)$ for a given $N$ is established by the fractional part $\rho(N,\ell)$ of the fraction $2N/\ell$. Indeed, factors of $N$ correspond to $\rho=0$. All other values of $\rho$ correspond to non-factors. In particular, ghost factors have $\rho$ values close to zero.
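The identity (\[gauss3\]) is easy to verify numerically. In the following sketch (illustrative Python; the helper names are ours) the closest even integer $2k$ is obtained by rounding $N/\ell$:

```python
import cmath

def gauss(N, ell, M):
    """Truncated normalized Gauss sum A_N^{(M)}(ell) of (gauss2)."""
    return sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
               for m in range(M + 1)) / (M + 1)

def curlicue(tau, M):
    """Normalized curlicue sum s_M(tau) of (curl)."""
    return sum(cmath.exp(1j * cmath.pi * m * m * tau)
               for m in range(M + 1)) / (M + 1)

def fractional_part(N, ell):
    """rho(N, ell) = 2N/ell - 2k with 2k the closest even integer."""
    k = round(N / ell)  # 2k is then the even integer closest to 2N/ell
    return 2 * N / ell - 2 * k

# identity (gauss3): A_N^{(M)}(ell) = s_M(rho(N, ell))
N, M = 559, 2
for ell in (13, 20, 23):
    print(abs(gauss(N, ell, M)), abs(curlicue(fractional_part(N, ell), M)))
```

The two columns agree up to floating-point rounding for every trial factor.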
We depict the connection of ${\cal A}_N^{(M)}(\ell)$ with $s_M(\tau)$ via $\rho(N,\ell)$ in [Figure \[master\]]{} for the number to be factorized $N=559=13\cdot 43$ and the truncation parameter $M=2$. The upper-left plot represents the master curve $|s_2(\tau)|$ (red curve) given by the absolute value of the normalized curlicue sum (\[curl\]). The function $|s_M(\tau)|$ is even with respect to $\tau$, since $$s_M(-\tau)=s_M^*(\tau).$$ Hence, it depends only on the absolute value of $\tau$. Moreover, we note two characteristic domains of $|s_M(\tau)|$: (i) the function starts at unity for $\tau=0$ and decays for increasing $\tau$. This central peak around $\tau=0$ is the origin of the ghost factors. (ii) After this initial decay oscillations set in whose amplitudes seem to be bounded. Indeed, in the Appendix \[appendA\] we show that in the limit of large $M$ the absolute value of the normalized curlicue sum $|s_M(\tau)|$ evaluated at non-zero rational $\tau$ is bounded from above by $1/\sqrt{2}$.
The lower-left plot shows the distribution of the fractional parts $\rho(N,\ell)$ given by (\[fracpart\]). The dots in the upper-left plot arise from the projection of the fractional parts (\[fracpart\]) of the lower-left plot onto the master curve. Those data points represent the factorization interference pattern for $N=559$, as depicted on the right.
![Connection between the truncated Gauss sum ${\cal A}_N^{(M)}(\ell)$ and the normalized curlicue sum $s_M(\tau)$ established by the fractional part $\rho(N,\ell)$ of the fraction $2N/\ell$. Here the number $N$ to be factorized is $N=559=13\cdot 43$ with the truncation parameter $M=2$. In the lower-left plot we assign to every value of $\ell$ the fractional part $\rho(N,\ell)$ defined by (\[fracpart\]) for the number $N=559$ as exemplified by $\ell=20$ and the green dot. In the upper-left plot we show the master curve $|s_2(\tau)|$ indicated by the red curve and given by the absolute value of the normalized curlicue sum (\[curl\]). This curve is an even function with respect to $\tau$ and attains values above $1/\sqrt{2}$ only in the narrow peak located at $\tau=0$. The factorization interference pattern for $N=559$ shown in the upper-right corner follows from the dots in the upper-left plot in a two step process going through the master curve: from $\ell$ we find the fractional part $\rho(N,\ell)$ which determines through the master curve the signal value as indicated by the arrows.[]{data-label="master"}](ghost_fig3.eps){width="80.00000%"}
Upper bound on the truncation by complete suppression of ghost factors {#chap9:3}
----------------------------------------------------------------------
Ghost factors emerge from the central peak of the absolute value of the normalized curlicue function. Our goal is to suppress these ghost factors by increasing the truncation parameter $M$. For this purpose we display in [Figure \[gs3d\]]{} the normalized curlicue sum (\[curl\]) in the neighborhood of $\tau=0$ in its dependence on $\tau$ and $M$. Indeed, we find a narrowing of the central peak with increasing $M$. In this way we can suppress the ghost factors below a natural threshold.
As shown in Appendix \[appendA\] for non-zero positive rational $\tau=p/q$ the absolute value of the normalized curlicue sum is asymptotically bounded from above by $1/\sqrt{2}$. Due to the connection (\[gauss3\]) between the normalized curlicue sum $s_M(\tau)$ and the Gauss sum ${\cal A}_N^{(M)}(\ell)$ it is natural to use this bound as a natural threshold between factors and non-factors. This observation allows us to define the ghost factor properly: ghost factors $\ell$ of a number $N$ arise when the fractional part $\rho(N,\ell)$ of $2N/\ell$ leads to a value of the normalized curlicue function $|s_M(\rho)|$ in the domain between $1/\sqrt{2}$ and unity.
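The threshold can be made explicit at the threshold non-factors: for $\tau=1/2$ the curlicue terms cycle through the values $1$ and $\mathrm{i}$, so for every odd $M$ the two values occur equally often and $|s_M(1/2)|=1/\sqrt{2}$ exactly. A brief sketch (illustrative Python):

```python
import cmath

def curlicue_abs(tau, M):
    """|s_M(tau)| of the normalized curlicue sum (curl)."""
    s = sum(cmath.exp(1j * cmath.pi * m * m * tau) for m in range(M + 1))
    return abs(s) / (M + 1)

# For tau = 1/2 the terms are 1, i, 1, i, ...; for odd M the signal sticks
# at 1/sqrt(2): no truncation parameter can push it below the threshold.
for M in (3, 15, 99):
    print(curlicue_abs(0.5, M))  # -> 0.7071... for every odd M
```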
![Absolute value $|s_M(\tau)|$ of the normalized curlicue function in the neighborhood of $\tau=0$ in its dependence on the fractional part $\tau$ and the truncation parameter $M$. The function starts at unity for $\tau=0$ and decays for increasing $\tau$. This decay becomes faster as we increase $M$. This behaviour is at the heart of the suppression of the ghost factors.[]{data-label="gs3d"}](ghost_fig3dp2.eps){width="70.00000%"}
We determine the truncation parameter $M_0$ such that we can push the absolute value of the Gauss sum for all ghost factors below the natural threshold of $1/\sqrt{2}$. Ghost factors appear for small values of $\tau$. This fact allows us to replace the Gauss sum by an integral which leads us to an estimate for the truncation parameter $M_0$.
Indeed, with the substitution $u=\sqrt{2\tau}m$ we can approximate the normalized curlicue function $$\label{fresnel}
s_M(\tau)
\approx \frac{1}{M}\int\limits_0^{M} dm\,
\exp\left(i\, \pi m^2\tau\right)= \frac{F(M\sqrt{2\tau})}{M\sqrt{2\tau}}$$ with the Fresnel integral [@abramowitzstegun] $$F(x)=\int\limits_0^x du\,\exp\left(i\, \frac{\pi}{2} u^2\right)$$ familiar from the diffraction from a wedge [@born:wolf]. We note that in the continuous approximation the normalized curlicue function depends only on the product $M\cdot \sqrt{2\tau}$.
In Figure \[gf:M\] we compare the absolute value of the discrete curlicue sum $s_M(\tau)$ and the continuous Fresnel integral $F(M\sqrt{2\tau})/(M\sqrt{2\tau})$ at the small value $\tau = 10^{-3}$. The approximation models the results of the discrete curlicue sum impressively well.
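This comparison is straightforward to redo. In the sketch below (illustrative Python; we approximate the Fresnel integral by a simple midpoint rule instead of a library routine) both sides of (\[fresnel\]) are evaluated at $\tau=10^{-3}$:

```python
import cmath

def curlicue_abs(tau, M):
    """|s_M(tau)| of the normalized curlicue sum (curl)."""
    s = sum(cmath.exp(1j * cmath.pi * m * m * tau) for m in range(M + 1))
    return abs(s) / (M + 1)

def fresnel(x, steps=20000):
    """F(x) = int_0^x exp(i*pi*u^2/2) du, approximated by the midpoint rule."""
    h = x / steps
    return h * sum(cmath.exp(0.5j * cmath.pi * ((k + 0.5) * h) ** 2)
                   for k in range(steps))

tau = 1e-3
for M in (10, 30, 50):
    x = M * (2 * tau) ** 0.5
    print(M, curlicue_abs(tau, M), abs(fresnel(x)) / x)  # the two columns agree closely
```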
![Comparison between the exact discrete normalized curlicue sum (\[curl\]) shown by diamonds and its approximation (\[fresnel\]) by the continuous Fresnel integral depicted by the black curve. We display the absolute value $|s_M(\tau)|$ as a function of the number $M$ of terms contributing to the curlicue sum (\[curl\]) for $\tau=10^{-3}$. The horizontal line marks the threshold $1/\sqrt{2}$ of the signal and the vertical line indicates the upper bound $M_0$ (\[M-N\]) required to suppress a ghost factor corresponding to $\tau=10^{-3}$.[]{data-label="gf:M"}](ghost_fig5p.eps){width="70.00000%"}
We are looking for the truncation parameter $M_0$ such that for a given fractional part $\tau$ the absolute value of the integral (\[fresnel\]) is equal to $\frac{1}{\sqrt{2}}$. We denote by $\alpha(\xi)$ the solution of the transcendental equation $$\frac{|F(\alpha)|}{\alpha}=\xi.$$ In particular, for the natural threshold $\xi=1/\sqrt{2}$ defining the ghost factors we find the numerical value $\alpha(\xi)\approx 1.318$. From the fact that $F$ depends only on the product $M\sqrt{2\tau}$ it follows that $$\label{alpha}
\alpha(\xi) = M_0 \sqrt{2\tau}.$$
For the factorization of the number $N$ the argument $\ell$ is varied within the interval $[1,\sqrt{N}]$. Consequently, the minimal fractional part $$\rho_{\rm min}(N)\sim\frac{2}{\sqrt{N}}$$ arises from the ratio $2N/\ell$ when the denominator takes on its maximum value $\ell=\sqrt{N}$.
Finally, we arrive at $$\label{M-N}
M_0\approx \frac{\alpha(\xi)}{\sqrt{2 \rho_{\rm min}(N)}}\approx \frac{\alpha(\xi)}{2}\sqrt[4]{N}.$$ Hence, $M_0$ represents an upper bound for the number of terms in the truncated Gauss sum (\[gauss2\]) required to push the signal of [*all*]{} non-factors below the threshold $\xi$. In particular, we find that to suppress all ghost factors below the natural threshold $\xi=1/\sqrt{2}$ we need $M_0\approx 0.659\sqrt[4]{N}$ terms in the truncated Gauss sum. However, we point out that the power-law (\[M-N\]) arises from the fact that we use quadratic phases and is unchanged when we relax the threshold value $\xi$, since a change of this threshold only alters the prefactor $\alpha(\xi)$.
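For the example $N=9624687$ of Figure \[truncated\] the bound evaluates to $M_0=\lceil 0.659\sqrt[4]{N}\rceil=37$. The following sketch (illustrative Python) shows that the ghost factor $\ell=2555$, which survives the logarithmic truncation $M=16$, is indeed pushed below $1/\sqrt{2}$ at $M_0$:

```python
import cmath
import math

ALPHA = 1.318  # solution of |F(alpha)| / alpha = 1 / sqrt(2)

def signal(N, ell, M):
    """|A_N^{(M)}(ell)| with the phase reduced modulo ell exactly."""
    s = sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
            for m in range(M + 1))
    return abs(s) / (M + 1)

N = 9624687
M0 = math.ceil(0.5 * ALPHA * N ** 0.25)  # upper bound (M-N): about 0.659 N^(1/4)
print(M0)                   # -> 37
print(signal(N, 2555, 16))  # ghost factor under logarithmic truncation
print(signal(N, 2555, M0))  # suppressed below 1/sqrt(2) at M = M0
```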
We conclude this section by noting that the scaling law rests on approximating the normalized curlicue sum by the Fresnel integral. In Appendix \[appendB\] we analyze the range of applicability of the Fresnel integral approximation (\[fresnel\]) and find that our results lie within the validity of the approximation.
Ghost factor counting function: inevitable scaling law {#chap9:4}
------------------------------------------------------
In the preceding section we have derived a scaling law for the number $M$ of terms of the truncated Gauss sum needed to factor a given number $N$. This estimate is a [*sufficient*]{} condition for the success of the Gauss sum factorization scheme. In the present section we show that it is also a [*necessary*]{} condition. In order to illustrate this feature we first choose the logarithmic truncation $M=\ln N$ and show that at the end of our factorization scheme we are left with too many candidate factors, most of them being ghost factors. Moreover, we show that we cannot achieve a more favorable scaling than the fourth-root dependence (\[M-N\]), even if we tolerate a limited number of ghost factors.
To answer these questions we introduce the ghost factor counting function $$g(N,M)\equiv \#\left\{\ell=1,\ldots,\lfloor\sqrt{N}\rfloor\ \rm{with}\ \frac{1}{\sqrt{2}}<|{\cal A}_N^{(M)}(\ell)|<1\right\}
\label{gf:count}$$ which yields the number of data-points with critical values of the signal in the factorization interference pattern for a given $N$ and a chosen truncation $M$.
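A direct implementation of the ghost factor counting function might read as follows (illustrative Python; the small `eps` guards the strict inequalities of (\[gf:count\]) against rounding):

```python
import cmath
import math

def signal(N, ell, M):
    """|A_N^{(M)}(ell)| with the phase reduced modulo ell exactly."""
    s = sum(cmath.exp(2j * cmath.pi * ((m * m * N) % ell) / ell)
            for m in range(M + 1))
    return abs(s) / (M + 1)

def ghost_factor_count(N, M, eps=1e-9):
    """g(N, M) of (gf:count): arguments with 1/sqrt(2) < |A_N^{(M)}(ell)| < 1."""
    threshold = 2 ** -0.5
    return sum(1 for ell in range(1, math.isqrt(N) + 1)
               if threshold + eps < signal(N, ell, M) < 1 - eps)

N = 9624687
print(ghost_factor_count(N, int(math.log(N))))  # ghost factors remain at M = ln N
```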
In order to study the behaviour of the ghost factor counting function $g(N,M)$ over a broad range of numbers $N$ we show in Figure \[gfcount\] $g(N,M=\ln N)$ for 10000 random odd numbers $N$ out of the interval $[1,2\cdot 10^{10}]$. Here we choose the truncation parameter to depend logarithmically on $N$, that is $M\approx \ln N$. This result shows that the number of ghost factors $g(N,M)$ for $M\approx\ln N$ grows faster than the logarithm of $N$. Hence the logarithmic truncation $M\approx\ln N$ is not sufficient for the success of our Gauss sum factorization scheme. We provide an explanation for the general trend observable in Figure \[gfcount\] in Section \[uniform\] and discuss the deviations in Section \[non-uniform\].
![A logarithmic dependence of the truncation parameter $M$ on $N$ is not sufficient to suppress ghost factors. The ghost factor counting function $g(N,M)$ calculated for 10000 random odd numbers $N$ out of the interval $[1,2\cdot 10^{10}]$ with $M=\ln N$ grows faster than the logarithm of $N$. The black line describes the general trend given by (\[gf:fit\]). We observe strong deviations, as exemplified by $N=13064029441$ highlighted by the star and discussed in Section \[non-uniform\].[]{data-label="gfcount"}](ghost_fig6p.eps){width="60.00000%"}
In evaluating the number of ghost factors we proceed in two steps. First, we make use of the connection (\[gauss3\]) between the truncated Gauss sum ${\cal A}_N^{(M)}(\ell)$ and the normalized curlicue sum $s_M(\tau)$. As already pointed out in Section \[chap9:2\] the ghost factors appear only for $\tau$ values lying in the small interval $[-\tau_0,\tau_0]$ around zero. The Fresnel integral approximation from Section \[chap9:3\] allows us to determine the fractional part $\tau_0$ where the normalized curlicue sum assumes the value $1/\sqrt{2}$. In the second step we relate the number of ghost factors $g(N,M)$ to $\tau_0$ by a density argument.
We determine $\tau_0$ with the help of the continuous approximation of the curlicue sum. From (\[alpha\]) we obtain $$\label{tau0}
\tau_0= \tau_0(M) \approx \frac{\alpha^2}{2 M^2}$$ and we thus arrive at the total width $2\tau_0\approx\alpha^2/M^2$ of the interval of fractional parts resulting in signal values larger than $1/\sqrt{2}$.
We relate the number of ghost factors $g(N,M)$ to the width of the interval $2\tau_0$ via the distribution of fractional parts $\tau$ for a given $N$. First, we consider a uniform distribution. Here we derive an analytical estimation for $g(N,M)$ which explains the general trend in Figure \[gfcount\]. Second, we discuss the case of numbers $N$ where the distribution of fractional parts cannot be approximated as uniform. Finally, we analyze a trade-off between a smaller truncation parameter at the expense of more ghost factors. We show that this approach will not change the power-law (\[M-N\]).
### Uniform distribution of fractional parts {#uniform}
Let us first assume for simplicity that the distribution of the fractional parts $\tau$ is uniform for a given number $N$. Then the number of ghost factors $g(N,M)$ is directly proportional to the width $2\tau_0$ of the interval of fractional parts which lead to ghost factors, $$\frac{g(N,M)}{\sqrt{N}}\approx\frac{2 \tau_0}{2}=\tau_0.$$
Recalling the dependence (\[tau0\]) of $\tau_0$ on $M$ we conclude that the number of ghost factors $$\label{nbghost}
g(N,M)\approx \frac{1}{2}\left(\frac{\alpha}{M}\right)^2\sqrt{N}$$ depends via an inverse power-law on the truncation parameter $M$.
In Figure \[gfcount\] we already found indications that $g(N,M=\ln N)$ grows faster than the logarithm of $N$. Indeed, from (\[nbghost\]) we obtain $$g(N,\ln N)
\approx\frac{1}{2}\left(\frac{\alpha}{\ln N}\right)^2\sqrt{N},
\label{gf:fit}$$ which implies that $g(N,\ln N)$ grows essentially like $\sqrt{N}$, up to a logarithmic factor. In Figure \[gfcount\] we display a fit according to (\[gf:fit\]). We find that this fit describes the general trend well over a large range of numbers $N$. However, we also observe strong variations around this general trend. The deviations indicate that the distribution of fractional parts is not uniform for certain numbers $N$. We analyze such numbers in Section \[non-uniform\].
### Non-uniform distribution of the fractional parts {#non-uniform}
In Figure \[gfcount\] we find that for certain numbers the actual number of ghost factors $g(N,M)$ deviates considerably from our estimate (\[gf:fit\]). In the following we show that for such numbers the distribution of the fractional parts cannot be treated as uniform.
This unfavorable case occurs when $N$ has only a few divisors, but another number $N'=N+k$ close to $N$ (with $|k|\ll N$) has many divisors. For example, for the number $$N=13064029441=21647\cdot 603503$$ highlighted in [Figure \[gfcount\]]{} by the star we find that $$N'=N-1=2^8\cdot 3\cdot 5\cdot 7\cdot 11\cdot 17\cdot 23\cdot 113$$ obviously has many divisors.
Let us consider $\ell'$ which is a divisor of $N'=N+k$ but not of $N$. It follows that for $\ell'>2|k|$ the fractional part of $2N/\ell'$ is equal to $$\rho(N,\ell')=-\frac{2k}{\ell'}.
\label{hyp}$$ If we plot the fractional part $\rho(N,\ell)$ of $2N/\ell$ as a function of $\ell$, we find that for divisors $\ell'$ of $N'$ the resulting fractional parts are aligned on the hyperbola (\[hyp\]) and are attracted to zero. Hence for $N$ the distribution of fractional parts $\rho(N,\ell)$ is not uniform.
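Relation (\[hyp\]) can be checked in exact rational arithmetic. The sketch below (illustrative Python) uses the example $N=13064029441$ from above, where $N'=N-1$ is rich in divisors, i.e. $k=-1$:

```python
from fractions import Fraction

def fractional_part_exact(N, ell):
    """rho(N, ell) = 2N/ell - 2k, with 2k the even integer closest to 2N/ell."""
    k = round(Fraction(N, ell))  # closest integer to N/ell, exact arithmetic
    return Fraction(2 * N, ell) - 2 * k

N = 13064029441  # = 21647 * 603503, poor in divisors
N_prime = N - 1  # = 2^8 * 3 * 5 * 7 * 11 * 17 * 23 * 113, rich in divisors
k = -1           # N' = N + k

for ell in (256, 113, 28928):  # divisors of N'
    assert N_prime % ell == 0
    # relation (hyp): rho(N, ell) = -2k/ell = 2/ell (printed in lowest terms)
    print(ell, fractional_part_exact(N, ell))
```

Every divisor $\ell'$ of $N'$ with $\ell'>2|k|$ yields the fractional part $2/\ell'$, which tends to zero for large $\ell'$.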
In the factorization interference pattern of $N$ the data-points associated with arguments $\ell'$ corresponding to divisors of $N'$ are also aligned on the curve $$\gamma_k^{(M)}(\ell) \equiv \left|s_M\left(\frac{2k}{\ell}\right)\right|.
\label{gamma}$$ Since for large values of $\ell'$ the associated fractional part $-2k/\ell'$ tends to zero, the resulting signal values $|{\cal A}^{(M)}_N(\ell')|$ approach unity. Hence the divisors of $N'$ become ghost factors of $N$.
We illustrate this fact in Figure \[signal:tau\] where we plot the distribution of the fractional parts and the factorization interference pattern for two numbers: $N'$ rich in factors and $N=N'-1$ rich in ghost factors. To emphasize the region of fractional parts which lead to ghost factors we use a logarithmic scale. Here we have chosen $N'=13335840=2^5\cdot 3^5\cdot 5\cdot 7^3$ which obviously has a lot of divisors, as depicted on the upper-left plot by the straight line of black diamonds. In the factorization interference pattern shown on the right these divisors correspond to a straight line of signals equal to unity. However, the divisors of $N'$ are non-factors of $N=N'-1=13335839=11\cdot 479\cdot 2531$. Moreover, they are aligned on a hyperbola (\[hyp\]) and attracted to zero as shown in the lower-left plot where we can clearly identify the hyperbola of stars. Consequently, in the factorization interference pattern plotted on the right this hyperbola of arguments with small fractional parts (\[hyp\]) translates into the curve of ghost factors.
![Emergence of ghost factors of $N$ from factors of $N'$. We display the distributions of the fractional parts (left column) and the factorization interference patterns (right column) for the numbers $N'=13335840=2^5\cdot 3^5\cdot 5\cdot 7^3$ which is rich in factors and $N=N'-1=13335839=11\cdot 479\cdot 2531$ which is rich in ghost factors. To emphasize the region of fractional parts which lead to ghost factors we use a logarithmic scale for $|\rho|$ on the vertical axes. The number $N'$ has a lot of divisors, as depicted on the upper-left plot by the straight line of black diamonds. In the factorization interference pattern shown on the right these divisors correspond to a straight line of signals equal to unity. However, the divisors of $N'$ are non-factors for $N=N'-1$. Moreover, they are aligned on a hyperbola (\[hyp\]) and attracted to zero as shown in the lower-left plot where we can clearly identify the hyperbola of stars. Consequently, in the factorization interference pattern shown on the right this hyperbola translates into the curve of ghost factors.[]{data-label="signal:tau"}](ghost_fig71p.eps "fig:"){width="45.00000%"}![Emergence of ghost factors of $N$ from factors of $N'$. We display the distributions of the fractional parts (left column) and the factorization interference patterns (right column) for the numbers $N'=13335840=2^5\cdot 3^5\cdot 5\cdot 7^3$ which is rich in factors and $N=N'-1=13335839=11\cdot 479\cdot 2531$ which is rich in ghost factors. To emphasize the region of fractional parts which lead to ghost factors we use a logarithmic scale for $|\rho|$ on the vertical axes. The number $N'$ has a lot of divisors, as depicted on the upper-left plot by the straight line of black diamonds. In the factorization interference pattern shown on the right these divisors correspond to a straight line of signals equal to unity. However, the divisors of $N'$ are non-factors for $N=N'-1$. 
Moreover, they are aligned on a hyperbola (\[hyp\]) and attracted to zero as shown in the lower-left plot where we can clearly identify the hyperbola of stars. Consequently, in the factorization interference pattern shown on the right this hyperbola translates into the curve of ghost factors.[]{data-label="signal:tau"}](ghost_fig72p.eps "fig:"){width="45.00000%"} ![Emergence of ghost factors of $N$ from factors of $N'$. We display the distributions of the fractional parts (left column) and the factorization interference patterns (right column) for the numbers $N'=13335840=2^5\cdot 3^5\cdot 5\cdot 7^3$ which is rich in factors and $N=N'-1=13335839=11\cdot 479\cdot 2531$ which is rich in ghost factors. To emphasize the region of fractional parts which lead to ghost factors we use a logarithmic scale for $|\rho|$ on the vertical axes. The number $N'$ has a lot of divisors, as depicted on the upper-left plot by the straight line of black diamonds. In the factorization interference pattern shown on the right these divisors correspond to a straight line of signals equal to unity. However, the divisors of $N'$ are non-factors for $N=N'-1$. Moreover, they are aligned on a hyperbola (\[hyp\]) and attracted to zero as shown in the lower-left plot where we can clearly identify the hyperbola of stars. Consequently, in the factorization interference pattern shown on the right this hyperbola translates into the curve of ghost factors.[]{data-label="signal:tau"}](ghost_fig73p.eps "fig:"){width="45.00000%"}![Emergence of ghost factors of $N$ from factors of $N'$. We display the distributions of the fractional parts (left column) and the factorization interference patterns (right column) for the numbers $N'=13335840=2^5\cdot 3^5\cdot 5\cdot 7^3$ which is rich in factors and $N=N'-1=13335839=11\cdot 479\cdot 2531$ which is rich in ghost factors. To emphasize the region of fractional parts which lead to ghost factors we use a logarithmic scale for $|\rho|$ on the vertical axes. 
The number $N'$ has a lot of divisors, as depicted on the upper-left plot by the straight line of black diamonds. In the factorization interference pattern shown on the right these divisors correspond to a straight line of signals equal to unity. However, the divisors of $N'$ are non-factors for $N=N'-1$. Moreover, they are aligned on a hyperbola (\[hyp\]) and attracted to zero as shown in the lower-left plot where we can clearly identify the hyperbola of stars. Consequently, in the factorization interference pattern shown on the right this hyperbola translates into the curve of ghost factors.[]{data-label="signal:tau"}](ghost_fig74p.eps "fig:"){width="45.00000%"}
### Optimality of the fourth-root law
In Section \[chap9:3\] we have derived the fourth-root law (\[M-N\]) as an upper bound on the truncation parameter. We will show that it is also necessary for the success of our factorization scheme.
The analysis of $g(N,M)$ revealed that it follows the inverse power-law in $M$ of (\[nbghost\]). The closer the distribution of the fractional parts of a given $N$ is to uniform, the better the estimate (\[nbghost\]) fits the actual data.
In Figure \[plaw\] we present the log-log plot of $g(N,M)$ as a function of the truncation parameter $M$ for three characteristic examples. First, for the number $N=13335769$, which has the fractional parts $\rho(N,\ell)$ of $2N/\ell$ distributed almost uniformly, we find that the scaling $\sim M^{-2}$ predicted by (\[nbghost\]) is obeyed. For $N=13335839$ we find strong deviations for larger values of $M$ due to the fact that the actual distribution of fractional parts is not uniform. Finally, for $N=13335840$ the ghost factor counting function $g(N,M)$ decays even faster than the estimate (\[nbghost\]) predicts. Nevertheless, in all three cases the number of ghost factors drops rapidly at first.
![The number $g(N,M)$ of ghost factors expressed by the ghost factor counting function (\[gf:count\]) as a function of the truncation parameter $M$ for three characteristic examples. We use a log-log plot to bring out the scaling of $g(N,M)$ with $M$. For the number $N=13335769$ the scaling $g(N,M)\sim M^{-2}$ predicted by (\[nbghost\]) with the help of the Fresnel integral is satisfied. In contrast for $N=13335839$ which is rich in ghost factors we see a strong deviation. Finally, for $N=13335840$ which is poor in ghost factors due to the fact it has many divisors the ghost factor counting function $g(N,M)$ decays even faster than the estimation (\[nbghost\]) predicts.[]{data-label="plaw"}](ghost_fig8p.eps){width="60.00000%"}
The inverse power-law (\[nbghost\]) suggests an alternative truncation of the Gauss sum when we tolerate a limited number of ghost factors, say $K$. Indeed, the power-law reduces the number of ghost factors considerably for small values of $M$. On the other hand, it has a long tail, which implies that we have to include many more terms in the Gauss sum in order to discriminate the last few ghost factors. However, this approach does not change the power-law dependence (\[M-N\]) of $M$ on $N$, since equation (\[nbghost\]) yields that $$M_K \approx \frac{\alpha}{\sqrt{2K}}\sqrt[4]{N}$$ terms are required to achieve this goal. Let us point out that this result holds if we can approximate the distribution of the fractional parts by a uniform distribution. However, as we have seen in [Figure \[plaw\]]{}, if this simplification is not feasible the required $M_K$ might be even larger. Therefore we cannot achieve a better scaling in $N$ than $\sqrt[4]{N}$, even if we tolerate a limited number of ghost factors.
We conclude that the scaling $M_0\sim\sqrt[4]{N}$ of the upper limit of the Gauss sum ${\cal A}_N^{(M)}$ provides both a [*sufficient*]{} and a [*necessary*]{} condition for the success of our factorization scheme. Using $M_0$ terms in the Gauss sum we can suppress [*all*]{} ghost factors for [*any*]{} number $N$. From the relation (\[resource\]) we see that we need to add $${\cal R}\sim \sqrt[4]{N}\cdot\sqrt{N}=N^{\frac{3}{4}}$$ terms for the success of the factorization scheme based on the truncated Gauss sum. In comparison with the ${\cal R}\sim N$ terms required for the complete Gauss sum (\[resource:complete\]) we have gained a factor of $\sqrt[4]{N}$. We emphasize that we cannot reduce the amount of resources further.
Conclusions {#chap9:5}
-----------
We have analyzed the conditions required for the success of the factorization algorithm based on the truncated Gauss sums. Four distinct classes of candidate factors $\ell$ with respect to the number to be factorized $N$ have been identified. In particular, with the help of the normalized curlicue sum we have found a simple criterion for the most problematic class of ghost factors. The natural threshold of the signal value of the Gauss sum which can be employed to discriminate factors from non-factors was identified. We have derived the scaling law $M_0\sim\sqrt[4]{N}$ for the upper limit of the Gauss sum which guarantees that all ghost factors are suppressed, i.e. the signal values for all non-factors lie below the natural threshold. Unfortunately, we cannot achieve a more favorable scaling even if we change the threshold value or tolerate a limited amount of non-factors.
However, a generalization of Gauss sums to sums with phases of the form $m^j$ with $j>2$ might offer a way out of the fourth-root scaling law. Indeed, a naive argument suggests the scaling law $M_0\sim\sqrt[2j]{N}$. For an exponential phase dependence $m^m$ we would finally achieve a logarithmic scaling law. However, these new phases bring in new thresholds and a more detailed analysis is needed. The answer to these questions is presented in the following Chapter \[chap10\].
Moreover, the analysis of the non-uniform distribution of the fractional parts provides us with a new perspective on the ghost factors. So far we have treated them as problematic trial factors which might spoil the identification of factors from the factorization interference pattern. However, the fact that the ghost factors of $N$ are factors of numbers close to $N$ offers an interesting possibility – by factorizing $N$ we can find candidate factors of numbers close to $N$. Indeed, as we have found in (\[gamma\]) the factors of $N\pm k$ align on the curve $\gamma_k^{(M)}(\ell)$ in the factorization interference pattern of $N$. Hence, if we identify the data points lying on these curves we find candidate factors of $N\pm k$. However, to take advantage of this positive aspect of ghost factors we need a very good resolution of the experimental signal data.
We illustrate this feature in [Figure \[usefulghost\]]{} on the factorization interference pattern of $N = 32183113 = 613\cdot 52501$. Here we have chosen the truncation parameter according to $M\approx\ln{N}\approx 17$ which leads to an interference pattern with several ghost factors. However, we can clearly fit the ghost factors to curves $\gamma_k^{(17)}(\ell)$ for $k=1,\ldots,5$. Hence, by factorizing $N$ we also find candidate factors of $N\pm k$ with $k=1,\ldots,5$.
![Factors of $N\pm k$ obtained from the ghost factors of the factorization interference pattern of $N = 32183113 = 613\cdot 52501$ with the truncation parameter $M = 17 \approx \ln{N}$. Such a choice of $M$ is clearly not sufficient to suppress all ghost factors. However, the remaining ghost factors can be fitted to the curves $\gamma_k^{(17)}(\ell)$ for $k=1,\ldots, 5$. Hence, we can identify candidate factors of numbers close to $N$, in our case up to $N\pm 5$.[]{data-label="usefulghost"}](ghost_fig9.eps){width="70.00000%"}
Factorization with Exponential sums {#chap10}
===================================
\[sec10:1\]
In the present Chapter we extend the idea of factorization with the help of Gauss sums by considering exponential sums. Here the phase is proportional to $m^j$ where $m$ is the summation index and $j$ is an integer. We show that in such a case the truncation depends on the inverse of this function, i.e. $M\sim\sqrt[2j]{N}$. Hence, we can save experimental resources by employing rapidly increasing phase functions. The extreme limit of an exponential sum where the phase varies exponentially with the summation index, i.e. $m^m$, should then be the optimal choice. We briefly address this case and demonstrate by a numerical analysis that the truncation parameter depends only logarithmically on the number to be factored.
It is interesting to note that recently an experiment [@suter2] based on NMR has used an exponential sum with $j=5$ to factor a 17-digit number consisting of two prime factors of the same order. In this experiment $\pi$-pulses [@sargent] drive a two-level atom. By choosing the phases of the pulses appropriately we can achieve a situation in which the resulting polarization is determined by a truncated exponential sum with a particular choice of $j$. Moreover, even the extreme case of an exponential phase $m^m$ can be realized in this way.
We introduce exponential sums in Section \[sec10:2\] and show that they allow us to discriminate between factors and non-factors. In particular, we demonstrate by a numerical example that phases which increase as $m^3$ suppress ghost factors more effectively than Gauss sums, whose phases are proportional to $m^2$. This feature is our motivation to study the factorization properties of exponential sums. In the preceding Chapter we have shown that for truncated Gauss sums the influence of the truncation parameter $M$ depends crucially on the choice of trial factors. We have identified four classes: (i) factors, which are not influenced by $M$, (ii) threshold trial factors, which are also independent of $M$, (iii) typical non-factors, which decay very quickly, and (iv) ghost factors, which decay slowly. In Section \[sec10:3\] we perform a similar analysis for exponential sums. The numerical calculations of Section \[sec10:2\] are confirmed in Section \[sec10:4\] by an analytic argument. We show that the number of terms which have to be summed in order to suppress the signal of all ghost factors depends on the $2j$-th root of the number to be factored. For all exponential sums except the Fourier sum there exist non-factors for which the signal cannot be suppressed below certain thresholds by further increasing the truncation parameter. The values of these thresholds are determined by the power $j$ and can be close to the maximal signal of unity corresponding to a factor. In such a case we cannot achieve a sufficient contrast between the signals of factors and non-factors. We discuss the restrictions imposed by this fact on our factorization scheme in Section \[sec10:5\]. Our analysis indicates that rapidly increasing phases suppress ghost factors most effectively. This feature suggests considering the extreme case with the phase $m^m$.
We briefly address this case in Section \[sec10:6\] where we present numerical simulations indicating that the resources scale only logarithmically. However, in contrast to sums involving a fixed exponent, we no longer have the tools of number theory at hand to prove perfect discrimination of factors from non-factors. Nevertheless, in the Appendix \[appendC\] we demonstrate that the sum actually discriminates factors from non-factors. We summarize our results in the conclusions of Section \[sec10:7\].
Factorization with exponential sums {#sec10:2}
-----------------------------------
For our purpose to factorize numbers we use truncated and normalized exponential sums of the type $${\cal A}_N^{(M,j)}(\ell) \equiv \frac{1}{M+1} \sum_{m=0}^M
\exp\left[2\pi i\, m^j\frac{N}{\ell}\right],
\label{eqn:Gausslike}$$ where the phases are determined by the integer power $j$. Here $N$ is the number to be factored and $\ell$ is a trial factor which scans through all integers between $1$ and $\lfloor\sqrt{N}\rfloor$. In the experiments performed so far the upper bound $M$ in the sum is equal to the number of pulses applied.
In the case of $j=1$ the exponential sum reduces to a Fourier sum. For $j=2$ we find the truncated Gauss sum $${\cal A}_N^{(M)}(\ell)\equiv{\cal A}_N^{(M,2)}(\ell) = \frac{1}{M+1}
\sum_{m=0}^M \exp\left[2\pi i\, m^2\frac{N}{\ell}\right].
\label{eqn:Gauss}$$ In the case of $j=3$ the sum $${\cal A}_N^{(M,3)}(\ell) = \frac{1}{M+1} \sum_{m=0}^M \exp\left[2\pi i\,
m^3\frac{N}{\ell}\right]
\label{kummer}$$ is the truncated version of the [*Kummer sum*]{} named after the mathematician Ernst Kummer (1810-1893).
The capability of the exponential sums, [(\[eqn:Gausslike\])]{}, to factor numbers stems from the fact that for an integer factor $q$ of $N$ with $N=q \cdot r$ all phases in ${\cal A}_N^{(M,j)}$ are integer multiples of $2\pi$. Consequently, the terms add up constructively and yield ${\cal A}_N^{(M,j)}(q)=1$. When $\ell$ is not a factor the phases oscillate with $m$ and the signal $|{\cal
A}_N^{(M,j)}(\ell)|$ takes on small values. In order to factor a number $N$ we analyze $|{\cal A}_N^{(M,j)}(\ell)|$ for arguments $\ell$ out of the interval $[1,\sqrt{N}]$. We refer to the graphical representation of the signal data as [*factorization interference pattern*]{}.
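As an illustration, the truncated exponential sum (\[eqn:Gausslike\]) can be evaluated directly. In the Python sketch below (our naming) the phase $m^j N/\ell$ is reduced modulo $\ell$ in exact integer arithmetic, since only its fractional part enters the exponential:

```python
import cmath
import math

def exp_sum_signal(N, ell, M, j):
    """Signal |A_N^(M,j)(ell)| of the truncated exponential sum.

    pow(m, j, ell) reduces m^j modulo ell exactly, so the phase stays
    numerically accurate even for very large N and m."""
    total = sum(cmath.exp(2 * math.pi * 1j * (pow(m, j, ell) * N % ell) / ell)
                for m in range(M + 1))
    return abs(total) / (M + 1)

def interference_pattern(N, M, j):
    """Signal for every trial factor ell in [1, sqrt(N)]."""
    return {ell: exp_sum_signal(N, ell, M, j)
            for ell in range(1, math.isqrt(N) + 1)}
```

For $N=6172015$ and the Kummer sum ($j=3$) with $M=15$ the factors $5$ and $379$ yield the maximal signal of unity, while a typical non-factor such as $\ell=10$ is strongly suppressed, in line with Figure \[factpat\].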
In [Figure \[factpat\]]{} we show the factorization interference patterns of the number $N = 6172015 = 5\cdot 379\cdot 3257$ resulting from the Gauss sum (upper plot) and from the Kummer sum (lower plot) for the choice of the truncation parameter $M = 15\approx\ln{N}$. In both cases the factors of $N$ lead to the maximal signal of unity depicted by black diamonds. In contrast for most of the non-factors the signal represented by gray dots is well suppressed. However, for the Gauss sum there appear some non-factors, the so-called [*ghost factors*]{}, where the signal indicated by black stars is still close to that of a factor. We recognize that the corresponding factorization pattern resulting from the Kummer sum does not display any ghost factors. The origin of this positive feature lies in the fact that the cubic phase of the Kummer sum shows a stronger increase than the quadratic variation of the Gauss sum.
![ Factorization interference patterns of the number $N
=6172015 = 5\cdot 379\cdot 3257$ resulting from the Gauss sum (upper plot) and the Kummer sum (lower plot). Here we have chosen the truncation parameter $M\approx\ln{N}\approx 15$. The factors of $N$, depicted by black diamonds, correspond to the signal value of unity. For most of the non-factors, depicted by gray dots, the signal value is well suppressed. However, in the case of the Gauss sum we note that for a few non-factors, depicted by stars, the signal is close to that of a factor. Since such arguments can be misinterpreted as factors of $N$ we call them ghost factors. The presence of ghost factors in the factorization interference pattern indicates that the choice of the truncation parameter $M\approx\ln{N}$ is not sufficient for the Gauss sum. However, the cubic phases in the Kummer sum grow faster than the quadratic phases in the Gauss sum. As a result, the truncation parameter $M=15$ is sufficient to suppress all ghost factors. Moreover, some trial factors result in a threshold value of the signal, depicted by black triangles, which cannot be suppressed by further increasing the truncation parameter $M$. In the case of the Gauss sum the threshold is $1/\sqrt{2}$ whereas for the Kummer sum it has the value $\approx 0.844$.[]{data-label="factpat"}](expsum_f1l.eps "fig:"){width="65.00000%"} ![ Factorization interference patterns of the number $N
=6172015 = 5\cdot 379\cdot 3257$ resulting from the Gauss sum (upper plot) and the Kummer sum (lower plot). Here we have chosen the truncation parameter $M\approx\ln{N}\approx 15$. The factors of $N$, depicted by black diamonds, correspond to the signal value of unity. For most of the non-factors, depicted by gray dots, the signal value is well suppressed. However, in the case of the Gauss sum we note that for a few non-factors, depicted by stars, the signal is close to that of a factor. Since such arguments can be misinterpreted as factors of $N$ we call them ghost factors. The presence of ghost factors in the factorization interference pattern indicates that the choice of the truncation parameter $M\approx\ln{N}$ is not sufficient for the Gauss sum. However, the cubic phases in the Kummer sum grow faster than the quadratic phases in the Gauss sum. As a result, the truncation parameter $M=15$ is sufficient to suppress all ghost factors. Moreover, some trial factors result in a threshold value of the signal, depicted by black triangles, which cannot be suppressed by further increasing the truncation parameter $M$. In the case of the Gauss sum the threshold is $1/\sqrt{2}$ whereas for the Kummer sum it has the value $\approx 0.844$.[]{data-label="factpat"}](expsum_f1r.eps "fig:"){width="65.00000%"}
Classification of trial factors {#sec10:3}
-------------------------------
In the preceding section we have shown using numerical examples that the influence of the truncation parameter of the exponential sums depends crucially on the choice of the trial factors. In the present section we analyze this feature in more detail and identify four classes of trial factors.
For this purpose we start from the decomposition of the fraction $N/\ell$ into an integer $k$ and the fractional part $$\rho(N,\ell) = \frac{N}{\ell}-k$$ with $|\rho|\leq 1/2$. Indeed, the integer part contributes only a multiplication by unity in [(\[eqn:Gausslike\])]{} and we find $${\cal A}_N^{(M,j)}(\ell) = {\cal S}_j^{(M)}\left(\rho(N,\ell)\right)$$ where we have introduced the sum $${\cal S}_j^{(M)}(\rho)\equiv\frac{1}{M+1}\sum\limits_{m=0}^M \exp\left(2\pi i\, m^j\rho\right).$$ This elementary analysis allows us to identify four classes of the fractional part. Indeed, we find in complete analogy to the Gauss sums [@opttrunc]: ([*i*]{}) for $\rho(N,\ell)=0$ the trial factor $\ell$ is a factor of $N$, ([*ii*]{}) for $|\rho(N,\ell)| = t_j$ the trial factor $\ell$ results in a threshold value $T_j$ of the exponential sum, where the values of $t_j$ and $T_j$ are determined by the power $j$, ([*iii*]{}) for $\rho(N,\ell)$ sufficiently far from the origin the trial factor $\ell$ is a typical non-factor of $N$, and ([*iv*]{}) for $\rho(N,\ell)\sim 0$ the trial factor $\ell$ is a ghost factor of $N$.
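Since the classification hinges only on the fractional part $\rho(N,\ell)$, a small helper suffices to sort trial factors into the four classes. The Python sketch below (our naming) maps a trial factor to its fractional part in $[-1/2,1/2]$:

```python
def fractional_part(N, ell):
    """rho(N, ell): fractional part of N/ell, mapped to [-1/2, 1/2]."""
    r = (N % ell) / ell
    return r - 1 if r > 0.5 else r
```

For $N=6172015$ this reproduces the examples of Figure \[fig:arguments\]: the factor $\ell=5$ gives $\rho=0$, the ghost factor $\ell=2337$ gives the tiny value $|\rho|\approx 8.6\cdot 10^{-4}$, and the threshold trial factor $\ell=45$ gives $\rho=-1/9$, a rational with a small denominator.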
We illustrate the different dependence of representatives of these classes on the truncation parameter $M$ in [Figure \[fig:arguments\]]{} using the example of the truncated Kummer sum (\[kummer\]). We find signals which are independent of $M$ and equal to unity. They indicate factors. Moreover, we note a rapid suppression of the signal for a typical non-factor. However, for a ghost factor the signal is close to that of a factor and we have to include more terms in the sum [(\[kummer\])]{} in order to suppress it. Finally, we find that for certain trial factors $\ell$ the signal levels off at a non-zero threshold value and thus cannot be reduced at all.
![Four classes of trial factors $\ell$ illustrated by the dependence of the Kummer sum $|{\cal A}_N^{(M,3)}(\ell)|$ on the truncation parameter $M$. In order to compare with [Figure \[factpat\]]{}, where $M=15$ as indicated by a vertical dashed line, we have chosen again $N=6172015 = 5\cdot 379\cdot 3257$. For factors of $N$, such as $\ell=5$ depicted by black diamonds, the signal is constant and equal to unity. For typical non-factors, such as $\ell=10$ depicted by gray dots, the signal is suppressed considerably already for small values of the truncation parameter $M$. However, for ghost factors, such as $\ell=2337$ depicted by black stars, more terms in the sum (\[kummer\]) are needed to suppress the signal. Finally, for certain arguments, such as $\ell=45$ depicted by black triangles, the signal levels off at a non-vanishing threshold and cannot be suppressed further by increasing the truncation parameter $M$.[]{data-label="fig:arguments"}](expsum_f2.eps){width="65.00000%"}
Scaling law of the truncation parameter {#sec10:4}
---------------------------------------
In Section \[sec10:2\] we have shown that the ghost factors spoil the discrimination of factors from non-factors. Fortunately, we can suppress the signal of a ghost factor by increasing the truncation parameter $M$. In this context the truncated Gauss sums were analyzed in [@opttrunc] and it was shown that one needs $M\sim\sqrt[4]{N}$ terms in the sum in order to suppress the signal of all ghost factors considerably. We derive the corresponding scaling law $M_j\sim\sqrt[2j]{N}$ of an exponential sum ${\cal A}_N^{(M,j)}$. In [@opttrunc] the upper bound for the truncated Gauss sum [(\[eqn:Gauss\])]{} was obtained by approximating the Gauss sum by the Fresnel integral. We perform a similar analysis for the exponential sums.
Since ghost factors result from small values of the fractional part $\rho\equiv N/\ell-k$ we replace the exponential sum by an integral, i.e. $${\cal A}_N^{(M,j)}(\ell)={\cal S}_j^{(M)}(\rho)\approx \frac{1}{M}\int\limits_0^M e^{2\pi i m^j\rho}dm.$$ This approximation is justified by the van der Corput method [@Kowalski] approximating sums by sums of shifted integrals.
With the help of the substitution $m^j\rho \equiv u^j$ and $dm=du/\sqrt[j]{\rho}$ we find $${\cal A}_N^{(M,j)}(\ell)\approx F_j(M\cdot\sqrt[j]{\rho})$$ where $$F_j(x)\equiv\frac{1}{x}\int\limits_0^{x} e^{2\pi i u^j}du \: .$$ This analysis brings out most clearly that for small fractional parts $\rho$ the truncation parameter $M$ and $\rho$ appear in the exponential sum only as the product $M\cdot\sqrt[j]{\rho}$.
In order to suppress the absolute value $|{\cal A}_N^{(M,j)}(\ell)|$ below a given value $\xi$ we have to choose the upper bound $M$ according to $$M\cdot\sqrt[j]{\rho}=\alpha$$ where $\alpha$ is the solution of the integral equation $$|F_j(\alpha)| = \xi$$ which leads us to the relation $$M=\alpha(\xi)\rho^{-\frac{1}{j}}.$$
This result shows that the smaller the fractional part $\rho(N,\ell)$ of the ghost factor $\ell$ the more terms are required. Since the largest trial factor is of the order of $\sqrt{N}$ the smallest attainable fractional part $$\rho_{\rm min}(N) \sim \frac{1}{\sqrt{N}}$$ gives an upper bound $$M_j\approx\alpha(\xi)\rho_{\rm min}^{-\frac{1}{j}} \approx \alpha(\xi)\sqrt[2j]{N}
\label{rule}$$ on the truncation parameter $M$.
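The collapse of $M$ and $\rho$ into the single variable $M\cdot\sqrt[j]{\rho}$ can be checked numerically. In the Python sketch below (naming ours) we evaluate ${\cal S}_j^{(M)}(\rho)$ directly and confirm for the Gauss sum ($j=2$) that rescaling $\rho\to\rho/4$ while doubling $M$ leaves the signal almost unchanged, since the product $M\sqrt{\rho}$ is preserved:

```python
import cmath
import math

def S(j, M, rho):
    """Normalized exponential sum S_j^(M)(rho) for a fractional part rho."""
    total = sum(cmath.exp(2 * math.pi * 1j * (m ** j) * rho)
                for m in range(M + 1))
    return abs(total) / (M + 1)
```

The two signals agree up to the discretization error of the integral approximation; increasing $M$ at fixed $\rho$ suppresses the signal further, as the scaling argument predicts.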
Hence, in order to suppress all ghost factors of $N$ we require an order of $\sqrt[2j]{N}$ terms in the exponential sum ${\cal A}_N^{(M,j)}$. We point out that the scaling law (\[rule\]) is inherent in the exponential sum since the change of $\xi$ only modifies the pre-factor $\alpha(\xi)$.
In [Figure \[fig:supp\]]{} we illustrate the behaviour of $|{\cal A}_N^{(M,j)}(\ell)|$ for $N=10^6+1$ and $\ell=10^3$ resulting in the fractional part $\rho(N,\ell)=10^{-3}\approx 1/\sqrt{N}$ as a function of the truncation parameter $M$. We visualize the effect of the power $j$ on the suppression of $|{\cal A}_N^{(M,j)}(\ell)|$ by presenting three different curves: ([*i*]{}) black dots correspond to the Fourier sum with linear phases, ([*ii*]{}) diamonds represent the Gauss sum, and finally ([*iii*]{}) stars result from the Kummer sum with cubic phases. We find that for the Fourier sum the suppression of the signal is extremely slow. Indeed, according to the estimate [(\[rule\])]{} we need $M_1\sim\sqrt{N}\approx 10^3$ terms in order to suppress the signal considerably. On the other hand, for the Gauss sum already $M_2\sim\sqrt[4]{N}\approx 32$ terms suffice to reduce the signal, in agreement with [(\[rule\])]{}. Finally, for the Kummer sum the decay of the signal is even faster. We find that $M_3\sim\sqrt[6]{N}\approx10$ terms are sufficient to suppress the signal, in agreement with [(\[rule\])]{}.
![Decay of the signal $|{\cal A}_N^{(M,j)}(\ell)|$ for increasing truncation parameter $M$ exemplified by the Fourier $(j=1)$, Gauss $(j=2)$ and Kummer $(j=3)$ sum. Here we have chosen $N=10^6+1$ and $\ell=10^3$ resulting in the fractional part $\rho(N,\ell)=10^{-3}\approx 1/\sqrt{N}$. For the Fourier sum (black dots) we find an extremely slow decay of the signal. On the other hand, for the Gauss sum (diamonds) already $M_2\sim\sqrt[4]{N}\approx 32$ terms are sufficient to suppress the signal considerably. This requirement is further reduced for the Kummer sum (stars) to $M_3\sim\sqrt[6]{N}\approx10$. We find that our numerical results are in good agreement with the analytical estimate [(\[rule\])]{}.[]{data-label="fig:supp"}](expsum_f3.eps){width="65.00000%"}
In order to verify the scaling law (\[rule\]) for a broad range of $N$ we have calculated numerically the truncation parameter $M_j$ needed to suppress all ghost factors of $N$ below the value $\xi$. We have chosen $N$ randomly from the interval $\left[10^4,10^{20}\right]$ and considered $\xi=0.7$. In [Figure \[fig:supp2\]]{} we present the results for the Fourier sum (black dots), Gauss sum (open diamonds) and Kummer sum (stars). To unravel the scaling law we use a logarithmic scale for both $N-$ and $M-$ axes. The numerical results are in excellent agreement with the estimates (\[rule\]) indicated by the dashed lines.
![Number $M_j$ of terms needed to suppress the signal of all ghost factors of $N$ below the value $0.7$ for the Fourier sum (black dots), Gauss sum (open diamonds) and Kummer sum (stars). To unravel the scaling of $M_j$ with $N$ we use a log-log scale. The dashed lines follow from the estimate $M_j\sim\sqrt[2j]{N}$ given by (\[rule\]).[]{data-label="fig:supp2"}](expsum_f4newp.eps){width="65.00000%"}
Threshold {#sec10:5}
---------
An experiment must also take into account the limited measurement accuracy. Thus for the success of our factorization scheme we need a good contrast between the signals of factors and non-factors, i.e. we require that the signals of all non-factors are suppressed below the estimated measurement error. However, due to the existence of the thresholds discussed in Section \[sec10:3\] this suppression might be impossible for certain powers $j$. In such a case we might misinterpret the signal arising from a non-factor as that of a factor. Hence, such exponential sums ${\cal A}_N^{(M,j)}$ are not suitable for integer factorization.
Relation (\[rule\]) shows that the faster the phase grows the fewer terms in the exponential sum are needed in order to suppress the signal of a ghost factor argument $\ell$. However, as we have already seen in [Figure \[fig:arguments\]]{}, for certain arguments $\ell$ the suppression of the signal might be impossible altogether. This feature is closely related to the power $j$ determining the phase.
The absolute value $|{\cal A}_N^{(M,j)}(\ell)|$ depends on how many different roots of unity we find in the sum. These roots of unity are given by $$\exp{\left(2\pi i m^j \frac{N}{\ell}\right)}=\exp{\left(2\pi i m^j \rho(N,\ell)\right)}=\exp{\left(2\pi i m^j \frac{p}{q}\right)}
\label{phase:iden}$$ where $p/q$ is the coprime rational representation of $\rho(N,\ell)$. The roots of unity appearing in the sum are thus determined by the residues $$m^j\, p \equiv 0,\ 1,\ \ldots,\ q-1\ \textrm{mod}\ q,$$ i.e. the terms in the exponential sum ${\cal A}_N^{(M,j)}(\ell)$ attain at most $q$ different values.
For the Fourier sum we find all $q$ different roots $\exp{\left(2\pi i m/q\right)}$ with $m=0,\ldots,\ q-1$ of unity. Moreover, since they are distributed symmetrically on the unit circle they cancel each other out. Hence, for the Fourier sum we can suppress the signal $|{\cal A}_N^{(M,1)}|$ of any non-factor $\ell$ below any given value by extending the summation range $M$.
However, for exponential sums ${\cal A}_N^{(M,j)}$ with powers $j\geq 2$ we are not guaranteed to find all different roots of unity. Moreover, since $j\neq 1$ the corresponding roots of unity $\exp{\left(2\pi i m^j p/q\right)}$ are not necessarily distributed symmetrically on the unit circle. Hence, they do not cancel completely. In such a case the signal $|{\cal A}_N^{(M,j)}(\ell)|$ has a non-zero limit as $M$ tends to infinity. This limit value determines the threshold and depends on how many different roots of unity we find in the sum and on their distribution on the unit circle. If we find only a few different roots of unity which are moreover close to each other on the unit circle, the signal $|{\cal A}_N^{(M,j)}(\ell)|$ attains values close to unity and cannot be suppressed further by increasing the truncation parameter $M$, even though $\ell$ does not correspond to a factor of $N$.
![The roots of unity contained in the exponential sums ${\cal A}_N^{(M,j)}(\ell)$ exemplified by the Fourier sum (j=1, black dots), the Gauss sum (j=2, open diamonds) and a higher order exponential sum (j=6, black stars). Here we have chosen $N=99$ and $\ell=7$ which leads to $\rho(N,\ell)=p/q=1/7$. For the Fourier sum we find all seven different roots of unity. However, in the Gauss sum only four different roots of unity appear. This number is further reduced to just two different roots of unity in the higher order exponential sum with power $j=6$.[]{data-label="fig:phases"}](expsum_f5.eps){width="40.00000%"}
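How many distinct roots of unity enter the sum is a purely arithmetic question and easy to check. The Python sketch below (our naming) counts the residues $m^j p \bmod q$ over one period and reproduces the situation of Figure \[fig:phases\]:

```python
def distinct_roots(j, p, q):
    """Number of distinct roots of unity exp(2*pi*i * m^j * p / q),
    i.e. distinct residues m^j * p mod q over one period m = 0..q-1."""
    return len({pow(m, j, q) * p % q for m in range(q)})
```

For $N=99$ and $\ell=7$, i.e. $p/q=1/7$, the Fourier sum contains all seven roots of unity, the Gauss sum only four, and the sum with the power $j=6$ just two.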
The fewest possible distinct terms in the sum ${\cal A}_N^{(M,j)}$ for a non-factor $\ell$ occur if $j+1$ is the prime number $q$ from the rational representation of $\rho(N,\ell)$. In such a case we find from Euler’s theorem (see e.g. Chapter 3 in [@Rosen]) $$m^j \equiv
\left\lbrace{
\begin{tabular}{ccl}
$1$ & if & $q$ is not a divisor of $m$ \\
$0$ & if & $q$ is a divisor of $m$
\end{tabular}}\right.
\quad \textrm{mod}\ q$$ so $m^j\cdot p$ is either congruent to $p$ or $0$ mod $q$. With the help of the periodicity $m^j\cdot p\equiv (m+q)^j\cdot p\ \textrm{mod}\ q$ and the relation (\[phase:iden\]) we obtain for $M+1$ being a multiple of $q$ $$\begin{aligned}
\nonumber {\cal A}_N^{(M,j)}(\ell) & = & \frac{1}{M+1}\sum_{m=0}^M e^{2\pi i m^j\frac{N}{\ell}} = \frac{1}{q}\sum_{m=0}^{q-1} e^{2\pi im^j\frac{p}{q}}\\
\nonumber & = & \frac{1}{q}\left(1+(q-1)e^{2\pi i\frac{p}{q}}\right).\end{aligned}$$ Hence we find for the absolute value squared $$|{\cal A}_N^{(M,j)}(\ell)|^2=\frac{1}{q^2}((1+(q-1)\cos(\frac{2\pi p}{q}))^2+(q-1)^2\sin^2(\frac{2\pi p}{q})).$$ Substituting $q=j+1$ we find for $p=1$ the threshold value of the sum ${\cal A}_N^{(M,j)}$ $$T_1(j) = \frac{1}{j+1}\sqrt{j^2+1+2j\cos{\left(\frac{2\pi}{j+1}\right)}}.$$ For $p>1$ or for more than two different terms in the sum ${\cal A}_N^{(M,j)}$ the threshold will always be smaller.
To illustrate this we plot in [Figure \[fig:thresh\]]{} the behaviour of the signal $|{\cal A}_N^{(M,6)}(\ell)|$ as a function of the truncation parameter $M$. Here we have chosen $N=99$ and $\ell=7$ resulting in $\rho(N,\ell)=p/q=1/7$. Hence, $q=7=1\cdot 6+1$ and we find that the signal converges to the threshold value $T_1(6)\approx 0.953$.
![Emergence of the threshold for the exponential sum ${\cal A}^{(M,j)}_N$ with the power $j=6$ for increasing truncation parameter $M$. We have chosen $N=99$ and $\ell=7$ resulting in $\rho(N,\ell)=p/q=1/7$. The signal converges to the value of $T_1(6)\approx 0.953$ and cannot be suppressed by a further increase of $M$.[]{data-label="fig:thresh"}](expsum_f6.eps){width="70.00000%"}
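The threshold formula is easy to verify numerically. The sketch below (our naming) evaluates the signal of ${\cal A}_{99}^{(M,6)}(7)$ for a truncation parameter with $M+1$ a multiple of $q=7$ and compares it with the predicted value $T_1(6)\approx 0.953$:

```python
import cmath
import math

def exp_sum_signal(N, ell, M, j):
    """Signal |A_N^(M,j)(ell)| with the phase reduced modulo ell."""
    total = sum(cmath.exp(2 * math.pi * 1j * (pow(m, j, ell) * N % ell) / ell)
                for m in range(M + 1))
    return abs(total) / (M + 1)

def T1(j):
    """Threshold T_1(j) for a prime denominator q = j + 1 and p = 1."""
    return math.sqrt(j * j + 1 + 2 * j * math.cos(2 * math.pi / (j + 1))) / (j + 1)
```

For $M+1=7000$ the sum runs over complete periods and the signal matches $T_1(6)$ essentially exactly, illustrating that no further increase of $M$ can suppress it.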
More generally, for prime denominator $q=k\cdot j+1$ the sum ${\cal A}_N^{(M,j)}$ contains at most $k+1$ different terms. For the case of $k=2$ an analogous calculation results in the threshold value $$T_2(j) = \frac{1}{2j+1}\left(1+2j\cos{\left(\frac{2\pi}{2j+1}\right)}\right).$$ Obviously, for large powers $j$ the values of $T_{1,2}(j)$ are very close to one.
The results derived above indicate that the exponential sums ${\cal A}_N^{(M,j)}$ with powers $j$ larger than two can be used for integer factorization only when the experimental data are sufficiently precise. For the Fourier sum the signal of any non-factor can be suppressed below any given value. However, according to [(\[rule\])]{} we have to include a number of terms of the order of the square root of $N$ to achieve this suppression. The quadratic Gauss sum of [(\[eqn:Gauss\])]{} provides a reasonable compromise between the number of terms needed and the non-factor discrimination. The gap between the signal of a factor and the greatest threshold is approximately 30$\%$, which should be sufficient for an experimental realization. According to [@opttrunc] the number of terms needed in the sum is then reduced to the fourth root of $N$.
Factorization with an exponential phase {#sec10:6}
---------------------------------------
One way to improve the scaling law might be offered by an exponential sum where the phase is not governed by a polynomial as in (\[eqn:Gausslike\]) but by an exponential function. This idea leads to the sum $${\cal E}_N^{(M)}(\ell) \equiv\frac{1}{M+1}\sum_{m=0}^M \exp\left[2\pi i m^m \frac{N}{\ell}\right].$$
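A direct evaluation of ${\cal E}_N^{(M)}$ is again possible with exact modular reduction of the phase, since only $m^m N \bmod \ell$ matters. A minimal Python sketch (our naming; we adopt the convention $0^0=1$ built into Python's three-argument `pow`):

```python
import cmath
import math

def exp_phase_signal(N, ell, M):
    """Signal |E_N^(M)(ell)| of the sum with the exponential phase m^m.

    pow(m, m, ell) reduces m^m modulo ell exactly, avoiding the
    astronomically large integers m^m itself would produce."""
    total = sum(cmath.exp(2 * math.pi * 1j * (pow(m, m, ell) * N % ell) / ell)
                for m in range(M + 1))
    return abs(total) / (M + 1)
```

For $N=6172015$ a factor such as $\ell=379$ again gives the maximal signal of unity, while already $M=20$ terms strongly suppress a typical non-factor such as $\ell=10$.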
We present a numerical analysis which confirms a logarithmic scaling law. In Section \[sec10:4\] we have found that the number $M_j$ of terms needed to suppress all ghost factors for the exponential sum ${\cal A}_N^{(M,j)}$ scales like $M_j\sim\sqrt[2j]{N}$, i.e. $M_j$ is determined by the inverse function of the phase evaluated at $\sqrt{N}$. This feature arises from the fact that the rapidly increasing exponent prevents the phase factors from accumulating near unity for small summation indices $m$, as we illustrate in [Figure \[fig:phases2\]]{}.
![Distribution of the roots $e^{2\pi i m^2 p/q}$ (dots) and $e^{2\pi i m^m p/q}$ (stars) of unity for quadratic and exponential phase, respectively. Here we have chosen $p=1$ and $q=10^4$. Since the fraction $p/q$ is small we observe an accumulation of the roots for small values of $m$ in the case of the quadratic phase.[]{data-label="fig:phases2"}](expsum_f7.eps){width="40.00000%"}
This result suggests that for the exponential sum ${\cal E}_N^{(M)}$ already a logarithmic number of terms $M_{\rm{exp}}\sim\ln \sqrt{N}$ should be sufficient to eliminate all ghost factors. Moreover, our numerical analysis summarized in [Figure \[scaling:exp\]]{} indicates that the largest threshold for ${\cal E}_N^{(M)}$ occurs around the value $0.5$. Hence, we can achieve perfect discrimination of factors from non-factors.
![Number $M_{\rm{exp}}$ of terms needed to suppress the signal $|{\cal E}_N^{(M)}|$ of all non-factors of $N$ below the value $0.7$. To unravel the scaling of $M$ we use a logarithmic scale for $N$. The gray line represents the estimate $M\sim\ln\sqrt{N}$. The plot indicates that already an order of $\ln{\sqrt{N}}$ terms in the exponential sum ${\cal E}_N^{(M)}$ is sufficient to find all factors of $N$.[]{data-label="scaling:exp"}](expsum_f8.eps){width="70.00000%"}
However, in contrast to sums involving a fixed exponent, we no longer have the tools of number theory at hand to prove perfect discrimination of factors from non-factors. Moreover, since the derivative of $m^m$ grows faster than $m^m$ itself, standard techniques for approximating these exponential sums by integrals cannot be applied. Nevertheless, in the Appendix \[appendC\] we demonstrate that it is still possible to show by methods of elementary number theory (see [@Rosen] for example) that the sum actually discriminates factors from non-factors.
Conclusions {#sec10:7}
-----------
In the present Chapter we have extended the idea of factorization with Gauss sums to exponential sums where the phase is governed by a power $j$ of the summation index. These sums are also capable of non-factor discrimination in complete analogy to Gauss sums. However, the truncation parameter $M_j$ needed to achieve a significant suppression of ghost factors of the number $N$ scales like $M_j\sim \sqrt[2j]{N}$. Hence, we can save experimental resources by employing exponential sums with large powers $j$. On the other hand, the gap between the signal of a factor and the greatest threshold value shrinks as $j$ grows. Therefore, exponential sums with large values of $j$ can be used for integer factorization only if the expected imperfections in the experiment are smaller than this gap.
We have also presented numerical simulations of factoring numbers using an exponential sum with exponentially increasing phases. Here the resources scale only logarithmically. Moreover, our results indicate that the gap survives.
Our results also show a connection to two recent experiments [@suter2; @girard3] which factored a 13-digit and a 17-digit number using a Monte-Carlo sampling technique of a complete Gauss sum. This method accepts a small fraction of ghost factors and achieves a logarithmic scaling very much in the spirit of the exponential phase.
It is interesting to compare and contrast these two approaches. Ghost factors arise from the addition of neighbouring phase factors which only deviate slightly from each other. However, when many terms are added the phase factors are distributed homogeneously on the unit circle. The Monte-Carlo technique does not add up consecutive terms but tries to collect those terms which almost cancel each other. On the other hand, the exponential phase guarantees that neighbouring phase factors deviate significantly from each other and no ghost factors can arise. This feature leads to the logarithmic scaling.
Various schemes for the factorization of numbers based on exponential sums have been developed recently. Their relative simplicity when compared to the celebrated Shor’s algorithm results in several advantages for experimental realizations. First of all, exponential sums can be easily implemented in various physical systems. Moreover, thanks to the sufficiently long coherence times, larger numbers can be factorized.
In the previous Chapters we presented the necessary conditions for the success of the factorization schemes based on exponential sums. We found that the number of terms in the sum (which directly translates into the number of pulses in the experiment) needed for the suppression of all ghost factors is determined by the inverse of the function governing the growth of the phase, evaluated at the square root of the number to be factorized. Exponential sums with rapidly growing phases are therefore more suitable for the suppression of ghost factors. On the other hand, the non-factors producing the threshold signal values become a significant problem: the faster the phase grows, the closer the thresholds appear to the maximal signal of unity corresponding to a factor. Hence, for a successful physical implementation of the exponential-sum factorization algorithm one has to guarantee sufficient resolution of the measured signal. The quadratic Gauss sum analyzed in Chapter \[chap9\] provides a reasonable compromise between the number of terms needed and the non-factor discrimination. Most of the experiments performed to date benefited from this fact.
Needless to say, the simplicity of the analyzed schemes follows from the fact that they do not employ entanglement, which is the key to the exponential speed-up of Shor’s algorithm over the classical ones. Indeed, factorization of numbers based on exponential sums relies only on interference. The resources scale exponentially, as in all known classical algorithms. To improve this scaling law by involving entanglement is our next goal.
Determination of Threshold {#appendA}
==========================
In this Appendix we show that for non-zero positive rational $\tau=p/q$ the absolute value of the normalized curlicue sum is asymptotically bounded from above by $1/\sqrt{2}$. This property follows immediately from [@berry:curlicues:1:1988]. Indeed, as shown in [@berry:curlicues:1:1988] the asymptotic behaviour of the curlicue sum $${\cal C}_M(\tau) = \sum\limits_{m=0}^M \exp\left(i \pi\, m^2 \tau \right)$$ for rational $\tau=p/q$ depends on the product $p\cdot q$. We find that for $p\cdot q$ being odd the curlicue is bounded. In such a case the absolute value of the normalized curlicue sum $s_M(\tau)={\cal C}_M(\tau)/(M+1)$ decays with increasing $M$ like $|s_M(\tau)|\sim M^{-1}$. On the other hand, for $p\cdot q$ being even the curlicue is unbounded and its growth can be approximated by $$|C_M(\tau)|\approx (M +1) (\tau_0\cdot\tau_1\cdot\ldots\cdot\tau_{\mu-1})^{1/2}$$ where $$\tau_j = (1/\tau_{j-1}) \rm{\ mod\ } 1\quad \rm{if}\quad \tau_{j-1}\neq 0
\label{appendA:rec}$$ belongs to the $j$-th step in the repeating curlicue pattern [@berry:curlicues:1:1988] with $\tau_0=\tau$. Consequently, the limit of the absolute value of the normalized curlicue sum is non-vanishing and for large $M$ we can approximate $|s_M(\tau)|$ by the finite product $$|s_M(\tau)|\approx (\tau_0\cdot\tau_1\cdot\ldots\cdot\tau_{\mu-1})^{1/2}$$
We illustrate this feature in [Figure \[appendA:curl\]]{} where we show two different curlicues $C_M(\tau)$. In the upper plot we choose $\tau = \frac{9}{10001}$ for which the product $p\cdot q$ is odd. In such a case the function $C_M(\tau)$ is periodic in $M$ and the curlicue is bounded. On the other hand for $\tau = \frac{8}{10001}$ the curlicue depicted in the lower plot is unbounded and its ultimate growth is linear.
![The behaviour of the curlicue sum ${\cal C}_M(p/q)$ in dependence on the parity of the product $p\cdot q$. In the upper plot we choose $\tau = p/q = 9/10001$ for which the product $p\cdot q$ is odd. We find that the curlicue repeats itself and is bounded. As a second example we choose $\tau = p/q = 8/10001$ where the product $p\cdot q$ is even. In such a case the curlicue expands, as depicted in the lower plot.[]{data-label="appendA:curl"}](ghost_curl1p.eps "fig:"){width="70.00000%"} ![The behaviour of the curlicue sum ${\cal C}_M(p/q)$ in dependence on the parity of the product $p\cdot q$. In the upper plot we choose $\tau = p/q = 9/10001$ for which the product $p\cdot q$ is odd. We find that the curlicue repeats itself and is bounded. As a second example we choose $\tau = p/q = 8/10001$ where the product $p\cdot q$ is even. In such a case the curlicue expands, as depicted in the lower plot.[]{data-label="appendA:curl"}](ghost_curl2p.eps "fig:"){width="80.00000%"}
Let us determine the asymptotic bound of $|s_M(\tau)|$. The recursion (\[appendA:rec\]) terminates [@berry:curlicues:1:1988] at $\tau_\mu=0$ which implies $\tau_{\mu-1}=1/b$ where $b$ is a natural number. Since $\tau_j< 1$ we find the estimate $$|s_M(\tau)|\leq\sqrt{\tau_{\mu-1}}=1/\sqrt{b}.$$ The case $b=1$ cannot be produced by the recursion formula since all $\tau_j$ are strictly less than one. As a consequence the absolute value of the normalized curlicue sum $|s_M(\tau)|$ is asymptotically bounded from above by $1/\sqrt{2}$.
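These statements are easy to check numerically (a Python sketch; the truncation $M=10^5$ and the function names are illustrative). The quadratic phase is reduced modulo $2q$ in exact integer arithmetic, and the recursion (\[appendA:rec\]) is evaluated with exact fractions:

```python
import cmath
from fractions import Fraction

def curlicue(p, q, M):
    """Partial curlicue sum C_M(p/q) = sum_{m=0}^{M} exp(i pi m^2 p / q).

    The quadratic phase is reduced modulo 2q in exact integer arithmetic,
    so no rounding error accumulates for large m.
    """
    total = 0j
    for m in range(M + 1):
        total += cmath.exp(1j * cmath.pi * ((m * m * p) % (2 * q)) / q)
    return total

def asymptotic_modulus(tau):
    """sqrt(tau_0 tau_1 ... tau_{mu-1}) from the recursion
    tau_j = (1 / tau_{j-1}) mod 1, evaluated with exact fractions."""
    prod = Fraction(1)
    while tau != 0:
        prod *= tau
        tau = (1 / tau) % 1
    return float(prod) ** 0.5

M = 100_000
s_even = abs(curlicue(8, 10001, M)) / (M + 1)  # p*q even: linear growth
s_odd = abs(curlicue(9, 10001, M)) / (M + 1)   # p*q odd: bounded curlicue
est = asymptotic_modulus(Fraction(8, 10001))   # recursion-based estimate
print(f"s_even = {s_even:.4f} (estimate {est:.4f}), s_odd = {s_odd:.1e}")
```

For $\tau=8/10001$ the recursion gives $\tau_1=1/8$ and then terminates, so the estimate is $\sqrt{1/10001}\approx 0.01$, which the direct sum reproduces; for $\tau=9/10001$ the normalized sum is orders of magnitude smaller, as expected for a bounded curlicue.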
Applicability of the Fresnel approximation {#appendB}
==========================================
Let us comment on the range of applicability of the continuous approximation (\[fresnel\]). The scaling law $M_0\sim \sqrt[4]{N}$ connecting the number to be factored with the truncation parameter $M_0$ necessary to push all ghost factors below the threshold $1/\sqrt{2}$ relies on the approximation of the normalized curlicue function by the Fresnel integral. For large values of $N$ the scaling law requires large values of $M_0$. However, for large $M$ the continuous approximation might not hold any more.
For the continuous approximation to hold the phase difference $$\pi \left((m+1)^2-m^2\right)\tau=\pi (2m+1)\tau$$ of two successive terms in the sum (\[curl\]) should at most be of the order of $\pi$. Together with the fact that the maximal phase difference appears for $m=M$ we arrive at the inequality $$\tau (2M+1) < 1.$$ Indeed, this condition is violated for sufficiently large $M$.
When we recall that for a given $N$ the smallest fractional part is $\tau_{\rm min}=1/\sqrt{N}$ we arrive at the constraint $$M_c \approx \frac{1}{4}\sqrt{N}$$ on the maximal value $M_c$ of the truncation parameter for a given $N$. Thus $M_c\sim\sqrt{N}$ provides an upper bound on the validity of the Fresnel approximation (\[fresnel\]). Since $M_0\sim\sqrt[4]{N}$ grows much more slowly than $M_c$, the Fresnel approximation is valid in the regime of interest.
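This separation of scales is easily made concrete (a short Python check; the sample values of $N$ are arbitrary):

```python
# truncation M0 ~ N^(1/4) needed for ghost suppression vs. the Fresnel
# validity bound Mc ~ sqrt(N)/4: M0 stays far below Mc for large N
for N in (10 ** 4, 10 ** 8, 10 ** 12, 10 ** 16):
    M0, Mc = N ** 0.25, N ** 0.5 / 4
    print(f"N = {N:.0e}: M0 = {M0:.0f}, Mc = {Mc:.0f}, M0/Mc = {M0 / Mc:.1e}")
```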
Discrimination Property for Variable Exponents {#appendC}
==============================================
In this Appendix we prove that exponential sums with an exponential phase allow us to distinguish factors from non-factors of a given number. The discrimination property of the exponential sums with a fixed exponent rests on the fact that the sum can take the value unity only for integer values of $\ell$ which are factors of $N$. There is a number theoretical argument supporting this fact as long as the exponent $j$ in the sum (\[eqn:Gausslike\]) is fixed. This feature comes from the distribution of the values $\exp(2\pi i m^j \frac{N}{\ell})$ on the unit circle. For fixed $j$, it is impossible to hit the same point twice as $m$ increases, provided we use a truncation parameter $M$ below $\sqrt[2j]{N}$. However, for a variable power $m^m$, that is, an exponential phase, this non-recurrence property is not obvious. In this case we need to prove the discrimination property explicitly.
The value $\exp(2\pi i m^m\frac{N}{\ell})$ depends on the fractional part of $m^m\frac{N}{\ell}$ only. We hit the same point twice for different values $m$ and $n$ if and only if $$m^m\frac{N}{\ell}-n^n\frac{N}{\ell}=k
\label{rec}$$ where $k$ is an integer.
As in (\[phase:iden\]) we make use of the coprime rational representation of $\rho(N,\ell)=p/q$ and find that the phase factor $$\exp\left(2\pi i m^m\frac{N}{\ell}\right) = \exp\left(2\pi im^m\rho(N,\ell)\right) = \exp\left(2\pi i \frac{pm^m}{q}\right)$$ is a $q$-th root of unity. In particular, it is the $(pm^m)$-th one if we enumerate them counter-clockwise starting from the zeroth root $1=\exp(2\pi i \frac{0}{q})$. Note that $c$-th and $d$-th roots coincide if and only if $q$ is a divisor of $c-d$.
So the discrimination property rests on the fact that there are values $c$ and $d$ such that $$q\;\textrm{is not a divisor of}\; pc^c-pd^d \: .$$ The discrimination threshold depends not only on the number of such pairs, but also on the position of the corresponding roots of unity. Opposite roots of unity cancel each other in the sum, so the worst case occurs if these roots accumulate at the same position.
We consider two cases. For large $q$, the first numbers in the sequence $pm^m$ lie below $q$, so no pair chosen from the beginning of the sequence can fulfill the recurrence condition (\[rec\]); these terms therefore correspond to pairwise distinct roots. As a consequence, the absolute value of the sum cannot assume the value unity.
For small $q$, we show that the $p$-th root $\exp(2\pi i \frac{p}{q})$ and its conjugate $\exp(-2\pi i \frac{p}{q})$ appear in the sum, which leads to the elimination of their imaginary parts. According to Euler’s Theorem [@Rosen] there is an even $m$ such that $pm^m$ corresponds to the first root $\exp(2\pi i \frac{1}{q})$ and $j=m/2$ gives $pj^j$, which corresponds to the $(-1)$-root $\exp(-2\pi i \frac{1}{q})=\exp(2\pi i \frac{q-1}{q})$. The sum of this conjugate pair is a real number strictly below unity.
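Both cases can be verified exhaustively for a small example (a Python sketch; $N=1443$ and $M=20$ are illustrative choices, and `root_indices` is a hypothetical helper name):

```python
from math import gcd

def root_indices(N, ell, M):
    """Set of indices p * m^m mod q of the q-th roots of unity in the sum,
    where rho(N, ell) = N / ell = p / q is the coprime representation."""
    g = gcd(N, ell)
    p, q = N // g, ell // g
    return {(p * pow(m, m, q)) % q for m in range(1, M + 1)}

N, M = 1443, 20
for ell in range(2, N):
    if N % ell == 0:
        # a factor: every term is the zeroth root, so the sum is unity
        assert root_indices(N, ell, M) == {0}
    else:
        # a non-factor: at least two distinct roots appear, so the
        # normalized sum stays strictly below unity
        assert len(root_indices(N, ell, M)) >= 2
```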
M. Štefaňák, I. Jex and T. Kiss, Phys. Rev. Lett. **100**, 020501 (2008)
T. Kiss, L. Kecskés, M. Štefaňák and I. Jex, Phys. Scripta T **135**, 014055 (2009)
M. Štefaňák, T. Kiss and I. Jex, Phys. Rev. A **78**, 032306 (2008)
M. Štefaňák, T. Kiss and I. Jex, New J. Phys. **11**, 043027 (2009)
M. Štefaňák, T. Kiss, I. Jex and B. Mohring, J. Phys. A **39**, 14965 (2006)
M. Štefaňák, W. Merkel, W. P. Schleich, D. Haase and H. Maier, New J. Phys. **9**, 370 (2007)
M. Štefaňák, D. Haase, W. Merkel, M. S. Zubairy and W. P. Schleich, J. Phys. A **41**, 304024 (2008)
K. Pearson, Nature **72**, 294 (1905)
R. Brown, Phil. Mag. **4**, 161 (1828)
A. Einstein, Ann. Phys. (Leipzig) **17**, 549 (1905); **19**, 371 (1906)
M. Smoluchowski, Ann. Phys. (Leipzig) **21**, 756 (1906)
N. Guillotin-Plantard and R. Schott, [*Dynamic Random Walks: Theory and Application*]{}, Elsevier, Amsterdam (2006)
C. Papadimitriou, [*Computational Complexity*]{}, Addison Wesley, Reading (1994)
R. Motwani and P. Raghavan, [*Randomized Algorithms*]{}, Cambridge University Press, Cambridge (1995)
A. Sinclair, [*Algorithms for Random Generation and Counting, a Markov Chain Approach*]{}, Birkhauser Press, Boston (1993)
U. Schöning, 40th Annual Symposium on Foundations of Computer Science, IEEE, New York, 17 (1999)
M. Jerrum, A. Sinclair and E. Vigoda, in Proceedings of the 33th STOC, New York, 712 (2001)
Y. Aharonov, L. Davidovich and N. Zagury, Phys. Rev. A **48**, 1687 (1993)
D. Meyer, J. Stat. Phys. **85**, 551 (1996)
D. Meyer, Phys. Lett. A **223**, 337 (1996)
J. Watrous, J. Comput. Syst. Sci. **62**, 376 (2001)
E. Farhi and S Gutmann, Phys. Rev. A **58**, 915 (1998)
A. Childs, E. Farhi and S. Gutmann, Quantum Inf. Process. **1**, 35 (2002)
R. P. Feynman and A. R. Hibbs, [*Quantum Mechanics and Path Integrals*]{}, International Series in Pure and Applied Physics, McGraw-Hill, New York (1965)
I. Bialynicki-Birula, Phys. Rev. D **49**, 6920 (1994)
M. Hillery, J. Bergou and E. Feldman, Phys. Rev. A **68**, 032314 (2003)
E. Feldman and M. Hillery, Phys. Lett. A **324**, 277 (2004)
J. Košík and V. Bužek, Phys. Rev. A **71**, 012306 (2005)
E. Feldman and M. Hillery, J. Phys. A **40**, 11343 (2007)
F. W. Strauch, Phys. Rev. A **74**, 030301 (2006)
C. M. Chandrashekar, Phys. Rev. A **78**, 052309 (2008)
A. M. Childs, Phys. Rev. Lett. **102**, 180501 (2009)
N. B. Lovett, S. Cooper, M. Everitt, M. Trevers and V. Kendon, [*pre-print*]{} arXiv:0910.1024 (2009)
D. Bruß and G. Leuchs (Eds.), [*Lectures on Quantum Information*]{}, Wiley-VCH, Berlin (2006)
J. Kempe, Contemp. Phys. **44**, 307 (2003)
S. E. Venegas-Andraca, [*Quantum Walks for Computer Scientists*]{}, Morgan and Claypool (2008)
N. Konno, [*Quantum Walks*]{}, in [*Quantum Potential Theory*]{}, Eds. U. Franz and M. Schürmann, Lecture Notes in Mathematics **1954**, pp. 309-452, Springer-Verlag, Heidelberg (2008)
O. Mülken, A. Blumen, T. Amthor, Ch. Giese, M. Reetz-Lamour and M. Weidemueller, Phys. Rev. Lett. **99**, 090601 (2007)
O. Mülken, V. Bierbaum and A. Blumen, Phys. Rev. E **75**, 031121 (2007)
G. S. Engel, T. R. Calhoun, E. L. Read, T. K. Ahn, T. Mančal, Y. C. Cheng, R. E. Blankenship and G. R. Fleming, Nature **446**, 782 (2007)
M. Mohseni, P. Rebentrost, S. Lloyd and A. Aspuru-Guzik, J. Chem. Phys. **129**, 174106 (2008)
F. Caruso, A. W. Chin, A. Datta, S. F. Huelga and M. B. Plenio, J. Chem. Phys. **131**, 105106 (2009)
D. Aharonov, A. Ambainis, J. Kempe and U. Vazirani, in Proceedings of the 33th STOC, ACM, New York, 50 (2001)
A. Ambainis, E. Bach, A. Nayak, A. Vishwanath and J. Watrous, Proceedings of the 33th STOC, ACM, New York, 60 (2001)
N. Shenvi, J. Kempe and K. B. Whaley, Phys. Rev. A **67**, 052307 (2003)
A. Ambainis, SIAM J. Comput., **37**, 210 (2007)
A. M. Childs, J. Goldstone, Phys. Rev A **70**, 022314 (2004)
V. Kendon, Phil. Trans. R. Soc. A **364**, 3407 (2006)
F. Magniez, A. Nayak, J. Roland and M. Santha, in Proceedings of the 33th STOC, ACM, New York, 575 (2007)
A. Gabris, T. Kiss and I. Jex, Phys. Rev. A **76**, 062315 (2007)
V. Potoček, A. Gabris, T. Kiss and I. Jex, Phys. Rev. A **79**, 012325 (2009)
B. Tregenna, W. Flanagan, R. Maile and V. Kendon, New J. Phys. **5**, 83.1 (2003)
T. Miyazaki, M. Katori and N. Konno, Phys. Rev. A **76**, 012332 (2007)
C. M. Chandrashekar, R. Srikanth and R. Laflamme, Phys. Rev. A **77**, 032326 (2008)
E. Bach, S. Coppersmith, M. P. Goldschen, R. Joynt and J. Watrous, J. Comput. Syst. Sci. **69**, 562 (2004)
J. Kempe, Prob. Th. Rel. Fields **133** (2), 215 (2005)
H. Krovi and T. A. Brun, Phys. Rev. A **73**, 032341 (2006)
H. Krovi and T. A. Brun, Phys. Rev. A **74**, 042334 (2006)
V. Kendon, Math. Struct. in Comp. Sci **17**(6), 1169 (2006)
M. Varbanov, H. Krovi and T. A. Brun, Phys. Rev. A **78**, 022324 (2008)
A. Nayak and A. Vishwanath, [*pre-print*]{} arXiv:quant-ph/0010117v1 (2001)
N. Konno, Quantum Inform. Compu. **2**, 578 (2002)
N. Konno, J. Math. Soc. Japan **57**, 1179 (2005)
H. A. Carteret, M. E. H. Ismail and B. Richmond, J. Phys. A **36**, 8775 (2003)
G. Grimmett, S. Janson and P. F. Scudo, Phys. Rev. E **69**, 026119 (2004)
T. D. Mackay, S. D. Bartlett, L. T. Stephenson and B. C. Sanders, J. Phys. A **35**, 2745 (2002)
N. Inui, Y. Konishi and N. Konno, Phys. Rev. A **69**, 052323 (2004)
N. Inui, N. Konno and E. Segawa, Phys. Rev. E **72**, 056112 (2005)
M. Sato, N. Kobayashi, M. Katori and N. Konno, [*pre-print*]{} arXiv:0802.1997v1 (2008)
B. C. Sanders, S. D. Bartlett, B. Tregenna and P. L. Knight, Phys. Rev. A **67**, 042305 (2003)
H. Jeong, M. Paternostro and M. S. Kim, Phys. Rev. A **69**, 012310 (2004)
P. K. Pathak and G. S. Agarwal, Phys. Rev. A **75**, 032351 (2007)
W. Dür, R. Raussendorf, V.M. Kendon and H.-J. Briegel, Phys. Rev. A **66**, 052319 (2002)
K. Eckert, J. Mompart, G. Birkl and M. Lewenstein, Phys. Rev. A **72**, 012327 (2005)
C.M. Chandrashekar, Phys. Rev. A **74**, 032307 (2006)
O. Kálmán, T. Kiss and P. Földi, Phys. Rev. B **80**, 035327 (2009)
M. Karski, L. Förster, J. Choi, A. Steffen, W. Alt, D. Meschede and A. Widera, Science **325**, 174 (2009)
H. Schmitz, R. Matjeschk, Ch. Schneider, J. Glueckert, M. Enderlein, T. Huber and T. Schaetz, Phys. Rev. Lett. **103**, 090504 (2009)
A. Schreiber, K. N. Cassemiro, V. Potoček, A. Gabris, P. Mosley, E. Andersson, I. Jex and Ch. Silberhorn, [*pre-print*]{} arXiv:0910.2197 (2009)
G. Pólya, Mathematische Annalen **84**, 149 (1921)
E. W. Montroll, J. SIAM **4**, 241 (1956)
C. Domb, Proc. Cambridge Philos. Soc. **50**, 586 (1954)
B. D. Hughes, [*Random walks and random environments, Vol. 1: Random walks*]{}, Oxford University Press, Oxford (1995)
E.W. Montroll, in [*Random Walks on Lattices*]{}, edited by R. Bellman (American Mathematical Society, Providence, RI), Vol. **16**, 193 (1964)
P. Révész, [*Random walk in random and non-random environments*]{}, World Scientific, Singapore (1990)
V. Jarník, [*Diferenciální počet II*]{}, Academia, Prague, 121 (1976)
R. Wong, [*Asymptotic Approximations of Integrals*]{}, SIAM, Philadelphia (2001)
M. Abramowitz and I. A. Stegun, [*Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*]{}, Dover Publications (1972)
G. N. Watson, Quart. J. Math., Oxford Ser. 2 10, 266 (1939)
N. Bleistein and R. A. Handelsman, [*Asymptotic Expansions of Integrals*]{}, Holt, Rinehart and Winston, New York, (1975)
I. Wegener, [*Complexity Theory*]{}, Springer-Verlag, Berlin (2005)
S. Mertens and C. Moore, [*The Nature of Computation*]{}, Oxford University Press, Oxford (2007)
R. L. Rivest, A. Shamir and L. Adleman, Communications of the ACM **21**, 120 (1978)
A. J. Menezes, P. C. van Oorschot and S. A. Vanstone, [*Handbook of Applied Cryptography*]{}, CRC Press (1996)
P. Shor, SIAM J. Comput. **26**, 1484 (1997)
L. M. K. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood and I. L. Chuang, Nature (London) **414**, 883 (2001)
S. Lang, [*Algebraic Number Theory*]{}, Addison Wesley, New York (1970)
H. Davenport, [*Multiplicative Number Theory*]{}, Springer, New York (1980)
H. Maier and W. P. Schleich, [*Prime Numbers 101: A Primer on Number Theory*]{}, Wiley-VCH, New York (2008)
J. F. Clauser and J. P. Dowling, Phys. Rev. A **53**, 4587 (1996)
W. G. Harter, Phys. Rev. A **64**, 012312 (2001)
W. G. Harter, J. Mol. Spec. **210**, 166 (2001)
H. Mack, M. Bienert, F. Haug, M. Freyberger and W. P. Schleich, Phys. Stat. Sol. (b) **233**, 408 (2002)
H. Mack, M. Bienert, F. Haug, F. S. Straub, M. Freyberger and W. P. Schleich, in [*Experimental Quantum Computation*]{}, Eds. P. Mataloni and F. De Martini, Elsevier, Amsterdam (2002)
W. Merkel, O. Crasser, F. Haug, E. Lutz, H. Mack, M. Freyberger, W. P. Schleich, I. Sh. Averbukh, M. Bienert, B. Girard, H. Maier and G. G. Paulus, Int. J. of Mod. Phys. B **20**, 1893 (2006)
W. Merkel, I. Sh. Averbukh, B. Girard, G. G. Paulus and W. P. Schleich, Fortschr. Phys. **54**, 856 (2006)
A. A. Rangelov, J. Phys. B **42**, 021002 (2009)
M. S. Zubairy, Science **316**, 554 (2007)
M. Mehring, K. Müller, I. Sh. Averbukh, W. Merkel and W. P. Schleich, Phys. Rev. Lett. **98**, 120502 (2007)
T.S. Mahesh, N. Rajendran, X. Peng and D. Suter, Phys. Rev. A **75**, 062303 (2007)
X. Peng and D. Suter, Euro. Phys. Lett. **84**, 40006 (2008)
M. Gilowski, T. Wendrich, T. Müller, Ch. Jentsch, W. Ertmer, E. M. Rasel and W. P. Schleich, Phys. Rev. Lett. **100**, 030201 (2008)
D. Bigourd, B. Chatel, W. P. Schleich and B. Girard, Phys. Rev. Lett. **100**, 030202 (2008)
S. Weber, B. Chatel and B. Girard, Euro. Phys. Lett. **83**, 34008 (2008)
S. Weber, B. Chatel and B. Girard, in [*Conference On Lasers And Electro-Optics Quantum Electronics And Laser Science Conference*]{}, 3002 (2008)
M. Sadgrove, S. Kumar and K. Nakagawa, Phys. Rev. Lett. **101**, 180502 (2008)
H. F. Talbot, Phil. Mag. **9**, 401 (1836)
M. V. Berry and J. Goldberg, Nonlinearity **1**, 1 (1988)
M. V. Berry, Physica D **33**, 26 (1988)
C. Leichtle, I. Sh. Averbukh and W. P. Schleich, Phys. Rev. Lett. **77**, 3999 (1996)
C. Leichtle, I. Sh. Averbukh and W. P. Schleich, Phys. Rev. A **54**, 5299 (1996)
M. V. Berry, I. Marzoli and W. P. Schleich, Physics World **14**, 39 (2001)
J. Oppenländer, Ch. Häussler and N. Schopohl, Phys. Rev. B **63**, 024511 (2000)
M. Born and E. Wolf, [*Principles of Optics*]{}, Pergamon Press, Oxford (1993)
M. Sargent, M. O. Scully and W. E. Lamb, [*Laser Physics*]{}, Addison-Wesley, Reading (1974)
H. Iwaniec and E. Kowalski, [*Analytic Number Theory*]{}, American Mathematical Society, Providence (2004)
K. Ireland and M. Rosen, [*A Classical Introduction to Modern Number Theory*]{}, Springer-Verlag, Heidelberg (1990)
---
abstract: 'The density profile of simulated dark matter structures is fairly well-established, and several explanations for its characteristics have been put forward. In contrast, the radial variation of the velocity anisotropy has still not been explained. We suggest a very simple origin, based on the shapes of the velocity distributions functions, which are shown to differ between the radial and tangential directions. This allows us to derive a radial variation of the anisotropy profile which is in good agreement with both simulations and observations. One of the consequences of this suggestion is that the velocity anisotropy is entirely determined once the density profile is known. We demonstrate how this explains the origin of the $\gamma$–$\beta$ relation, which is the connection between the slope of the density profile and the velocity anisotropy. These findings provide us with a powerful tool, which allows us to close the Jeans equations.'
author:
- 'Steen H. Hansen'
title: 'Might we eventually understand the origin of the dark matter velocity anisotropy?'
---
Introduction
============
The natural outcome of cosmological structure formation theory is equilibrated dark matter (DM) structures. According to numerical simulations, the mass density profile, $\rho(r)$, of these structures changes from something with a fairly shallow profile in the central region, $\gamma \equiv d{\rm ln}\rho/d{\rm ln}r \sim -1$ (or maybe zero), to something steeper in the outer region, $\gamma \sim -3$ (or maybe steeper) [@nfw; @moore; @diemand] (see also [@reed; @stoehr; @navarro2004; @alister; @merritt; @ascasibar; @stadel08; @springel08]). For the largest structures, like galaxy clusters, there appears to be fair agreement between the numerical predictions and observations concerning the central steepness [@pointe; @sand; @buote; @broadhurst; @vikhlinin]; however, for smaller structures, like galaxies or dwarf galaxies, observations tend to indicate central cores [@salucci; @gilmore; @wilkinson] (see also [@rubin85; @courteau97; @palunas00; @blok01; @blok02; @salucci01; @swaters02; @corb03; @salucci2]). The various theoretical approaches still make different predictions [@taylornavarro; @hansenjeans; @austin; @dehnenmclaughlin; @gsmh; @henriksen; @henriksen2], varying from central cores to cusps.
The second natural quantity to consider (after the density profile) is the velocity anisotropy, which is defined through $$\beta \equiv 1 - \frac{\sigma^2_t}{\sigma^2_r} ~,$$ where $\sigma^2_t$ and $\sigma^2_r$ are the 1-dimensional tangential and radial velocity dispersions [@binneytremaine]. If most dark matter particles in an equilibrated structure were on purely radial orbits, then $\beta$ could be as large as 1, and for mainly tangential orbits $\beta$ could be arbitrarily large and negative. Since dark matter is collision-less, $\beta$ does not have to be zero, and it could in principle even vary as a function of radius.
Numerical N-body simulations of collision-less dark matter particles show that the dark matter velocity anisotropy is indeed radially varying, and that $\beta$ goes from roughly zero in the central region, to 0.5 towards the outer region [@colelacey; @carlberg]. Only very recently has this velocity anisotropy been measured to be non-zero in galaxy clusters [@hansenpiff], and it has even been observed to be increasing as a function of radius [@host2008], in excellent agreement with the numerical predictions. For smaller structures, like our own galaxy, this has not been observed yet. In principle $\beta$ of our Galaxy can be measured in an underground directional sensitive detector, however, it will require a large dedicated experimental programme [@hosthansen]. Very little theoretical understanding of the origin of this velocity anisotropy exists, and to my knowledge no successful derivation of it has been published (see, however, [@hansenmoore; @smgh07; @wojtak2008]). We will in this paper present an attempt towards deriving $\beta$.
Decomposition
=============
When analyzing the outcome of a numerically simulated dark matter structure one traditionally divides the equilibrated structure into bins (shells) in radius, or in potential energy. For spherical structures there is naturally no difference. We can now consider all the particles in a given radial bin, and calculate properties like the average density, angular momentum, velocity anisotropy, etc. In order to do this, we must decompose the velocity of each particle into the radial component and the two tangential components. The two tangential components can, for instance, be separated according to the total angular momentum of all the particles in the bin.
By summing over all the particles in the radial (or potential) bin, we thus get the velocity distribution function (VDF), which for a gas would have been a Gaussian set by the local gas temperature, $f(v) \sim {\rm exp} (-E/T)$. We are here discussing the 1-dimensional VDF (i.e. the one where the two other velocities are integrated over), and we are not assuming that the radial and tangential VDF’s are independent. Naturally, since dark matter particles are not collisional, the concept of temperature is not well defined for them. In numerically simulated structures one observes that the radial VDF is symmetric (with respect to particles moving in or out of the structure), and also the non-rotational part of the tangential VDF is symmetric. The asymmetry of the rotational part of the tangential VDF was discussed in [@schmidt] for the DM particles. For the structures with very little rotation, the two tangential VDF’s are virtually identical. To avoid any complications from the total angular momentum we will hereafter only discuss the two symmetric 1-dimensional VDF’s. For any given radial bin in a given DM structure, the shape of the VDF only depends on the instantaneous distribution of particles (which should be virtually time-independent for equilibrated structures), and is independent of the method by which the structure is selected.
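The decomposition described above can be sketched in a few lines of Python (an illustrative mock-up, not the analysis pipeline applied to the simulations; `mock_bin` and `beta_from_particles` are hypothetical names, and the bin is populated with Gaussian velocities of prescribed dispersions so that the recovered anisotropy is known in advance):

```python
import math
import random

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def beta_from_particles(particles):
    """Velocity anisotropy beta = 1 - sigma_t^2 / sigma_r^2 for one bin.

    Each velocity is decomposed into its radial component v_r = v . r_hat
    and the remaining tangential part; sigma_t is the 1-dimensional
    tangential dispersion (half of the summed tangential power).
    """
    vr2 = vt2 = 0.0
    for pos, vel in particles:
        r = math.sqrt(sum(x * x for x in pos))
        vr = sum(v * x for v, x in zip(vel, pos)) / r
        vr2 += vr * vr
        vt2 += sum(v * v for v in vel) - vr * vr
    n = len(particles)
    return 1.0 - (0.5 * vt2 / n) / (vr2 / n)

def mock_bin(n, sigma_r, sigma_t, seed=2):
    """Mock radial bin: Gaussian velocities with known dispersions."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        pos = [rng.gauss(0.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(x * x for x in pos)) or 1.0
        rhat = [x / r for x in pos]
        # orthonormal tangent basis at rhat
        a = [1.0, 0.0, 0.0] if abs(rhat[0]) < 0.9 else [0.0, 1.0, 0.0]
        t1 = cross(rhat, a)
        norm = math.sqrt(sum(x * x for x in t1))
        t1 = [x / norm for x in t1]
        t2 = cross(rhat, t1)
        vr = rng.gauss(0.0, sigma_r)
        v1, v2 = rng.gauss(0.0, sigma_t), rng.gauss(0.0, sigma_t)
        out.append((rhat, [vr * rhat[i] + v1 * t1[i] + v2 * t2[i]
                           for i in range(3)]))
    return out

beta_biased = beta_from_particles(mock_bin(40000, 1.0, 0.5))
beta_isotropic = beta_from_particles(mock_bin(40000, 1.0, 1.0))
```

For $\sigma_r=1$ and $\sigma_t=0.5$ the estimator recovers $\beta\approx 0.75$, while isotropic input gives $\beta\approx 0$.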
When analyzing dark matter structures resulting from cosmological simulations, we find that the shape of the [*radial*]{} VDF changes as a function of radius [@wojtak2005; @hansenzemp; @faltenb; @fairbairn]. In particular, the bins in the inner region tend to have long tails (more particles at high velocity compared to a Gaussian), whereas bins at larger radii tend to have a stronger reduction in high velocity particles. This is exemplified in figure \[fig:vdf\], where the upper curves (blue and green) show the radial VDF. The open diamonds (blue) come from a radial bin in the inner region, whereas the stars (green) are from a bin further out. The VDF’s are normalized such that a comparison is possible, and velocity is normalized to the dispersion. This simulated cosmological data is from the Local Group simulation of [@moore2001]. The lower curves (red and black) are the tangential VDF from the same two bins; however, for the [*tangential*]{} VDF there is a striking resemblance: in fact, to a first approximation, these two tangential VDF’s from different radial bins look identical.
![The velocity distribution function for 2 different radial bins from a simulated cosmological DM structure [@moore2001]. The upper (green and blue) curves are the radial VDF, and the lower (black and red) are the tangential VDF. The open diamonds are from the inner bin, and stars are from the outer bin. It is clear that the tangential VDF’s are very similar to each other, whereas the radial VDF’s differ in shape both at small and large velocities. All figures have velocity normalized to the dispersion, and random y-axis normalization to enhance visibility.[]{data-label="fig:vdf"}](f1.pdf){width="49.00000%"}
![The velocity distribution function for 2 different radial bins from the Eddington formula for an NFW density profile. The open diamonds are from an inner bin ($r=0.1$), and stars are from an outer bin ($r=10$). There is a striking resemblance with the radial VDF’s from the cosmological simulation in the upper curves in figure \[fig:vdf\]. Same normalization as figure \[fig:vdf\].[]{data-label="fig:edd"}](f2.pdf){width="49.00000%"}
The most frequently used approach to discuss DM structures is through the first Jeans equation, which relates the velocity dispersions to the density profiles. If we have some knowledge about some of the quantities entering the Jeans equation, then we can solve for the others. One example of this was presented in [@dehnenmclaughlin], who demonstrated how to derive a generalized NFW density profile by assuming both that the pseudo phase-space density is a power-law in radius [@taylornavarro] and that there is a linear relation between the velocity anisotropy and the density slope [@hansenmoore]. A somewhat generalized approach was presented in [@zait], where the authors demonstrated how to derive the velocity anisotropy by assuming simple forms for both the density profile and the pseudo phase-space density. The fundamental problem with this kind of approach is that any departure from truth in the assumptions will lead to a departure from correctness in the results. [@zait] demonstrated this in a very convincing way, by deriving significantly different velocity anisotropy profiles by just changing between NFW or Sersic density profiles as input. Another related problem with these approaches is that the assumption of the pseudo phase-space density being a simple power-law was recently demonstrated to be oversimplified. An open question was which component (radial, tangential, or something else) of the velocity dispersion in the pseudo phase-space gives the best approximation to a power-law in radius [@hansenzemp; @knollmann]. [@schmidt09] demonstrated that different numerically simulated structures are best fitted by different forms of the pseudo phase-space, and hence that there is no universal simple behavior of the pseudo phase-space.
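For reference, the first Jeans equation invoked here takes, for a spherical non-rotating system and in the standard notation of [@binneytremaine], the form $$\frac{d \left( \rho \sigma_r^2 \right)}{dr} + \frac{2 \beta(r)}{r}\, \rho \sigma_r^2 = - \rho\, \frac{G M(r)}{r^2} ~,$$ where $M(r)$ is the mass enclosed within radius $r$. Written this way it is explicit that knowledge of $\rho(r)$ and $\sigma_r(r)$ alone does not close the system: an additional assumption must supply $\beta(r)$.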
The shape of the radial VDF
---------------------------
For ergodic structures, that is, structures whose distribution function depends only on energy (hence with $\beta=0$), we can use the Eddington formula to get the VDF at any radius [@eddington] (see also [@binney82; @cuddeford; @evansan]). The Eddington formula depends only on the radial dependence of the density profile of the structure, and by assumption the VDF is the same in the radial and tangential directions, $f(v,r) = $ function$(\rho(r))$. It is natural to interpret this in the following way. The structure is in equilibrium, so there is detailed balance for each phase-space element. The velocity of each particle is decomposed into radial and tangential components, and for any infinitesimal time step the radial component of any individual particle can tell that it is moving in a changing density and changing potential (the radial component of any individual particle is moving directly inwards or outwards). It is therefore natural that the radial VDF is imprinted by the radial variation in the density and potential. For a truncated NFW density profile we can, for example, get the VDF at radii 0.1 and 10, in units of the characteristic radius, see figure \[fig:edd\]. By comparison of figures \[fig:vdf\] and \[fig:edd\] it is clear that the radial VDF of the cosmological simulation indeed looks very similar to the VDF from the Eddington formula. It is therefore tempting to suggest that the radial VDF is, to a first approximation, identical to the one which results from applying the Eddington formula to the given density profile.
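To make this concrete, the following sketch (our own illustration, not code from the paper) tabulates the ergodic DF for an untruncated NFW profile, for which the relative potential is analytic, $\Psi(r)=4\pi\ln(1+r)/r$ in hypothetical units $G=\rho_s=r_s=1$, and checks the inversion by recovering the local density from the resulting VDF. The boundary term of the Eddington formula is dropped, since $d\rho/d\Psi$ vanishes at $\Psi=0$ for this profile.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import InterpolatedUnivariateSpline

def rho(r):                    # NFW density, units G = rho_s = r_s = 1
    return 1.0 / (r * (1.0 + r) ** 2)

def psi(r):                    # relative potential Psi = 4*pi*ln(1+r)/r
    return 4.0 * np.pi * np.log1p(r) / r

# Tabulate rho(Psi); Psi decreases with r, so reverse the arrays for the spline.
r_grid = np.logspace(-3, 5, 2000)
psi_grid, rho_grid = psi(r_grid)[::-1], rho(r_grid)[::-1]
d2rho = InterpolatedUnivariateSpline(psi_grid, rho_grid, k=4).derivative(2)
psi_min = psi_grid[0]

def f_of_E(E):
    # Eddington formula f(E) ~ int_0^E (d2rho/dPsi2) dPsi / sqrt(E - Psi);
    # the substitution Psi = E - t^2 removes the square-root singularity.
    g = lambda t: d2rho(max(E - t * t, psi_min))
    val, _ = quad(g, 0.0, np.sqrt(E), limit=200)
    return 2.0 * val / (np.sqrt(8.0) * np.pi ** 2)

def vdf(v, r):
    # Speed distribution at radius r: p(v) = 4 pi v^2 f(Psi(r) - v^2/2).
    E = psi(r) - 0.5 * v * v
    return 4.0 * np.pi * v * v * f_of_E(E) if E > 0.0 else 0.0

# Self-consistency check: integrating the VDF over v must recover rho(r).
r0 = 0.1
v_s = np.linspace(0.0, np.sqrt(2.0 * psi(r0)), 201)
p = np.array([vdf(v, r0) for v in v_s])
rho_rec = np.sum((p[1:] + p[:-1]) * np.diff(v_s)) / 2.0
print(rho_rec / rho(r0))       # close to 1 if the inversion is consistent
```

Evaluating `vdf` on the same velocity grid at the two radii of figure \[fig:edd\] reproduces the qualitative shapes shown there.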
Now, the actual radial VDF (for a given cosmologically simulated structure) will differ slightly from the VDF resulting from the Eddington formula, since the latter was based on the assumption that $\beta=0$. Specifically, the VDF from the Eddington formula also gives $\sigma_r$, and to ensure consistency with the Jeans equation one must have $\beta=0$. However, we will here use this VDF [*as a first approximation*]{} to the radial VDF, and we will present a quantitative comparison in a future paper. Recently, [@vanhese] showed that for a large class of theoretical models this is an excellent approximation (see their figure 5).
The shape of the tangential VDF
-------------------------------
It is somewhat less trivial to argue for (or claim) the shape of the tangential VDF. For an infinitesimal time step, the tangential component of any individual particle’s velocity is moving in constant density and constant potential (the tangential velocity component of any individual particle is moving, well, tangentially, and we assume spherical symmetry). We still assume that the structure is in equilibrium with no time variation. This means that [*as a first approximation*]{} the tangential VDF can be thought of as the one resulting from an infinite and homogeneous medium, where both the density and the potential are constant everywhere. This argument is similar to the Jeans swindle, where the homogeneous medium implies a constant potential. Naturally, such an infinite structure is not gravitationally stable against perturbations, but we can instead approximate it in the following way.
Let us consider a density profile which is a power-law in radius over many orders of magnitude and is then truncated. One example is an NFW profile, where the central density slope is $-1$ and the truncation occurs beyond the scale radius. We thus consider the VDF in a bin at a radius many orders of magnitude smaller than the scale radius. We can now consider a generalized double power-law profile, where the central slope can be more shallow than $-1$, and we can use the Eddington formula to extract the VDF for any central slope. By lowering the central slope towards zero, we get a structure which in principle is stable against perturbations, but at the same time approaches constant density and constant potential in the central region. The resulting VDF (extrapolated to zero slope) has been discussed by [@zurichstudents] and has the shape $$f(v) = n(\rho) \, \left( 1 - \frac{1-q}{3-q} \,
\left( \frac{v}{\sigma} \right) ^2 \right) ^ \frac{q}{1-q} \, ,
\label{eq:tsallis}$$ with $q=5/3$, where the prefactor $n(\rho)$ indicates that the normalization depends only on the local density. This form is known as a q-generalized exponential [@tsallis]. One should note that a naive identification with polytropes (where $f(E) \sim E^{(n-3/2)}$) breaks down, since the usual connection between density and potential, $\rho \sim \Psi ^n$, is not valid for such shallow slopes [@binneytremaine].
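To make the form explicit: with $q=5/3$ the exponents evaluate to $(1-q)/(3-q)=-1/2$ and $q/(1-q)=-5/2$, so eq. (\[eq:tsallis\]) reduces to $f(v) = n\,(1+v^2/(2\sigma^2))^{-5/2}$, while the limit $q\rightarrow 1$ recovers a Gaussian. A short numerical check (our own illustration):

```python
import numpy as np

def f_q(v, sigma=1.0, q=5.0 / 3.0):
    """The q-generalized exponential of eq. (tsallis), unnormalized."""
    return (1.0 - (1.0 - q) / (3.0 - q) * (v / sigma) ** 2) ** (q / (1.0 - q))

v = np.linspace(0.0, 5.0, 101)

# q = 5/3 reduces to (1 + v^2/(2 sigma^2))^(-5/2)
assert np.allclose(f_q(v), (1.0 + 0.5 * v ** 2) ** (-2.5))

# q -> 1 recovers the Gaussian exp(-v^2 / (2 sigma^2))
assert np.allclose(f_q(v, q=1.0 + 1e-8), np.exp(-0.5 * v ** 2), rtol=1e-4)
print("q = 5/3 gives a v**-5 power-law tail, much heavier than a Gaussian")
```

The heavy $v^{-5}$ tail of the $q=5/3$ form is exactly what the high-energy suppression discussed below must cut off.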
![The VDF as function of velocity at three different radii; upper curves correspond to the outer radial bin, and lower curves correspond to an inner radial bin. The green stars are the radial VDF’s, whereas the red diamonds are the tangential VDF’s. The simulation is the very non-cosmological “tangential orbit instability” from [@hansenzemp]. The black lines are of the form in eq. (\[eq:tsallis\]). The velocity is normalized to the dispersion, and the y-axis has been shifted vertically for two of the bins to enhance visibility. One clearly sees that the tangential VDF’s are virtually identical, whereas the radial VDF’s vary as function of radius.[]{data-label="fig:tangent"}](f3.pdf){width="49.00000%"}
![Same as fig. \[fig:tangent\] but with log-scales to make the suppression at high energy more visible. The normalization of the y-axis is arbitrary, to enhance visibility.[]{data-label="fig:tangent.log"}](f4.pdf){width="49.00000%"}
When considering the shape of the tangential VDF from simulations in figs. \[fig:tangent\] and \[fig:tangent.log\] we see that this form indeed provides a very good fit for all radii, at least for $v$ smaller than roughly $2\sigma$. The structure in figs. \[fig:tangent\] and \[fig:tangent.log\] is from a very non-cosmological simulation (the “tangential orbit instability” of [@hansenzemp]). The same form fits the tangential VDF’s from a cosmological simulation (lower lines in fig. \[fig:vdf\]) equally well.
Clearly, this form in eq. (\[eq:tsallis\]) has an extended tail of high energy particles, which would not be bound by the equilibrated structure. The suppression of high energy particles due to the finite radial extent of the structure is naturally included through the Eddington formula for the radial component, and we therefore suggest that the [*tangential*]{} VDF must have a high-energy tail which follows the [*radial*]{} VDF. Effectively, this means that for large velocities the tangential component of the velocity might as well be moving in the radial direction. This corresponds to the fact that the tangential velocity component of any individual particle actually moves somewhat radially after a finite time-interval.
When looking at fig. \[fig:tangent.log\] we see that the actual suppression is even slightly larger for these high-energy particles; however, the difference between the suggested and the actual suppression at high energy is very small. When looking at the number of particles (the integral under the curve in fig. \[fig:tangent\]) we find that the difference is virtually zero.
In conclusion, the tangential VDF is surprisingly well fit by the phenomenologically predicted shape in eq. (\[eq:tsallis\]), and with a high-energy tail suppression corresponding to the tail of the radial VDF.
To emphasize the general nature of the shape of the tangential VDF, we also present the radial and tangential VDF’s of a cosmological simulation of a galaxy, including cooling gas, star formation, and supernova feedback, from [@sommerlarsen2; @sommerlarsen]. In fig. \[fig:k15\] we see that the dark matter VDF also in this case has the suggested shape.
![The velocity distribution function as function of velocity for a galaxy from a cosmological simulation including both gas and stars from [@sommerlarsen2; @sommerlarsen]. Green diamonds are radial VDF’s, red stars are tangential VDF’s, and the lines are of the form in eq. (\[eq:tsallis\]).[]{data-label="fig:k15"}](f5.pdf){width="49.00000%"}
The velocity anisotropy
=======================
Now, after having established the shape of both the radial and tangential VDF’s, the velocity anisotropy at any given radius can easily be determined, as we will show later in this section, since it is just an integral over these distributions, $\sigma^2 = \int v^2
f(v) dv/\int f(v) dv$. The shape of the radial VDF changes as a function of radius (section 2.1), whereas the shape of the tangential VDF is virtually constant (section 2.2), and it is therefore natural to expect that the velocity anisotropy will also change as a function of radius. Since the radial VDF is generally more flat-topped than the tangential one (see fig. \[fig:tangent.log\], top lines), we expect $\beta$ to be positive. Only when the density slope approaches zero (e.g. a central core) will the radial VDF approach the tangential one, and hence $\beta \rightarrow 0$ (see fig. \[fig:tangent.log\], bottom lines).
Let us assume that the radial density profile is given, e.g. by a truncated NFW profile. In this case the radial VDF is completely determined through the Eddington formula. The tangential VDF is given by eq. (\[eq:tsallis\]), and at first sight there are two free parameters, namely the $\sigma$ entering eq. (\[eq:tsallis\]) and the normalization. For a given $\sigma$ we can determine the normalization, since the particle number is conserved when integrating over the radial or tangential VDF, $\int f_{rad} dv = \rho = \int f_{tan} dv$. This leaves only the $\sigma$ entering eq. (\[eq:tsallis\]) to be determined. One could argue that it most likely is either $\sigma_r$, $\sigma_{tan}$ or $\sigma_{tot}$ which should enter here. We will allow ourselves to be guided by the results of numerical simulations, and use the average $\sigma_{tot}$, since that gives a fairly good approximation to all the tangential VDF’s from the simulations discussed above. We will present a more quantitative test of this in a future paper, but the effect is modest. E.g. for the truncated NFW profile at the radius where the slope is $-2$, we find $\beta=0.265$ when using $\sigma = \sigma_{tot}$, whereas using $\sigma = \sigma_r$ ($\sigma_{tan}$) instead gives $\beta = 0.22$ ($0.31$).
It is now straightforward to calculate the velocity anisotropy, $\beta$, at any radius for any given density profile with no free parameters. In practice we do it iteratively in the following way. 1) First we find the radial VDF, using the Eddington formula. 2) Then we write the tangential VDF, which is eq. (\[eq:tsallis\]), where we initially use $\sigma = \sigma_r$ (initially assuming $\beta$ to be very small). 3) Then we replace this form at high momenta with the radial VDF (as described in detail in section 2), normalized in such a way that the particle number is conserved (between radial and tangential). 4) It is now trivial to calculate $\beta$ as an integral over these distribution functions, and if it differs from the initial assumption, we re-iterate the entire process with the calculated $\beta$. This means explicitly that we return to point 2) with the newly calculated $\beta$ and $\sigma = \sigma_{tot}$. In practice we repeat until $\beta$ has converged to an accuracy of $0.01$.
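A minimal sketch of this fixed-point iteration is given below (our own toy illustration, not the actual pipeline used for the figures). Three labeled assumptions are made: the true radial VDF from the Eddington formula is replaced by a hypothetical flat-topped placeholder $\exp(-v^4/2)$; the tail matching is implemented by simple continuity at $v_c = 2\sigma$; and $\sigma_{tot}$ is interpreted per velocity component.

```python
import numpy as np
from scipy.integrate import quad

def f_rad(v):
    # hypothetical flat-topped stand-in for the Eddington-formula radial VDF
    return np.exp(-0.5 * v ** 4)

def f_tsallis(v, sigma):
    # eq. (tsallis) with q = 5/3: (1 + v^2/(2 sigma^2))^(-5/2)
    return (1.0 + v * v / (2.0 * sigma * sigma)) ** (-2.5)

def f_tan(v, sigma, vc):
    # Tsallis core for v < vc; beyond vc the tail follows the radial VDF,
    # rescaled so the two pieces join continuously (assumed matching rule).
    if v < vc:
        return f_tsallis(v, sigma)
    return f_tsallis(vc, sigma) / f_rad(vc) * f_rad(v)

def dispersion2(f, vmax=30.0):
    n, _ = quad(f, 0.0, vmax, limit=200)
    m2, _ = quad(lambda v: v * v * f(v), 0.0, vmax, limit=200)
    return m2 / n

sig_r2 = dispersion2(f_rad)

# Fixed-point iteration: start from beta ~ 0 (sigma = sigma_r), recompute beta
# from the moments, feed sigma_tot back in, stop once |d beta| < 0.01.
beta, beta_old = 0.0, np.inf
for _ in range(50):
    beta_old = beta
    sigma = np.sqrt(sig_r2 * (3.0 - 2.0 * beta) / 3.0)  # per-component sigma_tot
    sig_t2 = dispersion2(lambda v: f_tan(v, sigma, 2.0 * sigma))
    beta = 1.0 - sig_t2 / sig_r2                        # one tangential component
    if abs(beta - beta_old) < 0.01:
        break
print("converged beta =", round(beta, 2))
```

With these toy ingredients the iteration converges in a few steps to a positive $\beta$, since the fast radial tail suppresses the tangential second moment; the quantitative value depends on the assumed radial shape and matching rule.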
![The velocity anisotropy as function of radius. Blue triangles are for an NFW density profile, the red diamonds for the density profile suggested by [@navarro2004], and the green stars are for $\rho(r) \sim 1/(1+r^2)^2$, which has a central core (x-axis normalized to the scale radius). The squares with error-bars are from the CLEF simulation [@kay; @springel], where the 67 most relaxed galaxy clusters at $z=0$ have been selected (x-axis normalized to $r_{2500}$). The error-bars correspond to 1 sigma scatter over the 67 most relaxed clusters [@host2008]. The $\beta$-profiles from pure dark matter simulations (e.g. [@diemand]) are in good agreement with this radial behaviour.[]{data-label="fig:beta.r"}](f6.pdf){width="49.00000%"}
In fig. \[fig:beta.r\] we present the radial dependence of $\beta$ for 3 density profiles, namely an NFW profile with a truncation at large radius, a profile like the one advocated by [@navarro2004], and finally a profile of the form $\rho(r) \sim 1/(1+r^2)^2$, which has a central core. We see that the anisotropy increases in a way similar to what is observed in numerical simulations, namely from something small in the central region, to something of the order 0.4 towards the outer region. The orange squares are from the CLEF numerical simulation [@kay; @springel], where the error-bars represent the 1$\sigma$ scatter over the 67 most relaxed clusters [@host2008]. Observations of $\beta (r)$ in galaxy clusters are in excellent agreement with these numerical predictions [@host2008]. The radial scale of the simulated $\beta$ is $r_{2500}$, which does not have to coincide with the scale radius of the analytical profiles. This gives a free parameter (of the order unity) in the normalization of the x-axis, which we simply set to 1. In a similar way the analytical profiles are all normalized to their respective scale radii, which means that they could also have different $r_{2500}$.
Since we have suggested the shape of the full VDF’s, we can naturally also get higher order moments, such as the kurtosis, as function of radius. We thus also predict that the radial profiles of the higher velocity moments are fully determined by the shape of the density profile.
Discussion
==========
One of the consequences of the above considerations is that the radial variation of $\beta$ is dictated by the density profile. That is, given any density profile, the velocity anisotropy is entirely determined, as long as the structure has had time to equilibrate. We are thus stating explicitly that $\beta$ is unrelated to the infall of matter in the outer region, and that the only connection $\beta$ has to the formation process is through the radial structure of the density profile. This is naturally supported by the very non-cosmological simulations (see figs. \[fig:tangent\], \[fig:tangent.log\]), which also produce a $\beta$-profile in agreement with cosmological simulations [@hansenzemp].
![The velocity anisotropy, $\beta$, as function of the density slope, $\gamma$. The blue line (triangles) is for an NFW profile, and the black solid line (crosses) is for a power-law density profile. The orange line (squares) is for the non-trivial double-bump profile, the green one (stars) for the $\rho = 1/(1+r^2)^2$ profile, and the red diamonds are for the profile suggested in [@navarro2004]. The dashed straight line is the suggestion from [@hansenstadel]. These results are roughly fit by $\beta = -0.13\gamma$.[]{data-label="fig:gamma.beta"}](f7.pdf){width="49.00000%"}
Another consequence is that $\beta$ must always be positive in equilibrated structures, since the density profile at most can develop a core (see fig. \[fig:beta.r\]). This is also in good agreement with dark matter simulations [@dehnenmclaughlin; @barnes; @faltenb; @bellovary]. Virtually no numerical simulations find negative velocity anisotropy, and when they do, it is usually only in the very inner region, where numerical convergence may be questioned. A few analytical treatments have predicted a negative $\beta$ in the inner region [@zait]; however, this result may be an artefact of assuming that the pseudo phase-space density is a perfect power-law in radius, which is generally not correct [@schmidt09]. If future high-resolution numerical simulations should instead establish that the central velocity anisotropy is negative (in agreement with the predictions of [@zait]), then that would prove that the present analysis is somehow flawed.
It has previously been suggested that a connection between the anisotropy and the slope of the density profile should exist. This connection appears to hold even for structures which have profiles with non-trivial radial variation in $d{\rm log}\rho/d{\rm log}r$ [@hansenmoore]. We can now test this connection. In figure \[fig:gamma.beta\] we show $\beta$ as function of the density slope for the NFW profile (solid line, blue triangles), the profile suggested by [@navarro2004] (red diamonds), and also for a “double-bump” structure (the sum of two spatially separated profiles of the form $1/(1+r)^3$; orange squares). This double-bump profile has a very non-trivial radial variation of $d{\rm ln}\rho/d{\rm ln}r$, which cannot be well approximated by any generalized double power-law profile. All these structures appear to land near a connection roughly given by $\beta = -0.13\gamma$. We also show the results for the $\rho = 1/(1+r^2)^2$ profile (green stars), as well as for single power-law profiles (fat black line, crosses). The dashed line is $\beta = -0.2(\gamma-0.8)$, as suggested in [@hansenstadel] based on a set of cosmological and non-cosmological simulations. These two results differ by approximately 0.1 in $\beta$. We indeed see that all structures land in a relatively narrow band in the $\gamma$–$\beta$ plane, which likely explains the origin of the $\gamma$–$\beta$ relations.
The most important practical implication of this suggestion is, that it will allow us to close the Jeans equation. As is well-known, the Jeans equation depends on the density, dispersion, anisotropy and the total mass. Now, having demonstrated (or at least suggested strongly) that the anisotropy is uniquely determined once the density is known, we see that it is possible to close the Jeans equation for systems that are fully relaxed.
We have been making simplifying assumptions above, which all need to be tested through high resolution simulations. First, we assume that the radial VDF is very similar to the one appearing from the Eddington formula, even in the presence of a non-zero $\beta$. We also assume that the $\sigma$ entering eq. (\[eq:tsallis\]) is the total one. If the correct $\sigma$ to use is instead closer to $\sigma_{tan}$, then $\beta$ will be slightly larger, but the radial variation will remain. From these assumptions we estimate the accuracy of the present work to be about 0.1 or up to about $30\%$ in $\beta(r)$.
One could naturally ask why and how the radial and tangential VDF’s get their shapes. It is slightly disappointing that it is not a deep physical principle, like a generalized entropy, which is responsible. Instead it is simply the density profile (either radially varying or tangentially constant) which, through the Eddington formula, demands that the VDF’s take on these forms. These forms will therefore appear whenever there has been a sufficient amount of violent relaxation to allow enough energy exchange between the particles.
Summary
=======
The velocity of any particle can be decomposed into the radial and tangential components, and when summing over all particles in a radial bin, we get the particle velocity distribution function, the VDF. We suggest that both the radial and tangential VDF’s are given through the Eddington formula. The radial one comes from the radially changing density profile, and the tangential VDF arises when considering a structure with constant density and potential. This is because the tangential component of the velocity as a first approximation is moving in constant density and constant potential. In addition the tangential VDF is reduced for high-energy particles in accordance with the radial VDF, to ensure that the particles remain bound to the structure. These phenomenological predictions are in remarkably good agreement with the results from numerical simulations of collisionless particles, both of structures of cosmological origin as well as highly non-cosmological origin.
Under these suggestions it is straightforward to derive the velocity anisotropy profile, $\beta (r)$, with no free parameters. This is shown to increase radially from something small (possibly zero) in the center, to something large and positive (possibly around 0.4) towards the outer region.
We have thus demonstrated that the velocity anisotropy is entirely determined from the density profile. This allows us to close the Jeans equation, since $\beta$ is no-longer a free parameter.
[**Acknowledgements**]{}\
It is a pleasure to thank Jin An and Ole Host for discussions, and Ben Moore and Jesper Sommer-Larsen for kindly letting me use their simulations. The Dark Cosmology Centre is funded by the Danish National Research Foundation.
[99]{}
Ascasibar, Y., & Gottloeber, S. 2008, arXiv:0802.4348
Austin, C. G., Williams, L. L. R., Barnes, E. I., Babul, A., & Dalcanton, J. J. 2006, ApJ, 634, 756
Barnes, E. I., Williams, L. L. R., Babul, A., & Dalcanton, J. J. 2007, ApJ, 654, 814
Bellovary, J. M., et al. 2008, arXiv:0806.3434
Binney, J. 1982, MNRAS, 200, 951
Binney, J., & Tremaine, S. 1987, Princeton, NJ: Princeton University Press, 747 p.
de Blok, W. J. G., McGaugh, S. S., Bosma, A., & Rubin, V. C. 2001, ApJ, 552, 23
de Blok W. J. G., Bosma A., & McGaugh S. S. 2003, MNRAS, 340, 657
Broadhurst, T. J., Takada, M., Umetsu, K., Kong, X., Arimoto, N., Chiba, M., & Futamase, T. 2005, ApJ, 619, L143
Buote, D. A., & Lewis, A. D. 2004, ApJ, 604, 116
Carlberg, R. G., et al. 1997, ApJ, 485, L13
Cole, S., & Lacey, C. 1996, MNRAS, 281, 716
Corbelli, E. 2003, MNRAS, 342, 199
Courteau, S. 1997, AJ, 114, 2402
Cuddeford, P. 1991, , 253, 414
Dehnen, W., & McLaughlin, D. 2005, MNRAS, 363, 1057
Diemand, J., Moore, B., & Stadel, J. 2004, MNRAS, 353, 624
Eddington, A. S. 1916, MNRAS, 76, 572
Evans, N. W., & An, J. H. 2006, Phys. Rev. D., 73, 023524
Fairbairn, M., & Schwetz, T. 2008, arXiv:0808.0704
Faltenbacher, A., & Diemand, J. 2006, MNRAS, 369, 1698 \[arXiv:astro-ph/0602197\]
Gilmore, G., Wilkinson, M. I., Wyse, R. F. G., Kleyna, J. T., Koch, A., Evans, N. W., & Grebel, E. K. 2007, ApJ, 663, 948
Gonz[á]{}lez-Casado, G., Salvador-Sol[é]{}, E., Manrique, A., & Hansen, S. H. 2007, arXiv:astro-ph/0702368
Graham, A. W., Merritt, D., Moore, B., Diemand, J., & Terzi[ć]{}, B. 2006, , 132, 2701
Hansen, S. H., Egli, D., Hollenstein, L., & Salzmann, C. 2005, New Astron., 10, 379
Hansen, S. H., & Moore, B. 2006, New Astron., 11, 333
Hansen, S. H., Moore, B., Zemp, M., & Stadel, J. 2006, Journal of Cosmology and Astro-Particle Physics, 1, 14
Hansen, S. H., & Stadel, J. 2006, Journal of Cosmology and Astro-Particle Physics, 5, 14
Hansen S. H. 2004 MNRAS, 352, L41
Hansen, S. H., & Piffaretti, R. 2007, , 476, L37
Host, O., & Hansen, S. H. 2007, JCAP, 0706, 016
Host, O., Hansen, S. H., Piffaretti, R., Morandi, A., Ettori, S., Kay, S. T., & Valdarnini, R. 2009, ApJ to appear, arXiv:0808.2049
Henriksen, R. N. 2007, arXiv:0709.0434
Henriksen, R. N. 2008, submitted to ApJ
Kay, S. T., da Silva, A. C., Aghanim, N., Blanchard, A., Liddle, A. R., Puget, J.-L., Sadat, R., & Thomas, P. A. 2007, , 377, 317
Knollmann, S. R., Knebe, A. & Hoffman, Y. 2008, arXiv:0809.1439
Merritt, D., Graham, A. W., Moore, B., Diemand, J., & Terzi[ć]{}, B. 2006, , 132, 2685
Moore, B., Governato, F., Quinn, T., Stadel, J. & Lake G. 1998, ApJ, 499, 5
Moore, B., Calcaneo-Roldan, C., Stadel, J., Quinn, T., Lake, G., Ghigna, S., & Governato, F. 2001, Phys. Rev. D., 64, 063508
Navarro, J. F., Frenk, C. S., & White, S. D. M.1996, ApJ, 462, 563
Navarro, J. F., et al. 2004, MNRAS, 349, 1039
Navarro, J. F., et al. 2008, arXiv:0810.1522
Palunas, P., & Williams, T. B. 2000, AJ, 120, 2884
Pointecouteau, E., Arnaud, M., & Pratt, G. W. 2005, Astron. & Astrophys, 435, 1
Reed, D., et al. 2003, MNRAS, 357, 82
Rubin, V. C., Burstein, D., Ford, W. K., & Thonnard, N. 1985, ApJ, 289, 81
Salvador-Solé, E., Manrique, A., González-Casado, G., & Hansen, S. H. 2007, , 666, 181
Salucci, P. 2001, MNRAS, 320L, L1
Salucci, P., Walter, F., & Borriello, A. 2003, , 409, 53
Salucci, P., Lapi, A., Tonini, C., Gentile, G., Yegorova, I., & Klein, U. 2007, , 378, 41
Sand, D. J., Treu, T., Smith, G. P., & Ellis, R. S. 2004, Astrophys. J. 604, 88
Schmidt, K., Hansen, S. H., An, J. H., Williams, L. L. R., & Maccio, A. V. 2008, submitted to ApJ
Schmidt, K. B., Hansen, S. H., & Macci[ò]{}, A. V. 2008, , 689, L33
Sommer-Larsen, J. 2006, , 644, L1
Sommer-Larsen, J., G[ö]{}tz, M., & Portinari, L. 2003, , 596, 47
Springel, V. 2005, , 364, 1105
Stadel, J., Potter, D., Moore, B., Diemand, J., Madau, P., Zemp, M., Kuhlen, M., & Quilis, V. 2008, arXiv:0808.2981
Stoehr, F. 2004, MNRAS, 365, 147
Swaters, R. A., Madore, B. F., van den Bosch, F. C., & Balcells, M. 2003, ApJ, 583, 732
Taylor, J. E., & Navarro, J. F. 2001, ApJ, 563, 483
Tsallis, C. 1988, J. Stat. Phys., 52, 479
Van Hese, E., Baes, M., & Dejonghe, H. 2008, arXiv:0809.0901
Vikhlinin, A., Kravtsov, A., Forman, W., Jones, C., Markevitch, M., Murray, S. S., & Van Speybroeck, L. 2006, ApJ, 640, 691
Wilkinson, M. I., et al. 2004, ApJ, 611, L21
Wojtak, R., Lokas, E. L., Gottloeber, S., & Mamon, G. A. 2005, MNRAS, 361, L1
Wojtak, R., Lokas, E. L., Mamon, G. A., Gottloeber, S., Klypin, A., & Hoffman, Y. 2008, arXiv:0802.0429
Zait, A., Hoffman, Y., & Shlosman, I. 2008, arXiv:0711.3791
---
abstract: 'We provide a numerical algorithm for the model characterizing anomalous diffusion in expanding media, which is derived in \[F. Le Vot, E. Abad, and S. B. Yuste, Phys. Rev. E [**96**]{} (2017) 032117\]. The Sobolev regularity for the equation is first established. Then we use the finite element method to discretize the Laplace operator and present an error estimate for the spatial semi-discrete scheme based on the regularity of the solution; the backward Euler convolution quadrature is developed to approximate the Riemann-Liouville fractional derivative, and error estimates for the fully discrete scheme are established by using the continuity of the solution. Finally, numerical experiments verify the effectiveness of the algorithm.'
address:
- 'School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, P.R. China'
- 'School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, P.R. China'
- 'School of Mathematics and Statistics, Gansu Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou University, Lanzhou 730000, P.R. China'
author:
- Daxin Nie
- Jing Sun
- Weihua Deng
title: Numerical algorithm for the model describing anomalous diffusion in expanding media
---
Introduction
============
Currently, it is widely recognized that anomalous diffusion is ubiquitous in the natural world, and some important models have been built to describe it, including the continuous time random walk (CTRW) model, e.g., [@Barkai2000; @Barkai2001; @Xu2018], and the Langevin picture, e.g., [@Chen20191]. Most CTRW models mimic anomalous diffusion processes in static media, while expanding media are typical in biology and cosmology. Recently, [@LeVot2017] built the CTRW model for anomalous diffusion in expanding media, the Langevin picture of which is given in [@Chen20192], and derived the corresponding Fokker-Planck equation $$\label{eqretosol}
\left\{
\begin{aligned}
&\frac{\partial W(x,t)}{\partial t}=\frac{1}{a^2(t)}\Delta\left[\,_0D^{1-\alpha}_tW(x,t)\right]+f(x,t), \qquad(x,t)\in \Omega\times(0,T],\\
&W(x,0)=W_0(x),\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad x\in \Omega,\\
&W(x,t)=0,\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\quad\quad (x,t)\in \partial \Omega\times(0,T],
\end{aligned}
\right.$$ where $\Delta$ stands for the Laplace operator; $f(x,t)$ is the source term; $\Omega\subset \mathbb{R}$ is a bounded domain; $T$ is a fixed final time; $_0D^{1-\alpha}_t$ denotes the Riemann-Liouville fractional derivative, defined as [@Podlubny1999] $$\!_0D^{1-\alpha}_t W(t)=\frac{\partial}{\partial t}\,_0I_t^{\alpha}W(t)=\frac{1}{\Gamma(\alpha)}\frac{\partial}{\partial t}\int_0^t\frac{W(\xi)}{(t-\xi)^{1-\alpha}}d\xi, \qquad 0<\alpha<1,$$ and $\,_0I^\alpha_t$ denotes the Riemann-Liouville fractional integral; $a^2(t)$ is the variable diffusion coefficient, satisfying $$\label{eqassuma}
\left|\frac{1}{a^2(t)}-\frac{1}{a^2(s)}\right|\leq C|t-s|,\quad t,s\in[0,T]$$ and $$\label{eqassumb}
c<a^2(t)<C,\quad t\in[0,T]$$ with $c$ and $C$ being two positive constants.
So far, numerical methods for fractional differential equations have attracted widespread attention [@Bazhlekova2015; @Chen2009; @Deng2009; @Deng2013; @Jin2014; @Jin2015; @Jin2016; @Jin2017; @Li2009; @Lin2007; @Zeng2018], and [@Jin2019; @Mustapha2018] also provide a complete numerical analysis for fractional differential equations with variable coefficients. Compared with these works, the non-commutativity of the Riemann-Liouville fractional derivative and the variable coefficient, i.e., $\frac{1}{a^2(t)}~_0D^{1-\alpha}_t\neq ~_0D^{1-\alpha}_t\frac{1}{a^2(t)}$, brings new challenges in the a priori estimates and the numerical analysis. To obtain the a priori estimate of the solution $W(x,t)$ of Eq. (\[eqretosol\]), the regularity of $\,_0D^{1-\alpha}_tW(x,t)$ is needed. As for the spatial discretization, we use the finite element method to discretize the Laplace operator $\Delta$ and obtain optimal-order convergence rates. Then we use the backward Euler convolution quadrature [@Lubich1988; @Lubich19882] to discretize the Riemann-Liouville fractional derivative and derive error estimates for the fully discrete scheme by using Hölder continuity.
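For the Riemann-Liouville derivative alone, the backward Euler convolution quadrature amounts to weighting function values with the coefficients of $(1-\xi)^{\beta}$ (the Grünwald-Letnikov weights). The sketch below is our own illustration of the quadrature itself, not the full scheme of this paper; it checks the discretization against the exact derivative $\,_0D^{\beta}_t t = t^{1-\beta}/\Gamma(2-\beta)$.

```python
import numpy as np
from math import gamma

def cq_weights(beta, N):
    # Coefficients of (1 - xi)^beta: g_0 = 1, g_j = g_{j-1} * (j - 1 - beta) / j.
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1.0 - beta) / j
    return w

def rl_derivative_be(f_vals, tau, beta):
    # Backward Euler CQ: (D^beta f)(t_n) ~ tau^{-beta} sum_{j=0}^{n} g_j f(t_{n-j}).
    N = len(f_vals) - 1
    w = cq_weights(beta, N)
    out = np.empty(N + 1)
    for n in range(N + 1):
        out[n] = tau ** (-beta) * np.dot(w[: n + 1], f_vals[n::-1])
    return out

alpha = 0.5                       # so the derivative order is beta = 1 - alpha
beta, T, N = 1.0 - alpha, 1.0, 1000
tau = T / N
t = np.linspace(0.0, T, N + 1)
approx = rl_derivative_be(t, tau, beta)      # discretize f(t) = t
exact = t ** (1.0 - beta) / gamma(2.0 - beta)
err = abs(approx[-1] - exact[-1])
print(err)                        # first-order accuracy: err = O(tau)
```

Halving $\tau$ roughly halves the error at $t=T$, consistent with the first-order convergence of the backward Euler quadrature.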
The rest of the paper is organized as follows. In Sec. 2, we first provide some preliminaries and then give some a priori estimates for the solution of Eq. (\[eqretosol\]). In Sec. 3, we use the finite element method to discretize the Laplace operator and get the error estimate of the spatial semi-discrete scheme. Section 4 approximates the Riemann-Liouville fractional derivative by the backward Euler convolution quadrature and gives the error estimates of the fully discrete scheme for the homogeneous and inhomogeneous problems. In the last section, we verify the effectiveness of the algorithm by numerical experiments.
Preliminaries
===========
We first give some preliminaries. For $\kappa>0$ and $\pi/2<\theta<\pi$, we define sectors $\Sigma_{\theta}$ and $\Sigma_{\theta,\kappa}$ in the complex plane $\mathbb{C}$ as $$\begin{aligned}
&\Sigma_{\theta}=\{z\in\mathbb{C}\setminus \{0\},|\arg z|\leq \theta\}, \quad&\Sigma_{\theta,\kappa}=\{z\in\mathbb{C}:|z|\geq\kappa,|\arg z|\leq \theta\},\\
\end{aligned}$$ and the contour $\Gamma_{\theta,\kappa}$ is defined by $$\Gamma_{\theta,\kappa}=\{z\in\mathbb{C}: |z|=\kappa,|\arg z|\leq \theta\}\cup\{z\in\mathbb{C}: z=r e^{\pm \mathbf{i}\theta}: r\geq \kappa\},$$ oriented with an increasing imaginary part, where $\mathbf{i}$ denotes the imaginary unit and $\mathbf{i}^2=-1$. We use $\|\cdot\|$ to denote the operator norm from $L^2(\Omega)$ to $L^2(\Omega)$ and $\epsilon$ any small number larger than $0$.
Then we introduce $G(x,t)=\!_0D^{1-\alpha}_tW(x,t)$, $A=-\Delta$, and $A(t)=-\frac{1}{a^2(t)}\Delta$. For any $ r\geq 0 $, denote the space $ \dot{H}^r(\Omega)=\{v\in L^2(\Omega): A^{\frac{r}{2}}v\in L^2(\Omega) \}$ with the norm [@Bazhlekova2015] $$\|v\|^2_{\dot{H}^r(\Omega)}=\sum_{j=1}^{\infty}\lambda_j^r(v,\varphi_j)^2,$$ where $\{(\lambda_j,\varphi_j)\}$ are the eigenpairs of the operator $A$ subject to homogeneous Dirichlet boundary conditions on $\Omega$, with the eigenvalues $\lambda_j$ ordered non-decreasingly and the eigenfunctions $\varphi_j$ normalized in the $L^2(\Omega)$ norm. Thus $ \dot{H}^0(\Omega)=L^2(\Omega) $, $\dot{H}^1(\Omega)=H^1_0(\Omega)$, and $\dot{H}^2(\Omega)=H^2(\Omega)\bigcap H^1_0(\Omega)$. For simplicity, in the following we write $G(t)$, $W(t)$, $W_0$, and $f(t)$ for $G(x,t)$, $W(x,t)$, $W_0(x)$, and $f(x,t)$. Throughout this paper, $C$ denotes a generic positive constant whose value may differ at each occurrence.
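As a one-dimensional illustration of this norm (our own example, not part of the analysis): on $\Omega=(0,1)$ the operator $A=-d^2/dx^2$ with homogeneous Dirichlet conditions has eigenpairs $\lambda_j=(j\pi)^2$ and $\varphi_j=\sqrt{2}\sin(j\pi x)$, so the norm can be evaluated from the Fourier-sine coefficients.

```python
import numpy as np

# Omega = (0,1), A = -d^2/dx^2 with Dirichlet conditions:
# lambda_j = (j*pi)^2, phi_j(x) = sqrt(2) * sin(j*pi*x).
x = np.linspace(0.0, 1.0, 4001)
h = x[1] - x[0]

def trap(y):
    # composite trapezoidal rule on the uniform grid x
    return float(np.sum(y[1:] + y[:-1]) * h / 2.0)

def hr_norm(v_vals, r, J=200):
    # ||v||_{dot H^r}^2 = sum_j lambda_j^r (v, phi_j)^2, truncated at J modes
    s = 0.0
    for j in range(1, J + 1):
        phi = np.sqrt(2.0) * np.sin(j * np.pi * x)
        c = trap(v_vals * phi)
        s += (j * np.pi) ** (2.0 * r) * c ** 2
    return np.sqrt(s)

v = np.sqrt(2.0) * np.sin(np.pi * x)       # v = phi_1
print(hr_norm(v, 0.0), hr_norm(v, 1.0))    # approximately 1.0 and pi
```

For $v=\varphi_1$ the only nonzero coefficient is $(v,\varphi_1)=1$, so $\|v\|_{\dot{H}^r}=\pi^r$, which the output reproduces.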
According to (\[eqassuma\]), there exists $$\label{eqassumae}
\|(A(t)-A(s))u\|_{L^2(\Omega)}\leq C|t-s|\|u\|_{\dot{H}^2(\Omega)}.$$ Thus by simple calculations, for any fixed $t_0\in (0,T]$, the solution of Eq. can be represented as $$\label{eqrepsW}
\begin{aligned}
W(t)=F(t,t_0)W_0+\int_0^tF(t-s,t_0)f(s)ds+\int_0^tF(t-s,t_0)\left (A(t_0)-A(s)\right )G(s)ds,
\end{aligned}$$ where $$\label{eqdefF}
F(t,t_0):=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}z^{\alpha-1}(z^\alpha+A(t_0))^{-1}dz.$$ By means of the Laplace transform and the definition of $G(t)$, we get $$\label{eqrepsG}
G(t)=E(t,t_0)W_0+\int_0^tE(t-s,t_0)f(s)ds+\int_0^tE(t-s,t_0)(A(t_0)-A(s))G(s)ds,$$ where $$\label{eqdefE}
E(t,t_0):=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}(z^\alpha+A(t_0))^{-1}dz.$$ As for the operators $F(t,t_0)$ and $E(t,t_0)$, the following estimates hold.
\[lemestEF\] The operators $F(t,t_0)$ and $E(t,t_0)$ defined in and satisfy $$\begin{aligned}
&\|E(t,t_0)\|\leq Ct^{\alpha-1},\quad \|F(t,t_0)\|\leq C,\quad\|A^{1-\beta}E(t,t_0)\|\leq Ct^{\alpha\beta-1},\\
&\|A^{\beta}F(t,t_0)\|\leq Ct^{-\alpha\beta},\quad \|A^{-\beta}F'(t,t_0)\|\leq Ct^{\alpha\beta-1},
\end{aligned}$$ where $F'(t,t_0)$ denotes the first derivative about $t$ and $\beta\in[0,1]$.
The estimates in Lemma \[lemestEF\] are obtained mainly by using $\|(z+A)^{-1}\|\leq C|z|^{-1}$ for $z\in \Sigma_\theta$. The last estimate also uses the fact $z^\alpha(z^{\alpha}+A)^{-1}=\mathbf{I}-A(z^{\alpha}+A)^{-1}$, where $\mathbf{I}$ denotes the identity operator.
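In the scalar caricature where $A(t_0)$ is replaced by a number $\lambda\geq 0$, $F(t,t_0)$ reduces to the Mittag-Leffler function $E_\alpha(-\lambda t^\alpha)$, and the contour representation can be evaluated by quadrature along $\Gamma_{\theta,\kappa}$. A minimal sketch (the choices $\theta=3\pi/4$, $\kappa=1/t$, the truncation radius, and the number of quadrature points are ours):

```python
import numpy as np
from math import gamma

def trapz(y, x):
    """Composite trapezoidal rule (self-contained, complex-valued)."""
    return np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0

def F_scalar(t, lam, alpha, theta=3 * np.pi / 4, n=8000):
    """(1/2πi) ∫_Γ e^{zt} z^{α-1} (z^α + λ)^{-1} dz over Γ_{θ,κ} with κ = 1/t."""
    kappa = 1.0 / t
    f = lambda z: np.exp(z * t) * z ** (alpha - 1) / (z ** alpha + lam)
    # circular part |z| = κ, |arg z| ≤ θ, traversed from -θ to θ
    phi = np.linspace(-theta, theta, n)
    z = kappa * np.exp(1j * phi)
    arc = trapz(f(z) * 1j * z, phi)
    # two rays z = r e^{±iθ}; e^{zt} decays like e^{r t cos θ}, so truncate
    R = kappa + 60.0 / (t * abs(np.cos(theta)))
    r = np.linspace(kappa, R, n)
    rays = trapz(f(r * np.exp(1j * theta)) * np.exp(1j * theta), r) \
         - trapz(f(r * np.exp(-1j * theta)) * np.exp(-1j * theta), r)
    return ((arc + rays) / (2j * np.pi)).real

def mittag_leffler(x, alpha, K=150):
    """Truncated series E_α(x) = Σ_k x^k / Γ(αk + 1)."""
    return sum(x ** k / gamma(alpha * k + 1) for k in range(K))

print(F_scalar(1.0, 0.0, 0.5))                            # ≈ 1
print(F_scalar(1.0, 1.0, 0.5), mittag_leffler(-1.0, 0.5)) # both ≈ 0.4276
```

The case $\lambda=0$ recovers $\frac{1}{2\pi\mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}z^{-1}dz=1$, a useful sanity check on the orientation of the contour.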
To get the a priori estimate of $W(t)$, we first provide some estimates of $G(t)$.
\[thmregofG\] If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$ and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<C$ with $t\in [0,T]$, then $G(t)$ satisfies $$\|G(t)\|_{L^2(\Omega)}\leq C(T)t^{\alpha-1}\|W_0\|_{L^2(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds$$ and $$\|G(t)\|_{\dot{H}^2(\Omega)}\leq C(T)t^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.$$
Applying $A(t_0)$ to both sides of and taking the $L^2$ norm yield $$\begin{aligned}
\|A(t_0)G(t_0)\|_{L^2(\Omega)}\leq&\|A(t_0)E(t_0,t_0)W_0\|_{L^2(\Omega)}+\left\|\int_0^{t_0}A(t_0)E(t_0-s,t_0)f(s)ds\right\|_{L^2(\Omega)}\\
&+\left\|\int_0^{t_0}A(t_0)E(t_0-s,t_0)(A(t_0)-A(s))G(s)ds\right\|_{L^2(\Omega)}.
\end{aligned}$$ According to Lemma \[lemestEF\], , and convolution properties, we have $$\begin{aligned}
\|A(t_0)G(t_0)\|_{L^2(\Omega)}\leq& Ct_0^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds+C\int_0^{t_0} \|G(s)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Taking $t_0=t$ and using Grönwall’s inequality [@Jin2019; @Larsson1992] lead to $$\|G(t)\|_{\dot{H}^2(\Omega)}\leq C(T)t^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.$$ Similarly we have $$\|G(t)\|_{L^2(\Omega)}\leq C(T)t^{\alpha-1}\|W_0\|_{L^2(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.$$
Next we give the regularity estimate of $W(t)$.
\[thmregofW\] If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$ and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<C$ with $t\in [0,T]$, then the solution $W(t)$ of satisfies $$\begin{aligned}
\|W(t)\|_{\dot{H}^2(\Omega)}
\leq
& C(T)t^{-\alpha}\|W_0\|_{\dot{H}^\epsilon(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$
Applying $A(t_0)$ to both sides of and taking the $L^2$ norm lead to $$\begin{aligned}
\|A(t_0)W(t_0)\|_{L^2(\Omega)}\leq&\left \|A(t_0)F(t_0,t_0)W_0\right \|_{L^2(\Omega)}+\left\|\int_0^{t_0}A(t_0)F(t_0-s,t_0)f(s)ds\right \|_{L^2(\Omega)}\\
&+\left \|\int_0^{t_0}A(t_0)F(t_0-s,t_0)(A(t_0)-A(s))G(s)ds\right \|_{L^2(\Omega)}.\\
\end{aligned}$$ According to (\[eqassumae\]), Lemma \[lemestEF\], and the fact $T/(t_0-s)>1$, we have $$\begin{aligned}
& \|A(t_0)W(t_0)\|_{L^2(\Omega)} \\ & \leq Ct_0^{-\alpha}\|W_0\|_{L^2(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_0^{t_0}\|f'(s)\|_{L^2(\Omega)}ds+C\int_0^{t_0}(t_0-s)^{1-\alpha}\|G(s)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Further combining Theorem \[thmregofG\] results in $$\begin{aligned}
\|W(t_0)\|_{\dot{H}^2(\Omega)}%\leq& C\int_0^{t_0}(t_0-s)^{1-\alpha}\bigg(Cs^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_{0}^{s}\|f'(r)\|_{L^2(\Omega)}dr\bigg)ds\\
%&+Ct_0^{-\alpha}\|W_0\|_{L^2(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds \\
\leq& C(T)t_0^{-\alpha}\|W_0\|_{\dot{H}^\epsilon(\Omega)}+C(T)\|f(0)\|_{L^2(\Omega)}+C(T)\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds,
\end{aligned}$$ which leads to the desired result after taking $t_0=t$.
Spatial discretization and error analysis
=========================================
In this section, we discretize the Laplace operator by the finite element method and provide the error estimates for the spatial semi-discrete scheme of Eq. . Let $\mathcal{T}_h$ be a shape-regular, quasi-uniform partition of the domain $\Omega$, where $h$ is the maximum diameter of the elements. Denote by $ X_h $ the piecewise linear finite element space $$X_{h}=\{v_h\in C(\bar{\Omega}): v_h|_\mathbf{T}\in \mathcal{P}^1,\ \forall \mathbf{T}\in\mathcal{T}_h,\ v_h|_{\partial \Omega}=0\},$$ where $\mathcal{P}^1$ denotes the set of polynomials of degree at most $1$. Then we define the $ L^2 $-orthogonal projection $ P_h:\ L^2(\Omega)\rightarrow X_h $ and the Ritz projection $ R_h:\ H^1_0(\Omega)\rightarrow X_h $ [@Bazhlekova2015], respectively, by $$\begin{aligned}
&(P_hu,v_h)=(u,v_h) \ ~~~\forall v_h\in X_h,\\
&(\nabla R_h u,\nabla v_h)=(\nabla u, \nabla v_h) \ ~~~\forall v_h\in X_h.
\end{aligned}$$
\[lemprojection\] The projections $ P_h $ and $ R_h $ satisfy $$\begin{aligned}
&\|P_hu-u\|_{L^2(\Omega)}+h\|\nabla(P_hu-u)\|_{L^2(\Omega)}\leq Ch^q\|u\|_{\dot{H}^{q}(\Omega)}\ {\rm for}\ u\in \dot{H}^q(\Omega),\ q=1,2,\\
&\|R_hu-u\|_{L^2(\Omega)}+h\|\nabla(R_hu-u)\|_{L^2(\Omega)}\leq Ch^q\|u\|_{\dot{H}^{q}(\Omega)}\ {\rm for}\ u\in \dot{H}^q(\Omega),\ q=1,2.
\end{aligned}$$
Denote by $(\cdot,\cdot)$ the $L^2$ inner product and define the discrete operator $A_h:\ X_h\rightarrow X_h$ by $(A_hu,v)=(\nabla u, \nabla v)$ for all $u,v\in X_h$. The semi-discrete Galerkin scheme for reads: for every $t\in (0,T]$, find $ W_{h}\in X_h$ such that $$\label{eqsemischeme}
\left \{
\begin{aligned}
&\left(\frac{\partial W_h}{\partial t},v\right)+(\,_0D^{1-\alpha}_tA_h(t_0)W_h,v)=(f,v)+((A_h(t_0)-A_h(t))\,_0D^{1-\alpha}_tW_h,v)\quad {\rm~for~all~} v\in X_h,
%(x,t)\in \Omega\times,
\\
&W_h(0)=W_{0,h}, %\qquad x\in \Omega,
\\
\end{aligned}
\right.$$ where $$W_{0,h}=\left \{
\begin{aligned}
&P_hW_0,\qquad W_0\in L^2(\Omega),\\
&R_hW_0,\qquad W_0\in \dot{H}^2(\Omega),
\end{aligned}\right .$$ and $$(A_h(t)u,v)=\frac{1}{a^2(t)}(\nabla u, \nabla v).$$ For convenience, we rewrite the spatial semi-discrete scheme as $$\frac{\partial W_h}{\partial t}+\,_0D^{1-\alpha}_tA_h(t_0)W_h=f_h+(A_h(t_0)-A_h(t))\,_0D^{1-\alpha}_tW_h,$$ where $f_h=P_hf$. By means of the Laplace transform, the solution of can be rewritten as $$\label{eqrepsWh}
\begin{aligned}
W_h(t)=&F_h(t,t_0)W_{0,h}+\int_0^tF_h(t-s,t_0)f_h(s)ds+\int_0^tF_h(t-s,t_0)(A_h(t_0)-A_h(s))\,_0D^{1-\alpha}_sW_h(s)ds,
\end{aligned}$$ where $$\label{equdefFh}
F_h(t,t_0):=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}z^{\alpha-1}(z^\alpha+A_h(t_0))^{-1}dz.$$ Introducing $G_h(t)=\,_0D^{1-\alpha}_tW_h(t)$, we can represent $G_h(t)$ as $$\label{eqrepsGh}
\begin{aligned}
G_h(t)=&E_h(t,t_0)W_{0,h}+\int_0^tE_h(t-s,t_0)f_h(s)ds+\int_0^tE_h(t-s,t_0)(A_h(t_0)-A_h(s))G_h(s)ds,
\end{aligned}$$ where $$\label{equdefEh}
E_h(t,t_0):=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}(z^\alpha+A_h(t_0))^{-1}dz.$$
Similar to Lemma \[lemestEF\], the following estimates about $E_h$ and $F_h$ hold.
The operators $F_h(t,t_0)$ and $E_h(t,t_0)$ defined in and satisfy $$\begin{aligned}
&\|E_h(t,t_0)\|\leq Ct^{\alpha-1},\quad \|F_h(t,t_0)\|\leq C,\quad\|A_h^{1-\beta}E_h(t,t_0)\|\leq Ct^{\alpha\beta-1},\\
&\|A_h^{\beta}F_h(t,t_0)\|\leq Ct^{-\alpha\beta},\quad \|A_h^{-\beta}F_h'(t,t_0)\|\leq Ct^{\alpha\beta-1},
\end{aligned}$$ where $\beta\in [0,1]$.
Next, we provide the following lemma, which is needed for the error estimate.
Let $\phi\in L^2(\Omega)$, $z\in \Sigma_{\theta}$, $\omega=(z^\alpha \mathbf{I}+A)^{-1}\phi$, and $\omega_h=(z^\alpha \mathbf{I}+A_h)^{-1}P_h\phi$, where $\mathbf{I}$ denotes the identity operator. Then there holds $$\|\omega_h-\omega\|_{L^2(\Omega)}+h\|\nabla(\omega_h-\omega)\|_{L^2(\Omega)}\leq Ch^2\|\phi\|_{L^2(\Omega)}.$$
To get the error estimate for the spatial semi-discrete scheme, denote $e_h(t)=P_hW(t)-W_h(t)$. From and , we have $$\label{eqehsep}
\begin{aligned}
e_h(t)=&(P_hF(t,t_0)W_0-F_h(t,t_0)W_{0,h})+\int_0^t(P_hF(t-s,t_0)-F_h(t-s,t_0)P_h)f(s)ds\\
&+\int_0^t(P_hF(t-s,t_0)-F_h(t-s,t_0)P_h)(A(t_0)-A(s))\,_0D^{1-\alpha}_sW(s)ds\\
&+\int_0^tF_h(t-s,t_0)((P_hA(t_0)-P_hA(s))\,_0D^{1-\alpha}_sW(s)-(A_h(t_0)-A_h(s))\,_0D^{1-\alpha}_sW_h(s))ds\\
=&\uppercase\expandafter{\romannumeral1}(t)+\uppercase\expandafter{\romannumeral2}(t)+\uppercase\expandafter{\romannumeral3}(t)+\uppercase\expandafter{\romannumeral4}(t).
\end{aligned}$$
Then we need to provide the bounds of $\uppercase\expandafter{\romannumeral1}(t)$, $\uppercase\expandafter{\romannumeral2}(t)$, $\uppercase\expandafter{\romannumeral3}(t)$, and $\uppercase\expandafter{\romannumeral4}(t)$ in .
\[lemspaestI\] If $W_0\in L^2(\Omega)$, then the following estimate holds: $$\begin{aligned}
\|\uppercase\expandafter{\romannumeral1}(t)\|_{L^2(\Omega)}\leq& Ct^{-\alpha}h^2\|W_0\|_{L^2(\Omega)}.\\
\end{aligned}$$
According to Lemmas \[lemprojection\] and \[lemerror1\], $$\begin{aligned}
\|\uppercase\expandafter{\romannumeral1}(t)\|_{L^2(\Omega)}\leq& \|(P_hF(t,t_0)-F_h(t,t_0)P_h)W_0\|_{L^2(\Omega)}\\
\leq& \|(P_hF(t,t_0)-F(t,t_0))W_0\|_{L^2(\Omega)}+\|(F(t,t_0)-F_h(t,t_0)P_h)W_0\|_{L^2(\Omega)}\\
\leq& C(T)t^{-\alpha}h^2\|W_0\|_{L^2(\Omega)},
\end{aligned}$$ which leads to the desired result.
Similarly, we have the following estimate of $\uppercase\expandafter{\romannumeral2}(t)$.
\[lemspaestII\] If $f(0)\in L^2(\Omega)$ and $\int_0^t\|f'(s)\|_{L^2(\Omega)}ds<\infty$, then $\uppercase\expandafter{\romannumeral2}(t)$ can be bounded by $$\begin{aligned}
\|\uppercase\expandafter{\romannumeral2}(t)\|_{L^2(\Omega)}\leq& C(T)h^2\left (\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds\right ).\\
\end{aligned}$$
As for $\uppercase\expandafter{\romannumeral3}(t)$, the following estimate holds.
\[lemspaestIII\] If $W_0\in \dot{H}^\epsilon(\Omega)$, $f(0)\in L^2(\Omega)$, and $\int_0^t\|f'(s)\|_{L^2(\Omega)}ds<\infty$, then $$\|\uppercase\expandafter{\romannumeral3}(t)\|_{L^2(\Omega)}\leq C(T)h^2\left (\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds\right ).$$
According to , Lemmas \[lemprojection\], \[lemerror1\], and Theorem \[thmregofG\], we have $$\begin{aligned}
\|\uppercase\expandafter{\romannumeral3}(t_0)\|_{L^2(\Omega)}\leq&\int_0^{t_0}\|P_hF(t_0-s,t_0)-F(t_0-s,t_0)\|\|(A(t_0)-A(s))\,_0D^{1-\alpha}_sW(s)\|_{L^2(\Omega)}ds\\
&+\int_0^{t_0}\|F(t_0-s,t_0)-F_h(t_0-s,t_0)P_h\|\|(A(t_0)-A(s))\,_0D^{1-\alpha}_sW(s)\|_{L^2(\Omega)}ds\\
\leq&Ch^2\int_0^{t_0}(t_0-s)^{1-\alpha}\|\,_0D^{1-\alpha}_sW(s)\|_{\dot{H}^2(\Omega)}ds\\
\leq&Ch^2\left (\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds\right ).
\end{aligned}$$ Taking $t_0=t$ leads to the desired result.
To estimate $\|\uppercase\expandafter{\romannumeral4}(t)\|_{L^2(\Omega)}$, we introduce $\upsilon_h(t)=\,_0D^{1-\alpha}_te_h$, which results in $$\begin{aligned}
\upsilon_h(t)=&(P_hE(t,t_0)W_0-E_h(t,t_0)W_{0,h})+\int_0^t(P_hE(t-s,t_0)-E_h(t-s,t_0)P_h)f(s)ds\\
&+\int_0^t(P_hE(t-s,t_0)-E_h(t-s,t_0)P_h)(A(t_0)-A(s))G(s)ds\\
&+\int_0^tE_h(t-s,t_0)((P_hA(t_0)-P_hA(s))G(s)-(A_h(t_0)-A_h(s))G_h(s))ds\\
=&\sum_{i=1}^{4}\upsilon_{i,h}(t).
\end{aligned}$$
Next, we consider the estimate of $\|\upsilon_h(t)\|_{L^2(\Omega)}$, which helps to get the estimate of $\|\uppercase\expandafter{\romannumeral4}(t)\|_{L^2(\Omega)}$.
\[lemepslonest\] If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$ and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<\infty$, then we have $$\|\upsilon_h(t)\|_{L^2(\Omega)}\leq Ch^2t^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^\epsilon(\Omega)}+Ch^2\|f(0)\|_{L^2(\Omega)}+Ch^2\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.$$
According to Lemma \[lemprojection\], we have the estimates $$\left \|(P_hE(t,t_0)-E(t,t_0))W_0\right \|_{L^2(\Omega)}\leq\left \{\begin{aligned}
&Ch^2t^{-1}\|W_0\|_{L^2(\Omega)},\qquad W_0\in L^2(\Omega),\\
&Ch^2t^{\alpha-1}\|W_0\|_{\dot{H}^2(\Omega)},\qquad W_0\in \dot{H}^2(\Omega).
\end{aligned}\right.$$ If $W_0\in L^2(\Omega)$, according to Lemma \[lemerror1\], the following estimate holds $$\begin{aligned}
&\left \|(E(t,t_0)-E_h(t,t_0)P_h)W_0\right \|_{L^2(\Omega)}\\
\leq& \left\| \int_{\Gamma_{\theta,\kappa}}e^{zt}((z^{\alpha}+A(t_0))^{-1}-(z^{\alpha}+A_h(t_0))^{-1}P_h)W_0dz\right \|_{L^2(\Omega)}\leq Ch^2t^{\alpha-1}\|W_0\|_{L^2(\Omega)}.
\end{aligned}$$ If $W_0\in \dot{H}^2(\Omega)$, then one has $$\begin{aligned}
&\left \|(E(t,t_0)-E_h(t,t_0)R_h)W_0\right \|_{L^2(\Omega)}\\
% \leq&\left\| \int_{\Gamma_{\theta,\kappa}}e^{zt}((z^{\alpha}+A(t_0))^{-1}-(z^{\alpha}+A_h(t_0))^{-1}R_h)W_0dz\right \|_{L^2(\Omega)}\\
\leq&\left\| \int_{\Gamma_{\theta,\kappa}}e^{zt}z^{-\alpha}(A(t_0)(z^{\alpha}+A(t_0))^{-1}-A_h(t_0)(z^{\alpha}+A_h(t_0))^{-1}R_h)W_0dz\right \|_{L^2(\Omega)}+\left\| \int_{\Gamma_{\theta,\kappa}}e^{zt}z^{-\alpha}(\mathbf{I}-R_h)W_0dz\right \|_{L^2(\Omega)}\\
\leq&\left\| \int_{\Gamma_{\theta,\kappa}}e^{zt}z^{-\alpha}((z^{\alpha}+A(t_0))^{-1}-(z^{\alpha}+A_h(t_0))^{-1}P_h)A(t_0)W_0dz\right \|_{L^2(\Omega)}+Ch^2t^{\alpha-1}\|W_0\|_{\dot{H}^2(\Omega)}\\\leq& Ch^2t^{\alpha-1}\|W_0\|_{\dot{H}^2(\Omega)},
\end{aligned}$$ because of Lemma \[lemerror1\], the fact $A_hR_h=P_hA$ [@Bazhlekova2015], and $(z^\alpha+A)^{-1}=z^{-\alpha}(\mathbf{I}-A(z^{\alpha}+A)^{-1})$, where $\mathbf{I}$ is the identity operator. Thus we get $$\begin{aligned}
\|\upsilon_{1,h}(t_0)\|_{L^2(\Omega)}\leq&\left \|(P_hE(t,t_0)-E(t,t_0))W_0\right \|_{L^2(\Omega)}\\
&+\left \|(E(t,t_0)-E_h(t,t_0)P_h)W_0\right \|_{L^2(\Omega)}\leq Ch^2t^{-1}\|W_0\|_{L^2(\Omega)} ~{\rm for} ~W_0\in L^2(\Omega)
\end{aligned}$$ and $$\begin{aligned}
\|\upsilon_{1,h}(t_0)\|_{L^2(\Omega)}\leq&\left \|(P_hE(t,t_0)-E(t,t_0))W_0\right \|_{L^2(\Omega)}\\
&+\left \|(E(t,t_0)-E_h(t,t_0)R_h)W_0\right \|_{L^2(\Omega)}\leq Ch^2t^{\alpha-1}\|W_0\|_{\dot{H}^2(\Omega)} ~{\rm for} ~W_0\in \dot{H}^2(\Omega).
\end{aligned}$$ Taking $t_0=t$ and using the interpolation property [@Adams2003] lead to $$\|\upsilon_{1,h}(t)\|_{L^2(\Omega)}\leq Ch^2t^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^\epsilon(\Omega)}.$$ Similarly, one has $$\begin{aligned}
&\|\upsilon_{2,h}(t)\|_{L^2(\Omega)}\leq Ch^2\|f(0)\|_{L^2(\Omega)}+Ch^2\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds,
\end{aligned}$$ according to Lemmas \[lemprojection\], \[lemerror1\], and the convolution property $f(t)=f(0)+(1\ast f')(t)$. As for $\upsilon_{3,h}(t)$, Theorem \[thmregofG\] gives $$\begin{aligned}
\|\upsilon_{3,h}(t_0)\|_{L^2(\Omega)}\leq& Ch^2\int_{0}^{t_0}\|G(s)\|_{\dot{H}^2(\Omega)}ds
\leq Ch^2\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+Ch^2\|f(0)\|_{L^2(\Omega)}+Ch^2\int_0^{t_0}\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$ Combining Lemma \[lemestEFh\], , and $A_hR_h=P_hA$ results in $$\begin{aligned}
\|\upsilon_{4,h}(t_0)\|_{L^2(\Omega)}\leq &\left \|\int_0^{t_0}E_h(t_0-s,t_0)(A_h(t_0)-A_h(s))\upsilon_h(s)ds\right \|_{L^2(\Omega)}\\
&+\left \|\int_0^{t_0}E_h(t_0-s,t_0)A_h(t_0)(1-a^2(t_0)/a^2(s))(R_h-\mathbf{I})G(s)ds\right \|_{L^2(\Omega)}\\
\leq& C\int_0^{t_0}\|\upsilon_h(s)\|_{L^2(\Omega)}ds+Ch^2\int_0^{t_0}\|G(s)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Thus Grönwall’s inequality and Theorem \[thmregofG\] imply the desired result.
\[lemspaestIV\] If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$, and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<C$, then there holds $$\|\uppercase\expandafter{\romannumeral4}(t)\|_{L^2(\Omega)}\leq Ch^2\left (\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds\right ).$$
According to , one can divide $\uppercase\expandafter{\romannumeral4}(t_0)$ into two parts, i.e., $$\begin{aligned}
\|\uppercase\expandafter{\romannumeral4}(t_0)\|_{L^2(\Omega)}\leq&\left \|\int_0^{t_0}F_h(t_0-s,t_0)(A_h(t_0)(R_h-P_h)-A_h(s)(R_h-P_h))\,_0D^{1-\alpha}_sW(s)ds\right \|_{L^2(\Omega)}\\
&+\left \|\int_0^{t_0}F_h(t_0-s,t_0)(A_h(t_0)-A_h(s))\,_0D^{1-\alpha}_se_h(s)ds\right \|_{L^2(\Omega)}
\leq \uppercase\expandafter{\romannumeral4}_1(t_0)+
\uppercase\expandafter{\romannumeral4}_2(t_0). \end{aligned}$$ By assumption , Lemma \[lemestEF\], and Theorem \[thmregofG\], one can derive $$\begin{aligned}
\uppercase\expandafter{\romannumeral4}_1(t_0)\leq&\int_{0}^{t_0}\|F_h(t_0-s,t_0)A_h(t_0)(1-a^2(t_0)/a^2(s))(R_h-P_h)\,_0D^{1-\alpha}_sW(s)\|_{L^2(\Omega)}ds\\
\leq&Ch^2\int_0^{t_0}(t_0-s)^{1-\alpha}\|\,_0D^{1-\alpha}_sW(s)\|_{\dot{H}^2(\Omega)}ds\\
\leq& Ch^2\left (\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds\right ).
\end{aligned}$$ According to Lemmas \[lemestEFh\], \[lemepslonest\], and assumption , we have $$\begin{aligned}
\uppercase\expandafter{\romannumeral4}_2(t_0)\leq& \int_{0}^{t_0}\|A_h(t_0)F_h(t_0-s,t_0)\|\|1-a^2(t_0)/a^2(s)\|\|\,_0D^{1-\alpha}_se_h(s)\|_{L^2(\Omega)}ds\\
\leq& C\int_{0}^{t_0}(t_0-s)^{1-\alpha}\|\,_0D^{1-\alpha}_se_h(s)\|_{L^2(\Omega)}ds\\
\leq& Ch^2\left (\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t_0}\|f'(s)\|_{L^2(\Omega)}ds\right ).
\end{aligned}$$ Then the desired result is obtained after taking $t_0=t$.
Combining Theorem \[thmregofW\], Lemmas \[lemprojection\], \[lemspaestI\], \[lemspaestII\], \[lemspaestIII\], and \[lemspaestIV\] leads to the error estimate of the spatial semi-discrete scheme.
\[thmsemier\] Let $W(t)$ and $W_h(t)$ be the solutions of Eqs. and , respectively. If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$, and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<+\infty$, then there holds $$\|W(t)-W_h(t)\|_{L^2(\Omega)}\leq Ch^2\left (t^{-\alpha}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+\|f(0)\|_{L^2(\Omega)}+\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds\right ).$$
Temporal discretization and error analysis
==========================================
In this section, we use backward Euler convolution quadrature to discretize the time fractional derivative and perform the error analyses of the fully discrete scheme for homogeneous and inhomogeneous problems. First, let the time step size be $\tau=T/L$, $L\in\mathbb{N}$, and set $t_i=i\tau$, $i=0,1,\ldots,L$, so that $0=t_0<t_1<\cdots<t_L=T$. Taking $\delta_\tau(\zeta)=\frac{1-\zeta}{\tau}$ and using convolution quadrature for Eq. , we obtain the fully discrete scheme, for any fixed integer $m\in[0,L]$: $$\label{eqfullscheme}
\left \{\begin{aligned}
&\frac{W^{n}_{h}-W^{n-1}_{h}}{\tau}+A_h(t_m)\sum_{i=0}^{n-1}d^{1-\alpha}_{i}W^{n-i}_h=f_h^n+(A_h(t_m)-A_h(t_n))\sum_{i=0}^{n-1}d^{1-\alpha}_{i}W^{n-i}_h,\\
&W^{0}_{h}=W_{0,h},
\end{aligned}\right .$$ where $$\sum_{i=0}^{\infty}d^{1-\alpha}_i\zeta^i=\delta_\tau(\zeta)^{1-\alpha},\quad 0<\alpha<1,$$ and $W^n_h$ denotes the numerical solution of at $t=t_n$. Multiplying both sides of by $\zeta^n$ and summing over $n$ from $1$ to $\infty$ result in $$\begin{aligned}
\sum_{n=1}^{\infty}\frac{W^{n}_{h}-W^{n-1}_{h}}{\tau}\zeta^n+\sum_{n=1}^{\infty}A_h(t_m)\sum_{i=0}^{n-1}d^{1-\alpha}_{i}W^{n-i}_{h}\zeta^n=\sum_{n=1}^{\infty}f_h^n\zeta^n+\sum_{n=1}^{\infty}(A_h(t_m)-A_h(t_n))\sum_{i=0}^{n-1}d^{1-\alpha}_{i}W^{n-i}_h\zeta^n;
\end{aligned}$$ after simple calculations, we obtain $$\begin{aligned}
\left (\delta_\tau(\zeta)+A_h(t_m)\delta_\tau(\zeta)^{1-\alpha}\right )\sum_{n=1}^{\infty}W^n_h\zeta^n=\sum_{n=1}^{\infty}f_h^n\zeta^n+\sum_{n=1}^{\infty}(A_h(t_m)-A_h(t_n))\sum_{i=0}^{n-1}d^{1-\alpha}_{i}W^{n-i}_h\zeta^n+\frac{\zeta}{\tau}W^0_h.
\end{aligned}$$ Thus, choosing $\xi_\tau=e^{-\tau(\kappa+1)}$, one has $$\label{eqrepsWnh}
\begin{aligned}
W^n_h=&\frac{1}{2\pi\mathbf{i} }\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\sum_{j=1}^{\infty}f_h^j\zeta^jd\zeta\\
&+\frac{1}{2\pi\mathbf{i} }\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\frac{\zeta}{\tau}W^0_hd\zeta\\
&+\frac{1}{2\pi\mathbf{i} }\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\left (\sum_{j=1}^{\infty}(A_h(t_m)-A_h(t_j))\sum_{i=0}^{j-1}d^{1-\alpha}_{i}W^{j-i}_h\zeta^j\right )d\zeta.
\end{aligned}$$
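The weights $d^{1-\alpha}_i$ are the Taylor coefficients of $\delta_\tau(\zeta)^{1-\alpha}=\tau^{\alpha-1}(1-\zeta)^{1-\alpha}$ and can be generated by a one-term recurrence. The sketch below does so and checks the first-order accuracy of the quadrature against the exact Riemann-Liouville derivative $\,_0D^{1-\alpha}_t\,t=t^{\alpha}/\Gamma(1+\alpha)$; the test function $w(t)=t$ and the step sizes are our choices:

```python
import numpy as np
from math import gamma

def cq_weights(alpha, tau, N):
    """Weights d_i of the generating function ((1-ζ)/τ)^{1-α} = Σ_i d_i ζ^i."""
    g = np.empty(N + 1)
    g[0] = 1.0
    beta = 1.0 - alpha                          # order of the approximated derivative
    for j in range(1, N + 1):
        g[j] = g[j - 1] * (j - 1 - beta) / j    # binomial coefficients of (1-ζ)^{1-α}
    return tau ** (alpha - 1.0) * g

def cq_error(alpha, tau, T=1.0):
    """|CQ approximation - exact| for _0D^{1-α}_t t evaluated at t = T."""
    N = int(round(T / tau))
    d = cq_weights(alpha, tau, N)
    w = tau * np.arange(N + 1)                  # samples w(t_j) = t_j, with w(0) = 0
    approx = np.sum(d * w[::-1])                # Σ_{i=0}^{N} d_i w(t_{N-i})
    exact = T ** alpha / gamma(1.0 + alpha)
    return abs(approx - exact)

e1, e2 = cq_error(0.5, 1e-2), cq_error(0.5, 5e-3)
print(e1 / e2)   # ≈ 2: first-order accuracy of backward Euler CQ
```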
Before providing the error estimates, we recall the following lemma.
\[Lemseriesest\] Let $0<\alpha<1$ and $\theta \in\left(\frac{\pi}{2}, \operatorname{arccot}\left(-\frac{2}{\pi}\right)\right)$ be given, where $\operatorname{arccot}$ denotes the inverse function of $\cot$, and let $\rho \in (0,1)$ be fixed. Then, both $\delta_\tau(e^{-z\tau})$ and $(\delta_\tau(e^{-z\tau})+A)^{-1}$ are analytic with respect to $z$ in the region enclosed by $\Gamma^\tau_\rho=\{z=-\ln{\rho}/\tau+\mathbf{i}y:y\in\mathbb{R}~{\rm and}~|y|\leq \pi/\tau\}$, $\Gamma^\tau_{\theta,\kappa}=\{z\in \mathbb{C}:\kappa\leq |z|\leq\frac{\pi}{\tau\sin(\theta)},|\arg z|=\theta\}\bigcup\{z\in \mathbb{C}:|z|=\kappa,|\arg z|\leq\theta\}$, and the two lines $\mathbb{R}\pm \mathbf{i}\pi/\tau$ whenever $0<\kappa \leq \min (1 / T,-\ln (\rho) / \tau)$. Furthermore, the following estimates hold: $$\begin{aligned}
&\delta_{\tau}\left(e^{-z \tau}\right) \in \Sigma_{\theta}&\forall z \in \Gamma_{\theta, \kappa}^{\tau},\\
&C_{0}|z| \leq\left|\delta_{\tau}\left(e^{-z\tau }\right)\right| \leq C_{1}|z|&\forall z \in \Gamma_{\theta, \kappa}^{\tau},\\
&\left|\delta_{\tau}\left(e^{-z\tau }\right)-z\right| \leq C \tau|z|^{2}&\forall z \in \Gamma_{\theta, \kappa}^{\tau},\\
&\left|\delta_{\tau}\left(e^{-z\tau }\right)^{\alpha}-z^{\alpha}\right| \leq C \tau|z|^{\alpha+1}&\forall z \in \Gamma_{\theta, \kappa}^{\tau},
\end{aligned}$$ where the constants $C_0$, $C_1$ and $C$ are independent of $\tau$ and $\kappa\in (0,\min (1 / T,-\ln (\rho) / \tau)]$.
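The second and third estimates in Lemma \[Lemseriesest\] follow from the expansion $\delta_\tau(e^{-z\tau})=z-\frac{1}{2}z^2\tau+O(z^3\tau^2)$ and can be checked numerically; the sample point $z=3e^{\mathbf{i}\theta}$ and the admissible angle $\theta=2.0<\operatorname{arccot}(-2/\pi)\approx 2.138$ are our choices:

```python
import numpy as np

theta = 2.0                      # admissible angle in (π/2, arccot(-2/π))
z = 3.0 * np.exp(1j * theta)     # a point on the ray arg z = θ

ratios = []
for tau in (1e-2, 1e-3, 1e-4):
    d = (1.0 - np.exp(-z * tau)) / tau            # δ_τ(e^{-zτ})
    ratios.append(abs(d - z) / (tau * abs(z) ** 2))

# |δ_τ(e^{-zτ}) - z| / (τ|z|²) stays bounded and tends to 1/2 as τ → 0
print(ratios)
```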
Below we provide the error estimates of the homogeneous and inhomogeneous problems separately.
Error estimate for the inhomogeneous problem
--------------------------------------------
In this subsection, we consider the error estimate between $W_h(t_n)$ and $W^n_h$, which are the solutions of Eqs. and with the initial value $W_{0}=0$. Denote $e^n_h=W_h(t_n)-W^n_h$. By and , we have $$\label{errordd}
\|e^n_h\|_{L^2(\Omega)}\leq \uppercase\expandafter{\romannumeral1}+\uppercase\expandafter{\romannumeral2},$$ where $$\begin{aligned}
\uppercase\expandafter{\romannumeral1}\leq&C\left\|\int_0^{t_n}F_h(t_n-s,t_m)f_h(s)ds\right.-\left.\frac{1}{2\pi\mathbf{i} }\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\sum_{j=1}^{\infty}f_h^j\zeta^jd\zeta\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}\leq&C\left\|\int_0^{t_n}F_h(t_n-s,t_m)(A_h(t_m)-A_h(s))\,_0D^{1-\alpha}_sW_h(s)ds\right.\\
&-\left.\frac{1}{2\pi\mathbf{i} }\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\left (\sum_{j=1}^{\infty}(A_h(t_m)-A_h(t_j))\sum_{i=0}^{j-1}d^{1-\alpha}_{i}W^{j-i}_h\zeta^j\right )d\zeta\right\|_{L^2(\Omega)}.
\end{aligned}$$
Following the proofs in [@Jin2016; @Lubich1996], one can obtain the following estimates of $\uppercase\expandafter{\romannumeral1}$ and $\uppercase\expandafter{\romannumeral2}$ defined in .
\[thmimhomI\] If $f(0)\in L^2(\Omega)$ and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<\infty$, then there holds $$\begin{aligned}
&\uppercase\expandafter{\romannumeral1}\leq C\tau \|f(0)\|_{L^2(\Omega)}+C\tau\int_0^t\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$
As for $\uppercase\expandafter{\romannumeral2}$, we introduce $$\tau\sum_{i=0}^{\infty}F^{i}_{\tau,m}\zeta^i=\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1},$$ where $$F^n_{\tau,m}=\frac{1}{2\pi\tau \mathbf{i}}\int_{|\zeta|=\xi_\tau}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}d\zeta$$ and $\xi_\tau=e^{-\tau(\kappa+1)}$. Taking $\zeta=e^{-z\tau}$ and deforming the contour $\Gamma^\tau=\{z=\kappa+1+\mathbf{i}y:y\in\mathbb{R}~{\rm and}~|y|\leq \pi/\tau\}$ to $\Gamma^\tau_{\theta,\kappa}$, one has $$F^n_{\tau,m}=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^\tau_{\theta,\kappa}}e^{zn\tau}\delta_\tau(e^{-z\tau})^{\alpha-1}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dz,$$ and simple calculations lead to $$\label{eqFntmest1}
\left \|A_h(t_m)F^n_{\tau,m}\right \|=\left \|\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^\tau_{\theta,\kappa}}e^{zn\tau}A_h(t_m)\delta_\tau(e^{-z\tau})^{\alpha-1}\left (\delta_\tau(e^{-z\tau} )^{\alpha}+A_h(t_m)\right )^{-1}dz\right \|\leq C(t_m+\tau)^{-\alpha}.$$ To get the estimates of $\uppercase\expandafter{\romannumeral2}$, we divide it into four parts, i.e., $$\label{equimII}
\begin{aligned}
\uppercase\expandafter{\romannumeral2}\leq&C\left\|\int_0^{t_n}F_h(t_n-s,t_m)(A_h(t_m)-A_h(s))\,_0D^{1-\alpha}_sW_h(s)ds\right.\\
&-\left.\tau\sum_{k=1}^{n}F^{n-k}_{\tau,m}\left ((A_h(t_m)-A_h(t_k))\sum_{i=0}^{k-1}d^{1-\alpha}_{i}W^{k-i}_h\right )\right\|_{L^2(\Omega)}\leq\sum_{k=1}^n(\uppercase\expandafter{\romannumeral2}_{1,k}+\uppercase\expandafter{\romannumeral2}_{2,k}+\uppercase\expandafter{\romannumeral2}_{3,k}+\uppercase\expandafter{\romannumeral2}_{4,k}),
\end{aligned}$$ where $$\begin{aligned}
\uppercase\expandafter{\romannumeral2}_{1,k}\leq&C\left\|\tau F^{n-k}_{\tau,m}(A_h(t_m)-A_h(t_k))\left (\sum_{i=0}^{k-1}d^{1-\alpha}_{i}W^{k-i}_h-\,_0D^{1-\alpha}_tW_h(t_{k})\right )\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}_{2,k}\leq&C\left\|\left (\int_{t_{k-1}}^{t_k}F_h(t_n-s,t_m)ds-\tau F^{n-k}_{\tau,m}\right)\left ((A_h(t_m)-A_h(t_k))\,_0D^{1-\alpha}_tW_h(t_{k})\right )\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}_{3,k}\leq&C\bigg\|\int_{t_{k-1}}^{t_k}F_h(t_n-s,t_m)\left ((A_h(t_m)-A_h(s))-(A_h(t_m)-A_h(t_k))\right )\,_0D^{1-\alpha}_tW_h(t_{k})ds\bigg\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}_{4,k}\leq&C\bigg\|\int_{t_{k-1}}^{t_k}F_h(t_n-s,t_m)(A_h(t_m)-A_h(s))\left (\,_0D^{1-\alpha}_tW_h(s)-\,_0D^{1-\alpha}_tW_h(t_{k})\right )ds\bigg\|_{L^2(\Omega)}.\\
\end{aligned}$$ To get the error estimate of $\uppercase\expandafter{\romannumeral2}$, the following estimates of $G_h$ defined in are also needed. Similar to Theorem \[thmregofG\], we have the following results.
\[thmregofGh\] If $W_0\in \dot{H}^{\epsilon}(\Omega)$, $f(0)\in L^2(\Omega)$, and $\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds<\infty$, then $G_h(t)$ satisfies $$\|G_h(t)\|_{L^2(\Omega)}\leq Ct^{\alpha-1}\|W_0\|_{L^2(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds$$ and $$\|G_h(t)\|_{\dot{H}^2(\Omega)}\leq Ct^{\epsilon\alpha/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}+C\|f(0)\|_{L^2(\Omega)}+C\int_{0}^{t}\|f'(s)\|_{L^2(\Omega)}ds.$$
\[thmHolderG\] Let $G_h(t)=\!_0D^{1-\alpha}_tW_h(t)$. Assume $W_0=0$, $f(0)\in L^2(\Omega)$, and $f'(s)\in L^{\infty}(0,T,L^2(\Omega))$. Then there holds $$\begin{aligned}
\left\|\frac{G_h(t)-G_h(t-\tau)}{\tau^{\gamma}}\right\|_{L^2(\Omega)}\leq& Ct^{\alpha-\gamma}\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ),
\end{aligned}$$ where $\gamma<1+\alpha$.
According to , one has $$\begin{aligned}
\left\|\frac{G_h(t)-G_h(t-\tau)}{\tau^{\gamma}}\right\|_{L^2(\Omega)}\leq&\upsilon_1+\upsilon_2,
\end{aligned}$$ where $$\begin{aligned}
\upsilon_1=&\left\|\frac{\int_0^tE_h(t-s,t_0)f_h(s)ds-\int_0^{t-\tau}E_h(t-\tau-s,t_0)f_h(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)},\\
\upsilon_2=&\left\|\frac{\int_0^tE_h(t-s,t_0)(A_h(t_0)-A_h(s))G_h(s)ds-\int_0^{t-\tau}E_h(t-\tau-s,t_0)(A_h(t_0)-A_h(s))G_h(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)}.\\
\end{aligned}$$ As for $\upsilon_1$, we split it into $$\begin{aligned}
\upsilon_1\leq& C\left\|\frac{\int_{0}^{t}E_h(t-s,t_0)dsf_h(0)-\int_{0}^{t-\tau}E_h(t-\tau-s,t_0)dsf_h(0)}{\tau^\gamma}\right\|_{L^2(\Omega)}\\
&+C\left\|\frac{\int_{0}^{t-\tau}\left (\int_0^{t-s}E_h(r,t_0)dr-\int_0^{t-\tau-s}E_h(r,t_0)dr\right )f_h'(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)}\\
&+C\left\|\frac{\int_{t-\tau}^{t}\int_0^{t-s}E_h(r,t_0)drf_h'(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)}\leq \upsilon_{1,1}+\upsilon_{1,2}+\upsilon_{1,3}.
\end{aligned}$$ Using the fact $\left |\frac{1-e^{-z\tau}}{\tau^\gamma}\right |\leq C|z|^{\gamma}$ on $\Gamma_{\theta,\kappa}$, we obtain $$\begin{aligned}
\upsilon_{1,1}\leq& C\left\|\int_{\Gamma_{\theta,\kappa}}e^{z(t-s)}\frac{1-e^{-z\tau}}{\tau^{\gamma}}(z^\alpha+A_h(t_0))^{-1}z^{-1}dzf_h(0)\right\|_{L^2(\Omega)}\\
\leq&C\int_{\Gamma_{\theta,\kappa}}|e^{z(t-s)}||z|^{\gamma-\alpha-1}|dz|\|f_h(0)\|_{L^2(\Omega)}\leq Ct^{\alpha-\gamma}\|f(0)\|_{L^2(\Omega)}.
\end{aligned}$$ Similarly, we can bound $\upsilon_{1,2}$ by $$\begin{aligned}
\upsilon_{1,2}\leq& C\left\|\int_{0}^{t-\tau}\int_{\Gamma_{\theta,\kappa}}e^{z(t-s)}\frac{1-e^{-z\tau}}{\tau^{\gamma}}(z^\alpha+A_h(t_0))^{-1}z^{-1}dzf_h'(s)ds\right\|_{L^2(\Omega)}\\
\leq
&C\int_{0}^{t-\tau}\int_{\Gamma_{\theta,\kappa}}|e^{z(t-s)}||z|^{\gamma-\alpha-1}|dz|\|f_h'(s)\|_{L^2(\Omega)}ds
\leq C\int_{0}^{t-\tau}(t-s)^{\alpha-\gamma}\|f'(s)\|_{L^2(\Omega)}ds,
\end{aligned}$$ where $\gamma<1+\alpha$ is required to ensure that $\upsilon_{1,2}$ converges. Similarly, one has $$\begin{aligned}
\upsilon_{1,3}\leq& C\left\|\int_{t-\tau}^{t}\int_{\Gamma_{\theta,\kappa}}e^{z(t-s)}\tau^{-\gamma}(z^\alpha+A_h(t_0))^{-1}z^{-1}dzf_h'(s)ds\right\|_{L^2(\Omega)}\\
\leq& C\left\|\int_{\Gamma_{\theta,\kappa}}e^{z\tau}\frac{1-e^{-z\tau}}{z\tau^{\gamma}}(z^\alpha+A_h(t_0))^{-1}z^{-1}dz\right\|_{L^2(\Omega)}\|f_h'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\\
\leq
&C\int_{\Gamma_{\theta,\kappa}}|z|^{\gamma-\alpha-2}|dz|\|f_h'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}
\leq Ct^{1+\alpha-\gamma}\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))},
\end{aligned}$$ where we take $\kappa=1/t$ and require $\gamma<1+\alpha$ to ensure that $\upsilon_{1,3}$ converges. Thus, for $\gamma<1+\alpha$, there holds $$\upsilon_1\leq Ct^{\alpha-\gamma}\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ).$$ Similarly, when $t=t_0$, one can split $\upsilon_{2}$ into $$\begin{aligned}
\upsilon_2
\leq&\left\|\frac{\int_0^{t_0-\tau}\left (E_h(t_0-s,t_0)-E_h(t_0-\tau-s,t_0)\right )(A_h(t_0)-A_h(s))G_h(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)}\\
&+\left\|\frac{\int_{t_0-\tau}^{t_0}E_h(t_0-s,t_0)(A_h(t_0)-A_h(s))G_h(s)ds}{\tau^\gamma}\right\|_{L^2(\Omega)}\leq \upsilon_{2,1}+\upsilon_{2,2}.
\end{aligned}$$ Using Lemma \[lemestEFh\] and assumption , one has $$\begin{aligned}
\upsilon_{2,1}\leq& \left\|\int_0^{t_0-\tau}\int_{\Gamma_{\theta,\kappa}}e^{z(t_0-s)}\frac{1-e^{-z\tau}}{\tau^\gamma}(z^{\alpha}+A_h(t_0))^{-1}dz(A_h(t_0)-A_h(s))G_h(s)ds\right\|_{L^2(\Omega)}\\
\leq& \int_0^{t_0-\tau}\int_{\Gamma_{\theta,\kappa}}|e^{z(t_0-s)}||z|^{\gamma-\alpha}|dz|\|(A_h(t_0)-A_h(s))G_h(s)\|_{L^2(\Omega)}ds
\leq\int_0^{t_0-\tau}(t_0-s)^{\alpha-\gamma}\|G_h(s)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Similarly, when $\gamma\leq 1$, there exists $$\upsilon_{2,2}\leq C\left\|\frac{\int_{t_0-\tau}^{t_0}E_h(t_0-s,t_0)ds}{\tau^{\gamma-1}}\right\|\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))}\leq Ct_0^{1+\alpha-\gamma}\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))};$$ when $\gamma>1$, $$\begin{aligned}
\upsilon_{2,2}\leq& C\left\|\frac{\int_{t_0-\tau}^{t_0}E_h(t_0-s,t_0)ds}{\tau^{\gamma-1}}\right\|\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))}\\
\leq& C\left\|\frac{\int_{t_0-\tau}^{t_0}\int_{\Gamma_{\theta,\kappa}}e^{z(t_0-s)}(z^{\alpha}+A_h(t_0))^{-1}dzds}{\tau^{\gamma-1}}\right\|\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))}\\
\leq&C \left\|\int_{\Gamma_{\theta,\kappa}}\frac{1-e^{z\tau}}{z\tau^{\gamma-1}}(z^{\alpha}+A_h(t_0))^{-1}dz\right\|\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))}\\
\leq&C \int_{\Gamma_{\theta,\kappa}}|z|^{\gamma-2-\alpha}|dz|\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))}\leq Ct_0^{1+\alpha-\gamma}\|G_h(s)\|_{L^\infty(0,T,\dot{H}^2(\Omega))},
\end{aligned}$$ where we take $\kappa=1/t_0$ and require $\gamma<1+\alpha$ for the integral $\int_{\Gamma_{\theta,\kappa}}|z|^{\gamma-2-\alpha}|dz|$ to be convergent. Taking $t_0=t$ and using Theorem \[thmregofGh\], the desired result follows from the fact that $T/t>1$.
Now we estimate $\uppercase\expandafter{\romannumeral2}$. First, we consider $\uppercase\expandafter{\romannumeral2}_{1,k}$ defined in , for which the difference between $\sum_{i=0}^{k-1}d^{1-\alpha}_{i}W^{k-i}_h$ and $\,_0D^{1-\alpha}_tW_h(t_{k})$ needs to be bounded.
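For orientation, the weights $d_i^{1-\alpha}$ are the convolution quadrature weights generated by the backward Euler symbol, i.e. the coefficients of $\delta_\tau(\zeta)^{1-\alpha}=((1-\zeta)/\tau)^{1-\alpha}$ (an assumption consistent with the backward Euler quadrature used throughout). A minimal scalar Python sketch, with our own helper name `cq_weights`, computes them by the binomial recurrence and checks them against a closed-form Riemann-Liouville derivative:

```python
import math

def cq_weights(beta, tau, n):
    """Coefficients w_0, ..., w_n of ((1 - zeta)/tau)**beta, i.e.
    w_i = tau**(-beta) * (-1)**i * binom(beta, i), computed via the
    recurrence w_i = w_{i-1} * (i - 1 - beta) / i."""
    w = [tau ** (-beta)]
    for i in range(1, n + 1):
        w.append(w[-1] * (i - 1 - beta) / i)
    return w

# Check: apply the weights for beta = 1 - alpha to g(t) = t, whose
# Riemann-Liouville derivative is (0D_t^beta g)(t) = t**(1-beta)/Gamma(2-beta).
alpha, tau = 0.7, 1.0e-3
beta = 1.0 - alpha
n = int(round(1.0 / tau))                 # evaluate at t_n = 1
w = cq_weights(beta, tau, n)
approx = sum(w[i] * (n - i) * tau for i in range(n + 1))
exact = 1.0 / math.gamma(2.0 - beta)
print(abs(approx - exact))                # O(tau): about 1e-4 here
```

The first-order accuracy seen here is for a smooth function vanishing at $t=0$; it is a sanity check on the weights only, not on the full scheme.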
\[lemerrorGhGnh\] Let $G_h(t)=\!_0D^{1-\alpha}_tW_h(t)$ and $G^n_h=\sum_{i=0}^{n-1}d^{1-\alpha}_iW^{n-i}_h$. Assume $f(0)\in L^2(\Omega)$ and $f'(s)\in L^{\infty}(0,T,L^2(\Omega))$. Then there holds $$\|G^n_h-G_h(t_n)\|_{L^2(\Omega)}\leq C\tau t_n^{\alpha-1}\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ).$$
From the definition of $G^n_h$ and Eq. , one has $$\begin{aligned}
&\sum_{n=1}^{\infty}G^{n}_h\zeta^n=\sum_{n=1}^{\infty}\sum_{i=0}^{n-1}d^{1-\alpha}_iW^{n-i}_h\zeta^n=\delta_\tau(\zeta)^{1-\alpha}\sum_{n=1}^{\infty}W^{n}_h\zeta^n\\
=&\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right)^{-1}\sum_{n=1}^{\infty}f_h^n\zeta^n+\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right)^{-1}\sum_{n=1}^{\infty}(A_h(t_m)-A_h(t_n))G^n_h\zeta^n.
\end{aligned}$$ Considering the error between $G^m_h$ and $G_h(t_m)$, one has $$\begin{aligned}
\|G^m_h-G_h(t_m)\|_{L^2(\Omega)}\leq \sum_{k=1}^{2}\upsilon_{k,h}
\end{aligned}$$ where $$\begin{aligned}
\upsilon_{1,h}\leq&C\left \|\int_{\zeta=|\xi_\tau|}\zeta^{-m-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right)^{-1}\sum_{n=1}^{\infty}f_h^n\zeta^nd\zeta-\int_{0}^{t_m}E_h(t_m-s,t_m)f_h(s)ds\right \|_{L^2(\Omega)},\\
\upsilon_{2,h}\leq&C\left \|\int_{\zeta=|\xi_\tau|}\zeta^{-m-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right)^{-1}\sum_{n=1}^{\infty}(A_h(t_m)-A_h(t_n))G^n_h\zeta^nd\zeta\right .\\&\left .-\int_{0}^{t_m}E_h(t_m-s,t_m)(A_h(t_m)-A_h(s))G_h(s)ds\right \|_{L^2(\Omega)}\\
\end{aligned}$$ with $\xi_\tau=e^{-\tau(\kappa+1)}$. Similar to the proof in [@Jin2016; @Lubich1996], the following estimate of $\upsilon_{1,h}$ can be obtained: $$\begin{aligned}
&\upsilon_{1,h}\leq C\tau t_m^{\alpha-1}\|f(0)\|_{L^2(\Omega)
}+C\tau\int_0^{t_m}(t_m-s)^{\alpha-1}\|f'(s)\|_{L^2(\Omega)}ds.\\
\end{aligned}$$ As for $\upsilon_{2,h}$, we introduce $$\tau\sum_{i=0}^{\infty}E^{i}_{\tau,m}\zeta^i=\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1},$$ where $$E^n_{\tau,m}=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^\tau_{\theta,\kappa}}e^{zn\tau}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dz.$$ Thus $$\label{eqestAEn}
\|A_h(t_m)E^n_{\tau,m}\|=\left \|\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^\tau_{\theta,\kappa}}e^{zn\tau}A_h(t_m)\left (\delta_\tau(e^{-z\tau} )^{\alpha}+A_h(t_m)\right )^{-1}dz\right \|\leq C(t_m+\tau)^{-1}.$$ For convenience, we split $\upsilon_{2,h}$ into the following forms $$\begin{aligned}
\upsilon_{2,h}\leq&C\left \|\tau\sum_{k=1}^{m}E^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))G^k_h-\int_{0}^{t_m}E_h(t_m-s,t_m)(A_h(t_m)-A_h(s))G_h(s)ds\right \|_{L^2(\Omega)}\\
\leq& C\sum_{k=1}^{m} \left \|\tau E^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))(G^k_h-G_h(t_k))\right \|_{L^2(\Omega)}\\
&+C\sum_{k=1}^{m} \left \|\left (\tau E^{m-k}_{\tau,m}-\int_{t_{k-1}}^{t_k}E_h(t_m-s,t_m)ds\right)(A_h(t_m)-A_h(t_k))G_h(t_k)\right \|_{L^2(\Omega)}\\
&+C\sum_{k=1}^{m} \left \|\int_{t_{k-1}}^{t_k}E_h(t_m-s,t_m)(A_h(t_k)-A_h(s))dsG_h(t_k)\right \|_{L^2(\Omega)}\\
&+C\sum_{k=1}^{m} \left \|\int_{t_{k-1}}^{t_k}E_h(t_m-s,t_m)(A_h(t_m)-A_h(s))(G_h(t_k)-G_h(s))ds\right \|_{L^2(\Omega)}=\sum_{k=1}^{m }\sum_{j=1}^{4}\upsilon_{2,j,k,h}.
\end{aligned}$$ Assumption and Eq. lead to $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,1,k,h}\leq C\sum_{k=1}^{m}\tau\|G^k_h-G_h(t_k)\|_{L^2(\Omega)}.
\end{aligned}$$ As for $\upsilon_{2,2,k,h}$, we have $$\begin{aligned}
&\left \|\tau E^{m-k}_{\tau,m
}-\int_{t_{k-1}}^{t_k}E_h(t_m-s,t_m)ds\right \|\leq \left \| \int_{t_{k-1}}^{t_k}E^{m-k}_{\tau,m
}-E_h(t_m-s,t_m)ds\right \|\\
\leq&C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(m-k)\tau}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dz-\int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}(z^{\alpha}+A_h(t_m))^{-1}dzds\right \|\\
\leq&C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}(z^{\alpha}+A_h(t_m))^{-1}dzds\right \|\\
&+C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}(1-e^{(s-k\tau)z})\left (\delta_\tau(e^{-z\tau} )^{\alpha}+A_h(t_m)\right )^{-1}dzds\right \|\\
&+C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}\left (\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}-(z^{\alpha}+A_h(t_m))^{-1}\right )dzds\right \|\\
\leq &C\tau\int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-2}ds.
\end{aligned}$$ According to , one has $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,2,k,h}\leq C\tau\sum_{k=1}^{m}\int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1}ds\|G_h(t_k)\|_{\dot{H}^2(\Omega)}
\leq C\tau\|f(0)\|_{L^2(\Omega)}+C\tau\int_{0}^{t_m}\|f'(s)\|_{L^2(\Omega)}ds,
\end{aligned}$$ where the last inequality follows from Lemma \[thmregofG\]. Combining Lemma \[lemestEF\] and assumption , there holds $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,3,k,h}\leq C\sum_{k=1}^{m}\tau\int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1} \|G_h(t_k)\|_{\dot{H}^2(\Omega)}ds
\leq C\tau\|f(0)\|_{L^2(\Omega)}+C\tau\int_{0}^{t_m}\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$ From Lemma \[thmHolderG\], it holds $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,4,k,h}\leq& C\tau\sum_{k=1}^{m}\int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha} \left \|\frac{G_h(s)-G_h(t_k)}{\tau}\right \|_{L^2(\Omega)}ds
\leq C\tau\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ).
\end{aligned}$$ Since $m\in[0,L]$ is any fixed integer, taking $m=n$ results in $$\|G^n_h-G_h(t_n)\|_{L^2(\Omega)}\leq C\tau t_n^{\alpha-1}\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right )+C\sum_{k=1}^{n}\tau\|G^k_h-G_h(t_k)\|_{L^2(\Omega)}.$$ Then the discrete Grönwall inequality [@Thomee2006] leads to the desired result.
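The discrete Grönwall inequality invoked here admits a quick numerical illustration. The sketch below (with arbitrary constants $b$ and $C$, not data from the scheme) builds the extremal sequence that satisfies the Grönwall hypothesis with equality and verifies the exponential bound:

```python
import math

# Discrete Gronwall inequality (standard form): if, for every m,
#     e_m <= b + C * tau * sum_{k=1}^{m-1} e_k,
# then e_m <= b * exp(C * m * tau).  We generate the extremal sequence
# (equality at every step) and check the exponential bound numerically.
b, C, tau, M = 1.0, 2.0, 1.0e-3, 5000
e = [b]               # e_1 = b (the sum is empty for m = 1)
s = b                 # running partial sum of e_1, ..., e_{m-1}
for m in range(2, M + 1):
    e.append(b + C * tau * s)
    s += e[-1]
worst = max(e[m - 1] / (b * math.exp(C * m * tau)) for m in range(1, M + 1))
print(worst)          # stays below 1, confirming the bound
```

Here the extremal sequence is $e_m=b(1+C\tau)^{m-1}$, which sits just under $b\,e^{C m\tau}$; the inequality used in the proof has the $k=m$ term in the sum as well, which is absorbed for $C\tau$ small.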
\[thmimhomII\] If $f(0)\in L^2(\Omega)$ and $f'(s)\in L^{\infty}(0,T,L^2(\Omega))$, then there holds $$\uppercase\expandafter{\romannumeral2}\leq C\tau\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ),$$ where $\uppercase\expandafter{\romannumeral2}$ is defined in .
According to Lemma \[lemerrorGhGnh\] and Eq. , one has $$\begin{aligned}
\sum_{k=1}^m\uppercase\expandafter{\romannumeral2}_{1,k}\leq& C\tau \sum_{k=1}^m(t_m-t_k)^{1-\alpha}\|G^k_h-G_h(t_k)\|_{L^2(\Omega)}\\
\leq& C\tau \|f(0)\|_{L^2(\Omega)}+C\tau\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}.
\end{aligned}$$ Next, consider the difference between $\tau F^{m-k}_{\tau,m}$ and $\int_{t_{k-1}}^{t_k}F_h(t_m-s,t_m)ds$, i.e., $$\begin{aligned}
&\left \|\tau F^{m-k}_{\tau,m
}-\int_{t_{k-1}}^{t_k}F_h(t_m-s,t_m)ds\right \|\leq \left \| \int_{t_{k-1}}^{t_k}F^{m-k}_{\tau,m
}-F_h(t_m-s,t_m)ds\right \|\\
\leq&C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(m-k)\tau}\delta_\tau(e^{-z\tau})^{\alpha-1}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dz\right.\\
&-\left .\int_{\Gamma_{\theta,\kappa}}e^{(t_m-s)z}z^{\alpha-1}(z^{\alpha}+A_h(t_m))^{-1}dzds\right \|\\
\leq&C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma_{\theta,\kappa}\backslash\Gamma^\tau_{\theta,\kappa}}e^{(t_m-s)z}z^{\alpha-1}(z^{\alpha}+A_h(t_m))^{-1}dzds\right \|\\
&+C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}(1-e^{(s-k\tau)z})\delta_\tau(e^{-z\tau})^{\alpha-1}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dzds\right \|\\
&+C\left \| \int_{t_{k-1}}^{t_k}\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}\left (\delta_\tau(e^{-z\tau})^{\alpha-1}\left (\delta_\tau(e^{-z\tau} )^{\alpha}+A_h(t_m)\right )^{-1}-z^{\alpha-1}(z^{\alpha}+A_h(t_m))^{-1}\right )dzds\right \|\\
\leq &C\tau\int_{t_{k-1}}^{t_k}(t_m-s)^{-1}ds,
\end{aligned}$$ where Lemma \[Lemseriesest\] is used. According to assumption and $t_m-t_k\leq t_m-s$ for $s\in[t_{k-1},t_k]$, one has $$\begin{aligned}
\sum_{k=1}^m\uppercase\expandafter{\romannumeral2}_{2,k}\leq C\tau\sum_{k=1}^m\|G_h(t_k)\|_{\dot{H}^2(\Omega)}\leq C\tau\|f(0)\|_{L^2(\Omega)}+C\tau\int_{0}^{t_m}\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$ Using Lemma \[lemestEFh\], assumption , and Theorem \[thmregofGh\], one can get $$\begin{aligned}
\sum_{k=1}^m\uppercase\expandafter{\romannumeral2}_{3,k}\leq C\tau\sum_{k=1}^m\int_{t_{k-1}}^{t_k}\|G_h(s)\|_{\dot{H}^2(\Omega)}ds\leq C\tau\|f(0)\|_{L^2(\Omega)}+C\tau\int_{0}^{t_m}\|f'(s)\|_{L^2(\Omega)}ds.
\end{aligned}$$ Combining Lemma \[lemestEFh\], assumption , and Theorem \[thmHolderG\] results in $$\begin{aligned}
\sum_{k=1}^m\uppercase\expandafter{\romannumeral2}_{4,k}\leq C\tau\sum_{k=1}^m\int_{t_{k-1}}^{t_k}(t_m-s)^{1-\alpha}\left \|\frac{G_h(s)-G_h(t_k)}{\tau}\right \|_{L^2(\Omega)}ds\leq C\tau\|f(0)\|_{L^2(\Omega)}+C\tau\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}.
\end{aligned}$$ Thus taking $m=n$ leads to $$\uppercase\expandafter{\romannumeral2}\leq C\tau\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ).$$
Combining Theorems \[thmimhomI\] and \[thmimhomII\], one obtains the following result.
\[thmfullinhomo\] Let $W_h$ and $W^n_h$ be the solutions of Eqs. and respectively. If $f(0)\in L^2(\Omega)$ and $f'(s)\in L^{\infty}(0,T,L^2(\Omega))$, then there holds $$\|W_h(t_n)-W^n_h\|_{L^2(\Omega)}\leq C\tau\left (\|f(0)\|_{L^2(\Omega)}+\|f'(s)\|_{L^{\infty}(0,T,L^2(\Omega))}\right ).$$
Error estimate for the homogeneous problem
------------------------------------------
In this subsection, we consider the error between $W_h(t_n)$ and $W^n_h$, which are the solutions of Eqs. and with $f=0$. Similarly, denote $e^n_h=W_h(t_n)-W^n_h$. Thus $$\|e^n_h\|_{L^2(\Omega)}\leq \uppercase\expandafter{\romannumeral1}+\uppercase\expandafter{\romannumeral2},$$ where $$\begin{aligned}
\uppercase\expandafter{\romannumeral1}\leq&C\left\|F_h(t_n,t_m)W_{0,h}-\frac{1}{2\pi\mathbf{i} }\int_{\zeta=|\xi_\tau|}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\frac{\zeta}{\tau}W^0_hd\zeta\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}\leq&C\left\|\int_0^{t_n}F_h(t_n-s,t_m)(A_h(t_m)-A_h(s))\,_0D^{1-\alpha}_sW_h(s)ds\right.\\
&-\left.\frac{1}{2\pi\mathbf{i} }\int_{\zeta=|\xi_\tau|}\zeta^{-n-1}\delta_\tau(\zeta)^{\alpha-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1}\left (\sum_{j=1}^{\infty}(A_h(t_m)-A_h(t_j))\sum_{i=0}^{j-1}d^{1-\alpha}_{i}W^{j-i}_h\zeta^j\right )d\zeta\right\|_{L^2(\Omega)}
\end{aligned}$$ with $\xi_\tau=e^{-\tau(\kappa+1)}$. Similar to the proof in [@Jin2016; @Lubich1996], one can obtain the following estimates.
\[thmhomI\] If $W_0\in \dot{H}^\epsilon(\Omega)$ with $\epsilon>0$, then there holds $$\begin{aligned}
&\uppercase\expandafter{\romannumeral1}\leq Ct_n^{\alpha\epsilon/2-1}\tau \|W_0\|_{\dot{H}^{\epsilon}(\Omega)}.
\end{aligned}$$
As for $\uppercase\expandafter{\romannumeral2}$, when $t_n=t_m$, we have
$$\label{eqromannumera}
\begin{aligned}
\uppercase\expandafter{\romannumeral2}\leq&C\left\|\int_0^{t_m}F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\,_0D^{1-\alpha}_sW_h(s)ds\right.\\
&-\left.\tau\sum_{k=1}^{m}F^{m-k}_{\tau,m}\left ((A_h(t_m)-A_h(t_k))\sum_{i=0}^{k-1}d^{1-\alpha}_{i}W^{k-i}_h\right )\right\|_{L^2(\Omega)}\\
\leq&C\left\|-\int_0^{t_m}\frac{\partial}{\partial s}\left (F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )\,_0I^{\alpha}_sW_h(s)ds\right.\\
&-\left.\tau\sum_{k=1}^{m}\left (F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))\right )\left (\sum_{i=0}^{k-1}d^{-\alpha}_{i}W^{k-i}_h-\sum_{i=0}^{k-2}d^{-\alpha}_{i}W^{k-1-i}_h\right )/\tau\right\|_{L^2(\Omega)}\\
\leq&C\left\|\int_0^{t_m}\frac{\partial}{\partial (t_m-s)}\left (F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )\,_0I^{\alpha}_sW_h(s)ds\right.\\
&-\left.\tau\sum_{k=1}^{m}\left (F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )/\tau\sum_{i=0}^{k-1}d^{-\alpha}_{i}W^{k-i}_h\right\|_{L^2(\Omega)}\\
\leq&\sum_{k=1}^{m}(\uppercase\expandafter{\romannumeral2}_{1,k}+\uppercase\expandafter{\romannumeral2}_{2,k}+\uppercase\expandafter{\romannumeral2}_{3,k}),
\end{aligned}$$
where $$\begin{aligned}
\uppercase\expandafter{\romannumeral2}_{1,k}\leq& \left\|\left (F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\left (\!_0I^{\alpha}_{t_k}W_h-\sum_{i=0}^{k-1}d^{-\alpha}_{i}W^{k-i}_h\right )\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}_{2,k}\leq& \left\|\bigg(\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )ds\right .\\
&\left .-\left (F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\bigg) \!_0I^{\alpha}_{t_k}W_h\right\|_{L^2(\Omega)},\\
\uppercase\expandafter{\romannumeral2}_{3,k}\leq& \left\|\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )(\!_0I^{\alpha}_{s}W_h-\!_0I^{\alpha}_{t_k}W_h)ds\right\|_{L^2(\Omega)}.\\
\end{aligned}$$
To estimate $\uppercase\expandafter{\romannumeral2}_{1,k}$, denote $U_h(t_k)=\!_0I^{\alpha}_{t_k}W_h$ and $U^k_h=\sum_{i=0}^{k-1}d^{-\alpha}_{i}W^{k-i}_h$. By means of Laplace transform, $U_h$ can be represented by $$U_h(t)=H_h(t,t_m)W^{0}_{h}+\int_0^t H_h(t-s,t_m)(A_h(t_m)-A_h(s))\frac{\partial}{\partial s}U_h(s)ds$$ and $$\begin{aligned}
U^{n}_h=&\sum_{i=0}^{n-1}d^{-\alpha}_{i}W^{n-i}_h
=\frac{1}{2\pi \mathbf{i}}\int_{\zeta=|\xi_\tau|}\zeta^{-n-1}\delta_\tau(\zeta)^{-1}(\delta_\tau(\zeta)^{\alpha}+A_h(t_m))^{-1}\frac{\zeta}{\tau}W^0_hd\zeta\\
&+\frac{1}{2\pi \mathbf{i}}\int_{\zeta=|\xi_\tau|}\zeta^{-n-1}\delta_\tau(\zeta)^{-1}(\delta_\tau(\zeta)^{\alpha}+A_h(t_m))^{-1}\sum_{j=1}^{\infty}(A_h(t_m)-A_h(t_j))(U^j_h-U^{j-1}_h)/\tau\zeta^jd\zeta,
\end{aligned}$$ where $$H_h(t,t_m)=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma_{\theta,\kappa}}e^{zt}z^{-1}(z^\alpha+A_h(t_m))^{-1}dz.$$ Similar to the proof of Theorems \[thmregofG\] and \[thmHolderG\], one can get the following estimates of $U_h(t)$.
\[thmregofUh\] If $W_0\in L^2(\Omega)$, then $U_h(t)$ satisfies $$\|U_h(t)\|_{L^2(\Omega)}\leq C\|W_0\|_{L^2(\Omega)},\qquad
\|U_h(t)\|_{\dot{H}^2(\Omega)}\leq C\|W_0\|_{L^2(\Omega)}.$$ And if $W_0\in \dot{H}^\eta(\Omega)$, $\eta\in[0,2]$, then it holds $$\|A_h^{\eta/2} U_h(t)\|_{\dot{H}^2(\Omega)}\leq C\|W_0\|_{\dot{H}^{\eta}(\Omega)}.$$
\[thmHolderU\] Let $U_h=\!_0I^{\alpha}_tW_h(t)$. When $W_0\in L^2(\Omega)$, there holds $$\begin{aligned}
\left\|\frac{U_h(t)-U_h(t-\tau)}{\tau^{\gamma_1}}\right\|_{L^2(\Omega)}\leq& Ct^{\alpha-\gamma_1}\|W_0\|_{L^2(\Omega)},
\end{aligned}$$ where $\gamma_1<1+\alpha$. And when $W_0\in \dot{H}^\epsilon(\Omega)$, one has $$\begin{aligned}
\left\|A_h\frac{U_h(t)-U_h(t-\tau)}{\tau^{\gamma_2}}\right\|_{L^2(\Omega)}\leq& Ct^{\alpha\epsilon/2-\gamma_2}\|W_0\|_{\dot{H}^\epsilon(\Omega)},
\end{aligned}$$ where $\gamma_2\leq 1$.
Then we consider the difference between $U_h(t_k)$ and $U^k_h$.
If $W_0\in L^2(\Omega)$ and $\frac{1}{a^2(t)}\in C^2[0,T]$, then there holds $$\|U^n_h-U_h(t_n)\|_{L^2(\Omega)}\leq Ct_n^{\alpha-1}\tau\|W_0\|_{L^2(\Omega)}.$$
For $n=m$, we split the error as $$\|U_h(t_m)-U^m_h\|_{L^2(\Omega)}\leq \upsilon_{1,h}+\upsilon_{2,h},$$ where $$\begin{aligned}
\upsilon_{1,h}\leq& C\left \|H_h(t_m,t_m)W_{0,h}-\frac{1}{2\pi \mathbf{i}}\int_{\zeta=|\xi_\tau|}\zeta^{-m-1}\delta_\tau(\zeta)^{-1}(\delta_\tau(\zeta)^{\alpha}+A_h(t_m))^{-1}\frac{\zeta}{\tau}W^0_hd\zeta\right \|_{L^2(\Omega)},\\
\upsilon_{2,h}\leq& C\left \|\int_0^{t_m} H_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\frac{\partial}{\partial s}U_h(s)ds\right .\\
&\left .-\frac{1}{2\pi \mathbf{i}}\int_{\zeta=|\xi_\tau|}\zeta^{-m-1}\delta_\tau(\zeta)^{-1}(\delta_\tau(\zeta)^{\alpha}+A_h(t_m))^{-1}\sum_{j=1}^{\infty}(A_h(t_m)-A_h(t_j))(U^j_h-U^{j-1}_h)/\tau\zeta^jd\zeta\right \|_{L^2(\Omega)}.\\
\end{aligned}$$ Similar to the proof in [@Jin2016; @Lubich1996], the following estimate can be obtained: $$\upsilon_{1,h}\leq C\tau t_m^{\alpha-1}\|W_0\|_{L^2(\Omega)}.$$ To estimate $\upsilon_{2,h}$, we introduce $$\tau\sum_{i=0}^{\infty}H^{i}_{\tau,m}\zeta^i=\delta_\tau(\zeta)^{-1}\left (\delta_\tau(\zeta)^{\alpha}+A_h(t_m)\right )^{-1},$$ where $$H^n_{\tau,m}=\frac{1}{2\pi \mathbf{i}}\int_{\Gamma^\tau_{\theta,\kappa}}e^{zn\tau}\delta_\tau(e^{-z\tau})^{-1}\left (\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m)\right )^{-1}dz.$$ Then $\upsilon_{2,h}$ can be split into the following parts: $$\begin{aligned}
\upsilon_{2,h}\leq & C\left \|\int_0^{t_m} \frac{\partial}{\partial (t_m-s)}H_h(t_m-s,t_m)(A_h(t_m)-A_h(s))U_h(s)ds\right .\\
&\left .-\sum_{j=1}^{m} (H^{m-j}_{\tau,m}-H^{m-j-1}_{\tau,m})(A_h(t_m)-A_h(t_j))U^j_h\right \|_{L^2(\Omega)}=\sum_{i=1}^3\sum_{k=1}^{m}\upsilon_{2,i,k,h},
\end{aligned}$$ where $$\begin{aligned}
\upsilon_{2,1,k,h}\leq& \left\|\left (H^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-H^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\left (U_h(t_k)-U^k_h\right )\right\|_{L^2(\Omega)},\\
\upsilon_{2,2,k,h}\leq& \left\|\bigg(\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (H_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )ds\right .\\
&\left .-\left (H^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-H^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\bigg)U_h(t_k)\right\|_{L^2(\Omega)},\\
\upsilon_{2,3,k,h}\leq& \left\|\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (H_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )(U_h(s)-U_h(t_k))ds\right\|_{L^2(\Omega)}.\\
\end{aligned}$$ As for $\upsilon_{2,1,k,h}$, we have $$\begin{aligned}
&\|H^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-H^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\|_{L^2(\Omega)}\\
\leq& \|H^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-H^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k}))\|\\
&+\|H^{m-k-1}_{\tau,m}(A_h(t_k)-A_h(t_{k-1}))\|\leq \sigma_{1,k}+\sigma_{2,k}.
\end{aligned}$$ Using Lemma \[Lemseriesest\], one can obtain $$\begin{aligned}
\sigma_{1,k}\leq C\tau(t_m-t_k)\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{(m-k)\tau z}\frac{1-e^{-z\tau}}{\tau}A_h(t_m)\delta_\tau(e^{-z\tau})^{-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|
\leq C\tau.
\end{aligned}$$ Similarly, $$\sigma_{2,k}\leq C\tau\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{(m-k-1)\tau z}A_h(t_m)\delta_\tau(e^{-z\tau})^{-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\leq C\tau.$$ Thus $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,1,k,h}\leq C\tau\sum_{k=1}^{m}\|U_h(t_k)-U^k_h\|_{L^2(\Omega)}.
\end{aligned}$$ As for $\upsilon_{2,2,k,h}$, one can divide it into four parts, i.e., $$\begin{aligned}
\upsilon_{2,2,k,h}%\leq&C\left\|\bigg(\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (H_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )ds\right .\\
%&\left .-\left (H^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-H^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\bigg)U_h(t_k)\right\|_{L^2(\Omega)}\\
\leq&C\left\|\int_{t_{k-1}}^{t_k}(A_h(t_m)-A_h(s))\left( \frac{\partial}{\partial (t_m-s)}H_h(t_m-s,t_m)-\left (H^{m-k}_{\tau,m}-H^{m-k-1}_{\tau,m}\right)/\tau\right)dsU_h(t_k) \right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}(A_h(s)-A_h(t_{k-1}))\left (H^{m-k}_{\tau,m}-H^{m-k-1}_{\tau,m}\right)/\tau dsU_h(t_k) \right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial s}A_h(s)\left( H_h(t_m-s,t_m)-H^{m-k}_{\tau,m}\right)ds U_h(t_k)\right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}\left (\frac{\partial}{\partial s}A_h(s)-\frac{A_h(t_k)-A_h(t_{k-1})}{\tau}\right )H^{m-k}_{\tau,m}dsU_h(t_k) \right\|_{L^2(\Omega)}\leq \sum_{i=1}^{4}\vartheta_{i,k}.
\end{aligned}$$ For the first part $\vartheta_{1,k}$, using Lemma \[Lemseriesest\], one has the estimate of the difference between $\frac{\partial}{\partial (t_m-s)}H_h(t_m-s,t_m)$ and $\frac{H^{m-k}_{\tau,m}-H^{m-k-1}_{\tau,m}}{\tau}$, i.e., $$\begin{aligned}
&\left \| \frac{\partial}{\partial (t_m-s)}H_h(t_m-s,t_m)-\frac{H^{m-k}_{\tau,m}-H^{m-k-1}_{\tau,m}}{\tau}\right \|\\
\leq &C\left \| \int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}(z^{\alpha}+A_h(t_m))^{-1}dz-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
\leq &C\tau(t_m-s)^{\alpha-2},
\end{aligned}$$ which yields $$\begin{aligned}
\sum_{k=1}^{m}\vartheta_{1,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Similarly, one has $$\left \|\frac{H^{m-k}_{\tau,m}-H^{m-k-1}_{\tau,m}}{\tau}\right \|\leq C\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k-1})}e^{-z\tau}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\leq C(t_m-s)^{\alpha-1}.$$ Therefore one has $$\begin{aligned}
\sum_{k=1}^{m}\vartheta_{2,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Moreover, according to Lemma \[Lemseriesest\], there is $$\begin{aligned}
&\left \| H_h(t_m-s,t_m)-H^{m-k}_{\tau,m}\right \|\\
\leq &C\left \| \int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}z^{-1}(z^{\alpha}+A_h(t_m))^{-1}dz-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}\delta_\tau(e^{-z\tau})^{-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
\leq &C\tau(t_m-s)^{\alpha-1},
\end{aligned}$$ which leads to $$\begin{aligned}
\sum_{k=1}^{m}\vartheta_{3,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ On the other hand, according to Lemma \[Lemseriesest\], one has $$\begin{aligned}
&\left \|H^{m-k}_{\tau,m}\right \|
\leq C\left \| \int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}\delta_\tau(e^{-z\tau})^{-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|
\leq C(t_m-t_{k})^{\alpha}.
\end{aligned}$$ Combining this with $\frac{1}{a^2(t)}\in C^2[0,T]$ leads to $$\label{eqqSigma}
\begin{aligned}
\sum_{k=1}^{m}\vartheta_{4,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-t_{k})^{\alpha}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ The estimate together with Theorem \[thmregofUh\] yields $$\begin{aligned}
\sum_{k=1}^{m}\upsilon_{2,2,k,h}\leq C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha-1}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds\leq C\tau\|W_0\|_{L^2(\Omega)}.
\end{aligned}$$ Using the condition $\frac{1}{a^2(t)}\in C^2[0,T]$, one can bound $\upsilon_{2,3,k,h}$ by $$\begin{aligned}
\upsilon_{2,3,k,h}\leq&C\left\|\int_{t_{k-1}}^{t_k}(A_h(t_m)-A_h(s))\frac{\partial}{\partial (t_m-s)}\left(H_h(t_m-s,t_m)\right )(U_h(s)-U_h(t_k))ds\right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}H_h(t_m-s,t_m)\frac{\partial}{\partial s}\left(A_h(t_m)-A_h(s)\right )(U_h(s)-U_h(t_k))ds\right\|_{L^2(\Omega)}\\
\leq& C\tau\int_{t_{k-1}}^{t_k}\left \|\frac{U_h(s)-U_h(t_k)}{\tau}\right \|_{L^2(\Omega)}ds.
\end{aligned}$$ According to Theorem \[thmHolderU\], one has $$\sum_{k=1}^{m}\upsilon_{2,3,k,h}\leq C\tau\|W_0\|_{L^2(\Omega)}.$$ Thus, using the discrete Grönwall inequality and taking $m=n$, we obtain
$$\|U^n_h-U_h(t_n)\|_{L^2(\Omega)}\leq Ct_n^{\alpha-1}\tau\|W_0\|_{L^2(\Omega)}.$$
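The discrete fractional integral $U^n_h=\sum_{i=0}^{n-1}d^{-\alpha}_{i}W^{n-i}_h$ treated above is likewise a convolution quadrature, now generated by $\delta_\tau(\zeta)^{-\alpha}$. A scalar sketch (the same recurrence-based weights as before, assuming the backward Euler symbol $\delta_\tau(\zeta)=(1-\zeta)/\tau$) checks it against the closed form $\,_0I^{\alpha}_t 1=t^{\alpha}/\Gamma(1+\alpha)$:

```python
import math

def cq_weights(beta, tau, n):
    # coefficients of ((1 - zeta)/tau)**beta; beta = -alpha yields the
    # discrete fractional-integral weights d_i^{-alpha}
    w = [tau ** (-beta)]
    for i in range(1, n + 1):
        w.append(w[-1] * (i - 1 - beta) / i)
    return w

alpha, tau = 0.6, 1.0e-3
n = int(round(1.0 / tau))            # t_n = 1
d = cq_weights(-alpha, tau, n)
approx = sum(d[:n])                  # sum_{i=0}^{n-1} d_i * g(t_{n-i}) with g = 1
exact = 1.0 / math.gamma(1.0 + alpha)
print(abs(approx - exact))           # first-order accurate in tau
```

This mirrors the $O(\tau)$ rate in the lemma above, in the simplest scalar setting.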
Next, we consider the estimate of $\uppercase\expandafter{\romannumeral2}$ defined in .
If $W_0\in L^2(\Omega)$ and $\frac{1}{a^2(t)}\in C^2[0,T]$, then there holds $$\begin{aligned}
\sum_{k=1}^{m}\uppercase\expandafter{\romannumeral2}_{1,k}\leq C\tau \|W_0\|_{L^2(\Omega)},
\end{aligned}$$ where $\uppercase\expandafter{\romannumeral2}_{1,k}$ is defined in .
By the triangle inequality, we split it into two parts: $$\begin{aligned}
&\|F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\|\\
\leq& \|F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k}))\|\\
&+\|F^{m-k-1}_{\tau,m}(A_h(t_k)-A_h(t_{k-1}))\|\leq \varrho_{1,k}+\varrho_{2,k}.
\end{aligned}$$ The fact $|\frac{1-e^{-z\tau}}{\tau}|\leq C|z|$ and Lemma \[Lemseriesest\] show $$\begin{aligned}
\varrho_{1,k}\leq C\tau(t_m-t_k)\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{(m-k)\tau z}\frac{e^{-z\tau}-1}{\tau}A_h(t_m)\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|
\leq C(t_m-s)^{-\alpha}\tau.
\end{aligned}$$ Similarly $$\varrho_{2,k}\leq C\tau\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{(m-k-1)\tau z}A_h(t_m)\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\leq C(t_m-s)^{-\alpha}\tau.$$ Thus $$\sum_{k=1}^{m}\uppercase\expandafter{\romannumeral2}_{1,k}\leq C\tau\|W_0\|_{L^2(\Omega)}.$$
If $W_0\in \dot{H}^\epsilon(\Omega)$ and $\frac{1}{a^2(t)}\in C^2[0,T]$, then there holds $$\begin{aligned}
\sum_{k=1}^{m}\uppercase\expandafter{\romannumeral2}_{2,k}\leq C\tau \|W_0\|_{\dot{H}^{\epsilon}(\Omega)},
\end{aligned}$$ where $\uppercase\expandafter{\romannumeral2}_{2,k}$ is defined in .
By the triangle inequality, there holds $$\begin{aligned}
\uppercase\expandafter{\romannumeral2}_{2,k}\leq&C\left\|\bigg(\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial (t_m-s)}\left (F_h(t_m-s,t_m)(A_h(t_m)-A_h(s))\right )ds\right .\\
&\left .-\left (F^{m-k}_{\tau,m}(A_h(t_m)-A_h(t_k))-F^{m-k-1}_{\tau,m}(A_h(t_m)-A_h(t_{k-1}))\right )\bigg)U_h(t_k)\right\|_{L^2(\Omega)}\\
\leq&C\left\|\int_{t_{k-1}}^{t_k}(A_h(t_m)-A_h(s))\left( \frac{\partial}{\partial (t_m-s)}F_h(t_m-s,t_m)-\left (F^{m-k}_{\tau,m}-F^{m-k-1}_{\tau,m}\right)/\tau\right)dsU_h(t_k) \right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}(A_h(s)-A_h(t_{k-1}))\left (F^{m-k}_{\tau,m}-F^{m-k-1}_{\tau,m}\right)/\tau dsU_h(t_k) \right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}\frac{\partial}{\partial s}A_h(s)\left( F_h(t_m-s,t_m)-F^{m-k}_{\tau,m}\right)ds U_h(t_k)\right\|_{L^2(\Omega)}\\
&+C\left\|\int_{t_{k-1}}^{t_k}\left (\frac{\partial}{\partial s}A_h(s)-\frac{A_h(t_k)-A_h(t_{k-1})}{\tau}\right )F^{m-k}_{\tau,m}dsU_h(t_k) \right\|_{L^2(\Omega)}\leq \sum_{i=1}^{4}\ell_{i,k}.
\end{aligned}$$ From Lemma \[lemestEFh\], one has $$\begin{aligned}
&\left \| A_h(t_m)^{-\epsilon/2}\frac{\partial}{\partial (t_m-s)}F_h(t_m-s,t_m)-A_h(t_m)^{-\epsilon/2}\frac{F^{m-k}_{\tau,m}-F^{m-k-1}_{\tau,m}}{\tau}\right \|\\
\leq &C\left \| \int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}A_h(t_m)^{1-\epsilon/2}(z^{\alpha}+A_h(t_m))^{-1}dz-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}A_h(t_m)^{1-\epsilon/2}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
&+C\left \| A_h(t_m)^{-\epsilon/2}\left (\int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}\mathbf{I}dz-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}\mathbf{I}dz\right )\right \|\\
\leq&C\left \| \int_{\Gamma_{\theta,\kappa}\backslash\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}A_h(t_m)^{1-\epsilon/2}(z^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
&+C\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}(1-e^{z(s-t_k)})A_h(t_m)^{1-\epsilon/2}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
&+C\left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-s)}A_h(t_m)^{1-\epsilon/2}((z^{\alpha}+A_h(t_m))^{-1}-(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1})dz\right \|\\
\leq &C\tau(t_m-s)^{\alpha\epsilon/2-2},
\end{aligned}$$ which implies $$\begin{aligned}
\sum_{k=1}^{m}\ell_{1,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha\epsilon/2-1}\|A_h(t_m)^{\epsilon/2}U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Similarly, $$\begin{aligned}
\left \|A_h(t_m)^{-\epsilon/2}\frac{F^{m-k}_{\tau,m}-F^{m-k-1}_{\tau,m}}{\tau}\right \|&\leq \left \|\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k-1})}e^{-z\tau}A_h(t_m)^{-\epsilon/2}\delta_\tau(e^{-z\tau})^{\alpha}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
&\leq C(t_m-s)^{\alpha\epsilon/2-1}.
\end{aligned}$$ Therefore $\sum_{k=1}^{m}\ell_{2,k}$ can be bounded as $$\begin{aligned}
\sum_{k=1}^{m}\ell_{2,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha\epsilon/2-1}\|A_h(t_m)^{\epsilon/2}U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ On the other hand, one can get $$\begin{aligned}
&\left \| A_h(t_m)^{-\epsilon/2}F_h(t_m-s,t_m)-A_h(t_m)^{-\epsilon/2}F^{m-k}_{\tau,m}\right \|\\
\leq &C\left \| \int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}A_h(t_m)^{1-\epsilon/2}z^{-1}(z^{\alpha}+A_h(t_m))^{-1}dz\right .\\
&\left .-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}A_h(t_m)^{1-\epsilon/2}\delta_\tau(e^{-z\tau})^{-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|\\
&+C\left \|A_h(t_m)^{-\epsilon/2}\left ( \int_{\Gamma_{\theta,\kappa}}e^{z(t_m-s)}z^{-1}dz\right .\left .-\int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}\delta_\tau(e^{-z\tau})^{-1}dz\right )\right \|\\
\leq &C\tau(t_m-s)^{\alpha\epsilon/2-1},
\end{aligned}$$ which leads to $$\begin{aligned}
\sum_{k=1}^{m}\ell_{3,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha\epsilon/2-1}\|A_h(t_m)^{\epsilon/2}U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Next, using $$\begin{aligned}
&\left \|F^{m-k}_{\tau,m}\right \|
\leq C\left \| \int_{\Gamma^\tau_{\theta,\kappa}}e^{z(t_m-t_{k})}\delta_\tau(e^{-z\tau})^{\alpha-1}(\delta_\tau(e^{-z\tau})^{\alpha}+A_h(t_m))^{-1}dz\right \|
\leq C,
\end{aligned}$$ there holds $$\begin{aligned}
\sum_{k=1}^{m}\ell_{4,k}\leq& C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}\|U_h(t_k)\|_{\dot{H}^2(\Omega)}ds.
\end{aligned}$$ Thus, by Theorem \[thmregofUh\], one has $$\begin{aligned}
\sum_{k=1}^{m}\uppercase\expandafter{\romannumeral2}_{2,k}\leq C\tau\sum_{k=1}^{m} \int_{t_{k-1}}^{t_k}(t_m-s)^{\alpha\epsilon/2-1}\|A_h(t_m)^{\epsilon/2}U_h(t_k)\|_{\dot{H}^2(\Omega)}ds\leq C\tau \|W_0\|_{\dot{H}^{\epsilon}(\Omega)}.
\end{aligned}$$
If $W_0\in \dot{H}^\epsilon(\Omega)$ and $\frac{1}{a^2(t)}\in C^2[0,T]$, then there holds $$\begin{aligned}
\sum_{k=1}^{m}\uppercase\expandafter{\romannumeral2}_{3,k}\leq C\tau \|W_0\|_{\dot{H}^{\epsilon}(\Omega)},
\end{aligned}$$ where $\uppercase\expandafter{\romannumeral2}_{3,k}$ is defined in .
According to Theorem \[thmHolderU\], $\uppercase\expandafter{\romannumeral2}_{3,k}$ can be bounded as $$\begin{aligned}
\uppercase\expandafter{\romannumeral2}_{3,k}\leq& \left\|\int_{t_{k-1}}^{t_k}(A_h(t_m)-A_h(s))\frac{\partial}{\partial (t_m-s)}\left (F_h(t_m-s,t_m)\right )(U_h(s)-U_h(t_k))ds\right \|_{L^2(\Omega)}\\
&+\left\|\int_{t_{k-1}}^{t_k}F_h(t_m-s,t_m)\frac{\partial}{\partial (t_m-s)}\left((A_h(t_m)-A_h(s))\right)(U_h(s)-U_h(t_k))ds\right \|_{L^2(\Omega)}\\
\leq& C\tau \int_{t_{k-1}}^{t_k}\left\|A_h(t_m)\frac{U_h(s)-U_h(t_k)}{\tau}\right \|_{L^2(\Omega)}ds\\
\leq& C\tau\int_{t_{k-1}}^{t_k} s^{\alpha\epsilon/2-1}\|W_0\|_{\dot{H}^{\epsilon}(\Omega)}ds.
\end{aligned}$$ Summing $k$ from $1$ to $m$ and taking $m=n$ leads to the desired estimate.
Thus the error estimate of the fully discrete scheme when $f=0$ is obtained.
\[thmfullhom\] Let $W_h$ and $W^n_h$ be the solutions of Eqs. and respectively. If $\frac{1}{a^2(t)}\in C^2[0,T]$, $W_0\in \dot{H}^\epsilon(\Omega)$, and $f=0$, then there holds $$\|W_h(t_n)-W^n_h\|_{L^2(\Omega)}\leq C\tau t_n^{\alpha\epsilon/2-1}\|W_0\|_{\dot{H}^\epsilon(\Omega)}.$$
Numerical experiments
=====================
In this section, we perform three numerical experiments, whose exact solutions are unknown, to verify the effectiveness of the designed schemes. The spatial errors are measured by $$\begin{aligned}
E_{h}=\|W^{n}_{h}-W^{n}_{h/2}\|_{L^2(\Omega)},
\end{aligned}$$ where $W^n_{h}$ denotes the numerical solution of $W$ at time $t_n$ with mesh size $h$; similarly, we measure the temporal errors by $$\begin{aligned}
E_{\tau}=\|W^n_{\tau}-W^n_{\tau/2}\|_{L^2(\Omega)},
\end{aligned}$$ where $W^n_{\tau}$ denotes the numerical solution of $W$ at the fixed time $t_n$ with step size $\tau$. The corresponding convergence rates can be calculated by $${\rm Rate}=\frac{\ln(E_{h}/E_{h/2})}{\ln(2)} ~{\rm and }~ {\rm Rate}=\frac{\ln(E_{\tau}/E_{\tau/2})}{\ln(2)}.$$ For convenience, we take $\Omega=(0,1)$.
Here we consider the temporal convergence rates for the inhomogeneous problem . Let $$\frac{1}{a^2(t)}=t^{1.01},\quad
f(x,t)=t^{0.1}\chi_{[0,1/2]},$$ and $T=1$, where $\chi_{[a,b]}$ denotes the characteristic function on $[a,b]$. To investigate the convergence in time and eliminate the influence of the spatial discretization, we set $h=1/128$; Table \[tab:timeu00\] shows the errors and convergence rates for $\alpha=0.3$ and $0.7$, which validate Theorem \[thmfullinhomo\].
$\alpha\backslash \tau$ 1/50 1/100 1/200 1/400 1/800
------------------------- ----------- ----------- ----------- ----------- -----------
0.3 7.038E-04 3.269E-04 1.506E-04 6.899E-05 3.150E-05
Rate 1.1063 1.1186 1.1259 1.1310
0.7 2.661E-04 1.225E-04 5.646E-05 2.601E-05 1.197E-05
Rate 1.1186 1.1180 1.1183 1.1192
: Temporal errors and convergence rates for inhomogeneous problem
\[tab:timeu00\]
Here we validate temporal convergence rates for homogeneous problem . To satisfy the condition provided in Theorem \[thmfullhom\], we take $$\frac{1}{a^2(t)}=t^{2.01}.$$ Set $T=1$ and $$W_0(x)=\chi_{(1/2,1]}.$$ We take small spatial mesh size $h=1/128$ so that the spatial discretization error is relatively negligible. The corresponding results are shown in Table \[tab:timef00\], which agree with the predictions of Theorem \[thmfullhom\].
$\alpha\backslash \tau$ 1/50 1/100 1/200 1/400 1/800
------------------------- ----------- ----------- ----------- ----------- -----------
0.4 8.319E-03 4.193E-03 1.997E-03 9.421E-04 4.534E-04
Rate 0.9885 1.0705 1.0835 1.0552
0.6 3.802E-03 1.873E-03 9.194E-04 4.542E-04 2.256E-04
Rate 1.0217 1.0262 1.0172 1.0095
: Temporal errors and convergence rates for homogeneous problem
\[tab:timef00\]
Finally, we take $$\frac{1}{a^2(t)}=10t^{1.01},\quad W_0(x)=\chi_{(1/2,1]},\quad f(x,t)=t^{0.1}\chi_{[0,1/2]}$$ to verify the spatial convergence rates. Here we choose $T=2$ and $\tau=1/1000$. Table \[tab:spac\] shows the errors and convergence rates, which agree with the predictions of Theorem \[thmsemier\].
$\alpha\backslash h$ 1/32 1/64 1/128 1/256 1/512
---------------------- ----------- ----------- ----------- ----------- -----------
0.2 9.828E-04 2.483E-04 6.224E-05 1.557E-05 3.893E-06
Rate 1.9848 1.9962 1.9990 1.9998
0.7 1.196E-04 3.341E-05 8.675E-06 2.192E-06 5.494E-07
Rate 1.8395 1.9453 1.9849 1.9961
: Spatial errors and convergence rates
\[tab:spac\]
Conclusion
==========
We have considered a model describing anomalous diffusion in expanding media, which features a time-dependent variable coefficient. The finite element method and the backward Euler convolution quadrature are used to approximate the Laplace operator and the Riemann-Liouville fractional derivative, respectively. We first derive an a priori estimate of the solution, and then present error estimates for the space semi-discrete and fully discrete schemes. Extensive numerical experiments validate the effectiveness of the numerical schemes.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the National Natural Science Foundation of China under grant no. 11671182, and the Fundamental Research Funds for the Central Universities under grant no. lzujbky-2018-ot03.
---
abstract: 'This paper presents a novel estimation approach for cumulative link models, based on median bias reduction as developed in @kenne2017. The median bias reduced estimator is obtained as the solution of an estimating equation based on an adjustment of the score. It achieves higher-order median centering of maximum likelihood estimates without requiring their finiteness. Moreover, the estimator is equivariant under componentwise monotone reparameterizations, and the method is effective in preventing boundary estimates. We evaluate the properties of the median bias reduced estimator through simulation studies and compare it with its two main competitors, the maximum likelihood and the mean bias reduced [@firth1993] estimators. Finally, we show an application where the proposed estimator is able to solve the boundary estimates problem.'
author:
- |
V. GIOIA$\,^1$, E. C. KENNE PAGUI$\,^2$ and A. SALVAN$\,^2$\
$\,^1\,$University of Udine, Department of Economics and Statistics\
$\,^2\,$University of Padova, Department of Statistical Sciences\
[email protected], [email protected], [email protected]
bibliography:
- 'MBRCLM.bib'
title: |
Median bias reduction in\
cumulative link models
---
*Some key words:* Adjusted score; Boundary estimate; Likelihood; Median unbiased; Ordinal data; Ordinal probability effect measure.
Introduction
============
Cumulative link models were proposed by @mccullagh1980, see also @agresti2010, and are the most popular tool to handle ordinal outcomes, which are pervasive in many disciplines. One of the reasons for their popularity lies in the use of a single regression coefficient for all response levels, making the effect simple to summarize. For these models, maximum likelihood (ML) is the most common estimation method. Nevertheless, ML estimation presents some problems, and several proposals have been developed to solve them. One of the problems concerns the asymptotic approximation for the distribution of the ML estimator, which can be highly inaccurate with moderate sample information or sparse data. Another problem with ML estimation lies in boundary estimates, which can arise with positive probability in models for ordinal data and can cause several difficulties in the fitting process and inferential procedures.
The literature is rich in methods related to bias reduction of the ML estimator. Such methods can be distinguished [@kosmidis2014a] into explicit methods, that focus on correcting the estimate, and implicit methods, based on correction of the estimating function. The main disadvantage of the former lies in the need for finiteness of ML estimates which is overcome by the latter, one of the reasons for their spread in applied statistics.
The estimation approaches based on an adjustment of the score yield, by introducing an asymptotically negligible bias in the score function, the mean bias reduced (mean BR) estimator, proposed by @firth1993 and developed in @kosmidis2009 [@kosmidis2010], and the median bias reduced (median BR) estimator, proposed by @kenne2017. A unified presentation is given by @kosmidis2020 for generalized linear models and by @kenne2019 for general models. Such approaches do not require the finiteness of the ML estimates. In addition, they are effective in preventing boundary estimates. The main difference between the two methods lies in the use of the mean and the median, respectively, as a centering index for the estimator. Mean BR achieves a first-order bias correction. The lack of equivariance under nonlinear reparameterizations is a disadvantage of this approach, which is, however, offset by practical advantages in applications. Median BR, developed in @kenne2017 and in a subsequent paper [@kenne2019], aims at median centering of the estimator, which is componentwise third-order median unbiased in the continuous case and equivariant under componentwise monotone reparameterizations.
Mean BR for cumulative link models is developed in @kosmidis2014b, where finiteness and optimal frequentist properties are illustrated. Here we obtain the quantities needed to compute the median BR in cumulative link models. We use the simplified algebraic form of the adjustment term developed in @kenne2019. We show, through extensive simulation studies, that the proposed method succeeds in achieving componentwise median centering, outperforms ML and is competitive with mean BR. Considering an ordinal probability effect measure, proposed by @agrestikateri2017, we also analyze the behaviour under componentwise monotone reparameterizations, showing the good performance achieved by the median BR estimator. Finally, we present an application where the median BR approach, like mean BR, is seen to be able to prevent boundary estimates.
Cumulative link models
======================
Let $Y_i$ be the ordinal outcome, with $c$ categories, for subject $i$, $i=1,\ldots,n$. Let $p_{ij}=\text{Pr}(Y_i = j)$ be the probability to observe category $j$, $j=1, \ldots, c-1$, for subject $i$, and $\text{Pr}(Y_i \leq j)=\sum_{k=1}^{j}{p_{ik}}$ the cumulative probability. With $\boldsymbol{x}_i$, $i=1,\ldots,n$, a $p$-dimensional row vector of covariates, the cumulative link model [@mccullagh1980] links the cumulative probabilities to a linear predictor, $\eta_{ij}=\alpha_j+\boldsymbol{x}_i\beta$, $j=1, \ldots, c-1$, via the relationship $$\label{clm}
g\{\text{Pr}(Y_i\leq j|\boldsymbol{x}_{i})\}= \eta_{ij},$$ where $g(\cdot)$ is a given link function and $\beta^\top=(\beta_1, \ldots, \beta_p)$ is the regression parameter vector. This class of models assumes that the effects of $\boldsymbol{x}_{i}$, expressed through $\beta$, are the same for each $j=1, \ldots, c-1$. The intercept parameters $\alpha_j$, $j=1,\ldots, c-1$, satisfy $-\infty=\alpha_0\leq\alpha_1 \leq \ldots \leq \alpha_{c-1}\leq \alpha_c=+\infty$, since $\text{Pr}(Y_i \leq j)$ is increasing in $j$ for each fixed $\boldsymbol{x}_{i}$. Model (\[clm\]) has an interpretation in terms of an underlying latent variable [see e.g. @agresti2010 Section 3.3.2]: the ordinal outcome $Y_i$ can be seen as the discretization of a latent continuous random variable $Y^*_i$ satisfying a regression model $Y^*_i=-\boldsymbol{x}_{i} \beta +\varepsilon_i$, $i=1, \ldots, n$. The random variables $\varepsilon_i$ are independent and identically distributed with $E(\varepsilon_i)=0$ and cumulative distribution function $G(\cdot)$. By assigning threshold values $\alpha_j$, $j=1,\ldots,c$, such that we observe $Y_i=j$ if $\alpha_{j-1} \leq Y^*_i < \alpha_j$, with $-\infty=\alpha_0\leq\alpha_1 \leq \ldots \leq \alpha_{c-1}\leq \alpha_c=+\infty$, we obtain the equivalent formulation of model (\[clm\]): $$\text{Pr}(Y_i\leq j|\boldsymbol{x}_{i})=\text{Pr}(Y^*_i\leq \alpha_j|\boldsymbol{x}_{i})=\text{Pr}(\varepsilon_i<\alpha_j+\boldsymbol{x}_{i}\beta)=G(\eta_{ij}),$$ with $j=1, \ldots, c-1$. Common choices for $G(\cdot)$ are the logistic, standard normal or extreme value distribution.
The cumulative logit model, also known as proportional odds model [@mccullagh1980 Section 2], is obtained assuming $G(\eta_{ij})=\exp(\eta_{ij})/\{1+\exp(\eta_{ij})\}$, the cumulative probit model is recovered with $G(\eta_{ij})=\Phi(\eta_{ij})$, and the cumulative complementary log-log link model, also known as proportional hazards model [@mccullagh1980 Section 3], setting $G(\eta_{ij})=1-\exp\{-\exp(\eta_{ij})\}$.
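As an illustrative sketch (not from the paper), the three link choices and the resulting cumulative probabilities of model (\[clm\]) can be coded directly; the values of `alpha`, `x` and `beta` below are arbitrary placeholders.

```python
import math

def G_logit(eta):
    """Logistic CDF: gives the proportional odds model."""
    return math.exp(eta) / (1.0 + math.exp(eta))

def G_probit(eta):
    """Standard normal CDF, written with math.erf."""
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

def G_cloglog(eta):
    """Extreme value CDF: gives the proportional hazards model."""
    return 1.0 - math.exp(-math.exp(eta))

def cumulative_probs(alpha, x, beta, G):
    """Pr(Y <= j | x) = G(alpha_j + x . beta), j = 1, ..., c-1."""
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    return [G(a + eta) for a in alpha]

# Example with c = 3 categories (alpha, x and beta are illustrative values);
# the cumulative probabilities are nondecreasing in j since alpha_1 <= alpha_2.
p = cumulative_probs([-1.0, 2.0], [0.5, 1.0], [1.0, -1.0], G_logit)
```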
The popularity of model (\[clm\]) is linked to its parsimony since it uses a single parameter for each predictor, in addition to the latent variable interpretation. The cumulative link model can be inadequate because of misspecification of the linear predictor or due to departure from the assumption that the covariate effect is the same for each $j$, $j=1,\ldots, c-1$. Several models have been proposed that relax the latter assumption [for a detailed description see @fullerton2016]. Instances are the partial cumulative link model, which first appeared in the literature as partial proportional odds model [@peterson1990], or the nonparallel cumulative link model. Both include the cumulative link model as a special case. However, despite their flexibility, they may present some difficulties either from a computational or from the interpretation point of view, especially with data sets with several predictors.
Maximum likelihood, bias reduction and boundary estimates
---------------------------------------------------------
As the sample size increases, the probability of unique ML estimates tends to one [@mccullagh1980 Section 6.3]. However, the ML estimator has a positive probability of being on the boundary of the parameter space. In cumulative link models (\[clm\]), boundary estimates are estimates of the regression parameters with infinite components, and/or consecutive intercept estimates having the same value. @pratt1981 showed that zero counts for a middle category $j$, $j=2,\ldots,c-1$, produce consecutive equal intercept estimates, that is $\hat \alpha_{j-1}=\hat \alpha_j$, and if the first or the last category have zero observed counts, then the estimates for $\alpha_1$ or $\alpha_{c-1}$ are infinite. @agresti2010 [Section 3.4.5] describes some settings where infinite ML estimates occur for the regression parameters.
@kosmidis2014b demonstrates that mean BR is a generally effective strategy to prevent boundary estimates. The same advantage will be seen to hold for median BR in Sections 4 and 5. With particular regard to boundary estimates of the intercept parameters, @kosmidis2014b [Section 8.3, Remark 1] showed that the ML estimate of the regression parameters is invariant with respect to grouping of unobserved categories with the adjacent ones. So, likelihood inference on the regression parameters is possible if one or more categories are unobserved. The same appears to hold for mean BR and will be seen to hold in all examples considered for median BR. The only difference with respect to ML estimates is that if the first or the last category has zero counts, then the mean and median BR estimates are typically finite.
An ordinal probability effect measure
-------------------------------------
A useful monotone transformation of regression parameters related to binary covariates was proposed by @agrestikateri2017 to overcome the difficulty for practitioners to interpret nonlinear measures, such as probits and odds ratios. This reparameterization allows an interpretation in terms of “ordinal superiority”, that is the probability that an observation from one group falls above an independent observation from the other group, adjusting for other covariates. For a vector of covariates $\boldsymbol{x}=(x_1,\ldots,x_p)$, let $x_r$ be a binary variable which is a group indicator for an observation. Let $Y_{i1}$, $Y_{i2}$ be the independent outcomes from the groups $x_{ir}=0$ and $x_{ir}=1$, respectively. For ordinal responses, the ordinal superiority measure, $\gamma \in [0,1]$, is defined as $$\gamma=\text{Pr}(Y_{i1}>Y_{i2}|\boldsymbol{x}_i \setminus \{x_{ir}\})+\frac{1}{2}\text{Pr}(Y_{i1}=Y_{i2}|\boldsymbol{x}_i \setminus \{x_{ir}\}).$$ Based on model (\[clm\]), @agrestikateri2017 show that the exact or approximate expressions of $\gamma$ for the parameter related to the binary covariate, $\beta_r$, are $\gamma(\beta_r)\approx \exp(-\beta_r/\sqrt 2)/\{1+\exp(-\beta_r/\sqrt 2)\}$, considering the logit link function, $\gamma(\beta_r)=\Phi(-\beta_r/\sqrt 2)$ for the probit link, and $\gamma(\beta_r)= \exp(-\beta_r)/\{1+\exp(-\beta_r)\}$ for the complementary log-log link.
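These three expressions translate directly into code. The following is a small sketch; `Phi` is the standard normal CDF written with `math.erf`.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gamma_logit(beta_r):
    """Approximate ordinal superiority under the logit link."""
    t = math.exp(-beta_r / math.sqrt(2.0))
    return t / (1.0 + t)

def gamma_probit(beta_r):
    """Exact ordinal superiority under the probit link."""
    return Phi(-beta_r / math.sqrt(2.0))

def gamma_cloglog(beta_r):
    """Ordinal superiority under the complementary log-log link."""
    t = math.exp(-beta_r)
    return t / (1.0 + t)
```

At $\beta_r=0$ neither group is stochastically larger, so all three expressions give $\gamma=1/2$, and $\gamma$ decreases as $\beta_r$ grows.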
Median bias reduction
=====================
For a regular parametric model with $p$-dimensional parameter $\theta=(\theta_1,\ldots,\theta_p)$, let $\ell(\theta)$ be the log-likelihood based on a sample of size $n$ and $U_r=U_r(\theta)=\partial \ell(\theta) / \partial \theta_r$, $r=1,\ldots,p$, the $r$-th component of the score $U(\theta)$. Moreover, let $j(\theta)=-\partial^2 \ell(\theta)/\partial \theta\partial \theta^\top$ be the observed information matrix and $i(\theta)=E_\theta\{j(\theta)\}$ the expected information matrix, which we assume to be of order $O(n)$. We denote with $[i(\theta)^{-1}]_r$ the $r$-th column of $i(\theta)^{-1}$ and with $i^{rr}(\theta)$ the $(r,r)$ element of $i(\theta)^{-1}$.
The median BR estimator, $\tilde\theta$, is obtained as solution of the estimating equation $\tilde U(\theta)=0$, where $$\label{adjscore}
\tilde U(\theta)=U(\theta)+\tilde A(\theta),$$ with $$\tilde A(\theta)=A^*(\theta)-i(\theta)F(\theta).$$ The vector $A^*(\theta)$ has components $$A^*_r= \frac{1}{2}\tr\{i(\theta)^{-1}(P_r+Q_r)\},$$ with $P_r=E_\theta\{U(\theta)U(\theta)^\top U_r\}$ and $Q_r=-E_\theta\{j(\theta) U_r\}$, $r=1,\ldots, p$. The vector $F(\theta)$ has components $F_r=[i(\theta)^{-1}]_r^\top \tilde F_r$, where $\tilde F_r$ has elements $$\tilde F_{r,t}= \tr[h_r\{(1/3)P_t+(1/2)Q_t\}],\hspace{0.9cm} r,t=1,\ldots,p,$$ with the matrix $h_r$ given by $$h_r=\frac{[i(\theta)^{-1}]_r[i(\theta)^{-1}]^\top _r}{i^{rr}(\theta)}, \hspace{0.9cm} r=1,\ldots,p.$$ We refer to @kenne2019 for further details about the computation of $\tilde A(\theta)$ and for the relation with the mean BR estimator [@firth1993], $\hat \theta^*$. The latter is seen to be based on an adjusted score of the form (\[adjscore\]) with $\tilde A(\theta)=A^*(\theta)$.
@kenne2017 show that in the continuous case, each component of $\tilde \theta$, $\tilde\theta_r$, $r=1,\ldots,p$, is median unbiased with an error of order $O(n^{-3/2})$, i.e. $\text{Pr}_{\theta}(\tilde\theta_r\leq\theta_r)=\frac{1}{2}+O(n^{-3/2})$, compared to the ML estimator, which is median unbiased with an error of order $O(n^{-1/2})$. Moreover, the asymptotic distribution of $\tilde\theta$ is the same as that of the ML estimator, $\hat\theta$, and of the mean BR estimator, $\hat \theta^*$, that is $\mathcal{N}_p(\theta, i(\theta)^{-1})$.
The equation $\tilde U(\theta)=0$ is usually solved numerically. Moreover, a finite solution is not always guaranteed. The numerical solutions of $\tilde U(\theta)=0$ can be obtained by a Fisher scoring-type algorithm, whose $(k+1)$-th iteration is $$\label{fisherscoring}
\theta^{(k+1)}=\theta^{(k)}+i(\theta^{(k)})^{-1}U(\theta^{(k)})+i(\theta^{(k)})^{-1}\tilde A(\theta^{(k)}),$$ which differs from the analogue for the ML estimates only by the addition of the term $i(\theta^{(k)})^{-1}\tilde A(\theta^{(k)})$. We adopt, as a stopping criterion for the algorithm, the condition $|\tilde U_r(\theta^{(k)})|<q$, for every $r=1,\ldots,p$, and we set, as default, $q=10^{-10}$.
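A minimal scalar sketch of iteration (\[fisherscoring\]) follows; the callables `score`, `info_inv` and `adjustment` are hypothetical placeholders for $U(\theta)$, $i(\theta)^{-1}$ and $\tilde A(\theta)$, which a full cumulative link model implementation would supply.

```python
def fisher_scoring(theta0, score, info_inv, adjustment, q=1e-10, max_iter=200):
    """Scalar sketch of theta_{k+1} = theta_k + i(theta_k)^{-1} U(theta_k)
    + i(theta_k)^{-1} A~(theta_k), stopped when |U(theta_k) + A~(theta_k)| < q."""
    theta = theta0
    for _ in range(max_iter):
        u_adj = score(theta) + adjustment(theta)   # adjusted score U~(theta)
        if abs(u_adj) < q:
            break
        theta += info_inv(theta) * u_adj
    return theta

# Toy check with A~ = 0, so the iteration reduces to plain ML Fisher scoring:
# normal mean with unit variance, where U(m) = sum(y) - n*m and i(m) = n.
y = [1.0, 1.5, 0.5, 1.0]
n = len(y)
theta_hat = fisher_scoring(0.0,
                           score=lambda m: sum(y) - n * m,
                           info_inv=lambda m: 1.0 / n,
                           adjustment=lambda m: 0.0)
# theta_hat is the sample mean, 1.0
```

In this toy case a single step already lands on the solution, since the score is linear in the parameter.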
The algorithm needs a starting value, $\theta^{(0)}$, whose choice is not trivial and can result in nonconvergence of (\[fisherscoring\]). When available, the ML estimate, $\hat \theta$, or the mean BR estimate, $\hat \theta^*$, are suitable starting values, which are also able to speed up the convergence. We set the starting values following a strategy similar to that used in @christensen2019 for cumulative link models (\[clm\]). The starting value for the regression coefficients, $\beta$, is set to zero. The intercept parameters, $\alpha_j$, $j=1, \ldots, c-1$, are initialized to $\alpha^{(0)}_j=G^{-1}(j/c)$, where $G(\cdot)$ is the cumulative distribution function of the error terms, according to the latent variable interpretation discussed in Section 2.
In order to recognize boundary estimates, we adapt the diagnostics in @lesaffre1989, identifying infinite estimates if their absolute value and the corresponding standard error are greater than some thresholds. Categories with zero observed counts are grouped, except when they occur at the extreme categories.
Simulation study
================
We conducted a simulation study to assess the performance of the median BR estimator, $\tilde \theta$, in cumulative link models (\[clm\]). We compare it with the ML, $\hat \theta$, and mean BR, $\hat \theta^*$, estimators in terms of empirical probability of underestimation (PU%), estimated relative (mean) bias (RB%), and empirical coverage of the 95% Wald-type confidence interval (WALD%).
We consider sample sizes, $n=50,100,200$, and different link functions $g(\cdot)$, namely the logit, probit and complementary log-log (cloglog) link functions. We generate the covariate $x_1$ from a standard Normal, $x_2$ and $x_3$ from Bernoulli distributions with probabilities 0.5 and 0.8 respectively, and $x_4$ from a Poisson with mean 2.5. Assuming that the response has three categories, we fit the model $$g\{\text{Pr}(Y_i\leq j|\boldsymbol{x}_i)\}=\alpha_j +x_{i1}\beta_1+x_{i2}\beta_2+x_{i3}\beta_3+x_{i4}\beta_4, \hspace{0.9cm} j=1,2;\, i=1,\ldots,n,$$ considering 10,000 replications, with covariates fixed at the observed value and true parameter $\theta_0$. Setting $\theta_0=(-1,2,1,-1,1,-1)$ for the logit link function, we use the approximate relations between the coefficients with different link functions leading to $\theta_0=(-0.6,1.2,0.6,-0.6,0.6,-0.6)$ for the probit link function, and $\theta_0=(-1.1,1,0.7,-0.7,0.7,-0.7)$ for the complementary log-log link function.
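The data-generating step of this simulation can be sketched through the latent-variable formulation of Section 2. This is a minimal illustration: the intercepts and coefficients are the logit-link values $\theta_0=(-1,2,1,-1,1,-1)$ from the text, while the fixed covariate vector is an arbitrary assumption.

```python
import math
import random

def sample_ordinal(alpha, x, beta, rng):
    """Draw Y from the cumulative logit model via its latent-variable form:
    Y* = -x.beta + eps, eps ~ standard logistic, and Y = j iff alpha_{j-1} <= Y* < alpha_j."""
    u = rng.random()
    while u <= 0.0:                      # guard against a (measure-zero) draw of exactly 0
        u = rng.random()
    eps = math.log(u / (1.0 - u))        # inverse logistic CDF
    y_star = -sum(xi * bi for xi, bi in zip(x, beta)) + eps
    thresholds = [-math.inf] + list(alpha) + [math.inf]
    for j in range(1, len(thresholds)):
        if thresholds[j - 1] <= y_star < thresholds[j]:
            return j

rng = random.Random(2021)
alpha, beta = [-1.0, 2.0], [1.0, -1.0, 1.0, -1.0]   # theta_0 for the logit link
x = [0.3, 1.0, 1.0, 2.0]                            # one fixed covariate vector (assumed)
ys = [sample_ordinal(alpha, x, beta, rng) for _ in range(1000)]
# each draw lies in one of the c = 3 categories
```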
Table \[tab1\] contains the numerical results for all link functions considered. Boundary estimates occurred using ML with percentage frequencies 2.82%, 2.75% and 2.44%, with $n=50$, and 0.08%, 0.1% and 0.04%, with $n=100$, for the logit, probit and complementary log-log link functions, respectively. Instead, mean and median BR estimates are always finite. The new method is remarkably accurate in achieving median centering, shows a lower estimated relative bias than ML, comparable with that of the mean BR estimator, and attains good empirical coverage of the 95% Wald-type confidence intervals. The differences between the three estimators are appreciable in lower sample size settings and become much less pronounced as the sample size increases.
------ ------------------ ------- ------- ------- ------- ------- ------- ------- ------- -------
                                  $n=50$                  $n=100$                 $n=200$
Link   $\beta$            PU%     RB%     WALD%   PU%     RB%     WALD%   PU%     RB%     WALD%
$\hat \beta_1$ 40.94 14.50 94.97 43.46 6.30 94.77 45.83 2.80 94.75
$\hat \beta_2$ 55.34 14.90 94.76 54.27 6.60 94.93 52.06 2.50 94.88
$\hat \beta_3$ 44.63 13.50 96.48 46.91 9.10 95.32 47.39 4.60 94.97
$\hat \beta_4$ 62.99 16.50 95.19 59.19 7.00 94.92 56.22 3.20 95.36
$\hat \beta^*_1$ 54.14 -0.50 95.94 51.99 -0.20 95.34 51.64 -0.30 95.23
$\hat \beta^*_2$ 48.38 0.90 96.35 49.51 0.60 95.77 48.60 -0.30 95.45
$\hat \beta^*_3$ 53.01 -0.30 96.96 52.64 -0.50 96.06 51.27 0.00 95.52
$\hat \beta^*_4$ 45.71 0.40 94.96 47.47 0.00 95.11 47.89 -0.10 95.35
$\tilde \beta_1$ 50.83 2.90 95.92 50.05 1.20 95.47 50.01 0.40 95.25
$\tilde \beta_2$ 50.12 4.20 95.89 50.67 2.10 95.64 49.62 0.40 95.34
$\tilde \beta_3$ 50.12 8.70 97.03 50.60 2.90 95.97 49.99 1.50 95.39
$\tilde \beta_4$ 50.22 4.30 95.54 50.34 1.70 95.25 50.07 0.70 95.51
$\hat \beta_1$ 40.31 14.50 94.12 42.82 6.17 94.21 45.23 2.83 94.41
$\hat \beta_2$ 55.40 14.67 94.26 53.65 6.33 94.62 52.44 2.67 94.61
$\hat \beta_3$ 45.35 12.67 96.35 46.58 8.50 95.02 47.63 4.17 94.82
$\hat \beta_4$ 63.26 15.83 94.16 59.23 6.67 94.56 56.74 3.17 95.20
$\hat \beta^*_1$ 53.79 -0.83 95.56 52.18 -0.33 95.15 51.66 -0.17 94.99
$\hat \beta^*_2$ 48.67 0.67 96.06 49.30 0.33 95.65 48.69 -0.17 95.06
$\hat \beta^*_3$ 52.93 -1.33 96.79 52.18 -0.67 95.82 51.58 -0.33 95.45
$\hat \beta^*_4$ 44.93 -0.33 94.87 46.40 -0.17 95.18 47.80 0.00 95.17
$\tilde \beta_1$ 50.81 2.33 95.54 50.08 1.00 95.01 50.23 0.50 94.89
$\tilde \beta_2$ 50.46 3.50 95.71 50.23 1.50 95.49 49.37 0.33 94.99
$\tilde \beta_3$ 50.24 6.00 96.89 50.37 2.33 95.63 50.42 1.17 95.23
$\tilde \beta_4$ 49.67 3.33 95.36 49.35 1.33 95.35 49.90 0.67 95.36
$\hat \beta_1$ 39.59 15.29 94.07 42.58 7.14 94.47 44.69 3.29 94.89
$\hat \beta_2$ 55.42 13.86 94.25 53.82 5.86 94.60 52.85 2.86 94.79
$\hat \beta_3$ 46.72 15.57 95.46 46.31 11.43 95.57 47.27 5.86 95.33
$\hat \beta_4$ 62.53 16.00 94.23 59.16 7.14 94.87 56.04 3.29 95.11
$\hat \beta^*_1$ 55.26 -1.14 95.36 53.07 -0.29 94.89 52.19 -0.29 95.04
$\hat \beta^*_2$ 48.95 0.57 96.09 49.17 0.00 95.53 49.46 0.00 95.21
$\hat \beta^*_3$ 54.39 -0.86 95.83 52.99 -0.43 95.86 52.02 0.14 95.73
$\hat \beta^*_4$ 44.90 0.29 94.73 47.13 0.14 94.94 47.32 0.00 95.37
$\tilde \beta_1$ 51.31 2.57 95.40 50.33 1.43 95.01 50.28 0.71 95.07
$\tilde \beta_2$ 50.55 3.43 95.72 50.20 1.29 95.33 50.25 0.71 95.12
$\tilde \beta_3$ 50.77 12.14 96.04 50.16 4.71 95.86 50.10 2.57 95.69
$\tilde \beta_4$ 49.95 4.14 95.29 50.73 2.00 95.17 49.52 0.86 95.50
------ ------------------ ------- ------- ------- ------- ------- ------- ------- ------- -------
: Estimation of regression parameters $\beta=(\beta_1, \beta_2, \beta_3, \beta_4)$. Simulation results for ML, $\hat \beta$, mean BR, $\hat \beta^*$, and median BR, $\tilde \beta$, estimators; the three row blocks correspond to the logit, probit and complementary log-log link functions, respectively. For ML, RB% and WALD% are conditional upon finiteness of the estimates[]{data-label="tab1"}
Table \[tab2\] shows the estimated relative bias under monotone reparameterizations of the parameters related to the binary covariates, considering the ordinal probability effect measure presented in Section 2.2. In the new parameterization, it appears that the median BR estimator has the best performance in terms of estimated relative bias, if compared with ML and mean BR, which is not equivariant under this type of reparameterization.
link $n$ $\gamma(\hat \beta_2)$ $\gamma(\hat \beta^*_2)$ $\gamma(\tilde \beta_2)$ $\gamma(\hat \beta_3)$ $\gamma(\hat \beta^*_3)$ $\gamma(\tilde \beta_3)$
------ ------- -- ------------------------ -------------------------- -------------------------- -- ------------------------ -------------------------- --------------------------
$50$ 1.58 -1.05 -0.42 -1.30 4.15 1.21
$100$ 0.79 -0.49 -0.18 -1.70 2.27 0.88
$200$ 0.24 -0.39 -0.22 -1.00 1.03 0.33
$50$ 1.99 -0.74 -0.18 -2.23 -3.43 0.80
$100$ 0.93 -0.36 -0.09 -2.09 1.73 0.48
$200$ 0.38 -0.26 -0.14 -1.10 0.80 0.21
$50$ 1.39 -1.11 -0.55 -1.18 5.18 1.30
$100$ 0.63 -0.61 -0.33 -2.11 2.59 0.54
$200$ 0.33 -0.30 -0.16 -1.36 1.12 0.06
: Estimated relative bias (RB%) for $\gamma(\beta_2)$ and $\gamma(\beta_3)$; the three row blocks correspond to the logit, probit and complementary log-log link functions, respectively. For ML, RB% is conditional upon finiteness of the estimates []{data-label="tab2"}
Application
===========
We consider the data analysed in @randall1989, related to a factorial experiment for investigating the factors that affect the bitterness of wine. There are two factors, temperature at the time of crushing the grapes, $x_1$, and contact between juice and skin, $x_2$. Each factor has two levels, “cold” and “warm” for temperature and “yes” and “no” for contact. For each of the four treatment conditions, two bottles were assessed by a panel of nine judges, giving $n=72$ observations. As in @christensen2019 [Section 4.8], we consider the outcomes obtained by combining the three central categories and we fit the model $$\operatorname{logit}\{\text{Pr}(Y_i \leq j|\boldsymbol {x}_i)\}=\alpha_j+x_{i1}\beta_1+x_{i2}\beta_2, \hspace{0.9cm} j=1,2;\, i=1,\ldots,72.$$ Table \[tab3\] shows the coefficient estimates obtained with ML, mean BR and median BR. Both mean and median BR approaches are able to solve the boundary estimates problem.
$\alpha_1$ $\alpha_2$ $\beta_1$ $\beta_2$
---------- -------------- ----------------------- ----------------------- --------------
ML -1.32 (0.53) $+\infty$ ($+\infty$) $-\infty$ ($+\infty$) -1.31 (0.71)
meanBR -1.25 (0.51) 5.48 (1.48) -3.43 (1.42) -1.19 (0.67)
medianBR -1.29 (0.52) 6.46 (2.32) -4.48 (2.29) -1.24 (0.68)
: Coefficient estimates and corresponding standard errors in parenthesis[]{data-label="tab3"}
Table \[tab4\] shows the simulation results for the regression parameters considering 10,000 replications, with covariates fixed at the observed value and true parameter $\theta_0=(-1,4,-2,-1)$. We found $979$ samples out of 10,000 with ML boundary estimates. Instead, mean and median BR estimates are always finite. The median BR is again highly accurate in achieving median centering and shows a lower estimated relative bias than ML, as well as a good empirical coverage of the 95% Wald-type confidence intervals.
---------- ------- ------- ------- -- ------- ------ -------
                 $\beta_1$                 $\beta_2$
             PU%     RB%   WALD%       PU%    RB%    WALD%
ML 55.08 1.80 96.92 53.20 8.20 96.50
meanBR 43.91 -0.65 95.88 48.10 0.50 96.60
medianBR 49.71 8.95 96.48 50.35 4.90 96.28
---------- ------- ------- ------- -- ------- ------ -------
: Estimation of regression parameters $\beta=(\beta_1, \beta_2)$. Simulation results for ML, mean BR and median BR estimators. For ML, RB% and WALD% are conditional upon finiteness of the estimates[]{data-label="tab4"}
Under the monotone reparameterization of the coefficients related to the binary covariates, proposed by @agrestikateri2017 and presented in Section 2.2, the estimated percentage relative bias is $-0.81\%$, $1.79\%$ and $0.15\%$ for $\gamma(\beta_1)$, and $0.69\%$, $0.94\%$ and $0.13\%$ for $\gamma(\beta_2)$, with ML, mean BR and median BR, respectively. For ML, it should be recalled that the estimated relative bias is conditional upon finiteness of the estimates. It is noteworthy that the median BR estimator has lower estimated relative mean bias than the ML and the mean BR estimators.
---
abstract: 'We consider the problem of fitting a polynomial to a set of data points, each data point consisting of a feature vector and a response variable. In contrast to standard least-squares polynomial regression, we require that the polynomial regressor satisfy shape constraints, such as monotonicity with respect to a variable, Lipschitz-continuity, or convexity over a region. Constraints of this type appear quite frequently in a number of areas including economics, operations research, and pricing. We show how to use semidefinite programming to obtain polynomial regressors that have these properties. We further show that, under some assumptions on the generation of the data points, the regressors obtained are consistent estimators of the underlying shape-constrained function that maps the feature vectors to the responses. We apply our methodology to the US KLEMS dataset to estimate production of a sector as a function of capital, energy, labor, materials, and services. We observe that it outperforms the more traditional approach (which consists in modelling the production curves as Cobb-Douglas functions) on 50 out of the 65 industries listed in the KLEMS database.'
author:
- 'Mihaela Curmei[^1]'
- 'Georgina Hall[^2]'
bibliography:
- 'pablo\_amirali.bib'
- 'sample.bib'
- 'thesis.bib'
title: '**Shape-Constrained Regression using Sum of Squares Polynomials**'
---
#### Keywords:
[Polynomial regression, semidefinite programming, consistent estimators, sum of squares polynomials, convex regression, production functions.]{}
Introduction {#sec:intro}
============
Regression is a fundamental problem in statistics and machine learning, appearing across all areas, from the social sciences to engineering. Its goal is to estimate a relationship between *feature vectors* and *response variables* from observed feature vector-response variable pairings. As an example, consider the manager of a second-hand car dealership who wishes to infer from past sales the price at which (s)he should sell a newly acquired car based on the car’s make, brand, mileage, power, and age. In this case, the second-hand car’s price is the response variable, its make, brand, mileage, power, and age are its features, and the observed feature vector-response variable pairings are the past sales. One of the approaches used to infer this relationship is to search for a function which maps the feature space to the response space, from within a set of parametric functions. Linear regression, for example, corresponds to the case where the set of parametric functions considered is the set of linear functions. Deciding which function to select among this set relies on the use of a *loss function*, which is a way of measuring the *loss* or distance between the value that is predicted by the parametric function on the observed feature vector, on the one hand, and the corresponding true response variable, on the other. The function that is selected as most accurate is the one that gives rise to the smallest loss. We call this function the *regressor*.
In general, the family of parametric functions from which the regressor is picked is a large one. This allows for a more complex (and hopefully more accurate) model of the relationship between features and response. Increasing the size of the parametric family comes however with downsides: as the number of parameters that describe the family increases, the model can overfit the observed data, generalizing poorly to new data and exhibiting behavior that should not occur in the context of the application (e.g., prices going negative). A popular work-around to this issue is to add *regularization* to the model: this is a process through which candidate regressors are filtered out if they are, e.g., too complex, or do not fit the bill in some other way. The ability to filter out certain candidates supposes of course that one is able to distinguish between acceptable candidates and non-acceptable ones. In other words, regularization is only possible if we have access to some additional prior knowledge of the model.
As it turns out, additional information that is generally available for regression models relates to the *shape* of the regressor. We may know for example whether a response increases or decreases with a feature. Considering the second-hand car example previously mentioned, we could expect that its price increases with its power, but decreases with its mileage. We may even have some sense as to whether the response variable is convex or concave in a feature: we can imagine, e.g., that the price of a second-hand car could be convex in its age as it goes from brand-new to old to vintage. As a consequence, when running a regression model, we may want to regularize our model by restricting ourselves to the candidate functions that have these shape particularities: this is what we call *shape-constrained regression*, which is the subject of study of this paper.
We consider here two types of shape constraints: *convexity constraints* and what we call *bounded-derivative constraints* (see Section \[sec:math.form\] for a formal definition). Bounded-derivative constraints include as subcases both the case where the regressor is constrained to be monotone and the case where it is constrained to be Lipschitz-continuous with a fixed Lipschitz constant. We focus on the convex and bounded-derivative settings as they are those that appear most frequently in applications. A short, non-exhaustive list of areas where convexity-constrained regression appears includes economics [@meyer1968consistent], psychology [@gallistel2004learning], electrical engineering [@hannah2012ensemble], and medicine [@prince1990reconstructing]. Similarly, the need for monotone-constrained regression occurs in medicine [@hippo], biology and environmental engineering [@leukemia], electrical and computer engineering [@software_failure; @48758], economics [@pricing], and civil engineering [@shade].
We further focus on shape-constrained *polynomial* regression. In other words, the parametric family we restrict ourselves to is the set of (multivariate) polynomial functions. Two reasons motivate this choice. First, polynomial functions are incredibly expressive, particularly in the set-up that we consider: we assume throughout that the feature vectors lie in a box and that the response variable is a continuous function of the features. These two assumptions are not as restrictive as they may sound. It is generally the case that for each feature, a range of possible values is known even if very wide. Regarding continuity, smoothness is often a property that is independently required in a regressor (see, e.g., [@mazumder2017computational] where techniques to smooth a piecewise linear regressor are discussed). Under these assumptions, polynomial functions can, by the Stone–Weierstrass theorem, approximate arbitrarily well the relationship between feature vectors and response variables. The second reason for choosing polynomial functions as our parametric family is because they are amenable to the use of certain algebraic techniques (described in Section \[subsec:sos\]) which make imposing monotonicity and convexity, among other shape constraints, a feasible task computationally speaking. More specifically, we show that solving a shape-constrained polynomial regression problem can be dealt with using semidefinite programming. The semidefinite programs obtained have nice computational properties: their size does not scale with the number of datapoints and scales polynomially in the number of features. Furthermore, obtaining a response corresponding to a new feature vector is very easy as it simply amounts to evaluating the polynomial regressor on this feature vector. All in all, by using polynomial functions, we are able to impose shape constraints on our regressor in a tractable way without sacrificing richness of the model.
This is in contrast to a number of other methods, which we review now. After the literature review, we wrap up the introduction by giving the main contributions of the paper.
Literature relating to shape-constrained regression
---------------------------------------------------
As shape-constrained regression is such a fundamental problem, the literature relating to it is bountiful. It centers around two types of shape constraints: convexity constraints, which we consider, and monotonicity constraints, which are slightly more restrictive than the bounded derivative constraints that we consider. To the best of our knowledge however, there are no publications devoted to the bounded derivative case. We thus review the monotone regression literature only, noting that many techniques described there can in fact be extended to the bounded-derivative case.
### A review of the literature on convex regression
We focus here on the literature devoted to *multivariate* convex regression as it is our space of interest. A separate literature exists for the univariate case, which we do not cover here.
The main results in multivariate convex regression revolve around the convex least-squares estimator, introduced in [@hildreth1954point] and [@holloway1979estimation], which is obtained, as can be inferred from its name, by searching for a function among the set of convex functions that minimizes the least squares error between predicted values and measured values. Surprisingly, this problem is tractable and can be reduced to a quadratic program. The estimator thus obtained is a piecewise linear function and computing a prediction from a new feature vector can be done by solving a linear program; see [@kuosmanen2008representation]. It is interesting to contrast this to our results. As mentioned above, the estimator we obtain is a polynomial (so smooth), and its computation involves solving a semidefinite program, which is in theory a more expensive optimization problem to solve than a quadratic program. The quadratic program that appears here, however, has a number of variables that scales linearly and a number of constraints that scales quadratically with the number of data points. This is in opposition to our semidefinite program, whose size does not depend on the number of data points. It could happen, as a consequence, that the quadratic program is more time-consuming to solve than the semidefinite program, depending on the number of points. In terms of dependency on the number of features, both programs scale polynomially with this dimension. Another point of contrast between the two methods relates to the difficulty of obtaining a prediction from a new feature vector: for the convex least-squares estimator, one needs to solve a linear program; in our case, a simple polynomial evaluation suffices. Additional work on the convex least-squares estimator has been done in [@lim2012consistency] and [@seijo2011nonparametric], who show that it is a consistent estimator of the true underlying function.
The proof of consistency of our estimator owes many of its key ideas to the consistency proof in [@lim2012consistency]. More recent work on the topic of the convex least-squares estimator includes [@mazumder2017computational], which proposes a faster algorithm to compute it that leverages the problem structure. The authors also develop techniques to smooth the piecewise linear function obtained and to constrain the function to be Lipschitz continuous. Another line of work has focused on bounding the number of breakpoints of the convex least-squares estimator, the goal being to restrict the size of the quadratic program that needs to be solved to obtain it; see, e.g., [@hannah2013multivariate] and [@magnani2009convex]. Choosing an appropriate number of breakpoints and how to partition the space with these breakpoints then becomes the main difficulty.
An orthogonal line of work to that described above appears in [@aguilera2008approximating] and [@aguilera2009convex]. These papers both rely on the second-order characterization of convexity and involve constraining the Hessian of the function to be positive semidefinite to enforce convexity of the regressor. To achieve this, the space of interest is discretized via a mesh: the Hessian is then required to be positive semidefinite at the nodes of the mesh and a finite-difference approximation is used to ensure positive semidefiniteness of the Hessian over the whole space. The main caveat of this method is that it is very computationally intensive in high dimension. In particular, it involves semidefinite programs whose sizes are linear in the number of mesh nodes, which are themselves exponential in the number of features. As stated above, this is in contrast to our method, which involves semidefinite programs that scale polynomially with the number of features.
The last line of work that we wish to review here appears in [@convex_fitting]. From a content perspective, this is the paper that is most closely related to ours. Indeed, similarly to us, the setting considered is that of multivariate polynomial regression and convexity of the regressor is enforced via the use of sum of squares polynomials. The main differences between their work and ours is that they focus on convexity only (whereas we consider other settings such as monotonicity and Lipschitz-continuity) and there is no statistical analysis of the regressor they propose. It is interesting to note that semidefinite programming has been used outside of regression to enforce structural properties such as convexity. An example involving probability distributions, e.g., has appeared in [@popescu2005semidefinite].
### A review of the literature on monotone regression
We focus again here on *multivariate* monotone regression and leave the univariate monotone regression literature aside. Within the multivariate literature, approaches can be split into five different categories with only the last one leading to a polynomial regressor (which is our setting). The first four methods are computationally quite intensive, with the second, third, and fourth scaling exponentially in the number of features, while our method is polynomial in the number of features. The fifth method that we touch upon corresponds to the univariate setting of our method, which is of course a simpler setting to deal with. In particular, the sum of squares techniques presented in Section \[subsec:sos\] have some specific properties in the univariate case which are lost in the multivariate setting. We now go into more detail for each approach.
The first approach relies on the use of Artificial Neural Networks (ANN). The easiest way to guarantee that an ANN outputs an increasing function with respect to all features is to keep the edge weights in the neural net nonnegative, see [@wang1994neural; @kay2000estimating; @dugas2001incorporating; @dugas2009incorporating; @zhang1999feedforward]. However, it was shown in [@daniels2010monotone] that in order for a neural network with nonnegative weights to approximate arbitrarily well any monotonically increasing function in $n$ features, the ANN must have $n$ fully connected hidden layers, which can lead to computational limitations and requires a large training dataset.
The following three methods (lattice methods, regression trees, and isotonic regression) all involve breaking the space down into smaller subsets. Lattice methods (see, e.g., [@gupta2016monotonic]) involve discretizing the feature space via a mesh. To each data point $X$ in the feature space, a vector of linear interpolation weights, $\phi(X)$, is then associated. This vector reflects the distance of $X$ to its closest mesh nodes. Computing the regressor amounts to finding a linear combination of $\phi(X)$ which minimizes some loss function. If the coefficients that appear in the linear combination satisfy some pairwise constraints, then the regressor is guaranteed to be monotone. For regression trees, the feature domain is also partitioned into smaller subdomains where interactions between features are more manageable. On each subdomain, a fit to the data is computed, and to obtain a function over the whole domain, the subdomain fits are aggregated, via, e.g., gradient boosting; see [@breiman1984classification; @freund1999short; @ridgeway1999state; @friedman2001greedy]. Monotonicity of the regressor is obtained by enforcing monotonicity on each subregion as aggregation maintains this structural property [@chipman2016high; @hofner2011boosting]. Finally, isotonic regression—initially developed for the univariate monotone regression case—can be generalized to the multivariate setting. In this method, the feature space is discretized via a mesh once again and a piecewise constant function $f$ is fitted to the data in such a way that $f(x_i)\leq f(x_j)$ if $x_i$ and $x_j$ are nodes of the mesh and $x_i \leq x_j$, where $\leq$ is some partial or total ordering. As all three methods involve breaking the feature space down into smaller subsets, they all suffer from the same drawback: the size of the problems scales exponentially with the number of features.
As mentioned above, this is in contrast to our model where the size of the semidefinite program obtained scales polynomially in the number of features. Furthermore, the regressors obtained in the papers mentioned above are nonsmooth and, in the case of isotonic regression, nondifferentiable, which once again is in stark contrast to our set-up.
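For concreteness, the univariate ancestor of the isotonic approach described above is the classical pool-adjacent-violators algorithm (PAVA). The sketch below is our own illustrative implementation, not code from any of the cited works: it computes the least-squares nondecreasing fit by merging adjacent blocks whose means violate monotonicity.

```python
def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y_1, ..., y_m."""
    # Each block stores (sum, count); merge backwards while block means decrease.
    blocks = []
    for v in y:
        s, c = float(v), 1
        while blocks and blocks[-1][0] / blocks[-1][1] > s / c:
            ps, pc = blocks.pop()
            s, c = s + ps, c + pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each merged block is fitted by its mean
    return out

fit = pava([1.0, 3.0, 2.0, 4.0])
assert fit == [1.0, 2.5, 2.5, 4.0]
assert all(a <= b for a, b in zip(fit, fit[1:]))
```

The multivariate generalizations replace this total order on indices by a partial order on mesh nodes, which is where the exponential blow-up in the number of features originates.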
Monotone *polynomial* regression has only been very lightly touched upon in the literature. Most methods that give rise to a polynomial regressor typically involve adding monotone univariate polynomials together to obtain a (separable) monotone multivariate polynomial. This of course is less than ideal as it ignores possible interactions between features. The only paper that appears in the literature and that resembles ours to some degree is [@wang2019calibrating]. In it, the authors use semidefinite programming to enforce monotonicity of their polynomial regressor and show its consistency. The main difference with our work lies in the fact that they only consider the simpler univariate setting.
Outline and contributions
-------------------------
The goal of this paper is to study the problem of (multivariate) shape-constrained polynomial regression, which is the problem of fitting a multivariate polynomial regressor to datapoints with constraints on the shape of the regressor. We focus on two types of shape constraints here: convexity constraints and bounded-derivative constraints, with both of these shape constraints being required to hold only over a box, rather than globally. We formally define these concepts in Section \[sec:math.form\]. We then formulate the problem of obtaining shape-constrained regressors as optimization problems. We show that, as is, these optimization problems are intractable (Section \[subsec:np.hard\]), but that they can be approximated arbitrarily well using *sum of squares polynomials*, a concept defined in Section \[subsec:sos\]. The resulting approximations are tractable semidefinite programs (Section \[subsec:approx\]) that have the key properties of not scaling with the number of datapoints and only scaling polynomially with the number of features. In Section \[sec:consistent\], we further show that, despite the restriction to polynomial regressors and the approximations of the initial intractable optimization problem, the polynomial regressors obtained remain *consistent* estimators of the true underlying function from which the data has been generated. This is a fundamental property to have as it guarantees that if we had an infinite number of datapoints (which can be viewed as the best-case scenario), then we would be able to recover the true relationship between feature vectors and response variables. Finally, in Section \[sec:num.exps\], we present two sets of computational results. One is the outcome of applying our method to datasets generated synthetically. 
In this case, we are able to observe that when the datapoints are obtained in a noisy fashion, running shape-constrained regression as opposed to regular regression leads to more robust estimators of the underlying function. It also gives rise to a regressor with better generalization error than its unconstrained counterpart. The second set of computational results presented here is the outcome of applying our methodology to a well-known dataset (the KLEMS database) which appears in economics and relates production of a sector back to capital, labor, energy, materials, and services. We observe that we are able to outperform the more traditional approach (which uses Cobb-Douglas functions) on 50 out of the 65 sectors listed in the KLEMS database. Additional applications to California housing and weekly-wage datasets can be found at <https://github.com/mcurmei627/dantzig/tree/master/Experiments>.
Mathematical formulation of the problem {#sec:math.form}
=======================================
Throughout the paper, we operate with $m$ pairs $(X_i,Y_i)_{i=1,\ldots,m}$ of data, where $X_i \in \mathbb{R}^n$ is a feature vector and $Y_i \in \mathbb{R}$ is the corresponding response variable. We occasionally use the notation $X_i^j$ to refer to the $j^{th}$ component of vector $X_i$. Our only assumption regarding the data at this point is that there exists a full-dimensional box $B$ such that $X_i \in B, \forall i=1,\ldots,m$. Recall that a box $B \subseteq \mathbb{R}^n$ is a set of the following form: $$\begin{aligned}
\label{eq:box}
B=\{(x_1,\ldots,x_n) \in \mathbb{R}^n~|~ l_i \leq x_i \leq u_i,~\forall i=1,\ldots,n\}\end{aligned}$$ where $l_1,\ldots,l_n,u_1,\ldots,u_n$ are scalars such that $l_i \leq u_i,~\forall i=1,\ldots,n$. We say that $B$ is full-dimensional in the particular case where $l_i<u_i,~\forall i=1,\ldots,n$. In practice, this assumption is quite easily verified: each feature tends to have a natural range (which can potentially be quite large) in which it lies. Note that for the moment we are not making any assumptions regarding the way in which the data is generated: we are not assuming, for example, that the feature vectors are realizations of independent random variables. Assumptions of this type only come into play in Section \[sec:consistent\], so we make them explicit then.
In a standard polynomial regression setting, the goal is to fit a multivariate polynomial function $p:\mathbb{R}^n \mapsto \mathbb{R}$ of degree $d$ to the data in such a way that the least squares error between the predicted values and the observed values is minimized. In other words, if we denote by $P_{n,d}$ the set of polynomials in $n$ variables and of degree $d$, we solve $$\label{eq:poly.reg}
\min_{p \in P_{n,d}} \sum_{i=1}^m (Y_i-p(X_i))^2.$$ As $p$ is finitely parameterized by its coefficients, this is a quadratic program, which can be solved in polynomial time. Our contributions can be viewed as a refinement of the standard model where *shape constraints* on the regressor $p$ are required. As mentioned previously, we focus on two specific cases of shape constraints in this paper which we define now.
A function $f$ is convex over a box $B$ if for any $x,y \in B$ and for any $\lambda \in [0,1]$, we have $$f(\lambda x+(1-\lambda)y) \leq \lambda f(x)+(1-\lambda)f(y).$$
Being convex over a box is evidently less restrictive than being globally convex. Similarly to global convexity, we can define a second-order characterization of convexity over a box.
Let $\nabla^2 f(x)$ be the Hessian of $f$ at point $x$. If $B$ is a full-dimensional box and $f$ is twice-differentiable, then $f$ is convex over $B$ if and only if $\nabla^2 f(x) \succeq 0$, for all $x\in B$.
This result is well-known and can be found, e.g., in [@bertsekas2009convex Section 1.1.4]. Examples of applications where one would require such a shape constraint appear in Section \[sec:intro\]. We mostly use this second-order characterization of convexity over a box. This can be done as our box $B$ is assumed to be full-dimensional and $p$ is twice differentiable.
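A cheap numerical sanity check of this characterization is to test $\nabla^2 f(x) \succeq 0$ at finitely many grid points of $B$. This is only a necessary-condition check at the sampled points, not a certificate (exact certification is precisely what Section \[sec:comp\] shows to be hard); the finite-difference sketch below is our own illustration.

```python
import numpy as np

def hessian(f, x, eps=1e-5):
    """Central finite-difference Hessian of f at the point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

def psd_on_grid(f, lower, upper, k=5):
    """Check nabla^2 f(x) >= 0 (up to tolerance) at k^n grid points of the box."""
    grids = np.meshgrid(*[np.linspace(l, u, k) for l, u in zip(lower, upper)])
    pts = np.column_stack([g.ravel() for g in grids])
    return all(np.linalg.eigvalsh(hessian(f, x)).min() >= -1e-4 for x in pts)

# x^4 + y^2 is convex on [-1, 1]^2; x^3 is not convex on a box crossing x = 0.
assert psd_on_grid(lambda x: x[0] ** 4 + x[1] ** 2, [-1, -1], [1, 1])
assert not psd_on_grid(lambda x: x[0] ** 3, [-1], [1])
```

Note that the number of grid points grows exponentially with $n$, which is exactly the drawback of the mesh-based methods reviewed earlier and what the sum of squares approach avoids.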
The second shape constraint we may wish to impose is a requirement on the derivatives of the regressor, namely that they be bounded. To this effect, we define the concept of $K$-bounded derivatives.
\[def:bdr\] Given (possibly infinite) real numbers $K_1^-,K_1^+,\ldots,K_n^-,K_n^+$ and the associated vector $K\mathrel{\mathop{:}}=(K_1^-,K_1^+,\ldots,K_n^-,K_n^+)$, a continuously-differentiable function $f$ is said to have *$K$-bounded derivatives* over a box $B$ if, for all $i=1,\ldots,n$, $$\begin{aligned}
\label{eq:kbdder}
K_i^- \leq \frac{\partial f(x)}{\partial x_i} \leq K_i^+, \forall x \in B.
\end{aligned}$$
Note that any continuously differentiable function over a compact set has bounded derivatives. Hence, for any continuously differentiable $f$ over $B$, there always exists a vector $K$ such that (\[eq:kbdder\]) holds. The specificity of this constraint though is that $K$ is part of the input and fixed a priori.
The notion of $K$-bounded derivatives subsumes many other notions, such as monotonicity and Lipschitz-continuity. In the case of monotonicity, requiring e.g. that $f$ be increasing in the variable $x_i$ (i.e., $f(x_1,\ldots,x_i,\ldots,x_n) \leq f(x_1,\ldots,x_i+h,\ldots,x_n)$ for all $(x_1,\ldots,x_n) \in B$ and $h>0$ such that $(x_1,\ldots,x_i+h,\ldots,x_n) \in B$) is equivalent, for continuously differentiable functions, to requiring that $\frac{\partial f(x)}{\partial x_i} \geq 0, \forall x \in B$. This corresponds to taking $K_i^-=0$ and $K_i^+= +\infty$ in the bounded derivative setting. A similar reasoning can be applied to the decreasing case. In the case of Lipschitz-continuity, we would like to impose that $|f(x)-f(y)|\leq M||x-y||$ for a fixed positive scalar $M$, some norm $||.||$, and any $x,y \in B$. This is equivalent to requiring that $-M\leq \frac{\partial f(x)}{\partial x_i} \leq M$ for all $x \in B$ and any $i=1,\ldots,n$, provided that the norm chosen above is the 1-norm. To see this equivalence, note that the implication follows immediately by taking $y=x+he_i$ where $e_i$ is the vector of all zeros except for a one in $i^{th}$ position. The converse is a consequence of the multivariate mean value theorem and Hölder’s inequality. Hence, if we take $K_i^-=-M$ and $K_i^+=M$ for all $i=1,\ldots,n$ in the bounded derivative setting, we obtain a regressor that is $M$-Lipschitz.
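To make the two special cases concrete, the following sampled check (our own illustration, and again not a certificate since it only inspects grid points) verifies monotonicity and 1-norm Lipschitz-continuity of a toy polynomial through its partial derivatives, as in Definition \[def:bdr\].

```python
import numpy as np

# Partial derivatives of p(x, y) = x^2 + x*y over the box [0, 2] x [0, 1]:
# dp/dx = 2x + y, which ranges over [0, 5]; dp/dy = x, which ranges over [0, 2].
dp_dx = lambda x, y: 2 * x + y
dp_dy = lambda x, y: x

xs, ys = np.meshgrid(np.linspace(0, 2, 21), np.linspace(0, 1, 21))

# Monotonicity in both variables corresponds to K_i^- = 0, K_i^+ = +infinity.
assert (dp_dx(xs, ys) >= 0).all() and (dp_dy(xs, ys) >= 0).all()

# 5-Lipschitz continuity in the 1-norm corresponds to K_i^- = -5, K_i^+ = 5.
M = 5
assert (np.abs(dp_dx(xs, ys)) <= M).all() and (np.abs(dp_dy(xs, ys)) <= M).all()
```

Both checks instantiate (\[eq:kbdder\]) with the particular choices of $K$ described in the paragraph above.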
For each type of constraint, we can define the correspondingly constrained polynomial regressor. Let ${\bar{g}_{m,d} }:\mathbb{R}^{n} \rightarrow \mathbb{R}$ be the solution of the following optimization problem: $$\label{eq:opt.bg}
\begin{aligned}
{\bar{g}_{m,d} }\mathop{\mathrel{:}}=\arg &\min_{g \in P_{n,d}} &&\sum_{i=1}^m (Y_i-g(X_i))^2\\
&\text{s.t. } &&\nabla^2 g(x) \succeq 0, \forall x\in B.
\end{aligned}$$ Thus defined, ${\bar{g}_{m,d} }$ exists, is unique, and is convex over $B$. It is a function of both the degree (which is chosen by the user) and the datapoints $(X_i,Y_i)_{i=1,\ldots,m}$. We use the subscripts $d$ and $m$ respectively to denote these dependencies. We refer to Problem (\[eq:opt.bg\]) in the rest of the paper as convex (polynomial) regression. Likewise, for the bounded derivative setting, we can define ${\bar{h}_{m,d} }:\mathbb{R}^{n} \rightarrow \mathbb{R}$ to be the solution of the following optimization problem: $$\label{eq:opt.bh}
\begin{aligned}
{\bar{h}_{m,d} }\mathop{\mathrel{:}}=\arg &\min_{h \in P_{n,d}} &&\sum_{i=1}^m (Y_i-h(X_i))^2\\
&\text{s.t. } &&K_i^- \leq \frac{\partial h(x)}{\partial x_i} \leq K_i^+, ~\forall i=1,\ldots,n, \forall x\in B.
\end{aligned}$$ As before, ${\bar{h}_{m,d} }$ thus defined exists, is unique, and depends both on the degree (which is chosen by the user) and the datapoints $(X_i,Y_i)_{i=1,\ldots,m}$. Furthermore, ${\bar{h}_{m,d} }$ has $K$-bounded derivatives. We refer to Problem (\[eq:opt.bh\]) in the rest of the paper as (polynomial) bounded derivative regression. Throughout, we assume that $m$ is large enough so that the solution to either problem (\[eq:opt.bg\]) or problem (\[eq:opt.bh\]) (depending on the context) is unique. Note that if the datapoints are generated randomly, this occurs with high probability when $m$ is larger than $\binom{n+d}{d}$. Further note that we consider here both problems separately, though this need not necessarily be done. Indeed, one can easily imagine settings where we would require both types of constraints.
It is natural to wonder whether problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]) can be solved as is. In Section \[sec:comp\], we show that these problems are in fact not tractable and propose a sum of squares-based approximation to them. One can then consider how good the resulting polynomial regressors are under some assumptions on the generative model for the data. This is the focus of Section \[sec:consistent\].
Computational considerations {#sec:comp}
============================
As mentioned in Section \[sec:math.form\], the optimization problems in (\[eq:opt.bg\]) and (\[eq:opt.bh\]) are intractable. We formally show this in Section \[subsec:np.hard\]. We then review sum of squares polynomials and related concepts in Section \[subsec:sos\], as this will be the basis of the approximations we present in Section \[subsec:approx\].
Hardness results {#subsec:np.hard}
----------------
We show in this section that testing whether a polynomial has $K$-bounded derivatives over a box, even in the simple case where the polynomial is cubic, is a hard problem. It follows that one cannot hope to optimize over this set of polynomial functions as is done in (\[eq:opt.bh\]). It has already been shown in another paper by one of the authors [@ahmadi2019complexity] that it is hard to test whether a polynomial is convex over a box. We refer the reader to the paper for a complete proof of the result, but nevertheless rewrite the statement here for completeness. This result also implies that optimizing over the set of polynomials which are convex over a box, as is done in (\[eq:opt.bg\]), is hard.
\[th:compl.conv\] The problem of testing whether a polynomial $p$ is convex over a box $B$ is strongly NP-hard for any $d \geq 3$.
The proof of this theorem is based on a reduction from the problem of testing whether a matrix whose entries are affine polynomials in $x$ is positive semidefinite for all $x$ in a full-dimensional box $B$. It is also shown that for degrees 1 and 2 (respectively affine functions and quadratic functions), the problem is polynomial-time solvable, which implies that this result is minimal in the degree of the polynomial.
\[th:compl.deriv\] The problem of testing whether a polynomial $p$ has $K$-bounded derivatives is strongly NP-hard for any $d\geq 3$.
The proof of this theorem is based on the famous MAX-CUT problem and is given in Appendix \[appendix:np.hard\].
Similarly to Theorem \[th:compl.conv\], Theorem \[th:compl.deriv\] is minimal in the degree of the polynomial. Indeed, testing whether a quadratic or affine polynomial $p$ has $K$-bounded derivatives over $B$ can be done in polynomial time. For affine polynomials, this is equivalent to testing whether $\partial p/\partial x_i$, which is a constant, belongs to $[K_i^-,K_i^+]$ for all $i=1,\ldots,n$. This can of course be done in polynomial time. For quadratic polynomials, testing whether it has $K$-bounded derivatives over $B$ amounts to testing whether the linear function $\frac{\partial p(x)}{\partial x_i}$ is in $[K_i^-,K_i^+]$ for any $x \in B$, for all $i=1,\ldots,n$. This can be done by solving a sequence of linear programs indexed by $i$ where the objective we maximize (resp. minimize) is $\frac{\partial p(x)}{\partial x_i}$ and the constraints are given by the box. As linear programs can be solved in polynomial time and testing whether the optimal value obtained is larger (resp. smaller) than $K_i^+$ (resp. $K_i^-$) can also be done in polynomial time, the quadratic case is tractable.
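In fact, for the quadratic case the linear programs above even admit a closed form: an affine function over a box is extremized coordinate-wise at the box endpoints. The sketch below (our own, with function names of our choosing) implements the resulting polynomial-time check.

```python
import numpy as np

def linear_range_over_box(c, c0, lower, upper):
    """Exact min and max of x -> c^T x + c0 over the box [lower, upper].
    Closed form replacing the LPs: each term is extremized at a box endpoint."""
    c, lower, upper = map(np.asarray, (c, lower, upper))
    lo = c0 + np.where(c > 0, c * lower, c * upper).sum()
    hi = c0 + np.where(c > 0, c * upper, c * lower).sum()
    return lo, hi

def has_K_bounded_derivatives_quadratic(Q, b, lower, upper, K):
    """p(x) = x^T Q x + b^T x has gradient (Q + Q^T) x + b, an affine map.
    Check K_i^- <= dp/dx_i <= K_i^+ over the box, one coordinate at a time."""
    A = Q + Q.T
    for i, (Km, Kp) in enumerate(K):
        lo, hi = linear_range_over_box(A[i], b[i], lower, upper)
        if lo < Km or hi > Kp:
            return False
    return True

# p(x, y) = x^2 + y^2 on [0, 1]^2: dp/dx = 2x in [0, 2], dp/dy = 2y in [0, 2].
Q, b = np.eye(2), np.zeros(2)
assert has_K_bounded_derivatives_quadratic(Q, b, [0, 0], [1, 1], [(0, 2), (0, 2)])
assert not has_K_bounded_derivatives_quadratic(Q, b, [0, 0], [1, 1], [(0, 1), (0, 2)])
```

For degree $3$ and above no such shortcut exists, which is the content of Theorem \[th:compl.deriv\].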
It follows from Theorems \[th:compl.conv\] and \[th:compl.deriv\] that optimization problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]) are intractable. We consequently introduce sum of squares-based approximations of (\[eq:opt.bg\]) and (\[eq:opt.bh\]) in Section \[subsec:approx\]. Before doing this however, we briefly review the concept of sum of squares polynomials and some key results in the area.
Review of Sum of Squares Polynomials {#subsec:sos}
------------------------------------
To make this paper self-contained, this section briefly introduces the concept of sum of squares polynomials with some related results. A more extensive collection of results on the topic can be found in [@blekherman2012semidefinite] and the references therein.
We say that a polynomial $p$ of degree $2d$ and in $n$ variables is a *sum of squares* (sos) polynomial if $p$ can be written as $$p(x_1,\ldots,x_n)=\sum_{i=1}^r q_i^2(x_1,\ldots,x_n)$$ for some polynomials $q_i$ of degree $d$ and in $n$ variables. We denote by $\Sigma_{n,2d}$ the set of sos polynomials in $n$ variables and of degree $2d$. Sum of squares polynomials combine a few characteristics that make them very useful in practice. First, testing membership to $\Sigma_{n,2d}$ can be done in polynomial time. Indeed, a polynomial $p(x_1,\ldots,x_n)$ of degree $2d$ is sos if and only if there exists a positive semidefinite matrix $Q$ such that $p(x)=z(x)^TQz(x)$, where $z(x)=[1,x_1,\ldots,x_n,\ldots,x_n^d]^T$. It follows that testing membership to $\Sigma_{n,2d}$ is equivalent to solving a semidefinite program, which can be done to arbitrary accuracy in polynomial time. Second, sum of squares polynomials can be used to algebraically certify nonnegativity of a polynomial over a *basic semialgebraic set*, i.e., a set defined by a finite number of polynomial inequalities. The exact form of the algebraic certificate varies together with the assumption(s) on the basic semialgebraic set and all results of this type are regrouped under the name of *Positivstellensätze*. We make use of one such Positivstellensatz in this paper, due to Putinar, which we give below. In this case, the assumption on the basic semialgebraic set is that it is *Archimedean*. This is a slightly stronger requirement than compactness which is trivially satisfied by the sets that we consider (boxes). As a consequence, we do not give an exact definition of this notion but instead refer the reader to [@lasserre2009convexity] if this is of interest.
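As a worked instance of the Gram-matrix characterization, consider $p(x)=x^4+2x^2+1=(x^2+1)^2$ with $z(x)=[1,x,x^2]^T$. In practice the matrix $Q$ is found by a semidefinite solver; here we simply verify a hand-constructed $Q$ numerically (our own illustrative example).

```python
import numpy as np

# One Gram matrix representation p(x) = z(x)^T Q z(x) for p(x) = x^4 + 2x^2 + 1:
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q is positive semidefinite, certifying that p is a sum of squares ...
assert np.linalg.eigvalsh(Q).min() >= -1e-12

# ... and z^T Q z reproduces p at sample points.
p = lambda x: x ** 4 + 2 * x ** 2 + 1
for x in np.linspace(-2, 2, 9):
    z = np.array([1.0, x, x ** 2])
    assert np.isclose(z @ Q @ z, p(x))
```

Searching over all such PSD matrices $Q$ subject to the linear coefficient-matching constraints is exactly the semidefinite program referred to in the text.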
\[thm:putinar\] Let $g_1,\ldots,g_s$ be polynomials in $n$ variables such that the set $$\Omega\mathrel{\mathop{:}}=\{x \in \mathbb{R}^n~|~ g_1(x)\geq 0, \ldots, g_n(x)\geq 0\}$$ is Archimedean. If a polynomial $p$ is positive on $\Omega$, then there exist sos polynomials $s_0,\ldots,s_s$ such that $$\begin{aligned}
\label{eq:putinar}
p(x)=s_0(x)+s_1(x)g_1(x)+\ldots+s_s(x)g_s(x).
\end{aligned}$$
The combination of these two results implies that, to show nonnegativity of a polynomial over an Archimedean basic semialgebraic set, one can simply search for sum of squares polynomials that verify (\[eq:putinar\]). This is a semidefinite program if the degree of the polynomials involved is fixed. When the polynomial is positive over this set, such a decomposition is actually guaranteed to exist and so, by increasing the degree of the sos polynomials in the decomposition, we will eventually recover a certificate of nonnegativity of the polynomial over the set. This is particularly valuable as testing whether a polynomial is nonnegative over a set is NP-hard to do, even when the polynomial is a quadratic function and the set is defined by linear inequalities. The caveat of course when searching for sum of squares certificates is that one does not know a priori how high the degree of the sum of squares polynomials must be in order to obtain a decomposition. It can be shown in fact that an explicit bound that depends only on the number of variables of the polynomial and its degree cannot be obtained.
Another concept that will be useful to us in Section \[subsec:approx\] is that of *sum of squares matrices*, which are a generalization of sos polynomials to polynomial matrices. Recall that a polynomial matrix is a matrix with entries that are polynomials. We say that a $t \times t$ polynomial matrix $M(x)$ is an *sos matrix* if there exists a $t' \times t$ polynomial matrix $V(x)$ such that $M(x)=V(x)^TV(x)$. This is equivalent to requiring that, for $y\in \mathbb{R}^n$, $y^TM(x)y$ be a sum of squares (polynomial) in $x$ and $y.$ As a consequence, testing whether a given polynomial matrix is an sos matrix can again be done by solving a semidefinite program. We denote by $\Sigma_{n,2d,t}^M$ the set of sos matrices of size $t \times t$ and with entries that are polynomials of degree $2d$ and in $n$ variables. Scherer and Hol [@scherer2006matrix] generalized Theorem \[thm:putinar\] to this setting: we give their theorem below.
\[thm:scherer.hol\] Let $g_1,\ldots,g_s$ and $\Omega$ be as defined in Theorem \[thm:putinar\]. If a symmetric polynomial matrix $H(x)$ is positive definite on $\Omega$ (i.e., $H(x) \succ 0, \forall x \neq 0$ in $\Omega$), then there exist sos matrices $S_0(x), S_1(x),\ldots,S_s(x)$ such that $$H(x)=S_0(x)+g_1(x) \cdot S_1(x)+\ldots+ g_s(x) \cdot S_s(x).$$
Sum of Squares Approximations {#subsec:approx}
-----------------------------
Using the results given in Section \[subsec:sos\], we are able to reformulate the optimization problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]), which are of interest to us, using sum of squares polynomials and matrices. For (\[eq:opt.bg\]), we replace the constraint $H_g(x) \succeq 0, \forall x \in B$ by a sum of squares-based condition as indicated in Theorem \[thm:scherer.hol\]. This can be done as $H_g$ is a symmetric polynomial matrix. Likewise, for (\[eq:opt.bh\]), we replace the constraints $K_i^- \leq \frac{\partial g(x)}{\partial x_i} \leq K_i^+$ for all $i=1,\ldots,m$ and $x \in B$ by sum of squares-based constraints as indicated in Theorem \[thm:putinar\]. Again, this is only possible as $\frac{\partial g(x)}{\partial x_i}$ is a polynomial function for all $i=1,\ldots,m$.
When using both Theorem \[thm:putinar\] and Theorem \[thm:scherer.hol\], we consider $\Omega=B$. Note that both theorems depend not only on the *set* that $\Omega$ defines, but on the *way* it is defined. We choose to use the following representation of $B$, $$\begin{aligned}
\label{eq:box.2}
B=\{(x_1,\ldots,x_n)~|~ (u_i-x_i)(x_i-l_i) \geq 0, i=1,\ldots,n\},\end{aligned}$$ which is different from, but equivalent to, the one given in (\[eq:box\]). This is because this particular representation enables us to take $$g_i(x)=(u_i-x_i)(x_i-l_i), i=1,\ldots,n,$$ thus leading to only $n$ defining inequalities for $B$ rather than the $2n$ we would have had with the representation given in (\[eq:box\]). This gives rise to the following optimization problems.
\[def:tg\] We define ${\tilde{g}_{m,d,r} }$ to be the solution to the following optimization problem: $$\label{eq:opt.tg}
\begin{aligned}
{\tilde{g}_{m,d,r} }\mathop{\mathrel{:}}=&\arg &&\min_{g \in P_{n,d}, S_0,\ldots,S_n \in \Sigma_{n,2r,n}^M} \sum_{i=1}^m (Y_i-g(X_i))^2\\
&\text{s.t. } &&H_g(x)= S_0(x)+g_1(x)S_1(x)+\ldots+g_n(x)S_n(x).
\end{aligned}$$
\[def:th\] We define ${\tilde{h}_{m,d,r} }$ to be the solution to the following optimization problem: $$\label{eq:opt.th}
\begin{aligned}
{\tilde{h}_{m,d,r} }\mathop{\mathrel{:}}=&\arg &&\min_{h \in P_{n,d}, s_{ij}^+,s_{ij}^-\in \Sigma_{n,2r}} \sum_{i=1}^m (Y_i-h(X_i))^2\\
&\text{s.t. } &&K_i^+-\frac{\partial h(x)}{\partial x_i}= s_{i0}^+(x)+ s_{i1}^{+}(x)g_1(x)+\ldots+s_{in}^+(x)g_n(x), i=1,\ldots,n,\\
& &&\frac{\partial h(x)}{\partial x_i}-K_i^-= s_{i0}^-(x)+ s_{i1}^{-}(x)g_1(x)+\ldots+s_{in}^-(x)g_n(x), i=1,\ldots,n.\\
\end{aligned}$$
These sum of squares-based approximations of ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$ have the following property.
\[th:gmdr.lemma\] For fixed $d$ and large enough $m$ so that the solutions of (\[eq:opt.bg\]) and (\[eq:opt.bh\]) are unique, we have $$\begin{aligned}
\label{eq:cvgce.g}
\lim_{r \rightarrow \infty} \sup_{x \in B} |{\bar{g}_{m,d} }(x)-{\tilde{g}_{m,d,r} }(x)| \rightarrow 0
\end{aligned}$$ and $$\begin{aligned}
\label{eq:cvgce.h}
\lim_{r \rightarrow \infty} \sup_{x \in B} |{\bar{h}_{m,d} }(x)-{\tilde{h}_{m,d,r} }(x)| \rightarrow 0.
\end{aligned}$$
The proof of this theorem is a consequence of Theorems \[thm:putinar\] and \[thm:scherer.hol\] and is given in Appendix \[appendix:approx.proof\]—it is (surprisingly) not as straightforward to derive as one might assume. The result states that we can recover an arbitrarily accurate approximation of ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$. The approximations ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$ have some appealing properties. For instance, as they are polynomials, they are smooth functions. Furthermore, ${\tilde{g}_{m,d,r} }$ is certifiably convex over $B$ and ${\tilde{h}_{m,d,r} }$ has certifiably bounded derivatives: it suffices to exhibit the sum of squares polynomials as certificates to convince ourselves that these properties hold. In terms of computation, the size of the semidefinite programs that need to be solved to obtain ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$ scales polynomially in the number $n$ of features. Adding data points to the problem does not impact the size of the semidefinite program, as doing so only adds terms to the objective. Finally, in Section \[sec:consistent\], we also show that under certain generative assumptions on the data, in particular assuming that $Y_i$ is a noisy evaluation of a function $f$ at $X_i$, both ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$ are consistent estimators of the underlying function $f$.
Exactness of the Sum of Squares Approximations in Some Cases
------------------------------------------------------------
In Definitions \[def:tg\] and \[def:th\], we have replaced our initial nonnegativity or positive semidefiniteness constraints by sum of squares-based relaxations. As shown in Theorem \[th:gmdr.lemma\], by doing so, we can recover arbitrarily accurate approximations of the solutions ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$. In certain cases, however, problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]) can be solved exactly using sum of squares polynomials; i.e., the solutions to problems (\[eq:opt.tg\]) and (\[eq:opt.th\]) *are* ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$. These cases correspond to ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$ being quadratic functions, or separable functions. Both cases can be quite valuable in practice, so we explicitly write out the corresponding optimization problems.
### The quadratic case
In this case, we enforce that ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$ have degree $d=2$, which amounts to solving problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]) with $d=2$. In the case where we would like to enforce $K$-bounded derivatives, we make use of a result that appears in [@handelman Proposition I.1.] which gives us a Positivstellensatz for positivity of linear forms over compact convex polyhedra.
Let $\Omega$ be a compact convex polyhedron in $\mathbb{R}^n$ with nonempty interior, defined by $g_i(x) \geq 0, i=1,\ldots,s$, where $g_i(x)=\alpha_i^Tx+\gamma_i$ are linear functions, with $\alpha_i \in \mathbb{R}^n$ and $\gamma_i \in \mathbb{R}$. If $p(x)$ is a linear function that is nonnegative over $\Omega$, then there exist nonnegative scalars $\lambda_0,\lambda_1,\ldots,\lambda_s$ such that $$p(x)=\lambda_0+\lambda_1 g_1+\ldots+\lambda_s g_s.$$
We can then obtain $\bar{h}_{m,2}$ by solving the following optimization problem: $$\begin{aligned}
\bar{h}_{m,2} &\mathop{\mathrel{:}}=&&\arg \min_{h \in P_{n,2}, \lambda_{ij}^-,\lambda_{ij}^+, \tau_{ij}^-,\tau_{ij}^+ \in \mathbb{R} } \sum_{i=1}^m (Y_i-h(X_i))^2\\
&\text{s.t. } &&K_i^+-\frac{\partial h(x)}{\partial x_i}= \lambda_{i0}+\sum_{j=1}^n\left(\lambda_{ij}^{+}(u_j-x_j)+ \lambda_{ij}^{-}(x_j-l_j)\right), i=1,\ldots,n,\\
& &&\frac{\partial h(x)}{\partial x_i}-K_i^-= \tau_{i0}+\sum_{j=1}^n\left(\tau_{ij}^{+}(u_j-x_j)+ \tau_{ij}^{-}(x_j-l_j)\right), i=1,\ldots,n,\\
& &&\lambda_{i0}, \tau_{i0} \geq 0, \forall i=1,\ldots,n, \lambda_{ij}^+,\lambda_{ij}^-,\tau_{ij}^+, \tau_{ij}^- \geq 0, \forall i=1,\ldots,n, \forall j=1,\ldots,n.
\end{aligned}$$ Indeed, as $h$ is of degree $2$, we have that $K_i^+-\frac{\partial h(x)}{\partial x_i}$ and $\frac{\partial h(x)}{\partial x_i}-K_i^-$ are linear functions. Furthermore, $B$ is a compact and convex polyhedron, which can be described by linear inequalities. This requires us to change the description of the box from (\[eq:box.2\]), which we were using before, back to (\[eq:box\]). This optimization problem is a quadratic program with linear constraints, so $\bar{h}_{m,2}$ can be obtained via quadratic programming.
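The quadratic case can be sketched in one variable with made-up data and bounds (the paper's setting is multivariate, and a dedicated QP solver would be used in practice; here a generic solver stands in for it). Since $h'(x)=2ax+b$ is linear, bounding it at the two endpoints of $B=[0,1]$ bounds it on all of $B$.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the quadratic case in one variable: fit h(x) = a x^2 + b x + c
# to data on B = [0, 1] subject to K^- <= h'(x) <= K^+ on B.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 40)
Y = np.sin(2.0 * X) + 0.05 * rng.standard_normal(40)   # made-up data
K_lo, K_hi = 0.0, 5.0

def sse(theta):                         # least-squares objective
    a, b, c = theta
    return np.sum((Y - (a * X**2 + b * X + c))**2)

# h'(x) = 2a x + b is linear, so bounding it at the two endpoints of the
# box bounds it everywhere on [0, 1]; the constraints are linear in theta.
cons = []
for x0 in (0.0, 1.0):
    cons.append({'type': 'ineq', 'fun': lambda th, x0=x0: 2*th[0]*x0 + th[1] - K_lo})
    cons.append({'type': 'ineq', 'fun': lambda th, x0=x0: K_hi - (2*th[0]*x0 + th[1])})

res = minimize(sse, x0=np.zeros(3), method='SLSQP', constraints=cons)
a, b, c = res.x
for x in np.linspace(0.0, 1.0, 11):     # fitted derivative respects the bounds
    assert K_lo - 1e-4 <= 2*a*x + b <= K_hi + 1e-4
```

With this data, the lower bound $h'(1)\geq 0$ is typically active, since the data decreases near $x=1$ while the constrained fit must remain nondecreasing.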
In the case where we would like to enforce convexity over a box, problem (\[eq:opt.bg\]) becomes $$\begin{aligned}
\bar{g}_{m,2} \mathop{\mathrel{:}}=\arg &\min_{g \in P_{n,2}} &&\sum_{i=1}^m (Y_i-g(X_i))^2\\
&\text{s.t. } &&H_g \succeq 0.
\end{aligned}$$ Here, $H_g$ is a constant matrix, as the functions $g$ we are searching among are quadratic. As a consequence, $H_g$ is either positive semidefinite globally or not at all (it cannot be positive semidefinite over $B$ only). Obtaining $\bar{g}_{m,2}$ therefore amounts to solving a semidefinite program.
### The separable case {#sec:sep}
As a reminder, a function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is said to be separable if $f(x_1,\ldots,x_n)=\sum_{j=1}^{n}f_j(x_j)$ where $f_j:\mathbb{R}\mapsto \mathbb{R}.$
We first consider the case where we would like to enforce $K$-bounded derivatives on our polynomial function as in (\[eq:opt.bh\]). We now assume that ${\bar{h}_{m,d} }$ has a separable structure, so we search among polynomial functions $h$ that are separable, i.e., $h(x)=\sum_{j=1}^n h_j(x_j)$. This implies that $\frac{\partial h(x)}{\partial x_j}=\frac{d h_j(x_j)}{d x_j}$ is a univariate function in $x_j$ for all $j=1,\ldots,n$. As a consequence, the constraints $K_j^+-\frac{\partial h(x)}{\partial x_j} \geq 0, \forall x \in B$ and $j=1,\ldots,n$ can be replaced by $K_j^+-h_j'(x_j) \geq 0, \forall x_j \in [l_j,u_j]$ and $j=1,\ldots,n$; likewise for the constraints $\frac{\partial h(x)}{\partial x_j} -K_j^- \geq 0, \forall x \in B$ and $j=1,\ldots,n$. We can then make use of the following lemma.
\[lem:pablo.univ\] Let $a<b$. The univariate polynomial $p(x)$ is nonnegative over $[a,b]$ if and only if it can be written as $$\begin{cases}
&p(x)=s(x)+(x-a)\cdot (b-x)\cdot t(x), \text{ if $deg(p)$ is even}\\
&p(x)=s(x)\cdot(x-a)+t(x)\cdot(b-x), \text{ if $deg(p)$ is odd},
\end{cases}$$ where $t(x),s(x)$ are (univariate) sum of squares polynomials. In the first case, we have $deg(p)=2d$, $deg(t)\leq 2d-2$, and $deg(s)\leq 2d$. In the second case, we have $deg(p)=2d+1$, $deg(t)\leq 2d$, and $deg(s)\leq 2d.$
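A small numerical illustration of the even-degree case of this lemma, with hand-picked multipliers: choosing $s(x)=x^2$ and $t(x)=2$ on $[a,b]=[-1,1]$ yields $p(x)=x^2+2(1-x^2)=2-x^2$. The decomposition certifies nonnegativity on $[-1,1]$ only; the polynomial is negative outside the interval.

```python
import numpy as np

# Even-degree case of the lemma on [a, b] = [-1, 1]:
#     p(x) = s(x) + (x - a)(b - x) t(x),  s(x) = x^2,  t(x) = 2,
# giving p(x) = 2 - x^2, which is certified nonnegative on [-1, 1]
# (and is indeed negative outside, e.g. at x = 2).
a, b = -1.0, 1.0
s = lambda x: x**2
t = lambda x: 2.0
p = lambda x: s(x) + (x - a) * (b - x) * t(x)

for x in np.linspace(a, b, 21):
    assert p(x) >= 0.0                   # nonnegative on [a, b]
assert abs(p(0.5) - 1.75) < 1e-12        # p(0.5) = 2 - 0.25
assert p(2.0) < 0.0                      # but not globally nonnegative
```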
Depending on the degrees of the polynomials $h_1,\ldots,h_n$, we use Lemma \[lem:pablo.univ\] to rewrite (\[eq:opt.bh\]) as a semidefinite program. For example, in the case where the degrees of $h_1,\ldots,h_n$ are all odd and equal to $d=2d'+1$, we would write: $$\begin{aligned}
{\bar{h}_{m,d} }\mathop{\mathrel{:}}=&\arg &&\min_{h \in P_{n,d}, s_j^+,s_j^- \in \Sigma_{1,2d'}, t_j^+,t_j^-\in \Sigma_{1,2d'-2}} \sum_{i=1}^m \left(Y_i-\sum_{j=1}^n h_j(X_i^j)\right)^2\\
&\text{s.t. } &&K_j^+-h_j'(x_j)=s_j^+(x_j)+t_j^+(x_j) \cdot (u_j-x_j)(x_j-l_j), j=1,\ldots,n,\\
& &&h_j'(x_j)-K_j^-= s_j^-(x_j)+t_j^-(x_j) \cdot (u_j-x_j)(x_j-l_j), j=1,\ldots,n.\\
\end{aligned}$$
We now consider the case where we would like to enforce convexity over $B$ on our polynomial function as in (\[eq:opt.bg\]). Once again, we assume that ${\bar{g}_{m,d} }$ has a separable structure, so we search among polynomial functions $g$ that are separable as well, i.e., $g(x)=\sum_{j=1}^n g_j(x_j)$. From this, it follows that the Hessian of $g$, $H_g$, has a specific structure. Namely, $H_g$ is a diagonal matrix with the $j^{th}$ entry on the diagonal being the univariate polynomial $g_j''(x_j)$. Hence, requiring that $H_g(x)$ be positive semidefinite for any $x \in B$ is equivalent to requiring that $g_j''(x_j) \geq 0, \forall x_j \in [l_j,u_j]$. We can consequently make use of Lemma \[lem:pablo.univ\] again to write problem (\[eq:opt.bg\]) as a semidefinite program. For instance, in the case where the degrees of $g_1,\ldots,g_n$ are all even and equal to $d=2d'$, we can obtain ${\bar{g}_{m,d} }$ by solving $$\begin{aligned}
{\bar{g}_{m,d} }\mathop{\mathrel{:}}=&\arg &&\min_{g \in P_{n,d}, s_j \in \Sigma_{1,2d'-2}, t_j \in \Sigma_{1,2d'-4}} \sum_{i=1}^m (Y_i-g(X_i))^2\\
&\text{s.t. } &&g_j''(x_j)=s_j(x_j)+t_j(x_j) \cdot (u_j-x_j)(x_j-l_j), j=1,\ldots,n.
\end{aligned}$$
Consistency of the sum of squares-based estimators {#sec:consistent}
==================================================
Up until now, we have simply assumed that we are given $m$ data points $(X_i,Y_i)$, where $X_i \in \mathbb{R}^n$ and $Y_i \in \mathbb{R}$, without assuming any relationship between $X_i$ and $Y_i$. The purpose of regression, however, is to infer a relationship between $X_i$ and $Y_i$. As a consequence, it is often assumed that such a relationship exists, i.e., it is assumed that $Y_i$ is equal to a noisy evaluation of a function $f$ at $X_i$. A key property to show then is that the regressor obtained converges towards $f$ in some sense as the number of data points goes to infinity. In other words, when fed with an infinite amount of data, the regression problem recovers the true underlying relationship between the data points. This property is called *consistency* of the regressor. If such a property did not hold, the method proposed would not be very useful. In our case, we show that under certain assumptions on the data, our polynomial regressors ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$ converge to $f$ when $m,d,r$ tend towards infinity. We also call this consistency of ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$, though this is with a slight abuse of language ($m$ is not the only parameter tending towards infinity here). Below, we discuss the exact assumptions that are needed for our main theorems before giving their statements.
\[assmpt:generation.X\] The random vectors $X_1,\ldots,X_m$ are independently and identically distributed (iid) with $E[||X_1||^2]<\infty$.
\[assmpt:box\] The support of the random vectors $X_1,\ldots,X_m$ is a full-dimensional box $B \subseteq \mathbb{R}^n$ defined as in (\[eq:box\]). In other words, $P(X_i \in B)=1$. Furthermore, we assume that for any full-dimensional set $C \subseteq B$, $P(X_i \in C)>0$.
\[assmpt:generation.Y\] There exists a continuous function $f:B \rightarrow \mathbb{R}$ such that $$Y_i=f(X_i)+\nu_i, \forall i=1,\ldots,m,$$ where $\nu_i$ are random variables with support $\mathbb{R}$ and the following characteristics: $$\begin{aligned}
P(\nu_1 \in dz_1,\ldots, \nu_m \in dz_m~|~X_1,\ldots,X_m)&=\prod_{i=1}^m P(\nu_i \in dz_i ~|~ X_i)\\
E[\nu_i|X_i]&=0 \text{ a.s. } \forall i=1,\ldots,m \\
E[\nu_i^2]&=\mathrel{\mathop{:}}\sigma^2<\infty~\forall i=1,\ldots,m.
\end{aligned}$$
Note that Assumptions \[assmpt:generation.X\] and \[assmpt:generation.Y\] imply that the sequence $\{(X_i,Y_i)\}_{i=1,\ldots,m}$ is iid, that $E[\nu_1]=0$, and that $E[Y_1^2]<\infty$. We state the two main theorems of this section (Theorems \[th:gmdr.consistent\] and \[th:hmdr.consistent\]) below.
\[th:gmdr.consistent\] Let $C$ be any compact full-dimensional subset of $B$ such that no point on the boundary of $B$ is in $C$. Assuming that $f$ is twice continuously differentiable and convex over $B$, that ${\tilde{g}_{m,d,r} }$ is as defined in (\[eq:opt.tg\]), and that Assumptions 1 through 3 hold, we have $$\begin{aligned}
\sup_{x \in C} |{\tilde{g}_{m,d,r} }(x)-f(x)| \rightarrow 0 \text{ a.s.}
\end{aligned}$$ as $d,m,r \rightarrow \infty.$
In this theorem, we have convergence to zero when three indices go to infinity. One could wonder whether there are dependencies between the different indices. This is indeed the case: in Appendix \[appendix:cvx.reg\], we show that for any $\epsilon>0$, there exist $d_0$ (a function of $\epsilon$), $m_0$ (a function of $\epsilon$ and $d$), and $r_0$ (a function of $\epsilon,d,$ and $m$) such that, for all $d \geq d_0$, $m \geq m_0$ and $r \geq r_0$, $$\sup_{x \in C} |{\tilde{g}_{m,d,r} }(x)-f(x)| \leq \epsilon \text{ a.s.}$$
Under the sampling assumptions that we have made, we conclude that our estimator is consistent over any compact full-dimensional subset of $B$ that does not share its boundary with $B$. One could extend this result to the box $B$ itself, provided that we make stronger assumptions on the sampling of the pairs of points $(X_i,Y_i)_{i=1,\ldots,m}$. Namely, we would need to assume that a non-negligible fraction of the sample is located at the vertices of $B$. As this is unlikely to occur in practice, we have chosen instead to show this version of the theorem, which comes with much more reasonable assumptions on the sampling of the data.
The proof of Theorem \[th:gmdr.consistent\] is a straightforward combination of Theorem \[th:gmdr.lemma\] and Lemma \[lem:consistent.g\], which is given below, via the triangle inequality.
\[lem:consistent.g\] Let $C$ be any compact full-dimensional subset of $B$ such that no point on the boundary of $B$ is in $C$. Assuming that $f$ is twice continuously differentiable and convex over $B$, that ${\bar{g}_{m,d} }$ is as defined in (\[eq:opt.bg\]), and that Assumptions 1 through 3 hold, we have $$\begin{aligned}
\label{eq:result.th1}
\sup_{x \in C} |{\bar{g}_{m,d} }(x)-f(x)| \rightarrow 0 \text{ a.s.}
\end{aligned}$$ as $m \rightarrow \infty$ and $d \rightarrow \infty$.
All the difficulty of the proof of Theorem \[th:gmdr.consistent\] is in the proofs of Theorem \[th:gmdr.lemma\] and Lemma \[lem:consistent.g\]. The proof of Theorem \[th:gmdr.lemma\] can be found in Appendix \[appendix:approx.proof\] and the proof of Lemma \[lem:consistent.g\] can be found in Appendix \[appendix:cvx.reg\].
We briefly comment on the proof of Lemma \[lem:consistent.g\], which is the more complicated of the two, and contrast it to that of [@lim2012consistency], which has a similar layout and uses similar ideas. The major difference between the two proofs is that we are showing consistency of two very different estimators (typically, the one obtained in [@lim2012consistency] is a piecewise linear function whereas ours is a polynomial function). This generates a myriad of differences between the two proofs, which prevented us from applying their result as is, though we retained the proof’s philosophy. We give an overview of the differences between the proofs below.
As mentioned before, the estimator in [@lim2012consistency] is very different from the one we consider. In particular, it is a function of one parameter only, $m$, whereas ${\bar{g}_{m,d} }$ is a function of two parameters, $m$ and $d$. This requires us to adapt the proof in [@lim2012consistency]. We start by introducing an intermediate convex and deterministic polynomial function $g_d$ of degree $d$ and then show that $\sup_{x \in C} |g_d(x)-{\bar{g}_{m,d} }(x)|\rightarrow 0$ a.s. when $m \rightarrow \infty$ for fixed $d$, before proving that $\sup_{x \in C} |g_d(x)-f(x)| \rightarrow 0$ when $d \rightarrow \infty$. Note that the requirement of $f$ being twice continuously differentiable comes into play to guarantee the existence of a *convex* polynomial function $g_d$ such that $\sup_{x \in C} |g_d(x)-f(x)| \rightarrow 0$ when $d \rightarrow \infty$. Another difference between the two proofs lies in the proof techniques. In particular, Step 4 of the proof in [@lim2012consistency] has been somewhat simplified. There are also some minor differences to take into consideration that relate to the support of $X_1,\ldots,X_m$. Indeed, [@lim2012consistency] assumes that $X_1,\ldots,X_m$ are sampled from $\mathbb{R}^n$ whereas in our case, they are sampled from $B$. Finally, the proof given by Lim and Glynn is for the convex regression case only (potentially combined with monotonicity constraints as well). We adapt the proof to a new setting: that of regressors with bounded derivatives, which is Theorem \[th:hmdr.consistent\] below. We refer the reader to Appendix \[appendix:cvx.reg\] for more details.
\[th:hmdr.consistent\] Let $C$ be any compact full-dimensional subset of $B$ such that no point on the boundary of $B$ is in $C$. Let $K=(K_1^-,K_1^+,\ldots,K_n^-,K_n^+)$ be a vector of finite scalars with $K_i^-<K_i^+$ for all $i=1,\ldots,n$. Assuming that $f$ has $K$-bounded derivatives over $B$, that ${\tilde{h}_{m,d,r} }$ is as defined in (\[eq:opt.th\]), and that Assumptions 1 through 3 hold, we have $$\begin{aligned}
\sup_{x \in C} |{\tilde{h}_{m,d,r} }(x)-f(x)| \rightarrow 0 \text{ a.s.}
\end{aligned}$$ as $m,d,r \rightarrow \infty$.
Similarly to Theorem \[th:gmdr.consistent\], the proof of Theorem \[th:hmdr.consistent\] is a straightforward combination of Theorem \[th:gmdr.lemma\] and Lemma \[lem:consistent.h\], via the triangle inequality.
\[lem:consistent.h\] Let $C$ be any compact full-dimensional subset of $B$ such that no point on the boundary of $B$ is in $C$ and let $K=(K_1^-,K_1^+,\ldots,K_n^-,K_n^+)$ be a vector of finite scalars with $K_i^-<K_i^+$ for all $i=1,\ldots,n$. Assuming that $f$ has $K$-bounded derivatives, that ${\bar{h}_{m,d} }$ is as defined in (\[eq:opt.bh\]), and that Assumptions 1 through 3 hold, we have $$\begin{aligned}
\label{eq:result.th2}
\sup_{x \in C} |{\bar{h}_{m,d} }(x)-f(x)| \rightarrow 0 \text{ a.s.}
\end{aligned}$$ as $m \rightarrow \infty$ and $d \rightarrow \infty$.
This lemma is the counterpart of Lemma \[lem:consistent.g\] and consequently also shares similarities with the proof in [@lim2012consistency]. Again, the difficulty of proving Theorem \[th:hmdr.consistent\] is contained in the proofs of Theorem \[th:gmdr.lemma\] given in Appendix \[appendix:approx.proof\] and of Lemma \[lem:consistent.h\] given in Appendix \[appendix:cvx.reg\].
Theorems \[th:gmdr.consistent\] and \[th:hmdr.consistent\] validate, in some sense, the use of the estimators ${\tilde{g}_{m,d,r} }$ and ${\tilde{h}_{m,d,r} }$ as regressors. Indeed, they approximate $f$ well when $m$ (the number of data points), $d$ (the degree of the polynomial regressor), and $r$ (the degree of the sos multipliers) are large. We now apply the techniques seen so far to synthetic data and to production-output data.
Numerical Results {#sec:num.exps}
=================
In this section, we apply our methodology to different datasets. The first dataset is generated synthetically to showcase the possibilities of our methods. The second dataset is the KLEMS dataset [@jorgenson2012world], which contains production data for 65 industries in the US, from 1947 to 2014. We have also used our model on other datasets, such as housing and wages datasets. Code and results for everything presented here, as well as extensions to other datasets, can be found at <https://github.com/mcurmei627/dantzig/tree/master/Experiments>.
Synthetic experiments
---------------------
For these experiments, we generate uniformly at random $m=400$ datapoints $X_i \in \mathbb{R}^3$ from the cube $[0.5;2]^3$. The corresponding response variable is obtained by taking: $$Y_i=f(X_i)+\epsilon \cdot \sigma(\bar{Y}) \cdot \nu_i, i=1,\ldots,400,$$ where $\sigma(\bar{Y})$ is the empirical standard deviation of $(f(X_1),\ldots,f(X_m))$, $\nu_i$ is a standard normal random variable, and $\epsilon$ is a parameter which we vary across experiments. We consider three different candidates $f_1,f_2$ and $f_3$ for $f$ which reflect different prior knowledge on the function. We take:
(1) $f_1(x_1,x_2,x_3)=\frac{1}{1+e^{x_1+x_2+x_3}}$ (up to scaling and translation), which is a multi-dimensional extension of the sigmoid function. Note that this function is monotone but not convex or concave.
(2) $f_2(x_1,x_2,x_3)=(x_1 + x_2 + x_3)\log(x_1 + x_2 + x_3)$ (up to scaling and translation) which is a multi-dimensional version of $x \log(x)$. Note that this function is convex but not monotone nor symmetric.
(3) $f_3(x_1,x_2,x_3)= \log(e^{x_1} + e^{x_2}+ e^{x_3})$, which is both monotone and convex.
None of the functions considered are polynomials to avoid giving an unfair bias to our method. In fact, we purposefully chose functions which would be difficult for a polynomial to replicate.
When $f=f_1$, we search for a polynomial regressor $p_1$ using (\[eq:opt.th\]) with the constraints involving $K_i^+$ removed, and $K_i^-=0$ for $i=1,2,3$. We refer to this type of regression as *monotone polynomial regression (MPR)*. When $f=f_2$, we search for a polynomial regressor $p_2$ using (\[eq:opt.tg\]). We refer to this type of regression as *convex polynomial regression (CPR)*. Finally, when $f=f_3$, we search for a polynomial regressor $p_3$ using an optimization problem which adds to the constraints in (\[eq:opt.tg\]) the constraints in (\[eq:opt.th\]) that contain $K_i^-$ (which we take equal to 0). We refer to this type of regression as *convex monotone polynomial regression (CMPR)*. We contrast each of these polynomial regressors to their *unconstrained polynomial regression (UPR)* counterpart $p_0$ obtained by solving (\[eq:poly.reg\]).
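The data generation and the unconstrained baseline (UPR) can be sketched as follows, assuming (\[eq:poly.reg\]) is the plain least-squares fit over all monomials of degree at most $d$; the shape-constrained variants would add the sos constraints on top of this least-squares objective.

```python
import numpy as np
from itertools import combinations_with_replacement

# Synthetic data from f3 (monotone and convex), as described above:
# X_i uniform on [0.5, 2]^3, Y_i = f3(X_i) + eps * std(f3(X)) * nu_i.
rng = np.random.default_rng(1)
m, eps = 400, 0.7
X = rng.uniform(0.5, 2.0, size=(m, 3))
fX = np.log(np.exp(X[:, 0]) + np.exp(X[:, 1]) + np.exp(X[:, 2]))
Y = fX + eps * np.std(fX) * rng.standard_normal(m)

def monomial_features(X, d):
    # all monomials of degree <= d in the columns of X
    cols = [np.ones(len(X))]
    for k in range(1, d + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# Unconstrained degree-4 polynomial regression via ordinary least squares.
Phi = monomial_features(X, 4)
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
train_rmse = float(np.sqrt(np.mean((Phi @ coef - Y) ** 2)))
assert train_rmse <= np.std(Y) + 1e-9   # at least as good as a constant fit
```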
Note that for $p_1$ and $p_3$, the constraints involving $K_i^+, i=1,\ldots,n$ are removed and $K_{i}^-=0, i=1,\ldots,n$. In other words, we simply require the regressors to be monotone rather than imposing general derivative bounds. The reason for this choice is linked to the prevalence of monotone functions in applications. Examples which involve bounded-derivative functions in their full generality can be found at <https://github.com/mcurmei627/dantzig/tree/master/Experiments>.
To obtain $p_1,p_2$ and $p_3$, we further need to specify the values of the parameters $d$ (the degree of the regressor) and $r$ (the degree of the sos polynomial multipliers). We use $d$ here as a parameter of the experiments so it is specified as needed. For $r$, we choose it in such a way that the degrees of the polynomials that appear in either side of the constraints of (\[eq:opt.tg\]) and (\[eq:opt.th\]) match. An example of how to do this can be found in Section \[sec:sep\]. We compare the performance of the shape-constrained regressors (in red) and the unconstrained regressor (in blue) on the basis of the Root Mean Squared Error (RMSE) as either $d$ or $\epsilon$ vary. Each experiment is repeated over 5 trials. On the graphs, the average test RMSE is displayed (blue or red full lines) together with their 90% confidence intervals (blue or red shaded areas). We also display the average train RMSE (blue or red dotted lines).
In the first set of experiments, we compare the RMSE of the shape-constrained regressor against that of the unconstrained regressor as the noise-scaling factor $\epsilon$ is increased, with $d=6$; see Figure \[fig:robustness\]. In both graphs, we observe the same phenomenon: constraining the shape leads to more robust predictions, with the test RMSE being significantly lower for the shape-constrained regressor, particularly when the noise-scaling factor is large. This is in opposition to the training performance: the unconstrained regressor tends to overfit the noise in training, thus leading to a worse generalization error at test time.
In the second set of experiments, we compare the RMSE of the shape-constrained regressor against that of the unconstrained regressor for varying degree $d$ and $\epsilon=0.7$; see Figure \[main:c\]. We focus here on the CMPR case, but similar behaviors can be observed for other shape-constrained regressors (see <https://github.com/mcurmei627/dantzig/tree/master/Experiments> for other plots). As expected, the shape constraints become more valuable as the degree of the polynomial regressor increases. Indeed, in the higher-degree cases, the unconstrained polynomial regressor has the ability to significantly overfit the noise (as can be seen from the training curve). This is much less the case for the shape-constrained regressor, as it incorporates additional information.
In the third set of experiments, we project the shape-constrained regressor and the unconstrained regressor onto one coordinate so as to contrast them with the true underlying function (dotted black line); see Figure \[main:d\]. Once again, we focus on the CMPR case, but similar graphs can be obtained for other shape-constrained regressors. Note that on the graph, the shape-constrained regressor is very close to the true underlying function, which is not the case for the unconstrained regressor.
Experiments on the KLEMS dataset {#subsec:econ}
--------------------------------
The USA KLEMS data (which can be found at <http://www.worldklems.net/data.htm>) contains yearly gross-output production data $Out$ for 65 industries in the US, from 1947 to 2014. For each industry, the dataset also contains yearly inputs such as Capital (K), Labor (L) and Intermediate (I) inputs, adjusted for inflation. This dataset is a good application case for us as $Out$ is considered to be a nondecreasing function of $K$, $L$ and $I$, and concave in these three variables by virtue of diminishing returns. Obtaining a regressor of $Out$ is typically done by fitting a *Cobb-Douglas production function* to the data, i.e., finding $(a,b,c,d)$ such that the function $$Out=a \cdot K^b \cdot L^c \cdot I^d$$ is as close as possible to the observed data. This can be done via linear regression by working in log-space. By imposing some constraints on $a, b,c$ and $d$ (such as $b,c,d>0$, $b+c+d<1$ and $a>0$), one can obtain the nondecreasing concave shape that is required. We propose to replace this strategy by our methodology. For simplicity, we fit a polynomial constrained to be concave and nondecreasing in each feature for each industry independently. This is a potential limitation in terms of accuracy, as more sophisticated algorithms exist that use longitudinal features to capture inter-industry relationships and trends. However, as our purpose is mainly to illustrate the advantages of our method over the standard Cobb-Douglas approach, we limit ourselves to this setting. Since the data is temporal, we perform a temporal split for our training-testing splits and fit a degree 4 polynomial to our data. The results obtained are given in Figure \[fig:KLEMS\].
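On synthetic data (with made-up coefficients, not values estimated from KLEMS), the log-space fit of a Cobb-Douglas function reduces to ordinary least squares; the sign and sum constraints on $a,b,c,d$ would be imposed on top of this in practice.

```python
import numpy as np

# Log-space Cobb-Douglas fit on synthetic data: Out = a K^b L^c I^d
# becomes  log Out = log a + b log K + c log L + d log I,
# which is linear in (log a, b, c, d).
rng = np.random.default_rng(2)
n = 60
K, L, I = rng.uniform(1.0, 10.0, (3, n))                 # made-up inputs
Out = 2.0 * K**0.3 * L**0.4 * I**0.2 * np.exp(0.05 * rng.standard_normal(n))

A = np.column_stack([np.ones(n), np.log(K), np.log(L), np.log(I)])
theta, *_ = np.linalg.lstsq(A, np.log(Out), rcond=None)
a_hat, b_hat, c_hat, d_hat = np.exp(theta[0]), theta[1], theta[2], theta[3]

# The estimates recover the (made-up) true exponents b=0.3, c=0.4, d=0.2.
assert abs(b_hat - 0.3) < 0.1
assert abs(c_hat - 0.4) < 0.1
assert abs(d_hat - 0.2) < 0.1
```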
As can be seen in the figure, our method outperforms the traditional Cobb-Douglas technique on 50 out of the 65 industries, sometimes quite significantly. This is unsurprising as our method allows for more flexibility and variety in the functional form of the regressor while maintaining the original shape constraints.
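For concreteness, the baseline Cobb-Douglas fit reduces to ordinary least squares in log-space. The following is a minimal sketch on synthetic data (the data and parameter values are made up for illustration; they are not the KLEMS series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic production data drawn from a known Cobb-Douglas function
# Out = a * K^b * L^c * I^d (parameter values chosen arbitrarily).
a, b, c, d = 2.0, 0.3, 0.4, 0.2
K, L, I = (rng.uniform(1.0, 10.0, 200) for _ in range(3))
Out = a * K**b * L**c * I**d

# Taking logs makes the model linear:
# log Out = log a + b log K + c log L + d log I.
X = np.column_stack([np.ones_like(K), np.log(K), np.log(L), np.log(I)])
coef, *_ = np.linalg.lstsq(X, np.log(Out), rcond=None)
a_hat, b_hat, c_hat, d_hat = np.exp(coef[0]), *coef[1:]
```

In the shape-constrained variant of this baseline, one would additionally impose $b,c,d>0$, $b+c+d<1$, and $a>0$ on the regression; our method instead imposes concavity and monotonicity directly on a polynomial regressor.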
Conclusion and future directions
================================
In this paper, we considered the problem of shape-constrained polynomial regression. This problem is an extension of the (unconstrained) least-squares polynomial regression problem and can be valuable for any problem where additional information relating to the shape of the regressor is known. Among other things, incorporating this additional knowledge into the model can lead to more robust regressors and regressors with improved generalization error. We focused here on two types of shape constraints: bounded-derivative constraints (which include as sub-cases monotonicity and Lipschitz-continuity constraints) and convexity constraints, as they appear most regularly in applications. It should be noted however that any shape constraint which can be rewritten as enforcing nonnegativity of some polynomial can be encoded via the techniques presented here.
By leveraging tools from real algebra, we showed that we can tackle shape-constrained polynomial regression using semidefinite programming. We further showed that the resulting semidefinite programs have some attractive computational properties. In particular, their size is not impacted by the number of data points, they scale polynomially with the number of features, and they output a regressor which is a consistent estimator of the underlying data-generating function.
Among possible future directions, one could imagine further improving the computational attributes of our techniques by leveraging recent developments in scalability of semidefinite programs such as the ones presented in this survey [@majumdar2019recent]. It could also be interesting to explore more deeply the economics application described in Section \[subsec:econ\]. In particular, in view of the structure of the Cobb-Douglas functions, one could attempt to fit a polynomial in the *logarithm* of the features to the data in log space, rather than a polynomial of the features themselves in feature space. The difficulty would then be to enforce the appropriate constraints on the regressor in log space that would translate back to monotonicity and concavity constraints in feature space. This may be possible to do by developing theory akin to the sum of squares theory presented here, but for polynomials of logarithms of variables rather than polynomials of variables.
Proof of Theorem \[th:compl.deriv\] {#appendix:np.hard}
===================================
[*Proof of Theorem \[th:compl.deriv\].*]{} We provide a reduction from MAX-CUT. Recall that in an unweighted undirected graph $G=(V,E)$ with no self-loops, a *cut* partitions the $n$ nodes of the graph into two sets, $S$ and $\bar{S}$. The size of the cut is given by the number of edges connecting a node in $S$ to a node in $\bar{S}$. MAX-CUT is then the following problem: given a graph $G$ and an integer $k$, test whether $G$ has a cut of size at least $k$. It is well-known that this problem is NP-hard [@GareyJohnson_Book].
We denote by $A$ the adjacency matrix of the graph, i.e., $A$ is an $n \times n$ matrix such that $A_{ij}=1$ if $\{i,j\} \in E$ and $A_{ij}=0$ otherwise, and by $\gamma\mathrel{\mathop{:}}=\max_{i} \{A_{ii}+\sum_{j\neq i} |A_{ij}|\}$. Note that $\gamma$ is an integer (it corresponds to the maximum degree in the graph) and an upper bound on the largest eigenvalue of $A$ from Gershgorin’s circle theorem [@Gershgorin].
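As a quick numerical illustration of this bound (not part of the proof; the graph is an arbitrary example), one can check that $\gamma$ dominates the largest eigenvalue of $A$:

```python
import numpy as np

# Adjacency matrix of a 4-cycle: no self-loops, maximum degree 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# gamma = max_i (A_ii + sum_{j != i} |A_ij|); with no self-loops
# this is the maximum degree of the graph.
gamma = max(A[i, i] + sum(abs(A[i, j]) for j in range(len(A)) if j != i)
            for i in range(len(A)))

# By Gershgorin's circle theorem, every eigenvalue of A is at most gamma,
# so A - gamma * I is negative semidefinite.
lam_max = np.linalg.eigvalsh(A).max()
```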
We show that $G$ does not have a cut of size greater than or equal to $k$ if and only if the partial derivative with respect to $x_1$ of the polynomial $$\begin{aligned}
p(x_1,\ldots,x_n)=&\frac{1}{4} \sum_{j=2}^n x_1^2A_{1j}x_j+\frac{1}{2}x_1\cdot \left(\sum_{1<i<j\leq n} x_i A_{ij}x_j\right)-\frac{\gamma}{12} x_1^3-\frac{\gamma}{4} x_1\sum_{i=2}^n x_i^2\\
&+x_1 \cdot \left( k+\frac{n\gamma}{4}-\frac14 e^TAe\right)
\end{aligned}$$ is greater than or equal to $K_1^-=0$ over $B=[-1,1]^n$. By setting $K_1^+=\ldots=K_n^+=\infty$ and $K_2^-=\ldots=K_n^-=-\infty$, this is equivalent to $p$ having $K$-bounded derivatives over $B$.
The partial derivative of $p$ with respect to $x_1$ is given by $$\begin{aligned}
\frac{\partial{p}(x)}{\partial x_1}&=\frac{1}{2}\sum_{j=2}^n x_1A_{1j}x_j+\frac{1}{2}\sum_{1<i<j \leq n} x_iA_{ij}x_j-\frac{\gamma}{4} x_1^2-\frac{\gamma}{4} \sum_{i=2}^n x_i^2 +(k+\frac{n\gamma}{4}-\frac14 e^TAe)\\
&=\frac{1}{4}\sum_{i,j} x_iA_{ij}x_j-\frac{\gamma}{4} \sum_{i=1}^n x_i^2+(k+\frac{n\gamma}{4}-\frac14 e^TAe)\\
&=\frac{1}{4} x^T(A-\gamma I)x+(k+\frac{n\gamma}{4}-\frac14 e^TAe).
\end{aligned}$$ Hence, we show that $G$ does not have a cut of size greater than or equal to $k$ if and only if $$\frac14 x^T(A-\gamma I)x+k+\frac{n\gamma}{4}-\frac14 e^TAe \geq 0,~\forall x\in B.$$ The converse implication is easy to prove: if $\frac{\partial p(x)}{\partial x_1}\geq 0$ for all $x\in B$, then, in particular, $\frac{\partial p(x)}{\partial x_1}\geq 0$ for $x \in \{-1,1\}^n.$ When restricting ourselves to $x \in \{-1,1\}^n$, we have that $\gamma x^Tx=\gamma n$, and so $$k \geq \frac{1}{4}e^TAe-\frac{1}{4}x^TAx,~\forall x\in \{-1,1\}^n.$$ Any cut in $G$ can be encoded by a vector $x \in \{-1,1\}^n$ by taking $x_i=1$ if node $i$ is on one side of the cut and by taking $x_i=-1$ if node $i$ is on the other side of the cut. In this set-up, the size of the cut is given by $\frac{1}{4}e^TAe-\frac{1}{4}x^TAx$ [@maxcut_gw]. Hence, the previous inequality implies that all cuts in $G$ are of size less than or equal to $k$.
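The cut-size identity used above, $\text{cut}(x)=\frac{1}{4}e^TAe-\frac{1}{4}x^TAx$ for $x\in\{-1,1\}^n$, is easy to confirm by brute force on a small example graph (a hypothetical instance, for illustration only):

```python
import numpy as np
from itertools import product

# Example graph: a triangle {0,1,2} with a pendant edge {2,3}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
e = np.ones(n)

# For every +/-1 labeling x, the number of edges crossing the induced
# partition equals (e^T A e - x^T A x) / 4.
checks = []
for signs in product([-1.0, 1.0], repeat=n):
    x = np.array(signs)
    cut = sum(1 for i, j in edges if x[i] != x[j])
    checks.append(cut == (e @ A @ e - x @ A @ x) / 4)
```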
For the forward implication, as mentioned above, if $G$ does not have a cut of size greater than or equal to $k$, then we have $$k \geq \frac{1}{4}e^TAe-\frac{1}{4}x^TAx,~\forall x\in \{-1,1\}^n,$$ which is equivalent to $$\begin{aligned}
\label{eq:ineq}
\frac14 x^T(A-\gamma I)x \geq -k-\frac{n\gamma}{4}+\frac14 e^TAe,~\forall x\in \{-1,1\}^n.
\end{aligned}$$ Now, by definition of $\gamma$, $A-\gamma I \preceq 0$, i.e., $x^T (A-\gamma I) x$ is concave. Let $y\in B$. We have $y=\sum_{i=1}^{2^n} \lambda_i x_i$ where $x_i$ are the corners of $B$, which are in $\{-1,1\}^n$, $\lambda_i \geq 0$ $\forall i=1,\ldots,2^n$, and $\sum_{i=1}^{2^n} \lambda_i=1$. By virtue of concavity of $y \mapsto y^T(A-\gamma I)y$ and (\[eq:ineq\]), $$\frac{1}{4} y^T(A-\gamma I) y \geq \sum_{i=1}^{2^n} \lambda_i \cdot \frac14 x_i^T(A-\gamma I)x_i \geq \sum_{i=1}^{2^n} \lambda_i (-k-\frac{n \gamma }{4}+\frac14 e^TAe)=-k-\frac{n \gamma }{4}+\frac14 e^TAe.$$ This concludes the proof.
Proof of Theorem \[th:gmdr.lemma\] {#appendix:approx.proof}
==================================
We show the more general result given in Theorem \[th:conv.anal\]. This immediately implies Theorem \[th:gmdr.lemma\] as shown below.
\[th:conv.anal\] Let $f:\mathbb{R}^N \rightarrow \mathbb{R}$ be a strictly convex and coercive function and let $X \subseteq \mathbb{R}^N$ be a closed and convex set. Furthermore, let $\{X_k\}_{k \geq 1}$ be an increasing sequence (with respect to inclusion) of closed and convex sets with $X_k \subseteq X$ for all $k\geq 1.$ We denote by $c^*$ the (unique) minimizer of $f$ over $X$ and by $c_k$ the (unique) minimizer of $f$ over $X_k$ for all $k\geq 1$.
If $\lim_{k \rightarrow \infty} |f(c_k)-f(c^*)|=0$ then the limit $\lim_{k \rightarrow \infty} ||c_k-c^*||_2$ exists and is equal to zero.
First, note that if $f(c_1)=f(c^*)$ then the theorem is immediate. We assume for the rest of the proof that $f(c_1)>f(c^*)$. Let $\delta_0>0$ be such that $\forall k\in \mathbb{N}, \exists k'>k$ such that $||c_{k'}-c^*||_2>\delta_0$. To prove the theorem, it is enough to show that $$\begin{aligned}
\label{eq:inclusion}
\exists \epsilon_0>0 \text{ s.t. } \{c \in X~|~ |f(c)-f(c^*)|\leq \epsilon_0\} \subseteq \{c \in \mathbb{R}^N ~|~||c-c^*||_2 \leq \delta_0\}.
\end{aligned}$$ Indeed, as $c_{k'} \in X_{k'} \subseteq X$ and $c_{k'} \notin \{c \in \mathbb{R}^N ~|~||c-c^*||_2 \leq \delta_0\}$, (\[eq:inclusion\]) implies that for any $k \in \mathbb{N}$, there exists $\tilde{k}=k'$ such that $|f(c_{\tilde{k}})-f(c^*)|>\epsilon_0$, which is the contrapositive of the theorem. To show (\[eq:inclusion\]), let $I \mathrel{\mathop{:}}=[0,f(c_1)-f(c^*)]$ and consider the following optimization problem parametrized by $\epsilon \in I$: $$\label{eq:opt.proof}
\begin{aligned}
\delta(\epsilon)=&\max_{c \in X} ||c-c^*||_2\\
&\text{s.t. } f(c) \leq f(c^*)+\epsilon.
\end{aligned}$$ For any $\epsilon \in I$, $\delta(\epsilon)$ exists as $c^*$ is a feasible solution and is achieved as $c \mapsto ||c-c^*||_2$ is continuous and the feasible set is a compact set ($f$ being coercive). Furthermore, $\delta(\epsilon)=0$ if and only if $\epsilon=0$ as $c^*$ is the unique minimizer of $f$ over $X$. This implies that the interval $(\delta(0),\delta(f(c_1)-f(c^*))]$ is non-empty. Without loss of generality, we assume that $\delta_0$ belongs to this interval. Indeed, if $\delta_0> \delta(f(c_1)-f(c^*))$, then we have that, $\forall k \in \mathbb{N}$, $\exists k'>k$ such that $||c_{k'}-c^*||_2>\delta(f(c_1)-f(c^*))$ and we can simply replace $\delta_0$ with $\delta(f(c_1)-f(c^*))$ and start the proof over. Now, assuming $\epsilon \mapsto \delta(\epsilon)$ is continuous, it follows from the intermediate value theorem that there exists $\epsilon_0 \in (0,f(c_1)-f(c^*)]$ such that $\delta_0=\delta(\epsilon_0)$, which implies (\[eq:inclusion\]).
As a consequence, to finish the proof, it only remains to show that $\epsilon \mapsto \delta(\epsilon)$ is continuous. Let $Y=\{c \in X ~|~ f(c) \leq f(c_1)\}$. To show continuity, we use a famous result of Berge [@berge Chapter VI, Maximum Theorem], which states that $\epsilon \mapsto \delta(\epsilon)$ is continuous if (i) $\delta(\epsilon)$ is finite for any $\epsilon \in I$; (ii) $c \mapsto ||c-c^*||_2$ is continuous; (iii) the correspondence $$\begin{aligned}
\Gamma:~ & I \rightarrow Y \\
&\epsilon \mapsto \{c \in X~|~ f(c) \leq f(c^*)+\epsilon\}
\end{aligned}$$ is compact-valued, i.e., for any $\epsilon>0$, $\Gamma(\epsilon)$ is compact; (iv) $\Gamma$ is both upper and lower hemi-continuous. It is straightforward to see that (i)-(iii) hold. For (iv), upper hemicontinuity follows from [@berge Chapter VI, Corollary of Theorem 7], which is itself a consequence of $Y$ being compact and [@berge Chapter VI, Example after Theorem 3] that states that $\Gamma$ as defined is a closed mapping. For lower hemicontinuity, we use the sequential definition. Take $\epsilon \in I$ and let $\{\epsilon_m\}_m$ be a sequence converging to $\epsilon$ and $\tilde{c}$ an element of $\Gamma(\epsilon)$. We take $\{\epsilon_{m_k}\}_k$ to be a monotone subsequence of $\{\epsilon_m\}_m$, which always exists and also converges to $\epsilon$. We need to show that there exists $c_{m_k} \in \Gamma(\epsilon_{m_k})$ such that $c_{m_k} \rightarrow \tilde{c}$. We distinguish two different cases. If $\{\epsilon_{m_k}\}_k$ is decreasing to $\epsilon$, simply take $c_{m_k}=\tilde{c}$ for all $k$. This implies that $c_{m_k} \in \Gamma(\epsilon_{m_k}), \forall k,$ as $f(c_{m_k})=f(\tilde{c}) \leq f(c^*)+\epsilon \leq f(c^*)+\epsilon_{m_k}$ for all $k$, and that $\lim_{k \rightarrow \infty} c_{m_k}=\tilde{c}$. If $\{\epsilon_{m_k}\}_k$ is increasing to $\epsilon$, then we take $$c_{m_k}=\arg \min_{c\in \Gamma(\epsilon_{m_k})} ||c-\tilde{c}||_2, \forall k.$$ As $\Gamma(\epsilon_{m_k})$ is compact and convex, this minimum exists. (It can possibly be the case that $c_{m_k}=\tilde{c}$ for some $k=k_0$ if $\tilde{c} \in \Gamma(\epsilon_{m_{k_0}})$.) We have that $\Gamma(\epsilon_{m_k})\subseteq \Gamma(\epsilon_{m_{k+1}}), \forall k$ with the closure of $\cup_{k\geq 1} \Gamma(\epsilon_{m_k})$ being equal to $\Gamma(\epsilon)$. As the sequence $\{||c_{m_k}-\tilde{c}||_2\}$ is nonincreasing, its limit exists and is equal to the infimum of $||c-\tilde{c}||_2$ over the closure of $\cup_{k\geq 1} \Gamma(\epsilon_{m_k})$. 
As this closure is equal to $\Gamma(\epsilon)$ and $\tilde{c} \in \Gamma(\epsilon)$, this infimum is zero. Thus, $\lim_{k \rightarrow \infty} c_{m_k}=\tilde{c}$.
[*Proof of Theorem \[th:gmdr.lemma\].*]{} We identify the set of polynomials in $P_{n,d}$ with the set of coefficients of the polynomials in $\mathbb{R}^{\binom{n+d}{d}}$ and use the same notation ($c$ here) for both the polynomial and its vector of coefficients.
For (\[eq:cvgce.g\]), we take $f(c)=\sum_{i=1}^m (Y_i-c(X_i))^2$ which has all the properties required (when $m$ is large enough as assumed here). The set $X$ here is the set $\{c~|~H_c(x)\succeq 0, \forall x\in B\}$ and the sets $X_k$ are the sets $\{c~|~\exists S_0,\ldots,S_n \in \Sigma^M_{n,2k,n} \text{ s.t. } H_c(x)=S_0(x)+g_1(x)S_1(x)+\ldots+g_n(x)S_n(x)\}$. Finally, $c_k$ corresponds to $\tilde{g}_{m,d,k}$ and $c^*$ corresponds to ${\bar{g}_{m,d} }$. From Theorem \[thm:scherer.hol\], we have that $\lim_{k \rightarrow \infty} |f(c_k)-f(c^*)|=0$. From Theorem \[th:conv.anal\], it follows that $\lim_{k \rightarrow \infty} ||c_k -c^*||_2=0$. Using Cauchy-Schwarz and the fact that $||x||_2$ is bounded as $x \in B$ enables us to conclude that $\lim_{k \rightarrow \infty}\sup_{x \in B} |c_k(x)-c^*(x)|=0$.
For (\[eq:cvgce.h\]), we take $f(c)=\sum_{i=1}^m (Y_i-c(X_i))^2$ again. The set $X$ here is the set $\{c~|~K_i^- \leq \frac{\partial c(x)}{\partial x_i} \leq K_i^+, \forall x\in B, \forall i=1,\ldots,n\}$ and the sets $X_k$ are the sets $\{c~|~\exists s_{i0}^{\pm},\ldots,s_{in}^{\pm} \in \Sigma_{n,2k} \text{ s.t. } K_i^+-\frac{\partial c(x)}{\partial x_i}= s_{i0}^+(x)+ s_{i1}^{+}(x)g_1(x)+\ldots+s_{in}^+(x)g_n(x), i=1,\ldots,n,\frac{\partial c(x)}{\partial x_i}-K_i^-= s_{i0}^-(x)+ s_{i1}^{-}(x)g_1(x)+\ldots+s_{in}^-(x)g_n(x), i=1,\ldots,n\}$. Finally, $c_k$ corresponds to $\tilde{h}_{m,d,k}$ and $c^*$ corresponds to ${\bar{h}_{m,d} }$. From Theorem \[thm:putinar\] this time, we have that $\lim_{k \rightarrow \infty} |f(c_k)-f(c^*)|=0$. The conclusion follows as above.
Proofs of Lemmas \[lem:consistent.g\] and \[lem:consistent.h\] {#appendix:cvx.reg}
==============================================================
The proofs of Lemmas \[lem:consistent.g\] and \[lem:consistent.h\] are divided into three steps, each step relying on separately proved results. A road map to these two proofs is given in Table \[tab:proofs\].
|                     | Lemma \[lem:consistent.g\]     | Lemma \[lem:consistent.h\]     | Appendix to read                |
|---------------------|--------------------------------|--------------------------------|---------------------------------|
| Step 1              | Proposition \[prop:approx.gd\] | Proposition \[prop:approx.hd\] | Appendix \[subappendix:approx\] |
| Step 2              | Corollary \[cor:min.ineq.g\]   | Corollary \[cor:min.ineq.h\]   | Appendix \[subappendix:min\]    |
|                     | Proposition \[prop:prop.gd\]   | Proposition \[prop:prop.hd\]   | Appendix \[subappendix:props\]  |
| Step 3              | Proposition \[prop:prop.gd\]   | Proposition \[prop:prop.hd\]   | Appendix \[subappendix:props\]  |
| Proof of the Lemmas |                                |                                | Appendix \[subappendix:proof\]  |
: Road map for the proofs of Lemmas \[lem:consistent.g\] and \[lem:consistent.h\][]{data-label="tab:proofs"}
By proceeding this way, we are able to collapse the proofs of Lemmas \[lem:consistent.g\] and \[lem:consistent.h\] into one single proof: all the differences are contained in the previously mentioned propositions and corollaries. Note that we have placed on the same line in columns 2 and 3 of Table \[tab:proofs\] the results that can be viewed as convex/bounded derivative counterparts of one another.
We denote by $C_{n,d}$ the set of polynomials of degree $d$ in $n$ variables that are convex over the box $B$ and by $K_{n,d}$ the set of polynomials of degree $d$ in $n$ variables that have $K$-bounded derivatives over $B$. Note that here $K$ is assumed to be a vector of *real-valued* scalars.
Weierstrass-type theorems for functions with shape constraints {#subappendix:approx}
--------------------------------------------------------------
Central to the proofs of Lemma \[lem:consistent.g\] and Lemma \[lem:consistent.h\] is the idea that one can approximate, over a box $B$, any convex-constrained function or function with bounded derivatives arbitrarily well by a polynomial with the same characteristics. This is a similar result to the Weierstrass theorem, with the added complication of the shape constraints, which prevents us from using the Weierstrass theorem as is. The proofs of these results rely on the following proposition.
\[prop:Bernstein\]
Consider the Bernstein multivariate polynomial of degree $d$ and in $n$ variables, defined over $[0,1]^n$: $$B_d(f,x)=\sum_{0\leq j_1,\ldots,j_n\leq d} f\left(\frac{j_1}{d},\ldots,\frac{j_n}{d}\right) C_d^{j_1}\ldots C_d^{j_n}x_1^{j_1}(1-x_1)^{d-j_1}\ldots x_n^{j_n}(1-x_n)^{d-j_n},$$ where $$C_d^{j_i}=\frac{d!}{j_i!(d-j_i)!}.$$
Let $m$ be an integer and assume that $f$ is $m$ times continuously differentiable. Let $k=(k_1,\ldots,k_n)$ be a multi-index such that $\sum_{i=1}^n |k_i|\leq m$ and denote by $$\partial^k f=\frac{\partial^k f(x)}{\partial x_1^{k_1}\ldots \partial x_n^{k_n}}.$$
Then, for any $k$ such that $\sum_{i=1}^{n}|k_i|\leq m$, we have $$\sup_{x\in [0,1]^n}|\partial^k B_d(f,x)-\partial^k f(x)| \rightarrow 0 \text{ as } d\rightarrow \infty.$$
This result can easily be extended to hold over any box $B \subset \mathbb{R}^n$ by simply scaling and translating the variables in the Bernstein polynomials. We use this latter version in our case.
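As a one-variable illustration of Proposition \[prop:Bernstein\] (a sketch, with an arbitrarily chosen smooth $f$), the uniform error of the Bernstein approximation shrinks as the degree grows:

```python
import numpy as np
from math import comb

def bernstein(f, d, xs):
    """Evaluate the degree-d Bernstein approximation of f on [0,1] at points xs."""
    js = np.arange(d + 1)
    weights = np.array([comb(d, j) for j in js], dtype=float)
    # Basis: C(d,j) * x^j * (1-x)^(d-j), assembled for all evaluation points at once.
    basis = weights * xs[:, None]**js * (1.0 - xs[:, None])**(d - js)
    return basis @ f(js / d)

f = np.exp
xs = np.linspace(0.0, 1.0, 201)
errors = [np.max(np.abs(bernstein(f, d, xs) - f(xs))) for d in (5, 20, 80)]
```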
\[prop:approx.gd\] Let $f$ be a twice continuously differentiable function defined over $B$ such that $H_f(x) \succeq 0$, for all $x \in B$ (i.e., $f$ is convex over $B$). Define $$g_d \mathrel{\mathop{:}}= \arg \min_{g \in C_{n,d}} \sup_{x \in B} |f(x)-g(x)|.$$ For any $\epsilon>0$, there exists $d$ such that $$\sup_{x \in B} |g_d(x)-f(x)| <\epsilon.$$
As defined, $g_d$ is guaranteed to exist as the objective function is coercive in the coefficients of $g$ and the set $C_{n,d}$ is closed; see, e.g., Appendix A in [@bertsekasnonlin]. However, it may not necessarily be unique, so we pick $g_d$ to be one of the existing minimizers.
Let $\epsilon>0$ and $M \mathrel{\mathop{:}}=\max_{x \in B} \frac12 \sum_{i=1}^n x_i^2$. From Proposition \[prop:Bernstein\], as $f$ is twice continuously differentiable, there exists a polynomial $q$ of degree $d$ such that $$\sup_{x \in B} |f(x)-q(x)| \leq \frac{\epsilon}{2(1+2nM)}$$ and $$\begin{aligned}
\label{eq:second.deriv}
\sup_{x \in B} \left|\frac{\partial^2 f(x)}{\partial x_i \partial x_j}- \frac{\partial^2 q(x)}{\partial x_i \partial x_j}\right| \leq \frac{\epsilon}{2(1+2nM)}, \forall i,j=1,\ldots,n.
\end{aligned}$$ Let $\Delta H(x)=H_q(x)-H_f(x)$. As $f$ and $q$ are twice continuously differentiable, the entries of $\Delta H(x)$ are continuous in $x$. This implies that $x \mapsto \lambda_{\min}(\Delta H(x))$ is continuous [@bhatia2013matrix Corollary VI.1.6]. Hence, if we let $$\Lambda\mathrel{\mathop{:}}=\min_{x\in B} \lambda_{\min}(\Delta H(x)),$$ it follows that there exists $x_0 \in B$ such that $\Lambda=\lambda_{\min}(\Delta H(x_0)).$ We now bound this quantity. Recall that for a symmetric $n \times n$ real-valued matrix $M$, $||M||_{\max}$ is the max-norm of $M$, i.e., its largest entry in absolute value, $||M||_2=\max \{|\lambda_{\min}(M)|,|\lambda_{\max}(M)|\}$, and $||M||_2 \leq n ||M||_{\max}$. From (\[eq:second.deriv\]), we have that $$||\Delta H(x_0)||_{\max} \leq \frac{\epsilon}{2(1+2nM)},$$ which implies that $$\max \{|\lambda_{\min}(\Delta H(x_0))|, |\lambda_{\max}(\Delta H(x_0))| \} \leq \frac{n\epsilon}{2(1+2nM)},$$ and so $$-\frac{n\epsilon}{2(1+2nM)}\leq \Lambda \leq \frac{n\epsilon}{2(1+2nM)}.$$ By definition of $\Lambda$, we thus have $\Delta H(x)\succeq -\frac{n\epsilon}{2(1+2nM)}I$ for all $x \in B.$
Now, consider $$p(x)\mathrel{\mathop{:}}=q(x)+\frac{n\epsilon}{2(1+2nM)}x^Tx.$$ For any $x \in B$, we have $$\begin{aligned}
|f(x)-p(x)|\leq |f(x)-q(x)|+|q(x)-p(x)| \leq \frac{\epsilon}{2(1+2nM)}+\frac{n\epsilon}{2(1+2nM)}\cdot 2M \leq \frac{\epsilon}{2}<\epsilon.
\end{aligned}$$ Using our previous result on $\Delta H(x)$, the definition of $p$, and the fact that $H_f(x)\succeq 0$, we also have $$\begin{aligned}
H_p(x)=H_p(x)-H_q(x)+H_q(x)-H_f(x)+H_f(x) &\succeq \frac{2n\epsilon}{2(1+2nM)}I-\frac{n\epsilon}{2(1+2nM)}I\\
&\succeq \frac{n \epsilon}{2(1+2nM)} I \succ 0.
\end{aligned}$$ From this, it follows that there exists a degree $d$ and a polynomial $p \in C_{n,d}$ such that $\sup_{x \in B} |f(x)-p(x)|<\epsilon.$ The definition of $g_d$ as the minimizer of $\sup_{x \in B} |f(x)-g(x)|$ for any $g \in C_{n,d}$ enables us to obtain the result.
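The convexification step in this proof (replacing $q$ by $q+\mu\, x^Tx$) uniformly lifts the Hessian spectrum by $2\mu$. A small numerical check with a hypothetical quadratic $q$, whose Hessian is constant in $x$:

```python
import numpy as np

# Hessian of some (non-convex) quadratic q, and the shift coefficient mu.
H_q = np.array([[0.5, 1.0],
                [1.0, -0.3]])
mu = 2.0

# Adding mu * x^T x to q adds 2 * mu * I to its Hessian,
# raising every eigenvalue by exactly 2 * mu.
H_p = H_q + 2.0 * mu * np.eye(2)
shift = np.linalg.eigvalsh(H_p) - np.linalg.eigvalsh(H_q)
```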
We now show an analogous lemma but for the case where $f$ has $K$-bounded derivatives.
\[prop:approx.hd\]
Let $$K=(K_1^-,K_1^+,\ldots,K_n^-,K_n^+)$$ be a vector of finite scalars with $K_i^-<K_i^+$ for all $i=1,\ldots,n$ and let $f$ be a continuously differentiable function defined over $B$ with $K$-bounded derivatives. Define $$h_d \mathrel{\mathop{:}}= \arg \min_{g \in K_{n,d}} \sup_{x \in B} |f(x)-g(x)|.$$ For any $\epsilon>0$, there exists $d$ such that $$\sup_{x \in B} |h_d(x)-f(x)| <\epsilon.$$
We once again use Proposition \[prop:Bernstein\] to show this result.
Let $\epsilon>0$, take $M=\max_{x \in B}||x||_{\infty}$ and $M'=\max_{x \in B} |f(x)|$. From Proposition \[prop:Bernstein\], there exists a polynomial $q$ of degree $d$ such that $\max_{x \in B} |f(x)-q(x)| \leq \epsilon'$ and $$\max_{x \in B} \left| \frac{\partial f(x)}{\partial x_i}- \frac{\partial q(x)}{\partial x_i} \right| \leq \epsilon',$$ where $\epsilon'$ is a positive scalar such that $$\epsilon=\epsilon'\cdot \left(1+\frac{n M \max_j (|K_j^+|+|K_j^-|)+2M' +2\epsilon'}{\min_j |K_j^+-K_j^-|}\right).$$ (This is a finite scalar as we have assumed that $K_i^-<K_i^+$ for all $i=1,\ldots,n$.) Such an $\epsilon'$ is guaranteed to exist from the intermediate value theorem as $$\epsilon' \mapsto \epsilon'\cdot \left(1+\frac{n M \max_j (|K_j^+|+|K_j^-|)+2M' +2\epsilon'}{\min_j |K_j^+-K_j^-|}\right)$$ is increasing, maps $0$ to $0$, and infinity to infinity. Now consider $$p(x)\mathrel{\mathop{:}}=q(x) \cdot \left(1-\frac{2\epsilon'}{\min_{j} |K_j^+-K_j^-|+2\epsilon'}\right)+\sum_i \epsilon' \cdot \frac{K_i^++K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}\cdot x_i.$$ We show that $p$ has $K$-bounded derivatives and that $\sup_{x \in B} |p(x)-f(x)| \leq \epsilon.$ It immediately follows from the definition of $h_d$ that Proposition \[prop:approx.hd\] holds.
Let $x \in B$. We have $$\begin{aligned}
|p(x)-f(x)| &\leq \left| q(x)-f(x)+\epsilon' \cdot \frac{\sum_{i} x_i(K_i^++K_i^-)-2 \cdot q(x)}{\min_j |K_j^+-K_j^-|+2\epsilon'} \right|\\
&\leq |q(x)-f(x)|+\epsilon' \cdot \frac{n M \max_j (|K_j^+|+|K_j^-|)+2M' +2\epsilon'}{\min_j |K_j^+-K_j^-|} \\
&\leq \epsilon'\cdot \left(1+\frac{n M \max_j (|K_j^+|+|K_j^-|)+2M' +2\epsilon'}{\min_j |K_j^+-K_j^-|}\right)=\epsilon,
\end{aligned}$$ where we have used the fact that $|q(x)|\leq |f(x)|+\epsilon'$ for any $x\in B$ in the second inequality. We now show that $p$ thus defined has $K$-bounded derivatives. Again, let $x \in B$ and $i \in \{1,\ldots,n\}$. We have $$\frac{\partial p(x)}{\partial x_i}=\frac{\partial q(x)}{\partial x_i} \cdot \left(1-\frac{2\epsilon'}{\min_{j} |K_j^+-K_j^-|+2\epsilon'}\right)+\epsilon' \cdot \frac{K_i^++K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}.$$ As $$\frac{\partial f(x)}{\partial x_i} \leq K_i^+ \text{ and } \frac{\partial q(x)}{\partial x_i} \leq \frac{\partial f(x)}{\partial x_i} +\epsilon'$$ it follows that $$\begin{aligned}
\frac{\partial p(x)}{\partial x_i} &\leq (K_i^++\epsilon')\cdot \left(1-\frac{2\epsilon'}{\min_{j} |K_j^+-K_j^-|+2\epsilon'}\right)+\epsilon' \cdot \frac{K_i^++K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}\\
&=K_i^+ + \frac{\epsilon' \cdot (\min_j |K_j^+-K_j^-|+2\epsilon')-2\epsilon'K_i^+-2\epsilon'^2+\epsilon'K_i^++\epsilon'K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}\\
&=K_i^+ - \epsilon' \cdot \frac{K_i^+-K_i^--\min_j |K_{j}^+-K_j^-|}{\min_j |K_j^+-K_j^-|+2\epsilon'} \leq K_i^+.
\end{aligned}$$ Likewise, as $$\frac{\partial f(x)}{\partial x_i} \geq K_i^- \text{ and } \frac{\partial q(x)}{\partial x_i} \geq \frac{\partial f(x)}{\partial x_i} -\epsilon'$$ it follows that $$\begin{aligned}
\frac{\partial p(x)}{\partial x_i} &\geq (K_i^--\epsilon')\cdot \left(1-\frac{2\epsilon'}{\min_{j} |K_j^+-K_j^-|+2\epsilon'}\right)+\epsilon' \cdot \frac{K_i^++K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}\\
&=K_i^- +\frac{-\epsilon' \cdot (\min_j |K_j^+-K_j^-|+2\epsilon')-2\epsilon'K_i^-+2\epsilon'^2+\epsilon'K_i^++\epsilon'K_i^-}{\min_j |K_j^+-K_j^-|+2\epsilon'}\\
&=K_i^- + \epsilon' \cdot \frac{K_i^+-K_i^--\min_j |K_{j}^+-K_j^-|}{\min_j |K_j^+-K_j^-|+2\epsilon'} \geq K_i^-.
\end{aligned}$$ Hence, $p$ has $K$-bounded derivatives.
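A one-dimensional numerical check of this rescaling, with hypothetical bounds $K^-=0$, $K^+=1$, and $\epsilon'=0.1$: whenever $q'$ only lies in the inflated band $[K^--\epsilon',K^++\epsilon']$, the transformed derivative $p'$ falls back inside $[K^-,K^+]$.

```python
import numpy as np

Km, Kp, eps = 0.0, 1.0, 0.1          # hypothetical K^-, K^+ and epsilon'
D = Kp - Km                           # min_j |K_j^+ - K_j^-| in one dimension

xs = np.linspace(-1.0, 1.0, 1001)
q_prime = 0.55 + 0.55 * np.cos(3.0 * xs)   # lies within [0, 1.1] = [Km - eps, Kp + eps]

# Derivative of the transformed polynomial p from the proof:
# p' = q' * (1 - 2 eps / (D + 2 eps)) + eps * (K^+ + K^-) / (D + 2 eps).
p_prime = q_prime * (1.0 - 2.0 * eps / (D + 2.0 * eps)) \
          + eps * (Kp + Km) / (D + 2.0 * eps)
```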
Minimizer inequalities {#subappendix:min}
----------------------
We have limited information regarding ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$. We do know, however, that they are solutions to the minimization problems (\[eq:opt.bg\]) and (\[eq:opt.bh\]), which is what we leverage in this appendix. We start with a general proposition, which can be found in the consistency proof of [@lim2012consistency] but which we repeat here for completeness, and then specialize this proposition to the two settings we are concerned with here.
\[prop:min.ineq\] Let $\hat{g}\mathrel{\mathop{:}}=\arg \min_{g \in S} \sum_{i=1}^m (Y_i-g(X_i))^2$, where $S$ is some subset of $P_{n,d}$ and let $g$ be an element of $S$. We have $$\begin{aligned}
\frac{1}{m} \sum_{i=1}^m (g(X_i)-\hat{g}(X_i))^2 &\leq \frac{2}{m} \sum_{i=1}^m (Y_i-g(X_i))(\hat{g}(X_i)-g(X_i)), \text{ and} \\
\frac1m \sum_{i=1}^m (g(X_i)-\hat{g}(X_i))^2 &\leq \frac4m \sum_{i=1}^m (Y_i-g(X_i))^2.
\end{aligned}$$
As $\hat{g}$ is the minimizer of $\sum_{i=1}^m (Y_i-g(X_i))^2$ over $S$ and $g$ is an element of $S$, it follows that $$\frac1m \sum_{i=1}^m (Y_i-\hat{g}(X_i))^2 \leq \frac1m \sum_{i=1}^m (Y_i-g(X_i))^2,$$ which is equivalent to $$\frac1m \sum_{i=1}^m (Y_i-g(X_i)+g(X_i)-\hat{g}(X_i))^2 \leq \frac1m \sum_{i=1}^m (Y_i-g(X_i))^2.$$ Expanding the left hand side of the inequality, we get $$\frac1m \sum_{i=1}^m (Y_i-g(X_i))^2+\frac2m \sum_{i=1}^m (Y_i-g(X_i))(g(X_i)-\hat{g}(X_i)) +\frac1m \sum_{i=1}^m (g(X_i)-\hat{g}(X_i))^2\leq \frac1m \sum_{i=1}^m (Y_i-g(X_i))^2,$$ which simplifies to the first inequality. To obtain the second inequality, note that using Cauchy-Schwarz on the right hand side of the first inequality gives us $$\frac1m \sum_{i=1}^m (\hat{g}(X_i)-g(X_i))^2 \leq 2 \sqrt{\frac1m \sum_{i=1}^m (Y_i-g(X_i))^2 \cdot \frac1m \sum_{i=1}^m (\hat{g}(X_i)-g(X_i))^2}.$$ By squaring the inequality and dividing both sides by $\frac1m \sum_{i=1}^m (\hat{g}(X_i)-g(X_i))^2$, we obtain the second inequality.
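Proposition \[prop:min.ineq\] can be sanity-checked numerically with $S$ taken to be all univariate cubics and $\hat g$ the unconstrained least-squares fit (the data here is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, 100)
Y = np.sin(2.0 * X) + 0.1 * rng.standard_normal(100)

# hat{g}: least-squares minimizer over S = {degree-3 polynomials};
# g: an arbitrary other element of S (coefficients in ascending order).
g_hat = np.polynomial.Polynomial.fit(X, Y, 3)
g = np.polynomial.Polynomial([0.1, 1.5, 0.0, -0.2])

# The two inequalities of the proposition, evaluated on the sample.
lhs = np.mean((g(X) - g_hat(X)) ** 2)
rhs_1 = 2.0 * np.mean((Y - g(X)) * (g_hat(X) - g(X)))
rhs_2 = 4.0 * np.mean((Y - g(X)) ** 2)
```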
\[cor:min.ineq.g\] Let ${\bar{g}_{m,d} }$ be as defined in (\[eq:opt.bg\]) and let $g$ be an element of $C_{n,d}$. We have $$\begin{aligned}
\frac{1}{m} \sum_{i=1}^m (g(X_i)-{\bar{g}_{m,d} }(X_i))^2 &\leq \frac{2}{m} \sum_{i=1}^m (Y_i-g(X_i))({\bar{g}_{m,d} }(X_i)-g(X_i)), \text{ and} \label{eq:min.ineq.1}\\
\frac1m \sum_{i=1}^m (g(X_i)-{\bar{g}_{m,d} }(X_i))^2 &\leq \frac4m \sum_{i=1}^m (Y_i-g(X_i))^2.\label{eq:min.ineq.2}
\end{aligned}$$
The proof is immediate by taking $S=C_{n,d}$.
\[cor:min.ineq.h\] Let ${\bar{h}_{m,d} }$ be as defined in (\[eq:opt.bh\]) and let $h$ be an element of $K_{n,d}$. We have $$\begin{aligned}
\frac{1}{m} \sum_{i=1}^m (h(X_i)-{\bar{h}_{m,d} }(X_i))^2 &\leq \frac{2}{m} \sum_{i=1}^m (Y_i-h(X_i))({\bar{h}_{m,d} }(X_i)-h(X_i)), \text{ and} \label{eq:min.ineq.1.h}\\
\frac1m \sum_{i=1}^m (h(X_i)-{\bar{h}_{m,d} }(X_i))^2 &\leq \frac4m \sum_{i=1}^m (Y_i-h(X_i))^2.\label{eq:min.ineq.2.h}
\end{aligned}$$
The proof is immediate by taking $S=K_{n,d}$.
Boundedness and Lipschitz continuity of ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$ {#subappendix:props}
---------------------------------------------------------------------------------
In this section, we prove that ${\bar{g}_{m,d} }$ and ${\bar{h}_{m,d} }$ are uniformly bounded and Lipschitz continuous (with Lipschitz constants that do not depend on the data) over certain boxes contained within $B$. For this purpose, we introduce the following notation: let $\eta$ be a scalar such that $$\begin{aligned}
\label{eq:def.eta}
0<\eta < \min_{i=1,\ldots,n} \frac{u_i-l_i}{2}
\end{aligned}$$ and let $$\begin{aligned}
\label{def:B.eta}
B_{\eta}\mathrel{\mathop{:}}=\{x~|~l_i+\eta \leq x_i \leq u_i-\eta, i=1,\ldots,n \}.
\end{aligned}$$ If (\[eq:def.eta\]) holds, we have that $B_{\eta}$ is full-dimensional and a strict subset of $B$ ($B=B_{\eta}$ when $\eta=0$), and conversely, if $B_{\eta}$ is full-dimensional and a strict subset of $B$, then (\[eq:def.eta\]) must hold.
Parts of the ideas presented here appear in [@lim2012consistency]. However, there are some key differences linked for example to considering a box $B$ rather than the whole space.
\[prop:prop.gd\] Let $g_d$ be defined as in Proposition \[prop:approx.gd\] and ${\bar{g}_{m,d} }$ defined as in (\[eq:opt.bg\]). Furthermore, let $\eta$ be a scalar such that (\[eq:def.eta\]) holds. We have the following properties:
(i) $ \exists c_{\eta}>0$, which is independent of the data $(X_1,Y_1),\ldots,(X_m,Y_m)$, such that $|{\bar{g}_{m,d} }(x)| \leq c_{\eta}$ a.s. for all $x \in B_{3\eta/4}$.
(ii) $\exists M_{\eta}>0$, which is independent of the data $(X_1,Y_1),\ldots,(X_m,Y_m)$, such that $|{\bar{g}_{m,d} }(x)-{\bar{g}_{m,d} }(y)| \leq M_{\eta} ||x-y||$ a.s. for all $x,y \in B_{\eta}$, i.e., ${\bar{g}_{m,d} }$ is $M_{\eta}$-Lipschitz over $B_{\eta}$.
(iii) $\exists N_{\eta}>0$, which is independent of the data $(X_1,Y_1),\ldots,(X_m,Y_m)$, such that $|g_d(x)-g_d(y)| \leq N_{\eta} ||x-y||$ for all $x,y \in B_{\eta}$, i.e., $g_d$ is $N_{\eta}$-Lipschitz over $B_{\eta}$.
We prove each statement separately.
(i) The idea here is to control the value of ${\bar{g}_{m,d} }$ at the corners and the analytic center of $B$. Convexity of ${\bar{g}_{m,d} }$ enables us to conclude that ${\bar{g}_{m,d} }$ is upper and lower bounded almost surely over $B_{3\eta/4}$.
We start by giving an a.s. bound on $\frac1m \sum_{i=1}^m{\bar{g}_{m,d} }^2(X_i)$, which will serve to show the existence of sample points $X_i$ satisfying certain properties. Using Equation (\[eq:min.ineq.2\]) with $g=g_d$, we get $$\frac1m \sum_{i=1}^m (g_d(X_i)-{\bar{g}_{m,d} }(X_i))^2 \leq \frac4m \sum_{i=1}^m(Y_i-g_d(X_i))^2.$$ We then use the identity $(a+b)^2 \leq 2a^2+2b^2$ for $a,b \in \mathbb{R}$ and the previous inequality to obtain $$\begin{aligned}
\frac1m \sum_{i=1}^m {\bar{g}_{m,d} }^2(X_i) &\leq \frac2m \sum_{i=1}^m ({\bar{g}_{m,d} }(X_i)-g_d(X_i))^2+\frac2m \sum_{i=1}^m g_d^2(X_i)\\
&\leq \frac8m \sum_{i=1}^m(Y_i-g_d(X_i))^2+\frac2m \sum_{i=1}^m g_d^2(X_i).
\end{aligned}$$ As $g_d$ is a deterministic function, we can apply the strong law of large numbers to the terms in the right hand side of the inequality to obtain that, for $m$ large enough, $$\begin{aligned}
\label{eq:ub.l2.hg}
\frac1m \sum_{i=1}^m {\bar{g}_{m,d} }^2(X_i) \leq 9E[(Y_1-g_d(X_1))^2]+3E[g_d^2(X_1)]=\mathrel{\mathop{:}} \beta \text{ a.s.}
\end{aligned}$$
We now show the existence of sample points $X_i$ in the “corners” and around the analytic center of $B$ such that $|{\bar{g}_{m,d} }(X_i)|$ is uniformly bounded (in $m$). To do this, we define for each vertex $i, i=1,\ldots,2^n$, of $B$, a box $B^v_i$ which is included in $B$, has vertex $i$ as a vertex, and has edges of length $\eta/4$. In other words, if vertex $i_0$ of $B$ is given by $(l_1,u_1,u_2,\ldots,u_n)$, then the corresponding box $B^v_{i_0}$ is defined as $$B^v_{i_0}\mathrel{\mathop{:}}=\{x \in \mathbb{R}^n ~|~ l_1 \leq x_1 \leq l_1+\frac{\eta}{4}, u_2-\frac{\eta}{4} \leq x_2 \leq u_2, \ldots, u_n -\frac{\eta}{4} \leq x_n \leq u_n\}.$$ We further define $$B_0^v\mathrel{\mathop{:}}=\{x \in \mathbb{R}^n~|~\frac{u_i+l_i}{2}-\frac{\eta}{8} \leq x_i \leq \frac{u_i+l_i}{2}+\frac{\eta}{8}, i=1,\ldots,n\}.$$ We refer the reader to Figure \[fig:proof.lemma.2\] for illustrations of these boxes and their relationships to other boxes such as $B$, $B_{\eta/2}$ (which will play a role later on) and $B_{3\eta/4}$ (over which we will show that $|{\bar{g}_{m,d} }|$ is uniformly upperbounded).
![An illustration of the different boxes that appear in the proof of Proposition \[prop:prop.gd\].[]{data-label="fig:proof.lemma.2"}](Picture_proof_ub)
Note that, for all $i=0,\ldots,2^n$, $B^v_i \subset B$ and is full dimensional. However, when $i\geq 1$, $B_i^v \cap B_{\eta/2} = \emptyset$ whereas $B_0^v \subseteq B_{\eta} \subseteq B_{3\eta/4}$. Let $$\gamma_i \mathrel{\mathop{:}}=P(X \in B_i^v), i=0,\ldots,2^n, \text{ and } \gamma \mathrel{\mathop{:}}=\min \{\gamma_0, \ldots, \gamma_{2^n}\}.$$ As $B_i^v$ is full-dimensional for all $i$, it follows that $\gamma>0$. For each $i \in \{0,\ldots,2^n\}$ and for a positive scalar $r$ such that $\frac{\beta}{r^2} \leq \frac{\gamma}{2}$, we then have when $m$ is large enough that $$\begin{aligned}
\frac1m \sum_{j=1}^m P(X_j \in B_i^v, |{\bar{g}_{m,d} }(X_j)| \leq r) &\geq \frac1m \sum_{j=1}^m P(X_j \in B_i^v)-\frac1m \sum_{j=1}^m P(X_j \in B_i^v, |{\bar{g}_{m,d} }(X_j)|>r)\\
&\geq \gamma -\frac1m \sum_{j=1}^m P(|{\bar{g}_{m,d} }(X_j)|>r)\\
&\geq \gamma -\frac{E[{\bar{g}_{m,d} }^2(X_j)]}{r^2}\\
&\geq \gamma - \frac{\beta}{r^2}\\
&\geq \gamma-\frac{\gamma}{2}=\frac{\gamma}{2}>0.
\end{aligned}$$ Here, the first inequality follows from the union bound, the second from the definition of $\gamma$ and the fact that $P(A \cap B) \leq P(A)$, the third from Markov’s inequality, and the fourth from (\[eq:ub.l2.hg\]) which holds when $m$ is large enough. As a consequence, for any $i \in \{0,\ldots,2^n\}$ and for large enough $m$, there exists $1 \leq I(i) \leq m$ such that $X_{I(i)} \in B_i^v$ and $|{\bar{g}_{m,d} }(X_{I(i)})| \leq r$.
We use this to obtain upper and lower bounds on ${\bar{g}_{m,d} }(x)$ over $B_{3 \eta/4}$ which only depend on the probability distribution of $X_i$ and $B_{\eta}$ (i.e., these bounds do not depend on the number of data points, nor on the data points themselves). The proof of the lower bound requires us to show that ${\bar{g}_{m,d} }$ is actually upper bounded over $B_{\eta/2}$. As $B_{\eta/2}$ is a superset of $B_{3\eta/4}$, this will naturally imply that ${\bar{g}_{m,d} }$ is upper bounded over $B_{3 \eta/4}$.\
**Upper bound:** We show that $B_{\eta/2}$ is a subset of the convex hull of $X_{I(1)},\ldots, X_{I(2^n)}$. This then implies that any $x$ in $B_{\eta/2}$ can be written as a convex combination of these points, and so, using convexity of ${\bar{g}_{m,d} }$, we can conclude that ${\bar{g}_{m,d} }(x) \leq r$. To see that $B_{\eta/2}$ is a subset of the convex hull of $X_{I(1)},\ldots, X_{I(2^n)}$, first note that $X_{I(i)} \notin B_{\eta/2}$ for all $i=1,\ldots,2^n$ as $B_i^v \cap B_{\eta/2}= \emptyset$. Hence, either $B_{\eta/2}$ is a subset of convex hull of $X_{I(1)},\ldots,X_{I(2^n)}$ or the two sets are disjoint. We show that the former has to hold. This follows from the fact that $X_0=\frac{1}{2^n} \sum_{i=1}^{2^n} X_{I(i)}$, which is in the convex hull of $X_{I(1)},\ldots,X_{I(2^n)}$, is also in $B_{\eta/2}$. To see this, note that for a fixed component $k$ of the vectors $\{X_{I(i)}\}_i$, there are exactly $2^{n-1}$ of these components that belong to $[l_k,l_k+\frac{\eta}{4}]$ and $2^{n-1}$ that belong to $[u_k-\frac{\eta}{4}, u_k]$. This implies that the $k$-th component of $X_0$ belongs to the interval $[\frac{u_k+l_k}{2}-\frac{\eta}{8}; \frac{u_k+l_k}{2}+\frac{\eta}{8}]$. As $\frac{u_k+l_k}{2}+\frac{\eta}{8} \leq u_k -\frac{\eta}{2}$ and $\frac{u_k+l_k}{2}-\frac{\eta}{8} \geq l_k+\frac{\eta}{2}$ by consequence of (\[eq:def.eta\]), we get that $X_0$ is in $B_{\eta/2}$.\
**Lower bound:** Let $x \in B_{3\eta/4}$. As $X_{I(0)} \in B_{0}^v$, there exists $y \in B_{\eta/2}$ such that $$X_{I(0)}=\frac{x+y}{2}.$$ (This can easily be checked via a simple analysis of each component of $y=2X_{I(0)}-x$.) By convexity of ${\bar{g}_{m,d} }$, it follows that $${\bar{g}_{m,d} }(X_{I(0)}) \leq \frac{{\bar{g}_{m,d} }(x)+{\bar{g}_{m,d} }(y)}2.$$ Using the fact that $|{\bar{g}_{m,d} }(X_{I(0)})|\leq r$ a.s. for $m$ large enough and the fact that ${\bar{g}_{m,d} }(y) \leq r$ as $y \in B_{ \eta/2}$, we obtain that for large enough $m$, $${\bar{g}_{m,d} }(x) \geq 2{\bar{g}_{m,d} }(X_{I(0)})-{\bar{g}_{m,d} }(y) \geq -3r.$$ Taking $c_{\eta}=\max \{r,3r\}=3r$ gives us the expected result.
(ii) As ${\bar{g}_{m,d} }$ is convex over $B$ and almost surely bounded on $B_{3\eta/4}$ by $c_{\eta}$ from (i), there exists a constant $M_{\eta}=\frac{8c_{\eta}}{\eta}$ which is independent of the data, such that ${\bar{g}_{m,d} }$ is $M_{\eta}$-Lipschitz over $B_{\eta}$; for a proof of this, see [@roberts1974another Theorem A].
(iii) As $g_d$ is continuous over $B$, $g_d$ has a maximum over $B$. Furthermore, $g_d$ is convex over $B$. It follows, using a similar argument to (ii), that $g_d$ is Lipschitz over $B_{\eta}$ with a constant that is independent of the data $(X_1,Y_1), \ldots,(X_m,Y_m)$.
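The geometric step in (i) and the Lipschitz constant in (ii) both lend themselves to a quick numerical sanity check. The sketch below uses arbitrary illustrative choices ($B=[0,1]^n$, $\eta=0.2$, random piecewise-linear convex functions); none of these values come from the text:

```python
import random

random.seed(0)
l, u, eta = 0.0, 1.0, 0.2   # B = [0, 1]^n, illustrative choice

# --- part (i): corner-box geometry in n = 2 dimensions ---
n = 2
corners = []
for mask in range(2 ** n):   # one random point in each corner box B_i^v
    pt = [random.uniform(u - eta / 4, u) if (mask >> k) & 1
          else random.uniform(l, l + eta / 4) for k in range(n)]
    corners.append(pt)

# the average X0 of the corner points lands in B_0^v, hence in B_{eta/2}
X0 = [sum(p[k] for p in corners) / len(corners) for k in range(n)]
assert all(l + eta / 2 <= v <= u - eta / 2 for v in X0)

# lower-bound step: for X_I0 in B_0^v and x in B_{3 eta/4},
# the reflected point y = 2 X_I0 - x stays in B_{eta/2}
X_I0 = [random.uniform((l + u) / 2 - eta / 8, (l + u) / 2 + eta / 8)
        for _ in range(n)]
x = [random.uniform(l + 3 * eta / 4, u - 3 * eta / 4) for _ in range(n)]
y = [2 * a - b for a, b in zip(X_I0, x)]
assert all(l + eta / 2 <= v <= u - eta / 2 for v in y)

# --- part (ii): the bound M = 8 c / eta, checked in one dimension ---
grid = [l + i * (u - l) / 2000 for i in range(2001)]
for _ in range(50):
    # a random convex function: the maximum of a few affine pieces
    pieces = [(random.uniform(-5, 5), random.uniform(-1, 1)) for _ in range(6)]
    def f(t, pieces=pieces):
        return max(a * t + b for a, b in pieces)

    # c = sup |f| over B_{3 eta/4}; empirical Lipschitz constant over B_eta
    c = max(abs(f(t)) for t in grid if l + 3 * eta / 4 <= t <= u - 3 * eta / 4)
    core = [t for t in grid if l + eta <= t <= u - eta]
    lip = max(abs(f(t2) - f(t1)) / (t2 - t1)
              for t1, t2 in zip(core, core[1:]))
    assert lip <= 8 * c / eta + 1e-9
```

The assertions reflect the containments proved in (i) and the constant $M_{\eta}=8c_{\eta}/\eta$ from (ii); they are illustrations of the statements, not part of the proof.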
\[prop:prop.hd\] Let $h_d$ be defined as in Proposition \[prop:approx.hd\] and ${\bar{h}_{m,d} }$ defined as in (\[eq:opt.bh\]). The following properties hold:
(i) $\exists M'_{\eta}>0$, which is independent of the data $(X_1,Y_1),\ldots(X_m,Y_m)$, such that $|{\bar{h}_{m,d} }(x)-{\bar{h}_{m,d} }(y)| \leq M'_{\eta} ||x-y||$ a.s. for all $x,y \in B_{\eta}$, i.e., ${\bar{h}_{m,d} }$ is $M'_{\eta}$-Lipschitz over $B_{\eta}$.
(ii) $\exists N'_{\eta}>0$, which is independent of the data $(X_1,Y_1),\ldots(X_m,Y_m)$, such that $|h_d(x)-h_d(y)| \leq N'_{\eta} ||x-y||$ for all $x,y \in B_{\eta}$, i.e., $h_d$ is $N'_{\eta}$-Lipschitz over $B_{\eta}$.
This follows immediately from the fact that both $h_d$ and ${\bar{h}_{m,d} }$ have $K$-bounded derivatives, with $K$ being a vector of finite scalars.
Proof of Lemmas \[lem:consistent.g\] and \[lem:consistent.h\] {#subappendix:proof}
-------------------------------------------------------------
We now prove Lemmas \[lem:consistent.g\] and \[lem:consistent.h\] using the previously shown results.
[*Proof of Lemma \[lem:consistent.g\].*]{} We define $C_{n,d}$ and $g_d$ as previously. Let $\epsilon>0$. We split this proof into three steps: the first step establishes that one can obtain an arbitrarily good approximation of $f$ by a family of convex polynomials $\{g_d\}_d$. We further show that one can reduce the problem of showing consistency of ${\bar{g}_{m,d} }$ over any compact set $C$ in $B$ to the problem of showing consistency of ${\bar{g}_{m,d} }$ over $B_{\eta}$ for some $\eta$ such that (\[eq:def.eta\]) holds. This considerably simplifies the subsequent steps. In the second step, we show that $g_d$ and ${\bar{g}_{m,d} }$ are “close” on the random samples $X_i$; this is then used in the third step to show that the two functions are uniformly close and hence that $f$ and ${\bar{g}_{m,d} }$ are also uniformly close.
#### Step 1: approximating $f$ by a convex polynomial $g_d$.
From Proposition \[prop:approx.gd\], there exists $d\mathrel{\mathop{:}}=d(\epsilon)$ such that $$\begin{aligned}
\label{eq:def.d}
\sup_{x \in B} |g_d(x)-f(x)| \leq \frac{\epsilon}{2},
\end{aligned}$$ where $g_d$ is defined as in Proposition \[prop:approx.gd\]. Henceforth, we assume that $d$ is fixed to this value.
We now prove that the problem of showing consistency of ${\bar{g}_{m,d} }$ over any compact subset $C$ of $B$ can be reduced instead to showing consistency of ${\bar{g}_{m,d} }$ over some box $B_{\eta}$ where $\eta$ is such that (\[eq:def.eta\]) holds. Let $C$ be any full-dimensional compact subset of $B$ such that no point of the boundary of $B$ is in $C$. As $C \cap int(B)=C$, there exists $\eta_C>0$ such that $C \subseteq B_{\eta_C}$. Furthermore, there exists $\eta_{\epsilon}>0$ such that $$\begin{aligned}
\label{eq:rid.of.outliers}
2\sqrt{2E[(Y_1-g_d(X_1))^2 \textbf{1}(X_1 \notin B_{\eta_{\epsilon}})]} \cdot \sqrt{5E[(Y_1-g_d(X_1))^2]} \leq \epsilon.
\end{aligned}$$ To see this, note that as $\eta \rightarrow 0$, $P(X_1 \notin B_{\eta}) \rightarrow 0$ with $P(X_1 \notin B)=0$ (this is a consequence of $P(X \in A)$ being positive for any full-dimensional set $A$). Existence of $\eta_{\epsilon}$ then follows by expanding out the expression and using Assumptions \[assmpt:generation.X\] and \[assmpt:generation.Y\] together with the fact that both $f$ and $g_d$ are continuous over $B$ and so bounded over $B$. We let $\eta\mathrel{\mathop{:}}= \min\{\eta_C,\eta_{\epsilon}\}.$ Thus defined, $\eta$ is such that (\[eq:def.eta\]) holds as $C$ is full-dimensional and a subset of $B_{\eta}$. As a consequence, in the rest of the proof, we restrict ourselves to showing that $$\begin{aligned}
\label{eq:result.amended}
\sup_{x \in B_{\eta}} |{\bar{g}_{m,d} }(x)-f(x)| \rightarrow 0 \text{ a.s.}
\end{aligned}$$ when $m,d \rightarrow \infty$ instead of (\[eq:result.th1\]). Indeed, (\[eq:result.amended\]) implies (\[eq:result.th1\]) as $C \subseteq B_{\eta}$ but the geometry of $B_{\eta}$ is much nicer to work with than that of $C$.
#### Step 2: showing that, for fixed $d$, $\frac1m \sum_{i=1}^m ({\bar{g}_{m,d} }(X_i)-g_d(X_i))^2 \rightarrow 0$ a.s. as $m\rightarrow \infty$.
Equation (\[eq:min.ineq.1\]) with $g=g_d$ gives us $$\frac1m \sum_{i=1}^m ({\bar{g}_{m,d} }(X_i)-g_d(X_i))^2 \leq \frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)).$$ The right hand side of this inequality can be rewritten as $$\label{eq:partition}
\begin{aligned}
\frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) &\leq \frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta})\\
&+\frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \notin B_{\eta})
\end{aligned}$$ We focus first on the term that includes the sample points outside of $B_{\eta}$. We have $$\begin{aligned}
&\frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \notin B_{\eta})\\
&\leq 2 \sqrt{\frac1m \sum_{i=1}^m (Y_i-g_d(X_i))^2 \textbf{1}(X_i \notin B_{\eta})} \cdot \sqrt{\frac1m \sum_{i=1}^m ({\bar{g}_{m,d} }(X_i)-g_d(X_i))^2}\\
&\leq 2 \sqrt{\frac1m \sum_{i=1}^m (Y_i-g_d(X_i))^2 \textbf{1}(X_i \notin B_{\eta})} \cdot \sqrt{\frac4m \sum_{i=1}^m (Y_i-g_d(X_i))^2}\\
&\leq 2\sqrt{2E[(Y_1-g_d(X_1))^2\textbf{1}(X_1 \notin B_{\eta})]} \cdot \sqrt{5E[(Y_1-g_d(X_1))^2]} \text{ a.s.},
\end{aligned}$$ where the first inequality holds by virtue of the Cauchy-Schwarz inequality, the second inequality is a consequence of (\[eq:min.ineq.2\]), and the third inequality holds for large enough $m$ following the strong law of large numbers. Equation (\[eq:rid.of.outliers\]) implies that for large enough $m$, $$\begin{aligned}
\label{eq:terms.outside}
& \frac2m \sum_{i=1}^m (Y_i-g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \notin B_{\eta}) \leq \epsilon \text{ a.s.}.
\end{aligned}$$
We now focus on the term that includes the sample points inside of $B_{\eta}$. We hope to use the strong law of large numbers to conclude, and indeed, if ${\bar{g}_{m,d} }$ were not a function of the data points $(X_1,Y_1),
\ldots, (X_m,Y_m)$, this could be done in a straightforward fashion. The goal is consequently to replace ${\bar{g}_{m,d} }$ by a deterministic approximation and then apply the strong law of large numbers, as we show now. Let $$\mathcal{C}=\{\text{polynomials $p:B_{\eta} \mapsto \mathbb{R}$ of degree $d$, $M_{\eta}$-Lipschitz with $|p(x)| \leq c_{\eta}, \forall x \in B_{\eta}$}\},$$ where $M_{\eta}$ and $c_{\eta}$ are the constants given in Proposition \[prop:prop.gd\], which do not depend on the data $(X_1,Y_1),\ldots,(X_m,Y_m)$. Proposition \[prop:prop.gd\] implies that ${\bar{g}_{m,d} }$ belongs to $\mathcal{C}$ for large enough $m$.
Furthermore, given that $\mathcal{C}$ is a subset of the set of continuous functions over the box $B_{\eta}$ and given that all functions in $\mathcal{C}$ are uniformly bounded and Lipschitz, it follows from the Arzelà-Ascoli theorem that $\mathcal{C}$ is compact in the metric $d(f,g)=\sup_{x \in B_{\eta}} |f(x)-g(x)|$. As a consequence, $\mathcal{C}$ has a finite $\epsilon$-net: we denote by $p_1,\ldots,p_R$ the polynomials belonging to it. Hence, for large enough $m$, there exists $r\in \{1,\ldots,R\}$ such that $\sup_{x \in B_{\eta}} |p_r(x)-{\bar{g}_{m,d} }(x)|<\epsilon$: this is our deterministic approximation of ${\bar{g}_{m,d} }$ and we are now equipped to control the term that includes the sample points inside $B_{\eta}$ from (\[eq:partition\]). We have: $$\begin{aligned}
&\frac2m \sum_{i=1}^m (Y_i -g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta})\\
&\leq \frac2m \left(\sum_{i=1}^m (Y_i -g_d(X_i))({\bar{g}_{m,d} }(X_i)-p_r(X_i)) \textbf{1}(X_i \in B_{\eta})+\sum_{i=1}^m (Y_i -g_d(X_i))(p_r(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}) \right)\\
&\leq \frac2m \cdot \epsilon \sum_{i=1}^m |Y_i-g_d(X_i)| + \max_{j=1,\ldots,R} \frac2m \sum_{i=1}^m (Y_i-g_d(X_i))(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}).
\end{aligned}$$ As $g_d$ is bounded over $B$, we use the strong law of large numbers to obtain, for large enough $m$, $$\begin{aligned}
\label{eq:term.1}
\frac1m \sum_{i=1}^m |Y_i-g_d(X_i)| \leq 2E[|Y_1-g_d(X_1)|] \text{ a.s. }
\end{aligned}$$ We also have for any $j \in \{1,\ldots,R\}$, $$\begin{aligned}
&\frac2m \sum_{i=1}^m (Y_i-g_d(X_i))(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}) \\
&\leq \frac2m \sum_{i=1}^m (Y_i-f(X_i))(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta})+\frac2m \sum_{i=1}^m (f(X_i)-g_d(X_i))(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}) \\
&\leq \frac2m \sum_{i=1}^m \nu_i(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta})+\frac2m \epsilon \sum_{i=1}^m |p_j(X_i)-g_d(X_i)| \textbf{1}(X_i \in B_{\eta}).
\end{aligned}$$ Given Assumptions \[assmpt:generation.X\] and \[assmpt:generation.Y\] and the fact that $p_j$ is uniformly bounded over $B_{\eta}$, it follows from the strong law of large numbers that, for any $j \in \{1,\ldots,R\}$ and for large enough $m$, $$\begin{aligned}$$
\label{eq:term.2}
\frac2m \sum_{i=1}^m \nu_i(p_j(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}) \leq \epsilon \text{ a.s.}
\end{aligned}$$ Similarly, using the strong law of large numbers again, for large enough $m$, $$\begin{aligned}
\label{eq:term.3}
\frac1m \sum_{i=1}^m |p_j(X_i)-g_d(X_i)| \textbf{1}(X_i \in B_{\eta}) \leq 2E[|p_j(X_1)-g_d(X_1)|\textbf{1}(X_1 \in B_{\eta})] \text{ a.s.}
\end{aligned}$$ Putting (\[eq:term.1\]), (\[eq:term.2\]), and (\[eq:term.3\]) together, we conclude that for large enough $m$, $$\begin{aligned}
&\frac2m \sum_{i=1}^m (Y_i -g_d(X_i))({\bar{g}_{m,d} }(X_i)-g_d(X_i)) \textbf{1}(X_i \in B_{\eta}) \\ &\leq 2\epsilon \left(2E[|Y_1-g_d(X_1)|]+\max_{j=1,\ldots,R}2E[|p_j(X_1)-g_d(X_1)|\textbf{1}(X_1 \in B_{\eta})]\right) +\epsilon.
\end{aligned}$$ Combining this with (\[eq:terms.outside\]) in (\[eq:partition\]), it follows that, for fixed $d$, $$\frac1m \sum_{i=1}^m ({\bar{g}_{m,d} }(X_i)-g_d(X_i))^2 \rightarrow 0 \text{ a.s. when $m\rightarrow \infty$}.$$
#### Step 3: showing that $\sup_{x \in B_{\eta}} |f(x)-{\bar{g}_{m,d} }(x)| \rightarrow 0$ a.s. when $m \rightarrow \infty$ and $d \rightarrow \infty$.
We fix $d$ as previously and $m$ to be as large as needed. Let $x \in B_{\eta}$ and let $\delta$ be a fixed positive scalar such that $(M_{\eta}+N_{\eta})\delta \leq \epsilon/4$. As $B_{\eta}$ is a compact box, there exists a finite partition $C_1,\ldots,C_K$ of $B_{\eta}$ such that the diameter of $C_k$, $k=1,\ldots,K$, is less than $\delta$ (i.e., $\sup_{x,y \in C_k} ||x-y|| \leq \delta$) and $C_k$ is full dimensional. It follows from Assumption \[assmpt:box\] that for large enough $m$, each $C_k$ contains at least one $X_i$. Furthermore, as $x \in B_{\eta}$, $x \in C_k$ for some $k$. Let’s denote by $k_0$ this specific $k$ and let $$i_{k_0}=\arg \min_{\{i|X_i \in C_{k_0}\}} |g_d(X_i)-{\bar{g}_{m,d} }(X_i)|.$$ For $d$ chosen as previously and very large $m$, we have $$\begin{aligned}
|f(x)-{\bar{g}_{m,d} }(x)| &\leq |f(x)-g_d(x)| +|g_d(x)-g_d(X_{i_{k_0}})|+|g_d(X_{i_{k_0}})-{\bar{g}_{m,d} }(X_{i_{k_0}})|+|{\bar{g}_{m,d} }(X_{i_{k_0}})-{\bar{g}_{m,d} }(x)|\\
&\leq \frac{\epsilon}{2} +N_{\eta} \cdot \delta + \frac{ \sum_{i=1}^m |g_d(X_i)-{\bar{g}_{m,d} }(X_i)|I(X_i \in C_{k_0})}{\sum_{i=1}^m I(X_i \in C_{k_0})}+M_{\eta} \cdot \delta,
\end{aligned}$$ where we have used the fact that both $g_d$ and ${\bar{g}_{m,d} }$ are Lipschitz (for $m$ large enough in the case of ${\bar{g}_{m,d} }$) with Lipschitz constants $N_{\eta}$ and $M_{\eta}$ respectively, which do not depend on the data (see Proposition \[prop:prop.gd\]), together with the fact that the minimum of a vector is less than or equal to its average.
The previous inequality implies that $$\begin{aligned}
\sup_{x \in B_{\eta}}|f(x)-{\bar{g}_{m,d} }(x)| &\leq \frac{\epsilon}{2} + \delta(M_{\eta}+N_{\eta}) + \frac1m \sum_{i=1}^m |g_d(X_i)-{\bar{g}_{m,d} }(X_i)| \cdot \max_{k=1,\ldots,K} \frac{m}{\sum_{i=1}^m I(X_i \in C_k)}\\
&\leq \frac{\epsilon}{2} + \delta (M_{\eta}+N_{\eta})+\sqrt{\frac1m \sum_{i=1}^m (g_d(X_i)-{\bar{g}_{m,d} }(X_i))^2} \cdot \max_{k=1,\ldots,K} \frac{m}{\sum_{i=1}^m I(X_i \in C_k)}.
\end{aligned}$$ From the strong law of large numbers, $\lim_{m \rightarrow \infty} \max_{k=1,\ldots,K} \frac{m}{\sum_{i=1}^m I(X_i \in C_k)}=\frac{1}{\min_{k=1,\ldots,K}P(X_1 \in C_k)}$ a.s. (which is a fixed number for fixed $\delta$). As a consequence, for large enough $m$, $\max_{k=1,\ldots,K} \frac{m}{\sum_{i=1}^m I(X_i \in C_k)} \leq \frac{2}{\min_{k=1,\ldots,K}P(X_1 \in C_k)}$ a.s. Furthermore, as shown in Step 2, $\frac1m \sum_{i=1}^m (g_d(X_i)-{\bar{g}_{m,d} }(X_i))^2 \rightarrow 0$ a.s., which implies that for large enough $m$, $$\sqrt{\frac1m \sum_{i=1}^m (g_d(X_i)-{\bar{g}_{m,d} }(X_i))^2} \leq \frac{\epsilon \cdot \min_{k=1,\ldots,K} P(X_1 \in C_k)}{8} \text{ a.s.}$$ Combining this with the definition of $\delta$, we obtain that for very large $m$, $$\sup_{x \in B_{\eta}} |f(x)-{\bar{g}_{m,d} }(x)| \leq \epsilon.$$ This concludes our proof.
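The partitioning of $B_{\eta}$ into cells of diameter at most $\delta$, each of which eventually contains a sample point, can be illustrated concretely. The sketch below uses our own choices of dimension, $\delta$, and a uniform sampling distribution (none of which come from the text):

```python
import math
import random

random.seed(2)
n = 2
delta = 0.5
side = delta / math.sqrt(n)     # cells of side delta/sqrt(n) have diameter <= delta
K_axis = math.ceil(1.0 / side)  # cells per axis for an illustrative B_eta = [0, 1]^2

m = 5000
occupied = set()
for _ in range(m):
    X = [random.random() for _ in range(n)]
    cell = tuple(min(int(v / side), K_axis - 1) for v in X)
    occupied.add(cell)

# for large m, every (full-dimensional) cell contains at least one sample point,
# mirroring the argument that each C_k contains some X_i almost surely
assert len(occupied) == K_axis ** n
```

With $m$ much larger than the number of cells, the probability that some cell stays empty is negligible, which is the finite-sample analogue of the almost-sure statement used in Step 3.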
[*Proof of Lemma \[lem:consistent.h\].*]{} The exact same proof as above goes through providing that $C_{n,d}$ is replaced by $K_{n,d}$, $g_d$ by $h_d$, ${\bar{g}_{m,d} }$ by ${\bar{h}_{m,d} }$, Proposition \[prop:approx.gd\] by Proposition \[prop:approx.hd\], Corollary \[cor:min.ineq.g\] by Corollary \[cor:min.ineq.h\], and Proposition \[prop:prop.gd\] by Proposition \[prop:prop.hd\].
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Amir Ali Ahmadi for bringing the problem of shape-constrained regression to our attention as well as for his feedback on an initial version of the manuscript. We would also like to thank Ioana Popescu for her feedback and constructive comments and Rahul Mazumder for giving us access to the datasets used in [@mazumder2017computational]. Finally, we are grateful to Dimitrije Ruzic for letting us know about the KLEMS dataset.
[^1]: Mihaela Curmei is with the department of Electrical Engineering and Computer Science at the University of California, Berkeley. Email: `[email protected]`
[^2]: Georgina Hall is with the department of Decision Sciences at INSEAD. Email: `[email protected]`
---
abstract: 'GRB051022 was detected at 13:07:58 on 22 October 2005 by HETE-2. The location of GRB051022 was determined immediately by the flight localization system. This burst contains multiple pulses and has a rather long duration of about 190 seconds. Detections of candidate X-ray and radio afterglows were reported, whereas no optical afterglow was found. Optical spectroscopic observations of the host galaxy revealed the redshift ${\rm z} = 0.8$. Using the data derived from the HETE-2 observation of the prompt emission, we found the absorption $N_{\rm H} = (8.8_{-2.9}^{+3.1}) \times 10^{22}$ cm$^{-2}$ and the visual extinction $A_{V} = 49_{-16}^{+17}$ mag in the host galaxy. Given such heavy extinction, the non-detection of any optical transient is quite reasonable. The absorption derived from the Swift XRT observations of the afterglow is fully consistent with that obtained from the early HETE-2 observation of the prompt emission. Our analysis suggests that the absorbing medium may lie outside the external shock at $R \sim 10^{16}$ cm, and could be a dusty molecular cloud.'
author:
- |
Yujin E. <span style="font-variant:small-caps;">Nakagawa</span> Atsumasa <span style="font-variant:small-caps;">Yoshida</span> Satoshi <span style="font-variant:small-caps;">Sugita</span> Kaoru <span style="font-variant:small-caps;">Tanaka</span> Nobuyuki <span style="font-variant:small-caps;">Ishikawa</span>\
Toru <span style="font-variant:small-caps;">Tamagawa</span> Motoko <span style="font-variant:small-caps;">Suzuki</span> Yuji <span style="font-variant:small-caps;">Shirasaki</span> Nobuyuki <span style="font-variant:small-caps;">Kawai</span> Masaru <span style="font-variant:small-caps;">Matsuoka</span>\
Jean-Luc <span style="font-variant:small-caps;">Atteia</span> Alexandre <span style="font-variant:small-caps;">Pelangeon</span> Roland <span style="font-variant:small-caps;">Vanderspek</span> Geoff B. <span style="font-variant:small-caps;">Crew</span> Joel S. <span style="font-variant:small-caps;">Villasenor</span>\
Nat <span style="font-variant:small-caps;">Butler</span> John <span style="font-variant:small-caps;">Doty</span> George R. <span style="font-variant:small-caps;">Ricker</span> Graziella <span style="font-variant:small-caps;">Pizzichini</span> Timothy Q. <span style="font-variant:small-caps;">Donaghy</span>\
Donald Q. <span style="font-variant:small-caps;">Lamb</span> Carlo <span style="font-variant:small-caps;">Graziani</span> Rie <span style="font-variant:small-caps;">Sato</span> Miki <span style="font-variant:small-caps;">Maetou</span> Makoto <span style="font-variant:small-caps;">Arimoto</span> Jun’ichi <span style="font-variant:small-caps;">Kotoku</span>\
J. Garret <span style="font-variant:small-caps;">Jernigan</span> Takanori <span style="font-variant:small-caps;">Sakamoto</span> Jean-Francois <span style="font-variant:small-caps;">Olive</span> Michel <span style="font-variant:small-caps;">Boer</span>\
Edward E. <span style="font-variant:small-caps;">Fenimore</span> Mark <span style="font-variant:small-caps;">Galassi</span> Stanford E. <span style="font-variant:small-caps;">Woosley</span> Makoto <span style="font-variant:small-caps;">Yamauchi</span>\
Kunio <span style="font-variant:small-caps;">Takagishi</span> and Isamu <span style="font-variant:small-caps;">Hatsukade</span>
title: 'An Optically Dark GRB Observed by HETE-2: GRB051022'
---
Introduction
============
Among gamma-ray bursts (GRBs), “optically dark” bursts are those without accompanying optical transients. Because a precise location is usually determined from the optical afterglow, it is generally difficult to identify the host galaxy of a dark burst. Hence only a few hosts have been found to date, via radio counterparts, and detailed studies of their morphology or taxonomy remain premature. Why are these GRBs “optically dark”? The answer is still unclear. Their optical counterparts might have decayed much more rapidly than those of other bursts, and/or could be hidden behind heavily absorbing material. Some works suggest that they are associated with dusty molecular clouds along the line of sight [@rei02] or are distant GRBs with $z\gtrsim5$ [@fru99a; @lam00].
In the soft X-ray band, measuring the absorption in the spectra of prompt emissions and/or afterglows of GRBs can provide important information about the environments around the sources. Indeed, some authors have reported time-variable absorptions in the spectra of GRB prompt emissions and afterglows. One of the interesting properties of GRB970828 is a time-variable absorption in the spectra of its afterglow, which could be due to circum-burst medium [@yos01]. Similar absorptions in afterglows were also reported by @owe98 and @str04.
For prompt emissions, time-variable absorptions were reported for GRB980329 [@fro00], GRB990705 [@ama00] and GRB010222 [@int01]. Optical counterparts were found for these bursts, in contrast to the case for GRB970828. @fro00 suggested that a time-variable absorption in the spectra of GRB980329 was due to the internal shock accompanied by the expanding fireball. For GRB990705, an absorption feature in the prompt emission might be explained by a photoelectric absorption by a medium at z=0.86 [@ama00].
We report here constant absorptions throughout the prompt emission and the afterglow of GRB051022 observed respectively by HETE-2 and Swift. We also present the localization and the spectral properties of GRB051022. We discuss output energies of the burst and an evidence for an intervening dense medium along the line of sight.
Observations and Analyses
=========================
Localization
------------
The gamma-ray burst GRB051022 was detected with the three scientific instruments aboard HETE-2, the Wide-Field X-ray Monitor (WXM; 2-25 keV; [@shi03]), the Soft X-ray Camera (SXC; 0.5-10 keV; [@vil03]) and the French Gamma Telescope (FREGATE; 6-400 keV; [@att03]), at 13:07:58 on 22 October 2005 [@gra05; @tan05]. The location of GRB051022 was determined immediately by the flight WXM and SXC localization system. The GCN Notices reporting the position were sent out 45 seconds after the onset based on the WXM flight localization, and 119 seconds after the onset based on the SXC flight localization.
The WXM location was a circle centered at ${\rm R.A.} = \timeform{23h55m55s.2}$, ${\rm decl.} = \timeform{19D39'36''.0}$ (J2000) with a radius of $\timeform{5'}$ (90 % confidence region) from the ground analysis of the data. The brightness of GRB051022 in soft X-rays was sufficient to determine the position with an error radius down to $\timeform{1'19''.8}$ using the SXC data, independently of the WXM localization. Unfortunately, the SXC data was partially lost due to a dropout of the internet connection to the ground station at Cayenne. The SXC localization was made by the ground analysis and resulted in a circle centered at ${\rm R.A.} = \timeform{23h56m03s.7}$, ${\rm decl.} = \timeform{19D37'10''.9}$ (J2000) ($l = \timeform{105D27'20''.0}$, $b = \timeform{-41D21'54''.8}$) with a radius of $\timeform{2'30''}$ (90 % confidence region).
The Inter-Planetary Network (IPN) also reported a position [@hur05], constrained to an annulus centered at ${\rm R.A.} = \timeform{03h11m39s}$, ${\rm decl.} = \timeform{16D32'32''}$ (J2000) with a radius of 46.4533$\pm$0.0684 degrees (the quoted error is the 3$\sigma$ confidence region). The SXC error circle was encompassed within this annulus.
The Swift XRT instrument began observing the field of GRB051022 3.5 hours after the trigger, based on the HETE-2 localization, and detected a bright, previously unknown fading X-ray source [@rac05a], just $\timeform{1'6''.8}$ away from the center of the SXC error circle. The XRT located it with an accuracy of $\timeform{4''}$, and Chandra later narrowed the error region down to $\timeform{0.7''}$ [@pat05], where a VLA observation at 8.5 GHz discovered a bright radio source [@cam05]. The probable host galaxy was reported by several groups [@cas05; @ber05; @coo05; @nys05; @blo05; @uga05], and an optical spectroscopic observation using the 200-inch Hale Telescope at Palomar Observatory detected a strong line at 6736 ${\rm \AA}$, which corresponds to O$\emissiontype{II}$ 3727 ${\rm \AA}$, yielding a redshift ${\rm z} = 0.8$ [@gal05].
Using the data from the Swift XRT instrument, spectral analyses of the X-ray afterglow were performed ([@but05a]; [@rac05b]). @but05a reported the absorption $N_{\rm H} = (0.84\pm0.07) \times 10^{22}$ cm$^{-2}$, which is greater than the Galactic value in the direction of the burst. The light curve of the X-ray afterglow showed a break at t$_{\rm break} = 2.9\pm0.2$ days, and, assuming this break was due to sideways expansion of the jet, the jet opening angles were estimated [@rac05b]: $\theta_{\rm jet} = 4.3$ degrees for the HETE-2 spectral parameters [@dot05] and $\theta_{\rm jet} = 4.4$ degrees for the Konus-Wind spectral parameters [@gol05].
Temporal Properties
-------------------
Unfortunately, because of the dropout of the internet connection to our ground station at Cayenne at the trigger time, we lost the time-tagged photon data of the FREGATE. For this reason, the spectral and temporal analyses of the FREGATE are performed using the 5.24 s resolution data. The upper five panels in Figure \[lc\_par\] show the time history of GRB051022 in five energy bands, where ${\rm t} = 0$ corresponds to the trigger time, 13:07:58 on 22 October 2005 UT. The event consists of multiple pulses and has a rather long duration $T_{\rm{90}} = \rm{178} \pm \rm{8}$ s in the 2$-$25 keV energy band and $\rm{157} \pm \rm{5}$ s in the 30$-$400 keV energy band.
Absorption in Spectra
---------------------
In our analysis, we use the following models: power-law (PL), power-law times exponential cutoff (PLE) and the Band function (GRBM; [@ban93]). The WXM and the FREGATE were pointed at around ${\rm R.A.} = \timeform{01h08m00s}$, ${\rm decl.} = \timeform{11D08'00''}$ (J2000) ($l = \timeform{129D28'14''.0}$, $b = \timeform{-51D31'40''.9}$) when the GRB was observed. The FREGATE has a larger field of view (70 degrees) than the WXM, and, consulting the ASM/RXTE database, we found many mildly bright soft sources at that time within the FREGATE field of view but outside that of the WXM. The FREGATE spectrum shows somewhat larger counts near its lower energy end than those from the WXM in the same band. This could be due to contamination from the soft sources mentioned above. To avoid this discrepancy, which affects only the lower end of the FREGATE energy band, we employ data from 40 keV up to 400 keV in the joint fits and find very good agreement between the WXM and FREGATE continuum spectra.
First of all, we analyze the average spectrum of the prompt emission using the total duration t $=$ 21$-$220 s. For background we employ data during t $=$ 231$-$341 s. These time regions are indicated in Figure \[lc\_par\]. No unabsorbed model provides an acceptable fit to the data. We find a deficit of photons below 4 keV in the spectrum fitted with the unabsorbed GRBM model (see panel (b) of Figure \[spectra\]). The Galactic absorption in the direction of the burst is $4.09 \times 10^{20}$ cm$^{-2}$ [@dic90], which is negligible for this fit. We therefore treat the absorption as a free parameter and fit the data. The fit is clearly improved (see panel (c) of Figure \[spectra\]) and the most favorable model is the absorbed GRBM with $N_{\rm H}$ $=$ $(1.51_{-0.50}^{+0.53}) \times 10^{22}$ cm$^{-2}$. Using the absorbed GRBM model, we find $\alpha = 1.01_{-0.03}^{+0.02}$, $\beta = 1.95_{-0.14}^{+0.25}$, $E_{\rm peak}^{\rm obs} = 213\pm18$ keV, and fluences $S_{\rm X}$ $=$ $(21.4\pm0.2)
\times 10^{-6}$ $\rm{ergs}$ $\rm{cm^{-2}}$ (2$-$30 keV) and $S_{\gamma}$ $=$ $(131\pm1) \times 10^{-6}$ $\rm{ergs}$ $\rm{cm^{-2}}$ (30$-$400 keV). The quoted errors correspond to the 90 % confidence region for $N_{\rm{H}}$, $\alpha$, $\beta$ and $E_{\rm peak}^{\rm obs}$, and the 68 % confidence region for $S_{\rm X}$ and $S_{\gamma}$. Thus the ratio of fluences is log($S_{\rm X}$/$S_{\gamma}$) $=$ $-$0.786, and GRB051022 is classified as a “Classical” hard GRB in the HETE-2 sample [@sak05].
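The fluence ratio used for this classification is straightforward to reproduce from the quoted numbers (a sketch using only values from the text):

```python
import math

S_X = 21.4e-6      # 2-30 keV fluence, erg cm^-2
S_gamma = 131e-6   # 30-400 keV fluence, erg cm^-2

ratio = math.log10(S_X / S_gamma)
# ratio comes out near -0.79, consistent with the quoted
# log(S_X / S_gamma) = -0.786 given the rounding of the fluences
```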
In previous studies of GRBs, absorption appeared only in part of the prompt emission and/or afterglow [@ama00; @fro00; @int01; @yos01; @str04]. We therefore perform time-resolved spectral analyses for 10 time intervals to investigate a possible time variation of the absorption, and summarize the results in Table \[spec\_tb\]. The bottom three panels in Figure \[lc\_par\] show the time variation of these spectral parameters. In all time intervals, we adopt the absorbed PLE model because a reliable $\beta$ cannot be obtained from fits with the GRBM model. The quoted errors correspond to the 90 % confidence region for $N_{\rm{H}}$, $\alpha$ and $E_{\rm peak}^{\rm obs}$, and the 68 % confidence region for $S_{\rm X}$ and $S_{\gamma}$.
From the spectral variation, the initial pulse (t $=$ 21.0$-$36.7 s) is hard and is followed by a soft pulse (t $=$ 36.7$-$57.7 s). During the later long phase (t $=$ 57.7$-$220 s), the spectrum shows a softening trend. These are consistent with the general view of the time variation of GRB spectra. The most remarkable result is that $N_{\rm H}$ is significantly required and appears constant in all time intervals, contrary to the results of the previous studies. We then perform spectral fitting with the absorption fixed to the value obtained from the analysis of the average spectrum and get acceptable fits for all intervals (see $\chi'^2$ in Table \[spec\_tb\]). In conclusion, $N_{\rm H}$ does not show any significant evolution.
Discussion and Conclusion
=========================
Using the measured redshift $z = 0.8$ [@gal05], the luminosity distance is $d_{\rm L} = 1.49\times10^{28}$ cm ($\Omega_m=0.32$, $\Omega_\Lambda=0.68$ and $H_{\rm 0}=72$ km s$^{-1}$ Mpc$^{-1}$). Integrating the best-fit time-integrated spectrum in the observer frame over the energy range from $1/(1+z)$ keV to $10/(1+z)$ MeV [@blo01; @ama02], we obtain $E_{\rm iso} = (6.6\pm1.3) \times 10^{53}$ ergs. We also find the peak energy of the $\nu F_{\nu}$ spectrum in the source frame, $E_{\rm peak}^{\rm src} = 382_{-32}^{+33}$ keV. These values are consistent with the $E_{\rm peak}^{\rm src}-E_{\rm iso}$ relation [@ama02].
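The quoted distance and rest-frame peak energy can be reproduced with a short numerical integration of the flat $\Lambda$CDM comoving distance. This is a sketch with the constants hard-coded; the value of $E_{\rm peak}^{\rm src}$ differs from the quoted 382 keV only because the observed 213 keV is itself rounded.

```python
import math

# Cosmology used in the text: flat LambdaCDM with Omega_m = 0.32,
# Omega_Lambda = 0.68, H0 = 72 km/s/Mpc, at redshift z = 0.8
Om, OL, H0, z = 0.32, 0.68, 72.0, 0.8
C_KMS = 2.99792458e5      # speed of light [km/s]
MPC_CM = 3.0857e24        # centimetres per megaparsec

def E(zp):
    """Dimensionless Hubble parameter E(z) for a flat universe."""
    return math.sqrt(Om * (1.0 + zp) ** 3 + OL)

# comoving distance d_C = (c/H0) * int_0^z dz'/E(z'), trapezoidal rule
n = 100_000
dz = z / n
integral = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
integral *= dz

d_L = (1.0 + z) * (C_KMS / H0) * integral * MPC_CM   # luminosity distance [cm]
print(f"d_L = {d_L:.2e} cm")                         # 1.49e+28 cm

# rest-frame peak energy: a (1+z) blueshift of the observed value
E_peak_obs = 213.0                                   # keV
print(f"E_peak_src = {E_peak_obs * (1.0 + z):.0f} keV")  # 383 keV (text: 382)
```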
In the standard view, the fireball is collimated [@wax98; @fru99b], and the collimation-corrected energies are concentrated around $10^{51}$ ergs [@blo03; @ghi04]. The collimation is expected to manifest itself as an achromatic break in the observed afterglow light curve [@rho97; @sar99]. A break time of $t_{\rm break}=2.9\pm0.2$ days was reported from the X-ray afterglow observations [@rac05b]. If this is the jet break, the jet opening angle [@sar99] is $\theta_{\rm jet}\sim4.2$ deg, assuming $n\sim0.1$ cm$^{-3}$ and $\eta_{\gamma}\sim0.2$, where $n$ is the proton number density around the GRB site and $\eta_{\gamma}$ is the energy conversion efficiency. Scaling $E_{\rm iso}$ by (1$-\cos\theta_{\rm jet}$), we estimate the collimation-corrected energy $E_{\gamma}=(1.8\pm0.3) \times 10^{51}$ ergs [@blo03; @ghi04]. This value is consistent with the $E_{\rm peak}^{\rm src}-E_{\gamma}$ relation [@ghi04]; thus the jet break at 2.9 days seems plausible.
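These jet-break numbers can be checked with the standard opening-angle formula of @sar99 in the widely used normalization of Frail et al. (2001) — an assumption here, since the text does not state which prefactor it adopts. With this exact normalization one obtains $\approx$4.0 deg and $\approx$1.6$\times10^{51}$ erg, slightly below the quoted values, which depend on the precise prefactor.

```python
import math

# Jet opening angle for a break at t_break (Sari, Piran & Halpern 1999),
# Frail et al. (2001) normalization:
#   theta_jet = 0.057 rad * (t_j/1 d)^{3/8} * ((1+z)/2)^{-3/8}
#               * (E_iso/1e53 erg)^{-1/8} * (eta/0.2)^{1/8} * (n/0.1)^{1/8}
t_break = 2.9        # days
z = 0.8
E_iso = 6.6e53       # erg
eta = 0.2            # energy conversion efficiency
n = 0.1              # circumburst proton density [cm^-3]

theta = (0.057 * t_break ** 0.375 * ((1.0 + z) / 2.0) ** -0.375
         * (E_iso / 1e53) ** -0.125 * (eta / 0.2) ** 0.125 * (n / 0.1) ** 0.125)
print(f"theta_jet = {math.degrees(theta):.1f} deg")   # ~4.0 deg (text: ~4.2)

# collimation-corrected gamma-ray energy
E_gamma = E_iso * (1.0 - math.cos(theta))
print(f"E_gamma = {E_gamma:.1e} erg")                 # ~1.6e51 erg (text: 1.8e51)
```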
Follow-up observations of the X-ray afterglow were performed with the Swift XRT instrument from 16:35:54 on 22 October 2005 [@rac05a]. Using three orbits of XRT data, we performed spectral analyses of the afterglow and found $N_{\rm H} = (0.91_{-0.11}^{+0.12}) \times 10^{22}$ cm$^{-2}$ and $\Gamma=2.0\pm0.1$ with $\chi^2/{\rm d.o.f.} = 166/150$. These values are consistent with those previously reported by @but05a. We also found the absorption to be constant during the XRT observation, and fully consistent with that obtained from the early HETE-2 observation of the prompt emission. Earlier studies reported absorption that varied in time [@ama00; @fro00; @int01; @yos01; @str04] and therefore interpreted it as absorption by circum-burst dusty medium in the very vicinity of the internal shocks. In contrast, our analyses show a constant absorption throughout the prompt emission and the afterglow. It may therefore be due to an intervening medium along the line of sight outside the external shocks, i.e., a molecular cloud in the host galaxy.
Such a large absorption would cause extinction of the afterglow emission. If the intervening medium is located somewhere between the source and our Galaxy, the absorption is at least $N_{\rm H}$ $\gtrsim$ $(1.51_{-0.50}^{+0.53}) \times 10^{22}$ cm$^{-2}$. Using the relation $A_{V} = N_{\rm H}/(1.79 \times 10^{21}$ cm$^{-2})$ of @pre95, we find a minimum visual extinction of $A_{V} = 8.4_{-2.8}^{+3.0}$ mag. We then estimate the minimum extinctions at other wavelengths using the extinction curves of @car89: $A_{U} = 13.2_{-4.4}^{+4.7}$ mag, $A_{B} = 11.3_{-3.7}^{+4.0}$ mag, $A_{R} = 6.3_{-2.1}^{+2.2}$ mag, $A_{I} = 4.0_{-1.3}^{+1.4}$ mag, $A_{J} = 2.4\pm0.8$ mag, $A_{H} = 1.6_{-0.5}^{+0.6}$ mag, $A_{K} = 1.0\pm0.3$ mag and $A_{L} = 0.5\pm0.2$ mag. These large extinctions could explain the fact that no optical afterglow was found despite the prompt and deep search [@tor05; @cen05].
In the most extreme case, the intervening medium is located in the immediate vicinity of the host galaxy of GRB051022, and the equivalent absorption should be scaled by $(1+z)^{3}$ [@gun65]. We then find $N_{\rm H} = (8.8_{-2.9}^{+3.1}) \times 10^{22}$ cm$^{-2}$ and maximum extinctions of $A_{V} = 49_{-16}^{+17}$ mag, $A_{U} = 77_{-25}^{+27}$ mag, $A_{B} = 66_{-22}^{+23}$ mag, $A_{R} = 37_{-12}^{+13}$ mag, $A_{I} = 24\pm8$ mag, $A_{J} = 14\pm5$ mag, $A_{H} = 9\pm3$ mag, $A_{K} = 6\pm2$ mag and $A_{L} = 3\pm1$ mag. If this is the case, the non-detection of any optical transient would be quite reasonable.
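Both extinction estimates follow from the gas-to-dust relation and the $(1+z)^3$ scaling quoted above. This sketch reproduces the two limiting $A_V$ values; the other bands would follow by applying the extinction-curve ratios $A_\lambda/A_V$.

```python
# Visual extinction from the X-ray column via the gas-to-dust relation
# A_V = N_H / (1.79e21 cm^-2); the source-frame case scales the
# equivalent hydrogen column by (1+z)^3.
z = 0.8
N_H_obs = 1.51e22                    # cm^-2, from the average-spectrum fit

A_V_min = N_H_obs / 1.79e21          # absorber anywhere along the line of sight
print(f"A_V (minimum) = {A_V_min:.1f} mag")        # 8.4 mag

N_H_src = N_H_obs * (1.0 + z) ** 3   # absorber at the host redshift
A_V_max = N_H_src / 1.79e21
print(f"N_H (source frame) = {N_H_src:.1e} cm^-2") # 8.8e+22
print(f"A_V (maximum) = {A_V_max:.0f} mag")        # 49 mag
```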
Given that no optical afterglow was found despite a prompt, deep search down to $R\sim20.0$ mag [@cen05], GRB051022 is very similar to GRB970828, one of the important GRBs for which no optical afterglow was found despite a prompt, deep search down to $R\sim24.5$ mag [@ode97]. @djo01 report that the radio afterglow of GRB970828 is located between two bright sources (A and B in Figure 3 of [@djo01]). They suggest that there might be a dust lane crossing a single galaxy, or that it might be a merging system of three components (A, B and C in Figure 3 of [@djo01]). The star formation rates (SFR) reported by those authors are SFR$\sim$1.2 $\MO$ yr$^{-1}$ for component A and SFR$\sim$0.3 $\MO$ yr$^{-1}$ for B.
The absorption for GRB970828 inferred from the brightness of the X-ray and radio afterglows was $N_{\rm H}\gtrsim6 \times 10^{21}$ cm$^{-2}$ in the source frame, so a dusty molecular cloud is one possible interpretation [@djo01]. Meanwhile, @yos01 found a time-variable absorption with $N_{\rm H} = 3.13 \times 10^{22}$ cm$^{-2}$ in the source frame from spectral analyses of the X-ray afterglow. The most probable interpretation is therefore that the absorption is dominated by the medium near the GRB site [@yos01; @djo01].
For GRB051022, several authors report, based on optical and IR observations, the most probable candidate for its host galaxy [@cas05; @ber05; @coo05; @nys05; @blo05; @gal05; @uga05], at a location consistent with those of the burst (SXC), the X-ray afterglow (XRT), and the radio transient (VLA). In these images, the host galaxy (galaxy “B” in the above references) appears roundly extended, at least $\sim \timeform{1''}$ in radius, which corresponds to about 7 kpc at $z=0.8$, larger than the typical size of a galactic bulge. It is therefore unlikely to be an edge-on spiral galaxy.
There is also a report that this galaxy is blue, with an SFR of more than 20 $\MO$ yr$^{-1}$ [@cas06], far larger than that of the GRB970828 host. Such a large SFR is consistent with a dusty molecular cloud in the galaxy. In addition, GRB051022 shows constant absorption in the soft X-ray band throughout the prompt emission and the afterglow, in sharp contrast with the previous results [@ama00; @fro00; @int01; @yos01; @str04]. The absorption for GRB051022 is evaluated to be $N_{\rm H} = (8.8_{-2.9}^{+3.1}) \times 10^{22}$ cm$^{-2}$, larger than that of GRB970828. Our results thus favor the interpretation that the absorbing medium lies outside the external shock at $R \gtrsim 10^{16}$ cm and could be a dusty molecular cloud.
We would like to thank the HETE-2 members for their support. We acknowledge the use of public data from the Swift data archive. The HETE-2 mission is supported in the US by NASA contract NASW-4690; in Japan in part by the Ministry of Education, Culture, Sports, Science, and Technology Grant-in-Aid 14079102; and in France by CNES contract 793-01-8479. One of the authors (Y.E.N.) is supported by the JSPS Research Fellowships for Young Scientists.
(80mm,114mm)[figure1.eps]{}
(80mm,80mm)[figure2.eps]{}
------------- --------------------- ------------------------ -------------------------- ------------------------ ---------------------- ------------------- -------------------
  Time Region       $N_{\rm{H}}$              $\alpha$          $E_{\rm peak}^{\rm obs}$          $S_{\rm X}$              $S_{\gamma}$        $\chi^2$ (d.o.f.)   $\chi'^2$(d.o.f.)
      (s)        ($10^{22}$ cm$^{-2}$)                                  (keV)            ($10^{-6}$ ergs cm$^{-2}$) ($10^{-6}$ ergs cm$^{-2}$)
21.0-36.7 $2.7_{-2.0}^{+2.7}$ $0.81_{-0.07}^{+0.07}$ $210_{-23}^{+30}$ $1.32_{-0.04}^{+0.06}$ $10.8_{-0.3}^{+0.2}$ 79.1 (106) 80.0 (107)
36.7-57.7 $7.1_{-5.0}^{+8.8}$ $1.16_{-0.23}^{+0.23}$ $101_{-25}^{+60}$ $0.61_{-0.05}^{+0.03}$ $1.8_{-0.4}^{+0.1}$ 52.6 (57) 56.2 (58)
57.7-105 $1.5_{-1.0}^{+1.1}$ $0.93_{-0.04}^{+0.04}$ $248_{-21}^{+26}$ $4.34_{-0.07}^{+0.09}$ $32.8_{-0.6}^{+0.6}$ 82.2 (106) 82.2 (107)
105-121 $2.6_{-1.1}^{+1.3}$ $0.77_{-0.03}^{+0.03}$ $352_{-25}^{+29}$ $3.27_{-0.06}^{+0.05}$ $41.1_{-0.5}^{+0.4}$ 99.1 (106) 102 (107)
121-131 $1.4_{-0.9}^{+1.0}$ $1.05_{-0.05}^{+0.05}$ $195_{-22}^{+29}$ $1.68_{-0.04}^{+0.04}$ $8.8_{-0.3}^{+0.2}$ 83.4 (106) 83.4 (107)
131-142 $3.0_{-1.4}^{+1.7}$ $1.02_{-0.06}^{+0.06}$ $226_{-30}^{+40}$ $1.60_{-0.04}^{+0.03}$ $9.9_{-0.3}^{+0.2}$ 87.2 (106) 90.4 (107)
142-157 $1.1_{-0.8}^{+0.9}$ $0.98_{-0.05}^{+0.05}$ $132_{-10}^{+12}$ $2.37_{-0.04}^{+0.05}$ $9.9_{-0.4}^{+0.2}$ 65.9 (106) 66.6 (107)
157-168 $1.9_{-1.0}^{+1.1}$ $1.17_{-0.06}^{+0.06}$ $165_{-24}^{+34}$ $1.58_{-0.03}^{+0.04}$ $6.2_{-0.3}^{+0.2}$ 61.4 (88) 61.8 (89)
168-205 $1.0_{-0.7}^{+0.8}$ $1.21_{-0.07}^{+0.06}$ $100_{-11}^{+15}$ $3.23_{-0.07}^{+0.05}$ $8.1_{-0.3}^{+0.2}$ 78.4 (106) 79.7 (107)
205-220 $1.7_{-1.2}^{+1.4}$ $1.39_{-0.11}^{+0.11}$ $100_{-21}^{+42}$ $1.18_{-0.05}^{+0.03}$ $2.5_{-0.3}^{+0.1}$ 82.1 (90) 82.2 (91)
------------- --------------------- ------------------------ -------------------------- ------------------------ ---------------------- ------------------- -------------------
: Spectral model parameters for the time resolved spectra of GRB051022.[]{data-label="spec_tb"}
Amati, L., 2000, Science, 290, 953 Amati, L., 2002, , 390, 81 Atteia, J. -L., 2003, in Gamma-Ray Bursts and Afterglow Astronomy, ed. G. R. Ricker & R. Vanderspek (Melville: AIP), 17 Band, D. L., 1993, , 413, 281 Berger, E., & Wyatt, P. 2005, GCN Circ., 4148 Bloom, J. S., Frail, D. A., & Sari, R. 2001, , 121, 2879 Bloom, J. S., Frail, D. A., & Kulkarni, S. R. 2003, , 594, 674 Bloom, J. S. 2005, GCN Circ., 4153 Butler, N. R., Ricker, G. R., Lamb, D. Q., Burrows, D. N., Racusin, J., & Gehrels, N. 2005, GCN Circ., 4165 Butler, N. R., Ricker, G. R., Lamb, D. Q., Burrows, D. N., Racusin, J., & Gehrels, N. 2005, GCN Circ., 4170 Cameron, P. B., & Frail, D. A. 2005, GCN Circ., 4154 Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245 Castro-Tirado, A. J., de Ugarte Postigo, A., Bihain, G., Guziy, S., Pandey, S. B., Jelinek, M., & Gorosabel, J. 2005, GCN Circ., 4143 Castro-Tirado, A. J., 2006, in Gamma Ray Bursts in the Swift Era, ed. Stephen S. Holt, Neil Gehrels and John A. Nousek (Melville: AIP), 79 Cenko, S. B., Fox, D. B., McNaught, R., & Peterson, B. 2005, GCN Circ., 4134 Cool, R. 2005, GCN Circ., 4149 de Ugarte Postigo, A., Aceituno, F. J. & Guziy, S. 2005, GCN Circ., 4164 Dickey, J. M., & Lockman, F. J. 1990, , 28, 215 Djorgovski, S. G., Frail, D. A., Kulkarni, S. R., Bloom, J. S., Odewahn, S. C., & Diercks, A. 2001, , 562, 654 Doty, J., 2005, GCN Circ., 4145 Frontera, F., 2000, , 127, 59 Fruchter, A., 1999, , 516, 683 Fruchter, A. S., 1999, , 519, L13 Gal-Yam, A., Berger, E., Fox, D. B., Soderberg, A. M., Cenko, S. B., Cameron, P. B., & Frail, D. A. 2005, GCN Circ., 4156 Ghirlanda, G., Ghisellini, G., & Lazzati, D. 2004, , 616, 331 Golenetskii, S., Aptekar, R., Mazets, E., Pal’shin, V., Frederiks, D., & Cline, T. 2005, GCN Circ., 4150 Graziani, C. 2005, GCN Circ., 4131 Gunn, J. E., & Peterson, B. A. 1965, , 142, 1633 Hurley, K., Cline, T. 2005, GCN Circ., 4139 in’t Zand, J. J., 2001, , 559, 710 Lamb, D. Q., & Reichart, D. E. 
2000, , 536, 1 Nysewander, M., Cypriano, E., LaCluyze, A., Bayliss, M., Reichart, D., Alvarez, A., & Ugarte, P. 2005, GCN Circ., 4152 Odewahn, S.C., Djorgovski, S. G., Kulkarni, S. R., & Frail D. A. 1997, IAU Circ., 6735 Owens, A., 1998, , 339, L37 Patel, S., Kouveliotou, K., & Rol, E. 2005, GCN Circ., 4163 Predehl, P., & Schmitt, J. H. M. M. 1995, , 293, 889 Racusin, J., Burrows, D., & Gehrels, N. 2005, GCN Circ., 4141 Racusin, J., 2005, GCN Circ., 4169 Reichart, D. E., & Price, P. A. 2002, , 565, 174 Rhoads, J. E. 1997, , 487, L1 Sakamoto, T., 2005, , 629, 311 Sari, R., Piran, T., & Narayan, R. 1998, , 497, L17 Sari, R., Piran, T., & Halpern, J. P. 1999, , 519, 17 Shirasaki, Y., 2003, , 55, 1033 Stratta, G., Fiore, F., Antonelli, L. A., Piro, L., & de Pasquale, M. 2004, , 608, 846 Tanaka, K., 2005, GCN Circ., 4137 Torii, K. 2005, GCN Circ., 4130 Villasenor, J. N., 2003, in Gamma-Ray Bursts and Afterglow Astronomy, ed. G. R. RIcker & R. Vanderspek (Melville: AIP), 33 Waxman, E., Kulkarni, S. R., & Frail, D. A. 1998, , 497, 288 Yoshida, A., 2001, , 557, L27
---
abstract: 'These proceedings summarize my plenary talk at Quark Matter 2011 with a focus on the future perspectives of the low energy programs at RHIC, FAIR, NICA and CERN.'
address:
- 'Institut für Theoretische Physik, Goethe Universität Frankfurt,Germany'
- 'Frankfurt Institute for Advanced Studies (FIAS), Ruth-Moufang-Str. 1, 60438 Frankfurt, Germany'
author:
- Marcus Bleicher
title: 'The low energy frontier: What is exciting about physics below the top RHIC energy'
---
Introduction
============
Over the last decade relativistic heavy ion physics has made tremendous advances in our understanding of the phase diagram of Quantum-Chromo-Dynamics (QCD). Most of these advances are, however, related to the high temperature and small baryon density regime encountered in collisions at the top RHIC energy. Here one has been able to explore the properties of the created QCD matter in great detail and to estimate its transport properties quite accurately from comparisons between viscous hydrodynamics calculations and experimental data (especially on the elliptic flow $v_2$ and on the attenuation of particles with high transverse momenta, known as jet quenching). This energy regime offers certain advantages: perturbative QCD methods may work reliably for the first time in heavy ion reactions, straightforward hydrodynamics seems to allow a good description of the experimental data, and on the experimental side the luminosities are very high, allowing detailed studies of a wide range of observables. On the other hand, lattice QCD suggests that at the (T,$\mu_B$) values probed at the top RHIC energy, the transition from partonic to hadronic matter is not a phase transition but a crossover. To explore the phase transition region and the critical endpoint of QCD one clearly has to go down in energy. Unfortunately, decreasing the beam energy poses a problem for collider-based experiments, because the luminosity decreases too, which in turn means that rare probes move out of reach. Nevertheless, a decrease in energy offers many new exciting possibilities and challenges. This is why more suitable titles for this talk might have been ”The high baryon density frontier” or ”The quest for the critical end point” or ”The ’where pQCD won’t help you’ frontier”.
The outline of these proceedings is as follows. I first briefly review past achievements, essentially providing the experimental evidence for irregularities in the low energy regime between $\sqrt{s_{NN}}=5-15$ GeV. Some of these experimental results were already known, while the majority has been presented for the first time at this meeting. What they share is the lack of a consistent theoretical interpretation. I then discuss the next generation of experiments and facilities that may provide high precision data to explore the onset of deconfinement and the phase transition with unprecedented accuracy. Finally, I point out what should be done on the theory side to provide the high precision calculations needed for the interpretation of the experimental data. In line with my presentation, these proceedings will omit all critical discussions. I also apologize to those colleagues whose exciting results could not be mentioned due to space limitations.
Phase diagram and existence of the CEP
======================================
The phase diagram of QCD is depicted in Figure \[fig:phasediagram\]. The left part of Fig. \[fig:phasediagram\] provides a schematic view of the qualitative features expected for QCD, most notably the critical endpoint (CEP) and the Quarkyonic phase. The location and even the existence of the critical point have been under discussion for many years [@STEPHANOV] (Fig. \[fig:phasediagram\], center, shows an early lattice QCD calculation [@hep-lat/0111064]). While different lattice QCD groups have predicted the existence of a CEP [@hep-lat/0111064; @KARSCH], other groups have suggested that the critical surface might bend away from the physical point [@arxiv:1009.4089]. Recent studies by Endrodi et al. also provide no evidence for the existence of the critical point in the investigated (T,$\mu_B$) regime below $\mu_B = 600$ MeV [@arxiv:1102.1356].
![Left: Schematic phase diagram of QCD, including the CEP, the parton-hadron transition line and the Quarkyonic phase. Center: Previous prediction for the location of the CEP from lattice QCD in the (T,$\mu_B$)-plane [@hep-lat/0111064]. Right: Recent prediction of the transition lines in the (T,$\mu_B$)-plane [@arxiv:1102.1356].\[fig:phasediagram\]](fig01.pdf "fig:"){width=".32\textwidth"} ![Left: Schematic phase diagram of QCD, including the CEP, the parton-hadron transition line and the Quarkyonic phase. Center: Previous prediction for the location of the CEP from lattice QCD in the (T,$\mu_B$)-plane [@hep-lat/0111064]. Right: Recent prediction of the transition lines in the (T,$\mu_B$)-plane [@arxiv:1102.1356].\[fig:phasediagram\]](fig02.pdf "fig:"){width=".32\textwidth"} ![Left: Schematic phase diagram of QCD, including the CEP, the parton-hadron transition line and the Quarkyonic phase. Center: Previous prediction for the location of the CEP from lattice QCD in the (T,$\mu_B$)-plane [@hep-lat/0111064]. Right: Recent prediction of the transition lines in the (T,$\mu_B$)-plane [@arxiv:1102.1356].\[fig:phasediagram\]](fig03.pdf "fig:"){width=".32\textwidth"}
In view of these ambiguous theoretical results, I will avoid a deeper discussion of the theoretical expectations and turn to the experimental observations in the beam energy region where a change from the first order phase transition to the cross over (via a CEP) may take place.
Kinks everywhere!
=================
First hints of irregular and non-monotonous behavior in the energy excitation functions of nucleus-nucleus reactions were reported by the NA49 collaboration [@MAREK]. Among these observations is the step-like structure in the inverse slope (and mean $m_T-m_0$) systematics of various particle species, see Fig. \[fig:kinetics\] (left). This observation can be interpreted as a sign of a mixed phase that softens the equation of state (EoS). Further hints for a softening of the EoS come from the systematic study of the $v_1$ excitation function (Fig. \[fig:kinetics\] (center)), which shows the slope of the proton bounce-off as a function of energy. It was predicted that a sign change of the slope parameter, as now observed in the STAR data [@MOHANTY], could indicate a change from hadronic to partonic matter [@CSERNAI]. Finally, let us investigate how the initial anisotropies are transformed into final state anisotropies. To this end, Fig. \[fig:kinetics\] (right) shows the final state eccentricities in coordinate space as extracted from HBT studies [@LISA]. Here, too, a non-monotonous behavior becomes visible (note the CERES point around 20 AGeV) that coincides with the energy range of the previously discussed irregular structures.
![Left: Inverse slope excitation function for Kaons [@SPHERIO]. Center: Flow systematics [@MOHANTY]. Right: Final state coordinate space eccentricity [@LISA]. \[fig:kinetics\]](fig04.pdf "fig:"){width=".32\textwidth"} ![Left: Inverse slope excitation function for Kaons [@SPHERIO]. Center: Flow systematics [@MOHANTY]. Right: Final state coordinate space eccentricity [@LISA]. \[fig:kinetics\]](fig05.pdf "fig:"){width=".32\textwidth"} ![Left: Inverse slope excitation function for Kaons [@SPHERIO]. Center: Flow systematics [@MOHANTY]. Right: Final state coordinate space eccentricity [@LISA]. \[fig:kinetics\]](fig06.pdf "fig:"){width=".32\textwidth"}
Next let me turn to the elliptic flow analysis. After the exploratory studies at the CERN-SPS, the STAR collaboration's beam energy scan (BES) program has taken the breadth and quality of these measurements to a new level. A most striking result is the excitation function of the non-flow contributions shown in Fig. \[fig:v2\] (left). Here, too, one observes a local minimum, which may indicate a sudden change in the conversion efficiency of the initial spatial anisotropy into momentum space. Such a behavior may be expected if the initial state viscosity has a local minimum, e.g. due to a phase transition to a QGP. The relative importance of the hadronic stage may be inferred from the differences between the elliptic flow values of particles and anti-particles, shown in Fig. \[fig:v2\] (center). The strong rise towards low energies can be interpreted as the emergence of a long-lived hadronic state that 'eats up' the anti-particles. Further evidence for a change in the degrees of freedom around 10 GeV center-of-mass energy comes from the elliptic flow of multi-strange particles, exemplified by the deviation of the $v_2$ values of the $\phi$-meson from the constituent quark scaling (ncq) curve (see Fig. \[fig:v2\] (right)). If, in addition, a hierarchy in the violation of ncq-scaling could be established when going from protons to Lambdas, Xis and Omegas, this would provide direct evidence for the relative importance of the hadronic phase as compared to the partonic phase in the early stage of the reaction.
![Left: Excitation function of the non-flow contribution [@SORENSEN]. Center: Relative difference between the elliptic flow of particles and anti-particles [@SCHMAH]. Right: Scaled elliptic flow of hadrons, including the $\phi$-meson (triangles) [@SCHMAH]\[fig:v2\]](fig07.pdf "fig:"){width=".32\textwidth"} ![Left: Excitation function of the non-flow contribution [@SORENSEN]. Center: Relative difference between the elliptic flow of particles and anti-particles [@SCHMAH]. Right: Scaled elliptic flow of hadrons, including the $\phi$-meson (triangles) [@SCHMAH]\[fig:v2\]](fig08.pdf "fig:"){width=".32\textwidth"} ![Left: Excitation function of the non-flow contribution [@SORENSEN]. Center: Relative difference between the elliptic flow of particles and anti-particles [@SCHMAH]. Right: Scaled elliptic flow of hadrons, including the $\phi$-meson (triangles) [@SCHMAH]\[fig:v2\]](fig09.pdf "fig:"){width=".32\textwidth"}
Fluctuations: From lattice QCD to data
======================================
While irregular structures appear in a multitude of measured data around center-of-mass energies of 5-15 GeV, it is usually difficult to find a consistent and unambiguous theoretical interpretation. Fluctuation observables, which are usually connected to well defined susceptibilities and correlation lengths, may allow one to gain additional insights. Very interesting data on fluctuations have been provided by the NA61 experiment (Fig. \[fig:flucs\], left), where the measured fluctuations are interpreted in terms of the correlation length at the critical point [@STEPHANOV]. If this interpretation could be confirmed by a full dynamical simulation [@NAHRGANG], it may provide the first experimental hint of the location of the CEP. A direct connection between data and a first-principles calculation has been suggested by recent lattice QCD results [@CHENG] on fluctuations (Fig. \[fig:flucs\], right). The ratio of the fourth-order to second-order baryon number susceptibilities is related to the scaled kurtosis of the event-by-event baryon number fluctuations and can thus be measured in experiment. Such measurements have been performed by the STAR experiment and confirm the expectations, see Fig. \[fig:flucs\], center. Deviations from the hadron gas behavior are predicted for ratios of even higher order susceptibilities, which modify the fluctuations already below the phase transition temperature (i.e. they are in principle observable in the hadronic fluctuations) [@REDLICH].
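The connection between measured moments and $\chi^B_4/\chi^B_2$ can be illustrated with a toy Monte Carlo (a sketch, not an analysis of real data): for uncorrelated Poissonian proton and anti-proton production the net-proton distribution is Skellam, whose cumulants obey $C_4/C_2 = 1$, and the scaled kurtosis estimator $\kappa\sigma^2 = m_4/m_2 - 3m_2$ should recover this hadron gas baseline. The mean yields below are purely illustrative.

```python
import math
import random

random.seed(42)
mu_p, mu_pbar = 8.0, 5.0   # illustrative mean proton / anti-proton yields

def poisson(mu):
    """Knuth's method; adequate for small mu."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# event-by-event net-proton numbers for a Skellam (hadron gas) baseline
events = [poisson(mu_p) - poisson(mu_pbar) for _ in range(200_000)]
n = len(events)
mean = sum(events) / n
m2 = sum((x - mean) ** 2 for x in events) / n   # central moments
m4 = sum((x - mean) ** 4 for x in events) / n

kappa_sigma2 = m4 / m2 - 3.0 * m2               # = C4/C2 for the sample
print(f"kappa*sigma^2 = {kappa_sigma2:.2f}")    # close to 1, the Skellam baseline
```

For a critical point one expects this ratio to deviate from unity, which is exactly what the lattice and STAR comparisons in Fig. \[fig:flucs\] probe.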
Physics challenges and future perspectives
==========================================
After the previous discussion, the physics challenges in this energy regime are clear: (I) exploration of the onset of deconfinement, i.e. the parton-hadron phase transition, the chiral phase transition and the CEP; (II) exploration of the properties of hadrons at high baryon densities, including the Quarkyonic phase; (III) extraction of the equation of state of QCD matter, especially the velocity of sound and the transport properties; and (IV) the quest for exotica, like multi-strange objects and charmed hadrons. To meet these challenges, high precision experimental data will be urgently needed, accompanied by high precision theoretical modeling. Two new facilities will become available in the near future: FAIR (near Darmstadt, Germany) and NICA (at the JINR, Dubna, Russia). In addition, we have the currently running experimental programs at the SPS (NA61/SHINE) and the RHIC-BES (STAR and PHENIX). As the impact of the RHIC-BES program and NA61 has already been discussed above, I will focus here on the potential of the new facilities for heavy ion beams.
The heavy ion program at NICA [@SORIN] with the MPD detector will provide collisions of light and heavy ions, up to Au+Au, at center-of-mass energies of $\sqrt {s_{NN}}=4-11$ GeV and luminosities around $10^{27}\,{\rm cm}^{-2}\,{\rm s}^{-1}$. It is supplemented by the low energy program at the Nuclotron with beam energies up to 4.5 AGeV. The physics program will focus on the onset of deconfinement, with discovery potential for the CEP and the Quarkyonic phase. While the Nuclotron is already in operation, the physics start of the NICA collider program is envisaged for 2016.
![Left: Momentum fluctuations as measured by NA61 in comparison to the expectations from an effective model at the CEP [@MAREK; @STEPHANOV]. Center: STAR data on the scaled proton number kurtosis (as a proxy for $\chi^B_4/\chi^B_2$) [@MOHANTY]. Right: Lattice QCD prediction for the $\chi^B_4/\chi^B_2$-ratio [@CHENG].\[fig:flucs\]](fig10.pdf "fig:"){width=".32\textwidth"} ![Left: Momentum fluctuations as measured by NA61 in comparison to the expectations from an effective model at the CEP [@MAREK; @STEPHANOV]. Center: STAR data on the scaled proton number kurtosis (as a proxy for $\chi^B_4/\chi^B_2$) [@MOHANTY]. Right: Lattice QCD prediction for the $\chi^B_4/\chi^B_2$-ratio [@CHENG].\[fig:flucs\]](fig11.pdf "fig:"){width=".32\textwidth"} ![Left: Momentum fluctuations as measured by NA61 in comparison to the expectations from an effective model at the CEP [@MAREK; @STEPHANOV]. Center: STAR data on the scaled proton number kurtosis (as a proxy for $\chi^B_4/\chi^B_2$) [@MOHANTY]. Right: Lattice QCD prediction for the $\chi^B_4/\chi^B_2$-ratio [@CHENG].\[fig:flucs\]](fig12.pdf "fig:"){width=".32\textwidth"}
The FAIR facility, with its SIS-100 and SIS-300 programs, will allow the study of heavy ion beams with up to 35 AGeV beam energy at ultra-high luminosities and with extreme precision. The physics program will focus on the onset of deconfinement, the search for critical fluctuations, hadron properties in dense baryonic matter, and exotica, with an unprecedented breadth of observables. Most notable are the unique capabilities of the CBM experiment for the study of rare probes, like charmed hadrons, multi-strange hadrons and clusters, coupled with a state-of-the-art dilepton detector. This will open the route towards the exploration of hadron properties near the chirally restored phase via low mass dileptons, and allow for high-statistics flow measurements of D-mesons and $J/\Psi$s, the discovery of MEMOs and exotic quark-gluon states, and studies of (precursors of) the CFL state and Quarkyonic matter via dileptons and event-by-event fluctuations. Groundbreaking is planned for the beginning of 2012, and first beam is expected in 2017/2018 [@RICHTER].
Need for a joint theory effort
==============================
Let me finally address my wish list for developments on the theory side. Currently the theoretical approaches fall into three categories: theories in equilibrium, e.g. studies using effective Lagrangians (PNJL, quark-meson models, …) and lattice QCD; (viscous) hydrodynamic studies based on various equations of state; and transport simulations (including models based on the geometrical Glauber picture). It is evident that a unified picture of the dynamical evolution of QCD matter will have to include features of all these approaches to allow a consistent interpretation of all facets of the experimental data. With the extraction of transport coefficients and the EoS from lattice QCD and their use in hydrodynamic models, we have seen a first step towards this unification [@HUOVINEN-PETRECKY]. Previous studies have also shown that one can unite hydrodynamics with transport simulations in hybrid approaches to avoid initial state and final state ambiguities [@HYBRID]. It is even possible to go further and simulate the hydrodynamical behavior using only a Boltzmann approach [@CARSTEN]. An orthogonal approach with not yet fully recognized potential are real-time (as opposed to imaginary-time) lattice QCD studies [@NARA-DUMITRU], which offer the prospect of a truly dynamical evolution of the QCD system based on first principles.
From my point of view, the ultimate goal of these developments should be to design a single, open standard model for the description of (heavy) ion reactions, including a deconfinement transition of the proper order, the dynamics near the CEP, chiral symmetry with its breaking and restoration, multi-particle interactions, and off-shell dynamics. This goal can only be reached as a joint community effort. It will be worthwhile to pursue because it will provide a solid basis for the interpretation of the experimental data. First attempts at joint theory activities have already started with the MADAI collaboration and the TechQM initiative and should be broadened and internationalized.
Summary and outlook
===================
I have discussed the status of the low energy regime of heavy ion reactions, where the location of the critical end point and the onset of deconfinement are expected. A large body of experimental data has become available during this conference, confirming and tremendously extending the previously observed irregularities and highly interesting results from the CERN-SPS. Most noteworthy to me are the elliptic flow studies for $\phi$-mesons and the $v_1$ and $v_2^2(2)-v_2^2(4)$ studies at the RHIC-BES, which provide many additional insights and even more questions. Currently, the field is in a very comfortable situation, with dedicated running programs (RHIC-BES and NA61) that provide pioneering studies geared to pin down the CEP and the onset of deconfinement. The future is bright, with two upcoming facilities that will push these investigations to a new level in terms of data quantity and quality. These facilities will allow exploration of the properties of QCD matter in the respective (T,$\mu_B)$ regions with novel and dedicated probes like charm and dileptons, providing challenges and results for the next two decades. The ultimate goal will be a concise and unambiguous interpretation of the experimental data in terms of the equation of state and the transport properties of QCD matter at high baryon densities. This will require a coordinated, joint theory effort to model and understand the experimental results, not only in the large wavelength limit (i.e. hydrodynamics), but also at the microscopic level, to understand what actually makes up this liquid.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse, GSI, and BMBF.
References {#references .unnumbered}
==========
[36]{}
Misha Stephanov, talk at this conference and M. A. Stephanov, \[arXiv:1104.1627 \[hep-ph\]\]; M. A. Stephanov, Phys. Rev. Lett. [**102**]{}, 032301 (2009) \[arXiv:0809.3450 \[hep-ph\]\].
Zoltan Fodor, talk at this conference and Z. Fodor, S. D. Katz, \[hep-lat/0111064\].
Frithjof Karsch, talk at this conference and F. Karsch et al., Nucl. Phys. [**B129**]{}, 614 (2004).
O. Philipsen, \[arXiv:1009.4089 \[hep-lat\]\].
G. Endrodi, Z. Fodor, S. D. Katz, K. K. Szabo, JHEP [**1104**]{}, 001 (2011) \[arXiv:1102.1356\].
L. P. Csernai, D. Rohrich, Phys. Lett. [**B458**]{}, 454 (1999) \[nucl-th/9908034\]; J. Brachmann, S. Soff, A. Dumitru, H. Stoecker, J. A. Maruhn, W. Greiner, L. V. Bravina, D. H. Rischke, Phys. Rev. [**C61**]{}, 024909 (2000) \[nucl-th/9908010\].
Marek Gazdzicki, talk at this conference and K. Grebieszkow \[NA49 and NA61 Collaborations\], Acta Phys. Polon. [**B41**]{}, 427-440 (2010) \[arXiv:0911.1902 \[nucl-ex\]\].
Bedanga Mohanty (STAR), talk at this conference.
Y. Hama, F. Grassi, O. Socolowski, T. Kodama, M. Gazdzicki, M. Gorenstein, Acta Phys. Polon. [**B35**]{}, 179-182 (2004).
Christopher Anson, talk at this conference and M. A. Lisa, E. Frodermann, G. Graef, M. Mitrovski, E. Mount, H. Petersen, M. Bleicher, New J. Phys. [**13**]{}, 065006 (2011) \[arXiv:1104.5267 \[nucl-th\]\].
Paul Sorensen (STAR), talk at this conference.
Alexander Schmah (STAR), talk at this conference.
Marlene Nahrgang, talk at this conference and M. Nahrgang, M. Bleicher, S. Leupold, I. Mishustin, \[arXiv:1105.1962 \[nucl-th\]\]; M. Nahrgang, S. Leupold, M. Bleicher, \[arXiv:1105.1396 \[nucl-th\]\].
M. A. Stephanov, Prog. Theor. Phys. Suppl. [**186**]{}, 434-439 (2010).
F. Karsch, K. Redlich, Phys. Lett. [**B695**]{}, 136-142 (2011) \[arXiv:1007.2581 \[hep-ph\]\]; B. Friman, F. Karsch, K. Redlich, V. Skokov, \[arXiv:1103.3511 \[hep-ph\]\].
M. Cheng, P. Hendge, C. Jung, F. Karsch, O. Kaczmarek, E. Laermann, R. D. Mawhinney, C. Miao [*et al.*]{}, Phys. Rev. [**D79**]{}, 074505 (2009) \[arXiv:0811.1006 \[hep-lat\]\].
Alexander Sorin, talk at this conference.
Simone Richter, presentation of the status of FAIR, www-win.gsi.de/r3b/HIC4FAIR\_Talks/Richter\_StatusFAIR\_HIC4FAIR.ppt
Pasi Huovinen, talk at this conference and P. Huovinen, P. Petreczky, \[arXiv:1106.6227 \[nucl-th\]\].
Hannah Petersen, talk at this conference and H. Petersen, J. Steinheimer, G. Burau, M. Bleicher, H. Stocker, Phys. Rev. [**C78**]{}, 044901 (2008) \[arXiv:0806.1695 \[nucl-th\]\].
Bjorn Schenke, Sangyong Jeon, talks at this conference and B. Schenke, S. Jeon, C. Gale, Phys. Rev. [**C82**]{}, 014903 (2010) \[arXiv:1004.1408 \[hep-ph\]\].
I. Bouras, E. Molnar, H. Niemi, Z. Xu, A. El, O. Fochler, C. Greiner, D. H. Rischke, Phys. Rev. [**C82**]{}, 024910 (2010) \[arXiv:1006.0387 \[hep-ph\]\].
A. Dumitru, Y. Nara, Eur. Phys. J. [**A29**]{}, 65-69 (2006) \[hep-ph/0511242\].
---
abstract: 'Modern longitudinal studies feature data collected at many timepoints, often of the same order of sample size. Such studies are typically affected by [dropout]{} and positivity violations. We tackle these problems by generalizing effects of recent incremental interventions (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental effects when dropout is conditionally ignorable (without requiring treatment positivity), and derive the nonparametric efficiency bound for estimating such effects. Then we present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the efficiency of incremental effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints grows with sample size, and show that incremental effects yield near-exponential gains in this setup. Finally we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.'
author:
- |
Kwangho Kim,[^1] [^2]\
Edward H. Kennedy,[^3]\
and\
Ashley I. Naimi [^4]
bibliography:
- 'Methodology.bib'
title: |
**[Incremental Intervention Effects\
in Studies with Many Timepoints,\
Repeated Outcomes, and Dropout]{}**
---
[*Keywords:*]{} causal inference, right-censoring, observational study, positivity, efficient influence function, time-varying confounding, treatment effect, dense longitudinal data
Introduction
============
Causal inference has long been an important scientific pursuit, and understanding causal relationships is essential across many disciplines. However, for practical and ethical reasons, causal questions cannot always be evaluated via experimental methods (i.e., randomized trials), making observational studies the only viable alternative. Further, when individuals can be exposed to varying treatment levels over time, collecting appropriate longitudinal data is important. To that end, recent technological advancements that facilitate data collection are making longitudinal studies with a very large number of time points (sometimes of the same order of sample size) increasingly common [e.g., @kumar2013mobile; @eysenbach2011consort; @klasnja2015microrandomized].
The increase in observational studies with detailed longitudinal data has also introduced numerous statistical challenges that remain unaddressed. For longitudinal causal studies, two analytic frameworks are often invoked: *deterministic fixed interventions* [@robins1986; @robins2000marginal; @hernan2000marginal], in which all individuals are assigned a fixed exposure level over all timepoints; and *deterministic dynamic interventions* [@murphy2001marginal; @robins2004optimal], in which, at each time, treatment is assigned according to a fixed rule that depends on past history. In practice, fixed deterministic interventions may not be of practical interest, since treatment is typically not applied uniformly [@Kennedy17].
Generally, deterministic interventions (fixed or dynamic) rely on the [positivity assumption]{}, which requires every unit to have a nonzero chance of receiving each of the available treatments at every time point. If the positivity assumption is violated, the causal effect defined under deterministic (fixed or dynamic) interventions will no longer be identifiable. Even under positivity, longitudinal studies are especially prone to the curse of dimensionality, since exponentially many samples are needed to learn about all treatment trajectories. These issues only worsen as the number of timepoints or covariates increases. Thus, for lack of analytic methods for such longitudinal data, researchers are often forced to either rely on strong parametric assumptions or forego the estimation of causal effects altogether [e.g. @kumar2013mobile].
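To make the curse of dimensionality concrete: with a binary treatment and $T$ timepoints there are $2^T$ distinct treatment trajectories, so the expected number of subjects observed under any one trajectory collapses quickly. A back-of-the-envelope sketch (illustrative only; function names are ours):

```python
# Number of distinct deterministic treatment trajectories for a binary
# treatment observed at T timepoints is 2**T, so the expected number of
# subjects following any one trajectory shrinks exponentially in T.
def trajectories(T):
    return 2 ** T

def expected_per_trajectory(n, T):
    # Under uniform 50/50 treatment assignment at each timepoint.
    return n / trajectories(T)

print(trajectories(10))                     # 1024 possible trajectories
print(expected_per_trajectory(10_000, 10))  # ~9.8 subjects per trajectory
print(expected_per_trajectory(10_000, 30))  # ~9.3e-06 -- essentially no data
```

With as few as 30 timepoints, even ten thousand subjects leave essentially no data per trajectory, which is why nonparametric estimation of trajectory-specific means becomes hopeless without further structure.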
Recently, [@Kennedy17] proposed novel *incremental intervention effects*, which quantify the effect of shifting treatment propensities rather than the effects of setting treatment to fixed values. An incremental intervention is a stochastic intervention in that it depends on unit characteristics and is random at each timepoint [see @young2014identification; @munoz2012population; @haneuse2013estimation; @moore2012causal for prior work on stochastic interventions whose setup is relevant to our study]. Importantly, incremental effect estimators do not require positivity, and can still achieve $\sqrt{n}$-rates regardless of the number of timepoints, even when nonparametric methods are used. Despite these strengths, the method has not been adapted to general longitudinal studies, where multiple right-censored outcomes are common (particularly for human subjects). Additionally, the relative efficiency of such incremental intervention effects over traditional deterministic effects has never been formally assessed, either theoretically or empirically, especially for very dense longitudinal data with a large number of timepoints.
In this paper we propose a more comprehensive form of incremental intervention effects that accommodates not only time-varying treatments, but also time-varying outcomes subject to right-censoring (i.e., dropout). We provide an identifying expression for incremental effects when dropout is conditionally ignorable, still without requiring (treatment) positivity, and derive the nonparametric efficiency bound for estimating such effects. We go on to present efficient nonparametric estimators, showing that they converge at fast rates and give uniform inferential guarantees, even when the nuisance functions are estimated with flexible machine learning tools at much slower rates, under weak conditions. Importantly, we also study the efficiency of incremental effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints can grow with sample size to infinity. We specifically show that incremental effects can yield near-exponential gains in this setup. Finally, we conclude with a simulation study and apply our methods to a longitudinal study of the effect of low-dose aspirin on pregnancy outcomes to demonstrate the effectiveness of our method.
Setup
=====
We consider a study where for each subject we observe covariates $X_t \in \mathbb{R}^d$, treatment $A_t \in \mathbb{R}$, and outcome $Y_t \in \mathbb{R}$, with all variables allowed to vary over time, but where subjects can drop out or be lost to follow-up. In particular, we observe a set of i.i.d samples $(Z_1, ... , Z_n)$ from a probability distribution ${\mathbb{P}}$ where, for those subjects who remain in the study up to the final timepoint $t=T$, we observe $$Z = (X_1, A_1, Y_1, X_2, A_2, Y_2, ..., X_T, A_T, Y_T).$$ But in general we only get to observe $$\begin{aligned}
\label{setup:causal-process}
Z = \left(X_1, A_1, R_2, R_2(Y_1, X_2, A_2), ... , R_T, R_T(Y_{T-1}, X_T, A_T), R_{T+1}, R_{T+1}Y_{T} \right)
\end{aligned}$$ with $R_t = \mathbbm{1} \text{\{ still in the study at time t \}}$ an indicator for whether the subject contributes data at time $t$. We write $R_t(Y_{t-1}, X_t, A_t)$ as a shorthand notation of $(R_tY_{t-1}, R_tX_t, R_tA_t)$, so the missingness process we consider is one where subjects can drop out at each time after the measurement of covariates/treatment. This is motivated by the fact that this is likely the most common type of dropout, since outcomes $Y_t$ at time $t$ are often measured together with or just prior to covariates $X_{t+1}$ at time $t+1$. Since we consider a monotone dropout (i.e., right-censoring) process, $R_t$ is non-increasing in time $t$, i.e., $$\begin{aligned}
&
\begin{cases}
R_t = 1 \ \Rightarrow & {(R_1,...,R_{t-1}) = \bm{1} } \\
R_t = 0 \ \Rightarrow & {(R_{t+1},...,R_T) = \bm{0} },
\end{cases}
\end{aligned}$$ where $\bm{0}, \bm{1}$ are vectors of zeros and ones. Thus our data structure $Z$ is a chain with $t$-th component $$\left\{R_t, R_t(Y_{t-1}, X_t, A_t)\right\}$$ for $t = 1,...,T+1$, where $R_1=1$ and we do not use $Y_{0}$ or $X_{T+1}, A_{T+1}$. Although we suppose that each subject’s dropout occurs just before the measurements at stage $t$, our data structure also covers the case where dropout occurs after stage $t$, since in that case we can write $$\left\{R_t(Y_{t-1}, X_t, A_t), R_{t+1} \right\}$$ as the $t$-th component of our chain and the general structure remains the same.
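The monotonicity of the dropout indicators can be stated as a simple check; the helper below is purely illustrative (our own naming, not part of any estimation procedure):

```python
def is_monotone_dropout(r):
    """Check that a sequence of dropout indicators R_1,...,R_T is
    non-increasing: once a subject drops out (R_t = 0), every later
    indicator must also be 0 (no re-entry into the study)."""
    return all(r[t] >= r[t + 1] for t in range(len(r) - 1))

print(is_monotone_dropout([1, 1, 1, 0, 0]))  # True: drops out after t = 3
print(is_monotone_dropout([1, 0, 1, 0, 0]))  # False: re-entry is not allowed
```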
For simplicity, we consider binary treatment in this paper, so that the support of each $A_t$ is $\mathcal{A} = \{0,1\}$. We use overbars and underbars to denote all the past history and future event of a variable respectively, so that $\overline{X}_t = (X_1, ... , X_t)$ and $\underline{A}_t = (A_t, ... , A_T)$ for example. We also write $H_t = ( \overline{X}_t, \overline{A}_{t-1}, \overline{Y}_{t-1})$ to denote all the observed past history just prior to treatment at time $t$, with support $\mathcal{H}_t$. Finally, we use lower-case letters $a_t, h_t, x_t$ to represent realized values for $A_t, H_t, X_t$ respectively, unless stated otherwise.
Now that we have defined our data structure we turn to our estimation goal, i.e., which treatment effect we aim to estimate. Since we are interested in causal inference we use potential outcomes $Y_t^{\overline{a}_t}$ to denote the counterfactual outcome at time $t$ that would have been observed under a treatment sequence $\overline{a}_t=(a_1,...,a_t)$ (note we have $Y_t^{\overline{a}_T}=Y_t^{\overline{a}_t}$ as long as the future cannot cause the past). In longitudinal causal problems it is common to pursue quantities such as ${\mathbb{E}}(Y_t^{\overline{a}_{t}})$, i.e., the mean outcome at a given time under a particular treatment sequence $\overline{a}_t$; for example one might compare the mean outcome under $\overline{a}_t=\bm{1}$ versus $\overline{a}_t=\bm{0}$, which represents how outcomes would change if all versus none were treated at all times. However, identifying these effects requires strong positivity assumptions (i.e., that every unit has some chance of receiving every treatment at every time), and estimating them often requires untenable parametric assumptions when there are more than a few timepoints.
Following [@Kennedy17] we instead consider incremental intervention effects, which represent how mean outcomes would change if the odds of treatment at each time were multiplied by a factor $\delta$ (e.g., $\delta=2$ means odds of treatment are doubled). Incremental interventions shift propensity scores rather than impose treatments themselves; they represent what would happen if treatment were slightly more or less likely to be assigned, relative to the natural/observational treatment. There are a number of benefits of studying incremental intervention effects: for example, positivity assumptions can be entirely and naturally avoided; complex effects under a wide range of intensities can be summarized with a single curve in $\delta$, no matter how many timepoints $T$ there are; and they more closely align with actual intervention effects than their fixed treatment regime counterparts. We refer to [@Kennedy17] for more discussion and details.
Formally, incremental interventions are dynamic stochastic interventions where treatment is not assigned based on the observational propensity scores $\pi_t(h_t) = {\mathbb{P}}(A_t=1 \mid H_t=h_t)$; instead these propensity scores are replaced by new interventional propensity scores given by $$\begin{aligned}
\label{eqn:incr-intv-ps}
q_t(h_t; \delta,\pi_t) = \frac{\delta\pi_{t}(h_t)}{\delta\pi_{t}(h_t) + 1 - \pi_{t}(h_t)}
\end{aligned}$$ to ensure the odds of treatment are multiplied by $\delta$. We denote potential outcomes under the above intervention as $Y_{t}^{\overline{Q}_t(\delta)}$ where $\overline{Q}_t(\delta) = \{Q_1(\delta), ... , Q_{t}(\delta)\}$ represents draws from the conditional distributions $Q_s(\delta) \mid H_s=h_s \sim \text{Bernoulli}\{q_s( h_s; \delta, \pi_s)\}$, $s=1,...,t$. We often drop $\delta$ and write $Q_t = Q_t(\delta)$ when the dependence is clear from the context. Note here we use capital letters for the intervention indices since they are random, as opposed to $Y_t^{\overline{a}_t}$ where the intervention is deterministic. Therefore in this paper we aim to estimate the mean counterfactual outcome $$\psi_t(\delta) = {\mathbb{E}}\left(Y_t^{\overline{Q}_t(\delta) } \right)$$ for any $t\leq T$. This goal is different from [@Kennedy17] in that we allow varying outcomes over time and dropout/right-censoring. Thus in the next section we describe the necessary conditions for identifying $\psi_t(\delta)$ in the presence of dropout.
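The defining property of the interventional propensity score above is that it multiplies the odds of treatment by exactly $\delta$. A quick numerical check (the values of $\pi_t$ and $\delta$ here are hypothetical, and the function names are ours):

```python
def q(pi, delta):
    # Interventional propensity score: shifts the odds of treatment by delta.
    return delta * pi / (delta * pi + 1 - pi)

def odds(p):
    return p / (1 - p)

pi, delta = 0.3, 2.0
print(q(pi, delta))                              # 0.4615...
print(round(odds(q(pi, delta)) / odds(pi), 12))  # 2.0: odds exactly doubled
```

Note also that $q$ is well defined for $\pi_t(h_t) \in \{0, 1\}$ (it returns 0 and 1 respectively), which is precisely why no positivity condition on the treatment process is needed.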
Identification
==============
In this section, we will give assumptions under which the entire marginal distribution of the resulting counterfactual outcome $Y_t^{\overline{Q}_t(\delta)}$ is identified. Specifically, we require the following assumptions for all $t \leq T$.
[A1]{} \[assumption:A1\] $Y = Y^{\overline{a}_T}$ if $\overline{A}_T = \overline{a}_T$
[A2-E]{} \[assumption:A2-E\] $A_t {\protect\mathpalette{\protect\independenT}{\perp}}Y^{\overline{a}_{T}} \mid H_{t}$
[A2-M]{} \[assumption:A2-M\] $R_t {\protect\mathpalette{\protect\independenT}{\perp}}(\underline{X}_t, \underline{A}_t, Y) \mid H_{t-1}, A_{t-1}, R_{t-1}=1$
[A3]{} \[assumption:A3\] ${\mathbb{P}}(R_{t} = 1 \mid H_{t-1}, A_{t-1}, R_{t-1}=1)$ is bounded away from 0 a.e. $[{\mathbb{P}}]$
Assumptions (\[assumption:A1\]) and (\[assumption:A2-E\]) correspond to consistency and exchangeability conditions respectively, which are commonly used in causal inference problems. Consistency means the observed outcomes are equal to the corresponding potential outcomes under the observed treatment sequence, and would be violated in settings with interference, for example. Exchangeability means that the treatment and counterfactual outcome are independent, conditional on the observed past (if there were no dropout), i.e., that treatment is as good as randomized at each time conditional on the past. Experiments ensure exchangeability holds by construction, but in observational studies it requires sufficiently many relevant adjustment covariates ($H_t$ in our case) to be collected.
In this paper, we additionally require assumptions (\[assumption:A2-M\]) and (\[assumption:A3\]) because of the missingness/dropout. (\[assumption:A2-M\]) is a time-varying missing-at-random assumption, ensuring that dropout is independent of the future (and of the underlying missing data values), conditional on the observed history up to the current time point. This is a reasonable assumption if we can collect enough data to explain the dropout process, so that those who drop out look like those who do not, given all past observed data. (\[assumption:A3\]) is a positivity assumption for missingness, meaning that each subject in the study has some non-zero chance of staying in the study at the next timepoint. This would be expected to hold in many studies, but may not if some subjects are ‘doomed’ to drop out based on their specific measured characteristics. [Note that assumptions (\[assumption:A2-M\]) and (\[assumption:A3\]) also appear in more classical work on missing data [e.g. @robins1995analysis; @robins1994estimation]]{}.
Importantly, we do not need any positivity conditions on the propensity scores, since we are targeting incremental effects as defined in (\[eqn:incr-intv-ps\]) rather than more common deterministic effects. The next result gives an identifying expression for the incremental effect under the above assumptions.
\[thm:ident-exp\] Suppose identification assumptions (\[assumption:A1\]) - (\[assumption:A3\]) hold. Then the incremental effect on outcome $Y$ at time $t$ with given value of $\delta \in [\delta_l, \delta_u]$ for $0 < \delta_l \leq \delta_u < \infty$ equals $$\label{eqn:ident-exp}
\begin{aligned}
& \psi_t(\delta) \\
& = \underset{\overline{\mathcal{X}}_{t}\times \overline{\mathcal{A}}_{t}}{\int} \mu(h_{t},a_{t}, R_{t+1}=1) \prod_{s=1}^{t} q_s(a_s \mid h_s, R_s=1) d\nu(a_s) \ d{\mathbb{P}}(x_s \mid h_{s-1},a_{s-1}, R_s=1)
\end{aligned}$$ for $t \leq T$, where $\overline{\mathcal{X}}_{t} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_t$, $\overline{\mathcal{A}}_{t} = \mathcal{A}_1 \times \cdots \times \mathcal{A}_t$, $\mu(h_{t},a_{t}, R_{t+1}=1) = {\mathbb{E}}(Y_{t} \mid H_{t} = h_{t}, A_{t} = a_{t}, R_{t+1}=1)$, and $$\label{eqn:incremental-density}
q_s(a_s \mid h_s, R_s=1)=\frac{a_s\delta\pi_{s}(h_s, R_s=1) + (1-a_s) \{ 1 - \pi_s(h_s, R_s=1) \} }{\delta\pi_{s}(h_s, R_s=1) + 1 - \pi_{s}(h_s, R_s=1)}.$$ with $\pi_{s}(h_s, R_s=1)={\mathbb{P}}(A_s=1 \mid H_s=h_s,R_s=1)$ and a dominating measure $\nu$ for the distribution of $A_s$.
Theorem \[thm:ident-exp\] follows from Theorem 1 in [@Kennedy17] and Lemma \[lem:identification\] given in the Appendix. Note that $q_s(a_s \mid h_s, R_s=1)$ is the propensity score under the incremental intervention. The identifying expression (\[eqn:ident-exp\]) shows that the mean counterfactual outcome $\psi_t(\delta)$ is identified and can be expressed in terms of the observed data distribution ${\mathbb{P}}$.
As mentioned earlier, without the additional assumptions (\[assumption:A2-M\]) and (\[assumption:A3\]) together with Lemma \[lem:identification\], the intervention effect $\psi_t(\delta)$ would in general not be identifiable in the setting considered by @Kennedy17, due to the dropout. It is also worth noting that we make no parametric assumptions here, and the censoring process is likewise allowed to be model-free. Theorem \[thm:ident-exp\] therefore extends previous results on incremental interventions to studies with arbitrary time-varying outcomes and missing-at-random style dropout.
To illustrate, the next corollary shows what the identification result gives in the simple setting where there is only one timepoint, so dropout amounts to mere missing outcomes.
\[cor:ident-exp-pt-trt\] When $T=1$, the data structure reduces to $$\begin{aligned}
Z=(X, A, R, RY)
\end{aligned}$$ where $R=1$ means the outcome was not missing. Then the identifying expression for $\psi(\delta)$ simplifies to $$\begin{aligned}
&\psi(\delta) = {\mathbb{E}}\left[ \frac{\delta\pi(X)\mu(X,1,1) + \{1 - \pi(X)\}\mu(X,0,1) }{\delta\pi(X) + \{1 - \pi(X)\}} \right]
\end{aligned}$$ where $\pi(X) = {\mathbb{P}}(A=1 \mid X)$ and $\mu(x,a,1) = {\mathbb{E}}(Y \mid X = x, A = a, R=1)$.
Therefore when $T=1$ the effect $\psi(\delta)$ is simply a weighted average of the regression functions $\mu(X,1,1)$ and $\mu(X,0,1)$ among those with observed outcomes, with weights depending on the observational propensity scores and $\delta$.
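For a discrete covariate this $T=1$ expression can be evaluated directly. The sketch below uses hypothetical values of $\pi(x)$ and $\mu(x,a,1)$, and illustrates that $\psi(\delta)$ interpolates between ${\mathbb{E}}\{\mu(X,0,1)\}$ as $\delta \to 0$ and ${\mathbb{E}}\{\mu(X,1,1)\}$ as $\delta \to \infty$:

```python
# Hypothetical discrete example: X uniform on {0, 1},
# pi[x] = P(A=1 | X=x), mu[x][a] = E(Y | X=x, A=a, R=1).
pi = {0: 0.3, 1: 0.7}
mu = {0: {0: 0.0, 1: 2.0}, 1: {0: 1.0, 1: 3.0}}

def psi(delta):
    # psi(delta) = E[ (delta*pi*mu1 + (1-pi)*mu0) / (delta*pi + 1 - pi) ]
    vals = [
        (delta * pi[x] * mu[x][1] + (1 - pi[x]) * mu[x][0])
        / (delta * pi[x] + 1 - pi[x])
        for x in (0, 1)
    ]
    return sum(vals) / 2  # X uniform on {0, 1}

print(round(psi(1.0), 10))   # 1.5  (observational mean outcome)
print(round(psi(1e6), 4))    # 2.5 = E[mu(X,1,1)] as delta -> infinity
print(round(psi(1e-6), 4))   # 0.5 = E[mu(X,0,1)] as delta -> 0
```

At $\delta = 1$ the interventional propensity equals the observational one, so $\psi(1)$ recovers the observed mean outcome, as the printout confirms.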
Efficiency Theory {#sec:efficiency-theory}
=================
In the previous section, we showed the incremental intervention effect adjusted for right-censoring and repeated outcomes can be identified under weak nonparametric assumptions, without requiring any positivity conditions on the treatment process. Our main goal in this section is to develop a nonparametric efficiency theory for the incremental effect, via the efficient influence function for $\psi_t(\delta)$.
The efficient influence function is a crucial object in non/semiparametric efficiency theory because 1) its variance gives an asymptotic efficiency bound that cannot be improved upon without adding assumptions, and 2) its form indicates how to do appropriate bias correction in order to construct estimators that attain the efficiency bound under weak conditions. Mathematically, an influence function $\phi$ acts as the derivative term in a distributional Taylor expansion of the functional of interest, which can be seen to imply $$\label{def:influence-function-1}
\frac{\partial \psi({\mathbb{P}}_\epsilon)}{\partial\epsilon} \Big\vert_{\epsilon=0} = \int \phi(z;{\mathbb{P}}) \left( \frac{\partial \log d{\mathbb{P}}_\epsilon(z)}{\partial\epsilon} \right)\Big\vert_{\epsilon=0} \ d{\mathbb{P}}(z)$$ for all smooth parametric submodels ${\mathbb{P}}_\epsilon$ containing the true distribution so that ${\mathbb{P}}_{\epsilon=0}={\mathbb{P}}$. For more details we refer to @Bickel98 [@vaart98; @VanAndRobin03; @Tsiatis06; @Kennedy16], as well as Section \[sec:ifdetails\] in the Appendix.
The main result in this section gives the efficient influence function of the incremental effect $\psi_t(\delta)$ on an outcome at arbitrary time $t$ in the presence of dropout, as defined in the identification result of the previous section.
\[thm:eif\] The efficient influence function for the intervention effect $\psi_t(\delta)$ under a nonparametric model is given by $$ \begin{aligned}
& \sum_{s=0}^{t} \left\{ \frac{ \{A_s - \pi_s(H_s)\}(1-\delta)}{\delta A_s + 1-A_s} \right\} \left[ \frac{m_s(H_s,1,R_{s+1}=1)\delta\pi_s(H_s)+m_s(H_s,0,R_{s+1}=1)\{ 1-\pi_s(H_s) \}}{\delta\pi_s(H_s) + 1 - \pi_s(H_s)} \right] \\
& \qquad \times\omega_s( H_s, A_s) \left( \prod_{k=1}^{s} \frac{\delta A_k + 1-A_k}{\delta\pi_k(H_k) + 1-\pi_k(H_k)} \cdot\frac{\mathbbm{1}\left(R_{s+1}=1\right)}{\omega_s( H_s, A_s)} \right) \\
& + \prod_{s=1}^{t} \left\{ \frac{\delta A_s + 1-A_s}{\delta\pi_s(H_s) + 1-\pi_s(H_s)}\cdot\frac{\mathbbm{1}\left(R_{s+1}=1\right)}{\omega_s( H_s, A_s)}\right\}Y_{t} - \psi_t( \delta)
\end{aligned}$$ where $\pi_s(h_s) = {\mathbb{P}}(A_s=1 \mid H_s=h_s, R_s=1)$, $\omega_s( H_s, A_s) = d{\mathbb{P}}(R_{s+1}=1 \mid H_s, A_s,R_s=1)$, and $$\begin{aligned}
m_s&(h_s,a_s, R_{s+1}=1) \\
& = \int_{ \mathcal{R}_s} \mu(h_{t},a_{t}, R_{t+1}=1) \prod_{k=s+1}^{{t}} q_k(a_k \mid h_k, R_k=1) d\nu(a_k) d{\mathbb{P}}(x_k|h_{k-1},a_{k-1}, R_k=1)
\end{aligned}$$ for $\forall s \leq t$, where $\mathcal{R}_s = (\overline{\mathcal{X}}_{t}\times \overline{\mathcal{A}}_{t}) \setminus (\overline{\mathcal{X}}_{s}\times \overline{\mathcal{A}}_{s})$, $\mu(h_{t},a_{t}, R_{t+1}=1) = {\mathbb{E}}(Y_t \mid H_{t} = h_{t}, A_{t} = a_{t}, R_{t+1}=1)$, and $\nu$ is a dominating measure for the distribution of $A_k$.
A proof can be found in the Appendix \[proof:thm-eif\]. In the proof, we first find an identifying expression of the efficient influence function for our target parameter $\psi_t(\delta)$ and then convert it into the more succinct, estimable form in Theorem \[thm:eif\]. Note that in Theorem \[thm:eif\], all terms are either directly available from the data or estimable, e.g., via regression tools, hinting that we can estimate the efficient influence function (and its mean) to use for bias correction. Our results generalize previous ones for incremental interventions by allowing time-varying outcomes and dropout: as might be expected, if there is no censoring (i.e., ${\mathbb{P}}[R_t=1]=1$ a.e. $[{\mathbb{P}}]$ for all $t\leq T$) then both the identifying expression and the efficient influence function reduce to the expressions presented by @Kennedy17.
The efficient influence function in Theorem \[thm:eif\] consists of an augmentation term and a product term, both of which are quite different from those that appear in estimators for more standard causal effects. In fact, the product term (the last term involving $Y_t$) is an inverse-probability-weighted estimator for the case when $\pi_s,\omega_s$ are modeled with correctly specified parametric models for all $s$, which will be discussed in more detail in the next section. The structure of the quotient terms is rooted in the form of our new incremental interventional score defined in (\[eqn:incremental-density\]). It is worth noting that each such quotient term is now multiplied by $\frac{\mathbbm{1}\left(R_{s+1}=1\right)}{\omega_s( H_s, A_s)}$ to adjust for dropout at each stage $s$.
The above efficient influence function involves three types of nuisance functions: the treatment propensity scores $\pi_s(H_s)$, the missingness propensity scores $\omega_s( H_s, A_s)$ and the outcome regressions $m_s(H_s,A_s,R_{s+1}=1)$ for $s\leq t$. The propensity scores $\pi_s(H_s)$ and $\omega_s( H_s, A_s)$ can be directly estimated via arbitrary regression methods. The outcome regressions $m_s$ are marginalized versions of the full regression function $\mu(h_{s},a_{s}, R_{s+1}=1)$ that condition on all of the past, so smaller values of $s$ correspond to more marginalization. In the Appendix \[sequential-regression-formulation\] we give a sequential regression formulation for these outcome regressions to indicate how they might be estimated without resorting to complicated conditional density estimation.
The efficient influence function in the $T=1$ case follows a relatively simple and intuitive form, equaling a weighted average of the efficient influence functions for ${\mathbb{E}}(Y^1)$ and ${\mathbb{E}}(Y^0)$ plus some contribution from the estimation of the treatment propensity scores. We give this result in the Appendix \[eif-for-T=1\].
Estimation and Inference
========================
Proposed Estimator {#subsec:proposed-estimator}
------------------
In this section we develop an estimator that can attain fast $\sqrt{n}$ convergence rates, even when the nuisance functions are modeled nonparametrically and estimated at rates slower than $\sqrt{n}$.
To begin, let $\varphi(Z;\bm{\eta},\delta, t)$ denote the uncentered efficient influence function from Theorem \[thm:eif\], which is a function of the observations $Z$ and the set of nuisance functions $$\bm{\eta} = (\bm{\pi, m, \omega}) = \left(\pi_1,...,\pi_{t}, m_1,...,m_{t}, \omega_1,...,\omega_{t} \right)$$ for any $t \leq T$, where $\pi_t, m_t, \omega_t$ are the same nuisance functions defined in Theorem \[thm:eif\]. Thus ${\mathbb{E}}[\varphi(Z;\bm{\eta},\delta, t)] = \psi_t (\delta)$.
A natural estimator of $\psi_t(\delta)$ is given by the solution to the efficient influence function estimating equation, i.e., the naive plug-in $Z$-estimator $$\hat{\psi}_{inc.pi}(t; \delta) = {\mathbb{P}}_n \{ \varphi(Z;\hat{\bm{\eta}},\delta, t) \}$$ where $\hat{\bm{\eta}}$ are regression-based estimators of the nuisance functions directly plugged into the efficient influence function, and ${\mathbb{P}}_n$ denotes the empirical measure so that sample averages can be written as $\frac{1}{n}\sum_i f(Z_i) = {\mathbb{P}}_n\{f(Z)\} = \int f(z) d{\mathbb{P}}_n(z)$.
Note also that if we could assume $\pi_{s}$ and $\omega_{s}$ were modeled with correctly specified parametric models, then one could use the following simple inverse-probability-weighted (IPW) estimator $$\hat{\psi}_{inc.ipw}(t; \delta) = {\mathbb{P}}_n \left\{\prod_{s=1}^{t} \left( \frac{\delta A_s + 1-A_s }{\delta\hat{\pi}_s(H_s) + 1-\hat{\pi}_s(H_s)} \cdot \frac{\mathbbm{1}\left( R_{s+1}=1 \right)}{\hat{\omega}_{s}(H_s, A_s)} \right)Y_t \right\}.$$ Note that this IPW estimator is a special case of $\hat{\psi}_{inc.pi}$ in which $\hat{m}_s$ is set to zero for all $s$.
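Concretely, the per-subject weight in this IPW estimator is a running product of incremental treatment ratios and inverse dropout probabilities. A minimal sketch, with entirely hypothetical trajectories and nuisance values:

```python
def incremental_ipw_weight(a_seq, pi_seq, r_next_seq, omega_seq, delta):
    """Cumulative weight prod_s [(delta*A_s + 1 - A_s) /
    (delta*pi_s + 1 - pi_s)] * 1(R_{s+1}=1) / omega_s for one subject."""
    w = 1.0
    for a, pi, r_next, omega in zip(a_seq, pi_seq, r_next_seq, omega_seq):
        w *= (delta * a + 1 - a) / (delta * pi + 1 - pi)
        w *= (1.0 if r_next == 1 else 0.0) / omega
    return w

# With delta = 1 (no shift) and no dropout (omega = 1, R = 1 throughout),
# the weight reduces to 1 for every subject:
print(incremental_ipw_weight([1, 0, 1], [0.5, 0.25, 0.75],
                             [1, 1, 1], [1.0, 1.0, 1.0], delta=1.0))  # 1.0
```

A subject who drops out at any stage receives weight zero, so the estimator averages only over subjects observed through time $t$, reweighted for both the incremental shift and the dropout mechanism.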
However, developing general $Z$-estimators with the desired convergence rates requires empirical process conditions that restrict the flexibility and complexity of the nuisance estimators. This is due to using the data twice (once for estimating the nuisance functions, and again for estimating the average of the uncentered influence function), which can cause overfitting. Hence, to avoid this downside and to make our estimator more practically useful, we use sample splitting, following [@zheng10; @chernozhukov16double; @Kennedy17; @robins2008estimation]. As will be seen shortly, our estimator can attain fast parametric $\sqrt{n}$ rates even when all the nuisance functions $\bm{\eta}$ are estimated consistently at much slower rates than $\sqrt{n}$. Hence we can be more flexible in employing nonparametric methods in our model.
To this end we randomly split the observations $(Z_1, ..., Z_n)$ into $K$ disjoint groups, using a random variable $S$ drawn independently of the data, where $S_i \in \{1,...,K\}$ denotes the group membership for unit $i$. Then our proposed estimator is given by $$\label{eqn::proposed-estimator}
\widehat{\psi}_t (\delta) = \frac{1}{K}\sum_{k=1}^{K} {\mathbb{P}}_n^{(k)} \{ \varphi(Z;\hat{\bm{\eta}}_{-k},\delta, t) \} \equiv {\mathbb{P}}_n\left\{\varphi(Z;\hat{\bm{\eta}}_{-S},\delta, t)\right\}$$ where we let ${\mathbb{P}}_n^{(k)}$ denote empirical averages only over the set of units $\{i : S_i = k\}$ in group $k$, and let $\hat{\bm{\eta}}_{-k}$ denote the nuisance estimator constructed excluding group $k$. We detail exactly how to compute the proposed estimator $\widehat{\psi}_t(\delta)$ in Algorithm \[algorithm-1\] in section \[sec:algorithm\] of the Appendix.
Computing the estimator is easily parallelizable due to the sample splitting. Note also that it only requires estimating regression functions and not conditional densities, by virtue of the recursive regression formulation of the functions $m_t$ discussed in Remark \[rmk:m\_s\] in the Appendix. It is also worth noting that our method effectively utilizes all the observable samples at each time $t$ to estimate the functions $m_t$.
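The sample-splitting scheme itself is straightforward to implement. The sketch below cross-fits a plug-in version of the $T=1$ estimator from Corollary \[cor:ident-exp-pt-trt\], using cell means on held-out folds as (deliberately simple, hypothetical) nuisance estimators; it is a simplified illustration on synthetic data, not the full influence-function-based estimator:

```python
import random

def simulate(n, rng):
    # Synthetic T=1 data: X in {0,1}, pi(0)=0.3, pi(1)=0.7,
    # dropout with P(R=1)=0.8, deterministic outcome Y = X + 2A when observed.
    data = []
    for _ in range(n):
        x = rng.randint(0, 1)
        a = 1 if rng.random() < (0.7 if x else 0.3) else 0
        r = 1 if rng.random() < 0.8 else 0
        y = x + 2 * a if r else None
        data.append((x, a, r, y))
    return data

def fit_nuisances(train):
    # Cell-mean estimates of pi(x) and mu(x, a) among the uncensored (R=1).
    pi_hat, mu_hat = {}, {}
    for x in (0, 1):
        ax = [a for (xx, a, _, _) in train if xx == x]
        pi_hat[x] = sum(ax) / len(ax)
        for a in (0, 1):
            ys = [y for (xx, aa, r, y) in train if xx == x and aa == a and r]
            mu_hat[(x, a)] = sum(ys) / len(ys)
    return pi_hat, mu_hat

def cross_fit_psi(data, delta, K=2):
    # Average fold-specific plug-ins, with nuisances fit on each fold's
    # complement -- the P_n^{(k)} averaging of the sample-split estimator.
    folds = [data[k::K] for k in range(K)]
    total, count = 0.0, 0
    for k in range(K):
        train = [z for j in range(K) if j != k for z in folds[j]]
        pi_hat, mu_hat = fit_nuisances(train)
        for (x, _, _, _) in folds[k]:
            p = pi_hat[x]
            total += (delta * p * mu_hat[(x, 1)]
                      + (1 - p) * mu_hat[(x, 0)]) / (delta * p + 1 - p)
            count += 1
    return total / count

rng = random.Random(0)
data = simulate(4000, rng)
print(cross_fit_psi(data, delta=1.0))  # close to 1.5, the true value here
```

The K-fold structure mirrors the estimator in (\[eqn::proposed-estimator\]): each fold is evaluated with nuisance estimates trained only on the other folds, which is what removes the empirical process conditions discussed above.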
Convergence Theory
------------------
Now we provide a theorem that details the main large-sample property of our proposed estimator. In the theorem we verify that $\widehat{\psi}_t(\delta)$ is $\sqrt[]{n}$-consistent and asymptotically normal even when all the nuisance functions are estimated at much slower than $n^{-1/2}$ rates. In what follows we denote the $L_2({\mathbb{P}})$ norm of a function $f$ by $\Vert f \Vert = \left(\int f(z)^2 d{\mathbb{P}}(z) \right)^{1/2}$. Moreover, note that the pseudo-regression functions $m_t$ defined in Theorem \[thm:eif\] can be indexed by both time $t$ and the given increment parameter $\delta$ as $m_{t,\delta}$ if necessary. The next theorem shows uniform convergence of $\hat{\psi}_t (\delta)$.
\[thm:convergence\] Define the variance function as $\sigma^2(\delta, t)={\mathbb{E}}\left[\left(\varphi(Z;\bm{\eta},\delta, t) - \psi_t(\delta) \right)^2 \right]$ and let $\hat{\sigma}^2(\delta, t)={\mathbb{P}}_n\left[\left(\varphi(Z;\hat{\bm{\eta}}_{-S},\delta, t) - \hat{\psi}_t(\delta) \right)^2 \right]$ denote its estimator. Assume:
- The set $\mathcal{D}=[\delta_l, \delta_u]$ is bounded with $0 < \delta_l \leq \delta_u < \infty$.
- ${\mathbb{P}}\left[ \mid m_t(H_t, A_t, R_{t+1}=1) \mid \leq C \right]= {\mathbb{P}}\left[ \mid \hat{m}_t(H_t, A_t, R_{t+1}=1) \mid \leq C \right] = 1$ for some constant $C<\infty$ and $\forall t$.
- $\sup_{\delta \in \mathcal{D}} \big| \frac{\hat{\sigma}^2(\delta, t)}{\sigma^2(\delta, t)} -1 \big| = o_{\mathbb{P}}(1)$, and $\big\Vert \sup_{\delta \in \mathcal{D}} \vert \varphi(Z;\bm{\eta},\delta, t) - \varphi(Z;\hat{\bm{\eta}}_{-S},\delta, t) \vert \big\Vert= o_{\mathbb{P}}(1)$.
- $
\left( \sup_{\delta\in \mathcal{D}}\| m_{\delta,t} - \widehat{m}_{\delta,t} \| + \| \pi_t - \widehat{\pi}_{t} \| \right) \Big( \| \widehat\pi_s - {\pi}_s \| + \| \widehat\omega_s - {\omega}_s \| \Big) = o_{\mathbb{P}}\left(\frac{1}{\sqrt{n}}\right)
$ for all $s \leq t$.
Then we have $$\frac{\hat{\psi}_t (\delta) - \psi_t (\delta)}{\hat{\sigma}(t, \delta)/\sqrt[]{n}} \leadsto \mathbb{G}(\delta, t)$$ in $l^{\infty}(\mathcal{D})$, where $\mathbb{G}$ is a mean-zero Gaussian process with covariance ${\mathbb{E}}[\mathbb{G}(\delta_1, t_1)\mathbb{G}(\delta_2, t_2)]={\mathbb{E}}\left[\widetilde{\varphi}(Z;\bm{\eta},\delta_1, t_1) \widetilde{\varphi}(Z;\bm{\eta},\delta_2, t_2)\right]$ and $\widetilde{\varphi}(Z;\bm{\eta},\delta, t) = \frac{\varphi(Z;\bm{\eta},\delta, t) - \psi_t(\delta)}{\sigma(\delta, t)}$.
The above theorem lays the foundation for inference; its proof is given in Appendix \[proof:thm-6-1\]. We analyze the second-order remainder terms of the efficient influence function given in Lemma \[lem:eif\], and keep the intervention distribution completely general (see sections \[lem:remainder\_1\] and \[lem:remainder\_2\] in the Appendix). Therefore, the results can be applied to study other stochastic interventions under the presence of right-censoring, which is beyond the scope of this paper.
Assumptions 1), 2) and 3) in Theorem \[thm:convergence\] are all very weak. Specifically, assumptions 1) and 2) are mild boundedness conditions; assumption 2) could be further relaxed at the expense of a less simple proof, for example with bounds on $L_p$ norms. Assumption 3) is also a basic and mild consistency assumption, with no requirement on rates of convergence. The main substantive assumption is assumption 4), which requires the nuisance estimators to be consistent and to converge at a fast enough rate. Note that unlike the result of [@Kennedy17], the condition involves an additional nuisance function $\omega$, the propensity score for missingness or dropout. One sufficient condition for assumption 4) to hold is that all the nuisance functions are consistently estimated at a rate of $n^{-1/4}$ or faster.
Lowering the bar from $n^{-1/2}$ to $n^{-1/4}$ allows us to employ a richer set of modern machine learning methods by reducing the burden of nonparametric modeling. Such rates are attainable under diverse structural constraints; see for example [@yang2015minimax; @raskutti2012minimax; @kandasamy2016additive]. More conventional structural constraints, including sparsity and smoothness, are covered in [@gyorfi2006distribution]. However, we are agnostic about how such rates might be attained and by which nonparametric methods. In practice, we may want to consider using different estimation techniques for each of $\bm{\pi}, \bm{m}, \bm{\omega}$ based on our prior knowledge, or use ensemble learners.
Based on the result in Theorem \[thm:convergence\], given the value of $\delta$ and $t$ we can construct pointwise $1-\alpha$ confidence intervals for $\psi_t (\delta)$ as $$\widehat{\psi}_t (\delta) \pm z_{1-\alpha/2}\frac{\hat{\sigma}(\delta, t)}{\sqrt[]{n}}$$ where $\hat{\sigma}^2(\delta, t)$ is the variance estimator defined in Theorem \[thm:convergence\]. As in [@Kennedy17] we can use the multiplier bootstrap for uniform inference, by replacing the $z_{1-\alpha/2}$ critical value with a critical value $c_\alpha$ satisfying $${\mathbb{P}}\left( \underset{\delta\in \mathcal{D}, 1\leq t \leq T}{\sup} \left| \frac{\widehat{\psi}_t (\delta)-\psi_t (\delta)}{\widehat{\sigma}(\delta, t)/\sqrt[]{n}} \right| \leq c_\alpha \right) = 1 - \alpha + o(1) .$$
This is due to the fact that we add at most a finite number $T$ of timepoints to the function class of $\varphi$ (see \[append:thm-bootstrap-kennedy\] in the Appendix for a more detailed discussion). We refer to [@Kennedy17] for details on how to construct $c_\alpha$ via a bootstrap procedure.
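As a rough illustration of how such a critical value can be computed, the sketch below applies Rademacher multipliers to standardized influence-function values over a grid of $(\delta, t)$ pairs; this is a standard multiplier-bootstrap recipe, and the standardized values `phi_std` are assumed to have already been computed from the fitted influence functions.

```python
import numpy as np

def multiplier_bootstrap_critval(phi_std, alpha=0.05, B=10_000, seed=0):
    """Critical value c_alpha for a uniform band: phi_std is an (n, G) array
    of standardized influence values over a grid of (delta, t) pairs."""
    rng = np.random.default_rng(seed)
    n, G = phi_std.shape
    sups = np.empty(B)
    for b in range(B):
        xi = rng.choice([-1.0, 1.0], size=n)          # Rademacher multipliers
        sups[b] = np.abs(xi @ phi_std / np.sqrt(n)).max()   # sup over the grid
    return float(np.quantile(sups, 1 - alpha))
```

With a single grid point, $c_\alpha$ reduces to approximately the usual $z_{1-\alpha/2}$ quantile; with a larger grid it widens to account for the supremum over $(\delta, t)$.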
Infinite Time Horizon Analysis {#sec:Inf-time-horizon}
==============================
The great majority of the causal inference literature considers a finite time horizon where the number of timepoints is small and fixed, or even just equal to one, a priori ruling out much significant (if any) longitudinal structure. However, in practice more and more studies accumulate data across very many timepoints, due to ever-increasing advances in data collection technology. In fact, in many applications the number of timepoints $T$ can even be comparable to or larger than the sample size $n$, rendering most of the classical methods based on finite time horizons futile. [For example, [@kumar2013mobile] describe how new mobile and wearable sensing technologies have revolutionized randomized trial and other health-care studies by providing data at very high sampling rates (e.g., 10-500 times per second). [@klasnja2015microrandomized] use 210 timepoints in their study in which they present the micro-randomized trial for just-in-time adaptive interventions via mobile applications.]{} As such granular, fine-grained data become common, some recent studies have explored efficient off-policy estimation techniques in infinite-horizon settings (e.g., @liu2018breaking in reinforcement learning), but there has been no comparable formal analysis in the longitudinal causal inference setting.
Therefore here we analyze the behavior of an inverse-probability-weighted (IPW) version of our proposed incremental effect estimator (relative to a standard IPW estimator of a classical deterministic effect), in a more realistic regime where the number of timepoints can scale with sample size. To the best of our knowledge, this is one of the first such infinite-horizon analyses in causal inference, outside of some recent examples involving dynamic treatment regimes [@laber2018optimal; @ertefaie2018constructing]. Importantly, we show that a classical IPW estimator can suffer exponentially large variance inflation relative to an analogous incremental effect estimator: the relative efficiency is exponential in the number of timepoints $T$.
We proceed by comparing different estimators of two different effects, namely the usual deterministic effect of receiving treatment at every timepoint, as well as the incremental effect for a given $\delta>1$ (we present results for effects of receiving control at every timepoint and incremental interventions with $\delta<1$ in the Appendix \[proof:thm-inf-time\] as well). Although these are effects under different interventions, under positivity the incremental intervention effect can well approximate the always-treated effect by letting $\delta \rightarrow \infty$ (and similarly $\delta \rightarrow 0$ for the never-treated effect). More importantly, however, we argue that the incremental effects are more appropriate for long-term longitudinal studies with many timepoints both based on their interpretation, and based on the extreme efficiency gains we discuss here.
For simplicity, and to make our results more intuitive, we consider a simple randomized trial where propensity scores are known and do not vary with covariates (i.e., $\pi_t(H_t)=p$ for all $t$) and there is no dropout (i.e. ${\mathbb{P}}\{R_{t+1}=1\}=1$ for all $t=1,...,T$). However, we expect that introducing additional complexity (e.g., requiring estimation of nuisance parameters and introducing dropout) would not substantially affect the efficiency gap between deterministic always-treated-type and incremental effects (and may even exacerbate it). Alternatively we can view our results as corresponding to the full nonparametric efficiency bounds under a simple setup where the propensity scores are all equal to $p$ and the pseudo-regressions equal zero.
In this setup we have unbiased estimators of the always-treated effect $\psi_{at}={\mathbb{E}}(Y^{\overline{\bm{1}}})$ and incremental effect $\psi_{inc} = {\mathbb{E}}(Y^{\overline{Q}(\delta)})$ given by $$\widehat{\psi}_{at} = \prod_{t=1}^{T} \left( \frac{A_t}{ p} \right)Y$$ and $$\widehat{\psi}_{inc} = \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta p + 1-p} \right)Y$$ respectively, where $Y=Y_T$ for simplicity. We now explore the relative efficiency of these estimators, considering the case where $T$ approaches infinity. In particular the next theorem shows that we can achieve an asymptotically exponential efficiency gain by targeting incremental effects.
\[thm:inf-time-horizon\] Consider the estimators and assumptions defined above. Suppose $\left\vert Y \right\vert \leq {b_u}$ for some constant $ b_u>0$ and ${\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right] > 0$. Then for any $T \geq 1$, $$\begin{aligned}
C_{T}\left[ \left\{ \frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} \right\}^{T} - p^{T} \right] \leq
\frac{Var(\widehat{\psi}_{at})} {Var(\widehat{\psi}_{inc})}
\leq C_{T}\zeta(T;p)\left\{ \frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} \right\}^{T}
\end{aligned}$$ where $C_{T} = \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}$ and $\zeta(T;p) = \left( 1+ \frac{c \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}{\left( 1/p \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right)$ for any fixed value of $c$ such that $\frac{1}{1-p^T{\left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}\big/{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}} \leq {c}$.
A proof of the above theorem can be found in Appendix \[proof:thm-inf-time\]. The proof is based on logic similar to that used in deriving the g-formula [@robins1986]. Note that we only require two very basic structural assumptions: boundedness of $Y$, and ${\mathbb{E}}[(Y^{\overline{\bm{1}}} )^2 ] > 0$, which is equivalent to requiring that $Y^{\overline{\bm{1}}}$ is not almost surely zero.
Theorem \[thm:inf-time-horizon\] allows us to precisely quantify the asymptotic relative efficiency gain. Crucially, since $\frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} < 1$ when $\delta > 1$ and $\zeta(T;p) \rightarrow 1$ monotonically at an exponential rate in $T$, the efficiency gain is also almost exponential in $T$. We give a result for the case of deterministic never-treated effects as well, as stated in \[proof:thm-inf-time\] of the Appendix. In fact, in the proof we show that the same results hold not only for the always-treated effect but for any feasible deterministic effect ${\mathbb{E}}(Y^{\overline{a}_{T}})$ with $\overline{a}_{T} \in \overline{\mathcal{A}}_{T}$. Hence Theorem \[thm:inf-time-horizon\] provides important insight into utilizing incremental interventions for causal effects in a novel infinite time horizon (large $T$) regime.
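The exponential efficiency gap is easy to observe empirically. Below is a small Monte Carlo sketch; the outcome model is hypothetical and chosen purely for illustration, and only the i.i.d. Bernoulli($p$) treatments and the two weight products come from the setup above (cf. the fuller simulations in the Appendix).

```python
import numpy as np

def ipw_terms(n, T, p, delta, seed=0):
    """Simulate a randomized trial with A_t ~ Bernoulli(p) i.i.d. and a
    hypothetical outcome model, returning the per-unit IPW terms for the
    always-treated and incremental estimands."""
    rng = np.random.default_rng(seed)
    A = rng.binomial(1, p, size=(n, T))
    Y = 10 + A.sum(axis=1) + rng.normal(size=n)        # assumed outcome model
    w_at = (A / p).prod(axis=1) * Y                    # prod_t A_t / p
    w_inc = ((delta * A + 1 - A) / (delta * p + 1 - p)).prod(axis=1) * Y
    return w_at, w_inc

at_terms, inc_terms = ipw_terms(n=100_000, T=10, p=0.5, delta=2.0)
print(at_terms.var() / inc_terms.var())   # empirical variance inflation
```

Already at $T=10$ the always-treated weights $\prod_t A_t/p$ are nonzero for only about $p^T$ of the sample, so the variance of $\widehat{\psi}_{at}$ dwarfs that of $\widehat{\psi}_{inc}$, and the gap grows rapidly with $T$.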
Theorem \[thm:inf-time-horizon\] naturally leads to the conclusion that $\widehat{\psi}_{inc}$ is always more efficient than $\widehat{\psi}_{at}$ once many timepoints are incorporated into the study. In what follows we refine this statement by characterizing a minimum threshold on the number of timepoints beyond which the claim holds, under the same conditions as in Theorem \[thm:inf-time-horizon\].
\[cor:inf-time-horizon\] There exists a finite number $T_{min}$ such that $$\begin{aligned}
Var(\widehat{\psi}_{inc}) < Var(\widehat{\psi}_{at})
\end{aligned}$$ for every $T > T_{min}$, where $T_{min}$ is never greater than $$\begin{aligned}
\min \left\{T: \left[\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2}\right]^T - \frac{c_{\bm{1}}}{p^T} + 2 < 0\right\}
\quad
\text{where} \quad c_{\bm{1}}=\frac{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}{b^2_u}.
\end{aligned}$$
A proof appears in Appendix \[proof:cor-inf-time\]. The proof of the above corollary relies upon the fact that $Var(\widehat{\psi}_{inc})$ can be represented as the variance of a weighted sum over all the distinct deterministic intervention effects $\overline{a}_{T} \in \overline{\mathcal{A}}_{T}$ (Lemma \[lem:inf-time-decomp\]). The constant $c_{\bm{1}}$ is simply a normalized second moment and can be interpreted as the average magnitude of $Y^{\overline{\bm{1}}}$. In other words, the larger $\vert Y^{\overline{\bm{1}}} \vert$ is on average, the smaller the guaranteed value of $T_{min}$.
It may be possible to tighten the upper bound for $T_{min}$, but in practice the value of $T_{min}$ is typically already small. To illustrate, consider the setup where $Y \in [0,1]$ and $\delta = 2.5, p = 0.5$, and two extreme cases: $c_{\bm{1}}=0.95$ ($Y^{\overline{\bm{1}}}$ is dispersed mostly around $\{0,1\}$) and $c_{\bm{1}}=0.05$ ($Y^{\overline{\bm{1}}}$ is concentrated around $0$). Then the corresponding $T_{min}$ values are 2 and 6 respectively. If we use $\delta = 5, p = 0.5$, the numbers will become 3 and 9 respectively.
Our proof of Theorem \[thm:inf-time-horizon\] and Corollary \[cor:inf-time-horizon\] can be generalized to the case where the nuisance functions need to be estimated, but we feel the simple case captures the main ideas, and the general case would only add complexity. Numerical simulations support our theorem in both randomized and observational settings (see Section \[appd:infinite-time-numerical\] of the Appendix). Our result in this section provides the crucial insight into longitudinal studies with many timepoints, indicating massive efficiency gains are possible by studying incremental rather than more classical deterministic effects.
Experiments
===========
Simulation Study
----------------
In this section we explore the finite-sample performance of the proposed estimator $\hat{\psi}(t; \delta)$ via a synthetic simulation of an observational study. We consider the following data generation model $$X_t=(X_{1,t}, X_{2,t}) \sim N(0,\textbf{I}),$$ $$\pi_t(H_t) = expit\Big( \bm{1}^\top X_t + 2\sum_{s=t-2}^{t-1} \left(A_s-1/2\right) \Big),$$ $$\omega_t(H_t, A_t) = expit\Big( C_0 + \sum_{s=1}^{t} A_s \Big) \ \text{,} \quad C_0 \sim \mathcal{U}[a,5],$$ $$\left(Y \big\vert \overline{X}_{t}, \overline{A}_{t} \right) \sim N\big(\mu(\overline{X}_{t}, \overline{A}_{t}),1 \big)$$ for all $1 \leq t \leq 50$, where we set $\mu(\overline{X}_{t}, \overline{A}_{t})= 10 + A_{t} + A_{t-1} +\vert \bm{1}^\top X_{t}+\bm{1}^\top X_{t-1} \vert$ and the target time $t=50$. $\mathcal{U}[a,5]$ denotes a uniform random variable on the interval $[a,5]$. We recycle the simulation setup of *Simulation 2* in the Appendix \[appd:infinite-time-numerical\], but add a right-censoring process under which subjects who are more likely to have been treated are less likely to drop out of the study.
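A sketch of this data-generating process follows, with `expit` the logistic function; the handling of boundary timepoints $t \leq 2$ and the drawing of $C_0$ once per subject are implementation choices not pinned down above.

```python
import numpy as np

def expit(x):
    return 1 / (1 + np.exp(-x))

def simulate(n, T=50, a=1, seed=0):
    """Draw (A, R, Y) from the synthetic observational model with dropout."""
    rng = np.random.default_rng(seed)
    C0 = rng.uniform(a, 5, size=n)                    # subject-level intercept
    A = np.zeros((n, T)); R = np.ones((n, T + 1)); X_sum = np.zeros((n, T))
    for t in range(T):
        X = rng.normal(size=(n, 2))
        X_sum[:, t] = X.sum(axis=1)                   # 1^T X_t
        past = A[:, max(t - 2, 0):t].sum(axis=1) - min(t, 2) / 2
        pi = expit(X_sum[:, t] + 2 * past)            # treatment propensity
        A[:, t] = rng.binomial(1, pi)
        omega = expit(C0 + A[:, : t + 1].sum(axis=1))  # prob. of remaining
        R[:, t + 1] = R[:, t] * rng.binomial(1, omega)  # monotone dropout
    mu = 10 + A[:, -1] + A[:, -2] + np.abs(X_sum[:, -1] + X_sum[:, -2])
    Y = rng.normal(mu, 1)
    return A, R, Y
```

Smaller values of `a` shrink $C_0$, lowering the retention probability $\omega_t$ and hence producing more censoring, which matches how the simulation settings below vary difficulty.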
We use three baseline methods: the naive Z-estimator ($\hat{\psi}_{inc.pi}$) and IPW type estimator ($\hat{\psi}_{inc.ipw}$), both of which are defined in Section \[subsec:proposed-estimator\], and the efficient incremental-effect estimator ($\hat{\psi}_{inc.nc}$) proposed by [@Kennedy17], which does not take right-censoring into account.
To estimate the nuisance parameters, we form an ensemble of widely used nonparametric models. Specifically, we use the cross-validation-based SuperLearner ensemble algorithm [@van2007super] via the `SuperLearner` package in R to combine support vector machines, random forests, k-nearest neighbor regression, and multivariate adaptive regression splines. For the proposed method, we use sample splitting with $K=2$ splits as described in Algorithm \[algorithm-1\].
We repeat the simulation $S$ times, drawing $n$ samples in each run. We use $D$ values of $\delta$ equally spaced on the log-scale within $[0.1, 5]$. The performance of each estimator is then assessed via the normalized root-mean-squared error (RMSE) defined as $$\widehat{RMSE} = \frac{1}{D} \sum_{d=1}^{D} \left[ \frac{1}{S} \sum_{s=1}^{S} \left\{ \frac{\hat{\psi}_s(t;\delta_d) - {\psi}(t;\delta_d)}{\overline{\psi}(t)} \right\}^2 \right]^{1/2}$$ where $\hat{\psi}_s(t;\delta_d)$ and ${\psi}(t;\delta_d)$ are the estimate from the $s$-th simulation at $\delta_d$ and the true value of the target parameter at $\delta_d$, respectively, and $\overline{\psi}(t)$ is the sample average of ${\psi}(t;\delta_d)$ across the values of $\delta_d$. Results are shown in Table \[tbl:synthetic-sim1\].
---------------------------- ----------------------- ------------------------ ----------------------- --------------------------- ---------------
                             $\hat{\psi}_{inc.pi}$   $\hat{\psi}_{inc.ipw}$   $\hat{\psi}_{inc.nc}$   ${\hat{\psi}_{proposed}}$   Censored (\%)
$S=100, n=500, D=25, a=1$    0.59                    0.56                     1.01                    **0.13**                    35.4
$S=200, n=1000, D=50, a=1$   0.40                    0.52                     0.89                    **0.09**                    34.9
$S=100, n=500, D=25, a=5$    0.48                    0.49                     0.38                    **0.14**                    3.7
$S=200, n=1000, D=50, a=5$   0.36                    0.44                     0.35                    **0.11**                    3.2
---------------------------- ----------------------- ------------------------ ----------------------- --------------------------- ---------------

: Normalized RMSE across different simulation settings; the last column reports the average percentage of censored observations.[]{data-label="tbl:synthetic-sim1"}
As shown in Table \[tbl:synthetic-sim1\], the proposed estimator performs better than the other baseline methods, especially when there is a lot of censored data. The $\hat{\psi}_{inc.pi}$ and $\hat{\psi}_{inc.ipw}$ estimators in general show fairly large RMSE, since they are not expected to converge at $\sqrt{n}$ rates. $\hat{\psi}_{inc.nc}$ performs relatively well when only a small portion of the data is censored, but under aggressive censoring it shows large bias. In contrast, the proposed estimator shows only a slight loss in RMSE. This reflects the fact that the proposed estimator requires only $n^{1/4}$ rates on every nuisance estimate to achieve full efficiency, and in general has second-order bias.
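For completeness, the normalized RMSE metric can be computed from a matrix of estimates as follows; this sketch reads the normalization $\overline{\psi}$ as the average true value over the $\delta$ grid and takes the root implied by the metric's name.

```python
import numpy as np

def normalized_rmse(est, truth):
    """est: (S, D) estimates across simulations and delta values;
    truth: (D,) true values on the same delta grid."""
    denom = truth.mean()                       # bar-psi: average over the grid
    sq = ((est - truth) / denom) ** 2          # squared normalized errors
    return np.sqrt(sq.mean(axis=0)).mean()     # RMSE per delta, then averaged
```
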
Application
-----------
Here we illustrate the proposed methods in analyzing the Effects of Aspirin on Gestation and Reproduction (EAGeR) data, which evaluates the effect of daily low-dose aspirin on pregnancy outcomes and complications. The EAGeR trial was the first randomized trial to evaluate the effect of pre-conception low-dose aspirin on pregnancy outcomes ([@schisterman2014preconception; @mumford2016expanded]). However, to date this evidence has been limited to intention-to-treat analyses.
The design and protocol used for the EAGeR study have been previously documented [@schisterman2013randomised]. Overall, 1,228 women were recruited into the study (615 aspirin, 613 placebo) and 11% of participants chose to drop out of the study before completion. Roughly 43,000 person weeks of information were available from daily diaries, as well as study questionnaires, and clinical and telephone evaluations collected at regular intervals over follow-up.
We used our incremental propensity score approach to evaluate the effect of aspirin on live birth and pregnancy loss in the EAGeR trial, accounting for time-varying exposure and dropout. The EAGeR dataset has been compiled as described in (\[setup:causal-process\]). Here, the study terminates at week 89 ($T=89$). We use 24 baseline covariates (e.g., age, race, income) and 5 time-dependent covariates (compliance, conception, vaginal bleeding, nausea and GI discomfort). $A_t$ is a binary treatment variable coded as $1$ if a woman took aspirin at time $t$ and $0$ otherwise. $R_t=1$ indicates that the woman is observed in the study at time $t$. Lastly, $Y_{t}$ is an indicator of having a pregnancy outcome of interest at time $t$. For clarity and simplicity, we perform two separate analyses for the two types of pregnancy outcomes (one for live birth and one for pregnancy loss).
For comparative purposes, we estimate the simple complete-case effect $$\begin{aligned}
\widehat{\psi}_{CC} = {\mathbb{P}_n}(Y |\overline{A}_T=1, R_T = 1) - {\mathbb{P}_n}(Y | \overline{A}_T=0, R_T = 1).
\end{aligned}$$ which relies on both non-compliance and drop-out being completely randomized. The value of $\widehat{\psi}_{CC}$ is 0.052 (5.2%) for live birth and 0.012 (1.2%) for pregnancy loss, both of which are close to the intention-to-treat estimates reported in [@schisterman2013randomised; @schisterman2014preconception].
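In code, this naive complete-case contrast amounts to comparing mean outcomes among always-treated and never-treated completers; the sketch below uses hypothetical array inputs rather than the actual EAGeR data.

```python
import numpy as np

def complete_case_effect(Y, A, R_T):
    """Naive contrast among completers (R_T = 1): mean outcome for units
    treated at every timepoint minus mean outcome for units never treated.
    A is an (n, T) treatment matrix, Y an (n,) outcome vector."""
    comp = R_T == 1
    always = comp & (A.sum(axis=1) == A.shape[1])   # A_t = 1 for all t
    never = comp & (A.sum(axis=1) == 0)             # A_t = 0 for all t
    return Y[always].mean() - Y[never].mean()
```

As noted above, this estimator is only valid if both non-compliance and dropout are completely randomized, which motivates the incremental analysis that follows.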
We estimate the incremental effect curve ${\psi}(T;\delta)$, which represents the probability of having a live birth or a pregnancy loss at the end of the study if the odds of taking aspirin were multiplied by the factor $\delta$. This effect compares the outcome probabilities that would be observed if the odds of taking aspirin for all women were increased by a factor of $\delta$ at all timepoints, relative to the odds of taking aspirin that were actually observed in the trial at all timepoints. Again, we use the cross-validated SuperLearner algorithm [@van2007super] to combine support vector machines, random forests, k-nearest neighbor regression, and multivariate adaptive regression splines to estimate the tuple of nuisance functions $(m_t, \omega_{t}, \pi_{t})$ at every $t$. We use sample splitting as in Algorithm \[algorithm-1\] with $K=2$ splits, and use 10,000 bootstrap replications to compute pointwise and uniform confidence intervals. Results are shown in Figure \[fig:appl-aspirin\].
![Estimated incremental effect curves which represent the probability of having a live birth (Left) and a pregnancy loss (Right). In each figure, lighter grey area with red dotted line represents a 95% uniform band and darker grey area represents a 95% pointwise band.[]{data-label="fig:appl-aspirin"}](LB_T=89.png){width=".95\linewidth"}
![Estimated incremental effect curves which represent the probability of having a live birth (Left) and a pregnancy loss (Right). In each figure, lighter grey area with red dotted line represents a 95% uniform band and darker grey area represents a 95% pointwise band.[]{data-label="fig:appl-aspirin"}](FL_T=89.png){width=".95\linewidth"}
We find that the estimated curve is almost flat for live birth, and generally has a negative gradient with respect to $\delta$ (the odds ratio) for pregnancy loss. Thus, unlike previous findings, our result is indicative of a positive effect of low-dose aspirin in reducing the risk of pregnancy loss, although one needs to take the wider uniform band at large $\delta$ into consideration. In conclusion, our analysis suggests new evidence that an increase in the chance of taking low-dose aspirin may be associated with a decrease in pregnancy loss, though this estimate is still subject to considerable uncertainty.
Discussion
==========
Incremental interventions are a novel class of stochastic dynamic interventions under which positivity assumptions can be completely avoided. However, they had not been extended to repeated outcomes, and without further assumptions they do not give identifiability under dropout, both of which are very common in practice. In this paper we solved this problem by showing how incremental intervention effects are identified and can be estimated when dropout occurs (conditionally) at random. Even in the case of many dropouts, our proposed method efficiently uses all the data without sacrificing robustness. We give an identifying expression for incremental effects under monotone dropout, without requiring any positivity assumptions. We establish general efficiency theory, construct the efficient influence function, and present nonparametric estimators which converge at fast rates and yield uniform inferential guarantees, even when all the nuisance functions are estimated with flexible machine learning tools at slower rates. Furthermore, we studied the efficiency of incremental effects relative to conventional deterministic dynamic intervention effects in a novel infinite time horizon setting in which the number of timepoints can grow with sample size, and showed that incremental effects are more efficient than deterministic effects, yielding near-exponential efficiency gains in the infinite-time regime.
There are a number of avenues for future work. The first is application to other substantive problems in medicine and the social sciences. For example, in a forthcoming paper we analyze the effect of aspirin on pregnancy outcomes with more extensive data. It will also be important to consider other types of non-monotone missingness where the standard time-varying missing-at-random assumption \[assumption:A2-M\] may not be appropriate ([@sun2014inverse; @tchetgen2016discrete]). We expect our approach can be extended to other important problems in causal inference; for example, one could develop incremental effects for continuous treatments and instruments [@kennedy2017non; @kennedy2019robust], or for mediation in the same spirit as [@diaz2019causal], but generalized to the longitudinal case with dropout. Developing incremental-based sensitivity analyses for the longitudinal missing-at-random assumption would also be important.
Acknowledgement
===============
Edward Kennedy gratefully acknowledges financial support from the NSF (Grant \# DMS-1810979) for this research.
\[appendix\]
Algorithm {#sec:algorithm}
=========
Let $\delta$ be fixed and pick $t \leq T$. For each $k \in \{1,...,K\}$, let $D_0 =\{Z_i : S_i \neq k\}$ and $D_1 =\{Z_i : S_i = k\}$ denote corresponding training and test data, respectively, and let $D = D_0 \bigcup D_1$.
1. For each time $s=1,\ldots,t$, regress $A_s$ on $H_s$ using only the observable samples at time $s$ in $D_0$, then obtain predicted values $\widehat{\pi}_s(H_s)$ only for subjects with $R_s=1$ in $D$.
2. For each time $s=1,\ldots,t$, regress $R_{s+1}$ on $(H_s,A_s)$ using only the observable samples at time $s$ in $D_0$, then obtain predicted values $\widehat{\omega}_s(H_s,A_s)$ only for subjects with $R_s=1$ in $D$.
3. For each time $s=1,\ldots,t$, letting $W_s = \frac{\delta A_s + 1-A_s}{\delta\widehat{\pi}_s(H_s) + 1-\widehat{\pi}_s(H_s)}\cdot\frac{1}{\hat{\omega}_s( H_s, A_s)}$, construct the following cumulative product weights only for subjects with $R_{s+1}=1$ in $D_1$:
 - $\widetilde{W}_{s} = \widehat{\omega}_s(H_s,A_s)\prod_{r=1}^s W_r$ for $1 \leq s < t$
 - $\widetilde{W}_{t} = \prod_{s=1}^{t} W_s$
4. For each time $s=t,t-1,...,1$, setting $M_{t+1} = Y_{t}$:
 - Regress $M_{s+1}$ on $(H_s, A_s)$ using only the observable samples at time $s+1$ (i.e. only if $R_{s+1}=1$) in $D_0$, then obtain predictions $\widehat{m}_s(H_s, 1)$ and $\widehat{m}_s(H_s, 0)$ only for subjects with $R_s=1$ in $D$.
 - Construct the pseudo-outcome $M_{s} = \frac{\widehat{m}_s(H_s, 1)\delta\widehat{\pi}_s(H_s)+\widehat{m}_s(H_s, 0)\{ 1-\widehat{\pi}_s(H_s) \}}{\delta\widehat{\pi}_s(H_s) + 1 - \widehat{\pi}_s(H_s)}$ only for subjects with $R_s=1$ in $D$.
5. Construct time-dependent weights $V_s = \frac{\{A_s - \widehat{\pi}_s(H_s)\}(1-\delta)}{\delta A_s + 1-A_s}$ only for subjects with $R_s=1$ in $D_1$.
6. Compute $\sum_{s=1}^{t}\widetilde{W}_{s} V_sM_s + \widetilde{W}_{t}Y_{t}$ only for subjects with $R_{t+1}=1$ in $D_1$ and define $\widehat{\psi}^{(k)}_t(\delta)$ to be its average.
**Output** : $\widehat{\psi}_t(\delta) = \frac{1}{K}\sum_{k=1}^{K} \widehat{\psi}^{(k)}_t(\delta)$
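Once the nuisance predictions from steps 1, 2 and 4 are available, the weighting and averaging in steps 3, 5 and 6 reduce to array arithmetic. Below is a simplified single-split sketch, assuming complete $(n, t)$ arrays of fitted values; for simplicity, the subject selection by $R$ is reduced to keeping units observed through time $t+1$.

```python
import numpy as np

def combine_estimate(A, R, Y, pi_hat, omega_hat, m1_hat, m0_hat, delta):
    """Steps 3, 5 and 6 for one split: A, pi_hat, omega_hat, m1_hat, m0_hat
    are (n, t) arrays of treatments and fitted nuisance values; R is (n, t+1)."""
    W = (delta * A + 1 - A) / (delta * pi_hat + 1 - pi_hat) / omega_hat
    cumW = np.cumprod(W, axis=1)                     # prod_{r<=s} W_r
    Wtilde = omega_hat * cumW                        # tilde-W_s for s < t
    Wtilde[:, -1] = cumW[:, -1]                      # tilde-W_t = prod_s W_s
    # step-4 pseudo-outcomes from the fitted regressions
    M = (m1_hat * delta * pi_hat + m0_hat * (1 - pi_hat)) \
        / (delta * pi_hat + 1 - pi_hat)
    V = (A - pi_hat) * (1 - delta) / (delta * A + 1 - A)   # step-5 weights
    phi = (Wtilde * V * M).sum(axis=1) + Wtilde[:, -1] * Y
    keep = R[:, -1] == 1                             # observed through t+1
    return phi[keep].mean()
```

As a sanity check, setting $\widehat{m}_s \equiv 0$, $\widehat{\omega}_s \equiv 1$ and $\widehat{\pi}_s \equiv p$ collapses the sum to the IPW estimator $\widehat{\psi}_{inc.ipw}$.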
Empirical demonstration for Theorem \[thm:inf-time-horizon\] {#appd:infinite-time-numerical}
============================================================
To empirically assess the above result in finite samples, we conduct two simple simulations under different setups; one in a randomized trial and the other in an observational study.
*Simulation 1. (Randomized Trial)* We set $p=0.5$ in the simulation for both always-treated and never-treated units. We let $Y \mid \overline{A}_{t} \sim N\left( 10 + \Vert \overline{A}_{t} \Vert_2,1\right)$, truncated at $\pm$ two standard deviations. Given a value of $\delta$, we generate datasets for $t=1,...,50$ with $n=250$ for all $t$, and repeat the simulation $100$ times with the same data-generating process. For the positivity assumption to hold, we always keep at least one always-treated or never-treated unit in each simulation. We compute the sample variance of each estimator and the relative efficiency. Figure \[fig:sim1-inf-time\] shows the results along with the true lower bound on the relative efficiency given in Theorem \[thm:inf-time-horizon\] (the dotted line).
![Relative efficiency curve in log-scale over time $t$ for the case of always-treated unit where we use $\delta=5,10$ (Left) and for the case of never-treated unit where we use $\delta=0.2, 0.1$ (Right). The true lower bound for each $\delta$ is represented as dotted line.[]{data-label="fig:sim1-inf-time"}](always-trted_sim1.png){width=".9\linewidth"}
![Relative efficiency curve in log-scale over time $t$ for the case of always-treated unit where we use $\delta=5,10$ (Left) and for the case of never-treated unit where we use $\delta=0.2, 0.1$ (Right). The true lower bound for each $\delta$ is represented as dotted line.[]{data-label="fig:sim1-inf-time"}](never-trted_sim1.png){width=".9\linewidth"}
*Simulation 2. (Observational Study)* Although not directly covered by the setup of Theorem \[thm:inf-time-horizon\], it is also valuable to investigate the corresponding results in an observational study. To this end, we consider the following model $$X_t=(X_{1,t}, X_{2,t}) \sim N(0,\textbf{I})$$ $$\pi_t(H_t) = expit\Big( \bm{1}^\top X_t + 2\sum_{s=t-2}^{t-1} \left(A_s-1/2\right) \Big)$$ $$\left(Y \big\vert \overline{X}_{t}, \overline{A}_{t} \right) \sim N\big(\mu(\overline{X}_{t}, \overline{A}_{t}),1 \big)$$ for all $t \leq T$ where we set $\mu(\overline{X}_{t}, \overline{A}_{t})= 10 + A_{t} + A_{t-1} +\vert \bm{1}^\top X_{t}+\bm{1}^\top X_{t-1} \vert$ and $\bm{1}=[1,1]^\top$. This simple setup assumes that a subject is more (less) likely to receive treatment if they have recently received (not received) treatment. The rest of the simulation specifications are the same as in *Simulation 1*. The result is presented in Figure \[fig:sim2-inf-time\].
![Relative efficiency curve over time $t$ for the case of always-treated unit where we use $\delta=2,5,10$ (Left) and for the case of never-treated unit where we use $\delta=0.5, 0.2, 0.1$ (Right).[]{data-label="fig:sim2-inf-time"}](always-trted_sim2.png){width=".9\linewidth"}
![Relative efficiency curve over time $t$ for the case of always-treated unit where we use $\delta=2,5,10$ (Left) and for the case of never-treated unit where we use $\delta=0.5, 0.2, 0.1$ (Right).[]{data-label="fig:sim2-inf-time"}](never-trted_sim2.png){width=".9\linewidth"}
Overall, the simulation results support Theorem \[thm:inf-time-horizon\]. Remarkably, even when we consider the setup for observational studies (the second simulation) we still observe almost exponential gains with incremental intervention effects.
Technical Results and Proofs
============================
Lemma for the identifying expression in Theorem \[thm:ident-exp\] {#proof:lem-ident-assumption}
-----------------------------------------------------------------
To identify our target parameter $\psi_t(\delta) = {\mathbb{E}}\left(Y_t^{\overline{Q}_t(\delta) } \right)$, we need the following lemma.
\[lem:identification\] Under (A2-M) and (A3), for all $t \leq T$ we have the following equivalence properties:
- $d{\mathbb{P}}(A_t|H_t) = d{\mathbb{P}}(A_t|H_t, R_t=1)$
- $d{\mathbb{P}}(X_t|A_{t-1}, H_{t-1}) = d{\mathbb{P}}(X_t|A_{t-1}, H_{t-1}, R_t=1)$
- ${\mathbb{E}}[Y|\overline{X}_t, \overline{A}_t] = {\mathbb{E}}[Y|\overline{X}_t, \overline{A}_t, R_{t+1}=1]$
Lemma \[lem:identification\] thus shows that these key quantities defined under the full data are equivalent to the corresponding quantities conditioned on remaining uncensored. Since the identifying expression may only use quantities that can be estimated from the observed history, these equivalence relations play a key role.
The proof proceeds by induction. We verify the three properties one by one as follows.
- $\bm{d{\mathbb{P}}(A_t|H_t) = d{\mathbb{P}}(A_t|H_t, R_t=1)}$
First note that $$\begin{aligned}
d{\mathbb{P}}(A_t,H_t) &= d{\mathbb{P}}(\overline{X}_t, \overline{A}_t) = d{\mathbb{P}}(\underline{X}_2, \underline{A}_2 \mid X_1, A_1)d{\mathbb{P}}(X_1,A_1) \\
&= d{\mathbb{P}}(\underline{X}_2, \underline{A}_2 \mid X_1, A_1, R_2 = 1)d{\mathbb{P}}(X_1,A_1, R_1 = 1) \\
&= d{\mathbb{P}}(\underline{X}_3, \underline{A}_3 \mid \overline{X}_2, \overline{A}_2, R_2 = 1) \frac{d{\mathbb{P}}(X_1,A_1, R_1 = 1)}{d{\mathbb{P}}(X_1,A_1, R_2 = 1)} d{\mathbb{P}}(\overline{X}_2, \overline{A}_2, R_2 = 1) \\
&= d{\mathbb{P}}(\underline{X}_3, \underline{A}_3 \mid \overline{X}_2, \overline{A}_2, R_3 = 1) \frac{d{\mathbb{P}}(X_1,A_1, R_1 = 1)}{d{\mathbb{P}}(X_1,A_1, R_2 = 1)} d{\mathbb{P}}(\overline{X}_2, \overline{A}_2, R_2 = 1) \\
& = d{\mathbb{P}}({X}_t, {A}_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}, R_{t} = 1) \prod_{s=1}^{t-2} \frac{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_s = 1)}{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_{s+1} = 1)} d{\mathbb{P}}(\overline{X}_{t-1}, \overline{A}_{t-1}, R_{t-1} = 1) \\
&= d{\mathbb{P}}(\overline{X}_t, \overline{A}_t, R_{t} = 1) \prod_{s=1}^{t-1} \frac{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_s = 1)}{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_{s+1} = 1)}
\end{aligned}$$ where the first equality follows by definition, the second by definition of conditional probability, the third by assumption (A2-M), the fourth again by definition of conditional probability, the fifth by assumption (A2-M), and the sixth by repeating the same step $t-1$ times. The last expression is obtained by simply rearranging terms using the definition of conditional probability.
Now introduce the following shorthand notation: $$\mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1) \equiv \prod_{s=1}^{t-1} \frac{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_s = 1)}{d{\mathbb{P}}(\overline{X}_s, \overline{A}_s, R_{s+1} = 1)}$$ so we can write $d{\mathbb{P}}(A_t,H_t) = d{\mathbb{P}}(\overline{X}_t, \overline{A}_t, R_{t} = 1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1)$.
Then, similarly we have $$\begin{aligned}
d{\mathbb{P}}(H_t) &= d{\mathbb{P}}(\overline{X}_t, \overline{A}_{t-1}) = d{\mathbb{P}}(\overline{X}_t, \overline{A}_{t-1}, R_t=1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1).
\end{aligned}$$ Hence, finally we obtain $$\begin{aligned}
d{\mathbb{P}}(A_t \mid H_t) &= \frac{d{\mathbb{P}}(A_t,H_t)}{d{\mathbb{P}}(H_t)} = \frac{d{\mathbb{P}}(\overline{X}_t, \overline{A}_t, R_{t} = 1)}{d{\mathbb{P}}(\overline{X}_t, \overline{A}_{t-1}, R_t=1)} \\
&= \frac{d{\mathbb{P}}({A}_t, {H}_t, R_{t} = 1)}{d{\mathbb{P}}({H}_t, R_t=1)} \\
& = d{\mathbb{P}}(A_t|H_t, R_t=1)
\end{aligned}$$ where the second equality follows from the results above. The same argument immediately yields the corresponding result $\bm{dQ_t(A_t|H_t) = dQ_t(A_t|H_t, R_t=1)}$.
- $\bm{d{\mathbb{P}}(X_t|A_{t-1}, H_{t-1}) = d{\mathbb{P}}(X_t|A_{t-1}, H_{t-1}, R_t=1)}$
By definition $d{\mathbb{P}}(X_t|A_{t-1}, H_{t-1})= d{\mathbb{P}}(H_t)/d{\mathbb{P}}(A_{t-1}, H_{t-1})$, and from the previous part it immediately follows that $$\begin{aligned}
& d{\mathbb{P}}(H_t) = d{\mathbb{P}}(\overline{X}_t, \overline{A}_{t-1}, R_t=1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1), \\
& d{\mathbb{P}}(A_{t-1}, H_{t-1}) = d{\mathbb{P}}(\overline{X}_{t-1}, \overline{A}_{t-1}, R_{t-1} = 1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-2) .
\end{aligned}$$ Hence, we have $$\begin{aligned}
\frac{d{\mathbb{P}}(H_t)}{d{\mathbb{P}}(A_{t-1}, H_{t-1})} &= \frac{d{\mathbb{P}}(\overline{X}_t, \overline{A}_{t-1}, R_t=1)}{d{\mathbb{P}}(\overline{X}_{t-1}, \overline{A}_{t-1}, R_{t} = 1)} \\
&= d{\mathbb{P}}(X_t \mid H_{t-1}, A_{t-1}, R_{t} = 1)
\end{aligned}$$ which yields the desired result.
- $\bm{{\mathbb{E}}[Y|\overline{X}_t, \overline{A}_t] = {\mathbb{E}}[Y|\overline{X}_t, \overline{A}_t, R_{t+1}=1]}$
By definition $
{\mathbb{E}}[Y|\overline{X}_t, \overline{A}_t] = \int y d{\mathbb{P}}(y\vert \overline{X}_t, \overline{A}_t),
$ and thereby it suffices to show that $d{\mathbb{P}}(Y\vert \overline{X}_t, \overline{A}_t) = d{\mathbb{P}}(Y\vert \overline{X}_t, \overline{A}_t, R_{t+1}=1)$.
By the same logic as in the first part, we have $$\begin{aligned}
d{\mathbb{P}}(Y, \overline{X}_t, \overline{A}_t) = d{\mathbb{P}}(Y, \overline{X}_t, \overline{A}_t, R_t=1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1)
\end{aligned}$$ and also $$d{\mathbb{P}}(\overline{X}_t, \overline{A}_t) = d{\mathbb{P}}(\overline{X}_{t}, \overline{A}_{t}, R_{t} = 1) \mathlarger{\mathlarger{\bm{\Pi}}}_{{\mathbb{P}}}(t-1).$$ Combining the above two displays with assumption (A2-M), it follows that $$d{\mathbb{P}}(Y \mid \overline{X}_t, \overline{A}_t) = d{\mathbb{P}}(Y \mid \overline{X}_t, \overline{A}_t, R_t = 1) = d{\mathbb{P}}(Y \mid \overline{X}_t, \overline{A}_t, R_{t+1} = 1).$$
Hence, we have shown that all the identities hold.
More details on influence functions {#sec:ifdetails}
-----------------------------------
Here we briefly describe the influence function. It was first introduced by [@Hampel74] and has been studied as a general tool for finding approximation-by-averages representations of functional statistics (see, for example, Chapter 5 in [@boos2013essential]). For a functional $\psi({\mathbb{P}})$, the influence function $\phi({\mathbb{P}})$ is defined by $$\label{def:influence-function-3}
\phi({\mathbb{P}}) = \frac{\partial}{\partial \epsilon} \psi \left( (1-\epsilon){\mathbb{P}}+ \epsilon\delta_z \right) \Big\vert_{\epsilon=0^+} = \underset{\epsilon \rightarrow 0^+}{\lim} \frac{\psi\left( (1-\epsilon){\mathbb{P}}+ \epsilon\delta_z \right) - \psi({\mathbb{P}})}{\epsilon}$$ where $\delta_z$ is the Dirac measure at $Z = z$. This definition is equivalent to the Gateaux derivative of $\psi$ at ${\mathbb{P}}$ in the direction of the point mass contamination $(\delta_z - {\mathbb{P}})$.
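The Gateaux-derivative definition above can be checked numerically for the simplest functional, the mean $\psi({\mathbb{P}})={\mathbb{E}}[Z]$, whose influence function is $\phi(z)=z-{\mathbb{E}}[Z]$. The sketch below is purely illustrative; the discrete distribution and the tolerance are arbitrary choices of ours.

```python
import numpy as np

# Finite-difference check of the Gateaux-derivative definition for the
# mean functional psi(P) = E_P[Z]; the influence function is z - E[Z].
z_vals = np.array([0.0, 1.0, 2.0, 3.0])
p = np.array([0.1, 0.2, 0.3, 0.4])      # pmf of P on z_vals

def psi(weights):
    """Mean functional evaluated at a pmf on z_vals."""
    return float(np.sum(weights * z_vals))

eps = 1e-6
for i, z in enumerate(z_vals):
    dirac = np.zeros_like(p)
    dirac[i] = 1.0                       # Dirac measure delta_z
    mixed = (1 - eps) * p + eps * dirac  # (1 - eps) P + eps delta_z
    gateaux = (psi(mixed) - psi(p)) / eps
    assert abs(gateaux - (z - psi(p))) < 1e-8
```

Since the mean is linear in the distribution, the finite difference here is exact up to floating-point rounding.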
Mathematically, influence functions can be viewed as elements of the Hilbert space of mean-zero, finite-variance functions whose covariance with parametric submodel scores equals the pathwise derivative of the target parameter [@Tsiatis06]. Influence functions are particularly important in a nonparametric model ${\mathbb{P}}$. Let $\{ {\mathbb{P}}_\epsilon : \epsilon \in \mathbb{R} \}$ denote a smooth parametric submodel for ${\mathbb{P}}$ with ${\mathbb{P}}_{\epsilon=0} = {\mathbb{P}}$. Then the influence function for the parameter $\psi({\mathbb{P}})$ is the function $\phi({\mathbb{P}})$ satisfying $$\label{def:influence-function-2}
\frac{\partial}{\partial \epsilon}\psi({\mathbb{P}}_\epsilon) \Bigg\vert_{\epsilon=0} = \int \phi({\mathbb{P}}) \left(\frac{\partial}{\partial\epsilon}\log d{\mathbb{P}}_\epsilon \right) \Bigg\vert_{\epsilon=0} d{\mathbb{P}}.$$ It is known that, in an asymptotic minimax sense [@Kennedy16], no estimator can beat an estimator $\hat{\psi}({\mathbb{P}})$ such that $$\sqrt{n}(\hat{\psi} - \psi) \rightsquigarrow N(0, var(\phi)).$$ Such a $\phi$ is called the *efficient influence function*. In nonparametric models the efficient influence function is the only influence function; it thus provides an important benchmark and allows for the construction of optimal estimators. Both (\[def:influence-function-1\]) and (\[def:influence-function-2\]) can be used as technical devices to obtain the efficient influence function.
There are at least two more fundamental reasons why characterizing influence functions is essential in nonparametric statistics. First, and most importantly, influence functions can be used to construct estimators with very favorable properties, such as double robustness or general second-order bias. Estimators with these properties can attain fast parametric convergence rates even in fully nonparametric settings where nuisance functions are estimated at slower rates via flexible machine learning. Second, influence functions are critical for understanding the asymptotics of the corresponding estimators, since by definition any regular asymptotically linear estimator can be expressed as the empirical average of an influence function plus a negligible $o_p(1/\sqrt{n})$ error term. We refer elsewhere (for example, @Kennedy16, @vaart98, @Bickel98, @VanAndRobin03, @Tsiatis06) for more detailed information about nonparametric efficiency theory.
Proof of Theorem \[thm:eif\] {#proof:thm-eif}
----------------------------
### Identifying expression for the efficient influence function
In the next lemma, we provide an identifying expression for the efficient influence function for our incremental effect $\psi_t(\delta)$ under a nonparametric model, which allows the data-generating process ${\mathbb{P}}$ to be infinite-dimensional.
\[lem:eif\] Define $$\begin{aligned}
m_s&(h_s,a_s, R_{s+1}=1) \\
& = \int_{ \mathcal{R}_s} \mu(h_{t},a_{t}, R_{t+1}=1) \prod_{k=s+1}^{{t}} dQ_k(a_k \mid h_k, R_k=1) d{\mathbb{P}}(x_k|h_{k-1},a_{k-1}, R_k=1)
\end{aligned}$$ for $s=0,...,{t}-1$, $\forall t \leq T$, where we write $\mathcal{R}_s = (\overline{\mathcal{X}}_{t}\times \overline{\mathcal{A}}_{t}) \setminus (\overline{\mathcal{X}}_{s}\times \overline{\mathcal{A}}_{s})$ and $\mu(h_{t},a_{t}, R_{t+1}=1) = {\mathbb{E}}(Y_t \mid H_{t} = h_{t}, A_{t} = a_{t}, R_{t+1}=1)$. For $s=t$ and $s=t+1$, we set $m_{s}(\cdot)=\mu(h_{t},a_{t}, R_{t+1}=1)$ and $m_{t+1}(\cdot)=Y$. Moreover, let $\frac{\mathbbm{1}(H_s=h_s, R_s=1)}{d{\mathbb{P}}(h_s,R_s=1)} \phi_s(H_s, A_s, R_s=1;a_s)$ denote the efficient influence function for $dQ_s(a_s|h_s,R_s=1)$.
Then, the efficient influence function for $m_0=\psi_t (\delta)$ is given by $$\begin{aligned}
& \sum_{s=0}^{t} \left\{ \int_{ \mathcal{A}_{s+1}} m_{s+1}(H_{s+1}, A_{s+1}, R_{s+2}=1)dQ_{s+1}(a_{s+1}|H_{s+1},R_{s+1}=1) - m_s(H_{s}, A_{s}, R_{s+1}=1) \right\} \\
& \qquad \times \mathbbm{1}\left(R_{s+1}=1\right) \left( \prod_{k=0}^{s} \frac{dQ_k(A_k \mid H_k, R_k=1)}{d{\mathbb{P}}(A_k \mid H_k, R_k=1)} \frac{1}{d{\mathbb{P}}(R_{k+1}=1 \mid H_k, A_k,R_k=1)} \right) \\
& + \sum_{s=1}^{t} \mathbbm{1}(R_s=1) \left( \prod_{k=0}^{s-1} \frac{dQ_k(A_k \mid H_k, R_k=1)}{d{\mathbb{P}}(A_k \mid H_k, R_k=1)} \frac{1}{d{\mathbb{P}}(R_{k+1}=1 \mid H_k, A_k,R_k=1)} \right) \\
& \qquad \quad \times \int_{ \mathcal{A}_{s}} m_s(H_s, a_s,R_{s+1}=1) \phi_s(H_s, A_s, R_s=1;a_s) d\nu(a_s)
\end{aligned}$$ where we define $dQ_{t+1} = 1$, $m_{t+1}(\cdot)=Y$, and $dQ_0(a_0|h_0)/d{\mathbb{P}}(a_0|h_0) = 1$, and $\nu$ is a dominating measure for the distribution of $A_s$.
The proof of Lemma \[lem:eif\] involves deriving the efficient influence function for general stochastic interventions that depend on both the observational propensity scores and the right-censoring process. In the proof, we delineate how chain-rule arguments can be applied to derive efficient influence functions for complicated functionals from much simpler functional forms. We further simplify the above efficient influence function into an estimable form in the next theorem.
The basic proof structure follows the work of [@Kennedy17]. We begin by presenting the following three additional lemmas to prove Lemma \[lem:eif\].
\[lem:eif\_dQ\] For $\forall t$, the efficient influence function for $$\begin{aligned}
dQ_t(a_t \mid h_t, R_t=1) = \frac{a_t\delta\pi_{t}(h_t) + (1-a_t) \{ 1 - \pi_t(h_t) \} }{\delta\pi_{t}(h_t) + 1 - \pi_{t}(h_t)}
\end{aligned}$$ which is defined in (\[eqn:incr-intv-ps\]) is given by $\frac{\mathbbm{1}(H_t=h_t, R_t=1)}{d{\mathbb{P}}(h_t,R_t=1)} \phi_t(H_t, A_t, R_t=1;a_t)$, where $\phi_t(H_t, A_t, R_t=1;a_t)$ equals $$\begin{aligned}
\frac{(2a_t-1)\delta\{A_t-\pi_t(H_t)\}}{\left( \delta\pi_t(H_t) + 1 - \pi_t(H_t) \right)^2}
\end{aligned}$$ where $\pi_t(h_t) = {\mathbb{P}}(A_t=1 \mid H_t=h_t, R_t=1)$.
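As an informal check of Lemma \[lem:eif\_dQ\], note that $dQ_t$ depends on ${\mathbb{P}}$ only through $\pi_t$, so by the chain rule its efficient influence function is $\partial\, dQ_t/\partial \pi_t$ times the influence function of the conditional mean $\pi_t$, whose residual part is $A_t-\pi_t(H_t)$. The snippet below, a sanity check of ours and not part of the proof, numerically verifies that $\partial\, dQ_t/\partial \pi_t = (2a_t-1)\delta/(\delta\pi_t+1-\pi_t)^2$, the factor multiplying $A_t-\pi_t(H_t)$ in the lemma.

```python
import numpy as np

def dQ(a, pi, delta):
    """Incremental intervention distribution dQ_t(a | h_t, R_t=1)."""
    return (a * delta * pi + (1 - a) * (1 - pi)) / (delta * pi + 1 - pi)

def eif_scale(a, pi, delta):
    """(2a-1) * delta / (delta*pi + 1 - pi)^2: the factor multiplying
    the residual (A_t - pi_t(H_t)) in Lemma [lem:eif_dQ]."""
    return (2 * a - 1) * delta / (delta * pi + 1 - pi) ** 2

# central-difference check that d(dQ)/d(pi) equals eif_scale
eps = 1e-7
for delta in (0.1, 0.5, 2.0, 10.0):
    for pi in (0.2, 0.5, 0.8):
        for a in (0, 1):
            fd = (dQ(a, pi + eps, delta) - dQ(a, pi - eps, delta)) / (2 * eps)
            assert abs(fd - eif_scale(a, pi, delta)) < 1e-5
```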
\[lem:eif\_1\] Suppose $\overline{Q}_T$ does not depend on ${\mathbb{P}}$. Recall that for all $t \leq T$, $$\begin{aligned}
m_s&(h_s,a_s, R_{s+1}=1) \\
& = \int_{ \mathcal{R}_s} \mu(h_{t},a_{t}, R_{t+1}=1) \prod_{k=s+1}^{{t}} dQ_k(a_k \mid h_k, R_k=1) d{\mathbb{P}}(x_k|h_{k-1},a_{k-1}, R_k=1)
\end{aligned}$$ for $s=0,\ldots,{t}-1$, where we write $\mathcal{R}_s = (\overline{\mathcal{X}}_{t}\times \overline{\mathcal{A}}_{t}) \setminus (\overline{\mathcal{X}}_{s}\times \overline{\mathcal{A}}_{s})$ and $\mu(h_{t},a_{t}, R_{t+1}=1) = {\mathbb{E}}(Y_t \mid H_{t} = h_{t}, A_{t} = a_{t}, R_{t+1}=1)$. Note that from the definition of $m_s$ it immediately follows that $m_s = \int_{\mathcal{X}_{s+1} \times \mathcal{A}_{s+1}} m_{s+1}dQ_{s+1}(a_{s+1} \mid h_{s+1}, R_{s+1}=1) d{\mathbb{P}}(x_{s+1}|h_{s},a_{s}, R_{s+1}=1) $.
Now the efficient influence function for $\psi^*(\overline{Q}_t)=m_0$ is $$\begin{aligned}
& \sum_{s=0}^{t} \left\{ \int_{ \mathcal{A}_{s+1}} m_{s+1}(H_{s+1}, A_{s+1}, R_{s+2}=1)dQ_{s+1}(a_{s+1}|H_{s+1},R_{s+1}=1) - m_s(H_{s}, A_{s}, R_{s+1}=1) \right\} \\
& \qquad \times \mathbbm{1}\left(R_{s+1}=1\right) \left( \prod_{k=0}^{s} \frac{dQ_k(A_k \mid H_k, R_k=1)}{d{\mathbb{P}}(A_k \mid H_k, R_k=1)} \frac{1}{d{\mathbb{P}}(R_{k+1}=1 \mid H_k, A_k,R_k=1)} \right)
\end{aligned}$$ where we define $dQ_{t+1} = 1$, $m_{t+1}(\cdot)=Y_t$, and $dQ_0(a_0|h_0)/d{\mathbb{P}}(a_0|h_0) = 1$.
\[lem:eif\_2\] Suppose $\overline{Q}_T$ depends on ${\mathbb{P}}$ and let $\frac{\mathbbm{1}(H_t=h_t, R_t=1)}{d{\mathbb{P}}(h_t,R_t=1)} \phi_t(H_t, A_t, R_t=1;a_t)$ denote the efficient influence function for $dQ_t(a_t|h_t,R_t=1)$ defined in Lemma \[lem:eif\_dQ\] for all $t$. Then the efficient influence function for $\psi_t(\delta)$ is given as $$\begin{aligned}
&\varphi^*(\overline{Q}_t) \\
& + \sum_{s=1}^{t} \mathbbm{1}(R_s=1) \left( \prod_{k=0}^{s-1} \frac{dQ_k(A_k \mid H_k, R_k=1)}{d{\mathbb{P}}(A_k \mid H_k, R_k=1)} \frac{1}{d{\mathbb{P}}(R_{k+1}=1 \mid H_k, A_k,R_k=1)} \right) \\
& \qquad \quad \times \int_{ \mathcal{A}_{s}} m_s(H_s, a_s,R_{s+1}=1) \phi_s(H_s, A_s, R_s=1;a_s) d\nu(a_s)
\end{aligned}$$ where $\varphi^*(\overline{Q}_t)$ is the efficient influence function from Lemma \[lem:eif\_1\] and $\nu$ is a dominating measure for the distribution of $A_s$.
The proofs of Lemma \[lem:eif\_dQ\], \[lem:eif\_1\] and \[lem:eif\_2\] essentially consist of a series of chain-rule computations, once efficient influence functions are specified for the terms that commonly appear. The full proofs are not particularly illuminating given their length; thus we omit the proof of Lemma \[lem:eif\_dQ\] and include only brief sketches of the proofs of Lemma \[lem:eif\_1\] and \[lem:eif\_2\] below, which may be useful for developing results for more general stochastic interventions.
### Proof of Lemma \[lem:eif\_1\] and Lemma \[lem:eif\_2\] {#proof-of-lemma-lemeif_1-and-lemma-lemeif_2 .unnumbered}
Let $\mathcal{IF}: \psi \rightarrow \phi$ denote the map taking a functional $\psi$ to its efficient influence function $\phi$. First, without proof, we specify the efficient influence functions for a mean and a conditional mean, which serve as the two basic ingredients of our proof. For the mean of a random variable $Z$, we have $$\mathcal{IF}\big({\mathbb{E}}[Z]\big) = Z - {\mathbb{E}}[Z],$$ and for the conditional mean with a pair of random variables $(X,Y) \sim {\mathbb{P}}$ where $X$ is discrete, we have $$\mathcal{IF}\big({\mathbb{E}}[Y\vert X=x]\big) = \frac{\mathbbm{1}(X=x)}{{\mathbb{P}}(X=x)}\Big\{ Y - {\mathbb{E}}[Y \mid X=x] \Big\}.$$ These results can be obtained directly from either (\[def:influence-function-1\]) or (\[def:influence-function-2\]) in Section \[sec:efficiency-theory\].
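The conditional-mean formula can likewise be verified by a finite-difference Gateaux derivative on a small discrete joint distribution. The check below is illustrative only; the pmf values are arbitrary choices of ours.

```python
import numpy as np

# Arbitrary discrete joint pmf p(x, y) on {0,1} x {0,1,2}.
p = np.array([[0.10, 0.15, 0.05],
              [0.20, 0.25, 0.25]])
y_vals = np.array([0.0, 1.0, 2.0])

def cond_mean(pmf, x):
    """E[Y | X = x] under the joint pmf."""
    return float(pmf[x] @ y_vals / pmf[x].sum())

# Gateaux derivative at the contamination point (xs, ys) should equal
# 1(xs = x) / P(X = x) * (ys - E[Y | X = x]).
eps = 1e-7
x = 1
for xs in (0, 1):
    for j, ys in enumerate(y_vals):
        dirac = np.zeros_like(p)
        dirac[xs, j] = 1.0
        mixed = (1 - eps) * p + eps * dirac
        gateaux = (cond_mean(mixed, x) - cond_mean(p, x)) / eps
        analytic = (xs == x) / p[x].sum() * (ys - cond_mean(p, x))
        assert abs(gateaux - analytic) < 1e-5
```

Contamination at points with $x^* \neq x$ leaves the conditional mean unchanged, matching the indicator in the formula.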
It is sufficient to prove the case $t=2$, since the proof extends to general $t \leq T$ by straightforward induction. For $t=2$, it is enough to compute the following four terms.
- $
\begin{aligned}[t]
& \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mathcal{IF} \Big( \mu(h_2,a_2, R_3=1) \Big) \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) d{\mathbb{P}}(x_s|h_{s-1},a_{s-1}, R_s=1) \\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \frac{\mathbbm{1}\{ (H_2,A_2,R_3)=(h_2,a_2,1) \} }{d{\mathbb{P}}(h_2,a_2,R_3=1)}\Big\{Y - \mu(h_2,a_2, R_3=1) \Big\} \\
& \qquad \qquad \ \times \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) d{\mathbb{P}}(x_s|h_{s-1},a_{s-1}, R_s=1)\\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mathbbm{1}\big\{ (H_2,A_2,R_3)=(h_2,a_2,1) \big\} \big\{Y - \mu(h_2,a_2, R_3=1) \big\} \\
& \qquad \qquad \ \times \prod_{s=1}^{2} \frac{dQ_s(a_s \mid h_s, R_s=1)}{d{\mathbb{P}}(a_s \mid h_s, R_s=1)} \frac{1}{d{\mathbb{P}}(R_{s+1}=1 \mid h_s, a_s,R_s=1)} \\
&= \{ Y - \mu(H_2,A_2, R_3=1) \} \mathbbm{1}(R_3=1) \prod_{s=1}^{2} \frac{dQ_s(A_s \mid H_s, R_s=1)}{d{\mathbb{P}}(A_s \mid H_s, R_s=1)} \frac{1}{d{\mathbb{P}}(R_{s+1}=1 \mid H_s, A_s,R_s=1)}
\end{aligned}
$
- $
\begin{aligned}[t]
& \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) \mathcal{IF} \Big( d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \Big) d{\mathbb{P}}(h_1) \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) \\
& = \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) \frac{\mathbbm{1}\big\{ (H_1,A_1,R_2)=(h_1,a_1,1) \big\}}{d{\mathbb{P}}(h_1, a_1, R_2=1)} \Big\{\mathbbm{1}(X_2=x_2) - d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \Big\} \\
& \qquad \qquad \ \times d{\mathbb{P}}(h_1) \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) \\
& = \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) \frac{\mathbbm{1}\big\{ (H_1,A_1,R_2)=(h_1,a_1,1) \big\} \big\{\mathbbm{1}(X_2=x_2) - d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \big\} }{d{\mathbb{P}}(R_2=1|h_1,a_1)d{\mathbb{P}}(a_1|h_1)d{\mathbb{P}}(h_1)} \\
& \qquad \qquad \ \times d{\mathbb{P}}(h_1) \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) \\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) dQ_2(a_2 \mid h_2, R_2=1) \mathbbm{1}\big\{ (H_1,A_1,R_2)=(h_1,a_1,1) \big \} \\
& \qquad \qquad \ \times \big\{\mathbbm{1}(X_2=x_2) - d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \big\} \frac{dQ_1(A_1 \mid H_1)}{d{\mathbb{P}}(A_1 \mid H_1)} \frac{1}{d{\mathbb{P}}(R_{2}=1 \mid H_1, A_1)} \\
&= \Bigg\{ \int_{ \mathcal{H}_2\times \mathcal{A}_2 \setminus \mathcal{H}_2 } \mu(H_2,a_2, R_3=1) dQ_2(a_2 \mid H_2, R_2=1) \\
& \qquad - \int_{ \mathcal{H}_2\times \mathcal{A}_2 \setminus \mathcal{H}_1 \times \mathcal{A}_1 } \mu(h_2,a_2, R_3=1) dQ_2(a_2 \mid h_2, R_2=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \Bigg\} \\
& \qquad \times \mathbbm{1}(R_2=1) \frac{dQ_1(A_1 \mid H_1)}{d{\mathbb{P}}(A_1 \mid H_1)} \frac{1}{d{\mathbb{P}}(R_{2}=1 \mid H_1, A_1)} \\
&= \Bigg\{ \int_{ \mathcal{A}_2 } \mu(H_2,a_2, R_3=1) dQ_2(a_2 \mid H_2, R_2=1) - m_1(H_1,A_1,R_2=1) \Bigg\} \\
& \qquad \times \mathbbm{1}(R_2=1) \frac{dQ_1(A_1 \mid H_1)}{d{\mathbb{P}}(A_1 \mid H_1)} \frac{1}{d{\mathbb{P}}(R_{2}=1 \mid H_1, A_1)} \\
\end{aligned}
$
- $
\begin{aligned}[t]
& \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \mathcal{IF} \Big(d{\mathbb{P}}(h_1) \Big) \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) \\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \big\{\mathbbm{1}(X_1=x_1) - d{\mathbb{P}}(x_1) \big\} \prod_{s=1}^{2} dQ_s(a_s \mid h_s, R_s=1) \\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2 \setminus \mathcal{H}_1 } \mu(h_2,a_2, R_3=1) dQ_2(a_2 \mid h_2, R_2=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) dQ_1(a_1|h_1) - m_0 \\
& =\int_{ \mathcal{A}_1} m_1(H_1,a_1,R_2=1) dQ_1(a_1|H_1) - m_0 \\
\end{aligned}
$
- Let $\phi_t$ denote the efficient influence function for $dQ_t(a_t|h_t,R_t=1)$ as given in Lemma \[lem:eif\_dQ\]. Now we have\
$
\begin{aligned}
& \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(h_1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \mathcal{IF} \Big( dQ_1(a_1|h_1) dQ_2(a_2 \mid h_2, R_2=1) \Big) \\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(h_1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \frac{\mathbbm{1}\big\{ (H_2,R_2)=(h_2,1) \big\}}{d{\mathbb{P}}(h_2, R_2=1)} \phi_2 dQ_1(a_1|h_1) \\
& \quad + \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(h_1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \frac{\mathbbm{1}\big\{ (H_1=h_1) \big\}}{d{\mathbb{P}}(h_1)} \phi_1 dQ_2(a_2 \mid h_2, R_2=1)\\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) \frac{\mathbbm{1}\big\{ (H_2,R_2)=(h_2,1) \big\}d{\mathbb{P}}(h_1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) dQ_1(a_1|h_1) }{d{\mathbb{P}}(x_2|h_1,a_1,R_2=1)d{\mathbb{P}}(R_2=1|h_1,a_1)d{\mathbb{P}}(a_1|h_1)d{\mathbb{P}}(h_1)} \phi_2 \\
& \quad + \int_{ \mathcal{H}_2\times \mathcal{A}_2} \mu(h_2,a_2, R_3=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \mathbbm{1}\big\{ (H_1=h_1) \big\} \phi_1 dQ_2(a_2 \mid h_2, R_2=1)\\
&= \int_{ \mathcal{H}_2\times \mathcal{A}_2 \setminus \mathcal{H}_2} \mu(H_2,a_2, R_3=1) \mathbbm{1}(R_2=1) \phi_2 \frac{dQ_1(A_1 \mid H_1)}{d{\mathbb{P}}(A_1 \mid H_1)} \frac{1}{d{\mathbb{P}}(R_{2}=1 \mid H_1, A_1)} \\
& \quad + \int_{ \mathcal{H}_2\times \mathcal{A}_2 \setminus \mathcal{H}_1} \mu(h_2,a_2, R_3=1) dQ_2(a_2 \mid h_2, R_2=1) d{\mathbb{P}}(x_2 | h_1, a_1, R_2=1) \phi_1 \\
&= \left\{ \frac{dQ_1(A_1 \mid H_1)}{d{\mathbb{P}}(A_1 \mid H_1)} \frac{1}{d{\mathbb{P}}(R_{2}=1 \mid H_1, A_1)} \right\} \int_{ \mathcal{A}_2 } \mu(H_2,a_2, R_3=1) \phi_2 d\nu(a_2) \mathbbm{1}(R_2=1) \\
& \quad + \int_{ \mathcal{A}_1 } m_1(H_1,a_1,R_2=1) \phi_1 d\nu(a_1)
\end{aligned}
$
Note that we have set $dQ_0(a_0|h_0)/d{\mathbb{P}}(a_0|h_0) = 1$, and that $d{\mathbb{P}}(R_1=1)=1$ and $ \mathbbm{1}(R_1=1)=1 $ by construction. Hence, putting the first three parts together proves Lemma \[lem:eif\_1\], and the fourth part proves Lemma \[lem:eif\_2\].
### Conversion to an estimable form
Next, we convert the identifying expression in Lemma \[lem:eif\] into an estimable form that is also more succinct and intuitive. To this end, we first present, in the following lemma, two identities about the parameter $m_t$ defined in Lemma \[lem:eif\].
\[lem:m\_t-equivalence\] Given $m_t$ defined in Lemma \[lem:eif\], for all $t \leq T$ we have the following identities.
- $\mathbbm{1}(R_{t+1}=1) m_t(H_t, A_t, R_{t+1}=1) = m_t(H_t, A_t, R_{t+1}=1)$
- $\left( \frac{\mathbbm{1}(R_{t+1}=1)}{d{\mathbb{P}}(R_{t+1}=1 \mid H_t, A_t, R_t=1)} \right) m_t(H_t, A_t, R_{t+1}=1) = \mathbbm{1}(R_{t+1}=1) m_t(H_t, A_t, R_{t+1}=1)$
First, note that from Remark \[rmk:m\_s\], $$\begin{aligned}
& m_t(H_t, A_t, R_{t+1}=1)\\
& = {\mathbb{E}}\left[ \frac{m_{t+1}(H_{t+1}, 1, 1) \delta\pi_{t+1}(H_{t+1}) + m_{t+1}(H_{t+1}, 0, 1) \{ 1 - \pi_{t+1}(H_{t+1}) \} }{\delta\pi_{t+1}(H_{t+1}) + 1 - \pi_{t+1}(H_{t+1})} \Bigg\vert H_t, A_t, R_{t+1}=1 \right]
\end{aligned}$$ where we use the shorthand notation $m_{t+1}(H_{t+1}, a, 1) = m_{t+1}(H_{t+1}, A_{t+1}=a, R_{t+2}=1)$. In this proof, let $(m\cdot dQ)_{t+1}$ denote the quotient inside the above conditional expectation, namely $\frac{m_{t+1}(H_{t+1}, 1, 1) \delta\pi_{t+1}(H_{t+1}) + m_{t+1}(H_{t+1}, 0, 1) \{ 1 - \pi_{t+1}(H_{t+1}) \} }{\delta\pi_{t+1}(H_{t+1}) + 1 - \pi_{t+1}(H_{t+1})}$.
The identity in *part a* immediately follows from the definition of $m_t$.
For the identity in *part b*, first note that by assumption (\[assumption:A2-M\]), $d{\mathbb{P}}(x_s|h_{s-1},a_{s-1}, R_s=1)= d{\mathbb{P}}(x_s|h_{s-1},a_{s-1}, R_{s-1}=1)$ for every $s>1$. Thus, based on the definition of $m_t$, we can write $$m_t= {\mathbb{E}}\left[ (m\cdot dQ)_{t+1} \big\vert H_t, A_t, R_{t}=1 \right].$$ Now define the additional shorthand notation $h_{t+1}^{A_{t},H_t} \coloneqq (x_{t+1}, A_{t},H_t)$ and $R_{t+1}^{R_{t}=1} \coloneqq (R_{t+1}, R_t=1)$. Then it follows that $$\begin{aligned}
& m_t(H_t, A_t, R_{t+1}=1) \\
&= {\mathbb{E}}\left[ (m\cdot dQ)_{t+1} \big\vert H_t, A_t, R_{t}=1 \right] \\
&= {\mathbb{E}}\left[ {\mathbb{E}}\left\{ (m\cdot dQ)_{t+1} \big\vert H_{t+1}, A_{t+1}, R_{t+1}^{R_{t}=1} \right\} \big\vert H_t, A_t, R_{t}=1 \right] \\
&= \int {\mathbb{E}}\left\{ (m\cdot dQ)_{t+1} \big\vert h_{t+1}^{A_{t},H_t}, a_{t+1}, R_{t+1}^{R_{t}=1} \right\} d{\mathbb{P}}(a_{t+1} \mid h_{t+1}^{A_{t},H_t}, R_{t+1}^{R_{t}=1}) d{\mathbb{P}}(x_{t+1}, R_{t+1} \mid H_{t}, A_t, R_{t}=1) \\
& = \int {\mathbb{E}}\left\{ (m\cdot dQ)_{t+1} \big\vert h_{t+1}^{A_{t},H_t}, a_{t+1}, R_{t+1}^{R_{t}=1} \right\} \\
& \qquad \times d{\mathbb{P}}(a_{t+1} \mid h_{t+1}^{A_{t},H_t}, R_{t+1}^{R_{t}=1})d{\mathbb{P}}(x_{t+1} \mid H_{t}, A_t, R_{t}=1) d{\mathbb{P}}(R_{t+1} \mid H_{t}, A_t, R_{t}=1) \\
& = \int {\mathbb{E}}\left\{ (m\cdot dQ)_{t+1} \big\vert h_{t+1}^{A_{t},H_t}, a_{t+1}, R_{t+1}^{R_{t}=1} \right\} \\
& \qquad \times d{\mathbb{P}}(a_{t+1} \mid h_{t+1}^{A_{t},H_t}, R_{t+1}^{R_{t}=1})d{\mathbb{P}}(x_{t+1} \mid H_{t}, A_t, R_{t+1}^{R_{t}=1}) d{\mathbb{P}}(R_{t+1} \mid H_{t}, A_t, R_{t}=1) \\
&= {\mathbb{E}}\left[ (m\cdot dQ)_{t+1} \big\vert H_t, A_t, R_{t+1}^{R_{t}=1} \right] d{\mathbb{P}}(R_{t+1} \mid H_{t}, A_t, R_{t}=1)
\end{aligned}$$ where both the fourth and the fifth equalities follow from assumption (\[assumption:A2-M\]). From this result, it is straightforward to see that $$\begin{aligned}
& \mathbbm{1}(R_{t+1}=1) m_t(H_t, A_t, R_{t+1}=1) \\
&= \mathbbm{1}(R_{t+1}=1){\mathbb{E}}\left[ (m\cdot dQ)_{t+1} \big\vert H_t, A_t, R_{t+1}^{R_{t}=1} \right] d{\mathbb{P}}(R_{t+1} \mid H_{t}, A_t, R_{t}=1) \\
&= \mathbbm{1}(R_{t+1}=1){\mathbb{E}}\left[ (m\cdot dQ)_{t+1} \big\vert H_t, A_t, R_{t+1}=1 \right] d{\mathbb{P}}(R_{t+1}=1 \mid H_{t}, A_t, R_{t}=1).
\end{aligned}$$ Finally, assumption (\[assumption:A3\]) guarantees that $$\left( \frac{\mathbbm{1}(R_{t+1}=1)}{d{\mathbb{P}}(R_{t+1}=1 \mid H_t, A_t, R_t=1)} \right) m_t(H_t, A_t, R_{t+1}=1) = \mathbbm{1}(R_{t+1}=1) m_t(H_t, A_t, R_{t+1}=1),$$ which is the desired identity.
We are now ready to prove Theorem \[thm:eif\]. The proof amounts to simplifying the efficient influence function above in terms of estimable regression functions.
### Proof of Theorem \[thm:eif\] {#proof-of-theorem-thmeif}
First, we define the following shorthand notation for the proof: for all $s \leq t$, $$dQ_{s}(A_{s}) \equiv dQ_{s}(A_{s}|H_{s},R_{s}=1), \qquad d{\mathbb{P}}_{s}(A_{s}) \equiv d{\mathbb{P}}(A_{s} \mid H_{s}, R_{s}=1),$$ $$d\omega_{s} \equiv \omega_{s}( H_{s}, A_{s}) \equiv d{\mathbb{P}}(R_{{s}+1}=1 \mid H_{s}, A_{s},R_{s}=1),$$ $$m_{s}(H_{s}, a_{s}) \equiv m_{s}(H_{s}, a_{s}, R_{{s}+1}=1).$$ With this notation we can rewrite the result of Lemma \[lem:eif\_1\] as below. $$\begin{aligned}
& \sum_{s=0}^{t} \left\{ \int_{ \mathcal{A}_{{s}+1}} m_{{s}+1}(H_{{s}+1}, a_{{s}+1})dQ_{{s}+1}(a_{{s}+1}) - m_{s}(H_{s}, A_{s}) \right\} \mathbbm{1}\left(R_{{s}+1}=1\right) \left( \prod_{k=0}^{s} \frac{dQ_k(A_k )}{d{\mathbb{P}}_k(A_k)} \frac{1}{d\omega_k} \right) \\
& = \sum_{{s}=1}^{t} \left\{ \int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s})dQ_{s}(a_{s}) - m_{s}(H_{s}, A_{s}) \left[ \mathbbm{1}\left(R_{{s}+1}=1\right) \frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})} \frac{1}{d\omega_{s}} \right] \right\} \\
& \qquad \times \mathbbm{1}\left(R_{{s}}=1\right) \left( \prod_{k=0}^{s-1} \frac{dQ_k(A_k )}{d{\mathbb{P}}_k(A_k)} \frac{1}{d\omega_k} \right) + \mathbbm{1}\left(R_{{t}+1}=1\right) \left( \prod_{s=1}^{t} \frac{dQ_s(A_s )}{d{\mathbb{P}}_s(A_s)} \frac{1}{d\omega_s} \right)Y_t - m_0.
\end{aligned}$$ Now, by the result of Lemma \[lem:eif\_1\] and \[lem:eif\_2\], we can represent the efficient influence function for $\psi_t(\delta)$ as $$\begin{aligned}
& \sum_{{s}=1}^{t} \Bigg\{ \int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s})dQ_{s}(a_{s}) - m_{s}(H_{s}, A_{s}) \left[ \mathbbm{1}\left(R_{{s}+1}=1\right) \frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})} \frac{1}{d\omega_{s}} \right] \\
& \ \ \qquad + \int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s}) \phi_{s}(H_{s}, A_{s}, R_{s}=1;a_{s}) d\nu(a_{s}) \Bigg\}
\mathbbm{1}\left(R_{s}=1\right) \left( \prod_{k=0}^{s-1} \frac{dQ_k(A_k )}{d{\mathbb{P}}_k(A_k)} \frac{1}{d\omega_k} \right) \\
& \qquad + \mathbbm{1}\left(R_{{t}+1}=1\right) \left( \prod_{s=1}^{t} \frac{dQ_s(A_s )}{d{\mathbb{P}}_s(A_s)} \frac{1}{d\omega_s} \right)Y_t - m_0.
\end{aligned}$$ On the other hand, we have $$\begin{aligned}
\int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s})dQ_{s}(a_{s}) &= \frac{m_{s}(H_{s},1)\delta\pi_{s}(H_{s})+m_{s}(H_{s},0)\{ 1-\pi_{s}(H_{s}) \}}{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})},
\end{aligned}$$ $$\begin{aligned}
\frac{dQ_s(A_s )}{d{\mathbb{P}}_s(A_s )} &= \frac{\delta A_s + 1-A_s}{\delta\pi_s(H_s) + 1-\pi_s(H_s)},
\end{aligned}$$ $$\begin{aligned}
m_{s}(H_{s}, A_{s})\frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})} &= \frac{m_{s}(H_{s}, 1, R_{{s}+1}=1)\delta A_{s} + m_{s}(H_{s}, 0, R_{{s}+1}=1)(1-A_{s})}{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})},
\end{aligned}$$ $$\begin{aligned}
\int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s})& \phi_{s}(H_{s},A_{s},R_{s}=1;a_{s}) d\nu(a_{s})
= \frac{\{ m_{s}(H_{s}, 1) - m_{s}(H_{s}, 0)\}\delta(A_{s} - \pi_{s}(H_{s})) }{\left(\delta\pi_{s}(H_{s}) + 1 - \pi_{s}(H_{s}) \right)^2 }.
\end{aligned}$$
Now, going back to the expression for the efficient influence function, note that by Lemma \[lem:m\_t-equivalence\] the terms inside the summation, before being multiplied by\
$\mathbbm{1}\left(R_{s}=1\right) \left( \prod_{k=0}^{s-1} \frac{dQ_k(A_k )}{d{\mathbb{P}}_k(A_k)} \frac{1}{d\omega_k} \right)$ simplify to $$\begin{aligned}
& \int_{ \mathcal{A}_{s}} m_{s}(H_{s}, a_{s})dQ_{s}(a_{s}) - m_{s}(H_{s}, A_{s}) \left[ \mathbbm{1}\left(R_{{s}+1}=1\right) \frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})} \frac{1}{d\omega_{s}} \right] \\
&= \int_{ \mathcal{A}_{s}} \mathbbm{1}\left(R_{{s}+1}=1\right)m_{s}(H_{s}, a_{s})dQ_{s}(a_{s}) - \mathbbm{1}\left(R_{{s}+1}=1\right)m_{s}(H_{s}, A_{s}) \frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})} \\
& \quad + \int_{ \mathcal{A}_{s}} \mathbbm{1}\left(R_{{s}+1}=1\right) m_{s}(H_{s}, a_{s}) \phi_{s}(H_{s}, A_{s}, R_{s}=1;a_{s}) d\nu(a_{s}) \\
&= \Bigg[ \frac{m_{s}(H_{s},1)\delta\pi_{s}(H_{s})+m_{s}(H_{s},0)\{ 1-\pi_{s}(H_{s}) \}}{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})}
- \frac{m_{s}(H_{s}, 1)\delta A_{s} + m_{s}(H_{s}, 0)(1-A_{s})}{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})} \\
& \qquad + \frac{\{ m_{s}(H_{s}, 1) - m_{s}(H_{s}, 0)\}\delta(A_{s} - \pi_{s}(H_{s})) }{\left(\delta\pi_{s}(H_{s}) + 1 - \pi_{s}(H_{s}) \right)^2 } \Bigg] \mathbbm{1}\left(R_{{s}+1}=1\right) \\
&= \left[ \frac{ ( \pi_{s}(H_{s})- A_{s})\{\delta m_{s}(H_{s}, 1) -m_{s}(H_{s}, 0) \} }{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})} + \frac{\{ m_{s}(H_{s}, 1) - m_{s}(H_{s}, 0)\}\delta(A_{s} - \pi_{s}(H_{s})) }{\left(\delta\pi_{s}(H_{s}) + 1 - \pi_{s}(H_{s}) \right)^2 } \right]\mathbbm{1}\left(R_{{s}+1}=1\right) \\
&= \left( \frac{ \left\{A_{s} - \pi_{s}(H_{s})\right\}(1-\delta) }{\delta\pi_{s}(H_{s}) + 1-\pi_{s}(H_{s})} \right) \left[ \frac{m_{s}(H_{s},1)\delta\pi_{s}(H_{s})+m_{s}(H_{s},0)\{ 1-\pi_{s}(H_{s}) \}}{\delta\pi_{s}(H_{s}) + 1 - \pi_{s}(H_{s})} \right] \mathbbm{1}\left(R_{{s}+1}=1\right)
\end{aligned}$$ By multiplying $\left[ \frac{dQ_{s}(A_{s})}{d{\mathbb{P}}_{s}(A_{s})}\frac{1}{d\omega_{s}} \right]^{-1}$ to the last expression, we finally obtain an equivalent form of the efficient influence function for $\psi_t(\delta)$ as $$\begin{aligned}
& \sum_{s=0}^{t} \left\{ \frac{ \{A_{s} - \pi_{s}(H_{s})\}(1-\delta)}{\delta A_{s} + 1-A_{s}} \right\} \left[ \frac{m_{s}(H_{s},1)\delta\pi_{s}(H_{s})+m_{s}(H_{s},0)\{ 1-\pi_{s}(H_{s}) \}}{\delta\pi_{s}(H_{s}) + 1 - \pi_{s}(H_{s})} \right]\omega_{s}( H_{s}, A_{s}) \\
& \qquad \times\left( \prod_{k=1}^{s} \frac{\delta A_k + 1-A_k}{\delta\pi_k(H_k) + 1-\pi_k(H_k)} \cdot\frac{\mathbbm{1}\left(R_{{k}+1}=1\right)}{\omega_{k}( H_{k}, A_{k})} \right) + \prod_{s=1}^{t} \left\{ \frac{\delta A_s + 1-A_s}{\delta\pi_s(H_s) + 1-\pi_s(H_s)}\cdot\frac{\mathbbm{1}\left(R_{s+1}=1\right)}{\omega_s( H_s, A_s)} \right\} Y_t \\
& \ - \psi_t(\delta).
\end{aligned}$$
Sequential regression formulation
---------------------------------
The efficient influence function derived in the previous subsection involves pseudo-regression functions $m$, whose estimation might in general require complicated conditional density estimation. However, as pointed out by @Kennedy17, an efficient strategy is to formulate a series of sequential regressions for $m_s$, as described in more detail in the following remark.
\[rmk:m\_s\] From the definition of $m_s$, it immediately follows that $$m_s = \int_{\mathcal{X}_s \times \mathcal{A}_s} m_{s+1}dQ_{s+1}(a_{s+1} \mid h_{s+1}, R_{s+1}=1) d{\mathbb{P}}(x_{s+1}|h_s,a_s, R_{s+1}=1).$$ Hence, we can express the functions $m_s(\cdot)$ in Theorem \[thm:eif\] via the following recursive regression: $$\begin{aligned}
& m_s(H_s, A_s, R_{s+1}=1)\\
& = {\mathbb{E}}\left[ \frac{m_{s+1}(H_{s+1}, 1, 1) \delta\pi_{s+1}(H_{s+1}) + m_{s+1}(H_{s+1}, 0, 1) \{ 1 - \pi_{s+1}(H_{s+1}) \} }{\delta\pi_{s+1}(H_{s+1}) + 1 - \pi_{s+1}(H_{s+1})} \Bigg\vert H_s, A_s, R_{s+1}=1 \right]
\end{aligned}$$ for $s= 1, ... , t-1$, where we use the shorthand notation $m_{s+1}(H_{s+1}, a_{s+1}, 1) = m_{s+1}(H_{s+1}, A_{s+1}=a_{s+1}, R_{s+2}=1)$, with base case $m_t(H_t,A_t,1)=\mu(H_t,A_t, R_{t+1}=1)$.
The sequential regression form above is practically useful when estimating $m_s$, since it allows us to bypass conditional density estimation altogether and instead use regression methods that are readily available in statistical software.
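As a concrete illustration of this backward recursion, the following sketch estimates each $m_s$ with a plain least-squares regression (any regression method could be substituted). The array shapes, the propensity input `pi`, and the helper `fit_ols` are assumptions made for this sketch; they are not part of the paper.

```python
import numpy as np

def fit_ols(Z, y):
    """Least squares with intercept; a stand-in for any regression method."""
    Zb = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Zb, y, rcond=None)
    return lambda Znew: np.column_stack([np.ones(len(Znew)), Znew]) @ beta

def sequential_regressions(H, A, R, Y, pi, delta):
    """Backward recursion for the pseudo-regression functions m_s (sketch).

    H[s]: (n, d) history features at step s, A: (n, T) binary treatments,
    R: (n, T+1) with R[i, s+1] = 1 iff unit i is observed after step s,
    Y: (n,) outcomes, pi: (n, T) estimated propensities P(A_s = 1 | H_s).
    Returns the fitted regression for m at the first timepoint.
    """
    n, T = A.shape
    pseudo = Y.astype(float)                 # base case: regress Y itself
    for s in range(T - 1, -1, -1):
        obs = R[:, s + 1] == 1               # dropouts are excluded from the fit
        Z = np.column_stack([H[s], A[:, s]])
        mhat = fit_ols(Z[obs], pseudo[obs])  # m_s(H_s, A_s) among R_{s+1} = 1
        if s == 0:
            return mhat
        m1 = mhat(np.column_stack([H[s], np.ones(n)]))   # m_s(H_s, 1)
        m0 = mhat(np.column_stack([H[s], np.zeros(n)]))  # m_s(H_s, 0)
        p = pi[:, s]
        # incremental-intervention collapse: the pseudo-outcome of the recursion
        pseudo = (delta * p * m1 + (1 - p) * m0) / (delta * p + 1 - p)
```

Each pass fits one regression and converts the fit into a pseudo-outcome for the next (earlier) regression, exactly mirroring the recursion in the remark.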
EIF for $T=1$ {#eif-for-T=1}
-------------
In the next corollary we provide the efficient influence function for the incremental effect in a single timepoint study ($T=1$) whose identifying expression is given in Corollary \[cor:ident-exp-pt-trt\].
\[cor:eif-single-exp\] When $T=1$, the efficient influence function for $\psi(\delta)$ in Corollary \[cor:ident-exp-pt-trt\] is given by $$\begin{aligned}
\mathbbm{1}\left(R=1\right)\left[\frac{\delta\pi(1\vert X)\phi_{1,R=1}(Z)+\pi(0\vert X)\phi_{0,R=1}(Z)}{\delta\pi(1\vert X) + \pi(0\vert X)} + \frac{\delta\{\mu(X,1,1) - \mu(X,0,1) \}\left(A-\pi(1\vert X)\right)}{\left\{\delta\pi(1\vert X)+\pi(0\vert X)\right\}^2}\right]
\end{aligned}$$ where $$\mu(x,a, 1) = {\mathbb{E}}(Y \mid X = x, A = a, R=1),$$ $$\pi(a\vert x) = d{\mathbb{P}}(A=a \mid X = x),$$ and $$\phi_{a,R=1}(Z) = \frac{\mathbbm{1}\left(A=a\right)\mathbbm{1}\left(R=1\right)}{\pi(a\vert X)\omega(X,a)}\left\{Y- \mu(X,a,1)\right\} + \mu(X,a,1)$$ which is the uncentered efficient influence function for ${\mathbb{E}}[\mu(X,a, 1)]$.
The efficient influence function for the point exposure case has a simpler and more intuitive form. As stated in Corollary \[cor:eif-single-exp\], it is a weighted average of the two efficient influence functions $\phi_{0,R=1}, \phi_{1,R=1}$, plus a contribution term due to the unknown propensity scores. The indicator function $\mathbbm{1}\left(R=1\right)$ appears because of the possibility of dropout: if a subject drops out, the outcome is unavailable, and consequently that subject contributes nothing to this term.
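For concreteness, the displayed formula can be implemented directly once nuisance estimates are in hand; averaging the output over the sample gives a one-step estimate of $\psi(\delta)$. The argument names and the convention that nuisances are supplied as arrays are assumptions of this sketch.

```python
import numpy as np

def eif_T1(Y, A, R, pi, mu1, mu0, om1, om0, delta):
    """Uncentered EIF of Corollary [cor:eif-single-exp] for T = 1 (sketch).

    pi  : estimated P(A=1 | X)  (so P(A=0 | X) = 1 - pi)
    mu_a: estimated E[Y | X, A=a, R=1]
    om_a: estimated omega(X, a) = P(R=1 | X, A=a)
    Y may contain NaN where R = 0; such entries are zeroed by the indicators.
    """
    Yobs = np.where(R == 1, Y, 0.0)
    phi1 = (A == 1) * (R == 1) / (pi * om1) * (Yobs - mu1) + mu1
    phi0 = (A == 0) * (R == 1) / ((1 - pi) * om0) * (Yobs - mu0) + mu0
    D = delta * pi + (1 - pi)
    return (R == 1) * ((delta * pi * phi1 + (1 - pi) * phi0) / D
                       + delta * (mu1 - mu0) * (A - pi) / D ** 2)
```

The two `phi` lines are the uncentered influence functions for ${\mathbb{E}}[\mu(X,a,1)]$, and the final line forms exactly the weighted average plus the propensity-score correction term of the corollary.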
Proof of Theorem \[thm:inf-time-horizon\] {#proof:thm-inf-time}
-----------------------------------------
First we find an alternative form of the variance of each estimator, which will come in handy later in the proof. To this end, let $\widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})$ denote the standard IPW estimator of a classical deterministic intervention effect ${\mathbb{E}}\left[ Y^{\overline{a^\prime}_{T}} \right]$ under the i.i.d.\ assumption, i.e. $$\widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T}) = \prod_{t=1}^{T} \left( \frac{\mathbbm{1}\left({A}_{t}={a^\prime}_{t}\right)}{ \pi_t(a^\prime_t|H_t)} \right)Y.$$ Hence $\widehat{\psi}_{c.ipw}(\overline{\bm{1}})$ is equivalent to $\widehat{\psi}_{at}$ in the main text. Now by definition we have $$\begin{aligned}
Var\left(\widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})\right) &= {\mathbb{E}}\left\{ \left( \prod_{t=1}^{T} \frac{\mathbbm{1}\left({A}_{t}={a^\prime}_{t}\right)}{ \pi_t(a^\prime_t|H_t)^2} \right)Y^2 \right\} - \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \frac{\mathbbm{1}\left({A}_{t}={a^\prime}_{t}\right)}{ \pi_t(a^\prime_t|H_t)} Y \right] \right\}^2 \\
& \equiv \mathbb{V}_{c.ipw.1}(\overline{a^\prime}_{T}) - \mathbb{V}_{c.ipw.2}(\overline{a^\prime}_{T})
\end{aligned}$$ where $\mathbb{V}_{c.ipw.1}(\overline{a^\prime}_{T})$ and $\mathbb{V}_{c.ipw.2}(\overline{a^\prime}_{T})$ are simply the first and second terms in the first line of the expansion, respectively.
By the same procedure used to derive the g-formula [@robins1986], it is easy to see that $$\begin{aligned}
\mathbb{V}_{c.ipw.1}(\overline{a^\prime}_{T}) &= {\mathbb{E}}\left\{ \prod_{t=1}^{T} \left( \frac{\mathbbm{1}\left({A}_{t}={a^\prime}_{t}\right)}{ \pi_t(a^\prime_t|H_t)^2} \right)Y^2 \right\} \\
&= \int_{\mathcal{X}} {\mathbb{E}}\left[Y^2 \mid \overline{X}_{t}, \overline{A}_{t}=\overline{a^\prime}_{t} \right] \prod_{t=1}^{T} \frac{d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a^\prime}_{t-1})}{\pi_t(a^\prime_t|H_t)}
\end{aligned}$$ where $\mathcal{X}=\mathcal{X}_1 \times \cdots \times \mathcal{X}_{T}$. The result above follows by iterated expectation conditioning on $\overline{X}_{t}$, then another iterated expectation conditioning on $H_t$, together with the fact that ${\mathbb{E}}\left[ \frac{\mathbbm{1}\left({A}_{t}={a^\prime}_{t}\right)}{\pi_t(a^\prime_t|H_t)} \big\vert H_t \right]=1$ for all $t$. We repeat this process $T$ times, starting from $t=T$ and proceeding down to $t=1$.
Likewise, for $\widehat{\psi}_{inc}$ we have $$\begin{aligned}
Var(\widehat{\psi}_{inc}) &= {\mathbb{E}}\left\{ \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)^2Y^2 \right\} - \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)Y \right] \right\}^2 \\
& \equiv \mathbb{V}_{inc.1} - \mathbb{V}_{inc.2}
\end{aligned}$$ For the first term $\mathbb{V}_{inc.1}$, observe that $$\begin{aligned}
& {\mathbb{E}}\left\{ \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)^2Y^2 \right\} \\
&= {\mathbb{E}}\left\{ \prod_{t=1}^{T-1} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)^2 {\mathbb{E}}\left[ \left( \frac{\delta A_{T} + 1-A_{T} }{\delta{\pi}_{T}(H_{T}) + 1-{\pi}_{T}(H_{T})} \right)^2Y^2 \Bigg\vert H_{T} \right] \right\} \\
&= {\mathbb{E}}\left\{ \prod_{t=1}^{T-1} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)^2 {\mathbb{E}}\left[ \frac{\delta^2Y^2 }{(\delta{\pi}_{T}(H_{T}) + 1-{\pi}_{T}(H_{T}))^2} \Bigg\vert H_{T}, A_{T}=1 \right]{\pi}_{T}(H_{T}) \right\} \\
& \quad \ + {\mathbb{E}}\left\{ \prod_{t=1}^{T-1} \left( \frac{\delta A_t + 1-A_t }{\delta{\pi}_t(H_t) + 1-{\pi}_t(H_t)} \right)^2 {\mathbb{E}}\left[ \frac{Y^2}{(\delta{\pi}_{T}(H_{T}) + 1-{\pi}_{T}(H_{T}))^2} \Bigg\vert H_{T}, A_{T}=0 \right]\left(1-{\pi}_{T}(H_{T})\right) \right\}
\end{aligned}$$ where we apply the law of total expectation in the first equality and the law of total probability in the second.
After repeating the same process $T-1$ more times, for $t=T-1, \ldots, 1$, we obtain $2^{T}$ terms, each of which corresponds to a distinct treatment sequence $\overline{A}_{T}=\overline{a}_{T}$. Hence, we eventually have $$\begin{aligned}
\mathbb{V}_{inc.1}
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} \int_{\mathcal{X}} {\mathbb{E}}\left[Y^2 \mid H_{T}, A_{T}=a_{T} \right] \prod_{t=1}^{T} \frac{\mathbbm{1}\left({a}_{t}=1\right)\delta^2{\pi}_t(H_t) + \mathbbm{1}\left({a}_{t}=0\right)\{1-{\pi}_t(H_t)\}}{(\delta{\pi}_{t}(H_t) + 1-{\pi}_{t}(H_t))^2} \\
& \qquad \qquad \quad \times \prod_{t=1}^{T} d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a}_{t-1}).
\end{aligned}$$
Recall that we assume $\pi_t(H_t)=p$ for all $t$ as stated in Theorem \[thm:inf-time-horizon\]. Hence we can write $\pi_t(a_t \mid H_t)$ as $\pi_t(a_t)= \mathbbm{1}\left({a}_{t}=1\right)p + \mathbbm{1}\left({a}_{t}=0\right)\{1-p\}$.
Next we notice that to compute the upper bound of $RE(\widehat{\psi}_{c.ipw}(\overline{a}_{T}), \widehat{\psi}_{inc}) = \frac{ \mathbb{V}_{inc.1} - \mathbb{V}_{inc.2}}{\mathbb{V}_{c.ipw.1}(\overline{a}_{T}) - \mathbb{V}_{c.ipw.2}(\overline{a}_{T})} $ for the always-treated unit (i.e. $\overline{a}_{T}=\overline{\bm{1}}$), it suffices to compute the quantity $$\frac{ \mathbb{V}_{inc.1}}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) - \mathbb{V}_{c.ipw.2}(\overline{\bm{1}})}$$ since $0 < \mathbb{V}_{inc.2} < \mathbb{V}_{inc.1}$ by Jensen’s inequality.
On the other hand, we have $$\begin{aligned}
\mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) - \mathbb{V}_{c.ipw.2}(\overline{\bm{1}})&= \int_{\mathcal{X}} {\mathbb{E}}\left[Y^2 \mid \overline{X}_{T}, \overline{A}_{T}=\overline{a^\prime}_{T} \right] \prod_{t=1}^{T} \frac{d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a^\prime}_{t-1})}{p} - \left({\mathbb{E}}[Y^{\overline{\bm{1}}}]\right)^2 \\
& = \left( \frac{1}{p} \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right] - \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2
\end{aligned}$$ and under the given boundedness assumption the ratio of the second term to the first term becomes (at least exponentially) negligible as $T$ increases. Hence we can write $$\begin{aligned}
\frac{1}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) - \mathbb{V}_{c.ipw.2}(\overline{\bm{1}})} \leq \frac{1}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})}\left( 1+ \frac{c \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}{\left( 1/p \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right)
\end{aligned}$$ for some constant $c$ such that $\frac{1}{1-\mathbb{V}_{c.ipw.2}(\overline{\bm{1}})/\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})}=\frac{1}{1-p^T{\left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}\big/{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}} \leq {c}$. Note that since $T$ is arbitrarily large in our setting, $c$ can be taken to be essentially any constant greater than one.
Putting the above ingredients together, for sufficiently large $T$ it follows that $$\begin{aligned}
RE(\widehat{\psi}_{c.ipw}(\overline{\bm{1}}), \widehat{\psi}_{inc}) & \leq \frac{\mathbb{V}_{inc.1}}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})} \left( 1+ \frac{c \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}{\left( 1/p \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right),
\end{aligned}$$ where we have $$\begin{aligned}
\frac{\mathbb{V}_{inc.1}}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})} &= \frac{ w(\overline{\bm{1}}) \mathbb{V}_{c.ipw.1}(\overline{\bm{1}} ) + \sum_{\overline{a}_{T} \neq \overline{\bm{1}}} w(\overline{a}_{T}; \delta, p) \mathbb{V}_{c.ipw.1}(\overline{a}_{T} )}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})} \\
&= w(\overline{\bm{1}}) + \sum_{\overline{a}_{T} \neq \overline{\bm{1}}} w(\overline{a}_{T}; \delta, p) \prod_{t=1}^{T} \left(\frac{p}{\pi_t(a_t)} \frac{{\mathbb{E}}\left[\left(Y^2 \right)^{\overline{a}_{T}} \right]}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right) \\
&\leq \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \left\{ w(\overline{\bm{1}}) + \sum_{\overline{a}_{T} \neq \overline{\bm{1}}} \left[ \prod_{t=1}^{T} \frac{\mathbbm{1}\left({a}_{t}=1\right)\delta^2 p^2 + \mathbbm{1}\left({a}_{t}=0\right)(1- p)p}{(\delta p + 1-p)^2} \right] \right\} \\
&= \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}\left\{ \frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} \right\}^{T}
\end{aligned}$$ where the first equality follows by the fact that $ \mathbb{V}_{inc.1} = \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) \mathbb{V}_{c.ipw.1}(\overline{a}_{T} )$, derived in the proof of the first part, the second equality by the fact that $\mathbb{V}_{c.ipw.1}(\overline{a}_{T} ) = \prod_{t=1}^{T} \frac{1}{\pi_t(a_t)} {\mathbb{E}}\left[\left(Y^2 \right)^{\overline{a}_{T}} \right] $, the first inequality by the definition of $w(\overline{a}_{T}; \delta, p)$ and the given boundedness assumption, and the last equality by the binomial theorem. Therefore we obtain the upper bound as $$RE(\widehat{\psi}_{c.ipw}(\overline{\bm{1}}), \widehat{\psi}_{inc}) \leq \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}\left\{ \frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} \right\}^{T} \left( 1+ \frac{c \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}{\left( 1/p \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right).$$
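To get a feel for the bound, the per-timestep contraction factor $\{\delta^2p^2 + p(1-p)\}/(\delta p + 1-p)^2$ can be evaluated directly: for any $\delta > 1$ it is strictly below one, so the upper bound on the relative efficiency decays geometrically in $T$. The helper below is purely illustrative, not part of any accompanying code.

```python
def re_decay_factor(delta, p):
    """Per-timestep factor in the upper bound on RE(psi_c.ipw(1-bar), psi_inc).

    The bound scales like re_decay_factor(delta, p) ** T, so any value below 1
    means the incremental estimator wins exponentially fast in T.
    """
    return (delta ** 2 * p ** 2 + p * (1 - p)) / (delta * p + 1 - p) ** 2
```

For example, at $\delta = 2$ and $p = 0.5$ the factor is $5/9$, so the bound shrinks by nearly half at every additional timepoint.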
Next for the lower bound, first we note that $$\begin{aligned}
\mathbb{V}_{inc.2} &= \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta p + 1-p} \right)Y \right] \right\}^2 \\
&= \Bigg\{ \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} \int_{\mathcal{X}} {\mathbb{E}}\left[Y \mid H_{T}, A_{T}=a_{T} \right] \left(\prod_{t=1}^{T} \frac{\mathbbm{1}\left({a}_{t}=1\right)\delta p + \mathbbm{1}\left({a}_{t}=0\right)(1-p)}{\delta p + 1-p}\right) \\
& \qquad \qquad \qquad \times \prod_{t=1}^{T} d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a}_{t-1}) \Bigg\}^2 \\
& \leq b_u^2 \left[ \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}}\prod_{t=1}^{T} \left( \frac{\mathbbm{1}\left({a}_{t}=1\right)\delta p + \mathbbm{1}\left({a}_{t}=0\right)(1-p)}{\delta p + 1-p}\right) \right]^2 \\
&= b_u^2 \left( \frac{\delta p + 1-p}{\delta p + 1-p} \right)^{2T} = b_u^2
\end{aligned}$$ where the first equality follows by definition, the second equality by exactly the same process used to find the expression for $\mathbb{V}_{inc.1}$, the first inequality by the boundedness assumption, and the third equality by the binomial theorem.
However, we already know that $$\begin{aligned}
\mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) - \mathbb{V}_{c.ipw.2}(\overline{\bm{1}})\leq \mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) = \left( \frac{1}{p} \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right].
\end{aligned}$$
Hence putting these together we conclude $$\begin{aligned}
RE(\widehat{\psi}_{c.ipw}(\overline{\bm{1}}), \widehat{\psi}_{inc}) &= \frac{ \mathbb{V}_{inc.1} - \mathbb{V}_{inc.2}}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}}) - \mathbb{V}_{c.ipw.2}(\overline{\bm{1}})} \\
& \geq \frac{ \mathbb{V}_{inc.1} - b_u^2}{\mathbb{V}_{c.ipw.1}(\overline{\bm{1}})} \\
&= \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}\left\{ \frac{\delta^2p^2 + p(1-p)}{(\delta p + 1 - p)^2} \right\}^{T} - \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} p^{T}.
\end{aligned}$$
At this point, we have obtained upper and lower bounds for $RE(\widehat{\psi}_{c.ipw}(\overline{\bm{1}}), \widehat{\psi}_{inc})$, which yields the result of part $ii)$ with $C_{T} = \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}$.
The proof for the case of $\overline{a^\prime}_{T}=\overline{\bm{0}}$ (*never-treated unit*) follows almost the same steps as the case of $\overline{a^\prime}_{T}=\overline{\bm{1}}$, except for the rearrangement of terms due to replacing $\left(\frac{1}{p}\right)^{T}$ by $\left(\frac{1}{1-p}\right)^{T}$ and so on. In fact, due to the generality of our proof structure, the exact same logic used for $\widehat{\psi}_{c.ipw}(\overline{\bm{1}})$ also applies to $\widehat{\psi}_{c.ipw}(\overline{\bm{0}})$ (and to $\widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})$ for every $\overline{a^\prime}_{T} \in \overline{\mathcal{A}_{T}}$). We present the result below without proof.
$$\begin{aligned}
C_{T}^\prime\left[ \left\{ \frac{\delta^2p(1-p) + (1-p)^2}{(\delta p + 1 - p)^2} \right\}^{T} - (1-p)^{T} \right] & \leq RE(\widehat{\psi}_{c.ipw}(\overline{\bm{0}}), \widehat{\psi}_{inc}) \\
& \leq C_{T}^\prime \zeta^\prime(T;p) \left\{ \frac{\delta^2p(1-p) + (1-p)^2}{(\delta p + 1 - p)^2} \right\}^{T}
\end{aligned}$$
where we define $C_{T}^\prime = \frac{b_u^2}{{\mathbb{E}}\left[\left(Y^2 \right)^{\overline{\bm{0}}} \right]}$ and $\zeta^\prime(T;p) = \left( 1+ \frac{c \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2}{\left( 1/(1-p) \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]} \right)$.\
Proof of Corollary \[cor:inf-time-horizon\] {#proof:cor-inf-time}
-------------------------------------------
We now provide the following Lemma \[lem:inf-time-decomp\], which is the key to proving Corollary \[cor:inf-time-horizon\].
\[lem:inf-time-decomp\] Assume that $\pi_t(H_t)=p$ for all $1\leq t\leq T$ for $0<p<1$. Then we have the following variance decomposition: $$\begin{aligned}
& Var(\widehat{\psi}_{inc})
= Var \left( \sum_{ \overline{a}_{T} \in \overline{\mathcal{A}}_{T} } \sqrt{w(\overline{a}_{T}; \delta, p)} \widehat{\psi}_{c.ipw}(\overline{a}_{T}) \right)
\end{aligned}$$ where for all $\overline{a}_{T} \in \overline{\mathcal{A}}_{T}$ the weight $w$ is defined by $$w(\overline{a}_{T}; \delta, p) = \prod_{t=1}^{T} \frac{\pi_t(a_t) \left\{ \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)(1-p) \right\} }{(\delta p + 1-p)^2}.$$
From the last display for $\mathbb{V}_{inc.1}$, we have that $$\begin{aligned}
& \mathbb{V}_{inc.1} \\
& = \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} \int_{\mathcal{X}} {\mathbb{E}}\left[Y^2 \mid H_{T}, A_{T}=a_{T} \right] \prod_{t=1}^{T} \frac{\pi_t(a_t) \left( \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)\{1-p\} \right) }{(\delta p + 1-p)^2} \\
& \qquad \qquad \quad \times \prod_{t=1}^{T} \frac{d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a}_{t-1})}{\pi_t(a_t)} \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) \int_{\mathcal{X}} {\mathbb{E}}\left[Y^2 \mid H_{T}, A_{T}=a_{T} \right]
\prod_{t=1}^{T} \frac{d{\mathbb{P}}(X_t \mid \overline{X}_{t-1}, \overline{A}_{t-1}=\overline{a}_{t-1})}{\pi_t(a_t)} \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) \mathbb{V}_{c.ipw.1}(\overline{a}_{T} )
\end{aligned}$$ where we let the weight $w(\overline{a}_{T}; \delta, p)$ denote the product term $\prod_{t=1}^{T} \frac{\pi_t(a_t) \left( \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)\{1-p\} \right) }{(\delta p + 1-p)^2}$.
Next, we observe that $$\begin{aligned}
\mathbb{V}_{inc.2} &= \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{\delta A_t + 1-A_t }{\delta p + 1-p} \right)Y \right] \right\}^2 \\
&= \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{\delta \mathbbm{1}\left({A}_{t}=1\right) }{\delta p + 1-p} \right)Y + \ \cdots \ + \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}\left({A}_{t}=0 \right) }{\delta p + 1-p} \right)Y \right] \right\}^2 \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} v^2_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) + \sum_{\overline{a^\prime}_{T} \neq \overline{a}_{T} } v_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) v_{inc.2}(\overline{A}_{T}; \overline{a^\prime}_{T})
\end{aligned}$$ where we have decomposed $\mathbb{V}_{inc.2}$ into $2^{T} \times 2^{T}$ terms by defining $v_{inc.2}(\overline{A}_{T}; \overline{a}_{T})$ by $$v_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) \equiv {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{\delta \mathbbm{1}(a_t=1) + \mathbbm{1}(a_t=0) }{\delta p + 1-p} \right)\mathbbm{1}(A_t=a_t) \cdot Y \right].$$ Then for fixed $\overline{a}_{T}$ it is straightforward to see that $$\begin{aligned}
\frac{v^2_{inc.2}(\overline{A}_{T}; \overline{a}_{T})}{w(\overline{a}_{T}; \delta, p)} &= \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{ \left\{ \delta \mathbbm{1}(a_t=1) + \mathbbm{1}(a_t=0) \right\} \mathbbm{1}(A_t=a_t) }{ \sqrt{\pi(a_t) \left( \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)\{1-p\} \right) } } \right)Y \right] \right\}^2 \\
&= \left\{ {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}(A_t=a_t) }{ \pi(a_t) } \right)Y \right] \right\}^2 = \mathbb{V}_{c.ipw.2}(\overline{a}_{T})
\end{aligned}$$
Now putting this together, we obtain $$\begin{aligned}
& \mathbb{V}_{inc.1} - \mathbb{V}_{inc.2} \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) \left\{ \mathbb{V}_{c.ipw.1}(\overline{a}_{T} ) - \mathbb{V}_{c.ipw.2}(\overline{a}_{T}) \right\} - \sum_{\overline{a^\prime}_{T} \neq \overline{a}_{T} } v_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) v_{inc.2}(\overline{A}_{T}; \overline{a^\prime}_{T}) \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) Var\left(\widehat{\psi}_{c.ipw}(\overline{a}_{T})\right) - \sum_{\overline{a^\prime}_{T} \neq \overline{a}_{T} } v_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) v_{inc.2}(\overline{A}_{T}; \overline{a^\prime}_{T}).
\end{aligned}$$
Next, from the second term in the last display, notice that $$\begin{aligned}
\frac{v_{inc.2}(\overline{A}_{T}; \overline{a}_{T}) v_{inc.2}(\overline{A}_{T}; \overline{a^\prime}_{T})}{\sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)}} &= {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}(A_t=a_t) }{ \pi(a_t) } \right)Y \right] {\mathbb{E}}\left[ \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}(A_t=a^\prime_t) }{ \pi(a^\prime_t) } \right)Y \right] \\
&= -Cov(\widehat{\psi}_{c.ipw}(\overline{a}_{T}), \widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T}))
\end{aligned}$$ where the last equality follows by the fact that $${\mathbb{E}}\left\{ \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}(A_t=a_t) }{ \pi(a_t) } \right) \prod_{t=1}^{T} \left( \frac{ \mathbbm{1}(A_t=a^\prime_t) }{ \pi(a^\prime_t) } \right) Y^2 \right\} = 0 \quad \text{for all } \overline{a^\prime}_{T} \neq \overline{a}_{T}.$$
Hence finally we conclude that $$\begin{aligned}
& Var(\widehat{\psi}_{inc}) = \mathbb{V}_{inc.1} - \mathbb{V}_{inc.2} \\
&= \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) Var\left(\widehat{\psi}_{c.ipw}(\overline{a}_{T})\right) + \sum_{\substack{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} \\ \overline{a^\prime}_{T} \neq \overline{a}_{T} } } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)}Cov(\widehat{\psi}_{c.ipw}(\overline{a}_{T}), \widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})) \\
&= \sum_{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)}Cov(\widehat{\psi}_{c.ipw}(\overline{a}_{T}), \widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})).
\end{aligned}$$
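The decomposition can also be sanity-checked numerically: under the lemma's constant-propensity assumption, $\sqrt{w(\overline{a}_T;\delta,p)}$ times the classical IPW weight for $\overline{a}_T$, summed over all $2^T$ sequences, collapses pointwise to the incremental weight, so the two random variables in the lemma coincide draw by draw. The sketch below verifies this algebraic identity; it is illustrative only.

```python
import numpy as np
from itertools import product

def inc_weight(A_seq, delta, p):
    """Incremental weight: prod_t (delta*A_t + 1 - A_t) / (delta*p + 1 - p)."""
    a = np.asarray(A_seq, dtype=float)
    return float(np.prod((delta * a + 1 - a) / (delta * p + 1 - p)))

def sqrt_weighted_cipw_sum(A_seq, delta, p):
    """Sum over all treatment sequences a of sqrt(w(a; delta, p)) times the
    classical IPW weight prod_t 1(A_t = a_t) / pi(a_t), evaluated at A_seq."""
    T = len(A_seq)
    total = 0.0
    for a in product([0, 1], repeat=T):
        # per-step weight: pi(a_t) * (delta^2 p if a_t=1 else 1-p) / (delta p + 1 - p)^2
        w = np.prod([(p if at else 1 - p) * (delta ** 2 * p if at else 1 - p)
                     / (delta * p + 1 - p) ** 2 for at in a])
        ipw = np.prod([(1.0 if At == at else 0.0) / (p if at else 1 - p)
                       for At, at in zip(A_seq, a)])
        total += np.sqrt(w) * ipw
    return total
```

Only the sequence $\overline{a}_T$ matching the observed $\overline{A}_T$ survives the indicator product, and its $\sqrt{w}$ factor exactly cancels the classical weight into the incremental one.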
In Lemma \[lem:inf-time-decomp\], note that the weight $w(\overline{a}_{T}; \delta, p)$ decays to zero exponentially and monotonically for every $\overline{a}_{T} \in \overline{\mathcal{A}}_{T}$.
Now we show that there always exists $T_{min}$ such that $Var(\widehat{\psi}_{inc}) < Var(\widehat{\psi}_{c.ipw}(\overline{\bm{1}}))$ for all $T \geq T_{min}$. Let $\overline{\bm{1}} = [1,...,1]$. From Lemma \[lem:inf-time-decomp\] it follows that $$\begin{aligned}
& Var(\widehat{\psi}_{inc}) - Var(\widehat{\psi}_{c.ipw}(\overline{\bm{1}})) \\
& = \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p) Var\left(\widehat{\psi}_{c.ipw}(\overline{a}_{T})\right) - Var(\widehat{\psi}_{c.ipw}(\overline{\bm{1}})) \\
& \quad + \sum_{\substack{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} \\ \overline{a^\prime}_{T} \neq \overline{a}_{T} } } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)}Cov(\widehat{\psi}_{c.ipw}(\overline{a}_{T}), \widehat{\psi}_{c.ipw}(\overline{a^\prime}_{T})) \\
& = \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} \prod_{t=1}^{T} \frac{\pi_t(a_t)\left\{ \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)(1-p)\right\}}{(\delta p + 1-p)^2} \left( \prod_{t=1}^{T} \frac{1}{\pi_t(a_t)} {\mathbb{E}}\left[\left(Y^2 \right)^{\overline{a}_{T}} \right] - \left({\mathbb{E}}\left[Y^{\overline{a}_{T}}\right]\right)^2 \right) \\
& \quad - \left( \frac{1}{p} \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right] + \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2 - \sum_{\substack{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} \\ \overline{a^\prime}_{T} \neq \overline{a}_{T} } } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)} {\mathbb{E}}\Big[Y^{\overline{a}_T}\Big] {\mathbb{E}}\Big[Y^{\overline{a^\prime}_T}\Big] \\
& \leq b^2_u \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} \left(\prod_{t=1}^{T} \frac{ \mathbbm{1}\left({a}_{t}=1\right)\delta^2p + \mathbbm{1}\left({a}_{t}=0\right)(1-p)}{(\delta p + 1-p)^2} \right) - \left( \frac{1}{p} \right)^{T} {\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right] + \left({\mathbb{E}}\left[Y^{\overline{\bm{1}}}\right]\right)^2 \\
& \quad - \sum_{\substack{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} \\ \overline{a^\prime}_{T} \neq \overline{a}_{T} } } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)} {\mathbb{E}}\Big[Y^{\overline{a}_T}\Big] {\mathbb{E}}\Big[Y^{\overline{a^\prime}_T}\Big] + \sum_{\overline{a}_{T} \in \overline{\mathcal{A}}_{T}} w(\overline{a}_{T}; \delta, p)\left({\mathbb{E}}\left[Y^{\overline{a}_{T}}\right]\right)^2 \\
& = b^2_u \left\{ \left[\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2}\right]^T - \left(\frac{c^{1/T}_{{\bm{1}}}}{p}\right)^T \right\} - \sum_{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)} {\mathbb{E}}\Big[Y^{\overline{a}_T}\Big] {\mathbb{E}}\Big[Y^{\overline{a^\prime}_T}\Big] + \left({\mathbb{E}}\left[Y^{{\bm{1}}}\right]\right)^2 \\
& = b^2_u \left\{\left[\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2}\right]^T - \left(\frac{c^{1/T}_{{\bm{1}}}}{p}\right)^T - A(\delta,p) + B \right\}
\end{aligned}$$ where $c_{\bm{1}}=\frac{{\mathbb{E}}\left[\left(Y^{\overline{\bm{1}}} \right)^2 \right]}{b^2_u}$, $A(\delta,p)=\sum_{ \overline{a}_{T}, \overline{a^\prime}_{T} \in \overline{\mathcal{A}}_{T} } \sqrt{w(\overline{a}_{T}; \delta, p)w(\overline{a^\prime}_{T}; \delta, p)} \frac{{\mathbb{E}}\left[Y^{\overline{a}_T}\right]}{b_u}\frac{{\mathbb{E}}\big[Y^{\overline{a^\prime}_T}\big]}{b_u}$, and $B=\frac{\left({\mathbb{E}}\big[Y^{{\overline{\bm{1}}}}\big]\right)^2}{b^2_u}$. The inequality follows from the boundedness condition. Note that $c^{1/T}_{\bm{1}} \rightarrow 1$ quickly and monotonically as $T\rightarrow \infty$. Also note that $|A(\delta,p)|\leq1$ and $0 \leq B \leq 1$.
For $\delta > 1$, $\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2} < \frac{1}{p}$. Hence, based on the above observations, it follows that for sufficiently large $T$ the last display is strictly less than zero. Consequently we conclude $ Var(\widehat{\psi}_{inc}) - Var(\widehat{\psi}_{c.ipw}(\overline{\bm{1}})) < 0$ for all $T\geq T_{min}$, which is the result of part $i)$. Likewise, we reach the same conclusion for $\overline{\bm{0}}_{T} = [0,...,0]$: $ Var(\widehat{\psi}_{inc}) - Var(\widehat{\psi}_{c.ipw}(\overline{\bm{0}}_{T})) < 0$.
The value of $T_{min}$ is determined by $\delta$, $p$, and the distribution of the counterfactual outcome $Y^{\overline{a}_T}$. One rough upper bound on $T_{min}$ is $$\min \left\{T: \left[\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2}\right]^T - \frac{c_{\bm{1}}}{p^T} + 2 < 0\right\}$$ which can be obtained from the last display above and is always finite since $c_{\bm{1}}>0$ by the assumption given in the theorem. $T_{min}$ should not be very large for a moderately large value of $\delta$, unless $c_{\bm{1}}$ is unreasonably small, since the difference $\frac{1}{p^T} - \left[\frac{\delta^2p+1-p}{(\delta p + 1 - p)^2}\right]^T$ also grows exponentially.\
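The rough bound above is easy to evaluate numerically. The sketch below searches for the smallest such $T$; the inputs $\delta$, $p$, and $c_{\bm{1}}$ are user-supplied, and the function is illustrative rather than part of the paper.

```python
def t_min_upper_bound(delta, p, c1, t_max=10_000):
    """Smallest T with r**T - c1 / p**T + 2 < 0, where
    r = (delta**2 * p + 1 - p) / (delta * p + 1 - p) ** 2.

    This evaluates the rough upper bound on T_min from the text; c1 stands
    for E[(Y^{1-bar})^2] / b_u^2.  Returns None if no such T <= t_max exists.
    """
    r = (delta ** 2 * p + 1 - p) / (delta * p + 1 - p) ** 2
    for T in range(1, t_max + 1):
        if r ** T - c1 / p ** T + 2 < 0:
            return T
    return None
```

Since $r < 1/p$ whenever $\delta > 1$, the $c_1/p^T$ term dominates and the search always terminates for reasonable inputs; e.g. for $\delta = 2$, $p = 0.5$, $c_1 = 0.5$ the bound is hit already at $T = 3$.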
Proof of Theorem \[thm:convergence\] {#proof:thm-6-1}
------------------------------------
First we need to define the following notation: $$\Vert f \Vert_{\mathcal{D},\mathcal{T}} \equiv \sup_{\delta\in\mathcal{D},t\in \mathcal{T}}\vert f(\delta,t) \vert$$ $$\widehat{\Psi}_n(\delta, t) \equiv \sqrt{n}\{\widehat{\psi}_t (\delta) - \psi_t (\delta) \} / \widehat{\sigma}(\delta, t)$$ $$\widetilde{\Psi}_n(\delta, t) \equiv \sqrt{n}\{\widehat{\psi}_t (\delta) - \psi_t (\delta) \} / \sigma(\delta, t)$$ $$\Psi_n(\delta; t) \equiv \mathbb{G}_{n}\{\widetilde{\varphi}(Z;\bm{\eta},\delta, t) \}$$ where we let $\mathcal{T} = \{1,...,T\}$, let $ \mathbb{G}_{n}$ denote the empirical process on the full sample as usual, let $\widetilde{\varphi}(Z;\bm{\eta},\delta, t) = \{\varphi(Z;\bm{\eta},\delta, t) - \psi_t(\delta)\} / \sigma(\delta, t)$, and let $\mathbb{G}$ be a mean-zero Gaussian process with covariance ${\mathbb{E}}[\mathbb{G}(\delta_1; t_1)\mathbb{G}(\delta_2; t_2)]={\mathbb{E}}\left[\widetilde{\varphi}(Z;\bm{\eta},\delta_1, t_1) \widetilde{\varphi}(Z;\bm{\eta},\delta_2, t_2)\right]$ as defined in Theorem \[thm:convergence\] in the main text.
The proof consists of two parts: in the first part we show $\Psi_n(\cdot) \leadsto \mathbb{G}(\cdot)$ in $l^{\infty}(\mathcal{D},\mathcal{T})$, and in the second we show $\Vert \widehat{\Psi}_n-\Psi_n \Vert_{\mathcal{D},\mathcal{T}} = o_{\mathbb{P}}(1)$.
**Part 1.** The proof of the first statement follows immediately from the proof of Theorem 3 in @Kennedy17. He showed that the function class $\mathcal{F}_{\bar{\bm{\eta}}}=\{\varphi(\cdot; \bar{\bm{\eta}},\delta): \delta \in \mathcal{D} \}$ is Lipschitz and thus has a finite bracketing integral for any fixed set of nuisance functions, and then applied Theorem 2.5.6 in @van1996weak. In our case, the function class $\mathcal{F}_{\bar{\bm{\eta}}}=\{\varphi(\cdot; \bar{\bm{\eta}},\delta, t): \delta \in \mathcal{D}, t\leq T \}$ is still Lipschitz, since for all $t \in \{1,..., T\}$ we have $$\left| \frac{\partial }{\partial\delta}\left[ \frac{ \{a_t - \pi_t(h_t)\}(1-\delta)}{\delta a_t + 1-a_t} \right] \right| \leq \frac{1}{\delta_l} + \frac{1}{4\delta_l^2}$$ $$\left| \frac{\partial }{\partial\delta}\left[ \frac{m_t(h_t,1,1)\delta\pi_t(h_t)+m_t(h_t,0,1)\{ 1-\pi_t(h_t) \}}{\delta\pi_t(h_t) + 1 - \pi_t(h_t)} \cdot \omega_t(h_t, a_t) \right] \right| \leq \frac{2C}{\delta_l^2}$$ $$\left| \frac{\partial }{\partial\delta}\left[ \frac{\delta a_t + 1-a_t}{\delta\pi_t(h_t) + 1-\pi_t(h_t)}\cdot\frac{1}{\omega_t( h_t, a_t)} \right] \right| \leq \frac{1}{c_\omega\delta_l^2 }$$ where we use assumptions 1) and 2) in the Theorem, and the identification assumption (\[assumption:A3\]) that there exists a constant $c_\omega$ such that $0<c_\omega \leq \omega_t( h_t, a_t) \leq 1$ and thus $\frac{1}{\omega_t( h_t, a_t)} \leq \frac{1}{c_\omega}$ a.e. \[${\mathbb{P}}$\]. Therefore, every $\varphi(\cdot; \bar{\bm{\eta}},\delta, t)$ is a finite sum of products of Lipschitz functions on the bounded set $\mathcal{D}$, and we conclude that $\mathcal{F}_{\bar{\bm{\eta}}}$ is Lipschitz.
Hence our function class still has a finite bracketing integral for fixed $\bar{\bm{\eta}}$ and $t$, which establishes the first statement.
**Part 2.** Let $N = n/K$ be the sample size in any group $k = 1, ..., K$, and denote the empirical process over the group-$k$ units by $\mathbb{G}^k_n=\sqrt[]{N}({\mathbb{P}}^k_n - {\mathbb{P}})$. From the result of Part 1 and the proof of Theorem 3 in @Kennedy17 we have $$\begin{aligned}
& \widetilde{\Psi}_n(\delta; t) - \Psi_n(\delta; t) \\
& = \frac{\sqrt[]{n}}{K\sigma(\delta; t)}\sum_{k=1}^K \left[ \frac{1}{\sqrt[]{N}}\mathbb{G}^k_n\left\{\varphi(Z;\hat{\bm{\eta}}_{-k},\delta, t) - \varphi(Z;\bm{\eta},\delta, t) \right\} + {\mathbb{P}}\left\{ \varphi(Z;\hat{\bm{\eta}}_{-k},\delta, t) - \varphi(Z;\bm{\eta},\delta, t) \right\} \right] \\
& \equiv B_{n,1}(\delta;t) + B_{n,2}(\delta;t).
\end{aligned}$$
Now we analyze the above two pieces $B_{n,1}(\delta;t)$ and $B_{n,2}(\delta;t)$. Showing $B_{n,1}(\delta;t)=o_{\mathbb{P}}(1)$ follows exactly the same steps as in @Kennedy17; the analysis of $B_{n,2}(\delta;t)$, however, differs substantially.
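The fold-wise structure of this decomposition mirrors generic $K$-fold cross-fitting, which can be sketched as follows. This is an illustrative schematic only (the toy influence function below is not the estimator of this paper):

```python
import numpy as np

def cross_fit_onestep(Z, fit_nuisance, phi, K=2, seed=0):
    """Schematic K-fold cross-fitting: for each fold k, nuisances eta_{-k} are
    fit on the other folds and the influence function phi is averaged over
    fold k; the estimate averages the K fold-wise means."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(Z)) % K            # equal-sized folds
    return np.mean([phi(Z[folds == k], fit_nuisance(Z[folds != k])).mean()
                    for k in range(K)])

# Toy target: the mean of Z. The "nuisance" is a plug-in mean and phi is the
# one-step correction eta + (z - eta), so the bias from eta cancels exactly.
Z = np.random.default_rng(42).normal(loc=1.0, scale=2.0, size=1000)
est = cross_fit_onestep(Z, fit_nuisance=np.mean,
                        phi=lambda z, eta: eta + (z - eta))
```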
To analyze $B_{n,2}(\delta;t)$, we follow the same notation as @Kennedy17. First let $\psi({\mathbb{P}};Q)$ denote the mean outcome under intervention $Q$ for a population corresponding to observed data distribution ${\mathbb{P}}$. Next, let $\varphi^*(z;{\bm{\eta}, t})$ denote its *centered* efficient influence function when $Q$ does not depend on ${\mathbb{P}}$, as given in Lemma \[lem:eif\_1\], and let $\zeta^*(z;{\bm{\eta}}, t)$ denote the contribution to the efficient influence function $\varphi^*(z;{\bm{\eta}, t})$ due to estimating $Q$ when it depends on ${\mathbb{P}}$, as given in Lemma \[lem:eif\_2\]. Now by definition, $$\varphi(Z;{\bm{\eta}, \delta, t}) = \varphi^*(Z;{\bm{\eta}, t}) + \psi({\mathbb{P}};Q) + \zeta^*(Z;{\bm{\eta}}, t),$$ and thereby after some rearrangement we obtain $$\begin{aligned}
\frac{1}{\sqrt[]{n}}B_{n,2}(\delta;t) & = {\mathbb{P}}\left\{ \varphi(Z;\overline{\bm{\eta}},\delta, t) - \varphi(Z;{\bm{\eta}},\delta, t) \right\} \\
& =\int \varphi^*(z;\overline{\bm{\eta}},t) d{\mathbb{P}}(z) + \psi(\overline{{\mathbb{P}}};\overline{Q}) - \psi({{\mathbb{P}}};\overline{Q}) \\
& \quad + \int \zeta^*(z;\overline{\bm{\eta}},t) d{\mathbb{P}}(z) + \psi({{\mathbb{P}}};\overline{Q}) - \psi({{\mathbb{P}}};Q).
\end{aligned}$$ Although one may identify $\overline{\bm{\eta}}$ with $\widehat{\bm{\eta}}_{-k}$ in the above equation, $\overline{\bm{\eta}}$ can be any set of nuisance functions associated with a new distribution $\overline{{\mathbb{P}}}$ and intervention $\overline{Q}$.
Hence, by analyzing the second-order remainder terms of the von Mises expansion for the efficient influence functions given in Lemmas \[lem:eif\_1\] and \[lem:eif\_2\], we can evaluate the convergence rate of $B_{n,2}(\delta;t)$. The following two lemmas analyze those second-order remainder terms in the presence of a censoring process.
\[lem:remainder\_1\] Let $\psi({{\mathbb{P}}};Q)$ be the mean outcome under intervention $Q$ for a population corresponding to observed data distribution ${\mathbb{P}}$, and let $\varphi^*(z;{\bm{\eta}},t)$ denote its efficient influence function when $Q$ does not depend on ${\mathbb{P}}$ for given $t$, as given in Lemma \[lem:eif\_1\]. For another data distribution $\overline{{\mathbb{P}}}$, let $\overline{\bm{\eta}}$ denote the corresponding nuisance functions. Then we have the von Mises-type expansion $$\begin{aligned}
\psi(\overline{{\mathbb{P}}};Q) - \psi({{\mathbb{P}}};Q) &= \int \varphi^*(z;\overline{\bm{\eta}},t) d{\mathbb{P}}(z) \\
& +\sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\pi_s - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_s - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right)
\end{aligned}$$ where we define $$\overline{m}_t = \int \overline{m}_{t+1}dQ_{t+1}d\overline{{\mathbb{P}}}_{t+1}, \qquad {m}^*_t = \int \overline{m}_{t+1}dQ_{t+1}d{{\mathbb{P}}}_{t+1},$$ $$dQ_t = dQ_t(A_t \mid H_t), \qquad d\pi_t = d{\mathbb{P}}(A_t \mid H_t), \qquad d{\mathbb{P}}_t = d{\mathbb{P}}(X_t \mid H_{t-1}, A_{t-1}),$$ $$d\omega_s=d{\mathbb{P}}(R_{s+1}=1 \mid H_s, A_s,R_s=1), \qquad d\overline{\omega}_s=d\overline{{\mathbb{P}}}(R_{s+1}=1 \mid H_s, A_s,R_s=1).$$
From Lemma \[lem:eif\_1\], we have $$\begin{aligned}
{\mathbb{E}}\{ \varphi^*(Z;\overline{\bm{\eta}}) \} &= \sum_{t=0}^{T} {\mathbb{E}}\left\{ \left(\int \overline{m}_{t+1}dQ_{t+1} - \overline{m}_t \right) \mathbbm{1}(R_{t+1}=1) \prod_{s=0}^t\left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \right\} \\
&= \sum_{t=0}^{T} {\mathbb{E}}\left\{ {\mathbb{E}}\left[ \left(\int \overline{m}_{t+1}dQ_{t+1} - \overline{m}_t \right) \mathbbm{1}(R_{t+1}=1) \mathbbm{1}(R_{t}=1) \prod_{s=0}^t\left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \Bigg\vert H_t, A_t, R_t \right] \right\} \\
&= \sum_{t=0}^{T} {\mathbb{E}}\Bigg\{ {\mathbb{E}}\left[ \left( \int \overline{m}_{t+1}dQ_{t+1} - \overline{m}_t \right) \mathbbm{1}(R_{t}=1) \prod_{s=0}^t\left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \Bigg\vert H_t, A_t, R_t = 1, R_{t+1}=1 \right] \\
& \qquad \qquad \times d{\mathbb{P}}(R_{t+1}=1 \mid H_t, A_t, R_{t}=1) \Bigg\} \\
&= \sum_{t=0}^{T} {\mathbb{E}}\left\{ \left(\int \int \overline{m}_{t+1}dQ_{t+1}d{\mathbb{P}}_{t+1} - \overline{m}_t \right) \mathbbm{1}(R_{t}=1) d\omega_t \prod_{s=0}^t\left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \right\} \\
&= \sum_{t=0}^{T} {\mathbb{E}}\left\{ \left(m^*_t- \overline{m}_t \right) d\omega_t \mathbbm{1}(R_{t}=1) \prod_{s=0}^t\left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \right\} \\
&= \sum_{t=0}^{T} \int \left(m^*_t- \overline{m}_t \right) d\omega_t \prod_{s=0}^t \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s d\omega_{s-1} \right\}
\end{aligned}$$ where the first equality follows by the definition and linearity of expectation, the second by iterated expectation and the equivalence between $\mathbbm{1}(R_{t+1}=1)$ and $\mathbbm{1}(R_{t+1}=1, R_{t}=1)$ [^5], the third by the law of total probability on conditional expectation [^6], the fourth by the result of Lemma \[lem:identification\] (i.e. $d{\mathbb{P}}_{t+1} = d{\mathbb{P}}(X_{t+1} \mid H_t, A_t, R_{t+1}=1)$) and by the definition, and the fifth simply by definition. To obtain the last equality, we first apply iterated expectation conditioning on $(H_t, R_t)$, then another iterated expectation conditioning on $(H_{t-1}, A_{t-1}, R_{t-1})$ followed by the same steps as in the second, third, and fourth equalities, and repeat these steps for $t-2, \ldots, 1$.
From the last expression, we now have $$\begin{aligned}
\sum_{t=0}^{T} \int & \left(m^*_t- \overline{m}_t \right) \prod_{s=0}^t \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
&= \sum_{t=0}^{T} \int \left(m^*_t- \overline{m}_t \right) \frac{d\pi_{t}}{d\overline{\pi}_t} \frac{d\omega_{t}}{d\overline{\omega}_t} dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
&= \sum_{t=0}^{T} \int \left(m^*_t- \overline{m}_t \right) \left( \frac{d\pi_{t} - d\overline{\pi}_t}{d\overline{\pi}_t} \right) \frac{d\omega_{t}}{d\overline{\omega}_t} dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
& \quad + \sum_{t=0}^{T} \int \left(m^*_t- \overline{m}_t \right) \frac{d\omega_{t}}{d\overline{\omega}_t} dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
& = \sum_{t=1}^{T} \int \left(m^*_t- \overline{m}_t \right) \left( \frac{d\pi_{t} - d\overline{\pi}_t}{d\overline{\pi}_t} \right) \frac{d\omega_{t}}{d\overline{\omega}_t} dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
& \quad + \sum_{t=1}^{T} \int \left(m^*_t- \overline{m}_t \right) \left( \frac{d\omega_{t} - d\overline{\omega}_t}{d\overline{\omega}_t} \right) dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
& \quad + \sum_{t=1}^{T} \int \left(m^*_t- \overline{m}_t \right) dQ_td{\mathbb{P}}_t \prod_{s=0}^{t-1} \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} + \left(m^*_0- \overline{m}_0 \right)
\end{aligned}$$ where each step amounts to adding and subtracting the same term and rearranging. Note that we use the convention from earlier lemmas that all quantities with negative time indices, such as $dQ_{-1}$, are set to one. Repeating the above process $T$ times, we obtain the following identity. $$\begin{aligned}
\sum_{t=0}^{T} \int & \left(m^*_t- \overline{m}_t \right) \prod_{s=0}^t \left\{ \left(\frac{dQ_s}{d\overline{\pi}_s} \frac{d\omega_{s}}{d\overline{\omega}_s} \right) d\pi_s d{\mathbb{P}}_s \right\} \\
& = \sum_{t=1}^{T} \sum_{s=1}^{t} \int \left(m^*_t- \overline{m}_t \right) \left( \prod_{r=s}^{t} dQ_r d{\mathbb{P}}_r \right) \left( \frac{d\pi_{s} - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \frac{d\omega_{s}}{d\overline{\omega}_s} \prod_{r=1}^{s-1} \left\{ \left(\frac{dQ_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) d\pi_r d{\mathbb{P}}_r \right\} \\
& \quad + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \left(m^*_t- \overline{m}_t \right) \left( \prod_{r=s}^{t}dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_{s} - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left\{ \left(\frac{dQ_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) d\pi_r d{\mathbb{P}}_r \right\} \\
& \quad + \sum_{t=1}^{T} \int \left(m^*_t- \overline{m}_t \right) \left( \prod_{s=1}^{t} dQ_s d{\mathbb{P}}_s \right) + \left(m^*_0- \overline{m}_0 \right)
\end{aligned}$$ However, by the last part of Lemma 5 in @Kennedy17 we have $$\sum_{t=1}^{T} \int \left(m^*_t- \overline{m}_t \right) \left( \prod_{s=1}^{t} dQ_s d{\mathbb{P}}_s \right) = m_0 - m^*_0.$$ Putting all these together, after some rearranging we finally have $$\begin{aligned}
{\mathbb{E}}\{ \varphi^*(Z;\overline{\bm{\eta}}) \} &= m_0 -\overline{m}_0 \\
& +\sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\pi_s - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_s - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right)
\end{aligned}$$ which yields the formula stated in Lemma \[lem:remainder\_1\].
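The indicator identity $\mathbbm{1}(R_{t+1}=1) = \mathbbm{1}(R_{t+1}=1)\mathbbm{1}(R_{t}=1)$ used repeatedly in the proof above rests only on the monotonicity of the censoring process; a quick simulation check under an assumed monotone dropout mechanism (the 0.85 retention probability is an arbitrary placeholder):

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 10_000, 5
R = np.ones((n, T + 1), dtype=int)   # R_0 = 1: everyone starts uncensored
for t in range(1, T + 1):
    # Monotone dropout: once censored (R = 0), a unit never re-enters.
    R[:, t] = R[:, t - 1] * (rng.random(n) < 0.85)

# 1(R_{t+1}=1) and 1(R_{t+1}=1) * 1(R_t=1) agree unit-by-unit at every t:
for t in range(T):
    assert np.array_equal(R[:, t + 1], R[:, t + 1] * R[:, t])
```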
\[lem:remainder\_2\] Let $\zeta^*(z;\overline{\bm{\eta}},t)$ denote the contribution to the efficient influence function $\varphi^*(z;{\bm{\eta}},t)$ due to dependence between ${\mathbb{P}}$ and $Q$, as given in Lemma \[lem:eif\_2\]. Then for two different intervention distributions $Q$ and $\overline{Q}$, whose densities with respect to some dominating measure are $dQ_t$ and $d\overline{Q}_t$ respectively for $t = 1, ..., T$, we have the von Mises-type expansion $$\begin{aligned}
\psi({{\mathbb{P}}};\overline{Q}) - \psi & ({{\mathbb{P}}};Q) = \int \zeta^*(z;\overline{\bm{\eta}},t) d{\mathbb{P}}(z) \\
& + \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t (m_t - \overline{m}_t) d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
& +\sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\pi}_s - d\pi_s }{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\omega}_s - d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \int {m}_t\left( d\overline{Q}_t - d{Q}_t - \overline{\phi}_t d\pi_t d\nu \right) d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right)
\end{aligned}$$ where all notation is defined as in Lemma \[lem:remainder\_1\].
From Lemma 6 in @Kennedy17 and by Lemma \[lem:identification\], we have $$\begin{aligned}
\psi({{\mathbb{P}}};\overline{Q}) - \psi({{\mathbb{P}}};Q) &= \int m_T \left( \prod_{t=1}^{T} d\overline{Q}_td{\mathbb{P}}_t - \prod_{t=1}^{T} dQ_td{\mathbb{P}}_t \right) \\
&= \sum_{t=1}^{T} \int m_t \left( d\overline{Q}_t - d{Q}_t \right) d{\mathbb{P}}_t \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s.
\end{aligned}$$ Next, for the expected contribution to the influence function due to estimating $Q$ when it depends on ${\mathbb{P}}$, we have that $$\begin{aligned}
{\mathbb{E}}[\zeta^*(Z;\overline{\bm{\eta}})] &= {\mathbb{E}}\left[ \sum_{t=1}^{T} \int \overline{\phi}_t \overline{m}_t d\nu \left( \prod_{s=0}^{t-1} \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \mathbbm{1}(R_{t}=1) \right] \\
&=\sum_{t=1}^{T} {\mathbb{E}}\left[ \int \overline{\phi}_t d\pi_t \overline{m}_t d\nu \left( \prod_{s=0}^{t-1} \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \mathbbm{1}(R_{t}=1) \mathbbm{1}(R_{t-1}=1) \right] \\
&= \sum_{t=1}^{T} {\mathbb{E}}\left\{ \left[ \int \overline{\phi}_t d\pi_t \overline{m}_t d\nu d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) \mathbbm{1}(R_{t-1}=1) \right] d{\mathbb{P}}(R_t=1 \mid H_{t-1}, A_{t-1}, R_{t-1}=1) \right\} \\
&= \sum_{t=1}^{T} {\mathbb{E}}\left\{ \int \overline{\phi}_t d\pi_t \overline{m}_t d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\omega}_{t-1} \mathbbm{1}(R_{t-1}=1) \right\} \\
&= \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t \overline{m}_t d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s
\end{aligned}$$ where the first equality follows by definition, the second by iterated expectation conditioning on $(H_t, R_t)$ and the equivalence between $\mathbbm{1}(R_{t}=1) \mathbbm{1}(R_{t-1}=1)$ and $\mathbbm{1}(R_{t}=1)$, the third by iterated expectation conditioning on $(H_{t-1}, A_{t-1}, R_{t-1})$ and the law of total probability, the fourth by the definition of $d\omega_{t-1}$, and the fifth by repeating the process $T$ times. Details follow almost the same logic as in Lemma \[lem:remainder\_1\].
Now, we further expand our last expression as $$\begin{aligned}
\sum_{t=1}^{T} & \int \overline{\phi}_t d\pi_t \overline{m}_t d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
&= \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t (\overline{m}_t - m_t) d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
& \qquad + \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
&= \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t (\overline{m}_t - m_t) d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
& \quad +\sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\pi_s - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& \quad + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_s - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& \quad + \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right)
\end{aligned}$$ where the first equality follows by adding and subtracting the second term, and the second by the same steps used in Lemma \[lem:remainder\_1\].
With the last term in the expression above, it follows that $$\begin{aligned}
\psi({{\mathbb{P}}};\overline{Q}) - \psi({{\mathbb{P}}};Q) & - \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right) \\
&= \sum_{t=1}^{T} \int {m}_t\left( d\overline{Q}_t - d{Q}_t - \overline{\phi}_t d\pi_t d\nu \right)d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right).
\end{aligned}$$
Putting all these together, we finally have $$\begin{aligned}
\psi({{\mathbb{P}}};\overline{Q}) - \psi & ({{\mathbb{P}}};Q) = {\mathbb{E}}[ \zeta^*(Z;\overline{\bm{\eta}})] \\
& + \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t (m_t - \overline{m}_t) d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
& +\sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\pi}_s - d\pi_s }{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\omega}_s - d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \int {m}_t\left( d\overline{Q}_t - d{Q}_t - \overline{\phi}_t d\pi_t d\nu \right)d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right)
\end{aligned}$$ which is the result of the lemma.
Finally, the next lemma concludes the proof of the second statement and thus completes the proof of Theorem \[thm:convergence\]. In fact, it is this lemma that substantiates why having all nuisance functions estimated at rate $n^{-1/4}$ is one sufficient condition.
\[lem:upperbound\_remainder\] The remainders of the von Mises expansions from Lemmas \[lem:remainder\_1\] and \[lem:remainder\_2\] both vanish at rate $n^{-\frac{1}{2}}$ uniformly in $\delta$ if $$\left( \sup_{\delta\in \mathcal{D}}\Vert m_{\delta,t} - \widehat{m}_{\delta,t} \Vert + \Vert \pi_t - \widehat{\pi}_{t} \Vert \right) \Big( \Vert \pi_s - \overline{\pi}_s\Vert + \Vert \omega_s - \overline{\omega}_s\Vert \Big) = o_{\mathbb{P}}\left(\frac{1}{\sqrt{n}}\right),$$ for all $s \leq t \leq T$.
The remainder term of the von Mises-type expansion from Lemma \[lem:remainder\_1\] equals $$\begin{aligned}
& \sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\pi_s - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& \qquad + \sum_{t=1}^{T} \sum_{s=1}^{t} \int (m^*_t - \overline{m}_t)\left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_s - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& = \sum_{t=1}^{T} \sum_{s=1}^{t} \int \Big\{ (\overline{m}_{t+1} - {m}_{t+1})dQ_{t+1}d{\mathbb{P}}_{t+1} + (m_t - \overline{m}_t)\Big\} \left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\pi_s - d\overline{\pi}_s}{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& \quad + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \Big\{ (\overline{m}_{t+1} - {m}_{t+1})dQ_{t+1}d{\mathbb{P}}_{t+1} + (m_t - \overline{m}_t)\Big\} \left( \prod_{r=1}^{t} dQ_rd{\mathbb{P}}_r \right) \left( \frac{d\omega_s - d\overline{\omega}_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& \lesssim \sum_{t=1}^{T} \Big( \Vert \overline{m}_{t+1} - {m}_{t+1} \Vert + \Vert m_t - \overline{m}_{t} \Vert \Big) \sum_{s=1}^{t} \Big( \Vert \pi_s - \overline{\pi}_s\Vert + \Vert \omega_s - \overline{\omega}_s\Vert \Big)
\end{aligned}$$ where the equality follows simply by adding and subtracting $m_t$, and the final bound by taking sup-norms of each factor.
For the remainder term from Lemma \[lem:remainder\_2\], first note that by Lemma \[lem:identification\] the following results stated in @Kennedy17 also hold in our case: $$\int \overline{\phi}_t d\pi_t = \frac{\delta(2a_t-1)(\overline{\pi}_t - \pi_t )}{(\delta\overline{\pi}_t+1-\overline{\pi}_t)^2},$$ $$d\overline{Q}_t - d{Q}_t - \int \overline{\phi}_t d\pi_t = \frac{\delta(\delta-1)(2a_t-1)(\overline{\pi}_t - \pi_t)^2}{(\delta\overline{\pi}_t+1-\overline{\pi}_t)^2(\delta{\pi}_t+1-{\pi}_t)}.$$
where we additionally condition on $R_t=1$ in $\pi_t, \overline{\pi}_t$ in our case. Hence, it immediately follows that the remainder from Lemma \[lem:remainder\_2\] is $$\begin{aligned}
& \sum_{t=1}^{T} \int \overline{\phi}_t d\pi_t (m_t - \overline{m}_t) d\nu d{\mathbb{P}}_t \prod_{s=0}^{t-1} \left( \frac{d\overline{Q}_s}{d\overline{\pi}_s} \frac{1}{d\overline{\omega}_s} \right) d{\pi}_s d{{\mathbb{P}}}_s d{\omega}_s \\
& +\sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\pi}_s - d\pi_s }{d\overline{\pi}_s} \right) \left( \frac{d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \sum_{s=1}^{t} \int \overline{\phi}_t d\pi_t {m}_t d\nu d{\mathbb{P}}_t \left( \prod_{r=0}^{t-1} d\overline{Q}_rd{\mathbb{P}}_r \right) \left( \frac{d\overline{\omega}_s - d\omega_s}{d\overline{\omega}_s} \right) \prod_{r=1}^{s-1} \left(\frac{d\pi_r}{d\overline{\pi}_r} \frac{d\omega_{r}}{d\overline{\omega}_r} \right) \\
& + \sum_{t=1}^{T} \int {m}_t\left( d\overline{Q}_t - d{Q}_t - \overline{\phi}_t d\pi_t d\nu \right)d{\mathbb{P}}_t \left( \prod_{s=0}^{t-1} d\overline{Q}_s d{\mathbb{P}}_s \right) \\
& \lesssim \sum_{t=1}^{T} \Vert \pi_t - \overline{\pi}_{t} \Vert \left\{ \Vert m_t - \overline{m}_{t} \Vert + \sum_{s=1}^{t} \Big( \Vert \pi_s - \overline{\pi}_s\Vert + \Vert \omega_s - \overline{\omega}_s\Vert \Big) + \Vert \pi_t - \overline{\pi}_{t} \Vert\right\}.
\end{aligned}$$
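The two displayed identities above can be checked symbolically, assuming the incremental-intervention density $dQ_t(a_t \mid h_t) = \{a_t\delta\pi_t + (1-a_t)(1-\pi_t)\}/(\delta\pi_t + 1 - \pi_t)$ from @Kennedy17 (an imported assumption; symbol names below are illustrative, and the check passes with the numerator $(\overline{\pi}_t - \pi_t)$ in the linear term):

```python
import sympy as sp

a, d, p, pb = sp.symbols('a delta p pbar', positive=True)

def dQ(pi):
    # Incremental propensity-score intervention density (assumed Kennedy17 form)
    return (a * d * pi + (1 - a) * (1 - pi)) / (d * pi + 1 - pi)

D, Db = d * p + 1 - p, d * pb + 1 - pb        # delta*pi + 1 - pi, and bar version

lin = d * (2 * a - 1) * (pb - p) / Db**2                       # int phibar_t dpi_t
quad = d * (d - 1) * (2 * a - 1) * (pb - p)**2 / (Db**2 * D)   # quadratic remainder

# The treatment indicator a_t is binary, so checking a in {0, 1} suffices:
for aval in (0, 1):
    assert sp.simplify((dQ(pb) - dQ(p) - lin - quad).subs(a, aval)) == 0
```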
Therefore, supported by condition 4) in Theorem \[thm:convergence\], if we have $$\left( \sup_{\delta\in \mathcal{D}}\Vert m_{\delta,t} - \widehat{m}_{\delta,t} \Vert + \Vert \pi_t - \widehat{\pi}_{t} \Vert \right) \Big( \Vert \pi_s - \overline{\pi}_s\Vert + \Vert \omega_s - \overline{\omega}_s\Vert \Big) = o_{\mathbb{P}}\left(\frac{1}{\sqrt{n}}\right),$$ for all $s \leq t \leq T$, then both of the remainders from Lemmas \[lem:remainder\_1\] and \[lem:remainder\_2\] vanish at rate $n^{-\frac{1}{2}}$ uniformly in $\delta$.
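The arithmetic behind the $n^{-1/4}$ condition is simple to illustrate: nuisance errors of order $n^{-1/4}$ multiply to a second-order remainder of order $n^{-1/2}$, which the $\sqrt{n}$ scaling of the process leaves bounded (constants below are arbitrary placeholders):

```python
import numpy as np

n = np.array([1e2, 1e4, 1e6, 1e8])
err_m = 2.0 * n ** -0.25    # assumed rate for the outcome-regression error
err_pi = 3.0 * n ** -0.25   # assumed rate for the propensity/censoring error

remainder = err_m * err_pi          # product (second-order) bias term
scaled = np.sqrt(n) * remainder     # what enters the limiting process

# The scaled product stays bounded (here, exactly constant) ...
assert np.allclose(scaled, 6.0)
# ... while sqrt(n) times a single first-order error would diverge:
assert np.all(np.diff(np.sqrt(n) * err_m) > 0)
```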
Justification for using the multiplier bootstrap from [@Kennedy17] {#append:thm-bootstrap-kennedy}
-----------------------------------------------------------
As in the proof of Theorem \[thm:convergence\], we let
$$\Vert f \Vert_{\mathcal{D},\mathcal{T}} \equiv \sup_{\delta\in\mathcal{D},t \in \mathcal{T}}\vert f(\delta,t) \vert$$ and define the processes $$\widehat{\Psi}_n(\delta, t) \equiv \sqrt[]{n}\{\widehat{\psi}_t (\delta) - \psi_t (\delta) \} / \widehat{\sigma}(\delta, t)$$ $$\widehat{\Psi}^*_n(\delta, t) \equiv \mathbb{G}_{n}\left[ \varepsilon \{\varphi(Z;\hat{\bm{\eta}}_{-S},\delta, t) - \widehat{\psi}_t(\delta) \} / \widehat{\sigma}(\delta, t)\right]$$ $${\Psi}^*_n(\delta, t) \equiv \mathbb{G}_{n}\left[ \varepsilon \{\varphi(Z;{\bm{\eta}},\delta, t) - {\psi}_t(\delta) \} / {\sigma}(\delta, t) \right]$$ where the star superscripts denote the multiplier bootstrap processes defined in Theorem 4 of @Kennedy17, and $\mathbb{G}$ is a mean-zero Gaussian process with covariance ${\mathbb{E}}[\mathbb{G}(\delta_1; t_1)\mathbb{G}(\delta_2; t_2)]={\mathbb{E}}\left[\widetilde{\varphi}(Z;\bm{\eta},\delta_1, t_1) \widetilde{\varphi}(Z;\bm{\eta},\delta_2, t_2)\right]$ as defined in Theorem \[thm:convergence\] in the main text.
Given the above setup and the result of Theorem \[thm:convergence\], it only remains to show $$\left| {\mathbb{P}}\left( \Vert \widehat{\Psi}_n \Vert_{\mathcal{D},\mathcal{T}} \leq \hat{c}_\alpha \right) - {\mathbb{P}}\left( \Vert \widehat{\Psi}^*_n \Vert_{\mathcal{D},\mathcal{T}} \leq \hat{c}_\alpha \right)\right| = o(1),$$ since ${\mathbb{P}}\left( \Vert \widehat{\Psi}^*_n \Vert_{\mathcal{D},\mathcal{T}} \leq \hat{c}_\alpha \right) = 1 - \alpha$ by definition. The proof is straightforward: we have already shown $\Vert \widehat{\Psi}_n - {\Psi}_n \Vert_{\mathcal{D},\mathcal{T}}= o_{\mathbb{P}}(1)$ in the proof of Theorem \[thm:convergence\], which implies that $\left| \Vert \widehat{\Psi}^*_n \Vert_{\mathcal{D},\mathcal{T}} - \Vert {\Psi}^*_n \Vert_{\mathcal{D},\mathcal{T}} \right|=o_{\mathbb{P}}(1)$. Furthermore, since we add only a finite number of discrete timepoints to the function class used in the proof of Theorem 4 in @Kennedy17, Lemma 2.3 in @chernozhukov2014 and Corollary 2.2 in @belloni2015 remain valid in our case, and thereby the exact same argument used in the proof of Theorem 4 in @Kennedy17 concludes the above statement.
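A minimal sketch of how the critical value $\hat{c}_\alpha$ is computed from the multiplier bootstrap. The influence-function values below are random placeholders standing in for $\varphi(Z;\hat{\bm{\eta}}_{-S},\delta,t)$ on a grid over $\mathcal{D}\times\mathcal{T}$; only the algorithmic skeleton is intended:

```python
import numpy as np

rng = np.random.default_rng(0)
n, G = 500, 12          # n units; G grid points over (delta, t)
# Placeholder studentized influence-function values, one column per grid point:
phi = rng.standard_normal((n, G))
phi = (phi - phi.mean(axis=0)) / phi.std(axis=0)

B, alpha = 2000, 0.05
sup_stats = np.empty(B)
for b in range(B):
    eps = rng.standard_normal(n)     # Gaussian multiplier weights epsilon
    # Bootstrap process G_n[eps * phi] on the grid, then sup over the grid:
    sup_stats[b] = np.abs(eps @ phi / np.sqrt(n)).max()

c_alpha = np.quantile(sup_stats, 1 - alpha)   # bootstrap critical value
```

A uniform $1-\alpha$ confidence band then takes the form $\widehat{\psi}_t(\delta) \pm \hat{c}_\alpha \, \widehat{\sigma}(\delta,t)/\sqrt{n}$.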
[^1]: {Department of Statistics & Data Science, Machine Learning Department}, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213. Email: `[email protected]`
[^2]: To whom correspondence should be addressed
[^3]: Assistant Professor, Department of Statistics & Data Science, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213. Email: `[email protected]`
[^4]: Assistant Professor, Department of Epidemiology, University of Pittsburgh, 130 DeSoto Street, Pittsburgh, PA 15261. Email: `[email protected]`
[^5]: For every $t$, the event $\{R_{t}=1\}$ implies $\{R_{s}=1$ for all $s \leq t\}$ by construction.
[^6]: For random variables $X, Y, Z$, it holds that ${\mathbb{E}}[X|Y] = \sum_z {\mathbb{E}}[X|Y, Z=z]{\mathbb{P}}(Z=z|Y)$.
|
---
author:
- |
\
Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova,\
and INFN Sezione di Padova, Via Marzolo 8, 35131 Padova, Italy\
E-mail:
title: 'Generalizing Minimal Dark Matter: Millicharge or Decay[^1]'
---
Introduction
============
One of the few things we know about Dark Matter (DM) is that it must be stable on cosmological timescales. In terms of particle physics, stability means symmetry: there must be a symmetry, exact or approximate, responsible for the absence or suppression of Lagrangian operators which may cause the DM to decay. The simplest example is a global ${\mathbb{Z}}_2$ or $U(1)$ symmetry if the DM field is or is not self-conjugated, respectively.
A common way of enforcing DM stability in model building is to impose such a symmetry by hand, postponing the issue of justifying its ultraviolet origin. A different, elegant way to ensure stability is instead exploiting accidental symmetries, the same mechanism that makes the proton stable in the Standard Model (SM). Accidental symmetries are global symmetries appearing in a renormalizable theory as a consequence of its specific matter content, without being imposed a priori.
This is the main idea behind the Minimal Dark Matter (MDM) setup, first presented in Ref. [@Cirelli:2005uq] (see also Refs. [@Cirelli:2007xd; @Cirelli:2009uv]). There, the SM is augmented with a new multiplet ${\mathcal{X}}$ with generic quantum numbers under the SM gauge group, without introducing new symmetries. The multiplet mass, the model’s only free parameter, is fixed by requiring the DM to be a thermal relic. In listing all possible scalar and spin-$\frac{1}{2}$ candidates, one must take into account the following facts [@Cirelli:2005uq]:
- Colored thermal relics are very constrained.
- DM candidates with tree-level interactions with the photon and/or the $Z$ boson are ruled out by direct detection experiments. Therefore only odd-dimensional representations of ${SU(2)_{\text{L}}}$ are viable choices (see however Refs. [@Hisano:2014kua; @Nagata:2014aoa]).
- Multiplets for which Yukawa couplings with SM fields exist, making the DM unstable, are to be discarded. Also dimension-$5$ operators, in an Effective Field Theory (EFT) approach, make the DM decay too quickly even for a cutoff at the Planck scale.
- Matter charged under representations of ${SU(2)_{\text{L}}}$ with larger and larger dimension makes the ${SU(2)_{\text{L}}}$ coupling constant run faster and faster. For large enough representations, a Landau pole makes it necessary to modify the low-energy theory, effectively lowering the EFT cutoff and thus making higher-order EFT operators potentially dangerous in inducing fast DM decay.
All in all, enforcing electric neutrality for the DM, only one viable candidate is singled out: a fermion ${SU(2)_{\text{L}}}$ quintuplet with zero hypercharge and mass ${M}\approx 9$ TeV.[^2] A DM candidate previously believed to be viable, part of a scalar eptaplet, actually decays too quickly due to a previously overlooked dimension-$5$ operator trilinear in ${\mathcal{X}}$ [@DelNobile:2015bqo; @DiLuzio:2015oha].
At present, direct DM detection experiments are not sensitive to the quintuplet MDM model due to the large radiative mass splitting among multiplet components and the loop-suppressed elastic DM-nucleus scattering cross section (see [e.g. ]{}Ref. [@DelNobile:2013sia]). However, experiments like the [[Fermi]{}]{} [LAT]{} and [[H.E.S.S.]{}]{} are very sensitive to gamma-ray lines from the quintuplet MDM annihilations in the Galactic Center, due to the Sommerfeld-enhanced cross section and the clean, peaked signal [@Cirelli:2015bda]. This candidate may be already ruled out or within the reach of near-future experiments, depending on the DM density in the Galactic Center. We therefore deemed it timely to perform a critical review of the MDM setup, pointing out yet unexplored generalizations of this framework. We propose and explore two distinct directions, discussed in the following. One is to lower the cutoff of the model to allow for DM decays. Another possibility is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates.
Decaying quintuplet MDM
=======================
![\[fig:quintuplet e\]*Isotropic gamma-ray flux due to DM decays induced by the operators ${\CMcal{O}}_1$ (**left**) and ${\CMcal{O}}_2$ (**right**), assuming DM coupling to electrons and electron neutrinos ($a = e$). [[Fermi]{}]{} data on the diffuse isotropic gamma-ray flux [@Ackermann:2014usa] are shown in brown, and the astrophysical background is displayed as a gray line.*](figures/Decay5pletOp1_diffusegamma_e.pdf "fig:"){width="45.00000%"} ![\[fig:quintuplet e\]*Isotropic gamma-ray flux due to DM decays induced by the operators ${\CMcal{O}}_1$ (**left**) and ${\CMcal{O}}_2$ (**right**), assuming DM coupling to electrons and electron neutrinos ($a = e$). [[Fermi]{}]{} data on the diffuse isotropic gamma-ray flux [@Ackermann:2014usa] are shown in brown, and the astrophysical background is displayed as a gray line.*](figures/Decay5pletOp2_diffusegamma_e.pdf "fig:"){width="45.00000%"}
The lowest-order Lagrangian operators responsible for breaking the accidental symmetry stabilizing the MDM quintuplet are the two dimension-$6$ operators $$\begin{aligned}
{\CMcal{O}}_1 \equiv \frac{c_1^a}{\Lambda^2} \overline{{\mathcal{X}}} L^a H H H^\dagger \ ,
&&
{\CMcal{O}}_2 \equiv \frac{c_2^a}{\Lambda^2} \overline{{\mathcal{X}}} \sigma_{\mu \nu} L^a W^{\mu \nu} H \ ,\end{aligned}$$ with $H$ the Higgs doublet, $L^a$ the left-handed lepton doublet of flavor $a = e, \mu, \tau$, and $W^{\mu \nu}$ the ${SU(2)_{\text{L}}}$ gauge boson field strength. For a low enough cutoff $\Lambda$, the products of DM decay should be observable. We compute the photon flux from DM decays with the code described in Ref. [@Cirelli:2010xx], and constrain the cutoff using [[Fermi]{}]{} data on the diffuse isotropic flux [@Ackermann:2014usa] and [[H.E.S.S.]{}]{} data on gamma-ray lines [@Abramowski:2013ax]. The two operators present a peculiar phenomenology, in that decays into a larger number of particles are favored over final states with fewer particles. In fact, out of each $H$ field, one can either take the Higgs vev $v$, or a Higgs or longitudinal gauge boson. In the first case one gets one fewer particle in the final state (and thus a larger phase space), but also a suppression of the decay rate by a factor $(v / {M})^2 \approx 10^{-3}$. Also of interest, ${\CMcal{O}}_2$ induces a gamma-ray line-like feature in the spectrum, due to ${\mathcal{X}}\to \gamma \nu$ and other decays with a nearly monochromatic photon. [Fig. \[fig:quintuplet e\]]{} shows the spectral photon flux, broken down into its individual contributions, for the two operators separately. Our take-home messages are:
- In both cases, the cutoff of the model is constrained to lie above the GUT scale.
- For ${\CMcal{O}}_2$, the best bound is set by the [[Fermi]{}]{} data on the diffuse isotropic flux [@Ackermann:2014usa], instead of the [[H.E.S.S.]{}]{} data on gamma-ray lines [@Abramowski:2013ax] as one may have expected.
- Considering ${\CMcal{O}}_1 + {\CMcal{O}}_2$, and taking ${\CMcal{O}}_2$ to arise from a loop-suppressed process, as its Lorentz structure may suggest, the gamma-ray line-like feature is dwarfed by the continuum photon flux due to ${\CMcal{O}}_1$. Therefore, in general one should consider, besides operators generating gamma-ray lines, also the diffuse emission from other operators arising at the same order in the EFT expansion.
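The GUT-scale statement above can be cross-checked with a naive order-of-magnitude estimate (ours, not a substitute for the full flux computation): taking a schematic dimension-$6$ decay width $\Gamma \sim M^5/(8\pi\Lambda^4)$ and an assumed, indicative lifetime bound $\tau \gtrsim 10^{26}$ s,

```python
import math

# Order-of-magnitude cross-check (ours): the cutoff needed for a
# cosmologically long-lived quintuplet decaying through a dimension-6
# operator, taking the naive width Gamma ~ M^5 / (8 pi Lambda^4) and an
# assumed, indicative lifetime bound tau > 1e26 s.

M = 9e3                        # quintuplet mass in GeV
s_to_GeVinv = 1.52e24          # 1 second in GeV^-1 (hbar = 1 units)
tau_min = 1e26 * s_to_GeVinv   # minimal lifetime in GeV^-1

gamma_max = 1.0 / tau_min      # maximal allowed decay width in GeV
Lambda_min = (M**5 / (8 * math.pi * gamma_max)) ** 0.25

print(f"Lambda > {Lambda_min:.1e} GeV")  # ~1e17 GeV, above the GUT scale
```

Even this crude estimate lands above the GUT scale $\sim 2 \times 10^{16}$ GeV, in line with the bounds quoted above.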
Millicharged MDM
================
As noted above, the assumption that the DM is electrically neutral singles out a Majorana quintuplet as the only viable MDM candidate. If we give up this assumption and allow for non-zero hypercharge assignments for the new generic multiplet, we get a host of new possible DM candidates. Again we may list all possible multiplets we can add to the SM, but now without the need to worry about DM stability being spoiled by higher-order operators or by Landau poles. In fact, for small (but non-zero) hypercharges, $|\epsilon| \ll 1$ as required by experiments (see [Fig. \[fig:Masses\]]{}), the so-called millicharged DM is made absolutely stable ([i.e. ]{}to all orders in the EFT expansion) by electric charge conservation. Thus, in principle, even large-dimensional ${SU(2)_{\text{L}}}$ representations become viable.
![\[fig:Masses\]***Left:** Thermal relic abundance and mass of millicharged candidates (the Majorana quintuplet is shown for reference). The relic density line for the Dirac triplet crosses the red band (indicating the measured DM abundance from Ref. [@Ade:2015xua]) twice, thus there are two allowed values for its mass. **Right:** Constraints on the absolute value of the DM millicharge as a function of the DM mass.*](figures/plottaOmegaDM.pdf "fig:"){width="45.00000%"} ![\[fig:Masses\]***Left:** Thermal relic abundance and mass of millicharged candidates (the Majorana quintuplet is shown for reference). The relic density line for the Dirac triplet crosses the red band (indicating the measured DM abundance from Ref. [@Ade:2015xua]) twice, thus there are two allowed values for its mass. **Right:** Constraints on the absolute value of the DM millicharge as a function of the DM mass.*](figures/plottaBoundsmilli.pdf "fig:"){width="45.00000%"}
Another welcome feature of a non-zero hypercharge is the following. For odd-dimensional ${SU(2)_{\text{L}}}$ representations (as required to evade direct DM detection constraints), $\epsilon = 0$ implies that the multiplet is in a real representation of the gauge group, while for $\epsilon \neq 0$ the representation is complex. In the first case the DM is self-conjugated, while in the second case it is not. Therefore, a non-zero hypercharge implies a doubling of degrees of freedom, which changes the relic density (and therefore the DM mass) as well as the bounds from indirect searches. Under some conditions [@DelNobile:2015bqo], we can now describe the DM as composed of two mutually decoupled species with the same mass and interactions, so that the relic density for $\epsilon \neq 0$ is twice that for $\epsilon = 0$. Moreover, since particles and anti-particles are now distinct, the probability that a DM particle finds a partner to annihilate with is half that for a self-conjugated DM, making indirect detection bounds less stringent for a non-self-conjugated candidate.
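The factor-of-two bookkeeping can be illustrated with a schematic thermal-relic scaling $\Omega \propto M^2$ (a rough approximation of ours, appropriate for a gauge-coupling-dominated cross section $\sigma v \propto g^4/M^2$, and neglecting Sommerfeld and bound-state effects; the normalization below is purely illustrative):

```python
import math

# Bookkeeping sketch (ours) of the degree-of-freedom doubling for a complex
# (Dirac) representation, assuming the schematic thermal-relic scaling
# Omega ~ M^2. Normalization and reference mass are illustrative only.

def relic_density(M, self_conjugated, omega_ref=0.12, M_ref=9e3):
    """Schematic relic abundance, normalized so that a self-conjugated
    candidate of mass M_ref (GeV) gives omega_ref."""
    omega = omega_ref * (M / M_ref) ** 2
    return omega if self_conjugated else 2 * omega  # particle + antiparticle

# Matching a fixed observed abundance, the non-self-conjugated mass is
# lower by sqrt(2) under this naive scaling:
M_dirac = 9e3 / math.sqrt(2)
print(f"Omega(self-conjugated, 9 TeV)    = {relic_density(9e3, True):.3f}")
print(f"Omega(complex, {M_dirac/1e3:.2f} TeV) = {relic_density(M_dirac, False):.3f}")
```

The actual candidate masses shown in [Fig. \[fig:Masses\]]{} come from the full computation, not from this naive scaling.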
The left panel of [Fig. \[fig:Masses\]]{} shows the relic density and mass of a few of our millicharged DM candidates (for reference, the Majorana quintuplet is the standard, neutral MDM candidate). The right panel instead displays bounds on the absolute value of the DM millicharge from CMB observations [@Dolgov:2013una] and [[LUX]{}]{} [@Akerib:2013tjd; @DelNobile:2013sia; @Chuzhoy:2008zy]. When faced with gamma-ray line searches in the Galactic Center, most of our millicharged candidates perform as well (or as badly) as the MDM Majorana quintuplet: they are excluded (allowed) for a cuspy (cored) DM profile. However, we find that a Dirac triplet with mass about $2$ TeV is a viable candidate even for a cuspy profile.
Conclusions
===========
Minimal Dark Matter [@Cirelli:2005uq; @Cirelli:2007xd; @Cirelli:2009uv] is an elegant and extremely predictive framework, which singles out the neutral component of a spin-$\frac{1}{2}$ ${SU(2)_{\text{L}}}$ quintuplet with zero hypercharge as the DM. Present day gamma-ray line searches in the Galactic Center are particularly sensitive to this candidate, thus making it timely to perform a critical review of the model to find possible yet unexplored generalizations.
We proposed and explored two distinct directions [@DelNobile:2015bqo]. One is to lower the cutoff of the model to allow for decays of the DM quintuplet. A careful analysis of the decay spectrum of this candidate showed that current gamma-ray data constrain the cutoff to lie above the GUT scale. Another possibility is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates. We found that a Dirac ${SU(2)_{\text{L}}}$ triplet with mass around $2$ TeV is a viable candidate still unconstrained by the stringent bounds from gamma-ray line searches.
[99]{}
E. Del Nobile, M. Nardecchia and P. Panci, *Millicharge or Decay: A Critical Take on Minimal Dark Matter,* \[arXiv: \[hep-ph\]\].
M. Cirelli, N. Fornengo and A. Strumia, *Minimal dark matter,* \[\].
M. Cirelli, A. Strumia and M. Tamburini, *Cosmology and Astrophysics of Minimal Dark Matter,* \[arXiv: \[hep-ph\]\].
M. Cirelli and A. Strumia, *Minimal Dark Matter: Model and results,* \[arXiv: \[hep-ph\]\].
J. Hisano, D. Kobayashi, N. Mori and E. Senaha, *Effective Interaction of Electroweak-Interacting Dark Matter with Higgs Boson and Its Phenomenology,* \[arXiv: \[hep-ph\]\].
N. Nagata and S. Shirai, *Electroweakly-Interacting Dirac Dark Matter,* \[arXiv: \[hep-ph\]\].
A. Mitridate, M. Redi, J. Smirnov and A. Strumia, *Cosmological Implications of Dark Matter Bound States,* \[arXiv: \[hep-ph\]\].
L. Di Luzio, R. Gröber, J. F. Kamenik and M. Nardecchia, *Accidental matter at the LHC,* \[arXiv: \[hep-ph\]\].
M. Cirelli, E. Del Nobile and P. Panci, *Tools for model-independent bounds in direct dark matter searches,* \[arXiv: \[hep-ph\]\].
M. Cirelli, T. Hambye, P. Panci, F. Sala and M. Taoso, *Gamma ray tests of Minimal Dark Matter,* \[arXiv: \[hep-ph\]\].
M. Cirelli [*et al.*]{}, *PPPC 4 DM ID: A Poor Particle Physicist Cookbook for Dark Matter Indirect Detection,* \[\] \[arXiv: \[hep-ph\]\].
M. Ackermann [*et al.*]{} \[Fermi-LAT Collaboration\], *The spectrum of isotropic diffuse gamma-ray emission between 100 MeV and 820 GeV,* \[arXiv: \[astro-ph.HE\]\].
A. Abramowski [*et al.*]{} \[HESS Collaboration\], *Search for Photon-Linelike Signatures from Dark Matter Annihilations with H.E.S.S.,* \[arXiv: \[astro-ph.HE\]\].
P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], *Planck 2015 results. XIII. Cosmological parameters,* \[arXiv: \[astro-ph.CO\]\].
A. D. Dolgov, S. L. Dubovsky, G. I. Rubtsov and I. I. Tkachev, *Constraints on millicharged particles from Planck data,* \[arXiv: \[hep-ph\]\].
D. S. Akerib [*et al.*]{} \[LUX Collaboration\], *First results from the LUX dark matter experiment at the Sanford Underground Research Facility,* \[arXiv: \[astro-ph.CO\]\].
L. Chuzhoy and E. W. Kolb, *Reopening the window on charged dark matter,* \[arXiv: \[astro-ph\]\].
[^1]: Based on Ref. [@DelNobile:2015bqo].
[^2]: A more recent analysis including the effects of bound-state formation finds ${M}\approx 11.5$ TeV [@Mitridate:2017izz]. We neglect these effects in the following.
---
abstract: 'In this paper we analyze the biasing effect of point sources, either thermal Sunyaev-Zeldovich clusters or standard radio sources, on the estimated strength of the non-Gaussianity in the Cosmic Microwave Background (CMB). We show that the biggest contribution comes from the cross–correlation of the CMB with the matter density rather than from the Poisson term which is conventionally assumed in these calculations. For the three year WMAP data, we estimate that point sources could produce a non–Gaussian signature equivalent to a bias in $f_{NL}$ of $0.35, 0.24, -0.097, -0.13$ in the Ka, Q, V and W bands respectively. The level of bias we find is largely insufficient to explain the very high $f_{NL}$ values recently detected by Yadav and Wandelt. For Planck, we estimate the point source bispectra to contaminate the $f_{NL}$ estimator with a bias of $1.3, 0.34, -0.25, -0.48$ at $30, 44, 70, 100~{\rm GHz}$ respectively. These results depend on the assumed redshift distribution of the point sources. However, given the projected Planck sensitivity of $\Delta f_{NL} \simeq 5$ (95% C.L.), a good estimate of point sources’ properties including their number density and redshift distribution is essential before deriving strong conclusions on primordial non–Gaussianity.'
author:
- Daniel Babich
- Elena Pierpaoli
date: 'To be submitted to Phys. Rev. D.'
title: 'Point Source Contamination in CMB Non-Gaussianity Analyses'
---
Introduction
============
Recent claims by Yadav and Wandelt [@Yadav07] of the detection of strong primordial non-Gaussianity in the three-year Wilkinson Microwave Anisotropy Probe (WMAP) data [@Spergel07] have the potential to revolutionize our understanding of the early universe. These results were also found in the WMAP five-year analysis, although with less statistical significance [@Komatsu08]. The strength of the non-Gaussianity detected in their analysis is more than two orders of magnitude larger than the non-Gaussianity expected in the simplest single-field, slow-roll model of inflation [@Maldacena03]. This detection, if it stands up to scrutiny, will be the first definitive indication that the simplest model of inflation cannot adequately explain all of the current cosmological observations and must be modified in some way.
In Yadav and Wandelt’s analysis [@Yadav07], the detection of the non-Gaussian signal is due to the simultaneous reduction in the estimator’s error bars and the shift in the central value as smaller angular scale information was included in the analysis. One obvious concern is that the estimator is contaminated by foreground emission, in particular radio point sources and the thermal Sunyaev-Zeldovich effect, which become increasingly important on small angular scales.
In this paper we will analyze the influence of point sources on the standard non-Gaussianity estimator; at the experimental resolution of WMAP, both radio sources and SZ clusters effectively act as point sources. While it has been claimed that the non-Gaussianity caused by Poisson fluctuations in the number density of radio point sources can be safely separated from the primordial non-Gaussian signal [@Komatsu02], we will demonstrate that the other forms of point source non-Gaussianity cannot be safely ignored. In addition to the standard forms of non-Gaussianity produced by point sources, we will show that cross-correlation between the point source power spectrum and the CMB temperature anisotropies or instrument noise will produce non-Gaussianity of a form very similar to the local model. The local model is the form typically assumed in analyses of primordial non-Gaussianity, so these new non-Gaussian contributions will bias the estimator in a fashion that cannot easily be corrected.
This paper is organized as follows. In §\[sec:ps\] we demonstrate that point sources can produce a bispectrum that has the same form as the local model. In §\[sec:bias\] we derive the bias induced by the various point source bispectra. In §\[sec:conc\] we conclude. We use the WMAP three-year cosmological model [@Spergel07] for numerical calculations.
Estimator Bias {#sec:bias}
==============
While non-Gaussianity generically implies that any higher-order connected correlation function is non-zero, it is typical to focus on the three-point correlation function, or equivalently the bispectrum, because it has the simplest form of all non-Gaussian correlation functions and, for weak non-Gaussianity, it contains nearly all of the information [@Babich05]. The three-point correlation function can be factored into a component fixed by rotational invariance, which is assumed [*a priori*]{}, and a piece determined by the underlying mechanism that produced the non-Gaussianity [@Komatsu02]. Rotational invariance forces the three-point correlation function to be proportional to the Gaunt integral, $$\mathcal{G}^{\ell_1 \ell_2 \ell_3}_{m_1 m_2 m_3} = \sqrt{\frac{(2 \ell_1 + 1)(2 \ell_2 + 1)(2 \ell_3 + 1)}{4\pi}}
\left(\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ m_1 & m_2 & m_3 \end{array}\right)
\left(\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0 \end{array}\right),$$ and the CMB three-point correlation function can be written as $$\langle a_{\ell_1 m_1} a_{\ell_2 m_2} a_{\ell_3 m_3} \rangle = \mathcal{G}^{\ell_1 \ell_2 \ell_3}_{m_1 m_2 m_3}
b_{\ell_1, \ell_2, \ell_3},$$ where the reduced bispectrum, $b_{\ell_1, \ell_2, \ell_3}$, contains information about the form of non-Gaussianity.
The estimators used in CMB non-Gaussianity analyses are optimized for the detection of a signal with a very particular form, namely the local model [@Babich05]. The local model assumes that the initial curvature perturbations can be written as $$\label{eq:local}
\Phi({\bm x}) = \phi_g({\bm x}) + f_{NL}[\phi^2_g({\bm x}) - \langle \phi^2_g({\bm x}) \rangle ],$$ where $\phi_g$ is Gaussian. The non-linear terms in this model lead to the following bispectrum for the initial curvature perturbations $$B(k_1,k_2,k_3) = 2f_{NL}[P(k_1)P(k_2) + {\rm cyc.}],$$ where $P(k)$ is the power spectrum. The ordinary linear radiative transfer functions are subsequently used to calculate the CMB temperature anisotropies from these initial curvature perturbations. The statistical properties of the CMB temperature anisotropies will mirror the statistical properties of the underlying curvature perturbations since we consider linear radiative transfer. The levels of non-Gaussianity claimed by Yadav & Wandelt are significantly larger than the expected non-Gaussianity produced by non-linear radiative transfer.
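The local transformation of Eq. \[eq:local\] is easy to realize numerically; the following sketch (with an exaggerated, purely illustrative $f_{NL}$ and a unit-variance field) shows that a positive $f_{NL}$ induces a positive skewness in $\Phi$:

```python
import numpy as np

# Minimal sketch of the local model: a Gaussian field phi_g is squared
# point-by-point to produce the non-Gaussian Phi. The f_NL value and unit
# variance are exaggerated, illustrative choices, not physical ones.

rng = np.random.default_rng(0)
phi_g = rng.normal(0.0, 1.0, size=1_000_000)
f_nl = 0.1

phi = phi_g + f_nl * (phi_g**2 - np.mean(phi_g**2))

skewness = np.mean(phi**3) / np.mean(phi**2) ** 1.5
print(f"skewness = {skewness:.3f}")  # positive for f_NL > 0, zero for f_NL = 0
```

At leading order in $f_{NL}$, $\langle \Phi^3 \rangle = 6 f_{NL} \langle \phi_g^2 \rangle^2$, so the measured skewness directly tracks the sign and size of $f_{NL}$.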
However the estimators are sensitive to any bispectrum form that might be present in the data, regardless of its origin. In this section we will determine the induced bias produced by the various cross-correlation bispectra described in the next section.
The non-Gaussianity estimator can be expressed as [@Creminelli06] $$\label{eq:est}
\hat{f}_{NL} = \frac{1}{A} \left[ \sum \mathcal{G}^{\ell_1 \ell_2 \ell_3}_{m_1 m_2 m_3}
\frac{b_{\ell_1, \ell_2, \ell_3}}{C^T_{\ell_1} C^T_{\ell_2} C^T_{\ell_3}}
a_{\ell_1 m_1} a_{\ell_2 m_2} a_{\ell_3 m_3} \right],$$ where the normalization is $$A = \sum \frac{(2 \ell_1 + 1)(2 \ell_2 + 1)(2 \ell_3 + 1)}{4\pi}
\left(\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0 \end{array}\right)^2
\frac{b^2_{\ell_1, \ell_2, \ell_3}}{C^T_{\ell_1} C^T_{\ell_2} C^T_{\ell_3}},$$ here $C^T_{\ell} = C_{\ell} + C^N_{\ell}$ is the sum of the CMB signal and noise. The CMB experimental noise parameters are described in Table \[table:exp\_info\].
We are ignoring the additional linear term in the estimator because the contribution of radio point sources will not bias this piece of the estimator if the signal and noise covariance matrices well represent the real data.
The weight functions used in the estimator are optimized for a bispectrum produced by the local model. The estimator bias is determined by substituting the various point source bispectra into the estimator $$\Delta f^{\alpha}_{NL} = \frac{1}{A} \sum \frac{(2 \ell_1 + 1)(2 \ell_2 + 1)(2 \ell_3 + 1)}{4\pi}
\left(\begin{array}{ccc} \ell_1 & \ell_2 & \ell_3 \\ 0 & 0 & 0 \end{array}\right)^2
\frac{b_{\ell_1, \ell_2, \ell_3} b^{\alpha}_{\ell_1, \ell_2, \ell_3}}{C^T_{\ell_1} C^T_{\ell_2} C^T_{\ell_3}}.$$ Here $b^{\alpha}_{\ell_1, \ell_2, \ell_3}$ is one of the reduced bispectra produced by point sources. The CMB bispectrum produced by the local model is generally negative; the collapsed triangle modes, which have the highest signal-to-noise, are always negative. So a positive point source bispectrum will cause a negative estimator bias. We will now discuss the possible bispectrum terms.
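The mechanics of this sum can be illustrated with a toy numerical evaluation. The spectra below (a Sachs-Wolfe-like $C_\ell \propto 1/\ell(\ell+1)$, a schematic local-model template $b \propto -(C_{\ell_1}C_{\ell_2} + {\rm cyc.})$, and a flat Poisson-like contaminant) are illustrative placeholders of ours, not the radiative-transfer bispectra used in this paper; the point is only the sign of the resulting bias:

```python
import math

# Toy evaluation of the estimator-bias sum. All spectra here are
# illustrative placeholders; the point is the mechanics and the sign.

def threej000_sq(l1, l2, l3):
    """Squared Wigner 3j symbol (l1 l2 l3; 0 0 0), standard closed form."""
    L = l1 + l2 + l3
    if L % 2 or l3 < abs(l1 - l2) or l3 > l1 + l2:
        return 0.0
    f = math.factorial
    w = math.sqrt(f(L - 2*l1) * f(L - 2*l2) * f(L - 2*l3) / f(L + 1))
    w *= f(L // 2) / (f(L//2 - l1) * f(L//2 - l2) * f(L//2 - l3))
    return w * w

lmax = 40
C = {l: 1.0 / (l * (l + 1)) for l in range(2, lmax + 1)}  # toy C_l
b_ps = 1e-7                                               # flat contaminant

A, num = 0.0, 0.0
for l1 in range(2, lmax + 1):
    for l2 in range(2, lmax + 1):
        for l3 in range(2, lmax + 1):
            g = threej000_sq(l1, l2, l3)
            if g == 0.0:
                continue
            pref = (2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4 * math.pi) * g
            # schematic local template, negative for squeezed triangles
            b_loc = -2.0 * (C[l1]*C[l2] + C[l2]*C[l3] + C[l3]*C[l1])
            denom = C[l1] * C[l2] * C[l3]
            A += pref * b_loc * b_loc / denom
            num += pref * b_loc * b_ps / denom

bias = num / A
print(f"Delta f_NL = {bias:.3g}")  # negative: b_loc < 0 while b_ps > 0
```

With a negative template and a positive contaminant, the cross term in the numerator is negative while the normalization $A$ is positive, reproducing the sign argument made in the text.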
Point Source Bispectra {#sec:ps}
======================
The observed signal is the sum of the primordial and secondary temperature anisotropies, foreground emission and instrument noise. While the secondary anisotropies and extra-galactic foregrounds, which may be quite non-Gaussian, are important on small angular scales, the signal on the large angular scales relevant for WMAP is dominated by primary anisotropies. Thus it is assumed that any non-Gaussianity measured by WMAP is primordial in nature. We will argue that cross-correlations between some of these signals may induce bispectra on large angular scales.
Moreover, these bispectra may have similar forms to the local model if the signal power spectrum becomes spatially inhomogeneous in a manner that correlates with a second component of the observed signal. For example, the matter overdensity will bias the local radio point source power spectrum, and it will be correlated with the CMB temperature anisotropies produced via the ISW effect. This will produce a bispectrum similar in form to the local model.
This is not a coincidence as the non-Gaussianity in the local model is produced by the modulation of the small scale inflaton power spectrum by the large scale inflaton fluctuations that have already left the horizon and frozen out. The parameter $f_{NL}$ is a measure of the non-linear coupling between these different scales. Likewise the large scale matter overdensity modulates the small scale Poisson fluctuation power spectrum by altering the local number density of radio point sources. In an analogous fashion the bias describes the coupling between the different scales. As discussed in the previous section any bispectrum present in the data can bias the estimator.
Anisotropy Mechanisms
=====================
Now we will discuss the various physical effects considered in this paper – radio point source emission, the thermal Sunyaev-Zeldovich effect and the integrated Sachs-Wolfe effect.
### Radio Point Sources
The temperature anisotropy induced by unresolved radio point sources can be expressed as an integral over their flux distribution function $$\label{eq:flux}
\frac{\Delta T}{T}(\hat{\bm n},\nu) = \frac{1}{c_{\nu}}
\int_0^{\bar{S}(\hat{\bm n})} dS S \frac{dN}{dS}(S,\nu;\hat{\bm n}).$$ The conversion between the temperature and intensity fluctuations is $$c_{\nu} = \frac{\partial B_{\nu}}{\partial \ln{T}}(T_{CMB}),$$ where $T_{CMB} = 2.728$ K and $B_{\nu}(T)$ is a blackbody frequency distribution.
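For reference, $c_\nu$ can be evaluated directly from the Planck function, using $\partial B_\nu/\partial\ln T = (2h\nu^3/c^2)\, x e^x/(e^x-1)^2$ with $x = h\nu/k_B T$; a minimal sketch at the WMAP Q, V and W frequencies:

```python
import math

# Sketch of the temperature-intensity conversion c_nu = dB_nu/dlnT at
# T_CMB, using dB_nu/dlnT = (2 h nu^3 / c^2) x e^x / (e^x - 1)^2.

h = 6.626e-34      # Planck constant, J s
kB = 1.381e-23     # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s
T_CMB = 2.728      # K, as in the text

def c_nu(nu_ghz):
    """dB_nu/dlnT in W m^-2 Hz^-1 sr^-1 at frequency nu (GHz)."""
    nu = nu_ghz * 1e9
    x = h * nu / (kB * T_CMB)
    return (2 * h * nu**3 / c**2) * x * math.exp(x) / (math.exp(x) - 1) ** 2

for nu in (41, 61, 94):   # WMAP Q, V, W bands
    print(f"c_nu({nu} GHz) = {c_nu(nu):.3e} W m^-2 Hz^-1 sr^-1")
```

In the Rayleigh-Jeans limit ($x \ll 1$) the exponential factor tends to one and $c_\nu \to 2 k_B \nu^2 T_{CMB}/c^2$, the familiar low-frequency conversion.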
Note that we have allowed both the upper flux limit for unresolved radio point sources and their number density to be spatially inhomogeneous. As we will describe below, these spatial inhomogeneities will correlate with other signals present in the data to produce bispectra in the observed CMB data.
The actual values for the bispectra calculated in this paper depend on the radio point source properties (both flux and redshift distributions) and on the flux cut at a given frequency for a specific experiment. In order to evaluate the residual point source contribution to the total estimated bispectrum in the WMAP 3-year data, we must consider the technique used by WMAP to identify and subtract radio point sources [@Hinshaw07] and the point source mask applied to the data. The WMAP source selection criterion does not correspond to a single flux threshold at a given frequency; rather, it requires that a candidate source be seen with a minimal statistical significance in all channels [@Pierpa03]. As Yadav & Wandelt’s results [@Yadav07], which are the motivation for this paper, were derived using the V and W frequency bands, the radio point source population at these frequencies is the most relevant. Unfortunately, as most radio sources are stronger at low frequencies, they tend to be detected with higher significance in the K – Q bands than in the V – W ones.
In addition to a blind search, at 20–30 GHz it is possible to use lower-frequency catalogs of point sources as tracers for detection [@LopezCaniego07]. As a result, the source population at these frequencies is much better characterized than the one at higher frequencies. By considering flux number counts at low frequencies it is possible to estimate the flux above which the detected point sources form a complete catalog. This flux threshold is estimated to be above 1.1 Jy at 23 GHz (K band) [@LopezCaniego07; @Gonzalez08], while in the W band (94 GHz) the number counts are not sufficiently well determined to allow such an estimate. An alternative blind search technique applied to the WMAP V and W band data increases the number of sources found in the V band by 50% compared to the WMAP team results [@ChenWright07]. These results include some sources that were not contained in the WMAP point source mask. The new sources found, however, do not seem to represent a different population than the ones previously detected. As the WMAP 5-year data is now available, more work on point source characterization at the frequencies where the CMB science is derived should be possible.
The actual residual contribution of point sources depends upon the mask applied to the observed map. In the case of WMAP, this mask includes the WMAP detected sources (about 300) as well as some bright sources from other low-frequency catalogs, for a total of approximately seven hundred masked sources. The actual selection function that this procedure imposes in the V and W bands is poorly understood, and indeed some sources detected by [@ChenWright07] were not masked. However, while the detection threshold implied by this whole procedure is poorly defined, there is good agreement in the residual point source power spectrum contribution as derived by different authors [@Huffenberger06; @ChenWright07; @Hinshaw07]. This can be approximately translated into a flux threshold of 0.6 Jy in the Q band [@Huffenberger06] and 0.75 Jy in the V band [@ChenWright07]. For illustrative purposes, we will adopt here an approximate estimate for the flux cut-off of 0.7 Jy in all WMAP bands. In §\[sec:results\] we discuss the dependence of our results on this choice.
The Planck satellite will have better resolution and sensitivity, resulting in a lower detection threshold for point sources. In the following, we make predictions for the bispectrum expected in the final Planck maps, considering the detection thresholds for a 95% complete sample derived by [@LopezCarniego06] using realistic Planck sky simulations. The flux cut-offs for both WMAP and Planck, as well as the adopted instrument noise parameters, are given in Table \[table:exp\_info\].
Frequency (GHz) $\bar{S}$ (Jy) FWHM (arcmin) $\Delta T/T$
----------------- ---------------- --------------- --------------
WMAP – 33 (Ka) 0.7 41 5.7
WMAP – 41 (Q) 0.7 28 8.2
WMAP – 61 (V) 0.7 21 11.0
WMAP – 94 (W) 0.7 13 18.3
Planck – 30 0.33 33 1.6
Planck – 44 0.36 23 2.4
Planck – 70 0.34 14 3.6
Planck – 100 0.13 11 1.6
: \[table:exp\_info\] Radio point source flux threshold, FWHM, and instrument pixel noise (in $10^{-6}$) for the relevant WMAP and Planck frequency bands. For WMAP the various band names are also listed.
\
Finally, in order to compute the bispectra implied by point sources below a given flux, we adopt the source count predictions of Toffolatti et al. [@Toffolatti98] rescaled by a factor 0.8, as suggested by matching these predictions with the actual number counts at fluxes above 1.1 Jy at 41 GHz obtained by [@Gonzalez08]. We will use the radio point source bias $b^{PS}
\simeq 1.7$ [@Smith07; @Blake04; @Boughn02] as inferred for low-frequency radio point sources. This value is uncertain and almost certainly varies among the various population types that constitute the high-frequency sample.
It is difficult to constrain the redshift distribution due to the lack of optical studies of the $60-96~{\rm GHz}$ source population; work at $23~{\rm GHz}$ has been conducted by Gonzalez et al. for fluxes above 1 Jy [@Gonzalez08]. At these fluxes and frequencies the number counts are dominated by QSOs at relatively high redshifts. This population is well approximated by the following analytical formula: $$\label{eq:n_PS}
n^{PS}(z) \propto 0.75 \times e^{-(z-z_0)^2/(2 \sigma)^2},$$ with $z_0 = 0.95$ and $\sigma = 0.4$ ($0.9$) for $z < 1$ ($z > 1$). This redshift distribution is in agreement with recent studies of radio source populations at 90 GHz with ATCA [@Sadler07]. In addition, a small fraction (10–15%) of the total population consists of radio-loud galaxies, which peak at much lower redshifts ($z \le 0.1$). Given the small number of sources, it is difficult to derive an appropriate fitting formula for this population. In the following, we will adopt the following distribution for the low-redshift sources $$\label{eq:n_PSlow}
n^{PS}(z) \propto 0.25 \times 10^{-3z},$$ which provides a better fit to the counts found by Gonzalez et al. [@Gonzalez08]. We will take the sum of Eqs.\[eq:n\_PS\] & \[eq:n\_PSlow\] with a common proportionality factor determined by requiring $n^{PS}(z)$ to integrate to unity over the range $z=0$ to $z=3.1$. The relative amplitude of the QSO and radio galaxy contributions to the sources is derived from the optical identifications of Gonzalez et al. [@Gonzalez08]. This fit will be called model 1. Gonzalez et al. [@Gonzalez08] also provide a fit to a theoretical model of the luminosity functions of the various source populations as derived from [@DeZotti05]. To understand how the uncertainty in the source redshift distribution affects our results, we also perform calculations with this model; it will be called model 2.
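For concreteness, model 1 can be assembled numerically as follows (a sketch of ours; the trapezoidal grid used for the normalization is arbitrary):

```python
import math

# Sketch of the model-1 radio source redshift distribution: the asymmetric
# Gaussian QSO component (Eq. n_PS, with the (2 sigma)^2 denominator as
# written in the text) plus the low-redshift exponential tail (Eq.
# n_PSlow), normalized to integrate to unity over 0 < z < 3.1.

def n_ps_unnorm(z, z0=0.95):
    sigma = 0.4 if z < 1 else 0.9
    qso = 0.75 * math.exp(-((z - z0) ** 2) / (2 * sigma) ** 2)
    radio_gal = 0.25 * 10 ** (-3 * z)
    return qso + radio_gal

# trapezoidal normalization over z = 0..3.1
zs = [i * 3.1 / 2000 for i in range(2001)]
vals = [n_ps_unnorm(z) for z in zs]
norm = sum(0.5 * (vals[i] + vals[i+1]) * (zs[i+1] - zs[i]) for i in range(2000))

def n_ps(z):
    return n_ps_unnorm(z) / norm

check = sum(0.5 * (n_ps(zs[i]) + n_ps(zs[i+1])) * (zs[i+1] - zs[i]) for i in range(2000))
print(f"integral = {check:.4f}")  # unity by construction; peak near z ~ z0
```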
WMAP only resolves the high-flux sources, which are typically dominated by AGN. Most likely the lower-flux sources consist of a different population (e.g. [@PierpaPerna04; @DeZotti05]) and therefore have a different redshift distribution, with possibly more weight at either lower or higher redshifts. An analogous redshift analysis of higher-frequency catalogs is strongly needed, but is not available at this time. Future investigations of radio point source catalogs, together with WMAP and Planck results at higher frequencies, will help clarify this issue. For the aims of this paper we take Eq. \[eq:n\_PS\] to be the redshift distribution at all frequencies and fluxes, keeping in mind the potential uncertainty that this assumption introduces.
### Thermal Sunyaev-Zeldovich Effect
The hot plasma in the intra-cluster medium will produce temperature anisotropies via Thomson scattering of the incident CMB photons; this is the well-known thermal Sunyaev-Zeldovich (SZ) effect (see @Carlstrom02 for a review). Following the model of Komatsu & Seljak [@Komatsu02a; @Komatsu01], the temperature anisotropies produced by the SZ effect can be expressed as an integral over the cluster mass distribution function $$\label{eq:sz}
\frac{\Delta T}{T}(\hat{\bm n},\nu) = g_{\nu} \int dz \frac{dV}{dz} \int_0^{\infty} dM y(M,z)
\frac{dn}{dM}[M,z;\hat{\bm n} \chi(z)],$$ here the frequency dependence of the thermal SZ effect is given by $$g_{\nu} = x\frac{e^x+1}{e^x-1} - 4,$$ where $x = h\nu/k_B T_{\rm CMB}$. The Compton y-parameter is related to the line-of-sight integral of the cluster’s thermal pressure, $dn/dM$ is the cluster mass function and the volume element is $$\frac{dV}{dz} = \frac{c}{H(z)} \chi^2(z),$$ where $\chi(z)$ is the comoving distance to redshift $z$. The details of the implementation of this model are extensively discussed in [@Komatsu02a]. Ignoring clustering terms in the power spectrum, we find the thermal SZ point source power spectrum $$\label{eq:cl_sz}
C^{SZ}_{\ell}(\nu) = g^2_{\nu} \int dz \frac{dV}{dz} \int_0^{\infty} dM y_{\ell}^2(M,z) \frac{dn}{dM}(M,z).$$
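The spectral function $g_\nu$ defined above is simple to tabulate; the sketch below shows the characteristic SZ decrement at the frequencies considered in this paper and the null near 217 GHz:

```python
import math

# The thermal SZ spectral function g_nu = x (e^x + 1)/(e^x - 1) - 4,
# with x = h nu / k_B T_CMB: negative (a decrement) below the null near
# 217 GHz and positive above it.

h_over_k = 6.626e-34 / 1.381e-23   # h / k_B in K s

def g_nu(nu_ghz, T_cmb=2.728):
    x = h_over_k * nu_ghz * 1e9 / T_cmb
    return x * (math.exp(x) + 1) / (math.exp(x) - 1) - 4

for nu in (30, 44, 70, 100, 217, 353):
    print(f"g_nu({nu} GHz) = {g_nu(nu):+.3f}")
```

In the Rayleigh-Jeans limit $g_\nu \to -2$, the familiar low-frequency SZ decrement.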
In a similar fashion to the radio point sources, the number density of massive clusters will be changed by both the biasing effect of the large scale matter overdensity and gravitational lensing magnification. In order to determine how these processes will affect the SZ power spectrum we need to know the redshift distribution of power in the SZ effect. The weighted redshift distribution of power produced by the SZ effect can be expressed as the integral over halo mass of the Sheth-Tormen [@Sheth99] halo mass function $$\label{eq:n_SZ}
n^{SZ}(z) \propto \frac{dV}{dz} \int_{M_{\rm min}}^{M_{\rm max}} dM M^{2\alpha} \frac{dn}{dM}(M,z).$$ The slope of the cluster y-parameter – mass scaling relationship is taken to be $\alpha = 1.6$ [@Nagai06]. The limits of integration are taken to be $M_{\rm min}= 10^{14} M_{\odot}$ and $M_{\rm max} = 5\times 10^{15} M_{\odot}$. The redshift distribution is normalized so that it integrates to unity. We also need the bias-weighted redshift distribution of power produced by the SZ effect, which can be expressed as an integral over the Sheth-Tormen halo mass function $$\label{eq:bn_SZ}
(bn)^{SZ}(z) \propto \frac{dV}{dz} \int_{M_{\rm min}}^{M_{\rm max}} dM M^{2\alpha} \frac{dn}{dM}(M,z)~b(M,z),$$ where $b(M,z)$ is the standard bias of the Sheth-Tormen mass function.
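The bias $b(M,z)$ entering Eq. \[eq:bn\_SZ\] can be written in terms of the peak height $\nu = \delta_c/\sigma(M,z)$. Below is a minimal sketch of the Sheth–Tormen bias formula; the mapping from halo mass $M$ to $\sigma(M,z)$ requires the linear matter power spectrum and is deliberately omitted, so `nu` is taken directly as input:

```python
# Sheth-Tormen halo bias as a function of peak height nu = delta_c / sigma(M, z).
# a and p are the Sheth & Tormen (1999) mass-function parameters; delta_c is
# the spherical-collapse threshold.  Converting a halo mass M into sigma(M, z)
# needs the linear matter power spectrum, which this sketch omits.
DELTA_C = 1.686
A_ST = 0.707
P_ST = 0.3

def bias_st(nu):
    """Eulerian bias b(nu) of the Sheth-Tormen mass function."""
    a_nu2 = A_ST * nu**2
    return (1.0
            + (a_nu2 - 1.0) / DELTA_C
            + (2.0 * P_ST / DELTA_C) / (1.0 + a_nu2**P_ST))
```

Rare, massive clusters ($\nu \gg 1$) are strongly biased, which is why the weight function of Eq. \[eq:bn\_SZ\] is enhanced relative to that of Eq. \[eq:n\_SZ\].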
### Integrated Sachs-Wolfe Effect
Most of the power in the CMB temperature anisotropies is produced at high redshift during recombination. There will be very little cross-correlation between these temperature anisotropies and the low redshift matter overdensity responsible for altering the small scale point source power spectrum. However, additional CMB temperature anisotropies can be generated at low redshift via the Integrated Sachs-Wolfe (ISW) effect if the gravitational potential fluctuations are evolving in time. In a fully matter dominated regime the gravitational potential fluctuations are static; once the universe becomes dark energy dominated, however, the potentials decay and the ISW effect occurs.
The temperature anisotropy produced by the ISW effect can be expressed as $$\begin{aligned}
\frac{\Delta T}{T}(\hat{\bm n}) &=& -2 \int dz \frac{\partial \Phi}{\partial z}, \\
&=& 3 H^2_0 \Omega_M \int dz [(1+z) D(z)]' \int \frac{d^3{\bm k}}{(2\pi)^3} e^{i{\bm k} \cdot \hat{\bm n} \chi}
\frac{\delta({\bm k})}{k^2},\end{aligned}$$ where $D(z)$ is the linear theory growth function and the prime denotes differentiation with respect to $z$.
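The ISW kernel $[(1+z)D(z)]'$ vanishes identically in a fully matter-dominated (Einstein–de Sitter) universe, where $D \propto a$. A short sketch, assuming flat $\Lambda$CDM and the standard growth-factor integral, makes this explicit:

```python
import numpy as np
from scipy.integrate import quad

def growth_D(a, omega_m):
    """Linear growth factor D(a) for flat LCDM, normalized so that D = a
    exactly in an Einstein-de Sitter universe (omega_m = 1)."""
    omega_l = 1.0 - omega_m
    E = lambda x: np.sqrt(omega_m / x**3 + omega_l)  # H(a) / H_0
    # D(a) = (5/2) omega_m E(a) * int_0^a da' / (a' E(a'))^3; the integrand is
    # rewritten so it stays finite at a' = 0.
    integrand = lambda x: (x / (omega_m + omega_l * x**3))**1.5
    integral, _ = quad(integrand, 0.0, a)
    return 2.5 * omega_m * E(a) * integral

def isw_weight(z, omega_m, dz=1e-4):
    """ISW redshift kernel [(1+z) D(z)]' via a centered finite difference."""
    f = lambda zz: (1.0 + zz) * growth_D(1.0 / (1.0 + zz), omega_m)
    return (f(z + dz) - f(z - dz)) / (2.0 * dz)
```

For $\Omega_M = 1$ the weight $(1+z)D(z)$ is constant and the kernel vanishes; for $\Omega_M \simeq 0.3$ it is positive and largest at low redshift, which is why the ISW effect overlaps most strongly with the low-redshift SZ clusters in Fig. \[fig:nz\].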
![\[fig:nz\] Redshift weight functions for the ISW effect (green, dot-dashed); radio point sources – model 1 (red, dashed) and model 2 (blue, long-dashed); and thermal SZ effect (solid, black).](overlap.eps){width="10cm" height="10cm"}
In Fig. \[fig:nz\] we show the redshift weight functions for the ISW effect $[(1+z)D(z)]'$ (green, dot-dashed); the radio point sources $b^{PS} n^{PS}(z) D(z)$ – model 1 (red, dashed) and model 2 (blue, long-dashed); and the thermal SZ effect $(bn)^{SZ}(z) D(z)$ (solid, black). The overlap of these weight functions will determine the amplitude of the cross-correlation spectra as described in §\[sec:cross\] & \[sec:mag\] and shown in Figs. \[fig:cross\_power\] & \[fig:mag\_power\].
Point Source Bispectrum {#sec:ps_bisp}
-----------------------
The simplest bispectrum form produced in the CMB is due to radio point source Poisson fluctuations. The reduced bispectrum is independent of scale and can be written as $$b_{\ell_1, \ell_2, \ell_3} = c_{\nu}^{-3} \int_0^{\bar{S}} dS S^3 \frac{\bar{dN}}{dS}(S,\nu).$$ This bispectrum component has been detected in the WMAP data [@Komatsu03]. Since its functional form is not similar to the local model, it will not significantly bias the estimator, and we ignore it, as did Yadav & Wandelt. The five-year WMAP analysis [@Komatsu08] includes estimates of (and corrections for) the estimator bias produced by this bispectrum form.
Number Density Modulation {#sec:cross}
-------------------------
The Poisson fluctuation power spectrum in a certain region of the sky can be written as an integral over the distribution of sources in that region. If the anisotropic component of the power spectrum correlates with any other signal present in the data, non-Gaussian correlation functions will be generated. In this subsection we focus on the correlation between the large scale matter overdensity, which biases the number density of point sources, and the ISW effect.
The power spectrum produced by radio point sources in direction $\hat{\bm n}$ is $$C^{PS}(\hat{\bm n}) = c_{\nu}^{-2} \int_0^{\bar{S}} dS S^2 \frac{dN}{dS}(S,\nu;\hat{\bm n}),$$ where the anisotropic point source distribution can be expressed in terms of the matter overdensity as $$\label{eq:ps_n}
\frac{dN}{dS}(S,\nu;\hat{\bm n}) = \frac{\bar{dN}}{dS}(S,\nu) \left[ 1 + b^{PS}\int dz
n^{PS}(z) \delta(\hat{\bm n},z) \right],$$ where the mean point source distribution was described in §\[sec:ps\]. The large scale matter overdensity, which biases the local number density of point sources as shown in Eq. \[eq:ps\_n\], will be correlated with the large scale CMB temperature anisotropies produced by the ISW effect, inducing the reduced bispectrum $$b_{\ell_1, \ell_2, \ell_3} = 2 \bar{C}^{PS} (X^{PS}_{\ell_1} + \mathrm{cyc.}).$$ Here the isotropic source distribution $\bar{dN}/dS$ leads to an isotropic power spectrum $\bar{C}^{PS}$. The matter-ISW cross correlation spectrum can be expressed as $$\label{eq:x_ps}
X^{PS}_{\ell} = \frac{3 H^2_0 \Omega_M b^{PS}}{\ell^2} \int \frac{H(z) dz}{c} P\left[\frac{\ell}{\chi(z)}\right]
D(z) n^{PS}(z) [(1+z) D(z)]',$$ where $P(k)$ is the matter power spectrum and $b^{PS} \simeq 1.7$ is the radio point source bias. The cross-correlation spectrum is positive because matter overdensities correspond to potential wells: at low redshift, as these wells decay, CMB photons traversing them experience a net blueshift. In Eq. \[eq:x\_ps\] we have employed Limber’s approximation to simplify the cross-correlation spectrum. The function $n^{PS}(z)$ is the redshift probability distribution function of the radio point sources defined in Eq. \[eq:n\_PS\]. The large scale matter overdensity affects the thermal SZ power spectrum in an analogous fashion.
In Fig. \[fig:cross\_power\] we show the matter-ISW cross correlation spectrum for the thermal SZ effect (solid, black) and the radio point sources – model 1 (red, dashed), model 2 (blue, long-dashed). The thermal SZ effect has the largest cross correlation because the clusters tend to be located at lower redshift than the radio point sources. The ISW effect primarily occurs at low redshift once the universe is strongly dark energy dominated, so it most strongly overlaps with the SZ effect.
![\[fig:cross\_power\] Density-ISW cross-correlation spectrum for thermal SZ (solid, black); radio point sources – model 1 (red, dashed) and model 2 (blue, long-dashed).](cross.eps){width="10cm" height="10cm"}
In Fig. \[fig:cross\_bispect\] we show the number density modulation bispectrum for the thermal SZ effect (solid, black) and the radio point sources – model 1 (red, dashed), model 2 (blue, long-dashed). Also shown for reference is the equilateral shape $(\ell_1 = \ell_2 = \ell_3 = \ell)$ of the primordial local model bispectrum ($f_{NL} = 1$) (green, dot-dashed). The equilateral shape of the local model bispectrum changes sign; the zero-crossings are apparent in the plot. This is a consequence of the radiative transfer functions producing both hot and cold regions on the sky. The collapsed shape always has the same sign, opposite to the sign of $f_{NL}$. For the local model the collapsed bispectra have the highest signal-to-noise.
![\[fig:cross\_bispect\] Number density modulation bispectrum for thermal SZ (solid, black); radio point sources – model 1 (red, dashed) and model 2 (blue, long-dashed). The equilateral shape $(\ell_1 = \ell_2 = \ell_3 = \ell)$ of the local model ($f_{NL} = 1$) (green, dot-dashed) is also shown for reference.](bispect_x.eps){width="10cm" height="10cm"}
Magnification Modulation {#sec:mag}
------------------------
The distribution of matter along the line-of-sight between the observer and the sources will gravitationally lens these point sources. The principal effect of gravitational lensing is to change the observed source density by magnifying and de-magnifying certain regions of the sky. The magnification in a given direction can be written, to first order, as $$\mu(\hat{\bm n}) \simeq 1 + 2 \kappa(\hat{\bm n}).$$ The convergence field distorting a background source at $z$ is related to the matter overdensity as $$\kappa(\hat{\bm n}, z) = \frac{3 \Omega_M H^2_0}{2} \int \frac{c dz'}{H(z')} (1+z') \frac{\chi(z')}{\chi(z)} [\chi(z)-\chi(z')]
\delta(\hat{\bm n},z'),$$ and the average magnification of SZ point sources in a given direction is then given by $$\kappa(\hat{\bm n}) = \int dz n^{SZ}(z) \kappa(\hat{\bm n}, z).$$ The observed SZ cluster number density in a given direction will be related to the intrinsic number density as $$\begin{aligned}
\frac{dn^{obs}}{dM}(M,\hat{\bm n}) &=& \frac{1}{\mu(\hat{\bm n})} \frac{dn}{dM}(M,\hat{\bm n}),
\\
&\simeq& [1 - 2\kappa(\hat{\bm n})] \frac{dn}{dM}(M,\hat{\bm n}).\end{aligned}$$ The convergence field is correlated with the CMB temperature anisotropies produced via the ISW effect, $$\label{eq:m_sz}
M^{SZ}_{\ell} = \frac{9 \Omega^2_M H^4_0}{2\ell^2} \int_0^{\infty} dz n^{SZ}(z)
\int_0^{z} dz_1 P\left[\frac{\ell}{\chi(z_1)}\right] (1+z_1) D(z_1) [(1+z_1) D(z_1)]' \frac{\chi(z_1)}{\chi(z)} [\chi(z)-\chi(z_1)].$$ Again we have evaluated the cross-correlation according to Limber’s approximation and the convergence-ISW cross-correlation spectrum is positive. This magnification effect will result in the following reduced bispectrum $$\label{eq:lensing_sz}
b_{\ell_1, \ell_2, \ell_3} = -2 [M^{SZ}_{\ell_1} (C^{SZ}_{\ell_2} + C^{SZ}_{\ell_3})
+ M^{SZ}_{\ell_2} (C^{SZ}_{\ell_1} + C^{SZ}_{\ell_3})
+ M^{SZ}_{\ell_3} (C^{SZ}_{\ell_1} + C^{SZ}_{\ell_2})].$$ The bispectrum is negative because large scale CMB hot spots (positive ISW effect) correlate with positive magnification, which always reduces the amplitude of point source Poisson fluctuations.
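The cyclic sum in Eq. \[eq:lensing\_sz\] is easy to mishandle when implemented; a small helper — a sketch, with the precomputed spectra $M_\ell$ and $C_\ell$ assumed to be passed in as arrays indexed by multipole — makes the required permutation symmetry explicit:

```python
import numpy as np

def magnification_bispectrum(l1, l2, l3, M, C):
    """Reduced bispectrum b_{l1 l2 l3} = -2 sum_cyc M_{l1} (C_{l2} + C_{l3}),
    with M[l] the convergence-ISW cross spectrum and C[l] the point source
    power spectrum, both indexed by multipole l."""
    return -2.0 * (M[l1] * (C[l2] + C[l3])
                   + M[l2] * (C[l1] + C[l3])
                   + M[l3] * (C[l1] + C[l2]))
```

Since $M_\ell$ and $C_\ell$ are both positive here, the result is negative for every multipole triplet, in line with the sign argument above.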
The radio point source selection function is expressed in terms of the observed flux, which can be affected by gravitational lensing due to matter along the line-of-sight. The gravitational lensing magnification will modulate the flux cut-off and be correlated with the ISW temperature anisotropies. The point source dilution effect discussed above will also occur. The radio point source Poisson fluctuation power spectrum can be expressed as $$C^{PS}(\hat{\bm n}) = c_{\nu}^{-2} \int_0^{\bar{S}/\mu(\hat{\bm n})} dS S^2 \frac{1}{\mu(\hat{\bm n})} \frac{dN}{dS}(S,\nu).$$ Linearizing in the convergence field, we find that the Poisson fluctuation power spectrum acquires an anisotropic component $$C^{PS}(\hat{\bm n}) = 2 c_{\nu}^{-2} \kappa(\hat{\bm n}) \left[ \bar{S}^3 \frac{dN}{dS}(\bar{S},\nu) + \bar{C}^{PS} \right].$$ This will result in the following reduced bispectrum $$\label{eq:lensing}
b_{\ell_1, \ell_2, \ell_3} = -4(M^{PS}_{\ell_1} + {\rm cyc.}) \left[ \bar{S}^3 \frac{dN}{dS} (\bar{S}, \nu) + \bar{C}^{PS} \right],$$ where $M^{PS}_{\ell}$ is the radio point source version of Eq. \[eq:m\_sz\]. Note that gravitational lensing affects the radio point source power spectrum by changing both the upper flux cut-off and the source counts, whereas the SZ power spectrum is only altered by changes in the local cluster counts. If the SZ clusters are detected with high signal-to-noise and removed from the CMB maps then the flux cut-off modulation effect will also produce an additional bispectrum term.
In Fig. \[fig:mag\_power\] we show the convergence-ISW cross correlation spectrum for the thermal SZ effect (solid, black) and the radio point sources – model 1 (red, dashed), model 2 (blue, long-dashed). Model 2 of the radio point sources has the largest cross correlation because it predicts that the point sources tend to be located at higher redshift. As opposed to the matter-ISW cross-correlation, which requires that the sources lie in the same redshift range over which the ISW effect occurs, in this case the point sources do not have to be at the same redshift at which the ISW effect occurs for their contribution to be relevant. In fact, the point sources experience greater magnification if they are at substantially higher redshift than the matter distribution that is simultaneously magnifying them and is correlated with the ISW effect.
![\[fig:mag\_power\] Convergence-ISW cross-correlation spectrum for thermal SZ (solid, black); radio point sources – model 1 (red, dashed) and model 2 (blue, long dashed)](mag.eps){width="10cm" height="10cm"}
In Fig. \[fig:mag\_bispect\] we show the magnification modulation bispectrum for the thermal SZ effect (solid, black); radio point sources – model 1 (red, dashed) and model 2 (blue, long-dashed). The local model bispectrum ($f_{NL} = 1$) (green, dot-dashed) is also shown for reference.
![\[fig:mag\_bispect\] Magnification modulation bispectrum for thermal SZ effect (solid, black); radio point sources – model 1 (red, dashed) and model 2 (blue, long-dashed). The local model bispectrum ($f_{NL} = 1$) – equilateral shape $(\ell_1 = \ell_2 = \ell_3 = \ell)$ (green, dot-dashed) is also shown for reference.](bispect_m.eps){width="10cm" height="10cm"}
Selection Modulation
--------------------
The amplitude of the Poisson fluctuation power spectrum depends on the number density of radio point sources below some flux limit determined by the radio point source removal technique. If the selection criterion used produces an anisotropic flux limit that is correlated with either the large scale temperature anisotropy or the instrument noise, then a bispectrum will be produced. If the radio point sources are identified via external catalogs, such as NVSS or PMN, then there will be no cross-correlation and therefore no bispectrum. If multi-wavelength data are differenced to isolate the power-law frequency dependence of the radio point sources, then the modulated selection function will produce correlations between the data in different frequency bands.
The simplest method is to remove pixels above some multiple ($\gamma$) of the pixel rms $\sigma_p$: $$\bar{S}(\hat{\bm n}) = c_{\nu}\left[\gamma \sigma_p - \Delta \Omega \left( \frac{\Delta T}{T}(\hat{\bm n})
+ N(\hat{\bm n}) \right)\right].$$ The temperature anisotropy and instrument noise have vanishing expectation values, so the mean flux cut-off is $$\bar{S}_0 = c_{\nu} \gamma \sigma_p,$$ where the pixel variance is defined as $$\sigma^2_p = (\Delta \Omega)^2 \sum_{\ell} \frac{(2\ell + 1)}{4\pi} C^T_{\ell},$$ with $C^T_{\ell} = C_{\ell} + C^N_{\ell}$ the sum of the CMB signal and noise spectra and $\Delta \Omega$ the pixel size. There will be fluctuations about the expected flux cut-off that are correlated with either the temperature anisotropy or the instrument noise. Linearizing in these fluctuations, we find the following reduced bispectrum $$b_{\ell_1, \ell_2, \ell_3} = - \frac{\Delta \Omega}{c_{\nu}} (C^T_{\ell_1} + {\rm cyc.}) \bar{S}^2_0 \frac{dN}{dS}(\bar{S}_0,\nu).$$
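As a concrete illustration of this simple thresholding step, the sketch below evaluates the pixel variance and the mean flux cut-off for a toy, constant $C^T_\ell$; the values of $\Delta\Omega$, $\gamma$ and $c_\nu$ are placeholders, not instrument parameters:

```python
import numpy as np

def pixel_variance(cl_total, pixel_area):
    """sigma_p^2 = (Delta Omega)^2 sum_l (2l + 1)/(4 pi) C_l^T,
    with cl_total[l] holding C_l^T for l = 0 .. l_max."""
    ells = np.arange(len(cl_total))
    return pixel_area**2 * np.sum((2 * ells + 1) / (4.0 * np.pi) * cl_total)

def mean_flux_cutoff(cl_total, pixel_area, gamma, c_nu):
    """Mean flux cut-off S_0 = c_nu * gamma * sigma_p for the threshold cut."""
    return c_nu * gamma * np.sqrt(pixel_variance(cl_total, pixel_area))
```

The cut-off scales linearly with the threshold $\gamma$: a more aggressive cut (smaller $\gamma$) lowers $\bar{S}_0$ and hence the residual Poisson power.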
The selection criterion used by WMAP is based on an algorithm developed by Tegmark & de Oliveira-Costa [@Tegmark98]. The map is filtered in order to reduce the importance of the long-wavelength CMB modes. The filtered total temperature fluctuation in some pixel is $$y(\hat{\bm n}) = \sum_{\ell m} \frac{1}{C^T_{\ell}} Y_{\ell m}(\hat{\bm n}) (a_{\ell m} + n_{\ell m}) \Delta \Omega
+ \sum_{\ell} \frac{(2\ell + 1)}{4\pi} \frac{S(\hat{\bm n})}{c_{\nu} C^T_{\ell}}.$$ The algorithm removes any pixel that has a value greater than some multiple ($\gamma$) of the filtered map pixel variance $$\tilde{\sigma}^{-2}_p = (\Delta \Omega)^{-2} \sum_{\ell} \frac{(2\ell + 1)}{4\pi} \frac{1}{C^T_{\ell}}.$$ The threshold $\gamma$ is chosen as a compromise between false positives and false negatives. Since the thresholding is applied to the total signal in a pixel, the corresponding radio point source flux cut-off in a pixel is $$\bar{S}(\hat{\bm n}) = \frac{c_{\nu}}{F} \left[ \gamma \tilde{\sigma}_p
- \Delta \Omega \sum_{\ell m} \frac{1}{C^T_{\ell}} Y_{\ell m}(\hat{\bm n}) (a_{\ell m} + n_{\ell m}) \right],$$ where the normalization is $$F = \sum_{\ell} \frac{(2\ell + 1)}{4\pi} \frac{1}{C^T_{\ell}}.$$ The temperature anisotropy and instrument noise have vanishing expectation values, so the mean flux cut-off is $$\bar{S}_0 = \frac{c_{\nu}}{F} \gamma \tilde{\sigma}_p.$$ There will be fluctuations about this expected value and these fluctuations will be correlated with either the temperature anisotropy or the instrument noise. Linearizing in these fluctuations, we find the following reduced bispectrum $$\label{eq:select}
b_{\ell_1, \ell_2, \ell_3} = - \frac{3 \Delta \Omega}{c_{\nu} F} \bar{S}^2_0 \frac{dN}{dS} (\bar{S}_0, \nu).$$ In regions with large instrument noise or CMB temperature anisotropy, the radio point source flux cut-off will be lowered. This reduces the total number density of radio point sources in that region and therefore the Poisson fluctuation power spectrum. This effect explains the negative sign in the reduced bispectrum, Eq. \[eq:select\].
Since the filtering applied to the maps produces a bispectrum independent of scale, similar to the point source bispectrum described in §\[sec:ps\_bisp\], we will ignore it. In reality the actual selection function is more complicated than this simple filter technique suggests, and it might produce a bispectrum of a much different form. Numerical simulations incorporating the exact selection procedure will need to be done in order to fully determine its effect on the estimator.
Numerical Results {#sec:results}
=================
In this section we will present numerical results for the contamination of the standard non-Gaussianity estimator by the various bispectra discussed in this paper. This is done for both the WMAP and Planck instrument noise levels, frequency bands and flux cut-offs given in Table \[table:exp\_info\].
In Fig. \[fig:wmap\] the $f_{NL}$ estimator bias as a function of $\ell_{max}$ is shown for the different bispectra – radio point source number density modulation (solid, black); SZ number density modulation (dotted, red); radio point source gravitational lensing magnification modulation (dashed, blue); and SZ gravitational lensing magnification modulation (long-dashed, green) with WMAP instrument noise. The bias plots are shown for the four relevant WMAP frequency bands – upper left Ka – $33~{\rm GHz}$; upper right Q – $40~{\rm GHz}$; lower left V – $61~{\rm GHz}$; lower right W – $94~{\rm GHz}$. The magnification modulation effect produces a positive bias since its bispectrum is negative, while the density modulation effect produces a negative bias.
![\[fig:wmap\] The WMAP estimator bias terms $\Delta f^{\alpha}_{NL}$ as a function of $\ell_{max}$ for radio point source density modulation (solid, black); SZ number density modulation (dotted, red); radio point source gravitational lensing magnification modulation (dashed, blue); and SZ gravitational lensing magnification modulation (long-dashed, green) in Ka – $33~{\rm GHz}$ (upper left); Q – $40~{\rm GHz}$ (upper right); V – $61~{\rm GHz}$ (lower left); and W – $94~{\rm GHz}$ (lower right). The density modulation terms produce a negative bias, while the magnification terms produce a positive bias.](wmap_bias.eps)
Yadav & Wandelt claim a central value of $f_{NL} = 86.8$ with a standard deviation of $\sigma = 30.0$ for a $2.9 \sigma$ detection of non-Gaussianity [@Yadav07]. In the five-year WMAP data the central value is found to be $f_{NL} = 67$ with a standard deviation of $\sigma = 31$ [@Komatsu08]. At $\ell_{max} = 750$ the total estimator bias is $\Delta f_{NL} = 0.35$ in the Ka band, $\Delta f_{NL} = 0.24$ in the Q band, $\Delta f_{NL} = -0.097$ in the V band and $\Delta f_{NL} = -0.13$ in the W band. Since the density modulation and the magnification modulation bispectra have different signs they partially cancel and reduce the overall effect. At low frequency the radio point source magnification modulation bispectrum is the most important, so the bias is positive. At higher frequencies the SZ density modulation bispectrum dominates, which makes the bias negative. These numbers should be compared to the estimator bias produced by the radio point source Poisson fluctuation bispectrum. Komatsu et al. [@Komatsu08] have estimated this bias to be $\Delta f_{NL} \simeq -(3-5)$ at $\ell_{max} = 700$.
In Fig. \[fig:planck\] the $f_{NL}$ estimator bias as a function of $\ell_{max}$ is shown for the different bispectra – radio point source number density modulation (solid, black); SZ number density modulation (dotted, red); radio point source gravitational lensing magnification modulation (dashed, blue); and SZ gravitational lensing magnification modulation (long-dashed, green) for Planck at $30~{\rm GHz}$ (upper left); $44~{\rm GHz}$ (upper right); $70~{\rm GHz}$ (lower left) and $100~{\rm GHz}$ (lower right).
![\[fig:planck\] The Planck estimator bias terms $\Delta f^{\alpha}_{NL}$ for $30~{\rm GHz}$ (upper left); $44~{\rm GHz}$ (upper right); $70~{\rm GHz}$ (lower left) and $100~{\rm GHz}$ (lower right). The curves are the same as in Fig. \[fig:wmap\]. The density modulation terms produce a negative bias, while the magnification terms produce a positive bias.](planck_bias.eps)
Estimates of the sensitivity on $f_{NL}$ achievable with Planck suggest that $\Delta f_{NL} \simeq 10$ at 95% C.L. using just temperature information and $\Delta f_{NL} \simeq 5$ at 95% C.L. also including polarization. Summing the various biases we find $\Delta f_{NL} = 1.3$ at $\nu = 30$ GHz, $\Delta f_{NL} = 0.34$ at $\nu = 44$ GHz, $\Delta f_{NL} = -0.25$ at $\nu = 70$ GHz and $\Delta f_{NL} = -0.48$ at $\nu = 100$ GHz. These results imply that a good knowledge of point source properties is important if Planck is to achieve its full potential in constraining primordial non-Gaussianity.
There are uncertainties in the models we have used to describe the radio point sources, and these model uncertainties directly lead to uncertainties in the above predictions. As can be seen in Figs. \[fig:cross\_power\] – \[fig:mag\_bispect\], the differences between the radio point source redshift distributions, model 1 and model 2, are not significant. This is not surprising, as the ISW kernel, with which the radio point source redshift distributions are being cross-correlated, is quite broad. The redshift distributions are meant to trace the high–flux source populations at low frequencies. However, they may not be representative of the source populations at all frequencies considered here for lower flux cut thresholds. If a lower flux cut implies a higher population of low–redshift objects, the amplitude of the ISW-density cross-correlation spectrum would be increased. The decrease in the flux cut-off will decrease the Poisson fluctuation power spectrum, so the net change in the bias of the $f_{NL}$ estimator is not clear. We also note that reducing the flux cut–off from the WMAP to the Planck level does not reduce the implied bias on $f_{NL}$, as the Planck noise levels and beam sizes are also smaller.
Conclusion {#sec:conc}
==========
In this paper we analyzed the effect of point sources, both due to radio emission and the cluster SZ effect, on the estimation of primordial non-Gaussianity in the cosmic microwave background. The standard non-Gaussianity estimator is sensitive to any bispectrum present in the data. In addition to the standard Poisson fluctuation bispectrum, we found that cross-correlations between the radio point source and SZ power spectra and either the CMB temperature anisotropies or instrument noise can produce bispectra. These bispectra have forms somewhat similar to the local model, which is the standard bispectrum form used to search for primordial non-Gaussianity in CMB data. These similarities are not accidental, but occur because the same basic principle generates the non-Gaussianity in these cases. Due to this similarity it will be much more difficult to distinguish this non-Gaussianity from the primordial signal than other secondary bispectra with shapes which can be quite different.
A related paper by Serra & Cooray [@Serra08] has examined different secondary bispectra, but reached similar conclusions. They examined the bispectra produced by the cross-correlation of the thermal SZ effect and the gravitational lensing of the primary CMB anisotropies. They concluded that the effects are too small to account for the inferred values of $f_{NL}$ from the WMAP data and will start to become important for the Planck dataset.
The estimator bias that we have calculated is quite small and is not able to explain the results found by Yadav & Wandelt, although the bispectra considered in our paper will start to become important for the Planck non-Gaussianity analysis. Due to the extremely important nature of the detection of primordial non-Gaussianity in the WMAP data, all possible alternative explanations should be considered.
DB acknowledges financial support from the Betty and Gordon Moore Foundation and would like to thank Sterl Phinney, Daisuke Nagai and Eiichiro Komatsu for helpful conversations. EP is an NSF–ADVANCE fellow (AST–0649899), also supported by NASA grant NNX07AH59G, Planck subcontract 1290790 and JPL SURP award 1314616. She would also like to thank Kevin Huffenberger and Joachin Gonzales-Nuevo for useful conversations. We also thank Kendrick Smith for helpful comments on a draft of the paper.
---
abstract: 'Because of their intense incident stellar irradiation and likely tidally locked spin states, hot Jupiters are expected to have wind speeds that approach or exceed the speed of sound. In this work we develop a theory to explain the magnitude of these winds. We model hot Jupiters as planetary heat engines and show that hot Jupiters are always less efficient than an ideal Carnot engine. Next, we demonstrate that our predicted wind speeds match those from three-dimensional numerical simulations over a broad range of parameters. Finally, we use our theory to evaluate how well different drag mechanisms can match the wind speeds observed with Doppler spectroscopy for HD 189733b and HD 209458b. We find that magnetic drag is potentially too weak to match the observations for HD 189733b, but is compatible with the observations for HD 209458b. In contrast, shear instabilities and/or shocks are compatible with both observations. Furthermore, the two mechanisms predict different wind speed trends for hotter and colder planets than currently observed. As a result, we propose that a wider range of Doppler observations could reveal multiple drag mechanisms at play across different hot Jupiters.'
author:
- 'Daniel D.B. Koll$^{1,3}$ and Thaddeus D. Komacek$^{2,3}$'
title: Atmospheric Circulations of Hot Jupiters as Planetary Heat Engines
---
Introduction {#sec:intro}
============
Hot Jupiters provide a unique laboratory for testing our understanding of planetary atmospheres. @showman_2002 were the first to consider the atmospheric circulations of these planets. Using numerical simulations, @showman_2002 predicted that hot Jupiters should develop strongly superrotating equatorial jets, with wind speeds up to several kilometers per second. This prediction was confirmed by subsequent observations which showed that the thermal emission peak on many hot Jupiters is shifted eastwards from the substellar point, consistent with heat being advected downwind by a superrotating jet [e.g., @Knutson_2007; @Crossfield:2010].
More recent observations have started to directly constrain the wind speeds of these jets. High-resolution transmission spectra have found Doppler shifts in molecular absorption lines for HD 209458b [@Snellen:2010] as well as HD 189733b [@Wyttenbach:2015; @Louden:2015; @Brogi:2015]. The significant ($\sim$ several km s$^{-1}$) blueshifts detected for both planets imply rapid dayside-to-nightside winds that are broadly consistent with the wind speeds predicted by a range of numerical simulations [@showman_2002; @Showmanetal_2009; @Heng:2011a; @showman_2013_doppler; @Komacek:2017].
Although it is qualitatively understood why hot Jupiters develop equatorial jets, there is still no general theory that explains the jets’ magnitude. Hot Jupiters are very likely tidally locked. This orbital spin state creates a strong day-night forcing which excites standing waves that flux angular momentum towards the equator and drive equatorial superrotation [@Showman_Polvani_2011]. The strength of superrotation should therefore depend on the ratio between horizontal wave propagation and radiative cooling timescales [@Koll:2014; @Komacek:2015; @Zhang:2016]. This basic expectation is complicated, however, by results which show that the jet’s state depends on both horizontal standing waves and vertical eddies [@Tsai:2014; @Showman:2014], and it is still unclear how the two mechanisms jointly determine the jet’s magnitude.
In this paper we constrain the wind speeds of hot Jupiters by modeling their atmospheric circulations as planetary heat engines. The utility of this approach has previously been demonstrated for hurricanes on Earth [@Emanuel:1986] and rocky exoplanets [@Koll:2016]. Atmospheric circulations can be considered heat engines because parcels of fluid tend to absorb heat at a high temperature (e.g., on the dayside of a hot Jupiter) and emit heat at a low temperature (on the nightside). The differential heating and cooling allows parcels to generate work, and thus kinetic energy, which in steady state has to be balanced by the dissipation of kinetic energy via friction.
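The Carnot bound invoked here is simply $\eta = 1 - T_c/T_h$: work output per unit of absorbed heat cannot exceed it. A two-line sketch, with illustrative day/night temperatures rather than values from our simulations:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of absorbed heat that an ideal engine operating
    between reservoirs at t_hot and t_cold (in Kelvin) can convert to work."""
    if t_cold >= t_hot:
        raise ValueError("require t_cold < t_hot")
    return 1.0 - t_cold / t_hot

# Illustrative (assumed) dayside and nightside photosphere temperatures;
# any real atmospheric heat engine falls below this bound.
eta_max = carnot_efficiency(1800.0, 1000.0)  # 4/9, i.e. ~44%
```

In steady state the work generated at this (or lower) efficiency must be balanced by frictional dissipation, which is what ties the wind speed to the drag mechanism.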
In contrast to hurricanes and the atmospheres of rocky exoplanets, however, it is still poorly understood how hot Jupiters dissipate kinetic energy [@Goodman:2009]. Potential mechanisms include magnetic drag in partially ionized atmospheres [@Perna_2010_1; @Menou:2012fu; @Rauscher_2013; @Rogers:2020], shocks in supersonic flows [@Li:2010; @Heng:2012a; @perna_2012; @Dobbs-Dixon:2013; @Fromang:2016], and turbulence induced by fluid instabilities such as the Kelvin-Helmholtz instability [@Li:2010; @Fromang:2016].
Our goal is to evaluate these proposed mechanisms and to test which of them are able to match current observations. To do so we first describe our numerical simulations ([Section \[sec:methods\]]{}). Next, we develop the heat engine framework and test it with the numerical simulations ([Section \[sec:theory\]]{}). Finally, we apply our framework to observations ([Section \[sec:data\]]{}) and state our conclusions ([Section \[sec:conc\]]{}). Our results show that current observations favor shear instabilities and/or shocks as the dominant drag mechanism for HD 189733b, and motivate extending similar observations across a wider range of planets.
Numerical simulations {#sec:methods}
=====================
We compare our theory with the GCM simulations that were previously described in [@Komacek:2017]. In summary, the simulations use the MITgcm [@adcroft:2004] to solve the atmospheric fluid dynamics equations coupled to double-gray radiative transfer with planetary parameters relevant for a typical hot Jupiter, HD 209458b. The double-gray approximation divides the spectrum into an incoming collimated (stellar) component and a diffuse thermal component. The absorption coefficients were chosen to match more detailed radiative transfer calculations: the absorption coefficient for incoming stellar radiation is set to a uniform value, $\kappa_{SW} = 4 \times 10^{-4}$ m$^{2}$ kg$^{-1}$, while the thermal absorption coefficient varies approximately with the square root of pressure, $\kappa_{LW} = 2.28 \times 10^{-6}$ m$^{2}$ kg$^{-1}$ $\times \ (p/\mathrm{1~Pa})^{0.53}$, where the power-law exponent comes from fitting the analytic model of [@Parmentier:2014] and [@Parmentier:2014a] to radiative transfer models with realistic opacities. With these values the photosphere (where the optical thickness equals unity) for stellar radiation lies at about $0.23$ bar and the photosphere for thermal radiation lies at $0.28$ bar.
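These photosphere pressures follow directly from the opacities by setting the vertical optical depth $\tau(p) = \int_0^p \kappa \, dp'/g$ to unity. The sketch below assumes a surface gravity $g = 9.36$ m s$^{-2}$ appropriate for HD 209458b (a value not quoted in this section):

```python
# Photosphere pressures implied by the double-gray opacities, from
# tau(p) = integral_0^p kappa(p') dp' / g = 1 (hydrostatic balance).
# g = 9.36 m/s^2 for HD 209458b is an assumed value.
G_SURF = 9.36          # m s^-2
KAPPA_SW = 4.0e-4      # m^2 kg^-1, uniform shortwave (stellar) opacity
KAPPA_LW0 = 2.28e-6    # m^2 kg^-1 at p = 1 Pa
ALPHA = 0.53           # kappa_LW = KAPPA_LW0 * (p / 1 Pa)**ALPHA

# Shortwave: tau = KAPPA_SW * p / g                     ->  p_phot = g / KAPPA_SW
p_phot_sw = G_SURF / KAPPA_SW
# Longwave:  tau = KAPPA_LW0 * p**(1+ALPHA) / ((1+ALPHA) * g) = 1
p_phot_lw = ((1.0 + ALPHA) * G_SURF / KAPPA_LW0) ** (1.0 / (1.0 + ALPHA))

print(p_phot_sw / 1e5, p_phot_lw / 1e5)  # ~0.23 and ~0.28 bar
```

Note that this check only works with opacities in m$^2$ kg$^{-1}$, the standard units for a mass absorption coefficient.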
The model’s resolution is C32 in the horizontal (roughly corresponding to a global resolution of $128 \times 64$ in longitude and latitude) and 40 levels in the vertical which are evenly spaced in log pressure, with the uppermost layer extending to zero pressure. Table \[table:params\] in the Appendix summarizes the physical and numerical parameters used in our suite of models.
Most GCMs do not explicitly resolve the mechanisms that are thought to dissipate kinetic energy in hot Jupiter atmospheres, such as Lorentz drag or shocks (see Section \[sec:intro\]). Our GCM includes two potential sources of drag which can be thought of as parametrizing these mechanisms. First, the simulations include a Rayleigh drag that linearly damps winds over a prescribed timescale $\tau_{\mathrm{drag}}$. Simulations with $\tau_{\mathrm{drag}} \leq 10^{5} \mathrm{s}$ use a timescale that is spatially uniform. Simulations with $\tau_{\mathrm{drag}} > 10^{5} \mathrm{s}$ additionally include a “basal” drag term that allows the model to equilibrate within reasonable integration times. The basal drag strength increases as a power-law with pressure, from no drag at 10 bar to a timescale of 10 days at 200 bar [@Komacek:2015]. Second, to enforce numerical stability, the model includes a fourth-order Shapiro filter that damps wind and temperature variations at the model grid scale. The Shapiro filter acts as numerical drag at small spatial scales and, in simulations without any other sources of drag, eventually helps to equilibrate the kinetic energy of the flow. The potential issue with relying on numerical drag is that its parameters are generally chosen for modeling convenience, not because they are physically motivated. This raises the question of which source of drag is dominant in our simulations.
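To make the filter's scale selectivity concrete, here is a sketch of a one-dimensional periodic Shapiro filter of order $n$ (after [@Shapiro:1971]); its spectral response, $1 - \sin^{2n}(k\Delta x/2)$, removes two-grid-point waves in a single pass while leaving well-resolved scales essentially untouched. This is an illustration only; the MITgcm's implementation is multidimensional and applied to the prognostic fields.

```python
import numpy as np

def shapiro(f, n=4):
    """Periodic 1-D Shapiro filter of order n.
    Spectral response: 1 - sin(k * dx / 2)**(2 * n)."""
    d = f.copy()
    for _ in range(n):
        d = np.roll(d, -1) - 2.0 * d + np.roll(d, 1)  # second difference
    return f - (-1) ** n * d / 2 ** (2 * n)

x = np.arange(64)
grid_wave = (-1.0) ** x                  # two-grid-point noise
smooth = np.sin(2.0 * np.pi * x / 64)    # well-resolved wave
```

A single application removes `grid_wave` exactly, while the 64-point wave is damped only at the $\sim 10^{-11}$ level, which is why the filter behaves as drag confined to the grid scale.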
![ Kinetic energy dissipation in many of our GCM simulations is dominated by numerical drag. Panel (a) shows the ratio between the global root-mean-square rate of kinetic energy dissipation by numerical drag, $(dK/dt)_\mathrm{num,rms}$, versus the global root-mean-square rate of kinetic energy dissipation by explicit Rayleigh drag, $(dK/dt)_{\mathrm{Rayleigh,rms}}$, as a function of pressure for models with an equilibrium temperature of $T_\mathrm{eq} = 1500 \ \mathrm{K}$. The colored lines show simulations with different Rayleigh drag timescales, with darker lines representing longer drag timescales. The dashed vertical line shows the divide between dissipation dominated by numerical drag (to the right of the line) and Rayleigh drag (to the left). Except for short Rayleigh drag timescales, $\tau_\mathrm{drag} \leq 10^{4} \mathrm{s}$, numerical dissipation dominates. Note that the case with $\tdrag = \infty$ still includes basal drag, so the ratio of numerical to Rayleigh drag dissipation is not infinite at depth. Panel (b) shows the absolute contribution of Rayleigh drag and numerical effects to the kinetic energy dissipation. Only a subset of the simulations are shown for visual convenience. The dissipation rate increases with decreasing pressure, largely due to the stronger wind speeds at lower pressures.[]{data-label="fig:dragratios"}](fig1){width=".45\textwidth"}
![Numerical effects are small relative to physical terms in the zonal angular momentum budget of our simulations. This plot shows the global root-mean-square of the change in zonal momentum due to Rayleigh drag (solid lines) and numerics (dashed lines) relative to the change in zonal angular momentum due to the Coriolis force (i.e. rotation). Plots have the same color scheme as in [Fig. \[fig:dragratios\]]{}, for visual convenience we only show a subset of all simulations. The acceleration from numerics is smaller than either the Coriolis force (if $\tdrag \ge 10^5 \ \mathrm{s}$) or Rayleigh drag (if $\tdrag \le 10^4 \ \mathrm{s}$). As a result, numerics do not significantly affect the angular momentum budget of our simulations.[]{data-label="fig:angmom"}](fig2){width=".45\textwidth"}
We find that numerical drag can play a key role in our GCM simulations. Although the potential importance of numerical drag has repeatedly been pointed out in the hot Jupiter literature ([@Goodman:2009; @Li:2010; @Thrastarson:2010; @Heng:2011a; @Liu:2013; @Mayne:2014; @Polichtchouk:2014; @Cho:2015]), no work has previously quantified its effect relative to explicitly parametrized drag. Figure \[fig:dragratios\] compares the rates at which our GCM is dissipating kinetic energy via numerical drag from the Shapiro filter versus the dissipation rate due to Rayleigh drag as a function of pressure. Figure \[fig:dragratios\] (a) shows the global root-mean-square dissipation due to numerical drag relative to that due to Rayleigh drag, while Figure \[fig:dragratios\] (b) shows the absolute global root-mean-square value of kinetic energy dissipated by both drag mechanisms. We compute the root-mean-square change in kinetic energy as $(\partial K/\partial t)_{rms} = \langle (\partial K/\partial t)^2
\rangle^{1/2}$, where the angle brackets denote an area average. We find that all simulations with moderately long Rayleigh drag timescales, $\tau_{\mathrm{drag}} \geq 10^6 \mathrm{s}$, dissipate most kinetic energy through numerical drag. Moreover, even in the simulations with the strongest Rayleigh drag (yellow curve in [Fig. \[fig:dragratios\]]{}a,b) numerical drag dominates the dissipation of kinetic energy near the top and bottom of the model domain. Although the model includes a basal drag, we find that it contributes less towards kinetic energy dissipation than numerical drag near the bottom of the domain. This is likely due to the Shapiro filter acting as a sponge for waves that are excited in the upper atmosphere. However, wind speeds at pressures greater than 10 bar are small so kinetic energy dissipation near the domain bottom contributes relatively little to the overall dissipation (see Fig. \[fig:dragratios\]b). Though numerical drag is a dominant factor in how our GCM dissipates kinetic energy, atmospheric circulations additionally depend on how the GCM resolves the angular momentum budget. We do not expect *a priori* that numerical effects will dominate the global angular momentum budget, because the Shapiro filter is designed to not affect large-scale flow [@Shapiro:1971]. To check this insight, we explicitly compute the change in zonal angular momentum by numerics and Rayleigh drag as in [@Peixoto:1992]: $$\label{eq:angmom}
\frac{\partial M}{\partial t} = \frac{\partial u}{\partial t} a \mathrm{cos}(\phi).$$ In [Eqn. (\[eq:angmom\])]{} $M$ is the zonal angular momentum per unit mass, $\partial
M/\partial t$ is the rate of change of angular momentum which we compute in our simulations from the acceleration $\partial u/\partial
t$ due to the Shapiro filter or Rayleigh drag, $a$ is the planetary radius, and $\phi$ is latitude. Rayleigh drag always acts as a sink of angular momentum whereas the Shapiro filter can accelerate parts of the flow so we compare both terms via the root-mean-square change in momentum, $(\partial M/\partial t)_{rms} = \langle (\partial M/\partial t)^2 \rangle^{1/2}$, where the angle brackets as before denote an area average.\
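The area averages above are $\cos\phi$-weighted on the model's latitude-longitude grid. A minimal sketch of the root-mean-square momentum tendency (the uniform acceleration value below is hypothetical, chosen only for illustration):

```python
import numpy as np

def area_rms(field, lat_deg):
    """Area-weighted RMS of a (lat, lon) field: <field^2>^(1/2),
    with weights proportional to cos(latitude)."""
    w = np.cos(np.radians(lat_deg))
    w = w / w.sum()
    return np.sqrt(np.mean(np.sum(w[:, None] * field ** 2, axis=0)))

# dM/dt = a * cos(lat) * du/dt for a zonal acceleration du/dt
a = 9.43e7                               # planetary radius, m (Table 1)
lat = np.linspace(-89.0, 89.0, 64)
dudt = np.full((64, 128), 1e-6)          # hypothetical uniform acceleration, m s^-2
dMdt = a * np.cos(np.radians(lat))[:, None] * dudt
rms = area_rms(dMdt, lat)
```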
We find that numerical effects play a relatively minor role in the zonal angular momentum budget. Figure \[fig:angmom\] shows the change in angular momentum from numerics and Rayleigh drag relative to the change in angular momentum from the Coriolis force, as a function of pressure. We compare both terms against the Coriolis force because it is a small term in the zonal momentum budget of hot Jupiters due to their slow rotation and winds that peak at the equator [@Showman_Polvani_2011; @Showman:2014]. In relative terms, we find that the numerical change in angular momentum becomes larger than Rayleigh drag once $\tdrag > 10^5 \ \mathrm{s}$ (blue curves). However, in absolute terms, the momentum change from numerics remains one to two orders of magnitude smaller than the Coriolis term at most pressure levels. We conclude that numerical effects likely do not play a dominant role in the angular momentum budget of our simulations.
Given that many published simulations of hot Jupiters do not include Rayleigh drag, our results indicate that many of these simulations rely on numerical drag to equilibrate kinetic energy. Further work is needed to ensure that this kind of dissipation in hot Jupiter GCMs is physically motivated and that its effects are robust with respect to changes in numerical parameters. At the same time, the angular momentum budget in our simulations is not dominated by numerics. We therefore expect that GCMs are robust in simulating the qualitative features of hot Jupiter circulations (e.g., equatorial jets), but that the absolute kinetic energy and thus wind speeds in these simulations might be affected by numerical details. Our results agree with previous work, which has shown that the equilibrated flows in hot Jupiter GCMs largely conserve angular momentum, are independent of initial conditions, and that the magnitude of winds is only weakly sensitive to changes in numerical parameters (e.g. [@Heng:2011; @Liu:2013; @Mayne:2014]). In the remainder of this paper we focus on existing GCMs to test our theoretical framework. To do so we develop a theory in the next section that can account for both explicit and numerical drag.
Hot Jupiters as heat engines {#sec:theory}
============================
In steady state, the rate $W$ at which a heat engine performs work is given by $$W = \eta Q,
\label{eq:carnot}$$ where $\eta$ is the engine’s thermodynamic efficiency and $Q$ is the rate at which the engine absorbs heat.
First, the heating rate $Q$ is equal to the average absorbed stellar flux, $$\label{eq:dotq}
Q = \sigma T_{eq}^4,$$ where $T_{eq}$ is the planetary equilibrium temperature.
Second, we constrain the work output rate $W$. We assume that work goes entirely towards generating and dissipating kinetic energy. If Rayleigh drag dominates, the rate at which kinetic energy is dissipated equals $$\label{eq:dotwray}
W_\mathrm{Rayleigh} = \int \frac{dp}{g} \times \left\langle \frac{\mathbf{v}^2}{\tau_{\mathrm{drag}}}\right\rangle \mathrm{,}$$ where $\mathbf{v}$ is the velocity vector and the angle brackets denote an area average. If numerical drag dominates, kinetic energy is dissipated by the Shapiro filter which damps the highest wavenumber components of the flow. Because the highest wavenumber in the GCM is set by the model’s grid spacing $\Delta x$ we scale the Shapiro filter’s damping timescale as $\tau \sim \Delta x/ U$. This means the rate at which numerical drag dissipates kinetic energy is equal to $$\label{eq:dotwnum}
W_\mathrm{num} \sim \frac{U^2}{\Delta x/ U} \times \frac{p}{g} =
\frac{U^3}{\Delta x} \times \frac{p}{g}.$$
![ A diagram of the Ericsson cycle, overlaid on dayside- and nightside-averaged temperature profiles of a reference simulation ($T_\mathrm{eq} =
1500 \ \mathrm{K}, \tau_\mathrm{drag} = 10^6 \ \mathrm{s}$) and an adiabatic profile. The Ericsson cycle works as follows: a parcel of fluid starts at depth on the nightside (a), moves towards the dayside (b), where it rises (c), moves back towards the nightside (d), and sinks (a). We assume that rising and sinking motions (b-c, d-a) are isothermal and that motions between hemispheres (a-b, c-d) are isobaric. The isothermal assumption is motivated by the GCM profiles, which show that hot Jupiters are much closer to vertically isothermal than to adiabatic.[]{data-label="fig:diagram"}](fig3){width="50.00000%"}
Third, we constrain the efficiency $\eta$. Previous work on hurricanes and the atmospheres of rocky planets constrained this quantity by modeling atmospheric circulations as Carnot cycles [@Emanuel:1986; @Koll:2016]. Unfortunately it is difficult to argue that hot Jupiters should also resemble Carnot cycles. In a Carnot cycle parcels of fluid expand and contract adiabatically between heating and cooling. This model is physically motivated by the fact that hurricanes and rocky planets undergo convection, so fluid parcels move rapidly and quasi-adiabatically. In contrast, the upper atmospheres of hot Jupiters are strongly irradiated by their host stars. The irradiation creates a stable stratification and suppresses convection, which means the vertical temperature structure is approximately in radiative equilibrium and lapse rates are small [@Iro:2005; @Guillot:2010]. As the temperature profiles from a reference simulation in Figure \[fig:diagram\] illustrate, temperatures are indeed far from adiabatic, which underlines that the Carnot cycle is a poor model for hot Jupiters.
Here we constrain the efficiency $\eta$ by modeling hot Jupiters as Ericsson cycles [@McCulloh:1876]. The Ericsson cycle is shown in Figure \[fig:diagram\]: a parcel of fluid starts deep in the nightside atmosphere (Fig. \[fig:diagram\], point a). It moves at constant pressure towards the dayside (b), where the stellar heating causes it to rise (c). The parcel then moves to the nightside (d), before cooling and sinking back to its starting position (a). Even though the assumption of isothermal vertical motions is an idealization, Figure \[fig:diagram\] shows that the Ericsson cycle provides a physically motivated model for hot Jupiters.
The efficiency of the Ericsson cycle is given by $$\label{eq:etafrac}
\eta = \frac{\oint \delta Q}{\int_a^c \delta Q} = \frac{\oint T ds}{\int_a^c T ds} \mathrm{.}$$ Here $\delta Q$ is a change in a parcel’s heat content, and $ds$ is a change in entropy. From the first law of thermodynamics, $$T ds = c_p dT - \frac{dp}{\rho} = c_p dT - R Td\ln p \mathrm{,}$$ where we have used the ideal gas law in the second step. We can then evaluate the numerator $\oint T ds$ as $$\begin{aligned}
&& \int_a^b c_p dT - \int_b^c R T \nonumber
d\ln p + \int_c^d c_p dT - \int_d^a R T d\ln p, \\
& = & c_p (T_\mathrm{day} - T_\mathrm{night}) - R T_\mathrm{day} \ln(p_\mathrm{lo}/p_\mathrm{hi})
\nonumber \\ && + c_p
(T_\mathrm{night} - T_\mathrm{day}) - R T_\mathrm{night} \ln(p_\mathrm{hi}/p_\mathrm{lo}), \nonumber \\
& = & R (T_\mathrm{day} - T_\mathrm{night}) \ln(p_\mathrm{hi}/p_\mathrm{lo}).\end{aligned}$$ Similarly the denominator $\int_a^c T ds$ in [Eqn. (\[eq:etafrac\])]{} is $$\begin{aligned}
&& \int_a^b c_p dT - \int_b^c R T d\ln p, \nonumber \\
& = & c_p (T_\mathrm{day} - T_\mathrm{night}) + R T_\mathrm{day} \ln(p_\mathrm{hi}/p_\mathrm{lo}).
\label{eq:integraltwo}\end{aligned}$$ The ratio of these two terms gives the efficiency, which we write as $$\eta = \frac{ \frac{T_\mathrm{day} - T_\mathrm{night}}{T_\mathrm{day}} \times
\ln\left[ (p_\mathrm{hi}/p_\mathrm{lo})^{R/c_p} \right]}{ \frac{T_\mathrm{day} -
T_\mathrm{night}}{T_\mathrm{day}} + \ln\left[ (p_\mathrm{hi}/p_\mathrm{lo})^{R/c_p}\right]}.
\label{eq:eta}$$
![Wind speeds in the GCM simulations compared with the heat-engine predictions. Panel (a) shows the root-mean-square wind speed as a function of the scaled heat input $\tau_{\mathrm{drag}} \eta \sigma T_{eq}^4$; panel (b) compares simulated and predicted wind speeds.[]{data-label="fig:winds"}](fig4){width="80.00000%"}
Importantly, the efficiency $\eta$ is always lower than the efficiency of a Carnot cycle, $\eta_{\mathrm{Carnot}} = (T_\mathrm{day} - T_\mathrm{night}) /
T_\mathrm{day}$, which is the maximum efficiency a heat engine can reach. The lower efficiency arises because heat is radiated to space as a parcel passes from the dayside to the nightside (c-d). If, instead, this heat could be stored and used later to heat up the parcel as it passes back from the nightside to the dayside (a-b), the Ericsson cycle’s efficiency would equal that of a Carnot cycle[^1].
As an example we consider the efficiency of WASP-18b, whose phase curve is consistent with zero heat redistribution from dayside to nightside [@Maxted:2013]. We assume that a parcel of fluid moves two scale heights in the vertical every time it traverses the planet horizontally[^2] so $\ln\left[ (p_\mathrm{hi}/p_\mathrm{lo})^{R/c_p} \right] \sim 2
R/c_p$. In this case WASP-18b’s Carnot efficiency would be unity, $\eta_\mathrm{Carnot} = 1$, whereas its actual efficiency is smaller by a factor of three, $\eta = 0.36$. Hot Jupiters can therefore be thought of as comparable to, but less efficient than, ideal Carnot engines. Their efficiency can be reduced even further by molecular diffusion and irreversible phase changes [@pauluis2002a], so Equation \[eq:eta\] should be considered an upper limit.
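Equation \[eq:eta\] and the WASP-18b example are straightforward to reproduce with the gas constant and heat capacity from Table \[table:params\]; a sketch:

```python
import math

def eta_ericsson(T_day, T_night, x):
    """Ericsson-cycle efficiency, Eq. (eta), with
    x = ln[(p_hi / p_lo)**(R / c_p)]."""
    carnot = (T_day - T_night) / T_day
    return carnot * x / (carnot + x)

R, cp = 3700.0, 1.3e4      # J kg^-1 K^-1 (Table 1)
x = 2.0 * R / cp           # parcel crosses ~2 scale heights

# WASP-18b: zero heat redistribution, so T_night -> 0 and eta_Carnot -> 1.
# (The dayside temperature cancels in this limit; 2900 K is a nominal value.)
eta = eta_ericsson(2900.0, 0.0, x)
print(eta)   # ~0.36, a factor ~3 below the Carnot limit of 1
```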
We are now able to test the extent to which hot Jupiters resemble heat engines. A key prediction of our theory is that wind speeds are sensitive to whether winds are damped by Rayleigh drag or numerical drag. Based on Equations \[eq:carnot\]-\[eq:dotwnum\], we expect that winds should scale as the square root of the modified heat input for Rayleigh drag, $U \propto (\tau_{\mathrm{drag}} \eta \sigma T_{eq}^4)^{1/2}$, whereas they should scale as the one-third power of the heat input for numerical drag, $U \propto (\Delta x \eta \sigma T_{eq}^4)^{1/3}$. Because $\Delta x$ depends on numerical parameters, and to compare both scalings in a single plot, we first use the quantity $\tau_{\mathrm{drag}} \eta \sigma T_{eq}^4$.\
Figure \[fig:winds\](a) shows that our simulations indeed exhibit a dichotomy between Rayleigh and numerical drag. The x-axis shows the scaled heat input $\tau_{\mathrm{drag}} \eta \sigma T_{eq}^4$ while the y-axis shows the root-mean-square wind speed, $U_{rms} = (p^{-1} \int \langle u^2 + v^2 \rangle dp)^{1/2}$, where $u$ and $v$ are the zonal and meridional wind speeds and where we average horizontally and over the meteorologically active region above $p=1$ bar (see Fig. \[fig:dragratios\]). To evaluate $\eta$ we use the dayside and nightside brightness temperatures that would be seen by an observer and assume that a parcel crosses two scale heights, $\ln\left[ (p_\mathrm{hi}/p_\mathrm{lo})^{R/c_p} \right] \sim 2
R/c_p$.\
We find that wind speeds in most strongly damped simulations with $\tau_\mathrm{drag} \leq 10^5 \mathrm{s}$ increase according to the Rayleigh drag scaling (Fig. \[fig:winds\]a). In contrast, winds in simulations with $\tau_\mathrm{drag} \geq 10^6 \mathrm{s}$ increase more slowly and approximately follow the one-third slope predicted for numerical drag. A notable exception to the Rayleigh scaling is given by the hottest simulations with $\tau_\mathrm{drag}=10^3 \mathrm{s}$ (yellow dots), in which winds increase with a one-third slope instead. This is due to the relative increase of numerical dissipation in strongly damped simulations. At $\tdrag = 10^3 \mathrm{s}$ winds are so weak that Rayleigh drag, which is proportional to wind speed, becomes small relative to numerical drag in parts of the model domain. Similarly, our numerical scaling performs worst for simulations with $\tau_\mathrm{drag}=10^7 \mathrm{s}$ (purple dots), in which wind speeds flatten out at high $T_{eq}$ even though the heat input keeps increasing. Given that our theory performs well in the strongly damped limit, deviations from it are likely due to inaccuracies in our numerical scaling, which we discuss below.
We now constrain the wind speeds inside a hot Jupiter atmosphere. If the atmospheric circulation is primarily balancing Rayleigh drag then wind speeds should scale as $$U_\mathrm{Rayleigh} = k_0 \left(\tau_\mathrm{drag} \eta
\sigma T_{eq}^4 \frac{g}{p} \right)^{1/2},
\label{eq:Uray}$$ whereas if the circulation is balancing numerical drag then winds should scale as $$U_\mathrm{num} = k_1 \left(\Delta x \eta \sigma T_{eq}^4 \frac{g}{p} \right)^{1/3}.
\label{eq:Unum}$$ Here $k_0$ and $k_1$ are fitting constants of order unity that account for various approximations, in particular our assumption that temperature profiles are isothermal. We use $k_0=0.3$ and $k_1=1.1$ to match the simulations at $T_{eq} = 3000$ K with $\tau_\mathrm{drag} = 10^4 \mathrm{s}$ and $\tau_\mathrm{drag} = \infty$, respectively. We combine Eqns. \[eq:Uray\] and \[eq:Unum\] by demanding that a GCM’s work output equals whichever is stronger, Rayleigh or numerical drag, so $$U = \min(U_\mathrm{Rayleigh}, U_\mathrm{num}).
\label{eq:Ucombined}$$ To evaluate [Eqn. (\[eq:Unum\])]{} we use the model’s grid spacing at the equator $\Delta x \sim 2 \pi a/128$, where $a$ is the planetary radius.
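The combined prediction of Eqns. \[eq:Uray\]-\[eq:Ucombined\] can be sketched as below, with the fitted constants $k_0 = 0.3$ and $k_1 = 1.1$ from the text. The efficiency $\eta$ and the pressure $p$ of the active layer are inputs; the $\eta = 0.3$ and $p = 1$ bar used in the example calls are illustrative values, not fits:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def u_predicted(T_eq, eta, tau_drag, dx, g=9.36, p=1e5, k0=0.3, k1=1.1):
    """Heat-engine wind speed, Eq. (Ucombined): the flow equilibrates
    at whichever branch (Rayleigh or numerical drag) limits it first."""
    F = eta * SIGMA * T_eq ** 4 * g / p     # scaled heat input, m^2 s^-3
    u_ray = k0 * math.sqrt(tau_drag * F)    # Eq. (Uray)
    u_num = k1 * (dx * F) ** (1.0 / 3.0)    # Eq. (Unum)
    return min(u_ray, u_num)

a = 9.43e7
dx = 2.0 * math.pi * a / 128                   # equatorial grid spacing at C32
u_strong = u_predicted(1500.0, 0.3, 1e4, dx)   # Rayleigh branch limits winds
u_weak = u_predicted(1500.0, 0.3, 1e7, dx)     # numerical branch limits winds
```

For long drag timescales the prediction saturates at the numerical branch, mirroring the flattening of wind speeds seen in the weakly damped simulations.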
We find that our theory matches the GCM simulations well. Figure \[fig:winds\](b) compares our predicted winds with the simulated root-mean-square wind speeds $U_{rms}$, defined above. As in Figure \[fig:winds\](a), we find that our scaling works best in the strongly damped limit, particularly for the simulations with $\tau_\mathrm{drag} = 10^4-10^5 \mathrm{s}$, which our scaling matches to better than $33\%$. These are also the simulations in which numerical drag is not yet dominant, and for which we scale winds using [Eqn. (\[eq:Uray\])]{}. Our scaling additionally matches the weakly damped simulations that are dominated by numerical drag ($\tau_\mathrm{drag} > 10^5 \mathrm{s}$), although the fit is not as close as in the strongly damped regime. This is likely due to the approximations we made in deriving [Eqn. (\[eq:Unum\])]{}. To test this point we performed additional simulations in which we varied the model resolution and timestep. We found that [Eqn. (\[eq:Unum\])]{} over-predicts the sensitivity of wind speeds to numerical resolution (see Appendix). Further work is needed to understand exactly how hot Jupiter simulations equilibrate through numerical drag.
Nevertheless, given that our scaling captures the basic dependence of wind speeds on a planet’s heat input (Fig. \[fig:winds\]a) and additionally matches the GCM to better than a factor of two even when the models are dominated by numerical drag (Fig. \[fig:winds\]b), we argue that the main shortcoming in Figure \[fig:winds\] is due to our imperfect description of numerical drag, not due to the heat engine framework. We therefore sidestep the intricacies of numerical simulations and in the last section apply the heat engine framework directly to data.
Evaluating drag mechanisms with observations {#sec:data}
============================================
![Top: solid lines show the predicted wind speeds from [Eqn. (\[eq:Uray\])]{}, assuming dissipation is caused by magnetic drag. Colored envelopes indicate that our theoretical scalings are subject to uncertainty. The uppermost line for each magnetic field strength shows the wind speed predicted for dissipation occurring at $1$ bar, the lower line shows the wind speed predicted for dissipation occurring at $10^{-3}$ bar, and the colored envelope shows intermediate pressures. Dots show wind speeds constrained via Doppler spectroscopy for HD 189733b and HD 209458b [@Snellen:2010; @Louden:2015]. Bottom: solid lines show the predicted wind speeds from [Eqn. (\[eq:shearU\])]{}, assuming dissipation is caused by shear instabilities. Colored envelopes here indicate our estimated uncertainty for our heat engine scaling (see text). Winds faster than the speed of sound (dashed black line[^3]) can also develop shocks. Magnetic drag can match both observations, but doing so requires a large dipole field ($\gtrsim 100 \mathrm{G}$) for HD 189733b. In contrast, shear instabilities and/or shocks can match the observed wind speeds of both planets. []{data-label="fig:HD189"}](fig5){width=".5\textwidth"}
In this section we use the heat engine framework to predict how strong winds would have to be to balance the two main proposed drag mechanisms on hot Jupiters, namely magnetic drag and shear instabilities. We then evaluate our predictions by comparing them to observed wind speeds obtained from Doppler spectroscopy.
For magnetic drag we combine [Eqn. (\[eq:Uray\])]{} with a kinematic scaling for the effective Lorentz drag timescale [@Perna_2010_1; @Menou:2012fu; @Rauscher_2013]. To be consistent with Section 3, we use $k_0=0.3$ in [Eqn. (\[eq:Uray\])]{}. The drag timescale is $$\tau_\mathrm{mag} = \frac{4\pi H_e \rho}{B^2} \mathrm{,}
\label{eq:tauMag}$$ where $B$ is the dipole field strength, $H_e$ the atmospheric electrical resistivity, and $\rho$ the gas density. The electrical resistivity is inversely related to the ionization fraction $x_e$, $H_e \propto \sqrt{T}/x_e$, where $x_e$ is calculated from the Saha equation [@Perna_2010_1]. For hot Jupiters the ionized gas is largely potassium, for which we assume a solar abundance [^4]. We expect that most dissipation occurs somewhere between the upper levels probed by Doppler observations ($\sim10^{-3}$ bar) and the photosphere, so we calculate winds over the range $10^{-3} \leq p \leq 1 \mathrm{bar}$. Note that Equation \[eq:tauMag\] does not include induced atmospheric fields. In strongly ionized atmospheres induced fields can be significant [@Rogers:2020; @Rogers:2014; @Rogers:2017a], which means winds could decrease faster with equilibrium temperature than implied by Equation \[eq:tauMag\].
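As a sketch of how Equation \[eq:tauMag\] is evaluated (here in Gaussian units, where the timescale is $4\pi\rho H_e/B^2$ with $\rho$ in g cm$^{-3}$ and $B$ in G): the resistivity is estimated as $H_e \approx 230\sqrt{T}/x_e$ cm$^2$ s$^{-1}$, the expression used by [@Perna_2010_1]. The ionization fraction $x_e$, which in the text comes from the Saha equation for potassium, is treated below as a given input with a purely illustrative value:

```python
import math

def tau_mag(T, rho_cgs, B_gauss, x_e):
    """Lorentz drag timescale, Eq. (tauMag), in Gaussian units.
    rho_cgs in g cm^-3, B_gauss in G, result in seconds."""
    H_e = 230.0 * math.sqrt(T) / x_e                     # resistivity, cm^2 s^-1
    return 4.0 * math.pi * rho_cgs * H_e / B_gauss ** 2

# illustrative inputs: p ~ 0.1 bar, T ~ 1500 K, ideal gas with
# R = 3700 J kg^-1 K^-1 (Table 1); x_e = 1e-12 is hypothetical
rho = 1e4 / (3700.0 * 1500.0) * 1e-3    # kg m^-3 converted to g cm^-3
tau = tau_mag(1500.0, rho, B_gauss=10.0, x_e=1e-12)
```

The quadratic dependence on $B$ means a factor-of-three uncertainty in field strength translates into nearly an order of magnitude in the drag timescale.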
For shear instabilities we predict wind speeds analogous to [Eqn. (\[eq:Unum\])]{}. We assume instabilities have a spatial extent $L$ and damp the flow over a timescale $L/U$, so wind speeds scale as $$\label{eq:shearU}
U_\mathrm{shear} = k_1 \left(L \eta \sigma T_{eq}^4 \frac{g}{p}\right)^{1/3}.$$ For consistency we use $k_1=1.1$, as in Section 3. We note that Doppler observations probe the upper atmosphere only whereas our theory constrains large-scale dissipation and thus should be representative of the bulk flow. Observable wind speeds could potentially deviate from the bulk flow in atmospheres with large vertical shear. Nevertheless, we expect that the comparison between our theory and observations is warranted, given that a wide range of hot Jupiter GCMs produce equatorial jets that are strongly vertically coherent [@Showmanetal_2009; @Heng:2011a; @Liu:2013; @Mayne:2014; @Polichtchouk:2014; @Cho:2015].
Figure \[fig:HD189\] compares the observed wind speeds of $1.9^{+ 0.7}_{-0.6} \ \mathrm{km}~\mathrm{s}^{-1}$ for HD 189733b [@Louden:2015] and $2 \pm 1 \ \mathrm{km}~\mathrm{s}^{-1}$ for HD 209458b[^5] [@Snellen:2010] with our theoretical predictions for the two drag mechanisms[^6]. To indicate that our scalings are not exact, the colored envelopes in Figure \[fig:HD189\] reflect the dominant sources of uncertainty in our scalings. For magnetic drag the uncertainty is dominated by the pressure at which dissipation is assumed to occur; for shear instabilities we use the remaining mismatch between theory and GCM simulations[^7] in Section 3. Because the magnetic drag timescale is relatively sensitive to both temperature and pressure we additionally explored the impact of different pressure-temperature profiles, and find that most features in Figure \[fig:HD189\] are robust (see Appendix).
First, we find that the observations for HD 189733b can only be matched with a very strong dipole field of $\sim 100 \mathrm{G}$ (Fig. \[fig:HD189\], top panel). Second, matching the observations for HD 209458b also requires a strong dipole field, on the order of $\gtrsim 30 \mathrm{G}$. Such a dipole is broadly in agreement with predictions from dynamo scaling laws for HD 209458b [@Yadav:2017], which predict a dipole component at the poles of $\sim 50 \mathrm{G}$ (R. Yadav, personal communication). We conclude that magnetic drag is a plausible drag mechanism for HD 209458b. In addition, given the potentially large uncertainties in both the Lorentz drag timescale (Equation \[eq:tauMag\]) and dynamo scaling laws, magnetic drag cannot be ruled out for HD 189733b, even though the required field strengths would be larger than currently expected. Further theoretical work could help reduce these uncertainties. Our result that Lorentz forces are potentially unimportant for HD 189733b but may be important for HD 209458b therefore agrees with previous estimates that magnetic drag could become significant at $T_\mathrm{eq} \gtrsim 1400 \mathrm{K}$ [@Menou:2012fu; @Rogers:2014].
In contrast to magnetic drag, we find that shear instabilities are a plausible mechanism to match the observations of both planets (Fig. \[fig:HD189\], bottom panel). Our scaling predicts that wind speeds increase moderately with $T_{eq}$, in agreement with the observations. We also find that the vertical scale height $H$, which has been proposed as the characteristic scale of Kelvin-Helmholtz instabilities in hot Jupiters [@Goodman:2009; @Li:2010], would yield wind speeds that are an order of magnitude too slow to match the observed wind speeds. Instead, a damping length $2\pi a$, where $a$ is the planet radius, is needed to match the observed wind speeds. Such a damping length could be either due to a horizontal Kelvin-Helmholtz instability or due to the steepening of day-night standing waves into shocks. We note that the shock-resolving simulations in [@Fromang:2016] also found a dominant scale for horizontal shear instabilities of $L\sim 2 \pi a / 5$, and are thus consistent with our results. The upper end of our wind speed estimate is additionally consistent with the bulk flow becoming supersonic, and thus prone to dissipation via shocks (Fig. \[fig:HD189\]).
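The order-of-magnitude gap between these two candidate damping lengths follows directly from the $L^{1/3}$ dependence in Equation \[eq:shearU\]: replacing $L = H$ with $L = 2\pi a$ boosts the predicted wind by $(2\pi a/H)^{1/3}$. A quick check using the Table \[table:params\] parameters and an assumed HD 189733b-like temperature of $\sim$1200 K (a round number, for the order of magnitude only):

```python
import math

R, g, a = 3700.0, 9.36, 9.43e7    # Table 1 values
T = 1200.0                         # assumed photospheric temperature, K
H = R * T / g                      # scale height, ~470 km
L_planet = 2.0 * math.pi * a       # planetary circumference, ~6e8 m

boost = (L_planet / H) ** (1.0 / 3.0)
print(boost)   # ~11, i.e. an order of magnitude in predicted wind speed
```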
Conclusion {#sec:conc}
==========
We describe the large-scale atmospheric dynamics of hot Jupiters by modeling them as planetary heat engines. Hot Jupiters are comparable to, but less efficient than, ideal Carnot engines because parcels lose heat to space as they move between dayside and nightside. Our theory successfully captures the intensity of winds in a large number of hot Jupiter simulations ([Fig. \[fig:winds\]]{}). Remaining differences between theory and simulations are likely due to our imperfect understanding of numerical dissipation in the simulations, instead of a fundamental shortcoming in our theory.
Applying our theory to observations, we find that either the magnetic dipole field of HD 189733b could be stronger than current estimates suggest, or that its atmosphere is dissipating kinetic energy via shear instabilities and/or shocks. For HD 209458b our results indicate that both drag mechanisms can plausibly match the observations.
Looking towards future observations, we expect that magnetic drag should become dominant on hotter exoplanets with $T_{eq} > 1400 \ \mathrm{K}$ (Fig. \[fig:HD189\]). Wind speeds on these planets should follow a different trend with equilibrium temperature than wind speeds in colder atmospheres. As a result, we propose that more Doppler measurements over a wider range of planets could reveal a diversity of drag mechanisms at work in hot Jupiter atmospheres.
We thank Vivien Parmentier, Dorian Abbot, and Malte Jansen for insightful feedback on an early draft. We also thank the reviewer for helpful comments that significantly improved this manuscript. This work benefited from the Exoplanet Summer Program in the Other Worlds Laboratory (OWL) at the University of California, Santa Cruz, a program funded by the Heising-Simons Foundation. D.D.B. Koll was supported by a James McDonnell Foundation postdoctoral fellowship. T.D. Komacek was supported by a NASA Earth and Space Science fellowship.
Sensitivity to numerical parameters {#app:one .unnumbered}
===================================
Our scalings suggest that, for simulations that are dominated by numerical drag, large-scale wind speeds should be sensitive to horizontal resolution (Eqn. \[eq:Unum\]). To explore this possibility we performed additional simulations in which we did not include any Rayleigh drag (not even basal drag) and kept the equilibrium temperature fixed at $1500 \ \mathrm{K}$ while varying different numerical parameters in the model. The two parameters we considered are the model’s horizontal resolution and its timestep $dt$. Table \[table:params\] summarizes the numerical parameter variations for this suite of simulations. The Shapiro filter timescale $\tau_\mathrm{num}$ was always kept equal to the timestep.
[Fig. \[fig:numerics\_scaling\]]{} shows that wind speeds are largely independent of the GCM timestep. We only find a $\lesssim 3\%$ variation in the RMS wind speed while changing $dt$ (and thus also $\tau_\mathrm{num}$) over an order of magnitude. Given that Equation \[eq:Unum\] predicts wind speeds should be independent of $dt$, this implies a general agreement between our theory and our GCM results.
In addition, Figure \[fig:numerics\_scaling\] shows that large-scale wind speeds are less sensitive to horizontal resolution than our scaling would suggest. Following Equation \[eq:Unum\], wind speeds should scale with resolution as $U \propto N_x^{-1/3}$, where $N_x$ is the number of horizontal grid points. Our GCMs do not follow such a scaling and instead we find that the wind speed is independent of resolution to $\lesssim 10\%$ over a factor of 4 change in horizontal resolution, going from C16 to C64. One potential explanation is that our weakly damped simulations develop a direct turbulent cascade of energy to smaller scales, so that the large-scale kinetic energy of the flow becomes insensitive to the dissipation scale. Another explanation is that hot Jupiter GCM simulations are prone to developing shocks [see @Rauscher:2010; @perna_2012; @Dobbs-Dixon:2013; @Fromang:2016], in which case the large-scale kinetic energy might be less sensitive to how well the shock is being resolved than [Eqn. (\[eq:Unum\])]{} suggests.
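The size of this disagreement is easy to quantify: Equation \[eq:Unum\] predicts $U \propto N_x^{-1/3}$, so the factor-of-4 resolution change from C16 to C64 should slow large-scale winds by roughly 37%, far more than the $\lesssim 10\%$ variation the GCM actually shows. In sketch form:

```python
# Predicted fractional wind-speed reduction for a 4x resolution increase,
# U proportional to N_x**(-1/3) (Eq. Unum), versus the GCM's observed bound.
predicted_change = 1.0 - 4.0 ** (-1.0 / 3.0)   # ~0.37
observed_bound = 0.10                           # <~10% variation seen in the GCM
```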
Our result is consistent with the suggestion of @Heng:2011a that changes in numerics can change wind speeds in GCMs at the $\lesssim 10\%$ level, but shows that our scaling does not adequately capture the dependence of large-scale GCM wind speeds on numerical resolution. As a result, a better description of numerical drag than our scaling is needed to capture how hot Jupiter GCMs converge with numerical drag. Nevertheless, although our scaling over-predicts the sensitivity to numerical parameters, it does correctly predict the sensitivity to physical parameters, such as equilibrium temperature (see [Fig. \[fig:winds\]]{}, left panel).
[**Physical Parameter**]{} [**Parameter Value(s)**]{} [**Unit**]{}
----------------------------------------------------------- ----------------------------------------------------------- ---------------------------------------------------
Equilibrium temperature $T_\mathrm{eq}$ 500, 1000, [**1500**]{}, 2000, 2500, 3000 K
Visible absorption coefficient $\kappa_{SW}$                $4 \times 10^{-4}$                                          m$^{2}$ kg$^{-1}$
Thermal absorption coefficient $\kappa_{LW}$                $2.28 \times 10^{-6}$ $\times \ (p/\mathrm{1~Pa})^{0.53}$   m$^{2}$ kg$^{-1}$
Drag timescale $\tdrag$ $10^3, 10^4, 10^5, 10^6, 10^7, {\bf \infty} $ s
Gravity $g$ 9.36 $\mathrm{m} \ \mathrm{s}^{-2}$
Rotation rate $\Omega$ $2.078 \times 10^{-5}$ $\mathrm{s}^{-1}$
Planet Radius $a$ $9.43 \times 10^7$ m
Heat capacity $C_p$ $1.3 \times 10^4$ $\mathrm{J} \ \mathrm{kg}^{-1} \ \mathrm{K}^{-1}$
Specific gas constant $R$ 3700 $\mathrm{J} \ \mathrm{kg}^{-1} \ \mathrm{K}^{-1}$
[**Numerical Parameter**]{} [**Parameter Value(s)**]{} [**Unit**]{}
Horizontal resolution ($N_x$) C16 (64), [**C32 (128)**]{}, C64 (256) n/a
Vertical resolution $N_z$ 40 n/a
Timestep $dt$ 1.5, 7.5, [**15**]{} s
Shapiro filter timescale $\tau_\mathrm{num}$                1.5, 7.5, 15, [**25**]{}                                    s
Shapiro filter length scale $l_\mathrm{num} = 2\pi a/N_x$ $2\pi a/64$, [**${\bf 2\pi a}$/128**]{}, $2\pi a/256$ m
Shapiro filter order $n$ 4 n/a
: Range of physical and numerical parameters used in our suite of simulations. Numerical parameters in bold show fiducial values used for our main suite of simulations with varying physical parameters, and physical parameters in bold highlight fiducial values used for our secondary suite of simulations with varying numerical parameters. Numbers in parentheses for horizontal resolution show the approximate number of horizontal grid points.[]{data-label="table:params"}
![Our scaling for how wind speeds depend on numerical parameters (Eqn. \[eq:Unum\]) matches the independence of $U_\mathrm{rms}$ on timestep well, but does not match the dependence of $U_\mathrm{rms}$ on grid size. Shown are GCM results for $U_\mathrm{rms}$ as a function of horizontal resolution (black dots) and timestep (magenta dots) from simulations with $T_\mathrm{eq} = 1500 \ \mathrm{K}$ and no Rayleigh drag. In this set of simulations the Shapiro filter timescale $\tau_\mathrm{num}$ is kept equal to the timestep. Dashed lines show our predicted dependence of $U_\mathrm{rms}$ on timestep (magenta) and resolution (black), using a value of $k_1$ such that the theory matches the intermediate GCM point. Eqn. \[eq:Unum\] correctly predicts that the wind speed is independent of timestep (accurate to the $3\%$ level in our GCMs), but predicts that wind speeds should decrease steeply with increasing resolution, which is not found in our GCM simulations.[]{data-label="fig:numerics_scaling"}](fig6){width="50.00000%"}
Sensitivity of magnetic drag timescale to temperature-pressure profile {#app:two .unnumbered}
======================================================================
Because the magnetic drag timescale is highly sensitive to temperature [@Perna_2010_1; @Menou:2012fu; @Rauscher_2013], we explored the impact of the assumed temperature-pressure profile on our results in Section \[sec:data\]. Whereas in Section \[sec:data\] we assumed an isothermal atmosphere, here we constrain the vertical temperature structure using the analytical solutions from @Guillot:2010 as follows: we use Eqn. 29 from @Guillot:2010 with parameters similar to those used in that paper ($\kappa_{LW}=10^{-2}~\mathrm{cm}^2~\mathrm{g}^{-1}$, $\gamma=0.1$, $T_{int}=100~\mathrm{K}$, $f=0.25$). With these temperature-pressure profiles we evaluate the magnetic drag timescale (Eqn. \[eq:tauMag\]) at $1~\mathrm{bar}$ and $10^{-3}~\mathrm{bar}$, and compute wind speeds following Eqn. \[eq:Uray\].\
Figure \[fig:HD189\_appendix\] shows that our conclusions from Section \[sec:data\] are robust. The most significant difference in Figure \[fig:HD189\_appendix\] compared to Figure \[fig:HD189\] occurs above $T_{eq}\gtrsim 1500 \mathrm{K}$, where wind speeds increase more slowly with temperature, whereas our scalings at $T_{eq}<1500 \mathrm{K}$ are affected relatively little. The relatively small effect of the temperature-pressure profile is largely due to a trade-off between the effects of pressure and temperature on the magnetic timescale (Eqn. \[eq:tauMag\]). Although $H_e$ has an exponential sensitivity to temperature, the absolute value of temperature varies by less than a factor of two between $1 \mathrm{bar}$ and $10^{-3} \mathrm{bar}$. This compares to a three order of magnitude change in pressure, which appears in both the density ($\rho \propto p$) and the resistivity ($H_e \propto x_e^{-1} \propto p^{1/2}$) in Eqn. \[eq:tauMag\].
![Same as the top panel in Figure \[fig:HD189\], but instead of an isothermal atmosphere we assume that temperature increases with pressure following the analytic solutions in @Guillot:2010. Solid lines are evaluated at $1 \mathrm{bar}$, dashed lines are evaluated at $10^{-3} \mathrm{bar}$. Compared with Figure \[fig:HD189\], our main conclusions are robust to changes in thermal structure.[]{data-label="fig:HD189_appendix"}](fig7){width=".5\textwidth"}
Adcroft, A., Hill, C., Campin, J., Marshall, J., & Heimbach, P. 2004, Monthly Weather Review, 132, 2845
Brogi, M., de Kok, R., Albrecht, S., Snellen, I., Birkby, J., & Schwarz, H. 2016, The Astrophysical Journal, 817, 106
Cho, J., Polichtchouk, I., & Thrastarson, H. 2015, Monthly Notices of the Royal Astronomical Society
Crossfield, I., Hansen, B., Harrington, J., Cho, J., Deming, D., Menou, K., & Seager, S. 2010, The Astrophysical Journal, 723, 1436
Dobbs-Dixon, I. & Agol, E. 2013, Monthly Notices of the Royal Astronomical Society, 435, 3159
Emanuel, K. A. 1986, Journal of the Atmospheric Sciences, 43, 585
Fromang, S., Leconte, J., & Heng, K. 2016, Astronomy and Astrophysics, 591, A144
Goodman, J. 2009, The Astrophysical Journal, 693, 1645
Guillot, T. 2010, Astronomy and Astrophysics, 520, A27
Heng, K. 2012, The Astrophysical Journal Letters, 761, L1
Heng, K., Menou, K., & Phillips, P. 2011, Monthly Notices of the Royal Astronomical Society, 413, 2380
Heng, K., Frierson, D., & Phillips, P. 2011, Monthly Notices of the Royal Astronomical Society, 418, 2669
Iro, N., Bézard, B., & Guillot, T. 2005, Astronomy and Astrophysics, 436, 719
Knutson, H., Charbonneau, D., Allen, L., Fortney, J., Agol, E., Cowan, N., Showman, A., Cooper, C., & Megeath, S. 2007, Nature, 447, 183
Koll, D. & Abbot, D. 2015, The Astrophysical Journal, 802, 21
—. 2016, The Astrophysical Journal, 825, 99
Komacek, T. & Showman, A. 2016, The Astrophysical Journal, 821, 16
Komacek, T., Showman, A., & Tan, X. 2017, The Astrophysical Journal, 835, 198
Li, J. & Goodman, J. 2010, The Astrophysical Journal, 725, 1146
Liu, B. & Showman, A. 2013, The Astrophysical Journal, 770, 42
Louden, T. & Wheatley, P. 2015, The Astrophysical Journal Letters, 814, L24
Maxted, P., Anderson, D., Doyle, A., Gillon, M., Harrington, J., Iro, N., Jehin, E., Lafreniere, D., Smalley, B., & Southworth, J. 2013, Monthly Notices of the Royal Astronomical Society, 428, 2645
Mayne, N., Baraffe, I., Acreman, D., Smith, C., Browning, M., Amundsen, D., Wood, N., Thuburn, J., & Jackson, D. 2014, Astronomy and Astrophysics, 561, A1
McCulloh, R. 1876, Treatise on the mechanical theory of heat and its applications to the steam-engine, etc. (New York: D. Van Nostrand)
Menou, K. 2012, The Astrophysical Journal, 745, 138
Parmentier, V. & Guillot, T. 2014, Astronomy and Astrophysics, 562, A133
Parmentier, V., Guillot, T., Fortney, J., & Marley, M. 2015, Astronomy and Astrophysics, 475, A35
Pauluis, O. & Held, I. M. 2002, Journal of the Atmospheric Sciences, 59, 125
Peixoto, J. & Oort, A. 1992, Physics of Climate (New York: American Institute of Physics)
Perna, R., Heng, K., & Pont, F. 2012, The Astrophysical Journal, 751, 59
Perna, R., Menou, K., & Rauscher, E. 2010, The Astrophysical Journal, 719, 1421
Pierrehumbert, R. 2010, Principles of Planetary Climate (Cambridge: Cambridge University Press)
Polichtchouk, I., Cho, J., Watkins, C., Thrastarson, H., Umurhan, O., & de la Torre Juarez, M. 2014, Icarus, 229, 355
Rauscher, E. & Menou, K. 2010, The Astrophysical Journal, 714, 1334
—. 2013, The Astrophysical Journal, 764, 103
Rogers, T. & Komacek, T. 2014, The Astrophysical Journal, 794, 132
Rogers, T. & McElwaine, J. 2017, The Astrophysical Journal Letters, 841, L26
Rogers, T. & Showman, A. 2014, The Astrophysical Journal Letters, 782, L4
Shapiro, R. 1971, Journal of the Atmospheric Sciences, 28, 523
Showman, A., Fortney, J., Lewis, N., & Shabram, M. 2013, The Astrophysical Journal, 762, 24
Showman, A., Fortney, J., Lian, Y., Marley, M., Freedman, R., Knutson, H., & Charbonneau, D. 2009, The Astrophysical Journal, 699, 564
Showman, A. & Guillot, T. 2002, Astronomy and Astrophysics, 385, 166
Showman, A., Lewis, N., & Fortney, J. 2015, The Astrophysical Journal, 801, 95
Showman, A. & Polvani, L. 2011, The Astrophysical Journal, 738, 71
Snellen, I., de Kok, R., de Mooij, E., & Albrecht, S. 2010, Nature, 465, 1049
Thrastarson, H. & Cho, J. 2010, The Astrophysical Journal, 716, 144
Tsai, S., Dobbs-Dixon, I., & Gu, P. 2014, The Astrophysical Journal, 793, 141
Wyttenbach, A., Ehrenreich, D., Lovis, C., Udry, S., & Pepe, F. 2015, Astronomy and Astrophysics, 577, A62
Yadav, R. & Thorngren, D. 2017, The Astrophysical Journal Letters, 849, L12
Zhang, X. & Showman, A. 2017, The Astrophysical Journal, 836, 73
[^1]: If the heat lost during (c-d) could be captured and used to heat the parcel during (a-b), then [Eqn. (\[eq:integraltwo\])]{} becomes $\int_a^c \delta T ds = \int_b^c \delta T ds = R T_\mathrm{day}
\ln(p_\mathrm{hi}/p_\mathrm{lo})$ and [Eqn. (\[eq:eta\])]{} becomes $\eta = (T_\mathrm{day} - T_\mathrm{night}) / T_\mathrm{day}$.
[^2]: A parcel travels a vertical distance $d_\mathrm{vert} \sim \frac{Wa}{U}$, where $a$ is the planet radius and $W$ the vertical wind speed. Using characteristic values from a simulation with $T_\mathrm{eq} = 1500 \ \mathrm{K}$ and no drag, $W \sim 10 \mathrm{m}\mathrm{s}^{-1}$, $U \sim 10^3 \mathrm{m}\mathrm{s}^{-1}$, and $a=a_\mathrm{HD 209458b}$, we find $d_\mathrm{vert} \sim 2H$, where $H$ is the scale height. In agreement with this estimate, we mapped streamfunctions in our simulations and found that the vertical extent of both zonal and meridional flows is normally confined to $\sim 1-3$ scale heights.
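The estimate in this footnote can be reproduced from the quoted characteristic values together with $R$, $g$, and $a$ from Table \[table:params\] (a minimal check; using the isothermal scale height $H = RT/g$ with $T = T_\mathrm{eq}$ is an assumption here):

```python
# Check of the footnote's estimate d_vert ~ W a / U ~ 2H, using the
# characteristic values quoted above and parameters from Table [table:params].
W = 10.0      # vertical wind speed, m/s
U = 1.0e3     # horizontal wind speed, m/s
a = 9.43e7    # planet radius, m
R = 3700.0    # specific gas constant, J/kg/K
g = 9.36      # gravity, m/s^2
T = 1500.0    # equilibrium temperature, K (assumed isothermal)

d_vert = W * a / U   # vertical distance travelled by a parcel, m
H = R * T / g        # pressure scale height, m
print(d_vert / H)    # ≈ 1.6, consistent with d_vert ~ 2H
```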
[^3]: We assume solar composition and that atmospheric temperature is equal to the equilibrium temperature.
[^4]: For a planet with the equilibrium temperature of HD 209458b, $T_{eq} = 1450 \ \mathrm{K}$, the ionization fraction is $x_e=4.4 \times 10^{-11}$, which is much smaller than the solar abundance of neutral potassium and thus consistent with the approximations made in @Perna_2010_1. The corresponding magnetic resistivity is $H_e=2.0 \times 10^{14} \ \mathrm{cm}^2 \ \mathrm{s}^{-1}$.
[^5]: Note that these are $1\sigma$ error bars and the detection itself was only significant at $2\sigma$.
[^6]: We assume $p=1$ bar, $\eta = 0.2$, and $g=23~\mathrm{m}~\mathrm{s}^{-2}$, with the last two values motivated by the phase curve amplitude and mass-radius measurements of HD 189733b.
[^7]: We conservatively use $100\%$ uncertainty (a factor of two) for winds predicted with [Eqn. (\[eq:shearU\])]{}.
---
abstract: 'We study the properties of the light vector mesons ($\rho$, $\omega$ and $\phi$) in strange hadronic matter using the QCD sum rule approach. The in-medium masses of the vector mesons are calculated from the modifications of the light quark condensates and the gluon condensates in the hadronic medium. The light quark condensates in the hadronic matter are obtained from the values of the non-strange and strange scalar fields of the explicit chiral symmetry breaking term in a chiral SU(3) model. The in-medium gluon condensate is calculated through the medium modification of a scalar dilaton field, which is introduced into the chiral SU(3) model to simulate the scale symmetry breaking of QCD. The mass of the $\omega$ meson is observed to initially drop with increasing density and then rise due to scattering with the baryons. The mass of the $\rho$ meson is seen to drop with density due to the decrease of the light quark condensates in the medium. The effects of isospin asymmetry and strangeness of the medium on the masses of the vector mesons are also studied in the present work. The $\phi$ meson is observed to have a marginal drop in its mass in the nuclear medium. However, the strangeness of the medium is seen to lead to an appreciable increase in its mass, arising due to scattering with the hyperons.'
author:
- Amruta Mishra
title: |
Light vector meson masses in strange hadronic matter\
– a QCD sum rule approach
---
Introduction
============
The study of the properties of hadrons in hot and dense matter is an important topic of research in strong interaction physics. The changes in the hadron properties in the medium affect the experimental observables from the hot and/or dense matter produced in heavy ion collision experiments. The medium modifications of the properties of the light vector mesons [@rapp] can affect the low mass dilepton spectra, while those of the kaons and antikaons can show up in their production as well as in their collective flow. The modifications of the properties of the charm mesons, $D$ and $\bar D$, as well as of the charmonium states can modify the yields of the open charm mesons and the charmonium states in high energy nuclear collision experiments.
In the present work, we study the medium modification of the masses of the light vector mesons ($\rho$, $\omega$ and $\phi$) in the strange hadronic matter due to the interaction with the light quark condensates and the gluon condensates using the QCD sum rule approach [@hatlee; @hatlee2; @zschocke; @klinglnpa; @kwonprc2008; @Abhee]. The light quark condensates are calculated from the expectation values of the non-strange and strange scalar fields of the explicit chiral symmetry breaking term in a chiral SU(3) model [@kristof1; @papa]. The gluon condensate in the hadronic medium is obtained from the medium modification of a scalar dilaton field, introduced within the chiral SU(3) model through a scale symmetry breaking term in the Lagrangian density leading to the QCD trace anomaly. The chiral SU(3) model has been used to describe the hadronic properties in the vacuum as well as in nuclear matter [@kristof1], finite nuclei [@papa] and the bulk properties of (proto) neutron stars [@nstar]. The vector mesons have also been studied within the model [@vecm], arising due to their interaction with the nucleons in the medium. The model has been used to study the medium modifications of kaons and antikaons in isospin asymmetric nuclear matter in [@isoamss] and in hyperonic matter in [@isoamss2]. The chiral effective model has also been generalized to SU(4) to derive the interactions of the charm mesons with the light hadrons, in order to study the $D$ mesons in asymmetric nuclear matter at zero temperature [@amarind] and in symmetric and asymmetric nuclear (hyperonic) matter at finite temperatures [@amdmeson; @amarvind; @amarvindhyp]. In the present investigation, we study the light vector mesons using the QCD sum rule approach, due to their interaction with the quark and gluon condensates in the strange hadronic medium.
These in-medium condensates are calculated in a chiral SU(3) model, from the explicit chiral symmetry breaking term and the scale breaking term of the Lagrangian density of the effective hadronic model.
The outline of the paper is as follows: In section II, we give a brief introduction of the chiral $SU(3)$ model used to calculate the quark and gluon condensates in the hadronic medium. In the present work, the in-medium condensates as calculated in the chiral SU(3) model are taken as inputs for studying the in-medium masses of the light vector mesons using the QCD sum rule approach. The medium modifications of the quark and gluon condensates arise from the medium modifications of the scalar fields of the explicit symmetry breaking term and of the scalar dilaton field introduced in the hadronic model to incorporate the broken scale invariance of QCD. In section III, we present the results for the medium modifications of the light vector mesons using the QCD sum rule approach. In section IV, we summarize the findings of the present investigation and compare with the existing results in the literature for the in-medium properties of the light vector mesons.
The hadronic chiral $SU(3) \times SU(3)$ model
===============================================
We use an effective chiral $SU(3)$ model for the present investigation [@papa]. The model is based on the nonlinear realization of chiral symmetry [@weinberg; @coleman; @bardeen] and broken scale invariance [@papa; @kristof1; @vecm]. This model has been used successfully to describe nuclear matter, finite nuclei, hypernuclei and neutron stars. The effective hadronic chiral Lagrangian density contains the following terms $${\cal L} = {\cal L}_{kin}+\sum_{W=X,Y,V,A,u} {\cal L}_{BW} +
{\cal L}_{vec} + {\cal L}_{0} + {\cal L}_{SB}
\label{genlag}$$ In Eq. (\[genlag\]), ${\cal L}_{kin}$ is the kinetic energy term and ${\cal L}_{BW}$ is the baryon-meson interaction term, in which the baryon-spin-0 meson interaction generates the vacuum baryon masses. ${\cal L}_{vec}$ describes the dynamical mass generation of the vector mesons via couplings to the scalar mesons and additionally contains quartic self-interactions of the vector fields. ${\cal L}_{0}$ contains the meson-meson interaction terms inducing the spontaneous breaking of chiral symmetry as well as a scale invariance breaking logarithmic potential. ${\cal L}_{SB}$ describes the explicit chiral symmetry breaking.
To study the in-medium hadron properties using the chiral SU(3) model, we use the mean field approximation, where all the meson fields are treated as classical fields. In this approximation, only the scalar and the vector fields contribute to the baryon-meson interaction, ${\cal L}_{BW}$, since the expectation values of all the other mesons vanish. The baryon-scalar meson coupling constants are fitted from the vacuum masses of the baryons. The parameters in the model [@papa; @isoamss] are chosen so as to decouple the strange vector field $\phi_{\mu}\sim\bar{s}\gamma_{\mu}s$ from the nucleon.
The concept of broken scale invariance leading to the trace anomaly in QCD, $\theta_{\mu}^{\mu} = \frac{\beta_{QCD}}{2g}
{G^a}_{\mu\nu} G^{\mu\nu a}$, where $G_{\mu\nu}^{a} $ is the gluon field strength tensor of QCD, is simulated in the effective Lagrangian at tree level through the introduction of the scale breaking terms [@sche1; @ellis] $${\cal L}_{scalebreaking} = -\frac{1}{4} \chi^{4} {\rm {ln}}
\Bigg ( \frac{\chi^{4}} {\chi_{0}^{4}} \Bigg ) + \frac{d}{3}{\chi ^4}
{\rm {ln}} \Bigg ( \bigg ( \frac { \sigma^{2} \zeta }{\sigma_{0}^{2}
\zeta_{0}}\bigg) \bigg (\frac {\chi}{\chi_0}\bigg)^3 \Bigg ).
\label{scalebreak}$$ The Lagrangian density corresponding to the dilaton field, $\chi$ leads to the trace of the energy momentum tensor as [@heide1; @chqsram] $$\theta_{\mu}^{\mu} = \chi \frac{\partial {\cal L}}{\partial \chi}
- 4{\cal L}
= -(1-d)\chi^{4}.
\label{tensor1}$$
The comparison of the trace of the energy momentum tensor arising from the trace anomaly of QCD with that of the present chiral model given by equation (\[tensor1\]), gives the relation of the dilaton field to the scalar gluon condensate. We have, in the limit of finite quark masses [@cohen], $$T_{\mu}^{\mu} = \sum_{i=u,d,s} m_i \bar {q_i} q_i+ \langle \frac{\beta_{QCD}}{2g}
G_{\mu\nu}^{a} G^{\mu\nu a} \rangle \equiv -(1 - d)\chi^{4},
\label{tensor2m}$$ where the first term of the energy-momentum tensor, within the chiral SU(3) model is the negative of the explicit chiral symmetry breaking term, ${\cal L}_{SB}$. In the mean field approximation, this chiral symmetry breaking term is given as $$\begin{aligned}
{\cal L} _{SB} & = & {\rm Tr} \left [ {\rm diag} \left (
-\frac{1}{2} m_{\pi}^{2} f_{\pi} (\sigma+\delta),
-\frac{1}{2} m_{\pi}^{2} f_{\pi} (\sigma-\delta),
\Big( \sqrt{2} m_{k}^{2}f_{k} - \frac{1}{\sqrt{2}}
m_{\pi}^{2} f_{\pi} \Big) \zeta \right) \right ].
\label{ecsb}\end{aligned}$$
In the above, we have explicitly written down the matrix whose trace gives the Lagrangian density corresponding to the explicit chiral symmetry breaking in the chiral SU(3) model. Comparing the above term with the explicit chiral symmetry breaking term of the Lagrangian density in QCD given as $$\begin{aligned}
{\cal L}^{QCD}_{SB} & =- & {\rm Tr} \left [ {\rm diag} \left (m_u \bar u u,
m_d \bar d d , m_s \bar s s \right ) \right],
\label{ecsbqcd}\end{aligned}$$ we obtain the nonstrange quark condensates ($\langle \bar u u \rangle$ and $\langle \bar d d \rangle$) and the strange quark condensate ($\langle \bar s s \rangle $) to be related to the scalar fields, $\sigma$, $\delta$ and $\zeta$ as
$$m_u\langle \bar u u \rangle
= \frac{1}{2}m_{\pi}^{2} f_{\pi} (\sigma+\delta)
\label{nsubu}$$
$$m_d \langle \bar d d \rangle
= \frac{1}{2}m_{\pi}^{2} f_{\pi} (\sigma-\delta)
\label{nsdbd}$$
and, $$m_s\langle \bar s s \rangle
= \Big( \sqrt {2} m_{k}^{2}f_{k} - \frac {1}{\sqrt {2}}
m_{\pi}^{2} f_{\pi} \Big) \zeta.
\label{sbs}$$
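As an illustrative vacuum check of these relations (a sketch, not part of the model fit): in vacuum $\delta = 0$, and with the normalization $\sigma_0 = -f_\pi$ often adopted in this class of chiral SU(3) models (an assumption here), Eqn. (\[nsubu\]) reduces to the per-flavor Gell-Mann–Oakes–Renner value:

```python
# Hedged vacuum check of Eqn. (nsubu), assuming sigma_0 = -f_pi and
# delta = 0 in vacuum (a common normalization; an assumption here). GeV units.
m_pi = 0.139    # pion mass
f_pi = 0.0933   # pion decay constant
sigma0, delta0 = -f_pi, 0.0

mq_qqbar = 0.5 * m_pi ** 2 * f_pi * (sigma0 + delta0)  # Eqn. (nsubu)
gor = -0.5 * m_pi ** 2 * f_pi ** 2                     # per-flavor GOR value
print(mq_qqbar, gor)  # equal by construction, both ≈ -8.4e-5 GeV^4
```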
The coupled equations of motion for the non-strange scalar isoscalar field $\sigma$, scalar isovector field, $\delta$, the strange scalar field $ \zeta$, and the dilaton field $\chi$, derived from the Lagrangian density, are solved to obtain the values of these fields in the strange hadronic medium.
The QCD $\beta$ function occurring in the right hand side of equation (\[tensor2m\]), at one loop level, for $N_{c}$ colors and $N_{f}$ flavors, is given as $$\beta_{\rm {QCD}} \left( g \right) = -\frac{11 N_{c} g^{3}}{48 \pi^{2}}
\left( 1 - \frac{2 N_{f}}{11 N_{c}} \right) + O(g^{5})
\label{beta}$$ We then obtain the trace of the energy-momentum tensor in QCD, using the one loop beta function given by equation (\[beta\]), for $N_c$=3 and $N_f$=3, as given by, $$\theta_{\mu}^{\mu} = - \frac{9}{8} \frac{\alpha_{s}}{\pi}
G_{\mu\nu}^{a} G^{\mu\nu a}
+ \left( m_{\pi}^{2}
f_{\pi} \sigma
+ \Big( \sqrt {2} m_{k}^{2}f_{k} - \frac {1}{\sqrt {2}}
m_{\pi}^{2} f_{\pi} \Big) \zeta \right),
\label{tensor4m}$$ where $\alpha_s=\frac{g^2}{4\pi}$. Using equations (\[tensor2m\]) and (\[tensor4m\]), we can write $$\left\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\right\rangle = \frac{8}{9} \Bigg [(1 - d) \chi^{4}
+ \left( m_{\pi}^{2} f_{\pi} \sigma
+ \Big( \sqrt {2} m_{k}^{2}f_{k} - \frac {1}{\sqrt {2}}
m_{\pi}^{2} f_{\pi} \Big) \zeta \right) \Bigg ].
\label{chiglu}$$ Hence the scalar gluon condensate of QCD ($\langle {G^a}_{\mu \nu}
G^{\mu \nu a} \rangle$) is simulated by a scalar dilaton field in the present hadronic model. For the case of massless quarks, the scalar gluon condensate is proportional to the fourth power of the dilaton field, whereas for the case of finite masses of quarks, there are modifications arising from the scalar fields, $\sigma$ and $\zeta$.
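The $-\frac{9}{8}\frac{\alpha_s}{\pi}$ prefactor in equation (\[tensor4m\]) follows directly from the one-loop $\beta$ function, since $\theta_\mu^\mu$ contains $\frac{\beta_{QCD}}{2g} G^a_{\mu\nu}G^{\mu\nu a}$ and $\alpha_s = g^2/4\pi$. A short numerical check (the value of $g$ is arbitrary; the ratio is $g$-independent):

```python
# Verify that beta_QCD/(2g) = -(9/8)(alpha_s/pi) at one loop for
# N_c = N_f = 3, as used in Eqn. (tensor4m).
import math

def beta_qcd(g, Nc=3, Nf=3):
    """One-loop QCD beta function, Eqn. (beta)."""
    return -11 * Nc * g ** 3 / (48 * math.pi ** 2) * (1 - 2 * Nf / (11 * Nc))

g = 1.7                                   # arbitrary coupling value
coeff = beta_qcd(g) / (2 * g)             # prefactor of G^2 in theta^mu_mu
alpha_s_over_pi = g ** 2 / (4 * math.pi ** 2)
print(coeff / alpha_s_over_pi)            # -> -1.125 = -9/8
```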
We calculate the light quark condensates, $\langle \bar u u\rangle$, $\langle \bar d d\rangle$ and $\langle \bar s s \rangle$ and the scalar gluon condensate, $\left\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\right\rangle$ in the hadronic medium using equations (\[nsubu\]), (\[nsdbd\]), (\[sbs\]) and (\[chiglu\]) respectively, from the medium modifications of the scalar fields, $\sigma$, $\delta$, $\zeta$ and $\chi$. These values of the quark and gluon condensates are then taken as inputs for studying the masses of the light vector mesons ($\omega$, $\rho$, $\phi$) in the strange hadronic matter using the QCD sum rule approach. In the next section we shall describe the QCD sum rule approach used to study these in-medium vector meson masses in the isospin asymmetric strange hadronic medium.
QCD sum rule approach
=====================
In the present section, we investigate the properties of the light vector mesons ($\omega$, $\rho$, $\phi$) in the nuclear medium using the method of QCD sum rules. The in-medium masses of the vector mesons are computed from the medium modifications of the light quark condensates and the scalar gluon condensate calculated in the chiral effective model as described in the previous section. The current-current correlation function for the vector meson, $V (=\omega, \rho, \phi)$, is written as
$$\Pi^V _{\mu \nu}(q)= i\int d^4x \, e^{iq\cdot x} \langle 0| T j^V_\mu (x) j^V _\nu (0)|0\rangle,$$
where $T$ denotes the time-ordered product and $j^V_\mu$ is the current for the vector meson, $V=\rho,\omega,\phi$, given as $j_\mu ^{\rho}=\frac{1}{2}
(\bar u \gamma_\mu u -\bar d \gamma_\mu d)$, $j_\mu ^{\omega}=\frac{1}{6}
(\bar u \gamma_\mu u +\bar d \gamma_\mu d)$ and $j_\mu ^{\phi}=-\frac{1}{3} (\bar s \gamma_\mu s)$. Current conservation gives the transverse tensor structure for the correlation function as $$\Pi^V_{\mu \nu}(q)=\left (g_{\mu \nu}-\frac{q_\mu q_\nu}{q^2}
\right) \Pi^V (q^2)$$ where, $$\Pi^V (q^2)=\frac{1}{3} g^{\mu \nu}\Pi^V _{\mu \nu }(q).$$ The correlation function $\Pi^V (q^2)$ in the large space-like region $Q^2=-q^2 >> $ 1 GeV$^2$ for the light vector mesons ($\omega$, $\rho$ and $\phi$) can be written in terms of the operator product expansion (OPE) as [@klinglnpa; @kwonprc2008]
$$12\pi^2{\tilde \Pi^V} (q^2=-Q^2)=
d_V \Big [ -c^V_0 \ln \Big (\frac { Q^2}{\mu^2}\Big )
+\frac {c^V_1}{Q^2} + \frac {c^V_2}{Q^4} +\frac {c^V_3}{Q^6}+\cdots \Big ]
\label{qcdope}$$
where, $\tilde \Pi^V (q^2=-Q^2)=\frac{ \Pi^V (q^2=-Q^2)}{Q^2}$ and $\mu$ is a scale which we shall take as 1 GeV in the present investigation. The coefficients $c^V_i$ in equation (\[qcdope\]) encode the nonperturbative effects of QCD in terms of the quark and gluon condensates. In equation (\[qcdope\]), $d_V$=3/2, 1/6 and 1/3 for the $\rho$, $\omega$ and $\phi$ vector mesons respectively.
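The quoted values of $d_V$ are consistent with the counting rule $d_V = N_c \sum_q a_q^2$ for a current $j_\mu = \sum_q a_q \, \bar q \gamma_\mu q$ (a quick check under this assumed normalization):

```python
# Check of d_V = N_c * sum_q a_q^2 for the currents defined above
# (an assumed counting rule, consistent with the quoted values).
Nc = 3
d_rho = Nc * (0.5 ** 2 + (-0.5) ** 2)   # j_rho = (u - d)/2  -> 3/2
d_omega = Nc * 2 * (1 / 6) ** 2         # j_omega = (u + d)/6 -> 1/6
d_phi = Nc * (1 / 3) ** 2               # j_phi = -s/3        -> 1/3
print(d_rho, d_omega, d_phi)
```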
For the vector mesons, $\rho$ and $\omega$, containing the u and d quarks (antiquarks), these coefficients are given as [@klinglnpa] $$c_0 ^{(\rho,\omega)}=1+\frac{\alpha_s (Q^2)}{\pi},\;\;\;\;
c_1 ^{(\rho,\omega)}=-3 (m_u ^2 +m_d ^2)
\label{c0c1rhomg}$$ $$c_2 ^{(\rho,\omega)}= \frac {\pi^2}{3}
\langle \frac {\alpha_s}{\pi} G^{\mu \nu} G_{\mu \nu}
\rangle + 4\pi^2 \langle m_u \bar u u +m_d \bar d d \rangle
\label{c2rhomg}$$ $$\begin{aligned}
{c_3}^{(\rho,\omega)} & =& -4\pi^3 \Big [ \langle \alpha_s
(\bar u \gamma_\mu \gamma_5 \lambda^a u
\mp \bar d \gamma_\mu \gamma_5 \lambda^a d )^2 \rangle
\nonumber \\
&+& \frac {2}{9} \langle \alpha_s
(\bar u \gamma_\mu \lambda^a u
+ \bar d \gamma_\mu \lambda^a d )
(\sum_{q=u,d,s}\bar q \gamma^\mu \lambda^a q) \rangle \Big ]
\label {c3rhomg}\end{aligned}$$ In the above, $\alpha_s =4\pi /(b \ln (Q^2/{\Lambda_{QCD}}^2))$ is the running coupling constant, with $\Lambda_{QCD}=140$ MeV and $b=11-(2/3)N_f=9$. In equation (\[c3rhomg\]), the ‘$\mp$’ sign in the first term corresponds to the $\rho$ ($\omega$) meson.
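For orientation, the running coupling formula as written above can be evaluated directly (a minimal sketch; note that the value $\alpha_s \simeq 0.5$ quoted later for the FESR analysis reflects the choice made there, while this one-loop expression with $\Lambda_{QCD}=140$ MeV gives a somewhat smaller value at $Q^2 = 1$ GeV$^2$):

```python
# Running coupling alpha_s(Q^2) = 4 pi / (b ln(Q^2/Lambda^2)) as written
# above, with Lambda_QCD = 140 MeV = 0.140 GeV and b = 9.
import math

def alpha_s(Q2_GeV2, Lambda_GeV=0.140, b=9.0):
    return 4 * math.pi / (b * math.log(Q2_GeV2 / Lambda_GeV ** 2))

print(alpha_s(1.0))  # ≈ 0.36 at Q^2 = 1 GeV^2
```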
For $\phi$ meson, these coefficients are given as [@klinglnpa; @svznpb1] $$c_0 ^{\phi}=1+\frac{\alpha_s (Q^2)}{\pi},\;\;\;\;
c_1 ^{\phi}=-6 {m_s}^2
\label{c0c1phi}$$ $$c_2 ^{\phi}= \frac {\pi^2}{3}
\langle \frac {\alpha_s}{\pi} G^{\mu \nu} G_{\mu \nu}
\rangle + 8\pi^2 \langle m_s \bar s s \rangle
\label{c2phi}$$ $$\begin{aligned}
{c_3}^{\phi} = -8\pi^3 \Bigg [ 2 \langle \alpha_s
(\bar s \gamma_\mu \gamma_5 \lambda^a s )^2 \rangle
+ \frac {4}{9} \langle \alpha_s
(\bar s \gamma_\mu \lambda^a s )
(\sum_{q=u,d,s}\bar q \gamma^\mu \lambda^a q) \rangle \Bigg ]
\label{c3phi}\end{aligned}$$ After Borel transformation, the correlator for the vector meson given by equation (\[qcdope\]) can be written as $$12 \pi^2 \tilde \Pi^V (M^2)=d_V \Big [ c^V_0 M^2 +c^V_1+ \frac{c^V_2}{M^2}
+\frac{ c^V_3}{2M^4}\Big ]
\label{corropeborel}$$ On the phenomenological side, the correlator function, $\tilde \Pi^V (Q^2)$ can be written as $$12 \pi^2 \tilde \Pi^V _{phen}(Q^2)
=\int _0 ^\infty ds \frac{R^V_{phen}(s)}{s+Q^2}
\label{corrphen}$$ where $R^V_{phen}(s)$ is the spectral density proportional to the imaginary part of the correlator $$R^V_{phen}(s)={12 \pi} {\rm {Im}} \Pi^V _{phen} (s).$$ On Borel transformation, equation (\[corrphen\]) reduces to $$12 \pi^2 \tilde \Pi^V (M^2)=\int _0 ^\infty d s e^{-s/{M^2} }
R^V_{phen}(s)
\label{corrphenborel}$$ Equating the correlation functions from the phenomenological side given by equation (\[corrphenborel\]) to that from the operator product expansion given by equation (\[corropeborel\]), we obtain, $$\int _0 ^\infty d s e^{-{s}/{M^2} }
R^V_{phen} (s) ={d_V}
\Big [ c^V_0 M^2 +c^V_1+ \frac{c^V_2}{M^2}
+\frac{ c^V_3}{2M^4}\Big ].
\label{qsr}$$ The finite energy sum rules (FESR) for the vector mesons are derived from equation (\[qsr\]) by assuming that the spectral density separates to a resonance part ${R^V}_{phen}^{(res)}(s)$ with $s \le s^V_0$ and a perturbative continuum as $$R^V_{phen}(s) ={R^V}_{phen}^{(res)}(s) \theta (s^V_0-s)
+{d_V} c^V_0 \theta (s-s^V_0)
\label{qsr1}$$ For $M > \sqrt {s^V_0}$, the exponential function in the integral of the left hand side of the equation (\[qsr\]) can be expanded in powers of $s/M^2$ for $s < s^V_0$. We then obtain the left hand side of equation (\[qsr\]) as $$\begin{aligned}
&& \int _0 ^\infty e^{-s/M^2}R^V_{phen}(s)
=\int _0 ^{s^V_0} d s {R^V}_{phen}^{(res)} (s) -\frac {1}{M^2}
\int _0 ^{s^V_0} d s s {R^V}_{phen}^{(res)} (s) + \frac{1}{2 M^4}
\int _0 ^{s^V_0} d s s^2 {R^V}_{phen} ^{(res)} (s) \nonumber \\
&+&
{d_V} c_0 M^2 \Bigg (1- \frac{s^V_0}{M^2} +\frac{{(s^V_0)}^2}{2 M^4}
+\frac{{(s^V_0)}^3}{6 M^6}-\cdots \Bigg )
\label{rhophborel}\end{aligned}$$ Equating the powers in $1/{M^2}$ in the Borel transformations of the spectral functions, given by equations (\[qsr1\]) and (\[rhophborel\]), we obtain the Finite energy sum rules (FESR) as $$\int _0 ^{s^V_0} d s {R^V}_{phen}^{(res)} =
{d_V} (c^V_0 s^V_0 +c^V_1)
\label{fesr1v}$$
$$\int _0 ^{s^V_0} d s s {R^V}_{phen}^{(res)}=
{d_V} \Big (
\frac {(s^V_0)^2 c^V_0 }{2}-c^V_2 \Big )
\label{fesr2v}$$
$$\int _0 ^{s^V_0} d s s^2 {R^V}_{phen}^{(res)}=
{d_V} \Big (
\frac{(s^V_0)^3}{3} c^V_0 +c^V_3 \Big )
\label{fesr3v}$$
To evaluate $c^V_3$ for the vector mesons $\rho$, $\omega$ and $\phi$, given by equations (\[c3rhomg\]) and (\[c3phi\]), we use the factorization method [@svznpb2], $$\langle (\bar {q_i} \gamma_\mu \gamma_5 \lambda^a {q_j})^2 \rangle
= -\langle (\bar {q_i} \gamma_\mu \lambda^a {q_j})^2 \rangle
=\delta_{ij} \frac {16}{9} \kappa_i \langle \bar {q_i} {q_i} \rangle ^2,
\label{4qfact}$$ for ${q_i}=u,d,s$ for $i=1,2,3$. In the above, $\kappa_i$ is introduced to parametrise the deviation from exact factorization ($\kappa_i$=1). Using equation (\[4qfact\]), the four quark condensate for the $\omega (\rho)$ meson given by equation (\[c3rhomg\]) becomes $${c_3}^{(\rho,\omega)}=
-\alpha_s \pi^3\times \frac{448}{81} \kappa_q (\langle \bar u u \rangle^2
+ \langle \bar d d \rangle^2),
\label{c3rhomgf}$$ where we have used, $\kappa_u \simeq \kappa_d =\kappa_q$.
For the $\phi$ meson, using equations (\[c3phi\]) and (\[4qfact\]), we obtain the four quark condensate, ${c_3}^\phi$ as given by [@svznpb1] $$\begin{aligned}
{c_3}^{\phi}
&=& -8\pi^3 \times \frac{224}{81} \alpha_s \kappa_s
\langle \bar s s \rangle ^2.
\label{c3phif}\end{aligned}$$
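The factorized four-quark condensates in equations (\[c3rhomgf\]) and (\[c3phif\]) are simple closed-form expressions; the short Python sketch below evaluates them. The condensate values shown are illustrative placeholders, and the $\kappa$ arguments are free parameters here, not the fitted values obtained later from the vacuum FESRs:

```python
import math

def c3_rho_omega(alpha_s, kappa_q, uu, dd):
    # Eq. (c3rhomgf): -alpha_s * pi^3 * (448/81) * kappa_q * (<uu>^2 + <dd>^2)
    return -alpha_s * math.pi**3 * (448.0 / 81.0) * kappa_q * (uu**2 + dd**2)

def c3_phi(alpha_s, kappa_s, ss):
    # Eq. (c3phif): -8 * pi^3 * (224/81) * alpha_s * kappa_s * <ss>^2
    return -8.0 * math.pi**3 * (224.0 / 81.0) * alpha_s * kappa_s * ss**2

# Illustrative (assumed) vacuum values in GeV^3, for demonstration only:
uu = dd = (-0.245)**3          # <qbar q> ~ (-245 MeV)^3
ss = 0.8 * uu                  # <sbar s> ~ 0.8 <qbar q>
```

Since the condensates enter squared, the sign of each $c_3$ is fixed entirely by the sign of the corresponding $\kappa$.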
We assume a simple ansatz for the spectral function, $R^V_{phen}(s)$ as [@klinglnpa; @kwonprc2008] $$R^V_{phen}(s)=F_V \delta (s-{m_V}^2)+ d_V c^V_0 \theta (s-s^V_0),
\label{spspectf}
$$
Using the form of the spectral function given by equation (\[spspectf\]), the finite energy sum rules for vacuum given by equations (\[fesr1v\]) to (\[fesr3v\]), can be written as $$F_V =
{d_V} (c^V_0 s^V_0 +c^V_1)
\label{fesr1vf}$$ $$F_V m_V^2=
{d_V} \Big (
\frac {(s^V_0)^2 c^V_0 }{2}-c^V_2 \Big )
\label{fesr2vf}$$ $$F_V m_V^4=
{d_V} \Big (
\frac{(s^V_0)^3}{3} c^V_0 +c^V_3 \Big )
\label{fesr3vf}$$ Using equations (\[fesr1vf\]) and (\[fesr2vf\]), we determine the values of $F_V$ and $s^V_0$, assuming the value of $c^V_0$ evaluated at $Q^2=s_0$ (with $\alpha_s(Q^2 \simeq 1\,{\rm GeV}^2)=0.5$) and the value of $c^V_1$ as calculated in the chiral SU(3) model. These values are then used in equation (\[fesr3vf\]) to find the vacuum value of the 4-quark condensate, $c^V_3$, and hence the value of $\kappa_i$.
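Concretely, eliminating $F_V$ from the first two vacuum sum rules leaves a quadratic equation for $s^V_0$, after which $c^V_3$ follows from the third. A Python sketch of this procedure (the numerical inputs below are placeholders; the actual $c^V_i$ are taken from the operator product expansion and the chiral SU(3) model):

```python
import math

def solve_vacuum_fesr(m_V, d_V, c0, c1, c2):
    """Solve the vacuum FESRs (fesr1vf)-(fesr3vf):
         F_V         = d_V (c0 s0 + c1)
         F_V m_V^2   = d_V (c0 s0^2 / 2 - c2)
         F_V m_V^4   = d_V (c0 s0^3 / 3 + c3)
       Eliminating F_V from the first two gives a quadratic in s0:
         (c0/2) s0^2 - c0 m_V^2 s0 - (c2 + c1 m_V^2) = 0.
    """
    a = c0 / 2.0
    b = -c0 * m_V**2
    c = -(c2 + c1 * m_V**2)
    s0 = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # physical root
    F_V = d_V * (c0 * s0 + c1)
    c3 = F_V * m_V**4 / d_V - c0 * s0**3 / 3.0
    return F_V, s0, c3
```

By construction the returned $(F_V, s^V_0, c^V_3)$ satisfy all three sum rules simultaneously; $\kappa$ then follows by inverting equation (\[c3rhomgf\]) or (\[c3phif\]).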
At finite densities, there is a contribution to the spectral function for the vector mesons due to scattering from the baryons, and equation (\[qsr\]) is modified to $$\int _0 ^\infty d s e^{-{s}/{M^2} }
R^ V_{phen} (s)+12 \pi^2 \Pi^ V(0) ={d_V}
\Big [ c_0 M^2 +c_1+ \frac{c_2}{M^2}
+\frac{ c_3}{2M^4}\Big ],
\label{qsrfinitedens}$$ where, in the nuclear medium, $\Pi^ V(0)=\frac {\rho_B}{4M_N}$ for $V=\omega,\rho$, and vanishes for the $\phi$ meson [@klinglnpa; @hatlee2; @bochkarev; @florkowski]. However, in the presence of hyperons in the hadronic medium, the contribution due to the scattering of the $\omega$ and $\rho$ vector mesons from the baryons is modified to $$\Pi^ V(0)=\frac{1}{4}\sum_i \Big (\frac{g_{Vi}}{g_{VN}}\Big )^2
\frac{\rho_i}{M_i},$$ where, $g_{Vi}$ is the coupling strength of the vector meson, V with the $i$-th baryon ($i=N,\Lambda,\Sigma^{\pm,0},
\Xi^{-,0}$), $\rho_i$ and $M_i$ are the number density and mass of the $i$-th baryon. For the $\omega$ meson, $\frac{g_{\omega i}}{g_{\omega N}}
=(1,\frac{2}{3},\frac{2}{3},
\frac{1}{3})$ for $i=N,\Lambda,\Sigma^{\pm,0},\Xi^{-,0}$ respectively. For the $\rho$ meson, the ratio $\frac{g_{\rho i}}{g_{\rho N}} =(1,0,2,1)$ for $i=(N,\Lambda,\Sigma^{\pm,0},\Xi^{-,0})$. In the nuclear medium, the contribution for the $\phi$ meson due to scattering from nucleons vanishes, since the $\phi$ meson-nucleon coupling strength is zero. In the strange hadronic matter, the contribution is, however, nonzero due to the presence of the hyperons in the medium. For the $\phi$ meson, $\frac{g_{\phi i}}{g_{\phi \Lambda}} =(1,1,2)$ for $i=(\Lambda,\Sigma^{\pm,0},\Xi^{-,0})$.
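The scattering contribution can be assembled directly from the quoted coupling ratios. A sketch for the $\omega$ and $\rho$ mesons follows; units are left to the caller (e.g. densities in fm$^{-3}$ with masses in GeV), and the $\phi$ ratios, which are normalized to $g_{\phi\Lambda}$ rather than $g_{\phi N}$, are deliberately not included:

```python
def landau_term(vector, densities, masses):
    """Pi^V(0) = (1/4) * sum_i (g_{Vi}/g_{VN})^2 * rho_i / M_i
    with the coupling ratios quoted in the text for
    i = (N, Lambda, Sigma, Xi) and V = omega, rho."""
    ratios = {
        "omega": {"N": 1.0, "Lambda": 2.0 / 3.0, "Sigma": 2.0 / 3.0, "Xi": 1.0 / 3.0},
        "rho":   {"N": 1.0, "Lambda": 0.0,       "Sigma": 2.0,       "Xi": 1.0},
    }
    r = ratios[vector]
    return 0.25 * sum(r[i] ** 2 * densities[i] / masses[i] for i in densities)
```

For pure nuclear matter this reduces to $\rho_B/(4 M_N)$ for both mesons, as stated above, while the $\rho$ meson receives no contribution from the $\Lambda$ since $g_{\rho\Lambda}=0$.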
At finite densities, the finite energy sum rules (FESR) for vacuum given by equations (\[fesr1vf\]) to (\[fesr3vf\]) are modified to $$F^*_V =
{d_V} ({c^V_0} {{s^*}^V_0} +{c^V_1}) -12\pi^2 \Pi^V(0)
\label{fesr1mf}$$ $$F^*_V {m^*_V}^2=
{d_V} \Big (
\frac {({s^*}^V_0)^2 c^V_0}{2}-{c^*}_2^V \Big )
\label{fesr2mf}$$ $$F^*_V {m^*_V}^4=
{d_V} \Big (
\frac{({s^*}^V_0)^3}{3} c^V_0 +{c^*}_3^V \Big )
\label{fesr3mf}$$ These equations are solved to obtain the medium dependent mass, $m^*_V$, the scale ${s^*}_0^V$ and $F^*_V$, by using the coefficient $\kappa$ of the 4-quark condensate for the vector mesons, as determined from the FESRs in vacuum.
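The three in-medium sum rules can be reduced to a single root-finding problem in ${s^*}_0^V$: the first gives $F^*_V$, the second then gives ${m^*_V}^2$, and the third fixes ${s^*}_0^V$. A bisection sketch follows; the inputs are illustrative, with the real in-medium $c^*_2$, $c^*_3$ and $\Pi^V(0)$ supplied by the model:

```python
import math

def solve_medium_fesr(d_V, c0, c1, c2s, c3s, pi0, bracket=(0.1, 5.0), tol=1e-12):
    """Solve the in-medium FESRs (fesr1mf)-(fesr3mf) for (m*, s0*, F*)."""
    scat = 12.0 * math.pi**2 * pi0

    def F_of(s0):       # first sum rule
        return d_V * (c0 * s0 + c1) - scat

    def m2_of(s0):      # second sum rule
        return d_V * (c0 * s0**2 / 2.0 - c2s) / F_of(s0)

    def g(s0):          # residual of the third sum rule
        return F_of(s0) * m2_of(s0) ** 2 - d_V * (c0 * s0**3 / 3.0 + c3s)

    lo, hi = bracket
    assert g(lo) * g(hi) < 0, "bracket does not enclose a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    s0 = 0.5 * (lo + hi)
    return math.sqrt(m2_of(s0)), s0, F_of(s0)
```

With $\Pi^V(0)=0$ and vacuum condensates this recovers the vacuum solution, while a growing scattering term reduces $F^*_V$ and eventually leaves no physical solution, consistent with the behavior discussed in the results below.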
![(Color online) The quark condensates $(-m_q \langle \bar q q \rangle)^{1/4}$ ($q=u,d$) and $(-m_s \langle \bar s s \rangle)^{1/4}$, in units of MeV, are plotted as functions of density for isospin asymmetric hadronic matter (for $f_s$=0, 0.3 and 0.5) in figures (b) and (d), and compared with the isospin symmetric case, shown in subplots (a) and (c). []{data-label="psipsibdens"}](fig1.eps){width="16cm" height="16cm"}
![(Color online) The quantity $\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\rangle^{1/4}$ in MeV plotted as a function of the baryon density in units of the nuclear matter saturation density. This is plotted for isospin asymmetric hadronic matter (for strangeness fraction, $f_s$=0, 0.3, 0.5 and isospin asymmetric parameter, $\eta$=0.5) in subplot (b) and compared with the symmetric matter ($\eta$=0) in subplot (a).[]{data-label="ggcond"}](fig2.eps){width="16cm" height="16cm"}
![(Color online) The mass of $\omega$ meson plotted as a function of the baryon density in units of nuclear saturation density, for the isospin asymmetric strange hadronic matter (for strangeness fraction, $f_s$=0, 0.3, 0.5 and isospin asymmetric parameter, $\eta$=0.5) in subplot (b) and compared with the symmetric matter ($\eta$=0) shown in subplot (a).[]{data-label="omgmassdens"}](fig3.eps){width="16cm" height="16cm"}
![(Color online) The mass of $\rho$ meson plotted as a function of the baryon density in units of nuclear saturation density, for the isospin asymmetric strange hadronic matter (for strangeness fraction, $f_s$=0, 0.3, 0.5 and isospin asymmetric parameter, $\eta$=0.5) in subplot (b) and compared with the symmetric matter ($\eta$=0) shown in subplot (a).[]{data-label="rhomassdens"}](fig4.eps){width="16cm" height="16cm"}
![(Color online) The mass of $\phi$ meson plotted as a function of the baryon density in units of nuclear saturation density, for the isospin asymmetric strange hadronic matter (for strangeness fraction, $f_s$=0, 0.3, 0.5 and isospin asymmetric parameter, $\eta$=0.5) in subplot (b) and compared with the symmetric matter ($\eta$=0) shown in subplot (a).[]{data-label="phimassdens"}](fig5.eps){width="16cm" height="16cm"}
![(Color online) The density dependence of ${s^*}_0^V$ for the vector mesons ($\omega$, $\rho$ and $\phi$) in the strange hadronic matter is shown for the isospin symmetric ($\eta$=0) and isospin asymmetric (with $\eta$=0.5) cases for values of $f_s$=0, 0.3 and 0.5. []{data-label="s0dens"}](fig6.eps){width="16cm" height="16cm"}
![(Color online) The density dependence of $F^*_V$ for the vector mesons ($\omega$, $\rho$ and $\phi$) in the strange hadronic matter is shown for the isospin symmetric ($\eta$=0) and isospin asymmetric (with $\eta$=0.5) cases for values of $f_s$=0, 0.3 and 0.5. []{data-label="fvdens"}](fig7.eps){width="16cm" height="16cm"}
![(Color online) The density dependence of the 4-quark condensate for the cases of the $\omega$, $\rho$ and $\phi$ mesons is shown for the isospin symmetric ($\eta$=0) and asymmetric (with $\eta$=0.5) hadronic matter for the values of the strangeness fraction, $f_s$=0, 0.3 and 0.5. []{data-label="4quark"}](fig8.eps){width="16cm" height="16cm"}
Results and Discussions
=======================
In this section, we first investigate the effects of density on the scalar gluon condensate and the light quark condensates arising due to the modifications of the dilaton field, $\chi$ and the scalar isoscalar fields $\sigma$ and $\zeta$ calculated in the chiral SU(3) model. The values of these fields in the isospin asymmetric strange hadronic matter are obtained by solving the coupled equations of these fields in the mean field approximation. The nonstrange and strange quark condensates, $\langle \bar q q\rangle $ ($q=u,d$) and $\langle \bar s s \rangle $, as well as, the scalar gluon condensate, $\left\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\right\rangle$, are calculated from the in-medium values of the fields $\sigma$, $\zeta$ and $\chi$, by using the equations (\[nsubu\]), (\[nsdbd\]), (\[sbs\]) and (\[chiglu\]), respectively. The values of the current quark masses are taken as $m_u$=4 MeV, $m_d$=7 MeV and $m_s$=150 MeV in the present investigation. In figure \[psipsibdens\], we show the density dependence of the quantities $(-m_q\langle \bar q q\rangle)^{1/4} $ ($q=u,d$), $(-m_s \langle \bar s s\rangle)^{1/4} $ for given isospin asymmetry and strangeness of the hadronic medium. For the isospin symmetric situation ($\eta$=0), the quantity $(-m_q\langle \bar q q\rangle)^{1/4} $ is identical for $u$ and $d$ quarks, for a given value of $f_s$. It is also seen that the effect from the strangeness fraction is very small. For the isospin asymmetric situation, the quantities $(-m_u\langle \bar u u\rangle)^{1/4} $ and $(-m_d\langle \bar d d\rangle)^{1/4} $ are no longer identical, and their difference is due to the nonzero value of the isoscalar scalar field $\delta$, as can be seen from equations (\[nsubu\]) and (\[nsdbd\]). For the $u$ quark, there is seen to be a smaller drop with density as compared to the $d$ quark due to the negative value of the isoscalar scalar field $\delta$ in the medium. For the isospin symmetric nuclear matter, the value of the quantity $(-m_q\langle \bar q q\rangle)^{1/4} $ for $q=u,d$, changes from the vacuum value of 95.8 MeV to 85.7 MeV at the nuclear matter saturation density. This corresponds to a drop of the quantity, $(-m_q\langle \bar q q\rangle) $ for $q=u,d$, by about 36% at the nuclear matter saturation density from its vacuum value. At a density of 4$\rho_0$, this quantity is modified to (72 MeV)$^4$, which corresponds to a drop of about 69% from its vacuum value. The drop of the non-strange condensate in the medium is the dominant contribution to the modification of the $\omega$ and $\rho$ mesons in the medium. 
The quantity $(-m_s \langle \bar s s\rangle)^{1/4} $ for the isospin symmetric ($\eta$=0) and isospin asymmetric (with $\eta$=0.5) situations is shown in subplots (c) and (d), respectively. The vacuum value of $(-m_s \langle \bar s s\rangle)^{1/4} $ is about 258 MeV, which may be compared with the value of 210 MeV in Ref. [@hatlee]. For the symmetric nuclear matter, the quantity $(-m_s \langle \bar s s\rangle)^{1/4} $ changes from the vacuum value of 258 MeV to 252 MeV and 248.7 MeV at densities of $\rho_0$ and 4$\rho_0$ respectively, which correspond to about 9% and 13.7% drops in the quantity, 8$\pi^2 \langle m_s \bar s s \rangle$, occurring in $c_2 ^\phi$ in the finite energy sum rule for the $\phi$ meson given by equation (\[c2phi\]). Figure \[ggcond\] shows the quartic root of the scalar gluon condensate, $\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\rangle^{1/4}$ as a function of the baryon density in units of the nuclear matter saturation density, for isospin symmetric ($\eta$=0) as well as asymmetric hadronic medium (with $\eta$=0.5) for typical values of the strangeness fraction. The value of the scalar gluon condensate $\langle \frac{\alpha_{s}}{\pi} {G^a}_{\mu\nu} {G^a}^{\mu\nu}
\rangle$ for isospin symmetric nuclear matter is observed to be modified from the vacuum value of (373 MeV)$^4$ to (371.3 MeV)$^4$ and (361.9 MeV)$^4$ at densities of $\rho_0$ and 4$\rho_0$ respectively, which correspond to about 1.8% and 11.4% drop in the medium from its vacuum value. It is thus observed that the light nonstrange quark condensate has a larger drop in the medium as compared to the strange quark condensate as well as the scalar gluon condensate in the medium. This is observed as a much smaller drop of the $\phi$ meson in the medium as compared to the $\omega$ and $\rho$ mesons due to the quark and gluon condensates. The in-medium quark and gluon condensates are used as inputs for the calculations of the vector meson masses in the hadronic medium. The effects of isospin asymmetry as well as strangeness of the medium on the masses of the vector mesons are investigated in the present work. As has already been mentioned, using the vacuum values of the vector meson mass and the quark and gluon condensates, the finite energy sum rules (FESR) for the vector mesons in vacuum given by equations (\[fesr1vf\]), (\[fesr2vf\]) and (\[fesr3vf\]) are solved to obtain the values for $s_0^V$, $F_V$ and the coefficient of the 4-quark condensate, $\kappa_{q(s)}$. The vacuum value of the scale, $s_0^V$, which separates the resonance part from the continuum part is obtained as 1.3, 1.27 and 1.6 GeV$^2$ and the value of $F_V$ is obtained as 0.242, 0.258 and 0.55 GeV$^2$ for the $\omega$, $\rho$ and $\phi$ mesons respectively. The value of the coefficient of the 4-quark condensate is obtained as 7.788, 7.236 and -1.21 for the $\omega$, $\rho$ and $\phi$ mesons, which are then used to obtain the medium dependent mass, $m^*_V$, the scale ${s^*}_0^V$ and $F^*_V$ for the vector mesons, by solving the FESRs in the medium given by equations (\[fesr1mf\]), (\[fesr2mf\]) and (\[fesr3mf\]).
In figure \[omgmassdens\], the density dependence of the mass of the $\omega$-meson is shown for the cases of isospin symmetric ($\eta$=0) as well as the asymmetric matter for given values of the strangeness fraction, $f_s$. There is seen to be initially a drop in the $\omega$-meson mass with increase in density. However, as the density is further increased, the mass of the $\omega$-meson is observed to increase with density. This behavior can be understood from the equations (\[fesr1mf\]) and (\[fesr2mf\]), which yield the expression for the mass squared of the vector meson as $${m^*_V}^2= \frac{ \Big (
\frac {({s^*}^V_0)^2 c^V_0}{2}-{c^*}_2^V \Big )}
{({c^V_0} {{s^*}^V_0} +{c^V_1}) -{12\pi^2 (\Pi^V(0)}/{d_V})}
\label{mv2}$$ The contribution of $c_1^V$ is negligible for the $\rho$ and $\omega$ mesons, due to the small values of the masses of the $u$ and $d$ quarks. At low densities, the contribution from the scattering of the vector mesons from baryons, given by the last term in the denominator of (\[mv2\]), is negligible, and the mass drop of the $\omega$ meson mainly arises due to the drop of the light quark condensates in the medium, given by the second term, ${c^*}_2^V$, in the numerator, which comes with a negative sign. As seen in figure \[ggcond\], the modification of the scalar gluon condensate of the term ${c^*}_2^V$ is much smaller than that of the light quark condensate. However, at higher baryon densities, the last term in the denominator, the so-called Landau scattering term, becomes important for the $\omega$ meson. This leads to an increase in the mass of the $\omega$ vector meson with density, as can be observed in figure \[omgmassdens\]. The denominator becomes negative above a certain value of the density, beyond which there does not exist any solution for the mass of the $\omega$ meson, since ${m^*}_V^2$ becomes negative. For the case of nuclear matter, the mass of the $\omega$ meson remains very similar in the isospin symmetric as well as isospin asymmetric cases. This is because the modification of the $\omega$ meson at low densities is mainly due to the quark condensates in the combination $(m_u \bar u u +m_d \bar d d)$, which depends only on the value of $\sigma$ (as seen from equations (\[nsubu\]) and (\[nsdbd\])), and $\sigma$ is marginally different for the symmetric and asymmetric cases. At higher densities, the effect of the Landau scattering term becomes important. However, there is still observed to be only a very small difference between the $\eta=0$ and $\eta$=0.5 cases of nuclear matter, since the dependence of this term on the proton and neutron densities is in the form ($\rho_p$+$\rho_n$), which is the same for the two cases at a given density. 
With the inclusion of hyperons in the medium, the contribution of the scattering term in the denominator of equation (\[mv2\]) becomes smaller in magnitude due to the smaller values of the baryon-$\omega$ meson coupling strengths for the hyperons as compared to the nucleons. However, the trend of the initial mass drop followed by an increase at higher densities is still seen to be the case for the mass of the $\omega$-meson. The density above which the $\omega$-mass is observed to increase with density, however, is seen to be higher for finite strangeness fraction in the hadronic medium, since the contribution from the Landau scattering term is smaller for the case of hyperonic matter as compared to nuclear matter. For the $\rho$ meson, the contribution from the Landau scattering term remains small as compared to the contribution from the light quark condensate in the medium, due to the factor $(1/{d_V})$ in this term, which makes the contribution of the Landau scattering term 9 times smaller than that of the $\omega$ meson, as ${(1/d_\rho)}/{(1/d_\omega)}=1/9$. This is observed as a monotonic decrease of the mass of the $\rho$-meson with density, in figure \[rhomassdens\]. The effects of the strangeness fraction as well as isospin asymmetry of the medium are seen to be small on the $\rho$ meson mass. In figure \[phimassdens\], the mass of $\phi$ meson is plotted as a function of the baryon density in units of nuclear matter saturation density, for isospin symmetric and asymmetric cases for typical values of the strangeness fraction. Due to the larger value of the strange quark mass as compared to the $u(d)$ quark masses, the contribution from $c_1^\phi$ is no longer negligible as was the case for the $\omega (\rho)$ meson. The dominant contribution to the mass modification of the $\phi$ meson is from the in-medium modification of the strange quark condensate of the Wilson coefficient $c_2^\phi$ in the nuclear medium. 
This is because the $\phi$-meson has no contribution from the scattering term in nuclear matter, since the nucleon-$\phi$ meson coupling is zero. The strange quark condensate as well as the scalar gluon condensate have very small effects from isospin asymmetry, leading to the modifications of the $\phi$ meson mass to be very similar in the isospin symmetric and asymmetric nuclear matter. There are, however, contributions from the Landau scattering term due to the hyperons in the medium for nonzero $f_s$, which leads to an increase in the mass of the $\phi$ meson at higher values of the densities. For nuclear matter, the mass of the $\phi$ meson does not have contribution from the scattering term and since the in-medium modifications of both the strange quark condensate as well as the scalar gluon condensate are small and occur with opposite signs in the coefficient ${c^*}_2^\phi$, the mass of $\phi$ meson is observed to have negligible change with density, the value being modified from the vacuum value of 1020 MeV to about 1000 MeV at a density of 5$\rho_0$. For the case of isospin symmetric hyperonic matter, there is seen to be an increase in the mass of the $\phi$ meson at low densities, due to scattering from the $\Xi^-$ and $\Xi^0$, whose number densities are equal for this $\eta$=0 case. The $\Sigma^+$, $\Sigma^-$ and $\Sigma^0$ (with equal number densities) start appearing at around 3$\rho_0$, when the number densities of the $\Xi^-$ and $\Xi^0$ show a downward trend with density. It is the overall contributions from the hyperons to the scattering term which leads to the observed increase in the mass of the $\phi$ meson in the strange hadronic medium with $f_s$=0.3 and 0.5, shown in figure \[phimassdens\]. 
For the isospin asymmetric hyperonic matter, there is contribution from the $\Sigma^+$ and $\Xi^0$ for $\eta$=0.5 situation (but not from $\Sigma^{0,-}$ and $\Xi^-$), which is seen as a smaller increase of the $\phi$ mass at high densities as compared to the isospin symmetric hyperonic matter.
The density dependence of the scale ${s^*}_0^V$, which separates the resonance part from the perturbative continuum, is shown in figure \[s0dens\] for the $\omega$, $\rho$ and $\phi$ vector mesons. For isospin symmetric nuclear matter, for the $\omega$ meson, the vacuum value of 1.3 GeV$^2$ is modified to about 1.086 and 1.375 GeV$^2$ at densities of $\rho_0$ and 2$\rho_0$ respectively. The dependence of ${s^*}_0^V$ on density as an initial drop followed by an increase is similar to that of the density dependence of the mass of the $\omega$ meson. This can be understood in the following way. From the medium dependent FESRs, we obtain the expression for the scale ${s^*}_0^V$ as $${s^*}_0^V={m^*_V}^2\Bigg( 1+ \frac{2}{{m^*_V}^4c_0}
(c_1^V{{m^*}_V}^2+{c^*}_2^V-(12\pi^2 \Pi(0)/d_V))\Bigg)^{1/2}.$$ The value of the second term in the bracket, within the square root, is found to be small as compared to 1. At higher densities, the second term still remains small as compared to 1, due to the cancelling effect of the contributions from the quark condensate and the Landau scattering term. Thus the density dependence of ${s^*}_0^\omega$ shows first a drop and then an increase with density, as found for the mass of the $\omega$ meson. The dependence of the scale ${s^*}_0^V$ for the $\rho$ meson is observed to be a monotonic drop with increase in density, due to the negligible contribution from the Landau damping term as compared to the contribution from the light quark condensate. In the case of the $\phi$ meson, the effect of the scattering term is zero for the nuclear matter case, where ${s^*}_0^\phi$ is observed to have a small drop due to the marginal drop of the strange condensate and the gluon condensate in the medium. For the hyperonic matter, there is observed to be an increase in ${s^*}_0^\phi$ due to the scattering from the hyperons, which is observed to be larger for the isospin symmetric case as compared to the isospin asymmetric situation. In figure \[fvdens\], the value of $F^*_V$ is plotted as a function of density. From the first finite energy sum rule given by equation (\[fesr1mf\]), due to the small masses of the $u$ and $d$ quarks, the term $c_1^V$ is negligible for the $\omega$ and $\rho$ mesons. At low densities, the value of $F^*_V$ turns out to be proportional to ${s^*}_0^V$, since the contribution from the Landau scattering term is small. At higher densities, there is a contribution from the Landau scattering term, which modifies the behavior of $F^*_V$ to a slower change with density for the $\omega$ meson. For the $\rho$ meson, this is approximately proportional to ${s^*}_0^\rho$ as the Landau term has negligible contribution. 
For the $\phi$ meson, the scattering from the hyperons leads to an increase of $F^*_\phi$ at higher densities. In figure \[4quark\], we have plotted the quartic quark condensate, $c_3^V$ for the $\omega$, $\rho$ and $\phi$ mesons, given by equations (\[c3rhomgf\]) and (\[c3phif\]) as functions of density, for the isospin symmetric and asymmetric nuclear (hyperonic) matter. For the $\rho$ and $\omega$ mesons, the values of $\kappa$ calculated from the vacuum FESRs are found to be 7.236 and 7.788, which yield very similar values for the 4-quark condensate for the $\omega$ and $\rho$ mesons, shown in figure \[4quark\]. The vacuum FESRs for the $\phi$ meson yield the 4-quark condensate to be negative, with the value of $\kappa$ as $-1.21$. There is seen to be a large effect from the strangeness fraction of the medium on $c_3^\phi$, since the strange condensate has appreciable effect from $f_s$, as can be seen from figure \[psipsibdens\].
Summary
=======
In the present investigation, we have calculated the effect of density on the masses of the light vector mesons ($\omega$, $\rho$ and $\phi$) using the QCD sum rules, from the light quark condensates and gluon condensates in the medium calculated within a chiral SU(3) model. The effects of the isospin asymmetry as well as the strangeness of the medium on the modifications of these masses have also been investigated. The light quark condensates ($\langle \bar u u\rangle $, $\langle \bar d d\rangle $, $\langle \bar s s\rangle $) in the isospin asymmetric strange hadronic medium are calculated from the values of the nonstrange and strange scalar mesons, $\sigma$ and $\zeta$, and the isoscalar scalar meson, $\delta$ of the explicit symmetry breaking term of the chiral SU(3) model. The scalar gluon condensate is calculated from a scalar dilaton field, which is introduced in the chiral SU(3) model to mimic the scale symmetry breaking of QCD. The mass of the $\omega$ meson is observed initially to drop with increase in density in the hadronic matter. This is because the magnitudes of the light nonstrange quark condensates become smaller in the hadronic medium as compared to the values in vacuum. However, as the density is further increased, there is seen to be a rise in its mass, when the effect from the Landau term due to the scattering of the $\omega$ meson from the baryons becomes important. In the presence of hyperons, the increase in the mass of the $\omega$ meson occurs at a higher value of the density as compared to the case of nuclear matter. This is because the contribution from the Landau term becomes less with inclusion of hyperons due to smaller values of the coupling strengths of the $\omega$ meson with hyperons as compared to coupling strengths with the nucleons. 
The $\rho$ meson mass is observed to drop monotonically with density dominantly from the drop in the light quark condensate in the medium, with negligible contribution from the Landau scattering term. The effect of isospin asymmetry is observed to be small on the masses of the $\omega$ and $\rho$ mesons, as the dependence on the light quark condensates is through the combination $(m_u \bar u u+m_d \bar d d)$, which has marginal effect from the isospin asymmetry. For the $\phi$ meson, there is observed to be a drop in the mass in nuclear matter due to the modification of the strange quark condensate and scalar gluon condensate, because the contribution from the Landau term for the $\phi$ meson vanishes in the nuclear matter. The mass shift of $\phi$ meson in nuclear medium is seen to be small, of the order of 20 MeV at a density of 5$\rho_0$. This is because the strange condensate as well as gluon condensate have very small modification in the medium and occur with opposite signs in the coefficient ${c^*}_2^\phi$. In the presence of hyperons, however, there is seen to be an increase in the mass of the $\phi$ meson with density due to contribution from the Landau term arising from scattering of the $\phi$ meson with the hyperons. The mass of the $\phi$ meson is observed to have larger effect from the Landau scattering term for the isospin symmetric case as compared to the isospin asymmetric hyperonic matter.
The author acknowledges financial support from Department of Science and Technology, Government of India (project no. SR/S2/HEP-031/2010).
R. Rapp and J. Wambach, Adv. Nucl. Phys. [**25**]{}, 1 (2000).
T. Hatsuda, S. H. Lee, Phys. Rev. C [**46**]{}, R34 (1992).
T. Hatsuda, S. H. Lee, H. Shiomi, Phys. Rev. C [**52**]{}, 3364 (1995).
S. Zschocke, O. P. Pavlenko and B. Kämpfer, Eur. Phys. Jour. A [**15**]{}, 529 (2002).
F. Klingl, N. Kaiser, W. Weise, Nucl. Phys. [**A 624**]{}, 527 (1997).
Y. Kwon, M. Procura and W. Weise, Phys. Rev. [**C 78**]{}, 055203 (2008).
A. K. Dutt-Mazumder, R. Hofmann, M. Pospelov, Phys. Rev. C [**63**]{}, 015204 (2000).
A. Mishra, K. Balazs, D. Zschiesche, S. Schramm, H. Stöcker and W. Greiner, Phys. Rev. C [**69**]{}, 024903 (2004).
P. Papazoglou, D. Zschiesche, S. Schramm, J. Schaffner-Bielich, H. Stöcker, and W. Greiner, Phys. Rev. C [**59**]{}, 411 (1999).
A. Mishra, A. Kumar, S. Sanyal, V. Dexheimer, S. Schramm, Eur. Phys. Jour. A [**45**]{}, 169 (2010).
D. Zschiesche, A. Mishra, S. Schramm, H. Stöcker and W. Greiner, Phys. Rev. C [**70**]{}, 045202 (2004).
A. Mishra, E. L. Bratkovskaya, J. Schaffner-Bielich, S. Schramm and H. Stöcker, Phys. Rev. C [**70**]{}, 044904 (2004).
A. Mishra and S. Schramm, Phys. Rev. C [**74**]{}, 064904 (2006).
Amruta Mishra, Arvind Kumar, Sambuddha Sanyal, S. Schramm, Eur. Phys. J. A [**41**]{}, 205 (2009).
Amruta Mishra and Arindam Mazumdar, Phys. Rev. C [**79**]{}, 024908 (2009).
A. Mishra, E. L. Bratkovskaya, J. Schaffner-Bielich, S. Schramm and H. Stöcker, Phys. Rev. [**C 69**]{}, 015202 (2004).
Arvind Kumar and Amruta Mishra, Phys. Rev. C [**81**]{}, 065204 (2010).
Arvind Kumar and Amruta Mishra, Eur. Phys. J. A [**47**]{}, 164 (2011).
S. Weinberg, Phys. Rev. [**166**]{}, 1568 (1968).
S. Coleman, J. Wess, B. Zumino, Phys. Rev. [**177**]{}, 2239 (1969); C. G. Callan, S. Coleman, J. Wess, B. Zumino, Phys. Rev. [**177**]{}, 2247 (1969).
W. A. Bardeen and B. W. Lee, Phys. Rev. [**177**]{}, 2389 (1969).
J. Schechter, Phys. Rev. D [**21**]{}, 3393 (1980).
J. Ellis, Nucl. Phys. [**B 22**]{}, 478 (1970); B. A. Campbell, J. Ellis and K. A. Olive, Nucl. Phys. [**B 345**]{}, 57 (1990).
Erik K. Heide, Serge Rudaz and Paul J. Ellis, Nucl. Phys. A [**571**]{}, 713 (2001).
Arvind Kumar and Amruta Mishra, Phys. Rev. C [**82**]{}, 045207 (2010).
Thomas D. Cohen, R. J. Furnstahl and David K. Griegel, Phys. Rev. C [**45**]{}, 1881 (1992).
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B 147**]{}, 385 (1979).
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B 147**]{}, 448 (1979).
A. I. Bochkarev and M. E. Shaposhnikov, Phys. Lett. B [**145**]{}, 276 (1984); ibid, Nucl. Phys. B [**268**]{}, 220 (1986).
W. Florkowski, W. Broniowski, Nucl. Phys. A [**651**]{}, 397 (1999).
---
abstract: 'We study theoretically the effects of long-range and on-site Coulomb interactions on the topological phases and transport properties of spin-orbit-coupled quasi-one-dimensional quantum wires imposed on an s-wave superconductor. The electrostatic potential and charge density distributions are computed self-consistently within the Hartree approximation. Due to the finite width of the wires and the charge repulsion, the potential and density distribute inhomogeneously in the transverse direction and tend to accumulate along the lateral edges where the hard-wall confinement is assumed. This result has profound effects on the topological phases and the differential conductance of the interacting quantum wires and their hybrid junctions with superconductors. Coulomb interactions renormalize the chemical potential, and alter the topological phases strongly by enhancing the topological regimes and producing jagged boundaries. Moreover, the multicritical points connecting different topological phases from high-index subbands are modified remarkably in striking contrast to the predictions of the two-band model. We further suggest the possible non-magnetic topological phase transitions manipulated externally with the aid of long-range interactions. Finally, the transport properties of normal-superconductor junctions are also examined and interaction impacts on the emergence of Majorana fermions and the strength of Majorana zero-bias peaks are revealed.'
author:
- Hengyi Xu
- Ye Xiong
- Jun Wang
title: Topological phases and Majorana states in screened interacting quantum wires
---
Introduction
============
The existence of Majorana fermions as elementary particles has remained an open question since the original proposal by E. Majorana in 1937. [@nuocim14.171(1937)] In recent years, condensed matter physicists have been searching vigorously for Majorana fermions as quasi-particle excitations in various solid state hybrid structures, motivated by alluring and promising theoretical predictions. [@arcmp4.113.(2013); @rpp75.076501(2012); @prb81.125318(2012)] The enthusiasm was further ignited by the experimental realizations in semiconductor quantum wires with strong spin-orbit couplings and proximity-induced s-wave superconductivity by Mourik [*et al.*]{}, [@science336.1003(2012)] and subsequently by other groups [@nanolett12.6414(2012); @natphy8.887(2012); @prl110.126406(2013)]. In these experiments, zero-bias conductance peaks have been observed due to perfect Andreev reflection, signaling the presence of Majorana states at the ends of quantum wires. The measurements show that the zero-bias differential conductance evolves into peaks as the system is tuned into the topological regime predicted by theories that do not take into account various effects, such as finite length, finite temperature, and electron-electron interactions. To clarify some discrepancies between experiments and theories, the effects of disorder [@prl109.267002(2012); @njp14.125011(2012)], nonclosure of gaps [@prl109.266402(2012)], and inhomogeneous pairing potentials [@prl110.186803(2013)] have been investigated theoretically. More seriously, alternative mechanisms, for example the Andreev bound state [@prb65.184505(2002)] and the Kondo effect [@nat405.764(2000)], which also produce zero-bias peaks, have been suggested to challenge the experimental findings.
Among all the aforementioned effects, the electronic interaction is of vital importance and tricky to treat microscopically. [@prb88.161103(R)(2013)] It is expected that Coulomb interactions can strongly influence the stability of Majorana modes [@prl107.036801(2011); @prb84.085114(2011); @njp14.125018(2012)], and they are therefore crucial for understanding the experimental findings quantitatively and ultimately for establishing the existence of Majorana bound states at the ends of quantum wires. In one-dimensional (1D) quantum wires, repulsively interacting electrons form Luttinger liquids and should, strictly speaking, be described by the corresponding theory. [@prb85.245121(2012); @prb84.085114(2011)] Various methods have been employed to attack this problem. Based on the density matrix renormalization group (DMRG), tunneling spectra of interacting Kitaev chains and Majorana edge states have been examined. [@prb88.161103(R)(2013)] In particular, E. Stoudenmire [*et al.*]{}, [@prb84.014503(2011)] systematically compared the DMRG, Hartree-Fock, and bosonization approaches for treating interacting Majorana wires and found that the interaction problem can be described reasonably well by Hartree-Fock theory when the proximity effect and applied magnetic fields are sufficiently strong, although weaker couplings call for the more powerful DMRG and bosonization techniques. Besides single-mode wires, multichannel wires have also been studied extensively. [@prl105.227003(2010); @prb84.214528(2011); @prl105.046803(2010); @prb83.094525(2011); @prl106.127001(2011)] Lutchyn [*et al.*]{}, [@prb84.214528(2011)] studied the role of interactions in the low-energy topological phase diagram near the multicritical point connecting the topological phases originating from the first and second transverse subbands, and revealed that interactions renormalize the phase boundary near the multicritical point, leading to hybridization of the Majorana modes from different subbands.
Furthermore, disorder, together with interactions, was found to induce a transition from topological phases to trivial localized phases. [@prl109.146403(2012)]
In a realistic experimental setup, a semiconducting quantum wire with a high g-factor and strong spin-orbit coupling is deposited on a metallic s-wave superconductor to acquire a proximity-induced energy gap. The metallic superconductor, as a secondary effect, may drive the electronic interactions into a strongly screened regime. [@jpc26.172203(2014)] Consequently, the electronic density and potential distributions in multiband nanowires are rather inhomogeneous along the transverse direction due to the finite width and electronic repulsion. This inhomogeneity in the electrostatic potential can be one of the major sources of soft superconducting gaps. As for the transport properties of semiconductor-superconductor hybrid structures, much of the prior work has focused on the non-interacting case. [@prb91.024514(2015); @prb91.2145413(2015); @prb85.245121(2012); @prb88.064509(2013); @njp15.075019(2013); @prb90.115107(2014)] How the screened interactions and the inhomogeneous potential distribution influence the topological phases in multiband quantum wires and the related Majorana modes has received relatively little attention. In this work, we study the topological phases and Majorana zero modes in a typical experiment-relevant semiconductor-superconductor hybrid device composed of an interacting quantum wire in proximity to an s-wave superconductor. The screened Coulomb interactions are incorporated through self-consistent Hartree-Fock calculations in the presence of external magnetic fields. It is shown that electron-electron interactions strongly change the energy bands and modify the topological phase boundaries as well as the emergence of Majorana modes.
The paper is organized as follows. In Sec. II we introduce the structure to be investigated and formulate our model. The calculation results are presented and discussed in Sec. III. Sec. IV contains the summary and conclusions.
Theoretical Model
=================
We consider a spin-orbit-coupled semiconductor quantum wire of width $W$ in the y-direction and length $L$ along the x-direction deposited on an s-wave superconducting electrode, while its left end is contacted by a normal metallic lead through a tunnel barrier $U_p$, as shown in Fig. \[fig1\](a). The s-wave superconductor induces a pairing potential $\Delta$ for the electrons in the wire. The whole system is subjected to a uniform in-plane magnetic field $B_x$. Throughout the calculations, we choose realistic parameters for InSb semiconductor quantum wires: $\Delta=0.25\mathrm{meV}$, g-factor $\mathrm{g}=50$, Rashba spin-orbit coupling strength $t_R=20 \mathrm{meV\cdot nm}$, and effective mass $m^*=0.015m_e$ with $m_e$ being the electron mass.
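For orientation, the nearest-neighbour hopping of the tight-binding discretization introduced below is fixed by the effective mass through $t=\hbar^2/(2m^*a^2)$. A minimal sketch evaluating this for the InSb parameters above; the lattice spacing $a=10\,\mathrm{nm}$ is our illustrative assumption, not a value stated in the paper:

```python
# Derive the tight-binding hopping t = hbar^2/(2 m* a^2) from the effective mass.
# The lattice spacing a is an assumed value for illustration only.
HBAR2_OVER_2ME = 3.81e-2  # hbar^2/(2 m_e) in eV·nm^2

def hopping_energy_meV(m_star_ratio, a_nm):
    """Nearest-neighbour hopping t (meV) for effective mass m* = m_star_ratio * m_e
    on a square lattice with spacing a_nm (in nm)."""
    return 1e3 * HBAR2_OVER_2ME / (m_star_ratio * a_nm**2)

t = hopping_energy_meV(0.015, 10.0)  # InSb effective mass from the text, a = 10 nm
```

With these numbers the hopping comes out near $25\,\mathrm{meV}$, two orders of magnitude larger than the pairing potential $\Delta$, so the low-energy physics sits far below the tight-binding bandwidth.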
The system is described by the tight-binding Hamiltonian consisting of three terms as $$\mathcal{H}=H_0+H_R+H_U,\label{hamtt}$$ with respective form given by $$\begin{aligned}
H_0&=&\sum_{i,\sigma} c_{i,\sigma}^\dag (\epsilon_{0}+V_H-\mu)c_{i,\sigma}-t\sum_{\langle i,j\rangle,\sigma} c_{i,\sigma}^\dag c_{j,\sigma} \nonumber\\
&&+\frac{1}{2}g\mu_B \sum_{i;\sigma,\sigma'}c_{i,\sigma}^\dag s_xB_x c_{i,\sigma'};\label{hamt0}\end{aligned}$$ $$H_R=it_R\sum_{\langle i,j\rangle;\sigma,\sigma'} {\mathbf {\hat e}_z}\cdot (\mathbf s \times \mathbf d_{ij})c_{i,\sigma}^\dag c_{j,\sigma'};\label{hamtr}$$ $$H_U=U\sum_{i,\sigma}n_{i\sigma}n_{i\bar\sigma}.\label{hamtu}$$ where $c_{i,\sigma}^\dag$ and $c_{i,\sigma}$ are creation and annihilation operators for an electron with spin $\sigma (\uparrow,\downarrow)$ on site $i$, and $\mathbf s$ denotes the Pauli matrices. The Hamiltonian $H_0$ in Eq. (\[hamt0\]) represents the bare Hamiltonian of the semiconductor quantum wire, including the on-site energy $\epsilon_0=-4t$ and the hopping energy $t$ between nearest-neighboring sites along the $x$- and $y$-directions. $V_H$ and $\mu$ are the Hartree potential and the chemical potential, respectively, and the last contribution is the Zeeman splitting due to the in-plane magnetic field $B_x$. The term $H_R$ in Eq. (\[hamtr\]) describes the Rashba spin-orbit coupling, with $\mathbf d_{ij}$ being a lattice vector pointing from site $j$ to site $i$; $\langle i,j\rangle$ runs over all nearest-neighboring sites. The on-site interactions between electrons of opposite spins are captured by the Hubbard-like term $H_U$ in Eq. (\[hamtu\]). To facilitate the computation, Eq. (\[hamtu\]) can be rewritten within the mean-field approximation such that a charge with spin $\sigma$ at site $\mathbf r_i$ interacts with the average charge population of the opposite spin $\langle n_{\bar \sigma}\rangle$ at the same site, and vice versa. Moreover, the Hartree term $V_H(\mathbf r)$ in Eq.
(\[hamt0\]) depicts the long-range Coulomb interactions between charges at different sites in the semiconducting quantum wire, [@prb73.075331(2006); @jpc26.172203(2014)] $$V_H(\mathbf r_{i})=\frac{e^2}{4\pi\epsilon_0\epsilon_r}\sum_{\mathbf r_i\neq \mathbf r_j}n(\mathbf r_j)\left(\frac{1}{|\mathbf r_i-\mathbf r_j|}-\frac{1}{\sqrt{|\mathbf r_i-\mathbf r_j|^2+4d^2}} \right),\label{vhart}$$ where $d$ is the distance between the quantum wire and the superconducting metallic gate, and the second term in the parentheses is the contribution from the mirror charges due to the presence of the metallic superconducting gate. The average charge population at site $\mathbf r_i$ is calculated by $$\langle n_{\sigma}(\mathbf r_i)\rangle =-\frac{1}{\pi}\int_{-\infty}^{E_F}\Im[G_\sigma(\mathbf r_i,\mathbf r_i;E)] f_{FD}(E,E_F)dE,\label{nrho}$$ where $G_\sigma(\mathbf{r}_i,\mathbf{r}_i;E)$ is the Green’s function on site $\mathbf r_i$ at energy $E$ for spin $\sigma$. From the computational point of view, both the short- and long-range Coulomb interactions affect only the diagonal elements of the Hamiltonian matrix. Eqs. (\[hamtt\])-(\[nrho\]) can be solved self-consistently, starting from an initial guess for the charge density $\langle n_{\sigma}\rangle$, to obtain the self-consistent charge and electrostatic potential distributions, which are then used to calculate the band structures, the interacting topological phase diagrams, and the differential conductance spectroscopy.
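The structure of this self-consistency cycle can be sketched as follows. The mirror-charge kernel follows Eq. (\[vhart\]); the Green's-function density integral of Eq. (\[nrho\]) is replaced here by a hypothetical local response `density_from_potential` (our assumption, purely to make the loop self-contained), so the numbers are illustrative only:

```python
import numpy as np

# Schematic self-consistent Hartree loop. The density step is a placeholder
# local response, NOT the Green's-function integral of Eq. (6).
E2_4PIEPS0 = 1.44e3  # e^2/(4 pi eps0) in meV·nm

def hartree_kernel(y, d=10.0, eps_r=18.0):
    """Mirror-charge-screened 1/r kernel between transverse sites y_i, y_j (nm)."""
    r = np.abs(y[:, None] - y[None, :])
    k = np.where(r > 0, 1.0/np.maximum(r, 1e-9) - 1.0/np.sqrt(r**2 + 4*d**2), 0.0)
    return (E2_4PIEPS0 / eps_r) * k

def density_from_potential(v_h, mu=2.0, dos=1e-3):
    """Placeholder density response: occupation grows where mu - V_H > 0."""
    return dos * np.maximum(mu - v_h, 0.0)

def solve_hartree(y, mix=0.1, tol=1e-8, max_iter=500):
    """Iterate density -> potential -> density with linear mixing to a fixed point."""
    K = hartree_kernel(y)
    n = np.zeros_like(y)
    for _ in range(max_iter):
        v_h = K @ n
        n_new = (1 - mix) * n + mix * density_from_potential(v_h)
        if np.max(np.abs(n_new - n)) < tol:
            break
        n = n_new
    return n_new, K @ n_new

y = np.arange(0.0, 200.0, 10.0)  # transverse sites for W = 200 nm
n, v_h = solve_hartree(y)
```

Even with this toy density response, the repulsive kernel makes the potential largest in the middle of the wire, so the converged density is largest at the edges, qualitatively matching the edge accumulation discussed below.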
We study the topological phase diagram by calculating a $Z_2$-type topological invariant in quasi-1D wires, namely the evolution of the Wannier function center during a pumping process, which has been formulated in detail in Ref. \[\]. Here we only give a brief description of this method. The main idea of this formalism is to investigate the evolution of the maximally localized Wannier functions for quasi-1D systems by studying the eigenstates of the position operator projected into the subspace of the occupied states. In the eigenstate space, the projected position operator can be written in matrix form with nonzero elements only on the superdiagonal and in the lower-left corner. These nonzero elements are themselves matrices formed by products of the occupied eigenvectors. The eigenproblem of this block position operator can be solved by constructing the matrix $D(k_y)$, the product of all of its nonzero blocks. Equivalently, $D(k_y)$ can be viewed as a product of the Berry connections along the so-called “Wilson loop” and further expressed in the language of non-Abelian gauge fields $A^{mn}_{i,i+1}$ as $D(k_y)=\Pi_{i=0}^{N_x-1} e^{-iA_{i,i+1}\delta k}$ with $N_x$ being the number of discrete $k_x$ points and $\delta k=k_x^{i+1}-k_x^i$. The phase factor $\theta^D_m$ of the $m$-th eigenvalue of $D(k_y)$ determines the evolution of the Wannier-function-center pairs, which reside on a cylinder surface and encircle it an integer number of times as $k_y$ evolves for an effective 1D system. The number of encirclings, equivalent to the winding number of the Wannier center pairs, is used to distinguish the different phases in our study.
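As a concrete illustration of the Wilson-loop product $D=\Pi_i e^{-iA_{i,i+1}\delta k}$, the sketch below builds $D$ from the discrete overlaps of occupied Bloch eigenvectors along a closed $k$-line and reads off the phases $\theta^D_m$. The two-band Hamiltonian used to exercise it is a generic textbook example of our choosing, not the wire model of this paper:

```python
import numpy as np

def wilson_loop_phases(h_of_k, n_occ, n_k=201):
    """Phases theta_m of the eigenvalues of D = prod_i <u_i|u_{i+1}>,
    the discrete Wilson loop over the occupied states along a closed k-line.
    Arbitrary eigenvector phases cancel because the loop is closed."""
    ks = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    occ = []
    for k in ks:
        _, vecs = np.linalg.eigh(h_of_k(k))   # eigh sorts energies ascending
        occ.append(vecs[:, :n_occ])           # columns: occupied eigenvectors
    D = np.eye(n_occ, dtype=complex)
    for i in range(n_k):
        D = D @ (occ[i].conj().T @ occ[(i + 1) % n_k])  # overlap matrices
    return np.sort(np.angle(np.linalg.eigvals(D)))

# Generic two-band test model (an assumption for illustration): a Wannier-center
# phase of pi signals the nontrivial phase, 0 the trivial one.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h_topo = lambda k: np.sin(k) * sx + (0.5 + np.cos(k)) * sz  # d-vector winds
h_triv = lambda k: np.sin(k) * sx + (2.0 + np.cos(k)) * sz  # no winding
```

The same routine applied at each fixed $k_y$ of the quasi-1D Hamiltonian yields the winding numbers used to build the phase diagrams below.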
The calculations start by Fourier transforming along the $x$-direction, since $k_x$ is a good quantum number, while the Hamiltonian in the $y$-direction remains in real space. Thus, the Hamiltonian for a discretized site $i$ in momentum space is given by $$\left[\begin{array}{cccc}
h_\uparrow & 2it_R\sin(k_x) & & \Delta \\
-2it_R\sin(k_x) & h_\downarrow & -\Delta & \\
& -\Delta^* & -h_\uparrow^* & -2it_R\sin(k_x) \\
\Delta^* & & 2it_R\sin(k_x) & -h_\downarrow^*
\end{array} \right],\label{hamtkr}$$ with $h_\sigma(\mathbf r_i)=\epsilon_0-2t\cos(k_x)-\mu +V_H(\mathbf r_i)+V_U(\mathbf r_i)\pm E_z$ and $\sigma=(\uparrow,\downarrow)$, and $V_U$ is the potential from Eq. (\[hamtu\]). The nearest-neighboring sites along the y-direction are coupled by the matrix $$\left[\begin{array}{cccc}
t & -it_R/2 & & \\
it_R/2 & t & & \\
& & -t & -it_R/2 \\
& & -it_R/2 & -t
\end{array} \right]. \label{hamtcp}$$ The total Hamiltonian $\mathcal H$ can be written in the block matrix form with the diagonal part in the form of Eq. (\[hamtkr\]) and superdiagonal or subdiagonal part in the form of Eq. (\[hamtcp\]). By directly diagonalizing $\mathcal H$ in the momentum-real mixed space, one can obtain the wave functions of the eigenstates and furthermore the matrix $D(k_y)$. For interacting cases, the self-consistent Hartree and Hubbard potentials are used in Eq. (\[hamtkr\]).
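To make the construction explicit, the following sketch assembles the block-tridiagonal Hamiltonian from the on-site block of Eq. (\[hamtkr\]) and the transverse coupling of Eq. (\[hamtcp\]) and diagonalizes it. We set $V_H=V_U=0$, assume a lattice spacing giving $t=25\,$meV, and take the hole part of the coupling as $-C_p^*$ (an assumption on our part) so that particle-hole symmetry is manifestly preserved:

```python
import numpy as np

def onsite_block(kx, t, tR, mu, Ez, Delta, eps0):
    """4x4 on-site BdG block of Eq. (7) in the basis (u_up, u_dn, v_up, v_dn),
    with V_H = V_U = 0. Energies in meV; tR here is the lattice Rashba energy."""
    h_up = eps0 - 2*t*np.cos(kx) - mu + Ez
    h_dn = eps0 - 2*t*np.cos(kx) - mu - Ez
    so = 2j*tR*np.sin(kx)
    return np.array([[h_up,   so,     0.0,   Delta],
                     [-so,    h_dn,  -Delta, 0.0  ],
                     [0.0,   -Delta, -h_up,  -so  ],
                     [Delta,  0.0,    so,    -h_dn]], dtype=complex)

def coupling_block(t, tR):
    """Nearest-neighbour coupling along y, cf. Eq. (8); the hole block is taken
    as -C_p^* (our assumption) to keep the spectrum particle-hole symmetric."""
    Cp = np.array([[t, -0.5j*tR], [0.5j*tR, t]], dtype=complex)
    C = np.zeros((4, 4), dtype=complex)
    C[:2, :2] = Cp
    C[2:, 2:] = -Cp.conj()
    return C

def bdg_hamiltonian(kx, Ny, **p):
    """Block-tridiagonal Hamiltonian in the mixed (kx, y) representation."""
    H = np.zeros((4*Ny, 4*Ny), dtype=complex)
    D = onsite_block(kx, **p)
    C = coupling_block(p['t'], p['tR'])
    for i in range(Ny):
        H[4*i:4*i+4, 4*i:4*i+4] = D
        if i + 1 < Ny:
            H[4*i:4*i+4, 4*(i+1):4*(i+1)+4] = C
            H[4*(i+1):4*(i+1)+4, 4*i:4*i+4] = C.conj().T
    return H

pars = dict(t=25.0, tR=1.0, mu=2.0, Ez=1.5, Delta=0.25, eps0=-4*25.0)
bands = np.linalg.eigvalsh(bdg_hamiltonian(0.0, 20, **pars))
```

Scanning `kx` over the Brillouin zone reproduces transverse-subband dispersions such as those in Fig. \[fig2\]; for the interacting case, the self-consistent $V_H$ and $V_U$ are simply added to the diagonal of `onsite_block`.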
The differential conductance of normal-superconductor (NS) junctions is calculated by $$\frac{dI}{dV}=\frac{e^2}{h}\left[N-R_{ee}+R_{eh} \right],$$ where $N$ is the number of propagating modes in the normal lead, and $R_{ee}$ and $R_{eh}$ are the normal and Andreev reflections, respectively. The calculations of the reflections are based on the Nambu Green’s function technique which has been formulated in detail in Ref. \[\].
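In code, Eq. for $dI/dV$ is a one-liner once the reflection probabilities are known; the quantized Majorana zero-bias value follows from perfect Andreev reflection in a single channel:

```python
def didv_units_e2h(n_modes, r_ee, r_eh):
    """Differential conductance dI/dV in units of e^2/h from the normal (R_ee)
    and Andreev (R_eh) reflection probabilities; probability conservation
    requires r_ee + r_eh <= n_modes."""
    assert 0.0 <= r_ee and 0.0 <= r_eh and r_ee + r_eh <= n_modes
    return n_modes - r_ee + r_eh

# A Majorana zero mode enforces perfect Andreev reflection at zero bias:
zbp = didv_units_e2h(1, 0.0, 1.0)  # -> 2.0, the quantized 2e^2/h peak
```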
Results and discussion
======================
![(Color online) (a) The schematic structure of the normal metal-superconductor (NS) junction for the transport properties calculations. Lower panels: the phase diagrams as a function of the chemical potential and the magnetic field for width (b) $W=200\mathrm{nm}$ and (c) $W=400\mathrm{nm}$. The white rectangular frame in (c) indicates the regime to be considered in the interacting case. Other parameters are $\Delta=0.25\mathrm{meV}$, $g=50$, $t_R=20\mathrm{meV\cdot nm}$, and $m^*=0.015m_e$.[]{data-label="fig1"}](fig1.eps)
As a first step, we study the non-interacting phase diagrams over wide parameter ranges by calculating the phase factor $\theta^D_m$ associated with the evolution of the Wannier function center for quasi-1D wires. Figs. \[fig1\](b) and (c) show the topological invariants as a function of the chemical potential $\mu$ and the applied magnetic field $B_x$ for different wire widths. As the parameters vary, two phase factors $\theta^D_m$ split and meet again, resulting in a difference of an integer multiple of $2\pi$ in their values, which is equivalent to the winding number of Wannier center pairs. For $W=200\mathrm{nm}$ (see Fig. \[fig1\](b)), the multichannel quantum wire exhibits three topological phase regimes separated by topologically trivial regimes in the calculated parameter range. The topological phases, with a value of $0.5$ in units of $2\pi$, appear as rounded rectangular blocks originating from different subbands in the phase diagram. As the wire width increases, more topological regimes associated with higher-energy subbands appear, as shown in Fig. \[fig1\](c), since the energy separation between subbands decreases with the wire width.
For both cases, the topological regimes along the direction of sweeping $\mu$ at fixed $B_x$ are disconnected; their separations are essentially the distances between neighboring subbands of different spins. In contrast, the topological regimes may touch each other, forming so-called multicritical points, as the magnetic field is varied at fixed chemical potential. The emergence of multicritical points stems from the fact that two subbands lie very close, and may even touch, at low energies around $k=0$, such that two topological phase transitions occur in close succession or even simultaneously; see Fig. \[fig2\](c). Moreover, the relatively small subband separations of the wider wire give rise to small topological islands at low magnetic fields corresponding to the first subband, as shown in Fig. \[fig1\](c). It should be noted that the non-interacting phase diagrams of quasi-1D nanowires have been examined previously, and a similar phase diagram was found. [@prb86.024505(2012)]
![(Color online) The self-consistent (a) Hartree potential together with Hubbard term and (b) charge density profiles at the multi-critical point with $\mu=1.9\mathrm{meV}$ and $B_x=1.2\mathrm{T}$. Comparison between the non-interacting (c) and interacting (d) band structures at the multicritical point. $\varepsilon_r=18$ for $\mathrm{InSb}$ and $d=10\mathrm{nm}$.[]{data-label="fig2"}](fig2.eps)
When Coulomb interactions are turned on, the charges distribute inhomogeneously along the transverse direction due to the long-range repulsive potential computed from Eq. (\[vhart\]), as shown in Fig. \[fig2\]. We first focus on the effects of Coulomb interactions on the multicritical point in the white frame shown in Fig. \[fig1\](c). Figs. \[fig2\](a) and (b) show the electronic potential and the corresponding charge density profiles in the transverse direction at the critical point, where one topological regime transitions into another. The application of gate voltages can tune the chemical potential of the system and vary the charge density. The overall charge distribution shows a density enhancement toward the wire edges, which is related to the electrostatic Coulomb repulsion in a hard-wall confined structure. The applied magnetic field lifts the spin degeneracy, while the short-range Coulomb potential accounted for by Eq. (\[hamtu\]) further enhances the Zeeman splitting, leading to an asymmetry between the spin-up and spin-down branches of the potential. As a result, the opposite spin subbands are not equally populated, as indicated in the charge density profile (see Fig. \[fig2\](b)). The roles of on-site Coulomb interactions in single-mode wires have been studied previously by Stoudenmire [*et al.,*]{} [@prb84.014503(2011)] and the conclusions apply equally to multichannel nanowires. The experimental significance of on-site interactions is that they lower the critical magnetic field for entering the topological phase, with Majorana modes existing at the two ends of the wire.
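The Zeeman enhancement just described follows directly from the mean-field decoupling of Eq. (\[hamtu\]): a spin-$\sigma$ electron feels the shift $U\langle n_{\bar\sigma}\rangle$, so a population imbalance widens the effective splitting. A minimal sketch; the value of $U$ and the populations are illustrative assumptions, not parameters from the paper:

```python
def effective_splitting(Ez, U, n_up, n_dn):
    """Energy difference between the spin-up and spin-down on-site levels,
    h_up - h_dn = 2*Ez + U*(n_dn - n_up), after the mean-field decoupling
    V_{U,sigma} = U*<n_{-sigma}>. Units: meV."""
    return 2.0*Ez + U*(n_dn - n_up)

# If the field populates the lower (spin-down) branch more heavily, n_dn > n_up
# and the interaction adds to the bare Zeeman splitting:
gap = effective_splitting(1.0, 5.0, 0.1, 0.3)  # -> 3.0 meV versus 2.0 meV bare
```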
![(Color online) The topological phase diagrams in the presence of Coulomb interactions for (a) $W=200\mathrm{nm}$ and (b) $W=400\mathrm{nm}$.[]{data-label="fig3"}](fig3.eps)
Fig. \[fig2\](c) shows the corresponding non-interacting band structure at the multicritical point at $\mu=1.9\mathrm{meV}$ and $B_x=1.2\mathrm{T}$, where the spin-up branch of the third subband overlaps with the lower-index subbands. As the magnetic field increases, the particle and hole subbands approach each other and a topological phase transition results through a process of gap closure and reopening. The system may undergo a series of such phase transitions as the gap is further closed and reopened by subsequent subbands. In the presence of Coulomb interactions, as shown in Fig. \[fig2\](d), the low-energy subbands around small wave vectors retreat towards the particle and hole directions, respectively, while the minigap where the particle and hole subbands anticross remains unchanged. Evidently, the interactions do not spoil the overlap of the two subbands around zero energy but defer the emergence of the multicritical point. It will become clear in the following that the influence of interactions on multicritical points may vary according to their subband origin.
In Fig. \[fig3\], we show the roles of Coulomb interactions in different quantum wires over broad parameter ranges. The inhomogeneous electrostatic potential renormalizes the chemical potential due to electronic repulsion and shifts the topological phases to higher chemical potentials. The interactions also enlarge the areas of the topological regimes, which is consistent with the single-mode case where on-site interactions widen the chemical-potential window at fixed magnetic field, thereby enhancing the system's immunity against fluctuations of $\mu$. [@prb84.014503(2011)] Most strikingly, the long-range Coulomb interactions strongly modify the boundaries of the topological phases and even generate jagged boundaries. It is also evident that some very small isolated topological regimes may be produced near the phase boundaries, and topologically trivial phases may appear inside topological areas as well. This is because, close to the phase boundaries, small perturbations are sufficient to introduce or interrupt a closure of the energy gap, so that additional phase transitions occur. For the case of $W=400\mathrm{nm}$, the topological island at $B_x=1.2\mathrm{T}$ and $\mu=2.6\mathrm{meV}$ appears as a remnant of the non-interacting multicritical point, since the subband overlap is preserved, as shown in Fig. \[fig2\](d). It has been pointed out that repulsive on-site interactions affect the topological phases only quantitatively and that the appearance of multicritical points remains similar. [@prb84.214528(2011)] In our case, the impact of long-range interactions on the multicritical point in the narrow wire is predominantly a shift to higher $\mu$, in agreement with the two-band model in Ref. \[\]. By contrast, long-range Coulomb interactions influence the multicritical points formed by the high-index subbands in a more profound way. From Fig. \[fig3\](b), one can see clearly that the two phase-boundary vertexes no longer coincide in chemical potential due to the interactions. Thus, the multicritical points from high-index subbands are more fragile to the transverse inhomogeneity of the Coulomb potential.
In fact, Coulomb interactions may play a much more crucial role in the manipulation of topological phase transitions. In graphene, San-Jose [*et al.*]{}, [@prx5.041042(2015)] have recently put forward a Majorana zero-mode mechanism based on interactions without the aid of spin-orbit coupling. This is very significant progress in the field and paves a way for constructing Majorana modes in graphene, in view of its rather weak spin-orbit coupling. The major advantage of long-range Coulomb interactions in the present case is that they can be tuned externally by gate voltages. Therefore, the long-range interaction can be an important tool that enables one to manipulate the phase transitions conveniently. It has been shown that long-range Coulomb interactions can generate band-structure warping and lead to an anomalous conductance reduction in graphene, indicating a possible topological transition. [@prb82.115311(2010); @prb79.035421(2009)] Based on these considerations, the study of interaction effects in nanowires is extremely important for the promising realization of a non-magnetic topological phase transition and the related Majorana quasi-particles.
![The differential conductance as a function of the bias voltage and the magnetic field for the cases of the (a) non-interacting and (b) interacting quantum wires. The system has parameters $\mu=2.5\mathrm{meV}$, $W=200\mathrm{nm}$ and $L=200\mathrm{nm}$. The pinch-off gate $U_p=8\mathrm{meV}$ with width $d=10\mathrm{nm}$.[]{data-label="fig4"}](fig4.eps)
![The differential conductance as a function of the bias voltage and the magnetic field for the cases of the (a) non-interacting and (b) interacting quantum wires. The system has parameters $\mu=1.4\mathrm{meV}$, $W=400\mathrm{nm}$ and $L=200\mathrm{nm}$. The pinch-off gate $U_p=8\mathrm{meV}$ with width $d=10\mathrm{nm}$.[]{data-label="fig5"}](fig5.eps)
![The differential conductance as a function of bias voltage and magnetic field for the cases of the (a) non-interacting and (b) interacting quantum wires. The system has parameters $\mu=2.8\mathrm{meV}$, $W=400\mathrm{nm}$ and $L=200\mathrm{nm}$. The pinch-off gate $U_p=8\mathrm{meV}$ with width $d=10\mathrm{nm}$.[]{data-label="fig6"}](fig6.eps)
We proceed by studying the differential conductance $dI/dV$ of quantum wires with a superconducting metal as the right lead. In our case, we consider the simplest hybrid structure, namely the normal-superconductor junction shown in Fig. \[fig1\](a). Various hybrid junctions hosting Majorana fermions have been studied extensively, and a rich phenomenology has been revealed. [@prb86.180503(R)(2012)] Here we are primarily concerned with the interaction effects on the differential conductance spectroscopy, for which a simple geometry is more illuminating. Figs. \[fig4\]-\[fig6\] show the differential conductance as a function of the bias voltage $V$ and the magnetic field $B_x$ for different widths and chemical potentials. The left and right panels correspond to the non-interacting and interacting cases, respectively.
In Fig. \[fig4\], the non-interacting $dI/dV$ peaks around the gap edges at zero magnetic field. With increasing magnetic field, the gap slowly shrinks and closes at the critical point of the phase transition, forming a prototypical Y-type structure in the $B_x$-$V$ (magnetic field-bias voltage) plane. As the gap is closed and reopened, the zero-bias peak (ZBP) develops, implying the formation of Majorana end states. Fig. \[fig4\](b) shows the interacting differential conductance through the topological superconductor, with the self-consistent potential calculated separately for each magnetic field. The onset of the ZBP moves towards higher magnetic fields, and the peak becomes even stronger at high magnetic fields compared with the non-interacting case. This change in the ZBP can be traced back to the phase diagram in Fig. \[fig3\](a): in the presence of Coulomb interactions, the phase boundary at $\mu=2.5\mathrm{meV}$ is displaced with respect to the magnetic field relative to the non-interacting case, consistent with the changes in the ZBP. However, this consistency is merely fortuitous, because the phase diagrams are obtained for infinitely long wires, while the $dI/dV$ spectroscopy is calculated through semi-infinite topological superconductors. For wider wires, the discrepancies are evident.
Fig. \[fig5\] shows the wider quantum wire with $W=400\mathrm{nm}$ and a smaller chemical potential $\mu=1.4\mathrm{meV}$. Besides a similar Y-type structure, the $dI/dV$ spectroscopy exhibits richer structure. The first visible peak appears at $B_x=0.2\mathrm{T}$ for the non-interacting wire and is due to Andreev bound states, since it splits with the magnetic field. The ZBP induced by Majorana zero modes arises at $B_x=0.3\mathrm{T}$ and builds up as the magnetic field increases. However, the ZBP is interrupted at higher magnetic fields by the phase transitions originating from different subbands, which can be verified against the phase diagram shown in Fig. \[fig1\](c). Fig. \[fig5\](b) shows the effects of Coulomb interactions on the differential conductance spectroscopy: the resonance from Andreev bound states at $B_x=0.2\mathrm{T}$ is diminished, and the ZBP from Majorana modes becomes stronger and more recognizable at intermediate magnetic fields. We further consider the wire at higher chemical potential, as shown in Fig. \[fig6\]. Compared with the non-interacting case, the interactions modify the particle and hole gap edges, reducing the minigap, which is reflected in resonances within the bulk gap at zero magnetic field in Fig. \[fig6\](b). The ZBP around $B_x\approx 1\mathrm{T}$ from the Majorana mode is suppressed, accompanied by the modification of phase transitions due to Coulomb interactions. In addition, the interactions also influence the positions of the resonances due to Andreev bound states.
From the above transport calculations, we arrive at several points regarding Majorana fermion observations. (i) Long-range Coulomb interactions modify the topological phase boundary and affect the position of ZBPs in parameter space. In experiments, the Majorana ZBP may therefore appear at quite different places from the non-interacting theoretical predictions for multichannel wires, which will no doubt cause difficulties in the recognition of Majorana fermions. (ii) Coulomb interactions can change the topological windows and the number of phase transitions, thereby strongly altering the appearance of Majorana ZBPs. Theoretical predictions based on single-particle models are unreliable for interpreting the experimental findings, particularly at high chemical potential in multiband wires.
We end this section with the following comments. Throughout our calculations, we fixed the pinch-off gate voltage and the wire length. It is expected that the pinch-off gate and wire length affect only the resonances in trivial phases and have no significant effect on the topological regimes. Also, our calculations are performed at zero temperature; we expect that finite temperature will influence the results quantitatively but not qualitatively, so our discussion and conclusions remain valid. Finally, the interplay between Coulomb interactions and disorder is an interesting question. Most recent research concentrates on disorder effects in the normal region of NS junctions. How disorder, together with the long-range Coulomb interaction, influences the topological phases is of interest both for experimental purposes and theoretically. However, this issue is beyond the scope of the present work and will be the central task of our future work.
Summary and Conclusions
=======================
We have calculated the phase diagrams of an infinitely long quasi-1D wire with strong spin-orbit interaction deposited on an s-wave superconductor. The phase diagrams in the $B_x$-$\mu$ plane demonstrate connected or disconnected topological phase regimes originating from different subbands. We incorporate Coulomb interactions in a self-consistent way within the Hartree approximation and study the potential and charge density distributions along the transverse direction. The repelling charges tend to accumulate along the wire edges and form characteristic oscillations in the density profile. The inhomogeneously distributed potentials alter the band structures strongly and modify the phase diagrams in a nontrivial way. In particular, the multicritical points from high-index subbands respond to the interactions more prominently than those from low-index subbands.
The changes in the phase diagram due to interactions can be detected experimentally by transport measurements, namely $dI/dV$ spectroscopy. The study of the role of interactions in $dI/dV$ is particularly important for the recognition of Majorana fermions in experiments. For multiband wires, the $dI/dV$ spectroscopy with interactions may differ dramatically from the non-interacting case. Careful investigation of the effects of Coulomb interactions on the transport properties therefore has far-reaching significance for the identification of Majorana fermions in experiments.
H.X. acknowledges financial support from the Department of Education of Jiangsu Province through Grant No. 164080H00210. J.W. acknowledges support from the NSFC (Grant No. 115074045).
---
abstract: 'Utilities use demand response to shift or reduce the electricity usage of flexible loads, to better match electricity demand to power generation. A common mechanism is peak pricing (PP), where consumers pay reduced (increased) prices for electricity during periods of low (high) demand, and its simplicity allows consumers to understand how their consumption affects costs. However, new consumer technologies like internet-connected smart thermostats simplify real-time pricing (RP), because such devices can automate the tradeoff between costs and consumption. These devices enable consumer choice under RP by abstracting this tradeoff into a question of quality of service (e.g., comfort) versus price. This paper uses a principal-agent framework to design PP and RP rates for heating, ventilation, and air-conditioning (HVAC) to address adverse selection due to variations in consumer comfort preferences. We formulate the pricing problem as a stochastic bilevel program, and solve it numerically by reformulation as a mixed integer program (MIP). Lastly, we compare the effectiveness of the different pricing schemes at reducing peak load or load variability. We find that PP induces HVAC consumption to spike high before, drop low during, and spike high again after the PP event, whereas RP achieves reductions in peak loads and load variability while preventing large spikes in electricity usage.'
author:
- 'John Audie Cabrera$^{2}$, Yonatan Mintz$^{1}$, Jhoanna Rhodette Pedrasa$^{2}$, and Anil Aswani$^{1}$[^1][^2][^3]'
bibliography:
- 'IEEEabrv.bib'
- 'hvar.bib'
title: '**Designing Real-Time Prices to Reduce Load Variability with HVAC**'
---
Introduction
============
High demand variability stresses the electrical grid by increasing the mismatch with supply, and it is costly for utilities because it requires adding redundant power generation. Demand response is an alternative that induces consumers to reduce or shift their consumption by setting prices by time of day [@Newsham2011; @Gyamfi2012; @Gyamfi2013; @Sun2013; @Strbac2008]. For example, peak pricing (PP) reduces the peak demand of electricity by charging consumers reduced (increased) rates for electricity during periods of low (high) demand. This is a common structure for demand response programs because the simplicity of PP allows consumers to understand how their consumption impacts their costs.
Real-time pricing (RP) of electricity is less common because historically the complex pricing structure of RP makes it difficult for consumers to match consumption to prices. However, new consumer technologies like internet-connected smart thermostats [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012b; @MaasoumyRosenbergSangiovanni-VincentelliEtAl2014; @ZugnoMoralesPinsonEtAl2013; @BorscheOldewurtelAndersson2013; @vrettos2013predictive] simplify RP, because such devices can automate the tradeoff between costs and consumption. These devices simplify RP by abstracting this tradeoff into a question of quality of service (e.g., comfort) versus price, which is easier for consumers to understand.
This paper designs PP and RP electricity rates using realistic, validated models of heating, ventilation, and air-conditioning (HVAC) [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], and there are three contributions. First, we use a principal-agent model [@LaffontMartimort2009; @aswani2012incentive] to formulate the problem of a utility designing rates for HVAC that responds to prices, where the consumer has an acceptable (but unknown to the utility) comfort level. The challenge is that prices must be designed so that inflexible (with respect to comfort) consumers do not get excessive benefits relative to flexible consumers, since flexible consumers provide more benefits to the utility. Second, we pose the design problem as a mixed integer program (MIP). Third, we present numerically solvable approximations of this MIP, and then evaluate the impact of the resulting PP and RP rates.
PP for HVAC Demand Response
---------------------------
HVAC is arguably the most significant target for demand response since it is the largest source of energy consumption in most buildings [@afram2014]. This is relevant from the standpoint of utilities because HVAC use is closely tied to high outdoor temperatures, which means that HVAC usage in different buildings is strongly correlated and is an important contributor to peak demand [@mendoza2012]. As a result, many studies have considered different aspects of PP for demand response of HVAC. A large number of demand response programs that have been implemented by utility companies use PP to reduce peak load [@Newsham2011; @Gyamfi2012; @Gyamfi2013; @Sun2013; @Strbac2008], and such programs have been found to provide varying levels of value to utilities. Within the controls literature, the use of model predictive control (MPC) techniques is particularly popular for demand response of HVAC [@kelman2011; @parisio2014; @mintz2016; @mintz2017behavioral] because of the ability of MPC to handle complex constraints.
RP for HVAC Demand Response
---------------------------
Recent work studied RP design for HVAC that automates price-responsiveness. One approach uses stochastic differential equations to design prices [@YangCallawayTomlin2014; @YangCallawayTomlin2015], and this work found a benefit to RP for a simplified HVAC model. In contrast, we consider in this paper the rate design problem using realistic, validated models of HVAC [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a]. Another body of work [@avci2012residential; @avci2013model] considers RP design using realistic HVAC models. Our paper differs in two substantive ways. The first is that we use a different notion of comfort: Comfort in [@avci2012residential; @avci2013model] was defined using the temperature set-point, whereas in our paper we define comfort using allowable deviations in the temperature from the desired value. The second is that our rate design considers adverse selection, the problem that arises when an inflexible (with respect to comfort) consumer accepts a rate designed for a flexible consumer.
Outline
-------
Sect. \[sec:cm\] describes our model for the consumer and our model for the electric utility company, including the principal-agent model the utility uses to design the electricity rates. The key feature of the model is the fact that consumers are either flexible or inflexible with regard to their comfort, but this information is hidden from the electric utility. The electricity rate will not be efficient for the utility if it does not account for this information asymmetry (formally known as *adverse selection*). Next, Sect. \[sec:nspp\] describes how to numerically solve the rate design problem using an MIP reformulation of the principal-agent model. As part of our approach, we derive relaxations that facilitate fast numerical solution. We conclude with Sect. \[sec:nr\], which numerically solves the pricing problem and then compares the impact of PP and RP on electricity consumption by HVAC.
Model of Consumer and Electric Utility {#sec:cm}
======================================
In this section, we present our model for the consumer and the electric utility. We also formally define the problem of using a principal-agent framework to design either PP or RP electricity prices for HVAC demand response.
Consumer Model
--------------
The first part of our model defines comfort in relation to deviations in room temperature from the desired value: Consumers are inflexible ($\pm 2^\circ$C deviation from desired temperature) or flexible ($\pm 3^\circ$C deviation from desired temperature) in their comfort, and these ranges are from the ASHRAE 55 standard [@Standard2010] that defines quantitative models of occupant comfort. We use $T_d$ to refer to a consumer’s desired room temperature, and $\overline{T},\underline{T}$ are the upper and lower bounds of comfort for the consumer. So if the consumer is inflexible, then $\underline{T} = T_d-2$ and $\overline{T} = T_d+2$. Similarly, if the consumer is flexible, then $\underline{T} = T_d-3$ and $\overline{T} = T_d+3$.
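As a minimal sketch of these comfort bands (the function name is ours, and only the $\pm 2^\circ$C and $\pm 3^\circ$C values are taken from the text):

```python
def comfort_bounds(T_d, flexible):
    """Return (T_lower, T_upper) for a consumer with desired temperature T_d.

    Inflexible consumers tolerate +/-2 C deviations and flexible consumers
    tolerate +/-3 C deviations, following the ASHRAE 55-based bands above.
    """
    delta = 3.0 if flexible else 2.0
    return T_d - delta, T_d + delta
```

For example, an inflexible consumer with $T_d = 24^\circ$C has the comfort band $[22, 26]$.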
The next part of our model describes the room temperature dynamics and provides an energy model for the consumer. We use a linear time-invariant model for room temperature $$T_{n+1} =k_{r}T_{n}+k_{c}u_{n}+k_{w}w_{n}+q_n,\label{eq: dynamic}$$ where $T_{n}$, $u_{n}$, $w_{n}$, $q_n$ are room temperature, HVAC control input, outside temperature, and heating load due to occupancy, respectively, and each time step is a 15 min interval. This model has been validated [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a]. The total energy usage of the consumer is $\sum_{n=1}^N(b_n + pu_n)$, where $b_n$ is nondeferrable electricity load, $p$ is a constant that converts input $u_n$ to energy consumption [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], and $N$ is a horizon.
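The dynamics (\[eq: dynamic\]) and the energy accounting can be rolled out directly, as in the sketch below; the function names and test inputs are illustrative and not from the paper:

```python
def simulate_room(T0, u, w, q, k_r, k_c, k_w):
    """Roll out T_{n+1} = k_r*T_n + k_c*u_n + k_w*w_n + q_n from initial
    temperature T0, given control, weather, and occupancy sequences."""
    T = [T0]
    for n in range(len(u)):
        T.append(k_r*T[-1] + k_c*u[n] + k_w*w[n] + q[n])
    return T

def total_energy(b, u, p):
    """Total consumer energy: nondeferrable load plus HVAC input scaled by p."""
    return sum(b_n + p*u_n for b_n, u_n in zip(b, u))
```

With $k_r = 1$ and the other coefficients zero, the room temperature stays constant, which is a useful sanity check of the rollout.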
An important component of our model characterizes the HVAC controller, which automates the tradeoff between room temperature and electricity consumption. In particular, we assume that the HVAC is controlled by MPC: $$\label{eqn:mpc}
\begin{aligned}
\min\ & \textstyle\sum_{n=1}^{N}\big((T_{n}-T_{d})^{2}+\gamma c_{n}u_{n}\big)\\
\mathrm{s.t.}\ & T_{n+1}=k_{r}T_{n}+k_{c}u_{n}+k_{w}w_{n}+q_n\\
& T_n \in [\underline{T},\overline{T}], u_n \in [0,\overline{u}],\quad \text{ for } n = 1,\ldots,N
\end{aligned}$$ where $\gamma$ is a constant that trades off temperature and electricity usage, $\overline{u}$ is the maximum control input, and $c_n$ is the price of electricity at time $n$.
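To make problem (\[eqn:mpc\]) concrete, the sketch below evaluates its objective for a candidate control sequence and rejects infeasible candidates; an actual MPC implementation would minimize over $u$ with a QP solver, which we omit here. All names and the dictionary encoding of the consumer data are ours:

```python
def mpc_cost(u, T0, prices, w, q, theta):
    """Evaluate sum((T_n - T_d)^2 + gamma*c_n*u_n) for a candidate control
    sequence u, or return None if a temperature or input bound is violated."""
    T, cost = T0, 0.0
    for n, (u_n, c_n) in enumerate(zip(u, prices)):
        if not (0.0 <= u_n <= theta["u_max"]):
            return None
        if not (theta["T_lo"] <= T <= theta["T_hi"]):
            return None
        cost += (T - theta["T_d"])**2 + theta["gamma"]*c_n*u_n
        # Room temperature update, Eq. (dynamic).
        T = theta["k_r"]*T + theta["k_c"]*u_n + theta["k_w"]*w[n] + q[n]
    return cost
```

A candidate exceeding the input bound $\overline{u}$ is rejected outright, mirroring the hard constraints in (\[eqn:mpc\]).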
The last part of the model describes what information is known by the consumer (and implicitly known by the HVAC controller). The variable $$\begin{gathered}
\theta = \big\{k_r,k_c,k_w,w_n,q_n,b_n,\gamma,T_d,\underline{T},\overline{T},\overline{u}, \\\text{for } n=1,\ldots,N\big\}\end{gathered}$$ completely characterizes each consumer, and it is known as *type* in the principal-agent literature [@LaffontMartimort2009; @aswani2012incentive]. (The value $p$ is a constant known by everyone.) We assume that the consumer (and HVAC controller) exactly knows the value of $\theta$, and knows the electricity price $\textbf{c} = \{c_1,\ldots,c_N\}$. Moreover, we use $J(\textbf{c}; \theta)$ to refer to the minimum value of (\[eqn:mpc\]), and $u^*(\textbf{c}; \theta)$ refers to the minimizer of (\[eqn:mpc\]).
Model of Electric Utility Company
---------------------------------
An important component in the electric utility model is the information asymmetry between the utility and consumers. Specifically, we assume the utility does not know $\theta$ for any single customer. Instead, the utility knows the overall probability distribution for $\theta$. (Recall the utility and consumers know $p$, which is a constant.) We also assume that both the utility and consumers know the electricity price $\textbf{c}$.
The next element in the utility model describes the goal of the electricity pricing for demand response. If the goal is to reduce peak load, then the utility aims to minimize $$V_{p} = \textstyle\mathbb{E}_\theta\Big(\sum_{n = t_1}^{t_2} u^*_n(\textbf{c}; \theta)\Big),$$ where $[t_1,t_2]$ is a time range during which the peak load is anticipated by the utility. If the goal is to reduce load variability, then the utility aims to minimize $$V_{l} = \textstyle\mathbb{E}_\theta\Big(\mathrm{var}_n\big(b_n + u_n^*(\textbf{c}; \theta)\big)\Big),$$ where $\mathrm{var}_n(\cdot)$ is the variance over $n=1,\ldots,N$. We will consider designing PP and RP for both goals.
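Given simulated consumption trajectories for sampled types $\theta$, both objectives can be estimated by Monte Carlo, as in the sketch below (names ours; we take $\mathrm{var}_n$ to be the population variance over the horizon, and use 0-indexed, inclusive peak windows):

```python
from statistics import mean, pvariance

def peak_load_objective(usage_samples, t1, t2):
    """Estimate V_p: expected HVAC consumption during the anticipated peak
    window [t1, t2], averaged over sampled consumer types."""
    return mean(sum(u[t1:t2 + 1]) for u in usage_samples)

def load_variance_objective(load_samples):
    """Estimate V_l: expected variance over time of the total load b_n + u_n."""
    return mean(pvariance(load) for load in load_samples)
```

Each element of `usage_samples` (respectively `load_samples`) is one trajectory $u^*_n(\mathbf{c};\theta)$ (respectively $b_n + u^*_n(\mathbf{c};\theta)$) for a sampled type.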
The electric utility is interested in designing $\textbf{c}$, and we describe the constraints that characterize PP and RP rates. If the utility is designing PP rates, then this means they are selecting from $$\mathcal{C}_{pp} = \left\{\textbf{c} :
\begin{aligned}
&c_n = c_{t_1}, &\text{ for } n \in[t_1,t_2]\\
& c_n = c_1, &\text{ for } n \in \{1,\ldots,N\}\setminus[t_1,t_2]
\end{aligned}\right\}.$$ This expresses prices that are constant within the peak period $[t_1,t_2]$, and constant (with a possibly different value) outside of the peak period. Similarly, if the utility is designing RP rates, then this means they are selecting from $$\mathcal{C}_{rp} = \left\{\textbf{c} :
\begin{aligned}
&c_1 = c_N\\
&|c_{n+1}-c_n| \leq \rho, &\text{ for } n =1,\ldots,N-1
\end{aligned}\right\}.$$ This expresses prices that are equal at the beginning and end of the horizon, and such that the rate of change is bounded by a constant $\rho$. Lastly, we use $\textbf{f} = \{f,\ldots,f\}$ to refer to a flat pricing structure, and $f$ in particular refers to the existing electricity price prior to the introduction of the demand response pricing.
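The two constraint sets can be checked programmatically. The sketch below (names ours; 0-indexed, inclusive peak window) tests whether a candidate price vector has the PP or RP structure:

```python
def in_C_pp(c, t1, t2, tol=1e-9):
    """Check the PP structure: a single price on the peak window [t1, t2]
    and a single (possibly different) price at all other times."""
    def flat(xs):
        return all(abs(x - xs[0]) <= tol for x in xs)
    peak = c[t1:t2 + 1]
    off = c[:t1] + c[t2 + 1:]
    return (not peak or flat(peak)) and (not off or flat(off))

def in_C_rp(c, rho, tol=1e-9):
    """Check the RP structure: equal first/last prices and a ramp-rate
    bound rho on consecutive price changes."""
    return (abs(c[0] - c[-1]) <= tol and
            all(abs(c[n + 1] - c[n]) <= rho + tol for n in range(len(c) - 1)))
```

For example, the vector `[7, 7, 20, 20, 7]` is a valid PP rate for a peak window covering the two middle periods, while `[7, 9, 7]` violates an RP ramp bound of 1.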
Principal-Agent Model for Pricing
---------------------------------
$k_{r}$ $k_{c}$ $k_{w}$ average $q_n$
-------- --------- --------- --------- ---------------
Room 1 0.63 2.64 0.10 6.78
Room 2 0.43 1.95 0.18 9.44
: \[tabel:thermal\_coeff\]Temperature Model Coefficients
The last part of the model for the utility describes the principal-agent formulation used to design electricity prices. In particular, we assume the utility solves $$\label{eqn:pam}
\begin{aligned}
\min\ & \textstyle V + \lambda\cdot\mathbb{E}_\theta\Big(\sum_{n=1}^N\big(f_n^{\vphantom{*}}u_n^*(\mathbf{f}; \theta) - c_n^{\vphantom{*}}u_n^*(\mathbf{c}; \theta)\big)\Big) \\
\mathrm{s.t.}\ & J(\textbf{c}; \theta) \leq J(\textbf{f}; \theta)\\
&\mathbf{c}\in\mathcal{C}\\
& c_n \in [\underline{c},\overline{c}],\quad \text{for } n=1,\ldots,N
\end{aligned}$$ to design the electricity rates, where $V$ is either $V_{p}$ (to minimize peak load) or $V_{l}$ (to minimize load variance), and $\mathcal{C}$ is either $\mathcal{C}_{pp}$ (for PP) or $\mathcal{C}_{rp}$ (for RP). Note the $\underline{c},\overline{c}$ are bounds on the minimum and maximum electricity rate, respectively.
Here, $\sum_{n=1}^N\big(f_nu_n^*(\mathbf{f}; \theta) - c_nu_n^*(\mathbf{c}; \theta)\big)$ is the amount of revenue the utility loses from implementing the new pricing $\textbf{c}$ (relative to the existing rate $\mathbf{f}$), and so this means $\lambda$ is a constant that the utility uses to trade off the demand response goal against revenue loss. We do not include the nondeferrable electricity load $b_n$ when defining revenue loss, because in our setting the electricity rates for the nondeferrable electricity load are different (and left unchanged) from the rates $\textbf{c}$ for HVAC electricity consumption.
There are two game-theoretic considerations that must be discussed when defining and solving principal-agent models [@LaffontMartimort2009; @aswani2012incentive]. The constraint $J(\textbf{c}; \theta) \leq J(\textbf{f}; \theta)$ is known as a *participation constraint*, and it ensures that the new electricity rates $\textbf{c}$ are such that the overall utility of the consumer under the new rates $\mathbf{c}$ is equal or better than the overall utility of the consumer under the original rate $\mathbf{f}$. The second game-theoretic aspect to be discussed is adverse selection. We mitigate adverse selection by minimizing the expectation (with respect to type $\theta$) of the goal $V$ and revenue loss.
Numerical Solution of Pricing Problem {#sec:nspp}
=====================================
This section studies how to solve the principal-agent model (\[eqn:pam\]). The main difficulty is that (\[eqn:pam\]) is a bilevel program [@ColsonMarcotteSavard2007; @aswani2016duality], which means that (\[eqn:pam\]) is an optimization problem in which some variables are solutions to optimization problems themselves. In particular, recall that $u^*(\textbf{c}; \theta)$ is the minimizer to (\[eqn:mpc\]). In order to solve (\[eqn:pam\]), we first show how the problem can be reformulated as a MIP. Then we describe some relaxations that facilitate numerical solution of the MIP.
MIP Reformulation of Pricing Problem
------------------------------------
The key idea in reformulating (\[eqn:pam\]) is to replace the convex optimization problem (\[eqn:mpc\]) by its KKT conditions, which provide constraints that $u^*(\textbf{c}; \theta)$ must satisfy. More specifically, the KKT conditions for (\[eqn:mpc\]) can be written as the following set of mixed integer linear constraints: $$\begin{aligned}
&T_{n+1}=k_{r}^{\vphantom{*}}T_{n}^{\vphantom{*}}+k_{c}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta) +k_{w}^{\vphantom{*}}w_{n}^{\vphantom{*}}+q_n^{\vphantom{*}}\\
& \gamma c_{n}-k_{c}\nu_{n}+\overline{\mu}_{n}-\underline{\mu}_{n}=0\\
&0\leq\overline{\mu}_{n}\leq M\eta_n\\
&0\leq\underline{\mu}_{n}\leq M\zeta_n\\
&\overline{u}\eta_n+\underline{u}\left(1-\eta_n\right)\leq u_{n}^*(\textbf{c}; \theta)\leq\underline{u}\zeta_n+\overline{u}\left(1-\zeta_n\right)\\
&\eta_n, \zeta_n \in\{0,1\}, \quad \text{for } n = 1,\ldots,N-1
\end{aligned}$$ and also that $$\begin{aligned}
&(T_{n}-T_{d})+\nu_{n-1}-k_{r}\nu_{n}+\overline{\xi}_{n}-\underline{\xi}_{n}=0\\
&0\leq\overline{\xi}_{n}\leq Mx_{n}\\
&0\leq\underline{\xi}_{n}\leq My_{n}\\
&\overline{T}x_{n}+\underline{T}\left(1-x_{n}\right)\leq T_{n}\leq\underline{T}y_{n}+\overline{T}\left(1-y_{n}\right)\\
&x_n, y_n \in\{0,1\}, \quad \text{for } n = 2,\ldots,N
\end{aligned}$$ where $M > 0$ is a sufficiently large constant [@Fortuny-AmatMcCarl1981].
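The role of the big-$M$ constraints is to encode complementary slackness: a multiplier can be nonzero only when its binary indicator is set, and the indicator in turn pins the primal variable to the corresponding bound. A minimal checker for the input-bound block (names ours) makes this explicit:

```python
def input_kkt_feasible(u_n, mu_hi, mu_lo, eta, zeta, u_lo, u_hi, M):
    """Verify the big-M complementarity block for the input bounds:
    eta = 1 allows mu_hi > 0 but forces u_n to the upper bound u_hi, and
    zeta = 1 allows mu_lo > 0 but forces u_n to the lower bound u_lo."""
    return (0.0 <= mu_hi <= M*eta and
            0.0 <= mu_lo <= M*zeta and
            u_hi*eta + u_lo*(1 - eta) <= u_n <= u_lo*zeta + u_hi*(1 - zeta))
```

In particular, a candidate with a positive upper-bound multiplier at an interior point of $[\underline{u},\overline{u}]$ is correctly rejected.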
The problem (\[eqn:pam\]) becomes an infinite dimensional MIP, after a few more reformulations. The first is to observe that $\mathbb{E}_\theta(f_n^{\vphantom{*}}u_n^*(\mathbf{f}; \theta))$ is a constant, and so can be removed from the objective function. The second is to note that $J(\textbf{f}; \theta)$ is also a constant since it does not depend on any decision variables. The third reformulation is to substitute $J(\textbf{c}; \theta)$ with $\sum_{n=1}^{N}\big((T_{n}^{\vphantom{*}}-T_{d^{\vphantom{*}}})^{2}+\gamma c_{n}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta)\big)$. Though this yields an infinite dimensional problem, using sample average approximation (SAA) [@kleywegt2002sample; @wang2008sample] to approximate the reformulation gives a finite dimensional MIP.
Relaxation of Pricing Problem
-----------------------------
The reformulated MIP described above is still difficult to solve because it involves nonconvex quadratic terms $c_{n}^{\vphantom{*}}u_{n}^*(\textbf{c}; \theta)$, and so additional relaxations are needed so that the price design problem can be solved using standard numerical optimization software. The quadratic term is relaxed using the McCormick envelope [@McCormick1976] to $$\begin{aligned}
r_n \geq \underline{c}u_{n}^*(\textbf{c}; \theta) + \underline{u}c_n - \underline{u}\cdot\underline{c}\\
r_n \geq \overline{c}u_{n}^*(\textbf{c}; \theta) + \overline{u}c_n - \overline{c}\cdot\overline{u}\\
r_n \leq \overline{c}u_{n}^*(\textbf{c}; \theta) + \underline{u}c_n - \overline{c}\cdot\underline{u}\\
r_n \leq \underline{c}u_{n}^*(\textbf{c}; \theta) + \overline{u}c_n - \underline{c}\cdot\overline{u}
\end{aligned}$$ for $n = 1,\ldots,N$. With this relaxation, the SAA form of the reformulated problem is a mixed-integer quadratic program (MIQP), which can be solved using existing software.
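As a sanity check of the envelope, the following sketch computes the McCormick lower and upper bounds at a point $(c_n, u_n)$ of the box and confirms that they sandwich the bilinear term (the envelope is tight at the corners of the box):

```python
def mccormick_bounds(c, u, c_lo, c_hi, u_lo, u_hi):
    """Lower and upper McCormick envelope of the bilinear term r = c*u
    over the box [c_lo, c_hi] x [u_lo, u_hi]."""
    lower = max(c_lo*u + u_lo*c - c_lo*u_lo,
                c_hi*u + u_hi*c - c_hi*u_hi)
    upper = min(c_hi*u + u_lo*c - c_hi*u_lo,
                c_lo*u + u_hi*c - c_lo*u_hi)
    return lower, upper
```

At an interior point the gap between the bounds is what the MIQP relaxation pays for linearity; at a box corner the gap collapses to zero.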
However, numerical solution of MIQPs can be slow. So we next describe two additional relaxations that speed up computation by approximating the MIQP using a mixed-integer linear program (MILP), which can typically be numerically solved faster. First, we replace $(T_n-T_d)^2$ with $3|T_n-T_d|$, since $(T_n-T_d)^2 \leq 3|T_n-T_d|$ when $|T_n-T_d| \leq 3$, as is the case from our assumptions about comfort. Second, we replace $\mathrm{var}_n\big(b_n + u_n^*(\textbf{c}; \theta)\big)$ with $N^{-1}\sum_{n=1}^N|b_n + u_n^*(\textbf{c}; \theta) - m(\theta)|$, where $m(\theta) = \frac{1}{N}\sum_{n=1}^N u_n^*(\textbf{f}; \theta)$. The idea is that we approximate the variance by (a) replacing squared deviations with absolute values, and (b) replacing the mean $\frac{1}{N}\sum_{n=1}^N u_n^*(\textbf{c}; \theta)$ appearing in the variance with the fixed mean $m(\theta)$.
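The two linearizations can be sketched as follows (names ours): the comfort surrogate over-approximates the quadratic on the allowed temperature band, and the variance surrogate is the mean absolute deviation about the fixed flat-rate mean $m(\theta)$:

```python
def comfort_penalty_lin(T, T_d):
    """Linear surrogate for (T - T_d)^2; an over-approximation whenever
    |T - T_d| <= 3, which holds on the comfort band."""
    return 3.0*abs(T - T_d)

def variance_proxy(load, m_theta):
    """MILP-friendly surrogate for var_n(load): mean absolute deviation
    about the fixed mean m_theta instead of squared deviation about the
    true mean."""
    return sum(abs(x - m_theta) for x in load)/len(load)
```

Both surrogates are expressible with auxiliary variables and linear constraints (the standard absolute-value epigraph trick), which is what makes the overall problem an MILP.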
**Flat Rate** **PP Rate** **RP Rate**
-- --------------- --------------- ------------- -------------
Peak Load 28.3 27.0 27.6
Load Variance 0.49 0.54 0.42
Peak Load 19.1 15.3 17.5
Load Variance 0.25 0.28 0.17
: Pricing to Reduce Peak Load\[tab:rpl\]
Numerical Results {#sec:nr}
=================
In this section, we numerically solve our MILP relaxation of the pricing problem for a 24 hour horizon. All of the calculations were conducted on a laptop computer with a dual-core 2.5GHz processor and 8GB of RAM using MATLAB with the CVX toolbox [@cvx] and the Gurobi solver [@GurobiOptimization2016]. We finish by evaluating the quality of the designed electricity rates, and the results are summarized in Tables \[tab:rpl\] and \[tab:rlv\].
Values of Type Parameters
-------------------------
For scenarios with PP and peak load reduction, we set the peak times to be 1pm–4pm. Our bounds on the electricity cost were $7\text{PhP}\leq c_{n}\leq20\text{PhP}$, where PhP is Philippine Pesos. Parameters in the room temperature dynamics (\[eq: dynamic\]) were chosen by uniformly sampling from the parameters in Table \[tabel:thermal\_coeff\]. The first set of parameters is from [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a], while the second set of parameters was replicated using the same methodology from [@AswaniMasterTanejaEtAl2012; @AswaniMasterTanejaEtAl2012a] with data from our UP-BRITE testbed located at the University of the Philippines, Diliman. We set the probability of a consumer to have high flexibility to be 0.2. Scenario generation for outside temperature was performed using data from Weather Underground [@wunderground], scenario generation for heating load due to occupancy was based on occupancy models, and scenario generation for nondeferrable electricity load was based on data from [@nrel].
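A possible implementation of this sampling step is sketched below. The exact sampling distribution is not fully specified in the text, so we assume thermal coefficients drawn uniformly between the Room 1 and Room 2 values of Table \[tabel:thermal\_coeff\] and a Bernoulli(0.2) draw for high flexibility; these assumptions and all names are ours:

```python
import random

ROOM1 = {"k_r": 0.63, "k_c": 2.64, "k_w": 0.10}
ROOM2 = {"k_r": 0.43, "k_c": 1.95, "k_w": 0.18}

def sample_type(rng):
    """Draw one consumer type: each thermal coefficient uniform between the
    two identified rooms, and flexible comfort with probability 0.2."""
    theta = {k: rng.uniform(min(ROOM1[k], ROOM2[k]), max(ROOM1[k], ROOM2[k]))
             for k in ROOM1}
    theta["flexible"] = rng.random() < 0.2
    return theta
```

Repeated calls with a seeded `random.Random` instance generate the scenario set used in the sample average approximation.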
Results and Discussion for PP
-----------------------------
Results for PP for peak load reduction are shown in Fig. \[fig:pppl\]. PP is effective in reducing the peak load for both the flexible and inflexible consumers; but there is a side effect in which the HVAC has sharp increases in electricity consumption both prior to and after the peak period, as well as a sharp decrease in consumption at the start and end of the peak period. This substantially increases the variability of the load profile. Results for PP for load variance reduction are shown in Fig. \[fig:ppv\]. PP is not effective in decreasing load variability because sharp changes in electricity price induce the HVAC to make sharp changes in consumption.
Results and Discussion for RP
-----------------------------
The results for RP for peak load reduction are shown in Fig. \[fig:rppl\]. The RP is effective in reducing the peak load for both the flexible and inflexible consumers, and it in fact also reduces the variance of the electricity load. The results for RP for load variance reduction are shown in Fig. \[fig:rpv\]. The RP is effective in decreasing the variability of the total electricity load, and it also reduces the peak load for both the flexible and inflexible consumers. The variance in load under this latter contract is lower than the variance under the former contract, but the difference is small.
Conclusion
==========
**Flat Rate** **PP Rate** **RP Rate**
-- --------------- --------------- ------------- -------------
Peak Load 28.3 27.3 27.6
Load Variance 0.49 0.49 0.41
Peak Load 19.1 17.7 17.8
Load Variance 0.25 0.26 0.17
: Pricing to Reduce Load Variance\[tab:rlv\]
We studied the problem of designing PP and RP electricity rates using realistic, validated models of HVAC. We used a principal-agent model to formulate the problem of a utility designing rates for HVAC that responds to prices, where the consumer has an acceptable (but unknown to the utility) comfort level. We showed how this problem could be posed as numerically tractable MILP’s, and then solved these MILP’s to compare the efficacy of different pricing schemes. We found that RP was substantially better at reducing load variability than PP, whereas PP was superior in reducing peak load. Directions for future work include incorporating more detailed consumer models to better understand best practices for the design of incentives for effective demand response.
[^1]: \*This work was supported in part by the Philippine-California Advanced Research Institutes (PCARI) and NSF Award CMMI-1450963.
[^2]: $^{1}$Y. Mintz and A. Aswani are with the Department of Industrial Engineering and Operations Research, University of California, Berkeley, CA 94720 USA [[email protected], [email protected]]{}
[^3]: $^{2}$J.A. Cabrera and J.R. Pedrasa are with the Electrical and Electronics Engineering Institute, University of the Philippines, Diliman, Quezon City, Philippines 1101 [john\[email protected], [email protected]]{}
---
abstract: 'We review the role of a muon collider in the study of Higgs bosons via production in the $s$-channel. Very precise measurements of a Standard Model-like Higgs boson mass and total width can be performed, and may lead to a discrimination between a Standard Model Higgs boson and the light Higgs boson of the minimal supersymmetric theory. The heavier Higgs bosons from a supersymmetric theory or an exotic Higgs sector can be studied in the $s$-channel. A muon collider may play a crucial role in separating the overlapping signals for two heavy nearly degenerate Higgs bosons, and may play an important role in precision tests of radiative corrections in the Higgs sector. The measurements at a muon collider will be complementary to the Higgs studies at the Large Hadron Collider and at an electron-positron Linear Collider.'
author:
- 'M. S. Berger'
bibliography:
- 'P1\_berger\_0717.bib'
title: 'Higgs Bosons at Muon Colliders[^1]'
---
Introduction
============
Interest has grown rapidly in muon colliders in the last several years as it became clear that the technological challenges might not be insurmountable[@Ankenbrandt:1999as]. Muon colliders are of interest to particle physics exploration for a number of reasons: a) the absence of significant bremsstrahlung allows one to contemplate circular accelerators of much higher energy than is possible with $e^+e^-$ machines, b) the coupling of Higgs bosons is proportional to particle mass (see Fig. (\[feyn-diag\])), and hence there is the possibility that Higgs bosons can be produced in reasonable numbers in the $s$-channel[@Barger:1997jm; @Barger:1995hr], c) there are regions of parameter space for which it will be impossible for either the Large Hadron Collider (LHC) or a Linear Collider (LC) to discover the heavier Higgs bosons of supersymmetry or, in the case of a general two-Higgs-doublet or more extended model, Higgs bosons of any mass with small or zero $VV$ coupling, d) the neutrinos from the decays of muons can be used as a source for a neutrino factory[@Ayres:1999ug; @Holtkamp:2000xn].
The large mass of the muon in comparison to that of the electron results in a number of advantageous features of a muon collider. The beam energy spreads of a muon collider can be very small, making them useful for studying narrow resonances like the SM Higgs boson. In addition, there is little bremsstrahlung, and the beam energy can be tuned to one part in a million through [*in situ*]{} spin-rotation measurements[@Raja:1998ip].
High rates of Higgs production at $\epem$ colliders rely on substantial $VV$ Higgs coupling for the Higgs-strahlung process $Z+$Higgs or for the $WW$ fusion process $WW\to$Higgs ($WW$ fusion). In contrast, a $\mupmum$ collider can provide a factory for producing a Higgs boson with little or no $VV$ coupling so long as it has SM-like (or enhanced) $\mupmum$ couplings. Important examples of this last form of Higgs boson are the heavy neutral Higgs bosons $\hh$ and $\ha$ of the Minimal Supersymmetric Standard Model (MSSM).
If a light ($\lsim 130$ GeV) Higgs boson exists, then both $\epem$ and $\mupmum$ colliders will be valuable; the Higgs boson would have been discovered at a previous higher energy collider (possibly a muon collider running at high energy), and then the Higgs factory would be built with a center-of-mass energy precisely tuned to the Higgs boson mass. The most likely scenario is that the Higgs boson is discovered at the LHC via gluon fusion ($gg\to H$) or perhaps earlier at the Tevatron via associated production ($q\bar{q}\to WH, t\overline{t}H$), and its mass is determined to an accuracy of about 100 MeV. If a linear collider has also observed the Higgs via the Higgs-strahlung process ($e^+e^-\to ZH$), one might know the Higgs boson mass to better than 50 MeV with an integrated luminosity of $500$ fb$^{-1}$. The muon collider would be optimized to run at $\sqrt{s}\approx m_H$, and this center-of-mass energy would be varied over a narrow range so as to scan over the Higgs resonance.
![Feynman diagram for $s$-channel production of a Higgs boson.[]{data-label="feyn-diag"}](P1_berger_0717_fig1.ps)
SM-Like Higgs Bosons
====================
The production of a Higgs boson (generically denoted $\h$) in the $s$-channel with interesting rates is a unique feature of a muon collider [@Barger:1997jm; @Barger:1995hr]. The resonance cross section is $$\sigma_h(\sqrt s) = {4\pi \Gamma(h\to\mu\bar\mu) \, \Gamma(h\to X)\over
\left( s - m_h^2\right)^2 + m_h^2 \left(\Gamma_{\rm tot}^h \right)^2}\,.
\label{rawsigform}$$ In practice, however, there is a Gaussian spread ($\srts$) to the center-of-mass energy and one must compute the effective $s$-channel Higgs cross section after convolution assuming some given central value of $\rts$: \_h(s) & =& [1]{} \_h () d [4m\_h\^2]{} . \[sigform\]
![ Number of events and statistical errors in the $b\overline{b}$ final state as a function of $\protect\rts$ in the vicinity of $\mhsm=110\gev$, assuming $R=0.003\%$, and $\epsilon L=0.00125$ fb$^{-1}$ at each data point. \[mhsmscan\]](P1_berger_0717_fig2.ps){width="5in"}
It is convenient to express $\srts$ in terms of the root-mean-square (rms) Gaussian spread of the energy of an individual beam, $R$: $$\srts = (2{\rm~MeV}) \left( R\over 0.003\%\right) \left(\sqrt s\over
100\rm~GeV\right) \,.$$ It is clear from Eq. (\[rawsigform\]) that a resolution $\srts \lsim \gamhtot$ is needed to be sensitive to the Higgs width. Furthermore, Eq. (\[sigform\]) indicates that $\br(\h\to \mu\anti\mu)$ must not be extremely suppressed for there to be large event rates for Higgs production. The width of a light SM-like Higgs is very small (e.g. a few MeV for $\mhsm\sim 110\gev$), implying the need for $R$ values as small as $\sim 0.003\%$ for studying a light SM-like $\h$. In addition to the very small beam energy spread, one must also be able to determine very accurately the beam energy to perform a scan over such a narrow resonance. This can be accomplished utilizing the spin precession of the muon noted above. A sample scan is illustrated in Fig. \[mhsmscan\] for a $\mhsm=110\gev$ SM Higgs boson.
The SM Higgs cross sections and backgrounds as well as the integrated luminosity required for a $5\sigma $ signal are shown in Fig. \[sm-higgs\] for $R=0.003\%$ and $\mhsm$ values such that the dominant decay mode is $b\overline{b}$. The significance of the signal is impacted by two physical processes: 1) For a Higgs mass near the $Z$-pole there is a significant background from $\mu^+\mu^-\to Z\to b\overline{b}$. However the most recent experimental results from LEP have pushed the SM Higgs mass bound well above $91$ GeV. 2) For a Higgs mass $\gsim 130$ GeV, the Higgs width $\gamhtot$ becomes much larger as the $WW^\star$ decay channel opens up.
The Higgs bosons in supersymmetric models are in general detectable at muon colliders. If the masses of the supersymmetric particles are large, the Higgs sector typically exhibits decoupling behavior in which the lightest supersymmetric Higgs boson $\hl$ will be very similar to the $\hsm$ when the other Higgs bosons are heavy, and the $\hl$ rates will be very similar to $\hsm$ rates. On the other hand, the heavier Higgs bosons in a typical supersymmetric model decouple from pairs of gauge bosons $VV$ at large mass and remain reasonably narrow ($<1$ GeV unless the $t\overline{t}$ decay mode is open). As a result, their $s$-channel production rates remain large, and a muon collider can avoid the production channels that depend on a sizable coupling to gauge bosons.
![The SM Higgs cross sections and backgrounds in $b\bar b,\ WW^*$ and $ZZ^*$. Also shown is the luminosity needed for a 5 standard deviation detection in $b\bar b$. From Ref. [@Barger:1997jm]. For a SM-like $\h$, at $\sqrt s = \mh \approx 115$ GeV, the $b\bar b$ final state rates are $\approx 10^4\:\:{\rm events\:\:\times L}(fb^{-1})$ for both the signal and the background. \[sm-higgs\]](P1_berger_0717_fig3.eps){width="\textwidth"}
What can a muon collider add to the LHC and LC? The LHC and quite likely a linear collider will be available already, and the Higgs boson will be detected and some of its properties determined before a muon collider will become operational. Current expectations for the luminosity at an LC are 500 fb$^{-1}$ over 1-2 years. This yields a SM Higgs boson production rate of greater than $10^4$ per year in the process $e^+e^-\to Z\h$. Therefore the latest estimates of the luminosity at a linear collider yield numbers of Higgs bosons that are comparable to what will be available at a muon collider/Higgs factory with its more modest integrated luminosity (expected with the current machine parameters) of the order of one inverse femtobarn. A linear collider with such high luminosity can certainly perform quite accurate measurements of certain Higgs parameters such as the Higgs mass, couplings to gauge bosons, couplings to heavy quarks, etc.[@Battaglia:2000jb].
The $s$-channel production process allows one to determine the mass, total width, and the cross sections $\overline \sig_h(\mupmum\to\h\to X)$ for several final states $X$ to very high precision. The Higgs mass, total width and the cross sections can be used to constrain the parameters of the Higgs sector. For example, in the MSSM their precise values will constrain the Higgs sector parameters $\mha$ and $\tanb$ (where $\tanb$ is the ratio of the two vacuum expectation values (vevs) of the two Higgs doublets of the MSSM). The main question is whether these constraints will be a valuable addition to LHC and LC constraints.
Precise measurements of the couplings of the Higgs boson to the Standard Model particles are important tests of the mass generation mechanism. In the Standard Model with one Higgs doublet, this coupling is proportional to the particle mass. In the more general case there can be mixing angles present in the couplings. Precision measurements of the couplings can distinguish the Standard Model Higgs boson from the SM-like Higgs boson typically present in a more general model. If deviations are found, their magnitude can be crucial for constraining the parameters of the more general Higgs sector. In particular, it might be possible to estimate the masses of the other Higgs bosons of the extended Higgs sector, thereby allowing a more focused search for them.
The precisions possible at a muon collider for measuring $\mh$ and $\gamhtot$ of a SM-like $\h$ with $\mh\sim 110\gev$ are $1-3\times 10^{-6}$ and $0.2$, respectively. To achieve these accuracies, one first determines the Higgs mass to about 1 MeV by the preliminary scan illustrated in Fig. \[mhsmscan\]. Then, a dedicated three-point fine scan[@Barger:1997jm] near the resonance peak using $L\sim 0.2\fbi$ of integrated luminosity (corresponding to a few years of operation) would be performed. For a SM Higgs boson with a mass sufficiently below the $WW^\star$ threshold, the Higgs total width is very small (of order several MeV), and the only process where it can be measured [*directly*]{} is in the $s$-channel at a muon collider. An accurate measurement of $\gamhtot$ would be a very valuable input for precision tests of the Higgs sector. In particular, since all the couplings of the Standard Model $\hsm$ are known, $\gamhsmtot$ is precisely predicted. Therefore, the precise determination of $\gamhtot$ obtained by this scan would be an important test of the Standard Model, and any deviation would be evidence for a nonstandard Higgs sector (or other new physics).
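The interplay between the beam energy spread and the tiny Higgs width can be illustrated with a toy numerical sketch. This is not the analysis of Ref. [@Barger:1997jm]; the simple Gaussian-convolution model and all numbers (mass 115 GeV, width 3 MeV, the two $R$ values) are illustrative assumptions:

```python
import numpy as np

def breit_wigner(e, m, gamma):
    # relativistic Breit-Wigner line shape in sqrt(s) = e (arbitrary normalisation)
    return 1.0 / ((e**2 - m**2)**2 + m**2 * gamma**2)

def peak_height(m, gamma, spread, n=20001):
    # effective peak rate at sqrt(s) = m after convolving the resonance with
    # a Gaussian beam-energy spread of standard deviation `spread` (GeV)
    e = np.linspace(m - 8 * (spread + gamma), m + 8 * (spread + gamma), n)
    w = np.exp(-0.5 * ((e - m) / spread) ** 2)
    w /= w.sum()
    return float(np.sum(w * breit_wigner(e, m, gamma)))

# illustrative values: m_h = 115 GeV, Gamma_h = 3 MeV
sharp = peak_height(115.0, 0.003, 115.0 * 0.00003)  # R ~ 0.003%
broad = peak_height(115.0, 0.003, 115.0 * 0.001)    # R ~ 0.1%
```

With a spread comparable to the width the resonance survives the convolution, while a spread tens of times larger dilutes the effective peak, which is why the very small $R$ matters for the light Higgs.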
Other interesting measurements of Higgs boson properties can be performed at a muon collider in the case where at least a hundred inverse femtobarns of luminosity is available. Then the mass, width and spin of a SM-like Higgs boson can also be determined by operating either a muon collider or a linear collider at the $Z\h$ production threshold where the rate is sensitive to the Higgs mass[@Barger:1997pv]. With 100 fb$^{-1}$ of integrated luminosity, an error of less than $100$ MeV can be achieved[@Barger:1997pv] for $\mh<150$ GeV. The shape of the $\ell^+\ell^-\to Z\h$ threshold cross section can also be used to determine the spin and to check the CP nature of the Higgs[@Miller:2001bi].
Heavy Higgs Bosons
==================
In supersymmetric models there are multiple physical Higgs bosons. Often the Higgs spectrum includes a SM-like Higgs boson with mass close to the $Z$ boson mass and some heavier Higgs bosons whose couplings differ greatly from those of a SM particle of the same mass. For example, in the MSSM there is a light, neutral $\hl$ and two heavier neutral Higgs bosons, $\hh$ and $\ha$. As one adjusts the parameters of the theory to make the $\hh$ and $\ha$ heavier, the light Higgs boson $\hl$ becomes more and more like the SM Higgs boson. It may very well be the case that after the initial discovery of this SM-like Higgs boson the primary question will involve detecting deviations from the SM Higgs sector by a) measuring very precisely the SM-like Higgs boson properties, and/or b) directly discovering additional Higgs bosons.
In the context of the MSSM, it is highly likely that the process $e^+e^-\to ZH$ used to find and study the light Higgs state at a first generation LC will not be suitable for the heavier Higgs bosons, because in the decoupling limit the coupling of the heavy Higgs to gauge bosons is greatly suppressed (this is a corollary to the statement that the light Higgs boson is Standard Model-like). There is a $250-500\gev$ range of heavy Higgs boson masses for which discovery is not possible via $\hh\ha$ pair production at a $\rts=500\gev$ LC. Further, the $\ha$ and $\hh$ cannot be detected in this mass range at either the LHC or LC for a wedge of moderate $\tanb$ values. (For large enough values of $\tanb$ the heavy Higgs bosons are expected to be observable in $b\anti b \ha,b\anti b \hh$ production at the LHC via their $\tau ^+\tau ^-$ decays and also at the LC.) A linear collider operating in the $\gamma \gamma $ mode can produce Higgs bosons in the $s$-channel, and there have been a number of studies of such processes[@Jikia:1993di; @Berger:1993tr; @Dicus:1994ux; @Gounaris:2000un; @Muhlleitner:2001kw; @Asner:2001ia; @Berger:1992nr]. This requires that such an option exist, and the energy of the $\gamma \gamma $ system is not as sharply peaked at the center-of-mass energy as it is for the muon collider.
A muon collider can fill some, perhaps all of this moderate $\tanb$ wedge. If $\tanb$ is large, the $\mupmum \hh$ and $\mupmum\ha$ couplings (proportional to $\tanb$ times a SM-like value) are enhanced, thereby leading to enhanced production rates in $\mupmum$ collisions. These bosons can be discovered via the radiative return mechanism[@Barger:1997jm], and once a peak is found the machine energy can be set to $\mha$ or $\mhh$ and the muon collider becomes a Higgs factory for the heavier Higgs bosons. The resolution requirements for studying the heavy Higgs bosons in the $s$-channel are not as stringent as those for the light Higgs boson because the heavier Higgs boson widths are generally much larger. Since $R=0.1\%$ is sufficient, much higher luminosity ($L\sim 2-10~{\rm fb}^{-1}/{\rm yr}$) would be possible as compared to that for $R=0.01\%-0.003\%$ as required for studying the $\hl$.
In the MSSM, the heavy Higgs bosons are largely degenerate, especially in the decoupling limit where they are heavy. In that case, a muon collider with sufficient energy resolution might be the only possible means for separating out these states. Examples showing the $\hh$ and $\ha$ resonances for $\tan \beta =5$ and $10$ are shown in Fig. \[H0-A0-sep\]. For the larger value of $\tan \beta$ the resonances are clearly overlapping. For the better energy resolution of $R=0.01\%$, the two distinct resonance peaks are still visible, but they are smeared out and merge into one broad peak for $R=0.06\%$.
![Separation of $A$ and $H$ signals for $\tan\beta=5$ and $10$. From Ref. [@Barger:1997jm]. \[H0-A0-sep\]](P1_berger_0717_fig4.ps){width="5in"}
Muon colliders excel at making precise measurements of Higgs boson masses since they can exploit the $s$-channel production process. This is reminiscent of the very accurate determination of the $Z$ boson mass to just 2.2 MeV from the LEP measurements[@Groom:2000in]. Precise measurements of supersymmetric Higgs boson masses could provide a powerful window on radiative corrections[@Berger:2001et]. Supersymmetry together with gauge invariance in the MSSM implies the mass-squared sum rule $$\begin{aligned}
&&m_{h^0}^2+m_{H^0}^2=m_{A^0}^2+m_Z^2+\Delta \;,\end{aligned}$$ where $\Delta $ is a calculable radiative correction (the tree-level sum rule results from setting $\Delta =0$). This formula involves observables (masses) that can be precisely measured in the $s$-channel processes. Solving for the mass difference $$\begin{aligned}
&&m_{A^0}-m_{H^0}={{m_{h^0}^2-m_Z^2-\Delta}\over {m_{A^0}+m_{H^0}}}\;,\end{aligned}$$ one obtains a form indicating that in the decoupling limit, $m_{A^0}\to \infty$, the mass difference between the heavy Higgs bosons becomes small. As discussed in the previous section, the light Higgs mass $m_{h^0}$ can be measured to less than an MeV in the $s$-channel. The masses of and the mass difference between the heavy Higgs states $H^0$ and $A^0$ can also be measured precisely by $s$-channel production. The ultimate precision that can be obtained on the masses of the $H^0$ and $A^0$ depends strongly on the masses themselves and $\tan \beta$. But a reasonable expectation is that a scan through the resonances should be able to determine the masses and the mass-difference to some tens of MeV with just $0.1$ fb$^{-1}$ of integrated luminosity[@Berger:2001et]. Altogether these mass measurements yield a value for the radiative correction $\Delta$ to a precision of order $10$ GeV$^2$. Since the typical size of $\Delta$ is of order $10^4$ GeV$^2$, this constitutes a measurement of roughly one part in $10^3$. The quantity $\Delta$ is calculable in terms of the self-energy diagrams of the Higgs bosons[@Berger:1990hg], and a comparison between the measured value and the theoretical prediction yields a test of radiative corrections in the MSSM. Further progress in the theoretical calculation of $\Delta$ would be needed to fully exploit the expected precision of the experimental measurements.
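The sum rule and its rearranged form are algebraically equivalent, as a quick numerical check confirms. The mass values below are hypothetical, chosen only to illustrate the decoupling behaviour (a small numerator divided by a large denominator):

```python
import math

# hypothetical mass values in GeV (illustration only, not measurements)
m_h, m_H, m_A, m_Z = 115.0, 300.0, 299.6, 91.1876

# radiative correction extracted from the sum rule
# m_h^2 + m_H^2 = m_A^2 + m_Z^2 + Delta
delta = m_h**2 + m_H**2 - m_A**2 - m_Z**2

# mass difference reconstructed from the rearranged formula
dm = (m_h**2 - m_Z**2 - delta) / (m_A + m_H)
```

For heavy, nearly degenerate states the reconstructed difference `dm` coincides with $m_{A^0}-m_{H^0}$ identically, independent of the chosen inputs.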
Concluding Remarks
==================
Recent experimental results hint that a muon collider may play a crucial role in studying the next generation of physics signals. There is evidence from LEP[@Barate:2000ts; @Abreu:2000fw; @Acciarri:2000ke; @Abbiendi:2000ac; @Okpara:2001jf] for a Higgs boson near $m_H\simeq 115$ GeV. This $\gsim 2\sigma$ signal is not definitive, but it has been taken very seriously since it is consistent with the current precision electroweak data and fits well with a supersymmetric interpretation. A Higgs boson with such a mass is in the optimal range for study at a Higgs factory. Such a Higgs boson sits comfortably above the $Z$-pole where there is a large background from $Z$ decay to $b\overline{b}$, and a $115\gev$ mass is sufficiently below the $WW^\star$ threshold that the decay width remains small and the ability of the muon collider to achieve a very narrow beam energy spread can be exploited.
In the MSSM such a Higgs boson mass of $115$ GeV is near the theoretical upper limit of $m_{H^0}<130$ GeV, and would indicate a value of the supersymmetry parameter $\tan \beta$ substantially above 1 (assuming stop masses $\lsim 1\tev$). This is consistent with recent evidence for non-SM contributions to the anomalous magnetic moment of the muon[@Brown:2001mg] which also can be explained in the MSSM with a moderately large value of $\tan \beta$. If these early indications prevail, and we are left with a supersymmetric Higgs sector with large $\tan \beta$, then it is likely that the heavy Higgs $\hh$ and $\ha$ will not be observable at the LHC or a LC. The detection of these Higgs bosons could be accomplished in the $s$-channel at a muon collider, and some precision tests involving the Higgs boson masses can be performed to check radiative corrections in the Higgs sector. More generally, the muon collider has the potential to find and study Higgs bosons that exist in more general models than the MSSM with extended Higgs sectors. In this more general context, the muon collider offers the possibility of studying the CP nature of the Higgs bosons that are found.
Finally, the muon collider program encompasses much more than the Higgs factory physics described here. Interesting physics can be envisioned at all stages of the development of muon colliders: from neutrino factories to Higgs factories to even higher energies.
This work was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-91ER40661.
[^1]: Submitted to the Proceedings of “The Future of Particle Physics”, Snowmass 2001, P1 group. An expanded version of the Physics of Higgs Factories has been submitted to the E1 Group[@e1group].
---
abstract: 'This paper studies differential graded modules and representations up to homotopy of Lie $n$-algebroids, for general $n\in\mathbb{N}$. The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy are explained. In particular, the case of Lie 2-algebroids is analysed in detail. The compatibility of a Poisson bracket with the homological vector field of a Lie $n$-algebroid is shown to be equivalent to a morphism between the coadjoint and the adjoint modules leading to an alternative characterisation of non-degeneracy of higher Poisson structures. Moreover, the Weil algebra of a Lie $n$-algebroid is computed explicitly in terms of splittings, and representations up to homotopy of Lie $n$-algebroids are used to encode decomposed VB-Lie $n$-algebroid structures on double vector bundles.'
author:
- 'M. Jotz Lean, R. A. Mehta, T. Papantonis'
title: 'Modules and representations up to homotopy of Lie $n$-algebroids'
---
Introduction
============
Lie $n$-algebroids, for $n\in\mathbb{N}$, are graded geometric structures which generalise the notion of Lie algebroids. They have become a field of much interest in mathematical physics, since they form a nice framework for higher analogues of Poisson and symplectic structures.
Courant algebroids [@LiWeXu97] give an important example of such higher structures. The works of Courant and Weinstein [@CoWe88] and of Hitchin and Gualtieri [@Hitchin03; @Gualtieri03; @Gualtieri07] show that Courant algebroids serve as a convenient framework for Hamiltonian systems with constraints, as well as for generalised geometry. A significant result of Roytenberg [@Roytenberg02] and Ševera [@Severa05] showed that Courant algebroids are in one-to-one correspondence with Lie 2-algebroids which are equipped with a compatible symplectic structure.
The standard super-geometric description of a Lie $n$-algebroid generalises the differential algebraic way of defining usual Lie algebroids, as a vector bundle $A$ over a smooth manifold $M$ together with a degree 1 differential operator on the space $\Omega^\bullet(A):=\Gamma(\wedge^\bullet A^*)$. In the language of graded geometry, this is equivalent to a graded manifold of degree 1 equipped with a homological vector field [@Vaintrob97], i.e. a degree 1 derivation on its sheaf of functions which squares to zero and satisfies the graded Leibniz rule. A Lie $n$-algebroid is then defined as a graded manifold ${\mathcal{M}}$ of degree $n$, whose sheaf of functions ${\mathcal{C}^\infty}({\mathcal{M}})$ is equipped with a homological vector field ${\mathcal{Q}}$. In more “classical” geometric terms, a Lie $n$-algebroid can also be defined as a graded vector bundle ${\underline{A}}=\bigoplus_{i=1}^n A_i$ over a smooth manifold $M$ together with some multi-brackets on its space of sections $\Gamma({\underline{A}})$ which satisfy some higher Leibniz and Jacobi identities [@ShZh17]. A Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$ is called *Poisson* if its underlying manifold carries a degree $-n$ Poisson structure $\{\cdot\,,\cdot\}$ on its sheaf of functions ${\mathcal{C}^\infty}({\mathcal{M}})$, such that the homological vector field is a derivation of the Poisson bracket.
A well-behaved representation theory of Lie $n$-algebroids for $n\geq2$ has not been developed yet. In the case $n=1$, i.e. in the case of usual Lie algebroids, Gracia-Saz and Mehta [@GrMe10], and independently Abad and Crainic [@ArCr12], showed that the notion of *representation up to homotopy* is a good notion of representation, which includes the adjoint representation. Roughly, the idea is to let the Lie algebroid act via a differential on Lie algebroid forms which take values on a cochain complex of vector bundles instead of just a single vector bundle. This notion is essentially a ${\mathbb{Z}}$-graded analogue of Quillen’s super-representations [@Quillen85]. After their discovery, representations up to homotopy have been extensively studied in other works, see e.g. [@Mehta09; @ArCrDh11; @ArSc11; @ArSc13; @DrJoOr15; @Mehta15; @TrZh16; @GrJoMaMe18; @BrCaOr18; @Jotz19b; @BrOr19]. In particular, in [@Mehta09] it was shown that representations up to homotopy of Lie algebroids are equivalent, up to isomorphism, to Lie algebroid modules in the sense of [@Vaintrob97].
This paper extends the above notions of modules, and consequently of representations up to homotopy, to the world of higher Lie algebroids. The definition is the natural generalisation of the one from the case of usual Lie algebroids explained above, i.e. differential graded modules over the space of smooth functions of the underlying graded manifold. It is analysed in detail, including the two most important examples of representations, namely, *the adjoint* and *the coadjoint* representations up to homotopy. An equivalent geometric point of view of representations is given by a special class of double vector bundles together with a linear homological vector field.
Our general motivation for studying representations up to homotopy of higher Lie $n$-algebroids comes from the case $n=2$, and in particular from Courant algebroids. More precisely, it is the search for a good notion not only of the adjoint representation of a Courant algebroid, but also of its ideals, similar to the work done in [@JoOr14]. The natural question that arises then is the following:
Is a compatible Poisson or symplectic structure on a Lie $n$-algebroid encoded in its adjoint representation?
The answer to this question is positive, since it turns out that a Poisson bracket on a Lie $n$-algebroid gives rise to a natural morphism from the coadjoint to the adjoint representation (see Theorem \[thm\_poisson\], Corollary \[cor\_poisson\] and Section \[morphism\_of\_ad\*\_ad\_Poisson012\]), i.e. a map which commutes with the differentials. Further, the Poisson structure is symplectic if and only if this map is in fact an isomorphism. This result is already known in the case of Poisson Lie $0$-algebroids, i.e. ordinary Poisson manifolds $(M,\{\cdot\,,\cdot\})$, and Courant algebroids over a point, i.e. quadratic Lie algebras $(\mathfrak{g},[\cdot\,,\cdot],\langle\cdot\,,\cdot\rangle)$. In the former case the map reduces to the natural map $\sharp\colon T^*M\to TM$ obtained from the Poisson bracket on $M$, and in the latter case it is the inverse of the map $\mathfrak{g}\to\mathfrak{g}^*,\ x\mapsto\langle x,\cdot\rangle$ defined by the nondegenerate pairing.
Outline of the paper {#outline-of-the-paper .unnumbered}
--------------------
This paper consists of seven sections and is organised as follows. Section \[prelim\_sec\] sets notation, conventions, and recalls the definitions and constructions of graded vector bundles and Lie algebroids.
Section \[Lie\_n\] offers a quick introduction to graded manifolds, (split) Lie $n$-algebroids, and Poisson and symplectic structures on Lie $n$-algebroids. In particular, it discusses the space of generalised functions of a Lie $n$-algebroid, gives the geometric description of a split Lie 2-algebroid [@Jotz19b] which is used in the rest of the paper, and defines the Weil algebra of a Lie $n$-algebroid – as it is done in [@Mehta06] in the case $n=1$.
Sections \[modules\] and \[ruth\] generalise the notions of Lie algebroid modules and representations up to homotopy to the setting of Lie $n$-algebroids. They offer a detailed explanation of the theory and give some useful examples, including the classes of the adjoint and coadjoint modules, whose properties are discussed thoroughly, especially in the case of Lie 2-algebroids. Section \[modules\] provides the answer to the question expressed above about the connection between higher Poisson or symplectic structures and the adjoint and coadjoint modules.
Section \[Split VB-Lie n-algebroids\] recalls some basic definitions and examples from the theory of double vector bundles and defines VB-Lie $n$-algebroids together with the prototype example of the tangent prolongation of a Lie $n$-algebroid. It also shows that there is a 1-1 correspondence between split VB-Lie $n$-algebroids and representations up to homotopy of degree $n+1$, which relates again the adjoint representation of a Lie algebroid with its tangent prolongation.
Finally, Section \[applications\] discusses in the split case the results of this paper. It analyses the Weil algebra of a split Lie $n$-algebroid using vector bundles and connections, and it gives more details about the map between the coadjoint and adjoint representations for split Poisson Lie algebroids of degree $n\leq2$.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors thank Miquel Cueca Ten and Chenchang Zhu for interesting discussions and remarks. During the preparation of this work, the authors learnt that Caseiro and Laurent-Gengoux also consider representations up to homotopy of Lie $n$-algebroids, in particular the adjoint representation, in their work in preparation [@CaLa19].
Preliminaries {#prelim_sec}
=============
This section recalls basic definitions and conventions that are used later on. In what follows, $M$ is a smooth manifold and all the considered objects are supposed to be smooth even if not explicitly mentioned. Moreover, all (graded) vector bundles are assumed to have finite rank.
(Graded) vector bundles and complexes
-------------------------------------
Given two ordinary vector bundles $E\to M$ and $F\to N$, there is a bijection between vector bundle morphisms $\phi:E\to F$ covering $\phi_0\colon M\to N$ and morphisms of modules $\phi^\star\colon \Gamma(F)\to \Gamma(E)$ over the pull-back $\phi_0^*\colon C^\infty(N)\to C^\infty(M)$. Explicitly, the map $\phi^\star$ is defined by $\phi^\star(f)(m)=\phi_m^*f_{\phi_0(m)}$, for $f\in\Gamma(F),m\in M$.
Throughout the paper, underlined symbols denote graded objects. For instance, a graded vector bundle is a vector bundle $q\colon \underline{E}\to M$ together with a direct sum decomposition $$\underline{E}=\bigoplus_{n\in\mathbb{Z}}E_n[n]$$ of vector bundles $E_n$ over $M$. *Here, an element $e\in E_n$ is (degree-)homogeneous of degree $|e|=-n$. That is, for $k\in{\mathbb{Z}}$, the degree $k$-component ${\underline{E}}^k$ of ${\underline{E}}$ equals $E_{-k}$.*
All the usual algebraic constructions from the theory of usual vector bundles extend to the graded setting. More precisely, for graded vector bundles ${\underline{E}},\underline{F}$, the dual ${\underline{E}}^*$, the direct sum ${\underline{E}}\oplus\underline{F}$, the space of graded homomorphisms $\underline{\operatorname{Hom}}({\underline{E}},\underline{F})$, the tensor product ${\underline{E}}\otimes\underline{F}$, and the symmetric and antisymmetric powers $\underline{S}({\underline{E}})$ and $\underline{A}({\underline{E}})$ are defined.
A (cochain) complex of vector bundles is a graded vector bundle $\underline{E}$ over $M$ equipped with a degree one endomorphism (called the differential) $$\ldots\overset{\partial}{\to}E_{i+1}\overset{\partial}{\to}E_{i}\overset{\partial}{\to}E_{i-1}\overset{\partial}{\to}\ldots$$ which squares to zero; $\partial^2=0$. A $k$-morphism between two complexes $(\underline{E},\partial)$ and $(\underline{F},\partial)$ over $M$ is a degree $k$ map of graded vector bundles $\phi\colon \underline{E}\to \underline{F}$ over the identity on $M$ that commutes with the differentials: $\phi\circ\partial=\partial\circ\phi$.
Given two complexes $(\underline{E}, \partial)$ and $(\underline{F},\partial)$, one may construct new complexes by considering all the constructions that were discussed before. Namely, the bundles $\underline{S}(\underline{E})$, $\underline{A}(\underline{E})$, $\underline{E}^*$, $\underline{\operatorname{Hom}}(\underline{E},\underline{F})$ and $\underline{E}\otimes\underline{F}$ inherit a degree one operator that squares to 0. The basic principle for all the constructions is the graded derivation rule. For example, for $\phi\in\underline{\operatorname{Hom}}(\underline{E},\underline{F})$ and $e\in \underline{E}$: $$\partial(\phi(e)) = \partial(\phi)(e) + (-1)^{|\phi|}\phi(\partial(e)).$$ This can also be expressed using the language of (graded) commutators as $$\partial(\phi) = [\partial,\phi] = \partial\circ\phi -
(-1)^{|\phi|\cdot|\partial|}\phi\circ\partial = \partial\circ\phi -
(-1)^{|\phi|}\phi\circ\partial,$$ where the last equality uses $|\partial|=1$.
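For bundles over a point (i.e. graded vector spaces of finite rank), the commutator formula can be checked directly with matrices. The following minimal sketch — with arbitrary ranks and random entries — verifies that the induced operator on $\underline{\operatorname{Hom}}$ squares to zero whenever the underlying differential does:

```python
import numpy as np

rng = np.random.default_rng(0)

# differential on E = E_1 (+) E_0: the only nonzero block maps E_1 to E_0,
# so D @ D = 0 holds automatically
n0, n1 = 2, 3
D = np.zeros((n0 + n1, n0 + n1))
D[:n0, n0:] = rng.standard_normal((n0, n1))

def d_hom(phi, k):
    # induced differential on Hom(E, E) for a degree-k map phi:
    # [partial, phi] = D phi - (-1)^k phi D
    return D @ phi - (-1) ** k * phi @ D

phi = rng.standard_normal((n0 + n1, n0 + n1))  # an arbitrary degree-1 map
dd_phi = d_hom(d_hom(phi, 1), 2)  # applying the differential raises the degree by one
```

The vanishing of `dd_phi` follows from $D^2=0$ alone, independently of the choice of $\phi$.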
Dull algebroids vs Lie algebroids
---------------------------------
A dull algebroid [@Jotz18a] is a vector bundle $Q\to M$ endowed with an anchor $\rho_Q\colon Q\to TM$ and a bracket (i.e. an $\mathbb R$-bilinear map) $[\cdot\,,\cdot]\colon\Gamma(Q)\times\Gamma(Q)\to\Gamma(Q)$ on its space of sections $\Gamma(Q)$, such that $$\label{comp_anchor_bracket}
\rho_Q[q_1,q_2] = [\rho_Q(q_1),\rho_Q(q_2)]$$ and the Leibniz identity is satisfied in both entries: $$[f_1q_1,f_2q_2] = f_1f_2[q_1,q_2] + f_1\rho_Q(q_1)f_2\cdot q_2 - f_2\rho_Q(q_2)f_1\cdot q_1,$$ for all $q_1,q_2\in\Gamma(Q)$ and all $f_1,f_2\in C^\infty(M)$.
A dull algebroid is a Lie algebroid if its bracket is, in addition, skew-symmetric and satisfies the Jacobi identity $$\operatorname{Jac}_{[\cdot\,,\cdot]}(q_1,q_2,q_3) := [q_1,[q_2,q_3]] - [[q_1,q_2],q_3] - [q_2, [q_1,q_3]] = 0,$$ for all $q_1,q_2,q_3\in\Gamma(Q)$.
Given a skew-symmetric dull algebroid $Q$, there is an associated operator ${\mathrm{d}}_Q$ of degree 1 on the space of $Q$-forms $\Omega^\bullet(Q) = \Gamma(\wedge^\bullet Q^*)$, defined by the formula $$\begin{aligned}
{\mathrm{d}}_Q\tau(q_1,\ldots,q_{k+1}) = & \sum_{i<j}(-1)^{i+j}\tau([q_i,q_j],q_1,\ldots,\hat{q_i},\ldots,\hat{q_j},\ldots,q_{k+1}) \\
& + \sum_i(-1)^{i+1}\rho_Q(q_i)(\tau(q_1,\ldots,\hat{q_i},\ldots,q_{k+1})),
\end{aligned}$$ for $\tau\in\Omega^k(Q)$ and $q_1,\ldots,q_{k+1}\in\Gamma(Q)$. The operator ${\mathrm{d}}_Q$ satisfies the usual derivation rule $${\mathrm{d}}_Q(\tau_1\wedge\tau_2) = ({\mathrm{d}}_Q\tau_1)\wedge\tau_2 + (-1)^{|\tau_1|}\tau_1\wedge {\mathrm{d}}_Q\tau_2,$$ for $\tau_1,\tau_2\in\Omega^\bullet(Q)$. In general, the operator ${\mathrm{d}}_Q$ squares to zero only on 0-forms $f\in \Omega^0(Q)=C^\infty(M)$: ${\mathrm{d}}_Q^2f=0$ for all $f\in C^\infty(M)$ is equivalent to the compatibility of the anchor with the bracket \[comp\_anchor\_bracket\]. The vanishing ${\mathrm{d}}_Q^2 = 0$ on all forms is equivalent to $(Q,\rho_Q,[\cdot\,,\cdot])$ being a Lie algebroid. *From now on, all considered dull brackets are assumed to be skew-symmetric even if it is not stated explicitly*.
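Over a point ($\rho_Q=0$) a Lie algebroid is a Lie algebra and ${\mathrm{d}}_Q$ is the Chevalley-Eilenberg differential. A small numerical sketch for $\mathfrak{so}(3)\cong(\mathbb{R}^3,\times)$ checks ${\mathrm{d}}_Q^2=0$ on 1-forms, which in this case is exactly the Jacobi identity:

```python
import numpy as np

bracket = np.cross  # so(3) realised as R^3 with the cross product; anchor = 0

def d1(alpha):
    # d_Q on a 1-form alpha (a covector): (d alpha)(x, y) = -alpha([x, y])
    return lambda x, y: -float(alpha @ bracket(x, y))

def d2(beta):
    # d_Q on a 2-form beta via the Koszul formula with trivial anchor:
    # (d beta)(x, y, z) = -beta([x,y], z) + beta([x,z], y) - beta([y,z], x)
    return lambda x, y, z: (-beta(bracket(x, y), z)
                            + beta(bracket(x, z), y)
                            - beta(bracket(y, z), x))

rng = np.random.default_rng(1)
alpha = rng.standard_normal(3)
x, y, z = rng.standard_normal((3, 3))
val = d2(d1(alpha))(x, y, z)  # vanishes by the Jacobi identity
```

Replacing `bracket` by a bilinear skew map that fails the Jacobi identity would make `val` nonzero, illustrating that a dull algebroid with ${\mathrm{d}}_Q^2\neq0$ is not a Lie algebroid.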
Basic connections and basic curvature
-------------------------------------
Let $Q\to M$ be a dull algebroid as in the last section and suppose that $E\to M$ is another vector bundle. A $Q$-connection on $E$ is defined as usual: it is a map $\nabla\colon\Gamma(Q)\times\Gamma(E)\to\Gamma(E),(q,e)\mapsto
\nabla_q e$ such that it is $C^\infty(M)$-linear in the first argument and satisfies $$\nabla_q(fe) = \rho_Q(q)f\cdot e + f\nabla_q e,$$ for all $q\in\Gamma(Q),e\in\Gamma(E)$ and $f\in
C^\infty(M)$. The dual connection $\nabla^*$ is the $Q$-connection on $E^*$ defined by the formula $$\langle \nabla_q^* \varepsilon,e \rangle = \rho_Q(q)\langle \varepsilon,e \rangle - \langle \varepsilon,\nabla_q e \rangle,$$ for all $\varepsilon\in\Gamma(E^*),e\in\Gamma(E)$ and $q\in\Gamma(Q)$, where $\langle\cdot\,,\cdot\rangle$ is the natural pairing between $E$ and its dual $E^*$.
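Over a point the anchor term drops out, and the dual connection is simply minus the transpose of the original one. A minimal matrix sketch of the defining pairing identity (the dimension and the random entries are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))      # nabla_q acting on E = R^4 (rho_Q = 0)
A_dual = -A.T                        # candidate for the dual connection nabla*_q

eps = rng.standard_normal(4)         # a section of E*
e = rng.standard_normal(4)           # a section of E

# the defining identity <nabla*_q eps, e> = -<eps, nabla_q e> (anchor term = 0)
lhs = float((A_dual @ eps) @ e)
rhs = -float(eps @ (A @ e))
```

The identity holds exactly, since $\langle -A^{T}\varepsilon, e\rangle = -\langle\varepsilon, Ae\rangle$ for any matrix $A$.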
A $Q$-connection on a graded vector bundle $(\underline{E}=\bigoplus_{n\in\mathbb Z} E_n[n], \partial)$ is a family of $Q$-connections $\nabla^n$, $n\in\mathbb Z$, on each of the bundles $E_n$. If $\underline{E}$ is a complex with differential $\partial$, then the $Q$-connection is *a connection on the complex $(\underline{E},\partial)$* if it commutes with $\partial$: $\partial(\nabla^{n}_{q}e)=\nabla^{n-1}_{q}(\partial e)$ for $q\in\Gamma(Q)$ and $e\in \Gamma(E_n)$.
The curvature of a $Q$-connection on a vector bundle $E$ is defined by $$R_\nabla(q_1,q_2)e = \nabla_{q_1}\nabla_{q_2} e - \nabla_{q_2}\nabla_{q_1} e - \nabla_{[q_1,q_2]} e,$$ for all $q_1,q_2\in\Gamma(Q)$ and $e\in\Gamma(E)$, and generally, it is an element of $\Gamma(Q^*\otimes Q^*\otimes E^*\otimes E)$. If the dull bracket of $Q$ is skew-symmetric, then the curvature is a 2-form with values in the endomorphism bundle $\operatorname{End}(E)=E^*\otimes E$: $R_\nabla\in\Omega^2(Q,\operatorname{End}(E))$. A connection is as usual called *flat* if its curvature $R_\nabla$ vanishes identically.
Given a $Q$-connection $\nabla$ on $E$, and assuming that $[\cdot\,,\cdot]$ is skew-symmetric, there is an induced operator ${\mathrm{d}}_\nabla$ on the space of $E$-valued $Q$-forms $\Omega^\bullet(Q,E) = \Omega^\bullet(Q)\otimes \Gamma(E)$ given by the usual Koszul formula $$\begin{aligned}
{\mathrm{d}}_\nabla\tau(q_1,\ldots,q_{k+1}) = & \sum_{i<j}(-1)^{i+j}\tau([q_i,q_j],q_1,\ldots,\hat{q_i},\ldots,\hat{q_j},\ldots,q_{k+1}) \\
& + \sum_i(-1)^{i+1}\nabla_{q_i}(\tau(q_1,\ldots,\hat{q_i},\ldots,q_{k+1})),
\end{aligned}$$ for all $\tau\in\Omega^k(Q,E)$ and $q_1,\ldots,q_{k+1}\in\Gamma(Q)$. It satisfies $${\mathrm{d}}_\nabla(\tau_1\wedge\tau_2) = {\mathrm{d}}_Q\tau_1\wedge\tau_2 + (-1)^{k}\tau_1\wedge {\mathrm{d}}_\nabla\tau_2,$$ for all $\tau_1\in\Omega^k(Q)$ and $\tau_2\in\Omega^\bullet(Q,E)$, and squares to zero if and only if $Q$ is a Lie algebroid and $\nabla$ is flat.
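When $Q$ is a Lie algebra and $\nabla$ is flat (i.e. a representation), ${\mathrm{d}}_\nabla^2=0$ as stated. The adjoint representation of $\mathfrak{so}(3)$ on $\mathbb{R}^3$ gives a quick numerical instance of this, sketched here with random inputs:

```python
import numpy as np

br = np.cross  # so(3) bracket; nabla_x e = [x, e] is the (flat) adjoint action

def d0(e):
    # E-valued 0-form -> 1-form: (d_nabla e)(x) = nabla_x e
    return lambda x: br(x, e)

def d1(om):
    # E-valued 1-form -> 2-form via the Koszul formula:
    # (d_nabla om)(x, y) = -om([x, y]) + nabla_x(om(y)) - nabla_y(om(x))
    return lambda x, y: -om(br(x, y)) + br(x, om(y)) - br(y, om(x))

rng = np.random.default_rng(3)
e = rng.standard_normal(3)
x, y = rng.standard_normal((2, 3))
val = d1(d0(e))(x, y)  # vanishes: the bracket is Lie and the adjoint action is flat
```

Here $({\mathrm{d}}_\nabla^2 e)(x,y) = -[[x,y],e] + [x,[y,e]] - [y,[x,e]]$, which is zero by the Jacobi identity.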
Suppose that $\nabla\colon \mathfrak{X}(M)\times\Gamma(Q)\to\Gamma(Q)$ is a $TM$-connection on the vector bundle $Q$. The induced *basic connections* on $Q$ and $TM$ are defined in the same way as those associated to Lie algebroids [@GrMe10; @ArCr12]: $$\nabla^{\text{bas}}=\nabla^{\text{bas},Q}\colon \Gamma(Q)\times\Gamma(Q)\to\Gamma(Q),\
\nabla^{\text{bas}}_{q_1} q_2 = [q_1,q_2] +
\nabla_{\rho_Q(q_2)} q_1$$ and $$\nabla^{\text{bas}}=\nabla^{\text{bas},TM}\colon\Gamma(Q)\times\mathfrak{X}(M)\to\mathfrak{X}(M),\
\nabla^{\text{bas}}_{q} X = [\rho_Q(q),X] + \rho_Q(\nabla_X
q).$$ The basic curvature is the form $R_\nabla^{\text{bas}}\in \Omega^2(Q, \operatorname{Hom}(TM,Q))$ defined by $$R_\nabla^{\text{bas}}(q_1,q_2)X = -\nabla_X[q_1,q_2] +
[q_1,\nabla_Xq_2] +[\nabla_Xq_1,q_2] +
\nabla_{\nabla_{q_2}^\text{bas}X}q_1 -
\nabla_{\nabla_{q_1}^\text{bas}X}q_2.$$ The basic connections and the basic curvature satisfy $$\label{eq_bas_Q_1}
\nabla^{\text{bas},TM}\circ\rho_Q = \rho_Q\circ\nabla^{\text{bas},Q}$$ $$\label{eq_bas_Q_2}
\rho_Q\circ R_\nabla^{\text{bas}} = R_{\nabla^{\text{bas},TM}}$$ $$\label{eq_bas_Q_3}
R_\nabla^{\text{bas}}\circ\rho_Q + \operatorname{Jac}_{[\cdot\,,\cdot]} = R_{\nabla^{\text{bas},Q}}.$$
(Split) Lie $n$-algebroids and $\mathbb{N}{\mathcal{Q}}$-manifolds {#Lie_n}
==================================================================
This section recalls basic results about $\mathbb{N}$-manifolds and Lie $n$-algebroids (based on [@Jotz18b]), and describes the Weil algebra of a Lie $n$-algebroid for general $n$ (see [@Mehta09] for $n=1$). It focusses on the category of *split $\mathbb{N}$-manifolds*, which is isomorphic modulo some choices of splittings to the category of $\mathbb{N}$-manifolds ([@BoPo13; @Roytenberg02]).
(Split) $\mathbb{N}$-manifolds and homological vector fields {#lie2lie3}
------------------------------------------------------------
Graded manifolds of degree $n\in\mathbb{N}$ are defined in terms of sheaves over ordinary smooth manifolds as follows.
An *$\mathbb{N}$-manifold ${\mathcal{M}}$ of degree $n$ and dimension $(m;r_1,\ldots,r_n)$* is a sheaf ${\mathcal{C}^\infty}({\mathcal{M}})$ of $\mathbb{N}$-graded, graded commutative, associative, unital $C^\infty(M)$-algebras over a smooth $m$-dimensional manifold $M$, which is locally freely generated by $r_1+\ldots+r_n$ elements $\xi_1^1,\ldots,\xi_1^{r_1}, \xi_2^1,\ldots,
\xi_2^{r_2},\ldots, \xi_n^1,\ldots,\xi_n^{r_n}$ with $\xi_i^j$ of degree $i$ for $i\in\{1,\ldots,n\}$ and $j\in \{1,\ldots,r_i\}$.
A morphism of $\mathbb{N}$-manifolds $\mu\colon {\mathcal{N}}\to{\mathcal{M}}$ over a smooth map $\mu_0\colon N\to M$ of the underlying smooth manifolds is a morphism of sheaves of graded algebras $\mu^\star\colon {\mathcal{C}^\infty}({\mathcal{M}})\to {\mathcal{C}^\infty}({\mathcal{N}})$ over $\mu_0^\ast\colon C^\infty(M)\to C^\infty(N)$.
For short, “${[n]}$-manifold” means “$\mathbb{N}$-manifold of degree $n$”. The degree of a (degree-)homogeneous element $\xi\in {\mathcal{C}^\infty}({\mathcal{M}})$ is written $|\xi|$. Note that the degree 0 elements of ${\mathcal{C}^\infty}({\mathcal{M}})$ are just the smooth functions of the manifold $M$. By definition, a $[1]$-manifold ${\mathcal{M}}$ is a locally free and finitely generated sheaf $ {\mathcal{C}^\infty}({\mathcal{M}})$ of $C^\infty(M)$-modules. That is, ${\mathcal{C}^\infty}({\mathcal{M}})=\Gamma(\wedge E^*)$ for a vector bundle $E\to M$. In that case, ${\mathcal{M}}=:E[1]$. *Recall that this means that the elements of $E$ have degree $-1$, and so the sections of $E^*$ have degree $1$.*
Consider now a (non-graded) vector bundle $E$ of rank $r$ over the smooth manifold $M$ of dimension $m$. Similarly as before, assigning the degree $n$ to the fibre coordinates of $E$ defines an $[n]$-manifold of dimension $(m;r_1=0,\ldots,r_{n-1}=0,r_n=r)$ denoted by $E[n]$, with ${\mathcal{C}^\infty}(E[n])^n=\Gamma(E^*)$. More generally, let $E_1,\ldots,E_n$ be vector bundles of ranks $r_1,\ldots,r_n$, respectively, and assign the degree $i$ to the fibre coordinates of $E_i$, for each $i=1,\ldots,n$. The direct sum ${\underline{E}}=E_1[1]\oplus\ldots\oplus E_n[n]$ is a graded vector bundle with grading concentrated in degrees $-1,\ldots,-n$. When seen as an $[n]$-manifold, $E_1[1]\oplus\ldots\oplus E_n[n]$ has the local basis of sections of $E_i^*$ as local generators of degree $i$ and thus its dimension is $(m;r_1,\ldots,r_n)$.
An $[n]$-manifold of the form $E_1[1]\oplus\ldots\oplus E_n[n]$ as above is called a *split $[n]$-manifold*.
The relation between $[n]$-manifolds and split $[n]$-manifolds is explained by the following theorem, which is implicit in [@Roytenberg02] and explicitly proved in [@BoPo13].
Any $[n]$-manifold is non-canonically diffeomorphic to a split $[n]$-manifold.
Note that under the above correspondence, the structure sheaf of an \[$n$\]-manifold ${\mathcal{M}}\simeq \underline{E} = E_1[1]\oplus\ldots\oplus E_n[n]$ becomes $${\mathcal{C}^\infty}({\mathcal{M}}) \simeq \Gamma(\underline{S}(\underline{E}^*)),$$ and a different choice of splitting leaves the bundles unchanged. In particular, for the case of a split \[2\]-manifold ${\mathcal{M}}=\underline{E}=E_1[1]\oplus E_2[2]$ the graded functions are $${\mathcal{C}^\infty}({\mathcal{M}})=\Gamma(\underline{S}(\underline{E}^*))= \Gamma(\wedge E^*_1\otimes SE^*_2),$$ where the grading is defined such that $${\mathcal{C}^\infty}({\mathcal{M}})^i=\bigoplus_{k+2\ell=i}\Gamma\left(\wedge^kE^*_1\otimes
S^\ell E^*_2\right).$$
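Concretely, the first few homogeneous parts of this decomposition read $${\mathcal{C}^\infty}({\mathcal{M}})^0=C^\infty(M),\qquad {\mathcal{C}^\infty}({\mathcal{M}})^1=\Gamma(E_1^*),\qquad {\mathcal{C}^\infty}({\mathcal{M}})^2=\Gamma(\wedge^2E_1^*)\oplus\Gamma(E_2^*),$$ $${\mathcal{C}^\infty}({\mathcal{M}})^3=\Gamma(\wedge^3E_1^*)\oplus\Gamma(E_1^*\otimes E_2^*),$$ since for $i=3$ the pairs $(k,\ell)$ with $k+2\ell=3$ are $(3,0)$ and $(1,1)$.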
Using the language of graded derivations, the usual notion of a vector field can be generalized to a notion of vector field on an $[n]$-manifold ${\mathcal{M}}$.
A vector field of degree $j$ on ${\mathcal{M}}$ is a degree $j$ (graded) derivation of ${\mathcal{C}^\infty}({\mathcal{M}})$, i.e. a map $\mathcal{X}:{\mathcal{C}^\infty}({\mathcal{M}})\to {\mathcal{C}^\infty}({\mathcal{M}})$ such that $|\mathcal{X}(\xi)|=j+|\xi|$ and $\mathcal{X}(\xi\zeta)=\mathcal{X}(\xi)\zeta+(-1)^{j|\xi|}\xi \mathcal{X}(\zeta)$, for all homogeneous elements $\xi,\zeta\in {\mathcal{C}^\infty}({\mathcal{M}})$.
As usual, $|\mathcal{X}|$ is the degree of a vector field $\mathcal{X}$. The Lie bracket of two vector fields $\mathcal{X},\mathcal{Y}$ on ${\mathcal{M}}$ is the graded commutator $$[\mathcal{X},\mathcal{Y}]=\mathcal{X}\mathcal{Y}-(-1)^{|\mathcal{X}||\mathcal{Y}|}\mathcal{Y}\mathcal{X}.$$ The following relations hold:
(i) $[\mathcal{X},\mathcal{Y}]=-(-1)^{|\mathcal{X}||\mathcal{Y}|}[\mathcal{Y},\mathcal{X}]$,
(ii) $[\mathcal{X},\xi
\mathcal{Y}]=\mathcal{X}(\xi)\mathcal{Y}+(-1)^{|\mathcal{X}||\xi|}\xi[\mathcal{X},\mathcal{Y}]$,
(iii) $(-1)^{|\mathcal{X}||\mathcal{Z}|}[\mathcal{X},[\mathcal{Y},\mathcal{Z}]]
+(-1)^{|\mathcal{Y}||\mathcal{X}|}[\mathcal{Y},[\mathcal{Z},\mathcal{X}]]
+(-1)^{|\mathcal{Z}||\mathcal{Y}|}[\mathcal{Z},[\mathcal{X},\mathcal{Y}]]=0$,
for $\mathcal{X},\mathcal{Y},\mathcal{Z}$ homogeneous elements of the space of (graded) derivations of ${\mathcal{C}^\infty}({\mathcal{M}})$, and $\xi,\zeta$ homogeneous elements of ${\mathcal{C}^\infty}({\mathcal{M}})$.
The local generators $\xi_i^j$ of ${\mathcal{C}^\infty}({\mathcal{M}})$ over an open set $U\subseteq M$ given by the definition of ${\mathcal{M}}$ define the (local) vector fields $\partial_{\xi_i^j}$ of degree $-i$, which send $\xi_i^j$ to $1$ and the other local generators to $0$. The sheaf $\underline{\operatorname{Der}}_U({\mathcal{C}^\infty}({\mathcal{M}}))$ of graded derivations of ${\mathcal{C}^\infty}_U({\mathcal{M}})$ is freely generated as a $C^\infty_U({\mathcal{M}})$-module by $\partial_{x_k}$ and $\partial_{\xi_i^j}$, where $x_1,\ldots,x_m$ are coordinates for $M$ defined on $U$.
Note that in the case of a split $[n]$-manifold $E_1[1]\oplus\ldots\oplus E_n[n]$, each section $e\in\Gamma(E_j)$ defines a derivation $\hat{e}$ of degree $-j$ on ${\mathcal{M}}$ by the relations: $\hat{e}(f)=0$ for $f\in{\mathcal{C}^\infty}(M)$, $\hat{e}(\varepsilon)=\langle\varepsilon,e\rangle$ for $\varepsilon\in\Gamma({\underline{E}}^*)$ with $|\varepsilon|= j$, and $\hat{e}(\varepsilon)=0$ for $|\varepsilon|\neq j$. In particular, $\widehat{e_j^i}=\partial_{\varepsilon_j^i}$ for $\{e_j^i\}$ a local basis of $E_j$ and $\{\varepsilon_j^i\}$ the dual basis of $E_j^*$.
Given $TM$-connections $\nabla^i\colon\mathfrak{X}(M)\to\operatorname{Der}(E_i)$ for all $i$, the space of vector fields over ${\mathcal{M}}$ is generated as a ${\mathcal{C}^\infty}({\mathcal{M}})$-module by $$\{ \nabla^1_X \oplus \ldots \oplus \nabla^n_X\ |\
X\in\mathfrak{X}(M) \}\cup\{ \hat{e}\ |\ e\in\Gamma(E_i)\ \text{for
some}\ i \}.$$ The vector fields of the form $\nabla^1_X \oplus \ldots \oplus \nabla^n_X$ are of degree 0 and are understood to send $f\in C^\infty(M)$ to $X(f)\in C^\infty(M)$, and $\varepsilon\in\Gamma(E_i^*)$ to $\nabla_X^{i,*}\varepsilon\in\Gamma(E_i^*)$. The negative degree vector fields are generated by those of the form $\hat{e}$.
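For $n=1$ this recovers the familiar picture on ${\mathcal{M}}=E[1]$ (a standard fact, recalled here for orientation): the degree $-1$ vector fields are exactly the contractions $\hat{e}$ with $e\in\Gamma(E)$, and the degree $0$ vector fields are the derivations of the vector bundle $E^*$ (equivalently, of $E$), i.e. $$\mathfrak{X}(E[1])^{-1}\cong\Gamma(E),\qquad
\mathfrak{X}(E[1])^{0}\cong\operatorname{Der}(E^*)\cong\operatorname{Der}(E),$$ where a degree $0$ vector field corresponding to a derivation of $E$ with symbol $X\in\mathfrak{X}(M)$ acts on $f\in C^\infty(M)$ by $X(f)$ and on $\varepsilon\in\Gamma(E^*)$ by the dual derivation.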
A homological vector field ${\mathcal{Q}}$ on an $[n]$-manifold ${\mathcal{M}}$ is a degree 1 derivation of ${\mathcal{C}^\infty}({\mathcal{M}})$ such that ${\mathcal{Q}}^2=\frac{1}{2}[{\mathcal{Q}},{\mathcal{Q}}]=0$.
A homological vector field on a $[1]$-manifold ${\mathcal{M}}=E[1]$ is precisely the differential ${\mathrm{d}}_E$ associated to a Lie algebroid structure on the vector bundle $E$ over $M$ [@Vaintrob97]. This is generalized to arbitrary degrees in the following definition.
\[abstract\_Lie\_algebroids\] A *Lie $n$-algebroid* is an $[n]$-manifold ${\mathcal{M}}$ endowed with a homological vector field ${\mathcal{Q}}$ – the pair $({\mathcal{M}}, {\mathcal{Q}})$ is also called *$\mathbb{N}{\mathcal{Q}}$-manifold of degree $n$*. A *split Lie $n$-algebroid* is a split $[n]$-manifold ${\mathcal{M}}$ endowed with a homological vector field ${\mathcal{Q}}$. A *morphism of (split) Lie $n$-algebroids* is a morphism $\mu$ of the underlying \[$n$\]-manifolds such that $\mu^\star$ commutes with the homological vector fields.
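To make the $n=1$ case explicit (a standard computation, included for orientation): for a Lie algebroid $(A\to M,\rho,[\cdot\,,\cdot])$, the homological vector field on $A[1]$ is the Lie algebroid differential ${\mathrm{d}}_A$ on $\Omega(A)={\mathcal{C}^\infty}(A[1])$, given by the Koszul formula $$({\mathrm{d}}_A\alpha)(a_1,\ldots,a_{k+1})=\sum_{i<j}(-1)^{i+j}\alpha([a_i,a_j],a_1,\ldots,\hat{a}_i,\ldots,\hat{a}_j,\ldots,a_{k+1})+\sum_{i=1}^{k+1}(-1)^{i+1}\rho(a_i)\left(\alpha(a_1,\ldots,\hat{a}_i,\ldots,a_{k+1})\right)$$ for $\alpha\in\Omega^k(A)$ and $a_1,\ldots,a_{k+1}\in\Gamma(A)$. The equation ${\mathrm{d}}_A^2=0$ encodes exactly the Jacobi identity for $[\cdot\,,\cdot]$ and the compatibility of the anchor $\rho$ with the bracket.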
The homological vector field of a split Lie $n$-algebroid ${\underline{A}}=A_1[1]\oplus\ldots\oplus A_n[n]\to M$ can be equivalently described by a family of brackets which satisfy some Leibniz and higher Jacobi identities [@ShZh17]. More precisely, a homological vector field on ${\underline{A}}$ is equivalent to an $L_\infty$-algebra structure on $\Gamma({\underline{A}})$ that is anchored by a vector bundle morphism $\rho\colon A_1\to TM$. Such a structure is given by brackets $\llbracket\cdot,\ldots,\cdot\rrbracket_i\colon\Gamma({\underline{A}})^i\to \Gamma({\underline{A}})$ of degree $1$ for $1\leq i \leq n+1$ such that
1. $\llbracket\cdot,\cdot\rrbracket_2$ satisfies the Leibniz identity with respect to $\rho$,
2. $\llbracket\cdot,\ldots,\cdot\rrbracket_i$ is $C^\infty(M)$-linear in each entry for all $i\neq2$,
3. *(graded skew symmetry)* each $\llbracket\cdot,\ldots,\cdot\rrbracket_i$ is graded alternating: for $\sigma\in S_i$ and all degree-homogeneous sections $a_1,\ldots,a_i\in\Gamma({\underline{A}})$, $$\llbracket a_{\sigma(1)}, a_{\sigma(2)},\ldots,a_{\sigma(i)}\rrbracket_i
=\text{Ksgn}(\sigma,a_1,\ldots,a_i)\cdot \llbracket a_1,a_2,\ldots,a_i\rrbracket_i,$$ and
4. *(strong homotopy Jacobi identity)* for $k\in \mathbb N$ and $a_1,\ldots, a_k\in\Gamma({\underline{A}})$ sections of homogeneous degree: $$\sum_{i+j=k+1}(-1)^{i(j-1)}\sum_{\sigma\in\text{Sh}_{i,k-i}}
\text{Ksgn}(\sigma,a_1,\ldots,a_k)\llbracket \llbracket
a_{\sigma(1)},\ldots,a_{\sigma(i)} \rrbracket_i,
a_{\sigma(i+1)},\ldots,a_{\sigma(k)} \rrbracket_j = 0.$$
Here, $\text{Sh}_{i,k-i}$ is the set of all $(i,k-i)$-shuffles and $\text{Ksgn}(\sigma,a_1,\ldots,a_k)$ is the $(a_1,\cdots,a_k)$-graded signature of the permutation $\sigma\in S_k$, i.e. $$a_1\wedge\ldots\wedge a_k = \text{Ksgn}(\sigma, a_1, \ldots,a_k)a_{\sigma(1)}\wedge\ldots\wedge a_{\sigma(k)}.$$
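With these conventions, the binary case of the graded alternation is worth spelling out: for the transposition $\sigma=(1\ 2)$ one has $\text{Ksgn}(\sigma,a_1,a_2)=-(-1)^{|a_1||a_2|}$, so $$\llbracket a_1,a_2\rrbracket_2=-(-1)^{|a_1||a_2|}\llbracket a_2,a_1\rrbracket_2$$ for homogeneous sections $a_1,a_2\in\Gamma({\underline{A}})$; in particular, the $2$-bracket of two sections of even degree is skew-symmetric in the ordinary sense.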
This gives the following alternative geometric description of a split Lie $2$-algebroid $(\mathcal M=A_1[1]\oplus A_2[2],\mathcal Q)$, see [@Jotz19b]. For consistency with the notation in [@Jotz19b], set $A_1:=Q$ and $A_2^*=:B$.
\[geom\_split\_2-alg\] A split Lie 2-algebroid $Q[1]\oplus B^*[2]$ is given by an anchored vector bundle $(Q\to M,\rho_Q)$ and a vector bundle $B\to M$, together with a vector bundle map $\ell\colon B^*\to Q$, a skew-symmetric dull bracket $[\cdot\,,\cdot]\colon \Gamma(Q)\times\Gamma(Q)\to \Gamma(Q)$, a linear $Q$-connection $\nabla$ on $B$, and a vector-valued 3-form $\omega\in\Omega^3(Q,B^*)$ such that
(i) $\nabla^*_{\ell(\beta_1)}\beta_2 + \nabla^*_{\ell(\beta_2)}\beta_1 = 0$, for all $\beta_1,\beta_2\in\Gamma(B^*)$,
(ii) $[q,\ell(\beta)]=\ell(\nabla_q^*\beta)$ for all $q\in\Gamma(Q)$ and $\beta\in\Gamma(B^*)$,
(iii) $\operatorname{Jac}_{[\cdot\,,\cdot]} = \ell\circ\omega\in\Omega^3(Q,Q)$,
(iv) $R_{\nabla^*}(q_1,q_2)\beta = \omega(q_1,q_2,\ell(\beta))$ for $q_1,q_2\in\Gamma(Q)$ and $\beta\in\Gamma(B^*)$,
(v) ${\mathrm{d}}_{\nabla^*}\omega = 0$.
To pass from the definition above to the homological vector field ${\mathcal{Q}}$, set ${\mathcal{Q}}(f)=\rho^*{\mathrm{d}}f \in\Gamma(Q^*)$, ${\mathcal{Q}}(\tau)={\mathrm{d}}_{Q}\tau+\partial_B\tau \in \Omega^2(Q)\oplus
\Gamma(B)$, and ${\mathcal{Q}}(b)={\mathrm{d}}_{\nabla}b - \langle\omega, b\rangle \in \Omega^1(Q,
B)\oplus \Omega^3(Q)$ for $f\in C^\infty(M)$, $\tau\in\Omega^1(Q)$ and $b\in\Gamma(B)$, where $\partial_B=\ell^*\colon Q^*\to B$.
On the other hand, to obtain the data of Definition \[geom\_split\_2-alg\] from a given homological vector field ${\mathcal{Q}}$: the unary bracket is given by the vector bundle map $\ell$, the anchor is $\rho_Q$, and the binary bracket defines the dull bracket on $Q$ and the $Q$-connection on $B^*$ via $$\llbracket q_1\oplus\beta_1,q_2\oplus\beta_2\rrbracket_2 = [q_1,q_2]_Q\oplus(\nabla_{q_1}^*\beta_2 - \nabla_{q_2}^*\beta_1),$$ while the ternary bracket defines the 3-form $\omega$ via $$\llbracket q_1\oplus0,q_2\oplus0,q_3\oplus0\rrbracket_3 = 0\oplus\omega(q_1,q_2,q_3).$$
Using the definition above, one has that a *Lie 2-algebra*, i.e. a Lie 2-algebroid over a point, consists of a pair of vector spaces $\mathfrak{g}_0,\mathfrak{g}_1$, a linear map $\ell\colon\mathfrak{g}_0\to\mathfrak{g}_1$, a skew-symmetric bilinear bracket $[\cdot\,,\cdot]\colon
\mathfrak{g}_1\times\mathfrak{g}_1\to\mathfrak{g}_1$, a bilinear *action bracket* $[\cdot\,,\cdot]\colon\mathfrak{g}_1\times\mathfrak{g}_0\to\mathfrak{g}_0$, and an alternating trilinear bracket $[\cdot\,,\cdot\,,\cdot]\colon\mathfrak{g}_1\times\mathfrak{g}_1\times\mathfrak{g}_1\to\mathfrak{g}_0$ such that
1. $[\ell(x),y] + [\ell(y),x] = 0$ for $x,y\in\mathfrak{g}_0$,
2. $[x,\ell(y)] = \ell([x,y])$ for $x\in\mathfrak{g}_1$ and $y\in\mathfrak{g}_0$,
3. $\operatorname{Jac}_{[\cdot\,,\cdot]}(x,y,z) = \ell([x,y,z])$ for $x,y,z\in\mathfrak{g}_1$,
4. $[x,[y,z]] - [y,[x,z]] - [[x,y],z] = [x,y,\ell(z)]$ for $x,y\in\mathfrak{g}_1$ and $z\in\mathfrak{g}_0$,
5. and such that the higher Jacobi identity $$\begin{aligned}
0 = & [x,[y,z,w]] - [y,[x,z,w]]+[z,[x,y,w]]- [w,[x,y,z]]\\
& - [[x,y],z,w] + [[x,z],y,w] - [[x,w],y,z]- [[y,z],x,w] + [[y,w],x,z] - [[z,w],x,y]
\end{aligned}$$ holds for $x,y,z,w\in\mathfrak{g}_1$.
For any Lie algebra $(\mathfrak{g},[\cdot\,,\cdot]_\mathfrak{g})$, the derivation Lie 2-algebra is defined as the complex $$\operatorname{ad}\colon \mathfrak{g}\to\operatorname{Der}(\mathfrak{g})$$ with brackets given by $[\delta_1,\delta_2] = \delta_1\delta_2 - \delta_2\delta_1$, $[\delta,x] = \delta x$, $[\delta_1,\delta_2,\delta_3] = 0$ for all $\delta,\delta_i\in\operatorname{Der}(\mathfrak{g}),i=1,2,3$, and $x\in\mathfrak{g}$.
For any Lie algebroid $A\to M$, the space $\operatorname{Der}_{[\cdot\,,\cdot]}(A)$ which consists of all derivations $D$ of the vector bundle $A$ such that $$D[a_1,a_2] = [Da_1,a_2] + [a_1,Da_2]$$ is a Lie algebroid over $M$ with anchor $\rho'(D)=X$, the symbol of the derivation $D$, and bracket given by the usual commutator. If in addition $A$ is a bundle of Lie algebras, i.e. a Lie algebroid with vanishing anchor, the complex $$A\overset{\operatorname{ad}}{\to}\operatorname{Der}_{[\cdot\,,\cdot]}(A)\overset{\rho'}{\to}TM$$ becomes a Lie 2-algebroid with $\operatorname{Der}_{[\cdot\,,\cdot]}(A)$-connection on $A$ given by $\nabla_Da = Da$ and $\omega=0$.
\[Split\_symplectic\_Lie\_2-algebroid\_example\] Let $E\to M$ be a Courant algebroid with pairing $\langle\cdot\,,\cdot\rangle\colon E\times_M E\to \mathbb{R}$, anchor $\rho$ and bracket $\llbracket\cdot\,,\cdot\rrbracket$, and choose a metric linear connection $\nabla\colon \mathfrak{X}(M)\times\Gamma(E)\to\Gamma(E)$. Then $E[1]\oplus T^*M[2]$ becomes a split Lie $2$-algebroid as follows. The skew-symmetric dull bracket is given by $[e,e'] = \llbracket e,e' \rrbracket - \rho^*\langle \nabla_.e,e'
\rangle$ for all $e,e'\in\Gamma(E)$. The *basic connection* $\nabla^\text{bas}\colon\Gamma(E)\times\mathfrak{X}(M)\to\mathfrak{X}(M)$ is defined by $\nabla^\text{bas}_eX = [\rho(e),X] + \rho(\nabla_Xe)$, and the *basic curvature* $\omega_\nabla\in\Omega^2(E,\operatorname{Hom}(TM,E))$ is given by $$\omega_\nabla(e,e')X = -\nabla_X\llbracket e,e' \rrbracket +
\llbracket \nabla_Xe,e' \rrbracket + \llbracket e,\nabla_Xe'
\rrbracket + \nabla_{\nabla_{e'}^{\text{bas}} X} e -
\nabla_{\nabla_{e}^{\text{bas}} X} e' - P^{-1}\langle
\nabla_{\nabla_{.}^{\text{bas}}X}e,e' \rangle$$ for all $e,e'\in\Gamma(E)$ and $X\in\mathfrak{X}(M)$, where $P\colon E\to E^*$ is the isomorphism defined by the pairing. The map $\ell$ is $\rho^*\colon T^*M\to E$, the $E$-connection on $T^*M$ is $\nabla^{\text{bas},*}$ and the form $\omega\in\Omega^3(E,T^*M)$ is given by $\omega(e_1,e_2,e_3)=\langle \omega_\nabla(e_1,e_2)(.),e_3
\rangle$. Split Lie 2-algebroids of this kind are exactly the split symplectic Lie 2-algebroids [@Roytenberg02]. They are splittings of the symplectic Lie $2$-algebroid which is equivalent to the tangent prolongation of $E$, an LA-Courant algebroid [@Jotz19b; @Jotz18d].
Generalized functions of a Lie $n$-algebroid
--------------------------------------------
In the following, $({\mathcal{M}},{\mathcal{Q}})$ is a Lie $n$-algebroid with underlying manifold $M$. Consider the space ${\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma({\underline{E}})$ for a graded vector bundle ${\underline{E}}$ over $M$.
First suppose that $({\mathcal{M}},{\mathcal{Q}})=(A[1],{\mathrm{d}}_A)$ is a Lie algebroid. The space of ${\underline{E}}$-valued differential forms $\Omega(A;{\underline{E}}):=\Omega(A)\otimes_{C^\infty(M)}\Gamma({\underline{E}})={\mathcal{C}^\infty}(A[1])\otimes_{C^\infty(M)}\Gamma({\underline{E}})$ has a natural grading given by $$\Omega(A;{\underline{E}})_p=\bigoplus_{i-j=p}\Omega^i(A;E_j).$$ It is well-known (see [@ArCr12]) that any degree preserving vector bundle map $h\colon {\underline{E}}\otimes \underline{F}\to \underline{G}$ induces a wedge product operation $$(\cdot\wedge_h\cdot)\colon\Omega(A;{\underline{E}})\times\Omega(A;\underline{F})\to
\Omega(A;\underline{G}),$$ which is defined on $\omega\in\Omega^p(A;E_i)$ and $\eta\in\Omega^q(A;F_j)$ by $$(\omega\wedge_h\eta)(a_1,\ldots,a_{p+q})=\sum_{\sigma\in
\text{Sh}_{p,q}}(-1)^{qi}\operatorname{sgn}(\sigma)h\left(\omega(a_{\sigma(1)},\ldots,a_{\sigma(p)}),\eta(a_{\sigma(p+1)},\ldots,a_{\sigma(p+q)})\right)$$ for all $a_1,\ldots,a_{p+q}\in\Gamma(A)$.
In particular, the above rule reads $$\theta\wedge_h\zeta=(-1)^{qi}\left(\omega\wedge\eta\right)\otimes h(e,f),$$ for all $\theta=\omega\otimes e$ and $\zeta=\eta\otimes f$ where $\omega$ is a $p$-form, $\eta$ is a $q$-form, and $e$ and $f$ are homogeneous sections of ${\underline{E}}$ and $\underline{F}$ of degree $i$ and $j$, respectively.
Some notable cases for special choices of the map $h$ are given by the identity, the composition of endomorphisms, the evaluation and the ‘twisted’ evaluation maps, the graded commutator of endomorphisms and the natural pairing of a graded vector bundle with its dual. In particular, the evaluation $(\Phi,e)\mapsto \Phi(e)$ and the twisted evaluation $(e,\Phi)\mapsto(-1)^{|\Phi||e|}\Phi(e)$ make $\Omega(A;{\underline{E}})$ a graded $\Omega(A;\underline{\operatorname{End}}({\underline{E}}))$-bimodule.
In the general case of a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$, the space $\Omega(A)$ is replaced by the generalized smooth functions ${\mathcal{C}^\infty}({\mathcal{M}})$ of ${\mathcal{M}}$. The space ${\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma({\underline{E}})$ has a natural grading, where the homogeneous elements of degree $p$ are given by $$\bigoplus_{i-j=p}{\mathcal{C}^\infty}({\mathcal{M}})^i\otimes\Gamma(E_j).$$
Similarly as in the case of a Lie algebroid, given a degree preserving map $$h\colon {\underline{E}}\otimes \underline{F}\to \underline{G},$$ one obtains the multiplication $$\begin{aligned}
\left({\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})\right)\times \left({\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{F})\right)\to & {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{G})\\
(\omega,\eta)\mapsto & \omega\wedge_h\eta.\end{aligned}$$ In particular, for elements of the form $\xi\otimes e\in {\mathcal{C}^\infty}({\mathcal{M}})^i\otimes\Gamma(E_j),\zeta\otimes f\in
{\mathcal{C}^\infty}({\mathcal{M}})^k\otimes\Gamma(F_\ell)$ the above rule reads $$\left(\xi\otimes e\right)\wedge_h\left(\zeta\otimes f\right)=(-1)^{(-j)k}\xi\zeta\otimes h(e,f),$$ where on the right hand side the multiplication $\xi\zeta$ is the one in ${\mathcal{C}^\infty}({\mathcal{M}})$. The special cases above are defined similarly for the $n$-algebroid case. Moreover, ${\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma({\underline{E}})$ is endowed with the structure of a graded ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{End}}({\underline{E}}))$-bimodule.
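As a concrete instance of this sign rule, take $h$ to be the natural pairing $\langle\cdot\,,\cdot\rangle\colon{\underline{E}}\otimes{\underline{E}}^*\to\mathbb{R}$, the (degree preserving) evaluation into the trivial bundle in degree $0$. Then for $\xi\otimes e\in{\mathcal{C}^\infty}({\mathcal{M}})^i\otimes\Gamma(E_j)$ and $\zeta\otimes\varepsilon\in{\mathcal{C}^\infty}({\mathcal{M}})^k\otimes\Gamma(E_j^*)$, $$\left(\xi\otimes e\right)\wedge_h\left(\zeta\otimes \varepsilon\right)=(-1)^{jk}\,\langle e,\varepsilon\rangle\,\xi\zeta\in{\mathcal{C}^\infty}({\mathcal{M}}),$$ the sign arising from moving the degree $-j$ section $e$ past the degree $k$ function $\zeta$.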
Finally, the following lemma will be useful later as it is a generalisation of [@ArCr12 Lemma A.1], and gives the connection between the space ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{Hom}}({\underline{E}},\underline{F}))$ and the homomorphisms from ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})$ to ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{F})$.
\[wedge\_product-operators\_Correspondence\_Lemma\] There is a 1-1 correspondence between the degree $n$ elements of ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{Hom}}({\underline{E}},\underline{F}))$ and the operators $\Psi\colon {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})\to {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{F})$ of degree $n$ which are ${\mathcal{C}^\infty}({\mathcal{M}})$-linear in the graded sense: $$\Psi(\xi\wedge\eta)=(-1)^{nk}\xi\wedge \Psi(\eta),$$ for all $\xi\in {\mathcal{C}^\infty}({\mathcal{M}})^k$, and all $\eta\in {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})$.
The element $\Phi\in {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{Hom}}({\underline{E}},\underline{F}))$ of degree $n$ induces the operator $\widehat{\Phi}$ given by left multiplication by $\Phi$: $$\widehat{\Phi}(\eta)=\Phi\wedge\eta.$$ This clearly satisfies $\widehat{\Phi}(\xi\wedge\eta)=(-1)^{nk}\xi\wedge\widehat{\Phi}(\eta)$ for all $\xi\in {\mathcal{C}^\infty}({\mathcal{M}})^k$ and $\eta\in
{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})$. Conversely, an operator $\Psi$ of degree $n$ must send a section $e\in\Gamma(E_k)$ into the sum $$\Gamma(F_{k-n}) \oplus \left(
{\mathcal{C}^\infty}({\mathcal{M}})^1\otimes\Gamma(F_{k-n+1}) \right) \oplus
\left( {\mathcal{C}^\infty}({\mathcal{M}})^2\otimes\Gamma(F_{k-n+2}) \right)
\oplus\dots,$$ defining the elements $$\Psi_i\in C^\infty({\mathcal{M}})^i\otimes\Gamma(\underline{\operatorname{Hom}}^{n-i}({\underline{E}},\underline{F})).$$ Thus, this yields the element $\widetilde{\Psi} = \sum \Psi_i\in
\Big({\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{Hom}}({\underline{E}},\underline{F}))\Big)^n$. Clearly, $$\widetilde{\widehat{\Phi}} = \Phi\ \text{and}\ \widehat{\widetilde{\Psi}} = \Psi.\qedhere$$
Schematically, for a Lie $n$-algebroid ${\mathcal{M}}$, the above lemma gives the following diagram: $$\Big({\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{Hom}}({\underline{E}},\underline{F}))\Big)^n\ \overset{\text{1-1}}{\longleftrightarrow}\
\left\{\begin{array}{c}
\text{Degree}\ n\ \text{operators}\ \Psi \\
{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})\to {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{F}) \\
\text{which are}\ {\mathcal{C}^\infty}({\mathcal{M}})\text{-linear in the graded sense}
\end{array}\right\}$$ In particular, if ${\underline{E}}= \underline{F}$, then $$\Big({\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\underline{\operatorname{End}}({\underline{E}}))\Big)^n\ \overset{\text{1-1}}{\longleftrightarrow}\
\left\{\begin{array}{c}
\text{Degree}\ n\ \text{operators}\ \Psi\ \text{on}\ {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})\ \text{which}\\
\text{are}\ {\mathcal{C}^\infty}({\mathcal{M}})\text{-linear in the graded sense}
\end{array}\right\}$$
The Weil algebra associated to a Lie $n$-algebroid
--------------------------------------------------
Let ${\mathcal{M}}$ be an $[n]$-manifold over a smooth manifold $M$ and $\xi_1^1,\ldots,\xi_1^{r_1},\xi_2^1,\ldots,\xi_2^{r_2},\ldots,\xi_n^1,\ldots,\xi_n^{r_n}$ be its local generators over some open $U\subset M$ with degrees $1,2,\ldots,n$, respectively. By definition, its *tangent prolongation* $T{\mathcal{M}}$ is an $[n]$-manifold over $TM$ [@Mehta06; @Mehta09], with ${\mathcal{C}^\infty}_{TU}(T{\mathcal{M}})^0 = C^\infty(TU)$ and with local generators of degree $i$ over $TU\subset TM$ given by $\xi_i^1,\ldots,\xi_i^{r_i},{\mathrm{d}}\xi_i^1,\ldots,{\mathrm{d}}\xi_i^{r_i}$. The shifted tangent prolongation[^1] $T[1]{\mathcal{M}}$ is an $[n+1]$-manifold over $M$, with local generators over $U$ given by
  -------------- ------------------------------------------------------------------------------------------------
  degree 0       $C^\infty(U)$
  degree 1       $\xi_1^1,\ldots,\xi_1^{r_1}, \Omega^1(U)$
  degree 2       $\xi_2^1,\ldots,\xi_2^{r_2}, {\mathrm{d}}\xi_1^1,\ldots,{\mathrm{d}}\xi_1^{r_1}$
  $\vdots$       $\vdots$
  degree $n$     $\xi_n^1,\ldots,\xi_n^{r_n}, {\mathrm{d}}\xi_{n-1}^1,\ldots,{\mathrm{d}}\xi_{n-1}^{r_{{n-1}}}$
  degree $n+1$   ${\mathrm{d}}\xi_n^1,\ldots,{\mathrm{d}}\xi_n^{r_n}$
  -------------- ------------------------------------------------------------------------------------------------
It carries a bigrading $(p,q)$, where $p$ comes from the grading of ${\mathcal{M}}$ and $q$ is the grading of “differential forms”. In other words, the structure sheaf of $T[1]{\mathcal{M}}$ assigns to every coordinate domain $(U,x^1,\ldots,x^m)$ of $M$ that trivialises $\mathcal M$, the space $${\mathcal{C}^\infty}_U(T[1]{\mathcal{M}}) =
\bigoplus_i\underset{\text{($i$,0)}}{\underbrace{{\mathcal{C}^\infty}_U({\mathcal{M}})^i}}\left<
\underset{\text{(0,1)}}{\underbrace{({\mathrm{d}}x^k)_{k=1}^m}},
\underset{\text{(1,1)}}{\underbrace{({\mathrm{d}}\xi_1^k)_{k=1}^{r_1}}},\ldots,
\underset{\text{($n$,1)}}{\underbrace{({\mathrm{d}}\xi_n^k)_{k=1}^{r_n}}}
\right>.$$
Suppose now that $({\mathcal{M}},{\mathcal{Q}})$ is a Lie $n$-algebroid over $M$. Then $T[1]{\mathcal{M}}$ is an $[n+1]$-manifold, which inherits the two commuting differentials ${{{\pounds}}_{{\mathcal{Q}}}}$ and ${\mathbf{d}}$ defined as follows:
- ${\mathbf{d}}\colon{\mathcal{C}^\infty}(T[1]{\mathcal{M}})^\bullet\to{\mathcal{C}^\infty}(T[1]{\mathcal{M}})^{\bullet+1}$ is defined on generators by $C^\infty(M)\ni f \mapsto {\mathrm{d}}f,
\xi_i^j \mapsto {\mathrm{d}}\xi_i^j,
{\mathrm{d}}\xi_i^j \mapsto 0,$ and is extended to the whole algebra as a derivation of bidegree $(0,1)$.
- ${{{\pounds}}_{{\mathcal{Q}}}}\colon{\mathcal{C}^\infty}(T[1]{\mathcal{M}})^\bullet\to{\mathcal{C}^\infty}(T[1]{\mathcal{M}})^{\bullet+1}$ is the *Lie derivative* with respect to the vector field ${\mathcal{Q}}$, i.e. the graded commutator ${{{\pounds}}_{{\mathcal{Q}}}} = [{\mathbf{d}},i_{{\mathcal{Q}}}] = {\mathbf{d}}\circ i_{\mathcal{Q}}- i_{\mathcal{Q}}\circ{\mathbf{d}}$, and it is a derivation of bidegree $(1,0)$.
By checking their values on local generators, it is easy to see that ${{{\pounds}}_{{\mathcal{Q}}}}^2 = 0, {\mathbf{d}}^2 = 0$ and $[{{{\pounds}}_{{\mathcal{Q}}}},{\mathbf{d}}] = {{{\pounds}}_{{\mathcal{Q}}}}\circ{\mathbf{d}}+ {\mathbf{d}}\circ{{{\pounds}}_{{\mathcal{Q}}}} = 0$. Hence, $W^{p,q}({\mathcal{M}}):=\{\text{elements of}\ {\mathcal{C}^\infty}(T[1]{\mathcal{M}})\ \text{of bidegree}\
(p,q) \}$ together with ${{{\pounds}}_{{\mathcal{Q}}}}$ and ${\mathbf{d}}$ forms a double complex.
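For example, if ${\mathcal{M}}=A[1]$ for a Lie algebroid $A\to M$, the generators of $T[1]A[1]$ have bidegrees $x^k$ in $(0,0)$, ${\mathrm{d}}x^k$ in $(0,1)$, $\xi^j$ in $(1,0)$ and ${\mathrm{d}}\xi^j$ in $(1,1)$. Since the ${\mathrm{d}}\xi^j$ have even total degree they commute with each other, and a choice of $TM$-connection on $A$ identifies (non-canonically) $$W^{p,q}(A[1])\cong\bigoplus_{c\geq 0}\Gamma\left(\wedge^{p-c}A^*\otimes\wedge^{q-c}T^*M\otimes S^cA^*\right),$$ recovering the bigraded description of the Weil algebra of a Lie algebroid from [@Mehta06; @ArCr12].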
The Weil algebra of a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$ is the differential graded algebra given by the total complex of $W^{p,q}({\mathcal{M}})$: $$W({\mathcal{M}}):=\left(\bigoplus_{i\in\mathbb{Z}}\bigoplus_{p+q=i} W^{p,q}({\mathcal{M}}),{{{\pounds}}_{{\mathcal{Q}}}} + {\mathbf{d}}\right).$$
In the case of a Lie 1-algebroid $A\to M$, this is the Weil algebra from [@Mehta06; @Mehta09]; see also [@ArCr12] for an approach to the 1-algebroid case that avoids the language of supergeometry.
Differential graded modules {#modules}
===========================
This section defines the notion of a differential graded module over a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$ and gives the two fundamental examples of modules which come canonically with any Lie $n$-algebroid, namely the adjoint and the coadjoint modules. Note that the case of differential graded modules over a Lie 1-algebroid $A\to M$ is studied in detail in [@Mehta14].
The category of differential graded modules
-------------------------------------------
Let $A\to M$ be a Lie 1-algebroid. A *Lie algebroid module* [@Vaintrob97] over $A$ is defined as a sheaf ${\mathscr{E}}$ of locally freely generated graded $\Omega(A)$-modules over $M$ together with a map ${\mathcal{D}}:{\mathscr{E}}\to{\mathscr{E}}$ which squares to zero and satisfies the Leibniz rule $${\mathcal{D}}(\alpha\eta) = ({\mathrm{d}}_A\alpha)\eta + (-1)^{|\alpha|}\alpha{\mathcal{D}}(\eta),$$ for $\alpha\in\Omega(A)$ and $\eta\in{\mathscr{E}}$. For a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$ over $M$, this is generalised to the following definition.
A differential graded module of $({\mathcal{M}},{\mathcal{Q}})$, for short a DG ${\mathcal{M}}$-module, is a sheaf ${\mathscr{E}}$ of locally freely generated graded ${\mathcal{C}^\infty}({\mathcal{M}})$-modules over $M$ together with a map ${\mathcal{D}}\colon{\mathscr{E}}\to{\mathscr{E}}$ of degree $1$, such that ${\mathcal{D}}^2=0$ and $${\mathcal{D}}(\xi\eta) = {\mathcal{Q}}(\xi)\eta + (-1)^{|\xi|}\xi{\mathcal{D}}(\eta)$$ for all $\xi\in{\mathcal{C}^\infty}({\mathcal{M}})$ and $\eta\in{\mathscr{E}}({\mathcal{M}})$. The cohomology of the induced complex is denoted by $H^\bullet({\mathcal{M}},{\mathcal{Q}};{\mathscr{E}})$, or simply by $H^\bullet({\mathcal{M}},{\mathscr{E}})$.
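The most basic example is the trivial module: the sheaf ${\mathscr{E}}={\mathcal{C}^\infty}({\mathcal{M}})$ itself, with structure operator ${\mathcal{D}}={\mathcal{Q}}$. The Leibniz rule above is then precisely the derivation property of ${\mathcal{Q}}$, $${\mathcal{D}}(\xi\eta)={\mathcal{Q}}(\xi)\eta+(-1)^{|\xi|}\xi\,{\mathcal{Q}}(\eta),$$ and ${\mathcal{D}}^2=0$ holds since ${\mathcal{Q}}$ is homological; the resulting cohomology $H^\bullet({\mathcal{M}},{\mathcal{Q}};{\mathcal{C}^\infty}({\mathcal{M}}))$ is the ${\mathcal{Q}}$-cohomology of the Lie $n$-algebroid.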
Let $({\mathscr{E}}_1,{\mathcal{D}}_1)$ and $({\mathscr{E}}_2,{\mathcal{D}}_2)$ be two differential graded modules over the Lie $n$-algebroids $({\mathcal{M}},{\mathcal{Q}}_{{\mathcal{M}}})$ and $({\mathcal{N}},{\mathcal{Q}}_{{\mathcal{N}}})$, respectively, and let $k\in\mathbb{Z}$. A $k$-morphism between ${\mathscr{E}}_1$ and ${\mathscr{E}}_2$ consists of a morphism of Lie $n$-algebroids $\phi\colon {\mathcal{N}}\to{\mathcal{M}}$ and a degree $k$ map $\mu\colon {\mathscr{E}}_1\to{\mathscr{E}}_2$ which is linear, i.e. $\mu(\xi\eta) = \phi^\star(\xi)\,\mu(\eta)$ for all $\xi\in{\mathcal{C}^\infty}({\mathcal{M}})$ and $\eta\in{\mathscr{E}}_1({\mathcal{M}})$, and which commutes with the differentials ${\mathcal{D}}_1$ and ${\mathcal{D}}_2$. A 0-morphism is simply called a morphism. A $k$-isomorphism is a $k$-morphism which has an inverse.
1. The sheaves ${\mathscr{E}}_1$ and ${\mathscr{E}}_2$ in the definition above can be thought of as the linear functions of ${\mathcal{Q}}$-vector bundles over ${\mathcal{M}}$. From this point of view, it is natural that the definition of a $k$-morphism of differential graded modules has a contravariant nature.
2. The inverse of a $k$-isomorphism is necessarily a $-k$-morphism.
3. For all $k\in\mathbb{Z}$ and all DG ${\mathcal{M}}$-modules ${\mathscr{E}}$, there is an obvious $k$-isomorphism ${\mathscr{E}}\to {\mathscr{E}}[k]$, where ${\mathscr{E}}[k]$ is the representation obtained from ${\mathscr{E}}$ after shifting the degree by $k$.
Considering the special case of ${\mathcal{M}}= {\mathcal{N}}$ in the definition above yields $k$-morphisms between DG ${\mathcal{M}}$-modules over the same Lie $n$-algebroid. The resulting category is denoted by $\mathbb{M}\text{od}({\mathcal{M}},{\mathcal{Q}})$, or simply by $\mathbb{M}\text{od}({\mathcal{M}})$. The isomorphism classes of this category are denoted by $\text{Mod}({\mathcal{M}},{\mathcal{Q}})$, or simply by $\text{Mod}({\mathcal{M}})$.
As in the case of Lie algebroids, new examples of DG ${\mathcal{M}}$-modules of Lie $n$-algebroids are obtained by considering the usual algebraic constructions, as in the following examples.
Given ${\mathscr{E}}\in\mathbb{M}\text{od}({\mathcal{M}})$ with differential ${\mathcal{D}}_{{\mathscr{E}}}$, one defines a DG ${\mathcal{M}}$-module structure on the dual sheaf ${\mathscr{E}}^*:=\underline{\operatorname{Hom}}({\mathscr{E}},{\mathcal{C}^\infty}({\mathcal{M}}))$ with differential ${\mathcal{D}}_{{\mathscr{E}}^*}$ defined via the property $${\mathcal{Q}}(\psi(\eta)) = {\mathcal{D}}_{{\mathscr{E}}^*}(\psi)(\eta) + (-1)^{|\psi|}\psi({\mathcal{D}}_{{\mathscr{E}}}(\eta)),$$ for all $\psi\in{\mathscr{E}}^*({\mathcal{M}})$ and $\eta\in{\mathscr{E}}({\mathcal{M}})$.
For ${\mathscr{E}},{\mathscr{F}}\in\mathbb{M}\text{od}({\mathcal{M}})$ with operators ${\mathcal{D}}_{{\mathscr{E}}}$ and ${\mathcal{D}}_{{\mathscr{F}}}$, the corresponding operator ${\mathcal{D}}_{{\mathscr{E}}\otimes {\mathscr{F}}}$ on ${\mathscr{E}}\otimes {\mathscr{F}}$ is uniquely characterised by the formula $${\mathcal{D}}_{{\mathscr{E}}\otimes {\mathscr{F}}}(\eta\otimes\eta') =
{\mathcal{D}}_{{\mathscr{E}}}(\eta)\otimes\eta' +
(-1)^{|\eta|}\eta\otimes{\mathcal{D}}_{{\mathscr{F}}}(\eta'),$$ for all $\eta\in {\mathscr{E}}({\mathcal{M}})$ and $\eta'\in {\mathscr{F}}({\mathcal{M}})$.
For ${\mathscr{E}},{\mathscr{F}}\in\mathbb{M}\text{od}({\mathcal{M}})$ with operators ${\mathcal{D}}_{{\mathscr{E}}}$ and ${\mathcal{D}}_{{\mathscr{F}}}$, the differential ${\mathcal{D}}_{\underline{\operatorname{Hom}}({\mathscr{E}},{\mathscr{F}})}$ on $\underline{\operatorname{Hom}}({\mathscr{E}},{\mathscr{F}})$ is defined via $${\mathcal{D}}_{{\mathscr{F}}}(\psi(\eta)) = {\mathcal{D}}_{\underline{\operatorname{Hom}}({\mathscr{E}},{\mathscr{F}})}(\psi)(\eta) + (-1)^{|\psi|}\psi({\mathcal{D}}_{{\mathscr{E}}}(\eta)),$$ for all $\psi\in
\underline{\operatorname{Hom}}({\mathscr{E}}({\mathcal{M}}),{\mathscr{F}}({\mathcal{M}}))$ and $\eta\in {\mathscr{E}}({\mathcal{M}})$.
For ${\mathscr{E}}\in\mathbb{M}\text{od}({\mathcal{M}})$ with operator ${\mathcal{D}}$, the corresponding operator ${\mathcal{D}}_{\underline{S} ({\mathscr{E}})}$ on ${\underline{S}^k({\mathscr{E}})}$ is uniquely characterised by the formula $$\begin{aligned}
{\mathcal{D}}_{\underline{S} ({\mathscr{E}})}(\eta_1\eta_2\ldots\eta_k) = &\ {\mathcal{D}}(\eta_1)\eta_2\ldots\eta_k \\
& + \eta_1\sum_{i=2}^k(-1)^{|\eta_1|+\ldots+|\eta_{i-1}|}\eta_2\ldots{\mathcal{D}}(\eta_i)\ldots\eta_k,
\end{aligned}$$ for all $\eta_1,\ldots,\eta_k\in {\mathscr{E}}({\mathcal{M}})$. The same formula also characterises the operator ${\mathcal{D}}_{\underline{A} ({\mathscr{E}})}$ on the antisymmetric powers $\underline{A}^q({\mathscr{E}})$.
For ${\mathscr{E}},{\mathscr{F}}\in\mathbb{M}\text{od}({\mathcal{M}})$ with operators ${\mathcal{D}}_{{\mathscr{E}}}$ and ${\mathcal{D}}_{{\mathscr{F}}}$, the differential operator ${\mathcal{D}}_{{\mathscr{E}}\oplus {\mathscr{F}}}$ on ${\mathscr{E}}\oplus {\mathscr{F}}$ is defined as $${\mathcal{D}}_{{\mathscr{E}}\oplus {\mathscr{F}}} = {\mathcal{D}}_{{\mathscr{E}}} \oplus {\mathcal{D}}_{{\mathscr{F}}}.$$
Adjoint and coadjoint modules
-----------------------------
Recall that every $[n]$-manifold ${\mathcal{M}}$ comes with the sheaf of graded derivations $\underline{\operatorname{Der}}({\mathcal{C}^\infty}({\mathcal{M}}))$ of ${\mathcal{C}^\infty}({\mathcal{M}})$, which is called *the sheaf of vector fields over ${\mathcal{M}}$*. It is a natural sheaf of locally freely generated graded ${\mathcal{C}^\infty}({\mathcal{M}})$-modules over the smooth manifold $M$, with module structure defined by the property $(\xi_1\mathcal{X})(\xi_2) = \xi_1\mathcal{X}(\xi_2)$ for all $\xi_1,\xi_2\in{\mathcal{C}^\infty}({\mathcal{M}})$ and $\mathcal{X}\in\underline{\operatorname{Der}}({\mathcal{C}^\infty}({\mathcal{M}}))$.
Suppose now that ${\mathcal{M}}$ is endowed with a homological vector field ${\mathcal{Q}}$, i.e. $({\mathcal{M}},{\mathcal{Q}})$ is a Lie $n$-algebroid. Then the Lie derivative ${{{\pounds}}_{{\mathcal{Q}}}}:=[{\mathcal{Q}},\cdot]$ is a degree 1 operator on the space of vector fields. Since it squares to zero, the sheaf of vector fields over $({\mathcal{M}},{\mathcal{Q}})$ has a canonical DG ${\mathcal{M}}$-module structure. It is called the *adjoint module* of ${\mathcal{M}}$ and denoted by $$(\mathfrak{X},{{{\pounds}}_{{\mathcal{Q}}}}).$$
Applying the dual construction to the adjoint module from above, one obtains the DG ${\mathcal{M}}$-module $\bigoplus_p{\mathcal{C}^\infty}(T[1]{\mathcal{M}})_{(p,1)}$ of 1-forms over ${\mathcal{M}}$, with the grading obtained from the horizontal grading of the Weil algebra – that is, the elements of ${\mathcal{C}^\infty}(T[1]{\mathcal{M}})_{(p,1)}$ have degree $p$. The structure operator is given by the Lie derivative ${{{\pounds}}_{{\mathcal{Q}}}} = [{\mathbf{d}},i_{\mathcal{Q}}]$. This DG-module is called the *coadjoint module* of $({\mathcal{M}},{\mathcal{Q}})$ and denoted by $$(\Omega^1,{{{\pounds}}_{{\mathcal{Q}}}}).$$
Poisson Lie $n$-algebroids: coadjoint vs adjoint modules
--------------------------------------------------------
This section shows that a compatible pair of a homological vector field and a Poisson bracket on an $[n]$-manifold gives rise to a degree $-n$ morphism of DG ${\mathcal{M}}$-modules from the coadjoint to the adjoint module.
Let $k\in\mathbb{Z}$. A degree $k$ Poisson bracket on an $[n]$-manifold ${\mathcal{M}}$ is a degree $k$ $\mathbb{R}$-bilinear map $\{\cdot\,,\cdot\}\colon {\mathcal{C}^\infty}({\mathcal{M}})\times {\mathcal{C}^\infty}({\mathcal{M}})\to {\mathcal{C}^\infty}({\mathcal{M}})$, i.e. $|\{\xi_1,\xi_2\}| = |\xi_1| + |\xi_2| + k$, which is graded antisymmetric, $\{ \xi_1,\xi_2 \} = -(-1)^{(|\xi_1|+k)(|\xi_2|+k)}\{ \xi_2,\xi_1 \}$, and satisfies the graded Leibniz and Jacobi identities $$\{\xi_1,\xi_2\xi_3\} = \{\xi_1,\xi_2\}\xi_3 + (-1)^{(|\xi_1|+k)|\xi_2|}\xi_2\{\xi_1,\xi_3\},$$ $$\{\xi_1,\{\xi_2,\xi_3\}\} = \{\{\xi_1,\xi_2\},\xi_3\} +
(-1)^{(|\xi_1|+k)(|\xi_2|+k)}\{\xi_2,\{\xi_1,\xi_3\}\},$$ for homogeneous elements $\xi_1,\xi_2,\xi_3\in {\mathcal{C}^\infty}({\mathcal{M}})$. A morphism between two Poisson $[n]$-manifolds $({\mathcal{N}},\{\cdot\,,\cdot\}_{\mathcal{N}})$ and $({\mathcal{M}},\{\cdot\,,\cdot\}_{\mathcal{M}})$ is a morphism of $[n]$-manifolds $\mathcal{F}\colon{\mathcal{N}}\to{\mathcal{M}}$ which respects the Poisson brackets: $\mathcal{F}^\star\{\xi_1,\xi_2\}_{\mathcal{M}}=
\{\mathcal{F}^\star\xi_1,\mathcal{F}^\star\xi_2\}_{\mathcal{N}}$ for all $\xi_1,\xi_2\in {\mathcal{C}^\infty}({\mathcal{M}})$.
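For example, the $[1]$-manifold $T^*[1]M$ has function algebra ${\mathcal{C}^\infty}(T^*[1]M)=\Gamma(\wedge^\bullet TM)$, the multivector fields on $M$, and the Schouten–Nijenhuis bracket defines a degree $-1$ Poisson bracket on it. A homological vector field compatible with this bracket is of the form $${\mathcal{Q}} = \{\pi\,,\cdot\}$$ for a bivector field $\pi\in\Gamma(\wedge^2 TM)$ with $\{\pi,\pi\}=0$, i.e. a Poisson structure on $M$; this is an instance of the correspondence between symplectic Lie 1-algebroids and Poisson manifolds recalled below.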
As is the case for ordinary Poisson manifolds, a degree $k$ Poisson bracket on ${\mathcal{M}}$ induces a degree $k$ map $$\mathrm{Ham} \colon {\mathcal{C}^\infty}({\mathcal{M}})\to \underline{\operatorname{Der}}({\mathcal{C}^\infty}({\mathcal{M}}))$$ which sends $\xi$ to its *Hamiltonian vector field* $\mathcal{X}_\xi=\{\xi\,,\cdot\}$. An $[n]$-manifold is called *symplectic* if it is equipped with a degree $k$ Poisson bracket whose Hamiltonian vector fields generate all of $\operatorname{Der}({\mathcal{C}^\infty}({\mathcal{M}}))$.
If the $[n]$-manifold ${\mathcal{M}}$ carries both a homological vector field ${\mathcal{Q}}$ and a degree $k$ Poisson bracket $\{\cdot\,,\cdot\}$, then the two structures are compatible if $${\mathcal{Q}}\{\xi_1,\xi_2\} = \{{\mathcal{Q}}(\xi_1),\xi_2\} + (-1)^{|\xi_1|+k}\{\xi_1,{\mathcal{Q}}(\xi_2)\}$$ for homogeneous $\xi_1\in{\mathcal{C}^\infty}({\mathcal{M}})$ and all $\xi_2\in {\mathcal{C}^\infty}({\mathcal{M}})$. Using the Hamiltonian map defined above, the compatibility of ${\mathcal{Q}}$ and $\{\cdot\,,\cdot\}$ can be rewritten as $\mathcal{X}_{{\mathcal{Q}}(\xi)}=[{\mathcal{Q}},\mathcal{X}_{\xi}]$ for all $\xi\in{\mathcal{C}^\infty}({\mathcal{M}})$.
A *Poisson Lie $n$-algebroid* $({\mathcal{M}},{\mathcal{Q}},\{\cdot\,,\cdot\})$ is an $[n]$-manifold ${\mathcal{M}}$ endowed with a compatible pair of a homological vector field ${\mathcal{Q}}$ and a degree $-n$ Poisson bracket $\{\cdot\,,\cdot\}$. If in addition the Poisson bracket is symplectic, then it is called *symplectic Lie $n$-algebroid*. A morphism of Poisson Lie $n$-algebroids is a morphism of the underlying $[n]$-manifolds which is also a morphism of Lie $n$-algebroids and a morphism of Poisson $[n]$-manifolds.
A Poisson (symplectic) Lie 0-algebroid is a usual Poisson (symplectic) manifold $M$. A Poisson Lie 1-algebroid is a Lie bialgebroid $(A,A^*)$ and a symplectic Lie 1-algebroid is again a usual Poisson manifold – Section \[applications\] explains this in detail. A result due to Ševera [@Severa05] and Roytenberg [@Roytenberg02] shows that symplectic Lie 2-algebroids are in one-to-one correspondence with Courant algebroids.
In [@MaXu94], it was shown that a Lie algebroid $A$ with a linear Poisson structure satisfies the Lie bialgebroid compatibility condition if and only if the map $T^*A \to TA$ induced by the Poisson bivector is a Lie algebroid morphism from $T^*A = T^*A^* \to A^*$ to $TA \to TM$. This is now generalized to give a characterisation of Poisson-Lie $n$-algebroids.
Let ${\mathcal{M}}$ be an $[n]$-manifold equipped with a homological vector field ${\mathcal{Q}}$ and a degree $-n$ Poisson bracket $\{\cdot\,,\cdot\}$. The Poisson bracket on ${\mathcal{M}}$ induces a ${\mathcal{C}^\infty}({\mathcal{M}})$-linear map $\sharp\colon\Omega^1({\mathcal{M}})\to\mathfrak{X}({\mathcal{M}})$ of degree $-n$ via the property $$\label{eqn:sharp}
\left( \sharp({\mathrm{d}}\xi_1) \right)\xi_2 = \{ \xi_1,\xi_2 \}$$ for all $\xi_1,\xi_2\in{\mathcal{C}^\infty}({\mathcal{M}})$.
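For $n=0$ this is the classical construction: on an ordinary Poisson manifold, $$\sharp({\mathrm{d}}\xi_1) = \{\xi_1\,,\cdot\} = \mathcal{X}_{\xi_1}$$ is the Hamiltonian vector field of $\xi_1$, and $\sharp$ is the usual bundle map $T^*M\to TM$ induced by the Poisson bivector.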
\[thm\_poisson\] Let ${\mathcal{M}}$ be an $[n]$-manifold equipped with a homological vector field ${\mathcal{Q}}$ and a degree $-n$ Poisson bracket $\{\cdot\,,\cdot\}$. Then $({\mathcal{M}},{\mathcal{Q}},\{\cdot\,,\cdot\})$ is a Poisson Lie $n$-algebroid if and only if $\sharp\colon \Omega^1({\mathcal{M}})\to\mathfrak{X}({\mathcal{M}})$ is a degree $-n$ morphism of DG ${\mathcal{M}}$-modules, i.e. $\sharp\circ{{{\pounds}}_{{\mathcal{Q}}}}={{{\pounds}}_{{\mathcal{Q}}}}\circ\sharp$.
From \[eqn:sharp\], $$\Big( {{{\pounds}}_{{\mathcal{Q}}}}(\sharp({\mathrm{d}}\xi_1)) -
\sharp({{{\pounds}}_{{\mathcal{Q}}}}({\mathrm{d}}\xi_1)) \Big)\xi_2 = {\mathcal{Q}}\{\xi_1,\xi_2\}
- (-1)^{|\xi_1|-n}\{\xi_1,{\mathcal{Q}}(\xi_2)\} - \{{\mathcal{Q}}(\xi_1),\xi_2\}.$$ In other words, the compatibility of ${\mathcal{Q}}$ with $\{\cdot\,,\cdot\}$ is equivalent to ${{{\pounds}}_{{\mathcal{Q}}}}\circ\sharp = \sharp\circ{{{\pounds}}_{{\mathcal{Q}}}}$.
A detailed analysis of this map in the cases of Poisson Lie algebroids of degree $n\leq2$ is given in Section \[morphism\_of\_ad\*\_ad\_Poisson012\]. The following two corollaries give obstructions for a Lie $n$-algebroid with a Poisson bracket to be symplectic. In particular, for $n=2$ one obtains the corresponding results for Courant algebroids.
\[cor\_poisson\] Let ${\mathcal{M}}$ be an $[n]$-manifold equipped with a homological vector field ${\mathcal{Q}}$ and a degree $-n$ Poisson bracket $\{\cdot\,,\cdot\}$. Then $({\mathcal{M}},{\mathcal{Q}},\{\cdot\,,\cdot\})$ is symplectic if and only if $\sharp$ is an isomorphism of DG ${\mathcal{M}}$-modules.
For any Poisson Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}},\{\cdot\,,\cdot\})$ there is a natural degree $-n$ map in cohomology $\sharp\colon H^\bullet({\mathcal{M}},\Omega^1)\to
H^{\bullet-n}({\mathcal{M}},\mathfrak{X})$ which is an isomorphism if the bracket is symplectic.
Representations up to homotopy {#ruth}
==============================
This section generalises the notion of representation up to homotopy of Lie algebroids from [@ArCr12; @GrMe10] to representations of higher Lie algebroids. Some basic examples are given and 3-term representations of a split Lie 2-algebroid are described in detail. The adjoint and coadjoint representations of a split Lie 2-algebroid are special examples, which this section describes with explicit formulas for their structure objects and their coordinate transformation. Lastly, this shows how to define these two representations together with their objects for general Lie $n$-algebroids for all $n$.
The category of representations up to homotopy
----------------------------------------------
Recall that a representation up to homotopy of a Lie algebroid $A$ is given by an $A$-module of the form $\Omega(A,{\underline{E}})=\Omega(A)\otimes\Gamma({\underline{E}})$ for a graded vector bundle ${\underline{E}}$ over $M$. In the same manner, a *representation up to homotopy of a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$* is defined as a DG ${\mathcal{M}}$-module of the form ${\mathcal{C}^\infty}({\mathcal{M}})\otimes_{{\mathcal{C}^\infty}(M)}\Gamma({\underline{E}})$ for a graded vector bundle ${\underline{E}}\to M$.
Following the notation from [@ArCr12], denote the category of representations up to homotopy by $\mathbb{R}\text{ep}^\infty({\mathcal{M}},{\mathcal{Q}})$, or simply by ${{\mathbb{R}}\text{ep}^\infty({\mathcal{M}})}$. The set of isomorphism classes of representations up to homotopy in this category is denoted by $\text{Rep}^\infty({\mathcal{M}},{\mathcal{Q}})$, or by ${\text{Rep}^\infty({\mathcal{M}})}$. A representation of the form ${\underline{E}}=E_0\oplus\ldots\oplus E_{k-1}$ is a *$k$-term representation*, or simply a *$k$-representation*.
Any DG ${\mathcal{M}}$-module is non-canonically isomorphic to a representation up to homotopy of $({\mathcal{M}},{\mathcal{Q}})$ [@Mehta14]. The proof goes as follows: an ${\mathcal{M}}$-module is, by definition, the sheaf of sections $\Gamma(\mathcal{B})$ of a vector bundle $\mathcal{B}$ over ${\mathcal{M}}$. The pull-back $0_{{\mathcal{M}}}^*\mathcal{B}$, where $0_{{\mathcal{M}}}\colon M\to{\mathcal{M}}$ is the zero embedding, is an ordinary graded vector bundle ${\underline{E}}$ over $M$ and hence splits as ${\underline{E}}=\bigoplus_i E_i[i]$. According to [@Mehta14 Theorem 2.1], the double pull-back $\pi_{{\mathcal{M}}}^*0_{{\mathcal{M}}}^*\mathcal{B}$ is isomorphic to $\mathcal{B}$ as vector bundles over ${\mathcal{M}}$, where $\pi_{{\mathcal{M}}}\colon {\mathcal{M}}\to M$ is the projection map. Then, as a sheaf over $M$, $\Gamma(\mathcal{B})$ is identified with $\Gamma(\pi_{{\mathcal{M}}}^*0_{{\mathcal{M}}}^*\mathcal{B})=\Gamma(\pi_{{\mathcal{M}}}^*{\underline{E}})$, which in turn is canonically isomorphic to ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma({\underline{E}})$.
Let $({\mathcal{M}},{\mathcal{Q}})$ be a Lie $n$-algebroid and suppose $\xi\in {\mathcal{C}^\infty}({\mathcal{M}})^k$ such that ${\mathcal{Q}}(\xi) = 0$. Then one can construct a representation up to homotopy of ${\mathcal{M}}$ on the graded vector bundle ${\underline{E}}_\xi=(\mathbb{R}[0]\oplus\mathbb{R}[1-k])\times M\to M$ (i.e. $\mathbb{R}$ in degrees 0 and $1-k$, and zero otherwise). Its differential ${\mathcal{D}}_\xi$ is given in components by the map $${\mathcal{D}}_\xi = \sum_i {\mathcal{D}}_\xi^i,$$ where $${\mathcal{D}}_\xi^i\colon {\mathcal{C}^\infty}({\mathcal{M}})^i\oplus {\mathcal{C}^\infty}({\mathcal{M}})^{i-k+1}\to {\mathcal{C}^\infty}({\mathcal{M}})^{i+1}\oplus {\mathcal{C}^\infty}({\mathcal{M}})^{i-k+2}$$ is defined by the formula $${\mathcal{D}}_\xi^i(\zeta_1,\zeta_2)=({\mathcal{Q}}(\zeta_1) + (-1)^{i-k+1}\zeta_2\xi,{\mathcal{Q}}(\zeta_2)).$$ If there is an element $\xi'\in {\mathcal{C}^\infty}({\mathcal{M}})^k$ which is ${\mathcal{Q}}$-cohomologous to $\xi$, i.e. $\xi-\xi'={\mathcal{Q}}(\xi'')$ for some $\xi''\in {\mathcal{C}^\infty}({\mathcal{M}})^{k-1}$, then the representations ${\underline{E}}_\xi$ and ${\underline{E}}_{\xi'}$ are isomorphic via the isomorphism $\mu\colon {\underline{E}}_\xi\to {\underline{E}}_{\xi'}$ defined in components by $$\mu^i\colon {\mathcal{C}^\infty}({\mathcal{M}})^i\oplus {\mathcal{C}^\infty}({\mathcal{M}})^{i-k+1}\to {\mathcal{C}^\infty}({\mathcal{M}})^{i}\oplus {\mathcal{C}^\infty}({\mathcal{M}})^{i-k+1}$$ given by the formula $$\mu^i(\zeta_1,\zeta_2)=(\zeta_1+\zeta_2\xi'',\zeta_2).$$ Hence, one obtains a well-defined map $H^\bullet({\mathcal{M}})\to{\text{Rep}^\infty({\mathcal{M}})}$. In particular, if ${\mathcal{M}}$ is a Lie algebroid, the above construction recovers Example 3.5 in [@ArCr12].
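The operator ${\mathcal{D}}_\xi$ indeed squares to zero: for $(\zeta_1,\zeta_2)\in{\mathcal{C}^\infty}({\mathcal{M}})^i\oplus{\mathcal{C}^\infty}({\mathcal{M}})^{i-k+1}$, since ${\mathcal{Q}}$ is a degree 1 derivation with ${\mathcal{Q}}^2=0$ and $|\zeta_2|=i-k+1$, $$\begin{aligned}
({\mathcal{D}}_\xi^{i+1}\circ{\mathcal{D}}_\xi^{i})(\zeta_1,\zeta_2)
&= \Big({\mathcal{Q}}^2(\zeta_1) + (-1)^{i-k+1}{\mathcal{Q}}(\zeta_2\xi) + (-1)^{i-k+2}{\mathcal{Q}}(\zeta_2)\xi,\ {\mathcal{Q}}^2(\zeta_2)\Big)\\
&= \Big((-1)^{i-k+1}(-1)^{i-k+1}\zeta_2\,{\mathcal{Q}}(\xi),\ 0\Big) = (0,0),
\end{aligned}$$ where the last step uses the hypothesis ${\mathcal{Q}}(\xi)=0$.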
The case of (split) Lie 2-algebroids
------------------------------------
Fix now a split Lie $2$-algebroid ${\mathcal{M}}$, and recall that from the analysis of Section \[lie2lie3\], ${\mathcal{M}}$ is given by the sum $Q[1]\oplus B^*[2]$ which forms the complex $$B^*\overset{\ell}{\longrightarrow} Q\overset{\rho_Q}{\longrightarrow} TM.$$ Unravelling the data of the definition of representations up to homotopy for the special case where $E$ is concentrated only in degree 0 yields the following characterisation.
\[Representations\_of\_Lie\_2-algebroids\] A representation of the Lie $2$-algebroid $Q[1]\oplus B^*[2]$ consists of a (non-graded) vector bundle $E$ over $M$, together with a $Q$-connection $\nabla$ on $E$ such that[^2]:
(i) $\nabla$ is flat, i.e. $R_\nabla = 0$ on $\Gamma(E)$,
(ii) $\partial_B\circ {\mathrm{d}}_\nabla = 0$ on $\Gamma(E)$.
Let $(E,{\mathcal{D}})$ be a representation of the Lie $2$-algebroid. Due to the Leibniz rule, ${\mathcal{D}}$ is completely characterised by what it does on $\Gamma(E)$. By definition, it sends $\Gamma(E)$ into $\Omega^1(Q,E)$. Using the Leibniz rule once more together with the definition of the homological vector field ${\mathcal{Q}}$ on $\Omega^1(Q)$, for all $f\in C^\infty(M)$ and all $e\in\Gamma(E)$ yields $${\mathcal{D}}(fe) = (\rho_Q^*{\mathrm{d}}f)\otimes e + f{\mathcal{D}}(e),$$ which implies that ${\mathcal{D}}= {\mathrm{d}}_\nabla$ for a $Q$-connection $\nabla$ on $\Gamma(E)$. Moreover, by definition of ${\mathcal{D}}$ one must have ${\mathcal{D}}^2(e) = 0$ for all $e\in\Gamma(E)$. On the other hand, a straightforward computation yields $${\mathcal{D}}^2(e) = {\mathcal{D}}({\mathrm{d}}_\nabla e) = {\mathrm{d}}_\nabla^2 e +
\partial_B({\mathrm{d}}_\nabla
e)\in\Omega^2(Q,E)\oplus\Gamma(B\otimes E).\qedhere$$
\[Trivial line bundle representation example\] The trivial line bundle ${\mathbb{R}}[0]$ over $M$ with $Q$-connection defined by $${\mathrm{d}}_\nabla f = {\mathrm{d}}_Q f =\rho_Q^* {\mathrm{d}}f$$ is a representation of the Lie $2$-algebroid $Q[1]\oplus
B^*[2]$. The operator ${\mathcal{D}}$ is given by the homological vector field ${\mathcal{Q}}$ and thus the cohomology induced by the representation is the Lie $2$-algebroid cohomology: $H^\bullet({\mathcal{M}},{\mathbb{R}}) = H^\bullet({\mathcal{M}})$.
\[Trivial representation of rank k example\] More generally, for all $k>0$, the trivial vector bundle ${\mathbb{R}}^k$ of rank $k$ over $M$ with $Q$-connection defined component-wise as in the example above becomes a representation with cohomology $H^\bullet({\mathcal{M}},{\mathbb{R}}^k)=H^\bullet({\mathcal{M}})\oplus\ldots\oplus H^\bullet({\mathcal{M}})$ ($k$-times).
Given a split Lie $n$-algebroid $A_1[1]\oplus\ldots\oplus A_n[n]$ over a smooth manifold $M$, with $n\geq 2$, the vector bundle $A_1\to M$ carries a dull algebroid structure induced by the 2-bracket and the anchor $\rho\colon A_1\to TM$ given by ${\mathcal{Q}}(f)=\rho^*{\mathrm{d}}f$, for $f\in C^\infty(M)$. Hence, Proposition \[Representations\_of\_Lie\_2-algebroids\], Example \[Trivial line bundle representation example\] and Example \[Trivial representation of rank k example\] can be carried over verbatim to the general case.
A more interesting case is for representations ${\underline{E}}$ which are concentrated in 3 degrees. An explicit description of those representations is given below. The reader should note the similarity of the following proposition with the description of 2-term representations of Lie algebroids from [@ArCr12].
\[3-term\_representations\] A 3-term representation up to homotopy $({\underline{E}}= E_0\oplus E_1\oplus E_2,{\mathcal{D}})$ of $Q[1]\oplus B^*[2]$ is equivalent to the following data:
(i) A degree 1 map $\partial\colon {\underline{E}}\to {\underline{E}}$ such that $\partial^2 = 0$,
(ii) a $Q$-connection $\nabla$ on the complex $\partial\colon E_\bullet\to E_{\bullet + 1}$,
(iii) an element $\omega_2\in\Omega^2(Q,\underline{\operatorname{End}}^{-1}({\underline{E}}))$,
(iv) an element $\omega_3\in\Omega^3(Q,\underline{\operatorname{End}}^{-2}({\underline{E}}))$, and an element $\phi_j\in\Gamma(B)\otimes\Omega^j(Q,\underline{\operatorname{End}}^{-j-1}({\underline{E}}))$ for $j=0,1$
such that[^3]
1. $\partial\circ\omega_2 + {\mathrm{d}}_\nabla^2 + \omega_2\circ\partial = 0$,
2. $\partial\circ\phi_0 + \partial_B\circ {\mathrm{d}}_\nabla
+ \phi_0\circ\partial = 0$,
3. $\partial\circ\omega_3 + {\mathrm{d}}_\nabla\circ\omega_2 +
\omega_2\circ {\mathrm{d}}_\nabla + \omega_3\circ\partial =
\langle \omega,\phi_0 \rangle$,
4. ${\mathrm{d}}_{\overline{\nabla}}\phi_0 +
\partial\circ\phi_1 + \partial_B\circ\omega_2 +
\phi_1\circ\partial = 0$,
5. ${\mathrm{d}}_\nabla\circ\omega_3 + \omega_2\circ\omega_2 +
\omega_3\circ {\mathrm{d}}_\nabla = \langle \omega,\phi_1
\rangle$,
6. ${\mathrm{d}}_{\overline{\nabla}}\phi_1 +
\omega_2\circ\phi_0 + \partial_B\circ\omega_3 +
\phi_0\circ\omega_2 = 0$,
7. $\phi_0\circ\phi_0 + \partial_B\circ\phi_1 = 0$,
where $\overline{\nabla}$ is the $Q$-connection on $B\otimes\underline{\operatorname{End}}^{-j-1}({\underline{E}})$ induced by $\nabla$ on $B$ and $\nabla^{\underline{\operatorname{End}}}$ on $\underline{\operatorname{End}}({\underline{E}})$.
1. If both of the bundles $E_1$ and $E_2$ are zero, the equations agree with those of a 1-term representation.
2. The equations in the statement can be summarised as follows: $$[\partial,\phi_0] + \partial_B\circ {\mathrm{d}}_\nabla = 0,\qquad
\phi_0\circ\phi_0 + \partial_B\circ\phi_1 = 0,$$ and for all $i$: $$[\partial,\omega_i] + [{\mathrm{d}}_\nabla,\omega_{i-1}]
+\omega_2\circ\omega_{i-2} +
\omega_3\circ\omega_{i-3} + \ldots
+\omega_{i-2}\circ\omega_2 = \langle
\omega,\phi_{i-3} \rangle,$$ $$\partial_B\circ\omega_{i+2} + [\partial,\phi_{i+1}]
+ {\mathrm{d}}_{\overline{\nabla}}\phi_i +
\sum_{j\geq2}[\omega_j,\phi_{i-j+1}] = 0.$$
By the Leibniz rule, it suffices to check how ${\mathcal{D}}$ acts on $\Gamma({\underline{E}})$. Since ${\mathcal{D}}$ is of degree 1, it maps each $\Gamma(E_i)$ into the direct sum $$\Gamma(E_{i+1}) \oplus
\left({\mathcal{C}^\infty}({\mathcal{M}})^1\otimes\Gamma(E_i)\right) \oplus
\left({\mathcal{C}^\infty}({\mathcal{M}})^2\otimes\Gamma(E_{i-1})\right) \oplus
\left({\mathcal{C}^\infty}({\mathcal{M}})^3\otimes\Gamma(E_{i-2})\right).$$ Considering the components of ${\mathcal{D}}$, this translates to the following three equations: $${\mathcal{D}}(e) = \partial(e) + d(e)\in\Gamma(E_1)\oplus\Omega^1(Q,E_0)$$ for $e\in\Gamma(E_0)$, $${\mathcal{D}}(e) = \partial(e) + d(e) + \omega_2(e) +
\phi_0(e)\in\Gamma(E_2)\oplus\Omega^1(Q,E_1)\oplus\Omega^2(Q,E_0)\oplus\left(\Gamma(B)\otimes\Gamma(E_0)\right)$$ for $e\in\Gamma(E_1)$, and $$\begin{aligned}
{\mathcal{D}}(e) = &\ d(e) + \omega_2(e) + \phi_0(e) + \omega_3(e) +\phi_1(e)\\
& \in\Omega^1(Q,E_2)\oplus\Omega^2(Q,E_1)\oplus\left(\Gamma(B)\otimes\Gamma(E_1)\right)\\
& \oplus\Omega^3(Q,E_0)\oplus\left(\Gamma(B)\otimes\Omega^1(Q,E_0)\right)
\end{aligned}$$ for $e\in\Gamma(E_2)$. Due to Lemma \[wedge\_product-operators\_Correspondence\_Lemma\] and the Leibniz rule for ${\mathcal{D}}$, $\partial\in\underline{\operatorname{End}}^1({\underline{E}})$, $d={\mathrm{d}}_\nabla$ where $\nabla$ are $Q$-connections on the vector bundles $E_i$ for $i = 0,1,2$, $\omega_i\in\Omega^i(Q,\underline{\operatorname{End}}^{1-i}({\underline{E}}))$ for $i = 2,3$, and $\phi_i\in\Gamma(B)\otimes\Omega^i(Q,\underline{\operatorname{End}}^{-i-1}({\underline{E}}))$ for $i = 0,1$.
A straightforward computation and a degree count in the expansion of the equation ${\mathcal{D}}^2=0$ shows that $({\underline{E}},\partial)$ is a complex, $\nabla$ commutes with $\partial$, and the equations in the statement hold.
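As a special case, if the structure objects $\omega_2,\omega_3,\phi_0,\phi_1$ all vanish, the first two equations of the proposition reduce to $${\mathrm{d}}_\nabla^2 = 0 \qquad\text{and}\qquad \partial_B\circ{\mathrm{d}}_\nabla = 0 \quad\text{on } \Gamma({\underline{E}}),$$ while the remaining equations hold trivially. Together with the condition that $\nabla$ commutes with $\partial$, such a 3-term representation is precisely a complex $\partial\colon E_\bullet\to E_{\bullet+1}$ whose terms are representations in the sense of Proposition \[Representations\_of\_Lie\_2-algebroids\], with a differential that is parallel for the $Q$-connections.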
Adjoint representation of a Lie 2-algebroid {#adjoint}
-------------------------------------------
This section shows that any split Lie $2$-algebroid $Q[1]\oplus B^*[2]$ admits a 3-term representation up to homotopy which is called *the adjoint representation*. It is a generalisation of the adjoint representation of a (split) Lie $1$-algebroid studied in [@ArCr12].
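For comparison, recall from [@ArCr12] the case of a Lie 1-algebroid $A$: after a choice of $TM$-connection $\nabla$ on $A$, the adjoint representation is the 2-term representation on the complex $A[1]\overset{\rho}{\to} TM[0]$ with the two basic $A$-connections $$\nabla^{\text{bas}}_a b = [a,b] + \nabla_{\rho(b)}a \qquad\text{and}\qquad \nabla^{\text{bas}}_a X = [\rho(a),X] + \rho(\nabla_X a)$$ for $a,b\in\Gamma(A)$ and $X\in\mathfrak{X}(M)$, and with the basic curvature $R^{\text{bas}}_\nabla\in\Omega^2(A,\operatorname{Hom}(TM,A))$ playing the role of $\omega_2$ (up to the sign conventions chosen there). The proposition below extends exactly this structure to split Lie 2-algebroids.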
\[Adjoint\_representation\_of\_Lie\_2-algebroid\] Any split Lie $2$-algebroid $Q[1]\oplus B^*[2]$ admits a 3-term representation up to homotopy as follows: Choose arbitrary $TM$-connections on $Q$ and $B^*$ and denote both by $\nabla$. Then the structure objects are[^4]
(i) the *adjoint complex* $B^*[2]\to Q[1]\to TM[0]$ with maps $-\ell$ and $\rho_Q$,
(ii) the two $Q$-connections $\nabla^{\text{bas}}$ on $Q$ and $TM$, and the $Q$-connection $\nabla^*$ on $B^*$ given by the split Lie 2-algebroid,
(iii) the element $\omega_2\in\Omega^2(Q,\operatorname{Hom}(Q,B^*)\oplus\operatorname{Hom}(TM,Q))$ defined by $$\omega_2(q_1,q_2)q_3 =
-\omega(q_1,q_2,q_3)\in\Gamma(B^*)\ \text{and}\
\omega_2(q_1,q_2)X =
-R_\nabla^\text{bas}(q_1,q_2)X\in\Gamma(Q)$$ for $q_1,q_2,q_3\in\Gamma(Q)$ and $X\in\mathfrak{X}(M)$,
(iv) the element $\omega_3\in\Omega^3(Q,\operatorname{Hom}(TM,B^*))$ defined by $$\omega_3(q_1,q_2,q_3)X = - (\nabla_X\omega)(q_1,q_2,q_3)\in\Gamma(B^*)$$ for $q_1,q_2,q_3\in\Gamma(Q)$ and $X\in\mathfrak{X}(M)$,
(v) the element $\phi_0\in\Gamma(B)\otimes(\operatorname{Hom}(Q,B^*)\oplus\operatorname{Hom}(TM,Q))$ defined by $$\phi_0(\beta)X = \ell(\nabla_X\beta) - \nabla_X(\ell(\beta))\in\Gamma(Q)\ \text{and}\
\phi_0(\beta)q = \nabla_{\rho(q)}\beta - \nabla^*_q\beta\in\Gamma(B^*)$$ for $\beta\in\Gamma(B^*),q\in\Gamma(Q),X\in\mathfrak{X}(M)$,
(vi) the element $\phi_1\in\Gamma(B)\otimes\Omega^1(Q,\operatorname{Hom}(TM,B^*))$ defined by $$\phi_1(\beta,q)X = \nabla_X\nabla^*_q \beta - \nabla^*_q\nabla_X
\beta-\nabla^*_{\nabla_X q} \beta
+ \nabla_{\nabla^{\rm bas}_ qX} \beta\in\Gamma(B^*)$$ for $\beta\in\Gamma(B^*),q\in\Gamma(Q),X\in\mathfrak{X}(M)$.
The proof can be done in two ways. First, one could check explicitly that all the conditions of a 3-representation of $Q[1]\oplus B^*[2]$ are satisfied. This is an easy but long computation and it can be found in [@Papantonis21]. Instead, the following section shows that given a splitting and $TM$-connections on the vector bundles $Q$ and $B^*$, there exists an isomorphism of sheaves of ${\mathcal{C}^\infty}({\mathcal{M}})$-modules between the adjoint module $\mathfrak{X}({\mathcal{M}})$ and ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(TM\oplus Q\oplus B^*)$, such that the objects defined above correspond to the differential ${{{\pounds}}_{{\mathcal{Q}}}}$. Another advantage of this approach is that it gives a precise recipe for the definition and the explicit formulas for the components of the adjoint representation of a Lie $n$-algebroid for general $n$.
Adjoint module vs adjoint representation {#adjoint_module_adjoint_representation_isomorphism}
----------------------------------------
Recall that for a split $[n]$-manifold ${\mathcal{M}}=\bigoplus E_i[i]$, the space of vector fields over ${\mathcal{M}}$ is generated as a ${\mathcal{C}^\infty}({\mathcal{M}})$-module by two special kinds of vector fields. Namely, the degree $-i$ vector fields $\hat{e}$ for $e\in\Gamma(E_i)$, and the family of vector fields $\nabla^1_X \oplus \ldots \oplus \nabla^n_X$ for $X\in\mathfrak{X}(M)$ and a choice of $TM$-connections $\nabla^i$ on the vector bundles $E_i$.
Consider now a Lie 2-algebroid $({\mathcal{M}},{\mathcal{Q}})$ together with a splitting ${\mathcal{M}}\simeq Q[1]\oplus B^*[2]$ and a choice of $TM$-connections $\nabla^{B^*}$ and $\nabla^Q$ on $B^*$ and $Q$, respectively. These choices yield, as follows, the adjoint representation $\operatorname{ad}_\nabla$, whose underlying complex is $B^*[2]\oplus Q[1]\oplus TM[0]$. Define a map $\mu_\nabla\colon {\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma(B^*[2]\oplus Q[1]\oplus
TM[0] )\to \mathfrak{X}({\mathcal{M}})$ on the generators by $$\Gamma(B^*)\ni \beta \mapsto \hat{\beta},\qquad
\Gamma(Q)\ni q \mapsto \hat{q},\qquad
\mathfrak{X}(M)\ni X \mapsto \nabla^{B^*}_X \oplus \nabla^Q_X$$ and extend ${\mathcal{C}^\infty}({\mathcal{M}})$-linearly to the whole space to obtain a degree-preserving isomorphism of sheaves of ${\mathcal{C}^\infty}({\mathcal{M}})$-modules. A straightforward computation shows that $${{{\pounds}}_{{\mathcal{Q}}}}(\hat{\beta}) = \mu\left(-\ell(\beta) + {\mathrm{d}}_{\nabla^*}\beta\right),$$ $${{{\pounds}}_{{\mathcal{Q}}}}(\hat{q}) = \mu\left(\rho_Q(q) +
{\mathrm{d}}_{\nabla^{\text{bas}}}q +\omega_2(\cdot\,,\cdot)q +
\phi_0(\cdot)q\right),$$ $${{{\pounds}}_{{\mathcal{Q}}}}(\nabla_X^{B^*} \oplus \nabla_X^Q) = \mu\left(
{\mathrm{d}}_{\nabla^{\text{bas}}} X +\phi_0(\cdot)X
+\omega_2(\cdot\,,\cdot)X + \omega_3(\cdot\,,\cdot\,,\cdot)X +
\phi_1(\cdot\,,\cdot)X \right)$$ and therefore, the objects in the statement of Proposition \[Adjoint\_representation\_of\_Lie\_2-algebroid\] define the differential ${\mathcal{D}}_{\operatorname{ad}_\nabla}:=\mu_\nabla^{-1}\circ{{{\pounds}}_{{\mathcal{Q}}}}\circ\mu_\nabla$ of a 3-representation of $Q[1]\oplus B^*[2]$, called the *adjoint representation* and denoted by $(\operatorname{ad}_\nabla,{\mathcal{D}}_{\operatorname{ad}_\nabla})$. The adjoint representation is hence, up to isomorphism, independent of the choice of splitting and connections (see the following section for the precise transformations), and therefore defines a well-defined class $\operatorname{ad}\in{\text{Rep}^\infty({\mathcal{M}})}$.
Due to the result above, one can also define the *coadjoint representation* of a Lie 2-algebroid $({\mathcal{M}},{\mathcal{Q}})$ as the isomorphism class $\operatorname{ad}^*\in{\text{Rep}^\infty({\mathcal{M}})}$. To find an explicit representative of $\operatorname{ad}^*$, suppose that $Q[1]\oplus B^*[2]$ is a splitting of ${\mathcal{M}}$, and consider its adjoint representation $\operatorname{ad}_\nabla$ as above for some choice of $TM$-connections $\nabla$ on $B^*$ and $Q$. Recall that given a representation up to homotopy $({\underline{E}},{\mathcal{D}})$ of ${\mathcal{M}}$, its dual ${\underline{E}}^*$ becomes a representation up to homotopy with operator ${\mathcal{D}}^*$ characterised by the formula $${\mathcal{Q}}(\xi\wedge\xi') = {\mathcal{D}}^*(\xi)\wedge\xi' + (-1)^{|\xi|}\xi\wedge{\mathcal{D}}(\xi'),$$ for all $\xi\in {\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma({\underline{E}}^*)$ and $\xi'\in {\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma({\underline{E}})$. Here, $\wedge=\wedge_{\langle\cdot\,,\cdot\rangle}$, with $\langle\cdot\,,\cdot\rangle$ the pairing of ${\underline{E}}$ with ${\underline{E}}^*$. Unravelling the definition of the dual representation for $\operatorname{ad}_\nabla$, one finds that $\operatorname{ad}_\nabla^*$ is given by the following objects:
1. the *coadjoint complex* $T^*M\to Q^* \to B$ with maps $-\rho_Q^*$ and $-\ell^*$,
2. the $Q$-connections $\nabla$ on $B$ and $\nabla^{\text{bas},*}$ on $Q^*$ and $T^*M$,
3. the elements
$$\begin{aligned}
&\omega_2^*(q_1,q_2)\tau=\tau\circ\omega_2(q_1,q_2),
&&\omega_2^*(q_1,q_2)b=-b\circ\omega_2(q_1,q_2),\\
&\phi_0^*(\beta)\tau=\tau\circ\phi_0(\beta),
&&\phi_0^*(\beta)b=-b\circ\phi_0(\beta),\\
&\omega_3^*(q_1,q_2,q_3)b=-b\circ\omega_3(q_1,q_2,q_3),
&&\phi_1^*(\beta,q)b=-b\circ\phi_1(\beta,q),
\end{aligned}$$
for all $q,q_1,q_2,q_3\in\Gamma(Q),\tau\in\Gamma(Q^*),b\in\Gamma(B)$ and $\beta\in\Gamma(B^*)$.
\[Iso\_coad\_mod\_coad\_rep\] The coadjoint representation can also be obtained from the coadjoint module $\Omega^1({\mathcal{M}})$ by the ${\mathcal{C}^\infty}({\mathcal{M}})$-module isomorphism $\mu^\star_\nabla\colon\Omega^1({\mathcal{M}})\to{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(B[-2] \oplus
Q^*[-1] \oplus T^*M[0])$ which is dual to $\mu_\nabla\colon {\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma(TM[0]\oplus Q[1]\oplus B^*[2])\to
\mathfrak{X}({\mathcal{M}})$ above. Explicitly, it is defined as the pull-back map $\mu^\star_\nabla(\omega)=\omega\circ\mu$ for all $\omega\in\Omega^1({\mathcal{M}})$, whose inverse is given on generators by ${\mathcal{C}^\infty}({\mathcal{M}})\otimes_{C^\infty(M)}\Gamma(B[-2] \oplus Q^*[-1] \oplus T^*M[0])\to
\Omega^1({\mathcal{M}})$, $$\Gamma(B)\ni b\mapsto {\mathrm{d}}b-{\mathrm{d}}_\nabla b, \quad \Gamma(Q^*)\ni\tau\mapsto - ({\mathrm{d}}\tau - {\mathrm{d}}_\nabla\tau),
\quad \text{and}\quad \Omega^1(M)\ni\theta\mapsto\theta.$$
Coordinate transformation of the adjoint representation
-------------------------------------------------------
The adjoint representation up to homotopy of a Lie 2-algebroid depends on a choice of splitting and on choices of $TM$-connections. This section explains how the adjoint representation changes under different choices.
First, a morphism of 3-representations of a split Lie 2-algebroid can be described as follows.
\[morphism\_of\_3-term\_representations\] Let $({\underline{E}},{\mathcal{D}}_{{\underline{E}}})$ and $(\underline{F},{\mathcal{D}}_{\underline{F}})$ be 3-term representations up to homotopy of the split Lie 2-algebroid $Q[1]\oplus B^*[2]$. A morphism $\mu\colon {\underline{E}}\to \underline{F}$ is equivalent to the following data:
(i) For each $i=0,1,2$, an element $\mu_i\in\Omega^{i}(Q,\underline{\operatorname{Hom}}^{-i}({\underline{E}},\underline{F}))$.
(ii) An element $\mu^b\in\Gamma(B\otimes\underline{\operatorname{Hom}}^{-2}({\underline{E}},\underline{F}))$.
The above objects are subject to the relations
1. $[\partial,\mu_i] + [{\mathrm{d}}_\nabla,\mu_{i-1}] +
{\displaystyle\sum_{j+k=i,\ j\geq2}[\omega_j,\mu_k]} = \langle
\omega,\mu^b_{i-3} \rangle$,
2. $[\partial,\mu^b] + [\phi_0,\mu_0] + \partial_B\circ\mu_1 = 0$,
3. ${\mathrm{d}}_{\overline{\nabla}}\mu^b + [\phi_0,\mu_1] +
[\phi_1,\mu_0] + \partial_B\circ\mu_2 = 0$,
where $\mu_0^b = \mu^b$ and $\mu_i^b = 0$ for $i \neq 0$.
As before, it suffices by the same arguments to check how $\mu$ acts on $\Gamma({\underline{E}})$. Then it must be of the type $$\mu = \mu_0 + \mu_1 + \mu_2 + \mu^b,$$ where $\mu_i\in\Omega^{i}(Q,\underline{\operatorname{Hom}}^{-i}({\underline{E}},\underline{F}))$ and $\mu^b\in\Gamma(B)\otimes\Gamma(\underline{\operatorname{Hom}}^{-2}({\underline{E}},\underline{F}))$. The three equations in the statement come from the expansion of $\mu\circ{\mathcal{D}}_{{\underline{E}}} = {\mathcal{D}}_{\underline{F}}\circ\mu$ when $\mu$ is written in terms of the components defined before.
The transformation of $\operatorname{ad}\in {\text{Rep}^\infty({\mathcal{M}})}$ for a fixed splitting $Q[1]\oplus B^*[2]$ of ${\mathcal{M}}$ and different choices of $TM$-connections is given by their difference. More precisely, let $\nabla$ and $\nabla'$ be the two $TM$-connections. Then the map $\mu=\mu_{\nabla'}^{-1}\circ \mu_\nabla\colon\operatorname{ad}_\nabla\to \operatorname{ad}_{\nabla'}$ is defined by $\mu = \mu_0 + \mu_1 + \mu^b$, where $$\begin{aligned}
\mu_0 = &\ \operatorname{id}\\
\mu_1(q)X = &\ \nabla'_X q - \nabla_X q \\
\mu^b(\beta)X = &\ \nabla'_X \beta - \nabla_X \beta,\end{aligned}$$ for $X\in\mathfrak{X}(M)$, $q\in\Gamma(Q)$ and $\beta\in\Gamma(B^*)$. The equations in Proposition \[morphism\_of\_3-term\_representations\] are automatically satisfied since by construction $${\mathcal{D}}_{\operatorname{ad}_{\nabla'}}\circ\mu={\mathcal{D}}_{\operatorname{ad}_{\nabla'}}\circ\mu_{\nabla'}^{-1}\circ\mu_\nabla=\mu_{\nabla'}^{-1}\circ{{{\pounds}}_{{\mathcal{Q}}}}\circ\mu_\nabla=\mu_{\nabla'}^{-1}\circ\mu_\nabla\circ
{\mathcal{D}}_{\operatorname{ad}_{\nabla}}=\mu\circ {\mathcal{D}}_{\operatorname{ad}_{\nabla}}.$$ This yields the following result.
\[Isomorphism with change of connections\] Given two pairs of $TM$-connections on the bundles $B^*$ and $Q$, the isomorphism $\mu\colon\operatorname{ad}_\nabla\to \operatorname{ad}_{\nabla'}$ between the corresponding adjoint representations is given by $\mu=\operatorname{id}\oplus \Big( \nabla'-\nabla \Big)$.
The next step is to show how the adjoint representation transforms after a change of splitting of the Lie 2-algebroid. Fix a Lie 2-algebroid $({\mathcal{M}},Q)$ over the smooth manifold $M$ and choose a splitting $Q[1]\oplus B^*[2]$, with structure objects $(\ell,\rho,[\cdot\,,\cdot]_1,\nabla^1,\omega^1)$ as before. Recall that a change of splitting does not change the vector bundles $B^*$ and $Q$, and it is equivalent to a section $\sigma\in\Omega^2(Q,B^*)$. The induced isomorphism of \[2\]-manifolds over the identity on $M$ is given by: $\mathcal{F}_\sigma^\star(\tau) = \tau$ for all $\tau\in\Gamma(Q^*)$ and $\mathcal{F}^\star_\sigma(b) = b + \sigma^\star
b\in\Gamma(B)\oplus\Omega^2(Q)$ for all $b\in\Gamma(B)$. If $(\ell,\rho,[\cdot\,,\cdot]_2,\nabla^2,\omega^2)$ are the structure objects of the second splitting, then the compatibility of $\sigma$ with the homological vector fields reads
- The dull brackets are related by: $[q_1,q_2]_2 = [q_1,q_2]_1 - \ell(\sigma(q_1,q_2))$.
- The connections are related by: $\nabla^2_q b = \nabla^1_q b + \partial_B\langle
\sigma(q,\cdot),b \rangle$, or equivalently on the dual by $\nabla^{2*}_q \beta = \nabla^{1*}_q \beta -
\sigma(q,\ell(\beta))$.
- The curvature terms are related by: $\omega^2 = \omega^1 + {\mathrm{d}}_{2,\nabla^1}\sigma$, where the operator $${\mathrm{d}}_{2,\nabla^1}\colon \Omega^\bullet(Q,B^*)\to\Omega^{\bullet+1}(Q,B^*)$$ is defined by the usual Koszul formula using the dull bracket $[\cdot\,,\cdot]_2$ and the connection $\nabla^{1*}$.
The above equations give the following identities between the structure data for the adjoint representations[^5] $\operatorname{ad}_\nabla^1$ and $\operatorname{ad}_\nabla^2$.
\[Identities\_for\_different\_splitting\_of\_Lie\_2-algebroid\] Let $q,q_1,q_2\in\Gamma(Q),\beta\in\Gamma(B^*)$ and $X\in\mathfrak{X}(M)$. Then
(i) $\ell_2 = \ell_1$ and $\rho_2 = \rho_1$.
(ii) $\nabla^{2,\text{bas}}_{q_1} q_2 = \nabla^{1,\text{bas}}_{q_1} q_2 - \ell(\sigma(q_1,q_2))$
$\nabla^{2,\text{bas}}_{q} X = \nabla^{1,\text{bas}}_{q} X$
$\nabla^{2,*}_{q} \beta = \nabla^{1,*}_{q} \beta - \sigma(q,\ell(\beta))$.
(iii) $\omega_2^2(q_1,q_2)q_3 = \omega_2^1(q_1,q_2)q_3 + {\mathrm{d}}_{2,\nabla^1}\sigma(q_1,q_2,q_3)$\
$\omega_2^2(q_1,q_2)X = \omega_2^1(q_1,q_2)X +
\nabla_X(\ell(\sigma(q_1,q_2))) -
\ell(\sigma(q_1,\nabla_X q_2)) +
\ell(\sigma(q_2,\nabla_X q_1))$.
(iv) $\omega_3^2(q_1,q_2,q_3)X = \omega_3^1(q_1,q_2,q_3)X + (\nabla_X({\mathrm{d}}_{2,\nabla^1}\sigma))(q_1,q_2,q_3)$.
(v) $\phi_0^2(\beta)q = \phi_0^1(\beta)q + \sigma(q,\ell(\beta))$\
$\phi_0^2(\beta)X = \phi_0^1(\beta)X$.
(vi) $\phi_1^2(\beta,q)X = \phi_1^1(\beta,q)X -
\sigma(\nabla_X q,\ell(\beta)) -
\sigma(q,\ell(\nabla_X \beta)) +
\nabla_X(\sigma(q,\ell(\beta)))$.
Consider now two Lie $n$-algebroids ${\mathcal{M}}_1$ and ${\mathcal{M}}_2$ over $M$, and an isomorphism $$\mathcal{F}\colon({\mathcal{M}}_1,{\mathcal{Q}}_1)\to({\mathcal{M}}_2,{\mathcal{Q}}_2)$$ given by the maps $\mathcal{F}_Q\colon Q_1\to Q_2$, $\mathcal{F}_B\colon B_1^*\to B^*_2$, and $\mathcal{F}_0\colon\wedge^2Q_1\to B_2^*$. Recall that a 0-morphism between two representations up to homotopy $({\underline{E}}_1,{\mathcal{D}}_1)$ and $({\underline{E}}_2,{\mathcal{D}}_2)$ of ${\mathcal{M}}_1$ and ${\mathcal{M}}_2$, respectively, is given by a degree 0 map $$\mu\colon {\mathcal{C}^\infty}({\mathcal{M}}_2)\otimes\Gamma({\underline{E}}_2)\to {\mathcal{C}^\infty}({\mathcal{M}}_1)\otimes\Gamma({\underline{E}}_1),$$ which is ${\mathcal{C}^\infty}({\mathcal{M}}_2)$-linear: $\mu(\xi\otimes e) = \mathcal{F}^\star\xi\otimes\mu(e)$ for all $\xi\in {\mathcal{C}^\infty}({\mathcal{M}}_2)$ and $e\in\Gamma({\underline{E}}_2)$, and makes the following diagram commute $$\xymatrix{
{\mathcal{C}^\infty}({\mathcal{M}}_2)\otimes\Gamma({\underline{E}}_2)\ar[r]^{\mu}\ar[d]_{{\mathcal{D}}_2} & {\mathcal{C}^\infty}({\mathcal{M}}_1)\otimes\Gamma({\underline{E}}_1)\ar[d]^{{\mathcal{D}}_1} \\
{\mathcal{C}^\infty}({\mathcal{M}}_2)\otimes\Gamma({\underline{E}}_2)\ar[r]_\mu &
{\mathcal{C}^\infty}({\mathcal{M}}_1)\otimes\Gamma({\underline{E}}_1).
}$$ The usual analysis as before implies that $\mu$ must be given by a morphism of complexes $\mu_0\colon ({\underline{E}}_2,\partial_2)\to ({\underline{E}}_1,\partial_1)$ and elements $$\mu_1\in\Omega^1(Q_1,\underline{\operatorname{Hom}}^{-1}({\underline{E}}_2,{\underline{E}}_1)),$$ $$\mu_2\in\Omega^2(Q_1,\underline{\operatorname{Hom}}^{-2}({\underline{E}}_2,{\underline{E}}_1)),$$ $$\mu^b\in \Gamma(B)\otimes\Gamma(\underline{\operatorname{Hom}}^{-2}({\underline{E}}_2,{\underline{E}}_1)),$$ which satisfy equations similar to the set of equations in Proposition \[morphism\_of\_3-term\_representations\].
A change of splitting of the Lie 2-algebroid transforms the adjoint representation as follows. Since changes of choices of connections are now fully understood, choose the same connection for both splittings ${\mathcal{M}}_1\simeq Q[1]\oplus B^*[2]\simeq{\mathcal{M}}_2$. Suppose that $\sigma\in\Omega^2(Q,B^*)$ is the change of splitting and denote by $\mathcal{F}_\sigma$ the induced isomorphism of the split Lie 2-algebroids whose components are given by $\mathcal{F}^\star_{\sigma,Q}=\operatorname{id}_{Q^*}, \mathcal{F}^\star_{\sigma,B}=\operatorname{id}_B,
\mathcal{F}^\star_{\sigma,0}=\sigma^\star$. The composition map $\mu^\sigma:\operatorname{ad}_\nabla^1\to\mathfrak{X}({\mathcal{M}})\to\operatorname{ad}_{\nabla}^2$ is given in components by $$\begin{aligned}
\mu_0^\sigma = &\ \operatorname{id}\\
\mu_1^\sigma(q_1)q_2 = &\ \sigma(q_1,q_2) \\
\mu_2^\sigma(q_1,q_2)X = &\ (\nabla_X \sigma)(q_1,q_2).\end{aligned}$$ A similar argument as before implies that $\mu^\sigma$ is a morphism between the two adjoint representations and therefore the following result follows.
\[Isomorphism with change of splitting\] Given two splittings of a Lie 2-algebroid with induced change of splitting $\sigma\in\Omega^2(Q,B^*)$ and a pair of $TM$-connections on the vector bundles $B^*$ and $Q$, the isomorphism between the corresponding adjoint representations is given by $\mu=\operatorname{id}\oplus\ \sigma\oplus\nabla_\cdot\sigma$.
Adjoint representation of a Lie $n$-algebroid {#Adjoint of Lie n-algebroids}
---------------------------------------------
The construction of the adjoint representation up to homotopy of a Lie $n$-algebroid $({\mathcal{M}},{\mathcal{Q}})$ for general $n$ is similar to the $n=2$ case. Specifically, choose a splitting ${\mathcal{M}}\simeq \bigoplus_{i=1}^n E_i[i]$ and $TM$-connections $\nabla^i$ on the bundles $E_i$. Then there is an induced isomorphism of ${\mathcal{C}^\infty}({\mathcal{M}})$-modules $$\begin{aligned}
\mu\colon {\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(TM[0]\oplus E_1[1]\oplus\ldots\oplus E_n[n]) & \to \mathfrak{X}({\mathcal{M}}),\end{aligned}$$ which at the level of generators is given by $$\begin{aligned}
\Gamma(E_i)\ni e & \mapsto \hat{e} \quad \text{ and } \quad \mathfrak{X}(M)\ni X \mapsto \nabla^{E_n}_X \oplus \ldots \oplus \nabla^{E_1}_X.\end{aligned}$$ Then $\mu$ is used to transfer ${{{\pounds}}_{{\mathcal{Q}}}}$ from $\mathfrak{X}({\mathcal{M}})$ to obtain the differential ${\mathcal{D}}_{\operatorname{ad}_\nabla} := \mu^{-1}\circ{{{\pounds}}_{{\mathcal{Q}}}}\circ\mu$ on ${\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(TM[0]\oplus E_1[1]\oplus\ldots\oplus E_n[n]) $.
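For $n=1$ this construction recovers the well-known adjoint representation of a Lie algebroid on $TM[0]\oplus A[1]$. The following sketch records the standard structure operators (the basic connections and basic curvature), assuming the usual conventions of the literature; it is included only as an orientation point and is not part of the general construction above.

```latex
% n = 1 sketch: Lie algebroid (A, rho, [.,.]) over M with a chosen
% TM-connection nabla on A.  The complex of ad_nabla is
%     A[1] --rho--> TM[0],
% and the differential D_{ad_nabla} is encoded by:
\begin{align*}
\nabla^{\mathrm{bas}}_a X &= \rho(\nabla_X a) + [\rho(a), X],
  & a&\in\Gamma(A),\ X\in\mathfrak{X}(M),\\
\nabla^{\mathrm{bas}}_a b &= [a, b] + \nabla_{\rho(b)} a,
  & b&\in\Gamma(A),\\
R^{\mathrm{bas}}_\nabla(a, b)X &= \nabla_X [a,b] - [\nabla_X a, b]
  - [a, \nabla_X b] - \nabla_{\nabla^{\mathrm{bas}}_b X} a
  + \nabla_{\nabla^{\mathrm{bas}}_a X} b.
\end{align*}
```

These are exactly the operators $\nabla^{\text{bas}}$ and $R_\nabla^{\text{bas}}$ that reappear in the tangent prolongation computations later in this section.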
Split VB-Lie $n$-algebroids {#VB-Lie n-algebroids}
===========================
\[Split VB-Lie n-algebroids\]
This section gives a picture of representations up to homotopy in more “classical” geometric terms, that is, in terms of linear Lie $n$-algebroid structures on double vector bundles. It introduces the notion of split VB-Lie $n$-algebroids and explains how they correspond to $(n+1)$-representations of Lie $n$-algebroids. In particular, the tangent of a Lie $n$-algebroid is a VB-Lie $n$-algebroid which is linked to the adjoint representation. The main result in this section is a generalisation of the correspondence between decomposed VB-algebroids and $2$-representations in [@GrMe10].
Double vector bundles
---------------------
Recall that a double vector bundle $(D,V,F,M)$ is a commutative diagram $$\begin{tikzcd}
D \arrow[d,"\pi_V"'] \arrow[r,"\pi_F"] & F \arrow[d,"q_F"] \\
V \arrow[r,"q_V"'] & M
\end{tikzcd}$$ such that all the arrows are vector bundle projections and the structure maps of the bundle $D\to V$ are bundle morphisms over the corresponding structure maps of $F\to M$ (see [@Mackenzie05]). This is equivalent to the same condition holding for the structure maps of $D\to F$ over $V\to M$. The bundles $V$ and $F$ are called the side bundles of $D$. The intersection of the kernels $C:=\pi_V^{-1}(0^V)\cap\pi_F^{-1}(0^F)$ is the $\mathit{core}$ of $D$ and is naturally a vector bundle over $M$, with projection denoted by $q_C\colon C\to M$. The inclusion $C\hookrightarrow D$ is denoted by $C_m\ni c_m\mapsto\overline{c}\in
\pi_V^{-1}(0^V_m)\cap\pi_F^{-1}(0^F_m)$.
A morphism $(G_D,G_V,G_F,g)$ of two double vector bundles $(D,V,F,M)$ and $(D',V',F',M')$ is a commutative cube $$\begin{tikzcd}
& D \arrow[dl, "G_D"] \arrow[rr] \arrow[dd] & & F \arrow[dl, "G_F"] \arrow[dd] \\
D' \arrow[rr, crossing over] \arrow[dd] & & F' \\
& V \arrow[dl, "G_V"] \arrow[rr] & & M \arrow[dl, "g"] \\
V' \arrow[rr] & & M' \arrow[from=uu, crossing over]
\end{tikzcd}$$ such that all the faces are vector bundle maps.
Given a double vector bundle $(D,V,F,M)$, the space of sections of $D$ over $V$, denoted by $\Gamma_V(D)$, is generated as a $C^\infty(V)$-module by two special types of sections, called *core* and *linear* sections and denoted by $\Gamma_V^c(D)$ and $\Gamma^l_V(D)$, respectively (see [@Mackenzie05]). The core section $c^\dagger\in\Gamma_V^c(D)$ corresponding to $c\in\Gamma(C)$ is defined as $$c^\dagger(v_m) = 0_{v_m}^D +_F \overline{c(m)},\, \text{ for }\, m\in M \, \text{ and }\, v_m\in V_m.$$ A section $\delta\in\Gamma_V(D)$ is linear over $f\in\Gamma(F)$ if $\delta\colon V\to D$ is a vector bundle morphism over $f\colon M\to F$.
Finally, a section $\psi\in\Gamma(V^*\otimes C)$ defines a linear section $\psi^\wedge\colon V\to D$ over the zero section $0^F\colon M\to F$ by $$\psi^\wedge(v_m) = 0_{v_m}^D +_F \overline{\psi(v_m)}$$ for all $m\in M$ and $v_m\in V_m$. This type of linear section is called a *core-linear* section. In terms of the generators $\theta\otimes c\in\Gamma(V^*\otimes C)$, the correspondence above reads $(\theta\otimes c)^\wedge=\ell_\theta\cdot c^\dagger$, where $\ell_\theta$ is the linear function on $V$ associated to $\theta\in\Gamma(V^*)$.
\[Example decomposed DVB\] Let $V,F,C$ be vector bundles over the same manifold $M$. Set $D:=V\times_M F\times_M C$ with vector bundle structures $D=q_V^!(F\oplus C)\to V$ and $D=q_F^!(V\oplus C)\to F$. Then $(D,V,F,M)$ is a double vector bundle, called the decomposed double vector bundle with sides $V$ and $F$ and with core $C$. Its core sections have the form $c^\dagger\colon v_m\mapsto(v_m,0^F_m,c(m))$, for $m\in M$, $v_m\in V_m$ and $c\in\Gamma(C)$, and the space of linear sections $\Gamma_V^l(D)$ is naturally identified with $\Gamma(F)\oplus\Gamma(V^*\otimes C)$ via $(f,\psi)\colon v_m\mapsto(v_m,f(m),\psi(v_m))$, where $\psi\in\Gamma(V^*\otimes C)$ and $f\in\Gamma(F)$. This yields the canonical *linear horizontal lift* $h\colon \Gamma(F)\hookrightarrow\Gamma_V^l(D)$.
Given a vector bundle $q\colon E\to M$, its tangent bundle $TE$ is naturally a vector bundle over the manifold $E$. In addition, applying the tangent functor to all the structure maps of $E\to M$ yields a vector bundle structure on $Tq\colon TE\to TM$ which is called the *tangent prolongation* of $E$. Hence, $(TE,TM,E,M)$ has a natural double vector bundle structure with sides $TM$ and $E$. Its core is naturally identified with $E\to M$ and the inclusion $E\hookrightarrow TE$ is given by $E_m\ni e_m\mapsto\left.\frac{d}{dt}\right|_{t=0}te_m\in
T^q_{0_m^E}E$. For $e\in\Gamma(E)$, the section $Te\in\Gamma_{TM}^l(TE)$ is linear over $e$. The core vector field $e^\dagger \in\Gamma_{TM}(TE)$ is defined by $e^\dagger(v_m)=T_m0^E(v_m)+_{E}\left.\frac{d}{dt}\right|_{t=0}te(m)$ for $m\in M$ and $v_m\in T_mM$, and the *vertical lift* $e^\uparrow\in \Gamma_E(TE)=\mathfrak{X}(E)$ is the (core) vector field defined by the flow $\mathbb{R}\times E\to E,(t,e'_m)\mapsto e'_m + te(m)$. Elements of $\Gamma_E^l(TE)=:\mathfrak{X}^l(E)$ are called *linear vector fields* and are equivalent to derivations $\delta\colon \Gamma(E)\to\Gamma(E)$ over some element in $\mathfrak{X}(M)$ [@Mackenzie05]. The linear vector field which corresponds to the derivation $\delta$ is written $X_\delta$.
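In a local trivialisation $E|_U\simeq U\times\mathbb{R}^k$ the distinction between core and linear vector fields becomes transparent. The following sketch uses illustrative coordinates $x^i$ on the base and linear fiber coordinates $e^a$, with one common sign convention (signs vary in the literature).

```latex
% Vertical (core) lift of a section e = e^a(x) b_a, for a local
% frame (b_a) of E:
\[
e^\uparrow \;=\; e^a(x)\,\frac{\partial}{\partial e^a}\,.
\]
% Linear vector field associated to a derivation delta of Gamma(E)
% over X = X^i(x) d/dx^i, written delta(b_b) = delta^a_b(x) b_a:
\[
X_\delta \;=\; X^i(x)\,\frac{\partial}{\partial x^i}
\;-\;\delta^a_b(x)\,e^b\,\frac{\partial}{\partial e^a}\,.
\]
% The vertical part of X_delta is fiberwise linear, which is exactly
% what makes X_delta a vector bundle morphism E -> TE over X.
```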
Linear splittings, horizontal lifts and duals
---------------------------------------------
A *linear splitting* of a double vector bundle $(D,V,F,M)$ with core $C$ is a double vector bundle embedding $\Sigma$ of the decomposed double vector bundle $V\times_M F$ into $D$ over the identities on $V$ and $F$. It is well-known that every double vector bundle admits a linear splitting, see [@GrRo09; @delCarpio-Marek15; @Pradines77] or [@HeJo18] for the general case. Moreover, a linear splitting is equivalent to a *decomposition* of $D$, i.e. to an isomorphism of double vector bundles $S:V\times_M F\times_M C\to D$ over the identity on $V, F$ and $C$. Given $\Sigma$, the decomposition is obtained by setting $S(v_m,f_m,c_m)=\Sigma(v_m,f_m) +_F (0_{f_m} +_V \overline{c_m})$, and conversely, given $S$, the splitting is defined by $\Sigma(v_m,f_m)=S(v_m,f_m,0_m^C)$.
A linear splitting of $D$, and consequently a decomposition, is also equivalent to a *horizontal lift*, i.e. a right splitting of the short exact sequence $$0\to\Gamma(V^*\otimes C)\to \Gamma_V^l(D)\to \Gamma(F)\to 0$$ of $C^\infty(M)$-modules. The correspondence is given by $\sigma_F(f)(v_m)=\Sigma(v_m,f(m))$ for $f\in\Gamma(F)$, $m\in M$ and $v_m\in V_m$. Note that all the previous constructions can be done similarly if one interchanges the roles of $V$ and $F$.
For the tangent bundle $TE$ of a vector bundle $E\to M$, a linear splitting is equivalent to a choice of $TM$-connection on $E$. Specifically, given a horizontal lift $\sigma\colon \mathfrak{X}(M)\to\mathfrak{X}^l(E)$, the corresponding connection $\nabla$ is defined by $\sigma(X) = X_{\nabla_X}$.
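Concretely, if $\Gamma^a_{ib}$ denote the Christoffel symbols of $\nabla$ in a local frame $(b_a)$ of $E$, $\nabla_{\partial/\partial x^i} b_b = \Gamma^a_{ib}\,b_a$ (a coordinate sketch; the sign depends on the chosen convention for $X_\delta$), then the horizontal lift reads:

```latex
\[
\sigma\!\left(\frac{\partial}{\partial x^i}\right)
 \;=\; \frac{\partial}{\partial x^i}
 \;-\; \Gamma^a_{ib}(x)\, e^b\, \frac{\partial}{\partial e^a}
 \;=\; X_{\nabla_{\partial/\partial x^i}}\,.
\]
% The image of sigma is the horizontal distribution of nabla inside
% TE, complementary to the vertical subbundle.
```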
Double vector bundles can be dualized in two ways, namely, as the dual of $D$ either over $V$ or over $F$. Precisely, from a double vector bundle $(D,V,F,M)$ with core $C$, one obtains the double vector bundles $$\begin{tikzcd}
D^*_V \arrow[d] \arrow[r] & C^* \arrow[d] \\
V \arrow[r] & M
\end{tikzcd}
\qquad\qquad
\begin{tikzcd}
D^*_F \arrow[d] \arrow[r] & F \arrow[d] \\
C^* \arrow[r] & M
\end{tikzcd}$$ with cores $F^*$ and $V^*$, respectively.
Given a linear splitting $\Sigma\colon V\times_M F\to D$, the dual splitting $\Sigma^*\colon V\times_M C^*\to D^*_V$ is defined by $$\left\langle \Sigma^*(v_m,\gamma_m),\Sigma(v_m,f_m) \right\rangle = 0 \qquad\text{and}\qquad \left\langle \Sigma^*(v_m,\gamma_m), c^\dagger(v_m) \right\rangle = \left\langle \gamma_m,c(m) \right\rangle,$$ for all $v_m\in V_m$, $f_m\in F_m$, $\gamma_m\in C^*_m$, $c\in\Gamma(C)$ and all $m\in M$.
VB-Lie $n$-algebroids and $(n+1)$-representations
-------------------------------------------------
Suppose now that $(\underline{D},V,{\underline{A}},M)$ is a double vector bundle together with graded vector bundle decompositions $\underline{D}=D_1[1]\oplus\ldots\oplus D_n[n]$ and ${\underline{A}}=A_1[1]\oplus\ldots\oplus A_n[n]$ which are compatible with the projection $\underline{D}\to{\underline{A}}$. This means that each of the individual squares $(D_i,V,A_i,M)$ also forms a double vector bundle. Schematically, this yields the following sequence of diagrams $$\begin{tikzcd}
& D_1[1] \arrow[ldd] \arrow[rr] \arrow[d,symbol=\oplus] & & A_1[1] \arrow[ldd, crossing over] \arrow[d,symbol=\oplus] \\
& D_2[2] \arrow[ld] \arrow[rr] \arrow[d,symbol=\oplus] & & A_2[2] \arrow[ld] \arrow[d,symbol=\oplus] \\
V \arrow[rr] & \vdots & M & \vdots \\
& D_n[n] \arrow[lu] \arrow[rr] \arrow[u,symbol=\oplus] & & A_n[n] \arrow[lu] \arrow[u,symbol=\oplus]
\end{tikzcd}$$ where all the “planes” are double vector bundles. Hence the core of $(\underline{D},V,{\underline{A}},M)$ is the graded vector bundle $\underline{C}=C_1[1]\oplus\ldots\oplus C_n[n]$, where $C_i$ is the core of $(D_i,V,A_i,M)$, for $i=1,\ldots,n$.
\[VB\_lien\] The quadruple $(\underline{D},V,{\underline{A}},M)$ is a *(split) VB-Lie $n$-algebroid* if
1. the graded vector bundle $\underline{D}\to V$ is endowed with a homological vector field ${\mathcal{Q}}_{\underline{D}}$,
2. the Lie $n$-algebroid structure of $\underline{D}\to V$ is *linear*, in the sense that
1. the anchor $\rho_D\colon D_1\to TV$ is a double vector bundle morphism,
2. the map $\partial_{D_i}$ fits into a morphism of double vector bundles $(\partial_{D_i},\operatorname{id}_V,\partial_{A_i},\operatorname{id}_M)$ between $(D_i,V,A_i,M)$ and $(D_{i+1},V,A_{i+1},M)$ for all $i$,
3. the multi-brackets of $\underline{D}$ satisfy the following relations:
1. the $i$-bracket of $i$ linear sections is a linear section;
2. the $i$-bracket of $i-1$ linear sections with a core section is a core section;
3. the $i$-bracket of $i-k$ linear sections with $k$ core sections, $i\geq k \geq 2$, is zero;
4. the $i$-bracket of $i$ core sections is zero.
<!-- -->
1. A VB-Lie $n$-algebroid structure on the double vector bundle $(\underline{D},V,{\underline{A}},M)$ defines a unique Lie $n$-algebroid structure on ${\underline{A}}\to M$ as follows: the anchor $\rho_D\colon D_1\to TV$ is linear over the anchor $\rho\colon A_1\to TM$, and if all $d_k\in\Gamma_V^l(\underline{D})$ cover $a_{k}\in\Gamma({\underline{A}})$ for $k=1,2,\ldots,i$, then $\llbracket d_1,\ldots,d_i
\rrbracket_{\underline{D}}\in\Gamma_V^l(\underline{D})$ covers $\llbracket a_1,\ldots,a_i \rrbracket_{{\underline{A}}}\in\Gamma({\underline{A}})$. Therefore, the graded vector bundles $\underline{D}\to V$ and ${\underline{A}}\to M$ are endowed with homological vector fields ${\mathcal{Q}}_{\underline{D}}$ and ${\mathcal{Q}}_{{\underline{A}}}$ for which the bundle projection $\underline{D}\to {\underline{A}}$ is a morphism of Lie $n$-algebroids over the projection $V\to M$.
2. A VB-Lie 1-algebroid as in the definition above is just a VB-algebroid.
The basic example of a split VB-Lie $n$-algebroid is obtained by applying the tangent functor to a split Lie $n$-algebroid ${\underline{A}}=A_1[1]\oplus\ldots\oplus A_n[n]\to M$. The double vector bundle is given by the diagram $$\begin{tikzcd}
\underline{TA} \arrow[d] \arrow[r] & {\underline{A}}\arrow[d] \\
TM \arrow[r] & M
\end{tikzcd}$$ where the Lie $n$-algebroid structure of $\underline{TA}=T{\underline{A}}=TA_1[1]\oplus\ldots\oplus TA_n[n]$ over the manifold $TM$ is defined by the relations
1. $\rho_{TA}=J_M\circ T\rho_A\colon TA_1\to TTM$, where $J_M\colon TTM\to TTM$ is the canonical involution, see e.g. [@Mackenzie05],
2. $\llbracket Ta_{k_1},\ldots,Ta_{k_i}\rrbracket = T\llbracket a_{k_1},\ldots,a_{k_i}\rrbracket$,
3. $\llbracket
Ta_{k_1},\ldots,Ta_{k_{i-1}},a_{k_i}^\dagger\rrbracket =
\llbracket
a_{k_1},\ldots,a_{k_{i-1}},a_{k_i}\rrbracket^\dagger$,
4. $\llbracket
Ta_{k_1},\ldots,Ta_{k_j},a_{k_{j+1}}^\dagger,\ldots,a_{k_i}^\dagger\rrbracket
= 0$ for all $1\le j\le i-2$,
5. $\llbracket a_{k_1}^\dagger,\ldots,a_{k_i}^\dagger\rrbracket = 0$,
for all sections $a_{k_j}\in\Gamma(A_{k_j})$ with pairwise distinct $k_j$ and all $i$.
Applying the above construction to a split Lie 2-algebroid $Q[1]\oplus B^*[2]\to M$ with structure $(\rho_Q,\ell,\nabla^*,\omega)$ yields the objects $(\rho_{TQ},T\ell,T\nabla^*,T\omega)$ of the split Lie 2-algebroid structure of $TQ[1]\oplus TB^*[2]\to TM$ as follows: the complex $TB^*\to TQ\to TTM$ consists of the anchor of $TQ$ given by $\rho_{TQ}=J_M\circ T\rho_Q$, and the vector bundle map $T\ell\colon TB^*\to TQ$. The bracket of $TQ$ is defined by the relations $$[Tq_1,Tq_2]_{TQ} = T[q_1,q_2]_Q,\qquad
[Tq_1,q_2^\dagger]_{TQ} = [q_1,q_2]_Q^\dagger,\qquad
[q_1^\dagger,q_2^\dagger]_{TQ} = 0,$$ for $q_1,q_2\in\Gamma(Q)$. The $TQ$-connection $T\nabla^*\colon
\Gamma_{TM}(TQ)\times\Gamma_{TM}(TB^*)\to\Gamma_{TM}(TB^*)$ is defined by $$(T\nabla^*)_{Tq}(T\beta) = T(\nabla^*_q\beta),\qquad
(T\nabla^*)_{Tq}(\beta^\dagger) = (\nabla^*_q\beta)^\dagger = (T\nabla^*)_{q^\dagger}\beta,\qquad
(T\nabla^*)_{q^\dagger}(\beta^\dagger) = 0,$$ for $q\in\Gamma(Q)$ and $\beta\in\Gamma(B^*)$. Finally, the 3-form $T\omega\in\Omega^3(TQ,TB^*)$ is defined by $$(T\omega)(Tq_1,Tq_2,Tq_3) = T(\omega(q_1,q_2,q_3)),\qquad
(T\omega)(Tq_1,Tq_2,q_3^\dagger) = \omega(q_1,q_2,q_3)^\dagger,$$ $$(T\omega)(q_1,q_2^\dagger,q_3^\dagger) = 0 =T\omega(q_1^\dagger,q_2^\dagger,q_3^\dagger),$$ for $q_1,q_2,q_3\in\Gamma(Q)$.
As shown in [@GrMe10], an interesting fact about the tangent prolongation of a Lie algebroid is that it encodes its adjoint representation. The same holds for a split Lie 2-algebroid $Q[1]\oplus B^*[2]$, as the next example shows.
Choose two $TM$-connections on $Q$ and $B^*$, both denoted by $\nabla$. These choices induce the horizontal lifts $\Gamma(Q)\to\Gamma_{TM}^l(TQ)$ and $\Gamma(B^*)\to\Gamma_{TM}^l(TB^*)$, both denoted by $h$. More precisely, given a section $q\in\Gamma(Q)$, its lift is defined as $h(q) = Tq - (\nabla_{.}q)^\wedge$. A similar formula holds for $h(\beta)$ as well. Then an easy computation yields the following:
1. $\rho_{TQ}(q^\dagger) = \rho(q)^\uparrow$ and $(T\ell)(\beta^\dagger) = \ell(\beta)^\uparrow$
2. $\rho_{TQ}(h(q)) = X_{\nabla_q^{\text{bas}}}$
3. $(T\ell)(h(\beta)) = h(\ell(\beta)) + (\nabla_.(\ell(\beta)) - \ell(\nabla_.\beta))^\wedge$
4. $[h(q_1),h(q_2)]_{TQ} = h[q_1,q_2]_Q - R_\nabla^{\text{bas}}(q_1,q_2)^\wedge$
5. $[h(q_1),q_2^\dagger]_{TQ} = (\nabla_{q_1}^{\text{bas}}q_2)^\dagger$
6. $(T\nabla^*)_{h(q)}(\beta^\dagger) = (\nabla_q^*\beta)^\dagger$
7. $(T\nabla^*)_{q^\dagger}(h(\beta)) = (\nabla_q^*\beta - \nabla_{\rho(q)}\beta)^\dagger$
8. $(T\nabla^*)_{h(q)}(h(\beta)) =
h(\nabla^*_q\beta) + \left(\nabla_{\nabla_\cdot q}^*\beta -
\nabla_{\rho(\nabla_\cdot q)}\beta + \nabla^*_q\nabla_\cdot\beta -
\nabla_\cdot\nabla_q^*\beta -
\nabla_{[\rho(q),\cdot]}\beta\right)^\wedge$
9. $(T\omega)(h(q_1),h(q_2),h(q_3)) = h(\omega(q_1,q_2,q_3)) + ((\nabla_\cdot\omega)(q_1,q_2,q_3))^\wedge$
10. $(T\omega)(h(q_1),h(q_2),q_3^\dagger) = (\omega(q_1,q_2,q_3))^\dagger$.
In fact, the last example is a special case of a correspondence between VB-Lie $n$-algebroid structures on a decomposed graded double vector bundle $(\underline{D},V,{\underline{A}},M)$ and $(n+1)$-representations of ${\mathcal{M}}={\underline{A}}$ on the complex ${\underline{E}}=V[0] \oplus C_1[1] \oplus \ldots \oplus C_n[n]$. In the general case, it is easier to give the correspondence in terms of the homological vector field on $\underline{D}$ and the dual representation on ${\underline{E}}^*=C_n^*[-n]\oplus\ldots\oplus C_1^*[-1] \oplus V^*[0]$.
Suppose that $(\underline{D},V,{\underline{A}},M)$ is a VB-Lie $n$-algebroid with homological vector fields ${\mathcal{Q}}_{\underline{D}}$ and ${\mathcal{Q}}_{{\underline{A}}}$, and choose a decomposition for each double vector bundle $(D_i,V,A_i,M)$[^6], and consequently for $(\underline{D},V,{\underline{A}},M)$. Consider the dual $\underline{D}_V^*$ and recall that the spaces $\Gamma_V(D_i^*)$ are generated as $C^\infty(V)$-modules by core and linear sections. For the latter, use the identification $\Gamma_V^l(D_i^*) = \Gamma(A_i^*\otimes
V^*)\oplus\Gamma(C_i^*)$ induced by the decomposition. Accordingly, the element $\alpha\in\Gamma(A_i^*)$ is identified with the core section $\pi_{{\underline{A}}}^{\star}(\alpha)\in\Gamma_V^c(\underline{D}^*)$.
For all $\psi\in\Gamma(V^*)$, the 1-form ${\mathrm{d}}\ell_\psi$ is a linear section of $T^*V\to V$ over $\psi$ and the anchor $\rho_{D_1}\colon D_1\to TV$ is a morphism of double vector bundles. This implies that the degree 1 function ${\mathcal{Q}}_{\underline{D}}(\ell_\psi)=\rho_{D_1}^*{\mathrm{d}}\ell_\psi$ is a linear section of $\Gamma_V(\underline{D}^*)$ and thus $${\mathcal{Q}}_{\underline{D}}(\ell_\psi)\in\Gamma_V^l(D_1^*) = \Gamma(A_1^*\otimes V^*)\oplus\Gamma(C_1^*).$$ Moreover, due to the decomposition, $D_i=q_V^!(A_i\oplus C_i)$ as vector bundles over $V$ for all $i=1,\ldots,n$. Given $\gamma\in\Gamma(C_i^*)$, the function ${\mathcal{Q}}_{\underline{D}}(\gamma)$ lies in $\Gamma(\underline{S}^{i+1}\underline{D}_V^*)$, where $\underline{D}_V^*=q_V^!(A_1^*\oplus C_1^*)\oplus\ldots\oplus
q_V^!(A_n^*\oplus C_n^*)$. A direct computation shows that the components of ${\mathcal{Q}}_{\underline{D}}(\gamma)$ which lie in spaces with two or more sections of the form $\Gamma(q_V^!C_i^*)$ and $\Gamma(q_V^!C_j^*)$ vanish due to the bracket conditions of a VB-Lie $n$-algebroid. Therefore, define the representation ${\mathcal{D}}^*$ of ${\underline{A}}$ on the dual complex ${\underline{E}}^*$ by the equations
$${\mathcal{Q}}_{\underline{D}}(\ell_\psi) = {\mathcal{D}}^*(\psi) \qquad\text{and}\qquad {\mathcal{Q}}_{\underline{D}}(\gamma) = {\mathcal{D}}^*(\gamma),$$
for all $\psi\in\Gamma(V^*)$ and all $\gamma\in\Gamma(C_i^*)$.
Conversely, given a representation ${\mathcal{D}}^*$ of ${\underline{A}}$ on ${\underline{E}}^*$, the above equations together with
$${\mathcal{Q}}_{\underline{D}}(q_V^*f) = \pi_{{\underline{A}}}^\star({\mathcal{Q}}_{{\underline{A}}}(f)) \qquad\text{and}\qquad {\mathcal{Q}}_{\underline{D}}(\pi_{{\underline{A}}}^\star(\alpha)) = \pi_{{\underline{A}}}^\star({\mathcal{Q}}_{{\underline{A}}}(\alpha))$$
for all $f\in C^\infty(M)$ and $\alpha\in\Gamma({\underline{A}}^*)$, define a VB-Lie $n$-algebroid structure on the double vector bundle $(\underline{D},V,{\underline{A}},M)$. This yields the following theorem.
Let $(\underline{D},V,{\underline{A}},M)$ be a decomposed graded double vector bundle as above with core $\underline{C}$. There is a 1-1 correspondence between VB-Lie $n$-algebroid structures on $(\underline{D},V,{\underline{A}},M)$ and $(n+1)$-representations up to homotopy of ${\underline{A}}$ on the complex $V[0] \oplus C_1[1] \oplus \ldots \oplus C_n[n]$.
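For $n=1$ the theorem specialises to the correspondence of [@GrMe10]: a decomposed VB-algebroid $(D,V,A,M)$ with core $C$ is equivalent to a $2$-representation of $A$ on $V[0]\oplus C[1]$. The following sketch records how the data match up under a fixed horizontal lift $h$, with the same conventions as in the tangent prolongation example earlier in this section.

```latex
% 2-representation (partial, nabla^V, nabla^C, R) of A on V[0] + C[1],
% read off from the VB-algebroid structure of D over V:
\begin{align*}
\rho_D(c^\dagger) &= (\partial c)^\uparrow,
  && \partial\colon C\to V \text{ (core-anchor)},\\
\rho_D(h(a)) &= X_{\nabla^V_a},
  && \nabla^V \text{ an $A$-connection on } V,\\
[h(a), c^\dagger]_D &= (\nabla^C_a c)^\dagger,
  && \nabla^C \text{ an $A$-connection on } C,\\
[h(a_1), h(a_2)]_D &= h[a_1,a_2] - R(a_1,a_2)^\wedge,
  && R\in\Omega^2(A,\operatorname{Hom}(V,C)),
\end{align*}
% for a, a_1, a_2 in Gamma(A) and c in Gamma(C).
```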
Constructions in terms of splittings {#applications}
====================================
This section presents, in terms of splittings, two applications of the adjoint and coadjoint representations defined before. First, the Weil algebra of a split Lie 2-algebroid is described explicitly, together with its structure differentials, in terms of vector bundles and connections, similarly to [@ArCr12]. Second, the map between the coadjoint and the adjoint representations of a Poisson Lie $n$-algebroid is examined in detail for degrees $n\leq2$.
The Weil algebra of a split Lie $n$-algebroid
---------------------------------------------
Suppose first that ${\mathcal{M}}= Q[1]\oplus B^*[2]$ is a split Lie 2-algebroid and consider two $TM$-connections on the vector bundles $Q$ and $B^*$, both denoted by $\nabla$. Recall from Section \[adjoint\_module\_adjoint\_representation\_isomorphism\] the (non-canonical) isomorphism of DG ${\mathcal{M}}$-modules $$\mathfrak{X}({\mathcal{M}})\cong{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(TM[0]\oplus Q[1]\oplus B^*[2]).$$ This implies that $$\Omega^1({\mathcal{M}})\cong{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(B[-2]\oplus Q^*[-1]\oplus T^*M[0])$$ as DG ${\mathcal{M}}$-modules, and thus the generators of the Weil algebra can be identified with $$\underset{\text{($t$,0)}}{\underbrace{{\mathcal{C}^\infty}({\mathcal{M}})^t}},
\underset{\text{(0,$u$)}}{\underbrace{\Gamma(\wedge^uT^*M)}},
\underset{\text{$(\upsilon,\upsilon)$}}{\underbrace{\Gamma(S^\upsilon Q^*)}},
\underset{\text{$(2w,w)$}}{\underbrace{\Gamma(\wedge^w B)}}.$$ Using also that ${\mathcal{C}^\infty}({\mathcal{M}})^t=\bigoplus_{t=r+2s} \Gamma(\wedge^rQ^*)\otimes\Gamma(S^s
B)$, the space of $(p,q)$-forms is decomposed as $$\begin{aligned}
W^{p,q}({\mathcal{M}},\nabla) = & \bigoplus_{\substack{p=t+v+2w \\ q=u+w+v}} {\mathcal{C}^\infty}({\mathcal{M}})^t\otimes
\Gamma\left( \wedge^uT^*M\otimes S^vQ^*\otimes \wedge^wB \right) \\
= & \bigoplus_{\substack{p=r+2s+v+2w \\ q=u+w+v}} \Gamma\left( \wedge^uT^*M\otimes
\wedge^rQ^*\otimes S^vQ^*\otimes \wedge^wB\otimes S^sB \right).\end{aligned}$$ Therefore, after a choice of splitting and $TM$-connections $\nabla$ on $Q$ and $B^*$, the total space of the Weil algebra of ${\mathcal{M}}$ can be written as $$W({\mathcal{M}},\nabla) = \bigoplus_{r,s,u,v,w} \Gamma\left(
\wedge^uT^*M\otimes \wedge^rQ^*\otimes S^vQ^*\otimes
\wedge^wB\otimes S^sB \right).$$
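As a quick sanity check on the bidegree bookkeeping, unwinding the constraints $p=r+2s+v+2w$ and $q=u+w+v$ in low bidegrees gives the following (a sketch; only the listed solutions of the constraints occur):

```latex
\begin{align*}
W^{0,0}({\mathcal{M}},\nabla) &= C^\infty(M),\\
W^{1,0}({\mathcal{M}},\nabla) &= \Gamma(Q^*),
  && (r=1)\\
W^{0,1}({\mathcal{M}},\nabla) &= \Omega^1(M),
  && (u=1)\\
W^{1,1}({\mathcal{M}},\nabla) &= \Gamma(T^*M\otimes Q^*)\oplus\Gamma(Q^*),
  && (r=u=1,\ \text{resp. } v=1).
\end{align*}
```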
The next step is to express the differentials ${{{\pounds}}_{{\mathcal{Q}}}}$ and ${\mathbf{d}}$ on $W({\mathcal{M}},\nabla)$ in terms of the two $TM$-connections $\nabla$. For the horizontal differential, recall that by definition the $q$-th row of the double complex $W({\mathcal{M}},\nabla)$ equals the space of $q$-forms $\Omega^q({\mathcal{M}})$ on ${\mathcal{M}}$ with differential given by the Lie derivative ${{{\pounds}}_{{\mathcal{Q}}}}$. Due to the identification of DG ${\mathcal{M}}$-modules $$\Omega^q({\mathcal{M}})=\Omega^1({\mathcal{M}})\wedge\ldots\wedge\Omega^1({\mathcal{M}})
={\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(\operatorname{ad}_\nabla^*\wedge\ldots\wedge\operatorname{ad}_\nabla^*)$$ ($q$-times) and the Leibniz identity for ${{{\pounds}}_{{\mathcal{Q}}}}$, it follows that the $q$-th row of $W({\mathcal{M}},\nabla)$ becomes the $q$-symmetric power of the coadjoint representation $\underline{S}^q(\operatorname{ad}_\nabla^*)$ and ${{{\pounds}}_{{\mathcal{Q}}}}={\mathcal{D}}_{\underline{S}^q(\operatorname{ad}_\nabla^*)}$.
The vertical differential ${\mathbf{d}}$ is built from two 2-representations of the tangent Lie algebroid $TM$, namely the dualization of the $TM$-representations on the graded vector bundles ${\underline{E}}_{Q}=Q[0]\oplus Q[-1]$ and ${\underline{E}}_{B^*}=B^*[0]\oplus B^*[-1]$ whose differentials are given by the chosen $TM$-connections $(-\operatorname{id}_Q,\nabla,-R_\nabla)$ and $(\operatorname{id}_{B^*},\nabla,R_\nabla)$, respectively. Indeed, suppose first that $\tau\in\Gamma(Q^*)$ and $b\in\Gamma(B)$ are functions on ${\mathcal{M}}$, i.e. $0$-forms. Then from Remark \[Iso\_coad\_mod\_coad\_rep\], it follows that ${\mathbf{d}}$ acts via $${\mathbf{d}}\tau = -\tau + {\mathrm{d}}_{\nabla^*}\tau\qquad \text{and}\qquad {\mathbf{d}}b
= b + {\mathrm{d}}_{\nabla^*}b.$$ If now $\tau\in\Gamma(Q^*),b\in\Gamma(B)$ are 1-forms on ${\mathcal{M}}$, then $${\mathbf{d}}\tau=-{\mathbf{d}}(-\tau+{\mathrm{d}}_{\nabla^*}\tau-{\mathrm{d}}_{\nabla^*}\tau)
=-{\mathbf{d}}^2\tau+{\mathbf{d}}({\mathrm{d}}_{\nabla^*}\tau)={\mathrm{d}}_{\nabla^*}\tau+{\mathrm{d}}_{\nabla^*}^2\tau,$$ $${\mathbf{d}}b={\mathbf{d}}(b+{\mathrm{d}}_{\nabla^*}b-{\mathrm{d}}_{\nabla^*}b)={\mathbf{d}}^2b-{\mathbf{d}}({\mathrm{d}}_{\nabla^*}b)={\mathrm{d}}_{\nabla^*}b-{\mathrm{d}}_{\nabla^*}^2b.$$
Note that if $B^*=0$, i.e. ${\mathcal{M}}$ is an ordinary Lie algebroid $A\to M$, the above construction recovers (up to isomorphism) the connection version of the Weil algebra $W(A,\nabla)$ from [@ArCr11; @ArCr12; @Mehta09].
In the general case of a split Lie $n$-algebroid ${\mathcal{M}}=A_1[1]\oplus\ldots\oplus A_n[n]$ with a choice of $TM$-connections on all the bundles $A_i$, one may apply the same procedure as above to obtain the (non-canonical) DG ${\mathcal{M}}$-module isomorphisms $$\mathfrak{X}({\mathcal{M}})\cong{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(TM[0]\oplus A_1[1]\oplus\ldots\oplus A_n[n])$$ $$\Omega^1({\mathcal{M}})\cong{\mathcal{C}^\infty}({\mathcal{M}})\otimes\Gamma(A_n^*[-n]\oplus\ldots\oplus A_1^*[-1]\oplus T^*M[0]),$$ and hence the identification of the generators of the Weil algebra with $$\underset{\text{($t$,0)}}{\underbrace{{\mathcal{C}^\infty}({\mathcal{M}})^t}},
\underset{\text{(0,$u$)}}{\underbrace{\Gamma(\wedge^{u}T^*M)}},
\underset{\text{$(\upsilon_1,\upsilon_1)$}}{\underbrace{\Gamma(S^{\upsilon_1} A_1^*)}},
\underset{\text{$(2\upsilon_2,\upsilon_2)$}}{\underbrace{\Gamma(\wedge^{\upsilon_2} A_2^*)}},\ldots,
\underset{\text{$(n\upsilon_n,\upsilon_n)$}}{\underbrace{\Gamma(\wedge^{\upsilon_n} A_n^*)}}.$$ This then yields $$\begin{aligned}
W^{p,q}({\mathcal{M}},\nabla) = & \bigoplus_{\substack{p=t+v_1+2v_2+\ldots \\ q=u+v_1+v_2+\ldots}} {\mathcal{C}^\infty}({\mathcal{M}})^t\otimes\Gamma\left( \wedge^uT^*M\otimes S^{v_1}A_1^*\otimes \wedge^{v_2}A_2^*\otimes\ldots \right) \\
= & \bigoplus_{\substack{p=r_1+v_1+2r_2+2v_2+\ldots \\ q=u+v_1+v_2+\ldots}} \Gamma\left( \wedge^uT^*M\otimes \wedge^{r_1}A_1^*\otimes S^{v_1}A_1^*\otimes S^{r_2}A_2^*\otimes \wedge^{v_2}A_2^*\otimes\ldots \right).\end{aligned}$$ Similar considerations as before imply that the $q$-th row of $W({\mathcal{M}},\nabla)$ is given by $\underline{S}^q(\operatorname{ad}_\nabla^*)$ with ${{{\pounds}}_{{\mathcal{Q}}}}={\mathcal{D}}_{\underline{S}^q(\operatorname{ad}_\nabla^*)}$, and that ${\mathbf{d}}$ is built again by the dualization of the 2-representations of $TM$ on the graded vector bundles $\underline{E}_{A_i}=A_i[0]\oplus A_i[-1]$, for $i=1,\ldots,n$, whose differentials are given by $((-1)^i\operatorname{id}_{A_i},\nabla,(-1)^iR_{\nabla})$.
Poisson Lie algebroids of low degree {#morphism_of_ad*_ad_Poisson012}
------------------------------------
This section describes in detail the map $\sharp\colon\operatorname{ad}_\nabla^*\to\operatorname{ad}_\nabla$ for the cases of Poisson Lie $n$-algebroids for $n=0,1,2$. First, consider a Poisson Lie 0-algebroid, i.e. a usual Poisson manifold $(M,\{\cdot\,,\cdot\})$. Then the coadjoint and adjoint representations are $T^*M[0]$ and $TM[0]$, respectively, and the map simply becomes $$\sharp\colon T^*M[0] \to TM[0].$$
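In coordinates, $\sharp$ is simply contraction of a covector with the Poisson bivector $\pi$. A minimal numerical sketch of this map (our own toy example, not taken from the text: the canonical symplectic Poisson structure on $\mathbb{R}^2$ with coordinates $(q,p)$, for which $\sharp({\mathrm{d}}H)$ is the Hamiltonian vector field of $H$):

```python
# sharp: T*M -> TM of a Poisson manifold sends a covector alpha to
# pi(alpha, .), where pi is the Poisson bivector.  Toy example (ours):
# the canonical symplectic Poisson structure on R^2 with coordinates (q, p).

PI = ((0.0, 1.0),
      (-1.0, 0.0))  # matrix of the bivector pi in the coordinates (q, p)

def sharp(alpha):
    """Contract the covector alpha = (a_q, a_p) with the bivector PI."""
    return tuple(sum(PI[i][j] * alpha[j] for j in range(2)) for i in range(2))

# For H(q, p) = (q**2 + p**2) / 2 we have dH = (q, p), and sharp(dH)
# is the harmonic-oscillator Hamiltonian vector field (p, -q):
print(sharp((3.0, 4.0)))  # (4.0, -3.0)
```

The sign conventions for $\pi$ are our choice; with them, $\sharp({\mathrm{d}}H)$ reproduces Hamilton's equations $\dot q = \partial H/\partial p$, $\dot p = -\partial H/\partial q$.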
Consider a Lie algebroid $A\to M$ with anchor $\rho\colon A\to TM$ and a linear Poisson structure $\{\cdot\,,\cdot\}$, i.e. a Lie algebroid structure on the dual $A^*\to M$. It is easy to see that this means that the \[1\]-manifold $A[1]$ has a Poisson structure of degree $-1$. This Poisson structure is the Schouten bracket defined on $\Omega^\bullet(A)$ by the Lie algebroid bracket on $A^*$. Then it is immediate that $(A[1],{\mathrm{d}}_A,\{\cdot\,,\cdot\})$ is a Poisson Lie $1$-algebroid if and only if $(A,A^*)$ is a Lie bialgebroid. The latter is equivalent to $(A,\{\cdot\,,\cdot\})$ being a Poisson Lie algebroid [@MaXu00].
Let $\rho'\colon A^*\to TM$, $\alpha\mapsto \{\alpha,\cdot\}$ be the anchor of $A^*$. After a choice of a $TM$-connection $\nabla$ on the vector bundle $A$, the map $\sharp\colon \operatorname{ad}_\nabla^*\to\operatorname{ad}_\nabla$ becomes the (-1)-chain map $$\begin{tikzcd}
T^*M[0] \arrow[r, "\rho^*"] \arrow[d, "-\rho'^*"] & A^*[-1] \arrow[d, "\rho'"] \\
A[1] \arrow[r, "\rho"] & TM [0]
\end{tikzcd}$$ together with $\sharp_1(a)\beta = \nabla_{\rho'(\beta)}a -
(\nabla^*)^{\text{bas},*}_\beta a\in\Gamma(A)$, for all $\beta\in\Gamma(A^*)$ and $a\in\Gamma(A)$. By Theorem \[thm\_poisson\], $\sharp$ is a morphism of $2$-representations if and only if $(A[1],{\mathrm{d}}_A,\{\cdot\,,\cdot\})$ is a Poisson Lie $1$-algebroid. Hence, $\sharp$ is a morphism of $2$-representations if and only if $(A,A^*)$ is a Lie bialgebroid. Similarly, [@GrJoMaMe18] shows that $\operatorname{ad}_\nabla^*$ and $\operatorname{ad}_\nabla$ form a *matched pair* if and only if $(A,A^*)$ is a Lie bialgebroid.
Note that $(A,\{\cdot\,,\cdot\})$ is a Poisson Lie algebroid if the induced vector bundle morphism $\sharp\colon T^* A\to TA$ over $A$ is a VB-algebroid morphism over $\rho'\colon A^*\to TM$ [@MaXu00]. Then the fact that $\sharp\colon \operatorname{ad}_\nabla^*\to\operatorname{ad}_\nabla$ is a morphism of $2$-representations follows immediately [@DrJoOr15], since $\operatorname{ad}_\nabla^*$ and $\operatorname{ad}_\nabla$ are equivalent to decompositions of the VB-algebroids $(T^*A\to A^*, A\to M)$ and $(TA\to TM, A\to M)$, respectively.
Now consider the case of 2-algebroids. First recall that a symplectic Lie 2-algebroid over a point, that is, a Courant algebroid over a point, is a usual Lie algebra $(\mathfrak{g},[\cdot\,,\cdot])$ together with a non-degenerate pairing $\langle \cdot\,,\cdot \rangle\colon
\mathfrak{g}\times\mathfrak{g}\to\mathbb{R}$, such that $$\langle [x,y],z \rangle + \langle y,[x,z] \rangle = 0\ \text{for
all}\ x,y,z\in\mathfrak{g}.$$ Using the adjoint and coadjoint representations $\operatorname{ad}\colon\mathfrak g\to \operatorname{End}(\mathfrak g)$, $x\mapsto [x,\cdot]$, and $\operatorname{ad}^*\colon \mathfrak g\to\operatorname{End}(\mathfrak g^*)$, $x\mapsto -\operatorname{ad}(x)^*$, and denoting the canonical linear isomorphism induced by the pairing by $P\colon \mathfrak{g}\to\mathfrak{g^*}$, the equation above reads $$P(\operatorname{ad}(x)y) = \operatorname{ad}^*(x)P(y)\ \text{for all}\ x,y\in\mathfrak{g}.$$ In other words, this condition is precisely what is needed to turn the vector space isomorphism $P$ into an isomorphism of Lie algebra representations between $\operatorname{ad}$ and $\operatorname{ad}^*$. In fact, the map $\sharp\colon \operatorname{ad}^*\to\operatorname{ad}$ for Poisson Lie 2-algebroids is a direct generalisation of this construction.
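The smallest nontrivial instance of this situation is, in our choice of illustration, $\mathfrak{so}(3)$ modelled as $\mathbb{R}^3$ with the cross product as Lie bracket and the Euclidean dot product as pairing; the invariance identity then reduces to the antisymmetry of the determinant and can be checked directly:

```python
# A Courant algebroid over a point is a quadratic Lie algebra.  A standard
# example is so(3), modelled as (R^3, cross product) with the Euclidean
# dot product as pairing: <[x,y],z> + <y,[x,z]> = 0 for all x, y, z,
# since <x cross y, z> = det(x, y, z) is alternating in its arguments.

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def invariance_defect(x, y, z):
    """Left-hand side of the invariance identity; should vanish."""
    return dot(cross(x, y), z) + dot(y, cross(x, z))

print(invariance_defect((1, 2, 3), (-1, 0, 4), (2, 5, -3)))  # 0
```

With integer inputs the defect is exactly zero, reflecting that the identity holds as a polynomial identity, not merely numerically.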
Let $B\to M$ be a usual Lie algebroid with a 2-term representation $(\nabla^Q,\nabla^{Q^*},R)$ on a complex $\partial_Q\colon Q^*\to
Q$. The representation is called *self dual* [@Jotz18b] if it equals its dual, i.e. $\partial_Q=\partial_Q^*$, the connections $\nabla^Q$ and $\nabla^{Q^*}$ are dual to each other, and $R^*=-R\in\Omega^2(B,\operatorname{Hom}(Q,Q^*))$, i.e. $R\in\Omega^2(B,\wedge^2Q^*)$. [@Jotz18b] further shows that Poisson brackets $\{\cdot\,,\cdot\}$ on a split Lie 2-algebroid $Q[1]\oplus B^*[2]$ correspond to self dual 2-representations of $B$ on $Q^*[1]\oplus Q[0]$ as follows: the bundle map $\partial_Q\colon
Q^*\to Q$ is $\tau\mapsto\{ \tau,\cdot \}$, the anchor $\rho_B\colon B\to TM$ is $b\mapsto\{ b,\cdot \}$, the $B$-connection on $Q^*$ is given by $\nabla_b\tau=\{b,\tau\}$, and the 2-form $R$ and the Lie bracket of $B$ are defined by $\{b_1,b_2\} = [b_1,b_2] - R(b_1,b_2)\in\Gamma(B)\oplus\Omega^2(Q)$.
Fix now a Poisson Lie 2-algebroid $({\mathcal{M}},{\mathcal{Q}},\{\cdot,\cdot\})$ together with a choice of a splitting $Q[1]\oplus B^*[2]$ for ${\mathcal{M}}$, a pair of $TM$-connections on $B^*$ and $Q$, and consider the representations $\operatorname{ad}_\nabla$ and $\operatorname{ad}_\nabla^*$. Then the map $\sharp\colon \operatorname{ad}^*\to\operatorname{ad}$ consists of the (-2)-chain map $$\begin{tikzcd}
T^*M[0] \arrow[r, "-\rho_Q^*"] \arrow[d, "-\rho_B^*"] & Q^*[-1] \arrow[r, "\partial_B"] \arrow[d, "\partial_Q"]
& B[-2] \arrow[d, "\rho_B"] \\
B^*[2] \arrow[r, "\partial_B^*"] & Q[1] \arrow[r, "\rho_Q"] &
TM[0]
\end{tikzcd}$$ and the elements $$\sharp_1(q)\tau = \langle \tau,\nabla_{\rho_B(\cdot)}q - \nabla_\cdot q \rangle\in\Gamma(B^*)$$ $$\sharp_1(q)b = \nabla_{\rho_B(b)}q - \nabla_b q \in\Gamma(Q)$$ for $q\in\Gamma(Q),\tau\in\Gamma(Q^*),b\in\Gamma(B)$, $$\sharp_2(q_1,q_2)b = \langle R(b,\cdot)q_1,q_2 \rangle\in\Gamma(B^*)$$ for $q_1,q_2\in\Gamma(Q),b\in\Gamma(B)$, where $R$ is the 2-form component of the self-dual 2-representation of $B$ induced by the Poisson structure, $$\sharp^b(\beta)b = \langle \beta,\nabla^*_{\rho_B(b)}(\cdot) -
\nabla^*_{\rho_B(\cdot)}b + [b,\cdot] \rangle \in\Gamma(B^*)$$ for $\beta\in\Gamma(B^*),b\in\Gamma(B)$.
Suppose now that the split Lie 2-algebroid is symplectic, i.e. that it is of the form $E[1]\oplus T^*M[2]$ for a Courant algebroid $E\to
M$. The only choice remaining from the construction in Example \[Split\_symplectic\_Lie\_2-algebroid\_example\] is that of a $TM$-connection on $TM$, and hence on the dual $T^*M$. The isomorphism $\sharp\colon \operatorname{ad}_\nabla^*\to\operatorname{ad}_\nabla$ consists of the (-2)-chain map $$\begin{tikzcd}
T^*M[0] \arrow[r, "-\rho^*"] \arrow[d, "-\operatorname{id}"] & E^*[-1] \arrow[r, "\rho"] \arrow[d, "P^{-1}"] & TM[-2] \arrow[d, "\operatorname{id}"] \\
T^*M[2] \arrow[r, "\rho^*"] & E[1] \arrow[r, "\rho"] & TM [0]
\end{tikzcd}$$ where $P\colon E\overset{\sim}{\to} E^*$ is the pairing, and the elements $\langle \sharp_2(e_1,e_2)X,Y \rangle = \langle
R_\nabla(X,Y)e_1,e_2 \rangle$ and $\langle \sharp^b(\alpha)X, Y \rangle = \langle
\alpha,T_\nabla(X,Y) \rangle$. Its inverse consists of the 2-chain map $$\begin{tikzcd}
T^*M[2] \arrow[r, "\rho^*"] \arrow[d, "-\operatorname{id}"] & E[1] \arrow[r, "\rho"] \arrow[d, "P"] & TM[0] \arrow[d, "\operatorname{id}"] \\
T^*M[0] \arrow[r, "-\rho^*"] & E^*[-1] \arrow[r, "\rho"] & TM [-2]
\end{tikzcd}$$ and again the elements $\langle \sharp^{-1}_2(e_1,e_2)X,Y \rangle = \langle
R_\nabla(X,Y)e_1,e_2 \rangle$ and $\langle (\sharp^{-1})^b(\alpha)X, Y \rangle = \langle
\alpha,T_\nabla(X,Y) \rangle$. In other words, $\sharp^2=\operatorname{id}$. If the connection on $TM$ is torsion-free, then the terms $\sharp^b$ and $(\sharp^{-1})^b$ vanish, as well. In particular, if the base manifold $M$ is just a point, then the bundles $TM$ and $T^*M$, and the elements $\sharp_2$ and $\sharp^{-1}_2$ are zero. Therefore, the map $\operatorname{ad}^*\to\operatorname{ad}$ reduces to the linear isomorphism of the pairing and agrees with the one above.
Arias Abad, C. and Crainic, M. (2011). The [W]{}eil algebra and the [V]{}an [E]{}st isomorphism. , 61(3):927–970.
Arias Abad, C. and Crainic, M. (2012). Representations up to homotopy of [L]{}ie algebroids. , 663:91–126.
Arias Abad, C., Crainic, M., and Dherin, B. (2011). Tensor products of representations up to homotopy. , 6(2):239–288.
Arias Abad, C. and Schätz, F. (2011). Deformations of [L]{}ie brackets and representations up to homotopy. , 22(1-2):27–54.
Arias Abad, C. and Schätz, F. (2013). The [$\textbf{A}_\infty$]{} de [R]{}ham theorem and integration of representations up to homotopy. , (16):3790–3855.
Bonavolont[à]{}, G. and Poncin, N. (2013). On the category of [L]{}ie [$n$]{}-algebroids. , 73:70–90.
Brahic, O. and Ortiz, C. (2019). Integration of [$2$]{}-term representations up to homotopy via [$2$]{}-functors. , 372(1):503–543.
Cabrera, A., Brahic, O., and Ortiz, C. (2018). Obstructions to the integrability of [${\mathcal {V B}}$]{}-algebroids. , 16(2):439–483.
Caseiro, R. and Laurent-Gengoux, C. (2019). Modular class of [L]{}ie infinity-algebras. .
Courant, T. J. and Weinstein, A. (1988). Beyond [P]{}oisson structures. In [*Action hamiltoniennes de groupes. [T]{}roisième théorème de [L]{}ie ([L]{}yon, 1986)*]{}, volume 27 of [*Travaux en Cours*]{}, pages 39–49. Hermann, Paris.
del Carpio-Marek, F. (2015). . PhD thesis, IMPA, available at [www.impa.br/wp-content/uploads/2017/05/Fernando\_Del\_Carpio.pdf](www.impa.br/wp-content/uploads/2017/05/Fernando_Del_Carpio.pdf), Rio de Janeiro.
Drummond, T., Jotz, M., and Ortiz, C. (2015). -algebroid morphisms and representations up to homotopy. , 40:332–357.
Grabowski, J. and Rotkiewicz, M. (2009). Higher vector bundles and multi-graded symplectic manifolds. , 59(9):1285–1305.
Gracia-Saz, A., Jotz Lean, M., Mackenzie, K. C. H., and Mehta, R. A. (2018). Double [L]{}ie algebroids and representations up to homotopy. , 13(2):287–319.
Gracia-Saz, A. and Mehta, R. A. (2010). , 223(4):1236–1275.
Gualtieri, M. (2003). . PhD thesis.
Gualtieri, M. (2007). Generalized complex geometry. .
Heuer, M. and Lean, M. J. (2018). Multiple vector bundles: cores, splittings and decompositions.
Hitchin, N. (2003). Generalized [C]{}alabi-[Y]{}au manifolds. , 54(3):281–308.
Jotz Lean, M. (2018a). Dorfman connections and [C]{}ourant algebroids. , 116:1–39.
Jotz Lean, M. (2018b). The geometrization of [$\mathbb N$]{}-manifolds of degree 2. , 133:113 – 140.
Jotz Lean, M. (2018c). On [LA]{}-[C]{}ourant algebroids and [P]{}oisson [L]{}ie $2$-algebroids. .
Jotz Lean, M. (2019). Lie 2-algebroids and matched pairs of 2-representations – a geometric approach. .
Jotz Lean, M. and Ortiz, C. (2014). Foliated groupoids and infinitesimal ideal systems. , 25(5):1019–1053.
Liu, Z.-J., Weinstein, A., and Xu, P. (1997). Manin triples for [L]{}ie bialgebroids. , 45(3):547–574.
Mackenzie, K. C. H. (2005). , volume 213 of [*London Mathematical Society Lecture Note Series*]{}. Cambridge University Press, Cambridge.
Mackenzie, K. C. H. and Xu, P. (1994). , 73(2):415–452.
Mackenzie, K. C. H. and Xu, P. (2000). , 39(3):445–467.
Mehta, R. (2006). Supergroupoids, double structures, and equivariant cohomology. .
Mehta, R. A. (2009). , 7(3):263–293.
Mehta, R. A. (2014). , 25(5):1122–1134.
Mehta, R. A. (2015). Modular classes of [L]{}ie groupoid representations up to homotopy. , 11:Paper 058, 10.
Papantonis, T. (in preparation). .
Pradines, J. (1977). , volume 29 of [*Esquisses Mathématiques \[Mathematical Sketches\]*]{}. Université d’Amiens U.E.R. de Mathématiques, Amiens.
Quillen, D. (1985). Superconnections and the [C]{}hern character. , 24(1):89–95.
Roytenberg, D. (2002). On the structure of graded symplectic supermanifolds and [C]{}ourant algebroids. In [*Quantization, [P]{}oisson brackets and beyond ([M]{}anchester, 2001)*]{}, volume 315 of [*Contemp. Math.*]{}, pages 169–185. Amer. Math. Soc., Providence, RI.
Ševera, P. (2005). Some title containing the words “homotopy” and “symplectic”, e.g. this one. In [*Travaux mathématiques. [F]{}asc. [XVI]{}*]{}, Trav. Math., XVI, pages 121–137. Univ. Luxemb., Luxembourg.
Sheng, Y. and Zhu, C. (2017). Higher extensions of [L]{}ie algebroids. , 19(3):1650034, 41.
Trentinaglia, G. and Zhu, C. (2016). Some remarks on representations up to homotopy. , 13(3):1650024, 15.
Va[ĭ]{}ntrob, A. Y. (1997). Lie algebroids and homological vector fields. , 52(2(314)):161–162.
[^1]: Note that here there is a sign difference in the notation with [@Mehta06] and [@Mehta09].
[^2]: Note that all the objects that appear in the following equations act via the generalised wedge products that were discussed before. For example, $\partial({\mathrm{d}}_\nabla e)$ or $\omega_2(\omega_2(e))$ mean $\partial\wedge{\mathrm{d}}_\nabla e$ and $\omega_2\wedge\omega_2(e)$, respectively. This is explained in detail in the Appendix of [@ArCr12].
[^3]: In the following equations, the map $\partial_B\colon\Omega^1(Q)\to\Gamma(B)$ extends to $\partial_B\colon \Omega^k(Q)\to\Omega^{k-1}(Q,B)$ by the rule $\partial_B(\tau_1\wedge\ldots\wedge\tau_k) =
\sum_{i=1}^k
(-1)^{i+1}\tau_1\wedge\ldots\wedge\hat{\tau_i}\wedge\ldots\wedge\tau_k\wedge\partial_B\tau_i$, for $\tau_i\in\Omega^1(Q)$.
[^4]: Some signs are chosen so that the map given in \[adjoint\_module\_adjoint\_representation\_isomorphism\] is an isomorphism for the differential of the adjoint module defined earlier.
[^5]: Note that the two pairs of $TM$-connections are identical.
[^6]: In the case of the tangent Lie $n$-algebroid, this corresponds to choosing the $TM$-connections on the vector bundles of the adjoint complex.
---
abstract: |
We obtain a structurally stable family of smooth ordinary differential equations exhibiting heteroclinic tangencies for a dense subset of parameters. We use this to find vector fields $C^2$-close to an element of the family exhibiting a tangency, for which the set of solutions with historic behaviour contains an open set. This provides an affirmative answer to Takens’ Last Problem *(F. Takens (2008) Nonlinearity, 21(3) T33–T36).* A solution with historic behaviour is one for which the time averages do not converge as time goes to infinity. Takens’ problem asks for dynamical systems where historic behaviour occurs persistently for initial conditions in a set with positive Lebesgue measure.
The family appears in the unfolding of a degenerate differential equation whose flow has an asymptotically stable heteroclinic cycle involving two-dimensional connections of non-trivial periodic solutions. We show that the degenerate problem also has historic behaviour, since for an open set of initial conditions starting near the cycle, the time averages approach the boundary of a polygon whose vertices depend on the centres of gravity of the periodic solutions and their Floquet multipliers.
We illustrate our results with an explicit example where historic behaviour arises $C^2$-close to an $\textbf{SO(2)}$-equivariant vector field.
author:
- |
Isabel S. Labouriau Alexandre A. P. Rodrigues\
Centro de Matemática da Universidade do Porto [^1]\
and Faculdade de Ciências, Universidade do Porto\
Rua do Campo Alegre, 687, 4169-007 Porto, Portugal\
[email protected] [email protected]
date:
title: |
On Takens’ Last Problem:\
tangencies and time averages near heteroclinic networks
---
**Keywords:** Heteroclinic cycle, Time averages, Historic behaviour, Heteroclinic tangencies, Newhouse phenomena.
**2010 — AMS Subject Classifications**
[Primary: 34C28; Secondary: 34C37, 37C29, 37D05, 37G35]{}
Introduction
============
Chaotic dynamics makes it difficult to give a geometric description of an attractor in many situations, when probabilistic and ergodic analysis becomes relevant. In a long record of a chaotic signal generated by a deterministic time evolution, for suitable initial conditions the expected time average exists — see [@Ruelle; @Sigmund]. However, there are cases where the time averages do not converge no matter how long we wait. This *historic behaviour* is associated with intermittent dynamics, which happens typically near heteroclinic networks.
The aim of this article is to explore the persistence of this behaviour for a deterministic class of systems involving robust heteroclinic cycles, leading to an answer to Takens’ Last Problem [@T]. More precisely, we study non-hyperbolic heteroclinic attractors such that the time averages of all solutions within their basin of attraction do not converge, and for which this holds persistently.
This is done by first studying a one-parameter family of vector fields having periodic solutions connected in a robust cycle. We show that under generic conditions there are parameter values for which the invariant manifolds of a pair of periodic solutions have a heteroclinic tangency. This implies the Newhouse property of existence of infinitely many sinks. Results by Kiriki and Soma [@KS] may then be used to provide an affirmative answer to the problem proposed by Takens in [@T].
Takens’ last problem
--------------------
Let $M$ be a compact three-dimensional manifold without boundary and consider a vector field $f: M \rightarrow TM$ defining a differential equation $$\label{general}
\dot{x}=f(x), \qquad x(0)=x_0\in M$$ and denote by $\phi(t,x_0)$, with $t \in {{\rm\bf R}}$, the associated flow with initial condition $x_0 \in M$. The following terminology has been introduced by Ruelle [@Ruelle] (see also Sigmund [@Sigmund]).
\[historicDef\] We say that the solution $\phi(t,x_0)$, $x_0 \in M$, of (\[general\]) has *historic behaviour* if there is a continuous function $H:M\rightarrow {{\rm\bf R}}$ such that the time average $$\label{historic1}
\frac{1}{T}\int_{0}^{T} H(\phi(t,x_0)) dt$$ fails to converge.
A solution $\phi(t,x_0)$, $x_0 \in M$, with historic behaviour retains information about its past. This happens, in particular, if there are at least two different sequences of times, say $(T_i)_{i \in {{\rm\bf N}}}$ and $(S_j)_{j \in {{\rm\bf N}}}$, such that the following limits exist and are different: $$\lim_{i \rightarrow +\infty}\frac{1}{T_i}\int_{0}^{T_i} H(\phi(t,x_0)) dt
\quad \neq \quad
\lim_{j \rightarrow +\infty}\frac{1}{S_j}\int_{0}^{S_j} H(\phi(t,x_0)) dt.$$
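A minimal numerical caricature of this mechanism (a synthetic orbit of our own making, not one of the flows studied below): let the observable take the values $0$ and $1$ on alternating blocks whose lengths double at each step, mimicking the geometrically growing residence times near the nodes of an attracting heteroclinic cycle. Along the two subsequences of block ends the time averages converge to the distinct limits $1/3$ and $2/3$, so the full time average fails to converge:

```python
def subsequence_averages(n_blocks):
    """Time averages of the observable at the ends of the 0-blocks and of
    the 1-blocks; block k carries the value k % 2 and lasts 2**k steps."""
    total, time = 0, 0
    avg_even, avg_odd = [], []
    for k in range(n_blocks):
        length = 2 ** k
        total += length * (k % 2)   # time spent at value 1 during block k
        time += length
        (avg_even if k % 2 == 0 else avg_odd).append(total / time)
    return avg_even, avg_odd

even, odd = subsequence_averages(25)
print(even[-1], odd[-1])  # approximately 1/3 and 2/3
```

The two limits can be computed by summing the geometric series of block lengths; the same bookkeeping, with residence times governed by eigenvalue ratios, underlies the estimates in Section \[Organizing\].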
The consideration of the limit behaviour of time averages with respect to a given measure has been studied since Sinai [@Sinai], Ruelle [@Ruelle76] and Bowen [@Bowen75]. Usually, historic behaviour is seen as an anomaly. Whether there is a justification for this belief is the content of Takens’ Last Problem [@KS; @Takens94; @T]: *are there persistent classes of smooth dynamical systems such that the set of initial conditions which give rise to orbits with historic behaviour has positive Lebesgue measure?* In ergodic terms, this problem is equivalent to finding a persistent class of systems admitting no physical measures [@Hofbauer; @Ruelle], since roughly speaking, these measures are those that give probabilistic information on the observable asymptotic behaviour of trajectories.
The class may become persistent if one considers differential equations in manifolds with boundary as in population dynamics [@Hofbauer; @HSig]. The same happens for equivariant or reversible differential equations [@Guckenheimer; @e; @Holmes; @1]. The question remained open for systems without such properties until recently, when Kiriki and Soma [@KS] proved that any Newhouse open set in the $C^r$-topology, $r\geq 2$, of two-dimensional diffeomorphisms is contained in the closure of the set of diffeomorphisms which have non-trivial wandering domains whose forward orbits have historic behaviour. As far as we know, the original problem, stated for flows, has remained open until now.
Non-generic historic behaviour
------------------------------
In this section, we present some non-generic examples that, however, occur generically in families of discrete dynamical systems depending on a small number of parameters. The first example has been given in Hofbauer and Keller [@HK], where it has been shown that the logistic family contains elements for which almost all orbits have historic behaviour. This example has codimension one in the space of $C^3$ endomorphisms of the interval; the $C^3$ regularity is due to the use of the Schwarzian derivative operator.
The second example is due to Bowen, who described a codimension two system of differential equations on the plane whose flow has a heteroclinic cycle consisting of a pair of saddle-equilibria connected by two trajectories. As noted by Takens [@Takens94; @T], apparently Bowen never published this result. We give an explicit example in \[subsecBowen\] below. The eigenvalues of the derivative of the vector field at the two saddles are such that the cycle attracts solutions that start inside it. In this case, every solution starting inside the cycle has historic behaviour. In ergodic terms, it is an example without SRB measures. Breaking the cycle by a small perturbation, the equation loses this property. This type of dynamics may become persistent for dynamical systems in manifolds with boundary or in the presence of symmetry. We use Bowen’s example here as a first step in the construction of a generic example. Other examples of high codimension with heteroclinic attractors where Lebesgue almost all trajectories fail to converge have been given by Gaunersdorfer [@Gaunersdorfer] and Sigmund [@Sigmund].
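For reference, the eigenvalue condition behind this attraction can be made explicit (this is standard, see e.g. [@Gaunersdorfer; @Takens94]; the notation $c_i, e_i$ below is ours): if the linearisation at the saddle $p_i$ has eigenvalues $-c_i<0<e_i$, $i=1,2$, then one full passage near the cycle transforms a small transverse distance $s$ approximately as $s\mapsto K\,s^{\delta}$ with $$\delta \;=\; \frac{c_1\,c_2}{e_1\,e_2},$$ so the cycle attracts the solutions starting inside it precisely when $\delta>1$. The passage times near the saddles then grow geometrically, which is the source of the divergent time averages.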
Ergodicity implies the convergence of time averages along almost all trajectories for all continuous observables [@KA]. For non-ergodic systems, time averages may not exist for almost all trajectories. In Karabacak and Ashwin [@KA Th 4.2], the authors characterise conditions on the observables that imply convergent time averages for almost all trajectories. This convergence is determined by the behaviour of the observable on the statistical attractors (subsets where trajectories spend almost all time). Details in [@KA §4].
General examples
----------------
The paradigmatic example of persistent historic behaviour has been suggested by Colli and Vargas in [@CV], where the authors presented a simple non-hyperbolic model with a wandering domain characterised by the existence of a two-dimensional diffeomorphism with a Smale horseshoe whose stable and unstable manifolds have persistent tangencies under arbitrarily small $C^2$ perturbations. The authors of [@CV] suggest that this would entail the existence of wandering domains with historic behaviour, in a robust way. This example has been carefully described in [@KS §2.1].
For diffeomorphisms, an answer has been given by Kiriki and Soma [@KS], where the authors used ideas suggested in [@CV] to find a non-trivial wandering domain (the interior of a specific rectangle) where the diffeomorphism is contracting. In a robust way, they obtain an open set of initial conditions for which the time averages do not converge. Basically, the authors linked two subjects: homoclinic tangencies studied by Newhouse, Palis and Takens, and non-empty wandering domains exhibiting historic behaviour. An overview of the proof is given in §2 of [@KS]. We refer readers unfamiliar with Newhouse regions to the book [@PT].
The results
-----------
The goal of this article is twofold. First, we extend the results by Takens [@Takens94] and by Gaunersdorfer [@Gaunersdorfer] to heteroclinic cycles involving periodic solutions with real Floquet multipliers. The first main result is Theorem \[Main1\], with precise hypotheses given in Section \[Hypotheses\]:
1[$^{\rm \bf \underline {st}}$]{} result:
: Consider an ordinary differential equation in ${{\rm\bf R}}^3$ having an attracting heteroclinic cycle involving periodic solutions with two-dimensional heteroclinic connections. Any neighbourhood of this cycle contains an open set of initial conditions, for which the time averages of the corresponding solutions accumulate on the boundary of a polygon, and thus, fail to converge. The open set is contained in the basin of attraction of the cycle and the observable is the projection on a component.
This situation has high codimension because each heteroclinic connection raises the codimension by one, but this class of systems is persistent in equivariant differential equations. The presence of symmetry creates flow-invariant fixed-point subspaces in which heteroclinic connections lie — see for instance the example constructed in [@Rodrigues §8]. Another example is constructed in Section \[secLifting\] below. The second main result, Theorem \[teorema tangency\], concerns tangencies:
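To see in miniature how averages can accumulate on the boundary of a polygon (a caricature under our own simplifying assumptions, with the periodic solutions replaced by three fixed points in the plane and residence times growing geometrically with ratio $\rho=2$), one can track the running space-time averages at the ends of the residence blocks; along the three subsequences of block ends they converge to three distinct points, the analogues of the vertices:

```python
# Caricature of the polygon phenomenon: a point visits the three vertices
# P[0], P[1], P[2] of a triangle, staying at vertex k % 3 for rho**k time
# units (residence times grow geometrically, as near an attracting cycle).
# The running averages then fail to converge: along each residue class of
# block ends they approach a different convex combination of the vertices.

P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
rho = 2.0

def block_end_averages(n_blocks):
    sx = sy = t = 0.0
    out = []
    for k in range(n_blocks):
        dt = rho ** k           # residence time at vertex k % 3
        x, y = P[k % 3]
        sx += dt * x
        sy += dt * y
        t += dt
        out.append((sx / t, sy / t))
    return out

avgs = block_end_averages(60)
print(avgs[-3], avgs[-2], avgs[-1])  # near (1/7,2/7), (4/7,1/7), (2/7,4/7)
```

Summing the geometric series shows the three subsequence limits are $(1/7,2/7)$, $(4/7,1/7)$ and $(2/7,4/7)$; the full sequence of averages sweeps between them. In the theorem the vertices instead depend on the centres of gravity of the periodic solutions and their Floquet multipliers.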
2[$^{\rm \bf \underline {nd}}$]{} result:
: Consider a generic one-parameter family of structurally stable differential equations in the unfolding of an equation for which the 1$^{\rm\underline{ st}}$ result holds. Then there is a sequence of parameter values for which there is a heteroclinic tangency of the invariant manifolds of two periodic solutions.
We use this result to obtain Theorem \[teoremaHistoric\]:
3[$^{\rm \bf \underline {rd}}$]{} result:
: Consider a generic one-parameter family of structurally stable differential equations in the unfolding of an equation for which the 1$^{\rm\underline{ st}}$ result holds. Then, for parameter values in an open interval, there are vector fields arbitrarily $C^2$-close to an element of the family for which there is an open set of initial conditions exhibiting historic behaviour.
In other words, we obtain a class of differential equations, dense in a $C^2$-open set, whose elements exhibit historic behaviour for an open set of initial conditions; this may be interpreted as the condition required in Takens’ Last Problem. The idea behind the proof goes back to the works [@LR3; @LR2015], combined with the recent progress in the field made in [@KS]. The proof consists of the following steps:
1. use the 2[$^{\rm \bf \underline {nd}}$]{} result to establish the existence of intervals in the parameters corresponding to Newhouse domains;
2. \[Item1\] in a given cross section, construct a diffeomorphim ($C^2$-close to the first return map) having historic behaviour for an open set of initial conditions;
3. \[Item2\] transfer the historic behaviour from the perturbed diffeomorphism of \[Item1\] to a flow $C^2$-close to the original one.
Furthermore, in the spirit of the example by Bowen described in [@Takens94], we obtain:
4[$^{\rm \bf \underline {th}}$]{} result:
: We construct explicitly a class of systems in which, $C^2$-close to the unfolding of a fully symmetric vector field, we may find an open set of initial conditions with historic behaviour. In contrast to the findings of Bowen and Kleptsyn [@Kleptsyn], our example is robust, due to the hyperbolicity of the periodic solutions and the transversality of the local heteroclinic connections.
The results in this article are stated for vector fields in ${{\rm\bf R}}^3$, but they hold for vector fields in a three-dimensional Riemannian manifold and, with some adaptation, in higher dimensions.
An ergodic point of view
------------------------
Concerning the first result, the outstanding fact in the degenerate case is that the time averages diverge precisely in the same way: they approach a $k$-polygon. This is in contrast with ergodic and hyperbolic strange attractors admitting a physical measure, where almost all initial conditions lead to converging time averages, in spite of the fact that the observed dynamics may undergo huge variations.
If a flow $\phi(t, .)$ admits an invariant probability measure $\mu$ that is absolutely continuous with respect to the Lebesgue measure and ergodic, then $\mu$ is a physical measure for $\phi(t, .)$, as a simple consequence of the Birkhoff Ergodic Theorem. In other words if $H: M \rightarrow {{\rm\bf R}}$ is a $\mu$-integrable function, then for $\mu$-almost all points in $M$ the time average: $$\lim_{T \rightarrow + \infty}\frac{1}{T}\int_{0}^{T} H\circ \phi(t,x_0) dt$$ exists and equals the space average $\int H d\mu$. In the conservative context, historic behaviour has zero Lebesgue measure.
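A standard sanity check of this statement (our choice of example: the irrational circle rotation, for which Lebesgue measure is invariant and ergodic) is that the time average of $H(x)=\cos(2\pi x)$ along any orbit approaches the space average $\int_0^1 H(x)\,dx=0$:

```python
import math

# Time average of the observable H(x) = cos(2*pi*x) along an orbit of the
# irrational circle rotation x -> x + alpha (mod 1).  Lebesgue measure is
# invariant and ergodic here, so by the Birkhoff theorem the time average
# converges to the space average, which is 0 for this H.

def time_average(x0, alpha, n_steps):
    x, total = x0, 0.0
    for _ in range(n_steps):
        total += math.cos(2 * math.pi * x)
        x = (x + alpha) % 1.0
    return total / n_steps

alpha = (math.sqrt(5) - 1) / 2          # irrational rotation number
print(time_average(0.1, alpha, 100_000))  # close to 0
```

For this discrete-time example the partial sums are in fact uniformly bounded, so the average decays like $1/N$; historic behaviour is precisely the failure of any such convergence.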
Physical measures need not be unique or even exist in general. When they exist, it is desirable that the set of points whose asymptotic time averages are described by physical measures be of full Lebesgue measure. It is not known in what generality the basins of physical measures cover a subset of $M$ of full Lebesgue measure. There are examples of systems admitting no physical measure, but the only known cases are not robust, *i.e.*, there are systems arbitrarily close (in the $C^2$ Whitney topology) that admit physical measures. In the present article, we exhibit a persistent class of smooth dynamical systems that does not have global physical measures. In the unfolding of an equation for which the first result holds, there are no physical measures whose basins intersect the basin of attraction of an attracting heteroclinic cycle. Our example confirms that physical measures need not exist for all vector fields. Existence results are usually difficult and are known only for certain classes of systems.
Example without historic behaviour
----------------------------------
Generalised Lotka-Volterra systems have been analysed by Duarte *et al.* in [@DFO]. Results about the convergence of time averages are known in two cases: either if there exists a unique interior equilibrium point, or in the conservative setting (see [@DFO]), when there is a heteroclinic cycle. In the latter case, if the solution remains bounded and does not converge to the cycle, then its time averages converge to an equilibrium point. The requirement is that the heteroclinic cycle is stable but not attracting, and the limit dynamics has been extended to polymatrix replicators in [@Peixe]. This is in contrast to our findings in the degenerate case, emphasising the importance of the hypothesis that the cycle is attracting in order to obtain convergence to a polygon.
Framework of the article
------------------------
Preliminary definitions are the subject of Section \[Preliminaries\] and the main hypotheses are stated in Section \[Hypotheses\]. We introduce the notation for the rest of the article in Section \[Local\] after a linearisation of the vector field around each periodic solution, whose details are given in Appendix \[appendix\]. We use precise control of the times of flight between cross-sections in Section \[Organizing\], to show that for an open set of initial conditions in a neighbourhood of asymptotically stable heteroclinic cycles involving non-trivial periodic solutions, the time averages fail to converge. Instead, the time averages accumulate on the boundary of a polygon, whose vertices may be computed from local information on the periodic solutions in the cycle. The proofs of some technical lemmas containing the computations about the control of the flight time between nodes appear in Appendix \[appendixB\], to make for easier reading.
In Section \[Tangencies\], we obtain a persistent class of smooth dynamical systems such that an open set of initial conditions corresponds to trajectories with historic behaviour. Symmetry-breaking techniques are used to obtain a heteroclinic cycle associated to two periodic solutions, and we find heteroclinic tangencies and Newhouse phenomena near which the results of [@CV; @KS] may be applied. This is followed in Section \[Example\] by an explicit example where historic behaviour arises in the unfolding of an $\textbf{SO(2)}$-equivariant vector field.
Preliminaries {#Preliminaries}
=============
To make the paper self-contained and readable, we recall some definitions.
Heteroclinic attractors
-----------------------
Several definitions of heteroclinic cycles and networks have been given in the literature. In this paper we consider non-trivial periodic solutions of (\[general\]) that are hyperbolic and that have one Floquet multiplier with absolute value greater than 1 and one Floquet multiplier with absolute value less than 1. A connected component of $W^s(\mathcal{P})\backslash \mathcal{P}$, for a periodic solution $\mathcal{P}$, will be called a *branch* of $W^s(\mathcal{P})$, with a similar definition for a branch of $W^u(\mathcal{P})$. Given two periodic solutions $\mathcal{P}_{a}$ and $\mathcal{P}_{b}$ of (\[general\]), a *heteroclinic connection* from $\mathcal{P}_a$ to $\mathcal{P}_b$ is a trajectory contained in $W^u(\mathcal{P}_a)\cap W^s(\mathcal{P}_b)$, that will be denoted $[\mathcal{P}_a\to \mathcal{P}_b]$.
Let $\mathcal{S}=\{\mathcal{P}_{a}:a\in \{1,\ldots,k\}\}$ be a finite ordered set of periodic solutions of saddle type of (\[general\]). The notation for $\mathcal{P}_{a}$ is cyclic; we indicate this by taking the index $a\pmod{k}$, *ie* $a\in{{\rm\bf Z}}_k = {{\rm\bf Z}}/k{{\rm\bf Z}}$. Suppose $$\forall a\in{{\rm\bf Z}}_k
\quad W^{u}(\mathcal{P}_{a})\cap W^{s}(\mathcal{P}_{a+1})\neq\emptyset .$$ A *heteroclinic cycle* $\Gamma$ associated to $\mathcal{S}$ is the union of the saddles in $\mathcal{S}$ with a heteroclinic connection $[\mathcal{P}_a \rightarrow \mathcal{P}_{a+1}]$ for each $a\in{{\rm\bf Z}}_k$. We refer to the saddles defining the heteroclinic cycle as *nodes*. A *heteroclinic network* is a connected set that is the union of heteroclinic cycles. When a branch of $W^{u}(\mathcal{P}_{a})$ coincides with a branch of $W^{s}(\mathcal{P}_{a+1})$, we also refer to it as a two-dimensional connection $[\mathcal{P}_a \rightarrow \mathcal{P}_{a+1}]$.
Basin of attraction
-------------------
For a solution of (\[general\]) passing through $x\in M$, the set of its accumulation points as $t$ goes to $+\infty$ is the $\omega$-limit set of $x$ and will be denoted by $\omega(x)$. More formally, $$\omega(x)=\bigcap_{T=0}^{+\infty} \overline{\left(\bigcup_{t>T}\phi(t, x)\right)}.$$ It is well known that $\omega(x)$ is closed and flow-invariant, and if $M$ is compact, then $\omega(x)$ is non-empty for every $x\in M$. If $\Gamma\subset M$ is a flow-invariant subset for (\[general\]), the *basin of attraction of $\Gamma$* is given by $$\mathcal{B}(\Gamma) = \{x \in M\backslash \Gamma : \mbox{all accumulation points of } \phi(t, x)\mbox{ as } t\to +\infty \mbox{ lie in } \Gamma\} .$$ Note that, with this definition, the set $\Gamma$ is not contained in $\mathcal{B}(\Gamma)$.
The setting {#Hypotheses}
===========
The hypotheses
--------------
Our object of study is the dynamics around a heteroclinic cycle associated to $k$ periodic solutions, $k\in{{\rm\bf N}}$, $k>1$, for which we give a rigorous description here. Specifically, we study a one-parameter family of $C^2$-vector fields $f_\lambda$ in ${{\rm\bf R}}^3$ whose flow has the following properties (see Figure \[Configuration\]):
1. \[P1\] For $\lambda\in {{\rm\bf R}}$, there are $k$ hyperbolic periodic solutions $\mathcal{P}_a$ of $\dot{x}=f_\lambda(x)$, $a\in{{\rm\bf Z}}_k$, of minimal period $\xi_a>0$. The Floquet multipliers of $\mathcal{P}_a$ are real and given by $e^{e_a}>1$ and $e^{-c_a}<1$ where $c_a> e_a>0 $.
2. \[P2\] For each $a\in{{\rm\bf Z}}_k$, the manifolds $W^s_{loc}(\mathcal{P}_a)$ and $W^u_{loc}(\mathcal{P}_a)$ are smooth surfaces homeomorphic to a cylinder – see Figure \[local\_C\].
3. \[P3\] For each $a\in{{\rm\bf Z}}_k$, and for $\lambda=0$, one branch of $W^u(\mathcal{P}_{a})$ coincides with a branch of $W^s(\mathcal{P}_{a+1})$, forming a heteroclinic network, that we call $\Gamma_0$, and whose basin of attraction contains an open set.
4. \[P4\]\[Transversality\] For $\lambda\neq 0$ and for each $a\in{{\rm\bf Z}}_k$, a branch of the two-dimensional manifold $W^u (\mathcal{P}_{a})$ intersects transversally a branch of $W^s(\mathcal{P}_{a+1})$ at two trajectories, forming a heteroclinic network $\Gamma_\lambda$, consisting of two heteroclinic cycles.
For $\lambda\neq 0$, any one of the two trajectories of (P\[P4\]) in $W^u (\mathcal{P}_{a})\cap W^s(\mathcal{P}_{a+1})$ will be denoted by $[\mathcal{P}_{a}\to \mathcal{P}_{a+1}]$. A more technical assumption (P\[P5\]) will be made in Section \[subsecSuspension\] below, after we have established some notation. For $a\in{{\rm\bf Z}}_k$, define the following constants: $$\label{constants}
\delta_a=\frac{c_a}{e_a} >1, \qquad \mu_{a+1}= \frac{c_a}{e_{a+1}} \qquad \text{and} \qquad \delta=\prod_{a=1}^k \delta_a >1$$
Also denote by $\overline{x}_a\in {{\rm\bf R}}^3$ the centre of gravity of $\mathcal{P}_{a}$, given by $$\overline{x}_a=\frac{1}{\xi_a} \int_{0}^{\xi_a}\mathcal{P}_{a} (t) dt \in {{\rm\bf R}}^3.$$
Without loss of generality, we assume that the minimal period is $\xi_a=1$ for all $a\in{{\rm\bf Z}}_k$. This will be used explicitly in system (\[ode of suspension\]) below.
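For concreteness, the constants in (\[constants\]) can be computed for hypothetical Floquet data satisfying $c_a>e_a>0$; the minimal sketch below checks that each $\delta_a>1$ and that $\delta=\prod_a\delta_a$ coincides with $\prod_a \mu_{a+1}$, since both products equal $\prod_a c_a / \prod_a e_a$. The numerical data are illustrative and not taken from any specific vector field.

```python
# Illustrative Floquet exponents for a k = 3 cycle with c_a > e_a > 0 (P1).
c = [2.0, 1.5, 3.0]      # contracting exponents c_a
e = [1.0, 1.2, 2.0]      # expanding exponents e_a
k = len(c)

# the constants of (constants): delta_a = c_a/e_a and mu_{a+1} = c_a/e_{a+1}
delta_a = [c[a] / e[a] for a in range(k)]
mu = [c[a] / e[(a + 1) % k] for a in range(k)]   # mu[a] stands for mu_{a+1}

delta = 1.0
for d in delta_a:
    delta *= d
prod_mu = 1.0
for m in mu:
    prod_mu *= m
```

With this data $\delta = 3.75 > 1$, so the cycle of the example is attracting in the sense used throughout the paper.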
![Configuration of $\Gamma_\lambda$ for $\lambda=0$ (left) and $\lambda>0$ (right). The representation is done for $k=2$.[]{data-label="Configuration"}](Configuration1){height="4.5cm"}
The dynamics
------------
The dynamics of this kind of heteroclinic structures involving periodic solutions has been studied before in [@ACL; @NONLINEARITY; @ALR; @Melbourne; @Rodrigues], in different contexts.
Since $f_0$ satisfies (P\[P1\])–(P\[P3\]), adapting the Krupa and Melbourne criterion [@KM1; @KM2] shows that any solution starting sufficiently close to $\Gamma_0$ will approach it in positive time; in other words, $\Gamma_0$ is asymptotically stable. As a trajectory approaches $\Gamma_0$, it visits one periodic solution, then moves off to visit the other periodic solutions in the network. After a while it returns to the initial periodic solution, and the second visit lasts longer than the first. The oscillatory regime of such a solution appears to switch between the different nodes, at geometrically increasing times.
For $\lambda \neq 0$, by (P\[P4\]), the invariant manifolds of the nodes meet transversally, and the network is no longer asymptotically stable due to the presence of suspended horseshoes in its neighbourhood. As proved in [@Rodrigues], there is an infinite number of heteroclinic and homoclinic connections between any two periodic solutions and the dynamics near the heteroclinic network is very complex. The route to chaos corresponds to an interaction of robust switching with chaotic cycling. The emergence of chaotic cycling does not depend on the magnitude of the multipliers of the periodic solutions. It depends only on the geometry of the flow near the cycle. In Table \[notation\], we summarise some information about the type of heteroclinic structure of $\Gamma_\lambda$ and the type of dynamics nearby.
$\lambda$ Structure of $V_{\Gamma_\lambda}$ Dynamics near $\Gamma_\lambda$ References
----------- ----------------------------------- -------------------------------- -----------------------------------------
zero torus of genus $k$ Attractor [@Melbourne; @Rodrigues]
non-zero torus of genus $> k$ Chaos (Switching and Cycling) [@ACL; @NONLINEARITY; @ALR; @Rodrigues]
: Heteroclinic structure of $\Gamma_\lambda$, for $\lambda=0$ and $\lambda\neq0$.[]{data-label="notation"}
Local and global dynamics near the network {#Local}
==========================================
Given a heteroclinic network of periodic solutions $\Gamma_\lambda$ with nodes $\mathcal{P}_{a}$, $a\in{{\rm\bf Z}}_k$, let $V_{\Gamma_\lambda}$ be a compact neighbourhood of $\Gamma_\lambda$ and let $V_a$ be pairwise disjoint compact neighbourhoods of the nodes $\mathcal{P}_{a}$, such that each boundary $\partial V_a$ is a finite union of smooth manifolds with boundary, that are transverse to the vector field everywhere, except at their boundary. Each $V_a$ is called an *isolating block* for $\mathcal{P}_{a}$ and, topologically, it consists of a hollow cylinder. Topologically, $V_{\Gamma_0}$ may be seen as a solid torus with genus $k$ (see Table \[notation\]).
Suspension and local coordinates {#subsecSuspension}
--------------------------------
For $a\in{{\rm\bf Z}}_k$, let $\Sigma_a$ be a cross section transverse to the flow at $p_a \in \mathcal{P}_{a}$. Since $\mathcal{P}_{a}$ is hyperbolic, there is a neighbourhood $V^*_a$ of $p_a$ in $\Sigma_a$ where the first return map to $\Sigma_a$, denoted by $\pi_a$, is $C^1$ conjugate to its linear part. Moreover, for each $r\ge 2$ there is an open and dense subset of ${{\rm\bf R}}^2$ such that, if the eigenvalues $(c_a,e_a)$ lie in this set, then the conjugacy is of class $C^r$ — see [@Takens71] and Appendix \[appendix\]. The eigenvalues of $d\pi_a$ are $e^{e_a}$ and $e^{-c_a}$. Suspending the linear map gives rise, in cylindrical coordinates $(\rho, \theta, z)$ around $\mathcal{P}_{a}$, to the system of differential equations: $$\label{ode of suspension}
\left\{
\begin{array}{l}
\dot{\rho}=-c_{a}(\rho -1) \\
\dot{\theta}=1 \\
\dot{z}=e_{a}z
\end{array}
\right.$$ which is $C^2$-conjugate, after reparametrising the time variable, to the original flow near $\mathcal{P}_{a}$. In these coordinates, the periodic solution $\mathcal{P}_{a}$ is the circle defined by $\rho=1$ and $z=0$, its local stable manifold, $W^s_{loc}(\mathcal{P}_{a})$, is the plane defined by $z=0$ and $W^u_{loc}(\mathcal{P}_{a})$ is the surface defined by $\rho=1$ as in Figure \[local\_C\].
We will work with a hollow three-dimensional cylindrical neighbourhood $V_a(\varepsilon)$ of $\mathcal{P}_{a}$ contained in the suspension of $V^*_a$ given by: $$V_a(\varepsilon)=\left\{ (\rho,\theta,z):\quad 1-\varepsilon\le\rho\le 1+\varepsilon,
\quad -\varepsilon\le z\le \varepsilon\quad \text{and}\quad
\theta\in{{\rm\bf R}}\pmod{2\pi}
\right\}\ .$$ When there is no ambiguity, we write $V_a$ instead of $V_a(\varepsilon)$. Its boundary is a disjoint union $$\partial V_{a}= In(\mathcal{P}_{a}) \cup Out(\mathcal{P}_{a}) \cup \Omega(\mathcal{P}_{a})$$ such that :
- $In(\mathcal{P}_{a})$ is the union of the walls, defined by $\rho=1\pm\varepsilon$, of the cylinder, locally separated by $W^u(\mathcal{P}_{a})$. Trajectories starting at $In(\mathcal{P}_{a})$ go inside the cylinder $V_a$ in small positive time.
- $Out(\mathcal{P}_{a})$ is the union of two annuli, the top and the bottom of the cylinder, defined by $z=\pm\varepsilon$, locally separated by $W^s(\mathcal{P}_{a})$. Trajectories starting at $Out(\mathcal{P}_{a})$ go inside the cylinder $V_a$ in small negative time.
- The vector field is transverse to $\partial V_{a}$ at all points except possibly at the four circles: $\Omega(\mathcal{P}_{a})=\overline{In(\mathcal{P}_{a})}\cap \overline{Out(\mathcal{P}_{a})}$.
The two cylinder walls, $In(\mathcal{P}_{a})$, are parametrised by the covering maps: $$(\theta,z)\mapsto(1\pm\varepsilon,\theta,z)=(\rho,\theta,z),$$ where $\theta\in{{\rm\bf R}}\pmod{2\pi}$, $|z|<\varepsilon$. In these coordinates, $In(\mathcal{P}_{a})\cap W^s(\mathcal{P}_{a})$ is the union of the two circles $z=0$. The two annuli $Out(\mathcal{P}_{a})$ are parametrised by the coverings: $$(\varphi,r) \mapsto ( r,\varphi, \pm \varepsilon)=(\rho,\theta,z),$$ for $1-\varepsilon<r<1+\varepsilon$ and $\varphi \in {{\rm\bf R}}\pmod{2\pi}$ and where $Out(\mathcal{P}_{a})\cap W^u(\mathcal{P}_{a})$ is the union of the two circles $r=1$. In these coordinates, $\Omega(\mathcal{P}_{a}) =\overline{In(\mathcal{P}_{a})}\cap \overline{Out(\mathcal{P}_{a})}$ is the union of the four circles defined by $\rho=1\pm \varepsilon$ and $ z=\pm \varepsilon$.
The portion of the unstable manifold of $\mathcal{P}_{a}$ that goes from $\mathcal{P}_{a}$ to $In(\mathcal{P}_{a+1})$ without intersecting $V_{a+1}$ will be denoted $W^u_{loc}(\mathcal{P}_{a})$. Similarly, $W^s_{loc}(\mathcal{P}_{a})$ will denote the portion of the stable manifold of $\mathcal{P}_{a}$ that is outside $V_{a-1}$ and goes directly from $Out(\mathcal{P}_{a-1})$ to $\mathcal{P}_{a}$. With this notation, we formulate the following technical condition:
![Local coordinates on the boundary of the neighbourhood $V_a$ of a periodic solution $\mathcal{P}_a$ where $a \in {{\rm\bf Z}}_k$. Double bars mean that the sides are identified.[]{data-label="local_C"}](Local_C){height="8cm"}
1. \[P5\] For $a\in{{\rm\bf Z}}_k$, and $\lambda\neq 0$ close to zero, the manifolds $W^u_{loc}(\mathcal{P}_{a})$ intersect the cylinders $In(\mathcal{P}_{a+1})$ on a closed curve. Similarly, $W^s_{loc}(\mathcal{P}_{a})$ intersects the annulus $Out(\mathcal{P}_{a-1})$ on a closed curve.
The previous hypothesis complements (P\[P4\]) and corresponds to the expected unfolding from the coincidence of the manifolds $W^s(\mathcal{P}_{a+1})$ and $W^u(\mathcal{P}_{a})$ at $f_0$, see Chilingworth [@Chilingworth]. Note that (P\[P4\]) and (P\[P5\]) are satisfied in an open subset of the set of unfoldings $f_\lambda$ of $f_0$ satisfying (P\[P1\])–(P\[P3\]).
In order to distinguish the local coordinates near the periodic solutions, we sometimes add the index $a$ with $a\in{{\rm\bf Z}}_k$.
Local map near the periodic solutions {#sublocal}
-------------------------------------
For each $a \in {{\rm\bf Z}}_k$, we may solve (\[ode of suspension\]) explicitly, then we compute the flight time from $In(\mathcal{P}_{a})$ to $Out(\mathcal{P}_{a})$ by solving the equation $z(t)=\varepsilon$ for the trajectory whose initial condition is $(\theta_a, z_a) \in In(\mathcal{P}_{a})\backslash W^s(\mathcal{P}_{a})$, with $z_a>0$. We find that this trajectory arrives at $ Out(\mathcal{P}_{a})$ at a time $\tau_a: In(\mathcal{P}_{a})\backslash W^s(\mathcal{P}_{a}) \rightarrow {{\rm\bf R}}_0^+$ given by: $$\label{Time of Flight}
\tau_a(\theta_a, z_a)=\frac{1}{e_a}\ln \left(\frac{\varepsilon}{z_a}\right).$$ Replacing this time in the other coordinates of the solution, yields:
$$\label{local map}
\Phi _{a}(\theta_a,z_a)=
\left(\theta_a-\frac{1}{e_a}\ln\left(\frac{z_a}{\varepsilon}\right),
1\pm \varepsilon \left(\frac{z_a}{\varepsilon}\right)^{\delta_a}\right)= (\varphi_a,r_a)
\qquad\mbox{where} \quad\delta_a=\frac{c_{a}}{e_{a}}>1 .$$
The signs $\pm$ depend on the component of $In(\mathcal{P}_{a})$ we started at, $+$ for trajectories starting with $r_a>1$ and $-$ for $r_a<1$. We will discuss the case $r_a>1$, $z_a>0$, the behaviour on the other components is analogous.
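The formulas (\[Time of Flight\]) and (\[local map\]) can be checked against the explicit solution $\rho(t)=1+(\rho_0-1)e^{-c_a t}$, $\theta(t)=\theta_0+t$, $z(t)=z_0e^{e_a t}$ of (\[ode of suspension\]). The sketch below does this for one illustrative choice of $c_a$, $e_a$, $\varepsilon$ and entry point (on the component $\rho=1+\varepsilon$, $z_a>0$); none of the numeric values comes from the paper.

```python
import math

# Check the local map Phi_a and flight time tau_a against the explicit
# solution of the linear suspension system (ode of suspension).
c_a, e_a, eps = 2.0, 1.0, 0.1
delta_a = c_a / e_a
theta_a, z_a = 0.7, 1e-3          # entry point on In(P_a), with z_a > 0
rho0 = 1 + eps                    # wall rho = 1 + eps

tau = (1.0 / e_a) * math.log(eps / z_a)        # (Time of Flight)

# exit point predicted by the explicit solution of the ODE:
phi_exit = theta_a + tau
r_exit = 1 + (rho0 - 1) * math.exp(-c_a * tau)

# exit point predicted by the local map (local map):
phi_map = theta_a - (1.0 / e_a) * math.log(z_a / eps)
r_map = 1 + eps * (z_a / eps) ** delta_a
```

The agreement is exact up to floating-point error, since both computations solve the same linear system in closed form.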
Flight times for $\lambda=0$ {#secTransition0}
----------------------------
Here we introduce some terminology that will be used in Section \[Organizing\]; see Figure \[times\_notation\]. For $X\in \mathcal{B}(\Gamma_0)$, let $T_1(X)$ be the smallest $t\ge 0$ such that $\phi(t,X)\in In(\mathcal{P}_{1})$. For $j\in {{\rm\bf N}}$, $j>1$, we define $T_j(X)$ inductively as the smallest $t>T_{j-1}(X)$ such that $\phi(t,X)\in In(\mathcal{P}_{\langle j\rangle})$, where $$\left\langle j\right\rangle= j- \left[\frac{j}{k}\right]k$$ is the remainder in the integer division by $k$ and $[x]$ is the greatest integer less than or equal to $x$. Recall that the index $a$ in $\mathcal{P}_{a}$ lies in ${{\rm\bf Z}}_k$, so that $\mathcal{P}_{0}$ and $\mathcal{P}_{k}$ represent the same periodic solution.
In order to simplify the computations, we may assume that the transition from $Out(\mathcal{P}_a)$ to $In(\mathcal{P}_{a+1})$ is instantaneous. This is reasonable because, as $t\to\infty$, the time of flight inside each $V_a$ tends to infinity, whereas the time of flight from $Out(\mathcal{P}_a)$ to $In(\mathcal{P}_{a+1})$ remains bounded. In the proof of Proposition \[density\] below, we will see that this assumption does not affect the validity of our results. With this assumption, the time of flight $\tau_{a+nk}(X)$ inside $V_a$ at the $n$-th pass of the trajectory through $V_a$ will be $$\tau_{a+nk}(X)=T_{a+1+nk}(X)-T_{a+nk}(X),$$ thus extending the notation $\tau_a$ introduced in \[sublocal\] above to $X\in \mathcal{B}(\Gamma_0)$ and any index $a+nk\in{{\rm\bf N}}$.
![For $X\in \mathcal{B}(\Gamma_0)$, the solution $\phi(t,X)$ remains in ${V}_a$ for a time interval of length $\tau_{a}(X)$, then spends $\tau_{a+1}(X)$ units of time near $\mathcal{P}_{a+1}$, and , after $n$ full turns, stays again in ${V}_a$ for $\tau_{a+nk}(X)$ units of time, and so on. The representation is done for $k=3$.[]{data-label="times_notation"}](times_notation){height="7.5cm"}
For each $a \in {{\rm\bf Z}}_k$, and for $\lambda=0$, we define the transition map $ \Psi_{a }^0:Out(\mathcal{P}_{a})\rightarrow In(\mathcal{P}_{a+1})$ $$\label{Psi_def}
\Psi_a^0(\varphi_a, r_a) = (\varphi_a, r_a-1)=({\theta}_{a+1}, {z}_{a+1}).$$ The transition maps for $\lambda\ne 0$ will be discussed in Section \[secTransition\].
The $k$-polygon at the organising centre {#Organizing}
========================================
Let $f_0$ be a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]). All the results of this section assume $\lambda=0$. Suppose, from now on, that $\phi(t, X) $ is a solution of $\dot{x}=f_0(x)$ with initial condition $X=\phi(0,X)$ in $\mathcal{B}(\Gamma_0)$, the basin of attraction of $\Gamma_0$.
The statistical limit set of $f_0$
----------------------------------
The statistical limit set $\Lambda_{stat}(f_0)$ associated to the basin of attraction of $\Gamma_0$ is the smallest closed subset where Lebesgue almost all trajectories spend almost all time. More formally, following Ilyashenko [@Ilya1] and Karabacak and Ashwin [@KA], we define:
For an open set $U\subset {{\rm\bf R}}^3$ and a solution $\phi(t,x)$ of (\[general\]) with $x\in {{\rm\bf R}}^3$:
1. the frequency of the solution being in $U$ is the ratio: $$\rho_{f}(x, U, T)= \frac{Leb\{t \in [0,T]: \phi(t,x) \in U\}}{T}.$$ where $Leb$ denotes the Lebesgue measure in ${{\rm\bf R}}$.
2. the statistical limit set, denoted by $\Lambda_{stat}({f})$, is the smallest closed subset of ${{\rm\bf R}}^3$ for which any open neighbourhood $U$ of $\Lambda_{stat}$ satisfies the equality: $$\lim_{t\rightarrow +\infty} \rho_{f}(x, U, t)=1, \qquad \text{for almost all } x \in {{\rm\bf R}}^3.$$
Since the transitions between the saddles of $\Gamma_0$ are very fast compared with the times of sojourn near the periodic solutions $\mathcal{P}_a$, $a\in{{\rm\bf Z}}_k$ (see \[secTransition0\]) we may conclude that:
\[density\] Let $f_0$ be a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]). Then: $$\Lambda_{stat}({f_0}|_{\mathcal{B}(\Gamma_0)})=\bigcup_{a=1}^k \mathcal{P}_a\subset \Gamma_0.$$
**Proof:** The flow from $Out(\mathcal{P}_{a})$ to $In(\mathcal{P}_{a+1})$ is non-singular as in a flow-box. Since both $Out(\mathcal{P}_{a})$ and $In(\mathcal{P}_{a+1})$ are compact sets, the time of flight between them has a positive maximum. On the other hand, for each $a\in{{\rm\bf Z}}_k$, the time of flight inside $V_a$ from $In(\mathcal{P}_{a})\backslash W^s_{loc}(\mathcal{P}_a)$ to $Out(\mathcal{P}_{a})$ tends to infinity as the entry point approaches the stable manifold $W^s_{loc}(\mathcal{P}_a)$, or equivalently, as the trajectory accumulates on $\Gamma_0$.
\[rkFlightTimes\] It follows from Proposition \[density\] that, for each $a\in{{\rm\bf Z}}_k$, the time intervals in which trajectories are travelling from $Out(\mathcal{P}_a)$ to $In(\mathcal{P}_{a+1})$ do not affect the accumulation points of the time averages of a solution that is accumulating on $\Gamma_0$. This result will be useful in the proof of Theorem \[Main1\] because it shows that the duration of the journeys between nodes may be statistically neglected.
Estimates of flight times
-------------------------
In this section, we obtain relations between flight times of a trajectory in consecutive isolating blocks as well as other estimates that will be used in the sequel.
\[Lemma\_times\_2\] For all $j\in{{\rm\bf N}}$ and any initial condition $X\in \mathcal{B}(\Gamma_0)$ we have: $$\label{ratio1}
\frac{\tau_{j+1 }(X)}{\tau_{j}(X)}=\frac{c_{\langle j\rangle}}{e_{\langle j+1\rangle}}.$$ In particular the ratio ${\tau_{j+1 }(X)}/{\tau_{j}(X)}$ does not depend on $X$.
**Proof:** Given $j\in{{\rm\bf N}}$, let $X_j=\left(\theta_j,z_j\right)=\phi \left(T_j(X),X\right)\in In(\mathcal{P}_{\langle j\rangle})$. Using the expressions (\[Time of Flight\]) and (\[local map\]), and the expression for $\Psi_{\langle j\rangle}^0$ in (\[Psi\_def\]), we have: $$\tau_{j+1}(X)=\frac{1}{e_{\langle j+1\rangle}}\ln \left( \frac{\varepsilon}{ \varepsilon \left(\frac{z_j}{\varepsilon}\right)^{\delta_{\langle j\rangle}}} \right) =
\frac{1}{e_{\langle j+1\rangle}} \delta_{\langle j\rangle} \left[\ln (\varepsilon)-\ln (z_j)\right]$$ Thus $$\frac{\tau_{j+1}(X)}{\tau_j(X)} =
\frac{\frac{1}{e_{\langle j+1\rangle}} \delta_{\langle j \rangle} \left[\ln (\varepsilon)-\ln (z_j)\right]}{\frac{1}{e_{\langle j\rangle}}\left[\ln(\varepsilon)-\ln(z_j)\right]}
= \frac{e_{\langle j \rangle}}{e_{\langle j+1\rangle}} \delta_{\langle j\rangle} =\frac{c_{\langle j\rangle}}{e_{\langle j+1\rangle}}.$$
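The independence of the ratio ${\tau_{j+1}}/{\tau_j}$ from the initial condition can be observed numerically by iterating the local maps of Section \[sublocal\] together with the transitions (\[Psi\_def\]). The Floquet data and $\varepsilon$ below are illustrative choices satisfying $c_a>e_a>0$; indices are taken $0$-based for convenience.

```python
import math

# Numeric check of Lemma (ratio1): the ratio of consecutive flight times
# equals c_j/e_{j+1} and does not depend on the initial condition X.
c = [2.0, 1.5, 3.0]
e = [1.0, 1.2, 2.0]
k, eps = len(c), 0.1

def flight_times(z0, n):
    """Return tau_0, ..., tau_{n-1} for an orbit entering node 0 at height z0."""
    taus, z = [], z0
    for j in range(n):
        a = j % k
        taus.append(math.log(eps / z) / e[a])
        z = eps * (z / eps) ** (c[a] / e[a])   # height on entering node a+1
    return taus

t1 = flight_times(1e-3, 9)
t2 = flight_times(5e-4, 9)     # a different initial condition X
ratios1 = [t1[j + 1] / t1[j] for j in range(8)]
ratios2 = [t2[j + 1] / t2[j] for j in range(8)]
predicted = [c[j % k] / e[(j + 1) % k] for j in range(8)]
```

Both orbits give the same sequence of ratios, matching the Lemma.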
Recall from (\[constants\]) that $\displaystyle\mu_{a+1}= \frac{c_a}{e_{a+1}}$, $a \in {{\rm\bf Z}}_k$. With this notation we obtain:
\[Lemma3\] For $i,j\in {{\rm\bf N}}$ such that $j>i>1$, and for any $X\in \mathcal{B}(\Gamma_0)$, we have:
1. \[geom\_sum\] $\frac{\tau_{j+k}(X)}{\tau_j (X)}= \prod_{a=1}^k \mu_{a+1} = \prod_{a=1}^k \delta_a=\delta>1$.
2. $\tau_{j+1}(X)= \tau_i (X)\prod^{j+1}_{l=i+1}\mu_{\langle l\rangle}$.
We finish this section with a result comparing the two sequences of times $(T_i)_{i\in {{\rm\bf N}}}$ and $(\tau_i)_{i\in {{\rm\bf N}}}$. The proof is very technical and is given in Appendix \[appendixB1\].
\[Equalities\] For $a\in{{\rm\bf Z}}_k$, and for any $X\in \mathcal{B}(\Gamma_0)$, the following equalities hold:
1. $T_{a + nk}(X)=T_a (X)+ \frac{\delta^n-1}{\delta-1} \left(\mu_a +\mu_a \mu_{a+1} + \ldots + \prod_{l=0}^{k-1}\mu_{a+l}\right)\tau_{a-1}(X);$
2. $\tau_{a + nk}(X)=T_{a +1+ nk }(X)-T_{a + nk}(X)= \delta^n \mu_a \tau_{ a-1}(X)$.
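As a sanity check on Proposition \[Equalities\], the sketch below generates the sequences $(T_j)$ and $(\tau_j)$ in the reduced model with instantaneous transitions (Section \[secTransition0\]) and compares them with the closed forms above. The flight times are generated in log-space, since $\ln(\varepsilon/z_j)$ is simply multiplied by $\delta_{\langle j\rangle}$ at each passage; all numeric data are illustrative.

```python
import math

# Numeric check of Proposition (Equalities) for a k = 3 cycle with
# illustrative Floquet data (c_a > e_a > 0) and eps.
c = {1: 2.0, 2: 1.5, 3: 3.0}
e = {1: 1.0, 2: 1.2, 3: 2.0}
k, eps = 3, 0.1

def idx(j):
    """Representative of j in {1, ..., k} (P_0 and P_k coincide)."""
    return (j - 1) % k + 1

# flight times tau_j in log-space: ln(eps/z) multiplies by delta_a per node
L = math.log(eps / 1e-3)            # entry height z = 1e-3 at In(P_1)
taus = {}
for j in range(1, 20):
    a = idx(j)
    taus[j] = L / e[a]
    L *= c[a] / e[a]

T = {1: 0.0}                        # taking T_1 = 0, transitions instantaneous
for j in range(2, 20):
    T[j] = T[j - 1] + taus[j - 1]

mu = {a: c[idx(a - 1)] / e[a] for a in range(1, k + 1)}   # mu_a = c_{a-1}/e_a
delta = mu[1] * mu[2] * mu[3]

a, n = 2, 4
S = mu[idx(a)] + mu[idx(a)] * mu[idx(a + 1)] \
    + mu[idx(a)] * mu[idx(a + 1)] * mu[idx(a + 2)]
rhs_T = T[a] + (delta ** n - 1) / (delta - 1) * S * taus[a - 1]
rhs_tau = delta ** n * mu[idx(a)] * taus[a - 1]
```

The identities hold exactly in this model, since each cycle of flight times is a fixed multiple $\delta$ of the previous one.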
The vertices of the $k$-polygon {#subsecvertices}
-------------------------------
In this section, we show that in $\mathcal{B}(\Gamma_0)$ the time averages fail to converge, by finding several accumulation points for them. For each $a\in{{\rm\bf Z}}_k$, define the point $$\label{point1}
A_{a}= \frac{\overline{x}_{a} + \mu_{a+1} \overline{x}_{a+1} + \mu_{a+1}\mu_{a+2} \overline{x}_{a+2} + \ldots+ \prod_{l=1}^{k-1}\mu_{a+l} \overline{x}_{a+k-1}}{1+\mu_{a+1} +\mu_{a+1}\mu_{a+2}+\ldots+ \prod_{l=1}^{k-1}\mu_{a+l}}= \frac{num(A_a)}{den(A_a)}$$ Note that $A_a$ and $num(A_a)$ lie in ${{\rm\bf R}}^3$ and $den(A_a)\in {{\rm\bf R}}$. Later we will see that these points are the vertices of a polygon of accumulation points. First we show that they are accumulation points for the time averages.
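The vertices (\[point1\]) are easy to compute in examples; the sketch below does so for hypothetical centres of gravity $\overline{x}_a$ and ratios $\mu_{a+1}$, and also verifies the identities of Lemma \[Colinear\], which relate consecutive vertices. All numeric data are illustrative.

```python
# Vertices A_a of (point1) for an illustrative k = 3 cycle, together with a
# check of the identities mu_{a+1} den(A_{a+1}) = den(A_a) + (delta - 1) and
# mu_{a+1} num(A_{a+1}) = num(A_a) + (delta - 1) xbar_a of Lemma (colinear2).
k = 3
mu = [5.0 / 3.0, 0.75, 3.0]        # mu[a] stands for mu_{a+1}, a = 0, ..., k-1
delta = mu[0] * mu[1] * mu[2]      # here delta = 3.75 > 1
xbar = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]

def num_den(a):
    """Numerator (a point of R^3) and denominator of the vertex A_a."""
    w, den, num = 1.0, 0.0, [0.0, 0.0, 0.0]
    for l in range(k):
        b = (a + l) % k
        den += w
        num = [num[i] + w * xbar[b][i] for i in range(3)]
        w *= mu[b]                 # next weight gains a factor mu_{a+l+1}
    return num, den

A = []
for a in range(k):
    num, den = num_den(a)
    A.append(tuple(x / den for x in num))
```

Since all weights are positive, each $A_a$ is a convex combination of the $\overline{x}_a$'s, as is apparent from the formula.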
\[Prop6\] Let $a\in{{\rm\bf Z}}_k$, let $f_0$ be a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]) and let $\phi(t,X)$ be a solution of $\dot{x}=f_0(x)$ with $X\in\mathcal{B}(\Gamma_0)$. Then $$\lim_{n \rightarrow +\infty } \left[\frac{1}{T_{a+nk}} \int_0^{T_{a+nk}} \phi(t,X) dt \right] = A_{a}$$
In order to prove Proposition \[Prop6\], we first show that it is sufficient to consider the limit as $n\to\infty$ of the averages over one turn around $\Gamma_0$, and then we prove that these averages tend to $A_a$. The proof is divided into two technical lemmas, which may be found in Appendix \[appendixC\].
The sides of the $k$-polygon
----------------------------
In Section \[subsecvertices\] we have shown that, for $a \in {{\rm\bf Z}}_k$, the time averages over the sequence of times $T_{a+nk}$ accumulate, as $n\to\infty$, at the point $A_a$. In this section we describe accumulation points for intermediate sequences of times $t_n$. For this, it will be useful to know how $A_a$ and $A_{a+1}$ are related:
\[Colinear\] For all $a \in {{\rm\bf Z}}_k$, the following equalities hold: $$\label{colinear2}
\mu_{a+1} den(A_{a+1}) =den (A_a)-(1-\delta)
\qquad\mbox{and}\qquad
\mu_{a+1} num(A_{a+1})=num(A_a) - (1-\delta)\overline{x}_a .$$
\[propColinear\] The point $A_{a+1}$ lies in the segment connecting $A_a$ to $\overline{x}_{a}$.
**Proof:** We use Lemma \[Colinear\] to obtain $$num(A_{a+1}) =\frac{1}{ \mu_{a+1}}num(A_a)+\frac{\delta-1}{ \mu_{a+1}}\overline{x}_{a}$$ and hence $$A_{a+1}=\frac{ num(A_{a+1}) }{den(A_{a+1}) }
=\left(\frac{den(A_{a}) }{ \mu_{a+1}den(A_{a+1}) }\right)\frac{num(A_a)}{den(A_{a}) }
+\left(\frac{\delta-1}{ \mu_{a+1}den(A_{a+1}) }\right)\overline{x}_{a}=\alpha A_a+\beta \overline{x}_{a}.$$ Again from Lemma \[Colinear\] we have $den (A_a)=\mu_{a+1} den(A_{a+1}) -(\delta-1)$, and therefore $$\alpha=\frac{den (A_a)}{\mu_{a+1} den(A_{a+1}) }=1-\frac{\delta-1}{\mu_{a+1} den(A_{a+1}) }=1-\beta$$ hence $A_{a+1}$ lies in the line through $A_a$ and $\overline{x}_{a}$. From the expression in Lemma \[Colinear\] it follows that $\mu_{a+1} den(A_{a+1})-den (A_a)= \delta-1>0$; since $0<den(A_a)<\mu_{a+1} den(A_{a+1})$, we get $0<\alpha<1$ and thus $A_{a+1}$ lies in the segment from $A_a$ to $\overline{x}_{a}$, proving the result.
![Representation of the sequence of times $\lambda_n\tau_{a+nk}$, where $a\in{{\rm\bf Z}}_k$, is fixed and $n \in {{\rm\bf N}}$.[]{data-label="sequence2"}](sequence1){height="6cm"}
We now come to the main result of this section:
\[Main1\] If $f_0$ is a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]), then for any $X\in \mathcal{B}(\Gamma_0)$, the set of accumulation points of the time average $\frac{1}{T} \int_0^T \phi(t,X) dt $ is the boundary of the $k$-polygon defined by $A_1,\ldots, A_k\in {{\rm\bf R}}^3$ in (\[point1\]). Moreover, when $\delta\rightarrow 1$ the polygon collapses into a point.
**Proof:** First we show that all points in the boundary of the polygon are accumulation points. Given $ L\in[0,1]$ and $a\in {{\rm\bf Z}}_k$, consider the sequence $t_n=T_{a+nk}+ L \tau_{a+nk}$, we want the accumulation points of $\mathcal{L}_n=\frac{1}{t_n} \int_0^{t_n} \phi(t,X) dt$ as $n\to\infty$. For this we write $$\begin{aligned}
\mathcal{L}_n=
\frac{1}{t_n} \int_0^{t_n} \phi(t,X) dt &=&
\frac{1}{t_n} \int_0^{T_{a+nk}} \phi(t,X) dt + \frac{1}{t_n} \int_{T_{{a+nk}}}^{t_n} \phi(t,X) dt\\
&=&\alpha_n \left(\frac{1}{T_{{a+nk}}} \int_0^{T_{a+nk}} \phi(t,X) dt\right)
+ \beta_n \left(\frac{1}{t_n-T_{{a+nk}}} \int_{T_{{a+nk}}}^{t_n} \phi(t,X) dt\right),\end{aligned}$$ where $$0< \alpha_n=\frac{T_{a+nk} }{t_n} \leq 1,
\qquad
0\leq \beta_n= \frac{t_n-T_{{a+nk}} }{t_n}\leq 1
\qquad\text{and} \qquad
\alpha_n+\beta_n=1 .$$ Since both $\alpha_n$ and $\beta_n$ are bounded, each of them contains a convergent subsequence. We analyse separately each of the terms in the expression for $\mathcal{L}_n$ above.
We have already seen in Proposition \[Prop6\] that, if $X\in \mathcal{B}(\Gamma_0)$, then $\lim_{n\to\infty} \frac{1}{T_{a+nk}} \int_0^{T_{a+nk}} \phi(t,X) dt=A_a$. In particular, if $ L=0$, then $\alpha_n=1$, $\beta_n=0$ and $\lim_{n\to\infty}\mathcal{L}_n =A_a$.
We claim that if $ L\ne 0$, then $\lim_{n\to\infty} \frac{1}{t_n-T_{a+nk}} \int_{T_{{a+nk}}}^{t_n} \phi(t,X) dt=\overline{x}_a$. To see this, note that $\phi(t,X)\in V_a$ for $t\in[T_{a+nk},t_n]$. Moreover, since $\lim_{n\to\infty}\tau_{a+nk}=\infty$, then for large $n$, we have that $t_n-T_{a+nk}= L\tau_{a+nk}$ is much larger than $\xi_a$, the period of $\mathcal{P}_a$. Since $X\in \mathcal{B}(\Gamma_0)$, then $ \phi(t,X) $, with $t\in[T_{a+nk},t_n]$, tends to $\mathcal{P}_a$ when $n\to\infty$ and the average of $ \phi(t,X)$ tends to $\overline{x}_a$, the average of $\mathcal{P}_a$.
At this point we have established that any accumulation point of $\mathcal{L}_n$ lies in the segment connecting $A_a$ to $\overline{x}_a$. We have shown in Proposition \[propColinear\] that this segment also contains $A_{a+1}$. By Proposition \[Prop6\] we have that $\lim_{n\to\infty}\mathcal{L}_n=A_{a+1}$ for $ L=1$. On the other hand, $\beta_n$ is an increasing function of $ L$, so, as $ L$ increases from 0 to 1, the accumulation points of $\mathcal{L}_n$ move from $A_a$ to $A_{a+1}$ in the segment connecting them.
Conversely, any accumulation point lies on the boundary of the polygon. To see this, let $A$ be an accumulation point of the time average. This means that there is an increasing sequence of times $s_n$, tending to infinity, and such that $\lim_{n\to \infty}\mathcal{L}_n=A$, where $\mathcal{L}_n=\frac{1}{s_n}\int_0^{s_n}\phi(t,x)dt$. Since $s_n$ tends to infinity, then it may be partitioned into subsequences of the form $s_{n_j}=T_{a+n_jk}+ L_{n_j} \tau_{a+n_jk}$ for each $a\in{{\rm\bf Z}}_k$, and some $ L_{n_j}\in[0,1]$, as shown in Figure \[sequence2\]. The arguments above, applied to this subsequence, show that the accumulation points of $\mathcal{L}_{n_j}$ lie in the segment connecting $A_a$ to $A_{a+1}$. Therefore, since $\mathcal{L}_n$ converges, there are two possibilities. The first is that all the $s_n$ (except possibly finitely many) are of the form above for a fixed $a\in{{\rm\bf Z}}_k$, and hence $A$ lies in the segment connecting $A_a$ to $A_{a+1}$. The second possibility is that all the $s_n$ (except maybe a finite number) are of one of the forms $$s_{n_j}=T_{a+n_jk}+ L_{n_j} \tau_{a+n_jk}\qquad \text{or} \qquad s_{n_i}=T_{a+1+n_ik}+ L_{n_i} \tau_{a+1+n_ik},$$ and that $A=A_{a+1}$. In both cases, the accumulation point of the time average will lie on the boundary of the polygon. Finally, when $\delta\to 1$, the expressions in Lemma \[Colinear\] become $\mu_{a+1} den(A_{a+1}) =den (A_a)$ and $\mu_{a+1} num(A_{a+1})=num(A_a)$, hence $$A_{a}=\frac{num(A_a)}{den (A_a)}=\frac{\mu_{a+1} num(A_{a+1})}{\mu_{a+1} den(A_{a+1}) }=A_{a+1}$$ and the polygon collapses to a point at the same time as $\Gamma_0$ stops being attracting.
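The mechanism behind Theorem \[Main1\] can be illustrated numerically in the reduced model where the trajectory spends $\tau_j$ units of time near $\mathcal{P}_{\langle j\rangle}$, the sojourn average is approximated by $\overline{x}_{\langle j\rangle}$, and transitions between nodes are neglected (as justified by Proposition \[density\]). The sketch below checks that the running time averages, sampled at the entry times $T_{a+nk}$, approach the vertices $A_a$ of (\[point1\]). Floquet data, $\varepsilon$ and the (planar) centres of gravity are all illustrative.

```python
import math

# Reduced model of Section (Organizing): the orbit spends tau_j near node
# j mod k, where its average position is approximated by xbar_{j mod k}.
c = [2.0, 1.5, 3.0]
e = [1.0, 1.2, 2.0]
xbar = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # planar slice of the xbar_a
k, eps = 3, 0.1

# flight times in log-space: ln(eps/z) is multiplied by delta_a at each node
L, taus = math.log(eps / 1e-3), []
for j in range(18):
    a = j % k
    taus.append(L / e[a])
    L *= c[a] / e[a]

# running time averages sampled at the entry times T_j
T, S = 0.0, [0.0, 0.0]
avg_at_entry = []
for j, tau in enumerate(taus):
    avg_at_entry.append(None if T == 0 else tuple(s / T for s in S))
    a = j % k
    S = [S[i] + tau * xbar[a][i] for i in range(2)]
    T += tau

def vertex(a):
    """Vertex A_a of (point1), with mu_{a+1} = c_a / e_{a+1}."""
    w, den, num = 1.0, 0.0, [0.0, 0.0]
    for l in range(k):
        b = (a + l) % k
        den += w
        num = [num[i] + w * xbar[b][i] for i in range(2)]
        w *= c[b] / e[(b + 1) % k]
    return tuple(x / den for x in num)
```

Sampling the running average at intermediate times instead traces out the sides of the triangle with vertices $A_1$, $A_2$, $A_3$, in accordance with the theorem.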
![The polygon in Theorem \[Main1\] with $k=3$: the accumulation points of the time average $\frac{1}{T} \int_0^T \phi(t,x) dt $ lie on the boundary of the triangle defined by $A_1,A_2$ and $A_3.$ []{data-label="scheme1"}](scheme1){height="6cm"}
Taking the observable as the projection on any component, the first main result of this paper may be stated as:
\[CorollaryHistoric\] If $f_0$ is a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]), then all points in the basin of attraction of $\Gamma_0$ have historic behaviour. In particular the set of initial conditions with historic behaviour has positive Lebesgue measure.
The points of $\Gamma_0$ do not have historic behaviour. Indeed, if $X\in\Gamma_0$ then either $X\in \mathcal{P}_a$ or $\phi(t,X)$ accumulates on $\mathcal{P}_a$ for some $a\in\{1,\ldots,k\}$. In both cases, $\lim_{T\to\infty}\frac{1}{T}\int_0^T\phi(t,X)dt=\overline{x}_a$. The previous proofs have been done for a piecewise continuous trajectory; when $t=T_a$, the trajectory jumps from $V_{a-1}$ to $V_a$, whereas the real solutions have a continuous motion from $V_{a-1}$ to $V_a$ along the corresponding heteroclinic connection, during a bounded interval of time. As shown in Proposition \[density\], the statistical limit set of $\Gamma_0$ is $\bigcup_{a=1}^k \mathcal{P}_a$ meaning that trajectories spend Lebesgue almost all time near the periodic solutions, and not along the connections. Therefore, the intervals in which the transition occurs do not affect the accumulation points of the time averages of the trajectories and the result that was shown for a piecewise continuous trajectory holds.
Persistence of historic behaviour {#Tangencies}
=================================
From now on, we discuss the differential equation $\dot{x}=f_\lambda(x)$ satisfying (P\[P1\])–(P\[P5\]), with $\lambda\ne 0$. In this case it was shown in Rodrigues *et al* [@Rodrigues] that the simple dynamics near $\Gamma_0$ jumps to chaotic behaviour near $\Gamma_\lambda$.
Invariant manifolds for $\lambda>0$ {#secTransition}
-----------------------------------
![For $\lambda$ close to zero, both $W^s_{loc}(\mathcal{P}_{a+1})\cap Out(\mathcal{P}_{a})$ and $W^u_{loc}(\mathcal{P}_{a})\cap In(\mathcal{P}_{a+1})$ are closed curves, given in local coordinates as the graphs of periodic functions; this is the expected unfolding from the coincidence of the invariant manifolds at $\lambda=0$.[]{data-label="Transitions"}](Transitions){height="4.5cm"}
We describe the geometry of the two-dimensional local invariant manifolds of $\mathcal{P}_{a}$ and $\mathcal{P}_{a+1}$ for $\lambda\neq 0$, under the assumptions (P\[P1\])–(P\[P5\]). For this, let $f_\lambda$ be an unfolding of $f_0$ satisfying (P\[P1\])–(P\[P5\]). For $\lambda\ne 0$, we introduce the notation:
- $(O_a^1,0)$ and $(O_a^2,0)$ with $0<O_a^1<O_a^2<2\pi$ are the coordinates of the two points where the connections $[\mathcal{P}_{a} \rightarrow \mathcal{P}_{a+1}]$ of Properties (P\[P4\])–(P\[P5\]) meet $Out(\mathcal{P}_{a})$;
- $(I_a^1,0)$ and $(I_a^2,0)$ with $0<I_a^1<I_a^2<2\pi$ are the coordinates of the two points where $[\mathcal{P}_{a-1} \rightarrow \mathcal{P}_a]$ meets $In(\mathcal{P}_{a})$;
- $(O_a^i,0)$ and $(I_{a+1}^i,0)$ are on the same trajectory for each $i \in \{1,2\}$ and $a\in{{\rm\bf Z}}_k$.
By (P\[P5\]), for small $\lambda>0$, the curves $W^s_{loc}(\mathcal{P}_{a+1})\cap Out(\mathcal{P}_a)$ and $W^u_{loc}(\mathcal{P}_{a})\cap In(\mathcal{P}_{a+1})$ can be seen as graphs of smooth periodic functions, for which we make the following conventions (see Figure \[Transitions\]):
- $W^s_{loc}(\mathcal{P}_{a+1})\cap Out(\mathcal{P}_{a})$ is the graph of $y=g_a^\lambda (\varphi)$, with $g_a^\lambda(O_a^i)=1$, for $i \in \{1,2\}$ and $a\in{{\rm\bf Z}}_k$.
- $W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal{P}_{a})$ is the graph of $y=h_{a}^\lambda (\theta)$, with $h_{a}^\lambda(I_{a}^i)=0$, for $i \in \{1,2\}$ and $a\in{{\rm\bf Z}}_k$.
- omitting the superscript $\lambda$, we have: $h_a^\prime(I_a^1)>0$, $h_a^\prime(I_a^2)<0$, $g_a^\prime(O_a^2)>0$ and $g_a^\prime(O_a^1)<0$, for $i \in \{1,2\}$.
The two points $(O_a^1,0)$ and $(O_a^2,0)$ divide the closed curve $W^s_{loc}(\mathcal{P}_{a+1})\cap Out(\mathcal{P}_{a})$ in two components, corresponding to different signs of $r_a-1$. With the conventions above, we get $g_a^\lambda(\varphi)>1$ for $\varphi \in\left(O_a^2,O_a^1\right)$. More specifically, the region in $Out(\mathcal{P}_{a})$ between $W_{loc}^s(\mathcal{P}_{a+1})$ and $W_{loc}^u(\mathcal{P}_{a})$ given by $$A=\{(\varphi_a,r_a)\in Out(\mathcal{P}_{a}): 1<r_a<g_a^\lambda(\varphi_a) \}$$ is mapped by $\Psi_a$ into the lower ($z_a<0$) part of $In(\mathcal{P}_{a+1})$. Similarly, the region $$B=\{(\varphi_a,r_a)\in Out(\mathcal{P}_{a}): r_a>1\}\backslash A=\{(\varphi_a,r_a)\in Out(\mathcal{P}_{a}): 1<r_a\ \mbox{ and }\ g_a^\lambda(\varphi_a)<r_a \}$$ (see Figure \[figRegionsInOut\]) is mapped into the $z_a>0$ component of $In(\mathcal{P}_{a+1})$.
![The component $A$ of $Out(\mathcal{P}_{a})$ between $W_{loc}^s(\mathcal{P}_{a+1})$ and $W_{loc}^u(\mathcal{P}_{a})$ is mapped by $\Psi_a$ into the lower ($z_{a+1}<0$) part of $In(\mathcal{P}_{a+1})$, its complement $B$ in the $r_a>1$ component of $Out(\mathcal{P}_{a})$ is mapped by $\Psi_a$ into the upper ($z_{a+1}>0$) part of $In(\mathcal{P}_{a+1})$.[]{data-label="figRegionsInOut"}](New_AB1){height="4cm"}
The maximum value of $g_a^\lambda (\varphi)$ is attained at some point $$(\varphi_a,r_a)= (\varphi_a^O (\lambda),M_a^O(\lambda)) \qquad \text{with} \qquad O_a^2<\varphi_a^O(\lambda)<O_a^1.$$ We denote by $M^I_a(\lambda)$ the maximum value of $h_a^\lambda$.
Geometrical preliminaries
-------------------------
We will need to introduce some definitions.
![A spiral is defined on a covering of the annulus $Out(\mathcal{P}_a)$ by a smooth curve that turns around the annulus infinitely many times as its radius tends to $\nu\in[0,1]$. It contains a fold point and a point of maximum radius. []{data-label="spiral1"}](helix2){height="6cm"}
\[spiral\_def\] A *spiral* on the annulus $\mathcal{A}$ *accumulating on the circle* $r=\nu$ is a curve on $\mathcal{A}$, without self-intersections, that is the image, by the parametrisation $(\varphi,r )$ of the annulus, of a continuous map $H:(b,c)\rightarrow {{\rm\bf R}}\times[0,1]$, $$H(s)=\left(\varphi(s),r(s)\right),$$ such that:
1. \[monotonicity\] there are $\tilde{b}\le \tilde{c}\in (b,c)$ for which both $\varphi(s)$ and $r(s)$ are monotonic in each of the intervals $(b,\tilde{b})$ and $(\tilde{c},c)$;
2. \[turns\] either $\lim_{s\to b^+}\varphi(s)=\lim_{s\to c^-}\varphi(s)=+\infty$ or $\lim_{s\to b^+}\varphi(s)=\lim_{s\to c^-}\varphi(s)=-\infty,$
3. \[accumulates\] $\lim_{s\to b^+}r(s)=\lim_{s\to c^-}r(s)=\nu$.
It follows from the assumptions on the function $\varphi(s)$ that it has either a global minimum or a global maximum, and that $r(s)$ always has a global maximum. The point where the map $\varphi(s)$ has a global minimum or a global maximum will be called a *fold point* of the spiral. The global maximum value of $r(s)$ will be called the *maximum radius* of the spiral.
Geometry of the transition maps $\Phi_a$ {#secImageInvariantManifs}
----------------------------------------
\[Structures\] Under the conventions of Section \[secTransition\], for each $a \in {{\rm\bf Z}}_k$, the local map $\Phi_a$ transforms the part of the graph of $h_a$ with $I_a^1<\theta<I_a^2$ into a spiral on $Out(\mathcal{P}_{a})$ accumulating on the circle $Out(\mathcal{P}_{a}) \cap W^{u}_{loc}(\mathcal{P}_{a})$. This spiral has maximum radius $1+ \varepsilon^{1-\delta_a}(M_a^I)^{\delta_a}$; it has a fold point that, as $\lambda$ tends to zero, turns around $Out(\mathcal{P}_{a})$ infinitely many times.
**Proof:** The curve $\Phi_a\left(W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a \right)$ is given by $H_a(\theta)= \Phi_a(\theta, h_a(\theta)) =(\varphi_a(\theta), r_a(\theta))$ where: $$\label{spiral_expression}
H_a(\theta)= \Phi_a(\theta, h_a(\theta)) = \left(\theta-\frac{1}{e_a}\ln\left(\frac{h_a(\theta)}{\varepsilon}\right),
1+ \varepsilon \left(\frac{h_a(\theta)}{\varepsilon}\right)^{\delta_a}\right)=
(\varphi_a(\theta), r_a(\theta)) .$$ From this expression it follows immediately that $$\lim_{\theta \rightarrow I_a^1} \varphi_a(\theta)=\lim_{\theta \rightarrow I_a^2} \varphi_a(\theta)= +\infty
\quad\mbox{and}\quad
\lim_{\theta \rightarrow I_a^1} r_a(\theta)=\lim_{\theta \rightarrow I_a^2} r_a(\theta)= 1$$ hence, conditions [*\[turns\])*]{} and [*\[accumulates\])*]{} of the definition of spiral hold. Condition [*\[monotonicity\])*]{} holds trivially near $I_a^2$ since $h_a'(I_a^2) < 0$, hence there is $\tilde{I_a^2}< I_a^2$ such that $\varphi_a'(\theta)>1$ for all $\theta \in \left( \tilde{I_a^2}, I_a^2\right)$. On the other hand, since $h_a'(I_a^1) > 0$ and $\lim_{\theta \rightarrow I_a^1}h_a(\theta)=0$, there is $\tilde{I_a^1}>I_a^1$ such that $\varphi_a'(\theta)<0$ for all $\theta \in \left(I_a^1, \tilde{I_a^1}\right)$.
The statement about the maximum radius follows immediately from (\[spiral\_expression\]) and the conventions of Section \[secTransition\].
Let $H_a(\theta_a^\star(\lambda))$ be a fold point of the spiral. Its first coordinate is given by $\varphi_{a}^\star=\theta_a^\star-\frac{1}{e_a}\ln\left(\frac{h_a(\theta_a^\star(\lambda))}{\varepsilon}\right)$ and $h_a(\theta_a^\star)\le M^I_a(\lambda)$. Since $f_\lambda$ unfolds $f_0$, then $\lim_{\lambda\to 0}M^I_a(\lambda)=0$ and therefore $\lim_{\lambda\to 0}\varphi_{a}^\star=+\infty$. Hence, the fold point turns around the cylinder $Out(\mathcal{P}_{a})$ infinitely many times, as $\lambda$ tends to zero.
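The behaviour established in this proof can be checked on model data. In the sketch below the height function $h_a$ is a hypothetical stand-in ($M\sin\theta$ on $(0,\pi)$, so $I_a^1=0$, $I_a^2=\pi$ and the maximum height is $M$), as are the chosen values of $e_a$, $\delta_a$ and $\varepsilon$; the computation only illustrates the expression for $H_a$: the angular coordinate blows up at the endpoints, $r$ tends to $1$ there, and the maximum radius agrees with $1+\varepsilon^{1-\delta_a}(M_a^I)^{\delta_a}$.

```python
import math

# Hypothetical model data (not taken from any specific vector field):
e_a, delta_a, eps, M = 1.0, 1.2, 0.1, 0.01

def H(theta):
    """Image of the graph point (theta, h(theta)) under the local map Phi_a,
    following the expression for H_a in the proof."""
    h = M * math.sin(theta)  # model height function with I_a^1 = 0, I_a^2 = pi
    phi = theta - math.log(h / eps) / e_a
    r = 1.0 + eps * (h / eps) ** delta_a
    return phi, r

# Near an endpoint the angular coordinate is large and r is close to 1,
# so the curve winds around the annulus while accumulating on r = 1 ...
phi_end, r_end = H(1e-9)

# ... and the maximum radius, attained where h is maximal (theta = pi/2),
# matches the closed formula 1 + eps^(1 - delta_a) * M^delta_a.
phi_mid, r_max = H(math.pi / 2)
predicted_max = 1.0 + eps ** (1 - delta_a) * M ** delta_a
```

Evaluating `H` along a grid of $\theta$ values traces the spiral of Figure \[spiral1\].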
A set of one-parameter families of vector fields
------------------------------------------------
For any unfolding $f_\lambda$ of $f_0$, as we have seen in Sections \[secTransition\] and \[secImageInvariantManifs\], the maximum radius $M_a^O(\lambda)$ of $W^s_{loc}(\mathcal{P}_{a+1})\cap Out(\mathcal{P}_{a})$, and the maximum height $M_a^I(\lambda)$ of $W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal{P}_{a})$, satisfy: $$\lim_{\lambda\to 0} M_a^I(\lambda)=0\qquad
\lim_{\lambda\to 0} \left(1+ \varepsilon^{1-\delta_a}(M_a^I(\lambda))^{\delta_a}\right)=\lim_{\lambda\to 0} M_a^O(\lambda)=1.$$
We make the additional assumption that $1+ \varepsilon^{1-\delta_a}(M_a^I(\lambda))^{\delta_a}$ tends to $1$ faster than $M_a^O(\lambda)$ does, for at least one $a \in{{\rm\bf Z}}_k$. This condition defines the open set ${\mathcal C}$ of generic unfoldings $f_\lambda$ that we need for the statement of Theorem \[teorema tangency\]. More precisely, $$\label{eqDefineC}
{\mathcal C}=\left\{
f_\lambda \mbox{ satisfying (P\ref{P1}) -- (P\ref{P5})}: \exists a\in{{\rm\bf Z}}_k\
\exists \lambda_0>0 :\ \
0<\lambda<\lambda_0\ \Rightarrow
1+ \varepsilon^{1-\delta_a}(M_a^I(\lambda))^{\delta_a}<M_a^O(\lambda)
\right\}.$$ The set ${\mathcal C}$ is open in the Whitney $C^2$ topology.
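A toy illustration of the membership condition, with hypothetical scaling laws for the two maxima (not derived from any specific unfolding): taking $M_a^I(\lambda)=\lambda$, $M_a^O(\lambda)=1+\sqrt{\lambda}$ and $\delta_a=1.2>1$, the left-hand side approaches $1$ faster than the right-hand side, so the defining inequality of ${\mathcal C}$ holds for all small $\lambda>0$.

```python
# Hypothetical scalings: M_I(lam) = lam and M_O(lam) = 1 + sqrt(lam).
# With delta_a > 1 the term eps**(1-delta_a) * lam**delta_a is o(sqrt(lam)),
# so the inequality defining the set C holds once lam is small enough.
eps, delta_a = 0.1, 1.2

def in_C(lam):
    lhs = 1.0 + eps ** (1 - delta_a) * lam ** delta_a   # 1 + eps^(1-d) M_I^d
    rhs = 1.0 + lam ** 0.5                              # M_O(lam)
    return lhs < rhs

# Check the condition along a sequence lam = 10^(-1), ..., 10^(-9):
holds_for_small = all(in_C(10.0 ** (-k)) for k in range(1, 10))
```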
Heteroclinic tangencies
-----------------------
\[teorema tangency\] For any family $f_\lambda$ of vector fields in the set ${\mathcal C}$ defined in (\[eqDefineC\]) there is $a\in{{\rm\bf Z}}_k$ such that:
1. \[tangentManifs\] there is a sequence $\lambda_i>0$ of real numbers with $\lim_{i\to\infty}\lambda_i=0$ such that for $\lambda=\lambda_i$ the manifolds $W^u(\mathcal{P}_{a-1})$ and $W^s(\mathcal{P}_{a+1})$ are tangent; for $\lambda>\lambda_i$, there are two heteroclinic connections in $W^u(\mathcal{P}_{a-1})\cap W^s(\mathcal{P}_{a+1})$ that collapse into the tangency at $\lambda=\lambda_i$ and then disappear for $\lambda<\lambda_i$;
2. \[accumulatingTangs\] arbitrarily close to the connection $[\mathcal{P}_{a-1}\to \mathcal{P}_{a}]$ there are hyperbolic periodic solutions at points $x_i$ and infinitely many values $\lambda_{n,i}$ for which the periodic solution has a homoclinic tangency of its invariant manifolds.
![When $\lambda$ decreases, the fold point of the spiral $\Phi_a\left(W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a) \right)$ moves to the right and for $\lambda=\lambda_i$, it is tangent to $W^s(\mathcal{P}_{a+1})$ creating a heteroclinic tangency. []{data-label="tangencies1"}](tangencies1){height="4cm"}
Note that for $k=2$, the tangency of assertion [*\[tangentManifs\].*]{} is a homoclinic connection.
**Proof:** Let $\theta_a=\theta_a^\star(\lambda)$ correspond to a fold point of the spiral $\Phi_a\left(W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a) \right)$ given by (\[spiral\_expression\]). Since $f_\lambda\in{\mathcal C}$, using Proposition \[Structures\] and (\[eqDefineC\]), for $\lambda< \lambda_0$ all points in the spiral have second coordinate less than $M_a^O$; this is true, in particular, for the fold point $H_a(\theta_a^\star(\lambda))$. Also by Proposition \[Structures\] the fold point turns around $Out(\mathcal{P}_{a})$ infinitely many times as $\lambda$ goes to zero. This means that there is a positive value $\lambda_A<\lambda_0$ such that $H_a(\theta_a^\star(\lambda_A))$ lies in the region $A$ that will be mapped to $z_a<0$ (see Section \[secTransition\]) and there is a positive value $\lambda_B<\lambda_A$ such that $H_a(\theta_a^\star(\lambda_B))$ lies in the region $B$ that goes to $z_a>0$, as in Figure \[tangencies1\]. Therefore, the curve $H_a(\theta_a^\star(\lambda))$ is tangent to the graph of $g_a^\lambda$ at some point $H_a(\theta_a^\star(\lambda_1))$ with $\lambda_1\in\left( \lambda_B,\lambda_A\right)$.
As $\lambda$ decreases from $\lambda_B$, the fold point enters and leaves the region $A$, creating a sequence of tangencies to the graph of $g_a^\lambda$. At each tangency, two points where $H_a(\theta_a^\star(\lambda))$ intersects the graph of $g_a^\lambda$ come together, corresponding to the pair of transverse heteroclinic connections that collapse at the tangency. This completes the proof of [*\[tangentManifs\].*]{}
For assertion [*\[accumulatingTangs\].*]{}, note that by the results of [@Rodrigues] there is a suspended horseshoe near the connection $[\mathcal{P}_{a-1}\to \mathcal{P}_{a}]$. Hence, there are hyperbolic fixed points of the first return map to $In(\mathcal{P}_{a-1})$ arbitrarily close to the connection; let $x_i$ be one of them. Denote by $\eta_a$ the map $\Psi_{a } \circ \Phi_a$. The image by $\Phi_{a-1}$ of an interval contained in $W^u(x_i)$ accumulates on $W^u(\mathcal{P}_{a-1})$. In particular, it is mapped by $\eta_a\circ\Phi_{a-1}$ into infinitely many spirals in $Out(\mathcal{P}_{a})$, each one having a fold point — see Figure \[homoclinicTangency\]. Since the fold points turn around $Out(\mathcal{P}_{a})$ infinitely many times as $\lambda$ varies, this curve is tangent to $W^s(x_i)$ at a sequence $\lambda_{n,i}$ of values of $\lambda$.
![The unstable manifold of a fixed point $x_i$ of the first return map to $In(\mathcal{P}_{a-1})$ accumulates on $W^u(\mathcal{P}_{a-1})$ and defines a family of curves in $Out(\mathcal{P}_{a})$ with a fold point. When $\lambda$ decreases, the fold point moves to the right and for $\lambda=\lambda_{n,i}$, it is tangent to $W^s(x_i)$ creating a homoclinic tangency. []{data-label="homoclinicTangency"}](Tangency2){width="11cm"}
The hypothesis (P\[P3\]) in the definition of $\mathcal C$ for Theorem \[teorema tangency\], that the family $f_\lambda$ unfolds the degeneracy $f_0$, may be replaced by the assumption that the flow of $f_\lambda$ turns in opposite directions around two successive nodes $\mathcal{P}_{a}$ and $\mathcal{P}_{a+1}$, as in [@LR3], [*ie*]{}, by the assumption that two successive nodes have different chirality. This is because, in the proof of Theorem \[teorema tangency\], the heteroclinic tangency is obtained from the presence of a fold point in the curve $\Phi_a\left(W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a) \right)$ and from the control of the angular coordinate $\varphi$ of the fold. This is the content of Proposition \[Structures\], where we use the fact that $W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a) $ is the graph of a function with a maximum, a consequence of (P\[P3\]) and (P\[P4\]). If we assume instead that successive nodes have different chirality as in [@LR3], then the image by $\Phi_a$ of the curve $W^u_{loc}(\mathcal{P}_{a-1})\cap In(\mathcal {P}_a) $ will have infinitely many fold points whose coordinates $\varphi$ will form a dense subset of $[0,2\pi]$, and hence, as in [@LR3], an arbitrarily small change in the parameter $\lambda$ will create a heteroclinic tangency.
Historic behaviour
------------------
The next result is the core of this section. It locates trajectories with historic behaviour $C^2$-close to the unfolding of a degenerate equation, as a consequence of the tangencies found in Theorem \[teorema tangency\].
\[teoremaHistoric\] For any family $f_\lambda$ of vector fields in the open set ${\mathcal C}$ defined in (\[eqDefineC\]) there are sequences $0<\xi_i<\zeta_i<\xi_{i+1}$, with $\lim_{i\rightarrow +\infty}\zeta_i=0$, such that for each $\lambda$ in $(\xi_i,\zeta_i)$, there are vector fields arbitrarily close to $f_\lambda$ in the $C^2$-topology for which there is an open set of initial conditions with historic behaviour.
In the proof we will use the following concept:
Let $M$ be a smooth surface and let $\mathrm{Diff}^r(M)$ be the set of its local diffeomorphisms of class $C^r$, $r\ge 2$. An open subset $\mathcal{N}\subset \mathrm{Diff}^r(M)$ is a *Newhouse domain* if any element of $\mathcal{N}$ is $C^r$-approximated by a diffeomorphism $g$ with a homoclinic tangency associated with a dissipative saddle fixed point $p_g$, and moreover $g$ has a $C^r$-persistent tangency associated with some basic sets $\Lambda_g$ containing $p_g$ in the sense that there is a $C^r$-neighbourhood of $g$ any element of which has a homoclinic tangency for the continuation of $\Lambda_g$.
Newhouse has shown in [@Newhouse79] that any $C^2$ diffeomorphism containing a homoclinic tangency to a dissipative saddle point lies in the closure of a Newhouse domain in the $C^2$ topology.
We will need the definition of historic behaviour for diffeomorphisms:
Let $F$ be a $C^2$ diffeomorphism on a smooth surface $M$. We say that the forward orbit $
\{ x,F(x),F^2(x),\ldots, F^j(x),\ldots\}
$ has *historic behaviour* if the average $$\label{discreteHistoric}
\frac{1}{n+1}\sum_{j=0}^n \delta_{F^j(x)}$$ does not converge as $n\to+\infty$ in the weak topology, where $\delta_Z$ is the Dirac measure on $M$ supported at $Z\in M$.
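A toy illustration of this definition (a symbolic itinerary, not an orbit of any map in this paper): a $0/1$ sequence whose runs double in length mimics an orbit that lingers alternately near two states for ever longer times. The partial averages of the characteristic function of $\{1\}$ then oscillate between $1/3$ and $2/3$ instead of converging, which is exactly the failure of convergence required for historic behaviour.

```python
# Build an itinerary 1, 0 0, 1 1 1 1, 0 0 0 0 0 0 0 0, ...
# (runs of doubling length, alternating symbols).
def itinerary(n_runs):
    seq, symbol, length = [], 1, 1
    for _ in range(n_runs):
        seq.extend([symbol] * length)
        symbol = 1 - symbol
        length *= 2
    return seq

seq = itinerary(20)          # total length 2**20 - 1
averages, total = [], 0
for j, s in enumerate(seq):  # Birkhoff-like partial averages
    total += s
    averages.append(total / (j + 1))

# Sampled at the ends of the runs, the averages approach 2/3 (after a
# run of ones) and 1/3 (after a run of zeros), so they do not converge.
upper = averages[2 ** 19 - 2]   # end of run 19, a run of ones
lower = averages[-1]            # end of run 20, a run of zeros
```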
**Proof of Theorem \[teoremaHistoric\]:** For $\lambda=0$ and $a\in {{\rm\bf Z}}_k$, the derivative of the first return map to $In(\mathcal{P}_{a})$ has determinant of the form $Cz_a^{\delta-1}$ for some constant $C>0$. Thus, for sufficiently small $\lambda>0$, and at points near $W^s(\mathcal{P}_{a})$, the first return map to $In(\mathcal{P}_{a})$ is also contracting, since the determinant of its derivative has absolute value less than 1. Moreover, the family $f_\lambda$ unfolds each one of the homoclinic tangencies of Theorem \[teorema tangency\] generically. Hence the arguments of Newhouse, Palis & Takens and Yorke & Alligood [@Newhouse79; @PT; @YA] revived in [@LR2015] may be applied here to show that near each one of the homoclinic tangencies there is a sequence of intervals $(\xi_i,\zeta_i)$ in the set of parameters $\lambda$ corresponding to a Newhouse domain.
By Theorem A of Kiriki & Soma [@KS], each Newhouse domain for the first return map is contained in the closure of the set of diffeomorphisms having an open set of points with historic behaviour. By Theorem \[teorema tangency\] the family $f_\lambda$ unfolds the heteroclinic tangencies generically. Hence, from the results of [@KS], it follows that for each $\lambda\in(\xi_i,\zeta_i)$, the first return map $F_a^\lambda$ may be approximated in the $C^2$ topology by maps $\widehat{F}$ defined in $In(\mathcal{P}_{a})$ for which we may find an open connected subset $\mathcal{U}\subset In(\mathcal{P}_{a})$ and two sequences of integers, $(a_j)_{j\in {{\rm\bf N}}}$ and $(b_j)_{j\in {{\rm\bf N}}}$, such that, for each $x\in\mathcal{U}$, the limits of the averages (\[discreteHistoric\]) for $\widehat{F}$ along the two sequences are different. In particular, there exists a set $A\subset In(\mathcal{P}_{a})$, such that $$\forall x \in \mathcal{U}, \qquad
\lim_{k \rightarrow +\infty} \frac{1}{a_k+1}\sum_{j=0}^{a_k} \delta_{\widehat{F}^j(x)}(A)
\neq \lim_{k \rightarrow +\infty} \frac{1}{b_k+1}\sum_{j=0}^{b_k} \delta_{\widehat{F}^j(x)}(A)$$ or, equivalently, $$\label{Hist_Beha1}
\forall x \in \mathcal{U}, \qquad
L=\lim_{k \rightarrow +\infty} \frac{1}{a_k+1}\sum_{j=0}^{a_k} \chi_A(\widehat{F}^{j}(x))
\neq \lim_{k \rightarrow +\infty} \frac{1}{b_k+1}\sum_{j=0}^{b_k} \chi_A(\widehat{F}^{j}(x)) = \widehat{L}$$ where $\chi_A$ denotes the characteristic function on $A$. According to the proof of [@KS], the two fixed points of the horseshoe that arises near the tangency will be visited by orbits of points in the set $\mathcal{U}$.
Since the maps $\widehat{F}$ are close to the first return map in the $C^2$ topology, they may be seen as the first return maps to a vector field $g$ that is $C^2$-close to $f_\lambda$ — see, for instance Remark 2 in Pugh and Robinson [@PR Section 7A]. It remains to show that solutions to $\dot{x}=g(x)$ have historic behaviour in the sense of Definition \[historicDef\].
Let $\tau(x)$ be the time of first return of $x\in In (\mathcal{P}_a)$, *ie* $\tau(x)>0$ and $\phi(\tau(x),x)\in In (\mathcal{P}_a)$ where $\phi$ is the flow associated to $\dot{x}=g(x)$. Since $\mathcal{U}$ is connected, taking its closure $\overline{\mathcal{U}}$ compact and sufficiently small, $\tau(x)$ is approximately constant on $\overline{\mathcal{U}}$. Rescaling the time $t$ we may suppose $\tau(x)\equiv 1$.
Given $b>0$, let $V_b=\{\phi(t,x):\ -b<t<b, \ x\in In(\mathcal{P}_a)\}$. For $0<c<1$ and $\varepsilon>0$ sufficiently small, let $\psi: {{\rm\bf R}}^3 \rightarrow [0,1]$ be of class $C^k$, $k\ge 2$, such that $\psi=$1 on $\overline{V_c}$ and $\psi=0$ outside $V_{c+\varepsilon}$. Let $\mathcal{S}(A)$ be the saturation of $A$ by the flow $\phi$, given by $\mathcal{S}(A)=\{\phi(t,x):\ t\in{{\rm\bf R}}, \ x\in A\}$. Define the observable $H$ by $H(x)=\psi(x) \chi_{\mathcal{S}(A)}(x)$. For $x\in\mathcal{U}$ we have: $$\begin{aligned}
\int_0^{a_{k}+1}H(\phi(t,x)) dt &=& \int_{0}^{c} H(\phi(t,x)) dt + \sum_{j=1}^{a_k} \int_{j-c}^{j+c} H(\phi(t,x)) dt + \int_{a_{k}+1-c}^{a_{k}+1} H(\phi(t,x)) dt +o(\varepsilon) \\
&=&c \chi_A(\widehat{F}^{0}(x)) +2c\sum_{j=1}^{a_k} \chi_A(\widehat{F}^{j}(x)) +c \chi_A(\widehat{F}^{a_{k}+1}(x)) +o(\varepsilon) .\end{aligned}$$ Hence $$\frac{1}{a_{k}+1}\int_0^{a_{k}+1}H(\phi(t,x)) dt =
\frac{2c}{a_{k}+1}\sum_{j=0}^{a_k} \chi_A(\widehat{F}^{j}(x))
- \frac{c}{a_{k}+1} \chi_A(x)+\frac{c}{a_{k}+1} \chi_A(\widehat{F}^{a_{k}+1}(x))$$ where $$\lim_{k \to +\infty} \frac{c}{a_{k}+1}\chi_A(x)=0
\quad\mbox{and}\quad
\lim_{k\to +\infty} \frac{c}{a_{k}+1} \chi_A(\widehat{F}^{a_{k}+1}(x))=0 .$$ Therefore, $$\lim_{k \to +\infty} \frac{1}{a_{k}+1}\int_0^{a_{k}+1}H(\phi(t,x)) dt =
\lim_{k\to +\infty} \frac{2c}{a_{k}+1}\sum_{j=0}^{a_k} \chi_A(\widehat{F}^{j}(x)) +o(\varepsilon)=
2c L + o(\varepsilon)$$ where the last equality follows from (\[Hist\_Beha1\]). Similarly, $$\lim_{k \to +\infty} \frac{1}{b_{k}+1}\int_0^{b_{k}+1}H(\phi(t,x)) dt =2c \widehat{L} + o(\varepsilon).$$ Since $L\neq \widehat{L}$, then for sufficiently small $\varepsilon$ and for all $x$ in the open set $\mathcal{S}(\mathcal{U})$ we have: $$\lim_{k \rightarrow +\infty} \frac{1}{a_k+1} \int_0^{a_k+1} H(\phi(t,x)) dt \neq \lim_{k \rightarrow +\infty} \frac{1}{b_k+1} \int_0^{b_k+1} H(\phi(t,x)) dt .$$
It follows that for each $\lambda$ in $(\xi_i,\zeta_i)$ there are vector fields $g$ arbitrarily close to $f_\lambda$ in the $C^2$-topology such that there is an open set of initial conditions for which the solution of $\dot x=g(x)$ has historic behaviour, as claimed.
In the proof of Theorem \[teoremaHistoric\], the conditions defining the set $\mathcal C$ are only used to obtain Theorem \[teorema tangency\]. Hence, for Theorem \[teoremaHistoric\], condition (P\[P3\]) may be replaced in the definition of $\mathcal C$ by the assumption that two successive nodes have different chirality, as remarked after the proof of Theorem \[teorema tangency\].
Heteroclinic tangencies also create new tangencies near them in phase space and for nearby parameter values. Based on [@Rodrigues2015; @Takens94], it should be possible to obtain a topological interpretation of the asymptotic properties of these non-converging time averages and obtain a complete set of moduli for the attracting cycle.
An example {#Example}
==========
In this section we construct a family of vector fields in ${{\rm\bf R}}^3$ satisfying properties (P\[P1\])–(P\[P5\]). Thus, via Theorem \[teoremaHistoric\], we provide an explicit example where trajectories with historic behaviour have positive Lebesgue measure. Our example relies on Bowen’s example described in [@Takens94]. This is a vector field in the plane with structurally unstable connections. We use the techniques developed by Aguiar *et al* [@ACL06; @Rodrigues] combined with symmetry breaking, to lift Bowen’s example to a vector field in ${{\rm\bf R}}^3$ with periodic solutions having robust connections arising from transverse intersections of invariant manifolds.
The starting point
------------------
Consider the differential equation $(\dot x,\dot y)=g(x,y)$ given by $$\label{example 1}
\left\{
\begin{array}{l}
\dot{x}=-y \\
\dot{y}=x-x^3
\end{array}
\right.$$ that is equivalent to the second order equation $\ddot{x}=x-x^3$. Its equilibria are $O = (0,0)$ and $P^\pm=(\pm 1, 0)$. This is a conservative system, with first integral ${\displaystyle}{{\rm\bf v}}(x,y)= \frac{x^2 }{2}\left(1-\frac{x^2}{2} \right)+\frac{y^2}{2}$. From the graph of ${{\rm\bf v}}$ (see Figure \[example1\] (a)) it follows that the origin $O$ is a centre and the equilibria $P^\pm$ are saddles. The equilibria $P^\pm$ are contained in the ${{\rm\bf v}}$-energy level ${{\rm\bf v}}(x,y)=1/4$ hence there are two one-dimensional connections, one from $P^+$ to $P^-$ and another from $P^-$ to $P^+$. Denote this cycle by $\Gamma_1$. The region bounded by this cycle, that is filled by closed trajectories, will be called the *invariant fundamental domain*. For $(x,y)\ne(0,0)$ inside the fundamental domain we have $0\le{{\rm\bf v}}(x,y)<1/4$ and the boundary of the fundamental domain intersects the $x=0$ axis at the points $(0,\pm\sqrt{2}/2)$.
![(a): First integral and energy level of ${{\rm\bf v}}(x,0)$. (b) First perturbation. (c) Numerics for $\varepsilon=0$ and $\varepsilon =0.05$. []{data-label="example1"}](example1b){height="8cm"}
An expression for Bowen’s example {#subsecBowen}
---------------------------------
For a given $\varepsilon$, such that $0<\varepsilon<\!\!<1$, consider the following perturbation of (\[example 1\]): $$\label{example 2}
\left\{
\begin{array}{l}
\dot{x}=-y \\
\dot{y}=x-x^3 -\varepsilon y\left({{\rm\bf v}}(x,y)-\frac{1}{4}\right)
\end{array}
\right.$$
In the flow of equation (\[example 2\]), the cycle $\Gamma_1$ persists and is asymptotically stable with respect to the invariant fundamental domain.
**Proof:** The term $-({{\rm\bf v}}(x,y)-1/4)$ is zero on $\Gamma_1$ and positive in the interior of the fundamental domain. Therefore, the perturbing term $-y({{\rm\bf v}}(x,y)-1/4)$ has the same sign as $y$. Hence the heteroclinic connections $[P^+ \rightarrow P^-]$ and $[P^- \rightarrow P^+]$ are preserved and solutions starting away from the origin inside the fundamental domain approach the cycle when time goes to infinity as in Figure \[example1\] (b) and (c).
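The attraction can also be observed numerically, as in Figure \[example1\] (c). The sketch below (with an arbitrarily chosen fixed-step Runge–Kutta scheme, step size and initial condition, not necessarily those used for the figure) integrates the perturbed system and monitors the first integral ${{\rm\bf v}}$: for $\varepsilon=0$ it is conserved along the trajectory, while for $\varepsilon>0$ it increases towards the value $1/4$ attained on $\Gamma_1$.

```python
# Numerical sketch of the perturbed planar system:
#   xdot = -y,  ydot = x - x^3 - eps * y * (v(x, y) - 1/4).
def v(x, y):
    # First integral of the unperturbed system.
    return 0.5 * x * x * (1.0 - 0.5 * x * x) + 0.5 * y * y

def rhs(x, y, eps):
    return -y, x - x ** 3 - eps * y * (v(x, y) - 0.25)

def rk4_step(x, y, eps, dt):
    k1 = rhs(x, y, eps)
    k2 = rhs(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], eps)
    k3 = rhs(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], eps)
    k4 = rhs(x + dt * k3[0], y + dt * k3[1], eps)
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def integrate(eps, t_final, dt=0.01):
    # Start inside the fundamental domain, away from the origin.
    x, y, vs = 0.5, 0.0, []
    for _ in range(int(t_final / dt)):
        x, y = rk4_step(x, y, eps, dt)
        vs.append(v(x, y))
    return vs

v_conserved = integrate(0.0, 50.0)    # v stays at v(0.5, 0) = 0.109375
v_attracted = integrate(0.05, 400.0)  # v creeps up towards 1/4
```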
Translating the cycle
---------------------
For $z^2=y+1$, Bowen’s example takes the form $$\label{example 3}
\left\{
\begin{array}{l}
\dot{x}= 2z^2(1-z^2)\\
\dot{z}= z\left(x-x^3-\varepsilon \left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z^2-1)^2}{2}-\frac{1}{4}\right)(z^2-1)\right)
\end{array}
\right.$$
The following assertions hold for equation (\[example 3\]):
1. \[L1\] it is ${{\rm\bf Z}}_2$-equivariant under the reflection on the $z=0$ axis;
2. \[L2\] the $z=0$ axis is flow-invariant;
3. \[L3\] the dynamics of (\[example 3\]) in the $z>0$ half-plane is orbitally equivalent to that of (\[example 2\]) in the $y>-1$ half-plane.
**Proof:** Assertion [*1.*]{} is a simple calculation, and it implies assertion [*2.*]{} For [*3.*]{} with $z> 0$, use $z^2=y+1$ and $\dot{z}= \frac{\dot{y}}{2z}$ to put equation (\[example 2\]) in the form: $$\left\{
\begin{array}{l}
\dot{x}= 1-z^2\\
\dot{z}= \frac{1}{2z}\left(x-x^3-\varepsilon\left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z^2-1)^2}{2}-\frac{1}{4}\right)(z^2-1)\right) .
\end{array}
\right.$$ Multiplying both equations by the positive term $2z^2$ does not affect the phase portrait and thus (\[example 2\]) with $y>-1$ is orbitally equivalent to (\[example 3\]) with $z>0$.
The lifting {#secLifting}
-----------
Now we are going to use a technique presented in [@ACL06; @Melbourne; @Rodrigues] which consists essentially in three steps:
1. Start with a vector field on ${{\rm\bf R}}^2$ with a heteroclinic cycle where $\dim Fix (\gamma)=1$, $\gamma \in \mathbf{O}(2)$. The heteroclinic cycle involves two equilibria in $Fix (\gamma)$ and one-dimensional heteroclinic connections that do not intersect the line $Fix (\gamma)$.
2. Lift this to a vector field on ${{\rm\bf R}}^3$ by rotating it around $Fix (\gamma)$. This transforms one-dimensional heteroclinic connections into two-dimensional heteroclinic connections. The resulting vector field is ${{\mathbf {SO}}(2)}$-equivariant under a 3-dimensional representation of ${{\mathbf {SO}}(2)}$. The attracting character of the cycle is preserved by the lifting.
3. Perturb the vector field to destroy the ${{\mathbf {SO}}(2)}$-equivariance and so that the two-dimensional heteroclinic connections perturb to transverse connections.
Take $(x,z,\theta)$ to be cylindrical coordinates in ${{\rm\bf R}}^3$ with radial component $z$ and let $(x,z_1,z_2)=(x,z\cos \theta, z \sin \theta)$ be the corresponding Cartesian coordinates. Adding $\dot{\theta}=1$ to (\[example 3\]) we obtain: $$\label{example 5}
\left\{
\begin{array}{l}
\dot{x}= 2(1-z_1^2-z_2^2)(z_1^2+z_2^2)\\
\dot{z}_1= z_1 \left[x-x^3-\varepsilon(z_1^2+z_2^2-1)\left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z_1^2+z_2^2-1)^2}{2}-\frac{1}{4}\right)\right] -z_2\\
\\
\dot{z}_2= z_2 \left[x-x^3-\varepsilon(z_1^2+z_2^2-1)\left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z_1^2+z_2^2-1)^2}{2}-\frac{1}{4}\right)\right]+z_1 .
\end{array}
\right.$$
\[lemaGamma0\] The flow of for $\varepsilon>0$ has a heteroclinic cycle $\Gamma_0$ that satisfies (P\[P1\])–(P\[P3\]), consisting of two hyperbolic closed trajectories and two surfaces homeomorphic to cylinders. The cycle $\Gamma_0$ is asymptotically stable with respect to the lifting of the fundamental domain of .
**Proof:** We follow the arguments of [@ACL06; @Rodrigues]. The periodic solutions are defined by: $$\mathcal{P}_1: \quad x=1, \quad z_1^2+z_2^2=1
\qquad
\mbox{and}
\qquad
\mathcal{P}_2: \quad x=-1, \quad
z_1^2+z_2^2=1.$$ The connections are the lift of the one-dimensional connections, rotated around the fixed-point subspace of the symmetry. It follows that the heteroclinic connections are two-dimensional manifolds diffeomorphic to cylinders and a branch of the stable manifold of each periodic solution coincides with a branch of the unstable manifold of the other. As remarked above, the stability of the cycle is preserved.
Time averages
-------------
Theorem \[Main1\] applied to (\[example 5\]) says that if $\phi(t,X)\subset \mathcal{B}(\Gamma_0)$ is a non-trivial solution of the differential equation, then the accumulation points of the time average $\frac{1}{T} \int_0^T \phi(t,X) dt $ lie in the segment joining the points $$A_1= \left(\frac{e_2-{c_1}}{e_2+{c_1}}, 0, 0\right) \qquad \text{and} \qquad A_2= \left(\frac{e_1-{c_2}}{e_1+{c_2}}, 0, 0 \right) .$$ Since $e_1=c_1=e_2=c_2= \sqrt{2}$, the points $A_1$ and $A_2$ coincide, although the centres of gravity of $\mathcal{C}_1$ and $\mathcal{C}_2$ do not. The polygon ensured by Theorem \[Main1\] degenerates into a single point, the origin. This is in contrast to the example constructed in [@Rodrigues], where the polygon is degenerate because the centres of gravity of the nodes coincide. Usually, for initial conditions in the basin of attraction of heteroclinic cycles, the time averages do not converge as $t \rightarrow +\infty$. However, if the vector field has symmetry, some non-generic properties appear. To destroy this degeneracy and obtain historic behaviour, it is enough to replace the first integral ${{\rm\bf v}}$ of (\[example 1\]) by: $$\tilde{{{\rm\bf v}}}(x,y) = -(x-1)^2(x+1)^2\left(1+\frac{x^2}{2}+x^2 \right)+\frac{y^2}{2}.$$ For this case, the contracting and expanding eigenvalues at the two equilibria satisfy the conditions: $$\mu_1= \frac{\sqrt{5}}{\sqrt{3}}\qquad \text{and} \qquad \mu_2= \frac{\sqrt{3}}{\sqrt{5}}$$ and thus, for the lift of the corresponding system, $A_1\neq A_2$. In particular, the Birkhoff time averages do not converge, and thus the trajectories have historic behaviour. The next step, the second perturbation, will be performed for (\[example 5\]), constructed using the first integral ${{\rm\bf v}}$, but it could also be done starting with $\tilde{{{\rm\bf v}}}$.
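The degeneracy $A_1=A_2=(0,0,0)$ can be observed already in the planar system (\[example 2\]), whose $x$-coordinate carries the same balance of dwell times as the lift: the symmetric eigenvalues make the trajectory linger for comparable stretches near the two saddles, so the time average of $x$ hovers near $0$ even though $x$ itself sweeps out almost all of $[-1,1]$. A numerical sketch, with arbitrarily chosen $\varepsilon$, step size, integration time and initial condition:

```python
# Time average of x along a trajectory attracted to the cycle Gamma_1.
def rhs(x, y, eps):
    vxy = 0.5 * x * x * (1.0 - 0.5 * x * x) + 0.5 * y * y
    return -y, x - x ** 3 - eps * y * (vxy - 0.25)

def rk4_step(x, y, eps, dt):
    k1 = rhs(x, y, eps)
    k2 = rhs(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], eps)
    k3 = rhs(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], eps)
    k4 = rhs(x + dt * k3[0], y + dt * k3[1], eps)
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

x, y, dt, eps = 0.5, 0.0, 0.01, 0.1
xs = []
for _ in range(80_000):          # integrate up to t = 800
    x, y = rk4_step(x, y, eps, dt)
    xs.append(x)

x_avg = sum(xs) / len(xs)        # stays near 0 (the degenerate polygon)
x_max, x_min = max(xs), min(xs)  # yet x visits both saddle regions
```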
The second perturbation
-----------------------
We perturb (\[example 5\]) by adding to the equation for $\dot z_1$ a term depending on $\lambda$, as follows: $$\label{example 4}
\left\{
\begin{array}{l}
\dot{x}= 2(1-z_1^2-z_2^2)(z_1^2+z_2^2)\\
\dot{z}_1= z_1 \left[x-x^3-\varepsilon(z_1^2+z_2^2-1)\left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z_1^2+z_2^2-1)^2}{2}-\frac{1}{4}\right)\right] -z_2+\lambda(x^2-1)\\
\\
\dot{z}_2= z_2 \left[x-x^3-\varepsilon(z_1^2+z_2^2-1)\left(\frac{x^2}{2}- \frac{x^4}{4} + \frac{(z_1^2+z_2^2-1)^2}{2}-\frac{1}{4}\right)\right]+z_1 .
\end{array}
\right.$$ A geometric argument is used to show that the invariant manifolds of the periodic solutions of (\[example 4\]) intersect transversely.
For small $\lambda>0$ and $\varepsilon>0$, the flow of (\[example 4\]) has a heteroclinic cycle associated to two hyperbolic periodic solutions, $\mathcal{P}_1$ and $\mathcal{P}_2$, satisfying properties (P\[P1\])–(P\[P5\]).
**Proof:** Properties (P\[P1\])–(P\[P3\]) follow from the construction and from Lemma \[lemaGamma0\]. The perturbing term $\lambda(x^2-1)$ is zero on the planes $x=\pm 1$ that contain the cycles $\mathcal{C}_1$ and $\mathcal{C}_2$, so the periodic solutions persist. Since $\lambda(x^2-1)$ is negative for $-1<x<1$, when $\lambda$ increases from zero, $W^u(\mathcal{P}_a)$, $a\in{{\rm\bf Z}}_2$, moves towards smaller values of $z_1$, while $W^s(\mathcal{P}_{a+1})$ moves in the opposite direction. In particular, on the plane $x=0$, for $\lambda \neq 0$, each pair of invariant manifolds meets transversely at two points (Figure \[intersection\_example\]). Hence, there are two curves where each pair of invariant manifolds of the periodic solutions meets transversely and properties (P\[P4\])–(P\[P5\]) hold.
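The persistence of the periodic solutions can also be checked numerically: on the plane $x=1$ the perturbing term $\lambda(x^2-1)$ vanishes and the unit circle $z_1^2+z_2^2=1$ should remain invariant for any $\varepsilon,\lambda$. The sketch below (not part of the original argument; the values $\varepsilon=0.1$, $\lambda=0.05$ and the step size are arbitrary illustrative choices) integrates (\[example 4\]) with a hand-rolled RK4 scheme from a point of $\mathcal{P}_1$ and verifies that the trajectory stays on the expected periodic solution.

```python
def rhs(s, eps=0.1, lam=0.05):
    # vector field of (example 4); s = (x, z1, z2); eps, lam illustrative
    x, z1, z2 = s
    r2 = z1 * z1 + z2 * z2
    phi = x * x / 2 - x**4 / 4 + (r2 - 1) ** 2 / 2 - 0.25
    bracket = x - x**3 - eps * (r2 - 1) * phi
    return (2 * (1 - r2) * r2,
            z1 * bracket - z2 + lam * (x * x - 1),
            z2 * bracket + z1)

def rk4_step(s, h):
    # one classical Runge-Kutta step of size h
    k1 = rhs(s)
    k2 = rhs(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# start on the expected periodic solution P1: x = 1, z1^2 + z2^2 = 1
s = (1.0, 1.0, 0.0)
for _ in range(5000):              # integrate up to t = 5
    s = rk4_step(s, 0.001)
x, z1, z2 = s
assert abs(x - 1.0) < 1e-6
assert abs(z1 * z1 + z2 * z2 - 1.0) < 1e-6
```

On $x=1$, $r=1$ the bracket term vanishes and the flow reduces to $\dot z_1=-z_2$, $\dot z_2=z_1$, a rigid rotation, which is what the integration confirms.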
Let $f_\lambda$ be the family of vector fields of (\[example 4\]). Theorem \[teorema tangency\] says that for each $a\in{{\rm\bf Z}}_2$ there is a sequence of values of $\lambda$ for which $W^u(\mathcal{P}_a)$ is tangent to $W^s(\mathcal{P}_{a+1})$, and that for other values of $\lambda$ arbitrarily close to the connections there are closed trajectories with homoclinic tangencies. It follows that for these values of $\lambda$ the vector field $f_\lambda$ lies in the closure of a Newhouse domain. Theorem \[teoremaHistoric\] ensures that there are sequences $0<\xi_{i+1}<\zeta_{i+1}<\xi_i<\zeta_i$, with $\lim\zeta_i=0$, such that for each $\lambda$ in $(\xi_i,\zeta_i)$ there are vector fields $g$ arbitrarily close to $f_\lambda$ in the $C^2$-topology such that there is an open set of initial conditions for which the solution of $\dot x=g(x)$ has historic behaviour.
![Sketch of the invariant manifolds of $\mathcal{P}_1$ and $\mathcal{P}_2$ in . For $\lambda=0$ (left) pairs of branches of invariant manifolds of the closed trajectories coincide. For $\lambda \neq 0$ (right) each pair of invariant manifolds meets transversely at two curves corresponding to two points on the plane $x=0$. []{data-label="intersection_example"}](intersection_example){height="5cm"}
[99]{}
M. Aguiar, S. B. Castro, I. S. Labouriau, *Dynamics near a heteroclinic network,* Nonlinearity 18, 2005
M. A. D. Aguiar, S. B. S. D. Castro, I. S. Labouriau, *Simple Vector Fields with Complex Behavior*, Int. Jour. of Bif. and Chaos, Vol. 16, No. 2, 2006
M.A.D. Aguiar, I.S. Labouriau, A.A.P. Rodrigues, *Switching near a heteroclinic network of rotating nodes*, Dyn. Syst., Vol. 25(1), 75–95, 2010
R. Bowen, *Equilibrium states and Ergodic Theory of Anosov Diffeomorphisms*, Lecture Notes in Mathematics, 470, Springer-Berlin, 1975
D. R. J. Chilingworth, *Generic multiparameter bifurcation from a manifold*, Dynamical Systems, Vol. 15, 2, 101–137, 2000
E. Colli, E. Vargas, *Non-trivial wandering domains and homoclinic bifurcations*, Ergod. Th. & Dynam. Sys., 21, 1657–1681, 2001
P. Duarte, R. Fernandes, W. Oliva, *Dynamics of the attractor in the Lotka-Volterra equations*, J. Diff. Equations, 149, 143–189, 1998
A. Gaunersdorfer, *Time averages for heteroclinic attractors*, SIAM J. Math. Anal. 52 1476–89, 1992
M. I. Golubitsky, I. Stewart, D. G. Schaeffer, *Singularities and Groups in Bifurcation Theory* , Vol. **II**, Springer, 2000
J. Guckenheimer, P. Holmes, *Structurally stable heteroclinic cycles,* Math. Proc. Camb. Phil. Soc., No. 103, 1988
J. Hofbauer, *Heteroclinic cycles in ecological differential equations*, Tatra Mount. Math. Publ. 4, 105–116, 1994
F. Hofbauer, G. Keller, *Quadratic maps without asymptotic measure*, Commun. Math. Phys. 127 319–37, 1990
J. Hofbauer, K. Sigmund, *Evolutionary Game Dynamics*, Bulletin of the American Mathematical Society, Vol. 40, 2003
Y. Ilyashenko, *Minimal attractors*. In EQUADIFF 2003, 421–428, World Scientific Publishing, 2005
I.S. Labouriau, A.A.P. Rodrigues, *Dense heteroclinic tangencies near a Bykov cycle*, J. Diff. Eqs., 259, 5875–5902, 2015
I.S. Labouriau, A.A.P. Rodrigues, *Global bifurcations close to symmetry*, J. Math. Anal. Appl., 444(1), 648–671, 2016
O. Karabacak, P. Ashwin, *On statistical attractors and the convergence of time averages*, Math. Proc. Camb. Phil. Soc., 1–13, 2011
S. Kiriki, T. Soma, *Takens’ last problem and existence of non-trivial wandering domains*, Advances in Mathematics, 306, 524–588, 2017
V. Kleptsyn, *An example of non-coincidence of minimal and statistical attractors*, Ergod. Th. & Dynam. Sys, 26, 759–768, 2006
M. Krupa, I. Melbourne, *Asymptotic stability of heteroclinic cycles in systems with symmetry*, Ergod. Th. & Dynam. Sys., [15]{}, 121–147, 1995
M. Krupa, I. Melbourne, *Asymptotic Stability of Heteroclinic Cycles in Systems with Symmetry II*, Proc. Roy. Soc. Edinburgh Sect. A 134 1177–1197, 2004
I. Melbourne, *Intermittency as a Codimension-Three Phenomenon*, Journal of Dynamics and Differential Equations, No. **4**, 1989
S.E. Newhouse, *The abundance of wild hyperbolic sets and non-smooth stable sets for diffeomorphisms*, Publ. Math. Inst. Hautes Etudes Sci. 50, 101–151, 1979
J. Palis, F. Takens, *Hyperbolicity and sensitive chaotic dynamics at homoclinic bifurcations*, Cambridge University Press, Cambridge Studies in Advanced Mathematics 35, 1993
T. Peixe, *Lotka-Volterra systems and Polymatrix Replicators*, Ph.D. Thesis, Fac. Ciências da Universidade de Lisboa, 2015
C. Pugh, C. Robinson, *The $C^1$ Closing Lemma, including hamiltonians*, Ergodic Theory Dynam. Systems 3(2), 261–313, 1983
A. A. P. Rodrigues, *Moduli for heteroclinic connections involving saddle-foci and periodic solutions*, Disc. Conti. Dynam. Systems A, 35(7), 3155–3182, 2015
A. A. P. Rodrigues, I. S. Labouriau, M. A. D. Aguiar, *Chaotic Double Cycling*, Dynamical Systems: an International Journal, Vol. 26 (2), 199–233, 2011
D. Ruelle, *A measure associated with Axiom A attractors*, Amer. J. Math., 98, 619–654, 1976
D. Ruelle, *Historic behaviour in smooth dynamical systems*, Global Analysis of Dynamical Systems ed H W Broer *et al* (Bristol: Institute of Physics Publishing), 2001
K. Sigmund, *Time Averages for unpredictable orbits of deterministic systems*, Annals of Operations Research, 37, 217–228, 1992
Ya. Sinai, *Gibbs measures in ergodic theory*, Russ. Math. Surveys, 27, 21–60, 1972
F. Takens, *Partially hyperbolic fixed points*, Topology, 10, 133–147, 1971
F. Takens, *Heteroclinic attractors: Time averages and moduli of topological conjugacy*, Bol. Soc. Brasil. Mat., 25, 107–120, 1994
F. Takens, *Orbits with historic behaviour, or non-existence of averages*, Nonlinearity, 21 , no. 3, T33–T36, 2008
J. A. Yorke, K. T. Alligood, *Cascades of period-doubling bifurcations: A prerequisite for horseshoes*, Bull. Am. Math. Soc. (N.S.) 9(3), 319–322, 1983
Appendix: $C^2$-Linearizing the hyperbolic periodic solution {#appendix}
============================================================
For $a \in \{1, \ldots, k\}$, let $\Pi_a$ be a cross section transverse to the flow at $p_a \in \mathcal{P}_{a}$. Since $ \mathcal{P}_{a}$ is hyperbolic, there is a neighbourhood of $p_a$ where the first return map to $\Pi_a$, denoted by $\pi_a$, is $C^1$-conjugate to its linear part. Moreover:
Let $\pi_a$ be the first return map to $\Pi_a$. For each $r\geq 2$ there is an open and dense subset of ${{\rm\bf R}}^2$ such that, if the eigenvalues $(c_a,e_a)$ of $d\pi_a$ lie in this set, then there is a neighbourhood $V^*_a$ of $p_a$ in $\Pi_a$ where $\pi_a$ is $C^r$ conjugate to its linear part.
**Proof:** Let $r\geq 2$. In order to ensure the existence of a $C^r$ conjugacy between $d\pi_a$ and the first return map to $\Pi_a$, we use Takens’ criterion [@Takens71 Sections 1 and 5], which asks for the Sternberg $\alpha \left(d\pi_a, k\right)$-condition. Following Takens’ terminology [@Takens71], let us define: $$\lambda_1=\lambda_c = e^{-c_a}<1, \quad \lambda_2=\lambda_e = e^{e_a}>1, \quad s=u=1, h=2$$ and $$\overline{M}= \lambda_e=\overline{m}>1 ,\quad \overline{N}= \lambda_c^{-1}=\overline{n}>1.$$ In order to apply the criterion, we should define the function $\alpha \left(d\pi_a, k\right)$. The definition depends on an auxiliary function $\beta \left(d\pi_a, k\right)$. The proof is divided into three steps: characterisation of $\beta$, characterisation of $\alpha$ and application of the criterion.
1. **The function $\beta$:** The value of $\beta \left(d\pi_a, k\right)$ is that of the smallest $j \in {{\rm\bf N}}$ for which: $$\forall r<k, \qquad \overline{N} \overline{M}^r \overline{n}^{r-j}<1.$$ In other words, $\beta \left(d\pi_a, k\right)$ is the smallest $j \in {{\rm\bf N}}$ for which: $$\label{beta1}
\forall r<k, \qquad \Phi(\lambda_c, \lambda_e,r)\lambda_c^j<1, \qquad \text{where}\qquad \Phi(\lambda_c, \lambda_e,r)= \left(\frac{1}{\lambda_c}\right)^{1+r}\lambda_e^r.$$ Thus $\beta$ depends on $d\pi_a$ through the latter’s eigenvalues. In particular, $\beta \left(d\pi_a, k\right) >1+r$ for $r<k$. Moreover, for $j-(r+1)\in{{\rm\bf N}}$ large enough, the map $\Phi(\lambda_c, \lambda_e,r)$ increases with $r$. Therefore, it is sufficient to check condition for $r=k$. Indeed, the value of $\beta \left(d\pi_a, k\right)$ is that of the smallest $j \in {{\rm\bf N}}$ for which: $$(\lambda_c^{-1} \lambda_e)^k \lambda_c^{-1} \lambda_c^j<1\quad \Longleftrightarrow \quad (\lambda_c^{-1})^{k+1} \lambda_c^{k+1}\lambda_e^k\lambda_c^{j-(k+1)}<1 \quad \Longleftrightarrow \quad \lambda_e^k\lambda_c^{j-(k+1)}<1.$$ Define $ j-(k+1)=l$. Then it is easy to see that $\beta \left(d\pi_a, k\right)$ is the smallest $j \in {{\rm\bf N}}$ such that $\lambda_e^k \lambda_c^{l}<1$. Taking logarithms, it follows that this is equivalent to: $$k\ln \lambda_e + l \ln \lambda_c<0 \quad \Longleftrightarrow \quad l> -k \frac{\ln \lambda_e}{ \ln \lambda_c} .$$ Since $l\in {{\rm\bf N}}$, its minimum value will be $$l=1 + \left[ -k \frac{\ln \lambda_e}{ \ln \lambda_c} \right] \quad \Longleftrightarrow \quad \beta \left(d\pi_a, k\right)=k+2 + \left[ -k \frac{\ln \lambda_e}{ \ln \lambda_c} \right],$$ where $[x]$ represents the largest integer less than or equal to $x\in {{\rm\bf R}}$.
2. **The function $\alpha$:** The value of $\alpha \left(d\pi_a, k\right)$ is that of the smallest $j \in {{\rm\bf N}}$ for which: $$\forall r<\beta \left(d\pi_a, k\right), \qquad \overline{M} \overline{N}^r \overline{m}^{r-j}<1.$$ In other words, $\alpha \left(d\pi_a, k\right)$ is the smallest $j \in {{\rm\bf N}}$ for which: $$\forall r<\beta \left(d\pi_a, k\right), \qquad \Phi(\lambda_c, \lambda_e,r)\lambda_e^{-j+1}<1, \qquad \text{with}\qquad \Phi(\lambda_c, \lambda_e,r)= \left(\frac{1}{\lambda_c}\right)^{r}\lambda_e^r.$$ Since $\Phi(\lambda_c, \lambda_e,r)$ increases with $r$, we would like to find the smallest $j \in {{\rm\bf N}}$ for which $$(\lambda_c^{-1}\lambda_e)^{\beta \left(d\pi_a, k\right)} \lambda_e^{-j+1}<1.$$ If $j=\beta \left(d\pi_a, k\right)+1+l$, then: $$(\lambda_c^{-1})^{\beta \left(d\pi_a, k\right)} \lambda_e^{(\beta \left(d\pi_a, k\right)+1)}< \lambda_e^{(\beta \left(d\pi_a, k\right)+1)} \lambda_e^l
\quad \Longleftrightarrow \quad
\lambda_c^{-\beta \left(d\pi_a, k\right)}<\lambda_e^l$$ that happens if and only if $ -\beta \left(d\pi_a, k\right)\ln (\lambda_c)<l \ln (\lambda_e)$. Therefore, $\alpha \left(d\pi_a, k\right)= \beta \left(d\pi_a, k\right)+1+l$ where $l$ is the smallest integer $l$ such that $-\beta \left(d\pi_a, k\right)\ln (\lambda_c)<l \ln (\lambda_e)$. Noting that $\ln(\lambda_c)<0$ and $l \in {{\rm\bf N}}$, we have: $$l > -\frac{\beta \left(d\pi_a, k\right)\ln \lambda_c}{\ln \lambda_e}
\quad\mbox{with minimum value}\quad
l=1+ \left[ -\frac{\beta \left(d\pi_a, k\right)\ln \lambda_c}{\ln \lambda_e}\right].$$
In conclusion, since $\ln \lambda_c=-c_a$ and $\ln \lambda_e=e_a$, it follows that: $$\beta \left(d\pi_a, k\right)=k+2+ \left[ \frac{k e_a}{c_a}\right]$$ and $$\alpha \left(d\pi_a, k\right)=\beta \left(d\pi_a, k\right)+1+l = k+4+\left[ \frac{k e_a}{c_a}\right
] + \left[ \left( k+2+\left[ \frac{k e_a}{c_a}\right]
\right) \frac{c_a}{e_a}\right].$$
3. **Applying the Sternberg condition:** In order to have $C^r$ conjugacy between $\pi_a$ and its linear part, the eigenvalues of $d\pi_a$ must satisfy the $\alpha \left(d\pi_a, r\right)$-condition, which we proceed to explain in this context. For all $ \nu_1, \nu_2 \geq 0 $ such that $ 2\leq \nu_1+\nu_2 \leq \alpha \left(d\pi_a, r\right)$ we should have: $$\lambda_c^{\nu_1-1}\lambda_e^{\nu_2} \neq 1, \qquad \lambda_e^{\nu_2-1}\lambda_c^{\nu_1}\neq 1\quad \text{and}\quad |\lambda_c^{\nu_1} \lambda_e^{\nu_2}|\neq 1 .$$ Indeed, $ \lambda_c^{\nu_1} \lambda_e^{\nu_2}= e^{-\nu_1 c_a}e^{\nu_2 e_a} =1$ if and only if $\nu_1 c_a=\nu_2 e_a$. In summary, for all $ \nu_1, \nu_2 \geq 0 $ such that $ 2\leq \nu_1+\nu_2 \leq \alpha \left(d\pi_a, r\right)$, the following conditions should hold:
- $(\nu_1-1) c_a \neq \nu_2 e_a$
- $\nu_1 c_a \neq (\nu_2-1) e_a$
- $\nu_1 c_a \neq \nu_2 e_a$.
The set of smooth vector fields that satisfy the Sternberg $\alpha \left(d\pi_a, r\right)$-condition, for each $r\ge 2$, is open and dense in the set of vector fields satisfying (P\[P1\]) – (P\[P5\]). Hence, generically the assumptions are satisfied.
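The closed forms for $\beta$ and $\alpha$ and the finitely many non-resonance inequalities above are easy to evaluate directly. The following sketch (illustrative only; `c` and `e` stand for the eigenvalue data $c_a$, $e_a$ of $d\pi_a$) computes both functions and checks the Sternberg conditions over the finite range $2\le \nu_1+\nu_2\le \alpha$:

```python
import math

def beta(c, e, k):
    # beta(d pi_a, k) = k + 2 + [k e_a / c_a], with [.] the floor
    return k + 2 + math.floor(k * e / c)

def alpha(c, e, k):
    # alpha = beta + 1 + (1 + [beta c_a / e_a]) = beta + 2 + [beta c_a / e_a]
    b = beta(c, e, k)
    return b + 2 + math.floor(b * c / e)

def sternberg_ok(c, e, k, tol=1e-12):
    # the three non-resonance conditions for all 2 <= nu1 + nu2 <= alpha
    a = alpha(c, e, k)
    for nu1 in range(a + 1):
        for nu2 in range(a + 1 - nu1):
            if nu1 + nu2 < 2:
                continue
            if (abs((nu1 - 1) * c - nu2 * e) < tol or
                    abs(nu1 * c - (nu2 - 1) * e) < tol or
                    abs(nu1 * c - nu2 * e) < tol):
                return False
    return True

assert beta(1.0, 1.0, 2) == 6 and alpha(1.0, 1.0, 2) == 14
assert not sternberg_ok(1.0, 1.0, 2)        # resonant: c_a = e_a
assert sternberg_ok(1.0, math.sqrt(2), 2)   # irrational ratio: generic case
```

The last two assertions illustrate the genericity statement: a rational ratio $e_a/c_a$ can produce a resonance, while an irrational ratio satisfies all the (finitely many) inequalities.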
Control of flight times {#appendixB}
=======================
Proof of Lemma \[Equalities\] {#appendixB1}
-----------------------------
1. If $n=0$ ($n$ corresponds to the number of loops around the cycle $\Gamma_0$), it is trivial. For $n \geq 1$, we may write the following equality, omitting the dependence on $X$: $$\begin{aligned}
T_{a + nk}=&T_a &+\quad \tau_a +\tau_{a+1}+ \ldots +\tau_{a+k-1}+\\
&&+\quad \tau_{a+k} +\tau_{a+k+1}+ \ldots +\tau_{a+2k-1}+\ldots\\
&&+\quad \tau_{a+(n-1)k} +\tau_{a+(n-1)k+1}+ \ldots +\tau_{a+nk-1}\\\end{aligned}$$ Using Corollary \[Lemma3\], the previous equality yields: $$\begin{aligned}
T_{a + nk}=&T_a &+ \mu_a \tau_{a-1} +\mu_a\mu_{a+1}\tau_{a-1}+ \ldots + \left(\prod_{l=0}^{k-1}\mu_{a+l}\right)\tau_{a-1}+\\
&&+ \delta \mu_a \tau_{a-1}+\delta\mu_a\mu_{a+1}\tau_{a-1}+ \ldots + \delta \left(\prod_{l=0}^{k-1}\mu_{a+l}\right)\tau_{a-1}+\ldots\\
&&+ \delta^{n-1} \mu_a \tau_{a-1}+\delta^{n-1}\mu_a\mu_{a+1}\tau_{a-1}+ \ldots + \delta^{n-1}\left(\prod_{l=0}^{k-1}\mu_{a+l}\right)\tau_{a-1}=\\
=& T_a& + \frac{\delta^n-1}{\delta-1} \left(\mu_a +\mu_a \mu_{a+1} + \ldots + \prod_{l=0}^{k-1}\mu_{a+l}\right)\tau_{a-1}\end{aligned}$$
2. This item follows from Corollary \[Lemma3\]. Indeed, we have: $$\tau_{a + nk}(X)=\mu_{a }\tau_{a+nk-1}(X)=\mu_{a}\mu_{a-1}\tau_{a+nk-2}(X)=\ldots=\delta^n\tau_{a}(X)=\delta^n\mu_a\tau_{a-1}(X).$$
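Both identities can be checked numerically by generating flight times directly from the recursion $\tau_j=\mu_j\,\tau_{j-1}$ of Corollary \[Lemma3\] (indices taken cyclically in ${{\rm\bf Z}}_k$). A minimal sketch with arbitrary illustrative values of $k$, the ratios $\mu_a$ and the seed $\tau_{a-1}$:

```python
import math

k = 3
mu = [1.3, 0.7, 1.9]                 # illustrative ratios mu_1, ..., mu_k
delta = math.prod(mu)                # delta = mu_1 * ... * mu_k

def mu_at(j):
    # index j taken cyclically in Z_k = {1, ..., k}
    return mu[(j - 1) % k]

a, n = 2, 4
tau = {a - 1: 0.83}                  # arbitrary positive seed tau_{a-1}
for j in range(a, a + n * k + 1):
    tau[j] = mu_at(j) * tau[j - 1]   # Corollary [Lemma3] recursion

# item 1: T_{a+nk} - T_a equals a geometric sum over the n turns
lhs = sum(tau[j] for j in range(a, a + n * k))
geom = sum(math.prod(mu_at(a + l) for l in range(m + 1)) for m in range(k))
rhs = (delta**n - 1) / (delta - 1) * geom * tau[a - 1]
assert abs(lhs - rhs) < 1e-9

# item 2: tau_{a+nk} = delta^n * mu_a * tau_{a-1}
assert abs(tau[a + n * k] - delta**n * mu_at(a) * tau[a - 1]) < 1e-9
```

Each turn around the cycle multiplies the one-turn sum $(\mu_a+\mu_a\mu_{a+1}+\ldots)\tau_{a-1}$ by $\delta$, which is exactly the geometric factor $(\delta^n-1)/(\delta-1)$ appearing in the statement.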
Proof of Proposition \[Prop6\] {#appendixC}
------------------------------
We divide the proof in two lemmas. First we show in Lemma \[lematrivial?\] that it is sufficient to consider the limit when $n\to\infty$ of the averages over one turn around $\Gamma_0$. Then in Lemma \[Lemma4\] we show that these averages tend to $A_a$.
\[lematrivial?\] Let $T_\ell $, $\ell\in{{\rm\bf N}}$, be a sequence $0=T_0<T_\ell <T_{\ell +1}$ with $\lim_{\ell \to \infty} T_\ell =\infty$. Given $h:{{\rm\bf R}}\rightarrow{{\rm\bf R}}^m$ an integrable map, $$\mbox{if }
\lim_{\ell \to\infty}\frac{1}{T_{\ell +1}-T_{\ell }} \int_{T_{\ell }}^{T_{\ell +1}} h(t)dt=\omega,
\qquad\mbox{then}\qquad
\lim_{\ell \to\infty}\frac{1}{T_{\ell }} \int_{0}^{T_{\ell }} h(t)dt=\omega .$$
**Proof:** First note that $$\frac{1}{T_{\ell }} \int_{0}^{T_{\ell }} h(t)dt - \omega=\frac{1}{T_{\ell }} \int_{0}^{T_{\ell }} (h(t)-\omega) dt=
\sum_{j=1}^\ell \left(\frac{T_j-T_{j-1}}{T_\ell }\right) \left[
\frac{1}{T_j-T_{j-1}} \int_{T_{j-1}}^{T_{j}} (h (t)-\omega)dt
\right] .$$ From the hypothesis, given $\varepsilon>0$ there exists $N_1$ such that $\ell >N_1$ implies $$\frac{1}{T_\ell -T_{\ell -1}}\left| \int_{T_{\ell -1}}^{T_{\ell }} (h (t)-\omega)dt\right|<\frac{\varepsilon}{2}.$$ Let $A=\displaystyle \left|\int_0^{T_{N_1} }(h (t)-\omega) dt\right|$. Since $T_\ell \to\infty$ then there exists $N_2$ such that $T_{N_2}> 2A/\varepsilon$. Let $N_0=\max\left\{ N_1,N_2\right\}$. If $\ell >N_0$ then $$\begin{aligned}
\left|\frac{1}{T_{\ell }} \int_{0}^{T_{\ell }} (h (t)-\omega )dt\right|&\le&
\frac{1}{T_{\ell }}\left| \int_{0}^{T_{N_1}} (h (t)-\omega )dt\right| +
\frac{1}{T_{\ell }} \left|\int_{T_{N_1}}^{T_{\ell }} (h (t)-\omega )dt\right|\\
&\le&\frac{A}{T_\ell } +
\sum_{j=N_1}^\ell \left(\frac{T_j-T_{j-1}}{T_\ell }\right) \
\frac{1}{T_j-T_{j-1}}\left| \int_{T_{j-1}}^{T_{j}} (h (t)-\omega )dt\right| \\
&\le&\frac{\varepsilon}{2}+\sum_{j=N_1}^\ell \left(\frac{T_j-T_{j-1}}{T_\ell }\right) \frac{\varepsilon}{2}
\le \frac{\varepsilon}{2}\left( 1+\sum_{j=1}^\ell \frac{T_j-T_{j-1}}{T_\ell } \right)= \varepsilon .\end{aligned}$$
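Lemma \[lematrivial?\] can be illustrated with a piecewise-constant $h$: if the averages over the blocks $[T_\ell,T_{\ell+1}]$ tend to $\omega$, so does the full time average. A minimal numeric sketch (the choices of $\omega$, the times $T_\ell=\ell^2$ and the block averages are arbitrary):

```python
omega = 2.0
N = 2000
T = [l * l for l in range(N + 1)]    # 0 = T_0 < T_1 < ... with T_l -> infinity

# h is constant on each block [T_l, T_{l+1}); its block average
# omega + (-1)^l / (l + 1) converges to omega as l -> infinity
total = 0.0
for l in range(N):
    total += (T[l + 1] - T[l]) * (omega + (-1) ** l / (l + 1))

full_average = total / T[N]
assert abs(full_average - omega) < 1e-2
```

The early blocks contribute an error of the size of $A/T_\ell$ in the proof above, which is washed out as $T_\ell\to\infty$; here the discrepancy is already far below the stated tolerance.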
\[Lemma4\] Let $f_0$ be a vector field in ${{\rm\bf R}}^3$ satisfying (P\[P1\])–(P\[P3\]). For each $a\in{{\rm\bf Z}}_k$, and for each $X\in\mathcal{B}(\Gamma_0)$, the limit of the time average of $\phi(t,X)$ over one full turn around the heteroclinic cycle $\Gamma_0$ starting at $In(\mathcal{P}_a)$ is $A_a$. More precisely: $$\lim_{n\to\infty} \frac{1}{T_{a+(n+1)k}(X)-T_{a+nk}(X)}\int_{T_{a+nk}(X)}^{T_{a+(n+1)k}(X)}\phi(t, X) dt =A_a.$$
**Proof:** First, recall that we are assuming that, for all $a\in{{\rm\bf Z}}_k$, the jumps from $Out(\mathcal{P}_a)$ to $In(\mathcal{P}_{a+1})$ are instantaneous (see Remark \[rkFlightTimes\]). Since $X\in\mathcal{B}(\Gamma_0)$, for $t\in\left[T_{a+nk},T_{a+1+nk} \right]$ with large $n$, the trajectory $\phi(t,X)$ gets very close to $\mathcal{P}_a$. Therefore, omitting the $(X)$ for brevity, we have: $$\label{average1}
\lim_{n\to \infty} \frac{1}{T_{a+1+nk}-T_{a+nk}}\int_{T_{a+nk}}^{T_{a+1+nk}}\phi(t, X) dt =
\lim_{n\to \infty} \frac{1}{\tau_{a+nk}} \int_{T_{a+nk}}^{T_{a+1+nk}}\phi(t, X) dt = \overline{x}_a.$$
Without loss of generality, from now on we take $a=1$. Then $$\begin{aligned}
&&
\frac{1} {T_{k+1+nk}-T_{1+nk}} \int_{T_{1+nk}}^{T_{k+1+nk}} \phi(t, X)dt \\
&= &
\frac{1} {T_{k+1+nk}-T_{1+nk}}\left[\int_{T_{1+nk}}^{T_{2+nk}} \phi(t, X)dt+\int_{T_{2+nk}}^{T_{3+nk}} \phi(t, X)dt +\cdots+ \int_{T_{k+nk}}^{T_{k+1+nk}} \phi(t, X)dt\right]\\
&=&
\sum_{b=1}^k \frac{T_{b+1+nk}-T_{b+nk}}{T_{k+1+nk}-T_{1+nk}}
\left[ \frac{1}{T_{b+1+nk}-T_{b+nk}}
\int_{T_{b+nk}}^{T_{b+1+nk}} \phi(t, X)dt \right] .\end{aligned}$$ Recall that $T_{2+nk}-T_{1+nk}=\tau_{1+nk}$, and by Corollary \[Lemma3\], for any $b\in\{2,\ldots,k\}$ $$\begin{aligned}
\frac{T_{b+1+nk}-T_{b+nk}} {T_{k+1+nk}-T_{1+nk}}&=&
\frac{\tau_{b+nk} }{\tau_{1+nk}+\tau_{2+nk}+\ldots+ \tau_{k+nk}}\\
&=&\frac{\mu_b \mu_{b-1}\ldots \mu_2 \tau_{1+nk}}{\tau_{1+nk}+\mu_2 \tau_{1+nk} +\ldots+ \mu_k \mu_{k-1}\ldots \mu_2 \tau_{1+nk}}\\
&=&\frac{\mu_b \mu_{b-1}\ldots \mu_2 }{den(A_1)} .\end{aligned}$$ Therefore, the value of $$\lim_{n \rightarrow +\infty}
\frac{1} {T_{k+1+nk}-T_{1+nk}} \int_{T_{1+nk}}^{T_{k+1+nk}} \phi(t, X)dt$$ is, by , $$\begin{aligned}
&&
\lim_{n \rightarrow +\infty}
\frac{1}{den(A_1)}\left[ \frac{1}{T_{2+nk}-T_{1+nk}}\int_{T_{1+nk}}^{T_{2+nk}} \phi(t, X)dt+
\sum_{b=2}^k \frac{\mu_b \mu_{b-1}\ldots \mu_2}{T_{b+1+nk}-T_{b+nk}}
\int_{T_{b+nk}}^{T_{b+1+nk}} \phi(t, X)dt
\right]\\
&=& \frac{ \overline{x}_{1} + \mu_2 \overline{x}_{2} + \ldots+ \mu_k \mu_{k-1}\ldots \mu_2 \overline{x}_{k}}{1+\mu_2 +\ldots+ \mu_k \mu_{k-1}\ldots \mu_2}=A_1 .\end{aligned}$$
Proof of Lemma \[Colinear\]
---------------------------
Expanding $den(A_{a})$ and $den(A_{a+1})$ yields: $$den(A_a) = {1+\mu_{a+1} +\mu_{a+1}\mu_{a+2}+\cdots+ \prod_{l=1}^{k-1}\mu_{l+a}}
\qquad
den(A_{a+1}) = {1+\mu_{a+2} +\mu_{a+2}\mu_{a+3}+\cdots+ \prod_{l=1}^{k-1}\mu_{l+a+1}}$$ hence, since $ \prod_{l=0}^{k-1}\mu_{l+a+1}=\delta$, $$\begin{aligned}
\mu_{a+1} den(A_{a+1}) &=& {\mu_{a+1}+\mu_{a+1}\mu_{a+2} +\mu_{a+1}\mu_{a+2}\mu_{a+3}+\cdots+ \mu_{a+1} \prod_{l=1}^{k-1}\mu_{l+a+1}}\\
&=& \mu_{a+1}+\mu_{a+1}\mu_{a+2} +\mu_{a+1}\mu_{a+2}\mu_{a+3}+\cdots+ \prod_{l=1}^{k-1}\mu_{l+a} + \delta\\
&=& den(A_a)-(1-\delta) .\end{aligned}$$ For $num(A_{a})$ and $num(A_{a+1})$ we obtain $$num(A_a) = \overline{x}_{a} + \mu_{a+1} \overline{x}_{a+1} + \mu_{a+1}\mu_{a+2} \overline{x}_{a+2} + \cdots+ \left(\prod_{l=1}^{k-1}\mu_{l+a} \right) \overline{x}_{a+k-1}$$ $$num(A_{a+1}) = \overline{x}_{a+1} + \mu_{a+2} \overline{x}_{a+2} + \mu_{a+2}\mu_{a+3} \overline{x}_{a+3} + \cdots+ \left(\prod_{l=1}^{k-1}\mu_{l+a+1} \right) \overline{x}_{a+k}$$ and, since $\overline{x}_{a+k}=\overline{x}_a$, we get $$\begin{aligned}
\mu_{a+1} num(A_{a+1}) &=& \mu_{a+1} \overline{x}_{a+1} + \mu_{a+1} \mu_{a+2} \overline{x}_{a+2} + \mu_{a+1} \mu_{a+2}\mu_{a+3} \overline{x}_{a+3} + \cdots+ \mu_{a+1} \left(\prod_{l=1}^{k-1}\mu_{l+a+1} \right) \overline{x}_{a+k}\\
&=& \mu_{a+1} \overline{x}_{a+1} + \mu_{a+1} \mu_{a+2} \overline{x}_{a+2} + \mu_{a+1} \mu_{a+2}\mu_{a+3} \overline{x}_{a+3} + \cdots + \left(\prod_{l=1}^{k-1}\mu_{l+a} \right) \overline{x}_{a+k-1}+ \delta \overline{x}_{a}\\
&=& num(A_a) - (1-\delta)\overline{x}_a \end{aligned}$$ and the lemma is proved.
[^1]: CMUP (UID/MAT/00144/2013) is supported by the European Regional Development Fund through the programme COMPETE and by the Portuguese Government through the Fundação para a Ciência e a Tecnologia (FCT) under the partnership agreement PT2020. A.A.P. Rodrigues was supported by the grant SFRH/BPD/84709/2012 of FCT. Part of this work has been written during AR stay in Nizhny Novgorod University partially supported by the grant RNF 14-41-00044.
---
author:
- 'Bill S. Wright$^1$,'
- 'Kazuya Koyama$^1$,'
- 'Hans A. Winther$^1$'
- 'and Gong-Bo Zhao$^{2,3,1}$'
bibliography:
- 'References.bib'
title: 'Investigating the degeneracy between modified gravity and massive neutrinos with redshift-space distortions'
---
Introduction
============
Modifications to Einstein’s theory of General Relativity (GR) can be considered when searching for an explanation of the late-time acceleration [@KoyamaMGReview; @CliftonMGReview]. The simplest class of models, known as scalar-tensor theories [@Horndeski1; @Horndeski2; @Horndeski3], introduces a new scalar field that causes the required acceleration. This new scalar field also couples to matter, leading to a so-called fifth force. Such models typically invoke a screening mechanism to ensure that the fifth force becomes negligible in high-density environments, and thus is not observable in solar-system tests. However, in other environments the fifth force is present and can enhance structure formation. This enhancement can be used to constrain such scalar-tensor theories of modified gravity (MG) with large-scale structure observations, and this is indeed a key goal of upcoming large-scale structure surveys such as Euclid [@Euclid], DESI [@DESI], WFIRST [@WFIRST], LSST [@LSST], and SKA [@SKA].
In this paper we will consider $f(R)$ gravity [@fofr1; @fofr2], and in particular the commonly studied Hu-Sawicki variant of $f(R)$ [@HuSawicki]. This model has been well studied with [*N*]{}-body simulations as it is well understood theoretically and offers a phenomenology that is representative of a broad range of modified gravity models.
In GR with cold dark matter the growth rate of matter perturbations is scale-independent. A key signature of modified gravity is that the linear growth rate can be scale-dependent. However, a vital but often overlooked complication when searching for signatures of modified gravity in large-scale structure is the suppression of structure growth due to massive neutrinos. Neutrinos were first shown to have mass in observations of neutrino flavour oscillations [@MassiveNeutrinos1; @MassiveNeutrinos2], the presence of which demands that at least two of the neutrino states are massive [@Pontecorvo1957]. Even though particle physics experiments do not yet tell us the absolute mass of each of the three mass eigenstates, they do allow strong constraints to be placed on the difference in mass between the states, and these imply that at least one of the mass eigenstates has a mass $m_{\nu}\gtrsim 0.05~\mathrm{eV}$ [@PDG2018]. As a consequence of having mass, the matter-radiation equality time will be delayed and the neutrinos will not cluster at scales below their free-streaming length $\lambda_{\rm fs}$ [@MassiveNeutrinoReview]. The delay to the time of matter-radiation equality lowers the amplitude of the matter perturbations at the start of matter-domination, and the free-streaming of massive neutrinos causes the dark matter perturbations to feel a reduced gravitational potential below $\lambda_{\rm fs}$ and thus cluster less strongly than in a model with the same value of the matter density parameter but only massless neutrinos. The combination of these two effects leads to a scale-dependent suppression of structure growth; a signature which can be used to constrain the neutrino masses if it can be measured by the previously mentioned large-scale structure surveys [@Bondetal1980; @2016PhRvD..94h3522G; @2016PDU....13...77C; @2015JCAP...02..045P; @2014MNRAS.444.3501B; @2013MNRAS.436.2038Z].
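The size of this suppression can be related to the neutrino mass through two standard linear-theory relations: the density parameter of non-relativistic relic neutrinos, $\Omega_\nu h^2 = \sum m_\nu / 93.14~\mathrm{eV}$, and the rule of thumb that on scales well below $\lambda_{\rm fs}$ the matter power spectrum is suppressed by $\Delta P/P \approx -8 f_\nu$ with $f_\nu=\Omega_\nu/\Omega_{\rm m}$. A rough numeric sketch (the cosmological parameter values $\Omega_{\rm m}=0.31$, $h=0.67$ are illustrative, not taken from this paper):

```python
def omega_nu_h2(sum_mnu_eV):
    # standard relation for non-relativistic relic neutrinos
    return sum_mnu_eV / 93.14

def small_scale_suppression(sum_mnu_eV, omega_m=0.31, h=0.67):
    # linear-theory rule of thumb: Delta P / P ~ -8 f_nu below lambda_fs
    f_nu = omega_nu_h2(sum_mnu_eV) / (omega_m * h * h)
    return -8.0 * f_nu

# minimal-mass normal hierarchy, sum m_nu ~ 0.06 eV: a few-percent suppression
assert abs(omega_nu_h2(0.06) - 6.44e-4) < 1e-5
assert -0.05 < small_scale_suppression(0.06) < -0.03
```

Even the minimal allowed mass sum thus produces a roughly four-percent suppression of small-scale power, comparable in size to the enhancement from viable $f(R)$ models, which is the origin of the degeneracy discussed below.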
However, with the enhancement of structure formation from modified gravity and the suppression due to massive neutrinos, there is a risk of degeneracy whereby large-scale structure in a universe with a strong modification to gravity and heavy neutrinos can be difficult to distinguish from that of a universe with GR and light neutrinos [@2013PhRvL.110l1302M; @2013PhRvD..88j3523H; @Baldi2014; @2015PhRvD..91f3524H; @2010PThPh.124..541M; @Bellomo2017; @2017PhRvD..95f3502A]. This degrades the ability of surveys to achieve their twin goals of testing gravity and constraining the neutrino masses in any theories of gravity beyond GR. Indeed, it has been shown that the non-linear matter power spectrum [@Baldi2014] and halo mass function [@Hagstotzetal2018] in $f(R)$ models are difficult to distinguish from their equivalents in GR when the neutrino masses are allowed to vary. The DES Collaboration considers neutrino mass and extensions to GR in the same analysis [@DESjoint], although they only state the resulting constraints on the MG parameters and not the neutrino masses. There are some promising signs that certain observables may be better at reducing or even breaking this degeneracy, such as higher-order weak lensing statistics [@Peeletal2018] and weak lensing tomographic information at multiple redshifts [@Giocolietal2018]; as well as techniques that are superior at distinguishing models such as machine learning [@DegenMachineLearning1; @DegenMachineLearning2].
A different observable that has degeneracy breaking potential is that of redshift-space distortions (RSD) [@RSDReview]. RSD occur when the distances to tracers are computed using their observed redshifts without accounting for the effect of the tracers’ peculiar velocities, which add to the contribution from the Hubble flow. On linear scales this results in a slight squashing along the line-of-sight, whereas there is a strong stretching along the line-of-sight at non-linear scales commonly known as the Fingers-of-God (FoG) effect. For combinations of MG parameters and neutrino masses whose enhancement and suppression of [*structure growth*]{} produce matter power spectra that are difficult to differentiate between, the [*structure growth rate*]{} can still be different in each case and allow the models to be distinguished. It has recently been shown that growth rate information imprinted in velocity statistics in real-space can be used to break the degeneracy [@Hagstotzetal2019]. However, real-space velocity statistics are not directly observable. Fortunately, because of the velocity information encoded in them, RSD observations can be used to extract the linear growth rate of structure $f$. However, in order to extract $f$ and break the degeneracy, it is necessary to accurately model the non-linearities of RSD with MG and massive neutrinos.
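On linear scales the growth-rate information carried by RSD is captured by the standard Kaiser formula, $P^s(k,\mu)=(b+f\mu^2)^2 P_{\rm L}(k)$, whose Legendre multipoles have the well-known prefactors $b^2+\tfrac{2}{3}bf+\tfrac{1}{5}f^2$ (monopole), $\tfrac{4}{3}bf+\tfrac{4}{7}f^2$ (quadrupole) and $\tfrac{8}{35}f^2$ (hexadecapole). A sketch (pure linear theory, not the TNS model used in this paper; the bias $b$ and growth rate $f$ values are illustrative) that recovers these prefactors by direct integration:

```python
def legendre(ell, mu):
    # Legendre polynomials P_0, P_2, P_4
    return {0: 1.0,
            2: 0.5 * (3 * mu * mu - 1.0),
            4: (35 * mu**4 - 30 * mu * mu + 3.0) / 8.0}[ell]

def kaiser_prefactor(ell, b, f, n=100000):
    # (2 ell + 1)/2 * integral_{-1}^{1} (b + f mu^2)^2 P_ell(mu) dmu (trapezoid)
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        mu = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * (b + f * mu * mu) ** 2 * legendre(ell, mu)
    return (2 * ell + 1) / 2.0 * total * h

b, f = 1.9, 0.5
assert abs(kaiser_prefactor(0, b, f) - (b * b + 2 * b * f / 3 + f * f / 5)) < 1e-6
assert abs(kaiser_prefactor(2, b, f) - (4 * b * f / 3 + 4 * f * f / 7)) < 1e-6
assert abs(kaiser_prefactor(4, b, f) - (8 * f * f / 35)) < 1e-6
```

Because the quadrupole-to-monopole ratio depends on $f$ independently of the power-spectrum amplitude, models with matched real-space spectra but different growth rates remain distinguishable, which is the degeneracy-breaking mechanism exploited in this paper.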
In this paper, we extend the cosmological perturbation theory code [`COPTER`]{} [@OriginalCOPTER] to include the effects of massive neutrinos in addition to those of modified gravity allowing us to accurately model non-linear RSD in scenarios with Hu-Sawicki $f(R)$ gravity and non-zero neutrino masses. We build on [`MGCOPTER`]{}, the modified version of [`COPTER`]{} developed in [@BoseKoyama2016], which is itself based on the approach presented in [@Taruya].
We validate this implementation against simulations using the COmoving Lagrangian Acceleration (COLA) method [@OriginalCOLA], which is a fast approximate simulation method, and then investigate whether the degeneracy between the two effects is broken by RSD at the level of the dark matter field.
The paper is organised as follows. In Section \[sec:Implementation\] we explain our implementation of modified gravity and massive neutrinos in the Standard Perturbation Theory (SPT) formalism and [`MGCOPTER`]{} code. In Section \[sec:Validation\], we show the results of tests validating our implementation against simulation results. In Section \[sec:Degeneracy\] we use our new implementation to investigate the degeneracy and then conclude in Section \[sec:Conclusion\].
Implementation {#sec:Implementation}
==============
In order to model the combined effect of modified gravity and massive neutrinos on real- and redshift-space power spectra with low computational expense, it is necessary to include both effects in a semi-analytical code such as [`COPTER`]{} which computes large-scale structure observables using perturbation theory. For the redshift-space quantities, [`COPTER`]{} depends on the TNS model of redshift-space distortions which is named after the authors of [@TNS] (Taruya, Nishimichi, and Saito).
[`MGCOPTER`]{} and the TNS model {#ssec:LCDM_imp}
--------------------------------
[`MGCOPTER`]{} [@BoseKoyama2016] solves the equations of Standard Perturbation Theory (SPT) to acquire the real-space power spectra up to 1-loop order based on the approach developed by [@Taruya].
Starting from the continuity and Euler equations, assuming fluid quantities to be irrotational such that the velocity field $\Vec{v}$ can be expressed in terms of the velocity divergence $\theta=\left( \Vec{\nabla} \cdot \Vec{v} \right)/aH$, and transforming to Fourier space yields $$\begin{aligned}
a \frac{\partial\delta(\Vec{k})}{\partial a} + \theta(\Vec{k})
&= -\int \frac{d^3\Vec{k}_1 d^3\Vec{k}_2}{(2\pi)^3} \delta_D\left( \Vec{k} - \Vec{k}_1 - \Vec{k}_2 \right) \alpha(\Vec{k}_1, \Vec{k}_2) \theta(\Vec{k}_1)\delta(\Vec{k}_2)~, \label{eq:continuity}
\\
\nonumber \\
a \frac{\partial\theta(\Vec{k})}{\partial a} + \left( 2 + \frac{aH^{\prime}}{H} \right)\theta(\Vec{k}) &- \left( \frac{k}{aH} \right)^2 \Phi(\Vec{k})
\nonumber \\ &= -\frac{1}{2}\int \frac{d^3\Vec{k}_1 d^3\Vec{k}_2}{(2\pi)^3} \delta_D\left( \Vec{k} - \Vec{k}_1 - \Vec{k}_2 \right) \beta(\Vec{k}_1, \Vec{k}_2) \theta(\Vec{k}_1)\theta(\Vec{k}_2)~, \label{eq:Euler}\end{aligned}$$ where $a=1/(1+z)$ is the scale factor, $y^{\prime}=\partial y/\partial a$ and the kernels $\alpha$ and $\beta$ are given by $$\begin{aligned}
\alpha(\Vec{k}_1, \Vec{k}_2) &= 1 + \frac{\Vec{k}_1 \cdot \Vec{k}_2}{|\Vec{k}_1|^2}~,
\\
\nonumber \\
\beta(\Vec{k}_1, \Vec{k}_2) &= \frac{(\Vec{k}_1 \cdot \Vec{k}_2)|\Vec{k}_1+\Vec{k}_2|^2}{|\Vec{k}_1|^2|\Vec{k}_2|^2}~.\end{aligned}$$ The Poisson equation completes the above modified continuity and Euler equations $$\begin{aligned}
-\left( \frac{k}{aH} \right)^2 \Phi(\Vec{k}) = \frac{3\Omega_{\rm m}(a)}{2}\delta(\Vec{k})~,\end{aligned}$$ where $\Omega_{\rm m}(a) = 8\pi G\rho_{\rm m}/3H^2$. We want the $n^{\rm th}$ order solutions of (\[eq:continuity\]) and (\[eq:Euler\]) to be of the form $$\begin{aligned}
\label{eq:deltasol}
\delta_n(\Vec{k}, a) = \int d^3\Vec{k}_1 \ldots d^3\Vec{k}_n \delta_D(\Vec{k}-\Vec{k}_{1 \ldots n})F_n(\Vec{k}_1,\ldots,\Vec{k}_n, a) \Delta(\Vec{k}_1)\ldots\Delta(\Vec{k}_n)~,\end{aligned}$$ $$\begin{aligned}
\label{eq:thetasol}
\theta_n(\Vec{k}, a) = \int d^3\Vec{k}_1 \ldots d^3\Vec{k}_n \delta_D(\Vec{k}-\Vec{k}_{1 \ldots n})G_n(\Vec{k}_1,\ldots,\Vec{k}_n, a) \Delta(\Vec{k}_1)\ldots\Delta(\Vec{k}_n)~,\end{aligned}$$ where $\Vec{k}_{1 \ldots n}=\Vec{k}_{1}+\ldots+\Vec{k}_n$. Inserting these forms of the solutions into \[eq:continuity,eq:Euler\] yields a generalised system of equations for the $n^{\rm th}$ order kernels [@Taruya] $$\begin{aligned}
\hat{\mathcal{L}} \begin{bmatrix} F_n(\Vec{k}_1,\ldots,\Vec{k}_n) \\ G_n(\Vec{k}_1,\ldots,\Vec{k}_n) \end{bmatrix}
= \sum_{j=1}^{n-1} \begin{bmatrix} -\alpha(\Vec{k}_{1 \ldots j}, \Vec{k}_{j+1 \ldots n}) G_j(\Vec{k}_1,\ldots,\Vec{k}_j) F_{n-j}(\Vec{k}_{j+1},\ldots,\Vec{k}_n) \\ -\frac{1}{2}\beta(\Vec{k}_{1 \ldots j}, \Vec{k}_{j+1 \ldots n}) G_j(\Vec{k}_1,\ldots,\Vec{k}_j) G_{n-j}(\Vec{k}_{j+1},\ldots,\Vec{k}_n) \end{bmatrix}~,\end{aligned}$$ where $$\begin{aligned}
\hat{\mathcal{L}} = \begin{bmatrix} a\frac{d}{da} & 1 \\ \ \ \frac{3\Omega_{\rm m}}{2}\ \ \ \ & a\frac{d}{da} + \left( 2 + \frac{aH^{\prime}}{H} \right) \end{bmatrix}~.\end{aligned}$$ [`MGCOPTER`]{} solves this system of equations to compute the kernels $F_i$ and $G_i$. The power spectra up to 1-loop are given as $$\begin{aligned}
P^{\rm 1-loop}_{ij}(k) = P^{\rm L}_{ij}(k) + P_{ij}^{13}(k) + P_{ij}^{22}(k)~,\end{aligned}$$ where the 1-loop corrections are defined by $$\begin{aligned}
\left\langle x_2(\Vec{k}) y_2(\Vec{k}^{\prime}) \right\rangle &= (2\pi)^3 \delta_D(\Vec{k}+\Vec{k}^{\prime}) P_{xy}^{22}(k)~,
\nonumber \\
\left\langle x_1(\Vec{k}) y_3(\Vec{k}^{\prime}) + x_3(\Vec{k}) y_1(\Vec{k}^{\prime}) \right\rangle &= (2\pi)^3 \delta_D(\Vec{k}+\Vec{k}^{\prime}) P_{xy}^{13}(k)~,\end{aligned}$$ where $x$ and $y$ can be $\delta$ or $\theta$. Working these through, the final expressions for the 1-loop corrections in terms of the $z=0$ linear power spectrum $P_0(k)=P^{\rm L}(k, z=0)$ are, for the 22 correction, $$\begin{aligned}
P_{\delta\delta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2}\int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P_0(kr)P_0(k\sqrt{1+r^2-2rx})F_2^2(k, r, x) \mathrm{d}x~,
\\
\nonumber \\
P_{\delta\theta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2}\int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P_0(kr)P_0(k\sqrt{1+r^2-2rx})F_2(k, r, x) G_2(k, r, x) \mathrm{d}x~,
\\
\nonumber \\
P_{\theta\theta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2}\int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P_0(kr)P_0(k\sqrt{1+r^2-2rx})G_2^2(k, r, x) \mathrm{d}x~,\end{aligned}$$ while for the 13 correction we have $$\begin{aligned}
P_{\delta\delta}^{13}(k) =& 2 \frac{k^3}{(2\pi)^2}F_1(k)P_0(k) \int_0^{\infty} r^2 P_0(kr)F_3(k, r, x) \mathrm{d}r~,
\\
\nonumber \\
P_{\delta\theta}^{13}(k) =& \frac{k^3}{(2\pi)^2}F_1(k)P_0(k) \int_0^{\infty} r^2 P_0(kr)G_3(k, r, x) \mathrm{d}r
\nonumber \\ &+ \frac{k^3}{(2\pi)^2}G_1(k)P_0(k) \int_0^{\infty} r^2 P_0(kr)F_3(k, r, x) \mathrm{d}r~,
\\
\nonumber \\
P_{\theta\theta}^{13}(k) =& 2 \frac{k^3}{(2\pi)^2}G_1(k)P_0(k) \int_0^{\infty} r^2 P_0(kr)G_3(k, r, x) \mathrm{d}r~.\end{aligned}$$ With the SPT real-space power spectra computed up to 1-loop order, [`MGCOPTER`]{} can then input these to the TNS model to calculate the redshift-space power spectrum $P^{(s)}(k)$.
The TNS model for the redshift-space power spectrum $P^{(s)}$ as a function of scale $k$ and line-of-sight (LoS) angle parameter $\mu=\cos(\theta)$ is given by Eq. (18) of [@TNS], which we reproduce here: $$\begin{aligned}
\label{eq:TNS}
P^{(s)}(k, \mu) = D_{\rm FoG}\left[ k\mu f(k)\sigma_v \right] &\left\{ P_{\delta\delta}(k) + 2f(k)\mu^2 P_{\delta\theta}(k) + f^2(k)\mu^4 P_{\theta\theta}(k) \right.
\nonumber \\ &\left. + A(k, \mu) + B(k, \mu) \right\}~,\end{aligned}$$ where $D_{\rm FoG}$ is the Fingers-of-God damping function which we will discuss later. It is generally a function of $k$, $\mu$, the scale-dependent growth rate $f(k)$, and the velocity dispersion $\sigma_v$. $P_{\delta\delta}(k)$, $P_{\delta\theta}(k)$, and $P_{\theta\theta}(k)$ are the density auto-correlation, density-velocity divergence cross correlation, and the velocity divergence auto-correlation power spectra respectively. $A(k, \mu)$ and $B(k, \mu)$ are correction terms given by $$\begin{aligned}
A(k, \mu) &= k\mu f(k) \int \frac{d^3 \Vec{p}}{(2\pi)^3}\frac{p_z}{p^2}
\left\{ B_{\sigma}(\Vec{p}, \Vec{k} - \Vec{p}, -\Vec{k})-B_{\sigma}(\Vec{p}, \Vec{k}, -\Vec{k}-\Vec{p}) \right\},
\\
B(k, \mu) &= \left[k\mu f(k)\right]^2 \int \frac{d^3 \Vec{p}}{(2\pi)^3} F(\Vec{p})F(\Vec{k}-\Vec{p})~,\end{aligned}$$ where $B_{\sigma}$ is the cross bispectrum defined by $$\begin{aligned}
\left\langle \theta(\Vec{k}_1) \left\{ \delta(\Vec{k}_2) + f(k)\frac{k^2_{2z}}{k^2_2}\theta(\Vec{k}_2) \right\} \left\{ \delta(\Vec{k}_3) + f(k)\frac{k^2_{3z}}{k^2_3}\theta(\Vec{k}_3) \right\} \right\rangle
\nonumber \\ = (2\pi)^3 \delta_D(\Vec{k}_1+\Vec{k}_2+\Vec{k}_3) B_{\sigma}(\Vec{k}_1,\Vec{k}_2,\Vec{k}_3)
~,\end{aligned}$$ and $F(\Vec{p})$ is defined as $$\begin{aligned}
F(\Vec{p})=\frac{p_z}{p^2} \left\{ P_{\delta\theta}(p) + f(p)\frac{p_z^2}{p^2}P_{\theta\theta}(p) \right\}~.\end{aligned}$$ Throughout we use an exponential form for the Fingers-of-God damping factor: $$\begin{aligned}
D_{\rm FoG}\left[ k\mu f(k) \sigma_v \right] = \exp\left( -k^2\mu^2 f^2(k) \sigma_v^2 \right)~.\end{aligned}$$ The velocity dispersion $\sigma_v$ is a free parameter and needs to be fitted to some other $P^{(s)}$ data, for example from simulations as we do here. To do this, we minimise the likelihood function $$\begin{aligned}
-2\ln\mathcal{L} = \sum_n \sum_{l,l^{\prime}=0, 2} \left( P^{s}_{l,\ {\texttt{COPTER}}{}}(k_n) - P^{s}_{l,\ {\texttt{COLA}}{}}(k_n) \right) \mathrm{Cov}^{-1}_{l,l^{\prime}}(k_n) \left( P^{s}_{l^{\prime},\ {\texttt{COPTER}}{}}(k_n) - P^{s}_{l^{\prime},\ {\texttt{COLA}}{}}(k_n) \right)\end{aligned}$$ for the first two multipoles. Expressions for the covariance matrix between the different multipoles $\mathrm{Cov}_{l,l^{\prime}}$ are given in Appendix C of [@TNS]. We do not consider non-Gaussianity in this covariance but we do include the effect of shot noise. For the validation of our implementation of massive neutrinos in [`MGCOPTER`]{} presented in Section \[sec:Validation\] we assume an ideal survey with survey volume $V_{\rm s}=10~{\rm Gpc}^3/h^3$ and galaxy number density $\bar{n}_{\rm g}=4\times 10^{-3} h^3/{\rm Mpc}^3$. For the study of the degeneracy in Section \[sec:Degeneracy\], we want to model a slightly more realistic scenario, so we assume a DESI-like survey with $V_{\rm s}$ and $\bar{n}_{\rm g}$ as given in Table \[tab:surv\_param\] and redshift bin width $\Delta z=0.2$. These values are computed using the information for emission line galaxies (ELGs) in Table V of [@DESIparams].
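To make the fitting step concrete, here is a stripped-down sketch: mock monopole "data" are generated from a Kaiser-plus-Gaussian-FoG model with a known $\sigma_v$, and the minimisation recovers it. All ingredients (power-law spectrum, growth rate, 5 per cent errors) are illustrative, and the full multipole covariance is replaced by a diagonal one.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# toy set-up: Kaiser amplitude times the Gaussian FoG factor of the text,
# monopole only, with a crude 5 per cent diagonal covariance
k = np.linspace(0.01, 0.15, 15)              # h/Mpc
P_dd = 2e4 * (k / 0.05) ** -1.5              # hypothetical spectrum, (Mpc/h)^3
f = 0.7                                      # growth rate, scale-independent here
mu, w = np.polynomial.legendre.leggauss(32)  # angular quadrature on [-1, 1]

def P0_model(sigma_v):
    # monopole of exp(-k^2 mu^2 f^2 sigma_v^2) (1 + f mu^2)^2 P_dd
    D = np.exp(-(k[:, None] * mu * f * sigma_v) ** 2)
    return 0.5 * np.sum(w * D * (1.0 + f * mu**2) ** 2 * P_dd[:, None], axis=1)

sigma_true = 5.0                             # Mpc/h, generates the mock "data"
P0_data = P0_model(sigma_true)
var = (0.05 * P0_data) ** 2

chi2 = lambda s: np.sum((P0_model(s) - P0_data) ** 2 / var)
res = minimize_scalar(chi2, bounds=(0.0, 20.0), method='bounded')
print(res.x)  # recovers sigma_v = 5 to the optimiser's tolerance
```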
$z$ $V_{\rm s}$ (Gpc$^3$/$h^3$) $\bar{n}_{\rm g}$ ($h^3$/Mpc$^3$)
----- ----------------------------- -----------------------------------
0.5 3.40 2.95$\times 10^{-4}$
1.0 7.68 5.23$\times 10^{-4}$
1.5 10.14 1.71$\times 10^{-4}$
: Survey parameters for a DESI-like survey computed from the information for emission line galaxies (ELGs) in Table V of [@DESIparams]. These parameters are used in the computation of the covariance matrices for fitting $\sigma_v$ in [`MGCOPTER`]{} in the study of the degeneracy in Section \[sec:Degeneracy\].[]{data-label="tab:surv_param"}
Thus the TNS model can be used to compute $P^{(s)}(k, \mu)$ with the input of $P_{\delta\delta}$, $P_{\delta\theta}$, $P_{\theta\theta}$ at 1-loop order from [`MGCOPTER`]{}.
Adding modified gravity {#ssec:MG_imp}
-----------------------
Modified gravity models, like the $f(R)$ gravity model we consider here, have been previously added to [`COPTER`]{} in [@BoseKoyama2016], resulting in [`MGCOPTER`]{}. The 1-loop real-space power spectra are affected by the inclusion of modified gravity in SPT, but the TNS model of Eq. (\[eq:TNS\]) is still applicable without changes. We shall reproduce here the essentials of the implementation of modified gravity in the SPT part of [`MGCOPTER`]{}.
The modifications to gravity can be included in the Poisson equation, which up to $3^{\rm rd}$ order becomes $$\begin{aligned}
\label{eq:MGPoisson}
-\left( \frac{k}{aH} \right)^2 \Phi(\Vec{k}) = \frac{3\Omega_{\rm m}(a)}{2}\delta(\Vec{k})\mu(k, a) + S(k) ~,\end{aligned}$$ where $\mu(k, a)=G_{\rm eff}(k, a)/G$ is an effective Newton’s constant and the non-linear source term $S(\Vec{k})$ up to $3^{\rm rd}$ order is $$\begin{aligned}
S(\Vec{k}) = &\int \frac{d^3\Vec{k}_1 d^3\Vec{k}_2}{(2\pi)^3}\delta_D(\Vec{k}-\Vec{k}_{12})\gamma_2(\Vec{k}, \Vec{k}_1, \Vec{k}_2, a) \Delta(\Vec{k}_1)\Delta(\Vec{k}_2)
\nonumber \\ &+ \int \frac{d^3\Vec{k}_1 d^3\Vec{k}_2 d^3\Vec{k}_3}{(2\pi)^3}\delta_D(\Vec{k}-\Vec{k}_{123})\gamma_3(\Vec{k}, \Vec{k}_1, \Vec{k}_2, \Vec{k}_3, a) \Delta(\Vec{k}_1)\Delta(\Vec{k}_2)\Delta(\Vec{k}_3)~.\end{aligned}$$ Using the same form for the $n^{\rm th}$ order solutions as in \[eq:deltasol,eq:thetasol\], the new system of equations for the $n^{\rm th}$ order kernels is $$\begin{aligned}
\hat{\mathcal{L}} \begin{bmatrix} F_n(\Vec{k}_1,\ldots,\Vec{k}_n) \\ G_n(\Vec{k}_1,\ldots,\Vec{k}_n) \end{bmatrix}
= \sum_{j=1}^{n-1} \begin{bmatrix} -\alpha(\Vec{k}_{1 \ldots j}, \Vec{k}_{j+1 \ldots n}) G_j(\Vec{k}_1,\ldots,\Vec{k}_j) F_{n-j}(\Vec{k}_{j+1},\ldots,\Vec{k}_n) \\ -\frac{1}{2}\beta(\Vec{k}_{1 \ldots j}, \Vec{k}_{j+1 \ldots n}) G_j(\Vec{k}_1,\ldots,\Vec{k}_j) G_{n-j}(\Vec{k}_{j+1},\ldots,\Vec{k}_n) - N_n(\Vec{k}, \Vec{k}_1,\ldots,\Vec{k}_n) \end{bmatrix}~,\end{aligned}$$ where $$\begin{aligned}
\hat{\mathcal{L}} = \begin{bmatrix} a\frac{d}{da} & 1 \\ \ \ \frac{3\Omega_{\rm m}}{2}\mu(k, a)\ \ \ \ & a\frac{d}{da} + \left( 2 + \frac{aH^{\prime}}{H} \right) \end{bmatrix}~,\end{aligned}$$ and $$\begin{aligned}
N_2 = \gamma_2(\Vec{k}, \Vec{k}_1, \Vec{k}_2)F_1(\Vec{k}_1)F_1(\Vec{k}_2)~,\end{aligned}$$ $$\begin{aligned}
N_3 = &\gamma_2(\Vec{k}, \Vec{k}_1, \Vec{k}_{23})F_1(\Vec{k}_1)F_2(\Vec{k}_2, \Vec{k}_3)
+ \gamma_2(\Vec{k}, \Vec{k}_{12}, \Vec{k}_3)F_2(\Vec{k}_1, \Vec{k}_2)F_1(\Vec{k}_3)
\nonumber \\ &+ \gamma_3(\Vec{k}, \Vec{k}_1, \Vec{k}_2, \Vec{k}_3)F_1(\Vec{k}_1)F_1(\Vec{k}_2)F_1(\Vec{k}_3)~.\end{aligned}$$
In this work we investigate Hu-Sawicki $f(R)$ gravity, which has a single free parameter $|f_{R0}|$. For this theory, the extra terms in Eq. (\[eq:MGPoisson\]) are given as $$\begin{aligned}
\mu(k, a) = 1 + \left( \frac{k}{a} \right)^2 \frac{1}{3\Pi(k, a)}~,\end{aligned}$$ $$\begin{aligned}
\gamma_2(k, \Vec{k}_1, \Vec{k}_2, a) = -& \frac{9}{48} \left( \frac{kH_0}{aH} \right)^2 \left( \frac{H_0^2\Omega_{\mathrm{m}0}}{a^3} \right)^2
\frac{(\Omega_{\mathrm{m}0}-4a^3(\Omega_{\mathrm{m}0}-1))^5}{a^{15}|f_{R0}|^2(3\Omega_{\mathrm{m}0}-4)^4}
\nonumber \\ &\times\frac{1}{\Pi(k, a)\Pi(k_1, a)\Pi(k_2, a)}~,\end{aligned}$$ $$\begin{aligned}
\gamma_3(k, \Vec{k}_1, \Vec{k}_2, \Vec{k}_3, a) =& \left( \frac{kH_0}{aH} \right)^2 \left( \frac{H_0^2\Omega_{\mathrm{m}0}}{a^3} \right)^3
\frac{1}{36\Pi(k, a)\Pi(k_1, a)\Pi(k_2, a)\Pi(k_3, a)\Pi(k_{23}, a)}
\nonumber \\ &\times \left[ -\frac{45}{8}\frac{\Pi(k_{23}, a)}{a^{21}|f_{R0}|^3} \left(\frac{(\Omega_{\mathrm{m}0}-4a^3(\Omega_{\mathrm{m}0}-1))^7}{(3\Omega_{\mathrm{m}0}-4)^6}\right) \right.
\nonumber \\ &\left. + H_0^2 \left( \frac{9}{4a^{15}|f_{R0}|^2} \frac{(\Omega_{\mathrm{m}0}-4a^3(\Omega_{\mathrm{m}0}-1))^5}{(3\Omega_{\mathrm{m}0}-4)^4} \right)^2 \right] ~,\end{aligned}$$ where $$\begin{aligned}
\Pi(k, a) = \left( \frac{k}{a} \right)^2 + \frac{H_0^2 (\Omega_{\mathrm{m}0}-4a^3(\Omega_{\mathrm{m}0}-1))^3}{2|f_{R0}|a^9(3\Omega_{\mathrm{m}0}-4)^2}~.\end{aligned}$$
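To make the role of these expressions concrete, the sketch below implements $\Pi(k, a)$ and $\mu(k, a)$ for Hu-Sawicki $f(R)$ and integrates the $n=1$ limit of the kernel system above in a $\Lambda$CDM background (using $aH^{\prime}/H = -\tfrac{3}{2}\Omega_{\rm m}(a)$) to obtain the scale-dependent linear growth rate. Here $H_0$ is in units of $h/{\rm Mpc}$ and the sign of $G_1$ follows the $\theta$ convention of the continuity equation; this is an illustration, not the [`MGCOPTER`]{} implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

H0, Om0 = 1.0 / 2997.92458, 0.3175      # H_0 in h/Mpc (c = 1) and Omega_m0

def Pi(k, a, fR0):
    num = (Om0 - 4.0 * a**3 * (Om0 - 1.0)) ** 3
    return (k / a) ** 2 + H0**2 * num / (2.0 * fR0 * a**9 * (3.0 * Om0 - 4.0) ** 2)

def mu_eff(k, a, fR0):
    # G_eff/G: -> 1 on large scales, -> 4/3 deep inside the Compton wavelength
    return 1.0 + (k / a) ** 2 / (3.0 * Pi(k, a, fR0))

def growth_rate(k, fR0):
    # n = 1 limit of the kernel system: a F' = -G,
    # a G' = -(2 + aH'/H) G - (3/2) Omega_m(a) mu(k, a) F
    def rhs(N, y):
        a = np.exp(N)
        Om = Om0 / (Om0 + (1.0 - Om0) * a**3)
        F, G = y
        return [-G, -(2.0 - 1.5 * Om) * G - 1.5 * Om * mu_eff(k, a, fR0) * F]
    a_i = 1e-2                           # start deep in matter domination
    sol = solve_ivp(rhs, [np.log(a_i), 0.0], [a_i, -a_i], rtol=1e-8, atol=1e-10)
    F1, G1 = sol.y[:, -1]
    return -G1 / F1                      # f(k, z=0) in this sign convention

print(growth_rate(1e-3, 1e-5))  # large scales: close to the GR value ~0.53
print(growth_rate(1.0, 1e-5))   # small scales: enhanced by the fifth force
```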
Adding massive neutrinos {#ssec:mass_nu_imp}
------------------------
We have added support for massive neutrinos to the code [`MGCOPTER`]{} developed in [@BoseKoyama2016]. Note that massive neutrinos were also added to the original [`COPTER`]{} code in [@COPTERNeutrinos] using a similar approach. In our implementation, we follow the method of [@Saito:2008bp; @Saito:2009ah] and include massive neutrinos at the level of the linear real-space power spectra $P^{\rm L}$, $P_{\delta\theta, \mathrm{L}}=f(k)P^{\rm L}$, and $P_{\theta\theta, \mathrm{L}}=f^2(k)P^{\rm L}$ without modifying the higher order SPT kernels. This allows us to take $P^{\rm L}(k)$ and $f(k)$ from `CAMB` [@CAMB] (or `MGCAMB` [@MGCAMB1; @MGCAMB2] for MG+$m_{\nu}$) as input to [`MGCOPTER`]{}; note that a small modification to `CAMB`/`MGCAMB` is necessary to get scale-dependent growth rate $f(k)$ as output. This method for including massive neutrinos is general enough to handle the various hierarchies of neutrino mass eigenstates [@NeutrinoMassHierarchy], but for simplicity in the results that follow we have modelled the massive neutrinos as a single massive eigenstate with mass $m_{\nu}$ and two massless eigenstates.
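Schematically, the neutrino dependence enters only through the linear inputs. With hypothetical output arrays from `CAMB`/`MGCAMB` (the shapes below are toys), the three linear spectra used by the code are built as:

```python
import numpy as np

# hypothetical CAMB/MGCAMB output at the target redshift (toy shapes):
# wavenumbers k [h/Mpc], linear spectrum P_L(k, z) [(Mpc/h)^3] and
# scale-dependent growth rate f(k, z)
k = np.logspace(-3, 0, 300)
P_L = 1e4 * (k / 0.05) / (1.0 + (k / 0.05) ** 2) ** 1.25
f = 0.55 + 0.02 * np.tanh(np.log(k / 0.1))  # mild, neutrino-like scale dependence

# linear inputs to the 1-loop integrals; the SPT kernels are left unmodified
P_dd_L = P_L
P_dt_L = f * P_L
P_tt_L = f ** 2 * P_L
```

At linear order the cross-spectrum is the geometric mean of the two auto-spectra, $P^2_{\delta\theta,{\rm L}} = P_{\delta\delta,{\rm L}}P_{\theta\theta,{\rm L}}$, which is a convenient sanity check on the inputs.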
The expressions for the 1-loop power spectra corrections in terms of the $z=0$ linear power spectrum $P_0(k)=P^{\rm L}(k, z=0)$ were given in Section \[ssec:LCDM\_imp\]. For our implementation, we want to take $P^{\rm L}(k, z)$ and $f(k, z)$ at the intended [`MGCOPTER`]{} output redshift from `CAMB`/`MGCAMB` and use them as input to [`MGCOPTER`]{}. Therefore we need to rewrite the expressions for the 1-loop power spectra in terms of $P^{\rm L}(k, z)$ instead of $P_0(k)$, using $F_1(k)=G_1(k)/f(k, z)$ and $P_0(k)=P^{\rm L}(k, z)/F_1^2(k)=f^2(k, z)P^{\rm L}(k, z)/G_1^2(k,z)$. The 22 correction terms are\
$$\begin{aligned}
P_{\delta\delta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2}\int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P^{\rm L}(kr, z)P^{\rm L}(k\sqrt{1+r^2-2rx}, z)
\nonumber \\ &\times \frac{F_2^2(k, r, x)}{F_1^2(kr)F_1^2(k\sqrt{1+r^2-2rx})} \mathrm{d}x~,
\\
\nonumber \\
P_{\delta\theta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2} \int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P^{\rm L}(kr, z)P^{\rm L}(k\sqrt{1+r^2-2rx}, z)
\nonumber \\ &\times f(kr, z) f(k\sqrt{1+r^2-2rx}, z) \frac{G_2(k, r, x)}{G_1(kr) G_1(k\sqrt{1+r^2-2rx})}
\nonumber \\ &\times \frac{F_2(k, r, x)}{F_1(kr)F_1(k\sqrt{1+r^2-2rx})}\mathrm{d}x~,
\\
\nonumber \\
P_{\theta\theta}^{22}(k) =& 2 \frac{k^3}{(2\pi)^2} \int_0^{\infty}r^2 \mathrm{d}r \int_{-1}^1 P^{\rm L}(kr, z)P^{\rm L}(k\sqrt{1+r^2-2rx}, z)
\nonumber \\ &\times f^2(kr, z)f^2(k\sqrt{1+r^2-2rx}, z) \frac{G_2^2(k, r, x)}{G_1^2(kr) G_1^2(k\sqrt{1+r^2-2rx})} \mathrm{d}x~,\end{aligned}$$\
while the 13 correction terms are\
$$\begin{aligned}
P_{\delta\delta}^{13}(k) =& 2 \frac{k^3}{(2\pi)^2} P^{\rm L}(k, z) \int_0^{\infty} r^2 P^{\rm L}(kr, z) \frac{F_3(k, r, x)}{F_1(k)F_1^2(kr)} \mathrm{d}r~,
\\
\nonumber \\
P_{\delta\theta}^{13}(k) =& \frac{k^3}{(2\pi)^2} P^{\rm L}(k, z) \int_0^{\infty} r^2 P^{\rm L}(kr, z) f(k, z)f^2(kr, z) \frac{G_3(k, r, x)}{G_1(k)G_1^2(kr)} \mathrm{d}r
\nonumber \\ &+ \frac{k^3}{(2\pi)^2}f(k, z)P^{\rm L}(k, z) \int_0^{\infty} r^2 P^{\rm L}(kr, z) \frac{F_3(k, r, x)}{F_1(k)F_1^2(kr)} \mathrm{d}r~,
\\
\nonumber \\
P_{\theta\theta}^{13} =& 2 \frac{k^3}{(2\pi)^2}P^{\rm L}(k, z) \int_0^{\infty} r^2 P^{\rm L}(kr, z) f^2(k, z)f^2(kr, z) \frac{G_3(k, r, x)}{G_1(k)G_1^2(kr)} \mathrm{d}r~.\end{aligned}$$\
Note that in these expressions the only terms to contain massive neutrinos are $P^{\rm L}$ and $f$; all of the kernels $F_i$ and $G_i$ are unmodified. We have implemented these equations in [`MGCOPTER`]{}.
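Since the rewriting is purely algebraic, it can be checked pointwise: with any smooth toy choices for $F_1$, $F_2$ and $P_0$ (those below are illustrative, not the code's numerically computed kernels), the rewritten $P^{22}_{\delta\delta}$ integrand must reproduce the original one once $P^{\rm L} = F_1^2 P_0$ is substituted. A minimal check:

```python
import numpy as np

# toy ingredients; the identity being checked is algebraic, so any smooth
# choices will do
F1 = lambda q: 1.0 + 0.1 * np.log1p(q)                      # toy linear kernel
F2 = lambda k, r, x: (3*r + 7*x - 10*r*x**2) / (14*r*(1 + r**2 - 2*r*x))
P0 = lambda q: 1e4 * (q / 0.05) * np.exp(-(q / 0.3) ** 2)   # toy z = 0 spectrum
PL = lambda q: F1(q) ** 2 * P0(q)                           # P^L(q, z) = F1^2 P0

k, r, x = 0.1, 0.7, 0.3
s = np.sqrt(1.0 + r**2 - 2.0 * r * x)

# original integrand (in terms of P0) vs rewritten integrand (in terms of P^L)
orig = P0(k * r) * P0(k * s) * F2(k, r, x) ** 2
rewr = (PL(k * r) * PL(k * s)
        * F2(k, r, x) ** 2 / (F1(k * r) ** 2 * F1(k * s) ** 2))
print(np.isclose(orig, rewr))  # True
```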
Validation {#sec:Validation}
==========
In order to validate our implementation of massive neutrinos in the [`MGCOPTER`]{} code, we have tested its output against results from the fast, approximate [*N*]{}-body code [`MG-PICOLA`]{}, which is a modified version of [`L-PICOLA`]{} [@LPICOLA] that includes modified gravity [@Winther2017] and massive neutrinos [@Wrightetal2017] and has been tested against full [*N*]{}-body simulations. In the legends of the figures that follow we shall refer to our modified [`MGCOPTER`]{} code simply as [`COPTER`]{}, and the [`MG-PICOLA`]{} code as [`COLA`]{}.
Throughout, we use paired-fixed [`MG-PICOLA`]{} simulations [@PairedFixed]: we produce two simulations whose initial conditions are *fixed*, meaning the initial amplitudes of the Fourier modes of the density field are set to those of the ensemble-average power spectrum, and *paired*, meaning the initial modes of the second simulation are mirrored relative to those of the first. This procedure significantly reduces the variance that arises from the sparse sampling of wavemodes without the need for averaging over a large number of density field realisations, and has been shown not to bias the recovery of the mean properties of the Gaussian ensemble, despite the non-Gaussianity introduced by the fixing [@PairedFixedAccuracy]. However, we also ran five additional [`MG-PICOLA`]{} simulations for each model with randomised realisations of the initial density field. The standard deviation of the power spectra of these additional five simulations is used for the error bars in the figures below unless explicitly stated otherwise. The modified gravity model considered here is the Hu-Sawicki $f(R)$ model, which has one free parameter $|f_{R0}|$; we refer to $|f_{R0}|=10^{-5}$ and $|f_{R0}|=10^{-4}$ as F5 and F4 respectively. The velocity divergence field $\theta$ has been computed using the `DTFE` code [@DTFE]. The cosmological parameters used in this paper are the same as in [@Baldi2014]: $h=0.671$, $\Omega_{\rm m}=0.3175$, $\Omega_{\rm b}=0.049$, $A_s=2.215\times 10^{-9}$, and $n_{\rm s}=0.966$.
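The paired-fixed construction itself is simple; a toy one-dimensional sketch is given below (actual simulations apply this on a three-dimensional Fourier grid with the appropriate Hermitian symmetry; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
k = np.fft.rfftfreq(n)                                # toy 1-D wavenumbers
P = np.where(k > 0, 0.1 / np.maximum(k, k[1]), 0.0)   # toy power spectrum

# "fixed": amplitudes set to sqrt(P) instead of being Rayleigh-drawn;
# the phases remain random
phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
delta_1 = np.sqrt(P) * np.exp(1j * phases)

# "paired": every phase shifted by pi, i.e. delta_2(k) = -delta_1(k)
delta_2 = np.sqrt(P) * np.exp(1j * (phases + np.pi))
```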
Note that a recent version update of `MGCAMB` improved the handling of massive neutrinos [@MGCAMB3]. Although our results were produced using the previous version of `MGCAMB`, we have verified that for the parameters we use the difference in the linear power spectrum between the two versions is negligible.
We first study the comparison between [`MGCOPTER`]{} and [`MG-PICOLA`]{} in the real-space power spectra, in \[fig:real,fig:real\_NLL,fig:real\_ratio\]. Figure \[fig:real\] shows the real-space non-linear power spectra at $z=1$ computed with both [`MG-PICOLA`]{} and [`MGCOPTER`]{}. We display the density auto-correlation $P_{\delta\delta}$, the velocity divergence auto-correlation $P_{\theta\theta}$, and the density-velocity divergence cross-correlation $P_{\delta\theta}$, in the form $k^{3/2}P_{ij}$ for ease of viewing, for GR, F5, and F4 each with $0.0$eV, $0.06$eV, and $0.2$eV neutrinos. The error bars on the (paired-fixed) [`MG-PICOLA`]{} points are the standard deviation of the 5 additional (non-paired-fixed) [`MG-PICOLA`]{} simulations. In all cases, [`MGCOPTER`]{} reproduces the results of the [`MG-PICOLA`]{} simulations very well up to the start of the quasi-non-linear scale around $k=0.1~h/{\rm Mpc}$ where perturbation theory begins to break down. The agreement between [`MGCOPTER`]{} and [`MG-PICOLA`]{} persists to larger $k$ values for $P_{\theta\theta}$ and $P_{\delta\theta}$ than $P_{\delta\delta}$, which is consistent with the behaviour seen when [`MGCOPTER`]{} was compared to full [*N*]{}-body simulations in Fig. 10 of [@BoseKoyama2016].
{width="\textwidth"}
Figure \[fig:real\_NLL\] displays the same data but presented as the ratio of the full non-linear power spectra to their linear components, which helps to show where the modelling of non-linearities with [`MGCOPTER`]{} becomes inaccurate. Figure \[fig:real\_ratio\] again shows the same data but presented as the ratio of the power-spectra with and without massive neutrinos for both the $0.06$eV and $0.2$eV neutrinos. The scale up to which [`MGCOPTER`]{} closely follows the results of the [`MG-PICOLA`]{} simulations is marginally improved due to taking the ratio between power spectra in two models.
{width="\textwidth"}
{width="\textwidth"}
Next, we look at the comparison between [`MG-PICOLA`]{} and [`MGCOPTER`]{} with $\sigma_{v}$ fitted to the [`MG-PICOLA`]{} simulations in the non-linear redshift-space power spectra in \[fig:red,fig:red\_NLL,fig:red\_ratio\]. Figure \[fig:red\] shows the monopole $P_0$ and quadrupole $P_2$ of the redshift-space power spectra for GR, F5, and F4 gravity models each with $0.0$eV, $0.06$eV, and $0.2$eV neutrinos. We display the results computed from paired-fixed [`MG-PICOLA`]{} simulations and [`MGCOPTER`]{} with the TNS velocity dispersion parameter $\sigma_v$ fitted to the [`MG-PICOLA`]{} simulations up to $k=0.15~h/{\rm Mpc}$ in the form $k^{3/2}P_i(k)$; the figure includes the best-fitting values of $\sigma_v$ (expressed in RSD displacement units ${\rm Mpc}/h$) and the reduced $\chi^2$ for each model. The error bars on the [`MG-PICOLA`]{} points are taken from the inverse covariance matrices used in the $\sigma_v$ fitting procedure, whose computation is described at the end of Section \[ssec:LCDM\_imp\]. The $\sigma_v$ fitting procedure prioritises recovering the monopole $P_0$, and thus the agreement between [`MGCOPTER`]{} and [`MG-PICOLA`]{} is slightly worse for the quadrupole $P_2$. As expected, for each gravity model increasing the mass of the neutrinos leads to a decrease in the best-fitting value of $\sigma_v$ and the quality of the fit improves, while for a fixed neutrino mass increasing the strength of the modification of gravity from GR to F5 and then F4 leads to an increase in the best-fitting value of $\sigma_v$ and a slightly worse quality of fit. The reason for this behaviour is that the enhancement of gravity increases the velocities of galaxies around an overdensity, thus increasing the non-linear damping, while massive neutrinos have the opposite effect due to their suppression of structure formation. The quality of the fit is better when the non-linearity is smaller and vice versa.
However, in all cases the quality of the fit of [`MGCOPTER`]{} to [`MG-PICOLA`]{} is good up to quasi-non-linear scales.
{width="\textwidth"}
Figure \[fig:red\_NLL\] displays the same data as Fig. \[fig:red\] but presented as the ratio of the full non-linear multipoles to their linear counterparts computed with the Kaiser RSD model [@Kaiser1987], while Fig. \[fig:red\_ratio\] presents the data of Fig. \[fig:red\] as the ratio of the non-linear power-spectra multipoles with and without massive neutrinos for both the $0.06$eV and $0.2$eV neutrinos. The error bars on the [`MG-PICOLA`]{} points in these two figures represent the standard deviation of the 5 additional [`MG-PICOLA`]{} simulations. As in real-space, the scale up to which [`MGCOPTER`]{} closely follows the results of the [`MG-PICOLA`]{} simulations is slightly improved due to taking the ratio between power spectra in two models.
{width="\textwidth"}
{width="\textwidth"}
We also quantify the ability of [`MGCOPTER`]{} to recover the redshift-space multipole results of [`MG-PICOLA`]{} through $\chi^2_{m_{\nu}}$, the $\chi^2$ difference between the redshift-space multipoles with and without neutrino mass. In Fig. \[fig:chi\_mnu\] we display $\chi^2_{m_{\nu}}$ as a function of the maximum comparison scale $k_{\rm max}$ for GR, F5, and F4 each with $0.06$eV and $0.2$eV neutrinos at $z=1$. Here, [`MGCOPTER`]{} is fitted to the [`MG-PICOLA`]{} simulations up to $k_{\rm max}$ with the covariance computed assuming an ideal survey as described at the end of Section \[ssec:LCDM\_imp\]. The agreement in $\chi^2_{m_{\nu}}$ between [`MG-PICOLA`]{} and [`MGCOPTER`]{} fitted to [`MG-PICOLA`]{} is excellent in all cases. This implies that [`MGCOPTER`]{} with $\sigma_v$ fitted to simulations is capable of capturing the effect of massive neutrinos accurately.
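A toy version of this cumulative statistic, using monopole-only spectra and a diagonal Gaussian covariance with shot noise (the paper uses the full multipole covariance of Appendix C of [@TNS]; all numbers here are illustrative), is:

```python
import numpy as np

k = np.linspace(0.01, 0.3, 30)                    # h/Mpc
dk = k[1] - k[0]
V = 10.0 * 1e9                                     # 10 (Gpc/h)^3 in (Mpc/h)^3
nbar = 4e-3                                        # h^3/Mpc^3
N_k = V * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi) ** 3   # modes per k bin

P_a = 1e4 * (k / 0.1) ** -1.5                      # toy massless-neutrino monopole
P_b = P_a * (1.0 - 0.05 * np.tanh(k / 0.05))       # toy suppressed monopole
var = (2.0 / N_k) * (P_a + 1.0 / nbar) ** 2        # diagonal Gaussian variance

chi2_cum = np.cumsum((P_a - P_b) ** 2 / var)       # chi^2 as a function of k_max
```

By construction $\chi^2(k_{\rm max})$ is non-decreasing; the interesting question, as in Fig. \[fig:chi\_mnu\], is how fast it grows with $k_{\rm max}$.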
{width="\textwidth"}
Degeneracy {#sec:Degeneracy}
==========
With the inclusion of modified gravity and massive neutrinos in [`MGCOPTER`]{} the degeneracy between the two effects can be investigated.
Real- and redshift-space
------------------------
We start by studying the degeneracy between modified gravity and massive neutrinos in real space.
{width="\textwidth"}
In Fig. \[fig:degen\_real\_comp\] we display the ratio of real-space power spectra in F4 gravity with $0.06$eV neutrinos in the left panel and $0.2$eV neutrinos in the right panel to a fiducial model which we take to be GR with $0.06$eV neutrinos at $z=1$. We show results for the density auto-correlation $P_{\delta\delta}$, the velocity divergence auto-correlation $P_{\theta\theta}$, and the density-velocity divergence cross-correlation $P_{\delta\theta}$. The results of paired-fixed [`MG-PICOLA`]{} simulations and of [`MGCOPTER`]{} are plotted. The error bars on the [`MG-PICOLA`]{} results are computed using the standard deviation over the five additional simulations. In all cases, the results of [`MGCOPTER`]{} agree well with those of [`MG-PICOLA`]{} up to quasi-non-linear scales around $k=0.1~h/{\rm Mpc}$. The left panel, where the neutrino masses are the same in both GR and F4, shows the scale-dependent enhancement of the real-space power spectra provided by F4 gravity. However, when heavier neutrinos are added to the F4 case, as in the right panel, this enhancement is opposed by the suppression effect of the massive neutrinos. Indeed, the right panel shows that $P_{\delta\delta}$ is a poor probe to distinguish between GR with $0.06$eV neutrinos and F4 with $0.2$eV neutrinos in this particular case. However, the two models remain distinguishable in $P_{\delta\theta}$ and $P_{\theta\theta}$, showing that velocity information has the potential to break the degeneracy between modified gravity and massive neutrinos. This was recently shown using the results of full [*N*]{}-body simulations [@Hagstotzetal2019]. However, neither $P_{\delta\theta}$ nor $P_{\theta\theta}$ can be measured directly by observations. Instead, it is necessary to extract the velocity information that is encoded within redshift-space distortions, and it is to this we turn our attention. We shall refer to GR with $0.06$eV neutrinos and F4 with $0.2$eV neutrinos as our two degenerate models.
{width="\textwidth"}
In Fig. \[fig:degen\_red\] we plot the redshift-space monopole and quadrupole in F4 gravity with $0.2$eV neutrinos normalised to GR with $0.06$eV neutrinos computed with both [`MGCOPTER`]{} and [`MG-PICOLA`]{}. For each model the [`MGCOPTER`]{} result has been produced by fitting $\sigma_v$ to the paired-fixed [`MG-PICOLA`]{} simulation up to $k=0.15~h/{\rm Mpc}$ with the covariance computed assuming a DESI-like survey as detailed at the end of Section \[ssec:LCDM\_imp\]. The error bars on the [`MG-PICOLA`]{} results are computed using the standard deviation over five simulations with a box size of $1~{\rm Gpc}/h$ for each model. Firstly, this plot shows that modelling the redshift-space monopole and quadrupole using [`MGCOPTER`]{} with $\sigma_v$ fitted to [`MG-PICOLA`]{} simulations works well. Secondly, for our degenerate models, while the monopole is still a poor probe for distinguishing between the models, the quadrupole, by virtue of the velocity information it encodes, displays differences between the two models and thus has the potential to break the degeneracy.
Redshift evolution
------------------
Our method also allows us to investigate how the degeneracy evolves with redshift in both real- and redshift-space.
{width="\textwidth"}
{width="\textwidth"}
In Fig. \[fig:degen\_real\_zevo\] we show the real-space power spectra in the ratio between the two degenerate models as in the right panel of Fig. \[fig:degen\_real\_comp\] but at $z=0.5$ (left panel) and $z=1.5$ (right panel). In Fig. \[fig:degen\_red\_zevo\] we show the redshift-space power spectrum multipoles in the ratio between the two degenerate models as in Fig. \[fig:degen\_red\] but at $z=0.5$ (left panel) and $z=1.5$ (right panel). These figures demonstrate that the degeneracy evolves significantly with redshift, both in real- and redshift-space. Figure \[fig:degen\_real\_zevo\] shows that while our two degenerate models had similar matter power spectra at $z=1$ it is easier to distinguish between the two models with the matter power spectrum at other redshifts.
{width="\textwidth"}
In Fig. \[fig:chi\_MGmnu\_zevo\] we plot the difference between the redshift-space multipoles in the two degenerate models quantified through $\chi^2_{\mathrm{MG}+m_{\nu}}$ as a function of the maximum comparison scale $k_{\rm max}$. We show $\chi^2_{\mathrm{MG}+m_{\nu}}$ as computed by both [`MG-PICOLA`]{} and [`MGCOPTER`]{} with $\sigma_v$ fitted to [`MG-PICOLA`]{} up to $k_{\rm max}$ with the covariance computed assuming a DESI-like survey as detailed at the end of Section \[ssec:LCDM\_imp\]. The results from both methods agree with each other very well. We plot $\chi^2_{\mathrm{MG}+m_{\nu}}$ at three redshifts $z=1.5,\ 1.0,\ 0.5$ and it is clear from these results, along with those in \[fig:degen\_real\_zevo,fig:degen\_red\_zevo\], that the ability to distinguish between the redshift-space multipoles of these two models evolves with redshift. This emphasises the potential for data at multiple redshifts to break the degeneracy. The tomographic nature of weak lensing observations makes them well suited to this task, and the combination of redshift-space distortion measurements with weak lensing observations could prove to be one of the best probes for breaking the modified gravity-massive neutrino degeneracy.
Conclusions {#sec:Conclusion}
===========
In this paper, we have studied the potential for redshift-space distortions to break the degeneracy between the enhancement of structure growth provided by modifications to gravity and suppression of structure growth due to massive neutrinos, at the level of the dark matter field. For combinations of modified gravity parameters and neutrino masses that have similar matter power spectra at a given redshift, the growth rates are different and will remain distinguishable. This degeneracy-breaking growth rate information is encoded via velocities into redshift-space distortions. To carry out this work, we have modelled the effects of both modified gravity and massive neutrinos on real- and redshift-space power spectra with Standard Perturbation Theory through the code [`MGCOPTER`]{}. We find the implementation of modified gravity and massive neutrinos in [`MGCOPTER`]{} produces a good agreement for both real- and redshift-space power spectra with the simulation results from the code [`MG-PICOLA`]{} in the case of Hu-Sawicki $f(R)$ gravity.
We have then investigated the degeneracy and shown that the quadrupole of the redshift-space power spectrum retains enough of the velocity information to distinguish between GR with light neutrinos and Hu-Sawicki $f(R)$ with heavy neutrinos. The logical next step is to confirm that we can use the computationally inexpensive modelling of RSD in [`MGCOPTER`]{} to recover a fiducial combination of $|f_{R0}|$ and $m_{\nu}$ from a simulation. An important open question for this endeavour is whether the process of fitting $\sigma_v$ introduces a new degeneracy, where $\sigma_v$ can dampen the redshift-space multipoles of a model with incorrect $|f_{R0}|$ and $m_{\nu}$ values in a way that makes them difficult to distinguish from those of the fiducial simulation. Future work will focus on extending the modelling of RSD with modified gravity and massive neutrinos to halos and galaxies in order to bring this method closer to being able to use RSD observations to jointly constrain modified gravity and massive neutrinos.
We have also briefly studied how the degeneracy evolves with redshift. There is a clear evolution of the degeneracy with redshift even for the matter power spectrum; for combinations of modified gravity and neutrino mass parameters that give comparable matter power spectra at one redshift, the matter power spectra at another redshift are in general likely to be distinguishable. The tomographic nature of weak lensing is particularly well suited to investigating this approach to breaking the degeneracy. Alternatively, if modified gravity is only a low redshift effect, a constraint on neutrino mass from clustering at higher redshift, for example from HI intensity mapping [@HINeutrinos], would help break the degeneracy.
Acknowledgement {#acknowledgement .unnumbered}
===============
BSW is supported by a U.K. Science and Technology Facilities Council (STFC) research studentship. KK and HAW are supported by the European Research Council through grant 646702 (CosTesGrav). KK is also supported by the UK Science and Technology Facilities Council grant ST/N000668/1. GBZ is supported by NSFC Grants 1171001024 and 11673025, and the National Key Basic Research and Development Program of China (No. 2018YFA0404503). Numerical computations for this research were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPNet, and the University of Portsmouth.
|
---
abstract: 'Networks are useful representations of many systems with interacting entities, such as social, biological and physical systems. Characterizing the meso-scale organization, i.e. the community structure, is an important problem in network science. Community detection aims to partition the network into sets of nodes that are densely connected internally but sparsely connected to other dense sets of nodes. Current work on community detection mostly focuses on static networks. However, many real world networks are dynamic, i.e. their structure and properties change with time, requiring methods for dynamic community detection. In this paper, we propose a new stochastic block model (SBM) for modeling the evolution of community membership. Unlike existing SBMs, the proposed model allows each community to evolve at a different rate. This new model is used to derive a maximum a posteriori estimator for community detection, which can be written as a constrained spectral clustering problem. In particular, the transition probabilities for each community modify the graph adjacency matrix at each time point. This formulation provides a relationship between statistical network inference and spectral clustering for dynamic networks. The proposed method is evaluated on both simulated and real dynamic networks.'
address: 'Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI.'
bibliography:
- 'apos\_library.bib'
title: Constrained Spectral Clustering for Dynamic Community Detection
---
Community Detection, Dynamic Networks, Stochastic Block Model, Spectral Clustering
Introduction {#sec:intro}
============
Community detection (CD) partitions the nodes of a network such that nodes are densely connected within their respective communities while being sparsely connected across communities [@fortunato_community_2010]. CD has important applications in recommendation systems [@reddy2002graph], social networks [@moody2003structural] and brain connectomics [@sporns_modular_2016]. Recently, CD methods have been developed for networks that change with time, i.e. *dynamic networks* [@holme_temporal_2012]. Compared to *static networks*, CD methods for dynamic networks aim to partition nodes at each time as well as to track the changes in the partitions over time [@rossetti_community_2018].
CD in static networks is commonly formulated as the optimization of a quality function. Some of the well-known quality functions are modularity [@newman_finding_2004], normalized and ratio cuts [@von_luxburg_tutorial_2007], InfoMap [@rosvall2008maps] and likelihood or posterior distributions defined based on statistical inference [@snijders_estimation_1997; @karrer_stochastic_2011]. These functions can be divided into two categories [@pamfil_relating_2018]. The first category includes functions that are defined heuristically such as modularity or cut based methods. Functions in the second category are based on statistical network models, e.g. *stochastic block model (SBM)* and *degree-corrected SBM (DCSBM)*, and likelihood or posterior distributions are defined as quality functions.
Dynamic CD methods are mostly based on extensions of aforementioned quality functions from the static to the dynamic case. The early work in dynamic CD named *evolutionary spectral clustering (EvoSC)* [@chi_evolutionary_2007], defines a quality function at each time point as $\alpha CS + \beta CT$ where $CS$ is *snapshot cost*, $CT$ is *temporal cost* and $\alpha,\ \beta$ are parameters that weigh the two terms. This formulation can be thought of as constraining the snapshot cost of a time point with community structure of previous time points. Similarly, modularity optimization [@mucha_community_2010; @pamfil_relating_2018], statistical methods [@yang_detecting_2011; @xu_dynamic_2014; @ghasemian_detectability_2016] and InfoMap [@peixoto_modelling_2017] have been extended to dynamic networks.
Recently, there have been attempts to show that heuristic based optimization methods are equivalent to statistical inference under some conditions. Newman et al. [@newman_spectral_2013] showed that spectral approximation of modularity, normalized cut and statistical inference are equivalent to each other for a particular choice of parameters. Similarly, equivalence between spectral clustering, modularity maximization and non-negative matrix factorization is shown in [@ma_semi-supervised_2018]. Lastly, in [@young_universality_2018] statistical inference is shown to be universal, which means most of the quality functions developed for CD are indeed special cases of statistical inference. This work is extended to dynamic networks in [@pamfil_relating_2018], where dynamic modularity function defined in [@mucha_community_2010] is shown to be equivalent to statistical inference methods.
Following this line of work, we propose a method for dynamic CD, referred to as *constrained dynamic spectral clustering (CDSC)*. We start by defining a dynamic DCSBM and the corresponding posterior distribution. We then show that maximizing the posterior distribution can be solved by a dynamic spectral clustering algorithm. The proposed work makes some significant contributions to the literature. First, the proposed dynamic DCSBM allows each community to evolve with a different probability that varies with time. Second, we derive a relationship between statistical inference based on the proposed dynamic DCSBM and spectral clustering, in particular constrained spectral clustering. Finally, we show that the proposed method is a generalization of EvoSC.
The remainder of the paper is organized as follows. In Section \[sec:background\], we give an overview of the notations used in the paper along with a background on spectral clustering and DCSBM. In Section \[sec:method\], we introduce our new dynamic DCSBM and the corresponding optimization problem. In Section \[sec:results\], the comparison of the proposed method with state-of-the-art dynamic CD methods on both simulated and a real dynamic network is given.
Background {#sec:background}
==========
Notation
--------
A static graph is represented by $G=(V,E)$ where $V$ is the node set with $|V|=n$ and $E\subseteq V\times V$ is the edge set with $|E|=m$. An edge between two nodes $i$ and $j$ is indicated by $e_{ij}$. In this work, the graphs are assumed to be *undirected*, i.e. $e_{ij} = e_{ji}$, and self loops are not allowed, i.e. $e_{ii}\not\in E,\ \forall i\in V$. Each edge $e_{ij}$ is associated with a weight $w_{ij}$. If $w_{ij}\in\{0,1\}$, the graph is said to be *binary*; on the other hand, if $w_{ij}\in\R_{\geq 0}$ then it is a *weighted* graph. The *degree* of a node $i$ is $d_i=\sum_{j}w_{ij}$. A graph is algebraically represented by an $n\times n$ *adjacency matrix* ${\boldsymbol{\mathrm{A}}}$ whose entries are $A_{ij}=w_{ij}$. Lastly, the *Laplacian matrix* of a graph is defined as ${\boldsymbol{\mathrm{L}}}={\boldsymbol{\mathrm{D}}}-{\boldsymbol{\mathrm{A}}}$, where ${\boldsymbol{\mathrm{D}}}$ is a diagonal matrix with entries $D_{ii}=d_i$.
A dynamic graph is a time sequence of static graphs, i.e. $\mathcal{G}=\{G^1, G^{2}, \dots, G^{T}\}$, where $G^t$s are defined on the same vertex set $V$. The edge sets $E^{t}$s define the set of interactions between nodes at time $t$ [@holme_temporal_2012]. Mathematically, we represent $\G$ as a sequence of adjacency matrices $\A = \{{\boldsymbol{\mathrm{A}}}^1, {\boldsymbol{\mathrm{A}}}^2, \dots, {\boldsymbol{\mathrm{A}}}^T\}$.
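The matrix representations above can be made concrete with a short numpy sketch; the toy graph and its weights are invented for illustration:

```python
import numpy as np

# Toy weighted, undirected graph on n = 4 nodes (no self-loops).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 2],
              [0, 0, 2, 0]], dtype=float)

d = A.sum(axis=1)   # node degrees d_i = sum_j w_ij
D = np.diag(d)      # diagonal degree matrix
L = D - A           # Laplacian matrix

# A dynamic graph is simply a sequence of adjacency matrices on the same node set.
A_seq = [A, A]      # e.g. T = 2 snapshots

print(d)            # [2. 2. 4. 2.]
```

Note that each row of the Laplacian sums to zero, a property exploited by the cut formulation in the next subsection.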
Spectral Clustering {#ssec:spectralclustering}
-------------------
CD on a graph $G=(V, E)$ is the task of partitioning nodes in $V$ into $K$ non-overlapping communities, i.e. $P = \{C_1, \dots, C_K\}$ where $C_i\cap C_j = \varnothing\ \forall\, i\neq j$ and $\bigcup_{i=1}^K C_i = V$. This task is usually achieved by optimizing a function that quantifies the quality of the communities, $C_i$s. Two widely used quality functions are graph cut and graph association, which are defined as follows. Let ${\boldsymbol{g}}$ be the $n$-dimensional *community assignment vector* whose entries $g_i=k$ if node $i$ is in community $k$ and ${\boldsymbol{Z}} \in \{0,1\}^{n \times K}$ be the *community membership matrix*, whose entries $Z_{ik}=1$ if and only if $g_i=k$. The association and cut of the partition $P$ are defined as [@von_luxburg_tutorial_2007]: $$\begin{aligned}
{\mathrm{Assoc}_{G}}({\boldsymbol{Z}}) & = \sum_{i<j}^n A_{ij} \delta_{g_ig_j} = \frac{1}{2}{\mathrm{Tr}}({\boldsymbol{Z}}^T{\boldsymbol{A}}{\boldsymbol{Z}}), \label{eq:assoc} \\
{\mathrm{Cut}_{G}}({\boldsymbol{Z}}) & = \sum_{i<j}^n A_{ij} (1-\delta_{g_ig_j}) = \frac{1}{2}{\mathrm{Tr}}({\boldsymbol{Z}}^T{\boldsymbol{L}}{\boldsymbol{Z}}).\label{eq:cut}\end{aligned}$$
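The trace identities in (\[eq:assoc\]) and (\[eq:cut\]) can be verified numerically on a toy graph (a minimal sketch; the graph and the 2-community partition are invented):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 2],
              [0, 0, 2, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

g = np.array([0, 0, 1, 1])        # community assignment vector
Z = np.eye(2)[g]                  # one-hot membership matrix (n x K)

# Direct definitions over node pairs i < j.
assoc = sum(A[i, j] for i in range(4) for j in range(i + 1, 4) if g[i] == g[j])
cut = sum(A[i, j] for i in range(4) for j in range(i + 1, 4) if g[i] != g[j])

# Trace forms of (eq:assoc) and (eq:cut).
assoc_tr = 0.5 * np.trace(Z.T @ A @ Z)
cut_tr = 0.5 * np.trace(Z.T @ L @ Z)

print(assoc, assoc_tr, cut, cut_tr)   # 3.0 3.0 2.0 2.0
```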
Degree Corrected SBM (DCSBM)
----------------------------
SBM was first proposed in social sciences as a random network model with community structure, where each node belongs to a community and edges between nodes are drawn independently based on their community membership [@holland1983stochastic; @goldenberg2010survey]. The model is parameterized with community assignment vector ${\boldsymbol{g}}$ and an edge probability matrix ${\boldsymbol{\mathrm{\theta}}}\in [0,1]^{K\times K}$ where $\theta_{kl}$ is the probability of an edge between the $k$th and $l$th communities and $K$ is the number of communities. The edge between nodes $i$ and $j$ is drawn from a Bernoulli distribution with probability $\theta_{g_ig_j}$. SBM has been used for inferring communities by maximizing the likelihood function of the observed network with respect to ${\boldsymbol{g}}$ [@snijders_estimation_1997].
In [@karrer_stochastic_2011], it is observed that network inference with SBM can result in erroneous community assignments when the degrees of the nodes are not uniformly distributed. In order to overcome this problem, *degree-corrected* SBM (DCSBM), in which degrees of nodes are used in determining the probability of edge formation, has been proposed. This is done by assuming that the edge between nodes $i$ and $j$ comes from a Poisson distribution with mean $\lambda_{ij} = d_i d_j \theta_{g_ig_j}$. DCSBM leads to the following likelihood function, which can be maximized with respect to ${\boldsymbol{g}}$ to find community structure: $$\begin{aligned}
\label{eq:likelihood}
\P(\A|{\boldsymbol{g}};\ {\boldsymbol{\mathrm{\theta}}}) = \prod_{i<j}^n \frac{(\lambda_{ij})^{A_{ij}}e^{-\lambda_{ij}}}{A_{ij}!}. \end{aligned}$$
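A DCSBM snapshot can be sampled directly from the Poisson model above. The parameters below (uniform degree factors, a two-community planted partition) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n, K = 6, 2
g = np.array([0, 0, 0, 1, 1, 1])     # planted community assignments
theta = np.array([[0.9, 0.1],        # edge-rate matrix theta_{kl}
                  [0.1, 0.9]])
d = np.ones(n)                       # degree factors d_i (uniform here)

# Draw each upper-triangular entry from Poisson(lambda_ij),
# lambda_ij = d_i d_j theta_{g_i g_j}, then symmetrize.
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = A[j, i] = rng.poisson(d[i] * d[j] * theta[g[i], g[j]])

print(A.astype(int))
```

Maximizing (\[eq:likelihood\]) over ${\boldsymbol{g}}$ amounts to finding the partition most likely to have generated such a sample.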
Method {#sec:method}
======
Dynamic DCSBM {#ssec:dynamicdsbm}
-------------
Recently, DCSBM has been extended to dynamic networks in [@ghasemian_detectability_2016; @pamfil_relating_2018; @bazzi_generative_2016], where the network at each time is a DCSBM and community assignment of any node $i$ at time $t$ is modelled to be the same as the community assignment at time $t-1$ with a *copying probability* of $q^t$. We base our dynamic DCSBM on this prior work but with a different assumption about network dynamics. Let ${\boldsymbol{g}}=\{{\boldsymbol{g}}^1, \dots, {\boldsymbol{g}}^T\}$ and ${\boldsymbol{\theta}}=\{{\boldsymbol{\theta}}^1, \dots, {\boldsymbol{\theta}}^T\}$ be sequences of community assignment vectors and edge probability matrices at each time point for a dynamic network $\G$, respectively. Moreover, we assume that there are $K$ communities at each time. Different from previous work, we define a sequence of copying probabilities, ${\boldsymbol{q}} = \{{\boldsymbol{q}}^2, \dots, {\boldsymbol{q}}^T\}$, where the $k$th entry of ${\boldsymbol{q}}^t\in [0, 1]^K$, $q_k^t$, is the probability of a node at time $t-1$ staying in the $k$th community. Thus, our model allows each community to have its own copying probability $q_k^t$. This is a reasonable assumption since each community may have its own evolutionary dynamics, such that some communities may grow with time while others may stay stationary across time [@rossetti_community_2018]. Next, community assignments of nodes are modelled as follows. If $g_i^{t-1}=k$, then we assume $g_i^t=k$ with probability $q_{k}^t$, otherwise $g_i^t$ is equal to one of the $K$ communities with uniform probability. Based on this, $\P(g_i^t=l|g_i^{t-1}=k) = \pi_{il}^t$ where $\pi_{il}^t=q_k^t+\frac{1-q_k^t}{K}$ if $k=l$ and $\pi_{il}^t=\frac{1-q_k^t}{K}$ otherwise. Finally, the community transition probabilities are assumed to be independent across nodes. Therefore, the prior distribution $\P({\boldsymbol{g}})$ is: $$\begin{aligned}
\label{eq:prior}
\begin{split}
\P({\boldsymbol{g}}; {\boldsymbol{q}}) = \P({\boldsymbol{g^1}}) \prod_{t=2}^T \P({\boldsymbol{g}}^t|{\boldsymbol{g}}^{t-1}) = \prod_{i=1}^n \P(g_i^1) \prod_{t=2}^T \prod_{i=1}^n \pi_{ig_{i}^t}^t,
\end{split}\end{aligned}$$ where $P(g_i^1)$ is the prior probability of community assignment of node $i$ at $t=1$ and it is assumed to be uniformly distributed, i.e. $P(g_i^1=k) = 1/K\, , \forall k={1, \dots, K}$.
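The membership dynamics of this prior can be simulated in a few lines; the sizes and per-community copying probabilities below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n, K, T = 8, 2, 5
q = np.array([0.9, 0.5])          # per-community copying probabilities q_k

g = [rng.integers(K, size=n)]     # uniform prior over labels at t = 1
for t in range(1, T):
    prev = g[-1]
    new = prev.copy()
    # With probability 1 - q_k, a node in community k resamples its label
    # uniformly from all K communities (it may land back in k).
    resample = rng.random(n) >= q[prev]
    new[resample] = rng.integers(K, size=int(resample.sum()))
    g.append(new)

for gt in g:
    print(gt)
```

Nodes in community 0 ($q_0 = 0.9$) keep their label far more often than nodes in community 1 ($q_1 = 0.5$), which is exactly the heterogeneity the model is designed to capture.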
Dynamic community detection
---------------------------
In this section, we show how maximizing the posterior distribution of dynamic DCSBM can be transformed into a trace maximization problem, which can be solved using spectral clustering algorithms. Using Eqs. \[eq:likelihood\] and \[eq:prior\], the posterior distribution of a dynamic network $\mathcal{G}$ following dynamic DCSBM is written as: $$\begin{aligned}
\label{eq:posterior}
\P(\A,{\boldsymbol{g}};{\boldsymbol{\mathrm{\theta}}}, {\boldsymbol{q}}) \hspace{-0.2em} = \hspace{-0.2em} \prod_{t=1}^T \prod_{i<j}^n \frac{(\lambda_{ij}^t)^{A_{ij}^t} e^{{\scalebox{0.75}[1.0]{$-$}}\lambda_{ij}^t}}{A_{ij}^t!}
\prod_{i=1}^n \frac{1}{K} \prod_{t=2}^T \prod_{i=1}^n \pi_{ig_{i}^t}^t ,\end{aligned}$$ where $\lambda_{ij}^t = d_i^td_j^t\theta_{g_i^tg_j^t}^t$ and $d_i^t$ is the degree of node $i$ at time $t$. Let $\L({\boldsymbol{g}})=\log \P(\A,\ {\boldsymbol{g}}; \ {\boldsymbol{\mathrm{\theta}}}, {\boldsymbol{q}})$, which can be written as follows by ignoring the terms that do not depend on ${\boldsymbol{g}}$:
$$\begin{aligned}
\label{eq:logposterior}
\L({\boldsymbol{g}}) = \sum_{t=1}^T \sum_{i<j}^n \left[ A_{ij}^t\log(\theta_{g_i^tg_j^t}^t) - d_i^td_j^t\theta_{g_i^tg_j^t}^t \right] + \sum_{t=2}^T \sum_{i=1}^n \log(\pi_{ig_{i}^t}^t),\end{aligned}$$
where the first and second terms are the log-likelihood and log-prior, respectively. First, consider the log-prior term in (\[eq:logposterior\]). For fixed nodes $i$ and $j$, and fixed $t$, let $g_i^t=k$ and $g_j^t=l$ where $k, l \in \{1,\dots, K\}$. Then, the sum of log-priors of nodes $i$ and $j$ at time $t$ is $\log(\pi_{ik}^t) + \log(\pi_{jl}^t) = \log(\pi_{ik}^t\pi_{jl}^t)$. Due to independence, $\pi_{ik}^t\pi_{jl}^t$ is the joint probability of nodes $i$ and $j$ being in communities $k$ and $l$ at time $t$, respectively. As community labels are arbitrary, it is more meaningful to quantify the joint probability of any two nodes being in the same community rather than the probability of individual nodes being in a particular community. Therefore, the joint probability $\pi_{ik}^t \pi_{jl}^t$ is considered to take one of two values, depending on whether $i$ and $j$ are in the same community or in different communities: $$\begin{aligned}
\pi_{ik}^t \pi_{jl}^t = \begin{cases}
p_{ij}^t/K, & k=l, \\
(1-p_{ij}^t)/(K(K-1)), & k \neq l,
\end{cases}\end{aligned}$$ where $p_{ij}^t$ is the probability of nodes $i$ and $j$ being in the same community at time $t$ and the denominators are the normalization terms. Note that $p_{ij}^t$ can also be calculated as $\sum_{k=1}^K \pi_{ik}^t\pi_{jk}^t$. Then, $\log(\pi_{ik}^t \pi_{jl}^t) = \log(p_{ij}^t/K)\delta_{kl} + \log((1-p_{ij}^t)/(K(K-1)))(1-\delta_{kl})$. This expression corresponds to the log-prior for a fixed node pair $(i,j)$ at time $t$. In order to write the log-prior term as a quadratic expression similar to spectral clustering, we add up the terms $\log(\pi_{ig_i^t}^t)$ for a fixed time $t$ as many times as necessary to generate terms for all node pairs (for $i<j$, we only need pairs in the form of $(i,j)$). This implies that we need $(n-1)$ log-prior terms for each node in (\[eq:logposterior\]) for a fixed time $t$. Thus, the second term of (\[eq:logposterior\]) at time $t$ can be written as follows, ignoring the terms that do not depend on ${\boldsymbol{g}}$:
$$\begin{aligned}
\label{eq:priorsum}
\sum_{i} \log(\pi_{ig_i^t}^t) = \frac{1}{n-1}\sum_{i<j}^n \left[ \log(p_{ij}^t)\,\delta_{g_i^tg_j^t} + \log(1-p_{ij}^t)\,(1-\delta_{g_i^tg_j^t}) \right].\end{aligned}$$
Next, we consider the log-likelihood term in (\[eq:logposterior\]). At time $t$, we assume ${\boldsymbol{\theta}}^t$ to be a planted partition model, i.e., $\theta_{kl}^t = \theta_{i}^t \delta_{kl} + \theta_{o}^t (1-\delta_{kl}) = (\theta_{i}^t - \theta_{o}^t)\delta_{kl} + \theta_{o}^t$, where $\theta_{i}^t$ is the intra-community connection probability and $\theta_{o}^t$ is the inter-community connection probability. Inserting this into the log-likelihood and ignoring the terms that do not depend on ${\boldsymbol{g}}$: $$\begin{aligned}
\label{eq:likelihoodsum}
\sum_{i<j} \{A_{ij}^t & (\log(\theta_{i}^t) - \log(\theta_{o}^t)) - d_i^t d_j^t (\theta_{i}^t - \theta_{o}^t) \} \delta_{g_i^tg_j^t}, \nonumber \\
& = \sum_{i<j} \beta^t A_{ij}^t \delta_{g_i^tg_j^t} - \gamma^t d_i^t d_j^t \delta_{g_i^tg_j^t},\end{aligned}$$ where $\gamma^t = \theta_{i}^t - \theta_{o}^t$ and $\beta^t = \log(\theta_{i}^t) - \log(\theta_{o}^t)$. It is easy to see that (\[eq:priorsum\]) and (\[eq:likelihoodsum\]) are now similar to (\[eq:assoc\]) and (\[eq:cut\]), thus they can be written using a trace operator. Defining two matrices ${\boldsymbol{\mathrm{P}}}^t$ and ${\boldsymbol{\mathrm{Q}}}^t\, \forall t$ with entries $P_{ij}^t = P_{ji}^t = \log(p_{ij}^t)$ and $Q_{ij}^t = Q_{ji}^t = \log(1{\scalebox{0.75}[1.0]{$-$}}p_{ij}^t)$, respectively, the log-posterior can be written as: $$\begin{aligned}
\label{eq:posteriortrace}
\begin{split}
\L({\boldsymbol{{\boldsymbol{\mathrm{Z}}}}}) & \hspace{-0.1em} = \hspace{-0.1em} \sum_{t=1}^T \beta^t {\mathrm{Tr}}({{\boldsymbol{\mathrm{Z}}}^t}^T{\boldsymbol{\mathrm{A}}}^t{\boldsymbol{\mathrm{Z}}}^t) - \gamma^t {\mathrm{Tr}}({{\boldsymbol{\mathrm{Z}}}^t}^T{{\boldsymbol{D}}^t}{\boldsymbol{\mathrm{Z}}}^t{{\boldsymbol{\mathrm{Z}}}^t}^T{{\boldsymbol{D}}^t}{\boldsymbol{\mathrm{Z}}}^t) \\ & + \sum_{t=2}^T \frac{1}{n-1} {\mathrm{Tr}}({{\boldsymbol{\mathrm{Z}}}^t}^T({\boldsymbol{\mathrm{P}}}^t+{\boldsymbol{\mathrm{L}}}_Q^t){\boldsymbol{\mathrm{Z}}}^t),
\end{split}\end{aligned}$$ where ${\boldsymbol{L}}_Q = {\boldsymbol{\mathrm{D}}}_Q - {\boldsymbol{\mathrm{Q}}}$ and ${\boldsymbol{\mathrm{D}}}_Q$ is a diagonal matrix with entries ${{\boldsymbol{\mathrm{D}}}_Q}_{ii} = \sum_{j=1}^n Q_{ij}$.
Constrained Dynamic Spectral Clustering {#ssec:cdsc}
---------------------------------------
Maximizing (\[eq:posteriortrace\]) with respect to ${\boldsymbol{Z}}$ reveals the community structure of the dynamic network $\mathcal{G}$. As in spectral clustering, this problem is NP-hard since ${\boldsymbol{Z}}$ is a binary matrix. Therefore, we relax ${\boldsymbol{Z}}$ to take on any real value while imposing the size constraints ${{\boldsymbol{Z}}^t}^T{\boldsymbol{D}}^t{\boldsymbol{Z}}^t={\boldsymbol{I}},\ \forall t$. Due to the constraint, the second term in (\[eq:posteriortrace\]) becomes a constant and can thus be ignored during optimization. Hence, CD in a dynamic network $\G$ can be written as the following optimization problem: $$\begin{aligned}
\label{eq:cdsc}
{\boldsymbol{\mathrm{Z}}}^* = & \arg\max_{{\boldsymbol{\mathrm{Z}}}} \hspace{-0.1em} \sum_{t=1}^T \beta^t{\mathrm{Tr}}({{\boldsymbol{\mathrm{Z}}}^t}^T{\boldsymbol{\mathrm{A}}}^t{\boldsymbol{\mathrm{Z}}}^t) \hspace{-0.1em} + \hspace{-0.1em} \sum_{t=2}^T \frac{{\mathrm{Tr}}({{\boldsymbol{\mathrm{Z}}}^t}^T({\boldsymbol{\mathrm{P}}}^t\hspace{-0.1em}+\hspace{-0.1em}{\boldsymbol{\mathrm{L}}}_Q^t){\boldsymbol{\mathrm{Z}}}^t)}{n{\scalebox{0.75}[1.0]{$-$}}1} \nonumber \\
& \text{subject to } {{\boldsymbol{Z}}^t}^T{\boldsymbol{D}}^t{\boldsymbol{Z}}^t={\boldsymbol{I}},\ \forall t.\end{aligned}$$ This optimization problem is similar to EvoSC, where at each time point the first and second terms correspond to the snapshot and temporal costs, respectively. However, unlike EvoSC, our objective function is based on normalized association and the temporal cost is a generalized version of temporal cost used in *preserving cluster membership (PCM)* [@chi_evolutionary_2007]. This is a generalization as we include copying probabilities into calculation of distance, whereas in PCM each community is assumed to evolve at the same rate.
The problem in (\[eq:cdsc\]) can be solved via spectral clustering in an iterative fashion as follows. First, communities at $t=1$ can be obtained by static spectral clustering. Next, at any time $t>1$ a $K \times K$ matrix ${\boldsymbol{\mathrm{\Pi}}}^t = diag({\boldsymbol{q}}^t) + \frac{1}{K}({\boldsymbol{1}}-{\boldsymbol{q}}^t){\boldsymbol{1}}^T$ is constructed, where $diag(\cdot)$ is an operator that transforms a vector into a diagonal matrix and ${\boldsymbol{1}}$ is a $K$-dimensional vector of ones. From ${\boldsymbol{\mathrm{\Pi}}}^t$ and ${\boldsymbol{\mathrm{Z}}}^{t-1}$, we calculate ${\boldsymbol{\mathrm{P}}}^t={{\boldsymbol{\mathrm{Z}}}^{t-1}}\log({\boldsymbol{\mathrm{\Pi}}}^t{{\boldsymbol{\mathrm{\Pi}}}^t}^T){{\boldsymbol{\mathrm{Z}}}^{t-1}}^T$ and ${\boldsymbol{\mathrm{Q}}}^t={{\boldsymbol{\mathrm{Z}}}^{t-1}}\log(1{\scalebox{0.75}[1.0]{$-$}}{\boldsymbol{\mathrm{\Pi}}}^t{{\boldsymbol{\mathrm{\Pi}}}^t}^T){{\boldsymbol{\mathrm{Z}}}^{t-1}}^T$ where the logarithm is taken element-wise. Finally, spectral clustering is applied to the matrix $\beta^t{\boldsymbol{\mathrm{A}}}^t+\frac{{\boldsymbol{\mathrm{P}}}^t + {\boldsymbol{\mathrm{L}}}_Q^t}{n{\scalebox{0.75}[1.0]{$-$}}1}$ with the constraint ${{\boldsymbol{Z}}^t}^T{\boldsymbol{D}}^t{\boldsymbol{Z}}^t={\boldsymbol{I}}$. Since CD is performed individually at each time point, the number of communities can be different at each time. Pseudo-code for the proposed approach is given in Algorithm 1.
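The per-time update described above can be sketched in numpy. This is a simplified illustration, not the authors' code: `cdsc_step` is a hypothetical helper name, the constrained eigenproblem is solved via the substitution $Y = D^{1/2}Z$, and the final k-means rounding of the embedding rows is omitted:

```python
import numpy as np

def cdsc_step(A_t, Z_prev, q_t, beta_t):
    """One CDSC time-step (sketch): build the modified adjacency matrix and
    return the top-K constrained spectral embedding of the nodes."""
    n, K = Z_prev.shape
    # Transition matrix Pi^t = diag(q) + (1/K)(1 - q)1^T; rows sum to 1.
    Pi = np.diag(q_t) + np.outer(1.0 - q_t, np.ones(K)) / K
    S = Pi @ Pi.T                        # blockwise same-community probabilities p_ij
    P = Z_prev @ np.log(S) @ Z_prev.T
    Q = Z_prev @ np.log(1.0 - S) @ Z_prev.T
    L_Q = np.diag(Q.sum(axis=1)) - Q
    M = beta_t * A_t + (P + L_Q) / (n - 1)
    # Relaxation: max Tr(Z^T M Z) subject to Z^T D Z = I, with D from A_t,
    # solved as a symmetric eigenproblem after whitening by D^{-1/2}.
    d = np.maximum(A_t.sum(axis=1), 1e-12)
    Dm12 = np.diag(d ** -0.5)
    _, V = np.linalg.eigh(Dm12 @ M @ Dm12)
    return Dm12 @ V[:, -K:]              # columns: top-K generalized eigenvectors

rng = np.random.default_rng(0)
A_t = rng.random((6, 6)); A_t = (A_t + A_t.T) / 2; np.fill_diagonal(A_t, 0)
Z_prev = np.eye(2)[np.array([0, 0, 0, 1, 1, 1])]
X = cdsc_step(A_t, Z_prev, np.array([0.9, 0.6]), beta_t=1.0)
print(X.shape)   # (6, 2)
```

The rows of the returned embedding would then be clustered (e.g. with k-means) to obtain ${\boldsymbol{\mathrm{Z}}}^t$.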
Parameter Estimation
--------------------
\[ssec:parameterest\] The proposed method requires the estimation of copying probabilities ${\boldsymbol{q}}$ and parameter $\beta^t$. These parameters are estimated in an iterative fashion similar to [@pamfil_relating_2018]. In particular, at each time ${\boldsymbol{q}}^t$ and $\beta^t$ are randomly initialized with ${\boldsymbol{q}}_0^t$ and $\beta_0^t$ and community structure is found as in Algorithm 1. Next, the community structure ${\boldsymbol{Z}}^t$ is compared to ${\boldsymbol{Z}}^{t-1}$ to update the copying probabilities ${\boldsymbol{q}}^t$. ${\boldsymbol{Z}}^t$ and ${\boldsymbol{A}}^t$ are also used to compute $\theta_i^t$ and $\theta_o^t$ as in [@karrer_stochastic_2011] and $\beta^t=\log(\theta_i^t)-\log(\theta_o^t)$. Lastly, ${\boldsymbol{Z}}^t$ is updated by finding the community structure at time $t$ with the updated parameter values. This process is repeated iteratively $N$ times or until convergence. In our experiments, it was observed that the copying probabilities and $\beta^t$ do not change after a couple of iterations.
\[alg:cdsc\]
**Input:** Dynamic network $\mathcal{G}=(G^1, \dots, G^T)$, Number of communities $K^1, \dots, K^T$\
**Output:** Community Structure $P^*$
${\boldsymbol{\mathrm{Z}}}^1 \leftarrow$ Spectral clustering of ${\boldsymbol{\mathrm{A}}}^1$ with $K^1$ by .
For $t = 2, \dots, T$:

- Find parameters ${\boldsymbol{q}}^t$ and $\beta^t$ as in Section \[ssec:parameterest\]
- ${\boldsymbol{\mathrm{\Pi}}}^t \leftarrow diag({\boldsymbol{q}}^t) + \frac{1}{K^t}({\boldsymbol{1}}-{\boldsymbol{q}}^t){\boldsymbol{1}}^T$
- ${\boldsymbol{\mathrm{P}}}^t\leftarrow {{\boldsymbol{\mathrm{Z}}}^{t-1}}\log({\boldsymbol{\mathrm{\Pi}}}^t{{\boldsymbol{\mathrm{\Pi}}}^t}^T){{\boldsymbol{\mathrm{Z}}}^{t-1}}^T$
- ${\boldsymbol{\mathrm{Q}}}^t\leftarrow{{\boldsymbol{\mathrm{Z}}}^{t-1}}\log(1{\scalebox{0.75}[1.0]{$-$}}{\boldsymbol{\mathrm{\Pi}}}^t{{\boldsymbol{\mathrm{\Pi}}}^t}^T){{\boldsymbol{\mathrm{Z}}}^{t-1}}^T$
- ${\boldsymbol{\mathrm{L}}}_Q^t \leftarrow {\boldsymbol{\mathrm{D}}}_Q^t-{\boldsymbol{\mathrm{Q}}}^t$
- $\widehat{{\boldsymbol{\mathrm{A}}}}^t \leftarrow \beta^t{\boldsymbol{\mathrm{A}}}^t + \frac{1}{n-1}({\boldsymbol{\mathrm{P}}}^t+{\boldsymbol{\mathrm{L}}}_{Q}^t)$
- ${\boldsymbol{\mathrm{Z}}}^t \leftarrow$ Spectral clustering of $\widehat{{\boldsymbol{\mathrm{A}}}}^t$ with $K^t$ by .
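The copying-probability update in the estimation loop can be sketched as follows. `estimate_q` is a hypothetical helper name, and the moment correction (subtracting the $1/K$ chance that a resampled node returns to its old label) is our reading of the model rather than a detail given in the paper:

```python
import numpy as np

def estimate_q(Z_prev, Z_curr):
    """Hypothetical helper: estimate per-community copying probabilities q_k
    by comparing consecutive membership matrices."""
    g_prev, g_curr = Z_prev.argmax(axis=1), Z_curr.argmax(axis=1)
    K = Z_prev.shape[1]
    q = np.zeros(K)
    for k in range(K):
        members = g_prev == k
        stay = (g_curr[members] == k).mean() if members.any() else 1.0
        # Under the model, P(stay) = q_k + (1 - q_k)/K, since a resampled
        # node can land back in its old community; invert and clip to [0, 1].
        q[k] = np.clip((stay - 1.0 / K) / (1.0 - 1.0 / K), 0.0, 1.0)
    return q

g1 = np.array([0, 0, 0, 0, 1, 1, 1, 1])
g2 = np.array([0, 0, 0, 1, 1, 1, 0, 0])
print(estimate_q(np.eye(2)[g1], np.eye(2)[g2]))   # stay rates 0.75, 0.5 -> q = [0.5, 0.0]
```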
Results {#sec:results}
=======
Results for Simulated Networks
------------------------------
The performance of the proposed method is first evaluated on simulated networks and compared to state-of-the-art dynamic CD methods including PCM [@chi_evolutionary_2007], DSBM [@xu_dynamic_2014] and GenLouvain [@pamfil_relating_2018]. First, we generate simulated networks based on Girvan-Newman (GN) benchmark networks [@girvan_community_2002]. At time point $t=1$, a GN network with 128 nodes divided into 4 equal sized communities is generated. For $1<t\leq T$, the community assignment of each node is first determined by the copying probability ${\boldsymbol{q}}^t = {\boldsymbol{q}}\in[0,1]^4$; that is, a node in the $k$th community at time $t-1$ stays in the $k$th community at time $t$ with probability $q_k$, otherwise it is randomly assigned to one of the 4 communities. For all time points, the average degree and mixing coefficient are set to $16$ and $\mu$, respectively. The mixing coefficient $\mu$ indicates how noisy the community structure of the network is: the larger $\mu$ is, the harder it is to detect the community structure. Comparison is done by calculating the normalized mutual information (NMI) [@danon_comparing_2005] for each method averaged over time and 50 Monte Carlo simulations.
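Since accuracy is reported as NMI [@danon_comparing_2005], a plain-numpy sketch of the $2I(a,b)/(H(a)+H(b))$ normalization may be helpful:

```python
import numpy as np

def nmi(a, b):
    """Normalized mutual information between two labelings (numpy sketch)."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    ka, kb = a.max() + 1, b.max() + 1
    joint = np.zeros((ka, kb))
    for x, y in zip(a, b):
        joint[x, y] += 1            # contingency table of co-occurrences
    joint /= n
    pa, pb = joint.sum(1), joint.sum(0)
    nz = joint > 0
    mi = (joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])).sum()
    ha = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    hb = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return 2 * mi / (ha + hb) if ha + hb > 0 else 1.0

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))   # ~1.0 (identical up to relabeling)
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))   # 0.0 (independent labelings)
```

NMI is invariant to permutations of the community labels, which is why it is the standard accuracy measure when ground-truth partitions are known.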
In Fig. \[fig:binarybenchmark\]a, the results for the GN benchmark are shown for ${\boldsymbol{q}} = [0.9, 0.6, 0.9, 0.6]$, $T=10$ and three different values of $\mu$. For PCM, the parameter $\alpha$ is set to 1 and $\beta$ is selected empirically between $0.1$ and $0.3$ as the one that gives the best normalized association value. Initial values of the parameters for GenLouvain and DSBM are set in a similar fashion as in the original papers [@pamfil_relating_2018; @xu_dynamic_2014]. Finally, for all of the methods the number of communities is assumed to be known. For $\mu=0.40$, all algorithms yield high average NMI values as shown in Fig. \[fig:binarybenchmark\]a, while the smallest variance in NMI is achieved by CDSC and PCM. As $\mu$ increases, the performance of all methods degrades. However, GenLouvain degrades faster than the others as seen in the results for $\mu=0.50$, where the best result is achieved by CDSC both in terms of average NMI and variance across simulations. Finally, as CDSC is a generalized version of PCM, it always provides better accuracy than PCM (the difference when $\mu=0.5$ is statistically significant at $\alpha=0.001$). The results indicate that incorporating copying probabilities that are dependent on community membership in DCSBM improves performance.
![Average NMI values for the different methods as a function of mixing parameter $\mu$: (a) GN benchmark networks and (b) MLGM benchmark networks.[]{data-label="fig:binarybenchmark"}](GN_bar.eps){width="\columnwidth"}
(a)
![Average NMI values for the different methods as a function of mixing parameter $\mu$: (a) GN benchmark networks and (b) MLGM benchmark networks.[]{data-label="fig:binarybenchmark"}](MLGM_bar.eps){width="\columnwidth"}
(b)
The results given above indicate that CDSC provides higher accuracy than existing methods for GN networks. However, the GN benchmark model is too simplistic in the way it generates the network as it does not account for heterogeneity in degrees and inter-community edge probabilities. For this reason, we evaluate the proposed method on a more complex benchmark proposed in [@bazzi_generative_2016], referred to as the Multilayer Generative Model (MLGM) benchmark. This benchmark is generated using a dynamic DCSBM similar to the one mentioned in Section \[ssec:dynamicdsbm\] and introduces heterogeneity in node degrees, community sizes, and inter-community edge probabilities. Moreover, we modified the benchmark such that each community can have different copying probabilities. The number of nodes is set to $128$, $T=10$ and the copying probabilities are ${\boldsymbol{q}}^t = [0.9, 0.6, 0.9, 0.6]$ for all $t$. At each time there are $4$ communities with different sizes, and the degrees of nodes are drawn from a power law distribution truncated between $8$ and $16$. Results are shown for four different values of $\mu$ in Fig. \[fig:binarybenchmark\]b. For small values of $\mu$, all methods have similar NMI values. As $\mu$ increases, the proposed method performs the best, giving the highest average NMI (the differences between PCM and CDSC when $\mu=0.55$ and $\mu=0.6$ are statistically significant at $\alpha=0.001$).
Results for Primary School Temporal Networks (PSTN)
----------------------------------------------------
The proposed method is applied to a real dynamic social network that depicts the connectivity between students and teachers in a primary school. The data was collected over one day in October 2009 using wearable sensors that measure face-to-face proximity. The temporal resolution of the data is 20 seconds, and there are 232 students and 10 teachers. The school is in session between 8:30 a.m. and 4:30 p.m. with two 20-25 minute breaks at 10:30 a.m. and 3:30 p.m. and lunch time from 12:00 p.m. to 2:00 p.m. [@stehle2011high]. The raw data are divided into 13-minute intervals and a binary network is generated for each interval by connecting two individuals if they interact in the given time interval. The resulting dynamic network has $T=40$ time points and 242 nodes.
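The binning step described above can be sketched as follows; the contact events below are invented toy data, while the real dataset uses 20-second resolution and 13-minute windows:

```python
import numpy as np

# Bin timestamped contact events (t_sec, i, j) into binary snapshot networks,
# one per 13-minute window.
n, window = 5, 13 * 60
events = [(10, 0, 1), (30, 1, 2), (800, 3, 4), (1700, 0, 3)]

T = int(max(t for t, _, _ in events) // window) + 1
A_seq = np.zeros((T, n, n))
for t_sec, i, j in events:
    w = int(t_sec // window)
    A_seq[w, i, j] = A_seq[w, j, i] = 1   # binary: interacted at least once

print(A_seq.sum(axis=(1, 2)) / 2)         # edges per snapshot: [2. 1. 1.]
```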
The proposed method is applied to the constructed network where the number of communities at each time is selected as the number that maximizes *asymptotic surprise* [@traag2015detecting]. In Fig. \[fig:primaryschool\]a, the community structure of a time interval (between 2:15 p.m. and 2:30 p.m.) when students are in classes is shown as an example to indicate the effectiveness of the proposed method in detecting the communities. Fig. \[fig:primaryschool\]b shows the similarity between the community structures at consecutive time points, where the similarity is quantified by the weighted average of copying probabilities. It can be seen that the similarity is high for most times except during breaks and lunch time. These results agree with our intuition since students from different classes interact with each other during breaks and lunch time, resulting in a change in the community structure. Fig. \[fig:primaryschool\]b also illustrates that the proposed parameter estimation method described in Section \[ssec:parameterest\] gives meaningful results.
![(a) Detected communities for primary school data when the students are in classes. White rectangles correspond to different classes with the last rectangle corresponding to the teachers; (b) Similarity of community structures between consecutive time points, where red regions correspond to the two breaks and the green one to lunch time.[]{data-label="fig:primaryschool"}](PrimarySchoolT26.eps){width="\columnwidth"}
(a)
![(a) Detected communities for primary school data when the students are in classes. White rectangles correspond to different classes with the last rectangle corresponding to the teachers; (b) Similarity of community structures between consecutive time points, where red regions correspond to the two breaks and the green one to lunch time.[]{data-label="fig:primaryschool"}](primary_school_similarity.eps){width="\columnwidth"}
(b)
Conclusions
===========
In this work, a new algorithm for dynamic CD is introduced based on the equivalence between statistical network inference and spectral clustering. We first introduced a novel dynamic DCSBM that accounts for the differences in the evolutionary dynamics of different communities. We then proved the equivalence between statistical inference under this model and constrained spectral clustering for the planted partition model. Our derivation extends previous work that relates statistical inference and heuristic quality function optimization to dynamic networks. Moreover, the proposed method has been shown to be a generalization of the PCM framework in EvoSC. Future work will exploit this relationship to analyze the consistency and scalability of the proposed algorithm and parameter estimation.
---
abstract: 'This is a concise introduction to Fomin-Zelevinsky’s cluster algebras and their links with the representation theory of quivers in the acyclic case. We review the definition of cluster algebras (geometric, without coefficients), construct the cluster category and present the bijection between cluster variables and rigid indecomposable objects of the cluster category.'
author:
- Bernhard Keller
date: 'version of 21/10/2007, modified 21/10/2007'
title: 'Categorification of acyclic cluster algebras: an introduction'
---
*To Murray Gerstenhaber and Jim Stasheff*
MSC 2010 classification: 18E30, 16S99
Introduction
============
Context
-------
Cluster algebras were invented by S. Fomin and A. Zelevinsky [@FominZelevinsky02] in the spring of the year 2000 in a project whose aim was to develop a combinatorial approach to the results obtained by G. Lusztig concerning total positivity in algebraic groups [@Lusztig96] on the one hand and canonical bases in quantum groups [@Lusztig90] on the other hand (let us stress that canonical bases were discovered independently and simultaneously by M. Kashiwara [@Kashiwara90]). Despite great progress during the last few years [@FominZelevinsky03] [@BerensteinFominZelevinsky05] [@FominZelevinsky07], we are still relatively far from these initial aims. Presently, the best results on the link between cluster algebras and canonical bases are probably those of C. Geiss, B. Leclerc and J. Schröer [@GeissLeclercSchroeer05] [@GeissLeclercSchroeer06] [@GeissLeclercSchroeer06a] but even they cannot construct canonical bases from cluster variables for the moment. Despite these difficulties, the theory of cluster algebras has witnessed spectacular growth thanks notably to the many links that have been discovered with a wide range of subjects including
- Poisson geometry [@GekhtmanShapiroVainshtein03] [@GekhtmanShapiroVainshtein05] …,
- integrable systems [@FominZelevinsky03b] …,
- higher Teichmüller spaces [@FockGoncharov03] [@FockGoncharov05] [@FockGoncharov07a] [@FockGoncharov07b] …,
- combinatorics and the study of combinatorial polyhedra like the Stasheff associahedra [@ChapotonFominZelevinsky02] [@Chapoton04] [@Krattenthaler06] [@FominReading05] [@Musiker07] [@FominShapiroThurston06] …,
- commutative and non commutative algebraic geometry, in particular the study of stability conditions in the sense of Bridgeland [@Bridgeland02] [@Bridgeland06], Calabi-Yau algebras [@Ginzburg06], Donaldson-Thomas invariants [@Szendroi07] [@Kontsevich07a] [@Kontsevich07] [@KontsevichSoibelman07] …,
- and last but not least the representation theory of quivers and finite-dimensional algebras, cf. for example the surveys [@BuanMarsh06] [@Ringel07] [@Reiten06].
We refer to the introductory papers [@Zelevinsky02] [@FominZelevinsky03a] [@Zelevinsky04] [@Zelevinsky05] [@Zelevinsky07] and to the cluster algebras portal [@Fomin07] for more information on cluster algebras and their links with other parts of mathematics.
The link between cluster algebras and quiver representations follows the spirit of categorification: One tries to interpret cluster algebras as combinatorial (perhaps $K$-theoretic) invariants associated with categories of representations. Thanks to the rich structure of these categories, one can then hope to prove results on cluster algebras which seem beyond the scope of the purely combinatorial methods. It turns out that the link becomes especially beautiful if we use a [*triangulated category*]{} constructed from the category of quiver representations, the so-called cluster category.
In this brief survey, we will review the definition of cluster algebras and Fomin-Zelevinsky’s classification theorem for cluster-finite cluster algebras [@FominZelevinsky03]. We will then recall some basic notions on the representations of a quiver without oriented cycles, introduce the cluster category and describe its link with the cluster algebra.
Cluster algebras
================
The cluster algebras we will be interested in are associated with antisymmetric matrices with integer coefficients. Instead of using matrices, we will use quivers (without loops and $2$-cycles), since they are easy to visualize and well-suited to our later purposes.
Quivers
-------
Let us recall that a *quiver* $Q$ is an oriented graph. Thus, it is a quadruple given by a set $Q_0$ (the set of vertices), a set $Q_1$ (the set of arrows) and two maps $s:Q_1 \to Q_0$ and $t:Q_1\to Q_0$ which take an arrow to its source and its target, respectively. Our quivers are ‘abstract graphs’ but in practice we draw them as in this example: $$Q:
\xymatrix{ & 3 \ar[ld]_\lambda & & 5 \ar@(dl,ul)[]^\alpha \ar@<1ex>[rr] \ar[rr] \ar@<-1ex>[rr] & & 6 \\
1 \ar[rr]_\nu & & 2 \ar@<1ex>[rr]^\beta \ar[ul]_\mu & & 4.
\ar@<1ex>[ll]^\gamma }$$ A *loop* in a quiver $Q$ is an arrow $\alpha$ whose source coincides with its target; a *$2$-cycle* is a pair of distinct arrows $\beta\neq\gamma$ such that the source of $\beta$ equals the target of $\gamma$ and vice versa. It is clear how to define *$3$-cycles*, *connected components* …. A quiver is *finite* if both its set of vertices and its set of arrows are finite.
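The quadruple $(Q_0, Q_1, s, t)$ translates directly into a small data structure. Here is a minimal Python sketch (the class and method names are ours, purely for illustration) that stores each arrow as a source/target pair and detects the loop and the $2$-cycle of the example quiver:

```python
from dataclasses import dataclass


@dataclass
class Quiver:
    """A finite quiver: a vertex set and arrows stored as (source, target) pairs."""
    vertices: set
    arrows: list  # parallel arrows are allowed, so we use a list, not a set

    def is_loop(self, a):
        s, t = a
        return s == t

    def has_two_cycle(self):
        # a 2-cycle is a pair of arrows i -> j and j -> i with i != j
        pairs = set(self.arrows)
        return any((t, s) in pairs for (s, t) in pairs if s != t)


# the example quiver drawn above: lambda, nu, mu, beta, gamma,
# the loop alpha at 5, and the triple arrow from 5 to 6
Q = Quiver(vertices={1, 2, 3, 4, 5, 6},
           arrows=[(3, 1), (1, 2), (2, 3), (2, 4), (4, 2),
                   (5, 5), (5, 6), (5, 6), (5, 6)])
```

Note that `has_two_cycle` only needs the underlying set of arrows, while multiplicities matter elsewhere, which is why the arrows are kept as a list.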
Seeds and mutations
-------------------
Fix an integer $n\geq 1$. A *seed* is a pair $(R,u)$, where
- $R$ is a finite quiver without loops or $2$-cycles with vertex set $\{1, \ldots, n\}$;
- $u$ is a free generating set $\{u_1, \ldots, u_n\}$ of the field ${\mathbb{Q}}(x_1, \ldots, x_n)$ of fractions of the polynomial ring ${\mathbb{Q}}[x_1, \ldots, x_n]$ in $n$ indeterminates.
Notice that in the quiver $R$ of a seed, all arrows between any two given vertices point in the same direction (since $R$ does not have $2$-cycles). Let $(R,u)$ be a seed and $k$ a vertex of $R$. The *mutation* $\mu_k(R,u)$ of $(R,u)$ at $k$ is the seed $(R',u')$, where
- $R'$ is obtained from $R$ as follows:
- reverse all arrows incident with $k$;
- for all vertices $i\neq j$ distinct from $k$, modify the number of arrows between $i$ and $j$ as follows: $$\begin{array}{|c|c|}\hline
R & R' \\ \hline
\xymatrix@=0.3cm{i \ar[rr]^{r} \ar[rd]_{s} & & j \\
& k \ar[ru]_t & } &
\xymatrix@=0.3cm{i \ar[rr]^{r+st} & & j \ar[ld]^t\\
& k \ar[lu]^{s}} \\\hline
\xymatrix@=0.3cm{i \ar[rr]^{r} & & j \ar[ld]^{t}\\
& k \ar[lu]^{s} & } &
\xymatrix@=0.3cm{i \ar[rr]^{r-st} \ar[rd]_{s} & & j \\
& k \ar[ru]_t & } \\\hline
\end{array}$$ where $r,s,t$ are non negative integers, an arrow $\xymatrix@=0.3cm{i\ar[r]^l & j}$ with $l\geq 0$ means that $l$ arrows go from $i$ to $j$ and an arrow $\xymatrix@=0.3cm{i\ar[r]^l & j}$ with $l\leq 0$ means that $-l$ arrows go from $j$ to $i$.
b\) $u'$ is obtained from $u$ by replacing the element $u_k$ with $$\label{eq:exchange}
u_k'=\frac{1}{u_k} \left( \prod_{\mbox{\scriptsize arrows $i\to k$}} u_i + \prod_{\mbox{\scriptsize arrows $k\to j$}} u_j\right).$$
In the [*exchange relation*]{} (\[eq:exchange\]), if there are no arrows from $i$ with target $k$, the product is taken over the empty set and equals $1$. It is not hard to see that $\mu_k(R,u)$ is indeed a seed and that $\mu_k$ is an involution: we have $\mu_k(\mu_k(R,u))=(R,u)$.
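The quiver part of the mutation rule in the table above is conveniently encoded by the skew-symmetric integer matrix $B$ with $b_{ij}$ equal to the number of arrows from $i$ to $j$ minus the number from $j$ to $i$; mutation then becomes the standard Fomin-Zelevinsky matrix mutation. The following sketch (our encoding, with $0$-based indices, not verbatim from the text) implements it:

```python
def mutate_quiver(B, k):
    """Quiver mutation at vertex k (0-based), in matrix form.

    B is the skew-symmetric integer matrix with B[i][j] equal to the
    number of arrows i -> j minus the number of arrows j -> i.
    """
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]  # reverse all arrows incident with k
            else:
                # add s*t arrows along every path i -> k -> j,
                # then cancel any 2-cycles created
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp


# the cyclic quiver of the next section: arrows 2 -> 1, 1 -> 3, 3 -> 2
B = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
B1 = mutate_quiver(B, 0)  # mutation at vertex 1: arrows become 1 -> 2, 3 -> 1
```

Applying `mutate_quiver` at the same vertex twice returns the original matrix, in line with the fact that $\mu_k$ is an involution.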
Examples of mutations
---------------------
Let $R$ be the cyclic quiver $$\label{quiver1}
\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(94,0) *+{1} ="0",
(0,156) *+{2} ="1",
(188,156) *+{3} ="2",
"1", {\ar"0"},
"0", {\ar"2"},
"2", {\ar"1"},
\end{xy}$$ and $u=\{x_1, x_2, x_3\}$. If we mutate at $k=1$, we obtain the quiver $$\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(92,0) *+{1} ="0",
(0,155) *+{2} ="1",
(188,155) *+{3} ="2",
"0", {\ar"1"},
"2", {\ar"0"},
\end{xy}$$ and the set of fractions given by $u'_1=(x_2+x_3)/x_1$, $u'_2=u_2=x_2$ and $u'_3=u_3=x_3$. Now, if we mutate again at $1$, we obtain the original seed. This is a general fact: Mutation at $k$ is an involution. If, on the other hand, we mutate $(R', u')$ at $2$, we obtain the quiver $$\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(87,0) *+{1} ="0",
(0,145) *+{2} ="1",
(167,141) *+{3} ="2",
"1", {\ar"0"},
"2", {\ar"0"},
\end{xy}$$ and the set $u''$ given by $u''_1=u'_1=(x_2+x_3)/x_1$, $u''_2=\frac{x_1 + x_2 + x_3}{x_1 x_2}$ and $u''_3=u'_3=x_3$.
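The exchange relation (\[eq:exchange\]) can be checked on this example by evaluating the variables at numeric rational values. Here is an illustrative Python sketch (the helper is ours, with the convention that $B[i][j] > 0$ means $B[i][j]$ arrows $i \to j$) reproducing $u'_1$ and $u''_2$:

```python
from fractions import Fraction


def mutate_variable(B, u, k):
    """Exchange relation at vertex k: replace u_k, keep the other variables.

    Convention: B[i][j] > 0 means B[i][j] arrows i -> j (0-based vertices).
    """
    n = len(B)
    incoming = Fraction(1)   # product over arrows i -> k
    outgoing = Fraction(1)   # product over arrows k -> j
    for i in range(n):
        if B[i][k] > 0:
            incoming *= u[i] ** B[i][k]
        if B[k][i] > 0:
            outgoing *= u[i] ** B[k][i]
    u2 = list(u)
    u2[k] = (incoming + outgoing) / u[k]
    return u2


# the cyclic quiver above (arrows 2 -> 1, 1 -> 3, 3 -> 2),
# evaluated at x1 = 5, x2 = 7, x3 = 11
x1, x2, x3 = Fraction(5), Fraction(7), Fraction(11)
B = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
u1 = mutate_variable(B, [x1, x2, x3], 0)   # u'_1 = (x2 + x3)/x1

B1 = [[0, 1, -1], [-1, 0, 0], [1, 0, 0]]   # the quiver after mutating at 1
u2 = mutate_variable(B1, u1, 1)            # u''_2 = (x1 + x2 + x3)/(x1*x2)
```

At these values one finds $u'_1 = 18/5$ and $u''_2 = 23/35$, matching the fractions $(x_2+x_3)/x_1$ and $(x_1+x_2+x_3)/(x_1 x_2)$ computed above.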
Let us consider the following, more complicated quiver glued together from four $3$-cycles: $$\label{quiver2}
\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(74,0) *+{1} ="0",
(38,62) *+{2} ="1",
(110,62) *+{3} ="2",
(0,123) *+{4} ="3",
(74,104) *+{5} ="4",
(148,123) *+{6.} ="5",
"1", {\ar"0"},
"0", {\ar"2"},
"2", {\ar"1"},
"3", {\ar"1"},
"1", {\ar"4"},
"4", {\ar"2"},
"2", {\ar"5"},
"4", {\ar"3"},
"5", {\ar"4"},
\end{xy}$$ If we successively perform mutations at the vertices $5$, $3$, $1$ and $6$, we obtain the sequence of quivers (we use [@KellerQuiverMutationApplet]) $$\quad
\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(74,0) *+{1} ="0",
(38,62) *+{2} ="1",
(110,62) *+{3} ="2",
(0,123) *+{4} ="3",
(75,104) *+{5} ="4",
(148,123) *+{6} ="5",
"1", {\ar"0"},
"0", {\ar"2"},
"4", {\ar"1"},
"2", {\ar"4"},
"3", {\ar"4"},
"5", {\ar"3"},
"4", {\ar"5"},
\end{xy}
\quad
\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(75,0) *+{1} ="0",
(38,62) *+{2} ="1",
(110,61) *+{3} ="2",
(0,123) *+{4} ="3",
(75,104) *+{5} ="4",
(148,123) *+{6} ="5",
"1", {\ar"0"},
"2", {\ar"0"},
"0", {\ar"4"},
"4", {\ar"1"},
"4", {\ar"2"},
"3", {\ar"4"},
"5", {\ar"3"},
"4", {\ar"5"},
\end{xy}
\quad
\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(75,0) *+{1} ="0",
(38,61) *+{2} ="1",
(110,60) *+{3} ="2",
(0,122) *+{4} ="3",
(75,103) *+{5} ="4",
(148,122) *+{6} ="5",
"0", {\ar"1"},
"0", {\ar"2"},
"4", {\ar"0"},
"3", {\ar"4"},
"5", {\ar"3"},
"4", {\ar"5"},
\end{xy}
\quad
\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(75,0) *+{1} ="0",
(38,61) *+{2} ="1",
(110,60) *+{3} ="2",
(0,122) *+{4} ="3",
(75,103) *+{5} ="4",
(149,121) *+{6.} ="5",
"0", {\ar"1"},
"0", {\ar"2"},
"4", {\ar"0"},
"3", {\ar"5"},
"5", {\ar"4"},
\end{xy}
\quad$$ Notice that the last quiver no longer has any oriented cycles and is in fact an orientation of the Dynkin diagram of type $D_6$. The sequence of new fractions appearing in these steps is $$\begin{aligned}
u'_5 & = &\frac{x_3 x_4 + x_2 x_6}{x_5} {\: , \;}\quad
u'_3 = \frac{x_3 x_4 + x_1 x_5 + x_2 x_6}{x_3 x_5}{\: , \;}\\
u'_1 & = & \frac{x_2 x_3 x_4 + x_3^2 x_4 + x_1 x_2 x_5 + x_2^2 x_6 + x_2 x_3x_6}{x_1 x_3 x_5} {\: , \;}\quad
u'_6=\frac{x_3 x_4 + x_4 x_5 + x_2 x_6}{x_5 x_6}\;.\end{aligned}$$ It is remarkable that all the denominators appearing here are monomials and that all the coefficients in the numerators are positive.
Finally, let us consider the quiver $$\label{quiver3}
\begin{xy} 0;<0.6pt,0pt>:<0pt,-0.6pt>::
(79,0) *+{1} ="0",
(52,44) *+{2} ="1",
(105,44) *+{3} ="2",
(26,88) *+{4} ="3",
(79,88) *+{5} ="4",
(131,88) *+{6} ="5",
(0,132) *+{7} ="6",
(52,132) *+{8} ="7",
(105,132) *+{9} ="8",
(157,132) *+{10.} ="9",
"1", {\ar"0"},
"0", {\ar"2"},
"2", {\ar"1"},
"3", {\ar"1"},
"1", {\ar"4"},
"4", {\ar"2"},
"2", {\ar"5"},
"4", {\ar"3"},
"6", {\ar"3"},
"3", {\ar"7"},
"5", {\ar"4"},
"7", {\ar"4"},
"4", {\ar"8"},
"8", {\ar"5"},
"5", {\ar"9"},
"7", {\ar"6"},
"8", {\ar"7"},
"9", {\ar"8"},
\end{xy}$$ One can show [@KellerReiten06] that it is impossible to transform it into a quiver without oriented cycles by a finite sequence of mutations. However, its mutation class (the set of all quivers obtained from it by iterated mutations) contains many quivers with just one oriented cycle, for example $$\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(0,70) *+{1} ="0",
(183,274) *+{2} ="1",
(293,235) *+{3} ="2",
(253,164) *+{4} ="3",
(119,8) *+{5} ="4",
(206,96) *+{6} ="5",
(125,88) *+{7} ="6",
(104,164) *+{8} ="7",
(177,194) *+{9} ="8",
(39,0) *+{10} ="9",
"9", {\ar"0"},
"8", {\ar"1"},
"2", {\ar"3"},
"3", {\ar"5"},
"8", {\ar"3"},
"4", {\ar"6"},
"9", {\ar"4"},
"5", {\ar"6"},
"6", {\ar"7"},
"7", {\ar"8"},
\end{xy}
\quad\quad
\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(212,217) *+{1} ="0",
(212,116) *+{2} ="1",
(200,36) *+{3} ="2",
(17,0) *+{4} ="3",
(123,11) *+{5} ="4",
(64,66) *+{6} ="5",
(0,116) *+{7} ="6",
(12,196) *+{8} ="7",
(89,221) *+{9} ="8",
(149,166) *+{10} ="9",
"9", {\ar"0"},
"1", {\ar"2"},
"9", {\ar"1"},
"2", {\ar"4"},
"3", {\ar"5"},
"4", {\ar"5"},
"5", {\ar"6"},
"6", {\ar"7"},
"7", {\ar"8"},
"8", {\ar"9"},
\end{xy}
\quad\quad
\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(0,230) *+{1} ="0",
(294,255) *+{2.} ="1",
(169,253) *+{3} ="2",
(285,174) *+{4} ="3",
(125,0) *+{5} ="4",
(90,114) *+{6} ="5",
(161,73) *+{7} ="6",
(142,177) *+{8} ="7",
(17,150) *+{9} ="8",
(213,135) *+{10} ="9",
"8", {\ar"0"},
"3", {\ar"1"},
"7", {\ar"2"},
"9", {\ar"3"},
"4", {\ar"6"},
"5", {\ar"6"},
"7", {\ar"5"},
"8", {\ar"5"},
"6", {\ar"9"},
"9", {\ar"7"},
\end{xy}$$ In fact, in this example, the mutation class is finite and it can be completely computed using, for example, [@KellerQuiverMutationApplet]: It consists of $5739$ quivers up to isomorphism. The above quivers are members of the mutation class containing relatively few arrows. The initial quiver is the unique member of its mutation class with the largest number of arrows. Here are some other quivers in the mutation class with a relatively large number of arrows: $$\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(290,176) *+{\circ} ="0",
(154,235) *+{\circ} ="1",
(34,147) *+{\circ} ="2",
(50,0) *+{\circ} ="3",
(239,244) *+{\circ} ="4",
(0,69) *+{\circ} ="5",
(169,89) *+{\circ} ="6",
(205,165) *+{\circ} ="7",
(85,78) *+{\circ} ="8",
(118,159) *+{\circ} ="9",
"4", {\ar"0"},
"0", {\ar"7"},
"4", {\ar"1"},
"1", {\ar"7"},
"9", {\ar"1"},
"5", {\ar"2"},
"2", {\ar"8"},
"9", {\ar"2"},
"5", {\ar"3"},
"3", {\ar"8"},
"7", {\ar"4"},
"8", {\ar"5"},
"6", {\ar"7"},
"6", {\ar"8"},
"9", {\ar"6"},
"7", {\ar"9"},
"8", {\ar"9"},
\end{xy}
\quad
\quad
\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(0,78) *+{\circ} ="0",
(226,262) *+{\circ} ="1",
(23,284) *+{\circ} ="2",
(61,0) *+{\circ} ="3",
(208,92) *+{\circ} ="4",
(159,7) *+{\circ} ="5",
(125,273) *+{\circ} ="6",
(64,191) *+{\circ} ="7",
(166,180) *+{\circ} ="8",
(103,96) *+{\circ} ="9",
"0", {\ar"3"},
"9", {\ar"0"},
"1", {\ar"6"},
"8", {\ar"1"},
"6", {\ar"2"},
"2", {\ar"7"},
"5", {\ar"3"},
"3", {\ar"9"},
"5", {\ar"4"},
"8", {\ar"4"},
"4", {\ar"9"},
"9", {\ar"5"},
"7", {\ar"6"},
"6", {\ar"8"},
"8", {\ar"7"},
"7", {\ar"9"},
"9", {\ar"8"},
\end{xy}
\quad
\quad
\begin{xy} 0;<0.3pt,0pt>:<0pt,-0.3pt>::
(159,287) *+{\circ} ="0",
(252,281) *+{\circ} ="1",
(19,152) *+{\circ} ="2",
(67,0) *+{\circ} ="3",
(0,61) *+{\circ} ="4",
(200,203) *+{\circ} ="5",
(109,180) *+{\circ} ="6",
(155,26) *+{\circ} ="7",
(176,115) *+{\circ} ="8",
(87,92) *+{\circ} ="9",
"0", {\ar"1"},
"5", {\ar"0"},
"1", {\ar"5"},
"4", {\ar"2"},
"6", {\ar"2"},
"2", {\ar"9"},
"4", {\ar"3"},
"7", {\ar"3"},
"3", {\ar"9"},
"9", {\ar"4"},
"5", {\ar"6"},
"8", {\ar"5"},
"6", {\ar"8"},
"9", {\ar"6"},
"7", {\ar"8"},
"9", {\ar"7"},
"8", {\ar"9"},
\end{xy}$$ Only $84$ among the $5739$ quivers in the mutation class contain double arrows (and none contain arrows of multiplicity $\geq 3$). Here is a typical example $$\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(89,0) *+{1} ="0", (262,111) *+{2} ="1", (24,29) *+{3} ="2",
(247,27) *+{4} ="3", (201,153) *+{5} ="4", (152,30) *+{6} ="5",
(36,159) *+{7} ="6", (0,96) *+{8} ="7", (144,213) *+{9} ="8",
(123,120) *+{10} ="9", "2", {\ar"0"}, "0", {\ar"5"}, "1", {\ar"3"},
"9", {\ar"1"}, "7", {\ar"2"}, "4", {\ar"3"}, "5", {\ar"3"}, "3",
{\ar|*+{\scriptstyle 2}"9"}, "8", {\ar"4"}, "9", {\ar"4"}, "9",
{\ar"5"}, "6", {\ar"7"},
\end{xy}$$ The quivers (\[quiver1\]), (\[quiver2\]) and (\[quiver3\]) are part of a family which appears in the study of the cluster algebra structure on the coordinate algebra of the subgroup of upper unitriangular matrices in $SL_n({\mathbb{C}})$, [[*cf.*]{} ]{}[@GeissLeclercSchroeer06]. The study of coordinate algebras on varieties associated with reductive algebraic groups (in particular, double Bruhat cells) has provided a major impetus for the development of cluster algebras, [[*cf.*]{} ]{}[@BerensteinFominZelevinsky05].
Definition of cluster algebras
------------------------------
Let $Q$ be a finite quiver without loops or $2$-cycles with vertex set $\{1, \ldots, n\}$. Consider the seed $(Q,x)$ consisting of $Q$ and the set $x$ formed by the variables $x_1, \ldots, x_n$. Following [@FominZelevinsky02] we define
- the *clusters with respect to $Q$* to be the sets $u$ appearing in seeds $(R,u)$ obtained from $(Q,x)$ by iterated mutation,
- the *cluster variables* for $Q$ to be the elements of all clusters,
- the *cluster algebra ${{\mathcal A}}_Q$* to be the ${\mathbb{Q}}$-subalgebra of the field ${\mathbb{Q}}(x_1, \ldots, x_n)$ generated by all the cluster variables.
Thus the cluster algebra consists of all ${\mathbb{Q}}$-linear combinations of monomials in the cluster variables. It is useful to define another combinatorial object associated with this recursive construction: The *exchange graph* associated with $Q$ is the graph whose vertices are the seeds modulo simultaneous renumbering of the vertices and the associated cluster variables and whose edges correspond to mutations.
The example $A_3$
-----------------
Let us consider the quiver $$Q: \xymatrix{1 \ar[r] & 2 \ar[r] & 3}$$ obtained by endowing the Dynkin diagram $A_3$ with a linear orientation. By applying the recursive construction to the initial seed $(Q,x)$ one finds exactly fourteen seeds (modulo simultaneous renumbering of vertices and cluster variables). These are the vertices of the exchange graph, which is isomorphic to the third Stasheff associahedron [@Stasheff63] [@ChapotonFominZelevinsky02]: $$\begin{xy} 0;<0.4pt,0pt>:<0pt,-0.4pt>::
(173,0) *+<8pt>[o][F]{2} ="0",
(0,143) *+{\circ} ="1",
(63,168) *+{\circ} ="2",
(150,218) *+{\circ} ="3",
(250,218) *+<8pt>[o][F]{3} ="4",
(375,143) *+{\circ} ="5",
(350,82) *+{\circ} ="6",
(152,358) *+{\circ} ="7",
(200,168) *+<8pt>[o][F]{1} ="8",
(200,268) *+{\circ} ="9",
(32,79) *+{\circ} ="10",
(33,218) *+{\circ} ="11",
(320,170) *+{\circ} ="12",
(353,228) *+{\circ} ="13",
"0", {\ar@{-}"6"},
"0", {\ar@{-}"8"},
"0", {\ar@{-}"10"},
"1", {\ar@{.}"5"},
"1", {\ar@{-}"10"},
"11", {\ar@{-}"1"},
"2", {\ar@{-}"3"},
"10", {\ar@{-}"2"},
"2", {\ar@{-}"11"},
"3", {\ar@{-}"8"},
"9", {\ar@{-}"3"},
"8", {\ar@{-}"4"},
"4", {\ar@{-}"9"},
"4", {\ar@{-}"12"},
"6", {\ar@{-}"5"},
"5", {\ar@{-}"13"},
"12", {\ar@{-}"6"},
"9", {\ar@{-}"7"},
"11", {\ar@{-}"7"},
"13", {\ar@{-}"7"},
"13", {\ar@{-}"12"},
\end{xy}$$ The vertex labeled $1$ corresponds to $(Q,x)$, the vertex $2$ to $\mu_2(Q,x)$, which is given by $$\xymatrix{1 \ar@/^1pc/[rr] & 2 \ar[l] & 3 \ar[l]} {\: , \;}\{ x_1, \frac{x_1+x_3}{x_2}, x_3\} {\: , \;}$$ and the vertex $3$ to $\mu_1(Q,x)$, which is given by $$\xymatrix{1 & 2 \ar[l] \ar[r] & 3} {\: , \;}\{\frac{1+x_3}{x_1}, x_2, x_3\}.$$ We find a total of $9$ cluster variables, namely $$\begin{aligned}
& x_1 {\: , \;}x_2 {\: , \;}x_3,
\frac{1+x_2}{x_1}{\: , \;}\frac{x_1+x_3+x_2x_3}{x_1x_2} {\: , \;}\frac{x_1+x_1 x_2 +x_3 + x_2 x_3}{x_1 x_2 x_3} {\: , \;}\\
& \frac{x_1+x_3}{x_2}{\: , \;}\frac{x_1+x_1x_2+x_3}{x_2 x_3}{\: , \;}\frac{1+x_2}{x_3}\;.\end{aligned}$$ Again we observe that all denominators are monomials. Notice also that $9=3+6$ and that $3$ is the rank of the root system associated with $A_3$ and $6$ its number of positive roots. Moreover, if we look at the denominators of the non trivial cluster variables (those other than $x_1$, $x_2$, $x_3$), we see a natural bijection with the positive roots $$\alpha_1, \alpha_1+\alpha_2, \alpha_1+\alpha_2+\alpha_3,
\alpha_2, \alpha_2+\alpha_3, \alpha_3$$ of the root system of $A_3$, where $\alpha_1$, $\alpha_2$, $\alpha_3$ denote the three simple roots.
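The seeds of this example can be enumerated mechanically: start from the initial seed and apply mutations in all directions until no new seeds appear. The sketch below (ours, not from the text) evaluates the cluster variables at generic rational values, assuming, as happens here, that distinct cluster variables then take distinct values; the exhaustive search recovers exactly the $9 = 3 + 6$ cluster variables.

```python
from fractions import Fraction


def mutate_seed(B, u, k):
    """Mutate the seed (B, u) at vertex k (0-based).

    B is the skew-symmetric matrix with B[i][j] = #arrows i -> j minus
    #arrows j -> i; u is the tuple of cluster variable values.
    """
    n = len(B)
    Bp = tuple(tuple(-B[i][j] if k in (i, j)
                     else B[i][j] + (abs(B[i][k]) * B[k][j]
                                     + B[i][k] * abs(B[k][j])) // 2
                     for j in range(n)) for i in range(n))
    inc = out = Fraction(1)
    for i in range(n):
        if B[i][k] > 0:
            inc *= u[i] ** B[i][k]       # arrows i -> k
        if B[k][i] > 0:
            out *= u[i] ** B[k][i]       # arrows k -> i
    up = tuple((inc + out) / u[k] if i == k else u[i] for i in range(n))
    return Bp, up


# linearly oriented A3: 1 -> 2 -> 3, evaluated at generic rational values
B0 = ((0, 1, 0), (-1, 0, 1), (0, -1, 0))
u0 = (Fraction(5), Fraction(7), Fraction(11))
seen, todo, variables = {(B0, u0)}, [(B0, u0)], set(u0)
while todo:
    B, u = todo.pop()
    for k in range(3):
        s = mutate_seed(B, u, k)
        variables.update(s[1])
        if s not in seen:
            seen.add(s)
            todo.append(s)
# `variables` now holds the values of all 9 cluster variables
```

The search terminates because there are only finitely many seeds in type $A_3$; for quivers of infinite cluster type the same loop would run forever.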
Cluster algebras with finitely many cluster variables
-----------------------------------------------------
The phenomena observed in the above example are explained by the following key theorem:
Let $Q$ be a finite connected quiver without loops or $2$-cycles with vertex set $\{1, \ldots, n\}$. Let ${{\mathcal A}}_Q$ be the associated cluster algebra.
- All cluster variables are Laurent polynomials, i.e. their denominators are monomials.
- The number of cluster variables is finite iff $Q$ is mutation equivalent to an orientation of a simply laced Dynkin diagram $\Delta$. In this case, $\Delta$ is unique and the non trivial cluster variables are in bijection with the positive roots of $\Delta$; namely, if we denote the simple roots by $\alpha_1, \ldots, \alpha_n$, then for each positive root $\sum d_i
\alpha_i$, there is a unique non trivial cluster variable whose denominator is $\prod x_i^{d_i}$.
Categorification
================
We refer to the books [@Ringel84] [@GabrielRoiter92] [@AuslanderReitenSmaloe95] and [@AssemSimsonSkowronski06] for a wealth of information on the representation theory of quivers and finite-dimensional algebras. Here, we will only need very basic notions.
Let $Q$ be a finite quiver without oriented cycles. For example, $Q$ can be an orientation of a simply laced Dynkin diagram or the quiver $$\xymatrix@R=10pt{ & 2 \ar[rd]^\beta & \\
1 \ar[rr]_{\gamma} \ar[ru]^{\alpha} & & 3.
}$$ Let $k$ be an algebraically closed field. Recall that a [*representation of $Q$*]{} is a diagram of finite-dimensional vector spaces of the shape given by $Q$. Thus, in the above example, a representation of $Q$ is a (not necessarily commutative) diagram $$\xymatrix@R=10pt{ & V_2 \ar[rd]^{V_\beta} & \\
V_1 \ar[rr]_{V_{\gamma}} \ar[ru]^{V_\alpha} & & V_3
}$$ formed by three finite-dimensional vector spaces and three linear maps. A *morphism of representations* is a morphism of diagrams. We thus obtain the *category of representations* ${{\operatorname{\mathsf{rep}}}\nolimits}(Q)$. Notice that it is an abelian category (since it is a category of diagrams in an abelian category, that of finite-dimensional vector spaces): Sums, kernels and cokernels in the category ${{\operatorname{\mathsf{rep}}}\nolimits}(Q)$ are computed componentwise. We denote by ${{\mathcal D}}_Q$ its bounded derived category. Thus, the objects of ${{\mathcal D}}_Q$ are the bounded complexes $$\xymatrix{ \ldots \ar[r] & V^p \ar[r]^d & V^{p+1} \ar[r] & \ldots}$$ of representations and its morphisms are obtained from morphisms of complexes by formally inverting all quasi-isomorphisms (=morphisms inducing isomorphisms in homology). The category ${{\mathcal D}}_Q$ is still an additive category (direct sums are given by direct sums of complexes) but it is almost never abelian. In fact, it is abelian if and only if $Q$ does not have any arrows. But it is always [*triangulated*]{}. This means that ${{\mathcal D}}_Q$ is additive and endowed with
- a [*suspension functor*]{} $\Sigma: {{\mathcal D}}_Q {\stackrel{_\sim}{\rightarrow}}{{\mathcal D}}_Q$, namely the functor taking a complex $V$ to $V[1]$, where $V[1]^p=V^{p+1}$ for all $p\in{\mathbb{Z}}$ and $d_{V[1]}=-d_V$;
- a class of [*triangles*]{}, namely the sequences $$\xymatrix{U \ar[r] & V \ar[r] & W \ar[r] & \Sigma U}$$ which are ‘induced’ from short exact sequences of complexes.
The triangulated category ${{\mathcal D}}_Q$ admits a *Serre functor*, i.e. an autoequivalence $S: {{\mathcal D}}_Q {\stackrel{_\sim}{\rightarrow}}{{\mathcal D}}_Q$ which makes the Serre duality formula true: We have $$D{{\operatorname{\mathsf{Hom}}}}(X,Y) {\stackrel{_\sim}{\rightarrow}}{{\operatorname{\mathsf{Hom}}}}(Y,SX)$$ bifunctorially in $X$, $Y$ belonging to ${{\mathcal D}}_Q$, where $D$ denotes the duality functor ${{\operatorname{\mathsf{Hom}}}}_k(?,k)$ over the ground field $k$. The [*cluster category*]{} is defined as the *orbit category* $${{\mathcal C}}_Q = {{\mathcal D}}_Q / (S^{-1}\circ \Sigma^2)^{\mathbb{Z}}$$ of ${{\mathcal D}}_Q$ under the action of the cyclic group generated by the automorphism $S^{-1}\circ \Sigma^2$. Thus, its objects are the same as those of ${{\mathcal D}}_Q$ and its morphisms are defined by $${{\operatorname{\mathsf{Hom}}}}_{{{\mathcal C}}_Q}(X,Y) = \bigoplus_{p\in {\mathbb{Z}}} {{\operatorname{\mathsf{Hom}}}}_{{{\mathcal D}}_Q}(X, (S^{-1}\circ \Sigma^2)^p Y).$$ One can show [@Keller05] that ${{\mathcal C}}_Q$ admits a structure of triangulated category such that the projection functor ${{\mathcal D}}_Q \to {{\mathcal C}}_Q$ becomes a triangle functor (in general, the orbit category of a triangulated category under the action of an automorphism group is no longer triangulated). It is not hard to see that the cluster category has finite-dimensional morphism spaces, and that it admits a Serre functor induced by that of the derived category. The definition of the cluster category then immediately yields an isomorphism $$S {\stackrel{_\sim}{\rightarrow}}\Sigma^2$$ and this means that ${{\mathcal C}}_Q$ is $2$-Calabi-Yau: A $k$-linear triangulated category with finite-dimensional morphism spaces is $d$-Calabi-Yau if it admits a Serre functor $S$ and if $S$ is isomorphic to $\Sigma^d$ (the $d$th power of the suspension functor) as a triangle functor. 
The definition of the cluster category is due to Buan-Marsh-Reineke-Reiten-Todorov [@BuanMarshReinekeReitenTodorov04] (for arbitrary $Q$ without oriented cycles) and, independently and with a very different, more geometric description, to Caldero-Chapoton-Schiffler [@CalderoChapotonSchiffler06] (for $Q$ of type $A_n$).
To state the close relationship between the cluster category ${{\mathcal C}}_Q$ and the cluster algebra ${{\mathcal A}}_Q$, we need some notation: For two objects $L$ and $M$ of ${{\mathcal C}}_Q$, we write $${{\operatorname{\mathsf{Ext}}}}^1(L,M) = {{\operatorname{\mathsf{Hom}}}}_{{{\mathcal C}}_Q}(L,\Sigma M).$$ Notice that it follows from the Calabi-Yau property that we have a canonical isomorphism $${{\operatorname{\mathsf{Ext}}}}^1(L,M) {\stackrel{_\sim}{\rightarrow}}D {{\operatorname{\mathsf{Ext}}}}^1(M,L).$$ An object $L$ of ${{\mathcal C}}_Q$ is *rigid* if we have ${{\operatorname{\mathsf{Ext}}}}^1(L,L)=0$. It is *indecomposable* if it is non zero and in each decomposition $L=L_1\oplus L_2$, we have $L_1=0$ or $L_2=0$.
\[thm:2\] Let $Q$ be a finite quiver without oriented cycles with vertex set $\{1, \ldots, n\}$.
- There is an explicit bijection $L \mapsto X_L$ from the set of isomorphism classes of rigid indecomposables of the cluster category ${{\mathcal C}}_Q$ onto the set of cluster variables of the cluster algebra ${{\mathcal A}}_Q$.
- Under this bijection, the clusters correspond exactly to the *cluster-tilting subsets*, i.e. the sets $T_1, \ldots, T_n$ of rigid indecomposables such that $${{\operatorname{\mathsf{Ext}}}}^1(T_i,T_j)=0$$ for all $i,j$.
- If $L$ and $M$ are rigid indecomposables such that the space ${{\operatorname{\mathsf{Ext}}}}^1(L,M)$ is one-dimensional, then we have the generalized exchange relation $$\label{eq:gen-exchange}
X_L = \frac{X_B + X_{B'}}{X_M}$$ where $B$ and $B'$ are the middle terms of ‘the’ non split triangles $$\xymatrix{L \ar[r] & B \ar[r] & M \ar[r] & \Sigma L} \mbox{ and }
\xymatrix{M \ar[r] & B' \ar[r] & L \ar[r] & \Sigma M}$$ and we define $$X_B= \prod_{i=1}^s X_{B_i} {\: , \;}$$ where $B=B_1 \oplus \cdots \oplus B_s$ is a decomposition into indecomposables.
The relation (\[eq:gen-exchange\]) in part c) of the theorem can be generalized to the case where the extension group is of higher dimension, cf. [@CalderoKeller05a] [@Hubery06] [@XiaoXu07]. One can show using [@BuanMarshReiten04] that relation (\[eq:gen-exchange\]) generalizes the exchange relation (\[eq:exchange\]) which appeared in the definition of the mutation.
The proof of the theorem builds on work by many authors notably Buan-Marsh-Reiten-Todorov [@BuanMarshReitenTodorov07], Buan-Marsh-Reiten [@BuanMarshReiten04b], Buan-Marsh-Reineke-Reiten-Todorov [@BuanMarshReinekeReitenTodorov04], Marsh-Reineke-Zelevinsky [@MarshReinekeZelevinsky03], … and especially on Caldero-Chapoton’s explicit formula for $X_L$ proved in [@CalderoChapoton06] for orientations of simply laced Dynkin diagrams. We include the formula below. Another crucial ingredient of the proof is the Calabi-Yau property of the cluster category. An alternative proof of part c) was given by A. Hubery [@Hubery06] for quivers whose underlying graph is an extended simply laced Dynkin diagram.
The theorem does shed new light on cluster algebras. In particular, we have the following
Suppose that $Q$ does not have oriented cycles. Then all cluster variables of ${{\mathcal A}}_Q$ belong to ${\mathbb{N}}[x_1^{\pm}, \ldots, x_n^{\pm}]$.
This settles a conjecture of Fomin-Zelevinsky [@FominZelevinsky02] in the case of cluster algebras associated with acyclic quivers. The proof is based on Lusztig’s work [@Lusztig98], and in this sense it does not quite live up to the hopes that cluster theory ought to explain Lusztig’s results. However, it does show that the conjecture is true for this important class of cluster algebras.
Caldero-Chapoton’s formula
==========================
We describe the bijection of part a) of theorem \[thm:2\]. Let $k$ be an algebraically closed field and $Q$ a finite quiver without oriented cycles with vertex set $\{1, \ldots, n\}$. Let $L$ be an object of the cluster category ${{\mathcal C}}_Q$. With $L$, we will associate an element $X_L$ of the field ${\mathbb{Q}}(x_1, \ldots, x_n)$. According to [@BuanMarshReinekeReitenTodorov04], the object $L$ decomposes into a sum of indecomposables $L_i$, $1\leq i\leq s$, unique up to isomorphism and permutation. By defining $$X_L=\prod_{i=1}^s X_{L_i}$$ we reduce to the case where $L$ is indecomposable. Now again by [@BuanMarshReinekeReitenTodorov04], if $L$ is indecomposable, it is either isomorphic to an object $\pi(V)$, or an object $\Sigma \pi(P_i)$, where $\pi : {{\mathcal D}}_Q \to {{\mathcal C}}_Q$ is the canonical projection functor, $\Sigma$ is the suspension functor of ${{\mathcal C}}_Q$, $V$ is a representation of $Q$ (identified with a complex of representations concentrated in degree $0$) and $P_i$ is the projective representation associated with a vertex $i$ ($P_i$ is characterized by the existence of a functorial isomorphism $${{\operatorname{\mathsf{Hom}}}}(P_i, W) = W_i$$ for each representation $W$). If $L$ is isomorphic to $\Sigma \pi(P_i)$, we put $X_L=x_i$. If $L$ is isomorphic to $\pi(V)$, we define $$X_L = X_V = \frac{1}{\prod_{i=1}^n x_i^{d_i}} \sum_{0\leq e \leq d} \chi(\mbox{Gr}_e(V))
\prod_{i=1}^n x_i^{\sum_{j\to i} e_j + \sum_{i\to j} (d_j-e_j)} {\: , \;}$$ where $d_i=\dim V_i$, $1 \leq i \leq n$, the sum is taken over all elements $e\in {\mathbb{N}}^n$ such that $0 \leq e_i \leq d_i$ for all $i$, the [*quiver Grassmannian*]{} $\mbox{Gr}_e(V)$ is the variety of $n$-tuples of subspaces $U_i \subset V_i$ such that $\dim U_i=e_i$ and the $U_i$ form a subrepresentation of $V$, the Euler characteristic $\chi$ is taken with respect to étale cohomology (or with respect to singular cohomology with coefficients in a field if $k={\mathbb{C}}$) and the sums in the exponent of $x_i$ are taken over all arrows $j \to i$ respectively all arrows $i\to j$. This formula was invented by P. Caldero and F. Chapoton in [@CalderoChapoton06] for the case of a quiver whose underlying graph is a simply laced Dynkin diagram. It is still valid for arbitrary quivers without oriented cycles [@CalderoKeller06] and further generalizes to arbitrary triangulated $2$-Calabi-Yau categories containing a cluster-tilting object [@Palu07].
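For thin representations (all $\dim V_i \leq 1$) whose arrow maps are nonzero on the support, every nonempty quiver Grassmannian $\mbox{Gr}_e(V)$ is a single point, so each Euler characteristic equals $1$ and the formula can be evaluated by direct enumeration. The following Python sketch (our simplification of the formula to this thin case; the function name is ours) recovers the cluster variable $(1+x_1+x_2)/(x_1x_2)$ for the $A_2$ quiver:

```python
from fractions import Fraction
from itertools import product


def caldero_chapoton_thin(n, arrows, support, x):
    """X_V for a thin representation V of a quiver with vertices 0..n-1.

    `arrows` is a list of (i, j) pairs and `support` the set of vertices
    where dim V_i = 1.  Every arrow inside the support is assumed to act
    as a nonzero scalar, so a dimension vector e gives a subrepresentation
    iff e_i <= e_j for each such arrow, and each nonempty Gr_e(V) is a
    single point (Euler characteristic 1).
    """
    d = [1 if i in support else 0 for i in range(n)]
    total = Fraction(0)
    for e in product(*(range(di + 1) for di in d)):
        if any(e[i] > e[j] for (i, j) in arrows
               if i in support and j in support):
            continue                      # e is not a subrepresentation
        term = Fraction(1)
        for i in range(n):
            # exponent: sum of e_j over arrows j -> i,
            # plus sum of d_j - e_j over arrows i -> j
            exp = sum(e[j] for (j, t) in arrows if t == i) \
                + sum(d[j] - e[j] for (s, j) in arrows if s == i)
            term *= x[i] ** exp
        total += term
    denom = Fraction(1)
    for i in range(n):
        denom *= x[i] ** d[i]
    return total / denom


# A2 quiver 1 -> 2, V the indecomposable with dimension vector (1, 1)
x = (Fraction(5), Fraction(7))
X = caldero_chapoton_thin(2, [(0, 1)], {0, 1}, x)   # (1 + x1 + x2)/(x1*x2)
```

Here the three admissible dimension vectors $(0,0)$, $(0,1)$ and $(1,1)$ contribute the three terms $x_1$, $1$ and $x_2$ of the numerator; $(1,0)$ is excluded because a subrepresentation must be stable under the arrow map.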
Some further developments
=========================
The extension of the results presented here to quivers containing oriented cycles is the subject of ongoing research. In a series of papers [@GeissLeclercSchroeer05] [@GeissLeclercSchroeer05b] [@GeissLeclercSchroeer06] [@GeissLeclercSchroeer06a] [@GeissLeclercSchroeer06b], Geiss-Leclerc-Schröer have obtained remarkable results for a class of quivers which are important in the study of (dual semi-)canonical bases. They use an analogue [@GeissLeclercSchroeer05c] of the Caldero-Chapoton map due ultimately to Lusztig [@Lusztig00]. The class they consider has been further enlarged by Buan-Iyama-Reiten-Scott [@BuanIyamaReitenScott07]. Thanks to their results, an analogue of Caldero-Chapoton’s formula and a version of theorem \[thm:2\] was proved in [@FuKeller07] for an even larger class.
Building on [@MarshReinekeZelevinsky03] Derksen-Weyman-Zelevinsky are developing a representation-theoretic model for mutation of general quivers in [@DerksenWeymanZelevinsky07]. Their approach is related to Kontsevich-Soibelman’s work [@KontsevichSoibelman07], where $3$-Calabi-Yau categories play an important rôle, as was already the case in [@IyamaReiten06].
Ibrahim Assem, Daniel Simson, and Andrzej Skowro[ń]{}ski, *Elements of the representation theory of associative algebras. [V]{}ol. 1*, London Mathematical Society Student Texts, vol. 65, Cambridge University Press, Cambridge, 2006, Techniques of representation theory.
M. Auslander, I. Reiten, and S. Smalø, *Representation theory of [Artin]{} algebras*, Cambridge Studies in Advanced Mathematics, vol. 36, Cambridge University Press, 1995 (English).
Aslak Bakke Buan, Osamu Iyama, Idun Reiten, and Jeanne Scott. Cluster structures for 2-[C]{}alabi-[Y]{}au categories and unipotent groups. , 145(4):1035–1079, 2009.
Aslak Bakke Buan and Robert Marsh, *Cluster-tilting theory*, Trends in representation theory of algebras and related topics, Contemp. Math., vol. 406, Amer. Math. Soc., Providence, RI, 2006, pp. 1–30.
Aslak Bakke Buan, Robert J. Marsh, Markus Reineke, Idun Reiten, and Gordana Todorov, *Tilting theory and cluster combinatorics*, Advances in Mathematics **204 (2)** (2006), 572–618.
Aslak Bakke Buan, Robert J. Marsh, and Idun Reiten. Cluster mutation via quiver representations. , 83(1):143–177, 2008.
Aslak Bakke Buan, Robert J. Marsh, and Idun Reiten, *Cluster-tilted algebras*, Trans. Amer. Math. Soc., **359** (2007), no. 1, 323–332, electronic.
Arkady Berenstein, Sergey Fomin, and Andrei Zelevinsky, *Cluster algebras. [III]{}. [U]{}pper bounds and double [B]{}ruhat cells*, Duke Math. J. **126** (2005), no. 1, 1–52.
Tom Bridgeland, *Stability conditions and [H]{}all algebras*, Talk at the meeting ‘Recent developments in Hall algebras’, Luminy, November 2006.
, Stability conditions on triangulated categories. , 166(2):317–345, 2007.
Aslak Bakke Buan, Robert J. Marsh, Idun Reiten, and Gordana Todorov, *Clusters and seeds in acyclic cluster algebras*, Proc. Amer. Math. Soc. **135** (2007), no. 10, 3049–3060 (electronic), With an appendix coauthored in addition by P. Caldero and B. Keller.
Philippe Caldero and Fr[é]{}d[é]{}ric Chapoton, *Cluster algebras as [H]{}all algebras of quiver representations*, Comment. Math. Helv. **81** (2006), no. 3, 595–616.
Philippe Caldero, Frédéric Chapoton, and Ralf Schiffler, *Quivers with relations arising from clusters (${A}_n$ case)*, Trans. Amer. Math. Soc. **358** (2006), no. 5, 1347–1364.
Philippe Caldero and Bernhard Keller. From triangulated categories to cluster algebras. , 172:169–211, 2008.
, *From triangulated categories to cluster algebras. [II]{}*, Ann. Sci. École Norm. Sup. (4) **39** (2006), 983–1009.
Philippe Caldero and Markus Reineke. On the quiver [G]{}rassmannian in the acyclic case. , 212(11):2369–2380, 2008.
Fr[é]{}d[é]{}ric Chapoton, *Enumerative properties of generalized associahedra*, Sém. Lothar. Combin. **51** (2004/05), Art. B51b, 16 pp. (electronic).
Fr[é]{}d[é]{}ric Chapoton, Sergey Fomin, and Andrei Zelevinsky, *Polytopal realizations of generalized associahedra*, Canad. Math. Bull. **45** (2002), no. 4, 537–566, Dedicated to Robert V. Moody.
Harm Derksen, Jerzy Weyman, and Andrei Zelevinsky. Quivers with potentials and their representations [I]{}: [Mutations]{}. , 14:59–119, 2008.
Vladimir V. Fock and Alexander B. Goncharov. Cluster ensembles, quantization and the dilogarithm. , 42(6):865–930, 2009.
, Cluster ensembles, quantization and the dilogarithm. [II]{}. [T]{}he intertwiner. In [*Algebra, arithmetic, and geometry: in honor of [Y]{}u. [I]{}. [M]{}anin. [V]{}ol. [I]{}*]{}, volume 269 of [*Progr. Math.*]{}, pages 655–673. Birkh[ä]{}user Boston Inc., Boston, MA, 2009.
, Cluster [$\mathcal{X}$]{}-varieties, amalgamation, and [P]{}oisson-[L]{}ie groups. In [*Algebraic geometry and number theory*]{}, volume 253 of [ *Progr. Math.*]{}, pages 27–68. Birkhäuser Boston, Boston, MA, 2006.
, The quantum dilogarithm and representations of quantum cluster varieties. , 175(2):223–286, 2009.
Sergey Fomin, *Cluster algebras portal*, `www.math.lsa.umich.edu/~fomin/cluster.html`.
Sergey Fomin and Nathan Reading, *Generalized cluster complexes and [C]{}oxeter combinatorics*, Int. Math. Res. Not. (2005), no. 44, 2709–2757.
Sergey Fomin, Michael Shapiro, and Dylan Thurston. Cluster algebras and triangulated surfaces. [I]{}. [C]{}luster complexes. , 201(1):83–146, 2008.
Sergey Fomin and Andrei Zelevinsky, *Cluster algebras. [I]{}. [F]{}oundations*, J. Amer. Math. Soc. **15** (2002), no. 2, 497–529 (electronic).
, *Cluster algebras. [II]{}. [F]{}inite type classification*, Invent. Math. **154** (2003), no. 1, 63–121.
, *Cluster algebras: notes for the [CDM]{}-03 conference*, Current developments in mathematics, 2003, Int. Press, Somerville, MA, 2003, pp. 1–34.
, *[$Y$]{}-systems and generalized associahedra*, Ann. of Math. (2) **158** (2003), no. 3, 977–1018.
, *Cluster algebras [IV]{}: Coefficients*, Compositio Mathematica **143** (2007), 112–164.
Changjian Fu and Bernhard Keller. On cluster algebras with coefficients and $2$-[C]{}alabi-[Y]{}au categories. , 362:859–895, 2010.
P. Gabriel and A.V. Roiter, *Representations of finite-dimensional algebras*, Encyclopaedia Math. Sci., vol. 73, Springer–Verlag, 1992.
Christof Geiß, Bernard Leclerc, and Jan Schr[ö]{}er. Auslander algebras and initial seeds for cluster algebras. , 75(3):718–740, 2007.
, Partial flag varieties and preprojective algebras. , 58(3):825–876, 2008.
, *Rigid modules over preprojective algebras [II]{}: The [K]{}ac-[M]{}oody case*, arXiv:math.RT/0703039.
, Semicanonical bases and preprojective algebras. [II]{}. [A]{} multiplication formula. , 143(5):1313–1334, 2007.
, *Semicanonical bases and preprojective algebras*, Ann. Sci. École Norm. Sup. (4) **38** (2005), no. 2, 193–253.
, *Rigid modules over preprojective algebras*, Invent. Math. **165** (2006), no. 3, 589–632.
Michael Gekhtman, Michael Shapiro, and Alek Vainshtein, *Cluster algebras and [P]{}oisson geometry*, Mosc. Math. J. **3** (2003), no. 3, 899–934, 1199, Dedicated to Vladimir Igorevich Arnold on the occasion of his 65th birthday.
, *Cluster algebras and [W]{}eil-[P]{}etersson forms*, Duke Math. J. **127** (2005), no. 2, 291–311.
Victor Ginzburg, *[Calabi-Yau]{} algebras*, arXiv:math/0612139v3 \[math.AG\].
Andrew Hubery, *Acyclic cluster algebras via [R]{}ingel-[H]{}all algebras*, Preprint available at the author’s home page.
Osamu Iyama and Idun Reiten. Fomin-[Z]{}elevinsky mutation and tilting modules over [C]{}alabi-[Y]{}au algebras. , 130(4):1087–1149, 2008.
Masaki Kashiwara, *Bases cristallines*, C. R. Acad. Sci. Paris Sér. I Math. **311** (1990), no. 6, 277–280.
Bernhard Keller, Quiver mutation in java. Interactive computer program, July 2006, available at the author’s homepage.
, *[On triangulated orbit categories]{}*, Doc. Math. **10** (2005), 551–581.
Bernhard Keller and Idun Reiten, Acyclic [C]{}alabi-[Y]{}au categories. , 144(5):1332–1348, 2008. With an appendix by Michel Van den Bergh.
Maxim Kontsevich, *Donaldson-[T]{}homas invariants*, Mathematische Arbeitstagung June 22-28, 2007, MPI Bonn, www.mpim-bonn.mpg.de/preprints/.
, *Donaldson-[T]{}homas invariants, stability conditions and cluster transformations*, Report on joint work with Y. Soibelman, talks at the IHES, October 11 and 12, 2007.
Maxim Kontsevich and Yan Soibelman, Stability structures, [D]{}onaldson-[T]{}homas invariants and cluster transformations. arXiv:0811.2435.
Christian Krattenthaler, *The [$F$]{}-triangle of the generalised cluster complex*, Topics in discrete mathematics, Algorithms Combin., vol. 26, Springer, Berlin, 2006, pp. 93–126.
G. Lusztig, *Canonical bases arising from quantized enveloping algebras*, J. Amer. Math. Soc. **3** (1990), no. 2, 447–498.
, *Total positivity in reductive groups*, Lie theory and geometry, Progr. Math., vol. 123, Birkhäuser Boston, Boston, MA, 1994, pp. 531–568.
, *Semicanonical bases arising from enveloping algebras*, Adv. Math. **151** (2000), no. 2, 129–139.
George Lusztig, *Canonical bases and [H]{}all algebras*, Representation theories and algebraic geometry (Montreal, PQ, 1997), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., vol. 514, Kluwer Acad. Publ., Dordrecht, 1998, pp. 365–399.
Robert Marsh, Markus Reineke, and Andrei Zelevinsky, *Generalized associahedra via quiver representations*, Trans. Amer. Math. Soc. **355** (2003), no. 10, 4171–4186 (electronic).
Gregg Musiker, *A graph theoretic expansion formula for cluster algebras of type [$B_n$ and $D_n$]{}*, arXiv:0710.3574v1 \[math.CO\].
Hiraku Nakajima. Quiver varieties and cluster algebras. arXiv:0905.0002v3 \[math.QA\].
Yann Palu. Cluster characters for 2-[C]{}alabi-[Y]{}au triangulated categories. , 58(6):2221–2248, 2008.
Fan Qin. Quantum cluster variables via Serre polynomials. arXiv:1004.4171 \[math.RT\].
Idun Reiten, *Tilting theory and cluster algebras*, preprint available at `www.institut.math.jussieu.fr/~keller/ictp2006/lecturenotes/reiten.pdf`.
Claus Michael Ringel, *Some remarks concerning tilting modules and tilted algebras. [Origin. Relevance. Future.]{}*, Handbook of Tilting Theory, LMS Lecture Note Series, vol. 332, Cambridge Univ. Press, Cambridge, 2007, pp. 49–104.
C.M. Ringel, *Tame algebras and integral quadratic forms*, Lecture Notes in Mathematics, vol. 1099, Springer Verlag, 1984.
James Dillon Stasheff, *Homotopy associativity of [$H$]{}-spaces. [I]{}, [II]{}*, Trans. Amer. Math. Soc. 108 (1963), 275-292; ibid. **108** (1963), 293–312.
Bal[á]{}zs Szendr[ő]{}i. Non-commutative [D]{}onaldson-[T]{}homas invariants and the conifold. , 12(2):1171–1202, 2008.
Jie Xiao and Fan Xu, *Green’s formula with $\mathbf{C}^{*}$-action and [C]{}aldero-[K]{}eller’s formula for cluster algebras*, arXiv:0707.1175.
Andrei Zelevinsky, *[Cluster algebras: notes for 2004 IMCC (Chonju, Korea, August 2004)]{}*, arXiv:math.RT/0407414.
Andrei Zelevinsky, *From [L]{}ittlewood-[R]{}ichardson coefficients to cluster algebras in three lectures*, Symmetric functions 2001: surveys of developments and perspectives, NATO Sci. Ser. II Math. Phys. Chem., vol. 74, Kluwer Acad. Publ., Dordrecht, 2002, pp. 253–273.
, *Cluster algebras: origins, results and conjectures*, Advances in algebra towards millennium problems, SAS Int. Publ., Delhi, 2005, pp. 85–105.
, *What is a cluster algebra?*, Notices of the A.M.S. **54** (2007), no. 11, 1494–1495.
---
abstract: 'We obtain scatter-broadened images of the Crab nebula at 80 MHz as it transits through the inner solar wind in June 2016 and 2017. These images are anisotropic, with the major axis oriented perpendicular to the radially outward coronal magnetic field. Using these data, we deduce that the density modulation index ($\delta N_e/N_e$) caused by turbulent density fluctuations in the solar wind ranges from 1.9 $\times 10^{-3}$ to 7.7 $\times 10^{-3}$ between $9-20~R_{\odot}$. We also find that the heating rate of solar wind protons at these distances ranges from $2.2 \times 10^{-13}$ to $1.0 \times 10^{-11} ~\rm erg~cm^{-3}~s^{-1}$. On two occasions, the line of sight intercepted a coronal streamer. We find that the presence of the streamer approximately doubles the thickness of the scattering screen.'
author:
- 'K. Sasikumar Raja'
- Prasad Subramanian
- 'R. Ramesh'
- Angelos Vourlidas
- Madhusudan Ingale
bibliography:
- 'ms.bib'
title: 'Turbulent density fluctuations and proton heating rate in the solar wind from $9-20~R_{\odot}$'
---
Introduction {#intro}
============
The solar wind exhibits turbulent fluctuations in velocity, magnetic field, and density. Traditionally, researchers have attempted to understand this phenomenon within the framework of incompressible magnetohydrodynamic (MHD) turbulence (e.g., @Gol1995). However, density fluctuations are not explained in this framework, and remain a relative enigma despite noteworthy progress (e.g., @Hna2005 [@Sha2010; @Ban2014]). While most of the data used for solar wind turbulence studies are from in-situ measurements made by near-Earth spacecraft, density fluctuations can often be inferred via remote sensing observations, typically at radio wavelengths. Examples include angular broadening of point-like radio sources observed through the solar wind [@Mac52; @Hew63; @Eri1964; @Ble1972; @Den1972; @Sastry1974; @Arm90; @Ana94; @Ram1999; @Ram01; @Ram2012; @Kat2011; @Mug2016; @Sas2016], interplanetary scintillations (IPS; @Hew64 [@Coh69; @Eke71; @Ric90; @Bis09; @Man2000; @Tok12; @Tok16]), spacecraft beacon scintillations [@Woo79], interferometer phase scintillations using Very Long Baseline Interferometers (VLBI; @Cro1972), spectral broadening using coherent spacecraft beacons [@Woo79] and radar echoes [@Har1983].
A related problem is the issue of turbulent heating in the inner solar wind. It is well known that the expansion of the solar wind leads to adiabatic cooling, which is offset by some sort of heating process [@Ric1995; @Mat1999]. The candidates for such extended heating range from resonant wave heating [@Cra2000; @Hol2002] to reconnection events (e.g., @Car2004). Some studies have attempted to link observations of density turbulence with kinetic Alfven waves that get resonantly damped on protons, consequently heating them [@Ing2015b; @Cha2009].
In this paper, we investigate the characteristics of turbulent density fluctuations and associated solar wind heating rate from $9-20~R_{\odot}$ using the anisotropic angular broadening of radio observations of the Crab nebula from June 9 to 22 in 2016 and 2017. The Crab nebula passes close to the Sun on these days every year. Since its radiation passes through the foreground solar wind, these observations give us an opportunity to explore the manner in which its angular extent is broadened due to scattering off turbulent density fluctuations in the solar wind. Anisotropic scatter-broadening of background sources observed through the solar wind has hitherto been reported only for small elongations ($\approx 2-6 ~R_{\odot}$) e.g., [@Ana94; @Arm90]. Imaging observations of the Crab nebula (e.g., @Ble1972 [@Den1972]) offer us an opportunity to investigate this phenomenon for elongations $\gtrsim 10 R_{\odot}$. On 17 June 2016, 17 and 18 June 2017, a coronal streamer was present along the line of sight to the Crab nebula; this gives us an additional opportunity to study streamer characteristics. The Parker Solar Probe [@Fox2016] is expected to sample the solar wind as close as 10 $R_{\odot}$. In-situ measurements from the SWEAP instrument aboard the PSP can validate our findings regarding the density turbulence level and the proton heating rate.
The rest of the paper is organized as follows: in § 2, we describe imaging observations of the Crab nebula made at Gauribidanur in June 2016 and 2017. The next section (§ 3) explains the methodology for obtaining the turbulence levels from these images. This includes a brief discussion of the structure function, some discussion of the inner scale of the density fluctuations, followed by the prescription we follow in computing the density fluctuations and solar wind heating rate at the inner scale. § 4 summarizes our main results and conclusions.
Observations: scatter-broadened images of the Crab nebula {#observations}
=========================================================
The radio data were obtained with the Gauribidanur RAdioheliograPH (GRAPH; @Ram98 [@Ram11]) at 80 MHz during the local meridian transit of the Crab nebula. The GRAPH is a T-shaped interferometer array with baselines ranging from $\approx 80$ to $\approx 2600$ meters. The angular resolution is $\approx$ 5 arcmin at 80 MHz, and the minimum detectable flux ($5 \sigma$ level) is $\approx 50$ Jy for 1 sec integration time and 1 MHz bandwidth. Cygnus A was used to calibrate the observations. Its flux density is $\approx 16296$ Jy at 80 MHz. The flux density of Crab nebula (when it is far from the Sun and is not therefore scatter-broadened by solar coronal turbulence) is $\approx 2015$ Jy at 80 MHz. We imaged the Crab nebula at different projected heliocentric distances shown in column (3) of Table-\[tab:table-1\] in the years 2016 and 2017.
We have used white light images of the solar corona obtained with the Large Angle and Spectrometric Coronagraph (LASCO) onboard the SOlar and Heliospheric Observatory (SOHO) [@Bru95] for general context, and to identify features like coronal streamers. Figure \[fig:lasco\] shows the white light images of the solar corona obtained with the LASCO C3 (left) and C2 (right) coronagraphs on 17 June 2016. The black features in both inverted grey scale images are coronal streamers. The location of the Crab nebula between 8 and 21 June 2016 is marked by the red circles on the LASCO C3 images. On 17 June 2016, the Crab nebula was observed through a streamer in the south-west quadrant. The streamer was associated with an active region NOAA 12555 located at heliographic coordinates S09W71. The contours superposed over the LASCO C2 image are from the GRAPH observations at 80 MHz showing radio emission from the streamers in north-east and south-west quadrants [@Ram2000].
Some representative 80 MHz GRAPH images of the Crab nebula are shown in Figure \[fig:graph\_images\]. The image on 12 June 2016 was observed through the solar wind at $10.18~ R_{\odot}$ during ingress. The one on 17 June 2016 was observed at $10.20~ R_{\odot}$, while those on 17 and 18 June 2017 were observed at $9.41~ R_{\odot}$ and $12.61~ R_{\odot}$ respectively, during egress. The Crab nebula was occulted by a coronal streamer on 17 June 2016 and on 17 and 18 June 2017. These scatter-broadened images are markedly anisotropic. This aspect has been noted earlier for the Crab nebula [@Ble1972; @Den1972] as well as for other sources [@Ana94; @Arm90]. Note that the major axis of these images is always perpendicular to the heliocentric radial direction (which is typically assumed to be the magnetic field direction at these distances); this is especially evident when the Crab is occulted by a streamer. The parameters for all observations of the Crab nebula in 2016 and 2017 are tabulated in Table \[tab:table-1\].
Figure \[fig:peak\] shows the observed peak flux density of the Crab nebula with respect to its projected heliocentric distance. The red circles and blue squares are for the 2016 and 2017 observations respectively. Note that, in a given year, the data points obtained during ingress and egress are plotted together against the projected heliocentric distance.
![Peak flux density of the Crab nebula on different days of June 2016 (red circles) and 2017 (blue squares). The red and blue data points shown in the shaded area indicate instances when the Crab nebula was observed through a streamer in 2016 and 2017 respectively. []{data-label="fig:peak"}](crab_light_curve_all.eps){width="12cm"}
The observations shown in the shaded region in Figure \[fig:peak\] represent instances where the Crab nebula was occulted by a coronal streamer. Evidently, the peak flux density in these instances is considerably lower (as compared to the flux corresponding to a similar heliocentric distance, when the Crab is not occulted by a streamer). This could be because the line of sight to the Crab nebula passes through more coronal plasma during instances of streamer occultation, leading to enhanced scatter broadening. In turn, this leads to a larger scatter-broadened image and a consequent reduction in the peak flux density.
Turbulent density fluctuations and solar wind proton heating rate
=================================================================
The angular broadening observations of the Crab nebula described in the previous section can be used to infer the amplitude of turbulent density fluctuations and associated heating rate of protons in the solar wind. The main quantity inferred from the observations is the structure function, which is essentially the spatial Fourier transform of the visibility observed with a given baseline. The structure function is used to estimate $C_{N}^2$, the so-called “amplitude” of the turbulent density spectrum. The density spectrum is modelled as a power law with an exponential cutoff at an “inner scale”. We assume that the inner scale is given by the proton inertial length. We elaborate on these aspects in the subsections below.
Background electron density and the inner scale {#sec:dens}
-----------------------------------------------
Since our aim is to estimate the level of turbulent density fluctuations in relation to the background density ($N_e$), we use the Leblanc density model [@Leb1998] to estimate $N_e$ in the solar wind, $$N_e(R) = 7.2~R^{-2} + 1.95 \times 10^{-3}~R^{-4} + 8.1\times 10^{-7}~R^{-6} \,\,\,\, {\rm cm}^{-3},
\label{leblanc}$$ where ‘R’ is the heliocentric distance in astronomical units (AU; 1 AU = $215 R_{\odot}$). The background electron density is used to compute the inner scale of the turbulent density spectrum. We assume that the inner scale $l_{i}$ is given by the proton inertial length [@Ver96; @Lea99; @Lea00; @Smi01; @Che14; @Bru14], which is related to the background electron density by $$\label{eq:inner}
l_i(R) = v_A(R) / \Omega_p(R) = 2\pi/k_i(R)= 228 / \sqrt{N_{e}(R)}\, \, \, {\rm km},$$
where $N_{e}$ is the electron density in ${\rm cm}^{-3}$, $k_i$ is the wavenumber, $v_A$ is the $\rm Alfv\acute{e}n$ speed and $\Omega_p$ is the proton gyrofrequency. We note that our definition differs slightly from that of @Col89 [@Har89; @Yam98] who use $l_i = 3 \times v_A(R)/\Omega_p(R)$ and $k_i = 3/l_i$.
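As a numerical illustration (ours, not a tabulated value), the Leblanc model and the inner-scale prescription $l_i = 228/\sqrt{N_e}$ km can be evaluated at, say, $R = 10~R_{\odot}$; a minimal Python sketch:

```python
# Sketch (our illustration): Leblanc et al. (1998) background density and the
# proton-inertial-length inner scale, l_i = 228 / sqrt(N_e) km.
import math

def N_e(R_AU):
    """Background electron density [cm^-3]; R_AU is heliocentric distance in AU."""
    return 7.2 * R_AU**-2 + 1.95e-3 * R_AU**-4 + 8.1e-7 * R_AU**-6

def inner_scale_km(R_AU):
    """Inner scale l_i = v_A / Omega_p, expressed through N_e [km]."""
    return 228.0 / math.sqrt(N_e(R_AU))

R = 10.0 / 215.0          # 10 R_sun expressed in AU (1 AU = 215 R_sun)
ne = N_e(R)               # ~3.8e3 cm^-3
li = inner_scale_km(R)    # ~3.7 km, comparable to the 2.6 km longest baseline
print(ne, li)
```

At $10~R_{\odot}$ this gives $N_e \approx 3.8\times10^{3}$ cm$^{-3}$ and $l_i \approx 3.7$ km, of the same order as the longest GRAPH baseline, which is why the general structure function of the next section is needed.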
The structure function $D_{\phi}$
---------------------------------
The structure function $D_{\phi}(s)$ is defined by [@Pro75; @Ish78; @Col89; @Arm90],
$$D_{\phi}(s)=-2\ln \Gamma(s)=-2\ln\left[V(s)/V(0)\right] \, ,
\label{eq:struct}$$
where the quantity $s$ represents the baseline length, $\Gamma(s)$ is the mutual coherence function, $V(s)$ denotes the visibility obtained with a baseline of length $s$ and $V(0)$ denotes the “zero-length” baseline visibility. The quantity $V(0)$ is the peak flux density when the Crab nebula is situated far away from the Sun, and is unresolved; we set it to be $\approx 2015$ Jy at 80 MHz [@Bra1970; @Mcl1985]. The images of the Crab nebula in Figure \[fig:graph\_images\] are obtained by combining the visibilities from all the baselines available in the GRAPH. We are interested in the turbulent density fluctuations at the inner scale, which is the scale at which the turbulent spectrum transitions from a power law to an exponential turnover. This is typically the smallest measurable scale; we therefore compute the structure function corresponding to the longest available baseline (s = 2.6 km), since that corresponds to the smallest scale.
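The arithmetic of Eq. (\[eq:struct\]) is straightforward; the visibility ratio used below is hypothetical and serves only to illustrate the mapping from a measured $V(s)/V(0)$ to $D_\phi$:

```python
# Sketch (hypothetical visibility ratio): structure function from Eq. (struct).
import math

V0 = 2015.0                      # Jy; zero-baseline flux of the unocculted Crab at 80 MHz
V = 0.5 * V0                     # Jy; ASSUMED visibility on the s = 2.6 km baseline
D_phi = -2.0 * math.log(V / V0)  # structure function [rad^2]
print(D_phi)                     # ~1.39 rad^2
```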
The amplitude of density turbulence spectrum ($C_N^2$) {#lab:cn2}
------------------------------------------------------
The turbulent density inhomogeneities are represented by a spatial power spectrum, comprising a power law together with an exponential turnover at the inner scale:
$$\begin{aligned}
\label{eq:ss}
P_{\delta n}(k, R) = C_{N}^{2}(R) (\rho^2 ~k_x^2+k_y^2)^{-\alpha/2} \times \exp\biggl[ -(\rho^2 ~k_x^2+k_y^2)\bigg({l_{i}(R) \over 2 \pi }\bigg)^{2} \biggr]\, ,\end{aligned}$$
where $k = \sqrt{\rho^2 k_x^2+k_y^2}$ is the wavenumber, and $k_x$ and $k_y$ are the wavenumbers along and perpendicular to the large-scale magnetic field respectively. The quantity $\rho$ is a measure of the anisotropy of the turbulent eddies. In our calculations, we use the axial ratio of the scatter broadened images at 80 MHz (shown in Table \[tab:table-1\]) for $\rho$. The quantity $C_{N}^{2}$ is the amplitude of density turbulence, and has dimensions of ${\rm cm}^{-\alpha - 3}$, where $\alpha$ is the power law index of the density turbulent spectrum. At large scales the density spectrum follows the Kolmogorov scaling law with $\alpha=11/3$. At small scales (close to the inner scale, when $s \approx l_i$), the spectrum flattens to $\alpha=3$ [@Col89]. Since we are interested in the density fluctuations near the inner scale, we use $\alpha = 3$.
Many authors use analytical expressions for the structure function that are applicable in the asymptotic limits $s \ll l_i$ or $s \gg l_i$ [@Col87; @Arm00; @Bas94; @Pra11]. However, these expressions are not valid for situations (such as the one we are dealing with in this paper) where the baseline is comparable to the inner scale; i.e., $s \approx l_{i}$. We therefore choose to use the General Structure Function (GSF) which is valid in the $s \ll l_i$ and $s \gg l_i$ regimes as well as when $s \approx l_i$ [@Ing2015a]. In the present case, the largest baseline length ($\approx 2.6$ km) is comparable to the inner scale length ($\approx 4.56$ km). The GSF is given by the following expression:
[$$\begin{aligned}
\label{eq:gsf}
\nonumber
{D_\phi(s)} = \frac{8 \pi^2 r_e^2 \lambda^2 \Delta L}{\rho~ 2^{\alpha-2}(\alpha-2)} {\Gamma \bigg( 1 - {{\alpha-2} \over 2} \bigg)}
{{C_N^2 (R) l_i^{\alpha-2}(R)} \over {(1 - f_p^2 (R) / f^2)}} \\
{\times \bigg\{ { _1F_1} {\bigg[ - {{\alpha-2} \over 2},~1,~ - \bigg( {s \over l_i(R)} \bigg)^2 \bigg]} -1 \bigg\}} \, \, \, \, {\rm rad}^{2},\end{aligned}$$]{} where ${ _1F_1}$ is the confluent hyper-geometric function, $r_e$ is the classical electron radius, $\lambda$ is the observing wavelength, $R$ is the heliocentric distance (in units of $R_{\odot}$), $\Delta L$ is the thickness of the scattering medium, $f_p$ and f are the plasma and observing frequencies respectively. Substituting the model densities and $\alpha=3$ in Equation \[eq:gsf\] enables us to calculate $C_N^2$. Following @Sas2016, we assume the thickness of the scattering screen to be $\Delta L = (\pi/2) R_0$, where, $R_0$ is the impact parameter related to the projected heliocentric distance of the Crab nebula in units of cm. When the Crab nebula is occulted by a streamer, however, this estimate of $\Delta L$ is not valid. It is well known that the streamer owes its appearance to the fact that the line of sight to the streamer intercepts excess coronal plasma that is contained around the current sheet “fold”. It therefore stands to reason that the $\Delta L$ along a line of sight that intercepts a streamer will be larger than that along a line of sight that does not include a streamer. In view of this, we use the formula $\Delta L = (\pi/2) R_0$ and compute the density fluctuation amplitude and turbulent heating rate only for the instances where the Crab nebula is not occulted by a streamer.
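The inversion of Eq. (\[eq:gsf\]) for $C_N^2$ can be scripted directly. The sketch below (ours) adopts $\alpha = 3$, the 12 June 2016 geometry ($R_0 = 10.18~R_{\odot}$, $\rho = 1.48$), the inner scale $l_i = 228/\sqrt{N_e}$ km, and a hypothetical $D_\phi = 1~{\rm rad}^2$ in place of a measured visibility ratio:

```python
# Sketch (our illustration): solve the GSF for C_N^2 with alpha = 3.
# D_phi below is a HYPOTHETICAL value, not a measurement from the paper.
import math
from scipy.special import hyp1f1     # confluent hypergeometric function 1F1

r_e = 2.818e-13                      # classical electron radius [cm]
Rsun = 6.957e10                      # solar radius [cm]
f = 80e6                             # observing frequency [Hz]
lam = 2.9979e10 / f                  # observing wavelength [cm]
alpha = 3.0                          # spectral index near the inner scale
rho = 1.48                           # axial ratio (Table 1, 12 June 2016)
s = 2.6e5                            # longest GRAPH baseline [cm]

R0 = 10.18                           # impact parameter [R_sun]
R_AU = R0 / 215.0
N_e = 7.2 * R_AU**-2 + 1.95e-3 * R_AU**-4 + 8.1e-7 * R_AU**-6   # [cm^-3]
l_i = 228.0 / math.sqrt(N_e) * 1e5   # inner scale [cm]
f_p = 8.98e3 * math.sqrt(N_e)        # plasma frequency [Hz]
dL = (math.pi / 2.0) * R0 * Rsun     # scattering screen thickness [cm]

D_phi = 1.0                          # [rad^2], HYPOTHETICAL
denom = (8 * math.pi**2 * r_e**2 * lam**2 * dL
         / (rho * 2**(alpha - 2) * (alpha - 2)))
denom *= math.gamma(1 - (alpha - 2) / 2) * l_i**(alpha - 2)
denom *= hyp1f1(-(alpha - 2) / 2, 1.0, -(s / l_i)**2) - 1.0
C_N2 = D_phi * (1 - f_p**2 / f**2) / denom   # [cm^-(alpha+3)] = [cm^-6]
print(C_N2)
```

Note that $s/l_i \approx 0.7$ here, i.e. neither asymptotic regime applies, which is precisely why the GSF (rather than the $s \ll l_i$ or $s \gg l_i$ limits) is required.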
In the instances where it is occulted by a streamer, we can estimate the extra line of sight path length implied by the presence of the streamer. In order to do this, we first compute the structure function (Eq \[eq:gsf\]) in the instances when the line of sight to the Crab nebula contains a streamer. We then take the ratio of this quantity to the structure function (at a similar heliocentric distance) when the line of sight does not intercept a streamer; this ratio turns out to be $\approx 2$. For instance, $D_{\phi}(s = 2.6 \, {\rm km}, \,\, {\rm June}\, 17 \,2016)/D_{\phi}(s = 2.6 \,{\rm km}, \,\, {\rm June}\, 12 \,2016) = 2.16$. On June 12 2016, the Crab nebula was situated at $10.18 R_{\odot}$ and the line of sight to it did not pass through a streamer. On June 17 2016, the Crab nebula was situated at a similar projected heliocentric distance ($10.2 R_{\odot}$), but the line of sight to it passed through a coronal streamer. From Eq (\[eq:gsf\]), it is evident that this ratio is equal to the ratio of the $\Delta L$s in the two instances. In other words, the presence of a streamer approximately doubles the path length along the line of sight over which scattering takes place.
Although we show 80 MHz observations in this paper, we also have simultaneous observations at 53.3 MHz. The structure function (equation \[eq:gsf\]) is proportional to the square of the observing wavelength (i.e., $D_{\phi}(s)~\propto~\lambda^2$). This predicts that the ratio of the structure functions at 80 and 53.3 MHz should be 0.44. Our observations yield a value of 0.43 for this ratio, and are thus consistent with the expected scaling.
Estimating the density modulation index ($\epsilon_{N_e}=\delta N_{k_i} / N_e$) {#lab:densmod}
-------------------------------------------------------------------------------
The density fluctuations $\delta N_{k_i}$ at the inner scale can be related to the spatial power spectrum (Equation \[eq:ss\]) using the following prescription [@Cha2009]
$$\label{eq:deltn}
{\delta}N_{k_i}^2(R) \sim 4 \pi k_i^3 P_{\delta N} (R, k_i) = 4 \pi C_{N}^{2}(R) k_i^{3 - \alpha} e^{-1} \,,$$
where $k_{i} \equiv 2 \pi/l_{i}$. We estimate ${\delta}N_{k_i}$ by substituting $C_N^2$ calculated in § 3.3 and using $\alpha = 3$ in Equation \[eq:deltn\]. We then use this ${\delta}N_{k_i}$ and the background electron density ($N_{e}$, § 3.1) to estimate the density modulation index ($\epsilon_{N_e}$) defined by
$$\label{eq:df}
\epsilon_{N_e}(R) \equiv {~\delta N_{k_{i}}(R) \over N_{e}(R)} \,$$
The density modulation index in the solar wind at different heliocentric distances is computed using Eq \[eq:df\]. The results are listed in column (6) of Table \[tab:table-1\]. The numbers in Table \[tab:table-1\] show that the density modulation index ($\epsilon_{N_e}$) in the solar wind ranges from 1.9 $\times 10^{-3}$ to 7.7 $\times 10^{-3}$ in the heliocentric range $\approx 10-20~ R_{\odot}$. We have carried out these calculations only for the instances where the Crab nebula is not occulted by a streamer.
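For $\alpha = 3$ the wavenumber dependence in Eq. (\[eq:deltn\]) drops out, so $\delta N_{k_i} = (4\pi C_N^2/e)^{1/2}$. A short sketch (the amplitude $C_N^2 = 20$ cm$^{-6}$ is an assumed illustrative value, not a tabulated one):

```python
# Sketch (illustrative C_N^2): density modulation index at the inner scale.
import math

C_N2 = 20.0                                    # [cm^-6], ASSUMED for illustration
dN = math.sqrt(4 * math.pi * C_N2 / math.e)    # delta N_{k_i} [cm^-3]; k_i cancels for alpha = 3

R_AU = 10.18 / 215.0                           # 12 June 2016 geometry
N_e = 7.2 * R_AU**-2 + 1.95e-3 * R_AU**-4 + 8.1e-7 * R_AU**-6   # Leblanc model [cm^-3]
eps = dN / N_e                                 # ~2.6e-3, the same order as Table 1
print(dN, eps)
```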
Solar wind heating rate {#sec:hr}
-----------------------
We next use our estimates of the turbulent density fluctuations (${\delta}N_{k_i}$) to calculate the rate at which energy is deposited in solar wind protons, following the treatment of @Ing2015b. The basic assumption used is that the density fluctuations at small scales are manifestations of low frequency, oblique ($k_{\perp} \gg k_{\parallel}$), $\rm Alfv\acute{e}n$ wave turbulence. The quantities $k_{\perp}$ and $k_{\parallel}$ refer to components of the wave vector perpendicular and parallel to the background large-scale magnetic field respectively. The turbulent $\rm Alfv\acute{e}n$ wave cascade transitions to such oblique $\rm Alfv\acute{e}n$ waves (often referred to as kinetic $\rm Alfv\acute{e}n$ waves) near the inner/dissipation scale. We envisage a situation where the turbulent $\rm Alfv\acute{e}n$ wave cascade resonantly damps on (and thereby heats) the protons at the inner scale. Since this implicitly assumes that the $\rm Alfv\acute{e}n$ waves do not couple to other modes at the inner scale, our estimate of the proton heating rate is an upper limit. As explained in § \[sec:dens\], we assume that the inner scale is the proton inertial length, which is expressible as $l_{i} = v_{\rm A}/\Omega_{p}$, where $v_{\rm A}$ is the $\rm Alfv\acute{e}n$ speed and $\Omega_{p}$ is the proton gyrofrequency. This way of writing the proton inertial length emphasizes its relation to the resonant damping of $\rm Alfv\acute{e}n$ waves on protons.
The specific energy per unit time ($\epsilon\, , \, {\rm erg ~cm^{-3}~s^{-1}}$) in the turbulent $\rm Alfv\acute{e}n$ wave cascade is transferred from large scales to smaller ones, until it dissipates at the inner/dissipation scale. The proton heating rate equals the turbulent energy cascade rate at the inner scale ($\epsilon_{k_i}$), which is given by [@Hol1999; @Cha2009; @Ing2015b], $$\label{eq:hr}
\epsilon_{k_i}(R)=c_0 \rho_p k_i(R) \delta v_{k_i}^3(R) ~ \rm erg ~cm^{-3}~s^{-1} \, ,$$
where $c_0$ is a constant usually taken to be 0.25 [@How2008; @Cha2009] and $\rho_p=m_pN_e(R)~\rm g~ cm^{-3}$, with $m_p$ representing the proton mass in grams. The quantity $k_i=2 \pi/l_i$ is the wavenumber corresponding to the inner scale (Eq \[eq:inner\]) and $\delta v_{k_i}$ represents the magnitude of turbulent velocity fluctuations at the inner scale. The density modulation index $\epsilon_{N_e}$ and the turbulent velocity fluctuations are related via the kinetic $\rm Alfv\acute{e}n$ wave dispersion relation [@How2008; @Cha2009; @Ing2015b]
$$\begin{aligned}
\label{eq:rmsv}
\delta v_{k_i}(R)=\Bigg({1+{\gamma_i k_i^2(R) \rho_i^2(R)} \over {k_i(R) l_i(R)}} \Bigg) \epsilon_{N_e} (R, k_i) v_A(R) \, .\end{aligned}$$
The adiabatic index $\gamma_i$ is taken to be 1 [@Cha2009] and the proton gyroradius ($\rho_i$) is given by
$$\label{eq:rho_i}
\rho_i(R)=102 \times \mu^{1/2} T_i^{1/2} B^{-1}(R)~\rm cm,$$
where $\mu$ is the ion mass expressed in terms of proton mass ($\approx 1$) and $T_i$ is the proton temperature in eV. We use $T_i=86.22$ eV which corresponds to a temperature of $1 \times 10^6$ K.
The $\rm Alfv\acute{e}n$ speed ($v_A$) in the solar wind is given by
$$\label{eq:va}
v_A(R)=2.18\times 10^{11} \mu^{-1/2} N_e^{-1/2}(R)B(R) ~\rm cm~s^{-1},$$
and the magnetic field strength ($B$) is taken to be the Parker spiral magnetic field in the ecliptic plane [@Wil1995]
$$\label{eq:parker}
B(R)= 3.4 \times 10^{-5} R^{-2} (1+R^2)^{1/2} ~ \rm Gauss,$$
where $R$ is the heliocentric distance in units of AU. Equations (\[eq:parker\]), (\[eq:va\]), (\[eq:rho\_i\]), (\[eq:rmsv\]) and the density modulation index computed in § \[lab:densmod\] are used in Eq (\[eq:hr\]) to compute the solar wind heating rate at different heliocentric distances. These values are tabulated in column (7) of Table \[tab:table-1\]. Figure \[fig:hr\] depicts the density modulation index and the solar wind heating rate graphically as a function of heliocentric distance.
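The chain of equations above can be collected into a short numerical sketch. The following Python fragment is an illustration only (not the authors' code): it implements Eqs (\[eq:parker\]), (\[eq:va\]), (\[eq:rho\_i\]), (\[eq:rmsv\]) and (\[eq:hr\]) in CGS units, while the electron density $N_e(R)$ must be supplied by the caller, since the density model used here is described elsewhere (§ \[sec:dens\]).

```python
import math

# Constants and parameters from the text (CGS units throughout).
M_P = 1.6726e-24    # proton mass [g]
C0 = 0.25           # cascade-rate constant [How2008; Cha2009]
GAMMA_I = 1.0       # adiabatic index gamma_i
MU = 1.0            # ion mass in units of the proton mass
T_I_EV = 86.22      # proton temperature [eV], i.e. ~1e6 K

def parker_B(R_au):
    """Parker spiral field in the ecliptic, Eq (parker) [Gauss]; R in AU."""
    return 3.4e-5 * R_au**-2 * math.sqrt(1.0 + R_au**2)

def alfven_speed(Ne, B):
    """Alfven speed, Eq (va) [cm/s]; Ne in cm^-3, B in Gauss."""
    return 2.18e11 * MU**-0.5 * Ne**-0.5 * B

def gyroradius(B):
    """Proton gyroradius, Eq (rho_i) [cm]."""
    return 102.0 * math.sqrt(MU) * math.sqrt(T_I_EV) / B

def heating_rate(R_au, Ne, eps_Ne):
    """Proton heating rate at the inner scale, Eqs (rmsv) and (hr).

    Ne is the electron density at R [cm^-3] (user-supplied model) and
    eps_Ne the density modulation index at the inner scale.
    """
    B = parker_B(R_au)
    vA = alfven_speed(Ne, B)
    rho_i = gyroradius(B)
    l_i = vA / (9.58e3 * B / MU)   # l_i = v_A / Omega_p, Omega_p ~ 9.58e3 B rad/s
    k_i = 2.0 * math.pi / l_i
    # Turbulent velocity at the inner scale from the KAW dispersion relation:
    dv = ((1.0 + GAMMA_I * k_i**2 * rho_i**2) / (k_i * l_i)) * eps_Ne * vA
    rho_p = M_P * Ne
    return C0 * rho_p * k_i * dv**3   # [erg cm^-3 s^-1]
```

As a sanity check, `parker_B(1.0)` gives $\approx 4.8\times10^{-5}$ G ($\approx 4.8$ nT) at 1 AU, consistent with typical in-situ values.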
![The variation of the density modulation index (red circles) and the solar wind proton heating rate (blue squares) with projected heliocentric distance. We note that the proton heating rate is correlated with the density modulation index.[]{data-label="fig:hr"}](heating_rate.eps){width="16cm"}
[|c|c|c|c|c|c|c|]{} S.No & Date & R & Peak flux density & Axial ratio & $\epsilon_{N_e}$ & Heating rate\
& & $\rm (R_{\odot})$ & (Jy) & ($\rho$) & & ($\rm erg~ cm^{-3}~ s^{-1}$)\
(1) & (2) & (3) & (4) & (5) & (6) & (7)\
\
1 & 12 June 2016 & 10.18 & 1349 & 1.48 & 2.9E-3 & 3.9E-12\
2 & 18 June 2016 & 13.46 & 1473 & 1.76 & 5.3E-3 & 1.0E-11\
3 & 19 June 2016 & 16.83 & 1546 & 1.69 & 7.7E-3 & 1.9E-11\
4 & 20 June 2016 & 20.27 & 2003 & 1.98 & 1.9E-3 & 2.2E-13\
5 & 09 June 2017 & 21.13 & 2015 & 1.48 & - & -\
6 & 10 June 2017 & 17.68 & 1732 & 1.57 & 6.2E-3 & 9.2E-12\
7 & 12 June 2017 & 10.97 & 1386 & 1.50 & 3.4E-3 & 4.7E-12\
8 & 22 June 2017 & 26.34 & 2015 & 1.40 & - & -\
\
9 & 17 June 2016 & 10.20 & 845 & 2.44 & - & -\
10 & 17 June 2017 & 9.41 & 901 & 2.51 & - & -\
11 & 18 June 2017 & 12.61 & 800 & 1.65 & - & -\
Summary and conclusions
=======================
Summary
-------
We have imaged (figure \[fig:graph\_images\]) the Crab nebula at 80 MHz using the GRAPH in June 2016 and 2017, when it passed close to the Sun, and was obscured by the turbulent solar wind. Since the Crab nebula is a point source at 80 MHz when it is far from the Sun, these images are evidence of anisotropic scatter-broadening of radiation emanating from it as it passes through the turbulent solar wind. We calculate the structure function with the visibilities from the longest baselines (2.6 km) used in making these images. The structure function is used to infer the amplitude of the density turbulence spectrum ($C_{N}^{2}$), which is then used to compute the magnitude of the turbulent density fluctuations at the inner scale (Eq \[eq:deltn\]). This is then used to compute the density modulation index (Eq \[eq:df\]). Assuming that the turbulent $\rm Alfv\acute{e}n$ wave cascade in the solar wind dissipates on protons at the inner scale, we calculate the heating rate of protons in the solar wind (Eq \[eq:hr\]). The density modulation index and solar wind proton heating rate are plotted in Figure \[fig:hr\] as a function of heliocentric distance.
Conclusions
-----------
The main conclusions of this paper pertain to the anisotropy of the scatter-broadened image of the Crab nebula, the density modulation index of the turbulent fluctuations in the solar wind and the solar wind proton heating rate from $9-20~R_{\odot}$. They are as follows:
- The 80 MHz scatter broadened images of the Crab nebula at heliocentric distances ranging from $9$ to $20~R_{\odot}$ in the solar wind are anisotropic, with axial ratios typically $\lesssim 2$ (table \[tab:table-1\]). The major axis of the Crab nebula is typically oriented perpendicular to the magnetic field direction, as in @Ana94 [@Arm90] (although their observations were at much smaller distances from the Sun).
- On 17 June 2016 and 17 June 2017, a coronal streamer was present along the line of sight to the Crab nebula. The line of sight to the Crab encountered more coronal plasma on these days, as compared to the days when a streamer was not present. The axial ratio of the scatter-broadened images on these days was somewhat larger ($\approx 2$, see table \[tab:table-1\]) and the peak flux density considerably lower (figure \[fig:peak\]), reflecting this fact. In the presence of a streamer, the path length over which scattering takes place was found to be approximately twice that when the streamer was absent.
- The density modulation index ($\epsilon_{N_{e}} \equiv \delta N_{e}/N_{e}$) at the inner scale of the turbulent spectrum in the solar wind from $9-20~R_{\odot}$ ranges from 1.9 $\times 10^{-3}$ to 7.7 $\times 10^{-3}$ (see table \[tab:table-1\]). Earlier estimates of $\epsilon_{N_e}$ include @Sas2016 who reported $0.001 \lesssim \epsilon_{N_e} \lesssim 0.1$ from 10-45 $R_{\odot}$, $0.001 \lesssim \epsilon_{N_e} \lesssim 0.02$ reported by @Bis14b in the distance range 56-185 $R_{\odot}$ and $0.03 \lesssim \epsilon_{N_e} \lesssim 0.08$ reported by @Spa04 at 1 AU (215 $R_{\odot}$). The red circles in Figure \[fig:hr\] depict the modulation index as a function of heliocentric distance, and show that it is relatively higher in the heliocentric distance range $12-18~R_{\odot}$. As explained in @Sas2016, this might be because the line of sight to the Crab nebula at these distances passes through the fast solar wind, which has relatively higher proton temperatures [@Lop1986]. Furthermore, the density modulation index is correlated with the proton temperature [@Cel87]. Taken together, this implies that one could expect higher values for the density modulation index in the fast solar wind.
- We interpret the turbulent density fluctuations as manifestations of kinetic $\rm Alfv\acute{e}n$ wave turbulence at small scales. Assuming that the turbulent $\rm Alfv\acute{e}n$ wave cascade damps resonantly on the protons at the inner scale, we use our estimates of the density modulation index to calculate the proton heating rate in the solar wind. We find that the estimated proton heating rate in the solar wind from $9-20~R_{\odot}$ ranges from $2.2 \times 10^{-13}$ to $1.0 \times 10^{-11} ~\rm erg~cm^{-3}~s^{-1}$ (blue squares in figure \[fig:hr\]).
Acknowledgments
===============
KSR acknowledges the financial support from the Science $\&$ Engineering Research Board (SERB), Department of Science $\&$ Technology, India (PDF/2015/000393). PS acknowledges support from the ISRO RESPOND program. AV is supported by NRL grant N00173-16-1-G029. We thank the staff of the Gauribidanur observatory for their help with the observations and maintenance of the antenna and receiver systems there. KSR acknowledges C. Kathiravan for the valuable discussions related to the GRAPH observations. SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut fuer Aeronomie (Germany), Laboratoire d’Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. The authors would like to thank the anonymous referee for the valuable and constructive suggestions.
---
abstract: 'Magnetically arrested accretion discs (MADs), where the magnetic pressure in the inner disc is dynamically important, provide an alternative mechanism for regulating accretion to what is commonly assumed in black hole systems. We show that a global magnetic field inversion in the MAD state can destroy the jet, significantly increase the accretion rate, and move the effective inner disc edge in to the marginally stable orbit. Reconnection of the MAD field in the inner radii launches a new type of transient outflow containing hot plasma generated by magnetic dissipation. This transient outflow can be as powerful as the steady magnetically-dominated Blandford-Znajek jet in the MAD state. The field inversion qualitatively describes many of the observational features associated with the high luminosity hard to soft state transition in black hole X-ray binaries: the jet line, the transient ballistic jet, and the drop in rms variability. These results demonstrate that the magnetic field configuration can influence the accretion state directly, and hence the magnetic field structure is an important second parameter in explaining observations of accreting black holes across the mass and luminosity scales.'
author:
- |
Jason Dexter$^{1}$[^1], Jonathan C. McKinney$^{2}$, Sera Markoff$^{3}$, and Alexander Tchekhovskoy$^{4}$\
$^{1}$Departments of Physics and Astronomy, University of California, Berkeley, CA 94720-3411, USA\
$^{2}$Physics Department and Joint Space Science Institute, University of Maryland, College Park, MD 20742, USA\
$^{3}$Astronomical Institute “Anton Pannekoek”, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam, The Netherlands\
$^{4}$Lawrence Berkeley National Laboratory, 1 Cyclotron Rd, Berkeley, CA 94720, USA; Einstein Fellow
title: 'Transient jet formation and state transitions from large-scale magnetic reconnection in black hole accretion discs'
---
\[firstpage\]
accretion, accretion discs — black hole physics — X-rays: binaries — galaxies: jets
Introduction
============
A black hole accreting magnetic field with a consistent sign of magnetic flux reaches a limit where the magnetic pressure at the black hole resists continued accretion [@bisnovatyikoganruzmaikin1974; @bisnovatyikoganruzmaikin1976; @igumenshchevetal2003; @narayanetal2003]. The accretion process in this limit is mediated by instabilities in the black hole magnetosphere, and the magnetorotational instability [@mri], typically thought to cause angular momentum transport in black hole accretion discs, is marginally suppressed [@mckinneyetal2012 MTB12].
@narayanetal2003 predicted that such a “magnetically arrested” disc (MAD) could be an extremely efficient engine. This was recently confirmed in general relativistic MHD simulations [@tchekhovskoyetal2011 MTB12], where the @blandfordznajek1977 [BZ] jet efficiency (energy expelled vs. accreted by the black hole) can be $\gtrsim 150-250\%$ at high black hole spin. Reaching this limit requires only a modest coherent vertical magnetic field, suggesting that it could be generic to galactic nuclei and binary systems (MTB12).
@igumenshchev2009 argued that a polarity inversion in the accreted magnetic field in the MAD state could trigger the observed state transitions in black hole X-ray binaries (BHBs). This work was based on 2D MHD simulations, which cannot accurately capture the MAD state due to the absence of the non-axisymmetric modes which dominate the accretion in this state. In order to more accurately study the physical outcome of such an inversion, MTB12 set up numerical experiments in several of their 3D, high resolution, general relativistic simulations of MAD accretion flows. Their initial conditions contained large scale field inversions, where adjacent magnetic field loops in the initial density distribution have opposite magnetic polarity. The magnetic flux in each loop was chosen to be much more than required to establish the MAD state. As accretion proceeded, a MAD state was established, reached quasi-steady state, and then a large amount of coherent field of the opposite polarity was accreted. MTB12 studied the quasi-steady structure of the MAD state in these simulations, but did not analyze the outcome of these polarity inversions.
In this Letter, we study the evolution of the accretion flow and jet during a large-scale field inversion experiment carried out by MTB12. We demonstrate that the accreted magnetic field configuration can indeed change important properties of the accretion flow including the mass accretion rate, the disc geometry, and the effective inner disc edge (§\[sec:mads\]). We further show that the reconnection of the MAD magnetic field destroys the BZ jet and launches a new type of transient outflow. We discuss possible observational implications of these results, particularly for BHBs and their state transitions, in §\[sec:observ-impl\].
![\[shellavg\]Time evolution of several quantities during the A0.94BfN40 simulation from MTB12. The inner disc edge is in units of $r_g$, the scale height is dimensionless, and the radial velocity is in units of $c$. All other quantities use (arbitrary) code units. The pressure, radial velocity, and vertical field strength are measured at $r=5\hspace{2pt} r_g$. The accretion rate is measured at $r=3\hspace{2pt} r_g$. The scale height is measured at $r = 3 r_g$ (solid line) and $5 r_g$ (dashed line). The magnetic and internal energies are measured at $r=25\hspace{2pt} r_g$. During the inversion, the vertical field polarity flips in sign. The magnetic pressure drops sharply as the two field polarities mix and reconnect, resulting in an increase in thermal energy. The accretion rate increases due to an increase in the radial velocity, the inner disc is no longer compressed by pressure at the disc-jet interface, the inner disc edge moves towards the black hole, and the rms variability drops. These effects are reversed as the MAD state is re-established.](shellavg_inversion.eps)
A field inversion in a magnetically arrested black hole accretion disc {#sec:mads}
======================================================================
The fiducial simulation presented in MTB12 (named A0.94BfN40 in their Table 3) is the highest resolution, longest duration 3D simulation of a MAD configuration to date, providing a wealth of information about the physics of magnetized accretion. We extend their analysis by studying the evolution of this simulation to late times, following a polarity inversion in the accreted magnetic field. We use area-integrated and shell-averaged radial profiles in Boyer-Lindquist coordinates to study the evolution of various quantities. The density-weighted shell-average of a quantity Q is defined as:
$$\langle Q \rangle = \frac{\int dA_{\theta \phi} \rho Q}{\int dA_{\theta \phi} \rho},$$
where $\rho$ is the density, $dA_{\theta \phi} = d\theta d\phi \sqrt{-g}$, and $g$ is the metric determinant.
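As a concrete illustration, the shell average above can be discretized on a $(\theta, \phi)$ grid at a single radius as follows. This is a sketch, not the simulation code; the grid spacings and the $\sqrt{-g}$ array are assumed to come from the simulation output.

```python
import numpy as np

def shell_average(Q, rho, sqrtg, dtheta, dphi):
    """Density-weighted shell average <Q> at a fixed radial shell.

    Q, rho, sqrtg are 2D arrays over (theta, phi); sqrtg holds sqrt(-g),
    the metric determinant factor, and dtheta/dphi are the cell widths.
    """
    dA = sqrtg * dtheta * dphi   # area element dA_{theta phi}
    w = rho * dA                 # density weight rho * dA
    return np.sum(w * Q) / np.sum(w)
```

For uniform density the average reduces to a plain area average; for non-uniform density, denser fluid contributes more weight, e.g. two equal-area cells with $\rho = (1, 3)$ and $Q = (1, 3)$ give $\langle Q \rangle = (1 + 9)/(1 + 3) = 2.5$.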
Long after a quasi-steady MAD state is established in the inner disc ($r < 25 r_g$, where $r_g = G M / c^2$ is the gravitational radius and $M$ is the black hole mass), material is accreted with magnetic field polarity opposite to that on the black hole. Figure \[shellavg\] shows density-weighted shell-averaged radial profiles of various quantities before, during, and after the ensuing field inversion at $t \sim 20000 \hspace{2pt} r_g / c$. The quantities considered include the vertical magnetic field strength, taken as the azimuthally-averaged $b^\theta (\theta=\pi/2)$, where $b^\mu$ is the magnetic field four-vector measured in Heaviside-Lorentz units; the total pressure: $p_g + b^2 / 2$, where $p_g$ is the gas pressure; and the radial velocity, $v^r \equiv u^r / u^t$. As in MTB12, the disc scale height, $\theta_d$, is defined as:
$$\begin{aligned}
\theta_d = \langle \left ( \theta - \theta_0 \right)^2 \rangle^{1/2},\hspace{12pt}
\theta_0 = \pi/2 + \langle (\theta - \pi/2)\rangle,\end{aligned}$$
where $\theta_0$ is the midplane location. The measured rms variability is calculated by subtracting a smoothed version of the accretion rate curve from the actual one, in order to remove secular trends on longer timescales; it is only calculated during the field inversion. This procedure enables accurate estimates of the fluctuations during the inversion, but leads to small rms values throughout (e.g., rms $\simeq 1\%$ compared to $\simeq 30\%$ in the raw accretion rate curve in the MAD state).
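A minimal sketch of this rms estimate is given below; the running-mean smoothing and the window length are illustrative choices, not necessarily those applied to the simulation data.

```python
import numpy as np

def rms_variability(mdot, window=101):
    """Relative rms of fluctuations about a smoothed accretion rate curve.

    Subtracting the smoothed curve removes secular trends, as in the text.
    `window` is the (odd) running-mean length in samples; edges without
    full kernel overlap are discarded.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(mdot, kernel, mode="same")
    half = window // 2
    resid = (mdot - smooth)[half:-half]        # interior points only
    return np.sqrt(np.mean(resid**2)) / np.mean(mdot[half:-half])
```

A constant accretion rate gives rms $\approx 0$, while fluctuations on timescales much shorter than the window survive the subtraction and set the rms.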
The mass and energy fluxes are,
$$\begin{aligned}
\dot{M} = \int dA_{\theta \phi} \rho u^r, \hspace{8pt} \dot{E} = \int dA_{\theta \phi} T^r_t, \end{aligned}$$
where $T^\mu_\nu$ is the stress energy tensor. We will further use the fluxes of electromagnetic, $T^{r, \rm EM}_{t} = b^2 u^r u_t - b^r b_t$, kinetic, $T^{r, \rm KE}_t = \rho u^r (1 + u_t)$, and thermal, $T^{r, \rm EN}_t = (u_g + p) u^r u_t$, energy, where $u_g$ is the internal energy density and $p$ is the gas pressure.
The inner disc radius is defined as the minimum of the stress, infall, and turbulent velocity measures from @krolikhawley2002, but is not allowed to move inside the ISCO. The jet efficiency is defined as,
$$\label{eq:2}
\eta_j \equiv \frac{\langle \dot{M} \rangle - \langle \dot{E}_j \rangle}{\langle \dot{M} \rangle},$$
where the subscript $j$ denotes that the jet power includes only the energy fluxes in the jet and wind (with the jet being the dominant contribution). In the MAD state, the jet can be robustly defined as magnetically-dominated regions with $b^2 / \rho c^2 > 1$ and the wind as regions with $b^2 / \rho c^2 < 1$ and $2 p_g / b^2 < 1$ (MTB12). However, during the field inversion these definitions do not capture the region of interest. We instead define the jet and wind as regions near the pole with $\theta < 10^\circ$ and $30^\circ$, and measure jet powers at $r=50 r_g$ where both choices for the wind and jet region give consistent results.
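The efficiency definition and the MAD-state jet/wind criteria can be sketched as follows. This is illustrative Python in $c = 1$ units, not the simulation analysis code; the array names are assumptions.

```python
import numpy as np

def jet_efficiency(mdot_avg, edot_jet_avg):
    """Jet efficiency, eta_j = (<Mdot> - <Edot_j>) / <Mdot>  (c = 1).

    As pure arithmetic on the definition: e.g. eta_j = 2.5 (250%)
    when <Edot_j> = -1.5 <Mdot>.
    """
    return (mdot_avg - edot_jet_avg) / mdot_avg

def jet_wind_masks(b2, rho, pg):
    """MAD-state jet/wind definitions from the text (c = 1).

    jet : magnetically dominated cells, b^2 / rho > 1
    wind: b^2 / rho < 1 and 2 p_g / b^2 < 1
    """
    sigma = b2 / rho
    jet = sigma > 1.0
    wind = (sigma < 1.0) & (2.0 * pg / b2 < 1.0)
    return jet, wind
```

During the inversion these magnetization-based masks fail, which is why the text falls back on geometric cones ($\theta < 10^\circ$ and $30^\circ$) about the pole.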
Quasi-steady MAD state
----------------------
Prior to the field inversion event ($t \lesssim 19000 r_g / c$), the accretion flow is in the quasi-steady MAD state analysed in detail by MTB12. A strong, coherent vertical field is present in the inner disc. The jet is powerful, and clearly extracting black hole spin energy (efficiency $\simeq 250\%$). The scale height at $r = 3 r_g$ is significantly smaller ($h/r \simeq 0.25$) than that at $r=5 r_g$ ($h/r \simeq 0.6$) due to compression by magnetic pressure from the surrounding jet magnetosphere. The MRI is marginally suppressed in this state, and accretion proceeds through instabilities at the jet-disc interface, leading in this simulation to fluctuations in the mass accretion rate and quasi-periodic oscillations in other dynamical quantities (e.g., the jet power). For the thinner discs ($\theta_d \simeq 0.3$) studied in @tchekhovskoyetal2011, the rms noise is larger, and no QPOs are present.
Field inversion
---------------
The evolution during the magnetic polarity inversion ($t \gtrsim 19000 r_g / c$) is studied here for the first time. As the opposite polarity loop is accreted, the two field polarities mix and reconnect throughout the disc. This can be seen in the top two panels of Figure \[shellavg\]: the vertical field in the inner disc changes sign as the inversion proceeds, while the internal energy increases as the magnetic pressure decreases.
Subsequently, the inner disc expands vertically due to the removal of magnetic pressure from the jet (e.g., the scale height at all radii expands to that imposed by the initial conditions, $\theta_d \simeq 0.6$), and therefore the radial velocity increases. The accretion rate then increases in response to the larger radial velocity. The effective inner disc edge moves closer to the black hole as the disc more closely resembles a standard MRI state. In the (short lived for these initial conditions, $\Delta t \approx 2000 r_g / c$) MRI state, the QPOs disappear and the rms variability in all dynamical quantities drops (bottom panel of Figure \[shellavg\]).
These results demonstrate that the magnetic field geometry accreted can greatly affect the evolution of the disc, and even control the accretion rate, which is usually considered to be an independent parameter. The reason for the accretion rate change is that the radial velocity allowed by magnetic Rayleigh-Taylor and interchange instabilities [@stonegardiner2007] in the MAD state is different from that set by the MRI alone. Once the field inversion destroys the built up magnetically-dominated jet, the MRI determines the radial velocity and in general gives a different quasi-steady accretion rate. The MAD accretion rate should always be smaller than that in the MRI state, since the strong magnetosphere at the black hole in the MAD state is actively impeding accretion.
![\[jetenergy\]Magnetic, thermal, and kinetic energy flux in the polar region ($\theta < 30^\circ$) at $r = 50 r_g$ (top) and thermal energy flux measured at several radii vs. time (bottom) during a field inversion in a magnetically arrested accretion disc. During the field inversion, the magnetic energy is converted into thermal and kinetic energy flux. This transient jet propagates outwards (velocity $\simeq 0.1 c$) with comparable power and velocity at small radius to the steady magnetically-dominated BZ jet. The negative energy flux at small radius in the bottom panel indicates inflow rather than outflow.](jet_energy.eps "fig:")\
![\[jetenergy\]Magnetic, thermal, and kinetic energy flux in the polar region ($\theta < 30^\circ$) at $r = 50 r_g$ (top) and thermal energy flux measured at several radii vs. time (bottom) during a field inversion in a magnetically arrested accretion disc. During the field inversion, the magnetic energy is converted into thermal and kinetic energy flux. This transient jet propagates outwards (velocity $\simeq 0.1 c$) with comparable power and velocity at small radius to the steady magnetically-dominated BZ jet. The negative energy flux at small radius in the bottom panel indicates inflow rather than outflow.](transient_jet_propagation.eps "fig:")
Transient jet formation
-----------------------
The production of magnetically-dominated jets in GRMHD simulations requires coherent vertical field threading the black hole [@beckwithetal2008jet; @mckinneyblandford2009]. As field of the opposite polarity is accreted, the vertical field on the black hole decreases due to magnetic reconnection. This in turn destroys the magnetically-dominated MAD state jet. Figure \[jetenergy\] shows the magnetic, kinetic, and thermal energy fluxes in the jet as functions of simulation time. The decrease of the Poynting flux corresponds to the destruction of the BZ jet.
Following the inversion, the kinetic and thermal fluxes suddenly increase. These increases correspond to a transient outflow, which propagates outwards as shown in the bottom panel of Figure \[jetenergy\]. The implied speed from the delay between the peaks in flux at different radii is $\simeq 0.1 c$, about twice the velocity of the MAD jet at $r = 50 r_g$. These measured velocities are sub-relativistic because the jet is still accelerating at these radii. The terminal Lorentz factor gives an upper limit to the velocity at large radius, outside the simulation domain [@mckinney2006]: $\Gamma_{\infty} = -\dot{E}_j/\dot{M}_j$. The upper limit to the Lorentz factor of the transient outflow, $\Gamma_{\infty} \simeq 1.3-2.0$, is smaller than that of the BZ jet: $\Gamma_{\infty} \simeq 3-30$. Further study will be required to determine whether these transient outflows can become ultrarelativistic at large radii.
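The terminal Lorentz factor bound used above is a one-line computation; the numbers in the check below are the quoted limits from the text, not new results.

```python
def terminal_lorentz_factor(edot_j, mdot_j):
    """Upper limit on the asymptotic Lorentz factor of an outflow,
    Gamma_inf = -Edot_j / Mdot_j [@mckinney2006], with fluxes in
    units where the rest-mass energy flux in the jet is mdot_j * c^2 = mdot_j.
    """
    return -edot_j / mdot_j
```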
The most natural energy source for this outflow is magnetic energy converted into thermal energy during the field inversion. Most of this energy in the simulation is contained within $r \lesssim 10 r_g$, but the vast majority is lost to the black hole. Between $r = 10-50 r_g$, the thermal energy increases by roughly the same amount as the magnetic energy drops during the conversion, implying efficient heating of the gas from reconnection. Therefore, the outflow is powered by reconnection in the disc rather than at the black hole. The energy in the outflow also increases as a function of radius, indicating that a wide range of radii contribute to its power.
The transient outflow velocity is $\simeq 0.05-0.1 c$ over the radial range ($r \lesssim 200 r_g$) where it can be followed. It is not clear what sets the velocity. In the simulation we study, the outflow is accelerated by a BZ jet which re-forms following the inversion. However, the material is already unbound and flowing outwards before the BZ jet re-forms. The outflow duration at $r = 50 r_g$ is shorter than expected from reconnection liberating thermal energy on the inflow time, and the velocity is smaller than the local escape speed. It is possible that faster material from smaller radii sweeps up that at larger radius, but the velocity is roughly constant with radius.
Unlike the BZ jet ($P \sim a^2$ at low spin), the transient jet power does not appear to depend on black hole spin. The $a=0$ MTB12 simulation with the same initial condition has no BZ jet but forms a transient outflow with comparable speed and power. Therefore, large-scale reconnection may produce powerful jets even at low black hole spin, which has not previously been possible in global simulations. However, the lower spin MTB12 simulations were not run long enough for the second polarity inversion to occur, which we have studied here for their fiducial simulation (which required $\sim 10^7$ cpu-hours). This second inversion occurs after a quasi-steady MAD state is achieved out to $r > 25 r_g$, and is therefore more likely to give robust results. We plan to study the properties of inversions and transient outflows as a function of black hole spin and disc thickness.
![\[hardnessluminosity\]Proxies for luminosity vs. hardness (top) and rms variability vs. hardness (bottom) sampled during (solid) and before/after (open) a field inversion in a simulation of a magnetically arrested accretion flow. The hardness is taken to be the difference between jet (equation \[eq:2\]) and disc efficiencies, while the total luminosity is the sum of jet and disc efficiencies multiplied by the accretion rate. The rms variability values are low because they are measured by subtracting the accretion rate curve from a smoothed version, in order to remove secular variations. The field inversion causes a transition between a more variable “hard” state and a quieter “soft” state.](hardness_intensity.eps "fig:")\
![\[hardnessluminosity\]Proxies for luminosity vs. hardness (top) and rms variability vs. hardness (bottom) sampled during (solid) and before/after (open) a field inversion in a simulation of a magnetically arrested accretion flow. The hardness is taken to be the difference between jet (equation \[eq:2\]) and disc efficiencies, while the total luminosity is the sum of jet and disc efficiencies multiplied by the accretion rate. The rms variability values are low because they are measured by subtracting the accretion rate curve from a smoothed version, in order to remove secular variations. The field inversion causes a transition between a more variable “hard” state and a quieter “soft” state.](hardness_rms.eps "fig:")
Observational Implications {#sec:observ-impl}
==========================
We have demonstrated that global magnetic field polarity inversions in magnetically arrested accretion discs (MADs) can directly influence the accretion state. In the simulation studied here (the fiducial simulation first presented by MTB12), magnetic reconnection following a global field inversion destroys the steady jet. In addition, the inflow is no longer inhibited by the strong magnetic field, so that the accretion rate rises by a factor of several. The accreted magnetic field geometry, then, can control the accretion rate onto the black hole. Reconnection from the accretion of opposite polarity field converts magnetic energy in the disc into kinetic and thermal energy fluxes (Figure \[jetenergy\]). This energy flux propagates outwards, at $\simeq 0.1c$ out to $100 r_g$ in the simulation studied here, with a mildly relativistic terminal Lorentz factor of $1.3$. This is a new type of relativistic outflow from a black hole accretion disc, whose power is comparable to that of the magnetically-dominated Blandford-Znajek jet in this simulation. This method for dissipating jet Poynting flux should operate in MRI discs as well as the MAD disc studied here.
Our results may have several implications for observed accreting black holes. At least two types of jets are observed in BHBs: a steady, compact jet [@mirabeletal1992; @fender2001] is seen in the hard spectral state [e.g., @remillardmcclintock2006], while a transient, ballistic jet is seen during transitions from the hard to soft state [@mirabelrodriguez1994]. Neither type of jet is observed in the soft state [@fenderetal1999]. The transient jet seen here is also produced during a change in state of the accretion disc, from the MAD to the MRI state. It is therefore tempting to associate the hard and soft states with MAD and MRI accretion flows, as first suggested by @igumenshchev2009. The hysteresis seen in black hole outbursts would then correspond to whether or not the accretion flow is magnetically arrested.
Several of our results support this scenario. The MAD state is associated with a steady jet, while MRI jets are either absent or weak, depending on the spin and disc thickness [@pennaetal2010]. The MAD jet power is proportional to the accretion rate [@tchekhovskoyetal2011], naturally explaining the observed radio/X-ray correlations in BHBs whether the hard X-rays are produced in the disc [e.g., @magdziarzzdziarski1995; @esinetal1997] or the jet [e.g., @markoffetal2001; @markoffetal2005]. The recently discovered radio loud and quiet tracks in the hard state [e.g., @coriatetal2011] could correspond to weak (strong) jets from an MRI (MAD) accretion state. The large MAD jet efficiencies could explain the high X-ray luminosities reached in the hard state, which are difficult to explain using radiatively inefficient accretion.
The transition from the MAD to the MRI state also qualitatively resembles the observed hard to soft state transitions in BHBs. As the steady MAD jet is destroyed during the field inversion, the rms variability in many quantities (e.g., the accretion rate) drops sharply, and a new type of transient outflow is launched, in qualitative agreement with observations. To demonstrate this idea, we show proxies for the X-ray color (hardness ratio), luminosity, and rms variability during the state transition in Figure \[hardnessluminosity\]. Using these proxies, the inversion event shows a transition from a more variable hard state to a quieter soft state at similar luminosity, qualitatively similar to observed hardness-intensity and hardness-rms diagrams from black hole outbursts. This transition is driven by large-scale magnetic reconnection, in this simulation following a field polarity inversion. We note that reconnection could alternatively be a side effect of a different mechanism driving BHB outbursts, e.g., cooling instabilities [@esinetal1997; @dassharma2013], rather than the root cause. In this case it could explain the spectral state transition and the associated transient jet ejections.
If the magnetic field configuration does play an important role in BHB state transitions, then different outburst cycles in persistent and transient sources may be related to the efficiency of magnetic field transport in accretion discs. In transient sources where the soft state is reached, the disc may collapse and become geometrically thin, preventing further efficient transport of magnetic flux [@lubowetal1994]. In persistent sources which never reach the soft state, the MAD state could be re-established on a relatively short timescale, triggering a return to the hard state. This could explain the repeated jet ejection cycles in a source like GRS 1915+105 [e.g., @neilsenlee2009] or 3C 111 [@chatterjeeetal2011]. These could also be the result of partial reconnection events, which power a transient outflow but do not completely destroy the MAD state.
Our estimated transient jet power depends on the magnetic energy density and the timescale over which it is dissipated. It does not seem to depend on black hole spin, and therefore naively we predict that the observed transient jet power should be roughly independent of black hole spin, in agreement with some analyses [@russelletal2013] and potentially at odds with others [@narayanmcclintock2012; @steineretal2013]. This issue is complicated by the fact that the observed radio emission comes from much larger scales than the jets studied in the simulation. The propagation of these new transient outflows to larger scales, including their dynamical and radiative properties, should be studied in future work.
The radiative properties of the new transient jets described here are particularly interesting. Simulated BZ jets contain an “empty funnel” [@devilliersetal2005; @mckinney2006]. In order to calculate observables from BZ jets, it is necessary to invoke some source of particles, either from a physical process [e.g., pair-production, @moscibrodzkaetal2011] or as an ad-hoc prescription [@broderickmckinney2010; @dexteretal2012; @moscibrodzkafalcke2013]. In contrast, the transient jets are dominated by particle energy generated by magnetic reconnection.
MAD accretion flows, jets, and transient outflows may play a role in a variety of BH systems. @sikorabegelman2013 suggested that the radio loud/quiet dichotomy in active galactic nuclei could be due to the presence or absence of sufficient coherent magnetic field to develop a MAD accretion state. This is similar to our association of MAD and MRI accretion with the hard and soft states of BHBs. Alternatively, the dichotomy could be due to the type of jet present in the system: the steady MAD jet, or a transient outflow powered by magnetic reconnection. @tchekhovskoyetal2013 showed that many observed properties of the putative tidal disruption flare Swift J1644+57 can be explained by a MAD state jet. Some of the large amplitude variability occurring in that event at early times could alternatively be explained by transient outflows triggered by magnetic reconnection. Reconnection could also produce similar variability seen in long gamma ray bursts [@progazhang2006; @mckinneyuzdensky2012]. Our results show that field polarity inversions are one possible mechanism for triggering large-scale magnetic reconnection.
acknowledgements {#acknowledgements .unnumbered}
================
We thank R. Fender, E. Quataert, and P. Sharma for useful discussions related to this work. This work used NSF/XSEDE resources provided by NICS (Nautilus) under the award TG-PHY120005. AT was supported by NASA through the Einstein Fellowship Program, grant PF3-140115.
\[lastpage\]
[^1]: E-mail: [email protected]
---
abstract: 'The aim of this note is to describe basic properties of the representations of $\operatorname{GL}_2({\mathbb F}_q[\theta])$ associated to certain vectorial modular forms with values in Tate algebras and Banach algebras introduced by the author. We discuss how certain $L$-values occur as limit values of these functions. We also present families of examples which can be the object of further studies.'
author:
- 'F. Pellarin'
date: ''
title: A note on certain representations in characteristic $p$ and associated functions
---
Introduction
============
In [@Pe], the author has introduced some special functions related to the arithmetic of function fields of positive characteristic (and more precisely, to the arithmetic of the ring ${\mathbb F}_q[\theta]$ with $\theta$ an indeterminate), namely, $L$-values and vector valued modular forms (the vectors having entries in certain ultrametric Banach algebras). The purpose of [@Pe] was to produce a new class of functional identities for $L$-values, and only very particular examples of these new special functions were required, in order to obtain the results in that paper. The theory was later developed along several axes (see for example [@APTR], which also contains quite a detailed bibliography).
The aim of this note is to highlight the connection that these special functions have with representations of algebras, groups etc. associated to $A$, and to present families of examples which can be the object of further studies. In particular, we are interested in certain irreducible representations of ${\operatorname{SL}}_2({\mathbb F}_q[\theta])$ or ${\operatorname{GL}}_2({\mathbb F}_q[\theta])$. We also provide a few explicit examples and properties of such representations.
The plan of the note is the following. In §\[algebrarepr\], we discuss algebra representations of $A$ and we will consider their associated $\omega$-values and $L$-values. In §\[symmetricpowers\], we present a class of irreducible representations $\rho^I$ inside symmetric powers in the case $q=p$. In §\[tensorprod\], we apply the results of §\[symmetricpowers\] to show that certain tensor products $\rho^{I\!I}$ are irreducible. In §\[poincare\] we use these results to show that the entries of certain vectorial Poincaré series generalizing those introduced in [@Pe] are linearly independent and we present a conjecture on the rank of a certain module of vectorial modular forms.
In all the following, $q=p^e$ with $p$ a prime number and $e>0$, and we set $$\Gamma=\operatorname{GL}_2({\mathbb F}_q[\theta]).$$ We shall write $A={\mathbb F}_q[\theta]$ (so that $\Gamma=\operatorname{GL}_2(A)$). All along the paper, if $a=a_0+a_1\theta+\cdots+a_r\theta^r$ is an element of $A$ with $a_0,\ldots,a_r\in{\mathbb F}_q$ and if $t$ is an element of an ${\mathbb F}_q$-algebra $B$, then $a(t)$ denotes the element $a_0+a_1t+\cdots+a_rt^r\in B$. Also, we set $K={\mathbb F}_q(\theta)$.
Algebra representations {#algebrarepr}
=======================
In this section, we consider an integral, commutative ${\mathbb F}_q$-algebra ${\boldsymbol{A}}$ and we denote by $\boldsymbol{K}$ its fraction field. We denote by $\operatorname{Mat}_{n\times m}(R)$, with $R$ a commutative ring, the $R$-module of the matrices with $n$ rows and $m$ columns, and with entries in $R$. If $n=m$, this $R$-module is equipped with the structure of an $R$-algebra. We choose an injective algebra representation $$\label{algrep}
A\xrightarrow{\sigma}\operatorname{Mat}_{d\times d}(\boldsymbol{K}),$$ which is completely determined by the choice of the image $\vartheta:=\sigma(\theta)$. Note that $\sigma$ fails to be injective if and only if all the eigenvalues of $\vartheta$ lie in ${\mathbb F}_q^{ac}$, an algebraic closure of ${\mathbb F}_q$ (in all the following, if $L$ is a field, $L^{ac}$ denotes an algebraic closure of $L$). Further, we have that $\sigma$ is irreducible if and only if the characteristic polynomial of $\vartheta$ is irreducible over $\boldsymbol{K}$.
An example
----------
We denote by ${\boldsymbol{A}}[\theta]^+$ the multiplicative monoid of polynomials which are monic in $\theta$. Let $P$ be a polynomial in ${\boldsymbol{A}}[\theta]^+$, let $d$ be the degree of $P$ in $\theta$. The euclidean division in ${\boldsymbol{A}}[\theta]$ by $P$ defines for all $a\in {\boldsymbol{A}}[\theta]$, in an unique way, a matrix $\sigma_P(a)\in\operatorname{Mat}_{d\times d}({\boldsymbol{A}})$ such that $$aw\equiv\sigma_P(a)w\pmod{P{\boldsymbol{A}}[\theta]},$$ where $w$ is the column vector with entries $1,\theta,\ldots,\theta^{d-1}$. Explicitly, if $P=\theta^d+P_{d-1}\theta^{d-1}+\cdots+P_0$ with $P_i\in{\boldsymbol{A}}$, then $$\sigma_P(\theta)=\left(\begin{array}{cccc} 0 & 1 & \cdots & 0
\\
0 & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1\\
-P_0 & -P_1 & \cdots & -P_{d-1}\end{array}\right).$$ Hence, the map $\sigma_P$ defines an algebra representation $$A\xrightarrow{\sigma_P}\operatorname{End}({\boldsymbol{A}}^d).$$ The representation $\sigma_P$ is faithful if not all the roots of $P$ lie in ${\mathbb F}_q^{ac}$, and is irreducible if and only if $P$ is irreducible over $\boldsymbol{K}$.
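As a concrete sanity check, the construction of $\sigma_P$ can be carried out mechanically. The sketch below (plain Python; the choices $p=5$ and $P=\theta^3+2\theta+1$, and all helper names, are ours and purely illustrative) builds the companion matrix $\sigma_P(\theta)$ and verifies the defining congruence $aw\equiv\sigma_P(a)w\pmod{P}$ row by row.

```python
# Sketch: companion-matrix realization of multiplication modulo P in F_p[theta].
# Illustrative choices (not from the paper): p = 5, P = theta^3 + 2*theta + 1.

p = 5
P = [1, 2, 0, 1]          # coefficients of P, lowest degree first
d = len(P) - 1

def polymod(a, P, p):
    """Reduce a coefficient list modulo the monic polynomial P, coefficients mod p."""
    a = [c % p for c in a]
    while len(a) > d:
        if a[-1] == 0:
            a.pop()
            continue
        lead, shift = a[-1], len(a) - 1 - d
        for i, c in enumerate(P):
            a[shift + i] = (a[shift + i] - lead * c) % p
        a.pop()                        # leading coefficient is now zero
    return a + [0] * (d - len(a))      # pad to length d

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

# Row i of the companion matrix = coefficients of theta^(i+1) mod P.
C = [polymod([0] * (i + 1) + [1], P, p) for i in range(d)]

def sigma(a):
    """sigma_P(a) = a(C), computed as a polynomial in the companion matrix."""
    M = [[0] * d for _ in range(d)]
    X = [[int(i == j) for j in range(d)] for i in range(d)]   # C^0
    for coeff in a:
        M = [[(M[i][j] + coeff * X[i][j]) % p for j in range(d)] for i in range(d)]
        X = matmul(X, C)
    return M

a = [3, 0, 1, 4]                       # 3 + theta^2 + 4*theta^3
S = sigma(a)
for i in range(d):
    # defining congruence: row i of sigma_P(a) = coefficients of a*theta^i mod P
    assert S[i] == polymod([0] * i + a, P, p)
```

The row-by-row agreement is exactly the statement $aw\equiv\sigma_P(a)w\pmod{P{\boldsymbol{A}}[\theta]}$ in this small case.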
$L$-values and $\omega$-values of algebra representations and semi-characters
-----------------------------------------------------------------------------
We give a few elementary properties of certain basic objects that can be associated to representations such as in (\[algrep\]). Since the proofs are in fact obvious generalizations of the arguments of [@Pe; @APTR], we will only sketch them.
For a ring $R$, we denote by $R^*$ the underlying multiplicative monoid of $R$ (if we forget the addition of $R$ we are left with the monoid $R^*$). We recall that $A^+$ denotes the multiplicative monoid of monic polynomials of $A$. Let $\boldsymbol{M}$ be an ${\mathbb F}_q$-algebra (for example, $\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\boldsymbol{K})$ for some integer $d$).
[ *A monoid homomorphism $$\sigma:A^+\rightarrow \boldsymbol{M}^*$$ is a [*semi-character*]{} if there exist pairwise commuting ${\mathbb F}_q$-algebra homomorphisms $$\sigma_1,\ldots,\sigma_s:A\rightarrow \boldsymbol{M}$$ such that, for $a\in A^+$, $$\sigma(a)=\sigma_1(a)\cdots\sigma_s(a).$$ The trivial map $\sigma(a)=1_{\boldsymbol{M}}$ for all $a$ is a semi-character, according to the convention that an empty product is equal to one. If, for all $i=1,\ldots,s$, $\vartheta_i=\sigma_i(\theta)$ has a well defined minimal polynomial over $\boldsymbol{K}$, we say that the semi-character $\sigma$ is of [*Dirichlet type*]{}. This happens if, for example, $\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\boldsymbol{K})$. The [*conductor*]{} of a semi-character of Dirichlet type is the product of all the pairwise distinct minimal polynomials of the elements $\vartheta_1,\ldots,\vartheta_s$.*]{}
### Example {#example .unnumbered}
If we choose $\boldsymbol{M}={\mathbb F}_{q}^{ac}$, then a semi-character $\sigma:A^+\rightarrow{\mathbb F}_q^{ac}$ is always of Dirichlet type, and our definition coincides in fact with the usual notion of a Dirichlet-Goss character $A^+\rightarrow{\mathbb F}_q^{ac}$. There exist pairwise non-conjugate elements $\zeta_1,\ldots,\zeta_s\in{\mathbb F}_q^{ac}$, with minimal polynomials $P_1,\ldots,P_s\in A$, such that $\sigma(a)=a(\zeta_1)^{n_1}\cdots a(\zeta_s)^{n_s}$ for all $a\in A$, with $0<n_i<q^{d_i}-1$ for all $i$, where $d_i$ is the degree of $P_i$. The conductor is the product $P_1\cdots P_s$.
### Non-example {#non-example .unnumbered}
We set $\boldsymbol{M}={\mathbb F}_{q}[x]$ and we consider the map $\sigma:A^+\rightarrow\boldsymbol{M}$ defined by $a\mapsto x^{\deg_\theta(a)}$. Then, $\sigma$ is a monoid homomorphism which is not a semi-character. Indeed, assuming the contrary, we would have $\sigma=\sigma_1\cdots\sigma_s$ for algebra homomorphisms $\sigma_i:A\rightarrow\boldsymbol{M}$. But since $\sigma(\theta)=\sigma_1(\theta)\cdots\sigma_s(\theta)=x$, we get $s=1$ and $\sigma$ would be an algebra homomorphism, which is certainly false.
From now on, we suppose, for convenience, that $\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\boldsymbol{K}_s)$ with $\boldsymbol{K}_s={\mathbb F}_q(t_1,\ldots,t_s)$ (but some of the arguments also hold with $\boldsymbol{M}$ any ${\mathbb F}_q$-algebra). Let $K_\infty$ be the completion of $K={\mathbb F}_q(\theta)$ at the infinity place. Then, $K_\infty={\mathbb F}_q((\theta^{-1}))$, with the norm $|\cdot|$ defined by $|\theta|=q$ (associated to the valuation $v_\infty$ such that $v_\infty(\theta)=-1$). Let ${\mathbb C}_\infty$ be the completion $\widehat{K_\infty^{ac}}$, where $K_\infty^{ac}$ denotes an algebraic closure of $K_\infty$. We denote by $\mathbb{K}_s$ the completion of the field ${\mathbb C}_\infty(t_1,\ldots,t_s)$ for the Gauss valuation extending the valuation of $K_\infty$, so that the valuation induced on ${\mathbb F}_q^{ac}(t_1,\ldots,t_s)\subset{\mathbb C}_\infty(t_1,\ldots,t_s)$ is the trivial one. Also, we denote by $K_{s,\infty}$ the completion of $K(t_1,\ldots,t_s)$ in $\mathbb{K}_s$; we have $K_{s,\infty}=\boldsymbol{K}_s((\theta^{-1}))$. We have that $K_\infty=K_{0,\infty}$, and $\mathbb{K}_0={\mathbb C}_\infty$.
### $\omega$-values of an algebra representation
We consider a $d$-dimensional representation as in (\[algrep\]), in $\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\boldsymbol{K}_s)$. We consider the following element of $K_\infty\widehat{\otimes}_{{\mathbb F}_q}\boldsymbol{M}=\operatorname{Mat}_{d\times d}(K_{s,\infty})\subset {\mathbb C}_\infty\widehat{\otimes}_{{\mathbb F}_q}\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$, the topological tensor product $\widehat{\otimes}_{{\mathbb F}_q}$ being taken with respect to the trivial norm on $\boldsymbol{M}$. We denote by $\Pi_\sigma$ the convergent product $$\Pi_\sigma=\prod_{i\geq 0}(I_d-\sigma(\theta)\theta^{-q^i})^{-1}\in \operatorname{GL}_d(K_{s,\infty})$$ (where $I_d$ denotes the identity matrix). Let $\lambda_\theta=(-\theta)^{\frac{1}{q-1}}\in K^{ac}$ be a $(q-1)$-th root of $-\theta$. The [*$\omega$-value*]{} associated to $\sigma$ is the product $$\omega_\sigma=\lambda_\theta\Pi_\sigma\in\operatorname{GL}_d(K_{s,\infty}(\lambda_\theta)).$$ We have, on the other hand, a continuous $\boldsymbol{K}_s$-algebra automorphism $\tau$ of $\mathbb{K}_s$ uniquely defined by setting $\tau(\theta)=\theta^q$. By using an ultrametric version of the Mittag-Leffler decomposition, it is easy to show that $\mathbb{K}_s^{\tau=1}$, the subfield of $\mathbb{K}_s$ of the $\tau$-invariant elements, is equal to $\boldsymbol{K}_s$. We denote by $\tau$ the algebra endomorphism of $\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$ defined by applying $\tau$ entry-wise.
\[equationtau\] The element $\omega_\sigma$ is a generator of the free $\boldsymbol{M}$-submodule of rank one of $\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$ of the solutions of the $\tau$-difference equation $$\tau(X)=(\sigma(\theta)-\theta I_d)X.$$
(Sketch.) It is easy to verify that $\omega_\sigma$ is a solution of the above equation. If $\omega'$ is another solution, then $Y=\omega'\omega_\sigma^{-1}$ is a solution of $\tau(Y)=Y$ in $\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$, hence, $Y\in\boldsymbol{M}$.
[*It is also easy to prove that, for $\sigma$ as in (\[algrep\]), $\det(\omega_\sigma)=\omega_\alpha$, where $\alpha\in\boldsymbol{K}_s[\theta]$ is $-1$ times the characteristic polynomial of $\sigma(\theta)$ and $\omega_\alpha$ is the function defined in [@APTR §6].*]{}
Let $T$ be another indeterminate. The algebra $\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$ is endowed with a structure of $A[T]$-module in two ways. The first structure is that in which the multiplication by $\theta$ is given by the usual diagonal multiplication, and the multiplication by $T$ is given by the left multiplication by $\sigma(\theta)$; this defines indeed, uniquely, a module structure. The second structure, called [*Carlitz module structure*]{}, denoted by $C(\operatorname{Mat}_{d\times d}(\mathbb{K}_s))$, has the same multiplication by $T$ and has the multiplication $C_{\theta}$ by $\theta$ independent of the choice of $\sigma$, and defined as follows. If $m\in C(\operatorname{Mat}_{d\times d}(\mathbb{K}_s))$, then $C_\theta(m)=\theta m+\tau(m)$.
We have the exponential map $$\exp_C:\operatorname{Mat}_{d\times d}(\mathbb{K}_s)\rightarrow
C(\operatorname{Mat}_{d\times d}(\mathbb{K}_s))$$ defined by $\exp_C(f)=\sum_{i\geq 0}D_i^{-1}\tau^i(f)$, where $D_i$ is the product of the monic polynomials of $A$ of degree $i$. It is quite standard to check that this is a continuous, open, surjective $A[T]$-module homomorphism, of kernel $\widetilde{\pi}\operatorname{Mat}_{d\times d}(\boldsymbol{K}_s[\theta])$, where $$\widetilde{\pi}:=\theta\lambda_\theta\prod_{i>0}(1-\theta^{1-q^i})^{-1}\in\lambda_\theta K_\infty\subset {\mathbb C}_\infty$$ is a fundamental period of Carlitz’s exponential $\exp_C:{\mathbb C}_\infty\rightarrow{\mathbb C}_\infty$.
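The factorials $D_i$ admit the classical closed form $D_i=\prod_{j=0}^{i-1}(\theta^{q^i}-\theta^{q^j})$. This can be checked by brute force in small cases; the sketch below (plain Python, restricted for simplicity to $q=p$ prime so that coefficients can be enumerated as $0,\ldots,p-1$; coefficient lists are written lowest degree first) compares the two descriptions.

```python
from itertools import product as cartesian

def polymul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] = (out[i + j] + ca * cb) % p
    return out

def brute_D(i, q):
    # D_i as in the text: product of ALL monic polynomials of A of degree i
    D = [1]
    for tail in cartesian(range(q), repeat=i):
        D = polymul(D, list(tail) + [1], q)
    return D

def closed_D(i, q):
    # classical closed form: D_i = prod_{j < i} (theta^{q^i} - theta^{q^j})
    D = [1]
    for j in range(i):
        f = [0] * (q**i + 1)
        f[q**i] = 1
        f[q**j] = (f[q**j] - 1) % q
        D = polymul(D, f, q)
    return D

assert brute_D(1, 2) == closed_D(1, 2)
assert brute_D(2, 2) == closed_D(2, 2)
assert brute_D(1, 3) == closed_D(1, 3)
```

For instance, for $q=2$, $i=2$, both sides give $\theta^8+\theta^6+\theta^5+\theta^3$.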
\[lemmaexpmatrix\] We have $\omega_\sigma=\exp_C\left(\widetilde{\pi}(\theta I_d-\sigma(\theta))^{-1}\right).$
We set $f=\exp_C\left(\widetilde{\pi}(\theta I_d-\sigma(\theta))^{-1}\right)$. Since $(C_{\theta}-\sigma(\theta))(f)=0$ in $C(\operatorname{Mat}_{d\times d}(\mathbb{K}_s))$, Lemma \[equationtau\] tells us that $f$ belongs to the free $\boldsymbol{M}$-submodule of rank one of $\operatorname{Mat}_{d\times d}(\mathbb{K}_s)$ of the solutions of the homogeneous linear difference equation described in that statement. Now, observe that $$\omega_\sigma=\theta\lambda_\theta(\theta I_d-\sigma(\theta))^{-1}+M_1,\quad f=\theta\lambda_\theta(\theta I_d-\sigma(\theta))^{-1}+M_2,$$ where $M_1,M_2$ are matrices with coefficients in $K_{s,\infty}$ whose entries have Gauss norms $<|\lambda_\theta|=q^{\frac{1}{q-1}}$. Hence $\omega_\sigma=f$.
### $L$-values associated to a semi-character {#Lvalues}
We again suppose that $\boldsymbol{M}=\operatorname{Mat}_{d\times d}(\boldsymbol{K}_s)$, with $\boldsymbol{K}_s$ as in the previous sections. Let $\sigma$ be a semi-character $A^+\rightarrow\boldsymbol{M}$ and let $n$ be a positive integer. The [*$n$-th $L$-value*]{} associated to $\sigma$ is the following element of $\operatorname{GL}_d(K_{s,\infty})$: $$L_\sigma(n)=\prod_{P}\left(I_d-\sigma(P)P^{-n}\right)^{-1}=\sum_{a\in A^+}\sigma(a)a^{-n}=I_d+\cdots,$$ the product running over the irreducible elements of $A^+$.
### Determinant
We write $\sigma=\sigma_1\cdots\sigma_s$ for injective ${\mathbb F}_q$-algebra homomorphisms $\sigma_i:A\rightarrow\boldsymbol{M}$ with $\sigma_i(\theta),\sigma_j(\theta)$ commuting with each other. The elements $L_\sigma(n)$ and $\omega_{\sigma_1}\cdots\omega_{\sigma_s}$ commute with each other. Further, we denote by $\lambda_{1,j},\ldots,\lambda_{d,j}\in\boldsymbol{K}_s^{ac}$ the eigenvalues of $\sigma_j(\theta)$ for $j=1,\ldots,s$ (considered with multiplicities). For simplicity, we suppose that none of these eigenvalues belong to ${\mathbb F}_q^{ac}$. On the other hand, we consider variables $x_1,\ldots,x_s$ and the $L$-value: $$\mathcal{L}_s(n):=\prod_{P}(1-\psi_s(P)P^{-n})^{-1},$$ where $\psi_s:A^+\rightarrow{\mathbb F}_q[x_1,\ldots,x_s]^*$ is the semi-character defined by $a\mapsto a(x_1)\cdots a(x_s)$, and the series converges in the completion of $K[x_1,\ldots,x_s]$ for the Gauss norm extending $|\cdot|$.
We have the formula $$\det(L_\sigma(n))=\prod_{i=1}^d\left(\mathcal{L}_s(n)\right)_{x_j=\lambda_{i,j}\atop j=1,\ldots,s}\in K_{s,\infty}.$$
We note that, for every polynomial $P\in A^+$, $$\det((I_d-\sigma(P)P^{-n})^{-1})=P^{dn}\det(I_dP^n-\sigma(P))^{-1}.$$ By the well known properties of the characteristic polynomial of an endomorphism, we have that $$\det(I_dP^n-\sigma(P))=\prod_{i=1}^d(X-\mu_{i,P})_{X=P^n},$$ where $\mu_{i,P}\in\boldsymbol{K}_s^{ac}$ are the eigenvalues of the left multiplication by $\sigma(P)$. Now, observe that $$\sigma(P)=\prod_{j=1}^s\sigma_j(P)=\prod_{j=1}^sP(\sigma_j(\theta))$$ (the elements $\sigma_j(\theta)$ commute with each other). Hence, $\mu_{i,P}=\prod_{j=1}^sP(\lambda_{i,j})$ for all $i=1,\ldots,d$. Thus, $$\det((I_d-\sigma(P)P^{-n})^{-1})=P^{dn}\prod_{i=1}^d\left(P^n-\prod_{j=1}^sP(\lambda_{i,j})\right)^{-1},$$ and the lemma follows.
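The key spectral step in this proof, that the eigenvalues of $P(\vartheta)$ are the values of $P$ at the eigenvalues of $\vartheta$, is the spectral mapping property of polynomials applied to matrices. A quick numerical illustration over $\mathbb{C}$ (a characteristic-zero stand-in with an arbitrary matrix and polynomial, chosen by us only to exhibit the formal identity) checks $\det\big(xI_d-P(\vartheta)\big)=\prod_i\big(x-P(\lambda_i)\big)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_mat = rng.standard_normal((4, 4))        # stand-in for sigma(theta), case s = 1
coeffs = [1.0, -2.0, 0.0, 3.0]                 # P(x) = 1 - 2x + 3x^3, arbitrary

def P_of_matrix(M):
    """Evaluate P at a square matrix: sum of c_k * M^k."""
    out = np.zeros_like(M, dtype=complex)
    power = np.eye(M.shape[0], dtype=complex)
    for c in coeffs:
        out += c * power
        power = power @ M
    return out

def P_of_scalar(x):
    return sum(c * x**k for k, c in enumerate(coeffs))

lam = np.linalg.eigvals(theta_mat)
x = 2.5
lhs = np.linalg.det(x * np.eye(4) - P_of_matrix(theta_mat))
rhs = np.prod(np.array([x - P_of_scalar(l) for l in lam]))
assert abs(lhs - rhs) < 1e-6 * (1 + abs(rhs))
```

The same identity applied with $x=P^n$ and $\vartheta=\sigma(\theta)$ is what the proof uses, factor by factor of the Euler product.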
### The case $n=1$
We write $\sigma=\sigma_1\cdots\sigma_s$ as above. Since for all $a\in A^+$, $C_a(\omega_{\sigma_i})=\sigma_i(a)\omega_{\sigma_i}\in\boldsymbol{M}$, we have the convergent series identity $$\sum_{a\in A^+}a^{-1}C_a(\omega_{\sigma_1})\cdots C_a(\omega_{\sigma_s})=L_\sigma(1)\omega_{\sigma_1}\cdots\omega_{\sigma_s}\in \operatorname{Mat}_{d\times d}(K_{s,\infty}(\lambda_\theta))$$ (in fact, the series converges to an invertible matrix).
### A simple application of Anderson’s log-algebraic theorem
We now invoke the result of B. Anglès, F. Tavares Ribeiro and the author [@APTR Theorem 8.2] (note that in the statement, we can set $Z=1$). For completeness, we mention the following result, which is a very easy consequence of ibid.:
\[Taelmanunits\] For every semi-character $\sigma:A^+\rightarrow\boldsymbol{M}$ with $\sigma=\sigma_1\cdots\sigma_s$ as above, $$(\omega_{\sigma_1}\cdots\omega_{\sigma_s})^{-1}\exp_C(\omega_{\sigma_1}\cdots\omega_{\sigma_s}L_\sigma(1))=:S_\sigma\in \operatorname{Mat}_{d\times d}(\boldsymbol{K}_s[\theta]).$$ Further, if $s\equiv1\pmod{q-1}$ and if $s>1$, the matrix with polynomial entries $S_\sigma$ is zero. In particular, in this case, $$L_\sigma(1)=\widetilde{\pi}(\omega_{\sigma_1}\cdots\omega_{\sigma_s})^{-1}\mathbb{B}_\sigma,$$ where $\mathbb{B}_\sigma$ is a matrix with polynomial entries in $\boldsymbol{K}_s[\theta]$.
Hence, $L_\sigma(1)$ is a “Taelman unit”, in the sense of [@TAE2]. If $s=1$, we have a more explicit property. In this case, $\sigma$ extends to an algebra homomorphism $\sigma:A\rightarrow\boldsymbol{M}$, and we have the simple explicit formula $$L_\sigma(1)=\omega_\sigma^{-1}(I_d\theta-\sigma(\theta))^{-1}\widetilde{\pi}$$ which can be proved in a way very similar to that of [@Pe §4]. We are going to see, in §\[eisensteinseries\] that $L_\sigma(n)$ is related to certain vectorial Eisenstein series, when $n\equiv s\pmod{q-1}$.
Representations of $\Gamma$ associated to an algebra representation
-------------------------------------------------------------------
Let $\boldsymbol{K}$ be any commutative field extension of ${\mathbb F}_q$. Let $\sigma$ be a $d$-dimensional representation as in (\[algrep\]). We associate to it, canonically, a representation of $\Gamma=\operatorname{GL}_2(A)$ in $\operatorname{GL}_{2d}(\boldsymbol{K})$.
We consider the map $\Gamma\xrightarrow{\rho_\sigma}\operatorname{Mat}_{2d\times 2d}(\boldsymbol{K}),$ defined by $$\rho_\sigma\left({\left(\begin{matrix} a & b \\ c & d \end{matrix}\right)}\right)={\left(\begin{matrix} \sigma(a) & \sigma(b) \\ \sigma(c) & \sigma(d) \end{matrix}\right)}.$$ Then, $\rho_\sigma$ determines a representation $\Gamma\rightarrow{\operatorname{GL}}_{2d}(\boldsymbol{K}).$ Indeed, $\sigma(a)$ and $\sigma(b)$ commute with each other for all $a,b\in A$. Furthermore, we have:
\[lemma2\] $\rho_\sigma$ is irreducible if and only if $\sigma$ is irreducible.
Let $V$ be a non-trivial sub-vector space of $\operatorname{Mat}_{d\times 1}(\boldsymbol{K})$ such that $\sigma(a)(V)\subset V$ for all $a\in A$. Then, if $\gamma=(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix})\in\Gamma$, we have $$\rho_\sigma(\gamma)(V\oplus V)=\begin{pmatrix} \sigma(a) & \sigma(b) \\ \sigma(c) & \sigma(d) \end{pmatrix}(V\oplus V)\subset V\oplus V,$$ so that, if $\sigma$ is not irreducible, then $\rho_\sigma$ is not irreducible.
Now, let us assume that $\sigma$ is irreducible and let us consider $V$ a non-zero sub-vector space of $\operatorname{Mat}_{2d\times 1}(\boldsymbol{K})$ which is $\rho_\sigma$-invariant. We observe that $V\cap\Delta\neq\{0\}$, with $\Delta=\{\binom{v}{v}:v\in\operatorname{Mat}_{d\times 1}(\boldsymbol{K})\}$. Indeed, $\rho_\sigma((\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}))=(\begin{smallmatrix} 0 & I_d \\ I_d & 0 \end{smallmatrix})$ and $\rho_\sigma((\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}))=(\begin{smallmatrix} I_d & 0 \\ 0 & -I_d \end{smallmatrix})$. If $\binom{x}{y}$ is a non-zero vector of $V$ with $x\neq-y$, we have $\binom{x}{y}+\rho_\sigma((\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}))\binom{x}{y}=\binom{x+y}{x+y}\in\Delta\setminus\{0\}$. If $x=-y$, then $\rho_\sigma((\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}))\binom{x}{y}\in\Delta\setminus\{0\}$. Let $\binom{v}{v}$ be non-zero in $V\cap\Delta$. Since for all $a\in A$, $\rho_\sigma((\begin{smallmatrix} 1 & a \\ 0 & 1 \end{smallmatrix}))\binom{v}{v}=(\begin{smallmatrix} I_d & \sigma(a) \\ 0 & I_d \end{smallmatrix})\binom{v}{v}=\binom{\sigma(a')v}{v}$ with $a'=a+1$, we have $\{\binom{\sigma(a)(v)}{v}:a\in A\}\subset V$. Let $W$ be the $\boldsymbol{K}$-sub-vector space of $\operatorname{Mat}_{d\times 1}(\boldsymbol{K})$ generated by the set $\{\sigma(a)(v):a\in A\}$. Then, $W$ is $\sigma$-invariant: if $w=\sum_ic_i\sigma(a_i)(v)\in W$ ($c_i\in \boldsymbol{K}$, $a_i\in A$), we have that $$\sigma(a)(w)=\sum_ic_i\sigma(aa_i)(v)\in W.$$ By hypothesis, $W$ is non-zero, so that $W=\operatorname{Mat}_{d\times 1}(\boldsymbol{K})$. Moreover, since $\binom{\sigma(a)(v)}{v}-\binom{v}{v}=\binom{\sigma(a-1)(v)}{0}\in V$ for all $a\in A$, we deduce that $V\supset W\oplus\{0\}=\operatorname{Mat}_{d\times 1}(\boldsymbol{K})\oplus\{0\}$.
Applying $\rho_\sigma((\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}))$ we see that $V\supset\{0\}\oplus\operatorname{Mat}_{d\times 1}(\boldsymbol{K})$ and $V=\operatorname{Mat}_{2d\times 1}(\boldsymbol{K})$.
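The passage from $\sigma$ to $\rho_\sigma$ is easy to experiment with on a machine. The sketch below (plain Python with NumPy; all choices, namely $p=3$, $P=\theta^2+1$ and the two sample matrices, are ours and purely illustrative) realizes $\sigma$ as $a\mapsto a(C)$ for the companion matrix $C$ of $P$, and checks the homomorphism property $\rho_\sigma(\gamma_1\gamma_2)=\rho_\sigma(\gamma_1)\rho_\sigma(\gamma_2)$, which holds because $\sigma$-images of the polynomial entries multiply like the entries themselves.

```python
import numpy as np

p = 3
C = np.array([[0, 1], [-1, 0]]) % p            # companion matrix of P = theta^2 + 1
d = 2

def sigma(a):
    """a is a coefficient list (lowest degree first) in F_p[theta]; return a(C) mod p."""
    M = np.zeros((d, d), dtype=int)
    X = np.eye(d, dtype=int)
    for c in a:
        M = (M + c * X) % p
        X = (X @ C) % p
    return M

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] = (out[i + j] + ca * cb) % p
    return out

def polyadd(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def rho(g):
    """g: 2x2 matrix of coefficient lists; return the 2d x 2d block matrix of sigma-images."""
    return np.block([[sigma(g[0][0]), sigma(g[0][1])],
                     [sigma(g[1][0]), sigma(g[1][1])]])

def matmul_poly(g, h):
    return [[polyadd(polymul(g[i][0], h[0][j]), polymul(g[i][1], h[1][j]))
             for j in range(2)] for i in range(2)]

g1 = [[[1], [0, 1, 2]], [[0], [1]]]            # upper unipotent, entry theta + 2*theta^2
g2 = [[[1], [0]], [[2, 1], [1]]]               # lower unipotent, entry 2 + theta
assert np.array_equal(rho(matmul_poly(g1, g2)), (rho(g1) @ rho(g2)) % p)
```

Unipotent matrices such as $\gamma_1,\gamma_2$ above are exactly the elements exploited in the proof of Lemma \[lemma2\].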
### Example {#example-1 .unnumbered}
We can construct, in particular, the representation $\rho_P=\rho_{\sigma_P}:\Gamma\rightarrow{\operatorname{GL}}_{2d}(\boldsymbol{A})$ which is irreducible if and only if $P$ is irreducible.
Symmetric powers {#symmetricpowers}
================
In the first part of this section, we suppose that $q$ is a prime number, so that $p=q$. Let $B$ be an ${\mathbb F}_p$-algebra. We denote by $\rho$ the tautological representation ${\operatorname{GL}}_2(B)\rightarrow\operatorname{GL}_2(B)$. We consider the representation $$\rho_{r}=\operatorname{Sym}^r(\rho):{\operatorname{GL}}_2(B)\rightarrow\operatorname{GL}_{r+1}(B),$$ where $\operatorname{Sym}^r$ denotes the $r$-th symmetric power realized in the space of polynomials homogeneous of degree $r$ with coefficients in $B$. If $\gamma={\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)}\in{\operatorname{GL}}_2(B)$, then $$\rho_{r}(\gamma)(X^{r-i}Y^i)=(aX+cY)^{r-i}(bX+dY)^i,\quad i=0,\ldots, r.$$
Associated to an integer $l\geq 0$ with $p$-expansion $l=l_0+l_1p+\cdots+l_sp^s$ ($0\leq l_i\leq p-1$), we also consider the representations $$\rho^{I}_{l}=\rho_{l_0}\otimes\rho_{l_1}^{(1)}\otimes\cdots\otimes\rho_{l_s}^{(s)},$$ where, for a matrix $M$ with entries in $B$, $M^{(i)}$ denotes the matrix obtained from $M$ raising all its entries to the power $p^i$. The dimension of $\rho_l^{I}$ is equal to $$\phi_p(l)=\prod_i(l_i+1).$$
\[isomorphictoasub\] The representation $\rho^{I}_l$ is isomorphic to a sub-representation of $\rho_l$.
We actually construct the sub-representation explicitly; the lemma will follow easily. We consider, for $\gamma\in{\operatorname{GL}}_2(B)$, the matrix $\rho_{l}^{{\star}}(\gamma)$ which is the square matrix extracted from $$\rho_{l}(\gamma)=(\rho_{i,j})_{1\leq i,j\leq l+1}$$ in the following way. If $0\leq r\leq l$ is such that $\binom{l}{r}\equiv0\pmod{p}$, we drop the $(r+1)$-th row and the $(r+1)$-th column. In other words, one uses the row matrix $$\mathcal{D}_l=\left(\binom{l}{l},\ldots,\binom{l}{r},\ldots,\binom{l}{0}\right)$$ and discards rows and columns of $\rho_{l}$ according to the vanishing of the corresponding entry of $\mathcal{D}_l$; what is left precisely defines the matrix $\rho_l^{\star}$. By Lucas' theorem, $\rho_{l}^{{\star}}$ has dimension $\phi_p(l)$, and it is easy to see, by induction on the number of digits of the $p$-expansion of $l$, that $\rho_l^{\star}\cong\rho_l^{I}$.
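The dimension count rests on Lucas' theorem: $\binom{l}{r}\not\equiv0\pmod p$ exactly when each base-$p$ digit of $r$ is at most the corresponding digit of $l$, so the number of surviving rows is $\prod_i(l_i+1)=\phi_p(l)$. A brute-force check (plain Python):

```python
from math import comb

def phi_p(l, p):
    """Product of (digit + 1) over the base-p digits of l."""
    out = 1
    while l:
        out *= (l % p) + 1
        l //= p
    return out

for p in (2, 3, 5):
    for l in range(1, 200):
        # number of binomial coefficients binom(l, r) not divisible by p
        surviving = sum(1 for r in range(l + 1) if comb(l, r) % p != 0)
        assert surviving == phi_p(l, p)
```

For example, for $p=2$ and $l=7$ all eight binomial coefficients are odd, matching $\phi_2(7)=2^3=8$.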
### Example {#example-2 .unnumbered}
If $l=1+p$, we have $$\mathcal{D}_l^{I}=(1,1,1,1)=\left(\binom{l}{0},\binom{l}{1},\binom{l}{p},\binom{l}{l}\right).$$ In this case, we find, for $\gamma={\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)}$, $$\rho_{l}^{{\star}}(\gamma)=\left(\begin{array}{llll} a^{p+1} & a^pb & ab^p & b^{p+1}\\
a^pc & a^pd & b^p c & b^p d\\
ac^p & bc^p & a d^p & bd^p \\
c^{p+1} & c^pd & c d^p & d^{p+1}
\end{array}\right).$$
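Up to the ordering convention chosen for the tensor factors, the matrix above is the Kronecker product $\gamma^{(1)}\otimes\gamma$, where $\gamma^{(1)}$ raises the entries of $\gamma$ to the power $p$. The identification holds entry by entry over $\mathbb{Z}$, hence over any $B$; a quick check with arbitrary sample entries (plain Python, illustrative values only):

```python
p = 5
a, b, c, d = 2, 3, 1, 4                        # arbitrary sample entries

displayed = [
    [a**(p+1), a**p * b, a * b**p, b**(p+1)],
    [a**p * c, a**p * d, b**p * c, b**p * d],
    [a * c**p, b * c**p, a * d**p, b * d**p],
    [c**(p+1), c**p * d, c * d**p, d**(p+1)],
]

def kron2(A, B):
    """Kronecker product of two 2x2 matrices, returned as a 4x4 list of lists."""
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

frob = [[a**p, b**p], [c**p, d**p]]            # gamma^(1): entry-wise p-th powers
gamma = [[a, b], [c, d]]
assert displayed == kron2(frob, gamma)
```

This is the tensor decomposition $\rho^I_{1+p}=\rho_1\otimes\rho_1^{(1)}$ made explicit in the case $l=1+p$.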
[ *We notice the following algorithm to construct the sequence of dimensions $(\phi_p(l))_{l\geq 0}$ (with the convention $\phi_p(0)=1$, the empty product). Define $a_0=(1)$, $a_1=(1,2,\ldots,p)$ (equal to the concatenation $[a_0,2a_0,\ldots,pa_0]$) and then, inductively, $$a_n=[a_{n-1},2a_{n-1},\ldots,pa_{n-1}].$$ Since it is clear that for all $n$, $a_{n-1}$ is a prefix of $a_n$, there is a well defined inductive limit $a_\infty$ of the sequence $(a_n)_{n\geq0}$ which is easily seen to be equal to the sequence $(\phi_p(l))_{l\geq 0}$.*]{}
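The concatenation algorithm is easy to implement and compare against the digit formula directly. The sketch below (plain Python; indexing from $l=0$ with the empty-product convention $\phi_p(0)=1$) confirms that $a_n$ lists $\phi_p(0),\ldots,\phi_p(p^n-1)$:

```python
def phi_p(l, p):
    """Product of (digit + 1) over the base-p digits of l; phi_p(0) = 1."""
    out = 1
    while l:
        out *= (l % p) + 1
        l //= p
    return out

def a_n(n, p):
    """a_0 = (1); a_n = concatenation [a_{n-1}, 2*a_{n-1}, ..., p*a_{n-1}]."""
    seq = [1]
    for _ in range(n):
        seq = [k * x for k in range(1, p + 1) for x in seq]
    return seq

for p in (2, 3, 5):
    assert a_n(4, p) == [phi_p(l, p) for l in range(p**4)]
```

The agreement reflects the fact that appending a leading digit $k-1$ to the base-$p$ expansion multiplies $\phi_p$ by $k$.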
Representations of $\operatorname{SL}_2(\mathbb{F}_{q'})$. {#section1}
----------------------------------------------------------
Let us set $q'=p^f$ with $f>0$. Then, $B={\mathbb F}_{q'}$ is an ${\mathbb F}_p$-algebra and we can construct the representations $\rho_l$ and $\rho_l^\star\cong\rho_l^I$ of the beginning of this section. We denote by $\overline{\rho}_l$ the representation $\rho_l^{I}$ with $B=\mathbb{F}_{q'}$ restricted to ${\operatorname{SL}}_2(B)$. By using the fact that any non-zero stable subspace in a representation of a $p$-group over a vector space has a non-zero fixed vector, it is easy to show, and in fact well known, that for all $l\geq0$, $\overline{\rho}_{l}$ is an irreducible representation if and only if $l<q'$. One shows that the representations $\overline{\rho}_l$ with $l<q'$ exhaust all the isomorphism classes of irreducible representations of $\operatorname{SL}_2(\mathbb{F}_{q'})$ over ${\mathbb F}_p^{ac}$. Indeed, counting the $p$-regular conjugacy classes of $\operatorname{SL}_2(\mathbb{F}_{q'})$ is an easy task, and we know that their number coincides with the number of isomorphism classes of irreducible representations, so it suffices to check that the representations above are mutually inequivalent, which is an elementary task. This explicit description first appears in the paper [@BN] of Brauer and Nesbitt. Steinberg's tensor product theorem provides such a description when, in place of $G={\operatorname{SL}}_2$, we have, much more generally, a semisimple algebraic group of simply connected type, defined over an algebraically closed field $B$ of positive characteristic. This also implies Lemma \[isomorphictoasub\]. The author is thankful to Gebhard Böckle for having drawn his attention to this result and reference.
Some representations of $\operatorname{GL}_2(A)$. {#section2}
-------------------------------------------------
We now set $q=p^e$ with $e>0$. We also set $\boldsymbol{K}:={\mathbb F}_q(t)$ for a variable $t$ all along this subsection. We consider the algebra homomorphism $\chi_t:A\rightarrow{\mathbb F}_q[t]\subset\boldsymbol{K}$ defined by $\chi_t(a)=a(t)=a_0+a_1t+\cdots+a_dt^d$ for $a=a_0+a_1\theta+\cdots+a_d\theta^d\in A$, with coefficients $a_i\in{\mathbb F}_q$. We extend our notations by setting, for a matrix $M$ with entries in $A$, $\chi_t(M)$ the matrix obtained applying $\chi_t$ entry-wise. We denote by $\rho_{t},\rho_{t,l},\rho_{t,l}^{I}$ the representations $\chi_t\circ\rho_1,\chi_t\circ\rho_l,\chi_t\circ\rho_l^{I}$ over $\boldsymbol{K}$-vector spaces with the appropriate dimensions, of the group $\Gamma={\operatorname{GL}}_2(A)$.
\[theoprinc\] For all $l$ as above, the representation $\rho_{t,l}^{I}$ is irreducible.
It suffices to show that the restriction to ${\operatorname{SL}}_2({\mathbb F}_p[\theta])\subset\Gamma$ is irreducible. Let us consider an element $\zeta\in\mathbb{F}_q^{ac}$ of degree $f$ over ${\mathbb F}_p$ and let us denote by $\mathbb{F}_{q'}$ with $q'=p^f$ the subfield $\mathbb{F}_p(\zeta)$ of ${\mathbb F}_p^{ac}$. The group homomorphism $$\operatorname{ev}_\zeta:{\operatorname{SL}}_2({\mathbb F}_p[\theta])\rightarrow\operatorname{GL}_2(\mathbb{F}_{q'})$$ defined by the entry-wise evaluation $\operatorname{ev}_\zeta$ of $\theta$ at $\zeta$ has image $\operatorname{SL}_2(\mathbb{F}_{q'})$. Indeed, the evaluation map $\operatorname{ev}_\zeta:{\mathbb F}_p[\theta]\rightarrow{\mathbb F}_{q'}$ is surjective, so the image of $\operatorname{SL}_2({\mathbb F}_p[\theta])$ by $\operatorname{ev}_\zeta$ clearly contains all upper and lower triangular matrices of $\operatorname{SL}_2(\mathbb{F}_{q'})$, which are known to generate $\operatorname{SL}_2(\mathbb{F}_{q'})$. We set $N=\phi_p(l)$. Let $V$ be a non-zero proper $\boldsymbol{K}$-subvector space of $\boldsymbol{K}^{N}$ which is stable under the action of the representation of ${\operatorname{SL}}_2({\mathbb F}_p[\theta])$ induced by $\rho^{I}_{t,l}$. Let us fix a basis $b$ of $V$. We choose $f$ big enough so that $q'>\max(l,q)$ and the image $b'$ of $b$ in ${\mathbb F}_{q'}^N$ by the evaluation at $t=\zeta$ is well defined and non-zero. Then, the ${\mathbb F}_{q'}$-span of $b'$ is a non-zero sub-vector space of ${\mathbb F}_{q'}^N$ of dimension at most $\dim_{\boldsymbol{K}}V<N$ which is left invariant under the action of $\overline{\rho}_l$, which is impossible.
[ *Let $m$ be a class of ${\mathbb Z}/(q-1){\mathbb Z}$ and let us consider the representation $$\rho^{I}_{t,l,m}:\Gamma\rightarrow\operatorname{GL}_{\phi_p(l)}(\boldsymbol{K})$$ defined by $$\rho^{I}_{t,l,m}:=\rho_{t,l}^{{I}}\otimes\sideset{_{}^{}}{_{}^{-m}}\det.$$ By Lemma \[theoprinc\], it is irreducible. However, the representations $\rho^{I}_{t,l,m}$ do not cover all the irreducible representations of $\Gamma$ in ${\operatorname{GL}}_N(\boldsymbol{K})$ for some $N$. Due to the fact that we evaluate the functor ${\operatorname{GL}}_N$ on a ring which is not a field (here, the ring $A$), there are irreducible representations which, after specialization at roots of unity, do not give irreducible representations of ${\operatorname{SL}}_2({\mathbb F}_{q'})$.*]{}
[*The group $S_{(p)}$ of $p$-adic digit permutations of ${\mathbb Z}_p$ discussed by Goss in [@Go] acts on the positive integers $l$ by means of their expansions in base $p$. This defines an action of the group $S_{(p)}$ on the set of representations $\rho_l^{I}$ by $\nu(\rho_l^{I})=\rho^{I}_{\nu(l)}$, for $\nu\in S_{(p)}$. Note that the dimensions of these representations are $S_{(p)}$-invariant. It is easy to show that $\nu(\rho^{I}_l)\cong\rho^{I}_{l'}$ if and only if $\nu(l)=l'$.*]{}
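A small illustration of this action, with numbers of our own choosing (not taken from the text):

```latex
% Our own example with p = 3: the base-3 digits of l = 23 are (2,1,2),
% since 23 = 2 + 1\cdot 3 + 2\cdot 3^2. Let \nu transpose the two lowest digits:
\nu(23) \;=\; 1 + 2\cdot 3 + 2\cdot 3^{2} \;=\; 25.
% The multiset of digits \{1,2,2\} is unchanged, consistently with the fact
% that \rho^{I}_{23} and \rho^{I}_{25} have the same dimension.
```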
[ *We are thankful to Pietro Corvaja for having pointed out the following property. [*Let $k$ be a perfect field. Then, for all $\gamma\in\operatorname{SL}_2(k^{ac})$ there exists a morphism $\phi:\mathbb{A}^1\rightarrow\operatorname{SL}_2$ defined over $k$ and $\alpha\in k^{ac}$, such that $\phi(\alpha)=\gamma$.*]{}* ]{}
Products of representations {#tensorprod}
===========================
Let $t_1,\ldots,t_s$ be independent variables. We denote by $\underline{t}_s$ the set of variables $(t_1,\ldots,t_s)$ and we set $\boldsymbol{K}_s={\mathbb F}_q(\underline{t}_s)$. If $s=1$, we write $t=t_1$ and we have $\boldsymbol{K}_s=\boldsymbol{K}$, the field of §\[section2\]. We also consider $\underline{l}=(l_1,\ldots,l_s)$ an $s$-tuple with entries in ${\mathbb Z}$ which are $\geq 1$.
\[corollaryt1ts\] The representation $$\rho^{{I\!I}}_{\underline{t},\underline{l}}:=\rho_{t_1,l_1}^{I}\otimes\cdots\otimes\rho_{t_s,l_s}^{I}:\Gamma\rightarrow\operatorname{GL}_{\phi_p(l_1)\cdots\phi_p(l_s)}(\boldsymbol{K}_s)$$ is irreducible.
We set $N=\phi_p(l_1)\cdots\phi_p(l_s)$. Let us suppose by contradiction that the statement is false. Then, there exists a $\boldsymbol{K}_s$-sub-vector space $V\neq\{0\}\subset\boldsymbol{K}_s^N$ such that for all $\gamma\in\Gamma$, $\rho^{{I\!I}}_{\underline{t},\underline{l}}(\gamma)(V)\subset V$. Let us fix a basis $v=(v_1,\ldots,v_r)$ of $V$. For integers $0\leq k_1\leq\cdots\leq k_s$, we denote by $\operatorname{ev}$ the map ${\mathbb F}_q[\underline{t}_s]\rightarrow {\mathbb F}_q[t]$ which sends $a(t_1,\ldots,t_s)\in {\mathbb F}_q[\underline{t}_s]$ to $a(t^{k_1},\ldots,t^{k_s})\in {\mathbb F}_q[t]$. This map is a ring homomorphism whose kernel is the prime ideal $\mathcal{P}$ generated by the polynomials $t_j-t_{j-1}^{q^{k_j-k_{j-1}}}$, $j=2,\ldots,s$. We consider the associated multiplicative set $S={\mathbb F}_q[\underline{t}_s]\setminus\mathcal{P}$. Then, the evaluation map $\operatorname{ev}$ extends to $S^{-1}{\mathbb F}_q[\underline{t}_s]$ which is Zariski dense in $\boldsymbol{K}_s={\mathbb F}_q(\underline{t}_s)$. We now extend $\operatorname{ev}$ coefficient-wise on every matrix, vector, etc. with entries in $S^{-1}{\mathbb F}_q[\underline{t}_s]$. If $k_1$ is big enough, $\operatorname{ev}(v)$ is well defined and non-zero.
We can in fact choose $k_1,\ldots,k_s$ so that we also have, at once, $\operatorname{ev}(\rho^{{I\!I}}_{\underline{t},\underline{l}})=\rho_{t,l}^{I}$ for some $l\geq 0$. Indeed, if we write the $p$-expansions $l_i=l_{i,0}+l_{i,1}p+\cdots+l_{i,r}p^r$ ($i=1,\ldots,s$) for some $r\geq 0$, then we can choose $k_1,\ldots,k_s$ so that there is no carry over in the $p$-expansion of the sum $l=l_1q^{k_1}+l_2q^{k_2}+\cdots+l_sq^{k_s}$; for such a choice of $k_1,\ldots,k_s$, $\operatorname{ev}(\rho^{{I\!I}}_{\underline{t},\underline{l}})$ is thus irreducible.
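To illustrate the no-carry condition, here is a numerical example with parameters of our own choosing ($p=q=2$), not taken from the text:

```latex
% Take p = q = 2, l_1 = 3 = (11)_2 and l_2 = 5 = (101)_2.
% Choosing k_1 = 0 and k_2 = 3 places the digit blocks in disjoint positions:
l \;=\; l_1 q^{k_1} + l_2 q^{k_2} \;=\; 3 + 5\cdot 2^{3} \;=\; 43 \;=\; (101011)_2,
% so no carry over occurs in the 2-expansion of the sum.
```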
We now set $W$ to be the $\boldsymbol{K}$-span of $\operatorname{ev}(v)$, well defined and non-trivial in $\boldsymbol{K}^N$ (we recall that $\boldsymbol{K}={\mathbb F}_q(t)$). Let $w$ be in $W$. We can write $w=a_1\operatorname{ev}(v_1)+\cdots+a_r\operatorname{ev}(v_r)$ for elements $a_i\in\boldsymbol{K}$. Then, $$\rho_{t,l}^{I}(\gamma)(w)=a_1\rho_{t,l}^{I}(\gamma)(\operatorname{ev}(v_1))+\cdots+a_r\rho_{t,l}^{I}(\gamma)(\operatorname{ev}(v_r))=a_1\operatorname{ev}(\rho_{\underline{t},\underline{l}}^{{I\!I}}(\gamma)(v_1))+\cdots+a_r\operatorname{ev}(\rho_{\underline{t},\underline{l}}^{{I\!I}}(\gamma)(v_r))$$ is a vector of $W$, hence contradicting the irreducibility of $\rho_{t,l}^{I}$.
Applications to Poincaré series {#poincare}
===============================
[*We say that a representation $\Gamma\xrightarrow{\rho}{\operatorname{GL}}_N(\boldsymbol{K})$ is [*normal to the depth*]{} $L\in\{1,\ldots,N\}$ if for all $\gamma\in H=\{{\left(\begin{smallmatrix} * & * \\ 0 & 1 \end{smallmatrix}\right)}\}\subset\Gamma$, we have that $\rho(\gamma)={\left(\begin{smallmatrix} * & * \\ 0 & I_L \end{smallmatrix}\right)}$, where $I_L$ denotes the identity matrix of size $L$.*]{}
A representation as above which is normal to the depth $N$ has finite image. To see this, note that $\rho((\begin{smallmatrix} * & * \\ 0 & * \end{smallmatrix}))=\rho((\begin{smallmatrix} * & 0 \\ 0 & * \end{smallmatrix}))$ is finite. Hence, $$\begin{aligned}
\lefteqn{\rho\begin{pmatrix} * & 0 \\ * & * \end{pmatrix}=
\rho\left(\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right)=}\\
&=&\rho\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\rho\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\rho\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\\
&=&\rho\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\rho\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\rho\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\\
&=&\rho\left(\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\right)=\rho\begin{pmatrix} * & 0 \\ 0 & * \end{pmatrix}.\end{aligned}$$ We thus have $\rho(\begin{smallmatrix} * & 0 \\ * & * \end{smallmatrix})=\rho(\begin{smallmatrix} * & * \\ 0 & * \end{smallmatrix})$ finite, and $\rho(\Gamma)=\rho((\begin{smallmatrix} * & * \\ 0 & * \end{smallmatrix})(\begin{smallmatrix} * & 0 \\ * & * \end{smallmatrix}))=\rho(\begin{smallmatrix} * & 0 \\ 0 & * \end{smallmatrix})$ is finite.
The representation $\Gamma\xrightarrow{\rho_\sigma}{\operatorname{GL}}_N(\boldsymbol{K})$ with $N=2d$ associated to an algebra representation $A\xrightarrow{\sigma}\operatorname{Mat}_{d\times d}(\boldsymbol{K})$ is normal to the depth $L=d$.
If, for some ring $R$, a matrix $M\in\operatorname{Mat}_{N\times N}(R)$ has the block form $M={\left(\begin{smallmatrix} * & * \\ X & Y \end{smallmatrix}\right)}$ with $Y\in\operatorname{Mat}_{L\times L}(R)$, we set $M_L=(X,Y)\in\operatorname{Mat}_{L\times N}(R)$. In other words, $M_L$ is the matrix constituted by the last $L$ rows of $M$.
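As a toy illustration of this notation, with our own numbers ($N=3$, $L=1$):

```latex
% Our own toy example: N = 3, L = 1, so Y is the bottom-right 1x1 block.
M=\begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\end{pmatrix}
\quad\Longrightarrow\quad
M_L=(X,Y)=\begin{pmatrix} 7 & 8 & 9 \end{pmatrix},
% i.e. M_L consists of the last L = 1 rows of M.
```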
We denote by $\Omega={\mathbb C}_\infty\setminus K_\infty$ the Drinfeld “upper-half plane” of ${\mathbb C}_\infty$. We choose a non-negative integer $m$ and an integer $w\in{\mathbb Z}_{>0}$, and, for $\delta\in\Gamma$ and $z\in\Omega$, we set $\mu_{w,m}(\delta,z)=\det(\delta)^{-m}J_\delta(z)^w$, where $J_\gamma(z)$ is the usual “Drinfeldian” factor of automorphy defined, for $\gamma={\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)}\in\Gamma$, by $J_\gamma(z)=cz+d$. We also denote by $u(z)$ the uniformizer at infinity of $\Omega$, that is, the function $u(z)=\frac{1}{e_C(z)}$, with $e_C$ the exponential function ${\mathbb C}_\infty\rightarrow{\mathbb C}_\infty$ with lattice period $A\subset{\mathbb C}_\infty$.
We consider a representation $\Gamma\xrightarrow{\rho}{\operatorname{GL}}_N(\boldsymbol{K})$, normal to the depth $L$. Following [@Pe §2.4], we set, for $\delta\in\Gamma$, $$f_\delta=\mu_{w,m}(\delta,z)^{-1}u^m(\delta(z))\rho(\delta)_L:\Omega\rightarrow\operatorname{Mat}_{L\times N}(\mathbb{K}),$$ where we recall that $\mathbb{K}=\mathbb{K}_1$ is the completion of ${\mathbb C}_\infty(t)$ for the Gauss norm. It is easy to show that the series $$\mathcal{E}_{w,m,\rho}(z)=\sum_{\delta\in H\backslash \Gamma}f_\delta,$$ the sum being over the representatives of the cosets of $\Gamma$ modulo the left action of $H=\{{\left(\begin{smallmatrix} * & * \\ 0 & 1 \end{smallmatrix}\right)}\}$, converges to a holomorphic function $$\mathcal{E}_{w,m,\rho}:\Omega\rightarrow\operatorname{Mat}_{L\times N}(\mathbb{K}),$$ in the sense of the remark below.
[ *For convenience of the reader, we recall here the definition of a holomorphic function $\Omega\rightarrow\mathbb{K}$. For $z\in\Omega$, we set $|z|_\Im:=\inf_{\lambda\in K_\infty}|z-\lambda|$, which is non-zero. We also define, on $\Omega$, a Stein-like structure by considering the affinoids $U_n=\{z\in\Omega;|z|\leq q^n\text{ and }|z|_\Im\geq q^{-n}\}$, so that $\Omega=\cup_{n\in{\mathbb N}}U_n$. For $n$ fixed, a function $f:U_n\rightarrow\mathbb{K}$ is [*holomorphic*]{} if it is the uniform limit of a converging sequence of rational functions $U_n\rightarrow\mathbb{K}$, without poles in $U_n$. A function $f:\Omega\rightarrow\mathbb{K}$ is [*holomorphic*]{} if, for all $n\geq0$, the restriction of $f$ to $U_n$ is holomorphic.*]{}
By following and adapting the proof of [@Pe Proposition 22], we obtain:
The following properties hold, for $w\in{\mathbb Z}_{>0}$, $m\in{\mathbb Z}_{\geq0}$ and $\rho$ a representation $\Gamma\rightarrow{\operatorname{GL}}_N(\boldsymbol{K})$:
1. For all $\gamma\in \Gamma$, we have $$\mathcal{E}_{w,m,\rho}(\gamma(z))=\det(\gamma)^{-m}J_\gamma(z)^w\mathcal{E}_{w,m,\rho}(z)\cdot\rho(\gamma)^{-1},$$
2. There exists $h\in{\mathbb Z}$ such that $$u(z)^h\mathcal{E}_{w,m,\rho}(z)\rightarrow 0\in\operatorname{Mat}_{L\times N}(\mathbb{K})$$ as $u(z)\rightarrow0$.
The last condition means that $\mathcal{E}_{w,m,\rho}(z)$ is tempered in the sense of [@Pe]. The proposition means that the $L$ columns of the transpose of $\mathcal{E}_{w,m,\rho}$ are vectorial modular forms of weight $w$ and type $m$ with respect to the contragredient representation of $\rho$. In the next two subsections, we analyze Poincaré series associated with two particular classes of representations.
Vectorial Poincaré series associated to representations $\rho_\sigma$
---------------------------------------------------------------------
Let us consider an irreducible, faithful algebra representation $A\xrightarrow{\sigma}\operatorname{Mat}_{d\times d}(\boldsymbol{K})$, and the associated representation $\Gamma\xrightarrow{\rho_\sigma}{\operatorname{GL}}_N(\boldsymbol{K})$ with $N=2d$, which is normal to the depth $L=d$. We additionally suppose that the characteristic polynomial of $\vartheta=\sigma(\theta)$, which is irreducible, is also separable.
\[proposition1\] The following properties hold, with $\rho=\rho_\sigma$.
1. If $w-1\not\equiv 2m\pmod{q-1}$, then $\mathcal{E}_{w,m,\rho}=0$, identically.
2. If $w-1\equiv 2m\pmod{q-1}$ and $w\geq (q+1)m+1$, then $\mathcal{E}_{w,m,\rho}\not=0$ and the rank of this matrix function is $d$.
3. In the second case, each row of the matrix function $\mathcal{E}_{w,m,\rho}$ has entries which are $\mathbb{K}$-linearly independent.
The hypotheses on $\sigma$ imply that the matrix $\vartheta=\sigma(\theta)$ has distinct conjugate eigenvalues $\lambda_1,\ldots,\lambda_d\in\boldsymbol{K}^{ac}$, none of which lies in ${\mathbb F}_q^{ac}$. We consider a corresponding basis $v_1,\ldots,v_d\in(\boldsymbol{K}^{ac})^d$ of eigenvectors (considered as column matrices) which are then common eigenvectors for all the elements of the image of $\sigma$. In $(\boldsymbol{K}^{ac})^{2d}=(\boldsymbol{K}^{ac})^N$ we consider the basis $w_1,\ldots,w_{2d}$ defined by $w_i=v_i\oplus0$ and $w_{d+i}=0\oplus v_i$ for $i=1,\ldots,d$. We also denote by $M\in{\operatorname{GL}}_N(\boldsymbol{K}^{ac})$ the matrix whose columns are the $w_i$’s for $i=1,\ldots,2d$. Then, for $\delta={\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)}\in\Gamma$, we have $$\rho(\delta)_LM=\delta(t)*(v_1,\ldots,v_d):=(c(\lambda_1)v_1,\ldots,c(\lambda_d)v_d,d(\lambda_1)v_1,\ldots,d(\lambda_d)v_d)\in\operatorname{Mat}_{d\times 2d}(\boldsymbol{K}^{ac}).$$ Hence we have, with the same meaning of the product $*$, extended linearly, that $$\mathcal{E}_{w,m,\rho}\cdot M=\mathcal{E}_{w,m,\chi_t}*(v_1,\ldots,v_d),$$ where $\mathcal{E}_{w,m,\chi_t}:\Omega\rightarrow\operatorname{Mat}_{1\times 2}(\mathbb{K})$ is the function defined by $$\mathcal{E}_{w,m,\chi_t}(z)=\sum_{\delta={\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)}\in H\backslash \Gamma}\mu_{w,m}(\delta,z)^{-1}u^m(\delta(z))(c(t),d(t)).$$ This matrix function is the deformation of vectorial Poincaré series $\mathcal{E}_{w,m}(z,t)$ considered in [@Pe Proposition 22] and we know the following:
- If $w-1\not\equiv 2m\pmod{q-1}$, then $\mathcal{E}_{w,m,\chi_t}$ is identically zero.
- If $w-1\equiv 2m\pmod{q-1}$ and $w\geq (q+1)m+1$, then all the entries of $\mathcal{E}_{w,m,\chi_t}$ are non-zero.
We now observe that if $C$ is a complete field containing $A$ and if $f(t)=\sum_{i\geq 0}f_it^i\in C[[t]]$ is a non-zero formal power series, with $f_i\rightarrow0$ for $i\rightarrow\infty$ (an element of the Tate algebra of formal series with coefficients in $C$ in the variable $t$) then, for $\lambda\in\boldsymbol{K}^{ac}\setminus{\mathbb F}_q^{ac}$, we have that $f(\lambda)=\sum_{i\geq 0}f_i\lambda^i$ converges in the complete field $\mathbb{K}^\sharp:=\widehat{\operatorname{Frac}(C\otimes_{{\mathbb F}_q}\boldsymbol{K}^{ac})}$ (with $\boldsymbol{K}^{ac}$ carrying the trivial norm) to a non-zero element.
Since $M$ is invertible, $\mathcal{E}_{w,m,\rho}$ is identically zero if and only if $\mathcal{E}_{w,m,\chi_t}$ is identically zero (we have supposed that $\vartheta$ has no eigenvalues in ${\mathbb F}_q^{ac}$) and, in the case of non-vanishing, the rank is maximal, equal to $d$. Properties 1) and 2) of our proposition hence follow from Proposition 22 of [@Pe].
It remains to show part 3); this follows from Lemma \[lemma2\]. Indeed, assuming that $w-1\equiv 2m\pmod{q-1}$ and $w\geq (q+1)m+1$, let $i$ be an index between $1$ and $d$; we know that the $i$-th row of $\mathcal{E}:=\mathcal{E}_{w,m,\rho}$ is non-zero. Let us assume, by contradiction, that the entries of $\mathcal{E}$ are linearly dependent; then, the vector space $V$ whose elements are the vectors $v\in (\mathbb{K}^\sharp)^N$ such that $\mathcal{E}(z)\cdot v=0$ for all $z\in\Omega$, is non-trivial.
Let $v$ be in $V$. For all $\gamma\in\Gamma$, we have $$0=\mathcal{E}(\gamma(z))\cdot v=
\det(\gamma)^{-m}J_\gamma(z)^w\mathcal{E}(z)\cdot\rho(\gamma)^{-1}\cdot v.$$ This means that $\rho(\gamma)^{-1}\cdot v\in V$ so that $\rho(\gamma)(V)\subset V$ for all $\gamma\in \Gamma$. Since $\rho$ is irreducible, we thus have that $V=
(\mathbb{K}^\sharp)^N$ but $\mathcal{E}$ is non-zero, whence a contradiction.
Vectorial Poincaré series associated to the representations $\rho^{{I\!I}}_{\underline{t},\underline{l}}$
---------------------------------------------------------------------------------------------------------
We now consider the settings of §\[tensorprod\] and we return to independent variables $\underline{t}_s$ and to the field $\boldsymbol{K}_s={\mathbb F}_q(\underline{t}_s)$. We also consider $\underline{l}=(l_1,\ldots,l_s)$ an $s$-tuple with entries in ${\mathbb Z}$ which are $\geq 1$ and we note that the representation $\rho=\rho^{{I\!I}}_{\underline{t},\underline{l}}$ of Theorem \[corollaryt1ts\] is normal to the depth $L=1$.
We have the following properties, for $w>0$ and $m\geq0$, with $\rho$ the representation considered above.
1. The function $$\mathcal{E}_{w,m,\rho}:\Omega\rightarrow\operatorname{Mat}_{1\times N}(\mathbb{K}),$$ with $N=\phi_p(l_1)\cdots\phi_p(l_s)$ is well defined, holomorphic, tempered, and satisfies $$\mathcal{E}_{w,m,\rho}(\gamma(z))=\det(\gamma)^{-m}J_\gamma(z)^w\mathcal{E}_{w,m,\rho}(z)\cdot\rho(\gamma)^{-1},\quad \gamma\in\Gamma.$$
2. If $w':=w-l_1-\cdots-l_s\not\equiv 2m+1\pmod{q-1}$, then $\mathcal{E}_{w,m,\rho}\equiv0$.
3. With $w'$ defined as above, if $w'\equiv 2m+1\pmod{q-1}$ and $w'\geq(q+1)m+1$, then $\mathcal{E}_{w,m,\rho}\neq0$.
4. If $m=0$ and $w'\equiv1\pmod{q-1}$, then $\mathcal{E}_{w,m,\rho}\neq0.$
5. In all cases in which $\mathcal{E}_{w,m,\rho}\neq0$, its entries are linearly independent over $\mathbb{K}$.
The first two properties and the last one are simple variants of the corresponding parts of Proposition \[proposition1\]. For the third property, we consider the matrix function $F_{\underline{l}}:\Omega\rightarrow\operatorname{Mat}_{N\times 1}({\mathbb C}_\infty)$ with $N=\phi_p(l_1)\cdots\phi_p(l_s)$, defined by $$F_{\underline{l}}(z)=\operatorname{Sym}^{l_{1,0}}(F)\otimes\cdots\otimes\operatorname{Sym}^{l_{1,r_1}}(F^{(r_1)})\otimes\cdots\otimes\operatorname{Sym}^{l_{s,0}}(F)\otimes\cdots\otimes\operatorname{Sym}^{l_{s,r_s}}(F^{(r_s)}),$$ with $F(z)=\binom{z}{1}$ and where we have used the expansions in base $p$ of $l_1,\ldots,l_s$: $l_i=l_{i,0}+l_{i,1}p+\cdots+l_{i,r_i}p^{r_i}$ with $l_{i,r_i}\neq 0$ for $i=1,\ldots,s$. Then, as in [@Pe], we note that $$(\mathcal{E}_{w,m,\rho}\cdot F_{\underline{l}})_{t_i=\theta}=P_{w',m},$$ the Poincaré series of weight $w'$ and type $m$, so that we can conclude with [@Ge Proposition 10.5.2]. Property 3) is not enough to show property 4), but we can proceed more directly by noticing that in this case, $$\mathcal{E}_{w,0,\rho}=\sum_{\delta\in H\backslash\Gamma}J_\delta^{-w}\rho(\delta)_1.$$ Hence, as $u(z)\rightarrow0$, we have $\mathcal{E}_{w,0,\rho}\rightarrow(0,\ldots,0,1)$.
The transposes of the matrix functions $\mathcal{E}_{w,m,\rho}$ are thus vectorial modular forms of weight $w$ and type $m$ associated to the representations ${}^t\rho$ in the sense of [@Pe].
### Eisenstein series {#eisensteinseries}
We consider ${\mathbb F}_q$-algebra representations $\sigma_1,\ldots,\sigma_s:A\rightarrow\operatorname{Mat}_{d\times d}(\boldsymbol{K})$. Let $\sigma$ be the semi-character $\sigma_1\cdots\sigma_s$. We set: $$\mathcal{G}_{w,\sigma}(z)=\sideset{}{'}\sum_{(a,b)\in A^2}(az+b)^{-w}(\sigma_1(a),\sigma_1(b))\otimes\cdots\otimes(\sigma_s(a),\sigma_s(b))$$ (the dash $'$ denotes a sum which avoids the couple $(0,0)$). This defines a holomorphic function $$\mathcal{G}_{w,\sigma}:\Omega\rightarrow\operatorname{Mat}_{d^s\times 2d^s}(\mathbb{K}).$$ Let, on the other side, $\rho$ be the representation $\rho:\Gamma\rightarrow\operatorname{Mat}_{2d^s\times 2d^s}(\boldsymbol{K})$ defined by $\rho=\rho_{\sigma_1}\otimes\cdots\otimes\rho_{\sigma_s}$. The following lemma is easy to verify.
We have the identity $\mathcal{G}_{w,\sigma}=L_\sigma(w)\mathcal{E}_{w,0,\rho}.$
The matrix $L_\sigma(w)$ is the $L$-value associated to the semi-character $\sigma$ as defined in §\[Lvalues\]. This and Proposition \[Taelmanunits\] suggest that the Eisenstein series $\mathcal{G}_{w,\sigma}$ could also be related to Taelman units. Of course, this is quite speculative, because at the moment, we do not have at our disposal any kind of metric over the spaces of vectorial modular forms that we consider, allowing us to define an appropriate notion of unit group of Taelman in this setting. However, this seems to suggest the following conjecture.
The $\mathbb{K}$-module of the vectorial modular forms of weight one and type $0$ associated to a representation $\rho_{\sigma_1}\otimes\cdots\otimes\rho_{\sigma_s}$, with $\sigma_1,\ldots,\sigma_s$ algebra representations $A\rightarrow\operatorname{Mat}_{d\times d}(\boldsymbol{K})$, is of rank one, generated by the Eisenstein series $\mathcal{G}_{1,\sigma}$, where $\sigma=\sigma_1\cdots\sigma_s$.
Other representations of $\Gamma$ {#pathologies}
=================================
Let $\boldsymbol{K}$ be a field containing a fixed base field $k$ of positive characteristic (e.g. $k={\mathbb F}_q$) and let us consider a group representation $\rho:G\rightarrow{\operatorname{GL}}_N(\boldsymbol{K})$. The [*essential dimension*]{} (over $k$) of $\rho$ is the transcendence degree over $k$ of the field generated by the entries of the image of $\rho$. If $G$ is finite, then the essential dimension of $\rho$ is zero. In this paper, we have studied several examples in the case $k={\mathbb F}_q$ and $G=\Gamma$. For instance, the essential dimension of the tautological representation $\Gamma\rightarrow{\operatorname{GL}}_2(A)$ is one, and the essential dimension of a representation $\rho^{{I\!I}}_{\underline{t},\underline{l}}$ as in Theorem \[corollaryt1ts\] is $s$, the number of variables in $\underline{t}_s$.
As a conclusion of the present note, we would like to point out that there are irreducible, finite-dimensional representations of ${\operatorname{GL}}_2(k[t])$ with infinite essential dimension. Indeed, for any field $k$, a theorem of Nagao (see [@Na Theorem 2] and Serre, [@Se II.1.6]) asserts that $$\label{amalgamated}
{\operatorname{GL}}_2(k[t])\cong{\operatorname{GL}}_2(k)*_{B(k)}B(k[t]),$$ where, for a commutative ring $R$, $B(R)$ denotes the group of upper triangular matrices with entries in $R$ with invertible determinant and where $*_{B(k)}$ stands for the amalgamated product along $B(k)$. Therefore, we have the following:
Any automorphism $\phi$ of ${\operatorname{GL}}_2(k)$ extends to a group isomorphism between ${\operatorname{GL}}_2(k[t])$ and the subgroup $\Phi^\infty$ of ${\operatorname{GL}}_2(k[x_1,x_2,\ldots])$ generated by ${\operatorname{GL}}_2(k)$ and the matrices ${\left(\begin{smallmatrix} \lambda & x_i \\ 0 & \mu \end{smallmatrix}\right)}$, where $x_1,x_2,\ldots$ are independent indeterminates over $k$ and $\lambda,\mu\in k^\times$.
By (\[amalgamated\]), we see that the association ${\left(\begin{smallmatrix} \lambda & t^i \\ 0 & \mu \end{smallmatrix}\right)}\mapsto{\left(\begin{smallmatrix} \lambda & x_i \\ 0 & \mu \end{smallmatrix}\right)}$ extends to give the above group isomorphism.
The above proposition exhibits representations ${\operatorname{GL}}_2(k[t])\rightarrow{\operatorname{GL}}_2(\boldsymbol{K}_\infty)$ where $\boldsymbol{K}_\infty=k(x_1,x_2,\ldots)$, which have infinite essential dimension over ${\mathbb F}_q$.
[*The group $\operatorname{SL}_2(\mathbb{Z})$ has uncountably many isomorphism classes of irreducible complex representations and their explicit classification is not yet understood. A similar question arises with the group ${\operatorname{GL}}_2(A)$ and its representations in a complete algebraically closed field of characteristic $p$. The complete classification for $\operatorname{SL}_2(\mathbb{Z})$ is however accessible if we impose an upper bound on the dimension. In [@Tu], Tuba and Wenzl obtained a complete classification of irreducible representations of the braid group $B_3$ of dimension $\leq 5$, yielding a similar result for irreducible complex representations of $\operatorname{SL}_2(\mathbb{Z})$; it turns out that these families depend algebraically on finitely many parameters (eigenvalues, characters, etc.). It would be nice to have a similar result for ${\operatorname{GL}}_2(A)$.* ]{}
### Acknowledgement {#acknowledgement .unnumbered}
The author thanks Gebhard Böckle, Mihran Papikian and the Referee, for useful hints and remarks that have contributed to improve and correct the paper.
[9]{}
B. Anglès, F. Pellarin & F. Tavares Ribeiro. [*Arithmetic of positive characteristic $L$-series values in Tate algebras.*]{} Compositio Math., [**152**]{}, (2016), pp. 1-61.
R. Brauer & C. Nesbitt. [*On the modular characters of groups.*]{} Ann. of Math., [**42**]{}, (1941), pp. 556-590.
D. Goss. [*$\zeta$-phenomenology.*]{} In Noncommutative geometry, arithmetic, and related topics, Johns Hopkins Univ. Press, Baltimore, (2011), pp. 159-182.
E. Gekeler. [*On the coefficients of Drinfeld modular forms.*]{} Invent. Math. [**93**]{}, (1988), pp. 667-700. http://dx.doi.org/10.1007/BF01410204.
G. Malle & D. Testerman. [*Linear algebraic groups and finite groups of Lie type.*]{} Cambridge studies in advanced mathematics, 133. (2011).
H. Nagao. [*On $\operatorname{GL}(2,K[x])$*]{}. J. Inst. Polytech. Osaka City Univ. Ser. A [**10**]{} (1959), pp. 117-121.
F. Pellarin. [*Values of certain $L$-series in positive characteristic*]{}. [*Ann. of Math.*]{} [**176**]{}, (2012), pp. 2055-2093.
F. Pellarin & R. Perkins. [*On certain generating functions in positive characteristic.*]{} Monat. Math., [**180**]{}, (2016), pp. 123-144.

J.-P. Serre. [*Arbres, amalgames, $\operatorname{SL}_2$.*]{} Rédigé avec la collaboration de Hyman Bass. Astérisque, No. 46. Société Mathématique de France, Paris, (1977). 189 pp.
L. Taelman. [*Special $L$-values of Drinfeld modules.*]{} Ann. of Math., [**175**]{}, (2012), pp. 369-391.
I. Tuba & H. Wenzl. [*Representations of the braid group $B_3$ and of $\operatorname{SL}(2,{\mathbb Z})$.*]{} Pacific J. Math., [**197**]{}, (2001).
---
abstract: 'Arbitrary style transfer is an important problem in computer vision that aims to transfer style patterns from an arbitrary style image to a given content image. However, current methods rely either on slow iterative optimization or on fast but pre-determined feature transformations, at the cost of compromised visual quality of the stylized image; in particular, distorted content structure. In this work, we present an effective and efficient approach for arbitrary style transfer that seamlessly transfers style patterns while keeping the content structure intact in the stylized image. We achieve this by aligning style features to content features using rigid alignment, thus modifying the style features, unlike existing methods that do the opposite. We demonstrate the effectiveness of the proposed approach by generating high-quality stylized images and compare the results with current state-of-the-art techniques for arbitrary style transfer.'
author:
- |
Suryabhan Singh Hada $\hspace{2ex}$ Miguel Á. Carreira-Perpiñán\
Electrical Engineering and Computer Science, University of California, Merced\
[<http://eecs.ucmerced.edu>]{}
date: 'September 26, 2019'
title: Style Transfer by Rigid Alignment in Neural Net Feature Space
---
Introduction
============
Given a style image and a target image, style transfer is the process of transferring the texture of the style image to the target image while keeping the structure of the target image intact. Recent work from @Gatys_16a (Neural style transfer (NST)) shows the power of Convolutional Neural Networks (CNNs) in style transfer. The use of multi-level features extracted from different layers of a pre-trained CNN has significantly improved stylization quality.
In just a few years, significant effort has been made to improve NST, either by iterative optimization-based approaches [@LiWand16a; @Li_17a; @Risser_17a] or by feed-forward network approximation [@Johnson_16a; @Ulyanov_16b; @Ulyanov_16a; @LiWand16b; @Dumoul_17a; @Chen_17a; @Li_17f; @Shen_18a; @ZhangDana17a; @Wang_17c]. Optimization-based methods [@Gatys_16a; @LiWand16a; @Li_17a; @Risser_17a] achieve visually great results, but at the cost of efficiency, as every style transfer requires multiple optimization steps. On the other hand, feed-forward network based style transfer methods [@Johnson_16a; @Ulyanov_16b; @Ulyanov_16a; @LiWand16b; @Dumoul_17a; @Chen_17a; @Li_17f; @Shen_18a; @ZhangDana17a; @Wang_17c] provide efficiency and quality, but at the cost of generalization. These networks are limited to a fixed number of styles.
Arbitrary style transfer can achieve generalization, quality, and efficiency at the same time. The goal is to find a transformation that can take style and content features as input, and produce a styled feature that does not compromise reconstructed stylized image quality.
However, current work in this regard [@HuangBelong17a; @Li_17d; @ChenSchmidt16a; @Sheng_18a] falls short in the quality of the generated results. @HuangBelong17a and @ChenSchmidt16a use external style signals to supervise the content modification in a feed-forward network. The network is trained using the perceptual loss [@Johnson_16a], which is known to be unstable and to produce unsatisfactory style transfer results [@Gupta_17b; @Risser_17a].
In contrast, @Li_17d, @ChenSchmidt16a and @Sheng_18a manipulate the content features under the guidance of the style features in a shared high-level feature space. By decoding the manipulated features back into image space with a style-agnostic image decoder, the reconstructed images are stylized with seamless integration of the style patterns. However, these techniques over-distort the content or fail to balance the low-level and global style patterns.
In this work, we address the aforementioned issues by modifying style features instead of content features during style transfer. We achieve this by first matching the channel-wise statistics of the content features to those of the style features, and then aligning the style features to the content features by rigid alignment. The channel-wise statistics matching transfers local texture, and the rigid transformation adjusts the global style patterns with respect to the content features. By doing so, we solve the problem of content over-distortion, since the alignment does not manipulate the content features. Similar to @Li_17d and @Sheng_18a, our method does not require any training and can be applied to any style image in real time. We also provide comprehensive evaluations comparing with prior arbitrary style transfer methods [@Gatys_16a; @HuangBelong17a; @Li_17d; @Sheng_18a], showing that our method achieves state-of-the-art performance.
Our contributions in this paper are threefold: 1) We achieve style transfer by rigid alignment, which differs from traditional style transfer methods that depend on feature statistics matching. Rigid alignment has been studied in computer vision for many years and has been very successful in image registration and many problems of that type. We show that by rearranging the content and style features in a specific manner (each of the $C$ channels as a point in $\mathbb{R}^{HW}$ space, where $H$ is the height and $W$ the width of the feature map), they can be considered as a point cloud of $C$ points. 2) We provide a closed-form solution to the style transfer problem. 3) The proposed approach achieves impressive style transfer results in real time without introducing content distortion.
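To make the point-cloud view concrete, here is a minimal NumPy sketch (our own illustration, not the authors' released code) of the rigid alignment step: treating each of the $C$ channels as one point, it computes the least-squares rotation and translation mapping the style point cloud onto the content point cloud via the classical orthogonal Procrustes/Kabsch solution. All names and shapes here are our own assumptions.

```python
import numpy as np

def rigid_align(X, Y):
    """Least-squares rigid transform (R, t) mapping point cloud Y onto X,
    i.e. minimizing ||X - (Y @ R.T + t)||_F with R a proper rotation.
    X, Y: (C, D) arrays holding C points of R^D (one channel per row)."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    A = (X - mx).T @ (Y - my)              # D x D cross-covariance matrix
    U, _, Vt = np.linalg.svd(A)
    # flip the last singular direction if U @ Vt would be a reflection
    d = np.sign(np.linalg.det(U @ Vt))
    S = np.diag([1.0] * (A.shape[0] - 1) + [d])
    R = U @ S @ Vt
    t = mx - R @ my
    return R, t

# Hypothetical usage: feature maps reshaped to (C, H*W), one point per channel.
C, H, W = 8, 4, 4
rng = np.random.default_rng(0)
Fc = rng.standard_normal((C, H * W))       # "content" features (stand-in data)
Fs = rng.standard_normal((C, H * W))       # "style" features (stand-in data)
R, t = rigid_align(Fc, Fs)
Fs_aligned = Fs @ R.T + t                  # style cloud moved toward content
```

Because the solution is an SVD of a cross-covariance matrix, it is closed-form and fast, which is consistent with the real-time claim above.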
[Figure: close-up qualitative comparison of stylized results (columns: content, style, Avatar, WCT, AdaIN, Ours).]
Related work
============
Due to the wide variety of applications, the problem of style transfer has been studied for a long time in computer vision. Before the seminal work by @Gatys_16a, style transfer was mainly studied as *non-photorealistic rendering (NPR)* [@Kyprian_12a], and is closely related to texture synthesis [@EfrosFreeman01a; @EfrosLeung99a]. Early approaches rely on finding low-level image correspondences and do not capture high-level semantic information well. As mentioned above, the use of CNN features in style transfer has improved the results significantly. We can divide the current neural style transfer literature into four parts.
- **Slow optimization-based methods:** @Gatys_16a introduced the first NST method for style transfer. The authors created artistic style transfer by matching multi-level feature statistics of content and style images extracted from a pre-trained image classification CNN (VGG [@SimonyZisser15a]) using Gram matrix. Soon after this, other variations were introduced to achieve better style transfer [@LiWand16a; @Li_17a; @Risser_17a], user controls like spatial control and color preserving [@Gatys_16b; @Risser_17a] or include semantic information [@Frigo_16a; @Champand16a]. However, these methods require an iterative optimization over the image which makes it impossible to apply in real time.
- **Single style feed-forward networks:** Recently, @Johnson_16a, @Ulyanov_16b, @Ulyanov_16a and @LiWand16b addressed the real-time issue by approximating the iterative optimization procedure with feed-forward neural networks, trained either with a perceptual loss [@Johnson_16a; @Ulyanov_16b] or a Markovian generative adversarial loss [@LiWand16b]. Although these approaches achieve style transfer in real time, they require training a new model for every style. This makes them impractical for multiple styles, as every single style requires hours of training.
- **Single network for multiple styles:** @Dumoul_17a, @Chen_17a, @Li_17f and @Shen_18a tackle the problem of multiple styles by training a small number of parameters for every new style while keeping the rest of the network the same. Conditional instance normalization [@Dumoul_17a] achieves this by training channel-wise statistics for each style, Stylebank [@Chen_17a] learns convolution filters for each style, @Li_17f transfers styles via binary selection units, and @Shen_18a trains a meta-network that generates a $14$-layer network for each content and style image pair. On the other hand, @ZhangDana17a trains a weight matrix to combine style and content features. The major drawback is that the model size grows proportionally with the number of style images. Additionally, there is interference among different styles [@Jing_17a], which affects stylization quality.
- **Single network for arbitrary styles:** Some recent works [@HuangBelong17a; @Li_17d; @ChenSchmidt16a; @Sheng_18a] have focused on creating a single model for arbitrary styles, i.e., one model for any style. @ChenSchmidt16a swaps content feature patches with the closest style feature patches, but fails if the domain gap between content and style is large. @Sheng_18a addresses this problem by first normalizing the features and then applying the patch swapping. Although this improves the stylization quality, it still produces content distortion and misses global style patterns, as shown in figure: \[f:closeup\]. WCT [@Li_17d] transfers multi-level style patterns by recursively applying a whitening and coloring transformation (WCT) to a set of trained auto-encoders at different levels. However, similar to @Sheng_18a, WCT also produces content distortion; moreover, it introduces some unwanted patterns in the styled image [@Jing_17a]. Adaptive instance normalization (AdaIN) [@HuangBelong17a] matches the channel-wise statistics (mean and variance) of the content features to the style features, but this matching occurs only at one layer, which the authors try to compensate for by training a network with a perceptual loss [@Johnson_16a]. Although this does not introduce content distortion, it fails to capture style patterns.
[Figure: multi-level stylization, columns: relu\_1, relu\_2, relu\_3, relu\_4, and relu 1 to 4.]

[Figure: content, style, and styled results with alignment at relu 1 to 4 versus alignment at relu\_4 only.]

[Figure: per-frame video stylization of consecutive frames for a given style image.]
The common trait of existing arbitrary style transfer methods is that they all modify the content features during the style transfer process, which eventually creates content distortion. Different from existing methods, our approach manipulates the style features instead. We achieve this in two steps. First, we apply channel-wise moment matching (mean and variance) between content and style features, just as AdaIN [@HuangBelong17a]. Second, we use rigid alignment (Procrustes analysis [see @BorgGroenen97a chap. 21]) to align the style features to the content features. This alignment modifies the style features to adopt the content structure, thus avoiding any content distortion while keeping the style information intact. In the next section, we describe our complete approach.
Style transfer using features
=============================
Let $\z_c \in \bbR^{C \times H \times W}$ be the feature extracted from a layer of a pre-trained CNN when the content image is passed through the network. Here, $H$ is the height, $W$ is the width, and $C$ is the number of channels of the feature $\z_c$. Similarly, $\z_s \in \bbR^{C \times H \times W}$ denotes the corresponding feature of the style image.
For any arbitrary style transfer method, we pass $\z_s$ and $\z_c$ to a transformation function $\calT$ which outputs the styled feature $\z_{cs}$: $$\z_{cs} = \calT(\z_c, \z_s).
\label{e:gen_styled}$$ Reconstructing $\z_{cs}$ back to image space gives the styled image. The difficult part is finding a transformation function $\calT$ that is style-agnostic like @Sheng_18a, @ChenSchmidt16a and @Li_17d but, unlike these, captures local and global style information without distorting the content and does not need iterative optimization.
Proposed approach {#s:procrustes}
=================
Although AdaIN [@HuangBelong17a] is not style-agnostic as a whole, it involves a transformation which is entirely style-agnostic: channel-wise moment matching. This matches the channel-wise mean and variance of the content features to those of the style features as follows: $$\z_{c^{\prime}}= \bigg( \frac{\z_c - \calF_{\mu}(\z_c)}{\calF_{\sigma}(\z_c)} \bigg) \calF_{\sigma}(\z_s) + \calF_{\mu}(\z_s).
\label{e:adaIN}$$ Here, $\calF_{\mu}(\cdot)$ and $\calF_{\sigma}(\cdot)$ compute the channel-wise mean and standard deviation respectively. Although this channel-wise alignment alone produces unsatisfactory styled results, it transfers local patterns of the style image without distorting the content structure, as shown in figure: \[f:closeup\]. Moment matching does not provide a perfect alignment between the channels of the style and content features, which leads to missing global style patterns and thus unsatisfactory styled results. Other approaches achieve this alignment either through the WCT transformation [@Li_17d] or patch replacement [@Sheng_18a; @ChenSchmidt16a], but both require modifying the content features, which leads to content distortion. We tackle this by aligning the style features to the content features instead. In that way, the style features acquire the structure of the content while maintaining their global patterns.
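To make the channel-wise moment matching above concrete, it can be written in a few lines of NumPy. This is a minimal sketch rather than the authors' implementation; the function name and the small $\epsilon$ guarding against zero variance are our own choices:

```python
import numpy as np

def moment_match(z_c, z_s, eps=1e-5):
    """Match the channel-wise mean/std of content features to style features.

    z_c, z_s: content and style features of shape (C, H, W).
    Returns z_c re-normalized to the per-channel statistics of z_s.
    """
    mu_c = z_c.mean(axis=(1, 2), keepdims=True)   # F_mu(z_c), shape (C, 1, 1)
    sd_c = z_c.std(axis=(1, 2), keepdims=True)    # F_sigma(z_c)
    mu_s = z_s.mean(axis=(1, 2), keepdims=True)   # F_mu(z_s)
    sd_s = z_s.std(axis=(1, 2), keepdims=True)    # F_sigma(z_s)
    return (z_c - mu_c) / (sd_c + eps) * sd_s + mu_s
```

Each channel of the output carries the mean and spread of the corresponding style channel while keeping the spatial layout of the content.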
There exist many variations of alignment, or registration, for images and point clouds, the more general of which involve non-rigid alignment (e.g., [@Myronen_07a]). In this work we use rigid alignment via a Procrustes transformation [@BorgGroenen97a] because of its simplicity and the existence of a closed-form solution that can be computed efficiently. The Procrustes transformation involves shifting, scaling and finally rotating the points to be moved (the style features) with respect to the target points (the content features after moment matching). For this we regard both features as point clouds of size $C$, with each point in $\bbR^{HW}$ space, i.e. $\z_c, \z_s \in \bbR ^{ C \times HW}$. We then apply the rigid transformation in the following steps:
- **Step-I: Shifting.** First, we need to shift both point clouds $\z_c$ and $\z_s$ to a common point in $\bbR^{HW}$ space. We center them at the origin as follows: $$\begin{aligned}
\bar{\z}_c = \z_c -\bmu_c \nonumber \\
    \bar{\z}_s = \z_s -\bmu_s \end{aligned}$$ here, $\bmu_c, \bmu_s \in \bbR^{HW}$ are the means of the $\z_c$ and $\z_s$ point clouds respectively.
- **Step-II: Scaling.** Both point clouds need to have the same scale before alignment. For this, we normalize each point cloud to unit Frobenius norm. $$\begin{aligned}
\hat{\z}_c = \frac{\bar{\z}_c}{\norm{\z_c}_F} \nonumber \\
    \hat{\z}_s = \frac{\bar{\z}_s}{\norm{\z_s}_F}\end{aligned}$$ here, $\norm{.}_F$ denotes the Frobenius norm.
- **Step-III: Rotation.** The next step rotates $\hat{\z}_s$ so that it aligns with $\hat{\z}_c$. For this, we multiply $\hat{\z}_s$ by a rotation matrix $\Q$ obtained by solving: $$\begin{aligned}
\argmin_\Q \norm{\hat{\z}_s \Q -\hat{\z}_c}_2^2 \quad \text{s.t.} \quad \Q\text{ is orthogonal}.
\label{e:roation}\end{aligned}$$ Although this is an optimization problem, it can be solved as follows: $$\begin{aligned}
    \norm{\hat{\z}_s \Q -\hat{\z}_c}_2^2 = \trace{\hat{\z}_s^T \hat{\z}_s +\hat{\z}_c^T \hat{\z}_c } -2 \trace {\hat{\z}_c^T \hat{\z}_s \Q}.\end{aligned}$$ Since the term $\trace{\hat{\z}_s^T \hat{\z}_s +\hat{\z}_c^T \hat{\z}_c }$ is independent of $\Q$, eq: \[e:roation\] becomes:
$$\begin{aligned}
\argmax_\Q \trace{\hat{\z}_c^T \hat{\z}_s\Q} \quad \text{s.t.} \quad \Q\text{ is orthogonal}.\end{aligned}$$
Using singular value decomposition of $\hat{\z}_c^T \hat{\z}_s= \U \mathbf{S} \V^T$ and cyclic property of trace we have:
$$\begin{aligned}
\trace{\hat{\z}_c^T \hat{\z}_s\Q} &= \trace{\U \mathbf{S} \V^T \Q} \nonumber \\
& = \trace{\mathbf{S} \V^T \Q \U} \nonumber \\
&= \trace{\mathbf{S} \HH}
\label{e:svd_part}\end{aligned}$$
here, $\HH= \V^T \Q \U$ is an orthogonal matrix, as it is a product of orthogonal matrices. Since $\mathbf{S}$ is a diagonal matrix with non-negative entries, $\trace{\mathbf{S} \HH}$ is maximized when the diagonal values of $\HH$ all equal $1$, i.e., when $\HH$ is the identity. Now, we have:
$$\begin{aligned}
\HH = \V^T\Q\U &= \I \nonumber \\
\text{or ,} \qquad \Q &= \V \U^T.\end{aligned}$$
- **Step-IV: Alignment.** After obtaining the rotation matrix $\Q$, we rotate the style point cloud and then scale and shift it back with respect to the original content features in the following way: $$\begin{aligned}
\z_{sc} = \norm{\z_c}_F \hat{\z}_s \Q + \bmu_c
\label{e:proc}\end{aligned}$$ $\z_{sc}$ is the final styled feature.
This alignment makes the style features adopt the content structure while keeping their local and global patterns intact.
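Steps I-IV admit a compact closed-form implementation. The sketch below is our own NumPy rendering of the procedure, not the authors' code; in particular, we normalize by the Frobenius norm of the *centered* clouds in Step II, the standard Procrustes convention:

```python
import numpy as np

def rigid_align(z_c, z_s):
    """Procrustes alignment of style features to content features.

    z_c, z_s: (C, H, W) features, viewed as C points in R^{HW}.
    Returns the styled feature z_sc of the same shape.
    """
    C, H, W = z_c.shape
    Xc, Xs = z_c.reshape(C, H * W), z_s.reshape(C, H * W)
    # Step I: shift both point clouds to the origin.
    mu_c, mu_s = Xc.mean(axis=0), Xs.mean(axis=0)
    Xc_bar, Xs_bar = Xc - mu_c, Xs - mu_s
    # Step II: scale each cloud to unit Frobenius norm.
    nc, ns = np.linalg.norm(Xc_bar), np.linalg.norm(Xs_bar)
    Xc_hat, Xs_hat = Xc_bar / nc, Xs_bar / ns
    # Step III: closed-form rotation Q = V U^T from the SVD of Xc_hat^T Xs_hat.
    U, _, Vt = np.linalg.svd(Xc_hat.T @ Xs_hat, full_matrices=False)
    Q = Vt.T @ U.T
    # Step IV: rotate the style cloud, then restore the content scale and mean.
    return (nc * (Xs_hat @ Q) + mu_c).reshape(C, H, W)
```

By construction, if the style cloud is an exact shifted, scaled and rotated copy of the content cloud, the output reproduces the content features.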
Multi-level style transfer
--------------------------
As shown by @Gatys_16a, features from different layers provide different details during style transfer. Lower-layer features (*relu\_1* and *relu\_2*) provide color and texture information, while higher-layer features (*relu\_3* and *relu\_4*) provide common pattern details (figure: \[f:multi\_lvl\]). Similar to WCT [@Li_17d], we transfer multi-level style by cascading the image through different auto-encoders. However, unlike WCT [@Li_17d], we do not need to perform the alignment described in section \[s:procrustes\] at every level; we apply it only at the deepest layer (*relu4\_1*).
Performing the alignment at each layer or only at the deepest layer (*relu4\_1*) produces identical results, as shown in figure: \[f:proc\_once\]. This indicates that a single rigid alignment of the style features to the content features is sufficient.
Once the features are aligned, we only need to take care of local textures at the other layers. We do this by applying moment matching (eq: \[e:adaIN\]) at the lower layers. The complete pipeline is shown in figure: \[f:multi\_lvl\].
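Structurally, the multi-level pipeline is a cascade over auto-encoder levels, with rigid alignment applied only at the deepest one. The sketch below captures only the control flow; `encoders`, `decoders`, `align` and `match` are placeholders for the pre-trained VGG auto-encoders, the rigid alignment, and the moment matching:

```python
import numpy as np

def multi_level_transfer(content, style, encoders, decoders, align, match):
    """Cascade the image from the deepest auto-encoder level to the shallowest.

    Rigid alignment is used once, at the deepest level; the lower levels
    only apply channel-wise moment matching for local textures.
    """
    img = content
    deepest = len(encoders) - 1
    for level in range(deepest, -1, -1):
        z_c = encoders[level](img)    # features of the current (partially styled) image
        z_s = encoders[level](style)  # features of the style image at the same level
        z = align(z_c, z_s) if level == deepest else match(z_c, z_s)
        img = decoders[level](z)      # invert features back to image space
    return img

# Smoke test with identity codecs and placeholder transforms.
identity = lambda x: x
styled = multi_level_transfer(
    np.zeros((3, 4, 4)), np.ones((3, 4, 4)),
    encoders=[identity] * 4, decoders=[identity] * 4,
    align=lambda c, s: s,               # stand-in for rigid alignment
    match=lambda c, s: 0.5 * (c + s))   # stand-in for moment matching
```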
[@c@c@ c@c@]{} [content]{}&[style]{}&[$C\times HW$]{}&[$HW\times C$]{}\
The need to arrange features in $\bbR^{C\times HW}$ space
---------------------------------------------------------
As mentioned above, for alignment we regard the deep-network features ($\z \in \bbR^{C\times H \times W}$) as a point cloud of $C$ points, each of dimension $HW$. One could instead choose the opposite configuration, in which each point lies in $\bbR^C$ space, giving $HW$ points per cloud. Figure: \[f:configs\] compares style transfer under the two configurations: the latter results in complete distortion of the content structure in the final styled image. The reason is that deep convolutional features preserve some spatial structure, which is required for style transfer and successful image reconstruction, so we must arrange the features such that the alignment does not destroy this structure. That is why, for alignment, we reshape $\z$ so that the point cloud has $C$ points, each of dimension $HW$.
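The two configurations differ only in how the $(C, H, W)$ tensor is flattened into a point cloud, as this small NumPy sketch shows (the dimensions are illustrative):

```python
import numpy as np

C, H, W = 512, 32, 32
z = np.arange(C * H * W, dtype=np.float64).reshape(C, H, W)

# Configuration used here: C points in R^{HW}. Each channel is one point,
# so a point's coordinates preserve the spatial layout of that channel.
points_CxHW = z.reshape(C, H * W)       # shape (512, 1024)

# Rejected configuration: HW points in R^C. Each spatial location is one
# point; aligning these mixes locations and destroys spatial structure.
points_HWxC = z.reshape(C, H * W).T     # shape (1024, 512)
```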
Experiments
============
Decoder training
----------------
We use a pre-trained auto-encoder network from @Li_17d, which has been trained for general image reconstruction. The encoder part is the pre-trained VGG-19 [@SimonyZisser15a], which is kept fixed, and the decoder network is trained to invert the VGG features back to image space. As mentioned in @Li_17d, the decoder is designed to be symmetrical to the VGG-19 network, with nearest-neighbor up-sampling layers used as the inverse of the max-pooling layers.
The authors of @Li_17d trained five decoders for reconstructing images from features extracted at different layers of the VGG-19 network: *relu5\_1, relu4\_1, relu3\_1, relu2\_1 and relu1\_1*. The loss function for training combines a pixel reconstruction loss and a feature loss [@DosovitBrox16b]. $$\begin{aligned}
    \argmin_\theta \norm{\X- \X_r}^2_2 + \lambda \norm{\Phi_l(\X)- \Phi_l(\X_r)}^2_2\end{aligned}$$ where $\theta$ are the weights of the decoder, $\X$ and $\X_r$ are the original and reconstructed images respectively, and $\Phi_l(\X)$ denotes the VGG-19 encoder features extracted at layer $l$. In addition, $\lambda$ is the weight balancing the two losses. The decoders were trained on the Microsoft COCO dataset [@Lin_14d]. However, unlike @Li_17d, we use only four decoders in our experiments for multi-level style transfer, corresponding to the *relu4\_1, relu3\_1, relu2\_1 and relu1\_1* layers of the VGG-19 network.
Comparison with prior style transfer methods
--------------------------------------------
[Figure \[f:comaprison\]: stylization comparison, columns: content, style, Gatys, AdaIN, WCT, Avatar, Ours.]
To show the effectiveness of the proposed method, we compare our results with two types of arbitrary style transfer approaches: the iterative optimization-based method [@Gatys_16a] and fast arbitrary style transfer methods [@Li_17d; @Sheng_18a; @HuangBelong17a]. We present these stylization results in figure: \[f:comaprison\].
Although the optimization-based approach [@Gatys_16a] performs arbitrary style transfer, it requires a slow optimization and, moreover, can get stuck in local minima. This results in visually unsatisfying style transfer, as shown in the third and fourth rows. AdaIN [@HuangBelong17a] addresses the issue of local minima along with efficiency, but fails to capture the style patterns; for instance, in the third row, the styled image contains colors from the content, such as the red color on the lips. In contrast, WCT [@Li_17d] and Avatar-Net [@Sheng_18a] perform very well in capturing the style patterns, the former by matching second-order statistics and the latter by normalized patch swapping. However, both methods fail to maintain the content structure in the stylized results. For instance, in the first row, WCT [@Li_17d] completely destroys the content structure: mountains and clouds are indistinguishable. Similarly, in the second and fifth rows, the content image details are heavily distorted. Although Avatar-Net [@Sheng_18a] performs better than WCT [@Li_17d], as in the first and fifth rows, it too fails to maintain content information, as shown in the second and sixth rows; in the second row, the styled image retains hardly any content information.
On the other hand, the proposed method not only captures style patterns as well as WCT [@Li_17d] and Avatar-Net [@Sheng_18a], but also maintains the content structure, as shown in the first, second and fifth rows where the other two fail.
We also provide a close-up in figure: \[f:closeup\]. As shown in the figure, WCT [@Li_17d] and Avatar-Net [@Sheng_18a] distort the content image structure: the nose in the styled image is heavily distorted, making these methods difficult to use on human faces. In contrast, AdaIN [@HuangBelong17a] and the proposed method keep the content information intact, as shown in the last two columns of the second row. However, AdaIN [@HuangBelong17a] does not capture the style patterns very well. The proposed method, on the other hand, captures the style patterns well without any content distortion in the styled image.
In addition to image-based stylization, the proposed method can also perform video stylization. We achieve this by simply applying per-frame style transfer, as shown in figure: \[f:video\]. The styled video is coherent over adjacent frames: since the style features adjust themselves instead of the content, the style transfer is spatially invariant and robust to small content variations. In contrast, Avatar-Net [@Sheng_18a] and WCT [@Li_17d] show severe content distortions, with the distortion much worse in WCT [@Li_17d].
Efficiency
----------
We compare the execution time for style transfer of the proposed method with state-of-the-art arbitrary style transfer methods in table \[t:num\_comp\]. We implement all methods in Tensorflow [@Abadi_16a] for a fair comparison. The approach of @Gatys_16a is very slow due to its iterative optimization, which involves multiple forward and backward passes through a pre-trained network. In contrast, the other methods have very good execution times, as they are feed-forward. Among all, AdaIN [@HuangBelong17a] performs best, since it requires only moment matching between content and style features. WCT [@Li_17d] is relatively slower as it requires an SVD operation at each layer during multi-layer style transfer. Avatar-Net [@Sheng_18a] has a better execution time than WCT [@Li_17d] and ours because of its GPU-based style-swap layer and hour-glass multi-layer network.
On the other hand, our method is slower than AdaIN [@HuangBelong17a] and Avatar-Net [@Sheng_18a], as it involves an SVD operation at *relu\_4* and additionally requires passing through multiple auto-encoders for multi-level style transfer, similar to WCT [@Li_17d]. However, unlike WCT [@Li_17d], the proposed method needs only one SVD operation, as shown in figure: \[f:proc\_once\], and thus has a better execution time than WCT [@Li_17d].
----------------------------- --------------------------------- ---------------- ----------------
[**Method**]{}                [**Execution time (in sec),**     **$\log L_c$**   **$\log L_s$**
                              **$512 \times 512$**]{}
[Gatys [@Gatys_16a]]{}        58                                **4.40**         8.28
[AdaIN [@HuangBelong17a]]{}   **0.13**                          4.62             8.18
[WCT [@Li_17d]]{}             1.12                              4.79             7.83
[Avatar-Net [@Sheng_18a]]{}   0.34                              4.75             **7.77**
[Ours]{}                      0.46                              4.70             7.87
----------------------------- --------------------------------- ---------------- ----------------

  : Numeric comparison between the proposed method and state-of-the-art methods. *Second column:* average execution time (in seconds) for $512 \times 512$ images. *Last two columns:* average content and style loss for the styled images in figure: \[f:comaprison\]. Lower values are better. []{data-label="t:num_comp"}
Numeric comparison
------------------
In table \[t:num\_comp\] we show a numerical comparison between the different style transfer methods. We report the average content loss ($L_c$) and style loss ($L_s$) from @Gatys_16a for the images in figure: \[f:comaprison\]. $$\begin{aligned}
L_c &= \frac{1}{2CHW} \sum_{i,j} \norm{\z_{c_{i,j}}-\z_{i,j}}^2_2 \\
L_s &= \frac{1}{4C^2H^2W^2} \sum_{i,j}\norm{G_{i,j}(\z_s)-G_{i,j}(\z)}^2_2\end{aligned}$$ here, $\z_c$ is the content feature, $\z_s$ the style feature, $\z$ the styled feature, and $G(.)$ the Gram matrix. As shown in table \[t:num\_comp\], our method not only performs well in terms of content loss, but is also on par with WCT [@Li_17d] and Avatar-Net [@Sheng_18a] in terms of style loss. This supports our intuition that by aligning the style features to the content features, we not only preserve the content structure but also effectively transfer the style patterns.
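With the usual convention that the Gram matrix is computed on features flattened to $C \times HW$, both losses are straightforward to compute from single-layer features. The following NumPy sketch is our own single-layer illustration (the function names are ours):

```python
import numpy as np

def gram(z):
    """Gram matrix G(z): channel co-activations of a (C, H, W) feature."""
    C, H, W = z.shape
    X = z.reshape(C, H * W)
    return X @ X.T                      # shape (C, C)

def content_loss(z_c, z):
    """L_c: normalized squared difference between content and styled features."""
    C, H, W = z.shape
    return np.sum((z_c - z) ** 2) / (2 * C * H * W)

def style_loss(z_s, z):
    """L_s: normalized squared difference between the Gram matrices."""
    C, H, W = z.shape
    return np.sum((gram(z_s) - gram(z)) ** 2) / (4 * C**2 * H**2 * W**2)
```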
User control
------------
[Figure: style interpolation, content image stylized between style image 1 and style image 2 as $\beta$ varies.]

[Figure: content-style trade-off as $\alpha$ varies between style and content.]

[Figure: spatial control, different styles applied to masked regions of the content image.]
Like other arbitrary style transfer methods, our approach is flexible enough to accommodate different user controls, such as the trade-off between style and content, style interpolation, and spatial control during style transfer.
Since our method applies the transformation in feature space, independently of the network, we can achieve a trade-off between style and content as follows: $$\begin{aligned}
    \z = \alpha \z_c + (1-\alpha) \z_{sc}.\end{aligned}$$ Here, $\z_{sc}$ is the transformed feature from eq: \[e:proc\], $\z_c$ is the content feature and $\alpha$ is the trade-off parameter. Figure: \[f:tradeoff\] shows an example of the content-style trade-off.
Figure: \[f:interpolate\] shows an instance of linear interpolation between two styles, created by the proposed approach. This is done by adjusting a weight parameter ($\beta$) between the transformation outputs ($\calT(\z_c,\z_s)$) as follows: $$\begin{aligned}
    \z = \alpha \z_c + (1-\alpha) (\beta \calT(\z_{c} ,\z_{s1}) + (1-\beta) \calT (\z_c,\z_{s2}) ).\end{aligned}$$ Spatial control is needed to apply different styles to different parts of the content image. A set of masks $\M$ is additionally required to control the regions of correspondence between style and content. By replacing the content feature $\z_c$ in section \[s:procrustes\] with $\M \odot \z_c$, where $\odot$ is a simple mask-out operation, we can stylize only the specified region, as shown in figure: \[f:mask\].
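Since all three controls are plain feature-space operations, they can be sketched in a few lines of NumPy (the function names are ours, and `mask` is assumed to be a binary $(H, W)$ array):

```python
import numpy as np

def blend(z_c, z_sc, alpha):
    """Content-style trade-off: alpha = 1 keeps pure content features."""
    return alpha * z_c + (1 - alpha) * z_sc

def interpolate(z_c, z_cs1, z_cs2, alpha, beta):
    """Interpolate between two styled features, then blend with the content."""
    return alpha * z_c + (1 - alpha) * (beta * z_cs1 + (1 - beta) * z_cs2)

def spatial_mask(z_c, mask):
    """Restrict stylization to a region: broadcast an (H, W) mask over channels."""
    return z_c * mask[None, :, :]
```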
Conclusion
==========
In this work, we propose an effective and efficient arbitrary style transfer approach that does not require training for every individual style. By rigidly aligning the style features to the content features, we solve the problem of content distortion without sacrificing style patterns in the styled image. Our method can seamlessly adopt the existing multi-layer stylization pipeline and capture style information from those layers as well. It can also perform video stylization, simply by per-frame style transfer. Experimental results demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in arbitrary style transfer. As a future direction, one may replace the multiple auto-encoders used for multi-level style transfer with a trained hourglass architecture similar to Avatar-Net for better efficiency.
More styled Results
===================
[Figure: additional stylization results, columns: content, style, Gatys, AdaIN, WCT, Avatar, Ours.]
M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: A system for large-scale machine learning. In *12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2016)*, pages 265–283, Savannah, GA, Oct. 6–8 2016.
I. Borg and P. Groenen. *Modern Multidimensional Scaling: Theory and Application*. Springer Series in Statistics. Springer-Verlag, Berlin, 1997.
A. J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv:1603.01768 \[cs.CV\], Mar. 16 2016.
D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer. In *Proc. of the 2017 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’17)*, Honolulu, HI, July 21–26 2017.
T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv:1612.04337, Dec. 16 2016.
A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, editors, *Advances in Neural Information Processing Systems (NIPS)*, volume 29, pages 658–666. MIT Press, Cambridge, MA, 2016.
V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In *Proc. of the 5th Int. Conf. Learning Representations (ICLR 2017)*, Toulon, France, Apr. 24–26 2017.
A. A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. In L. Pocock, editor, *Proc. of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2001)*, pages 341–346, Los Angeles, CA, Aug. 12–17 2001.
A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In J. K. Tsotsos, A. Blake, Y. Ohta, and S. W. Zucker, editors, *Proc. 7th Int. Conf. Computer Vision (ICCV’99)*, pages 1033–1038, Kerkyra, Corfu, Greece, Sept. 20–27 1999.
O. Frigo, N. Sabater, J. Delon, and P. Hellier. Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In *Proc. of the 2016 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’16)*, Las Vegas, NV, June 26 – July 1 2016.
L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In *Proc. of the 2016 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’16)*, pages 2414–2423, Las Vegas, NV, June 26 – July 1 2016.
L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. Controlling perceptual factors in neural style transfer. arXiv:1611.07865, Nov. 16 2016.
A. Gupta, J. Johnson, A. Alahi, and L. Fei-Fei. Characterizing and improving stability in neural style transfer. In *Proc. 17th Int. Conf. Computer Vision (ICCV’17)*, Dec. 11–18 2017.
X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In *Proc. 17th Int. Conf. Computer Vision (ICCV’17)*, Dec. 11–18 2017.
Y. Jing, Y. Yang, Z. Feng, J. Ye, Y. Yu, and M. Song. Neural style transfer: A [Review]{}. arXiv:1705.04058, May 17 2017.
J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, *Proc. 14th European Conf. Computer Vision (ECCV’16)*, pages 694–711, Amsterdam, The Netherlands, Oct. 11–14 2016.
J. E. Kyprianidis, J. Collomosse, T. Wang, and T. Isenberg. State of the “[Art]{}”: A [Taxonomy]{} of [Artistic Stylization Techniques for Images and Video]{}. *IEEE transactions on visualization and computer graphics*, 19(5): 866–885, July 2012.
C. Li and M. Wand. Combining markov random fields and convolutional neural networks for image synthesis. In *Proc. of the 2016 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’16)*, Las Vegas, NV, June 26 – July 1 2016.
C. Li and M. Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, *Proc. 14th European Conf. Computer Vision (ECCV’16)*, Amsterdam, The Netherlands, Oct. 11–14 2016.
Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Universal style transfer via feature transforms. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, *Advances in Neural Information Processing Systems (NIPS)*, volume 30, pages 386–396. MIT Press, Cambridge, MA, 2017.
Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In *Proc. of the 2017 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’17)*, Honolulu, HI, July 21–26 2017.
Y. Li, N. Wang, J. Liu, and X. Hou. Demystifying neural style transfer. In *Proc. of the 26th Int. Joint Conf. Artificial Intelligence (IJCAI’17)*, pages 2230–2236, Melbourne, Australia, Aug. 19–25 2017.
T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll[á]{}r, and C. L. Zitnick. Microsoft [COCO]{}: Common objects in context. In *Proc. 13th European Conf. Computer Vision (ECCV’14)*, pages 740–755, Z[ü]{}rich, Switzerland, Sept. 6–12 2014.
A. Myronenko, X. Song, and M. [Á]{}. Carreira-Perpi[ñ]{}[á]{}n. Non-rigid point set registration: Coherent point drift. In B. Sch[ö]{}lkopf, J. Platt, and T. Hofmann, editors, *Advances in Neural Information Processing Systems (NIPS)*, volume 19, pages 1009–1016. MIT Press, Cambridge, MA, 2007.
E. Risser, P. Wilmot, and C. Barnes. Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv:1701.08893, Jan. 17 2017.
F. Shen, S. Yan, and G. Zeng. Neural style transfer via meta networks. In *Proc. of the 2018 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’18)*, Salt Lake City, UT, June 18–22 2018.
L. Sheng, Z. Lin, J. Shao, and X. Wang. Avatar-Net: Multi-scale zero-shot style transfer by feature decoration. In *Proc. of the 2018 IEEE Computer Society Conf. Computer Vision and Pattern Recognition (CVPR’18)*, Salt Lake City, UT, June 18–22 2018.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In *Proc. of the 3rd Int. Conf. Learning Representations (ICLR 2015)*, San Diego, CA, May 7–9 2015.
D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In M.-F. Balcan and K. Q. Weinberger, editors, *Proc. of the 33rd Int. Conf. Machine Learning (ICML 2016)*, pages 1349–1357, New York, NY, June 19–24 2016.
D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022, July 16 2016.
H. Wang, X. Liang, H. Zhang, D.-Y. Yeung, and E. P. Xing. ZM-Net: Real-time zero-shot image manipulation network. arXiv:1703.07255, Mar. 17 2017.
H. Zhang and K. Dana. Multi-style generative network for real-time transfer. arXiv:1703.06953, Mar. 20 2017.
---
abstract: 'We present the results of our most recent works on [*Spitzer/MIPS*]{} $24 \,\mu \rm m$ galaxies. Through a multiwavelength analysis, we study different properties (redshifts, luminosities, stellar masses) characterising the sources which produce the bulk of the mid-IR background. From a comparative study with the total population of $K_s$-selected galaxies, we determine that $24 \,\mu \rm m$ sources account for an important fraction of the most massive galaxies present at different redshifts. On the other hand, we determine that $24 \,\mu \rm m$ galaxies also produce most of the energy contained in the far-IR cosmic background at 70 and $160 \,\mu \rm m$. Furthermore, we are able to set tight constraints on the Cosmic Infrared Background (CIB) spectral energy distribution (SED). Our results help to clarify the links between these presumably different IR galaxy populations.'
author:
- 'K.I. Caputi, H. Dole, G. Lagache, J.-L. Puget'
title: '[*Spitzer/MIPS*]{} $24 \,\mu \rm m$ galaxies: the link to near-IR galaxies and the cosmic IR background'
---
Introduction
============
The Cosmic Infrared Background (CIB) accounts for roughly half of the total energy produced by extragalactic sources (e.g. Hauser & Dwek 2001). Since the discovery of the CIB (Puget et al. 1996), it has been recognised that the study of infrared (IR) extragalactic sources is fundamental to understanding galaxy formation and evolution. As IR emission is produced by dust re-processing of UV/optical light, IR sources directly signpost star formation and accretion activity in the Universe.
The determination of the properties characterising IR galaxies and the link between the IR and other galaxy populations have been a matter of study since the first IR missions ([*IRAS, ISO*]{}). With the advent of [*Spitzer*]{} (Werner et al. 2004), our comprehension of the nature and composition of the CIB has very much improved. Numerous works presented during this conference have shown how much progress we have made in understanding IR galaxies up to high redshifts $z \sim 4-5$.
In this paper, we summarize our recent studies of the properties of mid-IR galaxies and their contribution to other wavelength domains. In particular, in Section 3, we analyze the role of the most luminous mid-IR galaxies in the evolution of the most massive $K_s$-band galaxies. In Section 4, we constrain the contribution of mid-IR galaxies to the far-IR background and give new estimates for the CIB. Finally, in Section 5, we briefly discuss the implications of our work.
Properties of the sources composing the $24 \,\mu \rm m$ background {#secprop}
===================================================================
Caputi et al. (2006a) studied different properties of the sources composing the bulk of the $24 \, \rm \mu m$ background in the Great Observatories Origins Deep Survey / Chandra Deep Field South (GOODS/CDFS). Observations of the CDFS have been carried out with the Multiband Imaging Photometer for [*Spitzer*]{} (MIPS; Rieke et al. 2004), as part of the Guaranteed Time Observers (GTO) program. Using a deep ($K_s<21.5$, Vega) galaxy sample, Caputi et al. (2006a) identified $\sim 94\%$ of the sources with $S_\nu(24 \, \rm \mu m)>83 \, \rm \mu Jy$ in the GOODS/CDFS. Taking advantage of the excellent quality multiwavelength photometry and spectroscopic coverage of this field, they determined the redshift distribution, IR luminosities and stellar masses characterising $24 \, \rm \mu m$ galaxies.
![ The normalized redshift distributions of the $S_\nu >83 \, \mu {\rm Jy}$ $\rm 24\, \mu m $ galaxies (black line), compared to the normalized redshift distribution of the total $K_s<21.5$ sample in the same field (grey line). Figure taken from Caputi et al. (2006a).[]{data-label="fig1"}](caputi_k_fig1.eps){width="8cm"}
Figure \[fig1\] shows the normalized redshift distributions of the $S_\nu(\rm 24 \, \mu m) > 83 \, \rm \mu Jy$ galaxies in comparison to the distribution of all the $K_s$-band galaxies in the GOODS/CDFS. These populations comprise $\sim 500$ and $\sim 3000$ objects, respectively, over an area of $\sim$130 arcmin$^2$. Several features present in both distributions are the consequence of known large-scale structure in the GOODS/CDFS. In contrast, the redshift distribution of $24 \, \rm \mu m$ galaxies shows a bump at redshift $z \sim 1.9$ that does not appear for the total $K_s<21.5$ galaxy population. This peak in the $24 \, \rm \mu m$ redshift distribution was predicted by Lagache et al. (2004) and is the consequence of the selection effect produced by PAH emission features entering the observed $\rm 24\,\mu m$ band. Given the width of the $\rm 24\,\mu m$ filter (whose transmission covers the wavelength range $\sim 20-28 \, \mu \rm m$), both the 7.7 and the $8.6 \, \mu \rm m$ PAH lines could contribute to the redshift distribution peak observed at $z \sim 1.9$. Our results show that PAH molecules must already be present in a significant fraction of star-forming galaxies at high redshifts.
![The IR luminosity ($L_{IR}$) versus stellar mass ($M_{stell.}$) relation for $\rm 24\,\mu m$ galaxies, at different redshifts. In each panel, the dotted line indicates the luminosity completeness imposed by the $S_\nu = 83 \, \rm \mu Jy$ flux completeness limit of the $\rm 24\,\mu m$ sample. Figure taken from Caputi et al. (2006a).[]{data-label="fig2"}](caputi_k_fig2.eps){width="14cm"}
Using the empirical calibrations obtained by Chary & Elbaz (2001) and Elbaz et al. (2002) between mid-IR luminosities and bolometric IR luminosities $L_{IR}=L(8-1000 \mu{\rm m})$, Caputi et al. (2006a) computed bolometric IR luminosity estimates for all their star-forming $\rm 24 \, \mu m$ galaxies. On the other hand, they modelled the optical/near-IR spectral energy distributions (SEDs) of these galaxies, including [*Spitzer/IRAC (Infrared Array Camera)*]{} $3.6$ and $4.5 \, \rm \mu m$ data. This allowed them to obtain rest-frame $K_s$-band luminosities and derived stellar masses.
Figure \[fig2\] shows the evolution of the IR luminosity ($L_{IR}$) versus stellar mass ($M_{stell.}$) plane as a function of redshift. Panel a) shows that most of the $\rm 24 \, \mu m$ galaxies at redshifts $0.4 \leq z < 0.8$ have infrared luminosities $L_{IR}<10^{11}\, L_{\odot}$. The maximum observed infrared luminosities increase with redshift, and luminous infrared galaxies (LIRGs) characterized by $10^{11}\, L_{\odot}<L_{IR}<10^{12}\, L_{\odot}$ are the dominant $\rm 24 \, \mu m$ population at redshifts $0.8 \leq z < 1.2$ (cf. also Le Floc’h et al. 2005). The majority of the mid-IR sources at $0.4 \leq z < 1.2$ are hosted by intermediate-mass galaxies with stellar masses $10^{10} \, M_{\odot} < M < 10^{11} \, M_{\odot}$, although some more massive galaxies could also be classified as LIRGs at these redshifts. Within our surveyed area, there is virtually no ultra-luminous infrared galaxy (ULIRG) with $L_{IR}>10^{12}\, L_{\odot}$ at $z<1.2$. ULIRGs might be present at these low redshifts, but are indeed very rare (e.g. Flores et al. 1999). At $1.2 \leq z \leq 2.0$, ULIRGs start to be the dominant population ($\sim$ 65% at $S_\nu > 83 \mu {\rm Jy}$). Most of them are intermediate to high-mass galaxies. At $z>2$, we observe sources with extremely high infrared luminosities $10^{12}\, L_{\odot}<L_{IR}<10^{14}\, L_{\odot}$ mainly harboured by galaxies with stellar masses $M > 10^{11} \, M_{\odot}$.
For star-forming galaxies, an IR luminosity $L_{IR} \sim 10^{11}\, (10^{12})\, L_\odot$ corresponds to a star-formation rate $SFR \sim 20 \, (200) \, M_\odot \, \rm yr^{-1}$ (Kennicutt 1998). The analysis of X-ray data and IRAC colour-colour diagrams suggests that only a minor fraction of IR galaxies could be mainly driven by quasar activity (Caputi et al. 2006b). Thus, the bulk of the IR emission in most $24 \, \rm \mu m$ galaxies must be produced by star-formation activity, whose rates can achieve extremely high values ($SFR>500-1000 \, M_\odot \rm yr^{-1}$) at high redshifts.
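The quoted conversion between IR luminosity and star-formation rate can be sketched with the standard Kennicutt (1998) calibration $SFR \approx 4.5\times10^{-44}\,L_{IR}$ (with $L_{IR}$ in erg s$^{-1}$ over $8-1000\,\mu$m); the function name and the solar-luminosity constant below are our own illustrative choices, not from the paper:

```python
# Kennicutt (1998) calibration quoted in the text:
# SFR [M_sun/yr] ~= 4.5e-44 * L_IR [erg/s], L_IR over 8-1000 micron.
L_SUN_ERG_S = 3.826e33        # solar bolometric luminosity in erg/s

def sfr_from_lir(lir_in_lsun):
    """Star-formation rate (M_sun/yr) from total IR luminosity in L_sun."""
    return 4.5e-44 * lir_in_lsun * L_SUN_ERG_S

print(sfr_from_lir(1e11))     # LIRG threshold: ~17 (text quotes ~20)
print(sfr_from_lir(1e12))     # ULIRG threshold: ~172 (text quotes ~200)
```

The calibration reproduces the order-of-magnitude values quoted above.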
The large $SFR$ values characterising some systems at $z \sim 2-3$ imply that a few starburst episodes lasting $10^7-10^8$ yr might be sufficient to build up the stellar mass of some massive ($\sim 10^{11} \, M_{\odot}$) galaxies at these redshifts. Thus, the burst-like mode of star formation appears to have been a very efficient way of constructing massive galaxies in the past. By contrast, at lower redshifts, the burst-like mode is only efficient enough to construct entire galaxies of lower mass. A significant fraction ($\sim 30\%$) of massive galaxies experience star-formation activity at lower redshifts, but this star formation adds only a minor amount to the stellar mass already present in these systems (Caputi et al. 2006a,b).
The role of $24 \, \rm \mu m$ sources in the evolution of $K_s$-band galaxies {#secks}
=============================================================================
Near-IR surveys are traditionally used to trace stellar mass at different redshifts. Studying the role of mid-IR galaxies within the total near-IR-selected galaxy population should then clarify the importance of star formation and accretion activity in galaxies of different stellar mass.
![The evolution of the comoving number density of $K_s<21.5$ (Vega) galaxies with redshift (empty circles), in comparison to that of ULIRGs with the same mass cut (filled circles). Filled triangles represent lower limits on the ULIRG densities. The diamond-like symbol in the left panel shows the density of ULIRGs estimated by Daddi et al. (2005). The cross-like symbol in the right panel corresponds to the density of radio-detected submillimetre galaxies with $S_\nu > 5 \, \rm mJy$ (Chapman et al. 2003). Figure taken from Caputi et al. (2006b).[]{data-label="fig3"}](caputi_k_fig3.eps){width="14cm"}
Caputi et al. (2006b) studied the role of the LIRG and ULIRG phases in the evolution of massive $K_s$-band galaxies, using the same $24 \, \rm \mu m$ sample in the GOODS/CDFS as analyzed in Section 2. They found that LIRGs and ULIRGs only constitute a fraction of the massive ($M>10^{11}\, M_\odot$) galaxies present at different redshifts, but this fraction becomes very important ($>50\%$) at $z>2$.
Indeed, Figure \[fig3\] shows that ULIRGs trace a substantial fraction of massive galaxies at high redshifts. The density of ULIRGs decreases sharply below $z \approx 1.5$, but LIRGs still constitute $\sim 30\%$ of the galaxies with $M>10^{11}\, M_\odot$.
The contribution of $24 \, \rm \mu m$ galaxies to the far-IR background
=======================================================================
In the previous section we have explained the importance of 24 $\mu$m galaxies within the population of massive galaxies. However, one could wonder whether galaxies selected at 24 $\mu$m are actually representative of the galaxy populations selected at other IR wavelengths.
To explore this issue, Dole et al. (2006) studied the contribution of [*Spitzer/MIPS*]{} 24 $\mu$m galaxies to the Far-Infrared (FIR) Background at 70 and 160 $\mu$m, using a stacking analysis over an area of 0.85 deg$^2$. This technique consists of studying the integrated light at 70 and 160 $\mu$m of all the resolved 24 $\mu$m sources. Resolved 24 $\mu$m sources make up $\sim 80\%$ of the 24 $\mu$m background (Papovich et al. 2004; Dole et al. 2006). However, due to confusion, resolved 70 and 160 $\mu$m sources can only account for a minor fraction of the respective 70 and 160 $\mu$m backgrounds. Dole et al. (2006) showed that the stacking analysis makes it possible to probe an order of magnitude below the confusion level. They determined that 24 $\mu$m sources account for 92% and 69% of the 70 and 160 $\mu$m backgrounds, respectively. This is the first measurement of the contribution of 24 $\mu$m galaxies to the far-IR cosmic background.
![ The EBL SED from 0.1 $\mu$m to 1 mm. The arrows at 24, 70 and 160 $\mu$m show the fraction of the CIB resolved through the stacking analysis of $S_\nu(24 \, \rm \mu m) > 60~\mu$Jy sources, as obtained by Dole et al. (2006). See this paper for further references on this figure.[]{data-label="fig4"}](caputi_k_fig4.eps){width="14cm"}
Figure \[fig4\] shows the SED of the Extragalactic Background Light (EBL). Surveys conducted with different UV/optical to submillimetre facilities have progressively constrained this SED at different wavelengths. The arrows at 24, 70 and 160 $\mu$m indicate the lower limits on the IR background determined through the stacking analysis of 24 $\mu$m sources performed by Dole et al. (2006). Unlike all previous determinations, the stacking analysis yielded a precise direct measurement of the EBL in this wavelength range.
The determined intensity of the CIB is 24 nW m$^{-2}$ sr$^{-1}$, similar to that of the Cosmic Optical Background (COB). Put another way, half of the energy associated with galaxy formation and evolution comes directly from starlight, while the other half comes from starlight reprocessed by dust. Altogether, however, the energy budgets of the COB and the CIB amount to only 5% of the energy contained in the Cosmic Microwave Background (CMB; Dole et al. 2006). This percentage represents the fraction of the energy that has been produced after recombination. The comparison of the intensities of the different background SEDs is illustrated in Figure \[fig5\].
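A back-of-the-envelope sketch of this energy budget: the CIB value is from the text, while the COB and CMB intensities below are approximate round numbers consistent with Dole et al. (2006), inserted only to reproduce the quoted ratios:

```python
# Rough background energy budget (intensities in nW m^-2 sr^-1).
CIB = 24.0   # from the text
COB = 23.0   # approximate; "similar to the CIB"
CMB = 960.0  # approximate integrated CMB intensity

print((COB + CIB) / CMB)  # ~0.05: the post-recombination fraction
print(CMB / (COB + CIB))  # ~20: the CMB-to-(COB+CIB) ratio of Fig. 5
```

Both quoted figures (the 5% fraction and the factor $\sim 20$ of Figure \[fig5\]) follow from these intensities.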
![Schematic SEDs of different extragalactic backgrounds. The numbers appearing inside the squares indicate the integrated intensities of the respective backgrounds, in units of nW m$^{-2}$ sr$^{-1}$. The CMB contains $\sim 20$ times as much energy as the COB and CIB altogether. Figure taken from Dole et al. (2006).[]{data-label="fig5"}](caputi_k_fig5.eps){width="8cm"}
Conclusions
===========
[*Spitzer*]{} is enabling an unprecedented study of the mid-IR Universe and allowing important constraints to be set on the links between the mid-IR and other galaxy populations. On the one hand, we have shown that mid-IR sources constitute a significant fraction of the already-assembled massive galaxies at different redshifts. This implies that star-formation and accretion processes play a fundamental role in the evolution of massive galaxies through cosmic time. On the other hand, we have determined that the galaxies composing the 24 $\mu$m background are also responsible for most of the extragalactic energy produced at 70 and 160 $\mu$m. Taken together, our results demonstrate that the importance of studying mid-IR galaxies extends beyond the mid-IR domain, and that such study is necessary to achieve a unified picture of galaxy populations.
Caputi, K. I., Dole, H., Lagache, G., et al., 2006a, ApJ, 637, 727
Caputi, K. I., Dole, H., Lagache, G., et al., 2006b, A&A, in press
Chapman, S. C., Blain, A. W., Ivison, R. J., & Smail, I. R. 2003, Nature, 422, 695
Chary, R., Elbaz, D. 2001, ApJ, 556, 562
Daddi, E., Dickinson, M., Chary, R., et al., 2005, ApJ, 631, L13
Dole, H., Lagache, G., Puget, J.-L., et al., 2006, A&A, in press (astro-ph/0603208)
Elbaz, D., et al. 2002, A&A, 384, 848
Flores, H., Hammer, F., Thuan, T. X., et al. 1999, ApJ, 517, 148
Hauser, M. G., & Dwek, E. 2001, ARA&A, 39, 249
Kennicutt, R.C. Jr. 1998, ApJ, 498, 541
Lagache, G., Dole, H., Puget, J.-L., et al., 2004, ApJS, 154, 112
Le Floc’h, E., Papovich, C., Dole, H., et al., 2005, ApJ, 632, 169
Papovich, C., Dole, H., Egami, E., et al., 2004, ApJS, 154, 70
Puget, J.-L, Abergel, A., Bernard, J.-P, et al., 1996, A&A, 308, L5
Rieke, G. H., Young, E. T., Engelbracht, C. W., et al. 2004, ApJS, 154, 25
Werner, M. W., Roellig, T. L., Low, F. J., et al., 2004, ApJS, 154, 1
---
abstract: 'We derive the complete asymptotic expansion in terms of powers of $N$ for the geodesic $f$-energy of $N$ equally spaced points on a rectifiable simple closed curve $\Gamma$ in ${\mathbb{R}}^p$, $p\geq2$, as $N \to \infty$. For $f$ decreasing and convex, such a point configuration minimizes the $f$-energy $\sum_{j\neq k}f(d(\mathbf{x}_j, \mathbf{x}_k))$, where $d$ is the geodesic distance (with respect to $\Gamma$) between points on $\Gamma$. Completely monotonic functions, analytic kernel functions, Laurent series, and weighted kernel functions $f$ are studied. Of particular interest are the geodesic Riesz potential $1/d^s$ ($s \neq 0$) and the geodesic logarithmic potential $\log(1/d)$. By analytic continuation we deduce the expansion for all complex values of $s$.'
address: 'J. S. Brauchart, D. P. Hardin and E. B. Saff: Center for Constructive Approximation, Department of Mathematics, Vanderbilt University, Nashville, TN 37240, USA '
author:
- 'J. S. Brauchart, D. P. Hardin, and E. B. Saff'
bibliography:
- '/home/jsb/APART\_Fellowship/1stYEAR/PROJECTS/Bibliography/bibliography.bib'
- '/home/jsb/APART\_Fellowship/1stYEAR/PROJECTS/Bibliography/ENERGYbibliography.bib'
title: Discrete Energy Asymptotics on a Riemannian circle
---
Introduction
============
Throughout this article, $\Gamma$ is a [*Riemannian circle*]{} (that is, a rectifiable simple closed curve in ${\mathbb{R}}^p$, $p\geq2$) with length $|\Gamma|$ and associated (Lebesgue) arclength measure $\sigma = \sigma_\Gamma$. We denote by $\ell({\mathbf{x}}, {\mathbf{y}})$ the length of the arc of $\Gamma$ from ${\mathbf{x}}$ to ${\mathbf{y}}$, where ${\mathbf{x}}$ precedes ${\mathbf{y}}$ on $\Gamma$. Thus $\ell({\mathbf{x}}, {\mathbf{y}}) + \ell({\mathbf{y}}, {\mathbf{x}}) = \left| \Gamma \right|$ for all ${\mathbf{x}}, {\mathbf{y}} \in \Gamma$. The [*geodesic distance $\operatorname{d}({\mathbf{x}},{\mathbf{y}})$*]{} between ${\mathbf{x}}$ and ${\mathbf{y}}$ on $\Gamma$ is given by the length of the shorter arc connecting ${\mathbf{x}}$ and ${\mathbf{y}}$, that is $$\label{geodesic.dist}
\begin{split}
\operatorname{d}({\mathbf{x}},{\mathbf{y}}) {{:=}}\operatorname{d}_\Gamma({\mathbf{x}},{\mathbf{y}})
&{{:=}}\min \left\{ \ell({\mathbf{x}}, {\mathbf{y}}), \ell({\mathbf{y}}, {\mathbf{x}}) \right\}
= \frac{\left| \Gamma \right|}{2} - \left| \ell( {\mathbf{x}}, {\mathbf{y}} ) - \frac{\left| \Gamma \right|}{2} \right|.
\end{split}$$ The geodesic distance between two points on $\Gamma$ can be at most $|\Gamma|/2$.
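The closed form on the right of the display above can be checked in a few lines of code (a sketch of ours: points are represented by their arclength positions):

```python
def geodesic_distance(u, v, L):
    """Shorter-arc distance between arclength positions u, v on a
    closed curve of length L."""
    ell = (v - u) % L                  # forward arc length from u to v
    return L / 2 - abs(ell - L / 2)

# Both expressions in the display agree:
# min(ell, L - ell) == L/2 - |ell - L/2|.
L = 5.0
for u, v in [(0.0, 1.0), (0.5, 4.5), (1.0, 3.75)]:
    ell = (v - u) % L
    assert abs(geodesic_distance(u, v, L) - min(ell, L - ell)) < 1e-12

print(geodesic_distance(0.0, 4.0, L))  # 1.0: the shorter arc, not 4.0
```

In particular the value never exceeds $L/2$, as noted in the text.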
Given a lower semicontinuous function $f:[0, | \Gamma | / 2] \to {\mathbb{R}}\cup\{+\infty\}$, the discrete $f$-energy problem is concerned with properties of $N$ point systems ${\mathbf{z}}_{1,N}^*, \dots, {\mathbf{z}}_{N,N}^*$ on $\Gamma$ ($N\geq2$) that minimize the $f$-energy functional$$\label{G.f}
G_f({\mathbf{x}}_1,\dots,{\mathbf{x}}_N) {{:=}}\sum_{j \neq k} f( \operatorname{d}({\mathbf{x}}_j,{\mathbf{x}}_k) ) {{:=}}\mathop{\sum_{j=1}^N\sum_{k=1}^N}_{j \neq k} f ( \operatorname{d}({\mathbf{x}}_j,{\mathbf{x}}_k) ),$$ over all $N$ point configurations $\omega_N$ of not necessarily distinct points ${\mathbf{x}}_1, \dots, {\mathbf{x}}_N$ on $\Gamma$. The following result asserts that equally spaced points (with respect to arclength) on $\Gamma$ are minimal $f$-energy point configurations for a large class of functions $f$.
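A direct transcription of the functional $G_f$ (a sketch of ours; positions are arclength parameters and the perturbation below is arbitrary) also probes the optimality claim numerically for one strictly convex decreasing kernel:

```python
def g_f(positions, L, f):
    """G_f = sum over ordered pairs j != k of f(d(x_j, x_k))."""
    d = lambda u, v: L / 2 - abs((v - u) % L - L / 2)
    return sum(f(d(u, v))
               for i, u in enumerate(positions)
               for j, v in enumerate(positions) if i != j)

# Strictly convex, decreasing Riesz kernel f(x) = x**-2: equally
# spaced points should beat a perturbed configuration.
L, N = 1.0, 8
equal = [n * L / N for n in range(N)]
perturbed = [x + 0.01 * (-1) ** n for n, x in enumerate(equal)]
f = lambda x: x ** (-2)
print(g_f(equal, L, f) < g_f(perturbed, L, f))  # True
```

This is consistent with the uniqueness statement for strictly convex $f$ in the proposition that follows.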
\[prop:optimality\] Let $f:[0,| \Gamma | / 2] \to {\mathbb{R}}\cup\{+\infty\}$ be a lower semicontinuous function.
[(A)]{} If $f$ is convex and decreasing, then the geodesic $f$-energy of $N$ points on $\Gamma$ attains a global minimum at $N$ equally spaced points on $\Gamma$. If $f$ is strictly convex, then these are the only configurations that attain a global minimum.
[(B)]{} If $f$ is concave and decreasing, then the geodesic $f$-energy of $N$ points on $\Gamma$ attains a global minimum at antipodal systems $\omega_N$ with $\lceil N / 2 \rceil$ points at ${\mathbf{p}}$ and $\lfloor N / 2 \rfloor$ points at ${\mathbf{q}}$, where ${\mathbf{p}}$ and ${\mathbf{q}}$ are any pair of points on $\Gamma$ with geodesic distance $| \Gamma | / 2$. If $f$ is strictly concave, then these are the only configurations that attain a global minimum.
Part (A) of Proposition \[prop:optimality\] follows from a standard “winding number argument” that can be traced back to the work of Fejes T[ó]{}th [@Fe1956]. The result in the general form stated here appears explicitly in the work of M. G[ö]{}tz [@Go2003 Proposition 9] who uses a similar notion of “orbits.” For completeness, we present in Section \[sec:proofs\] a brief proof of Part (A).
Alexander and Stolarsky [@AlSt1974] studied the discrete and continuous energy problem for continuous kernel functions $f$ on compact sets. In particular, they established the optimality of vertices of a regular $N$-gon circumscribed by a circle $\mathcal{C}_a$ of radius $a$ for various non-Euclidean metrics $\rho({\mathbf{x}}, {\mathbf{y}})$ (including the geodesic metric) with respect to an energy functional $E_{\sigma,\lambda}({\mathbf{x}}_1, \dots, {\mathbf{x}}_N) {{:=}}\sigma([\rho({\mathbf{x}}_j,{\mathbf{x}}_k)]^\lambda)$, $0 < \lambda \leq 1$, on $\mathcal{C}_a$ where $\sigma$ is an elementary symmetric function on $\binom{N}{2}$ real variables. This result does not extend to the complete class of functions in Proposition \[prop:optimality\] and vice versa. However, both cover the generalized sum of geodesic distances problem.
In the case of Riesz potentials we set $$f_s(x) {{:=}}- x^{-s}, \quad s < 0, \qquad f_{0}(x) {{:=}}\log ( 1 / x ), \qquad f_s(x) {{:=}}x^{-s}, \quad s > 0.$$ Then Proposition \[prop:optimality\](A) asserts that equally spaced points are unique (up to translation along the simple closed curve $\Gamma$) optimal [*geodesic $f_s$-energy*]{} points for $s > -1$. (For $s>0$ this fact is also proved in the dissertation of S. Borodachov [@Bo2006 Lemma V.3.1], see also [@Bo2009pre].) Proposition \[prop:optimality\](B) shows that for $s<-1$ and $N\geq3$, antipodal configurations are optimal $f_s$-energy points, but equally spaced points are [**not**]{}. (We remark that if [*Euclidean*]{} distance is used instead of geodesic distance, then the $N$-th roots of unity on the unit circle cease to be optimal $f_s$-energy points when $s < - 2$, cf. [@Bj1956] and [@BrHaSa2009].)
For $s=-1$ in the geodesic case, equally spaced points are optimal, but so are antipodal and other configurations. Fejes T[ó]{}th [@Fe1959] showed that a configuration on the unit circle is optimal with respect to the [*sum of geodesic distances*]{}[^1] ($s=-1$) if and only if: for an even number of points, the system is centrally symmetric; for an odd number of points, it is the union of a centrally symmetric set and a set $\{{\mathbf{x}}_1, \dots, {\mathbf{x}}_{2k+1}\}$ such that each half circle determined by ${\mathbf{x}}_j$ ($j=1,\dots,2k+1$) contains $k$ of the points in its interior. (This result is reproved in [@Ji2008].) These criteria easily carry over to Riemannian circles. In particular, any system of $N$ equally spaced points on $\Gamma$ and any antipodal system on $\Gamma$ satisfy these criteria.
\[rmk:cohn.kumar\] Equally spaced points on the unit circle are also [*universally optimal*]{} in the sense of Cohn and Kumar [@CoKu2007], that is, they minimize the energy functional $\sum_{j \neq k} f( | {\mathbf{x}}_j - {\mathbf{x}}_k |^2 )$ for any [*completely monotonic*]{} potential function $f$; that is, for a function $f$ satisfying $(-1)^k f^{(k)}(x)>0$ for all integers $k \geq 0$ and all $x \in [0,2]$.
To determine the leading term in the energy asymptotics it is useful to consider the continuous energy problem. Let $\mathfrak{M}(\Gamma)$ denote the class of Borel probability measures supported on $\Gamma$. The [*geodesic $f$-energy*]{} of $\mu \in \mathfrak{M}(\Gamma)$ and the [*minimum geodesic $f$-energy*]{} of $\Gamma$ are defined, respectively, as $$\mathcal{I}_f^g[\mu] {{:=}}\int \int f( \operatorname{d}({\mathbf{x}}, {\mathbf{y}}) ) {\,d}\mu({\mathbf{x}}) {\,d}\mu({\mathbf{y}}), \qquad V_f^g(\Gamma) {{:=}}\inf\left\{ \mathcal{I}_f^g[\mu] : \mu \in \mathfrak{M}(\Gamma) \right\}.$$ The [*continuous $f$-energy problem*]{} concerns the existence, uniqueness, and characterization of a measure $\mu_\Gamma$ satisfying $V_f^g(\Gamma) = \mathcal{I}_f^g[\mu_\Gamma]$. If such a measure exists, it is called an [*equilibrium measure on $\Gamma$*]{}.
\[prop:leading.term\] Let $f$ be a Lebesgue integrable lower semicontinuous function on $[0, | \Gamma |/2]$ and convex and decreasing on $(0, | \Gamma |/2]$. Then the normalized arclength measure $\sigma_\Gamma$ is an equilibrium measure on $\Gamma$ and $$\label{eq:limit}
\lim_{N \to \infty} G_f(\omega_N^{(f)}) / N^2 = V_f^g(\Gamma).$$ If, in addition, $f$ is strictly decreasing, then $\sigma_\Gamma$ is unique.
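For orientation, a short computation (ours, valid whenever $f$ is integrable) makes the value of $V_f^g(\Gamma)$ explicit: parametrizing ${\mathbf{y}}$ by its geodesic distance $u = \operatorname{d}({\mathbf{x}},{\mathbf{y}})$ from a fixed ${\mathbf{x}}$, each value $u \in (0, |\Gamma|/2)$ is attained on an arclength set of measure $2 {\,d}u$, so the energy of the normalized arclength measure collapses to a one-dimensional integral: $$\mathcal{I}_f^g[\sigma_\Gamma] = \int \int f( \operatorname{d}({\mathbf{x}}, {\mathbf{y}}) ) {\,d}\sigma_\Gamma({\mathbf{x}}) {\,d}\sigma_\Gamma({\mathbf{y}}) = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right|/2} f(u) {\,d}u.$$ Under the hypotheses of Proposition \[prop:leading.term\], this integral is therefore the value of $V_f^g(\Gamma)$.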
The proofs of the propositions in this introduction are given in Section \[sec:proofs\].
Note that the limit provides the first term in the asymptotic expansion of $G_f(\omega_N^{(f)})$ for large $N$; that is, $G_f(\omega_N^{(f)}) \sim V_f^g(\Gamma) \, N^2$ as $N\to\infty$. The goal of the present paper is to extend this asymptotic expansion to an arbitrary number of terms. The case when $G_f(\omega_N^{(f)}) / N^2 \to \infty$ as $N \to \infty$ is also studied. For a certain class of functions $f$ it turns out that the leading term is of the form $a_0 2 \operatorname{\zeta}(s_0) | \Gamma |^{-s_0} N^{1+s_0}$ for some $s_0 > 1$, where $a_0 = \lim_{x \to 0^+} x^{s_0} f(x)$ is the coefficient of the dominant term in the asymptotic expansion of $f$ near the origin and $\operatorname{\zeta}(s)$ is the classical Riemann zeta function. However, such a leading term need not even exist. Indeed, if the function $f$ has an essential singularity at $0$ and is otherwise analytic in a sufficiently large annulus centered at zero, then the asymptotics of the geodesic $f$-energy of equally spaced points on $\Gamma$ contains an infinite series part with rising positive powers of $N$ determined by the principal part of the Laurent expansion of $f$ at $0$. Consequently, there is no “highest power of $N$”; see Examples \[eg:ess.sing.1\] and \[eg:ess.sing.2\] below.
An outline of our paper is as follows. In Section \[sec:f.energy\], the geodesic $f$-energy of equally spaced points on $\Gamma$ is investigated. In particular, completely monotonic functions, analytic kernel functions, Laurent series, and weighted kernel functions $f$ are considered. Illustrative examples complement this study. In Section \[sec:geodesic.Riesz.s.energy\], the geodesic logarithmic energy and the geodesic Riesz $s$-energy of equally spaced points on $\Gamma$ are studied. The results are compared with their counterparts when $\operatorname{d}({\mathbf{\cdot}}, {\mathbf{\cdot}})$ is replaced by the Euclidean metric. The proofs of the results are given in Section \[sec:proofs\].
The geodesic $f$-energy of equally spaced points on $\Gamma$ {#sec:f.energy}
============================================================
\[def:main.general.f\] Given a kernel function $f:[0,|\Gamma|/2]\to\mathbb{C} \cup \{+\infty\}$, the [*discrete geodesic $f$-energy*]{} of $N$ equally spaced points ${\mathbf{z}}_{1,N}, \dots, {\mathbf{z}}_{N,N}$ on $\Gamma$ is denoted by $$\mathcal{M}(\Gamma,f;N) {{:=}}\sum_{j\neq k} f( \operatorname{d}({\mathbf{z}}_{j,N},{\mathbf{z}}_{k,N}) ) = N \sum_{j=1}^{N-1} f( \operatorname{d}({\mathbf{z}}_{j,N},{\mathbf{z}}_{N,N}) ).$$
Set $N = 2 M + \kappa$ ($\kappa = 0, 1$). Using the fact that the points are equally spaced, it can be easily shown that $$\label{eq:cal.M.Gamma.f.N}
\mathcal{M}(\Gamma, f; N) = 2 N \sum_{n = 1}^{\lfloor N / 2 \rfloor} f( n \left| \Gamma \right| / N ) - \left( 1 - \kappa \right) f( \left| \Gamma \right| / 2 ) N.$$ An essential observation is that the geodesic $f$-energy has (when expressed in terms of powers of $N$) different asymptotics for even $N$ and odd $N$. We remark that for real-valued functions $f$ a configuration of equally spaced points is optimal with respect to the geodesic $f$-energy defined in , whenever $f$ satisfies the hypotheses of Proposition \[prop:optimality\](A).
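The parity-dependent identity above can be verified numerically (a sketch of ours; the kernel chosen below is arbitrary but finite on $(0, |\Gamma|/2]$):

```python
import math

L = 2.0                              # |Gamma|
f = lambda x: math.exp(-x) / x       # any kernel finite on (0, L/2]

def energy_double_sum(N):
    """M(Gamma, f; N) as the double sum over equally spaced points."""
    pts = [n * L / N for n in range(N)]
    d = lambda u, v: L / 2 - abs((v - u) % L - L / 2)
    return sum(f(d(u, v)) for i, u in enumerate(pts)
               for j, v in enumerate(pts) if i != j)

def energy_single_sum(N):
    """The single-sum form 2N*sum f(n L/N) - (1-kappa) f(L/2) N."""
    kappa = N % 2
    return (2 * N * sum(f(n * L / N) for n in range(1, N // 2 + 1))
            - (1 - kappa) * f(L / 2) * N)

for N in (6, 7, 12, 13):             # both parities
    assert abs(energy_double_sum(N) - energy_single_sum(N)) < 1e-9
print("single-sum form agrees with the double sum")
```

The correction term $-(1-\kappa) f(|\Gamma|/2) N$ compensates for the antipodal distance $|\Gamma|/2$ being attained only once per point when $N$ is even.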
An application of the generalized Euler-MacLaurin summation formula (see Proposition \[prop:Euler-MacLaurin.Summation\] below) yields an exact formula for $\mathcal{M}(\Gamma, f; N)$ in terms of powers of $N$. The asymptotic analysis of this expression motivates the following definition.
\[def:admissible\] A function $f:[0,|\Gamma|/2]\to\mathbb{C}\cup\{+\infty\}$ is called [*admissible*]{} if the following holds:
1. $f$ has a continuous derivative of order $2p+1$ on the interval $(0, | \Gamma | / 2]$;
2. there exists a function $S_q(x)$ of the form $S_q(x) = \sum_{n=0}^{q} a_n \, x^{-s_n}$, where $a_n$ and $s_n$ ($n=0,\dots,q$) are complex numbers with ${\mathop{\mathrm{Re}}}s_0 > {\mathop{\mathrm{Re}}}s_1 > \cdots > {\mathop{\mathrm{Re}}}s_q$ [^2] and ${\mathop{\mathrm{Re}}}s_q + 2p > 0$ or $s_q = -2p$ such that for some $\delta>0$
1. $1 - {\mathop{\mathrm{Re}}}s_q + \delta > 0$,
2. $\displaystyle \int_0^x \left\{ f(y) - S_{q}(y) \right\} {\,d}y = \mathcal{O}(x^{1+\delta-s_q})$ as $x \to 0^+$,
3. $\displaystyle \left\{ f(x) - S_{q}(x) \right\}^{(\nu)} = \mathcal{O}(x^{\delta-s_q-\nu})$ as $x \to 0^+$ for all $\nu = 0, 1, \dots, 2p+1$.
For $p\geq1$ an integer the following sum arises in the main theorems describing the asymptotics of $\mathcal{M}(\Gamma, f; N)$: $$\mathcal{B}_p(\Gamma, f; N) {{:=}}\frac{2}{\left| \Gamma \right|} N^2 \sum_{n = 1}^{p} \frac{B_{2n}(\kappa/2)}{(2n)!} \left( \left| \Gamma \right| / N \right)^{2n} f^{(2n-1)}( \left| \Gamma \right| / 2 ), \qquad N = 2M + \kappa, \ \kappa = 0, 1, \label{eq:B.p}$$ where $B_m(x)$ denotes the Bernoulli polynomial of degree $m$ defined by $$\frac{z}{e^z-1} e^{x z} = \sum_{m=0}^\infty \frac{B_m(x)}{m!} \, z^m, \qquad B_m(x) = \sum_{k=0}^m \binom{m}{k} B_{m-k} x^k,$$ where $B_0=1$, $B_1=-1/2$, …are the so-called [*Bernoulli numbers*]{}. Recall that $B_{2k+1}=0$, $(-1)^{k-1}B_{2k}>0$ for $k=1,2,3,\dots$, and $B_{n}(1/2) = ( 2^{1-n} - 1 ) B_n$ for $n\geq0$ ([@AbSt1992]).
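The Bernoulli-number facts quoted above can be checked with exact rational arithmetic (a sketch using the standard recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k=0$; the function names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(nmax):
    """B_0, ..., B_nmax (convention B_1 = -1/2) as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m))
                 / Fraction(m + 1))
    return B

B = bernoulli_numbers(10)
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6)
assert all(B[2 * k + 1] == 0 for k in range(1, 5))  # B_3 = B_5 = ... = 0

def bernoulli_poly(n, x):
    """B_n(x) = sum_k C(n,k) B_{n-k} x^k."""
    return sum(comb(n, k) * B[n - k] * x ** k for k in range(n + 1))

half = Fraction(1, 2)
assert all(bernoulli_poly(n, half) == (Fraction(2) ** (1 - n) - 1) * B[n]
           for n in range(11))
print("B_n(1/2) = (2^(1-n) - 1) B_n holds for n = 0..10")
```

Exact fractions avoid the rounding issues that plague floating-point evaluation of $B_n(x)$ for larger $n$.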
\[thm:general.f.general.case\] Let $f$ be admissible in the sense of Definition \[def:admissible\] and suppose none of $s_0, s_1, \dots, s_q$ equals $1$. Then, for $N = 2 M + \kappa$ with $\kappa = 0$ or $\kappa = 1$, $$\label{eq:general.case.asymptotics}
\mathcal{M}(\Gamma, f; N) = V_f(\Gamma) \, N^2 + \sum_{n=0}^q a_n \frac{2 \operatorname{\zeta}(s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} + \mathcal{B}_p(\Gamma, f; N) + \mathfrak{R}_p(\Gamma, f; N),$$ where $$\label{eq:general.case.V.f}
V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{n=0}^q a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1-s_n}}{1-s_n} + \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} ( f - S_q )(x) {\,d}x$$ and the remainder term satisfies $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{1-\delta+s_q} )$ as $N\to\infty$ if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$, whereas $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} \log N)$ if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$.
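As a concrete sanity check of the theorem (a numerical sketch of ours, not from the paper), take $f(x)=x^{-2}$ and $|\Gamma|=1$, so $S_q = f$, $a_0=1$, $s_0=2$, and $V_f(\Gamma)=-4$; the $n=1$ term of $\mathcal{B}_1(\Gamma,f;N)$ contributes $-8/3$ for even $N$ and $4/3$ for odd $N$, using $f'(1/2)=-16$, $B_2(0)=1/6$ and $B_2(1/2)=-1/12$:

```python
import math

ZETA2 = math.pi ** 2 / 6              # zeta(2)

def energy_exact(N):
    """M(Gamma, f; N) for f(x) = x**-2, |Gamma| = 1, equally spaced."""
    kappa = N % 2
    return (2 * N * sum((n / N) ** (-2) for n in range(1, N // 2 + 1))
            - (1 - kappa) * 4.0 * N)  # f(|Gamma|/2) = 4

for N in (200, 201):
    const = -8 / 3 if N % 2 == 0 else 4 / 3    # B_1-term, kappa = 0, 1
    predicted = 2 * ZETA2 * N ** 3 - 4 * N ** 2 + const
    assert abs(energy_exact(N) - predicted) < 0.01
print("expansion verified through the constant term, both parities")
```

The check makes visible both the leading $2\operatorname{\zeta}(2)N^3$ term and the parity dependence carried by $\mathcal{B}_p(\Gamma,f;N)$.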
The next result involves the [*Euler-Mascheroni constant*]{} defined by $$\gamma {{:=}}\lim_{n\to\infty} \left( 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{n} - \log n \right).$$
\[thm:general.f.exceptional.case\] Let $f$ be admissible in the sense of Definition \[def:admissible\] and $s_{q^\prime}=1$ for some $1 \leq q^\prime \leq q$.[^3] Then, for $N = 2 M + \kappa$ with $\kappa = 0$ or $\kappa = 1$, $$\begin{aligned}
\mathcal{M}(\Gamma, f; N)
&= \frac{2}{\left| \Gamma \right|} a_{q^\prime} \, N^2 \log N + V_f(\Gamma) \, N^2 + \sum_{\substack{n=0,\\ n\neq q^\prime}}^q a_n \frac{2 \operatorname{\zeta}(s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} + \mathcal{B}_p(\Gamma, f; N) + \mathfrak{R}_p(\Gamma, f; N),\end{aligned}$$ where $$\label{eq:except.case.V.f}
V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \Bigg\{ \sum_{\substack{n=0,\\ n\neq q^\prime}}^q a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1-s_n}}{1-s_n} + \int_0^{\left| \Gamma \right| / 2} ( f - S_q )(x) {\,d}x - a_{q^\prime} \left( \log 2 - \gamma \right) \Bigg\}$$ and the remainder term satisfies $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{1-\delta+s_q} )$ as $N\to\infty$ if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$, whereas $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} \log N)$ if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$.
Both Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\] show that only the coefficients of the nonpositive even powers of $N$ depend on the parity of $N$. These dependencies appear in the sum $\mathcal{B}_p(\Gamma, f; N)$.
If $f(z) \equiv S_q(z) = \sum_{n=0}^q a_n z^{-s_n}$ for some $q$ and ${\mathop{\mathrm{Re}}}s_0 > \cdots > {\mathop{\mathrm{Re}}}s_q$, then all expressions in Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\] containing $f-S_q$ vanish. In general, the remainder term $\mathfrak{R}_p(\Gamma, f; N)$ is of order $\mathcal{O}( N^{1-2p} )$, where the integer $p$ satisfies ${\mathop{\mathrm{Re}}}s_q + 2p >0$. In particular, this holds for the Riesz kernels (cf. Theorems \[thm:main\] and \[thm:s.EQ.1\] below).
Completely monotonic functions {#completely-monotonic-functions .unnumbered}
------------------------------
A non-constant [*completely monotonic*]{} function $f:(0,\infty) \to \mathbb{R}$ has derivatives of all orders and satisfies $(-1)^k f^{(k)}(x) > 0$ for all $x > 0$ and $k = 0, 1, 2, \dots$ (cf. [@Du1940]).[^4] In particular, it is a continuous strictly decreasing convex function. Therefore, by Proposition \[prop:optimality\], equally spaced points are optimal $f$-energy configurations on the Riemannian circle $\Gamma$.
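To make the sign pattern concrete, here is a small Python check (our illustration; the function name is ours) for the Riesz kernel $f(x) = x^{-s}$, $s > 0$, whose derivatives are available in closed form via the Pochhammer symbol:

```python
def riesz_derivative(s, k, x):
    """k-th derivative of f(x) = x**(-s):
    f^(k)(x) = (-1)**k * (s)_k * x**(-s - k), with (s)_k the Pochhammer symbol."""
    poch = 1.0
    for j in range(k):
        poch *= s + j
    return (-1) ** k * poch * x ** (-s - k)

# Complete-monotonicity sign pattern: (-1)**k * f^(k)(x) > 0 for s > 0, x > 0.
print(all((-1) ** k * riesz_derivative(0.5, k, 1.3) > 0 for k in range(8)))  # True
```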
By Bernstein’s theorem [@Wi1946 p. 161] a function is completely monotonic on $(0,\infty)$ if and only if it is the Laplace transform $f(x) = \int_0^\infty e^{-x t} {\,d}\mu(t)$ of some nonnegative measure $\mu$ on $[0,\infty)$ such that the integral converges for all $x>0$.
The following result applies in particular to completely monotonic functions.
\[thm:completely.monotonic\] Let $f$ be the Laplace transform $f(x) = \int_0^\infty e^{-x t} {\,d}\mu(t)$ for some signed Borel measure $\mu$ on $[0,\infty)$ such that $\int_0^\infty t^m {\,d}|\mu|(t)$, $m = 0, 1, 2, \dots$, are all finite. Then for all integers $p\geq1$ and $N = 2M + \kappa$ with $\kappa=0, 1$ $$\mathcal{M}(\Gamma, f; N) = \left\{ \frac{2}{\left| \Gamma \right|} \int_0^{\infty} \frac{1-e^{-t \left| \Gamma \right|/2}}{t} {\,d}\mu(t) \right\} N^2 + \sum_{n=0}^{2p} (-1)^n \frac{\mu_n}{n!} \frac{2\operatorname{\zeta}(-n)}{\left| \Gamma \right|^{-n}} N^{1-n} + \mathcal{B}_p( \Gamma, f; N) + \mathcal{O}(N^{1-2p}), $$ where $\mu_m {{:=}}\int_0^\infty t^m {\,d}\mu(t)$ denotes the $m$-th moment of $\mu$.
The derivation of the (complete) asymptotic expansion of $\mathcal{M}(\Gamma, f;N)$ as $N\to\infty$ for Laplace transforms for which not all moments $\mu_m$ are finite depends on more detailed knowledge of the behavior of $f(x)$ near the origin. For example, for integral transforms $G(x) = \int_0^\infty h(x t) g(t) {\,d}t$ there is a well-established theory of the asymptotic expansion of $G(x)$ at $0^+$; see [@HaLe1970], [@HaLe1971], [@BeHa1975] or [@Lo2008] and [@Du1979]. These expansions give rise to results similar to our theorem above.
Recently, Koumandos and Pedersen [@KoPe2009] studied so-called [*completely monotonic functions of integer order $r\geq0$*]{}, that is, functions $f$ for which $x^r f(x)$ is completely monotonic. The completely monotonic functions of order $0$ are the classical completely monotonic functions; those of order $1$ are the so-called [*strongly completely monotonic functions*]{}, for which $(-1)^k x^{k+1} f^{(k)}(x)$ is nonnegative and decreasing on $(0,\infty)$. In [@KoPe2009] it is shown that $f$ is completely monotonic of order $\alpha>0$ ($\alpha$ real) if and only if $f$ is the Laplace transform of a fractional integral of a positive Radon measure on $[0,\infty)$; that is, $$f(x) = \int_0^\infty e^{-x t} \mathcal{J}_\alpha[\mu](t) {\,d}t, \qquad \mathcal{J}_\alpha[\mu](t) {{:=}}\frac{1}{\Gamma(\alpha)} \int_0^t \left( t - s \right)^{\alpha-1} {\,d}\mu(s).$$ Results similar to Theorem \[thm:completely.monotonic\] hold for these kinds of functions. However, the problem of giving an asymptotic expansion of $f(x)$ near the origin is more subtle.
Analytic kernel functions {#analytic-kernel-functions .unnumbered}
-------------------------
If $f$ is analytic in a disc with radius $| \Gamma | / 2 + {\varepsilon}$ (${\varepsilon}> 0$) centered at the origin, then $f$ is admissible in the sense of Definition \[def:admissible\] and we have the following result.
\[thm:analytic.f\] Let $f(z) = \sum_{n=0}^\infty a_n z^n$ be analytic in $|z| < | \Gamma | / 2 + {\varepsilon}$, ${\varepsilon}>0$. Then for $N = 2 M + \kappa$ with $\kappa = 0$ or $\kappa = 1$ $$\mathcal{M}(\Gamma, f; N) = \left\{ \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} f(x) {\,d}x \right\} N^2 + \sum_{n=0}^{2p} a_n \frac{2 \operatorname{\zeta}(-n)}{\left| \Gamma \right|^{-n}} N^{1-n} + \mathcal{B}_p(\Gamma, f; N) + \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p}).$$ Note that $\operatorname{\zeta}(0) = -1/2$ and $\operatorname{\zeta}(-2k) = 0$ for $k = 1, 2, 3, \dots$.
If $f(x) = e^{-x}$, then for any positive integer $p$: $$\begin{aligned}
\begin{split}
\mathcal{M}(\Gamma, f; N)
&= \frac{2}{\left| \Gamma \right|} \left( 1 - e^{-\left| \Gamma \right|/2} \right) N^2 - N + \sum_{n=1}^{p} \frac{1}{(2n-1)!} \frac{2 \operatorname{\zeta}(1-2n)}{\left| \Gamma \right|^{1-2n}} N^{2-2n} \\
&\phantom{=}- \sum_{n=1}^p \frac{B_{2n}(\kappa/2)}{(2n)!} \frac{2 e^{- \left| \Gamma \right| / 2}}{\left| \Gamma \right|^{1-2n}} N^{2-2n} + \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p})
\end{split}\end{aligned}$$ as $N = 2 M + \kappa \to \infty$, where the notation of the last term indicates that the $\mathcal{O}$-constant depends on $p,|\Gamma|$ and $f$. Since $f(x)$ is a strictly decreasing convex function, by Proposition \[prop:optimality\](A), equally spaced points are also optimal $f$-energy points. Thus, the relation above gives the complete asymptotics for the optimal $N$-point geodesic $e^{-(\cdot)}$-energy on Riemannian circles.
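As a numerical sanity check (our addition; the helper name is ours), one can compare the exact energy sum of equally spaced points with the first two terms of this expansion. The omitted terms are $\mathcal{O}(1)$ in $N$:

```python
import math

def exp_energy(L, N):
    """Geodesic e^{-x}-energy of N equally spaced points on a circle of length L:
    N * sum_{k=1}^{N-1} exp(-d_k), with d_k = min(k, N - k) * L / N."""
    return N * sum(math.exp(-min(k, N - k) * L / N) for k in range(1, N))

L, N = 2 * math.pi, 2000   # even N, so kappa = 0
leading = 2 / L * (1 - math.exp(-L / 2)) * N**2 - N
# The omitted terms of the expansion are O(1) in N (about 1.09 in absolute value here).
print(abs(exp_energy(L, N) - leading))
```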
Laurent series kernels {#laurent-series-kernels .unnumbered}
----------------------
If $f(z)$ is analytic in the annulus $0 < |z| < | \Gamma | / 2 + {\varepsilon}$ (${\varepsilon}> 0$) with a pole at $z=0$, then $f$ is admissible in the sense of Definition \[def:admissible\] and we obtain the following result.
\[thm:Laurent.series\] Let $f$ be analytic in the annulus $0 < |z| < | \Gamma | / 2 + {\varepsilon}$ (${\varepsilon}> 0$) having there the Laurent series expansion $f(z) = \sum_{n=-K}^\infty a_n z^n$, $K \geq 1$.
[(i)]{} If the residue $a_{-1}=0$, then for $N = 2 M + \kappa$ with $\kappa = 0, 1$ $$\mathcal{M}(\Gamma, f; N) = V_f(\Gamma) \, N^2 + \sum_{\substack{n=-K, \\ n \neq -1}}^{2p} a_n \frac{2 \operatorname{\zeta}(-n)}{\left| \Gamma \right|^{-n}} N^{1-n} + \mathcal{B}_p(\Gamma, f; N) + \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p}),$$ where the $N^2$-coefficient is $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{n=-K}^{\infty} a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n}}{1+n}.$$
[(ii)]{} If the residue $a_{-1} \neq 0$, then for $N = 2 M + \kappa$ with $\kappa = 0, 1$ $$\begin{aligned}
\mathcal{M}(\Gamma, f; N)
&= \frac{2}{\left| \Gamma \right|} a_{-1} \, N^2 \log N + V_f(\Gamma) \, N^2 + \sum_{\substack{n=-K,\\ n\neq -1}}^{2p} a_n \frac{2 \operatorname{\zeta}(-n)}{\left| \Gamma \right|^{-n}} N^{1-n} + \mathcal{B}_p(\Gamma, f; N) + \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p}),\end{aligned}$$ where the $N^2$-coefficient is $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \Bigg\{ \sum_{\substack{n=-K,\\ n\neq -1}}^\infty a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n}}{1+n} - a_{-1} \left( \log 2 - \gamma \right) \Bigg\}.$$
Next, we give two examples of kernels $f$, each having an essential singularity at $0$. Such kernels can also be treated in the given framework, since they satisfy an extended version of Definition \[def:admissible\]; see the proofs of Examples \[eg:ess.sing.1\] and \[eg:ess.sing.2\] in Section \[sec:proofs\].
\[eg:ess.sing.1\] Let $f(x) = e^{1/x} = \sum_{n=0}^\infty 1/ (n! x^n)$, $x\in(0,+\infty)$, $f(0)=+\infty$. We define the entire function $$F(z) {{:=}}\sum_{n=2}^\infty \frac{\operatorname{\zeta}(n)}{n!} z^n = - \gamma z - \frac{1}{2\pi i} \oint_{|w|=\rho<1} e^{z/w} \operatorname{\psi}(1-w) {\,d}w, \qquad z \in \mathbb{C},$$ where $\operatorname{\psi}(z)$ denotes the digamma function and we observe that, because of $0 < \operatorname{\zeta}(n) - 1 < c 2^{-n}$ for all integers $n \geq 2$ for some $c>0$, $$F(x) = e^x - 1 - x + \sum_{n=2}^\infty \frac{\operatorname{\zeta}(n)-1}{n!} x^n = e^x + \mathcal{O}(e^{x/2}) \qquad \text{as $x\to \infty$.}$$ Then $$\begin{split}
\mathcal{M}(\Gamma, f; N) &= 2 N F(N/\left|\Gamma\right|) + \frac{2}{\left| \Gamma \right|} \, N^2 \log N + V_f(\Gamma) \, N^2 - N \\
&\phantom{=}+ \sum_{n = 1}^{p} \frac{2 B_{2n}(\kappa/2)}{(2n)! \left| \Gamma \right|^{1-2n}} N^{2-2n} f^{(2n-1)}( \left| \Gamma \right| / 2 ) + \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p}),
\end{split}$$ where $$\begin{aligned}
V_f(\Gamma)
&= 1 + \frac{2}{\left| \Gamma \right|} \sum_{n=2}^\infty \frac{1}{n!} \frac{\left( \left| \Gamma \right| / 2 \right)^{1-n}}{1-n} - \frac{2}{\left| \Gamma \right|} \left( \log 2 - \gamma \right) \\
&= e^{2/\left| \Gamma \right|} - \frac{2}{\left| \Gamma \right|} \left\{ 1 - 2 \gamma + \log \left| \Gamma \right| + \operatorname{Ei}(2 / \left| \Gamma \right|) \right\},\end{aligned}$$ where $\operatorname{Ei}(x) = - \int_{-x}^\infty e^{-t} t^{-1} {\,d}t$ is the exponential integral (taking the Cauchy principal value of the integral). In particular it follows that $$\lim_{N\to\infty} \frac{\mathcal{M}(\Gamma, f; N)}{N \, e^{N/\left|\Gamma\right|}} = 2.$$ Since $f$ is a strictly decreasing convex function on $(0,\infty)$, by Proposition \[prop:optimality\](A), equally spaced points are also optimal. Thus, the above expansion gives the asymptotics of the optimal $N$-point $e^{1/(\cdot)}$-energy.
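The stated limit reflects that the two nearest-neighbor pairs dominate the energy sum; a short numerical check (our illustration, taking $|\Gamma| = 2\pi$ and a generic energy helper of our own naming):

```python
import math

def geodesic_energy(f, L, N):
    """Geodesic f-energy of N equally spaced points on a circle of length L."""
    return N * sum(f(min(k, N - k) * L / N) for k in range(1, N))

# Nearest-neighbor pairs dominate: M(Gamma, f; N) ~ 2 N e^{N/|Gamma|}.
L = 2 * math.pi
for N in (50, 100, 200):
    M = geodesic_energy(lambda x: math.exp(1 / x), L, N)
    print(M / (N * math.exp(N / L)))  # approaches 2
```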
\[eg:ess.sing.2\] Let $\operatorname{J}_k(\lambda) = (-1)^k \operatorname{J}_{-k}(\lambda) {{:=}}\frac{1}{2\pi} \int_0^{2\pi} \cos( k \theta - \lambda \sin \theta ) {\,d}\theta$ denote the [*Bessel function of the first kind of order $k$*]{} whose generating function relation is given by (cf. [@SaSn1993 Exercise 5.5(10)]) $$f(x) = \exp\left[ \frac{\lambda}{2} \left( x - \frac{1}{x} \right) \right] = \sum_{n=-\infty}^\infty \operatorname{J}_n(\lambda) x^n \qquad \text{for $|x|>0$.}$$ For integers $m\geq2$ we define the entire functions $$\begin{aligned}
F_m(z) &{{:=}}\sum_{n=m}^\infty \operatorname{J}_{-n}(\lambda) \operatorname{\zeta}(n) z^n = \sum_{k=1}^\infty G_m(z/k), \quad G_m(z) {{:=}}\sum_{n=m}^\infty \operatorname{J}_{-n}(\lambda) z^n, \qquad z \in \mathbb{C}.\end{aligned}$$ If $\lambda$ is a zero of the Bessel function $\operatorname{J}_{-1}$, then for positive integers $p$ and $m$ $\geq2$ there holds $$\begin{split}
\mathcal{M}(\Gamma, f; N) &= 2 N F_m( N / \left| \Gamma \right| ) + 2 \sum_{n=2}^{m-1} \operatorname{J}_{-n}(\lambda) \operatorname{\zeta}(n) \left| \Gamma \right|^{-n} N^{1+n} + V_f(\Gamma) \, N^2 + \left| \Gamma \right| B_2(\frac{\kappa}{2}) f^\prime( \left| \Gamma \right| / 2 ) \\
&\phantom{=}+ \sum_{n=2}^p \left\{ \frac{2 B_{2n}}{2n} \frac{f^{(2n-1)}(\left| \Gamma \right| / 2)}{(2n-1)!} + 2 \operatorname{J}_{2n-1}(\lambda) \operatorname{\zeta}(1-2n) \right\} \left| \Gamma \right|^{2n-1} N^{2-2n} + \mathcal{O}(N^{1-2p})
\end{split}$$ where $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{\substack{n=-\infty, \\ n \neq \pm 1}}^{\infty} \operatorname{J}_n(\lambda) \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n}}{1+n}.$$ If, in addition, $\lambda<0$, then $f(x)$ is a strictly decreasing convex function and, therefore, $\mathcal{M}(\Gamma, f; N)$ is also the minimal $N$-point $f$-energy on $\Gamma$ and it follows from the observation $$G_m(x/k) = \exp\left[ - \frac{\lambda}{2} \left( \frac{x}{k} - \frac{k}{x} \right) \right] - \sum_{n=-\infty}^{m-1} J_n(\lambda) (-x/k)^n, \qquad k = 1, 2, 3, \dots,$$ that $$\lim_{N\to\infty} \frac{\mathcal{M}(\Gamma, f; N)}{N f(-N/|\Gamma|)} = 2.$$ If $\lambda$ is not a zero of $\operatorname{J}_{-1}$, then the above asymptotics must be modified to include a logarithmic term.
The [*weighted*]{} kernel function $f_s^w(x) = x^{-s} w(x)$ {#the-weighted-kernel-function-f_swx-x-s-wx .unnumbered}
-----------------------------------------------------------
Given a weight function $w(x)$, the kernel $f_s^w(x) = x^{-s} w(x)$ gives rise to the so-called [*geodesic weighted Riesz $s$-energy*]{} of an $N$-point configuration $({\mathbf{x}}_1, \dots, {\mathbf{x}}_N)$ $$G_s^w({\mathbf{x}}_1, \dots, {\mathbf{x}}_N) {{:=}}\sum_{j \neq k} \frac{w(\operatorname{d}({\mathbf{x}}_j,{\mathbf{x}}_k))}{\left[ \operatorname{d}({\mathbf{x}}_j,{\mathbf{x}}_k) \right]^{s}}.$$ For the Euclidean metric the related weighted energy functionals are studied in [@BoHaSa2008].
If $w(x)$ is such that $f_s^w(x)$ is admissible in the sense of Definition \[def:admissible\], then Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\] provide asymptotic expansions for the weighted geodesic Riesz $s$-energy of equally spaced points on a Riemannian circle $\Gamma$, which are also optimal configurations if $f_s^w(x)$ is strictly decreasing and convex (cf. Proposition \[prop:optimality\](A)).
\[thm:weighted.f\] Let $w(z) = \sum_{n=0}^\infty a_n z^n$ be analytic in $|z| < | \Gamma | / 2 + {\varepsilon}$, ${\varepsilon}>0$. Set $f_s^w(z) {{:=}}z^{-s} w(z)$. Then for integers $p,q>0$ and $s\in\mathbb{C}$, $s$ not an integer, such that $q-2p < {\mathop{\mathrm{Re}}}s < 2 + q$ we have $$\mathcal{M}(\Gamma, f_s^w; N) = V_{f_s^w}(\Gamma) \, N^2 + \sum_{n=0}^{q} a_n \frac{2 \operatorname{\zeta}(s-n)}{\left| \Gamma \right|^{s-n}} N^{1+s-n} + \mathcal{B}_p(\Gamma, f_s^w; N) + \mathfrak{R}_p(\Gamma, f_s^w; N),$$ where $\mathcal{B}_p$ is defined as before. The $N^2$-coefficient is the meromorphic continuation to $\mathbb{C}$ of the geodesic $f_s^w$-energy of $\Gamma$ given by $( 2 / | \Gamma | ) \int_0^{| \Gamma | / 2} f_s^w(x) {\,d}x$ for $0 < s < 1$; that is, $$V_{f_s^w}(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{n=0}^{\infty} a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n-s}}{1+n-s}, \qquad s \neq 1, 2, 3, \dots.$$ The remainder $\mathfrak{R}_p(\Gamma, f_s^w; N)$ is of order $\mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{s-2p} )$ as $N\to\infty$.
If $s$ is a positive integer, the series $\sum_{n=0}^\infty a_n z^{n-s}$ is the Laurent expansion of $f(z)$ in $0 < |z| < | \Gamma | / 2 + {\varepsilon}$ and Theorem \[thm:Laurent.series\] applies. If $s$ is a non-positive integer, the series $\sum_{n=0}^\infty a_n z^{n-s}$ is the power series expansion of $f(z)$ in $|z| < | \Gamma | / 2 + {\varepsilon}$ and Theorem \[thm:analytic.f\] applies.
Let $w(z) = \sin( z \pi / | \Gamma | )$. Then for ${\mathop{\mathrm{Re}}}s>0$ not an integer $$f_s^w(z) = z^{-s} w(z) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} \left( \pi / \left| \Gamma \right| \right)^{2n+1} z^{2n+1-s}$$ and, by Theorem \[thm:weighted.f\], the geodesic weighted Riesz $s$-energy of $N$ equally spaced points has the asymptotic expansion ($0 < {\mathop{\mathrm{Re}}}s < 1 + 2p$) $$\mathcal{M}(\Gamma, f_s^w; N) = V_{f_s^w}(\Gamma) \, N^2 + \left( \pi / \left| \Gamma \right| \right)^{s} \sum_{k=1}^{p} \frac{(-1)^{k-1}}{(2k-1)!} \frac{2 \operatorname{\zeta}(1+s-2k)}{\pi^{1+s-2k}} N^{2+s-2k} + \mathcal{B}_p(\Gamma, f_s^w; N) + \mathfrak{R}_p(\Gamma, f_s^w; N),$$ where $\mathcal{B}_p(\Gamma, f_s^w; N)$ is defined as before. The remainder $\mathfrak{R}_p(\Gamma, f_s^w; N)$ is of order $\mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{s-2p} )$ as $N \to \infty$ and $$V_{f_s^w}(\Gamma) = \frac{2}{\pi} \left( \left| \Gamma \right| / \pi \right)^{-s} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{(2k-1)!} \frac{\left( \pi / 2 \right)^{2k-s}}{2k-s}.$$ For $0 < s < 1$ we have $$V_{f_s^w}(\Gamma) = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} f_s^w(x) {\,d}x = \frac{\pi}{2} \frac{\left( \left| \Gamma \right| / 2 \right)^{-s}}{2-s} {{\sideset{_1}{_2}\operatorname{F}\!\left(\substack{\displaystyle1-s/2\\\displaystyle2-s/2,3/2};-\left( \pi / 4 \right)^2\right)}}$$ expressed in terms of a generalized ${{\sideset{_1}{_2}\operatorname{F}\!\left(\substack{\displaystyle×\\\displaystyle×};×\right)}}$-hypergeometric function, which is analytic in $s$ as long as $s$ is not an even integer. Hence, $V_{f_s^w}(\Gamma)$ is the meromorphic continuation to the complex plane of the integral $\frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} f_s^w(x) {\,d}x$. We observe that for $s=1/2$ we have $V_{f_s^w}(\Gamma) = 2 \sqrt{ 2 / | \Gamma | } \operatorname{S}(1)$, where $\operatorname{S}(u)$ is the Fresnel integral $\operatorname{S}(u) {{:=}}\int_0^u \sin( x^2 \pi / 2 ) {\,d}x$.
As an application of the theorems of this section, we recover results recently given in [@BrHaSa2009] regarding the complete asymptotic expansion of the Euclidean Riesz $s$-energy $\mathcal{L}_s(N)$ of the $N$-th roots of unity on the unit circle $\mathbb{S}^1$ in the complex plane $\mathbb{C}$. Indeed, if $| z - \zeta |$ denotes the Euclidean distance between two points $\zeta$ and $z$ in $\mathbb{C}$, then from the identities $| z - \zeta |^2 = 2 ( 1 - \cos \psi ) = 4 [ \sin ( \psi / 2) ]^2$, where $\psi$ denotes the angle “between” $\zeta$ and $z$ on $\mathbb{S}^1$, we obtain the following relation between Euclidean and geodesic Riesz $s$-kernel: $$\left| z - \zeta \right|^{-s} = \left| 2 \left( 1 - \cos \psi \right) \right|^{-s/2} = \left| 2 \sin \frac{\psi}{2} \right|^{-s} = \left| 2 \sin \frac{\operatorname{d}(\zeta,z)}{2} \right|^{-s}, \qquad \zeta,z \in \mathbb{S}^1.$$ Thus, for $\zeta,z \in \mathbb{S}^1$ there holds $$\left| z - \zeta \right|^{-s} = f_s^w(\operatorname{d}(\zeta,z)), \qquad w(x) {{:=}}\left( \operatorname{sinc}\frac{x}{2} \right)^{-s}, \qquad f_s^w(x) = x^{-s} \operatorname{sinc}^{-s}(x/2),$$ where the “sinc” function, defined as $\operatorname{sinc}z = ( \sin z ) / z$, is an entire function that is non-zero for $|z|<\pi$ and hence has a logarithm $g(z)= \log \operatorname{sinc}z$ that is analytic for $|z|<\pi$ (we choose the branch such that $\log \operatorname{sinc}0=0$). The function $\operatorname{sinc}^{-s} (z/2) {{:=}}\exp[-s \log \operatorname{sinc}(z/2)]$ is even and analytic on the disc $|z|<2\pi$ and thus has a power series representation of the form $$\operatorname{sinc}^{-s} (z/2) = \sum_{n=0}^\infty \alpha_n(s) z^{2n}, \quad |z|<2\pi, \, s\in {\mathbb{C}}.$$ It is easily seen that for $s>-1$ and $s\neq0$ the function $(\operatorname{sgn}s) f_s^w(x)$ [^5] is a convex and decreasing function. 
Hence, application of Proposition \[prop:optimality\](A) reproves the well-known fact that the $N$-th roots of unity and their rotated copies are the only optimal $f_s^w$-energy configurations for $s$ in the range $(-1,0)\cup(0,\infty)$. (We remind the reader that, in contrast to the geodesic case, in the Euclidean case the $N$-th roots of unity are optimal for $s\geq-2$, $s\neq0$, and they are unique up to rotation for $s>-2$, see discussion in [@BrHaSa2009].) The complete asymptotic expansion of $\mathcal{L}_s(N) = \mathcal{M}(\mathbb{S}^1,f_s^w; N)$ can be obtained from Theorem \[thm:weighted.f\] if $s$ is not an integer, from Theorem \[thm:Laurent.series\] if $s$ is a positive integer, and from Theorem \[thm:analytic.f\] if $s$ is a negative integer. (We leave the details to the reader.) For $s\in\mathbb{C}$ with $s\neq 0, 1, 3, 5, \dots$ and $q-2p < {\mathop{\mathrm{Re}}}s < 2 + q$, the Euclidean Riesz $s$-energy for the $N$-th roots of unity is given by (cf. [@BrHaSa2009 Theorem 1.1]) $$\label{eq:cal.L.s.N}
\mathcal{L}_s(N) = V_s \, N^2 + \frac{2\operatorname{\zeta}(s)}{(2\pi)^s} N^{1+s} + \sum_{n=1}^{q} \alpha_n(s) \frac{2\operatorname{\zeta}(s-2n)}{(2\pi)^{s-2n}} N^{1+s-2n} + \mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{s-2p} )$$ as $N \to \infty$, where (cf. [@BrHaSa2009]) $$\begin{aligned}
\label{eq:V.s.alpha.n}
V_s &= \frac{2^{-s}\operatorname{\Gamma}((1-s)/2)}{\sqrt{\pi}\operatorname{\Gamma}(1-s/2)}, &\qquad \alpha_n(s) &= \frac{(-1)^n B_{2n}^{(s)}(s/2)}{(2n)!}, \quad n = 0, 1, 2, \dots.\end{aligned}$$ Here, $B_n^{(\alpha)}(x)$ denotes the generalized Bernoulli polynomial, where $B_n(x) = B_n^{(1)}(x)$. Notice the absence of the term $\mathcal{B}_p(\Gamma, f_s^w; N)$, which follows from the fact that odd derivatives of $f_s^w(x)$ evaluated at $\pi$ assume the value $0$. (This can be seen, for example, from Faà di Bruno’s differentiation formula.)
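For $s=2$ the expansion can be checked against a closed form. Assuming the classical cosecant identity $\sum_{k=1}^{N-1} [\sin(\pi k / N)]^{-2} = (N^2-1)/3$ (our addition, not taken from the text), one gets $\mathcal{L}_2(N) = N(N^2-1)/12$, whose leading coefficient agrees with $2\operatorname{\zeta}(2)/(2\pi)^2 = 1/12$; a Python sketch:

```python
import math

def euclidean_riesz_energy(s, N):
    """L_s(N) = sum over ordered pairs of N-th roots of unity of |z_j - z_k|**(-s)
    = N * sum_{k=1}^{N-1} (2 sin(pi k / N))**(-s)."""
    return N * sum((2 * math.sin(math.pi * k / N)) ** (-s) for k in range(1, N))

# For s = 2 the cosecant identity gives the closed form N (N**2 - 1) / 12,
# whose N**3-coefficient matches 2 * zeta(2) / (2 pi)**2 = 1/12.
N = 500
print(abs(euclidean_riesz_energy(2, N) - N * (N**2 - 1) / 12))
```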
The entirety of positive odd integers $s$ constitutes the class of exceptional cases regarding the Euclidean Riesz $s$-energy of the $N$-th roots of unity. For such $s$ Theorem \[thm:Laurent.series\](ii) provides the asymptotic expansion of $\mathcal{L}_s(N) = \mathcal{M}(\mathbb{S}^1,f_s^w;N)$, which features an $N^2 \log N$ term as leading term. That is, for $s=2L+1$, $L=0,1,2,\dots$, we have from Theorem \[thm:Laurent.series\](ii) that (cf. [@BrHaSa2009 Thm. 1.2]) $$\label{eq:L.s.N}
\mathcal{L}_s(N) = \frac{\alpha_L(s)}{\pi} N^2 \log N + V_{f_s^w}(\mathbb{S}^1) N^2 + \sum_{\substack{m=0, \\ m \neq L}}^{p+L} \alpha_m(s) \frac{2\operatorname{\zeta}(s-2m)}{\left( 2 \pi \right)^{s-2m}} N^{1+s-2m} + \mathcal{O}(N^{1-2p}),$$ where the coefficients $\alpha_m(s)$ are given above and $$V_{f_s^w}(\mathbb{S}^1) = \frac{1}{\pi} \Bigg\{ \sum_{\substack{m=0, \\ m \neq L}}^\infty \alpha_m(s) \frac{\pi^{2m+1-s}}{2m+1-s} - \alpha_L(s) \left( \log 2 - \gamma \right) \Bigg\}.$$ We remark that in [@BrHaSa2009 Thm. 1.2] we also give a computationally more accessible representation of $V_{f_s^w}(\mathbb{S}^1)$. The appearance of the $N^2 \log N$ term can be understood by observing that the constant $V_s$ has its simple poles at the positive odd integers $s$; when using a limit process as $s\to K$ ($K$ a positive odd integer), the simple pole at $s=K$ needs to be compensated by the simple pole of the Riemann zeta function in the coefficient of an appropriate lower-order term. This interplay eventually produces the $N^2 \log N$ term.
The geodesic Riesz $s$-energy of equally spaced points on $\Gamma$ {#sec:geodesic.Riesz.s.energy}
==================================================================
Here, we state theorems concerning the geodesic Riesz $s$-energy of equally spaced points on $\Gamma$ that follow from the results of the preceding section together with asymptotic properties of generalized harmonic numbers. The proofs are given in Section \[sec:proofs\].
\[def:main\] The [*discrete geodesic Riesz $s$-energy*]{} of $N$ equally spaced points ${\mathbf{z}}_{1,N}, \dots, {\mathbf{z}}_{N,N}$ on $\Gamma$ is given by $$\mathcal{M}_s(\Gamma;N) {{:=}}\sum_{j\neq k} \left[ \operatorname{d}({\mathbf{z}}_{j,N},{\mathbf{z}}_{k,N}) \right]^{-s}
= N \sum_{j=1}^{N-1} \left[ \operatorname{d}({\mathbf{z}}_{j,N},{\mathbf{z}}_{N,N}) \right]^{-s}, \qquad s \in \mathbb{C}.$$ The [*discrete logarithmic geodesic energy*]{} of $N$ equally spaced points ${\mathbf{z}}_{1,N}, \dots, {\mathbf{z}}_{N,N}$ on $\Gamma$ enters in a natural way by taking the limit $$\mathcal{M}_{\mathrm{log}}(\Gamma; N) {{:=}}\lim_{s\to0} \frac{\mathcal{M}_s(\Gamma; N)-N(N-1)}{s} = \sum_{j\neq k} \log\frac{1}{\operatorname{d}({\mathbf{z}}_{j,N},{\mathbf{z}}_{k,N})}. \label{M.0}$$
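The limit defining the logarithmic energy can be illustrated numerically (our sketch; the helper names are ours): for small $s$, the difference quotient $(\mathcal{M}_s(\Gamma; N) - N(N-1))/s$ is close to the direct logarithmic sum.

```python
import math

def geodesic_riesz_energy(s, L, N):
    """M_s(Gamma; N) for N equally spaced points; d_k = min(k, N - k) * L / N."""
    return N * sum((min(k, N - k) * L / N) ** (-s) for k in range(1, N))

def geodesic_log_energy(L, N):
    """Direct evaluation of sum_{j != k} log(1 / d(z_j, z_k))."""
    return N * sum(-math.log(min(k, N - k) * L / N) for k in range(1, N))

# The difference quotient (M_s - N(N-1)) / s at small s approximates M_log.
L, N, s = 2 * math.pi, 100, 1e-6
quotient = (geodesic_riesz_energy(s, L, N) - N * (N - 1)) / s
print(abs(quotient - geodesic_log_energy(L, N)))  # small
```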
We are interested in the asymptotics of $\mathcal{M}_s(\Gamma; N)$ for large $N$ for all values of $s$ in the complex plane and we shall compare them with the related asymptotics for the Euclidean case given in our recent paper [@BrHaSa2009]. In the following we use the notation $$\begin{aligned}
\mathcal{I}_s^g[\mu] &{{:=}}\int \int \frac{{\,d}\mu({\mathbf{x}}) {\,d}\mu({\mathbf{y}})}{\left[ \operatorname{d}({\mathbf{x}},{\mathbf{y}}) \right]^s}, &\qquad V_s^g(\Gamma) &{{:=}}\inf\{ \mathcal{I}_s^g[\mu] : \mu \in \mathfrak{M}(\Gamma) \}, \\
\mathcal{I}_{\mathrm{log}}^g[\mu] &{{:=}}\int \int \log \frac{1}{\operatorname{d}({\mathbf{x}},{\mathbf{y}})} {\,d}\mu({\mathbf{x}}) {\,d}\mu({\mathbf{y}}), &\qquad V_{\mathrm{log}}^g(\Gamma) &{{:=}}\inf\{ \mathcal{I}_{\mathrm{log}}^g[\mu] : \mu \in \mathfrak{M}(\Gamma) \}.\end{aligned}$$
The geodesic logarithmic energy
-------------------------------
\[thm.M.0\] Let $q$ be a positive integer. For $N = 2 M + \kappa$, $\kappa = 0, 1$ $$\mathcal{M}_{\mathrm{log}}(\Gamma; N) = V_{\mathrm{log}}^g( \Gamma) \, N^2 - N \log N + N \log \frac{\left| \Gamma \right|}{2\pi} - \sum_{n=1}^q \frac{B_{2n}(\kappa / 2)}{\left( 2 n - 1 \right) 2 n} 2^{2n} N^{2-2n} + \mathcal{O}_{q,\kappa}(N^{-2q})$$ as $N \to \infty$. Here, $V_{\mathrm{log}}^g( \Gamma) = 1 - \log ( | \Gamma | / 2 )$.
The parity of $N$ affects the coefficients of the powers $N^{2-2m}$, $m \geq 1$. The $N^2$-term vanishes for curves $\Gamma$ with $| \Gamma | = 2 e$ and the $N$-term vanishes when $| \Gamma | = 2 \pi$. By contrast, the Euclidean logarithmic energy of $N$ equally spaced points on the unit circle is given by (cf. [@BrHaSa2009]) $$\mathcal{L}_{\mathrm{log}}(N) = - N \log N.$$
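Theorem \[thm.M.0\] can be verified numerically; the following Python sketch (our illustration) takes $|\Gamma| = 2\pi$ and even $N$, so that $\kappa = 0$, $B_2(0) = 1/6$, and the $q=1$ truncation leaves an $\mathcal{O}(N^{-2})$ error:

```python
import math

def geodesic_log_energy(L, N):
    """sum_{j != k} log(1 / d(z_j, z_k)) for N equally spaced points on a
    circle of geodesic length L; d_k = min(k, N - k) * L / N."""
    return N * sum(-math.log(min(k, N - k) * L / N) for k in range(1, N))

L, N = 2 * math.pi, 1000          # even N, so kappa = 0 and B_2(0) = 1/6
V_log = 1 - math.log(L / 2)
# q = 1 truncation: the n = 1 correction term is (1/6)/(1*2) * 2**2 = 1/3.
formula = V_log * N**2 - N * math.log(N) + N * math.log(L / (2 * math.pi)) - 1 / 3
print(abs(geodesic_log_energy(L, N) - formula))  # O(N^{-2})
```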
The geodesic Riesz $s$-energy
-----------------------------
The next result provides the complete asymptotic formula for all $s\neq1$. This exceptional case, in which a logarithmic term arises, is described in Theorem \[thm:s.EQ.1\].
\[thm:main\] Let $q$ be a positive integer. Then for all $s \in \mathbb{C}$ with $s\neq 1$ and ${\mathop{\mathrm{Re}}}s + 2q \geq 0$ there holds $$\mathcal{M}_s(\Gamma; N) = V_s^g( \Gamma ) \, N^2 + \frac{2\operatorname{\zeta}(s)}{\left| \Gamma \right|^s} N^{1+s} - \frac{1}{\left( \left| \Gamma \right| / 2 \right)^s} \sum_{n=1}^q \frac{B_{2n}(\kappa/2)}{(2n)!} {{\left(s\right)_{2n-1}}} 2^{2n} N^{2-2n} + \mathcal{O}_{s,q,\kappa}(N^{-2q}) \label{gen:asympt.1}$$ as $N\to\infty$, where $V_s^g( \Gamma ) = ( | \Gamma | / 2 )^{-s} / ( 1 - s )$ and $N = 2M + \kappa$, $\kappa = 0, 1$.
In the expansion above, ${{\left(s\right)_{n}}}$ denotes the Pochhammer symbol, defined by ${{\left(s\right)_{0}}} = 1$ and ${{\left(s\right)_{n+1}}} = ( n + s ) {{\left(s\right)_{n}}}$ for integers $n\geq0$.
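As a numerical sanity check of Theorem \[thm:main\] (our illustration, with helper names of our choosing), take $s = 2$ and $|\Gamma| = 2\pi$; with even $N$ the $q = 1$ truncation leaves an $\mathcal{O}(N^{-2})$ error:

```python
import math

def geodesic_riesz_energy(s, L, N):
    """M_s(Gamma; N) = N * sum_k d_k**(-s), d_k = min(k, N - k) * L / N."""
    return N * sum((min(k, N - k) * L / N) ** (-s) for k in range(1, N))

s, L, N = 2, 2 * math.pi, 2000    # even N: kappa = 0, B_2(0) = 1/6
zeta_s = math.pi**2 / 6           # zeta(2)
# q = 1 correction: -(L/2)**(-s) * (1/6)/2! * (s)_1 * 2**2 = -(L/2)**(-s) * s/3.
formula = ((L / 2) ** (-s) / (1 - s) * N**2
           + 2 * zeta_s / L**s * N ** (1 + s)
           - (L / 2) ** (-s) * s / 3)
print(abs(geodesic_riesz_energy(s, L, N) - formula))  # O(N^{-2})
```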
It is interesting to compare this expansion with its Euclidean counterpart. It should be noted that in both the geodesic and the Euclidean case the respective asymptotics have an $N^2$-term whose coefficient is the respective energy integral of the limit distribution (which is the normalized arc-length measure) or its appropriate analytic continuation, and an $N^{1+s}$-term with the coefficient $2 \operatorname{\zeta}(s) / | \Gamma |^s$. Regarding the latter, it has been shown in [@MaMaRa2004] that for $s>1$ the dominant term of the asymptotics for the (Euclidean) Riesz $s$-energy of optimal energy $N$-point systems on any one-dimensional rectifiable curve in ${\mathbb{R}}^{p}$ is given by $\left( 2 \operatorname{\zeta}(s) / | \Gamma |^s \right) N^{1+s}$. Regarding the remaining terms of the asymptotics of $\mathcal{M}_s(\Gamma;N)$ and $\mathcal{L}_s(N)$, one sees that the exponents of the powers of $N$ do not depend on $s$ in the geodesic case but do depend on $s$ in the Euclidean case.
In the general case $s\neq1$ the asymptotic series expansion is divergent, except for $s=0,-1,-2, \dots$, for which the infinite series reduces to a finite sum. The divergence follows from properties of the Bernoulli numbers and the reduction from properties of the Pochhammer symbol ${{\left(a\right)_{n}}}$.
For a negative integer $s$ we have the following result.
\[prop:M.neg.p\] Let $p$ be a positive integer. Then $$\begin{split}
\mathcal{M}_{-p}(\Gamma; N) &= \frac{\left( \left| \Gamma \right| / 2 \right)^p}{p+1} N^2 + \frac{\left( \left| \Gamma \right| / 2 \right)^p}{p+1} \sum_{n=1}^{\lfloor p / 2 \rfloor} \binom{p+1}{2n} B_{2n}(\kappa/2) \, 2^{2n} N^{2-2n} \\
&\phantom{=\pm}+ \frac{2 \left| \Gamma \right|^p}{p+1} \left( B_{p+1}(\kappa/2) - B_{p+1} \right) N^{1-p}
\end{split} \label{M.s.spec.odd}$$ for $N = 2M + \kappa$, $\kappa = 0, 1$. The right-most term above vanishes for even $p$.
The corresponding [*Euclidean*]{} Riesz $(-p)$-energy of $N$-th roots of unity reduces to $$\mathcal{L}_{-p}(N) = V_{-p} N^2 \quad \text{if $p=2,4,6,\dots$.}$$
The quantity $\mathcal{M}_{-1}(\mathbb{S}; N)$ gives the maximum sum of [*geodesic*]{} distances on the unit circle. Corollary \[prop:M.neg.p\] yields $$\label{eq:M.neg.1}
\mathcal{M}_{-1}(\mathbb{S}; N) = \frac{\pi}{2} \left( N^2 - \kappa \right), \qquad \text{$N = 2 M + \kappa$, $\kappa = 0, 1$.}$$ We remark that L. Fejes T[ó]{}th [@Fe1959] conjectured (and proved for $N \leq 6$) that the maximum sum of geodesic distances on the [*unit sphere $\mathbb{S}^2$ in $\mathbb{R}^3$*]{} is also given by the right-hand side in . This conjecture was proved by Sperling [@Sp1960] for even $N$ [^6] and by Larcher [@La1962] for odd $N$.[^7] An essential observation is that the sum of geodesic distances does not change if a given pair of antipodal points $({\mathbf{x}}, {\mathbf{x}}^\prime)$ is rotated simultaneously, since $\operatorname{d}({\mathbf{x}},{\mathbf{y}}) + \operatorname{d}({\mathbf{x}}^\prime,{\mathbf{y}}) = \pi$ for every ${\mathbf{y}} \in \mathbb{S}^2$.
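The closed form for the maximum sum of geodesic distances is easy to verify exactly (a small Python sketch of our own, using the unit circle of circumference $2\pi$):

```python
import math

def geodesic_distance_sum(N):
    """Sum of pairwise geodesic distances of N equally spaced points on the
    unit circle (length 2*pi); d_k = min(k, N - k) * 2*pi / N."""
    return N * sum(min(k, N - k) * 2 * math.pi / N for k in range(1, N))

# Closed form: (pi/2) * (N**2 - kappa), where kappa = N mod 2.
for N in (4, 5, 10, 11):
    print(abs(geodesic_distance_sum(N) - math.pi / 2 * (N**2 - N % 2)))
```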
In the exceptional case $s=1$ a logarithmic term appears.
\[thm:s.EQ.1\] Let $q \geq 1$ be an integer. For $N = 2 M + \kappa$, $\kappa = 0, 1$, $$\label{M.s.EQ.1}
\begin{split}
\mathcal{M}_1(\Gamma; N) &= \frac{2}{\left| \Gamma \right|} N^2 \log N - \frac{\log2-\gamma}{\left| \Gamma \right| / 2} N^2 - \frac{2}{\left| \Gamma \right|} \sum_{n=1}^q \frac{B_{2n}(\kappa/2)}{2n} 2^{2n} N^{2-2n} \\
&\phantom{=\pm}- \theta_{q,N,\kappa} \frac{2}{\left| \Gamma \right|} \frac{B_{2q+2}(\kappa/2)}{2q+2} 2^{2q+2} N^{-2q},
\end{split}$$ where $0 < \theta_{q,N,\kappa} \leq 1$ depends on $q$, $N$ and $\kappa$.
\[rmk:s.EQ.1\] A comparison of the asymptotics above with the corresponding result for the Euclidean Riesz $1$-energy of the $N$-th roots of unity (cf. [@BrHaSa2009 Thm. 1.2]), $$\mathcal{L}_1(N) = \frac{1}{\pi} N^2 \log N + \frac{\gamma - \log ( \pi / 2 )}{\pi} N^2 + \sum_{n=1}^q \frac{(-1)^n B_{2n}(1/2)}{(2n)!} \frac{2\operatorname{\zeta}(1-2n)}{\left(2\pi\right)^{1-2n}} N^{2-2n} + \mathcal{O}(N^{1-2q}),$$ shows that for $| \Gamma | = 2 \pi$ the dominant term is the same, whereas the coefficients of all other powers of $N$ differ. The latter is obvious for the $N^2$-term; for the $N^{2-2n}$-terms it follows from the fact that the coefficient in the geodesic expansion multiplied by $\pi$ is rational whereas the coefficient in the asymptotics for $\mathcal{L}_1(N)$ multiplied by $\pi$ is transcendental. Interestingly, except for $s=1$, there are no other exceptional cases with an $N^2 \log N$ term in the asymptotics of $\mathcal{M}_s(\Gamma; N)$, whereas in the asymptotics of $\mathcal{L}_s(N)$ an $N^2 \log N$ term appears whenever $s$ is a positive odd integer, cf. [@BrHaSa2009 Thm. 1.2].
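The exceptional-case expansion of Theorem \[thm:s.EQ.1\] can also be checked numerically (our illustration; the reference value of $\gamma$ is assumed), again with $|\Gamma| = 2\pi$ and even $N$:

```python
import math

def geodesic_riesz_energy(s, L, N):
    """M_s(Gamma; N) = N * sum_k d_k**(-s), d_k = min(k, N - k) * L / N."""
    return N * sum((min(k, N - k) * L / N) ** (-s) for k in range(1, N))

EULER_GAMMA = 0.5772156649015329  # assumed reference value of gamma
L, N = 2 * math.pi, 2000          # even N: kappa = 0, B_2(0) = 1/6
# q = 1 truncation: correction -(2/L) * (1/6)/2 * 2**2 = -2/(3L).
formula = (2 / L * N**2 * math.log(N)
           - (math.log(2) - EULER_GAMMA) / (L / 2) * N**2
           - 2 / (3 * L))
print(abs(geodesic_riesz_energy(1, L, N) - formula))  # O(N^{-2})
```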
Proofs {#sec:proofs}
======
[**Part (A).**]{} The proof utilizes the “winding number” argument of L. Fejes T[ó]{}th. The key idea is to regroup the terms of the energy sum according to the order $m$ of the nearest neighbors ($m=1,\dots,N-1$) and then to use convexity and Jensen’s inequality.
W.l.o.g. we assume that ${\mathbf{w}}_1, \dots, {\mathbf{w}}_N$ on $\Gamma$ are ordered such that ${\mathbf{w}}_k$ precedes ${\mathbf{w}}_{k+1}$ (denoted ${\mathbf{w}}_k \prec {\mathbf{w}}_{k+1} $). We identify ${\mathbf{w}}_{j+N}$ with ${\mathbf{w}}_j$ for $j=1,\dots,N-1$. By convexity $$\label{eq:sum1}
\sum_{j=1}^N \sum_{\begin{subarray}{c} k = 1 \\ k \neq j \end{subarray}}^N f(\operatorname{d}({\mathbf{w}}_j,{\mathbf{w}}_k)) = N \sum_{k=1}^{N-1} \Big[ \frac{1}{N} \sum_{j=1}^N f(\operatorname{d}({\mathbf{w}}_j,{\mathbf{w}}_{j+k})) \Big] \geq N \sum_{k=1}^{N-1} f( \frac{1}{N} \sum_{j=1}^N \operatorname{d}( {\mathbf{w}}_j, {\mathbf{w}}_{j+k}) ).$$ Let ${\mathbf{z}}_{1,N} \prec \dots \prec {\mathbf{z}}_{N,N}$ be $N$ equally spaced (with respect to the metric $\operatorname{d}$) points on $\Gamma$. Set ${\mathbf{z}}_{0,N} = {\mathbf{z}}_{N,N}$. Assuming further that this metric $\operatorname{d}$ also satisfies $$\label{main.property}
\frac{1}{N} \sum_{j=1}^N \operatorname{d}( {\mathbf{x}}_j, {\mathbf{x}}_{j+k} ) \leq \operatorname{d}({\mathbf{z}}_{0,N},{\mathbf{z}}_{k,N}), \qquad k = 1, \dots, N - 1,$$ for every ordered $N$-point configuration ${\mathbf{x}}_1 \prec \dots \prec {\mathbf{x}}_N$ with ${\mathbf{x}}_j = {\mathbf{x}}_{j+N}$, it follows that $$G_f({\mathbf{w}}_1, \dots, {\mathbf{w}}_N) \geq N \sum_{k=1}^{N-1} f( \operatorname{d}({\mathbf{z}}_{0,N},{\mathbf{z}}_{k,N})) {{=:}}\mathcal{M}_f(\Gamma; N) = G_f({\mathbf{z}}_{1,N},\dots, {\mathbf{z}}_{N,N}).$$ It remains to show that the geodesic distance satisfies . From $$\operatorname{d}({\mathbf{x}}_j,{\mathbf{x}}_k) = \min \left\{ \ell( {\mathbf{x}}_j, {\mathbf{x}}_k ), | \Gamma | - \ell( {\mathbf{x}}_j, {\mathbf{x}}_k ) \right\} \qquad \text{if $0 \leq k - j < N$}$$ and additivity of the distance function $\ell( {\mathbf{\cdot}}, {\mathbf{\cdot}} )$ it follows that $$\sum_{j=1}^N \operatorname{d}( {\mathbf{x}}_j, {\mathbf{x}}_{j+k} ) \leq
\begin{cases}
\displaystyle \sum_{j=1}^N \ell( {\mathbf{x}}_j, {\mathbf{x}}_{j+k} ) = \sum_{j=1}^N \sum_{n=1}^k \ell( {\mathbf{x}}_{j+n-1}, {\mathbf{x}}_{j+n} ) = \left| \Gamma \right| k, \\
\displaystyle \sum_{j=1}^N \left( | \Gamma | - \ell( {\mathbf{x}}_j, {\mathbf{x}}_{j+k} ) \right) = \left| \Gamma \right| \left( N - k \right)
\end{cases}$$ and therefore $$\frac{1}{N} \sum_{j=1}^N \operatorname{d}( {\mathbf{x}}_j, {\mathbf{x}}_{j+k} ) \leq \min\{ \left| \Gamma \right| k / N, \left| \Gamma \right| \left( N - k \right) / N \} = \operatorname{d}({\mathbf{z}}_{0,N},{\mathbf{z}}_{k,N}).$$ In the case of a strictly convex function $f$ we have equality in if and only if the points are equally spaced. This shows uniqueness (up to translation along the simple closed curve $\Gamma$) of equally spaced points.
[**Part (B).**]{} Given $N = 2 M + \kappa$ ($\kappa = 0, 1$) let $\omega_N$ denote the antipodal set with $M + \kappa$ points placed at the North Pole and $M$ points at the South Pole of $\Gamma$, where both Poles can be any two points on $\Gamma$ with geodesic distance $|\Gamma|/2$. Thus, the geodesic distance between two points in $\omega_N$ is either $0$ or $|\Gamma|/2$. Hence $$\label{G.f.gen}
G_f(\omega_N) = 2 M \left( M + \kappa \right) f( \left| \Gamma \right| / 2 ) = \frac{1}{2} f( \left| \Gamma \right| / 2 ) \left( N^2 - \kappa \right).$$ Since adding a constant to $G_f$ does not change the positions of optimal $f$-energy points, we may assume w.l.o.g. that $f(0)=0$. In fact, we will prove the equivalent assertion that if $f$ is a non-constant convex and increasing function with $f(0) = 0$, then the functional $G_f$ has a maximum at $\omega_N$, which is unique (up to translation along $\Gamma$) if $f$ is strictly increasing. (Note that by these assumptions $f(x) \geq 0$.) Indeed, any $N$-point system $X_N$ of points ${\mathbf{x}}_1, \dots, {\mathbf{x}}_N$ from $\Gamma$ satisfies $$\begin{aligned}
G_f(X_N)
&= f( \left| \Gamma \right| / 2 ) \sum_{j \neq k} \frac{f( \operatorname{d}({\mathbf{x}}_j, {\mathbf{x}}_k) )}{f( \left| \Gamma \right| / 2 )} \leq f( \left| \Gamma \right| / 2 ) \sum_{j \neq k} \frac{\operatorname{d}({\mathbf{x}}_j, {\mathbf{x}}_k)}{\left| \Gamma \right| / 2}= f( \left| \Gamma \right| / 2 ) \frac{G_{\mathrm{id}}(X_N)}{\left| \Gamma \right| / 2} \\
&\leq f( \left| \Gamma \right| / 2 ) \frac{G_{\mathrm{id}}(\omega_N)}{\left| \Gamma \right| / 2} = \frac{1}{2} f( \left| \Gamma \right| / 2 ) \left( N^2 - \kappa \right),\end{aligned}$$ where we used that antipodal configurations are optimal for the “sum of distance function” ($f$ is the identity function $\mathrm{id}$) and relation with $f \equiv \mathrm{id}$. Note that the first inequality is strict if there is at least one pair $(j,k)$ such that $0 < \operatorname{d}({\mathbf{x}}_j, {\mathbf{x}}_k) < | \Gamma | / 2$. On the other hand, if $X_N = \omega_N$, then equality holds everywhere.
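Both optimality claims in Parts (A) and (B) lend themselves to a quick numerical sanity check. The sketch below (an illustration of ours, not part of the proof) parametrizes $\Gamma$ by arc length, so the geodesic distance is the shorter arc, and verifies that for the convex increasing kernel $f(x)=x^2$ with $f(0)=0$ the antipodal configuration attains the value $\frac{1}{2} f(|\Gamma|/2)(N^2-\kappa)$ and is never beaten by random configurations:

```python
import random

def geo(a, b, L):
    """Geodesic distance between arc-length positions a, b on a closed curve of length L."""
    d = abs(a - b) % L
    return min(d, L - d)

def G(points, L, f):
    """G_f: sum of f(d(x_j, x_k)) over ordered pairs j != k."""
    return sum(f(geo(p, q, L)) for i, p in enumerate(points)
               for j, q in enumerate(points) if i != j)

L, N = 1.0, 9                      # N = 2M + kappa with M = 4, kappa = 1
M, kappa = N // 2, N % 2
f = lambda x: x * x                # convex, increasing, f(0) = 0

antipodal = [0.0] * (M + kappa) + [L / 2] * M
G_max = G(antipodal, L, f)
assert abs(G_max - 0.5 * f(L / 2) * (N * N - kappa)) < 1e-12

random.seed(1)
for _ in range(500):               # random configurations never exceed G_max
    pts = [random.uniform(0, L) for _ in range(N)]
    assert G(pts, L, f) <= G_max + 1e-12
```

For a strictly increasing $f$ the inequality is strict for any configuration containing a pair at geodesic distance strictly between $0$ and $|\Gamma|/2$, in line with the uniqueness statement.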
For Lebesgue integrable functions $f$ the minimum geodesic $f$-energy $V_f^g(\Gamma)$ is finite, since $\mathcal{I}_f^g[\sigma_\Gamma] = \int f( \operatorname{d}({\mathbf{x}}, {\mathbf{y}}) ) {\,d}\sigma_\Gamma({\mathbf{x}}) = ( 2 / | \Gamma | ) \int_0^{| \Gamma | / 2} f(\ell) {\,d}\ell \neq \infty$ (${\mathbf{y}} \in \Gamma$ arbitrary). Moreover, for lower semicontinuous functions $f$, a standard argument (see [@La1972]) shows that the sequence $\{G_f(\omega_N^{(f)}) / [ N ( N - 1 ) ] \}_{N\geq2}$ is monotonically increasing. Since $f$ is Lebesgue integrable, this sequence is bounded from above by $\mathcal{I}_f^g[\sigma_\Gamma]$; thus, the limit $\lim_{N\to\infty} G_f(\omega_N^{(f)}) / N^2$ exists in this case. If $f$ also satisfies the hypotheses of Proposition \[prop:optimality\](A), then $\lim_{N\to\infty} G_f(\omega_N^{(f)}) / N^2 = \mathcal{I}_f^g[\sigma_\Gamma]$. (By a standard argument, one constructs a family of continuous functions $F_{\varepsilon}(x)$ with $F_{\varepsilon}(x) = f(x)$ outside of ${\varepsilon}$-neighborhoods at points of discontinuity of $f$, $f(x) \geq F_{\varepsilon}(x)$ everywhere and $\lim_{{\varepsilon}\to0} F_{\varepsilon}(x) = f(x)$ wherever $f$ is continuous at $x$. Then the lower bound follows from weak-star convergence of $\nu[\omega_N^{(f)}]$ as $N \to \infty$ and, subsequently, letting ${\varepsilon}\to 0$.)
We next present some auxiliary results that are needed to prove the main Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\]. We begin with the following generalized Euler-MacLaurin summation formula.
\[prop:Euler-MacLaurin.Summation\] Let $\omega = 0$ or $\omega = 1/2$. Let $M \geq 2$. Then for any function $h$ with continuous derivative of order $2p+1$ on the interval $[1 - \omega, M + \omega]$ we have $$\begin{split}
\sum_{k = 1}^M h(k)
&= \int_a^b h(x) {\,d}x + \left( 1 / 2 - \omega \right) \left\{ h(a) + h(b) \right\} + \sum_{k = 1}^p \frac{B_{2k}(\omega)}{(2k)!} \left\{ h^{(2k-1)}(b) - h^{(2k-1)}(a) \right\} \\
&\phantom{=}+ \frac{1}{(2p+1)!} \int_a^b C_{2p+1}(x) h^{(2p+1)}(x) {\,d}x, \qquad a = 1 - \omega, b = M + \omega,
\end{split}$$ where $C_{k}(x)$ is the periodized Bernoulli polynomial $B_{k}(x-\lfloor x\rfloor)$.
For $\omega = 0$, the above formula is the classical Euler-MacLaurin summation formula (cf., for example, [@Ap1999]). For $\omega = 1 / 2$, iterated application of integration by parts yields the desired result.
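As a concrete check of the $\omega = 1/2$ case, take $h(x) = x^3$ and $p = 1$: the $(1/2-\omega)$ boundary term vanishes, $B_2(1/2) = -1/12$, and since $h'''$ is constant and $C_3$ integrates to zero over the integer-length interval $[1/2, M+1/2]$, the remainder integral is exactly zero. A short sketch in exact rational arithmetic (ours, for illustration):

```python
from fractions import Fraction as F

# h(x) = x^3 with omega = 1/2: the (1/2 - omega) boundary term vanishes,
# B_2(1/2) = -1/12, and for a cubic the p = 1 remainder integral is zero.
def euler_maclaurin_half(M):
    a, b = F(1, 2), F(M) + F(1, 2)
    integral = (b**4 - a**4) / 4                          # integral of x^3 over [a, b]
    correction = F(-1, 12) / 2 * (3 * b**2 - 3 * a**2)    # B_2(1/2)/2! * [h'(b) - h'(a)]
    return integral + correction

for M in range(2, 20):
    assert euler_maclaurin_half(M) == sum(F(k)**3 for k in range(1, M + 1))
```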
Let $f$ have a continuous derivative of order $2p+1$ on the interval $(0 , | \Gamma | / 2]$. Then applying Proposition \[prop:Euler-MacLaurin.Summation\] with $h(x) = f( x | \Gamma | / N )$ and $\omega = \kappa / 2$, where $N = 2M+\kappa\geq2$, $\kappa=0,1$, we obtain $$\begin{aligned}
\mathcal{M}(\Gamma, f; N)
&= 2 N \sum_{n = 1}^{\lfloor N / 2 \rfloor} f( n \left| \Gamma \right| / N ) - \left( 1 - \kappa \right) f( \left| \Gamma \right| / 2 ) N = 2 N \int_{1-\omega}^{N/2} f( x | \Gamma | / N ) {\,d}x \\
&\phantom{=}+ 2 \left( \frac{1}{2} - \omega \right) N \left\{ f( (1-\omega) | \Gamma | / N ) + f( | \Gamma | / 2 ) \right\} + 2 N \sum_{k = 1}^p \frac{B_{2k}(\omega)}{(2k)!} \left. \left\{f( x | \Gamma | / N )\right\}^{(2k-1)} \right|_{1-\omega}^{N/2} \\
&\phantom{=}+ \frac{2 N}{(2p+1)!} \int_{1-\omega}^{N/2} C_{2p+1}(x) \left\{f( x | \Gamma | / N )\right\}^{(2p+1)}(x) {\,d}x - 2 \left( \frac{1}{2} - \omega \right) f( \left| \Gamma \right| / 2 ) N.\end{aligned}$$ Regrouping the terms in the last relation and using the fact that $B_{2k+1} = B_{2k+1}(1/2) =0$ for $k=1,2,3,\dots$ and $B_1(\omega)=\omega-1/2$, we derive the exact representation $$\mathcal{M}(\Gamma, f; N) = N^2 \, \frac{2}{\left| \Gamma \right|} \int_{\left( 1 - \omega \right) \left| \Gamma \right| / N}^{\left| \Gamma \right| / 2} f(y) {\,d}y - \mathcal{A}_p(\Gamma, f; N) + \mathcal{B}_p(\Gamma, f; N) + \mathcal{R}_p(\Gamma, f; N) \label{eq:term.general}$$ valid for every integer $N \geq 2$, where
$$\begin{aligned}
\mathcal{A}_p(\Gamma, f; N) &{{:=}}- 2 B_1(\omega) f( (1-\omega) | \Gamma | / N ) - 2 N \sum_{k = 1}^p \frac{B_{2k}(\omega)}{(2k)!} \left. \left\{f( x | \Gamma | / N )\right\}^{(2k-1)} \right|_{1-\omega} \notag \\
&= \frac{2}{\left| \Gamma \right|} N^2 \sum_{r = 1}^{2p} \frac{B_{r}(\omega)}{r!} \left( \left| \Gamma \right| / N \right)^{r} f^{(r-1)}( \left( 1 - \omega \right) \left| \Gamma \right| / N ), \label{eq:A.p} \\
\mathcal{B}_p(\Gamma, f; N) &{{:=}}\frac{2}{\left| \Gamma \right|} N^2 \sum_{k = 1}^{p} \frac{B_{2k}(\omega)}{(2k)!} \left( \left| \Gamma \right| / N \right)^{2k} f^{(2k-1)}( \left| \Gamma \right| / 2 ), \label{eq:B.p...} \\
\mathcal{R}_p(\Gamma, f; N) &{{:=}}2 N \frac{\left( \left| \Gamma \right| / N \right)^{2p+1}}{(2p+1)!} \int_{1-\omega}^{N / 2} C_{2p+1}( x ) f^{(2p+1)}( x \left| \Gamma \right| / N) {\,d}x. \label{eq:R.p}\end{aligned}$$
If $f$ is admissible in the sense of Definition \[def:admissible\], then by linearity $$\mathcal{M}(\Gamma, f; N) = \mathcal{M}(\Gamma, S_q; N) + \mathcal{M}(\Gamma, f - S_q; N),$$ where the term $\mathcal{M}(\Gamma, S_q; N)$ contains the asymptotic expansion of $\mathcal{M}(\Gamma, f; N)$ and the term $\mathcal{M}(\Gamma, f - S_q; N)$ is part of the remainder term. The next lemma provides estimates for the contributions to the remainder term in the asymptotic expansion of $\mathcal{M}(\Gamma, f; N)$ as $N\to\infty$.
\[lem:estimates\] Let $f$ be admissible in the sense of Definition \[def:admissible\]. Then as $N\to\infty$: $$\begin{aligned}
N^2 \frac{2}{\left| \Gamma \right|} \int_0^{\left( 1 - \omega \right) \left| \Gamma \right| / N} (f - S_q)(y) {\,d}y &= \mathcal{O}( N^{1-\delta+s_q} ), \\
\mathcal{A}_p(\Gamma, f - S_q; N) &= \mathcal{O}( N^{1-\delta+s_q}), \\
\mathcal{R}_p(\Gamma, f - S_{q}; N) &=
\begin{cases}
\displaystyle \mathcal{O}( N^{1-2p} ) & \text{if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$,} \\[1em]
\displaystyle \mathcal{O}( N^{1-2p} \log N ) & \text{if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$.}
\end{cases}\end{aligned}$$ The $\mathcal{O}$-term depends on $|\Gamma|$, $p$, $s_q$, and $f$.
The first relation follows directly from Definition \[def:admissible\](ii.a). The second estimate follows from Definition \[def:admissible\](ii.b) and ; that is for some positive constant $C$ $$\begin{aligned}
\left| \mathcal{A}_p(\Gamma, f - S_q; N) \right|
&\leq \frac{2}{\left| \Gamma \right|} N^2 \sum_{r = 1}^{2p} \frac{| B_{r}(\omega) |}{r!} \left( \left| \Gamma \right| / N \right)^{r} \left| (f-S_q)^{(r-1)}( \left( 1 - \omega \right) \left| \Gamma \right| / N ) \right| \\
&\leq C \frac{2}{\left| \Gamma \right|} N^2 \sum_{r = 1}^{2p} \frac{| B_{r}(\omega) |}{r!} \left( \left| \Gamma \right| / N \right)^{r} \left( 1 - \omega \right)^{\delta-{\mathop{\mathrm{Re}}}s_q-r+1} \left( \left| \Gamma \right| / N \right)^{r + \delta - {\mathop{\mathrm{Re}}}s_q - r + 1}. \end{aligned}$$ The last estimate follows from Definition \[def:admissible\](ii.b), and the fact that $$\label{eq:period.Bernoulli.estimate}
\left| C_{2p+1}(x) \right| \leq \left( 2 p + 1 \right) \left| B_{2p} \right| \qquad \text{for all real $x$ and all $p=1,2, \dots$;}$$ that is for some positive constant $C$ $$\begin{aligned}
\left| \mathcal{R}_p(\Gamma, f-S_q; N) \right|
&\leq 2 N \frac{\left( \left| \Gamma \right| / N \right)^{2p+1}}{(2p+1)!} \int_{1-\omega}^{N / 2} \left| C_{2p+1}( x )\right| \left| (f-S_q)^{(2p+1)}( x \left| \Gamma \right| / N) \right| {\,d}x \\
&\leq 2 C N \frac{\left| B_{2p} \right|}{(2p)!} \left( \left| \Gamma \right| / N \right)^{\delta-{\mathop{\mathrm{Re}}}s_q} \int_{1-\omega}^{N / 2} x^{\delta-1-2p-{\mathop{\mathrm{Re}}}s_q} {\,d}x.\end{aligned}$$
Other functions arising in the asymptotics of $\mathcal{M}(\Gamma, f; N)$ are defined next.
\[def:zeta.psi\] Let $\omega = 0, 1/2$ and $p$ be a positive integer. For $s \in \mathbb{C}$ with $s \neq 1$ $$\begin{split}
\operatorname{\zeta}_p(\omega, y; s)
&{{:=}}\frac{1}{s-1} \sum_{r=0}^{2p} \frac{B_r(\omega)}{r!} ( -1 )^r {{\left(s-1\right)_{r}}} \left( 1 - \omega \right)^{1-s-r} - \frac{{{\left(s\right)_{2p+1}}}}{\left( 2p + 1\right)!} \int_{1-\omega}^y C_{2p+1}(x) x^{-s-1-2p} {\,d}x, \end{split}$$ which we call [*incomplete zeta function*]{} and $$\begin{split}
\Psi_p(\omega, y)
&{{:=}}- \log( 1 - \omega ) + \sum_{r=1}^{2p} \frac{B_r(\omega)}{r} ( -1 )^r \left( 1 - \omega \right)^{-r} - \int_{1-\omega}^y C_{2p+1}(x) x^{-2-2p} {\,d}x.
\end{split}$$
\[prop:aux.results\] Let $\omega = 0, 1/2$. Then $$\begin{aligned}
\Psi_p(\omega, y) &= \lim_{s \to 1} ( \operatorname{\zeta}_p(\omega, y; s) - 1 / (s-1) ), \notag \\
\operatorname{\zeta}_p(\omega, y; -n) &= - \frac{B_{n+1}}{n+1} = \operatorname{\zeta}(-n), \qquad n = 0, 1, \dots, 2p, \notag \\
\operatorname{\zeta}_p(\omega, y; s) - \operatorname{\zeta}(s) &= \frac{{{\left(s\right)_{2p+1}}}}{\left( 2p + 1\right)!} \int_y^\infty C_{2p+1}(x) x^{-s-1-2p} {\,d}x, \qquad {\mathop{\mathrm{Re}}}s + 2p > 0, \notag \\
\operatorname{\zeta}(s) &= \lim_{y\to\infty} \operatorname{\zeta}_p(\omega, y; s), \qquad {\mathop{\mathrm{Re}}}s + 2p > 0, \notag \\
\Psi_p(\omega, y) - \gamma &= \int_y^\infty C_{2p+1}(x) x^{-2-2p} {\,d}x, \notag \\
\gamma &= \lim_{y\to\infty} \Psi_p(\omega, y). \notag\end{aligned}$$
The second relation follows from [@Lu1969I Eq. 2.8(13)], $B_{2k+1}(\omega) = 0$ for $\omega=0,1/2$ and $k\geq1$ and [@AbSt1992 Eq. 23.2.15]. The representations and therefore the limit relations for $\operatorname{\zeta}(s)$ and $\gamma$ follow from Proposition \[prop:Euler-MacLaurin.Summation\].
Let $f$ be admissible in the sense of Definition \[def:admissible\]. In the representation we can write the integral as follows: $$\begin{aligned}
\frac{2}{\left| \Gamma \right|} \int_{a}^{\left| \Gamma \right| / 2} f(y) {\,d}y
&= \frac{2}{\left| \Gamma \right|} \int_{a}^{\left| \Gamma \right| / 2} S_q(x) {\,d}x + \frac{2}{\left| \Gamma \right|} \int_{a}^{\left| \Gamma \right| / 2} ( f - S_q )(x) {\,d}x \notag \\
&= \frac{2}{\left| \Gamma \right|} \sum_{n=0}^q a_n \int_a^{\left| \Gamma \right| / 2} x^{-s_n} {\,d}x + \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} ( f - S_q )(x) {\,d}x - \frac{2}{\left| \Gamma \right|} \int_0^a ( f - S_q )(x) {\,d}x \\
&= V_f(\Gamma) - \frac{2}{\left| \Gamma \right|} \sum_{n=0}^q a_n \frac{a^{1-s_n}}{1-s_n} - \frac{2}{\left| \Gamma \right|} \int_0^a ( f - S_q )(x) {\,d}x, \qquad a {{:=}}( 1 - \omega ) | \Gamma | / N.\end{aligned}$$ Defining $$\tilde{\mathfrak{R}}_p(f-S_q; N) {{:=}}- \frac{2}{\left| \Gamma \right|} N^2 \int_0^{\left( 1 - \omega \right) \left| \Gamma \right| / N} ( f - S_q )(x) {\,d}x - \mathcal{A}_p(\Gamma, f - S_q; N) + \mathcal{R}_p(\Gamma, f - S_{q}; N),$$ formula becomes (in condensed notation) $$\begin{aligned}
\mathcal{M}(f; N)
&= V_f \, N^2 - \frac{2}{\left| \Gamma \right|} N^2 \sum_{n=0}^q a_n \frac{a^{1-s_n}}{1-s_n} - \mathcal{A}_p(S_q; N) + \mathcal{B}_p(f; N) + \mathcal{R}_p(S_q; N) + \tilde{\mathfrak{R}}_p(f-S_q; N) \\
&= V_f \, N^2 + \sum_{n=0}^q a_n \left\{ \frac{2}{\left| \Gamma \right|} N^2 \frac{a^{1-s_n}}{s_n-1} - \mathcal{A}_p(x^{-s_n}; N) + \mathcal{R}_p(x^{-s_n}; N) \right\} + \mathcal{B}_p(f; N) + \tilde{\mathfrak{R}}_p(f-S_q; N).\end{aligned}$$ Furthermore, using , and Definition \[def:zeta.psi\], we can write the expression in curly brackets above as follows: $$\begin{aligned}
\frac{2}{\left| \Gamma \right|} N^2 & \frac{a^{1-s_n}}{s_n-1} - \mathcal{A}_p(x^{-s_n}; N) + \mathcal{R}_p(x^{-s_n}; N) = \frac{2}{\left| \Gamma \right|} N^2 \frac{a^{1-s_n}}{s_n-1} - \frac{2}{\left| \Gamma \right|} N^2 \sum_{r = 1}^{2p} \frac{B_{r}(\omega)}{r!} \left( \left| \Gamma \right| / N \right)^{r} \left. \left\{ t^{-s_n} \right\}^{(r-1)} \right|_{t=a} \\
&\phantom{=}+ 2 N \frac{\left( \left| \Gamma \right| / N \right)^{2p+1}}{(2p+1)!} \int_{1-\omega}^{N / 2} C_{2p+1}( x ) \left. \left\{ t^{-s_n} \right\}^{(2p+1)} \right|_{t=x \left| \Gamma \right| / N} {\,d}x \\
&= \frac{2}{\left| \Gamma \right|} N^2 \left( \left| \Gamma \right| / N \right)^{1-s_n} \Bigg\{ \frac{\left( 1 - \omega \right)^{1-s_n}}{s_n-1} + \sum_{r=1}^{2p} \frac{B_r(\omega)}{r!} (-1)^r {{\left(s_n\right)_{r-1}}} \left( 1 - \omega \right)^{1-s_n-r} \\
&\phantom{=}- \frac{{{\left(s_n\right)_{2p+1}}}}{(2p+1)!} \int_{1-\omega}^{N/2} C_{2p+1}(x) \, x^{-s_n-1-2p} {\,d}x \Bigg\} = \frac{2}{\left| \Gamma \right|} N^2 \left( \left| \Gamma \right| / N \right)^{1-s_n} \operatorname{\zeta}_p(\omega,N/2;s_n).\end{aligned}$$ Hence, we arrive at the formula $$\mathcal{M}(f; N) = V_f \, N^2 + \sum_{n=0}^q a_n \frac{2 \operatorname{\zeta}_p(\omega,N/2;s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} + \mathcal{B}_p(f; N) + \tilde{\mathfrak{R}}_p(f-S_q; N).$$
For $\mathfrak{R}_p(\Gamma, f; N)$ defined by we have $$\label{eq:main.proof.aux2}
\mathfrak{R}_p(\Gamma, f; N) = \sum_{n=0}^q a_n \frac{2 \operatorname{\zeta}_p(\kappa/2,N/2;s_n) - 2\operatorname{\zeta}(s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} + \tilde{\mathfrak{R}}_p(f-S_q; N).$$ Furthermore, it follows from Lemma \[lem:estimates\] that $\tilde{\mathfrak{R}}_p(f-S_q; N) = \mathcal{O}( N^{1-\delta+s_q}) + \mathcal{O}( N^{1-2p} )$ if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$ and $\tilde{\mathfrak{R}}_p(f-S_q; N) = \mathcal{O}( N^{1-\delta+s_q}) + \mathcal{O}( N^{1-2p} \log N )$ if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$. Finally, using and Proposition \[prop:aux.results\] we obtain the estimate $$\left| \sum_{n=0}^q a_n \frac{\operatorname{\zeta}_p(\kappa/2,N/2,s_n) - \operatorname{\zeta}(s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} \right| \leq 2 \left( N / 2 \right)^{1-2p} \sum_{n=0}^q \left| a_n \frac{B_{2p}}{(2p)!} {{\left(s_n\right)_{2p}}} \frac{2p+s_n}{2p+{\mathop{\mathrm{Re}}}s_n} \right| \left( \left| \Gamma \right| / 2 \right)^{-{\mathop{\mathrm{Re}}}s_n}.$$ Note that, whenever $s_n=-k$ for some $k=0,1,\dots,2p$, then the corresponding terms on both sides of the estimate above are not present. Also, from Definition \[def:admissible\] it follows that $2p+{\mathop{\mathrm{Re}}}s_n>0$ for $n=0,\dots,q-1$ and that either ${\mathop{\mathrm{Re}}}s_q + 2p > 0$ or $s_q=-2p$. In either case the sum on the left-hand side above is of order $\mathcal{O}(N^{1-2p})$. Hence, we have from that $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-\delta+s_q}) + \mathcal{O}( N^{1-2p} )$ if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$ and $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-\delta+s_q}) + \mathcal{O}( N^{1-2p} \log N )$ if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$.
Proceeding as in the proof of Theorem \[thm:general.f.general.case\] the remainder term now takes the form $$\begin{aligned}
\mathfrak{R}_p(\Gamma, f; N)
&= \frac{2}{\left| \Gamma \right|} N^2 a_{q^\prime} \left( \Psi_p(\kappa/2,N/2) - \gamma \right) + \sum_{\substack{n=0,\\ n\neq q^\prime}}^q a_n \frac{2 \operatorname{\zeta}_p(\kappa/2,N/2,s_n) - 2\operatorname{\zeta}(s_n)}{\left| \Gamma \right|^{s_n}} N^{1+s_n} \\
&\phantom{=}- N^2 \frac{2}{\left| \Gamma \right|} \int_0^{\left( 1 - \omega \right) \left| \Gamma \right| / N} (f-S_q)(y) {\,d}y - \mathcal{A}_p(\Gamma, f - S_q; N) + \mathcal{R}_p(\Gamma, f - S_{q}; N).\end{aligned}$$ Using Lemma \[lem:estimates\], Proposition \[prop:aux.results\], and the inequality $$\left| \frac{2}{\left| \Gamma \right|} N^2 a_{q^\prime} \left( \Psi_p(\kappa/2,N/2) - \gamma \right) \right| \leq 4 \frac{2}{\left| \Gamma \right|} \left| a_{q^\prime} B_{2p} \right| \left( N / 2 \right)^{1-2p},$$ we get the estimate $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} ) + \mathcal{O}( N^{1-\delta+s_q} )$ if $2p \neq \delta - {\mathop{\mathrm{Re}}}s_q$ and $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}( N^{1-2p} \log N)$ if $2p = \delta - {\mathop{\mathrm{Re}}}s_q$.
Next, we prove the results related to particular types of kernel functions.
The Laplace transform $f(x) {{:=}}\int_0^\infty e^{-x t} {\,d}\mu(t)$ of a signed measure $\mu$ on $[0,\infty)$ satisfying $\int_0^\infty t^m {\,d}|\mu|(t) < \infty $ for every $m=0, 1, 2, \dots$ has derivatives of all orders on $(0,\infty)$. For $q$ a positive integer let $S_q(x)$ be defined by $S_q(x) {{:=}}\sum_{n=0}^q \frac{\mu_n}{n!} (-x)^n$. For every $0\leq m \leq q$ we can write $$f^{(m)}(x) = (-1)^m \int_0^\infty e^{-x t} t^m {\,d}\mu(t) = (-1)^m \sum_{n=m}^q \frac{\mu_n}{(n-m)!} (- x)^{n-m} + (f-S_q)^{(m)}(x), \qquad x > 0,$$ where, using a finite section of the Taylor series expansion of $h(x) = e^{- x t}$ with integral remainder term, we have that $$\begin{aligned}
(f-S_q)^{(m)}(x) &= f^{(m)}(x)-S_{q}^{(m)}(x)
= (-1)^m \int_0^\infty \left\{ e^{-x t} - \sum_{n=0}^{q-m} \frac{(-x t)^n}{n!} \right\} t^m {\,d}\mu(t) \\
&= \frac{(-1)^{q+1}}{(q-m)!} \int_0^\infty \left\{ \int_0^x e^{-u t} \left( x - u \right)^{q-m} {\,d}u \right\} t^{q+1} {\,d}\mu(t), \qquad x > 0.\end{aligned}$$ For $x>0$ we have the following bound: $$\left| (f-S_q)^{(m)}(x) \right| \leq \frac{x^{q+1-m}}{(q+1-m)!} \int_0^\infty t^{q+1} {\,d}|\mu|(t), \qquad m = 0, 1, \dots, q.$$ Since $S_q^{(q+1)}(x) = 0$ for all $x$, it is immediate that the last estimate also holds for $m=q+1$. It follows that $f$ is admissible in the sense of Definition \[def:admissible\] with $q=2p$, $\delta=1$. The result follows from Theorem \[thm:general.f.general.case\], after observing that $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right|/2} f(x) {\,d}x = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right|/2} \int_0^\infty e^{-x t} {\,d}\mu(t) {\,d}x.$$
In the case that $f$ is a completely monotonic function on $(0,\infty)$ (that is, $\mu$ is a positive measure), it is possible to improve the estimate for $\mathcal{R}_p(\Gamma, f; N)$ in .
Let $f$ be analytic in a disc with radius $| \Gamma | / 2 + {\varepsilon}$ (${\varepsilon}> 0$) centered at the origin. Then $f(z) = \sum_{n=0}^\infty a_n z^n$ for $|z| < | \Gamma | / 2 + {\varepsilon}$ and $f$ is admissible in the sense of Definition \[def:admissible\] for any positive integers $p$ and $q=2p$, where $S_{2p}(z) = \sum_{n=0}^{2p} a_n z^n$ and $\delta=1$. The asymptotic expansion follows from Theorem \[thm:general.f.general.case\] on observing that with $s_n=-n$ ($n=0,\dots,2p$), one has $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{n=0}^{2p} a_n \int_0^{\left| \Gamma \right|/2} x^n {\,d}x + \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right|/2} \left( f - S_{2p} \right)(x) {\,d}x = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right|/2} f(x) {\,d}x.$$ Moreover, since $s_q=-2p$ and $\delta=1$, it follows that $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p})$ as $N\to\infty$.
Suppose $f$ has a pole of integer order $K\geq1$ at zero and is analytic in the annulus $0 < |z| < | \Gamma | / 2 + {\varepsilon}$ (${\varepsilon}>0$) with series expansion $f(z) = \sum_{n=-K}^{\infty} a_n z^n$. Then $f$ is admissible in the sense of Definition \[def:admissible\] for any positive integers $p$ and $q=2p$ with $S_{2p}(z) = \sum_{n=-K}^{2p} a_n z^n$ and $\delta=1$. In the [case [(i)]{}]{} Theorem \[thm:general.f.general.case\] is applied and in the [case [(ii)]{}]{} Theorem \[thm:general.f.exceptional.case\] is applied. The expressions for $V_f(\Gamma)$ follow from termwise integration in and . Since $1-\delta+s_q=-2p$, the remainder terms are $\mathfrak{R}_p(\Gamma, f; N) = \mathcal{O}_{p,|\Gamma|,f}(N^{1-2p})$ as $N\to\infty$.
If $f$ has an essential singularity at $0$ and is analytic in the annulus $0 < |z| < |\Gamma|/2+{\varepsilon}$ (${\varepsilon}>0$), then for positive integers $p$ $$f(z) = S_{2p}(z) + F_{2p}(z), \qquad S_{2p}(z) {{:=}}\sum_{n=-\infty}^{2p} a_n z^{n}, \quad F_{2p}(z) {{:=}}\sum_{n=2p+1}^\infty a_n z^{n} = \mathcal{O}(z^{2p+1}) \ \text{as $z\to0$.}$$ Clearly, the function $f(z)$ satisfies item (i) of Definition \[def:admissible\] and both functions $f(z)$ and $S_{2p}(z)$ satisfy an extended version of item (ii) of Definition \[def:admissible\] suitable for an infinite series $S_{2p}(z)$. Since termwise integration and differentiation of $S_{2p}(z)$ are justified by the theory for Laurent series, Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\] can be extended for such kernel functions $f$. In this case all formulas in Theorems \[thm:general.f.general.case\] and \[thm:general.f.exceptional.case\] still hold provided the index $n$ starts with $-\infty$. In particular, we note that the infinite series $\sum_{n=-\infty, n \neq -1}^{2p} a_n \operatorname{\zeta}(-n) \left| \Gamma \right|^{n} N^{1-n}$ appearing in the asymptotics of $\mathcal{M}(\Gamma, f; N)$ converges for every $N$, since $\operatorname{\zeta}(m) \leq \operatorname{\zeta}(2)$ for all integers $m\geq2$.
Example \[eg:ess.sing.1\] follows from the extended version of Theorem \[thm:general.f.exceptional.case\].
To justify Example \[eg:ess.sing.2\] let $\lambda$ be a zero of the Bessel function $\operatorname{J}_{-1}$. The extended version of Theorem \[thm:general.f.general.case\] with $a_n = \operatorname{J}_{n}(\lambda)$ gives that for positive integers $p \geq2$ and $m\geq2$ $$\begin{split}
\mathcal{M}&(\Gamma, f; N) = V_f(\Gamma) \, N^2 + 2 \sum_{\substack{n=-2p, \\ n \neq \pm 1}}^{\infty} \operatorname{J}_{-n}(\lambda) \operatorname{\zeta}(n) \left| \Gamma \right|^{-n} N^{1+n} + \mathcal{B}_p(\Gamma, f;N) + \mathcal{O}(N^{1-2p}) \\
&= 2 N \sum_{n=m}^\infty \operatorname{J}_{-n}(\lambda) \operatorname{\zeta}(n) ( N / \left| \Gamma \right| )^n + 2 \sum_{n=2}^{m-1} \operatorname{J}_{-n}(\lambda) \operatorname{\zeta}(n) \left| \Gamma \right|^{-n} N^{1+n} + V_f(\Gamma) \, N^2 + \left| \Gamma \right| B_2(\frac{\kappa}{2}) f^\prime( \frac{\left| \Gamma \right|}{2} ) \\
&\phantom{=}+ 2 \sum_{k=2}^{2p} J_k(\lambda) \operatorname{\zeta}(-k) \left| \Gamma \right|^k N^{1-k} + \sum_{n = 2}^{p} \frac{2B_{2n}(\kappa/2)}{(2n)! \left| \Gamma \right|^{1-2n}} f^{(2n-1)}( \left| \Gamma \right| / 2 ) N^{2-2n} + \mathcal{O}(N^{1-2p}),
\end{split}$$ where $$V_f(\Gamma) = \frac{2}{\left| \Gamma \right|} \sum_{\substack{n=-\infty, \\ n \neq \pm 1}}^{\infty} \operatorname{J}_n(\lambda) \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n}}{1+n}.$$ In the above we used the relation . Observe that $\operatorname{\zeta}(-k)=0$ for $k=2,4,6,\dots$.
The asymptotics and the remainder estimates follow from Theorem \[thm:general.f.general.case\] on observing that $f_s^w(x)$ has derivatives of all orders in $(0,|\Gamma|/2+{\varepsilon})$, $S_q(x) = \sum_{n=0}^q a_n x^{n-s}$, and $\delta=1$. The constraints on $s_q = s - q$ imply that the positive integers $q,p$ and $s\in\mathbb{C}$ satisfy $q-2p < {\mathop{\mathrm{Re}}}s < 2 + q$ or $s = q - 2p$. For $0<s<1$ we have (see ) $$V_{f_s^w}(\Gamma) = \frac{2}{\left| \Gamma \right|} \int_0^{\left| \Gamma \right| / 2} f_s^w(x) {\,d}x = \frac{2}{\left| \Gamma \right|} \sum_{n=0}^{\infty} a_n \frac{\left( \left| \Gamma \right| / 2 \right)^{1+n-s}}{1+n-s}$$ and the right-hand side as a function of $s$ is analytic in $\mathbb{C}$ except for poles at $s=1+n$ ($n=0,1,2,\dots$) provided $a_n\neq0$.
Using the same method of proof as in [@Ka1998] for the Hurwitz zeta function, we obtain the following two propositions, which will be used in the proofs of Theorems \[thm.M.0\] and \[thm:main\].
\[prop:hzeta\] Let $q \geq 1$ and $\alpha = 1 / 2$ or $\alpha = 1$. For $x > 0$ and $s \in \mathbb{C}$ with $s \neq 1$ and ${\mathop{\mathrm{Re}}}s + 2q + 1 > 0$ the [*Hurwitz zeta function*]{} defined as $\operatorname{\zeta}(s,a) {{:=}}\sum_{k=0}^\infty (k + a)^{-s}$ for ${\mathop{\mathrm{Re}}}s > 1$ and $a\neq 0, -1, -2, \dots$ has the following representation $$\operatorname{\zeta}( s, x + \alpha ) = \frac{x^{1-s}}{s-1} - B_1(\alpha) \, x^{-s} + \sum_{n=1}^{q} \frac{B_{2n}(\alpha)}{(2n)!} {{\left(s\right)_{2n-1}}} x^{1-s-2n} + \rho_q(s,x,\alpha).$$ The remainder term is given by $$\rho_q(s,x,\alpha) = \frac{1}{2 \pi i} \int_{\gamma_q - i \infty}^{\gamma_q + i \infty} \frac{\operatorname{\Gamma}(-w) \operatorname{\Gamma}(s+w)}{\operatorname{\Gamma}(s)} \operatorname{\zeta}(s+w, \alpha) x^w {\,d}w = \mathcal{O}_{s,q}(x^{-1-{\mathop{\mathrm{Re}}}s - 2q})$$ as $x \to \infty$, where $-1 - {\mathop{\mathrm{Re}}}s - 2q < \gamma_q < - {\mathop{\mathrm{Re}}}s - 2q$.
By the well-known relation $\log [ \operatorname{\Gamma}(x + \alpha) / \sqrt{2\pi} ] = \frac{\partial}{\partial s} \operatorname{\zeta}( s, x + \alpha ) |_{s = 0}$ one obtains the next result from Proposition \[prop:hzeta\].
\[prop:log\] Let $q \geq 1$ and $\alpha = 1 / 2$ or $\alpha = 1$. For $x > 0$ $$\log \frac{\operatorname{\Gamma}(x + \alpha)}{\sqrt{2 \pi}} = \left( x + \alpha - 1 / 2 \right) \log x - x + \sum_{n=1}^q \frac{B_{2n}(\alpha)}{\left( 2 n - 1 \right) 2 n} x^{1-2n} + \rho_q(x,\alpha).$$ The remainder term is given by $$\rho_q(x,\alpha) = \frac{1}{2\pi i} \int_{\gamma_q- i \infty}^{\gamma_q+ i \infty} \operatorname{\Gamma}(-w) \operatorname{\Gamma}(w) \operatorname{\zeta}(w,\alpha) x^w {\,d}w = \mathcal{O}_{q}(x^{-1-2q})$$ as $x \to \infty$, where $-1 - 2q < \gamma_q < - 2q$.
In the proofs of Theorems \[thm.M.0\] and \[thm:main\] we make use of the observation that for $N = 2 M + \kappa$ with $M \geq 1$ and $\kappa = 0, 1$ formula simplifies to $$\mathcal{M}_s(\Gamma; N) = \frac{2}{\left| \Gamma \right|^s} N^{1+s} \sum_{k=1}^{\lfloor N / 2 \rfloor} \frac{1}{k^s} - \frac{1-\kappa}{\left( \left| \Gamma \right| / 2 \right)^s} N, \label{scr.M}$$ which involves the [*generalized harmonic numbers $H_n^{(s)} {{:=}}\sum_{k=1}^n k^{-s}$*]{}.
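The simplified formula is easily validated against the defining sum $\mathcal{M}_s(\Gamma; N) = N \sum_{k=1}^{N-1} \operatorname{d}({\mathbf{z}}_{0,N},{\mathbf{z}}_{k,N})^{-s}$ for equally spaced points, where $\operatorname{d}({\mathbf{z}}_{0,N},{\mathbf{z}}_{k,N}) = \min\{k, N-k\} \, |\Gamma|/N$ (a numerical sketch of ours):

```python
import math

def M_direct(L, N, s):
    """N * sum_k d(z_0, z_k)^{-s} for N equally spaced points on a curve of length L."""
    return N * sum((min(k, N - k) * L / N) ** (-s) for k in range(1, N))

def M_closed(L, N, s):
    """Simplified form via the generalized harmonic number H_{floor(N/2)}^{(s)}."""
    kappa = N % 2
    H = sum(k ** (-s) for k in range(1, N // 2 + 1))
    return 2 * N ** (1 + s) / L ** s * H - (1 - kappa) * N / (L / 2) ** s

for N in range(2, 30):
    for s in (0.5, 1.5, 3.0):
        assert math.isclose(M_direct(2.0, N, s), M_closed(2.0, N, s), rel_tol=1e-12)
```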
Differentiating with respect to $s$ and taking the limit $s\to0$ yields $$\mathcal{M}_{\mathrm{log}}(\Gamma; N) = N \left( N - \kappa \right) \log \frac{N}{\left| \Gamma \right|} - 2 N \log \operatorname{\Gamma}(\lfloor N / 2 \rfloor + 1) - \left( 1 - \kappa \right) N \log ( N / 2).$$ The asymptotic expansion of the theorem now follows by applying Proposition \[prop:log\] with $x = N / 2$, $\alpha = (2-\kappa) / 2$. Note that $B_{2n}( \alpha ) = B_{2n}( 1 - \kappa / 2 ) = B_{2n}(\kappa/2)$.
Starting with Theorem \[thm:general.f.general.case\], we obtain an asymptotic formula of the form but with error estimate $\mathcal{O}(N^{1-2q})$.[^8] On the other hand, substitution of the identity $\sum_{k=1}^n k^{-s} = \operatorname{\zeta}(s) - \operatorname{\zeta}(s,n+1)$ into gives the exact formula $$\mathcal{M}_s( \Gamma; N)= \frac{2\operatorname{\zeta}(s)}{\left| \Gamma \right|^s} N^{1+s} - \frac{2}{\left| \Gamma \right|^s} N^{1+s} \operatorname{\zeta}(s,\lfloor N / 2 \rfloor + 1) - \frac{1-\kappa}{\left( \left| \Gamma \right| /2 \right)^s} N.$$ Then the asymptotic relation with error term of order $\mathcal{O}(N^{-2q})$ follows by applying Proposition \[prop:hzeta\] with $x = N / 2$, $\alpha = (2-\kappa) / 2$. This expansion holds for $s$ with ${\mathop{\mathrm{Re}}}s + 2q + 1 > 0$, $q \geq 1$.
Using Jacob Bernoulli’s famous closed form summation formula ([@AbSt1992 Eq. (23.1.4)]) $1^p + 2^p + \cdots + n^p = (B_{p+1}(n+1)-B_{p+1}) / ( p + 1 )$ in one gets $$\mathcal{M}_{-p}(\Gamma; N) = 2 \left| \Gamma \right|^p \frac{B_{p+1}((N+\kappa)/2) - B_{p+1}}{p+1} N^{1-p} + \left( 1 - \kappa \right) \left( \left| \Gamma \right| / 2 \right)^p N.$$ Application of the addition theorem for Bernoulli polynomials (see [@AbSt1992 Eq. (23.1.7)]) yields the result.
An asymptotic formula with error estimate $\mathcal{O}(N^{1-2p})$ follows from Theorem \[thm:general.f.exceptional.case\]; see also the second remark after Theorem \[thm:general.f.exceptional.case\]. However, by substituting into with $\omega=\kappa/2$ ($\kappa=0,1$) the following asymptotic expansions $$\label{H.n}
H_n = \sum_{k=1}^n \frac{1}{k} = \log ( n + \omega ) + \gamma - \frac{B_1(\omega)}{n+\omega} - \sum_{k=1}^q \frac{B_{2k}(\omega) / (2k)}{\left( n + \omega \right)^{2k}} \pm \theta_{q,N,\kappa} \frac{B_{2q+2}(\omega) / (2q+2)}{\left( n + \omega \right)^{2q+2}},$$ where $0<\theta_{q,N,\kappa}<1$ and collecting terms we get the asymptotic formula with improved error estimate. The plus sign in is taken if $\omega=1/2$ and the negative sign corresponds to $\omega=0$. We remark that the representation is given in [@DeSh1991] if $\omega = 1/2$ and can be obtained as an application of the Euler-MacLaurin summation formula if $\omega = 0$ (see, for example, [@Ap1999]). We leave the details to the reader.
[^1]: The analogue problem for the [*sum of (Euclidean) distances*]{} on the unit circle was also studied by Fejes T[ó]{}th [@Fe1956] who proved that only (rotated copies) of the $N$-th roots of unity are optimal.
[^2]: The powers in $S_q(x)$ are principal values.
[^3]: By Definition \[def:admissible\] there is only one such $s_{q^\prime}$.
[^4]: A completely monotonic function on $(0,\infty)$ is necessarily analytic in the positive half-plane ([@Wi1946]).
[^5]: The function $\operatorname{sgn}s$ denotes the sign of $s$. It is defined to be $-1$ if $s<0$, $0$ if $s=0$, and $1$ if $s>0$.
[^6]: Sperling mentions that his proof can be easily generalized to higher-dimensional spheres.
[^7]: Larcher also characterizes all optimal configurations.
[^8]: If ${\mathop{\mathrm{Re}}}s = - 2q$ and $s \neq -2q$, then a factor $\log N$ must be included.
---
abstract: |
We study relaxation properties of two-body collisions in infinite spatial dimension. We show that this process exhibits multiscaling asymptotic behavior as the underlying distribution is characterized by an infinite set of nontrivial exponents. These nonequilibrium relaxation characteristics are found to be closely related to the steady state properties of the system.
[PACS numbers: 05.40.+j, 05.20.Dd, 02.50.Ey]{}
address:
- '$\dag$Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA'
- '$\ddag$Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215, USA'
author:
- 'E. Ben-Naim$\dag$ and P. L. Krapivsky$\ddag$'
title: Multiscaling in Infinite Dimensional Collision Processes
---
Our understanding of the statistical mechanics of nonequilibrium systems remains incomplete, in sharp contrast with their equilibrium counterparts. The rich phenomenology associated with dynamics of far from equilibrium interacting particle systems exposes the lack of a unifying theoretical framework. Simple tractable microscopic models can therefore help us gain insight and improve the description of nonequilibrium dynamics.
In this study, we focus on the nonequilibrium relaxation of an infinite particle system interacting via two body collisions. We find that a hierarchy of scales underlies the relaxation. In particular, we devise an extremely simple system which exhibits multiscaling in infinite dimension, while in finite dimensions simple scaling behavior is restored. Furthermore, we show that this behavior extends to a broader class of collision processes.
We are interested in modeling collision processes in a structureless, infinite dimensional space. Therefore, we place an infinite number of identical particles on the nodes of a completely connected graph. Particles are characterized by a single parameter, their velocity $v$. Two-body collisions are realized by choosing two particles at random and changing their velocities according to $(u_1,u_2)\to (v_1,v_2)$ with $$\begin{aligned}
\label{rule}
\pmatrix{v_1\cr v_2\cr}=\pmatrix{\gamma &1-\gamma\cr
1-\gamma&\gamma\cr}\pmatrix{u_1\cr u_2\cr}\end{aligned}$$ and $0\leq \gamma\leq 1$. In other words, the post-collision velocities are given by a linear combination of the pre-collision velocities. Both the total momentum ($u_1+u_2=v_1+v_2$) and the total number of particles are conserved by this process. In fact, the collision rule (\[rule\]) is the most general linear combination which obeys momentum conservation and Galilean invariance, i.e., invariance under velocity translation $v\to v-v_0$.
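In code, the collision rule above reads as follows (an illustrative sketch, not part of the original paper); momentum conservation and the contraction of the relative velocity by the factor $2\gamma-1$ are immediate:

```python
def collide(u1, u2, gamma):
    """Linear two-body collision rule (u1, u2) -> (v1, v2)."""
    v1 = gamma * u1 + (1 - gamma) * u2
    v2 = (1 - gamma) * u1 + gamma * u2
    return v1, v2

u1, u2, g = 3.0, -1.0, 0.25
v1, v2 = collide(u1, u2, g)
assert abs((v1 + v2) - (u1 + u2)) < 1e-12                # total momentum conserved
assert abs((v1 - v2) - (2 * g - 1) * (u1 - u2)) < 1e-12  # relative velocity contracts
print(v1, v2)  # -> 0.0 2.0
```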
Our motivation for studying this problem is inelastic collisions in one-dimensional granular gases [@pkh; @my; @bm]. While the two problems involve different collision rates, they share the same trivial final state where all velocities vanish, $P(v,t)\to \delta(v)$ when $t\to\infty$ (without loss of generality, the average velocity was set to zero by invoking the transformation $v\to v-\langle
v\rangle$). We chose to describe this work in slightly more general terms since closely related dynamics were used in different contexts including voting systems [@melzak; @ff], asset exchange processes [@sps], combinatorial processes [@d], traffic flows [@kg], and force fluctuations in bead packs [@c]. We will show that multiscaling characterizes fluctuations in some of these problems as well.
Velocity fluctuations may be obtained via the probability distribution function $P(v,t)$ which evolves according to the following master equation $$\begin{aligned}
\label{BE}
{\partial P(v,t)\over\partial t}&=&\int_{-\infty}^\infty
\int_{-\infty}^\infty du_1\, du_2\, P(u_1,t)P(u_2,t)\\\nonumber
&\times &\left[\delta(v-\gamma u_1-(1-\gamma)u_2)
-\delta(v-u_2)\right].\end{aligned}$$ The $\delta-$functions on the right-hand side reflect the collision rule (\[rule\]) and guarantee conservation of the number of particles, $\int dv P(v,t)=1$, and the total momentum $\int dv\, v
P(v,t)=0$. Eq. (\[BE\]) can be simplified by eliminating one of the integrations $$\label{BE1}
{\partial P(v,t)\over\partial t}+P(v,t)={1\over 1-\gamma}
\int_{-\infty}^\infty du P(u,t)P\left({v-\gamma u\over 1-\gamma},t\right).$$ Further simplification may be achieved via the Fourier transform $\hat P(k,t)=\int dv\, e^{ikv}\,P(v,t)$ which obeys $$\label{BEF}
{\partial \over\partial t}\,\hat P(k,t)+\hat P(k,t)=
\hat P[\gamma k,t]\,\hat P[(1-\gamma)k,t].$$ Although the integration is eliminated, this compact equation is still challenging as the nonlinear term becomes nonlocal.
Velocity fluctuations can be quantified using the moments of the velocity distribution, $M_n(t)=\int dv\, v^n\, P(v,t)$. The moments obey a closed and recursive set of ordinary differential equations. The corresponding equations can be derived by inserting the expansion $\hat P(k,t)=\sum_n {(ik)^n\over n!} M_n(t)$ into Eq. (\[BEF\]) or directly from Eq. (\[BE\]). The first few moments evolve according to $\dot M_0=\dot M_1=0$, and $$\begin{aligned}
\dot M_2&=&-a_2M_2,\nonumber\\
\dot M_3&=&-a_3M_3,\\
\dot M_4&=&-a_4M_4+a_{24}M_2^2,\nonumber\end{aligned}$$ with the coefficients $$\label{an}
a_n\equiv a_n(\gamma)=1-(1-\gamma)^n-\gamma^n,$$ and $a_{24}=6\gamma^2(1-\gamma)^2$. Integrating these rate equations yields $M_0=1$, $M_1=0$ and $$\begin{aligned}
M_2(t)&=&M_2(0)e^{-a_2t}\nonumber\\
M_3(t)&=&M_3(0)e^{-a_3t}\\
M_4(t)&=&\left[M_4(0)+3M^2_2(0)\right]e^{-a_4t}-3M_2^2(t).\nonumber\end{aligned}$$ The asymptotic behavior of the first few moments suggests that knowledge of the RMS fluctuation $v^*\equiv M_2^{1/2}$ is not sufficient to characterize higher order moments: for instance, $M_4^{1/4}/v^*\to\infty$ as $t\to\infty$, since $a_4<2a_2$ (while $M_3^{1/3}/v^*$ remains finite because $a_3={3\over 2}a_2$ exactly).
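The exponential decay of $M_2$ can be verified by a direct mean-field Monte Carlo simulation. In the sketch below (illustrative parameters, not from the original paper) each collision advances time by $2/N$, so that every particle collides at unit rate, matching the loss term of the master equation:

```python
import math
import random

def relax(N=5000, gamma=0.3, t_max=2.0, seed=1):
    """Simulate the mean-field collision process; return M_2(t_max)/M_2(0)."""
    rng = random.Random(seed)
    v = [rng.uniform(-1.0, 1.0) for _ in range(N)]
    mean = sum(v) / N
    v = [x - mean for x in v]                  # set the total momentum to zero
    M2_0 = sum(x * x for x in v) / N
    steps = int(t_max * N / 2)                 # dt = 2/N per collision event
    for _ in range(steps):
        i, j = rng.randrange(N), rng.randrange(N)
        if i != j:
            v[i], v[j] = (gamma * v[i] + (1 - gamma) * v[j],
                          (1 - gamma) * v[i] + gamma * v[j])
    return (sum(x * x for x in v) / N) / M2_0

g, t = 0.3, 2.0
a2 = 1 - (1 - g) ** 2 - g ** 2                 # a_2 = 2 gamma (1 - gamma)
print(relax(), math.exp(-a2 * t))              # the two values should be close
```

With $N=5000$ particles the Monte Carlo estimate agrees with $e^{-a_2t}$ to within a few percent.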
This observation extends to higher order moments as well. In general, the moments evolve according to $$\dot M_n+a_nM_n
=\sum_{m=2}^{n-2}{n\choose m}\gamma^{m}(1-\gamma)^{n-m}M_{m}M_{n-m}.$$ Note that for , the coefficients $a_n$ satisfy when . This inequality can be shown by introducing which satisfies $G(0)=0$ and $G(\gamma)=G(1-\gamma)$. Therefore, one needs to show that for with . One can verify that the $b_n$’s decrease monotonically with increasing $n$, $b_n\geq b_{n+1}$ for $n\geq 2$, therefore proving the desired inequality. Since moments decay exponentially, this inequality shows that the right hand side in the above equation is negligible asymptotically. Thus, the leading asymptotic behavior for all $n>0$ is $M_n\sim \exp(-a_n t)$. Since the $a_n$’s increase monotonically, $a_n<a_{n+1}$, the moments decrease monotonically in the long time limit, $M_n>M_{n+1}$. Furthermore, in terms of the second moment one has $$\label{multi}
M_{n}\propto M_2^{\alpha_n}, \qquad
\alpha_n={1-(1-\gamma)^n-\gamma^n
\over 1-(1-\gamma)^2-\gamma^2}.$$ While the prefactors depend on the details of the initial distribution, the scaling exponents are universal. Therefore, the velocity distribution does not follow a naive scaling form $P(v,t)={1\over v^*} P({v\over v^*})$. Such a distribution would imply the linear exponents $\alpha_n=\alpha^*_n=n/2$. Instead, the actual behavior is given by Eq. (\[multi\]) with the exponents $\alpha_n$ reflecting a multiscaling asymptotic behavior with a nontrivial (non-linear) dependence on the index $n$. For instance, the high order exponents saturate, $\alpha_n\to a_2^{-1}$ for $n\to\infty$, instead of diverging. One may quantify the deviation from ordinary scaling via a properly normalized set of indices $\beta_n=\alpha_n/\alpha_n^*$ defined from $M_n^{1/n}\sim
(v^*)^{\beta_n}$. By evaluating the $\gamma=1/2$ case where multiscaling is most pronounced, a bound can be obtained for these indices: $7/8\leq \beta_4\leq 1$ and $31/48\leq \beta_6\leq 1$. Furthermore, $\beta_n\to 1-{n-3\over 2}\gamma$ when $\gamma\to 0$, indicating that the deviation from ordinary scaling vanishes for weakly inelastic collisions. Thus, the multiscaling behavior can be quite subtle [@bk1].
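The exponents in Eq. (\[multi\]) are easy to tabulate; the following lines (an illustrative sketch of ours) reproduce the values quoted above:

```python
def alpha(n, g):
    """Multiscaling exponents alpha_n = a_n / a_2, with a_m = 1-(1-g)^m-g^m."""
    a = lambda m: 1.0 - (1.0 - g) ** m - g ** m
    return a(n) / a(2)

g = 0.5
print(alpha(4, g))        # 7/4, so beta_4 = alpha_4 / 2 = 7/8
print(alpha(6, g) / 3.0)  # beta_6 = 31/48
print(alpha(3, 0.1))      # alpha_3 = 3/2 for every gamma
print(alpha(400, g))      # saturates at 1/a_2 = 2 for large n
```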
The above shows that a hierarchy of scales underlies fluctuations in the velocity. In parallel, a hierarchy of diverging time scales characterizes velocity fluctuations $$M_n^{1/n}\sim \exp(-t/\tau_n), \qquad \tau_n={n\over a_n}.$$ These time scales diverge for large $n$ according to $\tau_n\simeq n$. Large moments reflect the large velocity tail of the distribution. Indeed, the distribution of extremely large velocities is dominated by persistent particles which experienced no collisions up to time $t$. The probability for such events decays exponentially with time for $v\gg 1$ (alternatively, this behavior emerges from Eq. (\[BE1\]) since the gain term is negligible for the tail and hence $\dot P+P=0$). This decay is consistent with the large order moment decay $M_n\sim \exp(-t)$ when $n\to\infty$.
Although the leading asymptotic behavior of the moments was established, understanding the entire distribution $P(v,t)$ remains a challenge. Simulations of the $\gamma=1/2$ process reveal an interesting structure for compact distributions. Starting from a uniform velocity distribution, $P_0(v)=1/2$ for $-1<v<1$, the distribution loses analyticity at $v=\pm 1/2$. Our analysis of Eq. (\[BEF\]) shows that such a singularity should indeed develop at $v=\pm 1/2$ and it additionally implies the appearance of (progressively weaker and weaker) singularities at $v=\pm 1/4$, etc. More generally, for an arbitrary [*compact*]{} initial distribution and an arbitrary $\gamma$, the distribution $P(v,t)$ loses analyticity for $t>0$ and develops an infinite (countable) set of singularities whose locations depend on the arithmetic nature of $\gamma$ (e.g., it is very different for rational and irrational $\gamma$’s). On the other hand, unbounded distributions do not develop such singularities, and therefore, the loss of analyticity is not necessarily responsible for the multiscaling behavior.
Asymptotically, our system reaches a trivial steady state $P(v,t=\infty)=\delta(v)$. To examine the relation between dynamics and statics, a non-trivial steady state can be generated by considering the driven version of our model [@wm; @sbcm]. External forcing balances dissipation due to collisions and therefore results in a nontrivial nonequilibrium steady state. Specifically, we assume that in addition to changes due to collisions, velocities may also change due to an external forcing: . We assume standard uncorrelated white noise with a zero average $\langle \xi_j\rangle=0$. The left hand side of the master equation (\[BE1\]) should therefore be modified by the diffusion term $$\begin{aligned}
\label{BEFP}
{\partial P(v,t)\over\partial t}\to
{\partial P(v,t)\over\partial t}-D{\partial^2 P(v,t)\over\partial v^2}.\end{aligned}$$ Of course, the addition of the diffusive term does not alter conservation of the total particle number and the total momentum, and one can safely work in a reference frame moving with the center of mass velocity.
We restrict our attention to the steady state, obtained by setting the time derivative to zero. The corresponding Fourier transform $\hat P_\infty(k)\equiv\hat P(k,t=\infty)$ satisfies $$\label{FP}
(1+Dk^2)\hat P_\infty(k)=
\hat P_\infty[\gamma k]\,\hat P_\infty[(1-\gamma)k].$$ The solution to this functional equation which obeys the conservation laws $\hat P_\infty(0)=1$ and $\langle v\rangle=\hat P_\infty'(0)=0$ is found recursively $$\label{FPsol}
\hat P_\infty(k)=\prod_{i=0}^\infty\prod_{j=0}^i
\left[1+\gamma^{2j}(1-\gamma)^{2(i-j)}Dk^2\right]^{-{i\choose j}}.$$ To simplify this double product we take the logarithm and transform it as follows $$\begin{aligned}
\ln \hat P_\infty(k)
&=&-\sum_{i=0}^\infty\sum_{j=0}^i {i\choose j}\,
\ln \left[1+\gamma^{2j}(1-\gamma)^{2(i-j)}Dk^2\right]\nonumber\\
&=&\sum_{i=0}^\infty\sum_{j=0}^i {i\choose j}\sum_{n=1}^\infty
{(-Dk^2)^n\gamma^{2jn}(1-\gamma)^{2(i-j)n}\over n}\nonumber\\
&=&\sum_{n=1}^\infty {(-Dk^2)^n\over n}
\sum_{i=0}^\infty\sum_{j=0}^i {i\choose j}\,
\gamma^{2nj}(1-\gamma)^{2n(i-j)}\nonumber\\
&=&\sum_{n=1}^\infty {(-Dk^2)^n\over n}\sum_{i=0}^\infty
\left[\gamma^{2n}+(1-\gamma)^{2n}\right]^i.\end{aligned}$$ The second identity follows from the series expansion $\ln
(1+q)=-\sum_{n\geq 1}n^{-1}(-q)^n$, and the fourth from the binomial identity $\sum_{j=0}^i {i\choose j}p^jq^{i-j}=(p+q)^i$. Finally, using the geometric series $(1-x)^{-1}=\sum_{n\geq 0} x^n$, the Fourier transform at the steady state is found $$\label{pinf}
\hat P_\infty(k)=\exp\left\{\sum_{n=1}^\infty
{(-Dk^2)^n\over n a_{2n}(\gamma)}\right\},$$ with $a_n(\gamma)$ given by Eq. (\[an\]). The $n$th cumulant of the steady state distribution $\kappa_n$ can be readily found from $\ln
\hat P_\infty(k)=\sum_m {(ik)^m\over m!}\kappa_m$. Therefore, the odd cumulants vanish while the even cumulants are simply proportional to the time scales characterizing the exponential relaxation of the corresponding moments: $$\kappa_{2n}={(2n-1)!\over n}D^n\tau_{2n}.$$ Of course, the moments can be constructed from these cumulants. Interestingly, a direct correspondence between the steady state characteristics and the nonequilibrium relaxation time scales is established via the cumulants of the probability distribution.
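Indeed, matching the series for $\ln\hat P_\infty(k)$ with the cumulant expansion term by term gives $\kappa_{2n}=(2n)!\,D^n/(n\,a_{2n})$, which coincides with the formula above once $\tau_{2n}=2n/a_{2n}$ is substituted. A short numerical cross-check (our own sketch):

```python
import math

def a(n, g):
    """a_n(gamma) = 1 - (1-gamma)^n - gamma^n."""
    return 1.0 - (1.0 - g) ** n - g ** n

def kappa_series(n, g, D):
    """From the k^{2n} coefficient of ln P_inf: kappa_{2n} = (2n)! D^n / (n a_{2n})."""
    return math.factorial(2 * n) * D ** n / (n * a(2 * n, g))

def kappa_timescale(n, g, D):
    """Stated formula: kappa_{2n} = (2n-1)!/n * D^n * tau_{2n}, tau_{2n} = 2n/a_{2n}."""
    return math.factorial(2 * n - 1) / n * D ** n * (2 * n / a(2 * n, g))

for n in (1, 2, 3):
    print(kappa_series(n, 0.3, 1.0), kappa_timescale(n, 0.3, 1.0))  # identical pairs
```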
None of the (even) cumulants vanish, thereby reflecting significant deviations from a Gaussian distribution. Nevertheless, for sufficiently large velocities, one may concentrate on the small wave number behavior. Using the inverse Fourier transform of (\[pinf\]) one finds the tail of the distribution $$P_\infty(v)\simeq \sqrt{a_2\over 4\pi D}\,
\exp\left\{-{a_2v^2\over 4D}\right\}, \qquad v\gg \sqrt{D/a_2}.$$ This in particular implies the large moment behavior $M_{2n}\to
(2n-1)!!(2D/a_2)^{n}$ as $n\to\infty$.
To examine how general the above behavior is, we briefly discuss a few generalizations and extensions of the basic model. Relaxing Galilean invariance, the most general momentum conserving collision rule is $$\begin{aligned}
\label{rule1}
\pmatrix{v_1\cr v_2\cr}=\pmatrix{\gamma_1 &1-\gamma_2\cr
1-\gamma_1&\gamma_2\cr}\pmatrix{u_1\cr u_2\cr}. \end{aligned}$$ Following the same steps that led to (\[multi\]) shows that when $\gamma_1,\gamma_2\neq 0,1$ and when $M_1=0$ this process also exhibits multiscaling with the exponents $\alpha_n=a_n/a_2$, where $a_n(\gamma_1,\gamma_2)={1\over 2}[a_n(\gamma_1)+a_n(\gamma_2)]$. When $\gamma_1=1-\gamma_2=\gamma$ one recovers the model introduced by Melzak [@melzak], and when $\gamma_1=\gamma_2=\gamma$ one recovers inelastic collisions. Since $a_n(\gamma)=a_n(1-\gamma)$ both models have identical multiscaling exponents. Furthermore, a multiscaling behavior with the very same exponents $\alpha_n(\gamma)$ is also found for the following process $(u_1,u_2)\to (u_1-\gamma u_1,v_1+\gamma
u_1)$ investigated in the context of asset distributions [@sps] and headway distributions in traffic flows [@kg].
One can also consider stochastic rather than deterministic collision processes by assuming that the collision (\[rule1\]) occurs with probability density $\sigma_1(\gamma_1,\gamma_2)$. Our findings extend to this model as well and the multiscaling exponents are given by the same general expression $\alpha_n=a_n/a_2$ with . In particular, for completely random inelastic collisions, i.e., $\sigma\equiv 1$ and $\gamma_1=\gamma_2=\gamma$, one finds $a_n={n-1\over n+1}$ and hence $\alpha_n=3{n-1\over n+1}$.
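The value $a_n={n-1\over n+1}$ follows from averaging $a_n(\gamma)$ over $\gamma$ uniform on $(0,1)$, since $\int_0^1\left[1-(1-\gamma)^n-\gamma^n\right]d\gamma=1-{2\over n+1}$. A quick midpoint-rule check (the sample count is an arbitrary choice of ours):

```python
def a_n_uniform(n, m=100001):
    """Midpoint-rule average of a_n(g) = 1 - (1-g)^n - g^n over g in (0, 1)."""
    h = 1.0 / m
    return h * sum(1.0 - (1.0 - (i + 0.5) * h) ** n - ((i + 0.5) * h) ** n
                   for i in range(m))

for n in (2, 3, 5):
    print(a_n_uniform(n), (n - 1) / (n + 1))  # the two columns should agree
```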
So far, we discussed only two-body interactions. We therefore consider $N$-body interactions where a collision is symbolized by $(u_1,\ldots,u_N)\to (v_1,\ldots,v_N)$. We consider a generalization of the $\gamma={1\over 2}$ two-body case where the post-collision velocities are all equal. Momentum conservation implies $v_i=\bar
u=N^{-1}\sum u_i$. The master equation is a straightforward generalization of the two-body case and we merely quote the moment equations $$\dot M_n+a_nM_n=
N^{-n}\sum_{n_i\neq 1} {n\choose {n_1\ldots n_N}} M_{n_1}\cdots M_{n_N}$$ with $a_n=1-N^{1-n}$. Using the inequality for all , and its kin like for all , etc., we find that the right-hand side of the above equation remains asymptotically negligible. Therefore, $M_n\sim \exp(-a_nt)$ and $$M_n\sim M_2^{\alpha_n},\qquad
\alpha_n={1-N^{1-n}\over 1-N^{-1}}.$$ Thus, this $N$-body “averaging” process exhibits multiscaling asymptotic behavior as well.
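As a consistency check (ours, not the paper's), for $N=2$ these exponents reduce to the two-body exponents at $\gamma=1/2$, since both equal $2-2^{2-n}$:

```python
def alpha_nbody(n, N):
    """alpha_n = (1 - N^{1-n}) / (1 - 1/N) for the N-body averaging process."""
    return (1.0 - N ** (1.0 - n)) / (1.0 - 1.0 / N)

def alpha_twobody(n, g=0.5):
    """Two-body exponents alpha_n = a_n / a_2 at gamma = 1/2."""
    a = lambda m: 1.0 - (1.0 - g) ** m - g ** m
    return a(n) / a(2)

for n in range(3, 8):
    print(alpha_nbody(n, 2), alpha_twobody(n), 2.0 - 2.0 ** (2 - n))  # all equal
```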
Thus far, we considered the behavior on a mean field level, i.e., in an infinite dimensional space. It is natural to consider the finite-dimensional counterpart. Specifically, we assume that particles reside on a $d$ dimensional lattice and that only nearest neighbors interact. Here, the above dynamics is essentially equivalent to a diffusion process [@bk]. As a result, the underlying correlation length is diffusive, $L(t)\sim t^{1/2}$. Within this correlation length the velocities are “well mixed” and momentum conservation therefore implies that $v\sim L^{-d/2}\sim t^{-d/4}$. Indeed, the infinite dimension limit is consistent with the above exponential decay. Furthermore, an exact solution for moments of arbitrary order is possible [@bk]. We do not detail it here and simply quote that ordinary scaling is restored $M_n\sim t^{-n/4}$, i.e. $\alpha_n=\alpha_n^*=n/2$. Thus, spatial correlations counter the mechanism responsible for multiscaling.
In summary, we have investigated inelastic collision processes in infinite dimension. We have shown that such systems are characterized by multiscaling, or equivalently by an infinite hierarchy of diverging time scales. Multiscaling holds for several generalizations of the basic model including stochastic collision models and even processes which do not obey Galilean invariance. In this latter case, however, multiscaling is restricted to situations with zero total momentum. This perhaps explains why multiscaling asymptotic behavior was overlooked in previous studies [@melzak; @sps]. Another explanation is that this behavior may be difficult to detect from numerical simulations. Indeed, in other problems such as multidimensional fragmentation [@bk1], and in fluid turbulence, low order moments deviate only slightly from the normal scaling expectation.
There are a number of extensions of this work which are worth pursuing. We have started with a simplified model of a 1D granular gas with a velocity independent collision rate. One possibility is to approximate the collision rate with the RMS velocity fluctuation. This leads to the algebraic decay $M_n\sim t^{-2\alpha_n}$ with $\alpha_n$ given by Eq. (\[multi\]) and in particular, Haff’s cooling law $T=M_2\sim t^{-2}$ is recovered [@pkh]. Our numerical studies indicate that when velocity dependent collision rates are implemented, ordinary scaling behavior is restored. One may also use this model as an approximation for inelastic collisions in higher dimensions as well, following the Maxwell approximation in kinetic theory [@ernst; @bk2].
This research was supported by the DOE (W-7405-ENG-36), NSF (DMR9632059), and ARO (DAAH04-96-1-0114).
[99]{}
P. K. Haff, J. Fluid Mech. [**134**]{}, 401 (1983). S. McNamara and W. R. Young, Phys. Fluids A [**4**]{}, 496 (1992). B. Bernu and R. Mazighi, J. Phys. A [**23**]{}, 5745 (1990). Z. A. Melzak, [*Mathematical Ideas, Modeling and Applications, Volume II of Companion to Concrete Mathematics*]{} (Wiley, New York, 1976), p. 279. P. A. Ferrari and L. R. G. Fontes, El. J. Prob. [**3**]{}, Paper no. 6 (1998). S. Ispolatov, P. L. Krapivsky, and S. Redner, Eur. Phys. J. B [**2**]{}, 267 (1998). D. Aldous and P. Diaconis, Prob. Theory Relat. Fields [**103**]{}, 199 (1995). J. Krug and J. García, [*cond-mat/9909034*]{}. S. N. Coppersmith, C.-h. Liu, S. Majumdar, O. Narayan, and T. Witten, Phys. Rev. E [**53**]{}, 4673 (1996). P. L. Krapivsky and E. Ben-Naim, Phys. Rev. E [**50**]{}, 3502 (1994); E. Ben-Naim and P. L. Krapivsky, Phys. Rev. Lett. [**76**]{}, 3234 (1996). D. R. M. Williams and F. C. MacKintosh, Phys. Rev. E [**54**]{}, 9 (1996). M. R. Swift, M. Boamfǎ, S. J. Cornell, and A. Maritan, Phys. Rev. Lett. [**80**]{}, 4410 (1998). E. Ben-Naim and P. L. Krapivsky, in preparation. M. H. Ernst, Phys. Rep. [**78**]{}, 1 (1981). E. Ben-Naim and P. L. Krapivsky, Phys. Rev. E [**59**]{}, 7000 (1999).
---
abstract: 'We consider the problem of regulating by means of external control inputs the ratio of two cell populations. Specifically, we assume that these two cellular populations are composed of cells belonging to the same strain which embeds some bistable memory mechanism, e.g. a genetic toggle switch, allowing them to switch roles from one population to the other in response to some inputs. We present three control strategies to regulate the populations’ ratio to arbitrary desired values which also take into account realistic physical and technological constraints occurring in experimental microfluidic platforms. The designed controllers are then validated in-silico using stochastic agent-based simulations.'
author:
- 'Davide Salzano$^{1}$, Davide Fiore$^{1}$, Mario di Bernardo$^{1,2}$[^1][^2]'
bibliography:
- 'refs.bib'
title: '**Ratiometric control for differentiation of cell populations endowed with synthetic toggle switches** '
---
Introduction
============
The aim of Synthetic Biology is to engineer biomolecular systems to achieve new useful functionalities [@del2018future]. Potential applications range from designing bacteria that can produce biofuels or sense and degrade pollutants in the environment (like hydrocarbons and plastic), to immune cells that can track and kill cancer cells, or that can release drugs at specific points and conditions to avoid side effects (see [@del2018future] for references). This is possible by designing genetic circuits with programmed functionalities and embedding them into living cells. However, most of the engineered genetic circuits have been designed to work at the single-cell level. As a consequence, their functional complexity is limited by inherent factors such as excessive metabolic burden on the cell, competition for limited resources, and incompatible chemical reactions.
A promising approach to overcome these issues is to engineer synthetic microbial consortia in which the effort is divided and assigned to different subpopulations of cells to achieve more sophisticated functionalities [@bittihn2018rational]. Recent cooperative consortia designs include a predator-prey system [@balagadde2008synthetic], an emergent oscillator [@chen2015emergent], a toggle-switch implemented across two species [@sadeghpour2017bistability], and a multicellular feedback control scheme where the control functions are split between two species [@fiore2016silico]. Unfortunately, the correct functioning of a multicellular consortium requires cocultivating and maintaining multiple cell populations. As different cells in the consortium embed specific sets of genetic circuits, they also present different growth rates due to uneven metabolic burdens and might show additional undesired dynamics, such as oscillations [@sadeghpour2017bistability]. Therefore, when different strains are mixed together, it is essential to maintain their stable coexistence by controlling their relative population numbers (i.e. their ratio). This is usually achieved by encoding in the synthetic design some dynamic equilibrium between the two populations, e.g. [@igem],[@ren2017population]. However, if one of the two populations eventually dies out, these solutions can either lead to uncontrolled growth of one population or to the extinction of both. Moreover, the steady-state value of the populations ratio is hard-coded into the genes without any possibility of being changed online.
In this paper we present an alternative approach to control the populations’ ratio in mono-strain consortia by means of external control inputs. Specifically, we consider the case in which there exists a bistable memory mechanism inside the cells, such as the genetic toggle switch circuit [@Gar], whose current internal state defines which of the two possible roles, or “working-condition”, the cell is playing in the consortium. We assume that, by changing the concentrations of some inducer molecules in the growth medium, it is possible to make cells switch their role and hence keep the populations ratio to a desired value. Albeit requiring a possibly more complex design with respect to other multi-strain scenarios, this approach has the advantage of being intrinsically robust to extinction events that could undermine the operation of the entire consortium. Also, it allows the ratio of the populations to be changed online, in real-time, if needed.
The crucial problem we address in this paper is to design feedback control strategies able to steer the inducer molecules inputs to achieve and maintain a desired cell ratio. We define this problem as *“ratiometric"* control of cell populations.
We propose and test three different external control strategies to regulate the populations ratio to any value, namely a Bang-Bang controller, a PI controller and a model predictive controller (MPC). The control laws are designed taking into account realistic physical and technological implementation constraints that are present in microfluidic-based experimental platforms. Finally, the proposed controllers are validated *in-silico* using realistic stochastic agent-based simulations in BSim [@BSim] that appropriately model also spatial and diffusion effects in the microfluidic device and cell growth.
The ratiometric control problem
===============================
We want to design some feedback control strategy such that, by acting on some input signals *common* to every cell, the ratio between the two populations is asymptotically regulated to some desired value.
![Regions of the state space $(LacI^i, TetR^i)$ such that cells belong to set $\mathcal{A}_t$ (red color) or $\mathcal{B}_t$ (green color). The positions of the stable equilibrium points **A** and **B** are reported in the phase plane with several examples of solutions starting from different initial conditions.[]{data-label="fig:phasePortraitLugagne"}](PhasePortToggle_10.png){width="0.8\linewidth"}
Population model {#sec:Model}
----------------
We assume that the bistable memory required by the cells to guarantee their correct operation in the consortium is realized by means of an inducible genetic toggle switch [@Gar]. This genetic regulatory network consists of two repressor proteins, LacI and TetR, both repressing each other’s promoter, so that only one protein is fully expressed at any time. From a modelling viewpoint, the genetic toggle switch is a bistable dynamical system, possessing two stable equilibria, **A** and **B**, each associated with a fully expressed protein, and a saddle equilibrium point, whose stable manifold is the boundary separating the regions of attraction of the other two. Thus, given an initial condition, its solutions will converge to one of the two stable equilibria. The expression level of the two repressing proteins can be flipped by changing the concentration of two inducer molecules, aTc and IPTG. This causes the occurrence of two saddle-node bifurcations yielding the required reversible bistable memory function. We use the inducible toggle switch model described in [@lugagne] and further analyzed in [@fiore2019analysis; @guarino2018silico]. Namely, we assume the dynamics of the $i$-th cell in the consortium can be written as follows:
$$\begin{aligned}
\label{eq:LugagneOriginal}
& \frac{\mathrm{d}\, mRNA_\mathrm{LacI}^{i}}{\mathrm{dt}} = \kappa_\mathrm{L}^\mathrm{m0} + \kappa_\mathrm{L}^\mathrm{m} \, \Phi_\mathrm{T}(t)-\gamma_\mathrm{L}^\mathrm{m} \, {mRNA_\mathrm{LacI}}\\
& \frac{\mathrm{d}\, mRNA_\mathrm{TetR}^{i}}{\mathrm{dt}} = \kappa_\mathrm{T}^\mathrm{m0} + \kappa_\mathrm{T}^\mathrm{m} \, \Phi_\mathrm{L}(t) - \gamma_\mathrm{T}^\mathrm{m} \, {mRNA_\mathrm{TetR}}\\
& \frac{\mathrm{d}\, LacI^{i}}{\mathrm{dt}} = \kappa_\mathrm{L}^\mathrm{p} \, {mRNA_\mathrm{LacI}^{i}} - \gamma_\mathrm{L}^\mathrm{p} \, {LacI^{i}}\\
& \frac{\mathrm{d}\, TetR^{i}}{\mathrm{dt}} = \kappa_\mathrm{T}^\mathrm{p} \, {mRNA_\mathrm{TetR}^{i}} - \gamma_\mathrm{T}^\mathrm{p} \, {TetR^{i}}\\
& \frac{\mathrm{d}\, aTc^{i}}{\mathrm{dt}} = k_\mathrm{aTc} \, \left(u_\mathrm{a} - {aTc^{i}} \right)\\
\label{eq:LugagneOriginal_last}
& \frac{\mathrm{d}\, IPTG^{i}}{\mathrm{dt}} = k_\mathrm{IPTG} \, \left(u_\mathrm{p} - {IPTG^{i}} \right)\end{aligned}$$ where the state variables denote concentrations of molecules inside the cell. The parameters $\kappa_\mathrm{L/T}^\mathrm{m0}$, $\kappa_\mathrm{L/T}^\mathrm{m}$, $\kappa_\mathrm{L/T}^\mathrm{p}$, $\gamma_\mathrm{L/T}^\mathrm{m}$, $\gamma_\mathrm{L/T}^\mathrm{p}$, $k_\mathrm{aTc/IPTG}$ are leakage transcription, transcription, translation, mRNA degradation and protein degradation rates, and diffusion rates of the inducers across the cell membrane, respectively. The variables $u_{\mathrm{a}}$ and $u_{\mathrm{p}}$ denote the concentrations of the inducer molecules in the growth medium and they also represent the control inputs common to every cell in the populations. Moreover, in the previous equations, the input effects are modelled by the following terms: $$\begin{aligned}
& \Phi_\mathrm{T}(t) : = \frac{1}{1+\left(\frac{TetR^{i}}{\theta_{\mathrm{TetR}}}\cdot \frac{1}{1+\left(\frac{aTc^{i}}{\theta_{\mathrm{aTc}}}\right)^{\eta_{\mathrm{aTc}}}}\right)^{\eta_{\mathrm{TetR}}}}\\
& \Phi_\mathrm{L}(t) : = \frac{1}{1+\left(\frac{{LacI^{i}}}{\theta_{\mathrm{LacI}}}\cdot \frac{1}{1+\left(\frac{{IPTG^{i}}}{\theta_{\mathrm{IPTG}}}\right)^{\eta_{\mathrm{IPTG}}}}\right)^{\eta_{\mathrm{LacI}}}}\end{aligned}$$ All parameter values are provided in Table \[tab:par\] and are the same as those used in [@lugagne].
$ \kappa_\mathrm{L}^\mathrm{m0} $ $ 3.045\cdot10^{-1} $ mRNA $\min^{-1}$ $ \gamma_\mathrm{L}^\mathrm{p} $ $ 1.65\cdot10^{-2} $ $\min^{-1}$
----------------------------------- ------------------------------------------------ ---------------------------------- ----------------------------------
$ \kappa_\mathrm{T}^\mathrm{m0} $ $ 3.313\cdot10^{-1} $ mRNA $\min^{-1}$ $ \gamma_\mathrm{T}^\mathrm{p} $ $ 1.65\cdot10^{-2} $ $\min^{-1}$
$ \kappa_\mathrm{L}^\mathrm{m} $ $ 13.01 $ mRNA $\min^{-1}$ $ \theta_{\mathrm{LacI}} $ $ 124.9 $
$ \kappa_\mathrm{T}^\mathrm{m} $ $ 5.055 $ mRNA $\min^{-1}$ $ \eta_{\mathrm{LacI}} $ $ 2.00 $
$ \kappa_\mathrm{L}^\mathrm{p} $ $ 0.6606 $ a.u. $\text{mRNA}^{-1}$ $\min^{-1}$ $ \theta_{\mathrm{TetR}} $ $ 76.40 $
$ \kappa_\mathrm{T}^\mathrm{p} $ $ 0.5098 $ a.u. $\text{mRNA}^{-1}$ $\min^{-1}$ $ \eta_{\mathrm{TetR}} $ $ 2.152 $
$ k_{\mathrm{aTc}} $ $ 4\cdot 10^{-2} $ $\min^{-1}$ $ \theta_{\mathrm{aTc}} $ $ 35.98 $
$ k_{\mathrm{IPTG}} $ $ 4\cdot 10^{-2} $ $\min^{-1}$ $ \eta_{\mathrm{aTc}} $ $ 2.00 $
$ \gamma_\mathrm{L}^\mathrm{m} $ $ 1.386\cdot10^{-1} $ $\min^{-1}$ $ \theta_{\mathrm{IPTG}} $ $ 2.926\cdot10^{-1} $
$ \gamma_\mathrm{T}^\mathrm{m} $ $ 1.386\cdot10^{-1} $ $\min^{-1}$ $ \eta_{\mathrm{IPTG}} $ $ 2.00 $
: Value of the parameters of the cell population models (taken from [@lugagne]).[]{data-label="tab:par"}
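A crude forward-Euler integration of the deterministic model with the parameter values of Table \[tab:par\] illustrates the induced switching: a saturating aTc input drives the cell into the LacI-dominated state, while a saturating IPTG input drives it into the TetR-dominated one. The integrator, step size and input levels below are our own illustrative choices and are not taken from [@lugagne].

```python
P = dict(kLm0=0.3045, kTm0=0.3313, kLm=13.01, kTm=5.055,
         kLp=0.6606, kTp=0.5098, gLm=0.1386, gTm=0.1386,
         gLp=0.0165, gTp=0.0165, kaTc=0.04, kIPTG=0.04,
         thL=124.9, etaL=2.0, thT=76.40, etaT=2.152,
         thA=35.98, etaA=2.0, thI=0.2926, etaI=2.0)

def simulate(u_a, u_p, t_end=4000.0, dt=0.1):
    """Forward-Euler integration of the single-cell toggle switch ODEs."""
    mL = mT = L = T = A = I = 0.0   # mRNAs, proteins and internal inducer levels
    for _ in range(int(t_end / dt)):
        phiT = 1.0 / (1.0 + (T / P['thT'] / (1.0 + (A / P['thA']) ** P['etaA'])) ** P['etaT'])
        phiL = 1.0 / (1.0 + (L / P['thL'] / (1.0 + (I / P['thI']) ** P['etaI'])) ** P['etaL'])
        mL += dt * (P['kLm0'] + P['kLm'] * phiT - P['gLm'] * mL)
        mT += dt * (P['kTm0'] + P['kTm'] * phiL - P['gTm'] * mT)
        L += dt * (P['kLp'] * mL - P['gLp'] * L)
        T += dt * (P['kTp'] * mT - P['gTp'] * T)
        A += dt * P['kaTc'] * (u_a - A)
        I += dt * P['kIPTG'] * (u_p - I)
    return L, T

L1, T1 = simulate(u_a=100.0, u_p=0.0)   # aTc only: LacI fully expressed
L2, T2 = simulate(u_a=0.0, u_p=1.0)     # IPTG only: TetR fully expressed
print(L1 > 2 * T1, T2 > 2 * L2)
```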
The previous dynamical model is a deterministic description of the evolution of the molecule concentrations in the system and, therefore, it is only an approximation of the stochastic biochemical processes taking place inside the cells. To obtain a more accurate description of the stochastic processes governing the dynamics, for validation we adopted the SDE-based algorithm described in [@lakatos2017stochastic] that provides a better approximation of the Chemical Master Equation [@CME; @CLE] of the system. Formally, we solved: $$\mathrm{d} \mathbf{x}(t)=S\cdot a(\mathbf{x}(t))\cdot \mathrm{dt}+ S \cdot \mathrm{diag} \big( \sqrt{a(\mathbf{x}(t))} \big) \cdot \mathrm{d}\mathbf{w}$$ where $\mathbf{x}(t)$ is the state of the process, $S$ is the stoichiometric matrix, $a(\mathbf{x}(t))$ is a vector containing the propensity functions associated with each reaction and $\mathbf{w}$ is a vector of independent standard Wiener processes. Both $S$ and $a(\mathbf{x})$ are the same as used in [@lugagne].\
As shown later in the *in-silico* validation of the control approaches, the heterogeneity in the response of the cells, provided in our case by the biochemical noise, is a fundamental ingredient to solve the ratiometric control problem.
Problem Statement
-----------------
We denote by $\mathcal{N}_t$ the finite set of all cells in the consortium at time $t$, and by $N(t)=|\mathcal{N}_t|$ its cardinality. Note that the number of cells may vary in time as a consequence of cell births and deaths or of their accidental outflow from the microfluidic chamber in which they are hosted. We define the following sets: $\mathcal{A}_t:=\{ i\in\mathcal{N}_t:\, TetR^i(t) > 2\, LacI^i(t) \}$, $\mathcal{B}_t:=\{ i\in\mathcal{N}_t:\, LacI^i(t) > 2\, TetR^i(t) \}$, and $\mathcal{C}_t:=\{ i\in\mathcal{N}_t:\, i\notin\mathcal{A}_t, i\notin\mathcal{B}_t \}$. We also denote by $n_\mathrm{A}(t)$ and $n_\mathrm{B}(t)$ the cardinalities of $\mathcal{A}_t$ and $\mathcal{B}_t$ at time $t$, respectively. It is clear from Figure \[fig:phasePortraitLugagne\] that these sets are disjoint and form a partition of $\mathcal{N}_t$ for all $t$.
Noticing the relative position in state space $(LacI^i, TetR^i)$ of the stable equilibria **A** and **B** of the toggle switch, we say that at time $t$ cell $i$ *belongs* to the population **A** (population **B**), if $i\in\mathcal{A}_t$ ($i\in\mathcal{B}_t$, respectively). Moreover, we define as $r_\mathrm{A}(t) = \frac{n_\mathrm{A}(t)}{N(t)}$ and $r_\mathrm{B}(t) = \frac{n_\mathrm{B}(t)}{N(t)}$ the *ratio* of cells that belong to population **A** and population **B**, respectively.
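In code, the partition and the ratios defined above reduce to a few lines; the cell states below are hypothetical $(LacI, TetR)$ pairs used only to exercise the definitions.

```python
def ratios(states):
    """states: list of (LacI, TetR) tuples.
    A cell is in A if TetR > 2*LacI, in B if LacI > 2*TetR,
    and in the undecided set C otherwise.  Returns (r_A, r_B)."""
    N = len(states)
    n_A = sum(1 for lacI, tetR in states if tetR > 2 * lacI)
    n_B = sum(1 for lacI, tetR in states if lacI > 2 * tetR)
    return n_A / N, n_B / N

cells = [(10, 300), (500, 20), (400, 30), (100, 120)]  # last cell is in C
r_A, r_B = ratios(cells)
# one cell in A, two in B, one undecided
```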
Given a consortium of cells whose dynamics is described by - and a desired ratio $r\in[0,1]$ of cells belonging to one population, for example **B**, we say that the control law $u(t)=\left[u_\mathrm{a}(t), \, u_\mathrm{p}(t)\right]^\top$ solves the *ratiometric control problem* if, for some small positive constant $\epsilon$, $$\label{eq:control_aim}
\lim_{t\to \infty}|e_\mathrm{A}(t)| < \epsilon \quad \text{ and } \quad \lim_{t\to \infty}|e_\mathrm{B}(t)|<\epsilon,
$$ where $e_\mathrm{B}(t) = r-r_\mathrm{B}(t)$ and $e_\mathrm{A}(t) = (1-r)-r_\mathrm{A}(t)$.
Proposed Control Strategies
===========================
In this section we propose three control strategies to solve the ratiometric control problem. Specifically, we present a Bang-Bang controller, a PI controller and a model predictive controller (MPC). All controllers are ad-hoc implementations that explicitly take into account the physical and technological constraints of an experimental microfluidic platform. Then, in Section \[sec:simulations\] we validate the controllers [in-silico]{}. In contrast to [@lugagne], where the objective of the proposed controllers was to regulate the expression of the genes of a *single* toggle switch at an *intermediate* level, here the feedback loop is closed on the entire cell population and the cells are split into two groups, each fully expressing either of the two genes.
Experimental Constraints {#experimental-constraints .unnumbered}
------------------------
The experimental platform we consider as a reference is based on microfluidics, as in the devices described in [@menolascina2014vivo; @Perrino2015], and uses a fluorescence microscope to measure the current state of the cells. We then have to take into account the following realistic constraints:
1. the state of the cells cannot be measured more often than $5\,\mathrm{min}$ to avoid excessive phototoxicity;
2. there is a time delay between $20$ and $40\,\mathrm{s}$ on the actuation of the control inputs due to the time that the flow of the chemical inducers takes to reach the chambers on the microfluidic chip where cells are hosted;
3. the minimum time interval between two consecutive control inputs cannot be less than $15\,\mathrm{min}$ to limit excessive osmotic stress on the cells;
4. the maximum duration of any experiment cannot exceed $24$ hours, to avoid substantial cell mutations during the experiments.
Moreover, the specific implementation of the microfluidic device also introduces constraints on the possible classes of input signals $u(t)=\left[u_\mathrm{a}(t), \, u_\mathrm{p}(t)\right]^\top$ that can be generated by the actuators.
We consider two possible implementations:
1. a T-Junction, which limits $u_\mathrm{a}$ and $u_\mathrm{p}$ to be mutually exclusive and with fixed amplitudes, that is $u$ is either set to $[U_\mathrm{a},0]^\top$, which causes $e_\mathrm{B}$ to decrease and $e_\mathrm{A}$ to increase, or to $[0, U_\mathrm{p}]^\top$, which does the opposite;
2. a Dial-A-Wave (DAW) system [@ferry2011microfluidics], which constrains $u_\mathrm{a}$ and $u_\mathrm{p}$ to a convex combination. Namely, given $u_\mathrm{a} \in [0,U_\mathrm{a}]$ we have $$\label{eq:DAW}
u_\mathrm{p}=\left(1-\frac{u_\mathrm{a}}{U_\mathrm{a}}\right)U_\mathrm{p}
$$
In the above equations, $U_\mathrm{a}\in\left[0,100\right]$ and $U_\mathrm{p}\in\left[0,1\right]$ are control amplitudes to be selected and denote the maximum possible concentrations of the inducers present in the reservoirs (these values are the same as those used *in-vivo* in [@lugagne]).
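A minimal sketch of the DAW actuation constraint: given the first input $u_\mathrm{a}$, the second input is fixed by the convex combination of Eq. (\[eq:DAW\]). The clipping to the admissible range is an implementation detail we add here, not something prescribed in the text.

```python
def daw_inputs(u_a, U_a=100.0, U_p=1.0):
    """Dial-A-Wave actuation: (u_a, u_p) lies on the segment between
    [U_a, 0] and [0, U_p], i.e. u_p = (1 - u_a/U_a) * U_p."""
    u_a = min(max(u_a, 0.0), U_a)  # clip to the admissible range [0, U_a]
    return u_a, (1.0 - u_a / U_a) * U_p

# The two extremes reproduce the mutually exclusive T-junction inputs:
# daw_inputs(0.0) -> (0.0, 1.0) and daw_inputs(100.0) -> (100.0, 0.0)
```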
Depending on which implementation is considered, only specific controllers are feasible. More precisely, for the T-Junction implementation only a Bang-Bang controller is considered, while for the Dial-A-Wave system we design a PI controller and an MPC.
Bang-Bang Controller
--------------------
The Bang-Bang controller implemented via a T-junction consists of two mutually exclusive inputs with fixed amplitude, which are applied to the system depending on the current value of the error signals $e_\mathrm{A}(t)$ and $e_\mathrm{B}(t)$. Specifically, at any time $t$ the input that causes $\max\{ |e_\mathrm{A}(t)|, |e_\mathrm{B}(t)| \}$ to decrease is applied. More formally, the control input $u(t)=\left[u_\mathrm{a}(t), \, u_\mathrm{p}(t)\right]^\top$ is chosen as $$u(t) =
\begin{cases}
u_{1}, & |e_\mathrm{B}(t)|\geq |e_\mathrm{A}(t)| \\
u_{2}, & |e_\mathrm{B}(t)| < |e_\mathrm{A}(t)|\\
\end{cases},$$ where $$\label{eq:BB}
u_{1}=
\begin{cases}
\left[0,U_\mathrm{p}\right]^\top, & \! e_\mathrm{B}(t)\leq 0\\
\left[U_\mathrm{a},0\right]^\top, & \! e_\mathrm{B}(t)>0
\end{cases},
\; \; u_{2}=
\begin{cases}
\left[U_\mathrm{a}, 0\right]^\top, & \! e_\mathrm{A}(t)\leq 0\\
\left[0,U_\mathrm{p}\right]^\top, & \! e_\mathrm{A}(t) > 0
\end{cases}.$$
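The selection logic of Eq. (\[eq:BB\]) can be sketched directly; the default amplitudes are the values used later in the simulations.

```python
def bang_bang(e_A, e_B, U_a=60.0, U_p=0.5):
    """T-junction Bang-Bang law: act on the larger of the two errors
    with one of the two mutually exclusive inputs [U_a, 0] or [0, U_p]."""
    if abs(e_B) >= abs(e_A):
        # e_B <= 0: too many cells in B, apply [0, U_p] (raises e_B);
        # e_B  > 0: too few cells in B, apply [U_a, 0] (lowers e_B).
        return (0.0, U_p) if e_B <= 0 else (U_a, 0.0)
    # symmetric choice acting on e_A
    return (U_a, 0.0) if e_A <= 0 else (0.0, U_p)
```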
PI Controller
-------------
The PI control inputs to be implemented via the Dial-a-Wave system are chosen as: $$\label{eq:PIctrl}
\begin{split}
u_\mathrm{a}(t) = \, & k_\mathrm{P,a}e_\mathrm{B}(t)+k_\mathrm{I,a}\int_{0}^{t}e_\mathrm{B}(t)dt\\
& -\left(k_\mathrm{P,p}e_\mathrm{A}(t)+k_\mathrm{I,p}\int_{0}^{t}e_\mathrm{A}(t)dt\right)
\end{split}$$ with $k_\mathrm{P,p}$, $k_\mathrm{P,a}$, $k_\mathrm{I,a}$ and $k_\mathrm{I,p}$ being the control gains, and $u_\mathrm{p}(t)$ given by Eq. (\[eq:DAW\]).
Moreover, to improve the performance and guarantee that the control signals do not exceed their admissible values, the PI controller is complemented by a dynamic saturation defined as: $$\label{eq:dyn_sat}
\begin{cases}
u_\mathrm{a}\in\left[0,50\right], & \text{if } |e_\mathrm{B}|<|e_\mathrm{A}|\\
u_\mathrm{a}\in\left[0,100\right], & \text{otherwise}
\end{cases}$$ and an anti-windup scheme.
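A discrete-time sketch of the PI law with the dynamic saturation above is given below. The gains are those quoted later in the simulation section; the clamping anti-windup (freezing the integrators while the output saturates) is an assumption, since the text does not detail the scheme used.

```python
class RatiometricPI:
    """PI law on (e_A, e_B) with dynamic saturation and clamping anti-windup."""

    def __init__(self, kPa=66.67, kPp=2.25, kIa=1.2, kIp=0.006,
                 U_a=100.0, U_p=1.0):
        self.kPa, self.kPp, self.kIa, self.kIp = kPa, kPp, kIa, kIp
        self.U_a, self.U_p = U_a, U_p
        self.int_eA = 0.0   # integral of e_A
        self.int_eB = 0.0   # integral of e_B

    def step(self, e_A, e_B, dt):
        u_a = (self.kPa * e_B + self.kIa * self.int_eB
               - (self.kPp * e_A + self.kIp * self.int_eA))
        # dynamic saturation: halve the admissible range when |e_B| < |e_A|
        u_max = 50.0 if abs(e_B) < abs(e_A) else self.U_a
        u_sat = min(max(u_a, 0.0), u_max)
        if u_sat == u_a:                 # clamping anti-windup:
            self.int_eA += e_A * dt      # integrate only when not saturated
            self.int_eB += e_B * dt
        u_p = (1.0 - u_sat / self.U_a) * self.U_p   # Dial-A-Wave constraint
        return u_sat, u_p
```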
MPC Algorithm
-------------
The last algorithm we considered is a Model Predictive Controller (MPC) [@mayne2000constrained]. Given the state of the cell population, say $\textbf{x}=[x_1, \dots, x_{N(t)}]^\top\in\mathbb{R}^{2N(t)}$, where $x_i=[LacI^i, TetR^i]^\top$ is the state of the $i$-th cell, we compute the optimal control input over the time interval $\left[t, t+T_{p}\right]$ which minimizes the cost function $$\label{eq:cost_fcn}
J(\textbf{x},r,u,t)=\int_{t}^{t+T_{p}}\left(\alpha\|e_\mathrm{B}(\tau)\|+(1-\alpha)\|e_\mathrm{A}(\tau)\|\right)d\tau$$ with $T_p$ being the controller prediction time and $\alpha\in(0,1)$ a constant design parameter. We then apply the computed optimal control input over the interval $\left[ t,t+T_c\right)$, where $T_c<T_p$ is a control time to be selected during the implementation.
To reduce the computational burden, in our implementation we evaluated the cost function $J(\textbf{x},r,u,t)$ using only a representative subset of cells chosen as a sample of the entire population. This subset is chosen such that the reduced ratios (i.e. the ratios $r_\mathrm{A}$ and $r_\mathrm{B}$ computed on the subset) are as close as possible to those of the entire population. Also, a genetic algorithm taken from [@GA] (without the mutation phase) was used to find the optimal control sequence.
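A sketch of the two ingredients just discussed: the discretised cost and a brute-force representative-subset selection. Both are illustrative stand-ins; the text does not specify the discretisation or the selection rule, and the genetic-algorithm search itself is omitted here.

```python
import numpy as np

def mpc_cost(e_A_traj, e_B_traj, dt, alpha=0.6):
    """Discretised cost J = integral of alpha|e_B| + (1-alpha)|e_A|,
    evaluated on predicted error trajectories (plain arrays; the
    prediction itself would come from simulating the model forward)."""
    integrand = alpha * np.abs(e_B_traj) + (1.0 - alpha) * np.abs(e_A_traj)
    return float(np.sum(integrand) * dt)

def representative_subset(states, r_A, r_B, n_sub):
    """Pick n_sub cells whose reduced ratios best match (r_A, r_B).
    Random search over candidate subsets -- a stand-in for whatever
    selection rule the actual implementation uses."""
    rng = np.random.default_rng(1)
    best, best_err = None, np.inf
    for _ in range(200):
        idx = rng.choice(len(states), size=n_sub, replace=False)
        sub = [states[i] for i in idx]
        nA = sum(t > 2 * l for l, t in sub) / n_sub
        nB = sum(l > 2 * t for l, t in sub) / n_sub
        err = abs(nA - r_A) + abs(nB - r_B)
        if err < best_err:
            best, best_err = sub, err
    return best
```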
In-silico validation and comparisons {#sec:simulations}
====================================
We tested all the designed control laws in-silico. First, we performed a batch of simulations in Matlab assuming a constant population of 30 cells, then we performed more accurate simulations using an agent-based simulator specifically designed for bacterial populations called BSim [@BSim].
In all simulations we consider a desired ratio $ r=0.6 $, with initial conditions taken in a neighborhood of the saddle point between **A** and **B**. Specifically, we picked the initial conditions for each cell from a uniform distribution over the intervals: $ {mRNA_\mathrm{LacI,0}^{i}}\in \left[3,6\right] $, $ {mRNA_\mathrm{TetR,0}^{i}}\in \left[3,6\right] $, $ {LacI}_{0}^{i}\in \left[150,300\right] $, ${TetR}_{0}^{i}\in \left[200,400\right]$. To mimic the real experimental constraints, the state of the population is sampled every $T_s = 5\,\mathrm{min}$ and the control inputs are updated every $T_c = 15\, \mathrm{min}$.
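The initial-condition sampling just described can be written compactly as follows; the dictionary keys are illustrative names, not identifiers from the actual simulation code.

```python
import numpy as np

def sample_initial_conditions(n_cells, rng):
    """Per-cell initial conditions drawn uniformly in the intervals
    quoted in the text (a neighborhood of the saddle between A and B)."""
    return {
        "mRNA_LacI": rng.uniform(3, 6, n_cells),
        "mRNA_TetR": rng.uniform(3, 6, n_cells),
        "LacI": rng.uniform(150, 300, n_cells),
        "TetR": rng.uniform(200, 400, n_cells),
    }

ics = sample_initial_conditions(30, np.random.default_rng(0))
```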
Numerical simulations in Matlab
-------------------------------
For the Bang-Bang controller we empirically set the control amplitudes to $U_\mathrm{a}=60$ and $U_\mathrm{p}=0.5$. With these values we obtained the evolution of errors and control inputs shown in panels (a) and (b) of Figure \[fig:nQS\]. We notice that the Bang-Bang controller achieves good performance with a settling time of about 1300 min.
For the PI controller, the control input amplitudes and control gains were empirically selected as $U_\mathrm{a}=100$, $U_\mathrm{p}=1$, $k_\mathrm{P,a}=66.67$, $k_\mathrm{P,p}=2.25$, $k_\mathrm{I,a}=1.2$ and $k_\mathrm{I,p}=0.006 $, obtaining the results portrayed in panels (c), (d) of Figure \[fig:nQS\]. In this case we observe a settling time that is half the one obtained with the Bang-Bang controller, together with lower error values at steady state.
Finally, for the MPC controller, we chose $[U_\mathrm{a}, U_\mathrm{p}] = [60, 0.5]$, $T_{p}=75\,\mathrm{min}$, $ T_{c}=T_{s}=15\,\mathrm{min} $, $\alpha=0.6$. The genetic algorithm’s parameters were set to $N_\mathrm{p}=20$, $ M_{\max}=10$, where $N_p$ is the length of the control sequence generated at each step and $M_{\max}$ the number of generations being considered.
The resulting errors and the control inputs are shown in panels (e) and (f) of Figure \[fig:nQS\], which confirm the MPC as the best strategy with a settling time almost $40 \%$ shorter than the one observed with the PI.
\
Agent-based simulations in BSim
-------------------------------
To provide a more realistic validation of our control strategies, we used BSim [@BSim], a realistic agent-based simulator of bacterial populations, which also accounts for cell reproduction, spatial distribution and geometry, the diffusion of the chemicals in the environment and, most importantly, the flush-out of cells from the chamber. To run the required stochastic simulations, we extended BSim with an Euler-Maruyama solver [@Stoc_sim].
As a reference for the geometry of the microfluidic device, we used a scaled version of the one described in [@danino2010synchronized], with dimensions $13.3\,\mu \mathrm{m}\times 16.6\,\mu \mathrm{m} \times 1\,\mu \mathrm{m}$, which can host a population of about $50$ cells.
We tested all the proposed controllers in BSim; snapshots of a typical simulation are shown in Figure \[fig:BSIM\_snap\], where cells fully expressing one of the two repressor genes are depicted either in red or in green.
The errors obtained via BSim *in-silico* experiments are reported in Figure \[fig:BSIM\_nQS\]. It can be noticed that the fluctuations of the error signals are higher than in the previous Matlab simulations essentially due to cell growth and splitting, and the flush-out of the cells from the chamber. However, the average error evolution is qualitatively the same, confirming the good performance of the controllers.
Performance Comparison
----------------------
To compare the performance of the controllers, we considered the following performance indices, evaluated by averaging over $M$ simulation trials: (i) the average value of the error norm over the total simulation time $T_\mathrm{sim}$, $
\bar{e} = \frac{1}{M} \sum_{j=1}^M \left( \frac{1}{T_\mathrm{sim}} \int_0^{T_\mathrm{sim}} \lVert e_j(t) \rVert dt \right),
$ and (ii) over the last $180\,\mathrm{min}$, $
\bar{e}_\mathrm{f} = \frac{1}{M} \sum_{j=1}^M \left( \frac{1}{180} \int_{T_\mathrm{sim}-180}^{T_\mathrm{sim}} \lVert e_j(t) \rVert dt \right),
$ where for the $j$-th trial $e_j(t) = [e_\mathrm{A}^j(t), \, e_\mathrm{B}^j(t)]^\top$,
and (iii) the average settling time at $15\%$ of the error, $\bar{t}_\mathrm{s}$.
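The first two indices can be computed from logged error trajectories as follows; trapezoidal integration is an implementation choice, and the settling-time index $\bar{t}_\mathrm{s}$ is omitted for brevity.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (written out to avoid NumPy version differences)."""
    dx = np.diff(x)
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * dx))

def performance_indices(error_trials, t, T_sim):
    """Average error norm over the whole run (e_bar) and over the last
    180 min (e_bar_f), averaged over the M trials.  error_trials is a
    list of arrays of shape (len(t), 2) holding [e_A(t), e_B(t)];
    t is the common time grid in minutes."""
    e_bars, e_bars_f = [], []
    for e in error_trials:
        norm = np.linalg.norm(e, axis=1)
        e_bars.append(_trapz(norm, t) / T_sim)
        mask = t >= T_sim - 180.0
        e_bars_f.append(_trapz(norm[mask], t[mask]) / 180.0)
    return float(np.mean(e_bars)), float(np.mean(e_bars_f))
```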
In Table \[tab:nearRatio\] we report the values of the previous indices considering $M=30$ for the simulations in Matlab and $M=1$ for those in BSim. It can be observed that the MPC algorithm guarantees excellent performance both in terms of settling time and steady-state error norm. Therefore, it is the best candidate for the [*in-vivo*]{} implementation that will be the next stage of our ongoing research. Nevertheless, the Bang-Bang controller offers a good compromise between ease of implementation and performance.
\[!t\]
Controller $\bar{e}$ $\bar{e}_\mathrm{f}$ $\bar{t}_{s}$ (min)
------------ ------------- ---------------------- ---------------------
Bang-Bang 0.20 (0.13) 0.07 (0.06) 1077 (1185)
PI 0.17 (0.28) 0.02 (0.05) 563 (1020)
MPC 0.13 (0.22) 0.02 (0.05) 329 (820)
: Performance indices of the proposed controllers evaluated using Matlab (BSim, respectively) simulations. []{data-label="tab:nearRatio"}
Conclusions
===========
We considered the ratiometric control problem in a mono-strain microbial consortium made of bacteria embedding a bistable toggle switch. We demonstrated that, by varying global inputs to all the cells, it is possible to control the ratio between those stabilizing onto one equilibrium and those on the other. Namely, we presented three control strategies to regulate the cells in the consortium to the desired ratio. The control design took into account the constraints of a possible experimental microfluidic implementation. We tested the performance of the controllers [in-silico]{} via both numerical and realistic agent-based simulations. In both cases, it emerged that the MPC algorithm guarantees excellent performance in terms of both the settling time and the steady-state error values. Future work will be aimed at validating *in-vivo* the proposed controllers and exploiting them for multicellular feedback control schemes such as the one described in [@fiore2016silico].
ACKNOWLEDGMENT {#acknowledgment .unnumbered}
==============
The authors wish to acknowledge support from the research project COSY-BIO (Control Engineering of Biological Systems for Reliable Synthetic Biology Applications) funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 766840.
[^1]: $^{1}$Davide Salzano, Davide Fiore and Mario di Bernardo are with the Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy. [[email protected], [email protected]]{}
[^2]: $^{2}$Mario di Bernardo is also with the Department of Engineering Mathematics, University of Bristol, University Walk, BS8 1TR Bristol, U.K. [[email protected]]{}
---
abstract: 'Scalar-field dark energy models like tachyon are often regarded as an effective description of an underlying theory of dark energy. In this Letter, we implement the interacting agegraphic dark energy models with tachyon field. We demonstrate that the interacting agegraphic evolution of the universe can be described completely by a single tachyon scalar field. We thus reconstruct the potential as well as the dynamics of the tachyon field according to the evolutionary behavior of interacting agegraphic dark energy.'
address: |
Department of Physics, Shahid Bahonar University, P.O. Box 76175, Kerman, Iran\
Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Maragha, Iran
author:
- 'Ahmad Sheykhi [^1]'
title: Interacting agegraphic tachyon model of dark energy
---
Introduction\[Int\]
===================
A great variety of cosmological observations, direct and indirect, reveal that our universe is currently experiencing a phase of accelerated expansion [@Rie]. The component that causes this cosmic acceleration is usually dubbed dark energy, and it constitutes a major puzzle of modern cosmology. The most obvious theoretical candidate for dark energy is the cosmological constant; however, it suffers from the so-called *fine-tuning* and *cosmic-coincidence* problems. Among the different candidates for probing the nature of dark energy, the holographic dark energy model has aroused a lot of enthusiasm recently [@Coh; @Li; @Huang; @Hsu; @HDE; @Setare1]. This model is motivated by the holographic hypothesis [@Suss1] and has been tested and constrained by various astronomical observations [@Xin; @Feng]. However, there are some difficulties in the holographic dark energy model. Choosing the event horizon of the universe as the length scale, the holographic dark energy gives the observed value of dark energy in the universe and can drive the universe to an accelerated expansion phase. But an obvious drawback concerning causality appears in this proposal. The event horizon is a global concept of spacetime; the existence of the event horizon of the universe depends on the future evolution of the universe; and the event horizon exists only for a universe with forever accelerated expansion. In addition, it has recently been argued that this proposal might be in contradiction with the age of some old high-redshift objects, unless a lower Hubble parameter is considered [@Wei0].
An interesting proposal to explore the nature of dark energy within the framework of quantum gravity is the so-called agegraphic dark energy (ADE). This model takes into account the Heisenberg uncertainty relation of quantum mechanics together with the gravitational effect in general relativity. The ADE model assumes that the observed dark energy comes from the spacetime and matter field fluctuations in the universe [@Cai1; @Wei2; @Wei1]. Since in the ADE model the age of the universe is chosen as the length measure, instead of the horizon distance, the causality problem of the holographic dark energy is avoided. The agegraphic models of dark energy have been examined and constrained by various astronomical observations [@age; @shey1; @Setare2]. Although going along a fundamental theory such as quantum gravity may provide a hopeful way towards understanding the nature of dark energy, it is hard to believe that the physical foundation of ADE is convincing enough. Indeed, it is fair to say that almost all dynamical dark energy models are settled at the phenomenological level; neither the holographic dark energy model nor the ADE model is an exception. Nevertheless, under such circumstances, the holographic and ADE models still have, to some extent, an advantage compared to other dynamical dark energy models, because at least they originate from some fundamental principles of quantum gravity.
On the other hand, among the various candidates to explain the accelerated expansion, the rolling tachyon condensates in a class of string theories may have interesting cosmological consequences. The tachyon is an unstable field which has become important in string theory through its role in the Dirac-Born-Infeld action used to describe the D-brane dynamics [@Sen1; @Sen2]. It has been shown [@Sen3] that the decay of D-branes produces a pressureless gas with finite energy density that resembles classical dust. The effective Lagrangian for the tachyon field is described by $$\begin{aligned}
L=-V(\phi)\sqrt{1-g^{\mu\nu}\partial_\mu \phi \partial_\nu \phi},
\end{aligned}$$ where $V(\phi)$ is the tachyon potential. The corresponding energy momentum tensor for the tachyon field can be written in a perfect fluid form $$\begin{aligned}
T_{\mu\nu}=(\rho_\phi+p_\phi)u_{\mu} u_\nu-p_\phi g_{\mu\nu},
\end{aligned}$$ where $\rho_\phi$ and $p_\phi$ are, respectively, the energy density and pressure of the tachyon and the velocity $u_\mu$ is $$\begin{aligned}
u_\mu=\frac{\partial_\mu \phi}{\sqrt{\partial_\nu \phi \partial^\nu
\phi}}.
\end{aligned}$$ A rolling tachyon has an interesting equation of state whose parameter smoothly interpolates between $-1$ and $0$ [@Gib1]. Thus, tachyon can be realized as a suitable candidate for the inflation at high energy [@Maz1] as well as a source of dark energy depending on the form of the tachyon potential [@Padm]. Therefore it becomes meaningful to reconstruct tachyon potential $V(\phi)$ from some dark energy models possessing some significant features of the quantum gravity theory, such as holographic and ADE models. It was demonstrated that dark energy driven by tachyon, decays to cold dark matter in the late accelerated universe and this phenomenon yields a solution to cosmic coincidence problem [@Sri]. The investigations on the reconstruction of the tachyon potential $V(\phi)$ in the framework of holographic dark energy have been carried out in [@Setare4]. In the absence of the interaction between ADE and dark matter, the connection between tachyon field and the new ADE model has also been established in [@agetach].
In the present Letter, we would like to extend the study to the case where both components, the pressureless dark matter and the ADE, do not conserve separately but interact with each other. Given the unknown nature of both dark matter and dark energy, there is nothing in principle against their mutual interaction, and it would seem very special if these two major components of the universe were entirely independent [@Setare3; @wang1; @shey2]. We shall establish a correspondence between the interacting ADE scenarios and the tachyon scalar field in a non-flat universe. Although it is believed that our universe is flat, a contribution to the Friedmann equation from spatial curvature is still possible if the number of e-foldings is not very large [@Huang]. Besides, some experimental data have implied that our universe is not perfectly flat, and recent papers have favored a universe with spatial curvature [@spe]. We suggest the agegraphic description of the tachyon dark energy in a universe with spatial curvature and reconstruct the potential and the dynamics of the tachyon scalar field which describe the tachyon cosmology. The plan of the work is as follows. In the next section we associate the original ADE with the tachyon field. In Section \[NEW\], we establish the correspondence between the new model of interacting ADE and the tachyon dark energy. The last section is devoted to conclusions.
Tachyon reconstruction of the ORIGINAL ADE \[ORI\]
==================================================
We consider the Friedmann-Robertson-Walker (FRW) universe which is described by the line element $$\begin{aligned}
ds^2=dt^2-a^2(t)\left(\frac{dr^2}{1-kr^2}+r^2d\Omega^2\right),\label{metric}
\end{aligned}$$ where $a(t)$ is the scale factor, and $k$ is the curvature parameter with $k = -1, 0, 1$ corresponding to open, flat, and closed universes, respectively. A closed universe with a small positive curvature ($\Omega_k\simeq0.01$) is compatible with observations [@spe]. The first Friedmann equation takes the form $$\begin{aligned}
\label{Fried}
H^2+\frac{k}{a^2}=\frac{1}{3m_p^2} \left( \rho_m+\rho_D \right).\end{aligned}$$ We define, as usual, the fractional energy densities such as $$\begin{aligned}
\label{Omega}
\Omega_m=\frac{\rho_m}{3m_p^2H^2}, \hspace{0.5cm}
\Omega_D=\frac{\rho_D}{3m_p^2H^2},\hspace{0.5cm}
\Omega_k=\frac{k}{H^2 a^2},\end{aligned}$$ thus, the Friedmann equation can be written $$\begin{aligned}
\label{Fried2}
\Omega_m+\Omega_D=1+\Omega_k.\end{aligned}$$ We adopt the viewpoint that the scalar field models of dark energy are effective theories of an underlying theory of dark energy. The energy density and pressure for the tachyon scalar field can be written as $$\begin{aligned}
\label{rhophi}
\rho_\phi&=&-T^0 _0=\frac{V(\phi)}{\sqrt{1-\dot{\phi}^2}},\\
p_\phi&=&T^i _i=-V(\phi)\sqrt{1-\dot{\phi}^2}. \label{pphi}\end{aligned}$$ Consequently the equation of state of the tachyon is given by $$\begin{aligned}
\label{wphi}
w_\phi=\frac{p_\phi}{\rho_\phi}=\dot{\phi}^2-1.\end{aligned}$$ From Eq. (\[wphi\]) we see that irrespective of the steepness of the tachyon potential, we have always $-1<w_\phi<0$. This implies that the tachyon field cannot realize the equation of state crossing $-1$. Next we intend to implement the interacting original ADE models with tachyon scalar field. Let us first review the origin of the ADE model. Following the line of quantum fluctuations of spacetime, Karolyhazy et al. [@Kar1] argued that the distance $t$ in Minkowski spacetime cannot be known to a better accuracy than $\delta{t}=\beta t_{p}^{2/3}t^{1/3}$ where $\beta$ is a dimensionless constant of order unity. Based on Karolyhazy relation, Maziashvili discussed that the energy density of metric fluctuations of the Minkowski spacetime is given by [@Maz] $$\label{rho0}
\rho_{D} \sim \frac{1}{t_{p}^2 t^2} \sim \frac{m^2_p}{t^2},$$ where $t_{p}$ is the reduced Planck time and $t$ is a proper time scale. In the original ADE model Cai [@Cai1] proposed the dark energy density of the form (\[rho0\]) where $t$ is chosen to be the age of the universe $$T=\int_0^a{\frac{da}{Ha}},$$ Thus, he wrote down the energy density of the original ADE as [@Cai1] $$\label{rho1}
\rho_{D}= \frac{3n^2 m_{p}^2}{T^2},$$ where the numerical factor $3n^2$ is introduced to parameterize some uncertainties, such as the species of quantum fields in the universe, the effect of curved space-time, and so on. The dark energy density (\[rho1\]) has the same form as the holographic dark energy, but the length measure is chosen to be the age of the universe instead of the horizon radius of the universe. Thus the causality problem in the holographic dark energy is avoided. Combining Eqs. (\[Omega\]) and (\[rho1\]), we get $$\begin{aligned}
\label{Omegaq}
\Omega_D=\frac{n^2}{H^2T^2}.\end{aligned}$$ The total energy density is $\rho=\rho_{m}+\rho_{D}$, where $\rho_{m}$ and $\rho_{D}$ are the energy density of dark matter and dark energy, respectively. The total energy density satisfies a conservation law $$\label{cons}
\dot{\rho}+3H(\rho+p)=0.$$ However, since we consider the interaction between dark matter and dark energy, $\rho_{m}$ and $\rho_{D}$ do not conserve separately; they must rather enter the energy balances $$\begin{aligned}
&&\dot{\rho}_m+3H\rho_m=Q, \label{consm}
\\&& \dot{\rho}_D+3H\rho_D(1+w_D)=-Q.\label{consq}\end{aligned}$$ Here $w_D$ is the equation of state parameter of ADE and $Q$ denotes the interaction term and can be taken as $Q =3b^2 H\rho$ with $b^2$ being a coupling constant [@Pav1]. Taking the derivative with respect to the cosmic time of Eq. (\[rho1\]) and using Eq. (\[Omegaq\]) we get $$\begin{aligned}
\label{rhodot}
\dot{\rho}_D=-2H\frac{\sqrt{\Omega_D}}{n}\rho_D.\end{aligned}$$ Inserting this relation into Eq. (\[consq\]), we obtain the equation of state parameter of the original ADE in non-flat universe $$\begin{aligned}
\label{wq}
w_D=-1+\frac{2}{3n}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k).\end{aligned}$$ Differentiating Eq. (\[Omegaq\]) and using relation ${\dot{\Omega}_D}={\Omega'_D}H$, we reach $$\begin{aligned}
\label{Omegaq2}
{\Omega'_D}=\Omega_D\left(-2\frac{\dot{H}}{H^2}-\frac{2}{n
}\sqrt{\Omega_D}\right),\end{aligned}$$ where the dot and the prime stand for the derivative with respect to the cosmic time and the derivative with respect to $x=\ln{a}$, respectively. Taking the derivative of both sides of the Friedmann equation (\[Fried\]) with respect to the cosmic time, and using Eqs. (\[Fried2\]), (\[rho1\]), (\[Omegaq\]) and (\[consm\]), it is easy to show that $$\begin{aligned}
\label{Hdot}
\frac{\dot{H}}{H^2}=-\frac{3}{2}(1-\Omega_D)-\frac{\Omega^{3/2}_D}{n}-\frac{\Omega_k}{2}
+\frac{3}{2}b^2(1+\Omega_k).\end{aligned}$$ Substituting this relation into Eq. (\[Omegaq2\]), we obtain the equation of motion for the original ADE $$\begin{aligned}
\label{Omegaq3}
{\Omega'_D}&=&\Omega_D\left[(1-\Omega_D)\left(3-\frac{2}{n}\sqrt{\Omega_D}\right)
-3b^2(1+\Omega_k)+\Omega_k\right].\end{aligned}$$ From the first Friedmann equation (\[Fried\]) as well as Eqs. (\[Fried2\]), (\[consm\]) and (\[consq\]), we obtain $$\begin{aligned}
\label{HH0}
H=H_0\sqrt{\frac{1+\Omega_{k_0}}{1+\Omega_k}}\
\exp\left[-\frac{3}{2}\int_{a_0}^{a}{(1+w_D)\frac{da}{a}}\right].\end{aligned}$$ Now we suggest a correspondence between the original ADE and the tachyon scalar field; namely, we identify $\rho_\phi$ with $\rho_D$. Using the relation $\rho_\phi=\rho_D={3m_p^2H^2}\Omega_D$ and Eqs. (\[rhophi\]), (\[wphi\]) and (\[wq\]), we can find $$\begin{aligned}
\label{vphi2}
V(\phi)&=&\rho_\phi\sqrt{1-\dot{\phi}^2}=3m^2_pH^2 \Omega_D\left(1-\frac{2}{3n}\sqrt{\Omega_D}+\frac{b^2}{\Omega_D}(1+\Omega_k)\right)^{1/2},\\
\dot{\phi}&=&\sqrt{1+w_D}=\left(\frac{2}{3n}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)\right)^{1/2}.\label{dotphi2}\end{aligned}$$ Using relation $\dot{\phi}=H{\phi'}$, we get $$\begin{aligned}
\label{primephi}
{\phi'}&=&H^{-1}\left(\frac{2}{3n}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)\right)^{1/2}.\end{aligned}$$ Consequently, we can easily obtain the evolutionary form of the tachyon field by integrating the above equation $$\begin{aligned}
\label{phi}
\phi(a)-\phi(a_0)=\int_{a_0}^{a}{\frac {1}{H
a}\sqrt{\frac{2}{3n}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)}\ da},\end{aligned}$$ where $a_0$ is the value of the scale factor at the present time $t_0$, $H$ is given by Eq. (\[HH0\]) and $\Omega_D$ can be extracted from Eq. (\[Omegaq3\]). The above equation can also be written in the following form $$\begin{aligned}
\label{phit}
\phi(t)-\phi(t_0)=\int_{t_0}^{t}{\sqrt{\frac{2}{3n}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)}\ dt'}.\end{aligned}$$ Therefore, we have established an interacting agegraphic tachyon dark energy model and reconstructed the potential and the dynamics of the tachyon field. It is worth noting that if one eliminates $\Omega_D$ between Eqs. (\[vphi2\]) and (\[phi\]), one can obtain $V=V(\phi)$. Unfortunately, this cannot be done analytically for the above general solutions. Let us consider, as an example, the matter-dominated epoch where $a\ll1$ and $\Omega_D
\ll1$. In this case Eq. (\[Omegaq3\]) with $\Omega_k \ll 1$ approximately becomes $$\begin{aligned}
\label{Omegaq32}
\frac{d\Omega_D}{da}\simeq \frac{\Omega_D}{a}
\left(3-\frac{2}{n}\sqrt{\Omega_D}-3b^2\right),\end{aligned}$$ Solving the above equation we find $$\begin{aligned}
\label{Omegaq33}
\Omega_D =\frac{9n^2}{4} (1-b^2)^2.\end{aligned}$$ Substituting this relation into Eq. (\[wq\]), we obtain $$\begin{aligned}
\label{wqq2}
w_D=-b^2 \left(1+\frac{4}{9n^2(1-b^2)^2}\right).\end{aligned}$$ In this case for $\Omega_k \ll 1$, Eq. (\[HH0\]) can be integrated. The result is $$\begin{aligned}
\label{H3}
H=H_0 a^{-3(1+w_D)/2}.\end{aligned}$$ Combining Eqs. (\[Omegaq33\]) and (\[H3\]) with (\[phi\]) we find $$\begin{aligned}
\label{phi4}
\phi=\frac{2a^{3(1+w_D)/2}}{3H_0 \sqrt{1+w_D}},\end{aligned}$$ up to a constant of integration. From this equation we get $$\begin{aligned}
\label{a1}
a= \left(\frac{9(1+w_D)\phi^2
H_0^2}{4}\right)^{\frac{1}{3(1+w_D)}}.\end{aligned}$$ Finally, combining Eqs. (\[Omegaq33\]), (\[wqq2\]), (\[a1\]) with Eq. (\[vphi2\]) we reach $$\begin{aligned}
\label{vv}
V(\phi)= \frac {-9\, m_p^{2} \left( -1+b^{2} \right)^{3} n^{3}\, b\, \sqrt{-9\,{n}^{2}+18\,{n}^{2}{b}^{2}-9\,{n}^{2}{b}^{4}-4}}{\left( -9\,{n}^{2}+27\,{n}^{2}{b}^{2}-27\,{n}^{2}{b}^{4}+9\,{n}^{2}{b}^{6}+4\,{b}^{2} \right) {\phi}^{2}}.\end{aligned}$$
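For readers who wish to trace the general (non-analytic) reconstruction numerically, a minimal sketch is given below: Eq. (\[Omegaq3\]) is integrated forward in $x=\ln a$ together with $\ln H$ from Eq. (\[Hdot\]), and $\phi$ from Eq. (\[primephi\]). A flat universe ($\Omega_k=0$), simple Euler stepping, and the parameter values are all assumptions made for illustration only, not fits to data.

```python
import numpy as np

def reconstruct_tachyon(n=3.0, b=0.1, H0=1.0, Om_D0=0.73,
                        x_span=(0.0, 3.0), steps=3000):
    """Forward-integrate Omega_D(x), ln H(x) and phi(x) for the
    original interacting ADE model with Omega_k = 0 (an assumption)."""
    x0, x1 = x_span
    dx = (x1 - x0) / steps
    Om, lnH, phi = Om_D0, np.log(H0), 0.0
    for _ in range(steps):
        # Eq. (Omegaq3) with Omega_k = 0
        dOm = Om * ((1 - Om) * (3 - (2 / n) * np.sqrt(Om)) - 3 * b**2)
        # Eq. (Hdot) with Omega_k = 0 gives d(ln H)/dx
        dlnH = -1.5 * (1 - Om) - Om**1.5 / n + 1.5 * b**2
        # kinetic term 1 + w_D from Eq. (dotphi2)
        kin = (2 / (3 * n)) * np.sqrt(Om) - b**2 / Om
        if kin > 0:                      # phi' is real only if w_D > -1
            phi += np.exp(-lnH) * np.sqrt(kin) * dx
        Om += dOm * dx
        lnH += dlnH * dx
        Om = min(max(Om, 1e-8), 1.0)     # keep Omega_D in its physical range
    return Om, np.exp(lnH), phi
```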
Tachyon reconstruction of the NEW ADE \[NEW\]
=============================================
To avoid some internal inconsistencies in the original ADE model, the so-called “new agegraphic dark energy” was proposed, in which the time scale is chosen to be the conformal time $\eta$ instead of the age of the universe [@Wei2]. The new ADE contains some new features different from the original ADE and overcomes some of its unsatisfactory points. For instance, the original ADE suffers from a difficulty in describing the matter-dominated epoch, while the new ADE resolves this issue [@Wei2]. The energy density of the new ADE can be written as $$\label{rho1new}
\rho_{D}= \frac{3n^2 m_{p}^2}{\eta^2},$$ where the conformal time $\eta$ is given by $$\eta=\int{\frac{dt}{a}}=\int_0^a{\frac{da}{Ha^2}}.$$ The fractional energy density of the new ADE is now expressed as $$\begin{aligned}
\label{Omegaqnew}
\Omega_D=\frac{n^2}{H^2\eta^2}.\end{aligned}$$ Taking the derivative with respect to the cosmic time of Eq. (\[rho1new\]) and using Eq. (\[Omegaqnew\]) we get $$\begin{aligned}
\label{rhodotnew}
\dot{\rho}_D=-2H\frac{\sqrt{\Omega_D}}{na}\rho_D.\end{aligned}$$ Inserting this relation into Eq. (\[consq\]) we obtain the equation of state parameter of the new ADE $$\begin{aligned}
\label{wqnew}
w_D=-1+\frac{2}{3na}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k).\end{aligned}$$ The evolution behavior of the new ADE is now given by $$\begin{aligned}
\label{Omegaq3new}
{\Omega'_D}&=&\Omega_D\left[(1-\Omega_D)\left(3-\frac{2}{na}\sqrt{\Omega_D}\right)
-3b^2(1+\Omega_k)+\Omega_k\right].\end{aligned}$$ Next, we reconstruct the new agegraphic tachyon dark energy model, connecting the tachyon scalar field with the new ADE. Using Eqs. (\[Omegaqnew\]) and (\[wqnew\]) one can easily show that the tachyon potential and kinetic energy term take the following form $$\begin{aligned}
\label{vphi2new}
V(\phi)&=&3m^2_pH^2 \Omega_D\left(1-\frac{2}{3na}\sqrt{\Omega_D}+\frac{b^2}{\Omega_D}(1+\Omega_k)\right)^{1/2},\\
\dot{\phi}&=&\left(\frac{2}{3na}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)\right)^{1/2}.\label{dotphi2new}\end{aligned}$$ We can also rewrite Eq. (\[dotphi2new\]) as $$\begin{aligned}
\label{primephinew}
{\phi'}&=&H^{-1}\left(\frac{2}{3na}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)\right)^{1/2}.\end{aligned}$$ Therefore the evolution behavior of the tachyon field can be obtained by integrating the above equation $$\begin{aligned}
\label{phinew}
\phi(a)-\phi(a_0)=\int_{a_0}^{a}{\frac {1}{H
a}\sqrt{\frac{2}{3na}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)}\ da},\end{aligned}$$ or in another way $$\begin{aligned}
\label{phitnew}
\phi(t)-\phi(t_0)=\int_{t_0}^{t}{\sqrt{\frac{2}{3na}\sqrt{\Omega_D}-\frac{b^2}{\Omega_D}
(1+\Omega_k)}\ dt'},\end{aligned}$$ where $\Omega_D$ is now governed by Eq. (\[Omegaq3new\]). In this way we connect the interacting new ADE with a tachyon field and reconstruct the potential and the dynamics of the tachyon field, which describe the tachyon cosmology.
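For completeness, the evolution equation for $\Omega_D$ can be integrated numerically. The sketch below (plain Python) does so for the non-interacting, flat case $b=0$, $\Omega_k=0$; the choices $n=3$, the starting scale factor, the matter-era initial condition $\Omega_D \simeq n^2 a^2/4$, and reading the prime as $d/d\ln a$ are illustrative assumptions, not values fixed by the text:

```python
import math

def omega_prime(x, omega, n):
    # dOmega/d(ln a) for the new ADE, flat case (Omega_k = 0), no interaction (b = 0)
    a = math.exp(x)
    return omega * (1.0 - omega) * (3.0 - 2.0 * math.sqrt(omega) / (n * a))

def evolve(n=3.0, a_start=0.01, a_end=100.0, steps=20000):
    x, x_end = math.log(a_start), math.log(a_end)
    h = (x_end - x) / steps
    omega = n**2 * a_start**2 / 4.0   # matter-era tracking behaviour Omega_D ~ n^2 a^2 / 4
    history = []
    for _ in range(steps):
        k1 = omega_prime(x, omega, n)                       # classical RK4 step
        k2 = omega_prime(x + h / 2, omega + h * k1 / 2, n)
        k3 = omega_prime(x + h / 2, omega + h * k2 / 2, n)
        k4 = omega_prime(x + h, omega + h * k3, n)
        omega += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        x += h
        a = math.exp(x)
        w = -1.0 + 2.0 * math.sqrt(omega) / (3.0 * n * a)   # equation of state, b = 0
        history.append((a, omega, w))
    return history

hist = evolve()
a_f, omega_f, w_f = hist[-1]
```

One recovers the limits implied by the formulas above: deep in the matter era the tracking solution gives $w_D \simeq -2/3$, while at late times $\Omega_D \to 1$ and $w_D \to -1$.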
Conclusions\[CONC\]
===================
Among the various candidates to play the role of dark energy, the tachyon has emerged as a possible source of dark energy for a particular class of potentials [@Padm]. In this Letter, we have associated the interacting ADE models with a tachyon field describing tachyon cosmology in a non-flat universe. The ADE models combine the Heisenberg uncertainty relation of quantum mechanics with the gravitational effect of general relativity. These models assume that the observed dark energy originates from spacetime and matter-field fluctuations in the universe. Therefore, agegraphic scenarios may possess some significant features of an underlying theory of dark energy. We have demonstrated that the agegraphic evolution of the universe can be completely described by a tachyon scalar field. We have adopted the viewpoint that scalar-field models of dark energy are effective theories of an underlying theory of dark energy; thus we should be able to use a scalar-field model to mimic the evolving behavior of the interacting ADE and to reconstruct that scalar-field model. We have reconstructed the potential and the dynamics of the tachyon scalar field according to the evolutionary behavior of the interacting agegraphic dark energy models.
[99]{} A.G. Riess, et al., Astron. J. 116 (1998) 1009;\
S. Perlmutter, et al., Astrophys. J. 517 (1999) 565;\
S. Perlmutter, et al., Astrophys. J. 598 (2003) 102;\
P. de Bernardis, et al., Nature 404 (2000) 955.
A. Cohen, D. Kaplan, A. Nelson, Phys. Rev. Lett. 82 (1999) 4971.
M. Li, Phys. Lett. B 603 (2004) 1. Q. G. Huang, M. Li, JCAP 0408 (2004) 013.
S. D. H. Hsu, Phys. Lett. B 594 (2004) 13.
E. Elizalde, S. Nojiri, S.D. Odintsov, P. Wang, Phys. Rev. D 71 (2005) 103504;\
B. Guberina, R. Horvat, H. Stefancic, JCAP 0505 (2005) 001;\
B. Guberina, R. Horvat, H. Nikolic, Phys. Lett. B 636 (2006) 80;\
H. Li, Z. K. Guo, Y. Z. Zhang, Int. J. Mod. Phys. D 15 (2006) 869;\
J. P. B. Almeida, J. G. Pereira, Phys. Lett. B 636 (2006) 75;\
Y. Gong, Phys. Rev. D 70 (2004) 064029;\
B. Wang, E. Abdalla, R. K. Su, Phys. Lett. B 611 (2005) 21.
M. R. Setare, S. Shafei, JCAP 09 (2006) 011;\
M. R. Setare, Phys. Lett. B 644 (2007) 99;\
M. R. Setare, E. C. Vagenas, Phys. Lett. B 666 (2008) 111;\
H. M. Sadjadi, arXiv:0902.2462.
G. ’t Hooft, gr-qc/9310026;\
L. Susskind, J. Math. Phys. 36 (1995) 6377.
X. Zhang, F. Q. Wu, Phys. Rev. D 72 (2005) 043524;\
X. Zhang, F. Q. Wu, Phys. Rev. D 76 (2007) 023502;\
Q. G. Huang, Y.G. Gong, JCAP 0408 (2004) 006;\
K. Enqvist, S. Hannestad, M. S. Sloth, JCAP 0502 (2005) 004.
B. Feng, X. Wang, X. Zhang, Phys. Lett. B 607 (2005) 35;\
H.C. Kao, W.L. Lee, F.L. Lin, Phys. Rev. D 71 (2005) 123518;\
J. Y. Shen, B. Wang, E. Abdalla, R. K. Su, Phys. Lett. B 609 (2005) 200.
H. Wei and S. N. Zhang, arXiv:0707.2129.
R. G. Cai, Phys. Lett. B 657 (2007) 228.
H. Wei and R. G. Cai, Phys. Lett. B 660 (2008) 113. H. Wei and R. G. Cai, Eur. Phys. J. C 59 (2009) 99.
H. Wei and R. G. Cai, Phys. Lett. B 663 (2008) 1;\
Y. W. Kim, et al., Mod. Phys. Lett. A 23 (2008) 3049;\
Y. Zhang, et al. arXiv:0708.1214;\
K. Y. Kim, H. W. Lee, Y. S. Myung, Phys.Lett. B 660 (2008) 118;\
X. Wu, et al., arXiv:0708.0349;\
I. P. Neupane, Phys. Lett. B 673 (2009) 111;\
J. Zhang, X. Zhang, H. Liu, Eur. Phys. J. C 54 (2008) 303;\
J .P Wu, D. Z. Ma, Y. Ling, Phys. Lett. B 663, (2008) 152.
A. Sheykhi, Phys. Lett. B 680 (2009) 113;\
A. Sheykhi, Int. J. Mod. Phys. D, in press;\
A. Sheykhi, arXiv:0909.0302;\
A. Sheykhi, arXiv:0908.1214. M. R. Setare, arXiv:0907.4910;\
M. R. Setare, arXiv:0908.0196.
A. Sen, JHEP 0204 (2002) 048;\
A. Sen, Mod. Phys. Lett. A 17 (2002) 1797.
A. Sen, JHEP 9910 (1999) 008;\
E.A. Bergshoeff, M. de Roo, T.C. de Wit, E. Eyras, S. Panda, JHEP 0005 (2000) 009;\
J. Kluson, Phys. Rev. D 62 (2000) 126003;\
D. Kutasov, V. Niarchos, Nucl. Phys. B 666 (2003) 56.
A. Sen, JHEP 0207 (2002) 065.
G. W. Gibbons, Phys. Lett. B 537 (2002) 1.
A. Mazumdar, S. Panda and A. Perez-Lorenzana, Nucl. Phys. B 614, 101 (2001);\
A. Feinstein, Phys. Rev. D 66, 063511 (2002);\
Y. S. Piao, R. G. Cai, X. M. Zhang and Y. Z. Zhang, Phys. Rev. D 66, 121301 (2002).
T. Padmanabhan, Phys. Rev. D 66, 021301 (2002);\
J.S. Bagla, H.K.Jassal, T. Padmanabhan, Phys. Rev. D 67 (2003) 063504;\
Z. K. Guo and Y. Z. Zhang, JCAP 0408, 010 (2004);\
E. J. Copeland, M. R. Garousi, M. Sami and S. Tsujikawa, Phys. Rev. D 71, 043003 (2005).
S.K. Srivastava arXiv:gr-qc/0409074.
M.R. Setare, J. Sadeghi, A.R. Amani, Phys. Lett. B 673 (2009) 241;\
M. R. Setare, Phys. Lett. B 653 (2007) 116.
J. Cui, L. Zhang, J. Zhang, and X. Zhang, arXiv:0902.0716.
M. R. Setare, Eur. Phys. J. C 50 (2007) 991;\
M. R. Setare, JCAP 0701 (2007) 023;\
M. R. Setare, Phys. Lett. B 654 (2007) 1;\
M. R. Setare, Phys. Lett. B 642 (2006) 421.
B. Wang, Y. Gong and E. Abdalla, Phys. Lett. B 624 (2005) 141;\
B. Wang, C. Y. Lin and E. Abdalla, Phys. Lett. B 637 (2005) 357;\
B. Wang, C. Y. Lin. D. Pavon and E. Abdalla, Phys. Lett. B 662 (2008) 1;\
W. Zimdahl and D. Pavon, Class. Quantum Grav. 24 (2007) 5461;\
D. Pavon and A. A. Sen, arXiv:0811.1446;\
H. Wei and R. G. Cai, Phys. Rev. D 71 (2005) 043504;\
H. Wei and R. G. Cai, Phys. Rev. D 72, 123507 (2005).
A. Sheykhi, Phys Lett B 681 (2009) 205. D. N. Spergel, Astrophys. J. Suppl. 148 (2003) 175;\
C. L. Bennett, et al., Astrophys. J. Suppl. 148 (2003) 1;\
U. Seljak, A. Slosar, P. McDonald, JCAP 0610 (2006) 014;\
D. N. Spergel, et al., Astrophys. J. Suppl. 170 (2007) 377.
F. Karolyhazy, Nuovo.Cim. A 42 (1966) 390;\
F. Karolyhazy, A. Frenkel and B. Lukacs, in *Physics as natural Philosophy*\
edited by A. Shimony and H. Feschbach, MIT Press, Cambridge, MA, (1982);\
F. Karolyhazy, A. Frenkel and B. Lukacs, in *Quantum Concepts in Space and Time*\
edited by R. Penrose and C.J. Isham, Clarendon Press, Oxford, (1986).
M. Maziashvili Int. J. Mod. Phys. D 16 (2007) 1531;\
M. Maziashvili, Phys. Lett. B 652 (2007) 165.
D. Pavon, W. Zimdahl, Phys. Lett. B 628 (2005) 206;\
N. Banerjee, D. Pavon, Phys. Lett. B 647 (2007) 477.
[^1]: [email protected]
---
abstract: 'In this note, we study the radius of a positively curved or non-negatively curved Alexandrov space with strictly convex boundary, with convexity measured by the Base-Angle defined by Alexander and Bishop. We also estimate the volume of the boundary of non-negatively curved spaces and treat the rigidity case, which can be thought of as a non-negatively curved version of a recent result of Grove-Petersen.'
address:
- 'Beijing International center for Mathematical Research, Peking University. Beijing 100871, China'
- 'School of Mathematical Science, Peking University. Beijing 100871, China'
author:
- Jian Ge
- Ronggang Li
bibliography:
- 'mybib.bib'
title: Radius Estimates for Alexandrov Space with Boundary
---
Introduction
============
Let $M^{n}$ be a closed $n$-dimensional Riemannian manifold with Ricci curvature bounded from below by $(n-1)$; then by the classical Bonnet-Myers theorem the diameter of $M$ has the upper bound ${\operatorname{diam}}(M)\le \pi$. For $X\in {\operatorname{Alex}}^{n}(1)$, i.e. an $n$-dimensional Alexandrov space with curvature bounded from below by $1$, we have the same diameter estimate ${\operatorname{diam}}(X)\le \pi$, by [@BGP1992]. The positive lower bound on the curvature is crucial here, since for any $X\in {\operatorname{Alex}}^{n-1}(0)$, the cylinder $X\times {\mathds{R}}\in {\operatorname{Alex}}^{n}(0)$ has infinite diameter. On the other hand, if the Ricci curvature of $M^{n}$ is nonnegative, and $\partial M$ is non-empty with mean curvature satisfying $H\ge (n-1)h>0$, one can still estimate the inner radius of $M$, i.e. the largest radius of a metric ball inscribed in the manifold: ${\operatorname{InRad}}(M)\le 1/h$, cf. [@Li2014]; see also [@Ge2015] for a unified treatment of all lower curvature bounds. In this case, one cannot estimate the diameter, as the solid cylinder with cross section a unit disc, $D^{n-1}\times {\mathds{R}}$, shows. For Alexandrov spaces, one expects that a similar estimate holds. First, the mean curvature assumption in [@Li2014] needs to be replaced by something meaningful for non-smooth spaces. This has been done by Alexander-Bishop in [@AB2010], where the authors defined a function called the Base-Angle at each foot point.
\[def:BA\] Let $X$ be an $n$-dimensional Alexandrov space with non-empty boundary $\partial X$. For $x\in \partial X$, the *base angle* at $x$ of a chord $\gamma$ of $X$ with endpoint $x$ is the angle formed by the direction of $\gamma$ and $\partial(\Sigma_x(X))$, where $\Sigma_{x}(X)$ is the space of directions of $X$ at $x$. We say the boundary $\partial X$ has extrinsic curvature $\ge A$ in the base-angle sense at $x$, or ${\operatorname{BA}}(x)\ge A$, if the base angle $\alpha$ at $x$ of a chord of length $r$ from $x$ satisfies $$\liminf_{r\to 0}\frac{2\alpha}{r}\ge A.$$
It can be verified that if $X$ is a Riemannian manifold with smooth boundary, a Base-Angle lower bound is equivalent to a lower bound on the principal curvatures of the boundary. We say the boundary $\partial X$ is $A$-convex if the base angle satisfies ${\operatorname{BA}}(x)\ge A$ at each foot point, written ${\operatorname{BA}}(\partial X)\ge A$. Recall that a point $x\in \partial X$ is called a [*foot point*]{} if there exists $y\in X\setminus \partial X$ such that $$\rho(y):=|y, \partial X| =|y, x|.$$ We use $|A, B|$ to denote the distance between subsets $A$ and $B$ in $X$. In [@AB2010] it is then proved, among other things, that the inner radius of $X\in {\operatorname{Alex}}^{n}(\kappa)$ with $A$-convex boundary $\partial X$ satisfies the expected estimate, see .
In this note, we are interested in the radius estimate for $X\in{\operatorname{Alex}}^{n}(\kappa)$ with $A$-convex boundary $\partial X$. Recall the radius of $X$ at $p$ is defined by $${\operatorname{Rad}}_{p}(X)=\sup\{|p, x|\ |\ x\in X\},$$ and the radius of $X$ is defined by $${\operatorname{Rad}}(X)=\inf_{p\in X}{\operatorname{Rad}}_{p}(X).$$
Now we state our main theorems
\[thm:k=0\] Let $X\in{\operatorname{Alex}}^{n} (0)$, with ${\operatorname{BA}}(\partial X)\ge A> 0$. We have: $${\operatorname{Rad}}(X)\leq \frac{1}{A},$$ with equality holds if and only if $X$ is isometric to the warped product $[0, A]\times_{t}\partial X$.
\[thm:k=1\] Let $X\in{\operatorname{Alex}}^{n} (1)$, with ${\operatorname{BA}}(\partial X)\ge A\geq 0$. We have: $${\operatorname{Rad}}(X)\leq\operatorname{\operatorname{arccot}}(A),$$ with equality holds if and only if $X$ is isometric to the warped product $[0, \operatorname{\operatorname{arccot}}(A)]\times_{\sin(t)}\partial X$.
As one can easily see, our upper bound on the radius is the same as the upper bound on the inner radius proved by Alexander-Bishop, but our theorem does not imply their estimate, since we use the inner radius estimate in our proof of the radius estimate. On the other hand, our result gives sharper estimates of the inner radius; in fact, we insert more terms between the inner radius and Alexander-Bishop's upper bound. See and for details.
Let $X\in{\operatorname{Alex}}^{n}(\kappa)$ with nonempty boundary $\partial X$. The *Boundary Conjecture* says that $\partial X$ equipped with the induced path metric is again an Alexandrov space with the same lower curvature bound $\kappa$. In particular, if $\kappa=1$, we expect $\partial X$ to have lower curvature bound $1$; thus it would follow from the Boundary Conjecture that ${\operatorname{diam}}(\partial X)\le \pi$ and ${\operatorname{Vol}}(\partial X)\le {\operatorname{Vol}}({\mathbf{S}}^{n-1})$, where ${\mathbf{S}}^{n-1}$ denotes the unit $(n-1)$-sphere. The volume upper bound for $\partial X$ was called Lytchak's Problem in [@Pet2007], and Petrunin proved it using the gradient exponential map. The rigidity result was proved only recently by Grove-Petersen [@GP2018]. In [@Ge2018], the first author estimated the volume of Alexandrov spaces with fixed boundary, where the convexity of the boundary can be thought of as positive curvature, just as the classical Gauss equation relates the intrinsic curvature of a submanifold to that of the ambient space via the second fundamental form. We therefore propose the following Boundary Conjecture for Alexandrov spaces with curved boundary:
Let $X\in {\operatorname{Alex}}^{n}(0)$ and ${\operatorname{BA}}(\partial X)\ge 1$, then $\partial X\in {\operatorname{Alex}}^{n-1}(1)$.
Our next theorem gives an evidence of this conjecture. Namely we get a solution to the Lytchak’s Problem for the non-negatively curved Alexandrov space with $1$-convex boundary, as well as a rigidity result parallel to the one in [@GP2018]:
\[thm:fill01\] Let $X\in {\operatorname{Alex}}^{n}(0)$ with $\partial X\neq\emptyset$. Suppose ${\operatorname{BA}}(\partial X)\ge 1$. Then $${\operatorname{Vol}}_{n-1}({\partial X})\leq {\operatorname{Vol}}_{n-1}\big({\mathbf{S}}^{n-1}(1)\big).$$ Moreover, if $\partial X$ is intrinsically isometric to ${\mathbf{S}}^{n-1}$, then $X$ is isometric to the unit disk in ${\mathds{R}}^{n}$.
Note that the classical positive mass theorem implies that the Euclidean ${\mathds{R}}^{n}$ admits no compact perturbation keeping the scalar curvature bounded below by $0$. On the other hand, there the boundary hypersurface is assumed to be smooth or to have a restricted type of singularity, cf. [@ST2002; @ST2018]. Our approach to this problem uses no assumption on the smoothness of the boundary at all. However, we require a much stronger curvature condition.\
Acknowledgment: We would like to thank Stephanie Alexander and Yuguang Shi for their interest in our work and helpful discussions.
Proofs of the Radius Estimates
==============================
One key ingredient of our proof is the following concavity estimates of the distance function $\rho(x)=|x, \partial X|$:
\[thm:AB\] Let $X\in {\operatorname{Alex}}^{n}(\kappa)$ and ${\operatorname{BA}}(\partial X)\ge A$. Let $${{\mathcal D}}=R(\kappa, A)-{\operatorname{dist_{\partial X}}}$$ where $R(\kappa, A)$ is the radius of the circle with geodesic curvature equal to $A$ in the $2$-dimensional space form of curvature $\kappa$. If $\kappa>-A^2$, then ${{\mathcal D}}$ is nonnegative, and the function $f={\operatorname{md_{\kappa}}}({{\mathcal D}})$ satisfies $$f''+\kappa f \geq 1$$ where ${\operatorname{md_{\kappa}}}(t)=\int_{0}^{t} \frac{1}{\sqrt{\kappa}}\sin(\sqrt{\kappa}s) ds$.
The non-negativity of ${{\mathcal D}}$ implies the *inner radius* estimate for $X$, i.e.
\[cor:inner\] Let $X$ and $R$ be as above; then the inner radius of $X$ satisfies $$a:=\max_{x\in X}\rho(x)\le R(\kappa, A).$$ In particular, $a\le \frac1A$ for $\kappa=0$ and $a\le \operatorname{\operatorname{arccot}}(A)$ for $\kappa=1$. Moreover, in the cases $\kappa=0, A>0$ and $\kappa=1, A\ge 0$, there is a unique point $s\in X$ realizing the maximum of $\rho$, which is called the *soul* of $X$.
First, we need to characterize the set of points at maximal distance from the soul $s\in X$.
\[lem:rad\] Let $X\in {\operatorname{Alex}}^{n}(\kappa)$ with $\kappa\ge 0$ and $\partial X\ne \varnothing$. Let $s$ be the soul of $X$. Then $${\operatorname{Rad}}_{s}(X)= \sup_{x\in \partial X}|s, x|=:b.$$
For any $y$ in the interior of $X$, let $q\in {\partial X}$ be a foot point such that $|y, \partial X| =|y, q|=:\beta$. Let $\gamma:[0,\beta]\rightarrow X$ be the unit-speed geodesic from $y$ to $q$. We have $$|\Uparrow_y^s,\gamma^{+}(0)|\geq \frac{\pi}{2},$$ since otherwise there would exist a geodesic $\alpha$ from $y$ to $s$ with $\alpha(0)=y$ and $|\alpha^+(0),\gamma^+(0)|< \frac{\pi}{2}$; then by the first variation formula $$\rho(\alpha(\epsilon))\leq|\alpha(\epsilon)q|<|yq|=\rho(y)$$ for $\epsilon$ small. Here the set $\Uparrow_{y}^{s}$ consists of the initial directions of all unit-speed geodesics from $y$ to $s$.
On the other hand, $\rho$ is a concave function whose maximum is achieved at $s$, so $\rho(\alpha(\cdot))$ is monotone, and therefore $\rho(s)<\rho(y)$. Hence a contradiction.
Since for every $t\in[0,\beta]$, $q$ is the foot point achieving the distance from $\gamma(t)$ to $\partial X$, we have $$|\Uparrow_{\gamma(t)}^s,\gamma'(t)|\geq\frac{\pi}{2},$$ by replacing the $y$ above with $\gamma(t)$, $t\in[0,\beta]$. Therefore the first variation formula tells us that $|\gamma(t),s|$ is increasing along $\gamma(t)$. It follows that $$|qs| \geq |ys|.$$ Therefore the conclusion holds.
It can be shown that ${\operatorname{Rad}}_{s}(X)$ can only be achieved by geodesics from $s$ to points on $\partial X$. In fact, if $|\Uparrow_{\gamma(t)}^s, \gamma'(t)|>\frac{\pi}{2}$ for some $t\in[0,\beta]$, then we have the strict inequality $|q, s|>|y, s|$. Therefore, if there were an interior point $y\notin\partial X$ satisfying $|s, y|={\operatorname{Rad}}_{s}(X)$, it would follow that $|\Uparrow_{\gamma(t)}^s,\gamma'(t)|\equiv\frac{\pi}{2}$ for all $t\in[0,\beta]$. In particular, the equality holds at $q=\gamma(\beta)$, so $\uparrow_q^s\in\partial\Sigma_{q}X$. By the convexity of $\partial X$, we then have $s\in\partial X$. Hence a contradiction.
The following elementary comparison result for ODEs is needed.
\[lem:ode\] For any $\kappa\geq 0$, let $f$ and $\tilde{f}$ be real functions on $[0,\infty)$ satisfying: $$\begin{aligned}
f''+\kappa f&\geq 1 \\
\tilde{f}''+\kappa \tilde{f}&=1\end{aligned}$$ respectively, with $0\leq f(0)<\frac{1}{\kappa}$ (if $\kappa=0$, we set $\frac{1}{\kappa}=\infty$), $f(0)=\tilde{f}(0)$, and $f'(0)=\tilde{f}'(0)$. Then $$f(t)\geq \tilde{f}(t) \qquad\forall t \in [0,t_{0}]$$ where $t_0$ is the first zero of $\frac{1}{\kappa}-\widetilde{f}$.
For the case $\kappa>0$, let $w=\frac{1}{\kappa}-f$ and $\tilde{w}=\frac{1}{\kappa}-\tilde{f}$. Then $w(0)=\tilde{w}(0)>0$, and the differential relations for $f$ and $\tilde{f}$ give $$\begin{aligned}
w''+\kappa w&\leq 0\\
\tilde{w}''+\kappa \tilde{w} &=0.\end{aligned}$$ Then $$\begin{aligned}
&\qquad w''(t)\widetilde{w}(t)-\widetilde{w}''(t)w(t)\leq 0 \\
&\Longleftrightarrow \big(w'(t)\widetilde{w}(t)-\widetilde{w}'(t)w(t)\big)'\leq 0 \\
&\Longrightarrow\big(w'(t)\widetilde{w}(t)-\widetilde{w}'(t)w(t)\big) \leq \big(w'(0)\widetilde{w}(0)-\widetilde{w}'(0)w(0)\big)=0 \\
&\Longrightarrow (\frac{\widetilde{w}}{w})'(t)\geq 0 \qquad \text{whenever $w(t)>0$} \\
&\quad\; \; \; \; (\frac{w}{\widetilde{w}})'(t)\leq 0 \qquad \text{whenever $\widetilde{w}(t)>0$} \\
&\Longrightarrow w(t)\leq\widetilde{w}(t) \qquad \text{whenever $\widetilde{w}(t)\geq 0$}\\
&\Longleftrightarrow \frac{1}{\kappa}-f(t)\leq\frac{1}{\kappa}-\widetilde{f}(t) \qquad \text{whenever $\frac{1}{\kappa}-\widetilde{f}(t)\geq 0$}\\
&\Longleftrightarrow f(t)\geq\widetilde{f}(t) \qquad \text{whenever $\frac{1}{\kappa}-\widetilde{f}(t)\geq 0$}.\\\end{aligned}$$
In the case $\kappa=0$, it is easy to see that $f''-\tilde{f}''\geq 0$, thus $f'-\tilde{f}'\geq 0$ since $f'(0)=\tilde{f}'(0)$; then $f(t)\geq\tilde{f}(t)$ follows from $f(0)=\tilde{f}(0)$.
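The comparison in the lemma is easy to test numerically: take any $f$ with $f''+\kappa f = 1+\varepsilon \ge 1$ and the same initial data as $\tilde f$, and check $f \ge \tilde f$ up to the first zero of $1/\kappa - \tilde f$. A sketch (plain Python; the values of $\kappa$, $\varepsilon$ and the initial data are illustrative):

```python
import math

def integrate(rhs, y0, v0, t_max, steps=4000):
    # Classical RK4 for the second-order ODE y'' = rhs(y); returns sampled (t, y)
    h = t_max / steps
    t, y, v = 0.0, y0, v0
    out = [(t, y)]
    for _ in range(steps):
        k1y, k1v = v, rhs(y)
        k2y, k2v = v + h * k1v / 2, rhs(y + h * k1y / 2)
        k3y, k3v = v + h * k2v / 2, rhs(y + h * k2y / 2)
        k4y, k4v = v + h * k3v, rhs(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += h
        out.append((t, y))
    return out

kappa, eps = 1.0, 0.5
f0, df0 = 0.2, 0.1                       # 0 <= f(0) < 1/kappa, as in the lemma
f  = integrate(lambda y: 1.0 + eps - kappa * y, f0, df0, 3.0)   # f'' + kappa f = 1 + eps
ft = integrate(lambda y: 1.0 - kappa * y,       f0, df0, 3.0)   # f~'' + kappa f~ = 1

for (t, yf), (_, yt) in zip(f, ft):
    if 1.0 / kappa - yt < 0.0:           # stop at the first zero of 1/kappa - f~
        break
    assert yf >= yt - 1e-9               # comparison f >= f~ holds on [0, t0]
```

Here the two solutions are $f=1.5-1.3\cos t+0.1\sin t$ and $\tilde f=1-0.8\cos t+0.1\sin t$, so $f-\tilde f=\tfrac12(1-\cos t)\ge 0$, as the lemma predicts.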
The and are in fact easy corollaries of the following theorems, where we insert one more term between the inner radius estimates of . As one can easily see, $$a\le {\operatorname{Rad}}(X)\le b,$$ where $a=\max_{x\in X} \rho(x)$ and $b={\operatorname{Rad}}_{s}(X)$. We have:
\[thm:b:k=1\] If $X\in{\operatorname{Alex}}^{n} (1)$ and ${\operatorname{BA}}(\partial X)\ge A\geq 0$. Then $${\operatorname{Rad}}(X)\leq b\le \arccos \left (\frac{A}{A\cos a+\sin a}\right ).$$
Set $\ell =R(1,A)=\operatorname{\operatorname{arccot}}(A)$. Let $\gamma(t)$ be a geodesic of length $b$ with $\gamma(0)=s$ and $\gamma(b)\in \partial X$. Then ${\operatorname{dist_{\partial X}}}(\gamma(0))=a\leq \ell$. Let $$h(t)=\rho(\gamma(t)).$$ Then $h$ satisfies $$h(0)=a,\ h(b)=0, \ h'(0)=-\cos\alpha_{0},$$ where $\alpha_{0}=|\gamma'(0),\Uparrow_{s}^{{\partial X}}|$. Since $s$ is the critical point for the distance function $\rho$, we have $\alpha_{0}\leq \frac{\pi}{2}.$ Define $$\begin{aligned}
f(t)=1-\cos(\ell-h(t)), \qquad t\in[0,b].\end{aligned}$$ Since we are working in the case $\kappa=1$, ${\operatorname{md_{\kappa}}}(x)=1-\cos x$. Therefore the function $f$ satisfies the differential inequality $$f''+f\geq 1.$$ Let $$\widetilde{f}(t)=1-\cos(\ell-a)\cos t+\sin(\ell-a)\sin t\cos\alpha_0.$$ Then one verifies easily: $$\begin{aligned}
\widetilde{f}(0)&=1-\cos(\ell-a)=f(0) \\
\widetilde{f}'(0)&=\sin(\ell-a)\cos\alpha_{0}=f'(0)\end{aligned}$$ and $$\widetilde{f}''(t)+\widetilde{f}(t)=1.$$ It follows that $$f(t)\geq\widetilde{f}(t)$$ for $t\le t_{0}$, where $t_{0}>0$ is the first zero of $1-\tilde f$, by . In particular, when $t=b\leq t_0$, we have $$\begin{aligned}
&\qquad\cos \ell\leq \cos(\ell-a)\cos b-\sin(\ell-a)\sin b\cos\alpha_{0} \\
&\Longrightarrow\cos \ell\leq\cos(\ell-a)\cos b \\
&\Longleftrightarrow\frac{\cos \ell}{\cos(\ell-a)}\leq \cos b \\
&\Longleftrightarrow \frac{A}{A\cos a+\sin a}\leq \cos b \\
&\Longleftrightarrow b\leq \arccos \left( \frac{A}{A\cos a+\sin a} \right)\end{aligned}$$
One observes that $$\frac{A}{A\cos a+\sin a}\geq \frac{A}{\sqrt{1+A^2}},$$ since $a\leq \ell\leq \frac{\pi}{2}$. We then have $$b\leq \arccos \left( \frac{A}{A\cos a+\sin a} \right)\le \arccos \frac{A}{\sqrt{1+A^2}},$$ that is, $b\leq \operatorname{\operatorname{arccot}}(A)$.
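The simplification used in the proof, $\cos\ell/\cos(\ell-a) = A/(A\cos a+\sin a)$ for $\ell=\operatorname{arccot}(A)$, follows from $\cos(\ell-a)=\cos\ell\cos a+\sin\ell\sin a$ together with $\cos\ell = A/\sqrt{1+A^2}$ and $\sin\ell = 1/\sqrt{1+A^2}$. A quick numerical check (plain Python; the sampled values of $A$ and $a$ are illustrative):

```python
import math

for A in (0.3, 1.0, 2.5):
    ell = math.atan2(1.0, A)            # arccot(A), in (0, pi/2) for A > 0
    for a in (0.1 * ell, 0.5 * ell, 0.9 * ell):   # inner radius a <= ell
        lhs = math.cos(ell) / math.cos(ell - a)
        rhs = A / (A * math.cos(a) + math.sin(a))
        assert abs(lhs - rhs) < 1e-12
        # the resulting radius bound never exceeds Alexander-Bishop's arccot(A)
        assert math.acos(rhs) <= ell + 1e-12
```

The last assertion is the numerical counterpart of the inequality $A/(A\cos a+\sin a)\ge A/\sqrt{1+A^2}$ noted above.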
Now we move to the discussion on the case $\kappa =0$.
\[thm:b:k=0\] Let $X\in{\operatorname{Alex}}^{n} (0)$ and ${\operatorname{BA}}(\partial X)\ge A> 0$. Let $a$ be the inner radius of $X$. We have: $${\operatorname{Rad}}(X)\leq b\le \sqrt{2\frac{a}{A}-a^2}$$
Let $\ell=R(0,A)=\frac{1}{A}$. Suppose $\gamma(t)$ is a geodesic of length $b$, with $\gamma(0)=s$ and $\gamma(b)\in{\partial X}$. Therefore ${\operatorname{dist_{\partial X}}}(\gamma(0))=a\leq \frac{1}{A}$ by . Let $$h(t)={\operatorname{dist_{\partial X}}}(\gamma(t)),$$ then $$h(0)=a, \ h(b)=0, \ -h'(0)=\cos\alpha_{0}$$ where $\alpha_{0}=|\gamma'(0),\Uparrow_{s}^{{\partial X}}|$. Since $s$ is the critical point for $\rho$, we know $\alpha_{0}\leq \frac{\pi}{2}.$ In this case, $${\operatorname{md_{\kappa}}}(x)=\frac{x^2}{2},$$ thus $$\begin{aligned}
f(t)=\frac{(\frac{1}{A}-h(t))^2}{2}, \qquad 0\leq h(t)\leq\frac{1}{A}\end{aligned}$$ satisfying $$f''\geq 1.$$ Let $$\widetilde{f}(t)=
\frac{t^2+2(\frac{1}{A}-a)\cos\alpha_{0} t+(\frac{1}{A}-a)^2}{2}$$ then $$\begin{aligned}
\widetilde{f}(0)&=f(0)=\frac{(\frac{1}{A}-a)^2}{2} \\
\widetilde{f}'(0)&=f'(0)=(\frac{1}{A}-a)\cos\alpha_{0}\end{aligned}$$ and $$\widetilde{f}''(t)=1$$ It follows that $$f(t)\geq\widetilde{f}(t).$$ Therefore, when $t=b$, we have $$\begin{aligned}
&\qquad b^2+2(\frac{1}{A}-a)\cos\alpha_{0} b+(a^2-2\frac{a}{A})\leq 0\\
&\Longrightarrow b\leq \sqrt{2\frac{a}{A}-a^2}\end{aligned}$$
By and , $a\leq \frac1A$, and hence $$b\le \sqrt{2\frac{a}{A}-a^2} \leq \frac{1}{A}.$$ Thus the conclusion follows.
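Numerically, one can check that the largest root of the quadratic $t^2+2(\frac1A-a)\cos\alpha_0\, t+(a^2-\frac{2a}{A})$ is maximal at $\alpha_0=\pi/2$, where it equals $\sqrt{\frac{2a}{A}-a^2}$, and that this bound never exceeds $\frac1A$ (which is just $(\frac1A-a)^2\ge 0$). A sketch (plain Python; the sampled values of $A$, $a$, $\alpha_0$ are illustrative):

```python
import math

def largest_root(A, a, alpha):
    # larger root of t^2 + 2(1/A - a) cos(alpha) t + (a^2 - 2a/A) = 0
    p = (1.0 / A - a) * math.cos(alpha)
    q = a**2 - 2.0 * a / A
    return -p + math.sqrt(p**2 - q)

for A in (0.5, 1.0, 2.0):
    for a in (0.2 / A, 0.6 / A, 0.95 / A):          # inner radius a <= 1/A
        bound = math.sqrt(2.0 * a / A - a**2)
        for alpha in (0.0, 0.3 * math.pi, 0.5 * math.pi):
            assert largest_root(A, a, alpha) <= bound + 1e-12
        assert bound <= 1.0 / A + 1e-12             # and the bound is at most 1/A
```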
Discussion of the Equality Cases
================================
In this section we discuss the various equality cases in the estimates below. Recall that the inner radius $a:=\max_{x\in X}\rho(x)$ and $b:=\max_{x\in \partial X}|x, s|$. By the previous theorems we have, for the case $\kappa=0, A>0$: $$\label{eq:equality:0}
a\le {\operatorname{Rad}}(X)\le b\le \sqrt{2\frac aA-a^{2}}\le \frac 1A;$$ and for the case $\kappa=1, A>0$: $$\label{eq:equality:1}
a \le {\operatorname{Rad}}(X)\le b\le \arccos\left( \frac{A}{A\cos a+\sin a}\right)\le \operatorname{\operatorname{arccot}}(A).$$ For simplicity, we will refer to the terms in and as ${\tikz[baseline=(char.base)]{\node[shape=circle,draw,inner sep=2pt] (char) {1};}}$ to ${\tikz[baseline=(char.base)]{\node[shape=circle,draw,inner sep=2pt] (char) {5};}}$ from left to right.
\[prop:rigidty15\] The equality $a=\frac1A$ in (resp. $a=\operatorname{\operatorname{arccot}}(A)$ in ) implies that the space $X$ is isometric to the cone $[0, a]\times_{t} \partial X$ (resp. $[0, a]\times_{\sin t}\partial X$).
As one can see in our proofs of and , the same type of rigidity holds:
\[prop:rigidty25\] The equality ${\operatorname{Rad}}(X)=\frac1A$ in (resp. ${\operatorname{Rad}}(X)=\operatorname{\operatorname{arccot}}(A)$ in ) implies that the space $X$ is isometric to the cone $[0, a]\times_{t} \partial X$ (resp. $[0, a]\times_{\sin t}\partial X$).
The following example shows that the class of spaces satisfying ${\operatorname{Rad}}(X)=b=\sqrt{2\frac aA-a^{2}}$ or ${\operatorname{Rad}}(X)=b=\arccos\left( \frac{A}{A\cos a+\sin a}\right)$ is very large.
\[ex:cutthetip\] We construct $X\in{\operatorname{Alex}}^{3} (0)$ with ${\operatorname{BA}}(\partial X)\ge A\geq 0$ in the Euclidean space ${\mathds{R}}^3$ as the intersection of three balls centered at $(\frac{1}{A}-a,0,0),$ $-(\frac{1}{A}-a,0,0),$ and $(0,\sqrt{2\frac{a}{A}-a^2}-\epsilon-\frac{1}{A},0)$ respectively, where $a<\frac{1}{A}$ and $\epsilon<\sqrt{2\frac{a}{A}-a^2}-a$. The soul of $X$ is the origin of ${\mathds{R}}^3$; the inner radius of $X$ is $a$, while the radius is $\sqrt{2\frac{a}{A}-a^2}$, which is also the distance from the soul to the boundary of $X$. A similar example in ${\operatorname{Alex}}^{3}(1)$ can be constructed easily.
\[prop:rigidty14\] The equality $a=\sqrt{2\frac aA-a^{2}}$ in (resp. $a=\operatorname{\operatorname{arccot}}(A)$ in ) is equivalent to the equality in \[prop:rigidty15\]; thus the space $X$ is isometric to the cone $[0, a]\times_{t} \partial X$ (resp. $[0, a]\times_{\sin t}\partial X$).
The case $\kappa=1, A=0$ contains all positively curved Alexandrov spaces with boundary. The upper bound in is $\pi/2$. In this case, the following rigidity theorem was proved by Grove and Petersen:
\[prop:PositiveRigidity\] Let $X\in {\operatorname{Alex}}^{n}(1)$ and $\partial X$ is intrinsically isometric to ${\mathbf{S}}^{n-1}$. Then $X$ is isometric to the lens $L_{\alpha}^{n}=[0, \alpha]*{\mathbf{S}}^{n-2}$ for some $0<\alpha\le \pi$, where $*$ is the spherical join.
The Filling of Round Sphere
===========================
In this section, we prove . The volume estimate uses the same idea as Petrunin's in [@Pet2007]; we include it only for completeness. The rigidity part uses our discussion of the equality cases in the previous section.
For $X\in {\operatorname{Alex}}^{n}(0)$ with non-empty boundary, the distance function to the boundary is concave in $X$. Thus the gradient exponential map ${\operatorname{gexp}}_s$ maps $\overline B_b(o_s)$ onto $X$. Moreover, ${\operatorname{gexp}}_s$ also gives a homotopy equivalence between $\partial B_b(o_s)=\Sigma_{s}$ and $X\setminus \{s\}$, which is homotopy equivalent to $\partial X$, since the soul $s$ is the only critical point of the distance function to $\partial X$. Since $\Sigma_{s}$ is a compact Alexandrov space without boundary, we have $H_{n-1}({\partial X},{\mathds{Z}}_2) \neq 0$. Hence for every $x\in {\partial X}$, the geodesic $sx$ must contain a point of ${\operatorname{gexp}}_s\big(\partial\overline B_b(o_s)\big)$. Since the inverse ${\operatorname{gexp}}_s^{-1}$ is uniquely defined along any geodesic starting at $s$, this point can only be $x$, as ${\operatorname{gexp}}_s$ is a short map. Thus $${\partial X}\subset {\operatorname{gexp}}_s(\partial B_b(o_s)).$$ On the other hand, since the gradient exponential map is distance non-increasing, we have $$\begin{aligned}
{\operatorname{Vol}}_{n-1}({\partial X})&\leq {\operatorname{Vol}}_{n-1}\big({\operatorname{gexp}}_s(\partial B_b(o_s)) \big)\\
&\leq {\operatorname{Vol}}_{n-1}\big(\partial B_b(o_s)\big) \\
&\leq {\operatorname{Vol}}_{n-1}\big({\mathbf{S}}^{n-1}(b)\big) \\
&\leq {\operatorname{Vol}}_{n-1}\big({\mathbf{S}}^{n-1}(1)\big).\end{aligned}$$ If $\partial X$ is intrinsically isometric to ${\mathbf{S}}^{n-1}$, the previous inequalities imply $b=1$. Recalling that $$b\leq\sqrt{2a-a^2}\leq 1,$$ we get $a=1$; thus by Corollary 1.10 in [@AB2010], $X$ is isometric to the ball of radius $1$ about the vertex in a $0$-cone over its boundary. Since $\partial X$ is isometric to ${\mathbf{S}}^{n-1}$, this cone is ${\mathds{R}}^{n}$, and therefore the conclusion holds.
---
abstract: 'We present $UBVRI$ photometry of stars in the field of the intermediate-age open cluster NGC559. By determining the stellar membership probabilities derived through a photometric and kinematic study of the cluster, we identify the 22 most probable cluster members. These are used to obtain robust cluster parameters. The mean proper motion of the cluster is $\mu_x = -3.29\pm0.35$, $\mu_y = -1.24\pm0.28$ mas yr$^{-1}$. The radial distribution of the stellar surface density gives a cluster radius of $4'.5\pm0'.2$ (3.2$\pm$0.2 pc). By fitting solar-metallicity stellar isochrones to the colour-colour and colour-magnitude diagrams, we find a uniform cluster reddening of $E(B-V) = 0.82\pm0.02$. The cluster has an age of $224\pm25$ Myr and is at a distance of $2.43\pm0.23$ kpc. From the optical and near-infrared two-colour diagrams, we obtain colour excesses in the direction of the cluster of $E(V-K) = 2.14\pm0.02$, $E(J-K) = 0.37\pm0.01$, and $E(B-V)= 0.76\pm0.04$. A total-to-selective extinction of $R_V=3.5\pm0.1$ is found in the direction of the cluster, which is marginally higher than the normal value. We derive the luminosity function and the mass function for the cluster main sequence. The mass function slope is found to be $-2.12\pm0.31$. We find evidence of mass segregation in this dynamically relaxed cluster.'
author:
- |
Y. C. Joshi$^{1}$[^1], L. A. Balona$^{2}$, S. Joshi$^{1}$, B. Kumar$^{1}$,\
$^{1}$Aryabhatta Research Institute of Observational Sciences (ARIES), Manora peak, Nainital, India\
$^{2}$South African Astronomical Observatory, PO Box 9, Observatory 7935, Cape Town, South Africa\
date: Accepted 07 October 2013 Received 22 July 2013
title: 'A photometric study of the Open Cluster II: Stellar population and dynamical evolution in NGC559'
---
\[firstpage\]
open cluster: individual: NGC559 – stars: formation – stars: luminosity function, mass function – techniques: photometric
INTRODUCTION {#sec:intro}
============
Systematic photometric studies of Galactic open star clusters (OCs) offer unique opportunities to understand large-scale star formation processes in the Galaxy and in Galactic clusters (Lada 2003). The precise knowledge of cluster parameters such as age, distance, reddening and chemical composition as well as knowledge of the stellar population distribution and the cluster mass function at the time of star formation play a key role in understanding the star formation history. The importance of photometric studies of OCs lies in the colour-colour and colour-magnitude diagrams derived through multi-band photometric observations. Since most of the OCs are embedded in the Galactic disk and are likely to be affected by field star contamination, it is essential to discriminate between members and non-members of the clusters. The amount of field star contamination depends on the location of the cluster. It is necessary to perform a detailed membership analysis of the stars found within the observed field for a robust investigation of cluster properties (Carraro et al. 2008, Yadav et al. 2008). For most of the OCs, kinematical data is unavailable. However, recent all-sky proper motion catalogues (e.g., Roeser et al. 2010, Zacharias et al. 2013), provide clues to determine cluster membership. Together with a photometric study of the cluster, it becomes possible to draw some conclusions regarding the dynamical evolution of the cluster.
At ARIES, Nainital, we have been carrying out a long-term observational program to search and characterize the variable stars in Galactic open star clusters using various 1- to 2-m class telescopes in India. The advantage of having such observations is that they can also be used to study the physical properties of the clusters and their stellar and dynamical evolution. In Joshi et al. (2012), we performed a photometric study of the intermediate age open cluster, NGC6866, which also included a search for variable stars in the cluster. The results presented here for NGC559 are a continuation of our efforts to understand star formation in some unstudied or poorly studied young- and intermediate-age open clusters.
NGC559 (RA = 01:29:35, DEC = +63:18:14; $l = 127^\circ.2, b = +0^\circ.75$) is a moderately populated and heavily reddened intermediate-age open cluster, classified as type I$2m$ by Trumpler (1930) and II$2m$ by Ruprecht (1966). It is located in the direction of the second Galactic quadrant in the vicinity of the Perseus and Local arms (Russeil et al. 2007). Photoelectric photometry of the cluster was obtained by Lindoff (1969) and Jennens & Helfer (1975), while Grubissich (1975) provided photographic photometry of cluster stars. Subsequent investigations using CCD photometry were carried out by Ann & Lee (2002, hereafter AL02) and Maciejewski & Niedzielski (2007, hereafter MN07). However, a complete $UBVRI$ study is still lacking. Moreover, there has not been any systematic attempt to identify cluster members in the field of this cluster.
The main focus of the present study is to accurately determine the fundamental parameters of NGC559 by identifying cluster members using photometric and kinematic criteria. The outline of the paper is as follows. A photometric study of the cluster is presented in Section 2. The cluster properties are discussed in Section 3 and the fundamental parameters are derived in Section 4. The dynamical study of the cluster is presented in Section 5. Finally, we discuss the results in Section 6.
Photometric study of the cluster {#sec:phot}
================================
Observations and Calibration {#sec:photcal}
----------------------------
Johnson-Cousins $UBVRI$ photometry of stars in the field of NGC559 was obtained on 2010 November 30 using the 1-m Sampurnanand telescope at Nainital, India. The telescope is equipped with a $2k\times2k$ CCD camera which covers a $\sim 13'\times13'$ field of view. We acquired two frames each in the $U$, $B$, $V$, $R$ and $I$ filters with exposure times of 300, 300, 200, 100 and 60 sec, respectively, at a typical airmass of about 1.3. On the same night we also observed two of Landolt's standard fields, SA95 and PG0231+051 (Landolt 1992), at different airmasses. The usual image processing procedures were performed, which included bias subtraction, flat fielding, and cosmic ray removal. We used the [IRAF]{}[^2] software package for this purpose.
![For the standard stars in the Landolt field, plots show residuals of the differential magnitudes (standard - calibrated) in the $U$, $B$, $V$, $R$, and $I$ bands as a function of $V$ magnitude. The dashed line drawn in each panel represents a zero difference.[]{data-label="figure:comp_stand"}](Fig01.ps){width="8.0cm" height="8.0cm"}
Photometry of the frames was performed using the [DAOPHOT II]{} profile fitting software (Stetson 1987). Details of the photometric calibration obtained on this night are given in Joshi et al. (2012). Transformation coefficients for the standard stars were determined as follows.\
\
$ u = U + 8.16\pm0.01 - (0.05\pm0.01)(U-B) + (0.55\pm0.02)X $\
$ b = B + 5.81\pm0.02 - (0.01\pm0.02)(B-V) + (0.29\pm0.03)X $\
$ v = V + 5.43\pm0.01 - (0.08\pm0.01)(B-V) + (0.15\pm0.01)X $\
$ r = R + 5.23\pm0.01 - (0.09\pm0.02)(V-R) + (0.09\pm0.02)X $\
$ i = I + 5.63\pm0.02 + (0.01\pm0.01)(R-I) + (0.07\pm0.02)X $\
where $u, b, v, r$ and $i$ are the aperture instrumental magnitudes, $U$, $B$, $V$, $R$ and $I$ are the standard magnitudes, and $X$ is the airmass. The differences between the calibrated magnitudes derived from the above transformation equations and the Landolt (1992) magnitudes are plotted in Fig. 1. The standard deviations of these measurements are estimated to be 0.04, 0.05, 0.03, 0.03, and 0.03mag for the $U$, $B$, $V$, $R$ and $I$ filters, respectively. The above transformation coefficients were used to convert instrumental magnitudes to the standard system. The average internal photometric error per magnitude bin in all five filters on the night of standardization is listed in Table1. This shows that photometric errors become large ($>0.1$mag) for stars fainter than $V \approx 20$ mag. To standardize the data on the remaining nights, differential photometry was performed using a linear fit between the standard and instrumental magnitudes on each night, assuming that most of the stars are non-variable.
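Applying these transformations in the other direction — recovering standard magnitudes from instrumental ones — requires solving for the standard colour that appears on the right-hand side. A minimal fixed-point sketch for the $B$ and $V$ equations above, using the coefficients quoted in the text (the function name and iteration count are our own choices, not from the paper):

```python
def standardize_bv(b_inst, v_inst, airmass, n_iter=10):
    """Invert the B and V transformation equations by fixed-point
    iteration: start from a colour of zero, then repeatedly update
    B and V using the current (B-V) estimate."""
    B = b_inst - 5.81 - 0.29 * airmass
    V = v_inst - 5.43 - 0.15 * airmass
    for _ in range(n_iter):
        bv = B - V
        # b = B + 5.81 - 0.01(B-V) + 0.29X  =>  solve for B
        B = b_inst - 5.81 + 0.01 * bv - 0.29 * airmass
        # v = V + 5.43 - 0.08(B-V) + 0.15X  =>  solve for V
        V = v_inst - 5.43 + 0.08 * bv - 0.15 * airmass
    return B, V
```

Because the colour terms are small (0.01 and 0.08), the iteration converges to machine precision within a few steps.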
Completeness of the data
------------------------
It is necessary to determine the completeness of the data as it is not always possible to detect every star in the CCD frame, particularly the faintest stars. The completeness factor (CF) is required in order to derive the luminosity function and the mass function of the cluster as well as to estimate the stellar density distribution. The [ADDSTAR]{} routine in [DAOPHOT]{} was used to determine CF. This involves adding randomly selected artificial stars with different, but known, magnitudes and positions to the original frames. We added about 10–15% of the actually detected stars, so that the crowding characteristics of the original image is almost unchanged. We added simulated stars to all bands in such a way that they have similar geometric locations. We varied the brightness of the artificial star depending on its location relative to the Main-Sequence (MS) in the $V$ band. We constructed five frames for each passband and re-processed them with the same procedure as used in the original frames. The average ratio of number of stars recovered to the number of simulated stars in the different magnitude bins gives the CF as a function of magnitude. The CF in all five passbands for both cluster and field regions is given in Table2. From the table, one can see that the completeness decreases towards the fainter stars because of the increased crowding caused by the large number of low-mass stars.
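The CF computation itself reduces to binned recovery fractions: the ratio of artificial stars recovered to artificial stars added, per magnitude bin. A sketch with hypothetical magnitude arrays (the function and variable names are ours):

```python
import numpy as np

def completeness_factor(added, recovered, bins):
    """Completeness factor per magnitude bin: the fraction of
    artificial stars added to the frames that the photometry
    pipeline subsequently recovers in each bin."""
    n_add, _ = np.histogram(added, bins=bins)
    n_rec, _ = np.histogram(recovered, bins=bins)
    # Bins with no artificial stars are taken as fully complete.
    return np.where(n_add > 0, n_rec / np.maximum(n_add, 1), 1.0)
```

In practice the CF would be averaged over the five simulated frames per passband described above.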
Astrometry {#sec:photast}
----------
In order to transform CCD pixel coordinates to celestial coordinates, we used the on-line digitized ESO catalogue included in the [skycat]{} software as an absolute astrometric reference frame. A linear astrometric solution was derived for the $V$ filter reference frame by matching positions of 63 well isolated, bright stars in the USNOA2.0 catalogue. The [ccmap]{} and [cctran]{} routines in [IRAF]{} were used to find a transformation equation which gives the celestial coordinates $(\alpha, \delta)$ as a function of the pixel coordinates, $(X,Y)$. The resulting celestial coordinates have standard deviations of 0.1arcsec in both right ascension and declination.
A finding chart for stars in NGC559 is shown in Fig. 2. We do not see any significant concentration of stars at the centre, which suggests that the cluster is loosely bound.
![Finding chart of stars in the field of NGC559. North is upwards and East is on the left. The sizes of the filled circles are proportional to the brightness of the stars in the $V$ band. The faintest are $V = 21$. The inner and outer rings indicate core and cluster radii with origin, $(0,0)$, at the cluster center.[]{data-label="figure:fchart"}](Fig02.ps)
Comparison with previous photometry {#sec:photcomp}
-----------------------------------
Photoelectric and photographic observations of NGC559 have been carried out by Lindoff (1969) and Grubissich (1975) respectively. Photographic magnitudes contain relatively large errors, while photoelectric magnitudes are mostly confined to stars brighter than $V \sim 15$, hence we did not compare them with our photometry in the present study. CCD photometry in the $UBVRI$ bands is discussed in AL02, but these data have not been published. Recently, MN07 performed a wide field CCD survey of a few clusters using a 90/180cm Schmidt-Cassegrain telescope equipped with a SBIG camera. This survey also includes NGC559, for which $BV$ data are presented, but only for stars brighter than about 18mag.
![Differences, $\Delta$ between measurements presented in MN07 and in the present study for $B$ magnitude and $(B-V)$ colour. Zero difference is indicated by the dashed line.[]{data-label="figure:comp_phot"}](Fig03.ps)
We found 1112 stars in the MN07 catalogue which are included in our study. However, there are only 687 stars in common for which both $B$ and $V$ magnitudes are available. We have cross-identified stars in the two catalogues on the assumption that stars are correctly matched if the difference in position is less than $1\arcsec$. On this basis, we found 505 stars in common which have similar $B$ and $V$ magnitudes within 0.5mag. A comparison of $B$ magnitudes and $(B-V)$ colours between the two catalogues is shown in Fig. 3. The mean difference and standard deviation in each magnitude bin is given in Table3. This shows that our $B$ measurements are in fair agreement with those given in the MN07 catalogue. However, there is a systematic difference in $(B-V)$ colours between the two catalogues.
A complete $UBVRIJHK$-proper motion catalog
-------------------------------------------
We have compiled a photometric catalogue of 2393 stars in the field of NGC559. The catalogue contains 515, 1288, 2177, 2352 and 2221 stars measured in the $U$, $B$, $V$, $R$ and $I$ bands, respectively. Near-infrared magnitudes for point sources around NGC559 have also been obtained from the Two Micron All-sky Survey (2MASS; Skrutskie et al. 2006). 2MASS provides photometry in the $J$ (1.25 $\mu$m), $H$ (1.65 $\mu$m) and $K_s$ (2.17 $\mu$m) bands up to limiting magnitudes of 15.8, 15.1, and 14.3, respectively. We found $JHK_s$ magnitudes for 917 stars in the field of NGC559, of which 906 stars are identified in our catalogue within $1\arcsec$ of their positions. The $K_s$ magnitudes were converted into $K$ magnitudes using the equations given in Carpenter et al. (2001). The proper motions have been taken from Roeser et al. (2010), which provides a catalogue of about 900 million stars derived from the USNO-B1.0 and 2MASS all-sky catalogues.
The $UBVRIJHK$ magnitudes and proper motions, wherever measured, are presented in Table4, sorted in increasing order of $V$. In the catalogue, column 1 contains a running number, columns 2 and 3 give right ascension and declination for J(2000), columns 4 to 11 provide photometric magnitudes and corresponding errors in the $UBVRIJHK$ passbands. The proper motion along the RA and DEC directions and their respective errors are given in the columns 12 and 13. Only a short extract of Table4 is shown; the complete catalogue is available at the WEBDA open cluster data base website[^3] or can be obtained directly from the authors.
Structural Properties of the cluster {#pcl}
====================================
Spatial Structure: Radial density profile {#rdp}
-----------------------------------------
The spatial structure and precise center of the star cluster is difficult to determine due to the irregular shape of the cluster and the non-uniform distribution of stars at different brightness levels. We define the cluster centre as the region where maximum stellar density is attained. To determine this value, we consider all stars with $V < 19$ for which the completeness level is in excess of 90%. We found that the stellar density peaks at the pixel coordinate (510, 535), corresponding to a cluster centre at ($\alpha, \delta$) = (01:29:32.33, +63:18:14.5). An error of up to $10\arcsec$ is expected in locating the cluster center.
To draw the radial density profile (RDP), we determined the stellar density in concentric rings, $0'.5$ wide, centered on the cluster center. The errorbars were derived assuming Poisson statistics. We fitted the King (1966) stellar density profile as modified by Kaluzny & Udalski (1992):
$\rho(r) = \rho_f + \frac{\rho_0}{1+ (\frac{r}{r_c})^2} $
Here $\rho_f$ is the field density and $r_c$ is the core radius of cluster where the stellar density, $\rho(r)$, becomes half of its central value, $\rho_0$. The stellar density distribution in the $V$ band is shown in Fig. 4. A $\chi^2$ best fit to the radial density profile is shown in the figure along with the field star density. The cluster boundary is considered to be the point in the radial direction when $\rho(r)$ falls below the field star density by 3$\sigma$. The value of the core radius was found to be $1'.3\pm0'.3$ and the cluster radius was estimated to be $4'.5\pm0'.2$. Our radius estimate is the same as that determined by AL02. The inner and outer rings in Fig. 2 represent the core and cluster regions, respectively.
![The stellar density distribution in NGC559 for stars brighter than 19mag. The solid line represents the King profile while the horizontal dashed line indicates the field density.[]{data-label="figure:rdp"}](Fig04.ps)
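Fitting this profile is a three-parameter nonlinear least-squares problem. A sketch using `scipy.optimize.curve_fit` on synthetic densities generated around values close to those quoted in the text; the radial bins and noise level here are fabricated for illustration, not the cluster's actual counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, rho0, rc, rhof):
    """King (1966) density profile as modified by
    Kaluzny & Udalski (1992): rho_f + rho_0 / (1 + (r/r_c)^2)."""
    return rhof + rho0 / (1.0 + (r / rc) ** 2)

# Synthetic radial bins (arcmin) and densities with small noise,
# built around rho0 = 20, r_c = 1.3 arcmin, rho_f = 5 for illustration.
r = np.arange(0.25, 8.0, 0.5)
rng = np.random.default_rng(0)
rho = king_profile(r, 20.0, 1.3, 5.0) + rng.normal(0.0, 0.05, r.size)

popt, _ = curve_fit(king_profile, r, rho, p0=[10.0, 1.0, 1.0])
rho0_fit, rc_fit, rhof_fit = popt
```

The fitted field density `rhof_fit` then sets the 3$\sigma$ threshold used to define the cluster boundary.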
We noticed that the core radius derived from bright stars alone is smaller than that obtained when stars down to $V$=20 are included. This suggests either that: i) the core and cluster radii derived using the RDP are only approximate or, ii) there is mass segregation due to the dynamical evolution of the cluster. In the latter case, bright massive stars sink towards the cluster centre, while faint low-mass stars move away from it. A similar trend was noticed by Lee et al. (2013) in their investigation of the clusters NGC1245 and NGC2506. A detailed study of the dynamical evolution is presented in Section 5.
Colour-Magnitude Diagram {#cmd}
------------------------
{height="12cm" width="15cm"}
The identification of the cluster main sequence in the colour-magnitude diagrams (CMDs) allows a model-dependent mass, radius, and distance for each star to be determined. To draw the CMD, we used the area within cluster radius ($4'.5$) as the [*‘cluster region’*]{} and an equal area outside the cluster radius of $5'.6$ as the [*‘field region’*]{}. In the left panels of Fig. 5, we constructed calibrated $(B-V)$, and $(V-I)$ vs $V$ diagrams of NGC 559 using the stars falling in the cluster region. A similar diagram for the stars in the field region are shown in the middle panels of the same figure.
Since stars in the cluster region are contaminated by the field star population, we adopted a statistical approach to remove the field star contamination. This method is based on a comparison of the cluster and field CMDs. We removed all cluster stars in the $(V-I)$/$V$ CMD which fall within a grid cell of $(V, V-I)$ = ($\pm 0.25$, $\pm 0.125$) of a field star in the field CMD. A similar removal process was applied to the $(B-V)$/$V$ CMD with a grid of $(V, B-V)$ = ($\pm 0.25$, $\pm 0.10$). We iterated the procedure for each star lying on the CMDs of the field region. We were finally left with 462 stars in the $(V-I)$/$V$ CMD and 341 stars in the $(B-V)$/$V$ CMD. We found more stars in the $(V-I)$/$V$ CMD because our photometry goes deeper in the $V$ and $I$ bands than in the $B$ band. The statistically cleaned cluster CMDs are shown in the right hand panels of Fig. 5. The spatial distribution of stars extracted after the statistical subtraction shows that the inner region is dominated by giant and upper-MS stars, whereas the outer region is dominated by low-mass stars. The lack of stars in some pockets is quite evident in the cleaned CMDs. Such gaps in the MS are not unusual and have been found in many clusters (see details in Rachford & Canterna 2000). AL02 also noticed a gap at $M_V\sim3.5$ mag ($m_v\sim18.1$) in the cluster NGC559 similar to the one seen in the present study. This suggests that these gaps could be due to a real lack of cluster members in some magnitude bins.
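The grid-cell cleaning described above can be sketched as follows. The text specifies only the cell sizes; removing the *nearest* cluster star within each field star's cell, and the array layout, are our assumptions for illustration:

```python
import numpy as np

def clean_cmd(cluster, field, dmag=0.25, dcol=0.125):
    """Statistical field-star subtraction in a CMD.
    cluster, field: (N, 2) arrays of (V, colour).
    For each field star, remove the closest remaining cluster-region
    star lying within +/-dmag in V and +/-dcol in colour."""
    remaining = list(range(len(cluster)))
    for fv, fc in field:
        best, best_d2 = None, None
        for i in remaining:
            cv, cc = cluster[i]
            if abs(cv - fv) <= dmag and abs(cc - fc) <= dcol:
                d2 = (cv - fv) ** 2 + (cc - fc) ** 2
                if best is None or d2 < best_d2:
                    best, best_d2 = i, d2
        if best is not None:
            remaining.remove(best)
    return cluster[remaining]
```

The cell half-widths shown are those quoted for the $(V-I)$/$V$ CMD; the $(B-V)$/$V$ case uses `dcol=0.10`.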
Mean proper motion {#pm}
------------------
Recently, Roeser et al. (2010) provided a catalogue which lists stellar coordinates with an accuracy of 80–300mas and absolute proper motion with an accuracy of 4–10mas yr$^{-1}$ for about 900 million stars. A cross-match of these stars with our catalogue using a matching criterion of $1\arcsec$ resulted in 1824 stars in common. In Table4, we provide proper motions of these stars along the RA and DEC directions and their respective errors. Fig. 6 shows the proper motion distribution in the RA-DEC plane.
To determine the mean proper motion of the cluster, we considered those 341 stars which fall in both the cleaned $(V-I)$/$V$ and $(B-V)$/$V$ CMDs. Among them, 307 stars were found within $1\arcsec$ of the Roeser et al. (2010) catalogue positions. We determined the mean and $\sigma$ values of the proper motion in both RA and DEC directions and rejected those stars which fall outside 3$\sigma$ in both the directions. We iterated this procedure until all values fall within 3$\sigma$ of the mean. We were finally left with 229 stars which were used to determine the mean proper motion of the cluster NGC 559. These stars are shown by filled circles in Fig. 6. The mean proper motion of the cluster determined in this way is\
$\bar{\mu}_{x} = -3.29\pm0.35$masyr$^{-1}$; $\bar{\mu}_{y} = -1.24\pm0.28$masyr$^{-1}$\
![The distribution of stars in the $\mu_{x}$ - $\mu_{y}$ plane for which proper motion values are determined in our catalogue and given in Table4. The 229 stars used to estimate the mean proper motion are shown by filled circles.[]{data-label="figure:pm"}](Fig06.ps)
where the uncertainties are standard deviations. A similar matching procedure using the UCAC4 catalogue (Zacharias et al. 2013) yielded only 167 stars, although UCAC4 provides proper motions with higher accuracy. A 3$\sigma$ clipping analysis of these proper motions left 145 stars, resulting in a mean proper motion of $\bar{\mu}_{x} = -4.45\pm0.49$ and $\bar{\mu}_{y} = 1.65\pm0.37$masyr$^{-1}$ in the RA and DEC directions, respectively. The proper motions for the cluster NGC559 estimated using the two different catalogues are therefore in close agreement within their quoted uncertainties. From the radial-velocity measurements of 24 stars computed from the data of the Tycho-2 catalogue, Loktin & Beshenov (2003) estimated a proper motion of $\bar{\mu}_{x} = -1.59\pm0.41$ and $\bar{\mu}_{y} = -0.52\pm0.46$masyr$^{-1}$ for the cluster NGC 559, which is lower than the present estimates.
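The iterative 3$\sigma$ rejection applied to both catalogues can be sketched as follows (array and function names are ours):

```python
import numpy as np

def clipped_mean_pm(mux, muy, nsig=3.0, max_iter=20):
    """Iteratively reject stars deviating by more than nsig standard
    deviations from the mean proper motion in either coordinate,
    until the surviving sample no longer changes."""
    keep = np.ones(mux.size, dtype=bool)
    for _ in range(max_iter):
        mx, my = mux[keep].mean(), muy[keep].mean()
        sx, sy = mux[keep].std(), muy[keep].std()
        new = (np.abs(mux - mx) <= nsig * sx) & (np.abs(muy - my) <= nsig * sy)
        if np.array_equal(new, keep):
            break
        keep = new
    return mux[keep].mean(), muy[keep].mean(), keep
```

The returned mask identifies the stars retained for the final mean, analogous to the 229 PPMXL stars above.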
Probable cluster members {#pk}
------------------------
Open clusters are mostly located within the densely populated Galactic plane and often contaminated with large numbers of field stars belonging to the disc population. It is therefore essential to discriminate between members and non-members in order to obtain correct cluster parameters. To identify the most-likely cluster members in NGC 559, we first derive different membership probabilities for each star in the cluster field based on their spatial distribution, position in the colour-magnitude diagram and proper motions.
### Spatial probability {#sp}
The spatial probability, $P_{\rm sp}$, is a function of the angular distance of the star from the cluster centre, $r$, and is given by $$P_{\rm sp} = 1-\frac{r}{r_{\rm cl}}$$ where $r_{\rm cl}$ is the angular radius of the cluster (not to be confused with the core radius $r_c$ of \[rdp\]). Using $r_{\rm cl} = 4'.5$ derived in \[rdp\], we determined $P_{\rm sp}$ for all the 960 stars falling within the cluster radius. For $r \geq r_{\rm cl}$ we assign $P_{\rm sp} = 0$. We found 176 stars within the core region of the cluster for which $P_{\rm sp} \geq 0.67$.
### Statistical probability {#stp}
We determined a statistical probability based on a comparison of the cluster CMD with the field CMD, as discussed in \[cmd\]. In this method we removed all the stars in the $(B-V)$/$V$ CMD of the cluster field which fall within a grid cell of $(V, B-V)$ = ($\pm 0.25$, $\pm 0.10$) of a star in the field CMD. After iterating the procedure for each star lying on the CMD of the field region, we found 341 stars, to which we assigned statistical probabilities $P_{\rm st}=1$. The remaining stars were assigned $P_{\rm st}=0$.
### Kinematic probability {#kp}
The kinematic probability, $P_{k}$, is defined as the deviation in the proper motion of stars in both RA and DEC directions with respect to the mean proper motion of the cluster.
Using the method given by Kharchenko et al. (2004), we determined $P_k$ for each star using $$P_k = \exp \left\{-0.25 \left[ (\mu_{x} - \bar{\mu}_{x})^{2}/ \sigma_{x}^{2} +
(\mu_{y} - \bar{\mu}_{y})^{2}/ \sigma_{{y}}^{2} \right] \right\}$$ where $\sigma_{x}^{2} = \sigma_{\mu_{x}}^{2}+\sigma_{\bar{\mu}_{x}}^{2}$ and $\sigma_{y}^{2} = \sigma_{\mu_{y}}^{2}+\sigma_{\bar{\mu}_{y}}^{2}$. The mean proper motion of the cluster NGC559 is taken from our analysis carried out in \[pm\]. We found 1824 stars for which $P_k$ could be estimated using the Roeser et al. (2010) catalogue.
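Both the spatial and kinematic probabilities are simple closed-form expressions; a vectorized sketch (function names are ours):

```python
import numpy as np

def p_spatial(r, r_cluster):
    """Spatial probability: falls linearly from 1 at the centre to
    0 at the cluster radius, and is 0 outside it."""
    return np.clip(1.0 - np.asarray(r) / r_cluster, 0.0, None)

def p_kinematic(mux, muy, e_mux, e_muy, mx, my, e_mx, e_my):
    """Kinematic probability of Kharchenko et al. (2004): a Gaussian
    in the proper-motion deviation from the cluster mean, with the
    star and cluster-mean uncertainties added in quadrature."""
    sx2 = e_mux ** 2 + e_mx ** 2
    sy2 = e_muy ** 2 + e_my ** 2
    return np.exp(-0.25 * ((mux - mx) ** 2 / sx2 + (muy - my) ** 2 / sy2))
```

A star moving exactly with the cluster mean has $P_k = 1$; the probability decays with the quadrature-normalized deviation.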
To identify the most-likely members of the cluster NGC 559, we considered stars that lie in the core region of the cluster ($P_{\rm sp} \geq 0.67$), fall within the cleaned CMD ($P_{\rm st}$=1.0), and have proper motions within 1$\sigma$ of the mean proper motion ($P_{\rm k} \geq 0.60$). We identified 22 such stars in our catalogue which fulfil the above criteria. These criteria are conservative in the sense that they confer membership status on the selected stars, but this does not mean that other stars are non-members. The positions of these stars along with their magnitudes and colours are given in Table5. To determine robust cluster parameters for NGC559, these stars were preferentially used in our analysis, as explained in the following section.
Cluster Parameters {#cp}
==================
Reddening law and two-colour-diagrams {#eltcd}
-------------------------------------
Though the normal reddening law, $R_V = \frac{A_V}{E(B-V)} = 3.1$, is valid for lines of sight that do not pass through dense clouds (Sneden et al. 1978), clusters associated with gas and dust or behind the dusty Galactic spiral arms may give a different value of $R_V$. To investigate the nature of the reddening law, Chini & Wargau (1990) showed that the TCDs of the form $(\lambda-V)/(B-V)$ can be used, where $\lambda$ is any broad-band filter. The slope of the TCD distinguishes normal extinction produced by grains in the diffuse interstellar medium from that caused by abnormal dust grains (Pandey et al. 2000). We studied the reddening law in the cluster NGC559 by drawing $(\lambda-V)/(B-V)$ diagrams for the $\lambda = R$, $I$, $J$, $H$ and $K$ bands as shown in Fig. 7. The slope, $m_{\rm cluster}$, was determined by fitting a linear relation in the TCD for the stars in the cluster region and a best fit determined after a 3$\sigma$-clipping iteration. The estimated values of $m_{\rm cluster}$ for all five colours are given in Table6 along with their normal values. To derive the value of total-to-selective extinction $R_{\rm cluster}$ in the direction of NGC559, we used the approximate relation (cf. Neckel & Chini 1981)
$R_{\rm cluster} = \frac{m_{\rm cluster}}{m_{\rm normal}} \times R_{\rm normal}$
Using $R_{\rm normal} = 3.1$, we estimated $R_{\rm cluster}$ in different passbands to be $3.1 < R_{\rm cluster} < 3.5$ which is marginally higher than the normal value. The reddening law in the direction of the cluster is found to be normal at longer wavelengths but anomalous at shorter wavelengths.
![The $(\lambda-V)/(B-V)$ two-colour diagram for the stars within cluster region. The most probable cluster members are shown by large filled circles. The continuous lines represent the slope determined through least square linear fit.[]{data-label="figure:tcd"}](Fig07.ps){width="8.8cm" height="15cm"}
Reddening determination: $(U-B)$ vs $(B-V)$ TCD {#ccd}
-----------------------------------------------
The reddening, $E(B-V)$, in the cluster region is normally determined using the $(U-B)$/$(B-V)$ two-colour diagram (TCD). Out of 2393 stars in our catalogue, we found only 501 stars for which all the $U$, $B$ and $V$ magnitudes are available. Among them, we considered only 275 stars within the cluster which have a $U$ band photometric error less than 0.05. The resulting TCD is shown in Fig. 8. As mentioned in the previous section, the normal reddening law is not applicable at shorter wavelengths. Therefore, we fitted the intrinsic zero-age main sequence (ZAMS) of solar metallicity (Marigo et al. 2008) to the observed MS stars by shifting it in $E(B-V)$ and $E(U-B)$ for different values of the reddening slope $\frac{E(U-B)}{E(B-V)}$. A visual inspection shows that the best fit is achieved for $\frac {E(U-B)} {E(B-V)} = 0.84\pm0.01$. This gives a mean reddening of $E(B-V) = 0.82\pm0.02$ in the direction of NGC559, as shown by the solid line in Fig. 8. In determining the reddening, we used only stars having colours corresponding to spectral classes earlier than A0 because stars of later spectral types are more affected by metallicity and background contamination (Hoyle et al. 2003). The colour excess obtained in the present study is in good agreement with the value $E(B-V) = 0.81\pm0.05$ given by AL02, but higher than $0.68^{+0.11}_{-0.12}$ obtained by MN07. Using the Johnson & Morgan (1953) $Q$-method for stars earlier than A0 ($(B-V)<0.84$), we determined the reddening of each star. The reddening distribution of these stars shows that the reddening is uniform over the whole cluster.
Considering $R_{\rm normal} = 3.1$, we estimated a higher value of $R_{\rm cluster}=3.6$ for ultraviolet wavelengths. This further suggests an anomalous reddening law at shorter wavelengths in the direction of NGC559. Chini & Wargau (1990) pointed out that both larger and smaller size grains may increase $R_{\rm cluster}$. However, some of the recent studies (e.g., Whittet et al. 2001, Pandey et al. 2008 and references therein) suggest that a value of $R_{\rm cluster}$ higher than the normal is indicative of the presence of larger dust grains. As NGC559 is situated behind the Perseus arm, a high reddening and anomalous reddening law is not surprising.
![The $(U-B)$ versus $(B-V)$ diagram for the stars in NGC559. The small dots represent the stars which lie within the cluster boundary and have a $U$-band photometric error less than 0.05. Filled circles are the most probable cluster members. The 3 stars shown in red colour represent red giants belonging to the cluster but not considered in the reddening estimation. The thick dashed arrow represents the slope (0.84) and direction of the reddening vector. The solid line represents the ZAMS with solar metallicity taken from Marigo et al. (2008), shifted by $E(B-V)=0.82$.[]{data-label="figure:ccd"}](Fig08.ps)
Distance and Age determination {#cmd}
------------------------------
The distance and age of NGC559 can be estimated by visual fitting of theoretical isochrones to the MS. For this purpose we used $(B-V)/V$ and $(V-I)/V$ CMDs shown in the right panels of Fig. 5. The stars show a broad but clearly distinct MS in the CMD. The width is mainly caused by cluster binaries and field stars. There are a few stars scattered towards the red side of the CMDs. We suspect these may be foreground field stars which have remained due to incomplete subtraction of the field star contamination. We presume most of them belong to the Perseus spiral arm. In order to obtain the most reliable estimates of the cluster parameters, we identified those stars in the cleaned CMDs which lie inside the core region and have proper motions within 1$\sigma$ of the mean proper motion of the cluster. These stars are shown by the blue filled circles in Fig. 5. We used stellar evolutionary isochrones published by the Padova group[^4] (Marigo et al. 2008) to estimate the cluster age and distance. We fixed the reddening to the value estimated in \[ccd\]. A simultaneous best fit was made of the isochrones in the bluest envelope of the $(B-V)/V$ and $(V-I)/V$ CMDs, corrected for a mean reddening of $E(B-V)=0.82$ and $E(V-I)=1.12$ assuming $\frac{E(V-I)}{E(B-V)}$=1.37 (Schlegel et al. 1998). This gives an age of $\log$(Age)=$8.35\pm0.05$ and an apparent distance modulus of $(m-M)$ = $14.80\pm0.05$ for NGC559. The errors in age and distance are strongly influenced by a few blue and red supergiants in the CMDs.
As we have seen in \[eltcd\] and \[ccd\], the total-to-selective extinction in the optical region varies from 3.4 to 3.6. We adopted a mean value of $R_V = 3.5\pm0.1$ as the total-to-selective extinction in the direction of NGC559. Assuming a total extinction of $A_V = R_V \times E(B-V)$, the reddening-free distance modulus is estimated as $(V_0 - M_V)$ = $11.93\pm0.20$, which corresponds to a distance of $2.43\pm0.23$kpc for NGC559. The linear diameter of the cluster is estimated to be $6.4\pm0.4$pc. Since the cluster lies very close to the Galactic plane, a large foreground extinction of about $E(B-V)=0.56$ is expected in that direction (Schlegel et al. 1998, Joshi 2005).
The position of NGC559 in Galactic coordinates is $l = 127^\circ.2, b = +0^\circ.75$. Assuming that the Sun is at a distance of 8.5kpc from the Galactic centre, the Galactocentric rectangular coordinates of NGC559 are $X\sim1.88$kpc, $Y\sim1.44$kpc and $Z\sim+30.9$pc, giving a Galactocentric distance of $\sim$10.1kpc for the cluster. This places NGC559 just outside the Perseus spiral arm. The distance of the cluster from the Galactic plane is smaller than the typical scale height of the thin disk ($\approx$ 75 pc). This is in agreement with Joshi (2007), who found that most of the OCs younger than about 300Myr lie within $\pm$100pc of the Galactic plane.
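The Galactocentric geometry follows from the law of cosines in the Galactic plane plus the out-of-plane projection. A sketch reproducing the quoted numbers (assuming $R_0=8.5$ kpc, as stated above; the function name is ours):

```python
import math

def galactocentric(d_kpc, l_deg, b_deg, r0=8.5):
    """Galactocentric distance R_GC (kpc) and height Z (kpc) above
    the plane, from heliocentric distance and Galactic (l, b)."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    d_plane = d_kpc * math.cos(b)  # in-plane projection of the distance
    # Law of cosines between the Sun-centre and Sun-cluster vectors.
    rgc = math.sqrt(r0**2 + d_plane**2 - 2.0 * r0 * d_plane * math.cos(l))
    z = d_kpc * math.sin(b)
    return rgc, z
```

Plugging in $d=2.43$ kpc, $l=127^\circ.2$, $b=+0^\circ.75$ recovers $R_{\rm GC}\approx10.1$ kpc and $Z\approx+31$ pc.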
![The $(V-K)/(J-K)$ colour-colour diagram for stars within the cluster boundary. Filled circles are the most probable cluster members. The dotted line is the solar metallicity isochrone for $\log$(Age)=8.35, while the two dashed lines indicate the direction of the normal reddening vector. The solid line is obtained by using reddenings of $E(V-K) = 2.14$ and $E(J-K) = 0.37$.[]{data-label="figure:irtcd"}](Fig09.ps)
Interstellar extinction in the near-infrared {#IRebv}
--------------------------------------------
To determine the interstellar extinction in the near-IR, we used 370 stars for which $VJK$ magnitudes were available in our catalogue. The $(V-K)/(J-K)$ diagram is shown in Fig. 9. We used the normal reddening law for the infrared colours, as given in Table6, and shifted the stars along the reddening vector $\frac{E(J-K)}{E(V-K)} = 0.173$ using the solar metallicity isochrones given by Marigo et al. (2008). A best fit to the points in the $(V-K)/(J-K)$ diagram, obtained by minimizing $\chi^2$, gives colour excesses of $E(V-K) = 2.14\pm0.02$ and $E(J-K) = 0.37\pm0.01$. The theoretical isochrone shifted by the above values is shown by the solid line in Fig. 9. Using the Whittet & van Breda (1980) relation $R_V = 1.1\,E(V-K)/E(B-V)$, which is insensitive to the reddening law, we obtained $E(B-V) = 0.76\pm0.04$ for the reddening in NGC559. This is close to the $E(B-V) = 0.82\pm0.02$ determined using the $(U-B)/(B-V)$ TCD. The agreement between the two complementary methods suggests that our values are robust.
The fundamental parameters derived for NGC 559 in this study are summarized in Table 7.
Comparison to previous results
------------------------------
NGC 559 has been studied in the past by various authors. Lindoff (1969) found it to be a very old cluster with an age of about 1000 Myr, while Jennens & Helfer (1975) estimated an age of only 100 Myr. Both studies used photoelectric photometry. Grubissich (1975), Lynga (1987), AL02 and MN07 all estimated the cluster age at $\log$(Age)$ = 8.7\pm0.1$. In this paper, using only the most probable cluster members, we estimate $\log$(Age) = $8.35\pm0.05$.
The distance of the cluster has been estimated as about 1.3 kpc (Lindoff 1969), 6.3 kpc (Jennens & Helfer 1975), and 1.15 kpc (Lynga 1987). The recent CCD studies by AL02 and MN07 determined distances of $2.3\pm0.3$ kpc and $2.17^{+0.56}_{-0.82}$ kpc, respectively. The latter value is close to the distance of $2.43\pm0.22$ kpc determined in the present study. Previous estimates of the reddening, $E(B-V)$, are about 0.45 (Lindoff 1969), 0.62$\pm$0.17 (Jennens & Helfer 1975), 0.54 (Lynga 1987), and $0.68^{+0.11}_{-0.12}$ (MN07). However, AL02 obtained a higher value of $E(B-V)=0.81\pm0.05$, which is in good agreement with our value of $0.82\pm0.02$.
Dynamical Study of the cluster {#ds}
==============================
The dynamical properties of the cluster can be studied by determining the luminosity and mass functions of the cluster members.
Luminosity function {#lf}
-------------------
The luminosity function (LF) is the distribution of cluster members over magnitude bins. After applying the data completeness corrections to both the cluster and field regions, the number of probable cluster members was obtained by subtracting the contribution of field stars from the stars in the cluster region. The estimated numbers of stars in each magnitude bin for the cluster $(N_C)$ and field $(N_F)$ regions are given in Table 8. To determine the photometric LFs in the $(V-I)/V$ and $(B-V)/V$ CMDs, we subtracted $N_F$ from $N_C$; the resulting numbers of probable members $(N_P)$ are given in the 4th and 7th columns of Table 8, respectively.
Mass function {#mf}
-------------
The initial mass function (IMF) is defined as the distribution of stellar masses per unit volume in a star formation event. Along with the star formation rate, the IMF determines the subsequent evolution of clusters (Kroupa 2002). Since the direct determination of the IMF is not possible due to the dynamical evolution of stellar systems, we derive the mass function (MF), which is the relative number of stars per unit mass and can be expressed by a power law $N(\log M) \propto M^{\Gamma}$. The slope, $\Gamma$, of the MF can be determined from $$\Gamma = \frac{d\log N(\log\it{m})}{d\log\it{m}}$$
![MF derived for the core region (lower panel), corona (middle panel), and whole cluster region (upper panel). The error bars represent $1/\sqrt N$ errors. The continuous line is the fit to the data excluding the points shown by open circles.[]{data-label="figure:mf"}](Fig10.ps){height="14cm" width="9cm"}
where $N(\log m)$ is the number of stars per unit logarithmic mass. The masses of probable cluster members can be determined by comparing their observed magnitudes with those predicted by a stellar evolutionary model, provided the age, reddening, distance and metallicity are known.
As seen in Fig. 5, the $(V-I)/V$ CMD goes deeper than the $(B-V)/V$ CMD, so we used the former to determine the MF of the cluster. The main factors limiting the accuracy of the MF are data incompleteness and field star contamination: the central region of the cluster is more affected by incompleteness, while the outer region is more affected by field star contamination. After statistically correcting for the field star contamination, we determined the MF in three regions, i.e., the core region ($r\leq1'.3$), the corona ($1'.3 < r \leq 4'.5$), and the whole cluster region ($r\leq4'.5$). The MF determined for the cluster region is given in Table 9. Fig. 10 shows the MF in the cluster fitted for the MS stars with masses $0.8 \leq M/M_\odot < 3.7$. The error bars were calculated assuming Poisson statistics. In determining the slope, we considered only those data points shown by filled circles in Fig. 10. The slope of the MF ($\Gamma$) in the mass range $1.0 \leq M/M_\odot < 3.7$ in each region was calculated using a least-squares method and is shown by the solid line in the figure. Table 10 summarizes the MF slopes for all three regions.
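The least-squares slope determination can be sketched as follows. The bin values here are synthetic, generated from a pure power law with the Salpeter slope (not the actual Table 9 counts); the fit simply recovers the input $\Gamma$:

```python
# Illustrative bin values following N(log m) ∝ m^Γ with the Salpeter
# slope Γ = -1.35 (synthetic numbers, not the Table 9 data).
gamma_true = -1.35
log_m = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]          # log(M/M_sun) bin centres
log_N = [2.0 + gamma_true * x for x in log_m]   # log N = const + Γ log m

# Ordinary least-squares slope of log N against log m gives Γ
n = len(log_m)
mx = sum(log_m) / n
my = sum(log_N) / n
slope = sum((x - mx) * (y - my) for x, y in zip(log_m, log_N)) \
        / sum((x - mx) ** 2 for x in log_m)
print(f"Gamma = {slope:.2f}")  # Gamma = -1.35
```

In practice the observed counts carry Poisson errors, so a weighted fit over the filled-circle points of Fig. 10 is used.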
For the mass range $0.4 < M/M_\odot < 10$, the classical value derived by Salpeter (1955) for the MF slope is $\Gamma = -1.35$. The MF slope in the core region agrees with the Salpeter slope within the given uncertainty, but it is steeper for the corona and cluster regions. This suggests a preferential distribution of relatively massive stars towards the central region of the cluster. When we determined the MF slopes for the two extreme age limits of the cluster, allowing for the uncertainty in our age determination, we found that the MF slope depends only slightly on the adopted age, varying by at most $\sim$20%.
It is worth pointing out that the mass range for probable MS stars in this cluster is quite small. It is possible that some of the low mass stars may have escaped from the cluster as a result of stellar encounters between stars of different masses. On the other hand, the initially massive stellar members of the cluster have now evolved and may possibly be white dwarfs or have undergone supernova explosions. Very deep photometry will be required to detect white dwarfs or supernova remnants, if present.
Mass Segregation {#ms}
----------------
There is ample evidence of mass segregation in star clusters, i.e. a tendency for higher-mass stars to settle towards the inner region and lower-mass stars towards the outer region of the cluster. This appears to be a result of equipartition of energy through stellar encounters (e.g., Mathieu & Latham 1986, Sagar et al. 1988, Pandey et al. 2001). To understand whether mass segregation is an imprint of the star formation process in the cluster and/or a result of dynamical evolution, we determined the dynamical relaxation time, $T_E$. This is the time in which individual stars in the cluster exchange energies and their velocity distribution approaches the Maxwellian equilibrium. It can be expressed as
$$T_E = \frac{8.9 \times 10^5 (N R_h^3/\bar{m})^{1/2}} {\log(0.4N)}$$
where $T_E$ is in years, $N$ is the total number of cluster members, $R_h$ is the radius (in parsecs) containing half of the cluster mass and $\bar{m}$ is the mean mass of the cluster members in solar units (cf. Spitzer & Hart 1971). We estimated a total of 202 MS stars in the mass range $0.8 \le M/M_\odot < 3.7$. The total mass of the cluster was obtained by subtracting the total stellar mass in the field region from that in the cluster region. This results in a total mass of $\sim 344 M_\odot$ for NGC 559, which gives an average mass of $\sim 1.7 M_\odot$ per star. The contribution of the low-mass stellar population is critical for constraining the total cluster mass, which in turn is crucial for understanding the dynamical evolution and long-term survival of a cluster (e.g., de Grijs & Parmentier 2007, and references therein). We cannot rule out the possibility of imperfect subtraction of field stars or an observational bias against detecting low-mass stars, either of which would lead to an underestimate of the total mass of the cluster. Therefore the present value should be taken as a lower limit for the cluster mass, while the estimated mean stellar mass can be taken as an upper limit.
It can be seen that the half-mass radius of the cluster, $R_h$, plays an important role in the determination of the dynamical relaxation time, $T_E$. Unfortunately, this quantity is unknown for most clusters and is generally taken as half of the total cluster radius. Nevertheless, we can estimate $R_h$ by taking advantage of the statistical removal of field stars and the knowledge of approximate stellar masses from the isochrones. The value of $R_h$ determined in this way is $\sim 2.3$ pc, which is $\sim 70\%$ of the cluster radius. An $R_h$ value larger than half of the cluster radius suggests that the inner region has a deficiency of massive stars, which have by now evolved. We estimate a dynamical relaxation time $T_E$ = 19.2 Myr for NGC 559. However, the omission of cluster members fainter than the limiting $V$ magnitude of our observations decreases $N$ and increases $\bar{m}$, leading to an underestimate of $T_E$; the value obtained in this way should therefore be regarded as a lower limit. The values used in the estimation of $T_E$ are summarized in Table 11. $T_E$ determined in the present study is much smaller than the cluster age of about 224 Myr. We conclude, therefore, that NGC 559 is a dynamically relaxed cluster.
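The Spitzer & Hart estimate can be reproduced from the numbers quoted above ($N = 202$, $R_h \approx 2.3$ pc, $\bar{m} \approx 1.7\,M_\odot$); the small difference from the quoted 19.2 Myr reflects rounding of the inputs:

```python
import math

N = 202      # number of MS members
R_h = 2.3    # half-mass radius in pc
m_bar = 1.7  # mean stellar mass in solar masses

# Spitzer & Hart (1971): T_E in years for R_h in pc and m_bar in M_sun
T_E_yr = 8.9e5 * math.sqrt(N * R_h**3 / m_bar) / math.log10(0.4 * N)
T_E_Myr = T_E_yr / 1e6
print(f"T_E ~ {T_E_Myr:.0f} Myr")  # T_E ~ 18 Myr, far below the 224 Myr age
```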
Conclusion and Summary
======================
We present results of an ongoing photometric survey aimed at determining the structure, astrophysical parameters and dynamical evolution of the intermediate-age Galactic cluster NGC 559. We present a comprehensive $UBVRIJHK$-proper motion catalogue for 2393 stars down to about $V=21.4$ mag observed in a $\sim 13'\times13'$ field centred on the cluster. Fundamental parameters, such as the core and cluster radii, reddening $E(B-V)$, age, distance modulus and mean proper motion, were obtained using optical and near-IR photometry and proper motions. We analysed the cluster membership using criteria based on distance from the cluster centre, position in the CMD, and proper motions. The membership probabilities of all stars in the field of the cluster are presented, and 22 stars are identified as the most probable cluster members. Our study indicates a distance of $2.43\pm0.23$ kpc, a diameter of $6.4\pm0.4$ pc and an age of $224\pm25$ Myr. The cluster is found to be heavily reddened, with $E(B-V)=0.82\pm0.02$. The mean proper motion is estimated to be $\mu_x = -3.29\pm0.35$ mas yr$^{-1}$, $\mu_y = -1.24\pm0.28$ mas yr$^{-1}$. Our analysis suggests that the cluster is slightly younger and more reddened than previously thought. It is important to note that, because we restrict the determinations to the most probable cluster members, the errors in the estimates of the various cluster parameters have been considerably reduced.
The reddening law in the direction of the cluster was found to be normal at longer wavelengths but anomalous at shorter wavelengths. Overall, we find a slightly higher total-to-selective extinction, $R_V=3.3$, towards NGC 559. The larger value of $R_V$ could be caused by a larger-than-average grain size. Polarimetric data would be useful to ascertain the size and behaviour of the dust grains. From the combined optical and near-infrared data, we obtained colour excesses of $E(V-K) = 2.14\pm0.02$, $E(J-K) = 0.37\pm0.01$, and $E(B-V)= 0.76\pm0.04$ in the direction of NGC 559.
The MF for MS stars in the cluster is not uniform over the entire region and is found in the range $-1.64 \geq \Gamma \geq -2.14$ for the mass range $1.0 \le M/M_\odot < 3.7$. The MF slope of the core region is in agreement with the Salpeter value, but it is steeper in the corona and in the cluster as a whole. This suggests mass segregation of MS stars due to the dynamical evolution of the cluster. A deficiency of low-mass stars as well as of very massive stars was found in the core region of the cluster. The age of the cluster is much greater than the relaxation time of 19.2 Myr, which implies that the cluster is dynamically relaxed. The improved cluster parameters and knowledge of its dynamical evolution should allow a better understanding of star formation in NGC 559.
In a forthcoming paper, we will report on stellar variability in NGC 559 based on observations from 35 nights taken over three years, from 2010 to 2012.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors thank the anonymous referee for useful comments that improved the scientific content of the paper. We acknowledge the suggestions given by Dr. Ramakant Singh Yadav. This study has made use of data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts; the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
[99]{}
Ann H. B., Lee S. H., 2002, JKAS, 35, 29 (AL02)
Carpenter J. M., 2001, AJ, 121, 2851
Carraro G., Villanova S., Demarque P., Moni Bidin C., McSwain M. V., 2008, MNRAS, 386, 1625
Chini R., Wargau W. F., 1990, A&A, 227, 213
de Grijs R., Parmentier G., 2007, ChJA&A, 7, 155
Grubissich C., 1975, A&AS, 21, 99
Hoyle F., Shanks T., Tanvir N. R., 2003, MNRAS, 345, 269
Jennens P. A., Helfer H. L., 1975, MNRAS, 172, 681
Johnson H. L., Morgan W. W., 1953, ApJ, 117, 313
Joshi Y. C., 2005, MNRAS, 362, 1259
Joshi Y. C., 2007, MNRAS, 378, 768
Joshi Y. C., Joshi S., Kumar B., Mondal S., Balona L. A., 2012, MNRAS, 419, 2379
Kaluzny J., Udalski A., 1992, Acta Astron., 42, 29
Kharchenko N. V., Piskunov A. E., R[ö]{}ser S., Schilbach E., Scholz R.-D., 2004, Astron. Nachr., 325, 740
King I., 1966, AJ, 71, 64
Kroupa P., 2002, Science, 295, 82
Lada C. J., Lada E. A., 2003, ARA&A, 41, 57
Landolt A. U., 1992, AJ, 104, 340
Lee S. H., Kang Y.-W., Ann H. B., 2013, MNRAS, 432, 1672
Lindoff U., 1969, Arkiv Astron., 5, 221
Loktin A. V., Beshnov G. V., 2003, ARep, 47, 6
Lynga G., 1987, Catalogue of Open Cluster Data, Centre des Données Stellaires, Strasbourg
Maciejewski G., Niedzielski A., 2007, A&A, 467, 1065 (MN07)
Marigo P., Girardi L., Bressan A., et al., 2008, A&A, 482, 883
Mathieu R. D., Latham D. W., 1986, AJ, 92, 1364
Neckel T., Chini R., 1981, A&AS, 45, 451
Pandey A. K., Ogura K., Sekiguchi K., 2000, PASJ, 52, 847
Pandey A. K., Nilakshi, Ogura K., Sagar R., Tarusawa K., 2001, A&A, 374, 504
Pandey A. K., Sharma S., Ogura K., et al., 2008, MNRAS, 383, 1241
Rachford B. L., Canterna R., 2000, AJ, 119, 1296
Roeser S., Demleitner M., Schilbach E., 2010, AJ, 139, 2440
Ruprecht J., 1966, Bull. Astron. Inst. Czechoslovakia, 17, 33
Russeil D., Adami C., Georgelin Y. M., 2007, A&A, 470, 161
Sagar R., Miakutin V. I., Piskunov A. E., Dluzhnevskaia O. B., 1988, MNRAS, 234, 831
Salpeter E. E., 1955, ApJ, 121, 161
Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
Skrutskie M. F., Cutri R. M., Stiening R., et al., 2006, AJ, 131, 1163
Sneden C., Gehrz R. D., Hackwell J. A., York D. G., Snow T. P., 1978, ApJ, 223, 168
Spitzer L., Hart M. H., 1971, ApJ, 164, 399
Stetson P. B., 1987, PASP, 99, 191
Trumpler R. J., 1930, Lick Obs. Bull., 14, 154
Whittet D. C. B., van Breda I. G., 1980, MNRAS, 192, 467
Whittet D. C. B., Gerakines P. A., Hough J. H., Shenoy S. S., 2001, ApJ, 547, 872
Yadav R. K. S., Bedin L. R., Piotto G., et al., 2008, A&A, 484, 609
Zacharias N., Finch C. T., Girard T. M., et al., 2013, AJ, 145, 44
\[lastpage\]
[^1]: E-mail: [email protected]
[^2]: Image Reduction and Analysis Facility (IRAF) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
[^3]: http://obswww.unige.ch/webda/
[^4]: http://pleiadi.pd.astro.it/
---
abstract: 'Design Theory, a branch of mathematics, was born out of the experimental statistics research of the population geneticist R. A. Fisher and of Indian mathematical statisticians in the 1930s. The field combines elements of combinatorics, finite projective geometries, Latin squares, and a variety of further mathematical structures, brought together in surprising ways. This essay will present these structures and ideas as well as how the field came together, in itself an interesting story.'
author:
- |
A. R. P. RAU\
[*Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803\
(Fax, 1-225-578-6841; Email, [email protected]*]{})
title: 'R. A. Fisher, Design Theory, and the Indian Connection'
---
Introduction
============
What do the following have in common:
$\bullet$ Kirkman’s School Girl Problem: “15 young ladies in a school walk out three abreast for 7 days in succession; it is required to arrange them daily, so that no two shall walk twice abreast” [@ref1].
$\bullet$ The puzzle-game SuDoKu, one of the greatest mathematicians, and judging the effectiveness of fertilizers on potato varieties.
$\bullet$ R. A. Fisher, statistician and population geneticist, a key figure in the synthesis between Darwin and Mendel.
$\bullet$ Branches of mathematics called design theory and coding theory.
$\bullet$ Projective geometry, a subject in which unlike in Euclidean geometry, there is a duality between points and lines such that interchanging them in any theorem does not affect its validity.
$\bullet$ India’s pioneering statistician and early associates in the school he founded.
This essay will present the interesting mathematical structures and ideas in the above items and the human interest thread that weaves through them. Whether arranging numbers from 1 to 9 in a $9 \times 9$ array so that each numeral occurs once and only once in each row and column, arranging schoolgirls in $5 \times 3$ blocks so that no pair is repeated, or arranging plots of potato varieties and the laying of different fertilizers on them so that each variety is subjected to each type of fertilizer to gauge effectiveness, these are all problems of ‘experimental design’ and now a branch of mathematics called ‘design theory’ [@ref1; @ref2], related also to coding theory [@ref3]. These are parts of the wider fields of combinatorics as well as finite projective geometries [@ref4; @ref5; @ref6]. While some of the basics go back to the great mathematician Euler, it is the work of Fisher and of a school of Indian mathematical statisticians that gave birth to Design Theory [@ref7]. In statistics, this is also referred to as ‘Design of Experiments’ or ‘Experimental Designs’.
Design Theory
=============
Kirkman’s School Girl Problem, originally posed by W. S. B. Woolhouse in 1844 [@ref8] and solved by the Rev. Thomas Kirkman, a Lancashire clergyman and amateur mathematician [@ref9], in 1847 in a charmingly named journal [@ref10], is a precursor of what have come to be known as ‘designs’ and more specifically, ‘balanced incomplete block (BIB) designs’ [@ref5; @ref11; @ref12] or ‘Steiner triple systems’ [@ref5; @ref13]. There was also early work by the great mathematician Euler and today, all of this is part of a branch of mathematics called design theory [@ref2].
The idea is to consider two sets, members of one to be allotted to those of the other under certain specified conditions. The first set of $v$ objects or symbols (they may be anything: numbers, potatoes, …), as with the $v=15$ ladies, is to be put into $b$ blocks. Each block contains exactly $k$ distinct symbols, as in $k=3$ ladies abreast, each symbol occurring in exactly $r$ different blocks and every pair of distinct symbols occurring together in exactly $\lambda$ blocks. In the case of the school girls, $r=7$, the number of days, and $\lambda=1$ because no pair of girls should walk together more than once. Kirkman constructed the solution with $b=35$ blocks, the total number of rows of three: 5 for each of the 7 days.
A $(v, b, r, k, \lambda)$ design or BIB is thus one of $v$ objects in $b$ blocks, with each block containing exactly $k$ distinct objects, each object occurring in exactly $r$ different blocks and every pair ($t=2$, or more generally every $t$-tuple) of distinct objects occurring together in exactly $\lambda$ blocks. Block designs with $k=3$ are called triple systems. Those with $\lambda=1$ are called Steiner systems $S(t, k, v)$ and, if $k=3$ as well, Kirkman or Steiner triple systems $S(2, 3, v)$, because the Berlin mathematician Jakob Steiner proposed their existence in 1853, conjecturing that the number $v$ had to be such that it would leave a remainder of 1 or 3 upon division by 6 [@ref14]. This was proved by Reiss [@ref15] six years later, both unaware of Kirkman’s work [@ref16].
The following relationships hold for a BIB: $vr=bk$, $\lambda (v-1)=r(k-1)$. For triple systems with $k=3$, these reduce to $r=\lambda (v-1)/2$, $b=\lambda v(v-1)/6$. Another notation used for BIBs is $t-(v, k, \lambda)$, so that a Steiner triple system is $2-(v, 3, 1)$, the Kirkman problem being a 2-(15, 3, 1) design. An even smaller one is 2-(7, 3, 1) or $S(2, 3, 7)$, which we will encounter in Section 4 in the geometrical context of placing 7 points on 7 lines such that each line has three points on it and each point lies on three lines, with no pair of points on more than one line. The terminology of symbols and blocks is then replaced by the geometrical one of points and lines, respectively. With $(v=b, r=k)$, such a BIB is said to be symmetrical. The result of Steiner and Reiss allows a parametrization of Steiner triple systems in terms of a single integer $n$. One family has $(v=6n+3, b=(3n+1)(2n+1), r=3n+1)$ and a second $(v=6n+1, b=n(6n+1), r=3n)$. With increasing $v$, establishing the exact number of non-equivalent Steiner triple systems (80 for $v=15$ and over two million for $v=19$) and classifying them becomes complicated. For this, and for the long history of establishing that there are exactly two non-equivalent designs for $v=13$, see [@ref17].
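These counting identities, and the mod-6 admissibility condition for Steiner triple systems, are easy to check mechanically; a minimal sketch (the helper function is ours, for illustration):

```python
def is_valid_bib(v, b, r, k, lam):
    """Check the two necessary counting identities for a BIB design."""
    return v * r == b * k and lam * (v - 1) == r * (k - 1)

# Kirkman's schoolgirl design 2-(15, 3, 1) and the symmetric 2-(7, 3, 1)
assert is_valid_bib(15, 35, 7, 3, 1)  # 15*7 = 35*3 and 1*14 = 7*2
assert is_valid_bib(7, 7, 3, 3, 1)    # symmetric: v = b, r = k

# Admissible orders of Steiner triple systems: v ≡ 1 or 3 (mod 6)
admissible = [v for v in range(3, 22) if v % 6 in (1, 3)]
print(admissible)  # [3, 7, 9, 13, 15, 19, 21]
```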
Much of the development of the subject comes from the work of R. A. Fisher, who formulated the principles of statistical designs in 1925 in the context of agricultural statistics, and from Yates, who introduced the use of BIB designs in 1936 [@ref11]. In studying the effects of various fertilizers and soils on growing potatoes and barley, Fisher was conducting field studies which led to the design of statistical experiments. A complete experiment on the effectiveness of $v$ different fertilizers on $b$ types of plants would require $b$ plots, each subdivided into $v$ areas. This could be prohibitively expensive. An ‘incomplete’ one would test every type of plant with $k<v$ different fertilizers such that any two fertilizers would be tested on $\lambda$ different types of plants. ‘Balancing’ the occurrence of pairs of treatments on exactly $\lambda$ of the $b$ blocks of size $k$ means the regular appearance of pairs of fertilizers on the same plant, allowing a complete covariance analysis of the results. This was Fisher’s great insight, along with his focus not on one character at a time but on multivariate analysis. He introduced the ideas of variance and maximum likelihood, established the inequalities named for him (that a proper BIB requires $b \geq v$, $r \geq k$), and rapidly in the 1920s and 1930s established the field with mathematical rigour, writing his 1935 book, “The Design of Experiments” [@ref18]. The terminology introduced by Yates [@ref11] of v(arieties), t(reatments) and r(eplications) provides the symbols still in use today.
The next step in the development of Design Theory as the full-fledged branch of mathematics that it is today can be traced to Fisher’s trip to India in 1938 when he visited his friend P. C. Mahalanobis who had similarly pioneered the use of agricultural statistics in India, establishing a journal, Sankhya, and the Indian Statistical Institute in December 1931. A couple of young assistants in that group, most notably R. C. Bose, with physics and mathematics background, had been following Fisher’s idea of representing an $n$-sample by a point in $n$-dimensional Euclidean space, and were solving many design problems and constructing BIBs. They took up questions Fisher posed on statistical designs for controlled experiments, using their expertise in finite geometries, leading to the study based on Galois fields that forms the modern basis of the subject. We will return to this in Section V.
R. A. Fisher
============
Ronald Aylmer Fisher, born in 1890, was a pioneer in mathematical statistics and made fundamental contributions to genetics, combining Mendelism with biometry. Other famous scientists of the time, such as Bateson, Pearson and de Vries, saw conflicts between Mendel and Darwin, between the conserved, discrete types of the former and the small differences of continuous variation that serve as the template for adaptive change in the latter’s evolutionary theory. Already as an undergraduate in 1911, Fisher set this right by showing how indifferent variations could persist in a population even in a constant homogeneous environment. His 1930 “The Genetical Theory of Natural Selection” was the first synthesis of these two pillars of modern biology [@ref19]. Other famous contributors such as Sewall Wright and Haldane soon followed in the early 1930s.
There are several biographies of Fisher [@ref20; @ref21; @ref22; @ref23], including one by his daughter [@ref24]. The excerpts drawn from them and other compilations that are presented in this section are meant only as the merest sketch, to point readers to more details in these sources. See also the website http://digital.library.adelaide.edu.au/coll/special/fisher/. In Cambridge in 1909, Fisher studied mathematics and physics (statistical mechanics) and also read Karl Pearson’s “Mathematical Contributions to the Theory of Evolution”. After four years (1915-1919) as a school teacher, Fisher joined the Rothamsted experimental station. Rejected for WW I service because of poor eyesight, he had taken to farming as a eugenic way of life. In his field experiments, he developed the ideas of multivariate analysis and maximum likelihood and the block designs mentioned in Section II.
Keeping statistical considerations in the planning and layout of experiments led to the ‘design of experiments’. Throughout his career, Fisher regarded statistical laws as basic and, interestingly, took his cue also from Heisenberg’s contemporaneous work in quantum physics. He is also said to have commented that “geometry had led to humanity’s first great stage of intellectual liberation by discovering the principles of deduction, and that biometry was leading the second stage by discovering the principles of induction” [@ref23].
The problem of design consists of choosing a set of treatments for comparison, specifying the varieties to which they are applied, randomizing the rules for applying the treatments to the varieties, and specifying what is to be measured, the records then being subjected to statistical analysis. Fisher’s playful humour is apparent in some of the examples in his book [@ref18]. One deals with a lady of discernment who claims to be able to tell whether milk or tea was added first! If one were to present her with six cups of tea, three mixed in each way, then, since there are 20 combinations of 3 out of 6 (given by $(6 \times 5 \times 4)/(1 \times 2 \times 3)$), there would be 1 chance in 20 of accidentally guessing the correct set. This 5% is often taken as a standard level of significance; to do better, she should be given 8 cups with 4 of each preparation. Now a pure chance success reduces to 1 in 70 (the number of combinations of 4 out of 8 being $(8 \times 7 \times 6 \times 5)/(1 \times 2 \times 3 \times 4)$). Fisher goes on to discuss how to assess the significance of her discernment were she to get 3 correct and 1 wrong.
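Fisher’s counts are quickly verified; the last line is our illustration of the ‘3 correct and 1 wrong’ case he goes on to analyse:

```python
from math import comb

# 6 cups, 3 with milk first: only one of the C(6,3) divisions is correct
assert comb(6, 3) == 20   # 1 chance in 20, i.e. 5%

# 8 cups, 4 of each preparation: chance success drops to 1 in 70
assert comb(8, 4) == 70

# '3 correct and 1 wrong': choose 3 of the 4 true cups and 1 of the other 4
ways_3_right = comb(4, 3) * comb(4, 1)
print(ways_3_right)  # 16 of the 70 possible divisions
```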
Another amusing example, with a resemblance to Kirkman’s schoolgirl one, proceeds thus in Fisher’s presentation [@ref18]: 16 passengers on a liner discover that they are an ‘exceptionally representative body’: 4 of them are English, 4 are Scots, 4 Irish and 4 Welsh. Further, they fall into four age groups, 4 being 35, 4 others 45, 4 more 55 and 4 being 65, with no two of the same age being of the same nationality. Next, it turns out that there are 4 lawyers, 4 soldiers, 4 doctors and 4 clergymen with, again (the reader will get the picture), no two of the same profession sharing the same age or nationality. It goes on: 4 are bachelors, 4 married, 4 widowed and 4 divorced, with again no two of the same marital status sharing the same profession, age or nationality. Finally, the same holds for their political persuasion, 4 being conservatives, 4 liberals, 4 socialists and 4 fascists. With this somewhat head-reeling setup, Fisher poses that 3 among the fascists are known to be an unmarried English lawyer of 65, a married Scots soldier of 55 and a widowed Irish doctor of 45. It is easy enough to answer the first question of identifying the remaining fascist. Fisher’s second question is to say that it is “further given” that the Irish socialist is 35, the conservative of 45 is a Scot, and the Englishman of 55 is a clergyman, and then he asks what we know of the Welsh lawyer!
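The structure underlying this puzzle is a set of mutually orthogonal Latin squares: with nationality and age as row and column labels, each further attribute (profession, marital status, political persuasion) fills a $4 \times 4$ Latin square orthogonal to the others. A standard construction over the finite field GF(4), sketched here as an illustration (not Fisher’s own presentation), produces three such squares:

```python
# GF(4) multiplication table: elements 0..3 as bit-vectors over GF(2),
# reduced by x^2 = x + 1; addition in GF(4) is bitwise XOR.
MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

# Three mutually orthogonal Latin squares of order 4: L_a[i][j] = a*i + j in GF(4)
squares = [[[MUL[a][i] ^ j for j in range(4)] for i in range(4)]
           for a in (1, 2, 3)]

# Each square is Latin: every symbol appears once per row and per column
for L in squares:
    assert all(sorted(row) == [0, 1, 2, 3] for row in L)
    assert all(sorted(col) == [0, 1, 2, 3] for col in zip(*L))

# Pairwise orthogonality: superposing two squares yields all 16 ordered pairs
for x in range(3):
    for y in range(x + 1, 3):
        pairs = {(squares[x][i][j], squares[y][i][j])
                 for i in range(4) for j in range(4)}
        assert len(pairs) == 16
print("3 mutually orthogonal Latin squares of order 4")
```

Three is the maximum possible number of mutually orthogonal Latin squares of order 4, which is exactly what the five-attribute puzzle exploits.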
Already in his undergraduate years at Cambridge, the subjects of evolution, the implications of Darwin for the human race, the results of Mendel, and Francis Galton’s emphasis on selection continuously increasing the genetic inheritance of man influenced him deeply, in both basic and applied aspects. He formed the Cambridge Eugenics Society, while also working for twenty years with the Eugenics Education Society in London, whose president was Leonard Darwin, the second youngest son of Charles Darwin. With genetics as the mechanism of inheritance and statistics as the correct tool for studying populations, the eugenic possibility of improving the biological inheritance of man was a theme in his thinking. In this he was an idealist, believing that eugenics societies must be involved in scientific research lest “social scientists divert the Society from its proper study of human inheritance to serve a non-eugenic social function” [@ref24]. Against objections, he brought several scientists from his Rothamsted association into the Society.
Today, in the post WW II world, the word eugenics is itself so discredited that it seems astonishing to see some of the references in the design literature, including many of Fisher’s papers, in journals (now defunct) carrying that name [@ref12; @ref25; @ref26; @ref27]. In 1933, Fisher took the Chair of Eugenics at University College, London, which housed the [*Annals of Eugenics*]{}, started in 1925 by Karl Pearson, the previous holder of that Chair. Fisher held that position and headed the Galton laboratory till 1943 when he moved to the Arthur Balfour Chair at Cambridge. He wanted to take the journal that he had fostered with him but University College kept it. An alternative he wanted was the [*Journal of Genetics*]{} but Haldane took that over, also in University College. As a result, Fisher started in 1947 the journal [*Heredity*]{}, now held by the Genetical Society of Great Britain. As can be seen by the references in this essay, many papers on design were published in these journals in the 1930-1950s. It should also be noted that [*Annals of Eugenics*]{} was originally designed to house eugenics and human genetics while the journal [*Biometrika*]{} would have papers in statistical methodology, but under Fisher, the former also became important for papers in statistics.
Fisher had a long association with India, visiting it on many occasions over the decades, including the memorable one mentioned at the end of Section II. These will be taken up in Section V. Fisher spent his last years in Australia, dying in Adelaide in 1962.
Finite Projective geometry, designs, and codes
==============================================
Most people are familiar with Euclidean geometry from school, with its axioms about points and lines and its propositions and proofs about triangles and circles. Two distinct points define a line and two lines either intersect at a point or are parallel. In the latter case, also familiar is the concept of points at infinity, two parallels regarded as meeting at infinity. Every child knows this as a matter of perspective, with parallel rail tracks a canonical example. Projective geometry [@ref28], which removes the distinction between ‘finite’ points and those at infinity, regarding all of them equally, is therefore important for perspective in art and architecture.
Further, a distinguishing characteristic is that points and lines are on an equal footing, with a ‘duality’ between them, unlike in ordinary Euclidean geometry. Thus, that two points define a line is in balance with two lines always meeting at a point, albeit one at infinity. A striking diagram, familiar in projective geometry, makes this clear: the two triangles (abc) and (ABC) ‘in perspective’ with respect to the point P and with respect to the line (123) (Fig. 1). Like vertices of the triangles are connected by rays to P and like sides of the triangles, upon extension, meet on the common line (123). The triangles may lie on a plane or be arbitrarily oriented in space. If the two planes of the triangles are parallel, so that the extensions do not meet, the line (123) and its three points recede to infinity but the basic result remains. Fig. 1 is a partial Steiner system.
Finite geometries may be somewhat less familiar but first a couple of remarks about finite arithmetics, which again most are familiar with from the 12- or 24-hr clock. Technically called modular arithmetic, with a number such as this 12 the modulus, one deals only with the residues left over upon dividing by the modulus so that the only numbers that occur are less than it. The result noted in Section II about symmetric Steiner triple systems existing only for numbers that leave remainder of 1 or 3 upon division by 6 is an example, expressible as $v\equiv 1, 3$ (mod 6).
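As a small illustration of working with such a congruence, the admissible orders are easy to enumerate. The snippet below (Python, written for this essay) lists the $v < 30$ satisfying $v\equiv 1, 3$ (mod 6):

```python
# Orders v for which a Steiner triple system S(2, 3, v) can exist:
# v must leave remainder 1 or 3 upon division by 6, as stated above.
def admissible_triple_system_orders(limit):
    """Return all v < limit with v congruent to 1 or 3 modulo 6."""
    return [v for v in range(1, limit) if v % 6 in (1, 3)]

print(admissible_triple_system_orders(30))
# the list includes the orders 7, 9, 13, 15 discussed in Section II
```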
Turning to finite geometries, instead of the continuous one familiar from school, one deals only with a finite number of points and lines. Thus, the finite Euclidean geometry with standard notation $EG (n, s)$ has $s^n$ points. One of the smallest, $EG (2, 2)$, has thus $2^2=4$ points. Correspondingly, the number of pairs out of four being six, there are 6 lines. Various diagrammatic representations are possible, one being a square with non-intersecting diagonals, but Fig. 2 shows a convenient one with the vertices of an equilateral triangle and its in-centre as the points. Using $(x,y)$ to represent a point, with $x$ and $y$ taking on only two values 0 and 1, the points can be denoted as shown. Some of the lines meet at a point, others do not. Thus, each side of the triangle and the line connecting the in-centre to the opposite vertex do not, and can be regarded as ‘parallel’.
Extending now to finite projective geometries [@ref5; @ref29], $PG (n, s)$, one adds to $EG (n, s)$ points and a line at infinity to restore the point-line symmetry/duality. Thus, with $EG (2, 2)$ in Fig. 2, imagine extending the lines from a vertex to the in-centre to meet the corresponding side of the triangle. With these two lines ‘parallel’, the mid-point of the side where they meet is a point at infinity. Adding these three mid-points makes the total number of points 7. At the same time, the three points at infinity lie on a line, the in-circle, as shown in Fig. 3. There are then both 7 points and 7 lines in this diagram which indeed represents the finite projective geometry $PG (2, 2)$. In general, $PG (n, s)$ has $(s^{n+1} -1)/(s-1)$ points and for $PG (2, s)$, this number is $(s^2+s+1)$ points.
In such a projective geometry, every pair of points lies on a unique line, and every line contains at least 3 points, one of them sometimes a point at infinity. Also, there is a set of 3 points not on a common line, an example being the vertices of the triangle in Fig. 3. $PG (2, 2)$ in Fig. 3 has a further property, that every pair of distinct lines contains a common point. Such an entity is called a ‘projective plane’. Fig. 3 is the smallest possible and is called ‘The Fano Plane’ [@ref1], arising in many varied contexts in basic and applied mathematics. In a projective plane, there exists what is called a ‘quadrilateral’, that is, four points, no three of which lie on a line (the top four points ($e_1, e_4, e_6, e_7)$ in Fig. 3 provide an example). Its dual statement can be used as an alternative, that there exist four lines, no three of which go through the same point (the three sides and in-circle in Fig. 3 an example).
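These defining properties are small enough to verify exhaustively. The sketch below (Python) uses the standard coordinate model of $PG(2, 2)$ over the two-element field, rather than the labelling of Fig. 3: points are the nonzero bit-triples, and the line through two points also contains their XOR.

```python
from itertools import combinations

# Points of PG(2, 2) as the nonzero vectors of GF(2)^3, encoded as
# bitmasks 1..7; the line through p and q also contains p XOR q.
points = range(1, 8)
lines = {frozenset({p, q, p ^ q}) for p, q in combinations(points, 2)}

assert len(lines) == 7                                   # 7 lines
assert all(len(l) == 3 for l in lines)                   # 3 points each
# every pair of points lies on exactly one line
for p, q in combinations(points, 2):
    assert sum(1 for l in lines if p in l and q in l) == 1
# every pair of distinct lines meets in exactly one point: a projective plane
for l1, l2 in combinations(lines, 2):
    assert len(l1 & l2) == 1
print("Fano plane verified: 7 points, 7 lines, a 2-(7, 3, 1) design")
```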
The resemblance of the items in the previous paragraph to the designs of Section II must be evident. Indeed, the Kirkman design $(v=15, b=35, r=7, k=3, \lambda=1)$ is a $PG (3, 2)$ and the symmetric BIB or Steiner triple system $S(2, 3, 7)$ with $(v=b=7, r=k=3, \lambda=1)$ is $PG (2, 2)$. All that it takes to make the correspondence is to identify the symbols or objects of the BIB with the points of projective geometry and, similarly, blocks with lines. In the alternative notation introduced earlier in Section II, these two geometries/designs are, respectively, $2- (15, 3, 1)$ and $2-(7, 3, 1)$. Another projective plane is $PG (2, 3)$ with 13 points and lines, and it is a $2-(13, 4, 1)$ design. It is a symmetric BIB with $(v=b=13, r=k=4, \lambda=1)$ but not a Steiner triple system because $r$ and $k$ are now 4, the number of lines through a given point.
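The same correspondence can be checked mechanically for $PG(2, 3)$. A sketch (Python, homogeneous coordinates over GF(3), written for this essay and not taken from any cited work):

```python
from itertools import product, combinations

def pg2(p):
    """Points and lines of PG(2, p), p prime, via homogeneous coordinates
    over GF(p): representatives whose first nonzero entry is 1."""
    pts = [v for v in product(range(p), repeat=3)
           if any(v) and v[next(i for i, c in enumerate(v) if c)] == 1]
    # by duality, lines are indexed by the same coordinates u;
    # a point x lies on line u exactly when u . x = 0 (mod p)
    lines = [frozenset(x for x in pts
                       if sum(a * b for a, b in zip(u, x)) % p == 0)
             for u in pts]
    return pts, lines

pts, lines = pg2(3)
assert len(pts) == len(lines) == 13          # symmetric: v = b = 13
assert all(len(l) == 4 for l in lines)       # block size k = p + 1 = 4
for x, y in combinations(pts, 2):            # every pair once: lambda = 1
    assert sum(1 for l in lines if x in l and y in l) == 1
```

Running `pg2(2)` instead reproduces the Fano plane parameters $(v=b=7, k=3)$.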
Given these intimate connections between designs and finite projective geometries, it is not surprising that Fisher and other pioneers to be considered in the next section, made fundamental contributions in both areas. Coding theory is another closely related subject, error correcting codes being important both in classical cryptography [@ref3] and today in quantum cryptography [@ref30; @ref31]. See the Appendix for further remarks and connections to other areas of mathematics.
Connection to and contribution by Indian statisticians
======================================================
In India, at that time in the British Raj, P. C. (Prasanta Chandra) Mahalanobis started the serious study of agricultural statistics. Trained as a physicist, he pioneered statistics research in India, establishing the Indian Statistical Institute (ISI) and a journal Sankhya (in Sanskrit: number or determinate knowledge) in 1931, both of them respected institutions to this day. He had become a friend of Fisher and had visited him at Rothamsted in 1926-1927. Indeed, Fisher seems to have had a behind-the-scenes influence on the Government of India and the Indian Council for Agricultural Research (ICAR) in supporting Mahalanobis from 1927, and on Viceroy Linlithgow’s support in establishing ISI. Sankhya was run out of private funds. A physicist S. S. Bose was hired as an assistant in 1929, working on problems of design, and two mathematicians, R. C. Bose and S. N. Roy, in 1931 [@ref24].
Mahalanobis felt that statistics was not supported by scientific and governmental authorities in India. He was brushed off by the Indian Science Congress when he asked for a section on statistics; the suggestion was met with a scoff: if statistics can be admitted, then why not astrology! Therefore, he arranged for a special Statistical Conference in Calcutta to follow the Indian Science Congress meeting in Bombay in 1938, with Fisher, who was a delegate to that Congress, as president at the Calcutta meeting. Fisher came to India for six weeks, choosing to travel by ship although passage by air was offered, mainly because of the company: the physicist Lord Rutherford, Carl Jung and two other members of the Royal Society. They sailed in November 1937 for India [@ref24].
S. S. (Subendhu Sekhar) Bose went to Bombay to accompany him by train to Calcutta after a tour through central India. Fisher delivered the Presidential address, the Governor of Bengal being present. He also intervened with the Governor and the Viceroy because Mahalanobis’s sample survey of the jute crop in Bengal was under threat of being shut down by the minister, on the grounds that a small sample could not possibly have any relevance to a crop grown on millions of acres [@ref24]! This survey was the basis later of the National Sample Survey of India for economic and agricultural statistics, to this day crucial for a country of a billion people.
Apart from unappreciative governments and ministers, there were also disagreements between fellow statisticians. A referee of this essay has pointed to the work by V. G. Panse on the cotton crop in Madhya Pradesh and P. V. Sukhatme (who had also worked with Fisher in England) on the wheat crop in Uttar Pradesh at about the same time as Mahalanobis’s on jute. They did not favour the sampling approach but instead advocated using field to field enumeration of crop yields produced by the local revenue agencies. But they also insisted on the random selection of sample plots from the revenue agency’s data. Their method of ‘objective sampling’ was extended by ICAR later to cover wheat and rice as well as other foodgrains over most of India.
In Calcutta, Fisher discussed questions of design with the two Boses and Roy, including the large body of anthropological data available on the build and appearance of various races on the subcontinent. Mahalanobis had introduced a measure of ‘distance’ between the races, and these discussions led later to generalized variances for distributions. Unfortunately, S. S. Bose died young the next year and the subsequent development was carried out by the others, notably R. C. Bose [@ref24].
Raj Chandra Bose (see an autobiographical chapter in [@ref22]), born in 1901, studied mathematics at Hindu College, Delhi, moving later to Calcutta for a second M. A. and becoming a lecturer in 1930. Mahalanobis hired him in a half-time position at ISI in 1932. It is said that Bose was told one morning that a ‘sahab’ (in Hindi: master) in a car had come to see him. This turned out to be Mahalanobis who had seen his geometrical work and recruited him into statistics. Mahalanobis and ISI used to move to the hill station of Darjeeling in the summer months and in summer 1933, Bose was given volumes of Biometrika, a typed list of 50 papers, and Fisher’s book on statistical methods as his statistical education. S. N. (Samarendra Nath) Roy [@ref32] was hired a few months later.
They started working on Fisher’s idea of using $n$-dimensional Euclidean space to represent $n$-samples. In 1936, F. W. Levi, who had fled from the Nazis, became head of the mathematics department at Calcutta, and they learnt from him finite fields and finite geometries. (Friedrich Wilhelm Levi later spent four years at the Tata Institute of Fundamental Research in Bombay (now Mumbai), returning in 1952 to Berlin and Freiburg where he died in 1966 [@ref17; @ref33].) Thus primed, Fisher’s visit in 1938 and his questions on statistical designs for controlled experiments led them to use finite geometries for that. Fisher recognized the birth of a mathematical field, encouraged Bose to write up the work which was published [@ref12] in the [*Annals of Eugenics*]{} which he edited.
In 1941, Calcutta University started a post-graduate department in statistics with Mahalanobis as head and Bose and Roy the first lecturers. Among the first batch of students was C. R. (Calyampudi Radhakrishna) Rao, another eminent Indian statistician and later himself director of the ISI. Fisher was also in India during the war (when Calcutta was in blackout) and again after, celebrating his 55th birthday in Calcutta [@ref24]. He returned to London for a meeting of the Royal Society where he spearheaded the election of Mahalanobis as a Fellow of the Society. Among his subsequent visits was one in 1957 for the 25th anniversary of the ISI.
When Mahalanobis stepped down as head of the Calcutta department, Bose took the position in 1945. Later, wanting a career in research and teaching, he turned down positions with administrative duties and became a professor at the University of North Carolina in 1949. S. N. Roy joined him there the next year. Seven other Indians did their Ph.D. with Bose in that university, including S. S. Shrikhande. During Shrikhande’s later return as a visiting professor, he and Bose, together with E. T. Parker, disproved [@ref34] a 175-year-old conjecture of Euler on orthogonal Latin Squares [@ref35].
Latin Squares, also related to the topics in Section II, are $s \times s$ arrangements of $s$ distinct symbols such that each occurs once in each row and column. $s$ is called the order of the square. The currently popular pastime of SuDoKu, which arranges numbers from 1 to 9 in a $9 \times 9$ square is an example of order 9. Two such squares of the same order are said to be orthogonal if, upon superposing, each symbol of one occurs exactly once with each symbol of the other. Thus, in order 2, where the only two squares possible are $\left( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right)$ and $ \left( \begin{array}{cc}
1 & 0 \\
0 & 1
\end{array} \right)$, they are clearly not orthogonal. Upon superposition, a 0 occurs only with 1 and vice versa, never the 0-0 and 1-1 combinations. On the other hand, it is easy to construct an orthogonal pair in order 3: $\left( \begin{array}{ccc}
0 & 1 & 2 \\
1 & 2 & 0 \\
2 & 0 & 1
\end{array} \right)$ and $\left( \begin{array}{ccc}
0 & 2 & 1 \\
1 & 0 & 2 \\
2 & 1 & 0
\end{array} \right)$.
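Both checks, Latin-ness and orthogonality, are straightforward to automate. A sketch (Python, written for this essay) applied to the order-2 and order-3 examples above:

```python
from itertools import product

def is_latin(sq):
    """Each symbol occurs exactly once in every row and every column."""
    n = len(sq)
    syms = set(range(n))
    return (all(set(row) == syms for row in sq) and
            all({sq[i][j] for i in range(n)} == syms for j in range(n)))

def are_orthogonal(a, b):
    """Superposing must yield every ordered pair of symbols exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n

A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]   # the orthogonal order-3 pair above
B = [[0, 2, 1], [1, 0, 2], [2, 1, 0]]
assert is_latin(A) and is_latin(B) and are_orthogonal(A, B)

C = [[0, 1], [1, 0]]                    # the order-2 pair is not orthogonal
D = [[1, 0], [0, 1]]
assert not are_orthogonal(C, D)
```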
Euler conjectured that there is no pair of orthogonal Latin squares of order 6 or of order twice an odd number. 175 years later, Bose and co-workers showed that only the statement about 6 is correct but the rest of Euler’s conjecture is not [@ref34]. Indeed, orthogonal Latin squares exist for all orders except 1, 2 and 6! The discovery led to an interview with the science editor of The New York Times and a front page story on Bose and the result. On the morning after, the hotel desk clerk recognized Bose from his photo and said, “You must have done something. The front page of The New York Times cannot be bought for a million dollars” [@ref22]. From 1971 to 1980, Bose was a professor at Colorado State University, then moved to emeritus status but remained active till his death in 1987. He had been elected to the U. S. National Academy of Sciences in 1976.
The connection of orthogonal Latin squares to the experimental design that Fisher was interested in is clear. A Latin square can be formed for any symbols, not necessarily numbers as in the canonical examples and in SuDoKu. Thus, consider one Latin square of potato varieties, another of fertilizer types. If they are orthogonal, every potato variety is tested with every type of fertilizer. Indeed, the existence of orthogonal Latin squares, or that of Hadamard matrices (see Appendix), is in correspondence with the existence of BIB designs. The number of Latin squares increases rapidly with the order (576 in order 4 and 161,280 in order 5). Fisher made extensive use of Latin squares in randomizing the application of treatments to varieties and produced detailed tables with Yates [@ref36] for this purpose. It is also interesting to note the discussion of orthogonal Latin squares of order 3 (such as in the example given above) in Fisher’s book [@ref18] for studying the effects of nitrogen, phosphorus and potassium (the three numbers in that order on garden fertilizer bags today!) on rubber plants. Given rubber’s role in war, rubber plantations in Ceylon (now Sri Lanka) and Malaya were key to the British and allied efforts during the world wars, and here again we see Fisher’s very practical bent towards applied research.
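The counts quoted above can be reproduced by brute force for small orders. A sketch (Python, exhaustive row-by-row search, feasible only up to about order 4):

```python
from itertools import permutations

def count_latin_squares(n):
    """Brute-force count of all Latin squares of order n: build them row
    by row from permutations, with no symbol repeated in any column."""
    rows = list(permutations(range(n)))
    count = 0

    def extend(square):
        nonlocal count
        if len(square) == n:
            count += 1
            return
        for r in rows:
            if all(r[j] != prev[j] for prev in square for j in range(n)):
                extend(square + [r])

    extend([])
    return count

assert count_latin_squares(3) == 12
assert count_latin_squares(4) == 576   # the count quoted in the text
```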
To conclude this section on the pioneering contribution of Indian mathematicians to Design Theory: their continued contributions throughout the 1930s-1950s are evinced by the names already mentioned (the two Boses, Roy, Shrikhande, Rao and Savur), as well as by those of D. Ray-Chaudhuri (another student of R. C. Bose, with whom he started work in 1955 on coding theory), K. R. Nair (with whom Bose introduced partially balanced incomplete block designs), K. Kishen (who worked with Bose on projective geometries and so-called ‘factorial’ designs), Q. M. Hussain, K. N. Bhattacharya and S. Chowla.
Appendix: Higher Arithmetics and other mathematical connections
===============================================================
In coding theory, the so-called ‘packing problem’ asks for $m_t(n, s)$, the maximum length of a block in a linear code transmitting $s$ different symbols, where $n$ is the number of redundant parity checks included in each block and $t$, the measure of error-correcting capability, is called the ‘Hamming distance’. Fisher gave the result $m_2 (n, s)=(s^n -1)/(s-1)$, which we recognize from Section IV as the size of $PG( n-1, s)$. The Indian mathematician Bose, discussed in Section V, gave results for $m_3(n, 2)$, $m_3(3, s)$, and $m_3(4, s)$. See section 5, chapter XIII, volume 2 of [@ref1] for the use of The Fano Plane (Fig. 3) for Hamming codes.
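The smallest case makes these formulas concrete. Below is a minimal sketch (Python, the standard textbook construction, not code from any cited reference) of the [7,4] Hamming code: $n=3$ parity checks give block length $m_2(3,2)=(2^3-1)/(2-1)=7$, and the weight-3 codewords are supported on lines of the Fano plane in its coordinate model.

```python
# Parity-check matrix H whose columns are the binary representations of
# 1..7, so the syndrome of a single-error word is the error position.
H = [[(c >> b) & 1 for c in range(1, 8)] for b in range(3)]

def syndrome(word):
    """Return the position (1..7) of a single flipped bit, 0 if none."""
    return sum(2 ** b for b in range(3)
               if sum(H[b][i] * word[i] for i in range(7)) % 2)

codeword = [1, 1, 1, 0, 0, 0, 0]   # supported on a Fano line: 1 ^ 2 ^ 3 = 0
assert syndrome(codeword) == 0     # hence a valid codeword

received = codeword.copy()
received[4] ^= 1                   # the channel flips bit 5
pos = syndrome(received)
assert pos == 5                    # the syndrome locates the error
received[pos - 1] ^= 1
assert received == codeword        # corrected
```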
Yet another subject with close links to designs and geometries is that of ‘Hadamard matrices’ [@ref5; @ref37]. Such a matrix, denoted by $H_n$, is a $n \times n$ matrix with entries $\pm 1$. $H_2$, the simplest, is $\left( \begin{array}{cc}
1 & 1 \\
1 & -1
\end{array} \right)$. Such matrices exist for $n=1$ and $n=2$, and otherwise can exist only for $n$ divisible by 4 (they are conjectured to exist for every such $n$). The existence of a $H_n$ implies the existence of a symmetric BIB with $(v=b=n-1, r=k=(n/2)-1, \lambda=(n/4)-1)$. $H_8$ is, therefore, associated with the $2-(7, 3, 1)$ design or the $PG (2, 2)$ Fano Plane.
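This correspondence is easy to check for $H_8$. A sketch (Python, using Sylvester's doubling construction, which yields a normalized Hadamard matrix with all-ones first row and column):

```python
from itertools import product, combinations

def sylvester(k):
    """Sylvester's doubling construction of the Hadamard matrix H_{2^k}."""
    H = [[1]]
    for _ in range(k):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

H8 = sylvester(3)
n = 8
for i, j in product(range(n), repeat=2):      # orthogonality: H H^T = n I
    dot = sum(H8[i][c] * H8[j][c] for c in range(n))
    assert dot == (n if i == j else 0)

# Delete the all-ones first row and column; the +1 positions of each
# remaining row are the blocks of a symmetric 2-(7, 3, 1) design.
blocks = [frozenset(c for c in range(1, n) if H8[r][c] == 1)
          for r in range(1, n)]
assert all(len(b) == 3 for b in blocks)                   # k = n/2 - 1 = 3
for x, y in combinations(range(1, n), 2):
    assert sum(1 for b in blocks if x in b and y in b) == 1   # lambda = 1
```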
Finally, as a further connection between design theory and other branches of mathematics, The Fano Plane also describes the ‘fourth’ arithmetic. Our first acquaintance in early school is with real numbers which may be regarded as one-dimensional arithmetic (‘the real line’). In high school algebra, we encounter complex numbers, $(a+ib)$, two-dimensional numbers (‘the complex plane’) built on reals $a$ and $b$ and the imaginary unit $i$, the square root of (-1). All the usual operations of addition, subtraction, multiplication, and division can be carried out in both cases.
Extending further, it is well known that there is no consistent counterpart of ‘three-dimensional numbers’, the next with all these operations being in four dimensions. Invented by Hamilton [@ref38] and called ‘quaternions’, these numbers $(a+ib+jc+kd)$, built on reals $(a, b, c, d)$ and three square roots of (-1) called $(i, j, k; \,\,i^2=j^2=k^2=-1)$ provide the ‘third’ arithmetic (more technically, a ‘division algebra’ [@ref39]) upon defining the multiplication rules between these three objects. This rule is that the product of any two gives the third with a $\pm 1$ sign, depending on whether one cycles through them from left to right and then looping backwards to close the cycle, or from right to left. Thus, $(ij=k, jk=i, ki=j)$ and $(ji=-k, kj=-i, ik=-j)$. While all the four operations referred to above can be carried out with quaternions, clearly from this rule it follows that the order in which two quaternions are multiplied (or divided) matters, the multiplication not being ‘commutative’ as in the case of reals and complex numbers.
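The cyclic rule is compact enough to encode directly. A sketch (Python, written for this essay, covering products of basis quaternions only):

```python
# Quaternion product on the basis (1, i, j, k), encoded as
# (sign, symbol) pairs; '1' denotes the real unit.
TRIPLES = [('i', 'j', 'k'), ('j', 'k', 'i'), ('k', 'i', 'j')]

def bmul(a, b):
    """Product of two basis symbols, returned as (sign, symbol)."""
    if a == '1': return (1, b)
    if b == '1': return (1, a)
    if a == b:   return (-1, '1')            # i^2 = j^2 = k^2 = -1
    for x, y, z in TRIPLES:
        if (a, b) == (x, y): return (1, z)   # cyclic order: plus sign
        if (a, b) == (y, x): return (-1, z)  # reversed order: minus sign
    raise ValueError((a, b))

assert bmul('i', 'j') == (1, 'k')    # ij = k
assert bmul('j', 'i') == (-1, 'k')   # ji = -k: not commutative
assert bmul('j', 'k') == (1, 'i')
assert bmul('k', 'i') == (1, 'j')
```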
Quaternionic multiplication is familiar in physics, especially in quantum physics where rotation and angular motion display these anti-commutative aspects. Although less familiar, the fourth and last consistent arithmetic is that of ‘octonions’, built similarly on seven independent square roots of (-1) [@ref40; @ref41; @ref42]. These eight-dimensional numbers involve the seven objects $e_i$ in Fig. 3, that figure providing also the multiplication rule between them. Each line has three of them and they have the anti-commutative multiplication as stated above, the product of two giving the third, with a plus sign if along the arrow and a minus sign if against. Not only is octonionic multiplication not commutative but it is not ‘associative’ as well which means in multiplying three of them, the way they are grouped in pairs to carry out the multiplication matters. This property, familiar from reals, that $a(bc)$ and $(ab)c$ are the same, holds also for complex numbers and quaternions but fails for octonions. Not surprisingly, there is no consistent arithmetic with multiplication and division possible beyond them, and these are the only four arithmetics. (Technically, the four division algebras are distinguished by what is called ‘Hurwitz’s’ theorem, that the ’norm’ of a product factorizes as the product of the norms [@ref39; @ref40].)
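The octonion multiplication table, and the failure of associativity, can likewise be checked mechanically. The sketch below (Python) uses a common cyclic convention for the seven oriented triples, $(e_i, e_{i+1}, e_{i+3})$ with indices mod 7; this need not match the arrow orientations of Fig. 3, but any consistent orientation exhibits the same phenomenon.

```python
# Oriented Fano triples under the cyclic convention (i, i+1, i+3) mod 7.
TRIPLES = [(i + 1, (i + 1) % 7 + 1, (i + 3) % 7 + 1) for i in range(7)]

def emul(a, b):
    """Product e_a e_b as (sign, index); index 0 is the real unit."""
    if a == 0: return (1, b)
    if b == 0: return (1, a)
    if a == b: return (-1, 0)                              # e_i^2 = -1
    for t in TRIPLES:
        for x, y, z in (t, t[1:] + t[:1], t[2:] + t[:2]):  # cyclic shifts
            if (a, b) == (x, y): return (1, z)
            if (a, b) == (y, x): return (-1, z)
    raise ValueError((a, b))

def assoc(a, b, c):
    """Return ((ab)c, a(bc)) as (sign, index) pairs, for comparison."""
    s1, ab = emul(a, b); s2, left = emul(ab, c)
    s3, bc = emul(b, c); s4, right = emul(a, bc)
    return (s1 * s2, left), (s3 * s4, right)

# within one quaternionic triple, multiplication is still associative
assert assoc(1, 2, 4)[0] == assoc(1, 2, 4)[1]
# across triples it is not: (e1 e2) e3 = -e6 while e1 (e2 e3) = +e6
left, right = assoc(1, 2, 3)
assert left == (-1, 6) and right == (1, 6)
```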
Recently, The Fano Plane of Fig. 3 has also occurred in systems of quantum spins or what are called qubits in quantum computation and quantum information [@ref43; @ref44; @ref45; @ref46]. This continues the unexpected connections between various branches of mathematics and the sciences. As another human connection, The Fano Plane is named for a famous Italian geometer Gino Fano. His son, Ugo Fano, a distinguished physicist, was the doctoral father of the author of this essay.
Beth T, Jungnickel D and Lenz H 1985 [*Design Theory*]{} (Zürich: Bibl. Inst.) and 1993 [*Encyclopedia of Mathematics*]{} (Cambridge: Cambridge University Press) vol 69
Lenz H 1991 Half a century of Design Theory [*Mitteilungen Math. Gesellschaft Hamburg*]{} [**12**]{} 579-593
Jungnickel D 1990 Latin squares, their geometries and their groups [*Coding Theory and Design Theory*]{} IMA Volumes in Math. and its Appl. (Berlin: Springer)
Hall M Jr 1967 [*Combinatorial Theory*]{} (Waltham, MA: Blaisdell Press)
Rao Raghava D 1971 [*Constructions and Combinatorial Problems in Design of Experiments*]{} (New York: Wiley)
Bose R C and Manvel B 1984 [*Introduction to Combinatorial Theory*]{} (New York: Wiley)
Gropp H 1992 The birth of a mathematical theory in British India [*Colloq. Math. Soc. Janos Bolyai*]{} [**60**]{} 315-327
Woolhouse W S B 1844 Prize Question [*Lady’s and Gentleman’s Diary*]{}
Biggs N L 1981 T. P. Kirkman, mathematician [*Bull. London Math. Soc.*]{} [**13**]{} 97-120
Kirkman T P 1850 Query VI [*Lady’s and Gentleman’s Diary*]{} [**147**]{} 48 and Note on an unanswered prize question [*Cambridge and Dublin Math. Journal*]{} [**5**]{} 255-262
Yates F 1936 Incomplete randomized blocks [*Ann. Eugenics*]{} [**7**]{} 121-140
Bose R C 1939 On the construction of balanced incomplete block designs [*Ann. Eugenics*]{} [**9**]{} 353-399
Witt E 1938 Über Steinerische Systeme [*Abh. Hamburg*]{} [**12**]{} 265-275
Steiner J 1853 Combinatorische Aufgabe [*J. Reine Angew. Math.*]{} [**45**]{} 181-182
Reiss M 1859 Über eine Steinerische combinatorische Aufgabe [*J. Reine Angew. Math.*]{} [**56**]{} 326-344
Kirkman T P 1847 On a problem in combinations [*Cambridge and Dublin Math. Journal*]{} [**2**]{} 191-204
Gropp H 1991 The history of Steiner systems S(2,3,13) [*Mitteilungen Math. Gesellschaft Hamburg*]{} [**12**]{} 849-861
Fisher R A 1935 [*The design of experiments*]{} (Edinburgh: Oliver and Boyd)
Fisher R A 1930 [*The genetical theory of natural selection*]{} (Oxford: Oxford University Press)
Fienberg S E and Hinkley D V 1980 [*R.A. Fisher: An Appreciation*]{} Lecture Notes in Statistics 1 (New York: Springer)
Bennett J H 1983 [*Natural selection, heredity, and eugenics*]{} (Oxford: Clarendon Press)
Gani J 1982 [*The Making of Statisticians*]{} (Berlin: Springer-Verlag)
Tankard J W Jr 1984 [*The Statistical Pioneers*]{} (Cambridge A: Schenkman Publishing)
Box Joan Fisher 1978 [*R. A. Fisher: The Life of a Scientist*]{} (New York: Wiley)
Savur S R 1939 A note on the arrangement of incomplete blocks, when $k=3$ and $\lambda=1$ [*Ann. Eugenics*]{} [**9**]{} 45-49
Fisher R A 1940 An examination of the different possible solutions of a problem in incomplete blocks [*Ann. Eugenics*]{} [**10**]{} 52-75
Fisher R A 1941/2 New cyclic solutions to problems in incomplete blocks [*Ann. Eugenics*]{} [**11**]{} 290-299
Hirschfeld J W P 1979 [*Projective geometries over finite fields*]{} (Oxford: Oxford University Press)
Hughes D R and Piper F C 1985 [*Design Theory*]{} (Cambridge: Cambridge University Press)
Bennett C H and Brassard 1984 [*Proceedings of the IEEE Conference on Computers, Systems and Signal Processing, Bangalore, India*]{} (New York: IEEE) 175-179
Ekert A 1991 Quantum cryptography based on Bell’s theorem [*Phys. Rev. Lett.*]{} [**67**]{} 661-663
http://en.wikipedia.org/wiki/S\_N\_Roy
Pinl M 1971/2 Kollegen in einer dunklen Zeit, III Teil [*Jahresbericht der Deutschen Mathematiker-Vereinigung*]{} [**73**]{} 153-208
Bose R C, Parker E T and Shrikhande S 1960 On orthogonal Latin squares [*Can. J. Math.*]{} [**12**]{} 189-203
Biggs N L 1985 [*Discrete Mathematics*]{} (Oxford: Clarendon Press)
Fisher R A and Yates F 1949 [*Statistical tables for biological, agricultural and medical research*]{} (London: Oliver and Boyd) 3rd ed
Hadamard J 1893 Resolution d’une question relative aux determinants [*Bull. Sci. Math.*]{} [**2**]{} 240-246
Hankins T L 1980 [*Sir William Rowan Hamilton*]{} (Baltimore: Johns Hopkins University Press)
Dickson L E 1919 On Quaternions and Their Generalization and the History of the Eight Square Theorem [*Ann. Math.*]{} [**20**]{} 155-171
Dixon G M 1994 [*Division Algebras: Octonions, Quaternions, Complex Numbers and the Algebraic Design of Physics*]{} Mathematics and its Applications, Vol. 290 (Dordrecht: Kluwer Press)
Coxeter H S M 1946 Integral Cayley Numbers [*Duke Math. Journal*]{} [**13**]{} 561-578
Baez J C 2001 The Octonions [*Bull. New Ser., Am. Math. Soc.*]{} [**39**]{} 145-205 and math.ucr.edu/home/baez/octonions
Rau A R P 2009 Mapping two-qubit operators onto projective geometries [*Phys. Rev. A*]{} [**79**]{} 042323 (1-6)
Planat M and Saniga M 2008 On the Pauli graphs on $N$-qudits [*Quantum Inf. Comput.*]{} [**8**]{} 127-146
Levay P, Saniga M and Vrana P 2008 Three-Qubit Operators, the Split Cayley Hexagon of Order Two and Black Holes [*Phys. Rev. D*]{} [**78**]{} 124002 (1-22)
Rau A R P 2009 Algebraic characterization of $X$-states in quantum information arXiv:0906.4716 and [*J. Phys. A: Math. Gen.*]{} [**42**]{}, 412002 (1-7)
---
abstract: 'The non-Fourier heat conduction phenomenon at room temperature is analyzed from various aspects. The first one concerns the experimental side: in what form non-Fourier conduction occurs and how we treated it. It is demonstrated that the Guyer-Krumhansl equation can be the next appropriate extension of Fourier’s law for room-temperature phenomena in the modeling of heterogeneous materials. The second approach provides an interpretation of generalized heat conduction equations using a simple thermomechanical background. Here, Fourier heat conduction is coupled to elasticity via thermal expansion, resulting in a particular generalized heat equation for the temperature field. Both of the aforementioned approaches show the size dependence of non-Fourier heat conduction. Finally, a third approach, called pseudo-temperature modeling, is presented. It is shown that a non-Fourier temperature history can be produced by mixing different solutions of Fourier’s law. That kind of explanation points toward the underlying heat conduction mechanisms behind non-Fourier phenomena.'
address: |
$^{1}$ Department of Energy Engineering, Faculty of Mechanical Engineering, BME, Budapest, Hungary\
$^{2}$ Department of Theoretical Physics, Wigner Research Centre for Physics, Institute for Particle and Nuclear Physics, Budapest, Hungary\
$^3$ Montavid Thermodynamic Research Group
author:
- 'Tamás Fülöp $^{1,3}$, Róbert Kovács $^{1,2,3}$, Ádám Lovas $^{1}$, Ágnes Rieth $^{1}$, Tamás Fodor $^{1}$, Mátyás Szücs $^{1,3}$, Péter Ván $^{1,2,3}$ and Gyula Gróf $^{1}$'
title: 'Emergence of non-Fourier hierarchies'
---
Introduction
============
Fourier’s law [@Fou822] $$\begin{aligned}
\mathbf q = - k \nablar T\end{aligned}$$ is one of the best-known and most widely applied elementary physical laws in engineering practice. Here, $\mathbf q$ is the heat flux vector, $T$ is the absolute temperature and $k$ is the thermal conductivity. However, like all constitutive equations, it has its limits of validity. Phenomena that fall outside these limits, called non-Fourier heat conduction, appear in many different forms. Some of them occur at low temperature, like the so-called second sound and ballistic (thermal expansion induced) propagation [@Tisza38; @JosPre89; @JosPre90a; @Chen01; @VanFul12; @KovVan15]. These phenomena have been experimentally measured several times [@Acketal66; @JacWal71; @Pesh44; @McN74t] and many generalized heat equations exist to simulate them [@DreStr93a; @MulRug98; @FriCim95; @KovVan16; @KovVan18; @BarSte05a; @HerBec00]. The success of the low-temperature experiments motivated the extension of this research field to seek the deviation at room temperature as well. One of the most celebrated results is that of Mitra et al. [@MitEta95], where the measured temperature history was very similar to wave-like propagation. However, these results have never been reproduced, and they undoubtedly demand further investigation.
In most room-temperature measurements, attempts were made to prove the existence of Maxwell-Cattaneo-Vernotte (MCV) type behavior [@Cattaneo58; @Vernotte58]. It is this MCV equation that is used to model the aforementioned second sound, the dissipative wave-propagation form of heat [@JosPre89; @Tisza47; @Lan47]. The validity of the MCV equation for room-temperature behavior has not yet been justified, despite the numerous experiments. It is important to note that many other extensions of the Fourier equation exist beyond the MCV one, such as the Guyer-Krumhansl (GK) equation [@GuyKru66a1; @GuyKru66a2; @Van01a; @Zhu16a; @Zhu16b], the dual phase lag model [@Tzou96], and their modifications, too [@KovVan15; @SellEtal16; @RogEtal17]. Some of these possess a stronger physical background than others [@FabEtal16; @Ruk17; @KovVan18dpl].
The simplest extension of the MCV equation is the GK model, which reads: $$\begin{aligned}
\tau \dot {\mathbf q }+ \mathbf q + k \nablar T - \kappa^2 \Lapl \mathbf q =0, \label{GK}\end{aligned}$$ where the coefficient $\tau$ is called the relaxation time, $\kappa^2$ is regarded as a dissipation parameter, and the dot denotes the time derivative. This GK-type constitutive equation contains the MCV type on setting $\kappa^2=0$ and the Fourier equation on taking $\tau=\kappa^2=0$. This feature of the GK equation allows it to model both wave-like and over-diffusive temperature histories. This becomes more apparent when one applies the balance equation of internal energy in order to eliminate $\mathbf q$: $$\begin{aligned}
\rho c \dot T + \nablar \cdot \mathbf q = 0, \label{enbal}\end{aligned}$$ with mass density $\rho$ and specific heat $c$, and with the volumetric heat source neglected, one obtains $$\begin{aligned}
\tau \ddot T + \dot T = a \Lapl T + \kappa^2 \Lapl \dot T, \label{GKT}\end{aligned}$$ with thermal diffusivity $a =k / (\rho c)$. One can realize that equation (\[GKT\]) contains the Fourier heat equation $$\begin{aligned}
\dot T = a \Lapl T
\label{Fouriereq}\end{aligned}$$ as well as its time derivative, with different coefficients. It becomes more visible after rearranging eq. (\[GKT\]): $$\begin{aligned}
\tau \left (\dot T - \frac{\kappa^2}{\tau} \Lapl T \right )^{.} + \dot T - a \Lapl T = 0.
\label{HierGKT}\end{aligned}$$ When the so-called [@Botetal16; @Vanetal17] Fourier resonance condition $\kappa^2/\tau = a$ holds, the solutions of the Fourier equation (\[Fouriereq\]) are covered by the solutions of (\[GKT\]). Meanwhile, when $\kappa^2<a \tau$, the wave-like behavior is recovered; this domain is called the under-damped region. In the opposite case ($\kappa^2>a \tau$), there is no visible wave propagation, and this is called the over-diffusive (or over-damped) region. We have measured the corresponding over-diffusive effect several times in various materials such as metal foams and rocks, and in a capacitor, too [@Botetal16; @Vanetal17]. Furthermore, a similar temperature history has been observed in a biological material [@KovVan18dpl].
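The resonance condition can be checked directly on a single Fourier mode: substituting $T=\mathrm{e}^{-a\mu^2 t}\sin(\mu x)$, a solution of (\[Fouriereq\]), into (\[GKT\]) leaves the residual $a\mu^4(a\tau-\kappa^2)T$, which vanishes exactly when $\kappa^2/\tau=a$. A small numerical sketch (Python; the values of $a$, $\tau$ and $\mu$ are illustrative assumptions, not the fitted coefficients of the experiments below):

```python
import math

def gk_residual(a, tau, kappa2, mu=2 * math.pi, t=0.1, x=0.3):
    """Residual of tau*T_tt + T_t - a*T_xx - kappa2*T_txx evaluated on
    the Fourier-mode solution T = exp(-a*mu^2*t) * sin(mu*x)."""
    T = math.exp(-a * mu**2 * t) * math.sin(mu * x)
    T_t = -a * mu**2 * T            # time derivative of the mode
    T_tt = (a * mu**2)**2 * T
    T_xx = -mu**2 * T               # one-dimensional Laplacian
    T_txx = a * mu**4 * T
    return tau * T_tt + T_t - a * T_xx - kappa2 * T_txx

a, tau = 1.0, 0.02                  # illustrative, non-dimensional values
# at resonance (kappa^2 = a*tau) the Fourier mode also solves the GK equation
assert abs(gk_residual(a, tau, kappa2=a * tau)) < 1e-12
# in the over-diffusive region (kappa^2 > a*tau) it does not
assert abs(gk_residual(a, tau, kappa2=5 * a * tau)) > 1e-3
```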
In this paper, further aspects of over-diffusive propagation are discussed. In the following sections the size dependence of the observed over-damped phenomenon is discussed both experimentally and theoretically. Moreover, the approach of pseudo-temperature is presented in order to provide one concrete possible interpretation for non-Fourier heat conduction.
Size dependence
===============
Our measurements reported here are performed on basalt rock samples with three different thicknesses, $1.86$, $2.75$ and $3.84$ mm, respectively. We applied the same heat pulse apparatus as described in [@Botetal16; @Vanetal17], schematically depicted in Fig. \[expsetup\] below.
![Setup of our heat pulse experiment [@Vanetal17]. []{data-label="expsetup"}](exp1.PNG){width="8cm"}
In each case, the rear-side temperature history was measured and numerically evaluated by solving the GK equation. The recorded dimensionless temperature signals are plotted in Figs. \[expfou1\], \[expfou2\], \[expfou3\]. In these figures, the dashed line shows the solution of the Fourier equation with the thermal diffusivity fitted to the initial rise of the rear-side temperature. It is clear that the measured signal deviates from the Fourier prediction even when a non-adiabatic (cooling) boundary condition is taken into account. The deviation weakens with increasing sample thickness; for the thickest sample it is hardly visible, and the prediction of Fourier’s law is almost acceptable.
![Data recorded for basalt rock sample with thickness of $1.86$ mm. The dashed line shows the prediction of Fourier’s law.[]{data-label="expfou1"}](S1_0001_Fourier.jpg){width="15cm"}
![Data recorded for basalt rock sample with thickness of $2.75$ mm. The dashed line shows the prediction of Fourier’s law.[]{data-label="expfou2"}](S2_0001_Fourier.jpg){width="15cm"}
![Data recorded for basalt rock sample with thickness of $3.84$ mm. The dashed line shows the prediction of Fourier’s law.[]{data-label="expfou3"}](S3_0001_Fourier.jpg){width="15cm"}
The evaluation of the thinnest sample using the Guyer-Krumhansl equation is shown in Fig. \[expfou4\]. The fitted coefficients are summarized in Table \[expcoeff\].
![Data recorded using the basalt with thickness of $1.86$ mm. The solid continuous line shows the prediction of the GK equation.[]{data-label="expfou4"}](S1_0002_GK.jpg){width="15cm"}
  ------------ ------------------------ ------------------- ---------- -------------
  $L$ (mm)     $a_{\mathrm{Fourier}}$   $a_{\mathrm{GK}}$   $\tau$     $\kappa^2$
  $1.86$       $0.62$                   $0.55$              $0.738$    $0.509$
  $2.75$       $0.67$                   $0.604$             $0.955$    $0.67$
  $3.84$       $0.685$                  $0.68$              $0.664$    $0.48$
  ------------ ------------------------ ------------------- ---------- -------------

  : Summarized results of the fitted coefficients in the Fourier and GK equations.[]{data-label="expcoeff"}
Deviation from the Fourier prediction is weak but clearly present, and it is size dependent. Concerning the ratio of the parameters, i.e., the degree to which the Fourier resonance condition $a \tau / \kappa^2 = 1$ is violated, the outcome is shown in Table \[expcoeff2\]. It is remarkable that the GK-fitted thermal diffusivity deviates from the Fourier-fitted one, and that this deviation is also size dependent. For the thickest sample, which is well described by Fourier’s law, the two fitted thermal diffusivity values are practically equal, and the ratio of the parameters is very close to the Fourier resonance value of 1.
  ------------ ---------------------
  $L$ (mm)     $a\tau/\kappa^2$
  $1.86$       $0.804$
  $2.75$       $0.854$
  $3.84$       $0.943$
  ------------ ---------------------

  : Ratio of the fitted coefficients.[]{data-label="expcoeff2"}
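The consistency of the two tables can be verified directly. Reading the columns of Table \[expcoeff\] as the Fourier-fitted and GK-fitted diffusivities followed by $\tau$ and $\kappa^2$ (an assumption on our part), the GK-fitted values reproduce the ratios of Table \[expcoeff2\] to within rounding:

```python
# Cross-check: the ratio a*tau/kappa^2 in Table 2 should follow from the
# GK-fitted coefficients of Table 1 (using the GK thermal diffusivity).
# Values are read off the tables; agreement is only up to rounding.

table1 = {  # thickness (mm): (a_GK, tau, kappa2), as fitted
    1.86: (0.55, 0.738, 0.509),
    2.75: (0.604, 0.955, 0.67),
    3.84: (0.68, 0.664, 0.48),
}
table2 = {1.86: 0.804, 2.75: 0.854, 3.84: 0.943}

for L, (a, tau, kappa2) in table1.items():
    ratio = a * tau / kappa2
    assert abs(ratio - table2[L]) < 0.01, (L, ratio)
```

Note that the check fails if the Fourier-fitted diffusivity is used instead, which supports the column assignment above.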
The next section is devoted to a possible explanation for the emergence of a generalized heat equation with higher time and space derivatives. All coefficients of the higher time and space derivative terms are related to well-known material parameters. The result also features size dependent non-Fourier deviation.
Seeming non-Fourier heat conduction induced by elasticity coupled via thermal expansion
=======================================================================================
While, in general, one does not have a direct physical interpretation of the phenomenon that leads, at the phenomenological level, to non-Fourier heat conduction, here follows a case where we do know this background phenomenon. In the case of heat conduction in solids, a plausible possibility is provided by the interplay between elasticity and thermal expansion. Without thermal expansion, elasticity (a tensorial behaviour) is not coupled to Fourier heat conduction (a vectorial one) in isotropic materials. However, with nonzero thermal expansion, strains and displacements have to be in accord both with what elastic mechanics dictates and with what the position dependent temperature imposes. The coupled set of equations of Fourier heat conduction, of elastic mechanics and of kinematic relationships, after eliminating the kinematic and mechanical quantities, leads to an equation for temperature only, which contains higher derivative corrections to Fourier’s equation. It is important to check how considerable these corrections are. In the following section we present this derivation and its investigation.
The basic equations {#.62..2.2.}
-------------------
In all respects involved, we choose the simplest assumptions: the small-strain regime, a Hooke-elastic homogeneous and isotropic solid material, with constant thermal expansion coefficient, essentially being at rest with respect to an inertial reference frame. Kinematic, mechanical and thermodynamical quantities and their relationships are considered along the approach detailed in [@Godollo-en; @MMAS; @IWNET]. The Hooke-elastic homogeneous and isotropic material model states, at any position , the constitutive relationship between stress tensor and elastic deformedness tensor (which, in many cases, coincides with the strain tensor), where and denote the deviatoric (traceless) and spherical (proportional to the unit tensor ) parts, i.e., Stress induces a time derivative in the velocity field of the solid medium, according to the equation with mass density being constant in the small-strain regime. For the velocity gradient and its symmetric part, one has where the Einstein summation convention for indices has also been applied. Again using this convention, and the Kronecker delta notation, to any scalar field , follow, which are also to be utilized below.
The small-deformedness relationship among the kinematic quantities, with linear thermal expansion coefficient considered constant, and absolute temperature , is For specific internal energy , its balance, after subtracting the contribution coming from specific elastic energy and the corresponding elastic part of the mechanical power , is where is specific heat corresponding to constant zero stress (or pressure), temperature has been approximated in one term of by an initial homogeneous absolute temperature value to stay in accord with the linear (small-strain) approximation, and heat flux follows the Fourier heat conduction constitutive relationship with thermal conductivity also treated as a constant.
### The derivation {#.63..2.3.}
The strategy is to eliminate in favour of (with the aid of) , then is eliminated in favour of , after which we can realize that both from the mechanical direction and from the thermal one we obtain relationship between and , which, eliminating , yields an equation for only.
Starting with the thermal side, Meanwhile, from the mechanical direction, aiming at being in tune with : (where is the longitudinal elastic wave propagation velocity); hence, summarizing the final result in two equivalent forms, The first form here tells us that we have here the wave equation of a heat conduction equation, the last term on the somewhat detuning the heat conduction equation of the with respect to the one on the l.h.s. (the underlined coefficient is the one becoming modified when its term is melted together with the last term). In the meantime, the second form shows the heat conduction equation of a wave equation, the last term on the detuning the underlined coefficient.
Both forms show that coupling, after elimination, leads to a hierarchy of equations, with an amount of detuning that is induced by the coupling – for similar further examples, see [@hierar].
We close this section by rewriting the final result in a form that enables one to estimate the contribution of thermal-expansion-coupled elasticity to heat conduction: i.e., One message here is that thermal expansion coupled elasticity modifies the thermal diffusivity to an effective one (see the heat conduction equation on the r.h.s.). For metals, this means a few-percent shift (1% for steel and copper, and 6% for aluminum) at room temperature.
The other is that, for a length scale (e.g., characteristic sample size) and the corresponding Fourier time scale , the is, to a (very) rough estimate, times a heat conduction equation while the is (similarly roughly) times the (nearly) same heat conduction equation (a one with ). In other words, the provides a contribution to the via a dimensionless factor This dimensionless factor is about to for metals, for rocks and for plastics with , a typical size for flash experiments. Therefore, the effect of the appears to be negligible with respect to the .
It is important to point out that the first phenomenon (the emergence of an effective thermal diffusivity) would remain unnoticed in the analogous one space dimensional calculation: \[no detuning of on the \]. It is revealed only in the full 3D treatment, which highlights possible pitfalls of 1D considerations in general as well.
In conclusion of this section, thermal expansion coupled elasticity may introduce a few percent effect (a material dependent but sample size independent value) in determining the thermal diffusivity from flash experiments or other transient processes, while its other consequences may be negligible.
Pseudo-temperature approach
===========================
The experimental results serve to check whether a certain theory used for describing the observed phenomenon is acceptable or not. Heat pulse (flash) experiments may show various temperature histories. Generally, flash measurement results are in accordance with the Fourier theory. In some cases, as reported in [@Botetal16; @Vanetal17], the temperature histories show “irregular” characteristics; such histories can be described with the help of various non-Fourier models [@JouEtal15; @SellEtal16; @JouCimm16; @KovVan15]. One kind of non-Fourier behaviour can be constructed as shown in the following. This is only an illustration of how two parallel Fourier mechanisms can result in a non-Fourier-like temperature history. The idea is strongly motivated by the hierarchy of Fourier equations in the GK model [@VanKovFul15] mentioned previously; however, their interaction is not described in detail.
The sample that we investigate now is only a hypothetical one; we may call it a “pseudo-matter”. We consider the pseudo-matter to be formed by parallel material strips wide enough that interface effects may be neglected, i.e., they act like insulated parallel channels. We also assume that only the thermal conductivities differ, while the strips have the same mass density and specific heat. During the flash experiment, after the front-side energy input, a simple temperature equalisation process takes place in the sample under adiabatic boundary conditions. Since the flash method is widely developed, the effects of the real measurement conditions (heat losses, heat gain, finite pulse time, etc.) are well treated in the literature.
Figure \[pseudo1\] shows two temperature histories with thermal diffusivities of different magnitude; both are solutions of the Fourier heat equation.
![Rear-side temperature history; solid line: $a=10^{-6} \ \text{m$^2$/s}$, dashed line: $a=2.5 \cdot 10^{-7} \ \text{m$^2$/s}$, $L=2 \ \text{mm}$.[]{data-label="pseudo1"}](ps1.jpg){width="15cm"}
The mathematical formula that expresses the temperature history of the rear side in the adiabatic case is [@ParEtal61]: $$\begin{aligned}
\nu(\xi=1, Fo)=1 + 2 \sum\limits^{\infty}_{m=1} (-1)^m e^{-(m^2 \pi^2 Fo)},\end{aligned}$$ where $\nu$ is the dimensionless temperature, $\xi$ is the normalized spatial coordinate ($\xi=1$ corresponds to the rear side) and $Fo = a \cdot t /L^2$ stands for the Fourier number (dimensionless time variable). This infinite series converges slowly for short initial time intervals. An alternative formula, derived by Laplace transformation, converges faster for $Fo < 1$ [@James80]: $$\begin{aligned}
p(Fo) =\frac{2}{\sqrt{\pi Fo}}\sum \limits^{\infty}_{n=0} e^{-\frac{(2n+1)^2}{4 Fo}}.
\label{pfo}\end{aligned}$$ In the further analysis we use equation (\[pfo\]) to calculate the rear-side temperature history.
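The two series represent the same rear-side history (they are related by Poisson resummation) and must agree numerically; a quick cross-check, with our own function names, is:

```python
# Cross-check of the two rear-side temperature formulas: the adiabatic
# image series nu(xi=1, Fo) and its resummed form p(Fo), eq. (pfo).
from math import exp, pi, sqrt

def nu_rear(Fo, terms=200):
    """1 + 2*sum_m (-1)^m exp(-m^2 pi^2 Fo); converges fast for large Fo."""
    return 1 + 2 * sum((-1) ** m * exp(-m**2 * pi**2 * Fo)
                       for m in range(1, terms))

def p_rear(Fo, terms=200):
    """(2/sqrt(pi*Fo)) * sum_n exp(-(2n+1)^2/(4 Fo)); fast for Fo < 1."""
    return (2 / sqrt(pi * Fo)) * sum(exp(-(2 * n + 1) ** 2 / (4 * Fo))
                                     for n in range(terms))

for Fo in (0.05, 0.2, 0.5, 1.0):
    assert abs(nu_rear(Fo) - p_rear(Fo)) < 1e-9
```

Both functions rise monotonically from 0 to the adiabatic limit 1 as `Fo` grows.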
So far we have described two parallel heat conducting layers without direct interaction between them; however, let us now suppose that they can exchange energy only at their rear side, through a very thin layer with excellent conduction properties. Eventually, that models the role of the silver layer used in our experiments in order to close the thermocouple circuit and to ensure that we measure the temperature of that layer instead of any internal one from the material. In effect, the silver layer averages the rear-side temperature histories of the parallel strips. We considered the mixing of temperature histories using the formula: $$\begin{aligned}
p(Fo) = \Theta p_1 (a=10^{-6} \ \text{m$^2$/s}, Fo_1) + (1-\Theta) p_2 (a=2.5 \cdot 10^{-7} \ \text{m$^2$/s}, Fo_2),\end{aligned}$$ that is, taking the convex combination of different solutions of the Fourier heat equation (\[Fouriereq\]). Fig. \[pseudo2\] shows a few possible cases of mixing.
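A minimal sketch of this mixing (function names are our own; the diffusivities and thickness are those of Fig. \[pseudo1\]): both channels share the thickness $L$, so at a given physical time $t$ they sit at different Fourier numbers.

```python
# Convex mixing of two Fourier rear-side histories ("pseudo-matter").
from math import exp, pi, sqrt

def p_rear(Fo, terms=50):
    """Rear-side temperature, resummed adiabatic series, eq. (pfo)."""
    return (2 / sqrt(pi * Fo)) * sum(exp(-(2 * n + 1) ** 2 / (4 * Fo))
                                     for n in range(terms))

def p_mix(t, theta, a1=1e-6, a2=2.5e-7, L=2e-3):
    """Weighted rear-side signal of two strips with diffusivities a1, a2
    and common thickness L, at physical time t (seconds)."""
    return theta * p_rear(a1 * t / L**2) + (1 - theta) * p_rear(a2 * t / L**2)
```

Since each channel equilibrates to 1 adiabatically, so does any convex mixture; at early times the faster channel dominates the rise, producing the non-Fourier-like shapes of Fig. \[pseudo2\].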
![Rear-side temperature histories.[]{data-label="pseudo2"}](ps2.jpg){width="15cm"}
Outlook and summary
===================
This pseudo-material virtual experiment only demonstrates that there may be several effects causing non-Fourier behaviour of the registered temperature data. Here, the assumed mixing of “Fourier temperatures” is analogous to the GK equation in the sense of the hierarchy of Fourier equations: dual heat conducting channels are present and interact with each other. The GK equation is, however, more general: there is no need to assume a particular mechanism in order to derive the constitutive equation.
Comparing eq. (\[HierGKT\]) to the thermo-mechanically coupled equation derived in the previous section, the hierarchy of Fourier equations appears in a different way. While (\[HierGKT\]) contains the zeroth and first order time derivatives of the Fourier equation, the coupled equation instead contains its second order time and space derivatives. Recalling that the latter is derived under the assumption that thermal expansion is present besides heat conduction, it is natural to compare it to a ballistic (i.e., thermal expansion induced) heat conduction model. Let us consider such a model from [@KovVan15]: $$\begin{aligned}
\tau_1 \tau_2 \dddot T+(\tau_1 + \tau_2) \ddot T + \dot T = a \Lapl T + (\kappa^2 + a \tau_2) \Lapl \dot T, \label{BALLT}\end{aligned}$$ where $\tau_1$ and $\tau_2$ are relaxation times. Eq. (\[BALLT\]) has been tested on experiments, too [@KovVan18]. Eventually, the GK equation is extended with a third order time derivative, and the coefficients are modified by the presence of $\tau_2$. In contrast to the thermo-mechanically coupled equation, it does not contain any fourth order derivative. Actually, the existing hierarchy of Fourier equations is extended: instead of $\tau$ and $\kappa^2$, the terms $(\tau_1 + \tau_2)$ and $(\kappa^2 + a \tau_2)$ appear in (\[BALLT\]).
Although it is still not clear exactly what leads to over-diffusive heat conduction, the possible interpretations and approaches presented here can be helpful for understanding the underlying mechanism. This is not the first experimental observation of over-diffusive propagation, but it is the first to consider its size dependence. The simplest thermo-mechanical coupling predicts a size dependence of the material coefficients that can be relevant in certain cases. All three approaches lead to a system of partial differential equations, which can be called hierarchical.
Acknowledgments
===============
The work was supported by the grants of the Hungarian National Research, Development and Innovation Office (NKFIH): K116197, K123815, K124366, K116375.
[999]{} \[1\][\#1]{}
Fourier, J. *Théorie Analytique de la Chaleur*; Chez Firmin Didot, p[è]{}re et fils, 1822.
Tisza, L. Transport phenomena in [H]{}elium [II]{}. , [*141*]{}, 913.
Joseph, D.D.; Preziosi, L. Heat waves. , [*61*]{}, 41.
Joseph, D.D.; Preziosi, L. Addendum to the paper on heat waves. , [*62*]{}, 375–391.
Chen, G. Ballistic-diffusive heat-conduction equations. , [*86*]{}, 2297–2300.
Ván, P.; Fülöp, T. Universality in Heat Conduction Theory – Weakly Nonlocal Thermodynamics. , [*524*]{}, 470–478.
Kovács, R.; Ván, P. Generalized heat conduction in heat pulse experiments. , [*83*]{}, 613 – 620.
Ackerman, C.C.; Bertman, B.; Fairbank, H.A.; Guyer, R.A. Second sound in solid [H]{}elium. , [*16*]{}, 789–791.
Jackson, H.E.; Walker, C.T. Thermal conductivity, second sound and phonon-phonon interactions in [N]{}a[F]{}. , [*3*]{}, 1428–1439.
Peshkov, V. Second sound in [H]{}elium [II]{}. , [*381*]{}.
McNelly, T.F. Second [S]{}ound and [A]{}nharmonic [P]{}rocesses in [I]{}sotopically [P]{}ure [A]{}lkali-[H]{}alides [**1974**]{}. Ph.D. Thesis, Cornell University.
Dreyer, W.; Struchtrup, H. Heat pulse experiments revisited. , [ *5*]{}, 3–50.
Müller, I.; Ruggeri, T. ; Springer, 1998.
Frischmuth, K.; Cimmelli, V.A. Numerical reconstruction of heat pulse experiments. , [ *33*]{}, 209–215.
Kovács, R.; Ván, P. Models of [B]{}allistic [P]{}ropagation of [H]{}eat at [L]{}ow [T]{}emperatures. , [ *37*]{}, 95.
Kov[á]{}cs, R.; V[á]{}n, P. Second sound and ballistic heat conduction: [N]{}a[F]{} experiments revisited. , [*117*]{}, 682–690. submitted, arXiv preprint arXiv:1708.09770.
Bargmann, S.; Steinmann, P. Finite element approaches to non-classical heat conduction in solids. , [ *9*]{}, 133–150.
Mitra, K.; Kumar, S.; Vedevarz, A.; Moallemi, M.K. Experimental evidence of hyperbolic heat conduction in processed meat. , [*117*]{}, 568–573.
Cattaneo, C. Sur une forme de lequation de la chaleur eliminant le paradoxe dune propagation instantanee. , [*247*]{}, 431–433.
Vernotte, P. Les paradoxes de la th[é]{}orie continue de l[é]{}quation de la chaleur. , [*246*]{}, 3154–3155.
Herwig, H. and Beckert, K. Fourier versus non-[F]{}ourier heat conduction in materials with a nonhomogeneous inner structure , [*122*]{}, 363–364.
Tisza, L. The theory of liquid [H]{}elium. , [*72*]{}, 838–877.
Landau, L. On the theory of superfluidity of [H]{}elium [II]{}. , [*11*]{}, 91–92.
Guyer, R.A.; Krumhansl, J.A. Solution of the Linearized Phonon [B]{}oltzmann Equation. , [*148*]{}, 766–778.
Guyer, R.A.; Krumhansl, J.A. Thermal Conductivity, Second Sound and Phonon Hydrodynamic Phenomena in Nonmetallic Crystals. , [*148*]{}, 778–788.
Ván, P. Weakly Nonlocal Irreversible Thermodynamics – The [G]{}uyer-[K]{}rumhansl and the [C]{}ahn-[H]{}illiard Equations. , [*290*]{}, 88–92.
Zhukovsky, K.V. Exact solution of [G]{}uyer–[K]{}rumhansl type heat equation by operational method. , [*96*]{}, 132–144.
Zhukovsky, K.V. Operational Approach and Solutions of Hyperbolic Heat Conduction Equations. , [*5*]{}, 28.
Tzou, D.Y. ; CRC Press, 1996.
Sellitto, A.; Cimmelli, V.A.; Jou, D. Nonequilibrium Thermodynamics and Heat Transport at Nanoscale. In [*Mesoscopic Theories of Heat Transport in Nanosystems*]{}; Springer International Publishing, 2016; pp. 1–30.
Rogolino, P.; Kov[á]{}cs, R.; V[á]{}n, P.; Cimmelli, V.A. Generalized heat-transport equations: Parabolic and hyperbolic models. , [ *30*]{}, AiP–14.
Fabrizio, M.; Lazzari, B.; Tibullo, V. Stability and Thermodynamic Restrictions for a Dual-Phase-Lag Thermal Model. . Published Online:2017/01/10.
Rukolaine, S.A. Unphysical effects of the dual-phase-lag model of heat conduction: higher-order approximations. , [ *113*]{}, 83–88.
Kov[á]{}cs, R.; V[á]{}n, P. Thermodynamical consistency of the [D]{}ual [P]{}hase [L]{}ag heat conduction equation. , pp. 1–8.
Both, S.; Cz[é]{}l, B.; F[ü]{}l[ö]{}p, T.; Gr[ó]{}f, G.; Gyenis, [Á]{}.; Kov[á]{}cs, R.; V[á]{}n, P.; Verh[á]{}s, J. Deviation from the [F]{}ourier law in room-temperature heat pulse experiments. , [ *41*]{}, 41–48.
V[á]{}n, P.; Berezovski, A.; F[ü]{}l[ö]{}p, T.; Gr[ó]{}f, G.; Kov[á]{}cs, R.; Lovas, [Á]{}.; Verh[á]{}s, J. Guyer-[K]{}rumhansl-type heat conduction at room temperature. , [*118*]{}, 50005. arXiv:1704.00341v1.
Jou, D.; Carlomagno, I.; Cimmelli, V.A. A thermodynamic model for heat transport and thermal wave propagation in graded systems. , [*73*]{}, 242–249.
Jou, D.; Cimmelli, V.A. Constitutive equations for heat conduction in nanosystems and non-equilibrium processes: an overview. , [*7*]{}, 196–222.
Fülöp, T.; Kovács, R.; Ván, P. Thermodynamic hierarchies of evolution equations. , [*64*]{}, 389–395.
Parker, W.J.; Jenkins, R.J.; Butler, C.P.; Abbott, G.L. Flash method of determining thermal diffusivity, heat capacity, and thermal conductivity. , [*32*]{}, 1679–1684.
James, H.M. Some extensions of the flash method of measuring thermal diffusivity. , [*51*]{}, 4666–4672.
Cs. Asszonyi, A. Csatár, T. Fülöp. Elastic, thermal expansion, plastic and rheological processes – theory and experiment. *Periodica Polytechnica Civil Engineering* **60** (2016) 591–601; DOI:10.3311/PPci.8628 .
T. Fülöp, P. Ván. Kinematic quantities of finite elastic and plastic deformation. *Mathematical Methods in the Applied Sciences* **35** (2012) 1825–1841.
T. Fülöp. Objective thermomechanics. E-print arXiv:1510.08038 (2015) (`https://arxiv.org/abs/1510.08038`).
P. Ván, R. Kovács, T. Fülöp. Thermodynamics hierarchies of evolution equations. *Proceedings of the Estonian Academy of Sciences* **64** (2015) 389–395; DOI:10.3176/proc.2015.3S.09 .
---
abstract: |
The Virasoro algebra with $c=1$ has a continuum of superselection sectors characterized by the ground state energy $h\geq 0$. Only the discrete subset of sectors with $h=s^2$, $s\in\frac12{{\mathbb N}}_0$, arises by restriction of representations of the $SU(2)$ current algebra at level $k=1$. The remaining continuum of sectors is obtained with the help of (localized) homomorphisms into the current algebra. The fusion product of continuum sectors with discrete sectors is computed. A new method of determining the sector of a state is used.\
PACS 11.10.Cd, 11.25.Hf
author:
- |
Karl-Henning Rehren[^1]\
and\
Hilmar R. Tuneke[^2]\
Institut für Theoretische Physik, Universität Göttingen,\
37073 Göttingen, Germany
title: |
-15mm **Fusion rules for the continuum sectors\
of the Virasoro algebra with $c=1$**
---
Introduction
============
“Fusion rules” describe the product of two superselection charges and the decomposition of the product into irreducible charges. They thus constitute an important characteristics for the charge structure of a quantum field theory.
The general definition of the composition of charges (“DHR product”) was first given in [@DHRprod]. In two-dimensional conformal quantum field theory, other notions of fusion [@BPZ; @N] became more popular, but every evidence shows [@FRS; @W] that these describe the same abstract charge structure.
The actual computation of the fusion rules in concrete models is in general a difficult task, and almost always relies on some specific a priori knowledge. If the QFT at hand is the fixpoint subalgebra of another QFT with respect to a compact gauge group, then harmonic analysis determines the composition law for those sectors which appear in the decomposition of the vacuum sector of the larger algebra [@DHRfix]. The fusion rules then follow the composition of the representations of the gauge group. In low-dimensional theories, a gauge group is in general not present, but in favorable cases, modular transformation properties [@V] or “null vectors” [@BPZ; @N] can be exploited.
In the present letter we treat a model where the standard strategies are not applicable: the chiral stress-energy tensor of a 1+1-dimensional conformal quantum field theory with $c=1$. (A chiral field can be treated like a “one-dimensional QFT”.) Its algebra $A$ is the fixpoint algebra of the chiral $SU(2)$, level $k=1$, current algebra $B$ with respect to its global $SU(2)$ symmetry [@Fk; @KHR], and the positive-energy representations of the current algebra contain a discrete series of superselection sectors of $A$. But besides the discrete series there is a continuum of further sectors which do not arise by restriction from $B$. These sectors have no “null vectors” and hence infinite asymptotic dimension [@KHR], so that the Verlinde formula or Nahm’s prescription are not applicable.
We adopt a method due to Fredenhagen [@F] for the computation of the fusion rules: A charged state $\omega$ is described by a positive map $\chi$ of the algebra into itself such that $$\omega=\omega_0{{\scriptstyle \circ}}\chi$$ where $\omega_0$ is the vacuum state. The correspondence between states and positive maps is 1:1 provided the charge is strictly localized. This yields a product of states defined by $$\omega_1 \times \omega_2 := \omega_0{{\scriptstyle \circ}}\chi_1{{\scriptstyle \circ}}\chi_2.$$ The GNS representation $\pi_{\omega_1 \times \omega_2}$ is always a subrepresentation of the DHR product of GNS representations $\pi_{\omega_1} \times \pi_{\omega_2}$ [@F], and is expected to exhaust it as the positive maps vary within their equivalence class.
For two states $\omega_1$ and $\omega_2$ belonging to the discrete and continuous sectors, respectively, we shall determine (by a new method) the sectors to which the product states belong.
Fusion rules for the $c=1$ sectors
==================================
The superselection sectors of the stress-energy tensor with $c=1$ are uniquely determined by their ground state energy $h\geq 0$ for the conformal Hamiltonian $L_0$. The sectors $[h=s^2]$ with $s\in{{\mathbb N}}_0$, arise as subrepresentations of the vacuum representation of the $SU(2)$ current algebra $B$, and those with $s\in{{\mathbb N}}_0+\frac12$ arise in the spin-$\frac12$ representation of $B$. Those with $h\notin(\frac12{{\mathbb Z}})^2$ constitute the continuum. For each of these representations, the partition function is well known [@part]: $${\hbox{Tr}}\exp(-\beta \pi_h(L_0))=\left\{\begin{array}{lcl} t^h p(t) & \hbox{if}&
h\notin(\frac12{{\mathbb Z}})^2, \\ (t^{s^2}-t^{(s+1)^2}) p(t) &
\hbox{if}& h=s^2, \quad s \in\frac12{{\mathbb N}}_0,\end{array}\right.$$ where $t={\hbox{e}}^{-\beta}$ and $p(t)=\prod_n(1-t^n)^{-1}$.
The positive maps describing the charged states are of the form (cf. Lemma 2.1) $$\chi = \mu{{\scriptstyle \circ}}\alpha_g\vert_A.$$ Here $g$ is a smooth $SU(2)$-valued function, and $\alpha_g$ the automorphism of the current algebra $B$ induced by the local gauge transformation (Bogolyubov automorphism) of the underlying chiral fermion doublet, $$\psi_i(x) \mapsto \sum_j\psi_j(x)g_{ji}(x).$$ $\mu=\int d\mu(k)\,\gamma_k$ is the average over the global gauge group $SU(2)$ acting by automorphisms $\gamma_k$. Since $\mu$ is a positive map of $B$ onto $A$, $\chi_g$ is a positive map of $A$ into $A$.
The induced action of $\alpha_g$ on the currents $j(f)\equiv\sum j^a(f_a)=\int:\psi(x) f(x)\psi(x)^*:dx$ (with an $su(2)$ valued test function $f(x)=\sum f_a(x) T^a$) is explicitly computed as $$\alpha_g(j(f))=j(gfg^{-1})-\frac i{2\pi}\int{\hbox{Tr}}(fg^{-1}\partial g){\mathbf 1},$$ and its restriction to the Sugawara stress-energy tensor $T=\frac\pi 3 \sum g_{ab}:j^aj^b:$ is $$\alpha_g(T(f)) = T(f) -ij(f\partial gg^{-1}) -\frac 1{4\pi}\int
f\,{\hbox{Tr}}(\partial gg^{-1}\partial gg^{-1}){\mathbf 1}.$$ The central terms arise, of course, from normal ordering. To be specific, we choose the functions $$g_q(x)=\pmatrix{\exp(iq\lambda(x)) & 0\cr 0 & \exp(-iq\lambda(x)) }$$ where $\lambda(x)=-i \log\frac{1+ix}{1-ix}$ interpolates between $\lambda(-\infty)=-\pi$ and $\lambda(+\infty)=+\pi$, and $q$ is a real parameter whose role as a charge will be exhibited in Lemma 2.1. [^3]
At this point, we have to distinguish the quasilocal algebras $A{_{\rm local}}$ and $B{_{\rm local}}$ generated by field operators smeared with test functions, and the global algebras $A{_{\rm global}}$ and $B{_{\rm global}}$ generated by field operators smeared with “admissible” functions which are test functions up to polynomials of order $2(d-1)$ where $d$ is the scaling dimension. It is well known [@LM] that the fields as distributions extend to these enlarged test function spaces, so that $$L_n=\frac12\int(1-ix)^{1-n}(1+ix)^{1+n}T(x)dx
\quad\hbox{and}\quad
Q^a_n=\int(1-ix)^{-n}(1+ix)^{n} j^a(x)dx$$ are defined as closed unbounded operators. The specific automorphisms $\alpha_q\equiv\alpha_{g_q}$ extend to the operators $Q^3_n\in B{_{\rm global}}$ and $L_n\in A{_{\rm global}}$: $$\alpha_q(Q^3_n)=Q^3_n + q\delta_{n,0}, \qquad\alpha_q(L_n)=L_n +
2q Q^3_n + q^2 \delta_{n,0}\qquad(q\in{{\mathbb R}}),$$ but they extend to $Q^\pm_n\in B{_{\rm global}}$ only if $q\in\frac12{{\mathbb Z}}$: $$\alpha_q(Q^\pm_n)=Q^\pm_{n\pm2q}\qquad(q\in\frac12{{\mathbb Z}}).$$ (Our basis of $SU(2)$ and hence of the fields $j^a$ is such that $[Q^+_n,Q^-_m]=2Q^3_{n+m}+n\delta_{n+m,0}$, $[Q^3_n,Q^\pm_m]=Q^\pm_{n+m}$, $[Q^3_n,Q^3_m]=\frac 12 n\delta_{n+m,0}$.)
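That these transformation formulae are consistent with the commutation relations can be verified symbolically on the subalgebra spanned by the $L_n$, $Q^3_n$ and the central element, for arbitrary real $q$. The following sketch uses our own encoding (basis labels `'L'`, `'Q'`, `'I'` are ours), with the $c=1$ Sugawara relations $[L_n,L_m]=(n-m)L_{n+m}+\frac1{12}(n^3-n)\delta_{n+m,0}$ and $[L_n,Q^3_m]=-mQ^3_{n+m}$:

```python
# Symbolic check that alpha_q preserves the Virasoro relations (c = 1).
from fractions import Fraction as F
from collections import defaultdict

def comm(x, y):
    """Commutator of linear combinations over basis ('L',n), ('Q',n), ('I',0)."""
    out = defaultdict(F)
    for (s1, n), c1 in x.items():
        for (s2, m), c2 in y.items():
            c = c1 * c2
            if s1 == 'L' and s2 == 'L':
                out[('L', n + m)] += c * (n - m)
                if n + m == 0:
                    out[('I', 0)] += c * F(n**3 - n, 12)   # c = 1 central term
            elif s1 == 'L' and s2 == 'Q':
                out[('Q', n + m)] += -c * m
            elif s1 == 'Q' and s2 == 'L':
                out[('Q', n + m)] += c * n
            elif s1 == 'Q' and s2 == 'Q':
                if n + m == 0:
                    out[('I', 0)] += c * F(n, 2)
    return {k: v for k, v in out.items() if v}

def alpha(q, x):
    """alpha_q(L_n) = L_n + 2q Q_n + q^2 d_{n,0}; alpha_q(Q_n) = Q_n + q d_{n,0}."""
    out = defaultdict(F)
    for (s, n), c in x.items():
        out[(s, n)] += c
        if s == 'L':
            out[('Q', n)] += 2 * q * c
            if n == 0:
                out[('I', 0)] += q * q * c
        elif s == 'Q' and n == 0:
            out[('I', 0)] += q * c
    return {k: v for k, v in out.items() if v}

q = F(1, 2)
for n in range(-3, 4):
    for m in range(-3, 4):
        Ln, Lm = {('L', n): F(1)}, {('L', m): F(1)}
        assert comm(alpha(q, Ln), alpha(q, Lm)) == alpha(q, comm(Ln, Lm))
```

The check passes for any rational `q`, in line with the statement that $\alpha_q$ extends to $L_n$ and $Q^3_n$ for all real $q$, while the extension to $Q^\pm_n$ requires $q\in\frac12{\mathbb Z}$.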
Our first Lemma establishes the relation between the parameters $q$ and $h$:
[**2.1. Lemma:**]{} The state $\omega_q\equiv\omega_0{{\scriptstyle \circ}}\chi_q \equiv
\omega_0{{\scriptstyle \circ}}\mu{{\scriptstyle \circ}}\alpha_q\vert_A = \omega_0{{\scriptstyle \circ}}\alpha_q\vert_A$ is a ground state in the irreducible sector $[h=q^2]$.
[*Proof:*]{} Since the operators $L_n$ and $Q^3_n$ ($n\geq 0$) annihilate the vacuum, $\alpha_q(L_n)$ annihilate the vacuum for $n>0$ and $\alpha_q(L_0)$ has eigenvalue $q^2$. It follows that $\omega_q$ is a ground state for $L_0$ with ground state energy $q^2$. [Q.E.D.]{}
Thus, in order to compute the fusion rules $[h_1]\times[h_2]$ (where $h_i=q_i^2$) one has to determine the GNS representation for the product state $$\omega_{q_1}\times\omega_{q_2}
= \omega_0{{\scriptstyle \circ}}\alpha_{q_1}{{\scriptstyle \circ}}\mu{{\scriptstyle \circ}}\alpha_{q_2}\vert_A =
\int_{SU(2)}d\mu(k)\; \omega_0{{\scriptstyle \circ}}\alpha_{q_1}{{\scriptstyle \circ}}\gamma_k{{\scriptstyle \circ}}\alpha_{q_2}\vert_A .$$ This state is a continuous mixture of states $\omega_k \equiv\omega_0{{\scriptstyle \circ}}\alpha_k$ induced by the homomorphisms $$\alpha_k\equiv\alpha_{q_1}{{\scriptstyle \circ}}\gamma_k{{\scriptstyle \circ}}\alpha_{q_2}\vert_A$$ of $A{_{\rm local}}$ into $B{_{\rm local}}$. (We suppress the explicit reference to the involved charges $q_1$ and $q_2$.) These homomorphisms extend to $A{_{\rm global}}$ for generic $k\in SU(2)$ only if $q_1\in \frac12{{\mathbb Z}}$, as can be seen from the above transformation formulae. The following argument is more physical: If one evaluates $\omega_k(T(f)^2)$ for test functions $f$, then one finds that the contributions from the current two-point functions diverge for generic $q$ as $f$ is replaced by the function $\frac12(1+x^2)$. Hence the operator $L_0=\frac12\int (1+x^2)T(x)dx$ has a finite expectation value but infinite variance in these states.
This is why we shall restrict ourselves to the case $q_1\in\frac12{{\mathbb Z}}$. Since $q$ and $-q$ give rise to the same sector $[h=q^2]$, we shall even assume $q_1\in\frac12{{\mathbb N}}_0$.
Now we exploit the fact that $\gamma_k$ is implemented by a unitary operator in $B{_{\rm global}}$ of the form $U(k)=\exp(i\sum \kappa_a Q^a_0)$ on which $\alpha_{q_1}$ is well defined. Hence $$\alpha_k={\hbox{Ad}}(V(k)){{\scriptstyle \circ}}\alpha_{q_1}{{\scriptstyle \circ}}\alpha_{q_2}\vert_A =
{\hbox{Ad}}(V(k)){{\scriptstyle \circ}}\alpha_{q_1+q_2}\vert_A$$ with $V(k) = \alpha_{q_1}(U(k))= \exp(i\sum\kappa_a\alpha_{q_1}(Q^a_0))$. It is more convenient to express $U(k)$ in the form $$U(k)=\exp(i\frac{k_2^*}{k_1}Q^-_0)k_1^{2Q^3_0}\exp(i\frac{k_2}{k_1}Q^+_0)
\qquad \hbox{for} \qquad k=\pmatrix{ k_1 & ik_2 \cr ik_2^* & k_1^* }.$$ ($k_1^{2Q^3_0}$ is well defined since $2Q^3_0$ has integer spectrum.) Application of $\alpha_{q_1}$ yields $$V(k)= k_1^{2q_1}\exp(i\frac{k_2^*}{k_1}Q^-_{-2q_1})k_1^{2Q^3_0}
\exp(i\frac{k_2}{k_1}Q^+_{+2q_1}).$$
[**2.2. Lemma:**]{} The product state $\omega_{q_1}\times\omega_{q_2}$ is a convex integral over states $\omega_0{{\scriptstyle \circ}}\alpha_k$, $k\in SU(2)$. Each state $\omega_0{{\scriptstyle \circ}}\alpha_k$ on $A$ is a finite convex sum $$\omega_0{{\scriptstyle \circ}}\alpha_k = \sum_{\nu=0}^{2q_1}\pmatrix{2q_1 \cr \nu}
\vert k_1\vert^{2(2q_1-\nu)}\vert k_2\vert^{2\nu} \;
\omega_{q_1,q_2}^{(\nu)}$$ of states $$\omega_{q_1,q_2}^{(\nu)}(\,\cdot\,) = \frac{(2q_1-\nu)!}{(2q_1)!\nu!}\;
((Q^-_{-2q_1})^\nu\Omega,
\alpha_{q_1+q_2}(\,\cdot\,)(Q^-_{-2q_1})^\nu\Omega).$$ Since only the weights depend on the group element $k\in SU(2)$, the product state $\omega_{q_1}\times\omega_{q_2}$ is a finite convex sum of the same states $\omega_{q_1,q_2}^{(\nu)}$.
[*Proof:*]{} The first statement just summarizes the preceding discussion. We have $\omega_0{{\scriptstyle \circ}}\alpha_k =
(V(k)^*\Omega,\alpha_{q_1+q_2}(\,\cdot\,)V(k)^*\Omega)$, and $V(k)^*\Omega =(k_1^*)^{2q_1}\exp(-i\frac{k_2^*}{k_1^*}Q^-_{-2q_1})\Omega$ because $Q^a_n$ annihilate the vacuum for $n\geq 0$ (remember our choice $q_1\in\frac12{{\mathbb N}}_0$). The power series expansion of the exponential yields vectors $(Q^-_{-2q_1})^\nu\Omega$ with energy $2q_1\nu$ and Cartan charge (the eigenvalue of $Q^3_0$) $C=-\nu$. These vectors vanish for $\nu>2q_1$ because the vacuum Hilbert space $H$ of $B$ does not contain vectors with energy less than $C^2$. This fact is read off the following expression [@part] for the partition function for the vacuum representation: $${\hbox{Tr}}\exp(-\beta L_0-\eta Q^3_0)=\sum_{j\in{{\mathbb N}}_0}\sum_{m=-j}^j
z^m (t^{j^2}-t^{(j+1)^2})p(t)\qquad(z={\hbox{e}}^{-\eta},t={\hbox{e}}^{-\beta})$$ in which the power of $t$ is always at least the square of the power of $z$. Since $\alpha_{q_1+q_2}(L_n)$ does not change the Cartan charge $C$, the vectors $(Q^-_{-2q_1})^\nu\Omega$ have only diagonal matrix elements for $\alpha_{q_1+q_2}(A)$, showing the convex decomposition. The proper normalization of the states $\omega_{q_1,q_2}^{(\nu)}(1)=1$ can be checked recursively in $\nu$. [Q.E.D.]{}
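As a simple consistency check (our remark, not part of the original argument): the weights in the convex decomposition of Lemma 2.2 indeed sum to one, since $k\in SU(2)$ is unitary, so $\vert k_1\vert^2+\vert k_2\vert^2=1$, and the binomial theorem gives $$\sum_{\nu=0}^{2q_1}\pmatrix{2q_1 \cr \nu}
\vert k_1\vert^{2(2q_1-\nu)}\vert k_2\vert^{2\nu} =
\left(\vert k_1\vert^2+\vert k_2\vert^2\right)^{2q_1}=1.$$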
The problem has thus been reduced to the determination of the GNS representations $\pi_{q_1,q_2}^{(\nu)}$ for the states $\omega_{q_1,q_2}^{(\nu)}$. One can easily compute that these states are eigenstates of $L_0$ with energy $(q_1+q_2)^2-2\nu q_2$, but they are not ground states in general. It is therefore not possible to determine the sectors directly via their ground state energies. Instead, it turns out to be possible to compute the partition function for the representations induced by these states. This is our main result.
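The energy eigenvalue just quoted can be verified directly (our computation): the partition-function identity used in the proof below amounts to $\alpha_{q}(L_0)=L_0+2q\,Q^3_0+q^2$ with $q=q_1+q_2$, and the vector $(Q^-_{-2q_1})^\nu\Omega$ has $L_0$-eigenvalue $2q_1\nu$ and Cartan charge $-\nu$, so $$\alpha_{q_1+q_2}(L_0)\;:\quad
2q_1\nu - 2(q_1+q_2)\nu + (q_1+q_2)^2 = (q_1+q_2)^2-2\nu q_2.$$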
[**2.3. Proposition:**]{} Let $q_1\in\frac12{{\mathbb N}}_0$. If $q_2\notin\frac12{{\mathbb Z}}$, then $\pi_{q_1,q_2}^{(\nu)}$ is irreducible and belongs to the sector $[h=(q_1+q_2-\nu)^2]$. If $q_2\in\frac12{{\mathbb Z}}$, then $\pi_{q_1,q_2}^{(\nu)}$ is a direct sum of sectors from the set $\{[h=s^2] : s\in\vert q_1+q_2-\nu\vert+{{\mathbb N}}_0\}$.
[*Proof:*]{} The vector $(Q^-_{-2q_1})^\nu\Omega$ has Cartan charge $C=-\nu$. This value is not changed by application of $\alpha_{q_1+q_2}(L_n)$, hence $\pi_{q_1,q_2}^{(\nu)}$ is a subrepresentation of the representation $\alpha_{q_1+q_2}$ on the subspace $H_{C=-\nu}=P_{-\nu}H$ of Cartan charge $-\nu$ in the vacuum representation of $B$. The partition function for the latter representation is $${\hbox{Tr}}\, P_{{-\nu}}\exp(-\beta\alpha_{q_1+q_2}(L_0))=
{\hbox{e}}^{-(q_1+q_2)^2\beta}\cdot
{\hbox{Tr}}\, P_{{-\nu}}\exp(-\beta L_0 - 2(q_1+q_2)\beta Q^3_0).$$ From the previous expression for the vacuum partition function, we obtain $${\hbox{Tr}}\, P_{{-\nu}}\exp(-\beta L_0 - \eta Q^3_0)=z^{-\nu}t^{\nu^2} p(t)
\qquad(z={\hbox{e}}^{-\eta},t={\hbox{e}}^{-\beta})$$ by collecting the terms $z^{-\nu}$, and hence $${\hbox{Tr}}\, P_{{-\nu}}\exp(-\beta\alpha_{q_1+q_2}(L_0))=
t^{(q_1+q_2-\nu)^2}p(t).$$ If $q_1+q_2-\nu\notin\frac12{{\mathbb Z}}$, then this is the partition function of the irreducible sector $[h=(q_1+q_2-\nu)^2]$. Hence $\alpha_{q_1+q_2}(A)$ acts irreducibly on $H_{C=-\nu}$, and must coincide with its subrepresentation $\pi_{q_1,q_2}^{(\nu)}$. If on the other hand $q_1+q_2-\nu\in\frac12{{\mathbb Z}}$, then the above equals the sum of the partition functions $(t^{s^2}-t^{(s+1)^2})p(t)$ of the sectors $[h=s^2]$ with $s\in\vert q_1+q_2-\nu\vert+{{\mathbb N}}_0$. Thus $\pi_{q_1,q_2}^{(\nu)}$ is the direct sum of a subset of these sectors. [Q.E.D.]{}
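The coefficient extraction performed in this proof is a telescoping sum (our remark): collecting the terms with $z^{-\nu}$ in the vacuum partition function requires $m=-\nu$ and hence $j\geq\nu$, so $$\sum_{j\geq\nu}\left(t^{j^2}-t^{(j+1)^2}\right)p(t)=t^{\nu^2}p(t),$$ since all intermediate powers of $t$ cancel pairwise.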
As mentioned in the introduction, the product of states, computed here, might accidentally not exhaust the DHR product. But this degeneracy disappears if the positive map $\chi_{q_2}$ is perturbed by the adjoint action of some isometry $a\in A$. We note that the argument leading to Prop. 2.3 is in fact stable if $\chi_{q_2}$ is replaced by ${\hbox{Ad}}(a^*){{\scriptstyle \circ}}\chi_{q_2}$. Namely, because $a$ is $SU(2)$-invariant, one has ${\hbox{Ad}}(a^*){{\scriptstyle \circ}}\gamma_k{{\scriptstyle \circ}}\alpha_{q_2} = {\hbox{Ad}}(U(k)a^*){{\scriptstyle \circ}}\alpha_{q_2}$, so it is sufficient to replace in the above argument the vectors $(Q^-_{-2q_1})^\nu\Omega$ by the perturbed vectors $\alpha_{q_1}(a)(Q^-_{-2q_1})^\nu\Omega$ which still belong to $H_{C=-\nu}$. In the case $q_2\notin\frac12{{\mathbb Z}}$, the perturbed GNS representation $\pi_{q_1,q_2}^{(\nu)}$ will still belong to the irreducible sector $[h=(q_1+q_2-\nu)^2]$.
Thus, combining Lemma 2.2 with the Proposition, we obtain
[**2.4. Corollary:**]{} Let $q_1\in\frac12{{\mathbb N}}_0$ and $q_2\in{{\mathbb R}}\setminus\frac12{{\mathbb Z}}$. The fusion rules for the sectors $[h_i=q_i^2]$ are $$[h_1]\times[h_2]=\bigoplus_{\nu=0}^{2q_1}\;[h^{(\nu)}] \qquad\hbox{with}
\qquad h^{(\nu)}=(q_1+q_2-\nu)^2.$$
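For illustration (our example, with the simplest nontrivial choice $q_1=\frac12$, $h_1=\frac14$): for any $q_2\notin\frac12{{\mathbb Z}}$ the corollary gives the two-channel fusion $$[\tfrac14]\times[q_2^2]=[(q_2+\tfrac12)^2]\oplus[(q_2-\tfrac12)^2].$$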
Comments
========
We have studied the decomposition into irreducibles of the product of sectors (“fusion rules”) for the chiral stress-energy tensor with $c=1$. We succeeded in computing the fusion rules for two sectors with ground state energies $h_i$ where $[h_1]$ is a special sector, $h_1\in(\frac12{{\mathbb N}}_0)^2$, and $[h_2]$ belongs to the continuum of sectors, $h_2 \in{{\mathbb R}}_+\setminus(\frac12{{\mathbb N}}_0)^2$, Cor. 2.4. This result was not accessible by the prevailing methods for the computation of fusion rules. The case where both sectors belong to the continuum can in principle also be treated with the present method, but becomes technically very intricate.
When both sectors $[h_i]$ are special, we would have expected $SU(2)$-like fusion rules [@DHRfix] since the special sectors $[h=s^2]$, $s\in\frac12{{\mathbb N}}_0$, arise by restriction of the vacuum and spin-$\frac12$ representations of $B$ to the fixpoint algebra $A$ on the subspaces of $SU(2)$ charge $s$. This is, however, not reproduced by Prop. 2.3 and Lemma 2.2: Although the unperturbed states $\omega_{q_1,q_2}^{(\nu)}$ have finite energy and hence only finitely many of the possible sectors according to Prop. 2.3 really contribute to them, this limitation will disappear if $\chi_{q_2}$ is perturbed as described above. Moreover, if $h_i=s_i^2$ with $0<s_2<s_1$, the sectors $[h=s^2]$ with $0\leq s<\vert s_1-s_2\vert$ should not occur according to $SU(2)$, while they are not excluded by Prop. 2.3, and are really found to be present by more explicit computations.
This state of affairs has a simple explanation: For $q\in\frac12{{\mathbb Z}}$, the positive maps $\chi_q$ transfer not only the $SU(2)$ charge $s=\vert q\vert$ but in fact, as explained below, a mixture of all charges $s\in\vert q\vert+{{\mathbb N}}_0$. These admixtures are not seen if evaluated in the vacuum state (Lemma 2.1), but become visible if evaluated in a generic state of $A$, e.g., upon perturbation of $\chi_q$. The product states $\omega_0{{\scriptstyle \circ}}\chi_{q_1}{{\scriptstyle \circ}}\chi_{q_2}$, too, are sensitive to admixtures to $\chi_{q_2}$, which accounts for the presence of “too many” sectors contributing to the fusion rules as inferred from Lemma 2.2 and Prop. 2.3.
Let us explain why $\chi_{q}$ is capable of transferring the “wrong” charges if $q\in\frac12{{\mathbb Z}}$, but not if $q\notin\frac12{{\mathbb Z}}$, and why this is not in conflict with the statement in [@F] that the correspondence between states and positive maps is 1:1. The argument is very similar to the one in the proof of Prop. 2.3. If $\chi_q$ is evaluated in some perturbed state $\omega=(a\Omega,\,\cdot\,a\Omega)$ with $a\in A$, we have $\omega{{\scriptstyle \circ}}\chi_q=\omega{{\scriptstyle \circ}}\alpha_q$ since $a$ and $\omega_0$ are $SU(2)$ invariant. Thus the GNS representation $\pi_\omega$ for $\omega$ is a subrepresentation of the representation $\alpha_q$ on the subspace $H_{C=0}=P_0H$ of Cartan charge $C=0$ in the vacuum representation of $B$ (to which $a\Omega$ belongs). The partition function for this representation has been computed above (putting $q_1=0,\nu=0,q_2=q$): $${\hbox{Tr}}\, P_{0}\exp(-\beta\alpha_{q}(L_0))= t^{q^2}p(t).$$ This is the character of the irreducible representation $[h=q^2]$ if $q\notin\frac12{{\mathbb Z}}$, but is the sum of infinitely many irreducible characters for $[h=s^2]$, $s\in\vert q\vert+{{\mathbb N}}_0$, if $q\in\frac12{{\mathbb Z}}$.
By testing with suitable operators $a\in A{_{\rm global}}$, one finds that the “wrong” sectors are indeed present. Remember that the 1:1 correspondence between states and positive maps requires that the charge is strictly localized, while the automorphisms $\alpha_q$ in our analysis are only asymptotically localized (the derivative $\partial g_q(x)$ vanishes asymptotically). Of course our choice for $\alpha_q$ was dictated by the simplicity of the transformation formulae for $L_n$ and $Q^a_n$. The unpleasant feature of the wrong sectors is the price for that simplification.
The fusion rules in Cor. 2.4 are not affected by this complication.
This work is based on the Diploma Thesis of the second author [@T].
[99]{}
S. Doplicher, R. Haag, J.E. Roberts: [*Local observables and particle statistics I*]{}, Commun. Math. Phys. [**23**]{}, 199-230 (1971), and [*II*]{}, Commun. Math. Phys. [**35**]{}, 49-85 (1974).
S. Doplicher, R. Haag, J.E. Roberts: [*Fields, observables and gauge transformations I*]{}, Commun. Math. Phys. [**13**]{}, 1-23 (1969).
A.A. Belavin, A.M. Polyakov, A.B. Zamolodchikov: [*Infinite conformal symmetry in two-dimensional quantum field theory*]{}, Nucl. Phys. [**B 241**]{}, 333-380 (1984).
W. Nahm: [*Quasi-rational fusion products*]{}, Int. J. Mod. Phys. [**8**]{}, 3693-3702 (1994).
K. Fredenhagen, K.-H. Rehren, B. Schroer: [*Superselection sectors with braid group statistics and exchange algebras I*]{}, Commun. Math. Phys. [**125**]{}, 201-226 (1989), and [*II*]{}, Rev. Math. Phys. [**SI1**]{} (special issue), 113-157 (1992).
A. Wassermann: [*Operator algebras and conformal field theory III*]{}, Invent. Math. [**133**]{}, 467-538 (1998).
E. Verlinde: [*Fusion rules and modular transformations in 2D conformal field theory*]{}, Nucl. Phys. [**B 300**]{}, 360-376 (1988).
I.B. Frenkel: [*Representations of Kac-Moody algebras and dual resonance models*]{}, in: Lect. Notes Appl. Math. [**21**]{}, eds. M. Flato et al., AMS, Providence, RI, 1985, pp. 325-353.
K.-H. Rehren: [*A new view of the Virasoro algebra*]{}, Lett. Math. Phys. [**30**]{}, 125-130 (1994).
K. Fredenhagen: [*Product of states*]{}, in: Groups and Related Topics, eds. R. Gielerak et al., Kluwer Academic Press, Dordrecht, 1992, pp. 199-209.
V. Kac: [*Infinite Dimensional Lie Algebras*]{}, Birkhäuser Verlag, Basel, 1983.
M. Lüscher, G. Mack: [*Global conformal invariance in quantum field theory*]{}, Commun. Math. Phys. [**41**]{}, 203-234 (1975).
H.R. Tuneke: [*Produkt von Superauswahlsektoren des chiralen Energie-Impuls-Tensors mit $c=1$*]{}, Diploma thesis, Göttingen, 2000 (in German).
[^1]: Electronic address: [[email protected]]{}
[^2]: Electronic address: [[email protected]]{}
[^3]: It appears that one could also use the embedding $T=\pi :jj:$ of $A$ into a $U(1)$ current algebra $C$. The problem would be that the conditional expectation $\mu$ which takes the homomorphisms $\alpha_g: A\to C$ back onto $A$ in order to obtain $\chi = \mu{{\scriptstyle \circ}}\alpha_q\vert_A$ is not explicitly known in that case.