arXiv:1404.0385

Abstract: A substantial fraction of the lowest metallicity stars show very high enhancements in carbon. It is debated whether these enhancements reflect the stars' birth composition, or if their atmospheres were subsequently polluted, most likely by accretion from an AGB binary companion. Here we investigate and compare the binary properties of three carbon-enhanced sub-classes: the metal-poor CEMP-s stars that are additionally enhanced in barium; the higher metallicity (sg)CH- and Ba II stars, also enhanced in barium; and the metal-poor CEMP-no stars, not enhanced in barium. Through comparison with simulations, we demonstrate that all barium-enhanced populations are best represented by a $\sim$100\% binary fraction with a shorter period distribution of at most $\sim$20,000 days. This result greatly strengthens the hypothesis that a similar binary mass-transfer origin is responsible for their chemical patterns. For the CEMP-no group we present new radial velocity data from the Hobby-Eberly Telescope for 15 stars to supplement the scarce literature data. Two of these stars show indisputable signatures of binarity. The complete CEMP-no dataset is clearly inconsistent with the binary properties of the CEMP-s class, thereby strongly indicating a different physical origin of their carbon enhancements. The CEMP-no binary fraction is still poorly constrained, but the population more closely resembles the binary properties of the Solar Neighbourhood.

Introduction: The lowest metallicity stars that still exist today probably carry the imprint of very few supernovae. As such, they represent our best observational approach to understanding the First Stars. The number of metal-poor stars in the Galactic halo with abundance determinations has bloomed recently; now over 150 stars with [Fe/H] $< -3$ have been examined in high-resolution studies \citep[see for some recent overviews and results:][]{Aoki13, Yong13a, Cohen13, Spite13, Placco14}.
In addition to stars with a “normal” chemical composition (those continuing the well-defined trends of elemental abundances at higher metallicities), various chemically peculiar stars are found. The intriguing question is therefore whether these chemical anomalies tell us something about the very first stages of star formation. Probably the most significant chemical sub-group is that of carbon-enhanced metal-poor (CEMP) stars. The fraction of extremely metal-poor stars which are carbon-enhanced is as high as 32-39\% for stars with [Fe/H] $< -3$ \citep{Yong13b,Aoki13,Lee13}, which then decreases to 9-21\% for [Fe/H] $< -2.0$ \citep[e.g.,][]{Norris97a,Rossi99,Christlieb03,Marsteller05,Lucatello06,Cohen05,Frebel06,Carollo12,Lee13}\footnote{We caution that various definitions of carbon-enrichment are used by different authors. Generally, a limit of [C/Fe]=+1.0 or [C/Fe]=+0.7 is used, although some authors prefer a limit that is dependent on the luminosity of the star to take into account internal mixing of carbon on the red giant branch \citep[e.g.][]{Aoki07}.}. The fraction of CEMP stars is not just dependent on the metallicity, but also seems to vary with the distance above the Galactic plane \citep{Frebel06,Carollo12} and between the inner and outer halo component \citep{Carollo12,Carollo14}. Measurements of [C/Fe] for extremely metal-poor stars range all the way up to [C/Fe]$\sim+4.0$. A substantial fraction of CEMP stars also show an overabundance in heavy elements; these are generally called ``CEMP-s'', ``CEMP-r'', or ``CEMP-r/s'' depending on the exact abundance and ratio of s-process and r-process elements.
\citet{Beers05} have developed the following nomenclature that we will follow in this paper (note that the different classes as defined here are not necessarily mutually exclusive):
\begin{itemize}
\item{CEMP-r: [C/Fe] $>$ +1.0 and [Eu/Fe] $>$ +1.0}
\item{CEMP-s: [C/Fe] $>$ +1.0, [Ba/Fe] $>$ +1.0, and [Ba/Eu] $>$ +0.5}
\item{CEMP-r/s: [C/Fe] $>$ +1.0 and 0.0 $<$ [Ba/Eu] $<$ +0.5}
\item{CEMP-no: [C/Fe] $>$ +1.0 and [Ba/Fe] $<$ 0}
\end{itemize}
The large class of CEMP-s stars is thought to obtain their overabundant carbon and s-process elements from a companion star that has gone through the AGB phase and deposited large amounts of newly formed carbon and s-process material on its neighbour. Strong evidence in favour of this scenario was found from repeated radial velocity measurements. The fraction of stars that show significant velocity variability in the CEMP-s class is sufficiently high to comfortably support the claim that all such stars might indeed be in binary systems \citep[][and references therein]{Lucatello05}. Based on their abundance patterns, it has been suggested that the CEMP-r/s class has the same binary origin as the CEMP-s stars \citep[e.g.,][]{Masseron10,Allen12}, although this might mean that a different neutron-capture process with features in between the r- and s-process will need to be invoked \citep[][Herwig et al., in preparation]{Lugaro12} to account for its particular chemical signatures. An alternative explanation is that CEMP-r/s stars originate in regions already enriched in r-process elements and are subsequently enriched in s-rich material by a companion \citep[e.g.,][]{Bisterzo12}. CEMP-r stars are very rare, but \citet{Hansen11} studied one CEMP-r star in their careful radial velocity monitoring program of r-process-enhanced stars. This star, CS~22892--052, shows no sign of binarity. The origin of the CEMP-no class is debated \citep[e.g.][]{Ryan05,Masseron10,Norris13b}.
The absence of the signature s-process overabundance, which is thought to be produced in AGB stars just like the overabundant carbon, gives reason to believe these stars might not obtain their peculiar abundance pattern due to mass transfer in binary systems. Under the premise that CEMP-no stars are \textit{not} in binary systems, another explanation has to be offered for the overabundance of carbon and other light elements in these stars. One proposed origin for their chemical pattern is that these stars are truly second generation stars and formed from gas clouds already imprinted with a large overabundance of carbon and other light elements by the First Stars \citep[e.g.][]{Bromm03,Norris13b,Gilmore13}. The fact that almost all of the stars with [Fe/H]$\le$-4.0 are of the CEMP-no class seems to favor such an explanation. In particular, four out of the five stars known with [Fe/H]$<$-4.5 seem to be consistent with the CEMP-no class \citep[][although in some cases only an upper limit for barium could be derived]{Christlieb04,Frebel05,Aoki06,Norris07,Keller14}. Note also the exception reported by \citet{Caffau11}. On the other hand, it is yet insufficiently understood whether carbon could be transferred from an AGB companion without s-process elements \citep[e.g.,][]{Suda04}. Mass-transfer mechanisms that would transfer carbon -- but no or few s-process elements -- are theoretically expected from massive AGB stars with hot dredge-up, terminating the AGB process before the star had time to produce s-process elements \citep{Herwig04}, or some rotating AGB companions \citep{Herwig03,Siess04}. However, this result is dependent on the parameters adopted, as shown by \citet{Piersanti13}. \citet{Komiya07} argue that relatively high-mass AGB stars could be the companions of CEMP-no stars as they produce fewer s-process elements.
A problem with this scenario, however, is that these stars would produce a lot of nitrogen, a signature that not all CEMP-no stars share \citep[see for instance][]{Ito13,Norris13b}. It has also been suggested that in very low-metallicity AGB stars with very high neutron-to-Fe-peak-element seed ratios, the s-process runs to completion and a large overabundance of Pb is produced instead of Ba \citep{Busso99,Cohen06}. Because Pb absorption lines are very weak and the strongest line in the optical overlaps with the CH-feature, this hypothesis is difficult to test, especially in C-rich stars. A robust upper limit could nonetheless be given for the brightest CEMP-no star, BD +44-493, which did not show the predicted overabundance in Pb \citep{Ito13}. In analogy with the work on CEMP-s stars by \citet{Lucatello05} and others, we might be able to settle this debate using radial velocity monitoring. If CEMP-no stars are also products of binary evolution, this would show itself in radial velocity variations of the stars. From such an exercise, \citet{Norris13b} conclude that there is little support for a binary origin for CEMP-no stars, unlike for the CEMP-s stars. However, as the overview of the available literature data presented in Table 5 of \citet{Norris13b} makes clear, there is a lack of systematic radial velocity studies with sufficient accuracy and cadence to carry out a conclusive quantitative study. For 43\% of the stars there is only one radial velocity measurement available, making it impossible to tell whether they are part of a binary system. Most other stars have fewer than five measurements published in the literature. The two stars that have been most thoroughly researched, BD +44-493 and CS~22957-027, do show evidence for velocity variations, but these variations are comparable to the observational uncertainties in the case of BD +44-493.
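A standard way to quantify whether observed velocity variations exceed the measurement uncertainties is a $\chi^2$ test of the measured velocities against a constant-velocity model, yielding the probability that the scatter arises from measurement errors alone. A minimal sketch in Python (the weighting scheme and function names are illustrative, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy import stats

def binarity_probability(v, sigma):
    """p(chi^2 | f): probability that the scatter in radial velocities v
    (km/s) with 1-sigma errors sigma is due to measurement noise alone.
    A very small value flags a likely binary."""
    v, sigma = np.asarray(v, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    v_mean = np.sum(w * v) / np.sum(w)          # error-weighted mean velocity
    chi2 = np.sum(((v - v_mean) / sigma) ** 2)  # scatter about the mean
    f = v.size - 1                              # degrees of freedom
    return stats.chi2.sf(chi2, f)

# Hypothetical data: a constant-velocity star versus a clear variation
print(binarity_probability([101.2, 100.8, 101.0], [0.5, 0.5, 0.5]))  # ~0.85
print(binarity_probability([95.0, 110.0, 120.0], [0.5, 0.5, 0.5]))   # ~0
```

Inflating the errors (e.g. by a factor of 3) before computing the statistic gives a more conservative test of the same kind.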
Radial velocity data for eight more stars has in the meantime been added \citep[][also Andersen et al., in preparation]{Hansen13}. They find that two out of their sample of eight CEMP-no stars are in binary systems. Due to these small numbers, it is not at all clear if CEMP-no stars have binary companions, and the presence of a companion may well influence their evolution. This current scarcity of data severely limits our understanding of the very first epochs of star formation. In this work, we present additional radial velocity measurements for 15 CEMP-no stars. Additionally, we homogeneously analyze and model the binary fraction and period distribution of binaries in the CEMP-no class, the CEMP-s class and the -- much more metal-rich -- CH-, sgCH- and Ba II-stars. Based on the comparative binary properties of each of these classes, we then go on to discuss their nature. In Section \ref{sec:data} we present the new data from this work. In Section \ref{sec:radvel} we use these data for the CEMP-no stars to analyze velocity variations and constrain their binary properties. Section \ref{sec:sims} is devoted to comparison with simulations for the CEMP-no, CEMP-s and CH-, sgCH- and Ba II-stars. This analysis leads to various conclusions and hypotheses for the nature of these stars, as discussed in Section \ref{sec:disc}. \begin{table*} \footnotesize{ \begin{tabular}{|l|c|c|r|r|r|r|r|r|c|c|c|c|} \hline {Star} & Vmag & {T$_{\textnormal{eff}}$} &{logg} &{[Fe/H]} & {[C/Fe]}& {[Ba/Fe]} & {\# Vr} & {Source} & $p$($\chi^{2}|f$) & $p$($\chi^{2}|f$)& $p$($\chi^{2}|f$) & $p$($\chi^{2}|f$)\\ & & & & & & & lit. & & lit. & this work & both & both\\ & & (K) & & & & & meas. 
& &1$\sigma$ &1$\sigma$ &1$\sigma$ & 3$\sigma$\\ \hline 53327-2044-515\_d & 15.1 & 5703 & 4.68 & $-$4.00 & +1.13 &$<$ +0.34 & 3 & 1, 13 & 0.62 & 0.11 & 0.00 & 0.53\\ 53327-2044-515\_g & 15.1 & 5703 & 3.36 & $-$4.09 & +1.57 &$<-$0.04 & 3 & 1, 13 & -- & -- & -- & -- \\ BD~+44-493 & 9.1 & 5510 & 3.70 & $-$3.68 & +1.31 & $-$0.59 & 28 & 2, 14 & 0.02 & 0.92 & 0.00 & 1.00\\ BS~16929-005 & 13.6 & 5229 & 2.61 & $-$3.34 & +0.99 & $-$0.41 & 3 & 1, 7, 8 & 0.06 & 0.82 & 0.00 & 0.74\\ CS~22878-027 & 14.8 & 6319 & 4.41 & $-$2.51 & +0.86 & $<-$0.75 & 2 & 1, 8 & 0.00 & 0.58 & 0.04 & 0.98\\ CS~22949-037 & 14.4 & 4958 & 1.84 & $-$3.97 & +1.06 & $-$0.52 & 10 & 1, 4, 9-11, 16-19 & 0.21 & 0.72 & 0.32 & 1.00\\ CS~22957-027 & 13.6 & 5170 & 2.45 & $-$3.19 & +2.27 & $-$0.80 & 15 & 1, 5, 12, 20, 23 & 0.00 & 0.95 & 0.00 & 0.00\\ CS~29502-092 & 11.9 & 5074 & 2.21 & $-$2.99 & +0.96 & $-$1.20 & 3 & 1, 8 & 0.00 & 0.92 & 0.00 & 0.19\\ HE~1150$-$0428 & 14.9 & 5208 & 2.54 &$-$3.47 & +2.37 & $-$0.48 & 2 & 1, 5, 23 & 0.00 & 0.00 & 0.00 & 0.00\\ HE~1300+0157 & 14.1 & 5529 & 3.25 & $-$3.75 & +1.31 & $<-$0.85 & 4 & 1, 4, 6, 15 & 0.65 & 0.50 & 0.72 & 1.00\\ HE~1506$-$0113 & 14.8 & 5016 & 2.01 & $-$3.54 & +1.47 & $-$0.80 & 4 & 1, 13 & 0.30 & 0.00 & 0.00 & 0.00\\ Segue~1-7 & 17.7 & 4960 & 1.90 & $-$3.52 & +2.30 & $<-$0.96 & 1 & 3 & -- & 0.83 & 0.58 & 0.97\\ \hline SDSS~J1422+0031 & 16.3 & 5200 & 2.2 & $-$3.03 & +1.70 & $-$1.18 & 2 & 21, 22 & 0.01 & -- & 0.00 & 0.01 \\ SDSS~J1613+5309 & 16.4 & 5350 & 2.1 & $-$3.33 & +2.09 & +0.03 & 2 & 21, 22 & 0.40 & 0.46 & 0.74 & 0.99 \\ SDSS~J1746+2455 & 15.7 & 5350 & 2.6 & $-$3.17 & +1.24 & +0.24 & 2 & 21, 22 & 0.97 & 0.58 & 0.43 & 0.98\\ SDSS~J2206-0925 & 14.9 & 5100 & 2.1 & $-$3.17 & +0.64 & $-$0.85 & 2 & 21, 22 & 0.49 & -- & 0.63 & 0.95\\ \hline \hline \end{tabular} \caption{Overview of literature data and derived probabilities for binarity for the targeted sample of CEMP-no stars in this work. 
Shown here are the literature values for the V-magnitude of the stars, derived T$_{\textnormal{eff}}$ and $\log g$ (two possible solutions are given for 53327-2044-515), [Fe/H], [C/Fe] and [Ba/Fe]. The subsequent number of radial velocity literature measurements deviates slightly from the similar compilation of \citet{Norris13b} for a few stars. These differences arise because we count all individual measurements (even if they fall on the same or adjacent days) as long as the velocities per observation are given separately in the relevant literature. The last four columns show the derived probabilities that the observed scatter in velocities is due to measurement errors (see text for details) for the data in the literature and this work both separately and combined. The last column shows the probability for the combined dataset, but inflating $\sigma_{vr_{i}}$ by a factor of 3. References: 1 = \citet{Yong13a}, 2 = \citet{Ito09}, 3 = \citet{Norris10}, 4 = \citet{Cohen08}, 5 = \citet{Cohen06}, 6 = \citet{Frebel07a}, 7 = \citet{Honda04}, 8 = \citet{Lai08}, 9 = \citet{Cayrel04}, 10 = \citet{Spite05}, 11 = \citet{Francois07}, 12 = \citet{Norris97b}, 13 = \citet{Norris13a}, 14 = \citet{Carney03}, 15 = \citet{Barklem05}, 16 = \citet{Mcwilliam95a}, 17 = \citet{Mcwilliam95b}, 18 = \citet{Norris01}, 19 = \citet{Depagne02}, 20 = \citet{Preston01}, 21 = \citet{Aoki13}, 22 = SDSS SSPP DR9, 23 = \citet{Cohen13} \label{tab:overview}} } \end{table*}

Conclusions:

\begin{itemize}
\item{Binary properties in CEMP-no stars are marginally consistent with the observed Solar Neighbourhood binary fraction and periods.}
\item{\textit{If} CEMP-no stars are all in binaries, some of them have very long periods. A solution in which the binary fraction is lower, but the binaries have shorter periods, is also very likely.}
\item{Binarity of CEMP-s stars is well-modeled with an (almost) 100\% binary fraction and a maximum period $\sim$20,000 days.}
\item{CEMP-s stars and CH-stars share similar binary properties.
This places the hypothesis that CEMP-s stars are the lower metallicity equivalents of the CH-stars on a much firmer footing.}
\item{CEMP-no and CEMP-s stars have very different binary properties; it is therefore unlikely that their overabundance in carbon is obtained via the same physical mechanism.}
\item{The CEMP-no population is not the metal-poor equivalent of R-stars. A primary origin for the carbon enhancement remains very likely, although an origin from a binary companion using a mechanism that can operate for long-period systems and which does not transfer s-process elements cannot yet be ruled out. Another distinct possibility is that the CEMP-no class itself contains several physical sub-classes.}
\end{itemize}
arXiv:1404.0499

Abstract: Resonant scattering of energetic protons off magnetic irregularities is the main process in cosmic ray diffusion. The typical theoretical description uses Alfv\'en waves in the low frequency limit. We demonstrate that the usage of Particle-in-Cell (PiC) simulations for particle scattering is feasible. The simulation of plasma waves is performed with the relativistic electro-magnetic PiC code \textit{ACRONYM}, and the tracks of test particles are evaluated in order to study particle diffusion. Results for the low frequency limit are equivalent to those obtained with an MHD description, but only for high frequencies can results be obtained with reasonable effort. PiC codes have the potential to be a useful tool to study particle diffusion in kinetic turbulence.

Introduction: \label{sec:introduction} The transport of charged particles in the interstellar and interplanetary medium is governed by the scattering of those particles off magnetic irregularities. The complete system of plasma and charged particles is highly nonlinear and the description of the processes is, therefore, very complicated. In order to gain some understanding, the back reaction of energetic particles on the plasma is ignored in most cases. The scattering itself may be described by different methods. Since computers are readily available, different numerical models have been developed to describe particle scattering \citep{michalek_1996, qin_2002}. Most methods assume random fluctuations in which particle tracks are followed. The ansatz of {Lange et al.} \citep{lange_2013} assumed realistic turbulence. While this limits the extent of the spectrum of magnetic fluctuations, the physics is described correctly. We want to test whether this ansatz may also be used for non-MHD plasmas.

Conclusions: \label{sec:discussion} Our results show that PiC simulations are suitable to model wave-particle interactions in kinetic plasmas.
As a very important outcome of our analysis, we would like to stress that resonant interactions are independent of the artificial mass ratio used in PiC simulations. In the low frequency regime of dispersionless Alfv\'en waves we are able to reproduce the MHD results of {Lange et al.} \citep{lange_2013} and find scattering amplitudes in accordance with QLT predictions. These results do not yield any new insights regarding the physics of the problem, but are useful and necessary to validate our simulations and to investigate the effects of the artificial mass ratio. \\ \\ The more interesting test case presented in this article is the scattering of particles off dispersive waves, since this cannot be done in MHD simulations and the QLT does not yield the full information about the problem. Our simulation S\textrm{IV} indicates that resonant scattering off waves in the dispersive regime shows the same characteristic behavior in the scatter plots as is found in the first three simulations S\textrm{I}, S\textrm{II} and S\textrm{III}. Yet the accessible frequency range has an upper limit, since the waves are damped near the proton cyclotron frequency. \\ We propose to solve the problem of wave dissipation by changing the mechanism that excites the wave. Instead of initializing fields and particle velocities only at the beginning of the simulation, a constant input of energy in the form of electromagnetic fields matching the wave's polarization and propagation properties should be able to drive the wave mode over a period of time sufficiently long to establish resonant scattering. \\ \\ Although PiC simulations of wave-particle scattering within the dispersive regime of a wave mode appear to be promising and might lead to new results which cannot be obtained from MHD simulations, they are still very demanding in terms of computational resources.
The simulations S\textrm{IV} and S\textrm{V} presented in this article consumed several tens of thousands of CPU-hours each. Therefore, such simulations can only be conducted if plenty of computing time is available and if the process to be modeled allows for a rather low mass ratio. Also, the number of background particles should be kept low -- so noise in the background plasma might become an issue. \\ \\ In conclusion, it appears to us that a full-scale application of PiC simulations describing the scattering of protons off low-frequency waves in a truly physical setting -- i.e. with realistic solar wind parameters -- is not in sight. Although simple conceptual studies are feasible and reasonable, we see another potential benefit of PiC simulations in a related topic: Instead of using test protons and waves in the low frequency regime of the $L$-mode, we propose to probe the scattering of relativistic test electrons off Whistler waves. \\ Since electrons are lighter and faster than protons, the relevant time scales become shorter and fewer timesteps have to be carried out to describe scattering processes. Also, Whistler waves have higher frequencies and shorter wavelengths than low frequency (Alfv\'en) waves, meaning that the simulation size can be decreased. It is even possible -- or advisable -- to increase the mass ratio to its natural value, since protons should not contribute significantly to the transport characteristics of electrons. The protons' Larmor radii do not have to be resolved within the simulation box, because with heavier protons and thus slower proton motion, their gyromotion becomes negligible on electron time scales. Therefore, the simulation size is defined only by the wavelength and the electron Larmor radius. \\ As a crude estimate for a typical problem size, the setup of simulation S\textrm{V} provides a good indication.
Relative to S\textrm{V}, the number of timesteps can be reduced by a factor of $m_\text{p}/m_\text{e}=42.8$, since this is also the ratio of electron to proton cyclotron frequency. This also suggests that the computational cost for the simulation will come close to that of a typical MHD simulation by {Lange et al.} \citep{lange_2013}, as discussed at the end of Sect. \ref{sec:comparison}. \\ We are planning to carry out such a simulation in a follow-up project, since we are of the opinion that simulations of that kind are feasible and results can be obtained in a way similar to the procedure described in this article. \\ Furthermore, we see PiC simulations as a promising approach to study particle diffusion in kinetic turbulence. Recent findings by {Howes et al.} \citep{howes_2008} and {Che et al.} \citep{che_2014} show that magnetic turbulence can be reproduced in kinetic simulations, using either a gyrokinetic code \citep{howes_2008} or a 2.5-dimensional PiC code \citep{che_2014}. Although we appreciate the results obtained by \citet{che_2014}, we suggest fully three-dimensional PiC simulations, because the tracking of particle trajectories along magnetic field lines can only lead to realistic results when the magnetic field is fully developed in all spatial dimensions.
arXiv:1404.2987

Abstract: In this paper, we extend the use of the tip of the red giant branch (TRGB) method to near-infrared wavelengths from the previously-used $I$-band, using the \textit{Hubble Space Telescope (HST)} Wide Field Camera 3 (WFC3). Upon calibration of a color dependency of the TRGB magnitude, the IR TRGB yields a random uncertainty of $\sim 5\%$ in relative distance. The IR TRGB methodology has an advantage over the previously-used ACS $F606W$ and $F814W$ filter set for galaxies that suffer from severe extinction. Using the IR TRGB methodology, we obtain distances toward three principal galaxies in the Maffei/IC~342 complex, which are located at low Galactic latitudes. New distance estimates using the TRGB method are 3.45$^{+0.13}_{-0.13}$~Mpc for IC~342, 3.37$^{+0.32}_{-0.23}$~Mpc for Maffei~1 and 3.52$^{+0.32}_{-0.30}$~Mpc for Maffei~2. The uncertainties are dominated by uncertain extinction, especially for Maffei~1 and Maffei~2. Our IR calibration demonstrates the viability of the TRGB methodology for observations with the \textit{James Webb Space Telescope (JWST)}.

Introduction: Low-mass stars ($\lesssim 2$M$_{\odot}$) begin their main sequence life by burning hydrogen in their cores. As nuclear reactions turn hydrogen into helium, the hydrogen fraction in the core drops and the main energy production takes place in a hydrogen shell. The star gradually becomes brighter and redder. When it reaches the fully convective Hayashi limit, the color changes only slightly as the star evolves. The nearly vertical track on the color-magnitude diagram (CMD) is the so-called red giant branch (RGB). As a star climbs up the RGB, the hydrogen-burning shell dumps helium ashes into the degenerate helium core, raising the core temperature until it reaches $\sim 10^8$ K, at which point the triple-$\alpha$ helium burning process can ignite. As soon as the triple-$\alpha$ process is triggered, the star quickly moves away from the RGB.
Observationally, this leads to a sharp discontinuity of the luminosity function of the RGB of a galaxy \citep{baa44}. Since the physical conditions for triggering the triple-$\alpha$ process are well-studied, the luminosity of the star at the tip of the RGB (TRGB), and therefore the discontinuity, is predictable and can serve as a standard candle for measuring distances of galaxies. Stellar evolution theory indicates that the luminosity of the TRGB depends only weakly on age but significantly on the metallicity of the underlying stellar population \citep{ibe83}. Observationally, the age/metallicity dependencies translate into a color dependence of the luminosity of the TRGB, which needs to be calibrated empirically. In practice, a TRGB distance measurement requires observations in two bands. One is used to measure the discontinuity of the RGB luminosity function, while the other provides color information that separates RGB stars from bluer main sequence stars, as well as calibrates the age/metallicity variance of the luminosity of the TRGB. Currently, the TRGB method is mostly carried out in $I$-band, as the luminosity of the TRGB at this wavelength is relatively insensitive to metallicity \citep{sal00,mar08,bel04}, so uncertainties resulting from the empirical color calibration can be minimized. After calibrating the color dependencies of the luminosity of the TRGB, \citet{riz07} has demonstrated that, with a single \textit{Hubble Space Telescope} (HST) orbit with the Advanced Camera for Surveys (ACS), the distance of a galaxy within $\sim$ 10 Mpc can be obtained. Distances agree well with values derived from Cepheid variables. In spite of the great accuracy achieved by the optical TRGB methodology, there are still reasons to extend the TRGB method to infrared (IR) wavelengths.
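The discontinuity measurement described above is commonly implemented as an edge filter applied to the binned luminosity function (the classical Sobel-type approach). A minimal generic sketch, not necessarily the exact procedure used in the studies cited here:

```python
import numpy as np

def trgb_magnitude(mags, bin_width=0.05):
    """Locate the TRGB as the sharpest jump in star counts when moving
    from bright to faint magnitudes along the RGB luminosity function."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    lf, edges = np.histogram(mags, bins=bins)
    response = np.zeros_like(lf)
    response[1:-1] = lf[2:] - lf[:-2]   # [-1, 0, +1] edge filter
    i = int(np.argmax(response))        # bin with the strongest rise
    return 0.5 * (edges[i] + edges[i + 1])

# Toy luminosity function: a sparse AGB above the tip, a dense RGB below it
rng = np.random.default_rng(1)
mags = np.concatenate([rng.uniform(23.0, 24.0, 50),
                       rng.uniform(24.0, 26.0, 5000)])
print(trgb_magnitude(mags))  # close to the true tip at magnitude 24.0
```

In practice the filter is applied to a smoothed luminosity function and the tip uncertainty is estimated by bootstrap resampling, but the edge-detection idea is the same.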
The three-dimensional spatial information, together with the radial velocities of these galaxies, has provided precious information for dynamical studies probing the distribution of dark matter and dark energy in the nearby universe. However, while galaxies at low Galactic latitudes are dynamically equally important, their distribution remains under-explored due to severe obscuration. IR wavelengths offer a solution because the TRGB in IR is brighter and Galactic extinction is reduced. With an accurate calibration of TRGB magnitudes in the IR, distances of obscured galaxies can be obtained, providing a more complete three-dimensional map of the nearby universe. In addition, while HST is the current work horse for the TRGB methodology, its successor the James Webb Space Telescope (JWST) functions only in the IR. It will be necessary to migrate procedures from optical to IR wavelengths. Once JWST is operational, the capabilities of the TRGB method will be enhanced dramatically, allowing accurate distances to be measured to galaxies within the Virgo Cluster and beyond with short exposures. Thousands of galaxies will be within reach. Already with Wide Field Camera 3 (WFC3) installed on HST, the facility exists to work in the infrared domain. An important demonstration of its possibilities is discussed by \citet{dal12b}. With WFC3 $F110W$ and $F160W$ filters, which approximately correspond to $J$ and $H$ bands, these authors observed 23 nearby galaxies ($2.0 \mbox{ Mpc} \lesssim D \lesssim 4.5 \mbox{ Mpc}$), which had already been studied in detail at HST optical bands, mostly with ACS. That paper includes a study of the dependencies of the luminosity of the TRGB, with cognizance of the potential utility as a distance indicator. However, \citet{dal12b} focus on the TRGB in the $F160W$ band where luminosities are the greatest at wavelengths accessible to WFC3 but metallicity effects are substantial. 
Our interest in the current study is to calibrate the TRGB methodology in the $F110W$ band. It will be demonstrated that changes in the luminosity of the TRGB with color are cut in half at $F110W$ compared with $F160W$. A color calibration is required for working at $F110W$, but uncertainties are reduced. In this paper, we first present the calibration of the color dependencies of TRGB magnitudes in both $F110W$ and $F160W$. We then apply our calibration to obtain distances of galaxies in the Maffei/IC~342 complex, an entity that is important because it produces the greatest tidal influence on the Local Group \citep{dun93}. We describe the data used in this paper and the data reduction in Section~2. Calibration of the color dependency of TRGB magnitudes is presented in Section~3. We compare distances derived from IR TRGB and optical TRGB methods in Section~4. The process of measuring distances to members of the Maffei/IC~342 group is discussed in Section~5. Section~6 gives the summary. All magnitudes in this paper are Vega magnitudes.

Conclusions: In this paper, we calibrate the color dependency of the TRGB magnitudes in \textit{HST} WFC3 $F110W$ and $F160W$ filters. The IR TRGB provides an alternative to the commonly-used $I$-band with the benefits that the TRGB is brighter in the IR and dust extinction is reduced. In each band, we approximate the color dependencies of the TRGB magnitudes by two linear relations for the low and high metallicity regimes, respectively. In spite of a stronger color dependency, the TRGB magnitudes at IR wavelengths still provide good distance measures. The distance from the IR TRGB method yields a $\sim$5\% relative uncertainty (extinction aside) and is given a zero point that agrees with $F814W$ TRGB distances. In the high metallicity regime, possibly due to the line-blanketing effect, the color dependency is stronger and the rms uncertainties of the calibration are larger.
We therefore suggest using the low metallicity regime for distance determinations by obtaining observations toward the halos of target galaxies. We demonstrate that the IR TRGB method has an advantage over the $F814W$ TRGB method when the Galactic dust extinction is severe. Using the IR TRGB method, we derive distances toward three principal galaxies in the Maffei-IC~342 complex: IC~342, Maffei~1 and Maffei~2. These galaxies suffer from severe Galactic dust extinction, especially Maffei~1 and Maffei~2, whose $F814W$ TRGB magnitudes are not detectable in our observations. With the IR TRGB method, new distance estimates from the $F110W$ TRGB are 3.45$^{+0.13}_{-0.13}$~Mpc for IC~342 (after averaging the $F814W$ TRGB distance), 3.43$^{+0.32}_{-0.23}$~Mpc for Maffei~1 and 3.52$^{+0.32}_{-0.30}$~Mpc for Maffei~2. The dominant source of uncertainty is the uncertain Galactic extinction, especially for Maffei~1 and Maffei~2. In the near future, with surveys such as Gaia or PanSTARRS covering low Galactic latitudes, a more accurate reddening map will be possible. Combined with the IR TRGB method, the accuracy of distances to objects located in the Zone of Avoidance can be improved. The next generation space telescope, $JWST$, will work only at IR wavelengths. With the IR TRGB magnitudes calibrated, once $JWST$ is operational, the power of the TRGB method will be enhanced dramatically.
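The final step from a measured apparent TRGB magnitude to a distance is the standard distance-modulus relation. A minimal sketch (the numerical values below are illustrative placeholders, not the calibration derived in this paper):

```python
def trgb_distance_mpc(m_trgb, M_trgb, extinction):
    """Distance in Mpc from the apparent TRGB magnitude m_trgb, the
    absolute TRGB calibration M_trgb and the foreground extinction in
    the same band (all in magnitudes)."""
    mu = (m_trgb - extinction) - M_trgb      # true distance modulus
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e6  # d = 10^(mu/5 + 1) pc, in Mpc

# Illustrative numbers only (roughly near-IR-like TRGB brightness):
print(round(trgb_distance_mpc(23.2, -4.5, 0.0), 2))  # ~3.47 Mpc
```

The formula makes the extinction sensitivity explicit: an extinction error of 0.1 mag shifts the distance by about 5%, which is why the reduced IR extinction matters at low Galactic latitudes.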
{% We present the result of a study on the expansion properties and internal kinematics of round/elliptical planetary nebulae of the Milky Way disk, the halo, and of the globular cluster M\,15. The purpose of this study is to considerably enlarge the small sample of nebulae with precisely determined expansion properties (Sch\"onberner et al. \cite{SJSPCA.05}). To this aim, we selected a representative sample of objects with different evolutionary stages and metallicities and conducted high-resolution \'echelle spectroscopy. In most cases we succeeded in detecting the weak signals from the outer nebular shell which are attached to the main line emission from the bright nebular rim. Next to measuring the motion of the rim gas by decomposing the main line components into Gaussians, we were able to measure separately, for most objects for the first time, the gas velocity immediately behind the leading shock of the shell, i.e. the post-shock velocity. We more than doubled the number of objects for which the velocities of both rim and shell are known and confirm that the overall expansion of planetary nebulae is accelerating with time. There are, however, differences between the expansion behaviour of the shell and the rim: The post-shock velocity starts at values as low as around 20 \kms\ for the youngest nebulae, just above the AGB wind velocity of $\sim$\,10--15 \kms, and reaches values of about 40 \kms\ for the nebulae around hotter central stars. In contrast, the rim matter is at first decelerated below the typical AGB-wind velocity and remains at about 5--10 \kms\ for a while until finally a typical flow velocity of up to 30~\kms\ is reached.
This observed distinct velocity evolution of both rim and shell is explained by radiation-hydrodynamics simulations, at least qualitatively: It is due to the ever changing stellar radiation field and wind-wind interaction, together with the varying density profile ahead of the leading shock, during the progress of evolution. The wind-wind interaction works on the rim dynamics, while the radiation field and the upstream density gradient are responsible for the shell dynamics. Because of these time-dependent boundary conditions, a planetary nebula will never evolve into a simple self-similar expansion. Also the metal-poor objects behave as theory predicts: The post-shock velocities are higher and the rim flow velocities are equal or even lower compared to disk objects at similar evolutionary stages. The old nebulae around low-luminosity central stars contained in our sample still expand fast and are dominated by reionisation. We detected, for the first time, an asymmetric expansion behaviour in some objects: The relative expansions between rim and shell appear to be different for the receding and approaching parts of the nebular envelope. \vspace{1mm} }
There is, however, no direct interaction of the fast wind with the former AGB matter: The wind is thermalised by a strong shock, and the system's steady attempt to achieve pressure balance between the ionised shell and this shocked wind material on one side, and between the shell and the still undisturbed AGB wind on the other side, is responsible for the formation of what we call a planetary nebula and its evolution with time. In this view a PN is not simply evidence of matter ejected from the stellar surface but can instead be described as a thermally driven shock wave running through the ambient AGB wind envelope, starting at the inner edge of this envelope and powered by heating due to photo-ionisation. Morphology and kinematics of PNe are thus the result of shock waves initiated by ionisation and modified by wind interaction, and their physics may not be described adequately by static models.

The expansion of a PN is usually measured by the Doppler split of the components (if the line is resolved) or the line width (if unresolved) of strong emission lines. Since the pioneering work of Wilson (\cite{Wi.50}) it is known that the velocity field within a planetary nebula is not uniform. Instead, one has to assume that the flow velocity generally increases with distance from the central star. Because of this (positive) velocity gradient, lines of ions with different ionisation potentials exhibit different line splits if the ionisation within the nebular shell is stratified.

Although multiple shells were already known from photographic images, observations with modern, sensitive CCD detectors revealed many more PNe which consist of up to three distinct shells. The inner two shells make up the PN proper, and Frank, Balick \& Riley (\cite{FBR.90}) coined the terms ``rim'' for the inner, bright shell and ``shell'' for the outer one which is usually fainter. The rim encloses an inner cavity containing the hot shocked wind gas from the central star.
The third, extremely faint but mostly round region which embraces the shell is called the ``halo'' and consists of the ionised AGB wind. A recent compilation of PNe with detected halos is that of Corradi \etal\ (\cite{CSSP.03}), but see also Frew, Boji\v{c}i\'c \& Parker (\cite{frewetal.12}). An example of the typical morphology of a multiple-shell PN is rendered in Fig.\,\ref{ngc2022}. Since we are dealing with the two main nebular shells only, we also use the term ``double-shell'' planetary in the following. \begin{figure} \includegraphics[width=0.92\columnwidth]{ngc6826_image.eps} \caption{\label{ngc2022} Combination of two \oiii\ images of the multiple-shell planetary nebula \object{NGC 6826} with different (logarithmic) intensity scales for inset and main image (see Corradi \etal\ \cite{CSSP.03} for details). The inset picture shows the PN proper, consisting of rim and shell. The nebula is expanding into the rather large, only slowly expanding spherical halo consisting of ionised AGB-wind matter. \vspace{-2mm} } \end{figure} While the density structure can be deduced from monochromatic images, the velocity field requires high-resolution spectroscopy of the emission lines, and the interpretation in terms of the internal velocity field is anything but straightforward. One reason is intrinsic to the objects: Density structure and velocity field are intertwined such that a measured line split along the central line-of-sight probes the matter velocity at the radial position with the largest emission measure; more precisely, from the line split one derives only an average velocity, where the average along the line-of-sight is weighted by density squared. The other reason is that normally the emission from the shell is rather weak and escapes detection in spectrograms of poor quality. This is very unfortunate since the shell usually contains most of the mass, and the fastest moving matter, of the whole object!
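The density-squared weighting can be made concrete with a toy line of sight; the profiles below are assumed shapes for illustration, not a model of any particular object:

```python
import numpy as np

# Toy line of sight through an expanding shell (assumed profiles):
r = np.linspace(1.0, 2.0, 500)        # radius, arbitrary units
rho = r**-2.0                         # assumed density falloff
v = 10.0 + 20.0*(r - 1.0)             # assumed velocity law, km/s

# The emission measure scales with density squared, so a measured line
# split probes the density-squared-weighted mean velocity:
w = rho**2
v_obs = np.sum(w*v) / np.sum(w)       # ~15.7 km/s for these profiles

# The unweighted mean velocity is 20 km/s: the line split is biased
# toward the dense, slowly moving inner gas and underestimates the
# velocity of the fast outer shell matter.
```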
According to Chu, Jacoby \& Arendt (\cite{CJA.87}), at least 50\,\% of all round/elliptical PNe appear to be of the mul\-tiple-shell (or double-shell) type. By means of high-resolution \'echelle spectrograms, Chu et al.\ (\cite{CKKJ.84}) and Chu (\cite{Chu.89}) also detected the typical spectroscopic signatures of the shell as weak (outer) shoulders attached to the main emission-line profile originating from the bright rim, and noted that there are obviously two different expansion modes one has to deal with. Normally, the shell is faster than the rim, but exceptions are also known (cf.\ Sabbadin et al.\ \cite{SBH.84}). A good example for normal expansion is the well-known PN NGC 6826, for which Chu et al. (\cite{CKKJ.84}) deduced an expansion velocity of 27~\kms\ for the shell from the outer, weak shoulders of the line profile, in contrast to 8~\kms\ for the rim from the strong, doubly peaked central profile components (cf. also our Fig.~\ref{line.analyse} in Sect.~\ref{disk.PN}). \begin{figure*} \hskip-1mm \includegraphics[width=0.99\textwidth]{tautenburg_fig_2.eps} \vskip-2mm \caption{\label{mod.prop} Example of stellar and wind properties used in our simulations as inner boundary conditions. {\em Left panel}: evolutionary path for a 0.595~M$_{\odot}$ post-AGB model with the post-AGB ages indicated; \emph{middle panel}: the corresponding time evolution of the bolometric, $L_{\rm bol}$ (solid), Lyman continuum, $L_{<912}$ (dotted), and mechanical (wind) luminosity, ${L_{\rm wind}=\dot{M}\, V^2/2}$ (dashed), all in units of \Lsun. {\em Right panels}: mass-loss rate $\dot{M}$ (\emph{top}) and wind velocity $V$ (\emph{bottom}). The end of the strong AGB wind sets the zero point of the post-AGB evolution and the beginning of the much weaker post-AGB wind, modelled first by the prescription of Reimers (\cite{R.75}).
Later, for ${T_{\rm eff}\ge 25\,000}$~K, or post-AGB ages $\ge\!2{\times}10^3$ years, the theory of radiation-driven winds as formulated by Pauldrach et al. (\cite{pauldrachetal.88}) is used for the rest of the evolution. \vspace{-0mm} } \end{figure*} Important more recent observational studies concerning the structure and kinematics of PNe, partly employing (static) nebula models together with high-resolution line profiles, are those of Stanghellini \& Pasquali (\cite{StaPa.95}) (only structure), Gesicki, Acker \& Szczerba (\cite{GAZ.96}), Gesicki et al.\ (\cite{GZAS.98}), Guerrero, Villaver \& Manchado (\cite{GVM.98}), Gesicki \& Zijlstra (\cite{GeZi.00}) (only kinematics), Neiner et al.\ (\cite{NAGS.00}), and Gesicki, Acker \& Zijlstra (\cite{GAZ.03}). Especially interesting is the study by Sabbadin et al. (\cite{Sabbetal.04}) on \object{NGC 7009}, in which two distinct velocity laws for rim and shell could be derived. All these studies agree upon the facts that \begin{enumerate} \item the expansion velocity does not necessarily increase linearly with distance from the central star, and that \item often the gas velocity reaches a local minimum roughly at the rim/shell interface, i.e. rim and shell have very distinct expansion behaviours, and usually the shell matter reaches the highest expansion velocities. \end{enumerate} One can conclude from these studies that planetary nebulae \emph{do not} expand according to a ${v(r)\propto r}$ law, although this assumption is still often used to construct spatiokinematical models (but see also Steffen \& L\'opez \cite{wsteffen.06}; Steffen, Garc\'ia Segura \& Koning \cite{wsteffen.09}). It was realised early on that deciphering the expansion behaviour of PNe is important for understanding their formation and evolution.
For instance, the total lifetime, or visibility time, of PNe is an important quantity for determining the total PN number of a stellar population, either by observations or theoretically by stellar population synthesis calculations (cf. Moe \& de Marco \cite{moedemarco.06}). This, however, is an uncertain endeavour as long as the internal kinematics and especially the ``true'' expansion velocity of PNe are not known. A recent discussion of this subject can be found in Jacob, Sch\"onberner \& Steffen (\cite{jacobetal.13}). Also, the problem of individual distances is an obstacle if observed peak line separations or line half widths are plotted against nebular radii, i.e. if one wants to deduce any evolution of the expansion with time. From the early works of Bohuski \& Smith (\cite{BoS.74}) and Robinson et al. (\cite{RRA.82}) a certain trend of ``expansion'' velocities with radii is detectable (see, e.g., Fig.~2 in Bianchi \cite{Bi.92} for a more recent study of this kind), but the only safe statement one can make is that PNe start their evolution with comparably low expansion rates. How and why this expansion increases with time remained quite obscure. These analyses were additionally hampered by the fact that no distinction regarding nebular morphology and/or central-star type (Wolf-Rayet vs. \hbox{O-type}) was made. The first convincing observational evidence of the increase of nebular expansion rates (as measured from half widths of strong emission lines) with time or evolution was presented by Dopita \& Meatheringham (\cite{dopetal.91}) for Magellanic Cloud PNe and by M\'endez, Kudritzki \& Herrero (\cite{MKH.92}) for Milky Way objects. In both studies a systematic increase of the emission line widths with stellar temperature was found, based on the (distance-independent) central-star temperatures as a proxy of evolution. Medina et al.
(\cite{medetal.06}) discriminated, for the first time, between WR and normal spectral types of the central stars and found indications that nebulae around WR central stars expand faster than those around O-type central stars. Dopita \& Meatheringham (\cite{dopetal.91}) and also Medina et al. (\cite{medetal.06}) used the 10\,\% level of the line profile in order to get hold of the fastest expanding matter. Richer \etal\ (\cite{richeretal.08}, \cite{richeretal.10}) studied the kinematics of PNe of the Milky Way bulge and also found that the HWHM velocities increase with the pace of evolution. These authors used various distance-independent indicators to discriminate the evolutionary states, such as the strength of \ion{He}{ii} $\lambda$4686 \AA. \begin{figure*}[t] \vskip-2mm \hskip-5mm \includegraphics[width=0.99\textwidth]{tb_ngc6826_pnmodel.eps} \vskip-2mm \caption{\label{model.595} Snapshot of a typical middle-aged nebular model from a 1D radiation-hydrodynamics simulation around a 0.595 \Msun\ central star whose properties are shown in Fig.\,\ref{mod.prop} (cf. Sch\"onberner et al. \cite{SchJSt.05}). The stellar parameters are: post-AGB age ${t= 6106}$ yr, $\Teff = 80177$ K, and $L = 5.057{\times}10^3$ \Lsun. \emph{Left panel}: heavy particle density (thick), electron density (dotted), and gas velocity (thin); \emph{middle panels}: (normalised) surface brightnesses in \oiii\ $\lambda$5007 \AA\ and \nii\ $\lambda$6583 \AA; \emph{right panels}: the corresponding normalised line profiles computed for the central line-of-sight with infinite spectral resolution (dotted) and broadened with a Gaussian of 6 \kms\ FWHM (solid), both with a circular aperture of ${1\!\times\!10^{16}}$ cm. The thick vertical marks (\emph{left and middle}) indicate the positions of the leading shocks of the rim and shell, respectively.
The nebular mass enclosed by the rim's shock is $M_{\rm rim} = 0.07$ \Msun, that enclosed by the shell's leading shock is $M_{\rm shell} = 0.47$ \Msun\ (= total nebular mass). \vspace{-0mm} } \end{figure*} \subsection{Theoretical considerations} \label{theo.cons} Radiation-hydrodynamics simulations provide a rather detailed picture of how a PN is being formed (see, e.g., Schmidt-Voigt \& K\"oppen \cite{SK.87a, SK.87b}; Marten \& Sch\"onberner \cite{MS.91}; Mellema \cite{Mellema.94}, \cite{Mellema.95}; Villaver et al. \cite{villetal.02}; Perinotto et al. \cite{PSSC.04}): Ionisation creates a rarefaction wave (the shell) which expands into the ambient medium (the former AGB wind), led by a shock. The innermost, only very slowly expanding part of this wave is being compressed and accelerated into a dense rim by the (thermal) pressure of the so-called ``hot bubble'' which consists of wind matter from the central star heated to very high temperatures (${\simeq\!10^6}$--${>\!10^7}$ K) by the reverse wind shock and is separated from the nebula proper by a contact discontinuity.\footnote {Mellema (\cite{Mellema.95}) used a different notation: The rim is called ``W-shell'' (W\,=\,wind) because it is the signature of interactions between the fast stellar wind and the older, slowly expanding former AGB material, and our shell is named ``I-shell'' (I\,=\,ionisation) because it is originally generated by ionisation. Consequently, the rim is bounded by a ``W-shock'', and the shell by an ``I-shock''.} Kinematics and shape of the wind-compressed rim are controlled by the wind power of the central star and the density and velocity of the ambient medium, which is here given by the low-velocity inner tail of the shell (Koo \& McKee \cite{komc.92}).
In contrast, the propagation speed of the shell's shock, $\dot{R}_{\rm out}$, is exclusively determined by the electron temperature ${(\dot{R}_{\rm out} \propto \sqrt{T_{\rm e}})}$ \emph{and}\/ the upstream density gradient (Franco, Tenorio-Tagle \& Bodenheimer \cite{FTB.90}; Chevalier \cite{ch.97}; Shu et al. \cite{SLGCL02}). Thus, the typical PN consists, next to the hot bubble which is only seen in X-rays, of two important dynamical subsystems, the shell and the rim. In the simplest case of a spherical configuration, morphology and kinematics are ruled by (i) the radial run of the density gradient of the ambient matter together with a changing nebular electron temperature, and (ii) the evolution of stellar wind power and radiation field with time. This means that the kinematics of shell and rim are expected to be quite independent of each other: The shell's expansion depends mainly on the previous mass-loss rate variation along the tip of the AGB because the electron temperature increases only slowly with evolution, while the expansion of the rim depends mainly on the stellar wind evolution and, to a lesser extent, on the shell's expansion properties. As the shell's shock expands faster than the rim, the shell becomes diluted while at the same time the slowly moving rim becomes much denser and brighter: the typical double-shell structure emerges very soon. The real expansion speed of a PN is, of course, defined by the propagation of the shell's leading shock, but this velocity cannot be measured spectroscopically! For more details, see Sch\"onberner \etal\ (\cite{SchJSt.05}, \cite{SJSPCA.05}, \cite{SchJSaSt.10}).
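For orientation, the $\sqrt{T_{\rm e}}$ scaling is that of the isothermal sound speed of the photoionised gas, with which the shock speed scales; a sketch with typical numbers (the mean molecular weight is an assumed round value):

```python
import math

# Isothermal sound speed c_s = sqrt(k*T_e / (mu*m_H)) of photoionised gas;
# the shell's leading shock propagates at a speed scaling with c_s.
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # hydrogen atom mass, kg
T_E = 1.0e4          # typical nebular electron temperature, K
MU = 0.6             # mean molecular weight of ionised gas (assumed value)

c_s = math.sqrt(K_B*T_E/(MU*M_H)) / 1.0e3   # in km/s, ~12 km/s
```

Post-shock velocities of 20--40 \kms\ are thus clearly supersonic with respect to the slowly expanding AGB wind into which the shell runs.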
\begin{figure*} \vskip-1mm \hskip-2mm \includegraphics*[bb= 0cm 0.7cm 20cm 10cm, width=0.97\textwidth]{tb_profile_theory.eps} \caption{\label{decomposition} Receding-shell components of the normalised lines \nii\ $\lambda$6583 \AA\ (\emph{next to left}) and \oiii\ $\lambda$5007 \AA\ (\emph{next to right}), computed (black dashed) for the model nebula shown in the \emph{middle panel} and decomposed into the different contributions of rim (red) and shell (blue). The model is the same as displayed in Fig.~\ref{model.595}, but here we render the velocity profile only (thick solid red and blue), supplemented by the normalised densities of N$^+$ (orange dotted) and O$^{++}$ (green dotted). Thin lines indicate the velocity field belonging to the hot bubble and the halo. The two outermost panels display the derivatives of the two line profiles. The (relative) minima which belong to the profiles' inflexion points at 30 \kms\ are marked by the horizontal dashed line. The value of the model's post-shock velocity is 29.5 \kms. \vspace{-1.5mm} } \end{figure*} During its expansion, a PN ``sees'' orders-of-magnitude changes of the stellar UV-radiation field and the wind power. An illustration is given in Fig.~\ref{mod.prop} for a 0.595~\Msun\ post-AGB model where the changes of important stellar quantities are displayed: luminosities of the UV radiation and the wind (middle panel), mass-loss rate and wind velocity (right panel).\footnote {\changed{All model sequences shown in this work are based on the \textsc{nebel} code whose physical and numerical details are described in Perinotto et al. (\cite{PKSM.98}). The atomic data used for the (time-dependent) ionisation/recombination calculations are listed in Marten \& Szczerba (\cite{MS.97}).
An update of these data has only a negligible influence on the hydrodynamical properties of the models and is not necessary as long as one is not interested in the determination of chemical abundances.} } Noticeable is the following: While the power of the UV-radiation field increases rapidly with time (and effective temperature), the wind power increases more gradually with time, from a value as low as $\simeq$10$^{-4} L_{\rm bol}$ to a maximum value of about $\simeq$10$^{-2.5} L_{\rm bol}$ at the end of the horizontal part of the evolution through the HRD, at about maximum stellar temperature. This increase of the wind power is entirely due to the growing wind speed as the star shrinks, which more than compensates for the decrease of the mass-loss rate: the wind velocity grows from typical AGB-wind values up to 10\,000 \kms\ at the white-dwarf stage. Only beyond maximum stellar temperature do we see a rapid decline of the wind power, in parallel with the drop of stellar luminosity, while the wind speed remains constant. The low wind power at the beginning suggests that formation and early shaping of PNe is mainly ruled by photoionisation and not by interacting winds, as is the general assumption in the literature. The effect of photoionisation becomes even more important in systems with low metallicity, hence also with weaker winds, as demonstrated by Sch\"onberner \etal\ (\cite{SchJSaSt.10}). An illustration of the typical nebular structure and the corresponding observable quantities like radial intensity and line profiles that are predicted by 1D hydrodynamical simulations is rendered in Fig.\,\ref{model.595}. This particular model belongs to a radiation-hydrodynamics PN simulation around a 0.595~\Msun\ central star whose properties are illustrated in Fig.~\ref{mod.prop}. The initial shell configuration is based on hydrodynamical simulations of dusty AGB envelopes during the final AGB evolution (see Steffen \etal\ \cite{steffenetal.98} for details).
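The mechanical wind luminosity plotted in Fig.~\ref{mod.prop} follows directly from ${L_{\rm wind}=\dot{M}V^2/2}$; a sketch with illustrative round numbers (not values taken from the model sequence) showing how the growing wind speed overcompensates the falling mass-loss rate:

```python
# L_wind = 0.5 * Mdot * V^2, evaluated in solar luminosities.
MSUN = 1.989e30   # solar mass, kg
YR = 3.156e7      # seconds per year
LSUN = 3.828e26   # solar luminosity, W

def wind_luminosity_lsun(mdot_msun_per_yr, v_kms):
    mdot = mdot_msun_per_yr*MSUN/YR            # mass-loss rate in kg/s
    return 0.5*mdot*(v_kms*1.0e3)**2 / LSUN

# Illustrative values only (not the model's boundary conditions):
L_agb = wind_luminosity_lsun(1.0e-4, 12.0)     # dense, slow AGB wind
L_pagb = wind_luminosity_lsun(1.0e-8, 1.0e4)   # tenuous, fast post-AGB wind

# Despite a mass-loss rate 10^4 times smaller, the fast wind carries
# far more mechanical power: L_pagb/L_agb ~ 70 for these numbers.
```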
Theoretical line profiles are computed for central lines of sight with a numerical aperture of 1$\times$10$^{16}$ cm radius, broadened by a Gaussian with a FWHM of 6 \kms\ to mimic a typical finite spectral resolution. More details about this particular sequence can be found in Sch\"onberner et al. (\cite{SchJSt.05}, Figs. 1, 2, and 3 therein). Figure \ref{model.595} renders a typical middle-aged and fully ionised nebular structure where we see, next to the ionisation-generated shell, an already well-developed rim. The shell is somewhat diluted because it has already expanded into the former AGB wind, while the rim is of much higher density because of the bubble's comparatively high pressure. Consequently, both the rim and the shell show a very distinct behaviour in their radial intensity distributions and their contributions to the line profiles. Note especially the rather faint signatures of the shell in \oiii\ in both the brightnesses and line profiles, although the shell contains most of the nebular mass, i.e. 85\,\% of the 0.47 \Msun\ of ionised matter contained in the PN model shown in the left panel of Fig.\,\ref{model.595}. Note also the rather low flow velocities within the rim, ${\approx\!12}$ \kms, as compared to the post-shock velocity of the shell, 29.5 \kms. It is interesting to look in more detail at the different contributions from rim and shell to the total emission profiles seen in the right panels of Fig.~\ref{model.595}. Therefore, we have numerically decomposed the total nebular line emission of this model snapshot into the contributions from rim and shell and show them, together with the relevant model structures (velocity field and ion densities), in Fig.~\ref{decomposition}. First of all, the rim contribution is similarly strong and narrow in both ions and reflects the low velocities but high densities of the rim gas. This line contribution can well be approximated by a Gaussian profile.
The shell contribution is weaker but much wider because of the velocity spread within the shell: the larger this velocity spread, the wider the shell contribution in velocity space (cf. middle panel of Fig.~\ref{decomposition}). The difference in shape between the \nii\ and \oiii\ lines is caused by the different radial density profiles of the respective ions (cf. middle panel of Fig.~\ref{decomposition}). It is obvious that the shell profile only poorly resembles a Gaussian profile, but it is the only part of the total nebular emission line that allows an estimate of the true nebular expansion speed: The inflexion point at the outer flanks of the profiles (either from N$^+$ or O$^{++}$) marks closely the post-shock velocity which is, under the physical conditions found here, usually 20--25\,\% below the shock propagation speed (see discussion in Sect.~\ref{subsect.post.shock} and Corradi \etal\ \cite{CSSJ.07}, hereafter Paper~II; Jacob, Sch\"onberner \& Steffen \cite{jacobetal.13}). We note in passing that the double-shell structure seen in Fig.~\ref{model.595} develops from an initial configuration with a rather smooth radial density profile which falls off outwards with a gradient varying between $\alpha$\,=\,2 and 4 if one approximates the density by power-law distributions with variable $\alpha$, ${\rho \propto r^{-\alpha(r)}}$. This smooth initial configuration reflects the assumed evolution of the mass loss along the tip of the AGB (see, for details, Steffen et al. \cite{steffenetal.98}, Fig. 19 therein). This rather simple initial structure is completely reshaped, firstly by ionisation and then by the stellar wind, into the much more complicated configuration seen in Fig.~\ref{model.595}. Given the theoretical considerations above, we conclude that the term ``expansion velocity'' is by no means unambiguous, even if it is based on high-resolution spectroscopy.
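The inflexion-point criterion can be sketched numerically: convolve an idealised profile (bright rim peaks plus a shell plateau ending at the post-shock velocity) with the instrumental Gaussian and locate the steepest descent of the outer flank. All profile parameters below are assumed for illustration only:

```python
import numpy as np

v = np.linspace(-60.0, 60.0, 2401)    # velocity grid, km/s
v_post = 30.0                         # assumed post-shock velocity

# Idealised profile: shell plateau out to +-v_post plus rim peaks at +-8 km/s
shell = np.where(np.abs(v) <= v_post, 1.0, 0.0)
rim = 2.0*np.exp(-0.5*((np.abs(v) - 8.0)/2.0)**2)
profile = shell + rim

# Instrumental broadening: Gaussian with 6 km/s FWHM
sigma = 6.0/(2.0*np.sqrt(2.0*np.log(2.0)))
kernel = np.exp(-0.5*(v/sigma)**2)
observed = np.convolve(profile, kernel/kernel.sum(), mode="same")

# Outer inflexion point = minimum of the derivative beyond the rim peaks;
# for a broadened step it falls exactly at the plateau edge.
deriv = np.gradient(observed, v)
outer = v > 15.0
v_infl = v[outer][np.argmin(deriv[outer])]   # recovers ~30 km/s
```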
It characterises observationally nothing else than the \emph{spectroscopically} determined flow of matter at a certain position within an expanding shell with a radial density and velocity profile (see Fig.\,\ref{model.595}, left panel), and it is quite important for any interpretation to know whether this position is within the rim or the shell. Likewise, the radiation-hydrodynamics simulations confirm the observations discussed in Sect.~\ref{hist.back}: the velocity field even of spherical systems is by no means as simple as a ${v(r) \propto r}$ law. Instead, one has to consider different velocity laws for the rim and shell matter. \subsection{Kinematic ages} \label{subsect.kin.age} A quantity very often used in the literature in various contexts is the kinematic age of a PN. Its usual definition is the nebular radius, taken from images, divided by a spectroscopically measured ``expansion velocity''. Irrespective of the fact that one combines a (distance-dependent) quantity measured in the plane of the sky with one measured along the line-of-sight, most age determinations suffer from an internal inconsistency: The nebular outer edge, i.e. that of the shell, is combined with a velocity measured for the bright rim! Additional complications are the velocity gradient within the expanding shell, the acceleration of the expansion with time, and the fact that the real expansion velocity, i.e. that of the outer, leading shock, cannot be measured spectroscopically at all. Even if one believes in a good kinematic age of the PN, the conversion of a PN's kinematic age into a post-AGB age is by no means straightforward.
One has to consider \begin{itemize} \item that the measured kinematic age is just the current time scale of evolution, which is subject to change because of a possibly accelerated expansion, \item that the nebula forms at some distance from the star, at the inner edge of the receding AGB wind, and \item that there is an offset between the (assumed) zero point of the post-AGB evolution and the beginning of PN formation, the so-called ``transition time''. The latter is quite uncertain and depends severely on the post-AGB mass-loss history and remnant mass. Different definitions exist in the literature, but a typical value may be 2000 years (cf. Fig.\,\ref{mod.prop}). \end{itemize} Figure~\ref{fig.kinages} gives an insight into the corrections one has to deal with for different definitions of the kinematic age. There the ratios of true model ages (= post-AGB ages) to various definitions of kinematic ages are plotted over the effective temperatures of the respective central stars, as predicted by our various 1D model simulations whose principal properties were already explained above. We assume that the mass-loss history, as devised by Bl\"ocker (\cite{B2.95}), comes close to reality. The tracks have quite some spread, caused by the different initial conditions and central-star properties, but they all show a similar run with effective temperature (or time). We begin the discussion with the kinematic age definition $R_{\rm out}/V_{\rm post}$, which appears to us to be the best choice one can get, because $V_{\rm post}$ characterises the flow immediately behind the shock at $R_{\rm out}$, which is usually also the fastest within the entire nebula (cf. Fig.~\ref{decomposition}), and because $V_{\rm post}$ changes only slowly with time during a significant part of the nebular expansion. However, this choice for the kinematic age leads, at least in principle for expansion with constant rate, to a systematic \emph{overestimation} of the true age since ${V_{\rm post}\simeq\dot{R}_{\rm out}/1.3}$ (cf.
Fig.~\ref{shock.postshock}). In reality, we have an age offset due to the finite transition time, a rapid shock acceleration during the optically thick stage, and a low shock acceleration during the following evolution across the HRD (cf. Fig.~\ref{shell.comp} below), all of which conspire to \emph{underestimate} the post-AGB age. The situation is illustrated in the top panel of Fig.~\ref{fig.kinages}. At early stages of evolution, transition time and shock acceleration dominate and result in age underestimates by factors between 1.3 and 2.0, depending on model sequence and evolutionary stage. Later, the nearly constant expansion dominates, and the correction factors decrease slowly, but still remain above unity. Only for the 0.625 \Msun\ sequence, with its short transition time and the virtually constant expansion rate after the thick/thin transition, does the correction factor fall below unity quite soon. The second panel of Fig.~\ref{fig.kinages} relates the model ages to the rim radius divided by the rim velocity, which is a consistent combination, but not one used in practice. Because the rim radius is sometimes difficult to determine numerically, we used the radius of the contact discontinuity as a proxy instead.\footnote {The rim is geometrically quite thin, so that $R_{\rm rim} \simeq R_{\rm cd}$.} Despite its internal consistency, the use of this age cannot be recommended: The correction factors range from about 0.5 up to about 2, on average. The reason lies in the kinematics of the rim: first the rim gas is nearly stalling, pushing the kinematic age above the models' age; later, when the rim begins to accelerate (for ${\ga\!50\,000}$ K), the kinematic age falls below the true age.
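In practice the kinematic age itself is just a unit conversion; a sketch of the $R_{\rm out}/V_{\rm post}$ variant with hypothetical input values and an illustrative correction factor of the order read off Fig.~\ref{fig.kinages}:

```python
# Kinematic age t = R/V, converting (pc, km/s) to years.
PC_KM = 3.0857e13   # kilometres per parsec
YR_S = 3.156e7      # seconds per year

def kinematic_age_yr(r_out_pc, v_post_kms):
    return r_out_pc*PC_KM/v_post_kms/YR_S

# Hypothetical shell radius and post-shock velocity:
t_kin = kinematic_age_yr(0.1, 30.0)    # ~3.3e3 yr

# An illustrative correction of the order suggested by the top panel
# (transition time plus early shock acceleration) would then give
t_post_agb = 1.4*t_kin
```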
\begin{figure}% \vskip-1mm \hskip-1mm \includegraphics[width= 8.3cm, height= 10.8cm]{tautenburg_fig_5.eps} \vskip-1mm \caption{Ratios of post-AGB ages to various definitions of the kinematic age as predicted from a number of hydrodynamical simulations for different post-AGB remnants and initial configurations, parameterised by stellar mass and envelope structure (slope $\alpha$ or ``hydro''), vs. stellar effective temperature (see, for more details, Perinotto et al. \cite{PSSC.04}; Sch\"onberner et al. \cite{SchJSt.05}). Plotted are the age ratios only for the horizontal part of the HRD evolution until maximum stellar effective temperature. Velocities were determined from computed line profiles broadened by a Gaussian of 6 \kms\ FWHM to account for the limited spectral resolution: $V_{\rm rim}$ from the peak separation of the strongest line components of \oiii, and $V_{\rm post}$ from the (outer) inflexion points. Dots along the sequences indicate the position of the optically thick/thin transition if not below ${\log\Teff = 4.5}$. The colour coding of the tracks will remain the same throughout the paper. The dashed part of the 0.625 \Msun\ track seen in the two middle panels will be explained later (Sect.~\ref{comp.rim}). Note also the different ranges of the ordinates. \emph{Top panel}: kinematic age defined as (outer) shell radius, $R_{\rm out}$, over post-shock velocity, $V_{\rm post}$. \emph{Second panel}: kinematic age defined as radius of the contact discontinuity, $R_{\rm cd}$ (as proxy of $R_{\rm rim}$), over velocity of the rim, $V_{\rm rim}$. \emph{Third panel}: kinematic age defined as shell radius, $R_{\rm out}$, over $V_{\rm rim}$. \emph{Bottom panel}: kinematic age defined as shell radius, $R_{\rm out}$, over mass-averaged velocity, $V_{\rm av}$. \vspace{-2mm} } \label{fig.kinages} \end{figure} This panel is qualitatively similar to Fig.~20 in Villaver et al.
(\cite{villetal.02}), although a quantitative comparison is impossible because of the different assumptions about the zero point of the post-AGB evolution and the different definitions of ``expansion'' velocity and radius made in both studies. The third panel of Fig.~\ref{fig.kinages} relates the model ages to shell radius divided by rim velocity, an inconsistent but frequently used method in the literature. As expected, the result is disastrous: The mean correction factor runs from 0.2 up to close to unity for the older models. Because the rim expansion is so slow at the beginning, kinematic ages based on this method grossly overestimate the true (post-AGB) ages. Only at later stages of evolution, when the rim accelerates, does the discrepancy with the true post-AGB ages decrease. The bottom panel of Fig.~\ref{fig.kinages} relates the model ages to shell radius divided by $V_{\rm av}$, the mass-weighted velocity. This method, introduced by Gesicki et al. (\cite{GZAS.98}), is very elaborate because it demands knowledge of the (radial) density and velocity profiles of the PN. The former can be deduced from the surface brightness distribution, e.g. from H$\alpha$, if spherical symmetry is imposed, but the latter has to be derived from emission lines, which is not always unambiguously possible. It is mandatory that density profile and velocity field are consistent with the physics of expanding shock waves and wind-wind interaction, a constraint which is difficult to judge from observations alone. Nevertheless, neglecting the more extreme case of the ``${\alpha = 3.0}$'' sequence, a value of about 0.8 seems reasonable for typical PNe, i.e. post-AGB age = $(0.8\pm0.1){\times} R_{\rm out}/V_{\rm av}$, valid only for ${\log\Teff \la 4.9}$. It is obvious that our hydrodynamics models provide only a rather simplistic view of true nebulae.
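As a quick numerical illustration of these recipes (our own back-of-the-envelope sketch; the sample radius and velocity are assumed values, not taken from the model grids):

```python
PC_KM = 3.0857e13   # kilometres per parsec
YR_S = 3.1557e7     # seconds per (Julian) year

def kinematic_age_yr(r_pc, v_kms):
    """Naive kinematic age t = R/V, with R in pc and V in km/s, in years."""
    return r_pc * PC_KM / v_kms / YR_S

def post_agb_age_yr(r_out_pc, v_av_kms, correction=0.8):
    """Post-AGB age following the text: age = (0.8 +/- 0.1) x R_out/V_av,
    valid only for log Teff <~ 4.9 (the 0.8 comes from the model grids)."""
    return correction * kinematic_age_yr(r_out_pc, v_av_kms)

# assumed example values: R_out = 0.1 pc, V_av = 25 km/s
t_naive = kinematic_age_yr(0.1, 25.0)   # ~ 3.9e3 yr
t_corr = post_agb_age_yr(0.1, 25.0)     # ~ 3.1e3 yr, the corrected estimate
```

Note that 1 pc expanded at 1 km/s corresponds to roughly a megayear, which sets the overall scale of the numbers above.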
Still, the results presented in Fig.~\ref{fig.kinages} suggest that measuring the flow velocity right behind the leading shock of the PN's shell is the most robust and reliable method for estimating true post-AGB ages. But even so, errors due to asphericity, and especially the uncertainty of the transition time, make any PN age determination a risky endeavour. We add that the ad hoc prescription of measuring a relevant expansion velocity at the 10\,\% level of the line profile (resolved or not resolved) is, of course, also a reasonable choice if the post-shock velocity is not measurable, but one must recognise that this criterion is unphysical and does not account for ionisation differences. This can be seen in Fig.~\ref{decomposition} where the 10\,\% criterion delivers a flow velocity of 33 \kms\ from the \nii\ line, while \oiii\ gives 28 \kms. Only the profile's inflexion point provides the same value for both ions: 30 \kms. \subsection{The relation between shock and post-shock velocity} \label{subsect.post.shock} The post-shock velocity, i.e. the flow velocity immediately behind the shell's leading shock (or the shell's outer edge), is a well-defined physical quantity and is related to the shock's propagation speed via the well-known jump conditions. For instance, the relation between both velocities depends on the shock properties (adiabatic, isothermal, or intermediate), which are governed by the physical state of the flow and the environment into which the shock expands. A thorough discussion for the specific case of planetary nebulae has been presented in Mellema (\cite{Mellema.04}) and Sch\"onberner et al. (\cite{SJSPCA.05}). \begin{figure} \vskip-4mm \hskip-1mm \includegraphics[width=1.01\linewidth]{tautenburg_fig_6.eps} \vskip-1mm \caption{\label{shock.postshock} Ratio of outer shock velocity, $\dot{R}_{\rm out}$, and post-shock velocity, $V_{\rm post}$, vs.
stellar effective temperature as predicted by hydrodynamical sequences with different initial conditions and central stars with various masses (see legend). Only the results for the high-luminosity part of evolution through the HRD until maximum stellar temperatures are reached are shown. The rapid increase of $\dot{R}_{\rm out}/V_{\rm post}$ from about unity to 1.2--1.3 belongs to the optically thick part of evolution whose end is marked by a dot. } \end{figure} Figure~\ref{shock.postshock} shows how the relation between shock and post-shock velocities, as predicted by our hydrodynamical PN simulations, depends on evolutionary state (again measured by the stellar effective temperature) and model parameters. One sees that the ratio of both velocities does not vary much between the different model sequences: typical values for optically-thin models are between 1.2 and 1.35, with very little variation during the evolution across the HRD. The only exceptions are the transition phases from an optically-thick to an optically-thin model, where the velocity ratio ``jumps up''. The ratios of shock to post-shock velocity shown in Fig.~\ref{shock.postshock} agree very well with the value estimated by Mellema (\cite{Mellema.04}). Based on the analytical solutions of Chevalier (\cite{ch.97}) and Shu et al. (\cite{SLGCL02}), he concluded that this ratio is, for the case of typical PN conditions, around 1.20. The close correspondence between the propagation speed of the shell's shock and the post-shock velocity seen in Fig.~\ref{shock.postshock} allows us in the following to use the post-shock velocity as a proxy for the nebular expansion: All expansion properties found from spectroscopic determinations hold also for the outer shock, apart from a scaling factor of about 1.3.
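The difference between the 10\,\% criterion and the inflexion-point method, and the final scaling from post-shock to shock velocity, can be reproduced with a toy profile (our own sketch, not the paper's computed profiles: a flat-topped shell profile with gas at $|v| \le 30$ km/s, broadened by a 6 km/s FWHM Gaussian as in the caption of Fig.~\ref{fig.kinages}; all numbers are illustrative only):

```python
import math

def gaussian_smooth(profile, dv, fwhm):
    """Convolve a sampled profile (spacing dv) with a Gaussian of given FWHM."""
    sigma = fwhm / 2.3548
    n = int(5 * sigma / dv)
    kern = [math.exp(-0.5 * (k * dv / sigma) ** 2) for k in range(-n, n + 1)]
    norm = sum(kern)
    kern = [w / norm for w in kern]
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kern):
            k = i + j - n
            if 0 <= k < len(profile):
                acc += w * profile[k]
        out.append(acc)
    return out

dv = 0.5                                            # km/s, velocity sampling
v = [-60.0 + dv * i for i in range(241)]            # velocity grid
raw = [1.0 if abs(x) <= 30.0 else 0.0 for x in v]   # toy flat-topped shell profile
obs = gaussian_smooth(raw, dv, fwhm=6.0)            # "instrumental" broadening

# inflexion-point method: steepest falling gradient on the red side
grad = [obs[i + 1] - obs[i - 1] for i in range(1, len(obs) - 1)]
v_post = v[1 + min(range(len(grad)), key=lambda i: grad[i])]

# ad hoc 10% criterion: outermost velocity where I >= 10% of the peak
v_10 = max(x for x, y in zip(v, obs) if y >= 0.1 * max(obs))

# conversion to the true expansion (shock) speed, scaling factor ~1.25
v_shock = 1.25 * v_post
```

With these toy numbers the 10\,\% criterion returns about 33--34 km/s while the inflexion point recovers the input 30 km/s, qualitatively reproducing the behaviour described in the text.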
\subsection{A modern observational approach} \label{new} From the previous sections it becomes evident that a more thorough discussion of the general problem of the expansion of PNe is urgently needed, as well as a physically sound interpretation of the observed line profiles. This implies the necessity of observing line profiles with high signal-to-noise in order to capture the weak outer signatures of fast-moving gas immediately behind the leading shock front, combined with radiation-hydrodynamics simulations for a proper interpretation. Knowing the true expansion behaviour of PNe is mandatory for estimating total lifetimes of PNe or mean visibility times belonging to PN ensembles in stellar populations (cf. Jacob, Sch\"onberner \& Steffen \cite{jacobetal.13}). A first study to gain closer insight into the expansion behaviour of planetary nebulae, especially in terms of the different properties of shell and rim, has already been presented by Sch\"onberner \etal\ (\cite{SJSPCA.05}). Basically, these authors determined typical bulk velocities of the rim and the shell matter by Doppler decomposition into four Gaussians of strong emission lines obtained from high-resolution spectroscopic observations, and compared them with theoretical line profiles computed from appropriate radiation-hydrodynamics simulations. The theoretical profiles were decomposed into four Gaussians in exactly the same way as the observed ones. Sch\"onberner et al. (\cite{SJSPCA.05}) found a significant increase of the typical bulk velocities of rim and shell with progress of evolution, the latter measured by the effective temperature of the central star. The expansion of the shell turned out to be very fast, up to about 30--40~\kms. Sch\"onberner \etal\ were then able to interpret this expansion by means of radiation-hydrodynamics simulations: A PN expands into a circumstellar environment whose radial slope is much steeper than usually assumed, ${\rho \propto r^{-\alpha}}$ with ${\alpha > 2}$.
This result is fully consistent with the very few existing direct measurements of the radial density profiles of halos from their surface brightness distribution (Plait \& Soker \cite{PS.90}; Corradi \etal\ \cite{CStSchP.03}; Sandin et al. \cite{sandetal.08}). The PNe sample investigated by Sch\"onberner \etal\ (\cite{SJSPCA.05}), however, was rather small, and additional data from the literature were not available at the time. Since detailed velocity information for a much larger sample of round/elliptical PNe appears to be necessary to strengthen the conclusions mentioned above, the Sch\"onberner \etal\ sample was increased by new high-resolution spectroscopic observations with very good signal-to-noise ratios. Only if the sample is sufficiently large can a meaningful comparison with (spherical) hydrodynamical models be made, which will then give important constraints on the physical conditions responsible for the formation and evolution of planetary nebulae. In the present work we took up the method recommended in Paper II. There, we showed that the derivative of the line profile allows a rather accurate determination of the post-shock gas velocity of the shell. Knowing the post-shock velocity, the conversion to the true expansion velocity, i.e. the propagation velocity of the outer shock, is simple (see discussion in Sect.~\ref{subsect.post.shock}). This is the optimum one can get since, as already said, the propagation speed of the shock itself cannot be measured spectroscopically. In Sect.\,\ref{obs.red} we report on our observations, the profile extraction and the determination of the respective expansion velocities, which are then presented in Sect.~3. Section 4 is devoted to detailed comparisons with predictions of radiation-hydrodynamics simulations. In Sect.~5 we present, for the first time, apparent expansion asymmetries along the line-of-sight seen in some objects.
Next, in Sect.~6, the expansion behaviour of very evolved PNe harbouring hot white dwarfs as central stars is discussed. Section~7 then deals with the expansion of metal-poor objects. A discussion of the results follows in Sect.~8, and the conclusions (Sect.~9) close the paper. -- We note that preliminary results of this study have been published in Jacob et al. (\cite{jacobetal.12}). \label{sect.res.concl} The results of our study are in line with the prediction of radiation-hydrodynamic simulations in that the interaction of the fast central-star wind with the slow AGB wind via a hot bubble of shock-heated wind gas is not the main driving agent of a PN: At the beginning of the post-AGB evolution and during the late evolution with a faint central star the wind power is just sufficient to avoid the collapse of the PN towards the star. Only during the middle of the nebular evolution, when the wind power increases to its maximum (cf. Fig.~\ref{mod.prop}, middle panel), does the wind interaction become important and form the conspicuous nebular rims. But even then most of the nebular mass is still contained in the less prominent shells, and the expansion speed of their leading shocks is not influenced at all by the stellar wind. Our main results/conclusions are as follows: \begin{itemize} \item There exists a rather tight correlation of the internal kinematics with age/stellar temperature which, however, differs between rim and shell. The shell's leading shock accelerates from the AGB wind speed of ${\simeq\! 10}$ \kms\ to about 50 \kms\ (${V_{\rm post}\simeq 40}$ \kms), first as a D-type ionisation front and then in a ``champagne''-flow configuration, because (i) the electron temperature increases in line with the growing energy of the ionising photons, and (ii) the density gradient of the halo may steepen with distance from the star. The rim, on the other hand, is driven by the stellar wind power which increases as the star shrinks.
For young objects the rim matter expands even more \emph{slowly} than the former AGB wind, but at the end of the horizontal evolution values of about 30 \kms\ are reached. \item Since both rim and shell obviously expand outwards independently of each other, the expansion of a PN is by no means ballistic. Rather, \emph{the whole PN is a dynamical system throughout its entire evolution, without reaching a self-similar stage of expansion at any time.} \item The difference between rim expansion and post-shock velocity \emph{decreases} with progress of evolution, from about 25 \kms\ for the youngest objects to about 15 \kms\ for the most evolved objects of our sample. \item The high expansion velocities of the outer shock indicate a steep gradient in the radial upstream density profile, i.e. in the halo, which is only possible if the mass loss during the final AGB evolution ``accelerates'' continuously. This may be an indication that in the normal case the PN precursor left the tip of the AGB quite soon after the last thermal pulse on the AGB, i.e. while recovering from the pulse. The only exception in our sample is NGC 2022, whose unusually low post-shock velocity hints at a period of constant final AGB mass-loss rate. \item The line profiles of four targets with intrinsically faint but very hot central stars could be interpreted in terms of a reionisation process after a rather severe but short recombination phase caused by the rapid luminosity drop of the star. We found in all cases quite high flow velocities of 40--50 \kms\ at the outer edge of the newly created reionised shell, which are comparable to those measured for objects which are still in the high-luminosity phase of their evolution. \item The expansion behaviour of the four metal-poor objects is as expected from theory: Compared to objects with normal (i.e.
Galactic disk) metallicity, post-shock velocities are generally higher because of the higher electron temperature, and rim velocities lower because of the weaker wind interaction. \end{itemize} To summarise, our study demonstrates, in line with our previous ones, that the expansion of a PN is much faster than assumed so far. The reason is an incorrect assignment of a measured velocity, usually that of the most conspicuous part of the nebula (i.e. the rim), to be the ``expansion velocity'' of a PN. Instead, it is the outer shell's shock whose propagation determines the true expansion of a PN. The only velocity which is physically related to the shock expansion is the gas velocity immediately behind the shock, the post-shock velocity. But this velocity is, in general, higher than the velocity measured for the rim by factors of up to about 7 for very young objects. In closing we state that the \emph{expansion velocities of PNe quoted in the literature and frequently used in statistical studies must be treated with utmost care. It is absolutely necessary to check which kind of velocity is measured and how it relates to the real expansion velocity of the object in question.} \acknowledgement We are grateful to the referee for his careful reading of the manuscript and the very short reviewing time. C.\,S. acknowledges support by DFG grant SCHO 394/26. | 14 | 4 | 1404.0391
1404 | 1404.5261_arXiv.txt | We study the possibility of generating tiny neutrino mass through a combination of type I and type II seesaw mechanisms within the framework of an abelian extension of the standard model. The model also provides a naturally stable dark matter candidate in terms of the lightest neutral component of a scalar doublet. We compute the relic abundance of such a dark matter candidate and also point out how the strength of the type II seesaw term can affect the relic abundance of dark matter. Such a model, which connects neutrino mass and dark matter abundance, has the potential of being verified or ruled out by ongoing neutrino, dark matter and accelerator experiments. The recent discovery of the Higgs boson at the Large Hadron Collider (LHC) has established the standard model (SM) of particle physics as the most successful fundamental theory of nature. However, despite its phenomenological success, the SM fails to address many theoretical questions as well as observed phenomena. The three most important observed phenomena which the SM fails to explain are neutrino oscillations, the matter-antimatter asymmetry and dark matter. Neutrino oscillation experiments in the last few years have provided convincing evidence in support of non-zero yet tiny neutrino masses \cite{PDG}. The recent neutrino oscillation experiments T2K \cite{T2K}, Double ChooZ \cite{chooz}, Daya-Bay \cite{daya} and RENO \cite{reno} have not only made the earlier predictions for neutrino parameters more precise, but also measured a non-zero value of the reactor mixing angle $\theta_{13}$. The matter-antimatter asymmetry of the Universe is encoded in the baryon-to-photon ratio measured by dedicated cosmology experiments like the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck.
The latest data available from the Planck mission constrain the baryon-to-photon ratio \cite{Planck13} as \begin{equation} Y_B \simeq (6.065 \pm 0.090) \times 10^{-10} \label{barasym} \end{equation} The presence of dark matter in the Universe is very well established by astrophysics and cosmology experiments, although the particle nature of dark matter is yet unknown. According to the Planck 2013 experimental data \cite{Planck13}, $26.8\%$ of the energy density of the present Universe is composed of dark matter. The present abundance or relic density of dark matter is represented as \begin{equation} \Omega_{\text{DM}} h^2 = 0.1187 \pm 0.0017 \label{dm_relic} \end{equation} where $\Omega$ is the density parameter and $h$, the Hubble parameter in units of $100 \; \text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$, is a parameter of order unity. Several interesting beyond standard model (BSM) frameworks have been proposed in the last few decades to explain each of these three observed phenomena in a natural way. Tiny neutrino masses can be explained by seesaw mechanisms, which broadly fall into three types: type I \cite{ti}, type II \cite{tii} and type III \cite{tiii}. The baryon asymmetry can be produced through the mechanism of leptogenesis, which first generates an asymmetry in the leptonic sector and later converts it into a baryon asymmetry through electroweak sphaleron transitions \cite{sphaleron}. The out-of-equilibrium CP-violating decay of heavy Majorana neutrinos provides a natural way to create the required lepton asymmetry \cite{fukuyana}. There are, however, other interesting ways to create the baryon asymmetry: electroweak baryogenesis \cite{Anderson:1991zb}, for example. The most well motivated and widely discussed particle dark matter candidate (for a review, please see \cite{Jungman:1995df}) is the weakly interacting massive particle (WIMP), which interacts through weak and gravitational interactions and has a mass typically around the electroweak scale.
Weak interactions keep WIMPs in equilibrium with the hot plasma of the early Universe until, at some point, the interaction rate drops below the expansion rate of the Universe, leading to their decoupling (or freeze-out). WIMPs typically decouple when they are non-relativistic and are hence the favorite cold dark matter (CDM) candidates. Although the three observed phenomena discussed above could have completely different particle physics origins, it would be more interesting if they had a common origin or could be explained within the same particle physics model. Here we propose a model which has all the ingredients to explain these three observed phenomena naturally. We propose an abelian extension of the SM (for a review of such models, please see \cite{langacker}) with a gauged $B-L$ symmetry. Neutrino mass can be explained by both type I and type II seesaw mechanisms. Some recent works related to the combination of type I and type II seesaw can be found in \cite{typeI+II}. Dark matter can be explained due to the existence of an additional Higgs doublet, naturally stable due to the choice of gauge charges under the $U(1)_{B-L}$ symmetry. Unlike in conventional scalar doublet dark matter models, here we show how the origin of neutrino mass can affect the dark matter phenomenology. Some recent works motivated by this idea of connecting neutrino mass and dark matter can be found in \cite{nuDM}. In supersymmetric frameworks, such scalar dark matter has been studied in terms of sneutrino dark matter in type I seesaw models \cite{susy1} as well as in inverse seesaw models \cite{susy2}. We show that in our model the dark matter abundance can be significantly altered due to the existence of a neutral scalar with mass slightly larger than the mass of the dark matter, allowing the possibility of coannihilation. Interestingly, this mass splitting is found to be governed by the strength of the type II seesaw term of the neutrino mass in our model.
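The freeze-out picture sketched above leads to the standard rule of thumb relating the relic density to the thermally averaged annihilation cross section (a rough estimate only, not the full Boltzmann-equation treatment used in the relic abundance calculation):

```python
def relic_omega_h2(sigma_v_cm3_per_s):
    """Rule-of-thumb WIMP relic density: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>."""
    return 3e-27 / sigma_v_cm3_per_s

def sigma_v_for(omega_h2):
    """Invert the estimate: cross section needed for a given relic density."""
    return 3e-27 / omega_h2

# a weak-scale cross section of a few 1e-26 cm^3/s gives Omega h^2 ~ 0.1,
# the "WIMP miracle"; for the Planck 2013 value Omega h^2 = 0.1187:
required = sigma_v_for(0.1187)   # ~ 2.5e-26 cm^3/s
```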
We show that for a sub-dominant type II seesaw term the dark matter relic abundance can be significantly affected by coannihilation, whereas for a dominant type II seesaw term the usual calculation of the dark matter relic abundance applies, incorporating only the self-annihilation of dark matter. This paper is organized as follows: in section \ref{model}, we outline our model with particle content and relevant interactions. In section \ref{numass}, we briefly discuss the origin of neutrino mass in our model. In section \ref{darkmatter}, we discuss the method of calculating the dark matter relic abundance. In section \ref{results}, we discuss our results and finally conclude in section \ref{conclude}. \label{conclude} We have studied an abelian extension of the SM with a $U(1)_{B-L}$ gauge symmetry. The model allows the existence of both type I and type II seesaw contributions to tiny neutrino masses. It also allows a naturally stable cold dark matter candidate: the lightest neutral component of a scalar doublet $\phi$. The type II seesaw term is generated by the vev of a scalar triplet $\Delta$. We show that in our model the vev of the scalar triplet decides not only the strength of the type II seesaw term, but also the mass splitting between the neutral components of the scalar doublet $\phi$. If the vev is large (of the order of GeV, say), then the mass splitting is large and hence the next-to-lightest neutral component $A_0$ plays no role in determining the relic abundance of $H_0$. However, if the vev is small such that the mass splitting is below $5-10\%$ of $m_{H_0}$, then $A_0$ can play a role in determining the relic abundance of $H_0$. In such a case, the dark matter $(H_0)$ relic abundance gets affected due to coannihilation between these two neutral scalars. We compute the relic abundance of dark matter in both cases, without and with coannihilation, and show the change in the parameter space.
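The effect of coannihilation on the relic abundance can be sketched with the Griest--Seckel effective cross section for two nearly degenerate states (our own illustrative implementation; the cross-section values are arbitrary placeholders, $\Delta$ denotes the fractional mass splitting and $x = m/T \simeq 20$ at freeze-out):

```python
import math

def sigma_eff(x, delta, s_hh, s_ha, s_aa, g=(1.0, 1.0)):
    """Griest-Seckel effective annihilation cross section for two states
    (H0, A0) with fractional mass splitting delta = (m_A0 - m_H0)/m_H0.
    s_hh, s_ha, s_aa are the (co)annihilation cross sections in any common
    units; the A0 weight carries the Boltzmann suppression exp(-x*delta)."""
    w_h = g[0]
    w_a = g[1] * (1.0 + delta) ** 1.5 * math.exp(-x * delta)
    g_eff = w_h + w_a
    return (s_hh * w_h * w_h + 2.0 * s_ha * w_h * w_a + s_aa * w_a * w_a) / g_eff ** 2

x_f = 20.0   # typical freeze-out value of m/T
# 500 keV splitting on a ~200 GeV mass: delta ~ 2.5e-6, suppression negligible
full = sigma_eff(x_f, 2.5e-6, 1.0, 5.0, 5.0)   # ~ 4: A0 fully participates
# a GeV-scale splitting (delta = 0.5): A0 drops out of the thermal average
off = sigma_eff(x_f, 0.5, 1.0, 5.0, 5.0)       # ~ 1: s_hh alone survives
```

For $\Delta m = 500$ keV on a weak-scale mass, $x\Delta \ll 1$, so the Boltzmann factor is irrelevant and $A_0$ fully participates; for a GeV-scale splitting the effective cross section collapses to the self-annihilation term alone.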
We incorporate the latest constraint on the dark matter relic abundance from the Planck 2013 data and show the allowed parameter space in the $\lambda_{DM}-m_{DM}$ plane, where $\lambda_{DM}$ is the coupling of dark matter to the SM Higgs and $m_{DM}$ is the mass of the dark matter $H_0$. We show the parameter space for a mass splitting $\Delta m = 500 \; \text{keV}$. We point out that, unlike in the conventional inert doublet dark matter model, here we do not have to fine-tune dimensionless couplings too much to get such small mass splittings between $A_0$ and $H_0$. This mass splitting can be naturally explained by a suitable adjustment of the symmetry breaking scales and the bare mass terms of the Lagrangian. It is interesting to note that for the sub-dominant type II seesaw case the dark matter relic abundance is affected by coannihilation, whereas for the dominant type II seesaw case the usual relic abundance calculation applies, taking into account self-annihilations only. We also take into account the constraint coming from dark matter direct detection experiments such as LUX on spin-independent dark matter-nucleon scattering. We incorporate the LEP I bound on the $Z$ boson decay width, which rules out the region $m_{H_0}+m_{A_0} < m_Z$. We then incorporate the constraint on the invisible SM Higgs decay branching ratio from measurements at the LHC. We show that after taking all these relevant constraints into account, there still remains viable parameter space which can account for dark matter as well as neutrino mass simultaneously. Thus our model not only gives rise to a natural dark matter candidate, but also provides a natural way to connect the dark matter relic abundance with the neutrino mass term from the type II seesaw mechanism. | 14 | 4 | 1404.5261
1404 | 1404.7114_arXiv.txt | \noindent We show that existing laboratory experiments have the potential to unveil the origin of matter by probing leptogenesis in the type-I seesaw model with three right-handed neutrinos and Majorana masses in the GeV range. The baryon asymmetry is generated by CP-violating flavour oscillations during the production of the right-handed neutrinos. In contrast to the case with only two right-handed neutrinos, no degeneracy in the Majorana masses is required. The right-handed neutrinos can be found in meson decays at BELLE II and LHCb. All matter particles in the Standard Model (SM) except neutrinos have been observed with both left-handed (LH) and right-handed (RH) chirality. If RH neutrinos exist, they can explain several phenomena which cannot be understood in the framework of the SM, for a review see e.g. Refs.~\cite{Abazajian:2012ys,Drewes:2013gca}. In particular, they give neutrinos masses via the seesaw mechanism \cite{Minkowski:1977sc,GellMann:seesaw,Mohapatra:1979ia,Yanagida:1980xy} and can at the same time generate the baryon asymmetry of the universe (BAU) via leptogenesis \cite{Fukugita:1986hr}. Leptogenesis with $n=2$ RH neutrinos has been studied in detail, and it was found that the BAU can only be explained if their masses are either very heavy \cite{Davidson:2002qv,Antusch:2009gn,Racker:2012vw} or degenerate \footnote{The $\nu$MSM-scenario proposed in \cite{Asaka:2005an,Asaka:2005pn} involves three sterile flavours, but one of them is required to be a Dark Matter candidate and couples too weakly to contribute to the seesaw and leptogenesis, see \cite{Boyarsky:2009ix,Canetti:2012kh} for details.}, see e.g. \cite{Pilaftsis:1997jf,Pilaftsis:2003gt} and \cite{Asaka:2005pn,Canetti:2010aw,Asaka:2011wq,Canetti:2012kh,Canetti:2012vf,Shuve:2014zua,Garbrecht:2014bfa}. In the former case the new particles are too heavy to be seen in any experiment.
In the latter case it was found that their interaction strengths, characterized by mixing angles with ordinary neutrinos, are generally too feeble to give measurable branching ratios in existing experiments \cite{Kersten:2007vk,Ibarra:2011xn,Canetti:2012vf,Canetti:2012kh}. They could be found in dedicated future experiments \cite{Bonivento:2013jag} using present-day technology \cite{Canetti:2012vf,Canetti:2012kh}. In this work we show that both shortcomings, the ``tuned'' mass degeneracy and the suppressed production rates at colliders, are specific to the scenario with $n=2$ and can be avoided if three or more RH neutrinos participate in leptogenesis, even if there is no other physics beyond the SM. The present article is organized as follows. In Sec.~\ref{secmodel} we recapitulate the type-I seesaw model and introduce our notation. In Sec.~\ref{sec:leptogenesis} we briefly summarize the mechanism of leptogenesis via neutrino oscillations. In Sec.~\ref{parameter:scan} we identify a range of sterile neutrino masses and mixings that can explain the observed BAU. We discuss the prospects for exploring this region experimentally in Sec.~\ref{exp:persp}. In Sec.~\ref{TuningBlaBlaBla} we comment on the dependence of the baryogenesis region on the choice of model parameters. We draw conclusions in Sec.~\ref{sec:conclusions}. \label{sec:conclusions} We have shown that three RH neutrinos in the type-I seesaw model (\ref{L}) with masses in the GeV range and experimentally accessible mixings can explain the BAU via leptogenesis. No degeneracy in the Majorana masses is required. In the limit of degenerate Majorana masses the model is expected to resemble many properties of the well-known $\nu$MSM. A non-degenerate mass spectrum, however, makes this scenario clearly distinguishable from the $\nu$MSM and other realizations of resonant leptogenesis.
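The orders of magnitude behind ``GeV-range masses with experimentally accessible mixings'' can be sketched with the one-generation seesaw relations $m_\nu \approx (y v)^2/M$ and $\theta^2 \approx m_\nu/M$ (our own order-of-magnitude illustration, not the three-flavour analysis of the paper):

```python
import math

V_EW = 174.0  # GeV, electroweak Higgs vev

def yukawa_needed(m_nu_eV, m_majorana_GeV):
    """Yukawa coupling reproducing a light mass m_nu ~ (y v)^2 / M."""
    return math.sqrt(m_nu_eV * 1e-9 * m_majorana_GeV) / V_EW

def mixing_sq(m_nu_eV, m_majorana_GeV):
    """Naive active-sterile mixing: theta^2 ~ m_nu / M."""
    return m_nu_eV * 1e-9 / m_majorana_GeV

# atmospheric mass scale ~0.05 eV with M = 1 GeV:
y = yukawa_needed(0.05, 1.0)     # ~ 4e-8, tiny but technically natural
theta2 = mixing_sq(0.05, 1.0)    # ~ 5e-11, the "naive seesaw line"
```

Mixings well above this naive line are possible when cancellations or flavour structures are invoked, which is what makes searches in meson decays relevant.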
A discovery of heavy neutral leptons at LHCb or BELLE would be smoking gun evidence that these particles can be the common origin of matter in the universe and the observed neutrino masses. Both of these experiments have already entered the cosmologically interesting parameter space and will continue to take data after the updates currently under way. The chances for a discovery can be optimized by studying all possible decay channels of $B$-mesons that involve $N_I$. The perspectives would be even better at the proposed SHiP experiment, for which our findings provide strong motivation. | 14 | 4 | 1404.7114 |
1404 | 1404.6923_arXiv.txt | We propose a landscape of many axions, where the axion potential receives various contributions from shift symmetry breaking effects. We show that the existence of an axion with a super-Planckian decay constant is very common in the axion landscape for a wide range of numbers of axions and shift symmetry breaking terms, because of the accidental alignment of axions. The effective inflation model is either natural or multi-natural inflation in the axion landscape, depending on the number of axions and the shift symmetry breaking terms. The tension between BICEP2 and Planck could be due to small modulations to the inflaton potential or steepening of the potential along the heavy axions after the tunneling. The total duration of the slow-roll inflation our universe experienced is not significantly larger than $60$ e-folds if the typical height of the axion potentials is of order $(10^{16-17}{\rm \,GeV})^4$. The discovery of the primordial $B$-mode polarization of the cosmic microwave background (CMB) by BICEP2 \cite{Ade:2014xna} provides us with valuable information on the early universe. The measured tensor-to-scalar ratio reads $r = 0.20^{+0.07}_{-0.05}$, which, taken at face value, implies large-field inflation models such as quadratic chaotic inflation \cite{Linde:1983gd}, natural inflation \cite{Freese:1990rb}, or their extensions to polynomial \cite{Nakayama:2013jka,Nakayama:2013txa} or multi-natural inflation \cite{Czerny:2014wza}. In particular, the inflaton field excursion exceeds the Planck scale, which places a tight constraint on inflation model building. The observed large tensor-to-scalar ratio, however, has a tension with the Planck results, $r < 0.11~(95\%{\rm ~CL})$ \cite{Ade:2013uln}.
The tension can be relaxed if the scalar perturbations are suppressed at large scales~\cite{Miranda:2014wga}, which might be a result of the steep potential after the false vacuum decay via bubble nucleation~\cite{Freivogel:2014hca,Bousso:2014jca}. Alternatively, the tension can be relaxed by a negative running of the spectral index \cite{Ade:2014xna,Cheng:2014bta}. The running can be generated if there are small modulations to the inflaton potential~\cite{Kobayashi:2010pz,Czerny:2014wua}. We shall see that the BICEP2 result and its apparent tension with Planck can be naturally explained in a landscape of many axions. In the landscape paradigm \cite{Bousso:2000xa, Susskind:2003kw}, there are numerous false vacua where eternal inflation occurs, continuously producing universes by bubble nucleation. Our universe is considered to be within a single bubble inside which slow-roll inflation took place after the tunneling event. This paradigm has various implications for cosmology and particle physics and seems to have gained further momentum after BICEP2. It remains unanswered, however, why and how the slow-roll inflation took place after the false vacuum decay. It might be due to some fine-tuning of the potential. Such fine-tuning may be justified because, most probably, there is an anthropic bound on the duration of slow-roll inflation after the bubble nucleation \cite{Freivogel:2005vv}. Still, it is uncertain how a very flat inflaton potential extending beyond the Planck scale is realized in the landscape. Also, there is no clear connection between the slow-roll inflation and the landscape paradigm. In this Letter we propose a landscape of axions where eternal inflation occurs in the false vacua and a slow-roll inflation regime naturally appears after the tunneling events. See Fig.~\ref{FigLand} for an illustration of this concept.
Most important, we find that there is very likely to be a direction along which the effective decay constant is super-Planckian, because of accidental alignment of axions known as the Kim-Nilles-Peloso (KNP) mechanism~\cite{Kim:2004rp}.\footnote{See also \cite{Harigaya:2014eta}.} The important parameters that characterize the size and shape of the axion landscape are the number of axions, $N_{\rm axion}$, and that of shift symmetry breaking terms, $N_{\rm source}$. In the case of $N_{\rm source} > N_{\rm axion}$, the vacuum structure of the axions generates the so-called landscape, in which there are valleys and hills in the axion potential; most important, there exist many local vacua where the eternal inflation occurs, leading to a multiverse. On the other hand, in the case of $N_{\rm source} \leq N_{\rm axion}$, all the vacua are degenerate in energy, and we need to embed the axion landscape into the string landscape to induce eternal inflation. In both cases the KNP mechanism works and the super-Planckian decay constant can be generated from sub-Planckian ones. We will estimate the probability for obtaining a super-Planckian decay constant for a wide range of values of $N_{\rm axion}$ and $N_{\rm source}$ based on a simplified model. When $N_{\rm axion}=N_{\rm source}$, we will show that the enhancement of the decay constant by a factor of $10^3$ happens with a probability of about $0.8\%$, $8\%$, and $24\%$ for $N_{\rm axion} = 10,100$ and $300$, respectively. Thus, the existence of such a flat inflationary potential over super-Planckian field values is a built-in feature of the axion landscape. We will also study the other cases; when $N_{\rm source} > N_{\rm axion}$, it becomes less likely to obtain a large enhancement of the decay constant, whereas numerous local minima are generated. When $N_{\rm source} < N_{\rm axion}$, there appear massless (or extremely light) axions, which may play an important cosmological role. 
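The enhancement probabilities quoted above come from the authors' statistical treatment; as a purely illustrative companion, the toy Monte Carlo below draws random integer charge matrices and uses the inverse square root of the lightest Hessian eigenvalue (with all potential heights and decay constants set to one) as a crude proxy for the effective decay constant of the lightest direction. Every modelling choice here — the charge range, the normalisation, the proxy itself — is an assumption of this sketch, not the paper's calculation.

```python
import numpy as np

def enhancement(charges):
    """Crude proxy for the effective decay constant of the lightest
    axion direction: 1/sqrt(lambda_min) of the Hessian of
    V = sum_i [1 - cos(sum_j n_ij * theta_j)] at theta = 0,
    with all Lambda_i = f_j = 1 (an arbitrary normalisation)."""
    hessian = charges.T @ charges
    lam_min = np.linalg.eigvalsh(hessian)[0]   # eigenvalues in ascending order
    return 1.0 / np.sqrt(lam_min) if lam_min > 1e-12 else np.inf

def frac_enhanced(n_axion, threshold=10.0, trials=2000, seed=1):
    """Fraction of random N x N integer charge configurations whose
    lightest direction is 'enhanced' beyond the given threshold."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        n = rng.integers(-5, 6, size=(n_axion, n_axion))
        e = enhancement(n)
        if np.isfinite(e) and e > threshold:
            hits += 1
    return hits / trials

for n_ax in (2, 5, 10):
    print(f"N = {n_ax}: P(enhancement > 10) ~ {frac_enhanced(n_ax):.3f}")
```

This is only meant to convey how such alignment probabilities can be estimated numerically; reproducing the quoted $0.8\%$, $8\%$, and $24\%$ requires the full setup of the paper.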
Here let us mention the work relevant to the present study. Recently, it was shown in \cite{Choi:2014rja} that the KNP mechanism works for the case of $N_{\rm axion} = N_{\rm source} > 2$ with smaller values of anomaly coefficients. They estimated the probability for a given direction to have an effective super-Planckian decay constant, and they found that it was roughly given by the inverse of the enhancement factor. Our result looks significantly larger than their estimate, but this is because we have estimated the probability that the decay constant corresponding to {\it the lightest} direction happens to be super-Planckian for each configuration of the axion landscape. We obtained a consistent result when we used the same values of $N_{\rm axion} = N_{\rm source}$ adopted in \cite{Choi:2014rja}. \begin{figure}[t!] \begin{center} \includegraphics[width=12cm]{Landscape3} \caption{ Illustration of the axion landscape. The landscape consists of many axions with sinusoidal potentials of various heights and periodicities. There is likely to be a flat direction with an effective super-Planckian decay constant because of the accidental alignment of axions, whereas the typical curvature at the false vacua is much larger than the Hubble parameter. The inflaton is one of the lightest axions, and the natural or multi-natural inflation takes place after the last Coleman-De Luccia tunneling event. } \label{FigLand} \end{center} \end{figure} The inflation dynamics along the plateau is given by either natural inflation \cite{Freese:1990rb} or multi-natural inflation \cite{Czerny:2014wza}, depending on $N_{\rm axion}$ and $N_{\rm source}$ as well as the height of each term. In a sufficiently complicated landscape with $N_{\rm source} \gtrsim N_{\rm axion}$, we expect that the latter will be more generic. As we shall discuss shortly, the decay constant, inflaton mass, and duration of inflation, etc.,
are determined by $N_{\rm axion}$ and $N_{\rm source}$, i.e., the size and shape of the axion landscape. The axion landscape thus provides a unified picture of the slow-roll inflation in the landscape paradigm. In string theory, axions tend to be lighter than geometric moduli owing to gauge symmetries, and so, the axion landscape can be thought of as a low-energy branch of the string landscape paradigm. In this case, the role of the axion landscape is to generate an axion with a super-Planckian decay constant. | We have proposed a landscape of many axions where the axion potential receives various contributions from shift symmetry breaking effects. If the number of shift symmetry breaking terms, $N_{\rm source}$, is large enough, there are valleys and hills in the axion potential; eternal inflation occurs in the local minima, continuously creating new universes via the CDL tunneling. On the other hand, if $N_{\rm source} \leq N_{\rm axion}$, all the vacua are degenerate in energy. In this case, eternal inflation will be possible if one introduces another kind of shift symmetry breaking or if one embeds the axion landscape into the string landscape with a large number of local minima. Interestingly, there is very likely to be a direction along which the effective decay constant exceeds the Planck scale owing to the accidental alignment of axions, i.e., the KNP mechanism. Therefore, in the axion landscape, the existence of the slow-roll inflation regime is a natural outcome of the complicated vacuum structure. We have also argued that the effective inflation model in the axion landscape will be either natural inflation or multi-natural inflation, depending on the values of $N_{\rm axion}$ and $N_{\rm source}$. In the latter case, a wide range of $(n_s, r)$ as well as the running of the spectral index can be realized.
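For orientation, the observables mentioned here follow from the standard slow-roll dictionary (schematic textbook relations, with $M_P$ the reduced Planck mass; this notation is not taken from the paper):

```latex
\epsilon \equiv \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2 ,
\qquad
\eta \equiv M_P^2\,\frac{V''}{V} ,
\qquad
n_s \simeq 1 - 6\epsilon + 2\eta ,
\qquad
r \simeq 16\,\epsilon ,
\qquad
\alpha_s \equiv \frac{\mathrm{d}n_s}{\mathrm{d}\ln k} .
```

Small modulations of the potential shift $V'$ and $V''$ locally and can therefore imprint a running $\alpha_s$ without spoiling the overall flatness of the plateau.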
The size and shape of the axion landscape are parametrized by $N_{\rm axion}$ and $N_{\rm source}$, which determine the effective super-Planckian decay constant, and therefore the typical duration of the slow-roll inflation. In a certain case, there might be a strong pressure toward shorter inflation, and it will be more likely that we can measure the negative spatial curvature as a remnant of the CDL tunneling. Conversely, non-detection of the negative curvature will constrain the size and shape of the axion landscape. Also, the size of density perturbations and the inflaton potential height will be useful to extract information on the axion landscape. A more quantitative study on this issue will be given elsewhere. {\it Note added}: After completing this work, there appeared the papers \cite{Tye:2014tja, Kappl:2014lra, Ben-Dayan:2014zsa, Long:2014dta}, in which the alignment of two axions was discussed based on the KNP mechanism as well. We also note that the charge assignment studied in Refs.~\cite{Tye:2014tja, Ben-Dayan:2014zsa} leads to the suppression of the lighter axion mass, similarly to the seesaw mechanism for the light neutrino mass. | 14 | 4 | 1404.6923 |
1404 | 1404.6748_arXiv.txt | {Multiple generations of stars are routinely encountered in globular clusters but no convincing evidence has been found in Galactic open clusters to date.} {In this paper we use new photometric and spectroscopic data to search for multiple stellar population signatures in the old, massive open cluster, Melotte~66. The cluster is known to have a red giant branch that is wide in color, which could be an indication of a metallicity spread. Also, the main sequence is wider than what is expected from photometric errors only. This evidence might be associated with either differential reddening or binaries. Both hypotheses have, however, to be evaluated in detail before resorting to the presence of multiple stellar populations.} {New, high-quality, CCD UBVI photometry has been acquired to this aim, together with high-resolution spectroscopy of seven clump stars, which we complement with literature data; this doubles the number of clump star members of the cluster for which high-resolution spectroscopy is available. All this new material is carefully analyzed in search of any spectroscopic or photometric manifestation of multiple populations among the cluster stars.} {Our photometric study confirms that the width of the main sequence close to the turn-off point is entirely accounted for by binary stars and differential reddening, with no need to advocate more sophisticated scenarios, such as a metallicity spread or multiple main sequences. By constructing synthetic color-magnitude diagrams, we infer that the binary fraction has to be as large as 30$\%$, with mass ratios in the range 0.6-1.0. As a by-product of our simulations, we provide new estimates of the cluster fundamental parameters. We measure a reddening E(B-V)=0.15$\pm$0.02, and confirm the presence of marginal differential reddening.
The distance to the cluster is $4.7^{+0.2}_{-0.1}$ kpc and the age is 3.4$\pm$0.3 Gyr, which is somewhat younger and better constrained than previous estimates.} {Our detailed abundance analysis reveals that, overall, Melotte~66 looks like a typical object of the old thin disk population with no significant spread in any of the chemical species we could measure. Finally, we perform a photometric study of the blue straggler star population and argue that their number in Melotte~66 has been significantly overestimated in the past. The analysis of their spatial distribution supports the scenario that they are most probably primordial binaries.} | With the exception of Ruprecht~106 (Villanova et al. 2013) and possibly Terzan~8 (Carretta et al. 2014), all the Milky Way old globular clusters studied so far show either photometric or spectroscopic signatures of multiple stellar populations. The parameter driving the presence or absence of more than one population seems to be the total mass, and much work has currently been done to study the lowest mass globulars. As stressed by Villanova et al. (2013), Terzan~7, Palomar~3, and NGC~1783 can be good candidates to look at.\\ \noindent An interesting and different perspective can be to consider Galactic open clusters - in particular those few old open clusters that are still massive enough - and search for a signature of multiple generations among their stars. Unfortunately, old massive, open clusters are extremely rare in the Milky Way: first, because open clusters are not very massive at birth and second, because they lose quite some mass during their lifetime, mostly due to the tidal interaction with the Milky Way and the dense environment of the Milky Way disk (Friel 1995). The potential interest of old open clusters in the context of multiple stellar generations has been recognized for a while, but so far only two clusters have been investigated in detail: NGC~6791 (Geisler et al.
2012) and Berkeley~39 (Bragaglia et al. 2012). Both clusters have current masses $\sim 10^4 M_{\odot}$; NGC~6791 is probably somewhat more massive than Berkeley~39. In the case of Berkeley~39, no signature of multiple populations was found, while there seem to be two groups of stars in NGC~6791 having different Na abundances.\\ \noindent It is important, however, to state as clearly as possible that masses are difficult to estimate for open clusters because of the significant field star contamination and the large fraction of binaries, which affect both photometric and kinematic mass measurements (Friel 1995).\\ Therefore, one is often left with crude mass estimates, which are based mostly on the appearance of the color magnitude diagram (CMD) and the number of, for example, clump stars. A visual inspection of open clusters older than, say, 5 Gyr shows that we are left with maybe only three probably massive star clusters besides NGC~6791 and Berkeley~39. They are Trumpler~5, Collinder~261, and Melotte~66. No estimate of their mass is available, but a quick inspection of their CMDs immediately shows that they harbor roughly the same number of clump stars as NGC~6791 and Berkeley~39, and therefore, their mass should be roughly of the same order. It seems to us therefore urgent to look at these few clusters, and in this paper we are going to discuss new photometric and spectroscopic material for one of them: Melotte 66.\\ \noindent The plan of the paper is as follows. In Section~2, we summarize the literature information on Melotte~66 as completely as possible. Section~3 describes our photometric dataset, and provides details on observations, data reduction, and standardization. A star count analysis is then performed in Section~4. Section~5 deals with the study of Melotte~66 photometry, and the derivation of its fundamental parameters via the comparison with theoretical models.
In Section~6, we describe the spectroscopic data, while we perform a detailed abundance analysis in Section~7. The blue straggler population in Melotte~66 is investigated in Section~8 and, finally, the conclusions of our work are drawn in Section~9. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig1.eps} \caption{An example of a CCD frame centered on Melotte~66. North is up, east to the left, and the field of view is 14.8 $\times$ 22.8 arcmin. The image is in the B filter, and the exposure was 1500 s.} \end{figure} | We have presented in this paper a photometric and spectroscopic study of Melotte~66, one of the most massive old open clusters in the Milky Way disk. The most important result of our investigation is that Melotte~66 does not show any evidence, either photometric or spectroscopic, of distinct sub-populations among its stars. Our photometry demonstrates beyond any reasonable doubt that the MS width is produced by the presence of a significant population of binary stars. The binary sequence intersects the single star MS close to the TO, producing the visual effect that the MS is wide. For the first time, using numerical simulations, we quantify the binary fraction, which is not smaller than 30\%.\\ We discussed the cluster photometric properties and revised its fundamental parameters. The age is found to be 3.4$\pm$0.2 Gyr, which is younger than in previous investigations. The new spectroscopic material we add fully supports the conclusions from photometry. While confirming previous determinations of [Fe/H], we did not detect any significant spread in any of the elements we could analyze. Melotte~66 looks like a genuine member of the old, thin disc population when compared with other disc open clusters and disc field stars. We finally perform a photometric study of the BSS population in the cluster.
We found 14 BSS candidates, a number lower than found in previous studies, and we suggest that they are primordial binaries. In conclusion, Melotte~66, like Berkeley~39 (Bragaglia et al. 2012), is a single population star cluster. Although limited by the small number statistics, NGC~6791 seems to be the only open cluster with evidence of multiple stellar populations. The reader, however, has to be warned that NGC~6791 is questioned as a disc star cluster (Carraro 2013; Carrera 2012), since its properties are closer to those of the bulge stellar population. Besides Melotte~66 and Berkeley~39, there are not many massive old open cluster candidates. As cautioned in the introduction to this paper, masses for these clusters are extremely uncertain, and one can possibly look at Collinder~261 or Trumpler~5 as possible targets for further studies in this direction. | 14 | 4 | 1404.6748 |
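The fundamental parameters quoted for Melotte~66 (distance $\approx 4.7$ kpc, E(B-V) = 0.15) can be folded into a distance modulus with a few lines; this is an illustrative sketch, and the Galactic-average $R_V = A_V/E(B\!-\!V) = 3.1$ extinction law is an assumption of the sketch, not a value taken from the paper.

```python
import math

def distance_modulus(d_pc):
    """True distance modulus: (m - M)_0 = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_pc / 10.0)

def apparent_modulus_v(d_pc, ebv, r_v=3.1):
    """(m - M)_V = (m - M)_0 + A_V, with A_V = R_V * E(B-V)."""
    return distance_modulus(d_pc) + r_v * ebv

mu0 = distance_modulus(4.7e3)            # ~13.36 mag for d = 4.7 kpc
mu_v = apparent_modulus_v(4.7e3, 0.15)   # ~13.83 mag with E(B-V) = 0.15
print(f"(m-M)_0 = {mu0:.2f} mag, (m-M)_V = {mu_v:.2f} mag")
```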
1404 | 1404.4051_arXiv.txt | {} {We present the first optical and infrared polarimetric study of the low-mass transient X-ray binary Centaurus X-4 during its quiescent phase. This work is aimed at searching for an intrinsic linear polarisation component in the radiation emitted by the system, which might be due, e.g., to synchrotron emission from a compact jet, or to Thomson scattering with free electrons in an accretion disc. } {Multiband ($ BVRI $) optical polarimetric observations were obtained during two nights in 2008 at the ESO La Silla 3.6 m telescope, equipped with the EFOSC2 instrument used in polarimetric mode. These observations cover about 30$ \% $ of the 15.1 hour orbital period. $ J $-band observations were obtained in 2007 with the NICS instrument on the TNG telescope at La Palma, for a total of 1 hour of observation.} {We obtained 3$ \sigma $ upper limits to the polarisation degree in all the optical bands, with the most constraining one being in the $ I $-band, where $ P_{\rm I}<0.5\% $. No significant phase-correlated variability has been noticed in any of the filters. The $ J $-band observations provided a 6$ \% $ upper limit on the polarisation level.} {The constraining upper limits to the polarisation in the optical (above all in the $ I $-band) allowed us to constrain the contribution of the possible emission of a relativistic particle jet to the total system radiation to less than 10$ \% $. This is in agreement with the observation of a spectral energy distribution typical of a single black body of a K-spectral-type main sequence star irradiated by the compact object, without any significant additional component in the infrared. Due to the low S/N ratio, it was not possible to investigate the possible dependence of the polarisation degree on wavelength, which could be suggestive of polarisation induced by Thomson scattering of radiation off free electrons in the outer part of the accretion disc.
Observations with a higher S/N ratio are required to examine this hypothesis in depth, searching for significant phase-correlated variability.} | Only a few low-mass X-ray binaries (LMXBs -- systems where a compact object like a neutron star or a black hole accretes matter from a companion star via Roche lobe overflow) have been studied with polarimetric techniques to date (Charles et al. 1980; Dolan \& Tapia 1989; Gliozzi et al. 1989; Hannikainen et al. 2000; Shultz et al. 2004; Brocksopp et al. 2007; Shahbaz et al. 2008; Russell et al. 2008; Russell et al. 2011). Polarisation provides a powerful diagnostic tool to obtain information about the geometrical and physical conditions of these systems, the scattering properties of their accretion discs, or the presence of strong magnetic fields. Most of the LMXB radiation is expected to be unpolarized. Optical light from LMXBs is in fact principally made of thermal blackbody radiation from the accretion disc and the companion star, and does not possess any preferential direction of oscillation. Hydrogen in the disc is nevertheless in many cases totally ionised; for this reason a significant (but small) linear polarisation (LP) is expected in the optical (Dolan 1984; Cheng et al. 1988) due to Thomson scattering of the emitted unpolarised radiation off free electrons in the disc. This linear polarisation component is usually almost constant for scattering in the nearly symmetrical accretion disc. If there are deviations from axial symmetry, some phase-dependent variations might be expected. Furthermore, radiation emitted from the accretion disc could interact via inverse Compton scattering with the electrons in a hot plasma corona that surrounds the disc itself (Haardt et al. 1993). This phenomenon could induce high-frequency polarisation.
Optically thin synchrotron radiation in fact produces intrinsically linearly polarized light at a high level, up to tens of per cent, especially in the NIR (Russell \& Fender 2008). Jets in X-ray binaries are expected to be linked to accretion (disc-jet coupling, Fender 2001b), and for this reason they have been principally observed in persistent systems or during the outbursts of transient LMXBs, especially those containing a black hole. In the past few years, evidence for jet emission during the quiescence of LMXBs containing both black holes and neutron stars has been reported (Russell et al. 2006; Russell et al. 2007; Russell \& Fender 2008; Baglio et al. 2013; Shahbaz et al. 2013). The detection of a high level of linear polarisation in the NIR is for this reason considered the main route to assess the emission of a relativistic jet. Radiation from any source can also be polarised by the interaction with interstellar dust. This effect depends on wavelength as described by the Serkowski law (Serkowski et al. 1975) and must be accounted for in the analysis. Transient LMXBs are generally faint objects in the optical; for this reason only the brightest ones have been observed in polarimetry. Most of these studies regarded systems during outbursts or systems with BHs as the compact object during quiescence. In this paper we report the results of the optical multi-band ($ BVRI $) and infrared ($ J $-band) polarimetric observations of the LMXB Cen X-4 during quiescence using the ESO 3.6 m telescope at La Silla and the TNG telescope, respectively. This is the first polarimetric study of a quiescent LMXB containing a NS. Cen X-4 was discovered during an X-ray outburst in 1969 by the X-ray satellite \textit{Vela 5B} (Conner et al. 1969). During a second outburst in 1979 the source was detected also in the radio band (Hjellming 1979), and its optical counterpart was identified with a blue star that had brightened by 6 mag to $ V $=13 (Canizares et al. 1980).
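The Serkowski law cited above has the empirical form $P(\lambda) = P_{\max}\exp[-K\ln^2(\lambda_{\max}/\lambda)]$; the snippet below evaluates it across the observed bands. The classic constant $K = 1.15$, the illustrative $\lambda_{\max} = 0.55\,\mu$m, and the assumed $P_{\max} = 0.3\%$ are generic choices for this sketch, not values fitted to Cen X-4.

```python
import math

def serkowski(lam_um, p_max, lam_max_um=0.55, k=1.15):
    """Interstellar polarisation vs wavelength:
    P(lam) = P_max * exp(-K * ln^2(lam_max / lam)),
    with the classic Serkowski et al. (1975) constant K = 1.15."""
    return p_max * math.exp(-k * math.log(lam_max_um / lam_um) ** 2)

# Expected interstellar polarisation (in %) in the observed bands,
# for an assumed P_max = 0.3% peaking at 0.55 micron.
for band, lam in [("B", 0.44), ("V", 0.55), ("R", 0.64), ("I", 0.79), ("J", 1.25)]:
    print(f"{band} ({lam:.2f} um): P = {serkowski(lam, 0.3):.3f} %")
```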
The companion star was later classified as a $ 0.7M_{\odot} $ K5--7 star (Shahbaz et al. 1993; Torres et al. 2002) that evolved to fill its $ \sim 0.6\,R_{\odot} $ Roche lobe. The $ \sim 15.1 $ hr orbital period has been determined thanks to the ellipsoidal variations of the optical light curve (Cowley et al. 1988; Chevalier et al. 1989; McClintock et al. 1990). Cen X-4 is one of the brightest quiescent systems in the optical known to date ($V$=18.7 mag) and possesses a non-negligible disc component in the optical that contributes $ \sim 80 \% $ in $ B $, $ \sim 30 \% $ in $ V $, $ 25 \% $ in $ R $ and $ 10 \% $ in $ I $ (Shahbaz et al. 1993; Torres et al. 2002; D'avanzo et al. 2005). Cen X-4 is at a distance of $ 1.2 \pm 0.3 $ kpc (Kaluzienski et al. 1980) and the interstellar absorption is low ($ A_{V}=0.3 \,\rm mag $). These characteristics make Cen X-4 an excellent candidate for polarimetric studies in quiescence. Throughout the paper all the uncertainties are at the $ 68 \% $ confidence level unless stated otherwise. | In this work, we presented the results of an optical ($ BVRI $) and infrared ($ J $) polarimetric study on the LMXB Cen X-4 during quiescence, based on observations obtained in 2008 and 2007, respectively. We were searching for an intrinsic component of linear polarisation in the optical and IR for this source. We obtained a low polarisation degree $3\sigma$ upper limit in each optical filter, with the highest value in the $ B $-band ($P_{B}\leq 1.46 \% $) and the lowest in the $ I $-band, where $ P $ is consistent with 0 within $ 1\sigma $. A $3\sigma$ upper limit to the linear polarisation of $ 6 \% $ is obtained in the $ J $-band. We built the SED of Cen X-4 from literature data, observing that it can be fitted by the black body of the irradiated companion star alone.
Assuming a typical expected polarisation degree of at most $5\%$ for a NS X-ray binary jet with a tangled magnetic field, our $ I $-band upper limit on the linear polarisation implies that the contribution from a possible jet should be relatively low ($ \lesssim 10\% $) in terms of flux. This is in agreement with the non-detection of an infrared excess in the SED of Cen X-4. No variations correlated with the orbital period of the system have been detected and, due to the large error bars caused by the low S/N ratio, it was not possible to be conclusive on a possible increasing trend of $ P $ with decreasing wavelength, which would support the possibility of Thomson scattering of the radiation off accretion disc particles. Observations with larger diameter telescopes (8 m class) would provide smaller uncertainties for the Stokes parameters (in particular in the $ B $-band, where Cen X-4 is fainter). This will be crucial to better investigate possible orbital phase-correlated variations of $ P $ and its wavelength trend in order to assess the origin of the polarisation of Cen X-4. | 14 | 4 | 1404.4051 |
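Upper limits of this kind are conventionally built from the normalised Stokes parameters, with a correction for the positive bias of the polarisation degree; the sketch below illustrates the standard Wardle & Kronberg-style debiasing, which may differ from the exact procedure adopted in the paper, and the numbers fed in are illustrative, not the paper's measurements.

```python
import math

def lin_pol(q, u):
    """Raw (positively biased) polarisation degree from the
    normalised Stokes parameters q = Q/I and u = U/I."""
    return math.hypot(q, u)

def debias(p_raw, sigma_p):
    """Wardle & Kronberg (1974)-style correction:
    P0 = sqrt(P^2 - sigma_P^2), or 0 if noise-dominated."""
    return math.sqrt(p_raw**2 - sigma_p**2) if p_raw > sigma_p else 0.0

def pol_angle_deg(q, u):
    """Polarisation position angle: theta = 0.5 * atan2(u, q)."""
    return 0.5 * math.degrees(math.atan2(u, q))

# Illustrative numbers only:
p_raw = lin_pol(0.004, 0.003)   # 0.5% raw polarisation degree
print(debias(p_raw, 0.002), pol_angle_deg(0.004, 0.003))
```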
1404 | 1404.7444_arXiv.txt | Recent observations of the sunspot umbra suggested that it may be finely structured at a sub-arcsecond scale, representing a mix of hot and cool plasma elements. In this study we report the first detailed observations of the umbral spikes, which are cool jet-like structures seen in the chromosphere of an umbra. The spikes are cone-shaped features with a typical height of 0.5-1.0~Mm and a width of about 0.1~Mm. Their lifetime ranges from 2 to 3~min and they tend to re-appear at the same location. The spikes are not associated with photospheric umbral dots; they rather tend to occur above the darkest parts of the umbra, where magnetic fields are strongest. The spikes exhibit up and down oscillatory motions, and their spectral evolution suggests that they might be driven by upward propagating shocks generated by photospheric oscillations. It is worth noting that the triggering of running penumbral waves seems to occur during the interval when the spikes reach their maximum height. | \noindent Recent advancements in observations and modeling have created a picture of a sunspot as a very dynamic and complex magnetic structure \citep[e.g.,][]{2012ApJ...747L..18S, 2011Sci...333..316S, 2011LRSP....8....3R,2011ApJ...740...15R}. A closer look at the dark sunspot umbra using photospheric spectral lines revealed the detailed structure of bright and nearly circular small intensity patches called umbral dots \citep[UDs,][and references therein]{2012ApJ...745..163K, 0004-637X-752-2-109}. Magnetic fields in UDs are weaker than in their darker surroundings, and UDs show plasma up-flows of a few hundred m s$^{-1}$ \citep{1993A&A...278..584W, 2004ApJ...614..448S, 2004ApJ...604..906R, 2008ApJ...672..684R,2009ApJ...702.1048W, 2012ApJ...757...49W}.
According to realistic 3D simulations \citep[e.g.][]{2006ApJ...641L..73S, Bharti_2011} UDs result from magneto-convection in sunspot umbra, and represent narrow convective up-flow plumes with adjacent down-flows, which become almost field-free near the surface layer. The umbra appears much more dynamic when observed using chromospheric spectral lines. Based on recent high resolution observations it has been suggested that the sunspot's umbra may be finely structured, and consists of hot and cool plasma elements intermixed at sub-arcsecond scales \citep{2000Sci...288.1396S,2009ApJ...696.1683S, 2013ApJ...776...56R}. The well-known 3-min sunspot oscillations \citep[e.g.,][and references therein]{2000SoPh..192..373B,2013SoPh..288...73M} often lead to the appearance of bright umbral flashes (UFs), which are emissions in the core of chromospheric lines caused by hot shocked plasma \citep{2010ApJ...722..888B} and were reported to display a filamentary structure \citep{2009ApJ...696.1683S}. Later studies also found evidence for a two-component structure of the umbra \citep{2005ApJ...635..670C,2013A&A...556A.115D,2013ApJ...776...56R}. Transient jet-like structures were recently reported by \cite{2013A&A...552L...1B} seen in the Ca II H images of the sunspot umbrae, which they called umbral microjets. The microjets appear to be aligned with the umbral field, and no one-to-one correspondence between the microjets and the umbral dots was found. \cite{2013ApJ...776...56R} described small-scale, periodic, jet-like features in the chromosphere above sunspots, which result from long-period waves leaking into the chromosphere along inclined sunspot fields. In this paper we present first detailed observations of fine-scale chromospheric phenomena in sunspot's umbra using H$\alpha$ imaging spectroscopy data obtained with the New Solar Telescope \cite[NST,][]{goode_apjl_2010} that operates in Big Bear Solar Observatory. 
The data allowed us to fully resolve ubiquitous dynamic umbral H$\alpha$ jet-like features and measure their general properties. \begin{figure*}[!th] \centering \begin{tabular}{c} \epsfxsize=5.0truein \epsffile{fig1.eps} \ \end{tabular} \caption{The main spot of NOAA AR 11768 as seen in the chromospheric H$\alpha$ line (top and lower right panels) as well as the photospheric TiO 705~nm line (lower left). The top row shows off-band images of the sunspot taken at -0.02~nm (left, blue) and +0.02~nm (right, red) off the line center. All images were unsharp masked. The D.C. arrow in the lower left image points toward the disk center. The two line segments indicate the location of \textit{xt} cuts. The large circles indicate several bright photospheric umbral dots with fine structures inside them. The small circles mark the base of several spikes (see text for more details). The long tick marks separate 1~Mm intervals. The images were not corrected for the projection effect.} \label{sunspot} \end{figure*} | \noindent We summarize our findings as follows. High resolution NST observations of solar chromosphere using the H$\alpha$ spectral line revealed the existence of dynamic spike-like chromospheric structures inside sunspot's umbra, which we called umbral spikes. The spikes are on average about 0.1~Mm wide, their height does not exceed 1~Mm. They are mainly vertical with a slight tendency to fan out closer to the periphery of the umbra. Since umbral fields show similar inclination distribution we suggest that the umbral spikes are aligned with the umbral magnetic fields. The spikes show a nearly uniform distribution over the umbra with a tendency to be more concentrated in its darkest parts occupied by strongest fields in the umbra, while UDs are considered to be field-free magneto-convection features. 
Thus, our findings seem to indicate that the umbral spikes may be co-spatial with strong magnetic flux concentrations rather than with the weaker magnetic fields above UDs. These first detailed observations of umbral spikes presented here suggest that they are a wave phenomenon and result from sunspot oscillations. \cite{2013A&A...552L...1B} observed transient jet-like structures in Ca II H images of sunspot umbrae, which they called umbral microjets. The authors speculated that the microjets, which are shorter than 1~Mm and not wider than 0''.3, may be either upflow jets driven by the pressure gradient above the photospheric UDs or they may be caused by reconnection of hypothetical opposite polarity fields that might exist around large UDs. Although the H$\alpha$ spikes and the microjets are of a comparable size, the lifetime of the spikes appears to be longer (2-3~min) than that of the microjets (50~s). While a possible relationship between them has yet to be established, it seems that the two phenomena are different and the scenarios presented in \cite{2013A&A...552L...1B} may not be able to explain the spikes, since the latter show a preference to be more frequent in the darkest cores of the umbra, dominated by strong fields and devoid of large and bright UDs. \cite{2013ApJ...776...56R} described dynamic fibrils (DF) observed in the chromosphere above a sunspot. Their \textit{xt} plot generated from a series of the H$\alpha$ line center images \citep[see Fig. 3 in][]{2013ApJ...776...56R} shows the presence of parabolic intensity features, which were interpreted by these authors as being very short jet-like features precisely above the umbra. At the same time, individual umbral spikes were not resolved in their data, and only diffuse, low-contrast dark specks can be distinguished inside the umbra in the corresponding H$\alpha$ images.
The umbral spikes and the DFs may be the same wave phenomenon, while their differences in appearance are caused by the fine-scale structures in magnetic fields, the viewing angle, as well as the resolution of the data. \cite{2011ApJ...743..142H} studied wave propagation in different magnetic configurations. They found that peaks of power of 3~min oscillations and high amplitudes of vertical velocities (5-8~km/s) are located above strong photospheric flux concentrations or, in other words, inside vertical flux tubes. They also found that rising and falling jets, which form as a result of the oscillations, have their axis aligned with the magnetic axis of the field concentrations. The general appearance of the simulated jets and their oscillatory motions are strikingly similar to those of the spikes (see Fig. 25 in \cite{2011ApJ...743..142H} and Fig. 1 in this paper), although the simulated jets appear to be, on average, more extended (0.5-6.0~Mm), longer lived (2-5~min), and show higher velocities (10-40~km/s). Thus one possible interpretation of the observed phenomena is the penetration of photospheric oscillations into the chromosphere along thin and vertical magnetic flux tubes. Using spectropolarimetric observations, \cite{2000Sci...288.1396S} suggested the existence of an unresolved active component with upward directed velocities. More precisely, the anomalous polarization profiles could only be explained by emission from an unresolved mixture of an upward propagating shock and cool, slowly downflowing surroundings. Later, \cite{2005ApJ...635..670C} specified that the active component is present throughout the entire oscillation cycle. They also inferred that the shock waves propagate into the umbra inside channels of subarcsecond width, which could be the flux tubes discussed above. Finally, \cite{2009ApJ...696.1683S} presented high resolution \textit{Hinode} data on UFs, concluding that UFs show fine filamented structure.
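The dominance of 3-min power in the umbral chromosphere is commonly tied to the photospheric acoustic cutoff; a back-of-the-envelope check is below, where the adopted numbers ($\gamma = 5/3$, $g = 274$ m s$^{-2}$, $c_s \approx 7$ km s$^{-1}$) are representative assumptions rather than values from the paper.

```python
import math

GAMMA = 5.0 / 3.0   # adiabatic index
G_SUN = 274.0       # solar surface gravity [m s^-2]
C_S = 7.0e3         # representative photospheric sound speed [m s^-1]

# Isothermal acoustic cutoff: nu_ac = gamma * g / (4 * pi * c_s).
# Only waves with frequencies above nu_ac (periods below ~3 min)
# can propagate upward into the chromosphere.
nu_ac = GAMMA * G_SUN / (4.0 * math.pi * C_S)
period_s = 1.0 / nu_ac
print(f"cutoff ~ {nu_ac * 1e3:.1f} mHz, i.e. a period of ~ {period_s / 60:.1f} min")
```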
We argue that the 0''.1 wide umbral spikes may be the unresolved active component of the sunspot umbra discussed above. They represent finely structured shocked plasma showing up- and downflows of the order of 5-7 km s$^{-1}$. The data seem to suggest that the spikes are associated with the interiors of strong flux tubes, thus confirming the idea that narrow channels conduct photospheric oscillations into the chromosphere \citep{2000Sci...288.1396S, 2005ApJ...635..670C}. Finally, \cite{2013ApJ...776...56R} suggested that the fine structure of UFs is related to the sunspot dynamic fibrils. The new data presented here show that the dark filaments reported inside the UF area are the vertical umbral spikes projected on the otherwise uniform and unstructured UFs. The authors thank the anonymous referee for valuable suggestions and criticism. This work was conducted as part of the effort of NASA's Living with a Star Focused Science Team ``Jets''. We thank the BBSO observing and engineering staff for support and observations. This research was supported by NASA LWS NNX11AO73G and NSF AGS-1146896 grants. VYu acknowledges support from the Korea Astronomy and Space Science Institute during his stay there, where a part of the work was performed. | 14 | 4 | 1404.7444
1404 | 1404.2782_arXiv.txt | The thermal Sunyaev-Zel'dovich (tSZ) effect measures the line-of-sight projection of the thermal pressure of free electrons and lacks any redshift information. By cross-correlating the tSZ effect with an external cosmological tracer we can recover a good fraction of this lost information. Weak lensing (WL) is thought to provide an unbiased probe of the dark Universe, with many WL surveys having sky coverage that overlaps with tSZ surveys. Generalising the tomographic approach, we advocate the use of the spherical Fourier-Bessel (sFB) expansion to perform an analysis of the cross-correlation between the projected (2D) tSZ Compton $y$-parameter maps and 3D weak lensing convergence maps. We use redshift dependent linear biasing and the halo model as a tool to investigate the tSZ-WL cross-correlations in 3D. We use the Press-Schechter (PS) and the Sheth-Tormen (ST) mass-functions in our calculations, finding that the results are quite sensitive to detailed modelling. We provide detailed analysis of surveys with photometric and spectroscopic redshifts. The signal-to-noise $(S/N)$ of the cross-spectra $\myC_{\ell} (k)$ for individual 3D modes, defined by the radial and tangential wave numbers $( k ; \ell )$, remains comparable to, but below, unity though optimal binning is expected to improve this. The results presented can be generalised to analyse other CMB secondaries, such as the kinetic Sunyaev-Zel'dovich (kSZ) effect. | Only 50\% of baryons consistent with cosmic microwave background radiation (CMBR) and big bang nucleosynthesis (BBN) observations have been detected observationally \citep{FP04,FP06}. The validation of standard cosmological model relies on our ability to detect the missing baryons observationally \citep{Breg07}. The cosmological simulations suggest that majority of the IGM are in the form of a warm-hot intergalactic medium (WHIM) with temperature $10^5{\rm K}<{\rm T}<10^7{\rm K}$ \citep{CO99,Dave01,CO06}. 
It is also believed that WHIMs reside in moderately overdense structures such as filaments. Being collisionally ionized, these baryons do not leave any footprints in the Lyman-$\alpha$ absorption systems. The emission from WHIMs in either the UV or X-rays is too weak to be detected given the sensitivity of current instruments. However, the baryons in the cosmic web do have sufficient velocity and column density to produce a detectable CMB secondary effect known as the kinetic Sunyaev-Zeldovich (kSZ) effect \citep{SZ80}. Secondary anisotropies arise at all angular scales; the largest secondary anisotropy at the arcminute scale is the thermal Sunyaev-Zeldovich (tSZ) effect. The tSZ effect is caused by the thermal motion of electrons, mainly from hot ionized gas in galaxy clusters, whereas the {\em kinetic} Sunyaev-Zeldovich (kSZ) effect is attributed to the bulk motion of electrons in an ionized medium \citep{SZ72,SZ80}. The tSZ can be separated from CMB maps using spectral information. Along with weak lensing of the CMB, the kSZ is the most dominant secondary contribution at arcminute scales after the removal of the tSZ effect. This is because the primary CMB is sub-dominant on these scales as a result of Silk damping. Although the tSZ is capable of overwhelming the CMB primaries on cluster scales, the blind detection of the tSZ effect in a random direction on the sky is difficult, as the CMB primaries dominate on angular scales larger than that of the clusters. The tSZ and kSZ are both promising probes of the ionized fraction of the baryons, with the majority of the tSZ effect being caused by electrons in virialized collapsed objects \cite{WSP09,HH09} with overdensities that can be considerably high ($\delta > 100$).
A detailed mapping and understanding of the SZ effect is of particular interest to cosmology and astrophysics, as it is thought that the SZ effect will be a powerful method to detect galaxy clusters at high redshifts. A key feature of the tSZ effect is that the efficiency of the free electron distribution in generating the tSZ effect seems to be independent of redshift. Cosmological expansion introduces an energy loss of a factor of $1+z$ to a photon emitted from a source at redshift $z$, but the scattering of these CMB photons off electrons increases the energy of the photons by a compensating factor of $1+z$. These two effects cancel, allowing us to use the tSZ effect as a probe of galaxy clusters at high redshifts. In addition, the tSZ effect will be a powerful probe of the thermal history of our Universe, as it is a direct probe of the thermal energy of the intergalactic medium and intracluster medium. Two of the main drawbacks of tSZ studies are that the tSZ has been shown to be sensitive to a wide range of astrophysical processes, introducing degeneracies, and that the tSZ is a measure of the projected electron thermal energy along the line of sight. This smears out all redshift information, and the contributions of the various astrophysical processes become badly entangled with the projection effects. Recent studies have proposed the reconstruction and recovery of redshift information by cross-correlating the tSZ effect with galaxies and their photometric redshift estimates \citep{ZP01, Shao11b}. One of the leading methods proposed in the literature is tomographic reconstruction, in which we crudely bin the data into redshift slices and construct the 2D projection for each bin. The auto (single bin) and cross (between bins) correlations can then be used to constrain model parameters and extract information.
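The projection property and the redshift independence described above are both explicit in the standard definition of the Compton $y$-parameter as a line-of-sight integral of the electron pressure (a textbook relation, quoted here for reference rather than taken from this paper's equations):

```latex
\[
y \;=\; \frac{\sigma_T}{m_e c^2} \int n_e \, k_B T_e \, \mathrm{d}l ,
\qquad
\frac{\Delta T}{T_{\rm CMB}} \;=\; f(x)\, y ,
\qquad
f(x) \;=\; x\,\frac{e^x + 1}{e^x - 1} - 4 ,
\quad
x \equiv \frac{h\nu}{k_B T_{\rm CMB}} ,
\]
```

where $\sigma_T$ is the Thomson cross-section and $n_e$, $T_e$ are the electron density and temperature. Neither $y$ nor $\Delta T/T$ carries an explicit redshift factor, and the distinctive spectral function $f(x)$ (negative below $\nu \simeq 217$~GHz, positive above) is what allows the tSZ signal to be separated from the CMB using spectral information.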
This paper is concerned with extending these studies to a full 3D analysis in which we avoid binning the data and therefore avoid the consequent loss of information. In principle, 3D studies would allow a full-sky reconstruction that includes the effects of sky curvature and extended radial coverage. As SZ studies are often followed by photometric or spectroscopic galaxy surveys, we expect that photometric redshifts up to $z \sim 1.3 - 2$ will be readily available in due course. We investigate the cross-correlation of the tSZ with an external tracer given by cosmological weak lensing and photometric redshift surveys. Current ongoing and proposed future ground-based surveys, such as SZA\footnote{http://astro.uchicago.edu/sza}, ACT\footnote{http://www.physics.princeton.edu/act}, APEX\footnote{http://bolo.berkeley.edu/apexsz} and SPT\footnote{http://pole.uchicago.edu}, are mapping the tSZ sky, and the recently completed all-sky Planck survey \cite{Planck13y} has published a map of the entire $y$-sky with great precision (see also \cite{HS14}). The high-multipole ($\ell \sim 3000$) tSZ power spectrum has been observed by the SPT collaboration \citep{Lu10, Saro13,Hanson13,Holder13,Vieira13,Hou14,Story13,High12}, with the ACT collaboration \citep{Fw10,Dn10, Shegal11,Hand11,Sher11,Wil13,Dunk13,Calab13} reporting an analysis on similar scales. It is expected that ongoing surveys will improve these measurements owing to their improved sky coverage as well as wider frequency range. It is important to appreciate why the study of secondaries such as the tSZ should be an important aspect of any CMB mission. In addition to the important physics the secondaries probe, accurate modeling of the secondary non-Gaussianities is required to avoid $20\%-30\%$ constraint degradations in future CMB data-sets such as Planck\footnote{http://www.rssd.esa.int/index.php?project=planck} and CMBPol\footnote{http://cmbpol.uchicago.edu/} \citep{Smidt10}.
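The full 3D analysis advocated above rests on the spherical Fourier-Bessel (sFB) transform. For reference, one common convention is the following (normalizations differ between authors, so this is a sketch rather than necessarily the exact convention of this paper):

```latex
\[
f(\mathbf{r}) \;=\; \sqrt{\frac{2}{\pi}} \sum_{\ell m} \int_0^\infty k\,\mathrm{d}k\;
f_{\ell m}(k)\, j_\ell(kr)\, Y_{\ell m}(\hat{\mathbf{n}}) ,
\qquad
f_{\ell m}(k) \;=\; \sqrt{\frac{2}{\pi}} \int \mathrm{d}^3 r\;
f(\mathbf{r})\, k\, j_\ell(kr)\, Y^{*}_{\ell m}(\hat{\mathbf{n}}) ,
\]
```

where $j_\ell$ are spherical Bessel functions and $Y_{\ell m}$ spherical harmonics. Each 3D mode is then labeled by a radial wavenumber $k$ and a tangential multipole $\ell$, which is why the cross-spectra considered later are written as functions $\myC_{\ell}(k)$ of both.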
While the tSZ surveys described above provide a direct probe of the baryonic Universe, weak lensing observations \citep{MuPhysRep08}, on the other hand, can map the dark matter distribution in an unbiased way. In recent years there has been tremendous progress on the technical front in terms of the specification and control of systematics in weak lensing observables. There are many current ongoing weak lensing surveys, such as the CFHT{\footnote{http://www.cfht.hawai.edu/Sciences/CFHLS/}} legacy survey, Pan-STARRS{\footnote{http://pan-starrs.ifa.hawai.edu/}} and the Dark Energy Survey (DES){\footnote{https://www.darkenergysurvey.org/}}. In the future, the Large Synoptic Survey Telescope (LSST){\footnote{http://www.lsst.org/llst\_home.shtml}}, the Joint Dark Energy Mission (JDEM){\footnote{http://jdem.gsfc.nasa.gov/}} and Euclid {\footnote{http://sci.esa.int/euclid/}} will map the dark matter and dark energy distribution of the entire sky in unprecedented detail. Among other things, these surveys hold great promise in shedding light on the nature of dark energy and the origin of neutrino masses \citep{JK11}, with the weak lensing signal dominating the other probes considered by, e.g., the Dark Energy Task Force \citep{Al11}. However, the optimism that has been associated with weak lensing is predicated on first overcoming the vast systematic uncertainties in both the measurements and the theory \citep{HS04,MHH05,CH02,HS03,Wh04,Hu06,MTC06}. The statistics of the weak lensing convergence have been studied in great detail using an extension of perturbation theory \citep{MuJa00,MuJai01,MuVaBa04} and methods based on the halo model \citep{CH00,TJ02,TJ03}. These studies developed techniques that can be used to predict the lower-order moments (equivalent to the power spectrum and multi-spectra in the harmonic domain) and the entire PDF for a given weak lensing survey.
The photometric redshifts of source galaxies are useful for tomographic studies of the dark matter distribution and for establishing a three-dimensional picture of their distribution \citep{MunshiK11}. Finally, cross-correlations with other tracers of large scale structure, such as intensity mapping from future 21cm surveys, could also be considered \citep{Chang08}. This paper is primarily motivated by the recent paper \citep{WHM14}, in which the CFHTLenS data were cross-correlated with Planck tSZ maps. They measure a non-zero correlation between the two maps out to one degree of angular separation on the sky, with an overall significance of six sigma, and use the results to conclude that a substantial fraction of the ``missing'' baryons in the Universe may reside in a low-density warm plasma that traces the dark matter. An internal detection of the tSZ effect and CMB lensing cross-correlation in the Planck nominal mission data has also recently been reported at a significance of 6.2 sigma \citep{HS14}. While these correlations were computed using 2D projections, we develop techniques for cross-correlation studies in 3D that go beyond the tomographic treatment \citep{HT95,H03,BHT95,Castro05,PM13}. This paper is organised as follows. In Section \textsection\ref{sec:Notation} we outline some key notation and the cosmological parameters that will be adopted throughout this paper. Section \textsection\ref{sec:tSZ} forms the core of our paper and introduces in more detail the tSZ effect, cosmological weak lensing and photometric redshift surveys. The cross-correlations of the tSZ with the external tracers are detailed, and a discussion of realistic survey effects is introduced. We highlight different approaches to cosmological weak lensing, notably the halo model, and examine how redshift space distortions affect our spectra. Finally, Section \textsection\ref{sec:conclu} is reserved for concluding remarks as well as a discussion of our results. In this paper we focus on the tSZ effect.
The corresponding results for the kSZ effect will be presented elsewhere. | \label{sec:conclu} In this paper we have extended in detail a study of 3D thermal Sunyaev-Zel'dovich cross-correlations with cosmological weak lensing and spectroscopic redshift surveys. Most previous studies to date have focused on either projected studies or tomographic reconstruction. In projection studies, information is lost in the sense that by projecting onto the 2D sky we necessarily disregard information concerning the distances to individual sources. An alternative approach is tomography, which is something of a hybrid between 3D studies and 2D projection. In tomography the sources are divided into redshift slices, on each of which a 2D projection is performed. This means that we foliate our sky with projections in a given redshift bin. This is a rather crude division and does not capture the full 3D information that will be available in upcoming large scale structure surveys. The method proposed in this paper is based on a 3D spherical Fourier-Bessel expansion, in which we aim to use distance information from the start. Note that certain parameters will be less sensitive to this inclusion of distance information, such as the amplitude of the power spectra, but for others, notably those that depend on the line-of-sight history of the Universe, 3D methods could be a very promising avenue of research. This paper encapsulates a few interesting results as well as summarising some of the key features present in the sFB formalism. \begin{figure} \centering \textbf{tSZ-Spectroscopic Redshift Survey Cross-Correlation: Effect of RSD}\par\medskip { \includegraphics[width=55mm]{./Images/d-y_1400_20_RSD_NRSD.pdf} } { \includegraphics[width=55mm]{./Images/d-y_3600_20_RSD_NRSD.pdf} } { \includegraphics[width=55mm]{./Images/d-y_3600_50_RSD_NRSD.pdf} } \caption{Here we plot the effect of redshift space distortions (RSDs) smoothed by the unredshifted power spectra.
Each of the panels corresponds to the panels shown in Figure \ref{fig:tSZ-RSD}. The RSDs induce radial mode-mixing, meaning that power is smoothed across the modes. This is seen at low $k$, where the spectra including RSDs have less power than their unredshifted counterparts. At higher $k$ we hit oscillatory features that differ from those in the pure unredshifted contributions, and beyond $k \sim 10^{-1}$ we are in a noise-dominated regime where the oscillations of the Bessel functions are prominent and the numerics become tedious. } \label{fig:tSZ-RSD-Sm} \end{figure} In order to study the tSZ-WL cross-correlation, we adopted two different approaches. The first approach used the standard linear power spectrum in the analysis. The second approach used the halo model of large scale clustering to construct a non-linear power spectrum for the analysis. The halo model takes into account a number of interesting physical inputs. These include (amongst other inputs): the dark matter density profile, the gas density profile, the electron temperature as a function of halo mass, the mass function of halos and the overdensity of collapse. This allows us to connect the underlying physics to the predicted spectra in a more explicit manner than before. We know that the tSZ effect is sensitive to the higher mass halos, and by combining the WL observations with the tSZ observations we can probe both the underlying baryonic and dark matter distributions as a function of halo mass. We expect the tSZ-WL cross-correlation to be sensitive to the halo mass function and density profiles. We introduce the conditional probability function of photometric redshifts to bridge survey-dependent observations to the cleaner theoretical predictions. True observations of galaxies have an intrinsic dispersion error on the measured redshifts. In the sFB formalism, redshift errors and errors in distance simply translate into radial errors.
This results in a coupling of the modes, and the observations become smoothed along our line of sight. We considered survey-dependent parameters suitable for the DES and the LSST. Finally, we constructed the cross-correlation of the tSZ effect with spectroscopic redshift surveys in order to study the effects that redshift space distortions would have on a cross-correlation of the tSZ effect with galaxy surveys. The procedure followed that outlined in \citep{PM13}. In our modelling, we have used different redshift-dependent linear biasing schemes at large angular scales for the modelling of the diffuse tSZ effect, in association with the halo model for collapsed objects, as a tool to investigate the tSZ-WL cross-correlations in 3D. We use both the Press-Schechter (PS) and the Sheth-Tormen (ST) mass functions in our calculations, finding that the results are quite sensitive to the detailed modelling, as most of the contribution to the tSZ effect comes from the extended tail of the mass function (one-halo term). We provide a detailed analysis of surveys with photometric redshifts. In the case of cross-correlation with spectroscopic redshift surveys, we provide detailed estimates of the contributions from redshift-space distortions. The signal-to-noise (S/N) of the resulting cross-spectra $\myC_{\ell}(k)$ for individual 3D modes, defined by the radial and tangential wave numbers $(k,\ell)$, remains comparable to, but below, unity, though optimal binning is expected to improve the situation. In summary, the thermal Sunyaev-Zel'dovich effect acts as a probe of the thermal history of the Universe, and the primary observable, the Compton $y$-parameter, appears to have no significant dependence on redshift. The integrated nature of the tSZ effect means that redshift information can be lost, diminishing our ability to probe the redshift evolution of the baryonic Universe. That is why we also study cosmological weak lensing as a complementary tracer.
Weak lensing is predominantly affected by the gravitational potential along the line of sight and is therefore an external tracer of the underlying dark matter field. By reconstructing the mass distribution of the Universe, we can hopefully recover redshift information and probe the baryonic and dark Universes in a complementary way. Constraints on the dark sector, such as studies of decaying dark matter or dark matter-dark energy interactions, have recently attracted a lot of attention. Such effects could be probed by the tSZ or kSZ effects (e.g. \cite{Xu13}). Similarly, the halo model for large scale clustering offers strong potential for testing different approaches to the various input ingredients: mass function, dark matter profile, gas density profile, etc. To this end, we have seen that the tSZ is sensitive to high-mass halos, and a cross-correlation may be an interesting tool for constraining and testing the physical assumptions that enter the halo model of large scale clustering, such as the halo density profiles or the halo mass function. In our analysis we neglected general relativistic corrections, which may be both important and interesting in their own right, especially in forthcoming surveys \citep{Umeh12,February13,Yoo13,Andrianomena14}. Finally, we would like to point out that the IGM has most likely been preheated by non-gravitational sources. The feedback from SNe or AGNs can play an important role. The analytical modelling of such non-gravitational processes is rather difficult. Numerical simulations \citep{SWH01, Sel01, silva00, silva04, WHS02, Lin04} have shown that the amplitude of the tSZ signal is sensitive to the non-gravitational processes, e.g. the amount of radiative cooling and energy feedback. It is also not straightforward to disentangle contributions from competing processes. The inputs from simulations are vital for any progress. Our analytical results should be treated as a first step in this direction.
We have focused mainly on large angular scales, where we expect the gravitational processes to dominate and such effects to be minimal. Thus the effect of additional baryonic physics can be separated using the formalism developed here. To understand the effect of baryonic physics we can use the techniques developed in \citep{MuJoCoSm11} for the different components and study them individually. | 14 | 4 | 1404.2782
1404 | 1404.2920_arXiv.txt | { In this work, we present a homogeneous curve-shifting analysis using the difference-smoothing technique of the publicly available light curves of 24 gravitationally lensed quasars, for which time delays have been reported in the literature. The uncertainty of each measured time delay was estimated using realistic simulated light curves. The recipe for generating such simulated light curves with known time delays in a plausible range around the measured time delay is introduced here. We identified 14 gravitationally lensed quasars that have light curves of sufficiently good quality to enable the measurement of at least one time delay between the images, adjacent to each other in terms of arrival-time order, to a precision of better than 20\% (including systematic errors). We modeled the mass distribution of ten of those systems that have known lens redshifts, accurate astrometric data, and sufficiently simple mass distribution, using the publicly available PixeLens code to infer a value of $H_0$ of 68.1 $\pm$ 5.9 km s$^{-1}$ Mpc$^{-1}$ (1$\sigma$ uncertainty, 8.7\% precision) for a spatially flat universe having $\Omega_m$ = 0.3 and $\Omega_\Lambda$ = 0.7. We note here that the lens modeling approach followed in this work is a relatively simple one and does not account for subtle systematics such as those resulting from line-of-sight effects and hence our $H_0$ estimate should be considered as indicative. } | The Hubble constant at the present epoch ($H_0$), the current expansion rate of the universe, is an important cosmological parameter. All extragalactic distances, as well as the age and size of the universe depend on $H_0$. It is also an important parameter in constraining the dark energy equation of state and it is used as input in many cosmological simulations (\citealt{Freedman2010}; \citealt{Planck2014}). Therefore, precise estimation of $H_0$ is of utmost importance in cosmology. 
Estimates of $H_0$ available in the literature cover a wide range of uncertainties from $\sim$2\% to $\sim$10\% and the value ranges between 60 and 75 km s$^{-1}$ Mpc$^{-1}$. The most reliable measurements of $H_0$ known to date include \begin{description} \item[--] the Hubble Space Telescope (HST) Key Project (72 $\pm$ 8 km s$^{-1}$ Mpc$^{-1}$; \citealt{Freedman2001}), \\ \item[--] the HST Program for the Luminosity Calibration of Type Ia Supernovae by Means of Cepheids (62.3 $\pm$ 5.2 km s$^{-1}$ Mpc$^{-1}$; \citealt{Sandage2006}), \\ \item[--] Wilkinson Microwave Anisotropy Probe (WMAP) (70.0 $\pm$ 2.2 km s$^{-1}$ Mpc$^{-1}$; \citealt{Hinshaw2013}), \\ \item[--] Supernovae and $H_0$ for the Equation of State (SH0ES) Program (73.8 $\pm$ 2.4 km s$^{-1}$ Mpc$^{-1}$; \citealt{Riess2011}), \\ \item[--] Carnegie Hubble Program (CHP) (74.3 $\pm$ 2.6 km s$^{-1}$ Mpc$^{-1}$; \citealt{Freedman2012}), \\ \item[--] the Megamaser Cosmology Project (MCP) (68.9 $\pm$ 7.1 km s$^{-1}$ Mpc$^{-1}$; \citealt{Reid2013}; \citealt{Braatz2013}), \\ \item[--] Planck measurements of the cosmic microwave background (CMB) anisotropies (67.3 $\pm$ 1.2 km s$^{-1}$ Mpc$^{-1}$; \citealt{Planck2014}), and \\ \item[--] Strong lensing time delays (75.2$^{+4.4}_{-4.2}$ km s$^{-1}$ Mpc$^{-1}$; \citealt{Suyu2013}). \end{description} It is worth noting here that the small uncertainties in $H_0$ measurements resulting from WMAP and Planck crucially depend on the assumption of a spatially flat universe. Although the values of $H_0$ obtained from different methods are consistent with each other within 2$\sigma$ given the current level of precision, all of the above methods of determination of $H_0$ suffer from systematic uncertainties. Therefore as the measurements increase in precision, multiple approaches based on different physical principles need to be pursued so as to be able to identify unknown systematic errors present in any given approach. 
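As a toy numerical illustration of how independent measurements like those listed above can be combined, the sketch below computes a naive inverse-variance weighted mean in Python. This deliberately ignores correlations between methods and systematic errors, and it symmetrizes the asymmetric strong-lensing error bar to 4.3, so it is a statistical sketch only, not a result taken from this paper.

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its formal 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# H0 values (km/s/Mpc) and 1-sigma errors quoted in the list above;
# the +4.4/-4.2 strong-lensing error is symmetrized to 4.3.
h0 = [72.0, 62.3, 70.0, 73.8, 74.3, 68.9, 67.3, 75.2]
sig = [8.0, 5.2, 2.2, 2.4, 2.6, 7.1, 1.2, 4.3]

mean, err = weighted_mean(h0, sig)
# The combined value lands between the extremes of the list, with a
# formally smaller error bar than any single measurement.
```

A rigorous joint constraint would instead model the shared systematics of each method, which is precisely why the text stresses pursuing multiple independent approaches.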
The phenomenon of strong gravitational lensing offers an elegant method to measure $H_0$. For gravitationally lensed sources that show variations in flux with time, such as quasars, it is possible to measure the time delay between the various images of the background source. The time delay, which is a result of the travel times for photons being different along the light paths corresponding to the lensed images, has two origins: (i) the geometric difference between the light paths and (ii) the gravitational delay due to the dilation of time as photons pass in the vicinity of the lensing mass. Time delays therefore depend on the cosmology, through the distances between the objects involved, and on the radial mass profile of the lensing galaxies. This was shown theoretically five decades ago by \citet{Refsdal1964}, long before the discovery of the first gravitational lens Q0957+561 by \citet{Walsh1979}. Estimation of $H_0$ through gravitational lens time delays, although it has its own degeneracies, is based on the well-understood physics of General Relativity and, compared to distance ladder methods, is free from various calibration issues. In addition to measuring $H_0$, measurement of time delays between the light curves of a lensed quasar can be used to study the microlensing variations present in the light curves and to study the structure of the quasar (\citealt{Hainline2013}; \citealt{Mosquera2013}). However, these time delay measurements of $H_0$ are extremely challenging because of the need for an intensive monitoring program that offers high cadence and good-quality photometric data over a long period of time. Such a program must be able to cope with the uncorrelated variations present in the lensed quasar light curves, which can arise, interestingly, from microlensing by stars in the lensing galaxy \citep{Chang1979}, or for mundane reasons, such as the presence of additive flux shifts in the photometry \citep{Tewes2013a}.
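The two contributions described above are summarized by the standard time-delay relation of gravitational lens theory (quoted here for reference; it is not reproduced from this paper's text):

```latex
\[
\Delta t \;=\; \frac{1+z_d}{c}\,\frac{D_d D_s}{D_{ds}}
\left[ \frac{\left(\boldsymbol{\theta}-\boldsymbol{\beta}\right)^2}{2}
- \psi(\boldsymbol{\theta}) \right] ,
\]
```

where $z_d$ is the lens redshift, $D_d$, $D_s$ and $D_{ds}$ are the angular diameter distances to the lens, to the source, and between them, $\boldsymbol{\theta}$ and $\boldsymbol{\beta}$ are the image and source positions, and $\psi$ is the lens potential. The first term is the geometric delay and the second the gravitational (Shapiro) delay; since each angular diameter distance scales as $c/H_0$, $\Delta t \propto H_0^{-1}$, so a measured delay combined with a model for $\psi$ yields $H_0$.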
Moreover, the estimation of $H_0$ from such high-quality data is hampered by the uncertainty in lens models. Recently, using time delay measurements from high-quality optical and radio light curves, deep and high-resolution imaging observations of the lensing galaxies and lensed AGN host galaxy, and the measurement of the stellar velocity dispersion of the lens galaxy to perform detailed modeling, \citet{Suyu2013} report an $H_0$ of 75.2$^{+4.4}_{-4.2}$ km s$^{-1}$ Mpc$^{-1}$ through the study of two gravitational lenses, namely RX J1131$-$1231 and CLASS B1608+656. Another approach is to perform simple modeling of a relatively large sample of gravitational lenses with moderate-precision time delay measurements. In this way, it should be possible to obtain a precise determination of the global value of $H_0$, even if the $H_0$ measurements from individual lenses have large uncertainties. In addition, when inferring $H_0$ from a relatively large sample of lenses, line-of-sight effects that bias the $H_0$ measurements from individual lenses \citep[see][Sect. 2]{Suyu2013} should tend to average out, although a residual systematic error must still remain \citep{Hilbert2007, Fassnacht2011}. A pixelized method of lens modeling is available in the literature and is implemented in the publicly available code PixeLens \citep{Saha2004}. Using this code, \citet{Saha2006} found $H_0$ = 72$^{+8}_{-11}$ km s$^{-1}$ Mpc$^{-1}$ for a sample of ten time delay lenses. Performing a similar analysis on an extended sample of 18 lenses, \citet{Paraficz2010} obtained $H_0$ = 66$^{+6}_{-4}$ km s$^{-1}$ Mpc$^{-1}$. Here, we present an estimate of $H_0$ using the pixellated modeling approach on a sample of carefully selected lensed quasars. So far, time delays have been reported for 24 gravitationally lensed quasars among the hundreds of such strongly lensed quasars known. However, the quality of the light curves and the techniques used to infer these time delays vary between systems.
In this work, we apply the difference-smoothing technique, introduced in \citet{Kumar2013}, to the publicly available light curves of the 24 systems in a homogeneous manner, first to cross-check the previously measured time delays and then to select a subsample of suitable lens systems to determine $H_0$. The paper is organized as follows. Section \ref{section:curve-shifting} describes the technique used for time delay determination and introduces a recipe for creating realistic simulated light curves with known time delays; the simulated light curves are used in this work to estimate the uncertainty of each measured delay. In Sect. \ref{section:application}, the application of the curve-shifting procedure to the 24 systems is described. In Sect. \ref{section:lens-modelling}, we infer $H_0$ from the lens-modeling of those systems that have at least one reliably measured time delay, known lens redshift, accurate astrometric data, and sufficiently simple mass distribution. We conclude in Sect. \ref{section:conclusion}. | \label{section:conclusion} We have presented a homogeneous curve-shifting analysis of the light curves of 24 gravitationally lensed quasars for which time delays have been reported in the literature so far. Time delays were measured using the difference-smoothing technique and their uncertainties were estimated using realistic simulated light curves; a recipe for creating these light curves with known time delays in a plausible range around the measured delay was introduced in this work. We identified 14 systems to have light curves of sufficiently good quality to enable the measurement of at least one time delay between the images, adjacent to each other in terms of arrival-time order, to a precision of better than 20\% (including systematic errors). 
Of these 14 systems, we performed pixellated mass modeling using the publicly available PixeLens software for ten of them, which have known lens redshifts, accurate astrometric information, and sufficiently simple mass distributions, to infer the value of $H_0$ to be 68.1 $\pm$ 5.9 km s$^{-1}$ Mpc$^{-1}$ (1$\sigma$ uncertainty, 8.7\% precision) for a spatially flat universe having $\Omega_m$ = 0.3 and $\Omega_\Lambda$ = 0.7. We note here that we have followed a relatively simple lens modeling approach to constrain $H_0$ and our analysis does not account for biases resulting from line-of-sight effects. Our measurement closely matches a recent estimate of $H_0$ = 69.0 $\pm$ 6 (stat.) $\pm$ 4 (syst.) km s$^{-1}$ Mpc$^{-1}$ found by \cite{Sereno2014} using a method based on free-form modeling of 18 gravitational lens systems. Our value is also consistent with the recent measurements of $H_0$ by \citet{Riess2011}, \citet{Freedman2012} and \citet{Suyu2013}; however, it has lower precision. Increasing the number of lenses with good-quality light curves, accurate astrometry, and known lens redshift from the current ten used in this study can bring down the uncertainty in $H_0$. In the future such high-precision time delays will become available from projects such as COSMOGRAIL \citep{Tewes2012} involving dedicated medium-sized telescopes. In addition, the next generation of cosmic surveys such as the Dark Energy Survey (DES), the Large Synoptic Survey Telescope (LSST; \citealt{Ivezic2008}), and the Euclid mission will detect a large sample of lenses, and time delays might be available for a large fraction of them, consequently enabling measurement of $H_0$ to an accuracy better than 2\%. Furthermore, detection of gravitational wave signals from short gamma-ray bursts associated with neutron star binary mergers in the coming decade could constrain $H_0$ to better than 1\% \citep{Nissanke2013}. | 14 | 4 | 1404.2920 |
Spectra for 2D stars in the 1.5D approximation are created from synthetic spectra of 1D non-local thermodynamic equilibrium (NLTE) spherical model atmospheres produced by the PHOENIX code. The 1.5D stars have the spatially averaged Rayleigh-Jeans flux of a K3-4 III star, while varying the temperature difference between the two 1D component models ($\Delta T_{\mathrm{1.5D}}$), and the relative surface area covered. Synthetic observable quantities from the 1.5D stars are fitted with quantities from NLTE and local thermodynamic equilibrium (LTE) 1D models to assess the errors in inferred $T_{\mathrm{eff}}$ values from assuming horizontal homogeneity and LTE. Five different quantities are fit to determine the $T_{\mathrm{eff}}$ of the 1.5D stars: UBVRI photometric colors, absolute surface flux SEDs, relative SEDs, continuum normalized spectra, and TiO band profiles. In all cases except the TiO band profiles, the inferred $T_{\mathrm{eff}}$ value increases with increasing $\Delta T_{\mathrm{1.5D}}$. In all cases, the inferred $T_{\mathrm{eff}}$ value from fitting 1D LTE quantities is higher than from fitting 1D NLTE quantities and is approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$ within each case. The difference between LTE and NLTE for the TiO bands is caused indirectly by the NLTE temperature structure of the upper atmosphere, as the bands are computed in LTE. We conclude that the difference between $T_{\mathrm{eff}}$ values derived from NLTE and LTE modelling is relatively insensitive to the degree of the horizontal inhomogeneity of the star being modeled, and largely depends on the observable quantity being fit.

\section{Introduction}

\subsection{Red Giant Stars}

Red giant stars rank among the brightest stars in the Galaxy, being generally much brighter in the visible band than main sequence stars of the same spectral type. But this only partially accounts for their brightness, as a large portion of their flux is emitted in the near-IR.
This, combined with their enormous surface areas, gives them such large luminosities that they are easily observable even in very remote stellar populations. Red giants grant us a tool for probing nearly all regions of the Galaxy using a common indicator, a feat unparalleled by most other types of stars. \paragraph{} Because many red giants are low to intermediate mass stars that have evolved beyond the main sequence, generally found in older stellar populations, their abundances can be indicators of early Galactic chemical evolution. For example, by comparing observations of Galactic bulge giants with those of giants located in the thin and thick disks and halo, it has been shown that the bulge likely experienced similar formation timescales, chemical evolution histories, star formation rates and initial mass functions as the local thick disk population \citep{melendez08,alves-brito10}. Bulge and disk giants show some differences in their chemical abundances, with the bulge giants showing a higher relative abundance of select elements than the disk giants, suggesting more rapid chemical enrichment, possibly by ejecta from supernovae of Types Ia and II \citep{cunha06}. Observations of red clump giants in the bulge have also produced additional evidence of a central bar \citep{stanek97}, with their apparent visual magnitudes being brighter at some Galactic latitudes than others. \paragraph{} The tip of the red giant branch (TRGB) can also be used as a standard candle to determine distances to nearby galaxies. The distance moduli obtained from the $I$ pass-band magnitude of the TRGB are comparable with those from primary distance indicators like Cepheids and RR Lyraes \citep{makarov06}, and in some cases even suggest a reevaluation of the metallicity dependence and zero point calibration of the Cepheid distance scale \citep{salaris98,rizzi07}. 
The TRGB method even has advantages over other distance determinations like those of Cepheids and RR Lyraes: $\left(1\right)$ the TRGB method requires much less telescope time than variable stars; $\left(2\right)$ the $I$ magnitude of the TRGB is insensitive to the variation of metallicity for $\mathrm{[Fe/H]}<-$ 0.7; and $\left(3\right)$ the TRGB suffers less from extinction problems than Cepheids, which are in general located in star-forming regions \citep{lee93}. \subsection{Modeling Stellar Atmospheres\label{intromodel}} Much of what is now known about all types of stars comes from fitting the predicted quantities from atmospheric models to observations. Estimates of the solar chemical abundances come from fitting synthetic spectral line profiles and equivalent widths to those observed in the Sun \citep{ross76,asplund09,caffau11}. Calibrations of stellar parameters, such as $g$, $M_V$, $T_{\mathrm{eff}}$, $L$, and $R_{0}$, for different spectral types are found from fitting models \citep{martins05}. Beyond studying single stars, model atmospheres can be used to determine qualities of larger structures as well. The age-metallicity and color-metallicity relations of globular clusters can be determined from the abundances of individual red giants within the clusters \citep{pilachowski83,carretta97,carretta10}. \paragraph{} In this work, we will explore the limitations of two of the simplifying assumptions of atmospheric modeling: horizontal homogeneity and local thermodynamic equilibrium (LTE). Both of these assumptions have been generally adopted because they are more computationally practical than the alternatives, requiring less time and fewer resources to arrive at a result. By comparison, horizontal inhomogeneity requires model atmospheres to be calculated in two or three geometric dimensions (2D or 3D models) instead of just one, and non-LTE (NLTE) requires that each level population be computed in statistical equilibrium (SE) using iterative processes. 
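As a side note, the TRGB standard candle discussed in the previous subsection reduces to a one-line application of the distance modulus once the apparent $I$ magnitude of the tip is measured. The numbers below are assumed for illustration: $M_I \approx -4.05$ is a commonly quoted tip calibration, and the apparent magnitude is invented, not drawn from any particular galaxy.

```python
# Illustrative TRGB distance; both inputs are assumed values.
M_I_TRGB = -4.05                   # absolute I magnitude of the TRGB (assumed calibration)
I_TRGB = 20.0                      # hypothetical observed apparent I magnitude of the tip

mu = I_TRGB - M_I_TRGB             # distance modulus, mu = m - M
d_pc = 10.0 ** ((mu + 5.0) / 5.0)  # from mu = 5 log10(d / 10 pc)
d_mpc = d_pc / 1.0e6               # ~0.65 Mpc for these inputs
```

The $\sim$0.1 mag uncertainty in the tip calibration translates directly into a $\sim$5\% distance uncertainty through the same relation.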
\paragraph{} However, horizontal homogeneity and LTE both limit how realistic a model can be. Simulations of red giant atmospheres performed in 3D have confirmed that turbulent surface convection causes horizontal inhomogeneities to form \citep{collet07,kucinskas13b}, such as visually observable surface features like solar granulation \citep{mathur11,tremblay13}. These features are known to lead to detectable effects, such as altering predicted line strengths and shapes and, thus, inferred elemental abundances \citep{collet08,collet09,dobrov13,hayek11,kucinskas13a,mashonkina13}. For 3D models of red giant atmospheres, whose modeling parameters span the ranges of 3600 K $\leq T_{\mathrm{eff}}\leq$ 5200 K, 1.0 $\leq\mathrm{log\,}g\leq$ 3.0, and $-$3.0 $\leq\mathrm{[Fe/H]}\leq$ 0.0, granules have been shown to span a range of sizes from on the order of 10$^{8}$ cm up to 2 $\times$ 10$^{12}$ cm, with the majority on the order of 10$^{11}$ cm \citep{collet07,chiavassa10,hayek11,ludwig12,magic13b,tremblay13}. The cooler stars and stars with lower values of $\mathrm{log}\, g$ generally display larger features. For the same set of 3D models, the root mean square (RMS) temperature variation among these features at optical depth unity is usually in the range of $\sim$ 2 to 5 $\%$, or $\sim$ 200 to 300 K for the parameters listed above, although variations can reach 2000 K between the hottest and coolest areas \citep{collet08,collet09,kucinskas13a,kucinskas13b,ludwig12,magic13a,magic13b,samadi13,tremblay13}. Most of the 3D models reported in the literature have $T_{\mathrm{eff}}$ $\approx$ 4500 K, with $\Delta T$ varying by $\sim$ 200 K. \paragraph{} For stars exhibiting horizontal inhomogeneities, where the temperature varies across the features, $T_{\mathrm{eff}}$ is no longer a well-defined quantity.
By definition, $T_{\mathrm{eff}}$ follows from the Stefan-Boltzmann law, in which the bolometric luminosity ($L_{\mathrm{bol}}$) of a star is proportional to the fourth power of its effective temperature; we denote the $T_{\mathrm{eff}}$ defined in this way as $T_{\mathrm{eff,S-B}}$. The $T_{\mathrm{eff,S-B}}$ of a horizontally inhomogeneous star may be similarly found by summing the $L_{\mathrm{bol}}$ of each of the inhomogeneous components, weighted by their relative surface coverage, and taking the fourth root of the result. This quantity is defined empirically by intrinsic properties of real or model stars and is independent of fitting models to observable quantities. Alternatively, a second $T_{\mathrm{eff}}$ that is independent of model fitting can be defined from the long-wavelength tails of stellar spectra. Because flux has a linear dependence on $T_{\mathrm{eff}}$ in the Rayleigh-Jeans limit, an estimate of the $T_{\mathrm{eff}}$ can be made from measuring the absolute surface flux of the R-J tail. This dependence is used by the Infrared Flux Method \citep{ramirez05} to determine $T_{\mathrm{eff}}$ from the R-J tails of spectra ($T_{\mathrm{eff,R-J}}$). \paragraph{} The key idea is that a horizontally inhomogeneous star and a horizontally homogeneous star with the same $T_{\mathrm{eff,S-B}}$ will nevertheless have different SEDs, because the modeling parameters, such as $T_{\mathrm{eff}}$, vary across the inhomogeneous surface. For example, using the Planck function, \begin{equation} B_{\lambda}(T_{\mathrm{eff}})=\frac{2hc^{2}}{\lambda^{5}}\left(e^{hc/\lambda k_{B}T_{\mathrm{eff}}}-1\right)^{-1}, \end{equation} to describe the shape of a stellar continuum illustrates the issue directly \citep{uitenbroek11}. A star with a range of temperatures across its surface has no single obvious value of $T_{\mathrm{eff}}$ to assign, and because of the equation's non-linear dependence on $T_{\mathrm{eff}}$, the choice matters.
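The two fitting-independent temperature definitions just described can be made concrete with a short numerical sketch; the component temperatures and equal filling factors below are hypothetical, chosen only to expose the difference between the definitions.

```python
# Hypothetical 1.5D star built from 4000 K and 5000 K patches.
T1, T2 = 4000.0, 5000.0
f1, f2 = 0.5, 0.5                          # surface filling factors (must sum to 1)

# Stefan-Boltzmann T_eff: fourth root of the area-weighted mean of T^4,
# since L_bol scales as T^4 for each component.
T_SB = (f1 * T1**4 + f2 * T2**4) ** 0.25

# Rayleigh-Jeans T_eff: flux is linear in T in the R-J tail, so the
# long-wavelength estimate is simply the area-weighted mean temperature.
T_RJ = f1 * T1 + f2 * T2
```

For this pair, $T_{\mathrm{eff,S-B}} \approx 4581$ K exceeds $T_{\mathrm{eff,R-J}} = 4500$ K: the two model-independent definitions already disagree for an inhomogeneous surface, before any model fitting is attempted.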
The higher temperature material will contribute disproportionately more flux than the lower temperature material, with a dependence on wavelength, and will alter the spectrum from that of a horizontally homogeneous star of the same $T_{\mathrm{eff,S-B}}$ accordingly. \paragraph{} The most noticeable departure of LTE models from observed stars comes from comparing computed and observed spectral features and SEDs. For 1D models of red giants, NLTE models have been shown to be more accurate than LTE models in predicting the overall monochromatic flux ($F_{\lambda}$) levels of SEDs and strengths of individual spectral lines, with the notable exceptions of molecular absorption bands and the near-UV band flux \citep{short03,short06,short09,bergemann13}. Calculating the molecular level populations in NLTE is computationally demanding, and is not handled in many atmospheric modeling codes. Both LTE and NLTE models over-predict the near-UV $F_{\lambda}$ levels of cool red giants, and the over-prediction is more severe for NLTE models than for their LTE counterparts. The NLTE effects of Fe group elements on the model structure and $F_{\lambda}$ distribution have been shown to be much more important for predicting a SED than the NLTE effects of all the light metals combined, and serve to substantially increase the near-UV $F_{\lambda}$ levels as a result of NLTE Fe I overionization \citep{short09}. The magnitude of this effect has been shown to be inversely proportional to the completeness of the Fe I atomic model used in the atmospheric modeling \citep{mashonkina11}, discussed in detail in Section \ref{PHOENIX}. These failures of 1D NLTE models to predict observable quantities may be, in part, related to the exclusion of horizontal inhomogeneities in the models.
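Returning to the Planck-function argument above, a short computation quantifies how strongly the wavelength dependence favors the hot component; the 4000 K and 5000 K pair below is hypothetical, chosen for illustration.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23    # SI values of h, c, k_B

def planck(lam, T):
    """Planck spectral radiance B_lambda (SI units)."""
    return 2.0 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)

# Hot-to-cool surface-brightness ratio at a blue and a red wavelength.
ratio_blue = planck(400e-9, 5000.0) / planck(400e-9, 4000.0)   # ~6
ratio_red  = planck(1000e-9, 5000.0) / planck(1000e-9, 4000.0) # ~2
```

The hot patches outshine the cool ones by roughly a factor of six at 4000 $\textrm{\AA}$ but only about a factor of two at 1 $\mu$m, which is why the blue and near-UV diagnostics fitted later in this work are the most sensitive to horizontal inhomogeneity.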
\subsection{Present Work \label{sec:Present-Work}} Our primary goal is to investigate the relation between errors introduced in determining a star's $T_{\mathrm{eff}}$ from assuming horizontal homogeneity and those introduced by assuming LTE. We aim to determine how distinguishable 3D hydro and NLTE effects are when looking at low-resolution diagnostics (photometric colors and overall SEDs). The error inherent in assuming LTE is expected to remain approximately constant for different levels of horizontal inhomogeneity: any changes in an LTE spectrum caused by the inhomogeneities should also appear to a similar degree in the corresponding NLTE spectrum, because the relative difference in the flux distribution between LTE and NLTE should remain roughly constant within the $T_{\mathrm{eff}}$ range studied in this work. \paragraph{} Section \ref{methods} outlines the parameters used for the 1D NLTE and LTE model grids, and the methods used in creating and processing the 1.5D NLTE spectra. Section \ref{Results} presents the results of fitting 1D SEDs and spectra to the 1.5D SEDs and spectra. Section \ref{summary} gives a brief summary and discussion of the results.

\section{Summary and Discussion}
\label{summary}

The goal of this work has been to analyze the effects of massively NLTE atmospheric modeling combined with 2D horizontal inhomogeneities on $T_{\mathrm{eff}}$ values inferred from SEDs and line profiles. The stellar atmosphere and spectrum synthesis code PHOENIX was used to generate a grid of spherical stellar atmosphere models in both LTE and NLTE, and to synthesize spectra for the models. \paragraph{} Spectra of target 2D ``observed'' stars were produced in the 1.5D approximation by linearly averaging two NLTE 1D spectra together under two different weighting schemes (FF), such that the $T_{\mathrm{eff,R-J}}$ was 4250 K and the temperature difference between the two 1D components was as large as $\Delta T_{\mathrm{1.5D}}$ = 1050 K.
The grid of LTE and NLTE 1D SEDs and spectra was fit to the observations to infer $T_{\mathrm{eff}}$ values for the 1.5D stars using five different approaches. All inferred $T_{\mathrm{eff}}$ values and differences between inferred $T_{\mathrm{eff}}$ and 1.5D $T_{\mathrm{eff,S-B}}$ are considered to have a formal uncertainty of 25 K, half of one temperature resolution unit in our pre-interpolated 1D grid. \paragraph{} Photometric colors of 1D stars computed from synthetic UBVRI photometry were compared to 1.5D colors to assess the errors in photometrically derived $T_{\mathrm{eff}}$ values. For the five color indices and both values of FF, the inferred value of $T_{\mathrm{eff}}$ was seen to increase with $\Delta T_{\mathrm{1.5D}}$, and increased at a greater rate for indices that involved bluer wavebands. When the LTE and NLTE results were compared, the $T_{\mathrm{eff}}$ values inferred from fitting LTE colors were systematically higher than their NLTE counterparts. The magnitude of $\Delta T_{\mathrm{NLTE}}$ was approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$ in all cases. The value was largest when comparing U$_{x}$-B$_{x}$ inferred $T_{\mathrm{eff}}$ values, and decreased for redder indices. \paragraph{} Absolute surface flux 1D SEDs were fit to the 1.5D SEDs to assess how changes to the predicted bolometric flux introduced by the modeling assumptions affect the inferred value of $T_{\mathrm{eff}}$. For both values of FF, the inferred value of $T_{\mathrm{eff}}$ was seen to increase with $\Delta T_{\mathrm{1.5D}}$. This approach showed the lowest overall error of any of the full $\lambda$ distribution fitting approaches in the inferred value of $T_{\mathrm{eff}}$; only a 110 K difference between the fit $T_{\mathrm{eff}}$ value and $T_{\mathrm{eff,S-B}}$ at maximum $\Delta T_{\mathrm{1.5D}}$.
Again, the inferred $T_{\mathrm{eff}}$ values from fitting LTE SEDs were systematically higher than those from fitting NLTE SEDs, and the magnitude of $\Delta T_{\mathrm{NLTE}}$ was approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$. \paragraph{} Relative 1D SEDs normalized to the average continuum flux in a 10 $\textrm{\AA}$ window in the R-J tail were fit to the 1.5D SEDs to assess how the modeling assumptions affect the overall shape of the SED and the temperature-sensitive spectral features located in the blue and near-UV bands. For both values of FF, the inferred value of $T_{\mathrm{eff}}$ was seen to increase with $\Delta T_{\mathrm{1.5D}}$. The inferred $T_{\mathrm{eff}}$ values from fitting LTE SEDs were systematically higher than those from fitting NLTE SEDs, and the magnitude of $\Delta T_{\mathrm{NLTE}}$ was approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$. These three complementary methods (photometric colors, absolute surface flux SEDs, and relative SEDs) show results consistent with each other. \paragraph{} Of the three spectrophotometric methods, the photometric colors give both the highest and the lowest errors on the estimates of the $T_{\mathrm{eff}}$. At maximum $\Delta T_{\mathrm{1.5D}}$, the U$_{x}$-B$_{x}$ index fitting returned values up to 340 K higher than $T_{\mathrm{eff,S-B}}$, while the R-I index returned values as much as 60 K below $T_{\mathrm{eff,S-B}}$. The V-I fitting resulted in error values similar to that of the absolute SEDs, with fitted $T_{\mathrm{eff}}$ values as high as 90 K above $T_{\mathrm{eff,S-B}}$. The relative SED fitting returned error values similar to the V-R index, as high as 210 and 240 K above $T_{\mathrm{eff,S-B}}$, respectively. Together, these results reinforce that red/near-IR photometry is more reliable for diagnosing $T_{\mathrm{eff}}$ in stars with significant horizontal inhomogeneity.
\paragraph{} Continuum normalized 1D spectra spanning a wavelength distribution between $\lambda$ = 3000 and 13000 $\textrm{\AA}$ were fit to the 1.5D spectra to assess how the modeling assumptions change the predicted strength of spectral features. For both values of FF, the inferred value of $T_{\mathrm{eff}}$ was seen to increase with $\Delta T_{\mathrm{1.5D}}$. It is important to note that this result is consistent with the spectrophotometric results, even though it is arrived at through a complementary method. This approach showed the highest overall error of any of the full spectrum fitting approaches in the inferred value of $T_{\mathrm{eff}}$; 310 K above $T_{\mathrm{eff,S-B}}$ at maximum $\Delta T_{\mathrm{1.5D}}$. The inferred $T_{\mathrm{eff}}$ values from fitting LTE spectra were systematically higher than those from fitting NLTE spectra, and the magnitude of $\Delta T_{\mathrm{NLTE}}$ was approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$. \paragraph{} Predicted line profiles for the important 1D TiO bands spanning a wavelength range between $\lambda$ = 5500 and 8000 $\textrm{\AA}$ were fit to the 1.5D TiO bands to assess how the strength of molecular features found primarily in the cold component of the 1.5D stars affects the inferred value of $T_{\mathrm{eff}}$. This was the only approach to show the inferred value of $T_{\mathrm{eff}}$ decreasing with increasing $\Delta T_{\mathrm{1.5D}}$. The rapid nonlinear growth of molecular features with decreasing temperature became the dominant aspect in determining the $T_{\mathrm{eff}}$ value, over the nonlinear contribution to the average flux from the higher temperature 1.5D component. The inferred $T_{\mathrm{eff}}$ values from fitting LTE spectra were still systematically higher than those from fitting NLTE spectra, and the magnitude of $\Delta T_{\mathrm{NLTE}}$ was approximately constant as a function of $\Delta T_{\mathrm{1.5D}}$.
\paragraph{} In this work we have shown that the approximations of both horizontal homogeneity and LTE introduce errors in the value of $T_{\mathrm{eff}}$ inferred from fitting quantities derived from models to observed quantities. By assuming both horizontal homogeneity and LTE, the inferred $T_{\mathrm{eff}}$ values may differ from the $T_{\mathrm{eff,S-B}}$ of a star by 340 K or more, depending on the quantity used to infer the $T_{\mathrm{eff}}$. Of the two values of FF, 1:1 produced hotter values of inferred $T_{\mathrm{eff}}$ in general. In simulating horizontal inhomogeneities it was seen that the bolometric flux of a 1.5D star increased with $\Delta T_{\mathrm{1.5D}}$, and a percentage of the flux was redistributed at bluer wavelengths. Spectral features in general appeared to have been produced by a star hotter than the $T_{\mathrm{eff,R-J}}$, except that strong molecular features found in cooler stars were also present in the spectra. Furthermore, for all five approaches, $\Delta T_{\mathrm{NLTE}}$ is approximately independent of $\Delta T_{\mathrm{1.5D}}$, and we conclude that the magnitude of the effect of NLTE on $T_{\mathrm{eff}}$ derived from fitting 1D models is approximately independent of the thermal contrast characterizing the degree of the horizontal inhomogeneity of the star being modeled, and only dependent on the observable quantity being fit. \paragraph{} While $\Delta T_{\mathrm{1.5D}}$ was the only parameter varied in the scope of this study, other modeling parameters such as log $g$ and [Fe/H] are expected to have an impact on $T_{\mathrm{eff}}$ -- $T_{\mathrm{eff,S-B}}$ and $T_{\mathrm{eff}}$ -- $T_{\mathrm{eff,R-J}}$ derived in various ways for horizontally inhomogeneous stars. Both \citet{samadi13} and \citet{tremblay13} have shown the RMS temperature variations to increase with decreasing log $g$, and \citet{tremblay13} has also shown them to increase with decreasing [Fe/H]. 
Likewise, the values of the fitted $T_{\mathrm{eff}}$ -- $T_{\mathrm{eff,S-B}}$ at a given $\Delta T_{\mathrm{1.5D}}$ are expected to vary with the $T_{\mathrm{eff,R-J}}$ of the 1.5D stars. Investigating these parameters requires extensive additions to our 1D grid of models, as well as an updated Fe I atomic model, and will be explored in a future study.
The discovery of a binary comprising a black hole (BH) and a millisecond pulsar (MSP) would yield insights into stellar evolution and facilitate {\edit exquisitely sensitive} tests of general relativity. Globular clusters (GCs) are known to harbor large MSP populations and recent studies suggest that GCs may also retain a substantial population of stellar mass BHs. We modeled the formation of BH+MSP binaries in GCs through exchange interactions between binary and single stars. We found that in dense, massive clusters most of the dynamically formed BH+MSP binaries will have orbital periods of 2 to 10 days, regardless of the mass of the BH, the number of BHs retained by the cluster, and the nature of the GC's binary population. The size of the BH+MSP population is sensitive to several uncertain parameters, including the BH mass function, the BH retention fraction, and the binary fraction in GCs. Based on our models, we estimate that there are $0.6\pm0.2$ dynamically formed BH+MSP binaries in the Milky Way GC system, and place an upper limit on the size of this population of $\sim 10$. Interestingly, we find that BH+MSP binaries will be rare even if GCs retain large BH populations.

\section{Introduction}
\label{sec:intro}

Radio pulsars in binary systems provide constraints on the processes that drive binary stellar evolution and unparalleled tests of general relativity in the strong-field regime. In most cases, the pulsars in these binaries are ``recycled.'' That is, the neutron star (NS) has been spun up by accreting mass and angular momentum from its companion \citep{Alpar:1982}. Compared to ``normal'' pulsars, recycled pulsars exhibit greater stability and have much shorter spin periods ($P_{\rm S} \la 100$ ms), both of which facilitate high-precision measurements of the pulse arrival times \citep{Lorimer:2008}.
{\edit Recycled pulsars with $P_{\rm S} \la 20$ ms are referred to as millisecond pulsars (MSPs).} The outcomes of binary evolution can be probed by using the recycled pulsar in such a system as a stable clock to precisely determine the binary's Keplerian orbital parameters and the properties of its component stars. If the recycled pulsar's companion is another neutron star, it is possible to measure post-Keplerian orbital parameters in a model-independent fashion and then compare these measurements with the predictions of various theories of gravity \citep{Stairs:2004}. The post-Keplerian parameters measured in the double pulsar binary PSR J0737-3039 offer the best test of gravity in the strong field limit, to date, and are in excellent agreement with the predictions of general relativity \citep{Kramer:2006}. {\edit New and better tests} of general relativity may be possible by applying these techniques to a binary comprising a black hole (BH) and a recycled pulsar, however such a system has yet to be discovered. It is possible to produce a BH+recycled pulsar binary through standard evolutionary processes in an isolated, high-mass binary. The scenario requires that the primary (the initially more massive member of the binary, which evolves faster than its companion) produces a NS at the end of its lifetime and that the secondary produces a BH. This can occur if the primary transfers enough material to its companion to drive the companion's mass above the threshold for BH production. The NS created by the primary could then be recycled by accreting material from the companion before it evolves to become a BH. Under the assumption that these recycled pulsars would have lifetimes longer than $10^{10}$ yr, \citet{Narayan:1991} placed an empirical upper limit on the formation rate of such BH+recycled pulsar binaries of $10^{-6}~{\rm yr^{-1}}$ within the Galaxy. 
Population synthesis models by \citet{Lipunov:1994} found a comparable formation rate, while similar studies by \citet{Sipior:2002}, \citet{Voss:2003}, \citet{Sipior:2004}, and \citet{Pfahl:2005} favored lower BH+recycled pulsar binary formation rates of $\sim 10^{-7}~{\rm yr^{-1}}$. Additionally, \citet{Pfahl:2005} argued that the NSs in these systems would only be mildly recycled. Due to the rapid evolution of its massive companion, the NS, accreting at the Eddington limit, would only have time to accrete $10^{-3}-10^{-4}\msun$ of material before the secondary collapsed into a BH. These mildly recycled pulsars would only have lifetimes of $10^{8}$ yr. Even if the pulsars in these systems were completely recycled {\edit into long-lived MSPs}, the population synthesis calculations suggest that most of these systems will undergo a gravitational wave driven merger within $\sim 10^{8}$ yr. {\edit With formation rates of $10^{-7}$ yr$^{-1}$ and lifetimes of $\la 10^8$ yr the number of BH+recycled pulsar binaries expected to exist in the Milky Way is only $\sim 10$.} In a globular cluster, a BH+recycled pulsar binary {\edit need not form directly from} a primordial binary. The high stellar density in GCs leads to dynamical encounters between cluster members, which opens a wide array of evolutionary pathways that are inaccessible to isolated binaries. For example, a single NS in a GC can gain a companion by exchanging into a primordial binary during a three-body interaction. Subsequent evolution of these newly created binaries can result in the NS being spun-up into a recycled, MSP (\citealt{Hills:1976,Sigurdsson:1995,Ivanova:2008}). These encounters are evidenced by the enhanced formation rates of low mass X-ray binaries (LMXBs) and their progeny, MSPs, observed in GCs \citep[e.g][]{Katz:1975, Verbunt:1987, Pooley:2003, Camilo:2005}. Any of the BHs present in the cluster could acquire a MSP companion through similar interactions \citep{Sigurdsson:2003}. 
However, uncertainties in the size and nature of the BH population present in GCs complicate investigations of this formation channel. \citet{Kulkarni:1993} and \citet{Sigurdsson:1993} argued that the stellar mass BHs formed in a GC would rapidly sink to the center of the cluster and eject one another in a phase of intense self-interaction. The frenzy of ejections results in a substantial {\edit depletion} of the cluster's stellar mass BH population during the first Gyr of evolution. The fact that a firm BH candidate had not been identified in a GC during decades of observational study was in line with this theoretical picture. Given the meager BH populations implied by these investigations, the dynamical formation of BH+MSP binaries in GCs has received little attention. After all, this channel closes if there is {\edit no} population of BHs present in GCs. Nevertheless, the production of BH+MSP binaries through multibody interactions has been considered in dense stellar environments, analogous to GCs, that are likely to harbor BHs. \citet{Faucher-Giguere:2011} showed that a few dynamically formed BH+MSP binaries should be present in the Galactic Center, where a cluster of $\sim 10^{4}$ stellar mass BHs is expected to exist. This result indicates that BH+MSP binaries {\edit might be produced} in GCs if the clusters retained some of their stellar mass BHs. Recent observational efforts have shown that there are BHs present in some GCs, prompting a renewed interest in the nature of GC BH populations. A number of promising BH candidates have been discovered in X-ray observations of extragalactic GCs \citep{Maccarone:2007, Maccarone:2011, Irwin:2010, Shih:2010, Brassington:2010, Brassington:2012, Roberts:2012, Barnard:2012}. Furthermore, three BH candidates have been identified in deep radio observations of Milky Way GCs; two candidates reside in M22 and one candidate is in M62 \citep{Strader:2012,Chomiuk:2013}.
There is also a growing body of theoretical work suggesting that it may be possible for GCs to retain a substantial fraction of their stellar mass BH populations, under certain circumstances {\redit \citep{Mackey:2008,Sippel:2013,Morscher:2013,Breen:2013,Breen:2013a,Heggie:2014}}. Motivated by these new results, we set out to explore how efficiently three-body exchanges produce BH+MSP binaries in GCs. It has also been suggested that GCs may harbor intermediate mass BHs (IMBHs; $M \sim 10^{2}-10^{4} \msun$). Previous studies have considered the consequences of interactions between MSPs and these IMBHs. The encounters could result in a MSP being significantly displaced from the GC core \citep{Colpi:2003}, produce an IMBH+MSP binary \citep{Devecchi:2007}, or populate the Milky Way halo with several high-velocity MSPs \citep{Sesana:2012}. We will not include IMBHs in the models presented here, and instead focus on stellar mass BHs. This paper is organized as follows. We describe the features of our models and motivate the range of initial conditions that our simulations explore in \cref{sec:method}. In \cref{sec:orbparms} we discuss the orbital parameters of the BH+MSP binaries produced in our models. We discuss the size of the BH+MSP binary population and the possibility of detecting such a binary in \cref{sec:detection}. Finally, in \cref{sec:discussion}, we summarize and discuss our findings.

\section{Summary and Discussion}
\label{sec:discussion}

We have presented a study of the dynamically formed BH+MSP binaries in GCs. We found that in the highest density clusters ($n_{\rm c} \ga 5 \times 10^{5}~{\rm pc^{-3}}$), the semimajor axis distribution of the BH+MSP binaries is nearly independent of all of the parameters that we varied in our study. This property of the BH+NS binary populations is {\edit beneficial for observers} who hope to identify such systems.
Regardless of the nature of many uncertain characteristics, including the GC BH and binary populations, the vast majority of BH+MSP binaries produced in dense GCs will have $2 < P_{\rm B} < 10$ days. In lower density clusters, $M_{\rm BH}$ does influence the expected orbital periods of the BH+MSP binaries. In clusters with $n_{\rm c} \sim 10^{5}~{\rm pc^{-3}}$, BH+MSP binaries with {\edit massive BHs} ($M_{\rm BH} = 35\msun$) will typically have orbital periods around 20 days. For BH+MSP binaries with $7\msun$ BHs, the expected orbital periods are much longer, with typical periods in the 150 to 250 day range. {\edit Importantly, we have also found} that dynamically formed BH+MSP binaries are quite rare. We estimated that the maximum number of detectable BH+MSP binaries produced through this channel in the Milky Way GC {\edit system is approximately 3--5}. Comparing the size of the BH+MSP binary population predicted by our models to population synthesis models of such binaries in the field, we find that the dynamical encounters result in a factor of $\sim$100 enhancement in BH+MSP binary production per unit mass in GCs. The birthrates of other exotic objects (e.g., LMXBs, MSPs) in GCs receive a similar boost over the field due to the additional dynamical formation channels available to the members of a dense stellar system. The small size of the population is not a consequence of our assumption that most stellar mass black holes are ejected from the cluster early in its evolution. The presence of {\edit many BHs will} also reduce the probability that a cluster harbors a BH+MSP binary. BH+MSP binary formation can be stifled by as few as 19 BHs. If there are several BHs in the cluster, the BHs will preferentially interact with each other and not the NSs. Furthermore, any BH+NS binaries that are formed may be destroyed when another BH exchanges into the binary. This behavior has also been seen in models that considered the evolution of the BH population as a whole.
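For scale, Kepler's third law translates these period ranges into physical separations; the masses and period below are illustrative choices, not fitted values from the models.

```python
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30                 # solar mass, kg

# Hypothetical system: a 7 Msun BH with a 1.4 Msun MSP in a 5-day orbit.
M_total = (7.0 + 1.4) * M_sun
P = 5.0 * 86400.0                # orbital period, s

# Kepler's third law: a^3 = G M P^2 / (4 pi^2)
a = (G * M_total * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
a_au = a / 1.496e11              # separation in AU (~0.12 AU)
a_rsun = a / 6.957e8             # separation in solar radii (~25 Rsun)
```

A 5-day orbit thus corresponds to a separation of only $\sim$0.1 AU, compact enough that such a binary would be a promising target for timing searches.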
\citet{Sadowski:2008} and \citet{Downing:2010} found that very few BH+NS binaries were produced in their simulations, which included several hundred to over one thousand BHs. We expect dynamically formed BH+MSP binaries to be rare regardless of the size of the retained BH population. Another factor that played a surprisingly small role in limiting the size of the BH+MSP binary population was the large post-merger recoil that BH+NS binaries are expected to receive. Several recent studies have used numerical relativity to simulate BH+NS binary mergers, and these models show that the remnant BH will receive a kick of more than $50$ km s$^{-1}$ when $3\la M_{\rm BH}/M_{\rm NS} \la 10$ \citep[e.g.][]{Shibata:2009,Etienne:2009,Foucart:2011}. These kicks exceed the escape velocities of all but the most massive GCs, so BHs of $\sim 7 \msun$ will be ejected from the cluster once they merge with an NS. These post-merger ejections reduce \tbhns because they act to remove single BHs from the cluster. However, there are three ways to avoid the large recoils. First, at smaller mass ratios, the NS is tidally disrupted before the merger, which halts the anisotropic emission of gravitational radiation and suppresses the kick. Unfortunately, it is unlikely that a BH+NS binary with such a small mass ratio exists in nature. Second, the linear momentum flux responsible for the kick declines for larger values of $M_{\rm BH}/M_{\rm NS}$, again reducing the magnitude of the kick \citep[see, e.g.][]{Fitchett:1983}. Finally, a large BH spin could also decrease the magnitude of the post-merger kick. \citet{Foucart:2013} showed that for BHs with dimensionless spin parameters of 0.9, the recoil would be smaller than typical GC escape velocities for BH+NS binaries with mass ratios as small as 7. We tested how these latter two plausible scenarios would affect our results.
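The claim that a $\ga 50$ km s$^{-1}$ recoil unbinds the remnant from all but the most massive clusters can be checked with a back-of-the-envelope estimate. The sketch below (not the paper's code; the cluster mass and radius are illustrative values chosen by us) compares a typical globular cluster escape velocity with the quoted kick.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
PC = 3.086e16        # m

def escape_velocity_kms(mass_msun, radius_pc):
    """Escape velocity v_esc = sqrt(2 G M / r) in km/s."""
    v = math.sqrt(2.0 * G * mass_msun * MSUN / (radius_pc * PC))
    return v / 1.0e3

# illustrative cluster: 5e5 Msun enclosed within a 3 pc radius
v_esc = escape_velocity_kms(5e5, 3.0)
print("v_esc ~ %.0f km/s" % v_esc)   # well below the ~50 km/s recoil kick
```

For these assumed parameters the escape velocity comes out below 50 km s$^{-1}$, consistent with the statement that merged BHs are lost from typical clusters.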
Many of the models discussed above already illustrate the case in which the post-merger kick is suppressed because $M_{\rm BH} \gg M_{\rm NS}$. In the simulations that used $35 \msun$ BHs, the kick is small enough that most of the BHs are retained by the cluster after merging with an NS. However, as we previously discussed in \cref{sec:detection}, the BH+NS binaries produced in these models do not live as long as the binaries produced in simulations with lower mass BHs. The presence of a more massive BH accelerates both of the hardening mechanisms that drive the BH+NS binaries to merge. We also tested a more extreme mass ratio by running a simulation with a $100 \msun$ BH. We found \tbhns = 0.036 in this model, which is similar to the value of \tbhns for a simulation in the same background cluster with a 35 \msun BH. Furthermore, the IMBH+MSP formation rate implied by this value of \tbhns is consistent with previous work on the formation of such binaries by \citet{Devecchi:2007}. Increasing the mass of the BH to suppress the kick does not significantly increase \tbhns, and accordingly the size of the BH+MSP binary population. To test the high spin scenario, we reran simulations using one 7 \msun BH in the high density clusters ($n_{\rm c} \ge 5 \times 10^{5}$ pc$^{-3}$) with the post-merger kicks switched off. In both cases \tbhns increased by nearly a factor of two, to 0.13 and 0.08 in clusters C and D, respectively. Despite the significant increase in \tbhns, we still find $N_{\rm BH+MSP} \la 1$ in the Milky Way GC system. It should also be noted that these new simulations actually overestimate the number of rapidly spinning BHs retained by the GCs. The kick is only reduced significantly if the misalignment between the BH's spin and the angular momentum of the BH+NS binary is $\la 60\degr$. For dynamically formed binaries, it is likely that the orbital angular momentum and the spins of their components will have random orientations.
Furthermore, we also counted the number of times the BHs in these simulations merged with non-NSs to estimate the degree to which the BHs would be spun up by thin disk accretion. {\edit We will discuss these results more broadly in the context of X-ray binary production in a future publication. For now we will only examine whether the BHs are able to accrete a substantial amount of angular momentum.} In both of the background clusters considered, the BHs merged with an average of 0.97 non-NSs during their lifetimes. It was only in rare cases that the BHs merged with 7--9 stars. Thus, it is unlikely that the BH's spin will increase substantially during its evolution in the cluster. The BHs must be born with large spins for this post-merger recoil mechanism to be effective. Even if this is the case, we expect that BH+MSP binaries will be extremely rare in GCs. Some limitations of our method will impact the results of our simulations. Because our simulations do not include binary--binary encounters, they do not capture several processes that affect the formation of BH+MSP binaries. As discussed above, binary--binary interactions open up additional BH+NS binary formation channels. Furthermore, in clusters with multiple BHs, collisions between pairs of BH+BH binaries could eject or disrupt many BH+BH binaries \citep[e.g.][]{OLeary:2006,Banerjee:2010,Downing:2010,Tanikawa:2013}. Reducing the number of BHs in the cluster and freeing BHs from otherwise impenetrable BH+BH binaries would increase the likelihood that a BH+MSP binary is produced. However, binary--binary interactions will also disrupt and eject BH+MSP binaries. Models that include binary--binary interactions are needed to see which processes dominate. {\redit We also neglected long-range interactions between BH-binaries and background stars. These interactions do not perturb the binary's orbital parameters as strongly as close encounters, but they do occur more frequently.
Of particular concern is whether the change in eccentricity resulting from these encounters will accelerate the rate of orbital contraction through the emission of gravitational waves (see \cref{eqn:dadtgw}). For eccentric binaries, the change in eccentricity induced by an encounter declines as $r_p^{-3/2}$, where $r_p$ is the separation between the binary and the single star at pericenter \citep[][]{Heggie:1975}. The change in eccentricity declines even faster with increasing $r_p$ for circular binaries \citep{Hut:1984, Rappaport:1989,Phinney:1992,Rasio:1995}. These encounters drive a random walk in eccentricity because they are just as likely to increase a binary's eccentricity as they are to decrease it. We used the cross-sections for eccentricity change derived by \citet{Heggie:1996} to estimate the root mean square rate of change in eccentricity induced by the distant encounters that our models did not include. Given these rates, we found that the eccentricity of the binaries in our simulations would change by $< 0.05$ over their entire lifetimes. In simulation 10 ($n_{\rm c} = 10^5$ pc$^{-3}$, one 7\msun BH), for example, the median eccentricity change amongst the BH+NS binaries was 0.008. Allowing for this modest change in eccentricity changed the median gravitational wave merger time for the BH+NS binaries by $\pm 7\%$. The median merger time for BH+WD binaries changed by $\pm 9\%$. Distant encounters will only have a small effect on the BH+MSP binary population.} {\redit Finally, as a consequence of the assumption that the background cluster was static, our simulations were unable to capture some dynamical processes. In the static cluster models,} we forced the BHs to remain in equilibrium with the rest of the cluster. If we had allowed for the dynamical evolution of the BHs, they might have decoupled from the cluster and produced a dense, inner subcluster. This would have further reduced the number of encounters between BHs and NSs.
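The few-percent sensitivity of the merger time to a small eccentricity change can be illustrated with the standard Peters (1964) merger-time scaling. The sketch below is not the paper's code: the binary parameters are assumed for illustration, and it uses only the leading-order eccentricity factor $(1-e^2)^{7/2}$ rather than the full Peters integral.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m s^-1
MSUN = 1.989e30      # kg
AU = 1.496e11        # m
YR = 3.156e7         # s

def peters_merger_time(a_au, e, m1_msun, m2_msun):
    """Gravitational-wave merger time (Peters 1964), approximated with
    the leading-order eccentricity factor (1 - e^2)^(7/2)."""
    a = a_au * AU
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    t_circ = 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))
    return t_circ * (1.0 - e**2) ** 3.5

# illustrative 7 Msun BH + 1.4 Msun NS binary
t0 = peters_merger_time(0.05, 0.500, 7.0, 1.4)
t1 = peters_merger_time(0.05, 0.508, 7.0, 1.4)   # eccentricity nudged by 0.008
print("merger time: %.1e yr" % (t0 / YR))
print("shift from de = 0.008: %.1f%%" % (100.0 * (t0 - t1) / t0))
```

For a moderately eccentric binary, a random-walk step of 0.008 in eccentricity shifts the merger time by only a few percent, in line with the modest effect quoted above.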
{\edit Alternatively, heating of the cluster by the BHs could cause the core to expand, which would result in longer lifetimes for any BH+MSP binaries that managed to form \citep{Mackey:2008,Heggie:2014}. {\redit Primordial binaries are an additional source of heating that we were unable to include in our models. We showed that efficient BH+MSP binary formation required substantial binary fractions, but we did not account for the impact that heating by these large binary populations could have on the structure of the cluster.} Clearly, more detailed models are needed to determine how these additional processes impact the BH+MSP populations in GCs.} Although the number of BH+MSP binaries in the Milky Way GC system is expected to be small, searching for these binaries is still warranted. We know that these binaries are in GCs, and our models make specific predictions about the types of GCs that are likely to harbor BH+MSP binaries. Given the rarity of BH+MSP binaries predicted by our models, the discovery of a BH+MSP binary in the Milky Way GC system would imply that the fraction of clusters that retain at least one stellar mass BH is large. Given the potential scientific payoff, continued deep radio observations of the cores of the $\sim 20$ Milky Way GCs with appropriate structural properties may be justified. The clusters most likely to host a BH+MSP binary include 47 Tuc, Terzan~5, NGC 1851, NGC 6266, {\edit NGC 6388} and NGC 6441. {\edit Intriguingly, previous theoretical and observational studies have suggested that NGC 6388 and NGC 6441 may harbor BHs \citep{Lanzoni:2007,Moody:2009}.} Finally, even though there might not be any BH+MSP binaries in the Milky Way GC system, such binaries could be detected in extra-galactic GCs with the Square Kilometer Array (SKA). SKA should be able to detect most pulsars within 10 Mpc \citep{Cordes:2007}, and our models predict there could be $\sim 100$ dynamically formed BH+MSP binaries within this volume.
\bigskip {\noindent {\bf ACKNOWLEDGMENTS}} \noindent The authors thank E.S. Phinney, Francois Foucart, and Lawrence E. Kidder for valuable discussion. We also thank the referee for an insightful review that helped us improve the paper. SS and DFC thank the Kavli Institute for Theoretical Physics for their hospitality. SS thanks the Aspen Center for Physics. DC acknowledges support from NSF grant AST-1205732. | 14 | 4 | 1404.7502 |
1404 | 1404.4737_arXiv.txt | The Higgs particle has been discovered at the CERN Large Hadron Collider experiment, and its results are almost consistent with the standard model (SM)~\cite{Chatrchyan:2013lba,CMS}. In addition, since the experiment has not obtained evidence of new physics so far (e.g., supersymmetry or extra dimension(s), etc.), one might consider a scenario such that the SM is valid up to a very high energy scale (GUT, string, or Planck scales). In fact, there have been several curious research results for this scenario. For example, Ref.~\cite{Froggatt:1995rt} showed that the multiple point criticality principle\footnote{The principle says that there are two degenerate vacua in the Higgs potential of the SM. One is at the Planck scale and another one is at the electroweak (EW) scale where we live.} predicts $135\pm9$ GeV Higgs and $173\pm5$ GeV top masses. Reference~\cite{Shaposhnikov:2009pv} also pointed out that a 126 GeV Higgs mass can be realized with a few GeV uncertainty in an asymptotic safety scenario of gravity. They clarified that the vanishing Higgs self-coupling and its $\beta$-function at the Planck scale, $\lambda(M_{\rm pl})=\beta_\lambda(M_{\rm pl})=0$, lead to the above values of the Higgs and top masses, which are close to the current experimental values. Reference~\cite{Holthausen:2011aa} investigated the realization of the Veltman condition, Str$M^2(M_{\rm pl})=0$, and the vanishing anomalous dimension of the Higgs mass, $\gamma_{m_h}(M_{\rm pl})=0$, at the Planck scale in addition to $\lambda(M_{\rm pl})=\beta_\lambda(M_{\rm pl})=0$. As a result, the authors found that the realization of the BCs predicts a 127--142 GeV Higgs mass.
It is interesting that the above BCs can lead to values of the Higgs and top masses close to the experimental ones, but it seems difficult to reproduce the experimental center values of the top and Higgs masses at the same time (see also Refs.~\cite{Degrassi:2012ry}-\cite{Masina:2013wja} for the recent analyses). Since the realization of BCs means that the Higgs potential is almost flat near the Planck scale, an application of the flat potential to inflation, the so-called Higgs inflation~\cite{Bezrukov:2007ep}-\cite{Hamada:2013mya}, is intriguing. In addition, recently, the tensor-to-scalar ratio, \begin{eqnarray} r=0.20_{-0.05}^{+0.07}, \end{eqnarray} was reported by the BICEP2 Collaboration~\cite{Ade:2014xna}, and several studies have investigated the ordinary Higgs inflation and models related to the Higgs field~\cite{Nakayama:2014koa}-\cite{Feng:2014naa}. In particular, the authors of Ref.~\cite{Hamada:2014iga} pointed out that the Higgs potential with the small top mass and a nonminimal coupling $\xi=7$ can make the ordinary Higgs inflation consistent with the BICEP2 result. In this paper, we will investigate the Higgs inflation with the singlets extension of the SM. The gauge singlet fields can play various roles in models/theories beyond the SM. For instance, a singlet real scalar field can rescue the SM from the vacuum instability, and it can be a candidate for dark matter (DM) with odd parity under an additional $Z_2$ symmetry (e.g., see~\cite{Silveira:1985rk}-\cite{Boucenna:2014uma}). In addition, a scalar can play an important role in EW and conformal symmetry breaking through a strongly coupled hidden sector (see~\cite{Hur:2007uz,Hur:2011sv,Holthausen:2013ota} for more recent discussion). It is also well known that the right-handed neutrinos can generate tiny active neutrino masses through a seesaw mechanism and the baryon asymmetry of the Universe (BAU) through leptogenesis.
The singlet scalar and the right-handed neutrino play crucial roles for realizing a suitable plateau of the Higgs potential with the center value of the top mass of Tevatron and LHC measurements~\cite{ATLAS:2014wva}. We will show that this Higgs inflation scenario predicts about a 1 TeV scalar DM and an $\mathcal{O}(10^{14})$ GeV right-handed neutrino by use of a 125.6 GeV Higgs mass, a 173.34 GeV top mass, and a nonminimal gravity coupling $\xi\simeq10.1$. We stress that the inflation model is consistent with the recent result of the tensor-to-scalar ratio $r=0.20_{-0.05}^{+0.07}$ by the BICEP2 Collaboration. | We have investigated the Higgs inflation scenario with singlet scalar dark matter and the right-handed neutrino. The singlet scalar and the right-handed neutrino play crucial roles for realizing the suitable plateau of the Higgs potential with the center value of the top mass of Tevatron and LHC measurements. We have shown that this Higgs inflation scenario predicts a 1029 GeV scalar DM and an $\mathcal{O}(10^{14})$ GeV right-handed neutrino by use of a 125.6 GeV Higgs mass, a 173.34 GeV top mass, and a nonminimal gravity coupling $\xi\simeq10.1$. This inflation model works well, and it is consistent with the recent result of the tensor-to-scalar ratio $r=0.20_{-0.05}^{+0.07}$ by the BICEP2 Collaboration. \subsection*{Acknowledgement} The authors thank Kunio Kaneta and Hiroyuki Ishida for fruitful discussions at an early stage of this work and pointing out typos in the manuscript. This work is partially supported by a Scientific Grant from the Ministry of Education and Science, No. 24540272. The work of R.T. is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists. | 14 | 4 | 1404.4737
1404 | 1404.1923_arXiv.txt | Magneto-rotational instability (MRI) and gravitational instability (GI) are the two principal routes to turbulent angular momentum transport in accretion disks. Protoplanetary disks may develop both. This paper aims to reinvigorate interest in the study of magnetized massive protoplanetary disks, starting from the basic issue of stability. The local linear stability of a self-gravitating, uniformly magnetized, differentially rotating, three-dimensional stratified disk subject to axisymmetric perturbations is calculated numerically. The formulation includes resistivity. It is found that the reduction in the disk thickness by self-gravity can decrease MRI growth rates; the MRI becomes global in the vertical direction, and MRI modes with small radial length scales are stabilized. The maximum vertical field strength that permits the MRI in a strongly self-gravitating polytropic disk with polytropic index $\Gamma=1$ is estimated to be $B_{z,\mathrm{max}} \simeq \csmid\Omega\sqrt{\mu_0/16\pi G}$, where $\csmid$ is the midplane sound speed and $\Omega$ is the angular velocity. In massive disks with layered resistivity, the MRI is not well-localized to regions where the Elsasser number exceeds unity. For MRI modes with radial length scales on the order of the disk thickness, self-gravity can enhance density perturbations, an effect that becomes significant in the presence of a strong toroidal field, and which depends on the symmetry of the underlying MRI mode. In gravitationally unstable disks where GI and MRI growth rates are comparable, the character of unstable modes can transition smoothly between MRI and GI. Implications for non-linear simulations are discussed briefly. | 14 | 4 | 1404.1923
1404 | 1404.6442_arXiv.txt | One of the consequences of entering the era of precision cosmology is the widespread adoption of photometric redshift probability density functions (PDFs). Both current and future photometric surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analyzing all of these PDFs will be a non-trivial challenge, made even more severe if a survey plans to compute and store multiple different PDFs. In this paper we propose the use of a sparse basis representation to fully represent individual photo-$z$ PDFs. By using an Orthogonal Matching Pursuit algorithm and a combination of Gaussian and Voigt basis functions, we demonstrate how our approach is superior to a multi-Gaussian fitting, as we require approximately half of the parameters for the same fitting accuracy, with the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function, and we can achieve better accuracy by increasing the number of bases. By using data from the CFHTLenS, we demonstrate that only ten to twenty points per galaxy are sufficient to reconstruct both the individual PDFs and the ensemble redshift distribution, $N(z)$, to an accuracy of 99.9\% when compared to the one built using the original PDFs computed with a resolution of $\delta z = 0.01$, reducing the required storage of two hundred original values by a factor of ten to twenty. Finally, we demonstrate how this basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy. | Over the last two decades, photometric redshift (hereafter \pzns) estimation has become a crucial requirement for modern, multi-band photometric galaxy surveys. The need for accurate and robust \pzsns, and in particular for their probability density functions (\pz PDFs), is increasing at an even faster rate as we enter the era of precision cosmology.
Large photometric surveys like the Dark Energy Survey (DES\footnote{http://www.darkenergysurvey.org/}) and the Large Synoptic Survey Telescope (LSST\footnote{http://www.lsst.org/lsst/}) will probe hundreds of millions to billions of galaxies, which are often too faint to be spectroscopically observed. Even today, the Sloan Digital Sky Survey (SDSS;~\citealt{York2000}) has obtained hundreds of millions of photometric observations of extragalactic sources covering approximately one-third of the sky. This same survey has also, by using a considerably larger amount of time with the same telescope, obtained a galaxy spectroscopic sample that is about one hundred times smaller. While the spectroscopic sample is a higher precision sample~\citep{Ahn2013}, spectroscopy is both considerably more difficult and more time consuming than photometry. For a number of cosmological analyses this higher precision is not strictly required, thus photometric surveys have become a cost-effective method for enabling new cosmological constraints. As these photometric surveys have grown in popularity, the estimation of \pzs by different techniques has also grown significantly. These techniques can be broadly separated into machine learning algorithms~\citep[\eg][]{Collister2004, CarrascoKind2013} or Spectral Energy Distribution (SED) fitting methods~\citep[see \eg][]{Benitez2000,Ilbert2006}. These two classes of techniques have been shown to have similar performance characteristics when high quality training data are available~\citep[see, \eg][for a review on some current \pz techniques]{Hildebrandt2010,Abdalla2011,Sanchez2014}. More recently, particular attention has been focused on techniques that compute a full \pz PDF for each galaxy.
This is because a \pz PDF contains more information than a single \pz estimate, and the use of \pz PDFs has been shown to improve the accuracy of cosmological measurements~\citep[\eg][]{Mandelbaum2008,Myers2009,Sheldon2012,Carnero2012,Jee2013} while not introducing any biases~\citep[\eg][]{Bordoloi2010, Abrahamse2011}. One fact that all photometric surveys have in common is the need to efficiently handle an overwhelming quantity of imaging data. The reduction, analysis and storage of this data is a difficult problem; even with the growth of computational resources, efficiently handling these data remains a pressing issue. In particular, \pz PDFs are currently computed on the summary catalogs that are produced by uniformly processing imaging data. But storing \pz PDFs for billions of sources is a challenge in itself, which is further complicated if multiple, different \pz techniques are desired or if different \pz PDFs are generated by using different galaxy templates. This is a problem both for those managing the data archives and for the general community who desire to apply these \pz PDF estimates to cosmological analyses. Thus the time is ripe to address this issue. In this paper, therefore, we explore different methods that allow us to manipulate and use \pz PDFs in a more efficient manner by representing them as compactly as possible. Specifically, we introduce the use of a sparse functional basis to represent a full \pz PDF. This approach minimizes the data required to represent the \pz PDF, while maximizing the accuracy of the PDF. This basis representation not only minimizes the storage requirements, but also allows us to manipulate PDFs in a more computationally efficient manner, thereby increasing the computational efficiency of resulting analyses. With this approach, each galaxy \pz PDF is decomposed into an overdetermined basis system by minimizing the number of basis functions retained.
We analyze how this approach compares with other representation techniques, in particular with a multi-Gaussian approach; and we demonstrate that, by using our proposed functional form, the integration and manipulation of \pz PDFs is both easier and faster than when using either the original PDF or any other comparable technique. The rest of this paper is organized in the following manner. In Section 2 we first present the data we will use to test our proposed approach, after which we will introduce the \pz methods that we will use to generate the \pz PDFs that will be used in the rest of the paper. Section 3 describes the different algorithms used to represent the \pz PDFs. In Section 4 we present the main results, compare the performance of the different representation methods, and discuss the storage requirements of these \pz PDFs. Next, we apply our basis representation to the computation of $N(z)$ in Section 5, and discuss how this basis framework can reduce computational costs. We conclude in Section 6 with a summary of our main findings and a final discussion of the applications of our proposed approach. | In this paper, we have presented different techniques to represent and efficiently store \pz PDFs, which have been shown to convey significantly more information than a single \pz estimate. As we enter the era of precision cosmology, the growth of large, dense photometric surveys has created an unmet need to quantify and manage these probabilistic values for hundreds of millions to billions of galaxies. Specifically, we have introduced the use of a sparse basis representation that uses a dictionary of Gaussian functions and Voigt profiles, which have extended wings, to accurately and efficiently represent each \pz PDF. 
We minimize the number of required bases while maintaining a high accuracy by using an Orthogonal Matching Pursuit algorithm, which provides a unique set of bases for each \pz PDF while minimizing the residual between the original and final \pz PDF. This algorithm is publicly available\footnote{http://lcdm.astro.illinois.edu/code/pdfz.html}. We use photometric data from the CFHTLenS survey to compute \pz PDFs by using our \tpz code, producing PDFs with two hundred points and a redshift resolution of $\delta z = 0.011$. By using these PDFs, we demonstrate that our proposed sparse basis representation reconstructs a more accurate PDF than other techniques, including a multi-Gaussian fitting approach with a flexible number of parameters based on the number of peaks in each PDF. If we use the exact same number of parameters with our sparse representation as used by the multi-Gaussian fitting, we find that the sparse basis representation results are superior, with the additional benefit that each basis or parameter can be stored using a single integer. We also showed that, with a fixed number of bases, we could achieve a highly accurate PDF that also has a large compression ratio. As a specific example, we found that by using only ten (twenty) values per \pz PDF, we could reconstruct a \pz PDF at over a 99.5\% accuracy with a compression ratio of twenty (ten), providing a significant storage reduction without a loss of information. We quantified the number of bases required within the sparse representation dictionary, specifically finding that $(\Delta z/ \delta z)^2$ bases are sufficient to represent the galaxy \pz PDFs in our CFHTLenS test sample, where $\Delta z$ is the overall redshift range and $\delta z$ is the photometric redshift PDF resolution.
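The pursuit step itself is simple to state: greedily pick the dictionary atom most correlated with the current residual, refit the selected set by least squares, and repeat. The self-contained Python sketch below is a simplified stand-in for the public code (the Gaussian-only dictionary and the synthetic bimodal PDF are our own illustrative choices, not the paper's setup).

```python
import numpy as np

def gaussian_dictionary(z, centers, widths):
    """Build a matrix of unit-L2-norm Gaussian atoms on the grid z."""
    atoms = [np.exp(-0.5 * ((z - c) / w) ** 2)
             for w in widths for c in centers]
    D = np.array(atoms).T                       # shape (n_grid, n_atoms)
    return D / np.linalg.norm(D, axis=0)

def omp(D, y, n_bases):
    """Plain Orthogonal Matching Pursuit: greedily select the atom most
    correlated with the residual, then least-squares refit on the set."""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_bases):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                    # never re-pick an atom
        support.append(int(np.argmax(corr)))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs, support

# toy bimodal photo-z PDF on a 201-point grid (dz = 0.01)
z = np.linspace(0.0, 2.0, 201)
dz = z[1] - z[0]
pdf = 0.7 * np.exp(-0.5 * ((z - 0.5) / 0.05) ** 2) \
    + 0.3 * np.exp(-0.5 * ((z - 1.1) / 0.08) ** 2)
pdf /= pdf.sum() * dz

D = gaussian_dictionary(z, centers=z[::2], widths=[0.03, 0.05, 0.08])
coeffs, support = omp(D, pdf, n_bases=10)
recon = D @ coeffs
accuracy = 1.0 - np.abs(recon - pdf).sum() / pdf.sum()
print("bases: %d  accuracy: %.4f" % (len(support), accuracy))
```

A handful of atoms already reconstructs this toy PDF to well above 99\% accuracy; in a production dictionary one would add Voigt profiles for the extended wings.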
If the number of points in the original PDF is approximately $200$--$250$, we can use a dictionary with fewer than $2^{16}$ bases, which results in an accurate PDF reconstruction while only requiring a single sixteen-bit integer to store the basis index in the dictionary. Furthermore, since the bases themselves are normalized, all basis coefficients are less than unity by definition; and, since the \pz PDFs are also normalized, we only need to retain the relative amplitudes of each basis function. Therefore, we can independently rescale the coefficients for each galaxy to their maximum value and subsequently represent them by using a discretized range containing $2^{15}$ values. This will provide a resolution less than $10^{-5}$, and since we set the maximum value from the most significant basis function, it is always correctly represented. As a result, we can also store the coefficients (sign included) in a separate sixteen-bit integer without losing information. Taken together, we can completely encode a single basis function, both dictionary index and coefficient, in a single four-byte integer, simplifying the data management and significantly reducing the data storage and reconstruction computational requirements. Of course the results we have presented will depend on the quality of the \pz PDFs to which they are applied, which themselves depend on the details of the \pz algorithm that generated them. As would naively be expected, single peaked \pz PDFs are most accurately reconstructed by using either a multi-Gaussian fitting or a sparse basis representation, where only five points per \pz PDF is sufficient to achieve a 99\% accurate reconstruction. Furthermore, we also demonstrated that, as a simple cosmological application of our \pz PDF reconstruction, we could accurately recover the underlying $N(z)$ distribution. In particular, we recovered the $N(z)$ of our CFHTLenS test sample to an accuracy of 99.87\% by using only ten points per \pz PDF. 
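The encoding just described (a sixteen-bit dictionary index plus a sign bit and fifteen bits for the rescaled coefficient magnitude) can be sketched with plain bit operations. The exact bit layout below is our assumption for illustration, not necessarily the one used by the public code.

```python
def pack(index, coeff):
    """Pack one basis into a 4-byte word: 16 bits of dictionary index,
    1 sign bit, 15 bits for |coeff| discretized on [0, 1]. Coefficients
    are assumed pre-scaled so the largest one per galaxy equals 1.0."""
    assert 0 <= index < 2 ** 16 and abs(coeff) <= 1.0
    sign = 1 if coeff < 0 else 0
    mag = int(round(abs(coeff) * (2 ** 15 - 1)))
    return (index << 16) | (sign << 15) | mag   # fits in 32 bits

def unpack(word):
    """Recover (index, coefficient) from a packed 4-byte word."""
    index = word >> 16
    sign = -1.0 if (word >> 15) & 1 else 1.0
    mag = (word & 0x7FFF) / (2 ** 15 - 1)
    return index, sign * mag

word = pack(31543, -0.4375)
idx, c = unpack(word)
print(idx, c)   # recovered to the ~3e-5 quantization resolution
```

With this layout a twenty-basis PDF occupies 80 bytes, versus 800--1000 bytes for the original 200--250 float values.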
Given their compact nature and the fact that they are predetermined, we showed that we could obtain the sample $N(z)$ by integrating the bases over the sample redshift range and later multiplying by the basis coefficients, which can also be precomputed, thereby significantly reducing the number of required integrations. This same principle can be applied to other linear combinations of \pz PDFs or to more complex analyses if they can be expressed in terms of the underlying bases. This latter topic is the subject of future work. Overall, these results are very promising, as current and future photometric surveys will produce up to tens of billions of \pz PDFs. Our proposed approach will either allow a reduction in the overall storage requirements or increase the number of \pz PDFs that can be persistently maintained for each galaxy without increasing the required amount of storage. As a result, this new approach will enable science that would have otherwise been difficult or impossible to accomplish. | 14 | 4 | 1404.6442
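The $N(z)$ shortcut described above reduces to a single matrix product once the per-basis bin integrals are tabulated. The sketch below (a toy setup of our own: a small Gaussian dictionary, three mock galaxies, three redshift bins) shows the idea.

```python
import numpy as np

z = np.linspace(0.0, 2.0, 201)
dz = z[1] - z[0]

# hypothetical dictionary: Gaussian atoms normalized to unit integral
centers, width = z[::5], 0.05
D = np.exp(-0.5 * ((z[:, None] - centers[None, :]) / width) ** 2)
D /= D.sum(axis=0) * dz

# one-off precomputation: integral of every atom over each redshift bin
edges = [0.0, 0.5, 1.0, 2.0]
masks = [(z >= lo) & ((z < hi) if hi < edges[-1] else (z <= hi))
         for lo, hi in zip(edges[:-1], edges[1:])]
atom_bin = np.array([D[m].sum(axis=0) * dz for m in masks])  # (n_bins, n_atoms)

# sparse coefficients for a handful of mock galaxies
rng = np.random.default_rng(1)
coeffs = np.zeros((3, D.shape[1]))
for g in range(3):
    picks = rng.choice(D.shape[1], size=5, replace=False)
    coeffs[g, picks] = rng.random(5)
coeffs /= coeffs.sum(axis=1, keepdims=True)   # unit-integral PDFs

# binned contributions to N(z): one matrix product, no per-galaxy integrals
n_of_z = coeffs @ atom_bin.T                  # shape (n_gal, n_bins)
print(n_of_z.sum(axis=1))                     # each row sums to 1
```

Because the bin integrals are computed once per dictionary rather than once per galaxy, the per-galaxy cost drops from a numerical integration to a sparse dot product.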
1404 | 1404.1054_arXiv.txt | We constrain the parameters of a self-interacting massive dark matter scalar particle in a condensate using the kinematics of the eight brightest dwarf spheroidal satellites of the Milky Way. For the case of a repulsive self-interaction the condensate develops a mass density profile with a characteristic scale radius that is closely related to the fundamental parameters of the theory. We find that the velocity dispersion of dwarf spheroidal galaxies suggests a scale radius of the order of $1\,\textrm{kpc}$, in tension with previous results found using the rotation curves of low-surface-brightness and dwarf galaxies. The new value is, however, marginally favored by the constraints coming from the number of relativistic species at Big-Bang nucleosynthesis. We discuss the implications of our findings for the particle dark matter model and argue that while a single classical coherent state can correctly describe the dark matter in dwarf spheroidal galaxies, it cannot play, in general, a relevant role for the description of dark matter in larger objects. | The nature of dark matter (DM) remains an open question. At the fundamental level, DM is expected to be described in terms of a quantum field theory. At the effective level, however, a description in terms of classical particles is usually considered, see e.g. the large literature on N-body simulations~\cite{Springel:2005nw}. Most current efforts are focused on detecting a weakly interacting massive particle (WIMP), both by direct~\cite{direct} and indirect~\cite{indirect} searches. In the case of WIMPs, their present-day abundance is fixed at the time when DM decoupled from the thermal plasma. If the interaction of DM lies at the weak scale, with a mass of the particle in the range of $100\,\rm GeV$ (as expected from the supersymmetric extensions to the standard model), the energy density of these particles coincides ``miraculously'' with the observed one~\cite{WIMP}.
However, alternatives exist and deserve careful scrutiny, either to constrain the associated parameter space, and thus phenomenology, or to dismiss them as viable candidates. One such proposal considers that the abundance of DM is fixed by an asymmetry between the number densities of particles and antiparticles~\cite{asymmetricDM}, similarly to the baryons and leptons in the universe. If the particle interactions in the early universe are strong enough to guarantee thermal equilibrium, and DM is further composed of a spin-0 quantum field, the zero mode could have developed a Bose-Einstein condensate where a description in terms of a classical field would be warranted. Classical coherent states can also emerge nonthermally, no asymmetry required, by means of the vacuum misalignment mechanism~\cite{misalignment}. Similar ideas have been considered previously in the literature under many different names, such as scalar field~\cite{Suarez:2013iw}, BEC~\cite{Boehmer:2007um}, Q-ball~\cite{Qball}, fuzzy~\cite{Hu:2000ke}, boson~\cite{Sin-Bose}, or even fluid~\cite{Peebles:2000yy}, DM; see also Refs.~\cite{BECgeneral, Sikivie:2009qn, Arbey:2002, Arbey:2003sj, Lee:1995af, Harko.2011, Robles.2012, Dwornik:2013fra, Hua, Lora, Lora2} for details. A natural realization of this scenario can be provided by the axion~\cite{Sikivie:2009qn}. Originally introduced to solve the charge-parity violation problem in QCD~\cite{Peccei:1977hh}, the axion was soon recognized as a promising candidate for DM. In this case the size of the condensate is so small~\cite{Barranco:2010ib} that, most probably, DM halos made of axion-balls could not be distinguished from the ones simulated with N-body codes by means of galactic dynamics and/or lensing observations~\cite{GonzalezMorales:2012ab}. 
Another possibility is that with an appropriate choice of the parameters in the model (see the next two paragraphs for details), it could be possible to develop single structures with the size of a galaxy~\cite{Boehmer:2007um, Sin-Bose, Lee:1995af, Arbey:2002, Arbey:2003sj, Harko.2011, Robles.2012, Dwornik:2013fra, Hua, Lora}. For practical purposes we will restrict our attention to the case of a massive, self-interacting, complex scalar field with an internal $U(1)$ global symmetry satisfying the Klein-Gordon equation \begin{equation}\label{eq:KG} \Box \phi-(mc/\hbar)^2\phi-2\lambda|\phi|^2\phi=0\,. \end{equation} Here the box denotes the d'Alembertian operator in four dimensions, with $m$ the mass of the scalar particle and $\lambda$ a dimensionless self-interaction term. As long as the interaction between bosons is repulsive, $\lambda >0$, a universal mass density profile for the static, spherically symmetric, regular, asymptotically flat, self-gravitating equilibrium scalar field configurations emerges in the weak field, Thomas-Fermi regime~\cite{Colpi:1986ye, Lee:1995af, Arbey:2003sj, Boehmer:2007um, Harko.2011} of the Einstein-Klein-Gordon system with the following analytic form: \begin{equation} \label{density} \rho(r)= \left\lbrace \begin{array}{cl} \rho_c\, \displaystyle{\frac{\sin(\pi r/r_{\textrm{max}})}{(\pi r/r_{\textrm{max}})}} \quad & \textrm{for} \quad r<r_{\textrm{max}} \; \\ 0 \quad & \textrm{for} \quad r\ge r_{\textrm{max}} \; \end{array} \right. \; . 
\end{equation} In the effective description above there are two free parameters: first, the size of the gravitating objects, \begin{equation}\label{eq:rmax} r_{\textrm{max}}\equiv\sqrt{\frac{\pi^2 \Lambda}{2}}\left(\frac{\hbar}{m c}\right)= 48.93 \left(\frac{\lambda^{1/4}}{m[\textrm{eV}/c^2]}\right)^2 \,\textrm{kpc}\,, \end{equation} a parameter that, as is manifest from the equation, depends directly on the bare constants of the theory in the combination $m/\lambda^{1/4}$; second, the value of the mass density at the center of the configuration, $\rho_c\equiv\pi m Q/(4r_{\textrm{max}}^3)$, a quantity that in principle can vary from galaxy to galaxy. Here $Q$ is the total charge in the system, which in this case coincides with the total number of particles, and for convenience we have defined the dimensionless constant $\Lambda\equiv \lambda m_{\textrm{Planck}}^2/4\pi m^2$. The mass density profile in Eq.~(\ref{density}) describes only the dilute configurations of a scalar field in a regime of weak gravity; in terms of particle numbers that translates into (see e.g. Eq.~(25) in Ref.~\cite{Arbey:2002}) \begin{equation}\label{eq:inequalities} \Lambda^{-1/2}\ll \left(\frac{m}{m_{\textrm{Planck}}}\right)^2Q\ll \Lambda^{1/2}\,. \end{equation} The inequalities in Eq.~(\ref{eq:inequalities}) demand $\Lambda\gg1$; that is guaranteed if the combination $m/\lambda^{1/4}$ for the mass and self-interaction terms of the scalar boson is well below the Planck scale. It is precisely the very large value expected for the constant $\Lambda$ that makes it possible to stretch the Compton wavelength of the scalar particle, $\hbar /m c$, up to galactic scales, see Eq.~(\ref{eq:rmax}) above. Only configurations with masses $M=mQ$ in the range from $M\gg \lambda^{-1/2}m_{\textrm{Planck}}$ up to $M\ll \lambda^{1/2}m^3_{\textrm{Planck}}/m^2$ can be described by the expression in Eq.~(\ref{density}).
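As a quick numerical check of Eq.~(\ref{eq:rmax}), the relation between the scale radius and the particle-physics combination $m/\lambda^{1/4}$ can be evaluated and inverted directly (a sketch of our own; the function names are ours):

```python
# Eq. (rmax): r_max = 48.93 * (lambda^{1/4} / m[eV/c^2])^2 kpc

def r_max_kpc(m_eV):
    """Scale radius in kpc for a given m/lambda^{1/4} in eV/c^2."""
    return 48.93 / m_eV**2

def m_over_lambda14_eV(r_max):
    """Inverse relation: m/lambda^{1/4} in eV/c^2 for a given r_max in kpc."""
    return (48.93 / r_max) ** 0.5

# m/lambda^{1/4} ~ 7 eV/c^2 corresponds to the ~1 kpc scale radius
# favored by the dSphs,
print(r_max_kpc(7.0))            # ~1.0 kpc
# while the r_max ~ 5.5-7 kpc mean values from LSB/dwarf rotation
# curves require m/lambda^{1/4} ~ 2.6-3.0 eV/c^2.
print(m_over_lambda14_eV(6.0))   # ~2.9 eV/c^2
```

This makes explicit that the tension discussed below is roughly a factor of 2.5 in $m/\lambda^{1/4}$, i.e. a factor of $\sim 6$ in $r_{\textrm{max}}$.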
Then, in order to have a halo model for objects of at least $M\sim 10^{8}M_{\odot}$ ($M\sim 10^{12}M_{\odot}$) we need a scalar DM particle with $m/\lambda^{1/4}< 70\,\textrm{keV}/c^2$ ($m/\lambda^{1/4}< 0.7\,\textrm{keV}/c^2$). \begin{figure*}[] \includegraphics[scale=0.7]{Todas.pdf} \caption{Empirical, projected velocity dispersion profiles for the classical eight dSph satellites of the Milky Way as reported in Refs.~\cite{Walker:2009zp, Walker:2007ju, Mateo2008}. Solid lines denote the best fits for the halo model in Eq.~(\ref{density}) when $r_{\textrm{max}}=1\,\textrm{kpc}$ (red), $r_{\textrm{max}}=2\,\textrm{kpc}$ (black), and $r_{\textrm{max}}=6\,\textrm{kpc}$ (blue). } \label{fig:fig1} \end{figure*} The density profile in Eq.~(\ref{density}) was derived under the assumption that all the DM particles are in a condensate, while in a more realistic situation probably only a fraction of them would be represented by the coherent classical state. (That indeed seems necessary in order to explain the flattened rotation curves in large spirals, where observations suggest $\rho\sim 1/r^2$ at large radii.) Unfortunately, there is not yet a satisfactory description that includes this effect (see Ref.~\cite{Bilic:2000ef} for a proposal in this direction). Nevertheless, this halo model can still be deemed appropriate to test the self-interacting scalar field DM scenario if we carefully choose observations that are sensitive only to the mass contained within a radius smaller than or comparable to $r_{\rm max}$, where the condensate is expected to dominate the distribution of DM. One should then look at the profile in Eq.~(\ref{density}) not necessarily as a DM halo model for the whole galaxy, but as a model for the core of the self-gravitating object only. The dwarf spheroidal (dSph) satellites of the Milky Way are probably the most promising objects to test DM models as far as structure formation is concerned.
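As a consistency check (our own sketch, in dimensionless units with $\rho_c = r_{\textrm{max}} = 1$), numerically integrating the profile of Eq.~(\ref{density}) recovers the total mass $M = mQ = 4\rho_c r_{\textrm{max}}^3/\pi$ implied by the definition of $\rho_c$ given above:

```python
import math

def enclosed_mass(r, rho_c=1.0, r_max=1.0, n=20000):
    """M(<r) for rho(r) = rho_c sin(pi r/r_max)/(pi r/r_max), by the
    midpoint rule; the profile is truncated at r_max."""
    r = min(r, r_max)
    dr = r / n
    total = 0.0
    for i in range(n):
        ri = (i + 0.5) * dr
        x = math.pi * ri / r_max
        rho = rho_c * math.sin(x) / x
        total += 4.0 * math.pi * ri**2 * rho * dr
    return total

M_numeric = enclosed_mass(1.0)
M_analytic = 4.0 / math.pi   # 4 rho_c r_max^3 / pi, i.e. M = mQ
print(M_numeric, M_analytic)
```

The same routine gives the enclosed mass $M(<r)$ entering any velocity-dispersion fit to the dSph data.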
These old, pressure-supported systems are the smallest and least luminous known galaxies, and there is strong evidence that they are DM-dominated at all radii, with mass-to-light ratios as large as \cite{Mateo:1998wg} \begin{equation} M/L_V\sim 10^{1-2}[M/L_V]_{\odot}\,. \end{equation} The dynamics of these objects, for instance, could allow us to determine whether DM halos are cored or cuspy: since the concentration of baryons in these galaxies is so low, effects such as the adiabatic contraction and/or supernova feedback cannot alter significantly the shape of the original halo. Current data do not yet conclusively discriminate between cuspy and cored profiles~\cite{Walker:2009zp, Salucci:2011ee, jardel, Valenzuela:2005dh}. In this paper we use the kinematics of the eight classical dSph satellites of the Milky Way to determine whether a self-interacting scalar particle in a condensate is able to reproduce the galaxies' internal dynamics and, if so, under what conditions on the theory input parameters. In this respect, our study extends previous analyses carried out for the generalized Hernquist~\cite{Walker:2009zp} and Burkert~\cite{Salucci:2011ee} profiles to the DM halo model in Eq.~(\ref{density}). It is important to note, however, that the purpose of this paper is not to compare the profile in Eq.~(\ref{density}) with other halo models in the literature, but, rather, to use dSph dynamics to test the self-consistency of the scalar field dark matter scenario. We find that the eight classical dSphs indicate a scale radius of the order \begin{equation} r_{\textrm{max}}\sim 1\, \textrm{kpc}\,,\quad {\rm i.e.}\quad m/\lambda^{1/4}\sim 7\,\textrm{eV}/c^2\,, \end{equation} a value in tension with previous results found using the rotation curves of low-surface-brightness (LSB) and dwarf galaxies~\cite{Boehmer:2007um, Arbey:2003sj, Harko.2011, Robles.2012}; see also Refs.~\cite{Dwornik:2013fra, Hua} for bigger galaxies. 
Our findings strongly disfavor a {\it self-interacting condensate DM halo model} or, if one hypothesizes that the condensate describes only the core of galaxies, they indicate that the relevance of the coherent state to describe DM in larger galaxies is, at best, negligible.

\label{sec:discussion} The viability of the halo model in Eq.~(\ref{density}) has been studied in several papers mainly employing rotation curves of galaxies from different surveys, out of which only the most DM-dominated objects have been selected~\cite{Boehmer:2007um, Arbey:2003sj, Harko.2011, Robles.2012}; see also Refs.~\cite{Lora, Lora2} for a different approach. These studies all point to a scale radius that varies from galaxy to galaxy and ranges from $3\,\textrm{kpc}$ up to $15\,\textrm{kpc}$ (light red band in Figures~\ref{fig:fig3} and~\ref{fig:fig4}), with only isolated instances requiring values outside this range, e.g. M81dw, where $r_{\textrm{max}}\sim 1\,\textrm{kpc}$~\cite{Boehmer:2007um}, and UGC5005, where $r_{\textrm{max}}\sim 24.65\,\textrm{kpc}$~\cite{Harko.2011}. However, these papers also report mean values in the narrow range $r_{\textrm{max}}\sim 5.5-7\,\textrm{kpc}$~\cite{Arbey:2003sj, Harko.2011} (red band in Figures~\ref{fig:fig3} and~\ref{fig:fig4}), suggesting the existence of a self-interacting scalar particle with $m/\lambda^{1/4}\sim 2.6-2.9\,\textrm{eV}/c^2$. Such findings have led to the conclusion that the halo model in Eq.~(\ref{density}) can accurately describe the dynamics of DM-dominated galaxies. The case of Milky Way-like systems, or giant ellipticals, remains to be studied in detail mainly because the dynamical interaction between the condensate and baryons is not well understood there (see however Ref.~\cite{Dwornik:2013fra}, where a set of three high-surface-brightness spirals has been recently considered, e.g.
ESO215G39, where $r_{\textrm{max}}\sim 50\,\textrm{kpc}$, and Ref.~\cite{Hua}, where values of the scale radius in the range $r_{\textrm{max}}=5.6 - 98.2\,\textrm{kpc}$ are reported for a subsample of galaxies in the THINGS survey). Note that, contrary to other proposals in the literature, the halo model in Eq.~(\ref{density}) is not expected to describe galaxy clusters. The values reported in previous studies are strongly disfavored by our findings in the present analysis, where we show that the dynamics of the smallest and least luminous galaxies is clearly in conflict, along several lines, with such large scale radii. One could argue that the profile in Eq.~(\ref{density}) is not appropriate to describe the galaxies in Refs.~\cite{Boehmer:2007um, Arbey:2003sj, Harko.2011, Robles.2012, Dwornik:2013fra, Hua} (where in some cases the luminous matter extends up to $10\,\textrm{kpc}$), and suggest that a more elaborate halo model where the condensate represents only the core of the galaxy would be necessary in order to understand the dynamics of these systems. However, it is important to note that a condensate with a scale radius of the order of $1\,\textrm{kpc}$ does not provide the core expected for those galaxies used in previous analyses. Interestingly, the new value of $r_{\textrm{max}}\sim 1\,\textrm{kpc}$ is favored by cosmological observations. A homogeneous and isotropic distribution of matter satisfying Eq.~(\ref{eq:KG}) has two different regimes depending on the actual value of the charge density, $q=Q/a^3$; see e.g. Ref.~\cite{Arbey:2002}. Here $a$ is the scale factor and $Q$ the number of particles per unit volume today, $a=1$.
When the charge density is high, $q\gg m^3c^3/(\lambda \hbar^3)$, the energy density and pressure of the scalar field dilute with the cosmological expansion like dark radiation, $\rho=3\lambda^{1/3} Q^{4/3} c\hbar/(4a^4)$ and $p=(1/3)\rho$, whereas at low densities, when $q\ll m^3c^3/(\lambda\hbar^3)$, like cold DM, $\rho= Qmc^2/a^3$ and $p=0$. From the condition that the transition from dark radiation to DM, fixed at $q\sim m^3c^3/(\lambda\hbar^3)$, has to occur before the time of equality, when $\rho\sim 5\times 10^{13}\,\textrm{eV cm}^{-3}$, we obtain $r_{\rm max}< 80\, \rm kpc$, i.e. $m/\lambda^{1/4}> 0.8\,\textrm{eV/c}^2$. However, the number of extra relativistic species at Big-Bang nucleosynthesis places tighter constraints on the parameters of the theory. For a scalar field, this quantity, defined as the number of extra relativistic neutrino degrees of freedom at Big-Bang nucleosynthesis, takes the form \begin{equation} \Delta N_{\rm eff} = 57.83 \times \left(\Omega_{\textrm{dm}}h^2\right)^{4/3}\left(\frac{\lambda^{1/4}}{m[\textrm{eV}/c^2]}\right)^{4/3}\,, \end{equation} see e.g. Eq.~(67) in Ref.~\cite{Arbey:2003sj}. (Note that there is an extra factor of 1/2 in our expression for $\Delta N_{\rm eff}$ with respect to that in Ref.~\cite{Arbey:2003sj}; this might come from the two helicities of the neutrino.) Using the latest cosmological data provided by PLANCK+WP+highL~\cite{Ade:2013zuv}, $\Omega_{\textrm{dm}}h^2=0.1142\pm 0.0035$ at 1$\sigma$ C.L., and PLANCK+WP+highL+(D/H)$_{\rm p}$~\cite{Cooke:2013cba}, $\Delta N_{\rm eff}=0.23\pm 0.28$, also at 1$\sigma$ C.L., we obtain \begin{equation} r_{\textrm{max}}\lesssim 3\, \textrm{kpc}\,,\quad {\rm i.e.}\quad m/\lambda^{1/4}\gtrsim 4\,\textrm{eV}/c^2\,. \end{equation} Note that this result marginally excludes the previous values of $r_{\rm max}\gtrsim 5.5\,\rm kpc$ arising from the study of the rotation curves of LSB and dwarf galaxies~\cite{Boehmer:2007um, Arbey:2003sj, Harko.2011, Robles.2012}.
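The bound quoted above can be reproduced by inverting the expression for $\Delta N_{\rm eff}$ with the quoted central values (a numerical sketch of our own, taking the 1$\sigma$ upper limit $\Delta N_{\rm eff} < 0.23 + 0.28 = 0.51$):

```python
omega_dm_h2 = 0.1142       # PLANCK+WP+highL central value
dNeff_max = 0.23 + 0.28    # 1-sigma upper limit on Delta N_eff

# Delta N_eff = 57.83 (Omega_dm h^2)^{4/3} (lambda^{1/4}/m[eV])^{4/3}
# => lambda^{1/4}/m <= [dNeff_max / (57.83 (Omega_dm h^2)^{4/3})]^{3/4}
x_max = (dNeff_max / (57.83 * omega_dm_h2 ** (4.0 / 3.0))) ** 0.75

m_min = 1.0 / x_max              # lower bound on m/lambda^{1/4} [eV/c^2]
r_max_bound = 48.93 * x_max**2   # corresponding upper bound on r_max [kpc]

print(m_min)        # ~4 eV/c^2
print(r_max_bound)  # ~3 kpc
```

The same one-line inversion with the equality-epoch condition reproduces the weaker bound $r_{\rm max} < 80\,\rm kpc$.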
The analysis in this paper applies only to the case of a self-interacting scalar particle with $\lambda> 0$; however, similar results are expected when $\lambda\le 0$. To the best of our knowledge, there is no analytic expression for the mass density profile of the halo model when the self-interaction term is less than or equal to zero, but e.g. in the case of $\lambda=0$, the characteristic size and mass of the equilibrium configurations are found to be \cite{Ruffini} of order $R\sim Q^{-1/2}(m_{\textrm{Planck}}/m)(\hbar/mc)$, and $M\sim Qm$, respectively. One can fix the number of particles, $Q$, and mass parameter, $m$, in order to describe the dynamics of dSphs, implying $R\sim 1\, \textrm{kpc}$ and $M\sim 10^8\,M_{\odot}$, see for instance Ref.~\cite{Lora} for the case of Ursa Minor, but then configurations heavier than $10^8\,M_{\odot}$ would be smaller than $1\, \textrm{kpc}$, whereas those larger than $1\, \textrm{kpc}$ would result in halos lighter than $10^8\,M_{\odot}$. In summary, if we dismiss previous constraints, a scenario where the DM galactic halos are described by a single condensate is consistent with the data from the smallest and most DM-dominated nearby galactic systems; nonetheless, these single objects alone will not be consistent with the description of bigger galaxies.
arXiv:1404.0379

In this paper, we consider how gas damping affects the dynamical evolution of gas-embedded star clusters. Using a simple three-component (i.e. one gas and two stellar components) model, we compare the rates of mass segregation due to two-body relaxation, accretion from the interstellar medium, and gas dynamical friction in both the supersonic and subsonic regimes. Using observational data in the literature, we apply our analytic predictions to two different astrophysical environments, namely galactic nuclei and young open star clusters. Our analytic results are then tested using numerical simulations performed with the NBSymple code, modified by an additional deceleration term to model the damping effects of the gas. The results of our simulations are in reasonable agreement with our analytic predictions, and demonstrate that gas damping can significantly accelerate the rate of mass segregation. A stable state of approximate energy equilibrium cannot be achieved in our model if gas damping is present, even if Spitzer's Criterion is satisfied. This instability drives the continued dynamical decoupling and subsequent ejection (and/or collisions) of the more massive population. Unlike two-body relaxation, gas damping causes overall cluster contraction, reducing both the core and half-mass radii. If the cluster is mass segregated (and/or the gas density is highest at the cluster centre), the latter contracts faster than the former, accelerating the rate of core collapse.

\label{intro} The advances made in telescope resolution over the last few decades have revealed the properties of a number of interesting astrophysical environments that contain both gas and stars in significant quantities.
For example, the properties of many young star-forming regions throughout our Galaxy have now been catalogued using the unprecedented spatial resolution of the Hubble Space Telescope (HST), and large ground-based telescopes assisted by adaptive optics \citep[e.g.][]{wang11,phan-bao11,demarchi11,dario12}. The gas being used to form stars in these regions is typically dense and cold, with densities and temperatures on the order of $\sim 10^2-10^6$ M$_{\odot}$ pc$^{-3}$ and $\sim 10$ K (temperatures can reach $\sim 30$ K during the later stages of star formation) \citep[e.g.][]{lada03,mckee07}, respectively. The nuclei of spiral galaxies have also been imaged at high resolution, revealing not only that significant quantities of molecular gas are present, but also that star formation is on-going in these dense stellar environments \citep[e.g.][]{boker03,schinnerer06}. The implications of the presence of gas in galactic nuclei, both with respect to the formation of nuclear star clusters and the growth of super-massive black holes (SMBHs) are not yet fully understood \citep[e.g.][]{hopkins11,gabor13,kormendy13}. Additionally, recent evidence suggests that globular clusters (GCs) underwent prolonged star formation early on in their lifetimes (see \citet{gratton12} for a recent review). That is, gas was present in significant quantities for the first $\sim 10^8$ years, albeit perhaps intermittently \citep{conroy11,conroy12}. This evidence comes in the form of multiple stellar populations identified in the colour-magnitude diagram \citep[e.g.][]{piotto07}, as well as curious abundance anomalies that cannot be explained by a single burst of star formation \citep[e.g.][]{osborn71,gratton01}. Thus, at least until the second generation has formed, stars from the first generation must have been orbiting within a gas-rich environment. On the theoretical front, most studies conducted to date considered one of two extremes. 
The first has its focus purely on the stellar dynamics (see \citet{spitzer87} and \citet{heggie03} for detailed reviews), long after the gas has been converted to stars and/or ejected from the cluster. The focus of the second extreme is often on the very early stages of star formation \citep[e.g.][]{krumholz05b,kirk11,krumholz11a,krumholz11b,offner11,krumholz12,kirk14}, when gas is first being converted into stars. Little theoretical work has been done connecting these two extremes, when both gas and stars co-exist in significant quantities, particularly in massive clusters and for more than a few tens of Myr. \citet{bonnell97} first studied gas accretion onto small clusters of young stars using a three-dimensional SPH code. The authors find that non-uniform or differential accretion can produce a realistic mass spectrum, even when the initial stellar masses are uniform, and tends to form massive stars at the cluster centre where the gas density is highest \citep[e.g.][]{bonnell98,bonnell01}. Later, \citet{bate03} performed a larger numerical simulation to resolve the fragmentation process down to the opacity limit. The authors find that the star formation process is highly chaotic and dynamic, and that the observed statistical properties of stars are a natural consequence of a dynamical environment. These results were later expanded upon by \citet{bate09}, \citet{bate12} and \citet{maschberger10} to include more massive clusters composed of $\sim$ a few thousand stars. Other authors placed their focus on the evolving properties of the gas \citep{offner09a}, the impact of different initial conditions \citep{girichidis12a} or the evolution of potential and kinetic energy throughout the cloud \citep{girichidis12b}, and showed that clusters tend to be born in a subvirial state \citep[e.g.][]{allison09b}. 
Both the gas velocity dispersion and temperature can be surprisingly high, especially if radiative feedback is included in the simulations \citep[e.g.][]{offner09a,offner09b}. Along with large-scale magnetic fields \citep[e.g.][]{lee14}, this can dramatically reduce not only the overall accretion rate, but also the final number of stars that form. A number of $N$-body simulations have also been performed to study the effects of gas expulsion or dispersal \citep[e.g.][]{marks08,moeckel12}, sub-cluster merging \citep[e.g.][]{moeckel09b}, primordial mass segregation \citep[e.g.][]{marks08,moeckel09a}, and even techniques to quantify the degree of mass segregation \citep{allison09a,parker12,parker14}. In purely stellar systems (i.e. gas-free), two-body relaxation is the dominant physical mechanism driving the evolution of star clusters for most of their lifetime \citep[e.g.][]{henon60,henon73,spitzer87,heggie03,gieles11}. Long-range gravitational interactions tend to push the cluster toward a state of energy equipartition in which all objects have comparable kinetic energies.\footnote{In reality, such an idealized state is never fully achieved in a cluster with a realistic mass spectrum or potential; see \citet{trenti13} for more details.} Consequently, the velocities of the most massive objects decrease, and they accumulate in the central regions of the cluster. Similarly, the velocities of the lowest mass objects increase, and they are subsequently dispersed to wider orbits. This mechanism is called mass segregation, and has been observed in both young and old clusters \citep[e.g.][]{lada03,vonhippel98,demarchi10}. In the presence of gas, other effects could also contribute to the deceleration of a massive test particle. For example, mass accretion reduces the accretor velocity due to conservation of momentum. This was first argued by \citet{bondi44} and \citet{bondi52} in the supersonic and subsonic limits, respectively. 
Several authors have also explored the effects of gas dynamical friction, particularly in the steady-state supersonic regime \citep[e.g.][]{dokuchaev64,ruderman71,rephaeli80,ostriker99}, although it is most effective when the perturber velocity is approximately equal to the sound speed of the surrounding gaseous medium. More recently, \citet{lee11} and \citet{lee13} showed that the drag force should be precisely equal to $\dot{m}$v, where $\dot{m}$ is the rate of mass accretion and v is the accretor's velocity relative to the gas, in both the subsonic and supersonic regimes. Consequently, the authors argued that damping due to gas dynamical friction cannot be separated from that due to accretion, and that these processes instead represent different components of the same underlying damping mechanism. In this paper, we address the question: When does the presence of gas in a stellar system significantly affect the stars' dynamics? To answer this question, we will adopt a three-component model to calculate and compare three timescales for a massive test particle orbiting within the cluster potential to become mass segregated. The damping mechanisms we consider are two-body relaxation, accretion from the interstellar medium (ISM) and gas dynamical friction. To first order, this will allow us to constrain the parameter space in the cluster mass-gas density plane for which each of the different damping mechanisms dominates the deceleration of the test particle. We then apply our results to observed data taken from the literature for two different types of astrophysical environments, namely the galactic nuclei of late-type spirals and young star-forming regions, and determine the dominant damping mechanism operating in each. We compare all three timescales in both the subsonic and supersonic regimes in Section~\ref{analytic1}, and apply our analytic model to the available observational data. 
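A minimal sketch of such a damping term — taking the drag force to be $\dot{m}$v as argued by \citet{lee11} and \citet{lee13}, and adopting the standard Bondi-Hoyle-Lyttleton interpolation formula for $\dot{m}$ (our choice for illustration, not necessarily the exact prescription used in the simulations) — reads:

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mdot_bhl(m, rho_gas, v, c_s):
    """Bondi-Hoyle-Lyttleton accretion rate [kg/s], interpolating
    between the subsonic and supersonic limits."""
    return 4.0 * math.pi * G**2 * m**2 * rho_gas / (v**2 + c_s**2) ** 1.5

def gas_drag_accel(m, rho_gas, v, c_s):
    """Deceleration a = (mdot/m) v, from momentum conservation."""
    return mdot_bhl(m, rho_gas, v, c_s) * v / m

# The drag rises roughly linearly with v in the deep subsonic regime
# and falls off as 1/v^2 in the supersonic regime, so damping is most
# efficient for velocities near the sound speed.
```

Note the explicit mass dependence, $a \propto m$: more massive particles are decelerated faster, which is what drives the accelerated mass segregation discussed below.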
The modified $N$-body simulations used to model the effects of gas damping are presented in Section~\ref{comp2}, and compared to the predictions of our analytic model, the details of which are provided in an Appendix. Finally, in Section~\ref{discussion}, we summarize our key results, and discuss their significance for different astrophysical environments, in particular galactic nuclei and young star-forming regions.

\label{sec:discussion} In the subsequent sections, we discuss the implications of our results for the evolution of gas-embedded star clusters, comment on the validity of our model assumptions and offer suggestions for improvements in future work. We further discuss what our results imply for different astrophysical cases of interest, including nuclear star clusters and SMBH formation, young open clusters or associations, and primordial globular clusters. \subsection{Mass segregation and energy equilibrium} \label{astro} Here, we discuss the implications of our results for the rate of mass segregation in a gas-embedded star cluster within the context of energy equilibrium. Within the framework of our analytic model, if initially all mass species in a cluster have the same root-mean-square speed, then all three damping mechanisms considered in this paper will push this initial state toward energy equilibrium. Our simulations show that energy equipartition is never actually achieved, even without gas damping (see \citet{trenti13} for more details).
The analytic timescale presented in Appendix A and compared to the simulations in Figure~\ref{fig:fig3} nonetheless approximately describes the timescale for mass segregation due to gas dynamical friction\footnote{More accurately, our analytic timescale corresponds more closely to the \textit{onset} of mass segregation due to gas dynamical friction.} to within an order of magnitude, since this timescale depends most sensitively on the \textit{initial} particle velocity, with the dependence on the \textit{final} particle velocity typically being negligible. Our results illustrate that both gas dynamical friction and accretion from the ISM should serve to accelerate the rate of mass segregation (or stratification) within gas-embedded star clusters, operating on timescales comparable to (albeit typically smaller than) the corresponding timescale for two-body relaxation for typical gas densities and cluster masses. Importantly, we do not expect either gas accretion or gas dynamical friction to stop operating once mass segregation is achieved. These mechanisms will continue to reduce the velocities of the more massive particles the fastest, causing the cluster to fall further away from energy equilibrium, and contract even further. In particular, clusters that would otherwise achieve a stable state of approximate energy equilibrium will be forced out of this state by gas damping. Thus, so long as gas damping continues to operate, a stable state of energy equilibrium cannot be achieved. Eventually, the central density of the heavier particles becomes sufficiently high that they begin to undergo strong gravitational interactions, progressively depleting their population by ejecting each other from the cluster (or colliding). In massive clusters, gas damping could push the cluster to core collapse on a shorter timescale than two-body relaxation alone.
This could help to explain why more massive Galactic globular clusters are, on average, more concentrated \citep{harris96,leigh13b}, when two-body relaxation alone should cause the opposite. That is, two-body relaxation causes the concentration to increase slowly over time, but this process operates at a rate that is inversely proportional to the cluster mass. For this scenario to be compatible with the observed distribution of concentrations, our analytic results suggest that the duration of the gas-embedded phase should increase with the total cluster mass. This is because the additional gas mass increases the stellar velocities via the virial theorem, and the timescale for gas damping to decelerate a massive test particle scales with the cube of the particle velocity in the steady-state supersonic limit. Thus, the duration of the gas-embedded phase should scale with total cluster mass as M$_{\rm clus}^{3/2}$, since $\tau_{\rm gd,sup} \propto \sigma^3$ and $\sigma \propto$ M$_{\rm clus}^{1/2}$. Here, $\tau_{\rm gd,sup}$ represents the timescale for gas damping (i.e. either gas accretion or gas dynamical friction) to operate in the steady-state supersonic limit and $\sigma$ is the stellar velocity dispersion, taken as a proxy for the typical relative velocity between stars and the gas. If the duration of the gas-embedded phase does indeed scale with the total cluster mass, then this could predict a relation between second generation stars and the concentration parameter, perhaps in the form of a correlation between concentration and the fraction of second generation stars (see Section~\ref{gal-nuc} below).
Alternatively, assuming that the total mass in stars greatly exceeds the total gas mass (so that the stellar velocities do not depend strongly on the total gas mass, via the virial theorem), then more massive clusters could end up more concentrated if the gas fraction scales linearly with the total cluster mass, and the duration of the gas-embedded phase is independent of cluster mass. This is because the rate of gas damping scales linearly with gas density, and we assume that the cluster half-mass radius r$_{\rm h}$ is also approximately independent of the total cluster mass \citep{harris96}. Importantly, we expect our main conclusions to hold if our simulations were to be re-performed with a realistic mass spectrum. In particular, gas damping should still accelerate the rate of mass segregation, cause overall cluster contraction and even accelerate the rate of core collapse. However, we caution that higher gas densities than adopted in the simulations performed in this paper will likely be required to achieve comparable effects within a few 100 Myr. This is because the average mass in our simulations is higher than it would be assuming a realistic mass spectrum, increasing the efficiency of gas damping. We did attempt to perform some simulations adopting a realistic mass spectrum, and our preliminary results confirm these conclusions. \subsection{Model assumptions} \label{assumptions} In this section, we improve upon the connection between our results and real astrophysical environments by discussing our model assumptions. First, we discuss our derivation for the timescale for two-body relaxation to operate in a gas-embedded star cluster, as presented in Equation~\ref{eqn:t-rh-gas}. Importantly, we have modeled the gas as a simple background potential in this derivation, which neglects the gravitational interactions between stars and over-densities in the gas such as, for example, any filamentary structure in the gas. 
We do not expect our model to be accurate in the regime where the total gas mass significantly exceeds the total stellar mass, since here the assumption that the cluster is in steady-state breaks down, as does the assumption that the gas can be treated as a simple background potential. More detailed numerical simulations will be needed to identify the parameter space suitable to our simple analytic treatment for a gas-modified relaxation time. Next, we discuss our simplified treatment of gas dynamical friction and accretion. Probably the most important assumption is that the radial density profile of the gas is uniform, which is certainly not the case in most observed star-forming regions. Among other things, the gas could be more centrally concentrated than the stars due to dissipation, and/or it could exhibit over- and under-densities due to radiation pressure from stars and its own self-gravity. If a region is of sufficiently low density, perturbations traveling through the gas may not be able to propagate through it, and this would decrease the effectiveness of gas dynamical friction. What's more, the orbits of neighboring stars pass directly through the wakes induced by gas dynamical friction. This suggests that the true upper limit for the Coulomb logarithm, incorporated in the derivation for the deceleration due to gas dynamical friction in the supersonic regime, should perhaps be the distance between stars, instead of the cluster half-mass radius. This reduces the rate of deceleration due to gas dynamical friction by a factor $\lesssim 10$. These issues are also not accounted for by our simple estimate for the accretion rate. The Bondi-Hoyle-Lyttleton prescription for accretion represents a strict upper limit, and the true accretion rate could be orders of magnitude lower \citep[e.g.][]{krumholz04,krumholz05a,krumholz06}. Our model is only suited to the steady-state subsonic and supersonic limits, or $v \ll c_{\rm s}$ and $v \gg c_{\rm s}$, respectively. 
Thus, the assumptions behind our analytic timescales break down for stellar velocity dispersions on the order of the sound speed. Here, gas dynamical friction is at its most efficient, and could be the dominant damping force acting on the test particle for much lower gas densities than our results suggest. Given the simplicity of our derivations for the mass segregation times due to gas accretion and gas dynamical friction, we estimate they are correct to within an order of magnitude, at best. In fact, given the (flawed) assumption of a uniform gas density, the derived rates are likely over-estimates of the true rates. In general, the conclusion that the gas dynamical friction timescale is shorter than the gas accretion timescale for all but the most bloated objects requires further study using more advanced simulations. Indeed, \citet{lee11} and \citet{lee13} recently argued that the deceleration due to accretion cannot be separated from that due to gas dynamical friction, and that the damping force is precisely equal to $\dot{m}v$ in both the subsonic and supersonic regimes. To derive the timescales for gas dynamical friction and accretion to cause a massive test particle to become mass segregated, we assumed that both gas damping and two-body relaxation cause a particle's velocity to vary smoothly from an initial value $\sigma$ to a final value $\sigma\sqrt{\bar{m}/m_{\rm 1}}$. This is not strictly valid, since particles do not adhere to circular orbits in clusters, so that their velocities will vary over the course of a crossing time. In turn, the deceleration due to both gas dynamical friction and accretion should vary over a crossing time, particularly in the supersonic limit, since here both rates depend on the particle velocity. This issue is addressed directly via our computational $N$-body models, which put the validity of this assumption to the test.
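To make the scalings discussed above concrete, the short sketch below evaluates the Bondi-Hoyle-Lyttleton accretion rate (a strict upper limit, as noted in the text) and the damping time implied by an accretion-drag force $F = \dot{m}v$. All numerical values are illustrative assumptions, not parameters of our simulations; note the characteristic $\dot{m} \propto v^{-3}$ supersonic scaling.

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mdot_bhl(m, rho_gas, v, c_s):
    """Bondi-Hoyle-Lyttleton accretion rate (a strict upper limit)."""
    return 4.0 * math.pi * G ** 2 * m ** 2 * rho_gas / (v ** 2 + c_s ** 2) ** 1.5

def damping_timescale(m, rho_gas, v, c_s):
    """e-folding time of v if the drag force is F = mdot * v, i.e. t = m / mdot."""
    return m / mdot_bhl(m, rho_gas, v, c_s)

# Illustrative (assumed) numbers: a solar-mass star in a dense molecular clump.
msun = 1.989e30   # kg
rho = 1.0e-17     # kg m^-3 (~3000 H2 molecules per cm^3)
c_s = 200.0       # m s^-1, cold molecular gas
v = 2000.0        # m s^-1, Mach ~10

t_myr = damping_timescale(msun, rho, v, c_s) / 3.156e13
print(f"accretion-drag damping time ~ {t_myr:.0f} Myr")

# Supersonic scaling: mdot ~ v^-3, so doubling v lengthens the damping time ~8x.
print(damping_timescale(msun, rho, 2 * v, c_s) / damping_timescale(msun, rho, v, c_s))
```

The second printed ratio approaches 8 in the highly supersonic limit, recovering the $v^{3}$ dependence of the damping time.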
Given the reasonably good agreement (to within an order of magnitude) between our analytic model and the results of our simulations, we conclude that the assumption that a particle's velocity varies smoothly as it decelerates due to gas damping is a decent first-order approximation.

\subsection{Specific astrophysical environments}
\label{specific}

When do the initial conditions adopted in our analytic model actually occur in nature? In particular, our model assumes that initially all mass species have the same root-mean-square speed. This assumption should be suitable to a cluster that has recently undergone a phase of violent relaxation. In open clusters, this could occur if the parent star-forming region is initially sub-virial, and a pronounced infall phase occurs. This could also apply to the formation of nuclear star clusters in galactic nuclei. However, here, violent relaxation could also occur when significant reservoirs of gas fall into the nucleus, or when/if other star clusters, in particular massive globular clusters, spiral in and merge with the nucleus due to stellar dynamical friction within the host galaxy. In all galactic nuclei and young star-forming regions considered in this paper, the stellar motions should typically be in the supersonic regime, since either the gas is predominantly cold and molecular or the stellar velocity dispersion is very high. Nevertheless, it is possible that in some environments the gas is sufficiently heated by, for example, stellar winds, supernovae, or high-energy radiation, that the gas sound speed becomes on the order of the stellar velocity dispersion. Indeed, the gas in the Galactic Centre within $\sim$ 1 pc of the SMBH is known to be hot ($kT \sim 1$\,keV) \citep{merritt13}; however, the stellar velocity dispersion there is also high.
We note that, when the stellar velocities are on the order of the sound speed, gas damping should be maximally effective, since the deceleration due to gas dynamical friction becomes very large \citep[e.g.][]{ostriker99}. Below, we discuss in more detail the implications of our results for specific astrophysical environments.

\subsubsection{Galactic nuclei and primordial globular clusters}
\label{gal-nuc}

The results of our analytic model suggest that, although the rate of gas damping should increase with increasing gas density, the effect is largely canceled by the additional gas mass causing an increase in the stellar velocity dispersion. This suggests that the total duration of the gas-embedded phase plays a more important role in deciding the overall effects of gas damping on the cluster structure than does the gas density. In late-type galaxies, galactic nuclei are thought to undergo continual gas replenishment from the host galaxy. If a new stellar population is born every time this occurs, a correlation could exist between the number of distinct stellar populations and the overall compactness of the nuclear cluster, assuming that the duration of each gas-embedded phase stays roughly the same between gas replenishment events. Additionally, there is some evidence to suggest that more massive GCs are more likely to host multiple stellar populations \citep[e.g.][]{gratton12}. This is in rough agreement with the idea that more compact (or concentrated) clusters should also host a larger fraction of second or even third generation stars, or perhaps just a wider stellar age distribution if distinct generations are not present. As discussed, gas damping should cause clusters to contract, independent of the mass segregation process. The effect can be significant.
For example, in our simulations with gas damping, we find a factor of $\sim 2$ difference in the final cluster half-mass radius compared to those simulations without gas damping, for our chosen model assumptions. In galactic nuclei, cluster contraction could be relevant to SMBH formation in the early Universe, and even SMBH growth at later cosmic epochs. If an SMBH is present at the centre of a nuclear star cluster, gas damping will increase the feeding rate of stars into its immediate vicinity, and hence the rate at which stars merge with it. More generally, gas damping could accelerate the onset of a phase of runaway mergers in the centres of nuclear clusters, which could then result in SMBH formation or growth. The simulations performed in this paper treat all objects as point particles, so that mergers/collisions are neglected. However, modern simulation-based techniques can address this issue \citep[e.g.][]{portegieszwart04}, which offers an interesting avenue for future studies. Similarly, our results can be applied to stellar-mass black holes in primordial globular clusters, which suggests that gas damping could have an important bearing on the present-day black hole retention fractions. \textbf{We caution that we have not considered stellar evolution-induced mass loss in our simulations, which contributes to cluster expansion. This is particularly important early on in the cluster lifetime, when massive stars are still present \citep[e.g.][]{chernoff90,leigh13b}. We have also neglected dynamical interactions involving binary stars, which can act either as a source of heating or cooling \citep[e.g.][]{fregeau09,converse11,leigh13b}. Future more sophisticated simulations should ideally account for these processes, and their implications for the results presented here.}

\subsubsection{Young open clusters}
\label{open}

Most young open clusters are not yet in virial equilibrium, and hence are not in steady-state.
In some cases, the clusters are sub-virial, and are in the process of contracting. This is exactly what we would expect if gas dynamical friction and/or accretion are acting. Specifically, the stellar velocities should be lower than expected from the virial theorem given the currently observed cluster size and structure. With gas dynamical friction and accretion operating, the stars should continually be decelerating, while also attempting to adjust the cluster structure accordingly. The former must take place before the latter can occur. Thus, both gas dynamical friction and accretion could contribute to causing a gas-embedded cluster to appear sub-virial. If star clusters are born primordially mass segregated, our analytic timescales do not strictly apply. Indeed, several recent studies suggest that star-forming regions could be mass segregated as early as the protostellar phase \citep[e.g.][]{kryukova12,elmegreen14,kirk14}. However, with our results in mind, it is perhaps no surprise that many young open clusters appear primordially mass segregated, if this conclusion is based on the ages of the clusters being much shorter than their half-mass relaxation times. All of the mechanisms contributing to mass segregation discussed in this paper should be operating \textit{during} the star formation process, and their rates could be relatively short compared to the star formation time. As protostars accrete mass from the ISM, conservation of momentum should continually act to reduce (typically) the velocities of the protostars. At the same time, protostars are being accelerated/decelerated by the gravitational tug from their peers. The most massive protostars should experience the greatest overall deceleration due to momentum conservation from accretion, and/or experience the least overall acceleration from their less massive peers. 
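The momentum-conservation argument can be illustrated with a toy calculation (the numbers below are arbitrary): a protostar accreting gas that is at rest conserves its linear momentum, so its velocity falls inversely with its growing mass.

```python
def decelerate_by_accretion(m0, v0, dm_steps):
    """Velocity history of a star accreting gas at rest: linear momentum
    m * v is conserved, so v = m0 * v0 / m after accreting (m - m0)."""
    momentum = m0 * v0
    mass = m0
    history = []
    for dm in dm_steps:
        mass += dm
        history.append(momentum / mass)
    return history

# Toy numbers (arbitrary units): a protostar doubling its mass in ten steps.
velocities = decelerate_by_accretion(m0=1.0, v0=1.0, dm_steps=[0.1] * 10)
print(velocities[-1])  # mass doubled -> velocity roughly halved
```

Doubling the mass halves the velocity, so the most rapidly accreting (most massive) protostars are indeed slowed the most.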
Thus, by the time a statistically significant distribution of protostar masses has formed, it stands to reason that a cluster could already appear mass segregated \citep[e.g.][]{girichidis12b}. This represents an extreme application of our model, and is better addressed using more sophisticated numerical simulations of star formation. Previous SPH simulations have shown that stars in the central cluster regions tend to accrete the most due to the higher gas densities, and this is primarily responsible for star clusters appearing primordially mass segregated \citep{bonnell97,bonnell98,bonnell01}. However, primordial mass segregation can be complicated if (massive) clusters are initially born with significant substructure and undergo one or more phases of clump-infall. It is perhaps more likely that the initial conditions of our model would at some point be met if such a scenario were to occur \citep[e.g.][]{maschberger10}. If our simulations were to be repeated assuming a more realistic gas density profile that follows that of the stars, many of the effects discussed in this paper would be amplified, such as the acceleration of core collapse. In general, we do not consider the evolving properties of the gas, and could be missing a lot of interesting physics due to this simplifying assumption. Our analytic model is meant to provide a useful benchmark for comparison between observational data and future more sophisticated numerical simulations.

% arXiv:1404.0379 (April 2014)
% --- 1404.5847_arXiv.txt ---

\begin{abstract}
The near-Earth asteroid (99942) Apophis is a potentially hazardous asteroid. We obtained far-infrared observations of this asteroid with the Herschel Space Observatory's PACS instrument at 70, 100, and 160\,$\mu$m. These were taken at two epochs in January and March 2013 during a close Earth encounter. These first thermal measurements of Apophis were taken at similar phase angles before and after opposition. We performed a detailed thermophysical model analysis by using the spin and shape model recently derived from applying a 2-period Fourier series method to a large sample of well-calibrated photometric observations. We find that the tumbling asteroid Apophis has an elongated shape with a mean diameter of 375$^{+14}_{-10}$\,m (of an equal volume sphere) and a geometric V-band albedo of 0.30$^{+0.05}_{-0.06}$. We find a thermal inertia in the range 250-800\,Jm$^{-2}$s$^{-0.5}$K$^{-1}$ (best solution at $\Gamma$ = 600\,Jm$^{-2}$s$^{-0.5}$K$^{-1}$), which can be explained by a mixture of low-conductivity fine regolith with larger rocks and boulders of high thermal inertia on the surface. The thermal inertia, and other similarities with (25143)~Itokawa, indicate that Apophis might also have a rubble-pile structure. If we combine the new size value with the assumption of an Itokawa-like density and porosity, we estimate a mass between 4.4 and 6.2 $\cdot$ 10$^{10}$\,kg, which is more than 2-3 times larger than previous estimates. We expect that the newly derived properties will influence impact scenario studies and the long-term orbit predictions of Apophis.
\end{abstract}

\section{Introduction}

The near-Earth asteroid 99942~Apophis was discovered in 2004 (Minor Planet Supplement 109613) and found to be on an Aten-type orbit\footnote{The current orbit's perihelion is at 0.746\,AU, aphelion at 1.0985\,AU, with a=0.922\,AU, i=3.33$^{\circ}$, e=0.191.} crossing the Earth's orbit at regular intervals.
At that time, the object raised serious concerns following the discovery that it had a 2.7\% chance of striking the planet Earth in 2029\footnote{\tt http://neo.jpl.nasa.gov/risk \\ http://newton.dm.unipi.it/neodys}. Immediate follow-up observations to address these concerns took place and provided predictions that eliminated the possibility of collision in 2029, although the asteroid will pass within the orbit of the geostationary satellites at that time. However, there remained the possibility of Apophis passing through a precise region in space (a gravitational keyhole) which could set it up for an impact in the mid-term future (Farnocchia et al.\ \cite{farnocchia13}). Apophis remains an object with one of the highest statistical chances of impacting the Earth among all known near-Earth asteroids. The studies performed to determine the impact probability require a clear set of physical properties in order to understand the orbital evolution of this asteroid (\v{Z}i\v{z}ka \& Vokrouhlick\'y \cite{zizka11}; Farnocchia et al.\ \cite{farnocchia13}; Wlodarczyk \cite{wlodarczyk13}). The lack of such properties (albedo, size, shape, rotation, physical structure, thermal properties) is a major limiting factor which leads to uncertainties in the role played by non-gravitational effects on that orbit. The Yarkovsky effect, due to the recoil of thermally re-radiated sunlight, is the most important of these non-gravitational effects. Besides serving as input to the orbit evolution, the physical properties also help to address the possible implications if an impact were to occur. A solid 300\,m body and a rubble pile of the same size would differ in their ability to pass through the atmosphere unscathed, and hence in whether an impact would cause regional or grand-scale damage. Delbo et al.\ (\cite{delbo07a}) determined from polarimetric observations an albedo of 0.33 $\pm$ 0.08 and an absolute magnitude of H = 19.7 $\pm$ 0.4\,mag.
These values led to a diameter of 270 $\pm$ 60\,m, slightly smaller than earlier estimates in the range 320 to 970\,m, depending on the assumed albedo. Binzel et al.\ (\cite{binzel09}) presented visible to near-infrared (0.55 to 2.45\,$\mu$m) observations of Apophis, comparing and modeling its reflectance spectrum with respect to the spectral and mineralogical characteristics of likely meteorite analogs. Apophis was found to be an Sq-class asteroid that most closely resembled LL ordinary chondrite meteorites in terms of spectral characteristics and interpreted olivine and pyroxene abundances. They found that composition and size similarities of Apophis with (25143) Itokawa suggested a total porosity of 40\% as a current best guess for Apophis. Applying these parameters to Apophis yielded a mass estimate of 2 $\cdot$ 10$^{10}$\,kg with a corresponding energy estimate of 375 Megatonnes (Mt) TNT for its potential hazard. Substantial unknowns, most notably the total porosity, allowed uncertainties in these mass and energy estimates to be as large as factors of two or three. Up to the time of our own observations, no thermal infrared measurements of this asteroid existed. Observations from the Spitzer Space Telescope were not possible as Apophis was not in the Spitzer visibility region during the remainder of its mission. Because there was no close encounter with Earth between discovery and now, no ground-based N-/Q-band, Akari, or WISE observations are available either. We observed this near-Earth asteroid with the Herschel Space Observatory's (Pilbratt et al.\ \cite{pilbratt10}) PACS (Photodetector Array Camera and Spectrometer) instrument (Poglitsch et al.\ \cite{poglitsch10}) at far-infrared wavelengths (Section \ref{sec:obs}). We present our thermophysical model (TPM) analysis (Section \ref{sec:tpm}) and discuss the results (Section \ref{sec:dis}).
\section{Conclusions}
\label{sec:con}

The shape and spin properties of Apophis presented by Pravec et al.\ (\cite{pravec14}) were the key elements for our radiometric analysis. The interpretation of the $\sim$3.5\,h of Herschel-PACS measurements in January and March 2013 was done using a well-tested and validated thermophysical model. Applying the radiometric method to a tumbling object is more complex, but it works reliably if the object's orientation and its spin axis are known at the epochs of the thermal measurements. We found the following results:
\begin{enumerate}
\item The radiometric size solution is D$_{eff}$ = 375$^{+14}_{-10}$\,m; this is the scaling factor for the shape model presented in Pravec et al.\ (\cite{pravec14}) and corresponds to the size of an equal volume sphere.
\item The geometric V-band albedo was found to be p$_V$ = 0.30$^{+0.05}_{-0.06}$, almost identical to the value found for the Hayabusa rendezvous target 25143~Itokawa; the corresponding bolometric Bond albedo A is 0.14$^{+0.03}_{-0.04}$.
\item A thermal inertia of $\Gamma$ = 600$^{+200}_{-350}$\,Jm$^{-2}$s$^{-0.5}$K$^{-1}$ best explains our combined dataset comprising three different bands and two different epochs.
\item Using either Itokawa's bulk density information or a rock density of 3.2\,g/cm$^3$ combined with 30-50\% porosity, we calculate a mass of (5.3 $\pm$ 0.9) $\cdot$10$^{10}$\,kg, which is 2 to 3 times higher than earlier estimates.
\item No information about surface roughness can be derived from the radiometric analysis of our measurements due to the lack of observations at shorter wavelengths and smaller phase angles close to opposition. However, Apophis' thermal inertia is similar to the value derived for Itokawa, which might point to a surface of comparable roughness.
\item Apophis' size, the surface characteristics related to the high thermal inertia, and the comparison with similar-size objects make a cohesionless structure more likely.
\end{enumerate}

The interior structure (rubble pile or coherent body) is relevant in the context of impact scenario studies. In the case of a rubble-pile structure (the more likely option), pre-collision encounters with planets could disrupt the body by tidal forces, while a more solid interior would leave the object intact. We also expect that the newly derived properties will affect the long-term orbit predictions of Apophis, which are influenced by the Yarkovsky effect and, to second order, also by solar radiation pressure. In this context, the radiometrically derived size and thermal inertia will play a significant role in risk-analysis studies beyond Apophis' close encounter with Earth in 2029.
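As a quick consistency check, the mass range quoted in item 4 follows directly from the derived diameter and the stated grain density and porosity assumptions (Python is used here purely for illustration):

```python
import math

def bulk_mass(diameter_m, rock_density, porosity):
    """Mass of an equal-volume sphere with bulk density rho_rock * (1 - porosity)."""
    radius = diameter_m / 2.0
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return rock_density * (1.0 - porosity) * volume

diameter = 375.0     # m, radiometric effective diameter derived in this work
rho_rock = 3200.0    # kg m^-3, LL-chondrite-like grain density (from the text)
m_lo = bulk_mass(diameter, rho_rock, 0.50)   # 50% porosity
m_hi = bulk_mass(diameter, rho_rock, 0.30)   # 30% porosity
print(f"mass range: {m_lo:.1e} -- {m_hi:.1e} kg")  # ~4.4e10 -- 6.2e10 kg
```

This reproduces the 4.4 to 6.2 $\cdot$ 10$^{10}$\,kg range quoted in the abstract.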
% --- 1404.1376.txt ---

\begin{abstract}
We present a new determination of the concentration-mass relation for galaxy clusters based on our comprehensive lensing analysis of 19 X-ray selected galaxy clusters from the Cluster Lensing and Supernova Survey with Hubble (CLASH). Our sample spans a redshift range between 0.19 and 0.89. We combine weak-lensing constraints from the Hubble Space Telescope (HST) and from ground-based wide-field data with strong lensing constraints from HST. The results are reconstructions of the surface-mass density for all CLASH clusters on multi-scale grids. Our derivation of NFW parameters yields virial masses between $0.53\times10^{15} M_{\odot}/h$ and $1.76\times10^{15} M_{\odot}/h$ and the halo concentrations are distributed around $c_{200\textrm{c}}\sim3.7$ with a $1\sigma$ significant negative trend with cluster mass. We find an excellent 4\% agreement between our measured concentrations and the expectation from numerical simulations after accounting for the CLASH selection function based on X-ray morphology. The simulations are analyzed in 2D to account for possible biases in the lensing reconstructions due to projection effects. The theoretical concentration-mass (c-M) relation from our X-ray selected set of simulated clusters and the c-M relation derived directly from the CLASH data agree at the 90\% confidence level.
\end{abstract}

\section{Introduction}

The standard model of cosmology ($\Lambda$CDM) is extremely successful in explaining the observed large-scale structure of the Universe \citep[see e.g.][]{PlanckCollaboration2013, Anderson2012}. However, when moving to progressively smaller length scales, inconsistencies between theoretical predictions and real observations have emerged. Examples include the cored mass-density profiles of dwarf-spheroidal galaxies \citep{Walker2011}, the abundance of Milky Way satellites \citep{Boylan-Kolchin2012} and the flat dark matter density profiles in the cores of galaxy clusters \citep{Sand2002,Newman2013}.
Galaxy clusters are unique tracers of cosmological structure formation \citep[e.g.][]{Voit2005,Borgani2011}. As the largest collapsed objects in the observable Universe, clusters form the bridge between the large-scale structure of the Universe and the astrophysical regime of individual halos. From an observational point of view, all main mass components of a cluster, hot ionized gas, dark matter and luminous stars, are directly or indirectly observable with the help of X-ray observatories \citep[e.g.][]{Rosati2002,Ettori2013}, gravitational lensing \citep[e.g.][]{Bartelmann2001,Bartelmann2010a} or optical observations. As shown by numerical simulations \citep{Navarro1996}, dark matter tends to arrange itself following a specific, spherically symmetric density profile
\begin{equation}\label{NFW}
\rho_{\mathrm{NFW}}(r) = \frac{\rho_{\mathrm{s}}}{r/r_{\mathrm{s}}\left(1+r/r_{\mathrm{s}}\right)^{2}},
\end{equation}
where the only two parameters $\rho_{\mathrm{s}}$ and $r_{\mathrm{s}}$ are a scale density and a scale radius. This functional form is now commonly called the Navarro, Frenk and White (NFW) density profile. It was found to fit well the dark matter distribution of halos in numerical simulations, independent of halo mass, cosmological parameters or formation time \citep{Navarro1997,Bullock2001}. A specific parametrization of the NFW profile uses the total mass enclosed within a certain radius $r_{\Delta}$
\begin{equation}\label{NFW_mass}
M_{\Delta} = 4\pi\rho_{\mathrm{s}}r_{\mathrm{s}}^{3}\left(\ln\left(1+c_{\Delta}\right)-\frac{c_{\Delta}}{1+c_{\Delta}}\right),
\end{equation}
and the concentration parameter
\begin{equation}\label{concentration}
c_{\Delta} = \frac{r_{\Delta}}{r_{\mathrm{s}}}.
\end{equation}
When applying the relations above to a specific analysis, the radius $r_{\Delta}$ is chosen such that it describes the halo on the scale of interest.
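The analytic enclosed-mass formula of Eq.~\ref{NFW_mass} can be cross-checked against a direct shell integration of the density profile of Eq.~\ref{NFW}; the sketch below does this for an illustrative halo in arbitrary units (the parameter values are assumptions for the test, not fitted quantities):

```python
import math

def nfw_density(r, rho_s, r_s):
    """NFW density profile, Eq. (1)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_mass(r, rho_s, r_s):
    """Analytic enclosed mass, Eq. (2), with c_Delta replaced by r/r_s."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

# Illustrative halo (arbitrary units): scale radius 0.5, concentration 4.
rho_s, r_s, c = 1.0e15, 0.5, 4.0
r_delta = c * r_s

# Midpoint-rule shell integration of 4 pi r^2 rho(r) dr out to r_delta.
n = 100_000
dr = r_delta / n
m_numeric = sum(
    4.0 * math.pi * ((i + 0.5) * dr) ** 2 * nfw_density((i + 0.5) * dr, rho_s, r_s) * dr
    for i in range(n)
)
print(m_numeric / nfw_mass(r_delta, rho_s, r_s))  # -> 1.0 to within the integration error
```

The ratio of numerical to analytic mass is unity to high accuracy, confirming that Eq.~\ref{NFW_mass} is the volume integral of Eq.~\ref{NFW}.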
An example is the radius at which the average density of the halo is 200 times the critical density of the Universe at this redshift ($\Delta=200c$). Cosmological simulations show that dark matter structures occupy a specific region in the concentration-mass plane. This defines the concentration-mass (c-M) relation which is a mild function of formation redshift and halo mass \citep{Bullock2001,Eke2001a,Zhao2003a,Gao2008,Duffy2008,Klypin2011,Prada2012,Bhattacharya2013}. Observational efforts have been undertaken to measure the c-M relation either using gravitational lensing \citep{Comerford2007,Oguri2012,Okabe2013}, X-ray observations \citep{Buote2007,Schmidt2007,Ettori2010} or dynamical analysis of cluster members \citep{Lemze2009,Wojtak2010,Biviano2013}. Some of the observed relations are in tension with the predictions of numerical simulations \citep{Fedeli2012,Duffy2008}. The most prominent example of such tension is the cluster Abell 1689 \citep[][and references therein]{Broadhurst2005,Peng2009}, with a concentration parameter up to a factor of three higher than predicted. In a follow-up study, \citet{Broadhurst2008} compared a larger sample of five clusters to the prediction from $\Lambda$CDM and found the derived c-M relation in tension with the theoretical expectations \citep[see also][]{Broadhurst2008a,Zitrin2010a,Meneghetti2011}. Possible explanations for these discrepancies include a selection-bias of the cluster sample since these clusters were known strong lenses, paired with the assumption of spherical symmetry for these systems \citep{Hennawi2007,Meneghetti2010}. Moreover, the influence of baryons on the cluster core \citep{Fedeli2012,Killedar2012} and even the effects of early dark energy \citep{Fedeli2007,Sadeh2008,Francis2009,Grossi2009} have been introduced as possible explanations. 
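For reference, theoretical c-M relations are typically published as power-law fits in mass and redshift. The sketch below evaluates one widely used example, the full-sample relation of \citet{Duffy2008} for $\Delta=200\mathrm{c}$; the coefficients are quoted from that paper and should be checked against the original before any quantitative use:

```python
def c200_duffy(m200_msun_h, z):
    """c-M relation of Duffy et al. (2008), full sample, 200x critical:
    c = A * (M / 2e12 Msun/h)^B * (1 + z)^C."""
    A, B, C = 5.71, -0.084, -0.47
    return A * (m200_msun_h / 2.0e12) ** B * (1.0 + z) ** C

# A CLASH-like cluster: M200 ~ 1e15 Msun/h at the median sample redshift ~0.35.
print(f"{c200_duffy(1.0e15, 0.35):.1f}")  # ~2.9
```

Note the mild negative mass dependence: a $10^{14}\,M_{\odot}/h$ halo at the same redshift is predicted to be more concentrated than a $10^{15}\,M_{\odot}/h$ one.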
Ultimately, a new set of high-quality observations of an unbiased ensemble of clusters was needed to answer the question of whether observed galaxy clusters are indeed in tension with our cosmological standard model. The Cluster Lensing And Supernova Survey with Hubble (CLASH) \citep{Postman2012} is a multi-cycle treasury program, using 524 Hubble Space Telescope (HST) orbits to target 25 galaxy clusters, largely drawn from the Abell and MACS cluster catalogs \citep{Abell1958,Abell1989,Ebeling2001,Ebeling2007,Ebeling2010}. Twenty clusters were specifically selected by their largely unperturbed X-ray morphology with the goal of representing a sample of clusters with regular, unbiased density profiles that allow for an optimal comparison to models of cosmological structure formation. As reported in \citet{Postman2012}, all clusters of the sample are fairly X-ray luminous with X-ray temperatures $T_{\mathrm{x}}\geq 5$ keV and show a smooth morphology in their X-ray surface brightness. For all systems the separation between the brightest cluster galaxy (BCG) and the X-ray luminosity centroid is $<20$~kpc. An overview of the basic properties of the sample can be found in Table~\ref{CLASH_CLUSTERS}. In the following we will use these X-ray selected clusters to derive the observed c-M relation for CLASH clusters based on weak and strong lensing and perform a thorough comparison to the theoretical expectation from numerical simulations. This study has two companion papers: the weak-lensing and magnification analysis of CLASH clusters by \citet{Umetsu2014}, and the detailed characterization of numerical simulations of CLASH clusters by \citet{Meneghetti2014}. This paper is structured as follows: Sec.~\ref{Theory} provides a basic introduction to gravitational lensing and introduces the method used to recover the dark matter distribution from the observational data.
The respective input data is described in Sec.~\ref{Data} and the resulting mass maps and density profiles of the CLASH clusters are presented in Sec.~\ref{Results}. We interpret our results by a detailed comparison to theoretical c-M relations from the literature in Sec.~\ref{GMc} and use our own tailored set of simulations to derive a CLASH-like c-M relation in Sec.~\ref{SMc}. We conclude in Sec.~\ref{Concl}. Throughout this work we assume a flat cosmological model similar to a WMAP7 cosmology \citep{Komatsu2011} with $\Omega_{\textrm{m}} = 0.27$, $\Omega_{\Lambda} = 0.73$ and a Hubble constant of $h = 0.7$. For the redshift range of our cluster sample this translates to physical distance scales of 3.156 -- 7.897 kpc/\arcsec.

\begin{deluxetable*}{lcccccc}
\tablecaption{The CLASH X-ray selected cluster sample\label{CLASH_CLUSTERS}}
\tablewidth{0pt}
\tablehead{
\colhead{Name} & \colhead{z} & \colhead{R.A.} & \colhead{DEC} & \colhead{$kT_{\mathrm{X}}$\tablenotemark{a}} & \colhead{$L_{\mathrm{bol}}$\tablenotemark{a}} & \colhead{$\arcsec\rightarrow$kpc\tablenotemark{b}} \\
& & \colhead{[deg/J2000]} & \colhead{[deg/J2000]} & \colhead{[keV]} & \colhead{[$10^{44}$erg/s]} &
}
\startdata
Abell~383& 0.188& 42.014090& -3.5292641& 6.5& 6.7& 3.156\\
Abell~209& 0.206& 22.968952& -13.611272& 7.3& 12.7& 3.392\\
Abell~1423& 0.213& 179.32234& 33.610973& 7.1& 7.8& 3.482\\
Abell~2261& 0.225& 260.61336& 32.132465& 7.6& 18.0& 3.632\\
RXJ2129+0005& 0.234& 322.41649& 0.0892232& 5.8& 11.4& 3.742\\
Abell~611& 0.288& 120.23674& 36.056565& 7.9& 11.7& 4.357\\
MS2137-2353& 0.313& 325.06313& -23.661136& 5.9& 9.9& 4.617\\
RXCJ2248-4431& 0.348& 342.18322& -44.530908& 12.4& 69.5& 4.959\\
MACSJ1115+0129& 0.352& 168.96627& 1.4986116& 8.0& 21.1& 4.996\\
MACSJ1931-26& 0.352& 292.95608& -26.575857& 6.7& 20.9& 4.996\\
RXJ1532.8+3021& 0.363& 233.22410& 30.349844& 5.5& 20.5& 4.931\\
MACSJ1720+3536& 0.391& 260.06980& 35.607266& 6.6& 13.3& 5.343\\
MACSJ0429-02& 0.399& 67.400028& -2.8852066& 6.0& 11.2& 5.411\\
MACSJ1206-08& 0.439& 181.55065& -8.8009395& 10.8& 43.0& 5.732\\
MACSJ0329-02& 0.450& 52.423199& -2.1962279& 8.0& 17.0& 5.815\\
RXJ1347-1145& 0.451& 206.87756& -11.752610& 15.5& 90.8& 5.822\\
MACSJ1311-03& 0.494& 197.75751& -3.1777029& 5.9& 9.4& 6.128\\
MACSJ1423+24& 0.545& 215.94949& 24.078459& 6.5& 14.5& 6.455\\
MACSJ0744+39& 0.686& 116.22000& 39.457408& 8.9& 29.1& 7.186\\
CLJ1226+3332& 0.890& 186.74270& 33.546834& 13.8& 34.4& 7.897\\
\enddata
\tablenotetext{a}{From \citet{Postman2012} and references therein.}
\tablenotetext{b}{Conversion factor to convert arcseconds to kpc at the cluster's redshift and given the cosmological background model.}
\end{deluxetable*}

%%%%%%%%%%%%%%%%%%%%%%%%%%%% SECTION2: LENSING THEORY AND SaWLens ###########################################################

\section{Conclusions}
\label{Concl}

The HST multi-cycle treasury program CLASH was in part designed to shed light on the dark matter density profile of galaxy clusters by combining the enormous resolving power of the HST with wide-field Subaru imaging. The CLASH X-ray selected sample of galaxy clusters was specifically selected to have a mostly undisturbed X-ray morphology, suggesting that this sub-sample represents an undisturbed and unbiased set of clusters in terms of their density profile. This choice was made since former studies of lensing clusters with exquisite data quality were inconsistent with the predictions of $\Lambda$CDM, and selection effects were thought to be a possible cause of this disagreement. In this work we applied advanced lensing reconstruction techniques to this CLASH data set. Our reconstruction combines weak and strong lensing to fully exploit the lensing data provided by the CLASH program. With the help of adaptively refined grids, we achieve a non-parametric reconstruction of the lensing potential over a wide range of scales, from the innermost strong-lensing core of the system at scales $\sim 10$\,kpc out to the virial radius at $\sim 2$ Mpc.
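The kpc/arcsec scale factors listed in Table~\ref{CLASH_CLUSTERS} follow from the angular diameter distance in the adopted flat cosmology ($\Omega_{\textrm{m}}=0.27$, $\Omega_{\Lambda}=0.73$, $h=0.7$); a minimal numerical check, with no external libraries assumed:

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.27, 0.73

def kpc_per_arcsec(z, n=10_000):
    """Angular-diameter-distance scale in a flat LCDM cosmology,
    using a midpoint-rule comoving-distance integral."""
    dz = z / n
    dc = sum(
        dz / math.sqrt(OM * (1.0 + (i + 0.5) * dz) ** 3 + OL) for i in range(n)
    ) * C_KMS / H0                                   # comoving distance [Mpc]
    da = dc / (1.0 + z)                              # angular diameter distance [Mpc]
    return da * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsec

print(f"{kpc_per_arcsec(0.188):.3f}")  # Abell 383: table lists 3.156
print(f"{kpc_per_arcsec(0.890):.3f}")  # CLJ1226:  close to the table value 7.897
```

Both end points of the quoted 3.156 -- 7.897 kpc/arcsec range are recovered to sub-percent precision.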
This is the first time that such a multi-scale reconstruction using weak and strong lensing has been performed for such a large sample of clusters. Fits to the surface-mass density profiles provide masses and concentrations for 19 massive galaxy clusters. In order to have full control over the selection function of halos and in order to avoid possible biases introduced by the tri-axial structure of high-mass halos, we also derive c-M relations from a new, unique set of simulated halos. These simulations allow us to make specific choices in our selection and analysis, providing a much closer match to real observations. While simulations are usually analyzed in 3D, we perform a purely 2D analysis in projection, as this is the only option for the observed lensing data. We apply different selection functions to the simulations, including a selection based on the X-ray morphology of realistic X-ray images of our hydro-simulations. This sample obeys the selection criteria of CLASH. This is of great importance since the selection of a cluster from a numerical simulation based on X-ray regularity, as in the case of CLASH, relates to but is not identical to a selection based on relaxation parameters only. The details of this selection function are studied further in another CLASH paper by \citet{Meneghetti2014}. For the X-ray selected 2D sample we find excellent agreement between simulations and observations. Observed concentrations are on average only 4\% lower than in simulations, and we find no statistical indication for tension between the simulated and observed data set. This detailed comparison between observations and simulations in 2D, with full consideration of the underlying selection function, is unique and gives us great confidence in the results, which are a confirmation of the $\Lambda$CDM paradigm, at least in the context of a c-M relation of cluster-sized halos.
From fitting a c-M relation to the CLASH data directly we find our concentrations distributed around a central value of $c_{200\textrm{c}}\simeq3.7$ with a mild negative trend in mass at the $1\sigma$-level. This c-M relation derived from the CLASH data directly agrees with the c-M relation of simulated X-ray selected halos analyzed in projection at the 90\% confidence level. Our comprehensive likelihood analysis shows that we are insensitive to any possible redshift dependence of the c-M relation. A larger leverage in redshift would be needed to probe this trend which is suggested by numerical simulations. We want to highlight the complementary work on CLASH weak lensing and magnification measurements by \citet{Umetsu2014} and the full characterization of the CLASH simulations by \citet{Meneghetti2014}. However, due to the exquisite quality of the lensing data used for this analysis, further and more advanced studies will be possible. Ongoing analyses include additional functional forms describing the dark matter distribution, like the generalized NFW or Einasto profiles. Particularly the analysis of inner slopes of the CLASH clusters and the intrinsic scatter of c-M relations derived from these profiles will give interesting insights into the physics of dark matter and the role of baryons on cluster scales. Ultimately, one would like to go away from 1D, radial density profiles and describe the full morphology and shape of the dark matter distributions in observations and simulations. Such techniques might indeed prove more powerful in e.g.~distinguishing different particle models of dark matter. The CLASH clusters are clearly the ideal data set to perform such analyses. | 14 | 4 | 1404.1376 |
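The mass-concentration numbers above translate into physical profile parameters through the standard NFW relations. As a hedged illustration (the NFW form and the fiducial critical density $\rho_{\rm crit}\simeq1.36\times10^{11}\,M_\sun\,{\rm Mpc}^{-3}$ for $h\simeq0.7$ are assumptions here, not quantities taken from the paper), a halo with $M_{200c}=10^{15}\,M_\sun$ and the quoted $c_{200c}\simeq3.7$ has $r_{200}\simeq2.1$ Mpc, consistent with the $\sim2$ Mpc virial-radius scale mentioned in the text:

```python
import numpy as np

RHO_CRIT = 1.36e11   # critical density today, Msun/Mpc^3 (assumes h ~ 0.7)

def nfw_from_c_m(m200, c200):
    """NFW parameters from (M200c, c200c): r200 encloses a mean density
    of 200 rho_crit, r_s = r200 / c, and rho_s follows from requiring
    M(<r200) = M200."""
    r200 = (3.0 * m200 / (800.0 * np.pi * RHO_CRIT)) ** (1.0 / 3.0)
    r_s = r200 / c200
    mu = np.log(1.0 + c200) - c200 / (1.0 + c200)
    rho_s = m200 / (4.0 * np.pi * r_s**3 * mu)
    return r200, r_s, rho_s

def m_enclosed(r, r_s, rho_s):
    """Mass enclosed within r for an NFW profile."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
```

By construction `m_enclosed(r200, r_s, rho_s)` recovers the input $M_{200c}$, which provides a quick self-consistency check of the conversion.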
1404 | 1404.3561.txt | MCG-6-30-15, at a distance of 37~Mpc ($z=0.008$), is the archetypical Seyfert 1 galaxy showing very broad Fe K$\alpha$ emission. We present results from a joint {\it NuSTAR} and {\it XMM-Newton} observational campaign that, for the first time, allows a sensitive, time-resolved spectral analysis from 0.35 keV up to 80 keV. The strong variability of the source is best explained in terms of intrinsic X-ray flux variations and in the context of the light bending model: the primary, variable emission is reprocessed by the accretion disk, which produces secondary, less variable, reflected emission. The broad Fe K$\alpha$ profile is, as usual for this source, well explained by relativistic effects occurring in the innermost regions of the accretion disk around a rapidly rotating black hole. We also discuss the alternative model in which the broadening of the Fe K$\alpha$ is due to the complex nature of the circumnuclear absorbing structure. Even if this model cannot be ruled out, it is disfavored on statistical grounds. We also detected an occultation event likely caused by BLR clouds crossing the line of sight. | \begin{figure*} \centering \includegraphics[width=0.78\columnwidth, angle=-90]{nustar_back.eps} \includegraphics[width=0.78\columnwidth, angle=-90]{pin_back.eps} \caption{\label{backgrounds} {\it Left:} source (in black) + background (in red) spectra from the {\it NuSTAR} FPMA in the 3-80 keV band. {\it Right:} archival {\it Suzaku} HXD-PIN source (in black) + background (in red) spectra in the 15-70 keV band. The source is at the same 15-70 keV flux level in both observations, within a few per cent.} \end{figure*} The bright Seyfert 1 galaxy MCG-6-30-15 ($z$=0.00775) is the first source in which a broad iron K$\alpha$ line was detected with {\it ASCA }\citep{tanaka95}, showing a red tail whose low energy extension is an indicator of the inner radius of the accretion disk and thus of the black hole spin \citep{iwa96,ify99}. 
The iron K$\alpha$ line is very prominent in this source, since the iron abundance appears to be significantly higher than solar \citep{fab02}. Due to its spectroscopic features, MCG-6-30-15 is one of the most observed AGNs in X-rays. It was observed several times with {\it ASCA} \citep{shih02, mif03}, {\it BeppoSAX} \citep{gmmo99}, {\it RXTE} \citep{lee99, ve01}, {\it XMM-Newton} \citep{wilms01,fab02, fv03, vf04, br06} and {\it Suzaku} \citep{miniutti07, noda11}; multi-observatory data have also been analyzed by \citet{mtr08} and \citet{chfa11}. The soft X-ray spectrum of this source has a complex structure due to warm absorption \citep{otani96}. It has also been studied at high resolution with the {\it Chandra} HETGs \citep{lee01, ylf05} and {\it XMM-Newton} RGS \citep{bsk01}. \citet{tfv03, tflv04} confirmed the presence of dusty warm absorbers, in agreement with optical observations \citep{rey97b}. The extreme variability of MCG-6-30-15 in the X-rays has often been explained with a scenario where two components play the major role: a highly variable power law continuum (with an almost constant photon index) and a much less variable reflection spectrum from the innermost region of the accretion disk (within a few gravitational radii) \citep{shih02, fv03, tum03, miniutti07, parker13}. The light-bending model \citep{fv03, min03, mf04}, a generalization of earlier work \citep{mama96, rb97}, attributes the change of the power law flux to the variation of the location of the X-ray emitting source close to the central black hole. In this scenario much of the radiation is bent down onto the disk and the observed variation in the reflection intensity is small because a large fraction of photons does not escape to infinity but is instead captured by the black hole.
The detection of a strong reflection hump, peaking at $\sim$30 keV, in previous high energy observations of MCG-6-30-15 by {\it BeppoSAX} \citep{gmmo99}, {\it RXTE} \citep{lfr00} and {\it Suzaku} \citep{miniutti07} is consistent with this two-component model. An alternative absorption-dominated model has also been used to explain the extreme behavior of MCG-6-30-15 \citep{mtr08, mtr09}. In this model the red wing of the line is not due to strong relativistic effects but to the complex structure of absorbers along the line of sight \citep{miller07, turner07}. These complex absorbing structures (with column densities in the $10^{22}$--$10^{24}$ cm$^{-2}$ range) can produce an apparent broadening of the Fe K$\alpha$ emission line by partially covering the nuclear X-ray source. The covering factor of some of the obscuring media may need to be linked to variations in the nuclear flux, as already shown in the past for the case of MCG-6-30-15 \citep{mtr08}. This interpretation ascribes the constancy of the amplitude of the iron line to the greater distance of the emitting material from the variable X-ray source, while the hard flux excess above $\sim$20 keV is interpreted as originating from Compton-thick clouds at or within the Broad Line Region, partially covering the X-ray nuclear source \citep{tatum13}. We present results from a simultaneous {\it NuSTAR} and {\it XMM-Newton} observational campaign performed in January 2013. Taking advantage of the unique {\it NuSTAR} high-energy sensitivity, we simultaneously cover the 0.35--80 keV energy band with unprecedented signal to noise ratio. The primary focus of this paper is the spectral variability of this source, and understanding how the spectral components vary. We discuss the results in the context of the two scenarios described above. The paper is structured as follows: in Sect. 2 we discuss the joint {\it NuSTAR} and \textit{XMM-Newton} data reduction, in Sect. 
3, 4 and 5 the spectral analysis and best fit parameters are presented and discussed within the reflection and absorption scenarios, respectively. Sect. 6 is devoted to the spectral variability by occultation from Broad Line Region clouds and Sect. 7 to the flux-flux plots.

We present results from a joint {\it NuSTAR} and {\it XMM-Newton} observational campaign of the bright Sy 1 galaxy MCG-6-30-15 and investigate the spectral variability of the source via a detailed time-resolved analysis. The reflection scenario, where the primary variable power law continuum emission is reprocessed by the accretion disk, reproduces the data better than a scenario involving partial absorption by intervening structures. The former is preferred to the latter on statistical grounds, with a reduced $\chi^2$ of 1.10 versus 1.15, for about the same degrees of freedom ($\sim 4000$). Our results can be summarized as follows: \begin{itemize} \item in the reflection scenario, the spectral variability can be ascribed either to a change of the ionization state of the disk or to an intrinsic change in the slope of the nuclear continuum, which is strongly favored on physical grounds (a variation of the photon index of the primary power law within $\Delta\Gamma\simeq0.3$). In the latter case the source is well described with gravitational light bending in the innermost regions of the accretion disk during the first part of the 2013 observational campaign and with intrinsic variations of the X-ray source in the latter part; this is in contrast to previous analyses \citep{fv03,lfm07,miniutti07}; \item the absorption model cannot account for all spectral variability if changes occur in the covering factor only. This is different from the behavior found in previous multi-epoch broad band analyses.
A variation in the column density of the material along the line of sight is also needed, ranging between $10^{22}-10^{23}$ cm$^{-2}$; \item we detected an occultation by a BLR cloud (N$_{\rm H}=2.2^{+0.8}_{-0.5}\times 10^{22}$ cm$^{-2}$) crossing the line of sight at a distance of $10^4$ R$_{\rm G}$, with a velocity of $v\simeq 3\times 10^3$ km s$^{-1}$ and a density of $n\simeq 7\times10^{9}$ cm$^{-3}$. This eclipsing event lasted for about 20 ks; \item using flux-flux plots we find strong correlations between {\it XMM} and {\it NuSTAR} energy bands, with an offset indicating a constant component at high energies. We identify significant variability uncorrelated with the source flux, manifested as significant scatter around the best-fit line in the flux-flux plots, too strong to be due to noise. We find that this variability could be caused either by pivoting of the primary power law or by changes in the absorption at low energies, or a combination of the two. \end{itemize} | 14 | 4 | 1404.3561 |
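The eclipse parameters quoted above are mutually consistent under the usual order-of-magnitude assumptions: a Keplerian cloud at $r=10^4\,R_{\rm G}$ has speed $v=c/\sqrt{r/R_{\rm G}}$, the cloud size follows from $v$ times the crossing time, and the density from $n\sim N_{\rm H}/{\rm size}$. The sketch below is that back-of-the-envelope arithmetic only, not the paper's actual analysis:

```python
# Order-of-magnitude consistency check of the BLR eclipse parameters:
# Keplerian speed at r = 1e4 R_G, cloud size from a 20 ks crossing time,
# and density n = N_H / size.  Illustrative, not the paper's calculation.
C = 3.0e10                            # speed of light, cm/s

def keplerian_speed(r_over_rg):
    # v = c / sqrt(r / R_G) for a circular Keplerian orbit
    return C / r_over_rg ** 0.5

v = keplerian_speed(1.0e4)            # ~3e8 cm/s, i.e. ~3e3 km/s
size = v * 2.0e4                      # cloud size ~ v * (20 ks), in cm
n = 2.2e22 / size                     # density from the quoted N_H, cm^-3
```

The Keplerian speed at $10^4\,R_{\rm G}$ is $c/100=3\times10^3$ km s$^{-1}$, matching the quoted velocity, and the inferred density agrees with the quoted $7\times10^9$ cm$^{-3}$ to within a factor of $\sim2$, as expected given the neglected geometry.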
1404 | 1404.5306_arXiv.txt | We address the problem of evaluating the power spectrum of the velocity field of the ICM using only information on the plasma density fluctuations, which can be measured today by {\it Chandra} and {\it XMM-Newton} observatories. We argue that for relaxed clusters there is a linear relation between the rms density and velocity fluctuations across a range of scales, from the largest ones, where motions are dominated by buoyancy, down to small, turbulent scales: $(\delta\rho_k/\rho)^2 = \eta_1^2 (V_{1,k}/c_s)^2$, where $\delta\rho_k/\rho$ is the spectral amplitude of the density perturbations at wave number $k$, $V_{1,k}^2=V_k^2/3$ is the mean square component of the velocity field, $c_s$ is the sound speed, and $\eta_1$ is a dimensionless constant of order unity. Using cosmological simulations of relaxed galaxy clusters, we calibrate this relation and find $\eta_1\approx 1 \pm 0.3$. We argue that this value is set at large scales by buoyancy physics, while at small scales the density and velocity power spectra are proportional because the former are a passive scalar advected by the latter. This opens an interesting possibility to use gas density power spectra as a proxy for the velocity power spectra in relaxed clusters, across a wide range of scales. | Spectacular data accumulated by X-ray observatories on the nearest X-ray brightest clusters of galaxies allow us to probe inhomogeneities in the intracluster medium (ICM) over a broad range of spatial scales. These clusters typically show $\sim 5-10$\% density fluctuations on scales from a few tens to a few hundred kpc \citep[][see also Schuecker et al. 2004 for earlier work]{Chu12,San12}. At the same time, the dynamics of the ICM remain largely unknown. 
For relaxed clusters, numerical simulations predict predominantly subsonic motions of the ICM on scales from $\sim$Mpc down to a few tens of kpc with approximately Kolmogorov power spectra (PS) of the velocity field \citep[see, e.g.,][]{Nor99,Dol05,Nag07b,Iap11,Vaz11}. The relatively low energy resolution ($\sim 130-150$ eV) of current X-ray CCD-type detectors precludes accurate measurements of gas velocities \citep[see, e.g.,][]{Zhu13b,Tam14}. The gain in resolution can be achieved in cool cores of clusters by using grating spectrometers. Such observations provide mostly upper limits on the gas velocities $\sim$ a few hundred km/s \citep[e.g.,][]{Wer09,San13,Chu04}. One can also use Faraday Rotation measurements to probe the ICM turbulence indirectly \citep[e.g.,][]{Vog03}. The future Japanese-US X-ray observatory {\it Astro-H} \citep[see][launch in 2015]{2010SPIE.7732E..27T} should provide high-resolution ($\sim 4-7$ eV) X-ray spectra, allowing one for the first time to measure gas velocities directly. However, it will not be trivial to extract the PS of the velocity field \citep[see methods developed in][]{Zhu12}. The full power of the methods can only be used once the next generation of X-ray observatories, such as {\it SMART-X}\footnote{http://smart-x.cfa.harvard.edu/index.html} and {\it Athena+}\footnote{http://athena2.irap.omp.eu/}, are operating. In the meantime, are there ways to probe the velocity PS with existing and near-term data? In this Letter, we argue that, for subsonic motions in relaxed clusters, there is a linear relation between the PS of density fluctuations derived from X-ray images, and the velocity PS. Using analytical description of a passive scalar advected by fluid motions in stratified medium, we show that the linearity holds from large scales, where motions are dominated by buoyancy, down to small, turbulent scales, with the same coefficient of proportionality. 
In the turbulent regime, a linear dependence was found by \citet{Gas13} in simulations of a massive cluster with solenoidal forcing. It is interesting that similar situations arise in the context of the solar wind, the Earth's atmosphere, and the ISM \citep[see, e.g.][]{1995ApJ...443..209A}. \citet{Chu12} list the following contributions to measured density variations in clusters: (1) perturbations of the gravitational potential; (2) deviations from the oversimplified model profiles; (3) entropy fluctuations caused by infalling low-entropy gas or by gas advection; (4) pressure variations associated with gas motions and sound waves; (5) metallicity variations; (6) the presence of non-thermal and spatially variable components. Cosmological simulations of relaxed clusters (Section 3), which include effects (1)--(4), illustrate that the predicted linearity holds approximately in the case of “natural” cosmological driving. In the companion paper (Gaspari et al., 2014, hereafter G14), high-resolution simulations in a static gravitational potential with solenoidal forcing of turbulence are used to investigate items (3) and (4) and the role of isotropic thermal conduction.
We argue that \begin{itemize} \item the rms of density and velocity fluctuations are linearly related across a broad range of scales in both buoyancy-dominated and turbulent regimes; \item the constant of proportionality between them is set at large scales by gravity-wave physics and remains approximately the same in the non-linear turbulent regime; \item cosmological simulations of relaxed clusters give a proportionality coefficient $\eta_1 \sim 1\pm 0.3$ between the amplitude of the density fluctuations and the rms component of the flow velocity; \end{itemize} It is an interesting conclusion that, if the energy-injection scales are large enough (e.g., $\sim10^2$~kpc for merger-driven turbulence), stratification leads to anisotropy ($V_\perp\gg V_r$, $k_\perp\ll k_r$), whereas turbulence driven at small scales (e.g., $\sim 10$~kpc, as in the AGN-driven case) will be isotropic---these are the ${\rm Fr}\ll1$ and ${\rm Fr}\sim1$ cases discussed in Section 2.1. Indeed, in cosmological simulations, where turbulence is primarily driven by mergers, we see perpendicular velocities slightly larger than the radial ones in the central 500 kpc. Admittedly, our simulations suffer from insufficient dynamic range and do not include all relevant physical processes. For instance, thermal conduction could erase some of the temperature/density fluctuations and break the relation \exref{rhoV_turb}. Some of these effects are considered in the companion paper (G14), where a series of high-resolution hydrodynamic simulations is carried out, with varying ${\rm Ma}$ and isotropic conductivity. It should be possible to verify the relation \exref{rhoV_turb} using future direct velocity measurements with {\it Astro-H} (combining with current observations). Strong deviations from $\eta\sim1$ would suggest interesting microphysics or the dominance of other sources of density fluctuations. 
In conclusion we have shown that the analysis of SB fluctuations in X-ray images offers a novel way to estimate the velocity PS in relaxed galaxy clusters. In general, proportionality between the density and velocity amplitudes for subsonic motions is probably a generic feature of small perturbations in stratified atmospheres. | 14 | 4 | 1404.5306 |
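Since the proposed proxy is linear, $(\delta\rho_k/\rho)^2=\eta_1^2(V_{1,k}/c_s)^2$, converting a measured density-fluctuation amplitude into a velocity estimate is a one-line operation once the ICM sound speed is known. A minimal sketch, assuming a fiducial $kT=5$ keV, $\mu=0.61$ plasma and $\eta_1=1$ (illustrative numbers, not values taken from the paper):

```python
import math

# Adiabatic index, mean molecular weight, proton rest mass in keV
# (so that c_s comes out in units of c before rescaling to km/s).
GAMMA, MU, MP_KEV = 5.0 / 3.0, 0.61, 938272.0

def sound_speed_kms(kT_keV):
    """Adiabatic sound speed c_s = sqrt(gamma kT / (mu m_p)) in km/s."""
    return 3.0e5 * math.sqrt(GAMMA * kT_keV / (MU * MP_KEV))

def velocity_from_density(amp_k, kT_keV, eta1=1.0):
    """One-component rms velocity V_{1,k} = (delta rho/rho)_k * c_s / eta1."""
    return amp_k * sound_speed_kms(kT_keV) / eta1
```

For a typical observed amplitude $\delta\rho_k/\rho\simeq0.08$ this gives $c_s\simeq1145$ km s$^{-1}$ and $V_{1,k}\simeq90$ km s$^{-1}$, illustrating the scale of motions the proxy would imply; the $\sim30\%$ spread in $\eta_1$ propagates directly into the velocity estimate.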
1404 | 1404.0009_arXiv.txt | \noindent Following the ground-breaking measurement of the tensor-to-scalar ratio $r = 0.20^{+0.07}_{- 0.05}$ by the BICEP2 collaboration, we perform a statistical analysis of a model that combines Radiative Inflation with Dark Energy (RIDE) based on the $M^2 |\Phi|^2 \ln \left( |\Phi|^2/\Lambda^2 \right)$ potential and compare its predictions to those based on the traditional chaotic inflation $M^2|\Phi|^2$ potential. We find a best-fit value in the RIDE model of $r=0.18$ as compared to $r=0.17$ in the chaotic model, with the spectral index being $n_S=0.96$ in both models. | Introduction} Recently the BICEP2 collaboration~\cite{Ade:2014xna} has reported a measurement of the tensor-to-scalar ratio $r = 0.20^{+0.07}_{- 0.05}$ which provides evidence for large-field (e.g.\ chaotic) inflationary models, as opposed to small-field (e.g.\ hybrid) inflationary models (for a review of inflationary models see e.g.~\cite{Lyth:1998xn}). BICEP2 provides the first direct measurement of the large scale $B$-mode polarisation power spectrum. This receives a contribution only from tensor perturbations and, therefore, the detection of a non-vanishing $B$-mode polarisation is direct evidence for the presence of tensor perturbations. More indirectly, temperature anisotropies can also potentially give evidence of tensor perturbations. In this way, prior to BICEP2, the {\em Planck} collaboration had placed an upper bound $r < 0.11\,(95\%~\rm{C.L.})$ assuming no running of the scalar spectral index. The origin of this disagreement could either be some statistical or systematic effect, or perhaps the first evidence for a running spectral index that would relax the {\em Planck} limit to $r < 0.26\,(95\%~\rm{C.L.})$~\cite{Ade:2013zuv}, which is well compatible with the BICEP2 measurement. The BICEP2 results~\cite{Ade:2014xna} have, within a short time, triggered a series of papers on model updates in the light of the new measurement. 
Among the models investigated are several scenarios of chaotic inflation~\cite{Harigaya:2014sua,Harigaya:2014qza,Lee:2014spa}, broken primordial power spectrum models~\cite{Hazra:2014aea}, gravity-related scenarios~\cite{Channuie:2013xoa,Joergensen:2014rya,Pallis:2014dma,Bamba:2014jia,Bezrukov:2014nza,Ferrara:2014ima,Kidani:2014pka,Kallosh:2014qta,Hamaguchi:2014mza}, Higgs-related inflation~\cite{Nakayama:2014koa,Cook:2014dga,Germani:2014hqa,Bezrukov:2014bra,Costa:2014lta,Fairbairn:2014nxa,Hamada:2014iga}, scenarios related to supersymmetry~\cite{Dimopoulos:2014boa,Craig:2014rta,Ellis:2014rxa,Lyth:2014yya}, curvaton model~\cite{Byrnes:2014xua}, or natural inflation~\cite{Freese:2014nla,Czerny:2014qqa}. Furthermore, several general analyses of collections of simple models have been presented~\cite{Kobayashi:2014jga,Okada:2014lxa}, general statements about the properties of the inflationary potential are available~\cite{Choudhury:2014kma,Choudhury:2013iaa,Hotchkiss:2011gz,Gong:2014cqa} as well as studies of the consistency of the {\em Planck} and BICEP2 data sets~\cite{McDonald:2014kia,Zhang:2014dxk}, and a general discussion about what we can learn from the exciting new results~\cite{Dodelson:2014exa,Cheng:2014ota,Cheng:2014bma}. None of the above studied models of inflation is able to simultaneously account for both inflation in the early Universe and the preponderance of Dark Energy in the Universe at the current epoch~\cite{Kinney:2009vz,Copeland:2006wr}. Motivated by the desire to account for both phenomena, some time ago we proposed a model of Radiative Inflation and Dark Energy (RIDE)~\cite{DiBari:2010wg}. Earlier attempts~\cite{Rosenfeld:2005mt}, based on the ``schizon model''~\cite{Hill:1988bu,Frieman:1991tu,Frieman:1995pm}, were formulated in the framework of $\varphi^4$ chaotic inflation, which was subsequently essentially excluded by WMAP 5-year data~\cite{Hinshaw:2008kr}. 
The question we considered was whether the nice feature of such models, namely that they naturally generate a pseudo Nambu-Goldstone boson (PNGB), which receives a potential via gravitational effects~\cite{Kallosh:1995hi} and can then be used as quintessence field, could be implemented within a viable model of inflation. In this note, following the BICEP2 measurement, we revisit the RIDE model~\cite{DiBari:2010wg} based on the idea of a massive complex scalar field $\Phi$ whose mass squared is driven negative close to the Planck scale $M_P$ by radiative effects, leading to a potential of the form\footnote{This potential belongs to a class of scenarios recently studied in a systematic way in \cite{Martin:2013tda}.} $M^2 |\Phi|^2 \ln \left(|\Phi|^2/\Lambda^2 \right)$ which may be compared to the traditional chaotic $M^2|\Phi|^2$ potential.\footnote{We emphasise that both RIDE and chaotic $\varphi^2$ inflation share the need to forbid a possible quartic term~$(\Phi^\dagger \Phi)^2$, or at least suppress it sufficiently. As this cannot be achieved at the level of an effective theory, it is necessary to resort to a concrete model realisation. This fact has already been commented in~\cite{DiBari:2010wg}, and an example of such a framework was also presented where the absence of the quartic term was achieved in a supersymmetric context where no $D$-terms arise. We refer the inclined reader to~\cite{DiBari:2010wg} for a more detailed discussion.} The potential is invariant under a global $U(1)$ symmetry, and the absolute value of the complex field $\Phi$ plays the role of the inflaton field which rolls slowly down a simple potential that resembles chaotic inflation for high field values. Similar to chaotic inflation, the RIDE model leads to predictions for inflation which were fully consistent with WMAP 7-year data~\cite{Komatsu:2010fb}, but potentially threatened later on by {\em Planck}~\cite{Ade:2013zuv}. 
However, unlike the chaotic model, at the end of inflation the inflaton field settles at a non-trivial minimum, thereby breaking the global $U(1)$ and generating a PNGB which receives a small mass via gravitational effects. The resulting effective potential for the PNGB is of a form which is suitable for a quintessence field. Thus the PNGB can explain the existence of Dark Energy. Here we perform a statistical analysis of the RIDE model and compare its predictions to those of the chaotic inflation model. We find a best-fit value in the RIDE model of $r=0.18$ as compared to $r=0.17$ in the chaotic model, with the spectral index being $n_S=0.96$ in both models. The layout of the remainder of this paper is as follows. After reviewing the RIDE model in Sec.~\ref{sec:Model}, we perform a statistical analysis of inflation within the RIDE model in Sec.~\ref{sec:Inflation} and compare its predictions to those of chaotic inflation. We conclude in Sec.~\ref{sec:Conclusions}.

Conclusions} We have revisited the RIDE model based on radiative symmetry breaking that combines inflation with Dark Energy. We have performed a $\chi^2$ analysis for the RIDE model parameters and have compared the predictions of RIDE vs.\ the chaotic $\varphi^2$ inflation model for the spectral index $n_S$ and tensor-to-scalar ratio $r$. The RIDE model gives a slightly better fit to the data than the chaotic $\varphi^2$ inflation model. To be precise, we find a best-fit value in the RIDE model of $r=0.18$ as compared to $r=0.17$ in the chaotic model, with the spectral index being $n_S=0.96$ in both models. In addition, RIDE has the \emph{additional advantage} that it accounts for the Dark Energy of the universe via the PNGB quintessence field generated at the end of inflation. | 14 | 4 | 1404.0009 |
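For the chaotic $\varphi^2$ benchmark, the slow-roll predictions quoted above can be reproduced numerically from the potential alone (the mass $m$ drops out of $n_S$ and $r$). The sketch below is a generic single-field slow-roll calculation in units $M_P=1$, not the paper's $\chi^2$ machinery; the RIDE potential $M^2\varphi^2\ln(\varphi^2/\Lambda^2)$ could be supplied in the same way, with care taken near its non-trivial minimum:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def slow_roll_ns_r(V, dV, d2V, n_efolds, phi_max=100.0):
    """Slow-roll n_S and r for a single-field potential (units M_P = 1):
    eps = (V'/V)^2 / 2, eta = V''/V; inflation ends where eps = 1 and
    phi_* is fixed by N = int_{phi_end}^{phi_*} (V/V') dphi = n_efolds."""
    eps = lambda p: 0.5 * (dV(p) / V(p)) ** 2
    phi_end = brentq(lambda p: eps(p) - 1.0, 1e-6, phi_max)
    n_of = lambda p: quad(lambda x: V(x) / dV(x), phi_end, p)[0]
    phi_star = brentq(lambda p: n_of(p) - n_efolds, phi_end, phi_max)
    e, eta = eps(phi_star), d2V(phi_star) / V(phi_star)
    return 1.0 - 6.0 * e + 2.0 * eta, 16.0 * e

# Chaotic V = m^2 phi^2 / 2; set m = 1 since it cancels in n_S and r.
ns, r = slow_roll_ns_r(lambda p: 0.5 * p**2,
                       lambda p: p,
                       lambda p: 1.0, n_efolds=50)
```

For $N_e=50$ this returns $n_S\simeq0.960$ and $r\simeq0.158$; for $N_e=60$, $n_S\simeq0.967$ and $r\simeq0.132$. These bracket the best-fit values quoted above, as expected for the assumed range of e-folds.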
1404
1404.0523_arXiv.txt | The maximum likelihood estimator is used to determine fit parameters for various parametric models of the Fourier periodogram followed by the selection of the best fit model amongst competing models using the Akaike information criterion. This analysis, when applied to light curves of active galactic nuclei, can be used to infer the presence of quasi-periodicity and break or knee frequencies. The extracted information can be used to place constraints on the mass, spin and other properties of the putative central black hole and the region surrounding it through theoretical models involving disk and jet physics. | Broadband flux variability arises from disk- and jet-based phenomena. The periodogram (eg. van der Klis 1989) of light curves (LCs) of active galactic nuclei implicitly reflects this. Parametric models of the binned periodogram approximate the true expected shape, the power spectral density (PSD), which is ideally normally distributed, making it useful to work with. A statistically appropriate model aids in theoretical modeling of variability. Commonly used parametric models include: power law model (eg. Papadakis \& Lawrence 1993), $I(f)=N f^{-\alpha}$ where $\alpha$ is the red-noise slope in the range $-1$ to $-2.5$; broken power law model (eg. Uttley et al. 2002), $I(f)=N (f/f_{Brk})^{-\alpha_{hi}} , f>f_{Brk}$ and $I(f)=N (f/f_{Brk})^{-\alpha_{low}} , f<f_{Brk}$ where $f_{Brk}$ is the break frequency, $\alpha_{hi}$ is the slope of the high frequency region and $\alpha_{low}$ is the slope of the low frequency region; bending power law model (eg. Uttley et al. 2002), $I(f)=N (1+(f/f_{Knee})^2)^{-\alpha/2}$ where $f_{Knee}$ is the knee frequency, $\alpha$ is the slope in the high frequency region above the knee frequency; power law with a Lorentzian QPO model (eg.
Nowak 2000), $I(f)=N f^{-\alpha}$+$\disp{\frac{R^2 Q f^2_0/\pi}{f^2_0+Q^2 (f-f_0)^2}}$ where $\alpha$ is red-noise slope and the second term is a Lorentzian function with amplitude $R$, central frequency $f_0$ and quality factor $Q = f_0/\Delta f$ where $\Delta f$ is the frequency spread in the bin hosting the central frequency. | The XMM Newton X-ray light curve (0.3 keV - 10 keV) of REJ 1034+396 which revealed a QPO of $\sim$ 3733 s (Gierli{\'n}ski et al. 2008) is analyzed using the periodogram (296 bins) with the above fit models (results in Table \ref{tab1}). The power law with QPO model (Fig. \ref{fig1}) is the best fit model with an AIC = 149.8 and a significance of the QPO $>$ 99.94 \%. Parameter estimation with MLE and model selection with AIC is computationally efficient when compared to Monte-Carlo simulations based procedures (Uttley et al. 2002). Any other parametric model can be easily incorporated into this procedure. If a statistically significant QPO is detected, a lower limit to the black hole mass and constraints on its spin can be placed assuming that the QPO is from an orbital signature. Theoretical models considering general relativistic effects and emission region structure can be used to simulate PSDs which can then be compared with observations to yield constraints on physical parameters. | 14 | 4 | 1404.0523 |
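The procedure summarized above — maximum likelihood fits to the periodogram followed by AIC-based model selection — can be sketched compactly. Periodogram ordinates are exponentially distributed about the underlying PSD, which leads to the Whittle likelihood used below. This is a generic illustration with the power law model only, not the authors' implementation; the other parametric models listed above can be swapped in for `power_law`:

```python
import numpy as np
from scipy.optimize import minimize

def whittle_nll(log_pars, f, I, model):
    """Negative Whittle log-likelihood: periodogram ordinates I(f) are
    exponentially distributed about the true PSD S(f), so
    -ln L = sum( ln S + I/S ) up to an additive constant."""
    S = model(f, np.exp(log_pars))
    return np.sum(np.log(S) + I / S)

def power_law(f, p):                      # p = (N, alpha)
    return p[0] * f ** (-p[1])

def fit_psd(f, I, model, x0):
    """MLE fit (log-parameter space keeps N, alpha > 0) plus AIC."""
    res = minimize(whittle_nll, np.log(x0), args=(f, I, model),
                   method="Nelder-Mead")
    pars = np.exp(res.x)
    aic = 2.0 * len(x0) + 2.0 * res.fun   # AIC = 2k - 2 ln L_max
    return pars, aic
```

Fitting each competing model with `fit_psd` and comparing the returned AIC values implements the model-selection step; assessing the significance of a candidate QPO still requires a dedicated test beyond the raw AIC difference.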
1404
1404.7135_arXiv.txt | To study how the environment can influence the relation between stellar mass and effective radius of nearby galaxies ($z<0.12$), we use a mass-complete sample extracted from the NYU Value-Added Catalogue. This sample contains almost $232000$ objects with masses up to $3\times10^{11}M_{\sun}$. For every galaxy in our sample, we explore the surrounding density within 2 Mpc using two different estimators of the environment. We find that galaxies are slightly larger in the field than in high-density regions. This effect is more pronounced for late-type morphologies ($\sim7.5\%$ larger) and especially at low masses ($M_*<2\times10^{10}M_{\sun}$), although it is also measurable in early-type galaxies ($\sim3.5\%$ larger). The environment also leaves a subtle imprint in the scatter of the stellar mass-size relation. This scatter is larger in low-density regions than in high-density regions for both morphologies, on average $\sim3.5\%$ larger for early-type and $\sim0.8\%$ for late-type galaxies. Late-type galaxies with low masses ($M_*<2\times10^{10}M_{\sun}$) show the largest differences in the scatter among environments. The scatter is $\sim20\%$ larger in the field than in clusters for these low-mass objects. Our analysis suggests that galaxies in clusters form earlier than those in the field. In addition, cluster galaxies seem to originate from a more homogeneous family of progenitors. | \label{sec:introduction} The present-day stellar mass-size relation of galaxies holds information on the assembly history of galaxies across cosmic time. Both the average size of the galaxies at a given stellar mass and the scatter of the relation are expected to reflect the evolutionary paths followed by the galaxies after their formation. These evolutionary tracks are considered to be different depending on the environment the galaxies inhabit.
In fact, halos are expected to evolve fast and early in the highest density regions of the Universe, whereas in low-density environments this evolution is thought to be quieter \citep{sheth2004,gao2005,harker2006,maulbetsch2007}. Analysis of the luminosity-size and the stellar mass-size relations of high-z galaxies in the last 20 years \citep[e.g.][]{schade1996, lilly1998, simard1999, ravindranath2004, trujilloaguerri2004, mcintosh2005, barden2005, trujillo2004, trujillo2006, vanderwel2014} has shown that both late and early-type galaxies were more compact at a given mass or luminosity in the past. The size evolution is more dramatic for the early-type population than for the spiral-like galaxies \citep[see e.g.][]{trujillo2007, buitrago2008}. All these works, in combination with the decline in the number of compact galaxies in the nearby Universe \citep[see][]{trujillo2009,taylor2010, cassata2011, cassata2013, newman2012, szomoru2012, buitrago2013} prove that galaxies have undergone a significant size evolution with cosmic time. Although the general evolution of the stellar mass-size relation with cosmic time (z $\lesssim$ 3) is starting to be well understood \citep[e.g.][]{stringer2013}, making sense of how the environment has affected the evolution of the stellar mass-size relation is far less clear. For instance, at intermediate to high redshift ($0.5<z<2$), \cite{cooper2012, papovich2012} and \cite{delaye2013} find that elliptical galaxies have larger sizes when belonging to groups or clusters. However, works by \cite{rettura2010} and \cite{huertas2013a} showed no difference in a similar range of masses and redshifts. Moreover, even the opposite result has been found by \cite{raichoor2012}. Exploring early-type galaxies in clusters at z=1.3, they found that these objects are more compact in clusters than in the field.
\cite{lani2013} reported the interesting result that, although early-type galaxies are larger when residing in dense environments at z $>$ 1, this trend seems to weaken as we come closer to the present-day Universe (z $<$ 1). In the nearby Universe, the effect of the environment on the stellar mass-size relation of the galaxies is also not clear. In the case of early-type galaxies, \cite{maltby2010}, \cite{huertas2013b} and \cite{cappellari2013} found no size difference between those objects with the same mass in different environments. However, \cite{poggianti2013} found that elliptical galaxies are more compact in clusters than in the field. For spiral galaxies, \cite{maltby2010} report a slight trend for low/intermediate mass ($10^9M_{\sun}<M_*<10^{10}M_{\sun}$) spirals to be larger in the field than in groups. \cite{fernandez2013}, using the AMIGA sample, found that massive ($M_*>10^{10}M_{\sun}$) isolated spirals are $20\%$ larger than those located in dense environments; however, no size differences were reported for lower mass spirals. Both in the nearby Universe and at high redshift, the discrepancies found in the literature can be explained by the relatively modest samples of galaxies analysed, typically a few thousand galaxies or fewer, with the largest sample being that of \cite{huertas2013b}, which includes $\sim12000$ galaxies. One of the aims of this paper is to re-analyse this situation by exploring the stellar mass-size relation of both early and late-type galaxies using the large collection of data available in the NYU-VAGC Catalogue \citep{blantonNYU}. This huge dataset allows the sizes of the galaxies at a fixed stellar mass to be analysed with unprecedented statistical quality. In fact, the number of galaxies we use in this work is a factor of $\sim50$ larger than in \cite{maltby2010} and \cite{fernandez2013}.
In addition to the average size of galaxies at a given stellar mass depending on the environment, another quantity that deserves attention is the scatter of this relation. It is worth noting that the measured scatter found in the Fundamental Plane and other scaling relations \citep{nipoti2009, nair2011,bezanson2013} is much lower than the one predicted theoretically \citep{nipoti2003,nipoti2009,nipoti2012,ciotti2007,shankar2010b,shankar2013}. The large scatter in the theoretical works is the consequence of the intrinsic stochastic nature of the merging processes. For this reason, only models with very fine-tuned input conditions on the progenitors are able to reproduce the low dispersion values observed in the stellar mass-size distribution. Despite the enormous interest in measuring this quantity, little effort has been made observationally in this direction. To the best of our knowledge, using Sloan Digital Sky Survey (SDSS) data, only \cite{shen2003} have quantified this dispersion in the stellar mass-size plane, segregating the galaxies according to their morphology. However, no attempt has been made to measure the scatter segregating the galaxies according to their environment. This is again understandable due to the modest size of the samples used to explore the effect of the environment both at low and high-z. As we will show throughout this paper, the environment only marginally affects the scatter of the stellar mass-size relation and, consequently, the use of large datasets is necessary to identify the role played by the environment in shaping the stellar mass-size plane. This paper is organized as follows: Section \ref{sec:data} contains a description of the data used in this work; Section \ref{sec:environment} describes the different definitions of environment and the sample selection; Section \ref{sec:size-scatter} is dedicated to the study of the stellar mass-size plane.
Finally, our results are shown in Section \ref{sec:results} and discussed in Section \ref{sec:discussion}. A summary can be found in Section \ref{sec:summary}. Throughout the paper a standard $\Lambda$CDM cosmology is adopted: $\Omega_M=0.3$, $\Omega_{\Lambda}=0.7$ and $H_0=70$ km s$^{-1}$Mpc$^{-1}$. | \label{sec:discussion} Our work shows that both the average size and the scatter in the stellar mass-size distribution are larger in low-density environments than in high-density regions. Those effects are stronger in late-type galaxies than in early-type objects. Moreover, the effects increase when we explore the innermost region of the galaxy clusters. What are these trends telling us about the formation and evolution of the galaxies? Recent semi-analytical models \citep{shankar2013,shankar2014} predict a moderate effect of the environment on the size of the galaxies in the nearby Universe. These models typically segregate the different environments based on their halo mass. According to the models of \cite{shankar2013,shankar2014}, galaxies in more massive halos have larger sizes than galaxies with similar stellar masses inhabiting less massive halos. This is due to the larger number of mergers suffered during their history \citep{shankar2013}. However, the opposite trend is found observationally: late-type galaxies are slightly more compact in clusters than in the field \citep{maltby2010, valentinuzzi2010,fernandez2013,poggianti2013}, and elliptical galaxies do not show any dependence on the environment \citep{maltby2010,huertas2013a,huertas2013b}. Our results support this view: galaxies are larger in the field, independently of their morphology, although early-type galaxies seem to be less sensitive to the environment than disc galaxies. Interestingly, as we move towards higher redshifts, objects located in high-density regions start to show larger sizes ($25\%$ to $50\%$ larger) than those placed in low-density environments.
This is shown by different works both at intermediate redshift $z\sim1$: \cite{cooper2012,delaye2013} and \cite{papovich2012} (but see also \citealt{huertas2013a,rettura2010} and \citealt{raichoor2012} for different conclusions); and also at higher redshifts ($z\sim2$, \citealt{strazzullo2013}). \cite{lani2013} studied the environmental dependence of the size for early-type galaxies in two redshift ranges ($0.5<z<1$ and $1<z<2$). At high redshift, they find larger early-type galaxies in dense environments than in underdense environments, consistent with the previously mentioned studies, but the trend weakens for their lower redshift range. Supporting these observations, \cite{maulbetsch2007} found in N-body simulations that for halos in lower density regions the average present-day aggregation rate is 4-5 times higher than for halos in high-density regions. As we move towards higher redshift, their simulations show that this difference becomes smaller until $z\geqslant1$, when the trend is reversed and higher density environments present higher accretion rates than lower density environments. This different mass accretion rate in different environments can explain why results at high redshift differ from those in the nearby Universe. All the above observational and theoretical evidence points towards a scenario in which at earlier epochs ($z>1$) galaxies in high-density regions could have undergone a faster growth than the galaxies in less dense regions. However, in the last 7 Gyr this growth may have slowed down in clusters, while being maintained, or even increased, in the field, allowing low-density galaxies to reach sizes similar to the ones we observe in clusters nowadays. When comparing observational studies with simulations, it is important to take into account the role that errors can play in the results.
\cite{huertas2013b} and \cite{shankar2014} investigated the possible effect that errors in masses and effective radii can have on measuring environmental effects on the stellar mass-size distribution in the nearby Universe. Both studies indicate that the lack of environmental effects reported in previous works on early-type galaxies could be due to observational errors in measuring both the stellar mass and the effective radius. \cite{huertas2013b} quantify this effect, claiming that differences in sizes of $\sim40\%$ at a fixed stellar mass would not be detected with significance. Although quantifying the effect of individual galaxy errors is beyond the scope of this work, it is worth mentioning that our approach should reduce the influence of observational errors and reveal any possible trend in the size of the galaxies with the environment thanks to the increase in the size of the sample. Nevertheless, and despite the improvement in the statistical significance of our work compared to previous samples, the effect of the environment we find is very mild and the opposite of that predicted by \cite{shankar2014}. In relation to the scatter in the stellar mass-size plane, it is worth noting that the models predict a scatter in sizes at a fixed stellar mass in disagreement with the tight relations found in the data \citep{nipoti2009,nair2011,shankar2013}. The observed low values for the scatter can only be reproduced by the models by requiring a fine tuning in the input scaling relations of the progenitors \citep{shen2003,shankar2014}, forcing them also to follow tight scaling relations. \cite{nipoti2012} point out that even without considering some sources of scatter (such as that associated with the intrinsic scatter of the stellar-to-halo mass relation), the scatter found in simulations is larger than that obtained observationally.
It is important to take into account that the value for the scatter obtained in our work is an upper limit on the intrinsic scatter of the stellar mass-size distribution. The observed scatter includes two different contributions: the intrinsic scatter of the stellar mass-size distribution, due to the physical processes taking place in each galaxy, and the scatter induced by the errors in stellar mass and size measurements. To estimate the scatter due to observational errors, we have taken the average stellar mass-size relations obtained in the previous sections and blurred them by generating random points according to the typical error values both in stellar mass and in size. Then, we evaluate the mean size and the scatter of the distribution using our likelihood estimators. Our tests show that the scatter due to realistic observational errors ($\frac{\delta \log M_{*}}{\log M_{*}}=0.2$; $\frac{\delta \overline{R}_e}{\overline{R}_e}=0.15$; \citealt{blantonKcorrect,blantonNYU}) is around $<\sigma_{\ln R}>\sim0.20$ for late-type galaxies and $<\sigma_{\ln R}>\sim0.27$ for early-type galaxies. Consequently, the intrinsic scatter of the distribution would be $<\sigma_{\ln R}>\sim0.33$ in the case of late-type galaxies and $<\sigma_{\ln R}>\sim0.37$ for early-type galaxies. These small scatter values would make the discrepancy with the theoretical values even larger. Appendix \ref{sec:appendixB} details the tests carried out to estimate the above values. We find that the scatter in sizes at a fixed stellar mass is slightly smaller in clusters or overdense regions. \cite{nair2011} found a similar result when studying the luminosity-size relation. This result could be connected with a faster evolution and formation of the galaxies in clusters, thus involving a more homogeneous population of galaxies during the merging processes at high redshift.
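The bookkeeping behind this error budget follows from independent log-normal scatters adding in quadrature. A minimal Monte Carlo sketch of the blurring exercise, using only the late-type values quoted above (this is an illustration, not the paper's actual likelihood pipeline, and the mock distribution is a simple Gaussian in $\ln R_e$):

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_scatter(sigma_int, sigma_err, n=400_000):
    """Blur an intrinsic log-normal size distribution at fixed stellar mass
    with log-normal measurement errors and return the observed scatter."""
    ln_r_true = rng.normal(0.0, sigma_int, n)             # intrinsic ln R_e scatter
    ln_r_obs = ln_r_true + rng.normal(0.0, sigma_err, n)  # add size-error blurring
    return ln_r_obs.std()

# Values quoted in the text for late-type galaxies:
sigma_err = 0.20   # scatter induced by realistic observational errors
sigma_int = 0.33   # inferred intrinsic scatter
sigma_obs = observed_scatter(sigma_int, sigma_err)

# Recovering the intrinsic scatter by quadrature subtraction:
sigma_int_rec = np.sqrt(sigma_obs**2 - sigma_err**2)
```

With these numbers the observed scatter comes out near $\sqrt{0.33^2+0.20^2}\simeq0.39$, so quoting the observed value directly would overestimate the intrinsic dispersion.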
If our suggestion is correct, the distribution of cluster galaxies in the stellar mass-size plane would reflect a more primordial distribution of sizes, maybe related to progenitors with tighter scaling relations, as required in the models. On the contrary, galaxies in the field could have longer evolutionary time-scales, and the objects around them would have more time to evolve before they merge into the central galaxy. This could diversify the characteristics of the final objects and therefore broaden the dispersion of the distribution in the stellar mass-size plane. In general, the influence of the environment on the distribution of sizes at fixed stellar mass is stellar mass-dependent and seems more pronounced at low to intermediate stellar masses ($M_*<2\times10^{10}M_{\sun}$). Unfortunately, these masses are harder to study with high statistics and only at very low redshifts can we probe mass-complete samples. Thus, a greater effort to explore this range of stellar masses at progressively higher redshifts has to be made in order to fully characterize the effect of the environment on the size of the galaxies. Exploring the stellar mass-size relation and its scatter at higher redshift can also help to disentangle the effect that merging can have on the growth of galaxies and to test other proposed mechanisms for the evolution of the stellar mass-size relation, such as the arrival of newcomers \citep[see e.g.][]{vanderwel2005,carollo2013}. We also want to stress that comparing different samples from different works is often not straightforward. Factors such as the IMF used, the methods for measuring structural parameters, the band in which parameters are measured or the inhomogeneity of the main samples can mask subtle effects or even artificially reinforce them.
At higher redshifts the environment where the galaxies grow leaves an imprint on the size of the galaxies, with galaxies in clusters being $30-50\%$ larger than in the field (see \citealt{delaye2013, lani2013,papovich2012}, but see also \citealt{huertas2013a, rettura2010} and \citealt{raichoor2012} for a different view). In the nearby Universe, however, the environment plays a minor role, which many current galaxy formation models have failed to predict. We want to end this section by pointing out the profound mystery that our data present. As we have mentioned above, both the numerical simulations and the observations at high redshift show a significantly different evolutionary speed in clusters than in the field. Also, the characteristics of the progenitors should be different depending on the environment. Despite that, in the present-day Universe, the environmental differences in both the mean sizes and the scatter of the galaxies are small. Why have the galaxies today reached such similar sizes both in the clusters and in the field, when they have followed strikingly different growth histories? This is still an open question. In this work we have studied the distribution of the galaxies in the stellar mass-size plane for a mass-complete sample of 232000 nearby ($z<0.12$) objects. We use the NYU-VAGC catalogue to obtain a wide range of galaxies with different stellar masses located in different environments. At fixed stellar mass, we have computed the mean size and the scatter in the distribution using a maximum likelihood method. This calculation has been done using two different estimators of the environment. On the one hand, for each galaxy in our sample, we have computed the number of galaxies with masses above $4\times10^{10}M_{\sun}$ within a sphere of 2 Mpc radius and taken the total mass inside this sphere as an indicator of the surrounding density.
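In schematic form, this first estimator is a fixed-aperture mass sum followed by a percentile cut. The sketch below runs on an invented mock catalogue with a brute-force neighbour search (the real catalogue, its coordinate conversion, and any tree-based speed-up are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock catalogue: comoving positions (Mpc) in a 20 Mpc box, stellar masses (M_sun)
n_gal = 4000
pos = rng.uniform(0.0, 20.0, size=(n_gal, 3))
mass = 10 ** rng.uniform(9.0, 11.5, n_gal)

M_TH, R = 4e10, 2.0   # mass threshold and aperture radius used in the text

def local_density(i):
    """Total stellar mass in neighbours above M_TH within R Mpc of galaxy i."""
    d2 = ((pos - pos[i]) ** 2).sum(axis=1)
    near = (d2 < R * R) & (mass > M_TH)
    near[i] = False              # do not count the galaxy itself
    return mass[near].sum()

density = np.array([local_density(i) for i in range(n_gal)])

# The 10% tails of the density distribution define the two environment samples
lo_cut, hi_cut = np.percentile(density, [10.0, 90.0])
underdense = density <= lo_cut
overdense = density >= hi_cut
```

The same machinery applies unchanged to a real catalogue once redshifts are converted to comoving positions.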
We take the 10$\%$ of galaxies with the lowest and the 10$\%$ with the highest density values as representative samples of underdense and overdense zones. On the other hand, we have compiled a large number of galaxy clusters within our redshift range and compared the galaxies within 2 Mpc of the centre of the clusters with the galaxies in the field. In both approaches, we have segregated our samples according to their morphology based on the S\'ersic index. Regarding the size of the galaxies at a fixed stellar mass, we find that the galaxies are slightly larger ($\sim7.5\%$ for late-type galaxies and $\sim3.5\%$ for early-type galaxies) in less dense environments than in high-density regions. Our result is statistically significant for both late and early-type galaxies and broadly consistent between the two methods used to characterize the environment. Qualitatively similar results have been previously found for late-type galaxies in several works in a comparable redshift range (\citealt{poggianti2013, fernandez2013} and \citealt{maltby2010}). Our work, however, is the first to claim that, on average, early-type galaxies in low-density environments are $\sim3.5\%$ larger than in clusters with high statistical significance ($>4\sigma$). This result had not been found in previous works \citep{maltby2010,huertas2013b,fernandez2013}, probably because the effect is subtle and requires a large sample to be unveiled. \cite{poggianti2013} found qualitatively the same trend but with low significance due to the lower number of galaxies in their sample (a factor of $\sim40$ smaller). The scatter of the stellar mass-size distribution has also been explored in this work. Our results show that the environment does not significantly change the scatter of the overall distribution.
Nevertheless, there is a hint of the distribution having less scatter in high-density regions, especially for early-type galaxies and for late-type galaxies with masses below $2\times10^{10}M_{\sun}$. We also confirm previous observational values for the scatter of the overall stellar mass-size relation, such as those from \cite{shen2003} and \cite{nair2011}. Taken together, our results point towards an earlier evolution of galaxies in clusters from a more constrained family of progenitors. This evolution may have slowed down in the past few Gyr, allowing objects in less dense environments to reach similar sizes to those located in high-density regions. This different evolutionary speed between galaxies in clusters and in the field could have led to the weak correlation between environment and mean sizes at a fixed stellar mass observed in the present-day Universe. | 14 | 4 | 1404.7135
1404 | 1404.6133_arXiv.txt | For a self-gravitating particle of mass $\mu$ in orbit around a Kerr black hole of mass $M\gg\mu$, we compute the $\calO(\mu/M)$ shift in the frequency of the innermost stable circular equatorial orbit due to the conservative piece of the gravitational self-force acting on the particle. Our treatment is based on a Hamiltonian formulation of the dynamics in terms of geodesic motion in a certain locally defined effective smooth spacetime. We recover the same result using the so-called first law of binary black-hole mechanics. We give numerical results for the innermost stable circular equatorial orbit frequency shift as a function of the black hole's spin amplitude, and compare with predictions based on the post-Newtonian approximation and the effective one-body model. Our results provide an accurate strong-field benchmark for spin effects in the general-relativistic two-body problem. | 14 | 4 | 1404.6133 |
||
1404 | 1404.3716_arXiv.txt | Evidence for an excess of gamma rays with ${\cal O}(\rm GeV)$ energy coming from the center of our galaxy has been steadily accumulating over the past several years. Recent studies of the excess in data from the Fermi telescope have cast doubt on an explanation for the excess arising from unknown astrophysical sources. A potential source of the excess is the annihilation of dark matter into standard model final states, giving rise to gamma ray production. The spectrum of the excess is well fit by 30~GeV dark matter annihilating into a pair of $b$ quarks with a cross section of the same order of magnitude as expected for a thermal relic. Simple models that can lead to this annihilation channel for dark matter are in strong tension with null results from direct detection experiments. We construct a renormalizable model where dark matter-standard model interactions are mediated by a pseudoscalar that mixes with the CP-odd component of a pair of Higgs doublets, allowing for the gamma ray excess to be explained while suppressing the direct detection signal. We consider implications for this scenario from Higgs decays, rare $B$ meson decays and monojet searches and also comment on some difficulties that any dark matter model explaining the gamma ray excess via direct annihilation into quarks will encounter. | \label{sec:intro} One of the prime unanswered questions about our Universe is the nature of dark matter (DM). Evidence for DM is overwhelming, coming from a diverse set of observations, among them galactic rotation curves, cluster merging, and the cosmic microwave background (see, e.g.~\cite{Feng:2010gw} and references therein). So far, dark matter has not been observed nongravitationally, yet the fact that a thermal relic with weak-scale annihilation cross section into standard model (SM) final states would have an energy density today that is compatible with dark matter measurements offers hope that this will be possible. 
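That compatibility can be seen with the standard freeze-out rule of thumb $\Omega_\chi h^2 \approx 3\times10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$ (an order-of-magnitude estimate, not a full Boltzmann-equation treatment):

```python
# Order-of-magnitude freeze-out estimate: a weak-scale ("thermal") annihilation
# cross section yields close to the measured dark matter abundance.
sigma_v = 3e-26                 # cm^3/s, canonical thermal relic cross section
omega_h2 = 3e-27 / sigma_v      # rule-of-thumb relic abundance
print(omega_h2)                 # ~0.1, near the measured Omega_DM h^2 ~ 0.12
```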
The nongravitational interactions of DM are being searched for at particle colliders, in direct detection experiments, and in so-called indirect detection experiments, where the products of DM annihilation or decay are sought. One final state in particular that is searched for in indirect detection experiments is gamma rays, which can be produced by DM annihilating, either (i) directly to photons, which would result in an unambiguous line in the case of a two-body final state, or (ii) into other SM particles that then decay and produce photons in the cascade. The Fermi collaboration has published limits on DM annihilation \cite{Ackermann:2012qk,*Ackermann:2013uma,*Ackermann:2013yva,*Murgia} into final states containing photons. Recently, evidence for such a signal has been mounting, with several groups~\cite{Daylan:2014rsa,Hooper:2013nhl,Hooper:2010mq,Goodenough:2009gk,*Hooper:2011ti,Abazajian:2012pn,*Gordon:2013vta,*Abazajian:2014fta} analyzing data from the Fermi Gamma Ray Space Telescope and finding an excess of gamma rays of energy $\sim$1-3~GeV in the region of the Galactic Center. This excess has a spectrum and spatial morphology compatible with DM annihilation. The potential for astrophysical backgrounds, in particular millisecond pulsars in this case, to fake a signal is always a worry in indirect detection experiments. However, the observation that this gamma ray excess extends quite far beyond the Galactic Center lessens the possibility of astrophysical fakes~\cite{Hooper:2013nhl}, with recent studies finding the excess to extend to at least $10^\circ$ from the Galactic Center~\cite{Daylan:2014rsa,Hooper:2013rwa,*Huang:2013pda}.
The excess's spectrum has been fit by DM annihilating to a number of final states, depending on its mass, notably 10~GeV DM annihilating to $\tau^+\tau^-$ (and possibly other leptons) \cite{Barger:2010mc,*Hardy:2014dea,*Modak:2013jya,Hooper:2010mq,Abazajian:2012pn,*Gordon:2013vta,*Abazajian:2014fta,Marshall:2011mm,*Buckley:2011mm,*Lacroix:2014eea} and 30~GeV DM to $b\bar b$ \cite{Goodenough:2009gk,*Hooper:2011ti,Barger:2010mc,*Hardy:2014dea,*Modak:2013jya,Abazajian:2012pn,*Gordon:2013vta,*Abazajian:2014fta,Berlin:2014tja,*Agrawal:2014una}. The size of the excess is compatible with an annihilation cross section roughly equal to that expected for a thermal relic, $\langle \sigma v_{\rm rel}\rangle=3\times10^{-26}~{\rm cm^3}/{\rm s}$, suggesting that it is actually the result of DM annihilation. 30~GeV DM that annihilates to $b$ quarks is particularly interesting, primarily because direct detection experiments have their maximal sensitivity to spin-independent interactions between nuclei and DM at that mass. Reconciling the extremely strong limit from direct detection in this mass range, presently $8\times10^{-46}~{\rm cm}^2$~\cite{Akerib:2013tjd}, with a potential indirect detection signal poses a challenge, possibly offering a clue about the structure of the SM-DM interactions. We will focus on this DM mass and final state in this paper. In the case of DM annihilation to SM fermions through an $s$-channel mediator, we can roughly distinguish the distribution of final states by the spin of the mediator. Spin-0 mediators tend to couple with strength proportional to mass---either due to inheriting their couplings from the Higgs or because of general considerations of minimal flavor violation---which results in decays primarily to the heaviest fermion pair kinematically allowed. On the other hand, spin-1 mediators generally couple more democratically, leading to a more uniform mixture of final states. 
For this reason, the fact that the excess is well fit by 30~GeV DM annihilating dominantly to $b\bar b$ suggests annihilation through a scalar. However, this is problematic: to get an appreciable indirect detection signal today requires scalar DM (fermionic DM annihilating through a scalar is $p$-wave suppressed) but this leads to a spin-independent direct detection cross section that is in conflict with experimental bounds, as mentioned above. Therefore, we are led to consider a pseudoscalar mediator, instead of a scalar, between the (fermionic) DM and the SM, leading to an effective dimension-six operator of the form \begin{align} {\cal L}_{\rm eff}&=\frac{m_b}{\Lambda^3}\bar\chi i\gamma^5\chi \bar b i\gamma^5 b, \label{eq:Leff} \end{align} where $\chi$ is the DM. This operator has been singled out previously as a good candidate to describe the effective interaction between the SM and the dark sector~\cite{Boehm:2014hva,Alves:2014yha}. It implies $s$-wave DM annihilation, which allows the gamma ray excess to be fit while having a large enough suppression scale $\Lambda$ that it is not immediately ruled out by collider measurements of monojets/photons. The direct detection signal from this operator is spin-dependent and velocity-suppressed, rendering it safe from current constraints. To move beyond the effective, higher dimensional operator in Eq.~(\ref{eq:Leff}) requires confronting electroweak symmetry breaking because the SM portion of ${\cal L}_{\rm eff}$ is not an electroweak singlet: \begin{align} \bar b i\gamma^5 b=i\left(\bar b_L b_R-\bar b_R b_L\right). \end{align} Therefore, ${\cal L}_{\rm eff}$ has to include the Higgs field (which would make it a singlet); the Higgs then gets a vacuum expectation value (VEV), implying a mediator that can couple to the Higgs. It is easy to construct a scalar-scalar interaction between DM and the SM using the ``Higgs portal" operator $H^\dagger H$, where $H$ is the SM Higgs doublet, since it is a SM gauge singlet.
This portal has been well explored in the literature, particularly in its connection to DM~\cite{Burgess:2000yq,*Strassler:2006im,*Patt:2006fw,*Strassler:2006ri,*Pospelov:2011yp,*Bai:2012nv,*Walker:2013hka}. In this paper, however, we expand the Higgs sector of the SM to include a second doublet, which has enough degrees of freedom to allow for a pseudoscalar to mix with the dark matter mediator. In the presence of CP violation one could also induce a pseudoscalar-scalar coupling via this portal; however, it is puzzling why a new boson with CP violating couplings would not also have a scalar coupling to the dark fermion. Including two Higgs doublets allows CP to be an approximate symmetry of the theory, broken by the SM fermion Yukawa coupling matrices. Tiny CP violating couplings will need to be included in order to renormalize the theory at high orders in perturbation theory, but we simply assume that all flavor and CP violation is derived from spurions proportional to the Yukawa coupling matrices, and so has minimal effect on the Higgs potential and dark sector. The outline of this paper is as follows. In Sec.~\ref{sec:model} we introduce the two Higgs doublet model (2HDM) and the pseudoscalar mediator which mixes with the Higgs sector. We also discuss CP violation in the dark sector and in interactions between DM and SM fermions. We briefly discuss the annihilation cross section for our DM model in Sec.~\ref{sec:ann}. In Sec.~\ref{sec:results}, we catalog constraints on this model, such as direct detection, Higgs and $B$ meson decays, and monojets. Section~\ref{sec:conc} contains our conclusions.
We have studied a 2HDM where the pseudoscalar mediator mixes with the CP-odd Higgs, giving rise to interactions between DM and the SM. At one loop, scalar-scalar interactions between DM and SM quarks arise. This leads to a spin-independent cross section for direct detection well below the current bound of $8\times10^{-46}{\rm cm}^2$ at a dark matter mass of 30~GeV. Future limits at better than $10^{-49}{\rm cm}^2$ could impact this model. We also consider decays of the 125~GeV SM-like Higgs boson involving the mediator. If the mediator is light, $h\to aa\to 4b,\,2b2\mu$ can be constraining with data from the 14~TeV LHC. Additional contributions to $B_s\to\mu^+\mu^-$ in this model eliminate some of the favored parameter space for $m_a<10~{\rm GeV}$. This scenario is not well tested by monojet searches, including ones that rely on $b$-tagging to increase the sensitivity to DM coupled to heavy quarks, due to a suppressed coupling of the mediator to $t$ quarks. Changing the benchmark parameters that we used above does not greatly change the general results. For example, if we lower $m_A$ to decrease the $h\to a a$ signal coming from Eq.~(\ref{eq:vport-mass}), we have to decrease $\tan\beta$ because of the CMS heavy Higgs search~\cite{CMS-PAS-HIG-13-021}. Then, to obtain the correct annihilation cross section in Eq.~(\ref{eq:annih}), we have to increase the mixing angle (or, equivalently, $B$), which in turn increases the $h\to a a$ rate. One obvious piece of evidence in favor of this scenario would be finding heavy Higgses at the LHC. However, conclusively determining whether these heavy Higgses are connected to 30~GeV DM annihilating at the center of the galaxy will be a formidable challenge. Among the possible signatures to probe this scenario is $A\to h a\to 2b+{\rm inv.}$ We leave a detailed study of this search and others for future work. | 14 | 4 | 1404.3716
1404 | 1404.4841_arXiv.txt | Using the chargino-neutralino and slepton search results from the LHC in conjunction with the WMAP/PLANCK and $(g-2)_{\mu}$ data, we constrain several generic pMSSM models with decoupled strongly interacting sparticles, heavier Higgs bosons and characterized by different hierarchies among the EW sparticles. We find that some of them are already under pressure and this number increases if bounds from direct detection experiments like LUX are taken into account, keeping in mind the associated uncertainties. The XENON1T experiment is likely to scrutinize the remaining models closely. Analysing models with heavy squarks and a light gluino along with widely different EW sectors, we show that the limits on $\mgl$ are not likely to be below 1.1 TeV if a multichannel analysis of the LHC data is performed. Using this light gluino scenario we further illustrate that in future LHC experiments the models with different EW sectors can be distinguished from each other by the relative sizes of the $n$-leptons + $m$-jets + $\met$ signals for different choices of $n$. | \label{section1} The LHC runs at $\sqrt{s}=$7/8 TeV concluded recently. The painstaking searches for supersymmetry (SUSY) \cite{SUSYreviews1,SUSYreviews2,SUSYbooks}, the most popular and attractive extension of the standard model (SM) of particle physics, have not observed any signal yet. Consequently, stringent limits on the masses of the supersymmetric particles (sparticles) belonging to the strongly interacting sector, expected to be produced with large cross-sections, have been obtained by both the ATLAS and the CMS collaborations \cite{atlas0l, atlas1l, atlas2l, atlas-susy,cms-susy} \footnote{However, these stringent bounds are reduced significantly in compressed SUSY type scenarios \cite{compressed}.}.
Whether these limits already put question marks on the naturalness\cite{naturalness,naturalness_recent} of various SUSY models may be debated, although it is hard to quantify the degree of naturalness. Naturalness, or the absence of it, should therefore remain a matter of healthy theoretical debate and not be regarded as the concluding remark on SUSY. The minimal supersymmetric standard model (MSSM)\cite{SUSYreviews2,SUSYbooks} has another important component - the electroweak (EW) sector. The production cross-sections of the sparticles belonging to this sector at the LHC are rather modest. As a result, there was no constraint on the properties of these sparticles until recently. Thus, some weak mass limits from LEP\cite{lepsusy} and Tevatron \cite{cdf3l5.8fb, d03l2.3fb} were the only available information on this sector. The purpose of this paper is to focus on this sector in the light of the direct constraints from the LHC \cite{atlas3lew,atlas2lew,cmsew} as well as indirect constraints like the observed value of the anomalous magnetic moment of the muon from the Brookhaven $\gmin2$ experiment\cite{g-2exp} and the relic density constraints for dark matter from the WMAP\cite{wmap} or PLANCK\cite{planck} experiments. Using the combined constraints we then identify the allowed parameter space (APS). We will also consider the constraints from direct \cite{xenon100,lux,xenon1t} and a few selected indirect searches \cite{fermi-lat-gamma} for dark matter, which may involve considerable theoretical and astrophysical uncertainties (to be elaborated in a subsequent section). In view of this we present our results in such a way that the effect of each constraint may separately be seen. We also study the prospect of future LHC searches and the issue of distinguishing several EW scenarios having different dark matter (DM) annihilation/coannihilation mechanisms leading to the correct relic density (we will often refer to these as DM producing mechanisms).
Since the SUSY breaking mechanism leading to a given pattern of sparticle masses is unknown, in the most general MSSM the above two sectors are unrelated. Only in models with high-scale physics inputs, which invoke specific mechanisms of SUSY breaking like minimal supergravity (mSUGRA)\cite{msugramodel}, are the masses of the strong and the EW sparticles correlated. As a result, the stringent bounds on the former sector translate into bounds on the masses of the latter, some of which are apparently much stronger than the direct limits. However, since the mechanism of SUSY breaking is essentially unknown, it is preferable to free ourselves from such model dependent restrictions. Apart from particle physics, the EW sparticles may play important roles in cosmology as well. An attractive feature of all models of SUSY with R-parity\cite{SUSYbooks} conservation is that the lightest supersymmetric particle (LSP) is stable. In many models the lightest neutralino $\lspone$ happens to be the LSP. This weakly interacting massive particle is a popular candidate for the observed dark matter (DM) in the universe \cite{dm_rev1,dm_rev2,dmmany}. Moreover, the DM annihilation/coannihilation mechanisms leading to an acceptable relic density for DM may be driven entirely by the electroweak sparticles\cite{dm_rev1,dmmany,dmmssm}. Consequently, the observed value of the DM relic density\cite{wmap,planck} may effectively be used to constrain the EW sector, or a specific SUSY model in particular. It was recently emphasized in Ref.\cite{arg_jhep1} that the physics of DM and the stringent LHC bounds on the squark and gluino masses, obtained mainly from the jets + missing energy data, are controlled by two entirely different sectors of the phenomenological MSSM (pMSSM)\cite{pmssm}.
While the DM producing mechanisms may broadly be insensitive to the strong sector\footnote{Except in situations like LSP-stop coannihilations.} of the pMSSM\cite{pmssm}, the response of the above LHC bounds to changes in the EW sector parameters is likewise rather weak. It was demonstrated by simulations at the generator level that these bounds change modestly for a variety of EW sectors with different characteristics, all consistent with the DM relic density data\cite{arg_jhep1}. Thus the strong constraints on DM production in mSUGRA \cite{dmsugra, dmsugra_recent} due to squark-gluino mass bounds may be just an artifact of this model\footnote{For a recent review focussing on recent searches for dark-matter signatures at the LHC see Ref.\cite{vasiliki}.}. It was further noted that in the unconstrained MSSM, there are many possible DM producing mechanisms which are not viable in mSUGRA due to the constraints on the squark-gluino masses. Some examples are LSP pair annihilation via the Z or the lighter Higgs scalar (h) resonance, LSP-sneutrino coannihilation, coannihilation of a bino dominated LSP and a wino dominated chargino, etc.\cite{Baer:2005jq,arg_jhep1}. It may be emphasized that the discovery of the Higgs boson by the LHC collaborations\cite{higgs} has opened up the possibility of pinpointing LSP pair annihilation via the h-resonance. Subsequently, both the CMS and the ATLAS collaborations published direct search limits on the masses of the electroweak sparticles in several models sensitive to the LHC experiments at 7 TeV \cite{atlas2l7, atlas3l7,cms2l3l7}. It was pointed out in Ref.\cite{arg_jhep2} that the models constrained by the LHC experiments are important in the context of DM physics as well, since many of these models contain light sleptons of either L- or R-type.
It was demonstrated that even the preliminary mass bounds based on 13 $\ifb$ of 8 TeV data\cite{atlas3l8tev13,cms3l8tev9} are able to put non-trivial constraints on the parameter space consistent with the neutralino relic density bounds. It was also pointed out that if, additionally, the gluinos are relatively light (just beyond the reach of the current LHC experiments), these models with the lightest neutralino as the LSP may lead to novel collider signatures. Especially in models with light sleptons, the same sign dilepton (SSD) signal may indeed turn out to be stronger than the canonical jets + missing energy signal. Moreover, one is able to distinguish different relic density satisfying mechanisms by measuring the relative rates of the $n$-leptons + $m$-jets + missing energy events for different values of $n$. More recently the LHC collaborations have published their analyses for EW sparticle searches based on 20 $\ifb$ of data \cite{atlas3lew,atlas2lew,cmsew} which, as expected, yield stronger mass bounds. The results were interpreted in terms of several simplified models. In this approach only the masses of a limited number of sparticles relevant to a particular signal are treated as free parameters, while the others are assumed to be decoupled. Moreover, in many cases the LSP is assumed to be bino dominated and the lighter chargino ($\chonepm$) wino dominated, but not all the parameters that determine the masses and the mixings in the EW gaugino sectors are precisely identified. However, many of the above parameters, which are moderately or marginally important for collider analyses, are quite important for the computation of indirect observables such as the observed DM relic density bounds or $\gmin2$. In view of this we have computed the bounds by a PYTHIA \cite{pythia} based generator level analysis. We use the full set of pMSSM parameters sufficient to determine all relevant observables.
We also obtain bounds in related models not considered by the LHC collaborations in Refs.\cite{atlas3lew,atlas2lew,cmsew}. We next consider a few indirect constraints in order of their level of stringency. We note that the stringency of a constraint increases if there is less model dependence, and decreases if there are large combined theoretical and experimental errors, some of which may not even be precisely quantifiable. With the details given in Sec.~\ref{Section:DetailsOfConstraints}, the above constraints, in the aforesaid order, are outlined below: i) the precise dark matter relic density constraint from WMAP/PLANCK\cite{wmap,planck} within the ambit of the standard model of cosmology\cite{kolb}, ii) the $\gmin2$ data that deviates from the SM result by more than $3\sigma$\cite{g-2exp,g-2sm1,g-2sm2} (which is becoming more and more potent with the gradual reduction of the disagreement between the $e^+e^-$ data based analyses and the ones that use hadronic $\tau$-decay data for evaluating the hadronic vacuum polarisation part of the theoretical estimation of $\gmin2$\cite{davierhadtaug-2}), iii) the bound on the spin-independent direct detection cross-section of DM ($\sigma_{\tilde \chi p}^{\rm SI}$) from XENON100\cite{xenon100} and LUX\cite{lux} (we also consider the reach of XENON1T\cite{xenon1t}) and iv) the indirect detection constraint from the photon signal as given by the FERMI data\cite{fermi-lat-gamma}. With a bino-dominated LSP the last constraint is hardly of any interest, as we will see in Sec.~\ref{Section:DirectAndIndirectDetection}. Even in the optimistic scenario of a SUSY discovery in the LHC 13 TeV runs, it would still be difficult to pinpoint the underlying DM producing mechanism by explicitly reconstructing the sparticle spectrum. This is especially true for the early phase of the experiment.
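The way these constraints act jointly on a scan point can be illustrated schematically. The sketch below treats each pMSSM point as passing or failing a set of cuts; all numerical windows are placeholders standing in for the WMAP/PLANCK band, the $\sim 2\sigma$ $\gmin2$ window and the direct detection upper limit at the point's LSP mass, not the values used in the paper.

```python
# Hypothetical sketch: combined indirect constraints as cuts on one
# pMSSM scan point. Numerical windows are illustrative placeholders.
def passes_indirect_constraints(point):
    ok_relic = abs(point["omega_h2"] - 0.12) < 0.01        # relic density band
    ok_gmin2 = 13.3e-10 < point["delta_a_mu"] < 45.3e-10   # (g-2)_mu window
    ok_dd    = point["sigma_si"] < point["dd_limit"]       # direct detection
    return ok_relic and ok_gmin2 and ok_dd

# Illustrative point (cross sections in cm^2, placeholders throughout)
point = {"omega_h2": 0.119, "delta_a_mu": 2.9e-9,
         "sigma_si": 1e-46, "dd_limit": 8e-46}
print(passes_indirect_constraints(point))  # True
```

In an actual scan, such a filter would be applied to every point of a dense grid over the EW sector parameters, and the surviving points trace out the narrow allowed strips discussed in the text.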
In this work we address the possibility of distinguishing various pMSSM scenarios with characteristic EW sectors constrained by the experiments discussed above. This may be possible if at least one of the strongly interacting sparticles is within the reach of the LHC and its decays bear the imprints of the underlying EW sector, as we will show in a later section. In our analysis we will particularly study the effects of variations of $\tan\beta$ (the ratio of the vacuum expectation values of the two Higgs doublets), $\mu$ (the higgsino mass parameter), the slepton masses, etc. This will be explored in a generic scenario with a bino dominated LSP and a wino dominated $\chonepm$, along with heavy squarks and gluino as well as large masses for the charged Higgs $H^{\pm}$, the heavier CP-even neutral Higgs $H$ and the pseudoscalar Higgs $A$ (${M_{H^{\pm}}, M_H, M_A}$ respectively). We will also consider a large top-trilinear parameter $A_t$ so that the lighter Higgs mass $m_h$ agrees with the observed value for the smallest possible superpartner masses. The plan of this paper is as follows. In Sec.~\ref{Section:DetailsOfConstraints} we will review the effect of the Higgs mass data as applied to the pMSSM and indirect constraints like those from $\gmin2$ and the WMAP/PLANCK data for the relic density of DM, and the effect of XENON100, LUX and the future XENON1T on our analysis. In Sec.~\ref{actualanalysis} we will explore various electroweak sectors by having the left and right slepton masses (separately or together) in between the masses of the LSP and the lighter chargino. This will be analysed by considering sufficiently large values of $\mu$ such that one always obtains a bino-dominated LSP and a wino-dominated $\chonepm$. We will find the APS from collider bounds and constraints from the relic density as well as $\gmin2$.
In Sec.~\ref{Section:DirectAndIndirectDetection} we will further impose the constraints from the spin-independent direct detection cross-section limits from LUX and the $\gamma$-ray constraints for indirect detection of DM from Fermi-LAT. In Sec.~\ref{Section:gluinomasslimit} we will analyse a few benchmark points chosen from the models of Sec.~\ref{actualanalysis} and discuss the prospects of distinguishing various models. We will conclude in Sec.~\ref{Section:Conclusion}. | \label{Section:Conclusion} The LHC searches during the 7/8 TeV runs in the $m$-jets + $n$-leptons + $\met$ channels, where $m \geq 2$, have obtained important limits on the masses of the strongly interacting sparticles - the squarks and the gluinos (see Refs.\cite{atlas0l, atlas1l, atlas2l, atlas-susy,cms-susy}). These limits, however, provide little information on the EW sparticles unless very specific SUSY breaking mechanisms like mSUGRA \cite{msugramodel} are invoked to relate the masses of the strong and EW sparticles. The purpose of this paper is to investigate the EW sector of the pMSSM \cite{pmssm}. In order to achieve our goal we focus on the bounds from ATLAS and CMS searches for the direct production of $\chonepm$$\lsptwo$ \cite{atlas3lew,cmsew} and slepton pairs \cite{atlas2lew,cmsew} via the hadronically quiet channels with large $\met$. We also include in our analysis the WMAP/PLANCK constraints \cite{wmap,planck} on the observed DM relic density and require $a_\mu^{\rm SUSY}$ to agree with $\Delta a_\mu$ at the level of 2$\sigma$ (Sec.~\ref{Section:DetailsOfConstraints})\cite{g-2exp}. The observables under consideration, while sensitive to the EW sectors of SUSY models, are by and large independent of the strongly interacting sparticles. Moreover, the measurement of $m_h$ enables us to study LSP pair annihilation into the h-resonance more precisely.
The main conclusion of this paper is that for a fairly large number of pMSSM models \cite{pmssm} without specific assumptions for soft SUSY breaking, the EW sectors are constrained by the above data (see Figs.\ref{LL_0.5_0.5} - \ref{LLR_lsp_slep}). In many cases the constraints are quite severe, while in other cases they are somewhat relaxed. However, in all cases the allowed parameter space (APS) is a bounded region, indicating both upper and lower bounds on the EW sparticle masses. Using the model independent limits on $N_{BSM}$ (defined in Sec.~\ref{section3.1}) as obtained by ATLAS and CMS, we constrain the EW sectors of several pMSSM models closely related to the simplified models considered by the LHC collaborations. The models are characterized by different mass hierarchies among the EW sparticles. The simplified models showcase the basic features of dedicated LHC searches, but it is important to relate the search results to indirect observables like the DM relic density and $\gmin2$. They also involve unrealistic assumptions like $M_{{\tilde l}_L}=M_{\tilde {\nu}}$ (see Sec.~\ref{section3.1}) and consequently miss some phenomenologically interesting possibilities like the invisible decays of $\lsptwo$ with $\approx$ 100 \% BR (see Sec.~\ref{section3.1.1}). We have used the ATLAS and CMS data to derive new constraints in several models which are interesting in their own right but not included in Refs.\cite{atlas3lew,atlas2lew,cmsew}. We focus on models with a bino dominated LSP, wino dominated $\chonepm$ and $\lsptwo$, along with light sleptons. All strongly interacting sparticles and the heavier Higgs bosons are assumed to be decoupled. These models are highly sensitive to the trilepton signal from $\chonepm$$\lsptwo$ pair production. In this analysis we have also taken into account the limits from direct slepton searches (Sec.\ref{section3.5} and Sec.\ref{section3.6}), which sometimes cover parameter spaces insensitive to the trilepton data.
We now summarize the results for the models with relatively light $\chonepm$ and $\lsptwo$ and sleptons (L-type or R-type or both) lighter than the above gauginos (Figs.\ref{LL_0.5_0.5} - \ref{30_R_slep_gt_m2}). The tilted LGLS-$\lspone$ model (Sec.~\ref{section3.1.1}, Fig.\ref{LL_0.75_0.25_A}), for low values of $\tan\beta$, is disfavoured by the combined constraints. The LGLRS-$\lspone$ model (Sec.~\ref{section3.2.1}, Figs.\ref{LLR_0.75_0.25_A}, \ref{LLR_0.75_0.25_B}) for both low and high $\tan\beta$ is also not viable. The last two constraints follow from both chargino-neutralino and direct slepton searches and illustrate the interplay between different search channels. All the other models in this category have an APS consistent with the combined constraints. Within the pMSSM a few DM producing mechanisms are possible which are not viable in specific models like mSUGRA \cite{arg_jhep1}. LSP-sneutrino coannihilation is a case in point. However, the combined constraints used in our analysis put severe restrictions on some of the pMSSM allowed mechanisms. Bulk annihilation, for example, is disfavoured as the dominant relic density producing mechanism in all models except for one (see Fig.\ref{LLR_lsp_slep_A}). Only in the LLRS model with small $\tan\beta$ is the tip of the near vertical red dotted region representing bulk annihilation consistent with all constraints. The LSP pair annihilation into a light Higgs resonance can produce the required DM relic density for low $\tan\beta$ only. But the LHC constraints rule this out for low $\mchonepm$. As a result the SUSY contribution to $\gmin2$ is suppressed, leading to a tension with the measured value. Only if the $\gmin2$ constraint is relaxed to the level of 3$\sigma$ is this option viable in a few cases (see Figs.\ref{LLR_0.5_0.5_A}, \ref{LLR_0.75_0.25_A}, \ref{LLR_lsp_slep_A})\footnote{It may be recalled that in the LGHS model (see Sec.
3.4) the Higgs resonance mechanism cannot be excluded beyond doubt since the spoiler mode may weaken the trilepton signal.}. For similar reasons LSP annihilation into the Z-resonance is also not viable. Thus, in contrast to LSP pair annihilation, various coannihilation processes survive as the main DM producing mechanisms, favoured in most scenarios over large regions of parameter space. It is well known that the coannihilation mechanisms operate on narrow strips in each parameter space. The combination of theoretical constraints/LEP limits, the LHC exclusion contours and the $\gmin2$ constraint at the level of 2$\sigma$ restricts the lower and the upper edges of this strip. Thus in each of the APS under consideration the EW sparticles have their masses bounded from both above and below. We have also analysed models with heavy sleptons and lighter $\chonepm$, $\lsptwo$ (Sec.~\ref{section3.4} and Fig.\ref{30_slep_gt_m2}). In this case the LHC constraints are relatively weak. Nevertheless, the strip allowed by the WMAP/PLANCK data, arising from LSP-$\chonepm$/$\lsptwo$ coannihilation, is bounded by the $\gmin2$ constraint at the level of 2$\sigma$. Models with light sleptons and heavy as well as decoupled $\chonepm$, $\lsptwo$ have also been considered in this analysis. We have analysed the LLS (Sec.~\ref{section3.5}, Fig.\ref{30_LL_lsp_slep}) and the LLRS (Sec.~\ref{section3.6} and Fig.\ref{LLR_lsp_slep}) models with high and low $\tan\beta$. In all cases $\mchonepm$ is assumed to be beyond the direct LHC search limit. We find a bounded APS in each case. For the LLS model, LSP-sneutrino coannihilation is responsible for the right amount of relic density. In the LLRS model $\mu$ has to be large to ensure a wino dominated chargino. As a result, for both choices of $\tan\beta$ we find $\stauone$ to be the NLSP, and the LSP undergoes coannihilation with it to produce the required amount of DM relic density.
For low $\tan\beta$, LSP pair annihilation into the h-resonance is also viable for slepton masses beyond the LHC reach. This possibility, however, is in conflict with the $\gmin2$ constraint at the $2 \sigma$ level. We note in passing that the light right slepton (LRS) model is inconsistent with the $\gmin2$ limit. We have also studied the impact of the direct and indirect searches for DM on the APS of different models after filtering them through the above three constraints. We would, however, like to remind the readers of the inherent theoretical, experimental and astrophysical uncertainties and ambiguities involved in the analysis, as reviewed in detail in the text (see Sec.\ref{section1} and Sec.\ref{section2.3}). After including the DM direct detection limits, it follows from Fig.\ref{dd_LL} that there is a tension between two models and the XENON100 \cite{xenon100}/ LUX \cite{lux} data. These are the LGLS model (Fig.\ref{LL_0.5_0.5_A}) and the tilted LGLS-$\chonepm$ model (Fig.\ref{LL_0.25_0.75_A}) at low $\tan\beta$. Modulo the aforesaid uncertainties, the LGLRS (Fig.\ref{LLR_0.5_0.5_A}) and the tilted LGLRS-$\chonepm$ (Fig.\ref{LLR_0.25_0.75_A}) scenarios at low $\tan\beta$ are also in conflict with the direct detection data (Fig.\ref{dd_LLR}). The XENON1T experiment \cite{xenon1t} is expected to scrutinize all the remaining models closely. It follows from Fig.\ref{dd_fig7_to_10} that the other cases, namely the LGRS, LGHS, LLS and LLRS models (see Fig.\ref{30_R_slep_gt_m2} to Fig.\ref{LLR_lsp_slep}), are fairly insensitive to the XENON100\cite{xenon100} and LUX \cite{lux} data. XENON1T \cite{xenon1t} can spell the final verdict on the LGHS and LGRS models. The remaining models will be probed by XENON1T \cite{xenon1t} if the theoretical and astrophysical uncertainties are brought under control. Next we consider the possible impacts of the above scenarios on the next round of experiments at the LHC.
However, it will be hard to establish the underlying model and the DM producing mechanism in the early stages of the experiment even if SUSY is discovered. Therefore we explore the possibility of identifying observables which are sensitive to different DM producing mechanisms. This may be possible if at least one of the strongly interacting sparticles is relatively light. The feasibility of this approach has already been demonstrated by considering the light stop, the light stop-gluino and the light gluino scenarios and observables based on the $n$-leptons + $m$-jets + $\met$ signal for different values of $n$\cite{arg_jhep1, arg_jhep2}. In this paper we focus on the light gluino scenario (see Sec.~\ref{Section:gluinomasslimit}). We choose characteristic benchmark points from Figs.\ref{LL_0.5_0.5} to \ref{LLR_lsp_slep} (excluding Figs.\ref{LL_0.75_0.25_A}, \ref{LLR_0.75_0.25_A} and \ref{LLR_0.75_0.25_B}) which are allowed by the combined constraints and correspond to different relic density producing mechanisms (see Tables 1 and 2). Using the latest ATLAS data in search channels with $n$ = 0 \cite{atlas0l}, $n$ = 1 \cite{atlas1l} and $n$ = 2 (same sign dilepton) \cite{atlas2l}, we reanalyse the gluino mass limits in all cases (see Table~\ref{tab3}). In our generator level simulation we have adopted the selection criteria of Refs.\cite{atlas0l,atlas1l,atlas2l}. It is worth noting that the $\mgl$ limit varies considerably with the search channel for each BP. For different scenarios the strongest limit comes from channels corresponding to different $n$. For all scenarios with an L-slepton lighter than the $\chonepm$ (BP 1-6), these limits come from the $n$ = 1 channel. In the remaining cases (BP 7 - 10) the $n$ = 0 channel yields the best limits. However, the above limits for all scenarios lie in a reasonably narrow range: 1105 - 1250 GeV. Thus the limit on $\mgl$ is only moderately sensitive to the EW sector if it is derived from a multichannel analysis.
Taking a cue from the above discussion, the observables which may potentially discriminate among various scenarios can be introduced. We define three ratios $r_1$, $r_2$ and $r_3$ (Table~\ref{tab4}) that are associated with relatively small theoretical errors (see Sec.~\ref{Section:gluinomasslimit}). They are derived using the event rates for $n$ = 0, 1 and 2 for a gluino mass of 1.25 TeV, which is just beyond the reach of the recently concluded LHC experiments (see Table~\ref{tab3}). The values of these ratios indeed illustrate that sufficiently accurate measurements may discriminate among the underlying scenarios. { \bf Acknowledgments : } AD acknowledges the award of a Senior Scientist position by the Indian National Science Academy. MC would like to thank the Council of Scientific and Industrial Research, Government of India, for financial support. | 14 | 4 | 1404.4841
1404

| 1404.4888_arXiv.txt | The development of synoptic sky surveys has led to a massive amount of data for which the resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered an outlier insofar as it has a low joint probability. By leaving out one of the classes from the training set we perform a validity test and show that when the random forest classifier attempts to classify unknown light-curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive datasets given that the training process is performed offline. We tested our algorithm on \NumMachoLC{} light-curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about \NumOutliers{} objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs.
Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to do a deeper analysis. | Several important discoveries in astronomy have happened serendipitously while astronomers were examining other effects. For example, William Herschel discovered Uranus on March 13 1781 \citep{Herschel1857} while surveying bright stars and nearby faint stars. Similarly, Giuseppe Piazzi found the first asteroid, Ceres, on January 1 1801 \citep{Serio2001} while compiling a catalog of star positions. Equally unexpected was the discovery of the cosmic microwave background radiation (CMB) in 1965 by Arno Penzias and Robert Wilson, while testing the Bell Labs horn antenna \citep{Penzias1965}. With the proliferation of data in astronomy and the introduction of automatic methods for classification and characterization, the keen astronomer has been progressively removed from the analysis. Anomalous objects or mechanisms that do not fit the norm are now expected to be discovered systematically: serendipity is now a machine learning task. As a consequence, the astronomer\rq s job is no longer to be behind the telescope, but to select and interpret the increasing amount of data that technology is providing. \begin{figure*}[] \centering \includegraphics[width=7in]{smiley++.pdf} \caption{Simple illustration of the method. In most unsupervised methods the red points in the middle will not be considered as outliers because they are in a region with point density that is not separable. The product of the probabilities or the sum of the distances to the known classes may not be adequate as an outlier score, and therefore the joint probability is a better measure for outliers.
This case occurs when the conditional probability is lower than the marginal probability, as can be seen from this simple illustration.} \label{fig:smiley} \end{figure*} Outlier detection, as presented here, can guide the scientist in identifying unusual, rare or unknown types of astronomical objects or phenomena (e.g. high redshift quasars, brown dwarfs, pulsars and so on). These discoveries might be useful not only to provide new information but also to flag observations which might require further and deeper investigation. In particular, our research detects anomalies in photometric time series data (light-curves). For this work, each light-curve is described by 13 variability characteristics (period, amplitude, color, etc.) termed \textit{features} \citep{Kim2011,Pichara2012}, which have been used for classification. It is worth noting that the method developed in this paper is not only applicable to time-series data but could also be used for any type of data that needs to be inspected for anomalies. In addition to this advantage, the fact that it can be applied to big data makes this algorithm suitable for almost any outlier detection problem. Many outlier detection methods have been proposed in astronomy. Most of them are unsupervised techniques, where the assumption is made that there is no information about the set of light-curves or their types \citep{Connolly2010}. One of these approaches considers a point-by-point comparison of every pair of light-curves in the database using a correlation coefficient \citep{Protopapas2008}. Other techniques search for anomalies in lower-dimensional subspaces of the data in order to deal with the massive number of objects or the large quantity of features that describe them \citep{Henrion2012, Connolly2010}. Clustering methods are equally applied in the astronomical outlier detection area, aiming to find clusters of new variability classes \citep{Bhattacharyya, Rebbapragada2008}.
Unfortunately, these methods either scale poorly with massive data sets and with high-dimensional spaces, or explore the data only partially, therefore missing possible outliers. In this paper we face these constraints by creating an algorithm able to efficiently deal with big data and capable of exploring the data space as exhaustively as possible. Furthermore, we address this matter from a different point of view, namely the one presented by \citet{He2006} as \lq\lq the new-class discovery challenge\rq\rq. Contrary to unsupervised methods, it relies on using labeled examples for each known class in the training set, and unlike supervised methods, we assume the existence of some rare classes in the data set for which we do not have any labeled examples. This approach takes advantage of available information, but it does not restrict the anomalous findings to a certain type of light-curves. Furthermore, in unsupervised anomaly detection methods, in which no prior information is available about the abnormalities in the data, anything that differs from the whole dataset is flagged as an outlier, and consequently many of the anomalies found would simply be noise. In contrast to these techniques, supervised methods incorporate specific knowledge into the outlier analysis process, thus obtaining more meaningful anomalies. This is illustrated in Figure~\ref{fig:smiley}. The blue and green points represent instances in a two-dimensional feature space from known class 1 and class 2 respectively. The shaded areas represent the boundaries learned from a classifier. The grey points represent isolated outliers and the red points represent outlier classes. In most unsupervised methods the red points in the middle will not be considered as outliers because they are in a region with point density that is not separable. In the most naive supervised methods, anything that is outside the boundaries is considered an outlier.
For the example of the outlier class in the middle, the product of the probabilities or the sum of the distances to the known classes may not be adequate as an outlier score, and therefore the joint probability is a better measure for outliers. This case occurs when the conditional probability is lower than the marginal probability\footnote{This is not necessarily true for all cases} as can be seen from this simple illustration. The conditional probability shown on the left is smaller than the marginal probability shown on the right. Our model will consider those objects as outliers. In the first stage of our method we build a classifier that is trained with known classes (every known object is represented by its features and a label). We then use the classifier decision mechanism to our advantage. More precisely, we learn a probability distribution for the classifier votes on the training set in order to model the behavior of the classifier when the objects correspond to a known variability class. The intuition behind this method is to recognize, and thus to learn, the way the classifier is confused when it comes to voting. By confusion, we refer not only to the hesitation between two or more classes for an object label, but also to the weights it assigns to each of these possibilities. Therefore, when an unlabeled light-curve is fed into the model, the classifier attempts to label it and, if this classifying behavior is known by the model, the object will have a high probability of occurrence and consequently a low outlierness score. On the contrary, the object will have a higher anomaly score and will be flagged as an outlier candidate insofar as the classifier operates in a different way from the previously known mechanisms. Once our outlier candidates are selected, an iterative post-analysis stage becomes necessary.
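Before turning to the post-analysis, the scoring step just described can be sketched numerically. This is a minimal sketch using scikit-learn, assuming synthetic Gaussian features in place of the 13 light-curve features and a kernel density estimate standing in for the paper's Bayesian network over the vote distribution; it is illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity

# Synthetic stand-in for 13 light-curve features of two known classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 13)),
               rng.normal(4.0, 1.0, (200, 13))])
y = np.array([0] * 200 + [1] * 200)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Vote fractions of the forest for each training object (rows sum to 1);
# the density of these vote vectors plays the role of the joint voting
# distribution (here a KDE instead of the paper's Bayesian network).
votes = rf.predict_proba(X)
kde = KernelDensity(bandwidth=0.05).fit(votes)

def outlierness(x):
    """Low joint probability of the vote vector -> high outlierness."""
    v = rf.predict_proba(np.atleast_2d(x))
    return -kde.score_samples(v)[0]

# An object between the classes confuses the forest: its vote vector is
# rare among the training votes, so it scores higher than a class member.
print(outlierness(np.full(13, 2.0)) > outlierness(X[0]))  # True
```

Applied to the held-out-class test described in the abstract, the same score would flag the unseen class through its unusual vote vectors.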
By visual inspection we discriminate artifacts from true anomalies: a) we remove the artifacts systematically from our data set, and b) we create classes of spurious objects that we add to our training set. We then re-run the algorithm and obtain new candidates. These steps are repeated until no apparent artifacts remain in our outlier list, after which a clustering method is finally executed. The purpose of this phase is to group similar objects into new variability classes and consequently to give them an astronomical interpretation. Finally, we cross-match the most interesting outliers with all publicly available catalogs with the aim of verifying whether there is any additional information about them. In particular, we are interested in knowing whether they belong to a known class. If they do not, the outliers will be followed up using spectroscopy to analyze their identity and behavior in depth. To achieve this, we use random forest (RF) \citep{Breiman2001} for the supervised classification in order to obtain the labeling mechanism for each class on the training set. RF has been extensively and successfully used in astronomy for classification tasks \citep{Pichara2013, Kim2014}. Starting with the RF output, we construct a Bayesian Network (BN) with the purpose of extracting the classification patterns, which we use for our final outlier score. The paper is organized in the following way: Section \ref{sec:related} is devoted to other methods related to anomaly detection in machine learning and astronomy. In section \ref{sec:background} we detail the background theory, including the basic building blocks of random forests and Bayesian Networks. Our approach and the pipeline followed in the paper are shown in section \ref{sec:methodology}. Section \ref{sec:data} contains the information about the data used in this work, and section \ref{sec:Results} presents the results of the performed tests and the experiments with real data, including re-training and elimination of artifacts.
We proceed by explaining in section \ref{sec:post} the post-analysis process. Finally, conclusions follow in section \ref{sec:conclusions}. | \label{sec:conclusions} The generation of precise, large and complete sky surveys in recent years has increased the need for automated analysis tools to process this tremendous amount of data. These tools should help astronomers to classify stars, characterize objects and detect anomalies, among other applications. In this paper we presented an algorithm based on a supervised classifier mechanism that enables us to discover outliers in catalogs of light-curves. To do so we trained a random forest classifier and used a Bayesian Network to obtain the joint probability distribution, which was used for our outlierness score. In contrast to existing methods, our work comprises a supervised algorithm where all the available information is used to our advantage. Since the amount of data to be processed is huge, one could have expected a high computational complexity and the exhaustion of computational resources. Nevertheless, our algorithm is expensive only in the training stage and extremely fast in the analysis of unknown light-curves, allowing us to explore very large datasets. Furthermore, our method is not restricted to astronomical problems and could be applied to any database where anomaly detection is necessary. The results from the application of our work on catalogs of classified periodic stars from the MACHO project are encouraging, and establish that our method correctly identifies light-curves that do not belong to these catalogs as outliers. We have identified light-curves that were artifacts caused by instrumental, mechanical, electronic or human errors, and about \NumOutliers{} light-curves that emerged as intrinsic. After cross-matching these candidates with the available catalogs we found known but rare objects among our outliers, as well as objects for which no previous information was available.
By performing clustering we classified some of them as new variability classes and others as intriguing unique outliers. As future work, these objects will be followed up using spectroscopy in order to characterize and identify them with new observations. We hope that through this analysis we will be able to find more of these objects and turn our isolated outliers into new known variability classes. On the other hand, we plan to improve our algorithm in the future by creating new robust features and by constructing a larger and more complete training set. Furthermore, we aim to apply our algorithm to other large sky surveys such as EROS \citep{Ansari2004}, Pan-STARRS \citep{Hodapp2004} and, when completed, LSST \citep{Tyson}. Finally, in order to help astronomers, we are planning a full software release in the near future, which will include light-curve feature calculation and the application of our algorithm, available as downloadable software and as an on-line tool with web services. | 14 | 4 | 1404.4888
1404

1404.3699_arXiv.txt | {We investigate inflationary Higgs dynamics and constraints on the Standard Model parameters assuming the Higgs potential, computed to next-to-next-to-leading order precision, is not significantly affected by new physics. For a high inflationary scale $H\sim 10^{14}$ GeV suggested by BICEP2, we show that the Higgs is a light field subject to fluctuations which affect its dynamics in a stochastic way. Starting from its inflationary value the Higgs must be able to relax to the Standard Model vacuum well before the electroweak scale. We find that this is consistent with the high inflationary scale only if the top mass $m_t$ is significantly below the best fit value. The region within $2\sigma$ errors of the measured $m_t$, the Higgs mass $m_h$ and the strong coupling $\alpha_s$ and consistent with inflation covers approximately the interval $m_t \lesssim 171.8\,{\rm GeV} + 0.538(m_h-125.5\,{\rm GeV})$ with $125.4\,{\rm GeV}\lesssim m_h\lesssim 126.3\,{\rm GeV}$. If the low top mass region could be definitively ruled out, the observed high inflationary scale alone, if confirmed, would seem to imply new physics necessarily modifying the Standard Model Higgs potential below the inflationary scale.} \preprint{HIP-2014-07/TH} \begin{document} | With the confirmed discovery of the Standard Model (SM) Higgs boson at LHC \cite{LHC}, it is now apposite to study in detail the evolution of the Standard Model Higgs during inflation. Here we do not adopt any particular inflationary model but merely assume that there is a period of superluminal expansion with a very slowly changing Hubble rate $H$. Intriguingly, as has been much discussed lately, the Higgs field could be the inflaton \cite{Bezrukov:2007ep,Barvinsky:2008ia,Bezrukov:2008ut,Bezrukov:2010jz}, albeit at the expense of an abnormally large non-minimal coupling to gravity.
However, if confirmed, the detection of primordial gravitational waves by BICEP2 determines the inflationary energy scale to be $\rho^{1/4}\sim 10^{16}$ GeV at the horizon crossing of the observable patch, together with a tensor-to-scalar ratio $r=0.20{}^{+0.07}_{-0.05}$ \cite{Ade:2014xna}, which appears to be at odds with Higgs inflation (see, however, \cite{Hamada:2014iga,Masina:2014yga,Germani:2014hqa}). Here we do not consider this or any other modified SM scenario but rather investigate SM Higgs dynamics assuming its couplings are not significantly affected by whatever new physics is driving inflation. The starting point for our analysis is the next-to-next-to-leading order expression for the SM effective Higgs potential. As is well known, at high Higgs field values the SM potential typically becomes unstable as one moves beyond a critical field value $h = h_c$, and there is a local maximum located at $h_{\rm max}< h_c$. Both the point of instability $h_c$ and $h_{\rm max}$ are very sensitive to the SM parameter values as measured at the electroweak scale \cite{Degrassi:2012ry,Chetyrkin:2012rz,Bezrukov:2012sa}; for the best fit SM parameters $h_{c} \sim 10^{10}$ GeV. However, the instability can be pushed up to $10^{16}$ GeV and beyond by lowering the top mass value. Consistency of the setup of course requires the Higgs potential to be stable at the inflationary scale implied by BICEP2 \cite{Ade:2014xna}. In particular, as pointed out already in \cite{riotto} and later discussed in \cite{higgsinstability}, the inflationary fluctuations should not push the Higgs field over the local maximum $h_{\max}$ and into the false vacuum during the last $60$ e-folds of inflation, corresponding to the observable universe. In other words: we require that at the end of inflation we find ourselves in the region of field space from which the SM vacuum can be dynamically reached.
Imposing this constraint we identify the region in the space of the top mass, the Higgs mass, and the strong coupling where the SM Higgs potential remains compatible with the measured inflationary scale of $\rho^{1/4}\sim 10^{16}$ GeV. Given the generic form of the Higgs potential, the question then is: how does the Higgs field evolve during inflation? The answer very much depends on whether the Higgs is a light field or not, but also on the initial field value. Our starting point is that, well before the electroweak symmetry breaking takes place, the Higgs field must find itself far away from the instability and close to the low-energy vacuum $h=\nu\simeq246$ GeV. If, at the onset of inflation, the Higgs is on the wrong side of the local maximum at $h_{\rm max}$, it must tunnel during inflation to the other side, unless the false vacuum is lifted by thermal corrections after the end of inflation. Since the SM expression for the Higgs potential much beyond $h_{\rm max}$ must be modified by unknown new physics, we cannot assign a model-independent probability measure for such a tunneling event. However, since tunneling rates depend exponentially on the differences of the free energies, if tunneling from $h\gg h_{\rm max}$ takes place, the most probable field value afterwards is $h_{\rm max}$, the local maximum. Tunneling could take place at any time before the end of inflation, and of course, the initial field value could also be $h\ll h_{\rm max}$ simply by chance. We are thus led to study the dynamics starting from arbitrary initial values in the range $h \leqslant h_{\rm max}$. We find that the SM Higgs is either a light field to start with or becomes light after at most a few e-folds, and its energy density is small compared to the inflationary scale. As its contribution to the total energy density is tiny, the Higgs condensate (zero mode) field acquires nearly scale invariant fluctuations on superhorizon scales.
We find that quantum fluctuations dominate over the classical motion close to the maximum $h_{\rm max}$ as well as in the asymptotic regime $h\ll h_{\rm max}$. In the asymptotic quantum regime the mean field fluctuates and is random walking while local perturbations are also being generated. The typical values of the Higgs condensate after the end of inflation, together with its fluctuations, are directly determined by the inflationary scale. If the mechanism for generating curvature perturbation is sensitive to the Higgs value, for example through a modulation of the inflaton decay rate \cite{Lyth:2005qk,Choi:2012cp}, the Higgs fluctuations could leave an imprint in the primordial metric perturbations. In this case the transition from classical Higgs dynamics to the quantum regime could also generate characteristic features in the primordial perturbations, provided the transition occurs when observable scales are crossing the horizon. The paper is organized as follows. In section \ref{sec:inf} we review the form of the radiatively corrected SM Higgs potential and derive consistency conditions for a high scale inflation with the SM Higgs as a spectator field during inflation. In section \ref{sec:dynamics} we present a detailed analysis of the dynamics of the SM Higgs during inflation. Finally, in section \ref{sec:discussion} we summarize the results and discuss possible consequences of SM modifications. | \label{sec:discussion} We have considered the constraints imposed on the Standard Model by the assumption that up to the inflationary scale, the Higgs potential is at least approximatively given by the pure SM prediction and not significantly affected by the field(s) driving inflation. These constraints are of cosmological nature and follow from the fact that during inflation, for all practical purposes the SM Higgs is a light field, which we have verified. 
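The competition between classical rolling and quantum kicks invoked above can be summarised by the standard slow-roll estimate, added here for orientation (it is not an equation from the text): per e-fold a light field drifts classically by $\Delta h_{\rm cl}\simeq |V'(h)|/3H^2$, while receiving stochastic kicks of typical size $\Delta h_{\rm q}\simeq H/2\pi$, so fluctuations dominate the dynamics wherever

```latex
\[
\frac{|V'(h)|}{3H^2} \;\lesssim\; \frac{H}{2\pi}
\qquad\Longleftrightarrow\qquad
|V'(h)| \;\lesssim\; \frac{3H^3}{2\pi}\,,
\]
```

which is satisfied both near $h_{\rm max}$, where $V'$ vanishes, and in the small-field regime $h\ll h_{\rm max}$, consistent with the two quantum regimes identified in the text.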
Thus during inflation the Higgs field is subject to fluctuations: there will be local field perturbations, but in addition, the mean field also performs a random walk. If inflation lasts long enough, about 200 e-folds, the mean field will have settled into its equilibrium distribution, which can be derived in the stochastic approach, by the horizon exit of observable scales. This will provide the initial condition for the Higgs condensate after inflation, which is an integral part of the initial data for the subsequent hot big bang epoch. For the best fit parameters and in the next-to-next-to-leading order, the potential of the SM Higgs has a local maximum at large field values, $h_{\rm max}\sim 10^{10}$ GeV. Beyond the maximum there is a false vacuum, which can be either stable or unstable. If unstable, it should be stabilized by new physics modifying the SM potential above the scale of the local maximum. The basic assumption here is that new physics has no significant impact on the Higgs potential at field values below $h_{\rm max}$. Whatever value the Higgs field had at the end of inflation, it should relax to the SM vacuum by the time the electroweak symmetry breaking takes place. Unless the false vacuum gets lifted by thermal corrections after inflation, or is extremely shallow, this requirement implies that the Higgs field at the end of inflation must be at or below the local maximum $h_{\rm max}$ so that it can relax into the correct vacuum by classical dynamics. Here we have pointed out, see also \cite{riotto,higgsinstability}, that for the best fit values this requirement is in tension with the high inflationary scale $H_{\rm inf}\sim 10^{14}$ GeV implied by the BICEP2 detection of gravitational waves. During inflation the SM Higgs turns out to be effectively massless for field values below the local maximum. Hence the mean field acquires fluctuations proportional to the inflationary scale $\delta h\sim H_{\rm inf}\sim 10^{14}$ GeV.
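The equilibrium distribution referred to above has a standard closed form in the stochastic (Starobinsky-Yokoyama) approach for a light spectator field in de Sitter space; we quote it here as background, since it is not written out in the text:

```latex
\[
P_{\rm eq}(h) \;\propto\; \exp\!\left(-\,\frac{8\pi^2\, V(h)}{3 H^4}\right),
\]
```

so the relative occupation of field values near the local maximum is suppressed by $\exp[-8\pi^2 V(h_{\rm max})/3H^4]$, which makes the origin of the stability condition $V(h_{\rm max})\gtrsim H^4$ used in the text transparent.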
Therefore, it is not enough that during inflation the Higgs is located below $h_{\rm max}$ when the observable scales exit the horizon. This configuration also has to be stable against inflationary fluctuations, which could carry the mean field over into the false vacuum. We argue that the tunneling rate out of the false vacuum should be negligible over the observable e-folds. We then show that the condition for stability is given by $V(h_{\rm max}) \gtrsim H^4$, where $V(h_{\rm max})$ is the potential energy at the local maximum. Computing $V(h_{\rm max})$ at next-to-next-to-leading order, we find that for the SM Higgs, stability is guaranteed only for a top mass 2-3 $\sigma$ below the best fit value, depending on the measured values of $m_h$ and $\alpha_s$. There may be particle physics reasons for extending the Standard Model, but if the still allowed parameter region depicted in Fig. 2 can be ruled out, the observed high inflationary scale alone would require new physics modifying the Higgs potential. The required modifications should be significant, as moderate shifts $|\delta\lambda|\sim \lambda_{\rm SM}$ of the effective Higgs coupling from its SM value at the inflationary scale would not affect the orders of magnitude in the stability condition $V(h_{\rm max}) \gtrsim H^4$. Note that since $H\propto r^{1/2}$, our conclusion is also not sensitive to the exact value of the tensor-to-scalar ratio. Even if the observed tensor-to-scalar ratio were to decrease significantly from $r\sim 0.2$, the SM vacuum for the best fit parameter values would remain unstable against inflationary fluctuations. We have also carefully investigated the Higgs dynamics during inflation for the SM parameters consistent with the stability condition $V(h_{\rm max}) \gtrsim H^4$. We have argued that the transitions between classical and stochastic regimes in the Higgs dynamics could leave distinct imprints in the spectrum of Higgs fluctuations.
If the transitions occur when the observable scales leave the horizon, and if the Higgs perturbations source either adiabatic or isocurvature metric fluctuations, these imprints could be observable in the CMB. While the paper was in preparation, there appeared an article \cite{Fairbairn:2014zia} which also discusses SM stability in the light of BICEP2, with which our results are in a qualitative agreement. | 14 | 4 | 1404.3699 |
1404 | 1404.0559_arXiv.txt | We study the surface brightness profiles of disc galaxies in the 3.6 $\mu m$ images from the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G) and $K_{\text{s}}$-band images from the Near Infrared S0-Sa galaxy Survey (NIRS0S). We particularly connect properties of single exponential (type I), downbending double exponential (type II), and upbending double exponential (type III) disc profile types, to structural components of galaxies by using detailed morphological classifications, and size measurements of rings and lenses. We also study how the local environment of the galaxies affects the profile types by calculating parameters describing the environmental density and the tidal interaction strength. We find that in majority of type II profiles the break radius is connected with structural components such as rings, lenses, and spirals. The exponential disc sections of all three profile types, when considered separately, follow the disc scaling relations. However, the outer discs of type II, and the inner discs of type III, are similar in scalelength to the single exponential discs. Although the different profile types have similar mean environmental parameters, the scalelengths of the type III profiles show a positive correlation with the tidal interaction strength. | Galaxy discs have been known to follow an exponential decline in the radial surface brightness since the study by \citet{freeman1970}. Disc formation is thought to be closely connected to the initial formation of galaxies, the exponential nature being a result of comparable time-scales for viscous evolution and star formation (e.g. \citealt{lin1987}; \citealt{yoshii1989}). The outermost faint regions of galaxies were first studied in detail in edge-on systems by \citet{vanderkruit1979}, who found that the exponential decay of the surface brightness does not always continue infinitely outwards. 
Instead, a sharp change was found, with a much steeper surface brightness decline in the outermost parts of the disc. More recently, advances in observations have enabled detailed studies of the faint outer regions of more face-on disc galaxies. Similar, though not as sharp, changes in the surface brightness profiles have been found in many face-on galaxies in the optical (\citealt{erwin2005}; \citealt{pohlen2006}; \citealt{erwin2008}; \citealt{gutierrez2011}), and in the infrared (\citealt{munozmateos2013}). Also, most discs are found to be best described as double exponentials with a change of slope between the two exponential subsections. Discs can thus be divided into three main types (\citealt{pohlen2006}; \citealt{erwin2008}): single exponential discs with no break (type I), discs where the slope is steeper beyond the break (type II), and discs where the outer slope is shallower (type III). The type II breaks in face-on galaxies generally appear much further in than the ``truncations'' in edge-on galaxies (see for example \citealt{martinnavarro2012}), thus most likely representing a different feature.
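The three profile types can be illustrated with a simple piecewise ("broken") exponential. The sketch below is ours, not the paper's fitting code, and uses a naive scalelength-ratio rule to label a profile:

```python
import numpy as np

def broken_exponential(R, I0, h_i, h_o, R_br):
    # Intensity profile with inner scalelength h_i, outer scalelength h_o,
    # continuous at the break radius R_br.
    I_br = I0 * np.exp(-R_br / h_i)
    return np.where(R < R_br,
                    I0 * np.exp(-R / h_i),
                    I_br * np.exp(-(R - R_br) / h_o))

def profile_type(h_i, h_o, tol=0.05):
    # Type I: single exponential (h_o ~ h_i); type II: downbending
    # (steeper outside the break, h_o < h_i); type III: upbending (h_o > h_i).
    if abs(h_o - h_i) / h_i < tol:
        return "I"
    return "II" if h_o < h_i else "III"

R = np.linspace(0.0, 10.0, 101)        # radius in arbitrary units
profile = broken_exponential(R, 1.0, h_i=3.0, h_o=1.5, R_br=5.0)

print(profile_type(3.0, 1.5))   # II  (downbending)
print(profile_type(2.0, 3.5))   # III (upbending)
```

In practice the scalelengths and break radius are obtained by fitting such a function to the measured surface brightness profile, and the classification also involves visual inspection.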
Furthermore, the presence of a star formation threshold has also been associated with type II profiles in galaxies in which the break radius is larger than the outer ring radius (type ``II.o-CT'', \citealt{pohlen2006}; see also \citealt{schaye2004}; \citealt{elmegreen2006}; \citealt{christlein2010}). Nevertheless, possible connections between breaks and different structural components, such as rings and spirals, have not yet been systematically studied. Studies of disc breaks in galaxies have become increasingly important with the discovery of stellar migration in discs (e.g. \citealt{sellwood2002}; \citealt{debattista2006}; \citealt{roskar2008a,roskar2008b}; \citealt{schonrich2009a,schonrich2009b}; \citealt{minchev2012}). The idea of migration has changed the paradigm that stars born in the disc do not travel radially far from their place of birth. On the contrary, stars can travel several kiloparsecs radially, both inwards and outwards. Evidence of this process has been found in the solar neighbourhood, where the wide metallicity and age distributions of stars \citep{edvardsson1993} can be explained by radial migration \citep{roskar2008b}. The effects of star formation and radial migration cannot necessarily be considered separately (e.g. \citealt{roskar2008a}). In these simulations the type II break is caused by a drop of the star formation rate beyond the break, due to the reduced amount of cooled gas. However, the outer disc is simultaneously populated by stars radially migrating from the inner disc. In older stellar populations the outer slope is shallower than in younger populations, possibly as a result of more extended radial spreading due to the longer duration of stellar migration (e.g. \citealt{radburnsmith2012}). Colour profiles have also shown that, especially for type II profiles, the discs become increasingly redder beyond the break radius (\citealt{azzollini2008}; \citealt{bakos2008}), also consistent with stellar migration.
This interpretation is not unique, because in fully cosmological simulations \citet{sanchezblazquez2009} see reddening beyond a break radius in the disc even without stellar migration, and argue that it could simply be due to a change in the star formation rate around the break. In their simulation the presence of stellar migration can smooth the mass profile of the galaxy to the point where it appears as a single exponential. However, using similar simulations \citet{roskar2010} noted that cosmological simulations cannot yet definitively determine the relative roles of radial migration and star formation in the outer regions of galaxies. Alternatively, the larger radial velocity dispersion of old stars (e.g. for the solar neighbourhood, \citealt{holmberg2009}) could also explain the observed properties of the outer discs, up to a point. Type III profiles remain more ambiguous. Sometimes they are associated with an outer spheroidal or halo component in the galaxy, thus not being a disc feature at all (type ``III-s'', \citealt{pohlen2006}; see also \citealt{bakos2012}). \citet{comeron2012} has proposed that $\gtrsim 50 \%$ of type III profiles could also be created by the superposition of a thin and a thick disc, when the scalelength of the thick disc is larger than that of the thin disc. Extended UV emission has been found in many galaxies beyond the optical disc (\citealt{gildepaz2005}; \citealt{thilker2005}; \citealt{zaritsky2007}). In such cases the increased star formation at the outskirts of the galaxies could give rise to some of the observed type III profiles. Perhaps the most intriguing possibility for type III profile formation comes from environmental effects. Galaxies live in a hierarchical universe where galaxy mergers are common. These mergers, as well as milder gravitational interactions between galaxies, can certainly change the appearance of the galaxies involved.
Already in the early simulations of \citet{toomre1972} close encounters between galaxies were shown to significantly perturb the outer discs of the involved galaxies, and as a result tidal tails and bridges formed. They also showed that the masses of the galaxies affect the outcome, and the less massive galaxy is more strongly perturbed. Furthermore, predictions from more recent simulations have shown that type III profiles could be a result of minor mergers (e.g. \citealt{younger2007}; \citealt{laurikainen2001}). \citet{pohlen2006} made the first attempt to examine the galaxy environments of the different break types by counting the number of neighbouring galaxies from SDSS within 1 Mpc projected radius, for a recession velocity difference to the target galaxy of $|\Delta v|< 350$ km s$^{-1}$, and absolute magnitude of $M_{\text{r'}}< -16$ magnitudes. They concluded that their criteria for the environment were often too harsh to truly characterise it. More recently \citet{maltby2012} compared field and cluster galaxies at higher redshifts ($z_{phot}>0.055$), and found no differences in the break types in different environments. However, they focused only on the outermost disc regions and possibly missed a significant fraction of profile breaks. Therefore, the question of the influence of galaxy environment on disc profile type remains open. We study the disc and break parameters measured from the radial surface brightness profiles. We aim to systematically associate disc breaks with specific structural components of galaxies, such as rings, lenses, and spirals. In addition, we perform a detailed environmental analysis searching for possible connections among the different disc profile types with the galaxy density and the presence of nearby perturbers. 
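For context on the environmental measures discussed above: a common density estimator, and the basis of parameters like the $\Sigma_3^A$ used later in this paper, is the projected surface density out to the $N$-th nearest neighbour, $\Sigma_N = N/(\pi d_N^2)$. The sketch below is a generic illustration with made-up coordinates; the magnitude and velocity selection cuts actually applied to the neighbour sample are not reproduced here.

```python
import numpy as np

def sigma_n(target_xy, neighbours_xy, n=3):
    # Projected density to the n-th nearest neighbour: Sigma_N = N / (pi d_N^2).
    d = np.linalg.norm(neighbours_xy - target_xy, axis=1)
    d_n = np.sort(d)[n - 1]          # distance to the n-th nearest neighbour (Mpc)
    return n / (np.pi * d_n ** 2)    # galaxies per Mpc^2

target = np.array([0.0, 0.0])
neighbours = np.array([[0.5, 0.0], [0.0, 1.0], [2.0, 0.0], [3.0, 3.0]])
print(round(sigma_n(target, neighbours), 3))  # 0.239 = 3 / (pi * 2.0**2)
```

The estimator is deliberately local: a single close neighbour drives $\Sigma_1$ up strongly, while $\Sigma_3$ requires several nearby companions to register a dense environment.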
As a database we use 3.6 $\mu m$ images from the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G, \citealt{sheth2010}) and $K_{\text{s}}$-band images from the Near Infrared S0-Sa galaxy Survey (NIRS0S, \citealt{laurikainen2011}). The 3.6 $\mu m$ and $K_{\text{s}}$-band images are basically free of extinction, particularly at the large disc radii of most interest here. Both bands trace the old stellar population, which is important because of the expected wavelength dependency of the disc scalelengths (\citealt{bakos2008}; \citealt{radburnsmith2012}). In the environmental analysis we use the 2 Micron All Sky Survey (2MASS) Extended Source Catalog (XSC, \citealt{jarrett2000}) and the 2 Micron All Sky Survey Redshift Survey (RSC, \citealt{huchra2012}). The outline of this paper is as follows. In section \ref{sample-selection} we introduce the sample selection criteria for our study, in section \ref{analysis-methods} we describe the data processing, the analysis methods, and the classification of the profile types. In section \ref{env-ana-methods} we describe the environmental study. In sections \ref{results} and \ref{env_effects} we present the main results of the surface brightness profile analysis and the results of the environmental analysis, respectively. The results are discussed in section \ref{discussion}, and summarised in section \ref{sum-conclusion}. The general parameters of the galaxies, including the environmental parameters, are presented in appendix \ref{app:sample} in Table \ref{app:a}, and the parameters of the discs and breaks in Table \ref{app:b}. 
\begin{figure*} \begin{center} \includegraphics[width=0.95\linewidth]{sample_nir.pdf} \caption{The \textit{left panel} shows the Hubble type distribution of the sample galaxies, and the \textit{right panel} the absolute B-band magnitude distribution of the galaxies.} \label{sample_histo} \end{center} \end{figure*}

\label{sum-conclusion} We present a detailed study of the disc surface brightness profiles of 248 galaxies using the 3.6 $\mu m$ images that form part of the Spitzer Survey of Stellar Structure in Galaxies (S$^4$G, \citealt{sheth2010}). Additionally, 80 galaxies were taken from the Near Infrared S0-Sa galaxy Survey (NIRS0S, \citealt{laurikainen2011}, observed at $K_{\text{s}}$-band). Using the radial surface brightness profiles we measured the properties of the main disc break types, first defined by \citet{erwin2005}. We associate the breaks with possible structural components in these galaxies using existing size measurements of rings and lenses. In addition, we carried out an environmental study of the sample galaxies using the 2 Micron All Sky Survey Extended Source Catalog (XSC) and the 2 Micron All Sky Survey Redshift Survey (RSC), and calculated the parameters describing the environmental galaxy density ($\Sigma_3^A$) and the Dahari parameter ($Q$) for the tidal interaction strength. Our main results are summarised as follows: \begin{itemize} \item The fractions of the different profile types in the near infrared ($3.6 \, \mu m$ and $K_{\text{s}}$-band) are: type I $32 \pm 3$ \%, type II $42 \pm 3$ \%, and type III $21 \pm 2$ \%. We also find type II.i profiles in $7 \pm 2$ \% of the sample. In seven galaxies we see two breaks. These galaxies are counted twice, which explains why the total percentage exceeds 100 \%. \item The inner parts of type III profiles are found to resemble the single exponential discs, while for type II it is the outer disc that more closely resembles the single exponential discs.
This suggests that in galaxies with type II profiles the evolution of the inner parts of the galaxy has been more significant, while in type III profiles the outer disc has undergone substantial evolution. \item $\sim 56 \%$ of type II profiles can be directly connected to outer lenses ($\sim 8 \%$), or to outer rings, pseudorings, and ringlenses ($\sim 48 \%$). Almost all of the type II profiles with Hubble types $T<3$ are associated with these structures, with break radii coincident with the locations of these structures. Therefore in galaxies of Hubble types $T<3$ the breaks are most likely associated with the resonances of bars. \item $\sim 38 \%$ of type II profiles can be visually connected either to the outer edges of intense star formation in the spiral arms or to the apparent outer radii of the spirals. These profiles appear mainly in Hubble types $T>3$, where they account for nearly all type II profiles. \item Only approximately 1/3 of type III profiles could be associated with distinct morphological structures in the galaxies, such as lenses or outer rings. \item For type III profiles a correlation was found between the inner and outer disc scalelengths ($h_i$ and $h_o$) and the Dahari parameter ($Q$), indicating that nearby galaxy encounters are partly responsible for the upbending part of these profiles. \item The disc scalelengths ($h$) and central surface brightnesses ($\mu_0$) of the inner and outer discs were found to be similar in barred and non-barred galaxies when the main types (I, II, III) are studied individually. \end{itemize} | 14 | 4 | 1404.0559
1404 | 1404.5183_arXiv.txt | We assess the effects of super-massive black hole (SMBH) environments on the gravitational-wave (GW) signal from binary SMBHs. To date, searches with pulsar timing arrays for GWs from binary SMBHs, in the frequency band $\sim1-100$\,nHz, include the assumptions that all binaries are circular and evolve only through GW emission. However, dynamical studies have shown that the only way that binary SMBH orbits can decay to separations where GW emission dominates the evolution is through interactions with their environments. We augment an existing galaxy and SMBH formation and evolution model with calculations of binary SMBH evolution in stellar environments, accounting for non-zero binary eccentricities. We find that coupling between binaries and their environments causes the expected GW spectral energy distribution to be reduced with respect to the standard assumption of circular, GW-driven binaries, for frequencies up to $\sim20$\,nHz. Larger eccentricities at binary formation further reduce the signal in this regime. We also find that GW bursts from individual eccentric binary SMBHs are unlikely to be detectable with current pulsar timing arrays. The uncertainties in these predictions are large, owing to observational uncertainty in SMBH-galaxy scaling relations and the galaxy stellar mass function, uncertainty in the nature of binary-environment coupling, and uncertainty in the numbers of the most massive binary SMBHs. We conclude, however, that low-frequency GWs from binary SMBHs may be more difficult to detect with pulsar timing arrays than currently thought. | The merger of a pair of galaxies hosting central super-massive black holes (SMBHs) is expected to result in the formation of a binary SMBH \nocite{bbr80}(Begelman, Blandford \& Rees 1980). 
The central SMBHs sink in the merger remnant potential well through the action of dynamical friction, and form a bound binary when the mass within the orbit of the lighter SMBH is dominated by the heavier SMBH. As stars within the binary orbit are quickly ejected, the binary will decay further only if another mechanism to extract binding energy and angular momentum exists. Proposed mechanisms include slingshot scattering of stars on radial, low angular momentum orbits intersecting the binary \citep{fr76,q96,y02}, and friction against a spherical Bondi gas accretion flow \citep{elc+04} or a circum-nuclear gas disk \citep[e.g.,][]{rds+11}. If the orbital decay process can drive the binary to a small separation, gravitational-wave (GW) emission will eventually cause the binary to coalesce \citep[e.g.,][]{pm63,bcc+06}. In a cosmological context, merging dark matter halos follow parabolic trajectories \citep[e.g.,][]{vll+99,w11}, implying large initial eccentricities \citep[typically $\sim0.6$,][]{hfm03} for the orbits of SMBHs sinking towards galaxy merger remnant centres. Steep stellar density gradients in merging galaxies may reduce this eccentricity; indeed, some models suggest that binary SMBHs are likely to be close to circular upon formation \nocite{cpv87,hfm03}\citep[Casertano, Phinney \& Villumsen 1987; Hashimoto, Funato \& Makino 2007;][]{pr94}. Slingshot interactions between binaries and individual stars again grow the eccentricities \nocite{shm06}\citep[e.g., Sesana, Haardt \& Madau 2006;][]{q96,bpb+09,kpb+12}, because binaries spend more time, and hence lose more energy, at larger separations. \citet{rds+11} found that binary SMBHs embedded in massive self-gravitating gas disks will have large eccentricities, between 0.6 and 0.8, at the onset of GW-dominated evolution. In the GW-dominated regime, however, binaries are expected to quickly circularise \citep[e.g.,][]{pm63,bcc+06}. 
There is no direct observational evidence for the existence of binary SMBHs \nocite{dsd12}(Dotti, Sesana \& Decarli 2012). However, the GW emission from binaries prior to coalescence is an unambiguous signature of their existence. Observing GWs from binary SMBHs will enable binary SMBH physics, as well as models for the formation and evolution of the cosmological SMBH population, to be observationally tested. Here, we focus on the possibility of detecting GWs from binary SMBHs in the early parts of their GW-dominated evolutionary stages with radio pulsar timing arrays \citep[PTAs;][]{fb90,mhb+13}. PTAs are currently sensitive to GWs in the frequency band $\sim1-100$\,nHz, which is complementary to other GW detection experiments. PTAs target both a stochastic, isotropic background of GWs from binary SMBHs \citep[e.g.,][]{ych+11,vlj+11} and GWs from individual binary systems \nocite{esc12,yhj+10}(Yardley et al. 2010; Ellis, Siemens \& Creighton 2012). The summed GW signal from all binary SMBHs in the Universe is expected to approximate an isotropic background, although individual binaries are potentially detectable at all frequencies within the PTA band \citep{rwh+12}. Recent PTA results suggest that a large fraction of existing models for the GW background from binary SMBHs is inconsistent with observations \citep{src+13}. However, most current predictions for the spectral shape \citep{p01}, statistical nature \nocite{svv09}\citep[Sesana, Vecchio \& Volonteri 2009;][]{rwh+12} and strength \citep{s12} of the GW signal from binary SMBHs assume that all binaries are in circular orbits, and losing energy and angular momentum only to GWs. These assumptions correspond to the well-known power law GW background characteristic strain spectrum from binary SMBHs that is proportional to $f^{-2/3}$, where $f$ is the GW frequency. Here, we present an examination of the properties of the GW signal from binary SMBHs given a realistic model for binary orbital evolution. 
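The baseline scaling invoked here, a characteristic strain spectrum $h_{c}(f)\propto f^{-2/3}$ for circular, GW-driven binaries, can be sketched in a few lines; the amplitude below is an illustrative placeholder, not a value from this paper.

```python
# Characteristic strain of a circular, GW-driven binary SMBH background:
#   h_c(f) = A * (f / f_ref)^(-2/3), with f_ref = (1 yr)^-1 ~ 31.7 nHz.
# The amplitude A is an illustrative placeholder, not a measured value.

F_REF = 1.0 / (365.25 * 24.0 * 3600.0)  # (1 yr)^-1 in Hz

def h_c(f_hz, amplitude=1e-15):
    """Characteristic strain at GW frequency f_hz for the pure power-law case."""
    return amplitude * (f_hz / F_REF) ** (-2.0 / 3.0)

# For a pure power law, halving the frequency raises h_c by a factor 2^(2/3);
# binary-environment coupling would attenuate the low-frequency end relative
# to this scaling.
low_over_ref = h_c(F_REF / 2.0) / h_c(F_REF)
```

Deviations from this $f^{-2/3}$ slope at low frequencies are exactly the signature of binary-environment coupling discussed in the text.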
We use a semi-analytic model for galaxy and SMBH formation and evolution \citep[][hereafter G11]{gwb+11} implemented in the Millennium simulation \citep{swj+05} to specify the coalescence rate of binary SMBHs, and augment this with a framework \citep{s10} for the evolution of binary SMBHs in stellar environments. We neglect gas-driven binary evolution, because massive galaxies at low redshifts, which are expected to dominate the GW signal from binary SMBHs, will typically be early-type and gas-poor \nocite{ylm+11,mop12}(e.g., Yu et al. 2011; McWilliams, Ostriker \& Pretorius 2012). Two key phenomena in binary SMBH evolution affect the summed GW signal relative to the case of circular binaries evolving under GW emission alone\footnote{We refer to this as the ``circular, GW-driven case'' throughout the paper.}:
\begin{enumerate}
\item Interactions between binary SMBHs and their environments will accelerate orbital decay compared to purely GW-driven binaries, reducing the time each binary spends radiating GWs. This may reduce the energy density in GWs at the lower end of the PTA frequency band \citep[e.g.,][]{wl03,s13}.
\item While circular binaries emit GWs at the second harmonics of their orbital frequencies, eccentric binaries emit GWs at multiple harmonics \citep{pm63}. Given a population of binary SMBHs, this is expected to transfer GW energy density from lower frequencies in the PTA frequency band to higher frequencies \citep{en07,s13}.
\end{enumerate}
We consider the effects of both these phenomena on the GW signal from binary SMBHs relative to the circular, GW-driven case. We also examine the possibility of detecting bursts of GWs from individual eccentric, massive binaries. In \S2, we outline the binary population model. We give our predictions for the summed GW signal in \S3, along with a discussion of uncertainties in our model. We consider the possibility of detectable GW bursts in \S4.
Finally, we summarise our results in \S5 and present our conclusions in \S6. Summaries of key PTA implications can be found at the ends of \S3, \S4 and \S5. We adopt a concordance cosmology consistent with the Millennium simulation \citep{swj+05}, with $\Omega_{M}=0.25$, $\Omega_{b}=0.045$, $\Omega_{\Lambda}=0.75$, and $H_{0}= 73$\,km\,s$^{-1}$\,Mpc$^{-1}$. | In this paper, we predict both the GW background characteristic strain spectrum and the distribution of strong GW bursts from eccentric binaries. At a GW frequency of (1\,yr)$^{-1}$, we predict a characteristic strain of $6.5\times10^{-16}<h_{c}<2.1\times10^{-15}$ with approximately 68\% confidence. Accelerated binary evolution driven by three-body stellar interactions causes the characteristic strain spectrum to be diminished with respect to a $h_{c}(f)\propto f^{-2/3}$ power-law at $f\lesssim2\times10^{-8}$\,Hz. At these low frequencies, the signal is further attenuated if binary SMBHs are typically more eccentric at formation. The low-frequency signal may be dominated by a few binaries with combined masses ($M_{1}+M_{2}$) greater than $10^{10}\,M_{\odot}$, to a larger extent than predicted in the circular, GW-driven case \citep{rwh+12}. Numerous uncertainties, however, affect our results. These include observational uncertainties in parameters of our model, and theoretical uncertainties in the efficiency of coupling between binary SMBHs and their environments. We also expect between 0.06 and 0.12 GW bursts that produce $>$40\,ns amplitude ToA variations over a 10\,yr observation time. Larger typical binary eccentricities at formation will result in fewer events than if binaries are less eccentric at formation. These bursts are caused by binary SMBHs with combined masses of $\sim10^{10}\,M_{\odot}$, and typically last $\sim8$\,yr. Shorter, stronger bursts are significantly less likely, as are longer bursts.
Upcoming radio telescopes with extremely large collecting areas, such as the Five hundred metre Aperture Spherical Telescope \nocite{lnp13}(FAST, Li, Nan \& Pan 2013) and the Square Kilometre Array \citep[SKA,][]{ckl+04} are likely to significantly expand the sample of pulsars with sufficient timing precision for GW detection as compared to current instruments. PTAs formed with FAST and the SKA will hence be sensitive to a stochastic GW signal at much higher frequencies than current PTAs, which is desirable given the results we present. The mechanism by which binary SMBHs are driven to the GW-dominated regime must involve some form of binary-environment coupling. Hence, independent of the exact model, \textit{there will always be some low-frequency attenuation of the GW signal relative to the circular, GW-driven binary case.} Our results indicate that this attenuation occurs within the PTA frequency band. However, the strength of the binary-environment coupling is quite uncertain, and we urge future work on this topic. Finally, as also emphasised in previous works \citep{en07,s13}, constraining or measuring the spectrum of the GW background at a number of frequencies would provide an excellent test of models for the binary SMBH population of the Universe. | 14 | 4 | 1404.5183 |
1404 | 1404.2595_arXiv.txt | The wavelength-dependence of the extinction of Type Ia \jj in the nearby galaxy M82 has been measured using UV to near-IR photometry obtained with the Hubble Space Telescope, the Nordic Optical Telescope, and the \abu Infrared Telescope. This is the first time that the reddening of a \snia is characterized over the full wavelength range of 0.2--2$\,\mu$m. A total-to-selective extinction, $R_V\geq3.1$, is ruled out with high significance. The best fit at maximum using a Galactic type extinction law yields $R_V= 1.4 \pm 0.1$. The observed reddening of \jj is also compatible with a power-law extinction, $A_{\lambda}/A_V = \left( {\lambda}/ {\lambda_V} \right)^{p}$, as expected from multiple scattering of light, with $p=-2.1 \pm 0.1$. After correction for differences in reddening, \jj appears to be very similar to \fe over the 14 broad-band filter lightcurves used in our study. | The study of the cosmological expansion history using Type~Ia supernovae (\sneia), of which \jj is the closest in several decades \citep[][hereafter G14]{2014ApJ...784L..12G}, has revolutionized our picture of the Universe. The discovery of the accelerating Universe \citep{1998AJ....116.1009R,1999ApJ...517..565P} has led to one of the biggest scientific challenges of our time: probing the nature of {\em dark energy} through more accurate measurements of cosmological distances and the growth of structure in the universe. \sneia remain among the best tools to measure distances, and as the sample grows both in numbers and redshift range, special attention is required in addressing systematic effects. One important source of uncertainty is the effect of dimming by dust. In spite of considerable effort, it remains unclear why the color-brightness relation for \sne Ia from cosmological fits is significantly different from e.g. dimming by interstellar dust with an average $\RV = \AV /\EBV = 3.1$.
In the most recent compilation by \citet{2014arXiv1401.4064B}, 740 low and high-$z$ \sneia were used to build a Hubble diagram using the SALT2 lightcurve fitter \citep{2007AA...466...11G}. Their analysis yields $\beta=3.101 \pm 0.075$, which corresponds to $\RV \sim 2$, although the assumed color law in SALT2 differs from the standard Milky-Way type extinction law \citep{1989ApJ...345..245C}. Several cases of $\RV \lsim 2$ have been found in studies of color excesses of local, well-measured, \sneia \citep[\eg][]{2006AJ....131.1639K,2006MNRAS.369.1880E,2008MNRAS.384..107E,2008A&A...487...19N,2010AJ....139..120F}. A low value of $\RV$ corresponds to a steeper wavelength dependence of the extinction, especially at shorter wavelengths. In general terms, this reflects the distribution of dust grain sizes: a low $\RV$ implies that the light encounters mainly small dust grains. \citet{2005ApJ...635L..33W} and \citet{2008ApJ...686L.103G} suggest an alternative explanation: non-standard reddening of \sneia could originate from multiple scattering of light, e.g., due to a dusty circumstellar medium, a scenario that has been inferred for a few \sneia \citep{2007Sci...317..924P,2009ApJ...693..207B,2012Sci...337..942D}. A tell-tale signature of multiple scattering is a power-law wavelength dependence of the reddening \citep{2008ApJ...686L.103G}, possibly also accompanied by a perturbation of the lightcurve shapes \citep{2011ApJ...735...20A} and IR emission from heated dust regions \citep{2013MNRAS.431L..43J}. \jj in the nearby galaxy M\,82 offers a unique opportunity to study the reddening of a spectroscopically normal (\goobar; Marion~et~al. in prep., 2014) \snia, over an unusually wide wavelength range. Hubble Space Telescope (HST) observations allow us to perform a unique study of color excess in the optical and near-UV, where the difference between the extinction models is the largest.
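As a toy consistency check (not the paper's actual fitting procedure), one can ask what effective $\RV$ a pure power-law extinction law implies; the $B$- and $V$-band effective wavelengths below are assumed nominal values.

```python
# Toy comparison of the two reddening descriptions: a power-law extinction
# A_lambda/A_V = (lambda/lambda_V)^p versus R_V = A_V/E(B-V).
# The effective wavelengths are assumed nominal values, not fitted quantities.

LAMBDA_B, LAMBDA_V = 0.44, 0.55  # microns (assumed B- and V-band centres)

def extinction_ratio(lam_um, p):
    """A_lambda / A_V for the power-law extinction model."""
    return (lam_um / LAMBDA_V) ** p

def implied_rv(p):
    """Effective R_V = A_V / E(B-V) = 1 / (A_B/A_V - 1) for the power law."""
    return 1.0 / (extinction_ratio(LAMBDA_B, p) - 1.0)

# For a slope p = -2.1 this gives an effective R_V of ~1.7, i.e. reddening
# much steeper than the Milky Way average of R_V = 3.1.
rv_powerlaw = implied_rv(-2.1)
```

Steeper (more negative) slopes $p$ drive the effective $\RV$ lower still, which is the qualitative sense in which a low $\RV$ and a steep power law describe similar reddening.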
Our data-set is complemented by \Uband\Bband\Vband\Rband\iband observations from the Nordic Optical Telescope (NOT) and \Jband\Hband\Ksband from the \abu Observatory. | We present results from fitting three extinction laws to observations of \jj in 16~photometric bands spanning the wavelength range $0.2$--$2\,\mu$m between phases $-5$ and $+35$ days with respect to \Bband-maximum. We find a remarkably consistent picture with reddening law fits only involving two free parameters. Once reddening is accounted for, the similarity between the multi-color lightcurves of \jj and \fe is striking. We measure an overall steep extinction law with a total-to-selective extinction value $\RV$ at maximum of $\RV=1.4\pm0.1$ for a MW-like extinction law. We also note that fitted extinction laws are consistent when fitted separately around maximum and using the full phase-range. Although the fits slightly disfavor the empirically derived SALT2 color law for \snia, in comparison to a MW-like extinction law as parametrized by FTZ with a low $\RV$, conclusions should be drawn cautiously. SALT2 has not been specifically trained for the near-UV region considered here. Also, there is no prediction for the near-IR. Intriguingly, power-law extinction proposed by \citet{2008ApJ...686L.103G} as a model for multiple scattering of light provides a good description of the reddening of \jj. Increasing this sample is crucial to understand the possible diversity in reddening of \sneia used to measure the expansion history of the Universe. | 14 | 4 | 1404.2595 |
1404 | 1404.5526_arXiv.txt | We examine simulations of isolated galaxies to analyse the effects of localised feedback on the formation and evolution of molecular clouds. Feedback contributes to turbulence and the destruction of clouds, leading to a population of clouds that is younger, less massive, and with more retrograde rotation. We investigate the evolution of clouds as they interact with each other and the diffuse ISM, and determine that the role of cloud interactions differs strongly with the presence of feedback: in models without feedback, scattering events dramatically increase the retrograde fraction, but in models with feedback, mergers between clouds may slightly increase the prograde fraction. We also produce an estimate of the viscous time-scale due to cloud-cloud collisions, which increases with increasing strength of feedback ($t_\nu\sim20$ Gyr vs $t_\nu\sim10$ Gyr), but is still much smaller than previous estimates ($t_\nu\sim1000$ Gyr); although collisions become more frequent with feedback, less energy is lost in each collision than in the models without feedback. | Giant molecular clouds (GMCs) are a fundamental component of galactic structure, making an important contribution to the ISM, and having a dominant role in hosting star formation. Mergers between GMCs may also act as an effective viscosity that is weak but not negligible \citep[hereafter WT12]{2012MNRAS.421.2170W}. The mergers that generate this viscosity also contribute to the evolution of the internal structure of the simulated GMCs, including the orientation of the GMCs' spins. The precise impact of feedback on the GMC population is also an unresolved question. It is thus important to perform galaxy-scale simulations to properly understand the impact of feedback and cloud-cloud interactions on the GMC population, as well as the impact of the GMC population on galactic evolution. 
Increasing computing power has permitted galaxy-scale hydrodynamic simulations with sufficient resolution and simulation time to resolve molecular cloud evolution \citep{2008MNRAS.391..844D,2008ApJ...680.1083R,2008MNRAS.385.1893D,2009MNRAS.392..294A,2009ApJ...700..358T,2011MNRAS.417.1318D,2011MNRAS.413.2935D,2011ApJ...730...11T,2012MNRAS.425.2157D,2013MNRAS.432..653D,2013MNRAS.tmpL.173D,2013ApJ...776...23B}. These models typically consist of a smooth exponential disc of gas which fragments into clouds as the system evolves. In our previous paper (WT12) we performed numerical simulations of this type and were able to determine the strength of effective viscosity resulting from cloud-cloud collisions in these models. We found the viscous time-scale is on the order of $t_\nu\sim10$ Gyr, much shorter than previous estimates of $t_\nu\sim1000$ Gyr \citep{2002ApJ...581.1013B}. The viscous time-scale in these simulations is on the order of a Hubble time, but while this suggests that the effective viscosity due to cloud-cloud collisions is not a dominant effect, it is still considerably stronger than previously predicted. The viscous timescale may be even shorter at low resolutions, perhaps having a significant effect on cosmological simulations that insufficiently resolve the disc. However, it has been noted that the properties and evolution of molecular clouds depend strongly on the choice of numerical models and parameters, such as the model for the stellar potential \citep{2012MNRAS.425.2157D}, the strength, nature, and presence of stellar feedback \citep{2011ApJ...730...11T,2011MNRAS.413.2935D,2012MNRAS.425.2157D}, softening length, and temperature floor (WT12). Our previous numerical simulations (WT12) were performed in the absence of feedback, but feedback is known to have a significant effect on the properties and evolution of clouds. 
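For orientation, the order of magnitude of such a viscous time-scale can be sketched as $t_\nu \sim R^2/\nu$ with an effective viscosity $\nu \sim \lambda\sigma$ from cloud-cloud collisions; all input numbers below are illustrative assumptions, not values from WT12 or the simulations described here.

```python
# Order-of-magnitude sketch: viscous time-scale t_nu ~ R^2 / nu, with an
# effective viscosity nu ~ lambda * sigma set by cloud-cloud collisions.
# All inputs are illustrative assumptions, not simulation results.

KPC = 3.0857e19  # m
GYR = 3.156e16   # s

def viscous_timescale_gyr(r_disc_kpc, mfp_kpc, sigma_kms):
    """t_nu in Gyr for disc radius R, cloud mean free path lambda, dispersion sigma."""
    nu = (mfp_kpc * KPC) * (sigma_kms * 1.0e3)  # effective viscosity in m^2 s^-1
    return (r_disc_kpc * KPC) ** 2 / nu / GYR

# A ~10 kpc disc, a ~1 kpc cloud mean free path, and a ~10 km/s velocity
# dispersion give t_nu ~ 10 Gyr, the order of magnitude quoted above.
t_nu = viscous_timescale_gyr(10.0, 1.0, 10.0)
```

In this picture, feedback changes $t_\nu$ by altering both the collision rate (through $\lambda$) and the energy lost per collision, which is why the simulated time-scale shifts between the feedback and no-feedback runs.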
This will likely have an effect on the effective viscosity from cloud collisions, as the details of cloud interactions will depend on cloud substructure and on the velocity distribution, which will be directly affected by energy input from feedback. Stellar feedback is traditionally modelled by adding thermal or kinetic energy to regions that pass certain criteria for star formation \citep[as discussed in numerous places e.g.][]{2000ApJ...545..728T,2006MNRAS.373.1074S,2009ApJ...695..292C,2010ApJ...717..121C}, and by simultaneously transferring some mass from the gaseous component of the simulation into the collisionless stellar component. Sub-grid models have also been produced that model feedback with effective equations of state \citep[e.g.,][]{2003MNRAS.339..289S}. These methods typically assume that star formation is not well resolved in time or space, and so it is possible (and indeed necessary) to make use of simplified models that represent the large-scale average effects of feedback (e.g. the input of thermal or kinetic energy) without accounting for the particular details of the star formation and feedback processes. For instance, it is not possible to directly capture the photoheating of H\textsc{II} regions by O/B associations if the spatial resolution is insufficient to resolve H\textsc{II} regions and the numerical time-step is larger than the typical lifetimes of O/B stars; hence such small-scale effects must be included as sub-grid models, if included at all. In this work, we use the feedback method of \citet{2000ApJ...545..728T}, which assumes that feedback is dominated by supernovae, that large stars produce supernovae immediately, and that each star particle contains the entire stellar initial mass function.
We note that these assumptions are not necessary when applied to simulations with resolutions sufficient to resolve individual molecular clouds, and that more sophisticated methods \citep[e.g.][]{2012MNRAS.422.2609R,2013MNRAS.433...69H} could improve the accuracy of our simulated galaxy and its molecular clouds. To further quantify the effects of feedback in simulated discs, we examine the source of the angular momentum distribution of clouds. Observations \citep{1999A&AS..134..241P,2003ApJ...599..258R,2011ApJ...732...79I,2011ApJ...732...78I} have shown that $40-60$\% of molecular clouds spin retrograde with respect to Galactic rotation. An unperturbed disc with a falling rotation curve should primarily form prograde clouds, unless the clouds form in a contrived geometry \citep{1966MNRAS.131..307M,1993prpl.conf..125B}. Large-scale perturbations such as spiral shocks \citep{1995MNRAS.275..209C} may potentially drive the production of retrograde clouds \citep[or at least affect the cloud population;][]{2014MNRAS.439..936F}, although previous simulations \citep{2011MNRAS.417.1318D,2009ApJ...700..358T,2011ApJ...730...11T,2013ApJ...776...23B} that have produced retrograde clouds have successfully done so without a galactic spiral. The fraction of retrograde clouds varies greatly between these simulations, and so the source of the angular momentum distribution has remained unclear. \citet{2009ApJ...700..358T} performed simulations where the first clouds that formed from the galaxy's initially smooth density profile were strongly prograde, with retrograde clouds forming at later time ($t>140$ Myr) from over-dense gas already disturbed by cloud interactions -- \citet{2013ApJ...776...23B} found $18\%$ of clouds were retrograde after $240$ Myr. \citet{2008MNRAS.391..844D} similarly states that retrograde clouds form as a result of clouds forming from an inhomogeneous ISM, stirred by cloud collisions and/or feedback, but finds that $\sim40$\% of clouds are retrograde. 
There is a further disagreement in whether retrograde fractions increase \citep{2011MNRAS.417.1318D} or decrease \citep{2011ApJ...730...11T} with increasing strength of feedback. In this work we compare the properties of molecular clouds as a function of angular momentum to resolve these discrepancies and determine the prime drivers of the angular momentum distribution and the effects of feedback. The structure of the paper is as follows: in section~\ref{simsect} we present our simulations, including codes used and initial conditions. In section~\ref{analsect} we summarise our analysis techniques for identifying and tracking clouds, and for quantifying the differences between the cloud populations. In section~\ref{resultsect} we give the results of these analysis techniques, and comment on their significance. In section~\ref{concsect} we present our conclusions. | \begin{itemize}
\item We find that the viscous time-scale due to cloud-cloud collisions decreases with the addition of feedback. The clouds are less massive and collisions between them are more frequent when feedback is included, because cloud collisions are less violent and less efficient at losing energy.
\item We also find that the feedback algorithm considered here significantly reduces the number of clouds with strongly prograde rotation. After careful analysis, we conclude that small young clouds are more strongly influenced by the turbulence of the ISM and are more likely to form retrograde while large old clouds tend to approach the average angular momentum of the galaxy and thus are more likely to be prograde. Stellar feedback contributes to turbulence and disrupts clouds (reducing the population of old large clouds), producing fewer strongly prograde clouds.
\item Finally, we find that interactions between clouds produce very different results depending on the presence of feedback.
Without feedback, interactions primarily act to increase the retrograde fraction through scattering events between clouds. However, when localised feedback is included, interactions have little effect, and perhaps act to increase the {\em prograde} fraction as clouds form from a high velocity-dispersion medium, and only later merge and grow in mass and angular momentum. Diffuse heating does not have the same effect as localised feedback, as it acts to smooth out the density distribution and rotation curve, producing more strongly prograde clouds.
\end{itemize} | 14 | 4 | 1404.5526
1404 | 1404.3907_arXiv.txt | Extensive air showers, induced by high energy cosmic rays impinging on the Earth's atmosphere, produce radio emission that is measured with the LOFAR radio telescope. As the emission comes from a finite distance of a few kilometers, the incident wavefront is non-planar. A spherical, conical or hyperbolic shape of the wavefront has been proposed, but measurements of individual air showers have been inconclusive so far. For a selected high-quality sample of 161 measured extensive air showers, we have reconstructed the wavefront by measuring pulse arrival times to sub-nanosecond precision in 200 to 350 individual antennas. For each measured air shower, we have fitted a conical, spherical, and hyperboloid shape to the arrival times. The fit quality and a likelihood analysis show that a hyperboloid is the best parametrization. Using a non-planar wavefront shape gives an improved angular resolution, when reconstructing the shower arrival direction. Furthermore, a dependence of the wavefront shape on the shower geometry can be seen. This suggests that it will be possible to use a wavefront shape analysis to get an additional handle on the atmospheric depth of the shower maximum, which is sensitive to the mass of the primary particle. | A high-energy cosmic ray that enters the atmosphere of the Earth will interact with a nucleus of an atmospheric molecule. This interaction produces secondary particles, which in turn interact, thereby creating a cascade of particles: an \emph{extensive air shower}. The origin of these cosmic rays and their mass composition are not fully known. Due to the high incident energy of the cosmic ray, the bulk of the secondary particles propagate downward with a high gamma factor. As this air shower passes through the atmosphere and the Earth's magnetic field, it emits radiation, which can be measured by antennas on the ground in a broad range of radio frequencies (MHz - GHz) \cite{Allan1966,Jelley1965,Falcke2005}. 
For a review of recent developments in the field see \cite{Huege:2013a}. The measured radiation is the result of several emission processes \cite{CoREAS}, and is further influenced by the propagation of the radiation in the atmosphere with non-unity index of refraction \cite{EVA}. Dominant in the frequency range considered in this study is the emission arising from the interaction with the geomagnetic field \cite{Kahn1966,Allan:1971,Falcke2005,Codalema2009}. An overview of the current understanding of the detailed emission mechanisms can be found in \cite{Huege2012}. The radio signal reaches the ground as a coherent broadband pulse, with a duration on the order of 10 to $\unit[100]{ns}$ (depending on the position in the air shower geometry). As the radio emission originates effectively from a few kilometers in altitude, the incident wavefront as measured on the ground is non-planar. Geometrical considerations indicate that the amount of curvature and the shape of the wavefront depend on the height of the emission region, suggesting a relation to the depth of shower maximum, $X_{\rm max}$. The depth of shower maximum is related to the primary particle type. Assuming a point source would result in a spherical wavefront shape, which is used for analysis of LOPES data \cite{Nigl2008}. It is argued in \cite{Schroeder2011} that the actual shape of the wavefront is not spherical, but rather conical, as the emission is not point-like but stretched along the shower axis. In a recent further refinement of this study, based on CoREAS simulations, evidence is found for a hyperbolic wavefront shape (spherical near the shower axis, and conical further out) \cite{LOPESwavefront:2014}. Hints for this shape are also found in the air shower dataset collected by the LOPES experiment \cite{Lopes2012}.
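The need for a non-planar description is easy to quantify with a toy point-source model (an illustration only; the height and distances are assumed values, and this is not the hyperbolic parametrization fitted later).

```python
# Toy model: for a point source at height h above a flat array, the wavefront
# is spherical, and the delay relative to a plane wave at ground distance r is
#   dt(r) = (sqrt(h^2 + r^2) - h) / c  ~  r^2 / (2 h c)   for r << h.
import math

C = 299792458.0  # speed of light in m/s

def curvature_delay_ns(r_m, h_m):
    """Arrival-time delay (ns) of a spherical wavefront relative to a plane."""
    return (math.hypot(h_m, r_m) - h_m) / C * 1.0e9

# For an assumed emission height of 5 km, the delay at r = 300 m (roughly the
# scale of the inner LOFAR core) is ~30 ns, far larger than sub-ns timing
# precision, so a planar wavefront model is clearly inadequate.
delay_300m = curvature_delay_ns(300.0, 5000.0)
```

Since the delay scales roughly as $r^2/(2hc)$, its measured shape across the array constrains the effective emission height, which is the geometric link to $X_{\rm max}$ mentioned above.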
However, due to high ambient noise levels, the timing precision of these measurements did not allow for a distinction between spherical, hyperbolic and conical shapes on a shower-by-shower basis, and only statistically was a hyperbolic wavefront shape favored. We use the LOFAR radio telescope \cite{LOFAR} to measure radio emission from air showers, in order to determine wavefront shapes for individual showers. LOFAR consists of an array of two types of antennas: the low-band antennas (LBA) sensitive to frequencies in a bandwidth of $\unit[10-90]{MHz}$, and the high-band antennas (HBA) operating in the $\unit[110-240]{MHz}$ range. While air showers have been measured in both frequency ranges \cite{Schellart:2013,Nelles:2014}, this study only uses data gathered with the $\unit[10-90]{MHz}$ low-band antennas. A combination of analog and digital filters limits the effective bandwidth to $\unit[30-80]{MHz}$, which has the least amount of radio frequency interference. For detecting cosmic rays we use the (most densely instrumented) inner region of LOFAR, the layout of which is depicted in Fig.\ \ref{fig:core_layout}.
\begin{figure}
\begin{center}
\includegraphics[width=0.80\textwidth]{images/Superterp_for_wavefront}
\caption{Layout of the innermost 8 stations of LOFAR. For each station, the outer ring of low-band radio antennas (black plus symbols), used for the analysis in this paper, is depicted. Located with the innermost six stations are the particle detectors (grey squares) used to trigger on extensive air showers.}
\label{fig:core_layout}
\end{center}
\end{figure}
LOFAR is equipped with ring buffers (called Transient Buffer Boards) that can store the raw-voltage signals of each antenna for up to 5~seconds. These are used for cosmic-ray observations as described in \cite{Schellart:2013}. Inside the inner core of LOFAR, which is a circular area of $\unit[320]{m}$ diameter, an array of 20 scintillator detectors (LORA) has been set up \cite{Thoudam:2014}.
This air shower array is used to trigger a read-out of the Transient Buffer Boards at the moment an air shower is detected. The buffer boards provide a raw voltage time series for every antenna in a LOFAR station (a group of typically 96 LBA plus 48 HBA antennas that are processed together in interferometric measurements), in which we identify and analyze the radio pulse from an air shower. Analysis of the particle detections delivers basic air shower parameters such as the estimated position of the shower axis, energy, and arrival direction. The high density of antennas of LOFAR, together with a high timing resolution ($\unit[200]{MHz}$ sampling rate), is especially favorable for measuring the wavefront shape. | We have shown that the wavefront of the radio emission in extensive air showers is measured to a high precision (better than $\unit[1]{ns}$ for each antenna) with the LOFAR radio telescope. The shape of the wavefront is best parametrized as a hyperboloid, curved near the shower axis and approximately conical further out. A hyperbolic shape fits significantly better than the previously proposed spherical and conical shapes. Reconstruction of the shower geometry using a hyperbolic wavefront yields a more precise determination of the shower direction, and an independent measurement of the core position. Assuming the resulting reconstructed direction has no systematic bias, the angular resolution improves from $\sim 1\, ^\circ$ (planar wavefront) to $\sim 0.1\, ^\circ$ (hyperbolic). This assumption will be tested in a future simulation study. This improvement will be of particular importance for radio $X_\mathrm{max}$ measurements for highly inclined showers where small deviations in arrival angle correspond to large differences in the slanted atmospheric depth.
The high antenna density and high timing resolution of LOFAR offer a unique opportunity for a detailed comparison with full Monte Carlo air shower simulations, including the arrival time measurements presented here. Furthermore, efforts to integrate timing information within the $X_\mathrm{max}$ measurement technique from \cite{ICRC_Buitink} are currently ongoing.

arXiv:1404.3907 (April 2014)
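The hyperbolic-versus-conical distinction above can be made concrete with a toy arrival-time model. The parametrization and all numbers below are illustrative placeholders (not the fit actually used for LOFAR): a hyperboloid behaves quadratically (spherical-like) near the shower axis and linearly (conically) far from it.

```python
import numpy as np

def arrival_time_ns(d, slope_ns_per_m, b_m):
    """Toy hyperbolic wavefront: pulse arrival lag (ns) vs. distance d (m)
    from the shower axis.  t(d) = s*(sqrt(d^2 + b^2) - b) is ~quadratic
    for d << b and ~linear (conical, asymptotic slope s) for d >> b.
    Illustrative parametrization only, not the LOFAR fit."""
    d = np.asarray(d, dtype=float)
    return slope_ns_per_m * (np.sqrt(d**2 + b_m**2) - b_m)

d = np.array([10.0, 50.0, 200.0, 500.0])    # antenna distances [m]
t_hyp = arrival_time_ns(d, 0.1, 100.0)      # hyperboloid
t_cone = 0.1 * d                            # pure cone with the same far slope
print(np.round(t_hyp, 2))
print(np.round(t_cone - t_hyp, 2))          # ns-level shape differences
```

With per-antenna timing good to $\sim$1 ns, the few-ns differences between the two shapes across a few hundred metres of baseline are measurable, which is why a shower-by-shower shape discrimination becomes possible.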
1404/1404.6525_arXiv.txt

{We present $J, H, CH_{4}$ short (1.578~\mic), $CH_{4}$ long (1.652~\mic) and $K_s$-band images of the dust ring around the 10~Myr old star \prim\ obtained using the Near Infrared Coronagraphic Imager (NICI) on the Gemini-South 8.1~meter Telescope. Our images clearly show, for the first time, the position of the star relative to its circumstellar ring thanks to NICI's translucent focal plane occulting mask. We employ a Bayesian Markov Chain Monte Carlo method to constrain the offset vector between the two. The resulting probability distribution shows that the ring center is offset from the star by 16.7\pp1.3~milliarcseconds along a position angle of \textcolor{black}{26\pp3\dg, consistent with the PA of the ring, 26.47\pp0.04\dg}. We find that the size of this offset is not large enough to explain the brightness asymmetry of the ring. The ring is measured to have mostly red reflectivity across the $JHK_s$ filters, which seems to indicate \textcolor{black}{micron-sized} grains. Just like Neptune's 3:2 and 2:1 mean-motion resonances delineate the inner and outer edges of the classical Kuiper Belt, we find that the radial extent of the \prim\ and the Fomalhaut rings could correspond to the 3:2 and 2:1 mean-motion resonances of hypothetical planets at 54.7~AU and 97.7~AU in the two systems, respectively. \textcolor{black}{A planet orbiting \prim\ at 54.7~AU would have to be less massive than 1.6~\mjup\ so as not to widen the ring too much by stirring.} \\ \\ {\bf Accepted by A\&A on 23 April 2014.} }

We have determined that the \prim\ ring is offset from the star by 16.7\pp1.3~mas based on unsaturated images in five NIR bands. These images unambiguously show the offset and confirm earlier lower-precision measurements \citep{2009AJ....137...53S,2011ApJ...743L...6T}.
\textcolor{black}{The densest part of the ring has a roughly red color across the $JHK_s$ filters, indicating 1--5~\mic\ grains \citep{2008ApJ...673L.191D,2008ApJ...686L..95K} but not 50~\mic\ grains as in the estimates of \citet{2005ApJ...618..385W}.} Away from the peak density we cannot make high precision measurements of the relative reflectivity of the ring, because of the influence of the data reduction process on the wings of the ring. We show that the brightness asymmetry of the ring in the NIR cannot be explained by the pericenter-glow effect \citep{1999ApJ...527..918W} alone. Higher collision rates at the pericenter or some other phenomenon will have to account for 9\% additional asymmetry over that provided by the pericenter glow (3\%). We discuss a possible explanation for the debris disk ring widths, which has not garnered much attention thus far. Just like Neptune's 3:2 and 2:1 mean-motion resonances delineate the inner and outer edges of the classical Kuiper belt, we find that the radial extent of the \prim\ and Fomalhaut rings could correspond to 3:2 and 2:1 mean-motion resonances of hypothetical planets at 54.7~AU and 97.7~AU in the two systems, respectively. For \prim\ we are only sensitive to planets with masses above several Jupiters. \textcolor{black}{However, a planet at 54.7~AU would have to be less massive than 1.6~\mjup\ so as not to widen the ring too much by stirring.}

arXiv:1404.6525 (April 2014)
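The resonance locations invoked above follow directly from Kepler's third law: an exterior $p{:}q$ mean-motion resonance of a planet with semi-major axis $a_p$ lies at $a_p\,(p/q)^{2/3}$. The sketch below evaluates this for the two hypothetical planets; the planet semi-major axes (54.7~AU and 97.7~AU) are the values quoted in the text, and the formula itself is standard celestial mechanics.

```python
def resonance_au(a_planet_au, p, q):
    """Semi-major axis [AU] of the exterior p:q mean-motion resonance of a
    planet at a_planet_au, from Kepler's third law: a = a_p * (p/q)**(2/3)."""
    return a_planet_au * (p / q) ** (2.0 / 3.0)

# Hypothetical planets from the text: 54.7 AU (ring host) and 97.7 AU (Fomalhaut)
for label, a_p in (("ring host", 54.7), ("Fomalhaut", 97.7)):
    a_in = resonance_au(a_p, 3, 2)   # candidate inner ring edge
    a_out = resonance_au(a_p, 2, 1)  # candidate outer ring edge
    print(f"{label}: 3:2 at {a_in:.1f} AU, 2:1 at {a_out:.1f} AU")
```

For the 54.7~AU planet this brackets a ring between roughly 72 and 87~AU, which is the sense in which the two resonances can delineate the observed ring's radial extent.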
1404/1404.1305_arXiv.txt

After the focal plane of the HFI instrument of the Planck mission (launched in May 2009) reached its operational temperature, we observed thermal signatures of interactions of cosmic rays with the Planck satellite, located at the L2 Lagrange point. When a particle hits a component of the bolometers (e.g. thermometer, grid or wafer) mounted on the focal plane of HFI, a thermal spike (called a glitch) due to energy deposition is measured. Processing these data revealed another effect due to particle showers of high energy cosmic rays: High Coincidence Events (HCE), composed of glitches occurring coincidentally in many detectors followed by a temperature increase at the nK to $\mu$K level. A rate of about 100 HCE per hour has been estimated. Two types of HCE have been detected: fast and slow. For the first type, the untouched bolometers reached, within a few seconds, the same temperature as those which were "touched". This can be explained by the storage of the energy deposited in the stainless steel focal plane. The second type of HCE is not fully understood yet. These effects might be explained by extra conduction due to helium released by cryogenic surfaces, creating a temporary thermal link between the different stages of the HFI.

The \textit{Planck} satellite, launched in May 2009 from Kourou (French Guyana), reached the second Sun-Earth Lagrangian point (L2) in July 2009. Since this date, and until January 2012, the \textit{Planck} mission\cite{planck_coll_1} has mapped the entire sky five times. The mission is composed of two instruments: the High Frequency Instrument (HFI) and the Low Frequency Instrument (LFI). The HFI is composed of 54 Neutron Transmutation Doped (NTD) germanium bolometers\cite{ini_res_boloplanck} in a focal plane cooled to a temperature of 100 mK \cite{cryo_syst}. This 0.1K plate also carries two germanium thermometers and two active thermal controls (PID).
Even though the HFI is mapping the sky with an unprecedented photon Noise-Equivalent Power (NEP) of $\sim$ 10$^{-17}$ W/Hz$^{1/2}$, HFI bolometers and thermometers record a large number of thermal spikes due to energy deposited by cosmic rays, called \textit{glitches}\cite{hfi_patanchon}. Cosmic rays are emitted by different sources (stars, supernovae, etc.) and are mainly composed of protons (86\%) and helium nuclei (11\%)\cite{cr_flux}. For particles of energy lower than 10 GeV, the flux is modulated by the solar wind. Above this limit, the spectrum can be described by a power law distribution in energy, E$^{-2.5}$ \cite{cr_lowenergy}. At the Lagrangian point L2, Planck is mostly affected by protons interacting with the different materials composing the satellite. The contribution of solar particles is negligible outside of major solar flares. These particles can interact in two different ways with matter: \begin{itemize}[noitemsep,nolistsep] \item energy deposit: at the relevant particle energies, the deposited energy primarily depends on the material thickness and atomic number, and is dominated by electronic stopping power. \item production of secondary particles: a particle interacting with matter produces a secondary shower of lower-energy particles. Once the primary particle has interacted with material somewhere in the spacecraft, these lower-energy secondaries deposit their energy more efficiently. \end{itemize} The ESA Standard Radiation Environment Monitor (SREM) \cite{srem}, located in the Planck spacecraft, measures the low energy particle flux at L2, which is mainly composed of protons (electrons typically make up only 1\% of the cosmic rays). The SREM is composed of 3 diode sensors: TC1, TC2 and TC3. TC2 is shielded with 1.7 mm of Aluminum and 0.7 mm of Tantalum, which makes it sensitive to ions and protons with energies greater than 39 MeV. TC1 and TC3 are respectively shielded with 1.7 mm and 0.7 mm of Aluminum.
The cosmic ray flux measured by the bolometers is very well correlated with that seen by the TC2 diode, as shown in fig.~\ref{srem_glitch}. The TC1 and TC3 diodes, being less shielded, are able to record solar flares not seen in TC2 and the bolometers. This comparison demonstrates that only protons of an energy above $\sim$ 40 MeV can reach the focal plane and interact with the bolometers. Lower energy particles are stopped by the material surrounding the HFI bolometers. \begin{figure} \begin{center} \includegraphics[width=1\linewidth,keepaspectratio]{figure1} \end{center} \vspace{-1.2em} \caption{Comparison of TC1, TC2 and TC3 diode and bolometer glitch rates for the same period of time (about 150 days), which includes two solar flares, designated by dashed lines. \textbf{Left} : the TC2 diode and glitch rates are very well correlated. The solar flare in June 2011 clearly appears while the one in March 2011 is negligible. \textbf{Right} : the TC1 and TC3 diodes both record the solar flares, meaning that the first one is composed of less energetic particles.} \vspace{-1.2em} \label{srem_glitch} \end{figure}

HCE will be investigated further with an experiment at the Institut d'Astrophysique Spatiale (IAS), in collaboration with JPL, to simulate the effect of helium thermal exchanges between plates at different temperatures. Moreover, data processing is still ongoing and should provide us with more information about HCE. The High Frequency Instrument is a technological success. The information gathered on the HCE is a unique source of information on the behavior of 100 mK systems in space. Future space missions will aim at a per-detector NEP of $\sim$ 10$^{-18}$ W/Hz$^{1/2}$ or less, which will lead to a greater impact of HCE and glitches on data. Thus, future missions will have to be designed taking into account such thermal effects as cosmic rays, vibrations, etc.
Before flight, the thermal impact of cosmic rays on the detectors should be studied, together with conductance and capacitance measurements of all sensitive elements (detectors and any thermally coupled material). Helium, present in most cryogenic systems, might also play a role in thermal conduction. The behavior and localization of helium should be studied to avoid any significant thermal exchange. Finally, in order to reduce the impact of HCE-like thermal events, the thermal regulation could be made more reactive, with a shorter time constant, but with thermometers less sensitive to particles (a difficult trade-off). In order to validate the next generation of sub-kelvin space detectors, tests with energetic space and sea-level particles have to be run. Such tests have been completed with a Transition Edge Sensor (TES) bolometer using an alpha particle source ($^{241}$Am) at APC (see Joseph Martino's paper in this journal). Others are in progress with the Kinetic Inductance Detector (KID) at the Neel Institute \cite{kid}.

arXiv:1404.1305 (April 2014)
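The glitch mechanism invoked throughout (a particle deposits energy, the element's temperature jumps and then relaxes back to the bath) can be captured by a single-time-constant toy model. All numbers below are illustrative placeholders, not HFI calibration values.

```python
import numpy as np

# Toy glitch: energy E deposited in an element of heat capacity C, weakly
# linked (conductance G) to the 100 mK bath -> dT(t) = (E/C) * exp(-t/tau),
# with tau = C/G.  Illustrative numbers only.
eV = 1.602e-19                 # J per eV
E_dep = 100e3 * eV             # 100 keV deposited by a particle
C = 1.0e-12                    # heat capacity [J/K]
G = 1.0e-10                    # thermal link conductance [W/K]
tau = C / G                    # relaxation time [s]

t = np.linspace(0.0, 0.1, 1001)           # seconds after impact
dT = (E_dep / C) * np.exp(-t / tau)       # temperature excess [K]
print(f"peak dT ~ {E_dep / C * 1e3:.1f} mK, tau = {tau * 1e3:.0f} ms")
```

Within this picture a glitch amplitude measures $E/C$ while its decay time measures $C/G$, so families of events with different amplitudes and time constants point to energy being stored in elements with different heat capacities and thermal links — consistent with the interpretation of fast HCE as energy stored in the stainless steel focal plane.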
1404/1404.1075_arXiv.txt

It has been claimed in the recent literature that a non-trivial relation between the mass of the most-massive star,~\mmax, in a star cluster and its embedded star cluster mass (the \mMr) is falsified by observations of the most-massive stars and the H$\alpha$ luminosity of young star clusters in the starburst dwarf galaxy NGC 4214. Here it is shown, by comparing the NGC 4214 results with observations from the Milky Way, that NGC 4214 agrees very well with the predictions of the \mMr\,and with the integrated galactic stellar initial mass function (IGIMF) theory. The difference in conclusions is based on a high degree of degeneracy between expectations from random sampling and those from the \mMr, but is also due to interpreting \mmax\,as a truncation mass in a randomly sampled IMF. Additional analysis of galaxies with lower SFRs than those currently presented in the literature will be required to break this degeneracy.

\label{se:intro} According to the integrated galactic stellar initial mass function (IGIMF) theory, the stellar initial mass function (IMF, see Appendix~\ref{app:IMF} for a description of the canonical IMF) of a whole galaxy needs to be computed by adding the IMFs of all newly formed star-forming regions. For galaxies with star formation rates (SFRs) smaller than 0.1 \msun/yr the IGIMF \citep[eq.~4.66 in][]{KWP13} is top-light, with very major implications for the rate of gas consumption when compared to the standard notion of an invariant IMF \citep{PK09}. One fundamental cornerstone of the IGIMF theory is the existence of a physical (aka non-trivial) relation between the mass of the most-massive star in a star cluster, \mmax, and the total stellar birth mass of the embedded star cluster, \mecl, which is called the \mMr.
\citet{WK05b}, \citet{WKB09} and \citet{WKP13} quantified this relationship using resolved very young star clusters in the Milky Way, and it was shown with high statistical significance that this relation implies that the most-massive stars in star clusters are not as massive as would be expected if these clusters formed with their stars randomly drawn from the IMF (for details on statistical sampling methods see \S~\ref{sub:sampling}). When assuming that the vast majority of star formation occurs in causally connected events (embedded star clusters and associations), it is important to account for the initial distribution of these events, that is for the embedded cluster mass function (ECMF). Hence the IMF of a whole galaxy, the IGIMF, is the sum of all these events, and the \mMr\,implies a suppression of the number of massive stars in galaxies with low star-formation rates (SFR). Because of the relation between the SFR and the mass of the most-massive young star cluster, $M_\mathrm{ecl, max}$, in a galaxy \citep{WKL04}, galaxies with low SFRs tend to only form low-mass clusters and, due to the \mMr, only a few massive stars. For average and large SFRs, however, the \mMr\,does not suppress the formation of massive stars, and the integrated properties of the resulting stellar populations may then, at first sight, not be directly distinguishable from fully randomly sampled populations. The existence of the \mMr, however, is not without challenge, and in a recent contribution \citet{ACC13} study a sample of unresolved young star clusters in HST images of the starburst dwarf galaxy NGC 4214. From the colours and H$\alpha$ fluxes, and by deriving properties of these clusters via simulations, the authors conclude that a physical most-massive-star--embedded-star-cluster relation, i.e. the \mMr, is ruled out and with it the theory of the IGIMF.
We also need to point out that the basic principle of the IGIMF is always true, as the stellar population of any galaxy is the sum of all star-formation events in it. For more details on the IGIMF see \citet{WK05a} and \citet{KWP13}; more information on the \mMr\,can be found in \citet{WKP13}, \citet{GWKP12}, \citet{BKS11}, \citet{OK12} and \citet{KWP13}. The \mMr\,has been derived from theoretical arguments as well as observational data in the Milky Way and the Magellanic Clouds. As can be seen in panel A of Fig.~\ref{fig:mmaxmecl}, star clusters are significantly more deficient in massive stars than expected from random sampling from a stellar initial mass function (IMF). {\it We emphasise that all available data on very young populations have been used; the only selection criteria are an age younger than 4 Myr and the absence of supernova remnants in the cluster.} \begin{figure*} \begin{center} \includegraphics[width=8cm]{mmax_vs_Mecl_obs_up2013.ps} \includegraphics[width=8cm]{mmax_vs_Mecl_obs_up2013mod.ps} \vspace*{-1.5cm} \caption{Panel A: The mass of the most-massive star ($m_\mathrm{max}$) in an embedded cluster versus the stellar mass of the young dynamically un-evolved "embedded" cluster ($M_\mathrm{ecl}$). The filled dots are observations compiled by \citet{WKP13}. The boxes are mm-observations of massive pre-stellar star-forming regions in the Milky Way \citep{JSA09}. The solid lines through the data points are the medians expected for random sampling when using a fundamental upper mass limit, $m_\mathrm{max*}$, of 150\msun\,(lower grey solid line, red in the online colour version) and $m_\mathrm{max*}$ = 300\msun\,(upper grey solid line, blue in the online colour version). The dash-dotted line is the expectation value for random sampling derived from 10$^6$ Monte-Carlo realisations of star clusters.
The change in slope at about \mecl\,= 100 \msun\,is caused by the fact that only below the fundamental upper mass limit ($m_\mathrm{max *}$ $\approx$ 150 \msun) is it possible to have clusters made of one star alone. Above this limit, clusters must contain at least several stars even for random sampling. This changes the behaviour of the mean \citep[for details see][]{SM08}. The dashed grey (green in the online colour version) lines are the 1/6th and 5/6th quantiles, which would encompass 66\% of the \mmax\,data if they were randomly sampled from the canonical IMF (Fig.~\ref{fig:constrained}). The dotted black line shows the prediction for a relation by \citet{BBV03} from numerical models of relatively low-mass molecular clouds ($\le$ 10000 $M_\odot$). The thin long-dashed line marks the limit where a cluster is made out of one star. It is evident that random sampling of stars from the IMF is not compatible with the distribution of the data. There is a lack of data above the solid lines, and the scatter of the data is too small despite the presence of significant observational uncertainties. The existence of a non-trivial, physical $m_\mathrm{max}$-$M_\mathrm{ecl}$ relation is implied. Panel B: Like panel A, but shown as large open circles are the \mmax\,values from the modelling of NGC 4214 clusters by \citet{ACC13}. These values cannot be directly compared with the direct measurements shown in panel A, as they are the results of best-fits of unresolved cluster photometry with models.} \label{fig:mmaxmecl} \end{center} \end{figure*} In panel B of Fig.~\ref{fig:mmaxmecl} the \mmax\,values of \citet{ACC13}, which have not been published but were kindly provided by Daniela Calzetti (priv. communication), are plotted. These \mmax\,values are not based on direct measurements but are the results of best-fits of photometric data of the clusters in NGC 4214 with cluster models, and are therefore not plotted together with the data of panel A.
While the data for clusters resolved into individual stars show a strong trend of rising \mmax\,with increasing \mecl, the \citet{ACC13} data form a flat distribution. This is very surprising, as even in the case of fully randomly sampling stars from the IMF a trend with \mecl\,is expected. Therefore, either the trend in the NGC 4214 data is hidden in the (unknown) error bars, or the clusters in NGC 4214 are incompatible with any known sampling procedure. The cluster mass axis in panel B of Fig.~\ref{fig:mmaxmecl} has been limited to \mecl\,$\ge$ 300\msun, as only such objects are the subject of the \citet{ACC13} paper. This also removes the need for a deeper discussion of massive stars allegedly formed in isolation: no stars with stellar mass above 300\msun\,are known, and therefore these need not be taken into account in the debate about random sampling in star clusters. For a detailed discussion of recent claims of O stars formed in isolation see \citet{GWKP12}. The actual physical existence of a \mMr\,is not the subject of this publication. Instead, we critically discuss the way the \mMr\,has been applied in \citet{ACC13} as well as by \citet{FDK11}, \citet{DFK12} and others. We show these applications to be problematic because they are not self-consistent and because they do not reproduce the input \mMr. In \S~\ref{se:why} it is shown why the \mMr\,cannot be a truncation limit. In \S~\ref{se:ngc4214} the NGC 4214 cluster data are compared with the \citet{WKP13} sample of star clusters, before the results are discussed in \S~\ref{se:conclusions}.

\label{se:conclusions} \citet{ACC13} claim that the data obtained from the young starburst dwarf galaxy NGC 4214 falsify the \mMr\,of \citet{WKB09} and with it the IGIMF of \citet{KW03} and \citet{KWP13}. As shown in \S~\ref{se:ngc4214}, this claim does not stand up to closer inspection.
This is due to the following: \begin{itemize} \item The SFR of NGC 4214 is relatively high (SFR $\approx$ 0.2 \msun\,yr$^{-1}$), but the IGIMF effect (i.e. the steepening of the galaxy-wide IMF with a deficit of massive stars in comparison to a canonical IMF, which depends on the \mMr\,in star clusters) on ionising emissions and \mmax\,is only to be expected for SFRs below $\approx$ 0.05\msun\,yr$^{-1}$ (see figure 5 of \citealt{PWK07}). For galaxies with global 0.1 \msun\,yr$^{-1}$ $<$ SFR $<$ 5 \msun\,yr$^{-1}$ distributed over a fully populated embedded cluster mass function, no difference between the relative population of massive stars and that of the Milky Way is to be expected. \item The \mMr\,is used as a truncation limit for a randomly sampled IMF in \citet[][and also in \citealt{FDK11}, \citealt{DFK12}]{ACC13}. This leads to inconsistencies with the observed \mmax\,values for Milky Way star clusters, as can be seen in Fig.~\ref{fig:samplingmmax}. Sorted sampling, for example, avoids such inconsistencies. \item Strikingly, the \mmax\,values from the best-fitting procedure of \citet{ACC13} show no trend of \mmax\,with \mecl. It is currently not possible to explain such behaviour by any known sampling procedure. However, when comparing the relative frequency of the \mmax\,values of the NGC 4214 sample for clusters with masses between 500 and 4000\msun\,with a sample of young Milky Way clusters in the same mass range, the samples are consistent with being from the same distribution at high confidence. In \citet{WKP13} it has been shown that the Milky Way sample shows a physical \mMr. \item The H$\alpha$ luminosity to \mecl\,ratios given by \citet{ACC13} when using the \mMr\,as a truncation limit do not agree with values independently calculated here (Fig.~\ref{fig:andrews}), while our independent derivation reproduces literature values by \citet{SP05}.
The models derived here explain all three cluster mass bins of \citet{ACC13} well, even when the \mMr\,is used as a truncation limit. \end{itemize} If the \mMr\,is applied as a truncation limit for a randomly sampled IMF in star clusters, the NGC 4214 data do not support the existence of a non-trivial \mMr. However, the analytical \mMr\,is an average value, with observational data clustered above and below it. Using it as a truncation limit cuts off the portion of the distribution above the \mMr, therefore suppressing massive stars which would be expected even if a physical \mMr\,does exist. As can be seen above, the \citet{WKP13} sample of star clusters for the respective cluster mass range reproduces the \citet{ACC13} distribution of most-massive stars in NGC 4214 very well. It also follows that it is possible to reproduce the observed H$\alpha$ luminosities when applying the \mMr. The difference in the conclusions by \citet{ACC13} and this study is likely due to degeneracies between different sampling methods combined with the uncertainties of models for massive stars. In order to further constrain the \mMr, a larger sample of well studied, preferably resolved, clusters is necessary, as well as in-depth studies of galaxies with very low SFRs \citep{WKP13}.

arXiv:1404.1075 (April 2014)
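The random-sampling expectation referred to throughout — a rising median $m_\mathrm{max}$ with $M_\mathrm{ecl}$ even without any physical \mMr\ — is easy to reproduce with a minimal Monte Carlo sketch. For brevity this uses a single-slope, Salpeter-like IMF and a simple "stop when the target mass is reached" convention, rather than the canonical broken power law or sorted sampling.

```python
import numpy as np

def draw_mmax(m_ecl, rng, alpha=2.35, m_lo=0.1, m_hi=150.0):
    """Most-massive star after drawing stars from a single-slope IMF,
    dN/dm ~ m**(-alpha) on [m_lo, m_hi] (Msun), until the summed mass
    reaches m_ecl.  A simplification of the canonical IMF; the stopping
    convention is only one of several possible sampling recipes."""
    a = 1.0 - alpha
    lo, hi = m_lo**a, m_hi**a
    total, mmax = 0.0, 0.0
    while total < m_ecl:
        m = (lo + rng.random() * (hi - lo)) ** (1.0 / a)  # inverse-CDF draw
        total += m
        mmax = max(mmax, m)
    return mmax

rng = np.random.default_rng(42)
for m_ecl in (100.0, 1000.0, 3000.0):
    med = np.median([draw_mmax(m_ecl, rng) for _ in range(100)])
    print(f"M_ecl = {m_ecl:6.0f} Msun -> median m_max ~ {med:5.1f} Msun")
```

Even this trivial recipe yields a median $m_\mathrm{max}$ that climbs with cluster mass, which is why the flat distribution reported for NGC 4214 is surprising under any known sampling scheme.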
1404/1404.6149_arXiv.txt

The black hole at the Galactic centre regularly exhibits flares of radiation, the origin of which is still not understood. In this article, we study the ability of the near-future \textit{GRAVITY} infrared instrument to constrain the nature of these events. We develop realistic simulations of \textit{GRAVITY} astrometric data sets for various flare models. We show that the instrument will be able to distinguish an ejected blob from alternative flare models, provided the blob inclination is $\gtrsim 45^{\circ}$, the flare's brightest magnitude is $14 \lesssim m_{\mathrm{K}} \lesssim 15$ and the flare duration is $\gtrsim 1$~h~$30$~min.

\label{sec:intro} It is very likely that the centre of our Galaxy harbours a supermassive black hole, Sagittarius~A* (Sgr~A*), weighing $M=4.31\times10^{6}\,M_{\odot}$ at $8.33$~kpc from Earth~\citep{ghez08,gillessen09}. A very interesting property of Sgr~A* is the emission of flares of radiation in the X-ray, near-infrared and sub-mm domains \citep[e.g.][]{baganoff01, ghez04, clenet05, yusefzadeh06b,eckart09,doddseden11}. The infrared events are characterized by an overall timescale of the order of one to two hours, and a putative quasi-periodicity of roughly 20 minutes; the luminosity of the source increases by a factor of typically 20 \citep[see][]{hamaus09}. The typical flux density at the maximum of the flare is of the order of 8 mJy \citep{genzel03, eckart09,doddseden11}, which translates to approximately $m_{\mathrm{K}}=15$. The brightest infrared flare observed to date reached $m_{\mathrm{K}}=13.5$~\citep{doddseden10}. The longest infrared flare observed lasted around $6$~h~\citep[or three times $2$~h if the light curve is interpreted as many successive events,][]{eckart08}.
Various models are investigated to explain these flares: heating of electrons in a jet \citep[][hereafter \emph{jet model}]{markoff01}, a hotspot of gas orbiting on the innermost stable circular orbit (ISCO) of the black hole \citep[][hereafter \emph{hotspot model}]{genzel03,hamaus09}, the adiabatic expansion of an ejected synchrotron-emitting blob of plasma \citep[][hereafter \emph{plasmon model}]{yusefz06}, or the triggering of a Rossby wave instability (RWI) in the disc \citep[][hereafter \emph{RWI model}]{tagger06b,tagger06}. Some authors question whether flares can be understood as specific events. The light curve fluctuation is then interpreted as pure red noise, which translates into a power-law temporal power spectrum with negative slope \citep[][hereafter \emph{red-noise model}]{do09}. So far, no consensus has been reached, and all models are still serious candidates to account for the Galactic centre (GC) flares. The analysis of infrared Galactic centre flares will benefit in the very near future from the astrometric data of the \textit{GRAVITY} instrument. \textit{GRAVITY} is a second-generation Very Large Telescope Interferometer instrument whose main goal is to study strong gravitational field phenomena in the vicinity of Sgr~A*~\citep[][see also Table~\ref{grav} for a quick overview of the instrument's characteristics]{eisenhauer11}. \textit{GRAVITY} will see its first light in May 2015. As far as Galactic centre flares are concerned, the exquisite astrometric precision of the instrument is a major asset. \textit{GRAVITY} will reach a precision of the order of $10\, \mu$as, i.e. a fraction of the apparent angular size of the Galactic centre black hole's silhouette~{\citep{bardeen73,falcke00}}, in only a few minutes of integration. Such a precision will allow following the dynamics of the innermost Galactic centre, in the vicinity of the black hole's event horizon.
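To put the quoted $10\,\mu$as precision in context, the sketch below evaluates the apparent size of the ISCO of Sgr~A*, using the mass and distance given above; a Schwarzschild (zero-spin) ISCO at $6GM/c^2$ is assumed for simplicity.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8       # SI units
Msun, pc = 1.989e30, 3.086e16   # kg, m

M = 4.31e6 * Msun               # Sgr A* mass (value quoted in the text)
D = 8.33e3 * pc                 # distance (value quoted in the text)
r_g = G * M / c**2              # gravitational radius [m]
r_isco = 6.0 * r_g              # Schwarzschild ISCO (zero spin assumed)
theta_uas = r_isco / D * np.degrees(1.0) * 3600e6   # radians -> microarcsec
print(f"ISCO radius ~ {theta_uas:.0f} microarcsec")
```

The result is roughly $30\,\mu$as, i.e. a hotspot orbit at the ISCO spans several astrometric error bars, which is why a few minutes of integration per point suffice to follow the innermost dynamics.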
\begin{table} \centering \begin{minipage}{140mm} \caption{Main characteristics of the \textit{GRAVITY} instrument.} \label{grav} \begin{tabular}{@{}ll@{}} \hline Maximum baseline length & 143~m\\ Number of telescopes & 4 (all UTs)\\ Aperture of each telescope & 8.2~m \\ Wavelengths used & 1.9 - 2.5~$\mu$m\\ Angular resolution & 4~mas \\ Size of the total\footnote{Containing the science target and a phase reference star} field of view& 2'' \\ Size of the science\footnote{Containing only the science target} field of view & 71~mas \\ \hline \end{tabular} \end{minipage} \end{table} The aim of this article is to \textit{determine to what extent \textit{GRAVITY} can constrain the flare models from astrometric measurements only}. We will thus focus only on the astrometric signatures of the various models. Other signatures such as photometry or polarimetry~\citep[see e.g.][]{zama11} are not considered here and will be investigated in future papers. Three broad classes of models can be distinguished depending on the astrometric signature of the radiation source: \begin{itemize} \item \emph{circular and confined} motion (hotspot model, RWI model); \item \emph{complex multi-source} motion (red-noise model); \item \emph{quasi-linear and larger-scale} motion (jet model, plasmon model). \end{itemize} The first class is characterized by a source in the black hole equatorial plane, typically following a Keplerian orbit close to the black hole innermost stable circular orbit (ISCO). This source can be modelled either by a phenomenological hotspot~\citep{hamaus09} or by any instability able to create a long-lived hotspot in the disc, such as for example the RWI~\citep{tagger06b}. In the second class, many sources are located at different positions, ranging from the ISCO radius $r_{\mathrm{ISCO}}$ to typically $10\,r_{\mathrm{ISCO}}$ (further out, the flux becomes too small to be of significance for the flaring emission). These sources follow non-geodesic trajectories.
In the last class, the source is ejected along a nearly linear trajectory out of the equatorial plane. {We note that the first two classes may be called disk-glued models, in the sense that they refer to physical phenomena inside the accretion structure surrounding Sgr~A*. The third class is different, as the source is ejected away from the accretion structure.} The point of this classification is that it is quite robust to changing the details of the various models. This is the main reason supporting our choice of considering astrometric data alone: even if the details of the physics of the various models are not always fixed, the broad aspects of the source motion described in the list above are quite firmly established. Our aim is to simulate \textit{GRAVITY} astrometric observations for these three classes of models and to determine whether these near-future data will allow distinguishing between them. We will consider three models belonging to the three classes above: the RWI, red-noise (hereafter RN) and a blob ejection model. Section~\ref{sec:flaremod} presents the three models and explains how flare light curves are modelled. Section~\ref{sec:gravityobs} describes how \textit{GRAVITY} astrometric data are simulated. Section~\ref{sec:distinguish} investigates the ability of the instrument to distinguish between the three classes of models, and Section~\ref{sec:conclu} gives conclusions.

\label{sec:conclu} We have analysed the ability of the near-future \textit{GRAVITY} instrument to distinguish different flare models, depending only on their astrometric signatures. We show that the ejection of a blob can be distinguished from alternative models provided the ejection direction is not face-on for the observer. This result holds for a reasonably long flare ($\gtrsim 1$~h~$30$~min) with a brightest magnitude in {\it K}-band of $14 \lesssim m_{\mathrm{K}} \lesssim 15$. {Our models are very simple for the time being.
In particular, we use a geometrically thin disk approximation with very simple emission laws and a pseudo-Newtonian potential for all MHD simulations. However, this simple framework allows us to generate images of the Sgr~A* accretion flow that look rather similar to more complex simulations. } {The main result of our simulations is that \textit{GRAVITY} will be able to distinguish an ejection model from "disk-glued" models. This result is rather natural, as most models of disk-like (be it geometrically thin or thick) accretion structures will give rise to typical crescent-shape images~\citep{kamruddin13}, and will lead to rather similar centroid tracks. We believe that this result is robust and will not be changed by developing more sophisticated simulations.} Our result is important in the sense that it demonstrates the ability of \textit{GRAVITY} to observationally distinguish between categories of flare models, which has not been possible so far with current instrumentation. Future work will be dedicated to determining to what extent the astrometric data of \textit{GRAVITY} can help distinguish models of the first two astrometric-signature classes, as defined in Section~\ref{sec:intro} (typically, the red-noise and RWI models). This will demand relying not only on astrometry but on other signatures as well, such as photometry and polarimetry. Let us repeat here that our current results, which allow an ejected blob to be distinguished, only use astrometric data, thus only a part of the total flare data. Future work will also be dedicated to modelling various flare models using GRMHD simulations in order to be able to study the impact of the spin parameter on the observables. Such a study of Sgr~A* variability in GRMHD simulations is currently being developed by various groups~{\citep[see e.g.][]{dexter09,dolence12,henisey12,dexter13,shcherbakov13}}.

arXiv:1404.6149 (April 2014)
1404 | 1404.3246_arXiv.txt | With the ultimate aim of using the fundamental or $f$-mode to study helioseismic aspects of turbulence-generated magnetic flux concentrations, we use randomly forced hydromagnetic simulations of a piecewise isothermal layer in two dimensions with reflecting boundaries at top and bottom. We compute numerically diagnostic wavenumber--frequency diagrams of the vertical velocity at the interface between the denser gas below and the less dense gas above. For an Alfv{\'e}n-to-sound speed ratio of about 0.1, a 5\% frequency increase of the $f$-mode can be measured when $k_x\Hp=3$--$4$, where $k_x$ is the horizontal wavenumber and $\Hp$ is the pressure scale height at the surface. Since the solar radius is about 2000 times larger than $\Hp$, the corresponding spherical harmonic degree would be 6000--8000. For weaker fields, a $k_x$-dependent frequency decrease by the turbulent motions becomes dominant. For vertical magnetic fields, the frequency is enhanced for $k_x\Hp\approx4$, but decreased relative to its nonmagnetic value for $k_x\Hp\approx9$. | Much of our knowledge of the physics beneath the solar photosphere is obtained from theoretical calculations and simulations. Helioseismology provides a window to measure certain properties inside the Sun; see the review by \cite{GBS10}. This technique uses sound waves ($p\,$-modes) and to some extent surface gravity waves ($f$-modes), but the presence of magnetic fields gives rise to magneto-acoustic and magneto-gravity waves, whose restoring forces are caused by magnetic fields modified by pressure and buoyancy forces \citep[see, e.g.,][]{T83,C11}. This complicates their use in helioseismology, where magnetic fields are often not fully self-consistently included. This can lead to major uncertainties. 
Recent detections of changes in the sound travel time at a depth of some 60\,Mm beneath the surface about 12 hours prior to the emergence of a sunspot \citep{Ilo11} have not been verified by other groups \citep{Braun12,Birch13}. Also, the recent proposal of extremely low flow speeds of the supergranulation \citep{Hanasoge} is in stark contrast to our theoretical understanding and poses serious challenges. It is therefore of interest to use simulations to explore theoretically how such controversial results can be understood; see, e.g., \cite{Geo07} and \cite{KKMW11} for earlier attempts at constructing synthetic helioseismic data from simulations. The ultimate goal of our present study is to explore the possibility of using numerical simulations of forced turbulence to assess the effects of subsurface magnetic fields on the $p\,$- and $f$-modes. Subsurface magnetic fields can have a broad range of origins. The most popular one is the buoyant rise and emergence of flux tubes deeply rooted at or even below the base of the convection zone \citep{Cal95}. Another proposal is that global magnetic fields are generated in the bulk of the convection zone with equatorward migration being promoted by the near-surface shear layer \citep{B05}. In this case, subsurface magnetic fields are expected to be concentrated into sunspots through local effects such as supergranulation \citep{SN12} or through downflows caused by the negative effective magnetic pressure instability \citep{BKR13,BGJKR14}. The latter mechanism requires only stratified turbulence, and its operation can be demonstrated and studied in isolation from other effects using just an isothermal layer. To examine seismic effects of magnetic fields on the $f$-mode, one must however introduce a sharp density drop, which implies a corresponding temperature increase. This leads us to study a {\em piecewise} isothermal layer. In the Sun, waves are excited by convective motions \citep{Ste67,GK88,GK90}.
However, in an isothermal layer there is no convection and turbulence must be driven by external forcing, as has been done extensively in the study of negative effective magnetic pressure effects. We adopt this method also in the present work, but use a rather low forcing amplitude to minimize the nonlinear effects of large Reynolds numbers and Mach numbers close to unity. In the absence of a magnetic field, linear perturbation theory gives simple expressions for the dispersion relations of $p\,$- and $f$-modes, which are also the modes we focus on in this work. For large horizontal wavenumbers $k_{\rm h}$, the frequencies of the solar $f$-mode have been observed to be significantly smaller than the theoretical estimates, and both line shift and line width grow with $k_{\rm h}$ \citep{FSTT92,DKM98}. Both effects are expected to arise due to turbulent background motions \citep{MR93a,MR93b,MMR99,M00a,M00b,MKE08}. There have also been alternative proposals to explain the frequency shifts as being due to what \cite{RG94} call an interfacial wave that depends crucially on the density stratification of the transition region between chromosphere and corona; see also \cite{RC95}. In the presence of magnetic fields, both $p\,$- and $f$-mode frequencies are affected. \cite{NT76} derive the dispersion relation for sound waves in a uniformly, horizontally magnetized, isothermally stratified half-space with a rigid lower boundary. In the presence of structured magnetic fields, e.g., near sunspots, a process called mode conversion can occur, i.e., an exchange of energy between fast and slow magnetosonic modes, which leads to $p\,$-mode absorption in sunspots \citep{Cal06,SC06}. The properties of surface waves in magnetized atmospheres have been studied in detail \citep{R81,MR89,MR92,MAR92}. From these results, one should expect changes in the $f$-mode during the course of the solar cycle.
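For orientation, the non-magnetic $f$-mode obeys the deep-water surface-gravity-wave dispersion relation $\omega^2 = g k_{\rm h}$ referred to above; with the usual plane-parallel identification $k_{\rm h} \simeq \ell/R_\odot$ and standard solar values, a quick check (ours, for illustration) reproduces the observed frequency scale:

```python
import math

g_sun = 274.0   # solar surface gravity [m s^-2]
R_sun = 6.96e8  # solar radius [m]

def f_mode_freq(ell):
    """Cyclic f-mode frequency from omega^2 = g * k_h with k_h = ell / R."""
    return math.sqrt(g_sun * ell / R_sun) / (2.0 * math.pi)  # [Hz]

# At ell = 1000 this gives ~3.2 mHz, close to the observed solar f-mode ridge.
print(f_mode_freq(1000) * 1e3, "mHz")
```

Note the characteristic $\nu \propto \sqrt{\ell}$ scaling: quadrupling $\ell$ doubles the frequency, independent of the stratification details.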
Such variations have indeed been observed \citep{ABPP00,DG05} and may be caused by subsurface magnetic field variations. It was argued by \cite{DGS01} that the time-variation of $f$-mode frequencies could be attributed to the presence of a perturbing magnetic field of order $20\G$ localized in the outer 1\% of the solar radius. The temporal variation of $f$-mode frequencies may be resolved into two components: an oscillatory component with a one-year period, which is probably an artifact of data analysis resulting from the orbital period of the Earth, and another slowly varying secular component which appears to be correlated with the solar activity cycle. Subsequent work by \cite{Ant03} showed that variations in the thermal structure of the Sun tend to cause much smaller shifts in $f$-mode frequencies as compared to those in $p\,$-mode frequencies and as such are not effective in accounting for the observed $f$-mode variations. \cite{SKGD97} and \cite{Ant98} deduced from accurately measured $f$-mode frequencies the seismic radius of the Sun. They found that the customarily accepted value of the solar radius, $695.99\Mm$, needs to be reduced by about $200$--$300\kmeter$ to agree with the observed $f$-mode frequencies. From the study of temporal variations of $f$-mode frequencies, \cite{LK05} found evidence for time variations of the seismic solar radius in antiphase with the solar cycle above $0.99\Rsun$, but in phase between $0.97$ and $0.99\Rsun$. The importance of using the $f$-mode frequencies for local helioseismology has been recognized in a number of recent papers \citep{HBBG08,DABCG11,FBCB12,FCB13}. While such approaches should ultimately be used to determine the structure of solar subsurface magnetic fields, we restrict ourselves here to the analysis of oscillation frequencies as a function of horizontal wavenumber.
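The seismic-radius argument can be made quantitative with the same dispersion relation: at fixed $\ell$, $\omega^2 = g k_{\rm h} = (GM_\odot/R^2)(\ell/R)$, so $\nu \propto R^{-3/2}$ and $\delta\nu/\nu = -\tfrac{3}{2}\,\delta R/R$. A sketch of the size of the effect for the radius reduction quoted above (this idealizes the inversions actually performed in the cited papers):

```python
R_sun = 6.96e8  # customarily accepted solar radius [m]

# nu_f ~ sqrt(g k) = sqrt((G M / R^2)(ell / R)) ~ R^(-3/2) at fixed ell,
# so a radius reduction dR raises the f-mode frequencies by (3/2) dR/R.
for dR in (200e3, 300e3):  # proposed radius reduction [m]
    dnu_over_nu = 1.5 * dR / R_sun
    print(f"dR = {dR/1e3:.0f} km  ->  dnu/nu = {dnu_over_nu:.2e}")
```

For an $f$-mode frequency of about $1.4\,$mHz at $\ell \approx 200$, the $300\kmeter$ case corresponds to roughly $0.9\,\mu$Hz, i.e. the same order as the frequency variations under discussion.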
The purpose of the present paper is to use numerical simulations in piecewise isothermal layers to study oscillation frequencies from random forcing and to assess the effects of imposed magnetic fields on the frequencies. In the present work we restrict ourselves to the case of uniform magnetic fields and refer to the case of nonuniform fields in another paper \citep{SBR14}, where we study what is called a fanning out of the $f$-mode. | The prime objective of the present work was to assess the effects of an imposed magnetic field on the $f$-mode, which is known to be particularly sensitive to magnetic fields. One of our motivations is the ultimate application to cases where magnetic flux concentrations are being produced self-consistently through turbulence effects \citep[see, e.g.,][]{BKR13,BGJKR14}. Those investigations have so far mostly been carried out in isothermal domains, which was also the reason for us to choose a piecewise isothermal model, where the jump in temperature and density is needed to allow the $f$-mode to occur. The resulting setup is in some respects different from that of the Sun and other stars, so one should not be surprised to see features that are not commonly found in the context of helioseismology. One of them is a separatrix within the $p\,$-modes as a result of the hot corona, which we associate with the $a$-mode of \cite{HZ94}. Regarding the $f$-mode, there are various aspects that can be studied even in the absence of a magnetic field. Particularly important is the linearly $k_x$-dependent decrease of $\omega_{\rm f}$. The $f$-mode mass increases with the intensity of the forcing and hence with the Mach number. Interestingly, this is also a feature that carries over to the magnetic case where an increase in the magnetic field leads to a decrease in the resulting Mach number and thereby to a decrease in the mode mass in much the same way as in the non-magnetic case. 
Magnetic fields also lead to a truncated $f$-mode branch above a certain value of $k_x$. One of the most important findings is the systematic increase of the $f$-mode frequencies $\ofmn$ observed in DNS with the horizontal magnetic field. It essentially follows the theoretical prediction and, contrary to the non-magnetic cases, shows an increase with $k_x$ such that the relative frequency shift is approximately proportional to $\vA^2/\cs^2$. This is best measured when $k_x L_0=5$--$7$, i.e., $k_x\Hp=3$--$4$, and, with the solar radius of $700\Mm$ being about 2000 times larger than $\Hp\approx300\kmeter$, the corresponding spherical harmonic degree would be 6000--8000. In this range, the relative frequency shift is $\delta\omega_{\rm fm}^2/\omega_{\rm f}^2\approx0.1$ when $\vA/\cs\approx0.1$. Since $\delta\omega_{\rm fm}^2/\omega_{\rm f}^2 \approx 2\delta\omega_{\rm fm}/\omega_{\rm f}$, the increase of the $f$-mode frequencies is about 5\%. The observed $f$-mode frequency increase during solar maximum is about $1\uHz$ at a moderate spherical degree of 200 \citep{DG05}. This corresponds to a relative shift of about 0.06\%, but this value should of course increase with the spherical degree. We note that in our case, the magnetic field is the same above and below the interface. Furthermore, $\rho\cs^2$ is also the same above and below the interface, and therefore so is $\vA^2/\cs^2$. One of the goals of future studies will be to determine how our results would change if the magnetic field existed only below the interface. Interestingly, for vertical and oblique magnetic fields, the $k_x$-dependence of $\ofmn$ becomes non-monotonic in such a way that for small values of $k_x$, $\ofmn$ first increases with $k_x$ and then decreases and becomes less than the theoretical value in the absence of a magnetic field, although it stays above the value actually obtained in the non-magnetic runs, whose reduction relative to theory is believed to be due to turbulence effects \citep{MMR99,MKE08}.
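The wavenumber-to-degree mapping and the conversion of the shift in $\omega^2$ into a frequency shift used above are simple enough to spell out explicitly:

```python
R_over_Hp = 2000.0  # R_sun / H_p, the round factor used in the text
for kxHp in (3.0, 4.0):
    ell = kxHp * R_over_Hp  # ell ~ k_x R = (k_x H_p) * (R / H_p)
    print(f"k_x H_p = {kxHp:.0f}  ->  ell ~ {ell:.0f}")

# delta(omega^2)/omega^2 ~ 0.1 implies delta(omega)/omega ~ 0.05,
# i.e. the quoted 5% f-mode frequency increase for v_A/c_s ~ 0.1.
rel_shift = 0.5 * 0.1
print(f"relative frequency shift ~ {rel_shift:.0%}")
```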
We confirm the numerical results of \cite{PK09} that $p\,$-modes are less affected by a background magnetic field than the $f$-mode. Relative to the non-magnetic case, no significant frequency shifts of $p\,$-modes are seen in a weakly magnetized environment. For a larger density contrast at the interface, with the rest of the parameters being the same, the mode amplitudes and line widths increase, but the data look noisier and the frequency shifts, which can be of either sign \citep{HZ94}, may not be primarily due to the magnetic field. The present investigations now allow us to proceed to more complicated systems where the magnetic field shows local flux concentrations which might ultimately resemble active regions and sunspots. As an intermediate step, one could also impose a non-uniform magnetic field with a sinusoidal variation in the horizontal direction. This will be the focus of a future investigation. | 14 | 4 | 1404.3246 |
1404 | 1404.1899_arXiv.txt | We investigate possible imprints of galactic foreground structures such as the ``radio loops'' in the derived maps of the cosmic microwave background. Surprisingly there is evidence for these not only at radio frequencies through their synchrotron radiation, but also at microwave frequencies where emission by dust dominates. This suggests the mechanism is magnetic dipole radiation from dust grains enriched by metallic iron or ferrimagnetic materials. This new foreground we have identified is present at high galactic latitudes, and potentially dominates over the expected $B$-mode polarization signal due to primordial gravitational waves from inflation. | \label{sec:intro} The study of the cosmic microwave background (CMB) radiation is a key testing ground for cosmology and fundamental physics, wherein theoretical predictions can be confronted with observations \citep{WMAP7:powerspectra,PlanckXV,PlanckXVI}. The temperature fluctuations in the CMB have provided our deepest probe of the Big Bang model and of the nature of space--time itself. Moreover CMB data provide a bridge between cosmology and astro-particle physics, shedding light on galactic cosmic rays \citep{Mertsch}, and galactic X- and $\gamma$-ray emission \citep{PlanckXXVI, PlanckI,PlanckIX}. Since the first release of data from the \textit{Wilkinson Microwave Anisotropy Probe} (WMAP), it has been noted that the derived CMB sky maps exhibit departures from statistical isotropy \citep{Chiang_NG,Chiang_Nas_Coles,Tegmark:Alignment,Multipole_Vector1, Multipole_Vector4,Multipole_Vector2,Park_Genus, Hemispherical_asymmetry,Axis_Evil,Axis_Evil2, cold_spot_origin,power_asymmetry_wmap5,power_asymmetry_subdegree,odd,Gruppuso:2013xba}, probably because of residuals from the incomplete removal of galactic foreground emission. 
For example, the Kuiper Belt in the outer Solar System may partly be responsible for the unexpected quadrupole-octupole alignment and parity asymmetry in the CMB \citep{Hansen}. This issue has acquired even more importance after the first release of cosmological data products from the \textit{Planck} satellite \citep{PlanckXXIII}. In this paper we construct a physical model to account for the local features of the WMAP internal linear combination (ILC) map \citep{Bennett2012} in the direction of the galactic ``radio loops'', in particular Loop~I \citep{Berkhuijsen1971}. We show that in the low multipole domain, $\ell \le 20$, the peaks of the CMB map \emph{correlate} with radio synchrotron emission from Loop~I. However, the physical source of this anomaly is likely related to emission by dust---including magnetic dipole emission from dust grains enriched by metallic Fe, or ferrimagnetic materials like $\mathrm{Fe}_3\mathrm{O}_4$ \citep{DL_99,2013ApJ...765..159D}. The radio loops are probably old supernova remnants (SNRs) in the radiative stage, which have swept up interstellar gas, dust, and magnetic field into a shell-like structure. Dust grains are well-coupled to the gas by direct collisions, Coulomb drag, and Lorentz forces. Hence, along with synchrotron radiation by high-energy electrons, dust emission will be enhanced in the shell, with limb-brightening yielding the observed ring-like morphology. Moreover, part of the Loop~I anomaly overlaps with the \textit{Planck} haze at 30 GHz~\citep{PlanckIX}. We are investigating these issues in more detail and will present the results elsewhere. The structure of this Letter is as follows: In Section~\ref{sec:loops} we summarise the properties of Loops I--IV from radio waves to $\gamma$-rays. In Section~\ref{sec:peaks} we demonstrate that there are spatial correlations between features in the WMAP 9~yr ILC map of the CMB and Loop~I---the CMB temperature is systematically shifted by $\sim 20\,\mu$K along Loop~I.
We perform a cluster analysis along the Loop~I radio ring and show that the peaks in the CMB map are indeed clustered in this ring with a chance probability of $\sim10^{-4}$. In Section~\ref{sec:residual}, we discuss how this signal from Loop~I could have evaded foreground subtraction using the ILC method. In Section~\ref{sec:models} we discuss physical mechanisms of the emission from the Loop~I region and argue that it likely arises from interstellar dust, possibly including magnetic dipole radiation from magnetic grain materials. We present our conclusions in Section~\ref{sec:conclusion}. | \label{sec:conclusion} We have found evidence of local galactic structures such as Loop~I in the WMAP ILC map of the CMB which is supposedly fully cleaned of foreground emissions. This contamination extends to high galactic latitude so the usual procedure of masking out the Milky Way \emph{cannot} be fully effective at removing it. It extends to sufficiently high frequencies that it cannot be synchrotron radiation but might be magnetic dipole emission from ferro- or ferrimagnetic dust grains, as suggested by theoretical arguments \citep{DL_99,2013ApJ...765..159D}. This radiation is expected to be \emph{polarised} with a frequency dependent polarization fraction. It has not escaped our attention that as shown in Figure~\ref{fig1}, the lower part of Loop~I, in particular the new loop S1 identified by \cite{Wolleben2007}, crosses the very region of the sky from which the BICEP 2 experiment has recently detected a B-mode polarization signal at $150 \, \text{GHz}$ \citep{Ade:2014xna}. This has been ascribed to primordial gravitational waves from inflation because ``available foreground models'' do not correlate with the BICEP maps. The new foreground we have identified is however \emph{not} included in these models. Hence the cosmological significance if any of the detected B-mode signal needs further investigation. 
Forthcoming polarization data from the \textit{Planck} satellite will be crucial in this regard. We are indebted to Bruce Draine, Pavel Naselsky, and Andrew Strong for helpful discussions and to the Discovery Center for support. We thank the anonymous referees for critical and constructive comments which helped to improve this Letter. We acknowledge use of the \texttt{HEALPix}\footnote{\url{http://healpix.sourceforge.net}} package~\citep{2005ApJ...622..759G}. Hao Liu is supported by the National Natural Science Foundation of China (Grant No. 11033003 and 11203024), and the Youth Innovation Promotion Association, CAS. Philipp Mertsch is supported by DoE contract DE-AC02-76SF00515 and a KIPAC Kavli Fellowship. Subir Sarkar acknowledges a DNRF Niels Bohr Professorship. | 14 | 4 | 1404.1899 |
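The internal linear combination (ILC) subtraction examined in this Letter can be sketched generically. The construction below is a textbook two-channel illustration with invented numbers, not the WMAP team's actual pipeline: frequency maps $d_i = s + a_i f$ carry the CMB $s$ with unit response in thermodynamic units, and the ILC weights minimise the map variance subject to $\sum_i w_i = 1$, i.e. $w = \mathsf{C}^{-1}e/(e^{\mathsf T}\mathsf{C}^{-1}e)$. The point relevant here is that the weights only null what dominates the empirical covariance, so a faint foreground, or one whose template is by chance correlated with the CMB, leaves a residual:

```python
def ilc_weights(cov):
    """Minimum-variance weights with unit CMB response: w = C^-1 e / (e^T C^-1 e)."""
    (c00, c01), (c10, c11) = cov
    det = c00 * c11 - c01 * c10
    inv = [[c11 / det, -c01 / det], [-c10 / det, c00 / det]]
    u = [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]  # C^-1 e
    return [u[0] / (u[0] + u[1]), u[1] / (u[0] + u[1])]

# Deterministic toy 'maps' (no RNG): a pseudo-random CMB and a foreground.
npix = 400
s = [((37 * p) % 101 - 50) / 50.0 for p in range(npix)]  # CMB, unit response
f = [((59 * p) % 103 - 51) / 25.0 for p in range(npix)]  # brighter foreground
a = (1.0, 0.3)  # foreground scaling in the two frequency channels

maps = [[s[p] + a[i] * f[p] for p in range(npix)] for i in range(2)]
cov = [[sum(maps[i][p] * maps[j][p] for p in range(npix)) / npix
        for j in range(2)] for i in range(2)]
w = ilc_weights(cov)

fg_response = w[0] * a[0] + w[1] * a[1]  # foreground leakage into the ILC map
print("weights:", w, " sum:", w[0] + w[1], " leakage:", fg_response)
```

In this two-channel case the residual foreground coupling works out to $-\langle sf\rangle/\langle f^2\rangle$, so any chance correlation between the sky signal and a foreground template propagates directly into the cleaned map.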
1404 | 1404.3587_arXiv.txt | {}{Imaging and spectroscopy of X-ray extended sources require a proper characterisation of a spatially unresolved background signal. This background includes sky and instrumental components, each of which is characterised by its own spatial and spectral behaviour. While the X-ray sky background has been extensively studied in previous work, here we analyse and model the instrumental background of the ACIS-I detector on-board the Chandra X-ray observatory in very faint mode.}{Caused by the interaction of highly energetic particles with the detector, the ACIS-I instrumental background is spectrally characterised by the superposition of several fluorescence emission lines onto a continuum. To isolate its flux from any sky component, we fitted an analytical model of the continuum to observations performed in very faint mode with the detector in the stowed position shielded from the sky, and gathered over the eight-year period starting in 2001. The remaining emission lines were fitted to blank-sky observations of the same period. We found $11$ emission lines. Analysing the spatial variation of the amplitude, energy and width of these lines has further allowed us to infer that three of these lines are presumably due to an energy correction artefact produced in the frame store.} {We provide an analytical model that predicts the instrumental background with a precision of $2\%$ in the continuum and $5\%$ in the lines. We use this model to measure the flux of the unresolved cosmic X-ray background in the Chandra deep field south.
We obtain a flux of $10.2^{+0.5}_{-0.4} \times 10^{-13}\si{\erg \per \square \centi\meter \deg}\si{\per \second}$ for the $[1-2]\si{\kilo\electronvolt}$ band and $(3.8 \pm 0.2) \times 10^{-12}\si{\erg \per \square \centi\meter \deg}\si{\per \second}$ for the $[2-8]\si{\kilo\electronvolt}$ band.}{} | \begin{figure*}[!ht] \begin{center} \includegraphics[bb=54 360 558 720,width=11.5cm]{./images/continuo.eps} \end{center} \caption{\textit{VF mode-filtered spectrum of the stowed datasets of periods D plus E. Grey areas identify spectral regions used to model the continuum emission.}} \label{fig:presentazione} \end{figure*} One of the main tasks of observers is the extraction of the target source signal from the data, that is separating the source photons from the background photons. This requires identifying an adequate procedure to estimate the background. For less extended or point-like sources, the background contribution to the observed data can be simply estimated by extracting spectra from nearby regions that are free from target or other point-source emission. Because the background may have significant spatial variations, the same procedure cannot be applied to extended sources. For the latter case, a common approach is to use ``blank-sky'' datasets. These datasets are obtained by stacking the data of a number of observations of relatively empty sky regions with low Galactic emission. While this approach accounts for spatial variation of the background (e.g., vignetting or detector non-uniformity) it may not be ideal when high accuracy in the background estimation is required. The total X-ray background flux can be subdivided into three main components that have different spatial variation: the cosmic X-Ray background (CXB), the Galactic local foreground, and the particle detector background. Furthermore, while the first two components are vignetted, the last one is expected to be less sensitive, if at all, to the characteristics of the telescope mirrors. 
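As a quick plausibility check (ours, not the authors'), the two unresolved-CXB band fluxes quoted in the abstract are mutually consistent with a single power law of photon index $\Gamma \approx 1.42$, the standard CXB slope:

```python
# Energy flux of a power-law photon spectrum N(E) ~ E^-gamma integrated
# between E1 and E2 is proportional to E2^(2-gamma) - E1^(2-gamma).
gamma = 1.42  # standard CXB photon index

def band_flux(e1, e2):  # arbitrary common normalisation
    return e2 ** (2.0 - gamma) - e1 ** (2.0 - gamma)

predicted = band_flux(2.0, 8.0) / band_flux(1.0, 2.0)
measured = 3.8e-12 / 1.02e-12  # the two band fluxes quoted in the abstract
print(predicted, measured)  # both come out at ~3.7
```

The predicted and measured $[2$--$8]/[1$--$2]$ keV flux ratios agree to about $0.1\%$, well inside the quoted uncertainties; the sky and instrumental background components nevertheless behave very differently across the detector, which is why they must be estimated separately.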
Thus, an accurate background determination for each specific observation requires the three background components to be estimated individually. The CXB is the superposition of resolved and unresolved emission coming from distant X-ray sources, such as AGNs (\citealt{giacconi}). Although this component is generated by sources with different redshift and spectral distributions, it can be modelled by a power law with a photon index $\Gamma = 1.42$ (e.g., \citealt{gamma_cxb_value}). The Galactic local foreground component has been studied using the first sky maps of the soft background obtained by ROSAT (\citealt{snowden_rosat}). It can be modelled using two-temperature thermal emission components (\citealt{backg_compo}), although its origin and structure are still under debate. The instrumental component is related to the instrument itself and originates from the interaction of high-energy particles with the instrument and its electronics. For XMM-Newton, \cite{xmm_particle} studied the particle background of the EPIC cameras and proposed an analytic model that, in conjunction with the CXB and the Galactic foreground model, can be used to accurately predict the spatial variation of the background in XMM-Newton observations. Inspired by their work, we study and characterise the spatial and temporal variations of the very faint (VF) mode Chandra ACIS-I particle background. Here we develop an analytical model for the Chandra ACIS-I detector background and provide prescriptions on how to combine it with the sky components to predict the X-ray background for individual Chandra observations. We illustrate its application by deriving an accurate estimate of the flux of the unresolved CXB in a Chandra deep field. The CCD behaviour found in this work is characteristic only of the front-illuminated CCDs of ACIS-I. We performed a similar analysis for the two back-illuminated chips of ACIS-S and found that there are significant spectral spatial variations.
Thus, a proper characterisation of the background of the back-illuminated chips requires much higher photon statistics, which are not yet available. For this reason we limit the detailed analysis to the ACIS-I detector. The paper is organised as follows: in Sec. \ref{dataset} we introduce the datasets used in this work, in Sec. \ref{acis_particle_background} we describe the model production methodology and how to apply it to an observation, and in Sec. \ref{risu_cdfs} we report results obtained by testing our model on a real observation. All fits described in this work were performed using maximum-likelihood estimation. | We studied the spatial and spectral variation of the ACIS-I Chandra particle background for the VF mode eight-year period D+E, starting in $2001$. Using ACIS-stowed data for periods D+E, filtered using VF mode, we modelled the continuum as a sum of a power law plus an exponential, the amplitudes of both following a gradient along the $y$ direction. Using the blank-sky observations available for the same period, we modelled 11 fluorescence lines. Six of them are spatially variable and come in pairs of mother and daughter lines associated with an artefact of the CTI correction. In the future, this simple analytical model can be easily adjusted to match upcoming background datasets. We also showed that the best band to be used to normalize the particle background is $[9.5-10.6]\si{\kilo\electronvolt}$. We demonstrated that our model is very stable over the detector, with an accuracy better than $2\%$ in the continuum and $5\%$ in the lines. Given these systematic errors and the statistics available, we were able to constrain the amplitude of the unresolved cosmic X-ray background from the CDFS with the unprecedented precision of $5\%$. One of the strengths of the Chandra observatory is its ability to resolve a large part of the CXB flux thanks to its unique angular resolution.
Combining this ability with strong constraints on the systematics of its instrumental background should help maximise the accuracy of imaging and spectral analyses of faint extended sources. Among other aspects, the study of the outskirts of galaxy clusters, which is a well-known example for which controlling the systematics of all background components is critical, will strongly benefit from our work (e.g. \citealt{cluster_out_1}, \citealt{cluster_out_2} or \citealt{cluster_out_3}). | 14 | 4 | 1404.3587 |
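The continuum modelling and maximum-likelihood fitting summarised above can be sketched as follows. This is an illustration only: the binning, the parameter values and the use of the Cash statistic (the standard Poisson maximum-likelihood statistic for counts) are our own choices, not the paper's actual pipeline:

```python
import math

def model_counts(E, A, slope, B, E0):
    """Toy continuum: power law plus exponential, in expected counts per bin."""
    return A * E ** (-slope) + B * math.exp(-E / E0)

def cash(params, energies, counts):
    """Cash statistic C = 2 sum(m - n ln m); the Poisson ML fit minimises it."""
    return 2.0 * sum(model_counts(E, *params)
                     - n * math.log(model_counts(E, *params))
                     for E, n in zip(energies, counts))

true = (200.0, 0.4, 80.0, 6.0)                 # A, slope, B, E0 (invented)
energies = [2.0 + 0.5 * i for i in range(17)]  # 2-10 keV bins
# Use the model expectation itself as the 'data' so the example is
# deterministic; real data would be Poisson draws around it.
counts = [model_counts(E, *true) for E in energies]

worse = (220.0, 0.5, 60.0, 5.0)  # any perturbed parameter set scores worse
print(cash(true, energies, counts), "<", cash(worse, energies, counts))
```

In practice one would minimise the statistic over the parameters with a numerical optimiser and add the spatial amplitude gradient and the fluorescence lines on top of this continuum.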
1404 | 1404.3855_arXiv.txt | {We show that self-ordering scalar fields (SOSF), i.e.~non-topological cosmic defects arising after a global phase transition, {\em cannot} explain the B-mode signal recently announced by BICEP2. We compute the full $C_\ell^{B}$ angular power spectrum of B-modes due to the vector and tensor perturbations of SOSF, modeled in the large-$N$ limit of a spontaneously broken global $O(N)$ symmetry. We conclude that the low-$\ell$ multipoles detected by BICEP2 cannot be due mainly to SOSF, since they have the wrong spectrum at low multipoles. As a byproduct we derive the first cosmological constraints on this model, showing that the BICEP2 B-mode polarization data admit at most a 2--3\% contribution from SOSF in the temperature anisotropies, similar to (but somewhat tighter than) the recently studied case of cosmic strings. } | \label{sec:intro} The anisotropies of the cosmic microwave background (CMB) have been measured very precisely over the years by a range of different experiments~\citep{Larson:2010gs,Komatsu:2010fb,Dunkley:2010ge,Reichardt:2011yv,Ade:2013ktc,Planck:2013kta}, most recently by the {\it Planck} Satellite~\citep{Planck:2013kta}, whose temperature data demonstrate an impressive agreement with the standard flat $\Lambda$CDM model over all angular scales relevant at decoupling. Quite surprisingly, the BICEP2 collaboration~\cite{BICEP2} has recently announced the first detection of a B-mode polarization signal at large angular scales. If confirmed, this detection opens a new observational window to models of the early Universe. The leading candidates to explain the BICEP2 low-$\ell$ B-mode signal are tensor perturbations generated during inflation due to quantum fluctuations of the metric. Inflationary tensor modes with a tensor-to-scalar ratio $r \simeq 0.2$ fit the observed B-mode angular spectrum well at scales $\ell \simeq 50-140$.
The higher $\ell$ signal, dominated by the B-modes generated by the lensing of E-modes, shows some excess compared to the expected amplitude. However, the BICEP2 collaboration has warned us~\cite{BICEP2} that data at $\ell \gtrsim 150$ should be considered preliminary. An alternative mechanism to inflation for generating primordial B-modes is cosmic defects, which may have formed after a symmetry-breaking phase transition in the early Universe~\cite{Kibble:1976sj,Kibble:1980mv}. Defects can be {\em local} or {\em global}, depending on whether they are generated after a phase transition which breaks a gauge or a global symmetry. In the local case, only cosmic strings are cosmologically viable defects leaving an imprint in the CMB. In the global case, all defects are viable independent of their dimension, except for domain walls, which are not allowed. For reviews on cosmic defects see~\cite{VilenkinAndShellard,HindmarshAndKibble,Durrer:2001cg,Hindmarsh:2011qj}. Cosmic defects lead to a variety of phenomenological effects; in particular, they produce CMB temperature and polarization anisotropies~\cite{Durrer:1994zza,Durrer2,Durrer3,Turok1,Turok2}, which in general are expected to be non-Gaussian~\cite{Mark1,Mark2,Shellard,Dani}. Backgrounds of gravitational waves (GW) are also expected from several different processes related to cosmic defects: the creation~\cite{GarciaBellido:2007dg,GarciaBellido:2007af,Dufaux:2007pt,DufauxFigueroaBellido,Buchmuller:2013lra}, evolution~\cite{Krauss:1991qu,JonesSmith:2007ne,Fenu:2009qf, Giblin:2011yh,Figueroa:2012kw} and the decay~\cite{Vilenkin:1981bx,Vachaspati:1984gt, Olmez2010,Blanco-Pillado:2013qja} (the latter only applies to cosmic strings). Though in general GW are expected to generate B-mode polarization in the CMB, not every background of GW can lead to a signal at the relevant CMB scales.
Essentially, creating low-$\ell$ B-mode polarization requires tensor modes with a significant amplitude at super-horizon scales at the time of decoupling. The GW backgrounds from the formation and decay of cosmic defects are not expected to contribute a significant B-signal in the low multipole $\ell \simeq \mathcal{O}(10)-\mathcal{O}(100)$ range, simply because their spectra have power mainly at smaller (sub-horizon) scales, i.e.~larger $\ell$. Tensor perturbations created during the evolution of a scaling defect network will contribute to creating B-mode polarization patterns mostly at smaller angular scales; however, they will also have some power at large angular scales. At late times, the GW spectrum produced during the evolution of a defect network is exactly scale-invariant~\cite{Figueroa:2012kw}, so one may wonder whether it is possible to distinguish it from a flat (i.e.~zero tilt) inflationary tensor spectrum. Would it produce a similar B-mode pattern in the polarization of the CMB to the one expected from inflationary flat tensor modes? Fortunately, a possible confusion between the two GW backgrounds only concerns the direct detection of GW by interferometers: the scale invariance of tensor perturbations from the evolution of a defect network is only obtained once the modes have entered the horizon during the radiation-dominated era. At super-horizon scales, however, in a radiation- or matter-dominated Universe, the GW spectrum from defects is white noise, simply dictated by the spectrum of the source [see below Eq.~(\ref{eq:SuperHtentors})]. A low-$\ell$ B-signal in the CMB due to tensors, on the other hand, concerns modes which either just crossed the horizon or are still super-horizon at the time of decoupling.
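The white-noise statement can be made slightly more explicit (our shorthand sketch of the standard causality argument, not the paper's own derivation):

```latex
% Causal sources are uncorrelated beyond their correlation length L, so the
% unequal-point correlator of the source has compact support:
\langle \Pi(\mathbf{x})\,\Pi(\mathbf{x}+\mathbf{r})\rangle = \xi(r),
\qquad \xi(r)\simeq 0 \quad \text{for } r \gtrsim L .
% Its Fourier transform is then analytic and tends to a constant as k -> 0:
\langle |\Pi(\mathbf{k})|^2 \rangle \;\propto\;
\int d^3r\;\xi(r)\,e^{-i\mathbf{k}\cdot\mathbf{r}}
\;\xrightarrow{\;kL\ll 1\;}\; \text{const} \quad \text{(white noise)} .
% The sourced super-horizon tensor modes inherit this, so the dimensionless
% spectrum is strongly blue, in contrast to the (nearly) flat inflationary one:
\Delta^2_h(k) \equiv \frac{k^3}{2\pi^2}\,\langle |h_k|^2\rangle
\;\propto\; k^3
\qquad \text{vs.} \qquad
\Delta^2_{h,\rm inf}(k) \simeq A_t \left(\frac{k}{k_*}\right)^{n_t},
\quad n_t \approx 0 .
```

This is why the defect-sourced tensor spectrum is strongly blue on super-horizon scales and cannot mimic the (nearly) flat inflationary spectrum at low $\ell$.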
A polarization B-signal in the CMB at large angular scales from tensor perturbations of cosmic defects is in fact expected to be quite different from the corresponding signal of inflationary tensor perturbations\footnote{Note, however, that the temperature angular spectrum from cosmic defects (and scaling seeds in general) on large angular scales, which are super-horizon at decoupling, is scale invariant. This can be understood by purely dimensional arguments, simply because it is dominated by the integrated Sachs-Wolfe effect~\cite{Durrer:1996va}.}. Moreover, cosmic defects not only actively create tensor perturbations during their evolution, but also vector and scalar perturbations. Vector perturbations also source B-modes and therefore, in order to properly assess the possible contribution from cosmic defects to the B-mode polarization of the CMB, it is necessary to consider the contribution from both tensor and vector perturbations. After the BICEP2 announcement~\cite{BICEP2}, two independent groups~\cite{Lizarraga:2014eaa,Moss:2014cra} have analyzed the possible contribution from cosmic strings to the B-mode polarization. Although different models of the string networks were used, both groups conclude that standard cosmic strings can contribute at most a small fraction of a few $\%$ to the BICEP2 B-mode signal. Given the current upper bounds set by {\it Planck}~\cite{Ade:2013xla} on the fractional contribution $f_{10}$ from cosmic defects to the temperature angular power spectrum at multipole $\ell = 10$, it is easy to see that no significant contribution from cosmic defects to the B-mode polarization signal detected by BICEP2 can be expected. This has been shown explicitly by~\cite{Lizarraga:2014eaa,Moss:2014cra} for the case of local strings (\cite{Lizarraga:2014eaa} also considered semi-local strings and textures), and indeed it was anticipated by~\cite{Urrestilla:2008jv} years ago.
In this paper we carry out a similar analysis to~\cite{Lizarraga:2014eaa,Moss:2014cra}, but for a different type of cosmic defects, often referred to as {\it self-ordering~scalar~field} (SOSF), corresponding to non-topological configurations of global fields. The analysis of~\cite{Moss:2014cra} indicates that a peculiar type of heavy strings (very different from the local ones) could in principle explain the BICEP2 signal without the need of inflation. The spectra of such a network of rare strings could resemble those of global defects, suggesting that SOSF might improve the fit to the BICEP2 data. A brief analysis that appeared immediately after the BICEP2 announcement~\cite{Dent:2014rga} even concluded that SOSF could fully explain the unexpectedly large BICEP2 B-mode signal. On the other hand, \cite{Lizarraga:2014eaa} argued that the similarity of the B-mode polarization signals from cosmic strings, semilocal strings and textures means that none of these models can fit the data (as shown in Fig.~3 of~\cite{Lizarraga:2014eaa} for the texture case). Here we review these claims, using the most extreme SOSF model (the large-$N$ limit), clarifying certain aspects of SOSF, and incorporating the most precise calculations available of the tensor and vector metric perturbations in this scenario. We shall come to the same conclusion as \cite{Lizarraga:2014eaa}, namely that SOSF can contribute at best a few percent to the BICEP2 signal. In section~\ref{sec:SOSF} we briefly review the basics of SOSF and their modelling in the large-$N$ limit of a global $O(N)$-model. In section~\ref{sec:BBspectra} we clarify certain aspects of SOSF, and compute the full $C_\ell^{B}$ angular power spectrum of B-modes due to the vector and tensor perturbations from SOSF. We then compare with the {\it Planck} temperature and BICEP2 polarization data. In section~\ref{sec:conclusions} we discuss the implications of our results and conclude.
| \label{sec:conclusions} We have investigated the B-polarization of the CMB from a SOSF model and shown that it cannot reproduce the BICEP2 data. This is because the polarization signal comes directly from the last scattering surface, at which the BICEP2 scales, corresponding mainly to $50<\ell<150$, are still super-horizon. On the other hand, the super-horizon spectrum of the SOSF tensor modes is uncorrelated white noise and not scale invariant. The same causality argument implies that defects, and also SOSF, have no first peak in the TE spectrum, as has been pointed out in Ref.~\cite{Spergel:1997vq}. In the same spirit, the low-$\ell$ B-polarization spectrum from defects and SOSF is not scale invariant but much bluer, and can be used to set strong limits on a possible contribution from defects, here modeled as SOSF. We therefore conclude that SOSF cannot explain the BICEP2 data on their own. Thanks to the BICEP2 data, we have also been able to limit the possible contribution from SOSF to the CMB signal to a marginal fraction, $f_{10} \lesssim 0.02$. This shows the strength of B-modes for discriminating a contribution from scaling seeds, as was already pointed out in~\cite{GarciaBellido:2010if}; see also~\cite{Lizarraga:2014eaa}. Future, more precise data will help us to constrain $f_{10}$ much better. For the time being, considering a mixed scenario of inflation + SOSF helps to improve the fit to the current BICEP2 data; in particular, it provides a better fit to the anomalously large amplitude of the high-$\ell$ multipoles. Nevertheless, adding SOSF also reduces the inflationary tensor-to-scalar ratio to $r \approx 0.16$, i.e.~towards a value that is more compatible with the upper bound from {\it Planck} (in the absence of running of the scalar index). Let us also note that this is actually the first constraint on the contribution from the large-$N$ limit of SOSF to the CMB.
Not surprisingly, the result is very similar to the one for textures ($N=4$); this allows us to generically constrain global $O(N)$ models to contribute at most a few percent to the BICEP2 data and to the CMB anisotropies and polarization in general. The same is expected to hold for other scaling scalar field models with similar characteristics. The main point here is that the energy-momentum tensor of the source is uncorrelated on super-horizon scales, and that the only scale of the problem is the Hubble horizon scale. | 14 | 4 | 1404.3855
1404 | 1404.6531_arXiv.txt | The detection of strong thermochemical disequilibrium in the atmosphere of an extrasolar planet is thought to be a potential biosignature. In this article we present a new kind of false positive that can mimic a disequilibrium or any other biosignature that involves two chemical species. We consider a scenario where the exoplanet hosts a moon that has its own atmosphere and neither of the atmospheres is in chemical disequilibrium. Our results show that the integrated spectrum of the planet and the moon closely resembles that of a single object in strong chemical disequilibrium. We derive a firm limit on the maximum spectral resolution that can be obtained for both directly-imaged and transiting planets. The spectral resolution of even idealized space-based spectrographs that might be achievable in the next several decades is in general insufficient to break the degeneracy. Both chemical species can only be definitively confirmed in the same object if absorption features of both chemicals can be unambiguously identified and their combined depth exceeds~100\%. Significance: The search for life on planets outside our own Solar System is among the most compelling quests that humanity has ever undertaken. An often suggested method of searching for signs of life on such planets involves looking for spectral signatures of strong chemical disequilibrium. This article introduces an important potential source of confusion associated with this method. Any exoplanet can host a moon that contaminates the planetary spectrum. In general, we will be unable to exclude the existence of a moon. By calculating the most optimistic spectral resolution in principle obtainable for Earth-like planets, we show that inferring a biosphere on an exoplanet might be beyond our reach in the foreseeable future.
| With almost a thousand confirmed exoplanets \cite[Open Exoplanet Catalogue,][]{Rein2012}, the prospects of detecting signs of a biosphere on a body outside our own solar system are more promising than ever before. However, there are still huge technological and theoretical challenges to overcome before one can hope to make a clear detection of life on an exoplanet. In this article, we discuss one of these complications, the possibility of false positives due to the presence of an exomoon orbiting the exoplanet. There are many ways that life on an exoplanet might affect the planet's appearance, ranging from deliberate signals from intelligent civilizations \cite{tarter2001} to subtler signs of simple life. In order to characterize an extrasolar world as fully as possible, we ideally would measure its spectrum as a function of time in both the optical and the infrared parts of the spectrum \cite[e.g.][]{cowan+agol2009,kawahara+fujii2011,fujii+kawahara2012,fujii_et_al2013}. For example, spectral evidence of water could suggest that a planet might be habitable. It has also been suggested that an intriguing indication of life might be an increase in the planet's albedo toward the infrared part of the spectrum, which on Earth can be associated with vegetation \cite{seager_et_al2005}. However, these features alone would not be smoking-gun proof of the presence of life. The terms `biomarker' and `biosignature' generally refer to chemicals or combinations of chemicals that {\it could} be produced by life and that {\it could not} be (or are unlikely to be) produced abiotically; hereafter, we use these terms interchangeably. If biosignature gases are detected in the spectrum of an exoplanet, the probability that they actually indicate life depends both on the prior probability of life \cite{spiegel+turner2012} and on the probability that the observed spectroscopic feature could be produced abiotically. The latter possibility is the subject of this paper.
Byproducts of metabolism are often thought of as the most promising biomarkers \cite{Kaltenegger2002,Kaltenegger2007,Kaltenegger2010,Seager2012,Seager2013a,Seager2013b,Kasting2013}. More specifically, an extreme thermodynamic disequilibrium of two molecules in the atmosphere is considered a biosignature \cite{Lederberg1965, Lovelock1965, Segura2005}. An example is the simultaneous presence of two such species: O$_2$ and a reduced gas such as CH$_4$. It is important to point out that a disequilibrium in a planet's atmosphere should not be considered clear evidence for life.\footnote{Also note that the Earth might never have had a phase of strong, observable O$_2$/CH$_4$ disequilibrium \cite{Marais2002}.} There is a long list of abiotic sources that could also create a disequilibrium, such as impacts~\cite{Kasting1990}, photochemistry~\cite{Zahnle2008}, and geochemistry~\cite{Seager2013b}. In this article, we describe a new scenario for a possible false-positive biosignature. If the exoplanet hosts a moon that has an atmosphere itself, the simultaneous observation of the planet and moon modifies the observed spectrum \cite[see also][]{Cabrera2007} and can produce a signal that looks like a disequilibrium in one atmosphere, but is in fact created by two atmospheres blended together. It might be extremely difficult to discern that an exoplanet even has a moon, let alone that one component of a two-chemical biosignature comes from the moon instead of the planet. \begin{figure*}[th!] \begin{center} \resizebox{\columnwidth}{!}{\includegraphics{Earth_O2_1p5e5ppm_CH4_30ppm_6000_10000_2.pdf}} \resizebox{\columnwidth}{!}{\includegraphics{Earth_O2_2e5ppm_Titan2_CH4_50ppm_6000_10000_2.pdf}} \end{center} \caption{Model spectra for cases \#1 and \#2 with varying resolution. (Left) Model spectra of a planet with 15\% O$_2$ and 30ppm CH$_4$ (case \#1). (Right) Black lines: Combined spectra of a planet with 20\% O$_2$ and a moon with 50ppm CH$_4$.
Blue lines: Model spectra of a planet with 20\% O$_2$. Red lines: Model spectra of a moon with 50ppm CH$_4$.\label{fig:spcomp} } \end{figure*} The outline of this article is as follows. We first describe our model atmospheres and present simulated spectra. Using those synthetic spectra, we show that the combined spectrum from an oxygen-rich atmosphere such as that of the Earth and a methane-rich atmosphere such as that of Titan indeed looks like it could have come from a single atmosphere with a strong disequilibrium. We then calculate a strong upper limit on the spectral resolution of such a system as observed from Earth under ideal conditions with a plausibly-sized space telescope. Our estimate shows that the spectral resolution $R \equiv \lambda/d \lambda$ for such a system is unlikely to exceed~$\sim$1600 with foreseeable technology. Given this maximum-possible resolution, discriminating between a single planet and a planet-moon system is in general unlikely to be possible.\footnote{ If an exomoon transits its planet, its ``secondary eclipse'' (when it dips behind the planet) offers a way to break the degeneracy, because it presents an opportunity to obtain the spectrum of the planet alone. Transits, however, are unlikely. For instance, the Earth's Moon transits the Earth for only $\sim$2\% of randomly-oriented observers. } Nevertheless, we conclude with a summary and a positive outlook with two possibilities that can provide genuine biosignatures. The first possibility is to find a single chemical species that is sufficient to indicate life. The second requires both species' absorption features to be unambiguously identified and their combined depth to exceed~100\%. | In this article, we studied a false-positive scenario that could spoof biosignatures in spectroscopic observations of exoplanets. We showed that a detection of two chemical species in a spectrum could be caused by light originating from two different bodies.
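As an aside, the 100\% depth criterion mentioned above can be made quantitative with a two-line argument. The sketch below is ours, not the paper's calculation; the 70/30 flux split between planet and moon is an arbitrary assumption.

```python
def blended_depth(depth_own, flux_fraction):
    # Fractional depth of an absorption feature in the combined spectrum,
    # when the feature (depth_own, between 0 and 1) belongs to a body that
    # contributes flux_fraction of the total continuum flux.
    return depth_own * flux_fraction

f_planet = 0.7  # assumed planet share of the combined continuum (arbitrary)

# Even fully saturated lines, one on the planet and one on the moon,
# can never sum to more than 100% depth in the blended spectrum:
d_o2 = blended_depth(1.0, f_planet)         # saturated O2 feature (planet)
d_ch4 = blended_depth(1.0, 1.0 - f_planet)  # saturated CH4 feature (moon)
print(d_o2 + d_ch4)  # -> 1.0, never above it
```

A non-overlapping pair of features with summed depth above 100\% therefore cannot be produced by two separate bodies, which is exactly the escape clause identified in the text.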
This is particularly important because it has been suggested that a chemical disequilibrium (which involves two or more species) could be a biomarker. However, an observation of two species does not show that they are in fact in the atmosphere of a single object (the planet). An almost identical spectrum would be measured if the two species are in the atmospheres of two different bodies, one of them on the planet and the other on the planet's moon. Because it is impossible to resolve the moon-planet system (most likely we would not even know of its existence), the two spectra will be blended together, creating a spectrum with absorption bands of both species. To test this scenario, we calculated synthetic spectra of an exoplanet and an exomoon, both with an atmosphere. Using molecular oxygen and methane as the chemical species of interest, we showed that, with the spectral resolution that will be achievable with foreseeable technology, it will be impossible to tell the difference between the true-biosphere case where both species are in the same atmosphere and the false-positive case where one chemical is on the planet and the other is on the moon. Although our specific false-positive example involves oxygen and methane, the effect is the same for any two gases that are considered a biosignature when observed together. Our results show clearly that it is in general not possible to break the degeneracy between the two cases in a low-to-moderate resolution Earth-twin spectrum. The only case where we can safely conclude the coexistence of two species in one planet is when the smooth continuum level is well determined and the sum of the absorption depths of the two species exceeds 100\%. We considered a large, idealized space telescope and showed that, even under the most optimistic assumptions possible, the spectral resolution is unlikely to be higher than~$R\sim 1600$ for an Earth twin around a Solar-type star.
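To see where a photon-noise ceiling of this order comes from, the following back-of-the-envelope sketch (ours, not the paper's derivation) estimates the signal-to-noise ratio per spectral channel for a directly-imaged Earth twin. All numbers are illustrative assumptions: a planet of $V \approx 29.8$ (a Sun at 10 pc seen with a $10^{-10}$ contrast), a 16-m telescope, a 10-day integration near the 760 nm O$_2$ band, and a $V=0$ zero point of $\sim$1000 photons s$^{-1}$ cm$^{-2}$ \AA$^{-1}$.

```python
import math

def snr_per_channel(R, lam_A=7600.0, V=29.8, D_cm=1600.0, t_s=8.64e5):
    """Photon-noise-limited SNR per spectral channel of width lam/R."""
    f0 = 1000.0                          # ph s^-1 cm^-2 A^-1 at V = 0 (assumed zero point)
    flux = f0 * 10.0 ** (-0.4 * V)       # photon flux of the planet
    area = math.pi * (D_cm / 2.0) ** 2   # collecting area, cm^2
    n_photons = flux * area * (lam_A / R) * t_s
    return math.sqrt(n_photons)          # Poisson-limited SNR

print(round(snr_per_channel(1600)))   # ~100: barely enough to measure deep lines
print(round(snr_per_channel(16000)))  # ~31: ten times finer channels cost sqrt(10) in SNR
```

Pushing much beyond $R\sim$1600 drives the per-channel SNR down as $R^{-1/2}$, consistent in order of magnitude with the limit quoted in the text; transiting geometries obey an analogous, even tighter, photon budget.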
This is a fundamental physical limit based purely on photon noise. Unless we find a planet very close to us ($d \ll 10$~pc) or develop space telescopes significantly larger than considered in this article, the only way to tweak the maximum spectral resolution is by relaxing the assumption of an Earth twin around a Solar-type star. For example, planets that are orbiting low-mass stars and/or are somewhat larger than Earth (so-called super-Earths) have larger planet-star size ratios and could allow improved spectral resolution (see Eq.~\ref{eq:Rmax_trans}). Another way to avoid the exo-moon false-positive scenario altogether is to reconsider single-molecule biomarkers, which do not suffer from the degeneracy presented here. Molecular oxygen (O$_2$) and ozone (O$_3$) have been suggested as potential single-species biosignatures. However, either or both of them might show up in the spectrum of an abiotic planet \cite{wordsworth+pierrehumbert2014}, so they do not definitively indicate life. Nevertheless, progress has been made in recent years on more rigorous constraints on the abiological nature of these gases~\cite{Seager2013a}. From the perspective of the human race exploring space and searching for life on other worlds, the results of this paper are inconvenient, yet unavoidable: we will learn only the most fundamental properties of Earth twins unless we find one right in our solar neighbourhood. It will be possible to obtain suggestive clues indicative of possible inhabitation, but ruling out alternative explanations of these clues will probably be impossible for the foreseeable future. Since our results are based on fundamental physical laws, they are unlikely to change, even as technology advances. The logical step forward is to widen our search for life in the universe and include objects, in the Solar System and beyond, that are less similar to Earth but more easily observable. | 14 | 4 | 1404.6531
1404 | 1404.1916_arXiv.txt | {} {We determine the metallicities of globular clusters (GCs) in the \object{WLM} and \object{IKN} dwarf galaxies, using VLT/UVES and Keck/ESI spectroscopy. These measurements are combined with literature data for field stars to constrain GC formation scenarios. For the \object{WLM GC}, we also measure detailed abundance ratios for a number of light, $\alpha$, Fe-peak, and $n$-capture elements, which are compared with literature data for the Fornax dSph and the Milky Way. } { The abundances are derived by computing synthetic integrated-light model spectra and adjusting the input composition until the best fits to the observed spectra are obtained. } { We find low metallicities of $\mathrm{[Fe/H]}=-2.0$ and $-2.1$ for the \object{WLM GC} and the GC \object{IKN-5}, respectively. We estimate that 17\%--31\% of the stars with $\mathrm{[Fe/H]}\le-2$ in WLM belong to the GC, and \object{IKN-5} may even contain a similar number of metal-poor stars as the whole of the IKN dwarf itself. While these fractions are much higher than in the Milky Way halo, we have previously found a similarly high ratio of metal-poor GCs to field stars in the \object{Fornax dSph}. The overall abundance patterns in the WLM GC are similar to those observed for GCs in the Fornax dSph: the [Ca/Fe] and [Ti/Fe] ratios are super-Solar at about $+0.3$ dex, while [Mg/Fe] is less elevated than [Ca/Fe] and [Ti/Fe]. The [Na/Fe] ratio is similar to the averaged [Na/Fe] ratios in Milky Way GCs, but higher (by $\sim2\sigma$) than those of Milky Way halo stars. Iron-peak (Mn, Sc, Cr) and heavy elements (Ba, Y, La) generally follow the trends seen in the Milky Way halo. } {The GCs in the \object{WLM} and \object{IKN} dwarf galaxies resemble those in the \object{Fornax dSph} by being significantly more metal-poor than a typical halo GC in the Milky Way and other large galaxies. They are also substantially more metal-poor than the bulk of the field stars in their parent galaxies. 
It appears that only a small fraction of the \object{Milky Way} GC system could have been accreted from galaxies similar to these dwarfs. The relatively high Na abundance in the \object{WLM GC} suggests that the [Na/O] anti-correlation is present in this cluster, while the high ratios of metal-poor GCs to field stars in the dwarfs are in tension with GC formation scenarios that require GCs to have lost a very large fraction of their initial mass. } | \defcitealias{Larsen2012}{L12b} \defcitealias{Larsen2012a}{L12a} The formation of globular star clusters (GCs) was evidently a common phenomenon in the early Universe. The ``richness'' of the GC population associated with a galaxy is conveniently expressed in terms of the GC specific frequency, $S_N = N_\mathrm{GC} \times 10^{0.4 (15 + M_V)}$, where $M_V$ is the host galaxy absolute magnitude and $N_\mathrm{GC}$ is the number of GCs \citep{Harris1981}. Most large galaxies have $S_N$ of the order of unity to a few, but there are significant variations both within galaxies and from one galaxy to another. Two notable trends are that (1) the GC/field star number ratio tends to increase with decreasing metallicity within a given galaxy \citep{Forte1981,Forbes1997,Larsen2001,Forbes2001,Harris2002,Harris2007} and that (2) many dwarf galaxies host GCs in disproportionately large numbers compared to larger galaxies \citep{Miller2007,Peng2008,Georgiev2010,Harris2013}. The total integrated luminosity of the Milky Way GC system is about $1.5\times10^7 L_{V\odot}$ \cite[using data from][]{Harris1996}, which corresponds to a mass of $2.2\times10^7 M_\odot$ for an average visual mass-to-light ratio of $\Upsilon_V = 1.45$ \citep{McLaughlin2000}. Assuming that about two thirds of these GCs are associated with the halo \citep[e.g.][]{Zinn1985}, the GC system then accounts for 1\%--2\% of the stellar halo mass \citep{Suntzeff1991}. 
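The bookkeeping in the preceding sentences is easily reproduced. In the sketch below, the GC-system luminosity and mass-to-light ratio are the values quoted above; the Milky Way magnitude ($M_V \approx -20.9$), GC count ($\approx$150), and halo stellar mass ($\sim 10^9\,M_\odot$) are our illustrative assumptions, not numbers from this paper.

```python
def specific_frequency(n_gc, m_v):
    # S_N = N_GC * 10^{0.4 (15 + M_V)}  (Harris & van den Bergh 1981)
    return n_gc * 10.0 ** (0.4 * (15.0 + m_v))

print(round(specific_frequency(150, -20.9), 2))  # ~0.65 for the assumed MW numbers

# GC-system mass from the quoted luminosity and mass-to-light ratio:
m_gcs = 1.45 * 1.5e7                  # Upsilon_V * L_V  ->  ~2.2e7 M_sun

# Halo fraction, with ~2/3 of GCs in the halo and an assumed
# halo stellar mass of ~1e9 M_sun:
print((2.0 / 3.0) * m_gcs / 1e9)      # ~0.015, i.e. the quoted 1%-2%
```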
This \emph{present-day} ratio almost certainly differs significantly from the corresponding ratio at the time of the formation of the GCs or shortly thereafter. All star clusters evolve dynamically, and for GCs with initial masses of the order of $10^5 M_\odot$, the time scale for dissolution due to the combined effects of two-body relaxation and bulge/disc shocks is comparable to the Hubble time \citep{Fall1977,Fall2001,Jordan2007,Kruijssen2009}. Clusters that formed with initial masses below this limit in the early Universe will thus have dissolved by now, and their stars will have become part of the general halo field star population. The mass lost from the GC system due to disruption may be comparable to the present-day stellar mass of the Galactic halo \citep{PortegiesZwart2010}. The amount and rate of mass loss experienced earlier in the lifetime of GCs, when they may have interacted more strongly with the surrounding interstellar medium, remain more uncertain. Galactic star-forming regions exhibit a great deal of hierarchical structure and ``clustering'', but the fraction of stars that end up as members of bound, long-lived clusters is only of the order of a few percent in normal galaxies, though possibly higher in starburst environments \citep{Fall2004,Goddard2010,Lada1991,Lada2003,Larsen2000a,Silva-Villa2011,Kruijssen2012}. This has led to the notion of ``infant mortality'' or ``infant weight loss'', whereby young star clusters (or parts thereof) may become unbound after the expulsion of left-over gas from star formation \citep{Hills1980,Elmegreen1983,Goodwin1997,Boily2003,Goodwin2006,Fall2010}. Large amounts of early mass loss have also been suggested as a solution to the ``mass budget'' problem, which is inherent to many theories for the origin of chemical abundance anomalies observed in GCs.
Compared to field stars, GC stars display anomalous abundance patterns of many light elements \citep{Carretta2009}, and it has been suggested that this may be due to self-pollution within the clusters via mass loss from massive asymptotic giant branch (AGB) stars or fast-rotating massive main sequence stars (``spin stars''). However, the amount of processed material returned by such stars is insufficient to account for the very large observed fractions of stars with anomalous abundance patterns, which has prompted the suggestion that a large fraction of the normal ``first-generation'' stars have been lost from the clusters. In these scenarios, the present-day GCs would be the surviving remnants of systems that were initially far more massive by perhaps a factor of 10 or more \citep{DErcole2008,Schaerer2011,Bekki2011,Valcarce2011}. Recently, scenarios have been proposed in which the anomalies arise from self-pollution during the formation of the cluster -- for example, by material released from massive interacting binaries \citep{Bastian2013a,Denissenkov2013}. The mass budget problem is less severe in these scenarios. An interesting constraint on these ideas comes from observations of GCs in dwarf galaxies. In particular, the chemical composition has now been studied in detail for all five GCs in the \object{Fornax dSph} using high-dispersion spectroscopy of individual stars \citep{Letarte2006} and integrated light \citep[hereafter L12a and L12b]{Larsen2012a,Larsen2012}. Four of the five clusters in the Fornax dwarf are very metal-poor with metallicities of $\mbox{[Fe/H]}<-2$ (the remaining cluster, \object{Fornax 4}, has $\mbox{[Fe/H]}\approx-1.4$). Combined with the metallicity distribution of the field stars \citep{Battaglia2006}, this implies that a very large fraction of all the metal-poor stars in the Fornax dSph (20\%--25\%) belong to the four metal-poor GCs \citepalias{Larsen2012}.
There is an evident tension between the relatively small number of metal-poor field stars in Fornax and scenarios that require GCs to have lost 90\% or more of their initial masses to explain multiple generations of stars \citep[but see][]{DAntona2013}. Another consequence of the very low metallicities of the Fornax GCs is that only a small fraction of the Milky Way GC system could have been accreted from dwarf galaxies similar to the Fornax dSph. In the Milky Way, only 11 GCs or about 7\% of the GC population have $\mathrm{[Fe/H]}<-2$ \citep{Harris1996}. This, of course, does not necessarily exclude that the halo was built up from smaller fragments, but it does suggest that these fragments experienced a higher degree of early chemical enrichment before GCs were formed, as compared to the surviving dwarf galaxies that are observed today. While the results for the Fornax dSph are interesting, it would clearly be desirable to extend this analysis to additional galaxies. In this paper, we analyse integrated-light spectroscopy of the single globular cluster known in the \object{Wolf-Lundmark-Melotte} (\object{WLM}) galaxy and the brightest of the five GCs found in the \object{IKN} dwarf spheroidal galaxy in the M81 group \citep{Georgiev2010}. Apart from determining the overall metallicities of the GCs and comparing them with the field star metallicity distributions, we also carry out a detailed abundance analysis of the \object{WLM GC} and compare our results with the data for GCs in the \object{Fornax dSph} and the Milky Way. We use the same integrated-light spectral analysis technique that was developed and applied to the Fornax GCs in \citetalias{Larsen2012a} with a few modifications described below (Sect.~\ref{sec:analysis}). | We have presented new VLT/UVES high-dispersion, integrated-light spectroscopy of the globular cluster in the \object{WLM} galaxy. 
With these data, we have measured the abundances of several light, $\alpha$, Fe-peak, and $n$-capture elements and compared the results with data for Milky Way GCs and field stars and with literature data for extragalactic GCs in the \object{Fornax dSph} and \object{M31}. We have also determined the metallicity of the brightest GC in the IKN dwarf spheroidal in the M81 group, using a new Keck/ESI spectrum. Our main findings and conclusions are as follows: \begin{enumerate} \item We measure metallicities of $\mathrm{[Fe/H]}=-1.96\pm0.03$ and $\mathrm{[Fe/H]}\approx-2.1$ for the \object{WLM GC} and \object{IKN-5}. While there may be systematic uncertainties at the level of $\sim0.1$ dex, it is clear that these GCs are both significantly more metal-poor than a typical metal-poor GC in the Milky Way halo. They are also significantly more metal-poor than the average of the field stars in their parent galaxies. \item By comparison with literature data for the field-star metallicity distributions and star-formation histories, we estimate that the \object{WLM GC} accounts for 17\%--31\% of the metal-poor stars in WLM, while the number of metal-poor stars in the GC \object{IKN-5} may even be comparable to that in the rest of the IKN dwarf galaxy. This makes these two dwarfs similar to the \object{Fornax dSph} in that they have a very high GC-to-field star ratio at low metallicities. \item The GCs in the \object{WLM} and Fornax dwarfs generally have enhanced $\alpha$-element abundances at the level of $\approx+0.3$ dex, as traced by the [Ca/Fe] and [Ti/Fe] ratios. The enhanced $\alpha$-element abundances indicate that chemical enrichment was prompt and dominated by Type II SNe nucleosynthesis for the material out of which these clusters formed. The only exception is \object{Fornax 4}, whose lower [$\alpha$/Fe] ratio is similar to that of field stars of similar metallicity in Fornax. \item The [Mg/Fe] ratios are significantly lower than [Ca/Fe] and [Ti/Fe]. 
This may be due to anomalous Mg abundances in the cluster stars, although we point out the puzzling observation that the majority of GCs studied in integrated light so far exhibit this phenomenon, while only a small fraction of Milky Way GCs appear to have mean [Mg/Fe] abundances as low as those observed in the integrated-light studies. \item The integrated-light [Na/Fe] ratio in the \object{WLM GC} is about 2$\sigma$ higher than in Milky Way field stars and more similar to the typical average [Na/Fe] abundances of Milky Way GCs. This is consistent with the idea that the WLM GC hosts significant numbers of ``second-generation'' stars which formed out of material that had been processed by $p$-capture nucleosynthesis at high temperatures. \item The Fe-peak elements (Cr, Sc, Mn) and the $n$-capture elements (Ba, Y, La) in the WLM and Fornax GCs generally follow the trends observed in Milky Way field stars and GCs. \end{enumerate} Overall, the chemical composition of the \object{WLM GC} and the Fornax GCs is fairly similar to that of globular clusters of corresponding metallicity in the Milky Way, suggesting that these different environments shared relatively similar early chemical enrichment histories. However, the interstellar gas in the dwarf galaxies had reached a lower level of overall chemical enrichment at the time when the majority of the GCs formed, compared to larger galaxies like the Milky Way, and the dwarfs were apparently able to form bound, massive star clusters at extremely high efficiency compared to field stars. In the context of GC formation scenarios, it is noteworthy that the integrated-light abundance patterns hint at the presence of the same light-element abundance anomalies in the WLM and Fornax clusters (enhanced [Na/Fe] and depleted [Mg/Fe]) that are known from Milky Way GCs. This, combined with the high ratios of GCs vs.
field stars in the dwarfs, would appear to favour scenarios for the origin of abundance anomalies within GCs that do not require the clusters to have lost a very large fraction of their initial mass. It also constrains the amount of mass that could have been lost from disrupted star clusters to the field more generally. | 14 | 4 | 1404.1916 |
1404 | 1404.0910_arXiv.txt | In this paper, we present a systematic study of the force-free field equation for simple axisymmetric configurations in spherical geometry and apply it to the solar active regions. The condition of separability of solutions in the radial and angular variables leads to two classes of solutions: linear and nonlinear force-free fields. We have studied these linear solutions and extended the nonlinear solutions for the radial power-law index to the irreducible rational form $n = p/q$, which is allowed for all cases of odd $p$ and cases of $q>p$ for even $p$ (where the poloidal flux $\psi\propto1/r^n$ and field $\mathbf{B}\propto 1/r^{n+2}$). We apply these solutions to simulate photospheric vector magnetograms obtained using the spectropolarimeter on board \textit{Hinode}. The effectiveness of our search strategy is first demonstrated on test inputs of dipolar, axisymmetric, and non-axisymmetric linear force-free fields. Using the best-fit to these magnetograms, we build three-dimensional axisymmetric field configurations and calculate the energy and relative helicity with two independent methods, which are in agreement. We have analyzed five magnetograms for AR 10930 spanning a period of three days during which two X-class flares occurred, which allowed us to find the free energy and relative helicity of the active region before and after the flare; our analysis indicates a peak in these quantities before the flare events, which is consistent with the results mentioned in the literature. We also analyzed single-polarity regions AR 10923 and 10933, which showed very good fits with potential fields. This method can provide useful reconstruction of the nonlinear force-free (NLFF) fields as well as reasonably good input fields for other numerical techniques. | The active regions in the solar photosphere are locations of high magnetic field where magnetic pressure starts to dominate over gas pressure.
In such conditions the plasma is likely to satisfy the force-free condition, in which the Lorentz force vanishes at all points. It was shown by \citet{taylor} that in systems where magnetic forces are dominant in the presence of kinematic viscosity, linear force-free fields are the natural end configurations. A more general class of force-free fields is obtained when the energy of the system is minimized subject to constraints of total mass, angular momentum, cross helicity and relative helicity (e.g., \citet{finn83}; \citet{mangalam}). Within the context of force-free configurations, numerous possibilities arise from the underlying geometry and symmetry of the problem, in addition to the invariants involved. There have been several attempts to construct full three-dimensional (3D) models from the two-dimensional (2D) data of vector magnetograms obtained at a single level. The various numerical techniques are summarized in \citet{schrijver06} and \citet{metcalf08}. They compare six algorithms for the computation of nonlinear force-free (NLFF) magnetic fields, which include optimization \citep{wheat00,wie04,wie06}, magnetofrictional \citep{yang86,myc94, roume,myc97}, Grad–Rubin based \citep{grad,amari97,amari06,wheat07,wheat09,wheat10}, and Green’s function-based methods \citep{yan97,yan00,yan2005,yan2006}, by evaluating their performance in tests on analytical force-free field models for which boundary conditions are specified either for the entire surface area of a cubic volume or for an extended lower boundary. Figures of merit were used to compare the input vector field to the resulting model fields. Based on these, they argue that all algorithms yield NLFF fields that agree best with the input field in the lower central region of the volume, where the field and electrical currents are strongest and the effects of boundary conditions are the weakest. The NLFF codes, when applied to solar data, do not necessarily converge to a single solution.
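Such figures of merit can be evaluated directly from two sampled vector fields. The sketch below is a minimal Python illustration of two metrics commonly used for this purpose, the global vector correlation and the mean Cauchy-Schwarz (local angle) metric; the array layout and the sample values are our own assumptions, not data from any of the cited studies:

```python
import math

def figures_of_merit(B_obs, B_mod):
    """Vector-correlation and Cauchy-Schwarz metrics for two sampled
    vector fields, each given as a list of (Bx, By, Bz) tuples."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    # Global, amplitude-weighted agreement between the two fields.
    c_vec = (sum(dot(u, v) for u, v in zip(B_obs, B_mod))
             / math.sqrt(sum(dot(u, u) for u in B_obs))
             / math.sqrt(sum(dot(v, v) for v in B_mod)))
    # Mean cosine of the local angle between the fields (Cauchy-Schwarz).
    c_cs = sum(dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
               for u, v in zip(B_obs, B_mod)) / len(B_obs)
    return c_vec, c_cs

# Identical fields score unity in both metrics.
field = [(1.0, 0.0, 2.0), (0.5, -1.0, 0.0), (0.0, 3.0, 1.0)]
c_vec, c_cs = figures_of_merit(field, field)
print(f"C_vec = {c_vec:.3f}, C_CS = {c_cs:.3f}")
```

Both metrics equal unity for identical fields; the vector correlation weights strong-field pixels more heavily, which is one reason agreement is typically assessed in the strong-current core of the volume.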
To address this, \citet{wheat11} include uncertainties on the electric current densities at the boundaries iteratively until the two nonlinear solutions agree, leading to a more reliable construction. Because the NLFF techniques require good input fields for fast convergence and are subject to uncertainties at the boundary conditions that propagate during extrapolation, we are exploring fits of the data directly to analytic solutions. The best-fit to a well-known (non)linear (semi)analytic solution would give us more insight into the kind of structure that could be present in the volume, given an optimal correlation with the fields observed on the magnetogram. The solution thus found can then be exploited to yield quantities of interest, such as relative helicity and free energy, that can be computed for the 3D configuration. Further, one can explore the stability and dynamics of these structures at a later stage. Whereas there are several possible topologies for various geometries and boundary conditions, e.g., \citet{marsh96}, it is our goal here to take the simplest geometric approach of a sphere. We show that the separability condition leads to two classes of solutions: linear and nonlinear force-free fields. We call the linear fields Chandrasekhar solutions \citep{chandra56}, hereafter referred to as C modes, and the nonlinear fields Low-Lou solutions \citep{low90}, hereafter referred to as LL modes. These computationally cheap 3D analytic models can be compared with other numerical results or with observations, and this allows us to make more precise predictions of the physically relevant configurations.
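The defining property shared by both classes of solutions, $\nabla\times\mathbf{B}=\alpha\mathbf{B}$ (so that $\mathbf{J}\times\mathbf{B}=0$), can be verified numerically for any candidate field. As a minimal sketch, assuming the classic Cartesian constant-$\alpha$ field $\mathbf{B}=(\cos\alpha z,\,-\sin\alpha z,\,0)$ for brevity rather than the spherical C or LL solutions, a finite-difference check of the curl reads:

```python
import math

alpha = 2.0
B = lambda z: (math.cos(alpha * z), -math.sin(alpha * z), 0.0)

def curl_B(z, h=1e-6):
    # For a field B = (Bx(z), By(z), 0), curl B = (-dBy/dz, dBx/dz, 0).
    dBx = (B(z + h)[0] - B(z - h)[0]) / (2.0 * h)
    dBy = (B(z + h)[1] - B(z - h)[1]) / (2.0 * h)
    return (-dBy, dBx, 0.0)

# The residual |curl B - alpha B| should vanish up to truncation error.
for z in (0.0, 0.3, 1.1):
    res = max(abs(c - alpha * b) for c, b in zip(curl_B(z), B(z)))
    print(f"z = {z:.1f}: residual = {res:.2e}")
```

The same check, applied componentwise in spherical coordinates, is a useful sanity test for any reconstructed NLFF field.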
Because the validity of physical assumptions can vary from active region to active region, we restrict ourselves to exploring the simplest solutions, involving the fewest parameters, namely the choice of the modes and two of the three Euler angles that represent an arbitrary rotation of the configuration space into the coordinates of the observed magnetogram. An outline of this approach was previously presented in \citet{prasad13}. The paper is organized as follows: In Section \ref{shell} we describe the formulation of the free energy and relative helicity in a shell geometry. In Section \ref{modes}, we show that the force-free field equation under the assumption of axisymmetry leads to linear (C modes) and nonlinear (LL modes) force-free fields, which are discussed in Section \ref{csol} and Section \ref{llsol}, respectively. In Section \ref{s:simulation}, we present the construction of magnetogram templates and the search strategy for obtaining the best-fit using suitable fitting parameters. In Section \ref{s:prepare} and Section \ref{compres} we present the data used for this study and compare them with the simulated models. The summary and conclusions are presented in Section \ref{s:summary}. Details of the mathematical derivations for some of the relations are in Appendices A--H. Table \ref{t:formulae} provides a formulary for the C and LL modes.
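The two Euler angles mentioned above amount to a rigid rotation of the model sphere into the frame of the observed magnetogram. A minimal sketch of such a rotation applied to a single vector follows; the angle convention (a rotation about the $y$-axis followed by one about the $z$-axis) is our own illustrative choice and may differ from the convention adopted in the fitting code:

```python
import math

def rotate(v, theta, phi):
    """Rotate v first by theta about the y-axis, then by phi about the z-axis."""
    x, y, z = v
    # Rotation about y by theta (RHS is evaluated before assignment).
    x, z = x * math.cos(theta) + z * math.sin(theta), -x * math.sin(theta) + z * math.cos(theta)
    # Rotation about z by phi.
    x, y = x * math.cos(phi) - y * math.sin(phi), x * math.sin(phi) + y * math.cos(phi)
    return (x, y, z)

# Tilting the symmetry axis (0, 0, 1) by 30 degrees about y.
axis = rotate((0.0, 0.0, 1.0), math.radians(30.0), 0.0)
print(tuple(round(c, 3) for c in axis))  # -> (0.5, 0.0, 0.866)
```

Rotations preserve vector norms, which provides a quick consistency check when the same transformation is applied to every pixel of a simulated magnetogram.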
| } \label{s:summary} \begin{table} \resizebox{18cm}{9cm}{ \begin{tabular}{|l|} \hline \centerline{\bf C MODES}\\ \hline \\ $ \mathbf{B}(r_1<r<r_2)=\left(\frac{-J_{m+3/2}(\alpha r)}{r^{3/2}}\frac{d } {d \mu}[(1-\mu^2)C_m^{3/2}(\mu)],\frac{-1}{r}\frac{d } {d r}[r^{1/2}J_{m+3/2}(\alpha r)](1-\mu^2)^{1/2}C_m^{3/2}(\mu), \frac{\alpha J_{m+3/2}(\alpha r)}{r^{1/2}}(1-\mu^2)^{1/2}C_m^{3/2}(\mu)\right) $ \\ \\ $ \mathbf{A}(r_1<r<r_2)=\mathbf{B}/\alpha;\quad a_{m+1}=\frac{(m+2)r_1^{m+3/2} J_{m+3/2}(\alpha r_1)}{ r_1^{2m+3}-r_2^{2m+3}};\quad b_{m+1}=\frac{(m+1)r_2^{2m+3} r_1^{m+3/2} J_{m+3/2}(\alpha r_1)}{ r_1^{2m+3}-r_2^{2m+3}} $ \\ \\ $ \mathbf{B}_P(r_1<r<r_2)=\left(\left[(m+1) a_{m+1} r^{m}-\frac{(m+2)b_{m+1}}{r^{m+3}}\right]P_{m+1}(\mu),\right. \left.-(1-\mu^2)^{1/2}\left[ a_{m+1} r^{m}+\frac{b_{m+1}}{r^{m+3}}\right]\frac{dP_{m+1}}{d\mu},0\right) $ \\ \\ $ \mathbf{A}_P(r_1<r<r_2)=\left(0,0,(1-\mu^2)^{1/2} P^\prime_l(\mu)\left[\frac{ a_l r^l}{l+1}-\frac{b_l}{lr^{l+1}}\right]\right) $ \\ \\ $ E_v(r)=\frac{(m+1)(m+2)}{2(2m+3)}\left [r\left [\frac{d}{d r} \left\{r^{1/2} J_{m+3/2}(\alpha r) \right\}\right]^2+\left\{\alpha^2r^2-(m+1)(m+2) \right\}J^2_{m+3/2}(\alpha r)\right]; $ \\ \\ $ E_{\mathrm{ff}}(\alpha,n,m, r_1, r_2)=E_v(r_2)-E_v(r_1) =\frac{(m+1)(m+2)}{2(2m+3)}\Bigl [2\int_{r_1}^{r_2}\alpha^2r J^2_{m+3/2}(\alpha r)dr -r_1^{1/2} J_{m+3/2}(\alpha r_1)\frac{d} {d r}\{r^{1/2}J_{m+3/2}(\alpha r)\}|_{r=r_1}\Bigr] $ \\ \\ $ E_{pot}(m,r_1,r_2)=\frac{1}{2(2m+3)}\int_{r_1}^{r_2}\Bigl[\left((m+1)a_{m+1}r^{m+1}- \frac{(m+2)b_{m+1}}{r^{m+2}}\right)^2 +(m+1)(m+2)\left(a_{m+1}r^{m+1}+\frac{b_{m+1}}{r^{m+2}}\right)^2\Bigr]dr $ \\ \\ $ H_{rel}^{FA}(\alpha,n,m,r_1,r_2) =\frac{8\pi E_{\mathrm{ff}}}{\alpha}+\frac{4\pi(m+1)(m+2)}{\alpha (2m+3)}\Bigl[\alpha^2 \int_{r_1}^{r_2}\left(\frac{a_{m+1}r^{m+1}}{m+2} -\frac{b_{m+1}}{(m+1)r^{m+2}}\right )r^{3/2}J_{m+3/2} (\alpha r) d r $ \\ $~~~~~~~~~~~~~~~~~~~~~~~~~~~ +r_1^{1/2}\left(a_{m+1}r_1^{m+1}+\frac{b_{m+1}}{r_1^{m+2}}\right)J_{m+3/2}(\alpha 
r_1) \Bigr] $ \\ \\ $ H_{rel}^{B}(\alpha,n,m,r_1,r_2)=\frac{8\pi\alpha(m+1)(m+2)}{2m+3}\int_{r_1}^{r_2}rJ_{m+3/2}^2(\alpha r)dr. $ \\ \\ \hline \centerline{\bf LL MODES} \\ \hline $ \mathbf{B}(r<r_2)= \left(\frac{-1}{r^{n+2}}\frac{dP}{d\mu}, \frac{n}{r^{n+2}}\frac{P}{(1-\mu^2)^{1/2}},\frac{a}{r^{n+2}}\frac{P^{(n+1)/n}}{(1-\mu^2)^{1/2}}\right); \quad \mathbf{A}(r<r_2)=\left(0,\frac{-a}{n r^{n+1}}\frac{P(\mu)^{(n+1)/n}}{(1-\mu^2)^{1/2}},\frac{1}{r^{n+1}}\frac{P(\mu)}{(1-\mu^2)^{1/2}}\right) $ \\ \\ $ a_l=0,\quad b_l=\frac{2l+1}{2(l+1)}r_1^{l-n}\int_{-1}^1\frac{dP}{d\mu}P_l(\mu)d\mu; \quad \mathbf{B}_P(r_1<r<r_2)=\left(\sum_{l=0}^{\infty}-(l+1)\frac{b_l}{r^{l+2}}P_l(\mu),\sum_{l=0}^{\infty}\frac{-b_l}{r^{l+2}} (1-\mu^2)^{1/2}\frac{dP_l}{d\mu},0\right). $ \\ \\ $ \mathbf{A}_P(r_1<r<r_2)=\left(0,0,(1-\mu^2)^{1/2} P^\prime_l(\mu)\left[\frac{ a_l r^l}{l+1}-\frac{b_l}{lr^{l+1}}\right]\right); \quad E_{pot}(l,r_1)=\sum_{l=0}^\infty\frac{b_l^2(l+1)}{2(2l+1)r_1^{2l+1}} $ \\ \\ $ E_{\mathrm{ff}}(n,m, r_1)=\frac{1}{4(2n+1)r_1^{2n+1}}\int_{-1}^1d\mu \left[ P^\prime(\mu)^2+\frac{n^2 P(\mu)^2}{1-\mu^2}+\frac{a^2 P(\mu)^{(2n+2)/n}}{1-\mu^2}\right] =\frac{1}{4 r_1^{2n+1}}\int_{-1}^1\left\{\left(\frac{d P}{d \mu} \right )^2 -\frac{(n^2+a^2 P^{2/n})P^2}{(1-\mu^2)}\right\}d \mu $ \\ \\ $ H_{rel}^{FA}(n,m,r_1)=-2\pi a \sum_{l=0}^\infty \int_{-1}^1 \frac{b_l}{n l r_1^{n+l}}P^{1+1/n} \frac{d P_l} {d \mu}d\mu =H_{rel}^{B}(n,m,r_1)=\frac{2\pi a}{n r_1^{2n}} \int_{-1}^1 \frac {P^{2+1/n}}{(1-\mu^2)}d\mu. $ \\ \\ \hline \end{tabular} } \caption{Formulary for the various quantities calculated for the C and LL modes. $\mathbf{B}$ and $\mathbf{A}$ denote the force-free magnetic field and its corresponding vector potential. The same quantities for the potential field are denoted by $\mathbf{B}_P$ and $\mathbf{A}_P$ respectively.
$E_{\mathrm{ff}}$, $E_{pot}$, $E_{\mathrm{free}}$ and $H_{rel}$ are the force-free energy, potential energy, free energy and the relative helicity of the magnetic field configuration, respectively, calculated using the Finn\textendash Antonsen \& Berger formulae, which are analytically equivalent.} \label{t:formulae} \end{table} Here we first summarize the key results of this paper. \begin{enumerate} \item {\em Analytic Results.} We have shown that two solutions are possible (albeit known already and denoted here as C and LL) under the separability assumption. We calculate the energies and relative helicity of the allowed force-free fields in a shell geometry. The final expression for the field of the C modes is given in Equation (\ref{bchand}). We then calculated the corresponding potential field needed for computing the relative helicity in this region. The expressions for the potential field and its vector potential are given in Equations (\ref{bpchand1}) and (\ref{Apchand1}). The relative helicity thus calculated is given by Equation (\ref{hrelchand}). The expressions for the energies of the force-free field and the potential field are given by Equations (\ref{eint}) and (\ref{epotchand}), respectively, whereby we can calculate the free energy of the system using Equation (\ref{efree}). The alternative expressions for the energy of the force-free field and the relative helicity are given in Equations (\ref{evcmode}) and (\ref{hrelcb}), respectively, which are analytically in agreement with the previous expressions. For the LL modes we were able to extend the solution set obtained in \citet{low90} from $n=1$ to all rational values of $n=\displaystyle{\frac{p}{q}}$ by solving Equation (\ref{feq}) for all cases of odd $p$ and for cases of $q>p$ for even $p$, in effect extending the solution to practically all $n$. The final expression for the magnetic field is given by Equation (\ref{flow}) and its vector potential by Equation (\ref{alow1}).
The expression for the potential field consistent with this force-free field is given by Equation (\ref{Bplow1}) and the corresponding vector potential is given by Equation (\ref{Apchand1}), with the constants evaluated from Equation (\ref{alLL1}). The relative helicity in the region, computed using the Finn\textendash Antonsen formula, is given by Equation (\ref{hrellow}). The energies for the force-free and potential fields are given by Equations (\ref{efflow}) and (\ref{epotlow}), respectively. Again, the alternative expressions for the energy of the force-free field and the relative helicity are given in Equations (\ref{effvll}) and (\ref{hrbll}), respectively, which are analytically in agreement with the previous expressions. For convenience, these results are included in the formularies for the C and LL modes in Table \ref{t:formulae}. \item {\em Numerical Results.} We formulated a search strategy with parameters including two Euler rotations of the force-free sphere and a variable set that corresponds to the various C and LL modes; see Section \ref{search}. A study of the effectiveness of our search strategy is presented in Section \ref{effective}. Here we find that we are able to recover the correct configuration for the input field (as in the dipole case) and are able to fit the energies within a factor of two. We then studied the field configurations for three active regions (Table \ref{tab}) and calculated the free energy and relative helicity for these cases. We were able to obtain reasonable fits for the above cases; see Table \ref{corrtable}. All of the field configurations analyzed were found to be negatively twisted, as seen from the $\alpha$ for the C modes and the helicity of the LL modes; see Table \ref{results}. The fits with the nonlinear LL modes seem to be better than those with the linear C modes. In the case of AR 10930, there was an X3.4-class flare on 2006 December 13, and we confirm in both modes a substantial decrease in free energy and relative helicity after the flare.
A comparison of the results obtained in this paper with those in the literature for the same flare event is presented in Table \ref{compare}. The relative helicity and free energy in the C mode increased, and in the LL mode decreased marginally, after the X1.5-class flare on 2006 December 14. The two single-polarity ARs 10923 and 10933 show very high correlations ($>90\%$) with potential fields. We were not able to explore the full parameter space because of the computational constraints mentioned in Section \ref{search}. Because our best-fit with the observational data for the LL modes is substantially better ($\sim$75\%) than those obtained in the test cases, this lends much credibility to the results presented in the paper. \end{enumerate} We find that the approach taken here is fairly good in estimating the quantities of interest, namely the relative helicity and free energy; see Table \ref{compare} and Section \ref{compres}. In order to compare with the other estimates available in the literature, where the potential fields are usually extended from the planar surface of the magnetogram to a cuboidal volume over the magnetogram, we rescale our physical quantities obtained for a hemisphere by the factor of the solid angle subtended by the magnetogram at the center. This enables us to approximate their trend before and after a flare event. The validity of this approximation can be seen from the general agreement with other estimates (including observations). An advantage of this method is its ease and utility in calculating these physical quantities, in particular the relative helicity, which thus far has not been calculated by other approaches. Further, we do not have to assume any boundary conditions for the side walls, as required in the other extrapolation techniques using a cuboidal volume. This method can also provide a useful reconstruction of the NLFF fields as well as reasonable input fields for other numerical techniques.
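As a numerical illustration of the C-mode formulary in Table \ref{t:formulae}: for the lowest mode ($m=0$) the half-integer Bessel function has the closed form $J_{3/2}(x)=\sqrt{2/(\pi x)}\,(\sin x/x-\cos x)$, so the radial integral $\int_{r_1}^{r_2} r J_{3/2}^2(\alpha r)\,dr$ entering $E_{\mathrm{ff}}$ and $H_{rel}^{B}$ can be evaluated by elementary quadrature. The sketch below uses a unit field amplitude and shell radii of our own choosing, purely for illustration:

```python
import math

def J32(x):
    # Closed form of the half-integer Bessel function J_{3/2}(x).
    return math.sqrt(2.0 / (math.pi * x)) * (math.sin(x) / x - math.cos(x))

def radial_integral(alpha, r1, r2, n=20000):
    # Trapezoidal estimate of  int_{r1}^{r2} r J_{3/2}(alpha r)^2 dr.
    h = (r2 - r1) / n
    f = lambda r: r * J32(alpha * r) ** 2
    return h * (0.5 * (f(r1) + f(r2)) + sum(f(r1 + i * h) for i in range(1, n)))

# Illustrative shell and force-free parameter.
alpha, r1, r2 = 3.0, 1.0, 2.0
I = radial_integral(alpha, r1, r2)
H_rel_B = 8.0 * math.pi * alpha * (2.0 / 3.0) * I  # m = 0: (m+1)(m+2)/(2m+3) = 2/3
print(f"radial integral = {I:.6f}, H_rel^B = {H_rel_B:.6f}")
```

Higher modes require general $J_{m+3/2}$, for which a library Bessel routine would replace the closed form used here.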
It is clear that the nonlinear LL modes are dominantly better fits than the linear C modes. The search is at present limited by computational constraints; in the future, we hope to improve the fits by applying the method to a larger space of geometrical parameters and to more cases of the mode numbers $n$ and $m$. The LL solutions of $n=1$ in \citet{low90} and $n=5,7,9$ (odd cases) in \citet{flyer04} have been extended here to the cases of nearly all $n$. The topological properties of these extended solutions can be further studied by considering other boundary conditions. The analytic LL solutions suffer from the problem of a singularity at the origin, which renders them unphysical; this implies that more realistic boundary conditions are necessary. To learn more about the evolution and genesis of these structures, it would be useful to carry out dynamical simulations that allow for footpoint motions with the analytic input fields constructed above, to study how the nonlinearity develops; a stability analysis of the nonlinear modes would also be a useful tool (\citet{berger85} has analyzed the linear constant-$\alpha$ case). Clearly, these are difficult mathematical problems to be addressed in the future. | 14 | 4 | 1404.0910 |
1404 | 1404.5018_arXiv.txt | In this paper we investigate the formation of Uranus and Neptune, according to the core-nucleated accretion model, considering formation locations ranging from 12 to 30 AU from the Sun, and with various disk solid-surface densities and core accretion rates. It is shown that in order to form Uranus-like and Neptune-like planets in terms of final mass {\it and} solid-to-gas ratio, very specific conditions are required. We also show that when recently proposed high solid accretion rates are assumed, along with solid surface densities about 10 times those in the minimum-mass solar nebula, the challenge in forming Uranus and Neptune at large radial distances is no longer the formation timescale, but is rather finding agreement with the final mass and composition of these planets. In fact, these conditions are more likely to lead to gas-giant planets. Scattering of planetesimals by the forming planetary core is found to be an important effect at the larger distances. Our study emphasizes how (even slightly) different conditions in the protoplanetary disk and the birth environment of the planetary embryos can lead to the formation of very different planets in terms of final masses and compositions (solid-to-gas ratios), which naturally explains the large diversity of intermediate-mass exoplanets. | The increasing number of detected exoplanets with masses similar to those of Uranus and Neptune emphasizes the need to better understand the formation process of the planetary class consisting of planets with rock-ice cores of up to $\approx$ 15 M$_\oplus$ and lower-mass hydrogen/helium envelopes, with a large range of solid-to-gas ratios. However, the formation mechanism for intermediate-mass planets is not well understood (e.g., Rogers et al. 2011), and even within our own solar system there are many open questions regarding the formation of Uranus and Neptune. 
\par Uranus and Neptune have masses of about 14.5 and 17 M$_{\oplus}$, and are located at 19.2 and 30 AU, respectively. Their exact compositions are not known (e.g., Helled et al. 2011), but they are likely to consist mainly of rock and ices with smaller mass fractions of hydrogen-helium atmospheres (see Fortney \& Nettelmann 2010; Nettelmann et al. 2013 and references therein). The estimated solid-to-gas mass ratios of Uranus and Neptune range between 2.3 and 19 (Guillot 2005; Helled et al. 2011). Hereafter, when we refer to the solid-to-gas ratio, it should be understood that the solid component consists of all elements heavier than hydrogen and helium (although their physical state can in fact differ from a solid state), while the gas corresponds solely to hydrogen and helium. It is commonly assumed that Uranus and Neptune have formed by the core accretion scenario, in which solid core formation accompanied by slow gas accretion is followed by more rapid gas accretion (e.g., Pollack et al. 1996). During the initial phases of planet formation, the slow accretion of gas is controlled by the growth rate of the planet's core. As a result, the solid accretion rate essentially determines the formation timescale of an intermediate-mass planet (see D'Angelo et al. 2011 for a review). The (standard) solid accretion rate is given by (Safronov 1969): \begin{equation} \dot M_\mathrm{core} = {\frac{dM_\mathrm{solid}}{dt}} = \pi R_\mathrm{capt}^2 \sigma_s \Omega F_g, \label{eq:accrete} \end{equation} where $\pi R_\mathrm{capt}^2$ is the capture cross section for planetesimals, $\Omega$ is the orbital frequency, $\sigma_s$ is the solid surface density in the disk, and $F_g$ is the gravitational enhancement factor. As can be seen from Equation~(1), the core accretion rate decreases with increasing radial distance, and as a result, the core formation timescale can be extremely long at radial distances larger than $\sim$ 10 AU.
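To make the radial scaling of Equation~(1) concrete, the sketch below evaluates $\dot M_\mathrm{core}$, and the implied time to assemble a $\sim$10 M$_\oplus$ core, at several distances. All input numbers here (capture radius, $F_g$, and an MMSN-like $\sigma_s\propto a^{-3/2}$ normalization) are illustrative placeholders rather than values adopted in this paper; with both $\sigma_s$ and $\Omega$ falling as $a^{-3/2}$, the rate declines as $a^{-3}$:

```python
import math

G, M_SUN, AU, YEAR = 6.674e-11, 1.989e30, 1.496e11, 3.156e7
M_EARTH = 5.972e24

def core_accretion_rate(a_au, sigma_1au=100.0, R_capt=1.0e8, F_g=100.0):
    """Safronov rate  dM/dt = pi R_capt^2 sigma_s Omega F_g  in SI units,
    with sigma_s = sigma_1au * a^{-3/2} (an MMSN-like assumption)."""
    a = a_au * AU
    sigma_s = sigma_1au * a_au ** -1.5      # kg m^-2
    omega = math.sqrt(G * M_SUN / a ** 3)   # s^-1
    return math.pi * R_capt ** 2 * sigma_s * omega * F_g

for a_au in (5.0, 12.0, 20.0, 30.0):
    mdot = core_accretion_rate(a_au)
    t_grow = 10.0 * M_EARTH / mdot / YEAR   # years to assemble ~10 Earth masses
    print(f"a = {a_au:5.1f} AU: dM/dt = {mdot:.2e} kg/s, t(10 M_Earth) ~ {t_grow:.2e} yr")
```

The steep $a^{-3}$ fall-off of the instantaneous rate is what drives the factor-of-several-hundred increase in growth time between a few AU and 30 AU.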
Using the model of the minimum mass solar nebula (MMSN, see Weidenschilling 1977 for details), the formation timescales for Uranus and Neptune exceed $10^9$ years (Safronov 1969). This estimate has introduced the {\it formation timescale problem of Uranus and Neptune}. \par A detailed investigation of the formation of Uranus has been presented in Pollack et al.~(1996). The authors have considered {\it in situ} formation and have shown that for $\sigma_s$ about twice that of the MMSN the approximate core mass and envelope mass of Uranus are reached in about 16 Myr. The timescale is considerably shorter than that of Safronov primarily because of the use of improved (and much higher) values for $F_g$. The core accretion rate, however, depends not only on $\sigma_s$ and $F_g$ but also on the sizes of the accreted planetesimals. Smaller planetesimals can be accreted more easily, and it was shown by Pollack et al.~(1996) that when planetesimals are assumed to have sizes of 1 km (instead of the standard 100 km) the formation timescale of Uranus decreases to $\approx$ 2 Myr. This timescale is within the estimated range of lifetimes of gaseous protoplanetary disks but has been considered unrealistically short because of the use of a simplified core accretion rate in the model. Clearly, the formation timescale for Neptune under similar assumptions would be significantly longer. Due to the long accretion times at large radial distances, Uranus and Neptune are often referred to as ``failed giant planets", because their formation process was too slow to reach runaway gas accretion before the disk gas had dissipated. \par Goldreich et al. (2004) discuss in detail the problem of the accumulation of solid particles, particularly at large distances from the central star. 
Without considering the effects of the gas, they suggest that to form Uranus- or Neptune-like planets {\it in situ}, one requires, first, relatively small planetesimals, $< 1$ km in radius, and second, a value of $\sigma_s$ a few times that of the MMSN. In fact, they conclude that in order to form the planets within the lifetime of the gas disk, particles of only a few cm in size are needed. The small particles would be generated by collisions between the km-size objects. Numerous collisions among the small particles strongly damp the particle random velocities, resulting in a very cold disk. However, Levison \& Morbidelli (2007), on the basis of N-body simulations, point out that this model is oversimplified and relies on numerous assumptions. They suggest that the main difficulty is the assumption that the surface density of the disk particles remains smooth and uniform. In fact the simulations show that the formation of rings and gaps actually dominates the dynamics. The idea that the solar system was originally much more compact and that Uranus and Neptune were formed at smaller radial distances has been considered by a number of authors (e.g., Thommes et al. 1999; Tsiganis et al. 2005). The planets must arrive at their present locations post-formation, by gravitational scattering or by migration induced by a disk of planetesimals. The success of the ``Nice Model'' to explain many of the observed properties of the solar system led Dodson-Robinson \& Bodenheimer (2010) to investigate the formation of Uranus and Neptune at radial distances of 12 and 15 AU, as suggested by that model. They adopted a disk model which accounts for disk evolution and disk chemistry (Dodson-Robinson et al. 2009), giving values of $\sigma_s$ at those distances an order of magnitude higher than in the MMSN. The planet-formation calculation was similar to that of Pollack et al.~(1996). 
It was found that the formation timescales of both Uranus and Neptune fell in the range 4--6 Myr, and that in some cases the solid-to-gas ratio was similar to those in the present planets. In addition, the results are consistent with the observed carbon enhancement in the atmospheres of these planets. It was therefore concluded that, indeed, Uranus and Neptune could have formed at smaller radial distances as implied by the Nice model. There are, however, unsolved issues with this formation scenario--- for example, there is no way to distinguish Uranus from Neptune, and more importantly, there must be a cutoff of both solid and gas accretion when the planets reach their current masses; otherwise the model predicts that they would continue to accrete to higher mass. \par While the scenario in which Uranus and Neptune form at smaller radial distances is feasible and somewhat promising, there is no evidence for ruling out the possibility that these planets, and in particular extrasolar planets, could have formed at larger distances. Recent studies on the accretion rate of solids have provided new estimates for the rates in which solids are accreted to a planetary embryo in the core accretion paradigm (Rafikov 2011; Lambrechts \& Johansen 2012). These studies suggest that the solid accretion rates can be significantly higher than previous estimates. With these high accretion rates the core formation timescale at large radial distances is significantly reduced. Although {\it in situ} formation of Neptune is considered unlikely (see below), {\it in situ} formation of Uranus as well as formation of extrasolar giant planets, or Neptune-sized planets, by core accretion at large radial distances might be feasible. \par The aim of this paper is to re-investigate the formation of Uranus and Neptune (as representatives of intermediate-mass planets) and, in particular, to investigate the consequences of employing the high accretion rates mentioned in the previous paragraph. 
We account for various accretion rates, orbital locations, and disk properties. We employ a full core accretion--gas capture model. As we discuss below, formation of planets at 20 and 30 AU is, in principle, feasible, although their characteristics may not be those of Uranus and Neptune. We also suggest that a major challenge in forming Uranus and Neptune is to derive the correct final masses and the solid-to-gas ratios. However, the sensitivity of the properties of the forming planets to the assumed parameters provides a natural explanation for the diversity of extrasolar planets in the Uranus/Neptune mass regime. \par | Understanding the formation of Uranus and Neptune is crucial for understanding the origin of our solar system, and in addition, the formation of intermediate-mass planets around other stars. Planets that are similar to Uranus and Neptune in terms of mass are likely to form by core accretion, i.e., from a growing solid core which accretes gas at a lower rate, although alternative mechanisms should not be excluded (Boss et al. 2002; Nayakshin 2011). If indeed formed by core accretion, such planets must form fast enough to ensure that gas is accreted onto the core, but at the same time slow enough, in order to remain small in mass and not become gas giant planets. \par Our study shows that simulating the formation of Uranus and Neptune is not trivial, and that getting the correct masses and solid-to-gas ratio depends on the many (unknown) model parameters. Even small changes in the assumed parameters can lead to a very different planet. The core accretion rate and the disk's properties such as solid-surface density and the planetesimals' properties (sizes, dynamics, etc.) play a major role in the formation process, and even small changes in these parameters can influence the final masses and compositions of the planets considerably. 
\par We have used high accretion rates for the solids and have investigated their impact on the planet formation process. With these high accretion rates and with values of $\sigma_s$ about 10 times those in the MMSN, the timescale problem for the formation of Uranus or Neptune at 20 AU disappears. However, a new problem arises: the formation of the planets can be so efficient that instead of becoming failed giant planets, Uranus and Neptune would become giant planets, similar to Jupiter and Saturn. The situation becomes even worse for formation at smaller radial distances, where the solid surface density is higher. At radial distances such as 12 and 15 AU, the planets reach runaway gas accretion within a timescale which is shorter than the average lifetimes of protoplanetary disks, leading to the formation of gaseous giant planets. However, if the values of $\sigma_s$ appropriate for the MMSN are used, a different problem arises. Even with high core accretion rates, the resulting solid masses are too low compared with those of Uranus/Neptune, at all distances. Scattering of planetesimals is an important effect in this regard. Our results suggest that intermediate values of $\sigma_s$, perhaps combined with relatively low core accretion rates (e.g.~Run 12UN5), are needed to satisfy the joint constraints provided by disk lifetimes, total masses, and solid-to-gas ratios.
Finding the right set of parameters that will lead to the formation of planets with the final masses of Uranus and Neptune is challenging, and getting the correct gas masses is even harder. In some cases the gas accretion rate becomes high enough and an increase in mass to the Uranus/Neptune mass range occurs, but the solid-to-gas ratio in the models is inconsistent with that of Uranus and Neptune (see Table 2). The cases we present simply demonstrate the sensitivity of planet formation to the assumed parameters, and while they do provide possible scenarios for the formation of Uranus and Neptune, they are certainly not unique. Clearly, the core accretion rate is a major uncertainty in planet formation models, and a better determination of this property (and its time evolution) will have a significant impact on simulations of planetary growth of both terrestrial and giant planets. \par Our work suggests that under the right conditions, {\it in situ} formation of Uranus at 20 AU, or formation of Neptune at about the same distance, is possible. At 30 AU, our run with $\sigma_s$ near the value for the MMSN and with planetesimal scattering showed that a Neptune-mass planet was not formed. Although Rafikov (2011) shows that the use of the high accretion rate should result in evolution to rapid gas accretion out to 40--50 AU in an MMSN in 3 Myr, our result is different. The main reason is that planetesimal scattering limits the core mass to about 3.3 M$_\oplus$, leading to a slow accretion rate for the envelope. However, that result could change if a higher value of $\sigma_s$ were taken. {\it In situ} formation at relatively large radial distances seems to be possible for intermediate-mass planets in general. In addition, high solid surface density, which is expected in metal-rich environments, can lead to fast core formation, and therefore to the formation of giant planets instead of intermediate-mass planets.
The latter provides a natural explanation for the correlation between stellar metallicity and the occurrence rate of gas giant planets. Nevertheless, we also find that giant planets can be formed in low-metallicity environments. As a result, giant planets around low-metallicity stars should not necessarily be associated with formation by gravitational instability. It is also concluded that the formation of Uranus/Neptune-mass planets is not always challenging in terms of the formation timescale, but often in terms of reducing the gas accretion in order to prevent the formation of gaseous planets. \par Finally, while our work emphasizes once more the difficulty of simulating the formation of the solar-system planets very accurately, it provides a natural explanation for the diversity of planetary parameters in extrasolar planetary systems. Since planetary disks are expected to have different physical properties (e.g., surface densities, lifetimes), it is clear that the forming planets will have different growth histories, as well as different final masses and compositions. | 14 | 4 | 1404.5018 |
1404 | 1404.0584_arXiv.txt | {Time and spatial damping of transverse magnetohydrodynamic (MHD) kink oscillations is a source of information on the cross-field variation of the plasma density in coronal waveguides. } {We show that a probabilistic approach to the problem of determining the density structuring from the observed damping of transverse oscillations enables us to obtain information on the two parameters that characterise the cross-field density profile.} {The inference is performed by computing the marginal posterior distributions for density contrast and transverse inhomogeneity length-scale using Bayesian analysis and damping ratios for transverse oscillations under the assumption that damping is produced by resonant absorption.} {The obtained distributions show that, for damping times of a few oscillatory periods, low density contrasts and short inhomogeneity length scales are more plausible in explaining observations.} {This means that valuable information on the cross-field density profile can be obtained even if the inversion problem, with two unknowns and one observable, is a mathematically ill-posed problem.} | Transverse magnetohydrodynamic (MHD) kink oscillations have been reported in numerous observations of solar coronal magnetic and plasma structures. First revealed in coronal loop observations using the Transition Region and Coronal Explorer (TRACE) by \cite{aschwanden99} and \cite{nakariakov99}, they also seem to be an important part of the dynamics of chromospheric spicules and mottles \citep{depontieu07,kuridze13}; soft X-ray coronal jets \citep{cirtain07}; or prominence fine structures \citep{okamoto07,lin09}. Their presence over extended regions of the solar corona \citep{tomczyk07} may have implications on the role of waves in coronal heating. Observations with instruments such as AIA/SDO, CoMP, and Hi-C by e.g., \cite{morton13} and \cite{threlfall13} have allowed us to analyse transverse oscillations with unprecedented detail. 
The potential use of transverse oscillations as a diagnostic tool to infer otherwise difficult to measure coronal magnetic and plasma properties was first demonstrated by \cite{nakariakov01}, by interpreting them as MHD kink modes of magnetic flux tubes. Since then, a number of studies have used seismology diagnostic tools that make use of oscillation properties, such as periods and damping times, to obtain information on the magnetic field and plasma density structuring \citep[see e.g., ][]{andries05b,arregui07a,verth08b,verth11}. An overview of recent seismology applications can be found in \cite{arregui12b} and \cite{demoortel12}. The increase in the number of observed events has lately enabled the application of statistical techniques to seismology diagnostics \citep{verwichte13, asensioramos13}. Some of these studies are concerned with the determination of the cross-field density structuring in waveguides supporting MHD oscillations. \cite{goossens02a} were the first to note that measurements of the damping rate of coronal loop oscillations together with the assumption of resonant absorption as the damping mechanism could be used to obtain estimates for the transverse inhomogeneity length scale. By assuming a value for the ratio of the plasma density between the interior of the loop and the corona, they computed the inhomogeneity length scale for a set of 11 loop oscillation events. \cite{verwichte06} analysed how information on the density profile across arcade-shaped models can be obtained from the oscillation properties of vertically polarised transverse waves. When no assumption is made on the density contrast, \cite{arregui07a} and \cite{goossens08a} showed that an infinite number of equally valid equilibrium models is able to reproduce observed damping rates, although they must follow a particular one-dimensional curve in the two-dimensional parameter space of unknowns. 
More recently, \cite{arregui11b} have shown how information on density contrast from observations can be used as prior information in order to fully constrain the transverse density structuring of coronal loops. \cite{arregui13b} have shown that the existence of two regimes in the damping time/spatial scales would enable the constraint of both the density contrast and its transverse inhomogeneity length scale. The feasibility of such a measurement has yet to be confirmed by observations. We present the Bayesian solution to the problem, which makes use of the computation of marginal posteriors, and show that valuable information on the cross-field density profile can be obtained even if the inversion problem, with two unknowns and one observable, is a mathematically ill-posed problem. | Seismology of transverse MHD kink oscillations offers a way of obtaining information on the plasma density structuring across magnetic waveguides. For resonantly damped kink mode oscillations, the determination of the cross-field density profile from observed damping ratios consists in the solution of an ill-posed mathematical problem with two unknowns and one observable. In this study we have introduced a modified Bayesian analysis technique that makes use of the basic definition of marginal posteriors, which are obtained not by sampling the posterior using a Markov chain Monte Carlo technique as in \cite{arregui11b}, but by performing the required integrals over the parameter space once the joint probability is computed. This has led to a better understanding of when and how the unknown parameters may be constrained. The application of this technique for the computation of the probability distribution of the unknowns enables us to draw conclusions about the most plausible values, conditional on observed data. The procedure makes use of all available information and offers correctly propagated uncertainty. 
Considering typical ranges for the possible values of the unknown parameters, we find that, for damping times of a few oscillatory periods, low values for the density contrast are favoured and larger values of this parameter become less plausible. Regarding the transverse inhomogeneity length scale, short values below the scale of the tube radius are found to be more plausible than larger length-scales near the limit for fully non-uniform tubes. The procedure described here can be followed to obtain the most plausible values for the two unknowns that determine the cross-field density profile in coronal waveguides, upon assuming resonant absorption as the damping mechanism operating in transverse kink waves, and conditional on the observed damping ratios with their associated uncertainty. In contrast to \cite{arregui11b}, who used two observables to determine three unknowns, we use one observable to determine two unknown parameters. We left out the Alfv\'en travel time because of our interest in the cross-field density profile, which is determined by $\zeta$ and $l/R$. The same technique presented in this study can be used by increasing by one the number of observables/unknowns. We have found that this leads to similar posteriors for $\zeta$ and $l/R$, with the additional information on the Alfv\'en travel time also being available. | 14 | 4 | 1404.0584 
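The grid-based marginalization described above can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the authors' implementation: it assumes the standard thin-tube, thin-boundary damping-ratio expression for a sinusoidal density profile, $\tau_d/P = (2/\pi)(R/l)(\zeta+1)/(\zeta-1)$, a Gaussian likelihood for the observed damping ratio, and flat priors over assumed grid ranges for $\zeta$ and $l/R$.

```python
import numpy as np

def damping_ratio(zeta, l_over_R):
    """tau_d / P from resonant absorption (thin tube, thin sinusoidal layer)."""
    return (2.0 / np.pi) / l_over_R * (zeta + 1.0) / (zeta - 1.0)

def marginal_posteriors(r_obs, sigma_r, n=300):
    """Joint posterior on a (zeta, l/R) grid, then marginalize each parameter."""
    zeta = np.linspace(1.2, 10.0, n)        # density contrast (flat prior, assumed range)
    l_over_R = np.linspace(0.01, 2.0, n)    # inhomogeneity length scale in tube radii
    Z, L = np.meshgrid(zeta, l_over_R, indexing="ij")
    # Gaussian likelihood of the observed damping ratio, flat priors
    joint = np.exp(-0.5 * ((damping_ratio(Z, L) - r_obs) / sigma_r) ** 2)
    joint /= joint.sum()                    # normalize the joint posterior
    p_zeta = joint.sum(axis=1)              # integrate over l/R
    p_l_over_R = joint.sum(axis=0)          # integrate over zeta
    return zeta, p_zeta, l_over_R, p_l_over_R
```

For a damping time of a few periods (e.g. r_obs = 3), the returned marginals concentrate toward low contrasts and short length scales, mirroring the qualitative conclusion stated above.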
1404 | 1404.4811.txt | Remote sensing observations meet some limitations when used to study the bulk atmospheric composition of the giant planets of our solar system. A remarkable example of the superiority of {\it in situ} probe measurements is illustrated by the exploration of Jupiter, where key measurements such as the determination of the noble gases abundances and the precise measurement of the helium mixing ratio have only been made available through {\it in situ} measurements by the Galileo probe. This paper describes the main scientific goals to be addressed by the future {\it in situ} exploration of Saturn placing the Galileo probe exploration of Jupiter in a broader context and before the future probe exploration of the more remote ice giants. {\it In situ} exploration of Saturn's atmosphere addresses two broad themes that are discussed throughout this paper: first, the formation history of our solar system and second, the processes at play in planetary atmospheres. In this context, we detail the reasons why measurements of Saturn's bulk elemental and isotopic composition would place important constraints on the volatile reservoirs in the protosolar nebula. We also show that the {\it in situ} measurement of CO (or any other disequilibrium species that is depleted by reaction with water) in Saturn's upper troposphere would constrain its bulk O/H ratio. We compare predictions of Jupiter and Saturn's bulk compositions from different formation scenarios, and highlight the key measurements required to distinguish competing theories to shed light on giant planet formation as a common process in planetary systems with potential applications to most extrasolar systems. {\it In situ} measurements of Saturn's stratospheric and tropospheric dynamics, chemistry and cloud-forming processes will provide access to phenomena unreachable to remote sensing studies. 
Different mission architectures are envisaged, which would benefit from strong international collaborations, all based on an entry probe that would descend through Saturn's stratosphere and troposphere under parachute down to a minimum of 10 bars of atmospheric pressure. We finally discuss the science payload required on a Saturn probe to match the measurement requirements.\\ | \label{intro} Giant planets contain most of the mass and the angular momentum of our planetary system and must have played a significant role in shaping its large scale architecture and evolution, including that of the smaller, inner worlds \citep{2005Natur.435..466G}. Furthermore, the formation of the giant planets affected the timing and efficiency of volatile delivery to the Earth and other terrestrial planets \citep{2001M&PS...36..381C}. Therefore, understanding giant planet formation is essential for understanding the origin and evolution of the Earth and other potentially-habitable environments throughout our solar system. The origin of the giant planets, their influence on planetary system architectures, and the plethora of physical and chemical processes at work within their atmospheres, make them crucial destinations for future exploration. Because Jupiter and Saturn have massive envelopes essentially composed of hydrogen and helium and (possibly) a relatively small core, they are called gas giants. Meanwhile, Uranus and Neptune also contain hydrogen and helium atmospheres but, unlike Jupiter and Saturn, their H$_2$ and He mass fractions are smaller (5 to 20\%). They are called ice giants because their density is consistent with the presence of a significant fraction of ices/rocks in their interiors. Despite this apparent grouping into two classes of giant planets, the four giant planets likely exist on a continuum, each a product of the particular characteristics of their formation environment. 
Comparative planetology of the four giants in the solar system is therefore essential to reveal the potential formational, migrational, and evolutionary processes at work during the early evolution of the early solar nebula. Much of our understanding of the origin and evolution of the outer planets comes from remote sensing by necessity. However, the efficiency of this technique has limitations when used to study the bulk atmospheric composition that is crucial to the understanding of planetary origin, namely due to degeneracies between the effects of temperatures, clouds and abundances on the emergent spectra, but also due to the limited vertical resolution. In addition, many of the most common elements are locked away in a condensed phase in the upper troposphere, hiding the main volatile reservoir from the reaches of remote sensing. It is only by penetrating below the ``visible'' weather layer that we can sample the deeper troposphere where those most common elements are well mixed. A remarkable example of the superiority of {\it in situ} probe measurements is illustrated by the exploration of Jupiter, where key measurements such as the determination of the noble gases abundances and the precise measurement of the helium mixing ratio have only been possible through {\it in situ} measurements by the Galileo probe \citep{1999Natur.402..269O}. The Galileo probe measurements provided new insights into the formation of the solar system. For instance, they revealed the unexpected enrichments of Ar, Kr and Xe with respect to their solar abundances, which suggested that the planet accreted icy planetesimals formed at temperatures possibly as low as 20--30 K to allow the trapping of these noble gases. Another remarkable result was the determination of the Jovian helium abundance using a dedicated instrument aboard the Galileo probe \citep{1998JGR...10322815V} with an accuracy of 2\%. 
Such an accuracy on the He/H$_2$ ratio is impossible to derive from remote sensing, irrespective of the giant planet being considered, and yet precise knowledge of this ratio is crucial for the modelling of giant planet interiors and thermal evolution. The Voyager mission has already shown that these ratios are far from being identical, which presumably results from slight differences in their histories at different heliocentric distances. An important result also obtained by the mass spectrometer onboard the Galileo probe was the determination of the $^{14}$N/$^{15}$N ratio, which suggested that nitrogen present in Jupiter today originated from the solar nebula essentially in the form of N$_2$ \citep{2001ApJ...553L..77O}. The Galileo science payload unfortunately could not probe to pressure levels deeper than 22 bars, precluding the determination of the H$_2$O abundance at levels representative of the bulk oxygen enrichment of the planet. Furthermore, the probe descended into a region depleted in volatiles and gases by unusual ``hot spot'' meteorology \citep{1998JGR...10322791O,2004Icar..171..153W}, and therefore its measurements are unlikely to represent the bulk planetary composition. Nevertheless, the Galileo probe measurements were a giant step forward in our understanding of Jupiter. However, with only a single example of a giant planet measurement, one must wonder whether from the measured pattern of elemental and isotopic enrichments, the chemical inventory and formation processes at work in our solar system are truly understood. {\it In situ} exploration of giant planets is the only way to firmly characterize the planet compositions in the solar system. In this context, a Saturn probe is the next natural step beyond Galileo's {\it in situ} exploration of Jupiter, the remote investigation of its interior and gravity field by the JUNO mission, and the Cassini spacecraft's orbital reconnaissance of Saturn. 
{\it In situ} exploration of Saturn's atmosphere addresses two broad themes: first, the formation history of our solar system and, second, the processes at play in planetary atmospheres. Both of these themes are discussed throughout this paper. Both themes have relevance far beyond the leap in understanding gained about an individual giant planet: the stochastic and positional variances produced within the solar nebula, the depth of the zonal winds, the propagation of atmospheric waves, the formation of clouds and hazes and disequilibrium processes of photochemistry and vertical mixing are common to all planetary atmospheres, from terrestrial planets to gas and ice giants and from brown dwarfs to hot exoplanets. This paper describes the main scientific goals to be addressed by the future {\it in situ} exploration of Saturn, placing the Galileo probe exploration of Jupiter in a broader context and before the future {\it in situ} exploration of the more remote ice giants. These goals will become the primary objectives listed in the forthcoming Saturn probe proposals that we intend to submit in response to future opportunities within both ESA and NASA. Section \ref{origin} is devoted to a comparison between known elemental and isotopic compositions of Saturn and Jupiter. We describe the different formation scenarios that have been proposed to explain Jupiter's composition and discuss the key measurements at Saturn that would allow disentangling these interpretations. We also demonstrate that the {\it in situ} measurement of CO (or any other disequilibrium species that is depleted by reaction with water) at Saturn could place limits on its bulk O/H ratio. In Section \ref{atm}, we discuss the motivation for the {\it in situ} observation of the atmospheric processes (dynamics, chemistry and cloud formation) at work in Saturn's atmosphere. Section \ref{archi} is dedicated to a short description of the mission designs that can be envisaged. 
In Section \ref{inst}, we provide a description of high-level specifications for the science payload. Conclusions are given in Section \ref{cls}. | \label{cls} In this paper, we have shown that the {\it in situ} exploration of Saturn can address two major science themes: the formation history of our solar system and the processes at work in the atmospheres of giant planets. We provided a list of recommended measurements in Saturn's atmosphere that would allow distinguishing between the existing giant planet formation scenarios and the different volatile reservoirs from which the solar system bodies were assembled. Moreover, we illustrated how an entry probe would reveal new insights concerning the vertical structures of temperatures, density, chemical composition and clouds during atmospheric descent. In this context, the top level science goals of a Saturn probe mission would be the determination of: \begin{enumerate} \item the atmospheric temperature, pressure and mean molecular weight profiles; \item the abundances of cosmogenically abundant species C, N, S and O; \item the abundances of chemically inert noble gases He, Ne, Xe, Kr and Ar; \item the isotopic ratios in hydrogen, oxygen, carbon, nitrogen, He, Ne, Xe, Kr and Ar; \item the abundances of minor species delivered by vertical mixing (e.g., P, As, Ge) from the deeper troposphere, photochemical species (e.g., hydrocarbons, HCN, hydrazine and diphosphine) in the troposphere and exogenic inputs (oxygenated species) in the upper atmosphere; \item the particle optical properties, size distributions, number and mass densities, opacity, shapes and composition. \end{enumerate} \noindent Additional {\it in situ} science measurements aiming at investigating the global electric circuit on Saturn could also be considered (measurement of the Schumann resonances, determination of the vertical profile of conductivity and the spectral power of Saturn lightning at frequencies below the ionospheric cutoff, etc). 
We advocated that a Saturn mission incorporating elements of {\it in situ} exploration should form an essential element of ESA and NASA's future cornerstone missions. We described the concept of a Saturn probe as the next natural step beyond Galileo's {\it in situ} exploration of Jupiter, and the Cassini spacecraft's orbital reconnaissance of Saturn. Several mission designs have been discussed, all including a spacecraft carrier/orbiter and a probe that would derive from the KRONOS concept previously proposed to ESA \citep{2009ExA....23..947M}. International collaborations, in particular between NASA/USA and ESA/Europe, may be envisaged in the future to enable the success of a mission devoted to the {\it in situ} exploration of Saturn. | 14 | 4 | 1404.4811 
1404 | 1404.1688_arXiv.txt | Developments of this far-reaching research field are summarized from an observational point of view, mentioning important and interesting phenomena discovered recently by photometry of stellar oscillations of any kind. A special emphasis is laid on Cepheids and RR~Lyrae type variables. | \label{intr} Variable stars are astrophysical laboratories. Pulsating stars provide us with information on the internal structure of the stars and stellar evolution as testified by their position in the Hertzsprung-Russell (H-R) diagram. Hot and cool oscillating stars, and luminous and low luminosity pulsators are also found in various parts of the H-R diagram (Fig.~1). Several types of luminous pulsators are useful distance indicators via the period-luminosity ($P$-$L$) relationship. \begin{figure}[!] \centerline{\includegraphics[width=6.0cm,clip=]{szabados-jefferyfig.eps}} \caption{H-R diagram showing the location of various types of pulsating variables (Jeffery, 2008a).} \label{fig1} \end{figure} \begin{table} \footnotesize \begin{center} \caption{Classification of pulsating variable stars.} \label{pulsvartypes} \begin{tabular}{l@{\hskip2mm}l@{\hskip2mm}l@{\hskip2mm}r@{\hskip2mm}c@{\hskip2mm}l} \hline\hline \\ Type & Design. & Spectrum & Period & Amplitude & Remark$^\ast$\\ & & & & mag. & \\[0.5ex] \hline \\ Cepheids & DCEP & F-G Iab-II & 1-135\,d & 0.03-2 & \\ & DCEPS & F5-F8 Iab-II & $<$7\,d & $<$0.5 & 1OT \\ BL Boo & ACEP & A-F & 0.4-2\,d & 0.4-1.0 & anomalous Cepheid\\ W Vir & CWA & FIb & $>$8\,d & 0.3-1.2 & \\ BL Her & CWB & FII & $<$8\,d & $<$1.2 & \\ RV Tau & RV & F-G & 30-150\,d & up to 3 & \\ & RVB & F-G & 30-150\,d & up to 3 & variable mean brightness\\ RR Lyr & RRA & A-F giant & 0.3-1.2\,d & 0.4-2 & \\ & RRC & A-F giant & 0.2-0.5\,d & $<$0.8 & 1OT\\ $\delta$ Sct & DSCT & A0-F5\,III-V & 0.01-0.2\,d & 0.003-0.9 & R+NR\\ SX Phe & SXPHE & A2-F5 subdw.& 0.04-0.08\,d &$<$0.7 & Pop. 
II\\ $\gamma$ Dor & GDOR & A7-F7\,IV-V & 0.3-3\,d & $<$0.1 & NR, high-order g-mode \\ roAp & ROAP & B8-F0\,Vp & 5-20 min & 0.01 & NR p-modes\\ $\lambda$ Boo & LBOO & A-F & $<$0.1\,d & $<$0.05 & Pop.\,I, metal-poor\\ Maia & & A & & & to be confirmed\\ V361 Hya & RPHS, & sdB & 80-600\,s & 0.02-0.05 & NR, p-mode\\ & EC14026 & & & & \\ V1093 Her & PG1716, & sdB & 45-180 min & $<$0.02 & g-mode\\ & Betsy & & & & \\ DW Lyn & & subdwarf & & $<$0.05 & V1093\,Her\,+\,V361\,Hya\\ GW Vir & DOV, & HeII, CIV & 300-5000\,s & $<$0.2 & NR g-modes \\ & PG1159 & & & & \\ ZZ Cet & DAV & DAV & 30-1500\,s & 0.001-0.2 & NR g-modes\\ DQV & DQV & white dwarf & 7-18 min & $<$0.05 & hot carbon atmosphere\\ V777 Her & DBV & He lines & 100-1000\,s &$<$0.2 & NR g-modes\\[0.5ex] \hline\\ Solar-like & & F5-K1\,III-V & $<$hours & $<$0.05 & many modes\\ \hspace*{5mm}oscill. & & & & & \\[0.5ex] \hline\\ Mira & M & M, C, S IIIe & 80-1000\,d & 2.5-11 & small bolometric ampl.\\ Small ampl. & SARV & K-M\,IIIe & 10-2000\,d & $<$1.0 & \\ \hspace*{5mm}red var. & & & & & \\ Semi-regular & SR & late type I-III & 20-2300\,d & 0.04-2 & \\ & SRA & M, C, S\,III & 35-1200\,d & $<$2.5 & R overtone\\ & SRB & M, C, S\,III & 20-2300\,d & $<$2 & weak periodicity\\ & SRC & M, C, S\,I-II & 30-2000\,d & 1 & \\ & SRD & F-K\,I-III & 30-1100\,d & 0.1-4 & \\ Long-period & L & late type & & & slow \\ \hspace*{5mm}irregular & & & & & \\ & LB & K-M, C, S III & & & \\ & LC & K-M I-III & & & \\ Protoplan. & PPN & F-G I & 35-200\,d & & SG, IR excess\\ \hspace*{5mm}nebulae & & & & & \\[0.5ex] \hline\hline \end{tabular} \end{center} $^\ast$ R = radial; NR = non-radial; 1OT = first overtone; SG = supergiant. Spectrum is given for maximum brightness for large amplitude variables. \end{table} \setcounter{table}{0} \begin{table} \footnotesize \begin{center} \caption{Classification of pulsating variable stars (continued).} \begin{tabular}{l@{\hskip2mm}l@{\hskip2mm}l@{\hskip2mm}r@{\hskip2mm}c@{\hskip2mm}l} \hline\hline \\ Type & Design. 
& Spectrum & Period & Amplitude & Remark$^\ast$\\ & & & & mag. & \\[0.5ex] \hline \\ 53 Per & & O9-B5 & 1-3\,d & & NR\\ $\beta$ Cep & BCEP & O6-B6\,III-V & 0.1-0.6 & 0.01-0.3 & R + NR\\ & BCEPS & B2-B3\,IV-V & 0.02-0.04 & 0.015-0.025 & R + NR \\ SPB & SPB & B2-B9\,V & 0.4-5\,d & $<$0.5 & high radial order, \\ & & & & & low degree g-modes\\ Be & BE, LERI &Be & 0.3-3\,d & & NR (or rotational?)\\ LBV & LBV & hot SG & 30-50\,d & & NR?\\ $\alpha$ Cyg & ACYG & Bep-Aep\,Ia & 1-50\,d & ~0.1 & NR, multiperiodic\\ BX Cir & & B & ~0.1\,d & ~0.1 & H-deficient \\ PV Tel & PVTELI & B-A\,Ip & 5-30\,d & ~0.1 & He SG, R strange mode\\ & PVTELII & O-B\,I & 0.5-5\,d & & H-def. SG, NR g-mode \\ & PVTELIII& F-G\,I & 20-100 d & & H-def. SG, R?\\[0.5ex] \hline\hline \end{tabular} \end{center} \end{table} Table~1 is an overview of different types of pulsating variables. The underlying physical mechanism exciting stellar oscillations can be different for various pulsators. The General Catalogue of Variable Stars (GCVS, Samus et~al., 2009) lists 33 types and subtypes of pulsating variables, while the International Variable Star Index (VSX) at the AAVSO knows 53 different (sub)types. Another aspect of the classification is the ambiguity due to the simultaneous presence of more than one type of variability. There are numerous pulsating stars among eclipsing variables, as well as rotational variability can be superimposed on stellar oscillations. Pulsation can be excited in certain cataclysmic variables, and erratic variability is typically present in oscillating pre-main sequence stars. From the point of view of astrophysics this is favourable but encumbers the analysis and interpretation of the observational data. Time consuming photometry of pulsating variables is a realm of small telescopes. The temporal coverage (duration of the time series) is critical for studying multiperiodicity, changes in frequency content, modal amplitudes, etc. 
The accuracy of photometry is varying, it depends on the telescope aperture, detector quality, astroclimate, etc. Millimagnitude accuracy can be easily achieved with ground-based equipments, while the accuracy of photometry from space is up to micromagnitudes. Figure~2 shows an excellent sample light curve of LR~UMa, a DSCT type pulsator obtained with a 1\,m telescope (Joshi et~al., 2000). (The abbreviated designation of various types is found in the 2nd column of Table~\ref{pulsvartypes}). \begin{figure} \centerline{\includegraphics[width=8.0cm,clip=]{szabados-figprec-hd98851.eps}} \caption{A very low-noise light curve of a short-period pulsator LR~UMa (Joshi et~al., 2000).} \label{f2} \end{figure} Depending on the observer's experience and capabilities, one may choose observational targets from a wide range of amplitudes, from microvariables to large amplitude pulsators, while the range of periodicity embraces the shortest values of seconds to the longest ones, several years. In July 2013, there were 47811 variables catalogued in the GCVS, among them 8533 RR~Lyraes, 8098 Miras, 932 classical Cepheids, 762 $\delta$~Sct variables, 414 Type~II Cepheids, 209 $\beta$~Cep variables, 85 $\gamma$~Dor stars, and 80 white dwarf pulsators. A smaller number of variables known to belong to a certain type, however, does not necessarily mean that the given type of pulsating variables is less frequent, some kind of variability is not easy to discover. Moreover, massive photometric surveys, e.g., ASAS (Pojmanski, 2002), OGLE (Szyma\'nski, 2005), MACHO (Alcock et~al., 1999), WASP (Pollacco et~al., 2006), and Pan-STARRS (Burgett \& Kaiser, 2009) resulted in revealing thousands of new variables not catalogued in the GCVS. | 14 | 4 | 1404.1688 |
|
1404 | 1404.3644_arXiv.txt | { We present long term optical variability studies of bright X-ray sources in four nearby elliptical galaxies with {\it Chandra} Advanced CCD Imaging Spectrometer array (ACIS-S) and {\it Hubble Space Telescope (HST)} Advanced Camera for Surveys observations. Out of the 46 bright (X-ray counts $> 60$) sources that are in the common field of view of the {\it Chandra} and {\it HST} observations, 34 have potential optical counterparts, while the rest are optically dark. After taking into account systematic errors, estimated using the field optical sources as reference, we find that four of the X-ray sources (three in NGC1399 and one in NGC1427) have variable optical counterparts at a high significance level. The X-ray luminosities of these sources are $\sim 10^{38}$ $\rm ergs~s^{-1}$ and are also variable on similar time-scales. The optical variability implies that the optical emission is associated with the X-ray source itself rather than being the integrated light from a host globular cluster. For one source the change in optical magnitude is $> 0.3$, which is one of the highest reported for this class of X-ray sources, and this suggests that the optical variability is induced by the X-ray activity. However, the optically variable sources in NGC1399 have been reported to have blue colours ($g - z > 1$). All four sources have been detected in the infra-red (IR) by {\it Spitzer} as point sources, and their ratio of $5.8$ to $3.6 \mu m$ flux is $> 0.63$, indicating that their IR spectra are like those of Active Galactic Nuclei (AGN). While spectroscopic confirmation is required, it is likely that all four sources are background AGNs. We find none of the X-ray sources with optical/IR colours different from AGNs to be optically variable. | % \label{sect:intro} The unprecedented angular resolution of the {\it Chandra} satellite has enabled the study of X-ray point sources in nearby galaxies. 
Most of these point sources are expected to be X-ray binaries like the ones found in the Milky Way. An important result of the {\it Chandra} observations was the confirmation of Ultra-luminous X-ray sources (ULXs), discovered with { \it Einstein} observatory in the 1980s \citep{Fab89}. These are off-nuclear X-ray point sources with X-ray luminosities in the range $10^{39}-10^{41}$ $\rm ergs~s^{-1}$. The observed luminosities of ULXs exceed the Eddington limit for a $10 M_{\odot}$ black hole, which has led to a sustained debate on the nature of these sources. Since ULXs are off-nuclear sources, their masses must be $< 10^{5}M_{\odot}$ from dynamical friction arguments \citep{Kaa01}. Thus, ULXs may represent a class of Intermediate Mass Black holes (IMBHs) whose mass range ($10 M_{\odot} < M < 10^{5}M_{\odot}$) is between that of stellar mass black holes and super massive ones \citep{Mak00}. Further the nature of the sources in nearby galaxies, which are less luminous than ULX, is also not clear and it is difficult to ascertain whether they harbour neutron stars or black holes. The primary reason for these uncertainties is that unlike Galactic X-ray binaries, it is difficult to identify the companion star in the optical and obtain the binary parameters. For most X-ray sources in nearby galaxies, the associated optical emission is due to the integrated light from a host globular cluster \citep{Kim06,Kim09,Pta06,Goa02} and it is usually not possible to resolve and identify the companion star. However, these studies provide important information regarding the environment of the X-ray sources. For example, ULXs in early type galaxies are associated with red globular clusters \citep{Pta06, Ang01}. Even the non-detection of optical emission allows one to impose strong upper limit on the black hole mass for these accreting systems based on some standard assumptions \citep{Jit11}. 
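The Eddington argument above is easy to make quantitative. The sketch below uses the standard value for hydrogen accretion, $L_{\rm Edd} \approx 1.26\times10^{38}\,(M/M_\odot)$ erg s$^{-1}$; the function names are illustrative, not taken from any of the cited works.

```python
L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass (erg/s), hydrogen accretion

def eddington_luminosity(mass_msun):
    """Eddington luminosity in erg/s for an accretor of the given mass (Msun)."""
    return L_EDD_PER_MSUN * mass_msun

def min_accretor_mass(lum_erg_s):
    """Minimum mass (Msun) for a source radiating isotropically at or below L_Edd."""
    return lum_erg_s / L_EDD_PER_MSUN
```

A ULX at $10^{40}$ erg s$^{-1}$ would need roughly 80 $M_\odot$ to remain sub-Eddington, which is why luminosities of $10^{39}-10^{41}$ erg s$^{-1}$ motivate the IMBH interpretation (unless the emission is beamed or genuinely super-Eddington).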
However, a more direct inference on the nature of the system requires identification and spectral measurement of the associated optical emission. An important aspect of identifying the correct optical counterpart in a crowded field is to check for optical variability. If the optical emission is variable, it is most probably directly associated with the X-ray source and not the integrated light of stars in a globular cluster. Indeed, for low mass X-ray binaries in the Galaxy, the optical emission is variable and is in some cases correlated with the X-ray emission \citep[e.g. 4U 1636-536:][]{Shi11} while for others it is not \citep[e.g. GX 9+9:][]{Kon06}. The optical variability may be due to the orbital motion of the donor star or reprocessing of the variable X-ray emission or X-ray heating of the companion. However, typically the optical counterpart of X-ray binaries in nearby galaxies will not be resolved, especially if the source is in a globular cluster. Hence it is not expected that optical variability will be seen for them. Nevertheless, variability of optical counterparts has been measured for the bright X-ray sources in nearby galaxies. For example, the optical counterpart of NGC1313 X-2 has been identified as an O7 star at solar metallicity. The counterpart exhibits variability at $\sim$ 0.2 mag on short time scales \citep{Liu07, Gri08} and the variability may be due to varying X-ray irradiation of the donor star and a stochastically varying contribution from the accretion disk. An independent study of the same source \citep{Muc07} revealed that the optical flux of the counterpart shows variation ${\leq} 30$\% and that it may be a main-sequence star of mass $\sim$ $10-18 M_\odot$ feeding a black hole of mass $120 M_\odot$. 
The optical counterpart of Holmberg IX X-1 exhibits photometric variability of $0.136\pm0.027$ mag in the {\it HST/ACS V} band images \citep{Gri11}, although it seems to have a constant magnitude within photometric errors ($22.710\pm0.038$ and $22.680\pm0.015$) in SUBARU {\it V} band images. \cite{Tao11} have reported optical variability for three ULXs, M101 ULX-1, M81 ULX1 and NGC1313 X-2, at a magnitude difference of 0.2 mag or larger in the {\it V} band. Some of the X-ray sources in nearby galaxies could be background AGN, whose optical emission would also be expected to be variable. It is important to identify more X-ray sources that have optically variable counterparts, which can then be subjected to more detailed observational follow-ups such as spectral and/or simultaneous X-ray/optical observations. A systematic analysis of a number of galaxies to identify such sources will be crucial to understand their nature. Such an analysis requires multiple optical observations of a galaxy, a uniform scheme to identify optical counterparts of the X-ray sources and, more importantly, an estimate of the systematic uncertainties in order to avoid any spurious variability that may arise if only statistical errors are considered. In this work, we consider elliptical galaxies that are $\lesssim 20$ Mpc away, have been observed by {\it Chandra}, and have more than one {\it HST} observation in the same filter. We restrict our analysis to ellipticals since for them the continuum optical emission can be modelled and subtracted out to reveal optical point sources \citep{Jit11}. Using the field optical sources we estimate the systematic errors in the optical flux measurements and hence can report true optical variability at a high confidence level. Our aim is to study the optical counterpart variability of bright X-ray sources (X-ray counts $> 60$) whose X-ray spectra can be modelled, so that a reliable estimate of their luminosities can be obtained. 
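As an illustrative sketch of this error treatment (our own construction, not the authors' pipeline; the function names and the $3\sigma$ threshold are assumptions), a systematic error floor can be estimated from the excess scatter of presumed-constant field sources and folded into the variability test:

```python
import math

def systematic_floor(dmags, stat_errors):
    """Estimate a systematic error floor (mag) from field sources that
    are assumed constant: the quadrature excess of their observed
    epoch-to-epoch scatter over the purely statistical expectation."""
    n = len(dmags)
    rms2 = sum(d * d for d in dmags) / n          # observed variance
    stat2 = sum(s * s for s in stat_errors) / n   # statistical expectation
    return math.sqrt(max(rms2 - stat2, 0.0))

def is_variable(dmag, stat_error, sys_floor, nsigma=3.0):
    """Flag a source as variable only if its magnitude difference
    exceeds nsigma times the combined statistical + systematic error."""
    total = math.sqrt(stat_error ** 2 + sys_floor ** 2)
    return abs(dmag) > nsigma * total
```

With only statistical errors, small instrumental offsets between epochs would be reported as spurious variability; the floor guards against that.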
In the next section, we describe the selection of the sample galaxies. \S 3 and \S 4 describe the X-ray analysis and the method to identify the optical counterparts and to compute the photometry with systematic errors. We discuss the results in \S 5. | \label{sect:discussion} In this work we have studied the long term X-ray and optical variability of X-ray sources in four nearby elliptical galaxies. For the 46 sources in the sample, we have fitted their X-ray spectra using an absorbed power-law or black body model for the two {\it Chandra} observations and found that 24 of them show long term X-ray variability. For 34 sources, we have identified optical counterparts. After estimating the systematic error on the photometric magnitude, we find that four of the sources clearly exhibit long term optical variation. Since the optical counterpart is varying, it cannot be the integrated light of stars in a globular cluster. Thus, one may expect that the optical variability is induced by the X-ray source. If that is so, these sources are important candidates for further study. The optically variable X-ray sources could be background Active Galactic Nuclei (AGN). The reported optical colours ({\it g - z}) for the sources in NGC1399 \citep{Sha13} are tabulated in Table \ref{IR} and they reveal that the objects are blue, one of them being bluer than blue globular clusters, $1.3 < g-z < 1.9$ \citep{Pao11}. Indeed, the optically variable sources (Sources 1 and 2 in NGC1399) were identified as possible contaminants in an earlier analysis \citep{Kun07}. The analysis of {\it HST/WFPC} data reveals that these sources are bluer than $B-I=1.5$ and hence are not globular clusters. \cite{Bla12} studied the globular cluster systems in NGC1399 using the {\it HST/ACS g, V, I, z} and {\it H} bands. In their study, the sources with $19.5 < I_{814} < 23.5$ and $0.5 < g_{475} - I_{814} < 1.6$ are classified as globular clusters. 
The optically variable sources in NGC1399 (Sources 1 and 2) again do not satisfy their criteria. This may indicate that they are background AGN, and indeed their IR colours also support this interpretation. Studies have shown that AGN have flux ratios $> 0.63$ in the $5.8$ and $3.6 \mu m$ bands, i.e. $F_{5.8}/F_{3.6} > 0.63$ \citep{Pol06,Lac04}. \citet{Sha13} have looked for IR counterparts of X-ray sources in NGC1399 using {\it Spitzer} data. Their quoted IR fluxes and ratios are tabulated in Table \ref{IR}. All four sources have IR flux ratios $ \geq 0.63$, indicating that they may be background AGN. Unfortunately these sources are not in the field of view of the {\it Spitzer} $4.5$ and $8.0 \mu m$ images, which would have provided more information on their nature. We do not find evidence for any optical counterpart to disappear or for flux changes by an order of magnitude. Such variations would be expected if the X-ray emission were due to a violent transient event like a very bright nova explosion or the tidal disruption of a white dwarf by a black hole, since such events should show dramatic variation in both the X-ray and optical flux. While there are several X-ray sources which are not detected in the other {\it Chandra} observation, none of them exhibit dramatic variability in the optical. For example, as mentioned earlier, \citet{Irw10} have argued that the lack of H$\alpha$ and H$\beta$ in the spectrum of a ULX in NGC1399 (CXOJ033831.8-352604) may indicate the tidal disruption of a white dwarf by a black hole. However, here we find that neither the X-ray nor the optical flux shows any long term variation. Clearly, conclusive evidence on the nature of these sources can be obtained only by studying their optical spectra and confirming by emission line studies whether a source is a background AGN or not. Such studies will also provide clear information about the origin of the optical emission. 
Since this would require large telescopes in excellent seeing conditions, it is important to choose good potential candidates such as the optically variable sources identified here. A positive identification of an optically variable source as not being a background AGN would be a crucial step towards understanding these enigmatic sources. \normalem | 14 | 4 | 1404.3644 
1404 | 1404.3472_arXiv.txt | We report on the results from our search for Wide-field Infrared Survey Explorer detections of the Galactic low-mass X-ray binaries. Among the 187 binaries catalogued in Liu et al. (2007), we find 13 counterparts and two candidate counterparts. For the 13 counterparts, two (4U~0614+091 and GX~339$-$4) have already been confirmed by previous studies to have a jet, and one (GRS~1915+105) to have a candidate circumbinary disk, from which the detected infrared emission arose. Having collected the broad-band optical and near-infrared data in the literature and constructed flux density spectra for the other 10 binaries, we identify that three (A0620$-$00, XTE J1118+480, and GX 1+4) are candidate circumbinary disk systems, four (Cen X-4, 4U 1700+24, 3A 1954+319, and Cyg X-2) have thermal emission from their companion stars, and three (Sco X-1, Her X-1, and Swift J1753.5$-$0127) are peculiar systems with the origin of their infrared emission rather uncertain. We discuss the results and the WISE counterparts' brightness distribution among the known LMXBs, and suggest that more than half of the LMXBs would have a jet, a circumbinary disk, or both. | X-ray binaries (XRBs), containing either an accreting neutron star or black hole, constitute a large fraction of the bright X-ray sources in the Galaxy. When they have $\lesssim 1~M_{\sun}$ low-mass companions, the companion overfills its Roche lobe, and mass transfer from the companion to the central compact star occurs via an accretion disk. Such binaries are further categorized as low-mass X-ray binaries (LMXBs). Besides their prominent X-ray emission, LMXBs are generally observable at optical wavelengths, as the accretion disks and/or companion stars radiate sufficiently bright thermal emission. In addition, LMXBs are known to be able to launch a jet, producing synchrotron emission detectable from optical/infrared to radio wavelengths (e.g., \citealt{fen06, rus+06, rus+07, gal10}). 
Studies of jets help understand flux variabilities seen in LMXBs, constrain the fractions of accretion energy channeled to different emission components, and thus allow us to draw a full picture of the detailed physical processes occurring in LMXBs. The environments in which XRBs are located may be dusty. The supernova explosions that produced the compact stars are thought to have a fallback process \citep{che89}, due to the impact of the reverse shock wave with the outer stellar envelope. As a result, part of the material ejected during a supernova explosion may fall back onto the newly born compact star. During the evolution of an LMXB, a substantial amount of mass is lost from the companion (e.g., \citealt{prp02}). Observational evidence as well as theoretical studies show that disk winds or outflows are probably ubiquitous and may be massive (e.g., \citealt{nei13, ybw12} and references therein). If a small fraction of the material from any of these processes is captured to remain around a binary, it is conceivable that a circumbinary disk might have formed from the material, acting to intercept part of the emission (X-rays from the central compact star and optical light from the companion and accretion disk) from the binary system and re-radiate the energy at infrared (IR) wavelengths. Indeed, \citet{mm06} observed four LMXBs with the \textit{Spitzer} Space Telescope and found that two of them might harbor a circumbinary dust disk. Moreover, the \textit{Spitzer} detection of dust emission features in a so-called microquasar (see, e.g., \citealt{gal10}) clearly indicates the existence of dust material around the LMXB \citep{rah+10}. The Wide-field Infrared Survey Explorer (WISE), launched in 2009 December, mapped the whole sky in its bands at 3.4, 4.6, 12, and 22 $\mu$m (called W1, W2, W3, and W4, respectively) in 2010 \citep{wri+10}. 
The FWHMs of the averaged point spread function for WISE imaging at the four bands were 6\farcs1, 6\farcs4, 6\farcs5, and 12\farcs0, and the sensitivities (5$\sigma$) generally reached were 0.08, 0.11, 1, and 6 mJy. In the WISE all-sky images and source catalogue released in 2012 March, measurements of over 563 million objects are provided. Therefore WISE imaging has provided sensitive measurements of many different types of celestial objects at the IR wavelengths (see \citealt{wri+10} for details). Using the WISE data, we have carried out searches for the counterparts to the LMXBs catalogued in \citet{lvv07}, for the purpose of identifying sources among them with either jets or circumbinary debris disks. In this paper we report the results from our searches. \begin{figure} \centering \includegraphics[scale=0.58]{f1a.eps} \includegraphics[scale=0.58]{f1b.eps} \includegraphics[scale=0.58]{f1c.eps} \caption{Flux density spectra of A0620$-$00, XTE J1118+480, and GX~1+4. The squares and diamonds are optical/near-IR and \textit{Spitzer} data points, respectively, and crosses are the WISE data points. The dotted, dash-dotted, and dash--triple-dotted curves represent emission from the companions, accretion disks, and circumbinary disks, respectively. The dashed curves (blue one for GX 1+4) represent the total emission from all the components. For GX 1+4, the long dashed curve represents emission from a dust shell, and a red dashed curve is the total emission from the companion and the dust shell. \label{fig:bd}} \end{figure} | \label{sec:dis} Searching through WISE data, we have found 13 counterparts and 2 candidate counterparts among 187 LMXBs catalogued in \citet{lvv07}. By collecting published results and/or analyzing the constructed broad-band spectra for the 13 counterparts, we have identified the origin of the WISE-detected emission. 
Four of them probably have a candidate circumbinary disk, two harbor a jet, four have thermal emission from their companion stars, and three are peculiar systems with the origin of their IR emission more or less uncertain. It should be noted that these LMXBs are highly variable sources, due to accretion and/or related activities. For the two jet systems, simultaneous multiwavelength observations have been well carried out (e.g., \citealt{mig+10}), but for the other systems the broad-band data used and analyzed in this work are from different epochs, and therefore our studies of the LMXBs are qualitative at most. In order to quantitatively constrain the properties of the candidate circumbinary disk systems or to identify the origin of the IR emission detected from the three peculiar systems, simultaneous observations from X-ray through optical/IR to radio frequencies should be carried out. \begin{figure} \begin{center} \includegraphics[scale=0.58]{f5.eps} \caption{$V$ magnitude distribution of 61 LMXBs that have reported $V$ measurements. The filled histogram marks that of the 12 LMXBs with the WISE counterparts. GRS 1915+105 is not included due to high extinction. \label{fig:vdis}} \end{center} \end{figure} For the candidate circumbinary disk systems, \citet{gal+07} have proposed the jet scenario to explain the excess IR emission seen in A0620$-$00 and XTE J1118+480. Given the ubiquity of jets in LMXBs \citep{fen06}, the jet scenario is also plausible. Detection of dust signatures such as the PAH emission features seen in GRS~1915+105 is needed in order to verify the presence of a dust disk. On the other hand, the case of GRS~1915+105, a well-known microquasar, has shown that an LMXB can harbor both a circumbinary disk and a jet. Thus the two binaries could also have both. 
For GX~1+4, although we cannot determine whether the excess emission arises from a dust shell or a circumbinary disk, the WISE detection of the excesses has revealed another interesting aspect of this binary: the giant-star companion's significant mass loss is also observable at IR wavelengths. Finally, it is interesting to check the detectability of the LMXBs by an IR survey like WISE. In Figure~\ref{fig:vdis}, we show the numbers of the LMXBs as a function of their $V$ magnitude reported in \citet{lvv07}. There are in total 61 LMXBs that have a reported $V$ magnitude value. The distribution of the 12 LMXBs with WISE counterparts, not including GRS 1915+105 due to its extremely high extinction, is shown as the filled histogram. It is not surprising that the WISE-detected sources are among the brightest. The two exceptions are at 18--19 mag, 4U 0614+091 and GX 1+4, the former due to the rising spectrum of its jet \citep{mig+06} and the latter due to high extinction (Table~\ref{tab:prop}). From the statistics based on this small sample, while the bias towards the intrinsically bright X-ray sources (such as Sco X-1 and Her X-1) exists, we may suspect that more than half of the LMXBs would have non-stellar IR emission, due to the presence of a jet, a circumbinary debris disk, or both. | 14 | 4 | 1404.3472 
1404 | 1404.6127_arXiv.txt | We show that a scalar and a fermion charged under a global $U(1)$ symmetry can not only explain the existence and abundance of dark matter (DM) and dark radiation (DR), but also imbue DM with improved scattering properties at galactic scales, while remaining consistent with all other observations. Delayed DM-DR kinetic decoupling eases the \emph{missing satellites} problem, while scalar-mediated self-interactions of DM ease the \emph{cusp vs. core} and \emph{too big to fail} problems. In this scenario, DM is expected to be pseudo-Dirac and have a mass $100\,{\rm keV}\lesssim m_\chi\lesssim 10\,{\rm GeV}$. The predicted DR may be measurable using the primordial elemental abundances from big bang nucleosynthesis (BBN), and using the cosmic microwave background (CMB). | 14 | 4 | 1404.6127 |
||
1404 | 1404.2018_arXiv.txt | It has recently been shown that dark-matter annihilation to bottom quarks provides a good fit to the galactic-center gamma-ray excess identified in the Fermi-LAT data. In the favored dark matter mass range $m\sim 30-40$ GeV, achieving the best-fit annihilation rate $ \sigma v \sim 5\times 10^{-26} $ cm$^{3}$ s$^{-1}$ with perturbative couplings requires a sub-TeV mediator particle that interacts with both dark matter and bottom quarks. In this paper, we consider the minimal viable scenarios in which a Standard Model singlet mediates $s$-channel interactions {\it only} between dark matter and bottom quarks, focusing on axial-vector, vector, and pseudoscalar couplings. Using simulations that include on-shell mediator production, we show that existing sbottom searches currently offer the strongest sensitivity over a large region of the favored parameter space explaining the gamma-ray excess, particularly for axial-vector interactions. The 13 TeV LHC will be even more sensitive; however, it may not be sufficient to fully cover the favored parameter space, and the pseudoscalar scenario will remain unconstrained by these searches. We also find that direct-detection constraints, induced through loops of bottom quarks, complement collider bounds to disfavor the vector-current interaction when the mediator is heavier than twice the dark matter mass. We also present some simple models that generate pseudoscalar-mediated annihilation predominantly to bottom quarks. | Although dark matter (DM) constitutes roughly 85\% of the matter in our universe, its identity and interactions are currently unknown \cite{Beringer:1900zz}. If DM annihilates to visible states, existing space-based telescopes may be sensitive to the flux of annihilation byproducts arising from regions of high DM density, including the galactic center (GC). 
Several groups have confirmed a statistically significant excess in the Fermi-LAT gamma-ray spectrum \cite{Hooper:2010mq,Boyarsky:2010dr,Hooper:2011ti,Abazajian:2012pn, Hooper:2012sr,Gordon:2013vta,Abazajian:2014fta,Daylan:2014rsa, Huang:2013pda, Huang:2013apa}, originally identified in \cite{Goodenough:2009gk}. The excess is largely confined to an angular size of $\lsim 10^\circ$ with respect to the GC, exhibits spherical symmetry, and is uncorrelated with the galactic disk or the Fermi bubbles \cite{Daylan:2014rsa}. While this excess may still be astrophysical in origin, potentially due to an unusual population of millisecond pulsars \cite{Gordon:2013vta}, its energy spectrum and spatial distribution are well modeled by a Navarro-Frenk-White profile \cite{Navarro:1996gj} of dark matter particles $\chi \bar \chi$ annihilating to $b \bar b$ with mass and cross section \cite{Abazajian:2014fta} \be \label{eq:rates} \langle \sigma v \rangle &=& (5.1 \pm 2.4) \times 10^{-26}~ \cm^3 \s^{-1}~~, \\ m_\chi &=& 39.4~(^{+3.7}_{-2.9} {~ \rm stat.}) (\pm 7.9 {\rm ~sys.})~ \gev~~, \ee which are compatible with a DM abundance from thermal freeze-out. Recent work has presented the collider and direct-detection constraints on this interpretation assuming flavor-universal and mass-proportional couplings to SM fermions \cite{Alves:2014yha, Berlin:2014tja}; these analyses apply collider bounds on DM production assuming a contact interaction between dark and visible matter. The analyses in \cite{DiFranzo:2013vra,Berlin:2014tja,AgrawalBatellLinHooper} also study simplified models of DM annihilation mediated by color-charged $t$-channel mediators. For perturbative interactions, Eq.~(\ref{eq:rates}) implies that the mediator mass is below a TeV, so it can be produced on-shell at the Large Hadron Collider (LHC) and decay to distinctive final states with a mixture of $b$-jets and missing energy ($\displaystyle{\not}{E}_T$). 
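To see the sub-TeV statement parametrically (our own schematic estimate, dropping Lorentz structure and $\mathcal{O}(1)$ factors), the non-resonant $s$-channel rate scales as

```latex
\langle \sigma v \rangle \sim \frac{g_\chi^{2}\, g_b^{2}\, m_\chi^{2}}{m_{\rm med}^{4}}
\quad\Longrightarrow\quad
m_{\rm med} \sim \sqrt{g_\chi g_b}
\left(\frac{m_\chi^{2}}{\langle \sigma v \rangle}\right)^{1/4}
\approx 800\ {\rm GeV}\times\sqrt{g_\chi g_b}\,,
```

using $m_\chi \simeq 40$ GeV and $\langle \sigma v \rangle \simeq 5\times10^{-26}\ {\rm cm^{3}\,s^{-1}} \simeq 4\times10^{-9}\ {\rm GeV}^{-2}$; couplings of order unity thus keep $m_{\rm med}$ at or below the TeV scale.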
At direct-detection experiments, this mediator can also be integrated out to induce dark matter scattering through loops of $b$ quarks that exchange photons or gluons with nuclei. Up to differences in Lorentz structure, these processes are generic predictions of any model that explains the Fermi anomaly; however, for light mediators ($<2m_\chi$), it is possible to evade collider searches \cite{Boehm:2014hva}. In this paper, we study the scenario with an $s$-channel mediator that predominantly couples to $b$ quarks and focus on the regime in which the mediator is $\gsim 100$ GeV and can decay to pairs of DM particles. The mediator can be produced in processes involving $b$ quarks, and its decays yield final states with $b$ jets and/or missing energy. We extract constraints from LHC searches for new physics in the $b \bar b + {\displaystyle{\not}{E}_T}$ final state and explore the sensitivity of a proposed mono-$b+\displaystyle{\not}{E}_T$ analysis \cite{Lin:2013sca}. We find that large regions of favored parameter space are excluded by existing 8 TeV sbottom searches, whose sensitivity is projected to improve at 13 TeV. The mono-$b$ analysis is expected to be comparable at 8 TeV and set stronger constraints at 13 TeV. We also clarify the LUX limits \cite{Akerib:2013tjd} on scattering through loops of $b$ quarks and find strong bounds on the parameter space of vector mediators that explain the Fermi excess. The organization of the paper is as follows: in section \ref{sec:general} we discuss a set of possible minimal interactions that can explain the GC excess. In section \ref{sec:ddres}, we consider direct-detection, resonance, and Higgs search constraints on these scenarios. In section \ref{sec:LHC}, we show the constraints on these DM interpretations from sbottom LHC searches, which allow for a possible independent, complementary confirmation of the GC excess. We also estimate the reach of a mono-$b$ search at 8 TeV, and extend our results for both searches to 13 TeV. 
In section \ref{sec:concrete}, we outline concrete models that generate pseudoscalar-mediated annihilation, which is the least constrained of all possible operators that can explain the gamma-ray anomaly. | \label{sec:conclusion} In this paper, we have studied the direct-detection and collider constraints on SM singlet particles that mediate $s$-channel interactions between DM and $b$ quarks, assuming the mediator can decay to DM particles. For simplicity, we have emphasized only parity-conserving interactions between dark and visible sectors, which restricts the class of operators whose annihilation rate is unsuppressed by powers of relative velocity. This is the minimal extension to the SM that suffices to explain the galactic-center gamma-ray excess identified in the Fermi-LAT data. Our main results are as follows: \begin{itemize} \item Direct-detection results from LUX disfavor a vector-vector interaction $(\bar \chi \gamma^\mu \chi)(\bar b \gamma_\mu b)$ that induces DM scattering off detector nuclei only through a $b$-quark loop; the $m_V \gsim 300$ GeV range is ruled out. \item Using a full collider simulation away from the contact-operator limit, we find that {\it existing} LHC sbottom searches at $\sqrt{s} = 8$ TeV strongly disfavor the axial-vector interaction $(\bar \chi \gamma^\mu \gamma_5 \chi)(\bar b \gamma_\mu \gamma_5 b)$ for most combinations of perturbative couplings. While these searches have been used to constrain $t$-channel mediators that carry SM color charge (e.g. sbottoms) \cite{Berlin:2014tja}, this is the first work to highlight their sensitivity to uncolored $s$-channel mediators produced only through the interaction that also yields dark matter annihilation. We also find these searches to be complementary to proposed mono-$b$ + missing-energy searches \cite{Lin:2013sca}, and present 13 TeV projections for both. 
\item The favored region for the pseudoscalar interaction $(\bar \chi \gamma^5 \chi)(\bar b \gamma_5 b)$ is largely safe from both LHC and direct-detection bounds. Collider searches at 13 TeV are not sensitive to the couplings that explain the Fermi excess. \end{itemize} In light of the strong constraints on the vector and axial-vector scenarios, we also considered two simple, renormalizable models that give rise to pseudoscalar-mediated $\chi \bar \chi \to b \bar b$ annihilation. One realization involves a two-Higgs doublet model in which one doublet couples only to down-type quarks and mixes predominantly with a scalar that couples to DM. The other involves a DM-coupled pseudoscalar and multiple flavors of vectorlike quarks. Integrating out the vectorlike states yields an effective interaction between $b$ quarks, DM, and the pseudoscalar that parametrically depends on the ratio of the Higgs VEV to the vectorlike mass. Both models generically feature a suppressed pseudoscalar-$b$ quark coupling. \bigskip | 14 | 4 | 1404.2018 
1404 | 1404.5915_arXiv.txt | Short baseline neutrino oscillation experiments have shown hints of the existence of additional sterile neutrinos in the ${\rm eV}$ mass range. Such sterile neutrinos are incompatible with cosmology because they suppress structure formation unless they can be prevented from thermalising in the early Universe or removed by subsequent decay or annihilation. Here we present a novel scenario in which both sterile neutrinos and dark matter are coupled to a new, light pseudoscalar. This can prevent the thermalisation of sterile neutrinos and make dark matter sufficiently self-interacting to have an impact on galactic dynamics and possibly resolve some of the known problems with the standard cold dark matter scenario. Even more importantly, it leads to a strongly self-interacting plasma of sterile neutrinos and pseudoscalars at late times and provides an excellent fit to CMB data. The usual cosmological neutrino mass problem is avoided by sterile neutrino annihilation to pseudoscalars. The preferred value of $H_0$ is substantially higher than in standard $\Lambda$CDM and in much better agreement with local measurements. | 14 | 4 | 1404.5915 
||
1404 | 1404.2704_arXiv.txt | In this paper we present time series photometry of 104 variable stars in the cluster region NGC 1893. The association of the present variable candidates to the cluster NGC 1893 has been determined by using $(U-B)/(B-V)$ and $(J-H)/(H-K)$ two colour diagrams, and $V/(V-I)$ colour magnitude diagram. Forty five stars are found to be main-sequence variables and these could be B-type variable stars associated with the cluster. We classified these objects as $\beta$ Cep, slowly pulsating B stars and new class variables as discussed by Mowlavi et al. (2013). These variable candidates show $\sim$0.005 to $\sim$0.02 mag brightness variations with periods of $<$ 1.0 d. Seventeen new class variables are located in the $H-R$ diagram between the slowly pulsating B stars and $\delta$ Scuti variables. Pulsation could be one of the causes for periodic brightness variations in these stars. The X-ray emission of present main-sequence variables associated with the cluster lies in the saturated region of X-ray luminosity versus period diagram and follows the general trend by Pizzolato et al. (2003). | The aim of the present study is to analyze the light curves of those stars which lie in the upper part of the main-sequence (MS) in the colour-magnitude diagram (CMD) of NGC 1893. The upper part of the MS consists of stars of spectral type O to A. The brightness variation in OB supergiants, early B-type stars, Be stars, mid to late B-type stars occurs mostly due to the pulsations (Stankov \& Handler 2005; Kiriakidis et al. 1992; Moskalik \& Dziembowski 1992). Pulsating variable stars expand and contract in a repeating cycle of size changes. The different types of pulsating variables are distinguished by their periods of pulsation and the shapes of their light curves. These could be $\beta$ Cep, slowly pulsating B (SPB), $\delta$ Scuti stars etc. 
The $\beta$ Cep stars are pulsating MS variables, found to lie above the upper MS in the $H-R$ diagram, with early B spectral types (Handler \& Meingast 2011). They have periods and amplitudes in the ranges 0.1 - 0.6 d and 0.01 to 0.3 mag, respectively. The SPB stars lie just below the instability strip of the $\beta$ Cep variables (Waelkens 1991) and are nearly perfectly confined to the main-sequence band (e.g., De Cat et al. 2004). The SPB stars are less massive (3-7 $M_{\odot}$) than the $\beta$ Cep stars (8-18 $M_{\odot}$). The effective temperatures of known SPB stars lie in the range of 10000 to 20000 K. The well-known $\kappa$ mechanism (Dziembowski et al. 1993; Gautschy \& Saio 1993) is the reason for the periodic brightness variation in these stars. The SPB stars are slow pulsators with periods of more than 0.5 d. The theoretical instability strip of SPB stars overlaps with the instability strip of $\beta$ Cep stars. Waelkens et al. (1998) classified a large number of B-type stars as new SPB stars using data from the Hipparcos mission. Another class of stars populating the B-type MS are the Be stars. They are defined as non-supergiant B stars with one or more Balmer lines in emission. Classical Be stars are physically understood as rapidly rotating B-type stars with line emission. As pulsating Be stars occupy the same region of the $H-R$ diagram as $\beta$ Cep and SPB stars, it is generally assumed that pulsations in Be stars have the same origin as in $\beta$ Cep and SPB stars (Diago et al. 2008). Another group of pulsating variables is the $\gamma$ Doradus stars (period: 0.3-1.0 d). They are found to be located below the instability strip of the $\delta$ Scuti stars, and the instability strips of the two classes overlap. 
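The period ranges quoted above overlap considerably. As a purely illustrative sketch (not part of the original study; the dictionary keys and the upper bound adopted for SPB periods are our own choices), a coarse period-only pre-classification could look like:

```python
# Period ranges in days, as quoted in the text; the SPB upper bound is
# illustrative, since the text only states "more than 0.5 d".
PERIOD_RANGES = {
    "beta Cep":  (0.1, 0.6),
    "SPB":       (0.5, 10.0),
    "gamma Dor": (0.3, 1.0),
}

def candidate_classes(period_days):
    """Return every class whose quoted period range contains the
    observed period; overlaps must be broken with additional data
    (amplitude, colours, position in the H-R diagram)."""
    return [name for name, (lo, hi) in PERIOD_RANGES.items()
            if lo <= period_days <= hi]
```

A 0.55 d period, for instance, is consistent with all three classes, which is why colour-magnitude and two colour diagrams are needed for the final classification.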
The $\delta$ Scuti stars (short period variables with periods in the range of 0.03 to 0.3 d) are part of the classical instability strip where the Cepheids are found. The Cepheids are radially pulsating, high luminosity (classes Ib-II) variables with periods in the range of 1-135 days and amplitudes from several hundredths to $\sim $2 mag in $V$ (the amplitudes are greater in the $B$ band). The spectral type of these objects at maximum light is F, whereas at minimum the spectral types are G-K; the longer the period of light variation, the later is the spectral type. In addition to the above, we would like to give a brief description of previous studies of the variability of B-type stars. The study of pulsating B-stars of the same age and chemical composition in young open clusters provides a basis for interpreting their variability (Handler et al. 2008; Majewska et al. 2008; Michalska et al. 2009; Handler \& Meingast 2011; Jerzykiewicz et al. 2011; Saesen et al. 2013; Balona et al. 1997; Gruber et al. 2012). Saesen et al. (2013), using photometry of the B-type stars in NGC 884 combined with their recent spectroscopic observations, offer an interesting and different approach to advancing the understanding of these young massive objects. Saesen et al. (2010) presented differential time-resolved multi-colour CCD photometry of the cluster NGC 884 that led to the identification of 36 multi-periodic and 39 mono-periodic B-stars, 19 multi-periodic and 24 mono-periodic A- and F-stars, and 20 multi-periodic and 20 mono-periodic variable stars of unknown nature. Saio et al. (2006) used the MOST (Microvariability and Oscillations of Stars) satellite to detect variability in the supergiant star HD 163899 (B2 Ib/II), finding 48 frequencies with amplitudes of a few millimagnitudes (mmag) and less; the frequency range is similar to that of g- and p-mode pulsations. Balona et al. (2011) presented Kepler observations of variability in B-type stars. 
They presented the light curves of 48 B-type stars and find no evidence for pulsating stars between the cool edge of the SPB and the hot edge of the $\delta$ Scuti instability strips. Recently, McNamara et al. (2012) analyzed the light curves of 252 B-star candidates in the Kepler data base to further characterize B-star variability and to increase the sample of variable B stars for future study. They classified stars as either constant light emitters, $\beta$ Cep stars, SPB stars, hybrid pulsators, binaries, stars whose light curves are dominated by rotation, hot subdwarfs, or white dwarfs. Mowlavi et al. (2013) found in the open cluster NGC 3766 a large population of new variable stars between the red end of the SPB stars and the blue end of the $\delta$ Scuti stars, where no pulsations are expected to occur on the basis of standard models. They argued that pulsation could be one of the reasons for the brightness variations in these stars. Yang et al. (2013) also presented CCD time series photometric observations of the stars in the open clusters NGC 7209, NGC 1582 and Dolidize 18, and found only one star which could be a B-type pulsating star. Jerzykiewicz et al. (2003) presented results of a search for variable stars in the field of the young open cluster NGC 2169 and found two $\beta$ Cep stars as well as stars of other types. Diago et al. (2008) have also detected absorption-line B and Be stars in the SMC showing short-period variability. Briquet et al. (2001) studied the B-type star HD 131120 and found that this star is monoperiodic with a period of 1.569 d. They interpreted the variability of this star in terms of a non-radial g-mode pulsation model as well as a rotational modulation model, and found that the rotational modulation model was able to explain the observed line profile variations of the star. Luo et al. 
(2012) carried out $BV$ CCD photometric observations of NGC 7654, a young open cluster located in the Cassiopeia constellation, to search for variable stars and detected 18 SPB stars. They found that multi-mode pulsation is more common in the upper part of the MS and that g-mode MS pulsating variables probably follow a common period-luminosity relation. In the light of the above discussion, we aim to search for variable stars in the upper part of the MS of NGC 1893. NGC 1893 is a young open cluster which is associated with nebulae and obscured by dust clouds. Detailed studies of NGC 1893 on the basis of photometric data have been carried out by Sharma et al. (2007) and Pandey et al. (2013). Marco \& Negueruela (2002a) found Herbig Ae/Be stars in the vicinity of O-type stars and B-type MS stars of later spectral type. On the basis of a spectroscopic survey in the region of NGC 1893, Marco \& Negueruela (2002b) suggest that both Herbig Be stars and classical Be stars are present in the cluster. Zhang et al. (2008) and Lata et al. (2012) identified a few B-type variable stars in NGC 1893. These previous studies show that NGC 1893 is one of the richest clusters in which to study variable stars. This paper is a continuation of our efforts to study the MS variable stars in young clusters. In our earlier papers (Lata et al. 2011, 2012), we presented time-series photometry of PMS variable candidates. Observations of NGC 1893 in the $V$ band were carried out on 16 nights from December 2007 to January 2013 in order to identify and characterize the variable stars in the NGC 1893 region. In Section 2 we describe the observations and the data reduction procedure. In Section 3 we discuss the cluster membership of the detected variables using the $(U-B)/(B-V)$ and $(J-H)/(H-K)$ two-colour diagrams (TCDs) and the $V/(V-I)$ CMD. Section 4 deals with the period determination of the variables. Section 5 describes the luminosity and temperature of the stars.
The variability characteristics of the stars are discussed in Section 6. We discuss the X-ray luminosity and period of the variables in Section 7. Finally, we summarise our results in Section 8. \begin{figure*} \includegraphics[width=17cm]{ds9_ag_ref.ps} \caption{The $18\times18$ arcmin$^2$ image of NGC 1893 taken from the DSS-R. The region observed with the 104-cm telescope is shown by a rectangle. The variable candidates detected in the present work are encircled and labeled with numbers.} \end{figure*} \setcounter{figure}{1} \begin{figure*} \includegraphics[width=18.cm, height=12cm]{real_ref_sam.ps} \caption{Sample light curves of a few variable stars identified in the present work. $\Delta m$ represents the differential magnitude in the sense variable minus comparison star. The complete figure is available online.} \end{figure*} \begin{table} \caption{Log of the observations. N and Exp. represent the number of frames obtained and the exposure time, respectively. \label{tab:obsLog}} \begin{tabular}{lclcc} \hline S. No.&Date of &Object&{\it V} &{\it I} \\ &observations& &(N$\times$Exp.)&(N$\times$Exp.)
\\ \hline 1 & 05 Dec 2007 & NGC 1893&3$\times$40s &- \\ 2 & 08 Dec 2007 & NGC 1893&3$\times$50s &-\\ 3 & 07 Jan 2008 & NGC 1893&2$\times$40s &-\\ 4 & 10 Jan 2008 & NGC 1893&3$\times$50s &-\\ 5 & 12 Jan 2008 & NGC 1893&80$\times$50s &-\\ 6 & 14 Jan 2008 & NGC 1893&70$\times$40s &-\\ 7 & 29 Oct 2008 & NGC 1893&97$\times$50s &-\\ 8 & 21 Nov 2008 & NGC 1893&137$\times$50s &2$\times$50s\\ 9 & 27 Jan 2009 & NGC 1893&5$\times$50s &-\\ 10& 28 Jan 2009 & NGC 1893&5$\times$50s &-\\ 11& 19 Feb 2009 & NGC 1893&5$\times$50s &-\\ 12& 20 Feb 2009 & NGC 1893&3$\times$50s &5$\times$50s\\ 13& 20 Feb 2009 & SA 98 &5$\times$90s &5$\times$60s\\ 14& 31 Oct 2010 & NGC 1893&3$\times$50s &-\\ 15& 22 Dec 2012 & NGC 1893&323$\times$50s &-\\ 16& 05 Jan 2013 & NGC 1893&80$\times$50s &-\\ \hline \end{tabular} \end{table} \setcounter{figure}{2} \begin{figure} \includegraphics[width=9cm]{ubv_ms_pan.ps} \caption{$U-B/B-V$ two-colour diagram of the variable stars. The $UBV$ data have been taken from Sharma et al. (2007) and Massey et al. (1995). The solid curve represents the ZAMS of Girardi et al. (2002), shifted along a reddening vector of slope 0.72 for $E(B-V)_{min}$= 0.4 mag and $E(B-V)_{max}$= 0.7 mag. The dashed curve represents the ZAMS of Girardi et al. (2002) for the foreground field population having $E(B-V)$= 0.30 mag. The labeled stars are discussed in Section 6.} \end{figure}

The paper presents the light curves of 104 variable candidates in the young open cluster NGC 1893. Among the 104 variable candidates, 45 stars could be MS variables. The periods of the MS variables range from $\sim$0.15 to $\sim$0.6 d, with brightness variations of $\lesssim$ 0.02 mag. We classified 3 stars as $\beta$ Cep, 25 stars as SPB stars and 17 stars as new-class variables (cf. Mowlavi et al. 2013) on the basis of their location in the $H-R$ diagram. We also found 16 variables which could be of PMS nature. Additionally, there are 43 stars which might belong to the field star population.
The correlation between X-ray luminosity ($L_{X}$) and period for the MS stars follows the general trend given by Pizzolato et al. (2003).

(arXiv:1404.2704)
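The period determination referred to above rests on searching unevenly sampled light curves for significant periodicities. Here is a minimal sketch, assuming a classic Lomb-Scargle periodogram (the excerpt does not state the authors' actual method) applied to an entirely synthetic light curve whose amplitude and period are typical of the variables discussed:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram (unnormalised) for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # phase offset tau makes the periodogram independent of the time origin
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# synthetic V-band light curve: period 0.3 d, 10 mmag amplitude, uneven sampling
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 20.0, 400))     # observation times in days
y = 0.010 * np.sin(2 * np.pi * t / 0.3) + rng.normal(0.0, 0.002, t.size)

freqs = np.linspace(1.0, 10.0, 20000)        # trial frequencies in cycles/day
best_period = 1.0 / freqs[np.argmax(lomb_scargle(t, y, freqs))]
```

The recovered `best_period` lies close to the input 0.3 d; real light curves additionally require checks against aliases introduced by the nightly sampling pattern.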
(arXiv:1404.5638)

A long-standing question is whether radiative cooling can lead to local condensations of cold gas in the hot atmospheres of galaxies and galaxy clusters. We address this problem by studying the nature of local instabilities in rotating, stratified, weakly magnetized, optically thin plasmas in the presence of radiative cooling and anisotropic thermal conduction. For both axisymmetric and non-axisymmetric linear perturbations we provide the general equations that can be applied locally to specific systems to establish whether they are unstable and, in the case of instability, to determine the kind of evolution (monotonically growing or over-stable) and the growth rates of unstable modes. We present results for models of rotating plasmas representative of Milky Way-like galaxy coronae and cool cores of galaxy clusters. It is shown that the unstable modes arise from a combination of thermal, magnetothermal, magnetorotational and heat-flux-driven buoyancy instabilities. Local condensation of cold clouds tends to be hampered in cluster cool cores, while it is possible under certain conditions in rotating galactic coronae. If the magnetic field is sufficiently weak, the magnetorotational instability is dominant even in these pressure-supported systems.

Extended hot atmospheres are believed to be ubiquitous in massive virialized systems in the Universe. These virial-temperature gaseous halos have been detected in X-rays not only in galaxy clusters \citep{Ros02}, but also in massive elliptical \citep{Mat03} and disc \citep{Dai12} galaxies. A combination of different observational findings leads to the conclusion that a corona is present also in the Milky Way \citep{Mil13}.
A long-standing question is whether these atmospheres are thermally unstable (in the sense of \citealt{Fie65}): if local thermal instability (TI) occurs, cold gaseous clouds can condense throughout the plasma; otherwise, substantial cooling can only happen at the system center, where, however, it is expected to be opposed by feedback from the central supermassive black hole. This has important implications for the evolution of galaxy clusters \citep[e.g.][and references therein]{Mat78,Mcc12} and galaxies \citep*[e.g.][and references therein]{Mal04,Jou12}. The evolution of thermal perturbations in astrophysical plasmas, subject to radiative cooling and thermal conduction, is a complex physical process \citep{Fie65}, which is influenced by several factors, such as entropy stratification \citep*{Mal87,Bal89,Bin09}, magnetic fields \citep{Loe90,Bal91,Bal10} and rotation \citep[][hereafter \citetalias{Nip10}]{Def70,Nip10}. In the present paper, which is the follow-up of \citet[][hereafter \citetalias{Nip13}]{Nip13}, we focus on the role of rotation in determining the stability properties of these plasmas, in the presence of weak magnetic fields. Rotation is clearly expected to be important in the case of the coronae of disc galaxies \citep{Mar11}, but we stress that a substantial contribution of rotation could be present also in the hot gas of galaxy clusters \citep*{Bia13}. Though our focus is mainly on galaxy and galaxy-cluster atmospheres, it must be noted that the analysis here presented could be relevant also to the study of other rotating optically thin astrophysical plasmas, such as accretion disc coronae \citep{Das13,Li13}. In \citetalias{Nip13} we have shown that rotating, radiatively cooling atmospheres with ordered weak magnetic fields (and therefore anisotropic heat conduction) are unstable to axisymmetric perturbations, in the sense that there is always at least one growing axisymmetric mode. 
The physical implications of this formal result clearly depend on the nature of this instability, which we try to address in the present work. In particular, we want to explore whether the linear instabilities found in \citetalias{Nip13} are over-stabilities or monotonically growing instabilities, how the evolution of the instability depends on the properties of the perturbation and what are the driving physical mechanisms. We also extend the linear stability analysis of \citetalias{Nip13} considering non-axisymmetric linear perturbations. We analyze the linear evolution of the instabilities in models of rotating plasmas representative of Milky Way-like galaxy coronae and cool-cores of galaxy clusters, comparing the results to those obtained for similar unmagnetized models, characterized by isotropic heat conduction. The instabilities found in the present work are interpreted in terms of well-known instabilities such as the TI, the magnetorotational instability (MRI, \citealt{BalH91}; see also \citealt{Vel59} and \citealt{Cha60}), the magnetothermal instability (MTI, \citealt{Bal00}) and the heat-flux-driven buoyancy instability (HBI, \citealt{Qua08}). The paper is organized as follows. In Section~\ref{sec:goveq} we present the relevant magnetohydrodynamic (MHD) equations and we define the properties of the unperturbed plasma. The results of the linear-perturbation analysis are presented in Section~\ref{sec:axi} for axisymmetric perturbations and in Section~\ref{sec:nonaxi} for non-axisymmetric perturbations. Section~\ref{sec:con} summarizes and concludes. | \label{sec:con} In this paper we have studied the nature of local instabilities in rotating, stratified, weakly magnetized, optically thin astrophysical plasmas in the presence of radiative cooling and anisotropic thermal conduction. A summary of the main results of the present work is the following. 
\begin{enumerate} \item We have provided the equations that allow one to determine the linear evolution of axisymmetric and non-axisymmetric perturbations at any position of a differentially rotating plasma in the presence of a weak ordered magnetic field. Given a model for the background plasma, the evolution of axisymmetric perturbations can be computed by solving numerically the dispersion relation~(\ref{eq:disp_np13}). The evolution of non-axisymmetric perturbations can be determined by integrating numerically the system of ODEs~(\ref{eq:nonaxi_final_1}-\ref{eq:nonaxi_final_3}). \item We have studied the stability properties of rotating models representative of cool cores of galaxy clusters and coronae of Milky Way-like galaxies. In all cases we found monotonically unstable axisymmetric modes. The instability is dominated by the HBI in cool cores and by a combination of TI and MTI in galactic coronae (with the exception of models with a very weak poloidal component of the magnetic field, in which the MRI is the dominant instability). \item For the same galaxy and galaxy-cluster models, we have computed the linear evolution of several non-axisymmetric disturbances, finding that the linear non-axisymmetric modes behave similarly to axisymmetric modes with the same wave-length and orientation in the meridional plane, so these systems are locally unstable against general perturbations. In particular, in contrast with the unmagnetized case, differential rotation does not stabilize blob-like disturbances in the presence of an ordered magnetic field. \item Overall, magnetized systems are more prone to local TI than unmagnetized systems: in particular, thermal perturbations tend to be effectively damped by isotropic heat conduction in unmagnetized systems, while, under certain conditions, they can grow monotonically and lead to local condensations in the presence of magnetic fields. \item Differential rotation plays a crucial role in the studied instabilities.
Remarkably, if the magnetic field is sufficiently weak, the MRI is dominant even in pressure-supported systems such as galactic coronae and cool cores of galaxy clusters. But even when the MRI is not strong, differential rotation can favor the onset of either the MTI or the HBI, which, combined with the TI, can lead to local condensation of cold gas. The presence of a vertical velocity gradient is destabilizing, so baroclinic models tend to be more unstable than barotropic models. \end{enumerate} The original motivation of the present investigation was the question of whether cold clouds can form spontaneously throughout the virial-temperature atmospheres of galaxies and clusters of galaxies. In this work we have focused on differentially rotating plasmas, because rotation is potentially relevant in the atmospheres of both galaxies \citep{Mar11} and galaxy clusters \citep{Bia13}. A necessary condition for the spontaneous formation of cold clouds via TI is that, at least in the linear regime, thermal perturbations grow monotonically. Our results suggest that in the presence of a weak ordered magnetic field, provided that the MRI is not dominant and that the gas temperature is relatively low, thermal perturbations can lead to local condensation through a combination of the TI with either the MTI or the HBI. Therefore the formation of cold clouds via local TI is hampered in the cluster cool cores, while it is possible under specific conditions in galactic coronae. While the gas temperature of galactic and galaxy-cluster atmospheres is relatively well constrained (either observationally or theoretically), much less is known about the distribution of their specific angular momentum and the geometry of their magnetic field.
Under the hypothesis that the conditions are such that linear thermal perturbations grow monotonically, we are left with the question of the non-linear evolution of these unstable systems, which could be addressed with MHD simulations (see, for the non-rotating case, \citealt{Kun12}, \citealt{Mcc12} and \citealt{Wag14}). One possibility is that, in the non-linear regime, finite-size cold clouds form and the medium becomes multiphase, but it is also possible that the main outcome of the instability is that the magnetic field is rearranged in a configuration that counteracts the development of further instabilities, or that the gas becomes highly turbulent and the magnetic field highly tangled. A limitation of the present work is that, even in the presence of ordered magnetic fields, we have assumed for simplicity that the pressure is isotropic, neglecting the \citet{Bra65} viscosity. In fact, in a magnetized plasma anisotropic momentum transport can affect the stability properties of the plasma: for instance, studying non-rotating models of cluster cool cores, \citet{Lat12} and \citet{Kun12} concluded that, in the presence of \citet{Bra65} viscosity, the HBI is substantially reduced, being localized only in the inner 20\% of the cluster core. The question of whether and how much the results of the present work are affected by \citet{Bra65} viscosity can be addressed by extending the calculations to the case in which anisotropic pressure is self-consistently included: such an investigation would represent a natural follow-up of this paper.

(arXiv:1404.5638)
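The distinction drawn above between monotonically growing and over-stable modes is, for a linearised problem, read off from the eigenvalues of the perturbation equations: a positive real part signals growth, and a nonzero imaginary part on a growing root signals over-stability (a growing oscillation). A toy sketch with invented $2\times2$ systems — not the paper's actual dispersion relation, whose coefficients are not reproduced here:

```python
import numpy as np

def classify_modes(A):
    """Classify the linear system dx/dt = A x by its eigenvalues:
    'stable', 'monotonic instability', or 'overstability'."""
    lam = np.linalg.eigvals(A)
    growing = lam[lam.real > 1e-12]
    if growing.size == 0:
        return "stable"
    if np.any(np.abs(growing.imag) > 1e-12):
        return "overstability"      # growing oscillation
    return "monotonic instability"  # purely exponential growth

# purely damped system: eigenvalues -1, -2
stable = classify_modes(np.array([[-1.0, 0.0], [0.0, -2.0]]))
# one real positive eigenvalue (0.5) -> monotonic growth
mono = classify_modes(np.array([[0.5, 0.0], [0.0, -1.0]]))
# complex pair 0.1 +/- 2i -> growing oscillation
over = classify_modes(np.array([[0.1, -2.0], [2.0, 0.1]]))
```

The same test applied to the roots of a dispersion relation separates condensing (monotonic) modes from oscillating ones.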
(arXiv:1404.5312)

{The nature of dark energy is imprinted in the large-scale structure of the Universe and thus in the mass and redshift distribution of galaxy clusters. The upcoming \textit{eROSITA} mission will exploit this method of probing dark energy by detecting $\sim100,000$ clusters of galaxies in X-rays.} {For a precise cosmological analysis the various galaxy cluster properties need to be measured with high precision and accuracy. To predict these characteristics of \textit{eROSITA} galaxy clusters and to optimise optical follow-up observations, we estimate the precision and the accuracy with which \textit{eROSITA} will be able to determine galaxy cluster temperatures and redshifts from X-ray spectra. Additionally, we present the total number of clusters for which these two properties will be available from the \textit{eROSITA} survey directly.} {We simulate the spectra of galaxy clusters for a variety of different cluster masses and redshifts while taking into account the X-ray background as well as the instrumental response. An emission model is then fit to these spectra to recover the cluster temperature and redshift. The number of clusters with precise properties is then based on the convolution of the above fit results with the galaxy cluster mass function and an assumed \textit{eROSITA} selection function.} {During its four years of all-sky surveys, \textit{eROSITA} will determine cluster temperatures with relative uncertainties of $\Delta T/T\lesssim10\%$ at the $68\%$-confidence level for clusters up to redshifts of $z\sim0.16$ which corresponds to $\sim1,670$ new clusters with precise properties. Redshift information itself will become available with a precision of $\Delta z/(1+z)\lesssim10\%$ for clusters up to $z\sim0.45$.
Additionally, we estimate how the number of clusters with precise properties increases with a deepening of the exposure.\\ For the above clusters, the fraction of catastrophic failures in the fit is below $20\%$ and in most cases it is even much smaller. Furthermore, the biases in the best-fit temperatures as well as in the estimated uncertainties are quantified and shown to be negligible in the relevant parameter range in general. For the remaining parameter sets, we provide correction functions and factors. In particular, the standard way of estimating parameter uncertainties significantly underestimates the true uncertainty if the redshift information is not available. } {The \textit{eROSITA} survey will increase the number of galaxy clusters with precise temperature measurements by a factor of $5-10$. Thus the instrument presents itself as a powerful tool for the determination of tight constraints on the cosmological parameters. At the same time, this sample of clusters will extend our understanding of cluster physics, e.g. through precise $L_{\text{X}}-T$ scaling relations.}

Over the past years, galaxy clusters have become reliable cosmological probes for studying dark energy and for mapping the large-scale structure (LSS) of the Universe \citep[e.g.,][]{Borgani2001,Voit2004,Vikhlinin2009,Vikhlinin2009I,Mantz2010,Allen2011}. Further improved constraints on the nature of dark energy require the analysis of a large sample of galaxy clusters with precisely and accurately known properties. The future \textit{eROSITA} (\textbf{e}xtended \textbf{Ro}entgen \textbf{S}urvey with an \textbf{I}maging \textbf{T}elescope \textbf{A}rray) telescope \citep{Predehl2010,Merloni2012}, which is scheduled for launch in late 2015, will provide such a data sample \citep{Pillepich2011}.\\ \indent X-ray observations of galaxy clusters allow for the precise determination of various cluster properties, such as
the total mass as well as the gas mass of the cluster, or the temperature and the metal abundance of the intra-cluster medium (ICM) \citep[e.g.][]{Henriksen1986,Sarazin1986,Vikhlinin2009}. The information on these properties is imprinted in the emission spectrum of the ICM, which follows a thermal bremsstrahlung spectrum with emission lines of highly ionised metals superimposed \citep[e.g.,][]{Sarazin1986}. Especially notable are the Fe-L and the Fe-K line complexes at energies of $\sim1$ keV and $\sim7$ keV, respectively. For low gas temperatures of k$T\lesssim 2.5$ keV, emission lines are prominent features in the spectrum in the energy range of roughly ($0.5-8$) keV. With increasing temperature the lines at the lower energies fade as the metals become completely ionised, whereas other emission lines, such as the hydrogen-like Fe-K line, strengthen at higher gas temperatures \citep[e.g., Fig. 2 in][]{Reiprich2013}. Analogously to the temperature, the spectrum also reflects the density and the metallicity of the ICM, as well as the cluster redshift, which allows these properties to be recovered in the analysis of X-ray data. While very precise redshifts with uncertainties of $\Delta z \ll0.01$ can be obtained from optical spectroscopic observations, estimating redshifts directly from X-ray data allows these time-consuming optical spectroscopic observations to be optimised.\\ \indent Cosmological studies based on galaxy clusters are especially dependent upon the information on cluster redshift and total mass. As the cluster mass is not a direct observable, galaxy cluster scaling relations are commonly applied to estimate this property based on, e.g., the ICM temperature and the cluster redshift \citep[e.g.,][]{Vikhlinin2009,Pratt2009,Mantz2010,Reichert2011,Giodini2013}. This then allows for an analysis of the distribution of galaxy clusters with mass and redshift.
This galaxy cluster mass function traces the evolution of the large-scale structure (LSS) and is highly dependent on the cosmological model, which makes galaxy clusters effective cosmological probes \citep[e.g.,][]{Press1974,Tinker2008}. Testing the cosmological model through the study of the galaxy cluster mass function has become an important method within the past years \citep[e.g.,][]{Reiprich2002,Voit2004,Vikhlinin2009,Vikhlinin2009I,Mantz2010}. This analysis methodology is not only based on X-ray observations, but can equally be applied to \textit{Sunyaev-Zel'dovich} (SZ) observations of galaxy clusters. Current SZ cluster surveys, performed by e.g. the \textit{Atacama Cosmology Telescope} (ACT), the \textit{South Pole Telescope} (SPT) and \textit{Planck}, are increasing the impact of these observations and have already led to an improvement in constraining the cosmological parameters \citep[e.g.][]{Vanderlinde2010,Planck2013,Reichardt2013}. Additionally, a combination of SZ and X-ray observations allows for the calibration of hydrostatic cluster masses, which in turn improves the cosmological constraints. The \textit{eROSITA} instrument will soon improve the data sample of available X-ray clusters in terms of precision, accuracy and number of clusters. This sample will thus especially allow for optimised cosmological studies by means of X-ray galaxy clusters. As a side effect, future SZ observations will also profit from this cluster sample.\\ \indent \textit{eROSITA} is the German core instrument aboard the Russian Spektrum-Roentgen-Gamma (SRG) satellite, which is scheduled for launch in late 2015 \citep{Predehl2010,Merloni2012}. The main science driver of this mission is studying the nature of dark energy. The first four years of the mission are dedicated to an all-sky survey, followed by a pointed observation phase, both in the X-ray energy range between $(0.1-10)$ keV.
Within the all-sky survey, a conservatively estimated effective average exposure time of $t_{\text{exp}}=1.6$ ks is achieved and we expect to detect a total of $\sim10^5$ galaxy clusters, including basically all massive clusters in the observable universe with $M\gtrsim3\times10^{14}h^{-1}\text{ M}_{\odot}$ \citep{Pillepich2011}. For these calculations a minimum of $50$ photon counts within the energy range of $0.5-2.0$ keV is assumed for the detection of a cluster. With this predicted data sample, current simulations estimate an increased precision of the dark energy parameters of $\Delta w_{\text{0}}\approx0.03$ (for $w_{\text{a}}=0$) and $\Delta w_{\text{a}}\approx0.20$ \citep[][; Pillepich et al., in prep.]{Merloni2012}, assuming an evolution of the equation of state of dark energy with redshift as $w_{\text{DE}}=w_{\text{0}}+w_{\text{a}}/(1+z)$.\\\indent These forecasts consider only the galaxy cluster luminosity and redshift to be known with an assumed uncertainty, whereas the precision on the cosmological parameters will be improved if additional cluster information, such as the ICM temperature, is available \citep[compare e.g.,][]{Clerc2012}. In this work we thus present how accurately and precisely \textit{eROSITA} will be able to determine the ICM temperature as a function of cluster mass and redshift. In an analogous simulation, we investigate for which clusters the survey data will allow for a redshift estimate, in order to optimise optical follow-up observations \citep[compare e.g.,][]{Yu2011}. \\\indent The outline of this paper is as follows. In section 2, we define the properties of the clusters included in our simulations. We also introduce the applied model for the X-ray background as well as the simulation and analysis methods.
The following section presents the predicted precisions and accuracies for the cluster temperatures and redshifts, while section 4 focuses on the number of clusters for which precise properties will be available from \textit{eROSITA} data. The final two sections 5 and 6 contain the discussion and conclusion of this work, respectively.\\\indent If not stated otherwise, we apply a fiducial cosmology of $H_{ \text{0}}=100\cdot h$ km/s/Mpc with $h=0.7$, $\Omega_{\text{m}}=0.3$, $\Omega_{\Lambda}=0.7$, $\sigma_{ \text{8}}=0.795$ and the solar metallicity tables by \cite{Anders1989}.

The upcoming \textit{eROSITA} mission presents a powerful tool to test our current cosmological model and especially to study the nature of dark energy by investigating the distribution of galaxy clusters with mass and redshift. Moreover, it will allow for the study of cluster physics, e.g. in terms of scaling relations, in unprecedented detail. With the simulations presented in this work, we predict the accuracy and the precision with which the \textit{eROSITA} instrument will be able to determine the cluster temperature and redshift, and we present the number of clusters for which these properties will be available.\\\indent The highest precision and accuracy of the temperature and redshift are obtained for clusters at the most local redshifts. In general, the precision and the accuracy of the cluster properties depend not only on the number of detected photons, but also on the cluster properties themselves, especially on the redshift. For the average exposure time during the \textit{eROSITA} all-sky survey, high precision temperatures will be available for clusters as distant as $z\lesssim0.16$ and the instrument will allow for precise X-ray redshifts up to $z\lesssim0.45$, where for the very local clusters the uncertainty in the redshift is even comparable to optical photometric estimates.
However, for the simulation with unknown cluster redshifts, catastrophic failures occur within the spectral fit and limit the parameter space of high precision properties, especially for the lowest and the highest masses, $\log(M/\text{M}_{\odot})\lesssim14$ and $\log(M/\text{M}_{\odot})\gtrsim15$, respectively. These failures arise because the redshift is an additional free parameter in the fit and because of the resulting degeneracy between the redshift and the temperature. As \textit{eROSITA} cluster spectra prove to be sensitive estimators of the redshift for local clusters with intermediate masses, optical follow-up observations are most effective if they first cover clusters without reliable X-ray redshifts, which we predict will preferentially be found above $z\approx0.45$. Additionally, these follow-up observations will eventually allow for more precise redshift estimates also for clusters at lower redshifts.\\\indent Within the \textit{eROSITA} deep exposure fields, X-ray redshift and temperature information will be more strongly limited by catastrophic failures than for the lower exposure time. Accordingly, precise X-ray redshifts are only obtained out to the same maximum distance as for the all-sky survey. In this respect, our simulations follow the conservative approach of placing no constraints on the starting values of the fit. However, the number of catastrophic failures in the spectral fit of intermediate and high mass clusters can be reduced if additional information on the starting values, e.g. through the coupling of the fit parameters by the $L-T$ relation or with the help of first redshift estimates from shallow optical surveys, is available.\\\indent If the redshift of the clusters in the deep exposure fields is known, the percentage of clusters with precise temperatures still increases significantly towards the highest redshifts.
Even though these deep fields only cover a small sky fraction, the findings for these regions shed light on the expectations for the succeeding pointed observation phase. \\\indent The entire parameter space of clusters with precise properties displays high parameter accuracy, such that for those clusters no parameter bias needs to be corrected for. Only for the long exposure times of $t_{\text{exp}}=20$ ks does the bias in the temperature need to be considered, for clusters with available redshifts at distances of $z\gtrsim0.32$. We additionally introduce correction functions, which need to be applied to spectral fits of clusters with a bias in their best-fit properties. For the analysis of observed \textit{eROSITA} data, these correction functions should be applied iteratively. The analysis of spectral cluster data yields preliminary values of the cluster temperature, redshift and luminosity, from which the total mass is estimated. Using the redshift and the total mass, the correction functions return a revised cluster temperature and redshift, which subsequently yield a corrected total mass. These steps are repeated until negligible changes of the properties are obtained with each iteration, and the final values are adopted as best estimates.\\\indent Through our simulations, we also investigate the deviation between the uncertainties returned by the \textit{xspec} {\tt{error}}-command and those of a statistical distribution. These corrections of the uncertainties need to be considered in the data analysis of clusters with unknown redshift, independent of the precision of the cluster properties, as \textit{xspec} underestimates the statistical uncertainty.
Applying the scaling relations by \cite{Reichert2011}, we expect \textit{eROSITA} to detect $\sim113,400$ clusters of galaxies in total with a minimum photon number of $\eta_{\text{min}}=50$. Out of this total number of clusters, \textit{eROSITA} will provide precise temperatures with $\Delta T/\langle T_{\text{fit}}\rangle\lesssim10\%$ for $\sim1,670$ new clusters in the all-sky survey, corresponding to $\sim1.5\%$ of all detected clusters. This \textit{eROSITA} sample, consisting mainly of so-far unstudied clusters, will increase the current catalogue of clusters with precise temperatures by a factor of $5-10$, depending on the reference catalogue.\\\indent Large samples of precise and accurate cluster data, as will be available from the \textit{eROSITA} mission, are essential for the computation of tight constraints on the cosmological parameters. As the current simulations of the constraints which \textit{eROSITA} will place on the cosmology do not yet include information on the cluster temperature \citep{Pillepich2011}, we aim to improve these constraints through our findings \citep[compare][]{Clerc2012} and will predict these improvements in our future work.

(arXiv:1404.5312)
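The temperature information that the spectral fits above exploit comes largely from the shape of the continuum. A deliberately over-simplified sketch — a pure bremsstrahlung photon spectrum $N(E)\propto e^{-E/kT}/E$ with the Gaunt factor and all line emission ignored, and band edges chosen only for illustration — shows that the hard-to-soft band ratio grows monotonically with $kT$, which is essentially what anchors a temperature fit:

```python
import numpy as np

def band_photons(kT, lo, hi, n=4000):
    """Photon counts from a pure bremsstrahlung continuum N(E) ~ exp(-E/kT)/E,
    integrated over the band [lo, hi] (E and kT in keV; trapezoidal rule)."""
    E = np.linspace(lo, hi, n)
    f = np.exp(-E / kT) / E
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E)))

def hardness(kT):
    """Hard-band (2-8 keV) to soft-band (0.5-2 keV) photon ratio."""
    return band_photons(kT, 2.0, 8.0) / band_photons(kT, 0.5, 2.0)

# hotter plasma puts relatively more photons into the hard band
ratios = [hardness(kT) for kT in (1.0, 2.0, 4.0, 8.0)]
```

A real analysis fits a full emission model to the binned spectrum, as the authors do, which additionally uses the line positions to constrain the redshift.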
1404 | 1404.2474_arXiv.txt | {} {The exact mechanism by which astrophysical jets are formed is still unknown. It is believed that the necessary elements are a rotating (Kerr) black hole and a magnetised accreting plasma.} {We model the accreting plasma as a collection of magnetic flux tubes/strings. If such a tube falls into a Kerr black hole, then the leading portion loses angular momentum and energy as the string brakes, and to compensate for this loss, momentum and energy are redistributed to the trailing portion of the tube.} {We found that buoyancy creates a pronounced helical magnetic field structure aligned with the spin axis. Along the field lines, the plasma is centrifugally accelerated close to the speed of light. This process leads to unlimited stretching of the flux tube, since one part of the tube continues to fall into the black hole while the other part of the string is simultaneously pushed outward. Eventually, reconnection cuts the tube: the inner part is filled with new material and the outer part forms a collimated bubble-structured relativistic jet. Each plasmoid can be considered as an outgoing particle in the Penrose mechanism: it carries extracted rotational energy away from the black hole while the falling part with the corresponding negative energy is left inside the ergosphere.} \bigskip \hspace*{-2cm}\normalsize Published in: {\it Physica scripta {\bf 89}, 045003, 2014.}

Astrophysical jet streams can be found all over the universe and are manifestations of violent energetic outbursts from massive cosmic objects. These jets are a mixture of super-heated, low-density gas, extremely energetic particles and magnetic fields, ejected from the polar regions in the form of narrow columns of gas saturated with elementary particles.
On the intergalactic scale, these powerful emissions originate in the cores of active galactic nuclei (AGNs), including exotic astrophysical objects such as quasars and blazars~\cite{Rawlings1991-138,Narayan2005-199,Giannios2009-L29}. In recent years, analytical work~\cite{Blandford1977-433,Narayan2005-199,Punsly2001,Beskin2009} and numerical simulations~\cite{Koide2002-1688,McKinney2006-1561,Hawley2007-117,Hawley2006-103} have revealed the ingredients necessary to form a jet: plasma accretion, a magnetic field and a black hole. To elucidate the underlying physics, it is interesting to look at the interaction of the magnetic field with the accreting matter drawn towards the black hole, and at its role in the jet formation process. This interaction will be most effective in the presence of a rotating (Kerr) black hole~\cite{Misner1973}. Two different models of energy extraction from a rotating black hole are often discussed in the literature. On the one hand, there is the Penrose mechanism, in which a particle splits into two inside the ergosphere: one may escape to infinity with higher mass-energy than the original infalling particle, whilst the other falls with negative mass-energy past the event horizon into the hole~\cite{Penrose1969-252,Penrose1971-177}. This mechanism is based on the frame-dragging of rotating bodies, as first derived within general relativity by Lense and Thirring in 1918~\cite{Lense1918-156}. On the other hand, there is the Blandford-Znajek mechanism~\cite{Blandford1977-433}, in which the accreted material falls into the rotating black hole with a magnetic field threaded through it. Again, frame-dragging coils and twists the magnetic field like a rope. The associated huge electric currents will eventually transfer their energy into the plasma, which is blown away from the black hole in the form of jets~\cite{Punsly2001,Beskin2009}.
\Red{ Because of frame-dragging, there exists a region around the Kerr black hole called the ergosphere, whose outer surface is the static limit. It is known that the energy of a particle inside the ergosphere can assume both positive and negative values, as measured by an observer at infinity~\cite{Misner1973}. If a particle with zero angular momentum falls into the black hole, an external observer will see the particle start to corotate with the black hole. The deeper the particle falls, the faster it will be seen to rotate by the external observer. Positive and negative angular momentum for a particular particle can then be defined according to whether it rotates faster or slower than the zero-angular-momentum particle, as seen by an external observer. The rotation induced by frame-dragging tends to zero at infinity and takes its maximal value at the event horizon, so it constitutes a strong differential rotation. } A magnetised plasma can be modelled as a gas-like collection of magnetic flux tubes, each of which behaves as a nonlinear string. Therefore, one can simulate the motion of a flux tube (a 2D problem) instead of numerically solving the complete set of MHD equations (a 4D problem). This approach has been successfully applied to model the solar cycle~\cite{babcock1961-572}, the magnetic barrier (or depletion layer) at the dayside magnetopause~\cite{zwan1976-1636,erkaev2006-209}, and magnetic reconnection in the magnetotail~\cite{pudovkin1985-1, chen1999-14613}. This method has also been used for the problem of relativistic string -- Kerr black hole interaction: the general problem was stated in (Semenov, 2000), a subsequent first numerical simulation demonstrated the possibility of extracting energy from a rotating black hole with the help of a flux tube (Semenov, 2002), and finally an improved numerical scheme was able to simulate the appearance of the jet (Semenov, 2004).
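The differential rotation induced by frame-dragging can be made quantitative directly from the Kerr metric. The following sketch, in Boyer-Lindquist coordinates with $G=c=M=1$, is a standard textbook computation rather than code from the paper:

```python
import math

def frame_dragging_omega(a, r, theta=math.pi / 2):
    """Angular velocity omega = -g_{t phi}/g_{phi phi} of a zero-angular-
    momentum observer around a Kerr hole of spin a (G = c = M = 1)."""
    delta = r * r - 2.0 * r + a * a
    big_a = (r * r + a * a) ** 2 - a * a * delta * math.sin(theta) ** 2
    return 2.0 * a * r / big_a

def horizon_radius(a):
    """Outer event horizon r_+ = 1 + sqrt(1 - a^2)."""
    return 1.0 + math.sqrt(1.0 - a * a)

def static_limit(a, theta=math.pi / 2):
    """Outer boundary of the ergosphere, r = 1 + sqrt(1 - a^2 cos^2 theta)."""
    return 1.0 + math.sqrt(1.0 - a * a * math.cos(theta) ** 2)
```

At the horizon the induced rotation reduces to the hole's angular velocity $\Omega_H=a/(2r_+)$, while far away it falls off as $2a/r^3$; this radial gradient is the strong differential rotation referred to above, and the gap between the static limit and the horizon is the ergosphere in which the string portions with negative energy reside.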
In the present paper, the important role of buoyancy and magnetic reconnection in the process of jet formation is discussed, as well as the propagation of plasmoids outward from the black hole. \renewcommand{\eta}{\tau}

The buoyancy effect can play an important role in the course of jet creation. At the beginning it slows down the motion of the plasma towards the black hole, especially for the low-density part of the tube. Eventually this part starts to move outwards, producing the spiral structure needed for centrifugally driven jets. This might resolve an important question of relativistic jets: how does a magnetised plasma, falling towards the huge gravitational center of a black hole which, in principle, swallows everything, produce outgoing streams near the speed of light? A further necessary element in this scenario is magnetic field line reconnection. In this context the question arises as to where the correct place to reconnect the field lines is. One might expect this place to be near the event horizon, because the field lines are nearly antiparallel there. However, it turns out that gravity in this region is so strong that the release of Maxwellian stresses by reconnection there cannot prevent the plasma from falling into the black hole. The more appropriate site for reconnection is likely to be near the static limit surface, where magnetic reconnection can produce a chain of plasmoids and supply the black hole with new material. In addition, we propose a new relaxation mechanism. It is well known that jets eventually start to spread out, getting wider and wider. Usually this is attributed to collisions with the surrounding plasma or to a Kelvin-Helmholtz instability. Our point of view is that Maxwellian relaxation continues all along the jet and leads to a simplification of the magnetic jet structure. Roughly speaking, each plasmoid tends to become a circle-like structure, which leads to a jet of finite thickness. In our scenario the Penrose mechanism has been used twice.
Locally, to redistribute angular momentum and energy along the string and, consequently, to extract energy from the Kerr black hole; and globally, in the form of an outgoing plasmoid which transfers additional positive energy and angular momentum away from the black hole. This plasmoid can be considered as a classical Penrose particle which carries away rotational energy from the black hole. The accretion of magnetised plasma into a Kerr black hole with differential rotation results in a common pattern, i.e.\ a helical structure of the magnetic field and a spinning plasma flow which eventually leads to jet formation via the frame-dragging effect, the Penrose mechanism, relativistic buoyancy, centrifugal acceleration and reconnection. The predictions of the present mechanism result from the application of energy and angular momentum conservation for magnetic flux tubes. There is practically no dependence on the initial configuration of the magnetic field. } \ack The authors thank B.~Punsly for useful discussions. VSS acknowledges financial support from the TU Graz. | 14 | 4 | 1404.2474 |
1404 | 1404.5890_arXiv.txt | We have searched for intermediate-scale anisotropy in the arrival directions of ultrahigh-energy cosmic rays with energies above 57~EeV in the northern sky using data collected over a 5-year period by the surface detector of the Telescope Array experiment. We report on a cluster of events that we call the hotspot, found by oversampling using 20$\degr$-radius circles. The hotspot has a Li-Ma statistical significance of 5.1$\sigma$, and is centered at ${\rm R.A.}=146\fdg7$, ${\rm Dec.}=43\fdg2$. The position of the hotspot is about 19$\degr$ off the supergalactic plane. The probability of a cluster of events of 5.1$\sigma$ significance, appearing by chance in an isotropic cosmic-ray sky, is estimated to be 3.7$\times$10$^{-4}$ (3.4$\sigma$).

The origin of ultrahigh-energy cosmic rays (UHECRs), particles with energies greater than 10$^{18}$~eV, is one of the mysteries of astroparticle physics. Greisen, Zatsepin, and Kuz'min (GZK) predicted that UHECR protons with energies greater than $\sim$60~EeV ($6\times10^{19}$~eV) would be severely attenuated, primarily due to pion photoproduction interactions with the cosmic microwave background (CMB) radiation \citep{Gre66,Zat66}. This GZK suppression becomes strong if these very high energy cosmic rays are produced at, and travel over, moderate extragalactic distances. The High Resolution Fly's Eye (HiRes) collaboration was the first to observe a suppression of cosmic rays above $\sim$60~EeV \cite{Abb08}, consistent with the expectation from the GZK cutoff. This suppression was independently confirmed by both the Pierre Auger Observatory (PAO) \citep{Abr08} in the south and the Telescope Array (TA) experiment \citep{Abu13a} in the north, which are the largest-aperture cosmic-ray detectors currently in operation.
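The Li-Ma statistical significance quoted for the hotspot is the standard likelihood-ratio significance of Li & Ma (1983), Eq. 17. A minimal sketch follows; the count values used to exercise it below are illustrative, not the TA data:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of an on-source excess, where
    alpha is the ratio of on-source to off-source exposure."""
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    # max() guards against tiny negative values from floating-point rounding
    return math.sqrt(max(0.0, 2.0 * (term_on + term_off)))
```

By construction the significance vanishes when the on-source counts equal the expected background $\alpha\,n_{\rm off}$ and grows monotonically with the excess, which is the statistic evaluated in each oversampling circle.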
The distribution of UHECR sources should be limited to the local universe, with distances smaller than 100~Mpc for proton/iron and 20~Mpc for helium/carbon/nitrogen/oxygen (distances within which $\sim$50\% of cosmic rays are estimated to survive) \citep{Kot11}. To be accelerated up to the ultrahigh-energy region, particles must be confined to the accelerator site for more than a million years by a magnetic field and/or a large-scale confinement volume \citep{Hil84,Pti10}. This would thus limit the number of possible accelerators in the universe to astrophysical candidates such as galaxy clusters, supermassive black holes in active galactic nuclei (AGNs), jets and lobes of active galaxies, starburst galaxies, gamma-ray bursts, and magnetars. Galactic objects are not likely to be the sources, since past observations indicate that the UHECRs do not concentrate in the galactic plane and have a relatively isotropic distribution. In addition, the Galactic magnetic field cannot confine UHECRs above 10$^{19}$~eV within the volume of our galaxy. Extragalactic astrophysical objects form well-known large-scale structures (LSSs), most of which are spread along the ``supergalactic plane'' in the local universe. Nearby AGNs are clustered and concentrated around LSS with a typical clustering length of 5--15~Mpc, as observed by Swift BAT \citep{Cap10}. Concentrations of nearby AGNs coincide spatially with the LSS of matter in the local universe, including galaxy clusters such as Centaurus and Virgo. The typical amplitude of such AGN concentrations is estimated to be a few hundred percent of the averaged density within a 20$\degr$-radius circle, which is of an angular scale comparable to the clustering length of the AGNs within 85~Mpc \citep{Aje12}. The main difficulty in identifying the origin of UHECRs is the loss of directional information due to magnetic-field-induced bending.
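The confinement requirement cited above is often summarized by the Hillas criterion, $E_{\max}\sim Z\beta eBR$ for a site of size $R$ and field $B$. A hedged order-of-magnitude sketch; the candidate-site numbers below are representative assumptions, not measurements from this paper:

```python
def hillas_emax_eev(Z, B_uG, R_kpc, beta=1.0):
    """Hillas criterion E_max ~ Z * beta * e * B * R, returned in EeV.
    The 0.93 prefactor follows from e = 4.8e-10 esu, 1 kpc = 3.086e21 cm."""
    return 0.93 * Z * beta * B_uG * R_kpc

# Representative (assumed) numbers for two confinement volumes:
radio_lobe = hillas_emax_eev(Z=1, B_uG=1.0, R_kpc=100.0)  # extended lobe: ~90 EeV
disk = hillas_emax_eev(Z=1, B_uG=3.0, R_kpc=0.3)          # Galactic disk: < 1 EeV
```

With disk-like parameters the maximum proton energy stays well below 10$^{19}$~eV, consistent with the statement that our galaxy cannot confine UHECRs, whereas only large, strongly magnetized extragalactic sites reach the trans-GZK regime.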
In order to investigate the UHECR propagation from the extragalactic sources, a number of numerical simulations have been developed \citep{Yos03,Sig04,Taka06,Kas08,Koe09,Taka10,Kal11,Taka12}. In the simulations, the UHECR trajectory between the assumed UHECR source and the Earth is traced through intergalactic and galactic magnetic fields (IGMF and GMF). The results depend strongly on the assumed distribution and density of the UHECR sources and the intervening magnetic fields. The deflection angle of a 60~EeV proton from a source at a distance of 50~Mpc is estimated to be a few degrees assuming models with an IGMF strength of 1~nG. Meanwhile, the estimated deflection by the GMF ranges from a few to about 10 degrees. This, however, depends on the direction in the sky. If the highest-energy cosmic rays come from the local universe such as nearby galaxies, and if they are protons, the maximum amplitude of the cosmic-ray anisotropy above $\sim$60~EeV is expected to be a few hundred percent of the average cosmic-ray flux. In this case, the amplitude of the cosmic-ray anisotropy might be detectable by the UHECR detectors of the TA and PAO. In the highest-energy region, $E>57$~EeV, the PAO found correlations of the cosmic-ray directions within a 3$\fdg$1-radius circle centered at nearby AGNs (within 75~Mpc) in the southern sky \citep{Abr07}. Updated measurements from the PAO indicate a weakened correlation with nearby AGNs \citep{Abr10,Mac12}; the correlating fraction (the number of correlated events divided by all events) decreased from the early estimate of ($69^{+11}_{-13}$)\% to ($33\pm5$)\%, compared with 21\% expected for an isotropic distribution of cosmic rays. The chance probability of the original (69\%) correlation is $6\times10^{-3}$ assuming an isotropic sky. 
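The quoted few-degree deflection can be reproduced with the standard random-walk estimate for propagation through a turbulent intergalactic field. The prefactor and scalings below are the usual order-of-magnitude parameterization, an assumption rather than this paper's simulation result:

```python
import math

def igmf_deflection_deg(E_EeV, B_nG, L_Mpc, coh_Mpc=1.0, Z=1):
    """RMS deflection (degrees) after traversing distance L through a
    turbulent field of strength B with coherence length coh, using the
    standard random-walk scaling theta ~ E^-1 * B * sqrt(L * coh)."""
    return (0.8 * Z * (100.0 / E_EeV) * B_nG
            * math.sqrt(L_Mpc / 10.0) * math.sqrt(coh_Mpc))
```

For a 60~EeV proton from 50~Mpc in a 1~nG field with a $\sim$1~Mpc coherence length this gives roughly 3$\degr$, matching the "few degrees" quoted above, and the deflection shrinks inversely with energy.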
The Telescope Array has also searched for UHECR anisotropies such as autocorrelations, correlations with AGNs, and correlations with the LSS of the universe using the first 40 months of scintillator surface detector (SD) data \citep{Abu12b,Abu13b}. Using 5 years of SD data, we updated the results on the cosmic-ray anisotropy at $E>57$~EeV, which show deviations from isotropy at the 2--3$\sigma$ significance level \citep{Fuk13}. In this letter, we report on indications of intermediate-scale anisotropy of cosmic rays with $E>57$~EeV in the northern hemisphere sky using the 5-year TA SD dataset.

There are no known specific sources behind the hotspot. The hotspot is located near the supergalactic plane, which contains local galaxy clusters such as the Ursa Major cluster (20~Mpc from Earth), the Coma cluster (90~Mpc), and the Virgo cluster (20~Mpc). The angular distance between the hotspot center and the supergalactic plane in the vicinity of the Ursa Major cluster is $\sim$19$\degr$. Assuming the hotspot is real, two possible interpretations are: it may be associated with the closest galaxy groups and/or the galaxy filament connecting us with the Virgo cluster \citep{Dol04}; or, if cosmic rays are heavy nuclei, they may originate close to the supergalactic plane and be deflected by extragalactic magnetic fields and the galactic halo field \citep{Tin02,Taka12}. To determine the origin of the hotspot, we will need greater UHECR statistics in the northern sky. Better information about the mass composition of the UHECRs, the GMF, and the IGMF would also be important. | 14 | 4 | 1404.5890 |
1404 | 1404.2538_arXiv.txt | In this review the current status of several searches for particle dark matter with the Fermi Large Area Telescope instrument is presented. In particular, the current limits on weakly interacting massive particles, obtained from the analyses of gamma-ray and cosmic-ray electron/positron data, will be illustrated.

A wide range of cosmological observations, including large-scale structures, the cosmic microwave background and the isotopic abundances resulting from primordial nucleosynthesis, provides evidence for a sizable non-baryonic, extremely weakly interacting cold dark matter (DM) component\cite{Roos:2012cc}. Weakly interacting massive particles (WIMPs) provide a theoretically appealing class of candidates for the obscure nature of DM\cite{Jungman:1995df,Bergstrom:2000pn,Bertone:2004pz}, with the lightest supersymmetric neutralino ($\chi$) often taken as a useful template for such a WIMP. It is often argued that the thermal production of WIMPs in the early universe generically leads to a relic density of the observed order of magnitude of the DM fraction of the total energy density of the universe on cosmological scales, $\Omega_\chi= 0.229 \pm 0.015$\cite{wmap}. The indirect search for DM is one of the main items in the broad Fermi Large Area Telescope (LAT)\cite{Atwood2009,Abdo:2009gy,Ackermann:2012kna} Science menu~\cite{balt}. It is complementary to direct searches for the recoil of WIMPs off nuclei being carried out in underground facilities~\cite{DRUKIER:2013lva}, and to searches for missing transverse energy at collider experiments. In this review the word ``indirect'' denotes the search for a signature of WIMP annihilation or decay through the final products of such processes (gamma rays, electrons and positrons).
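The statement that thermal WIMP production generically yields the observed relic density rests on the standard freeze-out approximation $\Omega_\chi h^2 \approx 3\times10^{-27}\,{\rm cm^3\,s^{-1}}/\langle\sigma v\rangle$. A minimal sketch; the prefactor is the usual textbook value and the Hubble parameter is an assumed $h=0.7$, neither being a result of this review:

```python
def relic_sigma_v(omega_chi, h=0.7):
    """Invert the freeze-out approximation
    Omega_chi h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>
    for the thermally averaged annihilation cross section (cm^3/s)."""
    return 3.0e-27 / (omega_chi * h * h)

sigma_v = relic_sigma_v(0.229)  # ~3e-26 cm^3/s, the canonical thermal value
```

Plugging in $\Omega_\chi = 0.229$ recovers the canonical "thermal" cross section of a few $\times10^{-26}\,{\rm cm^3\,s^{-1}}$, which is the benchmark against which the LAT limits discussed here are usually compared.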
Among possible messengers for such indirect searches, gamma rays play a pronounced role, as they propagate essentially unperturbed through the Galaxy and therefore directly point to their sources, leading to distinctive spatial signatures; an even more important aspect, as we will see, is the appearance of pronounced spectral signatures. Among many other ground-based and space-borne instruments, the LAT plays a prominent role in this search through a variety of distinct search targets: gamma-ray lines, Galactic and isotropic diffuse gamma-ray emission, dwarf satellites, and cosmic ray (CR) electrons and positrons.

The Fermi LAT team has looked for indirect DM signals using a wide variety of methods, and since no signals have been detected, strong constraints have been set. Fermi marked five years in orbit in June 2013, and it is definitely living up to its expectations in terms of scientific results delivered to the community. The Fermi-LAT Collaboration will provide an improved event reconstruction (Pass 8)~\cite{Atwood:2013rka}, which will improve the potential of the LAT instrument. | 14 | 4 | 1404.2538 |
1404 | 1404.0017_arXiv.txt | The IceCube experiment has recently reported the observation of 28 high-energy ($> 30$~TeV) neutrino events, separated into 21 showers and 7 muon tracks, consistent with an extraterrestrial origin. In this letter we compute the compatibility of such an observation with possible combinations of neutrino flavors with relative proportion ($\alpha_e:\alpha_\mu:\alpha_\tau$)$_\oplus$. Although the 7:21 track-to-shower ratio is naively favored for the canonical ($1:1:1$)$_\oplus$ at Earth, this is not true once the atmospheric muon and neutrino backgrounds are properly accounted for. We find that, for an astrophysical neutrino $E_{\nu}^{-2}$ energy spectrum, ($1:1:1$)$_\oplus$ at Earth is disfavored at 81\%~C.L. If this proportion does not change, 6 more years of data would be needed to exclude ($1:1:1$)$_\oplus$ at Earth at $3\sigma$~C.L. Indeed, with the recently-released 3-year data, that flavor composition is excluded at 92\%~C.L. The best-fit is obtained for ($1:0:0$)$_\oplus$ at Earth, which cannot be achieved from any flavor ratio at sources with averaged oscillations during propagation. If confirmed, this result would suggest either a misunderstanding of the expected background events, or a misidentification of tracks as showers, or even more compellingly, some exotic physics which deviates from the standard scenario. | 14 | 4 | 1404.0017 |
||
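The claim above that a $(1:0:0)_\oplus$ composition cannot be reached from any source ratio follows from averaged oscillations, under which the Earth flux is $\phi_\alpha^\oplus=\sum_\beta P_{\alpha\beta}\phi_\beta$ with $P_{\alpha\beta}=\sum_i |U_{\alpha i}|^2|U_{\beta i}|^2$. A minimal sketch; the mixing angles are representative best-fit values with $\delta_{CP}=0$, an assumption rather than this letter's fit:

```python
import math

def pmns_sq(t12=33.6, t23=45.0, t13=8.9):
    """|U_{alpha i}|^2 for a real PMNS matrix (delta_CP = 0); the angles,
    in degrees, are representative best-fit values (an assumption here)."""
    s12, c12 = math.sin(math.radians(t12)), math.cos(math.radians(t12))
    s23, c23 = math.sin(math.radians(t23)), math.cos(math.radians(t23))
    s13, c13 = math.sin(math.radians(t13)), math.cos(math.radians(t13))
    U = [
        [c12 * c13, s12 * c13, s13],
        [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13, s23 * c13],
        [s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13, c23 * c13],
    ]
    return [[u * u for u in row] for row in U]

def earth_flavor_ratio(source):
    """Averaged oscillations: flux_a(Earth) = sum_b P_ab flux_b(source),
    with P_ab = sum_i |U_ai|^2 |U_bi|^2; normalized to sum to 1."""
    U2 = pmns_sq()
    out = [sum(U2[a][i] * U2[b][i] * source[b]
               for i in range(3) for b in range(3)) for a in range(3)]
    tot = sum(out)
    return [x / tot for x in out]
```

A pion-decay source $(1:2:0)$ lands close to $(1:1:1)$ at Earth, while the electron-neutrino fraction at Earth, being a convex combination of $P_{e\beta}$, can never exceed $P_{ee}\approx0.55$; hence $(1:0:0)_\oplus$ is unreachable with standard averaged oscillations, as the abstract states.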
1404 | 1404.4634_arXiv.txt | The study of galaxy cluster outskirts has emerged as one of the new frontiers in extragalactic astrophysics and cosmology with the advent of new observations in X-ray and microwave. However, the thermodynamic properties and chemical enrichment of this diffuse and azimuthally asymmetric component of the intra-cluster medium are still not well understood. This work, for the first time, systematically explores potential observational biases in these regions. To assess X-ray measurements of galaxy cluster properties at large radii ($>{R}_{500c}$), we use mock {\em Chandra} analyses of cosmological galaxy cluster simulations. The pipeline is identical to that used for {\em Chandra} observations, but the biases discussed in this paper are relevant for all X-ray observations outside of ${R}_{500c}$. We find the following from our analysis: (1) filament regions can contribute as much as $50\%$ at ${R}_{200c}$ to the emission measure, (2) X-ray temperatures and metal abundances from model-fitted mock X-ray spectra in a multi-temperature ICM vary at the level of $10\%$ and $50\%$, respectively, (3) resulting density profiles vary to within $10\%$ out to ${R}_{200c}$, and gas mass, total mass, and baryon fractions all vary to within a few percent, (4) the bias from a metal abundance extrapolated a factor of 5 higher than the true metal abundance results in total mass measurements biased high by $20\%$ and total gas measurements biased low by $10\%$, and (5) differences in the projection and dynamical state of a cluster can lead to gas density slope measurements that differ by $15\%$ and $30\%$, respectively. The presented results can partially account for some of the recent gas profile measurements in cluster outskirts by, e.g., {\em Suzaku}.
Our findings are pertinent to future X-ray cosmological constraints with cluster outskirts, which are least affected by non-gravitational gas physics, as well as to measurements probing gas properties in filamentary structures.

Clusters of galaxies are the largest gravitationally bound objects in our universe. These objects are massive enough to probe the tension between dark matter and dark energy in structure formation, making them powerful cosmological tools. Cluster-based cosmological tests use observed cluster number counts, the precision of which depends on how well observable-mass relations can constrain cluster masses. Finely tuned observable-mass relations require an understanding of the thermal and chemical structure of galaxy clusters out to large radii, as well as an understanding of systematic biases that may enter into observational measurements of the intracluster medium (ICM). A new area of study is the ICM in the outskirts of clusters, a region where the understanding of ICM physics remains incomplete \citep[e.g., see][for an overview]{reiprich_etal13}. Cluster outskirts can be considered a cosmic melting pot, where infalling substructure is undergoing virialization and the associated infalling gas clumps are being stripped by ram pressure, depositing metal-rich gas in the cluster atmosphere and changing the thermal and chemical structure of the ICM. Measurements in these regions will allow us to form a more complete assessment of the astrophysical processes, e.g. gas stripping and quenching of star formation in infalling galaxies, that govern the formation and evolution of galaxy clusters and their galaxies. Gas properties in cluster outskirts are also more suitable for cosmological measurements; the effects of dissipative non-gravitational gas physics, such as radiative cooling and feedback, are minimal in the outskirts compared with the inner regions.
Gas density measurements in cluster outskirts will also help constrain the baryon budget. While recent X-ray cluster surveys have provided an independent confirmation of cosmic acceleration and tightened constraints on the nature of dark energy \citep{allen_etal08,vikhlinin_etal09}, the best X-ray data, circa 2009, were limited to radii within half of the virial radius, and cluster outskirts were largely unobserved. Recent measurements from the {\em Suzaku} X-ray satellite have pioneered the study of the X-ray emitting ICM in cluster outskirts beyond ${R}_{200c}$\lastpagefootnotes\footnote{${R}_{200c}$ is the radius of the cluster enclosing an average matter density 200 times the critical density of the Universe.} \citep[e.g.,][]{bautz_etal09,reiprich_etal09, hoshino_etal10,kawaharada_etal10}. These measurements yielded unexpected results for both the entropy and the enclosed gas mass fraction at large radii. Entropy profiles from {\em Suzaku} data were significantly flatter than theoretical predictions from hydrodynamical simulations \citep{george_etal09,walker_etal12}, and the enclosed gas mass fraction from gas mass measurements of the Perseus cluster exceeded the cosmic baryon fraction \citep{simionescu_etal11}. These results suggest that gaseous inhomogeneities might cause an overestimate in gas density at large radii. {\em Suzaku} observations and subsequent studies demonstrated that measurements of the ICM in cluster outskirts are complicated by contributions from the cosmic X-ray background, excess emission from gaseous substructures \citep{nagaiandlau_11,zhuravleva_etal13,vazza_etal13,roncarelli_etal13}, and poorly understood ICM physics in the outskirts. However, analyses of cluster outskirts with both {\em Planck} Sunyaev-Zel'dovich and {\em ROSAT} X-ray measurements found otherwise \citep{eckert_etal13a,eckert_etal13b}.
The mysteries surrounding these X-ray measurements of cluster outskirts are still unsettled due to a number of observational systematic uncertainties, such as X-ray background subtraction. The latest deep {\em Chandra} observations of the outskirts of galaxy cluster Abell 133, a visually relaxed cluster with X-ray temperature $T_X=4.1$~keV, have given us an unprecedented opportunity to probe the ICM structure in cluster outskirts (Vikhlinin~et~al., in prep.). First, the high sub-arcsec angular resolution allows for efficient point source removal, the isolation of the cosmic X-ray background, and the detection of small-scale clumps, which dominate the X-ray flux at large radii. Second, the long exposure time of 2.4~Msec also enables direct observations of the filamentary distribution of gas out to ${R}_{200c}$. To address potential biases that could affect observed ICM properties at large radii, we use mock {\em Chandra} observations from high-resolution cosmological simulations of galaxy clusters. The goal of this paper is to characterize the properties of diffuse components of the X-ray emitting ICM in the outskirts of galaxy clusters, and to assess the implications for X-ray measurements. Our analysis focuses on the effects that come from a combination of limited spectral resolution and low photon counts in cluster outskirts, where spectral contributions from metal line emissions become more significant than those from thermal bremsstrahlung emission. Our approach is similar in spirit to that of \citet{rasia_etal08}, which pioneered the study of systematics in abundance measurements of the ICM by analyzing the spectral properties of mock plasma spectra generated with XSPEC. We extend the study of XSPEC-generated mock spectra to the outskirt regions. Our results are valid for all observations of spectra from a plasma with multiple temperature components.
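A compact way to see the multi-temperature fitting bias discussed here, without running a full XSPEC fit, is the spectroscopic-like weighting of Mazzotta et al. (2004). The densities and temperatures in the sketch below are illustrative values, not profiles from the simulations:

```python
def spectroscopic_like_T(ne, T):
    """Spectroscopic-like temperature (Mazzotta et al. 2004): the single
    temperature a spectral fit tends to recover from a multi-temperature
    plasma, T_sl = sum(w T)/sum(w) with weights w = n_e^2 T^(-3/4)."""
    w = [n * n * t ** (-0.75) for n, t in zip(ne, T)]
    return sum(wi * ti for wi, ti in zip(w, T)) / sum(w)

def emission_weighted_T(ne, T):
    """Emission-measure-weighted mean (w = n_e^2), for comparison."""
    w = [n * n for n in ne]
    return sum(wi * ti for wi, ti in zip(w, T)) / sum(w)

# Hot ambient gas (5 keV) mixed with a cool dense clump (1 keV, 3x density):
t_sl = spectroscopic_like_T([1.0, 3.0], [5.0, 1.0])
t_ew = emission_weighted_T([1.0, 3.0], [5.0, 1.0])
```

Because cool, dense gas gets extra weight in the fit, the recovered temperature falls below even the emission-weighted mean, which is the sense of the bias that then propagates into the abundance and density measurements.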
Specifically, we look at the accuracy of metal abundance measurements from spectral fitting and their consequential effects on the measurements of projected X-ray temperature, emission measure, deprojected temperature, gas density, gas mass, and hydrostatic mass derived from these profiles. We explore potential biases by (1) measuring contributions of clumps and filaments to emission measurements, (2) testing the modeling of bremsstrahlung and metal line emissions in low density regions, and (3) checking for line of sight dependence in the density slope in cluster outskirts and differences in observed ICM profiles for an unrelaxed cluster. This work will be useful in assessing the robustness of X-ray measurements beyond ${R}_{500c}$, as we test both the spectral fitting of X-ray photons in low density regions, and the validity of fitting formulae that are used to reconstruct ICM profiles. Our paper is organized as follows: in Section~\ref{sec:methods} we describe the simulations we used and the mock {\em Chandra} analysis pipeline. We present our results in Section~\ref{sec:results}, and give our summary and discussion in Section~\ref{sec:summary}. | \label{sec:summary} The study of galaxy cluster outskirts is the new frontier in cluster-based cosmology. In this work, we used cosmological simulations of galaxy clusters to investigate potential biases in metal abundance measurements from X-ray spectra in the low density cluster outskirts ($R>R_{500}$), where metal line cooling contributions become significant. Using X-ray maps with a constant input metal abundance, we showed how systematically biased metal abundance measurements affect ICM quantities such as the gas density and temperature profiles. 
In order to test other potential sources of systematic uncertainty in density slope measurements from {\em Chandra} observations of Abell 133 (Vikhlinin~et~al., in prep.), we performed our mock X-ray analysis for projections along the semi-major and semi-minor axes of a relaxed cluster, and for a dynamically disturbed cluster. To summarize our findings: \begin{itemize} \item Metal abundance measurements in the outskirts of our test case of a relaxed cluster show that the metal abundance is best recovered to within a factor of two. Due to the presence of a multi-temperature medium, strong metal line contributions will be systematically biased low. In the regime of few photon counts (either from low X-ray surface brightness or low abundance), accurate metal line contributions will be difficult to recover from Poisson noise. \item An incorrect measurement of the abundance in the low-density regions corresponds to fluctuations in the measured projected X-ray temperature to within $10\%$. This temperature bias drives the projected emission measure in a direction opposite to the bias, thus affecting the gas density measurements by as much as $15\%$ in the outskirts out to $\approx3\times R_{200c}$. \item We test the extent of biases that may arise from incorrectly extrapolating the metal abundance measurement in galaxy cluster outskirts by fixing the metal abundance to a factor of 2 and a factor of 5 above and below the true abundance. An abundance assumed a factor of 5 too high biases the gas density measurement and the enclosed total mass by as much as $15\%$ at ${R}_{200c}$. \item In the cluster outskirts, an X-ray temperature that is biased low by $10\%$ corresponds to a decrease in the total mass measurement, an increase in the enclosed gas mass, and therefore a gas mass fraction boosted by $\sim 10\%$. \item Two potential physical mechanisms for the derived sharpening of density gradients in cluster outskirts are the projection and the dynamical state of the cluster.
Our test cases show that these contribute no more than $15\%$ and $30\%$, respectively, to the logarithmic gas density slope. The bias due to an incorrect abundance measurement would affect the density slope by no more than $5\%$. \end{itemize} We note that fully testing the accuracy of X-ray measurements in cluster outskirts requires a comparison with measurements of ICM properties taken directly from the simulation, with 3D filaments removed in a manner self-consistent with our projected filament finder. While the test cases used in this study do not include high-mass clusters, any bias from incorrect abundance measurements would not exceed the bias found in their lower-mass counterparts. High-mass clusters have larger temperatures at $R_{200c}$, resulting in a decreased relative contribution of metal line emissions to the X-ray emission measure. This analysis does not include any contributions from simulated backgrounds, and the effects of instrumental and X-ray backgrounds are additional sources of systematic biases that have not been accounted for in our discussions. Furthermore, our analysis does not take into account non-equilibrium electrons at large radii, which have a lower temperature than the mean ICM gas temperature \citep{ruddandnagai_09}. The temperature bias from non-equilibrium electrons is small in clusters of the mass scales we tested, and we expect non-equilibrium electrons to have less of an effect on the emission measure in the cluster outskirts than biases from metal abundance measurements. Metal lines contribute significantly to the X-ray emission in the low-density cluster outskirts, and the X-ray emission has only a weak dependence on any biases in the temperature at all radii ($\propto T^{1/2}$). In higher-mass clusters, by contrast, non-equilibrium electrons have larger deviations from the mean ICM gas temperature because the equilibration timescale is longer.
The corresponding biases in X-ray measurements of the outskirts of high mass clusters are a topic for future work. Our findings emphasize the need for long exposure times to obtain accurate measurements of metal abundance and X-ray temperature in the low density outskirts of galaxy clusters. Accurate X-ray measurements of the hydrostatic mass require an additional term in the temperature modeling in cluster outskirts to account for the shallow slope at the outermost radii. Additionally, accounting for errors in the metal abundance measurement will require more careful modeling of the thermal and chemical structure in cluster outskirts. In addition to gas inhomogeneities \citep[e.g.,][]{nagaiandlau_11}, biased measurements of the metal abundance in cluster outskirts may partially explain the enhanced gas mass fraction \citep[e.g.,][]{simionescu_etal11} and flattened entropy profiles \citep[e.g.,][]{walker_etal12} observed in cluster outskirts. Errors in metal abundance could produce enhanced emission measurements and a steeper 3D temperature profile, leading to overestimates in gas mass and underestimates in hydrostatic mass. We have identified three potential mechanisms that contribute to density steepening in our mock data, and may affect the observed density slope beyond ${R}_{200c}$: (1) overestimation of the abundance, (2) a line of sight corresponding to the semi-minor axis, and (3) rapid recent mass accretion. For (1), the biases introduced by spectral fitting are not likely to exceed $5\%$. The two physical mechanisms (2) and (3) had respective effects of $15\%$ and $30\%$. The mass accretion rate of a galaxy cluster can influence its DM density \citep{cuesta_etal08,diemer_etal14} and gas profiles (E.T.~Lau~et~al., in prep.).
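The direction of the temperature-bias effects summarized above (a low temperature lowering the hydrostatic mass, raising the inferred gas mass, and boosting the gas mass fraction) can be sketched numerically. This is a minimal illustration under simplifying assumptions (fixed logarithmic profile slopes, bremsstrahlung-only emissivity); the slope values are hypothetical, not the mock-observation data of this work:

```python
import numpy as np

# Hydrostatic mass (constants dropped): M(<r) ~ -T * (dln n/dln r + dln T/dln r)
def hydrostatic_mass(T, dlnn_dlnr, dlnT_dlnr):
    return -T * (dlnn_dlnr + dlnT_dlnr)

T_true = 1.0                     # arbitrary units
slope_n, slope_T = -2.0, -0.5    # hypothetical log-slopes in the outskirts

M_true = hydrostatic_mass(T_true, slope_n, slope_T)
M_bias = hydrostatic_mass(0.9 * T_true, slope_n, slope_T)  # T biased low by 10%

# With the emission measure held fixed and a bremsstrahlung emissivity
# Lambda ~ T**0.5, the inferred density scales as n ~ (EM/Lambda)**0.5 ~ T**-0.25,
# so a low temperature slightly raises the inferred gas mass.
Mgas_ratio = 0.9 ** -0.25

fgas_boost = Mgas_ratio / (M_bias / M_true)
print(f"total-mass change: {M_bias / M_true:.2f}")  # 0.90
print(f"gas-mass change:   {Mgas_ratio:.3f}")       # 1.027
print(f"f_gas boost:       {fgas_boost:.2f}")       # 1.14
```

In this toy version the boost is dominated by the drop in total mass, with only a small contribution from the gas-mass side, consistent with the roughly $10\%$ level quoted in the summary.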
Accurate measurements of gas density and temperature profiles from X-ray observations of cluster outskirts will provide information on the recent mass accretion history of a galaxy cluster, which is one of the dominant systematics in mass-observable relations in cluster cosmology. Deep observations of cluster outskirts will allow us to better understand the growth of galaxy clusters through mass accretion from the surrounding filamentary structures. A careful consideration of all potential biases that enter into ICM measurements in galaxy cluster outskirts will enable us to fully use clusters as cosmological probes. Results from this work will also be useful for future missions, such as {\em SMART-X} \footnote{ \url{http://smart-x.cfa.harvard.edu/} } and {\em Athena+} \footnote{\url{http://athena2.irap.omp.eu/}}, which will begin mapping the denser parts of the filamentary cosmic web.
We present high spatial resolution X-ray spectroscopy of the supernova remnant G292.0+1.8 with {\sl Chandra} observations. The X-ray emitting region of this remnant was divided into 25 $\times$ 25 pixels with a scale of 20$\arcsec$ $\times$ 20$\arcsec$ each. Spectra of 324 pixels were created and fitted with an absorbed one-component non-equilibrium ionization model. With the spectral analysis results we obtained maps of absorbing column density, temperature, ionization age, and the abundances of O, Ne, Mg, Si, S, and Fe. The abundances of O, Ne and Mg show tight correlations with each other over a range of about two orders of magnitude, suggesting that they all come from explosive C/Ne burning. Meanwhile, the abundances of Si and S are also well correlated, indicating them to be the ashes of explosive O-burning or incomplete Si-burning. The Fe emission lines are not prominent across the whole remnant, and the Fe abundance is significantly reduced, indicating that the reverse shock may not have propagated to the Fe-rich ejecta. Based on the abundances of O, Ne, Mg, Si and Fe relative to Si, we suggest a progenitor mass of $25-30~M_{\odot}$ for this remnant.

G292.0+1.8, also known as MSH 11-54, is a bright Galactic supernova remnant (SNR). It is relatively young, with an age of $\le 1600$~yr (Murdin \& Clark 1979). The detection of strong O and Ne lines in the optical spectrum (Goss et al. 1979; Murdin \& Clark 1979) classified G292.0+1.8 as one of only three known O-rich SNRs in the Galaxy, the others being Cassiopeia A and Puppis A. G292.0+1.8 was first detected in X-rays with {\sl HEAO} 1 (Share et al. 1978).
With the increasing sensitivity and resolution of subsequent X-ray satellites, finer structures were revealed, such as the central bar-like features (Tuohy, Clark, \& Burton 1982), metal rich ejecta around the periphery, and thin filaments with normal composition centered on and extending nearly continuously around the outer boundary (Park et al. 2002). The equivalent width maps of O, Ne, Mg, and Si clearly show regions with enhanced metallicity of these elements (Park et al. 2002). G292.0+1.8 is bright in O, Ne, Mg and Si and weaker in S and Ar with little Fe, suggesting that the ejecta are strongly stratified by composition and that the reverse shock has not propagated to the Si/S- or Fe-rich zones (Park et al. 2004; Ghavamian et al. 2012). Meanwhile, the central barlike belt has normal chemical composition, suggesting shocked dense circumstellar material as its origin (Park et al. 2004; Ghavamian, Hughes \& Williams 2005). The detection of the central pulsar (PSR J1124$-$5916) and its wind nebula (Hughes et al. 2001, 2003; Camilo et al. 2002; Gaensler \& Wallace 2003; Park et al. 2007) establish G292.0+1.8 as a core-collapse SNR. Several estimates of the progenitor mass have been proposed. Hughes \& Singh (1994) suggested a progenitor mass of $\sim 25~M_{\odot}$ by comparing the derived element abundances of O, Ne, Mg, Si, S and Ar with those predicted by numerical nucleosynthesis calculations, based on the {\sl EXOSAT} data. Gonzalez \& Safi-Harb (2003) estimated it to be $\sim 30-40~M_{\odot}$ with the same method from {\sl Chandra} observations. In this paper, we present spatially resolved spectroscopy of G292.0+1.8, using {\sl Chandra} ACIS observations. In previous work, several typical regions were picked out and their spectra analyzed (e.g. Gonzalez \& Safi-Harb 2003, hereafter GS03; Vink et al. 2004; Lee et al. 2010), or equivalent width maps were given (Park et al. 2002). We, for the first time, give a systematic spectroscopy of this SNR region by region.
In Sections 2 \& 3, we present the observational data and results. In Section 4, we discuss the nucleosynthesis and progenitor mass. A summary is given in Section 5.

\subsection{Nucleosynthesis} The O, Ne, and Mg abundances show tight correlations with each other over a range of about two orders of magnitude. This is strong evidence that they all come from explosive C/Ne burning. Meanwhile, such a correlation is also found between the Si and S abundances, suggesting them to be ashes of explosive O-burning and incomplete Si-burning in the core-collapse supernova. This kind of correlation pattern for elemental abundances is very similar to Cas A (Yang et al. 2008), and is also consistent with the (explosive) nucleosynthesis calculations for massive stars (Woosley, Heger \& Weaver 2002; Woosley \& Janka 2005). The difference between G292.0+1.8 and Cas A is that the former SNR is oxygen-rich while the latter is Si-rich. In Cas A, the Fe abundance is positively correlated with that of Si when the Si abundance is lower than 3 solar abundances, and a negative correlation appears when the Si abundance is higher (Yang et al. 2008). It is suggested that the Si-rich regions are the ejecta of incomplete explosive Si-burning mixed with O-burning products, and that the regions with lower Si abundance might be dominated by the shocked circumstellar medium (CSM). In G292.0+1.8, we can only find a weak positive correlation between the Si and Fe abundances in the whole remnant. This is not surprising because no prominent Fe lines have been detected in most of the ``pixels'' into which we divided G292.0+1.8, and thus the Fe abundance cannot be well constrained. In the meantime, both Si and Fe are not enriched in the SNR, with their abundances smaller than 1 solar abundance. This, again, confirms that the reverse shock may not have propagated into the Fe-rich ejecta (Park et al. 2004; Ghavamian et al. 2009, 2012).
\subsection{Progenitor Mass} Theoretical calculations suggest that for core-collapse supernovae, different progenitor masses will yield very different abundance patterns (Woosley \& Weaver 1995, etc). Gonzalez \& Safi-Harb (2003) suggested that the progenitor mass of G292.0+1.8 could be around $30-40~M_{\odot}$ based on a comparison of the observed abundance ratios with theoretical values. Their abundance ratios were taken from several ejecta-dominated regions within this SNR. Here we give the emission measure weighted average abundances for O, Ne, Mg, Si, S, and Fe over the whole remnant and their rms scatter ($Z$ and $\sigma_{Z}$ in Table~\ref{abundance_ratio}). To compare with the GS03 work, we also give the abundance ratio of all the elements with respect to Si and the corresponding rms scatter ($Z/Z_{Si}$ and $\sigma_{Z/Z_{Si}}$ in Table~\ref{abundance_ratio}). Taking GS03's observational values and the theoretical values they employed, we overplot our measurements in Figure~\ref{G292_progenitor}. The abundance ratios measured here are systematically lower than those from GS03. One might wonder whether the reason is that the shocked CSM regions are included in our calculation. Considering this, we selected the ejecta-dominated regions by requiring the O abundance to be larger than 1.0, and calculated the corresponding ratios again (Table~\ref{abundance_ratio}, $(Z/Z_{Si})^{\prime}$). We can see that $(Z/Z_{Si})^{\prime}$ is generally consistent with $Z/Z_{Si}$ within the scatter. This is not surprising for two reasons. One is that the Si abundance is tentatively correlated with those of the other elements. The other is that the larger abundance values contribute much more to the final averages, so that even though the average abundances are much larger, their ratios tend to be similar. Our measurements suggest a progenitor mass for G292.0+1.8 of $25-30~M_{\odot}$, smaller than the $30-40~M_{\odot}$ suggested by GS03.
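The emission-measure-weighted averaging used here amounts to $\bar{Z}=\sum_i {\rm EM}_i Z_i / \sum_i {\rm EM}_i$ over the 324 pixels, with the rms following from the weighted second moment. A sketch with mock pixel values (the emission measures and abundances below are randomly generated for illustration, not the measured ones):

```python
import numpy as np

rng = np.random.default_rng(0)
em = rng.uniform(0.5, 2.0, size=324)      # mock emission measures for 324 pixels
z_o = rng.lognormal(mean=0.0, sigma=0.6, size=324)  # mock O abundances (solar units)

z_mean = np.sum(em * z_o) / np.sum(em)    # EM-weighted mean abundance
z_rms = np.sqrt(np.sum(em * (z_o - z_mean) ** 2) / np.sum(em))  # weighted rms

print(f"<Z_O> = {z_mean:.2f} +/- {z_rms:.2f}")
```

Because the weighting is linear, a few bright (high-EM) pixels with large abundances can dominate the average, which is why the ratios $Z/Z_{Si}$ change little when low-abundance pixels are excluded.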
\begin{figure*} \includegraphics[width=\textwidth,clip]{ms1775fig8.eps} \caption{Average abundance ratios and their rms scatter, compared with Gonzalez \& Safi-Harb (2003) and theoretical values.} \label{G292_progenitor} \end{figure*}
We report the serendipitous discoveries of companion galaxies to two high-redshift quasars. \qsoafull\ is a $z=4.79$ quasar included in our recent survey of faint quasars in the SDSS Stripe 82 region. The initial MMT slit spectroscopy shows excess Ly$\alpha$ emission extending well beyond the quasar's light profile. Further imaging and spectroscopy with LBT/MODS1 confirms the presence of a bright galaxy ($i_{\rm AB} = 23.6$) located 2\arcsec\ (12~kpc~projected) from the quasar with strong Ly$\alpha$ emission (EW$_0 \approx 100$~\AA) at the redshift of the quasar, as well as faint continuum. The second quasar, \qsobfull\ ($z=6.25$), is included in our recent HST SNAP survey of $z\sim6$ quasars searching for evidence of gravitational lensing. Deep imaging with ACS and WFC3 confirms an optical dropout $\sim4.5$~mag fainter than the quasar ($Y_{\rm AB}=25$) at a separation of 0.9\arcsec. The red $i_{775}-Y_{105}$ color of the galaxy and its proximity to the quasar (5~kpc~projected if at the quasar redshift) strongly favor an association with the quasar. Although it is much fainter than the quasar, it is remarkably bright when compared to field galaxies at this redshift, while showing no evidence for lensing. Both systems may represent late-stage mergers of two massive galaxies, with the observed light for one dominated by powerful ongoing star formation and for the other by rapid black hole growth. Observations of close companions are rare; if major mergers are primarily responsible for high-redshift quasar fueling, then the phase when progenitor galaxies can be observed as bright companions is relatively short.

The discovery of quasars as distant as $z\sim7$ already powered by black holes (BHs) with masses $\sim10^9~M_{\sun}$ \citep{Fan+01PI,Mortlock+11,Venemans+13} presents a challenge for early structure formation models.
First, some physical process is needed to generate seed BHs at even higher redshifts \citep[e.g.,][]{Volonteri10,Haiman13}. Next, regardless of the nature of the seeds, the initial BHs must grow rapidly in order to reach a billion solar masses or more in less than a Gyr. $\Lambda$CDM models generally predict that this high redshift growth occurs by gas accretion rather than by a succession of black hole mergers \citep{Li+07,DiMatteo+08,TH09}, requiring both an enormous fuel supply and a mechanism for driving the gas from intergalactic scales down to the central regions of galaxies where it can be accreted onto the BH. Mergers of gas-rich disk galaxies have been proposed as a solution to this problem \citep[e.g.,][]{KH00,Hopkins+06,Li+07,Hopkins+08}. In the merger scenario the remnant typically passes through an obscured starburst phase followed by a luminous, unobscured quasar phase. Merger dynamics provide a natural mechanism for shedding angular momentum and driving gas to central regions \citep{Hernquist89}, plausibly accounting for the continuous fuel supply required to grow high-redshift quasars. While the merger hypothesis is consistent with many observations, direct evidence for merger activity associated with individual quasars is rare. This is likely because the observational signatures of a recent merger \citep[tidal tails, gas shells, etc.; see, e.g.,][]{Bennert+08} are short-lived and faint, thus easily overwhelmed by the luminous quasar. Nonetheless, early Hubble Space Telescope (HST) observations showed that at least some nearby luminous quasars show evidence for recent interactions, with a significant fraction having close ($<10$~kpc) companion galaxies \citep{Bahcall+97}. In addition, the fraction of highly dust-obscured quasars showing evidence of recent merger activity is close to unity \citep{ULB08}, in agreement with the general outline of the merger scenario.
\begin{figure*} \epsscale{1.15} \plotone{j0256_griz} \caption{SDSS deep Stripe 82 $griz$ images of \qsoa. Images are 20\arcsec\ on a side with a pixel scale of 0\farcs{45} and are oriented with North up and East to the left. The companion galaxy is visible 2\arcsec\ NE of the quasar in the $i$-band image. Two additional galaxies are detected to the NW and SE at distances of $\sim5$\arcsec\ from the quasar and are likely at low redshift given their relatively blue colors. \label{fig:J0256griz} } \end{figure*} Mergers are more likely to occur in overdense environments where the rate of galactic encounters is enhanced. Even if direct evidence for recent merging activity in quasar host galaxies is elusive, circumstantial evidence in the form of local overdensities may argue in favor of merger-driven growth. For example, in the merger-tree-based hydrodynamical simulation of \citet{Li+07}, a single $z\sim6.5$ quasar is assembled from a succession of seven major mergers, so that the quasar is surrounded by nearby ($\la20~$kpc) companions more or less continuously from $z\sim12$ to $z\sim7$ \citep[see Fig.~6 of][]{Li+07}. More broadly, the expectation that exceedingly rare high-redshift quasars would occupy the most highly biased regions at their observed epoch has motivated many surveys of their environments on few Mpc scales \citep{Stiavelli+05,Willott+05,Zheng+06,Kashikawa+07,Kim+09,Utsumi+10, Benitez+13,Husband+13}, the results of which have been inconclusive, with some quasar fields having apparent overdensities of candidate associated galaxies, and some not. Some recent theoretical models even suggest that high-redshift quasars do {\em not} inhabit the strongest large-scale overdensities at high redshift \citep{Fanidakis+13}. A particularly well-studied case of prodigious merger activity associated with a luminous, high-redshift quasar is \br\ at $z=4.7$. 
One of the first detections of an FIR-hyperluminous, high-redshift sub-mm galaxy (SMG) resulted from observations of the host galaxy of this quasar \citep{Isaak+94}. Subsequently, an optically undetected SMG as bright as the quasar host at sub-mm wavelengths was discovered a mere 4\arcsec\ from the quasar \citep{Omont+96}, as was a pair of Ly$\alpha$-emitting galaxies within $\sim2$\arcsec\ \citep{HME96,Petitjean+96}. \br\ has been dubbed the ``archetypal'' system of a close group of galaxies leading to mergers and fueling both SMBH formation and prodigious star formation \citep{Carilli+13}. It is curious, however, that observations of systems like \br\ are rare, if indeed group-scale mergers are a key pathway to forming high-redshift quasars. In this paper we report the discovery of close companions ($\la10$~kpc) of two high-redshift quasars. The first example is a companion galaxy to the $z\sim5$ quasar \qsoafull~(hereafter \qsoa). This galaxy is located 2\arcsec\ from the quasar and has strong Ly$\alpha$ emission at the same redshift. It bears many similarities to the Ly$\alpha$-emitting companions of \br. We present imaging and spectroscopy of this system in \S\ref{sec:j0256}. The second quasar, \qsobfull~(hereafter \qsob), was included in an HST SNAP survey of $z\sim6$ quasars searching for evidence of gravitational lensing. A galaxy 0\farcs{9} from the quasar was detected in the HST image. In \S\ref{sec:j0050} we present strong evidence from HST imaging that this galaxy is almost certainly associated with the quasar. In \S~\ref{sec:discuss} we consider the physical origin of the observed emission from both galaxies, discuss these observations in the context of the merger hypothesis, and draw rough conclusions on the incidence of close companions for similar quasars. We present brief conclusions and speculate on the nature of companion galaxies in \S\ref{sec:conclude}.
All magnitudes are on the AB system \citep{OkeGunn} and have been corrected for Galactic extinction using the \citet{SFD98} extinction maps unless otherwise noted. We adopt a $\Lambda$CDM cosmology with parameters $\Omega_\Lambda=0.727$,~$\Omega_m=0.273$,~ $\Omega_{\rm b}=0.0456$,~and~$H_0=70~{\rm km}~{\rm s}^{-1}~{\rm Mpc}^{-1}$ \citep{Komatsu+11} when needed. \begin{figure} \epsscale{1.15} \plotone{j0256_swirc} \caption{MMT SWIRC $J$ and $H$ images of \qsoa. Images are 5\arcsec\ on a side with a pixel scale of 0\farcs{15} and are oriented with North up and East to the left. The position of the companion galaxy is marked with an arrow. The quasar is detected in both images but the companion galaxy is not. \label{fig:J0256swirc} } \end{figure}

\label{sec:conclude} We have identified two galaxies in close proximity to high-redshift quasars. The first object is located 1\farcs{8} from a $z=4.9$ quasar (\qsoafull) and was discovered serendipitously based on excess Ly$\alpha$ emission present in the spectroscopic slit during the course of an MMT survey of $z\sim5$ quasars. Imaging and moderate depth optical spectroscopy with the LBT confirms that the galaxy is bright ($i=23.6$) and has strong Ly$\alpha$ emission at the quasar redshift as well as UV continuum emission. The second companion galaxy is $<1$\arcsec\ from a highly luminous $z=6.25$ quasar (\qsobfull). HST imaging demonstrates that it is a relatively bright ($Y=25$) optical dropout and highly likely to be at the same redshift as the quasar. Both galaxies are among the most luminous galaxies known at high redshift, with $M_{1500}=-22.7$ for the \qsoa\ companion and $M_{1500}=-21.8$ for the \qsob\ companion. We have considered possible sources for their emission, including internal AGN or fluorescence from the nearby quasars, and for both objects conclude that the observed emission is likely dominated by star formation activity within the galaxies.
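The projected separations quoted in this work (about 12 kpc for $\sim$2\arcsec\ at $z=4.79$, and 5 kpc for 0.9\arcsec\ at $z=6.25$) follow from the angular diameter distance in the adopted cosmology. A sketch using only the standard flat-$\Lambda$CDM relations (an illustration, not code from the paper):

```python
import numpy as np

H0, Om, OL = 70.0, 0.273, 0.727    # adopted cosmological parameters
c = 299792.458                     # speed of light, km/s
D_H = c / H0                       # Hubble distance, Mpc

def kpc_per_arcsec(z):
    """Physical kpc subtended by 1 arcsec at redshift z (flat universe)."""
    zs = np.linspace(0.0, z, 4001)
    integrand = 1.0 / np.sqrt(Om * (1 + zs) ** 3 + OL)
    # Trapezoidal comoving-distance integral D_C = D_H * int dz'/E(z')
    d_c = D_H * np.sum((integrand[1:] + integrand[:-1]) * np.diff(zs)) / 2.0
    d_a = d_c / (1 + z)            # angular diameter distance, Mpc
    return d_a * 1e3 * np.pi / (180.0 * 3600.0)

print(f"{1.8 * kpc_per_arcsec(4.79):.1f} kpc")  # ~11.9 kpc for 1.8 arcsec
print(f"{0.9 * kpc_per_arcsec(6.25):.1f} kpc")  # ~5.2 kpc for 0.9 arcsec
```

Both values reproduce the rounded separations stated in the text.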
The coincidence of highly star-forming galaxies near luminous quasars is broadly consistent with the scenario where high-redshift quasars are fueled by major mergers. This study is not the first to find close companion galaxies to high-redshift quasars. The first galaxy to have a reported spectroscopic redshift $z>2$ was a companion to the $z=3.2$ quasar QSO PKS 1614+051 with a separation of 7\arcsec\ \citep{Djorgovski+85}. This galaxy was selected from narrow-band Ly$\alpha$ imaging of the quasar, and was the only instance of a companion among the five objects surveyed. The $z=4.7$ quasar \br\ was found to have multiple companions within $\sim5$\arcsec, two discovered via Ly$\alpha$ \citep{HME96,Petitjean+96} and one in the sub-millimeter \citep{Omont+96}. While searching for CO(2-1) line emission from the host galaxies of five quasars at $z\sim6$, \citet{Wang+11} found one clear detection of a companion galaxy 1\farcs{2} from a $z=6.18$ quasar, and a marginal $\sim2\sigma$ detection of extended CO emission $\sim$0\farcs{8} from a $z=5.85$ quasar. These studies either targeted quasar fields under the expectation they would point to a local galaxy overdensity, or targeted quasar hosts while being sensitive to any nearby companions. One can then ask why companion galaxies are not observed more frequently, given the observational attention paid to high-redshift quasars and the hypothesis that major mergers play a key role in triggering high-redshift quasars. One explanation is that the merging timescales are brief, so that observing quasar-galaxy pairs would be rare. Indeed, the ``blowout'' phase that leads to an unobscured quasar may occur only after the progenitor galaxies have fully coalesced \citep[e.g.,][]{Hopkins+06}.
In the scenario where successive galactic encounters fuel the growth of high-redshift quasars, the duty cycles of unobscured activity for both star formation (in the progenitor galaxies) and black hole growth may be sufficiently short (relative to the quasar lifetime) that the probability of observing the two simultaneously at rest-frame UV/optical wavelengths is low. Larger, more sensitive surveys at FIR wavelengths with ALMA would better address this question. At rest-UV wavelengths, \citet{Jiang+13} found that $\sim50$\% of bright $z\sim6$ galaxies show evidence for recent mergers or interactions in HST observations. However, observing such faint features in luminous quasar host galaxies would be exceedingly difficult. A final speculation motivated by our observations is that the proximity of a UV-bright galaxy to a rapidly growing supermassive black hole at high redshift may be indicative of a form of {\em positive} feedback. \citet{Dijkstra+08} outline a model where massive seed black holes for $z\sim6$ quasars form at even higher redshifts ($z\sim10$) from close pairs ($\la 10$kpc) of dark matter halos. In this model, one halo hosts a bright star-forming galaxy with a strong Lyman-Werner band flux that photo-dissociates $H_2$ molecules in the neighboring halo, preventing it from cooling below the atomic cooling threshold. The result is direct collapse of the neighboring halo gas into a massive ($\sim10^{4\mbox{-}6}~M_{\sun}$) black hole, providing a seed mechanism for supermassive black holes. However, the original ionizing source at $z\sim10$ would likely coalesce with the quasar host by $z\sim6$, and not explain our observations in which a bright companion galaxy is contemporaneous with an already $\ga10^8~M_\sun$ black hole. 
On the other hand, observing a companion galaxy to a quasar well after the initial formation of the black hole is consistent with the picture where multiple mergers are needed to grow high-redshift quasars \citep[e.g.,][]{Li+07}, drawing at least some connection between the model of \citet{Dijkstra+08} and our observations. \textbf{Acknowledgements} The authors would like to thank George Becker for providing the ESI spectrum of \qsob. IDM would like to thank Desika Narayanan and Dan Stark for helpful discussions. Support for programs \#12184 and \#12493 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. IDM and XF acknowledge additional support from NSF grants AST 08-06861 and AST 11-07682. The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona University System; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University, and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota, and University of Virginia. This paper used data obtained with the MODS spectrographs built with funding from NSF grant AST-9987045 and the NSF Telescope System Instrumentation Program (TSIP), with additional funds from the Ohio Board of Regents and the Ohio State University Office of Research. The MMT Observatory is a joint facility of the University of Arizona and the Smithsonian Institution. {\it Facilities:} \facility{MMT (Red Channel spectrograph, SWIRC)}, \facility{LBT (MODS)}, \facility{HST (ACS,WFC3)}, \facility{SDSS}
Double-lobed radio galaxies a few 100s of kpc in extent, like Cygnus A, are common at redshifts of 1 to 2, arising from some 10 per cent of the most powerful Active Galactic Nuclei (AGN). At higher redshifts they are rare, with none larger than a few 10s of kpc known above redshift $z\sim4$. Recent studies of the redshift evolution of powerful-jetted objects indicate that they may constitute a larger fraction of the AGN population above redshift 2 than appears from a simple consideration of detected GHz radio sources. The radio band is misleading as the dramatic $(1+z)^4$ boost in the energy density of the Cosmic Microwave Background (CMB) causes inverse Compton scattering to dominate the energy losses of relativistic electrons in the extended lobes produced by jets, making them strong X-ray, rather than radio, sources. Here we investigate limits to X-ray lobes around two distant quasars, ULAS\,J112001.48+064124.3 at $z = 7.1$ and SDSS\,J1030+0524 at $z=6.3$, and find that powerful jets could be operating yet be currently undetectable. Jets may be instrumental in the rapid build-up of a billion-$\Msun$ black hole at a rate that violates the Eddington limit.

Extended X-ray emission produced by inverse Compton scattering of CMB photons in distant giant radio sources has been considered several times (Felten \& Rees 1969; Schwartz 2002a; Celotti \& Fabian 2004; Ghisellini et al 2013). Relativistic electrons with Lorentz factor $\Gamma\sim10^3$ upscatter CMB photons into the soft X-ray band observed around 1 keV. The boost factor in CMB energy density compensates for surface brightness dimming with redshift. The X-ray phase is also expected to last even longer than the radio phase (Fabian et al 2009; Mocz et al 2011). Many examples (e.g.
Carilli et al 2002; Scharf et al 2003; Overzier et al 2005; Blundell et al 2006; Erlund et al 2006; Laskar et al 2010; Smail et al 2012; Smail \& Blundell 2013) of double-lobed sources simultaneously luminous in both radio and X-rays are known across a wide range in redshift up to $z\sim4$. The highest redshift example of a $>100\kpc$ scale double-lobed structure is 4C23.56 (Blundell \& Fabian 2011) at $z = 2.5$. When inverse Compton losses are considered, the dearth of large radio galaxies at higher redshifts does not necessarily indicate a lack of high redshift jetted sources (Mocz et al 2013; Ghisellini et al 2014). At $z>7$ the energy density of the CMB is over 4000 times greater than at the present epoch and very likely far exceeds that of the magnetic field in any lobes produced by a jet, making synchrotron radio emission weak and undetectable at GHz frequencies. Hard X-ray emission detected from relativistically-beamed jets by the Burst Alert Telescope (BAT, onboard Swift) implies that the ratio of powerful radio-loud AGN to radio-quiet AGN strongly increases with redshift (Ghisellini et al 2013, 2014). The number density obtained when beaming corrections are accounted for nearly matches that of all massive dark matter haloes capable of hosting billion-solar mass black holes above redshift 4 (Ghisellini et al 2013). It is therefore possible that X-ray lobed sources produced by powerful jets exist, and could even be common, at high redshift. In other words, high redshift quasars may have powerful jets. To test whether powerful high redshift quasars generate powerful jets, we searched for linearly extended, diametrically-opposed X-ray lobes in archival X-ray images of two distant quasars: the most extreme object, ULAS\,J1120+0641 at $z=7.1$, and SDSS\,J1030+0524 at $z=6.3$. Both have relatively deep XMM exposures (Page et al 2014 and Farrah et al 2004, respectively).
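The quoted CMB boost is easy to verify: $u_{\rm CMB}(z)=aT_0^4(1+z)^4$, so at $z=7.1$ the boost is $(8.1)^4\approx4300$, and the magnetic field with the same energy density ($u_B=B^2/8\pi$) is of order $200~\mu$G, far above plausible lobe fields, which is why inverse Compton losses dominate. A quick numerical check using only standard physical constants:

```python
import numpy as np

a_rad = 7.5657e-15   # radiation constant, erg cm^-3 K^-4
T0 = 2.725           # present-day CMB temperature, K

def u_cmb(z):
    """CMB energy density at redshift z, erg cm^-3."""
    return a_rad * T0 ** 4 * (1 + z) ** 4

boost = u_cmb(7.1) / u_cmb(0.0)             # (1+z)^4 = 8.1^4
B_equiv = np.sqrt(8 * np.pi * u_cmb(7.1))   # field with u_B = u_CMB, gauss

print(f"boost at z=7.1: {boost:.0f}")                     # 4305, i.e. 'over 4000'
print(f"equivalent B:   {B_equiv * 1e6:.0f} microgauss")  # ~212 muG
```

Equivalently, $B_{\rm CMB}\simeq3.2(1+z)^2~\mu$G; any lobe field below this value loses energy predominantly to inverse Compton scattering rather than synchrotron radiation.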
Schwartz (2002b) has previously searched for X-ray jets around 3 $z\sim6$ quasars using short ($\sim 10$\,ks) Chandra images without any conclusive result.

We have shown that inverse Compton lobes produced by powerful jets powered by the central quasar could be currently undetectable, despite the fact that the jet power rivals that radiated by the quasar. Jets could be the result of the extraction of the rotational energy of the black hole (Blandford \& Znajek 1977), but the accretion disk must amplify the magnetic field needed for the extraction of the black hole spin energy. This can correspond to the major energy release of the accretion process which, it is widely assumed, powers the quasar. If substantial energy is extracted from the accretion disc magnetically, then the total power of the source could exceed the Eddington limit, a constraint which applies only to the power in photons (see discussion in Ghisellini et al 2013). A billion $\Msun$ black hole can thereby grow from stellar mass by accretion before redshift 7.1. The upper limits to any X-ray lobes of ULAS\,J1120+0641 (and SDSS\,J1030+0524) imply that the process of massive black hole growth at large redshift could include powerful-jetted outflows. Much of the radiation is due to inverse-Compton upscattering of CMB photons into the X-ray band. The enormous mechanical energy of jetted lobes would represent a formidable and fierce form of kinetic feedback on the surrounding gas (Fabian 2012). This could explain why the galaxy hosts of quasars at $z>3$ are compact (Szomoru et al 2013), and why their group and cluster gas have more energy than is explainable by gravitational infall alone (Wu et al 2000; McCarthy et al 2012).
Powerful jets are a considerable source of energy, magnetic fields and cosmic rays to the intergalactic medium\footnote{See Gopal-Krishna et al (2001) and Mocz et al (2011) for discussion on the impact of jetted lobes on the intergalactic medium at $1<z<4$.} and may therefore play a role in the reionization of the Universe at these early cosmic times. Our results suggest that the high redshift sky could be criss-crossed by X-ray emitting jets and lobes from growing massive black holes. Deeper X-ray imaging with Chandra and, in time, Athena can test this possibility.
We have investigated some features of the density and arrival time distributions of Cherenkov photons in extensive air showers using the CORSIKA simulation package. The main thrust of this study is to see the effect of hadronic interaction models on the production pattern of Cherenkov photons with respect to distance from the shower core. Such studies are very important in ground based $\gamma$-ray astronomy for an effective rejection of the huge cosmic ray background, where the atmospheric Cherenkov technique is being used extensively within the energy range of a few hundred GeV to a few TeV. We have found that for all primary particles, the density distribution patterns of Cherenkov photons follow a negative exponential function with different coefficients and slopes depending on the type of primary particle, its energy and the interaction model combination. The arrival time distribution patterns of Cherenkov photons follow a function of the form $t (r) = t_{0}e^{\Gamma/r^{\lambda}}$, with different values of the function parameters. There is no significant effect of hadronic interaction model combinations on the density and arrival time distributions for $\gamma$-ray primaries. However, for hadronic showers, the effects of the model combinations are significant under different conditions.

The Atmospheric Cherenkov Technique (ACT) is being used extensively to detect the $\gamma$-rays emitted by celestial sources using ground-based telescopes within the energy range of a few hundred GeV to a few TeV. This technique is based on the effective detection of Cherenkov photons emitted by the relativistic charged particles present in the Extensive Air Showers (EASs) initiated by the primary $\gamma$-rays in the atmosphere \cite{Hoffman, Weekes, Acharya}. It is worthwhile to mention that the celestial sources which emit $\gamma$-rays also emit Cosmic Rays (CRs).
CRs, however, being charged particles, are deflected by intergalactic magnetic fields and hence reach us isotropically, losing the directional information of their source(s). Since $\gamma$-rays are neutral, by detecting them we can pinpoint the locations of such astrophysical sources. As the ACT is an indirect method, detailed Monte Carlo simulation studies of atmospheric Cherenkov photons have to be carried out to estimate the energy of the incident $\gamma$-ray. It is also necessary to reject the huge CR background, as CRs also produce EASs in the atmosphere like $\gamma$-rays do, but with a slight difference: the EASs originating from primary $\gamma$-rays are purely electromagnetic in nature, whereas those due to CRs are a mixture of electromagnetic and hadronic cascades. Many extensive studies have already been carried out on the arrival time as well as the density distributions of atmospheric Cherenkov photons in EASs at various observation levels, in both the high and low energy regimes, using available detailed simulation techniques \cite{Badran, Versha}. As such simulation studies are interaction-model dependent, a model-dependent study is also necessary for a reliable result. Although studies of this type have been carried out extensively in the past, there are not many applicable particularly to high altitude observation levels. Keeping this point in mind, in this work we have studied the density and arrival time distributions of Cherenkov photons in EASs of $\gamma$ and CR primaries, at different energies and at a high altitude observation level, using the different low and high energy hadronic interaction models available in the present version of the CORSIKA simulation package \cite{Heck}. CORSIKA is a detailed Monte Carlo simulation package for studying the evolution and properties of extensive air showers in the atmosphere.
It allows the simulation of interactions and decays of nuclei, hadrons, muons, electrons and photons in the atmosphere up to energies of some 10$^{20}$eV. For the simulation of hadronic interactions, CORSIKA presently offers seven high energy and three low energy hadronic interaction models. It uses the EGS4 code \cite{Nelson} for the simulation of the electromagnetic component of the air shower \cite{Heck}. This paper is organized as follows. In the next section, we discuss our simulation process in detail. Section III is devoted to the analysis of the simulated data and the results of that analysis. We summarize our work with conclusions and a future outlook in Section IV. | In view of its importance to the ACT and the lack of sufficient works applicable to high altitude observation sites, we have made an elaborate study of the density and arrival time distributions of Cherenkov photons in EASs using the CORSIKA 6.990 simulation package \cite{Heck}. A summary of this study and its conclusions is as follows: the lateral density and arrival time distributions of Cherenkov photons follow a negative exponential function and a function of the form $t (r) = t_{0}e^{\Gamma/r^{\lambda}}$, respectively, for all primary particles, energies and model combinations. As these functions' parameters differ, the geometries of these distributions obviously differ as well, depending upon the energy and mass of the primary particle. These parametrisations show that analytical descriptions of the lateral density and arrival time distributions of Cherenkov photons are possible within some uncertainty. A full-scale parametrization as a function of energy and shower angle would be useful for the analysis of $\gamma$-ray telescope data because it would help to disentangle the $\gamma$-ray showers from the hadronic showers at a given observation level.
The density and arrival time distributions of Cherenkov photons as a function of core distance are almost independent of hadronic interaction models for the $\gamma$-ray showers, whereas those for the proton and iron showers depend on the hadronic interaction models according to the type of models (low and high energy), the energy of the primary particle and the distance from the shower core. In most cases the model dependence is significant for the iron showers. This systematic effect of the hadronic interaction model dependence has to be taken into account when assessing the effectiveness of background rejection for a $\gamma$-ray telescope. Moreover, the study of shower-to-shower fluctuations ($\Delta_{pm}$) of the Cherenkov photons' density and arrival time makes clear that these fluctuations are almost independent of hadronic interaction models, but depend on the energy and type of the primary particle, the number of shower samples used for the analysis and the location of the detectors. These are very important inferences to be taken into account when estimating the systematic uncertainties in a real $\gamma$-ray astronomy observation. The energy dependent variation of the Cherenkov photon density shows that, to obtain equivalent numbers of Cherenkov photons from different primary particles, the energy of the particles must be increased several-fold with increasing primary mass. This explains why we have chosen different specific energies for different primaries, as mentioned in Sec.II. Similarly, from the study of the altitude effect, i.e. the comparison of the lateral distributions of Cherenkov photons at two observation levels, we can conclude that at higher observation levels it is possible to detect low energy $\gamma$-ray signals from a source, and to do $\gamma$-ray astronomy with a much smaller telescope system than at a lower observation level.
As mentioned above, the full-scale parametrisation of the density and arrival time distributions of Cherenkov photons for different primaries is important for the analysis of $\gamma$-ray observation data, so in the future we plan to perform such a full parametrization for the HAGAR telescope site for the effective analysis of the HAGAR telescope \cite{Versha1} data. Again, since in this work we have considered energies only up to 10 TeV, we will extend our future work up to 100 TeV, the more relevant energy range of $\gamma$-ray astronomy. Furthermore, as the pattern of the angular distribution of Cherenkov photons for different primaries is another crucial parameter for the separation of hadron showers from $\gamma$-ray showers, we will also take up this issue in a future study. | 14 | 4 | 1404.2068 |
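The two functional forms used above can be illustrated numerically. The following sketch generates a synthetic, noiseless arrival-time profile with the parametrization $t (r) = t_{0}e^{\Gamma/r^{\lambda}}$ and recovers the parameters with a least-squares fit; the parameter values and the core-distance range are hypothetical placeholders chosen for illustration, not values taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def arrival_time(r, t0, gamma, lam):
    """Arrival-time parametrization t(r) = t0 * exp(gamma / r**lam)."""
    return t0 * np.exp(gamma / r**lam)

# Synthetic, noiseless profile with hypothetical parameters
# (t0 in ns, r in metres; NOT values from the paper).
r = np.linspace(20.0, 300.0, 60)
t = arrival_time(r, t0=4.0, gamma=-8.0, lam=0.5)

# Recover the parameters by least squares from a rough initial guess.
popt, _ = curve_fit(arrival_time, r, t, p0=[3.0, -5.0, 0.6])
t0_fit, gamma_fit, lam_fit = popt
```

An analogous fit with a negative exponential, $\rho(r) = c_{0}e^{-\beta r}$, would reproduce the lateral density parametrization in the same way.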
1404 | 1404.2318_arXiv.txt | An extended $\gamma$-ray excess has been found in the Fermi-LAT data in the Galactic center region. The sources proposed to be responsible for the excess include dark matter annihilation or an astrophysical alternative, a population of millisecond pulsars (MSPs). Whether or not the MSP scenario can explain the data self-consistently is debated in the literature, and the answer has very important implications for the detection of particle dark matter. In this work we study the MSP scenario in detail, based on the properties of the MSPs detected by Fermi-LAT. We construct a model of Milky Way disk-component MSPs which can reproduce the $\gamma$-ray properties of the observed Fermi-LAT MSPs, and derive the intrinsic luminosity function of the MSPs. The model is then applied to a bulge population of MSPs. We find that the extended $\gamma$-ray excess can be well explained by the bulge MSPs without violating the flux distribution of MSPs detectable by Fermi-LAT. The spatial distribution of the bulge MSPs, as implied by the distribution of low mass X-ray binaries, follows an $r^{-2.4}$ profile, which is also consistent with the $\gamma$-ray excess data. We conclude that the MSP model can explain the Galactic center $\gamma$-ray excess self-consistently, satisfying all the current observational constraints. | It has been reported that there is an extended $\gamma$-ray excess in the Galactic center (GC) region in the Fermi Large Area Telescope (Fermi-LAT) data \cite{2009arXiv0910.2998G,2009arXiv0912.3828V,2011PhLB..697..412H, 2011PhLB..705..165B,2012PhRvD..86h3511A,2013PhRvD..88h3521G, 2014arXiv1402.4090A,2014arXiv1402.6703D}.
The spatial distribution of the extended excess follows the square of a generalized Navarro-Frenk-White (gNFW, \cite{1997ApJ...490..493N,1996MNRAS.278..488Z}) profile with inner slope $\gamma\approx1.2$, and the $\gamma$-ray spectrum can be fitted with an exponential cutoff power-law or a log-parabolic form \cite{2011PhLB..697..412H,2012PhRvD..86h3511A,2013PhRvD..88h3521G}. The spatial extension of the excess is rather large. Daylan et al. found that up to $12^{\circ}$ away from the GC the excess is still significant \cite{2014arXiv1402.6703D}. The analysis of the spatial variation of the $\gamma$-ray emission from the Fermi bubbles \cite{2010ApJ...724.1044S} showed that there might also be an extra component overlapping with the bubble emission, which follows the same projected gNFW$^2$ distribution as the GC excess \cite{2013PDU.....2..118H,2013arXiv1307.6862H,2014arXiv1402.6703D}. This means the excess may exist at even larger scales. The origin of this excess is still unclear, and the proposed sources include dark matter (DM) annihilation \cite{2011PhRvD..84l3005H, 2011JHEP...05..026M,2011PhRvD..83g6011Z,2013arXiv1310.7609H, 2013arXiv1312.7488P,2014arXiv1401.6458B,2014arXiv1403.1987L, 2014arXiv1404.1373A} or a population of millisecond pulsars (MSPs, \cite{2011JCAP...03..010A,2013MNRAS.436.2461M}, see also an earlier work on an MSP interpretation of the EGRET diffuse $\gamma$-ray emission \cite{2005MNRAS.358..263W}). Although the DM scenario seems very attractive, it is crucial to investigate the astrophysical alternatives for the excess, especially given that direct detection experiments have found no signal of DM collisions in the corresponding mass ranges \cite{2012PhRvL.109r1301A,2014PhRvL.112i1303A}. A first look at the MSP scenario suggests that it is a plausible interpretation of the data.
The best-fitting spectrum of the excess is an exponential cutoff power-law, with power law index $\Gamma\sim1.4-1.6$ and cutoff energy $E_c\sim3-4$ GeV \cite{2013PhRvD..88h3521G,2014PhRvD..89f3515M}. All these are consistent with the average spectral properties of either the Fermi-LAT detected MSPs \cite{2013ApJS..208...17A}, or globular clusters whose $\gamma$-ray emission is believed to be dominated by MSPs \cite{2010A&A...524A..75A}. The number of MSPs needed to explain the data is estimated to be a few $\times10^3$ based on the observed luminosities of MSPs or globular clusters \cite{2012PhRvD..86h3511A,2013MNRAS.436.2461M,2013PhRvD..88h3521G}. Such a number of MSPs is plausible based on the comparison of the stellar mass content in the Galactic bulge and in the globular clusters. The spatial distribution of the $\gamma$-ray excess follows a gNFW profile, which is somewhat expected within the dark matter scenario according to N-body simulations with baryonic processes \cite{2004ApJ...616...16G,2011arXiv1108.5736G}. However, it is interesting to note that the number distribution of low mass X-ray binaries (LMXBs), which can serve as tracers of MSPs, in the central region of Andromeda gives a projected $R^{-1.5}$ profile \cite{2007A&A...468...49V, 2007MNRAS.380.1685V}, which is consistent with that needed to interpret the $\gamma$-ray excess \cite{2012PhRvD..86h3511A}. Hooper et al. investigated the MSP scenario for the GC excess in more detail \cite{2013PhRvD..88h3009H}. Based on several assumptions about the spatial, spin and luminosity distributions of the MSPs, they claimed that MSPs cannot explain the $\gamma$-ray excess data without violating the Fermi-LAT detected number-flux distribution of the MSPs. We revisit this problem in this work, paying special attention to the assumed luminosity function of MSPs.
We will model the spatial and spectral distribution of MSPs in the Milky Way (MW) disk to reproduce the major MSP observational properties as measured by Fermi-LAT, and infer the intrinsic luminosity function of MSPs (Sec. 2). We then apply the intrinsic luminosity function to a putative bulge population of MSPs and work out their contribution to the diffuse $\gamma$-ray excess without over-producing detectable point sources above the sensitivity threshold of Fermi-LAT (Sec. 3). We show that the MSP scenario can nicely reproduce the $\gamma$-ray excess data, and conclude in Sec. 4 with some discussion. | The analysis of the Fermi-LAT data revealed a symmetric and extended $\gamma$-ray excess in the GC region peaking at GeV energies \cite{2009arXiv0910.2998G,2009arXiv0912.3828V,2011PhLB..697..412H, 2011PhLB..705..165B}. The origin of the excess is not clear, and the promising scenarios include DM annihilation and an unresolved MSP population. Although the spectrum of the $\gamma$-ray excess is quite consistent with the average spectrum of the Fermi-LAT detected MSPs, it was argued that in order not to over-produce the detectable MSPs by Fermi-LAT, the unresolved MSP population can only account for $\lesssim10\%$ of the observed $\gamma$-ray emission \cite{2013PhRvD..88h3009H}. In this work we study the MSP scenario in detail, by including more comprehensive constraints from the observed properties of the Fermi-LAT detected MSP sample. We find that there is a large uncertainty in the intrinsic $\gamma$-ray luminosity function of MSPs, which significantly affects the prediction of the diffuse emission from the unresolved MSP population. It was found that the luminosity function adopted in \cite{2013PhRvD..88h3009H} might be too hard to reproduce the observed luminosity function of the Fermi-LAT MSP sample.
By properly adjusting the intrinsic luminosity function, we can reproduce well the observational properties of the Fermi-LAT MSPs with the MW population of MSPs. Based on this refined luminosity function, we find that a population of MSPs in the bulge is sufficient to explain the $\gamma$-ray excess without over-producing the detectable MSPs above the sensitivity of Fermi-LAT. The number of MSPs with luminosities higher than $10^{32}$ erg s$^{-1}$ in the whole bulge region is estimated to be $(1-2) \times 10^{4}$ in order to explain the $\gamma$-ray data. Such a number is compatible with estimates of the number of compact remnants in the very central region around the GC \cite{2000ApJ...545..847M,2007MNRAS.377..897D}. We further investigate the spatial distribution of the bulge MSP population, using LMXBs as tracers. Assuming a spatial density profile of $r^{-2.4}$, we can reproduce well the observed LMXB distribution within $10^{\circ}$ around the GC \cite{2008A&A...491..209R}. Such a density profile is quite consistent with that required to explain the GC $\gamma$-ray excess. However, we still need to keep in mind that the current constraint on the number density profile of LMXBs in the GC region is poor. It is possible that the density profile of LMXBs is slightly elongated along the Galactic plane, as expected from the stellar model. In that case the MSP scenario may have some tension with the $\gamma$-ray data \cite{2014arXiv1402.6703D}. We show in this work that the MSP population can naturally explain the $\gamma$-ray excess in the GC region. It should be pointed out that any other astrophysical population with spectral, luminosity and spatial characteristics similar to the MSPs could also be the origin of the excess. In any case, MSPs are the most natural sources to satisfy these constraints. We note that some analyses claimed that the $\gamma$-ray excesses extend to even larger scales in the inner Galaxy \cite{2013PDU.....2..118H, 2013arXiv1307.6862H,2014arXiv1402.6703D}.
The excess spectra in these regions seem to be even harder than that in the GC, and may be difficult to explain with MSPs \cite{2013PhRvD..88h3009H}. However, the analysis at large scales may suffer from uncertainties in the large scale diffuse background subtraction, especially if the emission from the Fermi bubbles is not uniform \cite{2014arXiv1402.0403Y}. Although there are also uncertainties from the diffuse backgrounds, the results from the GC analysis seem to be more robust \cite{2013PhRvD..88h3521G, 2014PhRvD..89f3515M}. Nevertheless, if the $\gamma$-ray excess does extend to larger scales ($\gg 10^{\circ}$ from the GC), the MSP scenario may face difficulty. Finally, we propose that multi-wavelength observations of the counterpart of the $\gamma$-ray excess, in e.g. X-rays, may help verify its existence as well as identify its nature. The X-ray emission from the MSPs and possibly the binary systems may show different properties (flux, sky map and spectrum) compared with that from DM annihilation, which could be detectable by, e.g., NuSTAR and other future X-ray missions. | 14 | 4 | 1404.2318 |
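For reference, the gNFW morphology invoked above can be written down explicitly. The sketch below evaluates $\rho(r) = \rho_s\,(r/r_s)^{-\gamma}(1+r/r_s)^{\gamma-3}$ with inner slope $\gamma = 1.2$ and checks that an annihilation-like signal tracing $\rho^2$ has inner logarithmic slope $\simeq -2.4$, the same power law as the LMXB-implied MSP density profile discussed in the text; the scale radius and normalization are arbitrary placeholders, not fitted values.

```python
import math

def gnfw(r, rho_s=1.0, r_s=20.0, gamma=1.2):
    """Generalized NFW profile rho(r) = rho_s / [(r/r_s)^gamma * (1 + r/r_s)^(3-gamma)].
    r and r_s in kpc; the normalization rho_s is arbitrary here."""
    x = r / r_s
    return rho_s / (x ** gamma * (1.0 + x) ** (3.0 - gamma))

# An annihilation-like emissivity traces rho^2, so its inner logarithmic
# slope is -2*gamma = -2.4 for gamma = 1.2.
r1, r2 = 0.01, 0.02   # radii well inside r_s
inner_slope = math.log(gnfw(r2) ** 2 / gnfw(r1) ** 2) / math.log(r2 / r1)
```

This makes explicit why a $\gamma = 1.2$ gNFW$^2$ morphology and an $r^{-2.4}$ source density are hard to distinguish in the inner region.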
1404 | 1404.3896_arXiv.txt | Europe's Gaia spacecraft will soon embark on its five-year mission to measure the absolute parallaxes of the complete sample of $1,000$ million objects down to $20$ mag. It is expected that thousands of nearby brown dwarfs will have their astrometry determined with sub-milli-arcsecond standard errors. Although this level of accuracy is comparable to the standard errors of the relative parallaxes that are now routinely obtained from the ground for selected, individual objects, the absolute nature of Gaia's astrometry, combined with the sample increase from one hundred to several thousand sub-stellar objects with known distances, ensures the uniqueness of Gaia's legacy in brown-dwarf science for the coming decade(s). We briefly explore the gain in brown-dwarf science that could be achieved by lowering Gaia's faint-end limit from 20 to 21 mag and conclude that two spectral-type sub-classes could be gained, in combination with a fourfold increase in the solar-neighbourhood volume sampled by Gaia and hence in the number of brown dwarfs in the Gaia~Catalogue. | \begin{figure*}[ht!] \center{\resizebox{0.99\hsize}{!}{\includegraphics[clip=true,angle=270]{josdebruijne_figure1.eps}}} \caption{\footnotesize Variation over the sky, in equatorial coordinates, of the difference between the sky-average, end-of-mission parallax standard error at $G=20$~mag ($\sigma_\pi = 332~\mu$as, colour coded in white) and the local, end-of-mission parallax standard error, in units of $\mu$as, from \url{http://www.cosmos.esa.int/web/gaia/science-performance}. The variation is caused by Gaia's scanning law. The red, curved band follows the ecliptic plane and the blue areas denote the ecliptic-pole regions.} \label{fig:sky_map} \end{figure*} \begin{table}[ht!] \caption{Sky-averaged Gaia astrometric standard errors between $G = 20$ and $21$~mag for three assumed detection probabilities. The [min--max] range for the parallax error denotes the variation over the sky.
Even with modest detection percentages ($p_{\rm det} = 50$\%), Gaia can deliver sub-mas astrometry at $G = 21$~mag.} \label{tab:faint_end} \begin{center} \begin{tabular}{cccc} \\[-25pt] \hline \\[-4pt] $G$ & $\sigma_\pi$ [min -- max] & $\sigma_0$ & $\sigma_\mu$\\ mag & $\mu$as & $\mu$as & $\mu$as~year$^{-1}$\\ \\[-4pt] \hline \\[-4pt] \multicolumn{4}{c}{Detection probability $p_{\rm det} = 100$\%}\\ \\[-8pt] 20.0 & 332~ [233 -- \phantom{1}384] & 247 & 175\\ 20.5 & 466~ [326 -- \phantom{1}539] & 346 & 245\\ 21.0 & 670~ [469 -- \phantom{1}775] & 498 & 353\\ \\ \multicolumn{4}{c}{Detection probability $p_{\rm det} = 80$\%}\\ \\[-8pt] 20.0 & 372~ [260 -- \phantom{1}430] & 276 & 195\\ 20.5 & 521~ [365 -- \phantom{1}602] & 387 & 274\\ 21.0 & 749~ [525 -- \phantom{1}866] & 557 & 394\\ \\ \multicolumn{4}{c}{Detection probability $p_{\rm det} = 50$\%}\\ \\[-8pt] 20.0 & 470~ [329 -- \phantom{1}543] & 349 & 247\\ 20.5 & 659~ [461 -- \phantom{1}762] & 489 & 347\\ 21.0 & 948~ [664 -- 1096] & 704 & 499\\ \\[-4pt] \hline \end{tabular} \end{center} \end{table} Gaia is the current astrometry mission of the European Space Agency (ESA), following up on the success of the Hipparcos mission. With a focal plane containing $106$ CCD detectors, Gaia is about to start surveying the entire sky and repeatedly observe the brightest $1,000$ million objects, down to $20^{\rm th}$ magnitude, during its five-year lifetime. Gaia's science data comprises absolute astrometry, broad-band photometry, and low-resolution spectro-photometry. Spectroscopic data with a resolving power of $11,500$ will be obtained for the brightest $150$ million sources, down to $17^{\rm th}$ magnitude.
The thermo-mechanical stability of the spacecraft, combined with the selection of the L2 Lissajous point of the Sun-Earth/Moon system for operations, allows parallaxes to be measured with standard errors less than $10$~micro-arcseconds ($\mu$as) for stars brighter than $12^{\rm th}$ magnitude, $25~\mu$as for stars at $15^{\rm th}$ magnitude, and $300~\mu$as at magnitude $20$. Photometric standard errors are in the milli-magnitude regime. The spectroscopic data allows the measurement of radial velocities with errors of $15$~km~s$^{-1}$ at magnitude $17$. Gaia's primary science goal is to unravel the kinematical, dynamical, and chemical structure and evolution of the Milky Way. In addition, Gaia's data will revolutionise many other areas of science, e.g., stellar physics, solar-system bodies, fundamental physics, exo-planets, and -- last but not least -- brown dwarfs. The Gaia spacecraft was launched on 19 December $2013$ and will start its science mission in the summer of $2014$. The science community in Europe, organised in the Data Processing and Analysis Consortium (DPAC), is responsible for the processing of the data. The first intermediate data is expected to be released some two years after launch, while the final catalogue is expected around $2022$. ESA's community web portal \url{http://www.cosmos.esa.int/gaia} provides more information on the Gaia mission, including~bibliographies. | With a nominal survey limit at $G = 20$~mag, Gaia will make a significant contribution to brown-dwarf science by delivering sub-milli-arcsecond absolute astrometry of an unbiased, all-sky sample of thousands of brown dwarfs in the solar neighbourhood. By lowering the limit to $G = 21$~mag, this sample can, in principle, i.e., when ignoring programmatic limitations, be increased in number by a factor of four. | 14 | 4 | 1404.3896 |
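The fourfold volume gain quoted in the abstract follows from simple survey arithmetic: a survey limit deeper by $\Delta m$ magnitudes extends the limiting distance by a factor $10^{0.2\,\Delta m}$ for objects of fixed absolute magnitude, and the sampled volume by the cube of that factor. A one-line check (generic photometric arithmetic, not code from the paper):

```python
# Distance limit scales as 10^(0.2 * delta_m) at fixed absolute magnitude,
# so the sampled volume scales as the cube of that factor.
delta_m = 21.0 - 20.0                      # faint-end limit lowered by 1 mag
distance_factor = 10.0 ** (0.2 * delta_m)  # ~1.585 in limiting distance
volume_factor = distance_factor ** 3       # 10^0.6 ~ 3.98, i.e. roughly fourfold
```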
1404 | 1404.4364_arXiv.txt | We confront various nonsingular bouncing cosmologies with the recently released BICEP2 data and investigate the observational constraints on their parameter space. In particular, within the context of the effective field approach, we analyze the constraints on the matter bounce curvaton scenario with a light scalar field, and on the new matter bounce cosmology model in which the universe successively experiences a period of matter contraction and an ekpyrotic phase. Additionally, we consider three nonsingular bouncing cosmologies obtained in the framework of modified gravity theories, namely the Ho\v{r}ava-Lifshitz bounce model, the $f(T)$ bounce model, and loop quantum cosmology. | Very recently, the BICEP2 collaboration announced the detection of primordial B-mode polarization in the cosmic microwave background (CMB), claiming an indirect observation of gravitational waves. This result, if confirmed by other collaborations and future observations, will be of major significance for cosmology and theoretical physics in general. In particular, the BICEP2 team found a tensor-to-scalar ratio \cite{Ade:2014xna} \begin{equation} \label{rBICEP2} r=0.20_{-0.05}^{+0.07}, \end{equation} at the $1\sigma$ confidence level for the $\Lambda$CDM scenario. Although there remains the possibility that the observed B-mode polarization could be partially caused by other sources \cite{Lizarraga:2014eaa, Moss:2014cra, Bonvin:2014xia}, it is indeed highly probable that the observed B-mode polarization in the CMB is due at least in part to gravitational waves, remnants of the primordial universe. Relic gravitational waves generated in the very early universe are a generic prediction in modern cosmology \cite{Grishchuk:1974ny, Starobinsky:1979ty}. Inflation is one of several cosmological paradigms that predict a roughly scale-invariant spectrum of primordial gravitational waves \cite{Starobinsky:1979ty, Rubakov:1982df, Starobinsky:1985ww}.
The same prediction was also made by string gas cosmology \cite{Brandenberger:1988aj, Brandenberger:2006xi, Brandenberger:2014faa, Biswas:2014kva} and the matter bounce scenario \cite{Wands:1998yp, Finelli:2001sr, Cai:2008qw}. (Note that the specific predictions of $r$ and the tilt of the tensor power spectrum can be used in order to differentiate between these cosmologies.) So far, many of the theoretical analyses of the observational data have been in the context of inflation (see, for instance, \cite{Kehagias:2014wza, Ma:2014vua, Harigaya:2014qza, Gong:2014qga, Miranda:2014wga, Hertzberg:2014aha,Lyth:2014yya, Xia:2014tda, Hazra:2014jka, Cai:2014bda, Hossain:2014coa, Hu:2014aua, Zhao:2014rna, Zhang:2014dxk, DiBari:2014oja, Li:2014cka, Chung:2014woa, Cai:2014hja,Hossain:2014ova}). In the present work, we are interested in exploring the consequences of the BICEP2 results in the framework of bouncing cosmological models. In particular, we aim to study the production of primordial gravitational waves in various bouncing scenarios, in both the settings of effective field theory and modified gravity. First, we show that the tensor-to-scalar ratio obtained in a large class of nonsingular bouncing models is predicted to be quite large compared with the observed value. Second, in some explicit models this value can be suppressed due to the nontrivial physics of the bouncing phase, namely, the matter bounce curvaton \cite{Cai:2011zx} and the new matter bounce cosmology \cite{Cai:2012va, Cai:2013kja, Cai:2014bea}. Additionally, for bounce models where the fluid that dominates the contracting phase has a small sound velocity, primordial gravitational waves can be generated with very low amplitudes \cite{Bessada:2012kw}. We show that the current Planck and BICEP2 data constrain the energy scale at which the bounce occurs as well as the slope of the Hubble rate during the bouncing phase in these specific models. The paper is organized as follows.
In Sec.\ \ref{s.matt-bounce}, we focus on matter bounce cosmologies from the effective field theory perspective. In particular, we explore the matter bounce curvaton scenario \cite{Cai:2011zx} and the new matter bounce cosmology \cite{Cai:2013kja}. In Sec.\ \ref{s.mod-grav}, we explore another avenue for obtaining nonsingular bouncing cosmologies, that is, modifying gravity. We comment on the status of the matter bounce scenario in Ho\v{r}ava-Lifshitz gravity \cite{Brandenberger:2009yt}, in $f(T)$ gravity \cite{Cai:2011tc}, and in loop quantum cosmology \cite{Cai:2014zga}. We conclude with a discussion in Sec.\ \ref{s.concl}. | \label{s.concl} In this work, we confronted various bouncing cosmologies with the recently released BICEP2 data. In particular, we analyzed two scenarios in the effective field theory framework, namely the matter bounce curvaton scenario and the new matter bounce cosmology, and three modified gravity theories, namely Ho\v{r}ava-Lifshitz gravity, the $f(T)$ theories, and loop quantum cosmology. We showed that all of these models are capable of generating primordial gravitational waves. Since matter bounce models typically produce a large amplitude of primordial tensor fluctuations, specific mechanisms for their suppression are needed. In the matter bounce curvaton scenario, introducing an extra scalar coupled to the bouncing field induces a controllable amplification of the entropy modes during the bouncing phase, and since these modes will be transferred into curvature perturbations, the resulting tensor-to-scalar ratio is suppressed to a value in agreement with the observations of the BICEP2 collaboration.
Thus, the entropy modes are converted into curvature perturbations when the universe enters the ekpyrotic phase before the bounce, and the resulting tensor-to-scalar ratio is again suppressed to observed values. Furthermore, for both of these models we used the BICEP2 and Planck results to constrain the free parameters, namely the energy scale of the bounce, the slope of the Hubble rate during the bouncing phase, or the Hubble rate at the beginning of the ekpyrotic-dominated phase for the new matter bounce cosmology. Finally, we considered bouncing cosmologies in the framework of modified gravity. In particular, in both the Ho\v{r}ava-Lifshitz bounce model and the $f(T)$ gravity bounce, we have argued that the presence of a curvaton field may suppress the tensor-to-scalar ratio to its observed values. We leave the detailed analysis of this topic for a follow-up study. In loop quantum cosmology, two realizations of the matter bounce have been studied. In the simplest matter bounce model, where there is only one matter field, the amplitude of the tensor perturbations is significantly diminished during the bounce due to quantum gravity effects; this process predicts a very small value of $r \sim \mathcal{O}(10^{-3})$, well below the value observed by BICEP2. The other model that has been studied is the new matter bounce scenario, which in the absence of entropy perturbations predicts a large amplitude for the tensor perturbations (in this case quantum gravity effects do not modify the spectrum during the bounce). Therefore, for the new matter bounce scenario in LQC to be viable, it is also necessary to include entropy perturbations in order to lower the value of $r$ to a value in agreement with the results of BICEP2.
Also, the dominant field during the bounce significantly affects how the value of $r$ changes through the bounce, and therefore it seems likely that by carefully choosing this field, it may be possible to obtain a tensor-to-scalar ratio in agreement with observations. We leave this possibility for future work. In summary, the predictions of the matter bounce cosmologies where entropy perturbations significantly increase the amplitude of scalar perturbations remain consistent with observations, and thus these models are good alternatives to inflation. | 14 | 4 | 1404.4364 |
1404 | 1404.4977_arXiv.txt | Previous attempts at explaining the gamma-ray excess near the Galactic Centre have focussed on dark matter annihilating directly into Standard Model particles. This results in a preferred dark matter mass of 30-40~GeV (if the annihilation is into $b$ quarks) or 10~GeV (if it is into leptons). Here we show that the gamma-ray excess is also consistent with heavier dark matter particles; in models of secluded dark matter, dark matter with mass up to 76~GeV provides a good fit to the data. This occurs if the dark matter first annihilates to an on-shell particle that subsequently decays to Standard Model particles through a portal interaction. This is a generic process that works in models with annihilation, semi-annihilation or both. We explicitly demonstrate this in a model of hidden vector dark matter with an SU$(2)$ gauge group in the hidden sector. | Recently, an excess of gamma rays in a region of $\sim 10^{\circ}$ around the Galactic Centre has been observed in the Fermi-LAT data~\cite{Goodenough:2009gk,*Hooper:2010mq,*Boyarsky:2010dr,*Abazajian:2012pn,*Hooper:2013rwa,*Gordon:2013vta,*Abazajian:2014fta,*Daylan:2014rsa}. Although possibly consistent with astrophysical sources~\cite{Abazajian:2010zy,*Hooper:2013nhl,*Macias:2013vya,*Yuan:2014rca}, most analyses to date have focussed on interpreting the excess as a product of dark matter (DM) annihilation favouring the narrow mass range $30$-$40$~GeV for particles annihilating mainly into $b$-quarks~\cite{Boehm:2014hva,*Hardy:2014dea,*Alves:2014yha,*Berlin:2014tja,*Agrawal:2014una,*Izaguirre:2014vva,*Ipek:2014gua} or 10 GeV particles annihilating into leptons~\cite{Okada:2013bna,*Modak:2013jya,*Lacroix:2014eea,*Kong:2014haa}. In this paper we emphasise that the form of the gamma-ray spectrum reflects the injection energy of the Standard Model (SM) particles from DM annihilation, rather than directly tracking the DM mass, $m_{\rm{DM}}$. 
In $2\to 2$ annihilation processes that directly produce SM particles (the case that has so far been considered), the SM particles are produced with an energy $E = m_{\rm{DM}}$, leading to a direct relation between the cosmic-ray energy and the DM mass. However other modes of cosmic-ray production from DM do not feature this relation, thus allowing compatibility with DM particles over a larger mass range. One group of examples that we highlight in this paper is secluded DM~\cite{Pospelov:2007mp} in which the DM annihilates to on-shell particle(s) $\eta$ that subsequently decay to SM particles through a portal interaction. The injection energy of cosmic rays now depends on $m_{\rm{DM}}$ and $m_{\eta}$ and the result is that DM with mass up to 76~GeV is compatible with the Fermi signal. We demonstrate this with the secluded vector DM model proposed in~\cite{Hambye:2008bq} (see also~\cite{Hambye:2009fg,*Arina:2009uq,*Hambye:2013dgv,*Carone:2013wla,*Khoze:2014xha}). The DM in this model is three gauge bosons $Z'^a$ of the same mass that are stabilised by a custodial SO$(3)$ symmetry. The state $\eta$ is a light scalar singlet that mixes with the SM Higgs through the Higgs portal; it decays predominantly into $b$ quarks. An attractive feature of this model is that it contains annihilation and semi-annihilation processes, which may occur when the DM is stabilised under a symmetry larger than $\mathbb{Z}_2$~\cite{D'Eramo:2010ep,*Belanger:2012vp,*D'Eramo:2012rr,*Belanger:2014bga}. This highlights that heavier DM particles may explain the Galactic Centre excess in a large class of models that have yet to be fully explored. | The spectrum of the Galactic Centre excess constrains the injection energy of the SM particles and not directly the mass of the DM responsible for their production. 
In secluded DM models in which the dominant annihilation channel is to on-shell particle(s) $\eta$ that subsequently decay to SM particles, the cosmic-ray injection energy depends on $m_{\rm{DM}}$ and $m_{\eta}$. We demonstrated that in these models, DM with mass 39-76~GeV provides a good fit to the Galactic Centre excess; this mass range is four times larger than that found previously for models in which DM annihilates directly to SM particles. The Higgs portal coupling that allows $\eta$ to decay also naturally explains why the dominant decay is into $b$ quarks, as preferred by the data. By considering a model of hidden vector DM, we demonstrated that this mechanism works for both annihilation (which dominates when $m_{\eta} \approx M_{Z'}$) and semi-annihilation (dominating when $m_{\eta} < M_{Z'}$). This paper opens up a large number of model building possibilities to explain the Galactic Centre excess beyond those that have previously been considered. | 14 | 4 | 1404.4977 |
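The box-spectrum kinematics behind this statement can be sketched in a few lines; the function below and its numbers are illustrative, not from the paper. For $\chi\chi \to \eta\eta$ each $\eta$ carries lab energy $m_{\rm DM}$, and its (approximately massless) decay products are boosted into a flat box between two endpoints that depend on both $m_{\rm DM}$ and $m_\eta$:

```python
import math

def b_quark_energy_range(m_dm, m_eta):
    """Lab-frame endpoint energies (GeV) of decay products from
    chi chi -> eta eta followed by eta -> b b-bar.

    Each eta is produced with lab energy E_eta = m_dm, i.e. boost
    gamma = m_dm / m_eta; in the eta rest frame each b carries
    E* = m_eta / 2 (b mass neglected), so the boosted spectrum is a
    flat box between the two endpoints returned here.
    """
    gamma = m_dm / m_eta
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    e_star = m_eta / 2.0
    return gamma * e_star * (1.0 - beta), gamma * e_star * (1.0 + beta)

# Illustrative numbers: a 76 GeV DM particle with m_eta = 60 GeV injects
# b quarks over a band of energies rather than at E = m_dm, which is why
# heavier DM can mimic the spectrum of lighter, directly annihilating DM.
lo, hi = b_quark_energy_range(76.0, 60.0)
```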
1404 | 1404.6005_arXiv.txt | The effective number of neutrinos, $\Neff$, obtained from CMB fluctuations accounts for all effectively massless degrees of freedom present in the Universe, including but not limited to the three known neutrinos. Using a lattice-QCD derived QGP equation of state, we constrain the observed range of $\Neff$ in terms of the freeze-out of unknown degrees of freedom near quark-gluon hadronization. We explore limits on the coupling of these particles, applying methods of kinetic theory. We present bounds on the coupling of such particles and discuss the implications of a connection between $\Neff$ and the QGP transformation for laboratory studies of QGP. | 14 | 4 | 1404.6005
||
1404 | 1404.4831_arXiv.txt | Comparing the latest observed abundances of $^4$He and D, we make a $\chi^2$ analysis to see whether it is possible to reconcile primordial nucleosynthesis with the up-to-date nuclear data of NACRE~II and the mean-life of neutrons. If we adopt the observational data of $^4$He by Izotov et al.~\cite{Izotov2013}, we find that it is impossible to get reasonable concordance with standard Big-Bang nucleosynthesis. However, including degenerate neutrinos, we have succeeded in obtaining consistent constraints between the neutrino degeneracy and the baryon-to-photon ratio from a detailed comparison of calculated abundances with the observational data of $^4$He and D: the baryon-to-photon ratio in units of $10^{-10}_{}$ is found to be in the range $6.02 \lesssim \eta_{10} \lesssim 6.54$ for the specified parameters of neutrino degeneracy. | Big-bang nucleosynthesis (BBN) provides substantial clues for investigating physical conditions in the early universe. Standard BBN produces about 25 \% of mass in the form of $^4$He, which has been considered to be in good agreement with its abundance observed in a variety of astronomical objects~\cite{Iocco:2008va,Steigman2007,Coc2012,Coc2013}. The produced amount of $^4_{}$He depends strongly on the fraction of neutrons at the onset of nucleosynthesis, but is not very sensitive to the baryon-to-photon ratio $\eta$~($\eta = n^{}_{\rm b}/n^{}_\gamma;\, \eta^{}_{10} = 10^{10}_{} \eta$). Hence the produced amount of $^4$He is used to explore the expansion rate during BBN, which can be related to the effective number of neutrino flavours~\cite{Simha2008}. In addition to ${}^4_{}$He, significant amounts of D, ${}^3_{}$He and ${}^7_{}$Li are also produced. Because of its strong dependence on $\eta$, the abundance of D is crucial in determining $\eta$ and consequently the density parameter of baryons $\Omega^{}_b$.
In spite of the apparent success of standard BBN, recently observed light-element abundances considered to be primordial have been controversial. Large discrepancies for $^4$He observations emerge between different observers and modelers of observations: rather high values of $^4$He have been reported for H II regions in blue compact galaxies~\cite{Aver2012,Izotov2013}. It is noted that the primordial abundance of $^4$He is deduced from extrapolation to zero metallicity~\cite{Aver2013}. Deuterium abundance has been observed in absorption systems toward high-redshift quasars~\cite{Cooke2014}. It should be noted that the value of D has been believed to limit the present baryon density (e.g. Schramm \& Turner~\cite{schr}). A low value of ${}^7_{}$Li observed in Population II stars reported by Bonifacio et al.~\cite{Bonifacio2007} is considered to be due to depletion and/or destruction, during the lifetimes of the stars, from a high primordial value~\cite{Korn2006,Melendez2010}. Recently, the mean-life of neutrons has been updated from the previously adopted value of $885.7 \pm 0.8$~s~\cite{PDG2008}, which had been used commonly in BBN calculations consistent with the observed abundances of ${}^4$He and D. However, the latest compilation by Beringer et al. derives the mean-life to be $880.1 \pm 1.1$~s~\cite{PDG2012}, which may suggest an inconsistency between BBN and the observational values. This indicates a further inconsistency with the $\eta$ deduced by \cite{Planck_basic,Planck_cosmo}. The apparent spread in the observed abundances of $^4$He should give rise to an inconsistent range of $\eta$. Apart from observational uncertainties, we have no reliable theories beyond the standard theory of elementary particle physics. It is assumed in standard BBN that there are three flavours of massless neutrinos which are not degenerate. However, it was suggested by Harvey and Kolb~\cite{Harvey1981} that the lepton asymmetry could be large even when the baryon asymmetry is small.
The magnitude of the lepton asymmetry is of particular interest in cosmology and particle physics. Related to neutrino oscillations, investigations of BBN have been reprised with the use of non-standard models (e.g.~Ref.~\cite{sark_b}). As presented by Wagoner et al.~\cite{wago67} and Beaudet \& Goret~\cite{beau76}, the abundances of light elements are modified by neutrino degeneracy (see previous investigations~\cite{yang84,Terasawa85,Kang92,Lisi99}, and also the review in Ref.~\cite{sark_a}); it could be necessary and crucial to search for consistent regions in $\eta$ within a framework of BBN with degenerate neutrinos by comparing with the latest observations of the abundances of He and D. If neutrinos are degenerate, the excess density of neutrinos causes a speedup in the expansion of the universe, leaving more neutrons and eventually leading to enhanced production of $^4$He. On the other hand, degenerate electron-neutrinos shift the $\beta$-equilibrium towards fewer or more neutrons and hence change the production of $^4$He. The latter effect is more significant than the former. In the present paper we investigate BBN including degenerate neutrinos and using up-to-date nuclear data. Referring to several sets of combinations of the recently observed abundances of $^4$He and D, we derive consistent constraints between $\eta$ and the degeneracy parameter. In \S~\ref{sec:obs} we summarize the current situation of the observed abundances of light elements. Our results of BBN with updated nuclear data are presented in \S~\ref{sec:BBN}. Discussion is given in \S~\ref{sec:result}. | \label{sec:result} While a large spread in the errors of $^4$He by Aver et al.~\cite{Aver2013} hinders us from constraining the amount of produced $^4$He, the smaller range by Izotov et al.~\cite{Izotov2013} permits us to constrain the ${}^4_{}$He production.
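The expansion speedup mentioned above is commonly quantified as an extra effective number of neutrinos. A minimal sketch, using the standard Fermi-Dirac result from the BBN literature (the formula is an assumption of this sketch, not derived in the text above):

```python
import math

def delta_neff(xi):
    """Extra effective neutrino number from one neutrino species with
    degeneracy parameter xi = mu/T.  Standard Fermi-Dirac result quoted
    from the literature (assumed here; not derived in the text)."""
    x = xi / math.pi
    return (30.0 / 7.0) * x**2 + (15.0 / 7.0) * x**4

# Even a modest degeneracy gives only a small speedup (~0.11 extra
# neutrino species for xi = 0.5); the shift of the n/p beta-equilibrium
# by xi_e is the dominant effect on He-4, as stated above.
extra = delta_neff(0.5)
```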
Our results clarify the present controversial situation between standard BBN and observations; the effects of an uncertain mechanism originating from a non-standard theory should be reflected in the n/p ratio. If we adopt the value in \eqref{eq:xi_eta_results_2}, we obtain the following range for the density parameter: \[ 0.0220 \le \Omega^{}_b h^2 \le 0.0239, \] which is compatible with that from $Planck$ measurements. We showed that neutrino degeneracy may become one of the solutions to the discrepancy concerning the present baryon density between BBN and the CMB. Our results provide a narrower range of $\xi^{}_e$ compared with the previous study, e.g.~Ref.~\cite{Kneller2001}. BBN alone seems to give a strong constraint on the parameters of a non-standard model such as neutrino degeneracy. The ${}^7_{}$Li abundance in the present calculation is still larger than the observational values~\eqref{eq:Li7_Sbord} and \eqref{eq:Li7_Korn}. Given the apparent discrepancies among the nuclear data and observations, we may need a non-standard model beyond the Friedmann model: for example, the expansion rate of the universe could deviate significantly in the framework of a Brans-Dicke theory~\cite{Arai1987,Etoh97,Nakamura2006,Berni2010}, or a scalar-tensor theory of gravity~\cite{Coc2006rt,Larena2007}. If inhomogeneous BBN~\cite{Applegate1987,Rauscher1994} could occur in some regions of the universe, it may solve the problem concerning the ${}^7_{}$Li abundance: if there is a high-density region with $\eta > 10^{-5}$ in the BBN era, the amount of ${}^7$Li produced decreases significantly. As a consequence, the average value of ${}^7$Li between the high- and low-density regions becomes lower than the value predicted in SBBN~\cite{NakamuraIBBN2013}. Finally, we would like to emphasize that the nuclear reaction rates responsible for the production of He and D are still not definite.
The error bars given by NACRE~II~\cite{NACRE2} may not always be confirmed by other experimental groups. | 14 | 4 | 1404.4831 |
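As a consistency check of the quoted numbers, the abstract's range $6.02 \lesssim \eta_{10} \lesssim 6.54$ maps onto the conclusions' $\Omega_b h^2$ range via the standard conversion $\eta_{10} \simeq 273.9\, \Omega_b h^2$ (the coefficient is a standard literature value, assumed here rather than taken from the text):

```python
# Standard BBN conversion between the baryon-to-photon ratio and the baryon
# density parameter: eta_10 ~= 273.9 * Omega_b h^2 (assumed coefficient).
ETA10_PER_OMEGA_B_H2 = 273.9

def omega_b_h2(eta10):
    return eta10 / ETA10_PER_OMEGA_B_H2

# The quoted range 6.02 <= eta_10 <= 6.54 then reproduces, to the precision
# quoted in the conclusions, 0.0220 <= Omega_b h^2 <= 0.0239.
low, high = omega_b_h2(6.02), omega_b_h2(6.54)
```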
1404 | 1404.5230_arXiv.txt | In the light of the current BICEP2 observations, together with the PLANCK satellite results, it has been observed that simple single-field chaotic inflationary models provide good agreement with the measured spectral index $n_s$ and the large tensor-to-scalar ratio $r$ ($0.15 <r <0.26$). To explore other simple models, we consider fractional-chaotic inflationary potentials of the form $V_0 \, \phi^{a/b}$ where $a$ and $b$ are relatively prime. We show that such inflaton potentials can be realized elegantly in the supergravity framework with a generalized shift symmetry and a natural bound $a/b < 4$ for consistency. In particular, for numbers of e-foldings from 50 to 60 and some $a/b$ from 2 to 3, our predictions lie within at least the $1\sigma$ region in the ($r-n_s$) plane. We also present a systematic investigation of such chaotic inflationary models with fractional exponents to explore the possibilities for enhancing the magnitude of the running of the spectral index ($\alpha_{n_s}$) beyond the simplistic models. | \label{sec_Intro} Among the plethora of inflationary models developed so far, the polynomial inflationary potentials have always been a center of attraction, ever since the very first proposal of chaotic inflation in Ref.~\cite{Linde:1983gd}. The recent BICEP2 observations~\cite{Ade:2014xna}, interpreted as the discovery of inflationary gravitational waves, have not only put these models in the limelight but also supported them as favoured among many others. The BICEP2 observations fix the inflationary scale by ensuring a large tensor-to-scalar ratio $r$ as follows~\cite{Ade:2014xna}: \bea & & \hskip0.5cm r = 0.20^{+0.07}_{-0.05} ~(68\% ~{\rm CL}) \nonumber\\ & & H_{\rm inf} \simeq 1.2 \times 10^{14} \, \left(\frac{r}{0.16}\right)^{1/2} \, \, {\rm GeV} \, , \eea where $H_{\rm inf}$ denotes the Hubble parameter during inflation.
Subtracting the various dust models and re-deriving the $r$ constraint still results in a high significance of detection, and one has $ r=0.16^{+0.06}_{-0.05} $. Thus, it suggests the inflationary process to be a high-scale process near the scale of the Grand Unified Theory (GUT), and it can then provide invaluable information on UV-completion proposals such as string theory, for example in searching for a consistent supersymmetry (SUSY) breaking scale \cite{Ibanez:2014zsa, Ibanez:2014kia, Harigaya:2014pqa,Biswas:2014kva,Choudhury:2013jya}. On these lines, some recent progress on realizing chaotic as well as natural or axion-like inflationary models from string or supergravity frameworks has been made in Refs.~\cite{Silverstein:2008sg,McAllister:2008hb, Ellis:2014rxa, Hebecker:2014eua, Palti:2014kza, Marchesano:2014mla, Czerny:2014qqa, Kaloper:2014zba, Blumenhagen:2014gta, Choi:2014uaa, Grimm:2014vva,Ashoorioon:2009wa,Ashoorioon:2011ki,Cicoli:2014sva,Kaloper:2008fb,Kaloper:2011jz}. However, most of these works on natural as well as chaotic inflation can only produce integral powers of the inflaton in the polynomial potential. In this work, we will present a general fractional chaotic inflation which can be naturally generated from the supergravity framework by utilizing the generalized shift symmetry. For a given single-field potential $V(\phi)$, the sufficient condition for ensuring slow-roll inflation is encoded in a set of so-called slow-roll conditions defined as follows \bea \label{eq:slow} & & \epsilon \equiv \frac{1}{2} \, \left(\frac{V^\prime}{V}\right)^2 \ll 1 \, , \, \, \eta \equiv \frac{V^{\prime \prime}}{V} \ll 1 \, , \, \, \xi \equiv \frac{V^\prime \, V^{\prime \prime\prime}}{V^2} \ll 1 \, , \eea where $\prime$ denotes the derivative of the potential w.r.t. the inflaton field $\phi$. Also, the above expressions are defined in units of the reduced Planck mass $M_{\rm Pl} = 2.44 \times 10^{18} \, {\rm GeV}$.
The various cosmological observables such as the number of e-foldings $N_e$, scalar power spectrum $P_s$, tensorial power spectrum $P_t$, tensor-to-scalar ratio $r$, scalar spectral index $n_s$, and runnings of spectral index $\alpha_{n_s}$ can be written in terms of the various derivatives of the inflationary potential via the slow-roll parameters as introduced above \cite{Lidsey:1995np, Sasaki:1995aw, Gao:2014fva}. For example, the number of e-folding is given as, \bea \label{eq:Nefold} & & N_e \equiv \int_{\phi_{end}}^{\phi_*} \, \frac{1}{\sqrt{2 \epsilon}} \, d \phi \, , \eea where $\phi_{end}$ is determined by the end of the inflationary process when $\epsilon =1$ or $\eta=1$. The other cosmological observables relevant for the present study are given by \cite{Lidsey:1995np, Sasaki:1995aw, Gao:2014fva}, \bea & & P_s \equiv \left[\frac{H^2}{4 \, \pi^2 \, (2 \, \epsilon)} \, \left(1 - \left(2 \, C_E - \frac{1}{6}\right) \, \epsilon+\left(C_E \, -\frac{1}{3} \right) \, \eta \right)^2\right] \, ,\nonumber\\ & & r \equiv \frac{P_t}{P_s} \simeq 16 \, \epsilon \left[1 -\frac{4}{3} \, \epsilon +\frac{2}{3} \, \eta + 2 \, C_E \, (2 \, \epsilon -\eta)\right] \, , \\ & & n_s \equiv \frac{d \ln P_s}{d \ln k} \simeq 1 + 2 \biggl[\eta - 3 \, \epsilon -\left(\frac{5}{3} + 12 \, C_E \right)\, \epsilon^2 + (8 \, C_E -1) \epsilon \, \eta \nonumber\\ & & \hskip3.5cm + \frac{1}{3}\,\eta^2 -\left(C_E - \frac{1}{3} \right) \, \xi \biggr] \, ,\nonumber\\ & & \alpha_{n_s}\equiv \frac{d n_s}{d \ln k} \simeq 16 \, \epsilon \, \eta - 24 \, \epsilon^2 - 2 \, \xi \, , \nonumber \eea where $C_E = -2 + 2 \ln 2 +\gamma = -0.73$, $\gamma =0.57721$ being the Euler-Mascheroni constant. Therefore, it is natural to expect that the shape of the inflationary potential are tightly constrained by the experimental bounds on these cosmological observables coming from various experiments. 
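For the monomial potentials considered here, the expressions above reduce to simple leading-order estimates. The sketch below drops the $\phi_{end}$ contribution to $N_e$ and the higher-order ($C_E$) corrections, so it is indicative rather than a substitute for the full formulas:

```python
def slow_roll_predictions(n, Ne):
    """Leading-order n_s and r for a monomial potential V = V0 * phi**n.

    From the slow-roll definitions: eps = n**2 / (2 phi**2),
    eta = n (n - 1) / phi**2, and N_e ~= phi_***2 / (2 n) once the small
    phi_end contribution is dropped, giving
        n_s ~= 1 + 2 eta - 6 eps = 1 - (n + 2) / (2 N_e),
        r   ~= 16 eps           = 4 n / N_e.
    """
    ns = 1.0 - (n + 2.0) / (2.0 * Ne)
    r = 4.0 * n / Ne
    return ns, r

# Illustrative fractional exponent a/b = 5/2 with 60 e-foldings:
ns, r = slow_roll_predictions(2.5, 60)   # ns = 0.9625, r ~ 0.167
```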
These experimental constraints on the cosmological parameters are also useful for directly reconstructing the single-field potential \cite{Choudhury:2014kma} by fixing the magnitude as well as the various derivatives of the potential. The experimental bounds on the cosmological observables relevant to the present study are briefly summarized as follows, \bea \label{eq:constrain} & & r = 0.16^{+0.06}_{-0.05} \, , \, \, n_s = 0.957 \pm 0.015 \, , \, \, \alpha_{n_s} = -0.022^{+0.020}_{-0.021}~.~ \eea Apart from the tensor-to-scalar ratio $r$ and the spectral index $n_s$, the running of the spectral index $\alpha_{n_s}$ has emerged as another crucial cosmological parameter in reconciling the results of the BICEP2 \cite{Ade:2014xna} and Planck satellite experiments \cite{Ade:2013zuv}. To be more precise, without considering the running of the spectral index, the Planck + WMAP + highL data \cite{Ade:2013zuv, Hinshaw:2012aka} result in $n_s = 0.9600 \pm 0.0072$ and $r_{0.002} < 0.0457$ at 68 \% CL for the $\Lambda$CDM model, and hence face a direct incompatibility with the recent BICEP2 results. However, with the inclusion of the running of the spectral index, the Planck + WMAP + highL data result in $n_s = 0.957 \pm 0.015$, $\alpha_{n_s} = -0.022^{+0.020}_{-0.021}$ and $r_{0.002} < 0.263$ at 95 \% CL. The reconciliation of the BICEP2 data with those of Planck + WMAP + highL \cite{Ade:2013zuv, Hinshaw:2012aka} demands a non-trivial running of the spectral index, $\alpha_{n_s} < -0.002$. Although the simplistic single-field models are good enough to reproduce the desired values of the tensor-to-scalar ratio within $0.15< r < 0.26$ along with $50-60$ e-foldings, most of them fail to generate a large enough magnitude for $\alpha_{n_s}$ (which needs to be of order $10^{-2}$).
On these lines, a difficulty in realizing the desired values of $n_s, \, r$ and $\alpha_{n_s}$ within a set of reconciled experimental bounds from the BICEP2 and Planck experiments has been observed for chaotic and natural inflation models \cite{Gong:2014cqa}. In this letter, our aim is to consider fractional chaotic inflation models with potentials $V_0 \, \phi^{a/b}$ where $a$ and $b$ are relatively prime. We shall obtain such inflaton potentials elegantly in the supergravity framework with a generalized shift symmetry for $a/b < 4$, where the shift symmetry is broken only by the superpotential\footnote{Some fractional-chaotic inflationary potentials with a particular value of the exponent can also be derived from supergravity compactifications on 6-dimensional twisted tori \cite{Gur-Ari:2013sba}.}. We find that for numbers of e-foldings from 50 to 60 and some $a/b$ from 2 to 3, our models lie nicely within the $1\sigma$ region in the $r-n_s$ plane. Furthermore, we investigate the possibility of improvements for a better fit of the three confronting parameters $n_s, \, r$ and $\alpha_{n_s}$ in the light of the Planck and BICEP2 data. However, if the BICEP2 bounds on $r$ are modified/diluted, and if $r$ is found to be smaller than 0.11 in the upcoming Planck results, then fractional chaotic inflation can still explain the data for exponents $a/b < 2$ with $n_s$ being slightly larger than 0.96. On the other hand, if the BICEP2 claims are confirmed in the near future, the open question of interest, for the model we propose, will be whether one can measure the exponent $a/b$. | \label{sec_Conclusions} In this article, we have constructed fractional-chaotic inflationary potentials from the supergravity framework utilizing the generalized shift symmetry. A natural bound on the fractional power is provided in our model for consistency.
One of the motivations for this construction is to investigate the possibility of realizing a non-trivial and negative running ($\alpha_{n_s}$) of the spectral index $n_s$. It has been well established by now that the simplistic polynomial chaotic inflation models successfully realize a large tensor-to-scalar ratio $r$ compatible with the allowed values of the spectral index $n_s$. However, in order to reconcile the data from the Planck and recent BICEP2 observations, a non-trivial running of the spectral index is needed, which is usually suppressed at order $10^{-4}$ in chaotic inflationary models. We have studied a generalization of polynomial chaotic inflation by including fractional exponents in search of possible improvements along this direction. We found that, for numbers of e-foldings from 50 to 60 and some fractional exponents $a/b$ from 2 to 3, our results lie nicely within the $1\sigma$ region in the ($r-n_s$) plane, along with an improvement (although insufficient) in the magnitude of the running of the spectral index ($\alpha_{n_s}$). Such a class of fractional-chaotic inflationary potentials can also be interesting for facilitating large field excursions on the lines of \cite{Harigaya:2014eta}, and also for studying other cosmological aspects on the lines of \cite{Bastero-Gil:2014oga,Bartrum:2013fia,Bastero-Gil:2014jsa}. If the BICEP2 result is confirmed in the (near) future, the open question, relevant to our fractional-chaotic inflationary model, will be whether one can measure the exponent $a/b$ at some future experiments. | 14 | 4 | 1404.5230 |
1404 | 1404.2626_arXiv.txt | The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230\,000 targets using the Anglo-Australian Telescope. To homogenise the redshift measurements and improve the reliability, a fully automatic redshift code was developed (\textsc{autoz}). The measurements were made using a cross-correlation method for both absorption-line and emission-line spectra. Large deviations in the high-pass filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches onto a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33\,km/s. | \label{sec:intro} Spectroscopic redshift measurements of large galaxy samples form the backbone of many extragalactic and cosmological analyses. They are key for testing cosmological models, e.g.\ using redshift space distortions \citep{Kaiser87}, and for providing distances for galaxy population studies when the cosmology is assumed. Redshifts from spectroscopy (spec-$z$) generally have significantly fewer outliers, compared to the true redshift, than from photometric estimates (photo-$z$; \citealt{dahlen13}). In addition, spec-$z$ measurements are essential for accurate low-redshift luminosity estimates ($0.002 \la z \la 0.2$), where the photo-$z$ fractional error is too large, and for dynamical measurements within groups of galaxies (\citealt*{BFG90}, \citealt{robotham11}). 
Redshift surveys of large numbers of galaxies have been undertaken in recent years using multi-object spectrographs such as the Two Degree Field (2dF, \citealt{lewis02df}), Sloan Digital Sky Survey (SDSS, \citealt{smee13}), Visible Multi-Object Spectrograph (VIMOS, \citealt{lefevre03}), and Deep Imaging Multi-Object Spectrograph (DEIMOS, \citealt{faber03}). For uniformity of a survey product over a large sample, redshift measurement codes have been developed that are either fully automatic \citep{subbarao02,garilli10,bolton12} or partially automatic with some user interaction \citep{colless01,newman13}. The main techniques for spectroscopic redshift measurements are: the identification and fitting of spectral features \citep{MW95}; cross-correlation of observed spectra with template spectra \citep{TD79,kurtz92}; and $\chi^2$ fitting using linear combinations of eigenspectra \citep*{GOD98}. A widely used code is \textsc{rvsao} that allows for cross-correlation with absorption-line and emission-line templates separately \citep{KM98}. The 2dF Galaxy Redshift Survey (2dFGRS) and SDSS have used a dual method with fitting of emission line features and cross-correlation with templates after clipping the identified emission lines from the observed spectra \citep{colless01,stoughton02}. The large VIMOS surveys have used the \textsc{ez} software \citep{garilli10}, which provides a number of options including emission-line finding, cross-correlation and $\chi^2$ fitting. From SDSS Data Release 8 (DR8) onwards \citep{sdssDR8}, including the Baryon Oscillation Spectroscopic Survey (BOSS) targets, the measurements have used $\chi^2$ fitting at trial redshifts with sets of eigenspectra for galaxies and quasars \citep{bolton12}. The Galaxy And Mass Assembly (GAMA) survey is based around a redshift survey that was designed, in large part, for finding and characterising groups of galaxies \citep{driver09,robotham11}. 
The survey has obtained over 200\,000 redshifts using spectra from the AAOmega spectrograph of the Anglo-Australian Telescope (AAT) fed by the 2dF fibre positioner. AAOmega is a bench-mounted spectrograph with light coming from a 392-fibre slit, split into two beams, each dispersed with a volume-phase holographic grating and focused onto CCDs using a Schmidt camera. See \citet{sharp06} for details. Up until 2013, all the AAOmega redshifts had been obtained using \textsc{runz} \citep*{SCS04}, which is an update to the code used by the 2dFGRS \citep{colless01}. The user assigns a redshift quality for each spectrum from 1--4, which can later be changed or normalised during a quality control process \citep{driver11}. By comparing the redshifts assigned to repeated AAOmega observations of the same target, the typical redshift uncertainty was estimated to be $\sim100\,$km/s. In addition, the blunder rate was $\sim5$ per cent even when the redshifts were assigned a reliable redshift quality of 4. In order to improve the redshift reliability and uncertainties, and thus the group catalogue measurements, a fully automatic code was developed called \textsc{autoz}. Here we describe the \textsc{autoz} algorithm, which uses a cross-correlation method that works equally well with absorption or emission line templates, and that is robust to additive/subtractive residuals and other uncertainties in the reduction pipeline that outputs the spectra. This has substantially improved the GAMA redshift reliability and velocity errors (Liske et al.\ in preparation). A description of the GAMA data is given in \S~\ref{sec:data}. The method for finding the best redshift estimate is outlined in \S~\ref{sec:method}, the quantitative assessment of the confidence is described in \S~\ref{sec:confidence}, and the redshift uncertainty estimate is described in \S~\ref{sec:uncertainty}. A summary is given in \S~\ref{sec:summary}. 
| \label{sec:summary} We have developed a redshift measurement code called \textsc{autoz} for use on the GAMA AAOmega spectra. The method uses the cross-correlation technique with robust high-pass filtering suitable for galaxy and stellar types applied to the templates (Fig.~\ref{fig:hp-procedure}) and observed spectra (Fig.~\ref{fig:example-hpf}). The observed HPF spectra are inversely weighted by the variance estimated at each pixel, broadened slightly by a maximum kernel filter. To avoid giving too much weight to emission line matches, large deviations in the HPF spectra are partially clipped for both the observed spectra and templates (Fig.~\ref{fig:template26}). Lowering the weight of large deviations reduces the impact of spurious peaks caused by uncorrected artefacts. Real cross-correlation peaks are rarely adversely affected by this because there is additional signal at the correct redshift peaks from other parts of the spectra. For each observed spectrum, the cross-correlation functions are determined for every chosen template. Each function is normalized by dividing by the RMS of the turning points over a specified noise-estimate range (Table~\ref{tab:templates}). These ranges were chosen so that the value of a peak represents a similar confidence level across all the templates. The best four redshift estimates are obtained from the cross-correlation function peaks (Fig.~\ref{fig:cross-correlation}), not including peaks within 600\,km/s of a better redshift estimate. A FOM for the redshift confidence is determined using the value of the highest peak ($r_x$), and the ratio of $r_x$ to the RMS of the 2nd, 3rd and 4th peaks (Eqs.~\ref{eqn:ccsigma1to234}--\ref{eqn:fom-prelim}, Fig.~\ref{fig:fom-calc}). Overall, the procedure can be adjusted and the FOM calibrated using repeat observations within a survey. The GAMA AAOmega redshift survey has taken spectra of over 230\,000 unique targets. 
As part of a strategy of obtaining high completeness and for quality control, about 40\,000 targets have had two or more spectra taken. These repeats were used to calibrate the confidence level as a function of the FOM using a maximum likelihood method (Eqs.~\ref{eqn:tanh}--\ref{eqn:ml}, Fig.~\ref{fig:fom-calibration}). Overall, the \textsc{autoz} code has significantly improved the redshift reliability within the GAMA main sample, with a high completeness for targets with $3''$-aperture $r$-band magnitudes as faint as 21\,mag (Fig.~\ref{fig:comp-progress}). The redshift uncertainties have also been calibrated using the repeat observations, with most having redshift errors less than 50\,km/s (Fig.~\ref{fig:fibre-velocity}). \textsc{autoz} measurements will be included in public data releases from GAMA DR3 onwards. With some consideration given to adjustments --- templates, high-pass filtering scale, clipping limits, noise-estimate ranges, FOM calculation and calibration --- the fully automatic method outlined here could be used for other large galaxy redshift surveys. A key factor is using a sufficient number of repeats, both random and at the fainter end of the sample, to allow for an accurate empirical confidence calibration. | 14 | 4 | 1404.2626 |
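A toy version of this calibration step can be written down directly. The tanh form follows the parametrisation described in the text, but the paper's actual Eqs. 4--6 are not reproduced here, so the parameters, grid, and synthetic repeat data below are entirely illustrative:

```python
import math, random

def confidence(fom, a, b):
    """Assumed tanh parametrisation of P(redshift correct | FOM);
    a and b are illustrative calibration parameters."""
    return 0.5 * (1.0 + math.tanh((fom - a) / b))

def neg_log_likelihood(pairs, a, b):
    """Bernoulli likelihood over repeat observations: each pair is
    (fom, agree), with agree True when two independent redshifts match."""
    eps = 1e-12
    total = 0.0
    for fom, agree in pairs:
        p = min(max(confidence(fom, a, b), eps), 1.0 - eps)
        total -= math.log(p if agree else 1.0 - p)
    return total

# Synthetic repeats and a coarse grid-search fit (the real calibration used
# ~40 000 GAMA repeat pairs and a proper maximum likelihood optimiser).
random.seed(1)
pairs = [(f, random.random() < confidence(f, 4.5, 1.0))
         for f in (random.uniform(2.0, 8.0) for _ in range(2000))]
best = min(((a / 10.0, b / 10.0) for a in range(30, 61) for b in range(5, 21)),
           key=lambda ab: neg_log_likelihood(pairs, *ab))
```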
1404 | 1404.7145_arXiv.txt | The cascading gravity model was proposed to eliminate instabilities of the original DGP model by embedding our 4D universe into a $5D$ brane, which is itself embedded in a $6D$ bulk. Thus gravity cascades from $6D$ down to $4D$ as we decrease the length scales. We show that it is possible to extend this setup to lower dimensions as well, i.e. there is a self-consistent embedding of a $3D$ brane into a $4D$ brane, which is itself embedded in a $5D$ bulk and so on. This extension fits well into the ``vanishing dimensions" framework in which dimensions open up as we increase the length scales. | It has been argued for quite some time that our theory of gravity embodied in Einstein's general relativity needs to be modified on large cosmological scales where we run into problems like the cosmological constant problem, dark matter problem etc. One of the most interesting non-trivial modifications is the so-called DGP model \cite{a}. In that model, our universe is just a $4D$ brane embedded in a 5D space-time (where $D$ refers to the number of space-time dimensions). Unlike the original model of large extra dimensions, where extra dimensions are either compactified or warped, the bulk space of the DGP model is infinite. Fine tuning between the bulk cosmological constant and brane tension ensures that gravity appears $4D$ on reasonable scales, and becomes $5D$ only on very large cosmological scales. Such a setup was extensively studied in the context of modified gravity and cosmology. As a very interesting extension of the DGP model, the so-called cascading gravity model was proposed in \cite{d,g,h,Kaloper:2007ap}. The main motivation for it was the apparent instability of the original DGP model. In the cascading model, the appearance of ghosts is prevented by embedding the $4D$ brane in a higher dimensional $5D$ brane, which is itself embedded in a $6D$ bulk. Thus, new dimensions open up for gravity as we increase the length scale.
A priori, one does not have to stop with a $6D$ bulk, and can extend this cascading setup to even higher dimensions. On the other hand, the cascading gravity model could also be extended in the other direction, i.e. from the lower-dimensional side. Apart from pure academic interest, one motivation for this comes from the so-called ``vanishing" or ``evolving" dimensions models \cite{Anchordoqui:2010er,Anchordoqui:2010hi,Stojkovic:2013lga,review,Mureika:2011bv}. In ``vanishing dimensions", the short distance physics is lower dimensional rather than higher dimensional. Having fewer dimensions at high energies has manifold advantages, as pointed out in \cite{Anchordoqui:2010er}. Reducing the number of dimensions at short distances can be achieved by an ad hoc ordered lattice model \cite{Anchordoqui:2010er,Anchordoqui:2010hi}, or by an elaborate stringy model \cite{as}. It would be interesting to see if the cascading gravity model can also be extended to lower dimensions, which is the main goal of this paper.
1404 | 1404.0881_arXiv.txt | {The spiral host galaxy of GRB 060505 at $z=0.089$ was the site of a puzzling long-duration burst without an accompanying supernova. Studies of the burst environment by Th\"one et al. (2008) suggested that this GRB came from the collapse of a massive star and that the GRB site was a region with properties different from the rest of the galaxy. We reobserved the galaxy at high spatial resolution using the VIMOS integral-field unit (IFU) at the VLT with a spaxel size of 0.67 arcsec. Furthermore, we use long-slit high-resolution data from HIRES/Keck at two different slit positions covering the GRB site, the center of the galaxy and an HII region next to the GRB region. We compare the properties of different HII regions in the galaxy with the GRB site and study the global and local kinematic properties of this galaxy. The resolved data show that the GRB site has the lowest metallicity in the galaxy with $\sim$1/3 Z$_\odot$, but its specific SFR (SSFR) of 7.4 M$_\odot$/yr/L/L* and age (determined by the H$\alpha$ EW) are similar to other HII regions in the host. The galaxy shows a gradient in metallicity and SSFR from the bulge to the outskirts, as is common for spiral galaxies. This gives further support to the theory that GRBs prefer regions of higher star formation and lower metallicity, which, in S-type galaxies, are more easily found in the spiral arms than in the centre. Kinematic measurements of the galaxy do not show evidence for large perturbations, but a minor merger in the past cannot be excluded. This study confirms the collapsar origin of GRB 060505 but reveals that the properties of the HII region surrounding the GRB were not unique to that galaxy. Spatially resolved observations are key to understanding the implications and interpretations of unresolved GRB host observations at higher redshifts. } | GRB 060505 and its host galaxy have drawn particular attention in the GRB community for several reasons.
GRB 060505 was one of two nearby long-duration GRBs that had no accompanying supernova \citep{Fynbo06, Gehrels06, DellaValle06, GalYam06}. Due to its duration of only $\sim$ 4\,s it was suggested that it could in fact have been a short GRB, which naturally has no SN \citep{Ofek06}. However, the presence of a significant spectral lag \citep{McBreen06} and its afterglow properties \citep{Xu09} favour a long GRB origin. The host galaxy was somewhat unusual for a long-duration GRB: a late-type, moderately star-forming, solar-metallicity spiral galaxy. \cite{Thoene08}, however, found that the properties of the stellar population at the GRB site very much resembled the global properties of dwarf GRB hosts \citep[e.g.][]{Christensen04, Savaglio09,Hjorth13} concerning metallicity and star formation rate. Furthermore, it had a very young age, making the connection to a massive star even more likely. GRB progenitor models suggest that long GRBs can only be produced by metal-poor stars; otherwise, the large mass losses of the massive progenitor slow down the star such that it cannot form an accretion disk at collapse \citep[e.g.][]{Woosley06}. Indeed, most long-duration GRBs have been found in metal-poor environments or galaxies, though a few exceptions in high-metallicity environments have been found, e.g. GRB 020819 \citep{Levesque10a}, GRB 050826 \citep{Levesque10b}, GRB 080605 \citep{Kruehler12} and GRB 090323 \citep{Savaglio12}. Long-duration GRB hosts furthermore show a moderate to high star formation rate. In the low-redshift universe those conditions are mainly met in dwarf irregular and blue compact galaxies, but probably also in the outskirts of late-type galaxies. Short GRB hosts seem to show different properties \citep[e.g.][]{Berger13}, in line with the suggested origin in the coalescence of two compact objects. In recent years, spatially resolved studies of GRB and SN hosts have become increasingly important.
High-spatial-resolution imaging with the {\it HST} has been used to compare the blue-light brightness at the explosion site with that in other parts of the host. While Type II SNe are distributed evenly, GRBs and Type Ic SNe are associated with the bluest regions of their hosts \citep{Fruchter06, Kelly08}. Likewise, WC (carbon-oxygen Wolf-Rayet) stars show a spatial distribution similar to Type Ic SNe and GRBs concerning the brightness of their surrounding region, while WN (nitrogen Wolf-Rayet) stars correlate with the distribution of SNe Ib \citep{Leloudas10}. Short GRBs, in contrast, are found predominantly in dimmer regions and with larger offsets from their host galaxy, consistent with an older population and/or kicks of the progenitor system from the birth site by the preceding SN explosion \citep{Fong10}. Following the mass-metallicity relation, some more massive GRB hosts show a higher global metallicity, with solar or super-solar values. However, most of them actually have a lower metallicity at the location of the GRB \citep{Levesque10a, Graham13}. As GRBs are usually located in the outskirts of their hosts, this is expected due to the usual metallicity gradient in star-forming galaxies. However, some GRB sites still have higher metallicities than theoretical models would require. GRB 020819 \citep{Levesque10a}, for example, showed a super-solar metallicity at the GRB site; however, it was a ``dark GRB'' (no optical counterpart was detected; \citealt{Jakobsson020819}). It is still debated whether the high metallicity might be the reason that those dark GRBs are intrinsically faint \citep[see e.g.][]{Graham13} and whether we have to revise our theoretical models. The site of the short GRB 080905A \citep{Rowlinson}, in contrast, showed an old population with little star formation, as expected. Comparing GRBs and SNe, \cite{Modjaz08} found a lower metallicity for GRB sites compared to those of broad-line Type Ic SNe without a GRB.
The situation is less clear when comparing different types of SNe: while \cite{Modjaz11} finds lower metallicities for Type Ib SNe compared to Type Ic, \cite{Anderson10, Leloudas12} do not see significant differences. Recent IFU studies of SN sites indeed show differences in metallicity, SFR and age between different types of SN \citep{Ku13a, Kun13, Galbany13}. Detailed resolved studies using integral-field spectroscopy have to date only been performed for the host of GRB 980425, the first GRB observed to be connected to a broad-line Type Ic SN \citep{Christensen08}. The GRB site turned out not to be the youngest, most metal-poor or most star-forming region in the galaxy and did not show any WR features, in contrast to another star-forming region next to the GRB site. This might imply that GRBs do not need the extreme properties suggested by stellar population models and/or that other evolutionary channels such as binaries might play an additional role. However, the sample of well-studied GRB environments is still too small to draw clear conclusions on the progenitors of these interesting events. In this paper, we present integral-field spectroscopic data of the host galaxy of GRB 060505 together with high-resolution spectra at two slit positions placed along the galaxy (Sec. 2). This allows us to study the properties of the different star-forming regions in the galaxy in terms of metallicity, star-formation rate, stellar population age and extinction, and to compare the GRB region with other HII regions in the galaxy (Sec. 3). Furthermore, we investigate the kinematics of the galaxy to search for possible perturbations and other reasons for an increase in star formation at the GRB site (Sec. 4).
It also had the second-lowest redshift ever measured for a GRB, making it a fortunate case for studying the host in detail, which is currently not possible for the majority of GRB hosts. Studies like the one presented here are crucial to determine the validity of future resolved host studies at higher redshift. GRB hosts at low redshift have proven to be a very mixed bag of galaxies, probably because those bursts would go undetected at higher redshifts. This makes it even more important to carefully study those hosts in detail and to investigate possible biases. The earlier study with a longslit across the major axis of the galaxy presented in \cite{Thoene08} suggested that the GRB site is a region with rather peculiar properties compared to the rest of the galaxy. Our new study shows that most of the star-forming regions in that galaxy around the bulge in various spiral arms have similar properties to the GRB site, and that the earlier strong conclusion was mainly an artifact of the longslit placement. The GRB site, however, does have the lowest metallicity (albeit within errors) in the host galaxy. The kinematic analysis shows a rather smooth velocity field and rotation curve. The high dispersion in the centre might point to some gas inflow to an existing yet not very evolved bulge. A thorough kinemetric analysis reveals some degree of disturbance in the velocity field and we cannot exclude some minor merger event in the past. The host overall has no really extreme properties, and the SF in the region around the GRB could simply be due to the inside-out SF propagation usually observed in spiral galaxies. Our analysis shows the importance of studying low-redshift GRB hosts with spatially resolved techniques in order to properly interpret the global spectra of entire galaxies, which is all we have for the large majority of GRB hosts. GRB hosts are mostly dwarf galaxies; nevertheless, they can have different properties throughout the host, which is even more relevant for larger hosts.
Steep metallicity gradients can lead to incorrect metallicities at the GRB site, since GRBs are usually found in the outskirts of their hosts. For example, in a larger host seen edge-on, a GRB may seemingly lie at the brightest spot of the galaxy but in reality be located in an outer spiral arm (in front of or even behind the galaxy, since GRBs easily outshine their hosts), so a metallicity determined from the integrated spectrum would be wrong. Integral-field studies of galaxies at all redshifts have become a rapidly growing field. Several past and ongoing large IFU surveys such as CALIFA \citep{Sanchez12} or VENGA \citep{Blanc13} have revealed a lot of information about the gas properties and star-formation history in different kinds of galaxies, but so far they target large nearby galaxies different from the host of GRB 060505 and most other GRB hosts. With future large IFU surveys such as MANGA \footnote{http://www.sdss3.org/future/manga.php} we might even be lucky enough to observe a galaxy before and after it hosted a GRB, to determine if and how such a large explosion influences its immediate environment. | 14 | 4 | 1404.0881
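The projection bias discussed above --- an integrated spectrum overestimating the metallicity of a GRB site in the disc outskirts --- can be illustrated with a toy calculation. All numbers here are hypothetical, chosen only to show the plausible size of the effect, and the linear-gradient and exponential-light-profile assumptions are illustrative rather than fits to any real host.

```python
import numpy as np

# Toy model: a disc with a linear oxygen-abundance gradient,
# Z(r) = Z0 + grad * r   (12 + log(O/H), in dex),
# and an exponential light profile with scale length r_d.
Z0, grad = 8.7, -0.05   # central abundance [dex], gradient [dex/kpc] (assumed)
r_d = 3.0               # disc scale length [kpc] (assumed)
r_grb = 10.0            # assumed galactocentric radius of the GRB [kpc]

r = np.linspace(0.0, 15.0, 2000)
weight = 2.0 * np.pi * r * np.exp(-r / r_d)   # light contributed per annulus

# Light-weighted "integrated spectrum" abundance vs. the true site value.
Z_integrated = np.sum((Z0 + grad * r) * weight) / np.sum(weight)
Z_site = Z0 + grad * r_grb

bias = Z_integrated - Z_site   # positive: integrated value overestimates site
```

With these assumed numbers the light-weighted mean radius is about two scale lengths, so the integrated value exceeds the site value by roughly 0.2 dex --- comparable to the systematic scatter of strong-line metallicity calibrations.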
1404 | 1404.1375_arXiv.txt | We present a joint shear-and-magnification weak-lensing analysis of a sample of 16 X-ray-regular and 4 high-magnification galaxy clusters at $0.19\simlt z\simlt 0.69$ selected from the Cluster Lensing And Supernova survey with Hubble (CLASH). Our analysis uses wide-field multi-color imaging, taken primarily with Suprime-Cam on the Subaru Telescope. From a stacked shear-only analysis of the X-ray-selected subsample, we detect the ensemble-averaged lensing signal with a total signal-to-noise ratio of $\simeq 25$ in the radial range of $200$ to $3500$\,kpc\,$h^{-1}$, providing integrated constraints on the halo profile shape and concentration--mass relation. The stacked tangential-shear signal is well described by a family of standard density profiles predicted for dark-matter-dominated halos in gravitational equilibrium, namely the Navarro-Frenk-White (NFW), truncated variants of NFW, and Einasto models. For the NFW model, we measure a mean concentration of $c_{200{\rm c}}=4.01^{+0.35}_{-0.32}$ at an effective halo mass of $M_{200{\rm c}}=1.34^{+0.10}_{-0.09}\times 10^{15}M_\odot$. We show this is in excellent agreement with $\Lambda$ cold-dark-matter ($\Lambda$CDM) predictions when the CLASH X-ray selection function and projection effects are taken into account. The best-fit Einasto shape parameter is $\alpha_{\rm E}=0.191^{+0.071}_{-0.068}$, which is consistent with the NFW-equivalent Einasto parameter of $\sim 0.18$. We reconstruct projected mass density profiles of all CLASH clusters from a joint likelihood analysis of shear-and-magnification data, and measure cluster masses at several characteristic radii assuming an NFW density profile. We also derive an ensemble-averaged total projected mass profile of the X-ray-selected subsample by stacking their individual mass profiles. 
The stacked total mass profile, constrained by the shear+magnification data, is shown to be consistent with our shear-based halo-model predictions including the effects of surrounding large-scale structure as a two-halo term, establishing further consistency in the context of the $\Lambda$CDM model. | \label{sec:intro} Clusters of galaxies represent the largest cosmic structures that have reached a state in the vicinity of gravitational equilibrium. The abundance of massive clusters as a function of redshift is highly sensitive to the amplitude and growth rate of primordial density fluctuations as well as the cosmic volume-redshift relation \citep{2001ApJ...553..545H}. Clusters therefore play a fundamental role in examining cosmological models, allowing several independent tests of any viable cosmology, including the current concordance $\Lambda$ cold dark matter ($\Lambda$CDM) model defined in the framework of general relativity. Clusters, by virtue of their enormous mass, serve as giant physics laboratories for astronomers to explore the role and nature of dark matter, the physics governing the final state of self-gravitating collisionless systems in an expanding universe \citep{1972ApJ...176....1G,1996ApJ...462..563N,Taylor+Navarro2001,Lapi+Cavaliere2009a,Hjorth+2010DARKexp}, and screening mechanisms in long-range modified models of gravity whereby general relativity is restored \citep{Narikawa+Yamamoto2012}. A key ingredient of such cosmological tests is the mass distribution of clusters. In the standard picture of hierarchical structure formation, cluster halos are located at dense nodes where the filaments intersect and are still forming through successive mergers of smaller halos as well as through smooth accretion of matter along their surrounding large-scale structure (LSS). The standard $\Lambda$CDM model and its variants provide observationally testable predictions for the structure of DM-dominated halos. 
Cosmological $N$-body simulations of collisionless DM have established a nearly self-similar form for the spherically-averaged density profile $\rho(r)$ of equilibrium halos \citep[][hereafter, NFW]{1996ApJ...462..563N} over a wide range of halo masses, with some intrinsic variance associated with mass accretion histories of individual halos \citep{Jing+Suto2000,Merritt+2006,Graham+2006,Navarro+2010,Gao+2012Phoenix,Ludlow+2013}. The degree of mass concentration, $c_{200{\rm c}}=r_{200{\rm c}}/r_{\rm s}$,\footnote{The quantity $r_{200{\rm c}}$ is defined as the radius within which the mean density is $200\times$ the critical density $\rho_c(z)$ of the universe at the cluster redshift $z$, and $r_{\rm s}$ is a scale radius at which $d\ln{\rho}/d\ln{r}=-2$.} is predicted to correlate with halo mass, since DM halos that are more massive collapse later on average, when the mean background density of the universe is correspondingly lower \citep{2001MNRAS.321..559B,2007MNRAS.381.1450N}. Accordingly, cluster-sized halos are predicted to be less concentrated than less massive systems, and to have typical concentrations of $c_{200{\rm c}}\simeq 3$--$4$, compared to $c_{200{\rm c}}\simeq 5$ for group-sized halos \citep{Duffy+2008,Bhatt+2013}. Unlike individual galaxies, massive clusters are not expected to be significantly affected by baryonic gas cooling \citep{Blumenthal+1986,Mead+2010AGN,Duffy+2010,Lau+2011,Blanchard+2013} because the majority ($\sim 80\%$) of baryons in clusters comprise a hot, X-ray-emitting phase of the intracluster medium, in which the high temperature and low density prevent efficient cooling and gas contraction. Consequently, for clusters in a state of quasi-equilibrium, the form of their total mass profiles closely reflects the underlying DM distribution.
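For reference, the NFW profile and the quantities entering the concentration definition above can be written out explicitly (standard expressions consistent with the footnote, not equations reproduced from this paper):

```latex
\[
\rho(r) = \frac{\rho_{\rm s}}{(r/r_{\rm s})\,(1 + r/r_{\rm s})^{2}}\,, \qquad
M(<r) = 4\pi \rho_{\rm s} r_{\rm s}^{3}
\left[\ln(1+x) - \frac{x}{1+x}\right]\,, \quad x \equiv \frac{r}{r_{\rm s}}\,,
\]
```

with $M_{200{\rm c}} = (4\pi/3)\,200\,\rho_{\rm c}(z)\,r_{200{\rm c}}^{3}$ and $c_{200{\rm c}} = r_{200{\rm c}}/r_{\rm s}$, so that fixing the mass and concentration fully specifies the profile.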
There is a weak-lensing regime where lensing effects can be linearly related to the gravitational potential, so that it is possible to determine mass distributions in a model-free way. Weak-lensing shear offers a direct means of mapping the mass distribution of clusters \citep{1999PThPS.133...53U,2001PhR...340..291B,Hoekstra+2013} irrespective of the physical nature, composition, and state of lensing matter, providing a direct probe for testing well-defined predictions of halo structure. Lensing magnification provides a complementary, independent observational alternative to gravitational shear \citep{1995ApJ...438...49B,UB2008,vanWaerbeke+2010,Umetsu+2011,Hildebrandt+2011,Ford+2012,Zemcov+2013,Coupon+2013}. Gravitational magnification influences the surface density of background sources, expanding the area of sky, and enhancing the observed flux of background sources \citep{1995ApJ...438...49B}. The former effect reduces the effective observing area in the source plane, decreasing the source counts per solid angle. The latter effect increases the number of sources above the limiting flux because the limiting luminosity $L_{\rm lim}(z)$ at any background redshift $z$ lies effectively at a fainter limit, $L_{\rm lim}(z)/\mu(z)$, with $\mu(z)$ the magnification factor. The net effect is known as {\it magnification bias} and depends on the steepness of the luminosity function. In practice, magnification bias can be used in combination with weak-lensing shear to obtain a model-free determination of the projected mass profiles of clusters \citep{Schneider+2000,UB2008,Umetsu+2011,Umetsu2013}, effectively breaking degeneracies inherent in a standard weak-lensing analysis based on shape information alone \cite[Section \ref{subsec:wl}; see also][]{Schneider+Seitz1995}. Our earlier work has established that deep multi-colour imaging allows us to efficiently detect the observationally independent shear and magnification signals simultaneously from the same data.
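The competition between area dilution and flux boosting described above can be summarized compactly. For a background population whose cumulative counts are locally a power law, $n_{0}(>S)\propto S^{-\alpha}$, the standard magnification-bias result (e.g. the Broadhurst et al. 1995 reference cited above; this equation is not reproduced from the present paper) is

```latex
\[
n_{\mu}(>S) \;=\; \frac{1}{\mu}\, n_{0}\!\left(> \frac{S}{\mu}\right)
           \;=\; \mu^{\alpha-1}\, n_{0}(>S)\,,
\]
```

so lensing enhances the counts where they are steep ($\alpha>1$) and depletes them where they are flat ($\alpha<1$); measuring this count excess or deficit as a function of cluster-centric radius is what turns magnification into a mass-profile probe.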
The combination of shear and magnification allows us not only to perform consistency checks of observational systematics but also to enhance the precision and accuracy of cluster mass estimates \citep{Rozo+Schmidt2010,Umetsu+2012,Umetsu2013}. The Cluster Lensing And Supernova survey with Hubble \citep[CLASH,][]{Postman+2012CLASH} has been designed to map the DM distribution in a representative sample of 25 clusters, by using high-quality strong- and weak-lensing data, in combination with wide-field imaging from Suprime-Cam on the Subaru Telescope \citep[e.g.,][]{Umetsu+2011,Umetsu+2011stack,Umetsu+2012}. CLASH is a 524-orbit multi-cycle treasury {\it Hubble Space Telescope} ({\it HST}) program to observe 25 clusters at $0.18 < z < 0.89$, each in 16 filters with the Wide Field Camera 3 \citep[WFC3,][]{Kimble+2008} and the Advanced Camera for Surveys \citep[ACS,][]{Holland+2003ACS}. The CLASH sample is drawn largely from the Abell and MACS cluster catalogs \citep{Abell1958,Abell1989,Ebeling+2001MACS,Ebeling+2007,Ebeling+2010}. Twenty CLASH clusters were X-ray selected to be massive and to have a regular X-ray morphology. This selection is suggested to minimize the strong bias toward high concentrations in previously well-studied clusters selected for their strong-lensing strength, allowing us to meaningfully examine the $c$--$M$ relation for a cluster sample that is largely free of lensing bias \citep{Postman+2012CLASH}. A further sample of five clusters was selected by their high lens magnification properties, with the primary goal of detecting and studying high-redshift background galaxies magnified by the cluster potential.
In particular, we aim at using the combination of shear and magnification information to study ensemble-averaged mass density profiles of CLASH clusters and compare with theoretical expectations in the context of the $\Lambda$CDM cosmology. This work has two companion papers: The strong-lensing and weak-shear study of CLASH clusters by \citet{Merten2014clash} and the detailed characterization of numerical simulations of CLASH clusters by \citet[][hereafter, M14]{Meneghetti2014clash}. The paper is organized as follows. In Section \ref{sec:basics}, we summarize the basic theory of cluster weak gravitational lensing. In Section \ref{sec:method}, we present the formalism we use for our weak-lensing analysis which combines shear and magnification information. In Section \ref{sec:data}, we describe the observational dataset, its reduction, weak-lensing shape measurements, and the selection of background galaxies. In Section \ref{sec:clash} we describe our joint shear-and-magnification analysis of 20 CLASH clusters. In Section \ref{sec:clash_stack} we carry out stacked weak-lensing analyses of our X-ray-selected subsample to study their ensemble-averaged mass distribution. Section \ref{sec:discussion} is devoted to the discussion of the results. Finally, a summary is given in Section \ref{sec:summary}. Throughout this paper, we use the AB magnitude system, and adopt a concordance $\Lambda$CDM cosmology with $\Omega_{\rm m}=0.27$, $\Omega_{\Lambda}=0.73$, and $h\equiv 0.7h_{70}=0.7$ \citep{Komatsu+2011WMAP7}, where $H_0 = 100h\, {\rm km\, s^{-1}\,Mpc^{-1}}$. We use the standard notation $M_{\Delta_{\rm c}}$ ($M_{\Delta_{\rm m}}$) to denote the total mass enclosed within a sphere of radius $r_{\Delta_{\rm c}}$ ($r_{\Delta_{\rm m}}$), within which the mean density is $\Delta_{\rm c}$ ($\Delta_{\rm m}$) times the critical (mean background) density of the universe at the cluster redshift. All quoted errors are 68.3\% ($1\sigma$) confidence limits (CL) unless otherwise stated. 
| 14 | 4 | 1404.1375 |
|
1404 | 1404.6555_arXiv.txt | Due to their long mean free path, X-rays are expected to have an important impact on cosmic reionization by heating and ionizing the intergalactic medium (IGM) on large scales, especially after simulations have suggested that Population III (Pop III) stars may form in pairs at redshifts as high as 20--30. We use the Pop III distribution and evolution from a self-consistent cosmological radiation hydrodynamic simulation of the formation of the first galaxies and a simple Pop III X-ray binary model to estimate their X-ray output in a high-density region larger than 100 comoving Mpc$^3$. We then combine three different methods --- ray tracing, a one-zone model, and X-ray background modeling --- to investigate the X-ray propagation, intensity distribution, and long-term effects on the IGM thermal and ionization state. The efficiency and morphology of photoheating and photoionization are dependent on the photon energies. The sub-keV X-rays only impact the IGM near the sources, while the keV photons contribute significantly to the X-ray background and heat and ionize the IGM smoothly. The X-rays just below 1 keV are most efficient in heating and ionizing the IGM. We find that the IGM might be heated to over 100 K by $z=10$ and the high-density source region might reach 10$^4$ K, limited by atomic hydrogen cooling. This may be important for predicting the 21 cm neutral hydrogen signals. On the other hand, the free electrons from X-ray ionizations are not enough to contribute significantly to the optical depth of the cosmic microwave background to Thomson scattering. | 14 | 4 | 1404.6555
||
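The energy dependence summarized in the abstract above follows from the steep decline of the hydrogen photoionization cross-section with energy, $\sigma_{\rm H\,I}(E)\propto E^{-3}$ well above threshold. As an order-of-magnitude scaling (a standard estimate, not an equation taken from this paper), the comoving mean free path through a partly neutral IGM with mean neutral fraction $\bar{x}_{\rm H\,I}$ behaves as

```latex
\[
\lambda_{X} \;\propto\; \bar{x}_{\rm H\,I}^{-1}\,(1+z)^{-2}\,
\left(\frac{E}{\mathrm{keV}}\right)^{3}\,,
\]
```

which is why sub-keV photons deposit their energy near the sources while $\gtrsim 1$ keV photons traverse cosmological distances and build up a nearly uniform background, as the abstract describes.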
1404 | 1404.4882_arXiv.txt | The Galaxy Evolution Explorer (GALEX) imaged the sky in the Ultraviolet (UV) for almost a decade, delivering the first sky surveys at these wavelengths. Its database contains far-UV (FUV, $\lambda_{eff}$ $\sim$ 1528\AA) and near-UV (NUV, $\lambda_{eff}$ $\sim$ 2310\AA) images of most of the sky, including deep UV-mapping of extended galaxies, over 200~million source measurements, and more than 100,000 low-resolution UV spectra. The GALEX archive will remain a long-lasting resource for statistical studies of hot stellar objects, QSOs, star-forming galaxies, nebulae and the interstellar medium. It provides an unprecedented road-map for planning future UV instrumentation and follow-up observing programs in the UV and at other wavelengths. We review the characteristics of the GALEX data, and describe final catalogs and available tools that facilitate future exploitation of this database. We also recall highlights from the science results uniquely enabled by GALEX data so far. | \label{s_galex} The Galaxy Evolution Explorer (GALEX), a NASA {\it Small Explorer} Class mission, was launched on April 28, 2003 to perform the first sky-wide Ultraviolet surveys, with both direct imaging and grism in two broad bands, FUV ($\lambda_{eff}$ $\sim$ 1528\AA, 1344--1786\AA) and NUV ($\lambda_{eff}$ $\sim$ 2310\AA, 1771--2831\AA). It was operated with NASA support until 2012, then for a short time with support from private funding from institutions; it was decommissioned on June 28, 2013.\footnote{GALEX was developed by NASA with contributions from the Centre National d'Etudes Spatiales of France and the Korean Ministry of Science and Technology.} GALEX's instrument consisted of a Ritchey-Chr\'etien-type telescope, with a 50~cm primary mirror and a focal length of 299.8~cm. Through a dichroic beam splitter, light was fed to the FUV and NUV detectors simultaneously.
The FUV detector stopped working in May 2009; subsequent GALEX observations have only NUV imaging (Figure \ref{f_figure1}). The GALEX field of view is $\approx$1.2$^{\circ}$ diameter (1.28/1.24$^{\circ}$, FUV/NUV), and the spatial resolution is $\approx$ 4.2/5.3\as (Morrissey et al. 2007). For each observation, the photon list recorded by the two photon-counting micro-channel plate detectors is used to reconstruct an FUV and an NUV image, sampled with virtual pixels of 1.5\as. From the reconstructed image, the pipeline then derives a sky background image, and performs source photometry. Sources detected in FUV and NUV images of the same observation are matched by the pipeline with a 3\as radius, to produce a merged list for each observation (Morrissey et al. 2007). To reduce localized response variations and to maximize photometric accuracy, each observation was carried out with a 1$^{\prime}$ spiral dithering pattern. The surveys were accumulated by painting the sky with contiguous {\it tiles}, with series of such observations. A trailing mode was instead used for the latest, privately-funded observations, to cover some bright areas near the MW plane. These latest data currently are not in the public archive. At the end of the GALEX mission, the AIS survey was extended towards the Galactic plane, largely inaccessible during the prime mission phase because of the many bright stars that violated high count-rate safety limits. A survey of the Magellanic Clouds (MC), also previously unfeasible due to brightness limits, was completed at the end, relaxing the initial count-rate safety threshold. Because of the FUV detector's failure, these extensions include only NUV measurements (Figure \ref{f_figure1}).
\begin{figure*} \label{f_figure1} \vskip -.72cm \centerline{ \includegraphics[width=6.5cm]{fig1_GR6coverage_NUV.pdf} \includegraphics[width=6.5cm]{fig1_GR6coverage_FUVNUV.pdf}} \vskip -.62cm \caption{Sky coverage, in Galactic coordinates, of the GALEX imaging. The surveys with the largest area coverage are AIS (blue) and MIS (green). Observations from other surveys are shown in black (figure adapted from Bianchi et al. 2014a). Data from the privately-funded observations at the end of the mission are not shown. Left: fields observed with at least the NUV detector on; right: fields observed with both FUV and NUV detectors on. The latter constitute the BCScat's.} \end{figure*} \begin{figure*} \label{f_crowd} \vskip -.5cm \centerline{ \includegraphics[width=15cm]{aaaProceedings_NUVA2014_figcrowd_cut.pdf}} \caption{Portion of a GALEX field with the stellar cluster NGC2420 (de Martino et al. 2008). Green circles mark sources detected with our photometry; the purple contours mark sources as defined by the GALEX pipeline. The right-side image is an enlargement of the crowded region. The left image also includes a section of the field's outer edge, showing how numerous rim artefacts intrude into the photometric source detections. The rim is excluded from the BCS catalogs. } \end{figure*} \begin{figure*} \label{f_6822} \centerline{ \includegraphics[width=17cm]{aaaProceedings_NUVA2014_6822_cut.pdf}} \caption{GALEX images (FUV: blue, NUV: yellow) and optical color-composite images of NGC6822 in the Local Group (Efremova et al. 2011, Bianchi et al. 2011c, 2012) and HST view of one of its most prominent HII regions, Hubble~X (Hubble data from Bianchi et al. 2001).
Hubble~X is one of the two bright knots in the upper part of the galaxy in GALEX and ground-based images; its core is resolved into an association of young stars with HST (0.1\as resolution, or $\approx$0.2pc at a distance of 460~kpc).} \end{figure*} Much of this short review concerns GALEX's final data products (from the point of view of science applications; technical documentation is available elsewhere). This choice, at the price of confining science results to a few highlights (Section \ref{s_science}), responds to various inquiries and requests, and is timely as the almost entire (and final) database is becoming available, together with new tools that will support new investigations. | 14 | 4 | 1404.4882
|
1404 | 1404.1719_arXiv.txt | The VERTEX code is employed for multi-dimensional neutrino-radiation hydrodynamics simulations of core-collapse supernova explosions from first principles. The code is considered state-of-the-art in supernova research and it has been used for modeling for more than a decade, resulting in numerous scientific publications. The computational performance of the code, which is currently deployed on several high-performance computing (HPC) systems up to the Tier-0 class (e.g.\ in the framework of the European PRACE initiative and the German GAUSS program), however, has so far not been extensively documented. This paper presents a high-level overview of the relevant algorithms and parallelization strategies and outlines the technical challenges and achievements encountered along the evolution of the code from the gigaflops scale with the first, serial simulations in 2000, up to almost petaflops capabilities, as demonstrated lately on the SuperMUC system of the Leibniz Supercomputing Centre (LRZ). In particular, we shall document the parallel scalability and computational efficiency of VERTEX at the large scale and on the major, contemporary HPC platforms. We will outline upcoming scientific requirements and discuss the resulting challenges for the future development and operation of the code. | Theoretical modeling of core-collapse supernovae, specifically the attempt to understand the still unknown explosion mechanism from first principles, is an extremely challenging multi-dimensional, multi-scale, multi-physics problem. For decades, numerical simulations of this spectacular astrophysical phenomenon have been at the forefront of computational physics and high-performance computing, usually pushing the limits of the supercomputers of their time. 
Unlike in almost any other known (astro-)physical scenario, the weakly interacting neutrinos released during the gravitational collapse of a massive star play a subtle \emph{dynamical} role in the evolution and are thought to ultimately power the observed supernova explosion \cite{Bethe1985,Wilson1985}. The leaking of neutrinos out of the extremely dense interior of the collapsed star occurs on timescales which are relevant for the overall dynamics, and their reabsorption in the surrounding layers happens under semi-transparent conditions. As a consequence, the transport of energy, momentum and lepton number has to be treated numerically very accurately, by following the time-evolution of the neutrino distribution function in six-dimensional phase-space, as governed by the Boltzmann equation. Together with the coupling to the evolution of the stellar material (by virtue of the exchange of energy, momentum and lepton number) and the dynamical evolution of the latter, this constitutes a computationally extremely expensive radiation-hydrodynamics problem, not to mention the need for an adequate treatment of the microphysics and gravitation. Only at the beginning of this century did modeling the time-evolution of the phase-space distribution of neutrinos at the level of the Boltzmann equation, coupled to the evolution of the stellar material, become computationally tractable at all. But even with the assumption of spherical symmetry of the stellar medium, which is a severe approximation, a three-dimensional and time-dependent transport problem has to be solved, which already stressed the supercomputers of the early 2000s to their limits \cite{Rampp2000,Rampp2000a,Burrows2000,Liebendoerfer2001c,Liebendorfer2001d}.
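To see why a fully six-dimensional, time-dependent Boltzmann solution is so demanding, it helps to simply count unknowns. The grid sizes below are illustrative assumptions for this back-of-the-envelope estimate, not the actual VERTEX resolution:

```python
# Illustrative phase-space grid sizes (assumed values, not VERTEX's actual setup):
# 3 spatial dimensions, 1 energy dimension, 2 momentum-space angles.
n_r, n_theta, n_phi = 400, 128, 256      # spatial grid
n_energy = 20                            # neutrino energy bins
n_mu, n_psi = 32, 64                     # momentum-space angles
n_species = 3                            # nu_e, anti-nu_e, nu_x

full_6d = n_r * n_theta * n_phi * n_energy * n_mu * n_psi * n_species

# A moment-based scheme (as in variable-Eddington-factor transport) evolves only
# a few angular moments per energy bin instead of the full angular distribution:
n_moments = 2                            # e.g. energy density and flux
reduced = n_r * n_theta * n_phi * n_energy * n_moments * n_species

print(f"full 6D unknowns : {full_6d:.2e}")   # ~1.6e12 per time step
print(f"moment scheme    : {reduced:.2e}")   # ~1.6e9
print(f"reduction factor : {full_6d // reduced}")
```

With these assumed resolutions, the angular moment reduction alone saves three orders of magnitude in the number of unknowns per time step, which is why the approximations mentioned below are commonly adopted.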
Meanwhile, due to the ever-increasing computing power, but also thanks to algorithmic and technical innovations it has become possible to treat the full three-dimensional evolution of the stellar material coupled to increasingly accurate neutrino transport \cite[and references cited therein]{Janka2012_review,Cardall2012}. For the latter, a few --- apparently reasonable --- approximations are commonly adopted which effectively reduce the dimensionality of the transport problem, as computing genuinely six-dimensional, time-dependent transport solutions still appears out-of-reach in this context. VERTEX (\textbf{V}ariable \textbf{E}ddington Factor \textbf{R}adiation \textbf{T}ransport for Supernova \textbf{Ex}plosions) is one of the very few simulation codes at this level of physical accuracy and comprehensiveness. The code has been around since 2000, has been continuously upgraded by new physics and new algorithmic features since then, and is considered state-of-the-art in the field. Simulations with VERTEX have played a decisive role for establishing the view that the explosion mechanism of core-collapse supernovae is a genuinely multi-dimensional phenomenon \cite{Rampp2000a,Buras2003} and the code has been spear-heading multi-dimensional modeling since then. While the algorithms, their implementation in the code and its verification have been extensively documented \cite{Rampp2000,Rampp2002b,Liebendoerfer2005,Marek2006a}, and numerous physics papers have been produced \cite[and references cited therein]{Janka2012_review}, the computational performance of the code is not yet published in much detail. 
\smallskip Since the start of its development in the late 1990s, and the first simulation runs \cite{Rampp2000,Rampp2000a} on a single CPU of a NEC SX-5 vector system (gigaflops-performance scale), the VERTEX code has been continuously ported to, and used in production on all major HPC platforms (IBM Bluegene and Power, CRAY, x86, x86\_64), in particular at the three national German HPC centers, the computing center of the Max-Planck-Society (RZG), and various Tier-0 systems of the European PRACE infrastructure. During the last 15 years, with an invested man-power equivalent of almost half a century, VERTEX has evolved from a serial code for spherically-symmetric models into a highly tuned, hybrid MPI$/$OpenMP-parallelized HPC code for multi-dimensional simulations of core-collapse supernovae \cite{Buras2006a,Buras2006b,Marek2009a,Hanke2013a}. Today, the code is typically operated in production at 10\dots 100~teraflops, using tens of thousands of cores (x86\_64). Very recently, at the ``SuperMUC Extreme Scaling Workshop, 2013'' of the Leibniz Supercomputing Centre (LRZ), we have demonstrated close-to-petascale performance, using all 131\,000 cores of the SuperMUC system. Importantly, benchmarks at that scale are not mere showcases, as scientific progress with supernova modeling is still heavily computing-time limited, and there is a high demand for further optimization and parallel scaling beyond sustained petascale. Specifically, there is a strong \emph{science-driven need} \cite{Janka2012_talk} for typical simulation runs to get even bigger, i.e.\ more highly resolved (partly addressable by weak scaling), and significantly faster, i.e.\ concerning the time to solution of an individual model run, which currently takes many weeks (addressable by strong scaling). \smallskip This paper is organized as follows: Section~\ref{sect:math_model} provides a high-level overview of the algorithm and the parallelization approach adopted for VERTEX.
Section~\ref{sect:performance} describes its computational performance in terms of parallel scalability as well as absolute floating-point performance. The subsequent Section~\ref{sect:developments} outlines our ongoing and future developments before we conclude in Section~\ref{sect:conclusions}.

Conclusions: \label{sect:conclusions} We have given an overview of the supernova-simulation code PROMETHEUS-VERTEX, focussing on the neutrino-transport module, VERTEX, and specifically on the adopted parallelization approach and its basic performance characteristics on large HPC platforms. At the time of this writing, production runs using VERTEX are typically performed on 16\,000 cores with excellent parallel efficiency. We have demonstrated that it is already possible to efficiently employ the code on much bigger computers. In the course of the ``SuperMUC Extreme Scaling Workshop, 2013'' of the Leibniz Supercomputing Centre (LRZ), we were able to use up to 131\,000 cores of the SuperMUC system, thereby maintaining very high parallel efficiency and floating-point performance. The code has reached a quarter of a petaflop (double-precision), which is equivalent to about 10\% of the nominal peak performance of SuperMUC. Importantly, there is indeed a \emph{scientifically driven need} for performing simulations at this scale, and hence the results we have documented here are highly relevant benchmarks rather than mere showcases. In particular, physical completeness and scientific relevance of the employed setups have not been sacrificed for achieving high performance. However, although such huge production simulations could theoretically already be performed with very good performance, they are not yet feasible in practice. Firstly, the necessary throughput can usually not be achieved when using a significant partition of an entire HPC system, and secondly, system stability still appears to be a serious issue at this scale.
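The quoted numbers (a quarter petaflop, double precision, on 131\,000 cores, about 10\% of peak) can be cross-checked with simple arithmetic. The per-core peak assumed below (2.7~GHz Sandy Bridge cores with 8 double-precision flops per cycle) is our assumption about the SuperMUC thin nodes, not a figure stated in the text:

```python
total_flops = 0.25e15          # sustained double-precision rate quoted above
n_cores = 131_000              # cores used in the extreme-scaling run

flops_per_core = total_flops / n_cores           # ~1.9 GF/s per core

# Assumed per-core peak: 2.7 GHz x 8 DP flops/cycle (256-bit AVX add + mul pipes)
peak_per_core = 2.7e9 * 8                        # 21.6 GF/s

efficiency = flops_per_core / peak_per_core
print(f"sustained per core: {flops_per_core / 1e9:.2f} GF/s")
print(f"fraction of peak  : {efficiency:.1%}")   # consistent with the ~10% quoted
```

Under these assumptions the sustained fraction of per-core peak comes out just below 10\%, consistent with the statement in the text.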
Nevertheless, we expect that this situation is rapidly improving and first-principles simulations of core-collapse supernova explosions employing VERTEX neutrino transport will soon be routinely performed at the scale of hundreds of thousands of processor cores. Finally, we have already started preparing the code for the next generation of HPC systems which are expected to provide massive SIMT and SIMD parallelism on the node level, as indicated by current GPU accelerators or many-core coprocessors. Non-trivial work remains to be done in order for VERTEX to be able to efficiently exploit such massive parallelism with relatively weak single-thread performance, but first promising results with GPUs and the Intel Many Integrated Core (MIC) architecture have already been obtained.

(arXiv:1404.1719, 2014-04)
1404/1404.1233_arXiv.txt (arXiv:1404.1233)

Abstract: We investigate the random walk process in relativistic flow. In the relativistic flow, photon propagation is concentrated in the direction of the flow velocity due to the relativistic beaming effect. We show that, in the pure scattering case, the number of scatterings is proportional to the size parameter $\xi\equiv L/l_0$ if the flow velocity $\beta\equiv v/c$ satisfies $\beta/\Gamma\gg \xi^{-1}$, while it is proportional to $\xi^2$ if $\beta/\Gamma\ll \xi^{-1}$, where $L$ and $l_0$ are the size of the system in the observer frame and the mean free path in the comoving frame, respectively. We also examine the photon propagation in a scattering and absorptive medium. We find that, if the optical depth for absorption $\tau_{\rm a}$ is considerably smaller than the optical depth for scattering $\tau_{\rm s}$ ($\tau_{\rm a}/\tau_{\rm s} \ll 1$) and the flow velocity satisfies $\beta\gg \sqrt{2\tau_{\rm a}/\tau_{\rm s}}$, the effective optical depth is approximated by $\tau_*\simeq\tau_{\rm a}(1+\beta)/\beta$. Furthermore, we perform Monte Carlo simulations of radiative transfer and compare the results with the analytic expression for the number of scatterings. The analytic expression is consistent with the results of the numerical simulations. The expression derived in this Letter can be used to estimate the photon production site in relativistic phenomena, e.g., gamma-ray bursts and active galactic nuclei.

Introduction: Relativistic flows or jets are important phenomena in many astrophysical objects, such as gamma-ray bursts (GRBs) and active galactic nuclei (AGNs). It is widely accepted that most of the high-energy emission from these objects arises from the relativistic jets. However, their radiation mechanism is not fully understood. In particular, recent observations of GRBs have indicated the existence of thermal radiation in the spectrum of the prompt emission, which casts doubt on standard emission models invoking synchrotron emission.
For example, \citet{2010ApJ...709L.172R} argued that the spectrum of GRB 090902B can be well fitted by a quasi-blackbody with a characteristic temperature of $\sim 290~{\rm keV}$. Moreover, it has been reported that some bursts exhibit a thermal component on top of the usual non-thermal component \citep[e.g.,][]{2011ApJ...727L..33G, 2012ApJ...757L..31A}. Therefore, investigation of the thermal radiation from GRB jets is crucial to understand the radiation mechanism of GRBs. The thermal radiation from GRB jets has also been theoretically studied by several methods as follows: fully analytical studies \citep[e.g.,][]{2000ApJ...530..292M,2005ApJ...628..847R}, calculations of photospheric emission which treat the thermal radiation as the superposition of blackbody radiation from the photosphere \citep[][]{2009ApJ...700L..47L,2011ApJ...732...34L,2011ApJ...732...26M,2011ApJ...731...80N}, and detailed radiative transfer calculations with spherical outflows or approximate structures of the jets \citep[e.g.,][]{2006A&A...457..763G,2012MNRAS.422.3092G,2008ApJ...682..463P, 2010MNRAS.407.1033B,2011ApJ...732...49P, 2013MNRAS.428.2430L,2013ApJ...767..139B,2013ApJ...777...62I}. To study the thermal radiation, treatment of the photosphere needs careful consideration. \citet[][]{2009ApJ...700L..47L,2011ApJ...732...34L,2011ApJ...732...26M,2011ApJ...731...80N} performed hydrodynamical simulations of relativistic jets and calculated the thermal radiation assuming that the photons are emitted at the photosphere, which is defined by the optical depth for electron scattering $\tau_{\rm s}=1$. However, the observed photons should be produced in deeper regions with $\tau_{\rm s}\gg 1$ \citep[e.g.,][]{2013ApJ...764..157B} since the radiation and absorption processes are very inefficient near the photosphere due to the low plasma density. The produced photons propagate through the jet and cocoon, which have a complicated structure.
Thus, a radiative transfer calculation of the propagating photons that properly evaluates the photon production site is necessary to investigate the thermal radiation from GRB jets. The photon production site can be estimated by the effective optical depth $\tau_*$ \citep[e.g.,][]{1979rpa..book.....R}. However, the expression derived in \cite{1979rpa..book.....R} is based on the assumption that each scattering is isotropic in the observer frame. The assumption does not strictly hold in any moving medium because the photon propagation is concentrated in the direction of the flow due to the beaming effect in the observer frame (Figure \ref{fig1}). \begin{figure} \includegraphics[scale=0.85,clip=true]{f1.eps} \caption{Schematic pictures of photon propagation in the jet with the velocity $v\ll c$ (left) and $v\sim c$ (right). When $v\ll c$, the scatterings of the photons are approximately isotropic in the observer frame and the surface of $\tau_*=1$ is located far from the surface of $\tau_{\rm a}=1$, where $\tau_{\rm a}$ is the optical depth in the absence of the scatterings. When $v\sim c$, the photons are concentrated in the direction of the flow. Thus, the surface of $\tau_*=1$ is located close to the surface of $\tau_{\rm a}=1$. } \label{fig1} \end{figure} In this letter, we construct an expression for the effective optical depth considering the random walk process in the relativistic flow. In Section \ref{sec:analytic}, we analytically investigate the random walk process in relativistic flow and present the expression for the effective optical depth. In Section \ref{sec:numerical}, we demonstrate that the number of scatterings obtained by the analytic expression agrees with that derived by Monte Carlo simulations. Finally, summary and discussions are presented in Section \ref{sec:summary}.

Conclusions: \label{sec:summary} In this letter, we investigate the random walk process in relativistic flow.
In the pure scattering medium, the mean number of scatterings for a size parameter $\xi$ is proportional to $\xi^2$ for $\beta/\Gamma\ll \xi^{-1}$ and to $\xi$ for $\beta/\Gamma\gg \xi^{-1}$. These dependencies of the mean number of scatterings on $\xi$ are well reproduced by the numerical simulations. We also consider the combined scattering and absorption case. If the scattering opacity dominates the absorption opacity, the behavior of the effective optical depth differs depending on the velocity $\beta$. If $\beta\ll \sqrt{2\tau_{\rm a}/\tau_{\rm s}}$, the effective optical depth is $\tau_*\simeq \sqrt{\tau_{\rm a}\tau_{\rm s}/2}$, and if $\beta\gg \sqrt{2\tau_{\rm a}/\tau_{\rm s}}$, $\tau_*\simeq (1+\beta)\tau_{\rm a}/\beta$. In the GRB jets, the flow has an ultra-relativistic velocity ($\Gamma\gtrsim100$) and the electron scattering opacity dominates the absorption opacity ($\tau_{\rm s}\gg\tau_{\rm a}$) due to its low density and high temperature. Thus, the effective optical depth in the jet is approximated by $\tau_{*}\simeq 2\tau_{\rm a}$. On the other hand, the cocoon has a non-relativistic velocity \citep[e.g.,][]{2003MNRAS.345..575M} and the effective optical depth in the cocoon could be much higher than the absorption optical depth, as $\tau_*\simeq\tau_{\rm a}/\beta\gg \tau_{\rm a}$. The effective optical depth defines the photon production site as $\tau_*=1$. In subsequent papers, we will perform radiative transfer calculations for the thermal radiation from the GRB jet and cocoon taking into account the photon production at the surface of $\tau_*=1$. This enables us to correctly treat the photon number density at the photon production sites. The results could be applicable not only to the GRB jet and cocoon but also to other astronomical objects such as AGNs or black hole binaries.
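The non-relativistic limit quoted above, a mean number of scatterings proportional to $\xi^2$ in a pure-scattering medium, can be checked with a minimal isotropic random-walk Monte Carlo. This is a sketch for a static medium with unit-length steps (a simplification of an exponential free-path distribution), not the simulation code used in the Letter:

```python
import math, random

def mean_escape_steps(xi, n_photons=2000, seed=1):
    """Mean number of isotropic unit-length steps before leaving a sphere of radius xi."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_photons):
        x = y = z = 0.0
        steps = 0
        while x * x + y * y + z * z < xi * xi:
            # Isotropic direction: uniform in cos(theta) and in phi.
            mu = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - mu * mu)
            x += s * math.cos(phi)
            y += s * math.sin(phi)
            z += mu
            steps += 1
        total += steps
    return total / n_photons

n10 = mean_escape_steps(10.0)
n20 = mean_escape_steps(20.0)
print(n10, n20, n20 / n10)   # ratio should be close to 4: diffusive xi^2 scaling
```

Doubling the size parameter roughly quadruples the mean number of scatterings, the diffusive scaling that breaks down once relativistic beaming ($\beta/\Gamma\gg\xi^{-1}$) makes the walk quasi-ballistic.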
For example, super-critical accretion flows around black holes produce a high-temperature ($\sim10^8~\mathrm{K}$), low-density ($\sim 10^{-9}~\mathrm{g/cm^3}$) outflow with a semi-relativistic velocity ($\sim 0.1c$) \citep[e.g.,][]{2009PASJ...61..769K}. In these circumstances, the scattering process plays a major role in the photon diffusion and a relativistically corrected treatment is necessary even though the flow velocity is rather small compared with the speed of light.

(arXiv:1404.1233, 2014-04)
1404/1404.4620_arXiv.txt (arXiv:1404.4620)

Abstract: The BICEP2 observation of a large tensor-to-scalar ratio, $r = 0.20^{+0.07}_{-0.05}$, implies that the inflaton $\phi$ in single-field inflation models must satisfy $\phi \sim 10M_{Pl}$ in order to produce sufficient inflation. This is a problem if interaction terms suppressed by the Planck scale impose a bound $\phi \lae M_{Pl}$. Here we consider whether it is possible to have successful sub-Planckian inflation in the case of two-field inflation. The trajectory in field space cannot be radial if the effective single-field inflaton is to satisfy the Lyth bound. By considering a complex field $\Phi$, we show that a near circular but aperiodic modulation of a $|\Phi|^{4}$ potential can reproduce the results of $\phi^2$ chaotic inflation for $n_{s}$ and $r$ while satisfying $|\Phi| \lae 0.01 M_{Pl}$ throughout. More generally, for models based on a $|\Phi|^{4}$ potential, the simplest sub-Planckian models are equivalent to $\phi^{2}$ and $\phi^{4/3}$ chaotic inflation.

Introduction: The observation by BICEP2 of gravity waves from inflation \cite{bicep1,bicep2} is problematic for single-field inflation models. The Lyth bound \cite{lythb}, following an earlier observation of Starobinsky \cite{staro} and recently generalized in \cite{antusch}, implies that the change of the inflaton field during inflation must be $\sim 10 M_{Pl}$ in order to generate enough inflation\footnote{$M_{Pl} = 2.4 \times 10^{18} \GeV$.}. This is a serious problem if there are interaction terms in the inflaton potential suppressed by the Planck mass scale, since these will impose an upper bound $\phi \lae M_{Pl}$. It may be possible to suppress such interactions via a symmetry. However, this requires a quite non-trivial symmetry, for example a shift symmetry \cite{shift,lyths,kaloper} which is slightly broken to allow an inflaton potential \cite{shift}.
If Planck-suppressed interactions are not suppressed by a symmetry then the only alternative is to consider multi-field inflation. One possibility is N-flation\footnote{An alternative multi-field approach which may resolve the problem, M-flation, is described in \cite{matrix}.} \cite{nflation}. In this case, to achieve $\phi_{i} < M_{Pl}$ for the $i = 1,...,N$ scalar fields in chaotic $\phi^2$ N-flation, more than 200 scalar fields are necessary \cite{nflation}. Should it be that no symmetry exists to suppress Planck corrections to the inflaton potential and there is not a very large number of scalar fields, then we need to consider an alternative model of inflation which has sub-Planckian values of the scalar fields throughout. Here we consider whether this is possible in the case of two-field inflation. In general, this will require an unconventional potential. We will consider the fields to be the real and imaginary parts of a complex field $\Phi$. We will show that it is possible to construct a model in which $|\Phi|$ is sub-Planckian throughout and $n_{s}$ and $r$ are in good agreement with Planck and BICEP2. In order to satisfy the Lyth bound, the minimum of the potential in field space cannot be in the radial direction, as it must describe a sufficiently long path across the two-field landscape. (A similar observation was made in \cite{antusch}.) The path must be aperiodic in the phase of $\Phi$ in order to be long enough to satisfy the Lyth bound. As an existence proof of a successful two-field inflation model, we will present a simple example in which the path is nearly circular but aperiodic, so that the minimum of the potential has a spiral-like path in the complex plane.

Conclusions: In the absence of a shift or other symmetry, the Lyth bound presents a severe problem for single-field inflation if Planck-suppressed corrections to the potential exist. N-flation would require a very large number of scalar fields to keep the inflaton sub-Planckian.
The alternative is to consider multi-field inflation with a trajectory which is non-radial, so that the {\it change} in the inflaton field can be super-Planckian while the magnitude of the field remains sub-Planckian throughout. This requires an appropriate potential landscape. We have presented an explicit example of such a model. The model is based on a complex field with a nearly circular but aperiodic trajectory in the complex plane. This is achieved by a sinusoidal modulation of a $|\Phi|^{n}$ potential. The model acts as a single-field inflation model with a spiral-like trajectory. We have shown that the simplest sinusoidal modulation of a $|\Phi|^4$ potential exactly reproduces the values for $n_{s}$ and $r$ of the $\phi^2$ chaotic inflation model while maintaining $|\Phi| \lae 0.01 M_{Pl}$ throughout. More generally, the simplest models based on a $|\Phi|^4$ potential favour predictions equivalent to $\phi^2$ and $\phi^{4/3}$ chaotic inflation. The model therefore serves as an existence proof for two-field inflation models that are consistent with Planck and BICEP2 and which satisfy the Lyth bound while remaining sub-Planckian throughout. The aperiodic modulated $|\Phi|^4$ potential we have studied has no obvious physical explanation at present. On the other hand, the potential is a particularly simple example of a two-field potential landscape and might therefore be realized as part of a complete theory of Planck-scale physics. We finally comment on the relation of this model to axion inflation and monodromy \cite{dantes,axmon1,axmon2,westp,setax1,setax2,ido1,ido2}. In the axion inflation monodromy model of \cite{axmon1} and \cite{axmon2}, the effective theory consists of two real scalar fields with a broken shift symmetry. (The dynamics of monodromy in the later string versions are not explicit in the four-dimensional effective theory \cite{westp}.) 
The decay constants of the axions are approximately aligned to produce an almost flat direction which can be used for inflation. The effective theory is equivalent to a single-field inflation model with a super-Planckian inflaton and Planck-suppressed corrections eliminated by the shift symmetry. The sub-Planckian nature of the model is only apparent in the full theory with two complex fields, where the two real scalar fields are seen to be two axions and the full theory is therefore sub-Planckian if their decay constants are sufficiently sub-Planckian. In our model, the sub-Planckian nature of the model is explicit in the effective theory itself; in the simplest $n = 4$, $m = 1$ model the real scalar fields $\phi_{1}$ and $\phi_{2}$ are no larger than O(0.01)$M_{Pl}$. Our model also differs in being based on a single complex field, with the modulus of the field playing a role in the aperiodic potential. The axion inflation models of \cite{axmon1} and \cite{axmon2} make predictions which are generally different from the power-law chaotic inflation predictions of our model. The string axion monodromy models of \cite{westp,setax1,setax2}, on the other hand, are equivalent to chaotic inflation models with $V(\phi) \propto \phi$ and an additional sinusoidal modulation. (Recently this has been generalized to a range of power-law potentials \cite{west2}.) Finally, the two-axion ``Dante's Inferno'' model \cite{dantes} makes predictions which are equivalent to those of the present model for the case $m = 1$.

(arXiv:1404.4620, 2014-04)
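The chaotic-inflation benchmarks invoked above follow from the standard single-field slow-roll formulas for $V \propto \phi^{n}$ (textbook expressions, not derived in this paper): $\epsilon = n^{2}M_{Pl}^{2}/2\phi^{2}$, $\eta = n(n-1)M_{Pl}^{2}/\phi^{2}$, with inflation ending at $\epsilon = 1$ and $N$ e-folds fixing $\phi_{N}^{2} = 2nN M_{Pl}^{2} + \phi_{end}^{2}$. A quick numerical check:

```python
import math

def slow_roll(n, N):
    """n_s, r and phi_N (in units M_Pl = 1) for V ~ phi^n with N e-folds remaining."""
    phi_end2 = n * n / 2.0                 # epsilon(phi_end) = 1
    phi_N2 = 2.0 * n * N + phi_end2        # N = (phi_N^2 - phi_end^2) / (2 n)
    eps = n * n / (2.0 * phi_N2)
    eta = n * (n - 1.0) / phi_N2
    n_s = 1.0 - 6.0 * eps + 2.0 * eta
    r = 16.0 * eps
    return n_s, r, math.sqrt(phi_N2)

for n in (2.0, 4.0 / 3.0):
    n_s, r, phi = slow_roll(n, 60)
    print(f"n = {n:.2f}: n_s = {n_s:.3f}, r = {r:.3f}, phi_N = {phi:.1f} M_Pl")
```

For $n = 2$ and $N = 60$ this gives $n_s \approx 0.967$, $r \approx 0.13$, and $\phi_N \approx 15.6\,M_{Pl}$, illustrating both the chaotic-inflation targets quoted above and the super-Planckian field excursion that motivates the sub-Planckian two-field construction.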
1404/1404.1834_arXiv.txt (arXiv:1404.1834)

Abstract: We measure weak lensing mass profiles of voids from a volume-limited sample of SDSS Luminous Red Galaxies (LRGs). We find voids using an algorithm designed to maximize the lensing signal by dividing the survey volume into 2D slices, and then finding holes in this 2D distribution of LRGs. We perform a stacked shear measurement on about 20,000 voids with radii between $15-55 \mpch$ and redshifts between $0.16-0.37$. We measure the characteristic radial shear signal of voids with a signal-to-noise of 7. The mass profile corresponds to a fractional underdensity of about -0.4 inside the void radius and a slow approach to the mean density indicating a partially compensated void structure. We compare our measured shape and amplitude with the predictions of Krause et al 2013. Voids in the galaxy distribution have been extensively modeled using simulations and measured in the SDSS. We discuss how the addition of void mass profiles can enable studies of galaxy formation and cosmology.

Introduction: The first measurement of lensing from stacked galaxies was made almost twenty years ago by \citet{bbs1996}. Since then, applications of this technique to the Sloan Digital Sky Survey (SDSS) have made stacked galaxy lensing an indispensable measure of galaxy halo masses, e.g., \citet{mhs2005} and \citet{sjs2009}. More recently, in \citet{cjt2014}, we measured the stacked lensing signal of filaments connecting neighboring Luminous Red Galaxies (LRGs). We were able to study filament properties such as thickness by comparing to simulation results. Now, with the goal of obtaining such a measurement of voids, we construct a void catalog from holes in the LRG distribution of SDSS, measure the void tangential shear profile, and constrain their density profiles. There are many void finders in the literature, all differing in implementation and the resulting set of voids found. \citet{cpf2008} makes a comparison of 13 algorithms.
In recent years, methods involving a Voronoi tessellation coupled with a watershed transform have become popular \citep{pvj2007, n2008, lw2012}. These methods have also been successfully applied to data, yielding void catalogs from surveys such as SDSS \citep{slw2012}. A lensing analysis of the \citet{slw2012} catalog was carried out by \citet{mss2013}. However, despite careful attention to details of the shear measurement, the small number of voids in the catalog was likely a factor in the marginal detection significance. Recent work has studied in more detail the properties of dark matter voids in simulations. \citet{hsw2014} found that previous fits to simulation density profiles were too simple and provide fitting formulae with parameters that can be adapted to voids with a range of sizes. \citet{slw2013} and \citet{slw2014} have worked to connect the theory of voids found in the dark matter to those found in galaxies by using Halo Occupation Distribution models to mimic realistic surveys. Excursion set work has focused on providing semi-analytical models of void abundances \citep{sv2004, pls2012}, as well as connecting these models to void counts from simulations \citep{jlh2013}. Once void catalogs are constructed, they have numerous other applications. \citet{hvp2012} used a different void finder \citep{hv2002, pvh2012} to study the photometric properties of void galaxies. They find that void galaxies are bluer than those in higher density environments, but do not vary much within the void itself. Cosmological probes such as the Alcock-Paczynski test \citep{lw2012, slw2012b} and void-galaxy correlations \citep{hws2014} have been proposed. Finally, voids also provide a sensitive test of some modified gravity theories \citep{lzk2012, ccl2013, cpl14a, cpl14b, lcc15, bcl15}. Section 2 describes our basic void-finding algorithm, as well as our cuts to select a subsample useful for lensing. 
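A toy version of the 2D hole-finding idea (locate the largest circle empty of tracers within a slice) can be sketched in a few lines. The grid search below is purely illustrative and is not the paper's actual algorithm:

```python
import math, random

def largest_empty_circle(points, box=100.0, grid=50):
    """Grid-search the location in the box farthest from any tracer: a toy void center."""
    best = (0.0, 0.0, 0.0)    # (radius, x, y)
    for i in range(grid):
        for j in range(grid):
            x = (i + 0.5) * box / grid
            y = (j + 0.5) * box / grid
            r = min(math.hypot(x - px, y - py) for px, py in points)
            if r > best[0]:
                best = (r, x, y)
    return best

rng = random.Random(42)
# Uniform random tracers, with a planted hole of radius 15 centered at (30, 60):
pts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(400)]
pts = [(x, y) for x, y in pts if math.hypot(x - 30, y - 60) > 15]

r, x, y = largest_empty_circle(pts)
print(f"void center ~ ({x:.0f}, {y:.0f}), radius ~ {r:.1f}")
```

With these parameters the planted hole is far larger than any chance gap in the random tracer field, so the search recovers it; in real survey slices the difficulty is precisely that small voids become comparable to the chance gaps set by the tracer density.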
Section 3 explains our weak lensing measurement, null tests, and expected signal-to-noise. Section 4 presents our results on void density profiles, including both a fitted model and model-independent statements. Finally, Section 5 summarizes our results, caveats, and directions for future work.

Conclusions: {\it Void Lensing Detection}. We have made a high signal-to-noise measurement of gravitational lensing by large voids (Fig.~\ref{fig:gammat}), obtaining the first constraints on the dark matter density profile of voids. This measurement may be surprising given that theoretical work \citep{kcd2013} predicted that ambitious future surveys (in particular, Euclid) would be needed for measurements with comparable signal-to-noise. We differ from previous work in that our void finder and void characterization are optimized for lensing. We work with projected 2d slices and have a flexible criterion that allows for some overlap between voids. Our stacked shear measurement is analogous to galaxy-galaxy lensing in that it projects a source galaxy shape along multiple void centers. This greatly increases the total number of lens-source pairs and reduces shape noise by a factor of several. Other improvements described in Section 2 contribute to the size and quality of our void sample. We validate our detection of void lensing in several ways, using both the LRG positions around voids and standard galaxy-shear tests. Figures \ref{fig:lrg-dens} and \ref{fig:analyze} show the validation and improvements based on the LRG distribution. We verify that the tangential shear around random points and the lensing cross-component around void centers are consistent with the null hypothesis. The error analysis is analogous to that for our measurement of filament lensing with the same dataset presented in \citet{cjt2014}. {\it Void density profiles}. We measure the stacked density profile of voids with radii $R_v = 15-55 \mpch$ over the redshift range $0.16 < z < 0.37$.
We can make some model-independent statements about void properties (see Table 1). By requiring the projected density to approach the mean density at radii of $2\rv$ or larger, we can convert our measured $\Delta\Sigma$ to estimates of $\Sigma(<\rv)$ and therefore to the fractional density contrast at $\rv$. We further estimate the mass deficit $\delta M$. In addition, we find that our voids are uncompensated within twice the void radius. By $3\rv$ however, the measurements are consistent with fully compensated voids, but we see no evidence for overcompensated voids of the kind seen in simulations (at the lower end of our $\rv$ range). By fitting our measurements with a model motivated by simulations, we can draw conclusions about the voids' 3d density profile and mass deficit $\delta M$ as summarized in Table 1. Our data is consistent with a central density of $\approx 0.5 \bar{\rho}$. At the edge of the void, it is also consistent with a density below the mean density at the LRG ridge, though the corresponding 2d density of LRGs is above the mean (Fig.~\ref{fig:lrg-dens}). {\it Caveats}. The standard disclaimer with void-related work is that the results can be quite sensitive to the specific void-finder used. As highlighted above, this holds true also for our work which is designed to find voids for gravitational lensing. Our use of multiple potential void centers is helpful for lensing S/N reasons, but also makes interpretation of the resulting density profile less straightforward. We expect some miscentering between the lowest dark matter density and the emptiest places in the sparse galaxy density, and our multiple centers may also add to this miscentering in some instances. However, we have checked the results with much stricter criteria for the number of multiple centers, and find only small shifts in the parameter contours (Fig.~\ref{fig:contour}). 
Furthermore, since the density profiles are very flat between the center and half the void radius, these effects are far less problematic than for galaxy or cluster lensing. We expect our error bars accurately account for shape noise and sample variance. However, we have not accounted for possible shear calibration errors, which could bias the signal by up to $ 5\%$. In addition, two effects could result in a dilution of the signal and thus underestimation of $A_3$: inaccurate source redshifts or fake voids from chance LRG projections. We have not estimated the contribution of these effects. {\it Future Work}. We can attempt a void lensing measurement with several different variants of the void sample. Going beyond our sparse sample of LRGs, we can apply this void finder to the SDSS Main sample. Although the volume probed will be significantly smaller, this disadvantage is offset in part by the larger number of background sources available behind lower redshift voids. Furthermore, \citet{slw2014} find that the voids identified using a lower galaxy luminosity threshold have a lower central dark matter density (as expected based on their lower galaxy bias as well), which should increase the lensing effect. Nearly all detailed applications will require a careful study of our void selection via mock catalogs that create galaxies from HOD prescriptions or dark matter halos. Our measurements are now confined to $\rv > 15 \mpch$, in part because the contamination from fake voids due to projection effects gets worse as the void size gets smaller than the 2d tracer density. Mock catalogs will allow us to go down to smaller radii and estimate the number of fake and real small voids. With those numbers we can take into account the expected dilution of the signal. The comparison of the galaxy distribution with the mass distribution is of great interest. 
The question of galaxy biasing can be understood better by complementing measurements in overdense regions with measurements in underdense regions. Many other questions can be posed by stacking voids in different ways: along the major axis of the galaxy distribution, varying the environment and the properties of the galaxy population, and so on. The measurement of a magnification signal behind voids would also be of interest, in particular to provide a direct measurement of $\Sigma(R)$. Void mass functions, mass profiles, and the cross-correlation with galaxy profiles are the key ingredients in cosmological applications of voids. The velocity profiles measured in SDSS have an anisotropy and a relationship to the mass profile that carry cosmological information \citep{lw2012}. Modified gravity theories in particular predict differences in these observables. In many respects modeling voids is less problematic than modeling massive nonlinear objects like galaxy clusters, and the measurements are not affected by foreground galaxies; but the use of mock catalogs to understand the selection effects in the data is likely to be essential to interpreting survey measurements.

\medskip
\noindent(arXiv:1404.1834)
\bigskip
\noindent{\bf arXiv:1404.7848}

\medskip
\noindent{\bf Abstract.} Big Bang Nucleosynthesis (BBN) relates key cosmological parameters to the primordial abundance of light elements. In this paper, we point out that the recent observations of Cosmic Microwave Background anisotropies by the Planck satellite and by the BICEP2 experiment constrain these parameters with such a high level of accuracy that the primordial deuterium abundance can be inferred with remarkable precision. For a given cosmological model, one can obtain independent information on nuclear processes in the energy range relevant for BBN, which determine the eventual $^2$H/H yield. In particular, assuming the standard cosmological model, we show that a combined analysis of Planck data and of recent deuterium abundance measurements in metal-poor damped Lyman-alpha systems provides independent information on the cross section of the radiative capture reaction $d(p,\gamma)^3$He converting deuterium into helium. Interestingly, the result is higher than the values suggested by a fit of present experimental data in the BBN energy range ($10 - 300$ keV), whereas it is in better agreement with {\it ab initio} theoretical calculations, based on models for the nuclear electromagnetic current derived from realistic interactions. Due to the correlation between the rate of the above nuclear process and the effective number of neutrinos $\neff$, the same analysis also points to $\neff>3$. We show how this conclusion changes when assuming a non-minimal cosmological scenario. We conclude that further data on the $d(p,\gamma)^3$He cross section in the few hundred keV range, which can be collected by experiments like LUNA, may either confirm the low value of this rate or give some hint in favour of next-to-minimal cosmological scenarios.

\section{Introduction}
\label{sec:intro}
Big Bang Nucleosynthesis (BBN, see e.g.
\cite{Iocco1} for a recent overview) offers one of the most powerful methods to test the validity of the cosmological model around the MeV energy scale. Two key cosmological parameters enter BBN computations, the energy density in baryons, $\Omega_b h^2$, and the effective neutrino number, $\neff$, defined such that the energy density of relativistic particles at BBN is given by \begin{equation} \rho_\mathrm{rel}= \rho_\gamma \left(1+ \frac{7}{8}\, \left( \frac{4}{11} \right)^{4/3} \neff \right) \ , \label{eq:neffdef} \end{equation} \noindent where $\rho_\gamma$ is the Cosmic Microwave Background (CMB) photon energy density, given today by $\rho_{\gamma,0}\approx 4.8 \times 10^{-34}$ g\,cm$^{-3}$. Recent measurements of CMB anisotropies obtained by the Planck satellite are in very good agreement with the theoretical predictions of the minimal $\Lambda$CDM cosmological model. They significantly reduce the uncertainty on the parameters of this model, and provide strong bounds on its possible extensions \cite{PlanckXVI}. Assuming a given cosmological scenario and standard BBN dynamics, it is now possible to infer indirectly from Planck data the abundance of primordial nuclides with exquisite precision. For example, assuming $\Lambda$CDM, the Planck constraint on the baryon density, $\Omega_b h^2=0.02207\pm0.00027$, can be translated into a prediction for the primordial deuterium fraction using the public BBN code \texttt{PArthENoPE} \cite{parthenope}\footnote{In this paper we use a version of \texttt{PArthENoPE} where the $d(p,\gamma)^3$He reaction rate is updated to the best fit experimental determination (see section \ref{sec:nuc}). 
The deuterium fraction given by the public version of \texttt{PArthENoPE} is slightly different, but the change in the central value is at the level of 4 per mille only.} \be ^2\mbox{H}/\mbox{H}=(2.65\pm 0.07) \cdot 10^{-5} ~~(68\%~{\rm C.L.}) \ . \label{deutBBN} \ee This constraint is competitive with the most recent and precise direct observations. Recently, the authors of Ref.~\cite{Cooke:2013cba} (see also \cite{Pettini-Cooke}) presented a new analysis of all known deuterium absorption-line systems, including some new data from very metal-poor Lyman-alpha systems at redshift $z=3.06726$ (visible in the spectrum of the quasar SDSS J1358+6522) and at redshift $z=3.04984$ (seen in SDSS J1419+0829). Their result \be ^2\mbox{H}/\mbox{H}=(2.53\pm0.04)\cdot 10^{-5} ~~(68\%~{\rm C.L.})\ , \label{pettini} \ee is smaller than the (indirect, model-dependent) cosmological determination from CMB data, but has a comparable uncertainty. These two deuterium abundance determinations, while broadly consistent, are off by about two standard deviations. This small tension might well be the result of small experimental systematics, either in Planck or in astrophysical deuterium measurements. However, the point of this paper is to underline that current BBN calculations could also be plagued by systematics in the experimental determination of nuclear rates. As explained in the following, the main uncertainty for standard BBN calculations of $^2$H comes from the rate of the radiative capture reaction $d(p,\gamma)^3$He. A recent review of the experimental status for this process can be found in \cite{Adelberger:2010qa}.
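Two of the numbers above lend themselves to a quick numerical check. The sketch below evaluates the per-species radiation factor in eq.~(1) and the offset between eqs.~(2) and (3); combining the two errors in quadrature is our own simplifying assumption, not a procedure taken from the text.

```python
import math

# Per-species factor in eq. (1): each neutrino-like species contributes
# 7/8 * (4/11)^(4/3) of the photon energy density.
per_species = 7.0 / 8.0 * (4.0 / 11.0) ** (4.0 / 3.0)   # ~0.227
rho_ratio = 1.0 + per_species * 3.046                   # rho_rel/rho_gamma for N_eff = 3.046

# Offset between the CMB-inferred and astrophysical D/H values
# (in units of 1e-5; 68% C.L. errors as quoted in eqs. (2)-(3)):
d_cmb, err_cmb = 2.65, 0.07   # Planck + standard BBN
d_obs, err_obs = 2.53, 0.04   # damped Lyman-alpha systems
diff = d_cmb - d_obs
n_sigma_quad = diff / math.hypot(err_cmb, err_obs)  # ~1.5 (errors in quadrature, our choice)
n_sigma_obs = diff / err_obs                        # ~3.0 (against the smaller error alone)
```

Depending on how the two errors are combined, the offset lies between roughly $1.5\sigma$ and $3\sigma$, bracketing the level of tension discussed above.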
The low energy limit of its cross section $\sigma(E)$ (or equivalently, of the corresponding astrophysical factor $S(E)$\footnote{We recall that the energy-dependent cross section $\sigma(E)$ is related to the energy-dependent astrophysical factor $S(E)$ through $\sigma(E)= S(E) e^{-2 \pi \eta}/E$, where $\eta$ is the Sommerfeld factor.}) is well known thanks to the results of the underground experiment LUNA \cite{casella}. However, during BBN, the relevant range of center-of-mass energies is rather $E \simeq 30 - 300$ keV. For such energies, the uncertainty on the cross section is at the level of 6--10\% when fitting $S(E)$ with a polynomial expression. This translates into a theoretical error on the primordial $^2$H/H ratio of the order of 2\% (for a fixed value of the baryon density and $\neff$), comparable to the experimental error in the above cosmological determination (\ref{deutBBN}) or astrophysical determination (\ref{pettini}). Recently, a reliable {\it ab initio} nuclear theory calculation of this cross section has been performed in \cite{Viviani:1999us,Marcucci:2004sq,Marcucci:2005zc}. The uncertainty on this prediction can be conservatively estimated to be also of the order of 7\% \cite{Nollett:2011aa}. However, the theoretical result is systematically larger than the best-fit value derived from the experimental data in the BBN energy range. By plugging the theoretical estimate of the cross section into a BBN code, one finds that more deuterium is destroyed for the same value of the cosmological baryon density, and thus the predicted primordial $^2$H abundance turns out to be smaller \cite{Nollett:2011aa}. Interestingly, this could be a way to reconcile the slightly different values of $^2$H/H measured in astrophysical data and predicted by Planck. Indeed, the result quoted in eq.
(\ref{deutBBN}) using the public BBN code \texttt{PArthENoPE} \cite{parthenope} relies on a value of the $d(p,\gamma)^3$He cross section inferred from nuclear experimental data (the default value for the $d(p,\gamma)^3$He rate used in the code was calculated in \cite{Serpico:2004gx}, and agrees at the 1.4\% level with the best-fit result of \cite{Adelberger:2010qa}). Further data on this crucial cross section in the relevant energy range might be expected from experiments such as LUNA. While waiting for such measurements, one can examine to what extent the deuterium measurement of \cite{Cooke:2013cba} can be made even more compatible with Planck predictions when the rate of the reaction $d(p,\gamma)^3$He is treated as a free input parameter. We will address this issue assuming different cosmological models: the minimal $\Lambda$CDM model, $\Lambda$CDM plus extra radiation, a non-spatially-flat universe, etc. This simple exercise shows that, remarkably, present CMB data are powerful enough to provide information on nuclear rates. Moreover, we will see that our results give independent support to the theoretical calculation of \cite{Marcucci:2005zc}. Of course, this close interplay between astrophysical observations and nuclear physics is not new. It is worthwhile recalling the role that the solar neutrino problem played in the quest for a more accurate solar model, and the impact of this question on experimental efforts to measure specific nuclear cross sections. The paper is organized as follows. In the next section, we discuss in more detail the nuclear rates which are most relevant for the determination of the primordial deuterium abundance and its theoretical error. We introduce a simplified way to parameterize the level of uncertainty still affecting the $d(p,\gamma)^3$He reaction rate, found to be sufficient for our analysis. In Section III, we describe our method for fitting cosmological and astrophysical data.
We present our results in Section IV, and discuss their implications in Section V.

\section{Conclusions}
\label{sec:conclusions}
In this work, we have shown that a combined analysis of Planck CMB data and of recent deuterium abundance measurements in metal-poor damped Lyman-alpha systems provides independent information on the radiative capture reaction $d(p,\gamma)^3$He, converting deuterium into helium. The rate of this process represents the main source of uncertainty to date in the BBN computation of the primordial deuterium abundance within a given cosmological scenario, parameterized by the baryon density $\Omega_b h^2$ and the effective neutrino number $N_{\rm eff}$. The corresponding cross section has not yet been measured with sufficiently small statistical and normalization errors in the BBN center-of-mass energy range, $30 - 300$ keV. In addition, the best fit of available data appears to be systematically lower than the detailed theoretical calculation presented in~\cite{Marcucci:2005zc}. Both these issues should be addressed by new dedicated experimental campaigns. We think that an experiment such as LUNA at the underground Gran Sasso Laboratories may give an answer to this problem in a reasonably short time. In fact, with the present underground $400$~kV LUNA accelerator \cite{for} it is possible to measure the $^2{\rm H}(p,\gamma)^3{\rm He}$ cross section in the $20<E_{\rm cm}<260$ keV energy range with an accuracy better than $3\%$, i.e. considerably better than the $9\%$ systematic uncertainty estimated in \cite{ma}. This goal can be achieved by using the large BGO detector already employed in \cite{cas}. This detector ensures a detection efficiency of about $70\%$ and a large angular coverage for the photons emitted by the $^2{\rm H}(p,\gamma)^3{\rm He}$ reaction.
The accurate measurement of the $^2{\rm H}(p,\gamma)^3{\rm He}$ absolute cross section may be accomplished through a study of the angular distribution of the emitted $\gamma$-rays by means of a large Ge(Li) detector \cite{and, an1}, in order to compare the data with {\it ab initio} modeling. Our study shows that, interestingly, the combined analysis of Planck and deuterium abundance data returns a larger rate $A_2$ for this reaction than the best fit computed in \cite{Adelberger:2010qa}, where the authors exploit the available experimental information on the $d(p,\gamma)^3$He cross section. On the other hand, Planck is in better agreement with {\it ab initio} theoretical calculations. More precisely, when the reaction rate $A_2$ is chosen to match its present determination, Planck predicts a value of the primordial deuterium abundance in 2$\sigma$ tension with its direct astrophysical determination. When the same reaction rate $A_2$ is assumed instead to match theoretical calculations, the two values of the primordial deuterium abundance agree at the 1$\sigma$ level. We have shown that this conclusion holds in the minimal $\Lambda$CDM cosmological model, as well as when allowing for a free effective neutrino number. In the latter case, the global likelihood analysis of astrophysical and cosmological data shows a direct correlation between $A_2$ and $N_{\rm eff}$, so that higher values for $A_2$ are in better agreement with non-standard scenarios with extra relativistic degrees of freedom. Finally, we have shown that the inclusion of the new BICEP2 dataset also points towards a larger value for $A_2$, especially when $N_{\rm eff}$ is left free to vary. However, a running of the spectral index could bring the value of $A_2$ back into agreement with unity even when the BICEP2 dataset is considered. New experimental data on the $d(p,\gamma)^3$He reaction rate will therefore have a significant impact on the knowledge of $N_{\rm eff}$ and of $dn_s/d\ln k$ as well.
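For orientation, the footnote relation of the Introduction, $\sigma(E)=S(E)\,e^{-2\pi\eta}/E$, can be evaluated explicitly. In the sketch below, the Gamow-energy expression $2\pi\eta=\sqrt{E_G/E}$ with $E_G=2\mu c^2(\pi\alpha Z_1 Z_2)^2$ and the nuclear masses are standard inputs that we supply ourselves, and the $S$-factor value is purely illustrative; none of these numbers are taken from the text.

```python
import math

ALPHA = 1.0 / 137.036    # fine-structure constant
MP_C2 = 938.272          # proton rest energy [MeV]
MD_C2 = 1875.613         # deuteron rest energy [MeV]

def gamow_energy_keV():
    """Gamow energy E_G = 2 mu c^2 (pi alpha Z1 Z2)^2 for p + d (Z1 = Z2 = 1), in keV."""
    mu_c2 = MP_C2 * MD_C2 / (MP_C2 + MD_C2)   # reduced mass [MeV]
    return 2.0 * mu_c2 * (math.pi * ALPHA) ** 2 * 1.0e3

def sigma_from_S(E_keV, S_keVb):
    """Cross section [barn] from an S-factor [keV b] at c.m. energy E [keV],
    using sigma(E) = S(E) exp(-2 pi eta) / E with 2 pi eta = sqrt(E_G / E)."""
    two_pi_eta = math.sqrt(gamow_energy_keV() / E_keV)
    return S_keVb * math.exp(-two_pi_eta) / E_keV

# E_G ~ 657 keV for p + d; at E = 100 keV an illustrative S = 2.5e-3 keV b
# (2.5 eV b, a hypothetical value of the right order) gives sigma of a few microbarn.
```

The strong energy dependence of the Coulomb penetration factor is what makes direct measurements in the full BBN window, rather than extrapolations from the low-energy LUNA point, so valuable.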
\subsection*{Acknowledgements} We are pleased to thank L.~E.~Marcucci, who kindly provided the results of the {\it ab initio} calculations of the $d(p,\gamma)^3 \mbox{He}$ astrophysical factor described in \cite{Marcucci:2005zc}, for discussions and help. We thank the Planck Editorial Board for the internal refereeing of this work.
\bigskip
\noindent{\bf arXiv:1404.0373}

\medskip
\noindent{\bf Abstract.} The recent BICEP2 measurement of primordial gravitational waves ($r = 0.2^{+0.07}_{-0.05}$) appears to be in tension with the upper limits from WMAP ($r<0.13$ at 95\% CL) and Planck ($r<0.11$ at 95\% CL). We carefully quantify the level of tension and show that it is very significant (around 0.1\% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of the TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension, and some novel ways to constrain them.
\bigskip
\noindent{\bf arXiv:1404.6953}

\medskip
\noindent{\bf Abstract.} We study particle dynamics in self-gravitating gaseous discs with a simple cooling law prescription via two-dimensional simulations in the shearing sheet approximation. It is well known that structures arising in the gaseous component of the disc due to a gravitational instability can have a significant effect on the evolution of dust particles. Previous results have shown that spiral density waves can be highly efficient at collecting dust particles, creating significant local over-densities of particles. The degree of such concentrations has been shown to be dependent on two parameters: the size of the dust particles and the rate of gas cooling. We expand on these findings, including the self-gravity of dust particles, to see how these particle over-densities evolve. We use the {\scriptsize{PENCIL CODE}} to solve the local shearing sheet equations for gas on a fixed grid together with the equations of motion for solids coupled to the gas through an aerodynamic drag force. We find that the enhancements in the surface density of particles in spiral density wave crests can reach levels high enough to allow the solid component of the disc to collapse under its own self-gravity. This produces many gravitationally bound collections of particles within the spiral structure. The total mass contained in bound structures appears nearly independent of the cooling time, suggesting that the formation of planetesimals through dust particle trapping by self-gravitating density waves may be possible at a larger range of radii within a disc than previously thought. Density waves due to gravitational instabilities in the early stages of star formation may thus provide excellent sites for the rapid formation of many large, planetesimal-sized objects.

\section{Introduction}
The field of planet formation currently provides two methods through which large gas giant planets can form in discs around young stars.
The favoured model of planet formation is known as the core accretion model. This model proposes that planets grow via a `bottom-up' process, where a core of solid material grows from initially small, kilometre-sized objects via a series of collisions. If this core becomes massive enough, it will begin to accrete a gaseous envelope from the disc \citep{Pollack1996}. For a Jupiter-like gas giant planet to form, the solid core must reach a mass of $\sim 10$ Earth masses before the disc is depleted of gas, a process which is observationally estimated to take from $10^6$ to $10^7$ years \citep{Haisch2001}. A key uncertainty in the core accretion model is the mechanism through which the disc becomes populated with kilometre-sized solid objects, similar to those found in the asteroid belt. It is likely that these objects are assembled via collisional growth from initial small dust grains present in the Interstellar Medium (ISM) during the star formation process that creates the protoplanetary disc. However, current theory has difficulties explaining the growth of dust grains past the metre-scale -- the velocities of metre-sized objects should be larger than the critical threshold for sticking \citep[see e.g.,][]{BlumWurm2008,Guttler2010}, so individual collisions between the bodies are no longer expected to be constructive. In this case, the self-gravity of any resulting rubble pile will be too weak to allow the debris to collapse into a gravitationally bound structure. The dynamics of these smaller particles that ultimately grow to form kilometre-sized planetesimals is largely governed by the aerodynamic drag force that arises from the velocity difference between the particles and the surrounding gas. The radial pressure gradient within the disc tends to be negative, making the gas orbit with sub-Keplerian velocities. The dust is not affected by the gas pressure gradient and would orbit at Keplerian velocities in the absence of drag. 
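The velocity mismatch just described gives rise to a well-known size-dependent radial drift. A minimal sketch, assuming the standard drag-induced drift expression $v_r = 2\,\Delta v\,{\rm St}/(1+{\rm St}^2)$, with ${\rm St}$ the friction time in units of the inverse orbital frequency (a textbook result, not derived in the text; the numerical value of $\Delta v$ below is illustrative):

```python
def radial_drift(st, dv):
    """Inward radial drift speed for Stokes number st (dimensionless
    friction time) and sub-Keplerian offset dv; peaks at st = 1."""
    return 2.0 * dv * st / (1.0 + st ** 2)

dv = 3.0e3  # illustrative sub-Keplerian offset in cm/s

drift_small = radial_drift(1e-3, dv)   # tightly coupled grains: slow drift
drift_peak  = radial_drift(1.0, dv)    # fastest drift, equal to dv itself
drift_large = radial_drift(1e3, dv)    # decoupled boulders: slow again
```

The expression is symmetric under ${\rm St}\to 1/{\rm St}$, which is why both very small and very large bodies are safe while intermediate, roughly metre-sized objects drift fastest.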
The drag force exerted on the dust results in the solids losing angular momentum to the disc and drifting inward at a rate that depends on the particles' size \citep{Weid1977}. For very small grain sizes, the dust is tightly coupled to the gas in the disc and the radial drift velocities are small. For very large objects, the solids are decoupled from the gas, move in approximately Keplerian orbits and again have very small drift velocities. Particles in the intermediate size range can, however, have large drift velocities. Although the exact size range depends on the local properties of the disc, drift velocities can exceed $10^3$~cm/s for objects with sizes between 1 cm and 1 m \citep{Weid1977}. Therefore, the process through which planetesimals form must be rapid, unless this inward drift is offset. \citet{Laibe2012} have shown that there may be surface density and temperature profiles for which particles may survive this inward migration, and \citet{Rice2004,Rice2006,Gibbons2012} have shown that local pressure maxima associated with density waves due to gravitational instabilities in the disc can trap the particles, saving them from the inward drift. Nevertheless, in the standard core accretion scenario, the period of growth from micron- to decametre-sized objects is assumed to be rapid, otherwise objects in this size range would rapidly spiral inward and be accreted onto the central protostar. In thin discs, gravitational instabilities are characterized by the \cite{Toomre1964} parameter, \[ Q = \frac{c_s\Omega}{\pi G\Sigma}, \] where $c_s$ is the gas sound speed, $\Omega$ is the Keplerian rotation frequency and $\Sigma$ is the disc surface density. Axisymmetric instability occurs for $Q<1$, while non-axisymmetric instability can emerge for $Q<1.5-1.7$ \citep{Durisen2007}. If a disc is susceptible to such instabilities, depending on the thermal properties of the disc, one of two outcomes may occur.
If the cooling time is greater than some threshold, $t_{c,{\rm crit}}$, the disc will settle into a quasi-steady state, where the cooling balances the heating generated by gravitoturbulence \citep{Gammie2001}. For cooling times shorter than $t_{c,{\rm crit}}$, the disc may fragment, forming brown dwarf and/or gas giant planet type objects \citep{Boss1998}. The critical cooling time below which fragmentation occurs is commonly taken to be $t_{c,{\rm crit}}=3\Omega^{-1}$ \citep{Gammie2001, Rice2003}; however, recent high-resolution simulations suggest that this threshold may not be fully converged, indicating that the critical cooling time, $t_{c,{\rm crit}}$, may even exceed $10\Omega^{-1}$ \citep{Meru2011}. It has, however, been suggested \citep{Paardekooper2011, Lodato2011, Rice2012} that this non-convergence is a numerical issue rather than an indication that fragmentation could typically occur for $t_c > 10 \Omega^{-1}$. \citet{Paardekooper2012} do, however, suggest that there may be an intermediate range of cooling times for which fragmentation may indeed be stochastic, observing fragmentation in some simulations with cooling times as high as $t_c = 20\Omega^{-1}$. Although very few Class II objects are observed to have sufficiently massive discs for gravitational instabilities to set in \citep{Beckwith1991}, observations indicate that during the Class 0 and Class I phases, massive discs are much more common \citep{Rodriguez2005, Eisner2005}, suggesting that most, if not all, stars possess a self-gravitating disc for some period of time during the earliest stages of star formation. If this is the case, these instabilities will likely take the form of non-axisymmetric spiral structures. It has been shown that these spiral waves are highly effective at trapping the solids in the disc. \citet{Rice2004} showed, using global disc simulations, that the surface density of particles of certain sizes can be enhanced by a factor of over 100 in spiral wave structures.
\citet[][Paper I]{Gibbons2012} used local shearing-sheet simulations to expand on these findings, mimicking the conditions at a range of disc radii to study the particle trapping capabilities of spiral density waves through the disc. These results showed that gravitational instabilities are responsible for creating large over-densities in the solid component of the disc at intermediate to large orbital radii $(>20\,{\rm AU})$ within the disc. \citet{Rice2006} estimated from global disc simulations that the observed increase in the surface density of solids will lead to the creation of kilometre-scale planetesimals due to the gravitational collapse of the solids in these over-dense regions. Here we aim to extend this to study the gravitational collapse of the solids via local shearing-sheet simulations. The goal of the present work is to demonstrate how the over-densities that form in the solid component of the disc can undergo gravitational collapse as a result of the solids' self-gravity, promoting further grain growth, which can ultimately lead to the formation of planetesimals at very early evolutionary stages when the disc is still self-gravitating. This directly expands on the work in Paper I, where we studied the effect of varying the effective cooling time of the gas on the particle-trapping capabilities of spiral density waves for a range of particle sizes (friction times), but included neither the particle self-gravity nor the back-reaction from the particles on the gas via the drag force. In this paper, taking into account both these factors, we numerically study the dynamical behaviour of particles embedded in a self-gravitating disc using a local shearing sheet approximation. We investigate the possibility that density enhancements in the solid component of the disc can lead to the direct formation of gravitationally bound solid clumps and, if so, how such clumps behave.
In particular, we are interested in whether gravitationally bound accumulations of solids can form within the disc, since the formation of a large reservoir of planetesimals at early times in the disc is a major obstacle for the core accretion theory. In this regard, we would like to mention that self-gravity of the solid component has been demonstrated to be a principal agent promoting the formation of large planetesimals inside gaseous over-densities arising in compressible magnetohydrodynamic turbulence driven by the magnetorotational (MRI) instability in discs \citep{Johansen2007, Johansen2011}. The paper is organized as follows. In Section 2 we outline the disc model and equations we solve in our simulations. In Section 3 we describe the evolution of the gas and dust particles. A summary and discussion are given in Section 4.

\section{Dynamical equations}
\label{Model}
To investigate the dynamics of solid particles embedded in a self-gravitating protoplanetary disc, we solve the two-dimensional (2D) local shearing sheet equations for gas on a fixed grid, including disc self-gravity as in \citet{Gammie2001}, together with the equations of motion of solid particles coupled to the gas through an aerodynamic drag force. As mentioned in the Introduction, following \cite{Johansen2011}, we also include the self-gravity of particles to examine their collapse properties. As a main numerical tool, we employ the {\scriptsize{PENCIL CODE}}\footnote{See http://code.google.com/p/pencil-code/}. The {\scriptsize{PENCIL CODE}} is a finite-difference code of sixth order in space and third order in time (see \citet{Brandenburg2003} for full details). The {\scriptsize{PENCIL CODE}} treats solids as numerical super-particles \citep{Johansen2006,Johansen2011}.
In the shearing sheet approximation, disc dynamics is studied in the local Cartesian coordinate frame centred at some arbitrary radius, $r_0$, from the central object and rotating with the disc's angular frequency, $\Omega$, at this radius. In this frame, the $x$-axis points radially away from the central object, the $y$-axis points in the azimuthal direction of the disc's differential rotation, which in turn manifests itself as an azimuthal parallel flow characterized by a linear shear, $q$, of background velocity along the $x-$axis, ${\bf u}_0=(0,-q\Omega x)$. The equilibrium surface densities of gas, $\Sigma_0$, and particles, $\Sigma_{p,0}$, are spatially uniform. Since the disc is cool and therefore thin, the aspect ratio is small, $H/r_0\ll 1$, where $H=c_s/\Omega$ is the disc scale height and $c_s$ is the gas sound speed. The local shearing sheet model is based on the expansion of the basic 2D hydrodynamic equations of motion to the lowest order in this small parameter assuming that the disc is also razor thin \citep[see e.g.,][]{Gammie2001}. Our simulation domain spans the region $-L_x/2 \leq x \leq L_x/2$, $-L_y/2 \leq y \leq L_y/2$. As is customary, we adopt the standard shearing-sheet boundary conditions \citep{Hawley1995}, namely for any variable $f$, including azimuthal velocity with background flow subtracted, we have \[ f(x,y,t) = f(x+L_x,y-q\Omega L_xt,t),~~~~~~(x-{\rm boundary}) \] \[ f(x,y,t) = f(x,y+L_y,t),~~~~~~~(y-{\rm boundary}) \] The shear parameter $q=1.5$ for the Keplerian rotation profile adopted in this paper. \subsection{Gas density} In this local model, the continuity equation for the vertically integrated gas density $\Sigma$ is \begin{equation} \frac{\partial\Sigma}{\partial t} + {\bf\nabla}\cdot(\Sigma{\bf u}) -q\Omega x\frac{\partial\Sigma}{\partial y}-f_D(\Sigma) = 0 \end{equation} where ${\bf u}=(u_x,u_y)$ is the gas velocity relative to the background Keplerian shear flow ${\bf u}_0$. 
Due to the high-order numerical scheme of the {\scriptsize{PENCIL CODE}}, the continuity equation also includes a diffusion term, $f_D$, to ensure numerical stability and to capture shocks, \[ f_D = \zeta_D(\nabla^2 \Sigma +\nabla \textrm{ ln } \zeta_D \cdot \nabla\Sigma). \] Here the quantity $\zeta_D$ is the shock diffusion coefficient defined as \[ \zeta_D = D_{sh} \langle \max_3[(-\nabla\cdot {\bf u})_+] \rangle(\Delta x)^2 \label{shock} \] where $D_{sh}$ is a constant defining the strength of shock diffusion as outlined in Appendix B of \citet{Lyra2008a}. $\Delta x$ is the grid cell size. \subsection{Gas velocity} The equation of motion for the gas relative to the unperturbed Keplerian flow takes the form \begin{multline} \frac{\partial {\bf u}}{\partial t}+({\bf u}\cdot\nabla){\bf u}-q\Omega x \frac{\partial {\bf u}}{\partial y} = -\frac{\nabla P}{\Sigma} - 2\Omega {\bf\hat{z}}\times{\bf u}+q\Omega u_x {\bf\hat{y}}\\-\frac{\Sigma_p}{\Sigma}\,\frac{{\bf u}-{\bf v}_p}{\tau_f}+2\Omega\Delta v{\bf \hat{x}}- \nabla\psi +{\bf f}_\nu({\bf u}), \label{gvel1} \end{multline} where $P$ is the vertically integrated pressure, $\psi$ is the gravitational potential produced jointly by the perturbed gas surface density, $\Sigma-\Sigma_0$, and the vertically integrated bulk density of particles, $\Sigma_p-\Sigma_{p,0}$ (see equation 6 below). The left hand side of equation (2) describes the advection by the velocity field, ${\bf u}$, itself and by the mean Keplerian flow. The first term on the right hand side is the pressure force. The second and third terms represent the Coriolis force and the effect of shear, respectively. The fourth term describes the aerodynamic drag force, or back-reaction exerted on the gas by the dust particles \citep[see e.g.,][]{Lyra2008b,Lyra2009,Johansen2011}. This force depends on the difference between the velocity of particles ${\bf v}_p$ and the gas velocity and is inversely proportional to the stopping, or friction, time, $\tau_f$, of particles.
The fifth term mimics a global radial pressure gradient which reduces the orbital speed of the gas by the positive amount $\Delta v$ and is responsible for the inward radial migration of solids in an unperturbed disc. The sixth term represents the force due to self-gravity of the system. Finally, the code includes an explicit viscosity term, $\bf f_\nu$, \begin{align*} {\bf f_\nu} =& \nu(\nabla^2{\bf u} + \frac{1}{3}\nabla\nabla\cdot{\bf u} + 2{\bf S}\cdot \nabla \textrm{ln }\Sigma) \nonumber \\ & + \zeta_\nu[\nabla(\nabla\cdot{\bf u})+ (\nabla \textrm{ln }\Sigma + \nabla\textrm{ln }\zeta_\nu)\nabla\cdot{\bf u}], \end{align*} which contains both Navier-Stokes viscosity and a bulk viscosity for resolving shocks. Here {\bf S} is the traceless rate-of-strain tensor \[ S_{i j} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{i j}\nabla\cdot{\bf u}\right) \] and $\zeta_{\nu}$ is the shock viscosity coefficient analogous to the shock diffusion coefficient $\zeta_D$ defined above, but with $D_{sh}$ replaced by $\nu_{sh}$. \subsection{Entropy} The {\scriptsize{PENCIL CODE}} uses entropy, $s$, as its main thermodynamic variable, rather than internal energy, $U$, as used by \citet{Gammie2001}. The equation for entropy evolution is \begin{equation} \frac{\partial s}{\partial t}+({\bf u}\cdot\nabla)s - q\Omega x\frac{\partial s}{\partial y} = \frac{1}{\Sigma T}\left(2\Sigma\nu{\bf S}^2 - \frac{\Sigma c_s^2}{\gamma(\gamma-1)t_c} + f_{\chi}(s)\right) \end{equation} where the first term on the right hand side is the viscous heating term and the second term is an explicit cooling term. Here we assume the cooling time $t_c$ to be constant throughout the simulation domain and take its value to be sufficiently large that the disc does not fragment and achieves a quasi-steady state. The final term on the right hand side, $f_{\chi}(s)$, is a shock dissipation term analogous to that outlined for the density. 
\subsection{Dust particles} The dust particles are treated as a large number of numerical super-particles \citep{Johansen2006,Johansen2011} with positions ${\bf x}_p=(x_p,y_p)$ on the grid and velocities ${\bf v}_p=({\rm v}_{p,x},{\rm v}_{p,y})$ relative to the unperturbed Keplerian rotation velocity, ${\bf v}_{p,0}=(0,-q\Omega x_p)$, of particles in the local Cartesian frame. These are evolved according to \begin{equation} \frac{\mathrm{d}{\bf x}_p}{\mathrm{d}t} = {\bf v}_p - q\Omega x_p{\bf \hat{y}} \end{equation} \begin{equation} \frac{\mathrm{d}{{\bf v}_p}}{\mathrm{d}t} = - 2\Omega {\bf\hat{z}}\times{\bf v}_p+q\Omega {\rm v}_{p,x} {\bf\hat{y}}-\nabla\psi+\frac{{\bf u} - {\bf v}_p}{\tau_f}. \label{parvel} \end{equation} The first two terms on the right hand side of equation (5) represent the Coriolis force and the non-inertial force due to shear. The third term is the force exerted on the particles due to the common gravitational potential $\psi$. The fourth term describes the drag force exerted by the gas on the particles which arises from the velocity difference between the two. Unlike the gas, the particles do not feel the pressure force. In the code, the drag force on the particles from the gas is calculated by interpolating the gas velocity field to the position of the particle, using the second order spline interpolation outlined in Appendix A of \citet{Youdin2007}. The back-reaction on the gas from particles in equation (2) is calculated by the scheme outlined in \citet{Johansen2011}. 
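As a consistency sketch of equation (5) -- illustrative only, not the code's integrator -- one can integrate a single particle with drag towards a fixed sub-Keplerian gas flow ${\bf u}=(0,-\Delta v)$, neglecting self-gravity and back-reaction. The particle then relaxes to the standard drift solution $v_{p,x}=-2\Delta v\,\tau_s/(1+\tau_s^2)$, $v_{p,y}=-\Delta v/(1+\tau_s^2)$ with $\tau_s=\Omega\tau_f$, so the radial drift is fastest for marginally coupled particles with $\tau_s=1$:

```python
# Single-particle sketch of equation (5): drag towards a fixed sub-Keplerian
# gas flow u = (0, -dv), with q = 3/2, no self-gravity, no back-reaction.
# Expected steady drift: vx = -2*dv*tau_s/(1 + tau_s**2),
#                        vy = -dv/(1 + tau_s**2),  tau_s = Omega*tau_f.
Omega, q, dv, tau_f = 1.0, 1.5, 0.02, 1.0   # dv = Delta v used in the paper
vx, vy, dt = 0.0, 0.0, 1.0e-3
for _ in range(int(100.0/dt)):              # ~16 orbits; transients damped
    ax = 2.0*Omega*vy + (0.0 - vx)/tau_f
    ay = -(2.0 - q)*Omega*vx + (-dv - vy)/tau_f
    vx, vy = vx + dt*ax, vy + dt*ay
print(vx, vy)   # approx -0.02, -0.01 for tau_s = 1
```

This is the unperturbed inward migration that the pressure-gradient term $2\Omega\Delta v\,{\bf\hat{x}}$ is designed to mimic; in the full simulations it competes with trapping in the pressure maxima of the density waves.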
\subsection{Self-gravity} The gravitational potential in the dynamical equations (2) and (5) is calculated by inverting the Poisson equation, whose right-hand side contains the gas plus particle surface densities in a razor-thin disc \citep[e.g.,][]{Lyra2009} \begin{equation} \Delta\psi = 4\pi G(\Sigma-\Sigma_0+\Sigma_p-\Sigma_{p,0})\delta(z)\label{Poisson} \end{equation} using the Fast Fourier Transform (FFT) method outlined in the supplementary material of \citet{Johansen2007}. Note that the perturbed gas, $\Sigma-\Sigma_0$, and particle, $\Sigma_p-\Sigma_{p,0}$, surface densities enter equation (6), since only the gravitational potential associated with the perturbed motion (and hence the density perturbation) of both the gaseous and solid components determines the gravitational force in equations (2) and (5). Here, the surface density is Fourier transformed from the $(x,y)$-plane to the $(k_x,k_y)$-plane without the intermediate co-ordinate transformation performed by \citet{Gammie2001}. For this purpose, a standard FFT method has been adapted to allow for the fact that the radial wavenumber $k_x$ of each spatial Fourier harmonic depends on time as $k_x(t) = k_x(0) + q\Omega k_yt$ in order to satisfy the shearing sheet boundary conditions \citep[see also][]{Mamatsashvili2009}. \subsection{Units and initial conditions} We normalise our parameters by setting $c_{s0}=\Omega=\Sigma_0= 1$. The time and velocity units are $[t] = \Omega^{-1}$ and $[u] = c_{s0}$, resulting in the orbital period $T = 2\pi$. The unit of length is the scale height, $[l] = H = c_{s0}/\Omega$. The initial Toomre parameter $Q=c_{s0}\Omega/(\pi G\Sigma_0)$ is taken to be 1 throughout the domain. This sets the gravitational constant to $G = \pi^{-1}$. The surface density of gas is initially uniform and set to unity.
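Returning to the Poisson inversion above: in Fourier space the razor-thin relation reduces to $\hat{\psi}_{\bf k} = -2\pi G\,\hat{\Sigma}_{\bf k}/|{\bf k}|$ for the perturbed column density, and for a single density harmonic the analytic potential is recovered to machine precision. The sketch below is a one-dimensional check with a plain $O(N^2)$ DFT; it omits the time-dependent $k_x(t)$ remapping and is illustrative, not the code's FFT implementation.

```python
import cmath, math

def dft(a):
    n = len(a)
    return [sum(a[j]*cmath.exp(-2j*math.pi*k*j/n) for j in range(n))
            for k in range(n)]

def idft(a):
    n = len(a)
    return [sum(a[k]*cmath.exp(2j*math.pi*k*j/n) for k in range(n))/n
            for j in range(n)]

def razor_thin_potential(dsigma, L, G):
    """Apply psi_hat(k) = -2*pi*G*dsigma_hat(k)/|k| on a periodic 1D domain;
    the k = 0 (mean) mode exerts no force and is dropped."""
    n = len(dsigma)
    s_hat = dft(dsigma)
    psi_hat = [0j]*n
    for m in range(1, n):
        k = 2.0*math.pi*(m if m <= n//2 else m - n)/L
        psi_hat[m] = -2.0*math.pi*G*s_hat[m]/abs(k)
    return [z.real for z in idft(psi_hat)]

# Single-harmonic test: dsigma = A*cos(k0*x) has psi = -2*pi*G*A*cos(k0*x)/k0
n, L, G, A = 64, 80/math.pi, 1/math.pi, 0.01   # paper's units: L = 25.46H, G = 1/pi
k0 = 2.0*math.pi*4/L
x = [L*j/n for j in range(n)]
psi = razor_thin_potential([A*math.cos(k0*xi) for xi in x], L, G)
err = max(abs(p + 2.0*math.pi*G*A*math.cos(k0*xi)/k0) for p, xi in zip(psi, x))
print(err)   # tiny (machine-precision) error
```

The $1/|{\bf k}|$ kernel, rather than the $1/|{\bf k}|^2$ of a 3D periodic Poisson solve, is the signature of the razor-thin $\delta(z)$ source in equation (6).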
The simulation domain is a square with dimensions $L_x=L_y=80G\Sigma_0/\Omega^2$ and is divided into a grid of $N_x\times N_y=1024\times 1024$ cells with sizes $\Delta x\times \Delta y=L_x/N_x\times L_y/N_y$. This choice of units sets the domain size $L_x = 80H/\pi Q=25.46H$. It is worth noting that the cooling time, $t_c$, which we have assumed to be constant throughout the sheet, is in reality $t_c = t_c(\Sigma,U,\Omega)$, as described by \citet{Johnson2003}. However, the use of a constant cooling time over a sheet of this size allows us to infer the general behaviour of the dust particles at a given location within the disc. The gas velocity field is initially perturbed by small random fluctuations with the uniform rms amplitude $ \sqrt{\langle\delta {\bf u}^2\rangle} = 10^{-3}$. We take the viscosity and diffusion coefficients to be $\nu = 10^{-2}$ and $\nu_{sh} = D_{sh} = 5.0$. As shown in Paper I, typical values of the radial drift parameter, $\Delta v$, do not have a significant effect on the outcome of the simulations; therefore, in all the simulations presented below we take $\Delta v=0.02$. We use $5\times10^5$ particles, split evenly between five friction times, $\tau_f = [0.01,0.1,1,10,100]\Omega^{-1}$. \citet{Bai2010} and \citet{Laibe2012} have shown that there is a spatial resolution criterion that applies to coupled dust and gas simulations such as those outlined above. For the dust particles to be properly resolved, the grid spacing must satisfy $\Delta x < c_s\tau_f$. For the chosen set of parameters we have $\Delta x \sim 0.07c_s\tau_f$, so this condition is satisfied for all but the $\tau_f = 0.01\Omega^{-1}$ particles. As noted in Paper I, this under-resolution does not appear to create any numerical inconsistencies in the evolution of the smallest particles.
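The resolution criterion can be checked directly from the quoted numbers (a simple arithmetic verification in code units, $c_s=\Omega=1$):

```python
import math
# Check Delta x < c_s*tau_f for each particle species (code units, c_s = 1).
Lx = 80.0*(1.0/math.pi)      # L_x = 80*G*Sigma_0/Omega^2 with G = 1/pi, i.e. 25.46H
dx = Lx/1024.0               # grid cell size, roughly 0.025H
resolved = {tau: dx < tau for tau in (0.01, 0.1, 1.0, 10.0, 100.0)}
print(dx, resolved)          # only the tau_f = 0.01 species fails the criterion
```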
In all the runs below, each particle species with a fixed radius/friction time is distributed spatially uniformly with the average surface density $\Sigma_{p,0}=10^{-2}\Sigma_0$, prescribed according to the standard value of the dust-to-gas ratio, except for one low particle mass run (Fig. 1), where we take $\Sigma_{p,0}=10^{-3}\Sigma_0$. The particles are initially given random positions within the sheet and zero velocities relative to the background Keplerian flow, ${\bf v}_p(t=0)=0$. | The work presented in this paper expands on the findings of Paper I, presenting a series of simulations modelling the evolution of dust particles in the presence of spiral density waves occurring as a result of gravitational instabilities in the disc. In particular, our study focuses on the effect that the gravitational interaction between massive dust particles, or their self-gravity, has on their evolution, expanding on the massless `tracer' particles adopted previously by also taking into account the particle back-reaction on the gaseous component via the drag force. The general picture of the evolution of the dust particles remains unchanged: they evolve through quasi-steady gaseous density structures associated with density waves produced by the combined effect of disc self-gravity and cooling. Particles accumulate in the high-density/pressure regions of spiral density waves, producing significant over-densities in the solid component of the disc, with the magnitude of the particle density enhancement depending on both the cooling time of the gas and the friction time of the particles. The inclusion of the particles' self-gravity can have several significant effects on the evolution of the disc. The intensity of these effects depends on the total mass of particles in a given simulation. If this mass is low (i.e., the dust-to-gas mass ratio $\ll 0.01$), particle self-gravity has no significant effect on the evolution of either the gas or solid component of the disc.
For more canonical dust-to-gas mass ratios ($\sim 0.01$), particle self-gravity causes the particle aggregates, which are trapped in the crests of spiral density waves, to contract further. If such particle concentrations reach high enough densities, the gravitational interactions among the particles inside them become sufficiently strong to cause local collapse of the solid component of the disc, leading to the formation of gravitationally bound structures within the disc. This picture, obtained within the local shearing sheet approach permitting higher numerical resolution, is consistent with the results of analogous global simulations by \cite{Rice2006} of particle dynamics in self-gravitating discs, which also take into account the self-gravity of the dust component. Assuming typical disc parameters, however, the predicted masses of these structures are comparable to those of very large planetesimals, with the most massive identified structures comparable in mass to large asteroids and dwarf planets, potentially providing seed objects for the growth of terrestrial planets and the cores of gas giant planets as outlined in the calculations of \citet{Pollack1996}. These structures are robust enough to survive in the disc, even after the `parent' spiral density wave in which they formed has been sheared out and the remainder of the solid particles have diffused back into the disc. Interestingly, the physical mass of these structures is only weakly related to the cooling time of the disc. Although at short cooling times particles get trapped within density waves more efficiently, resulting in structures forming faster and usually accounting for a larger fraction of the particles, for a given disc, the lower surface density of material present at the larger radii associated with these shorter cooling times tends to offset this.
This suggests that density waves arising from gravitational instabilities in the gas are able to produce large-scale planetesimals, even if the effects of self-gravity in the gas are relatively weak. Determination of the full extent of the region where this mechanism can operate is beyond the scope of the present simulations: to probe inner radii down to about $10$AU, where effective (radiative) cooling times are long but at the same time Toomre's parameter is $Q\sim 1$, i.e., the effect of gas self-gravity can still be appreciable \citep[e.g.,][]{Boley2006, RiceArm2009, Cossins2010}, long simulation times are required, which are not feasible for the simulations posed here. For such large cooling times, numerical viscosity will begin to dominate the shear stresses, requiring us to perform higher resolution studies. In summary, the presented results tend to support and expand upon those obtained in \citet{Rice2006} and Paper I, suggesting an attractive scenario for the rapid creation of a reservoir of planetesimals, along with several very massive objects of $\sim 0.01$ Earth masses. One of the main findings of this study has been to demonstrate the possibility for this mechanism to form planetesimals at a larger range of radii than previously thought \citep[see e.g.,][]{ClarkeLodato2009}. This process potentially solves a major problem in the standard planet formation scenario. Rather than rapidly migrating into the central star, centimetre- to metre-sized particles become concentrated in self-gravitating spiral structures. The densities achieved can then lead to planetesimal formation via direct gravitational collapse of the particle aggregates. In this scenario, kilometre-sized planetesimals form very early, removing a major bottleneck in the planet formation process.
However, for a fuller understanding of the role of this scenario in planet formation, one should study the process of subsequent growth and interaction of these planetesimals, which then decouple from the gas and should be dominated by their own gravitational attraction. In this paper, we investigated the dynamics of gas and dust particles in an idealized razor-thin disc, so the limitations of such a 2D model and its extension to the three-dimensional (3D) case should be discussed. For the gaseous component, the description of gravitational instabilities within the 2D shearing sheet model is acceptable \citep[see e.g.,][]{Goldreich1978,Gammie2001}, since the characteristic horizontal length scale of the instability and induced structures (density waves) is larger than the vertical one \citep{Goldreich1965,Romeo1992,Mamatsashvili2010,Shi2013}. As a result, the gas motion associated with self-gravitating density waves occurs primarily in the disc plane. The situation is more complicated for dust particles, since in the 2D case we cannot take into account their motions perpendicular to the disc mid-plane, or sedimentation, which depends on the particle size -- smaller particles are well mixed with the gas, essentially do not sediment and closely follow the gas, whereas particles of larger (centimetre to metre) size gradually settle towards the disc midplane on a timescale of a few orbital times \citep{Goldreich1973}. This implies that the back-reaction drag force from the particles on the gas, which we calculated in terms of the ratio of the vertically integrated surface densities of the particles and gas, $\Sigma_p/\Sigma$, in equation (2), is strictly speaking valid only if the particles are well mixed with the gas.
For larger particles, as they settle towards the midplane, the ratio of the bulk density of particles to the volume gas density there, $\rho_p/\rho$, is expected to be larger than the ratio of the corresponding surface densities, $\Sigma_p/\Sigma$, and since in 2D the particles and the gas share the same infinitely thin scale height, the back-reaction drag force from the particles on the gas is underestimated in the 2D case \citep{Youdin2005,Lyra2008b,Lyra2009}. In the 3D case, the stronger back-reaction force on the gas from the settled dust particles close to the midplane is known to lead to streaming \citep{Youdin2005,Youdin2007} and Kelvin-Helmholtz \citep{Sekiya1998,Youdin2002,Johansen2006} instabilities. The streaming instability enhances particle clumping, thus aiding collapse \citep{Johansen2007b}, while the Kelvin-Helmholtz instability causes vertical stirring of the dust layer \citep[e.g.,][]{Johansen2006}. To study in detail these 3D effects related to particle dynamics in the presence of gas and particle self-gravity, and how they compete with the process of particle trapping in density waves, one has to carry out more extensive simulations in the 3D stratified shearing box. In the present local analysis, following analogous 2D global simulations of particle-gas dynamics by \citet{Lyra2008b,Lyra2009}, we have restricted ourselves to expressing the back-reaction drag force through the particle and gas column densities. Evidently, this is a simplification and is meant as an initial step towards understanding all the above complex ingredients of particle dynamics in self-gravitating discs. Nevertheless, such a 2D approach allows us to gain insight into the characteristics of particle accumulation in overdense structures due to self-gravity. In regard to the above, a question may arise as to whether there is still a way to incorporate sedimentation of the particles in the 2D model of gas-dust coupling.
When the particles sediment, the particle scale height is set by the balance between turbulent stirring (diffusion) and vertical gravity \citep[e.g.,][]{Johansen2005,Fromang2006}. Being controlled by the drag force, the turbulent diffusion and therefore the equilibrium scale height of solids $H_p$ depend on the particle radius (friction time). As a consequence, larger particles settle into thinner layers than smaller ones (obviously particle scale heights are different from that of gas). Inside density waves, where, as mentioned above, motion is horizontal, vertical turbulent motions are expected to be weaker bringing the layer of solids closer to a 2D quasi-static configuration, but a dependence on particle size is still expected. Provided these scale heights of solids are known, one could, in principle, find particle surface density as $\Sigma_p\approx 2\rho_pH_p$ and similarly for the gas $\Sigma\approx 2\rho H$ and in this way relate the ratios of dust to gas volume and surface densities, $\Sigma_p/\Sigma\approx (H_p/H)(\rho_p/\rho)$. However, in the 2D case, $H_p$ remains largely uncertain, as it depends on the vertical stirring properties of gravitoturbulence (and other above-mentioned instabilities which will develop in 3D) and should be self-consistently determined through 3D analysis. | 14 | 4 | 1404.6953 |
1404 | 1404.1268_arXiv.txt | Some essential features of the ion plasma wave in both kinetic and fluid descriptions are presented. The wave develops at wavelengths shorter than the electron Debye radius. Thermal motion of electrons at this scale is such that they overshoot the electrostatic potential perturbation caused by ion bunching, which consequently propagates as an unshielded wave, completely unaffected by electron dynamics. So in the simplest fluid description, the electrons can be taken as a fixed background. However, in the presence of magnetic field and for the electron gyro-radius shorter than the Debye radius, electrons can participate in the wave and can increase its damping rate. This is determined by the ratio of the electron gyro-radius and the Debye radius. In interpenetrating plasmas (when one plasma drifts through another), the ion plasma wave can easily become growing and this growth rate is quantitatively presented for the case of an argon plasma. | Introduction }% The ion plasma (IP) mode has been predicted long ago \citep{ses, kur} and experimentally confirmed in \citet{ba,ba2} (but see also in \citet{ses1}, \citet{ver}, \citet{dg}, \citet{kra}, \citet{arm}). Yet, the mode is not frequently discussed in textbooks (as exceptions see \citet{baum} and \citet{bit}), probably mostly because it is expected to be strongly damped \citep{be} and therefore of no practical importance. In fact, in most of the books the curve depicting its low frequency counterpart, i.e., the longitudinal ion acoustic (IA) wave, saturates towards the ion plasma frequency $\omega_{pi}$, and the longitudinal mode is absent above it. Such a profile of the IA wave is simply the result of the standardly assumed quasi-neutrality in the perturbed state. However, the quasi-neutrality condition is not necessarily satisfied for short wavelengths, and the line depicting the longitudinal mode indeed continues even above $\omega_{pi}$. 
The mode in this frequency regime may play an important role in plasmas with multiply ionized ions, where at the same time $z_i T_e/T_i\gg 1$, when its damping is considerably reduced; here $z_i$ is the ion charge number. Particle-in-cell simulations of this space-charge wave at the ion plasma frequency are performed by \citet{jon} in order to accelerate particles to high energies. More recently, the IP mode has been discussed by \citet{dra}, where an intense pump wave is used to increase the frequency of an ordinary ion-acoustic wave to frequencies near the ion-plasma frequency, and the wave obtained in such a manner is called an induced IP wave. The physics of the ion plasma mode is very briefly described by \citet{be} and \citet{dra}. For wavelengths satisfying the condition $k \lambda_{de} \geq 1$, where $\lambda_{de}$ is the electron Debye radius, the chaotic motion of the electrons is such that they cannot effectively shield charge fluctuations at such short wavelengths. The electrons simply overshoot the potential perturbations caused by ion oscillations and consequently act only as a background (which is neutralizing only in the equilibrium, but not in the perturbed state). The electron and ion roles are thus reversed as compared to the electron (Langmuir) plasma wave. Compared with the ion acoustic wave, where the perturbed thermal pressure acts as the restoring force, in the ion plasma wave the Debye shielding is ineffective and the restoring force is the electrostatic Coulomb force. So {\em the ion density} oscillates at {\em the ion-plasma frequency} instead of at the acoustic frequency. The absence of the Debye shielding associated with the IP wave may affect various other phenomena in plasmas \citep{dra}, like Stark broadening, braking radiation processes, atomic transitions, etc.
Therefore, identifying possible sources of the IP mode (like in the case of interpenetrating plasmas, see later in the text) is of great importance, and it can help in correctly interpreting observations in space and astrophysical plasmas. On the other hand, taking the solar wind parameters \citep{baum} at 1 AU ($T_e=1.5\cdot 10^5$ K, $T_i=1.2\cdot 10^5$ K, $n_0\approx 5\cdot 10^6$ m$^{-3}$) reveals that the Debye radius for both electrons and protons is around 11 m. The wavelengths of the IP wave are below $2 \pi \lambda_{de}$ and can thus exceed the typical size of space satellites, which are of the size of a fridge or bigger (cf. the Messenger Mercury probe is about $1.3 \times 1.4 \times 1.8$ m, while the Ulysses solar probe is around $3.3 \times 3.2 \times 2.1$ m in size). It is thus obvious that in such an environment a satellite may appear immersed in the unshielded electric field of a passing IP wave. A growing IP wave can consequently affect the performance of its electric installations, and can lead to surface charging and sparks. It is important to stress that both modes (IA and IP) belong to the same longitudinal branch of the dispersion equation, yet with different wavelengths, $k\lambda_{de}<1$ and $k\lambda_{de}>1$, respectively. The physics of the longitudinal electrostatic wave in these two wavelength limits is completely different, as described above, which justifies the two different names (IA and IP) being used. | The ion plasma wave is rarely studied in the literature. Most likely this is due to the fact that in a simple electron-ion plasma, and with increased ion temperature, the mode is (kinetically) more strongly damped than the ion acoustic wave. These features are presented in Figs.~\ref{f1},~\ref{f2}.
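The two wavelength limits can be made explicit with the textbook fluid dispersion relation for longitudinal ion oscillations with Boltzmann electrons and cold ions, $\omega^2=\omega_{pi}^2\,k^2\lambda_{de}^2/(1+k^2\lambda_{de}^2)$ -- a standard fluid result quoted here for illustration, not this paper's kinetic treatment. For $k\lambda_{de}\ll 1$ it reduces to the acoustic branch $\omega\approx k c_s$ (with $c_s=\omega_{pi}\lambda_{de}$), while for $k\lambda_{de}\gg 1$ the frequency saturates at $\omega_{pi}$:

```python
import math

def omega_over_wpi(klam):
    """Textbook fluid dispersion (cold ions + Boltzmann electrons):
    omega/omega_pi as a function of k*lambda_de."""
    return klam/math.sqrt(1.0 + klam*klam)

print(omega_over_wpi(0.01))   # approx 0.01: ion acoustic limit, omega = k*c_s
print(omega_over_wpi(100.0))  # approx 1: ion plasma limit, omega -> omega_pi
```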
On the other hand, the mode develops at spatial scales below the electron Debye radius, at which coherent and organized ion motion due to the electrostatic force is usually not expected, and the mode has thus remained out of the focus of researchers, although it has been experimentally verified \citep{ba,ba2}. However, in the case of a mixture of several ion species, the damping of the IP wave can be reduced, and in fact it can become lower than the damping of the IA wave in such an environment. This is shown quantitatively in Sec.~\ref{sec3} in the discussion related to Fig.~\ref{f3}. Within kinetic theory, collisions additionally damp the IP mode, but this damping can be smaller than the damping of the IA mode for the same parameters. In a fluid description, the ion plasma wave can be discussed without losing any physics by treating the electrons as a completely fixed background. In interpenetrating plasmas containing free energy in the flowing plasma component, the IP mode can become growing due to purely kinetic effects. Such an instability, shown in Sec.~\ref{sec3}, develops for a speed of the flowing plasma that is well below the acoustic speed in the standing plasma. In addition, the IP mode is shown to be destabilized more easily than the IA mode. The analysis presented in this work suggests that the ion plasma wave may be much more abundant than expected. In the solar wind environment at 1 AU distance from the Sun, this wave may develop at wavelengths below $2\pi \lambda_{de}\sim 70$ m, and with frequencies of the order of a few kHz. For a magnetic field \citep{crav} of around $7$ nT this yields $\lambda_{de}/\rho_e\simeq 0.01$, while at the same time $\rho_i/\lambda_{de}\simeq 4000$. So for both the ions and the electrons involved in the IP wave motion the magnetic field plays no role.
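The solar-wind estimates quoted above are easy to reproduce numerically (a simple check in SI units, with the physical constants rounded to four digits):

```python
import math
# Solar wind at 1 AU: Te = 1.5e5 K, Ti = 1.2e5 K, n = 5e6 m^-3, B = 7 nT.
eps0, kB, qe = 8.854e-12, 1.381e-23, 1.602e-19
me, mp = 9.109e-31, 1.673e-27
Te, Ti, n, B = 1.5e5, 1.2e5, 5.0e6, 7.0e-9

lam_de = math.sqrt(eps0*kB*Te/(n*qe*qe))   # electron Debye radius, ~11-12 m
rho_e = math.sqrt(kB*Te/me)*me/(qe*B)      # electron thermal gyro-radius
rho_i = math.sqrt(kB*Ti/mp)*mp/(qe*B)      # proton thermal gyro-radius
print(lam_de, lam_de/rho_e, rho_i/lam_de)  # roughly 12 m, 0.01, 4000
```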
In the solar corona, the IP mode implies wavelengths of a few centimeters or shorter, and frequencies in the range of $10^7$ Hz or higher, and it is again unaffected by the magnetic field, regardless of the direction of propagation. The short wavelength of the wave implies that it may propagate even in plasmas with a magnetic field, and at any angle with respect to the magnetic field vector. In such magnetized plasmas, the electrons can affect the IP wave behavior on the condition that their gyro-radius is shorter than the Debye radius, or equivalently when the electron plasma frequency is below the electron gyrofrequency, $\omega_{pe}<\Omega_e$. This is shown in Sec.~\ref{sec5}. In the case of a growing mode, like the one excited in the interpenetrating plasmas discussed in Sec.~III, the energy of the wave can be channeled into internal (kinetic) plasma energy by the same stochastic heating mechanism known to work in the case of the ion acoustic wave. This mechanism is described and experimentally verified for the IA wave in \citet{sk1, sk2}. This means that the mentioned macroscopic flows in the solar atmosphere, as an example of interpenetrating plasmas that drive the IP wave unstable, may directly heat the solar plasma. In the case of an IP wave propagating along magnetic flux tubes in the solar atmosphere, there may be refraction and self-focusing of the wave front if the density is greater in the external regions of the tube. Such a ducting effect is known to play a role in electromagnetic wave propagation \citep{be, vld2} as well. For the IP wave this may result in greater amplitudes, and such a linearly focused unshielded electric field of the wave may cause more efficient acceleration and energization of particles, similar to the case of experimentally amplified IP waves \citep{jon}. \vfill \pagebreak | 14 | 4 | 1404.1268
1404 | 1404.6354_arXiv.txt | We present the results of 45 transit observations obtained for the transiting exoplanet HAT-P-32b. The transits have been observed using several telescopes mainly throughout the YETI network. In 25 cases, complete transit light curves with a timing precision better than $1.4\:$min have been obtained. These light curves have been used to refine the system properties, namely inclination $i$, planet-to-star radius ratio $R_\textrm{p}/R_\textrm{s}$, and the ratio between the semimajor axis and the stellar radius $a/R_\textrm{s}$. First analyses by \citet{Hartman2011} suggest the existence of a second planet in the system, thus we tried to find an additional body using the transit timing variation (TTV) technique. Taking also literature data points into account, we can explain all mid-transit times by refining the linear ephemeris by \DeltaP. Thus we can exclude TTV amplitudes of more than \Amplitude. | \label{sec:Introduction} Since the first results of the {\it Kepler} mission were published, the number of known planet candidates has enlarged tremendously. Most {\it hot Jupiters} have been found in single planetary systems and it was believed that those kind of giant, close-in planets are not accompanied by other planets \citep[see e.g.][]{Steffen2012}. This result has been obtained analysing 63 {\it Kepler} hot Jupiter candidates and is in good agreement with inward migration theories of massive outer planets, and planet--planet scattering that could explain the lack of additional close planets in hot Jupiter systems. Nonetheless, wide companions to hot jupiters have been found, as shown e.g. in \citet{Bakos2009} for the HAT-P-13 system. One has to state, though, that the formation of hot Jupiters is not yet fully understood (see \citealt{Steffen2012} and references therein for some formation scenarios, and e.g. \citealt{Lloyd2013} for possible tests). 
Recently \citet{Szabo2013} reanalysed a larger sample of 159 {\it Kepler} candidates and in some cases found dynamically induced {\it Transit Timing Variations (TTVs)}. If the existence of additional planets in hot Jupiter systems can be confirmed, planet formation and migration theories can be constrained. Since, according to \citet{Szabo2013}, there is only a small fraction of hot Jupiters believed to be part of a multiplanetary system, it is important to analyse those systems where an additional body is expected. In contrast to e.g. the Kepler mission, where a fixed field on the sky is monitored over a long time span, our ongoing study of TTVs in exoplanetary systems only performs follow-up observations of specific promising transiting planets where additional bodies are suspected. The targets are selected by the following criteria: \begin{itemize} \vspace{-0.4em} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[(i)] The orbital solution of the known transiting planet shows non-zero eccentricity (though the circularization time-scale is much shorter than the system age) and/or deviant radial velocity (RV) data points -- both indicating a perturber. \item[(ii)] The brightness of the host star is $V\leq13\:$mag to ensure sufficient photometric and timing precision at 1-2m telescopes. \item[(iii)] The target location on the sky is visible from the Northern hemisphere. \item[(iv)] The transit depth is at least 10 mmag to ensure a significant detection at medium-sized, ground-based telescopes. \item[(v)] The target has not been studied intensively for TTV signals before. \end{itemize} Our observations make use of the YETI network \citep[Young Exoplanet Transit Initiative;][]{YETI}, a worldwide network of small to medium sized telescopes mostly on the Northern hemisphere dedicated to explore transiting planets in young open clusters. 
This way, we can observe consecutive transits, which are needed to enhance the possibility to model TTVs as described in \cite{Szabo2013}, and \cite{PTmet}. Furthermore, we are able to obtain simultaneous transit observations to expose hidden systematics in the transit light curves, like time synchronization errors or flat-fielding errors. In the past, the transiting exoplanets WASP-12b \citep{Maciejewski2011a, Maciejewski2013a}, WASP-3b \citep{Maciejewski2010,Maciejewski2013b}, WASP-10b \citep{Maciejewski2011b,Maciejewski2011c}, WASP-14b \citep{Raetz2012} and TrES-2 (Raetz et al. 2014, submitted) have been studied by our group in detail. In most cases, except for WASP-12b, no TTVs could be confirmed. Recently, \citet{vonEssen2013} also claimed to have found possible TTV signals around Qatar-1. However, all possible variations should be treated with reasonable care. In this project we monitor the transiting exoplanet HAT-P-32b. The G0V type \citep{Pickles2010} host star HAT-P-32 was found to harbour a transiting exoplanet with a period of $P=2.15\:$d by \citet{Hartman2011}. With a host star brightness of $V=11.3\:$mag and a planetary transit depth of $21\:$mmag, the sensitivity of medium-sized telescopes is sufficient to achieve high timing precision; it is therefore an optimal target for the YETI telescopes. The RV signal of HAT-P-32 is dominated by high jitter of $>60\:$ms$^{-1}$. \citet{Hartman2011} claim that 'a possible cause of the jitter is the presence of one or more additional planets'. \citet{Knutson2013} also analysed the RV signature of HAT-P-32 and found a long term trend indicating a companion with a minimum mass of $5-500\:$M$_{jup}$ at separations of $3.5-12\:$AU. However, such a companion could not yet explain the short time-scale jitter as seen in the Hartman data. Besides the circular orbit fit, an eccentric solution with $e=0.163$ also fits the observed data.
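The TTV test itself amounts to fitting a linear ephemeris $T_{\textrm{mid}}(E)=T_0+E\cdot P$ to the measured mid-transit times and inspecting the residuals (O$-$C). A minimal sketch -- illustrative only, not the analysis pipeline of this work -- using a handful of the mid-times from the fit results table:

```python
# Least-squares linear ephemeris and O-C residuals (illustrative sketch).
# Epoch/mid-time pairs are a subset of the fit results table (BJD - 2450000).
epochs = [679, 686, 807, 987, 1013, 1040]
tmids = [5880.30267, 5895.35297, 6155.50385,
         6542.50530, 6598.40539, 6656.45533]

n = len(epochs)
me_ = sum(epochs)/n
mt_ = sum(tmids)/n
P = (sum((E - me_)*(T - mt_) for E, T in zip(epochs, tmids))
     / sum((E - me_)**2 for E in epochs))           # period in days
T0 = mt_ - P*me_
oc_sec = [(T - (T0 + P*E))*86400.0 for E, T in zip(epochs, tmids)]
print(P)                              # close to 2.1500 d
print(max(abs(r) for r in oc_sec))    # residuals well below one minute
```

A sinusoidal TTV signal induced by a perturber would show up as a coherent pattern in these residuals exceeding the individual timing uncertainties.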
Though \citet{Hartman2011} mention that the probability of a real non-zero eccentricity is only $\sim3\%$, it could be mimicked or triggered by a second body in the system. Thus, HAT-P-32b is an ideal candidate for further monitoring to look for Transit Timing Variations induced by a planetary companion. | \begin{table*} \caption{The fit results for the 20 good transit light curves (top rows) as well as the five literature data points from \citet{Sada} and \citet{Gibson2013} (middle rows) for the values $T_{\textrm{mid}}$, $a/R_\textrm{s}$, $k=R_\textrm{p}/R_\textrm{s}$ and $i$. Since \citet{Sada} did not publish all values for each transit fit, only the transit mid time is tabulated for most observations, except the epoch 662 observation, where also the other parameters are available. The formal $rms$ and the resultant $pnr$ are given in the last column, if available. In the bottom rows the results of the complete observations with larger error bars are given for completeness. In case of the Tenerife 1.2m observations, quasi-simultaneous observations in the filters $r_p$ and $B$ have been performed leading to higher $pnr$ values.} \label{tab:fitResults} \begin{savenotes} {\setlength{\tabcolsep}{3.5pt} \begin{tabular}{lcll@{$\:\pm\:$}ll@{$\:\pm\:$}ll@{$\:\pm\:$}ll@{$\:\pm\:$}lcc} \toprule date & epoch &telescope& \multicolumn{2}{c}{$T_{\textrm{mid}}-2\,450\,000\:$d}& \multicolumn{2}{c}{$a/R_\textrm{s}$} & \multicolumn{2}{c}{$k=R_\textrm{p}/R_\textrm{s}$} & \multicolumn{2}{c}{$i\,[^\circ]$} &$rms\:$[mmag] & $pnr\:$[mmag]\\ \midrule 2011-11-01 & 673 & Tenerife 1.2m & $5867.40301$ & $0.00073$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $2.3$ & $3.20$\\ 2011-11-14 & 679 & Jena 0.6m & $5880.30267$ & $0.00033$ & $6.13 $ & $0.09 $ & $0.1493$ & $0.0016$ & $89.3 $ & $1.0 $ & $1.9$ & $1.97$\\ 2011-11-29 & 686 & Rozhen 2.0m & $5895.35297$ & $0.00016$ & $6.09 $ & $0.04 $ & $0.1507$ & $0.0010$ & $89.5 $ & $0.6 $ & $1.0$ & $0.62$\\ 2011-11-29 & 686 & 
Tenerife 1.2m & $5895.35249$ & $0.00080$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $1.7$ & $2.36$\\ 2011-12-01 & 687 & Rozhen 2.0m & $5897.50328$ & $0.00033$ & $6.08 $ & $0.09 $ & $0.1508$ & $0.0016$ & $89.3 $ & $0.9 $ & $1.5$ & $0.94$\\ 2011-12-14 & 693 & Rozhen 0.6m & $5910.40274$ & $0.00043$ & $6.11 $ & $0.13 $ & $0.1508$ & $0.0023$ & $89.5 $ & $1.1 $ & $2.0$ & $3.40$\\ 2011-12-27 & 699 & Rozhen 2.0m & $5923.30295$ & $0.00031$ & $6.03 $ & $0.10 $ & $0.1536$ & $0.0018$ & $88.5 $ & $0.9 $ & $1.2$ & $0.96$\\ 2012-01-15 & 708 & Swarthmore 0.6m & $5942.65287$ & $0.00064$ & $6.04 $ & $0.18 $ & $0.1544$ & $0.0024$ & $88.9 $ & $1.4 $ & $3.3$ & $3.28$\\ 2012-08-15 & 807 & Rozhen 2.0m & $6155.50385$ & $0.00026$ & $6.01 $ & $0.11 $ & $0.1496$ & $0.0014$ & $88.5 $ & $1.0 $ & $1.3$ & $1.11$\\ 2012-08-18 & 808 & Tenerife 1.2m & $6157.65470$ & $0.00072$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $2.6$ & $3.71$\\ 2012-09-12 & 820 & OSN 1.5m & $6183.45364$ & $0.00085$ & $5.96 $ & $0.18 $ & $0.1524$ & $0.0027$ & $88.7 $ & $1.4 $ & $4.3$ & $4.85$\\ 2012-09-12 & 820 & Trebur 1.2m & $6183.45361$ & $0.00049$ & $6.05 $ & $0.14 $ & $0.1548$ & $0.0022$ & $88.5 $ & $1.2 $ & $2.1$ & $2.33$\\ 2012-09-14 & 821 & OSN 1.5m & $6185.60375$ & $0.00033$ & $6.01 $ & $0.12 $ & $0.1509$ & $0.0016$ & $88.2 $ & $1.2 $ & $1.3$ & $1.07$\\ 2012-10-10 & 833 & Trebur 1.2m & $6211.40361$ & $0.00056$ & $5.98 $ & $0.25 $ & $0.1554$ & $0.0059$ & $88.1 $ & $1.5 $ & $2.2$ & $2.18$\\ 2012-11-22 & 853 & OSN 1.5m & $6254.40404$ & $0.00022$ & $6.037$ & $0.062$ & $0.1507$ & $0.0018$ & $89.2 $ & $0.8 $ & $1.1$ & $0.84$\\ 2013-01-04 & 873 & Tenerife 1.2m & $6542.40397$ & $0.00058$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $1.9$ & $2.87$\\ 2013-09-07 & 987 & Jena 0.6m & $6542.50538$ & $0.00032$ & $6.04 $ & $0.11 $ & $0.1497$ & $0.0016$ & $88.7 $ & $1.1 $ & $1.6$ & $1.57$\\ 2013-09-07 & 987 & Rozhen 2.0m & $6542.50530$ & $0.00018$ & 
$5.97 $ & $0.09 $ & $0.1535$ & $0.0012$ & $88.3 $ & $0.8 $ & $0.9$ & $0.67$\\ 2013-09-07 & 987 & Torun 0.6m & $6542.50522$ & $0.00052$ & $5.89 $ & $0.20 $ & $0.1515$ & $0.0029$ & $87.9 $ & $1.4 $ & $3.5$ & $2.33$\\ 2013-10-06 &1001 & OSN 1.5m & $6572.60532$ & $0.00018$ & $6.11 $ & $0.06 $ & $0.1465$ & $0.0013$ & $89.2 $ & $0.7 $ & $0.9$ & $0.63$\\ 2013-11-01 &1013 & Rozhen 2.0m & $6598.40539$ & $0.00017$ & $6.05 $ & $0.06 $ & $0.1511$ & $0.0010$ & $88.9 $ & $0.8 $ & $0.8$ & $0.68$\\ 2013-11-03 &1014 & OSN 1.5m & $6600.55546$ & $0.00017$ & $6.02 $ & $0.05 $ & $0.1503$ & $0.0009$ & $89.2 $ & $0.8 $ & $1.3$ & $1.33$\\ 2013-12-01 &1027 & OSN 1.5m & $6628.50585$ & $0.00031$ & $6.13 $ & $0.09 $ & $0.1475$ & $0.0022$ & $89.2 $ & $0.9 $ & $1.8$ & $1.59$\\ 2013-12-29 &1040 & Trebur 1.2m & $6656.45533$ & $0.00045$ & $6.07 $ & $0.13 $ & $0.1509$ & $0.0022$ & $88.8 $ & $1.1 $ & $2.6$ & $2.74$\\ \midrule 2011-10-09 & 662 & KPNO 2.1m & $5843.75341$ & $0.00019$ & \multicolumn{2}{l}{$5.98\phantom{0\:}_{-\:0.15}^{+\:0.10}$}&$0.1531$ & $0.0012$&\multicolumn{2}{l}{$88.2\:_{-\:1.0}^{+\:1.2}$} & --&--\\ 2011-10-11 & 663 & KPNO 2.1m & $5845.90287$ & $0.00024$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& --&--\\ 2011-10-11 & 663 & KPNO 0.5m & $5845.90314$ & $0.00040$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& --&--\\ 2012-09-06 & 817 &\cite{Gibson2013}& $6177.00392$ & $0.00025$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& --&--\\ 2012-10-19 & 837 &\cite{Gibson2013}& $6220.00440$ & $0.00019$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& --&--\\ \midrule 2011-10-04 & 660 & Ankara 0.4m & $5839.45347$ & $0.00101$ & $5.9 $ & $0.2 $ & $0.1448$ & $0.0021$ & $88.1 $ & $1.4 $ & $4.7$ & $2.22$\\ 2012-01-15 & 708 & Gettysburg 0.4m & $5942.65179$ & $0.00113$ & $5.79 $ & $0.36 $ & $0.1493$ & $0.0054$ & $87.3 $ & $1.7 $ & $2.5$ & $4.25$\\ 2012-10-10 & 833 & Jena 0.25m & $6211.40267$ & $0.00214$ & 
$6.04 $ & $0.65 $ & $0.1514$ & $0.0089$ & $86.5 $ & $2.5 $ & $5.3$ & $6.96$\\ 2012-10-25 & 840 & Tenerife 1.2m & $6226.45618$ & $0.00102$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $3.8$ & $5.05$\\ 2012-12-22 & 867 & Tenerife 1.2m & $6284.50460$ & $0.00100$ &\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}&\multicolumn{2}{c}{--}& $3.3$ & $5.10$\\ \bottomrule \end{tabular}} \end{savenotes} \end{table*} We presented our observations of HAT-P-32b planetary transits obtained during a timespan of 24 months (2011 October until 2013 October). The data were collected using telescopes all over the world, mainly throughout the YETI network. Out of 44 started observations we obtained 24 light curves that could be used for further analysis; a further 21 light curves were obtained that could not be used, for various reasons, mostly bad weather. In addition to our data, literature data from \citet{Hartman2011}, \citet{Sada}, and \citet{Gibson2013} were also taken into account (see Fig.~\ref{fig:H32OC}, \ref{fig:H32aR}, \ref{fig:H32k}, and \ref{fig:H32i} and Table~\ref{tab:fitResults}). The published system parameters $a/R_\textrm{s}$, $R_\textrm{p}/R_\textrm{s}$ and $i$ from the circular orbit fit of \citet{Hartman2011} were confirmed. In the case of the semimajor axis scaled by the stellar radius and the inclination, we were able to improve the results owing to the large number of observations. As for the planet-to-star radius ratio, we did not achieve a better solution, because there is a spread in the data that makes constant fits difficult. In addition, \citet{Gibson2013} found an M dwarf $\approx2.8''$ away from HAT-P-32, which is a possible cause of this spread. Regarding the transit timing, a redetermination of the planetary ephemeris by \DeltaP{} can explain the obtained mid-transit times, although there are still some outliers. Of course, with 1$\sigma$ error bars one would expect some of the data points to be off the fit.
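The linear-ephemeris comparison behind these mid-transit times is straightforward to reproduce. As an illustration only (a subset of the tabulated epochs, not the full fit of this work), a weighted linear fit already pins down the period and leaves sub-minute residuals:

```python
# O-C ("observed minus calculated") residuals against a linear ephemeris
# T_c(E) = T0 + E*P, refitted here from a few rows of the table above.
# Illustrative sketch only: a subset of epochs, not the complete analysis.
import numpy as np

data = np.array([
    (679,  5880.30267, 0.00033),   # epoch, mid-time [BJD-2450000 d], error [d]
    (686,  5895.35297, 0.00016),
    (693,  5910.40274, 0.00043),
    (853,  6254.40404, 0.00022),
    (987,  6542.50530, 0.00018),
    (1013, 6598.40539, 0.00017),
])
E, t, sig = data.T

# weighted least-squares fit of t = T0 + E*P (np.polyfit expects w = 1/sigma)
P, T0 = np.polyfit(E, t, 1, w=1.0 / sig)

oc_min = (t - (T0 + E * P)) * 24 * 60   # residuals in minutes
for e_i, v in zip(E.astype(int), oc_min):
    print(f"E = {e_i:4d}: O-C = {v:+.2f} min")
```

Refitting $T_0$ and $P$ over the full data set is exactly the kind of ephemeris redetermination referred to in the text, and the residual scatter is what the O--C diagram displays.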
Nevertheless, due to the spread of the data seen in the O--C diagram, observations are planned to further monitor HAT-P-32b transits using the YETI network. This spread, of the order of \Amplitude{}, does not exclude certain system configurations. Assuming circular orbits, even an Earth-mass perturber in a mean motion resonance could still produce such a signal. | 14 | 4 | 1404.6354
1404 | 1404.6212_arXiv.txt | We construct an extension of $f(T)$ gravity with the inclusion of a non-minimal torsion-matter coupling in the action. The resulting theory is a novel gravitational modification, since it is different from both $f(T)$ gravity and the nonminimal curvature-matter coupling theory. The cosmological application of this new theory proves to be very interesting. In particular, we obtain an effective dark energy sector whose equation-of-state parameter can be quintessence- or phantom-like, or exhibit the phantom-divide crossing, while for a large range of the model parameters the universe results in a de Sitter, dark-energy-dominated, accelerating phase. Additionally, we can obtain early-time inflationary solutions, and thus provide a unified description of the cosmological history. | The recent observational advances in cosmology have provided a large amount of high-precision cosmological data, which has posed new challenges for the understanding of the basic physical properties of the Universe, and of the gravitational interaction that dominates its dynamics and evolution. The observation of the accelerated expansion of the Universe \cite{Riess1} has raised the fundamental issue of the cause of this acceleration, which is usually attributed to a mysterious and yet not directly detected dominant component of the Universe, called dark energy \cite{Copeland:2006wr}. In this context, the recently released Planck satellite data of the 2.7 degree Cosmic Microwave Background (CMB) full sky survey \cite{Planckresults} have generally confirmed the standard $\Lambda $ Cold Dark Matter ($\Lambda $CDM) cosmological model.
On the other hand, the measurement of the tensor modes from large angle CMB B-mode polarisation by BICEP2 \cite{BICEP2}, implying a tensor-to-scalar ratio $r = 0.2 ^{+0.07}_{-0.05}$, has provided very convincing evidence for the inflationary scenario, since the generation of gravitational wave fluctuations is a generic prediction of the early de Sitter exponential expansion. However, the BICEP2 result is in tension with Planck limits on standard inflationary models \cite{Mar}, and thus alternative explanations may be required. In principle, magnetic fields generated during inflation can produce the required B-mode, for a suitable range of energy scales of inflation \cite{Mar}. Moreover, the existence of fluctuations of cosmological birefringence can give rise to CMB B-mode polarization that fits BICEP2 data with $r<0.11$, and no running of the scalar spectral index \cite{Lee}. The above major observational advances require sound theoretical explanations, whose role is to give a firm foundation to cosmology and to the underlying theory of gravity. However, up to now, no convincing theoretical model that could clearly explain the nature of dark energy, and that is supported by observational evidence, has been proposed. Moreover, besides the recent accelerated expansion of the Universe, observations at the galactic (galaxy rotation curves) and extra-galactic scales (virial mass discrepancy in galaxy clusters) \cite{Str} also suggest the existence of another mysterious and yet undetected major component of the Universe, the so-called dark matter.
From all these observations one can conclude that the standard general relativistic gravitational field equations, obtained from the classic Einstein-Hilbert action $S=\int{\left(R/2+L_m\right)\sqrt{-g}d^4x}$, where $R$ is the scalar curvature, and $L_m$ is the matter Lagrangian density, in which matter is minimally coupled to the geometry, cannot give an appropriate quantitative description of the Universe at astrophysical scales going beyond the boundary of the Solar System. To explain dark energy and dark matter in a cosmological context requires the {\it ad hoc} introduction of the dark matter and dark energy components into the total energy-momentum tensor of the Universe, in addition to the ordinary baryonic matter. From a historical point of view, in going beyond the Einstein-Hilbert action, the first steps were taken in the direction of generalizing the geometric part of the standard gravitational action. An extension of the Einstein-Hilbert action, in which the Ricci scalar invariant $R$ is substituted by an arbitrary function of the scalar invariant, $f(R)$, has been extensively explored in the literature \cite{rev}. Such a modification of the gravitational action can explain the late acceleration of the Universe, and may also provide a geometric explanation for dark matter, which can be described as a manifestation of geometry itself \cite{Boehm}. Furthermore, quadratic Lagrangians, constructed from second order curvature invariants such as $R^{2}$, $R_{\mu \nu }R^{\mu\nu }$, $R_{\alpha \beta \mu \nu }R^{\alpha \beta \mu \nu }$, $\varepsilon ^{\alpha \beta \mu \nu }R_{\alpha \beta \gamma \delta }R_{\mu \nu }^{\gamma \delta }$, $C_{\alpha \beta \mu \nu}C^{\alpha \beta \mu \nu}$, etc., have also been considered as candidates for more general gravitational actions \cite{Lobo:2008sg}, which can successfully explain dark matter and the late-time cosmic acceleration.
Alternatively, the interest in extra dimensions, which goes back to the unified field theory of Kaluza and Klein, led to the development of the braneworld models \cite{Maartens1}. In braneworld models, gravitational effects due to the extra dimensions dominate at high energies, but important new effects, which can successfully explain both dark energy and dark matter, also appear at low energies. Most of the modifications of the Einstein-Hilbert Lagrangian involve a change in the geometric part of the action only, and assume that the matter Lagrangian plays a subordinate and passive role, which is implemented by the minimal coupling of matter to geometry. However, a general theoretical principle forbidding an arbitrary coupling between matter and geometry does not exist {\it a priori}. If theoretical models, in which matter is considered on an equal footing with geometry, are allowed, gravitational theories with many interesting and novel features can be constructed. A theory with an explicit coupling between an arbitrary function of the scalar curvature and the Lagrangian density of matter was proposed in \cite{Bertolami:2007gv}. The gravitational action of the latter model is of the form $S=\int{\left\{f_1(R)+\left[1+\lambda f_2(R)\right]L_m\right\}\sqrt{-g}d^4x}$. In these models an extra force acting on massive test particles arises, and the motion is no longer geodesic. Moreover, in this framework, one can also explain dark matter \cite{Bertolami:2009ic}. The early ``linear'' geometry-matter coupling \cite{Bertolami:2007gv} was extended in \cite{Harko:2008qz}, and a maximal extension of the Einstein-Hilbert action with geometry-matter coupling, of the form $S=\int d^{4}x \sqrt{-g}f\left(R,L_m\right)$, was considered in \cite{Harko:2010mv}.
An alternative model to $f(R,L_m)$ gravity is the $f(R,\mathcal{T})$ theory \cite{Harko:2011kv}, where $\mathcal{T}$ is the trace of the matter energy-momentum tensor $T_{\mu\nu}$, and the corresponding action is given by $S=\int{\left[f\left(R,\mathcal{T}\right)/2+ L_m\right]\sqrt{-g}d^4x}$. The dependence of the gravitational action on $\mathcal{T}$ may be due to the presence of quantum effects (conformal anomaly), or of some exotic imperfect fluids. When the trace of the energy-momentum tensor is zero, $\mathcal{T}=0$, which is the case for electromagnetic radiation, the field equations of $f(R,\mathcal{T})$ theory reduce to those of $f(R)$ gravity. However, the $f(R,L_m)$ or $f(R,\mathcal{T})$ gravitational models are not the most general Lagrangians with nonminimal geometry-matter couplings. One could further obtain interesting gravity models by introducing a term of the form $R_{\mu\nu}T^{\mu\nu}$ into the Lagrangian \cite{fRT,Od}. Such couplings appear in Einstein-Born-Infeld theories \cite{deser}, when one expands the square root in the Lagrangian. The presence of the $R_{\mu\nu}T^{\mu\nu}$ coupling term has the advantage of entailing a nonminimal coupling of geometry to the electromagnetic field. All the above gravitational modifications are based on the Einstein-Hilbert action, namely on the curvature description of gravity. However, an interesting and rich class of modified gravity theories can arise if one modifies the action of the equivalent torsional formulation of General Relativity. As it is well known, Einstein also constructed the ``Teleparallel Equivalent of General Relativity'' (TEGR) \cite{Unzicker:2005in,TEGR,Hayashi:1979qx,JGPereira,Maluf:2013gaa}, replacing the torsion-less Levi-Civita connection by the curvature-less Weitzenb{\"{o}}ck one, and using the vierbein instead of the metric as the fundamental field.
In this formulation, instead of the curvature (Riemann) tensor one has the torsion tensor, and the Lagrangian of the theory, namely the torsion scalar $T$, is constructed by contractions of the torsion tensor. Thus, if one desires to modify gravity in this formulation, the simplest thing is to extend $T$ to an arbitrary function $f(T)$ \cite{Ferraro:2006jd,Linder:2010py}. An interesting aspect of this extension is that although TEGR coincides with General Relativity at the level of equations, $f(T)$ is different from $f(R)$; that is, they belong to different modification classes. Additionally, although in $f(R)$ theory the field equations are fourth order, in $f(T)$ gravity they are second order, which is a great advantage. $f(T)$ gravity models have been extensively applied to cosmology, and amongst other applications they are able to explain the late-time accelerating expansion of the Universe without the need for dark energy \cite{Linder:2010py,Chen:2010va,Bengochea001}. Furthermore, following these lines, and inspired by the higher-curvature modifications of General Relativity, one can construct gravitational modifications based on higher-order torsion invariants, such as the $f(T,T_G)$ gravity \cite{Kofinas:2014owa}, which also proves to have interesting cosmological implications. Another gravitational modification based on the teleparallel formulation is the generalization of TEGR to the case of a Weyl-Cartan space-time, in which the Weitzenb\"{o}ck condition of the vanishing of the curvature is also imposed (Weyl-Cartan-Weitzenb\"{o}ck (WCW) gravity), with the addition of a kinetic term for the torsion in the gravitational action \cite{WC1}. In this framework the late-time acceleration of the Universe can be naturally obtained, determined by the intrinsic geometry of the space-time.
A further extension of the WCW gravity, in which the Weitzenb\"{o}ck condition in a Weyl-Cartan geometry is inserted into the gravitational action via a Lagrange multiplier, was analyzed in \cite{WC2}. In the weak field limit the gravitational potential explicitly depends on the Lagrange multiplier and on the Weyl vector, leading to an interesting cosmological behavior. In this work, we propose a novel gravitational modification based on the torsional formulation, allowing the possibility of a nonminimal torsion-matter coupling in the gravitational action. In particular, for the torsion-matter coupling we adopt the ``linear'' model introduced in the case of $f(R)$ gravity in \cite{Bertolami:2007gv}. Hence, the gravitational field can be described in terms of two arbitrary functions of the torsion scalar $T$, namely $f_1(T)$ and $f_2(T)$, with the function $f_2(T)$ linearly coupled to the matter Lagrangian. This new coupling induces a supplementary term $\left[1+\lambda f_2(T)\right]L_m$ in the standard $f(T)$ action, with $\lambda$ an arbitrary coupling constant. When $\lambda =0$, the model reduces to the usual $f(T)$ gravity. We investigate in detail the cosmological implications of the torsion-matter coupling for two particular choices of the functions $f_1(T)$ and $f_2(T)$. For both choices the Universe evolution is in agreement with the observed behavior, and moreover it ends in a de Sitter type vacuum state, with zero matter energy density. The details of the transition depend on the numerical values of the free parameters that appear in the functions $f_1(T)$ and $f_2(T)$. The paper is organized as follows. In Section~\ref{fTmodel} we briefly describe the basics of the $f(T)$ gravity model. The field equations of the $f(T)$ theory with linear nonminimal torsion-matter coupling are obtained in Section~\ref{matter}. The cosmological implications of the theory are analyzed in Section~\ref{cosm}.
Finally, we conclude and discuss our results in Section~\ref{Concl}. | \label{Concl} In the present paper, we have considered an extension of the $f(T)$ gravity model by introducing a nonminimal coupling between torsion and matter. The geometric part of the action was extended through the introduction of two independent functions of the torsion scalar $T$, namely $f_1(T)$ and $f_2(T)$, with the function $f_2(T)$ being nonminimally coupled to the matter Lagrangian $L_m$. The resulting gravitational model presents some formal analogies with the nonminimal geometry-matter coupling introduced in \cite{Bertolami:2007gv}. However, the resulting equations, as well as their physical and geometrical interpretations, are very different. The theory of nonminimal torsion-matter coupling is therefore a novel class of gravitational modification. From the physical point of view, in this theory, matter is not just a passive component in the space-time continuum, but it plays an active role in the overall gravitational dynamics, which is strongly modified due to the supplementary interaction between matter and geometry. Moreover, the major advantage of the $f(T)$-type models, namely that the field equations are second order, is not spoiled by the torsion-matter coupling. As an application of the nonminimal torsion-matter coupling scenario we have considered the dynamical evolution of a flat FRW universe. We have investigated the time dependence of the cosmologically relevant physical parameters, for two different choices of the functions $f_1(T)$ and $f_2(T)$, corresponding to the simplest departures from General Relativity. In these specific models the dynamics of the Universe is determined by the free parameters which appear in the functions $f_1(T)$ and $f_2(T)$, as well as by the matter-torsion coupling constant. Depending on the numerical values of these parameters a large number of cosmological behaviors can be obtained.
In our analysis we have considered the matter-dominated phase of the Universe evolution, that is, we neglected the matter pressure. More general models with $p_m$ can be easily constructed and analyzed. We restricted our analysis to expanding evolutions, although contracting or bouncing solutions can be easily obtained as well. We have found a universe evolution in agreement with observations, that is, a matter-dominated era followed by an accelerating phase. Additionally, the effective dark-energy equation-of-state parameter can lie in the quintessence or phantom regime, which reveals the capabilities of the scenario. Furthermore, a general and common property of the considered models is that they all end in a de Sitter phase, with zero matter density, that is, in complete dark-energy domination. Finally, these models also admit solutions with an almost constant Hubble function, which can describe the inflationary regime. Thus, the scenario of nonminimal torsion-matter coupling can offer a unified description of the universe evolution, from its inflationary to the late-time accelerated phases. Apart from the exact numerical elaboration, we have extracted approximate analytical expressions in the limit of a small Hubble parameter, that is, corresponding to the large-time limit, as well as for large Hubble parameters, that is, corresponding to the beginning of the cosmological expansion. These expressions verify the physical features that were extracted through the numerical analysis. In conclusion, based on the torsional formulation of gravity, we have proposed a novel modified gravitational scenario which contains an arbitrary coupling between the torsion scalar and the matter Lagrangian. The cosmological implications of this theory prove to be very interesting.
However, in order for the present scenario to be considered a good candidate for the description of Nature, additional investigations should be performed, such as a detailed comparison with cosmological observations, a complete perturbation analysis, etc. These necessary studies lie beyond the scope of the present work and are left for future projects. | 14 | 4 | 1404.6212
1404 | 1404.7214_arXiv.txt | The BICEP2 experiment confirms the existence of primordial gravitational waves, with the tensor-to-scalar ratio $r=0$ ruled out at the $7\sigma$ level. The consistency of this large value of $r$ with the {\em Planck} data requires a large negative running $n'_s$ of the scalar spectral index. Herein we propose two types of single field inflation models with simple potentials to study whether such models can be consistent with the BICEP2 and {\em Planck} observations. One type of model suggested herein is realized in supergravity model building. These models fail to provide the needed $n'_s$ even though both can fit the tensor-to-scalar ratio and spectral index. | The observed temperature fluctuations in the cosmic microwave background radiation (CMB) strongly suggest that our Universe experienced an accelerated expansion, more precisely, inflation~\cite{starobinskyfr, guth81, linde83, Albrecht:1982wi}, at an early stage of its evolution. In addition to solving the problems of the standard big bang cosmology, such as the flatness, horizon, and monopole problems, the inflation models predict the cosmological perturbations in the matter density from the inflaton vacuum fluctuations, which describe the primordial power spectrum consistently. Besides the scalar perturbation, the tensor perturbation is generated as well, which gives the B-mode polarisation as a signature of the primordial gravitational wave. Recently, the BICEP2 experiment has discovered the primordial gravitational wave through the B-mode power spectrum around $\ell \sim 80$~\cite{Ade:2014xna}. If confirmed, it will significantly advance the study of fundamental physics. The BICEP2 experiment~\cite{Ade:2014xna} has measured the tensor-to-scalar ratio to be $r=0.20^{+0.07}_{-0.05}$ at the 68\% confidence level for the lensed-$\Lambda$CDM model, with $r=0$ disfavoured at the $7.0\sigma$ level.
Subtracting the various dust models and re-deriving the $r$ constraint still results in a high significance of detection, with $r=0.16^{+0.06}_{-0.05}$. From the first-year observations, the {\em Planck} temperature power spectrum \cite{planck13} in combination with the nine years of Wilkinson Microwave Anisotropy Probe (WMAP) polarization low-multipole likelihood \cite{wmap9} and the high-multipole spectra from the Atacama Cosmology Telescope (ACT) \cite{act13} and the South Pole Telescope (SPT) \cite{spt11} ({\em Planck}+WP+highL) constrained the tensor-to-scalar ratio to be $r \le 0.11$ at the 95\% confidence level \cite{Ade:2013zuv,Ade:2013uln}. Therefore, the BICEP2 result is in disagreement with the {\em Planck} result. To reduce the inconsistency between the Planck and BICEP2 experiments, we need to include the running of the spectral index $n'_s=dn_s/d\ln k$. With the running of the spectral index, the 68\% constraints from the {\em Planck}+WP+highL data are $n_s=0.9570\pm 0.0075$ and $n'_s=-0.022\pm 0.010$ with $r<0.26$ at the 95\% confidence level. Thus, the running of the spectral index needs to be smaller than 0.008 at the 3$\sigma$ level for any viable inflation model. Because different inflationary models predict different magnitudes for the tensor perturbations, such a large tensor-to-scalar ratio $r$ from the BICEP2 measurement gives a strong constraint on inflation models. Also, the inflaton potential is around the Grand Unified Theory (GUT) scale $2\times 10^{16}$~GeV, and the Hubble scale is about $1.0\times 10^{14}$~GeV. From a naive analysis of the Lyth bound~\cite{Lyth:1996im}, large-field inflation is implied, and then the validity of the effective field theory is challenged, since the higher-dimensional operators are suppressed only by the reduced Planck scale.
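The "naive analysis" invoked here is a one-line estimate: in slow roll $|d\phi/dN|=\sqrt{2\epsilon}=\sqrt{r/8}$ in reduced Planck units, so a roughly constant $r$ accumulated over the observable e-folds gives the total field excursion. A minimal sketch (assuming $r$ stays constant, which is only approximate):

```python
# Lyth-bound estimate of the inflaton excursion, in reduced Planck units:
# |dphi/dN| = sqrt(2*eps) and r = 16*eps, hence Delta(phi) ~ N * sqrt(r/8),
# assuming r stays roughly constant over the observable e-folds.
import math

r, N = 0.20, 60                          # BICEP2-like r; ~60 observable e-folds
dphi = N * math.sqrt(r / 8)
print(f"Delta(phi) ~ {dphi:.1f} M_pl")   # a clearly super-Planckian excursion
```

This is why a detection at the $r\sim0.2$ level pushes single field models into the large-field regime.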
The inflation models, which can have $n_s\simeq 0.96$ and $r\simeq 0.16/0.20$, have been studied extensively~\cite{Anchordoqui:2014uua, Czerny:2014wua, Ferrara:2014ima, Zhu:2014wda, Gong:2014cqa, Okada:2014lxa, Ellis:2014rxa, Antusch:2014cpa, Freivogel:2014hca, Bousso:2014jca, Kaloper:2014zba, Choudhury:2014kma,*Choudhury:2014wsa,*Choudhury:2013iaa,*Choudhury:2014kma, Choi:2014aca, Murayama:2014saa, McDonald:2014oza, Gao:2014fha, Ashoorioon:2014nta,*Ashoorioon:2013eia,*Ashoorioon:2009wa,*Ashoorioon:2011ki,Sloth:2014sga, Kawai:2014doa,Kobayashi:2014rla,*Kobayashi:2014ooa,*Kobayashi:2014jga,Bastero-Gil:2014oga,DiBari:2014oja,Ho:2014xza,Hotchkiss:2011gz}. In particular, the simple chaotic and natural inflation models are preferred. On the other hand, supersymmetry is the most promising extension of the particle physics Standard Model (SM). In particular, it can stabilize the scalar masses, and its superpotential is not renormalized. Also, gravity is critical in the early Universe, so it seems to us that supergravity theory is a natural framework for inflation model building~\cite{Freedman:1976xh,*Deser:1976eh,Antusch:2009ty,*Antusch:2011ei}. However, supersymmetry-breaking scalar masses in a generic supergravity theory are of the same order as the gravitino mass, inducing the well-known $\eta$ problem~\cite{Copeland:1994vg,*Stewart:1994ts,*adlinde90,*Antusch:2008pn,*Yamaguchi:2011kg,*Martin:2013tda,Lyth:1998xn}, where all the scalar masses are of the order of the Hubble parameter because of the large vacuum energy density during inflation~\cite{Goncharov:1984qm}.
There are two elegant solutions: no-scale supergravity~\cite{Cremmer:1983bf,*Ellis:1983sf,*Ellis:1983ei,*Ellis:1984bm,*Lahanas:1986uc, Ellis:1984bf, Enqvist:1985yc, Ellis:2013xoa, Ellis:2013nxa, Li:2013moa, Ellis:2013nka}, and shift symmetry in the K\"ahler potential~\cite{Kawasaki:2000yn, Yamaguchi:2000vm, Yamaguchi:2001pw, Kawasaki:2001as, Kallosh:2010ug, Kallosh:2010xz, Nakayama:2013jka, Nakayama:2013txa, Takahashi:2013cxa, Li:2013nfa}. Thus, three issues need to be addressed regarding the criteria of inflation model building: \\ Firstly (C-1), the spectral index is around 0.96, and the tensor-to-scalar ratio is around 0.16/0.20. \\ Secondly (C-2), to reconcile the Planck and BICEP2 results, we need to have $n'_s \sim -0.022$. For simplicity, we do not consider the alternative approach here~\cite{Freivogel:2014hca, Bousso:2014jca}. \\ Lastly (C-3), we need to violate the Lyth bound and try to realize sub-Planckian inflation. For simplicity, we will not consider alternative mechanisms such as two-field inflation models \cite{Hebecker:2013zda,McDonald:2014oza}, and the models which employ symmetries to control the quantum corrections, like the axion monodromy~\cite{McAllister:2008hb}. \\ It appears that (C-1) can be satisfied by a considerable number of inflaton potentials; thus, this is not a difficult requirement to meet. In this paper, we will propose two types of simple single field inflation models, and show that their spectral indices and tensor-to-scalar ratios are highly consistent with both the Planck and BICEP2 experiments. We construct one type of inflation model from supergravity theory with shift symmetry in the K\"ahler potential. However, in these simple inflation models, we will show that (C-2) and (C-3) cannot be satisfied.
Herein we have proposed one type of single field inflation model with hybrid monomial and S-dual potentials $V(\phi)=V_0 \phi^n{\rm sech}(\phi/M)$ and found that the $n_s'$ given by the model is around $-10^{-4}$ when $n_s$ and $r$ are consistent with the BICEP2 constraints. If we increase the model parameter $n$ or $g=M/M_{pl}$, for the same value of $n_s$, then the tensor-to-scalar ratio $r$ increases, but the running of the scalar spectral index $n_s'$ moves closer to zero, except for $n=2$. Therefore, the model parameters are constrained by the observational results. At the 95\% confidence level, we obtained $g\ge 20$ for $n=2$, $8\le g\le 30$ for $n=4$ and $6\le g\le 10$ for $n=6$. Then we used the supergravity model building method to propose another type of model with the potentials $V(\phi)=V_0 \phi^n{\rm sech}^2(\phi/M)$. The behavior of this model is similar to that of the inflation model with the potential $V(\phi)=V_0 \phi^n{\rm sech}(\phi/M)$, and the model is more constrained by the observational data. At the 95\% confidence level, we found that $g\ge 30$ for $n=2$, $15\le g\le 30$ for $n=4$ and $g\sim 10$ for $n=6$. The running of the scalar spectral index for both models is of the order of $-10^{-4}$. Both models failed to provide a second-order slow-roll parameter $\xi^2$ as large as the first-order slow-roll parameters $\epsilon$ and $\eta$. Furthermore, to obtain a large $r$, the inflaton will experience a super-Planckian excursion because of the Lyth bound. This is a common problem for single field inflation, as pointed out by Gong~\cite{Gong:2014cqa}. To violate the Lyth bound and obtain a sub-Planckian inflaton field excursion, the slow-roll parameter $\epsilon$ need not be a monotonic function during inflation \cite{Hotchkiss:2011gz,BenDayan:2009kv}. Recently, Ben-Dayan and Brustein~\cite{BenDayan:2009kv} found $n_s=0.96$, $r=0.1$ and $n_s'=-0.07$ for a single field inflation model with a polynomial potential.
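The orders of magnitude quoted above can be checked against the standard single-field slow-roll formulas $\epsilon=(V'/V)^2/2$, $\eta=V''/V$, $\xi^2=V'V'''/V^2$, with $n_s=1-6\epsilon+2\eta$, $r=16\epsilon$ and $n_s'=16\epsilon\eta-24\epsilon^2-2\xi^2$ (reduced Planck units). A sketch for the first potential, using illustrative parameter choices ($n=4$, $g=20$, $N=60$ e-folds), not the fitted values of the paper:

```python
# Slow-roll check for V(phi) = V0 * phi**n * sech(phi/M) in reduced Planck
# units (M_pl = 1).  Parameters below (n = 4, g = M = 20, N = 60 e-folds)
# are illustrative choices, not the fitted values of the paper.
import sympy as sp
from scipy.integrate import quad
from scipy.optimize import brentq

p = sp.symbols("phi", positive=True)
n_pow, g = 4, 20.0
V = p**n_pow / sp.cosh(p / g)            # the overall V0 cancels in slow roll

dV, d2V, d3V = sp.diff(V, p), sp.diff(V, p, 2), sp.diff(V, p, 3)
eps_f  = sp.lambdify(p, (dV / V)**2 / 2, "numpy")    # eps  = (V'/V)^2 / 2
eta_f  = sp.lambdify(p, d2V / V, "numpy")            # eta  = V''/V
xi2_f  = sp.lambdify(p, dV * d3V / V**2, "numpy")    # xi^2 = V'V'''/V^2
dlnV_f = sp.lambdify(p, dV / V, "numpy")

phi_end = brentq(lambda x: eps_f(x) - 1.0, 1e-3, 5.0)        # eps = 1 ends inflation
efolds = lambda x: quad(lambda y: 1.0 / dlnV_f(y), phi_end, x)[0]  # N = int V/V' dphi
phi_star = brentq(lambda x: efolds(x) - 60.0, phi_end + 0.1, 60.0)

e, h, x2 = eps_f(phi_star), eta_f(phi_star), xi2_f(phi_star)
n_s = 1 - 6*e + 2*h
r = 16*e
run = 16*e*h - 24*e**2 - 2*x2            # n_s' = dn_s/dlnk to second order
print(f"phi* = {phi_star:.1f} M_pl,  n_s = {n_s:.3f},  r = {r:.2f},  n_s' = {run:.1e}")
```

For these illustrative parameters the sketch lands in the $n_s\sim0.96$, $r\sim0.2$ region with a small negative running, and the horizon-crossing field value is manifestly super-Planckian, consistent with the Lyth-bound discussion.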
It remains unclear whether a single-field inflation model can be constructed that yields both a large $r$ and a large $n_s'$. | 14 | 4 | 1404.7214 |
1404 | 1404.2607_arXiv.txt | We report the discovery of a luminosity distance estimator using Active Galactic Nuclei (AGN). We combine the correlation between the X-ray variability amplitude and the black hole (BH) mass with single-epoch spectral BH mass estimates, which depend on the AGN luminosity and on the width of the lines emitted by the broad line region. We demonstrate that significant correlations do exist, which allow one to predict the AGN (optical or X-ray) luminosity as a function of the AGN X-ray variability and of either the H$\beta$ or the Pa$\beta$ line width. In the best case, when the Pa$\beta$ line is used, the relationship has an intrinsic dispersion of $\sim$0.6 dex. Although intrinsically more dispersed than Supernovae Ia, this relation constitutes an alternative distance indicator potentially able to probe, in an independent way, the expansion history of the Universe. In this respect, we show that the new mission concept {\it Athena} should be able to measure the X-ray variability of hundreds of AGN and thus constrain the distance modulus with uncertainties of 0.1 mag up to $z \sim 0.6$. We also discuss how, using a new dedicated wide-field X-ray telescope able to measure the variability of thousands of AGN, our estimator has the prospect of becoming a cosmological probe even more sensitive than current Supernovae Ia samples. | \setcounter{footnote}{0} One of the most important results in observational cosmology is the discovery, using type Ia supernovae (SNeIa) as standard candles, of the accelerating expansion of the Universe \citep{riess98, perlmutter99}. However, the use of SNeIa is difficult beyond $z\sim 1$ and limited up to $z\sim2$ \citep[e.g.][]{rubin13}. It is therefore of paramount importance to calibrate other independent distance indicators able to measure the expansion of the Universe. It would be even better if such a method were able to probe beyond these redshifts, where the differences among various cosmological models are larger.
Given their high luminosities, Active Galactic Nuclei (AGN) have been the subject of several studies as standard candles or rulers since their discovery \citep{baldwin77, rudge99,collier99, elvis02}. More recently, also thanks to a better understanding of the AGN structure, more promising methods have been presented (see \citet{marziani13} and the review therein). For example, many authors \citep[e.g.][]{watson11} use the tight relationship between the luminosity of an AGN and the radius of its Broad Line Region (BLR), established via reverberation mapping, to determine luminosity distances. On the other hand, \citet{wang13} suggest that super-Eddington accreting massive BHs may reach saturated luminosities, which then provide a new tool for estimating cosmological distances. Besides AGN, gamma-ray bursts (GRBs) have been used as standard candles; however, their low identification rate makes their use difficult \citep[e.g.][and references therein]{schaefer07}. Here we propose a new method to predict the AGN luminosity, based on the combination of the virial relations, which allow one to derive the BH mass (M$_{\rm BH}$) from the AGN luminosity and the width of the lines emitted from the BLR, with the well-established anti-correlation between \MBH\ and the X-ray variability amplitude. | \label{Sec:disc} The fits described above show that, as expected, highly significant relationships exist between the virial products and the AGN X-ray flux variability. These relationships allow us to predict the AGN 2-10 keV luminosities. The least scattered relation has a spread of 0.6-0.7 dex and is obtained when the Pa$\beta$ line width is used.
This could be either because the Pa$\beta$ broad emission line, contrary to H$\beta$, is observed to be practically unblended with other chemical species, or because, as our analysis is based on a collection of data from public archives, the Pa$\beta$ line widths, which come from the same project \citep{landt08, landt13}, may have been measured in a more homogeneous way. In this case, it is probable that new, dedicated, homogeneous observing programs could obtain even less scattered calibrations, at least for the H$\beta$-based relationships discussed in this work. In order to use this method to measure cosmological distances, and hence the curvature of the Universe, it is necessary to obtain reliable variability measures, corrected for the cosmological time-dilation, at relevant redshifts. In this respect, the relations based on the H$\beta$ line width are the most promising, as they can be used up to redshift $\sim$3 via near-infrared spectroscopic observations (e.g. in the 1-5 $\mu m$ wavelength range with NIRSpec\footnote{See \url{http://www.stsci.edu/jwst/instruments/nirspec}} on the {\it James Webb Space Telescope}). Moreover, recent studies by \citet{lanzuisi14} suggest that previous claims of a redshift dependence of the AGN X-ray variability should be attributed to selection effects. Our AGN-based relations constitute a distance indicator alternative to SNeIa and GRBs, which can be used to cross-check their distance estimates, reveal potential unknown sources of systematic errors in their calibration, and improve the constraints on fundamental cosmological parameters, including the dark energy properties. To assess the cosmological relevance of our distance estimate we compare, in Figure \ref{fig_4}, the luminosity distance, D$_L$, of our estimator (blue dots) with two different sets of cosmological models. The first one refers to flat $\Lambda$CDM models allowed by the Union2.1 compilation of SNeIa \citep{suzuki12}.
The black curve represents the best fit, while the red dashed and dotted curves are the $\pm 1\sigma$ bounds. The corresponding values of $\Omega_{\rm M}$ are indicated in the plot. The second set of curves represents flat dark energy models with a non-evolving equation of state ($w$CDM), i.e. with a constant $w$-parameter ($w \equiv p/\rho$), consistent with both the Planck maps and galaxy clustering in the BOSS survey \citep{sanchez13}, but with no reference to SNeIa data. The two blue dot-dashed and dashed curves represent the $\pm 1\sigma$ bounds, with the cosmological parameters indicated in the plot. \begin{figure} \centering \includegraphics[width=8.8cm, angle=0]{f02.eps} \caption{{\it Top.} Luminosity distance as a function of redshift. The curves represent cosmological models allowed by the different datasets described in the text. Our measures, using L$_X$ and Pa$\beta$, are shown by blue dots. In the lower right corner the typical uncertainty on a single measurement is shown. Red error bars show the expected uncertainties from a survey carried out with {\it Athena}. The expected number of AGN in each of the 0.4-wide redshift bins is also shown. {\it Bottom.} Per cent differences of the various cosmological models compared to their respective best fits. Magenta error bars show the expected uncertainties from a survey carried out with a future WFXT as described in the text. The expected number of AGN in each of the 0.3-wide redshift bins is also shown.} \label{fig_4} \end{figure} From a cosmological viewpoint, our present application should be considered as a proof of concept that can, however, be developed by future missions such as the new mission concept {\it Athena} \citep[recently proposed to the European Space Agency;][]{nandra13}.
As D$_{\rm L}$ is proportional to the square root of the luminosity, the 0.7 dex uncertainty on the prediction of the AGN X-ray luminosity corresponds to a 0.35 dex uncertainty on the D$_{\rm L}$ measurement (see the lower right corner in the upper panel of Figure \ref{fig_4})\footnote{The logarithmic uncertainties on D$_{\rm L}$ should be multiplied by a factor of 5 to convert them into distance modulus, $\Delta\rm M$, units.}. This implies that, if log-normal errors are assumed, variability measures of samples containing a number of AGN, N$_{\rm AGN}$, all having similar redshifts, will provide measures of the distance (at that average redshift) with uncertainties of $\sim$0.35/$\sqrt {\rm N_{\rm AGN }}$ dex. Following \citet{vaugh03}, in low signal-to-noise conditions (when the Poissonian noise dominates), the excess variance measurement is larger than the noise when \begin{equation} \sigma^2_{\rm rms} > \sqrt{2\over \rm N}{1 \over {\mu t_o}}, \label{eq_lim} \end{equation} where N is the number of time intervals of length $t_o$, and $\mu$ is the average count rate in $ph/s$ units. As also confirmed by our data, the above formula requires a count rate $\mu \sim 1~ ph/s$ and N $=$ 80 bins, each $t_o= 250$ s long, in order to measure \sig\ larger than $\sim 5\times10^{-4}$ (as mainly observed in this work). If {\it Athena} is used, $\mu \sim 1~ ph/s$ corresponds to a 2-10 keV flux of 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. According to the AGN X-ray luminosity function \citep{lafranca05, gilli07}, at these fluxes, with a 10 Ms survey covering 250 deg$^2$ with 500 pointings of the Wide Field Instrument ($\sim$0.5 deg$^2$ large field of view), it will be possible to measure \sig\ for a sample of $\sim$250 unabsorbed (N$_{\rm H}<$10$^{21}$ cm$^{-2}$) AGN in each of the redshift bins 0$<z<$0.3 and 0.3$<z<$0.6, and for a sample of $\sim$35 AGN in the redshift bin 0.6$<z<$0.9.
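The detectability threshold of Eq. (\ref{eq_lim}) and the $\sim$0.35/$\sqrt{\rm N_{AGN}}$ error scaling amount to a few lines of arithmetic; the sketch below (function names are ours, survey numbers taken from the text above) reproduces the quoted figures.

```python
import math

def sigma_rms2_min(mu, t_o, n_bins):
    """Minimum detectable excess variance: sigma^2_rms > sqrt(2/N) / (mu * t_o)."""
    return math.sqrt(2.0 / n_bins) / (mu * t_o)

def dl_uncertainty_dex(n_agn, sigma_logL=0.7):
    """D_L scales as sqrt(L), so its log-error is half the luminosity one,
    reduced by sqrt(N_AGN) when averaging over the AGN in a redshift bin."""
    return 0.5 * sigma_logL / math.sqrt(n_agn)

# Detection floor for mu = 1 ph/s, t_o = 250 s, N = 80 bins
thr = sigma_rms2_min(mu=1.0, t_o=250.0, n_bins=80)   # ~6e-4, i.e. of order 5e-4

# Distance-modulus errors (1 dex in D_L = 5 mag) for the Athena redshift bins
mag_250 = 5 * dl_uncertainty_dex(250)   # ~0.1 mag for the z < 0.6 bins
mag_35  = 5 * dl_uncertainty_dex(35)    # ~0.3 mag for 0.6 < z < 0.9
print(f"{thr:.1e}  {mag_250:.2f} mag  {mag_35:.2f} mag")
```

These recover the $\sim5\times10^{-4}$ variance floor and the 0.1 and 0.3 mag bin uncertainties quoted in the text.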
In this case D$_{\rm L}$ could be measured with a 0.02 dex uncertainty (0.1 mag) at redshifts below 0.6, and with a 0.06 dex (0.3 mag) uncertainty in the 0.6$<z<$0.9 bin (red error bars in Figure \ref{fig_4}). With the proposed {\it Athena} survey our estimator will not be competitive with SNeIa. It will, however, provide a cosmological test independent of SNeIa, able to detect possible systematic errors larger than 0.1 mag in the redshift range $z<0.6$, a value a factor of $\sim 4$ more precise than the alternative estimator based on GRBs \citep{schaefer07}. In order to exploit our proposed \sig-based AGN luminosity indicator at higher redshifts to constrain the geometry of the Universe, a further step is necessary, such as a dedicated Wide Field X-ray Telescope (WFXT) with an effective collecting area at least three times larger than {\it Athena} and a $\sim$2 deg$^2$ large field of view\footnote{\smallskip A similar kind of mission has already been proposed \citep[][see also: \url{http://wfxt.pha.jhu.edu/index.html}]{conconi10}.}. In this case, as an example, with a 40 Ms long program it would be possible to measure D$_{\rm L}$ with uncertainties of less than 0.003 dex (0.015 mag) at redshifts below 1.2, and of less than 0.02 dex (0.1 mag) in the redshift range $1.2<z<1.6$. The bottom panel of Figure \ref{fig_4} illustrates more clearly the potential of our new estimator. The curves represent the per cent difference of the luminosity distance models shown in the upper panel with respect to the reference best fit scenario. From the comparison of the magenta error bars with the model scatter, we conclude that our estimator has the prospect of becoming a cosmological probe even more sensitive than current SNeIa, if applied to AGN samples as large as that of a hypothetical future survey carried out with a dedicated WFXT as described above. | 14 | 4 | 1404.2607 |
1404 | 1404.5211_arXiv.txt | A significant fraction (\(\sim 30\)\%) of the gamma-ray sources listed in the second \textit{Fermi} LAT (2FGL) catalog is still of unknown origin, not yet being associated with counterparts at lower energies. Using the available information at lower energies, together with optical spectroscopy of the selected counterparts of these gamma-ray objects, we can pinpoint their exact nature. Here we present a pilot project aiming to assess the effectiveness of the several classification methods developed to select gamma-ray blazar candidates. To this end, we report optical spectroscopic observations of a sample of 5 gamma-ray blazar candidates selected on the basis of their infrared WISE colors or of their low-frequency radio properties. Blazars come in two main classes: BL Lacs and FSRQs, which show similar optical spectra except for the stronger emission lines of the latter. For three of our sources the almost featureless optical spectra obtained confirm their BL Lac nature, while for the source WISEJ022051.24+250927.6 we observe emission lines with equivalent width \(EW\sim 31\) \AA, identifying it as a FSRQ with \(z = 0.48\). The source WISEJ064459.38+603131.7, although not featuring a clear radio counterpart, shows a blazar-like spectrum with weak emission lines of \(EW \sim 7\) \AA, yielding a redshift estimate of \(z=0.36\). In addition, we report optical spectroscopic observations of 4 WISE sources associated with known gamma-ray blazars without a firm classification or redshift estimate. For all of these latter sources we confirm a BL Lac classification, with a tentative redshift estimate of \(z = 0.65\) for the source WISEJ100800.81+062121.2. | \label{sec:intro} About 1/3 of the \(\gamma\)-ray sources listed in the 2nd \textit{Fermi} catalog \citep[2FGL,][]{nolan12} have not yet been associated with counterparts at lower energies.
A precise knowledge of the number of unidentified gamma-ray sources (UGSs) is extremely relevant since, for example, it could help provide the tightest constraints ever determined on dark matter models \citep{abdo2013}. Many UGSs could be blazars, the largest identified population of extragalactic \(\gamma\)-ray sources, but how many of them actually are blazars is not yet known, due in part to the incompleteness of the catalogs used for the associations \citep{2011ApJ...743..171A}. The first step in reducing the number of UGSs is therefore to recognize those that could be blazars. Blazars are the rarest class of Active Galactic Nuclei, dominated by variable, non-thermal radiation over the entire electromagnetic spectrum \citep[e.g.,][]{1995PASP..107..803U,2013MNRAS.431.1914G}. Their observational properties are generally interpreted in terms of a relativistic jet aligned within a small angle to our line of sight \citep{1978bllo.conf..328B}. Blazars are classified as BL Lacs and FSRQs (or BZBs and BZQs according to the nomenclature proposed by \citealt{2011bzc3.book.....M}), with the latter showing stronger emission lines as well as higher radio polarization. In particular, if the only spectral features observed are emission lines with rest frame equivalent width \(EW \leq 5\) \AA~the object is classified as a BZB \citep{1991ApJ...374..431S,1997ApJ...489L..17S}, otherwise it is classified as a BZQ \citep{1999ApJ...525..127L,2011bzc3.book.....M}.
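The EW-based rule above is simple to encode; the only subtlety is that an observed-frame equivalent width must be divided by \(1+z\) before applying the 5 \AA\ threshold. The helper names below are ours, and the numbers are used purely for illustration.

```python
def rest_frame_ew(ew_observed, z):
    """Equivalent widths stretch with the spectrum: EW_rest = EW_obs / (1 + z)."""
    return ew_observed / (1.0 + z)

def classify_blazar(ew_rest_max):
    """Rule quoted above: only weak lines (rest-frame EW <= 5 A) -> BZB, else BZQ."""
    return "BZB" if ew_rest_max <= 5.0 else "BZQ"

# A nearly featureless spectrum (strongest line at rest EW = 2 A) is a BZB ...
print(classify_blazar(2.0))                        # BZB
# ... while a line observed with EW = 31 A at z = 0.48 remains well above
# the threshold in the rest frame (~21 A), giving a BZQ classification.
print(classify_blazar(rest_frame_ew(31.0, 0.48)))  # BZQ
```

This matches the classification of WISEJ022051.24+250927.6 as a FSRQ in the abstract, under the assumption that the quoted EW is an observed-frame value.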
Systematic projects aimed at obtaining optical spectroscopic observations of blazars are currently being carried out by different groups (see, e.g., \citealt {sbarufatti06,sbarufatti09,2012A&A...543A.116L,2013AJ....145..114L}\footnote{\href{http://archive.oapd.inaf.it/zbllac/index.html}{http://archive.oapd.inaf.it/Wallace/index.html}}; \citealt{2013ApJ...764..135S}). The blazar spectral energy distributions (SEDs) typically show two peaks: one in the radio - soft X-ray range, due to synchrotron emission by highly relativistic electrons within the jet, and another one at hard X-ray or \(\gamma\)-ray energies, interpreted as inverse Compton upscattering, by the same electrons, of the seed photons provided by the synchrotron emission \citep{1996ApJ...463..555I}, with the possible addition of seed photons from outside the jet yielding external inverse Compton contributions to the non-thermal radiation \citep[see][]{1993ApJ...416..458D,2009ApJ...692...32D}, often dominating the \(\gamma\)-ray output \citep{2009A&A...502..749A,2011ApJ...743..171A}. \begin{table*} \caption{WISE sources discussed in this paper. In the upper part of the Table we list the \(\gamma\)-ray blazar candidates associated with UGSs or AGUs, while in the lower part we list the sources associated with known \(\gamma\)-ray blazars. Column description is given in the main text (see Sect.
\ref{sec:sources}).}\label{table_sources} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{lcclll} \hline \hline WISE NAME & RA & DEC & OTHER NAME & NAME 2FGL & NOTES \\ & J2000 & J2000 & & & \\ \hline J022051.24+250927.6 & 02:20:51.24 & +25:09:27.6 & NVSSJ022051+250926 & 2FGLJ0221.2+2516 & UGS X-KDE \\ J050558.78+611335.9 & 05:05:58.79 & +61:13:35.9 & NVSSJ050558+611336 & 2FGLJ0505.9+6116 & AGU WISE \\ J060102.86+383829.2 & 06:01:02.87 & +38:38:29.2 & WN0557.5+3838 & 2FGLJ0600.9+3839 & UGS WENSS \\ J064459.38+603131.7 & 06:44:59.39 & +60:31:31.8 & & 2FGLJ0644.6+6034 & UGS WISE \\ J104939.34+154837.8 & 10:49:39.35 & +15:48:37.9 & GB6J1049+1548 & 2FGLJ1049.4+1551 & AGU R-KDE \\ \hline J022239.60+430207.8 & 02:22:39.61 & +43:02:07.9 & BZBJ0222+4302 & 2FGLJ0222.6+4302 & A, Z=0.444? \\ J100800.81+062121.2 & 10:08:00.82 & +06:21:21.3 & BZBJ1008+0621 & 2FGLJ1007.7+0621 & B, CAND \\ J131443.81+234826.7 & 13:14:43.81 & +23:48:26.8 & BZBJ1314+2348 & 2FGLJ1314.6+2348 & B, CAND \\ J172535.02+585140.0 & 17:25:35.03 & +58:51:40.1 & BZBJ1725+5851 & 2FGLJ1725.2+5853 & B, CAND \\ \hline \hline \end{tabular}} \end{center} \end{table*} Recently, \citet{2013ApJS..206...12D} proposed an association procedure to recognize \(\gamma\)-ray blazar candidates on the basis of their positions in the three-dimensional WISE color space. As a matter of fact, blazars - whose emission is dominated by beamed, non thermal emission - occupy a defined region in such a space, well separated from that occupied by other sources in which thermal emission prevails \citep{2011ApJ...740L..48M,paper2}. Applying this method, \citet{2013arXiv1308.1950C} recently identified thirteen gamma-ray emitting blazar candidates from a sample of 102 previously unidentified sources selected from Astronomer's Telegrams and the literature. 
\citet{2013arXiv1303.3585M} applied the classification method proposed by \citet{2013ApJS..206...12D} to 258 UGSs and 210 active galaxies of uncertain type (AGUs) listed in the 2FGL \citep{nolan12}, finding candidate blazar counterparts for 141 UGSs and 125 AGUs. The classification method proposed by \citet{2013ApJS..206...12D}, however, can only be applied to sources detected in all 4 WISE bands, i.e., 3.4, 4.6, 12 and 22 \(\mu\)m. Using the X-ray emission in place of the 22 \(\mu\)m detection, \citet{2013ApJS..209....9P} proposed a method to select \(\gamma\)-ray blazar candidates among \textit{Swift}-XRT sources, considering those that feature a WISE counterpart detected in at least the first 3 bands and with IR colors compatible with the 90\% two-dimensional densities of known \(\gamma\)-ray blazars evaluated using the Kernel Density Estimation (KDE) technique \citep[see, e.g.,][and references therein]{2004ApJS..155..257R,2009MNRAS.396..223D,2011MNRAS.418.2165L}, thereby selecting 37 new \(\gamma\)-ray blazar candidates. Similarly, using the radio emission as additional information, \citet{massaro2013c} investigated all the radio sources in the NVSS and SUMSS surveys that lie within the positional uncertainty regions of the \textit{Fermi} UGSs and, considering those sources with IR colors compatible with the 90\% two-dimensional KDE densities of known \(\gamma\)-ray blazars, selected 66 additional \(\gamma\)-ray blazar candidates. Finally, \citet{massaro2013b} investigated the low-frequency radio emission of blazars and searched for sources with similar features by combining the information derived from the WENSS and NVSS surveys, identifying 26 \(\gamma\)-ray candidate blazars within the \textit{Fermi} LAT positional uncertainty regions of 21 UGSs.
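The KDE-based selection described above can be sketched as follows: fit a two-dimensional KDE to the IR colors of known \(\gamma\)-ray blazars, take the density threshold that encloses 90\% of them, and keep candidates lying above it. The snippet below uses synthetic Gaussian "colors", not real WISE data, and the function names are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Synthetic stand-in for the [3.4]-[4.6] vs [4.6]-[12] colors of known blazars
known = rng.normal(loc=[1.0, 2.5], scale=[0.15, 0.3], size=(500, 2)).T  # shape (2, N)

kde = gaussian_kde(known)            # 2D density estimate of the blazar locus
dens = kde(known)                    # density at each training point
threshold = np.percentile(dens, 10)  # 90% of the known blazars lie above this

def is_candidate(color1, color2):
    """Keep a source if its KDE density exceeds the 90% threshold."""
    return kde([[color1], [color2]])[0] >= threshold

print(is_candidate(1.0, 2.5))   # near the locus centre -> selected
print(is_candidate(3.0, 0.0))   # far from the locus    -> rejected
```

In practice the training set would be the WISE colors of the Roma-BZCAT \(\gamma\)-ray blazars, and the threshold defines the 90\% density contour used by the papers cited above.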
In this paper we present a pilot project to assess the effectiveness of the three methods described above (position in the three-dimensional WISE IR color space, radio or X-ray detection plus position in the two-dimensional WISE IR color space, and low-frequency radio properties) in selecting gamma-ray blazar candidates. To this end, we report on optical spectra acquired using the MMT, Loiano and OAN telescopes for 5 WISE \(\gamma\)-ray blazar candidates - counterparts of three UGSs and two AGUs - in order to identify their nature and to test the reliability of these different approaches in selecting \(\gamma\)-ray candidate blazars. In addition, we also present optical spectra of 4 known \(\gamma\)-ray blazars with uncertain redshift estimates or unknown classification (BZB vs BZQ) \citep{2011ApJ...743..171A,nolan12} with a WISE counterpart identified by \citet{2013ApJS..206...12D}. {We note that our approach in selecting the targets for our observations is different from that adopted in other works \citep[i.e.][]{2013ApJ...764..135S}, that is, selecting the source closest to the radio or optical coordinates. Our approach for the target selection, as reported in \citet{2013ApJS..206...12D}, \citet{2013arXiv1303.3585M}, \citet{2013ApJS..209....9P} and \citet{massaro2013b}, is the following: \begin{itemize} \item[a)] For \textit{Fermi} UGSs or AGUs, among all the sources inside the 95\% LAT uncertainty region (\(\sim 10\)') we select gamma-ray blazar candidates on the basis of their multi-wavelength properties (IR, radio+IR, X-ray+IR, low-frequency radio). As a consequence, our selected targets are not necessarily the closest to the optical or radio coordinates. They may, in principle, not even have a radio counterpart. \item[b)] For known gamma-ray blazars, \citet{2013ApJS..206...12D} associate to Roma-BZCAT \citep{2011bzc3.book.....M} sources the closest WISE source inside 3.3'', selected on the basis of its WISE colors.
So, even if these sources are spatially compatible with the radio or optical coordinates, given the extension of the WISE PSF (\(\sim 6\)'' in the W1 band and \(\sim 12\)'' in the W4 band, \citealt{2010AJ....140.1868W}), we cannot be sure a priori that the selected IR source is actually the blazar counterpart. Since the probability of having two different blazars within 3.3'' is essentially 0 (the blazar density in the sky is about 1 source per 10 square degrees), if the selected WISE source does show a blazar spectrum we can be confident that this is indeed the IR blazar counterpart and that the \citeauthor{2013ApJS..206...12D} procedure correctly classified the WISE source. \end{itemize}} The paper is structured as follows: in Sect. \ref{sec:observations} we describe the observation procedures and the data reduction process adopted; in Sect. \ref{sec:sources} we present our results on the individual sources and discuss them in Sect. \ref{sec:discussion}; in Sect. \ref{sec:conclusions} we present our conclusions. Throughout this paper USNO-B magnitudes are reported as photographic magnitudes, SDSS magnitudes are reported in the AB system, and 2MASS magnitudes are reported in the VEGA system. | \label{sec:conclusions} We presented a pilot project to assess the effectiveness of several methods in selecting gamma-ray blazar candidates. To this end, we obtained optical spectroscopic observations for a sample of five \(\gamma\)-ray blazar candidates selected with different methods based on their radio to IR properties, and for a sample of four WISE counterparts to known \(\gamma\)-ray blazars. The main results of our analysis are summarized as follows: \begin{enumerate} \item We confirm the blazar nature of all the sources associated with known \(\gamma\)-ray blazars. In addition, we obtain for the first time a tentative redshift estimate \(z = 0.65\) for the blazar BZBJ1008+0621.
\item We confirm the blazar nature of all the \(\gamma\)-ray blazar candidates selected by \citet{2013arXiv1303.3585M}, \citet{massaro2013b}, \citet{2013ApJS..209....9P} and \citet{massaro2013c}. In addition, we obtain for WISEJ104939.34+154837.8, WISEJ022051.24+250927.6 and WISEJ064459.38+603131.7 redshift estimates of \(z = 0.33\), \(z = 0.48\) and \(z=0.36\), respectively. \item The source WISEJ064459.38+603131.7, in particular, is intriguing since it shows an almost featureless continuum with weak emission lines reminiscent of weak emission line quasar spectra \citep{2006ApJ...644...86S,2009ApJ...696..580S}, but it lacks any obvious radio counterpart, which is required for a blazar classification \citep{2012MNRAS.420.2899G,2013MNRAS.431.1914G}. \end{enumerate} While these preliminary results seem to confirm the effectiveness of the classification method presented by \citet{2013ApJS..206...12D} and of the selection methods presented by \citet{2013arXiv1303.3585M}, \citet{massaro2013b}, \citet{2013ApJS..209....9P} and \citet{massaro2013c}, additional ground-based optical and near-IR spectroscopic follow-up observations of a larger sample of \(\gamma\)-ray blazar candidates are needed to confirm the nature of the selected sources and to obtain their redshifts. ~\\ | 14 | 4 | 1404.5211 |
1404 | 1404.5816_arXiv.txt | Recent comparisons of magnetic field directions derived from maser Zeeman splitting with those derived from continuum source rotation measures have prompted a new analysis of the propagation of the Zeeman split components and of the inferred field orientation. In order to do this, we first review the differing electric field polarization conventions used in past studies. With these clearly and consistently defined, we then show that for a given Zeeman splitting spectrum, the magnetic field direction is fully determined and predictable on theoretical grounds: when a magnetic field is oriented away from the observer, the left-hand circular polarization is observed at higher frequency and the right-hand polarization at lower frequency. This is consistent with classical Lorentzian derivations. The consequent interpretation of recent measurements then raises the possibility of a reversal between the large-scale field (traced by rotation measures) and the small-scale field (traced by maser Zeeman splitting). | Although a large number of magnetic field studies have been undertaken using Zeeman splitting of maser spectra \citep[e.g.][]{fish05,surcis11}, the majority of these studies only consider the magnetic fields of individual regions. For mapping the field pattern within a source, the intensity of the field is of prime interest, together with changes in field direction, but knowledge of the actual line-of-sight field orientation (either towards or away from the observer) is not usually of importance to the interpretation. However, when considering ensembles of sources, there is the possibility of comparing absolute field directions with Galactic structure, and with measurements obtained by other techniques.
Results of the MAGMO survey \citep{green12magmo0} and prior observations of magnetic field orientation from hydroxyl (OH) maser Zeeman splitting \citep[e.g.][]{reid90,fish03b,han07} have led us to re-evaluate the field direction implied by a given Zeeman pattern. Specifically, we address the apparent contradiction in field direction between the maser measurements and those inferred from Faraday rotation \citep[e.g.][]{brown07,vaneck11} by exploring the Zeeman splitting in the quantum mechanical sense. In the weak field limit, Zeeman splitting causes the otherwise degenerate energy levels of an atom or molecule to split into $2 {J}+1$ magnetic components, where $ {J}$ is the total angular momentum quantum number. In the simplest case of a ${J} =1-0$ transition, this results in three transition components\footnote{We focus on this simple instance, applicable to the OH doublet transitions at 1665 and 1667 MHz, and H{\sc i} at 1420 MHz. We note that similar analysis can be applied to the more complex Zeeman patterns of some other transitions, such as the 1720 MHz satellite transition of OH.}: the unshifted (in frequency relative to zero magnetic field) $\pi$ and the two shifted $\sigma$s, denoted $\sigma^{+}$ and $\sigma^{-}$ (Figure\,\ref{splittingfig}). Commonly, conventions are invoked when attributing the $\sigma^{+}$ and $\sigma^{-}$ components to a handedness of circular polarization, and when allocating which of these is found at the higher frequency for a given field direction. In this paper we first outline the current convention for inferring field orientation from an observed maser spectrum (Section\,\ref{conventions}). We then re-evaluate the propagation of the individual components to show how the field direction is fully determined and predictable on theoretical grounds, and is consistent with the previously used convention (for example as adopted in \citealt{davies74} and \citealt{garcia88}).
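The convention stated in the abstract (IEEE handedness, IAU Stokes $V$ = RHC $-$ LHC) amounts to a simple lookup, which can be encoded as below to avoid sign mistakes when reading a Zeeman pair off a spectrum; the function and its names are ours, for illustration only.

```python
def field_direction(pol_at_higher_frequency):
    """Infer the line-of-sight field direction from which circular
    polarization sits at the higher frequency (lower velocity) of a
    Zeeman pair, assuming IEEE handedness and IAU Stokes V = RHC - LHC."""
    if pol_at_higher_frequency == "LHC":
        return "away from observer"
    if pol_at_higher_frequency == "RHC":
        return "towards observer"
    raise ValueError("expected 'LHC' or 'RHC'")

# LHC component at the higher frequency -> field points away from the observer
print(field_direction("LHC"))   # away from observer
print(field_direction("RHC"))   # towards observer
```

This encodes exactly the two cases summarized in the conclusions of the paper.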
The argument is presented first in an abbreviated descriptive form (Section\,\ref{descriptionbrief}) before a full derivation (Section\,\ref{descriptionfull}). Furthermore, in the appendix we test the compliance of various maser theory publications that discuss polarization (considering the direction of waves, the standard Cartesian axis system and the polarization conventions). \begin{figure} \begin{center} \includegraphics[width=75mm]{Figure1} \caption{Transitions between magnetic sub-levels of Zeeman splitting. $\Delta m = m_{\rmn{lower}}-m_{\rmn{upper}}$ \citep[e.g.][]{garcia88,gray94,gray12}. $\Delta m = +1$ has the lower frequency (higher equivalent Doppler radial velocity), $\Delta m = -1$ has the higher frequency (lower velocity).} \label{splittingfig} \end{center} \end{figure} | We revisit the quantum mechanics and radiative transfer of maser emission under the conditions of Zeeman splitting and establish the correct field orientation for an observed spectrum. Adopting the IEEE convention for right-handed and left-handed circular polarization, and the IAU convention for Stokes $V$ (right-handed circular polarization minus left-handed circular polarization) we find (Figure \ref{summaryfig}): \begin{itemize} \item A magnetic field directed away from the observer will have right-hand circular polarization at a lower frequency (higher velocity) and left-hand circular polarization at a higher frequency (lower velocity). \item A magnetic field directed towards the observer will have right-hand circular polarization at a higher frequency (lower velocity) and left-hand circular polarization at a lower frequency (higher velocity). 
\end{itemize} \noindent The results of our current analysis are consistent with the classical Lorentzian derivations and imply that the Zeeman splitting in the Carina-Sagittarius spiral arm, as measured in previous studies, should be interpreted as a field direction aligned away from the observer, thus demonstrating a real field reversal in the interstellar medium. \begin{figure} \begin{center} \renewcommand{\baselinestretch}{1.1} \includegraphics[width=8cm]{NewFigure5} \caption{\small Summary of observed Zeeman profiles (top) and corresponding magnetic field directions relative to the observer (bottom). The IAU definition of Stokes $V$ and the IEEE definition of polarization handedness are assumed. Frequency increases from left to right.} \label{summaryfig} \end{center} \end{figure} | 14 | 4 | 1404.5816 |
1404 | 1404.7508_arXiv.txt | The presence of multiple populations is now well-established in most globular clusters in the Milky Way. In light of this progress, here we suggest a new model explaining the origin of the Sandage period-shift and the difference in mean period of type ab RR Lyrae variables between the two Oosterhoff groups. In our models, the instability strip in the metal-poor group II clusters, such as M15, is populated by second generation stars (G2) with enhanced helium and CNO abundances, while the RR Lyraes in the relatively metal-rich group I clusters like M3 are mostly produced by first generation stars (G1) without these enhancements. This population shift within the instability strip with metallicity can create the observed period-shift between the two groups, since both helium and CNO abundances play a role in increasing the period of RR Lyrae variables. The presence of more metal-rich clusters having Oosterhoff-intermediate characteristics, such as NGC 1851, as well as of the most metal-rich clusters hosting RR Lyraes with the longest periods (group III), can also be reproduced, as more helium-rich third and later generations of stars (G3) penetrate into the instability strip with a further increase in metallicity. Therefore, although there are systems where the suggested population shift cannot be a viable explanation, for the most general cases our models predict that the RR Lyraes are produced mostly by G1, G2, and G3, respectively, for the Oosterhoff groups I, II, and III. | One of the long-standing problems in modern astronomy is the curious division of globular clusters (GCs) into two groups, according to the mean period ($\left<P_{\rm ab}\right>$) of type ab RR Lyrae variables \citep{Oos39}. Understanding this phenomenon, ``the Oosterhoff dichotomy'', is intimately related to the population II distance scale and the formation of the Milky Way halo \citep[][and references therein]{San81,Lee90,Yoo02,Cat09}.
\citet{van73} first suggested the ``hysteresis mechanism'', which explains the dichotomy as a difference in mean temperature between the type ab RR Lyraes in the two groups. That this cannot be the whole explanation for the dichotomy became clear when \citet{San81} found a period-shift at given temperature between the RR Lyraes in two GCs representing each of the Oosterhoff groups, M15 and M3. Sandage suggested that this shift is due to the luminosity difference, which, however, required that the RR Lyraes in M15 are abnormally enhanced in helium abundance. \citet[][hereafter LDZ I]{Lee90}, on the other hand, found that RR Lyraes evolved away from the zero-age horizontal-branch (ZAHB) can explain the observed period-shift when the HB type \citep[][hereafter LDZ II]{Lee94} is sufficiently blue. In the case of M15, which has a blue tail (or extreme blue HB; EBHB) in addition to a normal blue HB \citep{Buo85}, it was not clear, though, whether this evolution effect alone can reproduce the period-shift. More recent colour-magnitude diagrams (CMDs) for other metal-poor Oosterhoff group II GCs, NGC~4590, 5053, and 5466 \citep{Wal94,Nem04,Cor99}, also show that the HB types for these GCs are too red (HB~Type $\approx$ 0.5) to have enough evolution effect. This suggests that a significant fraction of RR Lyraes in these GCs, including M15, are probably near the ZAHB. Therefore, the complete understanding of the difference between the two Oosterhoff groups still requires further investigation. The recent discovery of multiple populations in GCs is throwing new light on this problem. Even in ``normal'' GCs without signs of supernova enrichment, observations and population models suggest the presence of two or more subpopulations differing in helium and light-element abundances, including CNO \citep[][and references therein]{Gra12a}.
These abundance differences have been attributed to chemical pollution and enrichment by intermediate-mass asymptotic giant branch (AGB) stars, fast-rotating massive stars, and/or rotating AGB stars \citep{Ven09,Dec07,Dec09}. Since the colour of the HB is sensitively affected by age, helium and CNO abundances (see LDZ II), each subpopulation in a GC would be placed in a different colour regime on the HB. Similarly, this would affect the period of RR Lyraes, as the variation in chemical composition would change the luminosity and mass of a HB star within the instability strip, by which the period is determined when temperature is fixed \citep{van73}. The purpose of this Letter is to suggest that, in the multiple populations paradigm, the difference in period between the two Oosterhoff groups can be reproduced as the instability strip is progressively occupied by different subpopulations with increasing metallicity.

Figure~\ref{fig:oo} shows a schematic diagram that explains the main points from our models. As discussed above, RR Lyraes in the metal-poor group II cluster M15 are produced by helium- and CNO-enhanced G2 (lower panel). When the HB of this metal-poor model is shifted redward with increasing metallicity, the instability strip becomes mostly populated by G1 having Oosterhoff I characteristics (middle panel). When this redward shift continues as metallicity increases further, our models indicate that the instability strip would be more populated by G3, producing first the transition case between the groups I and III, having mildly helium-enhanced (Y $\approx$~0.26) RR Lyraes with Oosterhoff-intermediate characteristics, such as NGC 1851, as suggested by \citet{Kun13a, Kun13b}. Then, in the most metal-rich regime, if G3 are present, GCs with RR Lyraes having more enhanced helium abundance (Y $\approx$~0.30) and longest periods like NGC 6441 (group III) would be produced (upper panel).
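The dependence of the pulsation period on luminosity and mass invoked in this argument is usually quantified with the van Albada \& Baker pulsation equation; one widely used calibration (quoted here for orientation only, not taken from this Letter) is

```latex
\begin{equation}
\log P = 11.497 + 0.84\,\log (L/L_{\odot}) - 0.68\,\log (M/M_{\odot})
         - 3.48\,\log T_{\rm eff},
\end{equation}
```

with $P$ in days. At fixed $T_{\rm eff}$, helium-enhanced HB stars within the instability strip are brighter and less massive, and therefore pulsate with longer periods.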
Note that the placements of G1 and G2 in this schematic diagram are valid only for GCs where $\Delta$$t$ between G1 and G2 is similar to that in the case of M15. If $\Delta$$t$ is much smaller than $\sim$~1 Gyr as adopted in our models, for example, the RR Lyraes in group I GCs would be more dominated by G2, while the red HB becomes more populated by G1. It appears unlikely, however, that these variations are common, because otherwise most group I GCs would have Oosterhoff-intermediate characteristics in period-shift and $\left<P_{\rm ab}\right>$. Some GCs like NGC 6397 show very small spread in the colour on the HB, as well as in the Na-O plane \citep{Car09}, which suggests that these GCs probably consist of G1 only. Therefore, there are cases/systems where the suggested population shift cannot be a viable explanation for the Oosterhoff dichotomy. For these GCs, evolution away from the ZAHB (LDZ I) and some hysteresis mechanism \citep{van73} are probably at work in producing the difference in $\left<P_{\rm ab}\right>$ between the groups I and II. The Na-O anticorrelations observed in some HB stars in GCs can provide an important test of our placements of G1, G2, and G3 on the HB. For example, for M5, when the division of G1 and G2 is made properly at [Na/Fe] = 0.1 as suggested by \citet{Car09}, Fig. 9 of \citet{Gra13} shows that $\sim$~65~$\%$ of stars in the red HB have G2 characteristics, while $\sim$~35~$\%$ are in the G1 regime. This is in agreement with the ratio of G2/G1 ($\sim$~2; see Fig. 1) in the red HB of our model for M3-like GCs. For more metal-rich GCs, our models predict that the red HB is roughly equally populated by G1 and G2, which agrees well with the Na-O observations for NGC 1851 and NGC 2808 \citep{Gra12b,Mar14}. For the blue HB stars, these observations confirm that they belong to Na-rich and He-rich G3.
It is interesting to note that one RR Lyrae variable in NGC 1851 was observed to be Na-rich like the blue HB stars (G3), which is consistent with our suggestion that NGC 1851 is a transition case between the Oosterhoff groups I and III. For the group II GCs, Na-O observations of HB stars are available only for M22 \citep{Gra14}, which show that the blue HB stars right next to the RR Lyraes have G1 characteristics. While this is in qualitative agreement with our model for M15, the interpretation is more complicated as this GC was also affected by supernova enrichment \citep[and references therein]{Joo13}. Certainly, spectroscopic observations for a large sample of RR Lyrae and horizontal-branch stars in GCs representing the two Oosterhoff groups, such as M15 and M3, are urgently required to confirm our models.
{} {We report the first detections of OH$^+$ emission in planetary nebulae (PNe).} {As part of an imaging and spectroscopy survey of 11 PNe in the far-IR using the PACS and SPIRE instruments aboard the \textit{Herschel Space Observatory}, we performed a line survey in these PNe over the entire spectral range between 51$\mu$m and 672$\mu$m to look for new detections.} {The rotational emission lines of OH$^+$ at 152.99, 290.20, 308.48, and 329.77$\mu$m were detected in the spectra of three planetary nebulae: \object{NGC 6445}, \object{NGC 6720}, and \object{NGC 6781}. Excitation temperatures and column densities derived from these lines are in the range of 27--47~K and 2$\times$10$^{10}$--4$\times$10$^{11}$ cm$^{-2}$, respectively.} {In PNe, the OH$^+$ rotational line emission appears to be produced in the photodissociation region (PDR) in these objects. The emission of OH$^+$ is observed only in PNe with hot central stars ($T_\mathrm{eff} >$ 100\,000 K), suggesting that high-energy photons may play a role in the OH$^+$ formation and its line excitation in these objects, as seems to be the case for ultraluminous galaxies.}

The molecular ion OH$^+$ plays an important role in the chemistry of oxygen-bearing species and in the formation of water in space \citep{1973ApJ...185..505H, 1977A&A....54..345B, 1995ApJS...99..565S, 2012ApJ...754..105H}. Reactions of this molecular ion with H$_2$ can lead to the formation of H$_2$O$^+$ and H$_3$O$^+$, which recombine with electrons to form water. Models indicate that the abundances of these species are sensitive to the flux of UV photons, X-rays, and cosmic rays \citep{2011A&A...525A.119M, 2012ApJ...754..105H, 2013arXiv1308.5556B, 2013A&A...550A..25G}. Observations of OH$^+$ and H$_2$O$^+$ can be used, for example, to infer cosmic ray ionisation rates \citep{2012ApJ...754..105H}.
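The excitation temperatures and column densities quoted above are typically derived from a rotation (Boltzmann) diagram, in which $\ln(N_u/g_u)$ for each detected line is fitted linearly against the upper-level energy $E_u/k$; the slope gives $-1/T_{\rm ex}$ and the intercept $\ln(N/Q)$. A minimal sketch of this standard technique (the level energies and populations below are illustrative placeholders, not the measured OH$^+$ data):

```python
import numpy as np

def rotation_diagram(E_u, ln_Nu_gu):
    """Fit ln(N_u/g_u) = ln(N/Q) - E_u/T_ex.

    E_u      : upper-level energies in K
    ln_Nu_gu : ln of column density per statistical weight
    Returns (T_ex, ln_N_over_Q).
    """
    slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
    return -1.0 / slope, intercept

# Synthetic level populations for an assumed 35 K excitation temperature.
E_u = np.array([46.0, 49.0, 140.0])   # placeholder energies (K)
ln_Nu_gu = 20.0 - E_u / 35.0          # Boltzmann distribution at T_ex = 35 K
T_ex, lnNQ = rotation_diagram(E_u, ln_Nu_gu)
```

The total column density then follows from the intercept once the partition function $Q(T_{\rm ex})$ of the molecule is known.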
The OH$^+$ molecular ion can also be produced in a medium ionised by shocks \citep[and references therein]{1989ApJ...340..869N, 1990RMxAA..21..499D, 2013A&A...557A..23K}. In the interstellar medium (ISM), OH$^+$ is thought to be formed through two main routes \citep[see ][]{1995ApJS...99..565S,2011A&A...525A.119M,2012ApJ...754..105H}: one is the reaction of O$^+$ with H$_2$ and the other the reaction of O with H$_3^+$. As shown by \citet{2012ApJ...754..105H}, the former route will produce OH$^+$ more efficiently close to A$_\mathrm{V} \sim $ 0.1 and the latter around A$_\mathrm{V} \sim $ 6 (where the precise value depends on the intensity of the radiation field). In photodissociation regions (PDRs), in addition to the reaction between O$^+$ and H$_2$, photoionisation of OH and charge exchange between OH and H$^+$ may also contribute to OH$^+$ formation \citep{2011A&A...525A.119M,2013A&A...560A..95V}. Furthermore, PDR models \citep{2012ApJ...754..105H} show that the production of OH$^+$ is more efficient in two different regimes: (i) low-density clouds with low levels of ultraviolet (UV) radiation ($\chi \sim$ 1--10$^3$ and $n <$ 10$^3$ cm$^{-3}$) and (ii) high-density environments with high levels of UV radiation ($\chi \sim$ 10$^4$--10$^5$ and $n >$ 10$^5$ cm$^{-3}$), where $\chi$ is defined as the ratio between the fluxes of the source and of the interstellar radiation field at h$\mathrm{\nu}$ = 12.4 eV (= 1\,000$\AA$) and hence it is a measure of the radiation field strength \citep[e.g. ][]{1996ApJ...468..269D}. In most sources where OH$^+$ has been detected, its lines are seen in absorption and are usually attributed to diffuse interstellar clouds ionised by the galactic interstellar radiation field and cosmic rays, or by FUV photons from nearby stars. The first detection of OH$^+$ in the ISM was made by \citet{2010A&A...518A..26W} towards the giant molecular cloud Sagittarius~B2 using the Atacama Pathfinder Experiment (APEX) telescope. 
Absorption lines of OH$^+$ have also been detected along several lines of sight in the Galactic ISM towards bright continuum sources, such as the Orion molecular clouds \citep{2010A&A...518L.110G, 2010A&A...521L..10N, 2010A&A...521L..47G, 2012ApJ...758...83I, 2013A&A...549A.114L} and the massive star-forming region W3~IRS5 \citep{2013arXiv1308.5556B}. Lines of OH$^+$ have also been detected in absorption in the material around young stars \citep{2011PASP..123..138V, 2013arXiv1308.5556B, 2013A&A...557A..23K, 2013A&A...556A..57K} and in the galaxies NGC~4418 and Arp 220 \citep{2011ApJ...743...94R,2013A&A...550A..25G}. Emission of OH$^+$ lines has been observed in ultraluminous galaxies, e.g. Mrk~231 \citep{2010A&A...518L..42V} and NGC~7130 \citep{2013ApJ...768...55P}. In Mrk 231, for example, the detected lines correspond to transitions between the first excited and the ground rotational levels. The powerful OH$^+$ emission in such objects is attributed to the chemistry in X-ray dominated regions \citep[XDRs;][]{2010A&A...518L..42V}. In our Galaxy, detection of OH$^+$ lines in emission has so far been limited to the Orion Bar, the prototypical PDR, where \citet{2013A&A...560A..95V} reported the detection of the lines at 290.20, 308.48, and 329.77$\mu$m (the transitions from N=1 to N=0), and to the Crab Nebula supernova remnant, where \citet{2013Sci...342.1343B} detected the line at 308.48$\mu$m. In \citet{2013A&A...560A..95V}, the presence of OH$^+$ is analysed in terms of PDR models, with the conclusion that the emission of OH$^+$ is largely due to the PDR produced by the nearby Trapezium stars ($\chi \sim$ 10$^4$--10$^5$ and $n \geq$ 10$^5$ cm$^{-3}$). We report the first detections of OH$^+$ in planetary nebulae (PNe), the ejecta of evolved low- to intermediate-mass stars.
This discovery was made simultaneously with, but independently of, the detection of OH$^+$ in \object{NGC 7293} and \object{NGC 6853} by \citet{Etxaluze_etal_2014}, also published in this volume.

We present here the first detections of OH$^+$ in PNe. The emission was detected in both PACS and SPIRE far-infrared spectra of three of the 11 PNe in the sample obtained in the \textit{Herschel Planetary Nebulae Survey} (\textit{HerPlaNS}). The simultaneous and independent discovery of OH$^+$ in two other PNe in the SPIRE spectra is also reported in this volume \citep[see ][]{Etxaluze_etal_2014}. All five PNe (\object{NGC 6445}, \object{NGC 6720}, and \object{NGC 6781} in \textit{HerPlaNS}, as well as \object{NGC 7293} and \object{NGC 6853}) are molecule rich, with ring-like or torus-like structures and hot central stars ($T_\mathrm{eff} >$~100\,000~K). The OH$^+$ emission is most likely due to excitation in a PDR. Although other factors such as high density and low C/O ratio may also play a role in the enhancement of the OH$^+$ emission, the fact that we do not detect OH$^+$ in objects with $T_\mathrm{eff} <$~100\,000~K suggests that the hardness of the ionising central star spectra (i.e. the production of soft X-rays, $\sim$100--300 eV) could be an important factor in the production of OH$^+$ emission in PNe.
The incompressibility (compression modulus) $K_{\rm 0}$ of infinite symmetric nuclear matter at saturation density has become one of the major constraints on mean-field models of nuclear many-body systems as well as of models of high density matter in astrophysical objects and heavy-ion collisions. It is usually extracted from data on the Giant Monopole Resonance (GMR) or calculated using theoretical models. We present a comprehensive re-analysis of recent data on GMR energies in even-even $^{\rm 112-124}$Sn and $^{\rm 106,100-116}$Cd and earlier data on 58 $\le$ A $\le$ 208 nuclei. The incompressibility of finite nuclei $K_{\rm A}$ is calculated from experimental GMR energies and expressed in terms of $A^{\rm -1/3}$ and the asymmetry parameter $\beta$ = (N-Z)/A as a leptodermous expansion with volume, surface, isospin and Coulomb coefficients $K_{\rm vol}$, $K_{\rm surf}$, $K_\tau$ and $K_{\rm coul}$. Only data consistent with the scaling approximation, leading to a fast converging leptodermous expansion, with negligible higher-order-term contributions to $K_{\rm A}$, were used in the present analysis. \textit{Assuming} that the volume coefficient $K_{\rm vol}$ is identified with $K_{\rm 0}$, the $K_{\rm coul}$ = -(5.2 $\pm$ 0.7) MeV and the contribution from the curvature term K$_{\rm curv}$A$^{\rm -2/3}$ in the expansion is neglected, compelling evidence is found for $K_{\rm 0}$ to be in the range 250 $ < K_{\rm 0} < $ 315 MeV, the ratio of the surface and volume coefficients $c = K_{\rm surf}/K_{\rm vol}$ to be between -2.4 and -1.6 and $K_{\rm \tau}$ between -840 and -350 MeV. In addition, estimation of the volume and surface parts of the isospin coefficient $K_\tau$, $K_{\rm \tau,v}$ and $K_{\rm \tau,s}$, is presented. We show that the generally accepted value of $K_{\rm 0}$ = (240 $\pm$ 20) MeV can be obtained from the fits provided $c \sim$ -1, as predicted by the majority of mean-field models.
However, the fits are significantly improved if $c$ is allowed to vary, leading to a range of $K_{\rm 0}$, extended to higher values. The results demonstrate the importance of nuclear surface properties in determination of $K_{\rm 0}$ from fits to the leptodermous expansion of $K_{\rm A}$. A self-consistent simple (toy) model has been developed, which shows that the density dependence of the surface diffuseness of a vibrating nucleus plays a major role in determination of the ratio K$_{\rm surf}/K_{\rm vol}$ and yields predictions consistent with our findings.

\section{Introduction}

The incompressibility (compression modulus) K$_{\rm 0}$ of infinite symmetric nuclear matter (SNM) at saturation density has become one of the major constraints on mean-field models of nuclear many-body systems. Although infinite SNM does not exist in nature, its empirical properties, such as saturation density and saturation energy, are rather well established (see e.g. \cite{dutra2012} and references therein). Other quantities of interest, such as the symmetry energy and its slope at saturation density \cite{tsang2012} and the compressibility modulus, are much less constrained and are the subject of continued study. Traditionally, the experimental source of information on K$_{\rm 0}$ has been the Giant Monopole Resonance (GMR). A relatively large amount of data on GMR energies has been collected over the years, with developments in experimental technique followed by more sophisticated and accurate data analysis. Alongside analysis and interpretation of GMR data which, admittedly, have some limitations, considerable effort has been put into theoretical calculation of $K_{\rm 0}$. The main model frameworks employed have been non-relativistic Hartree-Fock (HF) and relativistic mean-field (RMF) models with various effective interactions, extended beyond mean field by (Quasiparticle) Random-Phase approximation [(Q)RPA], and different variants of the liquid drop model.
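For orientation, the connection between the measured GMR energy and the expansion quoted in the abstract can be summarized by the standard scaling-model relations (a sketch, not an equation reproduced from this paper):

```latex
\begin{equation}
K_{\rm A} = \frac{m \langle r^2 \rangle}{\hbar^2}\, E_{\rm GMR}^{2},
\qquad
K_{\rm A} \simeq K_{\rm vol} + K_{\rm surf}\, A^{-1/3}
          + K_{\tau}\, \beta^{2} + K_{\rm coul}\, Z^{2} A^{-4/3},
\end{equation}
```

where $m$ is the nucleon mass, $\langle r^2 \rangle$ the mean square radius, $\beta$ = (N-Z)/A, and $c = K_{\rm surf}/K_{\rm vol}$.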
We summarize in Table~\ref{tab:surv} a representative selection of results of such calculations. Since the early 1960s, theoretical predictions of the compression modulus have fallen into three classes. The first comprises models based on so-called `realistic' potentials with parameters fitted to data on free nucleon-nucleon scattering (phase-shifts, effective ranges) and properties of the deuteron \cite{falk1961, bethe1971}, and the second models using effective density dependent nucleon-nucleon interactions, fitted to data on (doubly) closed shell nuclei and saturation properties of nuclear matter \cite{brink1967, vautherin1972, beiner1975, gogny1975, koehler1976}. The third class of models utilizes the semi-empirical mass formula and its development to the liquid drop model and later the droplet model and its variants \cite{myers1966, myers1969, myers1974,myers1990}. `Realistic' models predicted systematically lower values of the incompressibility (100 - 215 MeV) whereas models with effective interactions, mainly of the Skyrme type, predicted a wide range of higher values, up to 380 MeV. The empirical droplet-type models showed limited sensitivity to the value of $K_{\rm 0}$, which has been used as a chosen input parameter rather than as a variable obtainable from the fit to atomic masses \cite{moller1995, moller2012}. The preference of the early years was clearly for results of the `realistic' models, which were seen as more fundamental. The first (to our knowledge) use of experimental data on GMR energies, in $^{\rm 40}$Ca, $^{\rm 90}$Zr and $^{\rm 208}$Pb, taken from an unpublished report by Marty et al. \cite{marty1975}, was performed by Blaizot et al. \cite{blaizot1976}, who determined $K_{\rm 0}$ = (210 $\pm$ 30) MeV. \begingroup \pagestyle{headings} \squeezetable \begin{longtable*}{llll} \caption{\label{tab:surv}K$_{\rm 0}$ as calculated in selected representative theoretical approaches in chronological order.
(S)HF stands for (Skyrme)Hartree-Fock, HFB for Hartree-Fock-Bogoliubov, (Q)RPA for (Quasiparticle) Random Phase Approximation, GCM Generator Coordinate Method, FRDM Finite Range Droplet Model, HB Hartree-Bogoliubov, PC Point coupling, EDF Energy Density Functional. All entries are in MeV. For more detail see text and references therein.}\\ \hline\hline \multicolumn{1}{l}{K$_{\rm 0}$} & \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{Data} & \multicolumn{1}{c}{Reference}\\ \hline \endfirsthead \multicolumn{4}{c}% {\tablename\ \thetable{} -- continued from previous page}\\ \hline\hline \multicolumn{1}{l}{K$_{\rm 0}$} & \multicolumn{1}{c}{Method} & \multicolumn{1}{c}{Data} & \multicolumn{1}{c}{Reference}\\ \hline \endhead \hline \endfoot \hline\hline \endlastfoot 214 & Puff-Martin model & Singlet and triplet scattering lengths and & Falk\&Wilets 1961 \cite{falk1961} \\ & Yamaguchi potential & effective ranges; deuteron binding energy; & \\ & & singlet phase shifts at 310 MeV. & \\ 172 - 302 & Various early models & & cited in \cite{falk1961} \\ 150 - 380 & HF + simple 2-body potentials & Properties of light nuclei; & Brink\&Boecker 1967 \cite{brink1967} \\ & & saturation properties of SNM. & \\ 295 & Thomas-Fermi and droplet models & Nuclear masses. & Myers\&Swiatecki 1969 \cite{myers1969} \\ 100-200 & Various (realistic) models & Data prior to 1971 see \cite{bethe1971}. & Bethe 1971 \cite{bethe1971} \\ &HF: Skyrme & Binding energy and charge radius: & Vautherin\&Brink 1972 \cite{vautherin1972} \\ 370(a) & SI (a) & $^{\rm 16}$O and $^{\rm 208}$Pb; & \\ 342(b) & SII (b) & saturation properties of SNM; & \\ & & symmetry energy; & \\ & & spin-orbit splitting: 1p levels in $^{\rm 16}$O. & \\ 240 & Droplet model for arbitrary shape &Nuclear masses; fission barriers; & Myers\&Swiatecki 1974 \cite{myers1974} \\ & & K$_{\rm 0}$ given in the list of preliminary & \\ & & input parameters. & \\ 306-364 & SHF: SIII-SVI & Binding energies and charge radii: & Beiner et al. 
1975 \cite{beiner1975} \\ & & $^{\rm 16}$O, $^{\rm 40}$Ca, $^{\rm 48}$Ca, $^{\rm 56}$Ni, & \\ & & $^{\rm 90}$Zr, $^{\rm 140}$Ce, $^{\rm 208}$Pb. & \\ 228 & HFB: Gogny D1 & Properties of $^{\rm 16}$O and $^{\rm 90}$Zr; & Gogny 1975 \cite{gogny1975} \\ & & spin-orbit splitting: 1d and 1p levels in $^{\rm 16}$O; & \\ & & 2s neutron and proton levels in $^{\rm 48}$Ca. & \\ & & saturation properties of SNM; symmetry energy. & \\ 263 & SHF: Ska & Coefficients in the semi-empirical mass & Koehler 1976 \cite{koehler1976} \\ & & formula \cite{myers1969}. & \\ 180 - 240 & HF + RPA: B1, D1, Ska, SIII, SIV & E$_{\rm GMR}$: $^{\rm 40}$Ca, $^{\rm 90}$Zr,$^{\rm 208}$Pb. & Blaizot\&Grammaticos 1976 \cite{blaizot1976} \\ 200 - 240 & Expansion of $K_{\rm A}$; & $^{\rm 16}$O, $^{\rm 40}$Ca,$^{\rm 90}$Zr,$^{\rm 208}$Pb. & Treiner et al. 1981 \cite{treiner1981} \\ & asymptotic RPA sum rules & & \\ 275 - 325 & Expansion of $K_{\rm A}$ & E$_{\rm GMR}$: $^{\rm 24}$Mg, $^{\rm {112-124}}$Sn, $^{\rm {144-152}}$Sm, $^{\rm 208}$Pb. & Sharma et al. 1988 \cite{sharma1988} \\ 301 & Thomas-Fermi Model: & Ground state nuclear properties; neutron matter; & Myers\&Swiatecki 1990 \cite{myers1990} \\ & Seyler-Blanchard effective & diffuseness of nuclear density distributions; & \\ & interaction & parameters of the optical potential. & \\ 200 - 350 & Expansion of $K_{\rm A}$ & All E$_{\rm GMR}$ data available in 1993. & Shlomo\&Youngblood 1993 \cite{shlomo1993} \\ 280 - 310 & Constrained RMF+GCM: & E$_{\rm GMR}$: $^{\rm 40}$Ca,$^{\rm 90}$Zr, $^{\rm 208}$Pb. & Stoitsov et al. 1994, \cite{stoitsov1994} \\ & NL1, NL-SH, NL2, HS, L1 & & \\ 240 & FRDM & Ground state atomic masses; fission barriers; & Moller\&Nix 1995 \cite{moller1995} \\ & & low sensitivity to $K_{\rm 0}$ - cannot rule out higher values. & \\ 210 - 220 & HF(HFB)+RPA: Gogny D1S, D1, & E$_{\rm GMR}$: $^{\rm 90}$Zr, $^{\rm 116}$Sn,$^{\rm 144}$Sm, $^{\rm 208}$Pb. & Blaizot et al. 
1995 \cite{blaizot1995} \\ & D250, D260, D280, D300 & & \\ 200 - 230 & SHF + BCS; RPA: & Masses and charge radii: $^{\rm 16}$O, $^{\rm 40,48}$Ca,$^{\rm 90}$Zr, & Farine et al. 1997 \cite{farine1997} \\ & SkK180, SkK200, SkK220, & $^{\rm 112-124,132}$Sn, $^{\rm 144}$Sm,$^{\rm 208}$Pb. & \\ & SkK240, SkKM & & \\ 250 - 270 & Time dependent RMF: & E$_{\rm GMR}$: $^{\rm 40}$Ca, $^{\rm 90}$Zr; $^{\rm 114}$Sn, $^{\rm 208}$Pb. & Vretenar et al. 1997 \cite{vretenar1997} \\ & NL1, NL3; & & \\ & Constrained RMF+GCM: & & \\ & NL1, NL3, NL-SH, NL2 & & \\ 225 - 236 & Comparison with \cite{blaizot1995}; & E$_{\rm GMR}$: $^{\rm 40}$Ca,$^{\rm 90}$Zr,$^{\rm 116}$Sn, $^{\rm 144}$Sm, $^{\rm 208}$Pb. & Youngblood et al. 1999 \cite{youngblood1999} \\ & E0 strength distribution & & \\ 200 - 240 & EDF scaling approximation to GMR & Nuclear masses and E$_{\rm GMR}$ data on & Chung et al. 1999 \cite{chung1999} \\ & & 18 spherical nuclei with 89 $<$ A $<$ 209. & \\ 268 - 308 & Expansion of $K_{\rm A}$ & Nuclear masses. & Satpathy et al. \cite{satpathy1999} \\ 240 - 275 & RMF: family of interactions; & E$_{\rm GMR}$: $^{\rm 208}$Pb. & Piekarewicz 2002 \cite{piekarewicz2002} \\ & SHF + RPA: & Binding energies and charge and neutron radii: & Agrawal et al. 2003 \cite{agrawal2003} \\ 255(a) & SK255(a) & $^{\rm 16}$O,$^{\rm 40}$Ca,$^{\rm 48}$Ca,$^{\rm 90}$Zr, & \\ 272(b) & SK272(b) & $^{\rm 116}$Sn,$^{\rm 132}$Sn,$^{\rm 208}$Pb; & \\ & & E$_{\rm GMR}$: $^{\rm 90}$Zr,$^{\rm 116}$Sn,$^{\rm 144}$Sm,$^{\rm 208}$Pb; & \\ & & RMF NL3 used as `experimental data'. & \\ 230 - 250 & SHF + RPA: & Properties of infinite nuclear matter; & Colo et al. 2004 \cite{colo2004} \\ & & binding energies and charge radii: & \\ & over 40 parameter sets & $^{\rm 40,48}$Ca, $^{\rm 56}$Ni, $^{\rm 208}$Pb; & \\ & & binding energy of $^{\rm 132}$Sn; & \\ & & spin-orbit splitting of the neutron 3p shell in $^{\rm 208}$Pb; & \\ & & surface energy in the ETF approximation with SkM*.
& \\ & RMF (HB) with DD & Properties of nuclear matter; & Lalazissis et al. 2005 \cite{lalazissis2005} \\ & meson-nucleon coupling: & nuclear binding energies; charge radii; & \\ 245 (a) & DD-ME1(a) & differences between neutron and proton & \\ 251 (b) & DD-ME2 (b) & density distributions for 18 nuclei. & \\ 220 - 260 & review & E$_{\rm GMR}$. & Shlomo et al. \cite{shlomo2006} \\ 241 & RMF (OME) + PC & OME potentials: radial dependence of the & Hirose et al. 2007 \cite{ring2007} \\ & & non-relativistic G-matrix potentials; & \\ & & PC: EOS of symmetric matter as calculated & \\ & & with the Gogny force GT2. & \\ & SHFB+QRPA+DD pairing: & Volume, surface and mixed pairing; & Colo et al. 2008 \cite{colo2008} \\ 230 - 240(a) & SLy5 (a) & E$_{\rm GMR}$: $^{\rm 208}$Pb (a); & \\ ~220(b) & SkM* (b) & E$_{\rm GMR}$: $^{\rm 112-120}$Sn (b). & \\ 230 - 236 & RMF (BSP, IUFSU,IUFSU*) & Binding energies and charge radii for nuclei & Agrawal et al. 2012 \cite{agrawal2012} \\ & & along several isotopic and isotonic chains; & \\ & & E$_{\rm GMR}$: $^{\rm 90}$Zr, $^{\rm 208}$Pb; & \\ & & properties of dilute neutron matter; & \\ & & bounds on the equations of state of the & \\ & & symmetric and asymmetric nuclear matter & \\ & & at supra-nuclear densities. & \\ 210 - 270 & SHF, RMF: & E$_{\rm GMR}$. & Sagawa 2012 \cite{sagawa2012} \\ & variety of interactions & & \\ & SHFB+QRPA: & & Cao et al. 2012 \cite{cao2012}\\ 217 (a) & SkM* (a) & E$_{\rm GMR}$: Sn isotopes (a); & \\ 230 (b) & SLy5(b) & E$_{\rm GMR}$: Cd, Pb isotopes (b). & \\ \hline \end{longtable*} \pagebreak \endgroup They used theoretical values of $K_{\rm 0}$ calculated with B1 \cite{brink1967}, D1 \cite{gogny1975}, Ska \cite{koehler1976} and SIII and SIV \cite{beiner1975} effective forces in a Hartree-Fock + RPA model. This was welcomed as a step in the right direction, bringing a mean-field result in line with the `realistic' predictions. 
We will return to that analysis later in this paper (see Sec.~\ref{micro}) and show that modern calculations and current data move the limits on $K_{\rm 0}$ towards higher values. In later years theoretical calculations of $K_{\rm 0}$ developed in two basic directions. These were, first, microscopic calculations based on self-consistent methods with density dependent effective nucleon interactions, both non-relativistic and relativistic, and second, macroscopic models in which the incompressibility of a finite nucleus $K_{\rm A}$ is parameterized in the form of a leptodermous expansion in powers of A$^{\rm -1/3}$. The fundamental difference between the two approaches is that microscopic models yield variables describing vibrating nuclei, such as $K_{\rm 0}$, dependent on the parameters of the effective nucleon interaction. The description of the nuclear surface is not well developed in these models and volume and surface effects cannot be clearly separated. The macroscopic expansion contains individual contributions from the volume, surface, curvature, isospin and Coulomb terms which, in principle, can be obtained directly from a fit to values of $K_{\rm A}$, extracted from experimental GMR energies. $K_{\rm 0}$ is then set equal to the leading term in the expansion, the volume term $K_{\rm vol}$. The usual criticism of macroscopic models is that they do not describe vibrating nuclei adequately because they do not include effects such as anharmonic vibrations, and that the values of the coefficients of the leptodermous expansion are dependent on the accuracy and methods of extraction of GMR energies, and thus $K_{\rm A}$, from raw experimental data \cite{blaizot1980}. The main objection is that the coefficients of the leptodermous expansion are correlated \cite{shlomo1993} and that all the terms in the expansion cannot be determined uniquely. More generally, Satpathy et al.
\cite{satpathy1999} pointed out that the semi-empirical mass formula, the basis for expansion of the incompressibility of a finite nucleus, has its problems and the form of the leptodermous expansion of $K_{\rm A}$ is not uniquely determined. Since the late 1970s, two ways of modeling nuclear matter density under compression have been singled out and extensively studied, the so-called scaling and constrained approximations \cite{blaizot1980, jennings1980, treiner1981}. The difference between the two concepts has a profound consequence for the behavior of the leptodermous expansion. In the constrained approximation the leptodermous expansion converges slowly, and higher-order terms in A$^{\rm -1/3}$, in particular the curvature term depending on A$^{\rm -2/3}$, cannot be neglected. Unique determination of the coefficients in the expansion is indeed difficult and the extracted values may contain unwanted contributions from unresolved correlations. However, as was shown by Treiner et al. \cite{treiner1981}, in the scaling approximation the transition density clearly separates the volume from the surface region in a vibrating nucleus. The leptodermous expansion converges fast, higher-order terms are negligible and the coefficients reflect properties of real nuclei. Thus the scaling model has been recommended for use in analysis of experimental GMR data, as is done in the first part of this paper. Extensive discussion of the pros and cons of the macroscopic and microscopic methods has been given in several papers (see e.g. \cite{blaizot1980, blaizot1981, treiner1981, blaizot1989, blaizot1995}). Although the general tendency has been to prefer the microscopic approach, a fundamental problem emerged there as well. The non-relativistic models, mainly using the Skyrme interaction, systematically predicted lower values of $K_{\rm 0}$, around 210 - 250 MeV (see e.g. \cite{blaizot1995, farine1997, chung1999, colo2004}), but the relativistic models yielded higher values (see e.g.
\cite{sharma1988, sharma1989, sharma1989a, stoitsov1994, vretenar1997, piekarewicz2002, sharma2009}). Re-analysis of experimental data available in 1989 using the leptodermous expansion by Sharma et al. \cite{sharma1989, sharma1989a} showed that the best fit was achieved for $K_{\rm 0} \sim$ (300 $\pm$ 25) MeV, thus supporting predictions of relativistic models. Currently, a general consensus has developed to adopt a lower value of $K_{\rm 0}$, $K_{\rm 0}$ = (240 $\pm$ 20) MeV (e.g. \cite{shlomo2006}), which has been used as an initial condition/requirement in most models. Skyrme effective interactions were constructed to reproduce this `canonical' value and attempts were made to reconcile \cite{agrawal2004} and modify effective Lagrangians \cite{agrawal2012} in relativistic models to comply with this adopted value. These efforts, however, indicate the main weakness of the microscopic approaches. The effective interactions have a flexible form and too many variable parameters so that modifications can be introduced which yield a desired result but do not advance understanding of the underlying physics. The most recent illustration of the problem can be found in \cite{cao2012}, where even the state-of-the-art HFB+QRPA calculation did not succeed in reproducing GMR energies in Sn, Cd and Pb nuclei using the same Skyrme parameterization. The dependence of the calculated value of $K_{\rm 0}$ on the choice of the microscopic model is obvious from examination of Table~\ref{tab:surv}. In parallel with $K_{\rm 0}$, investigation of the isospin incompressibility $K_\tau$, which quantifies the contribution from the neutron-proton difference to the incompressibility of a finite nucleus $K_{\rm A}$, has been performed. We introduce here the term ``isospin'' incompressibility to avoid confusion with the ``symmetry'' incompressibility - the name sometimes used for the curvature of the symmetry energy at saturation density $K_{\rm sym}$.
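The empirical approach discussed above amounts to a linear least-squares problem: once GMR energies are converted to values of $K_{\rm A}$, each coefficient of the leptodermous expansion multiplies a known function of $A$ and $Z$. A minimal sketch with synthetic input (the nuclei and coefficient values are invented for illustration; the fits in this paper use MINUIT and measured GMR energies):

```python
import numpy as np

def fit_leptodermous(A, Z, K_A):
    """Least-squares fit of
    K_A = K_vol + K_surf*A**(-1/3) + K_tau*beta**2 + K_coul*Z**2*A**(-4/3)."""
    A = np.asarray(A, float)
    Z = np.asarray(Z, float)
    beta = (A - 2.0 * Z) / A
    design = np.column_stack([np.ones_like(A), A**(-1.0/3.0),
                              beta**2, Z**2 * A**(-4.0/3.0)])
    coef, *_ = np.linalg.lstsq(design, np.asarray(K_A, float), rcond=None)
    return coef  # K_vol, K_surf, K_tau, K_coul

# Synthetic K_A generated from assumed coefficients, then recovered by the fit.
A = np.array([112, 116, 124, 90, 144, 208, 106, 116], float)
Z = np.array([50, 50, 50, 40, 62, 82, 48, 48], float)
true = np.array([270.0, -459.0, -550.0, -5.2])  # K_vol, K_surf (c = -1.7), K_tau, K_coul
beta = (A - 2.0 * Z) / A
K_A = (true[0] + true[1] * A**(-1.0/3.0)
       + true[2] * beta**2 + true[3] * Z**2 * A**(-4.0/3.0))
K_vol, K_surf, K_tau, K_coul = fit_leptodermous(A, Z, K_A)
```

With noise-free input the fit returns the assumed coefficients exactly; with real data the strong correlations between the basis functions, discussed above, dominate the uncertainties.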
The coefficient $K_\tau$ can be obtained in either the microscopic or the empirical approach \cite{blaizot1981, treiner1981, nayak1990, patra2002, sagawa2007, sharma2009}. Its recent extraction from empirical analysis of GMR data on Sn isotopes \cite{li2007, li2010} attracted a lot of attention, as the value of $K_\tau$ was larger than predicted by most of the microscopic models. Determination of $K_\tau$ from experimental data on GMR is complicated by the fact that, as with the volume and surface contributions to $K_{\rm A}$, it also includes volume and surface terms, and the latter cannot be easily evaluated in microscopic models \cite{blaizot1981, treiner1981, nayak1990, patra2002}. In this paper we survey existing data on GMR energies in nuclei with A $\ge$ 56 and use them to set limits on $K_{\rm 0}$ and the isospin incompressibility coefficient $K_\tau$, using the macroscopic approach in the scaling approximation and employing a new method of analysis. In Sec.~\ref{basics} we present the basic expressions and the data selection for the analysis, followed by Sec.~\ref{anal} containing the main results. A schematic theoretical model of the ratio of the volume and surface contributions to $K_{\rm A}$ is presented in Sec.~\ref{toy}. Microscopic models are commented on in Sec.~\ref{micro}. Discussion of results and conclusions form Sec.~\ref{concl}. | \begin{table*} \caption{\label{tab:summary} Summary of the values of K$_{\rm 0}$, K$_\tau$ and the ratio of the volume and surface incompressibility $c$, as obtained from the MINUIT fit to data sets RCNP-E, GF-E, TAMU0-E and the M variant of these data sets. Results for each case are given for both matter and charge radii and both values of the error in $K_{\rm coul}$.
} \begin{ruledtabular} \begin{tabular}{lcccccc}
\multicolumn{7}{l}{$\Delta K_{\rm coul}$ = 0.7} \\ \hline
 & \multicolumn{3}{c}{matter radii} & \multicolumn{3}{c}{charge radii} \\ \hline
 & $K_0$ & $K_\tau$ & c & $K_0$ & $K_\tau$ & c \\ \hline
RCNP-E & 254(5) & -664(121) & -1.63 & 261(5) & -632(116) & -1.59 \\
RCNP-M & 276(6) & -700(138) & -1.88 & 274(6) & -644(135) & -1.74 \\
GF-E & 251(5) & -476(123) & -1.80 & 252(4) & -392(107) & -1.71 \\
GF-M & 306(9) & -584(169) & -2.35 & 303(8) & -500(173) & -2.24 \\
TAMU0-E & 278(4) & -728(90) & -2.08 & 288(4) & -716(84) & -2.05 \\
TAMU0-M & 347(5) & -835(101) & -2.60 & 344(6) & -800(104) & -2.49 \\ \hline
\multicolumn{7}{l}{$\Delta K_{\rm coul}$ = 2.1} \\ \hline
 & \multicolumn{3}{c}{matter radii} & \multicolumn{3}{c}{charge radii} \\ \hline
 & $K_0$ & $K_\tau$ & c & $K_0$ & $K_\tau$ & c \\ \hline
RCNP-E & 252(8) & -688(235) & -1.58 & 260(8) & -648(228) & -1.56 \\
RCNP-M & 264(13) & -664(305) & -1.75 & 260(12) & -604(310) & -1.58 \\
GF-E & 249(9) & -504(240) & -1.77 & 253(8) & -414(227) & -1.72 \\
GF-M & 306(18) & -563(365) & -2.35 & 304(18) & -488(365) & -2.25 \\
TAMU0-E & 279(8) & -802(198) & -2.05 & 287(9) & -760(223) & -2.03 \\
TAMU0-M & 360(14) & -903(252) & -2.67 & 360(15) & -856(272) & -2.59 \\ \hline
\end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:ktvs} $K_{\tau,v}$ and $K_{\tau,s}$ as determined in different model approaches. All entries are in MeV.
For more detail see text and the references therein.} \begin{ruledtabular} \begin{tabular}{lrlcl}
$K_{\tau,v}$ & $K_{\tau,s}$ & Method & Force & Ref.\\ \hline
-420 & 850 & fit to $K_{\rm A}$ (RPA) & SIII & \cite{blaizot1981} \\
-508 & 1390 & & SIV & \\
-444 & 630 & & Ska & \\
-420 & 230 & fit to $K_{\rm A}$ (scaling) & SIII & \\
-508 & 670 & & SIV & \\
-444 & 640 & & Ska & \\ \hline
-319 & -3540 & fit to $K_{\rm A}$ (constrained) & SIII & \cite{treiner1981} \\
-251 & -1340 & & SkM & \\
-456 & 420 & fit to $K_{\rm A}$ (scaling) & SIII & \\
-359 & 435 & & SkM & \\ \hline
-349 & 497 & Extended Thomas-Fermi & SkM* & \cite{nayak1990} \\
-338 & 313 & & RATP & \\
-441 & 875 & & Ska & \\
-456 & 383 & & S3 & \\ \hline
-676 & 1951 & RMF & NL1 & \cite{patra2002} \\
-690 & 1754 & RMF & NL3 & \\
-794 & 1716 & RMF & NL-SH & \\ \hline
-460(30) & 410(110) & fit to $K_{\rm A}$ from QRPA+HFB+sep. pair. & SLy4 & \cite{vesely2012} \\
-510(30) & 570(120) & & UNEDF0 & \\
-500(30) & 560(100) & fit to $K_{\rm A}$ from QRPA+HFB+z.r. pair. & SLy4 & \\
-550(30) & 740(100) & & UNEDF0 & \\
\label{KvsKss}
\end{tabular} \end{ruledtabular} \end{table*} and -1600 $ < K_{\rm \tau,s} <$ 1600 MeV, taking into account that $K_{\rm \tau,v}$ is expected to be negative in line with microscopic calculations. The second constraint was constructed assuming that (\ref{ktau}) applies and that the expansion in terms of $A^{-1/3}$ \textit{converges at a reasonable rate}, i.e. no higher order terms are significant. The question of what is reasonable can only be answered in a somewhat arbitrary way, as there is a large spread in the values of $K_{\rm \tau,v}$ and $K_{\rm \tau,s}$ calculated microscopically (see Table~\ref{tab:ktvs}). We looked at two scenarios: (i) the magnitude of the two coefficients is almost the same \cite{treiner1981, nayak1990} and (ii) $K_{\rm \tau,s}$ is roughly three times larger than $K_{\rm \tau,v}$ \cite{patra2002}.
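The size of the surface term relative to the volume term in the two scenarios is easy to evaluate numerically. The sketch below (an illustration, not part of the original analysis) uses the average mass A = 100; for scenario (i) equal magnitudes are assumed, and for scenario (ii) the RMF (NL3) values from the table above, $K_{\tau,v} = -690$ MeV and $K_{\tau,s} = 1754$ MeV, are taken as representative:

```python
# Sketch: relative size of the surface isospin term,
# |K_tau,s * A^(-1/3) / K_tau,v|, at the average mass A = 100.

A = 100
x = A ** (-1.0 / 3.0)              # A^(-1/3), about 0.215

# Scenario (i): |K_tau,s| ~ |K_tau,v|  ->  ratio ~ A^(-1/3)
ratio_i = x

# Scenario (ii): RMF (NL3) values from the table above
ktau_v, ktau_s = -690.0, 1754.0    # MeV
ratio_ii = abs(ktau_s * x / ktau_v)

print(round(ratio_i, 2), round(ratio_ii, 2))
```

This gives roughly 0.22 for scenario (i) and 0.55 for scenario (ii), consistent with the rounded values of 0.2 and 0.5 quoted in the text.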
Taking the average mass number A=100, we obtain for the ratio (\ref{limit}) 0.2 for the former and 0.5 for the latter. Taking the higher value of the ratio, we choose to allow for a slower convergence of the expansion (\ref{ktau}) \begin{equation} \frac{K_{\tau,s}A^{-1/3}}{K_{\tau,v}} \le 0.5. \label{limit} \end{equation} Simultaneous application of (\ref{ktau}) and (\ref{limit}) yielded the results presented in Table~\ref{tab:ktvs-res}. We conclude that the most likely limits on $K_{\rm \tau,v}$ are -810 $ < K_{\rm \tau,v} <$ -370 MeV. Limits on the surface contribution to the isospin incompressibility are -1020 $\le K_{\tau,s} \le$ 160 MeV. We note that another possibility to determine $K_{\rm \tau,v}$ and $K_{\rm \tau,s}$ would be to fix $K_{\rm \tau,v}$ to a theoretical value, for example $K_{\rm \tau,v}$ = -(370 $\pm$ 120) MeV \cite{chen2009}. However, such values are naturally model dependent - the heavy ion collision data are no exception. The main objective of our paper is to explore what can be learned from the experimental data (GMR energies) alone, using only the assumption that the leptodermous expansion is valid and converges fast. \subsection{The curvature term} We recall that (\ref{kali}) is an expansion in terms of powers of $A^{\rm -1/3}$. The second order term, which depends on $(A^{-1/3})^{\rm 2}$, is called the curvature term. The limited range of A$^{\rm -1/3}$ considered in this work meant that no contribution of order higher than linear could be identified outside experimental error. As an example, linear and quadratic fits to the experimental $K_A$ as a function of A$^{\rm -1/3}$ are illustrated in Fig.~\ref{fig:curv} for the RCNP-E set. A frequently raised objection to analysis of GMR data using the leptodermous formula (\ref{kali}) is that the omission of a very poorly known curvature term may lead to a substantial change in the surface term. Earlier work allows us to estimate this effect. Treiner et al.
\cite{treiner1981} calculated the $K_{\rm curv}$ coefficient microscopically in the scaling model using the SIII and SkM Skyrme interactions. They found it to be positive and of the order of 300 MeV. Sharma et al. \cite{sharma1989} also examined the \begin{table} \caption{\label{tab:ktvs-res} K$_{\tau,v}$ and K$_{\tau,s}$ values (in MeV). Matter radii and $\Delta$K$_{\rm coul}$ = 0.7 MeV were used in the calculation. $\overline{A^{\rm -1/3}}$ was taken to be 0.2 in the mass region considered. For more detail see text and the references therein.} \vspace*{5pt} \begin{ruledtabular} \begin{tabular}{lcccccc}
 & K$_{\tau,v}$ & K$_{\tau,s}$ & $\overline{A^{\rm -1/3}}$ $K_{\tau,s}$ & K$_\tau$ & ratio & $\sigma$ \\ \hline \hline
RCNP-E & -500.0 & -950.0 & -190.0 & -690.0 & 0.38 & 7.47 \\
RCNP-M & -620.0 & -410.0 & -82.0 & -702.0 & 0.13 & 4.02 \\
GF-E & -370.0 & -700.0 & -140.0 & -510.0 & 0.38 & 5.17 \\
GF-M & -610.0 & 160.0 & 32.0 & -578.0 & -0.053 & 0.49 \\
TAMU0-E & -550 & -1020.0 & -204.0 & -754.0 & 0.37 & 58 \\
TAMU0-M & -810.0 & -170 & -34.0 & -844.0 & 0.042 & 52.0 \\
\end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centerline{\epsfig{file=fig10.eps,height=7cm,width=9cm}} \caption{\label{fig:curv} (Color on-line) Linear (solid) and quadratic (dashed) fits to experimental $K_{\rm A}$ as a function of $A^{\rm -1/3}$ for data set RCNP-E.} \end{figure} consequence of including a curvature term, varied the coefficient between 350 and 400 MeV, and found only a 1(4)\% change in $K_{\rm vol}$ ($K_{\rm surf}$), with $K_{\tau}$ almost unaffected. They adopted a value $K_{\rm curv}$ = 375 MeV which was kept constant during their final fits. If we accept +350 MeV as the best estimate of $K_{\rm curv}$, the size of the curvature term is 24 MeV at A = 56 and 10 MeV at A = 208. At the same A values, with $K_{\rm surf}$ = 500 MeV, fits neglecting the curvature term give surface term values of 130 MeV and 85 MeV, respectively.
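These magnitude estimates are straightforward to reproduce (an illustrative sketch using the values quoted in the text, $K_{\rm curv} = 350$ MeV and $K_{\rm surf} = 500$ MeV):

```python
# Sketch: curvature term K_curv * A^(-2/3) versus surface term
# K_surf * A^(-1/3), with K_curv = 350 MeV and K_surf = 500 MeV
# as quoted in the text.

K_CURV, K_SURF = 350.0, 500.0   # MeV

def curvature_term(A):
    return K_CURV * A ** (-2.0 / 3.0)

def surface_term(A):
    return K_SURF * A ** (-1.0 / 3.0)

for A in (56, 208):
    c_t, s_t = curvature_term(A), surface_term(A)
    print(A, round(c_t), round(s_t), round(100 * c_t / s_t))
# A = 56:  ~24 MeV vs ~131 MeV (ratio ~18%)
# A = 208: ~10 MeV vs ~84 MeV  (ratio ~12%)
```

The ratio falls from about 18% at A = 56 to about 12% at A = 208, in line with the $\approx (15 \pm 3)\%$ quoted next.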
The ratio of the curvature to the surface term is thus $\approx$ (15 $\pm$ 3) \%, and inclusion of the curvature term would indeed increase the surface term, but not to any great extent. The ratio $c$ would also decrease by a factor of $\approx$ 1.15, shifting the range from -2.4 $<$ c $<$ -1.6 to -2.8 $<$ c $<$ -1.8, which is not a major change. To further explore the consequence of a range of K$_{\rm curv}$ values, and to illustrate our fitting procedure in detail, we examined the extended equation \begin{eqnarray} \frac{K_{\rm A}}{1 + cA^{-1/3}}- \frac{K_{\rm coul}Z^{\rm 2}A^{\rm -4/3}}{1 + cA^{\rm -1/3}}-\frac{K_{\rm curv}A^{\rm -2/3}}{1 + cA^{-1/3}}& =& \nonumber \\ K_{\rm vol}+K_{\tau} \frac{\beta^{\rm 2}}{1 + cA^{\rm -1/3}}. \label{kanjscurv} \end{eqnarray} and its fit to the RCNP-E data set. Keeping K$_{\rm coul}$ = -(5.2$\pm$0.7) MeV, we first performed the MESH fit in the four-parameter space, stepping K$_{\rm 0}$ in the range 150 to 450 MeV (step 0.1 MeV), K$_{\tau}$ in the range of -900 to -300 MeV (step 0.5 MeV), c in the range of -4 to -0.1 (step 0.01) and K$_{\rm curv}$ in the range of -1600 to 2000 MeV (step 100 MeV). Next we examined the stability of the minimum by making 'slices' across the four-parameter MESH along each parameter axis. The results are shown in Fig.~\ref{fig:curv-slice}, demonstrating that exactly the same minimum is reached in each slice, i.e. the minimum is stable. The errors and the correlation coefficients were obtained in subsequent MINUIT fits in which one of the parameters was set at its minimum value in order to examine the effects of various correlations. Numerical results are given in Table~\ref{tab:curv} (lines A-D). Lastly we performed a full four-parameter fit, varying K$_{\rm 0}$, $c$, K$_{\rm \tau}$ and K$_{\rm curv}$ simultaneously in \begin{figure} \centerline{\epsfig{file=fig11.eps,height=9.0cm,width=7.0cm}} \caption{\label{fig:curv-slice} (Color on-line) MESH fit to RCNP-E data including the curvature term.
Values of $K_{\rm 0}$, $c$, $K_{\tau}$ and $K_{\rm curv}$ as a function of $\sigma$ are displayed, showing a well-defined unique minimum, indicated by the vertical dashed line. For more detail see text.} \end{figure} the MINUIT code (line E in Table~\ref{tab:curv}). The correlation coefficients obtained in all fits are shown in Table~\ref{tab:cor}. The analysis has been repeated without the curvature term, with the results given in lines F-J of Table~\ref{tab:curv} and Table~\ref{tab:cor}. We observe that the least correlated parameter in both cases is K$_{\tau}$, and this level of correlation is somewhat smaller when the curvature term is included in the fit. This feature may \begin{table} \caption{\label{tab:curv} Results of fits to RCNP-E data: fits A-E (including the curvature term) and F-J (without the curvature term). Entries without an error in brackets indicate the parameter kept constant at its minimum value during the fit. For more explanation see text.} \vspace*{10pt} \begin{ruledtabular} \begin{tabular}{cccllc}
 & $\sigma$ & K$_{\rm 0}$ & $c$ & $K_{\tau}$ & K$_{\rm curv}$ \\ \hline
A & 7.110586 & 339 & -3.45(35) & -712(160) & 1689(504) \\
B & 7.110592 & 339(23) & -3.45 & -712(175) & 1682(85) \\
C & 7.110587 & 339(93) & -3.45(1.6) & -712 & 1685(1958) \\
D & 7.110586 & 339(26) & -3.46(67) & -712(176) & 1686 \\
E & 7.110588 & 339(106) & -3.45(1.66) & -712(187) & 1685(2042) \\ \hline
F & 7.875914 & 253.8 & -1.628(0.050) & -662(98) & $-$ \\
G & 7.855915 & 253(5) & -1.628 & -662 (121) & $-$ \\
H & 7.855915 & 254(15) & -1.628(19) & -661.9 & $-$ \\
J & 7.855916 & 253(26) & -1.629(27) & -662(177) & $-$ \\
\end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:cor} Correlation coefficients of parameters in fits shown in Table~\ref{tab:curv}.
K$_{\rm 0}$ (I), $c$ (II), $K_{\tau}$ (III), K$_{\rm curv}$ (IV).} \vspace*{5pt} \begin{ruledtabular} \begin{tabular}{ccllccc}
 & I-II & I-III & I-IV & II-III & II-IV & III-IV \\ \hline
A & $-$ & $-$ & $-$ & 0.842 & 0.995 & 0.790 \\
B & $-$ & 0.870 & 0.783 & $-$ & $-$ & 0.424 \\
C & 0.993 & $-$ & 0.988 & $-$ & 0.999 & $-$ \\
D & 0.837 & 0.833 & $-$ & 0.426 & $-$ & $-$ \\
E & 0.977 & 0.516 & 0.969 & 0.350 & 0.999 & 0.334 \\ \hline
F & $-$ & $-$ & $-$ & 0.878 & $-$ & $-$ \\
G & $-$ & 0.922 & $-$ & $-$ & $-$ & $-$ \\
H & 0.992 & $-$ & $-$ & $-$ & $-$ & $-$ \\
J & 0.983 & 0.831 & $-$ & 0.728 & $-$ & $-$ \\
\end{tabular} \end{ruledtabular} \end{table} be associated with the fact that K$_{\tau}$ does not depend on the mass number A in first order. More generally, the inclusion of the curvature term in the fits does not dramatically influence the correlation between the rest of the parameters varied in a particular fit. Several conclusions can be drawn from Table~\ref{tab:curv} and Table~\ref{tab:cor}. First, the minima obtained in both fits, with and without the curvature term, are each stable. The central values of K$_{\rm 0}$, $c$, K$_{\tau}$ and K$_{\rm curv}$ remain almost constant, and the extent to which they are correlated is only reflected in the errors. Second, the errors in the full fit including the curvature term, due to the insensitivity of the data to this term, are too large to allow practical determination of the four parameters in the expansion (\ref{kanjscurv}). The fits with one parameter kept constant at its minimum value may indicate that the curvature term is likely to be positive, but do not allow deduction of any useful value. For what it is worth, the effect of including the curvature term on K$_{\tau}$ is small (of order 8\%), but the considerable changes of K$_{0}$ (33\%) and c (a factor of 2) take them ever further from the currently adopted values.
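The MESH-style strategy can be sketched on synthetic data. The example below is an illustration only (synthetic nuclei with assumed RCNP-E-like coefficients, not the real data, and a simplified two-stage search rather than the full four-parameter MESH): for a fixed $c$ the model without the curvature term is linear in $K_{\rm vol}$ and $K_{\tau}$, so a grid over $c$ combined with a linear least-squares solve at each node locates the minimum.

```python
# Illustrative sketch (synthetic data, not the paper's code): for
# fixed c the model
#   K_A = K_vol (1 + c A^(-1/3)) + K_tau beta^2 + K_coul Z^2 A^(-4/3)
# is linear in (K_vol, K_tau), so we grid over c (MESH-like) and
# solve a linear least-squares problem at each step.
import numpy as np

rng = np.random.default_rng(0)
K_COUL = -5.2                                        # MeV, as in the text
TRUE = dict(k_vol=254.0, c=-1.63, k_tau=-664.0)      # assumed "truth"

# Synthetic "nuclei" spanning A >= 56 (mass and proton numbers)
A = np.array([56.0, 90.0, 112.0, 116.0, 120.0, 124.0, 144.0, 208.0])
Z = np.array([28.0, 40.0, 50.0, 50.0, 50.0, 50.0, 62.0, 82.0])
beta2 = ((A - 2.0 * Z) / A) ** 2
x = A ** (-1.0 / 3.0)

# Synthetic "measured" K_A with 0.5 MeV Gaussian noise
k_a = (TRUE["k_vol"] * (1.0 + TRUE["c"] * x) + TRUE["k_tau"] * beta2
       + K_COUL * Z ** 2 * A ** (-4.0 / 3.0)
       + rng.normal(0.0, 0.5, A.size))

y = k_a - K_COUL * Z ** 2 * A ** (-4.0 / 3.0)        # remove Coulomb term
best = None
for c in np.arange(-4.0, -0.1, 0.01):                # grid over c
    design = np.column_stack([1.0 + c * x, beta2])
    coef = np.linalg.lstsq(design, y, rcond=None)[0]
    rss = float(np.sum((design @ coef - y) ** 2))
    if best is None or rss < best[0]:
        best = (rss, c, coef[0], coef[1])

rss, c_fit, kvol_fit, ktau_fit = best                # recovered parameters
```

The recovered values cluster around the assumed truth, while the strong K$_{\rm 0}$-$c$ correlation noted in Table~\ref{tab:cor} shows up as an elongated valley in the residual sum of squares along the $c$ direction.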
In other words, according to this analysis, K$_{\rm 0} \sim$ 220 - 240 MeV and $c \sim -1$ cannot be recovered by including the curvature term in the fit. Finally, we note that our adopted method of fitting allows determination of only two-parameter correlation coefficients. It may be interesting to look for many-parameter correlations based on the two-parameter data; however, it is not clear whether any practically useful information would be obtained. We present this analysis as an example of the fitting routines and of the trend in the outcome of the fits when the curvature term is included. We maintain as our main results Table~\ref{tab:summary}, obtained using (\ref{kanjs}), keeping in mind that the value of K$_{\rm 0}$ and the magnitude of $c$ may be even higher. | 14 | 4 | 1404.0744
1404 | 1404.5950_arXiv.txt | It has been claimed recently that massive sterile neutrinos could bring about a new concordance between observations of the cosmic microwave background (CMB), the large-scale structure (LSS) of the Universe, and local measurements of the Hubble constant, $H_0$. We demonstrate that this apparent concordance results from combining datasets which are in significant tension, even within this extended model, possibly indicating remaining systematic biases in the measurements. We further show that this tension remains when the cosmological model is further extended to include significant tensor modes, as suggested by the recent BICEP2 results. Using the Bayesian evidence, we show that the minimal $\Lambda$CDM model is strongly favoured over its neutrino extensions by various combinations of datasets. Robust data combinations yield stringent limits of $\sum m_\nu\lesssim0.3$~eV and $m_{\nu,{\rm sterile}}^{\rm eff} \lesssim 0.3$~eV at $95\%$ CL for the sum of active and sterile neutrinos, respectively. | 14 | 4 | 1404.5950 |
||
1404 | 1404.1953_arXiv.txt | The Palomar Transient Factory (PTF) is a multi-epochal robotic survey of the northern sky that acquires data for the scientific study of transient and variable astrophysical phenomena. The camera and telescope provide for wide-field imaging in optical bands. In the five years of operation since first light on December 13, 2008, images taken with Mould-$R$\/ and SDSS-$g'$ camera filters have been routinely acquired on a nightly basis (weather permitting), and two different $H\alpha$ filters were installed in May 2011 (656 and 663~nm). The PTF image-processing and data-archival program at the Infrared Processing and Analysis Center (IPAC) is tailored to receive and reduce the data, and, from it, generate and preserve astrometrically and photometrically calibrated images, extracted source catalogs, and coadded reference images. Relational databases have been deployed to track these products in operations and the data archive. The fully automated system has benefited by lessons learned from past IPAC projects and comprises advantageous features that are potentially incorporable into other ground-based observatories. Both off-the-shelf and in-house software have been utilized for economy and rapid development. The PTF data archive is curated by the NASA/IPAC Infrared Science Archive (IRSA). A state-of-the-art custom web interface has been deployed for downloading the raw images, processed images, and source catalogs from IRSA. Access to PTF data products is currently limited to an initial public data release (M81, M44, M42, SDSS Stripe 82, and the Kepler Survey Field). It is the intent of the PTF collaboration to release the full PTF data archive when sufficient funding becomes available. | Introduction} The Palomar Transient Factory (PTF) is a robotic image-data-acquisition system whose major hardware components include a 92-megapixel digital camera with changeable filters mounted to the Palomar Samuel Oschin 48-inch Telescope. 
The {\it raison d'\^{e}tre}\/ of PTF is to advance our scientific knowledge of transient and variable astrophysical phenomena. The camera and telescope capacitate wide-field imaging in optical bands, making PTF eminently suitable for conducting a multi-epochal survey. The Mt.-Palomar location of the observatory limits the observations to north of $\approx -30^{\circ}$~in declination. The camera's pixel size on the sky is 1.01 arcseconds. In the five years of operation since first light on December 13, 2008 \citep{ptf}, images taken with Mould-$R$\/ (hereafter~$R$) and SDSS-$g'$ (hereafter~$g$) camera filters have been routinely acquired on a nightly basis (weather permitting), and two different $H\alpha$ filters were installed in May 2011 (656 and 663~nm). \citet{ptf} present an overview of PTF initial results and performance, and \citet{ptf2} give an update after the first year of operation. \citet{rau} describe the specific science cases that enabled the preliminary planning of PTF observations. The PTF project has been very successful in delivering a large scientific return, as evidenced by the many astronomical discoveries from its data; e.g., \citet{sesar}, \citet{arcavi}, and \citet{vaneyken}. As such, it is expected to continue for several more years. This document presents a comprehensive report on the image-processing and data archival system developed for PTF at the Infrared Processing and Analysis Center (IPAC). A simplified diagram of the data and processing flow is given in Figure~\ref{fig:overview}. The IPAC system is fully automated and designed to receive and reduce PTF data, and generate and preserve astrometrically and photometrically calibrated images, extracted source catalogs and coadded reference images. The system has both software and hardware components. 
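As a quick consistency check of the figures quoted above (a back-of-the-envelope sketch; the usable field of view of the real camera differs somewhat because of detector layout), 92 megapixels at 1.01 arcseconds per pixel correspond to roughly 7.2 square degrees on the sky:

```python
# Sketch: approximate PTF field of view from the numbers quoted
# above (92-megapixel camera, 1.01 arcsec per pixel).

N_PIXELS = 92e6
PIXEL_SCALE_ARCSEC = 1.01

pixel_area_deg2 = (PIXEL_SCALE_ARCSEC / 3600.0) ** 2   # deg^2 per pixel
fov_deg2 = N_PIXELS * pixel_area_deg2
print(round(fov_deg2, 2))                              # roughly 7.2 deg^2
```

Such a wide footprint per exposure is what makes the twice-nightly cadence over large sky areas feasible.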
At the top level, it consists of a database and a collection of mostly Perl and some Python and shell scripts that codify the complex tasks required, such as data ingest, image processing and source-catalog generation, product archiving, and metadata delivery to the archive. The PTF data archive is curated by the NASA/IPAC Infrared Science Archive\footnote{http://irsa.ipac.caltech.edu/} (IRSA). An overview of the system has been given by \citet{grillmair}, and the intent of this document is to present a complete description of our system and put forward additional details that heretofore have been generally unavailable. \begin{figure*} \includegraphics[scale=1.0]{f1}% \caption{\label{fig:overview} Data and processing flow for the IPAC-PTF system.} \end{figure*} The software makes use of relational databases that are queryable via structured query language (SQL). The PTF operations database, for brevity, is simply referred to herein as the database. Other databases utilized by the system are called out, as necessary, when explaining their purpose. Data-structure information useful for working directly with PTF camera-image files, which is important for understanding pipeline processes, is given in \S{\ref{cameraImages}}. By ``pipeline'' we mean a scripted set of processes that are performed on the PTF data, in order to generate useful products for calibration or scientific analysis. Significant events that occurred during the project's multi-year timeline are documented in \S{\ref{projectevents}}. Our approach to developing the system is given in \S{\ref{devapproach}}. The system's hardware architecture is laid out in \S{\ref{os}} and the design of the database schema is outlined in \S{\ref{db}}. The PTF-data-ingest subsystem is entirely described in \S{\ref{ingest}}. The tools and methodology we have developed for science data quality analysis (SDQA) are elaborated in \S{\ref{sdqa}}. 
The image-processing pipelines, along with those for calibration, are detailed in \S{\ref{idp}}. The image-data and source-catalog archive, as well as the methods for data distribution to users, is explained in \S{\ref{idad}}. This paper would be incomplete without reviewing the lessons we have learned throughout the multi-year and overlapping periods of development and operations, and so we cover them in \S{\ref{lessonslearned}}. Our conclusions are given in \S{\ref{conclusions}}. Finally, Appendix A presents the simple method of photometric calibration that was implemented before the more sophisticated one of \citet{ofek} was brought into operation. | \section{Conclusions} This paper presents considerable detail on PTF image processing, source-catalog generation, and data archiving at IPAC. The system is fully automated and requires minimal human support in operations, since much of the work is done by software called the ``virtual pipeline operator''. This project has been a tremendous success in terms of the number of published science papers (80 and counting). There are almost 1500 field and filter combinations (mostly $R$ band) in which more than 50 exposures have been taken, typically twice per night. This has allowed unprecedented studies of transient phenomena from asteroids to supernovae. More than 3~million processed CCD images from 1671 nights have been archived at IRSA, along with extracted source catalogs, and we have leveraged IRSA's existing software to provide a powerful web interface for the PTF collaboration to retrieve the products. Our archived set of reference (coadded) images and catalogs numbers over 40 thousand field/CCD/filter combinations, and is growing as more images that meet the selection criteria are acquired. We believe the many design features of our PTF-data processing and archival system can be used to support future complex time-domain surveys and projects.
The system design is still evolving and periodic upgrades are improving its overall performance. | 14 | 4 | 1404.1953 |